Learn how to leverage LiveSwitch's JavaScript SDK to start local media with both screen sharing and microphone capture. This blog post walks through the process, allowing you to capture audio input from the user's microphone while sharing their screen. Enhance your real-time communication applications with this powerful feature using LiveSwitch!
To enable screen sharing and the microphone simultaneously, you need to create two separate WebRTC media streams: one for screen sharing using getDisplayMedia, and another for the microphone using getUserMedia. Here's a code snippet showcasing the implementation:
const gdmOptions = {
  video: true,  // capture the screen
  audio: false  // skip desktop/tab audio
};
const audioOptions = {
  audio: true,  // capture the microphone
  video: false
};
// Screen-share stream (video only) and microphone stream (audio only).
let ssstream = await navigator.mediaDevices.getDisplayMedia(gdmOptions);
let micstream = await navigator.mediaDevices.getUserMedia(audioOptions);
// Pass the microphone stream as audio and the screen stream as video.
let localMedia = new fm.liveswitch.LocalMedia(micstream, ssstream);
// Alternative to localMedia.start(), using internal setters:
// localMedia._internal._setAudioMediaStream(micstream);
// localMedia._internal._setVideoMediaStream(ssstream);
// localMedia._internal.setState(fm.liveswitch.LocalMediaState.Started);
localMedia.start().then((lm) => {
  // Local media has started; lm is the started LocalMedia instance.
});
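One thing the snippet above glosses over: both capture calls prompt the user and reject with a DOMException (commonly "NotAllowedError") when permission is declined. Here's a hedged sketch of acquiring the two streams in one guarded step; captureScreenAndMic is an illustrative helper name, not part of the LiveSwitch SDK, and its md parameter simply defaults to navigator.mediaDevices in the browser.

```javascript
// Illustrative helper (not a LiveSwitch API): acquire screen + mic together,
// releasing the screen capture if the microphone request fails.
async function captureScreenAndMic(md = navigator.mediaDevices) {
  const ssstream = await md.getDisplayMedia({ video: true, audio: false });
  try {
    const micstream = await md.getUserMedia({ audio: true, video: false });
    return { ssstream, micstream };
  } catch (err) {
    // Mic denied or unavailable: stop the screen tracks before bailing out.
    ssstream.getTracks().forEach((t) => t.stop());
    throw err;
  }
}
```

With the streams in hand, construction proceeds exactly as above: new fm.liveswitch.LocalMedia(micstream, ssstream).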
Utilizing the constructor on LocalMedia, you pass the microphone audio stream (micstream) as the audio input and the screen sharing stream (ssstream) as the video input.
By configuring the LocalMedia instance in this way, you ensure that the audio stream captures the user's microphone input rather than desktop audio or browser tab audio, while the video stream provides the desired screen visuals.
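One cleanup detail worth handling: browsers show their own "Stop sharing" button for display capture, which ends the video track without any click inside your page. Listening for that track's ended event lets you tear down the microphone and LocalMedia as well. This is a minimal sketch; attachStopHandler and onStop are illustrative names, not LiveSwitch SDK members.

```javascript
// Illustrative helper (not a LiveSwitch API): invoke onStop when any video
// track in the given stream ends, e.g. via the browser's "Stop sharing" UI.
function attachStopHandler(stream, onStop) {
  for (const track of stream.getVideoTracks()) {
    track.addEventListener('ended', onStop);
  }
}

// Usage with the streams from the snippet above (browser only):
// attachStopHandler(ssstream, () => {
//   micstream.getTracks().forEach((t) => t.stop()); // release the mic too
//   localMedia.stop();
// });
```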
Feel free to check out our live example on CodePen.
Need assistance in architecting the perfect WebRTC application? Let our team help out! Get in touch with us today!