We live in a connected world. Users of your web application could be taking video calls from any number of locations and under all sorts of conditions. Imagine: a morning stand-up just as a neighbor decides to start mowing their lawn. An impromptu meeting to discuss design changes while working from a busy coffee shop. And of course, on the day of the big presentation, school is canceled due to snowfall. Now, with Vonage Video Noise Suppression, there’s one less thing to worry about: background noise.
Want to give it a try right now in Chrome or a Chromium-based browser (Electron, Opera, and Edge)? We have GitHub repos for both Moderate and Advanced sample apps that you can deploy with one click. Once deployed, fill in the credentials you can get from the Vonage Video Playground and begin testing. Just join the call from two different tabs and click the buttons to enable and disable noise suppression. If you are lucky enough not to be in a noisy environment, searching for and playing a video of city noises on your phone will do.
Go ahead and give them a try. I’ll wait.
The Noise Suppression functionality is made possible with the Vonage Media Processor, which helps access the audio data and remove any background noise.
As mentioned previously, there are two ways to implement noise suppression in your Vonage Video call. Let’s see how they work.
Moderate Implementation
With the Moderate implementation of Noise Suppression, Vonage has wrapped the Noise Suppression Transformer, Media Processor, and Media Processor Connector into a single function called createVonageNoiseSuppression().
The important bits are:
Creating an instance from the Noise Suppression library
const noiseSuppression = await createVonageNoiseSuppression();
Initializing the instance
await noiseSuppression.init();
Getting a connector to the Media Processor
const mediaProcessorConnector = await noiseSuppression.getConnector();
Setting the connector when initializing your publisher for the video call
publisher.setAudioMediaProcessorConnector(mediaProcessorConnector);
Now, your audio will be passed through the Noise Suppression Transformer and then into the video call.
To enable and disable noise suppression, you call methods on the noiseSuppression instance:
// enable Noise Suppression
noiseSuppression.enable();
// disable Noise Suppression
noiseSuppression.disable();
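Putting those pieces together, here is a minimal sketch of the Moderate flow. It assumes the OT global from the Vonage Video client SDK, a publisher-element div, and a noise-suppression-toggle button, all of which are illustrative rather than part of the Noise Suppression library itself.
// Minimal Moderate-flow sketch (OT, the element IDs, and the toggle button are assumptions)
const noiseSuppression = await createVonageNoiseSuppression();
await noiseSuppression.init();
const mediaProcessorConnector = await noiseSuppression.getConnector();

// Create the publisher and route its audio through the connector
const publisher = OT.initPublisher('publisher-element');
await publisher.setAudioMediaProcessorConnector(mediaProcessorConnector);

// Toggle noise suppression from a hypothetical button
let suppressionOn = true;
document.getElementById('noise-suppression-toggle').addEventListener('click', () => {
  suppressionOn ? noiseSuppression.disable() : noiseSuppression.enable();
  suppressionOn = !suppressionOn;
});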
Advanced Implementation
If you want to add multiple effects to the audio stream before it goes into the video call, or need finer control over insertable streams and transformers for any other reason, the Advanced implementation is the better option.
First, create a new instance of a MediaProcessor
const mediaProcessor = new MediaProcessor();
and a new Noise Suppression Transformer.
const noiseSuppressionTransformer = new NoiseSuppressionTransformer();
Then, initialize the Transformer
await noiseSuppressionTransformer.init();
Next, set the transformers that the media processor will be using. This is also where you would include other transformers that you would like to add to the audio stream.
mediaProcessor.setTransformers([noiseSuppressionTransformer]);
Now, create a Media Processor Connector using the mediaProcessor.
const mediaProcessorConnector = new MediaProcessorConnector(mediaProcessor);
Just like in the Moderate implementation example, you set the connector when initializing your publisher for the video call.
publisher.setAudioMediaProcessorConnector(mediaProcessorConnector);
Since you now have direct access to the Noise Suppression Transformer, you enable and disable the functionality on the transformer itself:
// enable Noise Suppression
noiseSuppressionTransformer.enable();
// disable Noise Suppression
noiseSuppressionTransformer.disable();
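As with the Moderate flow, here is a rough end-to-end sketch of the Advanced flow, again assuming the OT global from the Vonage Video client SDK and a publisher-element div purely for illustration.
// Minimal Advanced-flow sketch (OT and the element ID are assumptions)
const mediaProcessor = new MediaProcessor();
const noiseSuppressionTransformer = new NoiseSuppressionTransformer();
await noiseSuppressionTransformer.init();

// Any additional audio transformers would be added to this array, in processing order
mediaProcessor.setTransformers([noiseSuppressionTransformer]);

const mediaProcessorConnector = new MediaProcessorConnector(mediaProcessor);
const publisher = OT.initPublisher('publisher-element');
await publisher.setAudioMediaProcessorConnector(mediaProcessorConnector);

// Enable and disable directly on the transformer
noiseSuppressionTransformer.disable();
noiseSuppressionTransformer.enable();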
More Options
When initializing Noise Suppression, you can specify a couple of options.
The first is multithreaded WebAssembly. Wasm multithreading uses multiple CPU cores, which can significantly boost performance. It is enabled by default, but for it to work, these prerequisites must be met (see the server sketch after this list):
Serving your web application over HTTPS for secure data transmission.
Setting the 'Cross-Origin-Opener-Policy' header to 'same-origin' to restrict execution contexts to trusted sources.
Setting the 'Cross-Origin-Embedder-Policy' header with 'require-corp' or 'credentialless' values to ensure secure usage of shared array buffers.
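For example, if your app happens to be served by Node with Express (just an assumption here; any web server or CDN that can set response headers works), the two cross-origin headers could be set like this:
const express = require('express');
const app = express();

// Cross-origin isolation headers required for multithreaded Wasm
app.use((req, res, next) => {
  res.setHeader('Cross-Origin-Opener-Policy', 'same-origin');
  res.setHeader('Cross-Origin-Embedder-Policy', 'require-corp'); // or 'credentialless'
  next();
});

app.use(express.static('public')); // serve the web app itself (over HTTPS in production)
app.listen(3000);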
If you’d like to disable this option, pass in disableWasmMultiThread: true when you initialize:
.init({disableWasmMultiThread: true});
The Web Worker, Wasm, and Noise Suppression model are loaded dynamically when the Transformer is initialized. By default, these resources are loaded from a Vonage CDN, but you have the option to host them yourself. Just specify the assetsDirBaseUrl parameter with your host server’s URL pointing to the root of the static assets:
.init({assetsDirBaseUrl: "https://my-custom-server/path/to/the/resources/root"});
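Assuming both options can be passed together (they are both init() parameters), a combined call in the Advanced flow might look like this, with a placeholder URL:
// Combining both init options (the URL is a placeholder)
await noiseSuppressionTransformer.init({
  disableWasmMultiThread: true,
  assetsDirBaseUrl: 'https://my-custom-server/path/to/the/resources/root'
});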
We’ve tried to make adding Noise Suppression to your video application as straightforward as possible. Give it a try!
If you have any questions or comments, please let us know in our Community Slack Channel and follow us on X.