
Adding a custom audio renderer

You will implement a simple audio renderer for the audio from subscribed streams.

The NoiseAudioDevice constructor sets up a file in which to save the incoming audio. This is done simply to illustrate one use of the custom audio driver's audio renderer.

The BaseAudioDevice.initRenderer method is called when the app initializes the audio renderer. The NoiseAudioDevice implementation of this method instantiates a new File object, to which the app will write audio data:

override fun initRenderer(): Boolean {
    // One second of 16-bit mono audio at the configured sampling rate
    rendererBuffer = ByteBuffer.allocateDirect(SAMPLING_RATE * 2)
    val documentsDirectory = context.getExternalFilesDir(Environment.DIRECTORY_DOCUMENTS)

    rendererFile = File(documentsDirectory, "output.raw")
    if (rendererFile?.exists() == false) {
        try {
            rendererFile?.parentFile?.mkdirs()
            rendererFile?.createNewFile()
        } catch (e: IOException) {
            e.printStackTrace()
        }
    }
    return true
}
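The buffer allocated above holds one second of audio: 16-bit PCM uses 2 bytes per sample, so one second of mono audio occupies SAMPLING_RATE * 2 bytes. A quick sketch of that arithmetic (the helper function and the 44100 Hz rate below are illustrative, not part of the sample):

```kotlin
// Bytes needed to hold one second of 16-bit mono PCM at a given rate.
// This is a hypothetical helper mirroring the SAMPLING_RATE * 2 sizing above.
fun oneSecondBufferBytes(samplingRate: Int): Int {
    val bytesPerSample = 2  // 16-bit samples
    val channels = 1        // mono
    return samplingRate * bytesPerSample * channels
}
```

For example, at 44100 Hz this yields 88,200 bytes per second of rendered audio.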

The BaseAudioDevice.startRenderer method is called when the audio device should start rendering (playing back) audio from subscribed streams. The NoiseAudioDevice implementation of this method sets a flag and posts the renderer Runnable to the handler's queue to run after rendererIntervalMillis milliseconds:

override fun startRenderer(): Boolean {
    rendererStarted = true
    rendererHandler?.postDelayed(renderer, rendererIntervalMillis)
    return true
}
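Outside Android, the same pattern — a task that reschedules itself at a fixed interval until stopped — can be sketched on the plain JVM with a ScheduledExecutorService in place of Handler.postDelayed. The class and method names below are illustrative, not part of the SDK:

```kotlin
import java.util.concurrent.Executors
import java.util.concurrent.TimeUnit

// JVM analogue of the Handler.postDelayed pattern: once started, the task
// keeps running at a fixed delay until stopRendering() is called.
class PeriodicRenderer(private val intervalMillis: Long, private val onTick: () -> Unit) {
    private val scheduler = Executors.newSingleThreadScheduledExecutor()

    fun startRendering() {
        // Runs onTick repeatedly, waiting intervalMillis between runs,
        // much as the renderer Runnable re-posts itself below.
        scheduler.scheduleWithFixedDelay(onTick, intervalMillis, intervalMillis, TimeUnit.MILLISECONDS)
    }

    fun stopRendering() {
        scheduler.shutdown()
    }
}
```

On Android itself, Handler and HandlerThread remain the idiomatic choice, since the SDK callbacks arrive on Android's own threading primitives.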

The renderer Runnable reads one second's worth of audio from the audio bus by calling the readRenderData method of the AudioBus object. It then writes the audio data to the file (for sample purposes). Then, if the audio device is still being used to render audio samples, it posts itself to the rendererHandler again after another rendererIntervalMillis milliseconds:

private var rendererHandler: Handler? = null

private val renderer: Runnable = object : Runnable {
    override fun run() {
        rendererBuffer?.clear()
        audioBus.readRenderData(rendererBuffer, SAMPLING_RATE)
        try {
            // Open in append mode so each buffer is added to the file
            // rather than overwriting the previous one
            val stream = FileOutputStream(rendererFile, true)
            stream.write(rendererBuffer?.array())
            stream.close()
        } catch (e: FileNotFoundException) {
            e.printStackTrace()
        } catch (e: IOException) {
            e.printStackTrace()
        }
        if (rendererStarted && !audioDriverPaused) {
            rendererHandler?.postDelayed(this, rendererIntervalMillis)
        }
    }
}
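The resulting output.raw file contains headerless PCM, so most audio players will not open it directly. One way to make it playable is to prepend a standard 44-byte WAV header when you are done recording. The helper below is a hypothetical addition, not part of the SDK or this sample, and assumes 16-bit mono PCM:

```kotlin
import java.nio.ByteBuffer
import java.nio.ByteOrder

// Hypothetical helper: builds a 44-byte WAV (RIFF) header for 16-bit mono PCM,
// so the raw file written by the renderer can be saved as a playable .wav.
fun wavHeader(pcmLengthBytes: Int, samplingRate: Int): ByteArray {
    val header = ByteBuffer.allocate(44).order(ByteOrder.LITTLE_ENDIAN)
    header.put("RIFF".toByteArray())
    header.putInt(36 + pcmLengthBytes)      // RIFF chunk size = file size - 8
    header.put("WAVE".toByteArray())
    header.put("fmt ".toByteArray())
    header.putInt(16)                       // fmt chunk size for PCM
    header.putShort(1)                      // audio format: 1 = PCM
    header.putShort(1)                      // channels: mono
    header.putInt(samplingRate)
    header.putInt(samplingRate * 2)         // byte rate = rate * channels * 2
    header.putShort(2)                      // block align = channels * 2
    header.putShort(16)                     // bits per sample
    header.put("data".toByteArray())
    header.putInt(pcmLengthBytes)
    return header.array()
}
```

To produce the .wav, you would write this header followed by the contents of output.raw, passing the raw file's length and the sampling rate used by the renderer.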
