#include <dolbyio/comms/media/media_engine.h>

The interface through which raw audio frames are provided to the CoreSDK. This audio source must provide signed 16-bit PCM data in 10 ms chunks, delivered at 10 ms intervals. Application writers who want to implement this source must override the two virtual functions that register and deregister the RTC Audio Source on the injector. Attaching the RTC Audio Source to the injector establishes the audio pipeline through which frames are passed to the CoreSDK. The Audio Source must be provided to the SDK using the Set Audio Source method, and this must be done before starting a conference.
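The 10 ms cadence fixes the chunk size: at sample rate R with C channels of signed 16-bit PCM, each chunk holds R/100 samples per channel. A small self-contained sketch of that arithmetic (the helper names are ours, not part of the SDK):

```cpp
#include <cstddef>
#include <cstdint>

// Samples per channel in one 10 ms chunk of PCM audio.
constexpr std::size_t samples_per_10ms(int sample_rate_hz) {
  return static_cast<std::size_t>(sample_rate_hz) / 100;
}

// Total bytes in one 10 ms chunk of interleaved s16 PCM.
constexpr std::size_t chunk_bytes(int sample_rate_hz, std::size_t channels) {
  return samples_per_10ms(sample_rate_hz) * channels * sizeof(std::int16_t);
}
```

For example, 48 kHz stereo yields 480 samples per channel, or 1920 bytes, every 10 ms.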

class audio_source

The interface for providing audio frames.

This interface must be implemented by the injector; it serves as the source of audio frames passed to the rtc_audio_source.

Subclassed by dolbyio::comms::plugin::injector

Public Functions

virtual void register_audio_frame_rtc_source(rtc_audio_source *source) = 0

Connects the RTC Audio Source to the audio source, in essence creating the audio injection pipeline. This method will be called by the media_engine when an Audio Track is attached to the active Peer Connection.


  • source – The RTC Audio Source which will receive the injected audio frames.

virtual void deregister_audio_frame_rtc_source() = 0

Disconnects the RTC Audio Source from the Audio Source, in essence destructing the audio pipeline. This method is called by the media_engine whenever an Audio Track is to be detached from the active Peer Connection.
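A minimal sketch of the register/deregister pair. The interface classes below are simplified stand-ins that mirror the declarations above (the real types live in the SDK headers), and `my_injector` is a hypothetical implementation of ours:

```cpp
#include <cstddef>
#include <cstdint>

// Simplified stand-in for the rtc_audio_source interface.
class rtc_audio_source {
 public:
  virtual ~rtc_audio_source() = default;
  virtual void on_data(const void* audio_data, int bits_per_sample,
                       int sample_rate, std::size_t number_of_channels,
                       std::size_t number_of_frames) = 0;
};

// Simplified stand-in for the audio_source interface.
class audio_source {
 public:
  virtual ~audio_source() = default;
  virtual void register_audio_frame_rtc_source(rtc_audio_source* source) = 0;
  virtual void deregister_audio_frame_rtc_source() = 0;
};

// The injector keeps the registered sink and forwards frames to it only
// while the pipeline is connected.
class my_injector : public audio_source {
 public:
  void register_audio_frame_rtc_source(rtc_audio_source* source) override {
    sink_ = source;  // pipeline established: frames may now flow
  }
  void deregister_audio_frame_rtc_source() override {
    sink_ = nullptr;  // pipeline torn down: stop delivering frames
  }
  bool connected() const { return sink_ != nullptr; }

 private:
  rtc_audio_source* sink_ = nullptr;
};
```

The key design point is that the injector never owns the RTC Audio Source; it only holds the pointer between the register and deregister calls made by the media_engine.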

The RTC Audio Source is NOT to be implemented by the application. It is the interface through which the injector sees its own audio sink. After receiving audio frames from some media source, the injector provides the raw audio frames to this RTC Audio Source, which pushes the audio data further down the audio pipeline until it is injected into the conference. The provided audio is expected to arrive in 10 ms chunks, one every 10 ms.

class rtc_audio_source

The adapter used for providing audio frames into WebRTC. This interface is an audio sink from the perspective of the injector and an audio source from the perspective of WebRTC Audio Tracks; it thus provides the connection that establishes the audio injection pipeline.

This interface is NOT implemented by the injector; it is used by the injector to provide audio frames.

Public Functions

virtual void on_data(const void *audio_data, int bits_per_sample, int sample_rate, size_t number_of_channels, size_t number_of_frames) = 0

The callback that is invoked when 10ms of audio data is ready to be passed to WebRTC.

  • audio_data – The pointer to the PCM data

  • bits_per_sample – Bits per sample.

  • sample_rate – The sample rate of the audio.

  • number_of_channels – The number of channels.

  • number_of_frames – The total number of samples in the chunk, i.e. number_of_channels * sample_rate / 100.
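As a sketch, delivering one 10 ms chunk of 48 kHz stereo s16 audio through an on_data-style callback. The sink class here is a simplified local stand-in for the interface above, and `push_10ms_chunk` is a hypothetical helper of ours:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Simplified stand-in for the rtc_audio_source interface.
class rtc_audio_source {
 public:
  virtual ~rtc_audio_source() = default;
  virtual void on_data(const void* audio_data, int bits_per_sample,
                       int sample_rate, std::size_t number_of_channels,
                       std::size_t number_of_frames) = 0;
};

// Push one 10 ms chunk of interleaved s16 PCM into the sink.
void push_10ms_chunk(rtc_audio_source& sink, const std::vector<int16_t>& pcm,
                     int sample_rate, std::size_t channels) {
  // Per the documentation above, number_of_frames is the total sample
  // count in the chunk: channels * sample_rate / 100.
  const std::size_t number_of_frames = channels * (sample_rate / 100);
  assert(pcm.size() == number_of_frames);  // exactly one 10 ms chunk
  sink.on_data(pcm.data(), /*bits_per_sample=*/16, sample_rate, channels,
               number_of_frames);
}
```

An injector would call a function like this once every 10 ms, driven by its own media clock, for as long as the RTC Audio Source remains registered.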

class audio_frame

Interface that wraps decoded audio frames to be injected into WebRTC.

Public Functions

virtual ~audio_frame() = default

Default destructor.

virtual const int16_t *data() const = 0

Gets the underlying s16 raw PCM audio data.


Pointer to data.

virtual int sample_rate() const = 0

Gets the sample rate of the audio frame.


Sample rate.

virtual int channels() const = 0

Gets the number of channels in the audio frame.


Number of channels.
virtual int samples() const = 0

Gets the number of samples in the audio frame.


Number of samples.
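A minimal audio_frame implementation might wrap an owned, interleaved s16 buffer. This is a sketch against a locally defined copy of the interface, not the SDK header, and `vector_audio_frame` is a hypothetical name; the exact semantics of samples() (here taken as samples per channel) are defined by the SDK:

```cpp
#include <cstdint>
#include <utility>
#include <vector>

// Simplified stand-in for the audio_frame interface.
class audio_frame {
 public:
  virtual ~audio_frame() = default;
  virtual const int16_t* data() const = 0;
  virtual int sample_rate() const = 0;
  virtual int channels() const = 0;
  virtual int samples() const = 0;
};

// Owns one chunk of interleaved s16 PCM and exposes it as an audio_frame.
class vector_audio_frame : public audio_frame {
 public:
  vector_audio_frame(std::vector<int16_t> pcm, int sample_rate, int channels)
      : pcm_(std::move(pcm)), sample_rate_(sample_rate), channels_(channels) {}

  const int16_t* data() const override { return pcm_.data(); }
  int sample_rate() const override { return sample_rate_; }
  int channels() const override { return channels_; }
  // Samples per channel in the owned buffer.
  int samples() const override {
    return static_cast<int>(pcm_.size()) / channels_;
  }

 private:
  std::vector<int16_t> pcm_;
  int sample_rate_;
  int channels_;
};
```

Owning the buffer inside the frame keeps the data valid for as long as WebRTC holds the frame, which is why the interface exposes only const accessors.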
See Example Injector Implementation for an example of a child injector class handling all possible media.