dolbyio_comms_sdk_flutter library

Classes

ActiveParticipantsNotificationData Models
The ActiveParticipantsNotificationData class gathers information about active participants.
AudioCaptureOptions Models
The AudioCaptureOptions model allows selecting the preferred audio capture mode and additional options for the preferred mode.
AudioPreview
The AudioPreview model allows the local participant to test different capture modes and voice fonts before a conference. The model is supported only in SDK 3.10 and later.
AudioProcessingOptions Models
The AudioProcessingOptions class is responsible for enabling and disabling audio processing.
AudioProcessingSenderOptions Models
The AudioProcessingSenderOptions class allows enabling and disabling audio processing for the local participant who transmits an audio stream.
AudioService Services
The AudioService allows changing audio settings for the local and remote participants.
CommandService Services
The CommandService allows the application to send and receive text messages and notifications during a conference.
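
As a rough illustration of the CommandService, the sketch below sends a text message and listens for incoming messages. The accessor name (command), the send and onMessageReceived methods, and the event payload are assumptions based on this summary and may differ from the actual API.

```dart
import 'package:dolbyio_comms_sdk_flutter/dolbyio_comms_sdk_flutter.dart';

// Hypothetical sketch: exchanging text messages in an already-joined conference.
Future<void> exchangeMessages() async {
  final sdk = DolbyioCommsSdk.instance;

  // Listen for messages sent by other participants.
  // The onMessageReceived() stream and its MessageReceivedData payload are assumptions.
  sdk.command.onMessageReceived().listen((event) {
    print('Received a message event: $event');
  });

  // Broadcast a text message to all conference participants (assumed signature).
  await sdk.command.send('Hello from Flutter!');
}
```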
Conference Models
The Conference class gathers information about a conference.
ConferenceConstraints Models
The ConferenceConstraints class gathers information about the preferred WebRTC constraints.
ConferenceCreatedNotificationData Models
The ConferenceCreatedNotificationData class gathers information about a created conference.
ConferenceCreateOption Models
The ConferenceCreateOption class allows defining conference details.
ConferenceCreateParameters Models
The ConferenceCreateParameters class allows defining conference parameters.
ConferenceEndedNotificationData Models
The ConferenceEndedNotificationData class gathers information about an ended conference.
ConferenceJoinOptions Models
The ConferenceJoinOptions class defines how an application expects to join a conference in terms of media preference.
ConferenceLeaveOptions Models
The ConferenceLeaveOptions class gathers information about preferences for leaving a conference.
ConferenceListenOptions Models
The ConferenceListenOptions class defines how the application expects to join a conference using the listen method.
ConferenceMixingOptions Models
The ConferenceMixingOptions class is responsible for notifying the Dolby.io server that a participant who joins or replays a conference is a special participant called Mixer. Mixer can use the SDK to record or replay a conference. For more information, see the Recording Conferences article.
ConferenceReplayOptions Models
The ConferenceReplayOptions class contains options for replaying conferences.
ConferenceService Services
The ConferenceService allows an application to manage a conference life-cycle and interact with the conference. The service allows creating, joining, and leaving conferences and managing the audio, video, and screen-share streams.
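
For orientation only, here is a hypothetical sketch of the create/join/leave life cycle described above. The ConferenceCreateOption and ConferenceConstraints constructor arguments, the conference accessor name, and the leave signature are assumptions and may not match the current API exactly.

```dart
import 'package:dolbyio_comms_sdk_flutter/dolbyio_comms_sdk_flutter.dart';

// Hypothetical life-cycle sketch: create a conference, join it, and leave.
Future<void> runConferenceLifecycle() async {
  final sdk = DolbyioCommsSdk.instance;

  // Describe the conference to create (alias, parameters, and PIN are assumptions).
  final params = ConferenceCreateParameters();
  final createOptions = ConferenceCreateOption('my-conference', params, 0);

  // Create the conference, then join it with audio and video enabled.
  final conference = await sdk.conference.create(createOptions);
  final joinOptions = ConferenceJoinOptions()
    ..constraints = ConferenceConstraints(true, true);
  await sdk.conference.join(conference, joinOptions);

  // ... interact with the conference ...

  // Leave when done. A ConferenceLeaveOptions argument may be needed
  // depending on the SDK version (assumption).
  await sdk.conference.leave();
}
```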
ConferenceStatusNotificationData Models
The ConferenceStatusNotificationData class gathers information about a conference status.
DolbyioCommsSdk Services
DolbyioCommsSdk is the main class that allows an application to interact with Dolby.io services.
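
A minimal, hypothetical initialization sketch follows. The initializeToken method name, its token-refresh callback, and the use of a client access token generated by your own backend are assumptions drawn from typical Dolby.io setups and may differ from the actual API.

```dart
import 'package:dolbyio_comms_sdk_flutter/dolbyio_comms_sdk_flutter.dart';

// Hypothetical sketch: initialize the SDK with a client access token obtained
// from your own backend. The method name and callback shape are assumptions.
Future<void> initializeSdk(String accessToken) async {
  final sdk = DolbyioCommsSdk.instance;

  // The second argument is assumed to be a callback the SDK invokes when the
  // token is about to expire; it should return a freshly generated token.
  await sdk.initializeToken(accessToken, () async {
    return fetchTokenFromYourBackend(); // hypothetical helper
  });
}

// Hypothetical helper: request a new client access token from your server.
Future<String> fetchTokenFromYourBackend() async {
  throw UnimplementedError('Replace with your own token endpoint call.');
}
```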
Event<T, B> Models
Describes a generic event.
File Models
The File class gathers information about a file that a presenter wants to share during a conference.
FileConverted Models
The FileConverted class gathers information about a converted file.
FilePresentation Models
The FilePresentation class gathers information about a file presentation.
FilePresentationService Services
The FilePresentationService allows presenting files during a conference. The Dolby.io Communications APIs service converts the user-provided file into multiple pages that are accessible through the getImage method.
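
As a loose illustration of this flow, the sketch below converts a file, starts the presentation, and fetches the image of the first page. The convert, start, and getImage signatures, the SDK File model constructor, and the zero-based page index are assumptions.

```dart
import 'package:dolbyio_comms_sdk_flutter/dolbyio_comms_sdk_flutter.dart';

// Hypothetical sketch: convert a local file and retrieve the first page image.
Future<void> presentFile(String filePath) async {
  final sdk = DolbyioCommsSdk.instance;

  // Convert the user-provided file into presentable pages.
  // Passing a path/URI to the SDK's File model is an assumption.
  final converted = await sdk.filePresentation.convert(File(filePath));

  // Start presenting and fetch the image URL of the first page (index assumed 0).
  await sdk.filePresentation.start(converted);
  final imageUrl = await sdk.filePresentation.getImage(0);
  print('First page image: $imageUrl');
}
```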
InvitationReceivedNotificationData Models
The InvitationReceivedNotificationData class gathers information about a received invitation.
LocalAudio Models
The LocalAudio class allows enabling and disabling the local participant's audio as well as setting and checking the capture mode and comfort noise level.
LocalVideo Models
The LocalVideo class allows enabling and disabling the local participant's video.
MediaDeviceService Services
The MediaDeviceService allows an application to manage media devices that are used during a conference.
MediaStream Models
The MediaStream type gathers information about media streams.
MessageReceivedData Models
The MessageReceivedData interface gathers information about a received message.
NotificationService Services
The NotificationService allows inviting participants to a conference.
Participant Models
The Participant class gathers information about a conference participant.
ParticipantInfo Models
The ParticipantInfo class gathers information about a conference participant.
ParticipantInvited Models
The ParticipantInvited class gathers information about an invited participant.
ParticipantJoinedNotificationData Models
The ParticipantJoinedNotificationData class gathers information about a participant who joined a conference.
ParticipantLeftNotificationData Models
The ParticipantLeftNotificationData class gathers information about a participant who left a conference.
ParticipantPermissions Models
The ParticipantPermissions class gathers information about the invited participants and their conference permissions.
RecordingInformation Models
The RecordingInformation class gathers information about a conference recording.
RecordingService Services
The RecordingService allows recording conferences.
RecordingStatusUpdate Models
The RecordingStatusUpdate model contains information about recording statuses.
RemoteAudio Models
The RemoteAudio class allows the local participant to locally mute and unmute remote participants.
RemoteVideo Models
The RemoteVideo class allows the local participant to locally start and stop the transmission of remote participants' video streams.
SessionService Services
The SessionService allows connecting the SDK with the Dolby.io backend via the open method. Opening a session is mandatory before interacting with any service.
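
Because opening a session is required before using any other service, a hypothetical sketch is shown below. The session accessor name, the open method, and the positional ParticipantInfo arguments (name, avatarUrl, externalId) are assumptions.

```dart
import 'package:dolbyio_comms_sdk_flutter/dolbyio_comms_sdk_flutter.dart';

// Hypothetical sketch: open a session before calling any other service.
Future<void> openSession(String participantName) async {
  final sdk = DolbyioCommsSdk.instance;

  // The positional arguments (name, avatarUrl, externalId) are assumptions.
  final participantInfo = ParticipantInfo(participantName, null, null);
  await sdk.session.open(participantInfo);
}
```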
SpatialDirection Models
The SpatialDirection class defines the direction a participant is facing. The direction is specified as a set of three Euler rotations, one about each of the corresponding positive axes.
SpatialPosition Models
The SpatialPosition class represents a participant's audio position. The position is defined using Cartesian coordinates.
SpatialScale Models
The SpatialScale class defines how to convert units from the application's coordinate system (pixels or centimeters) into meters used by the spatial audio coordinate system. For example, let's assume that SpatialScale is set to (100,100,100), which indicates that 100 of the application's units (cm) map to 1 meter for the audio coordinates. If the listener's location is (0,0,0)cm and a remote participant's location is (200,200,200)cm, the listener has the impression of hearing the remote participant from the (2,2,2)m location.
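
To make the unit conversion concrete, here is a small standalone arithmetic sketch (plain Dart, no SDK calls) that applies the scale from the example above.

```dart
// Standalone arithmetic sketch: application units (cm) divided by the
// SpatialScale give positions in meters for the audio coordinate system.
void main() {
  const scale = [100.0, 100.0, 100.0]; // 100 cm map to 1 m on each axis
  const remoteCm = [200.0, 200.0, 200.0]; // remote participant in app units

  final remoteMeters = [
    remoteCm[0] / scale[0],
    remoteCm[1] / scale[1],
    remoteCm[2] / scale[2],
  ];
  print(remoteMeters); // [2.0, 2.0, 2.0] -> heard from the (2, 2, 2) m location
}
```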
StreamsChangeData Models
The StreamsChangeData class gathers information about media stream updates.
Subscription
VideoPresentation Models
The VideoPresentation class gathers information about a video presentation.
VideoPresentationService Services
The VideoPresentationService allows sharing videos during a conference. To present a video, a conference participant needs to provide the URL of the video file. We recommend sharing files in the MPEG-4 Part 14 or MP4 video formats.
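
A rough sketch of sharing a video by URL follows. The videoPresentation accessor name and the start, pause, and stop method signatures are assumptions based on this summary; the URL is a placeholder.

```dart
import 'package:dolbyio_comms_sdk_flutter/dolbyio_comms_sdk_flutter.dart';

// Hypothetical sketch: share an MP4 video by URL during a conference.
Future<void> shareVideo() async {
  final sdk = DolbyioCommsSdk.instance;
  const videoUrl = 'https://example.com/intro.mp4'; // placeholder URL

  await sdk.videoPresentation.start(videoUrl);

  // ... later, pause at an assumed millisecond timestamp and stop sharing.
  await sdk.videoPresentation.pause(30000);
  await sdk.videoPresentation.stop();
}
```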
VideoService Services
The VideoService allows changing video settings for the local and remote participants.
VideoView Widgets
A widget that can display a MediaStream for a Participant.
VideoViewController Widgets
A controller for the VideoView that is responsible for attaching a Participant and a MediaStream to the VideoView, detaching them, and getting information about the VideoView state.
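
As an illustration of this widget pair, here is a hypothetical usage sketch. The VideoViewController default constructor, the attach/detach signatures, and the videoViewController constructor parameter of VideoView are assumptions.

```dart
import 'package:flutter/material.dart';
import 'package:dolbyio_comms_sdk_flutter/dolbyio_comms_sdk_flutter.dart';

// Hypothetical sketch: render a participant's video stream in a widget tree.
class ParticipantTile extends StatefulWidget {
  const ParticipantTile({
    super.key,
    required this.participant,
    required this.stream,
  });

  final Participant participant;
  final MediaStream stream;

  @override
  State<ParticipantTile> createState() => _ParticipantTileState();
}

class _ParticipantTileState extends State<ParticipantTile> {
  final _controller = VideoViewController();

  @override
  void initState() {
    super.initState();
    // Attach the participant and their stream so the view can render it
    // (assumed signature: attach(participant, mediaStream)).
    _controller.attach(widget.participant, widget.stream);
  }

  @override
  void dispose() {
    _controller.detach(); // release the underlying native view (assumption)
    super.dispose();
  }

  @override
  Widget build(BuildContext context) {
    return VideoView(videoViewController: _controller);
  }
}
```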

Enums

AudioCaptureMode Models
The AudioCaptureMode model allows selecting the preferred mode for capturing the local participant's audio.
AudioPreviewEventNames Models
The AudioPreviewEventNames enum gathers the AudioPreview events.
AudioPreviewStatus
The AudioPreviewStatus model gathers the possible statuses of recording audio samples for the audio preview.
Codec Models
The Codec enum gathers the available video codecs.
ComfortNoiseLevel Models
The ComfortNoiseLevel enum gathers the available comfort noise levels.
CommandServiceEventNames Models
The CommandServiceEventNames enum gathers the CommandService events.
ConferencePermission Models
The ConferencePermission enum gathers the possible permissions a participant may have in a conference.
ConferenceServiceEventNames Models
The ConferenceServiceEventNames enum gathers events that inform about changes in the participants list and the connected streams.
ConferenceStatus Models
The ConferenceStatus enum represents the possible conference statuses.
FilePresentationServiceEventNames Models
The FilePresentationServiceEventNames enum gathers events informing about the file presentation status.
MediaStreamType Models
The MediaStreamType enum gathers the possible types of media streams.
NoiseReduction Models
The NoiseReduction model allows selecting the preferred level of noise reduction.
NotificationServiceEventNames Models
The NotificationServiceEventNames enum gathers the NotificationService events.
ParticipantStatus Models
The ParticipantStatus enum gathers the possible statuses of a conference participant.
ParticipantType Models
The ParticipantType enum gathers the possible types of a conference participant.
RecordingServiceEventNames Models
The RecordingServiceEventNames enum gathers the recording events.
RecordingStatus Models
The RecordingStatus enum gathers the possible statuses of recording.
RTCPMode Models
The RTCPMode enum gathers the possible bitrate adaptation modes for video transmission.
ScaleType
The ScaleType enum lets you select how you want to display a video stream in the VideoView area.
SpatialAudioStyle Models
The SpatialAudioStyle enum defines how the spatial location is communicated between the SDK and the Dolby.io server. The style can be defined during conference creation, although its value for each participant depends on the participant's spatial audio setting. The shared spatial audio style is only available for participants who joined a conference with spatial audio enabled. Setting the spatial audio style is supported only in SDK 3.6 and later. Earlier SDK versions support only the individual mode and do not allow participants to join conferences created with the spatial audio style set to shared.
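
To show where the style would be set, the sketch below requests the shared style at conference creation. Treating SpatialAudioStyle as a ConferenceCreateOption argument, and the enum value name shared, are assumptions that may not match the actual constructor.

```dart
import 'package:dolbyio_comms_sdk_flutter/dolbyio_comms_sdk_flutter.dart';

// Hypothetical sketch: request the shared spatial audio style at creation time.
// The ConferenceCreateOption argument list is an assumption.
Future<Conference> createSharedSpatialConference() async {
  final sdk = DolbyioCommsSdk.instance;
  final options = ConferenceCreateOption(
    'spatial-demo',
    ConferenceCreateParameters(),
    0,
    SpatialAudioStyle.shared,
  );
  return sdk.conference.create(options);
}
```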
SubscriptionType
The SubscriptionType model gathers the subscription types.
VideoForwardingStrategy Models
The VideoForwardingStrategy enum defines how the SDK should select conference participants whose videos will be transmitted to the local participant. There are two possible values; the selection can be either based on the participants' audio volume or the distance from the local participant.
VideoPresentationEventNames Models
The VideoPresentationEventNames enum gathers the possible statuses of a video presentation.
VideoPresentationState Models
The VideoPresentationState enum gathers the possible statuses of a video presentation.
VoiceFont
The VoiceFont model gathers the possible voice modification effects that you can use to change the local participant's voice in real time. The model is supported only in SDK 3.9 and later.
