Dolby.io WebRTC Statistics - v1.0.4

    Interface InputVideo

    Represents the statistics object for an input video stream.

    interface InputVideo {
        bitrate?: number;
        frameHeight?: number;
        framesDecoded?: number;
        framesDropped?: number;
        framesPerSecond?: number;
        framesReceived?: number;
        frameWidth?: number;
        id: string;
        jitter?: number;
        jitterBufferDelay?: number;
        jitterBufferEmittedCount?: number;
        keyFramesDecoded?: number;
        mid?: string;
        mimeType?: string;
        packetLossDelta?: number;
        packetLossRatio?: number;
        packetRate?: number;
        timestamp: number;
        totalBytesReceived?: number;
        totalPacketsLost?: number;
        totalPacketsReceived?: number;
        trackIdentifier: string;
    }
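    Most fields are optional, so consumers should guard against missing values. A minimal sketch of reading an `InputVideo` object — the helper function and the trimmed-down interface below are illustrative, not part of this library:

```typescript
// Illustrative subset of the InputVideo interface above.
interface InputVideo {
  id: string;
  trackIdentifier: string;
  timestamp: number;
  frameWidth?: number;
  frameHeight?: number;
  framesPerSecond?: number;
}

// Hypothetical helper: produce a one-line summary of an input video stream,
// falling back to defaults when optional fields are absent.
function summarizeInputVideo(stats: InputVideo): string {
  const res =
    stats.frameWidth !== undefined && stats.frameHeight !== undefined
      ? `${stats.frameWidth}x${stats.frameHeight}`
      : "unknown";
  const fps = stats.framesPerSecond ?? 0;
  return `track=${stats.trackIdentifier} res=${res} fps=${fps}`;
}
```

    The same guarded-access pattern applies to every optional property on the interface.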


    Properties

    bitrate?: number

    Current bitrate in bytes per second.

    frameHeight?: number

    Represents the height of the last decoded frame.

    framesDecoded?: number

    Represents the total number of frames correctly decoded for this RTP stream, i.e., frames that would be displayed if no frames are dropped.

    framesDropped?: number

    The total number of frames dropped prior to decode or dropped because the frame missed its display deadline for this receiver's track.

    framesPerSecond?: number

    The number of decoded frames in the last second.

    framesReceived?: number

    Represents the total number of complete frames received on this RTP stream. This metric is incremented when a complete frame is received.

    frameWidth?: number

    Represents the width of the last decoded frame.

    id: string

    A unique id that is associated with the object that was inspected to produce this StatsBase object.

    jitter?: number

    Packet jitter, measured in seconds, for this SSRC.

    jitterBufferDelay?: number

    The sum of the time, in seconds, each audio sample or video frame takes from the moment its first packet is received by the jitter buffer (ingest timestamp) to the moment it exits the jitter buffer (emit timestamp). The purpose of the jitter buffer is to recombine RTP packets into frames (in the case of video) and provide smooth playout; the model described here assumes the samples or frames are still compressed and have not yet been decoded.

    jitterBufferEmittedCount?: number

    The total number of audio samples or video frames that have come out of the jitter buffer (increasing jitterBufferDelay).
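    Because `jitterBufferDelay` is a cumulative sum, dividing it by `jitterBufferEmittedCount` yields the average time each emitted frame spent in the jitter buffer. A sketch of that derivation (the helper name is ours, not part of this library):

```typescript
// Average jitter-buffer delay per emitted frame, in seconds.
// Returns undefined when either counter is missing or nothing was emitted.
function avgJitterBufferDelay(
  jitterBufferDelay?: number,
  jitterBufferEmittedCount?: number
): number | undefined {
  if (
    jitterBufferDelay === undefined ||
    jitterBufferEmittedCount === undefined ||
    jitterBufferEmittedCount === 0
  ) {
    return undefined;
  }
  return jitterBufferDelay / jitterBufferEmittedCount;
}
```

    Comparing this average across successive collections is a common way to spot growing buffering latency.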

    keyFramesDecoded?: number

    Represents the total number of key frames, such as key frames in VP8 [RFC6386] or IDR-frames in H.264 [RFC6184], successfully decoded for this RTP media stream.

    mid?: string

    Media stream "identification-tag" negotiated and present in the local and remote descriptions.

    mimeType?: string

    The codec MIME media type/subtype, e.g. video/VP8.

    packetLossDelta?: number

    The number of packets lost since the last collection.

    packetLossRatio?: number

    The ratio of packet loss.

    packetRate?: number

    Current packet rate in packets per second.

    timestamp: number

    The timestamp associated with this object. The time is relative to the UNIX epoch (Jan 1, 1970, UTC).
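    Assuming the value is expressed in milliseconds since the epoch (as with W3C `getStats()` timestamps), it can be converted to a `Date` directly — a small illustrative helper:

```typescript
// Convert a stats timestamp to a Date, assuming the value is in
// milliseconds since the UNIX epoch (an assumption, per W3C getStats()).
function statsTime(timestamp: number): Date {
  return new Date(timestamp);
}
```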

    totalBytesReceived?: number

    Total number of bytes received for this SSRC.

    totalPacketsLost?: number

    Total number of RTP packets lost for this SSRC. Note that because of how this is estimated, it can be negative if more packets are received than sent.

    totalPacketsReceived?: number

    Total number of packets received for this SSRC.
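    One common use of these two cumulative counters is estimating an overall loss ratio as lost / (lost + received). A hedged sketch under that assumption (the helper is not part of this library; note that `totalPacketsLost` can be negative, as described above):

```typescript
// Cumulative packet-loss ratio, estimated as lost / (lost + received).
// totalPacketsLost may be negative when duplicates arrive, so clamp at zero.
function cumulativeLossRatio(
  totalPacketsLost?: number,
  totalPacketsReceived?: number
): number | undefined {
  if (totalPacketsLost === undefined || totalPacketsReceived === undefined) {
    return undefined;
  }
  const lost = Math.max(0, totalPacketsLost);
  const expected = lost + totalPacketsReceived;
  return expected === 0 ? 0 : lost / expected;
}
```

    For an interval-based view, the precomputed `packetLossRatio` and `packetLossDelta` fields serve the same purpose per collection window.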

    trackIdentifier: string

    The value of the MediaStreamTrack's id attribute.