Shared Memory

Mapping Shared Memory

Before playback can be initiated, the shared memory region must be mapped; see Rialto Application Session Management#ApplicationManagement.

Shared Memory Layout

The shared memory buffer shall be split into two regions for each playback session: one for a video stream and one for audio. Only one concurrent audio track is supported; to select a different audio track the previous one must be removed first. Each region must be large enough to accommodate the largest possible frame of audio/video data plus its associated decryption parameters and metadata. The buffer shall initially be sized at 8MB per playback session to allow some overhead.

For apps that support more than one concurrent playback, the shared memory buffer shall be sized accordingly and partitioned into a separate logical area for each playback session. The partitions need not be equally sized; for example, if an app supports one UHD and one HD playback, the 'HD' partition may be smaller. There can be zero or more Web Audio regions per Rialto Client.
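As a rough illustration of this partitioning, the sketch below computes per-session region offsets and the total buffer size. The helper names are hypothetical and the layout (equal 8MB partitions laid out back to back, audio region first) is an assumption; the sizes are taken from the sizing tables in this document.

```c
#include <stdint.h>
#include <stddef.h>

/* Sizes from the sizing tables in this document. */
#define AV_SESSION_SIZE   (8u * 1024u * 1024u)  /* 8MB per A/V playback session */
#define AUDIO_REGION_SIZE (1u * 1024u * 1024u)  /* 1MB audio region */
#define VIDEO_REGION_SIZE (7u * 1024u * 1024u)  /* 7MB video region */
#define WEB_AUDIO_SIZE    (10u * 1024u)         /* 10KB per Web Audio session */

typedef struct {
    uint32_t audio_offset; /* relative to the base of the shm region */
    uint32_t video_offset;
} SessionPartition;

/* Offsets of the audio and video regions for a given playback session,
 * assuming equal partitions laid out back to back, audio first. */
SessionPartition session_partition(uint32_t session_index)
{
    SessionPartition p;
    p.audio_offset = session_index * AV_SESSION_SIZE;
    p.video_offset = p.audio_offset + AUDIO_REGION_SIZE;
    return p;
}

/* Total buffer size for a given number of A/V and Web Audio sessions. */
size_t shm_buffer_size(uint32_t av_sessions, uint32_t web_audio_sessions)
{
    return (size_t)av_sessions * AV_SESSION_SIZE +
           (size_t)web_audio_sessions * WEB_AUDIO_SIZE;
}
```
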



Note that the application is not directly aware of the layout of the shared memory region, or even of its existence; this is all managed internally by Rialto.

Metadata format

The following defines the format of the metadata (i.e. data about each AV frame) stored in the shm buffer. Note that all offsets are relative to the base of the shm region.

Metadata Format Version 1

V1 metadata uses a fixed byte format similar to the AVBus specification but with the following changes:


Parameter                  Size
Shared memory buffer       8MB
Max frames to request      24
Metadata size per frame    ~100 bytes
Metadata region size       ~2.4KB
Video frame region size    7MB
Max video frame size       7MB - 2.4KB
Audio frame region size    1MB
Max audio frame size       1MB - 2.4KB
Web Audio region size      -


The following diagram shows schematically how the shared memory buffer is partitioned into two regions for a playback session, for audio and video data respectively. Within each region the metadata for each frame is written sequentially from the beginning and the media data is stored in the remainder of the region, the offset & length of each frame being specified in the metadata.


The metadata regions have the format shown in the diagram below: a 4-byte version field followed by concatenated metadata structures for each media frame, in the format specified in Metadata format below. The version field specifies the version of the metadata structure written to the buffer; this allows different versions of the Rialto client (which may write different metadata formats) to interoperate with the same version of the Rialto server. The Rialto server will only understand a certain set of metadata format versions, so only compatible client versions should be used. If the Rialto server sees an unsupported version number it should raise an error and fail streaming.
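The version check could be sketched as follows. The helper name and the supported-version set are assumptions for illustration, not Rialto code; the only detail taken from this document is the 4-byte version field at the start of the region.

```c
#include <stdint.h>
#include <string.h>

/* Versions this build of the server understands (assumed set). */
const uint32_t kSupportedMetadataVersions[] = { 1u, 2u };

/* Read the 4-byte version field at the start of a metadata region and
 * check it against the supported set. memcpy avoids unaligned reads. */
int metadata_version_supported(const uint8_t *region_base)
{
    uint32_t version;
    memcpy(&version, region_base, sizeof(version));
    for (size_t i = 0;
         i < sizeof(kSupportedMetadataVersions) / sizeof(kSupportedMetadataVersions[0]);
         ++i) {
        if (kSupportedMetadataVersions[i] == version)
            return 1;
    }
    return 0; /* caller should raise an error and fail streaming */
}
```
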


Frame metadata is stored in a fixed format in the shm region as follows; undefined values should be set to 0.

    uint32_t                        offset;                              /* Offset of first byte of sample in shm buffer */
    uint32_t                        length;                              /* Number of bytes in sample */
    int64_t                         time_position;                       /* Position in stream in nano-seconds */
    int64_t                         sample_duration;                     /* Frame/sample duration in ns */
    uint32_t                        stream_id;                           /* stream id (unique ID for ES, as defined in attachSource()) */
    uint32_t                        extra_data_size;                     /* extraData size */
    uint8_t                         extra_data[32];                      /* buffer containing extradata */

    uint32_t                        media_keys_id;                       /* Identifier of MediaKeys instance to use for decryption. If 0 use any CDM containing the MKS ID */
    uint32_t                        media_key_session_identifier_offset; /* Offset to the location of the MediaKeySessionIdentifier */
    uint32_t                        media_key_session_identifier_length; /* Length of the MediaKeySessionIdentifier */
    uint32_t                        init_vector_offset;                  /* Offset to the location of the initialization vector */
    uint32_t                        init_vector_length;                  /* Length of initialization vector */
    uint32_t                        sub_sample_info_offset;              /* Offset to the location of the sub sample info table */
    uint32_t                        sub_sample_info_len;                 /* Length of sub-sample Info table */
    uint32_t                        init_with_last_15;                   /* initWithLast15 value for decryption */

    if (IS_AUDIO(stream_id))
    {
        uint32_t                    sample_rate;                         /* Samples per second */
        uint32_t                    channels_num;                        /* Number of channels */
    }
    else if (IS_VIDEO(stream_id))
    { 
        uint32_t                    width;                               /* Video width in pixels */
        uint32_t                    height;                              /* Video height in pixels */
    }
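A minimal packing sketch for this fixed layout is shown below. The struct and helper names are illustrative only; fields are copied in native byte order (this document does not specify endianness), and the field-by-field copy avoids compiler struct padding changing the serialised size. Summing the field widths gives 104 bytes, consistent with the "~100 bytes" figure in the sizing table.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical in-memory form of the V1 frame metadata described above. */
typedef struct {
    uint32_t offset, length;
    int64_t  time_position, sample_duration;
    uint32_t stream_id, extra_data_size;
    uint8_t  extra_data[32];
    uint32_t media_keys_id;
    uint32_t media_key_session_identifier_offset, media_key_session_identifier_length;
    uint32_t init_vector_offset, init_vector_length;
    uint32_t sub_sample_info_offset, sub_sample_info_len;
    uint32_t init_with_last_15;
    uint32_t av[2]; /* sample_rate/channels_num or width/height */
} V1FrameMetadata;

/* 14 x uint32 + 2 x int64 + 32-byte extra_data + 2 x uint32 trailer */
enum { V1_FRAME_METADATA_SIZE = 104 };

/* Pack one frame's metadata field by field; caller zeroes undefined fields. */
size_t v1_pack(uint8_t *dst, const V1FrameMetadata *m)
{
    size_t off = 0;
#define PUT(field) do { memcpy(dst + off, &m->field, sizeof(m->field)); \
                        off += sizeof(m->field); } while (0)
    PUT(offset); PUT(length); PUT(time_position); PUT(sample_duration);
    PUT(stream_id); PUT(extra_data_size); PUT(extra_data);
    PUT(media_keys_id);
    PUT(media_key_session_identifier_offset); PUT(media_key_session_identifier_length);
    PUT(init_vector_offset); PUT(init_vector_length);
    PUT(sub_sample_info_offset); PUT(sub_sample_info_len);
    PUT(init_with_last_15); PUT(av);
#undef PUT
    return off; /* == V1_FRAME_METADATA_SIZE */
}
```
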


Metadata Format Version 2

Parameter                  Size
Shared memory buffer       8MB * max_playback_sessions + 10KB * max_web_audio_playback_sessions
Max frames to request      24
Metadata size per frame    Clear: variable but <100 bytes; Encrypted: TODO
Video frame region size    7MB
Max video frame size       TODO
Audio frame region size    1MB
Max audio frame size       TODO
Web Audio region size      10KB


The following diagram shows schematically how the shared memory buffer is partitioned into two regions for a playback session, for audio and video data respectively. Within each region there is a 4-byte version field indicating V2 metadata, followed by concatenated metadata/frame pairs.


V2 metadata uses protobuf to serialise each frame's properties to the shared memory buffer. This use of protobuf aligns with the IPC protocol and also allows support for optional fields, and for fields to be added and removed without causing backward/forward compatibility issues. It also supports variable-length fields, so the MKS ID, IV and sub-sample information can all be encoded directly in the metadata, avoiding the complexity of interleaving them with the media frames and referencing them with offsets/lengths as in the V1 metadata format.

enum SegmentAlignment {
    ALIGNMENT_UNDEFINED = 0;
    ALIGNMENT_NAL       = 1;
    ALIGNMENT_AU        = 2;
}

enum CipherMode {
    CIPHER_MODE_UNDEFINED = 0;
    CIPHER_MODE_CENC      = 1; /* AES-CTR scheme */
    CIPHER_MODE_CBC1      = 2; /* AES-CBC scheme */
    CIPHER_MODE_CENS      = 3; /* AES-CTR subsample pattern encryption scheme */
    CIPHER_MODE_CBCS      = 4; /* AES-CBC subsample pattern encryption scheme */
}

message MediaSegmentMetadata {
    required uint32                 length               = 1;             /* Number of bytes in sample */
    required sint64                 time_position        = 2;             /* Position in stream in nanoseconds */
    required sint64                 sample_duration      = 3;             /* Frame/sample duration in nanoseconds */
    required uint32                 stream_id            = 4;             /* stream id (unique ID for ES, as defined in attachSource()) */
    optional uint32                 sample_rate          = 5;             /* Samples per second for audio segments */
    optional uint32                 channels_num         = 6;             /* Number of channels for audio segments */
    optional uint32                 width                = 7;             /* Frame width in pixels for video segments */
    optional uint32                 height               = 8;             /* Frame height in pixels for video segments */
    optional SegmentAlignment       segment_alignment    = 9;             /* Segment alignment can be specified for H264/H265, will use NAL if not set */
    optional bytes                  extra_data           = 10;            /* Buffer containing extradata */
    optional bytes                  media_key_session_id = 11;            /* Buffer containing key session ID to use for decryption */
    optional bytes                  key_id               = 12;            /* Buffer containing Key ID to use for decryption */
    optional bytes                  init_vector          = 13;            /* Buffer containing the initialization vector for decryption */
    optional uint32                 init_with_last_15    = 14;            /* initWithLast15 value for decryption */
    repeated SubsamplePair          sub_sample_info      = 15;            /* If present, use gather/scatter decryption based on this list of clear/encrypted byte lengths. */
                                                                          /* If not present and content is encrypted then entire media segment needs decryption (unless    */
                                                                          /* cipher_mode indicates pattern encryption in which case crypt/skip byte block value specify    */
                                                                          /* the encryption pattern)                                                                       */
    optional bytes                  codec_data           = 16;            /* Buffer containing updated codec data for video segments */
    optional CipherMode             cipher_mode          = 17;            /* Block cipher mode of operation when common encryption used */
    optional uint32                 crypt_byte_block     = 18;            /* Crypt byte block value for CBCS cipher mode pattern */
    optional uint32                 skip_byte_block      = 19;            /* Skip byte block value for CBCS cipher mode pattern */
}

message SubsamplePair
{
    required uint32                 num_clear_bytes      = 1;             /* How many of the next bytes in sequence are clear */
    required uint32                 num_encrypted_bytes  = 2;             /* How many of the next bytes in sequence are encrypted */
}
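One property that follows from this definition: for gather/scatter decryption, the clear/encrypted byte counts across all SubsamplePair entries should sum to the segment length, otherwise decryption would run past the buffer. A small consistency check (hypothetical helper, not part of the Rialto API) could look like:

```c
#include <stdint.h>
#include <stddef.h>

/* Mirrors the SubsamplePair protobuf message above. */
typedef struct {
    uint32_t num_clear_bytes;
    uint32_t num_encrypted_bytes;
} SubsamplePair;

/* Returns non-zero if the clear/encrypted byte counts exactly cover the
 * media segment. Accumulates in 64 bits to avoid uint32 overflow. */
int subsamples_cover_segment(const SubsamplePair *pairs, size_t count,
                             uint32_t segment_length)
{
    uint64_t total = 0;
    for (size_t i = 0; i < count; ++i)
        total += (uint64_t)pairs[i].num_clear_bytes +
                 pairs[i].num_encrypted_bytes;
    return total == segment_length;
}
```
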


Playback Control

Rialto interactions with Client & GStreamer

Start/Resume Playback

@startuml

autonumber

box "Container" #LightGreen
participant Client
participant rialtoClient
end box

box "Platform" #LightBlue
participant rialtoServer
participant GStreamer_server
end box


Client             ->  rialtoClient:     play(pipeline_session)
rialtoClient       ->  rialtoServer:     play(pipeline_session)

opt Gstreamer pipeline paused
rialtoServer       ->  GStreamer_server: Set pipeline state PLAYING
GStreamer_server   --> rialtoServer:     status
else Gstreamer pipeline playing
rialtoServer       ->  rialtoServer:     status=OK
note right: Silently ignore
else
rialtoServer       ->  rialtoServer:     status=ERROR
end

rialtoServer       --> rialtoClient:     status
rialtoClient       --> Client:           status


opt Pipeline state changed to PLAYING

opt Pending playback rate change (see Set Playback Rate)
rialtoServer       ->  GStreamer_server: Set pipeline playback speed to pending rate
GStreamer_server   --> rialtoServer:     status
note left: No notification to client on failure
end

loop While pipeline not prerolled
rialtoServer       ->  GStreamer_server: check preroll status
GStreamer_server   --> rialtoServer:
end
rialtoServer       -/  rialtoClient:     notifyPlaybackState(pipeline_session, PLAYBACK_STATE_PLAYING)
rialtoClient       -/  Client:           notifyPlaybackState(pipeline_session, PLAYBACK_STATE_PLAYING)
else Error starting playback
rialtoServer       -/  rialtoClient:     notifyPlaybackState(pipeline_session, PLAYBACK_STATE_FAILURE)
rialtoClient       -/  Client:           notifyPlaybackState(pipeline_session, PLAYBACK_STATE_FAILURE)
else No pipeline state change required
note over rialtoServer: No event required
end

@enduml


Pause Playback

@startuml


autonumber

box "Container" #LightGreen
participant Client
participant rialtoClient
end box

box "Platform" #LightBlue
participant rialtoServer
participant GStreamer_server
end box


Client   ->  rialtoClient:     pause(pipeline_session)
rialtoClient       ->  rialtoServer:     pause(pipeline_session)

opt Gstreamer pipeline in PLAYING state
rialtoServer       ->  GStreamer_server: Set pipeline state PAUSED
GStreamer_server   --> rialtoServer:     status
else Gstreamer pipeline in PAUSED state
rialtoServer       ->  rialtoServer:     status=OK
note right: Silently ignore
else
rialtoServer       ->  rialtoServer:     status=ERROR
end

rialtoServer       --> rialtoClient:     status
rialtoClient       --> Client:           status


note over Client, rialtoServer
Note that playback state notification may occur before pause() call returns
end note


opt Pipeline state changed to PAUSED
rialtoServer       -/  rialtoClient:     notifyPlaybackState(pipeline_session, PLAYBACK_STATE_PAUSED)
rialtoClient       -/  Client:           notifyPlaybackState(pipeline_session, PLAYBACK_STATE_PAUSED)
else Error pausing pipeline
rialtoServer       -/  rialtoClient:     notifyPlaybackState(pipeline_session, PLAYBACK_STATE_FAILURE)
rialtoClient       -/  Client:           notifyPlaybackState(pipeline_session, PLAYBACK_STATE_FAILURE)
else No pipeline state change required
note over rialtoServer: No event required
end

@enduml


Stop

@startuml

autonumber

box "Container" #LightGreen
participant Client
participant rialtoClient
end box

box "Platform" #LightBlue
participant rialtoServer
participant GStreamer_server
end box


Client               ->  rialtoClient:     stop(pipeline_session)
rialtoClient       ->  rialtoServer:     stop(pipeline_session)
rialtoServer       ->  rialtoServer:     Stop requesting new samples from attached sources
rialtoServer       ->  GStreamer_server: Set pipeline state NULL
GStreamer_server   --> rialtoServer:     status
rialtoServer       --> rialtoClient:     status
rialtoClient       --> Client:           status


note over Client, rialtoServer
Note that playback state notification may occur before stop() call returns
end note

opt Playback stopped successfully
rialtoServer       -/  rialtoClient:     notifyPlaybackState(pipeline_session, PLAYBACK_STATE_STOPPED)
rialtoClient       -/  Client:           notifyPlaybackState(pipeline_session, PLAYBACK_STATE_STOPPED)
else Error stopping playback
rialtoServer       -/  rialtoClient:     notifyPlaybackState(pipeline_session, PLAYBACK_STATE_FAILURE)
rialtoClient       -/  Client:           notifyPlaybackState(pipeline_session, PLAYBACK_STATE_FAILURE)
end

@enduml


Set Playback Rate

@startuml

autonumber

box "Container" #LightGreen
participant Client
participant rialtoClient
end box

box "Platform" #LightBlue
participant rialtoServer
participant GStreamer_server
end box


Client             ->  rialtoClient:     setPlaybackRate(pipeline_session, rate)
rialtoClient       ->  rialtoServer:     setPlaybackRate(pipeline_session, rate)

opt rate != 0.0

opt Pipeline in PLAYING state && seek not pending

rialtoServer       ->  GStreamer_server: Set pipeline playback speed to 'rate'
note right: See Starboard PlayerImpl::SetRate() for example implementation
GStreamer_server   --> rialtoServer:     status

else Pipeline not in PLAYING state || seek pending

rialtoServer       --> rialtoServer:     Store pending rate change
note right: When pipeline reaches playing state this rate must be applied

end

else rate == 0.0
rialtoServer       --> rialtoServer:     status=ERROR
note right: Client should call pause()
end

rialtoServer       --> rialtoClient:     status
rialtoClient       --> Client:           status


@enduml
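The pending-rate behaviour in the diagram above can be sketched as follows. The state struct and function names are hypothetical (the real logic lives in rialtoServer); the rules shown are the ones in the diagram: rate 0.0 is an error, a rate change while not PLAYING (or with a seek pending) is stored, and the stored rate is applied when the pipeline reaches PLAYING.

```c
#include <stdbool.h>

/* Hypothetical playback-rate state for one pipeline session. */
typedef struct {
    bool   playing;       /* pipeline currently in PLAYING state */
    bool   seek_pending;
    double rate;          /* rate currently applied to the pipeline */
    double pending_rate;  /* 0.0 means no pending change */
} PlaybackRateState;

/* Returns false for rate 0.0 (client should call pause() instead). */
bool set_playback_rate(PlaybackRateState *s, double rate)
{
    if (rate == 0.0)
        return false;
    if (s->playing && !s->seek_pending)
        s->rate = rate;          /* apply immediately */
    else
        s->pending_rate = rate;  /* apply on transition to PLAYING */
    return true;
}

/* Called when the pipeline transitions to the PLAYING state. */
void on_playing(PlaybackRateState *s)
{
    s->playing = true;
    if (s->pending_rate != 0.0) {
        s->rate = s->pending_rate;
        s->pending_rate = 0.0;
    }
}
```
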


Cobalt Integration

Play/Pause/Set speed

@startuml

autonumber

box "Container" #LightGreen
participant Cobalt
participant Starboard
participant GStreamer_client
participant rialtoClient
end box


Cobalt             ->  Starboard:        SbPlayerSetPlaybackRate(player, rate)
note over GStreamer_client
For now we assume max 1 pipeline session in
GStreamer_client and therefore store its
Rialto handle in a local variable. This
will need to be fixed to support dual
playback.
end note

opt rate == 0
Starboard          ->  GStreamer_client: Set pipeline state PAUSED
GStreamer_client   ->  rialtoClient:     pause(pipeline_session)
else rate != 0
Starboard          ->  GStreamer_client: Set pipeline state PLAYING
GStreamer_client   ->  rialtoClient:     play(pipeline_session)
end
rialtoClient       --> GStreamer_client: status
GStreamer_client   --> Starboard:        status

Starboard          ->  GStreamer_client: Set pipeline playback rate
GStreamer_client   ->  rialtoClient:     setPlaybackRate(rate)
rialtoClient       --> GStreamer_client: status
GStreamer_client   --> Starboard:        status

Starboard          --> Cobalt:           status



opt Pause->play successful
rialtoClient       -/  GStreamer_client: notifyPlaybackState(pipeline_session, PLAYBACK_STATE_PLAYING)
else Play->pause successful
rialtoClient       -/  GStreamer_client: notifyPlaybackState(pipeline_session, PLAYBACK_STATE_PAUSED)
else Play<->pause state change failed
rialtoClient       -/  GStreamer_client: notifyPlaybackState(pipeline_session, PLAYBACK_STATE_FAILURE)
end

@enduml 


Render Frame (Video Peek)

Render frame may be called while playback is paused, either at the start of playback or immediately after a seek operation. The client must wait for the readyToRenderFrame() callback first.

@startuml

autonumber

box "Container" #LightGreen
participant Netflix
participant DPI
participant rialtoClient
end box

box "Platform" #LightBlue
participant rialtoServer
participant GStreamer_server
end box

Netflix            ->  DPI:              renderFrame()
DPI                ->  rialtoClient:     renderFrame()
rialtoClient       ->  rialtoServer:     renderFrame()
opt Frame renderable
rialtoServer       ->  GStreamer_server: Trigger rendering of frame

opt Frame rendered successfully

note across: It is a Netflix requirement to call updatePlaybackPosition() after renderFrame()
rialtoServer       --> rialtoClient:     notifyPosition(position)
rialtoClient       --> DPI:              notifyPosition(position)
DPI                --> Netflix:          updatePlaybackPosition(pts)

rialtoServer       --> rialtoClient:     status=true
else
rialtoServer       --> rialtoClient:     status=false
end

else renderFrame() called in bad state
rialtoServer       --> rialtoClient:     status=false
end

rialtoClient       --> DPI:              status
DPI                --> Netflix:          status

@enduml


Media data pipeline

Note that the data pipelines for different data sources (e.g. audio & video) should operate entirely independently.

Cobalt to Gstreamer


@startuml

autonumber

box "Container" #LightGreen
participant Cobalt
participant Starboard
participant GStreamer_client_appsrc
participant GStreamer_client_rialto_sink
participant ocdmProxy
end box


Cobalt                        ->  Starboard:                    SbPlayerWriteSample2(player, sample[])
note right: Currently sample array size must be 1
Starboard                     ->  Starboard:                    Create GstBuffer and add media data from sample to it

opt Sample encrypted
Starboard                     ->  GStreamer_client_appsrc:      gst_buffer_add_protection_meta(gst_buffer, decryption_params)
end


Starboard                     ->  GStreamer_client_appsrc:      gst_app_src_push_buffer(app_src, gst_buffer)
GStreamer_client_appsrc       --> GStreamer_client_rialto_sink: data flows through client pipeline
GStreamer_client_rialto_sink  ->  GStreamer_client_rialto_sink: gst_buffer_get_protection_meta(buffer)
GStreamer_client_rialto_sink  ->  ocdmProxy:                    opencdm_gstreamer_session_decrypt(key_session, buffer, decryption_params)
ocdmProxy                     ->  ocdmProxy:                    Create gst struct containing encryption data decryption_params
ocdmProxy                     ->  GStreamer_client_rialto_sink: gst_buffer_add_protection_meta(buffer, metadata)
note left
Decryption is deferred until the data is sent to Rialto,
so the required decryption parameters are attached to the
media frame, ready to be passed to Rialto when it requests
more data.
end note

@enduml


Netflix to Rialto Client


@startuml

autonumber

box "Container" #LightGreen
participant Netflix
participant DPI
participant rialtoClient
end box

rialtoClient      -/  DPI:              notifyNeedMediaData(pipeline_session, sourceId, frame_count, need_data_request_id, shm_info)
DPI               --> rialtoClient:

opt Cached segment stored from previous need data request
DPI               ->  rialtoClient:     addSegment(need_data_request_id, cached_media_segment)
rialtoClient      --> DPI:              status
note right: status!=OK should never happen here
end

loop While (frames_written < frame_count) && (addSegment() returns OK) && (get_next_media_sample_status == OK)

DPI               -/  Netflix:          getNextMediaSample(es_player, sample_writer)
Netflix           ->  DPI:              initSample(sample_writer, sample_attributes)
DPI               ->  DPI:              Cache sample_attributes
DPI               --> Netflix:          status
Netflix           ->  DPI:              write(sample_writer, data)
DPI               ->  DPI:              Create MediaSegment object from data and cached\nsample_attributes (including any decryption attributes)
DPI               ->  rialtoClient:     addSegment(need_data_request_id, media_segment)

opt Encrypted content && key session ID present in map (see Select Key ID)
rialtoClient      ->  rialtoClient:     Set key_id in media_segment to value\nfound in map for this key session ID
note left: MKS ID should only be found in map for Netflix content
end

rialtoClient      --> DPI:              status

opt status==NO_SPACE
DPI  ->  DPI:                           Cache segment for next need data request
note right
This will require allocating temporary
buffer to store the media data but this
should happen very rarely in practice.

*TODO:* Consider adding canAddSegment()
Rialto API so that initSample() could
return NO_AVAILABLE_BUFFERS to cancel
this request and avoid the need for 
the temporary media data cache.
end note
end

DPI               --> Netflix:          write_status
Netflix           --> DPI:              get_next_media_sample_status
end

opt get_next_media_sample_status == OK
DPI               ->  DPI:              have_data_status = OK
else get_next_media_sample_status == NO_AVAILABLE_SAMPLES
DPI               ->  DPI:              have_data_status = NO_AVAILABLE_SAMPLES
else get_next_media_sample_status == END_OF_STREAM
DPI               ->  DPI:              have_data_status = EOS
else
DPI               ->  DPI:              have_data_status = ERROR
end

DPI               ->  rialtoClient:     haveData(pipeline_session, have_data_status, need_data_request_id)


opt Data accepted

opt Frames pushed for all attached sources && buffered not sent
rialtoClient      -/  DPI:              notifyNetworkState(NETWORK_STATE_BUFFERED)
end

rialtoClient      --> DPI:              OK
else Error
rialtoClient      --> DPI:              ERROR
rialtoClient      -/  DPI:              notifyPlaybackState(PLAYBACK_STATE_FAILURE)
end

opt First video frame at start of playback or after seek ready for rendering
opt notifyFrameReady not currently implemented
rialtoClient      -/  DPI:              notifyFrameReady(time_position)
else
rialtoClient      -/  DPI:              notifyPlaybackState(PLAYBACK_STATE_PAUSED)
end
DPI               -/  Netflix:          readyToRenderFrame(pts=time_position)
end

@enduml


Cobalt to Rialto Client


@startuml

autonumber

box "Container" #LightGreen
participant Cobalt
participant Starboard
participant rialtoClient
end box

== Initialisation ==

Cobalt            ->  Starboard:        SbPlayerGetMaximumNumberOfSamplesPerWrite(player, sample_type)
Starboard         --> Cobalt:           max_samples=1
note right: Specify that Cobalt only send 1 sample at a time


== Write samples ==

rialtoClient      -/  Starboard:        notifyNeedMediaData(pipeline_session, sourceId, frame_count, need_data_request_id, shm_info)
Starboard         --> rialtoClient:

opt Cached segment stored from previous need data request
Starboard         ->  rialtoClient:     addSegment(need_data_request_id, cached_media_segment)
rialtoClient      --> Starboard:        status
note right: status!=OK should\nnever happen here
end

loop While (frames_written < frame_count) && (addSegment() returns OK) && (not end of stream)

Starboard         ->  Starboard:        Convert sourceId to media type
Starboard         -/  Cobalt:           SbPlayerDecoderStatusFunc(player, media_type, kSbPlayerDecoderStateNeedsData, ticket)
note right: ticket should be set to ticket value in last call to SbPlayerSeek()
Cobalt            --> Starboard:

opt Not end of stream
Cobalt            ->  Starboard:        SbPlayerWriteSample2(player, sample_type, samples, num_samples)
note right: num_samples!=1 is an error
Starboard         ->  Starboard:        Construct media_segment from sample, including any decryption parameters (drm_info)
Starboard         ->  rialtoClient:     addSegment(need_data_request_id, media_segment)
rialtoClient      --> Starboard:        status
opt status==NO_SPACE
Starboard         ->  Starboard:        Cache segment for next need data request
note right
This will require allocating temporary
buffer to store the media data but this
should happen very rarely in practice.

*TODO:* Consider adding canAddSegment()
Rialto API so that initSample() could
return NO_AVAILABLE_BUFFERS to cancel
this request and avoid the need for 
the temporary media data cache.
end note
end

else End of stream
Cobalt            ->  Starboard:        SbPlayerWriteEndOfStream(player, stream_type)
end

Starboard         --> Cobalt:
end

opt Not end of stream
Starboard         ->  Starboard:        have_data_status = OK
else End of stream
Starboard         ->  Starboard:        have_data_status = EOS
end

Starboard         ->  rialtoClient:     haveData(pipeline_session, have_data_status, need_data_request_id)

@enduml

Gstreamer Client to Rialto Server

Note: Because the client and server share common APIs, the parameters are used slightly differently depending on whether the app is running in a client process or directly on the Rialto server, as shown in the following two diagrams. The shared memory buffer is refilled as follows when running in client-server mode:


@startuml

autonumber

box "Container" #LightGreen
participant GStreamer_client
participant rialtoClient
end box

box "Platform" #LightBlue
participant rialtoServer
end box

note across
Whenever the audio or video region of the shared memory buffer is empty a refill should be initiated (unless the source is at EOS). The buffer may be empty for various reasons,
such as a new attachSource() call, all buffered frames having been consumed, or a flush due to a seek.
end note

rialtoServer      -/  rialtoClient:     notifyNeedMediaData(pipeline_session, sourceId, frameCount, needDataRequestId, shmInfo)
note left
shmInfo specifies the size and offset of the region to
populate with the media data in the shm buffer. This
data is used by the Rialto client to write the shm buffer.
end note
rialtoClient      ->  rialtoClient:     Store shmInfo & frameCount with request ID
rialtoClient      --> rialtoServer:
rialtoClient      -/  GStreamer_client: notifyNeedMediaData(pipeline_session, sourceId, frameCount,\n\t\t\t\t\tneedDataRequestId, shmInfo)
note right
The shmInfo should be set to invalid in this event as it is not used by the
application. It is only present to allow a common interface on client and server.
end note
GStreamer_client  --> rialtoClient:

loop While (framesFetched < frameCount) && (addSegment() returns OK)
GStreamer_client  ->  GStreamer_client: Pull frame from pipeline (use cached frame first if available)
note right: See below "Cache excess frame"
GStreamer_client  ->  GStreamer_client: Get any encryption metadata from media frame buffer
GStreamer_client  ->  GStreamer_client: Create media segment from sample data, sample\nmetadata and, if relevant, decryption metadata
GStreamer_client  ->  rialtoClient:     addSegment(needDataRequestId, segment)
opt needDataRequestId is valid && enough space to write segment and its metadata to shm region && client not trying to send too many frames
rialtoClient      ->  rialtoClient:     Copy segment & metadata to shm buffer based on previously stored shmInfo for this request ID
rialtoClient      --> GStreamer_client: OK
else Not enough space in shm region || client trying to send too many frames
rialtoClient      --> GStreamer_client: NO_SPACE
else needDataRequestId not found
rialtoClient      --> GStreamer_client: OK
note right: Silently ignore calls with invalid request ID as this is possible due to race conditions
end
end
note right
If not enough frames are available do not wait but
return whatever data is available immediately.
end note

opt Last call to addSegment() returned NO_SPACE
GStreamer_client  ->  GStreamer_client: Cache excess frame
end
note right
In previous loop it is possible that the final frame pulled from the pipeline would not
fit within the shm region. In this case that frame should be cached and sent when the
next notifyNeedMediaData() request arrives for the source.
end note

opt No frames available AND EOS not reached
GStreamer_client  ->  GStreamer_client: status = NO_AVAILABLE_SAMPLES
else EOS reached (with or without any frames available)
GStreamer_client  ->  GStreamer_client: status = EOS
else Samples available, not EOS
GStreamer_client  ->  GStreamer_client: status = OK
else
GStreamer_client  ->  GStreamer_client: status = ERROR
end

GStreamer_client  ->  rialtoClient:     haveData(pipeline_session, status, needDataRequestId)

opt needDataRequestId is valid - i.e. it matches the last sent notifyNeedMediaData request ID for an attached source

rialtoClient      ->  rialtoServer:     haveData(pipeline_session, status, needDataRequestId)

opt version in metadata is supported by Rialto Server && needDataRequestId is valid

opt At least one frame available
rialtoServer      ->  rialtoServer:     Trigger algorithm to push data to Gstreamer
note left: This must not block, it notifies the worker thread that new data is available
else
rialtoServer      ->  rialtoServer:     Set timer to send new needData request
note left: A timer is used to prevent a needData/haveData message\nstorm when client has no samples to send
end

opt Frames pushed for all attached sources && buffered not sent
rialtoServer      -/  rialtoClient:     notifyNetworkState(NETWORK_STATE_BUFFERED)
rialtoClient      -/  GStreamer_client: notifyNetworkState(NETWORK_STATE_BUFFERED)
end

rialtoServer      --> rialtoClient:     OK
rialtoClient      --> GStreamer_client: OK
else metadata version unsupported
rialtoServer      --> rialtoClient:     ERROR
rialtoClient      --> GStreamer_client: ERROR
rialtoServer      -/  rialtoClient:     notifyPlaybackState(PLAYBACK_STATE_FAILURE)
rialtoClient      -/  GStreamer_client: notifyPlaybackState(PLAYBACK_STATE_FAILURE)
else needDataRequestId not valid
rialtoServer      --> rialtoClient:     OK
note right
There are various race conditions, especially when seeking,
that can cause the request ID to not match the cached
value. This should be logged as a warning for
troubleshooting purposes but not treated as an error.
end note
rialtoClient      --> GStreamer_client: OK
end

else needDataRequestId is not valid
rialtoClient      --> GStreamer_client: OK
note right
See note above
end note
end

@enduml


1. Rialto server notifies the client that a refill is required. sourceId should match that specified in the attachSource() call for the A/V data stream. needDataRequestId must be a unique ID for this playback session.
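The request-ID validation used throughout these diagrams (silently ignoring stale IDs rather than treating them as errors) could be sketched as below. All names are illustrative; the grounded behaviour is that only the most recently issued needDataRequestId for a source is valid, and a mismatch is logged but reported as OK.

```c
#include <stdint.h>

/* Hypothetical per-source need-data bookkeeping. */
typedef struct {
    uint32_t last_issued_id;
    int      outstanding;  /* a notifyNeedMediaData is in flight */
} NeedDataState;

/* Issue a new unique request ID for this playback session. */
uint32_t issue_need_data(NeedDataState *s, uint32_t *next_id)
{
    s->last_issued_id = (*next_id)++;
    s->outstanding = 1;
    return s->last_issued_id;
}

/* Returns non-zero only for the most recently issued, still-outstanding
 * request. Stale IDs (e.g. races around a seek) should be logged as a
 * warning by the caller, which still returns OK to the peer. */
int have_data_valid(NeedDataState *s, uint32_t request_id)
{
    if (!s->outstanding || request_id != s->last_issued_id)
        return 0;
    s->outstanding = 0;
    return 1;
}
```
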


Media data flows as follows when running in server only mode:


@startuml

autonumber

box "Platform" #LightBlue
participant client
participant rialtoServer
end box

rialtoServer      -/  client:           notifyNeedMediaData(pipeline_session, sourceId, frameCount, maxBytes, needDataRequestId, shmInfo)
client            ->  client:           Ignore shmInfo
client            --> rialtoServer:

loop While (framesFetched < frameCount) && (addSegment() returns OK)
client            ->  client:           Get next frame & any decryption metadata
client            ->  rialtoServer:     addSegment(needDataRequestId, segment)

opt needDataRequestId is valid && enough space to write segment and its metadata to shm region && client not trying to send too many frames
rialtoServer      ->  rialtoServer:     Copy segment & metadata to shm buffer based on shmInfo for this request ID
rialtoServer      --> client:           OK
else Not enough space in shm region || client trying to send too many frames
rialtoServer      --> client:           NO_SPACE
else needDataRequestId not found
rialtoServer      --> client:           OK
note right: Silently ignore calls with invalid request ID as this is possible due to race conditions
end
end

note over client
Set status following same rules as in client/server mode
end note
client            ->  rialtoServer:     haveData(pipeline_session, status, needDataRequestId)
note across: From this point processing follows the same flow as shown in the client-server diagram.

@enduml

The code that populates the shm buffer from the parameters of addSegment() is common to the client and the server, so it should live in a shared location used by both implementations.


See also Rialto Client MSE Player Session Streaming State Machine for additional clarity on how the Rialto client should manage the flow of data, particularly with regard to seek operations.

Rialto Server to Gstreamer server

This algorithm should be run for all attached sources. A haveData() call in the above sequence can restart the algorithm if it previously stopped due to data exhaustion.


@startuml

autonumber

box "Platform" #LightGreen
participant rialtoClient
end box

box "Platform" #LightBlue
participant rialtoServer
participant GStreamer_server
participant Ocdm
end box

note across
The GStreamer app source uses two signals, need-data and enough-data,
to notify its client whether it needs more data. The Rialto server
should only push data when the appsrc indicates that it is in
the need-data state.
end note


loop While appsrc needs data && appsrc data available in shm buffer

rialtoServer       ->  rialtoServer:      Extract frame's metadata from shm
opt Frame encrypted

rialtoServer	   ->  GStreamer_server:  gst_buffer_add_protection_meta(buffer, meta)

end

rialtoServer       ->  GStreamer_server:  Set width/height caps
opt new codec_data in frame
rialtoServer       ->  GStreamer_server:  Set codec_data caps
end

rialtoServer       ->  GStreamer_server:  gst_app_src_push_buffer(src, gst_buffer)
rialtoServer       ->  rialtoServer:      'Remove' frame from shm

opt First video frame pushed at start of playback / after seek
note across: Not currently implemented
rialtoServer       --/ rialtoClient:      notifyFrameReady(frame_timestamp)
end

opt Appsrc data exhausted from shm
opt (status == EOS) for this appsrc
rialtoServer       ->  GStreamer_server:      notify EOS
else Not EOS
rialtoServer       --/ rialtoClient:      notifyNeedMediaData(...)
end
end

end

@enduml


Frames are decrypted in the pipeline when they are pulled for playback.


@startuml

autonumber

box "Platform" #LightGreen
participant rialtoClient
end box

box "Platform" #LightBlue
participant rialtoServer
participant GStreamer_server
participant Ocdm
end box

GStreamer_server   ->  rialtoServer:      decrypt(buffer)

rialtoServer	   ->  GStreamer_server:  gst_buffer_get_protection_meta(buffer)
 
opt Protection Meta exists (Frame encrypted)
 
rialtoServer       ->  rialtoServer:      Extract frame's metadata

opt media_keys.key_system == "com.netflix.playready"
rialtoServer       ->  Ocdm:              opencdm_select_key_id(ocdm_session, kid)
end

rialtoServer       ->  Ocdm:              opencdm_gstreamer_session_decrypt(ocdm_session, gst_buffer, subsample_info, iv, key, init_with_last_15)
end 

@enduml

Playback State

Position Reporting

The position reporting timer should be started whenever the PLAYING state is entered and stopped whenever the session moves to another playback state, i.e. polling stops during IDLE, BUFFERING, SEEKING, etc.


@startuml


autonumber

box "Container" #LightGreen
participant Cobalt
participant Starboard
participant GStreamer_client
participant rialtoClient
end box

box "Platform" #LightBlue
participant rialtoServer
participant GStreamer_server
end box

== Regular position update notifications ==

rialtoServer     ->  rialtoServer:     Position timer fired
rialtoServer     ->  GStreamer_server: Get position from pipeline
GStreamer_server --> rialtoServer:     Current position
rialtoServer     -/  rialtoClient:     notifyPosition(pipeline_session, position)
rialtoClient     -/  GStreamer_client: notifyPosition(pipeline_session, position)
note over GStreamer_client: Not used by Cobalt as some conformance\ntests require very high position accuracy


== Get position ==

Cobalt           ->  Starboard:        SbPlayerGetInfo2(player)
Starboard        ->  GStreamer_client: Get position
GStreamer_client ->  rialtoClient:     getPosition(pipeline_session)
rialtoClient     ->  rialtoServer:     getPosition(pipeline_session)
rialtoServer     ->  GStreamer_server: Get position from pipeline
GStreamer_server --> rialtoServer:     position
rialtoServer     --> rialtoClient:     position
rialtoClient     --> GStreamer_client: position
GStreamer_client --> Starboard:        position
Starboard        ->  Starboard:        Set player_info.pos = position
Starboard        --> Cobalt:           player_info

@enduml


End of stream

@startuml

autonumber

box "Container" #LightGreen
participant Cobalt
participant Starboard
participant GStreamer_client
participant rialtoClient
end box

box "Platform" #LightBlue
participant rialtoServer
participant GStreamer_server
end box

opt End of content reached
GStreamer_server   -/  rialtoServer:         GST_MESSAGE_EOS
rialtoServer       -/  rialtoClient:         notifyPlaybackState(pipeline_session, END_OF_STREAM)
rialtoClient       -/  GStreamer_client:     notifyPlaybackState(pipeline_session, END_OF_STREAM)
note left
This should notify all attached sinks of EOS
end note
GStreamer_client   -/  Starboard:            GST_MESSAGE_EOS
Starboard          -/  Cobalt:               PlayerStatus(player, kSbPlayerStateEndOfStream)
end

@enduml


Underflow

@startuml

autonumber

box "Container" #LightGreen
participant Cobalt
participant Starboard
participant GStreamer_client
participant rialtoClient
end box

box "Platform" #LightBlue
participant rialtoServer
participant GStreamer_server
end box

== Initialisation - register for callbacks ==

opt Video source attached
rialtoServer       ->  GStreamer_server:     g_signal_connect(video_decoder, getVideoUnderflowSignalName_soc(), video_underflow_cb, user_data);
GStreamer_server   --> rialtoServer:         video_handler_id
end

opt Audio source attached
rialtoServer       ->  GStreamer_server:     g_signal_connect(audio_decoder, getAudioUnderflowSignalName_soc(), audio_underflow_cb, user_data);
GStreamer_server   --> rialtoServer:         audio_handler_id
end


== Termination - unregister for callbacks ==

opt Video source removed
rialtoServer       ->  GStreamer_server:     g_signal_handler_disconnect(video_decoder, video_handler_id);
GStreamer_server   --> rialtoServer:
end

opt Audio source removed
rialtoServer       -> GStreamer_server:     g_signal_handler_disconnect(audio_decoder, audio_handler_id);
GStreamer_server   --> rialtoServer:
end


== Underflow ==

opt Data starvation in server AV pipeline
GStreamer_server   -/  rialtoServer:         video_underflow_cb() or audio_underflow_cb()
rialtoServer       ->  rialtoServer:         Set pipeline state to paused
rialtoServer       -/  rialtoClient:         notifyPlaybackState(pipeline_session, PLAYBACK_STATE_PAUSED)
rialtoClient       -/  GStreamer_client:     notifyPlaybackState(pipeline_session, PLAYBACK_STATE_PAUSED)
rialtoServer       -/  rialtoClient:         notifyNetworkState(pipeline_session, NETWORK_STATE_STALLED)
rialtoClient       -/  GStreamer_client:     notifyNetworkState(pipeline_session, NETWORK_STATE_STALLED)

note over Starboard, GStreamer_client
Starboard does not have any support for underflow
so the event can be ignored for this integration.
end note

note across
There will be one or more pending need-data requests at this point which, if serviced, will allow playback to resume
end note

end

== Recovery ==

opt rialtoServer detects that any need media data requests pending at point of underflow are now serviced and pushed to GStreamer_server || EOS signalled for any underflowed sources
note across
It is likely that underflow is caused by one source becoming starved whilst data is still buffered for the other sources, so waiting until the pending request(s) are serviced should allow playback to resume.
There are also some YouTube conformance tests that delay signalling EOS for an underflowed source whilst the other continues to stream; hence the EOS condition, which allows streaming to resume for the valid source.
end note

rialtoServer       ->  rialtoServer:         Set pipeline state to playing
rialtoServer       -/  rialtoClient:         notifyNetworkState(pipeline_session, NETWORK_STATE_BUFFERED)
rialtoClient       -/  GStreamer_client:     notifyNetworkState(pipeline_session, NETWORK_STATE_BUFFERED)
rialtoServer       -/  rialtoClient:         notifyPlaybackState(pipeline_session, PLAYBACK_STATE_PLAYING)
rialtoClient       -/  GStreamer_client:     notifyPlaybackState(pipeline_session, PLAYBACK_STATE_PLAYING)

end

@enduml