...
The shared memory buffer shall be split into two regions for each playback session: one for the video stream and one for audio (only one concurrent audio track is supported; to select a different audio track, the previous audio source must be removed first). Each region must be large enough to accommodate the largest possible frame of audio/video data plus its associated decryption parameters and metadata. At the end of the buffer there is also a separate section for Web Audio regions, common to all playback sessions. The buffer shall initially be sized at 8MB per playback session, to allow some overhead, plus 10KB per Web Audio region. There can be 0 or more Web Audio regions per Rialto Client.
For apps that support more than one concurrent playback, the shared memory buffer shall be sized accordingly and partitioned into a separate logical area for each playback session. The partitions need not be equally sized; for example, if an app supports one UHD and one HD playback, the 'HD' partition may be smaller. There can be a maximum of one Web Audio region per Rialto Client.
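As a rough illustration of the sizing rule above, the following sketch (hypothetical, not Rialto code) computes the total buffer size and the offset of each logical partition, assuming the initial 8MB-per-session and 10KB-per-Web-Audio-region figures. Real partitions need not be equally sized; fixed sizes are used here only for simplicity.

```python
# Illustrative sketch only: compute shared memory buffer size and
# partition offsets from the initial sizing rule described above.
MB = 1024 * 1024
KB = 1024

SESSION_SIZE = 8 * MB             # initial size per playback session
WEB_AUDIO_REGION_SIZE = 10 * KB   # size per Web Audio region

def shm_layout(max_playback_sessions, web_audio_regions):
    """Return total buffer size and the offset of each logical region."""
    offsets = {}
    offset = 0
    for session in range(max_playback_sessions):
        offsets[f"session_{session}"] = offset
        offset += SESSION_SIZE
    # Web Audio regions sit in a separate section at the end of the buffer.
    for region in range(web_audio_regions):
        offsets[f"web_audio_{region}"] = offset
        offset += WEB_AUDIO_REGION_SIZE
    return offset, offsets

total, offsets = shm_layout(max_playback_sessions=2, web_audio_regions=1)
# total == 2 * 8 MB + 1 * 10 KB == 16787456 bytes
```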
*draw.io diagram: Shared memory partitioning for multiple playbacks*
Note that the application is not directly aware of the layout of the shared memory region, or even of its existence; this is all managed internally by Rialto.
...
*draw.io diagram: Shared memory buffer layout for AV stream*
...
*draw.io diagram: Metadata format v1*
...
| Parameter | Size |
|---|---|
| Shared memory buffer | 8MB * max_playback_sessions + 10KB * max_web_audio_playback_sessions |
| Max frames to request | [24] |
| Metadata size per frame | Clear: variable but <100 bytes; Encrypted: TODO |
| Video frame region size | 7MB |
| Max video frame size | TODO |
| Audio frame region size | 1MB |
| Max audio frame size | TODO |
| Web Audio region size | 10KB |
...
*draw.io diagram: Shared memory buffer layout v2*
V2 metadata uses protobuf to serialise the frames' properties to the shared memory buffer. This use of protobuf aligns with the IPC protocol but also allows support for optional fields and for fields to be added and removed without causing backward/forward compatibility issues. It also supports variable length fields so the MKS ID, IV & sub-sample information can all be directly encoded in the metadata, avoiding the complexities of interleaving them with the media frames and referencing them with offsets/lengths as used in the V1 metadata format.
```proto
enum SegmentAlignment {
    ALIGNMENT_UNDEFINED = 0;
    ALIGNMENT_NAL = 1;
    ALIGNMENT_AU = 2;
}

enum CipherMode {
    CIPHER_MODE_UNDEFINED = 0;
    CIPHER_MODE_CENC = 1;      /* AES-CTR scheme */
    CIPHER_MODE_CBC1 = 2;      /* AES-CBC scheme */
    CIPHER_MODE_CENS = 3;      /* AES-CTR subsample pattern encryption scheme */
    CIPHER_MODE_CBCS = 4;      /* AES-CBC subsample pattern encryption scheme */
}

message MediaSegmentMetadata {
    required uint32 length = 1;                      /* Number of bytes in sample */
    required sint64 time_position = 2;               /* Position in stream in nanoseconds */
    optional sint64 sample_duration = 3;             /* Frame/sample duration in nanoseconds */
    required uint32 stream_id = 4;                   /* Stream id (unique ID for ES, as defined in attachSource()) */
    optional uint32 sample_rate = 5;                 /* Samples per second for audio segments */
    optional uint32 channels_num = 6;                /* Number of channels for audio segments */
    optional uint32 width = 7;                       /* Frame width in pixels for video segments */
    optional uint32 height = 8;                      /* Frame height in pixels for video segments */
    optional SegmentAlignment segment_alignment = 9; /* Segment alignment can be specified for H264/H265, will use NAL if not set */
    optional bytes extra_data = 10;                  /* Buffer containing extradata */
    optional bytes media_key_session_id = 11;        /* Buffer containing key session ID to use for decryption */
    optional bytes key_id = 12;                      /* Buffer containing key ID to use for decryption */
    optional bytes init_vector = 13;                 /* Buffer containing the initialization vector for decryption */
    optional uint32 init_with_last_15 = 14;          /* initWithLast15 value for decryption */
    repeated SubsamplePair sub_sample_info = 15;     /* If present, use gather/scatter decryption based on this list of clear/encrypted byte lengths. */
                                                     /* If not present and content is encrypted then the entire media segment needs decryption (unless */
                                                     /* cipher_mode indicates pattern encryption, in which case the crypt/skip byte block values specify */
                                                     /* the encryption pattern). */
    optional bytes codec_data = 16;                  /* Buffer containing updated codec data for video segments */
    optional CipherMode cipher_mode = 17;            /* Block cipher mode of operation when common encryption used */
    optional uint32 crypt_byte_block = 18;           /* Crypt byte block value for CBCS cipher mode pattern */
    optional uint32 skip_byte_block = 19;            /* Skip byte block value for CBCS cipher mode pattern */
    optional Fraction frame_rate = 20;               /* Fractional frame rate of the video segments */
}

message SubsamplePair {
    required uint32 num_clear_bytes = 1;             /* How many of next bytes in sequence are clear */
    required uint32 num_encrypted_bytes = 2;         /* How many of next bytes in sequence are encrypted */
}
```
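To see why protobuf's tag-prefixed, variable-length encoding suits this metadata, here is a hand-rolled sketch of the wire format for two MediaSegmentMetadata fields (`length` and `init_vector`). This is for illustration only; real code would use the generated protobuf bindings rather than encoding by hand.

```python
# Illustrative sketch of the protobuf wire format: each field is a
# varint key (field_number << 3 | wire_type) followed by its payload,
# so absent optional fields cost zero bytes and byte fields (IVs, key
# IDs) embed directly with a length prefix.
def encode_varint(value):
    out = bytearray()
    while True:
        byte = value & 0x7F
        value >>= 7
        if value:
            out.append(byte | 0x80)   # continuation bit set
        else:
            out.append(byte)
            return bytes(out)

def encode_field(field_number, wire_type, payload):
    return encode_varint((field_number << 3) | wire_type) + payload

# field 1: length (uint32, wire type 0 = varint)
msg = encode_field(1, 0, encode_varint(4096))
# field 13: init_vector (bytes, wire type 2 = length-delimited)
iv = bytes(16)
msg += encode_field(13, 2, encode_varint(len(iv)) + iv)
# msg starts with the field-1 key byte 0x08
```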
Playback Control
Rialto interactions with Client & GStreamer
...
| PlantUML Macro |
|---|
| format | SVG |
|---|
| title | Render Frame |
|---|
|
@startuml
autonumber
box "Container" #LightGreen
participant Netflix
participant DPI
participant rialtoClient
end box
box "Platform" #LightBlue
participant rialtoServer
participant GStreamer_server
end box
Netflix -> DPI: renderFrame()
DPI -> rialtoClient: renderFrame()
rialtoClient -> rialtoServer: renderFrame()
opt Frame renderable
rialtoServer -> GStreamer_server: Trigger rendering of frame
rialtoServer --> rialtoClient: status=true
else renderFrame() called in bad state
rialtoServer --> rialtoClient: status=false
end
rialtoClient --> DPI: status
DPI --> Netflix: status
opt Frame rendered successfully
note across: It is a Netflix requirement to call updatePlaybackPosition() after renderFrame()
rialtoServer --> rialtoClient: notifyPosition(position)
rialtoClient --> DPI: notifyPosition(position)
DPI -> Netflix: updatePlaybackPosition(pts)
end
@enduml |
Media data pipeline
Note that the data pipelines for different data sources (e.g. audio & video) should operate entirely independently. Rialto should
- attempt to keep the shm buffer as full as possible by requesting a refill for that source whenever the source's memory buffer is empty
- attempt to push all available frames for a source to GStreamer, i.e. push until Gstreamer indicates that it can accept no more data
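A minimal sketch of the per-source rule above, using hypothetical names: each source tracks its own shm frames, pushes while GStreamer will accept data, and flags a refill request as soon as its region empties, independently of any other source.

```python
# Hypothetical sketch (not Rialto code) of one source's independent
# data pipeline: push until GStreamer refuses more data, and request a
# refill whenever this source's shm region runs empty.
class SourcePipeline:
    def __init__(self, name):
        self.name = name
        self.shm_frames = []          # frames currently in this source's shm region
        self.refill_requested = False

    def on_frames_written(self, frames):
        # Client serviced our need-data request and wrote frames to shm.
        self.shm_frames.extend(frames)
        self.refill_requested = False

    def pump(self, gst_can_accept):
        # Push all available frames until GStreamer indicates it can
        # accept no more data for this source.
        pushed = []
        while self.shm_frames and gst_can_accept(self.name):
            pushed.append(self.shm_frames.pop(0))
        # Keep the shm buffer as full as possible: request a refill as
        # soon as this source's region is empty (i.e. notifyNeedMediaData).
        if not self.shm_frames and not self.refill_requested:
            self.refill_requested = True
        return pushed
```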
Cobalt to Gstreamer
| PlantUML Macro |
|---|
| format | SVG |
|---|
| title | Cobalt pushing media frames |
|---|
|
@startuml
autonumber
box "Container" #LightGreen
participant Cobalt
participant Starboard
participant GStreamer_client_appsrc
participant decrypter_element
participant GStreamer_client_rialto_sink
participant ocdmProxy
end box
Cobalt -> Starboard: SbPlayerWriteSample2(player, sample[])
note right: Currently sample array size must be 1
Starboard -> Starboard: Create GstBuffer and add media data from sample to it
opt Sample encrypted
Starboard -> Starboard: gst_buffer_add_protection_meta(gst_buffer, decryption_params)
end
Starboard -> GStreamer_client_appsrc: gst_app_src_push_buffer(app_src, gst_buffer)
GStreamer_client_appsrc --> decrypter_element: data flows through client pipeline
decrypter_element -> decrypter_element: gst_buffer_get_protection_meta(buffer)
opt Implementation before CBCS support added
decrypter_element -> ocdmProxy: opencdm_gstreamer_session_decrypt_ex(key_session, buffer, sub_samples, iv, kid, init_with_last_15, caps)
else Implementation after CBCS support added
decrypter_element -> ocdmProxy: opencdm_gstreamer_session_decrypt_buffer(key_session, buffer, caps)
end
ocdmProxy -> ocdmProxy: Create gst struct containing encryption data decryption_params
ocdmProxy -> GStreamer_client_rialto_sink: gst_buffer_add_protection_meta(buffer, metadata)
note left
Decryption is deferred until the data is sent to Rialto so
attach the required decryption parameters to the media frame
which are then ready to be passed to Rialto when it requests
more data.
end note
@enduml |
...
| PlantUML Macro |
|---|
| format | SVG |
|---|
| title | Netflix to Rialto Client |
|---|
|
@startuml
autonumber
box "Container" #LightGreen
participant Netflix
participant DPI
participant rialtoClient
end box
rialtoClient -/ DPI: notifyNeedMediaData(pipeline_session, sourceId, frame_count, need_data_request_id, shm_info)
DPI --> rialtoClient:
opt Cached segment stored from previous need data request
DPI -> rialtoClient: addSegment(need_data_request_id, cached_media_segment)
rialtoClient --> DPI: status
note right: status!=OK should never happen here
end
loop While (frames_written < frame_count) && (addSegment() returns OK) && (get_next_media_sample_status == OK)
DPI -/ Netflix: getNextMediaSample(es_player, sample_writer)
Netflix -> DPI: initSample(sample_writer, sample_attributes)
DPI -> DPI: Cache sample_attributes
DPI --> Netflix: status
Netflix -> DPI: write(sample_writer, data)
DPI -> DPI: Create MediaSegment object from data and cached\nsample_attributes (including any decryption attributes)
DPI -> rialtoClient: addSegment(need_data_request_id, media_segment)
opt Encrypted content && key session ID present in map (see Select Key ID)
rialtoClient -> rialtoClient: Set key_id in media_segment to value\nfound in map for this key session ID
note left: MKS ID should only be found in map for Netflix content
end
rialtoClient --> DPI: status
opt status==NO_SPACE
DPI -> DPI: Cache segment for next need data request
note right
This will require allocating temporary
buffer to store the media data but this
should happen very rarely in practise.
*TODO:* Consider adding canAddSegment()
Rialto API so that initSample() could
return NO_AVAILABLE_BUFFERS to cancel
this request and avoid the need for
the temporary media data cache.
end note
end
DPI --> Netflix: write_status
Netflix --> DPI: get_next_media_sample_status
end
opt get_next_media_sample_status == OK
DPI -> DPI: have_data_status = OK
else get_next_media_sample_status == NO_AVAILABLE_SAMPLES
DPI -> DPI: have_data_status = NO_AVAILABLE_SAMPLES
else get_next_media_sample_status == END_OF_STREAM
DPI -> DPI: have_data_status = EOS
else
DPI -> DPI: have_data_status = ERROR
end
DPI -> rialtoClient: haveData(pipeline_session, have_data_status, need_data_request_id)
opt Data accepted
opt Frames pushed for all attached sources && NETWORK_STATE_BUFFERED not yet sent
rialtoClient -/ DPI: notifyNetworkState(NETWORK_STATE_BUFFERED)
end
rialtoClient --> DPI: OK
else Error
rialtoClient --> DPI: ERROR
rialtoClient -/ DPI: notifyPlaybackState(PLAYBACK_STATE_FAILURE)
end
opt First video frame at start of playback or after seek ready for rendering
opt notifyFrameReady not currently implemented
rialtoClient -/ DPI: notifyPlaybackState(PLAYBACK_STATE_PAUSED)
else
rialtoClient -/ DPI: notifyFrameReady(time_position)
end
DPI -/ Netflix: readyToRenderFrame(pts=time_position)
end
@enduml |
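The DPI-side loop in the diagram above can be sketched as follows (hypothetical helper names; the real flow also distinguishes NO_AVAILABLE_SAMPLES and END_OF_STREAM statuses when deciding what to pass to haveData()).

```python
# Illustrative sketch of the need-data servicing loop: add segments
# until the requested frame count is reached or the Rialto client
# reports NO_SPACE, in which case the segment is cached and retried on
# the next need-data request.
OK, NO_SPACE = "OK", "NO_SPACE"

def service_need_data(request_id, frame_count, get_next_sample, add_segment, cache):
    frames_written = 0
    if cache:
        # Cached segment stored from a previous NO_SPACE response;
        # per the diagram, adding it here should never fail.
        add_segment(request_id, cache.pop())
        frames_written += 1
    while frames_written < frame_count:
        sample = get_next_sample()
        if sample is None:          # no samples available / end of stream
            break
        if add_segment(request_id, sample) == NO_SPACE:
            cache.append(sample)    # retry on the next need-data request
            break
        frames_written += 1
    return frames_written
```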
...
| PlantUML Macro |
|---|
| format | SVG |
|---|
| title | Server only mode |
|---|
|
@startuml
autonumber
box "Platform" #LightBlue
participant client
participant rialtoServer
end box
rialtoServer -/ client: notifyNeedMediaData(pipeline_session, sourceId, frameCount, maxBytes, needDataRequestId, shmInfo)
client -> client: Ignore shmInfo
client --> rialtoServer:
loop While (framesFetched < frameCount) && (addSegment() returns OK)
client -> client: Get next frame & any decryption metadata
client -> rialtoServer: addSegment(needDataRequestId, segment)
opt needDataRequestId is valid && enough space to write segment and its metadata to shm region && client not trying to send too many frames
rialtoServer -> rialtoServer: Copy segment & metadata to shm buffer based on shmInfo for this request ID
rialtoServer --> client: OK
else Not enough space in shm region || client trying to send too many frames
rialtoServer --> client: NO_SPACE
else needDataRequestId not found
rialtoServer --> client: OK
note right: Silently ignore calls with invalid request ID as this is possible due to race conditions
end
end
note over client
Set status following same rules as in client/server mode
end note
client -> rialtoServer: haveData(pipeline_session, status, needDataRequestId)
note across: From this point processing follows the same flow as shown in the client-server diagram.
@enduml |
| Note |
|---|
The code for populating the shm buffer from the parameters to addSegment() will be common on the client & server side so this should be stored in a common location to be used by both implementations. |
See also Rialto Client MSE Player Session Streaming State Machine for some additional clarity on how the Rialto client should manage the flow of data in particular regard to seek operations.
Rialto Server to Gstreamer server
This algorithm should be run for all attached sources. A haveData() call in the above sequence can restart the algorithm when it previously stopped due to data exhaustion.
| PlantUML Macro |
|---|
| format | SVG |
|---|
| title | Pushing data to Gstreamer server pipeline |
|---|
|
@startuml
autonumber
box "Platform" #LightGreen
participant rialtoClient
end box
box "Platform" #LightBlue
participant rialtoServer
participant GStreamer_server
participant Ocdm
end box
note across
Gstreamer app source uses 2 signals, need-data and enough-data,
to notify its client whether it needs more data. Rialto server
should only push data when the appsrc indicates that it is in
the need-data state.
end note
loop While appsrc needs data && appsrc data available in shm buffer
rialtoServer -> rialtoServer: Extract frame's metadata from shm
opt Frame encrypted
rialtoServer -> GStreamer_server: gst_buffer_add_protection_meta(buffer, meta)
end
rialtoServer -> GStreamer_server: Set width/height caps
opt new codec_data in frame
rialtoServer -> GStreamer_server: Set codec_data caps
end
rialtoServer -> GStreamer_server: gst_app_src_push_buffer(src, gst_buffer)
rialtoServer -> rialtoServer: 'Remove' frame from shm
opt First video frame pushed at start of playback / after seek
note across: Not currently implemented
rialtoServer --/ rialtoClient: notifyFrameReady(frame_timestamp)
end
opt Appsrc data exhausted from shm
opt (status == EOS) for this appsrc
rialtoServer -> GStreamer_server: notify EOS
else Not EOS
rialtoServer --/ rialtoClient: notifyNeedMediaData(...)
end
end
end
@enduml |
Frames are decrypted in the pipeline when they are pulled for playback.
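The SubsamplePair list in the metadata drives gather/scatter decryption: the decryptor collects only the encrypted byte ranges, decrypts them as one contiguous block, and scatters the plaintext back between the clear ranges. A sketch of the gather step (illustrative only, not OCDM code):

```python
# Illustrative sketch of SubsamplePair gather/scatter: each pair gives
# how many of the next bytes are clear and how many are encrypted, so
# the decryptor can gather the encrypted ranges into one contiguous
# block for decryption and later scatter the result back.
def split_subsamples(data, pairs):
    """pairs: list of (num_clear_bytes, num_encrypted_bytes) tuples."""
    clear_ranges, encrypted = [], bytearray()
    pos = 0
    for num_clear, num_encrypted in pairs:
        clear_ranges.append((pos, num_clear))       # (offset, length) of clear run
        pos += num_clear
        encrypted += data[pos:pos + num_encrypted]  # gather encrypted bytes
        pos += num_encrypted
    assert pos == len(data), "subsample map must cover whole buffer"
    return clear_ranges, bytes(encrypted)

# e.g. a 10-byte buffer: 4 clear, 3 encrypted, 2 clear, 1 encrypted
clear, enc = split_subsamples(bytes(range(10)), [(4, 3), (2, 1)])
# enc == bytes([4, 5, 6, 9])
```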
| PlantUML Macro |
|---|
| format | SVG |
|---|
| title | Decrypt Frames on Gstreamer server pipeline |
|---|
|
@startuml
autonumber
box "Platform" #LightGreen
participant rialtoClient
end box
box "Platform" #LightBlue
participant rialtoServer
participant GStreamer_server
participant Ocdm
end box
GStreamer_server -> rialtoServer: opencdm_gstreamer_session_decrypt(ocdm_session, gst_buffer, subsample_info, iv, key, init_with_last_15)
rialtoServer -> GStreamer_server: gst_buffer_get_protection_meta(gst_buffer)
opt Protection Meta exists (Frame encrypted)
rialtoServer -> rialtoServer: Extract frame's metadata
opt media_keys.key_system == "com.netflix.playready"
rialtoServer -> Ocdm: opencdm_select_key_id(ocdm_session, kid)
end
opt Implementation before CBCS support added
rialtoServer -> Ocdm: opencdm_gstreamer_session_decrypt_ex(ocdm_session, gst_buffer, subsample_info, iv, key, init_with_last_15, caps)
else Implementation after CBCS support added
rialtoServer -> Ocdm: opencdm_gstreamer_session_decrypt_buffer(ocdm_session, gst_buffer, caps)
end
end
@enduml |
Playback State
...
| PlantUML Macro |
|---|
|
@startuml
autonumber
box "Container" #LightGreen
participant Cobalt
participant Starboard
participant GStreamer_client
participant rialtoClient
end box
box "Platform" #LightBlue
participant rialtoServer
participant GStreamer_server
end box
== Initialisation - register for callbacks ==
opt Video source attached
rialtoServer -> GStreamer_server: g_signal_connect(video_decoder, getVideoUnderflowSignalName_soc(), video_underflow_cb, user_data);
GStreamer_server --> rialtoServer: video_handler_id
end
opt Audio source attached
rialtoServer -> GStreamer_server: g_signal_connect(audio_decoder, getAudioUnderflowSignalName_soc(), audio_underflow_cb, user_data);
GStreamer_server --> rialtoServer: audio_handler_id
end
== Termination - unregister for callbacks ==
opt Video source removed
rialtoServer -> GStreamer_server: g_signal_handler_disconnect(video_decoder, video_handler_id);
GStreamer_server --> rialtoServer:
end
opt Audio source removed
rialtoServer -> GStreamer_server: g_signal_handler_disconnect(audio_decoder, audio_handler_id);
GStreamer_server --> rialtoServer:
end
== Underflow ==
opt Data starvation in server AV pipeline
GStreamer_server -/ rialtoServer: video_underflow_cb() or audio_underflow_cb()
note across
underflow_enabled: Underflow is enabled when we're in playing state and source is attached.
underflow_cancelled: Underflow may be cancelled when haveData is called between notification from GStreamer and Underflow task handling.
end note
opt underflow_enabled && !underflow_cancelled
rialtoServer -> rialtoServer: Set pipeline state to paused
rialtoServer -/ rialtoClient: notifyPlaybackState(pipeline_session, PLAYBACK_STATE_PAUSED)
rialtoClient -/ GStreamer_client: notifyPlaybackState(pipeline_session, PLAYBACK_STATE_PAUSED)
rialtoServer -/ rialtoClient: notifyNetworkState(pipeline_session, NETWORK_STATE_STALLED)
rialtoClient -/ GStreamer_client: notifyNetworkState(pipeline_session, NETWORK_STATE_STALLED)
rialtoServer -/ rialtoClient: notifyBufferUnderflow(source_id)
rialtoClient -/ GStreamer_client: notifyBufferUnderflow(source_id)
GStreamer_client -/ Starboard: emit video_underflow_cb() or audio_underflow_cb()
note over Starboard, GStreamer_client
Starboard does not have any support for underflow
so the event can be ignored for this integration.
end note
end
note across
There will be one or more pending need data requests at this point which if serviced will allow playback to resume
end note
end
== Recovery ==
opt rialtoServer detects that any need media data requests pending at point of underflow are now serviced and pushed to GStreamer_server || EOS signalled for any underflowed sources
note across
It is likely that underflow is due to one source becoming starved whilst data is buffered for other sources, so waiting until pending request(s) are serviced should allow playback to resume.
There are also some YT conformance tests that delay signalling EOS for an underflowed source whilst the other continues to stream hence the EOS condition to allow streaming to resume for the valid source.
end note
rialtoServer -> rialtoServer: Set pipeline state to playing
rialtoServer -/ rialtoClient: notifyNetworkState(pipeline_session, NETWORK_STATE_BUFFERED)
rialtoClient -/ GStreamer_client: notifyNetworkState(pipeline_session, NETWORK_STATE_BUFFERED)
rialtoServer -/ rialtoClient: notifyPlaybackState(pipeline_session, PLAYBACK_STATE_PLAYING)
rialtoClient -/ GStreamer_client: notifyPlaybackState(pipeline_session, PLAYBACK_STATE_PLAYING)
end
@enduml |
Non-fatal Playback Failures
Decryption: any encrypted frame that fails to decrypt is dropped, and an error notification is propagated to rialto-gstreamer, at which point a decryption error is raised on the sink.
| PlantUML Macro |
|---|
| format | SVG |
|---|
| title | Non-fatal Errors |
|---|
|
@startuml
autonumber
box "Container" #LightGreen
participant Application
participant rialtoGstreamer
participant rialtoClient
end box
box "Platform" #LightBlue
participant rialtoServer
participant GStreamer_server
end box
== Decryption ==
GStreamer_server -> rialtoServer: decrypt(buffer)
rialtoServer -> GStreamer_server: MediaKeyErrorStatus::Fail
GStreamer_server -> GStreamer_server: GST_BASE_TRANSFORM_FLOW_DROPPED
note over GStreamer_server
Frame is dropped but playback is unaffected.
end note
GStreamer_server -> rialtoServer: GST_MESSAGE_WARNING(src, GST_STREAM_ERROR_DECRYPT)
rialtoServer -/ rialtoClient: notifyPlaybackError(MediaSourceType, PlaybackError::DECRYPTION)
rialtoClient -/ rialtoGstreamer: notifyPlaybackError(MediaSourceType, PlaybackError::DECRYPTION)
note over rialtoGstreamer
Posting an error message on the sink makes the
sink unable to continue playing back content.
end note
rialtoGstreamer -/ Application: GST_MESSAGE_ERROR(sink, GST_STREAM_ERROR_DECRYPT)
@enduml |