...
...
| draw.io Diagram |
|---|
| *(diagram image omitted)* |
...
| PlantUML Macro | ||||
|---|---|---|---|---|
| ||||
@startuml
autonumber
box "Container" #LightGreen
participant Client
participant rialtoClient
end box
box "Platform" #LightBlue
participant rialtoServer
participant GStreamer_server
end box
== Create Web Audio Player session ==
Client -> rialtoClient: createWebAudioPlayer(client, pcm_params)
note right
This should only be called when the app is in ACTIVE state.
If the client porting layer requires a persistent web audio player,
it will need to manage creation & destruction of the Rialto object
on state change.
end note
rialtoClient -> rialtoServer: createWebAudioPlayer(client, pcm_params)
rialtoServer -> rialtoServer: Check that the resource permissions granted to the app\nand the currently allocated pipelines permit this object to be created
note left: This check is (num_current_web_audio_sessions < resources.max_supported_web_audio)
rialtoServer -> rialtoServer: Check pcm_params valid
note left: sampleSize is valid as long as it is > CHAR_BIT (8 bits)
opt Permission & pcm_params checks passed
rialtoServer -> rialtoServer: Store offset & size of media transfer buffer in shm
note left: This data comes from the getSharedMemory() API called\nwhen Rialto server entered ACTIVE state
rialtoServer -> GStreamer_server: Create rialto server PCM audio pipeline
GStreamer_server --> rialtoServer:
rialtoServer --> rialtoClient: web_audio_session
rialtoClient --> Client: web_audio_session
rialtoServer --/ rialtoClient: notifyState(web_audio_session, IDLE)
rialtoClient --/ Client: notifyState(web_audio_session, IDLE)
else Checks failed
rialtoServer --> rialtoClient: nullptr
rialtoClient --> Client: nullptr
end
== Initialisation ==
Client -> rialtoClient: getDeviceInfo()
rialtoClient -> rialtoServer: getDeviceInfo()
rialtoServer --> rialtoClient: preferred_frames, max_frames, support_deferred_play
note right: preferred_frames=minimum of 640 or max_frames\nmax_frames=shm_region_size/(pcm_params.channels * pcm_params.sampleSize)\nsupport_deferred_play=true
rialtoClient --> Client: preferred_frames, max_frames, support_deferred_play
== Destroy Web Audio Player session ==
Client -> rialtoClient: ~IWebAudioPlayer()
note right
This should be called when the app leaves ACTIVE state
or destroys its web audio player.
end note
rialtoClient -> rialtoServer: ~IWebAudioPlayer(client, pcm_params)
rialtoServer -> GStreamer_server: Destroy PCM audio pipeline
GStreamer_server --> rialtoServer:
rialtoServer --> rialtoClient:
rialtoClient --> Client:
@enduml |
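The permission check, pcm_params validation, and getDeviceInfo() arithmetic in the notes above can be sketched in code. This is an illustrative C++ sketch, not the actual Rialto implementation: `PcmParams`, `maxFrames`, `preferredFrames`, and `pcmParamsValid` are hypothetical names, and it assumes `sampleSize` is expressed in bits (hence the division by `CHAR_BIT` to get bytes per frame).

```cpp
#include <algorithm>
#include <climits>
#include <cstdint>

// Hypothetical stand-in for the pcm_params passed to createWebAudioPlayer().
struct PcmParams
{
    uint32_t channels;   // number of audio channels
    uint32_t sampleSize; // bits per sample
};

// Per the note: sampleSize is valid as long as it is > CHAR_BIT (8 bits).
bool pcmParamsValid(const PcmParams& params)
{
    return params.sampleSize > CHAR_BIT;
}

// Per the note: (num_current_web_audio_sessions < resources.max_supported_web_audio).
bool canCreateWebAudioPlayer(uint32_t numCurrentSessions, uint32_t maxSupported)
{
    return numCurrentSessions < maxSupported;
}

// max_frames = shm_region_size / bytes per frame.
uint32_t maxFrames(uint32_t shmRegionSize, const PcmParams& params)
{
    const uint32_t bytesPerFrame = params.channels * (params.sampleSize / CHAR_BIT);
    return shmRegionSize / bytesPerFrame;
}

// preferred_frames = minimum of 640 or max_frames.
uint32_t preferredFrames(uint32_t shmRegionSize, const PcmParams& params)
{
    return std::min(640u, maxFrames(shmRegionSize, params));
}
```

For example, a 1 MiB shm region with 16-bit stereo PCM yields a max_frames of 262144 frames, so preferred_frames is capped at 640.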
...
| PlantUML Macro | ||||
|---|---|---|---|---|
| ||||
@startuml
autonumber
box "Platform" #LightBlue
participant rialtoServer
participant GStreamer_server
end box
note across
GStreamer app source uses 2 signals, need-data and enough-data,
to notify its client whether it needs more data.
Rialto server should only push data when the appsrc indicates
that it is in the need-data state.
end note
loop While appsrc needs data && data available in web audio shm region
rialtoServer -> GStreamer_server: gst_app_src_get_current_level_bytes(src)
GStreamer_server --> rialtoServer: bytes_in_gst_queue
note right: size of gst_buffer is either free_bytes in src or size of samples in shm
rialtoServer -> GStreamer_server: gst_buffer_new_allocate(size)
GStreamer_server --> rialtoServer: gst_buffer
rialtoServer -> rialtoServer: Fill gst_buffer with next set of samples from shm
note right: This will need to perform endianness conversion if necessary
rialtoServer -> GStreamer_server: gst_app_src_push_buffer(src, gst_buffer)
rialtoServer -> rialtoServer: Update internal shm variables for consumed data
opt Appsrc data exhausted from shm && internal EOS flag set
rialtoServer -> GStreamer_server: notify EOS
end
end
@enduml |
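Two details of the push loop above can be sketched without pulling in GStreamer itself: sizing the buffer to push, and the endianness conversion the note mentions. This is a hypothetical sketch, assuming 16-bit PCM samples and a fixed appsrc queue byte capacity; none of these names are real Rialto or GStreamer APIs.

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// The buffer pushed to the appsrc is sized to whichever is smaller:
// the free space left in the appsrc queue, or the sample bytes
// currently waiting in the shared memory region.
uint64_t bytesToPush(uint64_t appSrcMaxBytes, uint64_t bytesInGstQueue,
                     uint64_t bytesAvailableInShm)
{
    const uint64_t freeBytesInSrc = appSrcMaxBytes - bytesInGstQueue;
    return std::min(freeBytesInSrc, bytesAvailableInShm);
}

// Byte-swap 16-bit PCM samples when the client's endianness differs
// from what the pipeline expects.
std::vector<uint8_t> swapEndianness16(const std::vector<uint8_t>& samples)
{
    std::vector<uint8_t> out(samples.size());
    for (size_t i = 0; i + 1 < samples.size(); i += 2)
    {
        out[i] = samples[i + 1];
        out[i + 1] = samples[i];
    }
    return out;
}
```

In the real loop, the `bytesToPush` result would feed `gst_buffer_new_allocate()` and the converted samples would be copied into that buffer before `gst_app_src_push_buffer()`.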
...
| PlantUML Macro | ||||
|---|---|---|---|---|
| ||||
@startuml
autonumber
box "Container" #LightGreen
participant Client
participant rialtoClient
end box
box "Platform" #LightBlue
participant rialtoServer
end box
Client -> rialtoClient: getBufferDelay(web_audio_session)
rialtoClient -> rialtoServer: getBufferDelay(web_audio_session)
rialtoServer -> rialtoServer: Calculate delay_frames
note right
Ideally this would be queried from the GStreamer pipeline.
Either directly get the amount of buffered data:
delay = queued_frames() + shm_queued_frames();
or calculate it something like this:
delay = gst_element_query_duration() - gst_element_query_position();
This requires some experimentation to see what works.
The existing implementation relies on elapsed time vs position,
but this will become problematic when support for pause() is required.
end note
rialtoServer --> rialtoClient: delay_frames
rialtoClient --> Client: delay_frames
@enduml |
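The "directly get the amount of buffered data" option amounts to counting frames that are still in flight. A minimal sketch, assuming the server keeps counters for frames pushed to the appsrc, frames rendered by the sink, and frames still waiting in shm (all names here are hypothetical, not actual Rialto state):

```cpp
#include <cstdint>

// Hypothetical per-session counters the server could maintain.
struct WebAudioSession
{
    uint64_t framesPushedToGst;  // frames handed to the appsrc so far
    uint64_t framesRendered;     // frames the sink has consumed so far
    uint64_t framesWaitingInShm; // frames written by the client, not yet pushed
};

// delay = frames queued inside the pipeline + frames still in the shm region.
uint64_t getBufferDelay(const WebAudioSession& session)
{
    const uint64_t queuedFrames = session.framesPushedToGst - session.framesRendered;
    return queuedFrames + session.framesWaitingInShm;
}
```

Unlike the elapsed-time approach the note criticises, these counters stop advancing while paused, so the reported delay stays correct across pause().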
| PlantUML Macro | ||||
|---|---|---|---|---|
| ||||
@startuml
autonumber
box "Container" #LightGreen
participant Client
participant rialtoClient
end box
box "Platform" #LightBlue
participant rialtoServer
participant GStreamer_server
end box
Client -> rialtoClient: setVolume(web_audio_session, volume)
rialtoClient -> rialtoServer: setVolume(web_audio_session, volume)
rialtoServer -> GStreamer_server: gst_stream_volume_set_volume(pipeline, GST_STREAM_VOLUME_FORMAT_LINEAR, volume)
GStreamer_server --> rialtoServer: status
rialtoServer --> rialtoClient: status
rialtoClient --> Client: status
@enduml |
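Since `gst_stream_volume_set_volume()` with `GST_STREAM_VOLUME_FORMAT_LINEAR` takes a linear scale, the server would plausibly clamp the client-supplied value before the call. A minimal sketch, assuming the design caps playback at unity gain; `clampLinearVolume` is a hypothetical helper, not a Rialto or GStreamer API:

```cpp
#include <algorithm>

// Clamp a client-supplied linear volume to [0.0, 1.0] (silence to unity gain)
// before passing it to gst_stream_volume_set_volume().
double clampLinearVolume(double volume)
{
    return std::clamp(volume, 0.0, 1.0);
}
```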
| PlantUML Macro | ||||
|---|---|---|---|---|
| ||||
@startuml
autonumber
box "Container" #LightGreen
participant Client
participant rialtoClient
end box
box "Platform" #LightBlue
participant rialtoServer
participant GStreamer_server
end box
Client -> rialtoClient: getVolume(web_audio_session)
rialtoClient -> rialtoServer: getVolume(web_audio_session)
rialtoServer -> GStreamer_server: gst_stream_volume_get_volume(pipeline, GST_STREAM_VOLUME_FORMAT_LINEAR)
GStreamer_server --> rialtoServer: volume
rialtoServer --> rialtoClient: volume
rialtoClient --> Client: volume
@enduml |
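For context on the `GST_STREAM_VOLUME_FORMAT_LINEAR` used in both volume diagrams: GStreamer also offers a cubic scale that better matches perceived loudness, and `gst_stream_volume_convert_volume()` maps between the two with linear = cubic³. The helpers below are illustrative only, not part of Rialto or GStreamer:

```cpp
#include <cmath>

// Cubic-to-linear: a perceptual "half volume" of 0.5 on the cubic
// scale corresponds to a linear gain of 0.125.
double cubicToLinear(double cubic)
{
    return cubic * cubic * cubic;
}

// Linear-to-cubic is the inverse: the cube root of the linear gain.
double linearToCubic(double linear)
{
    return std::cbrt(linear);
}
```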