...
- Introduce new IWebAudioPlayer interface to allow the client to control PCM injection. This will be implemented in both Rialto Client & Server.
- Data will be passed in existing shm buffer used for MSE
- The resources parameter provided to Rialto Application Session Server when it is spawned will be extended to include a max_web_audio_playbacks parameter, which determines how many Web Audio playbacks this application can perform
- The shared memory buffer will have a suitably sized Web Audio region allocated if max_web_audio_playbacks > 0
...
| PlantUML Macro |
|---|
|
@startuml
hide empty description
state All {
All --> FAILURE: Fatal error
All --> [*]: ~WebAudioPlayer()
[*] --> IDLE: createWebAudioPlayer()
state Streaming {
IDLE --> PLAYING: play()
PLAYING --> PAUSED: pause()
PAUSED --> PLAYING: play()
}
PLAYING --> EOS: Gstreamer notifies EOS
EOS --> PLAYING: play()
}
@enduml |
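The transitions in the state machine above can be sketched as a minimal table-driven check. This is an illustrative model only; the enum and function names are assumptions, not the Rialto implementation.

```cpp
#include <cassert>
#include <set>
#include <utility>

// Hypothetical sketch of the WebAudioPlayer state machine shown above.
enum class WebAudioState { Idle, Playing, Paused, Eos, Failure };

// Allowed transitions, mirroring the UML: play(), pause(), and EOS.
static const std::set<std::pair<WebAudioState, WebAudioState>> kAllowed = {
    {WebAudioState::Idle,    WebAudioState::Playing},  // play()
    {WebAudioState::Playing, WebAudioState::Paused},   // pause()
    {WebAudioState::Paused,  WebAudioState::Playing},  // play()
    {WebAudioState::Playing, WebAudioState::Eos},      // Gstreamer notifies EOS
    {WebAudioState::Eos,     WebAudioState::Playing},  // play() restarts
};

bool canTransition(WebAudioState from, WebAudioState to)
{
    // A fatal error may occur in any state (the "All --> FAILURE" edge).
    if (to == WebAudioState::Failure)
        return true;
    return kAllowed.count({from, to}) != 0;
}
```

Note that, per the UML, EOS can only be reached from PLAYING, and play() is the only way out of it.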
Shared Memory Region
If max_web_audio_playbacks>0 then max_web_audio_playbacks regions will be allocated in the shared memory buffer for web audio data. The module managing Web Audio should fetch that region data during initialisation and then manage the memory as a circular buffer into which audio frames can be written and from which they are read. The following diagrams show typical snapshots of how the buffer might look during Web Audio streaming and how the next WebAudioShmInfo parameter returned by getBufferAvailable() would look in such a case.
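The circular-buffer behaviour described above can be sketched as follows. This is a minimal model only: the class, the bookkeeping, and the WebAudioShmInfo field names are illustrative assumptions about how the free space could be reported when it wraps around the end of the region.

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>

// Illustrative sketch: a circular view over one Web Audio shm region.
// Field names follow the spirit of WebAudioShmInfo but are assumptions here.
struct WebAudioShmInfo {
    uint32_t offsetMain; // first writable span: offset into the region
    uint32_t lengthMain; // first writable span: length in bytes
    uint32_t offsetWrap; // wrapped span at the start of the region
    uint32_t lengthWrap; // wrapped span length (0 if the free space does not wrap)
};

class WebAudioShmRegion {
public:
    explicit WebAudioShmRegion(uint32_t size) : size_(size) {}

    // Free space the writer may fill, possibly split in two by the wrap point.
    WebAudioShmInfo getBufferAvailable() const {
        uint32_t freeBytes = size_ - used_;
        uint32_t writePos = (readPos_ + used_) % size_;
        uint32_t untilEnd = std::min(freeBytes, size_ - writePos);
        return {writePos, untilEnd, 0u, freeBytes - untilEnd};
    }

    void commitWrite(uint32_t bytes) { used_ += bytes; }   // frames written
    void consume(uint32_t bytes) {                         // frames read out
        used_ -= bytes;
        readPos_ = (readPos_ + bytes) % size_;
    }
    uint32_t bytesAvailable() const { return used_; }

private:
    uint32_t size_;
    uint32_t readPos_{0}; // data is consumed (read) from here
    uint32_t used_{0};    // bytes written but not yet consumed
};
```

When the write position is near the end of the region and the read position has advanced, the free space splits into a "main" span up to the end of the region and a "wrap" span at offset 0, which is what the two-part WebAudioShmInfo captures.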
...
| draw.io Diagram |
|---|
| diagramName | Web Audio Shm Region |
Sequence Diagrams
Initialisation & Termination
| PlantUML Macro |
|---|
|
@startuml
autonumber
box "Container" #LightGreen
participant Client
participant rialtoClient
end box
box "Platform" #LightBlue
participant rialtoServer
participant GStreamer_server
end box
== Create Web Audio Player session ==
Client -> rialtoClient: createWebAudioPlayer(client, pcm_params)
note right
This should only be called when the app is in the ACTIVE
state. If client porting layer requires a persistent
web audio player it will need to manage creation &
destruction of the rialto object on state change.
end note
rialtoClient -> rialtoServer: createWebAudioPlayer(client, pcm_params)
rialtoServer -> rialtoServer: Check resource permissions that were granted to app and currently\nallocated pipelines permit this object to be created
note left: This check is num_current_web_audio_sessions < resources.max_web_audio_playbacks
rialtoServer -> rialtoServer: Check pcm_params valid
note left: sampleSize is valid as long as it is > CHAR_BIT (8 bits)
opt Permission & pcm_params checks passed
rialtoServer -> rialtoServer: Store offset & size of media transfer buffer in shm
note left: This data comes from the getSharedMemory() API called when Rialto server entered ACTIVE state
rialtoServer -> GStreamer_server: Create rialto server PCM audio pipeline
GStreamer_server --> rialtoServer:
rialtoServer --> rialtoClient: web_audio_session
rialtoClient --> Client: web_audio_session
rialtoServer --/ rialtoClient: notifyState(web_audio_session, IDLE)
rialtoClient --/ Client: notifyState(web_audio_session, IDLE)
else Checks failed
rialtoServer --> rialtoClient: nullptr
rialtoClient --> Client: nullptr
end
== Initialisation ==
Client -> rialtoClient: getDeviceInfo()
rialtoClient -> rialtoServer: getDeviceInfo()
rialtoServer --> rialtoClient: preferred_frames, max_frames, support_deferred_play
note right: preferred_frames=minimum of 640 or max_frames\nmax_frames=shm_region_size/(pcm_params.channels * pcm_params.sampleSize)\nsupport_deferred_play=true
rialtoClient --> Client: preferred_frames, max_frames, support_deferred_play
== Destroy Web Audio Player session ==
Client -> rialtoClient: ~IWebAudioPlayer()
note right
This should be called when the app leaves the ACTIVE
state or destroys its web audio player.
end note
rialtoClient -> rialtoServer: ~IWebAudioPlayer()
rialtoServer -> GStreamer_server: Destroy PCM audio pipeline
GStreamer_server --> rialtoServer:
rialtoServer --> rialtoClient:
rialtoClient --> Client:
@enduml |
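The getDeviceInfo() arithmetic from the note in the diagram above can be written out as a small sketch. It assumes sampleSize is expressed in bits (it must exceed CHAR_BIT), so one frame occupies channels * sampleSize / 8 bytes; the struct and function names are illustrative.

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>

// Sketch of the getDeviceInfo() calculation described above (names assumed).
struct DeviceInfo {
    uint32_t preferredFrames;   // minimum of 640 or maximumFrames
    uint32_t maximumFrames;     // how many frames fit in the shm region
    bool supportDeferredPlay;   // always true per the note above
};

DeviceInfo getDeviceInfo(uint32_t shmRegionSizeBytes, uint32_t channels,
                         uint32_t sampleSizeBits)
{
    // Bytes needed to hold one frame of interleaved PCM.
    uint32_t bytesPerFrame = channels * (sampleSizeBits / 8);
    uint32_t maxFrames = shmRegionSizeBytes / bytesPerFrame;
    return {std::min(640u, maxFrames), maxFrames, true};
}
```

For example, a 1 MiB region with 2-channel, 16-bit PCM gives 4 bytes per frame and therefore 262144 maximum frames, with the preferred count clamped to 640.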
...
| PlantUML Macro |
|---|
| format | SVG |
|---|
| title | Push Data Algorithm |
|---|
|
@startuml
autonumber
box "Platform" #LightBlue
participant rialtoServer
participant GStreamer_server
end box
note across
Gstreamer app source uses 2 signals, need-data and enough-data, to notify its
client whether it needs more data. Rialto server should only push data when
the appsrc indicates that it is in the need-data state.
end note
loop While appsrc needs data && data available in web audio shm region
rialtoServer -> GStreamer_server: gst_app_src_get_current_level_bytes(src)
GStreamer_server --> rialtoServer: bytes_in_gst_queue
note right: size of gst_buffer is either free_bytes in src or the size of samples in shm, whichever is smaller
rialtoServer -> GStreamer_server: gst_buffer_new_allocate(size)
GStreamer_server --> rialtoServer: gst_buffer
rialtoServer -> rialtoServer: Copy next set of samples from shm into gst_buffer
note right: This will need to perform endianness conversion if necessary
rialtoServer -> GStreamer_server: gst_app_src_push_buffer(src, gst_buffer)
rialtoServer -> rialtoServer: Update internal shm variables for consumed data
opt Appsrc data exhausted from shm && internal EOS flag set
rialtoServer -> GStreamer_server: notify EOS
end
end
@enduml |
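The control flow of the push-data loop above can be modelled with plain counters, so the sizing and EOS logic is visible without GStreamer. In the real server the queue level would come from gst_app_src_get_current_level_bytes() and each chunk would be a gst_app_src_push_buffer() of a freshly allocated gst_buffer; everything else here is an illustrative assumption.

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>

// Illustrative model of the push-data loop (GStreamer replaced by counters).
struct PushResult {
    uint32_t bytesPushed;  // total bytes handed to the (modelled) appsrc
    bool eosNotified;      // true if EOS was signalled after draining shm
};

PushResult pushLoop(bool needData, uint32_t queueFreeBytes,
                    uint32_t shmAvailableBytes, bool eosFlagSet)
{
    PushResult result{0, false};
    while (needData && shmAvailableBytes > 0 && queueFreeBytes > 0) {
        // Buffer size: free space in the appsrc queue or the samples left
        // in the shm region, whichever is smaller.
        uint32_t chunk = std::min(queueFreeBytes, shmAvailableBytes);
        result.bytesPushed += chunk;
        queueFreeBytes -= chunk;
        shmAvailableBytes -= chunk;
    }
    // EOS is only signalled once all data in shm has been pushed.
    if (shmAvailableBytes == 0 && eosFlagSet)
        result.eosNotified = true;
    return result;
}
```

Note that if the appsrc queue fills before the shm region drains, the loop stops and the remaining samples (and any pending EOS) wait for the next need-data signal.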
...
| PlantUML Macro |
|---|
| format | SVG |
|---|
| title | Get Delay Frames |
|---|
|
@startuml
autonumber
box "Container" #LightGreen
participant Client
participant rialtoClient
end box
box "Platform" #LightBlue
participant rialtoServer
end box
Client -> rialtoClient: getBufferDelay(web_audio_session)
rialtoClient -> rialtoServer: getBufferDelay(web_audio_session)
rialtoServer -> rialtoServer: Calculate delay_frames
note right
Ideally this would be queried from the GStreamer pipeline, either by directly
getting the amount of buffered data (e.g. delay = queued_frames() + shm_queued_frames())
or calculating it something like this:
delay = gst_element_query_duration() - gst_element_query_position();
This requires some experimentation to see what works. The existing implementation
relies on elapsed time vs position but this will become problematic when support
for pause() is required.
end note
rialtoServer --> rialtoClient: delay_frames
rialtoClient --> Client: delay_frames
@enduml |
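One candidate delay calculation mentioned in the note above is frames still queued in the GStreamer pipeline plus frames still waiting in the Web Audio shm region. A sketch of that arithmetic, with the pipeline's queued byte count stubbed as a plain input (in the real server it would be queried from the pipeline, and as the note says this needs experimentation):

```cpp
#include <cassert>
#include <cstdint>

// Sketch: delay_frames = frames queued in the pipeline + frames queued in shm.
// gstQueuedBytes would come from the pipeline (e.g. the appsrc queue level);
// bytesPerFrame = channels * sampleSize in bytes. Names are illustrative.
uint32_t calculateDelayFrames(uint64_t gstQueuedBytes, uint32_t bytesPerFrame,
                              uint32_t shmQueuedFrames)
{
    return static_cast<uint32_t>(gstQueuedBytes / bytesPerFrame) + shmQueuedFrames;
}
```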
Set Volume
| PlantUML Macro |
|---|
|
@startuml
autonumber
box "Container" #LightGreen
participant Client
participant rialtoClient
end box
box "Platform" #LightBlue
participant rialtoServer
participant GStreamer_server
end box
Client -> rialtoClient: setVolume(web_audio_session, volume)
rialtoClient -> rialtoServer: setVolume(web_audio_session, volume)
rialtoServer -> GStreamer_server: gst_stream_volume_set_volume(pipeline, GST_STREAM_VOLUME_FORMAT_LINEAR, volume)
GStreamer_server --> rialtoServer: status
rialtoServer --> rialtoClient: status
rialtoClient --> Client: status
@enduml |
Get Volume
| PlantUML Macro |
|---|
|
@startuml
autonumber
box "Container" #LightGreen
participant Client
participant rialtoClient
end box
box "Platform" #LightBlue
participant rialtoServer
participant GStreamer_server
end box
Client -> rialtoClient: getVolume(web_audio_session)
rialtoClient -> rialtoServer: getVolume(web_audio_session)
rialtoServer -> GStreamer_server: gst_stream_volume_get_volume(pipeline, GST_STREAM_VOLUME_FORMAT_LINEAR)
GStreamer_server --> rialtoServer: volume
rialtoServer --> rialtoClient: volume
rialtoClient --> Client: volume
@enduml |