Overview

Web Audio provides a mechanism for the client to play PCM audio clips alongside, and mixed with, any ongoing A/V playback sessions.

Web Audio Playback States

This diagram shows the states of the Web Audio object and the transitions between them. All state changes are notified via the Web Audio Client interface.


@startuml

hide empty description

state All {
    All --> FAILURE: Fatal error
    All --> [*]:     \~WebAudioPlayer()

    [*] --> IDLE:    createWebAudioPlayer()

    state Streaming {
        IDLE    --> PLAYING: play()
        PLAYING --> PAUSED:  pause()
        PAUSED  --> PLAYING: play()
    }

    PLAYING --> EOS:     GStreamer notifies EOS
    EOS     --> PLAYING: play()
}

@enduml



Shared Memory Region

If max_web_audio_playbacks > 0 then max_web_audio_playbacks regions will be allocated in the shared memory buffer for web audio data. The module managing Web Audio should fetch the region data during initialisation and then manage the memory as a circular buffer into which audio frames are written and from which they are read. The following diagrams show typical snapshots of how the buffer might look during Web Audio streaming and how the WebAudioShmInfo parameter returned by the next getBufferAvailable() call would look in each case.
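As an illustration, the main/wrap split that getBufferAvailable() reports could be computed as below. This is a sketch only: the WebAudioShmInfo field names and the bookkeeping variables are assumptions, not the actual Rialto types.

```cpp
#include <algorithm>
#include <cstdint>

// Hypothetical mirror of the WebAudioShmInfo fields referenced by
// getBufferAvailable()/writeBuffer(); the field names are assumptions.
struct WebAudioShmInfo
{
    uint32_t offsetMain; // start of the first writable span
    uint32_t lengthMain; // bytes writable before the region end
    uint32_t offsetWrap; // start of the wrapped span (region base)
    uint32_t lengthWrap; // bytes writable after wrapping to the base
};

// Express the free space of a circular buffer as (up to) two contiguous
// spans: one from the write position to the end of the region, and one
// wrapped back round to the region base.
WebAudioShmInfo getFreeSpans(uint32_t regionOffset, uint32_t regionSize,
                             uint32_t readPos, uint32_t bytesQueued)
{
    const uint32_t writePos   = (readPos + bytesQueued) % regionSize;
    const uint32_t freeBytes  = regionSize - bytesQueued;
    const uint32_t lengthMain = std::min(freeBytes, regionSize - writePos);

    return WebAudioShmInfo{regionOffset + writePos, lengthMain,
                           regionOffset, freeBytes - lengthMain};
}
```

For example, with a 1024-byte region, read position 256 and 640 bytes queued, the write position is 896, so 128 bytes are writable up to the region end and the remaining 256 free bytes wrap to the base.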



Sequence Diagrams

Initialisation & termination

@startuml

autonumber

box "Container" #LightGreen
participant Client
participant rialtoClient
end box

box "Platform" #LightBlue
participant rialtoServer
participant GStreamer_server
end box


== Create Web Audio Player session ==

Client            ->  rialtoClient:     createWebAudioPlayer(client, pcm_params)
note right
This should only be called when the app is in the
ACTIVE state. If the client porting layer requires a
persistent web audio player it will need to manage
creation & destruction of the rialto object on state
change.
end note
rialtoClient      ->  rialtoServer:     createWebAudioPlayer(client, pcm_params)
rialtoServer      ->  rialtoServer:     Check resource permissions that were granted to app and currently\nallocated pipelines permit this object to be created
note left: This check is (num_current_web_audio_sessions < resources.max_supported_web_audio)
rialtoServer      ->  rialtoServer:     Check pcm_params valid
note left: sampleSize is valid given channels & isFloat params &&\n(isSigned || !isFloat)
 

alt Permission & pcm_params checks passed
rialtoServer      ->  rialtoServer:     Store offset & size of media transfer buffer in shm
note left: This data comes from the getSharedMemory() API called when Rialto server entered ACTIVE state
rialtoServer      ->  GStreamer_server: Create rialto server PCM audio pipeline
GStreamer_server  --> rialtoServer:
rialtoServer      --> rialtoClient:     web_audio_session
rialtoClient      --> Client:           web_audio_session

rialtoServer      --/ rialtoClient:     notifyState(web_audio_session, IDLE)
rialtoClient      --/ Client:           notifyState(web_audio_session, IDLE)

else Checks failed

rialtoServer      --> rialtoClient:     nullptr
rialtoClient      --> Client:           nullptr

end


== Initialisation ==

Client            ->  rialtoClient:     getDeviceInfo()
rialtoClient      ->  rialtoServer:     getDeviceInfo()
rialtoServer      --> rialtoClient:     preferred_frames, max_frames, support_deferred_play
note right: preferred_frames=640\nmax_frames=shm_region_size/pcm_params.sample_size\nsupport_deferred_play=true
rialtoClient      --> Client:           preferred_frames, max_frames, support_deferred_play


== Destroy Web Audio Player session ==

Client            ->  rialtoClient:     ~IWebAudioPlayer()
note right
This should be called when the app leaves the ACTIVE
state or destroys its web audio player.
end note
rialtoClient      ->  rialtoServer:     ~IWebAudioPlayer()
rialtoServer      ->  GStreamer_server: Destroy PCM audio pipeline
GStreamer_server  --> rialtoServer:
rialtoServer      --> rialtoClient:
rialtoClient      --> Client:


@enduml
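The pcm_params validity check and the getDeviceInfo() arithmetic above could be sketched as follows. The struct fields and the set of accepted sample sizes are assumptions; the diagram's note divides the region size by pcm_params.sample_size, which is read here as a whole-frame byte size derived from sample width and channel count.

```cpp
#include <cstdint>

// Hypothetical PCM parameter set; field names follow the diagram's
// pcm_params references and are assumptions.
struct WebAudioPcmConfig
{
    uint32_t rate;       // samples per second
    uint32_t channels;   // interleaved channel count
    uint32_t sampleSize; // bits per sample
    bool isBigEndian;
    bool isSigned;
    bool isFloat;
};

// Mirror of the "Check pcm_params valid" step: the sample width must be
// sane for the format, and float samples must be signed
// (i.e. isSigned || !isFloat).
bool isPcmConfigValid(const WebAudioPcmConfig &cfg)
{
    const bool sampleSizeValid =
        cfg.isFloat ? (cfg.sampleSize == 32 || cfg.sampleSize == 64)
                    : (cfg.sampleSize == 8 || cfg.sampleSize == 16 ||
                       cfg.sampleSize == 24 || cfg.sampleSize == 32);
    return sampleSizeValid && (cfg.isSigned || !cfg.isFloat);
}

// Mirror of the getDeviceInfo() note: max_frames is bounded by how many
// whole frames fit in the shared memory region.
uint32_t maxFramesForRegion(uint32_t shmRegionSize, const WebAudioPcmConfig &cfg)
{
    const uint32_t bytesPerFrame = (cfg.sampleSize / 8) * cfg.channels;
    return bytesPerFrame ? shmRegionSize / bytesPerFrame : 0;
}
```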


Client Sends Audio Frames

@startuml

autonumber

box "Container" #LightGreen
participant Client
participant rialtoClient
end box

box "Platform" #LightBlue
participant rialtoServer
participant GStreamer_server
end box


== Write audio data ==

loop Until all audio data written

Client            ->  rialtoClient:     getBufferAvailable(web_audio_session)
rialtoClient      ->  rialtoServer:     getBufferAvailable(web_audio_session)
rialtoServer      ->  rialtoServer:     available_frames = buffer_size_in_frames - frames_in_buffer
rialtoServer      ->  rialtoServer:     Set web_audio_shm_info based on next location(s) of shm to be written
rialtoServer      --> rialtoClient:     available_frames, web_audio_shm_info
rialtoClient      ->  rialtoClient:     Cache available_frames & web_audio_shm_info\nready for anticipated writeBuffer() call.
rialtoClient      --> Client:           available_frames, web_audio_shm_info
note right: web_audio_shm_info should be set to\ninvalid and ignored by the client

Client            ->  rialtoClient:     writeBuffer(web_audio_session, number_of_frames, data_ptr)

alt (frame_buffer valid) && (frame_buffer.size <= available_frames) && (web_audio_shm_info.lengthMain + web_audio_shm_info.lengthWrap == frame_buffer.size * pcm_params.sample_size)
rialtoClient      ->  rialtoClient:     Write frames to circular shm buffer region at location(s)\nspecified in cached web_audio_shm
rialtoClient      ->  rialtoServer:     writeBuffer(web_audio_session, number_of_frames, nullptr)
note right: nullptr indicates that data is already written to shm.\nIn server only mode this pointer must be valid.
rialtoServer      ->  rialtoServer:     Update internal circular buffer variables &\ntrigger data push algorithm
rialtoServer      --> rialtoClient:     status=true
rialtoClient      --> Client:           status
else
rialtoClient      --> Client:           false
end

end

== EOS signalled when all audio data written ==

Client            ->  rialtoClient:     setEos()
rialtoClient      ->  rialtoServer:     setEos()
rialtoServer      ->  rialtoServer:     Set internal flag to indicate no more data expected
rialtoServer      --> rialtoClient:     status=true
rialtoClient      --> Client:           status

== GStreamer has reached EOS ==
GStreamer_server  --/ rialtoServer:     EOS
rialtoServer      --/ rialtoClient:     notifyState(web_audio_session, END_OF_STREAM)
rialtoClient      --/ Client:           notifyState(web_audio_session, END_OF_STREAM)

== Underflow ==

note across: This is currently ignored

@enduml
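Client-side, the cached web_audio_shm_info drives the copy performed in writeBuffer(). A minimal sketch of that copy, assuming the WebAudioShmInfo field names used in the condition above and a mapped shm base pointer:

```cpp
#include <algorithm>
#include <cstdint>
#include <cstring>

// Assumed shape of the cached shm info (see getBufferAvailable()).
struct WebAudioShmInfo
{
    uint32_t offsetMain;
    uint32_t lengthMain;
    uint32_t offsetWrap;
    uint32_t lengthWrap;
};

// Copy the frames into the circular shm region at the location(s)
// returned by the last getBufferAvailable() call.  Returns false when the
// write does not fit, matching the guard in the sequence diagram.
bool writeFramesToShm(uint8_t *shmBase, const WebAudioShmInfo &info,
                      const uint8_t *data, uint32_t numFrames,
                      uint32_t bytesPerFrame)
{
    const uint64_t bytes = uint64_t(numFrames) * bytesPerFrame;
    if (bytes > uint64_t(info.lengthMain) + info.lengthWrap)
        return false;

    // Fill the main span first; any remainder wraps to the region base.
    const uint32_t mainBytes = std::min<uint64_t>(bytes, info.lengthMain);
    std::memcpy(shmBase + info.offsetMain, data, mainBytes);
    if (bytes > mainBytes)
        std::memcpy(shmBase + info.offsetWrap, data + mainBytes, bytes - mainBytes);
    return true;
}
```

After a successful copy the client library calls writeBuffer() on the server with a null data pointer, as in the diagram, since the data is already in shm.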


Rialto Internal Push Data Algorithm

@startuml

autonumber

box "Platform" #LightBlue
participant rialtoServer
participant GStreamer_server
end box

note across
GStreamer app source uses two signals, need-data and enough-data, to notify its
client whether it needs more data. Rialto server should only push data when
the appsrc indicates that it is in the need-data state.
end note


loop While appsrc needs data && data available in web audio shm region

rialtoServer       ->  rialtoServer:      Create gst_buffer containing next set of samples from shm
note right: This will need to perform endianness conversion if necessary
rialtoServer       ->  GStreamer_server:  gst_app_src_push_buffer(src, gst_buffer)
rialtoServer       ->  rialtoServer:      Update internal shm variables for consumed data

opt Appsrc data exhausted from shm && internal EOS flag set
rialtoServer       ->  GStreamer_server:  notify EOS
end

end

@enduml
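The loop above can be simulated with the GStreamer interaction abstracted into callbacks so the circular-buffer bookkeeping is explicit. Here pushChunk stands in for wrapping the bytes in a GstBuffer and calling gst_app_src_push_buffer(), and notifyEos stands in for gst_app_src_end_of_stream(); all names and the chunking policy are assumptions.

```cpp
#include <algorithm>
#include <cstdint>
#include <functional>

// Simplified circular-buffer state for the web audio shm region.
struct ShmState
{
    uint32_t readPos = 0;     // next byte to consume
    uint32_t bytesQueued = 0; // bytes written but not yet pushed
    uint32_t regionSize = 0;
    bool eosFlagged = false;  // set by setEos()
};

// While the appsrc needs data and samples are queued, hand the next chunk
// to GStreamer and advance the read position.  A chunk never crosses the
// wrap point, so each push is one contiguous span of shm.  When the buffer
// drains with the EOS flag set, signal EOS.
void pushData(ShmState &s, uint32_t chunkBytes,
              const std::function<bool()> &appsrcNeedsData,
              const std::function<void(uint32_t, uint32_t)> &pushChunk,
              const std::function<void()> &notifyEos)
{
    while (appsrcNeedsData() && s.bytesQueued > 0)
    {
        const uint32_t n =
            std::min({chunkBytes, s.bytesQueued, s.regionSize - s.readPos});
        pushChunk(s.readPos, n);
        s.readPos = (s.readPos + n) % s.regionSize;
        s.bytesQueued -= n;
    }
    if (s.bytesQueued == 0 && s.eosFlagged)
        notifyEos();
}
```

Splitting at the wrap point keeps each GstBuffer a single copy from contiguous memory; any endianness conversion noted in the diagram would happen while building the buffer.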


Play & Pause

@startuml

autonumber

box "Container" #LightGreen
participant Client
participant rialtoClient
end box

box "Platform" #LightBlue
participant rialtoServer
participant GStreamer_server
end box


== Start/resume playback ==

Client            ->  rialtoClient:     play(web_audio_session)
rialtoClient      ->  rialtoServer:     play(web_audio_session)
rialtoServer      ->  GStreamer_server: Set pipeline state to PLAYING
GStreamer_server  --> rialtoServer:     status
rialtoServer      --> rialtoClient:     status
rialtoClient      --> Client:           status

alt status==true
rialtoServer      --/ rialtoClient:     notifyState(web_audio_session, PLAYING)
rialtoClient      --/ Client:           notifyState(web_audio_session, PLAYING)
else status==false
rialtoServer      --/ rialtoClient:     notifyState(web_audio_session, FAILURE)
rialtoClient      --/ Client:           notifyState(web_audio_session, FAILURE)
end


== Pause playback ==

Client            ->  rialtoClient:     pause(web_audio_session)
rialtoClient      ->  rialtoServer:     pause(web_audio_session)
rialtoServer      ->  GStreamer_server: Set pipeline state to PAUSED
GStreamer_server  --> rialtoServer:     status
rialtoServer      --> rialtoClient:     status
rialtoClient      --> Client:           status

alt status==true
rialtoServer      --/ rialtoClient:     notifyState(web_audio_session, PAUSED)
rialtoClient      --/ Client:           notifyState(web_audio_session, PAUSED)
else status==false
rialtoServer      --/ rialtoClient:     notifyState(web_audio_session, FAILURE)
rialtoClient      --/ Client:           notifyState(web_audio_session, FAILURE)
end


@enduml


Get Buffer Delay

@startuml

autonumber

box "Container" #LightGreen
participant Client
participant rialtoClient
end box

box "Platform" #LightBlue
participant rialtoServer
end box


Client            ->  rialtoClient:     getBufferDelay(web_audio_session)
rialtoClient      ->  rialtoServer:     getBufferDelay(web_audio_session)
rialtoServer      ->  rialtoServer:     Calculate delay_frames
note right
Ideally this would be queried from the GStreamer pipeline, either by directly getting
the amount of buffered data or by calculating it, e.g.:

  delay = gst_element_query_duration() - gst_element_query_position();

This requires some experimentation to see what works. The existing implementation
relies on elapsed time vs position, but this will become problematic when support
for pause() is required.
end note
rialtoServer      --> rialtoClient:     delay_frames
rialtoClient      --> Client:           delay_frames


@enduml
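The note's suggestion amounts to converting a nanosecond delay into frames at the configured sample rate. A sketch of that conversion, where durationNs and positionNs stand in for gst_element_query_duration()/gst_element_query_position() results in GST_FORMAT_TIME (this is an illustration, not the existing implementation):

```cpp
#include <cstdint>

// Convert a buffered-duration estimate into frames, as getBufferDelay()
// would return.  The subtraction mirrors the suggestion in the note above;
// a negative difference is clamped to zero.
uint32_t delayFrames(int64_t durationNs, int64_t positionNs, uint32_t sampleRate)
{
    const int64_t delayNs = durationNs - positionNs;
    if (delayNs <= 0)
        return 0;
    // frames = seconds * rate, computed in integer nanoseconds
    return static_cast<uint32_t>((delayNs * sampleRate) / 1000000000LL);
}
```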