
Introduction

Downloadable Application Containers (DAC) is an RDK initiative and cooperation effort between Comcast, Sky, Liberty Global, RDK-M and Consult Red. The work is coordinated by the RDK DAC Special Interest Group (RDKDACSIG).

The DAC end goal is to provide a solution that allows app developers to write and package native (C/C++) applications in a way that makes the application binary exchangeable, so it can run across all RDK boxes without (compile-time) modifications.

The group initially focused on RDK-V, and the solution described on this wiki page focuses on RDK-V, but many of the same principles and components are also suitable for RDK-B.

Solution Overview

The diagram above provides a high-level overview of the DAC solution.

...

Video: dac_overview_rdk_tech_summit_2024_1080p.mp4

Reference Architecture

Below is a more detailed diagram showing the technical components inside the RDK 6.1 software on the STB and in the DAC reference cloud.

...

Three functional areas are discussed further in the sections below.

APP SDK

A DAC SDK has been created to make it easier for app developers to build OCI/DAC-compliant containerized applications. The SDK is public, fully open source and uses the standard Yocto 3.1 (Dunfell) Poky cross-compilation/build environment. It does NOT require you as an app developer to have any SoC- or OEM-specific software/SDK to compile and package your AppContainer; it is SoC/OEM agnostic. The SDK lets developers cross-compile their application sources, automatically adds the appropriate dependent libraries to the AppContainer, and produces an OCI, Dobby and DAC compliant AppContainer image. This binary OCI AppContainer image can then be uploaded to an OCI registry for sharing with consumers/distribution platforms. As an app developer you can test and run your AppContainer on all RDK 6.1 RDK-M Video Accelerators (when publishing the app to the RDK-M VA DAC cloud instance).

...

For information on how to build a DAC app and use the DAC SDK, we refer to HOWTO to build your native App with DAC SDK and publish to RDK-M DAC cloud#useDACSDKandbuildyourfirstnativeDACapp

OCI container registry

OCI container registry: a standard cloud-native component that allows storage and exchange of AppContainers in the standardized OCI image format; see the "Image Format Specification" by the Open Container Initiative.
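For reference, the OCI image format centres on a JSON manifest that lists the container's config and filesystem layers by digest. A minimal example (with placeholder digest and size values, not taken from a real image) looks like this:

```json
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.oci.image.manifest.v1+json",
  "config": {
    "mediaType": "application/vnd.oci.image.config.v1+json",
    "digest": "sha256:<config-digest>",
    "size": 1234
  },
  "layers": [
    {
      "mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
      "digest": "sha256:<layer-digest>",
      "size": 56789
    }
  ]
}
```

A registry stores these manifests plus the blobs they reference, which is what makes the AppContainer exchangeable between tools that follow the specification.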

...

The OCI registry is the binary app exchange point between the app developer and the consuming parties/operators that distribute the app on their target platforms.

DAC Cloud reference

To run the OCI compliant AppContainer on a set of devices in production, an associated secure distribution solution/system must be in place.
This distribution solution is the responsibility of the operator, not of the app developer.
It is typically also a distribution system/instance per operator/environment because:

...

That is what the reference DAC cloud system does. Once the bundle for a particular app_id/platform/compatible-version combination is created, it is available in the cache of ASCS (Appstore Caching Service), which can serve as the origin for further caching on a CDN. If you request a bundle for a particular app_id/platform/compatible-version from ASCS and it is not available in its cache, ASCS automatically triggers real-time creation of the associated bundle in a flow managed by ASBS (Appstore Bundle Service). ASBS fetches the appropriate metadata from ASMS and instructs the bundle-generator-service and bundle-cryptor-service, via RabbitMQ, with the right input parameters to generate and secure the bundle and cache it on ASCS.
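The cache-or-generate behaviour described above can be sketched as follows. This is an illustrative model only: the `Ascs` class, `fake_asbs` stub and return values are assumptions for the sake of the example, not the actual service implementation.

```python
# Sketch of the ASCS "serve from cache, else trigger ASBS generation" flow.
# All names here (Ascs, fake_asbs) are hypothetical; the real services
# communicate over HTTP and RabbitMQ.

class Ascs:
    def __init__(self, asbs_generate):
        self.cache = {}                  # (app_id, platform, fw) -> bundle bytes
        self.asbs_generate = asbs_generate

    def get_bundle(self, app_id, platform, fw_version):
        key = (app_id, platform, fw_version)
        if key in self.cache:
            return 200, self.cache[key]  # cache hit: serve the secured bundle
        # cache miss: ask ASBS to generate the bundle, answer 202 so the
        # client knows generation is in progress and it should retry shortly
        self.asbs_generate(key, self.cache)
        return 202, None

def fake_asbs(key, cache):
    # stand-in for the ASBS -> bundle-generator/cryptor -> ASCS pipeline
    cache[key] = b"signed-encrypted-bundle"

ascs = Ascs(fake_asbs)
print(ascs.get_bundle("com.example.app", "platformX", "6.1")[0])  # → 202
print(ascs.get_bundle("com.example.app", "platformX", "6.1")[0])  # → 200
```

The second request succeeds because the first one already triggered bundle generation, mirroring the near just-in-time behaviour of the reference cloud.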

STB / CPE SW

Container engine

The core components in any RDK software image for running AppContainers on a CPE are "Dobby" and "crun". Dobby is an OCI container manager (aka container engine), comparable to the well-known desktop container engines dockerd and podman, but focused on embedded devices: written in C++ and designed for minimal footprint and performance impact.
Crun is an open-source container runtime equivalent to "runc" but written in C; it has a smaller footprint and is also well supported by the industry (it is, for example, the default container runtime used by podman).
Dobby exposes an API and is used to start, stop and monitor containers running on an STB, using the crun runtime underneath to interface with the kernel and start the containers.
When requested to start a new container, crun processes the associated container configuration (the runtime config.json) and applies the associated container policies/settings.
The OCIContainer Thunder plugin allows interfacing with Dobby using a JSON-RPC interface, and Dobby can also be used to run other Thunder plugins in containers using the Dobby ProcessContainer backend. For more information on Dobby see the Dobby wiki; for crun see the crun code repository.
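As an illustration, starting a container through the OCIContainer Thunder plugin is a JSON-RPC call along these lines. The parameter names follow the publicly documented OCIContainer plugin interface, but exact names and values (container id, paths, socket) are illustrative and may differ between RDK releases:

```json
{
  "jsonrpc": "2.0",
  "id": 42,
  "method": "org.rdk.OCIContainer.1.startContainer",
  "params": {
    "containerId": "com.example.app",
    "bundlePath": "/media/apps/com.example.app/bundle",
    "westerosSocket": "/tmp/westeros-app"
  }
}
```

Internally the plugin forwards such a request to Dobby, which hands the bundle to crun for execution.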

Required device software and Video Accelerator with RDK6.1 image as reference setup

Before being able to run the AppContainer successfully on an RDK CPE, other pieces are needed as well.
Some pieces may be operator-specific, such as (obviously) the UI to discover, store and run the apps; not all operators will use RDKShell as app manager, and some will use an alternative component to LISA for installation of apps. Some may also have a system to distribute and store apps on the device without UI/user involvement.
Operator device software must meet the DAC contract for binary compatibility. That means the following Firebolt native abstraction interfaces need to be in place: Rialto for audio, video & DRM, and libGLESv2.so.2, libEGL.so.1, libwayland-egl.so.1 and libessos.so for graphics. Also a Firebolt JSON-RPC server (which is "ripple" in the RDK 6.1 reference environment) that supports the current and upcoming Firebolt API versions (currently version 1.0, with work ongoing towards 2.0; e.g. Lifecycle2 is required for native apps) must be in place and pass certification tests.

The RDK Video Accelerators running their RDK 6.1 software image are the reference environment for RDK. They use the components shown in the reference architecture diagram (VideoAccelerator UI, RDKShell with RialtoServerManager support, LISA, OCIContainer rdkservice, OMI, ripple, westeros, dobby, crun). The Video Accelerator Resident App is used as the UI and includes a basic UI for working with DAC apps. That UI allows you to browse and install DAC apps from the app catalog offered by ASMS in the DAC cloud instance set up for VAs. For demo videos and more information on how to use that UI, see video1 and video2.

Discovery and Installation of DAC app

The VA UI uses the LISA API: it interfaces with LISA to get the list of all installed apps at boot, and to install or uninstall a specific app.
For installation of a specific app, the UI requests the appropriate bundle_url for the particular app_id/app_version/HW-platform/compatible-FW-version combination from ASMS in the DAC cloud and passes this bundle_url to LISA along with the install request. The bundle_url is an HTTPS URL pointing towards ASCS in the DAC cloud, with a path towards the actual (signed/encrypted) bundle tarball. LISA performs the HTTPS request for that bundle tarball. In the regular case the tarball is available in the cache of ASCS (or a CDN in front) and is returned immediately with HTTP 200 OK. If the bundle is not yet available in the cache (because the bundle for that combination was not already pre-generated or requested by another STB of the same model), ASCS triggers ASBS in the DAC cloud to create the bundle for that combination on the fly (near just-in-time, in the order of 1-5 seconds) and populate it in ASCS for download by LISA. In such a "not in cache" case ASCS returns 202 instead of 200 to LISA, to indicate the near just-in-time generation and ask for a few seconds of patience. LISA has a built-in retry mechanism (tunable by config) for such 202 responses, so that it will also download the bundle in the case of near just-in-time generation.
When the tarball is successfully downloaded, LISA unpacks it into the appropriate directory locations it created for it, also creates a separate persistent storage location for that app, and adds the app_id/version to its local database.
For more information on LISA, see the LISA wiki: https://wiki.rdkcentral.com/display/ASP/LISA
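The client side of the install flow, including the retry-on-202 behaviour, can be sketched like this. The function name, retry count and delay parameter are assumptions for illustration; in LISA the retry behaviour is configurable and the real client performs HTTPS requests.

```python
# Illustrative sketch of a LISA-style "retry on HTTP 202" bundle download.
# download_bundle and its parameters are hypothetical names, not LISA's API.
import time

def download_bundle(fetch, bundle_url, max_retries=5, delay_s=0.0):
    """fetch(url) -> (status, body). Retries while ASCS answers 202."""
    for _ in range(max_retries + 1):
        status, body = fetch(bundle_url)
        if status == 200:
            return body            # bundle tarball ready in ASCS/CDN cache
        if status != 202:
            raise RuntimeError(f"unexpected status {status}")
        time.sleep(delay_s)        # bundle being generated near just-in-time
    raise TimeoutError("bundle not ready after retries")

# simulate: first two requests hit a cold cache, third one succeeds
responses = iter([(202, None), (202, None), (200, b"tarball")])
print(download_bundle(lambda url: next(responses), "https://ascs.example/bundle"))
# → b'tarball'
```

After a successful download, the real LISA would go on to unpack the tarball and register the app in its database, as described above.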

Running installed DAC App

When the user wants to run one of the installed apps, the VA UI requests RDKShell, as the app manager, to start this app of type dac.native, with as input the path to the unpacked bundle as stored by LISA.
RDKShell creates a westeros wayland server instance/display socket for the app, creates a RialtoServer instance for the app when needed (based on metadata), and, for a Firebolt app, interacts with the Firebolt server (ripple) to get a session_id for this app (a kind of security token the app needs when later connecting to the Firebolt server). It then delegates startContainer(bundle_path, westerosSocket) to the OCIContainer Thunder plugin, which in the case of a signed/encrypted container triggers the OMI component to verify the signature of and decrypt the config.jwt in the encrypted bundle, and to decrypt, verify and mount the rootfs of the AppContainer using dm-verity & dm-crypt so it is ready to be consumed. The startContainer request is then further delegated to Dobby, which uses crun to execute it.
Together they set up the container according to the associated runtime config.json, including bind mounts from the STB host rootfs into the AppContainer rootfs of the appropriate wayland and rialto server sockets, as well as bind mounts of the SoC-specific graphics library dependency tree (providing libGLESv2.so.2 and libEGL.so.1).
When the dac.native app is a Firebolt app, the specific Firebolt connection URL to use, with the session_id as security token, is passed through the chain of RDKShell, OCIContainer and dobby/crun, and becomes available to the app as an environment variable inside the AppContainer. The app can then read its value and use it when setting up the connection with the Firebolt server.
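From the app's point of view, picking up the Firebolt connection URL is a plain environment-variable read. The variable name `FIREBOLT_ENDPOINT` and the URL shape below are assumptions for illustration; check your platform's DAC contract for the actual name.

```python
# Minimal sketch of a dac.native app reading its Firebolt connection URL.
# FIREBOLT_ENDPOINT is a hypothetical variable name; inside a real container
# it would be injected by the RDKShell -> OCIContainer -> dobby/crun chain.
import os

os.environ.setdefault(
    "FIREBOLT_ENDPOINT",  # simulate the value the container runtime injects
    "ws://127.0.0.1:3473?appId=com.example.app&session=token123",
)

firebolt_url = os.environ["FIREBOLT_ENDPOINT"]
print(firebolt_url)  # the app opens its Firebolt JSON-RPC session against this URL
```

The session token embedded in the URL is what authenticates the app towards the Firebolt server when it connects.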

-----

For more information on Dobby, see the detailed documentation here: Dobby

As part of DAC, the RDKShell plugin has been extended to allow starting DAC apps using Dobby, creating a display and attaching it to the containerised app as necessary. RDKShell also integrates with the Packager plugin to provide a full-featured solution for downloading, installing and lifecycle management of DAC apps. For more documentation on the RDKShell and Packager integration see RDKShell and LISA/Packager. See the Getting Started section below for an example of using these components together.


Background and Terminology

What are Containers?

Used heavily in modern cloud software, containers are a standardised way to run software in a self-contained, isolated and secure environment. Containers contain everything an application needs to run, including code, libraries and other dependencies. This means that there is no need to install specific libraries and dependencies on the host for each application. Unlike more traditional virtual machines, containers are lightweight and fast to create and destroy and don't have the overhead of virtualising an entire operating system. By sharing the OS kernel between containers, running applications inside containers adds very little performance or footprint overhead.

The most popular containerisation solution currently in use is Docker, although there are a number of other solutions such as LXC, Singularity and Podman. LXC containers have been available within RDK for a number of years, using a container generation tool at build time to define and create the container configurations, but for use within a monolithic STB software image; they were not built device-agnostically and downloadable separately from the monolithic image, which is what DAC achieves.

Deployment evolution

Image source: https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/

Why use containers in RDK?

There are a number of advantages of using containers to run applications in RDK deployments, including:

...

As part of the DAC initiative, containers are used to reduce the difficulty of developing native applications that can run on many different RDK-V devices from different operators, by creating an isolated environment for each application. This means the app becomes platform agnostic and can run on devices the developer may not have physical access to.

Open Container Initiative (OCI)

From the Open Container Initiative (OCI) website (https://opencontainers.org/):

...

Formed in 2015 by Docker and other companies in the container industry and now part of the Linux Foundation, OCI define a number of specifications that allow developers to define containers. These specifications are followed by almost all major containerisation platforms.

OCI define both a runtime specification and an image specification. The Runtime Specification outlines how to run a “filesystem bundle” that is unpacked on disk. The OCI image is used for packaging containers in a platform-agnostic way that can be easily distributed. At a high-level, an OCI implementation would download an OCI Image then unpack that image into an OCI Runtime filesystem bundle. At this point the OCI Runtime Bundle would be run by an OCI Runtime.
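The image-to-runtime-bundle relationship can be made concrete with a small sketch: an OCI image layout carries an `index.json` pointing at manifests, and an implementation unpacks the referenced layers into a runtime bundle (a `rootfs/` plus `config.json`) for the runtime to execute. The layout below is a minimal hand-made example with a placeholder digest, not a real downloaded image.

```python
# Build and read a minimal OCI image layout index, illustrating the first
# step an OCI implementation takes before unpacking a runtime bundle.
import json
import os
import tempfile

layout = tempfile.mkdtemp()
index = {
    "schemaVersion": 2,
    "manifests": [{
        "mediaType": "application/vnd.oci.image.manifest.v1+json",
        "digest": "sha256:<manifest-digest>",  # placeholder, not a real digest
        "size": 0,
    }],
}
with open(os.path.join(layout, "index.json"), "w") as f:
    json.dump(index, f)

# An implementation reads index.json to locate the manifest, fetches the
# layers the manifest lists, unpacks them into rootfs/, and hands the
# resulting runtime bundle (rootfs/ + config.json) to a runtime such as crun.
with open(os.path.join(layout, "index.json")) as f:
    manifests = json.load(f)["manifests"]
print(manifests[0]["mediaType"])  # → application/vnd.oci.image.manifest.v1+json
```

Tools such as crun then only ever see the unpacked bundle, never the registry-side image format.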

Diagram: OCI (draw.io)

OCI Runtimes

An OCI runtime is a CLI tool that allows for spawning and running containers according to the OCI specification. There are two main OCI runtimes in production use:

Crun

  • Repo: https://github.com/containers/crun
  • Crun is an alternative implementation of an OCI runtime, written in C and optimised for performance and a low memory footprint. It is developed and supported by Red Hat, is currently used by Podman in Fedora, and is available in RHEL since 8.3.
  • This is the runtime supported by Dobby and will be used as the default runtime across RDK.

Runc

  • Repo: https://github.com/opencontainers/runc/
  • Runc is the reference implementation of an OCI runtime and is developed directly by the OCI group. This is the runtime used by Docker, Kubernetes and others. However, being written in Go, it is less suitable for embedded STB environments due to a relatively large footprint.
  • Not officially supported in RDK

Getting Started for App Developers

Refer to the documentation here: HOWTO to build your native App with DAC SDK and publish to RDK-M DAC cloud

...