
Introduction

Downloadable Application Containers (DAC) is an RDK initiative and cooperation effort between Comcast, Sky, Liberty Global, RDK-M and Consult Red. The work is coordinated by the RDK DAC Special Interest Group (RDKDACSIG).

The DAC end goal is to provide a solution that allows app developers to write and package native (C/C++) applications in a way that makes the application binary exchangeable, so it can run across all RDK boxes without (compile-time) modifications.

This group initially focused on RDK-V, and that is what this wiki page describes, but many of the principles and components are also suitable for RDK-B.

Solution Overview

The diagram above provides a high-level overview of the DAC solution.

It is divided into 4 functional areas, which we describe further in the sections below:

  • APP SDK - The software development toolkit that allows app developers to build and package their application as an OCI, DAC-compliant AppContainer. This AppContainer is then the app binary that can be shared/exchanged with RDK-M and various RDK operators via an OCI container registry, allowing those operators to distribute that same app binary across their target platforms.
  • OCI container registry - a standard Cloud Native component that allows storage and exchange of AppContainers. As described above, this is the exchange point for the binary AppContainer between the app developer and consuming parties/operators.
  • DAC Cloud
    • These are cloud-hosted software elements, microservices that
      • allow you to publish an app with appropriate metadata in an AppCatalog for a specific distribution group/environment
      • automatically convert OCI images to so-called OCI bundles that run on an STB model
      • and sign/encrypt binaries for secure delivery to the STB/CPE models it serves.
    • Each operator manages its own AppCatalog and is free to choose how it distributes and packages it towards its RDK STBs.
    • RDK-M operates a reference DAC cloud instance that serves an AppCatalog with DAC AppContainers for all the RDK-M Video Accelerators. It acts as a staging and test environment for app developers and the RDK community, allowing app developers to publish their DAC native apps and test/run them on RDK Video Accelerators.
    • The source code of the µservices in this reference DAC cloud instance is available in open source.
  • STB / CPE - The software components that run on the set-top box and coordinate to manage container installation and runtime lifecycles.


This video, made by Consult Red at TechSummit 2024, nicely explains this bigger picture: dac_overview_rdk_tech_summit_2024_1080p.mp4

Reference Architecture

Below is a more detailed diagram with the technical components inside the RDK6.1 software on the STB and the DAC reference cloud.

The 3 functional areas are further discussed in the sections below.

APP SDK

The DAC SDK was created to make it easier for app developers to build OCI, DAC-compliant containerized applications. The SDK is public, fully open source and uses the standard Yocto 3.1 (Dunfell) Poky cross-compilation/build environment. It does NOT require you as an app developer to have any SoC- or OEM-specific software/SDK to compile and package your AppContainer; it is SoC/OEM agnostic. The SDK allows developers to cross-compile their application sources, automatically adds the appropriate dependent libraries to the AppContainer, and produces an OCI, Dobby, DAC-compliant AppContainer image. This binary OCI AppContainer image can then be uploaded to an OCI registry for sharing with possible consumers/distribution platforms. As an app developer you can test and run your AppContainer on all RDK6.1 RDK-M Video Accelerators (when publishing the app to the RDK-M VA DAC cloud instance).

The DAC APP SDK is available at https://github.com/rdkcentral/meta-dac-sdk and comes with many example AppContainers.

For information on how to make a DAC app and use the DAC SDK, we refer to HOWTO to build your native App with DAC SDK and publish to RDK-M DAC cloud.

OCI container registry

The OCI container registry is a standard Cloud Native component that allows storage and exchange of AppContainers in the standardized OCI image format; see the "Image Format Specification" by the Open Container Initiative.

To allow distribution of their application (so it can be run on RDK STBs), the app developer must upload their AppContainer to an OCI container registry.

The OCI registry is the binary app exchange point between the app developer and the consuming parties/operators for distribution of the app on their target platforms.
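Because the registry follows the standard OCI Distribution Specification, it can be queried with plain HTTP. As a minimal sketch (the registry host and repository name below are hypothetical placeholders), listing the published tags of an AppContainer image could look like this:

```python
import json
import urllib.request

REGISTRY = "https://registry.example.com"  # hypothetical registry host
REPO = "myorg/myapp"                       # hypothetical repository name

def tags_url(registry: str, repo: str) -> str:
    # Tag-listing endpoint defined by the OCI Distribution Specification
    return f"{registry}/v2/{repo}/tags/list"

def list_tags(registry: str, repo: str) -> list:
    # Response body is {"name": "<repo>", "tags": ["1.0", ...]}
    with urllib.request.urlopen(tags_url(registry, repo), timeout=10) as resp:
        return json.load(resp).get("tags", [])

# Not executed here (needs network access to a real registry):
# print(list_tags(REGISTRY, REPO))
print(tags_url(REGISTRY, REPO))  # https://registry.example.com/v2/myorg/myapp/tags/list
```

Since the `/v2/<name>/tags/list` endpoint is part of the specification, the same call works against any conformant registry.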

DAC Cloud reference

To run the OCI-compliant AppContainer on a set of devices in production, an associated secure distribution solution/system must be in place.
This distribution solution is the responsibility of the operator, not of the app developer.
There is typically also a distribution system/instance per operator/environment because:

  • the operator needs to be in control of which apps are made available in which environment; the available apps may differ per country, and between test and prod environments
  • operators will likely have different keysets per distribution system/instance for security reasons.

RDK provides a reference distribution solution, see the "DAC cloud reference" as depicted in the reference architecture diagram above. It consists of several µservices. If you hover over a specific µservice on the diagram you can see the link to its associated open source code.

RDK and RDK-M do not mandate the use of a particular distribution solution towards RDK operators.
Operators are free to choose, make or buy their secure distribution solution. Some operators choose to build further on a system they already had; others use their own instance of the DAC cloud reference.
As long as the operator's distribution system can take in the OCI, DAC-compliant AppContainer, securely distribute it and convert it to its OCI bundle equivalent, ready for use on the operator-specific CPE devices that support the DAC contract for binary compatibility, it should be fine. In the future we may want to validate this as part of the Firebolt certification of those devices.

RDK-M operates an instance of the reference DAC cloud that serves an AppCatalog with DAC AppContainers for all the RDK-M Video Accelerators.
It is an important staging, test and reference environment for app developers and the RDK community, allowing app developers to publish their DAC native apps and test/run them on all the RDK Video Accelerators with an RDK6.1 software image (from 3 different SoC vendors).
Without such an environment, app developers could not test and run their application on real devices, which is obviously not acceptable.
If RDK-M organizes certification testing of a native application, it will be tested/certified against this environment.
Operators will want to see the application successfully running and passing tests on this DAC Video Accelerator environment before starting testing of the same app on their own environment and boxes.

To be able to install and run the AppContainer on a set of target devices, the following things are required. The reference DAC cloud solution has been designed to fulfill these requirements.

  • The application must be made discoverable in some form of AppCatalog available to those target devices. A system is needed that allows you to register and add applications with appropriate metadata to that AppCatalog, but also allows the consuming devices to browse that AppCatalog and obtain all the information needed to securely install a particular application and, once installed, run it. For that purpose the Appstore Metadata Service (ASMS) µservice has been created. It provides a RESTful API (Swagger/OpenAPI). See the following wiki page for more information on how to use the ASMS API.
  • The AppContainer in the OCI image format needs to be transformed to the so-called "bundle" format to be able to load/run it on a real host. Such a bundle consists of the final filesystem of the container (the result of possibly more than one filesystem layer specified in the OCI image format spec) and a runtime "config.json", so it can be run on a host with an OCI runtime like crun. One can also say: bundle = container rootfs + run config.json.
    See the OCI runtime specification for the definition of the bundle term and the runtime "config.json" parameters.
    The run config.json is the container configuration. It is the responsibility of the distributor/operator, not the application provider, to configure this.
    The operator will for example configure/constrain the Linux capabilities of the container for security reasons. The run config also contains a few platform-specific bind mounts (per STB model, can be firmware specific) from the host rootfs to the AppContainer rootfs (like the bind mount of the SoC-specific graphics libraries that implement libGLESv2 & libEGL), as well as bind mounts of a few Unix domain sockets (the sockets of the Wayland server & RialtoServer instance) required to interface with the defined abstraction layers for graphics and audio/video/DRM. The run config.json also takes some parameters from the OCI image config that are needed at runtime (such as the launch point/path of the application within the container).

In the architecture diagram you can find the bundle generator µservice, which, upon a RabbitMQ trigger, automatically generates the appropriate OCI bundle from an OCI image for a specific HW model (VA) and stores it for downstream usage. It uses the appropriate config template for the associated HW model (see example HW-model-specific templates here) to generate the run config.json. The BundleGen code that the µservice executes during this bundle generation of an app is located here.
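To make the structure concrete, here is a minimal sketch of how such a run config.json fragment with platform bind mounts could be assembled. The mount paths, socket names and entrypoint are hypothetical placeholders; the real values come from the operator's HW-model-specific template:

```python
import json

# Hypothetical host paths: real values come from the HW-model-specific template.
GFX_LIBS = "/usr/lib/gfx"            # SoC graphics libs (libGLESv2, libEGL)
WAYLAND_SOCKET = "/run/wayland-0"    # westeros Wayland display socket
RIALTO_SOCKET = "/run/rialto-0"      # RialtoServer socket for A/V & DRM

def platform_mounts():
    # Read-only bind mounts from the host rootfs into the AppContainer rootfs
    return [
        {"destination": dst, "source": src, "type": "bind",
         "options": ["bind", "ro"]}
        for src, dst in [
            (GFX_LIBS, "/usr/lib/gfx"),
            (WAYLAND_SOCKET, "/run/wayland-0"),
            (RIALTO_SOCKET, "/run/rialto-0"),
        ]
    ]

run_config = {
    "ociVersion": "1.0.2",
    # The launch path is taken from the OCI image config (app entrypoint)
    "process": {"args": ["/usr/bin/myapp"], "cwd": "/"},
    "root": {"path": "rootfs", "readonly": True},
    "mounts": platform_mounts(),
    # The operator would also constrain Linux capabilities etc. here.
}

config_json = json.dumps(run_config, indent=2)
```

In the real system this assembly is performed by the bundle generator from the per-HW-model template, not hand-written.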

  • We must be able to secure bundles: both the integrity and the content confidentiality of the bundle (config.json + rootfs) must be ensured during distribution and when installed on the target device. That means ensuring the bundle config.json and container filesystem cannot be changed by hackers after generation by the operator/distributor, during distribution, nor when stored/installed on the device.

In the architecture diagram you can find the bundle cryptor µservice, which, upon generation of a new bundle by the bundle generation service, signs and encrypts the bundle. For more information we refer to DAC session at Technology Summit 2023#pane-uid-b13cbeba-87dc-4f0a-9932-5b7e00770663-1 and the documentation on the access-restricted page DAC Security.

  • The system should automatically generate and secure bundles for the various platforms; doing this manually does not scale.

That is what the reference DAC cloud system does. Once the bundle for a particular app_id/platform/compatible-version combination is created, it is available in the cache of the ASCS (Appstore Caching Service), which can serve as the origin for further caching on a CDN. If you request a bundle for a particular app_id/platform/compatible-version from the ASCS and it is not available in its cache, the ASCS automatically triggers real-time creation of the associated bundle in a flow managed by the ASBS (Appstore Bundle Service). The ASBS fetches the appropriate metadata from ASMS and instructs the bundle-generator-service and bundle-cryptor-service, with the right input parameters, via RabbitMQ, to generate and secure the bundle and cache it on the ASCS.
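A device-side client of this flow can be sketched roughly as follows. The bundle URL is hypothetical, and the 200/202 handling mirrors the behaviour described above (on real devices LISA implements a comparable retry loop, tunable by config):

```python
import time
import urllib.request

def next_action(status: int) -> str:
    # 200: tarball served from the ASCS cache
    # 202: ASBS is generating the bundle just-in-time, retry after a short delay
    if status == 200:
        return "done"
    if status == 202:
        return "retry"
    return "error"

def fetch_bundle(url: str, retries: int = 5, delay: float = 2.0) -> bytes:
    # Hypothetical download loop for a bundle tarball served by the ASCS
    for _ in range(retries):
        with urllib.request.urlopen(url, timeout=30) as resp:
            action = next_action(resp.status)
            if action == "done":
                return resp.read()
            if action == "retry":
                time.sleep(delay)  # give bundlegen/cryptor a few seconds
                continue
            raise RuntimeError(f"unexpected HTTP status {resp.status}")
    raise TimeoutError("bundle not ready after retries")

# Example (hypothetical URL; not called here because it needs the DAC cloud):
# fetch_bundle("https://ascs.example.com/apps/com.example.app/rpi4/1.0/bundle.tar.gz")
```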

STB / CPE SW

The core components in any RDK software image for running the AppContainer on CPE are "Dobby" and "crun". Dobby is an OCI container manager, aka container engine, comparable to the well-known desktop container engines dockerd and podman, but focused on embedded devices: it is written in C++ and designed to have minimal footprint and performance impact.
Crun is an open source container runtime alternative to "runc", but written in C; it has a smaller footprint and is also well supported by the industry (it is, for example, the default container runtime used by Podman).
Dobby exposes an API and is used to start, stop and monitor containers running on an STB, using the crun runtime underneath to interface with the kernel and start the containers.
When requested to start a new container, crun processes the associated container configuration (run config.json) and applies the associated container policies/settings.
The OCIContainer Thunder plugin allows interfacing with Dobby via a JSON-RPC interface, and Dobby can also be used to run other Thunder plugins in containers using the Dobby ProcessContainer backend. For more information on Dobby, see the following wiki: Dobby; for crun, see the crun code repo.
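As an illustration, starting a container through the OCIContainer Thunder plugin's JSON-RPC interface could be sketched as below. The endpoint, method and parameter names follow the usual Thunder conventions, but they are assumptions here; verify them against the OCIContainer plugin documentation of your build:

```python
import json
import urllib.request

# Typical Thunder JSON-RPC endpoint on-device; port may differ per build.
THUNDER_URL = "http://127.0.0.1:9998/jsonrpc"

def rpc_payload(method: str, params: dict, call_id: int = 1) -> dict:
    # Standard JSON-RPC 2.0 envelope used by Thunder plugins
    return {"jsonrpc": "2.0", "id": call_id, "method": method, "params": params}

def start_container(container_id: str, bundle_path: str) -> dict:
    # Method/parameter names are assumptions based on the OCIContainer plugin;
    # check your build's plugin documentation for the exact interface.
    payload = rpc_payload("org.rdk.OCIContainer.1.startContainer",
                          {"containerId": container_id,
                           "bundlePath": bundle_path})
    req = urllib.request.Request(THUNDER_URL,
                                 data=json.dumps(payload).encode(),
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)

# start_container("myapp", "/media/apps/myapp/bundle")  # needs a real device
```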

Before the AppContainer can run successfully on RDK CPE, other pieces are needed as well.
Some pieces may be operator specific, such as the UI to discover, store and run the apps; not all operators will use RDKShell as app manager, and some will use an alternative component to LISA for the installation of apps. Some may also have a system to distribute and store the apps onto the device without a UI/user involved.
Operator device software must meet the DAC contract for binary compatibility. That means the following Firebolt native abstraction interfaces need to be in place: Rialto for audio, video & DRM, and libGLESv2.so.2, libEGL.so.1, libwayland-egl.so.1 and libessos.so for graphics. Also, a Firebolt JSON-RPC server (which is "ripple" in the RDK6.1 reference environment) that supports the current and upcoming Firebolt API version must be in place and pass the certification tests (currently version 1.0, but we are working on and aiming at 2.0).

The RDK Video Accelerators running their RDK6.1 software image are the reference environment for RDK. They use the components shown in the reference architecture diagram (VideoAccelerator UI, RDKShell with RialtoServermanager support, LISA, OCIContainer rdkservice, OMI, ripple, westeros, dobby, crun). The Video Accelerator Resident App includes a basic UI for working with DAC apps. That UI allows you to browse and install DAC apps from the AppCatalog offered by ASMS in the DAC cloud instance set up for the VAs. For a demo video and more information on how to use that UI, see video1 and video2.

That UI code uses the LISA API to get the list of all installed apps at boot and, obviously, to install or uninstall a specific app.
To install a specific app, the UI requests the appropriate bundle_url for the particular app_id/app_version/HWplatform/compatibleFWversion combination from the DAC cloud and passes this bundle_url to LISA with a request to install. LISA downloads that bundle_url (HTTPS, pointing to the bundle tarball on the ASCS in the DAC cloud). The ASCS, or a CDN in front of it, returns that tarball with HTTP 200 OK when it is available in cache. If the bundle is not yet available in the cache (when the bundle for that combination was not already pre-generated or requested by another STB), the ASCS triggers the ASBS in the DAC cloud to automatically create the bundle for that combination on the fly (near just-in-time, e.g. 1-5 seconds) and populate it in the ASCS for download by LISA. In such a "not in cache" case, a 202 instead of a 200 is returned to LISA to indicate the delay and ask for a few seconds of patience. LISA has a built-in retry mechanism, tunable by config, for such 202 responses.
When the tarball is successfully downloaded, LISA unpacks it into the appropriate directory locations it created for it, also creates a separate persistent storage location for that app, and adds the app_id/version to its local database.
For more information on LISA we refer to https://wiki.rdkcentral.com/display/ASP/LISA
When the user wants to run one of the installed apps, the UI interfaces with RDKShell as the app manager, requesting it to start this app of type dac.native, with the path to the unpacked bundle created by LISA as input.
RDKShell creates a westeros Wayland server instance/display socket for the app, creates (or not, based on metadata) a RialtoServer instance for the app, and further delegates startContainer(bundle_path, westerosSocket) to the OCIContainer Thunder plugin. In the case of a signed/encrypted container, the plugin triggers the OMI component to verify the signature and decrypt the config.jwt in the encrypted bundle, and to decrypt, verify and mount the rootfs of the AppContainer using dm-verity & dm-crypt so it is ready to be consumed. Then the startContainer request is further delegated to Dobby, which uses crun to execute it.
Dobby and crun set up the container according to the associated run config.json. The associated bind mounts from the STB host rootfs to the AppContainer rootfs of the appropriate Wayland and Rialto server sockets will occur, as well as bind mounts of the SoC-specific graphics library tree that provides libGLESv2.so.2 and libEGL.so.
When the dac.native app is a Firebolt app, the specific Firebolt connection URL to use is passed down (through the chain of RDKShell, OCIContainer, dobby/crun) and becomes available to the app as an environment variable inside the AppContainer.
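Inside the container, picking up that connection URL can be sketched as follows. The environment variable name used here is an assumption for illustration only; check the DAC/Firebolt documentation of your platform for the actual name:

```python
import os

# ASSUMPTION: the variable name below is illustrative; the real name is
# defined by the platform's DAC/Firebolt integration.
FIREBOLT_ENV_VAR = "FIREBOLT_ENDPOINT"

def firebolt_url(default: str = "") -> str:
    # The host side (RDKShell -> OCIContainer -> Dobby/crun) injects the
    # connection URL into the container's environment; the app just reads it.
    return os.environ.get(FIREBOLT_ENV_VAR, default)

print(firebolt_url() or "no Firebolt endpoint configured")
```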


-----

As part of DAC, the RDKShell plugin has been extended to allow starting DAC apps using Dobby, creating a display and attaching it to the containerised app as necessary. RDKShell also integrates with the Packager plugin to provide a full-featured solution for downloading, installing and lifecycle management of DAC apps. For more documentation on the RDKShell and Packager integration, see RDKShell and LISA/Packager. See the Getting Started section below for an example of using these components together.


Background and Terminology

What are Containers?

Used heavily in modern cloud software, containers are a standardised way to run software in a self-contained, isolated and secure environment. Containers contain everything an application needs to run, including code, libraries and other dependencies. This means there is no need to install specific libraries and dependencies on the host for each application. Unlike more traditional virtual machines, containers are lightweight and fast to create and destroy, and don't have the overhead of virtualising an entire operating system. By sharing the OS kernel between containers, running applications inside containers adds very little performance or footprint overhead.

The most popular containerisation solution currently in use is Docker, although there are a number of other solutions such as LXC, Singularity and Podman. LXC containers have been available within RDK for a number of years, using a container generation tool at build time to define and create the container configurations.

Deployment evolution

Image source: https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/

Why use containers in RDK?

There are a number of advantages of using containers to run applications in RDK deployments, including:

  • Allow developers to easily write applications to run on any RDK devices
  • Consistent behaviour across all RDK operators and devices
  • Write once, deploy on many devices
  • Increase security without impacting performance

As part of the DAC initiative, containers are used to reduce the difficulty of developing native applications that can run on many different RDK-V devices from different operators, by creating an isolated environment for each application. This means the app becomes platform agnostic and can run on devices the developer may not have physical access to.

Open Container Initiative (OCI)

From the Open Container Initiative (OCI) website (https://opencontainers.org/):

The Open Container Initiative is an open governance structure for the express purpose of creating open industry standards around container formats and runtimes.

Formed in 2015 by Docker and other companies in the container industry, and now part of the Linux Foundation, the OCI defines a number of specifications that allow developers to define containers. These specifications are followed by almost all major containerisation platforms.

The OCI defines both a runtime specification and an image specification. The Runtime Specification outlines how to run a “filesystem bundle” that is unpacked on disk. The OCI image is used for packaging containers in a platform-agnostic way so they can be easily distributed. At a high level, an OCI implementation downloads an OCI Image and then unpacks that image into an OCI Runtime filesystem bundle. At that point the OCI Runtime Bundle can be run by an OCI Runtime.
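The "unpack" step can be made concrete with a minimal sketch of what an OCI runtime filesystem bundle looks like on disk: a rootfs/ directory next to a config.json. The entrypoint below is hypothetical, and a real bundle is produced by tooling (such as BundleGen in DAC) from the image's filesystem layers rather than written by hand:

```python
import json
import os
import tempfile

def write_bundle_skeleton(base: str) -> str:
    """Create the minimal on-disk layout of an OCI runtime bundle:
    a rootfs/ directory plus a config.json next to it."""
    bundle = os.path.join(base, "bundle")
    os.makedirs(os.path.join(bundle, "rootfs"), exist_ok=True)
    config = {
        "ociVersion": "1.0.2",
        "process": {"args": ["/usr/bin/myapp"]},  # hypothetical entrypoint
        "root": {"path": "rootfs"},
    }
    with open(os.path.join(bundle, "config.json"), "w") as f:
        json.dump(config, f, indent=2)
    return bundle

bundle = write_bundle_skeleton(tempfile.mkdtemp())
print(sorted(os.listdir(bundle)))  # ['config.json', 'rootfs']
```

An OCI runtime such as crun is then pointed at this directory to create and start the container.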

OCI Runtimes

An OCI runtime is a CLI tool that allows for spawning and running containers according to the OCI specification. There are two main OCI runtimes in production use:

Crun

  • Repo: https://github.com/containers/crun
  • Crun is an alternative implementation of an OCI runtime, written in C and optimised for performance and a low memory footprint. It is developed and supported by Red Hat, is currently in use by Podman in Fedora, and will be available in RHEL 8.3.
  • This is the runtime supported by Dobby and will be used as the default runtime across RDK.

Runc

  • Repo: https://github.com/opencontainers/runc/
  • Runc is the reference implementation of an OCI runtime and is developed directly by the OCI group. This is the runtime used by Docker, Kubernetes and others. However, being written in Go, it is less suitable for embedded STB environments due to a relatively large footprint.
  • Not officially supported in RDK


Getting Started for App Developers

Refer to the documentation here: HOWTO to build your native App with DAC SDK and publish to RDK-M DAC cloud
