
For more details on D-Bus and R-Bus, refer to CCSP Message Bus.


CCSP Message Bus


This section is not intended to be a tutorial on D-Bus. However, there are a few points that need to be well understood.

In D-Bus, the bus is a central concept. It is the channel through which applications can make method calls, send signals, and listen for signals. There are two predefined buses: the session bus and the system bus.

  • The session bus is meant for communication between applications that are connected to the same session
  • The system bus is meant for communication when applications (or services) running with disparate sessions wish to communicate with each other. Most common use for this bus is sending system wide notifications when system wide events occur.

Normally only one system bus will exist, but there can be several session buses.

A bus exists in the system in the form of a bus daemon, a process that specializes in passing messages from one process to another. Sending a message using D-Bus will always involve the following steps (under normal conditions):

  • Creation and sending of the message to the bus daemon. This will cause at minimum two context switches.
  • Processing of the message by the bus daemon and forwarding it to the target process. This will again cause at minimum two context switches.
  • The target component will receive the message. Depending on the message type, it will either need to acknowledge it, respond with a reply or ignore it. Acknowledgement or replies will cause further context switches.

In addition to the context switches described above, the data from the replies gets copied when it enters the D-Bus library, copied again as it enters the socket to the bus daemon, which then sends it into another socket to the client, which then makes a further copy. These copies can be really expensive.

Taken together, the points above indicate that D-Bus is not efficient at transferring large amounts of data between processes.

D-Bus, however, provides a useful feature that allows passing UNIX file descriptors over the bus. The idea is to use D-Bus for flexible IPC method calls between a client and a server, and a dedicated pipe for passing large results from the server back to the client. This allows very fast transmission rates, while keeping the comfort of using D-Bus for inter-process method calls. The file descriptor may even point to a shared memory object allocated via shm_open(). The Data Plane section deals with this in greater detail.
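As a rough sketch of this pattern (not CCSP-specific code), the following server-side fragment allocates a shared memory object with shm_open(), fills it, and attaches its file descriptor to a D-Bus reply using libdbus. The object name and the (fd, size) argument layout are assumptions made purely for illustration; only the POSIX shared-memory calls and the libdbus DBUS_TYPE_UNIX_FD argument type are real APIs.

/* Sketch: returning a large result via a shared-memory fd (server side).
 * Assumes libdbus with DBUS_TYPE_UNIX_FD support and a POSIX platform.
 * "reply" is a DBusMessage being built for a method call; SHM_NAME is a
 * hypothetical object name. */
#include <dbus/dbus.h>
#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define SHM_NAME "/ccsp_result_example"   /* hypothetical object name */

static int attach_large_result(DBusMessage *reply,
                               const char *data, size_t len)
{
    /* Create a shared memory object and size it. */
    int fd = shm_open(SHM_NAME, O_CREAT | O_RDWR, 0600);
    if (fd < 0)
        return -1;
    shm_unlink(SHM_NAME);            /* keep the object alive only via the fd */
    if (ftruncate(fd, (off_t)len) < 0) {
        close(fd);
        return -1;
    }

    /* Copy the payload once into shared memory. */
    void *map = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (map == MAP_FAILED) {
        close(fd);
        return -1;
    }
    memcpy(map, data, len);
    munmap(map, len);

    /* Attach the descriptor to the reply; libdbus duplicates the fd when it
     * is appended, so the local copy can be closed afterwards. */
    dbus_uint64_t size = len;
    dbus_message_append_args(reply,
                             DBUS_TYPE_UNIX_FD, &fd,
                             DBUS_TYPE_UINT64, &size,
                             DBUS_TYPE_INVALID);
    close(fd);
    return 0;
}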

D-Bus Bindings

D-Bus Bindings must be auto-generated using a D-Bus binding tool such as dbus-binding-tool from dbus-glib, using the introspection XML file that defines the component’s interface and supported signals. The Adapter bindings should only be thin glue for bridging Component interfaces. In other words, there should not be any component functionality directly implemented in the bindings, other than just calling into the component’s interface.
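For reference, a minimal introspection XML fragment for a hypothetical component interface is shown below, with an example dbus-binding-tool invocation in the comment. The node, interface, method, and signal names are illustrative and are not part of any CCSP definition.

<!-- Hypothetical introspection data for a CCSP-style component.
     Glue code could be generated with, for example:
       dbus-binding-tool --mode=glib-server --prefix=example \
           com.example.Component.xml > component_glue.h        -->
<node name="/com/example/Component">
  <interface name="com.example.Component">
    <method name="GetParameterValues">
      <arg name="parameterNames" type="as" direction="in"/>
      <arg name="parameterValues" type="a(ss)" direction="out"/>
    </method>
    <signal name="ParameterValueChanged">
      <arg name="parameterName" type="s"/>
      <arg name="newValue" type="s"/>
    </signal>
  </interface>
</node>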

CCSP Message Bus Adapter

The CCSP Message Bus Adapter is a core framework component that enables inter-processor communication. The other processor may also run a CCSP Message Bus Adapter to facilitate this communication.

The internal implementation of the CCSP Message Bus Adapter may use TCP/IP sockets for communicating with the peer message bus adapter running on the other processor.

CCSP Component Registrar (CR)

CCSP Component Registrar is a centralized database of all registered CCSP components/services. It supports dynamic learning of registered components and supported capabilities - message types/namespaces. In addition it signals events to registered subscribers when components get unregistered, either gracefully or due to a fault condition.

It can also be used to signal device profile changes, where a profile is a pre-determined minimal set of active components needed to behave as an embedded system. The device profile, and the components that make it up, are ingested by the Component Registrar as a configuration file during initial boot-up. The component names must match the names in the configuration file.

This enables an application to dynamically reflect the current capabilities of the device via a notification from the Component Registrar if the system profile changes.

At boot time the CCSP components register and advertise their supported capabilities to the Component Registrar. They may also choose to subscribe to device profile change events. The following table is a simple illustration of what the Component Registrar contains.

Component Name | Version | D-Bus Path | Namespace | Successfully Registered
com.cisco.spvtg.ccsp.PAM | 1 | "/com/cisco/spvtg/ccsp/PAM" | Device.DeviceInfo.Manufacturer, Device.DeviceInfo.ManufacturerOUI, Device.DeviceInfo.SerialNumber, Device.DeviceInfo.SoftwareVersion | True
com.cisco.spvtg.ccsp.SSD | 1 | "/com/cisco/spvtg/ccsp/SSD" | Device.SoftwareModules.DeploymentUnitNumberOfEntries, Device.SoftwareModules.DeploymentUnit.{i}.UUID, Device.SoftwareModules.DeploymentUnit.{i}.Name, Device.SoftwareModules.DeploymentUnit.{i}.UAlias | True
com.cisco.spvtg.ccsp.PSM | 1 | "/com/cisco/spvtg/ccsp/PSM" | com.cisco.spvtg.ccsp.command.factoryReset | True
com.cisco.spvtg.ccsp.TDM | 1 | "/com/cisco/spvtg/ccsp/TDM" | "Device.IP.Diagnostics." | True

The Registry is used by applications to perform Service Discovery based on capabilities. The Protocol Agents in the CCSP control plane, for instance, query the CR based on the capabilities and data model namespaces supported, and route messages to those components.
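The following is a minimal, self-contained sketch of this lookup-then-route flow. The in-memory table and helper functions are hypothetical stand-ins for the Component Registrar query and the subsequent D-Bus method call; they are not the actual CCSP base API.

/* Sketch: namespace-based service discovery against a registrar-like table.
 * The table mirrors the illustration above; all helpers are hypothetical. */
#include <stdio.h>
#include <string.h>

typedef struct {
    const char *namespace_prefix;   /* owner of this parameter namespace */
    const char *component_name;
    const char *dbus_path;
} cr_entry_t;

static const cr_entry_t cr_table[] = {
    { "Device.DeviceInfo.",      "com.cisco.spvtg.ccsp.PAM", "/com/cisco/spvtg/ccsp/PAM" },
    { "Device.SoftwareModules.", "com.cisco.spvtg.ccsp.SSD", "/com/cisco/spvtg/ccsp/SSD" },
    { "Device.IP.Diagnostics.",  "com.cisco.spvtg.ccsp.TDM", "/com/cisco/spvtg/ccsp/TDM" },
};

static const cr_entry_t *cr_lookup(const char *parameter)
{
    size_t i;
    for (i = 0; i < sizeof(cr_table) / sizeof(cr_table[0]); i++) {
        if (strncmp(parameter, cr_table[i].namespace_prefix,
                    strlen(cr_table[i].namespace_prefix)) == 0)
            return &cr_table[i];
    }
    return NULL;                    /* no component owns this namespace */
}

int main(void)
{
    const cr_entry_t *owner = cr_lookup("Device.DeviceInfo.SerialNumber");
    if (owner == NULL)
        return 1;
    /* A real PA would now invoke the base-interface method on owner->dbus_path. */
    printf("route to %s (%s)\n", owner->component_name, owner->dbus_path);
    return 0;
}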

The figure below illustrates the features of the Component Registry and how it is used by other internal components and client applications.

(draw.io diagram: Component Registry)

Protocol Agents

Protocol Agents are components that interface directly to the cloud. These components facilitate remote administration/management of the device. The Protocol Agents provide the necessary abstraction to the internal CCSP architecture and components for interacting with the cloud; the internal CCSP components are not required to be aware of any protocol-specific details on the cloud interfaces. For example, the TR-069 ACS uses HTTP/SOAP to communicate with the device, and the TR-069 Protocol Agent hides all the protocol-specific details and communicates internally using internal namespaces and APIs over the CCSP message bus.

As another example, an SNMP Protocol Agent may also be instantiated to support device management via SNMP. Again, this Protocol Agent hides all the protocol-specific details and communicates internally using the internal namespaces and APIs over the CCSP message bus.

There may be multiple instances of the same Protocol Agent in a CCSP subsystem, one for each WAN facing interface. For example if a device has multiple WAN facing IP addresses, a TR-069 Protocol Agent may be instantiated on each WAN facing IP address (if needed).

Protocol Agents perform the following functions:

  • Authenticate and establish secure session with their corresponding cloud adaptors during initialization.
  • Perform low level protocol handling for downstream and upstream traffic.
  • Route cloud messages to the internal functional components registered for consumption/processing of those namespaces, by looking up which component handles a particular namespace via the Component Registrar.
  • “Action” Normalization
    • Internal CCSP components use normalized action / RPC methods to process requests. The normalized methods are defined in the base interface supported by all CCSP components.
  • Protocol Agents may define an XML based Mapping Schema that defines the mapping from external constructs to internal constructs. These constructs may include:
    • External Objects and parameters to Internal Objects and parameters
    • Format Conversions
    • Internal Errors to External Errors
  • Signal errors and route responses to the corresponding cloud adapters.
  • Perform all transactions as atomic operations.
    • For example, if a transaction involves a “SetParameterList” action of 10 key-value pairs, then the Protocol Agent applies the changes to all of the specified key-value pairs atomically. That is, either all of the value changes are applied together, or none of the changes are applied at all (a sketch of this staged-commit pattern follows this list).
    • In cases where a single transaction involves multiple Components, the PA aggregates the responses from all the components before sending the results back to the cloud.
  • Manage the order of operations within a transaction and across transactions.
    • The PA serializes all transactions from its cloud interface and only allows one transaction at a time.
  • Generate and maintain Context for asynchronous notifications and transactions. This is explained in more detail in the next section.
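The staged-commit behaviour referenced in the transaction bullet above can be sketched as follows. component_stage(), component_commit(), and component_rollback() are hypothetical stand-ins for the base-interface calls the PA would make over the message bus; the real CCSP method names may differ.

/* Sketch: applying a SetParameterList atomically across several components. */
#include <stddef.h>

typedef struct { const char *name; const char *value; } kv_t;
typedef struct { const char *name; int staged; } component_t;

static int component_stage(component_t *c, const kv_t *kv, size_t n)
{
    (void)kv; (void)n;
    c->staged = 1;            /* validate and hold the new values, do not apply */
    return 0;                 /* non-zero would mean validation failed */
}
static void component_commit(component_t *c)   { c->staged = 0; /* apply */ }
static void component_rollback(component_t *c) { c->staged = 0; /* discard */ }

/* Either every component commits, or every staged component is rolled back. */
int pa_set_parameter_list(component_t *targets, size_t ntargets,
                          const kv_t *kv, size_t nkv)
{
    size_t i;
    for (i = 0; i < ntargets; i++) {
        if (component_stage(&targets[i], kv, nkv) != 0) {
            while (i-- > 0)
                component_rollback(&targets[i]);
            return -1;        /* nothing was applied */
        }
    }
    for (i = 0; i < ntargets; i++)
        component_commit(&targets[i]);
    return 0;                 /* all values applied together */
}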

Asynchronous Notifications (Eventing)

The Protocol Agent (PA) maintains a notification table of external notification requests on the data model parameters. On receiving requests for notifications/eventing topics from the cloud, the Protocol Agent:

  • Maps the cloud namespace to internal name space by looking up its schema mapper
  • Adds the parameters to the notification table including their internal name space.
  • Queries the Component Registrar (CR) using the internal namespace, to get the D-Bus path of the Functional Components that the request is intended for.
  • Invokes the D-Bus API SetParameterAttributes() from the base component interface on the component to enable notifications on the parameters.

The Functional Components (FC) maintain a notification counter for each parameter; the counter is set to 0 during initialization.

  • When notifications are enabled on the parameter via setParameterAttributes(), the corresponding counter is incremented. Conversely, when notifications are turned off, the counter is decremented. This is done because D-Bus does not provide a mechanism to inform the component to stop generating signals when there are no registered subscribers for the signal. The component has no knowledge of signal recipients; subscribers register interest in the signal with the D-Bus daemon, and the daemon routes the signal to all subscribers.

The Functional Component defines a common signal to notify changes on all the data model parameters it supports. It generates that signal on D-Bus if and only if the notification counter for any parameter it supports is greater than 0. The signal contains the parameter name and its new value (old value may also be included, if needed).
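A minimal sketch of this counter-gated signal is shown below. The libdbus calls (dbus_message_new_signal, dbus_message_append_args, dbus_connection_send) are real APIs, but the parameter table, object path, interface, and signal name are hypothetical.

/* Sketch: per-parameter notification counter gating a common change signal. */
#include <dbus/dbus.h>

typedef struct {
    const char *name;         /* e.g. "Device.DeviceInfo.SoftwareVersion" */
    int         notify_count; /* changed by SetParameterAttributes()      */
} fc_param_t;

static void fc_value_changed(DBusConnection *conn, fc_param_t *p,
                             const char *new_value)
{
    /* Emit the common change signal only while someone has notifications on. */
    if (p->notify_count <= 0)
        return;

    DBusMessage *sig = dbus_message_new_signal(
        "/com/example/Component",          /* hypothetical object path */
        "com.example.Component",           /* hypothetical interface   */
        "ParameterValueChanged");
    if (sig == NULL)
        return;

    dbus_message_append_args(sig,
                             DBUS_TYPE_STRING, &p->name,
                             DBUS_TYPE_STRING, &new_value,
                             DBUS_TYPE_INVALID);
    dbus_connection_send(conn, sig, NULL); /* broadcast; the daemon routes it */
    dbus_message_unref(sig);
}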

The Protocol Agent subscribes to the component’s signal with D-Bus.

When the value changes for a parameter whose notification has been turned on, the Functional Component generates the signal on D-Bus. The signal message contains the name of the signal, the bus name of the process sending the signal, and the delta.

The PA gets the notification via D-Bus and notifies the change to its cloud adapter by looking up the notification table and performing any post processing mandated by the protocol.

If the notifications are turned off by the cloud, the Protocol Agent:

  • Deregisters the notifications by calling setParameterAttributes() on the Functional Component
    • The Functional Component decrements the notification counter. If the counter < 1, the Functional Component should stop generating notifications for that parameter.
  • Clears/updates the notification table
  • Unsubscribes the component’s signal on the message bus if there are no other notifications active in that component.

The Protocol Agents must police themselves so that a PA only turns off notifications that it previously turned on.

(draw.io diagram: Asynchronous Notification)

CCSP OS Abstraction Layer

The CCSP platform is implemented and optimized for Linux. As such, OS abstraction is provided by the POSIX APIs. All CCSP components must use the POSIX APIs for OS services such as threads, mutual exclusion, semaphores, shared memory, etc.
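For illustration only, the kind of POSIX primitives meant here (a worker thread plus a mutex protecting shared state) would be used directly by a component, with no additional OS wrapper. This is a generic example, not CCSP code.

/* Generic illustration of direct POSIX usage: pthreads and a mutex. */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t g_lock = PTHREAD_MUTEX_INITIALIZER;
static int g_counter;

static void *worker(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&g_lock);
    g_counter++;                     /* critical section */
    pthread_mutex_unlock(&g_lock);
    return NULL;
}

int main(void)
{
    pthread_t tid;
    if (pthread_create(&tid, NULL, worker, NULL) != 0)
        return 1;
    pthread_join(tid, NULL);
    printf("counter = %d\n", g_counter);
    return 0;
}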

Hardware Abstraction Layer (HAL)

The HAL provides the necessary abstraction for the CCSP components to interface with SoC-vendor-specific hardware components such as audio/video encoders/decoders. The HAL is a thin layer above the SoC vendor’s driver software distribution. This decouples the CCSP framework from any particular SoC platform.

The HAL is made up of two layers, namely the Hardware Independent Layer and the Hardware Dependent Layer. CCSP components must be written to the hardware independent layer.

The following provides some suggestions and recommendations for defining an Abstraction Layer for hardware access.

1. The components may define a component-specific HAL for hardware drivers that are used only by that component. For instance, the Video DRM Termination Manager (VDTM) Component may define a common DRM HAL that is not tied to a specific DRM. The VDTM component is the only component that uses this abstraction layer, which is therefore not common to all the components. However, using this component-specific HAL abstraction allows the component to quickly integrate with multiple DRMs with minimal changes. It also eliminates any unnecessary dependency of the component on the common HAL.

2. A common HAL provides the necessary abstraction for all the CCSP components to interface with common hardware components. The figure below illustrates the use of the common HAL along with component-specific hardware abstraction.


(draw.io diagram: Common HAL and Component Specific HAL)


3. For FOSS libraries that use common, industry-standard interfaces that are unlikely to change, such as DirectFB or OpenGL ES 2.0, and that are ported by the SoC vendor to their respective platforms, CCSP components should call directly into the interfaces exposed by these libraries. There is no additional layer of abstraction above and beyond what is provided by the FOSS libraries themselves. Figure 1 illustrates this clearly.

4. For certain FOSS libraries and third-party licensed software, however, if the interfaces can potentially change, or if the library may be swapped for performance gains with another FOSS library providing equivalent services but different interfaces, then it is recommended to create an abstraction layer as part of the common HAL.

5. If there is a use case for all CCSP components to access certain hardware capabilities, then such an interface should be made available in the CCSP Base component interface. This interface, however, should be part of the common HAL.
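As an illustration of the layering described above, a component would include only a hardware-independent header such as the hypothetical one below, while the SoC vendor supplies the hardware-dependent implementation behind it. All names are invented for this example.

/* Hypothetical common-HAL header: CCSP components include only this
 * hardware-independent interface; the SoC vendor provides the
 * hardware-dependent implementation behind it. */
#ifndef EXAMPLE_VIDEO_HAL_H
#define EXAMPLE_VIDEO_HAL_H

#include <stddef.h>
#include <stdint.h>

typedef struct video_decoder video_decoder_t;   /* opaque to components */

/* Hardware Independent Layer: the only API components may call. */
video_decoder_t *video_hal_open(const char *codec);          /* e.g. "h264" */
int  video_hal_decode(video_decoder_t *dec,
                      const uint8_t *frame, size_t len);
void video_hal_close(video_decoder_t *dec);

#endif /* EXAMPLE_VIDEO_HAL_H */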

Data Plane

This section explicitly lists best practices and guidelines for fast communication between components and processes running on a single processor and across multiple processors.

Single Process communication

Components within a single process should communicate via well-defined APIs. The APIs should abstract, encapsulate and hide any internal representation of the component.

In order to pass large amounts of data between CCSP components, memory is allocated from the process heap by the component generating the content. The reference (pointer) to this heap memory can then be passed to other interested components as a parameter. This avoids making unnecessary copies of the data being shared. However, it requires the format and structure of the data to be known in order for the components to process the data correctly.

Message schemas along with their versions should be defined and published for all communication between CCSP components.

It is recommended to pass the version of the message format of the data as a parameter, in addition to the reference to the heap memory. This allows incompatibilities to be detected if the message format is changed without being communicated to other components.

Care must be taken to free the memory after processing is complete.
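A minimal sketch of this pattern, with an illustrative message header carrying the format version, might look like the following; all names and the header layout are hypothetical.

/* Sketch: sharing a large in-process buffer by reference, with an explicit
 * message-format version so the consumer can detect incompatible layouts. */
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define MSG_FORMAT_VERSION 2u

typedef struct {
    uint32_t version;       /* format version of the payload that follows */
    uint32_t payload_len;
    uint8_t  payload[];     /* component-defined, schema-published layout */
} shared_msg_t;

/* Producer: allocate from the process heap and hand out the pointer. */
shared_msg_t *produce_message(const uint8_t *data, uint32_t len)
{
    shared_msg_t *msg = malloc(sizeof(*msg) + len);
    if (msg == NULL)
        return NULL;
    msg->version = MSG_FORMAT_VERSION;
    msg->payload_len = len;
    memcpy(msg->payload, data, len);
    return msg;                     /* consumers receive the pointer, no copy */
}

/* Consumer: check the version, process, and free when done. */
int consume_message(shared_msg_t *msg)
{
    if (msg == NULL || msg->version != MSG_FORMAT_VERSION)
        return -1;                  /* incompatible format detected */
    /* ... process msg->payload ... */
    free(msg);                      /* the consumer owns the buffer here */
    return 0;
}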

Inter Process communication

Inter-process communication is facilitated by the CCSP Message Bus, which uses D-Bus/R-Bus IPC. Components across processes communicate via well-defined APIs using the bus daemon. The D-Bus/R-Bus daemon provides a publish-subscribe interface and routes signals/events to all registered subscribers.

In order to move large amounts of data between components across process boundaries, UNIX file descriptors should be passed as parameters over D-Bus/R-Bus. The idea is to use D-Bus/R-Bus for IPC method calls between a client and a server and a dedicated pipe for passing large results from the server back to the client.  This allows very fast transmission rates.

The file descriptor could refer to a shared memory object allocated via shm_open().

As mentioned in Single Process communication above, this approach too requires the format and structure of the data to be known in advance for the components to process the data correctly. It is recommended to define a common message format structure and to pass the version of the message format to the communicating component along with the file descriptor.
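The client side of this pattern might look like the sketch below. It assumes the same illustrative (fd, size) argument layout as the earlier server-side sketch and a format-version field at the start of the shared object; only standard libdbus and POSIX calls are used.

/* Sketch: client side of the fd-passing pattern. The descriptor and size
 * are read from a D-Bus reply, the region is mapped, and an assumed
 * version field at the start of the shared object is checked before use. */
#include <dbus/dbus.h>
#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

int consume_shared_result(DBusMessage *reply, uint32_t expected_version)
{
    int fd = -1;
    dbus_uint64_t size = 0;
    DBusError err;
    dbus_error_init(&err);

    if (!dbus_message_get_args(reply, &err,
                               DBUS_TYPE_UNIX_FD, &fd,
                               DBUS_TYPE_UINT64, &size,
                               DBUS_TYPE_INVALID)) {
        dbus_error_free(&err);
        return -1;
    }

    void *map = mmap(NULL, (size_t)size, PROT_READ, MAP_SHARED, fd, 0);
    close(fd);                       /* the mapping keeps the object alive */
    if (map == MAP_FAILED)
        return -1;

    /* Assumed convention: the shared object starts with a format version. */
    uint32_t version = *(const uint32_t *)map;
    int rc = (version == expected_version) ? 0 : -1;
    /* ... process the rest of the mapping when rc == 0 ... */
    munmap(map, (size_t)size);
    return rc;
}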

Inter Processor communication

Inter processor communication is facilitated by the CCSP Message Bus Adapter. As explained in CCSP Framework, the Message Bus Adapter is a component that runs on all the processor cores and uses TCP/IP sockets to communicate with each other. This allows the processor cores to potentially run different OS stacks and yet be able to seamlessly communicate with each other.

Within the CCSP framework, the Message Bus Adapter exposes its services via APIs on the D-Bus/R-Bus. Other CCSP components use these APIs on the Message Bus Adapter to communicate with components running on other processor cores.

Inter Subsystem Communication

CCSP architecture is intended to be sufficiently flexible to allow communication between CCSP components that reside across multiple subsystems. This document references several complex use cases with multiple subsystems. Simpler products without this complexity are also supported. This document defines the following subsystems along with their associated prefix.

  • Router (eRT)
  • Cable Modem (eCM)

It should be noted that the Cable Modem subsystem and the Router subsystem will always coexist in DOCSIS-based WAN front-end gateway devices.

These subsystems may be implemented using the following configurations.

  • Across processor boundaries, or
  • In the same processor as logical subsystems with potentially different session bus for IPC, or
  • In the same processor all using the same session bus for IPC.

Before continuing with the inter-subsystem communication architecture, it is useful to highlight some of the capabilities of the CCSP Message Bus, in order to set the context and to emphasize that we are leveraging the message bus capabilities to their fullest extent rather than redefining something that is already tested and proven.

More details are available in the D-Bus Remote Communication Capabilities section below.

D-Bus Remote Communication Capabilities 

Applications using D-Bus are either servers or clients. A server listens for incoming connections; a client connects to a server. Once the connection is established, there is a symmetric flow of messages; the client-server distinction only matters when setting up the connection. When using the bus daemon, the bus daemon listens for connections and each component initiates a connection to the bus daemon.

A D-Bus address specifies where a server will listen, and where a client will connect. For example, the address

unix:path=/tmp/abcdef

specifies that the server will listen on a Unix domain socket at the path /tmp/abcdef and the client will connect to that socket. An address can also specify TCP/IP sockets. For example the address

tcp:host=10.1.2.4,port=5000,family=ipv4

specifies that the server at 10.1.2.4 will listen on TCP port 5000 and the client will connect to that socket. From the D-Bus specification at https://dbus.freedesktop.org/doc/dbus-specification.html#transports-tcp-sockets:

TCP Sockets

The TCP transport provides TCP/IP based connections between clients located on the same or different hosts. Using TCP transport without any additional secure authentication mechanisms over a network is unsecure.

Server Address Format

TCP/IP socket addresses are identified by the “tcp:” prefix and support the following key/value pairs:

Name | Values | Description
host | (string) | DNS name or IP address
port | (number) | The TCP port the server will open. A zero value lets the server choose a free port provided by the underlying operating system. libdbus is able to retrieve the real used port from the server.
family | (string) | If set, provides the type of socket family, either "ipv4" or "ipv6". If unset, the family is unspecified.

<!ELEMENT busconfig (user | type | fork | keep_umask | listen | pidfile | includedir | servicedir | servicehelper | auth | include | policy | limit | selinux)*>

The <listen> element defines an address that the bus should listen on. The address is in the standard D-Bus format, containing a transport name plus possible parameters/options.

Example:  <listen>unix:path=/tmp/foo</listen> 
Example: <listen>tcp:host=localhost,port=1234</listen>

If there are multiple <listen> elements, then the bus listens on multiple addresses. The bus will pass its address to started services or other interested parties with the last address given in <listen> first. That is, apps will try to connect to the last <listen> address first.
TCP sockets can accept IPv4 addresses, IPv6 addresses, or hostnames. If a hostname resolves to multiple addresses, the server will bind to all of them. The family=ipv4 or family=ipv6 options can be used to force it to bind to a subset of addresses.
Example: <listen>tcp:host=localhost,port=0,family=ipv4</listen>
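As a minimal sketch of using such a TCP address from a client, the fragment below opens a connection to a remote bus daemon with libdbus and performs the bus registration. The host and port are the illustrative values used earlier; authentication and policy configuration are out of scope here.

/* Sketch: connecting to a remote bus daemon over the TCP transport. */
#include <dbus/dbus.h>
#include <stdio.h>

int connect_remote_bus(void)
{
    DBusError err;
    dbus_error_init(&err);

    DBusConnection *conn =
        dbus_connection_open("tcp:host=10.1.2.4,port=5000,family=ipv4", &err);
    if (conn == NULL) {
        fprintf(stderr, "connect failed: %s\n", err.message);
        dbus_error_free(&err);
        return -1;
    }

    /* Perform the bus "Hello" so the daemon assigns a unique name. */
    if (!dbus_bus_register(conn, &err)) {
        fprintf(stderr, "register failed: %s\n", err.message);
        dbus_error_free(&err);
        dbus_connection_unref(conn);
        return -1;
    }
    return 0;   /* conn can now be used for method calls and signals */
}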

Inter Subsystem Communication Architecture

...