Hello everyone,
I’ve recently been experimenting with chaining multiple Greybus nodes to a single host over I²C (QWIIC port [0]). This general idea is not new (see MicroMod Qwiic Pro Kit [1]), although those setups do not use Greybus.
Since I²C does not allow a slave to initiate transfers, it is not well suited for node-initiated events (e.g. interrupts, SVC-initiated control). However, for my current use case I am primarily interested in polling-based functionality, so this limitation is acceptable.
In a typical Greybus topology, an Application Processor (AP), an SVC, and one or more modules are connected via UniPro. In practice, because most application processors lack a native UniPro interface, they connect through a bridge device that also implements the SVC.
For the I²C-based setup described above, I have considered the following topologies:
1. Separate co-processor (SVC/Bridge)
This approach is reasonable on SoCs such as the TI AM6254, which includes an M4F core that can serve as the SVC/bridge and control the I²C bus. However, on devices like the TI AM62L, which lack such a core, introducing an additional processor solely for Greybus does not seem justified. It would also make the setup much less portable, since it depends on a hardware component that not all BeagleBoard.org boards have.
```
+----+                +--------------+                +--------+
| AP | <--- bus ----> | SVC / Bridge | <--- I2C ----> | Module |
+----+                +--------------+                +--------+
                             |
                             |                        +--------+
                             `------------ I2C -----> | Module |
                                                      +--------+
```
2. SVC per node
Each node implements its own SVC. Since an I²C slave cannot initiate communication, the AP must already know the address of each SVC/module. This also seems inefficient when chaining multiple nodes.
```
+----+               +--------------+
| AP | <--- I2C ---> | SVC / Module |
+----+               +--------------+
   |
   |                 +--------------+
   `---- I2C ------> | SVC / Module |
                     +--------------+
```
3. SVC/Bridge functionality inside the AP
For this use case, this seems to be the most practical option.
To clarify, I am not proposing any new data paths in the Greybus pipeline. The idea is to have a reusable SVC/bridge implementation similar to what exists in greybus-zephyr [2][3], but hosted within the AP.
```
+----------+               +--------+
| AP / SVC | <--- I2C ---> | Module |
+----------+               +--------+
      |
      |                    +--------+
      `------- I2C ------> | Module |
                           +--------+
```
Before proceeding further, I would appreciate feedback on this approach—particularly whether there are concerns with option 3, or if there are alternative designs I should consider.
Best regards, Ayush Singh
[0]: https://www.sparkfun.com/qwiic [1]: https://www.digikey.in/en/maker/projects/micromod-qwiic-pro-kit-project-guid... [2]: https://github.com/beagleboard/greybus-zephyr/blob/main/subsys/greybus/svc.c... [3]: https://github.com/beagleboard/greybus-zephyr/blob/main/subsys/greybus/apbri...
On Sat Feb 28, 2026 at 8:47 AM EST, Ayush Singh wrote:
SVC per node
Each node implements its own SVC. Since an I²C slave cannot initiate
communication, the AP must already know the address of each SVC/module. This also seems inefficient when chaining multiple nodes.
[...]
SVC/Bridge functionality inside the AP
For this use case, this seems to be the most practical option.
To clarify, I am not proposing any new data paths in the Greybus
pipeline. The idea is to have a reusable SVC/bridge implementation similar to what exists in greybus-zephyr [2][3], but hosted within the AP.
We discussed this solution internally at Silicon Labs as a way to get rid of the SVC on the device, but haven't actually implemented it, so it's good to know that we were not alone. I think it's a great avenue to explore because it keeps the existing SVC code as is, so we keep a single data path in Greybus' core while offering the capability to handle SVC requests on the host.
Just to help me get a better mental picture, in hd->message_send you would either handle the message immediately if cport_id == 0, or convert that cport_id to an (interface, intf_cport_id) and pass the message to that interface's cport, something like that?
static int message_send(..., u16 cport_id, struct gb_message *msg, ...)
{
	if (cport_id == GB_SVC_CPORT_ID) {
		return svc_bridge_handle_msg(msg);
	} else {
		struct connection *connection = svc_bridge_get_connection(cport_id);
		// ... or you could directly look up in hd->connections,
		// the whole list of connections is already there, so
		// no need to maintain another one separately

		// somehow convert connection->intf to an i2c address
		// and use connection->intf_cport_id to address the cport
		i2c_transfer(adap, msgs, 1);
	}
}
```
+----------+               +--------+
| AP / SVC | <--- I2C ---> | Module |
+----------+               +--------+
      |
      |                    +--------+
      `------- I2C ------> | Module |
                           +--------+
```
You mention in point 2 that i2c slaves cannot initiate communication, so I wonder how you would emulate the "MODULE_INSERTED" that flows from SVC to the AP. Would your HD poll the bus for connected modules and "send" a MODULE_INSERTED for each of them?
Besides that, I think it would work fine. I'll be happy to test and review your patch when ready.
Regards, damien
On 3/3/26 3:27 AM, Damien Riégel wrote:
On Sat Feb 28, 2026 at 8:47 AM EST, Ayush Singh wrote:
SVC per node
Each node implements its own SVC. Since an I²C slave cannot initiate
communication, the AP must already know the address of each SVC/module. This also seems inefficient when chaining multiple nodes.
[...]
SVC/Bridge functionality inside the AP
For this use case, this seems to be the most practical option.
To clarify, I am not proposing any new data paths in the Greybus
pipeline. The idea is to have a reusable SVC/bridge implementation similar to what exists in greybus-zephyr [2][3], but hosted within the AP.
We discussed this solution internally at Silicon Labs as a way to get rid of the SVC on the device, but haven't actually implemented it, so it's good to know that we were not alone. I think it's a great avenue to explore because it keeps the existing SVC code as is, so we keep a single data path in Greybus' core while offering the capability to handle SVC requests on the host.
Just to help me get a better mental picture, in hd->message_send you would either handle the message immediately if cport_id == 0, or convert that cport_id to an (interface, intf_cport_id) and pass the message to that interface's cport, something like that?
static int message_send(..., u16 cport_id, struct gb_message *msg, ...)
{
	if (cport_id == GB_SVC_CPORT_ID) {
		return svc_bridge_handle_msg(msg);
	} else {
		struct connection *connection = svc_bridge_get_connection(cport_id);
		// ... or you could directly look up in hd->connections,
		// the whole list of connections is already there, so
		// no need to maintain another one separately

		// somehow convert connection->intf to an i2c address
		// and use connection->intf_cport_id to address the cport
		i2c_transfer(adap, msgs, 1);
	}
}
Yes, that's the basic idea. The APIs will probably look a bit different, but let me see how much info the Linux greybus module already has.
```
+----------+               +--------+
| AP / SVC | <--- I2C ---> | Module |
+----------+               +--------+
      |
      |                    +--------+
      `------- I2C ------> | Module |
                           +--------+
```

You mention in point 2 that i2c slaves cannot initiate communication, so I wonder how you would emulate the "MODULE_INSERTED" that flows from SVC to the AP. Would your HD poll the bus for connected modules and "send" a MODULE_INSERTED for each of them?
Besides that, I think it would work fine. I'll be happy to test and review your patch when ready.
Regards, damien
I am currently thinking of adding a debugfs entry to manually add and remove modules for the demo I am working on. In general, I would prefer to avoid continuous polling on Linux, but I am considering doing polling-based discovery at driver probe time.
Best Regards,
Ayush Singh