Hi,
This patchset brings support for Silicon Labs' CPC protocol as a transport layer for Greybus. As an example, an SPI driver is added as the physical layer, and everything is bundled into one big kernel module. In the future, as we plan to support other physical layers such as SDIO, the CPC core will become its own module, and each physical-layer CPC driver will be its own module as well.
CPC implements some of the UniPro features that Greybus relies upon, such as reliable transmission: CPC takes care of detecting transmission errors and retransmitting frames when necessary. There is also a flow-control feature that prevents sending messages to full CPorts.
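One subtlety behind the flow control described above is that CPC sequence numbers are 8-bit and wrap around, so window membership has to be evaluated modulo 256. A minimal userspace sketch of that check, mirroring what `cpc_header_number_in_window()` does in the patch below (the function name `seq_in_window` is ours, for illustration only):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Test whether sequence number n lies in [start; start + wnd),
 * with 8-bit wrap-around, as the CPC protocol core does. */
static bool seq_in_window(uint8_t start, uint8_t wnd, uint8_t n)
{
	uint8_t end;

	if (wnd == 0)
		return false;

	end = start + wnd - 1;		/* wraps naturally in u8 */

	if (end >= start)
		return n >= start && n <= end;

	return n >= start || n <= end;	/* window wraps past 255 */
}
```

A frame with seq 3 is accepted in the window starting at 250 with size 10 (the window wraps to cover 250..255 and 0..3), while seq 4 is rejected.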
In addition to the SPI host device support, there is also a class driver for a vendor protocol that enables Bluetooth on supported devices. This is mostly there to open the discussion on how a new protocol should be added to Greybus.
Damien Riégel (6):
  greybus: move host controller drivers comment in Makefile
  greybus: cpc: add core logic
  greybus: cpc: add SPI driver
  greybus: add API for async unidirectional transfer
  greybus: match device with bundle ID
  greybus: add class driver for Silabs Bluetooth
 MAINTAINERS                          |  12 +
 drivers/greybus/Kconfig              |   2 +
 drivers/greybus/Makefile             |   4 +-
 drivers/greybus/core.c               |   4 +
 drivers/greybus/cpc/Kconfig          |  12 +
 drivers/greybus/cpc/Makefile         |   6 +
 drivers/greybus/cpc/cpc.h            | 135 +++++++
 drivers/greybus/cpc/endpoint.c       | 158 ++++++++
 drivers/greybus/cpc/header.c         | 212 ++++++++++
 drivers/greybus/cpc/header.h         |  81 ++++
 drivers/greybus/cpc/host.c           | 113 ++++++
 drivers/greybus/cpc/protocol.c       | 274 +++++++++++++
 drivers/greybus/cpc/spi.c            | 585 +++++++++++++++++++++++++++
 drivers/greybus/operation.c          |  52 +++
 drivers/staging/greybus/Kconfig      |   9 +
 drivers/staging/greybus/Makefile     |   6 +
 drivers/staging/greybus/silabs-ble.c | 203 ++++++++++
 include/linux/greybus.h              |   7 +-
 include/linux/greybus/greybus_id.h   |   2 +
 include/linux/greybus/operation.h    |   4 +
 20 files changed, 1877 insertions(+), 4 deletions(-)
 create mode 100644 drivers/greybus/cpc/Kconfig
 create mode 100644 drivers/greybus/cpc/Makefile
 create mode 100644 drivers/greybus/cpc/cpc.h
 create mode 100644 drivers/greybus/cpc/endpoint.c
 create mode 100644 drivers/greybus/cpc/header.c
 create mode 100644 drivers/greybus/cpc/header.h
 create mode 100644 drivers/greybus/cpc/host.c
 create mode 100644 drivers/greybus/cpc/protocol.c
 create mode 100644 drivers/greybus/cpc/spi.c
 create mode 100644 drivers/staging/greybus/silabs-ble.c
gb-beagleplay is also a Greybus host controller driver, so move comment accordingly.
Signed-off-by: Damien Riégel <damien.riegel@silabs.com>
---
 drivers/greybus/Makefile | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/greybus/Makefile b/drivers/greybus/Makefile
index d986e94f889..c3564ad151f 100644
--- a/drivers/greybus/Makefile
+++ b/drivers/greybus/Makefile
@@ -18,9 +18,9 @@ obj-$(CONFIG_GREYBUS) += greybus.o
 # needed for trace events
 ccflags-y += -I$(src)
 
+# Greybus Host controller drivers
 obj-$(CONFIG_GREYBUS_BEAGLEPLAY) += gb-beagleplay.o
 
-# Greybus Host controller drivers
 gb-es2-y := es2.o
 obj-$(CONFIG_GREYBUS_ES2) += gb-es2.o
On Fri, Jul 04, 2025 at 08:40:31PM -0400, Damien Riégel wrote:
> gb-beagleplay is also a Greybus host controller driver, so move comment
> accordingly.
> 
> Signed-off-by: Damien Riégel <damien.riegel@silabs.com>
> ---
>  drivers/greybus/Makefile | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/drivers/greybus/Makefile b/drivers/greybus/Makefile
> index d986e94f889..c3564ad151f 100644
> --- a/drivers/greybus/Makefile
> +++ b/drivers/greybus/Makefile
> @@ -18,9 +18,9 @@ obj-$(CONFIG_GREYBUS) += greybus.o
>  # needed for trace events
>  ccflags-y += -I$(src)
>  
> +# Greybus Host controller drivers
>  obj-$(CONFIG_GREYBUS_BEAGLEPLAY) += gb-beagleplay.o
The blank line should be dropped here too, right?
thanks,
greg k-h
This step adds the basic infrastructure needed to use CPC as a backend in Greybus. The goal of CPC is to add reliability by implementing error detection and retransmission for links that don't have that capability by default.
When Greybus establishes the connection between two CPorts, CPC creates an endpoint for this connection. Greybus messages are then encapsulated in CPC frames, which are basically a custom header + Greybus header + Greybus payload.
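For illustration, the encapsulation described above can be sketched as a packed struct for size accounting. The field names mirror struct cpc_header introduced in this patch, but the struct itself, the helper, and the 8-byte Greybus operation header constant are assumptions made for the sketch, not the patch's actual code:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative on-wire model of the 8-byte CPC header. */
struct cpc_wire_header {
	uint16_t ep_id;		/* destination endpoint (CPort) */
	uint8_t  ctrl;		/* frame type [7..6] + flags [5..0] */
	uint8_t  recv_wnd;	/* advertised receive window */
	uint8_t  seq;		/* sequence number of this frame */
	uint8_t  ack;		/* next expected sequence from remote */
	uint8_t  extension[2];	/* mtu (SYN) or payload_len (DATA) */
} __attribute__((packed));

/* sizeof(struct gb_operation_msg_hdr) in mainline Greybus. */
#define GB_OPERATION_MSG_HDR_SIZE 8

/* Bytes carried by a DATA frame for a given Greybus payload:
 * CPC header + Greybus operation header + Greybus payload. */
static size_t cpc_data_frame_size(size_t gb_payload_size)
{
	return sizeof(struct cpc_wire_header) +
	       GB_OPERATION_MSG_HDR_SIZE + gb_payload_size;
}
```

So an empty Greybus operation (header only, no payload) costs 16 bytes on the wire before any physical-layer framing.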
As this is still evolving and not the main point of the RFC, the whole core is squashed in one big commit, but it will definitely be split into more digestible commits as we refine it.
Signed-off-by: Damien Riégel <damien.riegel@silabs.com>
---
 MAINTAINERS                    |   6 +
 drivers/greybus/Kconfig        |   2 +
 drivers/greybus/Makefile       |   2 +
 drivers/greybus/cpc/Kconfig    |  12 ++
 drivers/greybus/cpc/Makefile   |   6 +
 drivers/greybus/cpc/cpc.h      | 135 ++++++++++++++++
 drivers/greybus/cpc/endpoint.c | 158 +++++++++++++++++++
 drivers/greybus/cpc/header.c   | 212 +++++++++++++++++++++++++
 drivers/greybus/cpc/header.h   |  81 ++++++++++
 drivers/greybus/cpc/host.c     | 113 ++++++++++++++
 drivers/greybus/cpc/protocol.c | 274 +++++++++++++++++++++++++++++++++
 11 files changed, 1001 insertions(+)
 create mode 100644 drivers/greybus/cpc/Kconfig
 create mode 100644 drivers/greybus/cpc/Makefile
 create mode 100644 drivers/greybus/cpc/cpc.h
 create mode 100644 drivers/greybus/cpc/endpoint.c
 create mode 100644 drivers/greybus/cpc/header.c
 create mode 100644 drivers/greybus/cpc/header.h
 create mode 100644 drivers/greybus/cpc/host.c
 create mode 100644 drivers/greybus/cpc/protocol.c
diff --git a/MAINTAINERS b/MAINTAINERS
index 8256ec0ff8a..10385b5344b 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -10016,6 +10016,12 @@ S:	Maintained
 F:	Documentation/devicetree/bindings/net/ti,cc1352p7.yaml
 F:	drivers/greybus/gb-beagleplay.c
 
+GREYBUS CPC DRIVERS
+M:	Damien Riégel <damien.riegel@silabs.com>
+R:	Silicon Labs Kernel Team <linux-devel@silabs.com>
+S:	Supported
+F:	drivers/greybus/cpc/*
+
 GREYBUS SUBSYSTEM
 M:	Johan Hovold <johan@kernel.org>
 M:	Alex Elder <elder@kernel.org>
diff --git a/drivers/greybus/Kconfig b/drivers/greybus/Kconfig
index c3f056d28b0..565a0fdcb2c 100644
--- a/drivers/greybus/Kconfig
+++ b/drivers/greybus/Kconfig
@@ -30,6 +30,8 @@ config GREYBUS_BEAGLEPLAY
 	  To compile this code as a module, chose M here: the module will be
 	  called gb-beagleplay.ko
 
+source "drivers/greybus/cpc/Kconfig"
+
 config GREYBUS_ES2
 	tristate "Greybus ES3 USB host controller"
 	depends on USB
diff --git a/drivers/greybus/Makefile b/drivers/greybus/Makefile
index c3564ad151f..4ebf8907405 100644
--- a/drivers/greybus/Makefile
+++ b/drivers/greybus/Makefile
@@ -21,6 +21,8 @@ ccflags-y += -I$(src)
 # Greybus Host controller drivers
 obj-$(CONFIG_GREYBUS_BEAGLEPLAY) += gb-beagleplay.o
+obj-$(CONFIG_GREYBUS_CPC) += cpc/
+
 gb-es2-y := es2.o
 obj-$(CONFIG_GREYBUS_ES2) += gb-es2.o
diff --git a/drivers/greybus/cpc/Kconfig b/drivers/greybus/cpc/Kconfig
new file mode 100644
index 00000000000..1512f9324f8
--- /dev/null
+++ b/drivers/greybus/cpc/Kconfig
@@ -0,0 +1,12 @@
+# SPDX-License-Identifier: GPL-2.0
+
+config GREYBUS_CPC
+	tristate "Greybus CPC driver"
+	depends on SPI
+	select CRC_ITU_T
+	help
+	  Select this option if you have a Silicon Labs EFR32 device that acts
+	  as a Greybus SVC.
+
+	  To compile this code as a module, chose M here: the module will be
+	  called gb-cpc.ko
diff --git a/drivers/greybus/cpc/Makefile b/drivers/greybus/cpc/Makefile
new file mode 100644
index 00000000000..08ef7c6d24b
--- /dev/null
+++ b/drivers/greybus/cpc/Makefile
@@ -0,0 +1,6 @@
+# SPDX-License-Identifier: GPL-2.0
+
+gb-cpc-y := endpoint.o header.o host.o main.o protocol.o
+
+# CPC core
+obj-$(CONFIG_GREYBUS_CPC) += gb-cpc.o
diff --git a/drivers/greybus/cpc/cpc.h b/drivers/greybus/cpc/cpc.h
new file mode 100644
index 00000000000..4aece6da9f7
--- /dev/null
+++ b/drivers/greybus/cpc/cpc.h
@@ -0,0 +1,135 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2025, Silicon Laboratories, Inc.
+ */ + +#ifndef __CPC_H +#define __CPC_H + +#include <linux/device.h> +#include <linux/types.h> + +#include "header.h" + +#define GB_CPC_SPI_NUM_CPORTS 8 + +struct cpc_endpoint; +struct cpc_endpoint_tcb; +struct cpc_frame; +struct cpc_host_device; + +/** + * struct cpc_host_device - CPC host device + * @gb_hd: pointer to Greybus Host Device + * @lock: mutex to synchronize access to endpoint array + * @tx_queue: list of cpc_frame to send + * @endpoints: array of endpoint pointers + * @wake_tx: function called when a new packet must be transmitted + */ +struct cpc_host_device { + struct gb_host_device *gb_hd; + + struct mutex lock; + struct list_head tx_queue; + + struct cpc_endpoint *endpoints[GB_CPC_SPI_NUM_CPORTS]; + + int (*wake_tx)(struct cpc_host_device *cpc_hd); +}; + +struct cpc_endpoint *cpc_hd_get_endpoint(struct cpc_host_device *cpc_hd, u16 cport_id); +void cpc_hd_send_frame(struct cpc_host_device *cpc_hd, struct cpc_frame *frame); +void cpc_hd_rcvd(struct cpc_host_device *cpc_hd, struct cpc_header *hdr, + u8 *data, size_t length); +struct cpc_frame *cpc_hd_dequeue(struct cpc_host_device *cpc_hd); +bool cpc_hd_tx_queue_empty(struct cpc_host_device *cpc_hd); + +/** + * struct cpc_endpoint_tcb - endpoint's transmission control block + * @send_wnd: send window, maximum number of frames that the remote can accept + * TX frames should have a sequence in the range + * [send_una; send_una + send_wnd]. 
+ * @send_nxt: send next, the next sequence number that will be used for transmission + * @send_una: send unacknowledged, the oldest unacknowledged sequence number + * @ack: current acknowledge number + * @seq: current sequence number + * @mtu: maximum transmission unit + */ +struct cpc_endpoint_tcb { + u8 send_wnd; + u8 send_nxt; + u8 send_una; + u8 ack; + u8 seq; + u16 mtu; +}; + +/** + * struct cpc_endpint - CPC endpoint + * @id: endpoint ID + * @cpc_hd: pointer to the CPC host device this endpoint belongs to + * @lock: synchronize access to other attributes + * @completion: (dis)connection completion + * @tcb: transmission control block + * @holding_queue: list of CPC frames queued to be sent + * @pending_ack_queue: list of CPC frames sent and waiting for acknowledgment + */ +struct cpc_endpoint { + u16 id; + + struct cpc_host_device *cpc_hd; + + struct mutex lock; /* Synchronize access to all other attributes. */ + struct completion completion; + struct cpc_endpoint_tcb tcb; + struct list_head holding_queue; + struct list_head pending_ack_queue; +}; + +struct cpc_endpoint *cpc_endpoint_alloc(u16 ep_id, gfp_t gfp_mask); +void cpc_endpoint_release(struct cpc_endpoint *ep); +int cpc_endpoint_frame_send(struct cpc_endpoint *ep, struct cpc_frame *frame); +int cpc_endpoint_connect(struct cpc_endpoint *ep); +int cpc_endpoint_disconnect(struct cpc_endpoint *ep); + +/** + * struct cpc_frame - CPC frame + * @header: CPC header + * @message: Greybus message to transmit + * @cancelled: indicate if Greybus message is cancelled and should not be sent + * @ep: endpoint this frame is sent over + * @links: list head in endpoint's queue + * @txq_links: list head in cpc host device's queue + */ +struct cpc_frame { + struct cpc_header header; + struct gb_message *message; + + bool cancelled; + + struct cpc_endpoint *ep; + + struct list_head links; /* endpoint->holding_queue or + * endpoint->pending_ack_queue. + */ + struct list_head txq_links; /* cpc_host_device->tx_queue. 
*/ + +}; + +struct cpc_frame *cpc_frame_alloc(struct gb_message *message, gfp_t gfp_mask); +void cpc_frame_free(struct cpc_frame *frame); +void cpc_frame_sent(struct cpc_frame *frame, int status); + +int __cpc_protocol_write(struct cpc_endpoint *ep, struct cpc_frame *frame); + +void cpc_protocol_on_data(struct cpc_endpoint *ep, struct cpc_header *hdr, u8 *data, size_t length); +void cpc_protocol_on_syn(struct cpc_endpoint *ep, struct cpc_header *hdr); +void cpc_protocol_on_rst(struct cpc_endpoint *ep); + +void cpc_protocol_send_rst(struct cpc_host_device *cpc_hd, u8 ep_id); +int cpc_protocol_send_syn(struct cpc_endpoint *ep); + +int cpc_spi_register_driver(void); +void cpc_spi_unregister_driver(void); + +#endif diff --git a/drivers/greybus/cpc/endpoint.c b/drivers/greybus/cpc/endpoint.c new file mode 100644 index 00000000000..12710edebcf --- /dev/null +++ b/drivers/greybus/cpc/endpoint.c @@ -0,0 +1,158 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (c) 2025, Silicon Laboratories, Inc. + */ + +#include <linux/greybus.h> + +#include "cpc.h" + +/** + * cpc_endpoint_write - Write a DATA frame. + * @ep: Endpoint handle. + * @frame: Frame to send. + * + * @return: 0 on success, otherwise a negative error code. 
+ */ +int cpc_endpoint_frame_send(struct cpc_endpoint *ep, struct cpc_frame *frame) +{ + struct cpc_header *hdr = &frame->header; + size_t cpc_payload_sz = 0; + int err; + + if (frame->message) { + cpc_payload_sz += sizeof(struct gb_operation_msg_hdr); + cpc_payload_sz += frame->message->payload_size; + } + + mutex_lock(&ep->lock); + + if (cpc_payload_sz > ep->tcb.mtu) { + err = -EINVAL; + goto out; + } + + memset(hdr, 0, sizeof(*hdr)); + hdr->ctrl = cpc_header_get_ctrl(CPC_FRAME_TYPE_DATA, true); + hdr->ep_id = ep->id; + hdr->recv_wnd = CPC_HEADER_MAX_RX_WINDOW; + hdr->seq = ep->tcb.seq; + hdr->dat.payload_len = cpc_payload_sz; + + frame->ep = ep; + + err = __cpc_protocol_write(ep, frame); + +out: + mutex_unlock(&ep->lock); + + return err; +} + +void cpc_frame_sent(struct cpc_frame *frame, int status) +{ + struct cpc_endpoint *ep = frame->ep; + struct gb_host_device *gb_hd = ep->cpc_hd->gb_hd; + + /* There is no Greybus payload, this frame is purely CPC */ + if (!frame->message) + return; + + /* + * Increase the send_nxt sequence, this is used as the upper bound of sequence number that + * can be ACK'd by the remote. Only increase if sent successfully. + */ + if (!status) { + mutex_lock(&ep->lock); + ep->tcb.send_nxt++; + mutex_unlock(&ep->lock); + } + + if (!frame->cancelled) + greybus_message_sent(gb_hd, frame->message, status); + + kfree(frame); +} + +/** + * cpc_endpoint_tcb_reset() - Reset endpoint's TCB to initial values. + * @ep: endpoint pointer + */ +static void cpc_endpoint_tcb_reset(struct cpc_endpoint *ep) +{ + ep->tcb.seq = ep->id; + ep->tcb.ack = 0; + ep->tcb.mtu = 0; + ep->tcb.send_nxt = ep->id; + ep->tcb.send_una = ep->id; + ep->tcb.send_wnd = 1; +} + +/** + * cpc_endpoint_alloc() - Allocate and initialize CPC endpoint. + * @ep_id: Endpoint ID. + * @gfp_mask: GFP mask for allocation. + * + * Return: Pointer to allocated and initialized cpc_endpoint, or NULL on failure. 
+ */ +struct cpc_endpoint *cpc_endpoint_alloc(u16 ep_id, gfp_t gfp_mask) +{ + struct cpc_endpoint *ep; + + ep = kzalloc(sizeof(*ep), gfp_mask); + if (!ep) + return NULL; + + ep->id = ep_id; + INIT_LIST_HEAD(&ep->holding_queue); + INIT_LIST_HEAD(&ep->pending_ack_queue); + + mutex_init(&ep->lock); + cpc_endpoint_tcb_reset(ep); + init_completion(&ep->completion); + + return ep; +} + +void cpc_endpoint_release(struct cpc_endpoint *ep) +{ + kfree(ep); +} + +int cpc_endpoint_connect(struct cpc_endpoint *ep) +{ + int ret; + + ret = cpc_protocol_send_syn(ep); + if (ret) + return ret; + + return wait_for_completion_interruptible(&ep->completion); +} + +int cpc_endpoint_disconnect(struct cpc_endpoint *ep) +{ + cpc_protocol_send_rst(ep->cpc_hd, ep->id); + + return 0; +} + +struct cpc_frame *cpc_frame_alloc(struct gb_message *message, gfp_t gfp_mask) +{ + struct cpc_frame *frame; + + frame = kzalloc(sizeof(*frame), gfp_mask); + if (!frame) + return NULL; + + frame->message = message; + INIT_LIST_HEAD(&frame->links); + INIT_LIST_HEAD(&frame->txq_links); + + return frame; +} + +void cpc_frame_free(struct cpc_frame *frame) +{ + kfree(frame); +} diff --git a/drivers/greybus/cpc/header.c b/drivers/greybus/cpc/header.c new file mode 100644 index 00000000000..4faa604b13a --- /dev/null +++ b/drivers/greybus/cpc/header.c @@ -0,0 +1,212 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (c) 2025, Silicon Laboratories, Inc. + */ + +#include <linux/bitfield.h> +#include <linux/bits.h> +#include <linux/string.h> + +#include "header.h" + +#define CPC_CONTROL_TYPE_MASK 0xC0 +#define CPC_CONTROL_ACK_MASK BIT(2) + +/** + * cpc_header_get_type() - Get the frame type. + * @hdr: CPC header. + * @type: Reference to a frame type. + * + * Return: True if the type has been successfully decoded, otherwise false. + * On success, the output parameter type is assigned. 
+ */ +bool cpc_header_get_type(const struct cpc_header *hdr, enum cpc_frame_type *type) +{ + switch (FIELD_GET(CPC_CONTROL_TYPE_MASK, hdr->ctrl)) { + case CPC_FRAME_TYPE_DATA: + *type = CPC_FRAME_TYPE_DATA; + break; + case CPC_FRAME_TYPE_SYN: + *type = CPC_FRAME_TYPE_SYN; + break; + case CPC_FRAME_TYPE_RST: + *type = CPC_FRAME_TYPE_RST; + break; + default: + return false; + } + + return true; +} + +/** + * cpc_header_get_ep_id() - Get the endpoint id. + * @hdr: CPC header. + * + * Return: Endpoint id. + */ +u8 cpc_header_get_ep_id(const struct cpc_header *hdr) +{ + return hdr->ep_id; +} + +/** + * cpc_header_get_recv_wnd() - Get the receive window. + * @hdr: CPC header. + * + * Return: Receive window. + */ +u8 cpc_header_get_recv_wnd(const struct cpc_header *hdr) +{ + return hdr->recv_wnd; +} + +/** + * cpc_header_get_seq() - Get the sequence number. + * @hdr: CPC header. + * + * Return: Sequence number. + */ +u8 cpc_header_get_seq(const struct cpc_header *hdr) +{ + return hdr->seq; +} + +/** + * cpc_header_get_ack() - Get the acknowledge number. + * @hdr: CPC header. + * + * Return: Acknowledge number. + */ +u8 cpc_header_get_ack(const struct cpc_header *hdr) +{ + return hdr->ack; +} + +/** + * cpc_header_get_req_ack() - Get the request acknowledge frame flag. + * @hdr: CPC header. + * + * Return: Request acknowledge frame flag. + */ +bool cpc_header_get_req_ack(const struct cpc_header *hdr) +{ + return FIELD_GET(CPC_CONTROL_ACK_MASK, hdr->ctrl); +} + +/** + * cpc_header_get_mtu() - Get the maximum transmission unit. + * @hdr: CPC header. + * + * Return: Maximum transmission unit. + * + * Must only be used over a SYN frame. + */ +u16 cpc_header_get_mtu(const struct cpc_header *hdr) +{ + return le16_to_cpu(hdr->syn.mtu); +} + +/** + * cpc_header_get_payload_len() - Get the payload length. + * @hdr: CPC header. + * + * Return: Payload length. + * + * Must only be used over a DATA frame. 
+ */ +u16 cpc_header_get_payload_len(const struct cpc_header *hdr) +{ + return le16_to_cpu(hdr->dat.payload_len); +} + +/** + * cpc_header_get_ctrl() - Encode parameters into a control byte. + * @type: Frame type. + * @req_ack: Frame flag indicating a request to be acknowledged. + * + * Return: Encoded control byte. + */ +u8 cpc_header_get_ctrl(enum cpc_frame_type type, bool req_ack) +{ + return FIELD_PREP(CPC_CONTROL_TYPE_MASK, type) | + FIELD_PREP(CPC_CONTROL_ACK_MASK, req_ack); +} + +/** + * cpc_header_get_frames_acked_count() - Get frames to be acknowledged. + * @seq: Current sequence number of the endpoint. + * @ack: Acknowledge number of the received frame. + * + * Return: Frames to be acknowledged. + */ +u8 cpc_header_get_frames_acked_count(u8 seq, u8 ack) +{ + u8 frames_acked_count; + + /* Find number of frames acknowledged with ACK number. */ + if (ack > seq) { + frames_acked_count = ack - seq; + } else { + frames_acked_count = 256 - seq; + frames_acked_count += ack; + } + + return frames_acked_count; +} + +/** + * cpc_header_is_syn_ack_valid() - Check if the provided SYN-ACK valid or not. + * @seq: Current sequence number of the endpoint. + * @ack: Acknowledge number of the received SYN. + * + * Return: True if valid, otherwise false. + */ +bool cpc_header_is_syn_ack_valid(u8 seq, u8 ack) +{ + return !!cpc_header_get_frames_acked_count(seq, ack); +} + +/** + * cpc_header_number_in_window() - Test if a number is within a window. + * @start: Start of the window. + * @end: Window size. + * @n: Number to be tested. + * + * Given the start of the window and its size, test if the number is + * in the range [start; start + wnd). + * + * @return True if start <= n <= start + wnd - 1 (modulo 256), otherwise false. 
+ */ +bool cpc_header_number_in_window(u8 start, u8 wnd, u8 n) +{ + u8 end; + + if (wnd == 0) + return false; + + end = start + wnd - 1; + + return cpc_header_number_in_range(start, end, n); +} + +/** + * cpc_header_number_in_range() - Test if a number is between start and end (included). + * @start: Lowest limit. + * @end: Highest limit inclusively. + * @n: Number to be tested. + * + * @return True if start <= n <= end (modulo 256), otherwise false. + */ +bool cpc_header_number_in_range(u8 start, u8 end, u8 n) +{ + if (end >= start) { + if (n < start || n > end) + return false; + } else { + if (n > end && n < start) + return false; + } + + return true; +} diff --git a/drivers/greybus/cpc/header.h b/drivers/greybus/cpc/header.h new file mode 100644 index 00000000000..5d574fef422 --- /dev/null +++ b/drivers/greybus/cpc/header.h @@ -0,0 +1,81 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Copyright (c) 2025, Silicon Laboratories, Inc. + */ + +#ifndef __CPC_HEADER_H +#define __CPC_HEADER_H + +#include <linux/compiler_attributes.h> +#include <linux/types.h> + +#define CPC_HEADER_MAX_RX_WINDOW U8_MAX +#define CPC_HEADER_SIZE 8 + +/** + * enum cpc_frame_type - Describes all possible frame types that can + * be received or sent. + * @CPC_FRAME_TYPE_DATA: Used to send and control application DATA frames. + * @CPC_FRAME_TYPE_SYN: Used to initiate an endpoint connection. + * @CPC_FRAME_TYPE_RST: Used to reset the endpoint connection and indicate + * that the endpoint is unavailable. + */ +enum cpc_frame_type { + CPC_FRAME_TYPE_DATA, + CPC_FRAME_TYPE_SYN, + CPC_FRAME_TYPE_RST, +}; + +/** + * struct cpc_header - Representation of the CPC header. + * @ep_id: Address of the endpoint the frame is destined to. + * @ctrl: Indicates the frame type [7..6] and frame flags [5..0]. + * Currently only the request acknowledge flag is supported. + * This flag indicates if the frame should be acknowledged by + * the remote on reception. 
+ * @recv_wnd: Indicates to the remote how many reception buffers are + * available so it can determine how many frames it can send. + * @seq: Identifies the frame with a number. + * @ack: Indicate the sequence number of the next expected frame from + * the remote. When paired with a fast re-transmit flag, it indicates + * the sequence number of the frame in error that should be + * re-transmitted. + * @syn.mtu: On a SYN frame, this represents the maximum transmission unit. + * @dat.payload_len: On a DATA frame, this indicates the payload length. + */ +struct cpc_header { + u16 ep_id; + u8 ctrl; + u8 recv_wnd; + u8 seq; + u8 ack; + union { + u8 extension[2]; + struct __packed { + __le16 mtu; + } syn; + struct __packed { + __le16 payload_len; + } dat; + struct __packed { + u8 reserved[2]; + } rst; + }; +} __packed; + +bool cpc_header_get_type(const struct cpc_header *hdr, enum cpc_frame_type *type); +u8 cpc_header_get_ep_id(const struct cpc_header *hdr); +u8 cpc_header_get_recv_wnd(const struct cpc_header *hdr); +u8 cpc_header_get_seq(const struct cpc_header *hdr); +u8 cpc_header_get_ack(const struct cpc_header *hdr); +bool cpc_header_get_req_ack(const struct cpc_header *hdr); +u16 cpc_header_get_mtu(const struct cpc_header *hdr); +u16 cpc_header_get_payload_len(const struct cpc_header *hdr); +u8 cpc_header_get_ctrl(enum cpc_frame_type type, bool req_ack); + +u8 cpc_header_get_frames_acked_count(u8 seq, u8 ack); +bool cpc_header_is_syn_ack_valid(u8 seq, u8 ack); +bool cpc_header_number_in_window(u8 start, u8 wnd, u8 n); +bool cpc_header_number_in_range(u8 start, u8 end, u8 n); + +#endif diff --git a/drivers/greybus/cpc/host.c b/drivers/greybus/cpc/host.c new file mode 100644 index 00000000000..0805552d5ec --- /dev/null +++ b/drivers/greybus/cpc/host.c @@ -0,0 +1,113 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (c) 2025, Silicon Laboratories, Inc. 
+ */ + +#include <linux/greybus.h> +#include <linux/list.h> +#include <linux/module.h> + +#include "cpc.h" +#include "header.h" + +struct cpc_endpoint *cpc_hd_get_endpoint(struct cpc_host_device *cpc_hd, u16 cport_id) +{ + struct cpc_endpoint *ep; + + for (int i = 0; i < ARRAY_SIZE(cpc_hd->endpoints); i++) { + ep = cpc_hd->endpoints[i]; + if (ep && ep->id == cport_id) + return ep; + } + + return NULL; +} + +void cpc_hd_rcvd(struct cpc_host_device *cpc_hd, struct cpc_header *hdr, + u8 *data, size_t length) +{ + enum cpc_frame_type type; + struct cpc_endpoint *ep; + u8 ep_id; + + cpc_header_get_type(hdr, &type); + ep_id = cpc_header_get_ep_id(hdr); + + ep = cpc_hd_get_endpoint(cpc_hd, ep_id); + if (!ep) { + if (type != CPC_FRAME_TYPE_RST) { + dev_dbg(&cpc_hd->gb_hd->dev, "ep%u not allocated (%d)\n", ep_id, type); + cpc_protocol_send_rst(cpc_hd, ep_id); + } + return; + } + + switch (type) { + case CPC_FRAME_TYPE_DATA: + cpc_protocol_on_data(ep, hdr, data, length); + break; + case CPC_FRAME_TYPE_SYN: + cpc_protocol_on_syn(ep, hdr); + break; + case CPC_FRAME_TYPE_RST: + dev_dbg(&cpc_hd->gb_hd->dev, "reset\n"); + cpc_protocol_on_rst(ep); + break; + } +} + + +/** + * cpc_interface_send_frame() - Queue a socket buffer for transmission. + * @intf: Interface to send SKB over. + * @ops: SKB to send. + * + * Queue SKB in interface's transmit queue and signal the interface. Interface is expected to use + * cpc_interface_dequeue() to get the next SKB to transmit. + */ +void cpc_hd_send_frame(struct cpc_host_device *cpc_hd, struct cpc_frame *frame) +{ + mutex_lock(&cpc_hd->lock); + list_add_tail(&frame->txq_links, &cpc_hd->tx_queue); + mutex_unlock(&cpc_hd->lock); + + cpc_hd->wake_tx(cpc_hd); +} + +/** + * cpc_interface_dequeue() - Get the next SKB that was queued for transmission. + * @intf: Interface. + * + * Get an SKB that was previously queued by cpc_interface_send_frame(). + * + * Return: An SKB, or %NULL if queue was empty. 
+ */ +struct cpc_frame *cpc_hd_dequeue(struct cpc_host_device *cpc_hd) +{ + struct cpc_frame *f; + + mutex_lock(&cpc_hd->lock); + f = list_first_entry_or_null(&cpc_hd->tx_queue, struct cpc_frame, txq_links); + if (f) + list_del(&f->txq_links); + mutex_unlock(&cpc_hd->lock); + + return f; +} + +/** + * cpc_interface_tx_queue_empty() - Check if transmit queue is empty. + * @intf: Interface. + * + * Return: True if transmit queue is empty, false otherwise. + */ +bool cpc_hd_tx_queue_empty(struct cpc_host_device *cpc_hd) +{ + bool empty; + + mutex_lock(&cpc_hd->lock); + empty = list_empty(&cpc_hd->tx_queue); + mutex_unlock(&cpc_hd->lock); + + return empty; +} diff --git a/drivers/greybus/cpc/protocol.c b/drivers/greybus/cpc/protocol.c new file mode 100644 index 00000000000..610e4b96edd --- /dev/null +++ b/drivers/greybus/cpc/protocol.c @@ -0,0 +1,274 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (c) 2025, Silicon Laboratories, Inc. + */ + +#include <linux/greybus.h> +#include <linux/mutex.h> +#include <linux/skbuff.h> + +#include "cpc.h" +#include "header.h" + +int cpc_protocol_send_syn(struct cpc_endpoint *ep) +{ + struct cpc_frame *frame; + struct cpc_header *hdr; + + frame = cpc_frame_alloc(NULL, GFP_KERNEL); + if (!frame) + return -ENOMEM; + + hdr = &frame->header; + memset(hdr, 0, sizeof(*hdr)); + + mutex_lock(&ep->lock); + + hdr->ctrl = cpc_header_get_ctrl(CPC_FRAME_TYPE_SYN, true); + hdr->ep_id = ep->id; + hdr->recv_wnd = CPC_HEADER_MAX_RX_WINDOW; + hdr->seq = ep->tcb.seq; + hdr->syn.mtu = cpu_to_le16(U16_MAX); + + cpc_hd_send_frame(ep->cpc_hd, frame); + + mutex_unlock(&ep->lock); + + return 0; +} + +static void __cpc_protocol_send_ack(struct cpc_endpoint *ep) +{ + struct cpc_frame *frame; + struct cpc_header *hdr; + + frame = cpc_frame_alloc(NULL, GFP_KERNEL); + if (!frame) + return; + + hdr = &frame->header; + + memset(hdr, 0, sizeof(*hdr)); + hdr->ctrl = cpc_header_get_ctrl(CPC_FRAME_TYPE_DATA, false); + hdr->ep_id = ep->id; + hdr->recv_wnd = 
CPC_HEADER_MAX_RX_WINDOW; + hdr->ack = ep->tcb.ack; + + cpc_hd_send_frame(ep->cpc_hd, frame); +} + +/** + * cpc_protocol_send_rst - send a RST frame + * @cpc_hd: host device pointer + * @ep_id: endpoint id + */ +void cpc_protocol_send_rst(struct cpc_host_device *cpc_hd, u8 ep_id) +{ + struct cpc_frame *frame; + struct cpc_header *hdr; + + frame = cpc_frame_alloc(NULL, GFP_KERNEL); + if (!frame) + return; + + hdr = &frame->header; + memset(hdr, 0, sizeof(*hdr)); + hdr->ctrl = cpc_header_get_ctrl(CPC_FRAME_TYPE_RST, false); + hdr->ep_id = ep_id; + + cpc_hd_send_frame(cpc_hd, frame); +} + +static int __cpc_protocol_queue_tx_frame(struct cpc_endpoint *ep, struct cpc_frame *frame) +{ + frame->header.ack = ep->tcb.ack; + + list_add_tail(&frame->links, &ep->pending_ack_queue); + + cpc_hd_send_frame(ep->cpc_hd, frame); + + return 0; +} + +static void __cpc_protocol_process_pending_tx_frames(struct cpc_endpoint *ep) +{ + struct cpc_frame *frame; + u8 window; + int err; + + window = ep->tcb.send_wnd; + + while ((frame = list_first_entry_or_null(&ep->holding_queue, + struct cpc_frame, + links))) { + if (!cpc_header_number_in_window(ep->tcb.send_una, + window, + cpc_header_get_seq(&frame->header))) + return; + + list_del(&frame->links); + + err = __cpc_protocol_queue_tx_frame(ep, frame); + if (err < 0) { + list_add(&frame->links, &ep->holding_queue); + return; + } + } +} + +static void __cpc_protocol_receive_ack(struct cpc_endpoint *ep, u8 recv_wnd, u8 ack) +{ + struct cpc_frame *frame; + u8 acked_frames; + + ep->tcb.send_wnd = recv_wnd; + + frame = list_first_entry_or_null(&ep->pending_ack_queue, struct cpc_frame, links); + if (!frame) + goto out; + + /* Return if no frame to ACK. */ + if (!cpc_header_number_in_range(ep->tcb.send_una, ep->tcb.send_nxt, ack)) + goto out; + + /* Calculate how many frames will be ACK'd. 
*/ + acked_frames = cpc_header_get_frames_acked_count(cpc_header_get_seq(&frame->header), ack); + + for (u8 i = 0; i < acked_frames; i++) { + frame = list_first_entry_or_null(&ep->pending_ack_queue, struct cpc_frame, links); + if (!frame) { + dev_err_ratelimited(&ep->cpc_hd->gb_hd->dev, "pending ack queue shorter than expected"); + break; + } + + list_del(&frame->links); + cpc_frame_free(frame); + } + + ep->tcb.send_una += acked_frames; + +out: + __cpc_protocol_process_pending_tx_frames(ep); +} + +static bool __cpc_protocol_is_syn_ack_valid(struct cpc_endpoint *ep, struct cpc_header *hdr) +{ + struct cpc_frame *syn_frame; + enum cpc_frame_type type; + u8 syn_seq; + u8 ack; + + /* Fetch the previously sent frame. */ + syn_frame = list_first_entry_or_null(&ep->pending_ack_queue, struct cpc_frame, links); + if (!syn_frame) { + dev_warn(&ep->cpc_hd->gb_hd->dev, "cannot validate syn-ack, no frame was sent\n"); + return false; + } + + cpc_header_get_type(&syn_frame->header, &type); + + /* Verify if this frame is SYN. */ + if (type != CPC_FRAME_TYPE_SYN) { + dev_warn(&ep->cpc_hd->gb_hd->dev, + "cannot validate syn-ack, no syn frame was sent (%d)\n", type); + return false; + } + + syn_seq = cpc_header_get_seq(&syn_frame->header); + ack = cpc_header_get_ack(hdr); + + /* Validate received ACK with the SEQ used in the initial SYN. 
*/ + if (!cpc_header_is_syn_ack_valid(syn_seq, ack)) { + dev_warn(&ep->cpc_hd->gb_hd->dev, + "syn-ack (%d) is not valid with previously sent syn-seq (%d)\n", + ack, syn_seq); + return false; + } + + return true; +} + +void cpc_protocol_on_data(struct cpc_endpoint *ep, struct cpc_header *hdr, + u8 *data, size_t length) +{ + bool expected_seq; + + mutex_lock(&ep->lock); + + __cpc_protocol_receive_ack(ep, + cpc_header_get_recv_wnd(hdr), + cpc_header_get_ack(hdr)); + + if (cpc_header_get_req_ack(hdr)) { + expected_seq = cpc_header_get_seq(hdr) == ep->tcb.ack; + if (expected_seq) + ep->tcb.ack++; + + __cpc_protocol_send_ack(ep); + + if (!expected_seq) + dev_warn(&ep->cpc_hd->gb_hd->dev, + "unexpected seq: %u, expected seq: %u\n", + cpc_header_get_seq(hdr), ep->tcb.ack); + } + + mutex_unlock(&ep->lock); + + if (data) { + if (expected_seq) + greybus_data_rcvd(ep->cpc_hd->gb_hd, ep->id, data, length); + else + kfree(data); + } +} + +void cpc_protocol_on_syn(struct cpc_endpoint *ep, struct cpc_header *hdr) +{ + mutex_lock(&ep->lock); + + if (!__cpc_protocol_is_syn_ack_valid(ep, hdr)) { + cpc_protocol_send_rst(ep->cpc_hd, ep->id); + goto out; + } + + __cpc_protocol_receive_ack(ep, + cpc_header_get_recv_wnd(hdr), + cpc_header_get_ack(hdr)); + + /* On SYN-ACK, the remote's SEQ becomes our starting ACK. */ + ep->tcb.ack = cpc_header_get_seq(hdr); + ep->tcb.mtu = cpc_header_get_mtu(hdr); + ep->tcb.ack++; + + __cpc_protocol_send_ack(ep); + + complete(&ep->completion); + +out: + mutex_unlock(&ep->lock); +} + +void cpc_protocol_on_rst(struct cpc_endpoint *ep) +{ + // To be implemented when connection mechanism are restored +} + +/** + * __cpc_protocol_write() - Write a frame. + * @ep: Endpoint handle. + * @frame: Frame to write. + * + * Context: Expect endpoint's lock to be held. + * + * Return: 0 on success, otherwise a negative error code. 
+ */ +int __cpc_protocol_write(struct cpc_endpoint *ep, struct cpc_frame *frame) +{ + list_add_tail(&frame->links, &ep->holding_queue); + + __cpc_protocol_process_pending_tx_frames(ep); + + ep->tcb.seq++; + + return 0; +}
On Fri, Jul 04, 2025 at 08:40:32PM -0400, Damien Riégel wrote:
This step adds the basic infrastructure needed to use CPC as a backend in Greybus. The goal of CPC is to add reliability by implementing error detection and retransmission for links that don't have that capability by default.
When Greybus establishes the connection between two CPorts, CPC will create an endpoint for this connection. Greybus messages will then be encapsulated in CPC frames, which are basically a custom CPC header followed by the Greybus header and Greybus payload.
As this is still evolving and not the main point of the RFC, the whole core is squashed in one big commit, but it will definitely be split into more digestible commits as we refine it.
Signed-off-by: Damien Riégel <damien.riegel@silabs.com>
 MAINTAINERS                    |   6 +
 drivers/greybus/Kconfig        |   2 +
 drivers/greybus/Makefile       |   2 +
 drivers/greybus/cpc/Kconfig    |  12 ++
 drivers/greybus/cpc/Makefile   |   6 +
 drivers/greybus/cpc/cpc.h      | 135 ++++++++++++++++
 drivers/greybus/cpc/endpoint.c | 158 +++++++++++++++++++
 drivers/greybus/cpc/header.c   | 212 +++++++++++++++++++++++++
 drivers/greybus/cpc/header.h   |  81 ++++++++++
 drivers/greybus/cpc/host.c     | 113 ++++++++++++++
 drivers/greybus/cpc/protocol.c | 274 +++++++++++++++++++++++++++++++++
 11 files changed, 1001 insertions(+)
I like the idea, but you are going to have to break this up into smaller pieces in order to get us to be able to review it well, sorry.
thanks
greg k-h
Header frames are always 10 bytes (8 bytes of header and 2 bytes of checksum). The header contains the size of the payload to receive (size to transmit is already known). As the SPI device also has some processing to do when it receives a header, the SPI driver must wait for the interrupt line to be asserted before clocking the payload.
The SPI device always expects the chip select to be asserted and deasserted after a header, even if there are no payloads to transmit. This is used to keep header transmission synchronized between host and device. As some controllers don't support toggling the chip select when there is nothing to transmit, a null byte is transmitted in that case; it is ignored by the device.
If there are payloads, the driver will clock the longer of the two payload lengths. The payloads are always Greybus messages, so they are at least 8 bytes (the Greybus header), plus a variable-length Greybus payload.
Signed-off-by: Damien Riégel <damien.riegel@silabs.com>
---
 drivers/greybus/cpc/Makefile |   2 +-
 drivers/greybus/cpc/spi.c    | 585 +++++++++++++++++++++++++++
 2 files changed, 586 insertions(+), 1 deletion(-)
 create mode 100644 drivers/greybus/cpc/spi.c
diff --git a/drivers/greybus/cpc/Makefile b/drivers/greybus/cpc/Makefile index 08ef7c6d24b..4ee37ea5f52 100644 --- a/drivers/greybus/cpc/Makefile +++ b/drivers/greybus/cpc/Makefile @@ -1,6 +1,6 @@ # SPDX-License-Identifier: GPL-2.0
-gb-cpc-y := endpoint.o header.o host.o main.o protocol.o +gb-cpc-y := endpoint.o header.o host.o main.o protocol.o spi.o
# CPC core obj-$(CONFIG_GREYBUS_CPC) += gb-cpc.o diff --git a/drivers/greybus/cpc/spi.c b/drivers/greybus/cpc/spi.c new file mode 100644 index 00000000000..b8f3877bde1 --- /dev/null +++ b/drivers/greybus/cpc/spi.c @@ -0,0 +1,585 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (c) 2025, Silicon Laboratories, Inc. + */ + +#include <linux/atomic.h> +#include <linux/crc-itu-t.h> +#include <linux/delay.h> +#include <linux/device.h> +#include <linux/greybus.h> +#include <linux/interrupt.h> +#include <linux/kthread.h> +#include <linux/minmax.h> +#include <linux/of.h> +#include <linux/skbuff.h> +#include <linux/slab.h> +#include <linux/spi/spi.h> +#include <linux/unaligned.h> +#include <linux/wait.h> + +#include "cpc.h" +#include "header.h" + +#define CPC_SPI_CSUM_SIZE 2 +#define GB_CPC_SPI_MSG_SIZE_MAX 2048 +#define CPC_SPI_INTERRUPT_MAX_WAIT_MS 500 + +struct cpc_spi { + struct cpc_host_device cpc_hd; + + struct spi_device *spi; + + struct task_struct *task; + wait_queue_head_t event_queue; + + struct cpc_frame *tx_frame; + u8 tx_csum[CPC_SPI_CSUM_SIZE]; + + atomic_t event_cond; + + unsigned int rx_len; + struct cpc_header rx_header; + u8 rx_frame[GB_CPC_SPI_MSG_SIZE_MAX + CPC_SPI_CSUM_SIZE]; + u8 rx_csum[CPC_SPI_CSUM_SIZE]; +}; + +struct cpc_xfer { + u8 *data; + unsigned int total_len; + unsigned int remaining_len; +}; + +static inline struct cpc_spi *gb_hd_to_cpc_spi(struct gb_host_device *hd) +{ + return (struct cpc_spi *)&hd->hd_priv; +} + +static inline struct cpc_spi *cpc_hd_to_cpc_spi(struct cpc_host_device *cpc_hd) +{ + return container_of(cpc_hd, struct cpc_spi, cpc_hd); +} + +static int gb_cpc_spi_wake_tx(struct cpc_host_device *cpc_hd) +{ + struct cpc_spi *ctx = cpc_hd_to_cpc_spi(cpc_hd); + + wake_up_interruptible(&ctx->event_queue); + + return 0; +} + +static bool buffer_is_zeroes(const u8 *buffer, size_t length) +{ + for (size_t i = 0; i < length; i++) { + if (buffer[i] != 0) + return false; + } + + return true; +} + +static u16 gb_cpc_spi_csum(u16 
start, const u8 *buffer, size_t length) +{ + return crc_itu_t(start, buffer, length); +} + +static int gb_cpc_spi_do_xfer_header(struct cpc_spi *ctx) +{ + struct spi_transfer xfer_header = { + .rx_buf = (u8 *)&ctx->rx_header, + .len = CPC_HEADER_SIZE, + .speed_hz = ctx->spi->max_speed_hz, + }; + struct spi_transfer xfer_csum = { + .rx_buf = &ctx->rx_csum, + .len = sizeof(ctx->tx_csum), + .speed_hz = ctx->spi->max_speed_hz, + }; + enum cpc_frame_type type; + struct spi_message msg; + size_t payload_len = 0; + u16 rx_csum; + u16 csum; + int ret; + + if (ctx->tx_frame) { + u16 tx_hdr_csum = gb_cpc_spi_csum(0, (u8 *)&ctx->tx_frame->header, CPC_HEADER_SIZE); + + put_unaligned_le16(tx_hdr_csum, ctx->tx_csum); + + xfer_header.tx_buf = &ctx->tx_frame->header; + xfer_csum.tx_buf = ctx->tx_csum; + } + + spi_message_init(&msg); + spi_message_add_tail(&xfer_header, &msg); + spi_message_add_tail(&xfer_csum, &msg); + + ret = spi_sync(ctx->spi, &msg); + if (ret) + return ret; + + if (ctx->tx_frame) { + if (!ctx->tx_frame->message) { + cpc_frame_sent(ctx->tx_frame, ret); + ctx->tx_frame = NULL; + } + } + + if (buffer_is_zeroes((u8 *)&ctx->rx_header, CPC_HEADER_SIZE)) + return 0; + + rx_csum = get_unaligned_le16(&ctx->rx_csum); + csum = gb_cpc_spi_csum(0, (u8 *)&ctx->rx_header, CPC_HEADER_SIZE); + + if (rx_csum != csum || !cpc_header_get_type(&ctx->rx_header, &type)) { + /* + * If the header checksum is invalid, its length can't be trusted, receive + * the maximum payload length to recover from that situation. If the frame + * type cannot be extracted from the header, use same recovery mechanism. 
+ */ + ctx->rx_len = GB_CPC_SPI_MSG_SIZE_MAX; + + return 0; + } + + if (type == CPC_FRAME_TYPE_DATA) + payload_len = cpc_header_get_payload_len(&ctx->rx_header) + + sizeof(ctx->tx_csum); + + if (payload_len) + ctx->rx_len = payload_len; + else + cpc_hd_rcvd(&ctx->cpc_hd, &ctx->rx_header, NULL, 0); + + return 0; +} + +static int gb_cpc_spi_do_xfer_notch(struct cpc_spi *ctx) +{ + struct spi_transfer xfer = { + .tx_buf = ctx->tx_csum, + .len = 1, + .speed_hz = ctx->spi->max_speed_hz, + }; + struct spi_message msg; + + ctx->tx_csum[0] = 0; + + spi_message_init(&msg); + spi_message_add_tail(&xfer, &msg); + + return spi_sync(ctx->spi, &msg); +} + +static unsigned int fill_xfer(struct spi_transfer *xfer, + u8 **tx, unsigned int *tx_len, + u8 **rx, unsigned int *rx_len) +{ + unsigned int xfer_len = 0; + + if (*tx_len && *rx_len) + xfer_len = (*tx_len < *rx_len) ? *tx_len : *rx_len; + else if (*tx_len) + xfer_len = *tx_len; + else if (*rx_len) + xfer_len = *rx_len; + else + return 0; + + xfer->tx_buf = *tx; + xfer->rx_buf = *rx; + xfer->len = xfer_len; + + if (*tx) { + *tx += xfer_len; + *tx_len -= xfer_len; + } + + if (*rx) { + *rx += xfer_len; + *rx_len -= xfer_len; + } + + return xfer_len; +} + +static int gb_cpc_spi_do_xfer_payload(struct cpc_spi *ctx) +{ + unsigned int rx_len = ctx->rx_len ? 
ctx->rx_len + CPC_SPI_CSUM_SIZE : 0; + struct spi_transfer xfers[4]; + struct spi_message msg; + int ret; + + unsigned int tx_lens[3] = { 0 }; + u8 *tx_ptrs[3] = { NULL }; + + spi_message_init(&msg); + + if (ctx->tx_frame && ctx->tx_frame->message) { + struct gb_message *m = ctx->tx_frame->message; + unsigned int idx = 0; + u16 csum = 0; + + tx_ptrs[idx] = (u8 *)m->header; + tx_lens[idx++] = sizeof(struct gb_operation_msg_hdr); + csum = gb_cpc_spi_csum(csum, (u8 *)m->header, sizeof(struct gb_operation_msg_hdr)); + + if (m->payload_size) { + tx_ptrs[idx] = m->payload; + tx_lens[idx++] = m->payload_size; + csum = gb_cpc_spi_csum(csum, m->payload, m->payload_size); + } + + put_unaligned_le16(csum, ctx->tx_csum); + + tx_ptrs[idx] = ctx->tx_csum; + tx_lens[idx++] = CPC_SPI_CSUM_SIZE; + } + + unsigned int tx_idx = 0; + unsigned int tx_len = tx_lens[tx_idx]; + u8 *tx_ptr = tx_ptrs[tx_idx]; + u8 *rx_ptr = rx_len ? ctx->rx_frame : NULL; + + /* + * This loop goes over a list of TX elements to send. There can be 0, 2 or 3 (nothing, + * greybus header + csum, and optionally greybus payload). + * RX, if present, consists of only one element. + * [ tx_ptr1; tx_len1 ] --> [ tx_ptr2; tx_len2 ] --> [ tx_ptr3; tx_len3 ] + * [ rx_ptr1; rx_len1 ] + * + * The RX buffer can span over several TX buffers, the loop takes care of chunking into + * spi_transfer. + * + */ + for (unsigned int i = 0; i < ARRAY_SIZE(xfers); i++) { + struct spi_transfer *xfer = &xfers[i]; + + fill_xfer(xfer, &tx_ptr, &tx_len, &rx_ptr, &rx_len); + + spi_message_add_tail(xfer, &msg); + + /* + * If the rx pointer is not NULL, but the rx length is 0, it means that the rx + * buffer was fully transferred in this iteration. + */ + if (rx_ptr && !rx_len) { + rx_ptr = NULL; + + /* + * And if tx_ptr is NULL, it means there was no TX data to send, so the + * transfer is done. + */ + if (!tx_ptr) + break; + } + + /* + * If tx_len is zero, it means we can go the next TX element to transfer. 
+ */ + if (!tx_len) { + tx_idx++; + if (tx_idx < ARRAY_SIZE(tx_ptrs)) { + tx_len = tx_lens[tx_idx]; + tx_ptr = tx_ptrs[tx_idx]; + } else { + tx_len = 0; + tx_ptr = NULL; + } + + /* + * If there's nothing else to transfer and the rx_len was also NULL, + * that means the transfer is fully prepared. + */ + if (!tx_len && !rx_len) + break; + } + } + + ret = spi_sync(ctx->spi, &msg); + if (ret) + goto exit; + + if (ctx->rx_len) { + unsigned char *csum_ptr; + u16 expected_csum; + u16 csum; + + if (ret) + goto exit; + + csum_ptr = ctx->rx_frame + ctx->rx_len; + csum = get_unaligned_le16(csum_ptr); + + expected_csum = gb_cpc_spi_csum(0, ctx->rx_frame, ctx->rx_len); + + if (csum == expected_csum) + cpc_hd_rcvd(&ctx->cpc_hd, &ctx->rx_header, ctx->rx_frame, ctx->rx_len); + } + +exit: + ctx->rx_len = 0; + + return ret; +} + +static int gb_cpc_spi_do_xfer_thread(void *data) +{ + struct cpc_spi *ctx = data; + bool xfer_idle = true; + int ret; + + while (!kthread_should_stop()) { + if (xfer_idle) { + ret = wait_event_interruptible(ctx->event_queue, + (!cpc_hd_tx_queue_empty(&ctx->cpc_hd) || + atomic_read(&ctx->event_cond) == 1 || + kthread_should_stop())); + + if (ret) + continue; + + if (kthread_should_stop()) + return 0; + + if (!ctx->tx_frame) + ctx->tx_frame = cpc_hd_dequeue(&ctx->cpc_hd); + + /* + * Reset thread event right before transmission to prevent interrupts that + * happened while the thread was already awake to wake up the thread again, + * as the event is going to be handled by this iteration. 
+ */ + atomic_set(&ctx->event_cond, 0); + + ret = gb_cpc_spi_do_xfer_header(ctx); + if (!ret) + xfer_idle = false; + } else { + ret = wait_event_timeout(ctx->event_queue, + (atomic_read(&ctx->event_cond) == 1 || + kthread_should_stop()), + msecs_to_jiffies(CPC_SPI_INTERRUPT_MAX_WAIT_MS)); + if (ret == 0) { + dev_err_once(&ctx->spi->dev, "device didn't assert interrupt in a timely manner\n"); + continue; + } + + atomic_set(&ctx->event_cond, 0); + + if (!ctx->tx_frame && !ctx->rx_len) + ret = gb_cpc_spi_do_xfer_notch(ctx); + else + ret = gb_cpc_spi_do_xfer_payload(ctx); + + if (!ret) + xfer_idle = true; + } + } + + return 0; +} + +static irqreturn_t gb_cpc_spi_irq_handler(int irq, void *data) +{ + struct cpc_spi *ctx = data; + + atomic_set(&ctx->event_cond, 1); + wake_up(&ctx->event_queue); + + return IRQ_HANDLED; +} + +static int gb_cpc_spi_cport_allocate(struct gb_host_device *hd, int cport_id, unsigned long flags) +{ + struct cpc_spi *ctx = gb_hd_to_cpc_spi(hd); + struct cpc_endpoint *ep; + + for (int i = 0; i < ARRAY_SIZE(ctx->cpc_hd.endpoints); i++) { + if (ctx->cpc_hd.endpoints[i] != NULL) + continue; + + if (cport_id < 0) + cport_id = i; + + ep = cpc_endpoint_alloc(cport_id, GFP_KERNEL); + if (!ep) + return -ENOMEM; + + ep->cpc_hd = &ctx->cpc_hd; + + ctx->cpc_hd.endpoints[i] = ep; + return cport_id; + } + + return -ENOSPC; +} + +static void gb_cpc_spi_cport_release(struct gb_host_device *hd, u16 cport_id) +{ + struct cpc_spi *ctx = gb_hd_to_cpc_spi(hd); + struct cpc_endpoint *ep; + + for (int i = 0; i < ARRAY_SIZE(ctx->cpc_hd.endpoints); i++) { + ep = ctx->cpc_hd.endpoints[i]; + if (ep && ep->id == cport_id) { + cpc_endpoint_release(ep); + ctx->cpc_hd.endpoints[i] = NULL; + break; + } + } +} + +static int gb_cpc_spi_cport_enable(struct gb_host_device *hd, u16 cport_id, + unsigned long flags) +{ + struct cpc_spi *ctx = gb_hd_to_cpc_spi(hd); + struct cpc_endpoint *ep; + + ep = cpc_hd_get_endpoint(&ctx->cpc_hd, cport_id); + if (!ep) + return -ENODEV; + + return 
cpc_endpoint_connect(ep); +} + +static int gb_cpc_spi_cport_disable(struct gb_host_device *hd, u16 cport_id) +{ + struct cpc_spi *ctx = gb_hd_to_cpc_spi(hd); + struct cpc_endpoint *ep; + + ep = cpc_hd_get_endpoint(&ctx->cpc_hd, cport_id); + if (!ep) + return -ENODEV; + + return cpc_endpoint_disconnect(ep); +} + +static int gb_cpc_spi_message_send(struct gb_host_device *hd, u16 cport_id, + struct gb_message *message, gfp_t gfp_mask) +{ + struct cpc_spi *ctx = gb_hd_to_cpc_spi(hd); + struct cpc_endpoint *ep; + struct cpc_frame *frame; + + frame = cpc_frame_alloc(message, gfp_mask); + if (!frame) + return -ENOMEM; + + ep = cpc_hd_get_endpoint(&ctx->cpc_hd, cport_id); + if (!ep) { + cpc_frame_free(frame); + return -ENODEV; + } + + message->hcpriv = frame; + + return cpc_endpoint_frame_send(ep, frame); +} + +static void gb_cpc_spi_message_cancel(struct gb_message *message) +{ + struct cpc_frame *frame = message->hcpriv; + + frame->cancelled = true; +} + +static struct gb_hd_driver gb_cpc_driver = { + .hd_priv_size = sizeof(struct cpc_spi), + .message_send = gb_cpc_spi_message_send, + .message_cancel = gb_cpc_spi_message_cancel, + .cport_allocate = gb_cpc_spi_cport_allocate, + .cport_release = gb_cpc_spi_cport_release, + .cport_enable = gb_cpc_spi_cport_enable, + .cport_disable = gb_cpc_spi_cport_disable, +}; + + +static int cpc_spi_probe(struct spi_device *spi) +{ + struct gb_host_device *hd; + struct cpc_spi *ctx; + int ret; + + if (!spi->irq) { + dev_err(&spi->dev, "cannot function without IRQ, please provide one\n"); + return -EINVAL; + } + + hd = gb_hd_create(&gb_cpc_driver, &spi->dev, + GB_CPC_SPI_MSG_SIZE_MAX, GB_CPC_SPI_NUM_CPORTS); + if (IS_ERR(hd)) + return PTR_ERR(hd); + + ctx = gb_hd_to_cpc_spi(hd); + ctx->cpc_hd.gb_hd = hd; + ctx->cpc_hd.wake_tx = gb_cpc_spi_wake_tx; + + spi_set_drvdata(spi, ctx); + + ret = gb_hd_add(hd); + if (ret) + goto err_hd_del; + + ret = request_irq(spi->irq, gb_cpc_spi_irq_handler, IRQF_TRIGGER_FALLING, + dev_name(&spi->dev), ctx); + 
if (ret) + goto err_hd_remove; + + ctx->task = kthread_run(gb_cpc_spi_do_xfer_thread, ctx, "%s", + dev_name(&spi->dev)); + if (IS_ERR(ctx->task)) { + ret = PTR_ERR(ctx->task); + goto free_irq; + } + + return 0; + +free_irq: + free_irq(spi->irq, ctx); +err_hd_remove: + gb_hd_del(hd); +err_hd_del: + gb_hd_put(hd); + + return ret; +} + +static void cpc_spi_remove(struct spi_device *spi) +{ + struct cpc_spi *ctx = spi_get_drvdata(spi); + + kthread_stop(ctx->task); + free_irq(spi->irq, ctx); + gb_hd_del(ctx->cpc_hd.gb_hd); + gb_hd_put(ctx->cpc_hd.gb_hd); +} + +static const struct of_device_id cpc_dt_ids[] = { + { .compatible = "silabs,cpc-spi" }, + {}, +}; +MODULE_DEVICE_TABLE(of, cpc_dt_ids); + +static const struct spi_device_id cpc_spi_ids[] = { + { .name = "cpc-spi" }, + {}, +}; +MODULE_DEVICE_TABLE(spi, cpc_spi_ids); + +static struct spi_driver gb_cpc_spi_driver = { + .driver = { + .name = "cpc-spi", + .of_match_table = cpc_dt_ids, + }, + .probe = cpc_spi_probe, + .remove = cpc_spi_remove, +}; + +module_spi_driver(gb_cpc_spi_driver); + +MODULE_DESCRIPTION("Greybus Host Driver for Silicon Labs devices using SPI"); +MODULE_LICENSE("GPL"); +MODULE_AUTHOR("Damien Riégel damien.riegel@silabs.com");
This adds a helper function for unidirectional asynchronous transfer. This is just for convenience as some drivers do these steps manually, like the loopback driver in gb_loopback_async_operation().
Signed-off-by: Damien Riégel <damien.riegel@silabs.com>
---
 drivers/greybus/operation.c       | 52 +++++++++++++++++++++++++++
 include/linux/greybus/operation.h |  4 +++
 2 files changed, 56 insertions(+)
diff --git a/drivers/greybus/operation.c b/drivers/greybus/operation.c index 8459e9bc074..a599b9d36cf 100644 --- a/drivers/greybus/operation.c +++ b/drivers/greybus/operation.c @@ -1174,6 +1174,58 @@ int gb_operation_sync_timeout(struct gb_connection *connection, int type, } EXPORT_SYMBOL_GPL(gb_operation_sync_timeout);
+/** + * gb_operation_unidirectional_async_timeout() - initiate an asynchronous unidirectional operation + * @connection: connection to use + * @callback: function called when operation completes + * @data: user-data, retrieved with gb_operation_get_data() + * @type: type of operation to send + * @request: memory buffer to copy the request from + * @request_size: size of @request + * @timeout: send timeout in milliseconds + * + * Initiate a unidirectional operation by sending a request message. Completion is notified by the + * user-provided callback. User can determine operation status with gb_operation_result(). + * operation must be released with gb_operation_put(). + * + * Note that successful send of a unidirectional operation does not imply that + * the request as actually reached the remote end of the connection. + */ +int gb_operation_unidirectional_async_timeout(struct gb_connection *connection, + gb_operation_callback callback, void *data, + int type, void *request, int request_size, + unsigned int timeout) +{ + struct gb_operation *operation; + int ret; + + if (request_size && !request) + return -EINVAL; + + operation = gb_operation_create_flags(connection, type, + request_size, 0, + GB_OPERATION_FLAG_UNIDIRECTIONAL, + GFP_KERNEL); + if (!operation) + return -ENOMEM; + + gb_operation_set_data(operation, data); + + if (request_size) + memcpy(operation->request->payload, request, request_size); + + ret = gb_operation_request_send(operation, callback, timeout, GFP_KERNEL); + if (ret) { + dev_err(&connection->hd->dev, + "%s: asynchronous operation id 0x%04x of type 0x%02x failed: %d\n", + connection->name, operation->id, type, ret); + gb_operation_put(operation); + } + + return ret; +} +EXPORT_SYMBOL_GPL(gb_operation_unidirectional_async_timeout); + /** * gb_operation_unidirectional_timeout() - initiate a unidirectional operation * @connection: connection to use diff --git a/include/linux/greybus/operation.h b/include/linux/greybus/operation.h index 
cb8e4ef4522..01dd1d89d89 100644 --- a/include/linux/greybus/operation.h +++ b/include/linux/greybus/operation.h @@ -192,6 +192,10 @@ int gb_operation_sync_timeout(struct gb_connection *connection, int type, void *request, int request_size, void *response, int response_size, unsigned int timeout); +int gb_operation_unidirectional_async_timeout(struct gb_connection *connection, + gb_operation_callback callback, void *data, + int type, void *request, int request_size, + unsigned int timeout); int gb_operation_unidirectional_timeout(struct gb_connection *connection, int type, void *request, int request_size, unsigned int timeout);
On Fri, Jul 04, 2025 at 08:40:34PM -0400, Damien Riégel wrote:
This adds a helper function for unidirectional asynchronous transfer. This is just for convenience as some drivers do these steps manually, like the loopback driver in gb_loopback_async_operation().
Signed-off-by: Damien Riégel <damien.riegel@silabs.com>
 drivers/greybus/operation.c       | 52 +++++++++++++++++++++++++++
 include/linux/greybus/operation.h |  4 +++
 2 files changed, 56 insertions(+)
Shouldn't you convert the loopback driver over to use this, so it's not just increasing the overall code size, and we can see how it will be used?
thanks,
greg k-h
When matching a device, only the vendor ID and product ID are used. As all bundles in an interface share the same VID and PID, there is no way to differentiate between two bundles, and they are forced to use the same driver.
To allow using several vendor bundles in the same device, include the bundle ID when matching. The assumption here is that bundle IDs are stable across the lifespan of a product and never change.
The goal of this change is to open the discussion. Greybus standardizes a bunch of protocols like GPIO, SPI, etc., but also has provisions for vendor bundles and protocols. There is only one ID reserved for vendor, 0xFF, so the question is: did Greybus ever envision supporting several vendor bundles, or one vendor bundle with several vendor cports in it? Or was the assumption always that there could be at most one vendor cport?
Signed-off-by: Damien Riégel <damien.riegel@silabs.com>
---
 drivers/greybus/core.c             | 4 ++++
 include/linux/greybus.h            | 7 ++++---
 include/linux/greybus/greybus_id.h | 2 ++
 3 files changed, 10 insertions(+), 3 deletions(-)
diff --git a/drivers/greybus/core.c b/drivers/greybus/core.c index 313eb65cf70..a4968a24a08 100644 --- a/drivers/greybus/core.c +++ b/drivers/greybus/core.c @@ -68,6 +68,10 @@ static bool greybus_match_one_id(struct gb_bundle *bundle, (id->product != bundle->intf->product_id)) return false;
+ if ((id->match_flags & GREYBUS_ID_MATCH_BUNDLE_ID) && + (id->bundle_id != bundle->id)) + return false; + if ((id->match_flags & GREYBUS_ID_MATCH_CLASS) && (id->class != bundle->class)) return false; diff --git a/include/linux/greybus.h b/include/linux/greybus.h index 4d58e27ceaf..9c29a1099a4 100644 --- a/include/linux/greybus.h +++ b/include/linux/greybus.h @@ -38,12 +38,13 @@ #define GREYBUS_VERSION_MINOR 0x01
#define GREYBUS_ID_MATCH_DEVICE \ - (GREYBUS_ID_MATCH_VENDOR | GREYBUS_ID_MATCH_PRODUCT) + (GREYBUS_ID_MATCH_VENDOR | GREYBUS_ID_MATCH_PRODUCT | GREYBUS_ID_MATCH_BUNDLE_ID)
-#define GREYBUS_DEVICE(v, p) \ +#define GREYBUS_DEVICE(v, p, id) \ .match_flags = GREYBUS_ID_MATCH_DEVICE, \ .vendor = (v), \ - .product = (p), + .product = (p), \ + .bundle_id = (id),
#define GREYBUS_DEVICE_CLASS(c) \ .match_flags = GREYBUS_ID_MATCH_CLASS, \ diff --git a/include/linux/greybus/greybus_id.h b/include/linux/greybus/greybus_id.h index f4c8440093e..3e0728e1f44 100644 --- a/include/linux/greybus/greybus_id.h +++ b/include/linux/greybus/greybus_id.h @@ -15,6 +15,7 @@ struct greybus_bundle_id { __u32 vendor; __u32 product; __u8 class; + __u8 bundle_id;
kernel_ulong_t driver_info __aligned(sizeof(kernel_ulong_t)); }; @@ -23,5 +24,6 @@ struct greybus_bundle_id { #define GREYBUS_ID_MATCH_VENDOR BIT(0) #define GREYBUS_ID_MATCH_PRODUCT BIT(1) #define GREYBUS_ID_MATCH_CLASS BIT(2) +#define GREYBUS_ID_MATCH_BUNDLE_ID BIT(3)
#endif /* __LINUX_GREYBUS_ID_H */
On Fri, Jul 04, 2025 at 08:40:35PM -0400, Damien Riégel wrote:
When matching a device, only the vendor ID and product ID are used.
It shouldn't be that way. That was not the intention.
As all bundles in an interface share the same VID and PID, there is no way to differentiate between two bundles, and they are forced to use the same driver.
To allow using several vendor bundles in the same device, include the bundle ID when matching. The assumption here is that bundle IDs are stable across the lifespan of a product and never change.
The goal of this change is to open the discussion. Greybus standardizes a bunch of protocols like GPIO, SPI, etc., but also has provisions for vendor bundles and protocols. There is only one ID reserved for vendor, 0xFF, so the question is: did Greybus ever envision supporting several vendor bundles, or one vendor bundle with several vendor cports in it? Or was the assumption always that there could be at most one vendor cport?
The goal was to emulate what USB does here. If you have a vendor-specific protocol, then set the vendor protocol id (0xff) and then trigger off of the VID and PID. Then you can do whatever you want here in your driver as it's a vendor-specific one.
So you are wanting multiple devices with the same vid/pid that do different things? Why not just change the PID?
Like with USB, a bundle id is not guaranteed to be "static", BUT if you want to make that distinction in your driver that is a vendor-specific one, go ahead. Again, that should be like USB interface numbers, right?
thanks,
greg k-h
This class only supports one type of operation:
- name: BLE_TRANSFER
- id: 0x01
- unidirectional
- payload:
  - first byte: HCI packet type
  - remaining bytes: HCI packet
Implementation is very naive and doesn't keep track of in-flight frames. The goal of this commit is mostly to open a discussion. What would be the process to add new bundle and protocol to Greybus? Should Linux be considered the actual standard (as it already differs in subtle ways from the official specification)? Or should the (official? [1]) specifications be updated first?
[1] https://github.com/projectara/greybus-spec
Signed-off-by: Damien Riégel <damien.riegel@silabs.com>
---
 MAINTAINERS                          |   6 +
 drivers/staging/greybus/Kconfig      |   9 ++
 drivers/staging/greybus/Makefile     |   6 +
 drivers/staging/greybus/silabs-ble.c | 203 +++++++++++++++++++++++++++
 4 files changed, 224 insertions(+)
 create mode 100644 drivers/staging/greybus/silabs-ble.c
diff --git a/MAINTAINERS b/MAINTAINERS index 10385b5344b..ea0923741cf 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -10009,6 +10009,12 @@ F: drivers/staging/greybus/sdio.c F: drivers/staging/greybus/spi.c F: drivers/staging/greybus/spilib.c
+GREYBUS BLUETOOTH DRIVER +M: Damien Riégel <damien.riegel@silabs.com> +R: Silicon Labs Kernel Team <linux-devel@silabs.com> +S: Supported +F: drivers/staging/greybus/silabs-ble.c + GREYBUS BEAGLEPLAY DRIVERS M: Ayush Singh <ayushdevel1325@gmail.com> L: greybus-dev@lists.linaro.org (moderated for non-subscribers) diff --git a/drivers/staging/greybus/Kconfig b/drivers/staging/greybus/Kconfig index 1e745a8d439..3d14eabb196 100644 --- a/drivers/staging/greybus/Kconfig +++ b/drivers/staging/greybus/Kconfig @@ -213,4 +213,13 @@ config GREYBUS_ARCHE To compile this code as a module, chose M here: the module will be called gb-arche.ko
+config GREYBUS_SILABS_BLUETOOTH + tristate "Greybus Silabs Bluetooth Class driver" + help + Select this option if you have a Silicon Labs device that + supports Bluetooth over Greybus. + + To compile this code as a module, choose M here: the module + will be called gb-silabs-ble.ko + endif # GREYBUS diff --git a/drivers/staging/greybus/Makefile b/drivers/staging/greybus/Makefile index 7c5e8962233..c61e402595a 100644 --- a/drivers/staging/greybus/Makefile +++ b/drivers/staging/greybus/Makefile @@ -71,3 +71,9 @@ obj-$(CONFIG_GREYBUS_USB) += gb-usb.o gb-arche-y := arche-platform.o arche-apb-ctrl.o
obj-$(CONFIG_GREYBUS_ARCHE) += gb-arche.o + + +# Greybus vendor driver +gb-silabs-ble-y := silabs-ble.o + +obj-$(CONFIG_GREYBUS_SILABS_BLUETOOTH) += gb-silabs-ble.o diff --git a/drivers/staging/greybus/silabs-ble.c b/drivers/staging/greybus/silabs-ble.c new file mode 100644 index 00000000000..588e8e067e2 --- /dev/null +++ b/drivers/staging/greybus/silabs-ble.c @@ -0,0 +1,203 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Driver for Bluetooth HCI over Greybus. + * + * Copyright (c) 2025, Silicon Laboratories, Inc. + */ + +#include <linux/greybus.h> +#include <linux/skbuff.h> +#include <net/bluetooth/bluetooth.h> +#include <net/bluetooth/hci_core.h> + +#define GREYBUS_VENDOR_SILABS 0xBEEF +#define GREYBUS_PRODUCT_EFX 0xCAFE + +#define GB_BLE_TRANSFER 0x01 + +struct gb_ble { + struct gb_connection *conn; + struct hci_dev *hdev; + struct sk_buff_head txq; +}; + +static int gb_ble_open(struct hci_dev *hdev) +{ + struct gb_ble *ble = hci_get_drvdata(hdev); + + skb_queue_head_init(&ble->txq); + + return gb_connection_enable(ble->conn); +} + +static int gb_ble_close(struct hci_dev *hdev) +{ + struct gb_ble *ble = hci_get_drvdata(hdev); + + gb_connection_disable(ble->conn); + + return 0; +} + +static void gb_ble_xfer_done(struct gb_operation *operation) +{ + struct sk_buff *skb = gb_operation_get_data(operation); + + kfree_skb(skb); +} + +static int gb_ble_send(struct hci_dev *hdev, struct sk_buff *skb) +{ + struct gb_ble *ble = hci_get_drvdata(hdev); + int ret; + + memcpy(skb_push(skb, 1), &hci_skb_pkt_type(skb), 1); + + ret = gb_operation_unidirectional_async_timeout(ble->conn, + gb_ble_xfer_done, skb, + GB_BLE_TRANSFER, + skb->data, skb->len, + GB_OPERATION_TIMEOUT_DEFAULT); + + return ret; +} + +static int gb_ble_request_handler(struct gb_operation *operation) +{ + struct gb_ble *ble = gb_connection_get_data(operation->connection); + struct device *dev = &operation->connection->bundle->dev; + struct sk_buff *skb; + unsigned int skb_len; + + switch 
(operation->type) { + case GB_BLE_TRANSFER: + /* Must be unidirectional as AP is not responding to this request. */ + if (!gb_operation_is_unidirectional(operation)) + return -EINVAL; + + if (operation->request->payload_size < 2) + return -EINVAL; + + skb_len = operation->request->payload_size - 1; + skb = bt_skb_alloc(skb_len, GFP_KERNEL); + if (!skb) + return -ENOMEM; + + /* Prepare HCI SKB and pass it to upper layer */ + hci_skb_pkt_type(skb) = ((u8 *)operation->request->payload)[0]; + memcpy(skb_put(skb, skb_len), &(((u8 *)operation->request->payload)[1]), skb_len); + hci_skb_expect(skb) = skb_len; + + hci_recv_frame(ble->hdev, skb); + + break; + default: + dev_err(dev, "unsupported request: %u\n", operation->type); + return -EINVAL; + } + + return 0; +} + +static int gb_ble_probe(struct gb_bundle *bundle, + const struct greybus_bundle_id *id) +{ + struct greybus_descriptor_cport *cport_desc; + struct gb_connection *connection; + struct gb_ble *ble; + int err; + + if (bundle->num_cports != 1) + return -ENODEV; + + cport_desc = &bundle->cport_desc[0]; + if (cport_desc->protocol_id != GREYBUS_PROTOCOL_VENDOR) + return -ENODEV; + + ble = kzalloc(sizeof(*ble), GFP_KERNEL); + if (!ble) { + err = -ENOMEM; + goto alloc_ble_fail; + } + + greybus_set_drvdata(bundle, ble); + + connection = gb_connection_create(bundle, le16_to_cpu(cport_desc->id), + gb_ble_request_handler); + if (IS_ERR(connection)) { + err = PTR_ERR(connection); + goto connection_create_fail; + } + + gb_connection_set_data(connection, ble); + ble->conn = connection; + ble->hdev = hci_alloc_dev(); + if (!ble->hdev) { + err = -ENOMEM; + goto alloc_hdev_fail; + } + + hci_set_drvdata(ble->hdev, ble); + ble->hdev->open = gb_ble_open; + ble->hdev->close = gb_ble_close; + ble->hdev->send = gb_ble_send; + + err = hci_register_dev(ble->hdev); + if (err) + goto register_hdev_fail; + + return 0; + +register_hdev_fail: + hci_free_dev(ble->hdev); +alloc_hdev_fail: + gb_connection_destroy(connection); 
+connection_create_fail: + kfree(ble); +alloc_ble_fail: + return err; +} + +static void gb_ble_disconnect(struct gb_bundle *bundle) +{ + struct gb_ble *ble = greybus_get_drvdata(bundle); + + hci_unregister_dev(ble->hdev); + hci_free_dev(ble->hdev); + + /* + * The connection should be disabled by now as unregistering the HCI device + * calls its close callback, so it should be safe to destroy the connection. + */ + gb_connection_destroy(ble->conn); + + kfree(ble); +} + +static const struct greybus_bundle_id gb_silabs_ble_id_table[] = { + { GREYBUS_DEVICE(GREYBUS_VENDOR_SILABS, GREYBUS_PRODUCT_EFX, 1) }, + { } +}; +MODULE_DEVICE_TABLE(greybus, gb_silabs_ble_id_table); + +static struct greybus_driver gb_silabs_ble_driver = { + .name = "silabs-ble", + .probe = gb_ble_probe, + .disconnect = gb_ble_disconnect, + .id_table = gb_silabs_ble_id_table, +}; + +static int silabs_ble_init(void) +{ + return greybus_register(&gb_silabs_ble_driver); +} +module_init(silabs_ble_init); + +static void __exit silabs_ble_exit(void) +{ + greybus_deregister(&gb_silabs_ble_driver); +} +module_exit(silabs_ble_exit); + +MODULE_DESCRIPTION("Bluetooth Driver for Silicon Labs EFx devices over Greybus"); +MODULE_LICENSE("GPL");
On Fri, Jul 04, 2025 at 08:40:36PM -0400, Damien Riégel wrote:
+#include <linux/greybus.h> +#include <linux/skbuff.h> +#include <net/bluetooth/bluetooth.h> +#include <net/bluetooth/hci_core.h>
+#define GREYBUS_VENDOR_SILABS 0xBEEF +#define GREYBUS_PRODUCT_EFX 0xCAFE
Nice vendor ids :)
We really should make a file for all of the current ones, to keep track of them. At the least, the vendor ids should be made unique.
Overall this looks very good, a clean and small driver. Would you have other ones for this type of transport layer?
thanks,
greg k-h