Currently, if a user enqueues a work item using schedule_delayed_work(), the wq used is "system_wq" (a per-cpu wq), while queue_delayed_work() uses WORK_CPU_UNBOUND (used when a CPU is not specified). The same applies to schedule_work(), which uses system_wq, and to queue_work(), which again makes use of WORK_CPU_UNBOUND. This lack of consistency cannot be addressed without refactoring the API.
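For context, the relevant wrappers in include/linux/workqueue.h look roughly like this (paraphrased for illustration; exact definitions vary across kernel versions):

	/* schedule_work() hard-codes the per-cpu system_wq... */
	static inline bool schedule_work(struct work_struct *work)
	{
		return queue_work(system_wq, work);
	}

	/* ...while queue_work() funnels through WORK_CPU_UNBOUND */
	static inline bool queue_work(struct workqueue_struct *wq,
				      struct work_struct *work)
	{
		return queue_work_on(WORK_CPU_UNBOUND, wq, work);
	}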
alloc_workqueue() treats all queues as per-CPU by default, while unbound workqueues must opt in via WQ_UNBOUND.
This default is suboptimal: most workloads benefit from unbound queues, allowing the scheduler to place worker threads where they’re needed and reducing noise when CPUs are isolated.
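To illustrate the current opt-in (the workqueue name here is hypothetical):

	/* flags == 0: implicitly per-CPU today */
	wq = alloc_workqueue("example_wq", 0, 0);

	/* unbound behavior must be requested explicitly */
	wq = alloc_workqueue("example_wq", WQ_UNBOUND, 0);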
This continues the effort to refactor workqueue APIs, which began with the introduction of new workqueues and a new alloc_workqueue flag in:
commit 128ea9f6ccfb ("workqueue: Add system_percpu_wq and system_dfl_wq")
commit 930c2ea566af ("workqueue: Add new WQ_PERCPU flag")
This change uses the new WQ_PERCPU flag to explicitly request a per-cpu workqueue from alloc_workqueue() when WQ_UNBOUND has not been specified.
With the introduction of the WQ_PERCPU flag (equivalent to !WQ_UNBOUND), any alloc_workqueue() caller that doesn’t explicitly specify WQ_UNBOUND must now use WQ_PERCPU.
Once migration is complete, WQ_UNBOUND can be removed and unbound will become the implicit default.
Suggested-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Marco Crivellari <marco.crivellari@suse.com>
---
 drivers/greybus/operation.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/greybus/operation.c b/drivers/greybus/operation.c
index 54ccc434a1f7..7e12ffb2dd60 100644
--- a/drivers/greybus/operation.c
+++ b/drivers/greybus/operation.c
@@ -1238,7 +1238,7 @@ int __init gb_operation_init(void)
 		goto err_destroy_message_cache;
 
 	gb_operation_completion_wq = alloc_workqueue("greybus_completion",
-						     0, 0);
+						     WQ_PERCPU, 0);
 	if (!gb_operation_completion_wq)
 		goto err_destroy_operation_cache;
Please use just
greybus:
as prefix.
Note that hardly any driver subsystems include "drivers/" in the commit summary.
On Fri, Nov 07, 2025 at 02:21:49PM +0100, Marco Crivellari wrote:
Currently, if a user enqueues a work item using schedule_delayed_work(), the wq used is "system_wq" (a per-cpu wq), while queue_delayed_work() uses WORK_CPU_UNBOUND (used when a CPU is not specified). The same applies to schedule_work(), which uses system_wq, and to queue_work(), which again makes use of WORK_CPU_UNBOUND. This lack of consistency cannot be addressed without refactoring the API.
Apart from the naming of the WORK_CPU_UNBOUND macro I don't see the inconsistency here. We queue on the local CPU as documented (unless the CPU is not in the wq_unbound cpumask for unbound workqueues).
Not sure how explicitly marking percpu workqueues is going to change this either, so this paragraph doesn't seem relevant for the change at hand.
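For reference, the local-CPU behavior described above matches the dispatch logic in kernel/workqueue.c; a condensed, non-verbatim paraphrase of __queue_work() (details vary across kernel versions):

	/* simplified sketch, not the literal kernel source */
	if (req_cpu == WORK_CPU_UNBOUND) {
		if (wq->flags & WQ_UNBOUND)
			/* unbound wq: pick a CPU within the wq_unbound cpumask */
			cpu = wq_select_unbound_cpu(raw_smp_processor_id());
		else
			/* per-cpu wq: queue on the local CPU, as documented */
			cpu = raw_smp_processor_id();
	}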
alloc_workqueue() treats all queues as per-CPU by default, while unbound workqueues must opt in via WQ_UNBOUND.
This default is suboptimal: most workloads benefit from unbound queues, allowing the scheduler to place worker threads where they’re needed and reducing noise when CPUs are isolated.
This continues the effort to refactor workqueue APIs, which began with the introduction of new workqueues and a new alloc_workqueue flag in:
commit 128ea9f6ccfb ("workqueue: Add system_percpu_wq and system_dfl_wq")
commit 930c2ea566af ("workqueue: Add new WQ_PERCPU flag")
This change uses the new WQ_PERCPU flag to explicitly request a per-cpu workqueue from alloc_workqueue() when WQ_UNBOUND has not been specified.
With the introduction of the WQ_PERCPU flag (equivalent to !WQ_UNBOUND), any alloc_workqueue() caller that doesn’t explicitly specify WQ_UNBOUND must now use WQ_PERCPU.
Once migration is complete, WQ_UNBOUND can be removed and unbound will become the implicit default.
Fair enough, the default is about to be changed.
Suggested-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Marco Crivellari <marco.crivellari@suse.com>
With an updated commit message you can add my:
Reviewed-by: Johan Hovold <johan@kernel.org>
 drivers/greybus/operation.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/greybus/operation.c b/drivers/greybus/operation.c
index 54ccc434a1f7..7e12ffb2dd60 100644
--- a/drivers/greybus/operation.c
+++ b/drivers/greybus/operation.c
@@ -1238,7 +1238,7 @@ int __init gb_operation_init(void)
 		goto err_destroy_message_cache;
 
 	gb_operation_completion_wq = alloc_workqueue("greybus_completion",
-						     0, 0);
+						     WQ_PERCPU, 0);
 	if (!gb_operation_completion_wq)
 		goto err_destroy_operation_cache;
Johan
Hi,
On Wed, Nov 12, 2025 at 4:00 PM Johan Hovold <johan@kernel.org> wrote:
Please use just
greybus:

as prefix.
I will do it, thanks. I think I saw a couple of commits with that prefix, so I used it; I should have looked more carefully.
On Fri, Nov 07, 2025 at 02:21:49PM +0100, Marco Crivellari wrote:
Currently, if a user enqueues a work item using schedule_delayed_work(), the wq used is "system_wq" (a per-cpu wq), while queue_delayed_work() uses WORK_CPU_UNBOUND (used when a CPU is not specified). The same applies to schedule_work(), which uses system_wq, and to queue_work(), which again makes use of WORK_CPU_UNBOUND. This lack of consistency cannot be addressed without refactoring the API.
Apart from the naming of the WORK_CPU_UNBOUND macro I don't see the inconsistency here. We queue on the local CPU as documented (unless the CPU is not in the wq_unbound cpumask for unbound workqueues).
Not sure how explicitly marking percpu workqueues is going to change this either, so this paragraph doesn't seem relevant for the change at hand.
That part is there only to give more context, but I can remove it from the log and start directly with the changes to the workqueue API.
Fair enough, the default is about to be changed.
For now we're only making it explicit whether a workqueue is per-cpu or not. But yes, in the future, the default will change.
Suggested-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Marco Crivellari <marco.crivellari@suse.com>
With an updated commit message you can add my:
Reviewed-by: Johan Hovold <johan@kernel.org>
I will send the v2 changing the log and adding your tag.
Thanks!