On Wed, 3 May 2023 03:35:29 +0000 Zack Rusin zackr@vmware.com wrote:
On Tue, 2023-05-02 at 11:32 +0200, Javier Martinez Canillas wrote:
!! External Email
Daniel Vetter daniel@ffwll.ch writes:
On Mon, Jul 11, 2022 at 11:32:39PM -0400, Zack Rusin wrote:
From: Zack Rusin zackr@vmware.com
Cursor planes on virtualized drivers have special meaning and require that clients handle them in specific ways: the cursor plane should react to mouse movement the way a mouse cursor would, and the client is required to set hotspot properties on it in order for mouse events to be routed correctly.
This breaks the contract as specified by the "universal planes". Fix it by disabling the cursor planes on virtualized drivers while adding a foundation on top of which it's possible to special case mouse cursor planes for clients that want it.
Disabling the cursor planes makes some previously broken KMS compositors, e.g. Weston, fall back to a software cursor, which works fine, or at least better than currently. It has no effect on others, e.g. gnome-shell or kwin, which put virtualized drivers on a deny-list when running in atomic context, making them fall back to legacy KMS and avoid this issue.
Signed-off-by: Zack Rusin zackr@vmware.com
Fixes: 681e7ec73044 ("drm: Allow userspace to ask for universal plane list (v2)")
[...]
diff --git a/include/drm/drm_drv.h b/include/drm/drm_drv.h
index f6159acb8856..c4cd7fc350d9 100644
--- a/include/drm/drm_drv.h
+++ b/include/drm/drm_drv.h
@@ -94,6 +94,16 @@ enum drm_driver_feature {
	 * synchronization of command submission.
	 */
	DRIVER_SYNCOBJ_TIMELINE = BIT(6),
+	/**
+	 * @DRIVER_VIRTUAL:
+	 *
+	 * Driver is running on top of virtual hardware. The most significant
+	 * implication of this is a requirement of special handling of the
+	 * cursor plane (e.g. cursor plane has to actually track the mouse
+	 * cursor and the clients are required to set hotspot in order for
+	 * the cursor planes to work correctly).
+	 */
+	DRIVER_VIRTUAL = BIT(7),
I think the naming here is unfortunate, because people will wonder why e.g. vkms doesn't set this, and then add it, and confuse stuff completely.
Also it feels a bit wrong to put this onto the driver, when really it's a cursor flag. I guess you can make it some kind of flag in the drm_plane structure, or a new plane type, but putting it there instead of into the "random pile of midlayer-mistake driver flags" would be a lot better.
Otherwise I think the series looks roughly how I'd expect it to look. -Daniel
AFAICT this is the only remaining thing to be addressed for this series ?
No, there was more. tbh I haven't had the time to think about whether the above makes sense to me. E.g. I'm not sure whether having virtualized drivers expose "support universal planes" and adding another plane which is not universal (the only "universal" plane on them being the default one) makes more sense than a flag that says "this driver requires a cursor in the cursor plane". There's certainly a huge difference in how userspace would be required to handle it, and it's way uglier with two different cursor planes. I.e. there are a lot of ways in which this could be cleaner in the kernel, but they all require significant changes to userspace that go way beyond "attach hotspot info to this plane".
I'd like to avoid approaches that mean running with atomic kms requires completely separate paths for virtualized drivers because no one will ever support and maintain it.
Hi Zack,
you'd like to avoid that, but fundamentally that really is what has to happen in userspace for *nested* KMS drivers (VKMS is a virtual driver but not part of the interest group here) to reach optimality.
It really is a different path. I see no way around that. But if you accept that fact, then you could possibly gain a lot more benefits by asking userspace to handle nested KMS drivers differently. What those benefits are exactly I'm not sure, but I have a feeling there should be some, where the knowledge of running on a nested KMS driver allows for better decisions that are not possible if the nested KMS driver just pretends to be like any other KMS hardware driver.
You can get up to some level of interoperability by pretending to be just like any other KMS driver, but if you want to optimize things, I feel that's a whole different story. It's a trade-off.
I think frame timing is one thing. A nested KMS driver increases the depth of the "swapchain" between the guest KMS app and the actual hardware. This is unexpected if userspace does not know it is running on a nested KMS driver.
The existing KMS uAPI, both legacy and atomic, have been written for classic hardware. One fundamental result of that is the page flip completion event, it signals two things simultaneously: the new framebuffer is in use, and a new flip can be programmed. On a nested driver, these two are not the same thing: the nested driver can take another flip before the new framebuffer is actually being used in the host. More importantly, the nested driver can take a new flip before the old replaced framebuffer has actually been retired.
(The above can tie into the question of making the KMS swapchain deeper in general, also for classic scanout design, in connection to present-not-before-timestamp queueing at KMS level.)
However, as long as these two are the same event, it can decimate the framerate on a nested driver, because userspace is not prepared for a swapchain depth of greater than one. Or, the nested KMS driver gives up on zero-copy. Or, you need a fragile timing arrangement that essentially needs to be hand-configured in at least one display system, guest or host.
Somewhat related, there is also the matter of KMS drivers (hardware, nested, and virtual) that do not lock their page flip events to a hardware scanout cycle (because there is none) but "complete" any flip immediately. That too requires explicit handling in userspace, because you simply do not have a scanout cycle to lock on to.
We already have an example where userspace is explicitly helping "unusual" KMS drivers: FB_DAMAGE_CLIPS. While educating userspace does take considerable effort, I'd like to believe it is doable, and it is also necessary for optimality. Excellent KMS documentation is key, naturally.
Of course, it is up to you and other people to decide to want to do the work or not. I just feel you could potentially gain a lot if you decide to take on that fight.
It's not a trivial thing because it's fundamentally hard to untangle the fact that virtualized drivers have been advertising universal plane support without ever supporting universal planes. Especially because most new userspace in general checks for "universal planes" to expose atomic KMS paths.
That's not just userspace, it's built into the kernel UAPI design that you cannot have atomic without universal planes.
Thanks, pq
The other thing blocking this series was the testing of all the edge cases. I think Simon and Pekka had some ideas for things to test (e.g. run mutter with support for this and wayland without support for this at the same time in different consoles and see what happens). I never had the time to do that either.
Zack, are you planning to re-spin a v3 of this patch-set? Asking because we want to take virtio-gpu out of the atomic KMS deny list in mutter, but first need this to land.
If you think you won't be able to do it in the short term, Bilal (Cc'ed) or me would be glad to help with that.
This has been on my todo for a while; I just never had the time to go through all the remaining issues. Fundamentally it's not so much a technical issue anymore, it's about picking the least broken solution and trying to make the best out of a pretty bad situation. In general it's hard to paint a bikeshed if all you have is a million shades of gray ;)
z