Hi,

Sorry if I'm covering any ground you guys have already gone over; I thought this list was just continuing to be quiet, but it turns out it was all landing in my spam folder ...
On Wed, Apr 20, 2011 at 01:23:18PM +0100, Tom Cooksey wrote:
There's a big difference between GEM and what we're trying to do. GEM is designed to manage _all_ buffers used by the graphics hardware. What I believe we're trying to do is only provide a manager which allows buffers to be shared between devices. Of all the buffers and textures a GPU needs to access, only a tiny fraction of them need to be shared between devices and userspace processes. How large that fraction is I don't know, it might still be approaching the 1024 limit, but I doubt it...
I can see this being problematic from an upstream point of view. We already have two memory managers in the kernel (GEM and TTM, even if TTM is thankfully built on top of GEM); adding another graphics memory management framework for this limited use case, while explicitly requiring drivers to support or implement yet another, sounds like it will get Linus upset at us all again[0]. And, given the parameters, it sounds very much like the intention is to let everyone keep their own custom memory managers and not bother with GEM or TTM.
Bear in mind that I'm not saying this is a terrible idea / everyone should port to GEM right now / we should design an amazing overarching memory management and allocation API that handles literally every usecase anyone can think of. I'm just saying that upstreaming it might be more difficult than you'd think.
So, the buffers we're interested in sharing between different processes and devices are:
- Decoded video buffers (from both cameras & video decoders)
- Window back-buffers
- System-wide theme textures and font glyph caches
... Anyone know of other candidates?
Well, in the Wayland case, window frontbuffers as well. And note that 'window' covers a lot more than what you might think -- panels, your desktop background, system tray icons (although these are, for the most part, not on the critical path for GPU performance), the date & time widget in the panel, etc, etc. And all the application windows you'd think of too. And sometimes their subwindows (cf. XEMBED and non-WMODE NSAPI browser plugins).
So I guess you really need to replace 'window back-buffers' with 'anything the compositor will ever need to address', because having to block, synchronise, flush, etc. to pull just one unaddressable surface out of X in your compositor is going to be unbelievably painful.
I guess the bottleneck will probably be the window compositor, as it will need to have a reference to all window back buffers in the system. So, do any of the DRI folks have any idea how many windows (with back buffers) a typical desktop session has?
Well, define 'typical', I guess? My grandparents have Nautilus background + two panels + Chromium + Empathy contact list + Empathy chat. A graphic designer with ADHD probably has four thousand GIMP windows, a billion Chromium windows, a billion Empathy chat windows, etc.
Do minimised windows have back buffers allocated in X11?
In a composited environment, most window managers/compositors choose to keep minimised windows redirected (so yes, their back buffers stay allocated), so they can continue to update the icon previews and so on.
Cheers,
Daniel
[0]: He's currently most upset at both the ARM and GPU/DRM guys, the former for excessive duplication of subsystems when they could be shared. So, this may rub him the wrong way, coming from ARM GPU guys. :)