From: Rob Clark rob@ti.com
To allow the potential use of overlays to display video content, a few extra parameters are required:
+ source buffer in a different format (for example, various YUV formats) and size as compared to the destination drawable
+ multi-planar formats where discontiguous buffers are used for different planes (for example, luma and chroma split across multiple memory banks, or with different tiled formats)
+ flipping between multiple back buffers, perhaps not in order (to handle video formats with B-frames)
+ cropping during swap; in the case of video, the required hw buffers may be larger than the visible picture to account for codec borders (for example, reference frames where a block/macroblock moves past the edge of the visible picture, but back again in subsequent frames)
Current solutions use the GPU to do a scale/color-convert blit into a DRI2 buffer from the client context. The goal of this protocol change is to push the decision to use an overlay or a GPU blit to the xorg driver.
---
Eventually this should replace Xv. With a few additions, such as attributes, it may be possible to implement the client-side Xv API on top of DRI2.
Note: video is not exactly the same as 3d; there are a number of other things to consider (scaling, color conversion, multi-planar formats). But on the other hand the principle is similar (direct rendering from hw video codecs), and much of the infrastructure (connection, authentication) is the same. So there are two options: either extend DRI2, or add a new protocol which duplicates some parts. I'd like to consider extending DRI2 first, but if people think the requirements for video are too different from 3d, then I could split this into a new protocol.
In either case, I will implement the xserver-side infrastructure, but I wanted to get a feel for the preferred approach (extend dri2, or a new videoproto) first.
 dri2proto.txt | 60 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++-
 1 files changed, 59 insertions(+), 1 deletions(-)
diff --git a/dri2proto.txt b/dri2proto.txt
index df763c7..aa83b1a 100644
--- a/dri2proto.txt
+++ b/dri2proto.txt
@@ -163,7 +163,8 @@ and DRI2InvalidateBuffers.
 6. Protocol Types
 DRI2DRIVER { DRI2DriverDRI
-             DRI2DriverVDPAU }
+             DRI2DriverVDPAU,
+             DRI2DriverXV }
    These values describe the type of driver the client will want to
    load.  The server sends back the name of the driver to use
@@ -184,6 +185,10 @@ DRI2ATTACHMENT { DRI2BufferFrontLeft

    These values describe various attachment points for DRI2 buffers.
+    In the case of the video driver (DRI2DriverXV) the attachment,
+    other than DRI2BufferFrontLeft, just indicates buffer
+    number and has no other special significance.
+
 DRI2BUFFER { attachment: CARD32
              name: CARD32
              pitch: CARD32
@@ -203,6 +208,16 @@ DRI2ATTACH_FORMAT { attachment: CARD32

    format.  'attachment' describes the attachment point for the buffer,
    'format' describes an opaque, device-dependent format for the buffer.
+
+DRI2ATTACH_VIDEO { attachment: CARD32
+                   format: CARD32,
+                   width, height: CARD32 }
+
+    The DRI2ATTACH_VIDEO describes an attachment and the associated
+    format for video buffers.  'attachment' describes the attachment
+    point for the buffer, 'format' describes a fourcc value for the
+    buffer.
+
⚙ ⚙ ⚙ ⚙ ⚙ ⚙
@@ -367,6 +382,15 @@ The name of this extension is "DRI2".
    later.
 ┌───
+    DRI2GetVideoBuffers
+        drawable: DRAWABLE
+        attachments: LISTofDRI2ATTACH_VIDEO
+          ▶
+        width, height: CARD32
+        buffers: LISTofDRI2BUFFER
+└───
+
+┌───
     DRI2GetMSC
        drawable: DRAWABLE
          ▶
@@ -585,11 +609,21 @@ A.1 Common Types
        4       CARD32  pitch
        4       CARD32  cpp
        4       CARD32  flags
+       4       n       extra names length
+       4n      LISTof  extra names
 └───

    A DRI2 buffer specifies the attachment, the kernel memory manager
    name, the pitch and chars per pixel for a buffer attached to a
    given drawable.
+    In case of multi-planar video formats, 'extra names' will give the
+    list of additional buffer names if there is one buffer per plane.
+    For example, I420 has one Y plane with an 8-bit luma value per
+    pixel, followed by one U plane subsampled 2x2 (with one 8-bit U value
+    per 2x2 pixel block), followed by one V plane subsampled 2x2.  This
+    could either be represented as a single buffer name, or three
+    separate buffer names, one each for Y, U, and V.
+
 ┌───
     DRI2ATTACH_FORMAT
        4       CARD32  attachment
@@ -599,6 +633,17 @@ A.1 Common Types

    This data type is only available with protocol version 1.1 or
    later.
+┌───
+    DRI2ATTACH_VIDEO
+       4       CARD32  attachment
+       4       CARD32  format
+       4       CARD32  width
+       4       CARD32  height
+└───
+    Used to describe the attachment and format requested from the server.
+    This data type is only available with protocol version 1.? or
+    later.
+
A.2 Protocol Requests
┌───
@@ -745,6 +790,11 @@ A.2 Protocol Requests
        4       CARD32  divisor_lo
        4       CARD32  remainder_hi
        4       CARD32  remainder_lo
+       4       DRI2ATTACHMENT  source
+       4       CARD32  x1
+       4       CARD32  y1
+       4       CARD32  x2
+       4       CARD32  y2
          ▶
        1       1       Reply
        1       unused
@@ -754,6 +804,14 @@ A.2 Protocol Requests
        4       CARD32  swap_lo
        5n      LISTofDRI2BUFFER        buffers
 └───
+    The 'source', if not zero (DRI2BufferFrontLeft), indicates the
+    attachment point of the buffer to swap with DRI2BufferFrontLeft.
+    If zero is specified, DRI2BufferBackLeft is swapped with the
+    DRI2BufferFrontLeft buffer, for compatibility.
+
+    If 'source' is not zero, (x1,y1), (x2,y2) specify the bounding
+    box in coordinates of the source buffer which should be scaled
+    to (0,0), (width,height) of the destination drawable.
┌───
    DRI2GetMSC
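To make the crop/scale semantics of the extended DRI2SwapBuffers concrete, here is a minimal sketch (not from any real driver; all names are made up for illustration) of how a server implementation might map a destination pixel back into the (x1,y1)-(x2,y2) source box, using simple nearest-neighbour integer arithmetic:

```c
/* Illustrative sketch of the crop/scale mapping implied by the
 * extended DRI2SwapBuffers request: the (x1,y1)-(x2,y2) box in
 * source-buffer coordinates is scaled to fill (0,0)-(width,height)
 * of the destination drawable. */
struct crop { int x1, y1, x2, y2; };

/* Map a destination pixel back to the source pixel it samples
 * (nearest-neighbour; a real blitter would filter). */
static void dst_to_src(const struct crop *c,
                       int dst_w, int dst_h,
                       int dx, int dy,
                       int *sx, int *sy)
{
    int crop_w = c->x2 - c->x1;
    int crop_h = c->y2 - c->y1;

    *sx = c->x1 + dx * crop_w / dst_w;
    *sy = c->y1 + dy * crop_h / dst_h;
}
```

For example, a 752x608 coded frame with a 16-pixel codec border around a 720x576 visible picture would use the box (16,16)-(736,592), and every destination pixel then samples from inside the visible region only.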
To allow the potential use of overlays to display video content, a few extra parameters are required:
+ source buffer in a different format (for example, various YUV formats) and size as compared to the destination drawable
+ multi-planar formats where discontiguous buffers are used for different planes (for example, luma and chroma split across multiple memory banks, or with different tiled formats)
+ flipping between multiple back buffers, perhaps not in order (to handle video formats with B-frames)
+ cropping during swap; in the case of video, the required hw buffers may be larger than the visible picture to account for codec borders (for example, reference frames where a block/macroblock moves past the edge of the visible picture, but back again in subsequent frames)
Current solutions use the GPU to do a scale/color-convert blit into a DRI2 buffer from the client context. The goal of this protocol change is to push the decision to use an overlay or a GPU blit to the xorg driver.
In many cases, an overlay will avoid several passes through memory (blit/scale/colorconvert to DRI back-buffer on client side, blit to front and fake-front, and then whatever compositing is done by the window manager). On the other hand, overlays can often be handled directly by the scanout engine in the display hardware, with the GPU switched off.
The disadvantages of overlays are that they are (usually) a limited resource, sometimes with scaling constraints, and certainly with limitations about transformational effects.
The goal of combining video and dri2 is to have the best of both worlds: the flexibility of GPU blitting (i.e. no limited number of video ports, no constraints on transformational effects), while still having the power consumption benefits of overlays (reduced memory bandwidth usage and the ability to shut off the GPU) when the UI is relatively static other than the playing video.
Note: video is not exactly the same as 3d; there are a number of other things to consider (scaling, color conversion, multi-planar formats). But on the other hand the principle is similar (direct rendering from hw video codecs), and much of the infrastructure (connection, authentication) is the same. So there are two options: either extend DRI2, or add a new protocol which duplicates some parts. I'd like to consider extending DRI2 first, but if people think the requirements for video are too different from 3d, then I could split this into a new protocol.
---
 dri2proto.txt | 133 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++-
 1 files changed, 131 insertions(+), 2 deletions(-)
diff --git a/dri2proto.txt b/dri2proto.txt
index df763c7..92f85a4 100644
--- a/dri2proto.txt
+++ b/dri2proto.txt
@@ -163,7 +163,8 @@ and DRI2InvalidateBuffers.
 6. Protocol Types
 DRI2DRIVER { DRI2DriverDRI
-             DRI2DriverVDPAU }
+             DRI2DriverVDPAU,
+             DRI2DriverXV }
    These values describe the type of driver the client will want to
    load.  The server sends back the name of the driver to use
@@ -184,6 +185,11 @@ DRI2ATTACHMENT { DRI2BufferFrontLeft

    These values describe various attachment points for DRI2 buffers.
+    In the case of the video driver (DRI2DriverXV) the attachment,
+    other than DRI2BufferFrontLeft, just indicates buffer
+    number and has no other special significance.  There is no
+    automatic maintenance of DRI2BufferFakeFrontLeft.
+
 DRI2BUFFER { attachment: CARD32
              name: CARD32
              pitch: CARD32
@@ -201,7 +207,8 @@ DRI2ATTACH_FORMAT { attachment: CARD32
    The DRI2ATTACH_FORMAT describes an attachment and the associated
    format.  'attachment' describes the attachment point for the buffer,
-   'format' describes an opaque, device-dependent format for the buffer.
+   'format' describes an opaque, device-dependent format for the buffer,
+   or a fourcc for non-device-dependent formats.
⚙ ⚙ ⚙ ⚙ ⚙ ⚙
@@ -440,6 +447,97 @@ The name of this extension is "DRI2".

    DRI2SwapBuffers requests to swap at most once per interval frames,
    which is useful for limiting the frame rate.
+┌───
+    DRI2SetAttribute
+       drawable: DRAWABLE
+       attribute: ATOM
+       value: INT32
+         ▶
+└───
+    Errors: Window, Match, Value
+
+    The DRI2SetAttribute request sets the value of a drawable attribute.
+    The drawable attribute is identified by the attribute atom.  The
+    following strings are guaranteed to generate valid atoms using the
+    InternAtom request.
+
+    String              Type
+    -----------------------------------------------------------------
+
+    "XV_ENCODING"       ENCODINGID
+    "XV_HUE"            [-1000..1000]
+    "XV_SATURATION"     [-1000..1000]
+    "XV_BRIGHTNESS"     [-1000..1000]
+    "XV_CONTRAST"       [-1000..1000]
+    "XV_WIDTH"          [0..MAX_INT]
+    "XV_HEIGHT"         [0..MAX_INT]
+    "XV_OSD"            XID
+
+    If the given attribute doesn't match an attribute supported by the
+    drawable, a Match error is generated.  The supplied encoding
+    must be one of the encodings listed for the adaptor, otherwise an
+    Encoding error is generated.
+
+    If the adaptor doesn't support the exact hue, saturation,
+    brightness, and contrast levels supplied, the closest levels
+    supported are assumed.  The DRI2GetAttribute request can be used
+    to query the resulting levels.
+
+    The "XV_WIDTH" and "XV_HEIGHT" attributes default to zero, indicating
+    that no scaling is performed and the buffer sizes match the drawable
+    size.  They can be overridden by the client if scaling is desired.
+
+    The "XV_OSD" attribute specifies the XID of a pixmap containing
+    ARGB data to be non-destructively overlaid over the video.  This
+    could be used to implement subtitles, on-screen menus, etc.
+
+    : TODO: Is there a need to support DRI2SetAttribute for non-video
+    : DRI2DRIVER types?
+    :
+    : TODO: Do we need to keep something like PortNotify.. if attributes
+    : are only changing in response to DRI2SetAttribute from the client,
+    : then having a PortNotify-like mechanism seems overkill.  The assumption
+    : here is that, unlike Xv ports, DRI2 video drawables are not a limited
+    : resource (i.e. if you run out of (or don't have) hardware overlays, then
+    : you use the GPU to do a colorconvert/scale/blit).  So there is not a
+    : need to share "ports" between multiple client processes.
+
+┌───
+    DRI2GetAttribute
+       drawable: DRAWABLE
+       attribute: ATOM
+         ▶
+       value: INT32
+└───
+    Errors: Window, Match
+
+    The DRI2GetAttribute request returns the current value of the
+    attribute identified by the given atom.  If the given atom
+    doesn't match an attribute supported by the adaptor, a Match
+    error is generated.
+
+┌───
+    DRI2GetFormats
+       drawable: DRAWABLE
+         ▶
+       formats: LISTofCARD32
+└───
+    Errors: Window
+
+    Query the driver for supported formats, which can be used with
+    DRI2GetBuffersWithFormat.  The 'format' describes an opaque,
+    device-dependent format for the buffer, or a fourcc for
+    non-device-dependent formats.
+
+    : NOTE: I'm trying to avoid re-inventing something along the lines of
+    : XvImageFormatValues, which I think is overly complex and still not
+    : sufficient to describe weird device-dependent or tiled formats.  On
+    : the other hand, it is probably not necessary to perfectly describe
+    : weird device-dependent formats.  Just use at least one non-ascii
+    : character so the value does not collide with valid fourccs.  I
+    : looked at the intel and nouveau xorg driver code, and it doesn't seem
+    : that they use values that could be confused with fourcc values.
+
⚙ ⚙ ⚙ ⚙ ⚙ ⚙
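The NOTE about device-dependent formats relies on a simple property: a fourcc is four packed printable-ASCII bytes, so any opaque format token containing at least one non-ASCII byte can never collide with a valid fourcc. A minimal sketch of that convention (the macro and helper are illustrative, not part of any existing header):

```c
#include <stdint.h>

/* Pack four ASCII characters into a little-endian fourcc code. */
#define FOURCC(a, b, c, d) \
    ((uint32_t)(a) | ((uint32_t)(b) << 8) | \
     ((uint32_t)(c) << 16) | ((uint32_t)(d) << 24))

/* A format value can only be a fourcc if all four bytes are
 * printable ASCII; device-dependent tokens should fail this test. */
static int could_be_fourcc(uint32_t fmt)
{
    for (int i = 0; i < 4; i++) {
        uint8_t byte = (fmt >> (8 * i)) & 0xff;
        if (byte < 0x20 || byte > 0x7e)
            return 0;
    }
    return 1;
}
```

So a driver can hand back, say, `0x80000001` for a private tiled layout and a client can cheaply tell it apart from `FOURCC('I','4','2','0')`.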
9. Extension Events

@@ -585,11 +683,21 @@ A.1 Common Types
        4       CARD32  pitch
        4       CARD32  cpp
        4       CARD32  flags
+       4       n       extra names length
+       4n      LISTof  extra names
 └───

    A DRI2 buffer specifies the attachment, the kernel memory manager
    name, the pitch and chars per pixel for a buffer attached to a
    given drawable.
+    In case of multi-planar video formats, 'extra names' will give the
+    list of additional buffer names if there is one buffer per plane.
+    For example, I420 has one Y plane with an 8-bit luma value per
+    pixel, followed by one U plane subsampled 2x2 (with one 8-bit U value
+    per 2x2 pixel block), followed by one V plane subsampled 2x2.  This
+    could either be represented as a single buffer name, or three
+    separate buffer names, one each for Y, U, and V.
+
 ┌───
     DRI2ATTACH_FORMAT
        4       CARD32  attachment
@@ -745,6 +853,11 @@ A.2 Protocol Requests
        4       CARD32  divisor_lo
        4       CARD32  remainder_hi
        4       CARD32  remainder_lo
+       4       DRI2ATTACHMENT  source
+       4       CARD32  x1
+       4       CARD32  y1
+       4       CARD32  x2
+       4       CARD32  y2
          ▶
        1       1       Reply
        1       unused
@@ -754,6 +867,22 @@ A.2 Protocol Requests
        4       CARD32  swap_lo
        5n      LISTofDRI2BUFFER        buffers
 └───
+    The 'source', if not zero (DRI2BufferFrontLeft), indicates the
+    attachment point of the buffer to swap with DRI2BufferFrontLeft.
+    If zero is specified, DRI2BufferBackLeft is swapped with the
+    DRI2BufferFrontLeft buffer, for compatibility.
+
+    If 'source' is not zero, (x1,y1), (x2,y2) specify the bounding
+    box in coordinates of the source buffer which should be scaled
+    to (0,0), (width,height) of the destination drawable.
+
+    NOTE: cropping could also be handled via attributes.. but it
+    gets a bit fuzzy when the crop can change frame-by-frame.  We
+    could just decree "updated crop attributes apply on the next
+    SwapBuffers", but that is a bit hand-wavy.
+
+    : TODO: do I add another DRI2SwapBuffers msg description, or is
+    : it ok to just append fields to the end of the Req part?
┌───
    DRI2GetMSC
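The 'extra names' / I420 description above can be made concrete. Here is a sketch (an illustrative helper, not part of the protocol or any driver) of how a client might compute per-plane sizes and offsets when the server returns a single buffer name for an I420 surface; with 'extra names', each plane is its own buffer and only the sizes matter:

```c
#include <stddef.h>

/* I420: full-resolution 8-bit Y plane, then 2x2-subsampled U and V
 * planes.  Assumes even dimensions and tightly packed planes (pitch
 * equal to width), which real drivers may not guarantee. */
struct i420_layout {
    size_t y_size, uv_size;      /* bytes per plane (uv_size each for U and V) */
    size_t y_off, u_off, v_off;  /* offsets within one contiguous buffer */
    size_t total;
};

static struct i420_layout i420_layout(size_t width, size_t height)
{
    struct i420_layout l;
    l.y_size  = width * height;              /* one luma byte per pixel */
    l.uv_size = (width / 2) * (height / 2);  /* one chroma byte per 2x2 block */
    l.y_off = 0;
    l.u_off = l.y_size;
    l.v_off = l.y_size + l.uv_size;
    l.total = l.y_size + 2 * l.uv_size;
    return l;
}
```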
On Thu, Sep 1, 2011 at 4:52 PM, Rob Clark rob@ti.com wrote:
...
Are you targeting/limiting this to a particular API (or the customary limitations of overlay HW)? I ask because VDPAU allows clients to pass in an arbitrary colour conversion matrix rather than color standard/hue/sat/bri/con, so it wouldn't be possible to use this in that context. Also in general, their compositing API is a lot more flexible and allows for a background + multiple layers, rather than just a single layer. I suppose you could pre-flatten the layers into a single one, but the background would be problematic.
VA on the other hand lets clients query for matrix and h/s/b/c attribute support and seems to have a simpler compositing API, so it seems doable with this, and of course Xv does.
On Thu, Sep 1, 2011 at 5:22 PM, Younes Manton younes.m@gmail.com wrote:
Are you targeting/limiting this to a particular API (or the customary limitations of overlay HW)? I ask because VDPAU allows clients to pass in an arbitrary colour conversion matrix rather than color standard/hue/sat/bri/con, so it wouldn't be possible to use this in that context.
Ideally it would be something that could be used either from a device-dependent VDPAU or VAAPI driver back-end, or in a generic way, for example from a GStreamer sink element used with software codecs.
Well, this is the goal anyways. There is one other slight complication for use in a generic way: it would need to be better defined what the buffer 'name' is, so that the client side would know how to interpret it and mmap it if needed. But I think there is a solution brewing: http://lists.linaro.org/pipermail/linaro-mm-sig/2011-August/000509.html
As far as a color conversion matrix... well, the attribute system can have arbitrary device-dependent attributes. In the VDPAU case, I suppose the implementation on the client side knows which xorg driver it is talking to, and could introduce its own attributes. Perhaps a bit awkward for communicating a matrix, but you could in theory have 4*3 different attributes (i.e. XV_M00, XV_M01, ... XV_M23), one for each entry in the matrix.
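To make that idea concrete, here is a hypothetical sketch of packing a 3x4 matrix into per-entry attributes. The XV_Mxx attribute names, the 16.16 fixed-point encoding (since attribute values are INT32, not floats), and the helper itself are all assumptions for illustration, not part of any existing API:

```c
#include <stdio.h>
#include <stdint.h>

struct attr {
    char name[8];   /* e.g. "XV_M00" .. "XV_M23" (row-major) */
    int32_t value;
};

/* INT32 attribute values can't carry floats directly, so encode each
 * matrix entry as 16.16 fixed point (an assumed convention). */
static int32_t to_fixed_16_16(double v)
{
    return (int32_t)(v * 65536.0);
}

/* Flatten a 3-row by 4-column color conversion matrix into one
 * attribute per entry; returns the number of attributes filled. */
static int matrix_to_attrs(double m[3][4], struct attr out[12])
{
    int n = 0;
    for (int row = 0; row < 3; row++)
        for (int col = 0; col < 4; col++) {
            snprintf(out[n].name, sizeof(out[n].name),
                     "XV_M%d%d", row, col);
            out[n].value = to_fixed_16_16(m[row][col]);
            n++;
        }
    return n;
}
```

The client would then issue one DRI2SetAttribute per entry; twelve small requests per matrix change is clunky but needs no new protocol.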
Also in general, their compositing API is a lot more flexible and allows for a background + multiple layers, rather than just a single layer. I suppose you could pre-flatten the layers into a single one, but the background would be problematic.
Yeah, pre-flatten into a single layer, I think. I mean, we *could* push that to xorg driver side too, but I was trying to not make things overly complicated in the protocol.
I'm not sure I caught the issue about the background.. or are you thinking about video with AYUV? Is there any hw out there that supports overlays with YUV that has an alpha channel? If this is enough of a weird edge case, maybe it is ok to fall back in these cases to the old way of doing the blending on the client side and just looking like a 3d app to the xorg side. (I suspect in this sort of case you'd end up falling back to the GPU on the xorg side otherwise.) But I'm always interested to hear any other suggestions.
BR, -R
VA on the other hand lets clients query for matrix and h/s/b/c attribute support and seems to have a simpler compositing API, so it seems doable with this, and of course Xv does.

_______________________________________________
xorg-devel@lists.x.org: X.Org development
Archives: http://lists.x.org/archives/xorg-devel
Info: http://lists.x.org/mailman/listinfo/xorg-devel