Hi everybody,
Would anyone be interested in meeting at FOSDEM to discuss the Common Display Framework? There will be a CDF meeting at the ELC at the end of February; FOSDEM would be a good venue for European developers.
On Fri, Jan 11, 2013 at 2:27 PM, Laurent Pinchart laurent.pinchart@ideasonboard.com wrote:
Hi everybody,
Would anyone be interested in meeting at FOSDEM to discuss the Common Display Framework? There will be a CDF meeting at the ELC at the end of February; FOSDEM would be a good venue for European developers.
Sure, I'll be at FOSDEM. I think sometime Sunday would be fine.
BR, -R
-- Regards,
Laurent Pinchart
On Fri, 11 Jan 2013, Laurent Pinchart laurent.pinchart@ideasonboard.com wrote:
Would anyone be interested in meeting at FOSDEM to discuss the Common Display Framework? There will be a CDF meeting at the ELC at the end of February; FOSDEM would be a good venue for European developers.
Yes, count me in, Jani.
On Thu, Jan 17, 2013 at 9:42 AM, Jani Nikula jani.nikula@linux.intel.com wrote:
On Fri, 11 Jan 2013, Laurent Pinchart laurent.pinchart@ideasonboard.com wrote:
Would anyone be interested in meeting at FOSDEM to discuss the Common Display Framework? There will be a CDF meeting at the ELC at the end of February; FOSDEM would be a good venue for European developers.
Yes, count me in,
Jesse, Ville and I should also be around. Do we have a slot fixed already? -Daniel
Hi Daniel,
On Thursday 17 January 2013 13:29:27 Daniel Vetter wrote:
On Thu, Jan 17, 2013 at 9:42 AM, Jani Nikula wrote:
On Fri, 11 Jan 2013, Laurent Pinchart wrote:
Would anyone be interested in meeting at FOSDEM to discuss the Common Display Framework? There will be a CDF meeting at the ELC at the end of February; FOSDEM would be a good venue for European developers.
Yes, count me in,
Jesse, Ville and I should also be around. Do we have a slot fixed already?
I've sent a mail to the FOSDEM organizers to request a hacking room for a couple of hours Sunday. I'll let you know as soon as I get a reply.
Hello,
On Monday 21 January 2013 13:43:27 Laurent Pinchart wrote:
On Thursday 17 January 2013 13:29:27 Daniel Vetter wrote:
On Thu, Jan 17, 2013 at 9:42 AM, Jani Nikula wrote:
On Fri, 11 Jan 2013, Laurent Pinchart wrote:
Would anyone be interested in meeting at FOSDEM to discuss the Common Display Framework? There will be a CDF meeting at the ELC at the end of February; FOSDEM would be a good venue for European developers.
Yes, count me in,
Jesse, Ville and I should also be around. Do we have a slot fixed already?
I've sent a mail to the FOSDEM organizers to request a hacking room for a couple of hours Sunday. I'll let you know as soon as I get a reply.
Just a quick follow-up. I've received information from the FOSDEM staff: there will be hacking rooms that can be reserved (on-site only) for 1h slots. They unfortunately won't have projectors, as they're not meant for talks.
Another option would be to start early on Saturday; the X.org room is reported as being free from 9am to 11am.
On Tue, Jan 29, 2013 at 12:27:15PM +0100, Laurent Pinchart wrote:
Hello,
On Monday 21 January 2013 13:43:27 Laurent Pinchart wrote:
I've sent a mail to the FOSDEM organizers to request a hacking room for a couple of hours Sunday. I'll let you know as soon as I get a reply.
Just a quick follow-up. I've received information from the FOSDEM staff: there will be hacking rooms that can be reserved (on-site only) for 1h slots. They unfortunately won't have projectors, as they're not meant for talks.
Another option would be to start early on Saturday; the X.org room is reported as being free from 9am to 11am.
-- Regards,
Laurent Pinchart
As the organizer of the X.org devroom, I have to state that the latter is impossible. I tend to do a bit of room set-up, like putting in some power bars (a limited amount this year, as I have only been given one day and it simply is not worth putting in the cabling for 100 sockets, and dragging all that kit over from Nuremberg, for just a single day) and some other things. I need at least one hour for that on Saturday morning.
DevRooms are also not supposed to open before 11:00 (which is already a massive improvement over 2011 and the years before, where I was happy to be able to put the cabling in at 12:00), and I tend to first get a nod of approval from the on-site devrooms supervisor before I go in and set up the room.
So use the hacking room this year. Things will hopefully be better next year.
Luc Verhaegen.
Hi Luc,
On Tuesday 29 January 2013 12:47:16 Luc Verhaegen wrote:
On Tue, Jan 29, 2013 at 12:27:15PM +0100, Laurent Pinchart wrote:
On Monday 21 January 2013 13:43:27 Laurent Pinchart wrote:
I've sent a mail to the FOSDEM organizers to request a hacking room for a couple of hours Sunday. I'll let you know as soon as I get a reply.
Just a quick follow-up. I've received information from the FOSDEM staff: there will be hacking rooms that can be reserved (on-site only) for 1h slots. They unfortunately won't have projectors, as they're not meant for talks.
Another option would be to start early on Saturday; the X.org room is reported as being free from 9am to 11am.
As the organizer of the X.org devroom, I have to state that the latter is impossible. I tend to do a bit of room set-up, like putting in some power bars (a limited amount this year, as I have only been given one day and it simply is not worth putting in the cabling for 100 sockets, and dragging all that kit over from Nuremberg, for just a single day) and some other things. I need at least one hour for that on Saturday morning.
No worries. It was just an idea.
DevRooms are also not supposed to open before 11:00 (which is already a massive improvement over 2011 and the years before, where I was happy to be able to put the cabling in at 12:00), and I tend to first get a nod of approval from the on-site devrooms supervisor before I go in and set up the room.
So use the hacking room this year. Things will hopefully be better next year.
Saturday is pretty much out of the question, given that most developers interested in CDF will want to attend the X.org talks. I'll try to get a room for Sunday then, but I'm not sure yet what time slots will be available. It would be helpful if people interested in CDF discussions could tell me at what time they plan to leave Brussels on Sunday.
On Tue, Jan 29, 2013 at 1:11 PM, Laurent Pinchart laurent.pinchart@ideasonboard.com wrote:
DevRooms are also not supposed to open before 11:00 (which is already a massive improvement over 2011 and the years before, where I was happy to be able to put the cabling in at 12:00), and I tend to first get a nod of approval from the on-site devrooms supervisor before I go in and set up the room.
So use the hacking room this year. Things will hopefully be better next year.
Saturday is pretty much out of the question, given that most developers interested in CDF will want to attend the X.org talks. I'll try to get a room for Sunday then, but I'm not sure yet what time slots will be available. It would be helpful if people interested in CDF discussions could tell me at what time they plan to leave Brussels on Sunday.
I'll stay till Monday early morning, so no requirements from me. Adding a bunch of Intel guys who're interested, too. -Daniel
On Tue, Jan 29, 2013 at 3:19 PM, Daniel Vetter daniel.vetter@ffwll.ch wrote:
On Tue, Jan 29, 2013 at 1:11 PM, Laurent Pinchart laurent.pinchart@ideasonboard.com wrote:
DevRooms are also not supposed to open before 11:00 (which is already a massive improvement over 2011 and the years before, where I was happy to be able to put the cabling in at 12:00), and I tend to first get a nod of approval from the on-site devrooms supervisor before I go in and set up the room.
So use the hacking room this year. Things will hopefully be better next year.
Saturday is pretty much out of the question, given that most developers interested in CDF will want to attend the X.org talks. I'll try to get a room for Sunday then, but I'm not sure yet what time slots will be available. It would be helpful if people interested in CDF discussions could tell me at what time they plan to leave Brussels on Sunday.
I'll stay till Monday early morning, so no requirements from me. Adding a bunch of Intel guys who're interested, too.
Ok, in the interest of pre-heating the discussion a bit I've written down my thoughts about display slave drivers. Adding a few more people and lists to make sure I haven't missed anyone ...
Cheers, Daniel

--

Display Slaves
==============
A highly biased quick analysis from Daniel Vetter.
A quick discussion about the issues surrounding some common framework for display slaves like panels, hdmi/DP/whatever encoders, ... Since these external chips are very often reused across different SoCs, it would be beneficial to share slave driver code between different chipset drivers.
Caveat Emptor!
--------------
Current output types and slave encoders already have to deal with a plethora of special cases and strange features. To avoid ending up with something not suitable for everyone, we should look at what's already supported and how we could possibly deal with those things:
- audio embedded into the display stream (hdmi/dp). x86 platforms with the HD Audio framework rely on ELD and forwarding certain events as interrupts through the hw between the display and audio side ...
- hdmi/dp helpers: HDMI/DP are both standardized output connectors with nice complexity. DP is mostly about handling dp aux transactions and DPCD registers, hdmi mostly about infoframes and how to correctly set them up from the mode + edid.
- dpms is 4 states in drm, even more in fbdev afaict, but real hw only supports on/off nowadays ... how should/do we care?
- Fancy modes and how to represent them. Random list of things we need to represent somehow: broadcast/reduced rgb range for hdmi, yuv modes, different bpc modes (and handling how this affects bandwidth/clocks, e.g. i915 auto-dithers to 6bpc on DP if there's not enough), 3D hdmi modes (patches have floated on dri-devel for this), overscan compensation. Many of these things link in with e.g. the helper libraries for certain outputs, e.g. discovering DP sink capabilities or setting up the correct hdmi infoframe.
- How to expose random madness as properties, e.g. backlight controllers, broadcast mode, enable/disable embedded audio (some screens advertise it, but don't like it). For additional fun I expect different users of a display slave driver to expect a different set of "standardized" properties.
- Debug support: Register dumping, exposing random debugfs files, tracing. Preferably somewhat unified to keep things sane, since most often slave drivers are rather simple, but we expect quite a few different ones.
- Random metadata surrounding a display sink, like output type. Or flags for supporting special modes (h/vsync polarity, interlaced/doublescan, pixel doubling, ...).
- mode_fixup: Used a lot in drm-land to allow encoders to change the input mode, e.g. for lvds encoders which can do upscaling, or if the encoder supports progressive input with interlaced output and similar fancy stuff. See e.g. the intel sdvo encoder chip support (a rough sketch of such a hook follows after this list).
- Handling different control buses like i2c, direct access (not seen that yet), DSI, DP aux, some other protocols.
- Handling of different display data standards like dsi (intel invented a few of its own, I'm sure we're not the only ones).
- hpd support/polling. Depending upon design, hpd handling needs to be cooperative between slave and master, or is a slave-only thing (which means the slave needs to be able to poke the master when something changes). Similarly, masters need to know which slaves require output polling.
- Initializing of slave drivers: of/devicetree based, compiled-in static tables in the driver, dynamic discovery by i2c probing, lookup through some platform-specific firmware table (ACPI). Related is how to forward platform init values from these sources (e.g. the panel's fixed modes) to the slave driver.
- get_hw_state support. One of the major points in the i915 modeset rewrite which landed in 3.7 is that a lot of the hw state can be cross-checked with the sw tracking. Helps tremendously in tracking down driver (writer) fumbles ;-)
- PSR/dsi command mode and how the start/stop frame dance should be handled.
- Random funny expectations around the modeset sequence, i.e. when (and how often) the video stream should be enabled/disabled. In the worst case this needs some serious cooperation between master and slaves. Even more fun for trained output links like DP, where re-training - and thus restarting part of, or even the complete, modeset sequence - could be required to happen at any time.
- There's more I'm sure, gfx hw tends to be insane ...
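To make the mode_fixup point above a bit more concrete, here's a rough sketch of what such a hook could look like for an upscaling LVDS slave. The hook signature is the current drm_encoder_helper_funcs one; everything else (the foo_ names, the fixed-mode handling) is made up for illustration and not meant as the actual interface:

#include <drm/drmP.h>
#include <drm/drm_crtc.h>
#include <drm/drm_crtc_helper.h>

struct foo_lvds_encoder {
	struct drm_encoder base;
	/* native timings of the attached panel */
	struct drm_display_mode *panel_fixed_mode;
};

static bool foo_lvds_mode_fixup(struct drm_encoder *encoder,
				const struct drm_display_mode *mode,
				struct drm_display_mode *adjusted_mode)
{
	struct foo_lvds_encoder *lvds =
		container_of(encoder, struct foo_lvds_encoder, base);
	const struct drm_display_mode *fixed = lvds->panel_fixed_mode;

	/* refuse anything larger than the panel can display */
	if (mode->hdisplay > fixed->hdisplay ||
	    mode->vdisplay > fixed->vdisplay)
		return false;

	/* always drive the panel with its native timings; the scaler in
	 * the encoder upscales the (possibly smaller) requested mode */
	drm_mode_copy(adjusted_mode, fixed);

	return true;
}

The interesting question for a shared slave library is of course how such an adjusted mode interacts with whatever fixup the master itself wants to do.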
Wishful Thinking
----------------
Ignoring reality, let's look at what the perfect display slave framework should achieve to be useful:
- Should be simple to share code between different master drivers - display slave drivers tend to be boring assemblies of register definitions and banging the right magic values into them. Which also means that we should aim for a high level of unification so that using, understanding and debugging drivers is easy.
- Since we expect drivers to be simple, even small amounts of impedance-matching code can kill the benefits of the shared code. Furthermore it should be possible to extend drivers with whatever subset of the above feature list is required by the subsystem/driver using a slave driver. Again, without incurring unnecessary amounts of impedance matching. Ofc, not all users of slave drivers will be able to use all the crazy features.
Reality Check
-------------
We already have tons of different slave encoder frameworks sprinkled all over the kernel, which support different sets of crazy features and are used by different drivers. Furthermore each subsystem seems to have come up with its own way to describe metadata like display modes, all sorts of type enums, properties, helper functions for special output types.
Conclusions:
- Throwing away and rewriting all the existing code seems unwise, but we'll likely need to port tons of existing drivers to the new framework.
- Unifying the metadata handling will be _really_ painful since it's deeply ingrained into each driver. Not unifying it otoh will lead to colossal amounts of impedance matching code.
- The union of all the slave features used by all the existing frameworks is impressive, but also highly non-overlapping. Likely everyone has their own absolute "must-have" feature.
Proposal
--------
I have to admit that I'm not too much in favour of the current CDF. It has a bit of a midlayer smell to it imo, and looks like it will make many of the mentioned corner cases messy to enable. Also, looking at the proposed generic video mode structure, it seems to lack some features that e.g. drm_mode already has - and that's not even counting new insanity like 3D modes or some of the advanced infoframe stuff.
So instead I'll throw around a few ideas and principles:
- s/framework/helper library/ Yes, I really hate midlayers and just coming up with a different name seems to go a long way towards saner apis.
- I think we should reduce the scope of the initial version massively and instead increase the depth to fully cover everything. So instead of something which covers everything of a limited use-case from discovery, setup, mode handling and mode-setting, concentrate on only one operation. The actual mode-set seems to be the best case, since it usually involves a lot of the boring register bashing code. The first interface version would ignore everything else completely.
- Shoot for the most powerful api for that little piece we're starting with, and make it the canonical thing. I.e. for modeset we need a video mode thing, and imo it only makes sense if that's the native data structure for all involved subsystems. At least it should be the aim. Yeah, that means tons of work. Even more important is that the new data structure supports every feature already supported in some insane way in one of the existing subsystems. Imo if we keep different data structures everywhere, the impedance matching will eat up most of the code sharing benefits.
- Since this means converting all involved subsystems, we should imo just forget about fbdev. For obvious reasons I'm also leaning towards simply ditching the drm prefix from the drm defines and using those ;-)
- I haven't used it in a driver yet, but mandating regmap (might need some improvements) should get us decent unification between drivers. And hopefully also an easy way to have unified debug tools. regmap already has trace points and a few other cool things (a rough sketch of the idea follows after this list).
- We need some built-in way to drill direct paths from the master display driver to the slave driver for the different subsystems. Jumping through hoops (or even making it impossible) to extend drivers in funny ways would be a big step backwards.
- Locking will be fun, especially once we start to add slave->master callbacks (e.g. for stopping/starting the display signal, hpd interrupts, ...). As a general rule I think we should aim for no locks in the slave driver, with the master owning the slave and ensuring exclusion with its own locks. Slaves which use shared resources and so need locks (everything doing i2c actually) may not call master callback functions with locks held.
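To illustrate the regmap idea from the list above, a small sketch of what an i2c slave's probe could do. The chip, register layout and FOO_ names are invented; only the regmap/i2c calls themselves are existing kernel API:

#include <linux/i2c.h>
#include <linux/regmap.h>

/* hypothetical register layout of an imaginary "foo" lvds encoder */
#define FOO_REG_POWER	0x01
#define FOO_POWER_ON	0x80

static const struct regmap_config foo_regmap_config = {
	.reg_bits = 8,		/* 8-bit register addresses */
	.val_bits = 8,		/* 8-bit register values */
	.max_register = 0x7f,
};

struct foo_encoder {
	struct regmap *regmap;
};

static int foo_encoder_probe(struct i2c_client *client,
			     const struct i2c_device_id *id)
{
	struct foo_encoder *foo;

	foo = devm_kzalloc(&client->dev, sizeof(*foo), GFP_KERNEL);
	if (!foo)
		return -ENOMEM;

	/* all register access goes through regmap: unified tracing,
	 * debugfs register dumps, caching, ... for free */
	foo->regmap = devm_regmap_init_i2c(client, &foo_regmap_config);
	if (IS_ERR(foo->regmap))
		return PTR_ERR(foo->regmap);

	i2c_set_clientdata(client, foo);

	/* power up the (imaginary) encoder */
	return regmap_update_bits(foo->regmap, FOO_REG_POWER,
				  FOO_POWER_ON, FOO_POWER_ON);
}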
Then, once we've gotten things off the ground and have some slave encoder drivers which are actually shared between different subsystems/drivers/platforms or whatever, we can start to organically grow more common interfaces. Ime it's much easier to extract decent interfaces after the fact than to try to come up with them upfront.
Now let's pour this into a more concrete form:
struct display_slave_ops {
	/* modeset ops, e.g. prepare/modeset/commit from drm */
};
struct display_slave {
	struct display_slave_ops *ops;
	void *driver_private;
};
I think even just that will be worth a lot of flames, to come up with a good and agreeable interface for everyone. It'll probably be satisfactory to no one, though.
Then each subsystem adds its own magic, e.g.
struct drm_encoder_slave {
	struct display_slave slave;

	/* everything else which is there already and not covered by the
	 * display slave interface. */
};
Other subsystems/drivers like DSS would embed the struct display_slave in their own equivalent data-structure.
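To illustrate, a small sketch of how this embedding could be used, building on the structs above. The prepare op and the to_drm_encoder_slave() helper are assumptions for the example, not proposed names:

/* the master/display driver only ever sees the generic part ... */
static void master_prepare_slave(struct display_slave *slave)
{
	if (slave->ops->prepare)
		slave->ops->prepare(slave);
	/* ... mode_set/commit would follow the same pattern */
}

/* ... while the drm wrapper gets back to its own structure via
 * container_of(), since struct display_slave is embedded in it */
static struct drm_encoder_slave *to_drm_encoder_slave(struct display_slave *s)
{
	return container_of(s, struct drm_encoder_slave, slave);
}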
So now we have the little problem that we want to have one single _slave_ driver codebase, but it should be able to support n different interfaces and potentially even more ways to be initialized and set up. Here's my idea how this could be tackled:
1. Smash everything into one driver file/directory.
2. Use a common driver structure which contains pointers/members for all possible use-cases. For each interface the driver supports, it'll allocate the same structure and put the pointer into foo->slave.driver_private. This way different entry points from different interfaces can use the same internal functions, since they all deal with the same structure.
3. Add whatever magic is required to set up the driver for different platforms. E.g. an OF match, a drm_encoder_slave i2c match and some direct function to set up hardcoded cases could all live in the same file (a rough sketch follows).
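A rough sketch of points 2 and 3, with every foo_ name and the compatible string being made up for illustration:

#include <linux/i2c.h>
#include <linux/of.h>
#include <linux/slab.h>

/* one internal structure shared by all entry points (point 2) */
struct foo_encoder {
	struct display_slave slave;	/* generic part, see above */
	struct i2c_client *i2c;		/* only set when probed via i2c */
};

static int foo_common_init(struct device *dev, struct i2c_client *client)
{
	struct foo_encoder *foo = devm_kzalloc(dev, sizeof(*foo), GFP_KERNEL);

	if (!foo)
		return -ENOMEM;
	foo->i2c = client;
	foo->slave.driver_private = foo;
	/* foo->slave.ops = &foo_slave_ops; then register with the subsystem */
	return 0;
}

/* i2c entry point, e.g. for drm_encoder_slave style probing ... */
static int foo_i2c_probe(struct i2c_client *client,
			 const struct i2c_device_id *id)
{
	return foo_common_init(&client->dev, client);
}

/* ... and an OF table for devicetree platforms, in the same file (point 3) */
static const struct of_device_id foo_of_match[] = {
	{ .compatible = "foo,bar-lvds-encoder" },
	{ }
};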
Getting the kernel Kconfig stuff right will be fun, but we should get by with adding tons more stub functions. That might mean that an of/devicetree platform build carries around a bit of gunk for x86 vbt matching maybe, but imo that shouldn't ever get out of hand size-wise.
Once we have a few such shared drivers in place, and even more important, unified that part of the subsystems using them a bit, it should be painfully obvious which is the next piece to extract into the common display slave library interface. After all, they'll live right next to each other in the driver sources ;-)
Eventually we should get into the real fun part like dsi bus support or command mode/PSR ... Those advanced things probably need to be optional.
But imo the key part is that we aim for real unification in the users of display_slave, i.e. internally convert everything over to the new structures. That should also make code-sharing much easier, so that we could move existing helper functions to the common display helper library.
Bikesheds
---------
I.e. the boring details:
- Where to put slave drivers? I'll vote for anything which does not include drivers/video ;-)
- Maybe we want to start with a different part than modeset, or add a bit more on top. Though I really think we should start minimally and modesetting seemed like the most useful piece of the puzzle.
- Naming the new interfaces. I'll have more asbestos suits on order ...
- Can we just copy the new "native" interface structs from drm, pls?
On 01/29/2013 04:50 PM, Daniel Vetter wrote:
On Tue, Jan 29, 2013 at 3:19 PM, Daniel Vetter daniel.vetter@ffwll.ch wrote: Ok, in the interest of pre-heating the discussion a bit I've written down my thoughts about display slave drivers. Adding a few more people and lists to make sure I haven't missed anyone ...
Cheers, Daniel
Display Slaves
A highly biased quick analysis from Daniel Vetter.
And here is my biased version as one of the initiators of the idea of CDF.
I work with ARM SoCs (ST-Ericsson) and mobile devices (DSI/DPI panels). Of course some of these have the "PC" type of encoder devices like HDMI and eDP or even VGA. But from what I have seen, most of these encoders are used by only a few different SoCs (GPUs?), and using these types of encoders was quite straightforward from DRM encoders.

My goal was to get some common code for all the "mobile" panel encoders, or "display module driver ICs" as some call them. Instead of tens of drivers (my assumption) you now have hundreds of drivers, often using MIPI DSI/DPI/DBI or some similar interface, and lots of new ones come each year. There are probably more panel types than there are products on the market, since most products use more than one type of panel in the same product to secure sourcing for mass production (note that multiple panels use the same driver IC). So that was the initial goal: to cover all of these, most of which are maintained per SoC/CPU outside of kernel.org. If HDMI/DP etc. fit in this framework, then that is just a nice bonus. I just wanted to give my history so we are not trying to include too many different types of encoders without an actual need. Maybe the I2C drm stuff is good enough for that type of encoder. But again, it would be nice with one suit that fits all ...

I also like the idea to start out small. But if no support is added initially for the mobile panel types, then I think it will be hard to get all vendors to start pushing those drivers, because the benefit of doing so would be small. But maybe the CDF work with Linaro and Laurent could just be a second step, adding the necessary details to your really simple baseline. And I also favor the helpers over the framework approach, but I miss a big piece, which is the ops for panel drivers to call back to the display controller (the video source stuff). Some inline comments below.
A quick discussion about the issues surrounding some common framework for display slaves like panels, hdmi/DP/whatever encoders, ... Since these external chips are very often reused across different SoCs, it would be beneficial to share slave driver code between different chipset drivers.
Caveat Emptor!
Current output types and slave encoders already have to deal with a plethora of special cases and strange features. To avoid ending up with something not suitable for everyone, we should look at what's already supported and how we could possibly deal with those things:
- audio embedded into the display stream (hdmi/dp). x86 platforms with the HD Audio framework rely on ELD and forwarding certain events as interrupts through the hw between the display and audio side ...
I would assume any driver handling audio/video/cec like HDMI would hook itself up as an mfd device, and one of those exposed functions would be the CDF part, instead of pushing everything into the "display parts". At least that is sort of what we do today, and it keeps the audio, cec and display parts nicely separated.
- hdmi/dp helpers: HDMI/DP are both standardized output connectors with nice complexity. DP is mostly about handling dp aux transactions and DPCD registers, hdmi mostly about infoframes and how to correctly set them up from the mode + edid.
Yes, it is a mess. But we have managed to hide that below a simple panel API similar to CDF/omap so far.
- dpms is 4 states in drm, even more in fbdev afaict, but real hw only supports on/off nowadays ... how should/do we care?
Agreed, they should all really go away unless someone finds a valid use case.
- Fancy modes and how to represent them. Random list of things we need to represent somehow: broadcast/reduced rgb range for hdmi, yuv modes, different bpc modes (and handling how this affects bandwidth/clocks, e.g. i915 auto-dithers to 6bpc on DP if there's not enough), 3D hdmi modes (patches have floated on dri-devel for this), overscan compensation. Many of these things link in with e.g. the helper libraries for certain outputs, e.g. discovering DP sink capabilities or setting up the correct hdmi infoframe.
Are you saying drm modes don't support this as of today? I have not used these types of modes in DRM yet. Maybe the common video mode patches are a good start.
- How to expose random madness as properties, e.g. backlight controllers, broadcast mode, enable/disable embedded audio (some screens advertise it, but don't like it). For additional fun I expect different users of a display slave driver to expect a different set of "standardized" properties.
Some standardized properties would be nice :). Whatever is not standard doesn't really matter.
Debug support: Register dumping, exposing random debugfs files, tracing. Preferably somewhat unified to keep things sane, since most often slave drivers are rather simple, but we expect quite a few different ones.
Random metadata surrounding a display sink, like output type. Or flags for supporting special modes (h/vsync polarity, interlaced/doublescan, pixel doubling, ...).
One thing that is needed is all the metadata related to the control/data interface between display controller and encoder, because this has to be unified per interface type like DSI/DBI so the same CDF driver can set up different display controllers. But I hope we could split the "CDF API" (panel ops) from the control/data bus API (host/source ops or CDF video source).
mode_fixup: Used a lot in drm-land to allow encoders to change the input mode, e.g. for lvds encoders which can do upscaling, or if the encoder supports progressive input with interlaced output and similar fancy stuff. See e.g. the intel sdvo encoder chip support.
Handling different control buses like i2c, direct access (not seen that yet), DSI, DP aux, some other protocols.
This is actually the place I wanted to start: with vendor-specific panel drivers using common ops to access the bus (DSI/I2C/DBI etc). Then once we have a couple of panel drivers we could unify the API making them do their stuff (like the current CDF ops). Or even better, maybe these two could be made completely separate and worked on in parallel. (A rough sketch of what such bus ops could look like follows below.)
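To sketch what such common bus ops could look like: nothing here is an existing API, the structs and names are invented, and only the DCS command values 0x11/0x29 come from the MIPI DCS spec:

#include <linux/types.h>

struct display_bus;

/* control-bus ops the display controller / DSI host would provide */
struct display_bus_ops {
	int (*dcs_write)(struct display_bus *bus, const u8 *data, size_t len);
	int (*dcs_read)(struct display_bus *bus, u8 cmd, u8 *data, size_t len);
};

struct display_bus {
	const struct display_bus_ops *ops;
};

/* a vendor panel driver then only talks to the bus, not to the SoC */
static int foo_panel_enable(struct display_bus *bus)
{
	static const u8 exit_sleep[] = { 0x11 };	/* DCS exit_sleep_mode */
	static const u8 display_on[] = { 0x29 };	/* DCS set_display_on */
	int ret;

	ret = bus->ops->dcs_write(bus, exit_sleep, sizeof(exit_sleep));
	if (ret)
		return ret;
	return bus->ops->dcs_write(bus, display_on, sizeof(display_on));
}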
Handling of different display data standards like dsi (intel invented a few of its own, I'm sure we're not the only ones).
hpd support/polling. Depending upon design, hpd handling needs to be cooperative between slave and master, or is a slave-only thing (which means the slave needs to be able to poke the master when something changes). Similarly, masters need to know which slaves require output polling.
I prefer a slave only thing forwarded to the drm encoder which I assume would be the drm equivalent of the display slave. At least I have not seen any need to involve the display controller in hpd (which I assume you mean by master).
- Initializing of slave drivers: of/devicetree based, compiled-in static tables in the driver, dynamic discovery by i2c probing, lookup through some platform-specific firmware table (ACPI). Related is how to forward platform init values from these sources (e.g. the panel's fixed modes) to the slave driver.
I'm not that familiar with the bios/uefi world. But on our SoCs we always have to show a splash screen from the boot loader (like bios; usually little kernel, uboot etc). So all probing is done by the bootloader and the HW is running when the kernel boots. And you are not allowed to disrupt it either, because that would yield visual glitches during boot. So one way or the other the boot loader would need to transfer the state to the kernel, or you would have to reverse engineer the state from hw at kernel probe.
- get_hw_state support. One of the major points in the i915 modeset rewrite which landed in 3.7 is that a lot of the hw state can be cross-checked with the sw tracking. Helps tremendously in tracking down driver (writer) fumbles ;-)
This sounds more like a display controller feature than a display slave feature.
- PSR/dsi command mode and how the start/stop frame dance should be handled.
Again, a vital piece for the many mobile driver ICs. And I think we have several sources (STE, Renesas, TI, Samsung, ...) on how to do this, tested in many products. So I hope this could be an early step in the evolution.
- Random funny expectations around the modeset sequence, i.e. when (and how often) the video stream should be enabled/disabled. In the worst case this needs some serious cooperation between master and slaves. Even more fun for trained output links like DP, where re-training - and thus restarting part of, or even the complete, modeset sequence - could be required to happen at any time.
Again, we have several samples of platforms already doing this stuff, so we should be able to get a draft pretty early. From my experience, when to enable/disable the video stream can vary between versions of the same display controller, so I think it could be pretty hairy to get a single solution for all. Instead I think we need to leave some room for the master/slave to decide when to enable/disable. And to be able to do this we should try to have pretty specific ops on the slave and master. I'm not sure prepare/modeset/commit is specific enough unless we document what is expected to be done by the slave in each of these.
- There's more I'm sure, gfx hw tends to be insane ...
Yes, and one is the chain-of-slaves issue that is "common" on mobile systems. One example I have is dispc->dsi->dsi2dsi-bridge->dsi2lvds-bridge->lvds-panel. My proposal to hide this complexity in CDF was aggregate drivers. So from drm there will only be one master (dispc) and one slave (dsi2dsi). Then dsi2dsi will itself use another CDF/slave driver to talk to its slave. This way the top master (dispc) driver never has to care about this complexity. Whether this is possible to hide in practice we will see ... (a rough sketch of such an aggregate/bridge slave is below).
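A sketch of how such an aggregate/bridge slave could look with the display_slave structs proposed above; the names and the assumed prepare op are illustrative only:

/* a dsi2lvds bridge is a slave towards its master and at the same time
 * the master of the next slave in the chain */
struct foo_dsi2lvds {
	struct display_slave slave;	/* what our master sees */
	struct display_slave *next;	/* the lvds panel behind us */
};

static int foo_dsi2lvds_prepare(struct display_slave *s)
{
	struct foo_dsi2lvds *bridge =
		container_of(s, struct foo_dsi2lvds, slave);

	/* program the bridge's own registers here ... */

	/* ... then forward down the chain, so the top-level master never
	 * needs to know how deep the chain actually is */
	if (bridge->next && bridge->next->ops->prepare)
		return bridge->next->ops->prepare(bridge->next);
	return 0;
}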
Wishful Thinking
Ignoring reality, let's look at what the perfect display slave framework should achieve to be useful:
Should be simple to share code between different master drivers - display slave drivers tend to be boring assemblies of register definitions and banging the right magic values into them. Which also means that we should aim for a high level of unification so that using, understanding and debugging drivers is easy.
Since we expect drivers to be simple, even small amounts of impedance-matching code can kill the benefits of the shared code. Furthermore it should be possible to extend drivers with whatever subset of the above feature list is required by the subsystem/driver using a slave driver. Again, without incurring unnecessary amounts of impedance matching. Ofc, not all users of slave drivers will be able to use all the crazy features.
This is also my fear, which is why I wanted to start with one slave interface at a time. And maybe even have different "API"s for different types of panels, like classic I2C encoders, DSI command mode "smart" panels, DSI video mode, DPI ..., and then do another layer of helpers in drm encoders. That way a DSI command mode panel wouldn't have to be forced into the same shell as an I2C HDMI encoder, as they are very different with very little overlap.
Reality Check
We already have tons of different slave encoder frameworks sprinkled all over the kernel, which support different sets of crazy features and are used by different drivers. Furthermore each subsystem seems to have come up with its own way to describe metadata like display modes, all sorts of type enums, properties, helper functions for special output types.
Conclusions:
Throwing away and rewriting all the existing code seems unwise, but we'll likely need to port tons of existing drivers to the new framework.
Unifying the metadata handling will be _really_ painful since it's deeply ingrained into each driver. Not unifying it otoh will lead to colossal amounts of impedance matching code.
The union of all the slave features used by all the existing frameworks is impressive, but also highly non-overlapping. Likely everyone has their own absolute "must-have" feature.
Proposal
I have to admit that I'm not too much in favour of the current CDF. It has a bit of a midlayer smell to it imo, and looks like it will make many of the mentioned corner cases messy to enable. Also, looking at the proposed generic video mode structure, it seems to lack some features that e.g. drm_mode already has - and that's not even counting new insanity like 3D modes or some of the advanced infoframe stuff.
So instead I'll throw around a few ideas and principles:
- s/framework/helper library/ Yes, I really hate midlayers and just coming up with a different name seems to go a long way towards saner apis.
Me like, but I hope you agree to keep calling it CDF until it is merged. We could call it Common Display Frelpers if you like ;)
- I think we should reduce the scope of the initial version massively and instead increase the depth to fully cover everything. So instead of something which covers everything of a limited use-case from discovery, setup, mode handling and mode-setting, concentrate on only one operation. The actual mode-set seems to be the best case, since it usually involves a lot of the boring register bashing code. The first interface version would ignore everything else completely.
To also cover and be useful to mobile panels, I suggest starting with on/off using a fixed mode initially, because modeset is not used for most mobile panels (they only have one mode). (A minimal sketch of what that could look like is below.)
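As a sketch of what that minimal interface could be (hypothetical names, and struct videomode just stands in for whatever common mode structure gets agreed on):

/* minimal ops for a fixed-mode panel: no modeset, just on/off plus a way
 * for the master to query the single supported mode */
struct fixed_panel_slave_ops {
	int (*enable)(struct display_slave *slave);
	int (*disable)(struct display_slave *slave);
	const struct videomode *(*get_fixed_mode)(struct display_slave *slave);
};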
Shoot for the most powerful api for that little piece we're starting with, and make it the canonical thing. I.e. for modeset we need a video mode thing, and imo it only makes sense if that's the native data structure for all involved subsystems. At least it should be the aim. Yeah, that means tons of work. Even more important is that the new data structure supports every feature already supported in some insane way in one of the existing subsystems. Imo if we keep different data structures everywhere, the impedance matching will eat up most of the code sharing benefits.
Since this means converting all involved subsystems, we should imo just forget about fbdev. For obvious reasons I'm also leaning towards simply ditching the drm prefix from the drm defines and using those ;-)
I haven't used it in a driver yet, but mandating regmap (might need some improvements) should get us decent unification between drivers. And hopefully also an easy way to have unified debug tools. regmap already has trace points and a few other cool things.
Guideline for I2C slave drivers maybe? Do we really want to enforce how drivers are implemented when it doesn't affect the API? Also, I don't think it fits in general for slaves, since DSI/DBI have not only registers but also operations you can execute using the control interface.
We need some built-in way to drill direct paths from the master display driver to the slave driver for the different subsystems. Jumping through hoops (or even making it impossible) to extend drivers in funny ways would be a big step backwards.
Locking will be fun, especially once we start to add slave->master callbacks (e.g. for stopping/starting the display signal, hpd interrupts, ...). As a general rule I think we should aim for no locks in the slave driver, with the master owning the slave and ensuring exclusion with its own locks. Slaves which use shared resources and so need locks (everything doing i2c actually) may not call master callback functions with locks held.
Agreed, and I think we should rely on upper layers like drm as much as possible for locking.
Then, once we've gotten things off the ground and have some slave encoder drivers which are actually shared between different subsystems/drivers/platforms or whatever, we can start to organically grow more common interfaces. Ime it's much easier to extract decent interfaces after the fact than to try to come up with them upfront.
Now let's pour this into a more concrete form:
struct display_slave_ops {
	/* modeset ops, e.g. prepare/modeset/commit from drm */
};
struct display_slave {
	struct display_slave_ops *ops;
	void *driver_private;
};
I think even just that will be worth a lot of flames, to come up with a good and agreeable interface for everyone. It'll probably be satisfactory to no one, though.
Then each subsystem adds its own magic, e.g.
struct drm_encoder_slave {
	struct display_slave slave;

	/* everything else which is there already and not covered by the
	 * display slave interface. */
};
I like the starting point. Hard to make it any simpler ;). But the next steps would probably follow quickly. I also like the idea to have current drivers aggregate the slave to make the transition easier. CDF as it is now is an all-or-nothing API. And since you don't care how slaves interact with the master (bus ops), I still see the possibility to separate a "CDF device API" and a "CDF bus API", which would allow using a DSI bus API for DSI panels and an I2C bus API (or regmap) for I2C encoders instead of forcing use of the video source API in all slave drivers.
Other subsystems/drivers like DSS would embed the struct display_slave in their own equivalent data-structure.
So now we have the little problem that we want to have one single _slave_ driver codebase, but it should be able to support n different interfaces and potentially even more ways to be initialized and set up. Here's my idea how this could be tackled:
1. Smash everything into one driver file/directory.
2. Use a common driver structure which contains pointers/members for all possible use-cases. For each interface the driver supports, it'll allocate the same structure and put the pointer into foo->slave.driver_private. This way different entry points from different interfaces can use the same internal functions, since they all deal with the same structure.
3. Add whatever magic is required to set up the driver for different platforms. E.g. an OF match, a drm_encoder_slave i2c match and some direct function to set up hardcoded cases could all live in the same file.
Getting the kernel Kconfig stuff right will be fun, but we should get by with adding tons more stub functions. That might mean that an of/devicetree platform build carries around a bit of gunk for x86 vbt matching maybe, but imo that shouldn't ever get out of hand size-wise.
Once we have a few such shared drivers in place, and even more important, unified that part of the subsystems using them a bit, it should be painfully obvious which is the next piece to extract into the common display slave library interface. After all, they'll live right next to each other in the driver sources ;-)
Eventually we should get into the real fun part like dsi bus support or command mode/PSR ... Those advanced things probably need to be optional.
But imo the key part is that we aim for real unification in the users of display_slave, i.e. internally convert everything over to the new structures. That should also make code-sharing much easier, so that we could move existing helper functions to the common display helper library.
What about drivers that are waiting for CDF to be pushed upstream instead of having to push another custom panel framework? I'm talking about my own KMS driver ... but maybe I could put most of it in staging and move the relevant parts of the DSI/DPI/HDMI panel drivers to "common" slave drivers ...
Bikesheds
I.e. the boring details:
- Where to put slave drivers? I'll vote for anything which does not include drivers/video ;-)
drivers/video +1, drivers/gpu -1, who came up with putting KMS under drivers/gpu ;)
- Maybe we want to start with a different part than modeset, or add a bit more on top. Though I really think we should start minimally and modesetting seemed like the most useful piece of the puzzle.
As suggested, starting with on/off and a static/fixed mode would help single-resolution LCDs. Actually that is almost all that is needed for mobile panels, and what I intended to get from CDF :)
- Naming the new interfaces. I'll have more asbestos suits on order ...
Until you get them: would it make sense to reuse the encoder name from drm, or is that too restrictive?
- Can we just copy the new "native" interface structs from drm, pls?
I hope you are not talking about the helper interfaces at least ;). But if CDF is going to be the new drm helper of choice for the encoder/connector parts, then it sounds like CDF would replace most of the old helpers. It would be far too many layers with the old helpers too, and I think I recall Jesse wanting to deprecate/remove them as well. Hopefully we could have some generic encoder/connector helper implementations that only depend on CDF.
/BR /Marcus
On Tue, Jan 29, 2013 at 08:35:28PM +0100, Marcus Lorentzon wrote:
On 01/29/2013 04:50 PM, Daniel Vetter wrote:
On Tue, Jan 29, 2013 at 3:19 PM, Daniel Vetter daniel.vetter@ffwll.ch wrote: Ok, in the interest of pre-heating the discussion a bit I've written down my thoughts about display slave drivers. Adding a few more people and lists to make sure I haven't missed anyone ...
Cheers, Daniel
Display Slaves
A highly biased quick analysis from Daniel Vetter.
And here is my biased version as one of the initiators of the idea of CDF.
Thanks a lot for your detailed answer. Some quick replies, I need to go through this more carefully and maybe send another mail.
I work with ARM SoCs (ST-Ericsson) and mobile devices (DSI/DPI panels). Of course some of these have the "PC" type of encoder devices like HDMI and eDP or even VGA. But from what I have seen, most of these encoders are used by only a few different SoCs (GPUs?), and using these types of encoders was quite straightforward from DRM encoders.

My goal was to get some common code for all the "mobile" panel encoders, or "display module driver ICs" as some call them. Instead of tens of drivers (my assumption) you now have hundreds of drivers, often using MIPI DSI/DPI/DBI or some similar interface, and lots of new ones come each year. There are probably more panel types than there are products on the market, since most products use more than one type of panel in the same product to secure sourcing for mass production (note that multiple panels use the same driver IC). So that was the initial goal: to cover all of these, most of which are maintained per SoC/CPU outside of kernel.org. If HDMI/DP etc. fit in this framework, then that is just a nice bonus. I just wanted to give my history so we are not trying to include too many different types of encoders without an actual need. Maybe the I2C drm stuff is good enough for that type of encoder. But again, it would be nice with one suit that fits all ...

I also like the idea to start out small. But if no support is added initially for the mobile panel types, then I think it will be hard to get all vendors to start pushing those drivers, because the benefit of doing so would be small. But maybe the CDF work with Linaro and Laurent could just be a second step, adding the necessary details to your really simple baseline. And I also favor the helpers over the framework approach, but I miss a big piece, which is the ops for panel drivers to call back to the display controller (the video source stuff).
Yeah, I think we have two main goals here for enabling code sharing for these output devices:
1. Basic panel support, with the panel usually glued onto the board, so squat runtime configuration required. Aim is to get the gazillion of out-of-tree drivers merged.
2. Allowing generic output encoder slaves to be used in a bunch of SoCs.
Summarizing my previous mail, I fear that if we start with the first point and don't take some of the mad features required to do the 2nd one right into account, we'll end up at a rather ugly spot.
[cut]
- hdmi/dp helpers: HDMI/DP are both standardized output connectors with nice complexity. DP is mostly about handling dp aux transactions and DPCD registers, hdmi mostly about infoframes and how to correctly set them up from the mode + edid.
Yes, it is a mess. But we have managed to hide that below a simple panel API similar to CDF/omap so far.
Well, my concern is that we need to expose a bunch of special properties (both to the master driver and ultimately to userspace) which are rather hard to shovel through a simple panel abstraction. Ime from desktop graphics there are no limits to the insane use cases and devices people come up with and want to plug into your machine ;-)
- dpms is 4 states in drm, even more in fbdev afaict, but real hw only supports on/off nowadays ... how should/do we care?
Agreed, they should all really go away unless someone finds a valid use case.
- Fancy modes and how to represent them. Random list of things we need to represent somehow: broadcast/reduced rgb range for hdmi, yuv modes, different bpc modes (and handling how this affects bandwidth/clocks, e.g. i915 auto-dithers to 6bpc on DP if there's not enough), 3D hdmi modes (patches have floated on dri-devel for this), overscan compensation. Many of these things link in with e.g. the helper libraries for certain outputs, e.g. discovering DP sink capabilities or setting up the correct hdmi infoframe.
Are you saying drm modes don't support this as of today? I have not used these types of modes in DRM yet. Maybe the common video mode patches are a good start.
All the stuff I've mentioned is supported in drm/i915 (or at least we have patches floating around), and on a quick look at the proposed video_mode I couldn't fit this all in. Some of the features are fully fleshed out, but I expect that we'll fill in all the little tiny holes in the next few releases.
- How to expose random madness as properties, e.g. backlight controllers, broadcast mode, enable/disable embedded audio (some screens advertise it, but don't like it). For additional fun I expect different users of a display slave driver to expect a different set of "standardized" properties.
Some standardized properties would be nice :). Whatever is not standard doesn't really matter.
The problem is that we have a few 100klocs of driver code lying around in upstream, so if we switch standards there's some decent fun involved converting things. Or we need to add conversion functions all over the place, which seems rather ugly, too.
Debug support: Register dumping, exposing random debugfs files, tracing. Preferably somewhat unified to keep things sane, since most often slave drivers are rather simple, but we expect quite a few different ones.
Random metadata surrounding a display sink, like output type. Or flags for supporting special modes (h/vsync polarity, interlaced/doublescan, pixel doubling, ...).
One thing that is needed is all the metadata related to the control/data interface between display controller and encoder, because this has to be unified per interface type like DSI/DBI so the same CDF driver can set up different display controllers. But I hope we could split the "CDF API" (panel ops) from the control/data bus API (host/source ops or CDF video source).
I guess we have two options for panels on such buses with special needs: - either add a bunch of optional functions to the common interfaces - or subclass the common interface/struct and add additional magic in there, i.e.
struct dsi_slave {
	struct display_slave slave;
	struct dsi_panel_ops *panel_ops;

	/* whatever other magic we need for dsi, e.g. callbacks to the
	 * source for starting/stopping pixel data ... */
};
The latter requires a bit more casting of struct pointers (see the container_of sketch below), but should be more flexible. Ime from i915 code it's not too onerous, e.g. for encoders we nest such C struct classes about 4 levels deep in the code: drm_encoder -> intel_encoder -> intel_dig_encoder -> intel_dp/hdmi/ddi
So I think both approaches are doable.
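For the casting mentioned above, the usual container_of pattern would look roughly like this, using the hypothetical dsi_slave struct:

#include <linux/kernel.h>	/* container_of() */

/* recover the dsi-specific wrapper from the generic display_slave
 * pointer the common code hands around; works because the generic
 * struct is embedded in (not pointed to by) struct dsi_slave */
static inline struct dsi_slave *to_dsi_slave(struct display_slave *slave)
{
	return container_of(slave, struct dsi_slave, slave);
}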
mode_fixup: Used a lot in drm-land to allow encoders to change the input mode, e.g. for lvds encoders which can do upscaling, or if the encoder supports progressive input with interlaced output and similar fancy stuff. See e.g. the intel sdvo encoder chip support.
Handling different control buses like i2c, direct access (not seen that yet), DSI, DP aux, some other protocols.
This is actually the place I wanted to start: with vendor-specific panel drivers using common ops to access the bus (DSI/I2C/DBI etc). Then once we have a couple of panel drivers we could unify the API making them do their stuff (like the current CDF ops). Or even better, maybe these two could be made completely separate and worked on in parallel.
Hm, so starting with some DSI interface code, similarly to how we have i2c? tbh I have pretty much zero clue about how dsi exactly works, but growing different parts of a common panel infrastructure sounds intriguing.
Handling of different display data standards like dsi (intel invented a few of its own, I'm sure we're not the only ones).
hpd support/polling. Depending upon design, hpd handling needs to be cooperative between slave and master, or is a slave-only thing (which means the slave needs to be able to poke the master when something changes). Similarly, masters need to know which slaves require output polling.
I prefer a slave only thing forwarded to the drm encoder which I assume would be the drm equivalent of the display slave. At least I have not seen any need to involve the display controller in hpd (which I assume you mean by master).
I've used pretty unclear definitions. Generally the master is everything not behind the slave/panel interface. Call it the display driver, maybe ... For this case I don't expect that hpd involves any piece of hw on the master/driver side, but we need to somehow forward this to the userspace interfaces. At least in drm; dunno what other display drivers do here.
- Initializing of slave drivers: of/devicetree based, compiled-in static tables in the driver, dynamic discovery by i2c probing, lookup through some platform-specific firmware table (ACPI). Related is how to forward platform init values from these sources (e.g. the panel's fixed modes) to the slave driver.
I'm not that familiar with the bios/uefi world. But on our SoCs we always have to show a splash screen from the boot loader (like bios; usually little kernel, uboot etc). So all probing is done by the bootloader and the HW is running when the kernel boots. And you are not allowed to disrupt it either, because that would yield visual glitches during boot. So one way or the other the boot loader would need to transfer the state to the kernel, or you would have to reverse engineer the state from hw at kernel probe.
Actually, reverse engineering the bios state from the actual hw state is what we now do for i915 ;-) Which is why we need the ->get_hw_state callback in some form. But that's just a result of some of the horrible things old firmware does; it /should/ be better on newer platforms. And hopefully the embedded ones aren't that massively screwed up ... Iirc the only current interface exposed by ACPI lets you get at the vendor boot splash and display it after you've taken over the hw.
- get_hw_state support. One of the major points in the i915 modeset rewrite which landed in 3.7 is that a lot of the hw state can be cross-checked with the sw tracking. Helps tremendously in tracking down driver (writer) fumbles ;-)
This sounds more like a display controller feature than a display slave feature.
See above for why we have that in i915. And we do call down into slave encoders (Intel (s)dvo standards) on older hw. Might be we won't need that any more on SoC platforms (I do hope that's the case at least).
- PSR/dsi command mode and how the start/stop frame dance should be handled.
Again, a vital piece for the many mobile driver ICs. And I think we have several sources (STE, Renesas, TI, Samsung, ...) on how to do this, tested in many products. So I hope this could be an early step in the evolution.
One issue with start/stop callbacks I've discussed a bit with Jani Nikula and Rob Clark is locking rules around start/stop callbacks from the slave to the display source. Especially how to handle fun like blocking the dsi bus while we need to wait for the transfer window.
- Random funny expectations around the modeset sequence, i.e. when (and how often) the video stream should be enabled/disabled. In the worst case this needs some serious cooperation between master and slaves. Even more fun for trained output links like DP, where re-training - and thus restarting part of, or even the complete, modeset sequence - could be required to happen at any time.
Again, we have several samples of platforms already doing this stuff, so we should be able to get a draft pretty early. From my experience, when to enable/disable the video stream can vary between versions of the same display controller, so I think it could be pretty hairy to get a single solution for all. Instead I think we need to leave some room for the master/slave to decide when to enable/disable. And to be able to do this we should try to have pretty specific ops on the slave and master. I'm not sure prepare/modeset/commit is specific enough unless we document what is expected to be done by the slave in each of these.
Well, drm/i915 killed the prepare/modeset/commit ops; we now have our own with semantics matching our hw. My concern here is mostly about fancier display buses with link training - e.g. on DP you can't just start/stop the pixel stream, but there's a nice dance involved to do it.
- There's more I'm sure, gfx hw tends to be insane ...
Yes, and one is the chain-of-slaves issue that is "common" on mobile systems. One example I have is dispc->dsi->dsi2dsi-bridge->dsi2lvds-bridge->lvds-panel. My proposal to hide this complexity in CDF was aggregate drivers. So from drm there will only be one master (dispc) and one slave (dsi2dsi). Then dsi2dsi will itself use another CDF/slave driver to talk to its slave. This way the top master (dispc) driver never has to care about this complexity. Whether this is possible to hide in practice we will see ...
I think even more fun would be to replace the lvds endpoint with hdmi, and then try to coax the infoframe control attributes down that pipeline (plus whose responsibility it is to do the various adjustments to the pixels).
[cut]
- I think we should reduce the scope of the initial version massively and instead increase the depth to fully cover everything. So instead of something which covers everything of a limited use-case from discovery, setup, mode handling and mode-setting, concentrate on only one operation. The actual mode-set seems to be the best case, since it usually involves a lot of the boring register bashing code. The first interface version would ignore everything else completely.
To also cover and be useful to mobile panels, I suggest starting with on/off using a fixed mode initially, because modeset is not used for most mobile panels (they only have one mode).
Would that be start/stop a frame for manual refresh or enable/disable the display itself? Just curious what you're aiming for as the minimal useful thing here ...
Shoot for the most powerful api for that little piece we're starting with, and make it the canonical thing. I.e. for modeset we need a video mode thing, and imo it only makes sense if that's the native data structure for all involved subsystems. At least it should be the aim. Yeah, that means tons of work. Even more important is that the new data structure supports every feature already supported in some insane way in one of the existing subsystems. Imo if we keep different data structures everywhere, the impedance matching will eat up most of the code sharing benefits.
Since this means converting all involved subsystems, we should imo just forget about fbdev. For obvious reasons I'm also leaning towards simply ditching the drm prefix from the drm defines and using those ;-)
I haven't used it in a driver yet, but mandating regmap (might need some improvements) should get us decent unification between drivers. And hopefully also an easy way to have unified debug tools. regmap already has trace points and a few other cool things.
Guideline for I2C slave drivers maybe? Do we really want to enforce how drivers are implemented when it doesn't affect the API? Also, I don't think it fits slaves in general, since DSI/DBI have not only registers but also operations you can execute over the control interface.
Yeah, that was an idea for i2c guidelines. I guess if we have a different (sub)type for DSI we could gather helpers somewhere which are useful only for DSI. E.g. drm is in the process of growing some DP helpers shared among a few drivers.
My idea behind being a bit more anal about standardization is that we expect tons of these drivers, and also that lots of different SoC platforms might share them. So trying to make them look similar and work in similar ways (where reasonable), to help enable existing drivers on new SoCs and to ease debugging, seemed like something we should discuss a bit.
We need some built-in way to drill direct paths from the master display driver to the slave driver for the different subsystems. Having to jump through hoops (or it being outright impossible) to extend drivers in funny ways would be a big step backwards.
Locking will be fun, especially once we start to add slave->master callbacks (e.g. for stopping/starting the display signal, hpd interrupts, ...). As a general rule I think we should aim for no locks in the slave driver, with the master owning the slave and ensuring exclusion with its own locks. Slaves which use shared resources and so need locks (everything doing i2c, actually) may not call master callback functions with locks held.
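In code the rule would look roughly like this - display_master and the slave_enable hook are stand-ins made up just for this sketch, not a proposed interface:

#include <linux/mutex.h>

struct display_master {
        struct mutex lock;              /* owned by the master, also protects the slave */
        int (*slave_enable)(void *slave_priv);  /* stand-in for the real slave op */
        void *slave_priv;
};

static int master_enable(struct display_master *master)
{
        int ret;

        mutex_lock(&master->lock);
        /* the slave has no locks of its own, the master's lock provides
         * all the exclusion it needs */
        ret = master->slave_enable(master->slave_priv);
        mutex_unlock(&master->lock);

        return ret;
}

/* slave->master callback, e.g. hotplug: the slave must call this without
 * any of its own locks held, so the master is free to grab its lock here */
static void master_hpd_notify(struct display_master *master)
{
        mutex_lock(&master->lock);
        /* ... re-probe, re-train, schedule a modeset, ... */
        mutex_unlock(&master->lock);
}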
Agreed, and I think we should rely on upper layers like drm as much as possible for locking.
Then, once we've gotten things off the ground and have some slave encoder drivers which are actually shared between different subsystems/drivers/platforms or whatever, we can start to organically grow more common interfaces. Ime it's much easier to simply extract decent interfaces after the fact than to try to come up with them upfront.
Now let's pour this into a more concrete form:
struct display_slave_ops {
        /* modeset ops, e.g. prepare/modeset/commit from drm */
};

struct display_slave {
        struct display_slave_ops *ops;
        void *driver_private;
};
I think even just that will be worth a lot of flames to come up with a good and agreeable interface for everyone. It'll probably be satisfactory to no one, though.
Then each subsystem adds its own magic, e.g.
struct drm_encoder_slave {
        struct display_slave slave;

        /* everything else which is there already and not covered by the
         * display slave interface. */
};
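And as a hand-wavy usage sketch - assuming the ops end up with something like a commit hook, and with foo_encoder_to_slave() being a made-up container_of-style accessor - a drm driver's encoder functions would then just forward into the generic slave:

#include <drm/drm_crtc.h>

static void foo_drm_encoder_commit(struct drm_encoder *encoder)
{
        struct display_slave *slave = foo_encoder_to_slave(encoder);

        slave->ops->commit(slave);
}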
I like the starting point, hard to make it any simpler ;). But the next step would probably follow quickly. I also like the idea of having current drivers aggregate the slave to make the transition easier. CDF as it is now is an all-or-nothing API. And since you don't care how slaves interact with the master (bus ops), I still see the possibility to separate a "CDF device API" and a "CDF bus API". That would allow using the DSI bus API for DSI panels and the I2C bus API (or regmap) for I2C encoders, instead of forcing use of the video source API in all slave drivers.
I didn't follow here which pieces you'd like to cut apart along which lines exactly ... Maybe some example structs or ascii-art to help the clueless?
Aside about the simplicity of the above: it's slightly tongue-in-cheek, I expect it to end up a fair bit more feature-full ;-) I just wanted to direct the discussion a bit towards a minimal but still useful, highly extensible interface.
[cut]
But imo the key part is that we aim for real unification in the users of display_slaves, i.e. internally convert everything over to the new structures. That should also make code sharing much easier, so that we could move existing helper functions into the common display helper library.
What about drivers that are waiting for CDF to be pushed upstream instead of having to push yet another custom panel framework? I'm talking about my own KMS driver ... but maybe I could put most of it in staging and move the relevant parts of the DSI/DPI/HDMI panel drivers to "common" slave drivers ...
Hm, I think I've missed your drm/kms driver. Links to the source? Reading through a drm driver using the current CDF would be nice, that way I'm at least familiar with one part of the code ;-)
Bikesheds
I.e. the boring details:
- Where to put slave drivers? I'll vote for anything which does not include drivers/video ;-)
drivers/video +1, drivers/gpu -1, who came up with putting KMS under drivers/gpu ;)
I think the main reason was to be as far away from the fbdev/fbcon code as possible ;-) Also, we have gem/ttm in drm, which is all about the PU part and not really about the G ..
- Maybe we want to start with a different part than modeset, or add a bit more on top. Though I really think we should start minimally and modesetting seemed like the most useful piece of the puzzle.
As suggested, starting with on/off and a static/fixed mode would help single-resolution LCDs. Actually that is almost all that is needed for mobile panels, and what I intended to get from CDF :)
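I.e. for the first cut, a variant of your display_slave_ops strawman as dumb as this would already cover those panels (names made up, display_mode being whatever common mode struct we end up with):

struct display_slave;
struct display_mode;

struct display_slave_ops {
        int (*enable)(struct display_slave *slave);
        void (*disable)(struct display_slave *slave);
        /* panels with a single fixed mode, no full modeset needed */
        const struct display_mode *(*get_fixed_mode)(struct display_slave *slave);
};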
- Naming the new interfaces. I'll have more asbestos suits on order ...
Until you get them: would it make sense to reuse the encoder name from drm, or is that too restrictive?
On a quick check drm lacks names for DSI encoders/panels, so we might want to add those. And maybe a generic panel output type. I guess it would be good to take my caveats list above and strike off everything we don't need for basic DSI panel support, then figure out where to steal the definitions from. Common definitions will be hard to come by; e.g. after much bikeshedding and deciding to use common fourcc codes for pixel layouts, drm ended up simply adding a bunch of its own fourcc codes since the ones negotiated with v4l didn't cut it.
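Adding those would just mean extending the existing define lists, something like the following (the numeric values here are purely illustrative):

/* alongside DRM_MODE_ENCODER_TMDS, _LVDS, ... */
#define DRM_MODE_ENCODER_DSI    6

/* alongside DRM_MODE_CONNECTOR_LVDS, _eDP, ... */
#define DRM_MODE_CONNECTOR_DSI  16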
- Can we just copy the new "native" interface structs from drm, pls?
I hope you are not talking about the helper interfaces at least ;).
Nope, the drm helpers are not the interfaces. Ofc, if we end up with a massively generic panel interface, we might add a few helpers to give slave/panel drivers an easy way to opt for sane default behaviour. E.g. handling a fixed panel mode and always returning that mode is something which has been reinvented in drm a few times ...
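E.g. the drm side of such a helper boils down to the snippet below - panel_fixed_mode() is just a placeholder for wherever the driver keeps its one mode, the rest are the existing drm calls:

#include <drm/drm_crtc.h>

static int panel_connector_get_modes(struct drm_connector *connector)
{
        struct drm_display_mode *mode;

        /* duplicate the single fixed mode the panel supports */
        mode = drm_mode_duplicate(connector->dev, panel_fixed_mode(connector));
        if (!mode)
                return 0;

        mode->type |= DRM_MODE_TYPE_DRIVER | DRM_MODE_TYPE_PREFERRED;
        drm_mode_probed_add(connector, mode);

        return 1;
}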
I probably should have written metadata structs/definitions, since that'll be the part which could get ugly if we end up with diverging standards. Interface functions obviously need to fit into what the hw bus at hand requires us to do (e.g. for DSI special cases).
[Aside wrt drm helpers: with i915 we now have an imo rather nice example that the drm crtc helpers really are just helpers, and that it's not too hard to come up with your own modeset infrastructure, even on an established driver codebase.]
But if CDF is going to be the new drm helper of choice for the encoder/connector parts, then it sounds like CDF would replace most of the old helpers. It would be far too many layers with the old helpers on top, too. And I think I recall Jesse wanting to deprecate/remove them as well.
Rob's tilcdc driver uses the drm crtc helpers, and for the i2c encoder slaves he added a new set of helpers to more easily integrate the crtc helpers with the existing drm_encoder_slave infrastructure. The end result looks fairly reasonable imo.
In general I think that as long as we aim for the different libraries to be as orthogonal as possible, so that drivers can pick and choose, more kinds of helpers doesn't really sound bad. On the drm side I've recently brushed up the crtc/output polling and fb helpers quite a bit, so drivers can now pick & choose (and i915 only uses some of them). Similarly for other helper ideas floating around like DSI, hdmi infoframe handling, dp aux stuff ...
Of course I expect that we'll wrap things up into dwim() functions for all the common cases.
Hopefully we could have some generic encoder/connector helper implementations that only depend on CDF.
I'm not sure whether we should aim for that really - having a slave/panel driver with mostly common code and a wee bit of shim code once for drm and once for dss (or whatever else is out there) doesn't sound too horrible to me. But I agree that at least for new code we should aim to get this right from the start.
Cheers, Daniel
On Tue, Jan 29, 2013 at 03:19:38PM +0100, Daniel Vetter wrote:
On Tue, Jan 29, 2013 at 1:11 PM, Laurent Pinchart laurent.pinchart@ideasonboard.com wrote:
DevRooms are also not supposed to open before 11:00 (which is already a massive improvement over 2011 and the years before, where I was happy to be able to put the cabling in at 12:00), and I tend to first get a nod of approval from the on-site devrooms supervisor before I go in and set up the room.
So use the hacking room this year. Things will hopefully be better next year.
Saturday is pretty much out of the question, given that most developers interested in CDF will want to attend the X.org talks. I'll try to get a room for Sunday then, but I'm not sure yet what time slots will be available. It would be helpful if people interested in the CDF discussions could tell me at what time they plan to leave Brussels on Sunday.
I'll stay till Monday early morning, so no requirements from me. Adding a bunch of Intel guys who're interested, too.
My return flight isn't until Monday afternoon.
On Fri, Jan 11, 2013 at 09:27:03PM +0100, Laurent Pinchart wrote:
Would anyone be interested in meeting at the FOSDEM to discuss the Common Display Framework ? There will be a CDF meeting at the ELC at the end of February, the FOSDEM would be a good venue for European developers.
We are interested as well (Philipp, Michael, Sascha, me, maybe also some of the others from the Pengutronix crew...).
rsc
On Friday 11 January 2013 21:27:03 Laurent Pinchart wrote:
Hi everybody,
Would anyone be interested in meeting at the FOSDEM to discuss the Common Display Framework ? There will be a CDF meeting at the ELC at the end of February, the FOSDEM would be a good venue for European developers.
A quick follow-up on this.
Given the late notice, getting a room from the FOSDEM staff wasn't possible. There will be two meeting rooms available that can be reserved on-site only. They can accommodate around 30 people but there will deliberately be no projector. They will be given on a first-come, first-served basis for one hour time slots at most (see https://fosdem.org/2013/news/2013-01-31-bof-announce/).

As room availability isn't guaranteed, and as one hour might be a bit short, I've secured an off-site but very close (http://www.openstreetmap.org/?lat=50.812924&lon=4.384506&zoom=18&... - UrLab) room that can accommodate 12 people around a meeting table (more is possible, but it might get a bit tight then). I propose having the CDF discussion there on Sunday morning from 9am to 11am (please let me know ASAP if you can't make it at that time).

Daniel Vetter
Jani Nikula
Marcus Lorentzon
Laurent Pinchart
Michael (from Pengutronix, not sure about the last name, sorry)
Philipp Zabel
Rob Clark
Robert Schwebel
Sascha Hauer
Ville Syrjälä
That's already 10 people. If someone else would like to attend the meeting please let me know.
On Thu, Jan 31, 2013 at 11:53:30AM +0100, Laurent Pinchart wrote:
On Friday 11 January 2013 21:27:03 Laurent Pinchart wrote:
Hi everybody,
Would anyone be interested in meeting at the FOSDEM to discuss the Common Display Framework ? There will be a CDF meeting at the ELC at the end of February, the FOSDEM would be a good venue for European developers.
A quick follow-up on this.
Given the late notice, getting a room from the FOSDEM staff wasn't possible. There will be two meeting rooms available that can be reserved on-site only. They can accommodate around 30 people but there will deliberately be no projector. They will be given on a first-come, first-served basis for one hour time slots at most (see https://fosdem.org/2013/news/2013-01-31-bof-announce/).

As room availability isn't guaranteed, and as one hour might be a bit short, I've secured an off-site but very close (http://www.openstreetmap.org/?lat=50.812924&lon=4.384506&zoom=18&... - UrLab) room that can accommodate 12 people around a meeting table (more is possible, but it might get a bit tight then). I propose having the CDF discussion there on Sunday morning from 9am to 11am (please let me know ASAP if you can't make it at that time).

Daniel Vetter
Jani Nikula
Marcus Lorentzon
Laurent Pinchart
Michael (from Pengutronix, not sure about the last name, sorry)
Philipp Zabel
Rob Clark
Robert Schwebel
Sascha Hauer
Ville Syrjälä
That's already 10 people. If someone else would like to attend the meeting please let me know.
If space becomes tight I think Pengutronix doesn't have to be represented by 4 people, although all of us would be interested.
Otherwise, yes, we have time on Sunday morning.
Sascha
On 01/31/2013 11:53 AM, Laurent Pinchart wrote:
On Friday 11 January 2013 21:27:03 Laurent Pinchart wrote:
Hi everybody,
Would anyone be interested in meeting at the FOSDEM to discuss the Common Display Framework ? There will be a CDF meeting at the ELC at the end of February, the FOSDEM would be a good venue for European developers.
A quick follow-up on this.
Given the late notice, getting a room from the FOSDEM staff wasn't possible. There will be two meeting rooms available that can be reserved on-site only. They can accommodate around 30 people but there will deliberately be no projector. They will be given on a first-come, first-served basis for one hour time slots at most (see https://fosdem.org/2013/news/2013-01-31-bof-announce/).

As room availability isn't guaranteed, and as one hour might be a bit short, I've secured an off-site but very close (http://www.openstreetmap.org/?lat=50.812924&lon=4.384506&zoom=18&... - UrLab) room that can accommodate 12 people around a meeting table (more is possible, but it might get a bit tight then). I propose having the CDF discussion there on Sunday morning from 9am to 11am (please let me know ASAP if you can't make it at that time).

Daniel Vetter
Jani Nikula
Marcus Lorentzon
Laurent Pinchart
Michael (from Pengutronix, not sure about the last name, sorry)
Philipp Zabel
Rob Clark
Robert Schwebel
Sascha Hauer
Ville Syrjälä
That's already 10 people. If someone else would like to attend the meeting please let me know.
If there's a free seat I'd like to attend as well.
Thanks, - Lars
On 31/01/2013 11:53, Laurent Pinchart wrote:
On Friday 11 January 2013 21:27:03 Laurent Pinchart wrote:
Hi everybody,
Would anyone be interested in meeting at the FOSDEM to discuss the Common Display Framework ? There will be a CDF meeting at the ELC at the end of February, the FOSDEM would be a good venue for European developers.
A quick follow-up on this.
Given the late notice, getting a room from the FOSDEM staff wasn't possible. There will be two meeting rooms available that can be reserved on-site only. They can accommodate around 30 people but there will deliberately be no projector. They will be given on a first-come, first-served basis for one hour time slots at most (see https://fosdem.org/2013/news/2013-01-31-bof-announce/).

As room availability isn't guaranteed, and as one hour might be a bit short, I've secured an off-site but very close (http://www.openstreetmap.org/?lat=50.812924&lon=4.384506&zoom=18&... - UrLab) room that can accommodate 12 people around a meeting table (more is possible, but it might get a bit tight then). I propose having the CDF discussion there on Sunday morning from 9am to 11am (please let me know ASAP if you can't make it at that time).

Daniel Vetter
Jani Nikula
Marcus Lorentzon
Laurent Pinchart
Michael (from Pengutronix, not sure about the last name, sorry)
Philipp Zabel
Rob Clark
Robert Schwebel
Sascha Hauer
Ville Syrjälä
That's already 10 people. If someone else would like to attend the meeting please let me know.
Hi,
I am interested in the CDF. Where and when are you meeting?
Martin