Hi,
On 2013-02-06 00:27, Laurent Pinchart wrote:
Hello,
We've hosted a CDF meeting at the FOSDEM on Sunday morning. Here's a summary of the discussions.
Thanks for the summary. I've been on a longish leave, and just got back, so I haven't read the recent CDF discussions on the lists yet. I thought I'd start by replying to this summary first =).
- Abbreviations
DBI - Display Bus Interface, a parallel video control and data bus that transmits data using parallel data, read/write, chip select and address signals, similarly to 8051-style microcontroller parallel busses. This is a mixed video control and data bus.
DPI - Display Pixel Interface, a parallel video data bus that transmits data using parallel data, h/v sync and clock signals. This is a video data bus only.
DSI - Display Serial Interface, a serial video control and data bus that transmits data using one or more differential serial lines. This is a mixed video control and data bus.
In case you'll re-use these abbrevs in later posts, I think it would be good to mention that DPI is a one-way bus, whereas DBI and DSI are two-way (perhaps that's implicit with control bus, though).
- Goals
The meeting started with a brief discussion about the CDF goals.
Tomi Valkeinen and Tomasz Figa have sent RFC patches to show their views of what CDF could/should be. Many others have provided very valuable feedback. Given the early development stage, propositions were sometimes contradictory and focused on different areas of interest. We have thus started the meeting with a discussion about what CDF should try to achieve, and what it shouldn't.
CDF has two main purposes. The original goal was to support display panels in a platform- and subsystem-independent way. While mostly useful for embedded systems, the emergence of platforms such as Intel Medfield and ARM-based PCs that blend the embedded and PC worlds makes panel support useful for the PC world as well.
The second purpose is to provide a cross-subsystem interface to support video encoders. The idea originally came from a generalisation of the original RFC that supported panels only. While encoder support is considered lower priority than display panel support by developers focused on display controller drivers (Intel, Renesas, ST Ericsson, TI), companies that produce video encoders (Analog Devices, and likely others) don't share that point of view and would like to provide a single encoder driver that can be used in both KMS and V4L2 drivers.
What is an encoder? Something that takes a video signal in, and lets the CPU store the received data to memory? Isn't that a decoder?
Or do you mean something that takes a video signal in, and outputs a video signal in another format? (transcoder?)
If the latter, I don't see them as lower priority. If we use CDF also for SoC internal components (which I think would be great), then every OMAP board has transcoders.
I'm not sure about the vocabulary in this area, but a normal OMAP scenario could have the following video pipeline:
1. encoder (OMAP's DISPC, reads pixels from memory and outputs parallel RGB)
2. transcoder (OMAP's DSI, gets parallel RGB and outputs DSI)
3. transcoder (external DSI-to-LVDS chip)
4. panel (LVDS panel)
Even in the case where a panel would be connected directly to the OMAP, there would be the internal transcoder.
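(Not from any posted RFC, just to make the chain idea concrete: a minimal C sketch of modelling the pipeline above as a list of entities, each pointing at the previous one as its video source. All names here are made up.)

struct display_entity {
	const char *name;
	struct display_entity *source;	/* upstream entity, NULL for DISPC */

	int (*enable)(struct display_entity *ent);
	void (*disable)(struct display_entity *ent);
};

/* Enable a pipeline by walking from the sink (panel) back to the source. */
static int display_pipeline_enable(struct display_entity *sink)
{
	int ret = 0;

	if (sink->source)
		ret = display_pipeline_enable(sink->source);
	if (ret == 0 && sink->enable)
		ret = sink->enable(sink);

	return ret;
}

With such a chain the external DSI-to-LVDS transcoder and the panel would be handled exactly like the SoC-internal DISPC-to-DSI block.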
- Subsystems
Display panels are used in conjunction with FBDEV and KMS drivers. To the audience's knowledge there was no V4L2 driver that needs to explicitly handle display panels. Even though at least one V4L2 output driver (omap_vout) can output video to a display panel, it does so in conjunction with the KMS and/or FBDEV APIs that handle panel configuration. Panels are thus not exposed to V4L2 drivers.
Hmm, I'm no expert on omap_vout, but it uses neither KMS nor omapfb. It uses omapdss directly, and thus accesses the panels.
That said, I'm fine with leaving omap_vout out from the equation.
- KMS Extensions
The usefulness of V4L2 for output devices was questioned, and the possibility of using KMS for complex video devices usually associated with V4L2 was raised. The TI DaVinci 8xxx family is an example of chips that could benefit from KMS support.
The KMS API is lacking support for deep-pipelining ("framebuffers" that are sourced from a data stream instead of a memory buffer) today. Extending the KMS API with deep-pipelining support was considered a sensible goal that would mostly require the creation of a new KMS source object. Exposing the topology of the whole device would then be handled by the Media Controller API.
Isn't there also the problem that KMS doesn't support arbitrarily long chains of display devices? That actually sounds more like "deep-pipelining" than what you said, getting the source data from a data stream.
- Bus Model
Display panels are connected to a video bus that transmits video data and optionally to a control bus. Those two busses can be separate physical interfaces or combined into a single physical interface.
The Linux device model represents the system as a tree of devices (not to be confused with the device tree, abbreviated as DT). The tree is organized around control busses, with every device being a child of its control bus master. For instance an I2C device will be a child of its I2C controller device, which can itself be a child of its parent PCI device.
Display panels will be represented as Linux devices. They will have a single parent from the Linux device model point of view, but will be potentially connected to multiple physical busses. CDF thus needs to define what bus to select as the Linux parent bus.
In theory any physical bus that the device is attached to can be selected as the parent bus. However, selecting a video data bus would depart from the traditional Linux device model that uses control busses only. This caused concern among several people who argued that not presenting the device to the kernel as attached to its control bus would bring issues in embedded systems. Unlike on PC systems where the control bus master is usually the same physical device as the data bus master, embedded systems are made of a potentially complex assembly of completely unrelated devices. Not representing an I2C-controlled panel as a child of its I2C master in DT was thus frowned upon, even though no clear agreement was reached on the subject.
I've been thinking that a good rule of thumb would be that the device must be somewhat usable after the parent bus is ready. So for, say, a DPI+SPI panel, when the SPI is set up the driver can send messages to the panel, perhaps read an ID or such, even if the actual video cannot be shown yet (presuming the DPI bus is still missing).
Of course there are the funny cases, as always. Say, a DSI panel, controlled via i2c, and the panel gets its functional clock from the DSI bus's clock. In that case both busses need to be up and running before the panel can do anything.
- Combined video and control busses
When the two busses are combined in a single physical bus the panel device will obviously be represented as a child of that single physical bus.
In such cases the control bus could expose video bus control methods. This would remove the need for a video source as proposed by Tomi Valkeinen in his CDF model. However, if the bus can be used for video data transfer in combination with a different control bus, a video source corresponding to the data bus will be needed.
I think this is always the case. If a bus can be used for control and video data, you can always use it only for video data.
No decision has been taken on whether to use a video source in addition to the control bus in the combined busses case. Experimentation will be needed, and the right solution might depend on the bus type.
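(To make the alternatives concrete, here is a rough sketch of what a separate video source object could look like. This is not from any posted RFC; all names are illustrative.)

#include <linux/device.h>
#include <linux/types.h>

struct video_source;

struct video_source_ops {
	int (*enable)(struct video_source *src);
	void (*disable)(struct video_source *src);
	/* A combined control+data bus (e.g. DSI) might also route control
	 * transfers through the same object: */
	int (*dcs_write)(struct video_source *src, const u8 *buf, size_t len);
};

struct video_source {
	struct device *dev;	/* the bus master device providing the source */
	const struct video_source_ops *ops;
};

In the combined-bus case the open question is then whether a panel driver should call ops like these, or methods exposed directly by its control bus.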
- Multiple control busses
One panel was mentioned as being connected to a DSI bus and an I2C bus. The DSI bus is used for both control and video, and the I2C bus for control only. Configuring the panel requires sending commands through both DSI and I2C.
I have luckily not encountered such a device. However, many of the DSI devices do have i2c control as an option. From the device's point of view, both can be used at the same time, but I think usually it's saner to just pick one and use it.
The driver for the device should support both control busses, though. Probably 99% of the driver code is common for both cases.
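(A sketch of what that common code could look like, assuming the driver core only sees a bus-agnostic write function that the DSI or I2C probe path fills in; all names are illustrative.)

#include <linux/device.h>
#include <linux/types.h>

struct panel_data {
	struct device *dev;
	/* Filled in by either the DSI or the I2C probe path. */
	int (*write)(struct panel_data *panel, const u8 *buf, size_t len);
};

/* Shared for both control busses; 0x51 is the DCS set_display_brightness
 * command, assuming a DCS-compatible panel. */
static int panel_set_brightness(struct panel_data *panel, u8 level)
{
	u8 buf[2] = { 0x51, level };

	return panel->write(panel, buf, sizeof(buf));
}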
- Miscellaneous
- If the OMAP3 DSS driver is used as a model for the DSI support
implementation, Daniel Vetter requested the DSI bus lock semaphore to be killed as it prevents lockdep from working correctly (reference needed ;-)).
I don't think OMAP DSS should be used as a model. It has too much legacy crap that should be rewritten. However, it can be used as a reference to see what kind of features are needed, as it does support both video and command mode DSI, and has been used with many different kinds of DSI panels and DSI transcoders.
As for the semaphore, sure, it can be removed, although I'm not aware of this lockdep problem. If there's a problem it should be fixed in any case.
- Do we need to support chaining several encoders? We can come up with
several theoretical use cases, some of them probably exist in real hardware, but the details are still a bit fuzzy.
If encoder means the same as the "transcoder" term I used earlier, then yes, I think so.
As I wrote, I'd like to model the OMAP DSS internal components with CDF. The internal IP blocks are in no way different than external IP blocks, they just happen to be integrated into OMAP. The same display IPs are used with multiple different TI SoCs.
Also, the IPs vary between TI SoCs (for example, omap2 doesn't have DSI, omap3 has one DSI, omap4 has two DSIs), so we'll anyway need to have some kind of dynamic system inside the omapdss driver. If I can't use CDF for that, I'll need to implement a custom one, which I believe would resemble CDF in many ways.
I'm guessing that having multiple external transcoders is quite rare on production hardware, but is a very useful feature with development boards. It's not just once or twice that we've used a transcoder or two between a SoC and a panel, because we haven't had the final panel yet.
Also, sometimes there are small simple chips in the video pipeline, that do things like level shifting or ESD protection. In some cases these chips just work automatically, but in some cases one needs to setup regulators and gpios to get them up and running (for example, http://www.ti.com/product/tpd12s015). And if that's the case, then I believe having a CDF "transcoder" driver for the chip is the easiest solution.
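(For example, the enable path of such a driver could be little more than the following sketch; the pin name matches the tpd12s015 datasheet, the rest is illustrative and not the actual mainline driver.)

#include <linux/gpio.h>
#include <linux/regulator/consumer.h>

struct tpd_data {
	struct regulator *vcc;
	int ct_cp_hpd_gpio;	/* enables the level shifter's HDMI path */
};

static int tpd_enable(struct tpd_data *tpd)
{
	int ret;

	ret = regulator_enable(tpd->vcc);
	if (ret < 0)
		return ret;

	gpio_set_value(tpd->ct_cp_hpd_gpio, 1);

	return 0;
}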
Tomi
On Wed, 06 Feb 2013, Tomi Valkeinen tomi.valkeinen@ti.com wrote:
- Miscellaneous
- If the OMAP3 DSS driver is used as a model for the DSI support
implementation, Daniel Vetter requested the DSI bus lock semaphore to be killed as it prevents lockdep from working correctly (reference needed ;-)).
[...]
As for the semaphore, sure, it can be removed, although I'm not aware of this lockdep problem. If there's a problem it should be fixed in any case.
The problem is that lockdep does not support semaphores out of the box. I'm not sure how hard it would be to manually lockdep annotate the bus lock, and whether it would really work. In any case, as I think we learned in the past, getting locking right in a DSI command mode panel driver with an asynchronous update callback, DSI bus lock, and a driver data specific mutex can be a PITA. Lockdep would be extremely useful there.
AFAICS simply replacing the semaphore with a mutex would work for all other cases except DSI command mode display update, unless you're prepared to wait in the call until the next tearing effect interrupt plus framedone. Which would suck. I think you and I have talked about this part in the past...
BR, Jani.
On 2013-02-06 14:11, Jani Nikula wrote:
[...]
The problem is that lockdep does not support semaphores out of the box. I'm not sure how hard it would be to manually lockdep annotate the bus lock, and whether it would really work. In any case, as I think we learned in the past, getting locking right in a DSI command mode panel driver with an asynchronous update callback, DSI bus lock, and a driver data specific mutex can be a PITA. Lockdep would be extremely useful there.
AFAICS simply replacing the semaphore with a mutex would work for all other cases except DSI command mode display update, unless you're prepared to wait in the call until the next tearing effect interrupt plus framedone. Which would suck. I think you and I have talked about this part in the past...
A mutex requires locking and unlocking to happen from the same thread. But I guess that's what you meant by the problem being with display update, where the framedone callback is used to release the bus lock.
The semaphore could probably be changed to use wait queues, but isn't that more or less what a semaphore already does?
And I want to point out to those not familiar with omapdss, that the DSI bus lock in question does not protect any data in memory, but is an indication that the DSI bus is currently in use. The bus lock can be used to wait until the bus is free again.
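(For illustration, a minimal sketch of such a "bus busy" flag built on a wait queue; unlike a mutex it can be released from another context, e.g. the framedone callback. This is essentially what the semaphore already does internally, so the gain would mostly be the freedom to annotate or change the semantics.)

#include <linux/atomic.h>
#include <linux/wait.h>

struct dsi_bus {
	wait_queue_head_t wq;
	atomic_t busy;		/* 1 while a transfer or update owns the bus */
};

/* Sleep until the bus is free, then mark it busy. */
static void dsi_bus_lock(struct dsi_bus *bus)
{
	wait_event(bus->wq, atomic_cmpxchg(&bus->busy, 0, 1) == 0);
}

/* Safe to call from any context, e.g. the framedone callback. */
static void dsi_bus_unlock(struct dsi_bus *bus)
{
	atomic_set(&bus->busy, 0);
	wake_up(&bus->wq);
}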
I guess one option would be to disallow any waiting for the bus lock. If the panel driver tried to acquire the bus lock while the lock was already taken, the call would fail. This would move the handling of exclusivity to the user of the panel (the drm driver, I guess), which already should handle the framedone event.
The above would require that everything the panel does should be managed by the drm driver. Currently this is not the case for OMAP, as the panel driver can get calls via sysfs, or via backlight driver, or via (gpio) interrupts.
I don't really know what would be the best option here. On one hand requiring all panel calls to be managed by drm would be nice and simple. But it is a bit limiting when thinking about complex display chips. Will that work for all cases? I'm not sure.
Tomi
On Wed, Feb 6, 2013 at 6:11 AM, Tomi Valkeinen tomi.valkeinen@ti.com wrote:
[...]
What is an encoder? Something that takes a video signal in, and lets the CPU store the received data to memory? Isn't that a decoder?
Or do you mean something that takes a video signal in, and outputs a video signal in another format? (transcoder?)
In KMS parlance, we have two objects, a crtc and an encoder. A crtc reads data from memory and produces a data stream with display timing. The encoder then takes that data stream and timing from the crtc and converts it to some sort of physical signal (LVDS, TMDS, DP, etc.). It's not always a perfect match to the hardware. For example a lot of GPUs have a DVO encoder which feeds a secondary encoder like an sil164 DVO to TMDS encoder.
Alex
On 2013-02-06 16:44, Alex Deucher wrote:
[...]
In KMS parlance, we have two objects, a crtc and an encoder. A crtc reads data from memory and produces a data stream with display timing. The encoder then takes that data stream and timing from the crtc and converts it to some sort of physical signal (LVDS, TMDS, DP, etc.). It's
Isn't the video stream between the CRTC and the encoder just as physical? It just happens to be inside the GPU.
This is the case for OMAP, at least, where DISPC could be considered CRTC, and DSI/HDMI/etc could be considered encoder. The stream between DISPC and DSI/HDMI is plain parallel RGB signal. The video stream could as well be outside OMAP.
not always a perfect match to the hardware. For example a lot of GPUs have a DVO encoder which feeds a secondary encoder like an sil164 DVO to TMDS encoder.
Right. I think mapping the DRM entities to CDF ones is one of the bigger question marks we have with CDF. While I'm no expert on DRM, I think we have the following options:
1. Force DRM's model to CDF, meaning one encoder.
2. Extend DRM to support multiple encoders in a chain.
3. Support multiple encoders in a chain in CDF, but somehow map them to a single encoder in DRM side.
I really dislike the first option, as it would severely limit where CDF can be used, or would force you to write some kind of combined drivers, so that you can have one encoder driver running multiple encoder devices.
Tomi
On Wed, Feb 6, 2013 at 4:00 PM, Tomi Valkeinen tomi.valkeinen@ti.com wrote:
[...]
Right. I think mapping the DRM entities to CDF ones is one of the bigger question marks we have with CDF. While I'm no expert on DRM, I think we have the following options:
1. Force DRM's model to CDF, meaning one encoder.
2. Extend DRM to support multiple encoders in a chain.
3. Support multiple encoders in a chain in CDF, but somehow map them to a single encoder in DRM side.
4. Ignore drm kms encoders.
They are only exposed to userspace as a means for userspace to discover very simple constraints, e.g. 1 encoder connected to 2 outputs means you can only use one of the outputs at the same time. They are completely irrelevant for the actual modeset interface exposed to drivers, so you could create a fake kms encoder for each connector you expose through kms.
The crtc helpers use the encoders as a real entity, and if you opt to use the crtc helpers to implement the modeset sequence in your driver it makes sense to map them to some real piece of hw. But you can essentially pick any transcoder in your crtc -> final output chain for this. Generic userspace needs to be able to cope with a failed modeset due to arbitrary reasons anyway, so can't presume that simply because the currently exposed constraints are fulfilled it'll work.
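(To illustrate, with the KMS API of this era a driver could register one fake encoder per connector along these lines; sketch only, error handling and possible_crtcs setup trimmed.)

#include <drm/drm_crtc.h>

static const struct drm_encoder_funcs fake_encoder_funcs = {
	.destroy = drm_encoder_cleanup,
};

/* One fake KMS encoder per connector; the CDF chain behind the connector
 * stays a driver-internal detail. */
static int attach_fake_encoder(struct drm_device *dev,
			       struct drm_connector *connector,
			       struct drm_encoder *encoder)
{
	int ret;

	ret = drm_encoder_init(dev, encoder, &fake_encoder_funcs,
			       DRM_MODE_ENCODER_TMDS);
	if (ret < 0)
		return ret;

	return drm_mode_connector_attach_encoder(connector, encoder);
}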
I really dislike the first option, as it would severely limit where CDF can be used, or would force you to write some kind of combined drivers, so that you can have one encoder driver running multiple encoder devices.
Imo CDF and drm encoders don't really have that much to do with one another; it should just be a driver implementation detail. Of course, if common patterns emerge we could extract them somehow. E.g. if many drivers end up exposing the CDF transcoder chain as a drm encoder using the crtc helpers, we could add some library functions to make that simpler.
Another conclusion (at least from my pov) from the fosdem discussion is that we should separate the panel interface from the actual control/pixel data buses. That should give us more flexibility for insane hw and also allow directly exposing properties and knobs from e.g. dsi transcoders to the userspace interface. So I don't think we'll end up with _the_ canonical CDF sink interface anyway.
-Daniel