The goal of this series is to keep drivers from needing ad-hoc code to call .device_run in non-atomic context. Currently, .device_run can be called via v4l2_m2m_job_finish(), potentially running in interrupt context.
This series will be useful for the upcoming Request API, where drivers typically require .device_run to be called in non-atomic context for v4l2_ctrl_request_setup() calls.
The solution is to add a per-device worker, scheduled by v4l2_m2m_job_finish(), which removes the need for drivers to use a threaded interrupt or a similar mechanism.
This change allows v4l2_m2m_job_finish() to be called in interrupt context, separating .device_run and v4l2_m2m_job_finish() contexts.
It's worth mentioning that v4l2_m2m_cancel_job() doesn't need to flush or cancel the new worker: the job_spinlock synchronizes the two, and the core prevents simultaneous jobs. Either v4l2_m2m_cancel_job() waits for the worker, or the worker is unable to run a new job.
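In rough terms, the resulting flow looks like the sketch below (simplified from the actual v4l2-mem2mem patch later in this series; the real patch also deals with taking the context's queue lock from the worker):

/* Simplified sketch only -- see the v4l2-mem2mem patch below. */
static void v4l2_m2m_device_run_work(struct work_struct *work)
{
	struct v4l2_m2m_dev *m2m_dev =
		container_of(work, struct v4l2_m2m_dev, job_work);

	/* .device_run is now called in process context and may sleep */
	v4l2_m2m_try_run(m2m_dev);
}

void v4l2_m2m_job_finish(struct v4l2_m2m_dev *m2m_dev, struct v4l2_m2m_ctx *m2m_ctx)
{
	/* ... curr_ctx handling under job_spinlock, unchanged ... */
	__v4l2_m2m_try_queue(m2m_dev, m2m_ctx);
	/* safe from interrupt context: the job runs later, from the worker */
	schedule_work(&m2m_dev->job_work);
}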
Testing
-------
In order to test the change, and make sure no regressions are introduced, a kselftest test is added to stress the mem2mem framework.
Note that this series reworks the kselftest media_tests target. The tests that need hardware and human intervention are now marked as _EXTENDED, and a frontend script is added to run the tests that require neither.
This will allow the media_tests target to be included in automatic regression testing setups.
Hopefully, we will be able to introduce more and more automatic regression tests. Currently, our self-test run looks like:
$ make TARGETS=media_tests kselftest
make[1]: Entering directory '/home/zeta/repos/builds/virtme-x86_64'
make[3]: warning: jobserver unavailable: using -j1.  Add '+' to parent make rule.
make[3]: Nothing to be done for 'all'.
make[3]: warning: jobserver unavailable: using -j1.  Add '+' to parent make rule.
TAP version 13
selftests: media_tests: m2m_job_test.sh
========================================
-------------------
running media tests
-------------------
media_device : no video4linux drivers loaded, vim2m is needed
not ok 1..1 selftests: media_tests: m2m_job_test.sh [SKIP]
make[1]: Leaving directory '/home/zeta/repos/builds/virtme-x86_64'
Ezequiel Garcia (3):
  v4l2-mem2mem: Avoid v4l2_m2m_prepare_buf from scheduling a job
  v4l2-mem2mem: Avoid calling .device_run in v4l2_m2m_job_finish
  selftests: media_tests: Add a memory-to-memory concurrent stress test

Sakari Ailus (1):
  v4l2-mem2mem: Simplify exiting the function in __v4l2_m2m_try_schedule

 drivers/media/v4l2-core/v4l2-mem2mem.c               |  72 +++--
 .../testing/selftests/media_tests/.gitignore         |   1 +
 tools/testing/selftests/media_tests/Makefile         |   5 +-
 .../selftests/media_tests/m2m_job_test.c             | 287 ++++++++++++++++++
 .../selftests/media_tests/m2m_job_test.sh            |  32 ++
 5 files changed, 371 insertions(+), 26 deletions(-)
 create mode 100644 tools/testing/selftests/media_tests/m2m_job_test.c
 create mode 100755 tools/testing/selftests/media_tests/m2m_job_test.sh
There is no need for v4l2_m2m_prepare_buf to try to schedule a job: it only prepares a buffer, it does not queue one or change the state of the queue.
Remove the call to v4l2_m2m_try_schedule from v4l2_m2m_prepare_buf.
Signed-off-by: Ezequiel Garcia <ezequiel@collabora.com>
---
 drivers/media/v4l2-core/v4l2-mem2mem.c | 7 +------
 1 file changed, 1 insertion(+), 6 deletions(-)
diff --git a/drivers/media/v4l2-core/v4l2-mem2mem.c b/drivers/media/v4l2-core/v4l2-mem2mem.c
index 0a93c5b173c2..efae845435c9 100644
--- a/drivers/media/v4l2-core/v4l2-mem2mem.c
+++ b/drivers/media/v4l2-core/v4l2-mem2mem.c
@@ -481,14 +481,9 @@ int v4l2_m2m_prepare_buf(struct file *file, struct v4l2_m2m_ctx *m2m_ctx,
 			 struct v4l2_buffer *buf)
 {
 	struct vb2_queue *vq;
-	int ret;
 
 	vq = v4l2_m2m_get_vq(m2m_ctx, buf->type);
-	ret = vb2_prepare_buf(vq, buf);
-	if (!ret)
-		v4l2_m2m_try_schedule(m2m_ctx);
-
-	return ret;
+	return vb2_prepare_buf(vq, buf);
 }
 EXPORT_SYMBOL_GPL(v4l2_m2m_prepare_buf);
From: Sakari Ailus <sakari.ailus@linux.intel.com>
The __v4l2_m2m_try_schedule function acquires and releases multiple spinlocks. Simplify unlocking the job lock by adding labels to unlock the lock and exit the function.
Signed-off-by: Sakari Ailus <sakari.ailus@linux.intel.com>
Signed-off-by: Ezequiel Garcia <ezequiel@collabora.com>
---
 drivers/media/v4l2-core/v4l2-mem2mem.c | 29 ++++++++++++--------------
 1 file changed, 13 insertions(+), 16 deletions(-)
diff --git a/drivers/media/v4l2-core/v4l2-mem2mem.c b/drivers/media/v4l2-core/v4l2-mem2mem.c
index efae845435c9..da82d151dd20 100644
--- a/drivers/media/v4l2-core/v4l2-mem2mem.c
+++ b/drivers/media/v4l2-core/v4l2-mem2mem.c
@@ -279,51 +279,48 @@ static void __v4l2_m2m_try_queue(struct v4l2_m2m_dev *m2m_dev,
 	/* If the context is aborted then don't schedule it */
 	if (m2m_ctx->job_flags & TRANS_ABORT) {
-		spin_unlock_irqrestore(&m2m_dev->job_spinlock, flags_job);
 		dprintk("Aborted context\n");
-		return;
+		goto job_unlock;
 	}
 
 	if (m2m_ctx->job_flags & TRANS_QUEUED) {
-		spin_unlock_irqrestore(&m2m_dev->job_spinlock, flags_job);
 		dprintk("On job queue already\n");
-		return;
+		goto job_unlock;
 	}
 
 	spin_lock_irqsave(&m2m_ctx->out_q_ctx.rdy_spinlock, flags_out);
 	if (list_empty(&m2m_ctx->out_q_ctx.rdy_queue)
 	    && !m2m_ctx->out_q_ctx.buffered) {
-		spin_unlock_irqrestore(&m2m_ctx->out_q_ctx.rdy_spinlock,
-				       flags_out);
-		spin_unlock_irqrestore(&m2m_dev->job_spinlock, flags_job);
 		dprintk("No input buffers available\n");
-		return;
+		goto out_unlock;
 	}
 	spin_lock_irqsave(&m2m_ctx->cap_q_ctx.rdy_spinlock, flags_cap);
 	if (list_empty(&m2m_ctx->cap_q_ctx.rdy_queue)
 	    && !m2m_ctx->cap_q_ctx.buffered) {
-		spin_unlock_irqrestore(&m2m_ctx->cap_q_ctx.rdy_spinlock,
-				       flags_cap);
-		spin_unlock_irqrestore(&m2m_ctx->out_q_ctx.rdy_spinlock,
-				       flags_out);
-		spin_unlock_irqrestore(&m2m_dev->job_spinlock, flags_job);
 		dprintk("No output buffers available\n");
-		return;
+		goto cap_unlock;
 	}
 	spin_unlock_irqrestore(&m2m_ctx->cap_q_ctx.rdy_spinlock, flags_cap);
 	spin_unlock_irqrestore(&m2m_ctx->out_q_ctx.rdy_spinlock, flags_out);
 
 	if (m2m_dev->m2m_ops->job_ready
 		&& (!m2m_dev->m2m_ops->job_ready(m2m_ctx->priv))) {
-		spin_unlock_irqrestore(&m2m_dev->job_spinlock, flags_job);
 		dprintk("Driver not ready\n");
-		return;
+		goto job_unlock;
 	}
 
 	list_add_tail(&m2m_ctx->queue, &m2m_dev->job_queue);
 	m2m_ctx->job_flags |= TRANS_QUEUED;
 
 	spin_unlock_irqrestore(&m2m_dev->job_spinlock, flags_job);
+	return;
+
+cap_unlock:
+	spin_unlock_irqrestore(&m2m_ctx->cap_q_ctx.rdy_spinlock, flags_cap);
+out_unlock:
+	spin_unlock_irqrestore(&m2m_ctx->out_q_ctx.rdy_spinlock, flags_out);
+job_unlock:
+	spin_unlock_irqrestore(&m2m_dev->job_spinlock, flags_job);
 }
 
 /**
v4l2_m2m_job_finish() is typically called in interrupt context.
Some implementations of .device_run might sleep, so it's desirable to avoid calling it directly from v4l2_m2m_job_finish(), thus keeping .device_run out of interrupt context.
Implement a deferred context that calls v4l2_m2m_try_run, and gets scheduled by v4l2_m2m_job_finish().
Signed-off-by: Ezequiel Garcia <ezequiel@collabora.com>
---
 drivers/media/v4l2-core/v4l2-mem2mem.c | 36 +++++++++++++++++++++++---
 1 file changed, 33 insertions(+), 3 deletions(-)
diff --git a/drivers/media/v4l2-core/v4l2-mem2mem.c b/drivers/media/v4l2-core/v4l2-mem2mem.c
index da82d151dd20..0bf4deefa899 100644
--- a/drivers/media/v4l2-core/v4l2-mem2mem.c
+++ b/drivers/media/v4l2-core/v4l2-mem2mem.c
@@ -69,6 +69,7 @@ static const char * const m2m_entity_name[] = {
  * @curr_ctx:		currently running instance
  * @job_queue:		instances queued to run
  * @job_spinlock:	protects job_queue
+ * @job_work:		worker to run queued jobs.
  * @m2m_ops:		driver callbacks
  */
 struct v4l2_m2m_dev {
@@ -85,6 +86,7 @@ struct v4l2_m2m_dev {
 	struct list_head	job_queue;
 	spinlock_t		job_spinlock;
+	struct work_struct	job_work;
 
 	const struct v4l2_m2m_ops *m2m_ops;
 };
@@ -224,10 +226,11 @@ EXPORT_SYMBOL(v4l2_m2m_get_curr_priv);
 /**
  * v4l2_m2m_try_run() - select next job to perform and run it if possible
  * @m2m_dev: per-device context
+ * @try_lock: indicates if the queue lock should be taken
  *
  * Get next transaction (if present) from the waiting jobs list and run it.
  */
-static void v4l2_m2m_try_run(struct v4l2_m2m_dev *m2m_dev)
+static void v4l2_m2m_try_run(struct v4l2_m2m_dev *m2m_dev, bool try_lock)
 {
 	unsigned long flags;
 
@@ -250,7 +253,20 @@ static void v4l2_m2m_try_run(struct v4l2_m2m_dev *m2m_dev)
 	spin_unlock_irqrestore(&m2m_dev->job_spinlock, flags);
 
 	dprintk("Running job on m2m_ctx: %p\n", m2m_dev->curr_ctx);
+
+	/*
+	 * A m2m context lock is taken only after a m2m context
+	 * is picked from the queue and marked as running.
+	 * The lock is only needed if v4l2_m2m_try_run is called
+	 * from the async worker.
+	 */
+	if (try_lock && m2m_dev->curr_ctx->q_lock)
+		mutex_lock(m2m_dev->curr_ctx->q_lock);
+
 	m2m_dev->m2m_ops->device_run(m2m_dev->curr_ctx->priv);
+
+	if (try_lock && m2m_dev->curr_ctx->q_lock)
+		mutex_unlock(m2m_dev->curr_ctx->q_lock);
 }
 
 /*
@@ -340,10 +356,22 @@ void v4l2_m2m_try_schedule(struct v4l2_m2m_ctx *m2m_ctx)
 	struct v4l2_m2m_dev *m2m_dev = m2m_ctx->m2m_dev;
 
 	__v4l2_m2m_try_queue(m2m_dev, m2m_ctx);
-	v4l2_m2m_try_run(m2m_dev);
+	v4l2_m2m_try_run(m2m_dev, false);
 }
 EXPORT_SYMBOL_GPL(v4l2_m2m_try_schedule);
 
+/**
+ * v4l2_m2m_device_run_work() - run pending jobs for the context
+ * @work: Work structure used for scheduling the execution of this function.
+ */
+static void v4l2_m2m_device_run_work(struct work_struct *work)
+{
+	struct v4l2_m2m_dev *m2m_dev =
+			container_of(work, struct v4l2_m2m_dev, job_work);
+
+	v4l2_m2m_try_run(m2m_dev, true);
+}
+
 /**
  * v4l2_m2m_cancel_job() - cancel pending jobs for the context
  * @m2m_ctx: m2m context with jobs to be canceled
@@ -403,7 +431,8 @@ void v4l2_m2m_job_finish(struct v4l2_m2m_dev *m2m_dev,
 	/* This instance might have more buffers ready, but since we do not
 	 * allow more than one job on the job_queue per instance, each has
 	 * to be scheduled separately after the previous one finishes. */
-	v4l2_m2m_try_schedule(m2m_ctx);
+	__v4l2_m2m_try_queue(m2m_dev, m2m_ctx);
+	schedule_work(&m2m_dev->job_work);
 }
 EXPORT_SYMBOL(v4l2_m2m_job_finish);
 
@@ -837,6 +866,7 @@ struct v4l2_m2m_dev *v4l2_m2m_init(const struct v4l2_m2m_ops *m2m_ops)
 	m2m_dev->m2m_ops = m2m_ops;
 	INIT_LIST_HEAD(&m2m_dev->job_queue);
 	spin_lock_init(&m2m_dev->job_spinlock);
+	INIT_WORK(&m2m_dev->job_work, v4l2_m2m_device_run_work);
 
 	return m2m_dev;
 }
On 08/01/2018 11:50 PM, Ezequiel Garcia wrote:
v4l2_m2m_job_finish() is typically called in interrupt context.
Some implementations of .device_run might sleep, so it's desirable to avoid calling it directly from v4l2_m2m_job_finish(), thus keeping .device_run out of interrupt context.
Implement a deferred context that calls v4l2_m2m_try_run, and gets scheduled by v4l2_m2m_job_finish().
Signed-off-by: Ezequiel Garcia ezequiel@collabora.com
drivers/media/v4l2-core/v4l2-mem2mem.c | 36 +++++++++++++++++++++++--- 1 file changed, 33 insertions(+), 3 deletions(-)
diff --git a/drivers/media/v4l2-core/v4l2-mem2mem.c b/drivers/media/v4l2-core/v4l2-mem2mem.c index da82d151dd20..0bf4deefa899 100644 --- a/drivers/media/v4l2-core/v4l2-mem2mem.c +++ b/drivers/media/v4l2-core/v4l2-mem2mem.c @@ -69,6 +69,7 @@ static const char * const m2m_entity_name[] = {
- @curr_ctx: currently running instance
- @job_queue: instances queued to run
- @job_spinlock: protects job_queue
*/
- @job_work: worker to run queued jobs.
- @m2m_ops: driver callbacks
struct v4l2_m2m_dev { @@ -85,6 +86,7 @@ struct v4l2_m2m_dev { struct list_head job_queue; spinlock_t job_spinlock;
- struct work_struct job_work;
const struct v4l2_m2m_ops *m2m_ops; }; @@ -224,10 +226,11 @@ EXPORT_SYMBOL(v4l2_m2m_get_curr_priv); /**
- v4l2_m2m_try_run() - select next job to perform and run it if possible
- @m2m_dev: per-device context
- @try_lock: indicates if the queue lock should be taken
I don't like this bool. See more below.
- Get next transaction (if present) from the waiting jobs list and run it.
*/ -static void v4l2_m2m_try_run(struct v4l2_m2m_dev *m2m_dev) +static void v4l2_m2m_try_run(struct v4l2_m2m_dev *m2m_dev, bool try_lock) { unsigned long flags; @@ -250,7 +253,20 @@ static void v4l2_m2m_try_run(struct v4l2_m2m_dev *m2m_dev) spin_unlock_irqrestore(&m2m_dev->job_spinlock, flags); dprintk("Running job on m2m_ctx: %p\n", m2m_dev->curr_ctx);
- /*
* A m2m context lock is taken only after a m2m context
* is picked from the queue and marked as running.
* The lock is only needed if v4l2_m2m_try_run is called
* from the async worker.
*/
- if (try_lock && m2m_dev->curr_ctx->q_lock)
mutex_lock(m2m_dev->curr_ctx->q_lock);
- m2m_dev->m2m_ops->device_run(m2m_dev->curr_ctx->priv);
- if (try_lock && m2m_dev->curr_ctx->q_lock)
mutex_unlock(m2m_dev->curr_ctx->q_lock);
} /* @@ -340,10 +356,22 @@ void v4l2_m2m_try_schedule(struct v4l2_m2m_ctx *m2m_ctx) struct v4l2_m2m_dev *m2m_dev = m2m_ctx->m2m_dev; __v4l2_m2m_try_queue(m2m_dev, m2m_ctx);
- v4l2_m2m_try_run(m2m_dev);
- v4l2_m2m_try_run(m2m_dev, false);
I would like to see a WARN_ON where you verify that q_lock is actually locked (and this needs to be documented as well).
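Something along these lines, I suppose (untested sketch; it assumes drivers that do set q_lock always call v4l2_m2m_try_schedule() with it held):

void v4l2_m2m_try_schedule(struct v4l2_m2m_ctx *m2m_ctx)
{
	struct v4l2_m2m_dev *m2m_dev = m2m_ctx->m2m_dev;

	/* Document and enforce: callers must hold the queue lock, if any */
	WARN_ON(m2m_ctx->q_lock && !mutex_is_locked(m2m_ctx->q_lock));

	__v4l2_m2m_try_queue(m2m_dev, m2m_ctx);
	v4l2_m2m_try_run(m2m_dev, false);
}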
} EXPORT_SYMBOL_GPL(v4l2_m2m_try_schedule); +/**
- v4l2_m2m_device_run_work() - run pending jobs for the context
- @work: Work structure used for scheduling the execution of this function.
- */
+static void v4l2_m2m_device_run_work(struct work_struct *work) +{
- struct v4l2_m2m_dev *m2m_dev =
container_of(work, struct v4l2_m2m_dev, job_work);
- v4l2_m2m_try_run(m2m_dev, true);
Just lock q_lock here around the try_run call. That's consistent with how try_schedule works. No need to add an extra argument to try_run.
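One way to read that suggestion, as an untested sketch (it has to peek at the job queue to find a lock before a context is actually picked, which is exactly the difficulty pointed out in the reply below):

static void v4l2_m2m_device_run_work(struct work_struct *work)
{
	struct v4l2_m2m_dev *m2m_dev =
		container_of(work, struct v4l2_m2m_dev, job_work);
	struct v4l2_m2m_ctx *m2m_ctx;
	unsigned long flags;

	/* Peek at the next queued context; racy if the queue changes
	 * before v4l2_m2m_try_run() picks a context, illustration only. */
	spin_lock_irqsave(&m2m_dev->job_spinlock, flags);
	m2m_ctx = list_first_entry_or_null(&m2m_dev->job_queue,
					   struct v4l2_m2m_ctx, queue);
	spin_unlock_irqrestore(&m2m_dev->job_spinlock, flags);
	if (!m2m_ctx)
		return;

	if (m2m_ctx->q_lock)
		mutex_lock(m2m_ctx->q_lock);
	v4l2_m2m_try_run(m2m_dev);	/* unchanged, no extra argument */
	if (m2m_ctx->q_lock)
		mutex_unlock(m2m_ctx->q_lock);
}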
+}
/**
- v4l2_m2m_cancel_job() - cancel pending jobs for the context
- @m2m_ctx: m2m context with jobs to be canceled
@@ -403,7 +431,8 @@ void v4l2_m2m_job_finish(struct v4l2_m2m_dev *m2m_dev, /* This instance might have more buffers ready, but since we do not * allow more than one job on the job_queue per instance, each has * to be scheduled separately after the previous one finishes. */
- v4l2_m2m_try_schedule(m2m_ctx);
- __v4l2_m2m_try_queue(m2m_dev, m2m_ctx);
- schedule_work(&m2m_dev->job_work);
You might want to add a comment here explaining why the work item is needed.
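For instance something like this (the wording is only a suggestion):

	__v4l2_m2m_try_queue(m2m_dev, m2m_ctx);
	/*
	 * v4l2_m2m_job_finish() may be called in interrupt context, while
	 * .device_run implementations may sleep (e.g. for request setup),
	 * so running the next job is deferred to the job_work worker.
	 */
	schedule_work(&m2m_dev->job_work);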
} EXPORT_SYMBOL(v4l2_m2m_job_finish); @@ -837,6 +866,7 @@ struct v4l2_m2m_dev *v4l2_m2m_init(const struct v4l2_m2m_ops *m2m_ops) m2m_dev->m2m_ops = m2m_ops; INIT_LIST_HEAD(&m2m_dev->job_queue); spin_lock_init(&m2m_dev->job_spinlock);
- INIT_WORK(&m2m_dev->job_work, v4l2_m2m_device_run_work);
return m2m_dev; }
Regarding q_lock: I would like to see that made compulsory. So v4l2_mem2mem should check that both queue lock pointers point to the same mutex and return an error if that's not the case (I believe all m2m drivers use the same mutex already).
Also make sure that there are no drivers that set q_lock explicitly (mtk-vcodec does).
That way q_lock can be safely used here.
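An untested sketch of such a check, hypothetically placed in v4l2_m2m_ctx_init() after the driver's queue_init callback has run (variable names assume the existing locals in that function):

	/* Require both vb2 queues to share one lock, so the m2m core can
	 * take it on the driver's behalf. Sketch only; reuses the existing
	 * error path of the function. */
	if (WARN_ON(!out_q_ctx->q.lock || out_q_ctx->q.lock != cap_q_ctx->q.lock)) {
		ret = -EINVAL;
		goto err;
	}
	m2m_ctx->q_lock = out_q_ctx->q.lock;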
This will also allow us to simplify v4l2_ioctl_get_lock() in v4l2-ioctl.c: v4l2_ioctl_m2m_queue_is_output() can be dropped since the lock for capture and output is now the same.
Regards,
Hans
On Thu, 2018-08-02 at 10:02 +0200, Hans Verkuil wrote:
On 08/01/2018 11:50 PM, Ezequiel Garcia wrote:
v4l2_m2m_job_finish() is typically called in interrupt context.
Some implementations of .device_run might sleep, so it's desirable to avoid calling it directly from v4l2_m2m_job_finish(), thus keeping .device_run out of interrupt context.
Implement a deferred context that calls v4l2_m2m_try_run, and gets scheduled by v4l2_m2m_job_finish().
Signed-off-by: Ezequiel Garcia ezequiel@collabora.com
drivers/media/v4l2-core/v4l2-mem2mem.c | 36 +++++++++++++++++++++++--- 1 file changed, 33 insertions(+), 3 deletions(-)
diff --git a/drivers/media/v4l2-core/v4l2-mem2mem.c b/drivers/media/v4l2-core/v4l2-mem2mem.c index da82d151dd20..0bf4deefa899 100644 --- a/drivers/media/v4l2-core/v4l2-mem2mem.c +++ b/drivers/media/v4l2-core/v4l2-mem2mem.c @@ -69,6 +69,7 @@ static const char * const m2m_entity_name[] = {
- @curr_ctx: currently running instance
- @job_queue: instances queued to run
- @job_spinlock: protects job_queue
*/
- @job_work: worker to run queued jobs.
- @m2m_ops: driver callbacks
struct v4l2_m2m_dev { @@ -85,6 +86,7 @@ struct v4l2_m2m_dev { struct list_head job_queue; spinlock_t job_spinlock;
- struct work_struct job_work;
const struct v4l2_m2m_ops *m2m_ops; }; @@ -224,10 +226,11 @@ EXPORT_SYMBOL(v4l2_m2m_get_curr_priv); /**
- v4l2_m2m_try_run() - select next job to perform and run it if possible
- @m2m_dev: per-device context
- @try_lock: indicates if the queue lock should be taken
I don't like this bool. See more below.
Me neither. In fact, I've spent a lot of time trying to avoid it! However...
- Get next transaction (if present) from the waiting jobs list and run it.
*/ -static void v4l2_m2m_try_run(struct v4l2_m2m_dev *m2m_dev) +static void v4l2_m2m_try_run(struct v4l2_m2m_dev *m2m_dev, bool try_lock) { unsigned long flags; @@ -250,7 +253,20 @@ static void v4l2_m2m_try_run(struct v4l2_m2m_dev *m2m_dev) spin_unlock_irqrestore(&m2m_dev->job_spinlock, flags); dprintk("Running job on m2m_ctx: %p\n", m2m_dev->curr_ctx);
- /*
* A m2m context lock is taken only after a m2m context
* is picked from the queue and marked as running.
* The lock is only needed if v4l2_m2m_try_run is called
* from the async worker.
*/
- if (try_lock && m2m_dev->curr_ctx->q_lock)
mutex_lock(m2m_dev->curr_ctx->q_lock);
Note that the mutex can only be taken after a context has been chosen and curr_ctx has been assigned.
m2m_dev->m2m_ops->device_run(m2m_dev->curr_ctx->priv);
- if (try_lock && m2m_dev->curr_ctx->q_lock)
mutex_unlock(m2m_dev->curr_ctx->q_lock);
} /* @@ -340,10 +356,22 @@ void v4l2_m2m_try_schedule(struct v4l2_m2m_ctx *m2m_ctx) struct v4l2_m2m_dev *m2m_dev = m2m_ctx->m2m_dev; __v4l2_m2m_try_queue(m2m_dev, m2m_ctx);
- v4l2_m2m_try_run(m2m_dev);
- v4l2_m2m_try_run(m2m_dev, false);
I would like to see a WARN_ON where you verify that q_lock is actually locked (and this needs to be documented as well).
OK.
} EXPORT_SYMBOL_GPL(v4l2_m2m_try_schedule); +/**
- v4l2_m2m_device_run_work() - run pending jobs for the context
- @work: Work structure used for scheduling the execution of this function.
- */
+static void v4l2_m2m_device_run_work(struct work_struct *work) +{
- struct v4l2_m2m_dev *m2m_dev =
container_of(work, struct v4l2_m2m_dev, job_work);
- v4l2_m2m_try_run(m2m_dev, true);
Just lock q_lock here around the try_run call. That's consistent with how try_schedule works. No need to add an extra argument to try_run.
As I mentioned above, we might not have any lock to take at this point.
+}
/**
- v4l2_m2m_cancel_job() - cancel pending jobs for the context
- @m2m_ctx: m2m context with jobs to be canceled
@@ -403,7 +431,8 @@ void v4l2_m2m_job_finish(struct v4l2_m2m_dev *m2m_dev, /* This instance might have more buffers ready, but since we do not * allow more than one job on the job_queue per instance, each has * to be scheduled separately after the previous one finishes. */
- v4l2_m2m_try_schedule(m2m_ctx);
- __v4l2_m2m_try_queue(m2m_dev, m2m_ctx);
- schedule_work(&m2m_dev->job_work);
You might want to add a comment here explaining why you need 'work' here.
OK.
} EXPORT_SYMBOL(v4l2_m2m_job_finish); @@ -837,6 +866,7 @@ struct v4l2_m2m_dev *v4l2_m2m_init(const struct v4l2_m2m_ops *m2m_ops) m2m_dev->m2m_ops = m2m_ops; INIT_LIST_HEAD(&m2m_dev->job_queue); spin_lock_init(&m2m_dev->job_spinlock);
- INIT_WORK(&m2m_dev->job_work, v4l2_m2m_device_run_work);
return m2m_dev; }
Regarding q_lock: I would like to see that made compulsory. So v4l2_mem2mem should check that both queue lock pointers point to the same mutex and return an error if that's not the case (I believe all m2m drivers use the same mutex already).
Also make sure that there are no drivers that set q_lock explicitly (mtk-vcodec does).
That way q_lock can be safely used here.
Yes, I have that patch ready since a few days.
This will also allow us to simplify v4l2_ioctl_get_lock() in v4l2-ioctl.c: v4l2_ioctl_m2m_queue_is_output() can be dropped since the lock for capture and output is now the same.
Right.
Regards,
Eze
Add a test for the memory-to-memory framework, to exercise the scheduling of concurrent jobs, using multiple contexts.
This test needs to be run using the vim2m virtual driver, and so needs no hardware.
While here, rework the media_tests suite in order to make it useful for automatic tools. Those tests that need human intervention are now separated from those that can run automatically, needing only virtual drivers to work.
Signed-off-by: Ezequiel Garcia <ezequiel@collabora.com>
---
 .../testing/selftests/media_tests/.gitignore         |   1 +
 tools/testing/selftests/media_tests/Makefile         |   5 +-
 .../selftests/media_tests/m2m_job_test.c             | 287 ++++++++++++++++++
 .../selftests/media_tests/m2m_job_test.sh            |  32 ++
 4 files changed, 324 insertions(+), 1 deletion(-)
 create mode 100644 tools/testing/selftests/media_tests/m2m_job_test.c
 create mode 100755 tools/testing/selftests/media_tests/m2m_job_test.sh
diff --git a/tools/testing/selftests/media_tests/.gitignore b/tools/testing/selftests/media_tests/.gitignore
index 8745eba39012..71c6508348ce 100644
--- a/tools/testing/selftests/media_tests/.gitignore
+++ b/tools/testing/selftests/media_tests/.gitignore
@@ -1,3 +1,4 @@
 media_device_test
 media_device_open
 video_device_test
+m2m_job_test
diff --git a/tools/testing/selftests/media_tests/Makefile b/tools/testing/selftests/media_tests/Makefile
index 60826d7d37d4..d25d4c3eb7d2 100644
--- a/tools/testing/selftests/media_tests/Makefile
+++ b/tools/testing/selftests/media_tests/Makefile
@@ -1,6 +1,9 @@
 # SPDX-License-Identifier: GPL-2.0
 #
 CFLAGS += -I../ -I../../../../usr/include/
-TEST_GEN_PROGS := media_device_test media_device_open video_device_test
+TEST_GEN_PROGS_EXTENDED := media_device_test media_device_open video_device_test m2m_job_test
+TEST_PROGS := m2m_job_test.sh
 
 include ../lib.mk
+
+LDLIBS += -lpthread
diff --git a/tools/testing/selftests/media_tests/m2m_job_test.c b/tools/testing/selftests/media_tests/m2m_job_test.c
new file mode 100644
index 000000000000..5800269567e6
--- /dev/null
+++ b/tools/testing/selftests/media_tests/m2m_job_test.c
@@ -0,0 +1,287 @@
+// SPDX-License-Identifier: GPL-2.0
+//
+// Copyright (c) Collabora, Ltd.
+
+/*
+ * This file adds a test for the memory-to-memory
+ * framework.
+ *
+ * This test opens a user specified video device and then
+ * queues concurrent jobs. The jobs are totally dummy,
+ * its purpose is only to verify that each of the queued
+ * jobs is run, and is run only once.
+ *
+ * The vim2m driver is needed in order to run the test.
+ *
+ * Usage:
+ *   ./m2m-job-test -d /dev/videoX
+ */
+
+#include <assert.h>
+#include <errno.h>
+#include <fcntl.h>
+#include <unistd.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/ioctl.h>
+#include <sys/stat.h>
+#include <time.h>
+#include <pthread.h>
+#include <poll.h>
+
+#include <linux/videodev2.h>
+
+#include "../kselftest.h"
+
+#define V4L2_CID_TRANS_TIME_MSEC (V4L2_CID_USER_BASE + 0x1000)
+#define V4L2_CID_TRANS_NUM_BUFS (V4L2_CID_USER_BASE + 0x1001)
+
+#define MAX_TRANS_TIME_MSEC 500
+#define MAX_COUNT 50
+#define MAX_BUFFERS 5
+#define W 10
+#define H 10
+
+#ifndef DEBUG
+#define dprintf(fmt, arg...) \
+	do { \
+	} while (0)
+#else
+#define dprintf(fmt, arg...) printf(fmt, ## arg)
+#endif
+
+static char video_device[256];
+static int thread_count;
+
+struct context {
+	int vfd;
+	unsigned int width;
+	unsigned int height;
+	int buffers;
+};
+
+static int req_src_buf(struct context *ctx, int buffers)
+{
+	struct v4l2_requestbuffers reqbuf;
+	struct v4l2_buffer buf;
+	int i, ret;
+
+	memset(&reqbuf, 0, sizeof(reqbuf));
+	memset(&buf, 0, sizeof(buf));
+
+	reqbuf.count = buffers;
+	reqbuf.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
+	reqbuf.memory = V4L2_MEMORY_MMAP;
+	ret = ioctl(ctx->vfd, VIDIOC_REQBUFS, &reqbuf);
+	if (ret)
+		return ret;
+
+	for (i = 0; i < buffers; i++) {
+		buf.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
+		buf.memory = V4L2_MEMORY_MMAP;
+		buf.index = i;
+		ret = ioctl(ctx->vfd, VIDIOC_QUERYBUF, &buf);
+		if (ret)
+			return ret;
+		buf.bytesused = W*H*2;
+		ret = ioctl(ctx->vfd, VIDIOC_QBUF, &buf);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
+static int req_dst_buf(struct context *ctx, int buffers)
+{
+	struct v4l2_requestbuffers reqbuf;
+	struct v4l2_buffer buf;
+	int i, ret;
+
+	memset(&reqbuf, 0, sizeof(reqbuf));
+	memset(&buf, 0, sizeof(buf));
+
+	reqbuf.count = buffers;
+	reqbuf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
+	reqbuf.memory = V4L2_MEMORY_MMAP;
+
+	ret = ioctl(ctx->vfd, VIDIOC_REQBUFS, &reqbuf);
+	if (ret)
+		return ret;
+
+	for (i = 0; i < buffers; i++) {
+		buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
+		buf.memory = V4L2_MEMORY_MMAP;
+		buf.index = i;
+		ret = ioctl(ctx->vfd, VIDIOC_QUERYBUF, &buf);
+		if (ret)
+			return ret;
+		ret = ioctl(ctx->vfd, VIDIOC_QBUF, &buf);
+		if (ret)
+			return ret;
+	}
+	return 0;
+}
+
+static int streamon(struct context *ctx)
+{
+	enum v4l2_buf_type type;
+	int ret;
+
+	type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
+	ret = ioctl(ctx->vfd, VIDIOC_STREAMON, &type);
+	if (ret)
+		return ret;
+
+	type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
+	ret = ioctl(ctx->vfd, VIDIOC_STREAMON, &type);
+	if (ret)
+		return ret;
+
+	return ret;
+}
+
+static int dqbuf(struct context *ctx)
+{
+	struct v4l2_buffer buf;
+	int i, ret, timeout;
+
+	struct pollfd fds[] = {
+		{ .fd = ctx->vfd, .events = POLLIN },
+	};
+
+	for (i = 0; i < ctx->buffers; i++) {
+		timeout = (MAX_TRANS_TIME_MSEC + 10) * thread_count * 2;
+		ret = poll(fds, 1, timeout);
+		if (-1 == ret) {
+			if (errno == EINTR)
+				continue;
+			return -1;
+		}
+
+		if (ret == 0) {
+			dprintf("%s: timeout on %p\n", __func__, ctx);
+			return -1;
+		}
+
+		dprintf("%s: event on %p\n", __func__, ctx);
+
+		memset(&buf, 0, sizeof(buf));
+		buf.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
+		buf.memory = V4L2_MEMORY_MMAP;
+		buf.index = i;
+		ret = ioctl(ctx->vfd, VIDIOC_DQBUF, &buf);
+		if (ret) {
+			dprintf("%s: VIDIOC_DQBUF failed %p\n", __func__, ctx);
+			return ret;
+		}
+
+		memset(&buf, 0, sizeof(buf));
+		buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
+		buf.memory = V4L2_MEMORY_MMAP;
+		buf.index = i;
+		ret = ioctl(ctx->vfd, VIDIOC_DQBUF, &buf);
+		if (ret) {
+			dprintf("%s: VIDIOC_DQBUF failed %p\n", __func__, ctx);
+			return ret;
+		}
+	}
+
+	return 0;
+}
+
+static void *job(void *arg)
+{
+	struct context *ctx = (struct context *)arg;
+
+	dprintf("%s: %p running\n", __func__, ctx);
+
+	assert(streamon(ctx) == 0);
+	assert(dqbuf(ctx) == 0);
+	assert(dqbuf(ctx) != 0);
+	close(ctx->vfd);
+
+	dprintf("%s: %p done\n", __func__, ctx);
+	return NULL;
+}
+
+static void init(struct context *ctx)
+{
+	struct v4l2_ext_controls ext_ctrls;
+	struct v4l2_ext_control ctrls[2];
+	struct v4l2_capability cap;
+	int ret, buffers;
+
+	memset(ctx, 0, sizeof(*ctx));
+
+	ctx->vfd = open(video_device, O_RDWR | O_NONBLOCK, 0);
+	ctx->width = W;
+	ctx->height = H;
+	assert(ctx->vfd >= 0);
+
+	ret = ioctl(ctx->vfd, VIDIOC_QUERYCAP, &cap);
+	assert(ret == 0);
+	assert(cap.device_caps & V4L2_CAP_VIDEO_M2M);
+	if (strcmp((const char *)cap.driver, "vim2m") != 0)
+		ksft_exit_skip("Please run the test as root - Exiting.\n");
+
+	ctrls[0].id = V4L2_CID_TRANS_TIME_MSEC;
+	ctrls[0].value = rand() % MAX_TRANS_TIME_MSEC + 10;
+	ctrls[1].id = V4L2_CID_TRANS_NUM_BUFS;
+	ctrls[1].value = 1;
+
+	memset(&ext_ctrls, 0, sizeof(ext_ctrls));
+	ext_ctrls.count = 2;
+	ext_ctrls.controls = ctrls;
+	ret = ioctl(ctx->vfd, VIDIOC_S_EXT_CTRLS, &ext_ctrls);
+	assert(ret == 0);
+
+	buffers = rand() % MAX_BUFFERS + 1;
+	assert(req_src_buf(ctx, buffers) == 0);
+	assert(req_dst_buf(ctx, buffers) == 0);
+	ctx->buffers = buffers;
+}
+
+int main(int argc, char * const argv[])
+{
+	int i, opt;
+
+	if (argc < 2) {
+		printf("Usage: %s [-d </dev/videoX>]\n", argv[0]);
+		exit(-1);
+	}
+
+	/* Process arguments */
+	while ((opt = getopt(argc, argv, "d:")) != -1) {
+		switch (opt) {
+		case 'd':
+			strncpy(video_device, optarg, sizeof(video_device) - 1);
+			video_device[sizeof(video_device)-1] = '\0';
+			break;
+		default:
+			printf("Usage: %s [-d </dev/videoX>]\n", argv[0]);
+			exit(-1);
+		}
+	}
+
+	/* Generate random number of interations */
+	srand((unsigned int) time(NULL));
+	thread_count = rand() % MAX_COUNT + 1;
+
+	pthread_t in_thread[thread_count];
+	struct context ctx[thread_count];
+
+	printf("Running %d threads\n", thread_count);
+
+	for (i = 0; i < thread_count; i++)
+		init(&ctx[i]);
+
+	for (i = 0; i < thread_count; i++)
+		pthread_create(&in_thread[i], NULL, job, &ctx[i]);
+
+	for (i = 0; i < thread_count; i++)
+		pthread_join(in_thread[i], NULL);
+
+	return 0;
+}
diff --git a/tools/testing/selftests/media_tests/m2m_job_test.sh b/tools/testing/selftests/media_tests/m2m_job_test.sh
new file mode 100755
index 000000000000..59777a7ac7d8
--- /dev/null
+++ b/tools/testing/selftests/media_tests/m2m_job_test.sh
@@ -0,0 +1,32 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+TCID="media_device"
+
+# Kselftest framework requirement - SKIP code is 4.
+ksft_skip=4
+
+echo "-------------------"
+echo "running media tests"
+echo "-------------------"
+
+# Not needed, but does not hurt to have it
+shopt -s nullglob
+
+v4l=/sys/class/video4linux
+
+if [ ! -d $v4l ]; then
+	echo "$TCID : video4linux support not present"
+	exit $ksft_skip
+fi
+
+if [ -z `ls $v4l` ]; then
+	echo "$TCID : no video4linux drivers loaded, vim2m is needed"
+	exit $ksft_skip
+fi
+
+for f in $v4l/*; do
+	dev_node=/dev/`basename $f`
+	if [ -c $dev_node ]; then
+		./m2m_job_test -d $dev_node
+	fi
+done
Hi Ezequiel,
On 01/08/18 22:50, Ezequiel Garcia wrote:
Add a test for the memory-to-memory framework, to exercise the scheduling of concurrent jobs, using multiple contexts.
This test needs to be run using the vim2m virtual driver, and so needs no hardware.
While here, rework the media_tests suite in order to make it useful for automatic tools. Those tests that need human intervention are now separated from those that can run automatically, needing only virtual drivers to work.
Signed-off-by: Ezequiel Garcia ezequiel@collabora.com
.../testing/selftests/media_tests/.gitignore | 1 + tools/testing/selftests/media_tests/Makefile | 5 +- .../selftests/media_tests/m2m_job_test.c | 287 ++++++++++++++++++ .../selftests/media_tests/m2m_job_test.sh | 32 ++ 4 files changed, 324 insertions(+), 1 deletion(-) create mode 100644 tools/testing/selftests/media_tests/m2m_job_test.c create mode 100755 tools/testing/selftests/media_tests/m2m_job_test.sh
diff --git a/tools/testing/selftests/media_tests/.gitignore b/tools/testing/selftests/media_tests/.gitignore index 8745eba39012..71c6508348ce 100644 --- a/tools/testing/selftests/media_tests/.gitignore +++ b/tools/testing/selftests/media_tests/.gitignore @@ -1,3 +1,4 @@ media_device_test media_device_open video_device_test +m2m_job_test diff --git a/tools/testing/selftests/media_tests/Makefile b/tools/testing/selftests/media_tests/Makefile index 60826d7d37d4..d25d4c3eb7d2 100644 --- a/tools/testing/selftests/media_tests/Makefile +++ b/tools/testing/selftests/media_tests/Makefile @@ -1,6 +1,9 @@ # SPDX-License-Identifier: GPL-2.0 # CFLAGS += -I../ -I../../../../usr/include/ -TEST_GEN_PROGS := media_device_test media_device_open video_device_test +TEST_GEN_PROGS_EXTENDED := media_device_test media_device_open video_device_test m2m_job_test +TEST_PROGS := m2m_job_test.sh include ../lib.mk
+LDLIBS += -lpthread diff --git a/tools/testing/selftests/media_tests/m2m_job_test.c b/tools/testing/selftests/media_tests/m2m_job_test.c new file mode 100644 index 000000000000..5800269567e6 --- /dev/null +++ b/tools/testing/selftests/media_tests/m2m_job_test.c @@ -0,0 +1,287 @@ +// SPDX-License-Identifier: GPL-2.0 +// +// Copyright (c) Collabora, Ltd.
Add the year (2018), and authors (you).
+/*
- This file adds a test for the memory-to-memory
- framework.
- This test opens a user specified video device and then
- queues concurrent jobs. The jobs are totally dummy,
- its purpose is only to verify that each of the queued
- jobs is run, and is run only once.
- The vim2m driver is needed in order to run the test.
- Usage:
./m2m-job-test -d /dev/videoX
- */
+#include <assert.h> +#include <errno.h> +#include <fcntl.h> +#include <unistd.h> +#include <stdio.h> +#include <stdlib.h> +#include <string.h> +#include <sys/ioctl.h> +#include <sys/stat.h> +#include <time.h> +#include <pthread.h> +#include <poll.h>
+#include <linux/videodev2.h>
+#include "../kselftest.h"
+#define V4L2_CID_TRANS_TIME_MSEC (V4L2_CID_USER_BASE + 0x1000) +#define V4L2_CID_TRANS_NUM_BUFS (V4L2_CID_USER_BASE + 0x1001)
+#define MAX_TRANS_TIME_MSEC 500 +#define MAX_COUNT 50 +#define MAX_BUFFERS 5 +#define W 10 +#define H 10
I like short names, but W and H might be a little bit too short esp for a macro.
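For example (the names are only a suggestion):

#define DEFAULT_WIDTH	10
#define DEFAULT_HEIGHT	10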
+#ifndef DEBUG +#define dprintf(fmt, arg...) \
- do { \
- } while (0)
+#else +#define dprintf(fmt, arg...) printf(fmt, ## arg) +#endif
+static char video_device[256]; +static int thread_count;
+struct context {
- int vfd;
- unsigned int width;
- unsigned int height;
- int buffers;
+};
+static int req_src_buf(struct context *ctx, int buffers) +{
- struct v4l2_requestbuffers reqbuf;
- struct v4l2_buffer buf;
- int i, ret;
- memset(&reqbuf, 0, sizeof(reqbuf));
- memset(&buf, 0, sizeof(buf));
- reqbuf.count = buffers;
- reqbuf.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
- reqbuf.memory = V4L2_MEMORY_MMAP;
- ret = ioctl(ctx->vfd, VIDIOC_REQBUFS, &reqbuf);
- if (ret)
return ret;
- for (i = 0; i < buffers; i++) {
buf.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
buf.memory = V4L2_MEMORY_MMAP;
buf.index = i;
ret = ioctl(ctx->vfd, VIDIOC_QUERYBUF, &buf);
if (ret)
return ret;
buf.bytesused = W*H*2;
Shouldn't you be using ->width and ->height here rather than W and H?
In fact, maybe these can actually be set as "static const unsigned int WIDTH = 10;" in the main function rather than global macros, since you're passing the context around at runtime.
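i.e. roughly (a sketch of both points; names are only suggestions):

	/* in req_src_buf(), derive the size from the context: */
	buf.bytesused = ctx->width * ctx->height * 2;

	/* and near the top of main()/init(), instead of the W/H macros: */
	static const unsigned int default_width = 10;
	static const unsigned int default_height = 10;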
ret = ioctl(ctx->vfd, VIDIOC_QBUF, &buf);
if (ret)
return ret;
- }
- return 0;
+}
+static int req_dst_buf(struct context *ctx, int buffers) +{
- struct v4l2_requestbuffers reqbuf;
- struct v4l2_buffer buf;
- int i, ret;
- memset(&reqbuf, 0, sizeof(reqbuf));
- memset(&buf, 0, sizeof(buf));
- reqbuf.count = buffers;
- reqbuf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
- reqbuf.memory = V4L2_MEMORY_MMAP;
- ret = ioctl(ctx->vfd, VIDIOC_REQBUFS, &reqbuf);
- if (ret)
return ret;
- for (i = 0; i < buffers; i++) {
buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
buf.memory = V4L2_MEMORY_MMAP;
buf.index = i;
ret = ioctl(ctx->vfd, VIDIOC_QUERYBUF, &buf);
if (ret)
return ret;
ret = ioctl(ctx->vfd, VIDIOC_QBUF, &buf);
if (ret)
return ret;
- }
- return 0;
+}
+static int streamon(struct context *ctx) +{
- enum v4l2_buf_type type;
- int ret;
- type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
- ret = ioctl(ctx->vfd, VIDIOC_STREAMON, &type);
- if (ret)
return ret;
- type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
- ret = ioctl(ctx->vfd, VIDIOC_STREAMON, &type);
- if (ret)
return ret;
- return ret;
+}
+static int dqbuf(struct context *ctx) +{
- struct v4l2_buffer buf;
- int i, ret, timeout;
- struct pollfd fds[] = {
{ .fd = ctx->vfd, .events = POLLIN },
- };
- for (i = 0; i < ctx->buffers; i++) {
timeout = (MAX_TRANS_TIME_MSEC + 10) * thread_count * 2;
ret = poll(fds, 1, timeout);
if (-1 == ret) {
if (errno == EINTR)
continue;
return -1;
}
if (ret == 0) {
dprintf("%s: timeout on %p\n", __func__, ctx);
return -1;
}
dprintf("%s: event on %p\n", __func__, ctx);
memset(&buf, 0, sizeof(buf));
buf.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
buf.memory = V4L2_MEMORY_MMAP;
buf.index = i;
ret = ioctl(ctx->vfd, VIDIOC_DQBUF, &buf);
if (ret) {
dprintf("%s: VIDIOC_DQBUF failed %p\n", __func__, ctx);
return ret;
}
memset(&buf, 0, sizeof(buf));
buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
buf.memory = V4L2_MEMORY_MMAP;
buf.index = i;
ret = ioctl(ctx->vfd, VIDIOC_DQBUF, &buf);
if (ret) {
dprintf("%s: VIDIOC_DQBUF failed %p\n", __func__, ctx);
return ret;
}
- }
- return 0;
+}
+static void *job(void *arg) +{
- struct context *ctx = (struct context *)arg;
The cast isn't actually required there, coming from a (void *). If it's done on purpose for readability reasons then fine.
- dprintf("%s: %p running\n", __func__, ctx);
- assert(streamon(ctx) == 0);
- assert(dqbuf(ctx) == 0);
- assert(dqbuf(ctx) != 0);
- close(ctx->vfd);
- dprintf("%s: %p done\n", __func__, ctx);
- return NULL;
+}
+static void init(struct context *ctx) +{
- struct v4l2_ext_controls ext_ctrls;
- struct v4l2_ext_control ctrls[2];
- struct v4l2_capability cap;
- int ret, buffers;
- memset(ctx, 0, sizeof(*ctx));
- ctx->vfd = open(video_device, O_RDWR | O_NONBLOCK, 0);
- ctx->width = W;
- ctx->height = H;
- assert(ctx->vfd >= 0);
- ret = ioctl(ctx->vfd, VIDIOC_QUERYCAP, &cap);
- assert(ret == 0);
- assert(cap.device_caps & V4L2_CAP_VIDEO_M2M);
- if (strcmp((const char *)cap.driver, "vim2m") != 0)
This cast isn't needed either, passing non-const to const.
ksft_exit_skip("Please run the test as root - Exiting.\n");
- ctrls[0].id = V4L2_CID_TRANS_TIME_MSEC;
- ctrls[0].value = rand() % MAX_TRANS_TIME_MSEC + 10;
See what I wrote about random factors in tests below, regarding the number of threads. If you need a random series of values here, the pseudo-random rand() should be fine, but it might be worth seeding it with srand() and a fixed value first, so it always produces the same series and the test always does the same thing.
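e.g. (seed value picked arbitrarily):

	/* Seed once, e.g. at the start of main(), with a fixed value so
	 * every run produces the same pseudo-random series: */
	srand(42);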
- ctrls[1].id = V4L2_CID_TRANS_NUM_BUFS;
- ctrls[1].value = 1;
- memset(&ext_ctrls, 0, sizeof(ext_ctrls));
- ext_ctrls.count = 2;
- ext_ctrls.controls = ctrls;
- ret = ioctl(ctx->vfd, VIDIOC_S_EXT_CTRLS, &ext_ctrls);
- assert(ret == 0);
- buffers = rand() % MAX_BUFFERS + 1;
- assert(req_src_buf(ctx, buffers) == 0);
- assert(req_dst_buf(ctx, buffers) == 0);
- ctx->buffers = buffers;
+}
+int main(int argc, char * const argv[]) +{
- int i, opt;
- if (argc < 2) {
printf("Usage: %s [-d </dev/videoX>]\n", argv[0]);
exit(-1);
- }
- /* Process arguments */
- while ((opt = getopt(argc, argv, "d:")) != -1) {
switch (opt) {
case 'd':
strncpy(video_device, optarg, sizeof(video_device) - 1);
video_device[sizeof(video_device)-1] = '\0';
break;
default:
printf("Usage: %s [-d </dev/videoX>]\n", argv[0]);
exit(-1);
}
- }
- /* Generate random number of interations */
Did you mean iterations?
- srand((unsigned int) time(NULL));
- thread_count = rand() % MAX_COUNT + 1;
Having random factors in tests can lead to hard to reproduce issues. I think it would be wiser to set a reasonable default value (maybe 50) and add a command line option to override it if necessary. For example, setting a very high value may push the system to hit an unexpected limit, or if running automatically on slow hardware a lower value may be used to reduce the time it takes to run this test case.
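Untested sketch of that suggestion, using a hypothetical -c option:

	/* Fixed default, overridable from the command line */
	thread_count = 50;

	while ((opt = getopt(argc, argv, "d:c:")) != -1) {
		switch (opt) {
		case 'd':
			strncpy(video_device, optarg, sizeof(video_device) - 1);
			video_device[sizeof(video_device) - 1] = '\0';
			break;
		case 'c':
			thread_count = atoi(optarg);
			break;
		default:
			printf("Usage: %s [-d </dev/videoX>] [-c <threads>]\n", argv[0]);
			exit(-1);
		}
	}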
- pthread_t in_thread[thread_count];
- struct context ctx[thread_count];
- printf("Running %d threads\n", thread_count);
- for (i = 0; i < thread_count; i++)
init(&ctx[i]);
- for (i = 0; i < thread_count; i++)
pthread_create(&in_thread[i], NULL, job, &ctx[i]);
- for (i = 0; i < thread_count; i++)
pthread_join(in_thread[i], NULL);
- return 0;
+} diff --git a/tools/testing/selftests/media_tests/m2m_job_test.sh b/tools/testing/selftests/media_tests/m2m_job_test.sh new file mode 100755 index 000000000000..59777a7ac7d8 --- /dev/null +++ b/tools/testing/selftests/media_tests/m2m_job_test.sh @@ -0,0 +1,32 @@ +#!/bin/bash +# SPDX-License-Identifier: GPL-2.0 +TCID="media_device"
+# Kselftest framework requirement - SKIP code is 4. +ksft_skip=4
+echo "-------------------" +echo "running media tests" +echo "-------------------"
+# Not needed, but does not hurt to have it +shopt -s nullglob
+v4l=/sys/class/video4linux
+if [ ! -d $v4l ]; then
echo "$TCID : video4linux support not present"
exit $ksft_skip
+fi
+if [ -z `ls $v4l` ]; then
- echo "$TCID : no video4linux drivers loaded, vim2m is needed"
- exit $ksft_skip
+fi
+for f in $v4l/*; do
- dev_node=/dev/`basename $f`
- if [ -c $dev_node ]; then
./m2m_job_test -d $dev_node
- fi
+done
Best wishes,
Guillaume