V2: Scotty: "What are you standing around for - do you not know a jailbreak when you see one?"

Making sure to run format-patch against the right tree this time.

- Updated meta-comments to highlight adding a generic async timeout - Johan
- Captured the result code of schedule_work/queue_work - Bryan
- Dropped the reference count on failure to submit schedule_work/queue_work - Bryan
- Added timeout suffix back to new gb_operation_request_send - Johan/Greg
- Updated Signed-off-by to my nexus-software.ie address - Bryan
- Iterated requisite Star Trek quote - Bryan/Johan
V4-private -> V1-public: McCOY: You've got him, Jim! You've got him where you want him.
This patchset adds async operations to greybus-core. Rather than have different drivers do variations on the same theme of launching and timing out asynchronous operations, it makes sense instead to provide this functionality via greybus-core.
This patchset makes it possible to launch asynchronous operations and have the completion callback for an operation indicate -ETIMEDOUT for timeouts, without drivers having to manage that locally. This set doesn't convert the existing synchronous operations away from wait_for_completion_interruptible_timeout(), since that would add an extra work-queue item to each synchronous call with no benefit for synchronous operations. Synchronous operations can already detect -ETIMEDOUT by blocking on the synchronous send; asynchronous operations, on the other hand, require the associated driver to implement a completion handler. The main improvement this patchset makes is to pass the -ETIMEDOUT completion status into that completion handler, moving the setup and timeout of operations out of driver-level code and into a core-level set of logic.
Loopback, as an example, can have a large amount of redundant code removed, and given we previously had some in-flight effort to add asynchronous operations to SDIO, it makes sense to move the generic asynchronous-path logic into greybus-core so that SDIO and other bundle drivers can reuse it instead of reimplementing it.
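To make the intended division of labour concrete, here is a minimal user-space sketch of the pattern the series enables: the core records -ETIMEDOUT and runs the driver's completion handler, and the driver only inspects the result code. The structures and helper names below are illustrative stand-ins for the real greybus types, not the kernel code itself.

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* Minimal stand-in for struct gb_operation; illustrative only. */
struct gb_operation {
	int errno_code;			/* -EINPROGRESS while in flight */
	void (*callback)(struct gb_operation *op);
	int timed_out_seen;		/* set by the driver callback below */
};

/* What gb_operation_result() does: report the recorded result code. */
static int gb_operation_result(struct gb_operation *op)
{
	return op->errno_code;
}

/*
 * Driver completion handler: all it has to do is inspect the result;
 * the core, not the driver, turned the expired timeout into -ETIMEDOUT.
 */
static void driver_completion_handler(struct gb_operation *op)
{
	if (gb_operation_result(op) == -ETIMEDOUT)
		op->timed_out_seen = 1;
}

/*
 * What the core's delayed worker effectively does on expiry: record
 * -ETIMEDOUT if nothing has completed the operation yet, then run the
 * completion callback.
 */
static void core_timeout_fires(struct gb_operation *op)
{
	if (op->errno_code == -EINPROGRESS)
		op->errno_code = -ETIMEDOUT;
	op->callback(op);
}
```

The point of the sketch is the direction of the call: the timeout logic lives entirely on the "core" side, and the driver sees a single authoritative result code.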
Note: we had approximately three internal versions of this on Project-Ara. Here's the log for completeness.
V3-private -> V4-private: Fix bug in loopback conversion - Mitch Tasman
V2-private -> V3-private: remotes/ara-main/sandbox/bodonoghue/ara-main-current+async_op3
Drop patch #6 converting sync operations to the new timeout structure. Johan had a concern about doing an in-depth analysis of that change and, considering our compressed timelines now, this analysis won't happen. OTOH the new async API is the right thing for loopback and for potential later greybus users/implementers to reuse.
Although it wasn't part of the motivation for this change, I observe slightly better performance than baseline in loopback tests with this change in place, which shouldn't be surprising since we have less aggregate context switching for operation timeouts.
V1-private -> V2-private: Rename async_private => private - Greg
gb_operation_request_send_async_timeout -> gb_operation_request_send_async() - Greg
Using existing gb_operation_completion_wq - Viresh
Split out gb_operation_cancel() - timeout need not wait synchronously on cancellation queue - Bryan
Move timeout to work-queue - Greg/Viresh
cancel timeout workqueue @ operation completion point - Viresh/Bryan
Bryan O'Donoghue (5):
  staging: greybus: operation: add asynchronous gb_operation_cancel
  staging: greybus: operation: add generic asynchronous timeout operation support
  staging: greybus: operation: add private data with get/set accessors
  staging: greybus: loopback: convert loopback to use generic async operations
  staging: greybus: loopback: convert to use msecs internally
 drivers/staging/greybus/loopback.c  | 181 ++++++++---------------------------
 drivers/staging/greybus/operation.c |  85 ++++++++++++++++-
 drivers/staging/greybus/operation.h |  19 ++++
 3 files changed, 140 insertions(+), 145 deletions(-)
Later patches don't want or need to serialize the cancellation of an operation. This patch adds gb_operation_cancel_async() as a simple subset of the existing gb_operation_cancel() sans the synchronous wait on the cancellation queue.
Signed-off-by: Bryan O'Donoghue <pure.logic@nexus-software.ie>
---
 drivers/staging/greybus/operation.c | 19 ++++++++++++++++---
 drivers/staging/greybus/operation.h |  1 +
 2 files changed, 17 insertions(+), 3 deletions(-)
diff --git a/drivers/staging/greybus/operation.c b/drivers/staging/greybus/operation.c
index 0123109..e69f329 100644
--- a/drivers/staging/greybus/operation.c
+++ b/drivers/staging/greybus/operation.c
@@ -1043,13 +1043,13 @@ void gb_connection_recv(struct gb_connection *connection,
 }
 
 /*
- * Cancel an outgoing operation synchronously, and record the given error to
+ * Cancel an outgoing operation asynchronously, and record the given error to
  * indicate why.
  */
-void gb_operation_cancel(struct gb_operation *operation, int errno)
+int gb_operation_cancel_async(struct gb_operation *operation, int errno)
 {
 	if (WARN_ON(gb_operation_is_incoming(operation)))
-		return;
+		return -EINVAL;
 
 	if (gb_operation_result_set(operation, errno)) {
 		gb_message_cancel(operation->request);
@@ -1057,6 +1057,19 @@ void gb_operation_cancel(struct gb_operation *operation, int errno)
 		trace_gb_message_cancel_outgoing(operation->request);
 	}
 
+	return 0;
+}
+EXPORT_SYMBOL_GPL(gb_operation_cancel_async);
+
+/*
+ * Cancel an outgoing operation synchronously, and record the given error to
+ * indicate why.
+ */
+void gb_operation_cancel(struct gb_operation *operation, int errno)
+{
+	if (gb_operation_cancel_async(operation, errno))
+		return;
+
 	atomic_inc(&operation->waiters);
 	wait_event(gb_operation_cancellation_queue,
 			!gb_operation_is_active(operation));
diff --git a/drivers/staging/greybus/operation.h b/drivers/staging/greybus/operation.h
index de09a2c..467a0dc 100644
--- a/drivers/staging/greybus/operation.h
+++ b/drivers/staging/greybus/operation.h
@@ -174,6 +174,7 @@ gb_operation_request_send_sync(struct gb_operation *operation)
 			GB_OPERATION_TIMEOUT_DEFAULT);
 }
 
+int gb_operation_cancel_async(struct gb_operation *operation, int errno);
 void gb_operation_cancel(struct gb_operation *operation, int errno);
 void gb_operation_cancel_incoming(struct gb_operation *operation, int errno);
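The property gb_operation_cancel_async() leans on is the first-wins behaviour of gb_operation_result_set(): only the first caller to set a result, while it is still -EINPROGRESS, succeeds, so cancelling an already-completed operation is a no-op. A rough user-space model of that rule follows; it is a sketch of the semantics only, and the real kernel function additionally does this under locking.

```c
#include <assert.h>
#include <errno.h>

/* Stand-in for struct gb_operation; only the result field matters here. */
struct gb_operation {
	int errno_code;		/* -EINPROGRESS while the operation is in flight */
};

/*
 * Model of gb_operation_result_set(): record the result only if no result
 * has been recorded yet, and report whether this caller "won". The real
 * implementation performs the check-and-set atomically under a lock.
 */
static int gb_operation_result_set(struct gb_operation *op, int result)
{
	if (op->errno_code != -EINPROGRESS)
		return 0;	/* somebody else completed the operation first */
	op->errno_code = result;
	return 1;
}
```

With this rule, a timeout firing after a normal completion simply loses the race and changes nothing, which is why no extra "pending" bookkeeping is needed.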
On Tue, Dec 27, 2016 at 01:01:35PM +0000, Bryan O'Donoghue wrote:
> Later patches don't want or need to serialize the cancellation of an
> operation. This patch adds gb_operation_cancel_async() as a simple subset
> of the existing gb_operation_cancel() sans the synchronous wait on the
> cancellation queue.
This one is not needed and complicates the interface for no good reason (the async suffix also falsely indicates that this function could be called from atomic context).
Just cancel synchronously in the delayed worker function you add later in the series.
Johan
On 02/01/17 16:16, Johan Hovold wrote:
> On Tue, Dec 27, 2016 at 01:01:35PM +0000, Bryan O'Donoghue wrote:
>> Later patches don't want or need to serialize the cancellation of an
>> operation. This patch adds gb_operation_cancel_async() as a simple
>> subset of the existing gb_operation_cancel() sans the synchronous wait
>> on the cancellation queue.
>
> This one is not needed and complicates the interface for no good reason
> (the async suffix also falsely indicates that this function could be
> called from atomic context).
>
> Just cancel synchronously in the delayed worker function you add later
> in the series.
>
> Johan

hmm ... sounds sensible in principle.

Let me test that out.

gott nytt år! (Happy New Year!)
On Tue, Jan 03, 2017 at 09:23:29AM +0000, Bryan O'Donoghue wrote:
> On 02/01/17 16:16, Johan Hovold wrote:
>> On Tue, Dec 27, 2016 at 01:01:35PM +0000, Bryan O'Donoghue wrote:
>>> Later patches don't want or need to serialize the cancellation of an
>>> operation. This patch adds gb_operation_cancel_async() as a simple
>>> subset of the existing gb_operation_cancel() sans the synchronous wait
>>> on the cancellation queue.
>>
>> This one is not needed and complicates the interface for no good reason
>> (the async suffix also falsely indicates that this function could be
>> called from atomic context).
>>
>> Just cancel synchronously in the delayed worker function you add later
>> in the series.
>>
>> Johan
>
> hmm ... sounds sensible in principle.
>
> Let me test that out.

I started reviewing the rest of the series, but got interrupted. Will send
more comments on the rest later.

> gott nytt år! (Happy New Year!)

God fortsättning! (Happy New Year to you too!)

Johan
This patch adds a generic mechanism for handling timeouts of asynchronous operations to operation.c. After doing a gb_operation_request_send() it schedules a delayed worker. When the delayed worker's timer expires, the worker runs and does a gb_operation_cancel(). A gb_operation_cancel() will result in the operation completion callback being called with -ETIMEDOUT as the result code. If operation completion has already taken place, the gb_operation_cancel() has no effect. This means each driver doing asynchronous operations doesn't have to implement its own special timeout mechanism; it simply waits for operation.c to provide the appropriate status code for a given operation.
Signed-off-by: Bryan O'Donoghue <pure.logic@nexus-software.ie>
---
 drivers/staging/greybus/operation.c | 66 +++++++++++++++++++++++++++++++++++++
 drivers/staging/greybus/operation.h |  5 +++
 2 files changed, 71 insertions(+)
diff --git a/drivers/staging/greybus/operation.c b/drivers/staging/greybus/operation.c
index e69f329..625fb4f 100644
--- a/drivers/staging/greybus/operation.c
+++ b/drivers/staging/greybus/operation.c
@@ -285,6 +285,31 @@ static void gb_operation_work(struct work_struct *work)
 	gb_operation_put(operation);
 }
 
+/*
+ * This function runs once the delayed_work associated with an operation
+ * that was originally launched with gb_operation_request_send_timeout has
+ * expired.
+ *
+ * If the timeout is the first thing to run then this callback cancels the
+ * operation and sets the result-code of the operation to -ETIMEDOUT.
+ * Setting the result-code to -ETIMEDOUT will subsequently cause the
+ * completion callback of the operation to run with -ETIMEDOUT as the result
+ * code.
+ *
+ * Alternatively, if the operation has already had its status code set to
+ * something other than -EINPROGRESS then this function will result in no
+ * further action being taken.
+ */
+static void gb_operation_delayed_work(struct work_struct *work)
+{
+	struct delayed_work *delayed_work = to_delayed_work(work);
+	struct gb_operation *operation =
+		container_of(delayed_work, struct gb_operation, delayed_work);
+
+	gb_operation_cancel_async(operation, -ETIMEDOUT);
+	gb_operation_put(operation);
+}
+
 static void gb_operation_message_init(struct gb_host_device *hd,
 				struct gb_message *message, u16 operation_id,
 				size_t payload_size, u8 type)
@@ -524,6 +549,7 @@ gb_operation_create_common(struct gb_connection *connection, u8 type,
 	operation->type = type;
 	operation->errno = -EBADR;	/* Initial value--means "never set" */
 
+	INIT_DELAYED_WORK(&operation->delayed_work, gb_operation_delayed_work);
 	INIT_WORK(&operation->work, gb_operation_work);
 	init_completion(&operation->completion);
 	kref_init(&operation->kref);
@@ -673,6 +699,7 @@ EXPORT_SYMBOL_GPL(gb_operation_put);
 static void gb_operation_sync_callback(struct gb_operation *operation)
 {
 	complete(&operation->completion);
+	cancel_delayed_work_sync(&operation->delayed_work);
 }
 
 /**
@@ -754,6 +781,45 @@ int gb_operation_request_send(struct gb_operation *operation,
 EXPORT_SYMBOL_GPL(gb_operation_request_send);
 
 /*
+ * Send an asynchronous operation. This function will not block; it returns
+ * immediately. The delayed worker gb_operation_delayed_work() will run
+ * unconditionally, dropping the extra reference we take below.
+ */
+int gb_operation_request_send_timeout(struct gb_operation *operation,
+				      unsigned int timeout,
+				      gb_operation_callback callback,
+				      gfp_t gfp)
+{
+	bool queued;
+	unsigned long timeout_jiffies;
+	int ret = 0;
+
+	/* Reference dropped later in gb_operation_delayed_work() or on error */
+	gb_operation_get(operation);
+
+	ret = gb_operation_request_send(operation, callback, gfp);
+	if (ret) {
+		gb_operation_put(operation);
+		return ret;
+	}
+
+	if (timeout)
+		timeout_jiffies = msecs_to_jiffies(timeout);
+	else
+		timeout_jiffies = MAX_SCHEDULE_TIMEOUT;
+
+	queued = queue_delayed_work(gb_operation_completion_wq,
+				    &operation->delayed_work,
+				    timeout_jiffies);
+	if (!queued) {
+		gb_operation_put(operation);
+		ret = -EBUSY;
+	}
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(gb_operation_request_send_timeout);
+
+/*
  * Send a synchronous operation. This function is expected to
  * block, returning only when the response has arrived, (or when an
  * error is detected. The return value is the result of the
diff --git a/drivers/staging/greybus/operation.h b/drivers/staging/greybus/operation.h
index 467a0dc..2cdcf84 100644
--- a/drivers/staging/greybus/operation.h
+++ b/drivers/staging/greybus/operation.h
@@ -98,6 +98,7 @@ struct gb_operation {
 	struct work_struct	work;
 	gb_operation_callback	callback;
 	struct completion	completion;
+	struct delayed_work	delayed_work;
 
 	struct kref		kref;
 	atomic_t		waiters;
@@ -174,6 +175,10 @@ gb_operation_request_send_sync(struct gb_operation *operation)
 			GB_OPERATION_TIMEOUT_DEFAULT);
 }
 
+int gb_operation_request_send_timeout(struct gb_operation *operation,
+				      unsigned int timeout,
+				      gb_operation_callback callback,
+				      gfp_t gfp);
 int gb_operation_cancel_async(struct gb_operation *operation, int errno);
 void gb_operation_cancel(struct gb_operation *operation, int errno);
 void gb_operation_cancel_incoming(struct gb_operation *operation, int errno);
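The timeout handling in gb_operation_request_send_timeout() reduces to: a zero timeout means "no timeout at all" (MAX_SCHEDULE_TIMEOUT), anything else is converted from milliseconds to jiffies. A small user-space model of that selection is below; HZ=100 and the truncating conversion are assumptions for illustration only, since the kernel's msecs_to_jiffies() also rounds up and saturates.

```c
#include <assert.h>
#include <limits.h>

#define HZ 100				/* illustrative; kernel-config dependent */
#define MAX_SCHEDULE_TIMEOUT LONG_MAX	/* same idea as the kernel constant */

/* Simplified msecs_to_jiffies(): basic scaling only, no round-up. */
static long model_msecs_to_jiffies(unsigned int ms)
{
	return (long)(ms * HZ) / 1000;
}

/* Mirrors the timeout selection in gb_operation_request_send_timeout(). */
static long timeout_to_jiffies(unsigned int timeout_ms)
{
	if (timeout_ms)
		return model_msecs_to_jiffies(timeout_ms);
	return MAX_SCHEDULE_TIMEOUT;	/* 0 means wait forever */
}
```

So a caller passing 2000 ms gets a 2-second delayed work, while a caller passing 0 effectively disables the timeout without needing a separate code path.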
The lives of asynchronous operation completion handlers are made easier if there is a generic pointer that can store private data associated with the operation. This patch adds a pointer field to operation.h and get/set methods to access that pointer.
Signed-off-by: Bryan O'Donoghue <pure.logic@nexus-software.ie>
---
 drivers/staging/greybus/operation.h | 13 +++++++++++++
 1 file changed, 13 insertions(+)
diff --git a/drivers/staging/greybus/operation.h b/drivers/staging/greybus/operation.h
index 2cdcf84..d89aafe 100644
--- a/drivers/staging/greybus/operation.h
+++ b/drivers/staging/greybus/operation.h
@@ -105,6 +105,8 @@ struct gb_operation {
 
 	int			active;
 	struct list_head	links;		/* connection->operations */
+
+	void			*private;
 };
 
 static inline bool
@@ -210,6 +212,17 @@ static inline int gb_operation_unidirectional(struct gb_connection *connection,
 			request, request_size, GB_OPERATION_TIMEOUT_DEFAULT);
 }
 
+static inline void *gb_operation_get_data(struct gb_operation *operation)
+{
+	return operation->private;
+}
+
+static inline void gb_operation_set_data(struct gb_operation *operation,
+					 void *data)
+{
+	operation->private = data;
+}
+
 int gb_operation_init(void);
 void gb_operation_exit(void);
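The accessors are a plain opaque-pointer round trip: a driver stores its per-operation context before sending and fetches it back in the completion callback, instead of searching a global list by operation id. A minimal stand-alone sketch of the same pattern (the struct here is a stub, not the real gb_operation):

```c
#include <assert.h>
#include <stddef.h>

/* Stub operation carrying only the private pointer from the patch. */
struct gb_operation {
	void *private;		/* opaque driver data */
};

static inline void *gb_operation_get_data(struct gb_operation *operation)
{
	return operation->private;
}

static inline void gb_operation_set_data(struct gb_operation *operation,
					 void *data)
{
	operation->private = data;
}
```

The later loopback conversion uses exactly this: gb_operation_set_data(operation, op_async) at send time, gb_operation_get_data(operation) in the callback.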
Loopback has its own internal method for tracking and timing out asynchronous operations; however, previous patches make it possible to use the functionality provided by operation.c instead. Using the code in operation.c means we can completely subtract the timer, the work-queue, the kref and the cringe-worthy 'pending' flag. The completion callback triggered by operation.c will provide an authoritative result code, including -ETIMEDOUT for asynchronous operations. Tested to ensure the existing stats given by loopback for synchronous, asynchronous and asynchronous+timeout operations still hold.
Before: gb_loopback_test -i 1000 -s 128 -t sink -p

1970-1-1 7:30:8 test: sink path: gb_loopback0 size: 128 iterations: 1000 errors: 0 async: Disabled requests per-sec: min=432, max=436, average=434.662994, jitter=4 ap-throughput B/s: min=67535 max=68036 average=67807.421875 jitter=501 ap-latency usec: min=1626 max=3559 average=2287.077881 jitter=1933 apbridge-latency usec: min=0 max=0 average=0.000000 jitter=0 gbphy-latency usec: min=0 max=0 average=0.000000 jitter=0
After: gb_loopback_test -i 1000 -s 128 -t sink -p
1970-1-1 0:49:59 test: sink path: gb_loopback0 size: 128 iterations: 1000 errors: 0 async: Disabled requests per-sec: min=437, max=442, average=440.065491, jitter=5 ap-throughput B/s: min=68212 max=69107 average=68650.218750 jitter=895 ap-latency usec: min=1600 max=9472 average=2259.075928 jitter=7872 apbridge-latency usec: min=0 max=0 average=0.000000 jitter=0 gbphy-latency usec: min=0 max=0 average=0.000000 jitter=0
Before: gb_loopback_test -i 1000 -s 128 -t sink -x -c 4 -p
1970-1-1 7:31:17 test: sink path: gb_loopback0 size: 128 iterations: 1000 errors: 0 async: Enabled requests per-sec: min=1513, max=1513, average=1513.713501, jitter=0 ap-throughput B/s: min=236139 max=236139 average=236139.296875 jitter=0 ap-latency usec: min=1413 max=7656 average=2491.353027 jitter=6243 apbridge-latency usec: min=0 max=0 average=0.000000 jitter=0 gbphy-latency usec: min=0 max=0 average=0.000000 jitter=0
After: gb_loopback_test -i 1000 -s 128 -t sink -x -c 4 -p
1970-1-1 0:50:42 test: sink path: gb_loopback0 size: 128 iterations: 1000 errors: 0 async: Enabled requests per-sec: min=1513, max=1513, average=1513.702026, jitter=0 ap-throughput B/s: min=236137 max=236137 average=236137.515625 jitter=0 ap-latency usec: min=1584 max=3813 average=2488.056885 jitter=2229 apbridge-latency usec: min=0 max=0 average=0.000000 jitter=0 gbphy-latency usec: min=0 max=0 average=0.000000 jitter=0
Before: gb_loopback_test -i 1000 -s 128 -t sink -x -c 4 -p -o 2000
1970-1-1 8:17:46 test: sink path: gb_loopback0 size: 128 iterations: 1000 errors: 730 async: Enabled requests per-sec: min=353, max=353, average=353.882080, jitter=0 ap-throughput B/s: min=55205 max=55205 average=55205.605469 jitter=0 ap-latency usec: min=1498 max=14373 average=2574.329590 jitter=12875 apbridge-latency usec: min=0 max=0 average=0.000000 jitter=0 gbphy-latency usec: min=0 max=0 average=0.000000 jitter=0
After: gb_loopback_test -i 1000 -s 128 -t sink -x -c 4 -p -o 2000
1970-1-1 0:51:45 test: sink path: gb_loopback0 size: 128 iterations: 1000 errors: 645 async: Enabled requests per-sec: min=515, max=515, average=515.907410, jitter=0 ap-throughput B/s: min=80481 max=80481 average=80481.554688 jitter=0 ap-latency usec: min=1442 max=3607 average=2449.278809 jitter=2165 apbridge-latency usec: min=0 max=0 average=0.000000 jitter=0 gbphy-latency usec: min=0 max=0 average=0.000000 jitter=0
Signed-off-by: Bryan O'Donoghue <pure.logic@nexus-software.ie>
Signed-off-by: Mitchell Tasman <tasman@leaflabs.com>
---
 drivers/staging/greybus/loopback.c | 167 +++++++------------------------------
 1 file changed, 32 insertions(+), 135 deletions(-)
diff --git a/drivers/staging/greybus/loopback.c b/drivers/staging/greybus/loopback.c
index 3184dd3..5cdd8d0 100644
--- a/drivers/staging/greybus/loopback.c
+++ b/drivers/staging/greybus/loopback.c
@@ -59,11 +59,6 @@ struct gb_loopback_async_operation {
 	struct gb_loopback *gb;
 	struct gb_operation *operation;
 	struct timeval ts;
-	struct timer_list timer;
-	struct list_head entry;
-	struct work_struct work;
-	struct kref kref;
-	bool pending;
 	int (*completion)(struct gb_loopback_async_operation *op_async);
 };
 
@@ -445,56 +440,6 @@ static int gb_loopback_operation_sync(struct gb_loopback *gb, int type,
 	return ret;
 }
 
-static void __gb_loopback_async_operation_destroy(struct kref *kref)
-{
-	struct gb_loopback_async_operation *op_async;
-
-	op_async = container_of(kref, struct gb_loopback_async_operation, kref);
-
-	list_del(&op_async->entry);
-	if (op_async->operation)
-		gb_operation_put(op_async->operation);
-	atomic_dec(&op_async->gb->outstanding_operations);
-	wake_up(&op_async->gb->wq_completion);
-	kfree(op_async);
-}
-
-static void gb_loopback_async_operation_get(struct gb_loopback_async_operation
-					    *op_async)
-{
-	kref_get(&op_async->kref);
-}
-
-static void gb_loopback_async_operation_put(struct gb_loopback_async_operation
-					    *op_async)
-{
-	unsigned long flags;
-
-	spin_lock_irqsave(&gb_dev.lock, flags);
-	kref_put(&op_async->kref, __gb_loopback_async_operation_destroy);
-	spin_unlock_irqrestore(&gb_dev.lock, flags);
-}
-
-static struct gb_loopback_async_operation *
-	gb_loopback_operation_find(u16 id)
-{
-	struct gb_loopback_async_operation *op_async;
-	bool found = false;
-	unsigned long flags;
-
-	spin_lock_irqsave(&gb_dev.lock, flags);
-	list_for_each_entry(op_async, &gb_dev.list_op_async, entry) {
-		if (op_async->operation->id == id) {
-			gb_loopback_async_operation_get(op_async);
-			found = true;
-			break;
-		}
-	}
-	spin_unlock_irqrestore(&gb_dev.lock, flags);
-
-	return found ? op_async : NULL;
-}
-
 static void gb_loopback_async_wait_all(struct gb_loopback *gb)
 {
 	wait_event(gb->wq_completion,
@@ -506,86 +451,43 @@ static void gb_loopback_async_operation_callback(struct gb_operation *operation)
 	struct gb_loopback_async_operation *op_async;
 	struct gb_loopback *gb;
 	struct timeval te;
-	bool err = false;
+	int result;
 
 	do_gettimeofday(&te);
-	op_async = gb_loopback_operation_find(operation->id);
-	if (!op_async)
-		return;
-
+	result = gb_operation_result(operation);
+	op_async = gb_operation_get_data(operation);
 	gb = op_async->gb;
+
 	mutex_lock(&gb->mutex);
 
-	if (!op_async->pending || gb_operation_result(operation)) {
-		err = true;
-	} else {
-		if (op_async->completion)
-			if (op_async->completion(op_async))
-				err = true;
-	}
+	if (!result && op_async->completion)
+		result = op_async->completion(op_async);
 
-	if (!err) {
+	if (!result) {
 		gb_loopback_push_latency_ts(gb, &op_async->ts, &te);
 		gb->elapsed_nsecs = gb_loopback_calc_latency(&op_async->ts,
							     &te);
+	} else {
+		gb->error++;
+		if (result == -ETIMEDOUT)
+			gb->requests_timedout++;
 	}
 
-	if (op_async->pending) {
-		if (err)
-			gb->error++;
-		gb->iteration_count++;
-		op_async->pending = false;
-		del_timer_sync(&op_async->timer);
-		gb_loopback_async_operation_put(op_async);
-		gb_loopback_calculate_stats(gb, err);
-	}
-	mutex_unlock(&gb->mutex);
-
-	dev_dbg(&gb->connection->bundle->dev, "complete operation %d\n",
-		operation->id);
-
-	gb_loopback_async_operation_put(op_async);
-}
-
-static void gb_loopback_async_operation_work(struct work_struct *work)
-{
-	struct gb_loopback *gb;
-	struct gb_operation *operation;
-	struct gb_loopback_async_operation *op_async;
-
-	op_async = container_of(work, struct gb_loopback_async_operation, work);
-	gb = op_async->gb;
-	operation = op_async->operation;
+	gb->iteration_count++;
+	gb_loopback_calculate_stats(gb, result);
 
-	mutex_lock(&gb->mutex);
-	if (op_async->pending) {
-		gb->requests_timedout++;
-		gb->error++;
-		gb->iteration_count++;
-		op_async->pending = false;
-		gb_loopback_async_operation_put(op_async);
-		gb_loopback_calculate_stats(gb, true);
-	}
 	mutex_unlock(&gb->mutex);
 
-	dev_dbg(&gb->connection->bundle->dev, "timeout operation %d\n",
+	dev_dbg(&gb->connection->bundle->dev, "complete operation %d\n",
 		operation->id);
 
-	gb_operation_cancel(operation, -ETIMEDOUT);
-	gb_loopback_async_operation_put(op_async);
-}
-
-static void gb_loopback_async_operation_timeout(unsigned long data)
-{
-	struct gb_loopback_async_operation *op_async;
-	u16 id = data;
+	/* Wake up waiters */
+	atomic_dec(&op_async->gb->outstanding_operations);
+	wake_up(&gb->wq_completion);
 
-	op_async = gb_loopback_operation_find(id);
-	if (!op_async) {
-		pr_err("operation %d not found - time out ?\n", id);
-		return;
-	}
-	schedule_work(&op_async->work);
+	/* Release resources */
+	gb_operation_put(operation);
+	kfree(op_async);
 }
 
 static int gb_loopback_async_operation(struct gb_loopback *gb, int type,
@@ -596,15 +498,11 @@ static int gb_loopback_async_operation(struct gb_loopback *gb, int type,
 	struct gb_loopback_async_operation *op_async;
 	struct gb_operation *operation;
 	int ret;
-	unsigned long flags;
 
 	op_async = kzalloc(sizeof(*op_async), GFP_KERNEL);
 	if (!op_async)
 		return -ENOMEM;
 
-	INIT_WORK(&op_async->work, gb_loopback_async_operation_work);
-	kref_init(&op_async->kref);
-
 	operation = gb_operation_create(gb->connection, type,
 					request_size, response_size,
 					GFP_KERNEL);
 	if (!operation) {
@@ -615,32 +513,29 @@ static int gb_loopback_async_operation(struct gb_loopback *gb, int type,
 	if (request_size)
 		memcpy(operation->request->payload, request, request_size);
 
+	gb_operation_set_data(operation, op_async);
+
 	op_async->gb = gb;
 	op_async->operation = operation;
 	op_async->completion = completion;
 
-	spin_lock_irqsave(&gb_dev.lock, flags);
-	list_add_tail(&op_async->entry, &gb_dev.list_op_async);
-	spin_unlock_irqrestore(&gb_dev.lock, flags);
-
 	do_gettimeofday(&op_async->ts);
-	op_async->pending = true;
+
+	atomic_inc(&gb->outstanding_operations);
 	mutex_lock(&gb->mutex);
-	ret = gb_operation_request_send(operation,
+	ret = gb_operation_request_send_timeout(operation,
+					jiffies_to_msecs(gb->jiffy_timeout),
 					gb_loopback_async_operation_callback,
 					GFP_KERNEL);
 	if (ret)
 		goto error;
 
-	setup_timer(&op_async->timer, gb_loopback_async_operation_timeout,
-		    (unsigned long)operation->id);
-	op_async->timer.expires = jiffies + gb->jiffy_timeout;
-	add_timer(&op_async->timer);
-
 	goto done;
 error:
-	gb_loopback_async_operation_put(op_async);
+	atomic_dec(&gb->outstanding_operations);
+	gb_operation_put(operation);
+	kfree(op_async);
 done:
 	mutex_unlock(&gb->mutex);
 	return ret;
@@ -1045,8 +940,10 @@ static int gb_loopback_fn(void *data)
 				error = gb_loopback_async_sink(gb, size);
 			}
 
-			if (error)
+			if (error) {
 				gb->error++;
+				gb->iteration_count++;
+			}
 		} else {
 			/* We are effectively single threaded here */
 			if (type == GB_LOOPBACK_TYPE_PING)
The API presented by operation.h expects milliseconds to be passed in. This patch drops the conversion from user-input microseconds-to-jiffies and from jiffies-to-milliseconds and instead converts directly from microseconds-to-milliseconds. The new minimum timeout will be one millisecond as opposed to one jiffy.
Signed-off-by: Bryan O'Donoghue <pure.logic@nexus-software.ie>
---
 drivers/staging/greybus/loopback.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)
diff --git a/drivers/staging/greybus/loopback.c b/drivers/staging/greybus/loopback.c
index 5cdd8d0..83f3b9f 100644
--- a/drivers/staging/greybus/loopback.c
+++ b/drivers/staging/greybus/loopback.c
@@ -95,7 +95,7 @@ struct gb_loopback {
 	u32 requests_completed;
 	u32 requests_timedout;
 	u32 timeout;
-	u32 jiffy_timeout;
+	u32 msec_timeout;
 	u32 timeout_min;
 	u32 timeout_max;
 	u32 outstanding_operations_max;
@@ -113,7 +113,7 @@ static struct class loopback_class = {
 };
 static DEFINE_IDA(loopback_ida);
 
-/* Min/max values in jiffies */
+/* Min/max values in milliseconds */
 #define GB_LOOPBACK_TIMEOUT_MIN		1
 #define GB_LOOPBACK_TIMEOUT_MAX		10000
 
@@ -263,11 +263,11 @@ static void gb_loopback_check_attr(struct gb_loopback *gb)
 	case GB_LOOPBACK_TYPE_PING:
 	case GB_LOOPBACK_TYPE_TRANSFER:
 	case GB_LOOPBACK_TYPE_SINK:
-		gb->jiffy_timeout = usecs_to_jiffies(gb->timeout);
-		if (!gb->jiffy_timeout)
-			gb->jiffy_timeout = GB_LOOPBACK_TIMEOUT_MIN;
-		else if (gb->jiffy_timeout > GB_LOOPBACK_TIMEOUT_MAX)
-			gb->jiffy_timeout = GB_LOOPBACK_TIMEOUT_MAX;
+		gb->msec_timeout = gb->timeout / 1000;
+		if (!gb->msec_timeout)
+			gb->msec_timeout = GB_LOOPBACK_TIMEOUT_MIN;
+		else if (gb->msec_timeout > GB_LOOPBACK_TIMEOUT_MAX)
+			gb->msec_timeout = GB_LOOPBACK_TIMEOUT_MAX;
 		gb_loopback_reset_stats(gb);
 		wake_up(&gb->wq);
 		break;
@@ -525,7 +525,7 @@ static int gb_loopback_async_operation(struct gb_loopback *gb, int type,
 
 	mutex_lock(&gb->mutex);
 	ret = gb_operation_request_send_timeout(operation,
-					jiffies_to_msecs(gb->jiffy_timeout),
+					gb->msec_timeout,
 					gb_loopback_async_operation_callback,
 					GFP_KERNEL);
 	if (ret)
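The conversion above can be checked in isolation: for a user-supplied timeout in microseconds, the effective value is floor(us / 1000), clamped to [1, 10000] ms. A small sketch mirroring the gb_loopback_check_attr() logic from this patch:

```c
#include <assert.h>

#define GB_LOOPBACK_TIMEOUT_MIN	1	/* milliseconds */
#define GB_LOOPBACK_TIMEOUT_MAX	10000	/* milliseconds */

/* Mirrors the msec_timeout computation: microseconds in, clamped ms out. */
static unsigned int loopback_timeout_msecs(unsigned int timeout_usecs)
{
	unsigned int msec = timeout_usecs / 1000;

	if (!msec)
		msec = GB_LOOPBACK_TIMEOUT_MIN;
	else if (msec > GB_LOOPBACK_TIMEOUT_MAX)
		msec = GB_LOOPBACK_TIMEOUT_MAX;
	return msec;
}
```

So a 500 us request becomes the 1 ms minimum, and anything above 10 seconds is capped, as the commit message describes.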