How significant is the cache maintenance overhead? It depends. eMMC devices are much faster now than a few years ago, and cache maintenance costs more due to multiple cache levels and speculative cache pre-fetch. In relative terms the cost of handling the caches has increased, and it is now a bottleneck when dealing with fast eMMC together with DMA.
The intention of introducing non-blocking mmc requests is to minimize the time between one mmc request ending and the next mmc request starting. In the current implementation the MMC controller is idle while dma_map_sg and dma_unmap_sg are running. Introducing non-blocking mmc requests makes it possible to prepare the caches for the next job in parallel with an active mmc request.
This is done by making issue_rw_rq() non-blocking. The increase in throughput is proportional to the time it takes to prepare a request (the major part of the preparation is dma_map_sg and dma_unmap_sg) and to how fast the memory is: the faster the MMC/SD is, the more significant the prepare-request time becomes. Measurements on U5500 and Panda with eMMC and SD show a significant performance gain for large reads when running in DMA mode. In the PIO case the performance is unchanged.
There are two optional hooks, pre_req() and post_req(), that the host driver may implement in order to move work to before and after the actual mmc_request function is called. In the DMA case pre_req() may run dma_map_sg() and prepare the dma descriptor, and post_req() runs dma_unmap_sg().
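As a rough illustration, a host driver's hooks could move the mapping out of the request path along these lines (a minimal sketch only; my_pre_req/my_post_req are placeholder names, it uses the host_cookie field added in the first patch, and the real drivers below also prepare dma descriptors and validate the cookie value):

	/* Sketch: do the cache maintenance (dma_map_sg) outside ->request() */
	static void my_pre_req(struct mmc_host *mmc, struct mmc_request *mrq,
			       bool is_first_req)
	{
		struct mmc_data *data = mrq->data;

		if (!data || data->host_cookie)
			return;
		/* runs in parallel with the previous, still active transfer */
		if (dma_map_sg(mmc_dev(mmc), data->sg, data->sg_len,
			       (data->flags & MMC_DATA_READ) ?
			       DMA_FROM_DEVICE : DMA_TO_DEVICE))
			data->host_cookie = 1;	/* mark data as prepared */
	}

	static void my_post_req(struct mmc_host *mmc, struct mmc_request *mrq,
				int err)
	{
		struct mmc_data *data = mrq->data;

		if (data && data->host_cookie) {
			dma_unmap_sg(mmc_dev(mmc), data->sg, data->sg_len,
				     (data->flags & MMC_DATA_READ) ?
				     DMA_FROM_DEVICE : DMA_TO_DEVICE);
			data->host_cookie = 0;
		}
	}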
Details on measurements from IOZone and mmc_test: https://wiki.linaro.org/WorkingGroups/Kernel/Specs/StoragePerfMMC-async-req
Changes since v3:
 * Based on 2.6.39-rc7
 * Add error check for testlist in mmc_test.c
 * Resolve an issue in mmc-queue-thread that caused the mmc-thread to
   miss a wakeup
 * Move parallel request handling to core.c. This simplifies the
   interface from 4 public functions to 1, and also gives SDIO access
   to the same functionality, even though the function is not tuned for
   the SDIO execution flow yet.
Per Forlin (12):
  mmc: add none blocking mmc request function
  omap_hsmmc: use original sg_len for dma_unmap_sg
  omap_hsmmc: add support for pre_req and post_req
  mmci: implement pre_req() and post_req()
  mmc: mmc_test: add debugfs file to list all tests
  mmc: mmc_test: add test for none blocking transfers
  mmc: add member in mmc queue struct to hold request data
  mmc: add a block request prepare function
  mmc: move error code in mmc_block_issue_rw_rq to a separate function.
  mmc: add a second mmc queue request member
  mmc: test: add random fault injection in core.c
  mmc: add handling for two parallel block requests in issue_rw_rq
 drivers/mmc/card/block.c      |  452 +++++++++++++++++++++++++----------------
 drivers/mmc/card/mmc_test.c   |  361 ++++++++++++++++++++++++++++++++-
 drivers/mmc/card/queue.c      |  184 +++++++++++------
 drivers/mmc/card/queue.h      |   32 +++-
 drivers/mmc/core/core.c       |  165 ++++++++++++++-
 drivers/mmc/core/debugfs.c    |    5 +
 drivers/mmc/host/mmci.c       |  146 ++++++++++++--
 drivers/mmc/host/mmci.h       |    8 +
 drivers/mmc/host/omap_hsmmc.c |   90 ++++++++-
 include/linux/mmc/core.h      |    6 +-
 include/linux/mmc/host.h      |   19 ++
 lib/Kconfig.debug             |   11 +
 12 files changed, 1187 insertions(+), 292 deletions(-)
Previously there has only been one function, mmc_wait_for_req(), to start and wait for a request. This patch adds:
 * mmc_start_req() - starts a request without waiting.
   If there is an ongoing request, wait for completion of that request,
   then start the new one and return. Does not wait for the new command
   to complete.
This patch also adds new function members in struct mmc_host_ops, only called from core.c:
 * pre_req  - asks the host driver to prepare for the next job
 * post_req - asks the host driver to clean up after a completed job
The intention is to use pre_req() and post_req() to do cache maintenance while a request is active. pre_req() can be called while another request is active, to minimize the latency of starting the next job. post_req() can be called after the next job has started, to clean up the previous request; this minimizes the host driver's request-end latency. post_req() is typically used before ending the block request and handing the buffer over to the block layer.
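A caller with a stream of requests can then keep the controller busy with a loop of roughly this shape (sketch only; more_requests(), prepare_next_areq() and finish_block_request() are hypothetical helpers, and the real flow is the issue_rw_rq rework later in this series):

	struct mmc_async_req *done;
	int err;

	while (more_requests()) {
		struct mmc_async_req *next = prepare_next_areq();

		/* pre_req() runs for "next" while the previous
		 * request is still being transferred */
		done = mmc_start_req(host, next, &err);
		if (err)
			break;
		if (done)	/* NULL on the very first iteration */
			finish_block_request(done);
	}
	/* flush: wait for the last outstanding request */
	done = mmc_start_req(host, NULL, &err);
	if (done)
		finish_block_request(done);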
Add a host-private member to mmc_data, to be used by pre_req() to mark the data. The host driver then checks this mark to see whether the data has been prepared or not.
Signed-off-by: Per Forlin <per.forlin@linaro.org>
---
 drivers/mmc/core/core.c  |  111 +++++++++++++++++++++++++++++++++++++++++----
 include/linux/mmc/core.h |    6 ++-
 include/linux/mmc/host.h |   16 +++++++
 3 files changed, 122 insertions(+), 11 deletions(-)
diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c index 1f453ac..66e1403 100644 --- a/drivers/mmc/core/core.c +++ b/drivers/mmc/core/core.c @@ -198,10 +198,108 @@ mmc_start_request(struct mmc_host *host, struct mmc_request *mrq)
static void mmc_wait_done(struct mmc_request *mrq) { - complete(mrq->done_data); + complete(&mrq->completion); +} + +static void __mmc_start_req(struct mmc_host *host, struct mmc_request *mrq) +{ + init_completion(&mrq->completion); + mrq->done = mmc_wait_done; + mmc_start_request(host, mrq); +} + +static void mmc_wait_for_req_done(struct mmc_host *host, + struct mmc_request *mrq) +{ + wait_for_completion(&mrq->completion); +} + +/** + * mmc_pre_req - Prepare for a new request + * @host: MMC host to prepare command + * @mrq: MMC request to prepare for + * @is_first_req: true if there is no previous started request + * that may run in parellel to this call, otherwise false + * + * mmc_pre_req() is called in prior to mmc_start_req() to let + * host prepare for the new request. Preparation of a request may be + * performed while another request is running on the host. + */ +static void mmc_pre_req(struct mmc_host *host, struct mmc_request *mrq, + bool is_first_req) +{ + if (host->ops->pre_req) + host->ops->pre_req(host, mrq, is_first_req); }
/** + * mmc_post_req - Post process a completed request + * @host: MMC host to post process command + * @mrq: MMC request to post process for + * @err: Error, if none zero, clean up any resources made in pre_req + * + * Let the host post process a completed request. Post processing of + * a request may be performed while another reuqest is running. + */ +static void mmc_post_req(struct mmc_host *host, struct mmc_request *mrq, + int err) +{ + if (host->ops->post_req) + host->ops->post_req(host, mrq, err); +} + +/** + * mmc_start_req - start a none blocking request + * @host: MMC host to start command + * @areq: async request to start + * @error: none zero in case of error + * + * Start a new MMC custom command request for a host. + * If there is on ongoing async request wait for completion + * of that request and start the new one and return. + * Does not wait for the new request to complete. + * + * Returns the completed async request, NULL in case of none completed. + */ +struct mmc_async_req *mmc_start_req(struct mmc_host *host, + struct mmc_async_req *areq, int *error) +{ + int err = 0; + struct mmc_async_req *data = host->areq; + + /* Prepare a new request */ + if (areq) + mmc_pre_req(host, areq->mrq, !host->areq); + + if (host->areq) { + mmc_wait_for_req_done(host, host->areq->mrq); + err = host->areq->err_check(host->card, host->areq); + if (err) { + mmc_post_req(host, host->areq->mrq, 0); + if (areq) + mmc_post_req(host, areq->mrq, -EINVAL); + + host->areq = NULL; + if (error) + *error = err; + return data; + } + } + + if (areq) + __mmc_start_req(host, areq->mrq); + + if (host->areq) + mmc_post_req(host, host->areq->mrq, 0); + + host->areq = areq; + if (error) + *error = err; + return data; +} +EXPORT_SYMBOL(mmc_start_req); + +/** * mmc_wait_for_req - start a request and wait for completion * @host: MMC host to start command * @mrq: MMC request to start @@ -212,16 +310,9 @@ static void mmc_wait_done(struct mmc_request *mrq) */ void mmc_wait_for_req(struct mmc_host *host, struct mmc_request *mrq) { - DECLARE_COMPLETION_ONSTACK(complete); - - mrq->done_data = &complete; - mrq->done = mmc_wait_done; - - mmc_start_request(host, mrq); - - wait_for_completion(&complete); + __mmc_start_req(host, mrq); + mmc_wait_for_req_done(host, mrq); } - EXPORT_SYMBOL(mmc_wait_for_req);
/** diff --git a/include/linux/mmc/core.h b/include/linux/mmc/core.h index 07f27af..28b5956 100644 --- a/include/linux/mmc/core.h +++ b/include/linux/mmc/core.h @@ -117,6 +117,7 @@ struct mmc_data {
unsigned int sg_len; /* size of scatter list */ struct scatterlist *sg; /* I/O scatter list */ + s32 host_cookie; /* host private data */ };
struct mmc_request { @@ -124,13 +125,16 @@ struct mmc_request { struct mmc_data *data; struct mmc_command *stop;
- void *done_data; /* completion data */ + struct completion completion; void (*done)(struct mmc_request *);/* completion function */ };
struct mmc_host; struct mmc_card; +struct mmc_async_req;
+extern struct mmc_async_req *mmc_start_req(struct mmc_host *, + struct mmc_async_req *, int *); extern void mmc_wait_for_req(struct mmc_host *, struct mmc_request *); extern int mmc_wait_for_cmd(struct mmc_host *, struct mmc_command *, int); extern int mmc_wait_for_app_cmd(struct mmc_host *, struct mmc_card *, diff --git a/include/linux/mmc/host.h b/include/linux/mmc/host.h index eb792cb..d2d948b 100644 --- a/include/linux/mmc/host.h +++ b/include/linux/mmc/host.h @@ -88,6 +88,15 @@ struct mmc_host_ops { */ int (*enable)(struct mmc_host *host); int (*disable)(struct mmc_host *host, int lazy); + /* + * It is optional for the host to implement pre_req and post_req in + * order to support double buffering of requests (prepare one + * request while another request is active). + */ + void (*post_req)(struct mmc_host *host, struct mmc_request *req, + int err); + void (*pre_req)(struct mmc_host *host, struct mmc_request *req, + bool is_first_req); void (*request)(struct mmc_host *host, struct mmc_request *req); /* * Avoid calling these three functions too often or in a "fast path", @@ -122,6 +131,11 @@ struct mmc_host_ops { struct mmc_card; struct device;
+struct mmc_async_req { + struct mmc_request *mrq; + int (*err_check) (struct mmc_card *, struct mmc_async_req *); +}; + struct mmc_host { struct device *parent; struct device class_dev; @@ -242,6 +256,8 @@ struct mmc_host {
struct dentry *debugfs_root;
+ struct mmc_async_req *areq; /* active async req */ + unsigned long private[0] ____cacheline_aligned; };
Don't use the sg_len returned by dma_map_sg() as the input parameter to dma_unmap_sg(). Use the original sg_len for both dma_map_sg() and dma_unmap_sg().
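The DMA-API rule being applied here, as a generic sketch (not the driver code itself): dma_map_sg() may coalesce entries and return fewer mapped entries than it was given, but dma_unmap_sg() must be passed the same nents that was passed to dma_map_sg():

	int mapped = dma_map_sg(dev, sg, nents, DMA_TO_DEVICE);
	/* program the DMA controller using "mapped" entries ... */
	dma_unmap_sg(dev, sg, nents, DMA_TO_DEVICE);	/* nents, not mapped */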
Signed-off-by: Per Forlin <per.forlin@linaro.org>
---
 drivers/mmc/host/omap_hsmmc.c |    5 +++--
 1 files changed, 3 insertions(+), 2 deletions(-)
diff --git a/drivers/mmc/host/omap_hsmmc.c b/drivers/mmc/host/omap_hsmmc.c index 259ece0..ad3731a 100644 --- a/drivers/mmc/host/omap_hsmmc.c +++ b/drivers/mmc/host/omap_hsmmc.c @@ -959,7 +959,8 @@ static void omap_hsmmc_dma_cleanup(struct omap_hsmmc_host *host, int errno) spin_unlock(&host->irq_lock);
if (host->use_dma && dma_ch != -1) { - dma_unmap_sg(mmc_dev(host->mmc), host->data->sg, host->dma_len, + dma_unmap_sg(mmc_dev(host->mmc), host->data->sg, + host->data->sg_len, omap_hsmmc_get_dma_dir(host, host->data)); omap_free_dma(dma_ch); } @@ -1343,7 +1344,7 @@ static void omap_hsmmc_dma_cb(int lch, u16 ch_status, void *cb_data) return; }
- dma_unmap_sg(mmc_dev(host->mmc), data->sg, host->dma_len, + dma_unmap_sg(mmc_dev(host->mmc), data->sg, data->sg_len, omap_hsmmc_get_dma_dir(host, data));
req_in_progress = host->req_in_progress;
pre_req() runs dma_map_sg(), post_req() runs dma_unmap_sg(). If pre_req() is not called before omap_hsmmc_request(), dma_map_sg() will be issued before starting the transfer. It is optional to use pre_req(). If pre_req() is issued, post_req() must be called as well.
Signed-off-by: Per Forlin <per.forlin@linaro.org>
---
 drivers/mmc/host/omap_hsmmc.c |   87 +++++++++++++++++++++++++++++++++++++++--
 1 files changed, 83 insertions(+), 4 deletions(-)
diff --git a/drivers/mmc/host/omap_hsmmc.c b/drivers/mmc/host/omap_hsmmc.c index ad3731a..2116c09 100644 --- a/drivers/mmc/host/omap_hsmmc.c +++ b/drivers/mmc/host/omap_hsmmc.c @@ -141,6 +141,11 @@ #define OMAP_HSMMC_WRITE(base, reg, val) \ __raw_writel((val), (base) + OMAP_HSMMC_##reg)
+struct omap_hsmmc_next { + unsigned int dma_len; + s32 cookie; +}; + struct omap_hsmmc_host { struct device *dev; struct mmc_host *mmc; @@ -184,6 +189,7 @@ struct omap_hsmmc_host { int reqs_blocked; int use_reg; int req_in_progress; + struct omap_hsmmc_next next_data;
struct omap_mmc_platform_data *pdata; }; @@ -1344,8 +1350,9 @@ static void omap_hsmmc_dma_cb(int lch, u16 ch_status, void *cb_data) return; }
- dma_unmap_sg(mmc_dev(host->mmc), data->sg, data->sg_len, - omap_hsmmc_get_dma_dir(host, data)); + if (!data->host_cookie) + dma_unmap_sg(mmc_dev(host->mmc), data->sg, data->sg_len, + omap_hsmmc_get_dma_dir(host, data));
req_in_progress = host->req_in_progress; dma_ch = host->dma_ch; @@ -1363,6 +1370,45 @@ static void omap_hsmmc_dma_cb(int lch, u16 ch_status, void *cb_data) } }
+static int omap_hsmmc_pre_dma_transfer(struct omap_hsmmc_host *host, + struct mmc_data *data, + struct omap_hsmmc_next *next) +{ + int dma_len; + + if (!next && data->host_cookie && + data->host_cookie != host->next_data.cookie) { + printk(KERN_WARNING "[%s] invalid cookie: data->host_cookie %d" + " host->next_data.cookie %d\n", + __func__, data->host_cookie, host->next_data.cookie); + data->host_cookie = 0; + } + + /* Check if next job is already prepared */ + if (next || + (!next && data->host_cookie != host->next_data.cookie)) { + dma_len = dma_map_sg(mmc_dev(host->mmc), data->sg, + data->sg_len, + omap_hsmmc_get_dma_dir(host, data)); + + } else { + dma_len = host->next_data.dma_len; + host->next_data.dma_len = 0; + } + + + if (dma_len == 0) + return -EINVAL; + + if (next) { + next->dma_len = dma_len; + data->host_cookie = ++next->cookie < 0 ? 1 : next->cookie; + } else + host->dma_len = dma_len; + + return 0; +} + /* * Routine to configure and start DMA for the MMC card */ @@ -1396,9 +1442,10 @@ static int omap_hsmmc_start_dma_transfer(struct omap_hsmmc_host *host, mmc_hostname(host->mmc), ret); return ret; } + ret = omap_hsmmc_pre_dma_transfer(host, data, NULL); + if (ret) + return ret;
- host->dma_len = dma_map_sg(mmc_dev(host->mmc), data->sg, - data->sg_len, omap_hsmmc_get_dma_dir(host, data)); host->dma_ch = dma_ch; host->dma_sg_idx = 0;
@@ -1478,6 +1525,35 @@ omap_hsmmc_prepare_data(struct omap_hsmmc_host *host, struct mmc_request *req) return 0; }
+static void omap_hsmmc_post_req(struct mmc_host *mmc, struct mmc_request *mrq, + int err) +{ + struct omap_hsmmc_host *host = mmc_priv(mmc); + struct mmc_data *data = mrq->data; + + if (host->use_dma) { + dma_unmap_sg(mmc_dev(host->mmc), data->sg, data->sg_len, + omap_hsmmc_get_dma_dir(host, data)); + data->host_cookie = 0; + } +} + +static void omap_hsmmc_pre_req(struct mmc_host *mmc, struct mmc_request *mrq, + bool is_first_req) +{ + struct omap_hsmmc_host *host = mmc_priv(mmc); + + if (mrq->data->host_cookie) { + mrq->data->host_cookie = 0; + return ; + } + + if (host->use_dma) + if (omap_hsmmc_pre_dma_transfer(host, mrq->data, + &host->next_data)) + mrq->data->host_cookie = 0; +} + /* * Request function. for read/write operation */ @@ -1926,6 +2002,8 @@ static int omap_hsmmc_disable_fclk(struct mmc_host *mmc, int lazy) static const struct mmc_host_ops omap_hsmmc_ops = { .enable = omap_hsmmc_enable_fclk, .disable = omap_hsmmc_disable_fclk, + .post_req = omap_hsmmc_post_req, + .pre_req = omap_hsmmc_pre_req, .request = omap_hsmmc_request, .set_ios = omap_hsmmc_set_ios, .get_cd = omap_hsmmc_get_cd, @@ -2075,6 +2153,7 @@ static int __init omap_hsmmc_probe(struct platform_device *pdev) host->mapbase = res->start; host->base = ioremap(host->mapbase, SZ_4K); host->power_mode = MMC_POWER_OFF; + host->next_data.cookie = 1;
platform_set_drvdata(pdev, host); INIT_WORK(&host->mmc_carddetect_work, omap_hsmmc_detect);
pre_req() runs dma_map_sg() and prepares the dma descriptor for the next mmc data transfer. post_req() runs dma_unmap_sg(). If pre_req() is not called before mmci_request(), mmci_request() will prepare the cache and dma just as it did before. It is optional to use pre_req() and post_req() for mmci.
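The descriptor preparation follows the usual dmaengine prepare-now/submit-later split, roughly (abbreviated sketch; channel configuration and error paths are omitted, and the complete version is mmci_dma_prep_data() in the diff below):

	nr_sg = dma_map_sg(device->dev, data->sg, data->sg_len, conf.direction);
	desc = device->device_prep_slave_sg(chan, data->sg, nr_sg,
					    conf.direction, DMA_CTRL_ACK);
	/* pre_req() stashes chan/desc in host->next_data; later, when
	 * the request is actually issued: */
	dmaengine_submit(desc);
	dma_async_issue_pending(chan);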
Signed-off-by: Per Forlin <per.forlin@linaro.org>
---
 drivers/mmc/host/mmci.c |  146 ++++++++++++++++++++++++++++++++++++++++++----
 drivers/mmc/host/mmci.h |    8 +++
 2 files changed, 141 insertions(+), 13 deletions(-)
diff --git a/drivers/mmc/host/mmci.c b/drivers/mmc/host/mmci.c index b4a7e4f..985c77d 100644 --- a/drivers/mmc/host/mmci.c +++ b/drivers/mmc/host/mmci.c @@ -320,7 +320,8 @@ static void mmci_dma_unmap(struct mmci_host *host, struct mmc_data *data) dir = DMA_FROM_DEVICE; }
- dma_unmap_sg(chan->device->dev, data->sg, data->sg_len, dir); + if (!data->host_cookie) + dma_unmap_sg(chan->device->dev, data->sg, data->sg_len, dir);
/* * Use of DMA with scatter-gather is impossible. @@ -338,7 +339,8 @@ static void mmci_dma_data_error(struct mmci_host *host) dmaengine_terminate_all(host->dma_current); }
-static int mmci_dma_start_data(struct mmci_host *host, unsigned int datactrl) +static int mmci_dma_prep_data(struct mmci_host *host, struct mmc_data *data, + struct mmci_host_next *next) { struct variant_data *variant = host->variant; struct dma_slave_config conf = { @@ -349,13 +351,20 @@ static int mmci_dma_start_data(struct mmci_host *host, unsigned int datactrl) .src_maxburst = variant->fifohalfsize >> 2, /* # of words */ .dst_maxburst = variant->fifohalfsize >> 2, /* # of words */ }; - struct mmc_data *data = host->data; struct dma_chan *chan; struct dma_device *device; struct dma_async_tx_descriptor *desc; int nr_sg;
- host->dma_current = NULL; + /* Check if next job is already prepared */ + if (data->host_cookie && !next && + host->dma_current && host->dma_desc_current) + return 0; + + if (!next) { + host->dma_current = NULL; + host->dma_desc_current = NULL; + }
if (data->flags & MMC_DATA_READ) { conf.direction = DMA_FROM_DEVICE; @@ -370,7 +379,7 @@ static int mmci_dma_start_data(struct mmci_host *host, unsigned int datactrl) return -EINVAL;
/* If less than or equal to the fifo size, don't bother with DMA */ - if (host->size <= variant->fifosize) + if (data->blksz * data->blocks <= variant->fifosize) return -EINVAL;
device = chan->device; @@ -384,14 +393,38 @@ static int mmci_dma_start_data(struct mmci_host *host, unsigned int datactrl) if (!desc) goto unmap_exit;
- /* Okay, go for it. */ - host->dma_current = chan; + if (next) { + next->dma_chan = chan; + next->dma_desc = desc; + } else { + host->dma_current = chan; + host->dma_desc_current = desc; + } + + return 0;
+ unmap_exit: + if (!next) + dmaengine_terminate_all(chan); + dma_unmap_sg(device->dev, data->sg, data->sg_len, conf.direction); + return -ENOMEM; +} + +static int mmci_dma_start_data(struct mmci_host *host, unsigned int datactrl) +{ + int ret; + struct mmc_data *data = host->data; + + ret = mmci_dma_prep_data(host, host->data, NULL); + if (ret) + return ret; + + /* Okay, go for it. */ dev_vdbg(mmc_dev(host->mmc), "Submit MMCI DMA job, sglen %d blksz %04x blks %04x flags %08x\n", data->sg_len, data->blksz, data->blocks, data->flags); - dmaengine_submit(desc); - dma_async_issue_pending(chan); + dmaengine_submit(host->dma_desc_current); + dma_async_issue_pending(host->dma_current);
datactrl |= MCI_DPSM_DMAENABLE;
@@ -406,14 +439,90 @@ static int mmci_dma_start_data(struct mmci_host *host, unsigned int datactrl) writel(readl(host->base + MMCIMASK0) | MCI_DATAENDMASK, host->base + MMCIMASK0); return 0; +}
-unmap_exit: - dmaengine_terminate_all(chan); - dma_unmap_sg(device->dev, data->sg, data->sg_len, conf.direction); - return -ENOMEM; +static void mmci_get_next_data(struct mmci_host *host, struct mmc_data *data) +{ + struct mmci_host_next *next = &host->next_data; + + if (data->host_cookie && data->host_cookie != next->cookie) { + printk(KERN_WARNING "[%s] invalid cookie: data->host_cookie %d" + " host->next_data.cookie %d\n", + __func__, data->host_cookie, host->next_data.cookie); + data->host_cookie = 0; + } + + if (!data->host_cookie) + return; + + host->dma_desc_current = next->dma_desc; + host->dma_current = next->dma_chan; + + next->dma_desc = NULL; + next->dma_chan = NULL; } + +static void mmci_pre_request(struct mmc_host *mmc, struct mmc_request *mrq, + bool is_first_req) +{ + struct mmci_host *host = mmc_priv(mmc); + struct mmc_data *data = mrq->data; + struct mmci_host_next *nd = &host->next_data; + + if (!data) + return; + + if (data->host_cookie) { + data->host_cookie = 0; + return; + } + + /* if config for dma */ + if (((data->flags & MMC_DATA_WRITE) && host->dma_tx_channel) || + ((data->flags & MMC_DATA_READ) && host->dma_rx_channel)) { + if (mmci_dma_prep_data(host, data, nd)) + data->host_cookie = 0; + else + data->host_cookie = ++nd->cookie < 0 ? 1 : nd->cookie; + } +} + +static void mmci_post_request(struct mmc_host *mmc, struct mmc_request *mrq, + int err) +{ + struct mmci_host *host = mmc_priv(mmc); + struct mmc_data *data = mrq->data; + struct dma_chan *chan; + enum dma_data_direction dir; + + if (!data) + return; + + if (data->flags & MMC_DATA_READ) { + dir = DMA_FROM_DEVICE; + chan = host->dma_rx_channel; + } else { + dir = DMA_TO_DEVICE; + chan = host->dma_tx_channel; + } + + + /* if config for dma */ + if (chan) { + if (err) + dmaengine_terminate_all(chan); + if (err || data->host_cookie) + dma_unmap_sg(mmc_dev(host->mmc), data->sg, + data->sg_len, dir); + mrq->data->host_cookie = 0; + } +} + #else /* Blank functions if the DMA engine is not available */ +static void mmci_get_next_data(struct mmci_host *host, struct mmc_data *data) +{ +} static inline void mmci_dma_setup(struct mmci_host *host) { } @@ -434,6 +543,10 @@ static inline int mmci_dma_start_data(struct mmci_host *host, unsigned int datac { return -ENOSYS; } + +#define mmci_pre_request NULL +#define mmci_post_request NULL + #endif
static void mmci_start_data(struct mmci_host *host, struct mmc_data *data) @@ -852,6 +965,9 @@ static void mmci_request(struct mmc_host *mmc, struct mmc_request *mrq)
host->mrq = mrq;
+ if (mrq->data) + mmci_get_next_data(host, mrq->data); + if (mrq->data && mrq->data->flags & MMC_DATA_READ) mmci_start_data(host, mrq->data);
@@ -966,6 +1082,8 @@ static irqreturn_t mmci_cd_irq(int irq, void *dev_id)
static const struct mmc_host_ops mmci_ops = { .request = mmci_request, + .pre_req = mmci_pre_request, + .post_req = mmci_post_request, .set_ios = mmci_set_ios, .get_ro = mmci_get_ro, .get_cd = mmci_get_cd, @@ -1003,6 +1121,8 @@ static int __devinit mmci_probe(struct amba_device *dev, host->gpio_cd = -ENOSYS; host->gpio_cd_irq = -1;
+ host->next_data.cookie = 1; + host->hw_designer = amba_manf(dev); host->hw_revision = amba_rev(dev); dev_dbg(mmc_dev(mmc), "designer ID = 0x%02x\n", host->hw_designer); diff --git a/drivers/mmc/host/mmci.h b/drivers/mmc/host/mmci.h index ec9a7bc6..e21d850 100644 --- a/drivers/mmc/host/mmci.h +++ b/drivers/mmc/host/mmci.h @@ -150,6 +150,12 @@ struct clk; struct variant_data; struct dma_chan;
+struct mmci_host_next { + struct dma_async_tx_descriptor *dma_desc; + struct dma_chan *dma_chan; + s32 cookie; +}; + struct mmci_host { phys_addr_t phybase; void __iomem *base; @@ -187,6 +193,8 @@ struct mmci_host { struct dma_chan *dma_current; struct dma_chan *dma_rx_channel; struct dma_chan *dma_tx_channel; + struct dma_async_tx_descriptor *dma_desc_current; + struct mmci_host_next next_data;
#define dma_inprogress(host) ((host)->dma_current) #else
Add a debugfs file "testlist" to print all available tests
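With this in place the available tests can be listed from user space, for example (the debugfs path is illustrative and depends on the card):

	# cat /sys/kernel/debug/mmc0/mmc0:0001/testlist
	1:	Basic write (no data verification)
	...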
Signed-off-by: Per Forlin <per.forlin@linaro.org>
---
 drivers/mmc/card/mmc_test.c |   39 ++++++++++++++++++++++++++++++++++++++-
 1 files changed, 38 insertions(+), 1 deletions(-)
diff --git a/drivers/mmc/card/mmc_test.c b/drivers/mmc/card/mmc_test.c index abc1a63..1853ebf 100644 --- a/drivers/mmc/card/mmc_test.c +++ b/drivers/mmc/card/mmc_test.c @@ -2447,6 +2447,32 @@ static const struct file_operations mmc_test_fops_test = { .release = single_release, };
+static int mtf_testlist_show(struct seq_file *sf, void *data) +{ + int i; + + mutex_lock(&mmc_test_lock); + + for (i = 0; i < ARRAY_SIZE(mmc_test_cases); i++) + seq_printf(sf, "%d:\t%s\n", i+1, mmc_test_cases[i].name); + + mutex_unlock(&mmc_test_lock); + + return 0; +} + +static int mtf_testlist_open(struct inode *inode, struct file *file) +{ + return single_open(file, mtf_testlist_show, inode->i_private); +} + +static const struct file_operations mmc_test_fops_testlist = { + .open = mtf_testlist_open, + .read = seq_read, + .llseek = seq_lseek, + .release = single_release, +}; + static void mmc_test_free_file_test(struct mmc_card *card) { struct mmc_test_dbgfs_file *df, *dfs; @@ -2478,7 +2504,18 @@ static int mmc_test_register_file_test(struct mmc_card *card)
if (IS_ERR_OR_NULL(file)) { dev_err(&card->dev, - "Can't create file. Perhaps debugfs is disabled.\n"); + "Can't create test. Perhaps debugfs is disabled.\n"); + ret = -ENODEV; + goto err; + } + + if (card->debugfs_root) + file = debugfs_create_file("testlist", S_IRUGO, + card->debugfs_root, card, &mmc_test_fops_testlist); + + if (IS_ERR_OR_NULL(file)) { + dev_err(&card->dev, + "Can't create testlist. Perhaps debugfs is disabled.\n"); ret = -ENODEV; goto err; }
Add four tests for read and write performance per transfer size, 4k to 4M:
 * Read using blocking mmc request
 * Read using non-blocking mmc request
 * Write using blocking mmc request
 * Write using non-blocking mmc request
The host driver must support pre_req() and post_req() in order to run the non-blocking test cases.
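A test is then run by writing its number (taken from the testlist file added in the previous patch) to the existing "test" debugfs file, along these lines (path and number illustrative):

	# echo <n> > /sys/kernel/debug/mmc0/mmc0:0001/test
	# dmesg | tail		# transfer rates are printed to the kernel log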
Signed-off-by: Per Forlin <per.forlin@linaro.org>
---
 drivers/mmc/card/mmc_test.c |  322 +++++++++++++++++++++++++++++++++++++++++--
 1 files changed, 313 insertions(+), 9 deletions(-)
diff --git a/drivers/mmc/card/mmc_test.c b/drivers/mmc/card/mmc_test.c index 1853ebf..4b9cb5f 100644 --- a/drivers/mmc/card/mmc_test.c +++ b/drivers/mmc/card/mmc_test.c @@ -22,6 +22,7 @@ #include <linux/debugfs.h> #include <linux/uaccess.h> #include <linux/seq_file.h> +#include <linux/random.h>
#define RESULT_OK 0 #define RESULT_FAIL 1 @@ -51,10 +52,12 @@ struct mmc_test_pages { * struct mmc_test_mem - allocated memory. * @arr: array of allocations * @cnt: number of allocations + * @size_min_cmn: lowest common size in array of allocations */ struct mmc_test_mem { struct mmc_test_pages *arr; unsigned int cnt; + unsigned int size_min_cmn; };
/** @@ -148,6 +151,26 @@ struct mmc_test_card { struct mmc_test_general_result *gr; };
+enum mmc_test_prep_media { + MMC_TEST_PREP_NONE = 0, + MMC_TEST_PREP_WRITE_FULL = 1 << 0, + MMC_TEST_PREP_ERASE = 1 << 1, +}; + +struct mmc_test_multiple_rw { + unsigned int *bs; + unsigned int len; + unsigned int size; + bool do_write; + bool do_nonblock_req; + enum mmc_test_prep_media prepare; +}; + +struct mmc_test_async_req { + struct mmc_async_req areq; + struct mmc_test_card *test; +}; + /*******************************************************************/ /* General helper functions */ /*******************************************************************/ @@ -307,6 +330,7 @@ static struct mmc_test_mem *mmc_test_alloc_mem(unsigned long min_sz, unsigned long max_seg_page_cnt = DIV_ROUND_UP(max_seg_sz, PAGE_SIZE); unsigned long page_cnt = 0; unsigned long limit = nr_free_buffer_pages() >> 4; + unsigned int min_cmn = 0; struct mmc_test_mem *mem;
if (max_page_cnt > limit) @@ -350,6 +374,12 @@ static struct mmc_test_mem *mmc_test_alloc_mem(unsigned long min_sz, mem->arr[mem->cnt].page = page; mem->arr[mem->cnt].order = order; mem->cnt += 1; + if (!min_cmn) + min_cmn = PAGE_SIZE << order; + else + min_cmn = min(min_cmn, + (unsigned int) (PAGE_SIZE << order)); + if (max_page_cnt <= (1UL << order)) break; max_page_cnt -= 1UL << order; @@ -360,6 +390,7 @@ static struct mmc_test_mem *mmc_test_alloc_mem(unsigned long min_sz, break; } } + mem->size_min_cmn = min_cmn;
return mem;
@@ -386,7 +417,6 @@ static int mmc_test_map_sg(struct mmc_test_mem *mem, unsigned long sz, do { for (i = 0; i < mem->cnt; i++) { unsigned long len = PAGE_SIZE << mem->arr[i].order; - if (len > sz) len = sz; if (len > max_seg_sz) @@ -666,7 +696,7 @@ static void mmc_test_prepare_broken_mrq(struct mmc_test_card *test, * Checks that a normal transfer didn't have any errors */ static int mmc_test_check_result(struct mmc_test_card *test, - struct mmc_request *mrq) + struct mmc_request *mrq) { int ret;
@@ -690,6 +720,16 @@ static int mmc_test_check_result(struct mmc_test_card *test, return ret; }
+ +static int mmc_test_check_result_async(struct mmc_card *card, + struct mmc_async_req *areq) +{ + struct mmc_test_async_req *test_async = + container_of(areq, struct mmc_test_async_req, areq); + + return mmc_test_check_result(test_async->test, areq->mrq); +} + /* * Checks that a "short transfer" behaved as expected */ @@ -725,6 +765,89 @@ static int mmc_test_check_broken_result(struct mmc_test_card *test, }
/* + * Tests nonblock transfer with certain parameters + */ +static void mmc_test_nonblock_reset(struct mmc_request *mrq, + struct mmc_command *cmd, + struct mmc_command *stop, + struct mmc_data *data) +{ + memset(mrq, 0, sizeof(struct mmc_request)); + memset(cmd, 0, sizeof(struct mmc_command)); + memset(data, 0, sizeof(struct mmc_data)); + memset(stop, 0, sizeof(struct mmc_command)); + + mrq->cmd = cmd; + mrq->data = data; + mrq->stop = stop; +} +static int mmc_test_nonblock_transfer(struct mmc_test_card *test, + struct scatterlist *sg, unsigned sg_len, + unsigned dev_addr, unsigned blocks, + unsigned blksz, int write, int count) +{ + struct mmc_request mrq1; + struct mmc_command cmd1; + struct mmc_command stop1; + struct mmc_data data1; + + struct mmc_request mrq2; + struct mmc_command cmd2; + struct mmc_command stop2; + struct mmc_data data2; + + struct mmc_test_async_req test_areq[2]; + struct mmc_async_req *done_areq; + struct mmc_async_req *cur_areq = &test_areq[0].areq; + struct mmc_async_req *other_areq = &test_areq[1].areq; + int i; + int ret; + + test_areq[0].test = test; + test_areq[1].test = test; + + if (!test->card->host->ops->pre_req || + !test->card->host->ops->post_req) + return -RESULT_UNSUP_HOST; + + mmc_test_nonblock_reset(&mrq1, &cmd1, &stop1, &data1); + mmc_test_nonblock_reset(&mrq2, &cmd2, &stop2, &data2); + + cur_areq->mrq = &mrq1; + cur_areq->err_check = mmc_test_check_result_async; + other_areq->mrq = &mrq2; + other_areq->err_check = mmc_test_check_result_async; + + for (i = 0; i < count; i++) { + mmc_test_prepare_mrq(test, cur_areq->mrq, sg, sg_len, dev_addr, + blocks, blksz, write); + done_areq = mmc_start_req(test->card->host, cur_areq, &ret); + + if (ret || (!done_areq && i > 0)) + goto err; + + if (done_areq) { + if (done_areq->mrq == &mrq2) + mmc_test_nonblock_reset(&mrq2, &cmd2, + &stop2, &data2); + else + mmc_test_nonblock_reset(&mrq1, &cmd1, + &stop1, &data1); + } + done_areq = cur_areq; + cur_areq = other_areq; + other_areq = done_areq; + dev_addr += blocks; + } + + done_areq = mmc_start_req(test->card->host, NULL, &ret); + + return ret; +err: + return ret; +} + +/* * Tests a basic transfer with certain parameters */ static int mmc_test_simple_transfer(struct mmc_test_card *test, @@ -1351,14 +1474,17 @@ static int mmc_test_area_transfer(struct mmc_test_card *test, }
/* - * Map and transfer bytes. + * Map and transfer bytes for multiple transfers. */ -static int mmc_test_area_io(struct mmc_test_card *test, unsigned long sz, - unsigned int dev_addr, int write, int max_scatter, - int timed) +static int mmc_test_area_io_seq(struct mmc_test_card *test, unsigned long sz, + unsigned int dev_addr, int write, + int max_scatter, int timed, int count, + bool nonblock) { struct timespec ts1, ts2; - int ret; + int ret = 0; + int i; + struct mmc_test_area *t = &test->area;
/* * In the case of a maximally scattered transfer, the maximum transfer @@ -1382,8 +1508,15 @@ static int mmc_test_area_io(struct mmc_test_card *test, unsigned long sz,
if (timed) getnstimeofday(&ts1); + if (nonblock) + ret = mmc_test_nonblock_transfer(test, t->sg, t->sg_len, + dev_addr, t->blocks, 512, write, count); + else + for (i = 0; i < count && ret == 0; i++) { + ret = mmc_test_area_transfer(test, dev_addr, write); + dev_addr += sz >> 9; + }
- ret = mmc_test_area_transfer(test, dev_addr, write); if (ret) return ret;
@@ -1391,11 +1524,19 @@ static int mmc_test_area_io(struct mmc_test_card *test, unsigned long sz, getnstimeofday(&ts2);
if (timed) - mmc_test_print_rate(test, sz, &ts1, &ts2); + mmc_test_print_avg_rate(test, sz, count, &ts1, &ts2);
return 0; }
+static int mmc_test_area_io(struct mmc_test_card *test, unsigned long sz, + unsigned int dev_addr, int write, int max_scatter, + int timed) +{ + return mmc_test_area_io_seq(test, sz, dev_addr, write, max_scatter, + timed, 1, false); +} + /* * Write the test area entirely. */ @@ -1956,6 +2097,142 @@ static int mmc_test_large_seq_write_perf(struct mmc_test_card *test) return mmc_test_large_seq_perf(test, 1); }
+static int mmc_test_rw_multiple(struct mmc_test_card *test, + struct mmc_test_multiple_rw *tdata, + unsigned int reqsize, unsigned int size) +{ + unsigned int dev_addr; + struct mmc_test_area *t = &test->area; + int ret = 0; + + /* Set up test area */ + if (size > mmc_test_capacity(test->card) / 2 * 512) + size = mmc_test_capacity(test->card) / 2 * 512; + if (reqsize > t->max_tfr) + reqsize = t->max_tfr; + dev_addr = mmc_test_capacity(test->card) / 4; + if ((dev_addr & 0xffff0000)) + dev_addr &= 0xffff0000; /* Round to 64MiB boundary */ + else + dev_addr &= 0xfffff800; /* Round to 1MiB boundary */ + if (!dev_addr) + goto err; + + /* prepare test area */ + if (mmc_can_erase(test->card) && + tdata->prepare & MMC_TEST_PREP_ERASE) { + ret = mmc_erase(test->card, dev_addr, + size / 512, MMC_SECURE_ERASE_ARG); + if (ret) + ret = mmc_erase(test->card, dev_addr, + size / 512, MMC_ERASE_ARG); + if (ret) + goto err; + } + + /* Run test */ + ret = mmc_test_area_io_seq(test, reqsize, dev_addr, + tdata->do_write, 0, 1, size / reqsize, + tdata->do_nonblock_req); + if (ret) + goto err; + + return ret; + err: + printk(KERN_INFO "[%s] error\n", __func__); + return ret; +} + +static int mmc_test_rw_multiple_size(struct mmc_test_card *test, + struct mmc_test_multiple_rw *rw) +{ + int ret = 0; + int i; + + for (i = 0 ; i < rw->len && ret == 0; i++) { + ret = mmc_test_rw_multiple(test, rw, rw->bs[i], rw->size); + if (ret) + break; + } + return ret; +} + +/* + * Multiple blocking write 4k to 4 MB chunks + */ +static int mmc_test_profile_mult_write_blocking_perf(struct mmc_test_card *test) +{ + unsigned int bs[] = {1 << 12, 1 << 13, 1 << 14, 1 << 15, 1 << 16, + 1 << 17, 1 << 18, 1 << 19, 1 << 20, 1 << 22}; + struct mmc_test_multiple_rw test_data = { + .bs = bs, + .size = 128*1024*1024, + .len = ARRAY_SIZE(bs), + .do_write = true, + .do_nonblock_req = false, + .prepare = MMC_TEST_PREP_ERASE, + }; + + return mmc_test_rw_multiple_size(test, &test_data); +}; + +/* + * Multiple none blocking write 4k to 4 MB chunks + */ +static int mmc_test_profile_mult_write_nonblock_perf(struct mmc_test_card *test) +{ + unsigned int bs[] = {1 << 12, 1 << 13, 1 << 14, 1 << 15, 1 << 16, + 1 << 17, 1 << 18, 1 << 19, 1 << 20, 1 << 22}; + struct mmc_test_multiple_rw test_data = { + .bs = bs, + .size = 128*1024*1024, + .len = ARRAY_SIZE(bs), + .do_write = true, + .do_nonblock_req = true, + .prepare = MMC_TEST_PREP_ERASE, + }; + + return mmc_test_rw_multiple_size(test, &test_data); +} + +/* + * Multiple blocking read 4k to 4 MB chunks + */ +static int mmc_test_profile_mult_read_blocking_perf(struct mmc_test_card *test) +{ + unsigned int bs[] = {1 << 12, 1 << 13, 1 << 14, 1 << 15, 1 << 16, + 1 << 17, 1 << 18, 1 << 19, 1 << 20, 1 << 22}; + struct mmc_test_multiple_rw test_data = { + .bs = bs, + .size = 128*1024*1024, + .len = ARRAY_SIZE(bs), + .do_write = false, + .do_nonblock_req = false, + .prepare = MMC_TEST_PREP_NONE, + }; + + return mmc_test_rw_multiple_size(test, &test_data); +} + +/* + * Multiple none blocking read 4k to 4 MB chunks + */ +static int mmc_test_profile_mult_read_nonblock_perf(struct mmc_test_card *test) +{ + unsigned int bs[] = {1 << 12, 1 << 13, 1 << 14, 1 << 15, 1 << 16, + 1 << 17, 1 << 18, 1 << 19, 1 << 20, 1 << 22}; + struct mmc_test_multiple_rw test_data = { + .bs = bs, + .size = 128*1024*1024, + .len = ARRAY_SIZE(bs), + .do_write = false, + .do_nonblock_req = true, + .prepare = MMC_TEST_PREP_NONE, + }; + + return mmc_test_rw_multiple_size(test, &test_data); +} + static const struct mmc_test_case mmc_test_cases[] 
= { { .name = "Basic write (no data verification)", @@ -2223,6 +2500,33 @@ static const struct mmc_test_case mmc_test_cases[] = { .cleanup = mmc_test_area_cleanup, },
+ { + .name = "Write performance with blocking req 4k to 4MB", + .prepare = mmc_test_area_prepare, + .run = mmc_test_profile_mult_write_blocking_perf, + .cleanup = mmc_test_area_cleanup, + }, + + { + .name = "Write performance with none blocking req 4k to 4MB", + .prepare = mmc_test_area_prepare, + .run = mmc_test_profile_mult_write_nonblock_perf, + .cleanup = mmc_test_area_cleanup, + }, + + { + .name = "Read performance with blocking req 4k to 4MB", + .prepare = mmc_test_area_prepare, + .run = mmc_test_profile_mult_read_blocking_perf, + .cleanup = mmc_test_area_cleanup, + }, + + { + .name = "Read performance with none blocking req 4k to 4MB", + .prepare = mmc_test_area_prepare, + .run = mmc_test_profile_mult_read_nonblock_perf, + .cleanup = mmc_test_area_cleanup, + }, };
static DEFINE_MUTEX(mmc_test_lock);
The way the request data is organized in the mmc queue struct, it only allows processing of one request at a time. This patch adds a new struct to hold mmc queue request data such as the sg list, request, blk request and bounce buffers, and updates all functions depending on the mmc queue struct. This lays the groundwork for using multiple active requests on one mmc queue.
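With the per-request data in its own struct, a later patch in this series can add a second mmc_queue_req and simply flip between the two, along the lines of (sketch only; the mqrq_prev name anticipates the "add a second mmc queue request member" patch):

	struct mmc_queue_req *tmp;

	tmp           = mq->mqrq_cur;
	mq->mqrq_cur  = mq->mqrq_prev;
	mq->mqrq_prev = tmp;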
Signed-off-by: Per Forlin <per.forlin@linaro.org>
---
 drivers/mmc/card/block.c |  106 ++++++++++++++++++--------------------
 drivers/mmc/card/queue.c |  129 ++++++++++++++++++++++++----------------------
 drivers/mmc/card/queue.h |   30 ++++++++---
 3 files changed, 139 insertions(+), 126 deletions(-)
diff --git a/drivers/mmc/card/block.c b/drivers/mmc/card/block.c index 61d233a..c45c436 100644 --- a/drivers/mmc/card/block.c +++ b/drivers/mmc/card/block.c @@ -165,13 +165,6 @@ static const struct block_device_operations mmc_bdops = { .owner = THIS_MODULE, };
-struct mmc_blk_request { - struct mmc_request mrq; - struct mmc_command cmd; - struct mmc_command stop; - struct mmc_data data; -}; - static u32 mmc_sd_num_wr_blocks(struct mmc_card *card) { int err; @@ -335,7 +328,7 @@ static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *req) { struct mmc_blk_data *md = mq->data; struct mmc_card *card = md->queue.card; - struct mmc_blk_request brq; + struct mmc_blk_request *brq = &mq->mqrq_cur->brq; int ret = 1, disable_multi = 0;
mmc_claim_host(card->host); @@ -344,72 +337,72 @@ static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *req) struct mmc_command cmd; u32 readcmd, writecmd, status = 0;
- memset(&brq, 0, sizeof(struct mmc_blk_request)); - brq.mrq.cmd = &brq.cmd; - brq.mrq.data = &brq.data; + memset(brq, 0, sizeof(struct mmc_blk_request)); + brq->mrq.cmd = &brq->cmd; + brq->mrq.data = &brq->data;
- brq.cmd.arg = blk_rq_pos(req); + brq->cmd.arg = blk_rq_pos(req); if (!mmc_card_blockaddr(card)) - brq.cmd.arg <<= 9; - brq.cmd.flags = MMC_RSP_SPI_R1 | MMC_RSP_R1 | MMC_CMD_ADTC; - brq.data.blksz = 512; - brq.stop.opcode = MMC_STOP_TRANSMISSION; - brq.stop.arg = 0; - brq.stop.flags = MMC_RSP_SPI_R1B | MMC_RSP_R1B | MMC_CMD_AC; - brq.data.blocks = blk_rq_sectors(req); + brq->cmd.arg <<= 9; + brq->cmd.flags = MMC_RSP_SPI_R1 | MMC_RSP_R1 | MMC_CMD_ADTC; + brq->data.blksz = 512; + brq->stop.opcode = MMC_STOP_TRANSMISSION; + brq->stop.arg = 0; + brq->stop.flags = MMC_RSP_SPI_R1B | MMC_RSP_R1B | MMC_CMD_AC; + brq->data.blocks = blk_rq_sectors(req);
/* * The block layer doesn't support all sector count * restrictions, so we need to be prepared for too big * requests. */ - if (brq.data.blocks > card->host->max_blk_count) - brq.data.blocks = card->host->max_blk_count; + if (brq->data.blocks > card->host->max_blk_count) + brq->data.blocks = card->host->max_blk_count;
/* * After a read error, we redo the request one sector at a time * in order to accurately determine which sectors can be read * successfully. */ - if (disable_multi && brq.data.blocks > 1) - brq.data.blocks = 1; + if (disable_multi && brq->data.blocks > 1) + brq->data.blocks = 1;
- if (brq.data.blocks > 1) { + if (brq->data.blocks > 1) { /* SPI multiblock writes terminate using a special * token, not a STOP_TRANSMISSION request. */ if (!mmc_host_is_spi(card->host) || rq_data_dir(req) == READ) - brq.mrq.stop = &brq.stop; + brq->mrq.stop = &brq->stop; readcmd = MMC_READ_MULTIPLE_BLOCK; writecmd = MMC_WRITE_MULTIPLE_BLOCK; } else { - brq.mrq.stop = NULL; + brq->mrq.stop = NULL; readcmd = MMC_READ_SINGLE_BLOCK; writecmd = MMC_WRITE_BLOCK; } if (rq_data_dir(req) == READ) { - brq.cmd.opcode = readcmd; - brq.data.flags |= MMC_DATA_READ; + brq->cmd.opcode = readcmd; + brq->data.flags |= MMC_DATA_READ; } else { - brq.cmd.opcode = writecmd; - brq.data.flags |= MMC_DATA_WRITE; + brq->cmd.opcode = writecmd; + brq->data.flags |= MMC_DATA_WRITE; }
- mmc_set_data_timeout(&brq.data, card); + mmc_set_data_timeout(&brq->data, card);
- brq.data.sg = mq->sg; - brq.data.sg_len = mmc_queue_map_sg(mq); + brq->data.sg = mq->mqrq_cur->sg; + brq->data.sg_len = mmc_queue_map_sg(mq, mq->mqrq_cur);
/* * Adjust the sg list so it is the same size as the * request. */ - if (brq.data.blocks != blk_rq_sectors(req)) { - int i, data_size = brq.data.blocks << 9; + if (brq->data.blocks != blk_rq_sectors(req)) { + int i, data_size = brq->data.blocks << 9; struct scatterlist *sg;
- for_each_sg(brq.data.sg, sg, brq.data.sg_len, i) { + for_each_sg(brq->data.sg, sg, brq->data.sg_len, i) { data_size -= sg->length; if (data_size <= 0) { sg->length += data_size; @@ -417,22 +410,22 @@ static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *req) break; } } - brq.data.sg_len = i; + brq->data.sg_len = i; }
- mmc_queue_bounce_pre(mq); + mmc_queue_bounce_pre(mq->mqrq_cur);
- mmc_wait_for_req(card->host, &brq.mrq); + mmc_wait_for_req(card->host, &brq->mrq);
- mmc_queue_bounce_post(mq); + mmc_queue_bounce_post(mq->mqrq_cur);
/* * Check for errors here, but don't jump to cmd_err * until later as we need to wait for the card to leave * programming mode even when things go wrong. */ - if (brq.cmd.error || brq.data.error || brq.stop.error) { - if (brq.data.blocks > 1 && rq_data_dir(req) == READ) { + if (brq->cmd.error || brq->data.error || brq->stop.error) { + if (brq->data.blocks > 1 && rq_data_dir(req) == READ) { /* Redo read one sector at a time */ printk(KERN_WARNING "%s: retrying using single " "block read\n", req->rq_disk->disk_name); @@ -442,29 +435,29 @@ static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *req) status = get_card_status(card, req); }
- if (brq.cmd.error) { + if (brq->cmd.error) { printk(KERN_ERR "%s: error %d sending read/write " "command, response %#x, card status %#x\n", - req->rq_disk->disk_name, brq.cmd.error, - brq.cmd.resp[0], status); + req->rq_disk->disk_name, brq->cmd.error, + brq->cmd.resp[0], status); }
- if (brq.data.error) { - if (brq.data.error == -ETIMEDOUT && brq.mrq.stop) + if (brq->data.error) { + if (brq->data.error == -ETIMEDOUT && brq->mrq.stop) /* 'Stop' response contains card status */ - status = brq.mrq.stop->resp[0]; + status = brq->mrq.stop->resp[0]; printk(KERN_ERR "%s: error %d transferring data," " sector %u, nr %u, card status %#x\n", - req->rq_disk->disk_name, brq.data.error, + req->rq_disk->disk_name, brq->data.error, (unsigned)blk_rq_pos(req), (unsigned)blk_rq_sectors(req), status); }
- if (brq.stop.error) { + if (brq->stop.error) { printk(KERN_ERR "%s: error %d sending stop command, " "response %#x, card status %#x\n", - req->rq_disk->disk_name, brq.stop.error, - brq.stop.resp[0], status); + req->rq_disk->disk_name, brq->stop.error, + brq->stop.resp[0], status); }
if (!mmc_host_is_spi(card->host) && rq_data_dir(req) != READ) { @@ -497,7 +490,7 @@ static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *req) #endif }
- if (brq.cmd.error || brq.stop.error || brq.data.error) { + if (brq->cmd.error || brq->stop.error || brq->data.error) { if (rq_data_dir(req) == READ) { /* * After an error, we redo I/O one sector at a @@ -505,7 +498,8 @@ static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *req) * read a single sector. */ spin_lock_irq(&md->lock); - ret = __blk_end_request(req, -EIO, brq.data.blksz); + ret = __blk_end_request(req, -EIO, + brq->data.blksz); spin_unlock_irq(&md->lock); continue; } @@ -516,7 +510,7 @@ static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *req) * A block was successfully transferred. */ spin_lock_irq(&md->lock); - ret = __blk_end_request(req, 0, brq.data.bytes_xfered); + ret = __blk_end_request(req, 0, brq->data.bytes_xfered); spin_unlock_irq(&md->lock); } while (ret);
@@ -544,7 +538,7 @@ static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *req) } } else { spin_lock_irq(&md->lock); - ret = __blk_end_request(req, 0, brq.data.bytes_xfered); + ret = __blk_end_request(req, 0, brq->data.bytes_xfered); spin_unlock_irq(&md->lock); }
diff --git a/drivers/mmc/card/queue.c b/drivers/mmc/card/queue.c index 2ae7275..40e18b5 100644 --- a/drivers/mmc/card/queue.c +++ b/drivers/mmc/card/queue.c @@ -56,7 +56,7 @@ static int mmc_queue_thread(void *d) spin_lock_irq(q->queue_lock); set_current_state(TASK_INTERRUPTIBLE); req = blk_fetch_request(q); - mq->req = req; + mq->mqrq_cur->req = req; spin_unlock_irq(q->queue_lock);
if (!req) { @@ -97,10 +97,25 @@ static void mmc_request(struct request_queue *q) return; }
- if (!mq->req) + if (!mq->mqrq_cur->req) wake_up_process(mq->thread); }
+struct scatterlist *mmc_alloc_sg(int sg_len, int *err) +{ + struct scatterlist *sg; + + sg = kmalloc(sizeof(struct scatterlist)*sg_len, GFP_KERNEL); + if (!sg) + *err = -ENOMEM; + else { + *err = 0; + sg_init_table(sg, sg_len); + } + + return sg; +} + /** * mmc_init_queue - initialise a queue structure. * @mq: mmc queue @@ -114,6 +129,7 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card, spinlock_t *lock struct mmc_host *host = card->host; u64 limit = BLK_BOUNCE_HIGH; int ret; + struct mmc_queue_req *mqrq_cur = &mq->mqrq[0];
if (mmc_dev(host)->dma_mask && *mmc_dev(host)->dma_mask) limit = *mmc_dev(host)->dma_mask; @@ -123,8 +139,9 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card, spinlock_t *lock if (!mq->queue) return -ENOMEM;
+ memset(&mq->mqrq_cur, 0, sizeof(mq->mqrq_cur)); + mq->mqrq_cur = mqrq_cur; mq->queue->queuedata = mq; - mq->req = NULL;
blk_queue_prep_rq(mq->queue, mmc_prep_request); queue_flag_set_unlocked(QUEUE_FLAG_NONROT, mq->queue); @@ -158,53 +175,44 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card, spinlock_t *lock bouncesz = host->max_blk_count * 512;
if (bouncesz > 512) { - mq->bounce_buf = kmalloc(bouncesz, GFP_KERNEL); - if (!mq->bounce_buf) { + mqrq_cur->bounce_buf = kmalloc(bouncesz, GFP_KERNEL); + if (!mqrq_cur->bounce_buf) { printk(KERN_WARNING "%s: unable to " - "allocate bounce buffer\n", + "allocate bounce cur buffer\n", mmc_card_name(card)); } }
- if (mq->bounce_buf) { + if (mqrq_cur->bounce_buf) { blk_queue_bounce_limit(mq->queue, BLK_BOUNCE_ANY); blk_queue_max_hw_sectors(mq->queue, bouncesz / 512); blk_queue_max_segments(mq->queue, bouncesz / 512); blk_queue_max_segment_size(mq->queue, bouncesz);
- mq->sg = kmalloc(sizeof(struct scatterlist), - GFP_KERNEL); - if (!mq->sg) { - ret = -ENOMEM; + mqrq_cur->sg = mmc_alloc_sg(1, &ret); + if (ret) goto cleanup_queue; - } - sg_init_table(mq->sg, 1);
- mq->bounce_sg = kmalloc(sizeof(struct scatterlist) * - bouncesz / 512, GFP_KERNEL); - if (!mq->bounce_sg) { - ret = -ENOMEM; + mqrq_cur->bounce_sg = + mmc_alloc_sg(bouncesz / 512, &ret); + if (ret) goto cleanup_queue; - } - sg_init_table(mq->bounce_sg, bouncesz / 512); + } } #endif
- if (!mq->bounce_buf) { + if (!mqrq_cur->bounce_buf) { blk_queue_bounce_limit(mq->queue, limit); blk_queue_max_hw_sectors(mq->queue, min(host->max_blk_count, host->max_req_size / 512)); blk_queue_max_segments(mq->queue, host->max_segs); blk_queue_max_segment_size(mq->queue, host->max_seg_size);
- mq->sg = kmalloc(sizeof(struct scatterlist) * - host->max_segs, GFP_KERNEL); - if (!mq->sg) { - ret = -ENOMEM; + mqrq_cur->sg = mmc_alloc_sg(host->max_segs, &ret); + if (ret) goto cleanup_queue; - } - sg_init_table(mq->sg, host->max_segs); + }
sema_init(&mq->thread_sem, 1); @@ -219,16 +227,15 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card, spinlock_t *lock
return 0; free_bounce_sg: - if (mq->bounce_sg) - kfree(mq->bounce_sg); - mq->bounce_sg = NULL; + kfree(mqrq_cur->bounce_sg); + mqrq_cur->bounce_sg = NULL; + cleanup_queue: - if (mq->sg) - kfree(mq->sg); - mq->sg = NULL; - if (mq->bounce_buf) - kfree(mq->bounce_buf); - mq->bounce_buf = NULL; + kfree(mqrq_cur->sg); + mqrq_cur->sg = NULL; + kfree(mqrq_cur->bounce_buf); + mqrq_cur->bounce_buf = NULL; + blk_cleanup_queue(mq->queue); return ret; } @@ -237,6 +244,7 @@ void mmc_cleanup_queue(struct mmc_queue *mq) { struct request_queue *q = mq->queue; unsigned long flags; + struct mmc_queue_req *mqrq_cur = mq->mqrq_cur;
/* Make sure the queue isn't suspended, as that will deadlock */ mmc_queue_resume(mq); @@ -250,16 +258,14 @@ void mmc_cleanup_queue(struct mmc_queue *mq) blk_start_queue(q); spin_unlock_irqrestore(q->queue_lock, flags);
- if (mq->bounce_sg) - kfree(mq->bounce_sg); - mq->bounce_sg = NULL; + kfree(mqrq_cur->bounce_sg); + mqrq_cur->bounce_sg = NULL;
- kfree(mq->sg); - mq->sg = NULL; + kfree(mqrq_cur->sg); + mqrq_cur->sg = NULL;
- if (mq->bounce_buf) - kfree(mq->bounce_buf); - mq->bounce_buf = NULL; + kfree(mqrq_cur->bounce_buf); + mqrq_cur->bounce_buf = NULL;
mq->card = NULL; } @@ -312,27 +318,27 @@ void mmc_queue_resume(struct mmc_queue *mq) /* * Prepare the sg list(s) to be handed of to the host driver */ -unsigned int mmc_queue_map_sg(struct mmc_queue *mq) +unsigned int mmc_queue_map_sg(struct mmc_queue *mq, struct mmc_queue_req *mqrq) { unsigned int sg_len; size_t buflen; struct scatterlist *sg; int i;
- if (!mq->bounce_buf) - return blk_rq_map_sg(mq->queue, mq->req, mq->sg); + if (!mqrq->bounce_buf) + return blk_rq_map_sg(mq->queue, mqrq->req, mqrq->sg);
- BUG_ON(!mq->bounce_sg); + BUG_ON(!mqrq->bounce_sg);
- sg_len = blk_rq_map_sg(mq->queue, mq->req, mq->bounce_sg); + sg_len = blk_rq_map_sg(mq->queue, mqrq->req, mqrq->bounce_sg);
- mq->bounce_sg_len = sg_len; + mqrq->bounce_sg_len = sg_len;
buflen = 0; - for_each_sg(mq->bounce_sg, sg, sg_len, i) + for_each_sg(mqrq->bounce_sg, sg, sg_len, i) buflen += sg->length;
- sg_init_one(mq->sg, mq->bounce_buf, buflen); + sg_init_one(mqrq->sg, mqrq->bounce_buf, buflen);
return 1; } @@ -341,19 +347,19 @@ unsigned int mmc_queue_map_sg(struct mmc_queue *mq) * If writing, bounce the data to the buffer before the request * is sent to the host driver */ -void mmc_queue_bounce_pre(struct mmc_queue *mq) +void mmc_queue_bounce_pre(struct mmc_queue_req *mqrq) { unsigned long flags;
- if (!mq->bounce_buf) + if (!mqrq->bounce_buf) return;
- if (rq_data_dir(mq->req) != WRITE) + if (rq_data_dir(mqrq->req) != WRITE) return;
local_irq_save(flags); - sg_copy_to_buffer(mq->bounce_sg, mq->bounce_sg_len, - mq->bounce_buf, mq->sg[0].length); + sg_copy_to_buffer(mqrq->bounce_sg, mqrq->bounce_sg_len, + mqrq->bounce_buf, mqrq->sg[0].length); local_irq_restore(flags); }
@@ -361,19 +367,18 @@ void mmc_queue_bounce_pre(struct mmc_queue *mq) * If reading, bounce the data from the buffer after the request * has been handled by the host driver */ -void mmc_queue_bounce_post(struct mmc_queue *mq) +void mmc_queue_bounce_post(struct mmc_queue_req *mqrq) { unsigned long flags;
- if (!mq->bounce_buf) + if (!mqrq->bounce_buf) return;
- if (rq_data_dir(mq->req) != READ) + if (rq_data_dir(mqrq->req) != READ) return;
local_irq_save(flags); - sg_copy_from_buffer(mq->bounce_sg, mq->bounce_sg_len, - mq->bounce_buf, mq->sg[0].length); + sg_copy_from_buffer(mqrq->bounce_sg, mqrq->bounce_sg_len, + mqrq->bounce_buf, mqrq->sg[0].length); local_irq_restore(flags); } - diff --git a/drivers/mmc/card/queue.h b/drivers/mmc/card/queue.h index 64e66e0..468044f 100644 --- a/drivers/mmc/card/queue.h +++ b/drivers/mmc/card/queue.h @@ -4,19 +4,32 @@ struct request; struct task_struct;
+struct mmc_blk_request { + struct mmc_request mrq; + struct mmc_command cmd; + struct mmc_command stop; + struct mmc_data data; +}; + +struct mmc_queue_req { + struct request *req; + struct mmc_blk_request brq; + struct scatterlist *sg; + char *bounce_buf; + struct scatterlist *bounce_sg; + unsigned int bounce_sg_len; +}; + struct mmc_queue { struct mmc_card *card; struct task_struct *thread; struct semaphore thread_sem; unsigned int flags; - struct request *req; int (*issue_fn)(struct mmc_queue *, struct request *); void *data; struct request_queue *queue; - struct scatterlist *sg; - char *bounce_buf; - struct scatterlist *bounce_sg; - unsigned int bounce_sg_len; + struct mmc_queue_req mqrq[1]; + struct mmc_queue_req *mqrq_cur; };
extern int mmc_init_queue(struct mmc_queue *, struct mmc_card *, spinlock_t *); @@ -24,8 +37,9 @@ extern void mmc_cleanup_queue(struct mmc_queue *); extern void mmc_queue_suspend(struct mmc_queue *); extern void mmc_queue_resume(struct mmc_queue *);
-extern unsigned int mmc_queue_map_sg(struct mmc_queue *); -extern void mmc_queue_bounce_pre(struct mmc_queue *); -extern void mmc_queue_bounce_post(struct mmc_queue *); +extern unsigned int mmc_queue_map_sg(struct mmc_queue *, + struct mmc_queue_req *); +extern void mmc_queue_bounce_pre(struct mmc_queue_req *); +extern void mmc_queue_bounce_post(struct mmc_queue_req *);
#endif
Break out code from mmc_blk_issue_rw_rq() to create a block-request prepare function. This doesn't change any functionality. It helps when handling more than one active block request.
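After the breakout, the body of the issue loop reduces to roughly the following (condensed from the diff below; error handling and __blk_end_request() omitted):

	do {
		mmc_blk_rw_rq_prep(mq->mqrq_cur, card, disable_multi, mq);
		mmc_wait_for_req(card->host, &brq->mrq);
		mmc_queue_bounce_post(mq->mqrq_cur);
		/* check errors, then __blk_end_request() ... */
	} while (ret);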
Signed-off-by: Per Forlin <per.forlin@linaro.org>
---
 drivers/mmc/card/block.c |  170 ++++++++++++++++++++++++---------------------
 1 files changed, 91 insertions(+), 79 deletions(-)
diff --git a/drivers/mmc/card/block.c b/drivers/mmc/card/block.c index c45c436..aead2d1 100644 --- a/drivers/mmc/card/block.c +++ b/drivers/mmc/card/block.c @@ -324,97 +324,109 @@ out: return err ? 0 : 1; }
-static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *req) +static void mmc_blk_rw_rq_prep(struct mmc_queue_req *mqrq, + struct mmc_card *card, + int disable_multi, + struct mmc_queue *mq) { - struct mmc_blk_data *md = mq->data; - struct mmc_card *card = md->queue.card; - struct mmc_blk_request *brq = &mq->mqrq_cur->brq; - int ret = 1, disable_multi = 0; + u32 readcmd, writecmd; + struct mmc_blk_request *brq = &mqrq->brq; + struct request *req = mqrq->req;
- mmc_claim_host(card->host); + memset(brq, 0, sizeof(struct mmc_blk_request));
- do { - struct mmc_command cmd; - u32 readcmd, writecmd, status = 0; - - memset(brq, 0, sizeof(struct mmc_blk_request)); - brq->mrq.cmd = &brq->cmd; - brq->mrq.data = &brq->data; - - brq->cmd.arg = blk_rq_pos(req); - if (!mmc_card_blockaddr(card)) - brq->cmd.arg <<= 9; - brq->cmd.flags = MMC_RSP_SPI_R1 | MMC_RSP_R1 | MMC_CMD_ADTC; - brq->data.blksz = 512; - brq->stop.opcode = MMC_STOP_TRANSMISSION; - brq->stop.arg = 0; - brq->stop.flags = MMC_RSP_SPI_R1B | MMC_RSP_R1B | MMC_CMD_AC; - brq->data.blocks = blk_rq_sectors(req); + brq->mrq.cmd = &brq->cmd; + brq->mrq.data = &brq->data;
- /* - * The block layer doesn't support all sector count - * restrictions, so we need to be prepared for too big - * requests. - */ - if (brq->data.blocks > card->host->max_blk_count) - brq->data.blocks = card->host->max_blk_count; + brq->cmd.arg = blk_rq_pos(req); + if (!mmc_card_blockaddr(card)) + brq->cmd.arg <<= 9; + brq->cmd.flags = MMC_RSP_SPI_R1 | MMC_RSP_R1 | MMC_CMD_ADTC; + brq->data.blksz = 512; + brq->stop.opcode = MMC_STOP_TRANSMISSION; + brq->stop.arg = 0; + brq->stop.flags = MMC_RSP_SPI_R1B | MMC_RSP_R1B | MMC_CMD_AC; + brq->data.blocks = blk_rq_sectors(req);
- /* - * After a read error, we redo the request one sector at a time - * in order to accurately determine which sectors can be read - * successfully. + /* + * The block layer doesn't support all sector count + * restrictions, so we need to be prepared for too big + * requests. + */ + if (brq->data.blocks > card->host->max_blk_count) + brq->data.blocks = card->host->max_blk_count; + + /* + * After a read error, we redo the request one sector at a time + * in order to accurately determine which sectors can be read + * successfully. + */ + if (disable_multi && brq->data.blocks > 1) + brq->data.blocks = 1; + + if (brq->data.blocks > 1) { + /* SPI multiblock writes terminate using a special + * token, not a STOP_TRANSMISSION request. */ - if (disable_multi && brq->data.blocks > 1) - brq->data.blocks = 1; - - if (brq->data.blocks > 1) { - /* SPI multiblock writes terminate using a special - * token, not a STOP_TRANSMISSION request. - */ - if (!mmc_host_is_spi(card->host) - || rq_data_dir(req) == READ) - brq->mrq.stop = &brq->stop; - readcmd = MMC_READ_MULTIPLE_BLOCK; - writecmd = MMC_WRITE_MULTIPLE_BLOCK; - } else { - brq->mrq.stop = NULL; - readcmd = MMC_READ_SINGLE_BLOCK; - writecmd = MMC_WRITE_BLOCK; - } - if (rq_data_dir(req) == READ) { - brq->cmd.opcode = readcmd; - brq->data.flags |= MMC_DATA_READ; - } else { - brq->cmd.opcode = writecmd; - brq->data.flags |= MMC_DATA_WRITE; - } + if (!mmc_host_is_spi(card->host) + || rq_data_dir(req) == READ) + brq->mrq.stop = &brq->stop; + readcmd = MMC_READ_MULTIPLE_BLOCK; + writecmd = MMC_WRITE_MULTIPLE_BLOCK; + } else { + brq->mrq.stop = NULL; + readcmd = MMC_READ_SINGLE_BLOCK; + writecmd = MMC_WRITE_BLOCK; + } + if (rq_data_dir(req) == READ) { + brq->cmd.opcode = readcmd; + brq->data.flags |= MMC_DATA_READ; + } else { + brq->cmd.opcode = writecmd; + brq->data.flags |= MMC_DATA_WRITE; + }
- mmc_set_data_timeout(&brq->data, card); + mmc_set_data_timeout(&brq->data, card);
- brq->data.sg = mq->mqrq_cur->sg; - brq->data.sg_len = mmc_queue_map_sg(mq, mq->mqrq_cur); + brq->data.sg = mqrq->sg; + brq->data.sg_len = mmc_queue_map_sg(mq, mqrq);
- /* - * Adjust the sg list so it is the same size as the - * request. - */ - if (brq->data.blocks != blk_rq_sectors(req)) { - int i, data_size = brq->data.blocks << 9; - struct scatterlist *sg; - - for_each_sg(brq->data.sg, sg, brq->data.sg_len, i) { - data_size -= sg->length; - if (data_size <= 0) { - sg->length += data_size; - i++; - break; - } + /* + * Adjust the sg list so it is the same size as the + * request. + */ + if (brq->data.blocks != blk_rq_sectors(req)) { + int i, data_size = brq->data.blocks << 9; + struct scatterlist *sg; + + for_each_sg(brq->data.sg, sg, brq->data.sg_len, i) { + data_size -= sg->length; + if (data_size <= 0) { + sg->length += data_size; + i++; + break; } - brq->data.sg_len = i; } + brq->data.sg_len = i; + }
- mmc_queue_bounce_pre(mq->mqrq_cur); + mmc_queue_bounce_pre(mqrq); +} + +static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *req) +{ + struct mmc_blk_data *md = mq->data; + struct mmc_card *card = md->queue.card; + struct mmc_blk_request *brq = &mq->mqrq_cur->brq; + int ret = 1, disable_multi = 0; + + mmc_claim_host(card->host); + + do { + struct mmc_command cmd; + u32 status = 0;
+ mmc_blk_rw_rq_prep(mq->mqrq_cur, card, disable_multi, mq); mmc_wait_for_req(card->host, &brq->mrq);
mmc_queue_bounce_post(mq->mqrq_cur);
Break out code without functional changes. This simplifies the code and makes way for handling two parallel requests.
Signed-off-by: Per Forlin per.forlin@linaro.org --- drivers/mmc/card/block.c | 226 +++++++++++++++++++++++++++------------------- 1 files changed, 132 insertions(+), 94 deletions(-)
diff --git a/drivers/mmc/card/block.c b/drivers/mmc/card/block.c index aead2d1..1c0b077 100644 --- a/drivers/mmc/card/block.c +++ b/drivers/mmc/card/block.c @@ -79,6 +79,13 @@ struct mmc_blk_data {
static DEFINE_MUTEX(open_lock);
+enum mmc_blk_status { + MMC_BLK_SUCCESS = 0, + MMC_BLK_RETRY, + MMC_BLK_DATA_ERR, + MMC_BLK_CMD_ERR, +}; + module_param(perdev_minors, int, 0444); MODULE_PARM_DESC(perdev_minors, "Minors numbers to allocate per device");
@@ -324,6 +331,102 @@ out: return err ? 0 : 1; }
+static enum mmc_blk_status mmc_blk_err_check(struct mmc_blk_request *brq, + struct request *req, + struct mmc_card *card, + struct mmc_blk_data *md) +{ + struct mmc_command cmd; + u32 status = 0; + enum mmc_blk_status ret = MMC_BLK_SUCCESS; + + /* + * Check for errors here, but don't jump to cmd_err + * until later as we need to wait for the card to leave + * programming mode even when things go wrong. + */ + if (brq->cmd.error || brq->data.error || brq->stop.error) { + if (brq->data.blocks > 1 && rq_data_dir(req) == READ) { + /* Redo read one sector at a time */ + printk(KERN_WARNING "%s: retrying using single " + "block read, brq %p\n", + req->rq_disk->disk_name, brq); + ret = MMC_BLK_RETRY; + goto out; + } + status = get_card_status(card, req); + } + + if (brq->cmd.error) { + printk(KERN_ERR "%s: error %d sending read/write " + "command, response %#x, card status %#x\n", + req->rq_disk->disk_name, brq->cmd.error, + brq->cmd.resp[0], status); + } + + if (brq->data.error) { + if (brq->data.error == -ETIMEDOUT && brq->mrq.stop) + /* 'Stop' response contains card status */ + status = brq->mrq.stop->resp[0]; + printk(KERN_ERR "%s: error %d transferring data," + " sector %u, nr %u, card status %#x\n", + req->rq_disk->disk_name, brq->data.error, + (unsigned)blk_rq_pos(req), + (unsigned)blk_rq_sectors(req), status); + } + + if (brq->stop.error) { + printk(KERN_ERR "%s: error %d sending stop command, " + "response %#x, card status %#x\n", + req->rq_disk->disk_name, brq->stop.error, + brq->stop.resp[0], status); + } + + if (!mmc_host_is_spi(card->host) && rq_data_dir(req) != READ) { + do { + int err; + + cmd.opcode = MMC_SEND_STATUS; + cmd.arg = card->rca << 16; + cmd.flags = MMC_RSP_R1 | MMC_CMD_AC; + err = mmc_wait_for_cmd(card->host, &cmd, 5); + if (err) { + printk(KERN_ERR "%s: error %d requesting status\n", + req->rq_disk->disk_name, err); + ret = MMC_BLK_CMD_ERR; + goto out; + } + /* + * Some cards mishandle the status bits, + * so make sure to check both the busy + * indication and the card state. + */ + } while (!(cmd.resp[0] & R1_READY_FOR_DATA) || + (R1_CURRENT_STATE(cmd.resp[0]) == 7)); + +#if 0 + if (cmd.resp[0] & ~0x00000900) + printk(KERN_ERR "%s: status = %08x\n", + req->rq_disk->disk_name, cmd.resp[0]); + if (mmc_decode_status(cmd.resp)) { + ret = MMC_BLK_CMD_ERR; + goto out; + } + +#endif + } + + if (brq->cmd.error || brq->stop.error || brq->data.error) { + if (rq_data_dir(req) == READ) + ret = MMC_BLK_DATA_ERR; + else + ret = MMC_BLK_CMD_ERR; + } + out: + return ret; + +} + static void mmc_blk_rw_rq_prep(struct mmc_queue_req *mqrq, struct mmc_card *card, int disable_multi, @@ -419,111 +522,46 @@ static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *req) struct mmc_card *card = md->queue.card; struct mmc_blk_request *brq = &mq->mqrq_cur->brq; int ret = 1, disable_multi = 0; + enum mmc_blk_status status;
mmc_claim_host(card->host);
do { - struct mmc_command cmd; - u32 status = 0; - mmc_blk_rw_rq_prep(mq->mqrq_cur, card, disable_multi, mq); mmc_wait_for_req(card->host, &brq->mrq);
mmc_queue_bounce_post(mq->mqrq_cur); + status = mmc_blk_err_check(brq, req, card, md);
- /* - * Check for errors here, but don't jump to cmd_err - * until later as we need to wait for the card to leave - * programming mode even when things go wrong. - */ - if (brq->cmd.error || brq->data.error || brq->stop.error) { - if (brq->data.blocks > 1 && rq_data_dir(req) == READ) { - /* Redo read one sector at a time */ - printk(KERN_WARNING "%s: retrying using single " - "block read\n", req->rq_disk->disk_name); - disable_multi = 1; - continue; - } - status = get_card_status(card, req); - } - - if (brq->cmd.error) { - printk(KERN_ERR "%s: error %d sending read/write " - "command, response %#x, card status %#x\n", - req->rq_disk->disk_name, brq->cmd.error, - brq->cmd.resp[0], status); - } - - if (brq->data.error) { - if (brq->data.error == -ETIMEDOUT && brq->mrq.stop) - /* 'Stop' response contains card status */ - status = brq->mrq.stop->resp[0]; - printk(KERN_ERR "%s: error %d transferring data," - " sector %u, nr %u, card status %#x\n", - req->rq_disk->disk_name, brq->data.error, - (unsigned)blk_rq_pos(req), - (unsigned)blk_rq_sectors(req), status); - } - - if (brq->stop.error) { - printk(KERN_ERR "%s: error %d sending stop command, " - "response %#x, card status %#x\n", - req->rq_disk->disk_name, brq->stop.error, - brq->stop.resp[0], status); - } - - if (!mmc_host_is_spi(card->host) && rq_data_dir(req) != READ) { - do { - int err; - - cmd.opcode = MMC_SEND_STATUS; - cmd.arg = card->rca << 16; - cmd.flags = MMC_RSP_R1 | MMC_CMD_AC; - err = mmc_wait_for_cmd(card->host, &cmd, 5); - if (err) { - printk(KERN_ERR "%s: error %d requesting status\n", - req->rq_disk->disk_name, err); - goto cmd_err; - } - /* - * Some cards mishandle the status bits, - * so make sure to check both the busy - * indication and the card state. - */ - } while (!(cmd.resp[0] & R1_READY_FOR_DATA) || - (R1_CURRENT_STATE(cmd.resp[0]) == 7)); - -#if 0 - if (cmd.resp[0] & ~0x00000900) - printk(KERN_ERR "%s: status = %08x\n", - req->rq_disk->disk_name, cmd.resp[0]); - if (mmc_decode_status(cmd.resp)) - goto cmd_err; -#endif - } - - if (brq->cmd.error || brq->stop.error || brq->data.error) { - if (rq_data_dir(req) == READ) { - /* - * After an error, we redo I/O one sector at a - * time, so we only reach here after trying to - * read a single sector. - */ - spin_lock_irq(&md->lock); - ret = __blk_end_request(req, -EIO, - brq->data.blksz); - spin_unlock_irq(&md->lock); - continue; - } + switch (status) { + case MMC_BLK_CMD_ERR: goto cmd_err; - } + break; + case MMC_BLK_RETRY: + disable_multi = 1; + ret = 1; + break; + case MMC_BLK_DATA_ERR: + /* + * After an error, we redo I/O one sector at a + * time, so we only reach here after trying to + * read a single sector. + */ + spin_lock_irq(&md->lock); + ret = __blk_end_request(req, -EIO, + brq->data.blksz); + spin_unlock_irq(&md->lock);
- /* - * A block was successfully transferred. - */ - spin_lock_irq(&md->lock); - ret = __blk_end_request(req, 0, brq->data.bytes_xfered); - spin_unlock_irq(&md->lock); + break; + case MMC_BLK_SUCCESS: + /* + * A block was successfully transferred. + */ + spin_lock_irq(&md->lock); + ret = __blk_end_request(req, 0, brq->data.bytes_xfered); + spin_unlock_irq(&md->lock); + break; + } } while (ret);
mmc_release_host(card->host);
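Condensed, and with the md->lock locking elided, the issue loop after this split reads roughly as follows. This is a sketch of the diff above for readability, not a drop-in replacement:

do {
	mmc_blk_rw_rq_prep(mq->mqrq_cur, card, disable_multi, mq);
	mmc_wait_for_req(card->host, &brq->mrq);
	mmc_queue_bounce_post(mq->mqrq_cur);

	status = mmc_blk_err_check(brq, req, card, md);
	switch (status) {
	case MMC_BLK_CMD_ERR:
		goto cmd_err;
	case MMC_BLK_RETRY:
		disable_multi = 1;	/* redo reads one sector at a time */
		ret = 1;
		break;
	case MMC_BLK_DATA_ERR:
		/* fail the single sector that could not be read */
		ret = __blk_end_request(req, -EIO, brq->data.blksz);
		break;
	case MMC_BLK_SUCCESS:
		ret = __blk_end_request(req, 0, brq->data.bytes_xfered);
		break;
	}
} while (ret);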
Add an additional mmc queue request instance to make way for two active block requests. One request may be active while the other request is being prepared.
Signed-off-by: Per Forlin per.forlin@linaro.org --- drivers/mmc/card/queue.c | 44 ++++++++++++++++++++++++++++++++++++++++++-- drivers/mmc/card/queue.h | 3 ++- 2 files changed, 44 insertions(+), 3 deletions(-)
diff --git a/drivers/mmc/card/queue.c b/drivers/mmc/card/queue.c index 40e18b5..eef3510 100644 --- a/drivers/mmc/card/queue.c +++ b/drivers/mmc/card/queue.c @@ -130,6 +130,7 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card, spinlock_t *lock u64 limit = BLK_BOUNCE_HIGH; int ret; struct mmc_queue_req *mqrq_cur = &mq->mqrq[0]; + struct mmc_queue_req *mqrq_prev = &mq->mqrq[1];
if (mmc_dev(host)->dma_mask && *mmc_dev(host)->dma_mask) limit = *mmc_dev(host)->dma_mask; @@ -140,7 +141,9 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card, spinlock_t *lock return -ENOMEM;
memset(&mq->mqrq_cur, 0, sizeof(mq->mqrq_cur)); + memset(&mq->mqrq_prev, 0, sizeof(mq->mqrq_prev)); mq->mqrq_cur = mqrq_cur; + mq->mqrq_prev = mqrq_prev; mq->queue->queuedata = mq;
blk_queue_prep_rq(mq->queue, mmc_prep_request); @@ -181,9 +184,17 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card, spinlock_t *lock "allocate bounce cur buffer\n", mmc_card_name(card)); } + mqrq_prev->bounce_buf = kmalloc(bouncesz, GFP_KERNEL); + if (!mqrq_prev->bounce_buf) { + printk(KERN_WARNING "%s: unable to " + "allocate bounce prev buffer\n", + mmc_card_name(card)); + kfree(mqrq_cur->bounce_buf); + mqrq_cur->bounce_buf = NULL; + } }
- if (mqrq_cur->bounce_buf) { + if (mqrq_cur->bounce_buf && mqrq_prev->bounce_buf) { blk_queue_bounce_limit(mq->queue, BLK_BOUNCE_ANY); blk_queue_max_hw_sectors(mq->queue, bouncesz / 512); blk_queue_max_segments(mq->queue, bouncesz / 512); @@ -198,11 +209,19 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card, spinlock_t *lock if (ret) goto cleanup_queue;
+ mqrq_prev->sg = mmc_alloc_sg(1, &ret); + if (ret) + goto cleanup_queue; + + mqrq_prev->bounce_sg = + mmc_alloc_sg(bouncesz / 512, &ret); + if (ret) + goto cleanup_queue; } } #endif
- if (!mqrq_cur->bounce_buf) { + if (!mqrq_cur->bounce_buf && !mqrq_prev->bounce_buf) { blk_queue_bounce_limit(mq->queue, limit); blk_queue_max_hw_sectors(mq->queue, min(host->max_blk_count, host->max_req_size / 512)); @@ -213,6 +232,10 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card, spinlock_t *lock if (ret) goto cleanup_queue;
+ + mqrq_prev->sg = mmc_alloc_sg(host->max_segs, &ret); + if (ret) + goto cleanup_queue; }
sema_init(&mq->thread_sem, 1); @@ -229,6 +252,8 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card, spinlock_t *lock free_bounce_sg: kfree(mqrq_cur->bounce_sg); mqrq_cur->bounce_sg = NULL; + kfree(mqrq_prev->bounce_sg); + mqrq_prev->bounce_sg = NULL;
cleanup_queue: kfree(mqrq_cur->sg); @@ -236,6 +261,11 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card, spinlock_t *lock kfree(mqrq_cur->bounce_buf); mqrq_cur->bounce_buf = NULL;
+ kfree(mqrq_prev->sg); + mqrq_prev->sg = NULL; + kfree(mqrq_prev->bounce_buf); + mqrq_prev->bounce_buf = NULL; + blk_cleanup_queue(mq->queue); return ret; } @@ -245,6 +275,7 @@ void mmc_cleanup_queue(struct mmc_queue *mq) struct request_queue *q = mq->queue; unsigned long flags; struct mmc_queue_req *mqrq_cur = mq->mqrq_cur; + struct mmc_queue_req *mqrq_prev = mq->mqrq_prev;
/* Make sure the queue isn't suspended, as that will deadlock */ mmc_queue_resume(mq); @@ -267,6 +298,15 @@ void mmc_cleanup_queue(struct mmc_queue *mq) kfree(mqrq_cur->bounce_buf); mqrq_cur->bounce_buf = NULL;
+ kfree(mqrq_prev->bounce_sg); + mqrq_prev->bounce_sg = NULL; + + kfree(mqrq_prev->sg); + mqrq_prev->sg = NULL; + + kfree(mqrq_prev->bounce_buf); + mqrq_prev->bounce_buf = NULL; + mq->card = NULL; } EXPORT_SYMBOL(mmc_cleanup_queue); diff --git a/drivers/mmc/card/queue.h b/drivers/mmc/card/queue.h index 468044f..0e65807 100644 --- a/drivers/mmc/card/queue.h +++ b/drivers/mmc/card/queue.h @@ -28,8 +28,9 @@ struct mmc_queue { int (*issue_fn)(struct mmc_queue *, struct request *); void *data; struct request_queue *queue; - struct mmc_queue_req mqrq[1]; + struct mmc_queue_req mqrq[2]; struct mmc_queue_req *mqrq_cur; + struct mmc_queue_req *mqrq_prev; };
extern int mmc_init_queue(struct mmc_queue *, struct mmc_card *, spinlock_t *);
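The payoff of the second slot shows up later in the series, when the queue thread rotates the two mmc_queue_req instances after each issued request. A minimal sketch of that rotation (simplified from the final patch in the series):

/* Current request becomes previous request and vice versa, so one
 * slot can be prepared while the other is still in flight. */
struct mmc_queue_req *tmp;

mq->mqrq_prev->brq.mrq.data = NULL;
mq->mqrq_prev->req = NULL;
tmp = mq->mqrq_prev;
mq->mqrq_prev = mq->mqrq_cur;
mq->mqrq_cur = tmp;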
This simple fault injection proved to be very useful for testing the error handling in block.c's rw_rq(). It may still be useful to test whether the host driver handles pre_req() and post_req() correctly in case of errors. Injection can be enabled via the fail_mmc_request= boot parameter or through the per-host make-it-fail debugfs entry added by this patch.
Signed-off-by: Per Forlin per.forlin@linaro.org --- drivers/mmc/core/core.c | 54 ++++++++++++++++++++++++++++++++++++++++++++ drivers/mmc/core/debugfs.c | 5 ++++ include/linux/mmc/host.h | 3 ++ lib/Kconfig.debug | 11 +++++++++ 4 files changed, 73 insertions(+), 0 deletions(-)
diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c index 66e1403..51d1cbe 100644 --- a/drivers/mmc/core/core.c +++ b/drivers/mmc/core/core.c @@ -23,6 +23,8 @@ #include <linux/log2.h> #include <linux/regulator/consumer.h> #include <linux/pm_runtime.h> +#include <linux/fault-inject.h> +#include <linux/random.h>
#include <linux/mmc/card.h> #include <linux/mmc/host.h> @@ -82,6 +84,56 @@ static void mmc_flush_scheduled_work(void) flush_workqueue(workqueue); }
+#ifdef CONFIG_FAIL_MMC_REQUEST + +static DECLARE_FAULT_ATTR(fail_mmc_request); + +static int __init setup_fail_mmc_request(char *str) +{ + return setup_fault_attr(&fail_mmc_request, str); +} +__setup("fail_mmc_request=", setup_fail_mmc_request); + +static void mmc_should_fail_request(struct mmc_host *host, + struct mmc_request *mrq) +{ + struct mmc_command *cmd = mrq->cmd; + struct mmc_data *data = mrq->data; + static const int data_errors[] = { + -ETIMEDOUT, + -EILSEQ, + -EIO, + }; + + if (!data) + return; + + if (cmd->error || data->error || !host->make_it_fail || + !should_fail(&fail_mmc_request, data->blksz * data->blocks)) + return; + + data->error = data_errors[random32() % ARRAY_SIZE(data_errors)]; + data->bytes_xfered = (random32() % (data->bytes_xfered >> 9)) << 9; +} + +static int __init fail_mmc_request_debugfs(void) +{ + return init_fault_attr_dentries(&fail_mmc_request, + "fail_mmc_request"); +} + +late_initcall(fail_mmc_request_debugfs); + +#else /* CONFIG_FAIL_MMC_REQUEST */ + +static void mmc_should_fail_request(struct mmc_host *host, + struct mmc_request *mrq) +{ +} + +#endif /* CONFIG_FAIL_MMC_REQUEST */ + + /** * mmc_request_done - finish processing an MMC request * @host: MMC host which completed request @@ -108,6 +160,8 @@ void mmc_request_done(struct mmc_host *host, struct mmc_request *mrq) cmd->error = 0; host->ops->request(host, mrq); } else { + mmc_should_fail_request(host, mrq); + led_trigger_event(host->led, LED_OFF);
pr_debug("%s: req done (CMD%u): %d: %08x %08x %08x %08x\n", diff --git a/drivers/mmc/core/debugfs.c b/drivers/mmc/core/debugfs.c index 998797e..588e76f 100644 --- a/drivers/mmc/core/debugfs.c +++ b/drivers/mmc/core/debugfs.c @@ -188,6 +188,11 @@ void mmc_add_host_debugfs(struct mmc_host *host) root, &host->clk_delay)) goto err_node; #endif +#ifdef CONFIG_FAIL_MMC_REQUEST + if (!debugfs_create_u8("make-it-fail", S_IRUSR | S_IWUSR, + root, &host->make_it_fail)) + goto err_node; +#endif return;
err_node: diff --git a/include/linux/mmc/host.h b/include/linux/mmc/host.h index d2d948b..25818ce 100644 --- a/include/linux/mmc/host.h +++ b/include/linux/mmc/host.h @@ -258,6 +258,9 @@ struct mmc_host {
struct mmc_async_req *areq; /* active async req */
+#ifdef CONFIG_FAIL_MMC_REQUEST + u8 make_it_fail; +#endif unsigned long private[0] ____cacheline_aligned; };
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug index c768bcd..330fc70 100644 --- a/lib/Kconfig.debug +++ b/lib/Kconfig.debug @@ -1057,6 +1057,17 @@ config FAIL_IO_TIMEOUT Only works with drivers that use the generic timeout handling, for others it wont do anything.
+config FAIL_MMC_REQUEST + bool "Fault-injection capability for MMC IO" + select DEBUG_FS + depends on FAULT_INJECTION + help + Provide fault-injection capability for MMC IO. + This will make the mmc core return data errors. This is + useful for testing the error handling in the mmc block device + and how the mmc host driver handle retries from + the block device. + config FAULT_INJECTION_DEBUG_FS bool "Debugfs entries for fault-injection capabilities" depends on FAULT_INJECTION && SYSFS && DEBUG_FS
Change mmc_blk_issue_rw_rq() to become asynchronous. The execution flow looks like this: The mmc-queue calls issue_rw_rq(), which sends the request to the host and returns back to the mmc-queue. The mmc-queue calls issue_rw_rq() again with a new request. This new request is prepared in issue_rw_rq(), which then waits for the active request to complete before pushing the new one to the host. When the mmc-queue is empty it calls issue_rw_rq() with req=NULL to finish off the active request without starting a new one.
Signed-off-by: Per Forlin per.forlin@linaro.org --- drivers/mmc/card/block.c | 118 ++++++++++++++++++++++++++++++++------------- drivers/mmc/card/queue.c | 17 +++++-- drivers/mmc/card/queue.h | 1 + 3 files changed, 97 insertions(+), 39 deletions(-)
diff --git a/drivers/mmc/card/block.c b/drivers/mmc/card/block.c index 1c0b077..72f2362 100644 --- a/drivers/mmc/card/block.c +++ b/drivers/mmc/card/block.c @@ -81,6 +81,7 @@ static DEFINE_MUTEX(open_lock);
enum mmc_blk_status { MMC_BLK_SUCCESS = 0, + MMC_BLK_PARTIAL, MMC_BLK_RETRY, MMC_BLK_DATA_ERR, MMC_BLK_CMD_ERR, @@ -331,14 +332,16 @@ out: return err ? 0 : 1; }
-static enum mmc_blk_status mmc_blk_err_check(struct mmc_blk_request *brq, - struct request *req, - struct mmc_card *card, - struct mmc_blk_data *md) +static int mmc_blk_err_check(struct mmc_card *card, + struct mmc_async_req *areq) { struct mmc_command cmd; u32 status = 0; enum mmc_blk_status ret = MMC_BLK_SUCCESS; + struct mmc_queue_req *mq_mrq = container_of(areq, struct mmc_queue_req, + mmc_active); + struct mmc_blk_request *brq = &mq_mrq->brq; + struct request *req = mq_mrq->req;
/* * Check for errors here, but don't jump to cmd_err @@ -422,9 +425,12 @@ static enum mmc_blk_status mmc_blk_err_check(struct mmc_blk_request *brq, else ret = MMC_BLK_CMD_ERR; } + + if (ret == MMC_BLK_SUCCESS && + blk_rq_bytes(req) != brq->data.bytes_xfered) + ret = MMC_BLK_PARTIAL; out: return ret; - }
static void mmc_blk_rw_rq_prep(struct mmc_queue_req *mqrq, @@ -513,29 +519,62 @@ static void mmc_blk_rw_rq_prep(struct mmc_queue_req *mqrq, brq->data.sg_len = i; }
+ mqrq->mmc_active.mrq = &brq->mrq; + mqrq->mmc_active.err_check = mmc_blk_err_check; + mmc_queue_bounce_pre(mqrq); }
-static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *req) +static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *rqc) { struct mmc_blk_data *md = mq->data; struct mmc_card *card = md->queue.card; - struct mmc_blk_request *brq = &mq->mqrq_cur->brq; - int ret = 1, disable_multi = 0; + struct mmc_blk_request *brq; + int ret = 1; + int disable_multi = 0; enum mmc_blk_status status; + struct mmc_queue_req *mq_rq; + struct request *req; + struct mmc_async_req *areq;
- mmc_claim_host(card->host); + if (!rqc && !mq->mqrq_prev->req) + goto out; + + if (rqc && !mq->mqrq_prev->req) + mmc_claim_host(card->host);
do { - mmc_blk_rw_rq_prep(mq->mqrq_cur, card, disable_multi, mq); - mmc_wait_for_req(card->host, &brq->mrq); + if (rqc) { + mmc_blk_rw_rq_prep(mq->mqrq_cur, card, 0, mq); + areq = &mq->mqrq_cur->mmc_active; + } else + areq = NULL; + areq = mmc_start_req(card->host, areq, (int *) &status); + if (!areq) + goto out;
- mmc_queue_bounce_post(mq->mqrq_cur); - status = mmc_blk_err_check(brq, req, card, md); + mq_rq = container_of(areq, struct mmc_queue_req, mmc_active); + brq = &mq_rq->brq; + req = mq_rq->req; + mmc_queue_bounce_post(mq_rq);
switch (status) { - case MMC_BLK_CMD_ERR: - goto cmd_err; + case MMC_BLK_SUCCESS: + case MMC_BLK_PARTIAL: + /* + * A block was successfully transferred. + */ + spin_lock_irq(&md->lock); + ret = __blk_end_request(req, 0, + brq->data.bytes_xfered); + spin_unlock_irq(&md->lock); + if (status == MMC_BLK_SUCCESS && ret) { + /* If this happen it is a bug */ + printk(KERN_ERR "%s BUG rq_tot %d d_xfer %d\n", + __func__, blk_rq_bytes(req), + brq->data.bytes_xfered); + goto cmd_err; + } break; case MMC_BLK_RETRY: disable_multi = 1; @@ -548,38 +587,44 @@ static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *req) * read a single sector. */ spin_lock_irq(&md->lock); - ret = __blk_end_request(req, -EIO, - brq->data.blksz); + ret = __blk_end_request(req, -EIO, brq->data.blksz); spin_unlock_irq(&md->lock); - + if (!ret) + goto start_new_req; break; - case MMC_BLK_SUCCESS: + case MMC_BLK_CMD_ERR: + ret = 1; + goto cmd_err; + break; + } + + if (ret) { /* - * A block was successfully transferred. + * In case of a none complete request + * prepare it again and resend. */ - spin_lock_irq(&md->lock); - ret = __blk_end_request(req, 0, brq->data.bytes_xfered); - spin_unlock_irq(&md->lock); - break; + mmc_blk_rw_rq_prep(mq_rq, card, disable_multi, mq); + mmc_start_req(card->host, &mq_rq->mmc_active, NULL); } } while (ret);
- mmc_release_host(card->host); + if (!rqc) + mmc_release_host(card->host);
return 1; - + out: + return 0; cmd_err: - /* - * If this is an SD card and we're writing, we can first - * mark the known good sectors as ok. - * + /* + * If this is an SD card and we're writing, we can first + * mark the known good sectors as ok. + * * If the card is not SD, we can still ok written sectors * as reported by the controller (which might be less than * the real number of written sectors, but never more). */ if (mmc_card_sd(card)) { u32 blocks; - blocks = mmc_sd_num_wr_blocks(card); if (blocks != (u32)-1) { spin_lock_irq(&md->lock); @@ -592,19 +637,24 @@ static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *req) spin_unlock_irq(&md->lock); }
- mmc_release_host(card->host); - spin_lock_irq(&md->lock); while (ret) ret = __blk_end_request(req, -EIO, blk_rq_cur_bytes(req)); spin_unlock_irq(&md->lock);
+ start_new_req: + if (rqc) { + mmc_blk_rw_rq_prep(mq->mqrq_cur, card, 0, mq); + mmc_start_req(card->host, &mq->mqrq_cur->mmc_active, NULL); + } else + mmc_release_host(card->host); + return 0; }
static int mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req) { - if (req->cmd_flags & REQ_DISCARD) { + if (req && req->cmd_flags & REQ_DISCARD) { if (req->cmd_flags & REQ_SECURE) return mmc_blk_issue_secdiscard_rq(mq, req); else diff --git a/drivers/mmc/card/queue.c b/drivers/mmc/card/queue.c index eef3510..d4fffb7 100644 --- a/drivers/mmc/card/queue.c +++ b/drivers/mmc/card/queue.c @@ -52,6 +52,7 @@ static int mmc_queue_thread(void *d) down(&mq->thread_sem); do { struct request *req = NULL; + struct mmc_queue_req *tmp;
spin_lock_irq(q->queue_lock); set_current_state(TASK_INTERRUPTIBLE); @@ -59,7 +60,10 @@ static int mmc_queue_thread(void *d) mq->mqrq_cur->req = req; spin_unlock_irq(q->queue_lock);
- if (!req) { + if (req || mq->mqrq_prev->req) { + set_current_state(TASK_RUNNING); + mq->issue_fn(mq, req); + } else { if (kthread_should_stop()) { set_current_state(TASK_RUNNING); break; @@ -67,11 +71,14 @@ static int mmc_queue_thread(void *d) up(&mq->thread_sem); schedule(); down(&mq->thread_sem); - continue; } - set_current_state(TASK_RUNNING);
- mq->issue_fn(mq, req); + /* Current request becomes previous request and vice versa. */ + mq->mqrq_prev->brq.mrq.data = NULL; + mq->mqrq_prev->req = NULL; + tmp = mq->mqrq_prev; + mq->mqrq_prev = mq->mqrq_cur; + mq->mqrq_cur = tmp; } while (1); up(&mq->thread_sem);
@@ -97,7 +104,7 @@ static void mmc_request(struct request_queue *q) return; }
- if (!mq->mqrq_cur->req) + if (!mq->mqrq_cur->req && !mq->mqrq_prev->req) wake_up_process(mq->thread); }
diff --git a/drivers/mmc/card/queue.h b/drivers/mmc/card/queue.h index 0e65807..62c27f8 100644 --- a/drivers/mmc/card/queue.h +++ b/drivers/mmc/card/queue.h @@ -18,6 +18,7 @@ struct mmc_queue_req { char *bounce_buf; struct scatterlist *bounce_sg; unsigned int bounce_sg_len; + struct mmc_async_req mmc_active; };
struct mmc_queue {
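Stripped of the error handling, the new issue path can be sketched like this (names as in the diff above; not the complete function):

if (rqc) {
	/* prepare the brand new request while the host may still be
	 * busy with the previous one */
	mmc_blk_rw_rq_prep(mq->mqrq_cur, card, 0, mq);
	areq = &mq->mqrq_cur->mmc_active;
} else
	areq = NULL;	/* queue empty: only reap the old request */

/* mmc_start_req() waits for any ongoing request, starts the new one,
 * and returns the *completed* request (or NULL if nothing was in
 * flight and nothing new was submitted). */
areq = mmc_start_req(card->host, areq, (int *)&status);
if (!areq)
	return 0;

mq_rq = container_of(areq, struct mmc_queue_req, mmc_active);
mmc_queue_bounce_post(mq_rq);
/* ... then dispatch on 'status' as in the previous patch ... */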
On Thu, May 26, 2011 at 3:27 AM, Per Forlin per.forlin@linaro.org wrote:
pre_req() runs dma_map_sg() and post_req() runs dma_unmap_sg(). If pre_req() is not called before omap_hsmmc_request(), dma_map_sg will be issued before starting the transfer. It is optional to use pre_req(), but if pre_req() is issued, post_req() must be called as well.
Signed-off-by: Per Forlin per.forlin@linaro.org
drivers/mmc/host/omap_hsmmc.c | 87 +++++++++++++++++++++++++++++++++++++++-- 1 files changed, 83 insertions(+), 4 deletions(-)
diff --git a/drivers/mmc/host/omap_hsmmc.c b/drivers/mmc/host/omap_hsmmc.c index ad3731a..2116c09 100644 --- a/drivers/mmc/host/omap_hsmmc.c +++ b/drivers/mmc/host/omap_hsmmc.c @@ -141,6 +141,11 @@ #define OMAP_HSMMC_WRITE(base, reg, val) \ __raw_writel((val), (base) + OMAP_HSMMC_##reg)
+struct omap_hsmmc_next {
+	unsigned int dma_len;
+	s32 cookie;
+};
struct omap_hsmmc_host { struct device *dev; struct mmc_host *mmc; @@ -184,6 +189,7 @@ struct omap_hsmmc_host { int reqs_blocked; int use_reg; int req_in_progress;
+	struct omap_hsmmc_next next_data;
struct omap_mmc_platform_data *pdata; }; @@ -1344,8 +1350,9 @@ static void omap_hsmmc_dma_cb(int lch, u16 ch_status, void *cb_data) return; }
-	dma_unmap_sg(mmc_dev(host->mmc), data->sg, data->sg_len,
-		omap_hsmmc_get_dma_dir(host, data));
+	if (!data->host_cookie)
+		dma_unmap_sg(mmc_dev(host->mmc), data->sg, data->sg_len,
+			omap_hsmmc_get_dma_dir(host, data));
req_in_progress = host->req_in_progress; dma_ch = host->dma_ch; @@ -1363,6 +1370,45 @@ static void omap_hsmmc_dma_cb(int lch, u16 ch_status, void *cb_data) } }
+static int omap_hsmmc_pre_dma_transfer(struct omap_hsmmc_host *host,
+				       struct mmc_data *data,
+				       struct omap_hsmmc_next *next)
+{
As you are passing &host->next_data into next, next will always be a valid pointer. So..
+	int dma_len;
+
+	if (!next && data->host_cookie &&
+	    data->host_cookie != host->next_data.cookie) {

!next will always return false and hence the rest of the check will never be executed. Perhaps s/&&/||/g ?

+		printk(KERN_WARNING "[%s] invalid cookie: data->host_cookie %d"
+		       " host->next_data.cookie %d\n",
+		       __func__, data->host_cookie, host->next_data.cookie);
+		data->host_cookie = 0;
+	}
+
+	/* Check if next job is already prepared */
+	if (next ||
+	    (!next && data->host_cookie != host->next_data.cookie)) {

Cookie matching will never be checked..

+		dma_len = dma_map_sg(mmc_dev(host->mmc), data->sg,
+				     data->sg_len,
+				     omap_hsmmc_get_dma_dir(host, data));
+	} else {

and this will never be executed as well?

+		dma_len = host->next_data.dma_len;
+		host->next_data.dma_len = 0;
+	}
+
+	if (dma_len == 0)
+		return -EINVAL;
+
+	if (next) {
+		next->dma_len = dma_len;
+		data->host_cookie = ++next->cookie < 0 ? 1 : next->cookie;
+	} else
+		host->dma_len = dma_len;
+
+	return 0;
+}
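To see what the cookie is guarding, here is a rough sketch of how the two hooks are expected to pair up in this driver (simplified from the patch, with error handling omitted; the hook signatures are those used by this series' core patch):

static void omap_hsmmc_pre_req(struct mmc_host *mmc, struct mmc_request *mrq,
			       bool is_first_req)
{
	struct omap_hsmmc_host *host = mmc_priv(mmc);

	/* next != NULL: map ahead of time and stamp the cookie */
	if (host->use_dma && mrq->data)
		omap_hsmmc_pre_dma_transfer(host, mrq->data,
					    &host->next_data);
}

static void omap_hsmmc_post_req(struct mmc_host *mmc, struct mmc_request *mrq,
				int err)
{
	struct omap_hsmmc_host *host = mmc_priv(mmc);
	struct mmc_data *data = mrq->data;

	/* unmap only what pre_req() (or the request path) mapped */
	if (host->use_dma && data && data->host_cookie) {
		dma_unmap_sg(mmc_dev(mmc), data->sg, data->sg_len,
			     omap_hsmmc_get_dma_dir(host, data));
		data->host_cookie = 0;
	}
}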
On Thu, May 26, 2011 at 3:27 AM, Per Forlin per.forlin@linaro.org wrote:
Don't use the sg_len returned by dma_map_sg() as the input to dma_unmap_sg(). Use the original sg_len for both dma_map_sg and dma_unmap_sg.
Signed-off-by: Per Forlin per.forlin@linaro.org
drivers/mmc/host/omap_hsmmc.c | 5 +++-- 1 files changed, 3 insertions(+), 2 deletions(-)
diff --git a/drivers/mmc/host/omap_hsmmc.c b/drivers/mmc/host/omap_hsmmc.c index 259ece0..ad3731a 100644 --- a/drivers/mmc/host/omap_hsmmc.c +++ b/drivers/mmc/host/omap_hsmmc.c @@ -959,7 +959,8 @@ static void omap_hsmmc_dma_cleanup(struct omap_hsmmc_host *host, int errno) spin_unlock(&host->irq_lock);
if (host->use_dma && dma_ch != -1) {
-		dma_unmap_sg(mmc_dev(host->mmc), host->data->sg, host->dma_len,
+		dma_unmap_sg(mmc_dev(host->mmc), host->data->sg,
+			host->data->sg_len,
			omap_hsmmc_get_dma_dir(host, host->data)); omap_free_dma(dma_ch); } @@ -1343,7 +1344,7 @@ static void omap_hsmmc_dma_cb(int lch, u16 ch_status, void *cb_data) return; }
-	dma_unmap_sg(mmc_dev(host->mmc), data->sg, host->dma_len,
+	dma_unmap_sg(mmc_dev(host->mmc), data->sg, data->sg_len,
omap_hsmmc_get_dma_dir(host, data));
req_in_progress = host->req_in_progress;
1.7.4.1
Reviewed-by: Venkatraman S svenkatr@ti.com
Perhaps this doesn't belong to $FEATURE and can be posted as a separate patch? Thanks, Venkat.
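For context, the DMA-API rule behind this fix: dma_map_sg() may return fewer entries than it was given (adjacent entries can be coalesced, e.g. by an IOMMU), but dma_unmap_sg() must be passed the original entry count. A minimal sketch of the correct pairing (map_and_run is a hypothetical helper, not code from the patch):

static int map_and_run(struct device *dev, struct mmc_data *data,
		       enum dma_data_direction dir)
{
	int nents = data->sg_len;
	int mapped = dma_map_sg(dev, data->sg, nents, dir);

	if (mapped == 0)
		return -EINVAL;		/* mapping failed */

	/* ... set up 'mapped' DMA descriptors and run the transfer ... */

	dma_unmap_sg(dev, data->sg, nents, dir);	/* original nents, not 'mapped' */
	return 0;
}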
On Thu, May 26, 2011 at 3:27 AM, Per Forlin per.forlin@linaro.org wrote:
How significant is the cache maintenance over head? It depends, the eMMC are much faster now compared to a few years ago and cache maintenance cost more due to multiple cache levels and speculative cache pre-fetch. In relation the cost for handling the caches have increased and is now a bottle neck dealing with fast eMMC together with DMA.
The intention for introducing none blocking mmc requests is to minimize the time between a mmc request ends and another mmc request starts. In the current implementation the MMC controller is idle when dma_map_sg and dma_unmap_sg is processing. Introducing none blocking mmc request makes it possible to prepare the caches for next job parallel with an active mmc request.
This is done by making the issue_rw_rq() none blocking. The increase in throughput is proportional to the time it takes to prepare (major part of preparations is dma_map_sg and dma_unmap_sg) a request and how fast the memory is. The faster the MMC/SD is the more significant the prepare request time becomes. Measurements on U5500 and Panda on eMMC and SD shows significant performance gain for large reads when running DMA mode. In the PIO case the performance is unchanged.
There are two optional hooks pre_req() and post_req() that the host driver may implement in order to move work to before and after the actual mmc_request function is called. In the DMA case pre_req() may do dma_map_sg() and prepare the dma descriptor and post_req runs the dma_unmap_sg.
Details on measurements from IOZone and mmc_test: https://wiki.linaro.org/WorkingGroups/Kernel/Specs/StoragePerfMMC-async-req
Changes since v3: * Based on 2.6.39-rc7 * Add error check for testlist in mmc_test.c * Resolve in mmc-queue-thread that caused the mmc-thread to miss a wakeup. * Move parallel request handling to core.c. This simplifies the interface from 4 public functions to 1. This also gives access for SDIO to use the same functionallity, even though the function is not tuned for the SDIO execution flow yet.
Per Forlin (12): mmc: add none blocking mmc request function omap_hsmmc: use original sg_len for dma_unmap_sg omap_hsmmc: add support for pre_req and post_req mmci: implement pre_req() and post_req() mmc: mmc_test: add debugfs file to list all tests mmc: mmc_test: add test for none blocking transfers mmc: add member in mmc queue struct to hold request data mmc: add a block request prepare function mmc: move error code in mmc_block_issue_rw_rq to a separate function. mmc: add a second mmc queue request member mmc: test: add random fault injection in core.c mmc: add handling for two parallel block requests in issue_rw_rq
drivers/mmc/card/block.c | 452 +++++++++++++++++++++++++---------------- drivers/mmc/card/mmc_test.c | 361 ++++++++++++++++++++++++++++++++- drivers/mmc/card/queue.c | 184 +++++++++++------ drivers/mmc/card/queue.h | 32 +++- drivers/mmc/core/core.c | 165 ++++++++++++++- drivers/mmc/core/debugfs.c | 5 + drivers/mmc/host/mmci.c | 146 ++++++++++++-- drivers/mmc/host/mmci.h | 8 + drivers/mmc/host/omap_hsmmc.c | 90 ++++++++- include/linux/mmc/core.h | 6 +- include/linux/mmc/host.h | 19 ++ lib/Kconfig.debug | 11 + 12 files changed, 1187 insertions(+), 292 deletions(-)
Nitpick.. The mmc_test.c changes should be at the end of the series, after the async feature is available.
Regards, Venkat.
On 16 June 2011 15:14, S, Venkatraman svenkatr@ti.com wrote:
On Thu, May 26, 2011 at 3:27 AM, Per Forlin per.forlin@linaro.org wrote:
pre_req() runs dma_map_sg() and post_req() runs dma_unmap_sg(). If pre_req() is not called before omap_hsmmc_request(), dma_map_sg will be issued before starting the transfer. It is optional to use pre_req(), but if pre_req() is issued, post_req() must be called as well.
Signed-off-by: Per Forlin per.forlin@linaro.org
drivers/mmc/host/omap_hsmmc.c | 87 +++++++++++++++++++++++++++++++++++++++-- 1 files changed, 83 insertions(+), 4 deletions(-)
diff --git a/drivers/mmc/host/omap_hsmmc.c b/drivers/mmc/host/omap_hsmmc.c index ad3731a..2116c09 100644 --- a/drivers/mmc/host/omap_hsmmc.c +++ b/drivers/mmc/host/omap_hsmmc.c @@ -141,6 +141,11 @@ #define OMAP_HSMMC_WRITE(base, reg, val) \ __raw_writel((val), (base) + OMAP_HSMMC_##reg)
+struct omap_hsmmc_next {
+	unsigned int dma_len;
+	s32 cookie;
+};
struct omap_hsmmc_host { struct device *dev; struct mmc_host *mmc; @@ -184,6 +189,7 @@ struct omap_hsmmc_host { int reqs_blocked; int use_reg; int req_in_progress;
+	struct omap_hsmmc_next next_data;
struct omap_mmc_platform_data *pdata; }; @@ -1344,8 +1350,9 @@ static void omap_hsmmc_dma_cb(int lch, u16 ch_status, void *cb_data) return; }
-	dma_unmap_sg(mmc_dev(host->mmc), data->sg, data->sg_len,
-		omap_hsmmc_get_dma_dir(host, data));
+	if (!data->host_cookie)
+		dma_unmap_sg(mmc_dev(host->mmc), data->sg, data->sg_len,
+			omap_hsmmc_get_dma_dir(host, data));
req_in_progress = host->req_in_progress; dma_ch = host->dma_ch; @@ -1363,6 +1370,45 @@ static void omap_hsmmc_dma_cb(int lch, u16 ch_status, void *cb_data) } }
+static int omap_hsmmc_pre_dma_transfer(struct omap_hsmmc_host *host,
+				       struct mmc_data *data,
+				       struct omap_hsmmc_next *next)
+{
As you are passing &host->next_data into next, next will always be a valid pointer. So..
omap_hsmmc_pre_dma_transfer() is called from two places: pre_req() and start_dma_transfer(). next is NULL when called from start_dma_transfer().
/Per
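A simplified sketch of the two call sites, for reference:

/* From pre_req(): prepare ahead of time, remember dma_len and stamp
 * the cookie in host->next_data. */
omap_hsmmc_pre_dma_transfer(host, mrq->data, &host->next_data);

/* From start_dma_transfer(): next == NULL, so the function either
 * reuses the prepared mapping (cookie matches) or maps the sg list
 * on the spot. */
omap_hsmmc_pre_dma_transfer(host, data, NULL);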
On 16 June 2011 15:16, S, Venkatraman svenkatr@ti.com wrote:
On Thu, May 26, 2011 at 3:27 AM, Per Forlin per.forlin@linaro.org wrote:
Don't use the sg_len returned by dma_map_sg() as the input to dma_unmap_sg(). Use the original sg_len for both dma_map_sg and dma_unmap_sg.
Signed-off-by: Per Forlin per.forlin@linaro.org
drivers/mmc/host/omap_hsmmc.c | 5 +++-- 1 files changed, 3 insertions(+), 2 deletions(-)
diff --git a/drivers/mmc/host/omap_hsmmc.c b/drivers/mmc/host/omap_hsmmc.c index 259ece0..ad3731a 100644 --- a/drivers/mmc/host/omap_hsmmc.c +++ b/drivers/mmc/host/omap_hsmmc.c @@ -959,7 +959,8 @@ static void omap_hsmmc_dma_cleanup(struct omap_hsmmc_host *host, int errno) spin_unlock(&host->irq_lock);
if (host->use_dma && dma_ch != -1) {
-		dma_unmap_sg(mmc_dev(host->mmc), host->data->sg, host->dma_len,
+		dma_unmap_sg(mmc_dev(host->mmc), host->data->sg,
+			host->data->sg_len,
			omap_hsmmc_get_dma_dir(host, host->data)); omap_free_dma(dma_ch); } @@ -1343,7 +1344,7 @@ static void omap_hsmmc_dma_cb(int lch, u16 ch_status, void *cb_data) return; }
-	dma_unmap_sg(mmc_dev(host->mmc), data->sg, host->dma_len,
+	dma_unmap_sg(mmc_dev(host->mmc), data->sg, data->sg_len,
omap_hsmmc_get_dma_dir(host, data));
req_in_progress = host->req_in_progress;
1.7.4.1
Reviewed-by: Venkatraman S svenkatr@ti.com
Perhaps this doesn't belong to $FEATURE and can be posted as a separate patch?
Yes, I will resend it as a separate patch.
Thanks, Per
On 16 June 2011 15:39, S, Venkatraman svenkatr@ti.com wrote:
On Thu, May 26, 2011 at 3:27 AM, Per Forlin per.forlin@linaro.org wrote:
How significant is the cache maintenance over head? It depends, the eMMC are much faster now compared to a few years ago and cache maintenance cost more due to multiple cache levels and speculative cache pre-fetch. In relation the cost for handling the caches have increased and is now a bottle neck dealing with fast eMMC together with DMA.
The intention for introducing none blocking mmc requests is to minimize the time between a mmc request ends and another mmc request starts. In the current implementation the MMC controller is idle when dma_map_sg and dma_unmap_sg is processing. Introducing none blocking mmc request makes it possible to prepare the caches for next job parallel with an active mmc request.
This is done by making the issue_rw_rq() none blocking. The increase in throughput is proportional to the time it takes to prepare (major part of preparations is dma_map_sg and dma_unmap_sg) a request and how fast the memory is. The faster the MMC/SD is the more significant the prepare request time becomes. Measurements on U5500 and Panda on eMMC and SD shows significant performance gain for large reads when running DMA mode. In the PIO case the performance is unchanged.
There are two optional hooks pre_req() and post_req() that the host driver may implement in order to move work to before and after the actual mmc_request function is called. In the DMA case pre_req() may do dma_map_sg() and prepare the dma descriptor and post_req runs the dma_unmap_sg.
Details on measurements from IOZone and mmc_test: https://wiki.linaro.org/WorkingGroups/Kernel/Specs/StoragePerfMMC-async-req
Changes since v3: * Based on 2.6.39-rc7 * Add error check for testlist in mmc_test.c * Resolve in mmc-queue-thread that caused the mmc-thread to miss a wakeup. * Move parallel request handling to core.c. This simplifies the interface from 4 public functions to 1. This also gives access for SDIO to use the same functionallity, even though the function is not tuned for the SDIO execution flow yet.
Per Forlin (12): mmc: add none blocking mmc request function omap_hsmmc: use original sg_len for dma_unmap_sg omap_hsmmc: add support for pre_req and post_req mmci: implement pre_req() and post_req() mmc: mmc_test: add debugfs file to list all tests mmc: mmc_test: add test for none blocking transfers mmc: add member in mmc queue struct to hold request data mmc: add a block request prepare function mmc: move error code in mmc_block_issue_rw_rq to a separate function. mmc: add a second mmc queue request member mmc: test: add random fault injection in core.c mmc: add handling for two parallel block requests in issue_rw_rq
drivers/mmc/card/block.c | 452 +++++++++++++++++++++++++---------------- drivers/mmc/card/mmc_test.c | 361 ++++++++++++++++++++++++++++++++- drivers/mmc/card/queue.c | 184 +++++++++++------ drivers/mmc/card/queue.h | 32 +++- drivers/mmc/core/core.c | 165 ++++++++++++++- drivers/mmc/core/debugfs.c | 5 + drivers/mmc/host/mmci.c | 146 ++++++++++++-- drivers/mmc/host/mmci.h | 8 + drivers/mmc/host/omap_hsmmc.c | 90 ++++++++- include/linux/mmc/core.h | 6 +- include/linux/mmc/host.h | 19 ++ lib/Kconfig.debug | 11 + 12 files changed, 1187 insertions(+), 292 deletions(-)
Nitpick.. The mmc_test.c changes should be at the end of the series, after the async feature is available.
mmc_test sits on top of core.c; it doesn't test any code in the mmc block device. I use the DT (data test) cases together with random fault injection to verify the mmc block device code.
mmc: add none blocking mmc request function omap_hsmmc: use original sg_len for dma_unmap_sg omap_hsmmc: add support for pre_req and post_req mmci: implement pre_req() and post_req() mmc: mmc_test: add debugfs file to list all tests mmc: mmc_test: add test for none blocking transfers
These patches are enough to run the mmc_test async request tests on omap_hsmmc and mmci.
Regards, Per