Hello,
--- a/drivers/mtd/nand/raw/cadence-nand-controller.c
+++ b/drivers/mtd/nand/raw/cadence-nand-controller.c
@@ -2908,7 +2908,7 @@ static int cadence_nand_init(struct cdns_nand_ctrl *cdns_ctrl)
 	if (!cdns_ctrl->dmac) {
 		dev_err(cdns_ctrl->dev, "Unable to get a DMA channel\n");
-		ret = -EBUSY;
+		ret = -EPROBE_DEFER;
Does it work if there is no DMA channel provided? The bindings do not mention DMA channels as mandatory.
The Cadence NAND controller driver is written in such a way that it hardcodes has_dma=1, indicating that the slave DMA interface is connected to a DMA engine. However, it does not use the dedicated DMA channel information from the device tree.
This is not ok.
The controller works without an external DMA interface, i.e. has_dma=0. However, the current driver has no mechanism to configure this from the device tree.
What? Why are you requesting a DMA channel from a dmaengine in this case?
Please make the distinction between the OS implementation (the driver) and the DT binding, which describes the HW and only the HW.
Also, wouldn't it be more pleasant to use another helper from the DMA core that returns a proper return code? So we know which one among -EBUSY, -ENODEV or -EPROBE_DEFER we get.
Agree. I will change to dma_request_chan_by_mask() instead of dma_request_channel() so it can return a proper error code:

	cdns_ctrl->dmac = dma_request_chan_by_mask(&mask);
	if (IS_ERR(cdns_ctrl->dmac)) {
		ret = PTR_ERR(cdns_ctrl->dmac);
		if (ret != -EPROBE_DEFER)
			dev_err(cdns_ctrl->dev,
				"Failed to get a DMA channel: %d\n", ret);
		goto disable_irq;
	}
Is this reasonable?
It is better, but maybe you can use dev_err_probe() instead to include the EPROBE_DEFER error handling.
Thanks, Miquèl
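
For reference, a sketch of what the dev_err_probe() variant suggested above could look like, assuming the same surrounding probe context and error label as the earlier snippet (not compiled or tested here):

```c
	/*
	 * Sketch only: dev_err_probe() returns the error it is given,
	 * logs it as an error unless it is -EPROBE_DEFER, and in the
	 * deferral case records the reason for the deferral instead of
	 * printing. This folds away the explicit -EPROBE_DEFER check.
	 */
	cdns_ctrl->dmac = dma_request_chan_by_mask(&mask);
	if (IS_ERR(cdns_ctrl->dmac)) {
		ret = dev_err_probe(cdns_ctrl->dev,
				    PTR_ERR(cdns_ctrl->dmac),
				    "Failed to get a DMA channel\n");
		goto disable_irq;
	}
```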