On 08/28/2016 02:42 AM, Ard Biesheuvel wrote:
On 28 August 2016 at 00:40, Alan Ott alan@softiron.co.uk wrote:
On 08/27/2016 11:26 AM, Ard Biesheuvel wrote:
On 27 August 2016 at 15:30, Alan Ott alan@softiron.co.uk wrote:
On 08/26/2016 07:15 PM, Daniil Egranov wrote:
Hi Alan,
On 08/25/2016 05:39 PM, Alan Ott wrote:
Add support for 64-bit DMA transfers, since some 64-bit platforms don't have the ability to generate DMA addresses which fit in 32 bits.
This code comes from the FreeBSD driver from which this driver was originally derived.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Alan Ott <alan@softiron.co.uk>
 Drivers/Net/MarvellYukonDxe/if_msk.c    | 57 ++++++++++++++++++++++++++++-----
 Drivers/Net/MarvellYukonDxe/if_mskreg.h | 13 ++++++++
 2 files changed, 62 insertions(+), 8 deletions(-)

diff --git a/Drivers/Net/MarvellYukonDxe/if_msk.c b/Drivers/Net/MarvellYukonDxe/if_msk.c
index 9bd0d12..3cab084 100644
--- a/Drivers/Net/MarvellYukonDxe/if_msk.c
+++ b/Drivers/Net/MarvellYukonDxe/if_msk.c
@@ -492,6 +492,7 @@ msk_init_rx_ring (
   struct msk_rxdesc     *rxd;
   INTN                  i;
   INTN                  prod;
+  INTN                  nbuf;
   EFI_STATUS            Status;

   sc_if->msk_cdata.msk_rx_cons = 0;
@@ -500,17 +501,22 @@ msk_init_rx_ring (
   rd = &sc_if->msk_rdata;
   gBS->SetMem (rd->msk_rx_ring, MSK_RX_RING_SZ, 0);

-  prod = sc_if->msk_cdata.msk_rx_prod;
-  for (i = 0; i < MSK_RX_RING_CNT; i++) {
+  for (i = prod = 0; i < MSK_RX_RING_CNT; i++) {
     rxd = &sc_if->msk_cdata.msk_rxdesc[prod];
     gBS->SetMem (&rxd->rx_m, sizeof (MSK_DMA_BUF), 0);
     rxd->rx_le = &rd->msk_rx_ring[prod];
-    Status = msk_newbuf (sc_if, prod);
-    if (EFI_ERROR (Status)) {
-      return Status;
-    }
     MSK_INC (prod, MSK_RX_RING_CNT);
   }
+
+  nbuf = MSK_RX_BUF_CNT;
+  prod = 0;
+  for (i = 0; i < nbuf; i++) {
+    Status = msk_newbuf (sc_if, prod);
+    if (EFI_ERROR (Status)) {
+      return Status;
+    }
+    MSK_RX_INC(prod, MSK_RX_RING_CNT);
+  }

   // Update prefetch unit.
   sc_if->msk_cdata.msk_rx_prod = MSK_RX_RING_CNT - 1;
@@ -532,6 +538,7 @@ msk_init_tx_ring (
   sc_if->msk_cdata.msk_tx_prod = 0;
   sc_if->msk_cdata.msk_tx_cons = 0;
   sc_if->msk_cdata.msk_tx_cnt = 0;
+  sc_if->msk_cdata.msk_tx_high_addr = 0;

   rd = &sc_if->msk_rdata;
   gBS->SetMem (rd->msk_tx_ring, sizeof (struct msk_tx_desc) * MSK_TX_RING_CNT, 0);
@@ -556,6 +563,13 @@ msk_discard_rxbuf (
   DEBUG ((EFI_D_NET, "Marvell Yukon: discard rxbuf\n"));

+#ifdef MSK_64BIT_DMA
+  rxd = &sc_if->msk_cdata.msk_rxdesc[idx];
+  rx_le = rxd->rx_le;
+  rx_le->msk_control = htole32(OP_ADDR64 | HW_OWNER);
+  MSK_INC(idx, MSK_RX_RING_CNT);
+#endif
+
   rxd = &sc_if->msk_cdata.msk_rxdesc[idx];
   DmaBuffer = &rxd->rx_m;
   rx_le = rxd->rx_le;
@@ -594,6 +608,14 @@ msk_newbuf (
     return Status;
   }

+#ifdef MSK_64BIT_DMA
+  rx_le = rxd->rx_le;
+  rx_le->msk_addr = htole32(MSK_ADDR_HI(PhysAddr));
+  rx_le->msk_control = htole32(OP_ADDR64 | HW_OWNER);
+  MSK_INC(idx, MSK_RX_RING_CNT);
+  rxd = &sc_if->msk_cdata.msk_rxdesc[idx];
+#endif
+
   gBS->SetMem (&(rxd->rx_m), sizeof (MSK_DMA_BUF), 0);
   rxd->rx_m.DmaMapping = Mapping;
   rxd->rx_m.Buf = Buffer;
@@ -1649,6 +1671,19 @@ msk_encap (
   control = 0;

+#ifdef MSK_64BIT_DMA
+  if (MSK_ADDR_HI(BusPhysAddr) !=
+      sc_if->msk_cdata.msk_tx_high_addr) {
+    sc_if->msk_cdata.msk_tx_high_addr =
+      MSK_ADDR_HI(BusPhysAddr);
+    tx_le = &sc_if->msk_rdata.msk_tx_ring[prod];
+    tx_le->msk_addr = htole32(MSK_ADDR_HI(BusPhysAddr));
+    tx_le->msk_control = htole32(OP_ADDR64 | HW_OWNER);
+    sc_if->msk_cdata.msk_tx_cnt++;
+    MSK_INC(prod, MSK_TX_RING_CNT);
+  }
+#endif
+
   si = prod;
   tx_le = &sc_if->msk_rdata.msk_tx_ring[prod];
   tx_le->msk_addr = htole32 (MSK_ADDR_LO (BusPhysAddr));
@@ -1866,6 +1901,12 @@ msk_rxeof (
       break;
     }

+#ifdef MSK_64BIT_DMA
+    rxd = &sc_if->msk_cdata.msk_rxdesc[(cons + 1) % MSK_RX_RING_CNT];
+#else
     rxd = &sc_if->msk_cdata.msk_rxdesc[cons];
+#endif
+
     m = rxd->rx_m;
     Status = msk_newbuf (sc_if, cons);
     if (EFI_ERROR (Status)) {
@@ -1910,8 +1951,8 @@ msk_rxeof (
     }
   } while (0);

-  MSK_INC (sc_if->msk_cdata.msk_rx_cons, MSK_RX_RING_CNT);
-  MSK_INC (sc_if->msk_cdata.msk_rx_prod, MSK_RX_RING_CNT);
+  MSK_RX_INC (sc_if->msk_cdata.msk_rx_cons, MSK_RX_RING_CNT);
+  MSK_RX_INC (sc_if->msk_cdata.msk_rx_prod, MSK_RX_RING_CNT);
 }

 static
diff --git a/Drivers/Net/MarvellYukonDxe/if_mskreg.h b/Drivers/Net/MarvellYukonDxe/if_mskreg.h
index f0dd05e..6e835f2 100644
--- a/Drivers/Net/MarvellYukonDxe/if_mskreg.h
+++ b/Drivers/Net/MarvellYukonDxe/if_mskreg.h
@@ -2239,6 +2239,9 @@ struct msk_stat_desc {
 #define BMU_UDP_CHECK  (0x57<<16) // Descr with UDP ext (YUKON only)
 #define BMU_BBC        0xffff     // Bit 15.. 0:  Buffer Byte Counter

+#if MAX_ADDRESS > 0xffffffff
+#define MSK_64BIT_DMA
+#endif
This will be a problem for the EBC architecture-independent approach, as it makes the build conditional on the architecture. To support an EBC version of the driver, this condition has to be handled at run time. Could you take a look at it and see whether such build-time conditions can be removed?
The 32-bit DMA case of this driver, as far as I can tell, can be treated as an optimization on 32-bit platforms: it saves entries in the RX ring. On 32-bit platforms the high addresses will just be zero.[1]
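(For reference, the high/low split used by the descriptors is roughly the following; the real definitions live in if_mskreg.h, so treat this as a sketch rather than a quote. A DMA address below 4 GB simply yields zero for the high half.)

    // Rough shape of the address-split macros (see if_mskreg.h for the
    // real definitions): the OP_ADDR64 descriptor carries MSK_ADDR_HI().
    #define MSK_ADDR_LO(x)  ((UINT64)(x) & 0xffffffffULL)   // bits 31..0
    #define MSK_ADDR_HI(x)  ((UINT64)(x) >> 32)              // bits 63..32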
You are assuming that the CPU and the PCI host bridge have the same view of memory, which is not generally true on all architectures (although it is on x86). This means that even on a 32-bit platform, the PCI addresses returned by PciIo->Map() could have high bits set,
The Linux driver for this part checks sizeof(dma_addr_t) against sizeof(u32), so I assume that for Linux, your example case would be handled by the 32-bit platform having a large dma_addr_t.
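(Paraphrased from memory rather than quoted, the shape of that check in the Linux sky2 driver is something like the following; the surrounding error handling is omitted.)

    /* Paraphrased sketch, not verbatim sky2.c: only attempt a 64-bit DMA
     * mask when dma_addr_t is wider than u32 on this platform. */
    int using_dac = 0;

    if (sizeof(dma_addr_t) > sizeof(u32) &&
        !pci_set_dma_mask(pdev, DMA_BIT_MASK(64)))
        using_dac = 1;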
Some 32-bit architectures support a larger physical address space (e.g., LPAE on ARM supports up to 40 bits, iirc). However, the UEFI spec mandates a 1:1 mapping for the CPU's view of memory, and so this high memory is not actually usable in UEFI. So no, this is not really the same thing.
It is simply that PCIe is naturally 64 bits (and so the choice of DUAL_ADDRESS_CYCLE for the name is a bit unfortunate, since it is tightly coupled to legacy 32-bit PCI, which needs two address cycles to put both halves of the address on the bus, while PCIe is not even a bus anymore). This means that a platform could be configured in a way that puts the memory the CPU maps 1:1 higher up in the PCI controller's address space. So this is rarer than LPAE, but still something we need to take into account.
and so 64-bitness is fundamentally a property of the device (what the Driver Writer's Guide calls the PCI controller), and of the driver, not of the architecture we happen to be running on.
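(To make that concrete, here is a minimal, hypothetical sketch of the mapping step; Buffer and BufferSize are placeholder names, but EFI_PCI_IO_PROTOCOL.Map() itself is the standard interface the driver already uses. The DeviceAddress it returns is a bus address, which is the one that may need an OP_ADDR64 descriptor.)

  EFI_PHYSICAL_ADDRESS  DeviceAddress;
  UINTN                 NumberOfBytes;
  VOID                  *Mapping;
  EFI_STATUS            Status;

  NumberOfBytes = BufferSize;
  Status = PciIo->Map (
                    PciIo,
                    EfiPciIoOperationBusMasterWrite,  // device writes (RX) into Buffer
                    Buffer,
                    &NumberOfBytes,
                    &DeviceAddress,
                    &Mapping
                    );
  if (!EFI_ERROR (Status) && DeviceAddress > 0xffffffffULL) {
    // Even if Buffer itself sits below 4 GB in CPU terms, the bus address
    // can be above 4 GB, so the descriptor needs MSK_ADDR_HI (DeviceAddress).
  }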
In that case, it seems to make sense to leave the 64-bit support on all the time.
Agreed.
If we are deciding to keep the 64-bit support enabled all the time, please send a patch with the changes. I do not have a 32-bit platform to verify it, so we will assume this is the correct approach until we have the ability to test it.
Since the point of EBC drivers is that they don't know whether they are running on a 32- or 64-bit system (the EBC environment provides a 64-bit interface to the driver), I don't see how this can be detected at runtime. Not knowing about the underlying platform is part of the point of EBC. Maybe someone can show me I'm wrong.
It could easily be detected at runtime, since sizeof() is not a compile-time constant on EBC. But we shouldn't, given the above.
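(For illustration only, given that we shouldn't do it anyway: such a run-time check would be as trivial as the line below, because UINTN is natural-width on EBC and sizeof is evaluated when the byte code runs.)

  // Hypothetical EBC run-time check (not proposed for this driver):
  BOOLEAN  Use64BitDma = (BOOLEAN)(sizeof (UINTN) > 4);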
Back to your assertion that this will be a problem with EBC, are you sure that's true? Since an EBC driver runs in a 64-bit environment, it seems to me that the code above, when built for EBC, will always set MSK_64BIT_DMA.
Failing that, my recommendation is to #define MSK_64BIT_DMA here in all cases (and not depend on MAX_ADDRESS). I don't think we lose anything there (except a slight optimization on 32-bit platforms).
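(Concretely, that would just mean making the define in if_mskreg.h unconditional, i.e. dropping the MAX_ADDRESS guard:)

  // Always use the 64-bit descriptor layout, regardless of architecture.
  #define MSK_64BIT_DMA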
Indeed. As an optimization, I think it should be possible to query the PCI layer about whether DMA above 4 GB can ever occur, and enable the 32-bit-only path when it cannot (which would also enable it for X64, since UEFI on that platform typically does not support DMA above 4 GB, and maps CPU:PCI 1:1).
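(A rough, untested sketch of what such a query could look like, using the standard EFI_PCI_IO_PROTOCOL.Attributes() call and the DUAL_ADDRESS_CYCLE attribute; it would of course require turning MSK_64BIT_DMA into a run-time flag first.)

  UINT64      Supports;
  EFI_STATUS  Status;
  BOOLEAN     DmaAbove4G;

  // Ask the PCI layer which attributes this controller supports; if 64-bit
  // (dual address cycle) DMA is among them, keep emitting OP_ADDR64
  // descriptors, otherwise the 32-bit-only layout is sufficient.
  Status = PciIo->Attributes (
                    PciIo,
                    EfiPciIoAttributeOperationSupported,
                    0,
                    &Supports
                    );
  DmaAbove4G = !EFI_ERROR (Status) &&
               ((Supports & EFI_PCI_IO_ATTRIBUTE_DUAL_ADDRESS_CYCLE) != 0);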
I agree that could be an optimization, but I believe it to be a separate development effort, of course.
Yes, and please don't bother. You are not going to be able to test it anyway on Seattle.
Thanks, Ard.