On Fri, Nov 11, 2022 at 09:56:58AM +0000, Tian, Kevin wrote:
From: Jason Gunthorpe <jgg@nvidia.com>
Sent: Tuesday, November 8, 2022 8:49 AM
+struct interval_tree_double_span_iter {
+	struct rb_root_cached *itrees[2];
+	struct interval_tree_span_iter spans[2];
+	union {
+		unsigned long start_hole;
+		unsigned long start_used;
+	};
+	union {
+		unsigned long last_hole;
+		unsigned long last_used;
+	};
+	/* 0 = hole, 1 = used span[0], 2 = used span[1], -1 done iteration */
+	int is_used;
+};
This lacks a comment on how it is expected to be used, as done for struct interval_tree_span_iter. E.g. there is no value representing "used by both spans", which implies this is used to find a valid range on either side. Those points should be spelled out.
/*
 * This is a variation of the general interval_tree_span_iter that computes
 * the spans over the union of two different interval trees. Used ranges are
 * broken up and reported based on the tree that provides the interval. The
 * first span always takes priority. Like interval_tree_span_iter it is
 * greedy and the same value of is_used will not repeat on two iteration
 * cycles.
 */
+/*
+ * The IOVA to PFN map. The mapper automatically copies the PFNs into
+ * multiple
what is the mapper?
Let's just say "The map automatically"
	ret = iommu_unmap(domain, iova, size);
	/*
	 * It is a logic error in this code or a driver bug if the IOMMU unmaps
	 * something other than exactly as requested. This implies that the
	 * iommu driver may not fail unmap for reasons beyond bad arguments.
	 * Particularly, the iommu driver may not do a memory allocation on
	 * the unmap path.
	 */
didn't understand the last sentence.
Unmap path means the driver's domain_ops->unmap
+static void batch_skip_carry(struct pfn_batch *batch, unsigned int skip_pfns)
add a comment similar to batch_clear_carry()
It seems OK as is, the comment describing what carry is just a few lines above
+{
+	if (!batch->total_pfns)
+		return;
+	skip_pfns = min(batch->total_pfns, skip_pfns);
+	batch->pfns[0] += skip_pfns;
+	batch->npfns[0] -= skip_pfns;
What about when skip_pfns exceeds batch->npfns[0]? Looks like this works only if batch->total_pfns == batch->npfns[0]...
Right, at this point the batch has only 1 pfn and total_pfns == batch->npfns[0]
Let's add an assertion:
@@ -239,6 +239,8 @@ static void batch_skip_carry(struct pfn_batch *batch, unsigned int skip_pfns)
 {
 	if (!batch->total_pfns)
 		return;
+	if (IS_ENABLED(CONFIG_IOMMUFD_TEST))
+		WARN_ON(batch->total_pfns != batch->npfns[0]);
 	skip_pfns = min(batch->total_pfns, skip_pfns);
+/* true if the pfn could be added, false otherwise */
+static bool batch_add_pfn(struct pfn_batch *batch, unsigned long pfn)
+{
- /* FIXME: U16 is too small */
Performance or functional impact?
What would be the fix? And why can it not be done now?
More comments would be welcome.
Er, as you noticed this was fixed and the rebase to fix it was botched. It will be u32
+static void batch_unpin(struct pfn_batch *batch, struct iopt_pages *pages,
+			unsigned int offset, size_t npages)
+{
+	unsigned int cur = 0;
+
+	while (offset) {
+		if (batch->npfns[cur] > offset)
+			break;
+		offset -= batch->npfns[cur];
+		cur++;
+	}
'offset' usually means byte-addressed. 'index' is a better fit in this context.
It is the offset into the logical page array held in the batch.
'first_page_off' would be clearer
Thanks, Jason