Lee Jones lee.jones@linaro.org writes:
On Fri, 03 Dec 2021, Bjørn Mork wrote:
Just out of curiosity: Is this a real device, or was this the result of fuzzing around?
This is the result of "fuzzing around" on qemu. :)
https://syzkaller.appspot.com/bug?extid=2c9b6751e87ab8706cb3
OK. Makes sense. I'd be surprised if such a device worked on that other OS.
Not that it matters - it's obviously a bug to fix in any case. Good catch!
(We probably have many more of the same, assuming the device presents semi-sane values in the NCM parameter struct)
diff --git a/drivers/net/usb/cdc_ncm.c b/drivers/net/usb/cdc_ncm.c
index 24753a4da7e60..e303b522efb50 100644
--- a/drivers/net/usb/cdc_ncm.c
+++ b/drivers/net/usb/cdc_ncm.c
@@ -181,6 +181,8 @@ static u32 cdc_ncm_check_tx_max(struct usbnet *dev, u32 new_tx)
 	min = ctx->max_datagram_size + ctx->max_ndp_size + sizeof(struct usb_cdc_ncm_nth32);
 	max = min_t(u32, CDC_NCM_NTB_MAX_SIZE_TX, le32_to_cpu(ctx->ncm_parm.dwNtbOutMaxSize));
+	if (max == 0)
+		max = CDC_NCM_NTB_MAX_SIZE_TX; /* dwNtbOutMaxSize not set */
 
 	/* some devices set dwNtbOutMaxSize too low for the above default */
 	min = min(min, max);
It's been a while since I looked at this, so excuse me if I read it wrongly. But I think we need to catch more illegal/impossible values than just zero here? Any buffer size that cannot hold a single datagram is pointless.
Trying to figure out what I possibly meant to do with that
min = min(min, max);
I don't think it makes any sense? Does it? The "min" value we've carefully calculated allows one max-sized datagram plus headers. I don't think we should ever continue with a smaller buffer than that.
I was more confused with the comment you added to that code:
/* some devices set dwNtbOutMaxSize too low for the above default */
min = min(min, max);
... which looks as though it should solve the issue of an inadequate dwNtbOutMaxSize, but it almost does the opposite.
That's what I read too. I must admit that I cannot remember writing any of this stuff. But I trust git...
I initially changed this segment to use the max() macro instead, but the subsequent clamp_t() macro simply chooses the 'max' (0) value over the now-sane 'min' one.
Yes, but what if we adjust max here instead of min?
Which is why I chose
Or are there cases where this is valid?
I'm not an expert on the SKB code, but in my simple view of the world, if you wish to use a buffer for any amount of data, you should allocate space for it.
So that really should have been catching this bug with a
max = max(min, max)
I tried this. It didn't work either.
See the subsequent clamp_t() call a few lines down.
This I don't understand. If we have for example
new_tx = 0
max = 0
min = 1514(=datagram) + 8(=ndp) + 2(=1+1) * 4(=dpe) + 12(=nth) = 1542
then
max = max(min, max) = 1542
val = clamp_t(u32, new_tx, min, max) = 1542
so we return 1542 and everything is fine.
or maybe more readable
if (max < min)
	max = min;
What do you think?
So the size of the data added to the SKB is ctx->max_ndp_size, which is set up in cdc_ncm_init(). The code that does it looks like:
if (ctx->is_ndp16)
	ctx->max_ndp_size = sizeof(struct usb_cdc_ncm_ndp16) +
			    (ctx->tx_max_datagrams + 1) *
			    sizeof(struct usb_cdc_ncm_dpe16);
else
	ctx->max_ndp_size = sizeof(struct usb_cdc_ncm_ndp32) +
			    (ctx->tx_max_datagrams + 1) *
			    sizeof(struct usb_cdc_ncm_dpe32);
So this should be the size of the allocation too, right?
This driver doesn't add data to the skb. It allocates a new buffer and copies one or more skbs into it. I'm sure that could be improved too..
Without a complete rewrite we need to allocate new skbs large enough to hold
NTH - frame header
NDP x 1 - index table, with minimum two entries (1 datagram + terminator)
datagram x 1 - ethernet frame
This gives the minimum "tx_max" value.
The device is supposed to tell us the maximum "tx_max" value in dwNtbOutMaxSize. In theory. In practice we cannot trust the device, as you point out. We already deal with too-large values (which are commonly seen in real products), but we also need to deal with too-low values.
I believe "too low" is defined by the calculated minimum value, and the comment indicates that this is what I tried to express, but failed.
Why would the platform ever need to over-ride this? The platform can't make the data area smaller since there won't be enough room. It could perhaps make it bigger, but the min_t() and clamp_t() macros will end up choosing the above allocation anyway.
This leaves me feeling a little perplexed.
If there isn't a good reason for over-riding then I could simplify cdc_ncm_check_tx_max() greatly.
What do *you* think? :)
I also have the feeling that this could and should be simplified. This discussion shows that refactoring is required. git blame makes this all too embarrassing ;-)
Bjørn