On Mon, 5 Dec 2022 21:44:09 +0100 Justin Iurman wrote:
Please revert this patch.
Many people use the FQ qdisc, where packets wait for their Earliest Departure Time before being released.
The IOAM queue depth is a very important value and is already used.
Can you say more about the use? What signal do you derive from it? I do track qlen on Meta's servers but haven't found a strong use for it yet (I did for backlog drops but not the qlen itself).
Also, the draft says:
5.4.2.7. queue depth
The "queue depth" field is a 4-octet unsigned integer field. This field indicates the current length of the egress interface queue of the interface from where the packet is forwarded out. The queue depth is expressed as the current amount of memory buffers used by the queue (a packet could consume one or more memory buffers, depending on its size).

 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                          queue depth                          |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
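As an aside, since the draft defines the field as a 4-octet unsigned integer, encoding it on the wire is straightforward. A minimal sketch (the value 1234 here is an arbitrary, made-up buffer count):

```python
import struct

# Hypothetical queue depth, in memory buffers, per the draft's definition.
queue_depth = 1234

# Pack as a 4-octet unsigned integer in network byte order ("!I").
field = struct.pack("!I", queue_depth)
assert len(field) == 4
```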
It is relatively clear that the egress interface is the aggregate egress interface, not a subset of the interface.
Correct, even though the definition of an interface in RFC 9197 is quite abstract (see the end of section 4.4.2.2: "[...] could represent a physical interface, a virtual or logical interface, or even a queue").
If you have 32 TX queues on a NIC, all of them backlogged (running at line rate), sampling the queue length of just one of those queues under-reports the device's backlog by roughly 97%, since 31/32 of the queued packets sit in the other queues.
Why would it? Not sure I get your idea based on that example.
Because it measures the length of a single queue, not of the device as a whole.
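The arithmetic behind the ~97% figure can be made concrete with a toy illustration; the per-queue backlog of 100 packets is a made-up number, only the 32-queue count comes from the example above:

```python
tx_queues = 32
per_queue_backlog = 100  # hypothetical backlog on each saturated TX queue

# Actual device-wide backlog vs. what sampling a single queue reports.
total_backlog = tx_queues * per_queue_backlog
sampled = per_queue_backlog

# Fraction of the real backlog that the single-queue sample misses.
error = 1 - sampled / total_backlog
print(f"{error:.1%}")  # 31/32 of the backlog is invisible, i.e. ~97%
```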