virtio-net has two uses for hashes: one is RSS (Receive Side Scaling) and the other is hash reporting. Conventionally, the hash calculation was done by the VMM. However, computing the hash after the queue has been chosen defeats the purpose of RSS.
Another approach is to use an eBPF steering program. This approach has a different downside: it cannot report the calculated hash due to the restrictive nature of eBPF.
Introduce hash computation into the kernel to overcome these challenges.
An alternative solution is to extend the eBPF steering program so that it can report hashes to userspace, but the eBPF steering program is based on context rewrites, which are in feature freeze. We could adopt kfuncs, but they will not be UAPIs. We opt for ioctls to align with other relevant UAPIs (KVM and vhost_net).
QEMU patched to use this new feature is available at: https://github.com/daynix/qemu/tree/akihikodaki/rss2
The QEMU patches will soon be submitted upstream as an RFC as well.
This work will be presented at LPC 2024: https://lpc.events/event/18/contributions/1963/
V1 -> V2: Changed to introduce a new BPF program type.
Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com>
---
Changes in v3:
- Reverted back to add ioctl.
- Split patch "tun: Introduce virtio-net hashing feature" into
  "tun: Introduce virtio-net hash reporting feature" and
  "tun: Introduce virtio-net RSS".
- Changed to reuse hash values computed for automq instead of performing
  RSS hashing when hash reporting is requested but RSS is not.
- Extracted relevant data from struct tun_struct to keep it minimal.
- Added kernel-doc.
- Changed to allow calling TUNGETVNETHASHCAP before TUNSETIFF.
- Initialized num_buffers with 1.
- Added a test case for unclassified packets.
- Fixed error handling in tests.
- Changed tests to verify that the queue index will not overflow.
- Rebased.
- Link to v2: https://lore.kernel.org/r/20231015141644.260646-1-akihiko.odaki@daynix.com
---
Akihiko Odaki (9):
      skbuff: Introduce SKB_EXT_TUN_VNET_HASH
      virtio_net: Add functions for hashing
      net: flow_dissector: Export flow_keys_dissector_symmetric
      tap: Pad virtio header with zero
      tun: Pad virtio header with zero
      tun: Introduce virtio-net hash reporting feature
      tun: Introduce virtio-net RSS
      selftest: tun: Add tests for virtio-net hashing
      vhost/net: Support VIRTIO_NET_F_HASH_REPORT
 Documentation/networking/tuntap.rst  |   7 +
 drivers/net/Kconfig                  |   1 +
 drivers/net/tap.c                    |   2 +-
 drivers/net/tun.c                    | 255 ++++++++++++--
 drivers/vhost/net.c                  |  16 +-
 include/linux/skbuff.h               |  10 +
 include/linux/virtio_net.h           | 198 +++++++++++
 include/net/flow_dissector.h         |   1 +
 include/uapi/linux/if_tun.h          |  71 ++++
 net/core/flow_dissector.c            |   3 +-
 net/core/skbuff.c                    |   3 +
 tools/testing/selftests/net/Makefile |   2 +-
 tools/testing/selftests/net/tun.c    | 666 ++++++++++++++++++++++++++++++++++-
 13 files changed, 1195 insertions(+), 40 deletions(-)
---
base-commit: 46a0057a5853cbdb58211c19e89ba7777dc6fd50
change-id: 20240403-rss-e737d89efa77
Best regards,
This new extension will be used by tun to carry the hash values and types to report with virtio-net headers.
Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com>
---
 include/linux/skbuff.h | 10 ++++++++++
 net/core/skbuff.c      |  3 +++
 2 files changed, 13 insertions(+)
diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 29c3ea5b6e93..17cee21c9999 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -334,6 +334,13 @@ struct tc_skb_ext {
 };
 #endif
 
+#if IS_ENABLED(CONFIG_TUN)
+struct tun_vnet_hash_ext {
+	u32 value;
+	u16 report;
+};
+#endif
+
 struct sk_buff_head {
 	/* These two members must be first to match sk_buff. */
 	struct_group_tagged(sk_buff_list, list,
@@ -4718,6 +4725,9 @@ enum skb_ext_id {
 #endif
 #if IS_ENABLED(CONFIG_MCTP_FLOWS)
 	SKB_EXT_MCTP,
+#endif
+#if IS_ENABLED(CONFIG_TUN)
+	SKB_EXT_TUN_VNET_HASH,
 #endif
 	SKB_EXT_NUM, /* must be last */
 };
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 83f8cd8aa2d1..ce34523fd8de 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -4979,6 +4979,9 @@ static const u8 skb_ext_type_len[] = {
 #if IS_ENABLED(CONFIG_MCTP_FLOWS)
 	[SKB_EXT_MCTP] = SKB_EXT_CHUNKSIZEOF(struct mctp_flow),
 #endif
+#if IS_ENABLED(CONFIG_TUN)
+	[SKB_EXT_TUN_VNET_HASH] = SKB_EXT_CHUNKSIZEOF(struct tun_vnet_hash_ext),
+#endif
 };
 
 static __always_inline unsigned int skb_ext_total_length(void)
Akihiko Odaki wrote:
This new extension will be used by tun to carry the hash values and types to report with virtio-net headers.
Signed-off-by: Akihiko Odaki akihiko.odaki@daynix.com
 include/linux/skbuff.h | 10 ++++++++++
 net/core/skbuff.c      |  3 +++
 2 files changed, 13 insertions(+)
diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h index 29c3ea5b6e93..17cee21c9999 100644 --- a/include/linux/skbuff.h +++ b/include/linux/skbuff.h @@ -334,6 +334,13 @@ struct tc_skb_ext { }; #endif +#if IS_ENABLED(CONFIG_TUN) +struct tun_vnet_hash_ext {
- u32 value;
- u16 report;
+}; +#endif
This is unlikely to belong in skbuff.h
struct sk_buff_head { /* These two members must be first to match sk_buff. */ struct_group_tagged(sk_buff_list, list, @@ -4718,6 +4725,9 @@ enum skb_ext_id { #endif #if IS_ENABLED(CONFIG_MCTP_FLOWS) SKB_EXT_MCTP, +#endif +#if IS_ENABLED(CONFIG_TUN)
- SKB_EXT_TUN_VNET_HASH,
#endif SKB_EXT_NUM, /* must be last */ }; diff --git a/net/core/skbuff.c b/net/core/skbuff.c index 83f8cd8aa2d1..ce34523fd8de 100644 --- a/net/core/skbuff.c +++ b/net/core/skbuff.c @@ -4979,6 +4979,9 @@ static const u8 skb_ext_type_len[] = { #if IS_ENABLED(CONFIG_MCTP_FLOWS) [SKB_EXT_MCTP] = SKB_EXT_CHUNKSIZEOF(struct mctp_flow), #endif +#if IS_ENABLED(CONFIG_TUN)
- [SKB_EXT_TUN_VNET_HASH] = SKB_EXT_CHUNKSIZEOF(struct tun_vnet_hash_ext),
+#endif }; static __always_inline unsigned int skb_ext_total_length(void)
-- 2.46.0
They are useful to implement VIRTIO_NET_F_RSS and VIRTIO_NET_F_HASH_REPORT.
Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com>
---
 include/linux/virtio_net.h | 198 +++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 198 insertions(+)
diff --git a/include/linux/virtio_net.h b/include/linux/virtio_net.h
index 6c395a2600e8..7ee2e2f2625a 100644
--- a/include/linux/virtio_net.h
+++ b/include/linux/virtio_net.h
@@ -9,6 +9,183 @@
 #include <uapi/linux/tcp.h>
 #include <uapi/linux/virtio_net.h>
 
+struct virtio_net_hash {
+	u32 value;
+	u16 report;
+};
+
+struct virtio_net_toeplitz_state {
+	u32 hash;
+	u32 key_buffer;
+	const __be32 *key;
+};
+
+#define VIRTIO_NET_SUPPORTED_HASH_TYPES (VIRTIO_NET_RSS_HASH_TYPE_IPv4 | \
+					 VIRTIO_NET_RSS_HASH_TYPE_TCPv4 | \
+					 VIRTIO_NET_RSS_HASH_TYPE_UDPv4 | \
+					 VIRTIO_NET_RSS_HASH_TYPE_IPv6 | \
+					 VIRTIO_NET_RSS_HASH_TYPE_TCPv6 | \
+					 VIRTIO_NET_RSS_HASH_TYPE_UDPv6)
+
+#define VIRTIO_NET_RSS_MAX_KEY_SIZE 40
+
+static inline void virtio_net_toeplitz(struct virtio_net_toeplitz_state *state,
+				       const __be32 *input, size_t len)
+{
+	u32 key;
+
+	while (len) {
+		state->key++;
+		key = be32_to_cpu(*state->key);
+
+		for (u32 bit = BIT(31); bit; bit >>= 1) {
+			if (be32_to_cpu(*input) & bit)
+				state->hash ^= state->key_buffer;
+
+			state->key_buffer =
+				(state->key_buffer << 1) | !!(key & bit);
+		}
+
+		input++;
+		len--;
+	}
+}
+
+static inline u8 virtio_net_hash_key_length(u32 types)
+{
+	size_t len = 0;
+
+	if (types & VIRTIO_NET_HASH_REPORT_IPv4)
+		len = max(len,
+			  sizeof(struct flow_dissector_key_ipv4_addrs));
+
+	if (types &
+	    (VIRTIO_NET_HASH_REPORT_TCPv4 | VIRTIO_NET_HASH_REPORT_UDPv4))
+		len = max(len,
+			  sizeof(struct flow_dissector_key_ipv4_addrs) +
+			  sizeof(struct flow_dissector_key_ports));
+
+	if (types & VIRTIO_NET_HASH_REPORT_IPv6)
+		len = max(len,
+			  sizeof(struct flow_dissector_key_ipv6_addrs));
+
+	if (types &
+	    (VIRTIO_NET_HASH_REPORT_TCPv6 | VIRTIO_NET_HASH_REPORT_UDPv6))
+		len = max(len,
+			  sizeof(struct flow_dissector_key_ipv6_addrs) +
+			  sizeof(struct flow_dissector_key_ports));
+
+	return 4 + len;
+}
+
+static inline u32 virtio_net_hash_report(u32 types,
+					 struct flow_dissector_key_basic key)
+{
+	switch (key.n_proto) {
+	case htons(ETH_P_IP):
+		if (key.ip_proto == IPPROTO_TCP &&
+		    (types & VIRTIO_NET_RSS_HASH_TYPE_TCPv4))
+			return VIRTIO_NET_HASH_REPORT_TCPv4;
+
+		if (key.ip_proto == IPPROTO_UDP &&
+		    (types & VIRTIO_NET_RSS_HASH_TYPE_UDPv4))
+			return VIRTIO_NET_HASH_REPORT_UDPv4;
+
+		if (types & VIRTIO_NET_RSS_HASH_TYPE_IPv4)
+			return VIRTIO_NET_HASH_REPORT_IPv4;
+
+		return VIRTIO_NET_HASH_REPORT_NONE;
+
+	case htons(ETH_P_IPV6):
+		if (key.ip_proto == IPPROTO_TCP &&
+		    (types & VIRTIO_NET_RSS_HASH_TYPE_TCPv6))
+			return VIRTIO_NET_HASH_REPORT_TCPv6;
+
+		if (key.ip_proto == IPPROTO_UDP &&
+		    (types & VIRTIO_NET_RSS_HASH_TYPE_UDPv6))
+			return VIRTIO_NET_HASH_REPORT_UDPv6;
+
+		if (types & VIRTIO_NET_RSS_HASH_TYPE_IPv6)
+			return VIRTIO_NET_HASH_REPORT_IPv6;
+
+		return VIRTIO_NET_HASH_REPORT_NONE;
+
+	default:
+		return VIRTIO_NET_HASH_REPORT_NONE;
+	}
+}
+
+static inline bool virtio_net_hash_rss(const struct sk_buff *skb,
+				       u32 types, const __be32 *key,
+				       struct virtio_net_hash *hash)
+{
+	u16 report;
+	struct virtio_net_toeplitz_state toeplitz_state = {
+		.key_buffer = be32_to_cpu(*key),
+		.key = key
+	};
+	struct flow_keys flow;
+
+	if (!skb_flow_dissect_flow_keys(skb, &flow, 0))
+		return false;
+
+	report = virtio_net_hash_report(types, flow.basic);
+
+	switch (report) {
+	case VIRTIO_NET_HASH_REPORT_IPv4:
+		virtio_net_toeplitz(&toeplitz_state,
+				    (__be32 *)&flow.addrs.v4addrs,
+				    sizeof(flow.addrs.v4addrs) / 4);
+		break;
+
+	case VIRTIO_NET_HASH_REPORT_TCPv4:
+		virtio_net_toeplitz(&toeplitz_state,
+				    (__be32 *)&flow.addrs.v4addrs,
+				    sizeof(flow.addrs.v4addrs) / 4);
+		virtio_net_toeplitz(&toeplitz_state, &flow.ports.ports,
+				    1);
+		break;
+
+	case VIRTIO_NET_HASH_REPORT_UDPv4:
+		virtio_net_toeplitz(&toeplitz_state,
+				    (__be32 *)&flow.addrs.v4addrs,
+				    sizeof(flow.addrs.v4addrs) / 4);
+		virtio_net_toeplitz(&toeplitz_state, &flow.ports.ports,
+				    1);
+		break;
+
+	case VIRTIO_NET_HASH_REPORT_IPv6:
+		virtio_net_toeplitz(&toeplitz_state,
+				    (__be32 *)&flow.addrs.v6addrs,
+				    sizeof(flow.addrs.v6addrs) / 4);
+		break;
+
+	case VIRTIO_NET_HASH_REPORT_TCPv6:
+		virtio_net_toeplitz(&toeplitz_state,
+				    (__be32 *)&flow.addrs.v6addrs,
+				    sizeof(flow.addrs.v6addrs) / 4);
+		virtio_net_toeplitz(&toeplitz_state, &flow.ports.ports,
+				    1);
+		break;
+
+	case VIRTIO_NET_HASH_REPORT_UDPv6:
+		virtio_net_toeplitz(&toeplitz_state,
+				    (__be32 *)&flow.addrs.v6addrs,
+				    sizeof(flow.addrs.v6addrs) / 4);
+		virtio_net_toeplitz(&toeplitz_state, &flow.ports.ports,
+				    1);
+		break;
+
+	default:
+		return false;
+	}
+
+	hash->value = toeplitz_state.hash;
+	hash->report = report;
+
+	return true;
+}
+
 static inline bool virtio_net_hdr_match_proto(__be16 protocol, __u8 gso_type)
 {
 	switch (gso_type & ~VIRTIO_NET_HDR_GSO_ECN) {
@@ -239,4 +416,25 @@ static inline int virtio_net_hdr_from_skb(const struct sk_buff *skb,
 	return 0;
 }
 
+static inline int virtio_net_hdr_v1_hash_from_skb(const struct sk_buff *skb,
+						  struct virtio_net_hdr_v1_hash *hdr,
+						  bool has_data_valid,
+						  int vlan_hlen,
+						  const struct virtio_net_hash *hash)
+{
+	int ret;
+
+	memset(hdr, 0, sizeof(*hdr));
+
+	ret = virtio_net_hdr_from_skb(skb, (struct virtio_net_hdr *)hdr,
+				      true, has_data_valid, vlan_hlen);
+	if (!ret) {
+		hdr->hdr.num_buffers = cpu_to_le16(1);
+		hdr->hash_value = cpu_to_le32(hash->value);
+		hdr->hash_report = cpu_to_le16(hash->report);
+	}
+
+	return ret;
+}
+
 #endif /* _LINUX_VIRTIO_NET_H */
+ +static inline void virtio_net_toeplitz(struct virtio_net_toeplitz_state *state, + const __be32 *input, size_t len)
The function calculates a hash value but its name does not make it clear. Consider adding a 'calc'.
+{ + u32 key; + + while (len) { + state->key++; + key = be32_to_cpu(*state->key);
You perform be32_to_cpu to support both CPU endiannesses. If you follow it with an unconditional swab32, you could run the loop on a more natural 0 to 31, always referring to bit 0 and avoiding !!(key & bit):
key = swab32(be32_to_cpu(*state->key)); for (i = 0; i < 32; i++, key >>= 1) { if (be32_to_cpu(*input) & 1) state->hash ^= state->key_buffer; state->key_buffer = (state->key_buffer << 1) | (key & 1); }
+ + for (u32 bit = BIT(31); bit; bit >>= 1) { + if (be32_to_cpu(*input) & bit) + state->hash ^= state->key_buffer; + + state->key_buffer = + (state->key_buffer << 1) | !!(key & bit); + } + + input++; + len--; + } +} + +static inline u32 virtio_net_hash_report(u32 types, + struct flow_dissector_key_basic key) +{ + switch (key.n_proto) { + case htons(ETH_P_IP):
Other parts of the code use be_to_cpu and cpu_to_be. Why use the legacy htons() here?
+ if (key.ip_proto == IPPROTO_TCP && + (types & VIRTIO_NET_RSS_HASH_TYPE_TCPv4)) + return VIRTIO_NET_HASH_REPORT_TCPv4; + + if (key.ip_proto == IPPROTO_UDP && + (types & VIRTIO_NET_RSS_HASH_TYPE_UDPv4)) + return VIRTIO_NET_HASH_REPORT_UDPv4; + + if (types & VIRTIO_NET_RSS_HASH_TYPE_IPv4) + return VIRTIO_NET_HASH_REPORT_IPv4; + + return VIRTIO_NET_HASH_REPORT_NONE; + + case htons(ETH_P_IPV6): + if (key.ip_proto == IPPROTO_TCP && + (types & VIRTIO_NET_RSS_HASH_TYPE_TCPv6)) + return VIRTIO_NET_HASH_REPORT_TCPv6; + + if (key.ip_proto == IPPROTO_UDP && + (types & VIRTIO_NET_RSS_HASH_TYPE_UDPv6)) + return VIRTIO_NET_HASH_REPORT_UDPv6; + + if (types & VIRTIO_NET_RSS_HASH_TYPE_IPv6) + return VIRTIO_NET_HASH_REPORT_IPv6; + + return VIRTIO_NET_HASH_REPORT_NONE; + + default: + return VIRTIO_NET_HASH_REPORT_NONE; + } +} #endif /* _LINUX_VIRTIO_NET_H */
+static inline void virtio_net_toeplitz(struct virtio_net_toeplitz_state *state,
const __be32 *input, size_t len)
The function calculates a hash value but its name does not make it clear. Consider adding a 'calc'.
+{
- u32 key;
- while (len) {
state->key++;
key = be32_to_cpu(*state->key);
You perform be32_to_cpu to support both CPU endiannesses. If you follow it with an unconditional swab32, you could run the loop on a more natural 0 to 31, always referring to bit 0 and avoiding !!(key & bit):
key = swab32(be32_to_cpu(*state->key)); for (i = 0; i < 32; i++, key >>= 1) { if (be32_to_cpu(*input) & 1) state->hash ^= state->key_buffer; state->key_buffer = (state->key_buffer << 1) | (key & 1); }
Correcting myself: in the previous version, 'input' was tested against the same bit. The advantage is less clear now, as it replaces !! with an extra shift. However, since little-endian CPUs are more common, the combination swab32(be32_to_cpu(x)) will actually become a no-op. A similar tactic may be applied to 'input' by assigning it to a local variable. This may produce a more efficient version, but not necessarily one that is easier to understand.
key = bswap32(be32_to_cpu(*state->key)); for (u32 bit = BIT(31); bit; bit >>= 1, key >>= 1) { if (be32_to_cpu(*input) & bit) state->hash ^= state->key_buffer; state->key_buffer = (state->key_buffer << 1) | (key & 1); }
for (u32 bit = BIT(31); bit; bit >>= 1) {
if (be32_to_cpu(*input) & bit)
state->hash ^= state->key_buffer;
state->key_buffer =
(state->key_buffer << 1) | !!(key & bit);
}
input++;
len--;
- }
+}
On 2024/09/16 10:01, gur.stavi@huawei.com wrote:
+static inline void virtio_net_toeplitz(struct virtio_net_toeplitz_state *state,
const __be32 *input, size_t len)
The function calculates a hash value but its name does not make it clear. Consider adding a 'calc'.
+{
- u32 key;
- while (len) {
state->key++;
key = be32_to_cpu(*state->key);
You perform be32_to_cpu to support both CPU endiannesses. If you follow it with an unconditional swab32, you could run the loop on a more natural 0 to 31, always referring to bit 0 and avoiding !!(key & bit):
key = swab32(be32_to_cpu(*state->key)); for (i = 0; i < 32; i++, key >>= 1) { if (be32_to_cpu(*input) & 1) state->hash ^= state->key_buffer; state->key_buffer = (state->key_buffer << 1) | (key & 1); }
Correcting myself: in the previous version, 'input' was tested against the same bit. The advantage is less clear now, as it replaces !! with an extra shift. However, since little-endian CPUs are more common, the combination swab32(be32_to_cpu(x)) will actually become a no-op. A similar tactic may be applied to 'input' by assigning it to a local variable. This may produce a more efficient version, but not necessarily one that is easier to understand.
key = bswap32(be32_to_cpu(*state->key)); for (u32 bit = BIT(31); bit; bit >>= 1, key >>= 1) { if (be32_to_cpu(*input) & bit) state->hash ^= state->key_buffer; state->key_buffer = (state->key_buffer << 1) | (key & 1); }
This unfortunately does not work. swab32() works at the *byte* level, but we need to reverse the order of the *bits*. bitrev32() is what we need, but it cannot eliminate the be32_to_cpu().
Regards, Akihiko Odaki
+static inline bool virtio_net_hash_rss(const struct sk_buff *skb,
u32 types, const __be32 *key,
struct virtio_net_hash *hash)
Based on the guidelines, this function seems imperative rather than a predicate, and should return an error-code integer.
https://www.kernel.org/doc/html/latest/process/coding-style.html#function-re...
+{
- u16 report;
- struct virtio_net_toeplitz_state toeplitz_state = {
.key_buffer = be32_to_cpu(*key),
.key = key
- };
- struct flow_keys flow;
- if (!skb_flow_dissect_flow_keys(skb, &flow, 0))
return false;
- report = virtio_net_hash_report(types, flow.basic);
- switch (report) {
- case VIRTIO_NET_HASH_REPORT_IPv4:
virtio_net_toeplitz(&toeplitz_state,
(__be32 *)&flow.addrs.v4addrs,
sizeof(flow.addrs.v4addrs) / 4);
break;
- case VIRTIO_NET_HASH_REPORT_TCPv4:
virtio_net_toeplitz(&toeplitz_state,
(__be32 *)&flow.addrs.v4addrs,
sizeof(flow.addrs.v4addrs) / 4);
virtio_net_toeplitz(&toeplitz_state, &flow.ports.ports,
1);
break;
- case VIRTIO_NET_HASH_REPORT_UDPv4:
virtio_net_toeplitz(&toeplitz_state,
(__be32 *)&flow.addrs.v4addrs,
sizeof(flow.addrs.v4addrs) / 4);
virtio_net_toeplitz(&toeplitz_state, &flow.ports.ports,
1);
break;
- case VIRTIO_NET_HASH_REPORT_IPv6:
virtio_net_toeplitz(&toeplitz_state,
(__be32 *)&flow.addrs.v6addrs,
sizeof(flow.addrs.v6addrs) / 4);
break;
- case VIRTIO_NET_HASH_REPORT_TCPv6:
virtio_net_toeplitz(&toeplitz_state,
(__be32 *)&flow.addrs.v6addrs,
sizeof(flow.addrs.v6addrs) / 4);
virtio_net_toeplitz(&toeplitz_state, &flow.ports.ports,
1);
break;
- case VIRTIO_NET_HASH_REPORT_UDPv6:
virtio_net_toeplitz(&toeplitz_state,
(__be32 *)&flow.addrs.v6addrs,
sizeof(flow.addrs.v6addrs) / 4);
virtio_net_toeplitz(&toeplitz_state, &flow.ports.ports,
1);
break;
- default:
return false;
- }
- hash->value = toeplitz_state.hash;
- hash->report = report;
- return true;
+}
Akihiko Odaki wrote:
They are useful to implement VIRTIO_NET_F_RSS and VIRTIO_NET_F_HASH_REPORT.
Signed-off-by: Akihiko Odaki akihiko.odaki@daynix.com
 include/linux/virtio_net.h | 198 +++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 198 insertions(+)
diff --git a/include/linux/virtio_net.h b/include/linux/virtio_net.h index 6c395a2600e8..7ee2e2f2625a 100644 --- a/include/linux/virtio_net.h +++ b/include/linux/virtio_net.h @@ -9,6 +9,183 @@ #include <uapi/linux/tcp.h> #include <uapi/linux/virtio_net.h> +struct virtio_net_hash {
- u32 value;
- u16 report;
+};
+struct virtio_net_toeplitz_state {
- u32 hash;
- u32 key_buffer;
- const __be32 *key;
+};
+#define VIRTIO_NET_SUPPORTED_HASH_TYPES (VIRTIO_NET_RSS_HASH_TYPE_IPv4 | \
VIRTIO_NET_RSS_HASH_TYPE_TCPv4 | \
VIRTIO_NET_RSS_HASH_TYPE_UDPv4 | \
VIRTIO_NET_RSS_HASH_TYPE_IPv6 | \
VIRTIO_NET_RSS_HASH_TYPE_TCPv6 | \
VIRTIO_NET_RSS_HASH_TYPE_UDPv6)
+#define VIRTIO_NET_RSS_MAX_KEY_SIZE 40
+static inline void virtio_net_toeplitz(struct virtio_net_toeplitz_state *state,
const __be32 *input, size_t len)
+{
- u32 key;
- while (len) {
state->key++;
key = be32_to_cpu(*state->key);
for (u32 bit = BIT(31); bit; bit >>= 1) {
if (be32_to_cpu(*input) & bit)
state->hash ^= state->key_buffer;
state->key_buffer =
(state->key_buffer << 1) | !!(key & bit);
}
input++;
len--;
- }
+}
+static inline u8 virtio_net_hash_key_length(u32 types) +{
- size_t len = 0;
- if (types & VIRTIO_NET_HASH_REPORT_IPv4)
len = max(len,
sizeof(struct flow_dissector_key_ipv4_addrs));
- if (types &
(VIRTIO_NET_HASH_REPORT_TCPv4 | VIRTIO_NET_HASH_REPORT_UDPv4))
len = max(len,
sizeof(struct flow_dissector_key_ipv4_addrs) +
sizeof(struct flow_dissector_key_ports));
- if (types & VIRTIO_NET_HASH_REPORT_IPv6)
len = max(len,
sizeof(struct flow_dissector_key_ipv6_addrs));
- if (types &
(VIRTIO_NET_HASH_REPORT_TCPv6 | VIRTIO_NET_HASH_REPORT_UDPv6))
len = max(len,
sizeof(struct flow_dissector_key_ipv6_addrs) +
sizeof(struct flow_dissector_key_ports));
- return 4 + len;
Avoid raw constants like this 4. What field does it capture?
Instead of working from shortest to longest and using max, how about the inverse: return as soon as a match is found?
+}
+static inline u32 virtio_net_hash_report(u32 types,
struct flow_dissector_key_basic key)
+{
- switch (key.n_proto) {
- case htons(ETH_P_IP):
if (key.ip_proto == IPPROTO_TCP &&
(types & VIRTIO_NET_RSS_HASH_TYPE_TCPv4))
return VIRTIO_NET_HASH_REPORT_TCPv4;
if (key.ip_proto == IPPROTO_UDP &&
(types & VIRTIO_NET_RSS_HASH_TYPE_UDPv4))
return VIRTIO_NET_HASH_REPORT_UDPv4;
if (types & VIRTIO_NET_RSS_HASH_TYPE_IPv4)
return VIRTIO_NET_HASH_REPORT_IPv4;
return VIRTIO_NET_HASH_REPORT_NONE;
- case htons(ETH_P_IPV6):
if (key.ip_proto == IPPROTO_TCP &&
(types & VIRTIO_NET_RSS_HASH_TYPE_TCPv6))
return VIRTIO_NET_HASH_REPORT_TCPv6;
if (key.ip_proto == IPPROTO_UDP &&
(types & VIRTIO_NET_RSS_HASH_TYPE_UDPv6))
return VIRTIO_NET_HASH_REPORT_UDPv6;
if (types & VIRTIO_NET_RSS_HASH_TYPE_IPv6)
return VIRTIO_NET_HASH_REPORT_IPv6;
return VIRTIO_NET_HASH_REPORT_NONE;
- default:
return VIRTIO_NET_HASH_REPORT_NONE;
- }
+}
+static inline bool virtio_net_hash_rss(const struct sk_buff *skb,
u32 types, const __be32 *key,
struct virtio_net_hash *hash)
+{
- u16 report;
nit: move below the struct declarations.
- struct virtio_net_toeplitz_state toeplitz_state = {
.key_buffer = be32_to_cpu(*key),
.key = key
- };
- struct flow_keys flow;
- if (!skb_flow_dissect_flow_keys(skb, &flow, 0))
return false;
- report = virtio_net_hash_report(types, flow.basic);
- switch (report) {
- case VIRTIO_NET_HASH_REPORT_IPv4:
virtio_net_toeplitz(&toeplitz_state,
(__be32 *)&flow.addrs.v4addrs,
sizeof(flow.addrs.v4addrs) / 4);
break;
- case VIRTIO_NET_HASH_REPORT_TCPv4:
virtio_net_toeplitz(&toeplitz_state,
(__be32 *)&flow.addrs.v4addrs,
sizeof(flow.addrs.v4addrs) / 4);
virtio_net_toeplitz(&toeplitz_state, &flow.ports.ports,
1);
break;
- case VIRTIO_NET_HASH_REPORT_UDPv4:
virtio_net_toeplitz(&toeplitz_state,
(__be32 *)&flow.addrs.v4addrs,
sizeof(flow.addrs.v4addrs) / 4);
virtio_net_toeplitz(&toeplitz_state, &flow.ports.ports,
1);
break;
- case VIRTIO_NET_HASH_REPORT_IPv6:
virtio_net_toeplitz(&toeplitz_state,
(__be32 *)&flow.addrs.v6addrs,
sizeof(flow.addrs.v6addrs) / 4);
break;
- case VIRTIO_NET_HASH_REPORT_TCPv6:
virtio_net_toeplitz(&toeplitz_state,
(__be32 *)&flow.addrs.v6addrs,
sizeof(flow.addrs.v6addrs) / 4);
virtio_net_toeplitz(&toeplitz_state, &flow.ports.ports,
1);
break;
- case VIRTIO_NET_HASH_REPORT_UDPv6:
virtio_net_toeplitz(&toeplitz_state,
(__be32 *)&flow.addrs.v6addrs,
sizeof(flow.addrs.v6addrs) / 4);
virtio_net_toeplitz(&toeplitz_state, &flow.ports.ports,
1);
break;
- default:
return false;
- }
- hash->value = toeplitz_state.hash;
- hash->report = report;
- return true;
+}
static inline bool virtio_net_hdr_match_proto(__be16 protocol, __u8 gso_type) { switch (gso_type & ~VIRTIO_NET_HDR_GSO_ECN) { @@ -239,4 +416,25 @@ static inline int virtio_net_hdr_from_skb(const struct sk_buff *skb, return 0; } +static inline int virtio_net_hdr_v1_hash_from_skb(const struct sk_buff *skb,
struct virtio_net_hdr_v1_hash *hdr,
bool has_data_valid,
int vlan_hlen,
const struct virtio_net_hash *hash)
+{
- int ret;
- memset(hdr, 0, sizeof(*hdr));
- ret = virtio_net_hdr_from_skb(skb, (struct virtio_net_hdr *)hdr,
true, has_data_valid, vlan_hlen);
- if (!ret) {
hdr->hdr.num_buffers = cpu_to_le16(1);
hdr->hash_value = cpu_to_le32(hash->value);
hdr->hash_report = cpu_to_le16(hash->report);
- }
- return ret;
+}
I don't think this helper is very helpful, as all the information it sets is first passed in. Just set the hdr fields directly in the caller; that makes the control flow easier to follow.
On 2024/09/18 14:50, Willem de Bruijn wrote:
Akihiko Odaki wrote:
They are useful to implement VIRTIO_NET_F_RSS and VIRTIO_NET_F_HASH_REPORT.
Signed-off-by: Akihiko Odaki akihiko.odaki@daynix.com
 include/linux/virtio_net.h | 198 +++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 198 insertions(+)
diff --git a/include/linux/virtio_net.h b/include/linux/virtio_net.h index 6c395a2600e8..7ee2e2f2625a 100644 --- a/include/linux/virtio_net.h +++ b/include/linux/virtio_net.h @@ -9,6 +9,183 @@ #include <uapi/linux/tcp.h> #include <uapi/linux/virtio_net.h> +struct virtio_net_hash {
- u32 value;
- u16 report;
+};
+struct virtio_net_toeplitz_state {
- u32 hash;
- u32 key_buffer;
- const __be32 *key;
+};
+#define VIRTIO_NET_SUPPORTED_HASH_TYPES (VIRTIO_NET_RSS_HASH_TYPE_IPv4 | \
VIRTIO_NET_RSS_HASH_TYPE_TCPv4 | \
VIRTIO_NET_RSS_HASH_TYPE_UDPv4 | \
VIRTIO_NET_RSS_HASH_TYPE_IPv6 | \
VIRTIO_NET_RSS_HASH_TYPE_TCPv6 | \
VIRTIO_NET_RSS_HASH_TYPE_UDPv6)
+#define VIRTIO_NET_RSS_MAX_KEY_SIZE 40
+static inline void virtio_net_toeplitz(struct virtio_net_toeplitz_state *state,
const __be32 *input, size_t len)
+{
- u32 key;
- while (len) {
state->key++;
key = be32_to_cpu(*state->key);
for (u32 bit = BIT(31); bit; bit >>= 1) {
if (be32_to_cpu(*input) & bit)
state->hash ^= state->key_buffer;
state->key_buffer =
(state->key_buffer << 1) | !!(key & bit);
}
input++;
len--;
- }
+}
+static inline u8 virtio_net_hash_key_length(u32 types) +{
- size_t len = 0;
- if (types & VIRTIO_NET_HASH_REPORT_IPv4)
len = max(len,
sizeof(struct flow_dissector_key_ipv4_addrs));
- if (types &
(VIRTIO_NET_HASH_REPORT_TCPv4 | VIRTIO_NET_HASH_REPORT_UDPv4))
len = max(len,
sizeof(struct flow_dissector_key_ipv4_addrs) +
sizeof(struct flow_dissector_key_ports));
- if (types & VIRTIO_NET_HASH_REPORT_IPv6)
len = max(len,
sizeof(struct flow_dissector_key_ipv6_addrs));
- if (types &
(VIRTIO_NET_HASH_REPORT_TCPv6 | VIRTIO_NET_HASH_REPORT_UDPv6))
len = max(len,
sizeof(struct flow_dissector_key_ipv6_addrs) +
sizeof(struct flow_dissector_key_ports));
- return 4 + len;
Avoid raw constants like this 4. What field does it capture?
It is: sizeof_field(struct virtio_net_toeplitz_state, key_buffer) I'll replace it with v4.
Instead of working from shortest to longest and using max, how about the inverse and return as soon as a match is found.
I think it is less error-prone to use max() as it does not require sorting the cases by size. The compiler should optimize it into the form you suggested anyway.
+}
+static inline u32 virtio_net_hash_report(u32 types,
struct flow_dissector_key_basic key)
+{
- switch (key.n_proto) {
- case htons(ETH_P_IP):
if (key.ip_proto == IPPROTO_TCP &&
(types & VIRTIO_NET_RSS_HASH_TYPE_TCPv4))
return VIRTIO_NET_HASH_REPORT_TCPv4;
if (key.ip_proto == IPPROTO_UDP &&
(types & VIRTIO_NET_RSS_HASH_TYPE_UDPv4))
return VIRTIO_NET_HASH_REPORT_UDPv4;
if (types & VIRTIO_NET_RSS_HASH_TYPE_IPv4)
return VIRTIO_NET_HASH_REPORT_IPv4;
return VIRTIO_NET_HASH_REPORT_NONE;
- case htons(ETH_P_IPV6):
if (key.ip_proto == IPPROTO_TCP &&
(types & VIRTIO_NET_RSS_HASH_TYPE_TCPv6))
return VIRTIO_NET_HASH_REPORT_TCPv6;
if (key.ip_proto == IPPROTO_UDP &&
(types & VIRTIO_NET_RSS_HASH_TYPE_UDPv6))
return VIRTIO_NET_HASH_REPORT_UDPv6;
if (types & VIRTIO_NET_RSS_HASH_TYPE_IPv6)
return VIRTIO_NET_HASH_REPORT_IPv6;
return VIRTIO_NET_HASH_REPORT_NONE;
- default:
return VIRTIO_NET_HASH_REPORT_NONE;
- }
+}
+static inline bool virtio_net_hash_rss(const struct sk_buff *skb,
u32 types, const __be32 *key,
struct virtio_net_hash *hash)
+{
- u16 report;
nit: move below the struct declarations.
I'll change accordingly with v4.
- struct virtio_net_toeplitz_state toeplitz_state = {
.key_buffer = be32_to_cpu(*key),
.key = key
- };
- struct flow_keys flow;
- if (!skb_flow_dissect_flow_keys(skb, &flow, 0))
return false;
- report = virtio_net_hash_report(types, flow.basic);
- switch (report) {
- case VIRTIO_NET_HASH_REPORT_IPv4:
virtio_net_toeplitz(&toeplitz_state,
(__be32 *)&flow.addrs.v4addrs,
sizeof(flow.addrs.v4addrs) / 4);
break;
- case VIRTIO_NET_HASH_REPORT_TCPv4:
virtio_net_toeplitz(&toeplitz_state,
(__be32 *)&flow.addrs.v4addrs,
sizeof(flow.addrs.v4addrs) / 4);
virtio_net_toeplitz(&toeplitz_state, &flow.ports.ports,
1);
break;
- case VIRTIO_NET_HASH_REPORT_UDPv4:
virtio_net_toeplitz(&toeplitz_state,
(__be32 *)&flow.addrs.v4addrs,
sizeof(flow.addrs.v4addrs) / 4);
virtio_net_toeplitz(&toeplitz_state, &flow.ports.ports,
1);
break;
- case VIRTIO_NET_HASH_REPORT_IPv6:
virtio_net_toeplitz(&toeplitz_state,
(__be32 *)&flow.addrs.v6addrs,
sizeof(flow.addrs.v6addrs) / 4);
break;
- case VIRTIO_NET_HASH_REPORT_TCPv6:
virtio_net_toeplitz(&toeplitz_state,
(__be32 *)&flow.addrs.v6addrs,
sizeof(flow.addrs.v6addrs) / 4);
virtio_net_toeplitz(&toeplitz_state, &flow.ports.ports,
1);
break;
- case VIRTIO_NET_HASH_REPORT_UDPv6:
virtio_net_toeplitz(&toeplitz_state,
(__be32 *)&flow.addrs.v6addrs,
sizeof(flow.addrs.v6addrs) / 4);
virtio_net_toeplitz(&toeplitz_state, &flow.ports.ports,
1);
break;
- default:
return false;
- }
- hash->value = toeplitz_state.hash;
- hash->report = report;
- return true;
+}
static inline bool virtio_net_hdr_match_proto(__be16 protocol, __u8 gso_type)
{
	switch (gso_type & ~VIRTIO_NET_HDR_GSO_ECN) {
@@ -239,4 +416,25 @@ static inline int virtio_net_hdr_from_skb(const struct sk_buff *skb, return 0; } +static inline int virtio_net_hdr_v1_hash_from_skb(const struct sk_buff *skb,
struct virtio_net_hdr_v1_hash *hdr,
bool has_data_valid,
int vlan_hlen,
const struct virtio_net_hash *hash)
+{
- int ret;
- memset(hdr, 0, sizeof(*hdr));
- ret = virtio_net_hdr_from_skb(skb, (struct virtio_net_hdr *)hdr,
true, has_data_valid, vlan_hlen);
- if (!ret) {
hdr->hdr.num_buffers = cpu_to_le16(1);
hdr->hash_value = cpu_to_le32(hash->value);
hdr->hash_report = cpu_to_le16(hash->report);
- }
- return ret;
+}
I don't think this helper is very helpful, as all the information it sets is first passed in. Just set the hdr fields directly in the caller; that makes the control flow easier to follow.
I'll remove it in v4.
Regards, Akihiko Odaki
flow_keys_dissector_symmetric is useful to derive a symmetric hash and to know its source such as IPv4, IPv6, TCP, and UDP.
Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com>
---
 include/net/flow_dissector.h | 1 +
 net/core/flow_dissector.c    | 3 ++-
 2 files changed, 3 insertions(+), 1 deletion(-)
diff --git a/include/net/flow_dissector.h b/include/net/flow_dissector.h
index ced79dc8e856..d01c1ec77b7d 100644
--- a/include/net/flow_dissector.h
+++ b/include/net/flow_dissector.h
@@ -423,6 +423,7 @@ __be32 flow_get_u32_src(const struct flow_keys *flow);
 __be32 flow_get_u32_dst(const struct flow_keys *flow);
 
 extern struct flow_dissector flow_keys_dissector;
+extern struct flow_dissector flow_keys_dissector_symmetric;
 extern struct flow_dissector flow_keys_basic_dissector;
 
 /* struct flow_keys_digest:
diff --git a/net/core/flow_dissector.c b/net/core/flow_dissector.c
index 0e638a37aa09..9822988f2d49 100644
--- a/net/core/flow_dissector.c
+++ b/net/core/flow_dissector.c
@@ -1852,7 +1852,8 @@ void make_flow_keys_digest(struct flow_keys_digest *digest,
 }
 EXPORT_SYMBOL(make_flow_keys_digest);
 
-static struct flow_dissector flow_keys_dissector_symmetric __read_mostly;
+struct flow_dissector flow_keys_dissector_symmetric __read_mostly;
+EXPORT_SYMBOL(flow_keys_dissector_symmetric);
 
 u32 __skb_get_hash_symmetric_net(const struct net *net, const struct sk_buff *skb)
 {
tap used to simply advance the iov_iter when it needed to pad the virtio header. This leaves garbage in the buffer as is and prevents telling whether the header is padded or contains real data.
In theory, a user of tap can fill the buffer with zeros before calling read() to avoid this problem, but leaving garbage in the buffer is awkward anyway, so fill the padding with zeros in tap.
Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com>
---
 drivers/net/tap.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/tap.c b/drivers/net/tap.c
index 77574f7a3bd4..ba044302ccc6 100644
--- a/drivers/net/tap.c
+++ b/drivers/net/tap.c
@@ -813,7 +813,7 @@ static ssize_t tap_put_user(struct tap_queue *q,
 			    sizeof(vnet_hdr))
 			return -EFAULT;
 
-		iov_iter_advance(iter, vnet_hdr_len - sizeof(vnet_hdr));
+		iov_iter_zero(vnet_hdr_len - sizeof(vnet_hdr), iter);
 	}
 
 	total = vnet_hdr_len;
 	total += skb->len;
Akihiko Odaki wrote:
tap used to simply advance the iov_iter when it needed to pad the virtio header. This left the garbage in the buffer as is, making it impossible to tell whether the header is padded or contains real data.
In theory, a user of tap can fill the buffer with zeros before calling read() to avoid this problem, but leaving garbage in the buffer is awkward anyway, so fill the buffer in tap instead.
This description does not explain the need for this operation.
The new extension seemingly requires these bytes to be cleared? Please make that explicit.
Signed-off-by: Akihiko Odaki akihiko.odaki@daynix.com
drivers/net/tap.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/tap.c b/drivers/net/tap.c index 77574f7a3bd4..ba044302ccc6 100644 --- a/drivers/net/tap.c +++ b/drivers/net/tap.c @@ -813,7 +813,7 @@ static ssize_t tap_put_user(struct tap_queue *q, sizeof(vnet_hdr)) return -EFAULT;
- iov_iter_advance(iter, vnet_hdr_len - sizeof(vnet_hdr));
+ iov_iter_zero(vnet_hdr_len - sizeof(vnet_hdr), iter);
} total = vnet_hdr_len; total += skb->len;
-- 2.46.0
tun used to simply advance the iov_iter when it needed to pad the virtio header. This left the garbage in the buffer as is, making it impossible to tell whether the header is padded or contains real data.
In theory, a user of tun can fill the buffer with zeros before calling read() to avoid this problem, but leaving garbage in the buffer is awkward anyway, so fill the buffer in tun instead.
Signed-off-by: Akihiko Odaki akihiko.odaki@daynix.com --- drivers/net/tun.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/net/tun.c b/drivers/net/tun.c index 1d06c560c5e6..9d93ab9ee58f 100644 --- a/drivers/net/tun.c +++ b/drivers/net/tun.c @@ -2073,7 +2073,7 @@ static ssize_t tun_put_user_xdp(struct tun_struct *tun, if (unlikely(copy_to_iter(&gso, sizeof(gso), iter) != sizeof(gso))) return -EFAULT; - iov_iter_advance(iter, vnet_hdr_sz - sizeof(gso)); + iov_iter_zero(vnet_hdr_sz - sizeof(gso), iter); }
ret = copy_to_iter(xdp_frame->data, size, iter) + vnet_hdr_sz; @@ -2146,7 +2146,7 @@ static ssize_t tun_put_user(struct tun_struct *tun, if (copy_to_iter(&gso, sizeof(gso), iter) != sizeof(gso)) return -EFAULT;
- iov_iter_advance(iter, vnet_hdr_sz - sizeof(gso)); + iov_iter_zero(vnet_hdr_sz - sizeof(gso), iter); }
if (vlan_hlen) {
Allow the guest to reuse the hash value to make receive steering consistent between the host and guest, and to save hash computation.
Signed-off-by: Akihiko Odaki akihiko.odaki@daynix.com --- Documentation/networking/tuntap.rst | 7 ++ drivers/net/Kconfig | 1 + drivers/net/tun.c | 146 +++++++++++++++++++++++++++++++----- include/uapi/linux/if_tun.h | 44 +++++++++++ 4 files changed, 180 insertions(+), 18 deletions(-)
diff --git a/Documentation/networking/tuntap.rst b/Documentation/networking/tuntap.rst index 4d7087f727be..86b4ae8caa8a 100644 --- a/Documentation/networking/tuntap.rst +++ b/Documentation/networking/tuntap.rst @@ -206,6 +206,13 @@ enable is true we enable it, otherwise we disable it:: return ioctl(fd, TUNSETQUEUE, (void *)&ifr); }
+3.4 Reference +------------- + +``linux/if_tun.h`` defines the interface described below: + +.. kernel-doc:: include/uapi/linux/if_tun.h + Universal TUN/TAP device driver Frequently Asked Question =========================================================
diff --git a/drivers/net/Kconfig b/drivers/net/Kconfig index 9920b3a68ed1..e2a7bd703550 100644 --- a/drivers/net/Kconfig +++ b/drivers/net/Kconfig @@ -395,6 +395,7 @@ config TUN tristate "Universal TUN/TAP device driver support" depends on INET select CRC32 + select SKB_EXTENSIONS help TUN/TAP provides packet reception and transmission for user space programs. It can be viewed as a simple Point-to-Point or Ethernet diff --git a/drivers/net/tun.c b/drivers/net/tun.c index 9d93ab9ee58f..b8fcd71becac 100644 --- a/drivers/net/tun.c +++ b/drivers/net/tun.c @@ -173,6 +173,10 @@ struct tun_prog { struct bpf_prog *prog; };
+struct tun_vnet_hash_container { + struct tun_vnet_hash common; +}; + /* Since the socket were moved to tun_file, to preserve the behavior of persist * device, socket filter, sndbuf and vnet header size were restore when the * file were attached to a persist device. @@ -210,6 +214,7 @@ struct tun_struct { struct bpf_prog __rcu *xdp_prog; struct tun_prog __rcu *steering_prog; struct tun_prog __rcu *filter_prog; + struct tun_vnet_hash_container __rcu *vnet_hash; struct ethtool_link_ksettings link_ksettings; /* init args */ struct file *file; @@ -221,6 +226,11 @@ struct veth { __be16 h_vlan_TCI; };
+static const struct tun_vnet_hash tun_vnet_hash_cap = { + .flags = TUN_VNET_HASH_REPORT, + .types = VIRTIO_NET_SUPPORTED_HASH_TYPES +}; + static void tun_flow_init(struct tun_struct *tun); static void tun_flow_uninit(struct tun_struct *tun);
@@ -322,10 +332,17 @@ static long tun_set_vnet_be(struct tun_struct *tun, int __user *argp) if (get_user(be, argp)) return -EFAULT;
- if (be) + if (be) { + struct tun_vnet_hash_container *vnet_hash = rtnl_dereference(tun->vnet_hash); + + if (!(tun->flags & TUN_VNET_LE) && + vnet_hash && (vnet_hash->flags & TUN_VNET_HASH_REPORT)) + return -EBUSY; + tun->flags |= TUN_VNET_BE; - else + } else { tun->flags &= ~TUN_VNET_BE; + }
return 0; } @@ -522,14 +539,20 @@ static inline void tun_flow_save_rps_rxhash(struct tun_flow_entry *e, u32 hash) * the userspace application move between processors, we may get a * different rxq no. here. */ -static u16 tun_automq_select_queue(struct tun_struct *tun, struct sk_buff *skb) +static u16 tun_automq_select_queue(struct tun_struct *tun, struct sk_buff *skb, + const struct tun_vnet_hash_container *vnet_hash) { + struct tun_vnet_hash_ext *ext; + struct flow_keys keys; struct tun_flow_entry *e; u32 txq, numqueues;
numqueues = READ_ONCE(tun->numqueues);
- txq = __skb_get_hash_symmetric(skb); + memset(&keys, 0, sizeof(keys)); + skb_flow_dissect(skb, &flow_keys_dissector_symmetric, &keys, 0); + + txq = flow_hash_from_keys(&keys); e = tun_flow_find(&tun->flows[tun_hashfn(txq)], txq); if (e) { tun_flow_save_rps_rxhash(e, txq); @@ -538,6 +561,16 @@ static u16 tun_automq_select_queue(struct tun_struct *tun, struct sk_buff *skb) txq = reciprocal_scale(txq, numqueues); }
+ if (vnet_hash && (vnet_hash->common.flags & TUN_VNET_HASH_REPORT)) { + ext = skb_ext_add(skb, SKB_EXT_TUN_VNET_HASH); + if (ext) { + u32 types = vnet_hash->common.types; + + ext->report = virtio_net_hash_report(types, keys.basic); + ext->value = skb->l4_hash ? skb->hash : txq; + } + } + return txq; }
@@ -565,10 +598,13 @@ static u16 tun_select_queue(struct net_device *dev, struct sk_buff *skb, u16 ret;
rcu_read_lock(); - if (rcu_dereference(tun->steering_prog)) + if (rcu_dereference(tun->steering_prog)) { ret = tun_ebpf_select_queue(tun, skb); - else - ret = tun_automq_select_queue(tun, skb); + } else { + struct tun_vnet_hash_container *vnet_hash = rcu_dereference(tun->vnet_hash); + + ret = tun_automq_select_queue(tun, skb, vnet_hash); + } rcu_read_unlock();
return ret; @@ -2120,33 +2156,63 @@ static ssize_t tun_put_user(struct tun_struct *tun, }
if (vnet_hdr_sz) { - struct virtio_net_hdr gso; + struct tun_vnet_hash_ext *ext; + size_t vnet_hdr_content_sz = sizeof(struct virtio_net_hdr); + union { + struct virtio_net_hdr hdr; + struct virtio_net_hdr_v1_hash hdr_v1_hash; + } vnet_hdr; + int ret;
if (iov_iter_count(iter) < vnet_hdr_sz) return -EINVAL;
- if (virtio_net_hdr_from_skb(skb, &gso, - tun_is_little_endian(tun), true, - vlan_hlen)) { + ext = vnet_hdr_sz < sizeof(vnet_hdr.hdr_v1_hash) ? + NULL : skb_ext_find(skb, SKB_EXT_TUN_VNET_HASH); + + if (ext) { + struct virtio_net_hash hash = { + .value = ext->value, + .report = ext->report, + }; + + vnet_hdr_content_sz = sizeof(vnet_hdr.hdr_v1_hash); + ret = virtio_net_hdr_v1_hash_from_skb(skb, + &vnet_hdr.hdr_v1_hash, + true, + vlan_hlen, + &hash); + } else { + vnet_hdr_content_sz = sizeof(struct virtio_net_hdr); + ret = virtio_net_hdr_from_skb(skb, + &vnet_hdr.hdr, + tun_is_little_endian(tun), + true, + vlan_hlen); + } + + if (ret) { struct skb_shared_info *sinfo = skb_shinfo(skb);
if (net_ratelimit()) { netdev_err(tun->dev, "unexpected GSO type: 0x%x, gso_size %d, hdr_len %d\n", - sinfo->gso_type, tun16_to_cpu(tun, gso.gso_size), - tun16_to_cpu(tun, gso.hdr_len)); + sinfo->gso_type, + tun16_to_cpu(tun, vnet_hdr.hdr.gso_size), + tun16_to_cpu(tun, vnet_hdr.hdr.hdr_len)); print_hex_dump(KERN_ERR, "tun: ", DUMP_PREFIX_NONE, 16, 1, skb->head, - min((int)tun16_to_cpu(tun, gso.hdr_len), 64), true); + min(tun16_to_cpu(tun, vnet_hdr.hdr.hdr_len), 64), + true); } WARN_ON_ONCE(1); return -EINVAL; }
- if (copy_to_iter(&gso, sizeof(gso), iter) != sizeof(gso)) + if (copy_to_iter(&vnet_hdr, vnet_hdr_content_sz, iter) != vnet_hdr_content_sz) return -EFAULT;
- iov_iter_zero(vnet_hdr_sz - sizeof(gso), iter); + iov_iter_zero(vnet_hdr_sz - vnet_hdr_content_sz, iter); }
if (vlan_hlen) { @@ -3094,6 +3160,8 @@ static long __tun_chr_ioctl(struct file *file, unsigned int cmd, int le; int ret; bool do_notify = false; + struct tun_vnet_hash vnet_hash_common; + struct tun_vnet_hash_container *vnet_hash;
if (cmd == TUNSETIFF || cmd == TUNSETQUEUE || (_IOC_TYPE(cmd) == SOCK_IOC_TYPE && cmd != SIOCGSKNS)) { @@ -3115,6 +3183,9 @@ static long __tun_chr_ioctl(struct file *file, unsigned int cmd, if (!ns_capable(net->user_ns, CAP_NET_ADMIN)) return -EPERM; return open_related_ns(&net->ns, get_net_ns); + } else if (cmd == TUNGETVNETHASHCAP) { + return copy_to_user(argp, &tun_vnet_hash_cap, sizeof(tun_vnet_hash_cap)) ? + -EFAULT : 0; }
rtnl_lock(); @@ -3314,6 +3385,13 @@ static long __tun_chr_ioctl(struct file *file, unsigned int cmd, break; }
+ vnet_hash = rtnl_dereference(tun->vnet_hash); + if (vnet_hash && (vnet_hash->common.flags & TUN_VNET_HASH_REPORT) && + vnet_hdr_sz < (int)sizeof(struct virtio_net_hdr_v1_hash)) { + ret = -EBUSY; + break; + } + tun->vnet_hdr_sz = vnet_hdr_sz; break;
@@ -3328,10 +3406,18 @@ static long __tun_chr_ioctl(struct file *file, unsigned int cmd, ret = -EFAULT; break; } - if (le) + if (le) { tun->flags |= TUN_VNET_LE; - else + } else { + vnet_hash = rtnl_dereference(tun->vnet_hash); + if (vnet_hash && (vnet_hash->common.flags & TUN_VNET_HASH_REPORT) && + !tun_legacy_is_little_endian(tun)) { + ret = -EBUSY; + break; + } + tun->flags &= ~TUN_VNET_LE; + } break;
case TUNGETVNETBE: @@ -3396,6 +3482,30 @@ static long __tun_chr_ioctl(struct file *file, unsigned int cmd, ret = open_related_ns(&net->ns, get_net_ns); break;
+ case TUNSETVNETHASH: + if (copy_from_user(&vnet_hash_common, argp, sizeof(vnet_hash_common))) { + ret = -EFAULT; + break; + } + argp = (struct tun_vnet_hash __user *)argp + 1; + + if ((vnet_hash_common.flags & TUN_VNET_HASH_REPORT) && + (tun->vnet_hdr_sz < sizeof(struct virtio_net_hdr_v1_hash) || + !tun_is_little_endian(tun))) { + ret = -EBUSY; + break; + } + + vnet_hash = kmalloc(sizeof(vnet_hash->common), GFP_KERNEL); + if (!vnet_hash) { + ret = -ENOMEM; + break; + } + + vnet_hash->common = vnet_hash_common; + kfree_rcu_mightsleep(rcu_replace_pointer_rtnl(tun->vnet_hash, vnet_hash)); + break; + default: ret = -EINVAL; break; diff --git a/include/uapi/linux/if_tun.h b/include/uapi/linux/if_tun.h index 287cdc81c939..1561e8ce0a0a 100644 --- a/include/uapi/linux/if_tun.h +++ b/include/uapi/linux/if_tun.h @@ -62,6 +62,30 @@ #define TUNSETCARRIER _IOW('T', 226, int) #define TUNGETDEVNETNS _IO('T', 227)
+/** + * define TUNGETVNETHASHCAP - ioctl to get virtio_net hashing capability. + * + * The argument is a pointer to &struct tun_vnet_hash which will store the + * maximal virtio_net hashing configuration. + */ +#define TUNGETVNETHASHCAP _IOR('T', 228, struct tun_vnet_hash) + +/** + * define TUNSETVNETHASH - ioctl to configure virtio_net hashing + * + * The argument is a pointer to &struct tun_vnet_hash. + * + * %TUNSETVNETHDRSZ ioctl must be called with a number greater than or equal to + * the size of &struct virtio_net_hdr_v1_hash before calling this ioctl with + * %TUN_VNET_HASH_REPORT. + * + * The virtio_net header must be configured as little-endian before calling this + * ioctl with %TUN_VNET_HASH_REPORT. + * + * This ioctl currently has no effect on XDP packets. + */ +#define TUNSETVNETHASH _IOW('T', 229, struct tun_vnet_hash) + /* TUNSETIFF ifr flags */ #define IFF_TUN 0x0001 #define IFF_TAP 0x0002 @@ -115,4 +139,24 @@ struct tun_filter { __u8 addr[][ETH_ALEN]; };
+/** + * define TUN_VNET_HASH_REPORT - Request virtio_net hash reporting for vhost + */ +#define TUN_VNET_HASH_REPORT 0x0001 + +/** + * struct tun_vnet_hash - virtio_net hashing configuration + * @flags: + * Bitmask consists of %TUN_VNET_HASH_REPORT and %TUN_VNET_HASH_RSS + * @pad: + * Should be filled with zero before passing to %TUNSETVNETHASH + * @types: + * Bitmask of allowed hash types + */ +struct tun_vnet_hash { + __u16 flags; + __u8 pad[2]; + __u32 types; +}; + #endif /* _UAPI__IF_TUN_H */
Akihiko Odaki wrote:
Allow the guest to reuse the hash value to make receive steering consistent between the host and guest, and to save hash computation.
Signed-off-by: Akihiko Odaki akihiko.odaki@daynix.com
Documentation/networking/tuntap.rst | 7 ++ drivers/net/Kconfig | 1 + drivers/net/tun.c | 146 +++++++++++++++++++++++++++++++----- include/uapi/linux/if_tun.h | 44 +++++++++++ 4 files changed, 180 insertions(+), 18 deletions(-)
diff --git a/Documentation/networking/tuntap.rst b/Documentation/networking/tuntap.rst index 4d7087f727be..86b4ae8caa8a 100644 --- a/Documentation/networking/tuntap.rst +++ b/Documentation/networking/tuntap.rst @@ -206,6 +206,13 @@ enable is true we enable it, otherwise we disable it:: return ioctl(fd, TUNSETQUEUE, (void *)&ifr); } +3.4 Reference +-------------
+``linux/if_tun.h`` defines the interface described below:
+.. kernel-doc:: include/uapi/linux/if_tun.h
Universal TUN/TAP device driver Frequently Asked Question
diff --git a/drivers/net/Kconfig b/drivers/net/Kconfig index 9920b3a68ed1..e2a7bd703550 100644 --- a/drivers/net/Kconfig +++ b/drivers/net/Kconfig @@ -395,6 +395,7 @@ config TUN tristate "Universal TUN/TAP device driver support" depends on INET select CRC32
- select SKB_EXTENSIONS help TUN/TAP provides packet reception and transmission for user space programs. It can be viewed as a simple Point-to-Point or Ethernet
diff --git a/drivers/net/tun.c b/drivers/net/tun.c index 9d93ab9ee58f..b8fcd71becac 100644 --- a/drivers/net/tun.c +++ b/drivers/net/tun.c @@ -173,6 +173,10 @@ struct tun_prog { struct bpf_prog *prog; }; +struct tun_vnet_hash_container {
- struct tun_vnet_hash common;
+};
/* Since the socket were moved to tun_file, to preserve the behavior of persist
- device, socket filter, sndbuf and vnet header size were restore when the
- file were attached to a persist device.
@@ -210,6 +214,7 @@ struct tun_struct { struct bpf_prog __rcu *xdp_prog; struct tun_prog __rcu *steering_prog; struct tun_prog __rcu *filter_prog;
- struct tun_vnet_hash_container __rcu *vnet_hash;
This is just
+struct tun_vnet_hash { + u32 value; + u16 report; +};
Can just be fields in the struct directly.
Also, only one bit really used for report, so probably can be condensed further.
struct ethtool_link_ksettings link_ksettings; /* init args */ struct file *file; @@ -221,6 +226,11 @@ struct veth { __be16 h_vlan_TCI; }; +static const struct tun_vnet_hash tun_vnet_hash_cap = {
- .flags = TUN_VNET_HASH_REPORT,
- .types = VIRTIO_NET_SUPPORTED_HASH_TYPES
+};
static void tun_flow_init(struct tun_struct *tun); static void tun_flow_uninit(struct tun_struct *tun); @@ -322,10 +332,17 @@ static long tun_set_vnet_be(struct tun_struct *tun, int __user *argp) if (get_user(be, argp)) return -EFAULT;
- if (be)
- if (be) {
struct tun_vnet_hash_container *vnet_hash = rtnl_dereference(tun->vnet_hash);
if (!(tun->flags & TUN_VNET_LE) &&
vnet_hash && (vnet_hash->flags & TUN_VNET_HASH_REPORT))
return -EBUSY;
Doesn't be here imply !(tun->flags & TUN_VNET_LE)? Same again below.
tun->flags |= TUN_VNET_BE;
- else
- } else { tun->flags &= ~TUN_VNET_BE;
- }
return 0; } @@ -522,14 +539,20 @@ static inline void tun_flow_save_rps_rxhash(struct tun_flow_entry *e, u32 hash)
- the userspace application move between processors, we may get a
- different rxq no. here.
*/ -static u16 tun_automq_select_queue(struct tun_struct *tun, struct sk_buff *skb) +static u16 tun_automq_select_queue(struct tun_struct *tun, struct sk_buff *skb,
const struct tun_vnet_hash_container *vnet_hash)
{
- struct tun_vnet_hash_ext *ext;
- struct flow_keys keys; struct tun_flow_entry *e; u32 txq, numqueues;
numqueues = READ_ONCE(tun->numqueues);
- txq = __skb_get_hash_symmetric(skb);
- memset(&keys, 0, sizeof(keys));
- skb_flow_dissect(skb, &flow_keys_dissector_symmetric, &keys, 0);
- txq = flow_hash_from_keys(&keys); e = tun_flow_find(&tun->flows[tun_hashfn(txq)], txq); if (e) { tun_flow_save_rps_rxhash(e, txq);
@@ -538,6 +561,16 @@ static u16 tun_automq_select_queue(struct tun_struct *tun, struct sk_buff *skb) txq = reciprocal_scale(txq, numqueues); }
- if (vnet_hash && (vnet_hash->common.flags & TUN_VNET_HASH_REPORT)) {
ext = skb_ext_add(skb, SKB_EXT_TUN_VNET_HASH);
if (ext) {
u32 types = vnet_hash->common.types;
ext->report = virtio_net_hash_report(types, keys.basic);
ext->value = skb->l4_hash ? skb->hash : txq;
}
- }
- return txq;
} @@ -565,10 +598,13 @@ static u16 tun_select_queue(struct net_device *dev, struct sk_buff *skb, u16 ret; rcu_read_lock();
- if (rcu_dereference(tun->steering_prog))
- if (rcu_dereference(tun->steering_prog)) { ret = tun_ebpf_select_queue(tun, skb);
- else
ret = tun_automq_select_queue(tun, skb);
- } else {
struct tun_vnet_hash_container *vnet_hash = rcu_dereference(tun->vnet_hash);
ret = tun_automq_select_queue(tun, skb, vnet_hash);
Already passing tun, no need to pass tun->vnet_hash separately.
- } rcu_read_unlock();
return ret; @@ -2120,33 +2156,63 @@ static ssize_t tun_put_user(struct tun_struct *tun, } if (vnet_hdr_sz) {
struct virtio_net_hdr gso;
struct tun_vnet_hash_ext *ext;
size_t vnet_hdr_content_sz = sizeof(struct virtio_net_hdr);
union {
struct virtio_net_hdr hdr;
struct virtio_net_hdr_v1_hash hdr_v1_hash;
} vnet_hdr;
int ret;
if (iov_iter_count(iter) < vnet_hdr_sz) return -EINVAL;
if (virtio_net_hdr_from_skb(skb, &gso,
tun_is_little_endian(tun), true,
vlan_hlen)) {
ext = vnet_hdr_sz < sizeof(vnet_hdr.hdr_v1_hash) ?
NULL : skb_ext_find(skb, SKB_EXT_TUN_VNET_HASH);
if (ext) {
struct virtio_net_hash hash = {
.value = ext->value,
.report = ext->report,
};
vnet_hdr_content_sz = sizeof(vnet_hdr.hdr_v1_hash);
ret = virtio_net_hdr_v1_hash_from_skb(skb,
&vnet_hdr.hdr_v1_hash,
true,
vlan_hlen,
&hash);
} else {
vnet_hdr_content_sz = sizeof(struct virtio_net_hdr);
ret = virtio_net_hdr_from_skb(skb,
&vnet_hdr.hdr,
tun_is_little_endian(tun),
true,
vlan_hlen);
}
This is why just setting the fields directly rather than adding virtio_net_hdr_v1_hash_from_skb is actually simpler.
if (ret) { struct skb_shared_info *sinfo = skb_shinfo(skb);
if (net_ratelimit()) { netdev_err(tun->dev, "unexpected GSO type: 0x%x, gso_size %d, hdr_len %d\n",
sinfo->gso_type, tun16_to_cpu(tun, gso.gso_size),
tun16_to_cpu(tun, gso.hdr_len));
sinfo->gso_type,
tun16_to_cpu(tun, vnet_hdr.hdr.gso_size),
tun16_to_cpu(tun, vnet_hdr.hdr.hdr_len)); print_hex_dump(KERN_ERR, "tun: ", DUMP_PREFIX_NONE, 16, 1, skb->head,
min((int)tun16_to_cpu(tun, gso.hdr_len), 64), true);
min(tun16_to_cpu(tun, vnet_hdr.hdr.hdr_len), 64),
true); } WARN_ON_ONCE(1); return -EINVAL; }
if (copy_to_iter(&gso, sizeof(gso), iter) != sizeof(gso))
if (copy_to_iter(&vnet_hdr, vnet_hdr_content_sz, iter) != vnet_hdr_content_sz) return -EFAULT;
iov_iter_zero(vnet_hdr_sz - sizeof(gso), iter);
iov_iter_zero(vnet_hdr_sz - vnet_hdr_content_sz, iter); }
if (vlan_hlen) { @@ -3094,6 +3160,8 @@ static long __tun_chr_ioctl(struct file *file, unsigned int cmd, int le; int ret; bool do_notify = false;
- struct tun_vnet_hash vnet_hash_common;
- struct tun_vnet_hash_container *vnet_hash;
if (cmd == TUNSETIFF || cmd == TUNSETQUEUE || (_IOC_TYPE(cmd) == SOCK_IOC_TYPE && cmd != SIOCGSKNS)) { @@ -3115,6 +3183,9 @@ static long __tun_chr_ioctl(struct file *file, unsigned int cmd, if (!ns_capable(net->user_ns, CAP_NET_ADMIN)) return -EPERM; return open_related_ns(&net->ns, get_net_ns);
- } else if (cmd == TUNGETVNETHASHCAP) {
return copy_to_user(argp, &tun_vnet_hash_cap, sizeof(tun_vnet_hash_cap)) ?
-EFAULT : 0; }
rtnl_lock(); @@ -3314,6 +3385,13 @@ static long __tun_chr_ioctl(struct file *file, unsigned int cmd, break; }
vnet_hash = rtnl_dereference(tun->vnet_hash);
if (vnet_hash && (vnet_hash->common.flags & TUN_VNET_HASH_REPORT) &&
vnet_hdr_sz < (int)sizeof(struct virtio_net_hdr_v1_hash)) {
ret = -EBUSY;
break;
}
- tun->vnet_hdr_sz = vnet_hdr_sz; break;
@@ -3328,10 +3406,18 @@ static long __tun_chr_ioctl(struct file *file, unsigned int cmd, ret = -EFAULT; break; }
if (le)
if (le) { tun->flags |= TUN_VNET_LE;
else
} else {
vnet_hash = rtnl_dereference(tun->vnet_hash);
if (vnet_hash && (vnet_hash->common.flags & TUN_VNET_HASH_REPORT) &&
!tun_legacy_is_little_endian(tun)) {
ret = -EBUSY;
break;
}
tun->flags &= ~TUN_VNET_LE;
} break;
case TUNGETVNETBE: @@ -3396,6 +3482,30 @@ static long __tun_chr_ioctl(struct file *file, unsigned int cmd, ret = open_related_ns(&net->ns, get_net_ns); break;
- case TUNSETVNETHASH:
if (copy_from_user(&vnet_hash_common, argp, sizeof(vnet_hash_common))) {
ret = -EFAULT;
break;
}
argp = (struct tun_vnet_hash __user *)argp + 1;
if ((vnet_hash_common.flags & TUN_VNET_HASH_REPORT) &&
(tun->vnet_hdr_sz < sizeof(struct virtio_net_hdr_v1_hash) ||
!tun_is_little_endian(tun))) {
ret = -EBUSY;
break;
}
vnet_hash = kmalloc(sizeof(vnet_hash->common), GFP_KERNEL);
if (!vnet_hash) {
ret = -ENOMEM;
break;
}
vnet_hash->common = vnet_hash_common;
kfree_rcu_mightsleep(rcu_replace_pointer_rtnl(tun->vnet_hash, vnet_hash));
break;
- default: ret = -EINVAL; break;
diff --git a/include/uapi/linux/if_tun.h b/include/uapi/linux/if_tun.h index 287cdc81c939..1561e8ce0a0a 100644 --- a/include/uapi/linux/if_tun.h +++ b/include/uapi/linux/if_tun.h @@ -62,6 +62,30 @@ #define TUNSETCARRIER _IOW('T', 226, int) #define TUNGETDEVNETNS _IO('T', 227) +/**
- define TUNGETVNETHASHCAP - ioctl to get virtio_net hashing capability.
- The argument is a pointer to &struct tun_vnet_hash which will store the
- maximal virtio_net hashing configuration.
- */
+#define TUNGETVNETHASHCAP _IOR('T', 228, struct tun_vnet_hash)
+/**
- define TUNSETVNETHASH - ioctl to configure virtio_net hashing
- The argument is a pointer to &struct tun_vnet_hash.
- %TUNSETVNETHDRSZ ioctl must be called with a number greater than or equal to
- the size of &struct virtio_net_hdr_v1_hash before calling this ioctl with
- %TUN_VNET_HASH_REPORT.
- The virtio_net header must be configured as little-endian before calling this
- ioctl with %TUN_VNET_HASH_REPORT.
- This ioctl currently has no effect on XDP packets.
- */
+#define TUNSETVNETHASH _IOW('T', 229, struct tun_vnet_hash)
/* TUNSETIFF ifr flags */ #define IFF_TUN 0x0001 #define IFF_TAP 0x0002 @@ -115,4 +139,24 @@ struct tun_filter { __u8 addr[][ETH_ALEN]; }; +/**
- define TUN_VNET_HASH_REPORT - Request virtio_net hash reporting for vhost
- */
+#define TUN_VNET_HASH_REPORT 0x0001
+/**
- struct tun_vnet_hash - virtio_net hashing configuration
- @flags:
Bitmask consists of %TUN_VNET_HASH_REPORT and %TUN_VNET_HASH_RSS
- @pad:
Should be filled with zero before passing to %TUNSETVNETHASH
- @types:
Bitmask of allowed hash types
- */
+struct tun_vnet_hash {
- __u16 flags;
- __u8 pad[2];
- __u32 types;
+};
The values for flags and types should probably be defined here.
#endif /* _UAPI__IF_TUN_H */
-- 2.46.0
On 2024/09/18 15:17, Willem de Bruijn wrote:
Akihiko Odaki wrote:
Allow the guest to reuse the hash value to make receive steering consistent between the host and guest, and to save hash computation.
Signed-off-by: Akihiko Odaki akihiko.odaki@daynix.com
Documentation/networking/tuntap.rst | 7 ++ drivers/net/Kconfig | 1 + drivers/net/tun.c | 146 +++++++++++++++++++++++++++++++----- include/uapi/linux/if_tun.h | 44 +++++++++++ 4 files changed, 180 insertions(+), 18 deletions(-)
diff --git a/Documentation/networking/tuntap.rst b/Documentation/networking/tuntap.rst index 4d7087f727be..86b4ae8caa8a 100644 --- a/Documentation/networking/tuntap.rst +++ b/Documentation/networking/tuntap.rst @@ -206,6 +206,13 @@ enable is true we enable it, otherwise we disable it:: return ioctl(fd, TUNSETQUEUE, (void *)&ifr); } +3.4 Reference +-------------
+``linux/if_tun.h`` defines the interface described below:
+.. kernel-doc:: include/uapi/linux/if_tun.h
Universal TUN/TAP device driver Frequently Asked Question
diff --git a/drivers/net/Kconfig b/drivers/net/Kconfig index 9920b3a68ed1..e2a7bd703550 100644 --- a/drivers/net/Kconfig +++ b/drivers/net/Kconfig @@ -395,6 +395,7 @@ config TUN tristate "Universal TUN/TAP device driver support" depends on INET select CRC32
- select SKB_EXTENSIONS help TUN/TAP provides packet reception and transmission for user space programs. It can be viewed as a simple Point-to-Point or Ethernet
diff --git a/drivers/net/tun.c b/drivers/net/tun.c index 9d93ab9ee58f..b8fcd71becac 100644 --- a/drivers/net/tun.c +++ b/drivers/net/tun.c @@ -173,6 +173,10 @@ struct tun_prog { struct bpf_prog *prog; }; +struct tun_vnet_hash_container {
- struct tun_vnet_hash common;
+};
- /* Since the socket were moved to tun_file, to preserve the behavior of persist
- device, socket filter, sndbuf and vnet header size were restore when the
- file were attached to a persist device.
@@ -210,6 +214,7 @@ struct tun_struct { struct bpf_prog __rcu *xdp_prog; struct tun_prog __rcu *steering_prog; struct tun_prog __rcu *filter_prog;
- struct tun_vnet_hash_container __rcu *vnet_hash;
This is just
+struct tun_vnet_hash {
u32 value;
u16 report;
+};
Can just be fields in the struct directly.
I will change to store struct tun_vnet_hash directly.
Also, only one bit really used for report, so probably can be condensed further.
It is more than one bit; the report types are defined as follows:

#define VIRTIO_NET_HASH_REPORT_NONE 0
#define VIRTIO_NET_HASH_REPORT_IPv4 1
#define VIRTIO_NET_HASH_REPORT_TCPv4 2
#define VIRTIO_NET_HASH_REPORT_UDPv4 3
#define VIRTIO_NET_HASH_REPORT_IPv6 4
#define VIRTIO_NET_HASH_REPORT_TCPv6 5
#define VIRTIO_NET_HASH_REPORT_UDPv6 6
#define VIRTIO_NET_HASH_REPORT_IPv6_EX 7
#define VIRTIO_NET_HASH_REPORT_TCPv6_EX 8
#define VIRTIO_NET_HASH_REPORT_UDPv6_EX 9
struct ethtool_link_ksettings link_ksettings; /* init args */ struct file *file; @@ -221,6 +226,11 @@ struct veth { __be16 h_vlan_TCI; }; +static const struct tun_vnet_hash tun_vnet_hash_cap = {
- .flags = TUN_VNET_HASH_REPORT,
- .types = VIRTIO_NET_SUPPORTED_HASH_TYPES
+};
- static void tun_flow_init(struct tun_struct *tun); static void tun_flow_uninit(struct tun_struct *tun);
@@ -322,10 +332,17 @@ static long tun_set_vnet_be(struct tun_struct *tun, int __user *argp) if (get_user(be, argp)) return -EFAULT;
- if (be)
- if (be) {
struct tun_vnet_hash_container *vnet_hash = rtnl_dereference(tun->vnet_hash);
if (!(tun->flags & TUN_VNET_LE) &&
vnet_hash && (vnet_hash->flags & TUN_VNET_HASH_REPORT))
return -EBUSY;
Doesn't be here imply !(tun->flags & TUN_VNET_LE)? Same again below.
Unfortunately no. TUN_VNET_LE and TUN_VNET_BE can be set at the same time, and TUN_VNET_LE is enforced in such a case.
tun->flags |= TUN_VNET_BE;
- else
- } else { tun->flags &= ~TUN_VNET_BE;
- }
return 0; } @@ -522,14 +539,20 @@ static inline void tun_flow_save_rps_rxhash(struct tun_flow_entry *e, u32 hash)
- the userspace application move between processors, we may get a
- different rxq no. here.
*/ -static u16 tun_automq_select_queue(struct tun_struct *tun, struct sk_buff *skb) +static u16 tun_automq_select_queue(struct tun_struct *tun, struct sk_buff *skb,
const struct tun_vnet_hash_container *vnet_hash) {
- struct tun_vnet_hash_ext *ext;
- struct flow_keys keys; struct tun_flow_entry *e; u32 txq, numqueues;
numqueues = READ_ONCE(tun->numqueues);
- txq = __skb_get_hash_symmetric(skb);
- memset(&keys, 0, sizeof(keys));
- skb_flow_dissect(skb, &flow_keys_dissector_symmetric, &keys, 0);
- txq = flow_hash_from_keys(&keys); e = tun_flow_find(&tun->flows[tun_hashfn(txq)], txq); if (e) { tun_flow_save_rps_rxhash(e, txq);
@@ -538,6 +561,16 @@ static u16 tun_automq_select_queue(struct tun_struct *tun, struct sk_buff *skb) txq = reciprocal_scale(txq, numqueues); }
- if (vnet_hash && (vnet_hash->common.flags & TUN_VNET_HASH_REPORT)) {
ext = skb_ext_add(skb, SKB_EXT_TUN_VNET_HASH);
if (ext) {
u32 types = vnet_hash->common.types;
ext->report = virtio_net_hash_report(types, keys.basic);
ext->value = skb->l4_hash ? skb->hash : txq;
}
- }
- return txq; }
@@ -565,10 +598,13 @@ static u16 tun_select_queue(struct net_device *dev, struct sk_buff *skb, u16 ret; rcu_read_lock();
- if (rcu_dereference(tun->steering_prog))
- if (rcu_dereference(tun->steering_prog)) { ret = tun_ebpf_select_queue(tun, skb);
- else
ret = tun_automq_select_queue(tun, skb);
- } else {
struct tun_vnet_hash_container *vnet_hash = rcu_dereference(tun->vnet_hash);
ret = tun_automq_select_queue(tun, skb, vnet_hash);
Already passing tun, no need to pass tun->vnet_hash separately.
I will remove the parameter with v4.
- } rcu_read_unlock();
return ret; @@ -2120,33 +2156,63 @@ static ssize_t tun_put_user(struct tun_struct *tun, } if (vnet_hdr_sz) {
struct virtio_net_hdr gso;
struct tun_vnet_hash_ext *ext;
size_t vnet_hdr_content_sz = sizeof(struct virtio_net_hdr);
union {
struct virtio_net_hdr hdr;
struct virtio_net_hdr_v1_hash hdr_v1_hash;
} vnet_hdr;
int ret;
if (iov_iter_count(iter) < vnet_hdr_sz) return -EINVAL;
if (virtio_net_hdr_from_skb(skb, &gso,
tun_is_little_endian(tun), true,
vlan_hlen)) {
ext = vnet_hdr_sz < sizeof(vnet_hdr.hdr_v1_hash) ?
NULL : skb_ext_find(skb, SKB_EXT_TUN_VNET_HASH);
if (ext) {
struct virtio_net_hash hash = {
.value = ext->value,
.report = ext->report,
};
vnet_hdr_content_sz = sizeof(vnet_hdr.hdr_v1_hash);
ret = virtio_net_hdr_v1_hash_from_skb(skb,
&vnet_hdr.hdr_v1_hash,
true,
vlan_hlen,
&hash);
} else {
vnet_hdr_content_sz = sizeof(struct virtio_net_hdr);
ret = virtio_net_hdr_from_skb(skb,
&vnet_hdr.hdr,
tun_is_little_endian(tun),
true,
vlan_hlen);
}
This is why just setting the fields directly rather than adding virtio_net_hdr_v1_hash_from_skb is actually simpler.
I'll make a change accordingly in v4.
if (ret) { struct skb_shared_info *sinfo = skb_shinfo(skb);
if (net_ratelimit()) { netdev_err(tun->dev, "unexpected GSO type: 0x%x, gso_size %d, hdr_len %d\n",
sinfo->gso_type, tun16_to_cpu(tun, gso.gso_size),
tun16_to_cpu(tun, gso.hdr_len));
sinfo->gso_type,
tun16_to_cpu(tun, vnet_hdr.hdr.gso_size),
tun16_to_cpu(tun, vnet_hdr.hdr.hdr_len)); print_hex_dump(KERN_ERR, "tun: ", DUMP_PREFIX_NONE, 16, 1, skb->head,
min((int)tun16_to_cpu(tun, gso.hdr_len), 64), true);
min(tun16_to_cpu(tun, vnet_hdr.hdr.hdr_len), 64),
true); } WARN_ON_ONCE(1); return -EINVAL; }
if (copy_to_iter(&gso, sizeof(gso), iter) != sizeof(gso))
if (copy_to_iter(&vnet_hdr, vnet_hdr_content_sz, iter) != vnet_hdr_content_sz) return -EFAULT;
iov_iter_zero(vnet_hdr_sz - sizeof(gso), iter);
iov_iter_zero(vnet_hdr_sz - vnet_hdr_content_sz, iter); }
if (vlan_hlen) { @@ -3094,6 +3160,8 @@ static long __tun_chr_ioctl(struct file *file, unsigned int cmd, int le; int ret; bool do_notify = false;
- struct tun_vnet_hash vnet_hash_common;
- struct tun_vnet_hash_container *vnet_hash;
if (cmd == TUNSETIFF || cmd == TUNSETQUEUE || (_IOC_TYPE(cmd) == SOCK_IOC_TYPE && cmd != SIOCGSKNS)) { @@ -3115,6 +3183,9 @@ static long __tun_chr_ioctl(struct file *file, unsigned int cmd, if (!ns_capable(net->user_ns, CAP_NET_ADMIN)) return -EPERM; return open_related_ns(&net->ns, get_net_ns);
- } else if (cmd == TUNGETVNETHASHCAP) {
return copy_to_user(argp, &tun_vnet_hash_cap, sizeof(tun_vnet_hash_cap)) ?
-EFAULT : 0; }
rtnl_lock(); @@ -3314,6 +3385,13 @@ static long __tun_chr_ioctl(struct file *file, unsigned int cmd, break; }
vnet_hash = rtnl_dereference(tun->vnet_hash);
if (vnet_hash && (vnet_hash->common.flags & TUN_VNET_HASH_REPORT) &&
vnet_hdr_sz < (int)sizeof(struct virtio_net_hdr_v1_hash)) {
ret = -EBUSY;
break;
}
- tun->vnet_hdr_sz = vnet_hdr_sz; break;
@@ -3328,10 +3406,18 @@ static long __tun_chr_ioctl(struct file *file, unsigned int cmd,
			ret = -EFAULT;
			break;
		}
-		if (le)
+		if (le) {
			tun->flags |= TUN_VNET_LE;
-		else
+		} else {
+			vnet_hash = rtnl_dereference(tun->vnet_hash);
+			if (vnet_hash && (vnet_hash->common.flags & TUN_VNET_HASH_REPORT) &&
+			    !tun_legacy_is_little_endian(tun)) {
+				ret = -EBUSY;
+				break;
+			}
+
			tun->flags &= ~TUN_VNET_LE;
+		}
		break;
	case TUNGETVNETBE:
@@ -3396,6 +3482,30 @@ static long __tun_chr_ioctl(struct file *file, unsigned int cmd,
		ret = open_related_ns(&net->ns, get_net_ns);
		break;

+	case TUNSETVNETHASH:
+		if (copy_from_user(&vnet_hash_common, argp, sizeof(vnet_hash_common))) {
+			ret = -EFAULT;
+			break;
+		}
+		argp = (struct tun_vnet_hash __user *)argp + 1;
+
+		if ((vnet_hash_common.flags & TUN_VNET_HASH_REPORT) &&
+		    (tun->vnet_hdr_sz < sizeof(struct virtio_net_hdr_v1_hash) ||
+		     !tun_is_little_endian(tun))) {
+			ret = -EBUSY;
+			break;
+		}
+
+		vnet_hash = kmalloc(sizeof(vnet_hash->common), GFP_KERNEL);
+		if (!vnet_hash) {
+			ret = -ENOMEM;
+			break;
+		}
+
+		vnet_hash->common = vnet_hash_common;
+		kfree_rcu_mightsleep(rcu_replace_pointer_rtnl(tun->vnet_hash, vnet_hash));
+		break;
+
	default:
		ret = -EINVAL;
		break;
diff --git a/include/uapi/linux/if_tun.h b/include/uapi/linux/if_tun.h
index 287cdc81c939..1561e8ce0a0a 100644
--- a/include/uapi/linux/if_tun.h
+++ b/include/uapi/linux/if_tun.h
@@ -62,6 +62,30 @@
 #define TUNSETCARRIER _IOW('T', 226, int)
 #define TUNGETDEVNETNS _IO('T', 227)

+/**
+ * define TUNGETVNETHASHCAP - ioctl to get virtio_net hashing capability.
+ *
+ * The argument is a pointer to &struct tun_vnet_hash which will store the
+ * maximal virtio_net hashing configuration.
+ */
+#define TUNGETVNETHASHCAP _IOR('T', 228, struct tun_vnet_hash)
+
+/**
+ * define TUNSETVNETHASH - ioctl to configure virtio_net hashing
+ *
+ * The argument is a pointer to &struct tun_vnet_hash.
+ *
+ * %TUNSETVNETHDRSZ ioctl must be called with a number greater than or equal to
+ * the size of &struct virtio_net_hdr_v1_hash before calling this ioctl with
+ * %TUN_VNET_HASH_REPORT.
+ *
+ * The virtio_net header must be configured as little-endian before calling
+ * this ioctl with %TUN_VNET_HASH_REPORT.
+ *
+ * This ioctl currently has no effect on XDP packets.
+ */
+#define TUNSETVNETHASH _IOW('T', 229, struct tun_vnet_hash)
+
 /* TUNSETIFF ifr flags */
 #define IFF_TUN		0x0001
 #define IFF_TAP		0x0002
@@ -115,4 +139,24 @@ struct tun_filter {
	__u8   addr[][ETH_ALEN];
 };

+/**
+ * define TUN_VNET_HASH_REPORT - Request virtio_net hash reporting for vhost
+ */
+#define TUN_VNET_HASH_REPORT 0x0001
+
+/**
+ * struct tun_vnet_hash - virtio_net hashing configuration
+ * @flags:
+ *	Bitmask consists of %TUN_VNET_HASH_REPORT and %TUN_VNET_HASH_RSS
+ * @pad:
+ *	Should be filled with zero before passing to %TUNSETVNETHASH
+ * @types:
+ *	Bitmask of allowed hash types
+ */
+struct tun_vnet_hash {
+	__u16 flags;
+	__u8 pad[2];
+	__u32 types;
+};
The values for flags and types should probably be defined here.
I put TUN_VNET_HASH_REPORT before struct tun_vnet_hash following the examples of TUN_PKT_STRIP/struct tun_pi and TUN_FLT_ALLMULTI/struct tun_filter. The types are defined in: include/uapi/linux/virtio_net.h
Regards, Akihiko Odaki
#endif /* _UAPI__IF_TUN_H */
-- 2.46.0
RSS is a receive steering algorithm that can be negotiated to use with virtio_net. Conventionally the hash calculation was done by the VMM. However, computing the hash after the queue was chosen defeats the purpose of RSS.
Another approach is to use an eBPF steering program. This approach has another downside: it cannot report the calculated hash due to the restrictive nature of the eBPF steering program.
Introduce the code to perform RSS to the kernel in order to overcome these challenges. An alternative solution is to extend the eBPF steering program so that it will be able to report to the userspace, but I didn't opt for it because the current mechanism of the eBPF steering program relies on legacy context rewriting, and introducing kfunc-based eBPF would result in a non-UAPI dependency while the other relevant virtualization APIs such as KVM and vhost_net are UAPIs.
Signed-off-by: Akihiko Odaki akihiko.odaki@daynix.com --- drivers/net/tun.c | 119 +++++++++++++++++++++++++++++++++++++++----- include/uapi/linux/if_tun.h | 27 ++++++++++ 2 files changed, 133 insertions(+), 13 deletions(-)
diff --git a/drivers/net/tun.c b/drivers/net/tun.c
index b8fcd71becac..5a429b391144 100644
--- a/drivers/net/tun.c
+++ b/drivers/net/tun.c
@@ -175,6 +175,9 @@ struct tun_prog {

 struct tun_vnet_hash_container {
	struct tun_vnet_hash common;
+	struct tun_vnet_hash_rss rss;
+	__be32 rss_key[VIRTIO_NET_RSS_MAX_KEY_SIZE];
+	u16 rss_indirection_table[];
 };

 /* Since the socket were moved to tun_file, to preserve the behavior of persist
@@ -227,7 +230,7 @@ struct veth {
 };

 static const struct tun_vnet_hash tun_vnet_hash_cap = {
-	.flags = TUN_VNET_HASH_REPORT,
+	.flags = TUN_VNET_HASH_REPORT | TUN_VNET_HASH_RSS,
	.types = VIRTIO_NET_SUPPORTED_HASH_TYPES
 };

@@ -591,6 +594,36 @@ static u16 tun_ebpf_select_queue(struct tun_struct *tun, struct sk_buff *skb)
	return ret % numqueues;
 }

+static u16 tun_vnet_rss_select_queue(struct tun_struct *tun,
+				     struct sk_buff *skb,
+				     const struct tun_vnet_hash_container *vnet_hash)
+{
+	struct tun_vnet_hash_ext *ext;
+	struct virtio_net_hash hash;
+	u32 numqueues = READ_ONCE(tun->numqueues);
+	u16 txq, index;
+
+	if (!numqueues)
+		return 0;
+
+	if (!virtio_net_hash_rss(skb, vnet_hash->common.types, vnet_hash->rss_key,
+				 &hash))
+		return vnet_hash->rss.unclassified_queue % numqueues;
+
+	if (vnet_hash->common.flags & TUN_VNET_HASH_REPORT) {
+		ext = skb_ext_add(skb, SKB_EXT_TUN_VNET_HASH);
+		if (ext) {
+			ext->value = hash.value;
+			ext->report = hash.report;
+		}
+	}
+
+	index = hash.value & vnet_hash->rss.indirection_table_mask;
+	txq = READ_ONCE(vnet_hash->rss_indirection_table[index]);
+
+	return txq % numqueues;
+}
+
 static u16 tun_select_queue(struct net_device *dev, struct sk_buff *skb,
			    struct net_device *sb_dev)
 {
@@ -603,7 +636,10 @@ static u16 tun_select_queue(struct net_device *dev, struct sk_buff *skb,
	} else {
		struct tun_vnet_hash_container *vnet_hash = rcu_dereference(tun->vnet_hash);

-		ret = tun_automq_select_queue(tun, skb, vnet_hash);
+		if (vnet_hash && (vnet_hash->common.flags & TUN_VNET_HASH_RSS))
+			ret = tun_vnet_rss_select_queue(tun, skb, vnet_hash);
+		else
+			ret = tun_automq_select_queue(tun, skb, vnet_hash);
	}
	rcu_read_unlock();
@@ -3085,13 +3121,9 @@ static int tun_set_queue(struct file *file, struct ifreq *ifr)
 }

 static int tun_set_ebpf(struct tun_struct *tun, struct tun_prog __rcu **prog_p,
-			void __user *data)
+			int fd)
 {
	struct bpf_prog *prog;
-	int fd;
-
-	if (copy_from_user(&fd, data, sizeof(fd)))
-		return -EFAULT;

	if (fd == -1) {
		prog = NULL;
@@ -3157,6 +3189,7 @@ static long __tun_chr_ioctl(struct file *file, unsigned int cmd,
	int ifindex;
	int sndbuf;
	int vnet_hdr_sz;
+	int fd;
	int le;
	int ret;
	bool do_notify = false;
@@ -3460,11 +3493,27 @@ static long __tun_chr_ioctl(struct file *file, unsigned int cmd,
		break;

	case TUNSETSTEERINGEBPF:
-		ret = tun_set_ebpf(tun, &tun->steering_prog, argp);
+		if (get_user(fd, (int __user *)argp)) {
+			ret = -EFAULT;
+			break;
+		}
+
+		vnet_hash = rtnl_dereference(tun->vnet_hash);
+		if (fd != -1 && vnet_hash && (vnet_hash->common.flags & TUN_VNET_HASH_RSS)) {
+			ret = -EBUSY;
+			break;
+		}
+
+		ret = tun_set_ebpf(tun, &tun->steering_prog, fd);
		break;

	case TUNSETFILTEREBPF:
-		ret = tun_set_ebpf(tun, &tun->filter_prog, argp);
+		if (get_user(fd, (int __user *)argp)) {
+			ret = -EFAULT;
+			break;
+		}
+
+		ret = tun_set_ebpf(tun, &tun->filter_prog, fd);
		break;
	case TUNSETCARRIER:
@@ -3496,10 +3545,54 @@ static long __tun_chr_ioctl(struct file *file, unsigned int cmd,
			break;
		}

-		vnet_hash = kmalloc(sizeof(vnet_hash->common), GFP_KERNEL);
-		if (!vnet_hash) {
-			ret = -ENOMEM;
-			break;
+		if (vnet_hash_common.flags & TUN_VNET_HASH_RSS) {
+			struct tun_vnet_hash_rss rss;
+			size_t indirection_table_size;
+			size_t key_size;
+			size_t size;
+
+			if (tun->steering_prog) {
+				ret = -EBUSY;
+				break;
+			}
+
+			if (copy_from_user(&rss, argp, sizeof(rss))) {
+				ret = -EFAULT;
+				break;
+			}
+			argp = (struct tun_vnet_hash_rss __user *)argp + 1;
+
+			indirection_table_size = ((size_t)rss.indirection_table_mask + 1) * 2;
+			key_size = virtio_net_hash_key_length(vnet_hash_common.types);
+			size = sizeof(*vnet_hash) + indirection_table_size + key_size;
+
+			vnet_hash = kmalloc(size, GFP_KERNEL);
+			if (!vnet_hash) {
+				ret = -ENOMEM;
+				break;
+			}
+
+			if (copy_from_user(vnet_hash->rss_indirection_table,
+					   argp, indirection_table_size)) {
+				kfree(vnet_hash);
+				ret = -EFAULT;
+				break;
+			}
+			argp = (u16 __user *)argp + rss.indirection_table_mask + 1;
+
+			if (copy_from_user(vnet_hash->rss_key, argp, key_size)) {
+				kfree(vnet_hash);
+				ret = -EFAULT;
+				break;
+			}
+
+			vnet_hash->rss = rss;
+		} else {
+			vnet_hash = kmalloc(sizeof(vnet_hash->common), GFP_KERNEL);
+			if (!vnet_hash) {
+				ret = -ENOMEM;
+				break;
+			}
		}
		vnet_hash->common = vnet_hash_common;

diff --git a/include/uapi/linux/if_tun.h b/include/uapi/linux/if_tun.h
index 1561e8ce0a0a..1c130409db5d 100644
--- a/include/uapi/linux/if_tun.h
+++ b/include/uapi/linux/if_tun.h
@@ -75,6 +75,14 @@
  *
  * The argument is a pointer to &struct tun_vnet_hash.
  *
+ * The argument is a pointer to the compound of the following in order if
+ * %TUN_VNET_HASH_RSS is set:
+ *
+ * 1. &struct tun_vnet_hash
+ * 2. &struct tun_vnet_hash_rss
+ * 3. Indirection table
+ * 4. Key
+ *
  * %TUNSETVNETHDRSZ ioctl must be called with a number greater than or equal to
  * the size of &struct virtio_net_hdr_v1_hash before calling this ioctl with
  * %TUN_VNET_HASH_REPORT.
@@ -144,6 +152,13 @@ struct tun_filter {
  */
 #define TUN_VNET_HASH_REPORT 0x0001

+/**
+ * define TUN_VNET_HASH_RSS - Request virtio_net RSS
+ *
+ * This is mutually exclusive with eBPF steering program.
+ */
+#define TUN_VNET_HASH_RSS 0x0002
+
 /**
  * struct tun_vnet_hash - virtio_net hashing configuration
  * @flags:
@@ -159,4 +174,16 @@ struct tun_vnet_hash {
	__u32 types;
 };

+/**
+ * struct tun_vnet_hash_rss - virtio_net RSS configuration
+ * @indirection_table_mask:
+ *	Bitmask to be applied to the indirection table index
+ * @unclassified_queue:
+ *	The index of the queue to place unclassified packets in
+ */
+struct tun_vnet_hash_rss {
+	__u16 indirection_table_mask;
+	__u16 unclassified_queue;
+};
+
 #endif /* _UAPI__IF_TUN_H */
Akihiko Odaki wrote:
RSS is a receive steering algorithm that can be negotiated to use with virtio_net. Conventionally the hash calculation was done by the VMM. However, computing the hash after the queue was chosen defeats the purpose of RSS.
Another approach is to use an eBPF steering program. This approach has another downside: it cannot report the calculated hash due to the restrictive nature of the eBPF steering program.
Introduce the code to perform RSS to the kernel in order to overcome these challenges. An alternative solution is to extend the eBPF steering program so that it will be able to report to the userspace, but I didn't opt for it because the current mechanism of the eBPF steering program relies on legacy context rewriting, and introducing kfunc-based eBPF would result in a non-UAPI dependency while the other relevant virtualization APIs such as KVM and vhost_net are UAPIs.
Signed-off-by: Akihiko Odaki akihiko.odaki@daynix.com
drivers/net/tun.c | 119 +++++++++++++++++++++++++++++++++++++++----- include/uapi/linux/if_tun.h | 27 ++++++++++ 2 files changed, 133 insertions(+), 13 deletions(-)
diff --git a/drivers/net/tun.c b/drivers/net/tun.c
index b8fcd71becac..5a429b391144 100644
--- a/drivers/net/tun.c
+++ b/drivers/net/tun.c
@@ -175,6 +175,9 @@ struct tun_prog {

 struct tun_vnet_hash_container {
	struct tun_vnet_hash common;
+	struct tun_vnet_hash_rss rss;
+	__be32 rss_key[VIRTIO_NET_RSS_MAX_KEY_SIZE];
+	u16 rss_indirection_table[];
 };

 /* Since the socket were moved to tun_file, to preserve the behavior of persist
@@ -227,7 +230,7 @@ struct veth {
 };

 static const struct tun_vnet_hash tun_vnet_hash_cap = {
-	.flags = TUN_VNET_HASH_REPORT,
+	.flags = TUN_VNET_HASH_REPORT | TUN_VNET_HASH_RSS,
	.types = VIRTIO_NET_SUPPORTED_HASH_TYPES
 };

@@ -591,6 +594,36 @@ static u16 tun_ebpf_select_queue(struct tun_struct *tun, struct sk_buff *skb)
	return ret % numqueues;
 }

+static u16 tun_vnet_rss_select_queue(struct tun_struct *tun,
+				     struct sk_buff *skb,
+				     const struct tun_vnet_hash_container *vnet_hash)
+{
+	struct tun_vnet_hash_ext *ext;
+	struct virtio_net_hash hash;
+	u32 numqueues = READ_ONCE(tun->numqueues);
+	u16 txq, index;
+
+	if (!numqueues)
+		return 0;
+
+	if (!virtio_net_hash_rss(skb, vnet_hash->common.types, vnet_hash->rss_key,
+				 &hash))
+		return vnet_hash->rss.unclassified_queue % numqueues;
+
+	if (vnet_hash->common.flags & TUN_VNET_HASH_REPORT) {
+		ext = skb_ext_add(skb, SKB_EXT_TUN_VNET_HASH);
+		if (ext) {
+			ext->value = hash.value;
+			ext->report = hash.report;
+		}
+	}
+
+	index = hash.value & vnet_hash->rss.indirection_table_mask;
+	txq = READ_ONCE(vnet_hash->rss_indirection_table[index]);
+
+	return txq % numqueues;
+}
+
 static u16 tun_select_queue(struct net_device *dev, struct sk_buff *skb,
			    struct net_device *sb_dev)
 {
@@ -603,7 +636,10 @@ static u16 tun_select_queue(struct net_device *dev, struct sk_buff *skb,
	} else {
		struct tun_vnet_hash_container *vnet_hash = rcu_dereference(tun->vnet_hash);

-		ret = tun_automq_select_queue(tun, skb, vnet_hash);
+		if (vnet_hash && (vnet_hash->common.flags & TUN_VNET_HASH_RSS))
+			ret = tun_vnet_rss_select_queue(tun, skb, vnet_hash);
+		else
+			ret = tun_automq_select_queue(tun, skb, vnet_hash);
	}
	rcu_read_unlock();
@@ -3085,13 +3121,9 @@ static int tun_set_queue(struct file *file, struct ifreq *ifr)
 }

 static int tun_set_ebpf(struct tun_struct *tun, struct tun_prog __rcu **prog_p,
-			void __user *data)
+			int fd)
 {
	struct bpf_prog *prog;
-	int fd;
-
-	if (copy_from_user(&fd, data, sizeof(fd)))
-		return -EFAULT;

	if (fd == -1) {
		prog = NULL;
@@ -3157,6 +3189,7 @@ static long __tun_chr_ioctl(struct file *file, unsigned int cmd,
	int ifindex;
	int sndbuf;
	int vnet_hdr_sz;
+	int fd;
	int le;
	int ret;
	bool do_notify = false;
@@ -3460,11 +3493,27 @@ static long __tun_chr_ioctl(struct file *file, unsigned int cmd,
		break;

	case TUNSETSTEERINGEBPF:
-		ret = tun_set_ebpf(tun, &tun->steering_prog, argp);
+		if (get_user(fd, (int __user *)argp)) {
+			ret = -EFAULT;
+			break;
+		}
+
+		vnet_hash = rtnl_dereference(tun->vnet_hash);
+		if (fd != -1 && vnet_hash && (vnet_hash->common.flags & TUN_VNET_HASH_RSS)) {
+			ret = -EBUSY;
+			break;
+		}
+
+		ret = tun_set_ebpf(tun, &tun->steering_prog, fd);
		break;

	case TUNSETFILTEREBPF:
-		ret = tun_set_ebpf(tun, &tun->filter_prog, argp);
+		if (get_user(fd, (int __user *)argp)) {
+			ret = -EFAULT;
+			break;
+		}
+
+		ret = tun_set_ebpf(tun, &tun->filter_prog, fd);
		break;
	case TUNSETCARRIER:
@@ -3496,10 +3545,54 @@ static long __tun_chr_ioctl(struct file *file, unsigned int cmd,
			break;
		}

-		vnet_hash = kmalloc(sizeof(vnet_hash->common), GFP_KERNEL);
-		if (!vnet_hash) {
-			ret = -ENOMEM;
-			break;
+		if (vnet_hash_common.flags & TUN_VNET_HASH_RSS) {
+			struct tun_vnet_hash_rss rss;
+			size_t indirection_table_size;
+			size_t key_size;
+			size_t size;
+
+			if (tun->steering_prog) {
+				ret = -EBUSY;
+				break;
+			}
+
+			if (copy_from_user(&rss, argp, sizeof(rss))) {
+				ret = -EFAULT;
+				break;
+			}
+			argp = (struct tun_vnet_hash_rss __user *)argp + 1;
+
+			indirection_table_size = ((size_t)rss.indirection_table_mask + 1) * 2;

Why make uapi a mask rather than a length?

Also is there an upper length bounds sanity check for this input from userspace?

+			key_size = virtio_net_hash_key_length(vnet_hash_common.types);
+			size = sizeof(*vnet_hash) + indirection_table_size + key_size;

key_size is included in sizeof(*vnet_hash), always VIRTIO_NET_RSS_MAX_KEY_SIZE.

+			vnet_hash = kmalloc(size, GFP_KERNEL);
+			if (!vnet_hash) {
+				ret = -ENOMEM;
+				break;
+			}
+
+			if (copy_from_user(vnet_hash->rss_indirection_table,
+					   argp, indirection_table_size)) {
+				kfree(vnet_hash);
+				ret = -EFAULT;
+				break;
+			}
+			argp = (u16 __user *)argp + rss.indirection_table_mask + 1;
+
+			if (copy_from_user(vnet_hash->rss_key, argp, key_size)) {
+				kfree(vnet_hash);
+				ret = -EFAULT;
+				break;
+			}
+
+			vnet_hash->rss = rss;
+		} else {
+			vnet_hash = kmalloc(sizeof(vnet_hash->common), GFP_KERNEL);
+			if (!vnet_hash) {
+				ret = -ENOMEM;
+				break;
+			}
		}
		vnet_hash->common = vnet_hash_common;

diff --git a/include/uapi/linux/if_tun.h b/include/uapi/linux/if_tun.h
index 1561e8ce0a0a..1c130409db5d 100644
--- a/include/uapi/linux/if_tun.h
+++ b/include/uapi/linux/if_tun.h
@@ -75,6 +75,14 @@
  *
  * The argument is a pointer to &struct tun_vnet_hash.
  *
+ * The argument is a pointer to the compound of the following in order if
+ * %TUN_VNET_HASH_RSS is set:
+ *
+ * 1. &struct tun_vnet_hash
+ * 2. &struct tun_vnet_hash_rss
+ * 3. Indirection table
+ * 4. Key
+ *
  * %TUNSETVNETHDRSZ ioctl must be called with a number greater than or equal to
  * the size of &struct virtio_net_hdr_v1_hash before calling this ioctl with
  * %TUN_VNET_HASH_REPORT.
@@ -144,6 +152,13 @@ struct tun_filter {
  */
 #define TUN_VNET_HASH_REPORT 0x0001

+/**
+ * define TUN_VNET_HASH_RSS - Request virtio_net RSS
+ *
+ * This is mutually exclusive with eBPF steering program.
+ */
+#define TUN_VNET_HASH_RSS 0x0002
+
 /**
  * struct tun_vnet_hash - virtio_net hashing configuration
  * @flags:
@@ -159,4 +174,16 @@ struct tun_vnet_hash {
	__u32 types;
 };

+/**
+ * struct tun_vnet_hash_rss - virtio_net RSS configuration
+ * @indirection_table_mask:
+ *	Bitmask to be applied to the indirection table index
+ * @unclassified_queue:
+ *	The index of the queue to place unclassified packets in
+ */
+struct tun_vnet_hash_rss {
+	__u16 indirection_table_mask;
+	__u16 unclassified_queue;
+};
+
 #endif /* _UAPI__IF_TUN_H */
-- 2.46.0
On 2024/09/18 15:28, Willem de Bruijn wrote:
Akihiko Odaki wrote:
RSS is a receive steering algorithm that can be negotiated to use with virtio_net. Conventionally the hash calculation was done by the VMM. However, computing the hash after the queue was chosen defeats the purpose of RSS.
Another approach is to use an eBPF steering program. This approach has another downside: it cannot report the calculated hash due to the restrictive nature of the eBPF steering program.
Introduce the code to perform RSS to the kernel in order to overcome these challenges. An alternative solution is to extend the eBPF steering program so that it will be able to report to the userspace, but I didn't opt for it because the current mechanism of the eBPF steering program relies on legacy context rewriting, and introducing kfunc-based eBPF would result in a non-UAPI dependency while the other relevant virtualization APIs such as KVM and vhost_net are UAPIs.
Signed-off-by: Akihiko Odaki akihiko.odaki@daynix.com
drivers/net/tun.c | 119 +++++++++++++++++++++++++++++++++++++++----- include/uapi/linux/if_tun.h | 27 ++++++++++ 2 files changed, 133 insertions(+), 13 deletions(-)
diff --git a/drivers/net/tun.c b/drivers/net/tun.c index b8fcd71becac..5a429b391144 100644 --- a/drivers/net/tun.c +++ b/drivers/net/tun.c @@ -175,6 +175,9 @@ struct tun_prog { struct tun_vnet_hash_container { struct tun_vnet_hash common;
- struct tun_vnet_hash_rss rss;
- __be32 rss_key[VIRTIO_NET_RSS_MAX_KEY_SIZE];
- u16 rss_indirection_table[]; };
/* Since the socket were moved to tun_file, to preserve the behavior of persist @@ -227,7 +230,7 @@ struct veth { }; static const struct tun_vnet_hash tun_vnet_hash_cap = {
- .flags = TUN_VNET_HASH_REPORT,
- .flags = TUN_VNET_HASH_REPORT | TUN_VNET_HASH_RSS, .types = VIRTIO_NET_SUPPORTED_HASH_TYPES };
@@ -591,6 +594,36 @@ static u16 tun_ebpf_select_queue(struct tun_struct *tun, struct sk_buff *skb) return ret % numqueues; } +static u16 tun_vnet_rss_select_queue(struct tun_struct *tun,
struct sk_buff *skb,
const struct tun_vnet_hash_container *vnet_hash)
+{
- struct tun_vnet_hash_ext *ext;
- struct virtio_net_hash hash;
- u32 numqueues = READ_ONCE(tun->numqueues);
- u16 txq, index;
- if (!numqueues)
return 0;
- if (!virtio_net_hash_rss(skb, vnet_hash->common.types, vnet_hash->rss_key,
&hash))
return vnet_hash->rss.unclassified_queue % numqueues;
- if (vnet_hash->common.flags & TUN_VNET_HASH_REPORT) {
ext = skb_ext_add(skb, SKB_EXT_TUN_VNET_HASH);
if (ext) {
ext->value = hash.value;
ext->report = hash.report;
}
- }
- index = hash.value & vnet_hash->rss.indirection_table_mask;
- txq = READ_ONCE(vnet_hash->rss_indirection_table[index]);
- return txq % numqueues;
+}
- static u16 tun_select_queue(struct net_device *dev, struct sk_buff *skb, struct net_device *sb_dev) {
@@ -603,7 +636,10 @@ static u16 tun_select_queue(struct net_device *dev, struct sk_buff *skb,
	} else {
		struct tun_vnet_hash_container *vnet_hash = rcu_dereference(tun->vnet_hash);

-		ret = tun_automq_select_queue(tun, skb, vnet_hash);
+		if (vnet_hash && (vnet_hash->common.flags & TUN_VNET_HASH_RSS))
+			ret = tun_vnet_rss_select_queue(tun, skb, vnet_hash);
+		else
+			ret = tun_automq_select_queue(tun, skb, vnet_hash);
	}
	rcu_read_unlock();
@@ -3085,13 +3121,9 @@ static int tun_set_queue(struct file *file, struct ifreq *ifr)
 }

 static int tun_set_ebpf(struct tun_struct *tun, struct tun_prog __rcu **prog_p,
-			void __user *data)
+			int fd)
 {
	struct bpf_prog *prog;
-	int fd;
-
-	if (copy_from_user(&fd, data, sizeof(fd)))
-		return -EFAULT;

	if (fd == -1) {
		prog = NULL;
@@ -3157,6 +3189,7 @@ static long __tun_chr_ioctl(struct file *file, unsigned int cmd,
	int ifindex;
	int sndbuf;
	int vnet_hdr_sz;
+	int fd;
	int le;
	int ret;
	bool do_notify = false;
@@ -3460,11 +3493,27 @@ static long __tun_chr_ioctl(struct file *file, unsigned int cmd,
		break;

	case TUNSETSTEERINGEBPF:
-		ret = tun_set_ebpf(tun, &tun->steering_prog, argp);
+		if (get_user(fd, (int __user *)argp)) {
+			ret = -EFAULT;
+			break;
+		}
+
+		vnet_hash = rtnl_dereference(tun->vnet_hash);
+		if (fd != -1 && vnet_hash && (vnet_hash->common.flags & TUN_VNET_HASH_RSS)) {
+			ret = -EBUSY;
+			break;
+		}
+
+		ret = tun_set_ebpf(tun, &tun->steering_prog, fd);
		break;

	case TUNSETFILTEREBPF:
-		ret = tun_set_ebpf(tun, &tun->filter_prog, argp);
+		if (get_user(fd, (int __user *)argp)) {
+			ret = -EFAULT;
+			break;
+		}
+
+		ret = tun_set_ebpf(tun, &tun->filter_prog, fd);
		break;
	case TUNSETCARRIER:
@@ -3496,10 +3545,54 @@ static long __tun_chr_ioctl(struct file *file, unsigned int cmd,
			break;
		}

-		vnet_hash = kmalloc(sizeof(vnet_hash->common), GFP_KERNEL);
-		if (!vnet_hash) {
-			ret = -ENOMEM;
-			break;
+		if (vnet_hash_common.flags & TUN_VNET_HASH_RSS) {
+			struct tun_vnet_hash_rss rss;
+			size_t indirection_table_size;
+			size_t key_size;
+			size_t size;
+
+			if (tun->steering_prog) {
+				ret = -EBUSY;
+				break;
+			}
+
+			if (copy_from_user(&rss, argp, sizeof(rss))) {
+				ret = -EFAULT;
+				break;
+			}
+			argp = (struct tun_vnet_hash_rss __user *)argp + 1;
+
+			indirection_table_size = ((size_t)rss.indirection_table_mask + 1) * 2;
Why make uapi a mask rather than a length?
It follows the virtio specification. It is actually used as a mask in tun_vnet_rss_select_queue().
Also is there an upper length bounds sanity check for this input from userspace?
No, but the maximum size is limited to 128 bytes because the indirection_table_mask is 16-bit and it indexes an array of 16-bit integers.
key_size = virtio_net_hash_key_length(vnet_hash_common.types);
size = sizeof(*vnet_hash) + indirection_table_size + key_size;
key_size is included in sizeof(*vnet_hash), always VIRTIO_NET_RSS_MAX_KEY_SIZE.
I will fix this by replacing it with: struct_size(vnet_hash, rss_indirection_table, (size_t)rss.indirection_table_mask + 1)
Regards, Akihiko Odaki
+			vnet_hash = kmalloc(size, GFP_KERNEL);
+			if (!vnet_hash) {
+				ret = -ENOMEM;
+				break;
+			}
+
+			if (copy_from_user(vnet_hash->rss_indirection_table,
+					   argp, indirection_table_size)) {
+				kfree(vnet_hash);
+				ret = -EFAULT;
+				break;
+			}
+			argp = (u16 __user *)argp + rss.indirection_table_mask + 1;
+
+			if (copy_from_user(vnet_hash->rss_key, argp, key_size)) {
+				kfree(vnet_hash);
+				ret = -EFAULT;
+				break;
+			}
+
+			vnet_hash->rss = rss;
+		} else {
+			vnet_hash = kmalloc(sizeof(vnet_hash->common), GFP_KERNEL);
+			if (!vnet_hash) {
+				ret = -ENOMEM;
+				break;
+			}
+		}
		vnet_hash->common = vnet_hash_common;

diff --git a/include/uapi/linux/if_tun.h b/include/uapi/linux/if_tun.h
index 1561e8ce0a0a..1c130409db5d 100644
--- a/include/uapi/linux/if_tun.h
+++ b/include/uapi/linux/if_tun.h
@@ -75,6 +75,14 @@
  *
  * The argument is a pointer to &struct tun_vnet_hash.
  *
+ * The argument is a pointer to the compound of the following in order if
+ * %TUN_VNET_HASH_RSS is set:
+ *
+ * 1. &struct tun_vnet_hash
+ * 2. &struct tun_vnet_hash_rss
+ * 3. Indirection table
+ * 4. Key
+ *
  * %TUNSETVNETHDRSZ ioctl must be called with a number greater than or equal to
  * the size of &struct virtio_net_hdr_v1_hash before calling this ioctl with
  * %TUN_VNET_HASH_REPORT.
@@ -144,6 +152,13 @@ struct tun_filter {
  */
 #define TUN_VNET_HASH_REPORT 0x0001

+/**
+ * define TUN_VNET_HASH_RSS - Request virtio_net RSS
+ *
+ * This is mutually exclusive with eBPF steering program.
+ */
+#define TUN_VNET_HASH_RSS 0x0002
+
 /**
  * struct tun_vnet_hash - virtio_net hashing configuration
  * @flags:
@@ -159,4 +174,16 @@ struct tun_vnet_hash {
	__u32 types;
 };

+/**
+ * struct tun_vnet_hash_rss - virtio_net RSS configuration
+ * @indirection_table_mask:
+ *	Bitmask to be applied to the indirection table index
+ * @unclassified_queue:
+ *	The index of the queue to place unclassified packets in
+ */
+struct tun_vnet_hash_rss {
+	__u16 indirection_table_mask;
+	__u16 unclassified_queue;
+};
+
 #endif /* _UAPI__IF_TUN_H */
-- 2.46.0
On 2024/09/24 10:56, Akihiko Odaki wrote:
On 2024/09/18 15:28, Willem de Bruijn wrote:
Akihiko Odaki wrote:
RSS is a receive steering algorithm that can be negotiated to use with virtio_net. Conventionally the hash calculation was done by the VMM. However, computing the hash after the queue was chosen defeats the purpose of RSS.
Another approach is to use an eBPF steering program. This approach has another downside: it cannot report the calculated hash due to the restrictive nature of the eBPF steering program.
Introduce the code to perform RSS to the kernel in order to overcome these challenges. An alternative solution is to extend the eBPF steering program so that it will be able to report to the userspace, but I didn't opt for it because the current mechanism of the eBPF steering program relies on legacy context rewriting, and introducing kfunc-based eBPF would result in a non-UAPI dependency while the other relevant virtualization APIs such as KVM and vhost_net are UAPIs.
Signed-off-by: Akihiko Odaki akihiko.odaki@daynix.com
drivers/net/tun.c | 119 +++++++++++++++++++++++++++++++++++++++----- include/uapi/linux/if_tun.h | 27 ++++++++++ 2 files changed, 133 insertions(+), 13 deletions(-)
diff --git a/drivers/net/tun.c b/drivers/net/tun.c index b8fcd71becac..5a429b391144 100644 --- a/drivers/net/tun.c +++ b/drivers/net/tun.c @@ -175,6 +175,9 @@ struct tun_prog { struct tun_vnet_hash_container { struct tun_vnet_hash common; + struct tun_vnet_hash_rss rss; + __be32 rss_key[VIRTIO_NET_RSS_MAX_KEY_SIZE]; + u16 rss_indirection_table[]; }; /* Since the socket were moved to tun_file, to preserve the behavior of persist @@ -227,7 +230,7 @@ struct veth { }; static const struct tun_vnet_hash tun_vnet_hash_cap = { - .flags = TUN_VNET_HASH_REPORT, + .flags = TUN_VNET_HASH_REPORT | TUN_VNET_HASH_RSS, .types = VIRTIO_NET_SUPPORTED_HASH_TYPES }; @@ -591,6 +594,36 @@ static u16 tun_ebpf_select_queue(struct tun_struct *tun, struct sk_buff *skb) return ret % numqueues; } +static u16 tun_vnet_rss_select_queue(struct tun_struct *tun, + struct sk_buff *skb, + const struct tun_vnet_hash_container *vnet_hash) +{ + struct tun_vnet_hash_ext *ext; + struct virtio_net_hash hash; + u32 numqueues = READ_ONCE(tun->numqueues); + u16 txq, index;
+ if (!numqueues) + return 0;
+ if (!virtio_net_hash_rss(skb, vnet_hash->common.types, vnet_hash->rss_key, + &hash)) + return vnet_hash->rss.unclassified_queue % numqueues;
+ if (vnet_hash->common.flags & TUN_VNET_HASH_REPORT) { + ext = skb_ext_add(skb, SKB_EXT_TUN_VNET_HASH); + if (ext) { + ext->value = hash.value; + ext->report = hash.report; + } + }
+ index = hash.value & vnet_hash->rss.indirection_table_mask; + txq = READ_ONCE(vnet_hash->rss_indirection_table[index]);
+ return txq % numqueues; +}
static u16 tun_select_queue(struct net_device *dev, struct sk_buff *skb, struct net_device *sb_dev) { @@ -603,7 +636,10 @@ static u16 tun_select_queue(struct net_device *dev, struct sk_buff *skb, } else { struct tun_vnet_hash_container *vnet_hash = rcu_dereference(tun->vnet_hash); - ret = tun_automq_select_queue(tun, skb, vnet_hash); + if (vnet_hash && (vnet_hash->common.flags & TUN_VNET_HASH_RSS)) + ret = tun_vnet_rss_select_queue(tun, skb, vnet_hash); + else + ret = tun_automq_select_queue(tun, skb, vnet_hash); } rcu_read_unlock(); @@ -3085,13 +3121,9 @@ static int tun_set_queue(struct file *file, struct ifreq *ifr) } static int tun_set_ebpf(struct tun_struct *tun, struct tun_prog __rcu **prog_p, - void __user *data) + int fd) { struct bpf_prog *prog; - int fd;
- if (copy_from_user(&fd, data, sizeof(fd))) - return -EFAULT; if (fd == -1) { prog = NULL; @@ -3157,6 +3189,7 @@ static long __tun_chr_ioctl(struct file *file, unsigned int cmd, int ifindex; int sndbuf; int vnet_hdr_sz; + int fd; int le; int ret; bool do_notify = false; @@ -3460,11 +3493,27 @@ static long __tun_chr_ioctl(struct file *file, unsigned int cmd, break; case TUNSETSTEERINGEBPF: - ret = tun_set_ebpf(tun, &tun->steering_prog, argp); + if (get_user(fd, (int __user *)argp)) { + ret = -EFAULT; + break; + }
+ vnet_hash = rtnl_dereference(tun->vnet_hash); + if (fd != -1 && vnet_hash && (vnet_hash->common.flags & TUN_VNET_HASH_RSS)) { + ret = -EBUSY; + break; + }
+ ret = tun_set_ebpf(tun, &tun->steering_prog, fd); break; case TUNSETFILTEREBPF: - ret = tun_set_ebpf(tun, &tun->filter_prog, argp); + if (get_user(fd, (int __user *)argp)) { + ret = -EFAULT; + break; + }
+ ret = tun_set_ebpf(tun, &tun->filter_prog, fd); break; case TUNSETCARRIER: @@ -3496,10 +3545,54 @@ static long __tun_chr_ioctl(struct file *file, unsigned int cmd, break; } - vnet_hash = kmalloc(sizeof(vnet_hash->common), GFP_KERNEL); - if (!vnet_hash) { - ret = -ENOMEM; - break; + if (vnet_hash_common.flags & TUN_VNET_HASH_RSS) { + struct tun_vnet_hash_rss rss; + size_t indirection_table_size; + size_t key_size; + size_t size;
+ if (tun->steering_prog) { + ret = -EBUSY; + break; + }
+ if (copy_from_user(&rss, argp, sizeof(rss))) { + ret = -EFAULT; + break; + } + argp = (struct tun_vnet_hash_rss __user *)argp + 1;
+ indirection_table_size = ((size_t)rss.indirection_table_mask + 1) * 2;
Why make uapi a mask rather than a length?
It follows the virtio specification. It is actually used as a mask in tun_vnet_rss_select_queue().
Also is there an upper length bounds sanity check for this input from userspace?
No, but the maximum size is limited to 128 bytes because the indirection_table_mask is 16-bit and it indexes an array of 16-bit integers.
Not 128 bytes but 128 KiB.
+ key_size = virtio_net_hash_key_length(vnet_hash_common.types); + size = sizeof(*vnet_hash) + indirection_table_size + key_size;
key_size is included in sizeof(*vnet_hash), always VIRTIO_NET_RSS_MAX_KEY_SIZE.
I will fix this by replacing it with: struct_size(vnet_hash, rss_indirection_table, (size_t)rss.indirection_table_mask + 1)
Regards, Akihiko Odaki
+ vnet_hash = kmalloc(size, GFP_KERNEL); + if (!vnet_hash) { + ret = -ENOMEM; + break; + }
+ if (copy_from_user(vnet_hash->rss_indirection_table, + argp, indirection_table_size)) { + kfree(vnet_hash); + ret = -EFAULT; + break; + } + argp = (u16 __user *)argp + rss.indirection_table_mask + 1;
+ if (copy_from_user(vnet_hash->rss_key, argp, key_size)) { + kfree(vnet_hash); + ret = -EFAULT; + break; + }
+ vnet_hash->rss = rss; + } else { + vnet_hash = kmalloc(sizeof(vnet_hash->common), GFP_KERNEL); + if (!vnet_hash) { + ret = -ENOMEM; + break; + } } vnet_hash->common = vnet_hash_common; diff --git a/include/uapi/linux/if_tun.h b/include/uapi/linux/if_tun.h index 1561e8ce0a0a..1c130409db5d 100644 --- a/include/uapi/linux/if_tun.h +++ b/include/uapi/linux/if_tun.h @@ -75,6 +75,14 @@ * * The argument is a pointer to &struct tun_vnet_hash. *
- The argument is a pointer to the compound of the following in
order if
- %TUN_VNET_HASH_RSS is set:
- &struct tun_vnet_hash
- &struct tun_vnet_hash_rss
- Indirection table
- Key
* %TUNSETVNETHDRSZ ioctl must be called with a number greater than or equal to * the size of &struct virtio_net_hdr_v1_hash before calling this ioctl with * %TUN_VNET_HASH_REPORT. @@ -144,6 +152,13 @@ struct tun_filter { */ #define TUN_VNET_HASH_REPORT 0x0001 +/**
- define TUN_VNET_HASH_RSS - Request virtio_net RSS
- This is mutually exclusive with eBPF steering program.
- */
+#define TUN_VNET_HASH_RSS 0x0002
/** * struct tun_vnet_hash - virtio_net hashing configuration * @flags: @@ -159,4 +174,16 @@ struct tun_vnet_hash { __u32 types; }; +/**
- struct tun_vnet_hash_rss - virtio_net RSS configuration
- @indirection_table_mask:
- * Bitmask to be applied to the indirection table index
- @unclassified_queue:
- * The index of the queue to place unclassified packets in
- */
+struct tun_vnet_hash_rss { + __u16 indirection_table_mask; + __u16 unclassified_queue; +};
#endif /* _UAPI__IF_TUN_H */
-- 2.46.0
The added tests confirm that tun can perform RSS and hash reporting, and that it rejects invalid configurations for them.
Signed-off-by: Akihiko Odaki akihiko.odaki@daynix.com --- tools/testing/selftests/net/Makefile | 2 +- tools/testing/selftests/net/tun.c | 666 ++++++++++++++++++++++++++++++++++- 2 files changed, 660 insertions(+), 8 deletions(-)
diff --git a/tools/testing/selftests/net/Makefile b/tools/testing/selftests/net/Makefile index 8eaffd7a641c..5629e68bf69d 100644 --- a/tools/testing/selftests/net/Makefile +++ b/tools/testing/selftests/net/Makefile @@ -109,6 +109,6 @@ $(OUTPUT)/reuseport_bpf_numa: LDLIBS += -lnuma $(OUTPUT)/tcp_mmap: LDLIBS += -lpthread -lcrypto $(OUTPUT)/tcp_inq: LDLIBS += -lpthread $(OUTPUT)/bind_bhash: LDLIBS += -lpthread -$(OUTPUT)/io_uring_zerocopy_tx: CFLAGS += -I../../../include/ +$(OUTPUT)/io_uring_zerocopy_tx $(OUTPUT)/tun: CFLAGS += -I../../../include/
include bpf.mk diff --git a/tools/testing/selftests/net/tun.c b/tools/testing/selftests/net/tun.c index fa83918b62d1..f46affa39d5c 100644 --- a/tools/testing/selftests/net/tun.c +++ b/tools/testing/selftests/net/tun.c @@ -2,21 +2,37 @@
#define _GNU_SOURCE
+#include <endian.h> #include <errno.h> #include <fcntl.h> +#include <stddef.h> #include <stdio.h> #include <stdlib.h> #include <string.h> #include <unistd.h> -#include <linux/if.h> +#include <net/if.h> +#include <netinet/ip.h> +#include <sys/ioctl.h> +#include <sys/socket.h> +#include <linux/compiler.h> +#include <linux/icmp.h> +#include <linux/if_arp.h> #include <linux/if_tun.h> +#include <linux/ipv6.h> #include <linux/netlink.h> #include <linux/rtnetlink.h> -#include <sys/ioctl.h> -#include <sys/socket.h> +#include <linux/sockios.h> +#include <linux/tcp.h> +#include <linux/udp.h> +#include <linux/virtio_net.h>
#include "../kselftest_harness.h"
+#define TUN_HWADDR_SOURCE { 0x02, 0x00, 0x00, 0x00, 0x00, 0x00 } +#define TUN_HWADDR_DEST { 0x02, 0x00, 0x00, 0x00, 0x00, 0x01 } +#define TUN_IPADDR_SOURCE htonl((172 << 24) | (17 << 16) | 0) +#define TUN_IPADDR_DEST htonl((172 << 24) | (17 << 16) | 1) + static int tun_attach(int fd, char *dev) { struct ifreq ifr; @@ -39,7 +55,7 @@ static int tun_detach(int fd, char *dev) return ioctl(fd, TUNSETQUEUE, (void *) &ifr); }
-static int tun_alloc(char *dev) +static int tun_alloc(char *dev, short flags) { struct ifreq ifr; int fd, err; @@ -52,7 +68,8 @@ static int tun_alloc(char *dev)
memset(&ifr, 0, sizeof(ifr)); strcpy(ifr.ifr_name, dev); - ifr.ifr_flags = IFF_TAP | IFF_NAPI | IFF_MULTI_QUEUE; + ifr.ifr_flags = flags | IFF_TAP | IFF_NAPI | IFF_NO_PI | + IFF_MULTI_QUEUE;
err = ioctl(fd, TUNSETIFF, (void *) &ifr); if (err < 0) { @@ -64,6 +81,40 @@ static int tun_alloc(char *dev) return fd; }
+static bool tun_add_to_bridge(int local_fd, const char *name) +{ + struct ifreq ifreq = { + .ifr_name = "xbridge", + .ifr_ifindex = if_nametoindex(name) + }; + + if (!ifreq.ifr_ifindex) { + perror("if_nametoindex"); + return false; + } + + if (ioctl(local_fd, SIOCBRADDIF, &ifreq)) { + perror("SIOCBRADDIF"); + return false; + } + + return true; +} + +static bool tun_set_flags(int local_fd, const char *name, short flags) +{ + struct ifreq ifreq = { .ifr_flags = flags }; + + strcpy(ifreq.ifr_name, name); + + if (ioctl(local_fd, SIOCSIFFLAGS, &ifreq)) { + perror("SIOCSIFFLAGS"); + return false; + } + + return true; +} + static int tun_delete(char *dev) { struct { @@ -102,6 +153,159 @@ static int tun_delete(char *dev) return ret; }
+static uint32_t tun_sum(const void *buf, size_t len) +{ + const uint16_t *sbuf = buf; + uint32_t sum = 0; + + while (len > 1) { + sum += *sbuf++; + len -= 2; + } + + if (len) + sum += *(uint8_t *)sbuf; + + return sum; +} + +static uint16_t tun_build_ip_check(uint32_t sum) +{ + return ~((sum & 0xffff) + (sum >> 16)); +} + +static uint32_t tun_build_ip_pseudo_sum(const void *iphdr) +{ + uint16_t tot_len = ntohs(((struct iphdr *)iphdr)->tot_len); + + return tun_sum((char *)iphdr + offsetof(struct iphdr, saddr), 8) + + htons(((struct iphdr *)iphdr)->protocol) + + htons(tot_len - sizeof(struct iphdr)); +} + +static uint32_t tun_build_ipv6_pseudo_sum(const void *ipv6hdr) +{ + return tun_sum((char *)ipv6hdr + offsetof(struct ipv6hdr, saddr), 32) + + ((struct ipv6hdr *)ipv6hdr)->payload_len + + htons(((struct ipv6hdr *)ipv6hdr)->nexthdr); +} + +static void tun_build_ethhdr(struct ethhdr *ethhdr, uint16_t proto) +{ + *ethhdr = (struct ethhdr) { + .h_dest = TUN_HWADDR_DEST, + .h_source = TUN_HWADDR_SOURCE, + .h_proto = htons(proto) + }; +} + +static void tun_build_iphdr(void *dest, uint16_t len, uint8_t protocol) +{ + struct iphdr iphdr = { + .ihl = sizeof(iphdr) / 4, + .version = 4, + .tot_len = htons(sizeof(iphdr) + len), + .ttl = 255, + .protocol = protocol, + .saddr = TUN_IPADDR_SOURCE, + .daddr = TUN_IPADDR_DEST + }; + + iphdr.check = tun_build_ip_check(tun_sum(&iphdr, sizeof(iphdr))); + memcpy(dest, &iphdr, sizeof(iphdr)); +} + +static void tun_build_ipv6hdr(void *dest, uint16_t len, uint8_t protocol) +{ + struct ipv6hdr ipv6hdr = { + .version = 6, + .payload_len = htons(len), + .nexthdr = protocol, + .saddr = { + .s6_addr32 = { + htonl(0xffff0000), 0, 0, TUN_IPADDR_SOURCE + } + }, + .daddr = { + .s6_addr32 = { + htonl(0xffff0000), 0, 0, TUN_IPADDR_DEST + } + }, + }; + + memcpy(dest, &ipv6hdr, sizeof(ipv6hdr)); +} + +static void tun_build_tcphdr(void *dest, uint32_t sum) +{ + struct tcphdr tcphdr = { + .source = htons(9), + .dest = htons(9), + .fin = 1, + .doff = 
sizeof(tcphdr) / 4, + }; + uint32_t tcp_sum = tun_sum(&tcphdr, sizeof(tcphdr)); + + tcphdr.check = tun_build_ip_check(sum + tcp_sum); + memcpy(dest, &tcphdr, sizeof(tcphdr)); +} + +static void tun_build_udphdr(void *dest, uint32_t sum) +{ + struct udphdr udphdr = { + .source = htons(9), + .dest = htons(9), + .len = htons(sizeof(udphdr)), + }; + uint32_t udp_sum = tun_sum(&udphdr, sizeof(udphdr)); + + udphdr.check = tun_build_ip_check(sum + udp_sum); + memcpy(dest, &udphdr, sizeof(udphdr)); +} + +static bool tun_vnet_hash_check(int source_fd, const int *dest_fds, + const void *buffer, size_t len, + uint8_t flags, + uint16_t hash_report, uint32_t hash_value) +{ + size_t read_len = sizeof(struct virtio_net_hdr_v1_hash) + len; + struct virtio_net_hdr_v1_hash *read_buffer; + struct virtio_net_hdr_v1_hash hdr = { + .hdr = { + .flags = flags, + .num_buffers = hash_report ? htole16(1) : 0 + }, + .hash_value = htole32(hash_value), + .hash_report = htole16(hash_report) + }; + int ret; + int txq = hash_report ? hash_value & 1 : 2; + + if (write(source_fd, buffer, len) != len) { + perror("write"); + return false; + } + + read_buffer = malloc(read_len); + if (!read_buffer) { + perror("malloc"); + return false; + } + + ret = read(dest_fds[txq], read_buffer, read_len); + if (ret != read_len) { + perror("read"); + free(read_buffer); + return false; + } + + ret = !memcmp(read_buffer, &hdr, sizeof(*read_buffer)) && + !memcmp(read_buffer + 1, buffer, len); + + free(read_buffer); + return ret; +} + FIXTURE(tun) { char ifname[IFNAMSIZ]; @@ -112,10 +316,10 @@ FIXTURE_SETUP(tun) { memset(self->ifname, 0, sizeof(self->ifname));
- self->fd = tun_alloc(self->ifname); + self->fd = tun_alloc(self->ifname, 0); ASSERT_GE(self->fd, 0);
- self->fd2 = tun_alloc(self->ifname); + self->fd2 = tun_alloc(self->ifname, 0); ASSERT_GE(self->fd2, 0); }
@@ -159,4 +363,452 @@ TEST_F(tun, reattach_close_delete) { EXPECT_EQ(tun_delete(self->ifname), 0); }
+FIXTURE(tun_vnet_hash) +{ + int local_fd; + int source_fd; + int dest_fds[3]; +}; + +FIXTURE_SETUP(tun_vnet_hash) +{ + static const struct { + struct tun_vnet_hash hdr; + struct tun_vnet_hash_rss rss; + uint16_t rss_indirection_table[2]; + uint8_t rss_key[40]; + } vnet_hash = { + .hdr = { + .flags = TUN_VNET_HASH_REPORT | TUN_VNET_HASH_RSS, + .types = VIRTIO_NET_RSS_HASH_TYPE_IPv4 | + VIRTIO_NET_RSS_HASH_TYPE_TCPv4 | + VIRTIO_NET_RSS_HASH_TYPE_UDPv4 | + VIRTIO_NET_RSS_HASH_TYPE_IPv6 | + VIRTIO_NET_RSS_HASH_TYPE_TCPv6 | + VIRTIO_NET_RSS_HASH_TYPE_UDPv6 + }, + .rss = { .indirection_table_mask = 1, .unclassified_queue = 5 }, + .rss_indirection_table = { 3, 4 }, + .rss_key = { + 0x6d, 0x5a, 0x56, 0xda, 0x25, 0x5b, 0x0e, 0xc2, + 0x41, 0x67, 0x25, 0x3d, 0x43, 0xa3, 0x8f, 0xb0, + 0xd0, 0xca, 0x2b, 0xcb, 0xae, 0x7b, 0x30, 0xb4, + 0x77, 0xcb, 0x2d, 0xa3, 0x80, 0x30, 0xf2, 0x0c, + 0x6a, 0x42, 0xb7, 0x3b, 0xbe, 0xac, 0x01, 0xfa + } + }; + + struct { + struct virtio_net_hdr_v1_hash vnet_hdr; + struct ethhdr ethhdr; + struct arphdr arphdr; + unsigned char sender_hwaddr[6]; + uint32_t sender_ipaddr; + unsigned char target_hwaddr[6]; + uint32_t target_ipaddr; + } __packed packet = { + .ethhdr = { + .h_source = TUN_HWADDR_SOURCE, + .h_dest = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff }, + .h_proto = htons(ETH_P_ARP) + }, + .arphdr = { + .ar_hrd = htons(ARPHRD_ETHER), + .ar_pro = htons(ETH_P_IP), + .ar_hln = ETH_ALEN, + .ar_pln = 4, + .ar_op = htons(ARPOP_REQUEST) + }, + .sender_hwaddr = TUN_HWADDR_DEST, + .sender_ipaddr = TUN_IPADDR_DEST, + .target_ipaddr = TUN_IPADDR_DEST + }; + + char source_ifname[IFNAMSIZ] = ""; + char dest_ifname[IFNAMSIZ] = ""; + int i; + + self->local_fd = socket(AF_LOCAL, SOCK_STREAM, 0); + ASSERT_LE(0, self->local_fd); + + self->source_fd = tun_alloc(source_ifname, 0); + ASSERT_LE(0, self->source_fd) { + EXPECT_EQ(0, close(self->local_fd)); + } + + ASSERT_TRUE(tun_set_flags(self->local_fd, source_ifname, IFF_UP)) { + EXPECT_EQ(0, close(self->local_fd)); + } + 
+ self->dest_fds[0] = tun_alloc(dest_ifname, IFF_VNET_HDR); + ASSERT_LE(0, self->dest_fds[0]) { + EXPECT_EQ(0, close(self->source_fd)); + EXPECT_EQ(0, close(self->local_fd)); + } + + i = sizeof(struct virtio_net_hdr_v1_hash); + ASSERT_EQ(ioctl(self->dest_fds[0], TUNSETVNETHDRSZ, &i), 0) { + EXPECT_EQ(0, close(self->dest_fds[0])); + EXPECT_EQ(0, close(self->source_fd)); + EXPECT_EQ(0, close(self->local_fd)); + } + + i = 1; + ASSERT_EQ(ioctl(self->dest_fds[0], TUNSETVNETLE, &i), 0) { + EXPECT_EQ(0, close(self->dest_fds[0])); + EXPECT_EQ(0, close(self->source_fd)); + EXPECT_EQ(0, close(self->local_fd)); + } + + ASSERT_TRUE(tun_set_flags(self->local_fd, dest_ifname, IFF_UP)) { + EXPECT_EQ(0, close(self->dest_fds[0])); + EXPECT_EQ(0, close(self->source_fd)); + EXPECT_EQ(0, close(self->local_fd)); + } + + ASSERT_EQ(write(self->dest_fds[0], &packet, sizeof(packet)), + sizeof(packet)) { + EXPECT_EQ(0, close(self->dest_fds[0])); + EXPECT_EQ(0, close(self->source_fd)); + EXPECT_EQ(0, close(self->local_fd)); + } + + ASSERT_EQ(ioctl(self->dest_fds[0], TUNSETVNETHASH, &vnet_hash), 0) { + EXPECT_EQ(0, close(self->dest_fds[0])); + EXPECT_EQ(0, close(self->source_fd)); + EXPECT_EQ(0, close(self->local_fd)); + } + + for (i = 1; i < ARRAY_SIZE(self->dest_fds); i++) { + self->dest_fds[i] = tun_alloc(dest_ifname, IFF_VNET_HDR); + ASSERT_LE(0, self->dest_fds[i]) { + while (i) { + i--; + EXPECT_EQ(0, close(self->local_fd)); + } + + EXPECT_EQ(0, close(self->source_fd)); + EXPECT_EQ(0, close(self->local_fd)); + } + } + + ASSERT_EQ(ioctl(self->local_fd, SIOCBRADDBR, "xbridge"), 0) { + EXPECT_EQ(0, ioctl(self->local_fd, SIOCBRDELBR, "xbridge")); + + for (i = 0; i < ARRAY_SIZE(self->dest_fds); i++) + EXPECT_EQ(0, close(self->dest_fds[i])); + + EXPECT_EQ(0, close(self->source_fd)); + EXPECT_EQ(0, close(self->local_fd)); + } + + ASSERT_TRUE(tun_add_to_bridge(self->local_fd, source_ifname)) { + EXPECT_EQ(0, ioctl(self->local_fd, SIOCBRDELBR, "xbridge")); + + for (i = 0; i < 
ARRAY_SIZE(self->dest_fds); i++) + EXPECT_EQ(0, close(self->dest_fds[i])); + + EXPECT_EQ(0, close(self->source_fd)); + EXPECT_EQ(0, close(self->local_fd)); + } + + ASSERT_TRUE(tun_add_to_bridge(self->local_fd, dest_ifname)) { + EXPECT_EQ(0, ioctl(self->local_fd, SIOCBRDELBR, "xbridge")); + + for (i = 0; i < ARRAY_SIZE(self->dest_fds); i++) + EXPECT_EQ(0, close(self->dest_fds[i])); + + EXPECT_EQ(0, close(self->source_fd)); + EXPECT_EQ(0, close(self->local_fd)); + } + + ASSERT_TRUE(tun_set_flags(self->local_fd, "xbridge", IFF_UP)) { + EXPECT_EQ(0, ioctl(self->local_fd, SIOCBRDELBR, "xbridge")); + + for (i = 0; i < ARRAY_SIZE(self->dest_fds); i++) + EXPECT_EQ(0, close(self->dest_fds[i])); + + EXPECT_EQ(0, close(self->source_fd)); + EXPECT_EQ(0, close(self->local_fd)); + } +} + +FIXTURE_TEARDOWN(tun_vnet_hash) +{ + ASSERT_TRUE(tun_set_flags(self->local_fd, "xbridge", 0)) { + for (size_t i = 0; i < ARRAY_SIZE(self->dest_fds); i++) + EXPECT_EQ(0, close(self->dest_fds[i])); + + EXPECT_EQ(0, close(self->source_fd)); + EXPECT_EQ(0, close(self->local_fd)); + } + + EXPECT_EQ(0, ioctl(self->local_fd, SIOCBRDELBR, "xbridge")); + + for (size_t i = 0; i < ARRAY_SIZE(self->dest_fds); i++) + EXPECT_EQ(0, close(self->dest_fds[i])); + + EXPECT_EQ(0, close(self->source_fd)); + EXPECT_EQ(0, close(self->local_fd)); +} + +TEST_F(tun_vnet_hash, unclassified) +{ + struct { + struct ethhdr ethhdr; + struct iphdr iphdr; + } __packed packet; + + tun_build_ethhdr(&packet.ethhdr, ETH_P_LOOPBACK); + + EXPECT_TRUE(tun_vnet_hash_check(self->source_fd, self->dest_fds, + &packet, sizeof(packet), 0, + VIRTIO_NET_HASH_REPORT_NONE, 0)); +} + +TEST_F(tun_vnet_hash, ipv4) +{ + struct { + struct ethhdr ethhdr; + struct iphdr iphdr; + } __packed packet; + + tun_build_ethhdr(&packet.ethhdr, ETH_P_IP); + tun_build_iphdr(&packet.iphdr, 0, 253); + + EXPECT_TRUE(tun_vnet_hash_check(self->source_fd, self->dest_fds, + &packet, sizeof(packet), 0, + VIRTIO_NET_HASH_REPORT_IPv4, + 0x6e45d952)); +} + 
+TEST_F(tun_vnet_hash, tcpv4) +{ + struct { + struct ethhdr ethhdr; + struct iphdr iphdr; + struct tcphdr tcphdr; + } __packed packet; + + tun_build_ethhdr(&packet.ethhdr, ETH_P_IP); + tun_build_iphdr(&packet.iphdr, sizeof(struct tcphdr), IPPROTO_TCP); + + tun_build_tcphdr(&packet.tcphdr, + tun_build_ip_pseudo_sum(&packet.iphdr)); + + EXPECT_TRUE(tun_vnet_hash_check(self->source_fd, self->dest_fds, + &packet, sizeof(packet), + VIRTIO_NET_HDR_F_DATA_VALID, + VIRTIO_NET_HASH_REPORT_TCPv4, + 0xfb63539a)); +} + +TEST_F(tun_vnet_hash, udpv4) +{ + struct { + struct ethhdr ethhdr; + struct iphdr iphdr; + struct udphdr udphdr; + } __packed packet; + + tun_build_ethhdr(&packet.ethhdr, ETH_P_IP); + tun_build_iphdr(&packet.iphdr, sizeof(struct udphdr), IPPROTO_UDP); + + tun_build_udphdr(&packet.udphdr, + tun_build_ip_pseudo_sum(&packet.iphdr)); + + EXPECT_TRUE(tun_vnet_hash_check(self->source_fd, self->dest_fds, + &packet, sizeof(packet), + VIRTIO_NET_HDR_F_DATA_VALID, + VIRTIO_NET_HASH_REPORT_UDPv4, + 0xfb63539a)); +} + +TEST_F(tun_vnet_hash, ipv6) +{ + struct { + struct ethhdr ethhdr; + struct ipv6hdr ipv6hdr; + } __packed packet; + + tun_build_ethhdr(&packet.ethhdr, ETH_P_IPV6); + tun_build_ipv6hdr(&packet.ipv6hdr, 0, 253); + + EXPECT_TRUE(tun_vnet_hash_check(self->source_fd, self->dest_fds, + &packet, sizeof(packet), 0, + VIRTIO_NET_HASH_REPORT_IPv6, + 0xd6eb560f)); +} + +TEST_F(tun_vnet_hash, tcpv6) +{ + struct { + struct ethhdr ethhdr; + struct ipv6hdr ipv6hdr; + struct tcphdr tcphdr; + } __packed packet; + + tun_build_ethhdr(&packet.ethhdr, ETH_P_IPV6); + tun_build_ipv6hdr(&packet.ipv6hdr, sizeof(struct tcphdr), IPPROTO_TCP); + + tun_build_tcphdr(&packet.tcphdr, + tun_build_ipv6_pseudo_sum(&packet.ipv6hdr)); + + EXPECT_TRUE(tun_vnet_hash_check(self->source_fd, self->dest_fds, + &packet, sizeof(packet), + VIRTIO_NET_HDR_F_DATA_VALID, + VIRTIO_NET_HASH_REPORT_TCPv6, + 0xc2b9f251)); +} + +TEST_F(tun_vnet_hash, udpv6) +{ + struct { + struct ethhdr ethhdr; + struct ipv6hdr 
ipv6hdr; + struct udphdr udphdr; + } __packed packet; + + tun_build_ethhdr(&packet.ethhdr, ETH_P_IPV6); + tun_build_ipv6hdr(&packet.ipv6hdr, sizeof(struct udphdr), IPPROTO_UDP); + + tun_build_udphdr(&packet.udphdr, + tun_build_ipv6_pseudo_sum(&packet.ipv6hdr)); + + EXPECT_TRUE(tun_vnet_hash_check(self->source_fd, self->dest_fds, + &packet, sizeof(packet), + VIRTIO_NET_HDR_F_DATA_VALID, + VIRTIO_NET_HASH_REPORT_UDPv6, + 0xc2b9f251)); +} + +FIXTURE(tun_vnet_hash_config) +{ + int fd; +}; + +FIXTURE_SETUP(tun_vnet_hash_config) +{ + char ifname[IFNAMSIZ]; + + ifname[0] = 0; + self->fd = tun_alloc(ifname, 0); + ASSERT_LE(0, self->fd); +} + +FIXTURE_TEARDOWN(tun_vnet_hash_config) +{ + EXPECT_EQ(close(self->fd), 0); +} + +TEST_F(tun_vnet_hash_config, cap) +{ + struct tun_vnet_hash cap; + + ASSERT_EQ(0, ioctl(self->fd, TUNGETVNETHASHCAP, &cap)); + EXPECT_EQ(cap.types, + VIRTIO_NET_RSS_HASH_TYPE_IPv4 | + VIRTIO_NET_RSS_HASH_TYPE_TCPv4 | + VIRTIO_NET_RSS_HASH_TYPE_UDPv4 | + VIRTIO_NET_RSS_HASH_TYPE_IPv6 | + VIRTIO_NET_RSS_HASH_TYPE_TCPv6 | + VIRTIO_NET_RSS_HASH_TYPE_UDPv6); +} + +TEST_F(tun_vnet_hash_config, insufficient_hdr_sz) +{ + static const struct tun_vnet_hash vnet_hash = { + .flags = TUN_VNET_HASH_REPORT + }; + int i; + + i = 1; + ASSERT_EQ(0, ioctl(self->fd, TUNSETVNETLE, &i)); + + ASSERT_EQ(-1, ioctl(self->fd, TUNSETVNETHASH, &vnet_hash)); + EXPECT_EQ(errno, EBUSY); +} + +TEST_F(tun_vnet_hash_config, shrink_hdr_sz) +{ + static const struct tun_vnet_hash vnet_hash = { + .flags = TUN_VNET_HASH_REPORT + }; + int i; + + i = sizeof(struct virtio_net_hdr_v1_hash); + ASSERT_EQ(0, ioctl(self->fd, TUNSETVNETHDRSZ, &i)); + + i = 1; + ASSERT_EQ(0, ioctl(self->fd, TUNSETVNETLE, &i)); + + ASSERT_EQ(0, ioctl(self->fd, TUNSETVNETHASH, &vnet_hash)); + + i = sizeof(struct virtio_net_hdr); + ASSERT_EQ(-1, ioctl(self->fd, TUNSETVNETHDRSZ, &i)); + EXPECT_EQ(errno, EBUSY); +} + +TEST_F(tun_vnet_hash_config, set_be_early) +{ + static const struct tun_vnet_hash vnet_hash = { + .flags = 
TUN_VNET_HASH_REPORT + }; + int i; + + i = 1; + if (ioctl(self->fd, TUNSETVNETBE, &i)) + return; + + i = sizeof(struct virtio_net_hdr_v1_hash); + ASSERT_EQ(0, ioctl(self->fd, TUNSETVNETHDRSZ, &i)); + + ASSERT_EQ(-1, ioctl(self->fd, TUNSETVNETHASH, &vnet_hash)); + EXPECT_EQ(errno, EBUSY); +} + +TEST_F(tun_vnet_hash_config, set_be_later) +{ + static const struct tun_vnet_hash vnet_hash = { + .flags = TUN_VNET_HASH_REPORT + }; + int i; + + i = sizeof(struct virtio_net_hdr_v1_hash); + ASSERT_EQ(0, ioctl(self->fd, TUNSETVNETHDRSZ, &i)); + + if (ioctl(self->fd, TUNSETVNETHASH, &vnet_hash)) + return; + + i = 1; + ASSERT_EQ(-1, ioctl(self->fd, TUNSETVNETBE, &i)); + EXPECT_TRUE(errno == EBUSY || errno == EINVAL); +} + +TEST_F(tun_vnet_hash_config, unset_le_later) +{ + static const struct tun_vnet_hash vnet_hash = { + .flags = TUN_VNET_HASH_REPORT + }; + int i; + + i = sizeof(struct virtio_net_hdr_v1_hash); + ASSERT_EQ(0, ioctl(self->fd, TUNSETVNETHDRSZ, &i)); + + i = 1; + ioctl(self->fd, TUNSETVNETBE, &i); + + if (!ioctl(self->fd, TUNSETVNETHASH, &vnet_hash)) + return; + + i = 1; + ASSERT_EQ(0, ioctl(self->fd, TUNSETVNETLE, &i)); + + ASSERT_EQ(0, ioctl(self->fd, TUNSETVNETHASH, &vnet_hash)); + + i = 0; + ASSERT_EQ(-1, ioctl(self->fd, TUNSETVNETLE, &i)); + EXPECT_EQ(errno, EBUSY); +} + TEST_HARNESS_MAIN
VIRTIO_NET_F_HASH_REPORT allows reporting hash values calculated on the host. When VHOST_NET_F_VIRTIO_NET_HDR is employed, no hash values will be reported (i.e., the hash_report member is always set to VIRTIO_NET_HASH_REPORT_NONE). Otherwise, the values reported by the underlying socket will be passed through.
VIRTIO_NET_F_HASH_REPORT requires VIRTIO_F_VERSION_1.
Signed-off-by: Akihiko Odaki akihiko.odaki@daynix.com --- drivers/vhost/net.c | 16 ++++++++++++---- 1 file changed, 12 insertions(+), 4 deletions(-)
diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c index f16279351db5..ec1167a782ec 100644 --- a/drivers/vhost/net.c +++ b/drivers/vhost/net.c @@ -73,6 +73,7 @@ enum { VHOST_NET_FEATURES = VHOST_FEATURES | (1ULL << VHOST_NET_F_VIRTIO_NET_HDR) | (1ULL << VIRTIO_NET_F_MRG_RXBUF) | + (1ULL << VIRTIO_NET_F_HASH_REPORT) | (1ULL << VIRTIO_F_ACCESS_PLATFORM) | (1ULL << VIRTIO_F_RING_RESET) }; @@ -1604,10 +1605,13 @@ static int vhost_net_set_features(struct vhost_net *n, u64 features) size_t vhost_hlen, sock_hlen, hdr_len; int i;
- hdr_len = (features & ((1ULL << VIRTIO_NET_F_MRG_RXBUF) | - (1ULL << VIRTIO_F_VERSION_1))) ? - sizeof(struct virtio_net_hdr_mrg_rxbuf) : - sizeof(struct virtio_net_hdr); + if (features & (1ULL << VIRTIO_NET_F_HASH_REPORT)) + hdr_len = sizeof(struct virtio_net_hdr_v1_hash); + else if (features & ((1ULL << VIRTIO_NET_F_MRG_RXBUF) | + (1ULL << VIRTIO_F_VERSION_1))) + hdr_len = sizeof(struct virtio_net_hdr_mrg_rxbuf); + else + hdr_len = sizeof(struct virtio_net_hdr); if (features & (1 << VHOST_NET_F_VIRTIO_NET_HDR)) { /* vhost provides vnet_hdr */ vhost_hlen = hdr_len; @@ -1688,6 +1692,10 @@ static long vhost_net_ioctl(struct file *f, unsigned int ioctl, return -EFAULT; if (features & ~VHOST_NET_FEATURES) return -EOPNOTSUPP; + if ((features & ((1ULL << VIRTIO_F_VERSION_1) | + (1ULL << VIRTIO_NET_F_HASH_REPORT))) == + (1ULL << VIRTIO_NET_F_HASH_REPORT)) + return -EINVAL; return vhost_net_set_features(n, features); case VHOST_GET_BACKEND_FEATURES: features = VHOST_NET_BACKEND_FEATURES;
On Sun, 15 Sep 2024 10:17:39 +0900 Akihiko Odaki akihiko.odaki@daynix.com wrote:
virtio-net has two uses for hashes: one is RSS and the other is hash reporting. Conventionally, the hash calculation was done by the VMM. However, computing the hash after the queue has been chosen defeats the purpose of RSS.
Another approach is to use an eBPF steering program. This approach has another downside: it cannot report the calculated hash due to the restrictive nature of eBPF.
Introduce the code to compute hashes to the kernel in order to overcome these challenges.
An alternative solution is to extend the eBPF steering program so that it will be able to report to the userspace, but it is based on context rewrites, which are in feature freeze. We can adopt kfuncs, but they will not be UAPIs. We opt for ioctl to align with other relevant UAPIs (KVM and vhost_net).
This will be useful for DPDK. But there still are cases where custom flow rules are needed, i.e., the RSS happens after other TC rules. It would be good if skbedit supported RSS as an option.
On 2024/09/15 21:48, Stephen Hemminger wrote:
On Sun, 15 Sep 2024 10:17:39 +0900 Akihiko Odaki akihiko.odaki@daynix.com wrote:
virtio-net has two uses for hashes: one is RSS and the other is hash reporting. Conventionally, the hash calculation was done by the VMM. However, computing the hash after the queue has been chosen defeats the purpose of RSS.
Another approach is to use an eBPF steering program. This approach has another downside: it cannot report the calculated hash due to the restrictive nature of eBPF.
Introduce the code to compute hashes to the kernel in order to overcome these challenges.
An alternative solution is to extend the eBPF steering program so that it will be able to report to the userspace, but it is based on context rewrites, which are in feature freeze. We can adopt kfuncs, but they will not be UAPIs. We opt for ioctl to align with other relevant UAPIs (KVM and vhost_net).
This will be useful for DPDK. But there still are cases where custom flow rules are needed, i.e., the RSS happens after other TC rules. It would be good if skbedit supported RSS as an option.
Hi,
It is nice to hear about a use case other than QEMU or virtualization. I implemented RSS as a tuntap ioctl because:
- It is easier to configure for the user of tuntap (e.g., QEMU).
- It implements hash reporting, which is specific to tuntap.
You can still add skbedit if you want to override RSS for some packets with filter. Please tell me if it is not sufficient for your use case.
Regards, Akihiko Odaki