From: Zijian Zhang <zijianzhang@bytedance.com>
The original notification mechanism requires poll + recvmsg, which is not easy for applications to accommodate. It also incurs non-negligible overhead, including extra system calls.
While reusing as much of the existing MSG_ZEROCOPY code as possible, this patch set introduces a new zerocopy socket notification mechanism. Users of sendmsg pass a control message as a placeholder for the incoming notifications. On return, the kernel embeds the notifications directly into the user-supplied buffer. This reduces both the complexity and the overhead of managing notifications.
We also make the logic for copying cmsgs to user space in sendmsg generic, for possible future use cases. Note, however, that this introduces an ABI change for sendmsg.
Changelog:

v1 -> v2:
- Reuse the error message queue in the new notification mechanism; users
  can use the two mechanisms in a hybrid way if they want to.
- Update case SCM_ZC_NOTIFICATION in __sock_cmsg_send:
  1. Always handle a u64 user address, regardless of whether the program
     is 32-bit or 64-bit.
  2. Precisely calculate the size of the data passed to copy_to_user to
     avoid leaking kernel stack.
- Fixes (kbuild bot):
  1. Add SCM_ZC_NOTIFICATION to the arch-specific header files.
  2. Include linux/types.h in include/uapi/linux/socket.h.
v2 -> v3:
- Users can now pass in the address of the zc_info_elem array directly,
  with an appropriate cmsg_len, instead of the ugly user interface. Plus,
  the handler is now compatible with MSG_CMSG_COMPAT and 32-bit pointers.
- Suggested by Willem: another strategy for getting zc info is to briefly
  take the sk_error_queue lock and move the skbs to a private list, like
  net_rx_action. I had thought sk_error_queue was protected by the socket
  lock, so that handling zc info and a user's recvmsg from sk_error_queue
  could not run at the same time. However, sk_error_queue is protected by
  its own lock. I am afraid that while the private list is being handled,
  users may fail to get other error messages in the queue via recvmsg.
  Thus, I did not implement the splice logic in this version. Any
  comments?
v3 -> v4:
- Change SOCK_ZC_INFO_MAX to 64 to avoid a large stack frame.
- Fix minor typos.
- Change cfg_zerocopy from int to enum in msg_zerocopy.c.
Initially, we expected users to pass the user address of their array as cmsg data, so that the kernel could copy_to_user to this address directly.
As Willem commented,
The main design issue with this series is this indirection, rather than passing the array of notifications as cmsg.
This trick circumvents having to deal with compat issues and having to figure out copy_to_user in ____sys_sendmsg (as msg_control is an in-kernel copy).
This is quite hacky, from an API design PoV.
As is passing a pointer, but expecting msg_controllen to hold the length not of the pointer, but of the pointed to user buffer.
I had also hoped for more significant savings. Especially with the higher syscall overhead due to meltdown and spectre mitigations vs when MSG_ZEROCOPY was introduced and I last tried this optimization.
Starting from v5, we solve this by supporting put_cmsg to user space in the sendmsg path.
v4 -> v5:
- Passing a user address directly to the kernel raises ABI concerns. In
  this version, we support put_cmsg to user space in the TX path to solve
  this problem.
v5 -> v6:
- Cleanly copy cmsgs to the user upon return of ___sys_sendmsg.
v6 -> v7:
- Remove the MSG_CMSG_COPY_TO_USER flag; use a member in msghdr instead.
- Pass msg to __sock_cmsg_send.
- Call sendmsg_copy_cmsg_to_user at the end of ____sys_sendmsg to make
  sure msg_sys->msg_control is a valid pointer.
- Add struct zc_info to contain the array of zc_info_elem, so that the
  kernel can update zc_info->size. Another possible solution is updating
  cmsg_len directly, but that would break for_each_cmsghdr.
- Update the selftest so that cfg_notification_limit has the same
  semantics in both methods, for better comparison.
v7 -> v8:
- Add a static_branch in ____sys_sendmsg to avoid overhead in the hot
  path.
- Add ZC_NOTIFICATION_MAX to limit the max size of zc_info->arr.
- Minimize the code in the SCM_ZC_NOTIFICATION handler by adding a local
  sk_buff_head.
* Performance
We update selftests/net/msg_zerocopy.c to accommodate the new mechanism; cfg_notification_limit has the same semantics for both methods. Test results are as follows. Note that we change skb_orphan_frags_rx to match skb_orphan_frags in order to support zerocopy in the localhost test.
With cfg_notification_limit = 1, both methods get a notification after every sendmsg call. In this case, the new method has around 17% CPU savings in TCP and 23% CPU savings in UDP.

+----------------------+---------+---------+---------+---------+
| Test Type / Protocol | TCP v4  | TCP v6  | UDP v4  | UDP v6  |
+----------------------+---------+---------+---------+---------+
| ZCopy (MB)           | 7523    | 7706    | 7489    | 7304    |
+----------------------+---------+---------+---------+---------+
| New ZCopy (MB)       | 8834    | 8993    | 9053    | 9228    |
+----------------------+---------+---------+---------+---------+
| New ZCopy / ZCopy    | 117.42% | 116.70% | 120.88% | 126.34% |
+----------------------+---------+---------+---------+---------+
With cfg_notification_limit = 32, both methods get notifications after every 32 sendmsg calls, which means more chances to coalesce notifications and less poll + recvmsg overhead for the original method. In this case, the new method has around 7% CPU savings in TCP and slightly better CPU usage in UDP. In the selftest environment, TCP notifications are more likely to arrive out of order than UDP ones, so notifications are easier to coalesce in UDP. The original method can usually get one notification covering a range of 32 in a single recvmsg. In TCP, most notifications cover a range of around 2, so the original method needs around 16 recvmsg calls to get notified in one round. That explains the "New ZCopy / ZCopy" difference between TCP and UDP here.

+----------------------+---------+---------+---------+---------+
| Test Type / Protocol | TCP v4  | TCP v6  | UDP v4  | UDP v6  |
+----------------------+---------+---------+---------+---------+
| ZCopy (MB)           | 8842    | 8735    | 10072   | 9380    |
+----------------------+---------+---------+---------+---------+
| New ZCopy (MB)       | 9366    | 9477    | 10108   | 9385    |
+----------------------+---------+---------+---------+---------+
| New ZCopy / ZCopy    | 106.00% | 108.28% | 100.31% | 100.01% |
+----------------------+---------+---------+---------+---------+
In conclusion, when the notification interval is small or notifications are hard to coalesce, the new mechanism is highly recommended. Otherwise, the performance gain from the new mechanism is very limited.
Zijian Zhang (3):
  sock: support copying cmsgs to the user space in sendmsg
  sock: add MSG_ZEROCOPY notification mechanism based on msg_control
  selftests: add MSG_ZEROCOPY msg_control notification test
 arch/alpha/include/uapi/asm/socket.h        |   2 +
 arch/mips/include/uapi/asm/socket.h         |   2 +
 arch/parisc/include/uapi/asm/socket.h       |   2 +
 arch/sparc/include/uapi/asm/socket.h        |   2 +
 include/linux/socket.h                      |   8 ++
 include/net/sock.h                          |   2 +-
 include/uapi/asm-generic/socket.h           |   2 +
 include/uapi/linux/socket.h                 |  23 +++++
 net/core/sock.c                             |  72 +++++++++++++-
 net/ipv4/ip_sockglue.c                      |   2 +-
 net/ipv6/datagram.c                         |   2 +-
 net/socket.c                                |  63 +++++++++++-
 tools/testing/selftests/net/msg_zerocopy.c  | 101 ++++++++++++++++++--
 tools/testing/selftests/net/msg_zerocopy.sh |   1 +
 14 files changed, 265 insertions(+), 19 deletions(-)
From: Zijian Zhang <zijianzhang@bytedance.com>
Users can pass msg_control as a placeholder to recvmsg and get information back from the kernel upon its return, but this is not available for sendmsg. Recvmsg uses put_cmsg to copy info back to the user, while ____sys_sendmsg creates a kernel copy of msg_control and passes that to its callees; put_cmsg in the sendmsg path writes into this kernel buffer.
If users want to get information back after sendmsg returns, they typically have to call recvmsg on the socket's MSG_ERRQUEUE, incurring extra system call overhead. This commit supports copying cmsgs from kernel space to user space when sendmsg returns, to mitigate this overhead.
Signed-off-by: Zijian Zhang <zijianzhang@bytedance.com>
Signed-off-by: Xiaochun Lu <xiaochun.lu@bytedance.com>
---
 include/linux/socket.h |  8 ++++++
 include/net/sock.h     |  2 +-
 net/core/sock.c        |  6 ++--
 net/ipv4/ip_sockglue.c |  2 +-
 net/ipv6/datagram.c    |  2 +-
 net/socket.c           | 63 ++++++++++++++++++++++++++++++++++++++----
 6 files changed, 72 insertions(+), 11 deletions(-)
diff --git a/include/linux/socket.h b/include/linux/socket.h index df9cdb8bbfb8..40173c919d0f 100644 --- a/include/linux/socket.h +++ b/include/linux/socket.h @@ -71,6 +71,7 @@ struct msghdr { void __user *msg_control_user; }; bool msg_control_is_user : 1; + bool msg_control_copy_to_user : 1; bool msg_get_inq : 1;/* return INQ after receive */ unsigned int msg_flags; /* flags on received message */ __kernel_size_t msg_controllen; /* ancillary data buffer length */ @@ -168,6 +169,11 @@ static inline struct cmsghdr * cmsg_nxthdr (struct msghdr *__msg, struct cmsghdr return __cmsg_nxthdr(__msg->msg_control, __msg->msg_controllen, __cmsg); }
+static inline bool cmsg_copy_to_user(struct cmsghdr *__cmsg)
+{
+	return 0;
+}
+
 static inline size_t msg_data_left(struct msghdr *msg)
 {
 	return iov_iter_count(&msg->msg_iter);
@@ -396,6 +402,8 @@ struct timespec64;
 struct __kernel_timespec;
 struct old_timespec32;
+DECLARE_STATIC_KEY_FALSE(tx_copy_cmsg_to_user_key); + struct scm_timestamping_internal { struct timespec64 ts[3]; }; diff --git a/include/net/sock.h b/include/net/sock.h index cce23ac4d514..9c728287d21d 100644 --- a/include/net/sock.h +++ b/include/net/sock.h @@ -1804,7 +1804,7 @@ static inline void sockcm_init(struct sockcm_cookie *sockc, }; }
-int __sock_cmsg_send(struct sock *sk, struct cmsghdr *cmsg, +int __sock_cmsg_send(struct sock *sk, struct msghdr *msg, struct cmsghdr *cmsg, struct sockcm_cookie *sockc); int sock_cmsg_send(struct sock *sk, struct msghdr *msg, struct sockcm_cookie *sockc); diff --git a/net/core/sock.c b/net/core/sock.c index 9abc4fe25953..b2cbe753af1d 100644 --- a/net/core/sock.c +++ b/net/core/sock.c @@ -2826,8 +2826,8 @@ struct sk_buff *sock_alloc_send_pskb(struct sock *sk, unsigned long header_len, } EXPORT_SYMBOL(sock_alloc_send_pskb);
-int __sock_cmsg_send(struct sock *sk, struct cmsghdr *cmsg, - struct sockcm_cookie *sockc) +int __sock_cmsg_send(struct sock *sk, struct msghdr *msg __always_unused, + struct cmsghdr *cmsg, struct sockcm_cookie *sockc) { u32 tsflags;
@@ -2881,7 +2881,7 @@ int sock_cmsg_send(struct sock *sk, struct msghdr *msg, return -EINVAL; if (cmsg->cmsg_level != SOL_SOCKET) continue; - ret = __sock_cmsg_send(sk, cmsg, sockc); + ret = __sock_cmsg_send(sk, msg, cmsg, sockc); if (ret) return ret; } diff --git a/net/ipv4/ip_sockglue.c b/net/ipv4/ip_sockglue.c index cf377377b52d..6360b8ba9c84 100644 --- a/net/ipv4/ip_sockglue.c +++ b/net/ipv4/ip_sockglue.c @@ -267,7 +267,7 @@ int ip_cmsg_send(struct sock *sk, struct msghdr *msg, struct ipcm_cookie *ipc, } #endif if (cmsg->cmsg_level == SOL_SOCKET) { - err = __sock_cmsg_send(sk, cmsg, &ipc->sockc); + err = __sock_cmsg_send(sk, msg, cmsg, &ipc->sockc); if (err) return err; continue; diff --git a/net/ipv6/datagram.c b/net/ipv6/datagram.c index fff78496803d..c9ae30acf895 100644 --- a/net/ipv6/datagram.c +++ b/net/ipv6/datagram.c @@ -777,7 +777,7 @@ int ip6_datagram_send_ctl(struct net *net, struct sock *sk, }
if (cmsg->cmsg_level == SOL_SOCKET) { - err = __sock_cmsg_send(sk, cmsg, &ipc6->sockc); + err = __sock_cmsg_send(sk, msg, cmsg, &ipc6->sockc); if (err) return err; continue; diff --git a/net/socket.c b/net/socket.c index fcbdd5bc47ac..4b65ac92045a 100644 --- a/net/socket.c +++ b/net/socket.c @@ -2537,8 +2537,49 @@ static int copy_msghdr_from_user(struct msghdr *kmsg, return err < 0 ? err : 0; }
-static int ____sys_sendmsg(struct socket *sock, struct msghdr *msg_sys, - unsigned int flags, struct used_address *used_address, +DEFINE_STATIC_KEY_FALSE(tx_copy_cmsg_to_user_key); + +static int sendmsg_copy_cmsg_to_user(struct msghdr *msg_sys, + struct user_msghdr __user *umsg) +{ + struct compat_msghdr __user *umsg_compat = + (struct compat_msghdr __user *)umsg; + unsigned int flags = msg_sys->msg_flags; + struct msghdr msg_user = *msg_sys; + unsigned long cmsg_ptr; + struct cmsghdr *cmsg; + int err; + + msg_user.msg_control_is_user = true; + msg_user.msg_control_user = umsg->msg_control; + cmsg_ptr = (unsigned long)msg_user.msg_control; + for_each_cmsghdr(cmsg, msg_sys) { + if (!CMSG_OK(msg_sys, cmsg)) + break; + if (!cmsg_copy_to_user(cmsg)) + continue; + err = put_cmsg(&msg_user, cmsg->cmsg_level, cmsg->cmsg_type, + cmsg->cmsg_len - sizeof(*cmsg), CMSG_DATA(cmsg)); + if (err) + return err; + } + + err = __put_user((msg_sys->msg_flags & ~MSG_CMSG_COMPAT), + COMPAT_FLAGS(umsg)); + if (err) + return err; + if (MSG_CMSG_COMPAT & flags) + err = __put_user((unsigned long)msg_user.msg_control - cmsg_ptr, + &umsg_compat->msg_controllen); + else + err = __put_user((unsigned long)msg_user.msg_control - cmsg_ptr, + &umsg->msg_controllen); + return err; +} + +static int ____sys_sendmsg(struct socket *sock, struct user_msghdr __user *msg, + struct msghdr *msg_sys, unsigned int flags, + struct used_address *used_address, unsigned int allowed_msghdr_flags) { unsigned char ctl[sizeof(struct cmsghdr) + 20] @@ -2549,6 +2590,8 @@ static int ____sys_sendmsg(struct socket *sock, struct msghdr *msg_sys, ssize_t err;
err = -ENOBUFS; + if (static_branch_unlikely(&tx_copy_cmsg_to_user_key)) + msg_sys->msg_control_copy_to_user = false;
if (msg_sys->msg_controllen > INT_MAX) goto out; @@ -2606,6 +2649,16 @@ static int ____sys_sendmsg(struct socket *sock, struct msghdr *msg_sys, used_address->name_len); }
+ if (static_branch_unlikely(&tx_copy_cmsg_to_user_key)) { + if (msg_sys->msg_control_copy_to_user && msg && err >= 0) { + ssize_t len = err; + + err = sendmsg_copy_cmsg_to_user(msg_sys, msg); + if (!err) + err = len; + } + } + out_freectl: if (ctl_buf != ctl) sock_kfree_s(sock->sk, ctl_buf, ctl_len); @@ -2648,8 +2701,8 @@ static int ___sys_sendmsg(struct socket *sock, struct user_msghdr __user *msg, if (err < 0) return err;
- err = ____sys_sendmsg(sock, msg_sys, flags, used_address, - allowed_msghdr_flags); + err = ____sys_sendmsg(sock, msg, msg_sys, flags, used_address, + allowed_msghdr_flags); kfree(iov); return err; } @@ -2660,7 +2713,7 @@ static int ___sys_sendmsg(struct socket *sock, struct user_msghdr __user *msg, long __sys_sendmsg_sock(struct socket *sock, struct msghdr *msg, unsigned int flags) { - return ____sys_sendmsg(sock, msg, flags, NULL, 0); + return ____sys_sendmsg(sock, NULL, msg, flags, NULL, 0); }
long __sys_sendmsg(int fd, struct user_msghdr __user *msg, unsigned int flags,
From: Zijian Zhang <zijianzhang@bytedance.com>
The MSG_ZEROCOPY flag enables copy avoidance for socket send calls. However, zerocopy is not a free lunch. Apart from the management of user pages, the combination of poll + recvmsg to receive notifications incurs non-negligible overhead in applications. We try to mitigate this overhead with a new notification mechanism based on msg_control. Leveraging the general framework for copying cmsgs to user space, we copy zerocopy notifications to the user when sendmsg returns.
Signed-off-by: Zijian Zhang <zijianzhang@bytedance.com>
Signed-off-by: Xiaochun Lu <xiaochun.lu@bytedance.com>
---
 arch/alpha/include/uapi/asm/socket.h  |  2 +
 arch/mips/include/uapi/asm/socket.h   |  2 +
 arch/parisc/include/uapi/asm/socket.h |  2 +
 arch/sparc/include/uapi/asm/socket.h  |  2 +
 include/linux/socket.h                |  2 +-
 include/uapi/asm-generic/socket.h     |  2 +
 include/uapi/linux/socket.h           | 23 +++++++++
 net/core/sock.c                       | 72 +++++++++++++++++++++++++--
 8 files changed, 102 insertions(+), 5 deletions(-)
diff --git a/arch/alpha/include/uapi/asm/socket.h b/arch/alpha/include/uapi/asm/socket.h index e94f621903fe..7c32d9dbe47f 100644 --- a/arch/alpha/include/uapi/asm/socket.h +++ b/arch/alpha/include/uapi/asm/socket.h @@ -140,6 +140,8 @@ #define SO_PASSPIDFD 76 #define SO_PEERPIDFD 77
+#define SCM_ZC_NOTIFICATION	78
+
 #if !defined(__KERNEL__)
#if __BITS_PER_LONG == 64 diff --git a/arch/mips/include/uapi/asm/socket.h b/arch/mips/include/uapi/asm/socket.h index 60ebaed28a4c..3f7fade998cb 100644 --- a/arch/mips/include/uapi/asm/socket.h +++ b/arch/mips/include/uapi/asm/socket.h @@ -151,6 +151,8 @@ #define SO_PASSPIDFD 76 #define SO_PEERPIDFD 77
+#define SCM_ZC_NOTIFICATION	78
+
 #if !defined(__KERNEL__)
#if __BITS_PER_LONG == 64 diff --git a/arch/parisc/include/uapi/asm/socket.h b/arch/parisc/include/uapi/asm/socket.h index be264c2b1a11..77f5bee0fdc9 100644 --- a/arch/parisc/include/uapi/asm/socket.h +++ b/arch/parisc/include/uapi/asm/socket.h @@ -132,6 +132,8 @@ #define SO_PASSPIDFD 0x404A #define SO_PEERPIDFD 0x404B
+#define SCM_ZC_NOTIFICATION	0x404C
+
 #if !defined(__KERNEL__)
#if __BITS_PER_LONG == 64 diff --git a/arch/sparc/include/uapi/asm/socket.h b/arch/sparc/include/uapi/asm/socket.h index 682da3714686..eb44fc515b45 100644 --- a/arch/sparc/include/uapi/asm/socket.h +++ b/arch/sparc/include/uapi/asm/socket.h @@ -133,6 +133,8 @@ #define SO_PASSPIDFD 0x0055 #define SO_PEERPIDFD 0x0056
+#define SCM_ZC_NOTIFICATION	0x0057
+
 #if !defined(__KERNEL__)
diff --git a/include/linux/socket.h b/include/linux/socket.h index 40173c919d0f..71e3c6ebfed5 100644 --- a/include/linux/socket.h +++ b/include/linux/socket.h @@ -171,7 +171,7 @@ static inline struct cmsghdr * cmsg_nxthdr (struct msghdr *__msg, struct cmsghdr
 static inline bool cmsg_copy_to_user(struct cmsghdr *__cmsg)
 {
-	return 0;
+	return __cmsg->cmsg_type == SCM_ZC_NOTIFICATION;
 }
static inline size_t msg_data_left(struct msghdr *msg) diff --git a/include/uapi/asm-generic/socket.h b/include/uapi/asm-generic/socket.h index 8ce8a39a1e5f..02e9159c7944 100644 --- a/include/uapi/asm-generic/socket.h +++ b/include/uapi/asm-generic/socket.h @@ -135,6 +135,8 @@ #define SO_PASSPIDFD 76 #define SO_PEERPIDFD 77
+#define SCM_ZC_NOTIFICATION	78
+
 #if !defined(__KERNEL__)
#if __BITS_PER_LONG == 64 || (defined(__x86_64__) && defined(__ILP32__)) diff --git a/include/uapi/linux/socket.h b/include/uapi/linux/socket.h index d3fcd3b5ec53..b5b5fa9febb1 100644 --- a/include/uapi/linux/socket.h +++ b/include/uapi/linux/socket.h @@ -2,6 +2,8 @@ #ifndef _UAPI_LINUX_SOCKET_H #define _UAPI_LINUX_SOCKET_H
+#include <linux/types.h> + /* * Desired design of maximum size and alignment (see RFC2553) */ @@ -35,4 +37,25 @@ struct __kernel_sockaddr_storage { #define SOCK_TXREHASH_DISABLED 0 #define SOCK_TXREHASH_ENABLED 1
+#define ZC_NOTIFICATION_MAX 16
+
+/*
+ * A zc_info_elem represents a completion notification for sendmsgs in range
+ * lo to high, zerocopy represents whether the underlying transmission is
+ * zerocopy or not.
+ */
+struct zc_info_elem {
+	__u32 lo;
+	__u32 hi;
+	__u8 zerocopy;
+};
+
+/*
+ * zc_info is the struct used for the SCM_ZC_NOTIFICATION control message.
+ */
+struct zc_info {
+	__u32 size;	/* size of the zc_info_elem arr */
+	struct zc_info_elem arr[];
+};
+
 #endif /* _UAPI_LINUX_SOCKET_H */
diff --git a/net/core/sock.c b/net/core/sock.c
index b2cbe753af1d..37b1b12623ee 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -1481,10 +1481,12 @@ int sk_setsockopt(struct sock *sk, int level, int optname,
 				ret = -EOPNOTSUPP;
 		}
 		if (!ret) {
-			if (val < 0 || val > 1)
+			if (val < 0 || val > 1) {
 				ret = -EINVAL;
-			else
+			} else {
 				sock_valbool_flag(sk, SOCK_ZEROCOPY, valbool);
+				static_branch_enable(&tx_copy_cmsg_to_user_key);
+			}
 		}
 		break;
@@ -2826,8 +2828,8 @@ struct sk_buff *sock_alloc_send_pskb(struct sock *sk, unsigned long header_len, } EXPORT_SYMBOL(sock_alloc_send_pskb);
-int __sock_cmsg_send(struct sock *sk, struct msghdr *msg __always_unused,
-		     struct cmsghdr *cmsg, struct sockcm_cookie *sockc)
+int __sock_cmsg_send(struct sock *sk, struct msghdr *msg, struct cmsghdr *cmsg,
+		     struct sockcm_cookie *sockc)
 {
 	u32 tsflags;
@@ -2863,6 +2865,68 @@ int __sock_cmsg_send(struct sock *sk, struct msghdr *msg __always_unused,
 	case SCM_RIGHTS:
 	case SCM_CREDENTIALS:
 		break;
+	case SCM_ZC_NOTIFICATION: {
+		struct zc_info *zc = CMSG_DATA(cmsg);
+		struct sk_buff_head *q, local_q;
+		int cmsg_data_len, i = 0;
+		unsigned long flags;
+		struct sk_buff *skb;
+
+		if (!sock_flag(sk, SOCK_ZEROCOPY) || sk->sk_family == PF_RDS)
+			return -EINVAL;
+
+		cmsg_data_len = cmsg->cmsg_len - sizeof(struct cmsghdr);
+		if (cmsg_data_len < sizeof(struct zc_info))
+			return -EINVAL;
+
+		if (zc->size > ZC_NOTIFICATION_MAX ||
+		    (cmsg_data_len - sizeof(struct zc_info)) !=
+		    (zc->size * sizeof(struct zc_info_elem)))
+			return -EINVAL;
+
+		q = &sk->sk_error_queue;
+		skb_queue_head_init(&local_q);
+
+		/* Get zerocopy error messages from sk_error_queue, and add them
+		 * to a local queue for later processing. This minimizes the
+		 * code while the spinlock is held and irq is disabled.
+		 */
+		spin_lock_irqsave(&q->lock, flags);
+		skb = skb_peek(q);
+		while (skb && i < zc->size) {
+			struct sk_buff *skb_next = skb_peek_next(skb, q);
+			struct sock_exterr_skb *serr = SKB_EXT_ERR(skb);
+
+			if (serr->ee.ee_errno != 0 ||
+			    serr->ee.ee_origin != SO_EE_ORIGIN_ZEROCOPY) {
+				skb = skb_next;
+				continue;
+			}
+
+			__skb_unlink(skb, q);
+			__skb_queue_tail(&local_q, skb);
+			skb = skb_next;
+			i++;
+		}
+		spin_unlock_irqrestore(&q->lock, flags);
+
+		i = 0;
+		while ((skb = skb_peek(&local_q)) != NULL) {
+			struct sock_exterr_skb *serr = SKB_EXT_ERR(skb);
+
+			zc->arr[i].hi = serr->ee.ee_data;
+			zc->arr[i].lo = serr->ee.ee_info;
+			zc->arr[i].zerocopy = !(serr->ee.ee_code &
+						SO_EE_CODE_ZEROCOPY_COPIED);
+			__skb_unlink(skb, &local_q);
+			consume_skb(skb);
+			i++;
+		}
+
+		zc->size = i;
+		msg->msg_control_copy_to_user = true;
+		break;
+	}
 	default:
 		return -EINVAL;
 	}
zijianzhang@ wrote:
From: Zijian Zhang zijianzhang@bytedance.com
The MSG_ZEROCOPY flag enables copy avoidance for socket send calls. However, zerocopy is not a free lunch. Apart from the management of user pages, the combination of poll + recvmsg to receive notifications incurs unignorable overhead in the applications. We try to mitigate this overhead with a new notification mechanism based on msg_control. Leveraging the general framework to copy cmsgs to the user space, we copy zerocopy notifications to the user upon returning of sendmsgs.
May want to
- Explicitly state that receiving notifications on sendmsg is optional and existing recvmsg MSG_ERRQUEUE continues to work
- Include a very brief example of how this interface is used. Probably pseudo-code, as msghdr setup and CMSG processing are verbose operations
Btw patchwork shows red for patch 1/3 due to a new error or warning. Not sure if it's a false positive, but take a look.
Signed-off-by: Zijian Zhang zijianzhang@bytedance.com Signed-off-by: Xiaochun Lu xiaochun.lu@bytedance.com
+/*
- zc_info is the struct used for the SCM_ZC_NOTIFICATION control message.
- */
+struct zc_info {
- __u32 size; /* size of the zc_info_elem arr */
Size is ambiguous, could mean byte size. Perhaps length, or number of elements in arr[].
- struct zc_info_elem arr[];
+};
On Wed, 31 Jul 2024 18:20:35 -0400 Willem de Bruijn wrote:
Btw patchwork shows red for patch 1/3 due to a new error or warning. Not sure if it's a false positive, but take a look.
Patchwork is not for contributors, I keep repeating this :| Were you not in the room at netdev when I was talking about NIPA or am I this shit at communicating?
Next person pointing someone to patchwork will get a task to fix something in NIPA.
On Wed, Jul 31, 2024 at 9:29 PM Jakub Kicinski kuba@kernel.org wrote:
On Wed, 31 Jul 2024 18:20:35 -0400 Willem de Bruijn wrote:
Btw patchwork shows red for patch 1/3 due to a new error or warning. Not sure if it's a false positive, but take a look.
Patchwork is not for contributors, I keep repeating this :| Were you not in the room at netdev when I was talking about NIPA or am I this shit at communicating?
Next person pointing someone to patchwork will get a task to fix something in NIPA.
:-)
It's a super informative tool. I did miss the point about the intended audience, use cases and known limitations (such as false positives). Got it now!
Looking forward to the netdev talks and slides online soon.
From: Zijian Zhang <zijianzhang@bytedance.com>
We update selftests/net/msg_zerocopy.c to accommodate the new mechanism; cfg_notification_limit has the same semantics for both methods. Test results are as follows. Note that we change skb_orphan_frags_rx to match skb_orphan_frags in order to support zerocopy in the localhost test.
With cfg_notification_limit = 1, both methods get a notification after every sendmsg call. In this case, the new method has around 17% CPU savings in TCP and 23% CPU savings in UDP.

+----------------------+---------+---------+---------+---------+
| Test Type / Protocol | TCP v4  | TCP v6  | UDP v4  | UDP v6  |
+----------------------+---------+---------+---------+---------+
| ZCopy (MB)           | 7523    | 7706    | 7489    | 7304    |
+----------------------+---------+---------+---------+---------+
| New ZCopy (MB)       | 8834    | 8993    | 9053    | 9228    |
+----------------------+---------+---------+---------+---------+
| New ZCopy / ZCopy    | 117.42% | 116.70% | 120.88% | 126.34% |
+----------------------+---------+---------+---------+---------+
With cfg_notification_limit = 32, both methods get notifications after every 32 sendmsg calls, which means more chances to coalesce notifications and less poll + recvmsg overhead for the original method. In this case, the new method has around 7% CPU savings in TCP and slightly better CPU usage in UDP. In the selftest environment, TCP notifications are more likely to arrive out of order than UDP ones, so notifications are easier to coalesce in UDP. The original method can usually get one notification covering a range of 32 in a single recvmsg. In TCP, most notifications cover a range of around 2, so the original method needs around 16 recvmsg calls to get notified in one round. That explains the "New ZCopy / ZCopy" difference between TCP and UDP here.

+----------------------+---------+---------+---------+---------+
| Test Type / Protocol | TCP v4  | TCP v6  | UDP v4  | UDP v6  |
+----------------------+---------+---------+---------+---------+
| ZCopy (MB)           | 8842    | 8735    | 10072   | 9380    |
+----------------------+---------+---------+---------+---------+
| New ZCopy (MB)       | 9366    | 9477    | 10108   | 9385    |
+----------------------+---------+---------+---------+---------+
| New ZCopy / ZCopy    | 106.00% | 108.28% | 100.31% | 100.01% |
+----------------------+---------+---------+---------+---------+
In conclusion, when the notification interval is small or notifications are hard to coalesce, the new mechanism is highly recommended. Otherwise, the performance gain from the new mechanism is very limited.
Signed-off-by: Zijian Zhang <zijianzhang@bytedance.com>
Signed-off-by: Xiaochun Lu <xiaochun.lu@bytedance.com>
---
 tools/testing/selftests/net/msg_zerocopy.c  | 101 ++++++++++++++++++--
 tools/testing/selftests/net/msg_zerocopy.sh |   1 +
 2 files changed, 95 insertions(+), 7 deletions(-)
diff --git a/tools/testing/selftests/net/msg_zerocopy.c b/tools/testing/selftests/net/msg_zerocopy.c index 7ea5fb28c93d..cf227f0011b5 100644 --- a/tools/testing/selftests/net/msg_zerocopy.c +++ b/tools/testing/selftests/net/msg_zerocopy.c @@ -66,6 +66,10 @@ #define SO_ZEROCOPY 60 #endif
+#ifndef SCM_ZC_NOTIFICATION +#define SCM_ZC_NOTIFICATION 78 +#endif + #ifndef SO_EE_CODE_ZEROCOPY_COPIED #define SO_EE_CODE_ZEROCOPY_COPIED 1 #endif @@ -74,6 +78,14 @@ #define MSG_ZEROCOPY 0x4000000 #endif
+#define ZC_INFO_ARR_SIZE (ZC_NOTIFICATION_MAX * sizeof(struct zc_info_elem)) +#define ZC_INFO_SIZE (sizeof(struct zc_info) + ZC_INFO_ARR_SIZE) + +enum notification_type { + MSG_ZEROCOPY_NOTIFY_ERRQUEUE = 1, + MSG_ZEROCOPY_NOTIFY_SENDMSG = 2, +}; + static int cfg_cork; static bool cfg_cork_mixed; static int cfg_cpu = -1; /* default: pin to last cpu */ @@ -86,7 +98,7 @@ static int cfg_runtime_ms = 4200; static int cfg_verbose; static int cfg_waittime_ms = 500; static int cfg_notification_limit = 32; -static bool cfg_zerocopy; +static enum notification_type cfg_zerocopy;
static socklen_t cfg_alen; static struct sockaddr_storage cfg_dst_addr; @@ -97,6 +109,8 @@ static long packets, bytes, completions, expected_completions; static int zerocopied = -1; static uint32_t next_completion; static uint32_t sends_since_notify; +static char zc_ckbuf[CMSG_SPACE(ZC_INFO_SIZE)]; +static bool added_zcopy_info;
static unsigned long gettimeofday_ms(void) { @@ -182,7 +196,26 @@ static void add_zcopy_cookie(struct msghdr *msg, uint32_t cookie) memcpy(CMSG_DATA(cm), &cookie, sizeof(cookie)); }
-static bool do_sendmsg(int fd, struct msghdr *msg, bool do_zerocopy, int domain) +static void add_zcopy_info(struct msghdr *msg) +{ + struct zc_info *zc_info; + struct cmsghdr *cm; + + if (!msg->msg_control) + error(1, errno, "NULL user arg"); + cm = (struct cmsghdr *)msg->msg_control; + cm->cmsg_len = CMSG_LEN(ZC_INFO_SIZE); + cm->cmsg_level = SOL_SOCKET; + cm->cmsg_type = SCM_ZC_NOTIFICATION; + + zc_info = (struct zc_info *)CMSG_DATA(cm); + zc_info->size = ZC_NOTIFICATION_MAX; + + added_zcopy_info = true; +} + +static bool do_sendmsg(int fd, struct msghdr *msg, + enum notification_type do_zerocopy, int domain) { int ret, len, i, flags; static uint32_t cookie; @@ -200,6 +233,12 @@ static bool do_sendmsg(int fd, struct msghdr *msg, bool do_zerocopy, int domain) msg->msg_controllen = CMSG_SPACE(sizeof(cookie)); msg->msg_control = (struct cmsghdr *)ckbuf; add_zcopy_cookie(msg, ++cookie); + } else if (do_zerocopy == MSG_ZEROCOPY_NOTIFY_SENDMSG && + sends_since_notify + 1 >= cfg_notification_limit) { + memset(&msg->msg_control, 0, sizeof(msg->msg_control)); + msg->msg_controllen = CMSG_SPACE(ZC_INFO_SIZE); + msg->msg_control = (struct cmsghdr *)zc_ckbuf; + add_zcopy_info(msg); } }
@@ -218,7 +257,7 @@ static bool do_sendmsg(int fd, struct msghdr *msg, bool do_zerocopy, int domain) if (do_zerocopy && ret) expected_completions++; } - if (do_zerocopy && domain == PF_RDS) { + if (msg->msg_control) { msg->msg_control = NULL; msg->msg_controllen = 0; } @@ -466,6 +505,44 @@ static void do_recv_completions(int fd, int domain) sends_since_notify = 0; }
+static void do_recv_completions2(void) +{ + struct cmsghdr *cm = (struct cmsghdr *)zc_ckbuf; + struct zc_info *zc_info; + __u32 hi, lo, range; + __u8 zerocopy; + int i; + + zc_info = (struct zc_info *)CMSG_DATA(cm); + for (i = 0; i < zc_info->size; i++) { + hi = zc_info->arr[i].hi; + lo = zc_info->arr[i].lo; + zerocopy = zc_info->arr[i].zerocopy; + range = hi - lo + 1; + + if (cfg_verbose && lo != next_completion) + fprintf(stderr, "gap: %u..%u does not append to %u\n", + lo, hi, next_completion); + next_completion = hi + 1; + + if (zerocopied == -1) { + zerocopied = zerocopy; + } else if (zerocopied != zerocopy) { + fprintf(stderr, "serr: inconsistent\n"); + zerocopied = zerocopy; + } + + completions += range; + sends_since_notify -= range; + + if (cfg_verbose >= 2) + fprintf(stderr, "completed: %u (h=%u l=%u)\n", + range, hi, lo); + } + + added_zcopy_info = false; +} + /* Wait for all remaining completions on the errqueue */ static void do_recv_remaining_completions(int fd, int domain) { @@ -553,11 +630,16 @@ static void do_tx(int domain, int type, int protocol) else do_sendmsg(fd, &msg, cfg_zerocopy, domain);
- if (cfg_zerocopy && sends_since_notify >= cfg_notification_limit) + if (cfg_zerocopy == MSG_ZEROCOPY_NOTIFY_ERRQUEUE && + sends_since_notify >= cfg_notification_limit) do_recv_completions(fd, domain);
+ if (cfg_zerocopy == MSG_ZEROCOPY_NOTIFY_SENDMSG && + added_zcopy_info) + do_recv_completions2(); + while (!do_poll(fd, POLLOUT)) { - if (cfg_zerocopy) + if (cfg_zerocopy == MSG_ZEROCOPY_NOTIFY_ERRQUEUE) do_recv_completions(fd, domain); }
@@ -715,7 +797,7 @@ static void parse_opts(int argc, char **argv)
 	cfg_payload_len = max_payload_len;
 
-	while ((c = getopt(argc, argv, "46c:C:D:i:l:mp:rs:S:t:vz")) != -1) {
+	while ((c = getopt(argc, argv, "46c:C:D:i:l:mnp:rs:S:t:vz")) != -1) {
 		switch (c) {
 		case '4':
 			if (cfg_family != PF_UNSPEC)
@@ -749,6 +831,9 @@ static void parse_opts(int argc, char **argv)
 		case 'm':
 			cfg_cork_mixed = true;
 			break;
+		case 'n':
+			cfg_zerocopy = MSG_ZEROCOPY_NOTIFY_SENDMSG;
+			break;
 		case 'p':
 			cfg_port = strtoul(optarg, NULL, 0);
 			break;
@@ -768,7 +853,7 @@ static void parse_opts(int argc, char **argv)
 			cfg_verbose++;
 			break;
 		case 'z':
-			cfg_zerocopy = true;
+			cfg_zerocopy = MSG_ZEROCOPY_NOTIFY_ERRQUEUE;
 			break;
 		}
 	}
@@ -779,6 +864,8 @@ static void parse_opts(int argc, char **argv)
 			error(1, 0, "-D <server addr> required for PF_RDS\n");
 		if (!cfg_rx && !saddr)
 			error(1, 0, "-S <client addr> required for PF_RDS\n");
+		if (cfg_zerocopy == MSG_ZEROCOPY_NOTIFY_SENDMSG)
+			error(1, 0, "PF_RDS does not support ZC_NOTIF_SENDMSG");
 	}
 	setup_sockaddr(cfg_family, daddr, &cfg_dst_addr);
 	setup_sockaddr(cfg_family, saddr, &cfg_src_addr);
diff --git a/tools/testing/selftests/net/msg_zerocopy.sh b/tools/testing/selftests/net/msg_zerocopy.sh
index 89c22f5320e0..022a6936d86f 100755
--- a/tools/testing/selftests/net/msg_zerocopy.sh
+++ b/tools/testing/selftests/net/msg_zerocopy.sh
@@ -118,4 +118,5 @@ do_test() {
 
 do_test "${EXTRA_ARGS}"
 do_test "-z ${EXTRA_ARGS}"
+do_test "-n ${EXTRA_ARGS}"
 echo ok
zijianzhang@ wrote:
From: Zijian Zhang zijianzhang@bytedance.com
We update selftests/net/msg_zerocopy.c to accommodate the new mechanism,
Please make commit messages stand on their own. If someone does a git blame, make the message self explanatory. Replace "the new mechanism" with sendmsg SCM_ZC_NOTIFICATION.
In patch 2 or as a separate patch 4, also add a new short section on this API in Documentation/networking/msg_zerocopy.rst. Probably with the same contents as a good explanation of the feature in the commit message of patch 2.
cfg_notification_limit has the same semantics for both methods. Test results are as follows; we updated skb_orphan_frags_rx to match skb_orphan_frags to support zerocopy in the localhost test.
cfg_notification_limit = 1: both methods get notifications after one call of sendmsg. In this case, the new method has around 17% cpu savings in TCP and 23% cpu savings in UDP.

+---------------------+---------+---------+---------+---------+
| Test Type / Protocol| TCP v4  | TCP v6  | UDP v4  | UDP v6  |
+---------------------+---------+---------+---------+---------+
| ZCopy (MB)          | 7523    | 7706    | 7489    | 7304    |
+---------------------+---------+---------+---------+---------+
| New ZCopy (MB)      | 8834    | 8993    | 9053    | 9228    |
+---------------------+---------+---------+---------+---------+
| New ZCopy / ZCopy   | 117.42% | 116.70% | 120.88% | 126.34% |
+---------------------+---------+---------+---------+---------+
cfg_notification_limit = 32: both get notifications after 32 calls of sendmsg, which means more chances to coalesce notifications and less poll + recvmsg overhead for the original method. In this case, the new method has around 7% cpu savings in TCP and slightly better cpu usage in UDP. In the selftest environment, notifications in TCP are more likely to be out of order than in UDP, so it is easier to coalesce more notifications in UDP. The original method can get one notification covering a range of 32 in a single recvmsg most of the time, while in TCP most notifications cover a range of around 2, so the original method needs around 16 recvmsg calls to get notified in one round. That is the reason for the "New ZCopy / ZCopy" difference between TCP and UDP here.

+---------------------+---------+---------+---------+---------+
| Test Type / Protocol| TCP v4  | TCP v6  | UDP v4  | UDP v6  |
+---------------------+---------+---------+---------+---------+
| ZCopy (MB)          | 8842    | 8735    | 10072   | 9380    |
+---------------------+---------+---------+---------+---------+
| New ZCopy (MB)      | 9366    | 9477    | 10108   | 9385    |
+---------------------+---------+---------+---------+---------+
| New ZCopy / ZCopy   | 106.00% | 108.28% | 100.31% | 100.01% |
+---------------------+---------+---------+---------+---------+
In conclusion, when the notification interval is small or notifications are hard to coalesce, the new mechanism is highly recommended. Otherwise, the performance gain from the new mechanism is very limited.
Signed-off-by: Zijian Zhang zijianzhang@bytedance.com
Signed-off-by: Xiaochun Lu xiaochun.lu@bytedance.com
-static bool do_sendmsg(int fd, struct msghdr *msg, bool do_zerocopy, int domain)
+static void add_zcopy_info(struct msghdr *msg)
+{
+	struct zc_info *zc_info;
+	struct cmsghdr *cm;
+
+	if (!msg->msg_control)
+		error(1, errno, "NULL user arg");
Don't add precondition checks for code entirely under your control. This is not a user API.
+	cm = (struct cmsghdr *)msg->msg_control;
+	cm->cmsg_len = CMSG_LEN(ZC_INFO_SIZE);
+	cm->cmsg_level = SOL_SOCKET;
+	cm->cmsg_type = SCM_ZC_NOTIFICATION;
+	zc_info = (struct zc_info *)CMSG_DATA(cm);
+	zc_info->size = ZC_NOTIFICATION_MAX;
+	added_zcopy_info = true;
Just initialize every time? Is this here to reuse the same msg_control as long as metadata is returned?
+}
+static bool do_sendmsg(int fd, struct msghdr *msg,
+		       enum notification_type do_zerocopy, int domain)
 {
 	int ret, len, i, flags;
 	static uint32_t cookie;
@@ -200,6 +233,12 @@ static bool do_sendmsg(int fd, struct msghdr *msg, bool do_zerocopy, int domain)
 			msg->msg_controllen = CMSG_SPACE(sizeof(cookie));
 			msg->msg_control = (struct cmsghdr *)ckbuf;
 			add_zcopy_cookie(msg, ++cookie);
+		} else if (do_zerocopy == MSG_ZEROCOPY_NOTIFY_SENDMSG &&
+			   sends_since_notify + 1 >= cfg_notification_limit) {
+			memset(&msg->msg_control, 0, sizeof(msg->msg_control));
+			msg->msg_controllen = CMSG_SPACE(ZC_INFO_SIZE);
+			msg->msg_control = (struct cmsghdr *)zc_ckbuf;
+			add_zcopy_info(msg);
 		}
 	}
@@ -218,7 +257,7 @@ static bool do_sendmsg(int fd, struct msghdr *msg, bool do_zerocopy, int domain)
 		if (do_zerocopy && ret)
 			expected_completions++;
 	}
-	if (do_zerocopy && domain == PF_RDS) {
+	if (msg->msg_control) {
 		msg->msg_control = NULL;
 		msg->msg_controllen = 0;
 	}
@@ -466,6 +505,44 @@ static void do_recv_completions(int fd, int domain)
 	sends_since_notify = 0;
 }
 
+static void do_recv_completions2(void)
functionname2 is very uninformative.
do_recv_completions_sendmsg or so.
[...]
-	while ((c = getopt(argc, argv, "46c:C:D:i:l:mp:rs:S:t:vz")) != -1) {
+	while ((c = getopt(argc, argv, "46c:C:D:i:l:mnp:rs:S:t:vz")) != -1) {
 		switch (c) {
@@ -749,6 +831,9 @@ static void parse_opts(int argc, char **argv)
 		case 'm':
 			cfg_cork_mixed = true;
 			break;
+		case 'n':
+			cfg_zerocopy = MSG_ZEROCOPY_NOTIFY_SENDMSG;
+			break;
How about -Z to make clear that this is still MSG_ZEROCOPY, just with a different notification mechanism.
And perhaps add a testcase that exercises both this mechanism and existing recvmsg MSG_ERRQUEUE. As they should work in parallel and concurrently in a multithreaded environment.
On 7/31/24 3:32 PM, Willem de Bruijn wrote:
zijianzhang@ wrote:
From: Zijian Zhang zijianzhang@bytedance.com
We update selftests/net/msg_zerocopy.c to accommodate the new mechanism,
First of all, thanks for the detailed suggestions!
Please make commit messages stand on their own. If someone does a git blame, make the message self explanatory. Replace "the new mechanism" with sendmsg SCM_ZC_NOTIFICATION.
In patch 2 or as a separate patch 4, also add a new short section on this API in Documentation/networking/msg_zerocopy.rst. Probably with the same contents as a good explanation of the feature in the commit message of patch 2.
Agreed.
[...]
Don't add precondition checks for code entirely under your control. This is not a user API.
Ack.
[...]
Just initialize every time? Is this here to reuse the same msg_control as long as metadata is returned?
Yes, the same msg_control will be reused.
The overall paradigm is:

start:
	sendmsg(..)
	sendmsg(..)
	...			/* sends_since_notify sendmsgs in total */
	add_zcopy_info(..)
	sendmsg(.., msg_control)
	do_recv_completions_sendmsg(..)
	goto start;
If sends_since_notify + 1 >= cfg_notification_limit, add_zcopy_info will be invoked, and the very next sendmsg will have the msg_control passed in.

If added_zcopy_info is set, do_recv_completions_sendmsg will be invoked, and added_zcopy_info will be reset to false inside it.
[...]
functionname2 is very uninformative.
do_recv_completions_sendmsg or so.
Ack.
[...]
How about -Z to make clear that this is still MSG_ZEROCOPY, just with a different notification mechanism.
And perhaps add a testcase that exercises both this mechanism and existing recvmsg MSG_ERRQUEUE. As they should work in parallel and concurrently in a multithreaded environment.
-Z is more clear, and the hybrid testcase will be helpful.
Btw, before I put some effort into solving the current issues, I think I should wait for comments about the API change from linux-api@vger.kernel.org?
On Thu, Aug 1, 2024 at 1:30 PM Zijian Zhang zijianzhang@bytedance.com wrote:
[...]
If (added_zcopy_info), do_recv_completions_sendmsg will be invoked, and added_zcopy_info will be set to false in it.
This does not seem like it would need a global variable?
[...]
Btw, before I put some effort into solving the current issues, I think I should wait for comments about the API change from linux-api@vger.kernel.org?
I'm not sure whether anyone on that list will give feedback.
I would continue with revisions at a normal schedule, as long as that stays in the Cc.
On 8/1/24 10:36 AM, Willem de Bruijn wrote:
On Thu, Aug 1, 2024 at 1:30 PM Zijian Zhang zijianzhang@bytedance.com wrote:
[...]
This does not seem like it would need a global variable?
Agreed, maybe I can use sends_since_notify to check whether we need to call do_recv_completions_sendmsg; then we can get rid of added_zcopy_info.
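That simplification in do_tx() could look something like the following untested sketch, assuming sends_since_notify is only reset when completions are actually consumed:

 	if (cfg_zerocopy == MSG_ZEROCOPY_NOTIFY_SENDMSG &&
-	    added_zcopy_info)
-		do_recv_completions2();
+	    sends_since_notify >= cfg_notification_limit)
+		do_recv_completions_sendmsg();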
Btw, before I put some effort into solving the current issues, I think I should wait for comments about the API change from linux-api@vger.kernel.org?
I'm not sure whether anyone on that list will give feedback.
I would continue with revisions at a normal schedule, as long as that stays in the Cc.
Got it, thanks