inode_hash() currently mixes a name-derived hash with the super_block pointer using an unbounded multiplication:
  tmp = (hashval * (unsigned long)sb) ^ (GOLDEN_RATIO_PRIME + hashval) /
        L1_CACHE_BYTES;
On 64-bit kernels this multiplication can overflow for many inputs. With attacker-chosen filenames (reachable by any authenticated client), the overflowed products collapse into a small set of buckets, saturating a few hash chains and degrading lookups from O(1) to O(n). The result is second-scale latency spikes and high CPU usage in the ksmbd worker threads (an algorithmic DoS).
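As a rough illustration (not the reproducer from the report), the userspace harness below replays the old mixing steps and reports the longest resulting hash chain. The GOLDEN_RATIO_PRIME value, super_block address, table shift/mask and the pattern of hashvals are all stand-ins chosen for the sketch:

  #include <stdio.h>
  #include <stdlib.h>

  /* stand-in constants; the kernel derives shift/mask at hash-table init time */
  #define GOLDEN_RATIO_PRIME	0x61C8864680B583EBUL
  #define L1_CACHE_BYTES	64
  #define HASH_SHIFT		16
  #define HASH_MASK		((1UL << HASH_SHIFT) - 1)

  /* same mixing steps as the old inode_hash() */
  static unsigned long old_hash(unsigned long sb, unsigned long hashval)
  {
  	unsigned long tmp;

  	tmp = (hashval * sb) ^ (GOLDEN_RATIO_PRIME + hashval) / L1_CACHE_BYTES;
  	tmp = tmp ^ ((tmp ^ GOLDEN_RATIO_PRIME) >> HASH_SHIFT);
  	return tmp & HASH_MASK;
  }

  int main(void)
  {
  	unsigned long sb = 0xffff888004affc00UL;	/* pretend super_block address */
  	unsigned long *chains = calloc(HASH_MASK + 1, sizeof(*chains));
  	unsigned long i, max = 0;

  	if (!chains)
  		return 1;
  	/* one illustrative pattern of hashvals with repeating low bits */
  	for (i = 0; i < (1UL << 20); i++)
  		chains[old_hash(sb, i << 20)]++;
  	for (i = 0; i <= HASH_MASK; i++)
  		if (chains[i] > max)
  			max = chains[i];
  	printf("longest chain: %lu of %lu entries\n", max, 1UL << 20);
  	free(chains);
  	return 0;
  }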
Replace the pointer*hash multiply with hash_long() over the mixed value (hashval ^ (unsigned long)sb), keeping the existing inode_hash_shift and inode_hash_mask constants. This removes the overflow source and improves bucket distribution under adversarial inputs without changing externally visible behavior.
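For context, on 64-bit kernels hash_long() typically resolves to hash_64(), which multiplies by an odd golden-ratio constant and keeps the top bits, so every bit of (hashval ^ sb) influences the bucket index. A minimal userspace approximation (constant as in include/linux/hash.h; the sb address and table parameters below are stand-ins):

  #include <stdio.h>

  #define GOLDEN_RATIO_64	0x61C8864680B583EBULL	/* as in include/linux/hash.h */

  /* userspace stand-in for hash_64(): golden-ratio multiply, keep top bits */
  static unsigned long hash_64_approx(unsigned long val, unsigned int bits)
  {
  	return (val * GOLDEN_RATIO_64) >> (64 - bits);
  }

  /* mirrors the new inode_hash(): mix sb into hashval, hash, then mask */
  static unsigned long new_inode_hash(unsigned long sb, unsigned long hashval,
  				      unsigned int shift, unsigned long mask)
  {
  	return hash_64_approx(hashval ^ sb, shift) & mask;
  }

  int main(void)
  {
  	/* stand-in sb address and table parameters */
  	printf("bucket %lu\n",
  	       new_inode_hash(0xffff888004affc00UL, 12345UL, 16, (1UL << 16) - 1));
  	return 0;
  }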
This is an algorithmic-complexity issue (CWE-190/CWE-407), not a memory-safety bug.
Reported-by: Qianchang Zhao <pioooooooooip@gmail.com>
Reported-by: Zhitong Liu <liuzhitong1993@gmail.com>
Cc: stable@vger.kernel.org
Signed-off-by: Qianchang Zhao <pioooooooooip@gmail.com>
---
 fs/smb/server/vfs_cache.c | 9 +++------
 1 file changed, 3 insertions(+), 6 deletions(-)
diff --git a/fs/smb/server/vfs_cache.c b/fs/smb/server/vfs_cache.c
index dfed6fce8..ac18edf56 100644
--- a/fs/smb/server/vfs_cache.c
+++ b/fs/smb/server/vfs_cache.c
@@ -10,6 +10,7 @@
 #include <linux/vmalloc.h>
 #include <linux/kthread.h>
 #include <linux/freezer.h>
+#include <linux/hash.h>
 
 #include "glob.h"
 #include "vfs_cache.h"
@@ -65,12 +66,8 @@ static void fd_limit_close(void)
 
 static unsigned long inode_hash(struct super_block *sb, unsigned long hashval)
 {
-	unsigned long tmp;
-
-	tmp = (hashval * (unsigned long)sb) ^ (GOLDEN_RATIO_PRIME + hashval) /
-		L1_CACHE_BYTES;
-	tmp = tmp ^ ((tmp ^ GOLDEN_RATIO_PRIME) >> inode_hash_shift);
-	return tmp & inode_hash_mask;
+	return hash_long(hashval ^ (unsigned long)sb, inode_hash_shift) &
+		inode_hash_mask;
 }
 
 static struct ksmbd_inode *__ksmbd_inode_lookup(struct dentry *de)