The riscv allyesconfig build failed with gcc-13 on the Linux next tags next-20250516 and next-20250522.
First seen on next-20250516. Good: next-20250515, Bad: next-20250516.
Regressions found on riscv:
- build/gcc-13-allyesconfig
Regression Analysis:
- New regression? Yes
- Reproducible? Yes
Build regression: riscv gcc-13 allyesconfig error: the frame size of 2064 bytes is larger than 2048 bytes [-Werror=frame-larger-than=]
Reported-by: Linux Kernel Functional Testing <lkft@linaro.org>

## Build log
fs/bcachefs/data_update.c: In function '__bch2_data_update_index_update':
fs/bcachefs/data_update.c:464:1: error: the frame size of 2064 bytes is larger than 2048 bytes [-Werror=frame-larger-than=]
  464 | }
      | ^
cc1: all warnings being treated as errors

## Source
* Kernel version: 6.15.0-rc7
* Git tree: https://kernel.googlesource.com/pub/scm/linux/kernel/git/next/linux-next.git
* Git sha: 460178e842c7a1e48a06df684c66eb5fd630bcf7
* Git describe: next-20250522

## Build
* Build log: https://qa-reports.linaro.org/api/testruns/28521854/log_file/
* Build history: https://qa-reports.linaro.org/lkft/linux-next-master/build/next-20250522/tes...
* Build link: https://storage.tuxsuite.com/public/linaro/lkft/builds/2xRoAAw5dl69AvvHb8oZ3...
* Kernel config: https://storage.tuxsuite.com/public/linaro/lkft/builds/2xRoAAw5dl69AvvHb8oZ3...

--
Linaro LKFT
https://lkft.linaro.org
Hi Naresh,
Bcachefs has its own mailing list, linux-bcachefs@vger.kernel.org, so I am Cc'ing this email to the bcachefs mailing list.
Thanks.
Coly Li
On May 22, 2025, at 21:29, Naresh Kamboju <naresh.kamboju@linaro.org> wrote:
The riscv allyesconfig build failed with gcc-13 on the Linux next tags next-20250516 and next-20250522.
First seen on next-20250516. Good: next-20250515, Bad: next-20250516.
Regressions found on riscv:
- build/gcc-13-allyesconfig
Regression Analysis:
- New regression? Yes
- Reproducible? Yes
Build regression: riscv gcc-13 allyesconfig error: the frame size of 2064 bytes is larger than 2048 bytes [-Werror=frame-larger-than=]
Reported-by: Linux Kernel Functional Testing <lkft@linaro.org>

## Build log
fs/bcachefs/data_update.c: In function '__bch2_data_update_index_update':
fs/bcachefs/data_update.c:464:1: error: the frame size of 2064 bytes is larger than 2048 bytes [-Werror=frame-larger-than=]
  464 | }
      | ^
cc1: all warnings being treated as errors
## Source
- Kernel version: 6.15.0-rc7
- Git tree: https://kernel.googlesource.com/pub/scm/linux/kernel/git/next/linux-next.git
- Git sha: 460178e842c7a1e48a06df684c66eb5fd630bcf7
- Git describe: next-20250522
## Build
- Build log: https://qa-reports.linaro.org/api/testruns/28521854/log_file/
- Build history:
https://qa-reports.linaro.org/lkft/linux-next-master/build/next-20250522/tes...
- Build link: https://storage.tuxsuite.com/public/linaro/lkft/builds/2xRoAAw5dl69AvvHb8oZ3...
- Kernel config:
https://storage.tuxsuite.com/public/linaro/lkft/builds/2xRoAAw5dl69AvvHb8oZ3...
--
Linaro LKFT
https://lkft.linaro.org
On Thu, May 22, 2025 at 06:59:53PM +0530, Naresh Kamboju wrote:
The riscv allyesconfig build failed with gcc-13 on the Linux next tags next-20250516 and next-20250522.
First seen on next-20250516. Good: next-20250515, Bad: next-20250516.
Regressions found on riscv:
- build/gcc-13-allyesconfig
Regression Analysis:
- New regression? Yes
- Reproducible? Yes
Build regression: riscv gcc-13 allyesconfig error: the frame size of 2064 bytes is larger than 2048 bytes [-Werror=frame-larger-than=]
Is this a kmsan build? kmsan seems to inflate stack usage by quite a lot.
On Thu, 22 May 2025 at 22:18, Kent Overstreet <kent.overstreet@linux.dev> wrote:
On Thu, May 22, 2025 at 06:59:53PM +0530, Naresh Kamboju wrote:
The riscv allyesconfig build failed with gcc-13 on the Linux next tags next-20250516 and next-20250522.
First seen on next-20250516. Good: next-20250515, Bad: next-20250516.
Regressions found on riscv:
- build/gcc-13-allyesconfig
Regression Analysis:
- New regression? Yes
- Reproducible? Yes
Build regression: riscv gcc-13 allyesconfig error: the frame size of 2064 bytes is larger than 2048 bytes [-Werror=frame-larger-than=]
Is this a kmsan build? kmsan seems to inflate stack usage by quite a lot.
This is an allyesconfig build, which has KASAN enabled:
CONFIG_HAVE_ARCH_KASAN=y
CONFIG_HAVE_ARCH_KASAN_VMALLOC=y
CONFIG_CC_HAS_KASAN_GENERIC=y
CONFIG_CC_HAS_WORKING_NOSANITIZE_ADDRESS=y
CONFIG_KASAN=y
CONFIG_CC_HAS_KASAN_MEMINTRINSIC_PREFIX=y
CONFIG_KASAN_GENERIC=y
# CONFIG_KASAN_OUTLINE is not set
CONFIG_KASAN_INLINE=y
CONFIG_KASAN_STACK=y
CONFIG_KASAN_VMALLOC=y
CONFIG_KASAN_KUNIT_TEST=y
CONFIG_KASAN_EXTRA_INFO=y
- Naresh
On Fri, May 23, 2025, at 15:19, Naresh Kamboju wrote:
On Thu, 22 May 2025 at 22:18, Kent Overstreet <kent.overstreet@linux.dev> wrote:
On Thu, May 22, 2025 at 06:59:53PM +0530, Naresh Kamboju wrote:
The riscv allyesconfig build failed with gcc-13 on the Linux next tags next-20250516 and next-20250522.
First seen on next-20250516. Good: next-20250515, Bad: next-20250516.
Regressions found on riscv:
- build/gcc-13-allyesconfig
Regression Analysis:
- New regression? Yes
- Reproducible? Yes
Build regression: riscv gcc-13 allyesconfig error: the frame size of 2064 bytes is larger than 2048 bytes [-Werror=frame-larger-than=]
Is this a kmsan build? kmsan seems to inflate stack usage by quite a lot.
KMSAN is currently a clang-only feature.
This is an allyesconfig build, which has KASAN enabled:
CONFIG_HAVE_ARCH_KASAN=y
CONFIG_HAVE_ARCH_KASAN_VMALLOC=y
CONFIG_CC_HAS_KASAN_GENERIC=y
CONFIG_CC_HAS_WORKING_NOSANITIZE_ADDRESS=y
CONFIG_KASAN=y
CONFIG_CC_HAS_KASAN_MEMINTRINSIC_PREFIX=y
CONFIG_KASAN_GENERIC=y
# CONFIG_KASAN_OUTLINE is not set
CONFIG_KASAN_INLINE=y
CONFIG_KASAN_STACK=y
I reproduced the problem locally and found this to go down to 1440 bytes after I turn off KASAN_STACK. next-20250523 has some changes that take the number down further, to 1136 without KASAN_STACK and 1552 with KASAN_STACK.
I've turned bcachefs with kasan-stack on for my randconfig builds again to see if there are any remaining corner cases.
Arnd
On Fri, May 23, 2025 at 03:49:54PM +0200, Arnd Bergmann wrote:
On Fri, May 23, 2025, at 15:19, Naresh Kamboju wrote:
On Thu, 22 May 2025 at 22:18, Kent Overstreet <kent.overstreet@linux.dev> wrote:
On Thu, May 22, 2025 at 06:59:53PM +0530, Naresh Kamboju wrote:
The riscv allyesconfig build failed with gcc-13 on the Linux next tags next-20250516 and next-20250522.
First seen on next-20250516. Good: next-20250515, Bad: next-20250516.
Regressions found on riscv:
- build/gcc-13-allyesconfig
Regression Analysis:
- New regression? Yes
- Reproducible? Yes
Build regression: riscv gcc-13 allyesconfig error: the frame size of 2064 bytes is larger than 2048 bytes [-Werror=frame-larger-than=]
Is this a kmsan build? kmsan seems to inflate stack usage by quite a lot.
KMSAN is currently a clang-only feature.
This is an allyesconfig build, which has KASAN enabled:
CONFIG_HAVE_ARCH_KASAN=y
CONFIG_HAVE_ARCH_KASAN_VMALLOC=y
CONFIG_CC_HAS_KASAN_GENERIC=y
CONFIG_CC_HAS_WORKING_NOSANITIZE_ADDRESS=y
CONFIG_KASAN=y
CONFIG_CC_HAS_KASAN_MEMINTRINSIC_PREFIX=y
CONFIG_KASAN_GENERIC=y
# CONFIG_KASAN_OUTLINE is not set
CONFIG_KASAN_INLINE=y
CONFIG_KASAN_STACK=y
I reproduced the problem locally and found this to go down to 1440 bytes after I turn off KASAN_STACK. next-20250523 has some changes that take the number down further, to 1136 without KASAN_STACK and 1552 with KASAN_STACK.
I've turned bcachefs with kasan-stack on for my randconfig builds again to see if there are any remaining corner cases.
Thanks for the numbers - that does still seem high, I'll have to have a look with pahole.
On Fri, May 23, 2025, at 16:08, Kent Overstreet wrote:
On Fri, May 23, 2025 at 03:49:54PM +0200, Arnd Bergmann wrote:
On Fri, May 23, 2025, at 15:19, Naresh Kamboju wrote:
I reproduced the problem locally and found this to go down to 1440 bytes after I turn off KASAN_STACK. next-20250523 has some changes that take the number down further, to 1136 without KASAN_STACK and 1552 with KASAN_STACK.
I've turned bcachefs with kasan-stack on for my randconfig builds again to see if there are any remaining corner cases.
Thanks for the numbers - that does still seem high, I'll have to have a look with pahole.
I agree it's still larger than it should be: having more than a few hundred bytes in a function usually means both a risk of actual overflow and general inefficiency if all of that stack data actually gets accessed.
It's probably not actually structure data though, but a combination of effects:
- KASAN_STACK adds extra redzones for each variable
- KASAN_STACK further prevents stack slots from getting reused inside one function, in order to better pinpoint which instance caused problems like out-of-scope access
- passing structures by value causes them to be put on the stack on some architectures, even when the structure size is only one or two registers
- sanitizers turn off optimizations that lead to better stack usage
- in some cases, the missed optimization ends up causing local variables to get spilled to the stack many times because of a combination of all the above.
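To make the first two items concrete, here is a minimal userspace sketch (not from this thread; the function and variable names are made up): the two arrays have disjoint lifetimes, so an uninstrumented build can overlap them in a single 256-byte stack slot, while KASAN-style stack instrumentation keeps a separate, redzone-padded slot for each, inflating the frame.

/*
 * Minimal sketch, not taken from the thread: two block-scoped arrays with
 * disjoint lifetimes.  Without instrumentation the compiler may reuse one
 * 256-byte stack slot for both; with address-sanitizer stack instrumentation
 * (CONFIG_KASAN_STACK in the kernel) each array keeps its own slot and is
 * surrounded by redzones, so the frame grows accordingly.
 */
#include <string.h>

unsigned char stack_slot_demo(int which)
{
	if (which) {
		unsigned char a[256];

		memset(a, 0xaa, sizeof(a));
		return a[which & 0xff];
	} else {
		unsigned char b[256];

		memset(b, 0x55, sizeof(b));
		return b[0];
	}
}

Comparing -fstack-usage output for such a file with and without -fsanitize=address should show the difference directly.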
The good news is that so far my randconfig builds have not shown any more stack frame warnings on next-20250523 with bcachefs force-enabled, now 55 builds into the change, across arm32/arm64/x86 using gcc-15.1.
Arnd
On Fri, May 23, 2025 at 05:17:15PM +0200, Arnd Bergmann wrote:
On Fri, May 23, 2025, at 16:08, Kent Overstreet wrote:
On Fri, May 23, 2025 at 03:49:54PM +0200, Arnd Bergmann wrote:
On Fri, May 23, 2025, at 15:19, Naresh Kamboju wrote:
I reproduced the problem locally and found this to go down to 1440 bytes after I turn off KASAN_STACK. next-20250523 has some changes that take the number down further, to 1136 without KASAN_STACK and 1552 with KASAN_STACK.
I've turned bcachefs with kasan-stack on for my randconfig builds again to see if there are any remaining corner cases.
Thanks for the numbers - that does still seem high, I'll have to have a look with pahole.
I agree it's still larger than it should be: having more than a few hundred bytes in a function usually means both a risk of actual overflow and general inefficiency if all of that stack data actually gets accessed.
It's probably not actually structure data though, but a combination of effects:
- KASAN_STACK adds extra redzones for each variable
- KASAN_STACK further prevents stack slots from getting reused inside one function, in order to better pinpoint which instance caused problems like out-of-scope access
- passing structures by value causes them to be put on the stack on some architectures, even when the structure size is only one or two registers
We mainly do this with bkey_s_c, which is just two words: on x86_64, that gets passed in registers. Is riscv different?
- sanitizers turn off optimizations that lead to better stack usage
- in some cases, the missed optimization ends up causing local variables to get spilled to the stack many times because of a combination of all the above.
Yeesh.
I suspect we should be running with a larger stack when the sanitizers are running, and perhaps tweak the warnings accordingly. I did a bunch of stack usage work after I found a kmsan build was blowing out the stack, but then running with max stack usage tracing enabled showed it to be largely a non-issue on non-sanitizer builds, IIRC.
The good news is that so far my randconfig builds have not shown any more stack frame warnings on next-20250523 with bcachefs force-enabled, now 55 builds into the change, across arm32/arm64/x86 using gcc-15.1.
Good to know, thanks.
On Fri, May 23, 2025, at 19:11, Kent Overstreet wrote:
On Fri, May 23, 2025 at 05:17:15PM +0200, Arnd Bergmann wrote:
- KASAN_STACK adds extra redzones for each variable
- KASAN_STACK further prevents stack slots from getting reused inside one function, in order to better pinpoint which instance caused problems like out-of-scope access
- passing structures by value causes them to be put on the stack on some architectures, even when the structure size is only one or two registers
We mainly do this with bkey_s_c, which is just two words: on x86_64, that gets passed in registers. Is riscv different?
Not sure; I think it's mostly older ABIs that are limited, either not passing structures in registers at all, or possibly passing one register's worth but not two.
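For reference, bkey_s_c is essentially a pair of const pointers; below is a reduced, self-contained sketch of such a two-word aggregate passed by value (the struct and function names here are made up, only the shape matches). If I read the RISC-V LP64 psABI correctly, aggregates up to 2*XLEN are passed in a pair of argument registers, so this case should stay in registers there; the older 32-bit ABIs mentioned above are where it tends to end up on the stack.

/*
 * Reduced sketch of a two-word aggregate passed by value, standing in for
 * bkey_s_c (which is essentially a pair of const pointers).  On ABIs that
 * pass small aggregates in registers this costs nothing; on ABIs that
 * don't, each call site materialises the struct on the stack.
 */
struct two_words {
	const void *k;
	const void *v;
};

static const void *first_field(struct two_words s)
{
	return s.k;
}

int main(void)
{
	int x;
	struct two_words s = { &x, &x };

	return first_field(s) == &x ? 0 : 1;
}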
- sanitizers turn off optimizations that lead to better stack usage
- in some cases, the missed optimization ends up causing local variables to get spilled to the stack many times because of a combination of all the above.
Yeesh.
I suspect we should be running with a larger stack when the sanitizers are running, and perhaps tweak the warnings accordingly. I did a bunch of stack usage work after I found a kmsan build was blowing out the stack, but then running with max stack usage tracing enabled showed it to be largely a non-issue on non-sanitizer builds, IIRC.
Enabling KASAN does double the available stack space. However, I don't think we should use that as an excuse to raise the per-function warning limit, because
- the majority of all function stacks do not grow that much when sanitizers are enabled
- allmodconfig enables KASAN and should still catch mistakes where a driver accidentally puts a large structure on the stack
- 2KB on 64-bit targets is a really large limit. At some point in the past I had a series that lowered the limit to 1536 bytes for 64-bit targets, but I never managed to get all the changes merged.
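For reference, the "double the available stack space" part comes from the arch code bumping the thread stack order when KASAN is enabled, while the -Wframe-larger-than= limit is set separately via CONFIG_FRAME_WARN. Below is a rough, self-contained sketch of that pattern, written from memory of the x86_64 headers; exact macro names and values vary by architecture, so treat it as illustrative only.

#include <stdio.h>

#define PAGE_SIZE	4096UL

/*
 * Illustrative only: modelled on how x86_64 sizes its kernel stacks, where
 * enabling KASAN bumps the stack order by one (doubling THREAD_SIZE).
 * The per-function -Wframe-larger-than= limit (CONFIG_FRAME_WARN) is a
 * separate knob and does not scale with it.
 */
#ifdef CONFIG_KASAN
#define KASAN_STACK_ORDER	1
#else
#define KASAN_STACK_ORDER	0
#endif

#define THREAD_SIZE_ORDER	(2 + KASAN_STACK_ORDER)
#define THREAD_SIZE		(PAGE_SIZE << THREAD_SIZE_ORDER)

int main(void)
{
	/* Prints 16384 normally, 32768 when built with -DCONFIG_KASAN. */
	printf("THREAD_SIZE = %lu\n", THREAD_SIZE);
	return 0;
}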
Arnd
Reduce stack usage - bkey_buf has a 96 byte buffer on the stack, but the btree_trans bump allocator works just fine here.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
---
 fs/bcachefs/data_update.c | 26 +++++++++++++++-----------
 1 file changed, 15 insertions(+), 11 deletions(-)

diff --git a/fs/bcachefs/data_update.c b/fs/bcachefs/data_update.c
index de096ca65b4b..ef648a6d9c52 100644
--- a/fs/bcachefs/data_update.c
+++ b/fs/bcachefs/data_update.c
@@ -187,14 +187,9 @@ static int __bch2_data_update_index_update(struct btree_trans *trans,
 	struct data_update *m =
 		container_of(op, struct data_update, op);
 	struct keylist *keys = &op->insert_keys;
-	struct bkey_buf _new, _insert;
 	struct printbuf journal_msg = PRINTBUF;
 	int ret = 0;
 
-	bch2_bkey_buf_init(&_new);
-	bch2_bkey_buf_init(&_insert);
-	bch2_bkey_buf_realloc(&_insert, c, U8_MAX);
-
 	bch2_trans_iter_init(trans, &iter, m->btree_id,
 			     bkey_start_pos(&bch2_keylist_front(keys)->k),
 			     BTREE_ITER_slots|BTREE_ITER_intent);
@@ -229,11 +224,22 @@ static int __bch2_data_update_index_update(struct btree_trans *trans,
 			goto nowork;
 		}
 
-		bkey_reassemble(_insert.k, k);
-		insert = _insert.k;
+		insert = bch2_trans_kmalloc(trans,
+					    bkey_bytes(k.k) +
+					    bkey_val_bytes(&new->k) +
+					    sizeof(struct bch_extent_rebalance));
+		ret = PTR_ERR_OR_ZERO(insert);
+		if (ret)
+			goto err;
+
+		bkey_reassemble(insert, k);
+
+		new = bch2_trans_kmalloc(trans, bkey_bytes(&new->k));
+		ret = PTR_ERR_OR_ZERO(new);
+		if (ret)
+			goto err;
 
-		bch2_bkey_buf_copy(&_new, c, bch2_keylist_front(keys));
-		new = bkey_i_to_extent(_new.k);
+		bkey_copy(&new->k_i, bch2_keylist_front(keys));
 		bch2_cut_front(iter.pos, &new->k_i);
 
 		bch2_cut_front(iter.pos, insert);
@@ -457,8 +463,6 @@ static int __bch2_data_update_index_update(struct btree_trans *trans,
 out:
 	printbuf_exit(&journal_msg);
 	bch2_trans_iter_exit(trans, &iter);
-	bch2_bkey_buf_exit(&_insert, c);
-	bch2_bkey_buf_exit(&_new, c);
 	BUG_ON(bch2_err_matches(ret, BCH_ERR_transaction_restart));
 	return ret;
 }
The data update path doesn't need a printbuf for its log message - this will help reduce stack usage.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
---
 fs/bcachefs/btree_update.c | 26 ++++++++++++++++++--------
 fs/bcachefs/btree_update.h |  1 +
 2 files changed, 19 insertions(+), 8 deletions(-)

diff --git a/fs/bcachefs/btree_update.c b/fs/bcachefs/btree_update.c
index afd05c3dfd03..8fe2834f9d57 100644
--- a/fs/bcachefs/btree_update.c
+++ b/fs/bcachefs/btree_update.c
@@ -829,25 +829,35 @@ int bch2_btree_bit_mod_buffered(struct btree_trans *trans, enum btree_id btree,
 	return bch2_trans_update_buffered(trans, btree, &k);
 }
 
-int bch2_trans_log_msg(struct btree_trans *trans, struct printbuf *buf)
+static int __bch2_trans_log_str(struct btree_trans *trans, const char *str, unsigned len)
 {
-	unsigned u64s = DIV_ROUND_UP(buf->pos, sizeof(u64));
-
-	int ret = buf->allocation_failure ? -BCH_ERR_ENOMEM_trans_log_msg : 0;
-	if (ret)
-		return ret;
+	unsigned u64s = DIV_ROUND_UP(len, sizeof(u64));
 
 	struct jset_entry *e = bch2_trans_jset_entry_alloc(trans, jset_u64s(u64s));
-	ret = PTR_ERR_OR_ZERO(e);
+	int ret = PTR_ERR_OR_ZERO(e);
 	if (ret)
 		return ret;
 
 	struct jset_entry_log *l = container_of(e, struct jset_entry_log, entry);
 	journal_entry_init(e, BCH_JSET_ENTRY_log, 0, 1, u64s);
-	memcpy_and_pad(l->d, u64s * sizeof(u64), buf->buf, buf->pos, 0);
+	memcpy_and_pad(l->d, u64s * sizeof(u64), str, len, 0);
 	return 0;
 }
 
+int bch2_trans_log_str(struct btree_trans *trans, const char *str)
+{
+	return __bch2_trans_log_str(trans, str, strlen(str));
+}
+
+int bch2_trans_log_msg(struct btree_trans *trans, struct printbuf *buf)
+{
+	int ret = buf->allocation_failure ? -BCH_ERR_ENOMEM_trans_log_msg : 0;
+	if (ret)
+		return ret;
+
+	return __bch2_trans_log_str(trans, buf->buf, buf->pos);
+}
+
 int bch2_trans_log_bkey(struct btree_trans *trans, enum btree_id btree,
 			unsigned level, struct bkey_i *k)
 {
diff --git a/fs/bcachefs/btree_update.h b/fs/bcachefs/btree_update.h
index a54dc7277177..f907eaa8b185 100644
--- a/fs/bcachefs/btree_update.h
+++ b/fs/bcachefs/btree_update.h
@@ -205,6 +205,7 @@ void bch2_trans_commit_hook(struct btree_trans *,
 			    struct btree_trans_commit_hook *);
 int __bch2_trans_commit(struct btree_trans *, unsigned);
 
+int bch2_trans_log_str(struct btree_trans *, const char *);
 int bch2_trans_log_msg(struct btree_trans *, struct printbuf *);
 int bch2_trans_log_bkey(struct btree_trans *, enum btree_id, unsigned, struct bkey_i *);
 
Separate tracepoint message generation and other slowpath code into non-inline functions, and use bch2_trans_log_str() instead of using a printbuf for our journal message.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
---
 fs/bcachefs/data_update.c | 154 +++++++++++++++++++++-----------------
 1 file changed, 87 insertions(+), 67 deletions(-)

diff --git a/fs/bcachefs/data_update.c b/fs/bcachefs/data_update.c
index ef648a6d9c52..c34e5b88ba9d 100644
--- a/fs/bcachefs/data_update.c
+++ b/fs/bcachefs/data_update.c
@@ -100,9 +100,10 @@ static bool bkey_nocow_lock(struct bch_fs *c, struct moving_context *ctxt, struc
 	return true;
 }
 
-static noinline void trace_io_move_finish2(struct data_update *u,
-					   struct bkey_i *new,
-					   struct bkey_i *insert)
+noinline_for_stack
+static void trace_io_move_finish2(struct data_update *u,
+				  struct bkey_i *new,
+				  struct bkey_i *insert)
 {
 	struct bch_fs *c = u->op.c;
 	struct printbuf buf = PRINTBUF;
@@ -124,6 +125,7 @@ static noinline void trace_io_move_finish2(struct data_update *u,
 	printbuf_exit(&buf);
 }
 
+noinline_for_stack
 static void trace_io_move_fail2(struct data_update *m,
 			 struct bkey_s_c new,
 			 struct bkey_s_c wrote,
@@ -179,19 +181,84 @@ static void trace_io_move_fail2(struct data_update *m,
 	printbuf_exit(&buf);
 }
 
+noinline_for_stack
+static void trace_data_update2(struct data_update *m,
+			       struct bkey_s_c old, struct bkey_s_c k,
+			       struct bkey_i *insert)
+{
+	struct bch_fs *c = m->op.c;
+	struct printbuf buf = PRINTBUF;
+
+	prt_str(&buf, "\nold: ");
+	bch2_bkey_val_to_text(&buf, c, old);
+	prt_str(&buf, "\nk: ");
+	bch2_bkey_val_to_text(&buf, c, k);
+	prt_str(&buf, "\nnew: ");
+	bch2_bkey_val_to_text(&buf, c, bkey_i_to_s_c(insert));
+
+	trace_data_update(c, buf.buf);
+	printbuf_exit(&buf);
+}
+
+noinline_for_stack
+static void trace_io_move_created_rebalance2(struct data_update *m,
+					     struct bkey_s_c old, struct bkey_s_c k,
+					     struct bkey_i *insert)
+{
+	struct bch_fs *c = m->op.c;
+	struct printbuf buf = PRINTBUF;
+
+	bch2_data_update_opts_to_text(&buf, c, &m->op.opts, &m->data_opts);
+
+	prt_str(&buf, "\nold: ");
+	bch2_bkey_val_to_text(&buf, c, old);
+	prt_str(&buf, "\nk: ");
+	bch2_bkey_val_to_text(&buf, c, k);
+	prt_str(&buf, "\nnew: ");
+	bch2_bkey_val_to_text(&buf, c, bkey_i_to_s_c(insert));
+
+	trace_io_move_created_rebalance(c, buf.buf);
+	printbuf_exit(&buf);
+
+	this_cpu_inc(c->counters[BCH_COUNTER_io_move_created_rebalance]);
+}
+
+noinline_for_stack
+static int data_update_invalid_bkey(struct data_update *m,
+				    struct bkey_s_c old, struct bkey_s_c k,
+				    struct bkey_i *insert)
+{
+	struct bch_fs *c = m->op.c;
+	struct printbuf buf = PRINTBUF;
+	bch2_log_msg_start(c, &buf);
+
+	prt_str(&buf, "about to insert invalid key in data update path");
+	prt_printf(&buf, "\nop.nonce: %u", m->op.nonce);
+	prt_str(&buf, "\nold: ");
+	bch2_bkey_val_to_text(&buf, c, old);
+	prt_str(&buf, "\nk: ");
+	bch2_bkey_val_to_text(&buf, c, k);
+	prt_str(&buf, "\nnew: ");
+	bch2_bkey_val_to_text(&buf, c, bkey_i_to_s_c(insert));
+
+	bch2_fs_emergency_read_only2(c, &buf);
+
+	bch2_print_str(c, KERN_ERR, buf.buf);
+	printbuf_exit(&buf);
+
+	return -BCH_ERR_invalid_bkey;
+}
+
 static int __bch2_data_update_index_update(struct btree_trans *trans,
 					   struct bch_write_op *op)
 {
 	struct bch_fs *c = op->c;
 	struct btree_iter iter;
-	struct data_update *m =
-		container_of(op, struct data_update, op);
-	struct keylist *keys = &op->insert_keys;
-	struct printbuf journal_msg = PRINTBUF;
+	struct data_update *m = container_of(op, struct data_update, op);
 	int ret = 0;
 
 	bch2_trans_iter_init(trans, &iter, m->btree_id,
-			     bkey_start_pos(&bch2_keylist_front(keys)->k),
+			     bkey_start_pos(&bch2_keylist_front(&op->insert_keys)->k),
 			     BTREE_ITER_slots|BTREE_ITER_intent);
 
 	while (1) {
@@ -216,11 +283,11 @@ static int __bch2_data_update_index_update(struct btree_trans *trans,
 		if (ret)
 			goto err;
 
-		new = bkey_i_to_extent(bch2_keylist_front(keys));
+		new = bkey_i_to_extent(bch2_keylist_front(&op->insert_keys));
 
 		if (!bch2_extents_match(k, old)) {
 			trace_io_move_fail2(m, k, bkey_i_to_s_c(&new->k_i),
-					NULL, "no match:");
+					    NULL, "no match:");
 			goto nowork;
 		}
 
@@ -239,7 +306,7 @@ static int __bch2_data_update_index_update(struct btree_trans *trans,
 		if (ret)
 			goto err;
 
-		bkey_copy(&new->k_i, bch2_keylist_front(keys));
+		bkey_copy(&new->k_i, bch2_keylist_front(&op->insert_keys));
 		bch2_cut_front(iter.pos, &new->k_i);
 
 		bch2_cut_front(iter.pos, insert);
@@ -353,31 +420,11 @@ static int __bch2_data_update_index_update(struct btree_trans *trans,
 				.flags = BCH_VALIDATE_commit,
 			});
 		if (unlikely(invalid)) {
-			struct printbuf buf = PRINTBUF;
-			bch2_log_msg_start(c, &buf);
-
-			prt_str(&buf, "about to insert invalid key in data update path");
-			prt_printf(&buf, "\nop.nonce: %u", m->op.nonce);
-			prt_str(&buf, "\nold: ");
-			bch2_bkey_val_to_text(&buf, c, old);
-			prt_str(&buf, "\nk: ");
-			bch2_bkey_val_to_text(&buf, c, k);
-			prt_str(&buf, "\nnew: ");
-			bch2_bkey_val_to_text(&buf, c, bkey_i_to_s_c(insert));
-
-			bch2_fs_emergency_read_only2(c, &buf);
-
-			bch2_print_str(c, KERN_ERR, buf.buf);
-			printbuf_exit(&buf);
-
-			ret = -BCH_ERR_invalid_bkey;
+			ret = data_update_invalid_bkey(m, old, k, insert);
 			goto out;
 		}
 
-		printbuf_reset(&journal_msg);
-		prt_str(&journal_msg, bch2_data_update_type_strs[m->type]);
-
-		ret = bch2_trans_log_msg(trans, &journal_msg) ?:
+		ret = bch2_trans_log_str(trans, bch2_data_update_type_strs[m->type]) ?:
 			bch2_trans_log_bkey(trans, m->btree_id, 0, m->k.k) ?:
 			bch2_insert_snapshot_whiteouts(trans, m->btree_id,
 						k.k->p, bkey_start_pos(&insert->k)) ?:
@@ -389,38 +436,12 @@ static int __bch2_data_update_index_update(struct btree_trans *trans,
 		if (ret)
 			goto err;
 
-		if (trace_data_update_enabled()) {
-			struct printbuf buf = PRINTBUF;
-
-			prt_str(&buf, "\nold: ");
-			bch2_bkey_val_to_text(&buf, c, old);
-			prt_str(&buf, "\nk: ");
-			bch2_bkey_val_to_text(&buf, c, k);
-			prt_str(&buf, "\nnew: ");
-			bch2_bkey_val_to_text(&buf, c, bkey_i_to_s_c(insert));
-
-			trace_data_update(c, buf.buf);
-			printbuf_exit(&buf);
-		}
+		if (trace_data_update_enabled())
+			trace_data_update2(m, old, k, insert);
 
 		if (bch2_bkey_sectors_need_rebalance(c, bkey_i_to_s_c(insert)) * k.k->size >
-		    bch2_bkey_sectors_need_rebalance(c, k) * insert->k.size) {
-			struct printbuf buf = PRINTBUF;
-
-			bch2_data_update_opts_to_text(&buf, c, &m->op.opts, &m->data_opts);
-
-			prt_str(&buf, "\nold: ");
-			bch2_bkey_val_to_text(&buf, c, old);
-			prt_str(&buf, "\nk: ");
-			bch2_bkey_val_to_text(&buf, c, k);
-			prt_str(&buf, "\nnew: ");
-			bch2_bkey_val_to_text(&buf, c, bkey_i_to_s_c(insert));
-
-			trace_io_move_created_rebalance(c, buf.buf);
-			printbuf_exit(&buf);
-
-			this_cpu_inc(c->counters[BCH_COUNTER_io_move_created_rebalance]);
-		}
+		    bch2_bkey_sectors_need_rebalance(c, k) * insert->k.size)
+			trace_io_move_created_rebalance2(m, old, k, insert);
 
 		ret = bch2_trans_commit(trans, &op->res,
 				NULL,
@@ -441,9 +462,9 @@ static int __bch2_data_update_index_update(struct btree_trans *trans,
 		if (ret)
 			break;
 next:
-		while (bkey_ge(iter.pos, bch2_keylist_front(keys)->k.p)) {
-			bch2_keylist_pop_front(keys);
-			if (bch2_keylist_empty(keys))
+		while (bkey_ge(iter.pos, bch2_keylist_front(&op->insert_keys)->k.p)) {
+			bch2_keylist_pop_front(&op->insert_keys);
+			if (bch2_keylist_empty(&op->insert_keys))
 				goto out;
 		}
 		continue;
@@ -461,7 +482,6 @@ static int __bch2_data_update_index_update(struct btree_trans *trans,
 		goto next;
 	}
 out:
-	printbuf_exit(&journal_msg);
 	bch2_trans_iter_exit(trans, &iter);
 	BUG_ON(bch2_err_matches(ret, BCH_ERR_transaction_restart));
 	return ret;