Oops, sorry for the delay here. I simply forgot to check my mail.
This comment is right. When I submitted this patch, I mentioned that replacing this lock can hang the detaching routine, because it has to wait for the socket lock taken via lock_sock() to be released.
However, this did no harm in my testing. In fact, the relevant code only runs when the controller is being removed, so I think it is acceptable for it to wait on the lock. Moreover, this patch does fix the potential UAF.
> may need further discussion. (written in a previous mail to the list)
Additional advice on this is welcome: does this really violate the locking rules here?
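To make the question concrete, after this patch the detach loop is roughly the following (a simplified sketch of hci_sock_dev_event() based on the quoted hunks, with the loop body abbreviated and the comments added by me):

	read_lock(&hci_sk_list.lock);		/* rwlock_t: atomic context while held, sleeping not allowed */
	sk_for_each(sk, &hci_sk_list.head) {
		lock_sock(sk);			/* mutex-style socket lock: may sleep waiting for the owner */
		if (hci_pi(sk)->hdev == hdev) {
			hci_pi(sk)->hdev = NULL;
			sk->sk_err = EPIPE;
			/* ... */
			hci_dev_put(hdev);
		}
		release_sock(sk);
	}
	read_unlock(&hci_sk_list.lock);

If I read Eric's comment correctly, the problem is exactly this nesting: read_lock() keeps us in a context where sleeping is forbidden until read_unlock(), while lock_sock() may sleep, regardless of whether the hang actually shows up in testing.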
Regards,
Lin Ma
On 2021-06-16 23:01:08, "Greg Kroah-Hartman" <gregkh(a)linuxfoundation.org> wrote:
>On Mon, Jun 14, 2021 at 04:15:02PM +0200, Eric Dumazet wrote:
>>
>>
>> On 6/8/21 8:27 PM, Greg Kroah-Hartman wrote:
>> > From: Lin Ma <linma(a)zju.edu.cn>
>> >
>> > commit e305509e678b3a4af2b3cfd410f409f7cdaabb52 upstream.
>> >
>> > The hci_sock_dev_event() function will clean up the hdev object for
>> > sockets even if this object may still be in use within the
>> > hci_sock_bound_ioctl() function, resulting in a UAF vulnerability.
>> >
>> > This patch replaces the BH context lock to serialize these operations
>> > and prevent the race condition.
>> >
>> > Signed-off-by: Lin Ma <linma(a)zju.edu.cn>
>> > Signed-off-by: Marcel Holtmann <marcel(a)holtmann.org>
>> > Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
>> > ---
>> > net/bluetooth/hci_sock.c | 4 ++--
>> > 1 file changed, 2 insertions(+), 2 deletions(-)
>> >
>> > --- a/net/bluetooth/hci_sock.c
>> > +++ b/net/bluetooth/hci_sock.c
>> > @@ -755,7 +755,7 @@ void hci_sock_dev_event(struct hci_dev *
>> >  		/* Detach sockets from device */
>> >  		read_lock(&hci_sk_list.lock);
>> >  		sk_for_each(sk, &hci_sk_list.head) {
>> > -			bh_lock_sock_nested(sk);
>> > +			lock_sock(sk);
>> >  			if (hci_pi(sk)->hdev == hdev) {
>> >  				hci_pi(sk)->hdev = NULL;
>> >  				sk->sk_err = EPIPE;
>> > @@ -764,7 +764,7 @@ void hci_sock_dev_event(struct hci_dev *
>> > 
>> >  				hci_dev_put(hdev);
>> >  			}
>> > -			bh_unlock_sock(sk);
>> > +			release_sock(sk);
>> >  		}
>> >  		read_unlock(&hci_sk_list.lock);
>> >  	}
>> >
>> >
>>
>>
>> This patch is buggy.
>>
>> lock_sock() can sleep.
>>
>> But the read_lock(&hci_sk_list.lock) two lines before is not going to allow the sleep.
>>
>> Hmmm ?
>>
>>
>
>Odd, Lin, did you see any problems with your testing of this?
>
Since the process wide cputime counter is started locklessly from
posix_cpu_timer_rearm(), it can be concurrently stopped by operations
on other timers from the same thread group, such as in the following
unlucky scenario:
            CPU 0                                CPU 1
            -----                                -----
                                           timer_settime(TIMER B)
   posix_cpu_timer_rearm(TIMER A)
       cpu_clock_sample_group()
           (pct->timers_active already true)

                                           handle_posix_cpu_timers()
                                               check_process_timers()
                                                   stop_process_timers()
                                                       pct->timers_active = false
       arm_timer(TIMER A)

   tick -> run_posix_cpu_timers()
       // sees !pct->timers_active, ignore
       // our TIMER A
Fix this by simply performing the process wide cputime counting start and
the timer arming inside the same locked block.
Acked-by: Peter Zijlstra (Intel) <peterz(a)infradead.org>
Signed-off-by: Frederic Weisbecker <frederic(a)kernel.org>
Fixes: 60f2ceaa8111 ("posix-cpu-timers: Remove unnecessary locking around cpu_clock_sample_group")
Cc: stable(a)vger.kernel.org
Cc: Oleg Nesterov <oleg(a)redhat.com>
Cc: Thomas Gleixner <tglx(a)linutronix.de>
Cc: Ingo Molnar <mingo(a)kernel.org>
Cc: Eric W. Biederman <ebiederm(a)xmission.com>
---
kernel/time/posix-cpu-timers.c | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
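For readers following the race described above: after this change the ordering inside posix_cpu_timer_rearm() becomes roughly the following (a condensed sketch built from the hunks below; the surrounding calls such as CPUCLOCK_PERTHREAD(), cpu_clock_sample(), arm_timer() and unlock_task_sighand() are from my reading of kernel/time/posix-cpu-timers.c, not from this patch):

	/* Take sighand before sampling, so starting the process wide
	 * cputime counter and arming the timer happen in one locked block. */
	sighand = lock_task_sighand(p, &flags);
	if (unlikely(sighand == NULL))
		goto out;

	if (CPUCLOCK_PERTHREAD(timer->it_clock))
		now = cpu_clock_sample(clkid, p);
	else
		now = cpu_clock_sample_group(clkid, p, true);	/* may set pct->timers_active, now under the lock */

	bump_cpu_timer(timer, now);

	arm_timer(timer, p);		/* re-arm while still holding sighand */

	unlock_task_sighand(p, &flags);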
diff --git a/kernel/time/posix-cpu-timers.c b/kernel/time/posix-cpu-timers.c
index 3bb96a8b49c9..aa52fc85dbcb 100644
--- a/kernel/time/posix-cpu-timers.c
+++ b/kernel/time/posix-cpu-timers.c
@@ -991,6 +991,11 @@ static void posix_cpu_timer_rearm(struct k_itimer *timer)
 	if (!p)
 		goto out;
 
+	/* Protect timer list r/w in arm_timer() */
+	sighand = lock_task_sighand(p, &flags);
+	if (unlikely(sighand == NULL))
+		goto out;
+
 	/*
 	 * Fetch the current sample and update the timer's expiry time.
 	 */
@@ -1001,11 +1006,6 @@ static void posix_cpu_timer_rearm(struct k_itimer *timer)
 
 	bump_cpu_timer(timer, now);
 
-	/* Protect timer list r/w in arm_timer() */
-	sighand = lock_task_sighand(p, &flags);
-	if (unlikely(sighand == NULL))
-		goto out;
-
 	/*
 	 * Now re-arm for the new expiry time.
 	 */
--
2.25.1