Greetings to you,
I write this with deep respect and in humble submission. I beg to
present the following few lines for your kind consideration, and I hope
you will spare a few precious minutes to read this appeal with
compassion. I must admit that it is with great hope, joy and enthusiasm
that I write you this e-mail, which I know and trust in faith will
surely meet you in good health.
I am Miss Sandrina Omaru, daughter of the late Williams Omaru. Before
my father's death he called me and informed me that he had three
million six hundred thousand euros (EUR 3,600,000.00) deposited in a
private bank here in Abidjan, Côte d'Ivoire.
He told me that he had deposited the money in my name and also gave me
all the necessary legal documents for this deposit at the bank. I am an
undergraduate and do not really know what to do. I am now looking for
an honest and GOD-fearing partner overseas with whose help this money
could be transferred, and after the transaction I will come and live
permanently in your country until such time as it is convenient for me
to return home, should I so wish. This is because of the constant
political crisis here in Côte d'Ivoire, from which I have suffered a
great deal.
Please consider this and contact me as soon as possible. As soon as
you confirm your willingness, I will send you my picture and give you
further details about this matter.
Yours sincerely,
Miss Sandrina Omaru
Patch 1 clears resources earlier if there are no more reasons to keep
MPTCP sockets alive.
Patches 2 and 3 fix some locking issues visible in some rare corner
cases: the linked issues should be quite hard to reproduce.
Patch 4 makes sure subflows are correctly cleaned after the end of a
connection.
Patches 5 and 6 improve the selftests' stability when running in a slow
environment, on the one hand by transferring data for a longer period
and on the other hand by stopping the tests once all expected events
have been observed.
All these patches fix issues introduced before v6.2.
Signed-off-by: Matthieu Baerts <matthieu.baerts(a)tessares.net>
---
Matthieu Baerts (1):
selftests: mptcp: stop tests earlier
Paolo Abeni (5):
mptcp: do not wait for bare sockets' timeout
mptcp: fix locking for setsockopt corner-case
mptcp: fix locking for in-kernel listener creation
mptcp: be careful on subflow status propagation on errors
selftests: mptcp: allow more slack for slow test-case
net/mptcp/pm_netlink.c | 10 ++++++----
net/mptcp/protocol.c | 9 +++++++++
net/mptcp/sockopt.c | 11 +++++++++--
net/mptcp/subflow.c | 12 ++++++++++--
tools/testing/selftests/net/mptcp/mptcp_join.sh | 22 +++++++++++++++++-----
5 files changed, 51 insertions(+), 13 deletions(-)
---
base-commit: 811d581194f7412eda97acc03d17fc77824b561f
change-id: 20230207-upstream-net-20230207-various-fix-6-2-1848a75bbbe6
Best regards,
--
Matthieu Baerts <matthieu.baerts(a)tessares.net>
From: Eric Biggers <ebiggers(a)google.com>
Randstruct with clang is currently unsafe to use in any clang release
that supports it, due to a clang bug that is causing miscompilations:
"-frandomize-layout-seed inconsistently randomizes all-function-pointers
structs" (https://github.com/llvm/llvm-project/issues/60349). Disable
it temporarily until the bug is fixed and the fix is released in a clang
version that can be checked for.
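For illustration only (a hypothetical struct, not taken from the kernel
tree): randstruct automatically randomizes structs that consist entirely
of function pointers, i.e. typical "ops" tables like the one below,
which is the kind of struct the linked clang bug randomizes
inconsistently:

  /* Hypothetical example of an all-function-pointers struct; randstruct
   * implicitly treats such "ops" tables as randomizable.
   */
  struct example_ops {
          int  (*open)(void *priv);
          int  (*read)(void *priv, char *buf, unsigned long len);
          void (*close)(void *priv);
  };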
Fixes: 035f7f87b729 ("randstruct: Enable Clang support")
Cc: stable(a)vger.kernel.org
Signed-off-by: Eric Biggers <ebiggers(a)google.com>
---
security/Kconfig.hardening | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/security/Kconfig.hardening b/security/Kconfig.hardening
index 53baa95cb644..aad16187148c 100644
--- a/security/Kconfig.hardening
+++ b/security/Kconfig.hardening
@@ -280,7 +280,8 @@ config ZERO_CALL_USED_REGS
endmenu
config CC_HAS_RANDSTRUCT
- def_bool $(cc-option,-frandomize-layout-seed-file=/dev/null)
+ # Temporarily disabled due to https://github.com/llvm/llvm-project/issues/60349
+ def_bool n
choice
prompt "Randomize layout of sensitive kernel structures"
base-commit: 7b753a909f426f2789d9db6f357c3d59180a9354
--
2.39.1
From: Devid Antonio Filoni <devid.filoni(a)egluetechnologies.com>
The ISO 11783-5 standard, in "4.5.2 - Address claim requirements", states:
d) No CF shall begin, or resume, transmission on the network until 250
ms after it has successfully claimed an address except when
responding to a request for address-claimed.
But "Figure 6" and "Figure 7" in "4.5.4.2 - Address-claim
prioritization" show that the CF begins the transmission after 250 ms
from the first AC (address-claimed) message even if it sends another AC
message during that time window to resolve the address contention with
another CF.
As stated in "4.4.2.3 - Address-claimed message":
In order to successfully claim an address, the CF sending an address
claimed message shall not receive a contending claim from another CF
for at least 250 ms.
As stated in "4.4.3.2 - NAME management (NM) message":
1) A commanding CF can
d) request that a CF with a specified NAME transmit the address-
claimed message with its current NAME.
2) A target CF shall
d) send an address-claimed message in response to a request for a
matching NAME
Taking the above arguments into account, the 250 ms wait is requested
only during network initialization.
Do not restart the timer on AC message if both the NAME and the address
match and so if the address has already been claimed (timer has expired)
or the AC message has been sent to resolve the contention with another
CF (timer is still running).
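A minimal self-contained sketch of that rule (hypothetical types and
names, not the kernel implementation in the diff below):

  #include <stdbool.h>
  #include <stdint.h>

  struct known_ecu {
          uint64_t name;  /* 64-bit J1939 NAME */
          uint8_t  addr;  /* claimed source address */
  };

  /* An AC message whose NAME and address both match an already known
   * ECU is our own (re)claim: leave the 250 ms claim timer alone,
   * whether it has already expired or is still running.
   */
  static bool must_restart_claim_timer(const struct known_ecu *ecu,
                                       uint64_t ac_name, uint8_t ac_sa)
  {
          if (ecu && ecu->name == ac_name && ecu->addr == ac_sa)
                  return false;
          return true;
  }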
Signed-off-by: Devid Antonio Filoni <devid.filoni(a)egluetechnologies.com>
Acked-by: Oleksij Rempel <o.rempel(a)pengutronix.de>
Link: https://lore.kernel.org/all/20221125170418.34575-1-devid.filoni@egluetechno…
Fixes: 9d71dd0c7009 ("can: add support of SAE J1939 protocol")
Cc: stable(a)vger.kernel.org
Signed-off-by: Marc Kleine-Budde <mkl(a)pengutronix.de>
---
net/can/j1939/address-claim.c | 40 +++++++++++++++++++++++++++++++++++
1 file changed, 40 insertions(+)
diff --git a/net/can/j1939/address-claim.c b/net/can/j1939/address-claim.c
index f33c47327927..ca4ad6cdd5cb 100644
--- a/net/can/j1939/address-claim.c
+++ b/net/can/j1939/address-claim.c
@@ -165,6 +165,46 @@ static void j1939_ac_process(struct j1939_priv *priv, struct sk_buff *skb)
* leaving this function.
*/
ecu = j1939_ecu_get_by_name_locked(priv, name);
+
+ if (ecu && ecu->addr == skcb->addr.sa) {
+ /* The ISO 11783-5 standard, in "4.5.2 - Address claim
+ * requirements", states:
+ * d) No CF shall begin, or resume, transmission on the
+ * network until 250 ms after it has successfully claimed
+ * an address except when responding to a request for
+ * address-claimed.
+ *
+ * But "Figure 6" and "Figure 7" in "4.5.4.2 - Address-claim
+ * prioritization" show that the CF begins the transmission
+ * after 250 ms from the first AC (address-claimed) message
+ * even if it sends another AC message during that time window
+ * to resolve the address contention with another CF.
+ *
+ * As stated in "4.4.2.3 - Address-claimed message":
+ * In order to successfully claim an address, the CF sending
+ * an address claimed message shall not receive a contending
+ * claim from another CF for at least 250 ms.
+ *
+ * As stated in "4.4.3.2 - NAME management (NM) message":
+ * 1) A commanding CF can
+ * d) request that a CF with a specified NAME transmit
+ * the address-claimed message with its current NAME.
+ * 2) A target CF shall
+ * d) send an address-claimed message in response to a
+ * request for a matching NAME
+ *
+ * Taking the above arguments into account, the 250 ms wait is
+ * requested only during network initialization.
+ *
+ * Do not restart the timer on AC message if both the NAME and
+ * the address match and so if the address has already been
+ * claimed (timer has expired) or the AC message has been sent
+ * to resolve the contention with another CF (timer is still
+ * running).
+ */
+ goto out_ecu_put;
+ }
+
if (!ecu && j1939_address_is_unicast(skcb->addr.sa))
ecu = j1939_ecu_create_locked(priv, name);
base-commit: 811d581194f7412eda97acc03d17fc77824b561f
--
2.39.1
From: Qian Yingjin <qian(a)ddn.com>
I was running traces of the read code against a RAID storage
system to understand why read requests were being misaligned
against the underlying RAID strips. I found that the page end
offset calculation in filemap_get_read_batch() was off by one.
When a read is submitted with end offset 1048575, it calculates
the end page index for the read as 256 when it should be 255.
"last_index" is the index of the page beyond the end of the read,
and it should be skipped when getting a batch of pages for read in
@filemap_get_read_batch().
The simple patch below fixes the problem. This code was introduced
in kernel 5.12.
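A standalone sketch of the arithmetic (assuming a 1 MiB read starting
at offset 0, which ends at byte offset 1048575; not kernel code):

  #include <stdio.h>

  #define PAGE_SIZE          4096UL
  #define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

  int main(void)
  {
          unsigned long ki_pos = 0;          /* read start */
          unsigned long count  = 1048576;    /* last byte at offset 1048575 */
          unsigned long last_index = DIV_ROUND_UP(ki_pos + count, PAGE_SIZE);

          printf("last_index = %lu (first page beyond the read)\n",
                 last_index);
          printf("last page to fetch is %lu, hence last_index - 1\n",
                 last_index - 1);
          return 0;
  }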
Fixes: cbd59c48ae2b ("mm/filemap: use head pages in generic_file_buffered_read")
Signed-off-by: Qian Yingjin <qian(a)ddn.com>
---
mm/filemap.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/mm/filemap.c b/mm/filemap.c
index c4d4ace9cc70..0e20a8d6dd93 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2588,18 +2588,19 @@ static int filemap_get_pages(struct kiocb *iocb, struct iov_iter *iter,
struct folio *folio;
int err = 0;
+ /* "last_index" is the index of the page beyond the end of the read */
last_index = DIV_ROUND_UP(iocb->ki_pos + iter->count, PAGE_SIZE);
retry:
if (fatal_signal_pending(current))
return -EINTR;
- filemap_get_read_batch(mapping, index, last_index, fbatch);
+ filemap_get_read_batch(mapping, index, last_index - 1, fbatch);
if (!folio_batch_count(fbatch)) {
if (iocb->ki_flags & IOCB_NOIO)
return -EAGAIN;
page_cache_sync_readahead(mapping, ra, filp, index,
last_index - index);
- filemap_get_read_batch(mapping, index, last_index, fbatch);
+ filemap_get_read_batch(mapping, index, last_index - 1, fbatch);
}
if (!folio_batch_count(fbatch)) {
if (iocb->ki_flags & (IOCB_NOWAIT | IOCB_WAITQ))
--
2.34.1
KVM_SEV_SEND_UPDATE_DATA and KVM_SEV_RECEIVE_UPDATE_DATA have an integer
overflow issue. Both params.guest_len and offset are 32 bits wide, so
with a large params.guest_len the check that confirms a page boundary is
not crossed can falsely pass:
/* Check if we are crossing the page boundary */
offset = params.guest_uaddr & (PAGE_SIZE - 1);
if ((params.guest_len + offset > PAGE_SIZE))
Add an additional check to this conditional to confirm that
params.guest_len itself is not greater than PAGE_SIZE.
The current code can only overflow with a params.guest_len greater than
0xfffff000, and the FW spec says these commands fail with lengths
greater than 16KB, so this issue should not be a security concern.
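A standalone demonstration of the wrap-around (hypothetical values, not
kernel code):

  #include <stdint.h>
  #include <stdio.h>

  #define PAGE_SIZE 4096u

  int main(void)
  {
          uint32_t guest_len = 0xfffff001u; /* larger than 0xfffff000 */
          uint32_t offset    = 0xfffu;      /* guest_uaddr & (PAGE_SIZE - 1) */
          uint32_t sum       = guest_len + offset; /* 2^32 wraps to 0 */

          printf("old check: %s\n",
                 sum > PAGE_SIZE ? "rejects" : "falsely passes");
          printf("new check: %s\n",
                 (guest_len > PAGE_SIZE || sum > PAGE_SIZE) ?
                 "rejects" : "passes");
          return 0;
  }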
Fixes: 15fb7de1a7f5 ("KVM: SVM: Add KVM_SEV_RECEIVE_UPDATE_DATA command")
Fixes: d3d1af85e2c7 ("KVM: SVM: Add KVM_SEND_UPDATE_DATA command")
Reported-by: Andy Nguyen <theflow(a)google.com>
Suggested-by: Thomas Lendacky <thomas.lendacky(a)amd.com>
Signed-off-by: Peter Gonda <pgonda(a)google.com>
Cc: David Rientjes <rientjes(a)google.com>
Cc: Paolo Bonzini <pbonzini(a)redhat.com>
Cc: Sean Christopherson <seanjc(a)google.com>
Cc: kvm(a)vger.kernel.org
Cc: stable(a)vger.kernel.org
Cc: linux-kernel(a)vger.kernel.org
---
V2
* Updated conditional based on feedback from Tom.
---
arch/x86/kvm/svm/sev.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 273cba809328..3d74facaead8 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -1294,7 +1294,7 @@ static int sev_send_update_data(struct kvm *kvm, struct kvm_sev_cmd *argp)
/* Check if we are crossing the page boundary */
offset = params.guest_uaddr & (PAGE_SIZE - 1);
- if ((params.guest_len + offset > PAGE_SIZE))
+ if (params.guest_len > PAGE_SIZE || (params.guest_len + offset) > PAGE_SIZE)
return -EINVAL;
/* Pin guest memory */
@@ -1474,7 +1474,7 @@ static int sev_receive_update_data(struct kvm *kvm, struct kvm_sev_cmd *argp)
/* Check if we are crossing the page boundary */
offset = params.guest_uaddr & (PAGE_SIZE - 1);
- if ((params.guest_len + offset > PAGE_SIZE))
+ if (params.guest_len > PAGE_SIZE || (params.guest_len + offset) > PAGE_SIZE)
return -EINVAL;
hdr = psp_copy_user_blob(params.hdr_uaddr, params.hdr_len);
--
2.39.1.519.gcb327c4b5f-goog