During cdrom emulation, the response to the read_toc command must contain
the cdrom address as a number of sectors (2048-byte blocks), represented
either as an absolute value (when the MSF bit is '0') or in terms of
PMin/PSec/PFrame (when the MSF bit is set to '1'). In the case of cdrom,
the fsg_lun_open call sets the sector size to 2048 bytes.
When macOS sends a read_toc request with MSF set to '1',
store_cdrom_address assumes that the address being provided is the
LUN size in 512-byte blocks rather than 2048-byte blocks. It then tries
to convert the address to 2048-byte blocks before storing it in MSF
format. This results in data transfer failures because the cdrom address
provided in the read_toc response is incorrect.
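For illustration, the intended conversion can be modelled in a small
standalone C sketch (userspace only, not the kernel code itself; the
2-second lead-in and 75-frames-per-second layout mirror the logic in the
hunk below):

	#include <stdint.h>
	#include <stdio.h>

	/*
	 * Convert an address already given in 2048-byte sectors (CD-ROM
	 * frames) to Minutes-Seconds-Frames: no extra division for
	 * 512-byte blocks, only the 2-second lead-in offset.
	 */
	static void lba_to_msf(uint32_t addr, uint8_t *min, uint8_t *sec, uint8_t *frm)
	{
		addr += 2 * 75;		/* lead-in occupies 2 seconds */
		*frm = addr % 75;	/* frames */
		addr /= 75;
		*sec = addr % 60;	/* seconds */
		*min = addr / 60;	/* minutes */
	}

	int main(void)
	{
		uint8_t m, s, f;

		lba_to_msf(300, &m, &s, &f);	/* 300 sectors -> 00:06:00 */
		printf("%02u:%02u:%02u\n", m, s, f);
		return 0;
	}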
Fixes: 3f565a363cee ("usb: gadget: storage: adapt logic block size to bound block devices")
Cc: stable(a)vger.kernel.org
Signed-off-by: Krishna Kurapati <quic_kriskura(a)quicinc.com>
Acked-by: Alan Stern <stern(a)rowland.harvard.edu>
---
drivers/usb/gadget/function/storage_common.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/drivers/usb/gadget/function/storage_common.c b/drivers/usb/gadget/function/storage_common.c
index 03035db..208c6a9 100644
--- a/drivers/usb/gadget/function/storage_common.c
+++ b/drivers/usb/gadget/function/storage_common.c
@@ -294,8 +294,10 @@ EXPORT_SYMBOL_GPL(fsg_lun_fsync_sub);
void store_cdrom_address(u8 *dest, int msf, u32 addr)
{
if (msf) {
- /* Convert to Minutes-Seconds-Frames */
- addr >>= 2; /* Convert to 2048-byte frames */
+ /*
+ * Convert to Minutes-Seconds-Frames.
+ * Sector size is already set to 2048 bytes.
+ */
addr += 2*75; /* Lead-in occupies 2 seconds */
dest[3] = addr % 75; /* Frames */
addr /= 75;
--
2.7.4
The quilt patch titled
Subject: mm/migrate_device.c: copy pte dirty bit to page
has been removed from the -mm tree. Its filename was
mm-migrate_devicec-copy-pte-dirty-bit-to-page.patch
This patch was dropped because an updated version will be merged
------------------------------------------------------
From: Alistair Popple <apopple(a)nvidia.com>
Subject: mm/migrate_device.c: copy pte dirty bit to page
Date: Tue, 16 Aug 2022 17:39:24 +1000
migrate_vma_setup() has a fast path in migrate_vma_collect_pmd() that
installs migration entries directly if it can lock the migrating page.
When removing a dirty pte, the dirty bit is supposed to be carried over to
the underlying page to prevent it being lost.
Currently migrate_vma_*() can only be used for private anonymous mappings.
That means loss of the dirty bit usually doesn't result in data loss because
these pages are typically not file-backed. However, pages may be backed by
swap storage, which can result in data loss if an attempt is made to migrate
a dirty page that doesn't yet have the PageDirty flag set.
In this case migration will fail due to unexpected references, but the dirty
pte bit will be lost. If the page is subsequently reclaimed, data won't be
written back to swap storage as it is considered uptodate, resulting in data
loss if the page is subsequently accessed.
Prevent this by copying the dirty bit to the page when removing the pte to
match what try_to_migrate_one() does.
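Condensed from the hunks below, the intended ordering is roughly the
following (a sketch of the change, not standalone code):

	/* Clear the pte first so the hardware can no longer set its dirty bit. */
	flush_cache_page(vma, addr, pte_pfn(*ptep));
	pte = ptep_clear_flush(vma, addr, ptep);

	/* Carry the dirty state over to the page before it is unmapped. */
	if (pte_dirty(pte))
		folio_mark_dirty(page_folio(page));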
Link: https://lkml.kernel.org/r/6e77914685ede036c419fa65b6adc27f25a6c3e9.16606350…
Fixes: 8c3328f1f36a ("mm/migrate: migrate_vma() unmap page from vma while collecting pages")
Signed-off-by: Alistair Popple <apopple(a)nvidia.com>
Acked-by: Peter Xu <peterx(a)redhat.com>
Reported-by: Huang Ying <ying.huang(a)intel.com>
Reviewed-by: Huang Ying <ying.huang(a)intel.com>
Cc: Alex Sierra <alex.sierra(a)amd.com>
Cc: Ben Skeggs <bskeggs(a)redhat.com>
Cc: David Hildenbrand <david(a)redhat.com>
Cc: Felix Kuehling <felix.kuehling(a)amd.com>
Cc: Jason Gunthorpe <jgg(a)nvidia.com>
Cc: John Hubbard <jhubbard(a)nvidia.com>
Cc: Karol Herbst <kherbst(a)redhat.com>
Cc: Logan Gunthorpe <logang(a)deltatee.com>
Cc: Lyude Paul <lyude(a)redhat.com>
Cc: Matthew Wilcox (Oracle) <willy(a)infradead.org>
Cc: Paul Mackerras <paulus(a)ozlabs.org>
Cc: Ralph Campbell <rcampbell(a)nvidia.com>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
mm/migrate_device.c | 21 ++++++++-------------
1 file changed, 8 insertions(+), 13 deletions(-)
--- a/mm/migrate_device.c~mm-migrate_devicec-copy-pte-dirty-bit-to-page
+++ a/mm/migrate_device.c
@@ -7,6 +7,7 @@
#include <linux/export.h>
#include <linux/memremap.h>
#include <linux/migrate.h>
+#include <linux/mm.h>
#include <linux/mm_inline.h>
#include <linux/mmu_notifier.h>
#include <linux/oom.h>
@@ -61,7 +62,7 @@ static int migrate_vma_collect_pmd(pmd_t
struct migrate_vma *migrate = walk->private;
struct vm_area_struct *vma = walk->vma;
struct mm_struct *mm = vma->vm_mm;
- unsigned long addr = start, unmapped = 0;
+ unsigned long addr = start;
spinlock_t *ptl;
pte_t *ptep;
@@ -193,11 +194,10 @@ again:
bool anon_exclusive;
pte_t swp_pte;
+ flush_cache_page(vma, addr, pte_pfn(*ptep));
+ pte = ptep_clear_flush(vma, addr, ptep);
anon_exclusive = PageAnon(page) && PageAnonExclusive(page);
if (anon_exclusive) {
- flush_cache_page(vma, addr, pte_pfn(*ptep));
- ptep_clear_flush(vma, addr, ptep);
-
if (page_try_share_anon_rmap(page)) {
set_pte_at(mm, addr, ptep, pte);
unlock_page(page);
@@ -205,12 +205,14 @@ again:
mpfn = 0;
goto next;
}
- } else {
- ptep_get_and_clear(mm, addr, ptep);
}
migrate->cpages++;
+ /* Set the dirty flag on the folio now the pte is gone. */
+ if (pte_dirty(pte))
+ folio_mark_dirty(page_folio(page));
+
/* Setup special migration page table entry */
if (mpfn & MIGRATE_PFN_WRITE)
entry = make_writable_migration_entry(
@@ -242,9 +244,6 @@ again:
*/
page_remove_rmap(page, vma, false);
put_page(page);
-
- if (pte_present(pte))
- unmapped++;
} else {
put_page(page);
mpfn = 0;
@@ -257,10 +256,6 @@ next:
arch_leave_lazy_mmu_mode();
pte_unmap_unlock(ptep - 1, ptl);
- /* Only flush the TLB if we actually modified any entries */
- if (unmapped)
- flush_tlb_range(walk->vma, start, end);
-
return 0;
}
_
Patches currently in -mm which might be from apopple(a)nvidia.com are
mm-gupc-simplify-and-fix-check_and_migrate_movable_pages-return-codes.patch
selftests-hmm-tests-add-test-for-dirty-bits.patch
mm-gupc-dont-pass-gup_flags-to-check_and_migrate_movable_pages.patch
mm-gupc-refactor-check_and_migrate_movable_pages.patch
When clearing a PTE, the TLB should be flushed whilst still holding the
PTL to avoid a potential race with madvise/munmap/etc. For example,
consider the following sequence:
CPU0                                CPU1
----                                ----
migrate_vma_collect_pmd()
pte_unmap_unlock()
                                    madvise(MADV_DONTNEED)
                                    -> zap_pte_range()
                                    pte_offset_map_lock()
                                    [ PTE not present, TLB not flushed ]
                                    pte_unmap_unlock()
                                    [ page is still accessible via stale TLB ]
flush_tlb_range()
In this case the page may still be accessed via the stale TLB entry
after madvise returns. Fix this by flushing the TLB while holding the
PTL.
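Condensed from the diff below, the corrected exit path of
migrate_vma_collect_pmd() simply flushes before dropping the lock (a
sketch, not standalone code):

	/* Only flush the TLB if we actually modified any entries, and do
	 * it while the PTL is still held so a concurrent unmap cannot race. */
	if (unmapped)
		flush_tlb_range(walk->vma, start, end);

	arch_leave_lazy_mmu_mode();
	pte_unmap_unlock(ptep - 1, ptl);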
Signed-off-by: Alistair Popple <apopple(a)nvidia.com>
Reported-by: Nadav Amit <nadav.amit(a)gmail.com>
Fixes: 8c3328f1f36a ("mm/migrate: migrate_vma() unmap page from vma while collecting pages")
Cc: stable(a)vger.kernel.org
---
Changes for v3:
- New for v3
---
mm/migrate_device.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/mm/migrate_device.c b/mm/migrate_device.c
index 27fb37d..6a5ef9f 100644
--- a/mm/migrate_device.c
+++ b/mm/migrate_device.c
@@ -254,13 +254,14 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
migrate->dst[migrate->npages] = 0;
migrate->src[migrate->npages++] = mpfn;
}
- arch_leave_lazy_mmu_mode();
- pte_unmap_unlock(ptep - 1, ptl);
/* Only flush the TLB if we actually modified any entries */
if (unmapped)
flush_tlb_range(walk->vma, start, end);
+ arch_leave_lazy_mmu_mode();
+ pte_unmap_unlock(ptep - 1, ptl);
+
return 0;
}
base-commit: ffcf9c5700e49c0aee42dcba9a12ba21338e8136
--
git-series 0.9.1
Hi,
Here's the relevant bit of the build log:
drivers/net/usb/smsc95xx.c: In function ‘smsc95xx_status’:
drivers/net/usb/smsc95xx.c:625:3: error: implicit declaration of function ‘generic_handle_domain_irq’; did you mean ‘generic_handle_irq’? [-Werror=implicit-function-declaration]
  625 |   generic_handle_domain_irq(pdata->irqdomain, PHY_HWIRQ);
      |   ^~~~~~~~~~~~~~~~~~~~~~~~~
      |   generic_handle_irq
drivers/net/usb/smsc95xx.c: In function ‘smsc95xx_bind’:
drivers/net/usb/smsc95xx.c:1136:21: error: implicit declaration of function ‘irq_domain_alloc_named_fwnode’ [-Werror=implicit-function-declaration]
 1136 |   pdata->irqfwnode = irq_domain_alloc_named_fwnode(usb_path);
      |                      ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
drivers/net/usb/smsc95xx.c:1136:19: warning: assignment to ‘struct fwnode_handle *’ from ‘int’ makes pointer from integer without a cast [-Wint-conversion]
 1136 |   pdata->irqfwnode = irq_domain_alloc_named_fwnode(usb_path);
      |                    ^
drivers/net/usb/smsc95xx.c:1142:21: error: implicit declaration of function ‘irq_domain_create_linear’ [-Werror=implicit-function-declaration]
 1142 |   pdata->irqdomain = irq_domain_create_linear(pdata->irqfwnode,
      |                      ^~~~~~~~~~~~~~~~~~~~~~~~
drivers/net/usb/smsc95xx.c:1144:12: error: ‘irq_domain_simple_ops’ undeclared (first use in this function); did you mean ‘irq_domain_ops’?
 1144 |             &irq_domain_simple_ops,
      |             ^~~~~~~~~~~~~~~~~~~~~
      |             irq_domain_ops
drivers/net/usb/smsc95xx.c:1144:12: note: each undeclared identifier is reported only once for each function it appears in
drivers/net/usb/smsc95xx.c:1151:12: error: implicit declaration of function ‘irq_create_mapping’; did you mean ‘irq_dispose_mapping’? [-Werror=implicit-function-declaration]
 1151 |   phy_irq = irq_create_mapping(pdata->irqdomain, PHY_HWIRQ);
      |             ^~~~~~~~~~~~~~~~~~
      |             irq_dispose_mapping
drivers/net/usb/smsc95xx.c:1245:2: error: implicit declaration of function ‘irq_domain_remove’ [-Werror=implicit-function-declaration]
 1245 |  irq_domain_remove(pdata->irqdomain);
      |  ^~~~~~~~~~~~~~~~~
drivers/net/usb/smsc95xx.c:1248:2: error: implicit declaration of function ‘irq_domain_free_fwnode’; did you mean ‘irq_domain_get_of_node’? [-Werror=implicit-function-declaration]
 1248 |  irq_domain_free_fwnode(pdata->irqfwnode);
      |  ^~~~~~~~~~~~~~~~~~~~~~
      |  irq_domain_get_of_node
drivers/net/usb/smsc95xx.c: In function ‘smsc95xx_unbind’:
drivers/net/usb/smsc95xx.c:1262:22: error: implicit declaration of function ‘irq_find_mapping’; did you mean ‘irq_dispose_mapping’? [-Werror=implicit-function-declaration]
 1262 |   irq_dispose_mapping(irq_find_mapping(pdata->irqdomain, PHY_HWIRQ));
      |                       ^~~~~~~~~~~~~~~~
      |                       irq_dispose_mapping
The build is for 32-bit x86; the defconfig can be found here:
https://github.com/urjaman/i586con/blob/master/brext/board/linux.config
The build failure also happens with 5.15.62 and 5.15.63.
--
Urja Rannikko
From: Eric Biggers <ebiggers(a)google.com>
CRYPTO_LIB_CHACHA_GENERIC doesn't need to select XOR_BLOCKS. It was
perhaps thought to be needed for __crypto_xor, but that's not the case.
Enabling XOR_BLOCKS is problematic because the XOR_BLOCKS code runs a
benchmark when it is initialized. That causes a boot time regression on
systems that didn't have it enabled before.
Therefore, remove this unnecessary and problematic selection.
Fixes: e56e18985596 ("lib/crypto: add prompts back to crypto libraries")
Cc: stable(a)vger.kernel.org
Signed-off-by: Eric Biggers <ebiggers(a)google.com>
---
I've separated this fix out from the larger patch
https://lore.kernel.org/r/20220725183636.97326-3-ebiggers@kernel.org
that is currently queued in cryptodev.
lib/crypto/Kconfig | 1 -
1 file changed, 1 deletion(-)
diff --git a/lib/crypto/Kconfig b/lib/crypto/Kconfig
index 9ff549f63540fa..47816af9a9d7e1 100644
--- a/lib/crypto/Kconfig
+++ b/lib/crypto/Kconfig
@@ -33,7 +33,6 @@ config CRYPTO_ARCH_HAVE_LIB_CHACHA
config CRYPTO_LIB_CHACHA_GENERIC
tristate
- select XOR_BLOCKS
help
This symbol can be depended upon by arch implementations of the
ChaCha library interface that require the generic code as a
base-commit: 4c612826bec1441214816827979b62f84a097e91
--
2.37.2
The error exit of privcmd_ioctl_dm_op() calls unlock_pages() potentially
with pages being NULL, leading to a NULL pointer dereference.
Additionally, lock_pages() doesn't check whether pin_user_pages_fast()
was completely successful, so potentially not all pages are locked into
memory. This could result in sporadic failures when the related memory
is used in user mode.
Fix all of that by always calling unlock_pages() with the real number
of pinned pages, which will be zero in case pages is NULL, and by
checking that the number of pages pinned by pin_user_pages_fast() matches
the expected number of pages.
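Condensed from the diff below, the idea is that lock_pages() reports how
many pages it actually pinned and the caller always unwinds exactly that
many (a sketch, not standalone code):

	/* In lock_pages(): bail out on failure, leaving *pinned at the number
	 * of pages pinned by earlier iterations so the caller can undo them. */
	page_count = pin_user_pages_fast((unsigned long)kbufs[i].uptr + off * PAGE_SIZE,
					 requested, FOLL_WRITE, pages);
	if (page_count <= 0)
		return page_count ? : -EFAULT;
	*pinned += page_count;

	/* In privcmd_ioctl_dm_op(): the error exit no longer guesses. */
out:
	unlock_pages(pages, pinned);	/* pinned is 0 when pages is NULL */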
Cc: <stable(a)vger.kernel.org>
Fixes: ab520be8cd5d ("xen/privcmd: Add IOCTL_PRIVCMD_DM_OP")
Reported-by: Rustam Subkhankulov <subkhankulov(a)ispras.ru>
Signed-off-by: Juergen Gross <jgross(a)suse.com>
---
V2:
- use "pinned" as parameter for unlock_pages() (Jan Beulich)
- drop label "unlock" again (Jan Beulich)
- add check for complete success of pin_user_pages_fast()
V3:
- continue after partial success of pin_user_pages_fast() (Jan Beulich)
V4:
- fix case of multiple partial successes for one buffer (Jan Beulich)
---
drivers/xen/privcmd.c | 21 +++++++++++----------
1 file changed, 11 insertions(+), 10 deletions(-)
diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
index 3369734108af..e88e8f6f0a33 100644
--- a/drivers/xen/privcmd.c
+++ b/drivers/xen/privcmd.c
@@ -581,27 +581,30 @@ static int lock_pages(
struct privcmd_dm_op_buf kbufs[], unsigned int num,
struct page *pages[], unsigned int nr_pages, unsigned int *pinned)
{
- unsigned int i;
+ unsigned int i, off = 0;
- for (i = 0; i < num; i++) {
+ for (i = 0; i < num; ) {
unsigned int requested;
int page_count;
requested = DIV_ROUND_UP(
offset_in_page(kbufs[i].uptr) + kbufs[i].size,
- PAGE_SIZE);
+ PAGE_SIZE) - off;
if (requested > nr_pages)
return -ENOSPC;
page_count = pin_user_pages_fast(
- (unsigned long) kbufs[i].uptr,
+ (unsigned long)kbufs[i].uptr + off * PAGE_SIZE,
requested, FOLL_WRITE, pages);
- if (page_count < 0)
- return page_count;
+ if (page_count <= 0)
+ return page_count ? : -EFAULT;
*pinned += page_count;
nr_pages -= page_count;
pages += page_count;
+
+ off = (requested == page_count) ? 0 : off + page_count;
+ i += !off;
}
return 0;
@@ -677,10 +680,8 @@ static long privcmd_ioctl_dm_op(struct file *file, void __user *udata)
}
rc = lock_pages(kbufs, kdata.num, pages, nr_pages, &pinned);
- if (rc < 0) {
- nr_pages = pinned;
+ if (rc < 0)
goto out;
- }
for (i = 0; i < kdata.num; i++) {
set_xen_guest_handle(xbufs[i].h, kbufs[i].uptr);
@@ -692,7 +693,7 @@ static long privcmd_ioctl_dm_op(struct file *file, void __user *udata)
xen_preemptible_hcall_end();
out:
- unlock_pages(pages, nr_pages);
+ unlock_pages(pages, pinned);
kfree(xbufs);
kfree(pages);
kfree(kbufs);
--
2.35.3