On 4/2/26 18:13, Bence Csókás wrote:
>
> Hi,
>
> I just came across this commit while researching something else.
> The original patch had too few context lines, so here's the diff with `-U10`.
>
> On 3/18/25 20:22, Daniel Almeida wrote:
>> From: Asahi Lina <lina(a)asahilina.net>
>>
>> Since commit 21aa27ddc582 ("drm/shmem-helper: Switch to reservation
>> lock"), the drm_gem_shmem_vmap and drm_gem_shmem_vunmap functions
>> require that the caller holds the DMA reservation lock for the object.
>> Add lockdep assertions to help validate this.
>
> There were already lockdep assertions...
Good point, I completely missed that.
>
>> Signed-off-by: Asahi Lina <lina(a)asahilina.net>
>> Signed-off-by: Daniel Almeida <daniel.almeida(a)collabora.com>
>> Reviewed-by: Christian König <christian.koenig(a)amd.com>
>> Signed-off-by: Lyude Paul <lyude(a)redhat.com>
>> Link: https://lore.kernel.org/r/20250318-drm-gem-shmem-v1-1-64b96511a84f@collabor…
>> ---
>> drivers/gpu/drm/drm_gem_shmem_helper.c | 4 ++++
>> 1 file changed, 4 insertions(+)
>>
>> diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
>> index aa43265f4f4f..0b41f0346bad 100644
>> --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
>> +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
>> @@ -341,20 +341,22 @@ EXPORT_SYMBOL_GPL(drm_gem_shmem_unpin);
>> *
>> * Returns:
>> * 0 on success or a negative error code on failure.
>> */
>> int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
>> struct iosys_map *map)
>> {
>> struct drm_gem_object *obj = &shmem->base;
>> int ret = 0;
>>
>> + dma_resv_assert_held(obj->resv);
>> +
>> if (drm_gem_is_imported(obj)) {
>> ret = dma_buf_vmap(obj->dma_buf, map);
>> } else {
>> pgprot_t prot = PAGE_KERNEL;
>>
>> dma_resv_assert_held(shmem->base.resv);
>
> ... right here, and
>
>> if (refcount_inc_not_zero(&shmem->vmap_use_count)) {
>> iosys_map_set_vaddr(map, shmem->vaddr);
>> return 0;
>> @@ -401,20 +403,22 @@ EXPORT_SYMBOL_GPL(drm_gem_shmem_vmap_locked);
>> * drops to zero.
>> *
>> * This function hides the differences between dma-buf imported and natively
>> * allocated objects.
>> */
>> void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
>> struct iosys_map *map)
>> {
>> struct drm_gem_object *obj = &shmem->base;
>>
>> + dma_resv_assert_held(obj->resv);
>> +
>> if (drm_gem_is_imported(obj)) {
>> dma_buf_vunmap(obj->dma_buf, map);
>> } else {
>> dma_resv_assert_held(shmem->base.resv);
>
> ...here.
>
>> if (refcount_dec_and_test(&shmem->vmap_use_count)) {
>> vunmap(shmem->vaddr);
>> shmem->vaddr = NULL;
>>
>> drm_gem_shmem_unpin_locked(shmem);
>
> Or were those insufficient for some reason? If so, should we keep both
> of them, or should the older ones have been removed?
The dma_buf_vmap()/dma_buf_vunmap() functions require the caller to be holding the reservation lock as well.
So it kind of makes sense to move the assertions to the beginning of the functions.
Regards,
Christian.
>
> Bence
Hi,
The recent introduction of heaps in the optee driver [1] made it possible
to build heaps as modules.
It's generally a good idea if possible, including for the already
existing system and CMA heaps.
The system one is pretty trivial, the CMA one is a bit more involved,
especially since we have a call from kernel/dma/contiguous.c to the CMA
heap code. This was solved by turning the logic around and making the
CMA heap call into the contiguous DMA code.
Let me know what you think,
Maxime
1: https://lore.kernel.org/dri-devel/20250911135007.1275833-4-jens.wiklander@l…
Signed-off-by: Maxime Ripard <mripard(a)kernel.org>
---
Changes in v4:
- Fix compilation failure
- Rework to take into account OF_RESERVED_MEM
- Fix regression making the default CMA area disappear if not created
through the DT
- Added some documentation and comments
- Link to v3: https://lore.kernel.org/r/20260303-dma-buf-heaps-as-modules-v3-0-24344812c7…
Changes in v3:
- Squashed cma_get_name and cma_alloc/release patches
- Fixed typo in Export dev_get_cma_area commit title
- Fixed compilation failure with DMA_CMA but not OF_RESERVED_MEM
- Link to v2: https://lore.kernel.org/r/20260227-dma-buf-heaps-as-modules-v2-0-454aee7e06…
Changes in v2:
- Collect tags
- Don't export dma_contiguous_default_area anymore, but export
dev_get_cma_area instead
- Mentioned that heap modules can't be removed
- Link to v1: https://lore.kernel.org/r/20260225-dma-buf-heaps-as-modules-v1-0-2109225a09…
---
Maxime Ripard (8):
dma: contiguous: Turn heap registration logic around
dma: contiguous: Make dev_get_cma_area() a proper function
dma: contiguous: Make dma_contiguous_default_area static
dma: contiguous: Export dev_get_cma_area()
mm: cma: Export cma_alloc(), cma_release() and cma_get_name()
dma-buf: heaps: Export mem_accounting parameter
dma-buf: heaps: cma: Turn the heap into a module
dma-buf: heaps: system: Turn the heap into a module
drivers/dma-buf/dma-heap.c | 1 +
drivers/dma-buf/heaps/Kconfig | 4 +--
drivers/dma-buf/heaps/cma_heap.c | 22 +++----------
drivers/dma-buf/heaps/system_heap.c | 5 +++
include/linux/dma-buf/heaps/cma.h | 16 ---------
include/linux/dma-map-ops.h | 14 ++++----
kernel/dma/contiguous.c | 66 +++++++++++++++++++++++++++++++++----
mm/cma.c | 3 ++
8 files changed, 82 insertions(+), 49 deletions(-)
---
base-commit: c081b71f11732ad2c443f170ab19c3ebe8a1a422
change-id: 20260225-dma-buf-heaps-as-modules-1034b3ec9f2a
Best regards,
--
Maxime Ripard <mripard(a)kernel.org>
Hi,
I know I'm late to the party here...
Like John, I'm also not very close to this stuff any more, but I agree
with the other discussions: makes sense for this to be a separate
heap, and cc_shared makes sense too.
I'm not clear why the heap depends on !CONFIG_HIGHMEM, but I also
don't know anything about SEV/TDX.
-Brian
On Wed, Mar 25, 2026 at 08:23:50PM +0000, Jiri Pirko wrote:
> From: Jiri Pirko <jiri(a)nvidia.com>
>
> Confidential computing (CoCo) VMs/guests, such as AMD SEV and Intel TDX,
> run with private/encrypted memory which creates a challenge
> for devices that do not support DMA to it (no TDISP support).
>
> For kernel-only DMA operations, swiotlb bounce buffering provides a
> transparent solution by copying data through shared memory.
> However, the only way to get this memory into userspace is via the DMA
> API's dma_alloc_pages()/dma_mmap_pages() type interfaces which limits
> the use of the memory to a single DMA device, and is incompatible with
> pin_user_pages().
>
> These limitations are particularly problematic for the RDMA subsystem
> which makes heavy use of pin_user_pages() and expects flexible memory
> usage between many different DMA devices.
>
> This patch series enables userspace to explicitly request shared
> (decrypted) memory allocations from a new dma-buf system_cc_shared heap.
> Userspace can mmap this memory and pass the dma-buf fd to other
> existing importers such as RDMA or DRM devices to access the
> memory. The DMA API is improved to allow the dma heap exporter to DMA
> map the shared memory to each importing device.
>
> Based on dma-mapping-for-next e7442a68cd1ee797b585f045d348781e9c0dde0d
>
> Jiri Pirko (2):
> dma-mapping: introduce DMA_ATTR_CC_SHARED for shared memory
> dma-buf: heaps: system: add system_cc_shared heap for explicitly
> shared memory
>
> drivers/dma-buf/heaps/system_heap.c | 103 ++++++++++++++++++++++++++--
> include/linux/dma-mapping.h | 10 +++
> include/trace/events/dma.h | 3 +-
> kernel/dma/direct.h | 14 +++-
> kernel/dma/mapping.c | 13 +++-
> 5 files changed, 132 insertions(+), 11 deletions(-)
>
> --
> 2.51.1
>
On 4/2/26 10:36, Ekansh Gupta wrote:
> On 3/9/2026 12:29 PM, Ekansh Gupta wrote:
>>
>> On 2/24/2026 2:42 PM, Christian König wrote:
>>> On 2/23/26 20:09, Ekansh Gupta wrote:
>>>>
>>>> Add PRIME dma-buf import support for QDA GEM buffer objects and integrate
>>>> it with the existing per-process memory manager and IOMMU device model.
>>>>
>>>> The implementation extends qda_gem_obj to represent imported dma-bufs,
>>>> including dma_buf references, attachment state, scatter-gather tables
>>>> and an imported DMA address used for DSP-facing book-keeping. The
>>>> qda_gem_prime_import() path handles reimports of buffers originally
>>>> exported by QDA as well as imports of external dma-bufs, attaching them
>>>> to the assigned IOMMU device
>>> That is usually an absolutely clear NO-GO for DMA-bufs. Where exactly in the code is that?
>> dma_buf_attach* to compute-cb iommu devices is critical for DSPs to access the buffer.
>> This is needed if the buffer is exported by anyone other than QDA (say, the system heap). If this is not
>> the correct way, what should be the right way here? On the current fastrpc driver also,
>> the DMABUF is getting attached with iommu device[1] due to the same requirement.
>>
>> [1] https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git/tree/dr…
>
> Hi Christian,
>
> Do you have any suggestions for the shared requirements?
Well, I don't fully understand what you are trying to do with the iommu. Usually it is the job of the exporter to provide the importer with DMA addresses which are valid for its device structure; that includes IOMMU mapping.
Can you elaborate on what exactly this iommu group is and why you have to attach the imported buffers to it, how that attachment works, and how lifetime is managed?
Regards,
Christian.
>
> I'm reworking the next version and currently don't see any other way
> to handle dma_buf_attach* cases.
>
> //Ekansh
>
>>>> and mapping them through the memory manager
>>>> for DSP access. The GEM free path is updated to unmap and detach
>>>> imported buffers while preserving the existing behaviour for locally
>>>> allocated memory.
>>>>
>>>> The PRIME fd-to-handle path is implemented in qda_prime_fd_to_handle(),
>>>> which records the calling drm_file in a driver-private import context
>>>> before invoking the core DRM helpers. The GEM import callback retrieves
>>>> this context to ensure that an IOMMU device is assigned to the process
>>>> and that imported buffers follow the same per-process IOMMU selection
>>>> rules as natively allocated GEM objects.
>>>>
>>>> This patch prepares the driver for interoperable buffer sharing between
>>>> QDA and other dma-buf capable subsystems while keeping IOMMU mapping and
>>>> lifetime handling consistent with the existing GEM allocation flow.
>>>>
>>>> Signed-off-by: Ekansh Gupta <ekansh.gupta(a)oss.qualcomm.com>
>>> ...
>>>
>>>> @@ -15,23 +16,29 @@ static int validate_gem_obj_for_mmap(struct qda_gem_obj *qda_gem_obj)
>>>> qda_err(NULL, "Invalid GEM object size\n");
>>>> return -EINVAL;
>>>> }
>>>> - if (!qda_gem_obj->iommu_dev || !qda_gem_obj->iommu_dev->dev) {
>>>> - qda_err(NULL, "Allocated buffer missing IOMMU device\n");
>>>> - return -EINVAL;
>>>> - }
>>>> - if (!qda_gem_obj->iommu_dev->dev) {
>>>> - qda_err(NULL, "Allocated buffer missing IOMMU device\n");
>>>> - return -EINVAL;
>>>> - }
>>>> - if (!qda_gem_obj->virt) {
>>>> - qda_err(NULL, "Allocated buffer missing virtual address\n");
>>>> - return -EINVAL;
>>>> - }
>>>> - if (qda_gem_obj->dma_addr == 0) {
>>>> - qda_err(NULL, "Allocated buffer missing DMA address\n");
>>>> - return -EINVAL;
>>>> + if (qda_gem_obj->is_imported) {
>>> Absolutely clear NAK to that. Imported buffers *can't* be mmaped through the importer!
>>>
>>> Userspace needs to mmap() them through the exporter.
>>>
>>> If you absolutely have to map them through the importer for uAPI backward compatibility then there is dma_buf_mmap() for that, but this is clearly not the case here.
>>>
>>> ...
>> Okay, the requirement is slightly different here. Any buffer which is not allocated using the
>> QDA GEM interface needs to be attached to the iommu device for that particular process to
>> enable DSP for the access. I should not call it `mmap` instead it should be called importing the
>> buffer to a particular iommu context bank. With this definition, is it fine to keep it this way? Or
>> should the dma_buf_attach* calls be moved to some other place?
>>>> +static int qda_memory_manager_map_imported(struct qda_memory_manager *mem_mgr,
>>>> + struct qda_gem_obj *gem_obj,
>>>> + struct qda_iommu_device *iommu_dev)
>>>> +{
>>>> + struct scatterlist *sg;
>>>> + dma_addr_t dma_addr;
>>>> + int ret = 0;
>>>> +
>>>> + if (!gem_obj->is_imported || !gem_obj->sgt || !iommu_dev) {
>>>> + qda_err(NULL, "Invalid parameters for imported buffer mapping\n");
>>>> + return -EINVAL;
>>>> + }
>>>> +
>>>> + gem_obj->iommu_dev = iommu_dev;
>>>> +
>>>> + sg = gem_obj->sgt->sgl;
>>>> + if (sg) {
>>>> + dma_addr = sg_dma_address(sg);
>>>> + dma_addr += ((u64)iommu_dev->sid << 32);
>>>> +
>>>> + gem_obj->imported_dma_addr = dma_addr;
>>> Well that looks like you are only using the first DMA address from the imported sgt. What about the others?
>> I might have a proper approach for this now; will update in the next spin.
>>> Regards,
>>> Christian.
>
On 16.03.2026 13:08, Maxime Ripard wrote:
> On Wed, Mar 11, 2026 at 08:18:28AM -0500, Andrew Davis wrote:
>> On 3/11/26 5:19 AM, Albert Esteve wrote:
>>> On Tue, Mar 10, 2026 at 4:34 PM Andrew Davis <afd(a)ti.com> wrote:
>>>> On 3/6/26 4:36 AM, Albert Esteve wrote:
>>>>> Expose DT coherent reserved-memory pools ("shared-dma-pool"
>>>>> without "reusable") as dma-buf heaps, creating one heap per
>>>>> region so userspace can allocate from the exact device-local
>>>>> pool intended for coherent DMA.
>>>>>
>>>>> This is a missing backend in the long-term effort to steer
>>>>> userspace buffer allocations (DRM, v4l2, dma-buf heaps)
>>>>> through heaps for clearer cgroup accounting. CMA and system
>>>>> heaps already exist; non-reusable coherent reserved memory
>>>>> did not.
>>>>>
>>>>> The heap binds the heap device to each memory region so
>>>>> coherent allocations use the correct dev->dma_mem, and
>>>>> it defers registration until module_init when normal
>>>>> allocators are available.
>>>>>
>>>>> Signed-off-by: Albert Esteve <aesteve(a)redhat.com>
>>>>> ---
>>>>> drivers/dma-buf/heaps/Kconfig | 9 +
>>>>> drivers/dma-buf/heaps/Makefile | 1 +
>>>>> drivers/dma-buf/heaps/coherent_heap.c | 414 ++++++++++++++++++++++++++++++++++
>>>>> 3 files changed, 424 insertions(+)
>>>>>
>>>>> (...)
>>>> You are doing this DMA allocation using a non-DMA pseudo-device (heap_dev).
>>>> This is why you need to do that dma_coerce_mask_and_coherent(64) nonsense, you
>>>> are doing a DMA alloc for the CPU itself. This might still work, but only if
>>>> dma_map_sgtable() can handle swiotlb/iommu for all attaching devices at map
>>>> time.
>>> The concern is valid. We're allocating via a synthetic device, which
>>> ties the allocation to that device's DMA domain. I looked deeper into
>>> this trying to address the concern.
>>>
>>> The approach works because dma_map_sgtable() handles both
>>> dma_map_direct and use_dma_iommu cases in __dma_map_sg_attrs(). For
>>> each physical address in the sg_table (extracted via sg_phys()), it
>>> creates device-specific DMA mappings:
>>> - For direct mapping: it checks if the address is directly accessible
>>> (dma_capable()), and if not, it falls back to swiotlb.
>>> - For IOMMU: it creates mappings that allow the device to access
>>> physical addresses.
>>>
>>> This means every attached device gets its own device-specific DMA
>>> mapping, properly handling cases where the physical addresses are
>>> inaccessible or have DMA constraints.
>>>
>> While this means it might still "work" it won't always be ideal. Take
>> the case where the consuming device(s) have a 32bit address restriction,
>> if the allocation was done using the real devices then the backing buffer
>> itself would be allocated in <32bit mem. Whereas here the allocation
>> could end up in >32bit mem, as the CPU/synthetic device supports that.
>> Then each mapping device would instead get a bounce buffer.
>>
>> (this example might not be great as we usually know the address of
>> carveout/reserved memory regions, but substitute in whatever restriction
>> makes more sense)
>>
>> These non-reusable carveouts tend to be made for some specific device, and
>> they are made specifically because that device has some memory restriction.
>> So we might run into the situation above more than one would expect.
>>
>> Not a blocker here, but just something worth thinking on.
> As I detailed in the previous version [1], the main idea behind that work
> is to let frameworks and drivers get rid of dma_alloc_attrs and
> allocate from the heaps instead.
>
> Robin was saying he wasn't comfortable with exposing this heap to
> userspace, and we're saying here that maybe this might not always work
> anyway (or at least that we couldn't test it fully).
>
> Maybe the best thing is to defer this series until we are at a point
> where we can start enabling the "heap allocations" in frameworks then?
> Hopefully we will have hardware to test it with by then, and we might
> not even need to expose it to userspace at all but only to the kernel.
>
> What do you think?
IMHO a good idea. Maybe an in-kernel heap for the coherent allocations will
be just enough.
Best regards
--
Marek Szyprowski, PhD
Samsung R&D Institute Poland
Hello Jiri,
On Thu, 26 Mar 2026 at 00:53, Jiri Pirko <jiri(a)resnulli.us> wrote:
>
> From: Jiri Pirko <jiri(a)nvidia.com>
>
> Confidential computing (CoCo) VMs/guests, such as AMD SEV and Intel TDX,
> run with private/encrypted memory which creates a challenge
> for devices that do not support DMA to it (no TDISP support).
>
> For kernel-only DMA operations, swiotlb bounce buffering provides a
> transparent solution by copying data through shared memory.
> However, the only way to get this memory into userspace is via the DMA
> API's dma_alloc_pages()/dma_mmap_pages() type interfaces which limits
> the use of the memory to a single DMA device, and is incompatible with
> pin_user_pages().
>
> These limitations are particularly problematic for the RDMA subsystem
> which makes heavy use of pin_user_pages() and expects flexible memory
> usage between many different DMA devices.
>
> This patch series enables userspace to explicitly request shared
> (decrypted) memory allocations from a new dma-buf system_cc_shared heap.
> Userspace can mmap this memory and pass the dma-buf fd to other
> existing importers such as RDMA or DRM devices to access the
> memory. The DMA API is improved to allow the dma heap exporter to DMA
> map the shared memory to each importing device.
Thank you for the patch series, it looks good to me.
Marek, if you are ok, please could you take it through your tree, with my
Acked-by: Sumit Semwal <sumit.semwal(a)linaro.org>
Best,
Sumit.
>
> Based on dma-mapping-for-next e7442a68cd1ee797b585f045d348781e9c0dde0d
>
> Jiri Pirko (2):
> dma-mapping: introduce DMA_ATTR_CC_SHARED for shared memory
> dma-buf: heaps: system: add system_cc_shared heap for explicitly
> shared memory
>
> drivers/dma-buf/heaps/system_heap.c | 103 ++++++++++++++++++++++++++--
> include/linux/dma-mapping.h | 10 +++
> include/trace/events/dma.h | 3 +-
> kernel/dma/direct.h | 14 +++-
> kernel/dma/mapping.c | 13 +++-
> 5 files changed, 132 insertions(+), 11 deletions(-)
>
> --
> 2.51.1
>