Hi,
The following APIs are needed for us to support the legacy Tegra
memory manager for devices ("NvMap") on top of the *DMA mapping API*.
New API:
->iova_alloc(): To allocate IOVA area.
->iova_alloc_at(): To allocate IOVA area at specific address.
->iova_free(): To free IOVA area.
->map_page_at(): To map page at specific IOVA.
misc:
->iova_get_free_total(): To return how much IOVA is available totally.
->iova_get_free_max(): To return the size of biggest IOVA area.
Although NvMap itself will be replaced soon, there are cases covered by
the above APIs where we need to specify the IOVA explicitly.
(1) HWAs may require a specific address for a special purpose, like a reset vector.
(2) IOVA linear mapping: ex: [RFC 5/5] ARM: dma-mapping: Introduce
dma_map_linear_attrs() for IOVA linear map
(3) To support different heaps, and to have allocation and mapping
done independently.
Some of these could be supported by creating different mappings, but
currently a device can have only a single contiguous mapping, and we
cannot specify an address inside a map since all IOVA allocation is
done implicitly now.
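For illustration, a rough usage sketch (not from the patches themselves;
the dma_* wrapper names below are guesses at how the new dma_map_ops
entries might be exposed) for a device whose HWA needs a page mapped at a
fixed IOVA, e.g. a reset vector:

/*
 * Hypothetical wrappers around the proposed ops; the real prototypes live
 * in the patches, this only shows the intended call flow.
 */
#include <linux/dma-mapping.h>

static int example_map_reset_vector(struct device *dev, struct page *page,
				    dma_addr_t iova, size_t size)
{
	dma_addr_t addr;

	/* Reserve the exact IOVA range the hardware expects (->iova_alloc_at). */
	addr = dma_iova_alloc_at(dev, iova, size);
	if (dma_mapping_error(dev, addr))
		return -ENOMEM;

	/* Back the reserved range with an actual page (->map_page_at). */
	addr = dma_map_page_at(dev, page, iova, 0, size, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, addr)) {
		dma_iova_free(dev, iova, size);	/* ->iova_free */
		return -ENOMEM;
	}

	return 0;
}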
This is the revised version of:
http://lists.linaro.org/pipermail/linaro-mm-sig/2012-May/001947.html
http://lists.linaro.org/pipermail/linaro-mm-sig/2012-May/001948.html
http://lists.linaro.org/pipermail/linaro-mm-sig/2012-May/001949.html
Any comment would be really appreciated.
Hiroshi Doyu (5):
ARM: dma-mapping: New dma_map_ops->iova_get_free_{total,max}
functions
ARM: dma-mapping: New dma_map_ops->iova_{alloc,free}() functions
ARM: dma-mapping: New dma_map_ops->iova_alloc*_at* function
ARM: dma-mapping: New dma_map_ops->map_page*_at* function
ARM: dma-mapping: Introduce dma_map_linear_attrs() for IOVA linear
map
arch/arm/include/asm/dma-mapping.h | 55 +++++++++++++
arch/arm/mm/dma-mapping.c | 124 ++++++++++++++++++++++++++++++
include/asm-generic/dma-mapping-common.h | 20 +++++
include/linux/dma-mapping.h | 14 ++++
4 files changed, 213 insertions(+), 0 deletions(-)
--
1.7.5.4
From: Rob Clark <rob(a)ti.com>
We never really clarified whether unmap can be done in atomic context.
But since mapping might require sleeping, a mutex is implied for
synchronizing mapping/unmapping, so unmap may sleep as well. Add
a might_sleep() to clarify this.
Signed-off-by: Rob Clark <rob(a)ti.com>
Acked-by: Daniel Vetter <daniel.vetter(a)ffwll.ch>
---
drivers/base/dma-buf.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/base/dma-buf.c b/drivers/base/dma-buf.c
index c30f3e1..877eacb 100644
--- a/drivers/base/dma-buf.c
+++ b/drivers/base/dma-buf.c
@@ -298,6 +298,8 @@ void dma_buf_unmap_attachment(struct dma_buf_attachment *attach,
struct sg_table *sg_table,
enum dma_data_direction direction)
{
+ might_sleep();
+
if (WARN_ON(!attach || !attach->dmabuf || !sg_table))
return;
--
1.7.9.5
Hello everyone,
This patchset adds support for DMABUF [2] importing and exporting to the V4L2
stack. The importer and exporter parts were merged because the DMA mapping
redesign [3] was scheduled for merge to mainline.
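For reference, a minimal userspace sketch of the importer side (my own
example, not part of the patchset; field names follow the interface as it
was eventually merged and may differ slightly in this revision): queueing a
buffer backed by a dmabuf file descriptor on a single-planar capture queue.

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

static int queue_dmabuf(int video_fd, int dmabuf_fd, unsigned int index)
{
	struct v4l2_buffer buf;

	memset(&buf, 0, sizeof(buf));
	buf.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
	buf.memory = V4L2_MEMORY_DMABUF;	/* memory type added by this series */
	buf.index  = index;
	buf.m.fd   = dmabuf_fd;			/* fd exported by another subsystem */

	return ioctl(video_fd, VIDIOC_QBUF, &buf);
}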
v8:
- rebased on 3.6-rc1
- merged importer and exporter patchsets
- fixed missing fields in v4l2_plane32 and v4l2_buffer32 structs
- fixed typos/style in documentation
- significant reduction of warnings from checkpatch.pl
- fixed STREAMOFF issues reported by Dima Zavin [4] by adding
__vb2_dqbuf helper to vb2-core
- DC fails if userptr is not correctly aligned
- add support for DMA attributes in DC
- add support for buffers with no kernel mapping
- add reference counting on device from allocator context
- dummy support for mmap
- use dma_get_sgtable, drop vb2_dc_kaddr_to_pages hack and
vb2_dc_get_base_sgt helper
v7:
- support for V4L2_MEMORY_DMABUF in v4l2-compat-ioctl32.c
- cosmetic fixes to the documentation
- added importing for vmalloc because vmap support in dmabuf was
pull-requested for 3.5
- support for dmabuf importing for VIVI
- resurrect allocation of dma-contig context
- remove reference of alloc_ctx in dma-contig buffer
- use sg_alloc_table_from_pages
- fix DMA scatterlist calls to use orig_nents instead of nents
- fix memleak in vb2_dc_sgt_foreach_page (use orig_nents instead of nents)
v6:
- fixed missing entry in v4l2_memory_names
- fixed a bug occurring after get_user_pages failure
- fixed a bug caused by using invalid vma for get_user_pages
- prepare/finish no longer call dma_sync for dmabuf buffers
v5:
- removed change of importer/exporter behaviour
- fixed vb2_dc_pages_to_sgt based on Laurent's hints
- changed pin/unpin wording to lock/unlock in the documentation
v4:
- rebased on mainline 3.4-rc2
- included missing importing support for s5p-fimc and s5p-tv
- added patch for changing map/unmap for importers
- fixes to Documentation part
- coding style fixes
- pairing {map/unmap}_dmabuf in vb2-core
- fixed variable types and semantics of arguments in videobuf2-dma-contig.c
v3:
- rebased on mainline 3.4-rc1
- split 'code refactor' patch to multiple smaller patches
- squashed fixes to Sumit's patches
- patchset is no longer dependent on 'DMA mapping redesign'
- separated path for handling IO and non-IO mappings
- add documentation for DMABUF importing to V4L
- removed all DMABUF exporter related code
- removed usage of dma_get_pages extension
v2:
- extended VIDIOC_EXPBUF argument from integer memoffset to struct
v4l2_exportbuffer
- added a patch that breaks the DMABUF spec on (un)map_attachment callbacks but allows
working with the existing implementation of DMABUF PRIME in DRM
- all dma-contig code refactoring patches were squashed
- bugfixes
v1: List of changes since [1].
- support for the DMA API extension dma_get_pages; the function is used to retrieve
pages used to create a DMA mapping.
- small fixes/code cleanup to videobuf2
- added prepare and finish callbacks to vb2 allocators; they are used to keep
consistency between DMA and CPU access to the memory (by Marek Szyprowski)
- support for exporting of DMABUF buffer in V4L2 and Videobuf2, originated from
[3].
- support for dma-buf exporting in vb2-dma-contig allocator
- support for DMABUF for s5p-tv and s5p-fimc (capture interface) drivers,
originated from [3]
- changed handling for userptr buffers (by Marek Szyprowski, Andrzej
Pietrasiewicz)
- let the mmap method use the dma_mmap_writecombine call (by Marek Szyprowski)
[1] http://thread.gmane.org/gmane.linux.drivers.video-input-infrastructure/4296…
[2] https://lkml.org/lkml/2011/12/26/29
[3] http://thread.gmane.org/gmane.linux.kernel.cross-arch/12819
[4] http://article.gmane.org/gmane.linux.drivers.video-input-infrastructure/497…
Laurent Pinchart (2):
v4l: vb2-dma-contig: Shorten vb2_dma_contig prefix to vb2_dc
v4l: vb2-dma-contig: Reorder functions
Marek Szyprowski (5):
v4l: vb2: add prepare/finish callbacks to allocators
v4l: vb2-dma-contig: add prepare/finish to dma-contig allocator
v4l: vb2-dma-contig: let mmap method to use dma_mmap_coherent call
media: vb2: fail if user ptr buffer is not correctly aligned
v4l: vb2: add support for DMA_ATTR_NO_KERNEL_MAPPING
Sumit Semwal (4):
v4l: Add DMABUF as a memory type
v4l: vb2: add support for shared buffer (dma_buf)
v4l: vb: remove warnings about MEMORY_DMABUF
v4l: vb2-dma-contig: add support for dma_buf importing
Tomasz Stanislawski (15):
Documentation: media: description of DMABUF importing in V4L2
v4l: vb2-dma-contig: remove reference of alloc_ctx from a buffer
v4l: vb2-dma-contig: add support for scatterlist in userptr mode
v4l: vb2-vmalloc: add support for dmabuf importing
v4l: vivi: support for dmabuf importing
v4l: s5p-tv: mixer: support for dmabuf importing
v4l: s5p-fimc: support for dmabuf importing
Documentation: media: description of DMABUF exporting in V4L2
v4l: add buffer exporting via dmabuf
v4l: vb2: add buffer exporting via dmabuf
v4l: vb2-dma-contig: add support for DMABUF exporting
v4l: vb2-dma-contig: add reference counting for a device from
allocator context
v4l: s5p-fimc: support for dmabuf exporting
v4l: s5p-tv: mixer: support for dmabuf exporting
v4l: s5p-mfc: support for dmabuf exporting
Documentation/DocBook/media/v4l/compat.xml | 7 +
Documentation/DocBook/media/v4l/io.xml | 183 +++++
Documentation/DocBook/media/v4l/v4l2.xml | 1 +
.../DocBook/media/v4l/vidioc-create-bufs.xml | 3 +-
Documentation/DocBook/media/v4l/vidioc-expbuf.xml | 223 ++++++
Documentation/DocBook/media/v4l/vidioc-qbuf.xml | 15 +
Documentation/DocBook/media/v4l/vidioc-reqbufs.xml | 47 +-
drivers/media/video/Kconfig | 1 +
drivers/media/video/atmel-isi.c | 2 +-
drivers/media/video/blackfin/bfin_capture.c | 2 +-
drivers/media/video/marvell-ccic/mcam-core.c | 3 +-
drivers/media/video/mx2_camera.c | 2 +-
drivers/media/video/mx2_emmaprp.c | 2 +-
drivers/media/video/mx3_camera.c | 2 +-
drivers/media/video/s5p-fimc/Kconfig | 1 +
drivers/media/video/s5p-fimc/fimc-capture.c | 11 +-
drivers/media/video/s5p-fimc/fimc-core.c | 2 +-
drivers/media/video/s5p-fimc/fimc-lite.c | 2 +-
drivers/media/video/s5p-g2d/g2d.c | 2 +-
drivers/media/video/s5p-jpeg/jpeg-core.c | 2 +-
drivers/media/video/s5p-mfc/s5p_mfc.c | 5 +-
drivers/media/video/s5p-mfc/s5p_mfc_dec.c | 18 +
drivers/media/video/s5p-mfc/s5p_mfc_enc.c | 18 +
drivers/media/video/s5p-tv/Kconfig | 1 +
drivers/media/video/s5p-tv/mixer_video.c | 14 +-
drivers/media/video/sh_mobile_ceu_camera.c | 2 +-
drivers/media/video/v4l2-compat-ioctl32.c | 19 +
drivers/media/video/v4l2-dev.c | 1 +
drivers/media/video/v4l2-ioctl.c | 16 +
drivers/media/video/videobuf-core.c | 4 +
drivers/media/video/videobuf2-core.c | 275 +++++++-
drivers/media/video/videobuf2-dma-contig.c | 719 ++++++++++++++++++--
drivers/media/video/videobuf2-memops.c | 40 --
drivers/media/video/videobuf2-vmalloc.c | 56 ++
drivers/media/video/vivi.c | 2 +-
drivers/staging/media/dt3155v4l/dt3155v4l.c | 2 +-
include/linux/videodev2.h | 33 +
include/media/v4l2-ioctl.h | 2 +
include/media/videobuf2-core.h | 36 +
include/media/videobuf2-dma-contig.h | 4 +-
include/media/videobuf2-memops.h | 5 -
41 files changed, 1639 insertions(+), 146 deletions(-)
create mode 100644 Documentation/DocBook/media/v4l/vidioc-expbuf.xml
--
1.7.9.5
Hi Linus,
I would like to ask for pulling yet another patch for the ARM dma-mapping
subsystem into the Linux v3.6 kernel tree. This patch fixes a potential
memory leak in the ARM dma-mapping code.
The following changes since commit 979570e02981d4a8fc20b3cc8fd651856c98ee9d:
Linux 3.6-rc7 (2012-09-23 18:10:57 -0700)
are available in the git repository at:
git://git.linaro.org/people/mszyprowski/linux-dma-mapping.git fixes-for-3.6
for you to fetch changes up to ec10665cbf271fb1f60daeb194ad4f2cdcdc59d9:
ARM: dma-mapping: Fix potential memory leak in atomic_pool_init() (2012-09-24 08:35:03 +0200)
----------------------------------------------------------------
Sachin Kamat (1):
ARM: dma-mapping: Fix potential memory leak in atomic_pool_init()
arch/arm/mm/dma-mapping.c | 2 ++
1 file changed, 2 insertions(+)
Hi Maarten!
Broadening the audience a bit..
On 9/14/12 9:12 AM, Maarten Lankhorst wrote:
> Op 13-09-12 23:00, Thomas Hellstrom schreef:
>> On 09/13/2012 07:13 PM, Maarten Lankhorst wrote:
>>> Hey
>>>
>>> Op 13-09-12 18:41, Thomas Hellstrom schreef:
>>>> On 09/13/2012 05:19 PM, Maarten Lankhorst wrote:
>>>>> Hey,
>>>>>
>>>>> Op 12-09-12 15:28, Thomas Hellstrom schreef:
>>>>>> On 09/12/2012 02:48 PM, Maarten Lankhorst wrote:
>>>>>>> Hey Thomas,
>>>>>>>
>>>>>>> I'm playing around with moving reservations from ttm to global, but how
>>>>>>> ttm is handling reservations is getting in the way. The code wants to move
>>>>>>> the bo from the lru lock at the same time a reservation is made, but that
>>>>>>> seems to be slightly too strict. It would really help me if that guarantee
>>>>>>> is removed.
>>>>>> Hi, Maarten.
>>>>>>
>>>>>> Removing that restriction is not really possible at the moment.
>>>>>> Also the memory accounting code depends on this, and may cause reservations
>>>>>> in the most awkward places. Since these reservations don't have a ticket
>>>>>> they may and will cause deadlocks. So in short the restriction is there
>>>>>> to avoid deadlocks caused by ticketless reservations.
>>>>> I have finished the lockdep annotations now which seems to catch almost
>>>>> all abuse I threw at it, so I'm feeling slightly more confident about moving
>>>>> the locking order and reservations around.
>>>> Maarten, moving reservations in TTM out of the lru lock is incorrect as the code is
>>>> written now. If we want to move it out we need something for ticketless reservations
>>>>
>>>> I've been thinking of having a global hash table of tickets with the task struct pointer as the key,
>>>> but even then, we'd need to be able to handle EBUSY errors on every operation that might try to
>>>> reserve a buffer.
>>>>
>>>> The fact that lockdep doesn't complain isn't enough. There *will* be deadlock use-cases when TTM is handed
>>>> the right data-set.
>>>>
>>>> Isn't there a way that a subsystem can register a callback to be performed to remove stuff from LRU and
>>>> to take a pre-reservation lock?
>>> What if multiple subsystems need those? You will end up with a deadlock again.
>>>
>>> I think it would be easier to change the code in ttm_bo.c to not assume the first
>>> item on the lru list is really the least recently used, and assume the first item
>>> that can be reserved without blocking IS the least recently used instead.
>> So what would happen then is that we'd spin on the first item on the LRU list, since
>> when reserving we must release the LRU lock, and if reserving fails, we thus
>> need to restart LRU traversal. Typically after a schedule(). That's bad.
>>
>> So let's take a step back and analyze why the LRU lock has become a problem.
>> From what I can tell, it's because you want to use per-object lock when reserving instead of a
>> global reservation lock (that TTM could use as the LRU lock). Is that correct?
>> and in that case, in what situation do you envision such a global lock being contended
>> to the extent that it hurts performance?
>>
>>>>> Lockdep WILL complain about trying to use multiple tickets, doing ticketed
>>>>> and unticketed blocking reservations mixed, etc.
>>>>>
>>>>> I want to remove the global fence_lock and make it a per buffer lock, with some
>>>>> lockdep annotations it's perfectly legal to grab obj->fence_lock and obj2->fence_lock
>>>>> if you have a reservation, but it should complain loudly about trying to take 2 fence_locks
>>>>> at the same time without a reservation.
>>>> Yes, TTM was previously using per buffer fence locks, and that works fine from a deadlock perspective, but
>>>> it hurts performance. Fencing 200 buffers in a command submission (google-earth for example) will mean
>>>> 198 unnecessary locks, each discarding the processor pipelines. Locking is a *slow* operation, particularly
>>>> on systems with many processors, and I don't think it's a good idea to change that back, without analyzing
>>>> the performance impact. There are reasons people are writing stuff like RCU to avoid locking...
>>> So why don't we simply use RCU for fence pointers and get rid of the fence locking? :D
>>> danvet originally suggested it as a joke but if you think about it, it would make a lot of sense for this usecase.
>> I thought of that before, but the problem is you'd still need a spinlock to change the buffer's fence pointer,
>> even if reading it becomes quick.
> Actually, I changed lockdep annotations a bit to distinguish between the
> cases where ttm_bo_wait is called without reservation, and ttm_bo_wait
> is called with, as far as I can see there are only 2 places that do it without,
> at least if I converted my git tree properly..
>
> http://cgit.freedesktop.org/~mlankhorst/linux/log/?h=v10-wip
>
> First one is nouveau_bo_vma_del, this can be fixed easily.
> Second one is ttm_bo_cleanup_refs and ttm_bo_cleanup_refs_or_queue,
> if reservation is done first before ttm_bo_wait, the fence_lock could be
> dropped entirely by adding smp_mb() in reserve and unreserve, functionally
> there would be no difference. So if you can verify my lockdep annotations are
> correct in the most recent commit wrt what's using ttm_bo_wait without reservation
> we could remove the fence_lock entirely.
>
> ~Maarten
Being able to wait for buffer idle or get the fence pointer without
reserving is a fundamental property of TTM. Reservation is a long-term
lock. The fence lock is a very short term lock. If I were to choose, I'd
rather accept per-object fence locks than removing this property, but
see below.
Likewise, to be able to guarantee that a reserved object is not on any
LRU list is also an important property. Removing that property will, in
addition to the spin wait we've already discussed, make understanding TTM
locking even more difficult, and I'd really like to avoid it.
If this were a real performance problem we were trying to solve it would
be easier to motivate changes in this area, but if it's just trying to
avoid a global reservation lock and a global fence lock that will rarely
if ever see any contention, I can't see the point. On the contrary,
having per-object locks will be very costly when reserving / fencing
many objects. As mentioned before, in the fence lock case it's been
tried and removed, so I'd like to know the reasoning behind introducing
it again, and in what situations you think the global locks will be
contended.
/Thomas
When a buffer is added to the LRU list, a reference is taken which is
not dropped until the buffer is evicted from the LRU list. This is the
correct behavior, however this LRU reference will prevent the buffer
from being dropped. This means that the buffer can't actually be dropped
until it is selected for eviction. There's no bound on the time spent
on the LRU list, which means that the buffer may be undroppable for
very long periods of time. Given that migration involves dropping
buffers, the associated page is now unmigratable for long periods of
time as well. CMA relies on being able to migrate a specific range
of pages, so these types of failures make CMA significantly
less reliable, especially under high filesystem usage.
Rather than waiting for the LRU algorithm to eventually kick out
the buffer, explicitly remove the buffer from the LRU list when trying
to drop it. There is still the possibility that the buffer
could be added back on the list, but that indicates the buffer is
still in use and would probably have other 'in use' indications to
prevent dropping.
Signed-off-by: Laura Abbott <lauraa(a)codeaurora.org>
---
fs/buffer.c | 38 ++++++++++++++++++++++++++++++++++++++
1 file changed, 38 insertions(+)
diff --git a/fs/buffer.c b/fs/buffer.c
index ad5938c..daa0c3d 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -1399,12 +1399,49 @@ static bool has_bh_in_lru(int cpu, void *dummy)
return 0;
}
+static void __evict_bh_lru(void *arg)
+{
+ struct bh_lru *b = &get_cpu_var(bh_lrus);
+ struct buffer_head *bh = arg;
+ int i;
+
+ for (i = 0; i < BH_LRU_SIZE; i++) {
+ if (b->bhs[i] == bh) {
+ brelse(b->bhs[i]);
+ b->bhs[i] = NULL;
+ goto out;
+ }
+ }
+out:
+ put_cpu_var(bh_lrus);
+}
+
+static bool bh_exists_in_lru(int cpu, void *arg)
+{
+ struct bh_lru *b = per_cpu_ptr(&bh_lrus, cpu);
+ struct buffer_head *bh = arg;
+ int i;
+
+ for (i = 0; i < BH_LRU_SIZE; i++) {
+ if (b->bhs[i] == bh)
+ return 1;
+ }
+
+ return 0;
+
+}
void invalidate_bh_lrus(void)
{
on_each_cpu_cond(has_bh_in_lru, invalidate_bh_lru, NULL, 1, GFP_KERNEL);
}
EXPORT_SYMBOL_GPL(invalidate_bh_lrus);
+void evict_bh_lrus(struct buffer_head *bh)
+{
+ on_each_cpu_cond(bh_exists_in_lru, __evict_bh_lru, bh, 1, GFP_ATOMIC);
+}
+EXPORT_SYMBOL_GPL(evict_bh_lrus);
+
void set_bh_page(struct buffer_head *bh,
struct page *page, unsigned long offset)
{
@@ -3052,6 +3089,7 @@ drop_buffers(struct page *page, struct buffer_head **buffers_to_free)
bh = head;
do {
+ evict_bh_lrus(bh);
if (buffer_write_io_error(bh) && page->mapping)
set_bit(AS_EIO, &page->mapping->flags);
if (buffer_busy(bh))
--
1.7.11.3
Hi Linus,
I would like to ask for pulling one more patch for the ARM dma-mapping
subsystem into the Linux v3.6 kernel tree. This patch fixes a very subtle
bug (a typical off-by-one error) which might appear in very rare
circumstances.
The following changes since commit 55d512e245bc7699a8800e23df1a24195dd08217:
Linux 3.6-rc5 (2012-09-08 16:43:45 -0700)
are available in the git repository at:
git://git.linaro.org/people/mszyprowski/linux-dma-mapping.git fixes-for-3.6
for you to fetch changes up to f3d87524975f01b885fc3d009c6ab6afd0d00746:
arm: mm: fix DMA pool affiliation check (2012-09-10 16:15:48 +0200)
Thanks!
Best regards
Marek Szyprowski
Samsung Poland R&D Center
Patch summary:
----------------------------------------------------------------
Thomas Petazzoni (1):
arm: mm: fix DMA pool affiliation check
arch/arm/mm/dma-mapping.c | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
On Wed, Sep 5, 2012 at 5:08 AM, Tomi Valkeinen <tomi.valkeinen(a)ti.com> wrote:
> Hi,
>
> OMAP has a custom video ram allocator, which I'd like to remove and use
> the standard dma allocation functions.
>
> There are two problems for which I'd like to hear suggestions or
> comments:
>
> First one is that the dma_alloc_* functions map the allocated memory for
> cpu use. In many cases with OMAP DSS (display subsystem) this is not
> needed: the memory may be written only by the SGX or the DSP, and it's
> only read by the DSS, so it's never touched by the CPU.
see dma_alloc_attrs() and DMA_ATTR_NO_KERNEL_MAPPING
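fwiw a minimal sketch with the ~3.6 dma_attrs interface (the surrounding
driver code is hypothetical, the dma_* calls are the mainline ones):

#include <linux/dma-mapping.h>
#include <linux/dma-attrs.h>
#include <linux/gfp.h>

static void *alloc_fb_no_mapping(struct device *dev, size_t size,
				 dma_addr_t *dma_handle)
{
	struct dma_attrs attrs;

	init_dma_attrs(&attrs);
	dma_set_attr(DMA_ATTR_NO_KERNEL_MAPPING, &attrs);

	/* returns an opaque cookie rather than a kernel virtual address */
	return dma_alloc_attrs(dev, size, dma_handle, GFP_KERNEL, &attrs);
}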
> This is even more true when using VRFB on omap3 (and probably TILER on
> omap4) for rotation, as VRFB hides the actual memory and offers rotated
> views. In this case the backend memory is never accessed by anyone else
> than VRFB.
just fwiw, we don't actually need contiguous memory on o4/tiler :-)
(well, at least if you ignore things like secure playback)
> Is there a way to allocate the memory without creating a mapping? While
> it won't break anything as such, the allocated areas can be quite large
> thus causing large areas of the kernel's memory space to be needlessly
> reserved.
>
> The second case is passing a framebuffer address from the bootloader to
> the kernel. Often with mobile devices the bootloader will initialize the
> display hardware, showing a company logo or such. To keep the image on
> the screen when kernel starts we need to reserve the same physical
> memory area early at boot, and use that for the framebuffer.
with a bit of handwaving, this is possible. You can pass a base
address to dma_declare_contiguous() when you setup your device's CMA
pool. Although that doesn't really guarantee you're allocation from
that pool is at offset zero, I suppose.
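something along these lines (rough sketch only; the device, base address
and size are made up, dma_declare_contiguous() is the CMA API as in
mainline):

#include <linux/init.h>
#include <linux/printk.h>
#include <linux/platform_device.h>
#include <linux/dma-contiguous.h>

static struct platform_device example_fb_device = {
	.name = "example-fb",
	.id   = -1,
};

#define EXAMPLE_FB_BASE		0x9f000000UL	/* hypothetical bootloader FB */
#define EXAMPLE_FB_POOL_SIZE	(16 << 20)	/* 16 MiB */

static void __init example_reserve_fb_pool(void)
{
	/*
	 * base != 0 pins the pool location, though not the offset of a
	 * particular allocation within it, as noted above.
	 */
	if (dma_declare_contiguous(&example_fb_device.dev,
				   EXAMPLE_FB_POOL_SIZE,
				   EXAMPLE_FB_BASE, 0))
		pr_warn("example-fb: CMA pool reservation failed\n");
}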
> I'm not sure if there's any actual problem with this one, presuming
> there is a solution for the first case. Somehow the memory is reserved
> at early boot time, and this is passed to the fb driver. But can the
> memory be managed the same way as in normal case (for example freeing
> it), or does it need to be handled as a special case?
special-casing it might be better.. although possibly a dma attr could
be added for this to tell dma_alloc_from_contiguous() that we need a
particular address within the CMA pool. It seems a bit like a hack,
but OTOH I guess pretty much every consumer device would need a hack
like this.
BR,
-R
> Tomi
>
v2->v3
Split out the oom killer patch only.
Based on Nishanth's patch, which changes ion_debug_heap_total to use id.
1. add heap_found
2. Solve the issue of several ids sharing one type:
use ion_debug_heap_total(client, heap->id) instead of ion_debug_heap_total(client, heap->type),
since id is unique while type can be shared.
Fortunately Nishanth has updated his patch, so this is rebased on that patch.
v1->v2
Sync to Aug 30 common.git
v0->v1:
1. move ion_shrink out of the mutex, suggested by Nishanth
2. check the error flag of ERR_PTR(-ENOMEM)
3. add msleep to allow scheduling out
Based on common.git, android-3.4 branch.
Add an oom killer.
Once the heap is used up, SIGKILL is sent to all tasks referring to the
buffer, in descending oom_score_adj order.
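A rough sketch of the idea (illustration only, not the patch itself; struct
example_ion_client and its list are hypothetical stand-ins for the driver's
real client bookkeeping):

#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/sched.h>

struct example_ion_client {
	struct list_head node;
	struct task_struct *task;	/* task holding buffer references */
};

static bool example_ion_kill_one(struct list_head *clients)
{
	struct example_ion_client *c, *victim = NULL;
	int max_adj = INT_MIN;

	/* pick the buffer holder with the highest oom_score_adj */
	list_for_each_entry(c, clients, node) {
		if (c->task && c->task->signal->oom_score_adj > max_adj) {
			max_adj = c->task->signal->oom_score_adj;
			victim = c;
		}
	}

	if (!victim)
		return false;

	send_sig(SIGKILL, victim->task, 0);
	return true;
}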
Nishanth Peethambaran (1):
gpu: ion: Update debugfs to show for each id
Zhangfei Gao (1):
gpu: ion: oom killer
drivers/gpu/ion/ion.c | 131 +++++++++++++++++++++++++++++++++++++++++++++----
1 files changed, 121 insertions(+), 10 deletions(-)