From subashrp@gmail.com Tue Jan 1 01:07:50 2013
From: Subash Patel
To: linaro-mm-sig@lists.linaro.org
Subject: Re: [Linaro-mm-sig] [PATCH] arm: dma mapping: export arm iommu functions
Date: Mon, 31 Dec 2012 17:07:46 -0800
Message-ID: <50E236E2.9050305@gmail.com>
In-Reply-To: <20121229065356.GA13760@quad.lixom.net>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="===============7082107079214837916=="

--===============7082107079214837916==
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: quoted-printable

On Friday 28 December 2012 10:53 PM, Olof Johansson wrote:
> On Fri, Dec 28, 2012 at 09:53:47AM +0530, Prathyush K wrote:
>> On Thu, Dec 27, 2012 at 7:45 PM, Marek Szyprowski wrote:
>>
>>> Hello,
>>>
>>> On 12/27/2012 8:14 AM, Prathyush K wrote:
>>>
>>>> This patch adds EXPORT_SYMBOL calls to the three arm iommu
>>>> functions - arm_iommu_create_mapping, arm_iommu_free_mapping
>>>> and arm_iommu_attach_device. These functions can now be called
>>>> from dynamic modules.
>>>>
>>> Could You describe a bit more why those functions might be needed by
>>> dynamic modules?
>>>
>> Hi Marek,
>>
>> We are adding iommu support to exynos gsc and s5p-mfc.
>> And these two drivers need to be built as modules to improve boot time.
>>
>> We're calling these three functions from inside these drivers:
>> e.g.
>> mapping = arm_iommu_create_mapping(&platform_bus_type, 0x20000000, SZ_256M, 4);
>> arm_iommu_attach_device(mdev, mapping);
>
> The driver shouldn't have to call these low-level functions directly,
> something's wrong if you need that.

These are not truly low-level calls, but ARM-specific wrappers around the
dma-mapping implementation. A driver calls the former to declare the
mapping its IOMMU requires, and the latter to start using that mapping.

>
> How is the DMA address management different here from other system/io mmus? is
> that 256M window a hardware restriction?

No, each IOMMU can address a full 4GB. But to keep the IOMMU address space
limited to what is actually required, various sizes were tried earlier and
we later settled on 256MB. This can be increased if the drivers need more
buffers mapped to the device at any one time.

Regards,
Subash

>
> -Olof
>
> --
> To unsubscribe, send a message with 'unsubscribe linux-mm' in
> the body to majordomo(a)kvack.org. For more info on Linux MM,
> see: http://www.linux-mm.org/ .
> Don't email: email(a)kvack.org
>

--===============7082107079214837916==--

From subashrp@gmail.com Tue Jan 1 01:16:02 2013
From: Subash Patel
To: linaro-mm-sig@lists.linaro.org
Subject: Re: [Linaro-mm-sig] [RFC] ARM: DMA-Mapping: add a new attribute to clear buffer
Date: Mon, 31 Dec 2012 17:15:59 -0800
Message-ID: <50E238CF.1050708@gmail.com>
In-Reply-To: <1356656433-2278-1-git-send-email-daeinki@gmail.com>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="===============2754800373526919156=="

--===============2754800373526919156==
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: quoted-printable

On Thursday 27 December 2012 05:00 PM, daeinki(a)gmail.com wrote:
> From: Inki Dae
>
> This patch adds a new attribute, DMA_ATTR_SKIP_BUFFER_CLEAR
> to skip buffer clearing. The buffer clearing also flushes CPU cache
> so this operation has performance deterioration a little bit.
>
> With this patch, allocated buffer region is cleared as default.
> So if you want to skip the buffer clearing, just set this attribute.
> > But this flag should be used carefully because this use might get > access to some vulnerable content such as security data. So with this > patch, we make sure that all pages will be somehow cleared before > exposing to userspace. > > For example, let's say that the security data had been stored > in some memory and freed without clearing it. > And then malicious process allocated the region though some buffer > allocator such as gem and ion without clearing it, and requested blit > operation with cleared another buffer though gpu or other drivers. > At this time, the malicious process could access the security data. Isnt it always good to use such security related buffers through TZ=20 rather than trying to guard them in the non-secure zone? > > Signed-off-by: Inki Dae > Signed-off-by: Kyungmin Park > --- > arch/arm/mm/dma-mapping.c | 6 ++++-- > include/linux/dma-attrs.h | 1 + > 2 files changed, 5 insertions(+), 2 deletions(-) > > diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c > index 6b2fb87..fbe9dff 100644 > --- a/arch/arm/mm/dma-mapping.c > +++ b/arch/arm/mm/dma-mapping.c > @@ -1058,7 +1058,8 @@ static struct page **__iommu_alloc_buffer(struct devi= ce *dev, size_t size, > if (!page) > goto error; > > - __dma_clear_buffer(page, size); > + if (!dma_get_attr(DMA_ATTR_SKIP_BUFFER_CLEAR, attrs)) > + __dma_clear_buffer(page, size); > > for (i =3D 0; i < count; i++) > pages[i] =3D page + i; > @@ -1082,7 +1083,8 @@ static struct page **__iommu_alloc_buffer(struct devi= ce *dev, size_t size, > pages[i + j] =3D pages[i] + j; > } > > - __dma_clear_buffer(pages[i], PAGE_SIZE << order); > + if (!dma_get_attr(DMA_ATTR_SKIP_BUFFER_CLEAR, attrs)) > + __dma_clear_buffer(pages[i], PAGE_SIZE << order); > i +=3D 1 << order; > count -=3D 1 << order; > } > diff --git a/include/linux/dma-attrs.h b/include/linux/dma-attrs.h > index c8e1831..2592c05 100644 > --- a/include/linux/dma-attrs.h > +++ b/include/linux/dma-attrs.h > @@ -18,6 +18,7 @@ enum dma_attr { > DMA_ATTR_NO_KERNEL_MAPPING, > DMA_ATTR_SKIP_CPU_SYNC, > DMA_ATTR_FORCE_CONTIGUOUS, > + DMA_ATTR_SKIP_BUFFER_CLEAR, > DMA_ATTR_MAX, How is this new macro different from SKIP_CPU_SYNC? > }; > > Regards, Subash --===============2754800373526919156==-- From inki.dae@samsung.com Wed Jan 2 01:55:30 2013 From: Inki Dae To: linaro-mm-sig@lists.linaro.org Subject: Re: [Linaro-mm-sig] [RFC] ARM: DMA-Mapping: add a new attribute to clear buffer Date: Wed, 02 Jan 2013 10:55:28 +0900 Message-ID: In-Reply-To: <50E238CF.1050708@gmail.com> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============7284149373560538177==" --===============7284149373560538177== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: 7bit 2013/1/1 Subash Patel : > > > On Thursday 27 December 2012 05:00 PM, daeinki(a)gmail.com wrote: >> >> From: Inki Dae >> >> This patch adds a new attribute, DMA_ATTR_SKIP_BUFFER_CLEAR >> to skip buffer clearing. The buffer clearing also flushes CPU cache >> so this operation has performance deterioration a little bit. >> >> With this patch, allocated buffer region is cleared as default. >> So if you want to skip the buffer clearing, just set this attribute. >> >> But this flag should be used carefully because this use might get >> access to some vulnerable content such as security data. So with this >> patch, we make sure that all pages will be somehow cleared before >> exposing to userspace. 
>>
>> For example, let's say that the security data had been stored
>> in some memory and freed without clearing it.
>> And then malicious process allocated the region though some buffer
>> allocator such as gem and ion without clearing it, and requested blit
>> operation with cleared another buffer though gpu or other drivers.
>> At this time, the malicious process could access the security data.
>
> Isnt it always good to use such security related buffers through TZ rather
> than trying to guard them in the non-secure zone?
>

This is for the normal world. We should consider security issues in the
normal world as well, and cover all such cases as far as possible.

>
>>
>> Signed-off-by: Inki Dae
>> Signed-off-by: Kyungmin Park
>> ---
>>   arch/arm/mm/dma-mapping.c |    6 ++++--
>>   include/linux/dma-attrs.h |    1 +
>>   2 files changed, 5 insertions(+), 2 deletions(-)
>>
>> diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
>> index 6b2fb87..fbe9dff 100644
>> --- a/arch/arm/mm/dma-mapping.c
>> +++ b/arch/arm/mm/dma-mapping.c
>> @@ -1058,7 +1058,8 @@ static struct page **__iommu_alloc_buffer(struct device *dev, size_t size,
>>   	if (!page)
>>   		goto error;
>>
>> -	__dma_clear_buffer(page, size);
>> +	if (!dma_get_attr(DMA_ATTR_SKIP_BUFFER_CLEAR, attrs))
>> +		__dma_clear_buffer(page, size);
>>
>>   	for (i = 0; i < count; i++)
>>   		pages[i] = page + i;
>> @@ -1082,7 +1083,8 @@ static struct page **__iommu_alloc_buffer(struct device *dev, size_t size,
>>   			pages[i + j] = pages[i] + j;
>>   		}
>>
>> -		__dma_clear_buffer(pages[i], PAGE_SIZE << order);
>> +		if (!dma_get_attr(DMA_ATTR_SKIP_BUFFER_CLEAR, attrs))
>> +			__dma_clear_buffer(pages[i], PAGE_SIZE << order);
>>   		i += 1 << order;
>>   		count -= 1 << order;
>>   	}
>> diff --git a/include/linux/dma-attrs.h b/include/linux/dma-attrs.h
>> index c8e1831..2592c05 100644
>> --- a/include/linux/dma-attrs.h
>> +++ b/include/linux/dma-attrs.h
>> @@ -18,6 +18,7 @@ enum dma_attr {
>>   	DMA_ATTR_NO_KERNEL_MAPPING,
>>   	DMA_ATTR_SKIP_CPU_SYNC,
>>   	DMA_ATTR_FORCE_CONTIGUOUS,
>> +	DMA_ATTR_SKIP_BUFFER_CLEAR,
>>   	DMA_ATTR_MAX,
>
> How is this new macro different from SKIP_CPU_SYNC?
>

The purpose of this patch is to skip buffer clearing, not to skip the
cache operation.

>> };
>>
>
> Regards,
> Subash

--===============7284149373560538177==--

From prathyush.k@samsung.com Fri Jan 4 11:00:38 2013
From: Prathyush K
To: linaro-mm-sig@lists.linaro.org
Subject: [Linaro-mm-sig] [PATCH v2] arm: dma mapping: export arm iommu functions
Date: Fri, 04 Jan 2013 06:22:42 -0500
Message-ID: <1357298562-28110-1-git-send-email-prathyush.k@samsung.com>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="===============0089679718287318190=="

--===============0089679718287318190==
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: quoted-printable

This patch adds EXPORT_SYMBOL_GPL calls to the three arm iommu functions -
arm_iommu_create_mapping, arm_iommu_release_mapping and
arm_iommu_attach_device. These three functions are arm specific wrapper
functions for creating/releasing/using an iommu mapping and they are called
by various drivers. If any of these drivers need to be built as dynamic
modules, these functions need to be exported.

Changelog v2: use EXPORT_SYMBOL_GPL, as suggested by Marek.
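A minimal sketch of the kind of modular caller this enables, following the
call sequence quoted earlier in the thread (the probe function, the device,
and the 0x20000000/SZ_256M window are illustrative assumptions, not part of
this patch):

#include <linux/err.h>
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/sizes.h>
#include <asm/dma-iommu.h>

/*
 * Illustrative only: a loadable driver that creates and attaches its own
 * IOMMU mapping. Without the exports added below, a module cannot link
 * against these three functions.
 */
static struct dma_iommu_mapping *example_mapping;

static int example_probe(struct platform_device *pdev)
{
	int ret;

	/* 256MB of IOMMU address space for this device, starting at 0x20000000 */
	example_mapping = arm_iommu_create_mapping(&platform_bus_type,
						   0x20000000, SZ_256M, 4);
	if (IS_ERR(example_mapping))
		return PTR_ERR(example_mapping);

	/* route the device's dma-mapping calls through the IOMMU mapping */
	ret = arm_iommu_attach_device(&pdev->dev, example_mapping);
	if (ret) {
		arm_iommu_release_mapping(example_mapping);
		return ret;
	}

	return 0;
}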
Signed-off-by: Prathyush K --- arch/arm/mm/dma-mapping.c | 3 +++ 1 file changed, 3 insertions(+) diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c index 6b2fb87..226ebcf 100644 --- a/arch/arm/mm/dma-mapping.c +++ b/arch/arm/mm/dma-mapping.c @@ -1797,6 +1797,7 @@ err2: err: return ERR_PTR(err); } +EXPORT_SYMBOL_GPL(arm_iommu_create_mapping); =20 static void release_iommu_mapping(struct kref *kref) { @@ -1813,6 +1814,7 @@ void arm_iommu_release_mapping(struct dma_iommu_mapping= *mapping) if (mapping) kref_put(&mapping->kref, release_iommu_mapping); } +EXPORT_SYMBOL_GPL(arm_iommu_release_mapping); =20 /** * arm_iommu_attach_device @@ -1841,5 +1843,6 @@ int arm_iommu_attach_device(struct device *dev, pr_debug("Attached IOMMU controller to %s device.\n", dev_name(dev)); return 0; } +EXPORT_SYMBOL_GPL(arm_iommu_attach_device); =20 #endif --=20 1.8.0 --===============0089679718287318190==-- From johan.mossberg@stericsson.com Mon Jan 7 14:17:08 2013 From: Johan Mossberg To: linaro-mm-sig@lists.linaro.org Subject: Re: [Linaro-mm-sig] [PATCHv2] gpu: ion: Add ion_share_dma_buf_kernel Date: Mon, 07 Jan 2013 14:38:18 +0100 Message-ID: <50EACFCA.9040906@stericsson.com> In-Reply-To: <50EAC7EB.7080700@stericsson.com> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============1744835907702518982==" --===============1744835907702518982== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Hi Rebecca, Is this patch accepted for ion mainline or do you have more comments/question= s? /Johan Mossberg On 12/13/2012 10:24 AM, Johan Mossberg wrote: > ion_share_dma_buf_kernel enables you to share ion buffers via dma buf > for kernel only use cases. Useful for example when a GPU driver using > ion wants to share its output buffers with a 3d party display > controller driver supporting dma buf. 
>=20 > Signed-off-by: Johan Mossberg > --- > drivers/gpu/ion/ion.c | 22 ++++++++++++++++++---- > include/linux/ion.h | 8 ++++++++ > 2 files changed, 26 insertions(+), 4 deletions(-) >=20 > diff --git a/drivers/gpu/ion/ion.c b/drivers/gpu/ion/ion.c > index 3872095..e7b0d0b 100644 > --- a/drivers/gpu/ion/ion.c > +++ b/drivers/gpu/ion/ion.c > @@ -955,19 +955,19 @@ struct dma_buf_ops dma_buf_ops =3D { > .kunmap =3D ion_dma_buf_kunmap, > }; > =20 > -int ion_share_dma_buf(struct ion_client *client, struct ion_handle *handle) > +struct dma_buf *ion_share_dma_buf_kernel(struct ion_client *client, > + struct ion_handle *handle) > { > struct ion_buffer *buffer; > struct dma_buf *dmabuf; > bool valid_handle; > - int fd; > =20 > mutex_lock(&client->lock); > valid_handle =3D ion_handle_validate(client, handle); > mutex_unlock(&client->lock); > if (!valid_handle) { > WARN(1, "%s: invalid handle passed to share.\n", __func__); > - return -EINVAL; > + return ERR_PTR(-EINVAL); > } > =20 > buffer =3D handle->buffer; > @@ -975,8 +975,22 @@ int ion_share_dma_buf(struct ion_client *client, struc= t ion_handle *handle) > dmabuf =3D dma_buf_export(buffer, &dma_buf_ops, buffer->size, O_RDWR); > if (IS_ERR(dmabuf)) { > ion_buffer_put(buffer); > - return PTR_ERR(dmabuf); > + return dmabuf; > } > + > + return dmabuf; > +} > +EXPORT_SYMBOL(ion_share_dma_buf_kernel); > + > +int ion_share_dma_buf(struct ion_client *client, struct ion_handle *handle) > +{ > + struct dma_buf *dmabuf; > + int fd; > + > + dmabuf =3D ion_share_dma_buf_kernel(client, handle); > + if (IS_ERR(dmabuf)) > + return PTR_ERR(dmabuf); > + > fd =3D dma_buf_fd(dmabuf, O_CLOEXEC); > if (fd < 0) > dma_buf_put(dmabuf); > diff --git a/include/linux/ion.h b/include/linux/ion.h > index a7d399c..8720e9b 100644 > --- a/include/linux/ion.h > +++ b/include/linux/ion.h > @@ -205,6 +205,14 @@ void *ion_map_kernel(struct ion_client *client, struct= ion_handle *handle); > void ion_unmap_kernel(struct ion_client *client, struct ion_handle *handle= ); > =20 > /** > + * ion_share_dma_buf_kernel() - share buffer as dma-buf > + * @client: the client > + * @handle: the handle > + */ > +struct dma_buf *ion_share_dma_buf_kernel(struct ion_client *client, > + struct ion_handle *buf); > + > +/** > * ion_share_dma_buf() - given an ion client, create a dma-buf fd > * @client: the client > * @handle: the handle > --=20 > 1.8.0 >=20 >=20 > _______________________________________________ > Linaro-mm-sig mailing list > Linaro-mm-sig(a)lists.linaro.org > http://lists.linaro.org/mailman/listinfo/linaro-mm-sig >=20 >=20 --===============1744835907702518982==-- From abhinav.k@samsung.com Tue Jan 8 09:50:32 2013 From: Abhinav Kochhar To: linaro-mm-sig@lists.linaro.org Subject: [Linaro-mm-sig] [RFC] ARM: dma-mapping: Add DMA attribute to skip iommu mapping Date: Tue, 08 Jan 2013 05:12:24 -0500 Message-ID: <1357639944-12050-1-git-send-email-abhinav.k@samsung.com> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============1009778610788067521==" --===============1009778610788067521== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Adding a new dma attribute which can be used by the platform drivers to avoid creating iommu mappings. In some cases the buffers are allocated by display controller driver using dma alloc apis but are not=20 used for scanout. Though the buffers are allocated=20 by display controller but are only used for sharing=20 among different devices. 
With this attribute the platform drivers can choose not to create iommu mapping at the time of buffer allocation and only create the mapping when they access this buffer.=20 Change-Id: I2178b3756170982d814e085ca62474d07b616a21 Signed-off-by: Abhinav Kochhar --- arch/arm/mm/dma-mapping.c | 8 +++++--- include/linux/dma-attrs.h | 1 + 2 files changed, 6 insertions(+), 3 deletions(-) diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c index c0f0f43..e73003c 100644 --- a/arch/arm/mm/dma-mapping.c +++ b/arch/arm/mm/dma-mapping.c @@ -1279,9 +1279,11 @@ static void *arm_iommu_alloc_attrs(struct device *dev,= size_t size, if (!pages) return NULL; =20 - *handle =3D __iommu_create_mapping(dev, pages, size); - if (*handle =3D=3D DMA_ERROR_CODE) - goto err_buffer; + if (!dma_get_attr(DMA_ATTR_NO_IOMMU_MAPPING, attrs)) { + *handle =3D __iommu_create_mapping(dev, pages, size); + if (*handle =3D=3D DMA_ERROR_CODE) + goto err_buffer; + } =20 if (dma_get_attr(DMA_ATTR_NO_KERNEL_MAPPING, attrs)) return pages; diff --git a/include/linux/dma-attrs.h b/include/linux/dma-attrs.h index c8e1831..1f04419 100644 --- a/include/linux/dma-attrs.h +++ b/include/linux/dma-attrs.h @@ -15,6 +15,7 @@ enum dma_attr { DMA_ATTR_WEAK_ORDERING, DMA_ATTR_WRITE_COMBINE, DMA_ATTR_NON_CONSISTENT, + DMA_ATTR_NO_IOMMU_MAPPING, DMA_ATTR_NO_KERNEL_MAPPING, DMA_ATTR_SKIP_CPU_SYNC, DMA_ATTR_FORCE_CONTIGUOUS, --=20 1.7.8.6 --===============1009778610788067521==-- From jesse.barker@linaro.org Tue Jan 8 15:24:26 2013 From: Jesse Barker To: linaro-mm-sig@lists.linaro.org Subject: [Linaro-mm-sig] Attendance at ELC San Francisco Date: Tue, 08 Jan 2013 07:24:24 -0800 Message-ID: MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============1201349973411374023==" --===============1201349973411374023== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: 7bit Hi all, With all of the attention that the Common Display Framework is getting, I was wondering if it was worth having a BoF discussion at ELC next month in San Francisco. This will be only a couple of weeks after FOSDEM, but given the pace that things seem to be moving, that could be a great opportunity either to have a follow-on discussion, or simply to involve a slightly different cross-section of the community in a face-to-face discussion. If folks could let me know that they'll be at ELC and are interested in a BoF there, I'll look into getting it set up. 
cheers,
Jesse

--===============1201349973411374023==--

From konkers@android.com Tue Jan 8 16:46:10 2013
From: Erik Gilling
To: linaro-mm-sig@lists.linaro.org
Subject: Re: [Linaro-mm-sig] Attendance at ELC San Francisco
Date: Tue, 08 Jan 2013 08:45:49 -0800
Message-ID: 
In-Reply-To: 
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="===============5451780438231902964=="

--===============5451780438231902964==
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit

I'd be interested. Also Greg Hackmann (CCed) who's been working a lot
with display issues on our team might be interested in attending.

Cheers,
  Erik

On Tue, Jan 8, 2013 at 7:24 AM, Jesse Barker wrote:
> Hi all,
>
> With all of the attention that the Common Display Framework is getting, I
> was wondering if it was worth having a BoF discussion at ELC next month in
> San Francisco. This will be only a couple of weeks after FOSDEM, but given
> the pace that things seem to be moving, that could be a great opportunity
> either to have a follow-on discussion, or simply to involve a slightly
> different cross-section of the community in a face-to-face discussion. If
> folks could let me know that they'll be at ELC and are interested in a BoF
> there, I'll look into getting it set up.
>
> cheers,
> Jesse
>
> _______________________________________________
> Linaro-mm-sig mailing list
> Linaro-mm-sig(a)lists.linaro.org
> http://lists.linaro.org/mailman/listinfo/linaro-mm-sig
>
>

--===============5451780438231902964==--

From rebecca@android.com Tue Jan 8 21:12:51 2013
From: Rebecca Schultz Zavin
To: linaro-mm-sig@lists.linaro.org
Subject: Re: [Linaro-mm-sig] [PATCHv2] gpu: ion: Add ion_share_dma_buf_kernel
Date: Tue, 08 Jan 2013 13:12:50 -0800
Message-ID: 
In-Reply-To: <50EACFCA.9040906@stericsson.com>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="===============7942393413130526375=="

--===============7942393413130526375==
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit

I'm tempted to call this new api ion_share_dma_buf and rename the old one
to ion_share_dma_buf_fd while the number of users is still small. I think
it's clearer. Otherwise this patch looks good.

Thoughts?

Rebecca

On Mon, Jan 7, 2013 at 5:38 AM, Johan Mossberg <
johan.mossberg(a)stericsson.com> wrote:

> Hi Rebecca,
>
> Is this patch accepted for ion mainline or do you have more
> comments/questions?
>
> /Johan Mossberg
>
> On 12/13/2012 10:24 AM, Johan Mossberg wrote:
> > ion_share_dma_buf_kernel enables you to share ion buffers via dma buf
> > for kernel only use cases. Useful for example when a GPU driver using
> > ion wants to share its output buffers with a 3d party display
> > controller driver supporting dma buf.
> >
> > Signed-off-by: Johan Mossberg
> > ---
> >  drivers/gpu/ion/ion.c | 22 ++++++++++++++++++----
> >  include/linux/ion.h   |  8 ++++++++
> >  2 files changed, 26 insertions(+), 4 deletions(-)
> >
> > diff --git a/drivers/gpu/ion/ion.c b/drivers/gpu/ion/ion.c
> > index 3872095..e7b0d0b 100644
> > --- a/drivers/gpu/ion/ion.c
> > +++ b/drivers/gpu/ion/ion.c
> > @@ -955,19 +955,19 @@ struct dma_buf_ops dma_buf_ops = {
> >       .kunmap = ion_dma_buf_kunmap,
> >  };
> >
> > -int ion_share_dma_buf(struct ion_client *client, struct ion_handle *handle)
> > +struct dma_buf *ion_share_dma_buf_kernel(struct ion_client *client,
> > +                                         struct ion_handle *handle)
> >  {
> >       struct ion_buffer *buffer;
> >       struct dma_buf *dmabuf;
> >       bool valid_handle;
> > -     int fd;
> >
> >       mutex_lock(&client->lock);
> >       valid_handle = ion_handle_validate(client, handle);
> >       mutex_unlock(&client->lock);
> >       if (!valid_handle) {
> >               WARN(1, "%s: invalid handle passed to share.\n", __func__);
> > -             return -EINVAL;
> > +             return ERR_PTR(-EINVAL);
> >       }
> >
> >       buffer = handle->buffer;
> > @@ -975,8 +975,22 @@ int ion_share_dma_buf(struct ion_client *client, struct ion_handle *handle)
> >       dmabuf = dma_buf_export(buffer, &dma_buf_ops, buffer->size, O_RDWR);
> >       if (IS_ERR(dmabuf)) {
> >               ion_buffer_put(buffer);
> > -             return PTR_ERR(dmabuf);
> > +             return dmabuf;
> >       }
> > +
> > +     return dmabuf;
> > +}
> > +EXPORT_SYMBOL(ion_share_dma_buf_kernel);
> > +
> > +int ion_share_dma_buf(struct ion_client *client, struct ion_handle *handle)
> > +{
> > +     struct dma_buf *dmabuf;
> > +     int fd;
> > +
> > +     dmabuf = ion_share_dma_buf_kernel(client, handle);
> > +     if (IS_ERR(dmabuf))
> > +             return PTR_ERR(dmabuf);
> > +
> >       fd = dma_buf_fd(dmabuf, O_CLOEXEC);
> >       if (fd < 0)
> >               dma_buf_put(dmabuf);
> > diff --git a/include/linux/ion.h b/include/linux/ion.h
> > index a7d399c..8720e9b 100644
> > --- a/include/linux/ion.h
> > +++ b/include/linux/ion.h
> > @@ -205,6 +205,14 @@ void *ion_map_kernel(struct ion_client *client, struct ion_handle *handle);
> >  void ion_unmap_kernel(struct ion_client *client, struct ion_handle *handle);
> >
> >  /**
> > + * ion_share_dma_buf_kernel() - share buffer as dma-buf
> > + * @client: the client
> > + * @handle: the handle
> > + */
> > +struct dma_buf *ion_share_dma_buf_kernel(struct ion_client *client,
> > +                                         struct ion_handle *buf);
> > +
> > +/**
> >  * ion_share_dma_buf() - given an ion client, create a dma-buf fd
> >  * @client: the client
> >  * @handle: the handle
> > --
> > 1.8.0
> >
> >
> > _______________________________________________
> > Linaro-mm-sig mailing list
> > Linaro-mm-sig(a)lists.linaro.org
> > http://lists.linaro.org/mailman/listinfo/linaro-mm-sig
> >
> >

--===============7942393413130526375==--

From jesse.barker@linaro.org Thu Jan 10 18:06:15 2013
From: Jesse Barker
To: linaro-mm-sig@lists.linaro.org
Subject: Re: [Linaro-mm-sig] Attendance at ELC San Francisco
Date: Thu, 10 Jan 2013 10:06:14 -0800
Message-ID: 
In-Reply-To: 
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="===============6020385392849474677=="

--===============6020385392849474677==
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: quoted-printable

OK, so I've requested an afternoon slot (rather 2 slots, as the slot size
is 50 minutes, and I wanted to give us some wiggle room), so we should be
good. When the final conference agenda settles, I'll send word out again
for the sake of those that are local enough to turn up to the BoF, but
might not be attending the rest of the conference.
cheers,
Jesse

On Tue, Jan 8, 2013 at 8:45 AM, Erik Gilling wrote:

> I'd be interested. Also Greg Hackmann (CCed) who's been working a lot
> with display issues on our team might be interested in attending.
>
> Cheers,
>   Erik
>
> On Tue, Jan 8, 2013 at 7:24 AM, Jesse Barker wrote:
>
>> Hi all,
>>
>> With all of the attention that the Common Display Framework is getting, I
>> was wondering if it was worth having a BoF discussion at ELC next month in
>> San Francisco. This will be only a couple of weeks after FOSDEM, but given
>> the pace that things seem to be moving, that could be a great opportunity
>> either to have a follow-on discussion, or simply to involve a slightly
>> different cross-section of the community in a face-to-face discussion. If
>> folks could let me know that they'll be at ELC and are interested in a BoF
>> there, I'll look into getting it set up.
>>
>> cheers,
>> Jesse
>>
>> _______________________________________________
>> Linaro-mm-sig mailing list
>> Linaro-mm-sig(a)lists.linaro.org
>> http://lists.linaro.org/mailman/listinfo/linaro-mm-sig
>>
>>
>
--===============6020385392849474677==--

From johan.mossberg@stericsson.com Fri Jan 11 12:38:41 2013
From: Johan Mossberg
To: linaro-mm-sig@lists.linaro.org
Subject: [Linaro-mm-sig] [PATCHv3] gpu: ion: Add support for sharing buffers with dma buf kernel handles
Date: Fri, 11 Jan 2013 13:38:13 +0100
Message-ID: <1357907893-3885-1-git-send-email-johan.mossberg@stericsson.com>
In-Reply-To: <1355394294-4721-1-git-send-email-johan.mossberg@stericsson.com>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="===============3307794191222564009=="

--===============3307794191222564009==
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: quoted-printable

Currently ion can only share buffers as dma-buf fds. Fds cannot be used
inside the kernel, since they are process specific, so support for sharing
buffers as dma-buf kernel handles is needed to cover kernel-only use cases.
An example use case is a GPU driver using ion that wants to share its
output buffers with a third-party display controller driver that supports
dma buf.
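A minimal sketch of the kernel-only flow described above (the client, handle
and importing device are placeholders; ion_share_dma_buf() is the new call
from this patch, the rest is the existing dma-buf attach/map API):

#include <linux/dma-buf.h>
#include <linux/err.h>
#include <linux/ion.h>

/*
 * Illustrative only: hand an ion allocation to another in-kernel driver as
 * a dma-buf without ever creating a userspace fd.
 */
static struct sg_table *example_share_with_device(struct ion_client *client,
						  struct ion_handle *handle,
						  struct device *importer)
{
	struct dma_buf *dmabuf;
	struct dma_buf_attachment *attach;
	struct sg_table *sgt;

	dmabuf = ion_share_dma_buf(client, handle);	/* kernel handle, no fd */
	if (IS_ERR(dmabuf))
		return ERR_CAST(dmabuf);

	attach = dma_buf_attach(dmabuf, importer);
	if (IS_ERR(attach)) {
		dma_buf_put(dmabuf);
		return ERR_CAST(attach);
	}

	sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
	if (IS_ERR(sgt)) {
		dma_buf_detach(dmabuf, attach);
		dma_buf_put(dmabuf);
	}

	return sgt;
}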
Signed-off-by: Johan Mossberg --- drivers/gpu/ion/ion.c | 26 ++++++++++++++++++++------ include/linux/ion.h | 12 ++++++++++-- 2 files changed, 30 insertions(+), 8 deletions(-) diff --git a/drivers/gpu/ion/ion.c b/drivers/gpu/ion/ion.c index 0fc02fd..8fd61b3 100644 --- a/drivers/gpu/ion/ion.c +++ b/drivers/gpu/ion/ion.c @@ -949,19 +949,19 @@ struct dma_buf_ops dma_buf_ops =3D { .kunmap =3D ion_dma_buf_kunmap, }; =20 -int ion_share_dma_buf(struct ion_client *client, struct ion_handle *handle) +struct dma_buf *ion_share_dma_buf(struct ion_client *client, + struct ion_handle *handle) { struct ion_buffer *buffer; struct dma_buf *dmabuf; bool valid_handle; - int fd; =20 mutex_lock(&client->lock); valid_handle =3D ion_handle_validate(client, handle); mutex_unlock(&client->lock); if (!valid_handle) { WARN(1, "%s: invalid handle passed to share.\n", __func__); - return -EINVAL; + return ERR_PTR(-EINVAL); } =20 buffer =3D handle->buffer; @@ -969,15 +969,29 @@ int ion_share_dma_buf(struct ion_client *client, struct= ion_handle *handle) dmabuf =3D dma_buf_export(buffer, &dma_buf_ops, buffer->size, O_RDWR); if (IS_ERR(dmabuf)) { ion_buffer_put(buffer); - return PTR_ERR(dmabuf); + return dmabuf; } + + return dmabuf; +} +EXPORT_SYMBOL(ion_share_dma_buf); + +int ion_share_dma_buf_fd(struct ion_client *client, struct ion_handle *handl= e) +{ + struct dma_buf *dmabuf; + int fd; + + dmabuf =3D ion_share_dma_buf(client, handle); + if (IS_ERR(dmabuf)) + return PTR_ERR(dmabuf); + fd =3D dma_buf_fd(dmabuf, O_CLOEXEC); if (fd < 0) dma_buf_put(dmabuf); =20 return fd; } -EXPORT_SYMBOL(ion_share_dma_buf); +EXPORT_SYMBOL(ion_share_dma_buf_fd); =20 struct ion_handle *ion_import_dma_buf(struct ion_client *client, int fd) { @@ -1086,7 +1100,7 @@ static long ion_ioctl(struct file *filp, unsigned int c= md, unsigned long arg) =20 if (copy_from_user(&data, (void __user *)arg, sizeof(data))) return -EFAULT; - data.fd =3D ion_share_dma_buf(client, data.handle); + data.fd =3D ion_share_dma_buf_fd(client, data.handle); if (copy_to_user((void __user *)arg, &data, sizeof(data))) return -EFAULT; if (data.fd < 0) diff --git a/include/linux/ion.h b/include/linux/ion.h index a55d11f..e2503e9 100644 --- a/include/linux/ion.h +++ b/include/linux/ion.h @@ -214,11 +214,19 @@ void *ion_map_kernel(struct ion_client *client, struct = ion_handle *handle); void ion_unmap_kernel(struct ion_client *client, struct ion_handle *handle); =20 /** - * ion_share_dma_buf() - given an ion client, create a dma-buf fd + * ion_share_dma_buf() - share buffer as dma-buf * @client: the client * @handle: the handle */ -int ion_share_dma_buf(struct ion_client *client, struct ion_handle *handle); +struct dma_buf *ion_share_dma_buf(struct ion_client *client, + struct ion_handle *handle); + +/** + * ion_share_dma_buf_fd() - given an ion client, create a dma-buf fd + * @client: the client + * @handle: the handle + */ +int ion_share_dma_buf_fd(struct ion_client *client, struct ion_handle *handl= e); =20 /** * ion_import_dma_buf() - given an dma-buf fd from the ion exporter get hand= le --=20 1.8.0 --===============3307794191222564009==-- From laurent.pinchart@ideasonboard.com Fri Jan 11 20:14:21 2013 From: Laurent Pinchart To: linaro-mm-sig@lists.linaro.org Subject: Re: [Linaro-mm-sig] Attendance at ELC San Francisco Date: Fri, 11 Jan 2013 21:16:02 +0100 Message-ID: <17411941.Hkb0iAyZuX@avalon> In-Reply-To: MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============7038260801157219539==" --===============7038260801157219539== Content-Type: 
text/plain; charset="utf-8" Content-Transfer-Encoding: 7bit Hi Jesse, On Thursday 10 January 2013 10:06:14 Jesse Barker wrote: > OK, so I've requested an afternoon slot (rather 2 slots, as the slot size > is 50 minutes, and I wanted to give us some wiggle room), so we should be > good. When the final conference agenda settles, I'll send word out again > for the sake of those that are local enough to turn up to the BoF, but > might not be attending the rest of the conference. Thank you for handling this. I've submitted a talk proposal for the ELC, have you added my name to the afternoon slot request to make sure both won't be scheduled at the same time ? > On Tue, Jan 8, 2013 at 8:45 AM, Erik Gilling wrote: > > I'd be interested. Also Greg Hackmann (CCed) who's been working a lot > > with display issues on our team might be interested in attending. > > > > Cheers, > > > > Erik > > > > On Tue, Jan 8, 2013 at 7:24 AM, Jesse Barker wrote: > >> Hi all, > >> > >> With all of the attention that the Common Display Framework is getting, I > >> was wondering if it was worth having a BoF discussion at ELC next month > >> in San Francisco. This will be only a couple of weeks after FOSDEM, but > >> given the pace that things seem to be moving, that could be a great > >> opportunity either to have a follow-on discussion, or simply to involve a > >> slightly different cross-section of the community in a face-to-face > >> discussion. If folks could let me know that they'll be at ELC and are > >> interested in a BoF there, I'll look into getting it set up. -- Regards, Laurent Pinchart --===============7038260801157219539==-- From laurent.pinchart@ideasonboard.com Fri Jan 11 20:25:21 2013 From: Laurent Pinchart To: linaro-mm-sig@lists.linaro.org Subject: [Linaro-mm-sig] CDF discussions at FOSDEM Date: Fri, 11 Jan 2013 21:27:03 +0100 Message-ID: <2665133.qfM3EnSmyB@avalon> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============3231233463045859542==" --===============3231233463045859542== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: 7bit Hi everybody, Would anyone be interested in meeting at the FOSDEM to discuss the Common Display Framework ? There will be a CDF meeting at the ELC at the end of February, the FOSDEM would be a good venue for European developers. -- Regards, Laurent Pinchart --===============3231233463045859542==-- From robdclark@gmail.com Fri Jan 11 22:27:50 2013 From: Rob Clark To: linaro-mm-sig@lists.linaro.org Subject: Re: [Linaro-mm-sig] CDF discussions at FOSDEM Date: Fri, 11 Jan 2013 16:27:49 -0600 Message-ID: In-Reply-To: <2665133.qfM3EnSmyB@avalon> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============3430798416741422719==" --===============3430798416741422719== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: 7bit On Fri, Jan 11, 2013 at 2:27 PM, Laurent Pinchart wrote: > Hi everybody, > > Would anyone be interested in meeting at the FOSDEM to discuss the Common > Display Framework ? There will be a CDF meeting at the ELC at the end of > February, the FOSDEM would be a good venue for European developers. sure, I'll be at FOSDEM.. 
I think sometime Sunday would be fine BR, -R > -- > Regards, > > Laurent Pinchart > > _______________________________________________ > dri-devel mailing list > dri-devel(a)lists.freedesktop.org > http://lists.freedesktop.org/mailman/listinfo/dri-devel --===============3430798416741422719==-- From smoch@web.de Mon Jan 14 11:57:18 2013 From: Soeren Moch To: linaro-mm-sig@lists.linaro.org Subject: Re: [Linaro-mm-sig] [PATCH v2] mm: dmapool: use provided gfp flags for all dma_alloc_coherent() calls Date: Mon, 14 Jan 2013 12:56:57 +0100 Message-ID: <50F3F289.3090402@web.de> In-Reply-To: <1353421905-3112-1-git-send-email-m.szyprowski@samsung.com> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============7077588642514402420==" --===============7077588642514402420== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: 7bit On 20.11.2012 15:31, Marek Szyprowski wrote: > dmapool always calls dma_alloc_coherent() with GFP_ATOMIC flag, > regardless the flags provided by the caller. This causes excessive > pruning of emergency memory pools without any good reason. Additionaly, > on ARM architecture any driver which is using dmapools will sooner or > later trigger the following error: > "ERROR: 256 KiB atomic DMA coherent pool is too small! > Please increase it with coherent_pool= kernel parameter!". > Increasing the coherent pool size usually doesn't help much and only > delays such error, because all GFP_ATOMIC DMA allocations are always > served from the special, very limited memory pool. > > This patch changes the dmapool code to correctly use gfp flags provided > by the dmapool caller. > > Reported-by: Soeren Moch > Reported-by: Thomas Petazzoni > Signed-off-by: Marek Szyprowski > Tested-by: Andrew Lunn > Tested-by: Soeren Moch Now I tested linux-3.7.1 (this patch is included there) on my Marvell Kirkwood system. I still see ERROR: 1024 KiB atomic DMA coherent pool is too small! Please increase it with coherent_pool= kernel parameter! after several hours of runtime under heavy load with SATA and DVB-Sticks (em28xx / drxk and dib0700). As already reported earlier this patch improved the behavior compared to linux-3.6.x and 3.7.0 (error after several ten minutes runtime), but I still see a regression compared to linux-3.5.x. With this kernel the same system with same workload runs flawlessly. Regards, Soeren --===============7077588642514402420==-- From m.b.lankhorst@gmail.com Tue Jan 15 12:34:18 2013 From: Maarten Lankhorst To: linaro-mm-sig@lists.linaro.org Subject: [Linaro-mm-sig] [PATCH 0/7] cross-device reservation for dma-buf support Date: Tue, 15 Jan 2013 13:33:57 +0100 Message-ID: <1358253244-11453-1-git-send-email-maarten.lankhorst@canonical.com> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============0923811922421093443==" --===============0923811922421093443== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable So I'm resending the patch series for reservations. This is identical to my g= it tree at http://cgit.freedesktop.org/~mlankhorst/linux/ Some changes have been made since my last version. Most notably is the use of mutexes now instead of creating my own lock primitive, that would end up being duplicate anyway. The git tree also has a version of i915 and radeon working together like that. It's probably a bit hacky, but it works on my macbook pro 8.2. :) I haven't had any reply on the mutex extensions when I sent them out separate= ly, so I'm including it in the series. 
The idea is that for lockdep purposes, the seqno is tied to a locking a class. This locking class it not exclusive, but as can be seen from the last patch in the series, it catches all violations we care about. --===============0923811922421093443==-- From m.b.lankhorst@gmail.com Tue Jan 15 12:34:21 2013 From: Maarten Lankhorst To: linaro-mm-sig@lists.linaro.org Subject: [Linaro-mm-sig] [PATCH 1/7] arch: add __mutex_fastpath_lock_retval_arg to generic/sh/x86/powerpc/ia64 Date: Tue, 15 Jan 2013 13:33:58 +0100 Message-ID: <1358253244-11453-2-git-send-email-maarten.lankhorst@canonical.com> In-Reply-To: <1358253244-11453-1-git-send-email-maarten.lankhorst@canonical.com> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============2649703312912048374==" --===============2649703312912048374== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Needed for reservation slowpath. --- arch/ia64/include/asm/mutex.h | 20 ++++++++++++++++++++ arch/powerpc/include/asm/mutex.h | 20 ++++++++++++++++++++ arch/sh/include/asm/mutex-llsc.h | 20 ++++++++++++++++++++ arch/x86/include/asm/mutex_32.h | 20 ++++++++++++++++++++ arch/x86/include/asm/mutex_64.h | 20 ++++++++++++++++++++ include/asm-generic/mutex-dec.h | 20 ++++++++++++++++++++ include/asm-generic/mutex-null.h | 1 + include/asm-generic/mutex-xchg.h | 21 +++++++++++++++++++++ 8 files changed, 142 insertions(+) diff --git a/arch/ia64/include/asm/mutex.h b/arch/ia64/include/asm/mutex.h index bed73a6..2510058 100644 --- a/arch/ia64/include/asm/mutex.h +++ b/arch/ia64/include/asm/mutex.h @@ -44,6 +44,26 @@ __mutex_fastpath_lock_retval(atomic_t *count, int (*fail_f= n)(atomic_t *)) } =20 /** + * __mutex_fastpath_lock_retval_arg - try to take the lock by moving the co= unt + * from 1 to a 0 value + * @count: pointer of type atomic_t + * @arg: argument to pass along if fastpath fails. + * @fail_fn: function to call if the original value was not 1 + * + * Change the count from 1 to a value lower than 1, and call if + * it wasn't 1 originally. This function returns 0 if the fastpath succeeds, + * or anything the slow path function returns. + */ +static inline int __mutex_fastpath_lock_retval_arg(atomic_t *count, + void *arg, int (*fail_fn)(atomic_t *, void*)) +{ + if (unlikely(ia64_fetchadd4_acq(count, -1) !=3D 1)) + return fail_fn(count, arg); + else + return 0; +} + +/** * __mutex_fastpath_unlock - try to promote the count from 0 to 1 * @count: pointer of type atomic_t * @fail_fn: function to call if the original value was not 0 diff --git a/arch/powerpc/include/asm/mutex.h b/arch/powerpc/include/asm/mute= x.h index 5399f7e..df4bcff 100644 --- a/arch/powerpc/include/asm/mutex.h +++ b/arch/powerpc/include/asm/mutex.h @@ -97,6 +97,26 @@ __mutex_fastpath_lock_retval(atomic_t *count, int (*fail_f= n)(atomic_t *)) } =20 /** + * __mutex_fastpath_lock_retval_arg - try to take the lock by moving the co= unt + * from 1 to a 0 value + * @count: pointer of type atomic_t + * @arg: argument to pass along if fastpath fails. + * @fail_fn: function to call if the original value was not 1 + * + * Change the count from 1 to a value lower than 1, and call if + * it wasn't 1 originally. This function returns 0 if the fastpath succeeds, + * or anything the slow path function returns. 
+ */ +static inline int __mutex_fastpath_lock_retval_arg(atomic_t *count, + void *arg, int (*fail_fn)(atomic_t *, void*)) +{ + if (unlikely(__mutex_dec_return_lock(count) < 0)) + return fail_fn(count, arg); + else + return 0; +} + +/** * __mutex_fastpath_unlock - try to promote the count from 0 to 1 * @count: pointer of type atomic_t * @fail_fn: function to call if the original value was not 0 diff --git a/arch/sh/include/asm/mutex-llsc.h b/arch/sh/include/asm/mutex-lls= c.h index 090358a..b68dd6d 100644 --- a/arch/sh/include/asm/mutex-llsc.h +++ b/arch/sh/include/asm/mutex-llsc.h @@ -56,6 +56,26 @@ __mutex_fastpath_lock_retval(atomic_t *count, int (*fail_f= n)(atomic_t *)) return __res; } =20 +static inline int __mutex_fastpath_lock_retval_arg(atomic_t *count, + void *arg, int (*fail_fn)(atomic_t *, void *)) +{ + int __done, __res; + + __asm__ __volatile__ ( + "movli.l @%2, %0 \n" + "add #-1, %0 \n" + "movco.l %0, @%2 \n" + "movt %1 \n" + : "=3D&z" (__res), "=3D&r" (__done) + : "r" (&(count)->counter) + : "t"); + + if (unlikely(!__done || __res !=3D 0)) + __res =3D fail_fn(count, arg); + + return __res; +} + static inline void __mutex_fastpath_unlock(atomic_t *count, void (*fail_fn)(atomic_t *)) { diff --git a/arch/x86/include/asm/mutex_32.h b/arch/x86/include/asm/mutex_32.h index 03f90c8..34f77f9 100644 --- a/arch/x86/include/asm/mutex_32.h +++ b/arch/x86/include/asm/mutex_32.h @@ -58,6 +58,26 @@ static inline int __mutex_fastpath_lock_retval(atomic_t *c= ount, } =20 /** + * __mutex_fastpath_lock_retval_arg - try to take the lock by moving the co= unt + * from 1 to a 0 value + * @count: pointer of type atomic_t + * @arg: argument to pass along if fastpath fails. + * @fail_fn: function to call if the original value was not 1 + * + * Change the count from 1 to a value lower than 1, and call if + * it wasn't 1 originally. This function returns 0 if the fastpath succeeds, + * or anything the slow path function returns. + */ +static inline int __mutex_fastpath_lock_retval_arg(atomic_t *count, + void *arg, int (*fail_fn)(atomic_t *, void*)) +{ + if (unlikely(atomic_dec_return(count) < 0)) + return fail_fn(count, arg); + else + return 0; +} + +/** * __mutex_fastpath_unlock - try to promote the mutex from 0 to 1 * @count: pointer of type atomic_t * @fail_fn: function to call if the original value was not 0 diff --git a/arch/x86/include/asm/mutex_64.h b/arch/x86/include/asm/mutex_64.h index 68a87b0..148249e 100644 --- a/arch/x86/include/asm/mutex_64.h +++ b/arch/x86/include/asm/mutex_64.h @@ -53,6 +53,26 @@ static inline int __mutex_fastpath_lock_retval(atomic_t *c= ount, } =20 /** + * __mutex_fastpath_lock_retval_arg - try to take the lock by moving the co= unt + * from 1 to a 0 value + * @count: pointer of type atomic_t + * @arg: argument to pass along if fastpath fails. + * @fail_fn: function to call if the original value was not 1 + * + * Change the count from 1 to a value lower than 1, and call if + * it wasn't 1 originally. This function returns 0 if the fastpath succeeds, + * or anything the slow path function returns. 
+ */ +static inline int __mutex_fastpath_lock_retval_arg(atomic_t *count, + void *arg, int (*fail_fn)(atomic_t *, void*)) +{ + if (unlikely(atomic_dec_return(count) < 0)) + return fail_fn(count, arg); + else + return 0; +} + +/** * __mutex_fastpath_unlock - increment and call function if nonpositive * @v: pointer of type atomic_t * @fail_fn: function to call if the result is nonpositive diff --git a/include/asm-generic/mutex-dec.h b/include/asm-generic/mutex-dec.h index f104af7..f5d027e 100644 --- a/include/asm-generic/mutex-dec.h +++ b/include/asm-generic/mutex-dec.h @@ -43,6 +43,26 @@ __mutex_fastpath_lock_retval(atomic_t *count, int (*fail_f= n)(atomic_t *)) } =20 /** + * __mutex_fastpath_lock_retval_arg - try to take the lock by moving the co= unt + * from 1 to a 0 value + * @count: pointer of type atomic_t + * @arg: argument to pass along if fastpath fails. + * @fail_fn: function to call if the original value was not 1 + * + * Change the count from 1 to a value lower than 1, and call if + * it wasn't 1 originally. This function returns 0 if the fastpath succeeds, + * or anything the slow path function returns. + */ +static inline int +__mutex_fastpath_lock_retval_arg(atomic_t *count, void *arg, + int (*fail_fn)(atomic_t *, void*)) +{ + if (unlikely(atomic_dec_return(count) < 0)) + return fail_fn(count, arg); + return 0; +} + +/** * __mutex_fastpath_unlock - try to promote the count from 0 to 1 * @count: pointer of type atomic_t * @fail_fn: function to call if the original value was not 0 diff --git a/include/asm-generic/mutex-null.h b/include/asm-generic/mutex-nul= l.h index e1bbbc7..991e9c3 100644 --- a/include/asm-generic/mutex-null.h +++ b/include/asm-generic/mutex-null.h @@ -12,6 +12,7 @@ =20 #define __mutex_fastpath_lock(count, fail_fn) fail_fn(count) #define __mutex_fastpath_lock_retval(count, fail_fn) fail_fn(count) +#define __mutex_fastpath_lock_retval_arg(count, arg, fail_fn) fail_fn(count,= arg) #define __mutex_fastpath_unlock(count, fail_fn) fail_fn(count) #define __mutex_fastpath_trylock(count, fail_fn) fail_fn(count) #define __mutex_slowpath_needs_to_unlock() 1 diff --git a/include/asm-generic/mutex-xchg.h b/include/asm-generic/mutex-xch= g.h index c04e0db..d9cc971 100644 --- a/include/asm-generic/mutex-xchg.h +++ b/include/asm-generic/mutex-xchg.h @@ -55,6 +55,27 @@ __mutex_fastpath_lock_retval(atomic_t *count, int (*fail_f= n)(atomic_t *)) } =20 /** + * __mutex_fastpath_lock_retval_arg - try to take the lock by moving the co= unt + * from 1 to a 0 value + * @count: pointer of type atomic_t + * @arg: argument to pass along if fastpath fails. + * @fail_fn: function to call if the original value was not 1 + * + * Change the count from 1 to a value lower than 1, and call if + * it wasn't 1 originally. This function returns 0 if the fastpath succeeds, + * or anything the slow path function returns. 
+ */ +static inline int +__mutex_fastpath_lock_retval_arg(atomic_t *count, void *arg, + int (*fail_fn)(atomic_t *, void*)) +{ + if (unlikely(atomic_xchg(count, 0) !=3D 1)) + if (likely(atomic_xchg(count, -1) !=3D 1)) + return fail_fn(count, arg); + return 0; +} + +/** * __mutex_fastpath_unlock - try to promote the mutex from 0 to 1 * @count: pointer of type atomic_t * @fail_fn: function to call if the original value was not 0 --=20 1.8.0.3 --===============2649703312912048374==-- From m.b.lankhorst@gmail.com Tue Jan 15 12:34:23 2013 From: Maarten Lankhorst To: linaro-mm-sig@lists.linaro.org Subject: [Linaro-mm-sig] [PATCH 2/7] mutex: add support for reservation style locks Date: Tue, 15 Jan 2013 13:33:59 +0100 Message-ID: <1358253244-11453-3-git-send-email-maarten.lankhorst@canonical.com> In-Reply-To: <1358253244-11453-1-git-send-email-maarten.lankhorst@canonical.com> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============6649326607882950734==" --===============6649326607882950734== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable makes it easier to port ttm over.. Signed-off-by: Maarten Lankhorst --- include/linux/mutex.h | 86 +++++++++++++- kernel/mutex.c | 317 +++++++++++++++++++++++++++++++++++++++++++++++-= -- 2 files changed, 387 insertions(+), 16 deletions(-) diff --git a/include/linux/mutex.h b/include/linux/mutex.h index 9121595..602c247 100644 --- a/include/linux/mutex.h +++ b/include/linux/mutex.h @@ -62,6 +62,11 @@ struct mutex { #endif }; =20 +struct ticket_mutex { + struct mutex base; + atomic_long_t reservation_id; +}; + /* * This is the control structure for tasks blocked on mutex, * which resides on the blocked task's kernel stack: @@ -109,12 +114,24 @@ static inline void mutex_destroy(struct mutex *lock) {} __DEBUG_MUTEX_INITIALIZER(lockname) \ __DEP_MAP_MUTEX_INITIALIZER(lockname) } =20 +#define __TICKET_MUTEX_INITIALIZER(lockname) \ + { .base =3D __MUTEX_INITIALIZER(lockname) \ + , .reservation_id =3D ATOMIC_LONG_INIT(0) } + #define DEFINE_MUTEX(mutexname) \ struct mutex mutexname =3D __MUTEX_INITIALIZER(mutexname) =20 extern void __mutex_init(struct mutex *lock, const char *name, struct lock_class_key *key); =20 +static inline void __ticket_mutex_init(struct ticket_mutex *lock, + const char *name, + struct lock_class_key *key) +{ + __mutex_init(&lock->base, name, key); + atomic_long_set(&lock->reservation_id, 0); +} + /** * mutex_is_locked - is the mutex locked * @lock: the mutex to be queried @@ -133,26 +150,91 @@ static inline int mutex_is_locked(struct mutex *lock) #ifdef CONFIG_DEBUG_LOCK_ALLOC extern void mutex_lock_nested(struct mutex *lock, unsigned int subclass); extern void _mutex_lock_nest_lock(struct mutex *lock, struct lockdep_map *ne= st_lock); + extern int __must_check mutex_lock_interruptible_nested(struct mutex *lock, unsigned int subclass); extern int __must_check mutex_lock_killable_nested(struct mutex *lock, unsigned int subclass); =20 +extern int __must_check _mutex_reserve_lock(struct ticket_mutex *lock, + struct lockdep_map *nest_lock, + unsigned long reservation_id); + +extern int __must_check _mutex_reserve_lock_interruptible(struct ticket_mute= x *, + struct lockdep_map *nest_lock, + unsigned long reservation_id); + +extern void _mutex_reserve_lock_slow(struct ticket_mutex *lock, + struct lockdep_map *nest_lock, + unsigned long reservation_id); + +extern int __must_check _mutex_reserve_lock_intr_slow(struct ticket_mutex *, + struct lockdep_map *nest_lock, + unsigned long reservation_id); + 
#define mutex_lock(lock) mutex_lock_nested(lock, 0) #define mutex_lock_interruptible(lock) mutex_lock_interruptible_nested(lock,= 0) #define mutex_lock_killable(lock) mutex_lock_killable_nested(lock, 0) =20 #define mutex_lock_nest_lock(lock, nest_lock) \ do { \ - typecheck(struct lockdep_map *, &(nest_lock)->dep_map); \ + typecheck(struct lockdep_map *, &(nest_lock)->dep_map); \ _mutex_lock_nest_lock(lock, &(nest_lock)->dep_map); \ } while (0) =20 +#define mutex_reserve_lock(lock, nest_lock, reservation_id) \ +({ \ + typecheck(struct lockdep_map *, &(nest_lock)->dep_map); \ + _mutex_reserve_lock(lock, &(nest_lock)->dep_map, reservation_id); \ +}) + +#define mutex_reserve_lock_interruptible(lock, nest_lock, reservation_id) \ +({ \ + typecheck(struct lockdep_map *, &(nest_lock)->dep_map); \ + _mutex_reserve_lock_interruptible(lock, &(nest_lock)->dep_map, \ + reservation_id); \ +}) + +#define mutex_reserve_lock_slow(lock, nest_lock, reservation_id) \ +do { \ + typecheck(struct lockdep_map *, &(nest_lock)->dep_map); \ + _mutex_reserve_lock_slow(lock, &(nest_lock)->dep_map, reservation_id); \ +} while (0) + +#define mutex_reserve_lock_intr_slow(lock, nest_lock, reservation_id) \ +({ \ + typecheck(struct lockdep_map *, &(nest_lock)->dep_map); \ + _mutex_reserve_lock_intr_slow(lock, &(nest_lock)->dep_map, \ + reservation_id); \ +}) + #else extern void mutex_lock(struct mutex *lock); extern int __must_check mutex_lock_interruptible(struct mutex *lock); extern int __must_check mutex_lock_killable(struct mutex *lock); =20 +extern int __must_check _mutex_reserve_lock(struct ticket_mutex *lock, + unsigned long reservation_id); +extern int __must_check _mutex_reserve_lock_interruptible(struct ticket_mute= x *, + unsigned long reservation_id); + +extern void _mutex_reserve_lock_slow(struct ticket_mutex *lock, + unsigned long reservation_id); +extern int __must_check _mutex_reserve_lock_intr_slow(struct ticket_mutex *, + unsigned long reservation_id); + +#define mutex_reserve_lock(lock, nest_lock, reservation_id) \ + _mutex_reserve_lock(lock, reservation_id) + +#define mutex_reserve_lock_interruptible(lock, nest_lock, reservation_id) \ + _mutex_reserve_lock_interruptible(lock, reservation_id) + +#define mutex_reserve_lock_slow(lock, nest_lock, reservation_id) \ + _mutex_reserve_lock_slow(lock, reservation_id) + +#define mutex_reserve_lock_intr_slow(lock, nest_lock, reservation_id) \ + _mutex_reserve_lock_intr_slow(lock, reservation_id) + # define mutex_lock_nested(lock, subclass) mutex_lock(lock) # define mutex_lock_interruptible_nested(lock, subclass) mutex_lock_interrup= tible(lock) # define mutex_lock_killable_nested(lock, subclass) mutex_lock_killable(lock) @@ -167,6 +249,8 @@ extern int __must_check mutex_lock_killable(struct mutex = *lock); */ extern int mutex_trylock(struct mutex *lock); extern void mutex_unlock(struct mutex *lock); +extern void mutex_unreserve_unlock(struct ticket_mutex *lock); + extern int atomic_dec_and_mutex_lock(atomic_t *cnt, struct mutex *lock); =20 #ifndef CONFIG_HAVE_ARCH_MUTEX_CPU_RELAX diff --git a/kernel/mutex.c b/kernel/mutex.c index a307cc9..8282729 100644 --- a/kernel/mutex.c +++ b/kernel/mutex.c @@ -126,16 +126,119 @@ void __sched mutex_unlock(struct mutex *lock) =20 EXPORT_SYMBOL(mutex_unlock); =20 +/** + * mutex_unreserve_unlock - release the mutex + * @lock: the mutex to be released + * + * Unlock a mutex that has been locked by this task previously + * with _mutex_reserve_lock*. + * + * This function must not be used in interrupt context. 
Unlocking + * of a not locked mutex is not allowed. + */ +void __sched mutex_unreserve_unlock(struct ticket_mutex *lock) +{ + /* + * mark mutex as no longer part of a reservation, next + * locker can set this again + */ + atomic_long_set(&lock->reservation_id, 0); + + /* + * The unlocking fastpath is the 0->1 transition from 'locked' + * into 'unlocked' state: + */ +#ifndef CONFIG_DEBUG_MUTEXES + /* + * When debugging is enabled we must not clear the owner before time, + * the slow path will always be taken, and that clears the owner field + * after verifying that it was indeed current. + */ + mutex_clear_owner(&lock->base); +#endif + __mutex_fastpath_unlock(&lock->base.count, __mutex_unlock_slowpath); +} +EXPORT_SYMBOL(mutex_unreserve_unlock); + +static inline int __sched +__mutex_lock_check_reserve(struct mutex *lock, unsigned long reservation_id) +{ + struct ticket_mutex *m =3D container_of(lock, struct ticket_mutex, base); + unsigned long cur_id; + + cur_id =3D atomic_long_read(&m->reservation_id); + if (!cur_id) + return 0; + + if (unlikely(reservation_id =3D=3D cur_id)) + return -EDEADLK; + + if (unlikely(reservation_id - cur_id <=3D LONG_MAX)) + return -EAGAIN; + + return 0; +} + +/* + * after acquiring lock with fastpath or when we lost out in contested + * slowpath, set reservation_id and wake up any waiters so they can recheck. + */ +static __always_inline void +mutex_set_reservation_fastpath(struct ticket_mutex *lock, + unsigned long reservation_id, bool check_res) +{ + unsigned long flags; + struct mutex_waiter *cur; + + if (check_res || config_enabled(CONFIG_DEBUG_LOCK_ALLOC)) { + unsigned long cur_id; + + cur_id =3D atomic_long_xchg(&lock->reservation_id, + reservation_id); +#ifdef CONFIG_DEBUG_LOCK_ALLOC + if (check_res) + DEBUG_LOCKS_WARN_ON(cur_id && + cur_id !=3D reservation_id); + else + DEBUG_LOCKS_WARN_ON(cur_id); + lockdep_assert_held(&lock->base); +#endif + + if (unlikely(cur_id =3D=3D reservation_id)) + return; + } else + atomic_long_set(&lock->reservation_id, reservation_id); + + /* + * Check if lock is contended, if not there is nobody to wake up + */ + if (likely(atomic_read(&lock->base.count) =3D=3D 0)) + return; + + /* + * Uh oh, we raced in fastpath, wake up everyone in this case, + * so they can see the new reservation_id + */ + spin_lock_mutex(&lock->base.wait_lock, flags); + list_for_each_entry(cur, &lock->base.wait_list, list) { + debug_mutex_wake_waiter(&lock->base, cur); + wake_up_process(cur->task); + } + spin_unlock_mutex(&lock->base.wait_lock, flags); +} + /* * Lock a mutex (possibly interruptible), slowpath: */ static inline int __sched __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass, - struct lockdep_map *nest_lock, unsigned long ip) + struct lockdep_map *nest_lock, unsigned long ip, + unsigned long reservation_id, bool res_slow) { struct task_struct *task =3D current; struct mutex_waiter waiter; unsigned long flags; + int ret; =20 preempt_disable(); mutex_acquire_nest(&lock->dep_map, subclass, 0, nest_lock, ip); @@ -162,6 +265,12 @@ __mutex_lock_common(struct mutex *lock, long state, unsi= gned int subclass, for (;;) { struct task_struct *owner; =20 + if (!__builtin_constant_p(reservation_id) && !res_slow) { + ret =3D __mutex_lock_check_reserve(lock, reservation_id); + if (ret) + goto err_nowait; + } + /* * If there's an owner, wait for it to either * release the lock or go to sleep. 
@@ -172,6 +281,13 @@ __mutex_lock_common(struct mutex *lock, long state, unsi= gned int subclass, =20 if (atomic_cmpxchg(&lock->count, 1, 0) =3D=3D 1) { lock_acquired(&lock->dep_map, ip); + if (res_slow) { + struct ticket_mutex *m; + m =3D container_of(lock, struct ticket_mutex, base); + + mutex_set_reservation_fastpath(m, reservation_id, false); + } + mutex_set_owner(lock); preempt_enable(); return 0; @@ -227,15 +343,16 @@ __mutex_lock_common(struct mutex *lock, long state, uns= igned int subclass, * TASK_UNINTERRUPTIBLE case.) */ if (unlikely(signal_pending_state(state, task))) { - mutex_remove_waiter(lock, &waiter, - task_thread_info(task)); - mutex_release(&lock->dep_map, 1, ip); - spin_unlock_mutex(&lock->wait_lock, flags); + ret =3D -EINTR; + goto err; + } =20 - debug_mutex_free_waiter(&waiter); - preempt_enable(); - return -EINTR; + if (!__builtin_constant_p(reservation_id) && !res_slow) { + ret =3D __mutex_lock_check_reserve(lock, reservation_id); + if (ret) + goto err; } + __set_task_state(task, state); =20 /* didn't get the lock, go to sleep: */ @@ -250,6 +367,28 @@ done: mutex_remove_waiter(lock, &waiter, current_thread_info()); mutex_set_owner(lock); =20 + if (!__builtin_constant_p(reservation_id)) { + struct ticket_mutex *m; + struct mutex_waiter *cur; + /* + * this should get optimized out for the common case, + * and is only important for _mutex_reserve_lock + */ + + m =3D container_of(lock, struct ticket_mutex, base); + atomic_long_set(&m->reservation_id, reservation_id); + + /* + * give any possible sleeping processes the chance to wake up, + * so they can recheck if they have to back off from + * reservations + */ + list_for_each_entry(cur, &lock->wait_list, list) { + debug_mutex_wake_waiter(lock, cur); + wake_up_process(cur->task); + } + } + /* set it to 0 if there are no waiters left: */ if (likely(list_empty(&lock->wait_list))) atomic_set(&lock->count, 0); @@ -260,6 +399,19 @@ done: preempt_enable(); =20 return 0; + +err: + mutex_remove_waiter(lock, &waiter, task_thread_info(task)); + spin_unlock_mutex(&lock->wait_lock, flags); + debug_mutex_free_waiter(&waiter); + +#ifdef CONFIG_MUTEX_SPIN_ON_OWNER +err_nowait: +#endif + mutex_release(&lock->dep_map, 1, ip); + + preempt_enable(); + return ret; } =20 #ifdef CONFIG_DEBUG_LOCK_ALLOC @@ -267,7 +419,8 @@ void __sched mutex_lock_nested(struct mutex *lock, unsigned int subclass) { might_sleep(); - __mutex_lock_common(lock, TASK_UNINTERRUPTIBLE, subclass, NULL, _RET_IP_); + __mutex_lock_common(lock, TASK_UNINTERRUPTIBLE, + subclass, NULL, _RET_IP_, 0, 0); } =20 EXPORT_SYMBOL_GPL(mutex_lock_nested); @@ -276,7 +429,8 @@ void __sched _mutex_lock_nest_lock(struct mutex *lock, struct lockdep_map *nest) { might_sleep(); - __mutex_lock_common(lock, TASK_UNINTERRUPTIBLE, 0, nest, _RET_IP_); + __mutex_lock_common(lock, TASK_UNINTERRUPTIBLE, + 0, nest, _RET_IP_, 0, 0); } =20 EXPORT_SYMBOL_GPL(_mutex_lock_nest_lock); @@ -285,7 +439,8 @@ int __sched mutex_lock_killable_nested(struct mutex *lock, unsigned int subclass) { might_sleep(); - return __mutex_lock_common(lock, TASK_KILLABLE, subclass, NULL, _RET_IP_); + return __mutex_lock_common(lock, TASK_KILLABLE, + subclass, NULL, _RET_IP_, 0, 0); } EXPORT_SYMBOL_GPL(mutex_lock_killable_nested); =20 @@ -294,10 +449,63 @@ mutex_lock_interruptible_nested(struct mutex *lock, uns= igned int subclass) { might_sleep(); return __mutex_lock_common(lock, TASK_INTERRUPTIBLE, - subclass, NULL, _RET_IP_); + subclass, NULL, _RET_IP_, 0, 0); } =20 EXPORT_SYMBOL_GPL(mutex_lock_interruptible_nested); + +int 
__sched +_mutex_reserve_lock(struct ticket_mutex *lock, struct lockdep_map *nest, + unsigned long reservation_id) +{ + DEBUG_LOCKS_WARN_ON(!reservation_id); + + might_sleep(); + return __mutex_lock_common(&lock->base, TASK_UNINTERRUPTIBLE, + 0, nest, _RET_IP_, reservation_id, 0); +} +EXPORT_SYMBOL_GPL(_mutex_reserve_lock); + + +int __sched +_mutex_reserve_lock_interruptible(struct ticket_mutex *lock, + struct lockdep_map *nest, + unsigned long reservation_id) +{ + DEBUG_LOCKS_WARN_ON(!reservation_id); + + might_sleep(); + return __mutex_lock_common(&lock->base, TASK_INTERRUPTIBLE, + 0, nest, _RET_IP_, reservation_id, 0); +} +EXPORT_SYMBOL_GPL(_mutex_reserve_lock_interruptible); + +void __sched +_mutex_reserve_lock_slow(struct ticket_mutex *lock, struct lockdep_map *nest, + unsigned long reservation_id) +{ + DEBUG_LOCKS_WARN_ON(!reservation_id); + + might_sleep(); + __mutex_lock_common(&lock->base, TASK_UNINTERRUPTIBLE, 0, + nest, _RET_IP_, reservation_id, 1); +} +EXPORT_SYMBOL_GPL(_mutex_reserve_lock_slow); + +int __sched +_mutex_reserve_lock_intr_slow(struct ticket_mutex *lock, + struct lockdep_map *nest, + unsigned long reservation_id) +{ + DEBUG_LOCKS_WARN_ON(!reservation_id); + + might_sleep(); + return __mutex_lock_common(&lock->base, TASK_INTERRUPTIBLE, 0, + nest, _RET_IP_, reservation_id, 1); +} +EXPORT_SYMBOL_GPL(_mutex_reserve_lock_intr_slow); + + #endif =20 /* @@ -400,7 +608,8 @@ __mutex_lock_slowpath(atomic_t *lock_count) { struct mutex *lock =3D container_of(lock_count, struct mutex, count); =20 - __mutex_lock_common(lock, TASK_UNINTERRUPTIBLE, 0, NULL, _RET_IP_); + __mutex_lock_common(lock, TASK_UNINTERRUPTIBLE, 0, + NULL, _RET_IP_, 0, 0); } =20 static noinline int __sched @@ -408,7 +617,8 @@ __mutex_lock_killable_slowpath(atomic_t *lock_count) { struct mutex *lock =3D container_of(lock_count, struct mutex, count); =20 - return __mutex_lock_common(lock, TASK_KILLABLE, 0, NULL, _RET_IP_); + return __mutex_lock_common(lock, TASK_KILLABLE, 0, + NULL, _RET_IP_, 0, 0); } =20 static noinline int __sched @@ -416,8 +626,28 @@ __mutex_lock_interruptible_slowpath(atomic_t *lock_count) { struct mutex *lock =3D container_of(lock_count, struct mutex, count); =20 - return __mutex_lock_common(lock, TASK_INTERRUPTIBLE, 0, NULL, _RET_IP_); + return __mutex_lock_common(lock, TASK_INTERRUPTIBLE, 0, + NULL, _RET_IP_, 0, 0); +} + +static noinline int __sched +__mutex_lock_reserve_slowpath(atomic_t *lock_count, void *rid) +{ + struct mutex *lock =3D container_of(lock_count, struct mutex, count); + + return __mutex_lock_common(lock, TASK_UNINTERRUPTIBLE, 0, + NULL, _RET_IP_, (unsigned long)rid, 0); +} + +static noinline int __sched +__mutex_lock_interruptible_reserve_slowpath(atomic_t *lock_count, void *rid) +{ + struct mutex *lock =3D container_of(lock_count, struct mutex, count); + + return __mutex_lock_common(lock, TASK_INTERRUPTIBLE, 0, + NULL, _RET_IP_, (unsigned long)rid, 0); } + #endif =20 /* @@ -473,6 +703,63 @@ int __sched mutex_trylock(struct mutex *lock) } EXPORT_SYMBOL(mutex_trylock); =20 +#ifndef CONFIG_DEBUG_LOCK_ALLOC +int __sched +_mutex_reserve_lock(struct ticket_mutex *lock, unsigned long rid) +{ + int ret; + + might_sleep(); + + ret =3D __mutex_fastpath_lock_retval_arg(&lock->base.count, (void *)rid, + __mutex_lock_reserve_slowpath); + + if (!ret) { + mutex_set_reservation_fastpath(lock, rid, true); + mutex_set_owner(&lock->base); + } + return ret; +} +EXPORT_SYMBOL(_mutex_reserve_lock); + +int __sched +_mutex_reserve_lock_interruptible(struct ticket_mutex *lock, unsigned long r= 
id) +{ + int ret; + + might_sleep(); + + ret =3D __mutex_fastpath_lock_retval_arg(&lock->base.count, (void *)rid, + __mutex_lock_interruptible_reserve_slowpath); + + if (!ret) { + mutex_set_reservation_fastpath(lock, rid, true); + mutex_set_owner(&lock->base); + } + return ret; +} +EXPORT_SYMBOL(_mutex_reserve_lock_interruptible); + +void __sched +_mutex_reserve_lock_slow(struct ticket_mutex *lock, unsigned long rid) +{ + might_sleep(); + __mutex_lock_common(&lock->base, TASK_UNINTERRUPTIBLE, + 0, NULL, _RET_IP_, rid, 1); +} +EXPORT_SYMBOL(_mutex_reserve_lock_slow); + +int __sched +_mutex_reserve_lock_intr_slow(struct ticket_mutex *lock, unsigned long rid) +{ + might_sleep(); + return __mutex_lock_common(&lock->base, TASK_INTERRUPTIBLE, + 0, NULL, _RET_IP_, rid, 1); +} +EXPORT_SYMBOL(_mutex_reserve_lock_intr_slow); + +#endif + /** * atomic_dec_and_mutex_lock - return holding mutex if we dec to 0 * @cnt: the atomic which we are to dec --=20 1.8.0.3 --===============6649326607882950734==-- From m.b.lankhorst@gmail.com Tue Jan 15 12:34:26 2013 From: Maarten Lankhorst To: linaro-mm-sig@lists.linaro.org Subject: [Linaro-mm-sig] [PATCH 3/7] sched: allow try_to_wake_up to be used internally outside of core.c Date: Tue, 15 Jan 2013 13:34:00 +0100 Message-ID: <1358253244-11453-4-git-send-email-maarten.lankhorst@canonical.com> In-Reply-To: <1358253244-11453-1-git-send-email-maarten.lankhorst@canonical.com> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============3285053794034626414==" --===============3285053794034626414== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Not exported, since only used by the fence implementation. Signed-off-by: Maarten Lankhorst --- include/linux/wait.h | 1 + kernel/sched/core.c | 2 +- 2 files changed, 2 insertions(+), 1 deletion(-) diff --git a/include/linux/wait.h b/include/linux/wait.h index 7cb64d4..7aaba95 100644 --- a/include/linux/wait.h +++ b/include/linux/wait.h @@ -11,6 +11,7 @@ typedef struct __wait_queue wait_queue_t; typedef int (*wait_queue_func_t)(wait_queue_t *wait, unsigned mode, int flag= s, void *key); int default_wake_function(wait_queue_t *wait, unsigned mode, int flags, void= *key); +int try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags= ); =20 struct __wait_queue { unsigned int flags; diff --git a/kernel/sched/core.c b/kernel/sched/core.c index 257002c..5f23fe3 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -1425,7 +1425,7 @@ static void ttwu_queue(struct task_struct *p, int cpu) * Returns %true if @p was woken up, %false if it was already running * or @state didn't match @p's state. 
*/ -static int +int try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags) { unsigned long flags; --=20 1.8.0.3 --===============3285053794034626414==-- From m.b.lankhorst@gmail.com Tue Jan 15 12:34:29 2013 From: Maarten Lankhorst To: linaro-mm-sig@lists.linaro.org Subject: [Linaro-mm-sig] [PATCH 4/7] fence: dma-buf cross-device synchronization (v11) Date: Tue, 15 Jan 2013 13:34:01 +0100 Message-ID: <1358253244-11453-5-git-send-email-maarten.lankhorst@canonical.com> In-Reply-To: <1358253244-11453-1-git-send-email-maarten.lankhorst@canonical.com> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============2000565884478934209==" --===============2000565884478934209== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable A fence can be attached to a buffer which is being filled or consumed by hw, to allow userspace to pass the buffer without waiting to another device. For example, userspace can call page_flip ioctl to display the next frame of graphics after kicking the GPU but while the GPU is still rendering. The display device sharing the buffer with the GPU would attach a callback to get notified when the GPU's rendering-complete IRQ fires, to update the scan-out address of the display, without having to wake up userspace. A driver must allocate a fence context for each execution ring that can run in parallel. The function for this takes an argument with how many contexts to allocate: + fence_context_alloc() A fence is transient, one-shot deal. It is allocated and attached to one or more dma-buf's. When the one that attached it is done, with the pending operation, it can signal the fence: + fence_signal() To have a rough approximation whether a fence is fired, call: + fence_is_signaled() The dma-buf-mgr handles tracking, and waiting on, the fences associated with a dma-buf. The one pending on the fence can add an async callback: + fence_add_callback() The callback can optionally be cancelled with: + fence_remove_callback() To wait synchronously, optionally with a timeout: + fence_wait() + fence_wait_timeout() A default software-only implementation is provided, which can be used by drivers attaching a fence to a buffer when they have no other means for hw sync. But a memory backed fence is also envisioned, because it is common that GPU's can write to, or poll on some memory location for synchronization. For example: fence =3D custom_get_fence(...); if ((seqno_fence =3D to_seqno_fence(fence)) !=3D NULL) { dma_buf *fence_buf =3D fence->sync_buf; get_dma_buf(fence_buf); ... tell the hw the memory location to wait ... custom_wait_on(fence_buf, fence->seqno_ofs, fence->seqno); } else { /* fall-back to sw sync * / fence_add_callback(fence, my_cb); } On SoC platforms, if some other hw mechanism is provided for synchronizing between IP blocks, it could be supported as an alternate implementation with it's own fence ops in a similar way. enable_signaling callback is used to provide sw signaling in case a cpu waiter is requested or no compatible hardware signaling could be used. The intention is to provide a userspace interface (presumably via eventfd) later, to be used in conjunction with dma-buf's mmap support for sw access to buffers (or for userspace apps that would prefer to do their own synchronization). 
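To make the flow above concrete, here is a minimal sketch of a software-only fence provider. This is hypothetical driver code, not part of the patch: my_ring, my_ring_emit() and my_ring_irq() are invented names, refcounting and error paths are omitted, and only struct fence_ops, __fence_init(), fence_default_wait() and fence_signal() come from the patch itself.

    #include <linux/fence.h>
    #include <linux/interrupt.h>
    #include <linux/slab.h>
    #include <linux/spinlock.h>

    struct my_ring {
            unsigned context;          /* from fence_context_alloc(1) at init */
            unsigned seqno;            /* last emitted sequence number */
            struct fence *last_fence;  /* fence for the last emitted job */
    };

    static DEFINE_SPINLOCK(my_ring_lock);

    static bool my_ring_enable_signaling(struct fence *fence)
    {
            /* the completion irq is always armed in this toy example */
            return true;
    }

    static const struct fence_ops my_ring_fence_ops = {
            .enable_signaling = my_ring_enable_signaling,
            .wait = fence_default_wait,
    };

    /* producer: emit a job and return the fence that tracks it */
    struct fence *my_ring_emit(struct my_ring *ring)
    {
            struct fence *f = kmalloc(sizeof(*f), GFP_KERNEL);

            if (!f)
                    return NULL;
            __fence_init(f, &my_ring_fence_ops, &my_ring_lock,
                         ring->context, ++ring->seqno);
            ring->last_fence = f;
            /* ... attach f to the dma-buf and kick the hardware ... */
            return f;
    }

    /* producer: completion interrupt, runs callbacks and wakes waiters */
    static irqreturn_t my_ring_irq(int irq, void *data)
    {
            struct my_ring *ring = data;

            fence_signal(ring->last_fence);
            return IRQ_HANDLED;
    }

A consumer that received such a fence either blocks with fence_wait(f, true) or registers a fence_cb with fence_add_callback(); whichever happens first arms enable_signaling().
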
v1: Original

v2: After discussion w/ danvet and mlankhorst on #dri-devel, we decided
    that dma-fence didn't need to care about the sw->hw signaling path
    (it can be handled the same as the sw->sw case), and therefore the
    fence->ops can be simplified and more handled in the core.  So remove
    the signal, add_callback, cancel_callback, and wait ops, and replace
    them with a simple enable_signaling() op which can be used to inform a
    fence supporting hw->hw signaling that one or more devices which do
    not support hw signaling are waiting (and therefore it should enable
    an irq or do whatever is necessary in order that the CPU is notified
    when the fence is passed).

v3: Fix locking fail in attach_fence() and get_fence()

v4: Remove tie-in w/ dma-buf.  After discussion w/ danvet and mlankhorst
    we decided that we need to be able to attach one fence to N dma-buf's,
    so using the list_head in the dma-fence struct would be problematic.

v5: [ Maarten Lankhorst ] Updated for dma-bikeshed-fence and dma-buf-manager.

v6: [ Maarten Lankhorst ] I removed dma_fence_cancel_callback and some
    comments about checking if the fence fired or not.  This is broken by
    design.  waitqueue_active during destruction is now fatal, since the
    signaller should be holding a reference in enable_signaling until it
    has signalled the fence.  Pass the original dma_fence_cb along, and
    call __remove_wait in the dma_fence_callback handler, so that no
    cleanup needs to be performed.

v7: [ Maarten Lankhorst ] Set cb->func and only enable sw signaling if the
    fence wasn't signaled yet, for example for hardware fences that may
    choose to signal blindly.

v8: [ Maarten Lankhorst ] Tons of tiny fixes, moved __dma_fence_init to
    the header and fixed the include mess.  dma-fence.h now includes
    dma-buf.h.  All members are now initialized, so kmalloc can be used
    for allocating a dma-fence.  More documentation added.

v9: Change compiler bitfields to flags, change return type of
    enable_signaling to bool.  Rework dma_fence_wait.  Added
    dma_fence_is_signaled and dma_fence_wait_timeout.
    s/dma// and change exports to non-GPL.  Added fence_is_signaled and
    fence_enable_sw_signaling calls, add ability to override the default
    wait operation.

v10: Remove event_queue, use a custom list, export try_to_wake_up from
    the scheduler.  Remove the fence lock and use a global spinlock
    instead; this should hopefully remove all the locking headaches I was
    having while trying to implement this.  enable_signaling is called
    with this lock held.

v11: Use atomic ops for flags, lifting the need for some
    spin_lock_irqsaves.  However I kept the guarantee that after
    fence_signal returns, enable_signaling has either been called to
    completion, or will not be called any more.

    Add contexts and seqno to the base fence implementation.  This allows
    you to wait on fewer fences, by testing for seqno + signaled, and then
    only waiting on the later fence.

    Add FENCE_TRACE, FENCE_WARN, and FENCE_ERR.  This makes debugging
    easier.  A CONFIG_DEBUG_FENCE option will be added to turn off the
    FENCE_TRACE spam, and another runtime option can turn it off at
    runtime.
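As a rough illustration of the v11 context/seqno comparison, the sketch below shows hypothetical caller code (my_ring_sync is an invented helper; only fence_later() and fence_wait() come from this patch, and both fences must belong to the same context):

    /* hypothetical helper: wait for the later of two fences from one context */
    static long my_ring_sync(struct fence *f1, struct fence *f2)
    {
            struct fence *last = fence_later(f1, f2);

            if (!last)                /* both fences have already signaled */
                    return 0;

            /* waiting on the chronologically later fence covers both */
            return fence_wait(last, true);
    }
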
Signed-off-by: Maarten Lankhorst --- Documentation/DocBook/device-drivers.tmpl | 2 + drivers/base/Makefile | 2 +- drivers/base/fence.c | 286 ++++++++++++++++++++++++ include/linux/fence.h | 347 ++++++++++++++++++++++++++++= ++ 4 files changed, 636 insertions(+), 1 deletion(-) create mode 100644 drivers/base/fence.c create mode 100644 include/linux/fence.h diff --git a/Documentation/DocBook/device-drivers.tmpl b/Documentation/DocBoo= k/device-drivers.tmpl index 7514dbf..6f53fc0 100644 --- a/Documentation/DocBook/device-drivers.tmpl +++ b/Documentation/DocBook/device-drivers.tmpl @@ -126,6 +126,8 @@ X!Edrivers/base/interface.c Device Drivers DMA Management !Edrivers/base/dma-buf.c +!Edrivers/base/fence.c +!Iinclude/linux/fence.h !Edrivers/base/dma-coherent.c !Edrivers/base/dma-mapping.c diff --git a/drivers/base/Makefile b/drivers/base/Makefile index 5aa2d70..0026563 100644 --- a/drivers/base/Makefile +++ b/drivers/base/Makefile @@ -10,7 +10,7 @@ obj-$(CONFIG_CMA) +=3D dma-contiguous.o obj-y +=3D power/ obj-$(CONFIG_HAS_DMA) +=3D dma-mapping.o obj-$(CONFIG_HAVE_GENERIC_DMA_COHERENT) +=3D dma-coherent.o -obj-$(CONFIG_DMA_SHARED_BUFFER) +=3D dma-buf.o +obj-$(CONFIG_DMA_SHARED_BUFFER) +=3D dma-buf.o fence.o obj-$(CONFIG_ISA) +=3D isa.o obj-$(CONFIG_FW_LOADER) +=3D firmware_class.o obj-$(CONFIG_NUMA) +=3D node.o diff --git a/drivers/base/fence.c b/drivers/base/fence.c new file mode 100644 index 0000000..28e5ffd --- /dev/null +++ b/drivers/base/fence.c @@ -0,0 +1,286 @@ +/* + * Fence mechanism for dma-buf and to allow for asynchronous dma access + * + * Copyright (C) 2012 Canonical Ltd + * Copyright (C) 2012 Texas Instruments + * + * Authors: + * Rob Clark + * Maarten Lankhorst + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License version 2 as published = by + * the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, but WITHO= UT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * You should have received a copy of the GNU General Public License along w= ith + * this program. If not, see . + */ + +#include +#include +#include + +atomic_t fence_context_counter =3D ATOMIC_INIT(0); +EXPORT_SYMBOL(fence_context_counter); + +int __fence_signal(struct fence *fence) +{ + struct fence_cb *cur, *tmp; + int ret =3D 0; + + if (WARN_ON(!fence)) + return -EINVAL; + + if (test_and_set_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags)) { + ret =3D -EINVAL; + + /* + * we might have raced with the unlocked fence_signal, + * still run through all callbacks + */ + } + + list_for_each_entry_safe(cur, tmp, &fence->cb_list, node) { + list_del_init(&cur->node); + cur->func(fence, cur, cur->priv); + } + return ret; +} +EXPORT_SYMBOL(__fence_signal); + +/** + * fence_signal - signal completion of a fence + * @fence: the fence to signal + * + * Signal completion for software callbacks on a fence, this will unblock + * fence_wait() calls and run all the callbacks added with + * fence_add_callback(). Can be called multiple times, but since a fence + * can only go from unsignaled to signaled state, it will only be effective + * the first time. 
+ */ +int fence_signal(struct fence *fence) +{ + unsigned long flags; + + if (!fence || test_and_set_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags)) + return -EINVAL; + + if (test_bit(FENCE_FLAG_ENABLE_SIGNAL_BIT, &fence->flags)) { + struct fence_cb *cur, *tmp; + + spin_lock_irqsave(fence->lock, flags); + list_for_each_entry_safe(cur, tmp, &fence->cb_list, node) { + list_del_init(&cur->node); + cur->func(fence, cur, cur->priv); + } + spin_unlock_irqrestore(fence->lock, flags); + } + return 0; +} +EXPORT_SYMBOL(fence_signal); + +void release_fence(struct kref *kref) +{ + struct fence *fence =3D + container_of(kref, struct fence, refcount); + + BUG_ON(!list_empty(&fence->cb_list)); + + if (fence->ops->release) + fence->ops->release(fence); + else + kfree(fence); +} +EXPORT_SYMBOL(release_fence); + +/** + * fence_enable_sw_signaling - enable signaling on fence + * @fence: [in] the fence to enable + * + * this will request for sw signaling to be enabled, to make the fence + * complete as soon as possible + */ +void fence_enable_sw_signaling(struct fence *fence) +{ + unsigned long flags; + + if (!test_and_set_bit(FENCE_FLAG_ENABLE_SIGNAL_BIT, &fence->flags) && + !test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags)) { + spin_lock_irqsave(fence->lock, flags); + + if (!fence->ops->enable_signaling(fence)) + __fence_signal(fence); + + spin_unlock_irqrestore(fence->lock, flags); + } +} +EXPORT_SYMBOL(fence_enable_sw_signaling); + +/** + * fence_add_callback - add a callback to be called when the fence + * is signaled + * @fence: [in] the fence to wait on + * @cb: [in] the callback to register + * @func: [in] the function to call + * @priv: [in] the argument to pass to function + * + * cb will be initialized by fence_add_callback, no initialization + * by the caller is required. Any number of callbacks can be registered + * to a fence, but a callback can only be registered to one fence at a time. + * + * Note that the callback can be called from an atomic context. If + * fence is already signaled, this function will return -ENOENT (and + * *not* call the callback) + * + * Add a software callback to the fence. Same restrictions apply to + * refcount as it does to fence_wait, however the caller doesn't need to + * keep a refcount to fence afterwards: when software access is enabled, + * the creator of the fence is required to keep the fence alive until + * after it signals with fence_signal. The callback itself can be called + * from irq context. + * + */ +int fence_add_callback(struct fence *fence, struct fence_cb *cb, + fence_func_t func, void *priv) +{ + unsigned long flags; + int ret =3D 0; + bool was_set; + + if (WARN_ON(!fence || !func)) + return -EINVAL; + + if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags)) + return -ENOENT; + + spin_lock_irqsave(fence->lock, flags); + + was_set =3D test_and_set_bit(FENCE_FLAG_ENABLE_SIGNAL_BIT, &fence->flags); + + if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags)) + ret =3D -ENOENT; + else if (!was_set && !fence->ops->enable_signaling(fence)) { + __fence_signal(fence); + ret =3D -ENOENT; + } + + if (!ret) { + cb->func =3D func; + cb->priv =3D priv; + list_add_tail(&cb->node, &fence->cb_list); + } + spin_unlock_irqrestore(fence->lock, flags); + + return ret; +} +EXPORT_SYMBOL(fence_add_callback); + +/** + * fence_remove_callback - remove a callback from the signaling list + * @fence: [in] the fence to wait on + * @cb: [in] the callback to remove + * + * Remove a previously queued callback from the fence. 
This function returns + * true is the callback is succesfully removed, or false if the fence has + * already been signaled. + * + * *WARNING*: + * Cancelling a callback should only be done if you really know what you're + * doing, since deadlocks and race conditions could occur all too easily. For + * this reason, it should only ever be done on hardware lockup recovery, + * with a reference held to the fence. + */ +bool +fence_remove_callback(struct fence *fence, struct fence_cb *cb) +{ + unsigned long flags; + bool ret; + + spin_lock_irqsave(fence->lock, flags); + + ret =3D !list_empty(&cb->node); + if (ret) + list_del_init(&cb->node); + + spin_unlock_irqrestore(fence->lock, flags); + + return ret; +} +EXPORT_SYMBOL(fence_remove_callback); + +static void +fence_default_wait_cb(struct fence *fence, struct fence_cb *cb, void *priv) +{ + try_to_wake_up(priv, TASK_NORMAL, 0); +} + +/** + * fence_default_wait - default sleep until the fence gets signaled + * or until timeout elapses + * @fence: [in] the fence to wait on + * @intr: [in] if true, do an interruptible wait + * @timeout: [in] timeout value in jiffies, or MAX_SCHEDULE_TIMEOUT + * + * Returns -ERESTARTSYS if interrupted, 0 if the wait timed out, or the + * remaining timeout in jiffies on success. + */ +long +fence_default_wait(struct fence *fence, bool intr, signed long timeout) +{ + struct fence_cb cb; + unsigned long flags; + long ret =3D timeout; + bool was_set; + + if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags)) + return timeout; + + spin_lock_irqsave(fence->lock, flags); + + if (intr && signal_pending(current)) { + ret =3D -ERESTARTSYS; + goto out; + } + + was_set =3D test_and_set_bit(FENCE_FLAG_ENABLE_SIGNAL_BIT, &fence->flags); + + if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags)) + goto out; + + if (!was_set && !fence->ops->enable_signaling(fence)) { + __fence_signal(fence); + goto out; + } + + cb.func =3D fence_default_wait_cb; + cb.priv =3D current; + list_add(&cb.node, &fence->cb_list); + + while (!test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags) && ret > 0) { + if (intr) + __set_current_state(TASK_INTERRUPTIBLE); + else + __set_current_state(TASK_UNINTERRUPTIBLE); + spin_unlock_irqrestore(fence->lock, flags); + + ret =3D schedule_timeout(ret); + + spin_lock_irqsave(fence->lock, flags); + if (ret > 0 && intr && signal_pending(current)) + ret =3D -ERESTARTSYS; + } + + if (!list_empty(&cb.node)) + list_del(&cb.node); + __set_current_state(TASK_RUNNING); + +out: + spin_unlock_irqrestore(fence->lock, flags); + return ret; +} +EXPORT_SYMBOL(fence_default_wait); diff --git a/include/linux/fence.h b/include/linux/fence.h new file mode 100644 index 0000000..d9f091d --- /dev/null +++ b/include/linux/fence.h @@ -0,0 +1,347 @@ +/* + * Fence mechanism for dma-buf to allow for asynchronous dma access + * + * Copyright (C) 2012 Canonical Ltd + * Copyright (C) 2012 Texas Instruments + * + * Authors: + * Rob Clark + * Maarten Lankhorst + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License version 2 as published = by + * the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, but WITHO= UT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * You should have received a copy of the GNU General Public License along w= ith + * this program. If not, see . 
+ */ + +#ifndef __LINUX_FENCE_H +#define __LINUX_FENCE_H + +#include +#include +#include +#include +#include +#include +#include + +struct fence; +struct fence_ops; +struct fence_cb; + +/** + * struct fence - software synchronization primitive + * @refcount: refcount for this fence + * @ops: fence_ops associated with this fence + * @cb_list: list of all callbacks to call + * @lock: spin_lock_irqsave used for locking + * @priv: fence specific private data + * @flags: A mask of FENCE_FLAG_* defined below + * + * the flags member must be manipulated and read using the appropriate + * atomic ops (bit_*), so taking the spinlock will not be needed most + * of the time. + * + * FENCE_FLAG_SIGNALED_BIT - fence is already signaled + * FENCE_FLAG_ENABLE_SIGNAL_BIT - enable_signaling might have been called* + * FENCE_FLAG_USER_BITS - start of the unused bits, can be used by the + * implementer of the fence for its own purposes. Can be used in different + * ways by different fence implementers, so do not rely on this. + * + * *) Since atomic bitops are used, this is not guaranteed to be the case. + * Particularly, if the bit was set, but fence_signal was called right + * before this bit was set, it would have been able to set the + * FENCE_FLAG_SIGNALED_BIT, before enable_signaling was called. + * Adding a check for FENCE_FLAG_SIGNALED_BIT after setting + * FENCE_FLAG_ENABLE_SIGNAL_BIT closes this race, and makes sure that + * after fence_signal was called, any enable_signaling call will have either + * been completed, or never called at all. + */ +struct fence { + struct kref refcount; + const struct fence_ops *ops; + struct list_head cb_list; + spinlock_t *lock; + unsigned context, seqno; + unsigned long flags; +}; + +enum fence_flag_bits { + FENCE_FLAG_SIGNALED_BIT, + FENCE_FLAG_ENABLE_SIGNAL_BIT, + FENCE_FLAG_USER_BITS, /* must always be last member */ +}; + +typedef void (*fence_func_t)(struct fence *fence, struct fence_cb *cb, void = *priv); + +/** + * struct fence_cb - callback for fence_add_callback + * @func: fence_func_t to call + * @priv: value of priv to pass to function + * + * This struct will be initialized by fence_add_callback, additional + * data can be passed along by embedding fence_cb in another struct. + */ +struct fence_cb { + struct list_head node; + fence_func_t func; + void *priv; +}; + +/** + * struct fence_ops - operations implemented for fence + * @enable_signaling: enable software signaling of fence + * @signaled: [optional] peek whether the fence is signaled + * @release: [optional] called on destruction of fence + * + * Notes on enable_signaling: + * For fence implementations that have the capability for hw->hw + * signaling, they can implement this op to enable the necessary + * irqs, or insert commands into cmdstream, etc. This is called + * in the first wait() or add_callback() path to let the fence + * implementation know that there is another driver waiting on + * the signal (ie. hw->sw case). + * + * This function can be called called from atomic context, but not + * from irq context, so normal spinlocks can be used. + * + * A return value of false indicates the fence already passed, + * or some failure occured that made it impossible to enable + * signaling. True indicates succesful enabling. + * + * Calling fence_signal before enable_signaling is called allows + * for a tiny race window in which enable_signaling is called during, + * before, or after fence_signal. 
To fight this, it is recommended + * that before enable_signaling returns true an extra reference is + * taken on the fence, to be released when the fence is signaled. + * This will mean fence_signal will still be called twice, but + * the second time will be a noop since it was already signaled. + * + * Notes on release: + * Can be NULL, this function allows additional commands to run on + * destruction of the fence. Can be called from irq context. + * If pointer is set to NULL, kfree will get called instead. + */ + +struct fence_ops { + bool (*enable_signaling)(struct fence *fence); + bool (*signaled)(struct fence *fence); + long (*wait)(struct fence *fence, bool intr, signed long); + void (*release)(struct fence *fence); +}; + +/** + * __fence_init - Initialize a custom fence. + * @fence: [in] the fence to initialize + * @ops: [in] the fence_ops for operations on this fence + * @lock: [in] the irqsafe spinlock to use for locking this fence + * @context: [in] the execution context this fence is run on + * @seqno: [in] a linear increasing sequence number for this context + * + * Initializes an allocated fence, the caller doesn't have to keep its + * refcount after committing with this fence, but it will need to hold a + * refcount again if fence_ops.enable_signaling gets called. This can + * be used for other implementing other types of fence. + * + * context and seqno are used for easy comparison between fences, allowing + * to check which fence is later by simply using fence_later. + */ +static inline void +__fence_init(struct fence *fence, const struct fence_ops *ops, + spinlock_t *lock, unsigned context, unsigned seqno) +{ + BUG_ON(!ops || !lock || !ops->enable_signaling || !ops->wait); + + kref_init(&fence->refcount); + fence->ops =3D ops; + INIT_LIST_HEAD(&fence->cb_list); + fence->lock =3D lock; + fence->context =3D context; + fence->seqno =3D seqno; + fence->flags =3D 0UL; +} + +/** + * fence_get - increases refcount of the fence + * @fence: [in] fence to increase refcount of + */ +static inline void fence_get(struct fence *fence) +{ + if (WARN_ON(!fence)) + return; + kref_get(&fence->refcount); +} + +extern void release_fence(struct kref *kref); + +/** + * fence_put - decreases refcount of the fence + * @fence: [in] fence to reduce refcount of + */ +static inline void fence_put(struct fence *fence) +{ + if (WARN_ON(!fence)) + return; + kref_put(&fence->refcount, release_fence); +} + +int fence_signal(struct fence *fence); +int __fence_signal(struct fence *fence); +long fence_default_wait(struct fence *fence, bool intr, signed long); +int fence_add_callback(struct fence *fence, struct fence_cb *cb, + fence_func_t func, void *priv); +bool fence_remove_callback(struct fence *fence, struct fence_cb *cb); +void fence_enable_sw_signaling(struct fence *fence); + +/** + * fence_is_signaled - Return an indication if the fence is signaled yet. + * @fence: [in] the fence to check + * + * Returns true if the fence was already signaled, false if not. Since this + * function doesn't enable signaling, it is not guaranteed to ever return tr= ue + * If fence_add_callback, fence_wait or fence_enable_sw_signaling + * haven't been called before. + * + * It's recommended for seqno fences to call fence_signal when the + * operation is complete, it makes it possible to prevent issues from + * wraparound between time of issue and time of use by checking the return + * value of this function before calling hardware-specific wait instructions. 
+ */ +static inline bool +fence_is_signaled(struct fence *fence) +{ + if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags)) + return true; + + if (fence->ops->signaled && fence->ops->signaled(fence)) { + fence_signal(fence); + return true; + } + + return false; +} + +/** + * fence_later - return the chronologically later fence + * @f1: [in] the first fence from the same context + * @f2: [in] the second fence from the same context + * + * Returns NULL if both fences are signaled, otherwise the fence that would = be + * signaled last. Both fences must be from the same context, since a seqno is + * not re-used across contexts. + */ +static inline struct fence *fence_later(struct fence *f1, struct fence *f2) +{ + bool sig1, sig2; + + /* + * can't check just FENCE_FLAG_SIGNALED_BIT here, it may never have been + * set called if enable_signaling wasn't, and enabling that here is + * overkill. + */ + sig1 =3D fence_is_signaled(f1); + sig2 =3D fence_is_signaled(f2); + + if (sig1 && sig2) + return NULL; + + BUG_ON(f1->context !=3D f2->context); + + if (sig1 || f2->seqno - f2->seqno <=3D INT_MAX) + return f2; + else + return f1; +} + +/** + * fence_wait_timeout - sleep until the fence gets signaled + * or until timeout elapses + * @fence: [in] the fence to wait on + * @intr: [in] if true, do an interruptible wait + * @timeout: [in] timeout value in jiffies, or MAX_SCHEDULE_TIMEOUT + * + * Returns -ERESTARTSYS if interrupted, 0 if the wait timed out, or the + * remaining timeout in jiffies on success. Other error values may be + * returned on custom implementations. + * + * Performs a synchronous wait on this fence. It is assumed the caller + * directly or indirectly (buf-mgr between reservation and committing) + * holds a reference to the fence, otherwise the fence might be + * freed before return, resulting in undefined behavior. + */ +static inline long +fence_wait_timeout(struct fence *fence, bool intr, signed long timeout) +{ + if (WARN_ON(timeout < 0)) + return -EINVAL; + + return fence->ops->wait(fence, intr, timeout); +} + +/** + * fence_wait - sleep until the fence gets signaled + * @fence: [in] the fence to wait on + * @intr: [in] if true, do an interruptible wait + * + * This function will return -ERESTARTSYS if interrupted by a signal, + * or 0 if the fence was signaled. Other error values may be + * returned on custom implementations. + * + * Performs a synchronous wait on this fence. It is assumed the caller + * directly or indirectly (buf-mgr between reservation and committing) + * holds a reference to the fence, otherwise the fence might be + * freed before return, resulting in undefined behavior. + */ +static inline long fence_wait(struct fence *fence, bool intr) +{ + long ret; + + /* Since fence_wait_timeout cannot timeout with + * MAX_SCHEDULE_TIMEOUT, only valid return values are + * -ERESTARTSYS and MAX_SCHEDULE_TIMEOUT. + */ + ret =3D fence_wait_timeout(fence, intr, MAX_SCHEDULE_TIMEOUT); + + return ret < 0 ? ret : 0; +} + +/** + * fence context counter: each execution context should have its own + * fence context, this allows checking if fences belong to the same + * context or not. One device can have multiple separate contexts, + * and they're used if some engine can run independently of another. + */ +extern atomic_t fence_context_counter; + +static inline unsigned fence_context_alloc(unsigned num) +{ + BUG_ON(!num); + return atomic_add_return(num, &fence_context_counter) - num; +} + +#define FENCE_TRACE(f, fmt, args...) 
\ + do { \ + struct fence *__ff =3D (f); \ + pr_info("f %u#%u: " fmt, __ff->context, __ff->seqno, ##args); \ + } while (0) + +#define FENCE_WARN(f, fmt, args...) \ + do { \ + struct fence *__ff =3D (f); \ + pr_warn("f %u#%u: " fmt, __ff->context, __ff->seqno, ##args); \ + } while (0) + +#define FENCE_ERR(f, fmt, args...) \ + do { \ + struct fence *__ff =3D (f); \ + pr_err("f %u#%u: " fmt, __ff->context, __ff->seqno, ##args); \ + } while (0) + +#endif /* __LINUX_FENCE_H */ --=20 1.8.0.3 --===============2000565884478934209==-- From m.b.lankhorst@gmail.com Tue Jan 15 12:34:32 2013 From: Maarten Lankhorst To: linaro-mm-sig@lists.linaro.org Subject: [Linaro-mm-sig] [PATCH 5/7] seqno-fence: Hardware dma-buf implementation of fencing (v4) Date: Tue, 15 Jan 2013 13:34:02 +0100 Message-ID: <1358253244-11453-6-git-send-email-maarten.lankhorst@canonical.com> In-Reply-To: <1358253244-11453-1-git-send-email-maarten.lankhorst@canonical.com> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============5409718134942887609==" --===============5409718134942887609== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable This type of fence can be used with hardware synchronization for simple hardware that can block execution until the condition (dma_buf[offset] - value) >=3D 0 has been met. A software fallback still has to be provided in case the fence is used with a device that doesn't support this mechanism. It is useful to expose this for graphics cards that have an op to support this. Some cards like i915 can export those, but don't have an option to wait, so they need the software fallback. I extended the original patch by Rob Clark. v1: Original v2: Renamed from bikeshed to seqno, moved into dma-fence.c since not much was left of the file. Lots of documentation added. v3: Use fence_ops instead of custom callbacks. 
Moved to own file to avoid circular dependency between dma-buf.h and fence.h v4: Add spinlock pointer to seqno_fence_init Signed-off-by: Maarten Lankhorst --- Documentation/DocBook/device-drivers.tmpl | 1 + drivers/base/fence.c | 38 +++++++++++ include/linux/seqno-fence.h | 105 ++++++++++++++++++++++++++++= ++ 3 files changed, 144 insertions(+) create mode 100644 include/linux/seqno-fence.h diff --git a/Documentation/DocBook/device-drivers.tmpl b/Documentation/DocBoo= k/device-drivers.tmpl index 6f53fc0..ad14396 100644 --- a/Documentation/DocBook/device-drivers.tmpl +++ b/Documentation/DocBook/device-drivers.tmpl @@ -128,6 +128,7 @@ X!Edrivers/base/interface.c !Edrivers/base/dma-buf.c !Edrivers/base/fence.c !Iinclude/linux/fence.h +!Iinclude/linux/seqno-fence.h !Edrivers/base/dma-coherent.c !Edrivers/base/dma-mapping.c diff --git a/drivers/base/fence.c b/drivers/base/fence.c index 28e5ffd..1d3f29c 100644 --- a/drivers/base/fence.c +++ b/drivers/base/fence.c @@ -24,6 +24,7 @@ #include #include #include +#include =20 atomic_t fence_context_counter =3D ATOMIC_INIT(0); EXPORT_SYMBOL(fence_context_counter); @@ -284,3 +285,40 @@ out: return ret; } EXPORT_SYMBOL(fence_default_wait); + +static bool seqno_enable_signaling(struct fence *fence) +{ + struct seqno_fence *seqno_fence =3D to_seqno_fence(fence); + return seqno_fence->ops->enable_signaling(fence); +} + +static bool seqno_signaled(struct fence *fence) +{ + struct seqno_fence *seqno_fence =3D to_seqno_fence(fence); + return seqno_fence->ops->signaled && seqno_fence->ops->signaled(fence); +} + +static void seqno_release(struct fence *fence) +{ + struct seqno_fence *f =3D to_seqno_fence(fence); + + dma_buf_put(f->sync_buf); + if (f->ops->release) + f->ops->release(fence); + else + kfree(f); +} + +static long seqno_wait(struct fence *fence, bool intr, signed long timeout) +{ + struct seqno_fence *f =3D to_seqno_fence(fence); + return f->ops->wait(fence, intr, timeout); +} + +const struct fence_ops seqno_fence_ops =3D { + .enable_signaling =3D seqno_enable_signaling, + .signaled =3D seqno_signaled, + .wait =3D seqno_wait, + .release =3D seqno_release +}; +EXPORT_SYMBOL_GPL(seqno_fence_ops); diff --git a/include/linux/seqno-fence.h b/include/linux/seqno-fence.h new file mode 100644 index 0000000..603adc0 --- /dev/null +++ b/include/linux/seqno-fence.h @@ -0,0 +1,105 @@ +/* + * seqno-fence, using a dma-buf to synchronize fencing + * + * Copyright (C) 2012 Texas Instruments + * Copyright (C) 2012 Canonical Ltd + * Authors: + * Rob Clark + * Maarten Lankhorst + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License version 2 as published = by + * the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, but WITHO= UT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * You should have received a copy of the GNU General Public License along w= ith + * this program. If not, see . 
+ */ + +#ifndef __LINUX_SEQNO_FENCE_H +#define __LINUX_SEQNO_FENCE_H + +#include +#include + +struct seqno_fence { + struct fence base; + + const struct fence_ops *ops; + struct dma_buf *sync_buf; + uint32_t seqno_ofs; +}; + +extern const struct fence_ops seqno_fence_ops; + +/** + * to_seqno_fence - cast a fence to a seqno_fence + * @fence: fence to cast to a seqno_fence + * + * Returns NULL if the fence is not a seqno_fence, + * or the seqno_fence otherwise. + */ +static inline struct seqno_fence * +to_seqno_fence(struct fence *fence) +{ + if (fence->ops !=3D &seqno_fence_ops) + return NULL; + return container_of(fence, struct seqno_fence, base); +} + +/** + * seqno_fence_init - initialize a seqno fence + * @fence: seqno_fence to initialize + * @lock: pointer to spinlock to use for fence + * @sync_buf: buffer containing the memory location to signal on + * @context: the execution context this fence is a part of + * @seqno_ofs: the offset within @sync_buf + * @seqno: the sequence # to signal on + * @ops: the fence_ops for operations on this seqno fence + * + * This function initializes a struct seqno_fence with passed parameters, + * and takes a reference on sync_buf which is released on fence destruction. + * + * A seqno_fence is a dma_fence which can complete in software when + * enable_signaling is called, but it also completes when + * (s32)((sync_buf)[seqno_ofs] - seqno) >=3D 0 is true + * + * The seqno_fence will take a refcount on the sync_buf until it's + * destroyed, but actual lifetime of sync_buf may be longer if one of the + * callers take a reference to it. + * + * Certain hardware have instructions to insert this type of wait condition + * in the command stream, so no intervention from software would be needed. + * This type of fence can be destroyed before completed, however a reference + * on the sync_buf dma-buf can be taken. It is encouraged to re-use the same + * dma-buf for sync_buf, since mapping or unmapping the sync_buf to the + * device's vm can be expensive. + * + * It is recommended for creators of seqno_fence to call fence_signal + * before destruction. This will prevent possible issues from wraparound at + * time of issue vs time of check, since users can check fence_is_signaled + * before submitting instructions for the hardware to wait on the fence. + * However, when ops.enable_signaling is not called, it doesn't have to be + * done as soon as possible, just before there's any real danger of seqno + * wraparound. 
+ */ +static inline void +seqno_fence_init(struct seqno_fence *fence, spinlock_t *lock, + struct dma_buf *sync_buf, uint32_t context, uint32_t seqno_ofs, + uint32_t seqno, const struct fence_ops *ops) +{ + BUG_ON(!fence || !sync_buf || !ops->enable_signaling || !ops->wait); + + __fence_init(&fence->base, &seqno_fence_ops, lock, context, seqno); + + get_dma_buf(sync_buf); + fence->ops =3D ops; + fence->sync_buf =3D sync_buf; + fence->seqno_ofs =3D seqno_ofs; +} + +#endif /* __LINUX_SEQNO_FENCE_H */ --=20 1.8.0.3 --===============5409718134942887609==-- From m.b.lankhorst@gmail.com Tue Jan 15 12:34:35 2013 From: Maarten Lankhorst To: linaro-mm-sig@lists.linaro.org Subject: [Linaro-mm-sig] [PATCH 6/7] reservation: cross-device reservation support Date: Tue, 15 Jan 2013 13:34:03 +0100 Message-ID: <1358253244-11453-7-git-send-email-maarten.lankhorst@canonical.com> In-Reply-To: <1358253244-11453-1-git-send-email-maarten.lankhorst@canonical.com> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============5471726564273907352==" --===============5471726564273907352== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable This adds support for a generic reservations framework that can be hooked up to ttm and dma-buf and allows easy sharing of reservations across devices. The idea is that a dma-buf and ttm object both will get a pointer to a struct reservation_object, which has to be reserved before anything is done with the contents of the dma-buf. Signed-off-by: Maarten Lankhorst --- Documentation/DocBook/device-drivers.tmpl | 2 + drivers/base/Makefile | 2 +- drivers/base/reservation.c | 251 ++++++++++++++++++++++++++++= ++ include/linux/reservation.h | 182 ++++++++++++++++++++++ 4 files changed, 436 insertions(+), 1 deletion(-) create mode 100644 drivers/base/reservation.c create mode 100644 include/linux/reservation.h diff --git a/Documentation/DocBook/device-drivers.tmpl b/Documentation/DocBoo= k/device-drivers.tmpl index ad14396..24e6e80 100644 --- a/Documentation/DocBook/device-drivers.tmpl +++ b/Documentation/DocBook/device-drivers.tmpl @@ -129,6 +129,8 @@ X!Edrivers/base/interface.c !Edrivers/base/fence.c !Iinclude/linux/fence.h !Iinclude/linux/seqno-fence.h +!Edrivers/base/reservation.c +!Iinclude/linux/reservation.h !Edrivers/base/dma-coherent.c !Edrivers/base/dma-mapping.c diff --git a/drivers/base/Makefile b/drivers/base/Makefile index 0026563..f6f731d 100644 --- a/drivers/base/Makefile +++ b/drivers/base/Makefile @@ -10,7 +10,7 @@ obj-$(CONFIG_CMA) +=3D dma-contiguous.o obj-y +=3D power/ obj-$(CONFIG_HAS_DMA) +=3D dma-mapping.o obj-$(CONFIG_HAVE_GENERIC_DMA_COHERENT) +=3D dma-coherent.o -obj-$(CONFIG_DMA_SHARED_BUFFER) +=3D dma-buf.o fence.o +obj-$(CONFIG_DMA_SHARED_BUFFER) +=3D dma-buf.o fence.o reservation.o obj-$(CONFIG_ISA) +=3D isa.o obj-$(CONFIG_FW_LOADER) +=3D firmware_class.o obj-$(CONFIG_NUMA) +=3D node.o diff --git a/drivers/base/reservation.c b/drivers/base/reservation.c new file mode 100644 index 0000000..07584dd --- /dev/null +++ b/drivers/base/reservation.c @@ -0,0 +1,251 @@ +/* + * Copyright (C) 2012 Canonical Ltd + * + * Based on bo.c which bears the following copyright notice, + * but is dual licensed: + * + * Copyright (c) 2006-2009 VMware, Inc., Palo Alto, CA., USA + * All Rights Reserved. 
+ * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the + * "Software"), to deal in the Software without restriction, including + * without limitation the rights to use, copy, modify, merge, publish, + * distribute, sub license, and/or sell copies of the Software, and to + * permit persons to whom the Software is furnished to do so, subject to + * the following conditions: + * + * The above copyright notice and this permission notice (including the + * next paragraph) shall be included in all copies or substantial portions + * of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL + * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLA= IM, + * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR + * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE + * USE OR OTHER DEALINGS IN THE SOFTWARE. + * + **************************************************************************/ +/* + * Authors: Thomas Hellstrom + */ + +#include +#include +#include +#include +#include + +atomic_long_t reservation_counter =3D ATOMIC_LONG_INIT(1); +EXPORT_SYMBOL(reservation_counter); + +const char reservation_object_name[] =3D "reservation_object"; +EXPORT_SYMBOL(reservation_object_name); + +const char reservation_ticket_name[] =3D "reservation_ticket"; +EXPORT_SYMBOL(reservation_ticket_name); + +struct lock_class_key reservation_object_class; +EXPORT_SYMBOL(reservation_object_class); + +struct lock_class_key reservation_ticket_class; +EXPORT_SYMBOL(reservation_ticket_class); + +/** + * ticket_backoff - cancel a reservation + * @ticket: [in] a reservation_ticket + * @entries: [in] the list list of reservation_entry entries to unreserve + * + * This function cancels a previous reservation done by + * ticket_reserve. This is useful in case something + * goes wrong between reservation and committing. + * + * This should only be called after ticket_reserve returns success. + */ +void +ticket_backoff(struct reservation_ticket *ticket, struct list_head *entries) +{ + struct list_head *cur; + + if (list_empty(entries)) + return; + + list_for_each(cur, entries) { + struct reservation_object *obj; + + reservation_entry_get(cur, &obj, NULL); + + mutex_unreserve_unlock(&obj->lock); + } + reservation_ticket_fini(ticket); +} +EXPORT_SYMBOL(ticket_backoff); + +static void +ticket_backoff_early(struct list_head *list, struct reservation_entry *entry) +{ + list_for_each_entry_continue_reverse(entry, list, head) { + struct reservation_object *obj; + + reservation_entry_get(&entry->head, &obj, NULL); + mutex_unreserve_unlock(&obj->lock); + } +} + +/** + * ticket_reserve - reserve a list of reservation_entry + * @ticket: [out] a reservation_ticket + * @entries: [in] a list of entries to reserve. + * + * Do not initialize ticket, it will be initialized by this function + * on success. + * + * Returns -EINTR if signal got queued, -EINVAL if fence list is full, + * -EDEADLK if buffer appears on the list twice, 0 on success. + * + * XXX: Nuke rest + * The caller will have to queue waits on those fences before calling + * ufmgr_fence_buffer_objects, with either hardware specific methods, + * fence_add_callback will, or fence_wait. 
+ * + * As such, by incrementing refcount on reservation_entry before calling + * fence_add_callback, and making the callback decrement refcount on + * reservation_entry, or releasing refcount if fence_add_callback + * failed, the reservation_entry will be freed when all the fences + * have been signaled, and only after the last ref is released, which should + * be after ufmgr_fence_buffer_objects. With proper locking, when the + * list_head holding the list of reservation_entry's becomes empty it + * indicates all fences for all bufs have been signaled. + */ +int +ticket_reserve(struct reservation_ticket *ticket, + struct list_head *entries) +{ + struct list_head *cur; + int ret; + struct reservation_object *res_bo =3D NULL; + + if (list_empty(entries)) + return 0; + + reservation_ticket_init(ticket); +retry: + list_for_each(cur, entries) { + struct reservation_entry *entry; + struct reservation_object *bo; + bool shared; + + entry =3D reservation_entry_get(cur, &bo, &shared); + if (unlikely(res_bo =3D=3D bo)) { + res_bo =3D NULL; + continue; + } + + ret =3D mutex_reserve_lock_interruptible(&bo->lock, + ticket, + ticket->seqno); + switch (ret) { + case 0: + break; + case -EAGAIN: + ticket_backoff_early(entries, entry); + if (res_bo) + mutex_unreserve_unlock(&res_bo->lock); + + ret =3D mutex_reserve_lock_intr_slow(&bo->lock, ticket, + ticket->seqno); + if (unlikely(ret !=3D 0)) + goto err_fini; + res_bo =3D bo; + break; + default: + goto err; + } + + if (shared && + bo->fence_shared_count =3D=3D BUF_MAX_SHARED_FENCE) { + WARN_ON_ONCE(1); + ret =3D -EINVAL; + mutex_unreserve_unlock(&bo->lock); + goto err; + } + if (unlikely(res_bo =3D=3D bo)) + goto retry; + continue; + +err: + if (res_bo !=3D bo) + mutex_unreserve_unlock(&bo->lock); + ticket_backoff_early(entries, entry); +err_fini: + reservation_ticket_fini(ticket); + return ret; + } + + return 0; +} +EXPORT_SYMBOL(ticket_reserve); + +/** + * ticket_commit - commit a reservation with a new fence + * @ticket: [in] the reservation_ticket returned by + * ticket_reserve + * @entries: [in] a linked list of struct reservation_entry + * @fence: [in] the fence that indicates completion + * + * This function will call reservation_ticket_fini, no need + * to do it manually. + * + * This function should be called after a hardware command submission is + * completed succesfully. The fence is used to indicate completion of + * those commands. 
+ */ +void +ticket_commit(struct reservation_ticket *ticket, + struct list_head *entries, struct fence *fence) +{ + struct list_head *cur; + + if (list_empty(entries)) + return; + + if (WARN_ON(!fence)) { + ticket_backoff(ticket, entries); + return; + } + + list_for_each(cur, entries) { + struct reservation_object *bo; + bool shared; + + reservation_entry_get(cur, &bo, &shared); + + if (!shared) { + int i; + for (i =3D 0; i < bo->fence_shared_count; ++i) { + fence_put(bo->fence_shared[i]); + bo->fence_shared[i] =3D NULL; + } + bo->fence_shared_count =3D 0; + if (bo->fence_excl) + fence_put(bo->fence_excl); + + bo->fence_excl =3D fence; + } else { + if (WARN_ON(bo->fence_shared_count >=3D + ARRAY_SIZE(bo->fence_shared))) { + mutex_unreserve_unlock(&bo->lock); + continue; + } + + bo->fence_shared[bo->fence_shared_count++] =3D fence; + } + fence_get(fence); + + mutex_unreserve_unlock(&bo->lock); + } + reservation_ticket_fini(ticket); +} +EXPORT_SYMBOL(ticket_commit); diff --git a/include/linux/reservation.h b/include/linux/reservation.h new file mode 100644 index 0000000..fc2349d --- /dev/null +++ b/include/linux/reservation.h @@ -0,0 +1,182 @@ +/* + * Header file for reservations for dma-buf and ttm + * + * Copyright(C) 2011 Linaro Limited. All rights reserved. + * Copyright (C) 2012 Canonical Ltd + * Copyright (C) 2012 Texas Instruments + * + * Authors: + * Rob Clark + * Maarten Lankhorst + * Thomas Hellstrom + * + * Based on bo.c which bears the following copyright notice, + * but is dual licensed: + * + * Copyright (c) 2006-2009 VMware, Inc., Palo Alto, CA., USA + * All Rights Reserved. + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the + * "Software"), to deal in the Software without restriction, including + * without limitation the rights to use, copy, modify, merge, publish, + * distribute, sub license, and/or sell copies of the Software, and to + * permit persons to whom the Software is furnished to do so, subject to + * the following conditions: + * + * The above copyright notice and this permission notice (including the + * next paragraph) shall be included in all copies or substantial portions + * of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL + * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLA= IM, + * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR + * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE + * USE OR OTHER DEALINGS IN THE SOFTWARE. 
+ */ +#ifndef _LINUX_RESERVATION_H +#define _LINUX_RESERVATION_H + +#include +#include +#include + +#define BUF_MAX_SHARED_FENCE 8 + +struct fence; + +extern atomic_long_t reservation_counter; +extern const char reservation_object_name[]; +extern struct lock_class_key reservation_object_class; +extern const char reservation_ticket_name[]; +extern struct lock_class_key reservation_ticket_class; + +struct reservation_object { + struct ticket_mutex lock; + + u32 fence_shared_count; + struct fence *fence_excl; + struct fence *fence_shared[BUF_MAX_SHARED_FENCE]; +}; + +struct reservation_ticket { + unsigned long seqno; +#ifdef CONFIG_DEBUG_LOCK_ALLOC + struct lockdep_map dep_map; +#endif +}; + +/** + * struct reservation_entry - reservation structure for a + * reservation_object + * @head: list entry + * @obj_shared: pointer to a reservation_object to reserve + * + * Bit 0 of obj_shared is set to bool shared, as such pointer has to be + * converted back, which can be done with reservation_entry_get. + */ +struct reservation_entry { + struct list_head head; + unsigned long obj_shared; +}; + + +static inline void +reservation_object_init(struct reservation_object *obj) +{ + obj->fence_shared_count =3D 0; + obj->fence_excl =3D NULL; + + __ticket_mutex_init(&obj->lock, reservation_object_name, + &reservation_object_class); +} + +static inline void +reservation_object_fini(struct reservation_object *obj) +{ + int i; + + if (obj->fence_excl) + fence_put(obj->fence_excl); + for (i =3D 0; i < obj->fence_shared_count; ++i) + fence_put(obj->fence_shared[i]); + + mutex_destroy(&obj->lock.base); +} + +static inline void +reservation_ticket_init(struct reservation_ticket *t) +{ +#ifdef CONFIG_DEBUG_LOCK_ALLOC + /* + * Make sure we are not reinitializing a held ticket: + */ + + debug_check_no_locks_freed((void *)t, sizeof(*t)); + lockdep_init_map(&t->dep_map, reservation_ticket_name, + &reservation_ticket_class, 0); +#endif + mutex_acquire(&t->dep_map, 0, 0, _THIS_IP_); + do { + t->seqno =3D atomic_long_inc_return(&reservation_counter); + } while (unlikely(!t->seqno)); +} + +/** + * reservation_ticket_fini - end a reservation ticket + * @t: [in] reservation_ticket that completed all reservations + * + * This currently does nothing, but should be called after all reservations + * made with this ticket have been unreserved. It is likely that in the futu= re + * it will be hooked up to perf events, or aid in debugging in other ways. 
+ */ +static inline void +reservation_ticket_fini(struct reservation_ticket *t) +{ +#ifdef CONFIG_DEBUG_LOCK_ALLOC + mutex_release(&t->dep_map, 0, _THIS_IP_); + t->seqno =3D 0; +#endif +} + +/** + * reservation_entry_init - initialize and append a reservation_entry + * to the list + * @entry: entry to initialize + * @list: list to append to + * @obj: reservation_object to initialize the entry with + * @shared: whether shared or exclusive access is requested + */ +static inline void +reservation_entry_init(struct reservation_entry *entry, + struct list_head *list, + struct reservation_object *obj, bool shared) +{ + entry->obj_shared =3D (unsigned long)obj | !!shared; + list_add_tail(&entry->head, list); +} + +static inline struct reservation_entry * +reservation_entry_get(struct list_head *list, + struct reservation_object **obj, bool *shared) +{ + struct reservation_entry *e =3D container_of(list, struct reservation_entry= , head); + unsigned long val =3D e->obj_shared; + + if (obj) + *obj =3D (struct reservation_object*)(val & ~1); + if (shared) + *shared =3D val & 1; + return e; +} + +extern int ticket_reserve(struct reservation_ticket *, + struct list_head *entries); +extern void ticket_backoff(struct reservation_ticket *, + struct list_head *entries); +extern void ticket_commit(struct reservation_ticket *, + struct list_head *entries, struct fence *); + +#endif /* _LINUX_RESERVATION_H */ --=20 1.8.0.3 --===============5471726564273907352==-- From m.b.lankhorst@gmail.com Tue Jan 15 12:34:39 2013 From: Maarten Lankhorst To: linaro-mm-sig@lists.linaro.org Subject: [Linaro-mm-sig] [PATCH 7/7] reservation: Add lockdep annotation and selftests Date: Tue, 15 Jan 2013 13:34:04 +0100 Message-ID: <1358253244-11453-8-git-send-email-maarten.lankhorst@canonical.com> In-Reply-To: <1358253244-11453-1-git-send-email-maarten.lankhorst@canonical.com> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============1281843617900423573==" --===============1281843617900423573== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Signed-off-by: Maarten Lankhorst --- The self-tests will fail if the commit "lockdep: Check if nested lock is actually held" from linux tip core/locking is not applied. --- lib/Kconfig.debug | 1 + lib/locking-selftest.c | 385 ++++++++++++++++++++++++++++++++++++++++++++++-= -- 2 files changed, 367 insertions(+), 19 deletions(-) diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug index 67604e5..017bcea 100644 --- a/lib/Kconfig.debug +++ b/lib/Kconfig.debug @@ -716,6 +716,7 @@ config DEBUG_ATOMIC_SLEEP config DEBUG_LOCKING_API_SELFTESTS bool "Locking API boot-time self-tests" depends on DEBUG_KERNEL + select CONFIG_DMA_SHARED_BUFFER help Say Y here if you want the kernel to run a short self-test during bootup. 
The self-test checks whether common types of locking bugs diff --git a/lib/locking-selftest.c b/lib/locking-selftest.c index 7aae0f2..7fe22c2 100644 --- a/lib/locking-selftest.c +++ b/lib/locking-selftest.c @@ -20,6 +20,7 @@ #include #include #include +#include =20 /* * Change this to 1 if you want to see the failure printouts: @@ -42,6 +43,7 @@ __setup("debug_locks_verbose=3D", setup_debug_locks_verbose= ); #define LOCKTYPE_RWLOCK 0x2 #define LOCKTYPE_MUTEX 0x4 #define LOCKTYPE_RWSEM 0x8 +#define LOCKTYPE_RESERVATION 0x10 =20 /* * Normal standalone locks, for the circular and irq-context @@ -920,11 +922,17 @@ GENERATE_PERMUTATIONS_3_EVENTS(irq_read_recursion_soft) static void reset_locks(void) { local_irq_disable(); + lockdep_free_key_range(&reservation_object_class, 1); + lockdep_free_key_range(&reservation_ticket_class, 1); + I1(A); I1(B); I1(C); I1(D); I1(X1); I1(X2); I1(Y1); I1(Y2); I1(Z1); I1(Z2); lockdep_reset(); I2(A); I2(B); I2(C); I2(D); init_shared_classes(); + + memset(&reservation_object_class, 0, sizeof reservation_object_class); + memset(&reservation_ticket_class, 0, sizeof reservation_ticket_class); local_irq_enable(); } =20 @@ -938,7 +946,6 @@ static int unexpected_testcase_failures; static void dotest(void (*testcase_fn)(void), int expected, int lockclass_ma= sk) { unsigned long saved_preempt_count =3D preempt_count(); - int expected_failure =3D 0; =20 WARN_ON(irqs_disabled()); =20 @@ -946,26 +953,16 @@ static void dotest(void (*testcase_fn)(void), int expec= ted, int lockclass_mask) /* * Filter out expected failures: */ + if (debug_locks !=3D expected) { #ifndef CONFIG_PROVE_LOCKING - if ((lockclass_mask & LOCKTYPE_SPIN) && debug_locks !=3D expected) - expected_failure =3D 1; - if ((lockclass_mask & LOCKTYPE_RWLOCK) && debug_locks !=3D expected) - expected_failure =3D 1; - if ((lockclass_mask & LOCKTYPE_MUTEX) && debug_locks !=3D expected) - expected_failure =3D 1; - if ((lockclass_mask & LOCKTYPE_RWSEM) && debug_locks !=3D expected) - expected_failure =3D 1; + expected_testcase_failures++; + printk("failed|"); +#else + unexpected_testcase_failures++; + printk("FAILED|"); + + dump_stack(); #endif - if (debug_locks !=3D expected) { - if (expected_failure) { - expected_testcase_failures++; - printk("failed|"); - } else { - unexpected_testcase_failures++; - - printk("FAILED|"); - dump_stack(); - } } else { testcase_successes++; printk(" ok |"); @@ -1108,6 +1105,354 @@ static inline void print_testname(const char *testnam= e) DO_TESTCASE_6IRW(desc, name, 312); \ DO_TESTCASE_6IRW(desc, name, 321); =20 +static void reservation_test_fail_reserve(void) +{ + struct reservation_ticket t; + struct reservation_object o; + int ret; + + reservation_object_init(&o); + reservation_ticket_init(&t); + t.seqno++; + + ret =3D mutex_reserve_lock(&o.lock, &t, t.seqno); + + BUG_ON(!atomic_long_read(&o.lock.reservation_id)); + + /* No lockdep test, pure API */ + ret =3D mutex_reserve_lock(&o.lock, &t, t.seqno); + WARN_ON(ret !=3D -EDEADLK); + + t.seqno++; + ret =3D mutex_trylock(&o.lock.base); + WARN_ON(ret); + + ret =3D mutex_reserve_lock(&o.lock, &t, t.seqno); + WARN_ON(ret !=3D -EAGAIN); + mutex_unlock(&o.lock.base); + + if (mutex_trylock(&o.lock.base)) + mutex_unlock(&o.lock.base); +#ifdef CONFIG_DEBUG_LOCK_ALLOC + else DEBUG_LOCKS_WARN_ON(1); +#endif + + reservation_ticket_fini(&t); +} + +static void reservation_test_two_tickets(void) +{ + struct reservation_ticket t, t2; + + reservation_ticket_init(&t); + reservation_ticket_init(&t2); + + reservation_ticket_fini(&t2); + 
reservation_ticket_fini(&t); +} + +static void reservation_test_ticket_unreserve_twice(void) +{ + struct reservation_ticket t; + + reservation_ticket_init(&t); + reservation_ticket_fini(&t); + reservation_ticket_fini(&t); +} + +static void reservation_test_object_unreserve_twice(void) +{ + struct reservation_object o; + + reservation_object_init(&o); + mutex_lock(&o.lock.base); + mutex_unlock(&o.lock.base); + mutex_unlock(&o.lock.base); +} + +static void reservation_test_fence_nest_unreserved(void) +{ + struct reservation_object o; + + reservation_object_init(&o); + + spin_lock_nest_lock(&lock_A, &o.lock.base); + spin_unlock(&lock_A); +} + +static void reservation_test_ticket_block(void) +{ + struct reservation_ticket t; + struct reservation_object o, o2; + int ret; + + reservation_object_init(&o); + reservation_object_init(&o2); + reservation_ticket_init(&t); + + ret =3D mutex_reserve_lock(&o.lock, &t, t.seqno); + WARN_ON(ret); + mutex_lock(&o2.lock.base); + mutex_unlock(&o2.lock.base); + mutex_unreserve_unlock(&o.lock); + + reservation_ticket_fini(&t); +} + +static void reservation_test_ticket_try(void) +{ + struct reservation_ticket t; + struct reservation_object o, o2; + int ret; + + reservation_object_init(&o); + reservation_object_init(&o2); + reservation_ticket_init(&t); + + ret =3D mutex_reserve_lock(&o.lock, &t, t.seqno); + WARN_ON(ret); + + mutex_trylock(&o2.lock.base); + mutex_unlock(&o2.lock.base); + mutex_unreserve_unlock(&o.lock); + + reservation_ticket_fini(&t); +} + +static void reservation_test_ticket_ticket(void) +{ + struct reservation_ticket t; + struct reservation_object o, o2; + int ret; + + reservation_object_init(&o); + reservation_object_init(&o2); + reservation_ticket_init(&t); + + ret =3D mutex_reserve_lock(&o.lock, &t, t.seqno); + WARN_ON(ret); + + ret =3D mutex_reserve_lock(&o2.lock, &t, t.seqno); + WARN_ON(ret); + + mutex_unreserve_unlock(&o2.lock); + mutex_unreserve_unlock(&o.lock); + + reservation_ticket_fini(&t); +} + +static void reservation_test_try_block(void) +{ + struct reservation_object o, o2; + + reservation_object_init(&o); + reservation_object_init(&o2); + + mutex_trylock(&o.lock.base); + mutex_lock(&o2.lock.base); + mutex_unlock(&o2.lock.base); + mutex_unlock(&o.lock.base); +} + +static void reservation_test_try_try(void) +{ + struct reservation_object o, o2; + + reservation_object_init(&o); + reservation_object_init(&o2); + + mutex_trylock(&o.lock.base); + mutex_trylock(&o2.lock.base); + mutex_unlock(&o2.lock.base); + mutex_unlock(&o.lock.base); +} + +static void reservation_test_try_ticket(void) +{ + struct reservation_ticket t; + struct reservation_object o, o2; + int ret; + + reservation_object_init(&o); + reservation_object_init(&o2); + + mutex_trylock(&o.lock.base); + reservation_ticket_init(&t); + + ret =3D mutex_reserve_lock(&o2.lock, &t, t.seqno); + WARN_ON(ret); + + mutex_unreserve_unlock(&o2.lock); + mutex_unlock(&o.lock.base); + + reservation_ticket_fini(&t); +} + +static void reservation_test_block_block(void) +{ + struct reservation_object o, o2; + + reservation_object_init(&o); + reservation_object_init(&o2); + + mutex_lock(&o.lock.base); + mutex_lock(&o2.lock.base); + mutex_unlock(&o2.lock.base); + mutex_unlock(&o.lock.base); +} + +static void reservation_test_block_try(void) +{ + struct reservation_object o, o2; + + reservation_object_init(&o); + reservation_object_init(&o2); + + mutex_lock(&o.lock.base); + mutex_trylock(&o2.lock.base); + mutex_unlock(&o2.lock.base); + mutex_unlock(&o.lock.base); +} + +static void 
reservation_test_block_ticket(void) +{ + struct reservation_ticket t; + struct reservation_object o, o2; + int ret; + + reservation_object_init(&o); + reservation_object_init(&o2); + + mutex_lock(&o.lock.base); + reservation_ticket_init(&t); + + ret =3D mutex_reserve_lock(&o2.lock, &t, t.seqno); + WARN_ON(ret); + mutex_unreserve_unlock(&o2.lock); + mutex_unlock(&o.lock.base); + + reservation_ticket_fini(&t); +} + +static void reservation_test_fence_block(void) +{ + struct reservation_object o; + + reservation_object_init(&o); + spin_lock(&lock_A); + spin_unlock(&lock_A); + + mutex_lock(&o.lock.base); + spin_lock(&lock_A); + spin_unlock(&lock_A); + mutex_unlock(&o.lock.base); + + spin_lock(&lock_A); + mutex_lock(&o.lock.base); + mutex_unlock(&o.lock.base); + spin_unlock(&lock_A); +} + +static void reservation_test_fence_try(void) +{ + struct reservation_object o; + + reservation_object_init(&o); + spin_lock(&lock_A); + spin_unlock(&lock_A); + + mutex_trylock(&o.lock.base); + spin_lock(&lock_A); + spin_unlock(&lock_A); + mutex_unlock(&o.lock.base); + + spin_lock(&lock_A); + mutex_trylock(&o.lock.base); + mutex_unlock(&o.lock.base); + spin_unlock(&lock_A); +} + +static void reservation_test_fence_ticket(void) +{ + struct reservation_ticket t; + struct reservation_object o; + int ret; + + reservation_object_init(&o); + spin_lock(&lock_A); + spin_unlock(&lock_A); + + reservation_ticket_init(&t); + + ret =3D mutex_reserve_lock(&o.lock, &t, t.seqno); + WARN_ON(ret); + spin_lock(&lock_A); + spin_unlock(&lock_A); + mutex_unreserve_unlock(&o.lock); + + spin_lock(&lock_A); + ret =3D mutex_reserve_lock(&o.lock, &t, t.seqno); + WARN_ON(ret); + mutex_unreserve_unlock(&o.lock); + spin_unlock(&lock_A); + + reservation_ticket_fini(&t); +} + +static void reservation_tests(void) +{ + printk(" -----------------------------------------------------------------= ---------\n"); + printk(" | Reservation tests |\n"); + printk(" ---------------------\n"); + + print_testname("reservation api failures"); + dotest(reservation_test_fail_reserve, SUCCESS, LOCKTYPE_RESERVATION); + printk("\n"); + + print_testname("reserving two tickets"); + dotest(reservation_test_two_tickets, FAILURE, LOCKTYPE_RESERVATION); + printk("\n"); + + print_testname("unreserve ticket twice"); + dotest(reservation_test_ticket_unreserve_twice, FAILURE, LOCKTYPE_RESERVATI= ON); + printk("\n"); + + print_testname("unreserve object twice"); + dotest(reservation_test_object_unreserve_twice, FAILURE, LOCKTYPE_RESERVATI= ON); + printk("\n"); + + print_testname("spinlock nest unreserved"); + dotest(reservation_test_fence_nest_unreserved, FAILURE, LOCKTYPE_RESERVATIO= N); + printk("\n"); + + printk(" -----------------------------------------------------\n"); + printk(" |block | try |ticket|\n"); + printk(" -----------------------------------------------------\n"); + + print_testname("ticket"); + dotest(reservation_test_ticket_block, FAILURE, LOCKTYPE_RESERVATION); + dotest(reservation_test_ticket_try, SUCCESS, LOCKTYPE_RESERVATION); + dotest(reservation_test_ticket_ticket, SUCCESS, LOCKTYPE_RESERVATION); + printk("\n"); + + print_testname("try"); + dotest(reservation_test_try_block, FAILURE, LOCKTYPE_RESERVATION); + dotest(reservation_test_try_try, SUCCESS, LOCKTYPE_RESERVATION); + dotest(reservation_test_try_ticket, FAILURE, LOCKTYPE_RESERVATION); + printk("\n"); + + print_testname("block"); + dotest(reservation_test_block_block, FAILURE, LOCKTYPE_RESERVATION); + dotest(reservation_test_block_try, SUCCESS, LOCKTYPE_RESERVATION); + 
dotest(reservation_test_block_ticket, FAILURE, LOCKTYPE_RESERVATION); + printk("\n"); + + print_testname("spinlock"); + dotest(reservation_test_fence_block, FAILURE, LOCKTYPE_RESERVATION); + dotest(reservation_test_fence_try, SUCCESS, LOCKTYPE_RESERVATION); + dotest(reservation_test_fence_ticket, FAILURE, LOCKTYPE_RESERVATION); + printk("\n"); +} =20 void locking_selftest(void) { @@ -1188,6 +1533,8 @@ void locking_selftest(void) DO_TESTCASE_6x2("irq read-recursion", irq_read_recursion); // DO_TESTCASE_6x2B("irq read-recursion #2", irq_read_recursion2); =20 + reservation_tests(); + if (unexpected_testcase_failures) { printk("-----------------------------------------------------------------\= n"); debug_locks =3D 0; --=20 1.8.0.3 --===============1281843617900423573==-- From maarten.lankhorst@canonical.com Tue Jan 15 13:43:46 2013 From: Maarten Lankhorst To: linaro-mm-sig@lists.linaro.org Subject: Re: [Linaro-mm-sig] [PATCH 2/7] mutex: add support for reservation style locks Date: Tue, 15 Jan 2013 14:43:43 +0100 Message-ID: <50F55D0F.9090604@canonical.com> In-Reply-To: <1358253244-11453-3-git-send-email-maarten.lankhorst@canonical.com> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============0683699491196277420==" --===============0683699491196277420== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Woops, missed the updated patch description.. Op 15-01-13 13:33, Maarten Lankhorst schreef: > makes it easier to port ttm over.. > > Signed-off-by: Maarten Lankhorst mutex_reserve_lock, and mutex_reserve_lock_interruptible: Lock a buffer with a reservation_id set. reservation_id must not be set to = 0, since this is a special value that means no reservation_id. Normally if reservation_id is not set, or is older than the reservation_id = that's currently set on the mutex, the behavior will be to wait normally. However, if the reservation_id is newer than the current reservation_id, -= EAGAIN will be returned, and this function must unreserve all other mutexes and th= en redo a blocking lock with normal mutex calls to prevent a deadlock, then call mutex_locked_set_reservation on successful locking to set the reservation_i= d inside the lock. These functions will return -EDEADLK instead of -EAGAIN if reservation_id i= s the same as the reservation_id that's attempted to lock the mutex with, since in tha= t case you presumably attempted to lock the same lock twice. mutex_reserve_lock_slow and mutex_reserve_lock_intr_slow: Similar to mutex_reserve_lock, except it won't backoff with -EAGAIN. This i= s useful after mutex_reserve_lock failed with -EAGAIN, and you unreserved all buffer= s so no deadlock can occur. mutex_unreserve_unlock: Unlock a buffer reserved with the previous calls. Missing at the moment, maybe TODO? * lockdep warnings when wrongly calling mutex_unreserve_unlock or mutex_unl= ock, depending on whether reservation_id was set previously or not. - Does lockdep have something for this or do I need to extend struct mute= x? * Check if lockdep warns if you unlock a lock that other locks were nested = to. - spin_lock(m); spin_lock_nest_lock(a, m); spin_unlock(m); spin_unlock(a); would be nice if it gave a splat. Have to recheck if it does, though.. Design: I chose for ticket_mutex to encapsulate struct mutex, so the extra memory u= sage and atomic set on init will only happen when you deliberately create a ticket l= ock. Since the mutexes are mostly meant to protect buffer object serialization i= n ttm, not much contention is expected. 
I could be slightly smarter with wakeups, but = this would be at the expense at adding a field to struct mutex_waiter. Because this wo= uld add overhead to all cases where ticket_mutexes are not used, and ticket_mutexes= are less performance sensitive anyway since they only protect buffer objects, I didn= 't want to do this. It's still better than ttm always calling wake_up_all, which does a unconditional spin_lock_irqsave/irqrestore. I needed this in kernel/mutex.c because of the extensions to __lock_common,= which are hopefully optimized away for all normal paths. Changes since RFC patch v1: - Updated to use atomic_long instead of atomic, since the reservation_id was= a long. - added mutex_reserve_lock_slow and mutex_reserve_lock_intr_slow - removed mutex_locked_set_reservation_id (or w/e it was called) Signed-off-by: Maarten Lankhorst > --- > include/linux/mutex.h | 86 +++++++++++++- > kernel/mutex.c | 317 ++++++++++++++++++++++++++++++++++++++++++++++= +--- > 2 files changed, 387 insertions(+), 16 deletions(-) > > diff --git a/include/linux/mutex.h b/include/linux/mutex.h > index 9121595..602c247 100644 > --- a/include/linux/mutex.h > +++ b/include/linux/mutex.h > @@ -62,6 +62,11 @@ struct mutex { > #endif > }; > =20 > +struct ticket_mutex { > + struct mutex base; > + atomic_long_t reservation_id; > +}; > + > /* > * This is the control structure for tasks blocked on mutex, > * which resides on the blocked task's kernel stack: > @@ -109,12 +114,24 @@ static inline void mutex_destroy(struct mutex *lock) = {} > __DEBUG_MUTEX_INITIALIZER(lockname) \ > __DEP_MAP_MUTEX_INITIALIZER(lockname) } > =20 > +#define __TICKET_MUTEX_INITIALIZER(lockname) \ > + { .base =3D __MUTEX_INITIALIZER(lockname) \ > + , .reservation_id =3D ATOMIC_LONG_INIT(0) } > + > #define DEFINE_MUTEX(mutexname) \ > struct mutex mutexname =3D __MUTEX_INITIALIZER(mutexname) > =20 > extern void __mutex_init(struct mutex *lock, const char *name, > struct lock_class_key *key); > =20 > +static inline void __ticket_mutex_init(struct ticket_mutex *lock, > + const char *name, > + struct lock_class_key *key) > +{ > + __mutex_init(&lock->base, name, key); > + atomic_long_set(&lock->reservation_id, 0); > +} > + > /** > * mutex_is_locked - is the mutex locked > * @lock: the mutex to be queried > @@ -133,26 +150,91 @@ static inline int mutex_is_locked(struct mutex *lock) > #ifdef CONFIG_DEBUG_LOCK_ALLOC > extern void mutex_lock_nested(struct mutex *lock, unsigned int subclass); > extern void _mutex_lock_nest_lock(struct mutex *lock, struct lockdep_map *= nest_lock); > + > extern int __must_check mutex_lock_interruptible_nested(struct mutex *lock, > unsigned int subclass); > extern int __must_check mutex_lock_killable_nested(struct mutex *lock, > unsigned int subclass); > =20 > +extern int __must_check _mutex_reserve_lock(struct ticket_mutex *lock, > + struct lockdep_map *nest_lock, > + unsigned long reservation_id); > + > +extern int __must_check _mutex_reserve_lock_interruptible(struct ticket_mu= tex *, > + struct lockdep_map *nest_lock, > + unsigned long reservation_id); > + > +extern void _mutex_reserve_lock_slow(struct ticket_mutex *lock, > + struct lockdep_map *nest_lock, > + unsigned long reservation_id); > + > +extern int __must_check _mutex_reserve_lock_intr_slow(struct ticket_mutex = *, > + struct lockdep_map *nest_lock, > + unsigned long reservation_id); > + > #define mutex_lock(lock) mutex_lock_nested(lock, 0) > #define mutex_lock_interruptible(lock) mutex_lock_interruptible_nested(loc= k, 0) > #define 
mutex_lock_killable(lock) mutex_lock_killable_nested(lock, 0) > =20 > #define mutex_lock_nest_lock(lock, nest_lock) \ > do { \ > - typecheck(struct lockdep_map *, &(nest_lock)->dep_map); \ > + typecheck(struct lockdep_map *, &(nest_lock)->dep_map); \ > _mutex_lock_nest_lock(lock, &(nest_lock)->dep_map); \ > } while (0) > =20 > +#define mutex_reserve_lock(lock, nest_lock, reservation_id) \ > +({ \ > + typecheck(struct lockdep_map *, &(nest_lock)->dep_map); \ > + _mutex_reserve_lock(lock, &(nest_lock)->dep_map, reservation_id); \ > +}) > + > +#define mutex_reserve_lock_interruptible(lock, nest_lock, reservation_id) \ > +({ \ > + typecheck(struct lockdep_map *, &(nest_lock)->dep_map); \ > + _mutex_reserve_lock_interruptible(lock, &(nest_lock)->dep_map, \ > + reservation_id); \ > +}) > + > +#define mutex_reserve_lock_slow(lock, nest_lock, reservation_id) \ > +do { \ > + typecheck(struct lockdep_map *, &(nest_lock)->dep_map); \ > + _mutex_reserve_lock_slow(lock, &(nest_lock)->dep_map, reservation_id); \ > +} while (0) > + > +#define mutex_reserve_lock_intr_slow(lock, nest_lock, reservation_id) \ > +({ \ > + typecheck(struct lockdep_map *, &(nest_lock)->dep_map); \ > + _mutex_reserve_lock_intr_slow(lock, &(nest_lock)->dep_map, \ > + reservation_id); \ > +}) > + > #else > extern void mutex_lock(struct mutex *lock); > extern int __must_check mutex_lock_interruptible(struct mutex *lock); > extern int __must_check mutex_lock_killable(struct mutex *lock); > =20 > +extern int __must_check _mutex_reserve_lock(struct ticket_mutex *lock, > + unsigned long reservation_id); > +extern int __must_check _mutex_reserve_lock_interruptible(struct ticket_mu= tex *, > + unsigned long reservation_id); > + > +extern void _mutex_reserve_lock_slow(struct ticket_mutex *lock, > + unsigned long reservation_id); > +extern int __must_check _mutex_reserve_lock_intr_slow(struct ticket_mutex = *, > + unsigned long reservation_id); > + > +#define mutex_reserve_lock(lock, nest_lock, reservation_id) \ > + _mutex_reserve_lock(lock, reservation_id) > + > +#define mutex_reserve_lock_interruptible(lock, nest_lock, reservation_id) \ > + _mutex_reserve_lock_interruptible(lock, reservation_id) > + > +#define mutex_reserve_lock_slow(lock, nest_lock, reservation_id) \ > + _mutex_reserve_lock_slow(lock, reservation_id) > + > +#define mutex_reserve_lock_intr_slow(lock, nest_lock, reservation_id) \ > + _mutex_reserve_lock_intr_slow(lock, reservation_id) > + > # define mutex_lock_nested(lock, subclass) mutex_lock(lock) > # define mutex_lock_interruptible_nested(lock, subclass) mutex_lock_interr= uptible(lock) > # define mutex_lock_killable_nested(lock, subclass) mutex_lock_killable(lo= ck) > @@ -167,6 +249,8 @@ extern int __must_check mutex_lock_killable(struct mute= x *lock); > */ > extern int mutex_trylock(struct mutex *lock); > extern void mutex_unlock(struct mutex *lock); > +extern void mutex_unreserve_unlock(struct ticket_mutex *lock); > + > extern int atomic_dec_and_mutex_lock(atomic_t *cnt, struct mutex *lock); > =20 > #ifndef CONFIG_HAVE_ARCH_MUTEX_CPU_RELAX > diff --git a/kernel/mutex.c b/kernel/mutex.c > index a307cc9..8282729 100644 > --- a/kernel/mutex.c > +++ b/kernel/mutex.c > @@ -126,16 +126,119 @@ void __sched mutex_unlock(struct mutex *lock) > =20 > EXPORT_SYMBOL(mutex_unlock); > =20 > +/** > + * mutex_unreserve_unlock - release the mutex > + * @lock: the mutex to be released > + * > + * Unlock a mutex that has been locked by this task previously > + * with _mutex_reserve_lock*. 
> + * > + * This function must not be used in interrupt context. Unlocking > + * of a not locked mutex is not allowed. > + */ > +void __sched mutex_unreserve_unlock(struct ticket_mutex *lock) > +{ > + /* > + * mark mutex as no longer part of a reservation, next > + * locker can set this again > + */ > + atomic_long_set(&lock->reservation_id, 0); > + > + /* > + * The unlocking fastpath is the 0->1 transition from 'locked' > + * into 'unlocked' state: > + */ > +#ifndef CONFIG_DEBUG_MUTEXES > + /* > + * When debugging is enabled we must not clear the owner before time, > + * the slow path will always be taken, and that clears the owner field > + * after verifying that it was indeed current. > + */ > + mutex_clear_owner(&lock->base); > +#endif > + __mutex_fastpath_unlock(&lock->base.count, __mutex_unlock_slowpath); > +} > +EXPORT_SYMBOL(mutex_unreserve_unlock); > + > +static inline int __sched > +__mutex_lock_check_reserve(struct mutex *lock, unsigned long reservation_i= d) > +{ > + struct ticket_mutex *m =3D container_of(lock, struct ticket_mutex, base); > + unsigned long cur_id; > + > + cur_id =3D atomic_long_read(&m->reservation_id); > + if (!cur_id) > + return 0; > + > + if (unlikely(reservation_id =3D=3D cur_id)) > + return -EDEADLK; > + > + if (unlikely(reservation_id - cur_id <=3D LONG_MAX)) > + return -EAGAIN; > + > + return 0; > +} > + > +/* > + * after acquiring lock with fastpath or when we lost out in contested > + * slowpath, set reservation_id and wake up any waiters so they can rechec= k. > + */ > +static __always_inline void > +mutex_set_reservation_fastpath(struct ticket_mutex *lock, > + unsigned long reservation_id, bool check_res) > +{ > + unsigned long flags; > + struct mutex_waiter *cur; > + > + if (check_res || config_enabled(CONFIG_DEBUG_LOCK_ALLOC)) { > + unsigned long cur_id; > + > + cur_id =3D atomic_long_xchg(&lock->reservation_id, > + reservation_id); > +#ifdef CONFIG_DEBUG_LOCK_ALLOC > + if (check_res) > + DEBUG_LOCKS_WARN_ON(cur_id && > + cur_id !=3D reservation_id); > + else > + DEBUG_LOCKS_WARN_ON(cur_id); > + lockdep_assert_held(&lock->base); > +#endif > + > + if (unlikely(cur_id =3D=3D reservation_id)) > + return; > + } else > + atomic_long_set(&lock->reservation_id, reservation_id); > + > + /* > + * Check if lock is contended, if not there is nobody to wake up > + */ > + if (likely(atomic_read(&lock->base.count) =3D=3D 0)) > + return; > + > + /* > + * Uh oh, we raced in fastpath, wake up everyone in this case, > + * so they can see the new reservation_id > + */ > + spin_lock_mutex(&lock->base.wait_lock, flags); > + list_for_each_entry(cur, &lock->base.wait_list, list) { > + debug_mutex_wake_waiter(&lock->base, cur); > + wake_up_process(cur->task); > + } > + spin_unlock_mutex(&lock->base.wait_lock, flags); > +} > + > /* > * Lock a mutex (possibly interruptible), slowpath: > */ > static inline int __sched > __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass, > - struct lockdep_map *nest_lock, unsigned long ip) > + struct lockdep_map *nest_lock, unsigned long ip, > + unsigned long reservation_id, bool res_slow) > { > struct task_struct *task =3D current; > struct mutex_waiter waiter; > unsigned long flags; > + int ret; > =20 > preempt_disable(); > mutex_acquire_nest(&lock->dep_map, subclass, 0, nest_lock, ip); > @@ -162,6 +265,12 @@ __mutex_lock_common(struct mutex *lock, long state, un= signed int subclass, > for (;;) { > struct task_struct *owner; > =20 > + if (!__builtin_constant_p(reservation_id) && !res_slow) { > + ret =3D 
__mutex_lock_check_reserve(lock, reservation_id); > + if (ret) > + goto err_nowait; > + } > + > /* > * If there's an owner, wait for it to either > * release the lock or go to sleep. > @@ -172,6 +281,13 @@ __mutex_lock_common(struct mutex *lock, long state, un= signed int subclass, > =20 > if (atomic_cmpxchg(&lock->count, 1, 0) =3D=3D 1) { > lock_acquired(&lock->dep_map, ip); > + if (res_slow) { > + struct ticket_mutex *m; > + m =3D container_of(lock, struct ticket_mutex, base); > + > + mutex_set_reservation_fastpath(m, reservation_id, false); > + } > + > mutex_set_owner(lock); > preempt_enable(); > return 0; > @@ -227,15 +343,16 @@ __mutex_lock_common(struct mutex *lock, long state, u= nsigned int subclass, > * TASK_UNINTERRUPTIBLE case.) > */ > if (unlikely(signal_pending_state(state, task))) { > - mutex_remove_waiter(lock, &waiter, > - task_thread_info(task)); > - mutex_release(&lock->dep_map, 1, ip); > - spin_unlock_mutex(&lock->wait_lock, flags); > + ret =3D -EINTR; > + goto err; > + } > =20 > - debug_mutex_free_waiter(&waiter); > - preempt_enable(); > - return -EINTR; > + if (!__builtin_constant_p(reservation_id) && !res_slow) { > + ret =3D __mutex_lock_check_reserve(lock, reservation_id); > + if (ret) > + goto err; > } > + > __set_task_state(task, state); > =20 > /* didn't get the lock, go to sleep: */ > @@ -250,6 +367,28 @@ done: > mutex_remove_waiter(lock, &waiter, current_thread_info()); > mutex_set_owner(lock); > =20 > + if (!__builtin_constant_p(reservation_id)) { > + struct ticket_mutex *m; > + struct mutex_waiter *cur; > + /* > + * this should get optimized out for the common case, > + * and is only important for _mutex_reserve_lock > + */ > + > + m =3D container_of(lock, struct ticket_mutex, base); > + atomic_long_set(&m->reservation_id, reservation_id); > + > + /* > + * give any possible sleeping processes the chance to wake up, > + * so they can recheck if they have to back off from > + * reservations > + */ > + list_for_each_entry(cur, &lock->wait_list, list) { > + debug_mutex_wake_waiter(lock, cur); > + wake_up_process(cur->task); > + } > + } > + > /* set it to 0 if there are no waiters left: */ > if (likely(list_empty(&lock->wait_list))) > atomic_set(&lock->count, 0); > @@ -260,6 +399,19 @@ done: > preempt_enable(); > =20 > return 0; > + > +err: > + mutex_remove_waiter(lock, &waiter, task_thread_info(task)); > + spin_unlock_mutex(&lock->wait_lock, flags); > + debug_mutex_free_waiter(&waiter); > + > +#ifdef CONFIG_MUTEX_SPIN_ON_OWNER > +err_nowait: > +#endif > + mutex_release(&lock->dep_map, 1, ip); > + > + preempt_enable(); > + return ret; > } > =20 > #ifdef CONFIG_DEBUG_LOCK_ALLOC > @@ -267,7 +419,8 @@ void __sched > mutex_lock_nested(struct mutex *lock, unsigned int subclass) > { > might_sleep(); > - __mutex_lock_common(lock, TASK_UNINTERRUPTIBLE, subclass, NULL, _RET_IP_); > + __mutex_lock_common(lock, TASK_UNINTERRUPTIBLE, > + subclass, NULL, _RET_IP_, 0, 0); > } > =20 > EXPORT_SYMBOL_GPL(mutex_lock_nested); > @@ -276,7 +429,8 @@ void __sched > _mutex_lock_nest_lock(struct mutex *lock, struct lockdep_map *nest) > { > might_sleep(); > - __mutex_lock_common(lock, TASK_UNINTERRUPTIBLE, 0, nest, _RET_IP_); > + __mutex_lock_common(lock, TASK_UNINTERRUPTIBLE, > + 0, nest, _RET_IP_, 0, 0); > } > =20 > EXPORT_SYMBOL_GPL(_mutex_lock_nest_lock); > @@ -285,7 +439,8 @@ int __sched > mutex_lock_killable_nested(struct mutex *lock, unsigned int subclass) > { > might_sleep(); > - return __mutex_lock_common(lock, TASK_KILLABLE, subclass, NULL, _RET_IP_); > + return 
__mutex_lock_common(lock, TASK_KILLABLE, > + subclass, NULL, _RET_IP_, 0, 0); > } > EXPORT_SYMBOL_GPL(mutex_lock_killable_nested); > =20 > @@ -294,10 +449,63 @@ mutex_lock_interruptible_nested(struct mutex *lock, u= nsigned int subclass) > { > might_sleep(); > return __mutex_lock_common(lock, TASK_INTERRUPTIBLE, > - subclass, NULL, _RET_IP_); > + subclass, NULL, _RET_IP_, 0, 0); > } > =20 > EXPORT_SYMBOL_GPL(mutex_lock_interruptible_nested); > + > +int __sched > +_mutex_reserve_lock(struct ticket_mutex *lock, struct lockdep_map *nest, > + unsigned long reservation_id) > +{ > + DEBUG_LOCKS_WARN_ON(!reservation_id); > + > + might_sleep(); > + return __mutex_lock_common(&lock->base, TASK_UNINTERRUPTIBLE, > + 0, nest, _RET_IP_, reservation_id, 0); > +} > +EXPORT_SYMBOL_GPL(_mutex_reserve_lock); > + > + > +int __sched > +_mutex_reserve_lock_interruptible(struct ticket_mutex *lock, > + struct lockdep_map *nest, > + unsigned long reservation_id) > +{ > + DEBUG_LOCKS_WARN_ON(!reservation_id); > + > + might_sleep(); > + return __mutex_lock_common(&lock->base, TASK_INTERRUPTIBLE, > + 0, nest, _RET_IP_, reservation_id, 0); > +} > +EXPORT_SYMBOL_GPL(_mutex_reserve_lock_interruptible); > + > +void __sched > +_mutex_reserve_lock_slow(struct ticket_mutex *lock, struct lockdep_map *ne= st, > + unsigned long reservation_id) > +{ > + DEBUG_LOCKS_WARN_ON(!reservation_id); > + > + might_sleep(); > + __mutex_lock_common(&lock->base, TASK_UNINTERRUPTIBLE, 0, > + nest, _RET_IP_, reservation_id, 1); > +} > +EXPORT_SYMBOL_GPL(_mutex_reserve_lock_slow); > + > +int __sched > +_mutex_reserve_lock_intr_slow(struct ticket_mutex *lock, > + struct lockdep_map *nest, > + unsigned long reservation_id) > +{ > + DEBUG_LOCKS_WARN_ON(!reservation_id); > + > + might_sleep(); > + return __mutex_lock_common(&lock->base, TASK_INTERRUPTIBLE, 0, > + nest, _RET_IP_, reservation_id, 1); > +} > +EXPORT_SYMBOL_GPL(_mutex_reserve_lock_intr_slow); > + > + > #endif > =20 > /* > @@ -400,7 +608,8 @@ __mutex_lock_slowpath(atomic_t *lock_count) > { > struct mutex *lock =3D container_of(lock_count, struct mutex, count); > =20 > - __mutex_lock_common(lock, TASK_UNINTERRUPTIBLE, 0, NULL, _RET_IP_); > + __mutex_lock_common(lock, TASK_UNINTERRUPTIBLE, 0, > + NULL, _RET_IP_, 0, 0); > } > =20 > static noinline int __sched > @@ -408,7 +617,8 @@ __mutex_lock_killable_slowpath(atomic_t *lock_count) > { > struct mutex *lock =3D container_of(lock_count, struct mutex, count); > =20 > - return __mutex_lock_common(lock, TASK_KILLABLE, 0, NULL, _RET_IP_); > + return __mutex_lock_common(lock, TASK_KILLABLE, 0, > + NULL, _RET_IP_, 0, 0); > } > =20 > static noinline int __sched > @@ -416,8 +626,28 @@ __mutex_lock_interruptible_slowpath(atomic_t *lock_cou= nt) > { > struct mutex *lock =3D container_of(lock_count, struct mutex, count); > =20 > - return __mutex_lock_common(lock, TASK_INTERRUPTIBLE, 0, NULL, _RET_IP_); > + return __mutex_lock_common(lock, TASK_INTERRUPTIBLE, 0, > + NULL, _RET_IP_, 0, 0); > +} > + > +static noinline int __sched > +__mutex_lock_reserve_slowpath(atomic_t *lock_count, void *rid) > +{ > + struct mutex *lock =3D container_of(lock_count, struct mutex, count); > + > + return __mutex_lock_common(lock, TASK_UNINTERRUPTIBLE, 0, > + NULL, _RET_IP_, (unsigned long)rid, 0); > +} > + > +static noinline int __sched > +__mutex_lock_interruptible_reserve_slowpath(atomic_t *lock_count, void *ri= d) > +{ > + struct mutex *lock =3D container_of(lock_count, struct mutex, count); > + > + return __mutex_lock_common(lock, TASK_INTERRUPTIBLE, 0, > + NULL, 
_RET_IP_, (unsigned long)rid, 0); > } > + > #endif > =20 > /* > @@ -473,6 +703,63 @@ int __sched mutex_trylock(struct mutex *lock) > } > EXPORT_SYMBOL(mutex_trylock); > =20 > +#ifndef CONFIG_DEBUG_LOCK_ALLOC > +int __sched > +_mutex_reserve_lock(struct ticket_mutex *lock, unsigned long rid) > +{ > + int ret; > + > + might_sleep(); > + > + ret =3D __mutex_fastpath_lock_retval_arg(&lock->base.count, (void *)rid, > + __mutex_lock_reserve_slowpath); > + > + if (!ret) { > + mutex_set_reservation_fastpath(lock, rid, true); > + mutex_set_owner(&lock->base); > + } > + return ret; > +} > +EXPORT_SYMBOL(_mutex_reserve_lock); > + > +int __sched > +_mutex_reserve_lock_interruptible(struct ticket_mutex *lock, unsigned long= rid) > +{ > + int ret; > + > + might_sleep(); > + > + ret =3D __mutex_fastpath_lock_retval_arg(&lock->base.count, (void *)rid, > + __mutex_lock_interruptible_reserve_slowpath); > + > + if (!ret) { > + mutex_set_reservation_fastpath(lock, rid, true); > + mutex_set_owner(&lock->base); > + } > + return ret; > +} > +EXPORT_SYMBOL(_mutex_reserve_lock_interruptible); > + > +void __sched > +_mutex_reserve_lock_slow(struct ticket_mutex *lock, unsigned long rid) > +{ > + might_sleep(); > + __mutex_lock_common(&lock->base, TASK_UNINTERRUPTIBLE, > + 0, NULL, _RET_IP_, rid, 1); > +} > +EXPORT_SYMBOL(_mutex_reserve_lock_slow); > + > +int __sched > +_mutex_reserve_lock_intr_slow(struct ticket_mutex *lock, unsigned long rid) > +{ > + might_sleep(); > + return __mutex_lock_common(&lock->base, TASK_INTERRUPTIBLE, > + 0, NULL, _RET_IP_, rid, 1); > +} > +EXPORT_SYMBOL(_mutex_reserve_lock_intr_slow); > + > +#endif > + > /** > * atomic_dec_and_mutex_lock - return holding mutex if we dec to 0 > * @cnt: the atomic which we are to dec --===============0683699491196277420==-- From maarten.lankhorst@canonical.com Tue Jan 15 13:49:54 2013 From: Maarten Lankhorst To: linaro-mm-sig@lists.linaro.org Subject: Re: [Linaro-mm-sig] [PATCH 1/7] arch: add __mutex_fastpath_lock_retval_arg to generic/sh/x86/powerpc/ia64 Date: Tue, 15 Jan 2013 14:49:52 +0100 Message-ID: <50F55E80.3060106@canonical.com> In-Reply-To: <1358253244-11453-2-git-send-email-maarten.lankhorst@canonical.com> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============0299236717064391719==" --===============0299236717064391719== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Again, missing entry :( Op 15-01-13 13:33, Maarten Lankhorst schreef: > Needed for reservation slowpath. I was hoping to convert the 'mutexes' in ttm to proper mutexes, so I extended= the core mutex code slightly to add support for reservations. This requires howev= er passing an argument to __mutex_fastpath_lock_retval, so I added this for all = archs based on their existing __mutex_fastpath_lock_retval implementation. I'm guessing this would have to be split up in the final version, but for now= it's easier to edit as a single patch. 
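Before the per-arch diff, a minimal caller-side sketch may help show how the
reservation calls are meant to be used and why the slowpath needs the extra
argument. This is only an illustration under stated assumptions: it uses the
mutex_reserve_lock()/mutex_reserve_lock_slow()/mutex_unreserve_unlock() calls
from patch 2/7 and the reservation_ticket helpers from patch 6/7, while
"struct buf" and the lock_two_bufs() helper are hypothetical and not part of
any patch:

    struct buf {
            struct ticket_mutex lock;
            /* driver-private state would live here */
    };

    /*
     * Hypothetical helper: take two buffer locks deadlock-free.  If the
     * second lock reports -EAGAIN (an older ticket already owns it), the
     * younger ticket backs off, blocks on the contended lock with the
     * _slow variant, and then retakes the first lock.
     */
    static int lock_two_bufs(struct buf *a, struct buf *b,
                             struct reservation_ticket *t)
    {
            int ret;

            reservation_ticket_init(t);

            ret = mutex_reserve_lock(&a->lock, t, t->seqno);
            if (ret)
                    goto err_fini;

            ret = mutex_reserve_lock(&b->lock, t, t->seqno);
            if (ret == -EAGAIN) {
                    mutex_unreserve_unlock(&a->lock);
                    mutex_reserve_lock_slow(&b->lock, t, t->seqno);
                    ret = mutex_reserve_lock(&a->lock, t, t->seqno);
                    if (ret) {
                            mutex_unreserve_unlock(&b->lock);
                            goto err_fini;
                    }
            } else if (ret) {
                    mutex_unreserve_unlock(&a->lock);
                    goto err_fini;
            }

            /* both locks held; caller unreserve_unlocks both and then
             * calls reservation_ticket_fini() */
            return 0;

    err_fini:
            reservation_ticket_fini(t);
            return ret;
    }

The real list-based version of this dance, including the retry when the
second attempt on the first lock also fails with -EAGAIN, is what
ticket_reserve() in patch 6/7 implements.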
> --- > arch/ia64/include/asm/mutex.h | 20 ++++++++++++++++++++ > arch/powerpc/include/asm/mutex.h | 20 ++++++++++++++++++++ > arch/sh/include/asm/mutex-llsc.h | 20 ++++++++++++++++++++ > arch/x86/include/asm/mutex_32.h | 20 ++++++++++++++++++++ > arch/x86/include/asm/mutex_64.h | 20 ++++++++++++++++++++ > include/asm-generic/mutex-dec.h | 20 ++++++++++++++++++++ > include/asm-generic/mutex-null.h | 1 + > include/asm-generic/mutex-xchg.h | 21 +++++++++++++++++++++ > 8 files changed, 142 insertions(+) > > diff --git a/arch/ia64/include/asm/mutex.h b/arch/ia64/include/asm/mutex.h > index bed73a6..2510058 100644 > --- a/arch/ia64/include/asm/mutex.h > +++ b/arch/ia64/include/asm/mutex.h > @@ -44,6 +44,26 @@ __mutex_fastpath_lock_retval(atomic_t *count, int (*fail= _fn)(atomic_t *)) > } > =20 > /** > + * __mutex_fastpath_lock_retval_arg - try to take the lock by moving the = count > + * from 1 to a 0 value > + * @count: pointer of type atomic_t > + * @arg: argument to pass along if fastpath fails. > + * @fail_fn: function to call if the original value was not 1 > + * > + * Change the count from 1 to a value lower than 1, and call if > + * it wasn't 1 originally. This function returns 0 if the fastpath succeed= s, > + * or anything the slow path function returns. > + */ > +static inline int __mutex_fastpath_lock_retval_arg(atomic_t *count, > + void *arg, int (*fail_fn)(atomic_t *, void*)) > +{ > + if (unlikely(ia64_fetchadd4_acq(count, -1) !=3D 1)) > + return fail_fn(count, arg); > + else > + return 0; > +} > + > +/** > * __mutex_fastpath_unlock - try to promote the count from 0 to 1 > * @count: pointer of type atomic_t > * @fail_fn: function to call if the original value was not 0 > diff --git a/arch/powerpc/include/asm/mutex.h b/arch/powerpc/include/asm/mu= tex.h > index 5399f7e..df4bcff 100644 > --- a/arch/powerpc/include/asm/mutex.h > +++ b/arch/powerpc/include/asm/mutex.h > @@ -97,6 +97,26 @@ __mutex_fastpath_lock_retval(atomic_t *count, int (*fail= _fn)(atomic_t *)) > } > =20 > /** > + * __mutex_fastpath_lock_retval_arg - try to take the lock by moving the = count > + * from 1 to a 0 value > + * @count: pointer of type atomic_t > + * @arg: argument to pass along if fastpath fails. > + * @fail_fn: function to call if the original value was not 1 > + * > + * Change the count from 1 to a value lower than 1, and call if > + * it wasn't 1 originally. This function returns 0 if the fastpath succeed= s, > + * or anything the slow path function returns. 
> + */ > +static inline int __mutex_fastpath_lock_retval_arg(atomic_t *count, > + void *arg, int (*fail_fn)(atomic_t *, void*)) > +{ > + if (unlikely(__mutex_dec_return_lock(count) < 0)) > + return fail_fn(count, arg); > + else > + return 0; > +} > + > +/** > * __mutex_fastpath_unlock - try to promote the count from 0 to 1 > * @count: pointer of type atomic_t > * @fail_fn: function to call if the original value was not 0 > diff --git a/arch/sh/include/asm/mutex-llsc.h b/arch/sh/include/asm/mutex-l= lsc.h > index 090358a..b68dd6d 100644 > --- a/arch/sh/include/asm/mutex-llsc.h > +++ b/arch/sh/include/asm/mutex-llsc.h > @@ -56,6 +56,26 @@ __mutex_fastpath_lock_retval(atomic_t *count, int (*fail= _fn)(atomic_t *)) > return __res; > } > =20 > +static inline int __mutex_fastpath_lock_retval_arg(atomic_t *count, > + void *arg, int (*fail_fn)(atomic_t *, void *)) > +{ > + int __done, __res; > + > + __asm__ __volatile__ ( > + "movli.l @%2, %0 \n" > + "add #-1, %0 \n" > + "movco.l %0, @%2 \n" > + "movt %1 \n" > + : "=3D&z" (__res), "=3D&r" (__done) > + : "r" (&(count)->counter) > + : "t"); > + > + if (unlikely(!__done || __res !=3D 0)) > + __res =3D fail_fn(count, arg); > + > + return __res; > +} > + > static inline void > __mutex_fastpath_unlock(atomic_t *count, void (*fail_fn)(atomic_t *)) > { > diff --git a/arch/x86/include/asm/mutex_32.h b/arch/x86/include/asm/mutex_3= 2.h > index 03f90c8..34f77f9 100644 > --- a/arch/x86/include/asm/mutex_32.h > +++ b/arch/x86/include/asm/mutex_32.h > @@ -58,6 +58,26 @@ static inline int __mutex_fastpath_lock_retval(atomic_t = *count, > } > =20 > /** > + * __mutex_fastpath_lock_retval_arg - try to take the lock by moving the = count > + * from 1 to a 0 value > + * @count: pointer of type atomic_t > + * @arg: argument to pass along if fastpath fails. > + * @fail_fn: function to call if the original value was not 1 > + * > + * Change the count from 1 to a value lower than 1, and call if > + * it wasn't 1 originally. This function returns 0 if the fastpath succeed= s, > + * or anything the slow path function returns. > + */ > +static inline int __mutex_fastpath_lock_retval_arg(atomic_t *count, > + void *arg, int (*fail_fn)(atomic_t *, void*)) > +{ > + if (unlikely(atomic_dec_return(count) < 0)) > + return fail_fn(count, arg); > + else > + return 0; > +} > + > +/** > * __mutex_fastpath_unlock - try to promote the mutex from 0 to 1 > * @count: pointer of type atomic_t > * @fail_fn: function to call if the original value was not 0 > diff --git a/arch/x86/include/asm/mutex_64.h b/arch/x86/include/asm/mutex_6= 4.h > index 68a87b0..148249e 100644 > --- a/arch/x86/include/asm/mutex_64.h > +++ b/arch/x86/include/asm/mutex_64.h > @@ -53,6 +53,26 @@ static inline int __mutex_fastpath_lock_retval(atomic_t = *count, > } > =20 > /** > + * __mutex_fastpath_lock_retval_arg - try to take the lock by moving the = count > + * from 1 to a 0 value > + * @count: pointer of type atomic_t > + * @arg: argument to pass along if fastpath fails. > + * @fail_fn: function to call if the original value was not 1 > + * > + * Change the count from 1 to a value lower than 1, and call if > + * it wasn't 1 originally. This function returns 0 if the fastpath succeed= s, > + * or anything the slow path function returns. 
> + */ > +static inline int __mutex_fastpath_lock_retval_arg(atomic_t *count, > + void *arg, int (*fail_fn)(atomic_t *, void*)) > +{ > + if (unlikely(atomic_dec_return(count) < 0)) > + return fail_fn(count, arg); > + else > + return 0; > +} > + > +/** > * __mutex_fastpath_unlock - increment and call function if nonpositive > * @v: pointer of type atomic_t > * @fail_fn: function to call if the result is nonpositive > diff --git a/include/asm-generic/mutex-dec.h b/include/asm-generic/mutex-de= c.h > index f104af7..f5d027e 100644 > --- a/include/asm-generic/mutex-dec.h > +++ b/include/asm-generic/mutex-dec.h > @@ -43,6 +43,26 @@ __mutex_fastpath_lock_retval(atomic_t *count, int (*fail= _fn)(atomic_t *)) > } > =20 > /** > + * __mutex_fastpath_lock_retval_arg - try to take the lock by moving the = count > + * from 1 to a 0 value > + * @count: pointer of type atomic_t > + * @arg: argument to pass along if fastpath fails. > + * @fail_fn: function to call if the original value was not 1 > + * > + * Change the count from 1 to a value lower than 1, and call if > + * it wasn't 1 originally. This function returns 0 if the fastpath succeed= s, > + * or anything the slow path function returns. > + */ > +static inline int > +__mutex_fastpath_lock_retval_arg(atomic_t *count, void *arg, > + int (*fail_fn)(atomic_t *, void*)) > +{ > + if (unlikely(atomic_dec_return(count) < 0)) > + return fail_fn(count, arg); > + return 0; > +} > + > +/** > * __mutex_fastpath_unlock - try to promote the count from 0 to 1 > * @count: pointer of type atomic_t > * @fail_fn: function to call if the original value was not 0 > diff --git a/include/asm-generic/mutex-null.h b/include/asm-generic/mutex-n= ull.h > index e1bbbc7..991e9c3 100644 > --- a/include/asm-generic/mutex-null.h > +++ b/include/asm-generic/mutex-null.h > @@ -12,6 +12,7 @@ > =20 > #define __mutex_fastpath_lock(count, fail_fn) fail_fn(count) > #define __mutex_fastpath_lock_retval(count, fail_fn) fail_fn(count) > +#define __mutex_fastpath_lock_retval_arg(count, arg, fail_fn) fail_fn(coun= t, arg) > #define __mutex_fastpath_unlock(count, fail_fn) fail_fn(count) > #define __mutex_fastpath_trylock(count, fail_fn) fail_fn(count) > #define __mutex_slowpath_needs_to_unlock() 1 > diff --git a/include/asm-generic/mutex-xchg.h b/include/asm-generic/mutex-x= chg.h > index c04e0db..d9cc971 100644 > --- a/include/asm-generic/mutex-xchg.h > +++ b/include/asm-generic/mutex-xchg.h > @@ -55,6 +55,27 @@ __mutex_fastpath_lock_retval(atomic_t *count, int (*fail= _fn)(atomic_t *)) > } > =20 > /** > + * __mutex_fastpath_lock_retval_arg - try to take the lock by moving the = count > + * from 1 to a 0 value > + * @count: pointer of type atomic_t > + * @arg: argument to pass along if fastpath fails. > + * @fail_fn: function to call if the original value was not 1 > + * > + * Change the count from 1 to a value lower than 1, and call if > + * it wasn't 1 originally. This function returns 0 if the fastpath succeed= s, > + * or anything the slow path function returns. 
> + */ > +static inline int > +__mutex_fastpath_lock_retval_arg(atomic_t *count, void *arg, > + int (*fail_fn)(atomic_t *, void*)) > +{ > + if (unlikely(atomic_xchg(count, 0) !=3D 1)) > + if (likely(atomic_xchg(count, -1) !=3D 1)) > + return fail_fn(count, arg); > + return 0; > +} > + > +/** > * __mutex_fastpath_unlock - try to promote the mutex from 0 to 1 > * @count: pointer of type atomic_t > * @fail_fn: function to call if the original value was not 0 --===============0299236717064391719==-- From m.szyprowski@samsung.com Tue Jan 15 15:07:20 2013 From: Marek Szyprowski To: linaro-mm-sig@lists.linaro.org Subject: Re: [Linaro-mm-sig] [RFC] ARM: dma-mapping: Add DMA attribute to skip iommu mapping Date: Tue, 15 Jan 2013 16:07:16 +0100 Message-ID: <50F570A4.70606@samsung.com> In-Reply-To: <1357639944-12050-1-git-send-email-abhinav.k@samsung.com> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============4987065202790229672==" --===============4987065202790229672== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Hello, On 1/8/2013 11:12 AM, Abhinav Kochhar wrote: > Adding a new dma attribute which can be used by the > platform drivers to avoid creating iommu mappings. > In some cases the buffers are allocated by display > controller driver using dma alloc apis but are not > used for scanout. Though the buffers are allocated > by display controller but are only used for sharing > among different devices. > With this attribute the platform drivers can choose > not to create iommu mapping at the time of buffer > allocation and only create the mapping when they > access this buffer. > > Change-Id: I2178b3756170982d814e085ca62474d07b616a21 > Signed-off-by: Abhinav Kochhar > --- > arch/arm/mm/dma-mapping.c | 8 +++++--- > include/linux/dma-attrs.h | 1 + > 2 files changed, 6 insertions(+), 3 deletions(-) > > diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c > index c0f0f43..e73003c 100644 > --- a/arch/arm/mm/dma-mapping.c > +++ b/arch/arm/mm/dma-mapping.c > @@ -1279,9 +1279,11 @@ static void *arm_iommu_alloc_attrs(struct device *de= v, size_t size, > if (!pages) > return NULL; > =20 > - *handle =3D __iommu_create_mapping(dev, pages, size); > - if (*handle =3D=3D DMA_ERROR_CODE) > - goto err_buffer; > + if (!dma_get_attr(DMA_ATTR_NO_IOMMU_MAPPING, attrs)) { > + *handle =3D __iommu_create_mapping(dev, pages, size); > + if (*handle =3D=3D DMA_ERROR_CODE) > + goto err_buffer; > + } > =20 > if (dma_get_attr(DMA_ATTR_NO_KERNEL_MAPPING, attrs)) > return pages; > diff --git a/include/linux/dma-attrs.h b/include/linux/dma-attrs.h > index c8e1831..1f04419 100644 > --- a/include/linux/dma-attrs.h > +++ b/include/linux/dma-attrs.h > @@ -15,6 +15,7 @@ enum dma_attr { > DMA_ATTR_WEAK_ORDERING, > DMA_ATTR_WRITE_COMBINE, > DMA_ATTR_NON_CONSISTENT, > + DMA_ATTR_NO_IOMMU_MAPPING, > DMA_ATTR_NO_KERNEL_MAPPING, > DMA_ATTR_SKIP_CPU_SYNC, > DMA_ATTR_FORCE_CONTIGUOUS, I'm sorry, but from my perspective this patch and the yet another dma attribute shows that there is something fishy happening in the exynos-drm driver. Creating a mapping in DMA address space is the MAIN purpose of the DMA mapping subsystem, so adding an attribute which skips this operation already should give you a sign of warning that something is not used right. It looks that dma-mapping in the current state is simply not adequate for this driver. 
I noticed that DRM drivers are already known for implementing a lots of common code for their own with slightly changed behavior, like custom page manager/allocator. It looks that exynos-drm driver grew to the point where it also needs such features. It already contains custom code for CPU cache handling, IOMMU and contiguous memory special cases management. I would advise to drop DMA-mapping API completely, avoid adding yet another dozen of DMA attributes useful only for one driver and implement your own memory manager with direct usage of IOMMU API, alloc_pages() and dma_alloc_pages_from_contiguous(). This way DMA mapping subsystem can be kept simple, robust and easy to understand without confusing or conflicting parts. Best regards --=20 Marek Szyprowski Samsung Poland R&D Center --===============4987065202790229672==-- From jason@lakedaemon.net Tue Jan 15 16:56:54 2013 From: Jason Cooper To: linaro-mm-sig@lists.linaro.org Subject: Re: [Linaro-mm-sig] [PATCH v2] mm: dmapool: use provided gfp flags for all dma_alloc_coherent() calls Date: Tue, 15 Jan 2013 11:56:42 -0500 Message-ID: <20130115165642.GA25500@titan.lakedaemon.net> In-Reply-To: <50F3F289.3090402@web.de> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============3995936806939737040==" --===============3995936806939737040== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: 7bit Greg, I've added you to the this thread hoping for a little insight into USB drivers and their use of coherent and GFP_ATOMIC. Am I barking up the wrong tree by looking a the drivers? On Mon, Jan 14, 2013 at 12:56:57PM +0100, Soeren Moch wrote: > On 20.11.2012 15:31, Marek Szyprowski wrote: > >dmapool always calls dma_alloc_coherent() with GFP_ATOMIC flag, > >regardless the flags provided by the caller. This causes excessive > >pruning of emergency memory pools without any good reason. Additionaly, > >on ARM architecture any driver which is using dmapools will sooner or > >later trigger the following error: > >"ERROR: 256 KiB atomic DMA coherent pool is too small! > >Please increase it with coherent_pool= kernel parameter!". > >Increasing the coherent pool size usually doesn't help much and only > >delays such error, because all GFP_ATOMIC DMA allocations are always > >served from the special, very limited memory pool. > > > >This patch changes the dmapool code to correctly use gfp flags provided > >by the dmapool caller. > > > >Reported-by: Soeren Moch > >Reported-by: Thomas Petazzoni > >Signed-off-by: Marek Szyprowski > >Tested-by: Andrew Lunn > >Tested-by: Soeren Moch > > Now I tested linux-3.7.1 (this patch is included there) on my Marvell > Kirkwood system. I still see > > ERROR: 1024 KiB atomic DMA coherent pool is too small! > Please increase it with coherent_pool= kernel parameter! > > after several hours of runtime under heavy load with SATA and > DVB-Sticks (em28xx / drxk and dib0700). Could you try running the system w/o the em28xx stick and see how it goes with v3.7.1? thx, Jason. > As already reported earlier this patch improved the behavior > compared to linux-3.6.x and 3.7.0 (error after several ten minutes > runtime), but > I still see a regression compared to linux-3.5.x. With this kernel the > same system with same workload runs flawlessly. 
> > Regards, > Soeren > > > _______________________________________________ > linux-arm-kernel mailing list > linux-arm-kernel(a)lists.infradead.org > http://lists.infradead.org/mailman/listinfo/linux-arm-kernel --===============3995936806939737040==-- From gregkh@linuxfoundation.org Tue Jan 15 17:49:27 2013 From: Greg KH To: linaro-mm-sig@lists.linaro.org Subject: Re: [Linaro-mm-sig] [PATCH v2] mm: dmapool: use provided gfp flags for all dma_alloc_coherent() calls Date: Tue, 15 Jan 2013 09:50:20 -0800 Message-ID: <20130115175020.GA3764@kroah.com> In-Reply-To: <20130115165642.GA25500@titan.lakedaemon.net> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============3390256746481255353==" --===============3390256746481255353== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: 7bit On Tue, Jan 15, 2013 at 11:56:42AM -0500, Jason Cooper wrote: > Greg, > > I've added you to the this thread hoping for a little insight into USB > drivers and their use of coherent and GFP_ATOMIC. Am I barking up the > wrong tree by looking a the drivers? I don't understand, which drivers are you referring to? USB host controller drivers, or the "normal" drivers? Most USB drivers use GFP_ATOMIC if they are creating memory during their URB callback path, as that is interrupt context. But it shouldn't be all that bad, and the USB core hasn't changed in a while, so something else must be causing this. greg k-h --===============3390256746481255353==-- From arnd@arndb.de Tue Jan 15 19:05:27 2013 From: Arnd Bergmann To: linaro-mm-sig@lists.linaro.org Subject: Re: [Linaro-mm-sig] [RFC] ARM: dma-mapping: Add DMA attribute to skip iommu mapping Date: Tue, 15 Jan 2013 19:05:11 +0000 Message-ID: <201301151905.11704.arnd@arndb.de> In-Reply-To: <50F570A4.70606@samsung.com> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============5296534555838872152==" --===============5296534555838872152== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: 7bit On Tuesday 15 January 2013, Marek Szyprowski wrote: > I'm sorry, but from my perspective this patch and the yet another dma > attribute shows that there is something fishy happening in the exynos-drm > driver. Creating a mapping in DMA address space is the MAIN purpose of > the DMA mapping subsystem, so adding an attribute which skips this > operation already should give you a sign of warning that something is > not used right. > > It looks that dma-mapping in the current state is simply not adequate > for this driver. I noticed that DRM drivers are already known for > implementing a lots of common code for their own with slightly changed > behavior, like custom page manager/allocator. It looks that exynos-drm > driver grew to the point where it also needs such features. It already > contains custom code for CPU cache handling, IOMMU and contiguous > memory special cases management. I would advise to drop DMA-mapping > API completely, avoid adding yet another dozen of DMA attributes useful > only for one driver and implement your own memory manager with direct > usage of IOMMU API, alloc_pages() and dma_alloc_pages_from_contiguous(). > This way DMA mapping subsystem can be kept simple, robust and easy to > understand without confusing or conflicting parts. Makes sense. DRM drivers and KVM are the two cases where you typically want to use the iommu API rather than the dma-mapping API, because you need protection between multiple concurrent user contexts. 
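As a concrete illustration of what "use the iommu API rather than the dma-mapping API" means in
practice, here is a minimal sketch (not code from this thread; error handling is omitted and
my_alloc_iova() is a placeholder for a driver-private address allocator):

   #include <linux/iommu.h>
   #include <linux/gfp.h>

   /* The driver owns the domain and the IOVA layout itself. */
   struct iommu_domain *dom = iommu_domain_alloc(dev->bus);

   iommu_attach_device(dom, dev);

   /* Back a buffer with pages and map it at an address the driver chose. */
   struct page *pg = alloc_pages(GFP_KERNEL, get_order(size));
   unsigned long iova = my_alloc_iova(size);

   iommu_map(dom, iova, page_to_phys(pg), size, IOMMU_READ | IOMMU_WRITE);

   /* ... hand iova to the hardware, and later undo everything ... */
   iommu_unmap(dom, iova, size);
   iommu_detach_device(dom, dev);
   iommu_domain_free(dom);

The point of doing it this way is exactly the one made above: each user context can get its own
domain, and the driver decides what is mapped where, which dma-mapping deliberately hides.
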
Arnd --===============5296534555838872152==-- From sebastian.hesselbarth@gmail.com Tue Jan 15 20:05:54 2013 From: Sebastian Hesselbarth To: linaro-mm-sig@lists.linaro.org Subject: Re: [Linaro-mm-sig] [PATCH v2] mm: dmapool: use provided gfp flags for all dma_alloc_coherent() calls Date: Tue, 15 Jan 2013 21:05:50 +0100 Message-ID: <50F5B69E.1070101@gmail.com> In-Reply-To: <20130115165642.GA25500@titan.lakedaemon.net> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============6447605484504990576==" --===============6447605484504990576== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: 7bit On 01/15/2013 05:56 PM, Jason Cooper wrote: > Greg, > > I've added you to the this thread hoping for a little insight into USB > drivers and their use of coherent and GFP_ATOMIC. Am I barking up the > wrong tree by looking a the drivers? > > On Mon, Jan 14, 2013 at 12:56:57PM +0100, Soeren Moch wrote: >> On 20.11.2012 15:31, Marek Szyprowski wrote: >>> dmapool always calls dma_alloc_coherent() with GFP_ATOMIC flag, >>> regardless the flags provided by the caller. This causes excessive >>> pruning of emergency memory pools without any good reason. Additionaly, >>> on ARM architecture any driver which is using dmapools will sooner or >>> later trigger the following error: >>> "ERROR: 256 KiB atomic DMA coherent pool is too small! >>> Please increase it with coherent_pool= kernel parameter!". >>> Increasing the coherent pool size usually doesn't help much and only >>> delays such error, because all GFP_ATOMIC DMA allocations are always >>> served from the special, very limited memory pool. >>> >>> This patch changes the dmapool code to correctly use gfp flags provided >>> by the dmapool caller. >>> >>> Reported-by: Soeren Moch >>> Reported-by: Thomas Petazzoni >>> Signed-off-by: Marek Szyprowski >>> Tested-by: Andrew Lunn >>> Tested-by: Soeren Moch >> >> Now I tested linux-3.7.1 (this patch is included there) on my Marvell >> Kirkwood system. I still see >> >> ERROR: 1024 KiB atomic DMA coherent pool is too small! >> Please increase it with coherent_pool= kernel parameter! >> >> after several hours of runtime under heavy load with SATA and >> DVB-Sticks (em28xx / drxk and dib0700). > > Could you try running the system w/o the em28xx stick and see how it > goes with v3.7.1? Jason, can you point out what you think we should be looking for? I grep'd for 'GFP_' in drivers/media/usb and especially for dvb-usb (dib0700) it looks like most of the buffers in usb-urb.c are allocated GFP_ATOMIC. em28xx also allocates some of the buffers atomic. If we look for a mem leak in one of the above drivers (including sata_mv), is there an easy way to keep track of allocated and freed kernel memory? 
Sebastian --===============6447605484504990576==-- From jason@lakedaemon.net Tue Jan 15 20:16:30 2013 From: Jason Cooper To: linaro-mm-sig@lists.linaro.org Subject: Re: [Linaro-mm-sig] [PATCH v2] mm: dmapool: use provided gfp flags for all dma_alloc_coherent() calls Date: Tue, 15 Jan 2013 15:16:17 -0500 Message-ID: <20130115201617.GC25500@titan.lakedaemon.net> In-Reply-To: <20130115175020.GA3764@kroah.com> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============3905589533803330429==" --===============3905589533803330429== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: 7bit On Tue, Jan 15, 2013 at 09:50:20AM -0800, Greg KH wrote: > On Tue, Jan 15, 2013 at 11:56:42AM -0500, Jason Cooper wrote: > > Greg, > > > > I've added you to the this thread hoping for a little insight into USB > > drivers and their use of coherent and GFP_ATOMIC. Am I barking up the > > wrong tree by looking a the drivers? > > I don't understand, which drivers are you referring to? USB host > controller drivers, or the "normal" drivers? Sorry I wasn't clear, I was referring specifically to the usb dvb drivers em28xx, drxk and dib0700. These are the drivers reported to be in heavy use when the error occurs. sata_mv is also in use, however no other users of sata_mv have reported problems. Including myself. ;-) > Most USB drivers use GFP_ATOMIC if they are creating memory during > their URB callback path, as that is interrupt context. But it > shouldn't be all that bad, and the USB core hasn't changed in a while, > so something else must be causing this. Agreed, so I went and did more reading. The key piece of the puzzle that I was missing was in arch/arm/mm/dma-mapping.c 660-684. /* * Allocate DMA-coherent memory space and return both the kernel * remapped * virtual and bus address for that space. */ void *arm_dma_alloc(struct device *dev, size_t size, dma_addr_t *handle, gfp_t gfp, struct dma_attrs *attrs) { pgprot_t prot = __get_dma_pgprot(attrs, pgprot_kernel); void *memory; if (dma_alloc_from_coherent(dev, size, handle, &memory)) return memory; return __dma_alloc(dev, size, handle, gfp, prot, false, __builtin_return_address(0)); } static void *arm_coherent_dma_alloc(struct device *dev, size_t size, dma_addr_t *handle, gfp_t gfp, struct dma_attrs *attrs) { pgprot_t prot = __get_dma_pgprot(attrs, pgprot_kernel); void *memory; if (dma_alloc_from_coherent(dev, size, handle, &memory)) return memory; return __dma_alloc(dev, size, handle, gfp, prot, true, __builtin_return_address(0)); } My understanding of this code is that when a driver requests dma memory, we will first try to alloc from the per-driver pool. If that fails, we will then attempt to allocate from the atomic_pool. Once the atomic_pool is exhausted, we get the error: ERROR: 1024 KiB atomic DMA coherent pool is too small! Please increase it with coherent_pool= kernel parameter! If my understanding is correct, one of the drivers (most likely one) either asks for too small of a dma buffer, or is not properly deallocating blocks from the per-device pool. Either case leads to exhaustion, and falling back to the atomic pool. Which subsequently gets wiped out as well. Am I on the right track? thx, Jason. 
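For reference, the knob that error message points at is a boot-time parameter, so a quick way to
test whether the atomic pool is merely undersized (as opposed to being slowly leaked into) is to
raise it on the kernel command line; the size takes the usual K/M suffixes:

    coherent_pool=4M

If the error still comes back after longer runtime under the same load, that points back at a
consumer which keeps allocating without freeing rather than at the pool size itself.
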
--===============3905589533803330429==-- From jason@lakedaemon.net Tue Jan 15 20:19:53 2013 From: Jason Cooper To: linaro-mm-sig@lists.linaro.org Subject: Re: [Linaro-mm-sig] [PATCH v2] mm: dmapool: use provided gfp flags for all dma_alloc_coherent() calls Date: Tue, 15 Jan 2013 15:19:40 -0500 Message-ID: <20130115201940.GD25500@titan.lakedaemon.net> In-Reply-To: <50F5B69E.1070101@gmail.com> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============3949052124302111891==" --===============3949052124302111891== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: 7bit On Tue, Jan 15, 2013 at 09:05:50PM +0100, Sebastian Hesselbarth wrote: > If we look for a mem leak in one of the above drivers (including sata_mv), > is there an easy way to keep track of allocated and freed kernel memory? I'm inclined to think sata_mv is not the cause here, as there are many heavy users of it without error reports. The only thing different here are the three usb dvb dongles. thx, Jason. --===============3949052124302111891==-- From jason@lakedaemon.net Tue Jan 15 21:56:14 2013 From: Jason Cooper To: linaro-mm-sig@lists.linaro.org Subject: Re: [Linaro-mm-sig] [PATCH v2] mm: dmapool: use provided gfp flags for all dma_alloc_coherent() calls Date: Tue, 15 Jan 2013 16:56:02 -0500 Message-ID: <20130115215602.GF25500@titan.lakedaemon.net> In-Reply-To: <20130115201617.GC25500@titan.lakedaemon.net> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============8649601008079495522==" --===============8649601008079495522== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: 7bit Soeren, On Tue, Jan 15, 2013 at 03:16:17PM -0500, Jason Cooper wrote: > If my understanding is correct, one of the drivers (most likely one) > either asks for too small of a dma buffer, or is not properly > deallocating blocks from the per-device pool. Either case leads to > exhaustion, and falling back to the atomic pool. Which subsequently > gets wiped out as well. If my hunch is right, could you please try each of the three dvb drivers in turn and see which one (or more than one) causes the error? thx, Jason. --===============8649601008079495522==-- From smoch@web.de Wed Jan 16 00:18:59 2013 From: Soeren Moch To: linaro-mm-sig@lists.linaro.org Subject: Re: [Linaro-mm-sig] [PATCH v2] mm: dmapool: use provided gfp flags for all dma_alloc_coherent() calls Date: Wed, 16 Jan 2013 01:17:59 +0100 Message-ID: <50F5F1B7.3040201@web.de> In-Reply-To: <20130115215602.GF25500@titan.lakedaemon.net> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============0094878984407451392==" --===============0094878984407451392== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: 7bit On 15.01.2013 22:56, Jason Cooper wrote: > Soeren, > > On Tue, Jan 15, 2013 at 03:16:17PM -0500, Jason Cooper wrote: >> If my understanding is correct, one of the drivers (most likely one) >> either asks for too small of a dma buffer, or is not properly >> deallocating blocks from the per-device pool. Either case leads to >> exhaustion, and falling back to the atomic pool. Which subsequently >> gets wiped out as well. > > If my hunch is right, could you please try each of the three dvb drivers > in turn and see which one (or more than one) causes the error? > In fact I use only 2 types of DVB sticks: em28xx usb bridge plus drxk demodulator, and dib0700 usb bridge plus dib7000p demod. I would bet for em28xx causing the error, but this is not thoroughly tested. 
Unfortunately testing with removed sticks is not easy, because this is a production system and disabling some services for the long time we need to trigger this error will certainly result in unhappy users. I will see what I can do here. Is there an easy way to track the buffer usage without having to wait for complete exhaustion? In linux-3.5.x there is no such problem. Can we use all available memory for dma buffers here on armv5 architectures, in contrast to newer kernels? Regards, Soeren --===============0094878984407451392==-- From jason@lakedaemon.net Wed Jan 16 02:40:26 2013 From: Jason Cooper To: linaro-mm-sig@lists.linaro.org Subject: Re: [Linaro-mm-sig] [PATCH v2] mm: dmapool: use provided gfp flags for all dma_alloc_coherent() calls Date: Tue, 15 Jan 2013 21:40:14 -0500 Message-ID: <20130116024014.GH25500@titan.lakedaemon.net> In-Reply-To: <50F5F1B7.3040201@web.de> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============9115684499890184797==" --===============9115684499890184797== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: 7bit Soeren, On Wed, Jan 16, 2013 at 01:17:59AM +0100, Soeren Moch wrote: > On 15.01.2013 22:56, Jason Cooper wrote: > >On Tue, Jan 15, 2013 at 03:16:17PM -0500, Jason Cooper wrote: > >>If my understanding is correct, one of the drivers (most likely one) > >>either asks for too small of a dma buffer, or is not properly > >>deallocating blocks from the per-device pool. Either case leads to > >>exhaustion, and falling back to the atomic pool. Which subsequently > >>gets wiped out as well. > > > >If my hunch is right, could you please try each of the three dvb drivers > >in turn and see which one (or more than one) causes the error? > > In fact I use only 2 types of DVB sticks: em28xx usb bridge plus drxk > demodulator, and dib0700 usb bridge plus dib7000p demod. > > I would bet for em28xx causing the error, but this is not thoroughly > tested. Unfortunately testing with removed sticks is not easy, because > this is a production system and disabling some services for the long > time we need to trigger this error will certainly result in unhappy > users. Just out of curiosity, what board is it? > I will see what I can do here. Is there an easy way to track the buffer > usage without having to wait for complete exhaustion? DMA_API_DEBUG > In linux-3.5.x there is no such problem. Can we use all available memory > for dma buffers here on armv5 architectures, in contrast to newer > kernels? Were the loads exactly the same when you tested 3.5.x? I looked at the changes from v3.5 to v3.7.1 for all four drivers you mentioned as well as sata_mv. The biggest thing I see is that all of the media drivers got shuffled around into their own subdirectories after v3.5. 'git show -M 0c0d06c' shows it was a clean copy of all the files. What would be most helpful is if you could do a git bisect between v3.5.x (working) and the oldest version where you know it started failing (v3.7.1 or earlier if you know it). thx, Jason. 
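For completeness, the bisect being asked for is mechanical; from a kernel source tree:

    git bisect start
    git bisect bad v3.7.1     # oldest kernel known to show the error
    git bisect good v3.5      # newest kernel known to be fine
    # build, boot, run the usual DVB/SATA load, then tell git the result:
    git bisect good           # or 'git bisect bad', repeat until git names a commit
    git bisect reset

git halves the remaining range on every answer, so even a two-release span needs only on the
order of a dozen test cycles; the slow part here is that each cycle needs hours of runtime
before a "good" can be trusted.
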
--===============9115684499890184797==-- From smoch@web.de Wed Jan 16 03:25:07 2013 From: Soeren Moch To: linaro-mm-sig@lists.linaro.org Subject: Re: [Linaro-mm-sig] [PATCH v2] mm: dmapool: use provided gfp flags for all dma_alloc_coherent() calls Date: Wed, 16 Jan 2013 04:24:54 +0100 Message-ID: <50F61D86.4020801@web.de> In-Reply-To: <20130116024014.GH25500@titan.lakedaemon.net> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============4784185261165894373==" --===============4784185261165894373== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: 7bit On 16.01.2013 03:40, Jason Cooper wrote: > Soeren, > > On Wed, Jan 16, 2013 at 01:17:59AM +0100, Soeren Moch wrote: >> On 15.01.2013 22:56, Jason Cooper wrote: >>> On Tue, Jan 15, 2013 at 03:16:17PM -0500, Jason Cooper wrote: >>>> If my understanding is correct, one of the drivers (most likely one) >>>> either asks for too small of a dma buffer, or is not properly >>>> deallocating blocks from the per-device pool. Either case leads to >>>> exhaustion, and falling back to the atomic pool. Which subsequently >>>> gets wiped out as well. >>> >>> If my hunch is right, could you please try each of the three dvb drivers >>> in turn and see which one (or more than one) causes the error? >> >> In fact I use only 2 types of DVB sticks: em28xx usb bridge plus drxk >> demodulator, and dib0700 usb bridge plus dib7000p demod. >> >> I would bet for em28xx causing the error, but this is not thoroughly >> tested. Unfortunately testing with removed sticks is not easy, because >> this is a production system and disabling some services for the long >> time we need to trigger this error will certainly result in unhappy >> users. > > Just out of curiosity, what board is it? The kirkwood board? A modified Guruplug Server Plus. > >> I will see what I can do here. Is there an easy way to track the buffer >> usage without having to wait for complete exhaustion? > > DMA_API_DEBUG OK, maybe I can try this. > >> In linux-3.5.x there is no such problem. Can we use all available memory >> for dma buffers here on armv5 architectures, in contrast to newer >> kernels? > > Were the loads exactly the same when you tested 3.5.x? Exactly the same, yes. >I looked at the > changes from v3.5 to v3.7.1 for all four drivers you mentioned as well > as sata_mv. > > The biggest thing I see is that all of the media drivers got shuffled > around into their own subdirectories after v3.5. 'git show -M 0c0d06c' > shows it was a clean copy of all the files. > > What would be most helpful is if you could do a git bisect between > v3.5.x (working) and the oldest version where you know it started > failing (v3.7.1 or earlier if you know it). > I did not bisect it, but Marek mentioned earlier that commit e9da6e9905e639b0f842a244bc770b48ad0523e9 in Linux v3.6-rc1 introduced new code for dma allocations. This is probably the root cause for the new (mis-)behavior (due to my tests 3.6.0 is not working anymore). I'm not very familiar with arm mm code, and from the patch itself I cannot understand what's different. Maybe CONFIG_CMA is default also for armv5 (not only v6) now? But I might be totally wrong here, maybe someone of the mm experts can explain the difference? 
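(For the DMA_API_DEBUG experiment mentioned above, the usual recipe is roughly the following,
assuming debugfs is available; the em28xx filter value is just an example:

    CONFIG_DMA_API_DEBUG=y                                   # kernel config
    mount -t debugfs none /sys/kernel/debug
    echo em28xx > /sys/kernel/debug/dma-api/driver_filter    # optional: limit output to one driver
    cat /sys/kernel/debug/dma-api/error_count

dma-debug reports DMA-API misuse and warns about mappings still outstanding when a device goes
away, so it can help show whether one of the USB drivers keeps coherent allocations around; it
will not by itself explain why the atomic pool drains.)
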
Regards, Soeren --===============4784185261165894373==-- From inki.dae@samsung.com Wed Jan 16 06:28:56 2013 From: Inki Dae To: linaro-mm-sig@lists.linaro.org Subject: Re: [Linaro-mm-sig] [PATCH 5/7] seqno-fence: Hardware dma-buf implementation of fencing (v4) Date: Wed, 16 Jan 2013 15:28:55 +0900 Message-ID: In-Reply-To: <1358253244-11453-6-git-send-email-maarten.lankhorst@canonical.com> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============6740014941294258013==" --===============6740014941294258013== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable 2013/1/15 Maarten Lankhorst : > This type of fence can be used with hardware synchronization for simple > hardware that can block execution until the condition > (dma_buf[offset] - value) >=3D 0 has been met. > > A software fallback still has to be provided in case the fence is used > with a device that doesn't support this mechanism. It is useful to expose > this for graphics cards that have an op to support this. > > Some cards like i915 can export those, but don't have an option to wait, > so they need the software fallback. > > I extended the original patch by Rob Clark. > > v1: Original > v2: Renamed from bikeshed to seqno, moved into dma-fence.c since > not much was left of the file. Lots of documentation added. > v3: Use fence_ops instead of custom callbacks. Moved to own file > to avoid circular dependency between dma-buf.h and fence.h > v4: Add spinlock pointer to seqno_fence_init > > Signed-off-by: Maarten Lankhorst > --- > Documentation/DocBook/device-drivers.tmpl | 1 + > drivers/base/fence.c | 38 +++++++++++ > include/linux/seqno-fence.h | 105 ++++++++++++++++++++++++++= ++++ > 3 files changed, 144 insertions(+) > create mode 100644 include/linux/seqno-fence.h > > diff --git a/Documentation/DocBook/device-drivers.tmpl b/Documentation/DocB= ook/device-drivers.tmpl > index 6f53fc0..ad14396 100644 > --- a/Documentation/DocBook/device-drivers.tmpl > +++ b/Documentation/DocBook/device-drivers.tmpl > @@ -128,6 +128,7 @@ X!Edrivers/base/interface.c > !Edrivers/base/dma-buf.c > !Edrivers/base/fence.c > !Iinclude/linux/fence.h > +!Iinclude/linux/seqno-fence.h > !Edrivers/base/dma-coherent.c > !Edrivers/base/dma-mapping.c > > diff --git a/drivers/base/fence.c b/drivers/base/fence.c > index 28e5ffd..1d3f29c 100644 > --- a/drivers/base/fence.c > +++ b/drivers/base/fence.c > @@ -24,6 +24,7 @@ > #include > #include > #include > +#include > > atomic_t fence_context_counter =3D ATOMIC_INIT(0); > EXPORT_SYMBOL(fence_context_counter); > @@ -284,3 +285,40 @@ out: > return ret; > } > EXPORT_SYMBOL(fence_default_wait); > + > +static bool seqno_enable_signaling(struct fence *fence) > +{ > + struct seqno_fence *seqno_fence =3D to_seqno_fence(fence); > + return seqno_fence->ops->enable_signaling(fence); > +} > + > +static bool seqno_signaled(struct fence *fence) > +{ > + struct seqno_fence *seqno_fence =3D to_seqno_fence(fence); > + return seqno_fence->ops->signaled && seqno_fence->ops->signaled(fen= ce); > +} > + > +static void seqno_release(struct fence *fence) > +{ > + struct seqno_fence *f =3D to_seqno_fence(fence); > + > + dma_buf_put(f->sync_buf); > + if (f->ops->release) > + f->ops->release(fence); > + else > + kfree(f); > +} > + > +static long seqno_wait(struct fence *fence, bool intr, signed long timeout) > +{ > + struct seqno_fence *f =3D to_seqno_fence(fence); > + return f->ops->wait(fence, intr, timeout); > +} > + > +const struct fence_ops seqno_fence_ops =3D { > + .enable_signaling =3D 
seqno_enable_signaling, > + .signaled =3D seqno_signaled, > + .wait =3D seqno_wait, > + .release =3D seqno_release > +}; > +EXPORT_SYMBOL_GPL(seqno_fence_ops); > diff --git a/include/linux/seqno-fence.h b/include/linux/seqno-fence.h > new file mode 100644 > index 0000000..603adc0 > --- /dev/null > +++ b/include/linux/seqno-fence.h > @@ -0,0 +1,105 @@ > +/* > + * seqno-fence, using a dma-buf to synchronize fencing > + * > + * Copyright (C) 2012 Texas Instruments > + * Copyright (C) 2012 Canonical Ltd > + * Authors: > + * Rob Clark > + * Maarten Lankhorst > + * > + * This program is free software; you can redistribute it and/or modify it > + * under the terms of the GNU General Public License version 2 as publishe= d by > + * the Free Software Foundation. > + * > + * This program is distributed in the hope that it will be useful, but WIT= HOUT > + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or > + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License f= or > + * more details. > + * > + * You should have received a copy of the GNU General Public License along= with > + * this program. If not, see . > + */ > + > +#ifndef __LINUX_SEQNO_FENCE_H > +#define __LINUX_SEQNO_FENCE_H > + > +#include > +#include > + > +struct seqno_fence { > + struct fence base; > + > + const struct fence_ops *ops; > + struct dma_buf *sync_buf; > + uint32_t seqno_ofs; > +}; Hi maarten, I'm applying dma-fence v11 and seqno-fence v4 to exynos drm and have some proposals. The above seqno_fence structure has only one dmabuf. Shouldn't it have mutiple dmabufs? For example, in case of drm driver, when pageflip is requested, one framebuffer could have one more gem buffer for NV12M format. And this means that one more exported dmabufs should be sychronized with other devices. Below is simple structure for it, struct seqno_fence_dmabuf { struct list_head list; int id; struct dmabuf *sync_buf; uint32_t seqno_ops; uint32_t seqno; }; The member, id, could be used to identify which device sync_buf is going to be accessed by. In case of drm driver, one framebuffer could be accessed by one more devices, one is Display controller and another is HDMI controller. So id would have crtc number. And seqno_fence structure could be defined like below, struct seqno_fence { struct list_head sync_buf_list; struct fence base; const struct fence_ops *ops; }; In addition, I have implemented fence-helper framework for sw sync as WIP and below is intefaces for it, struct fence_helper { struct list_head entries; struct reservation_ticket ticket; struct seqno_fence *sf; spinlock_t lock; void *priv; }; int fence_helper_init(struct fence_helper *fh, void *priv, void (*remease)(struct fence *fence)); - This function is called at driver open so process unique context would have a new seqno_fence instance. This function does just seqno_fence_init call, initialize entries list and set device specific fence release callback. bool fence_helper_check_sync_buf(struct fence_helper *fh, struct dma_buf *sync_buf, int id); - This function is called before dma is started and checks if same sync_bufs had already be committed to reservation_object, bo->fence_shared[n]. And id could be used to identy which device sync_buf is going to be accessed by. int fence_helper_add_sync_buf(struct fence_helper *fh, struct dma_buf *sync_buf, int id); - This function is called if fence_helper_check_sync_buf() is true and adds it seqno_fence's sync_buf_list wrapping sync_buf as seqno_fence_dma_buf structure. 
With this function call, one seqno_fence instance would have more sync_bufs. At this time, the reference count to this sync_buf is taken. void fence_helper_del_sync_buf(struct fence_helper *fh, int id); - This function is called if some operation is failed after fence_helper_add_sync_buf call to release relevant resources. int fence_helper_init_reservation_entry(struct fence_helper *fh, struct dma_buf *dmabuf, bool shared, int id); - This function is called after fence_helper_add_sync_buf call and calls reservation_entry_init function to set a reservation object of sync_buf to a new reservation_entry object. And then the new reservation_entry is added to fh->entries to track all sync_bufs this device is going to access. void fence_helper_fini_reservation_entry(struct fence_helper *fh, int id); - This function is called if some operation is failed after fence_helper_init_reservation_entry call to releae relevant resources. int fence_helper_ticket_reserve(struct fence_helper *fh, int id); - This function is called after fence_helper_init_reservation_entry call and calls ticket_reserve function to reserve a ticket(locked for each reservation entry in fh->entires) void fence_helper_ticket_commit(struct fence_helper *fh, int id); - This function is called after fence_helper_ticket_reserve() is called to commit this device's fence to all reservation_objects of each sync_buf. After that, once other devices try to access these buffers, they would be blocked and unlock each reservation entry in fh->entires. int fence_helper_wait(struct fence_helper *fh, struct dma_buf *dmabuf, bool intr); - This function is called before fence_helper_add_sync_buf() is called to wait for a signal from other devices. int fence_helper_signal(struct fence_helper *fh, int id); - This function is called by device's interrupt handler or somewhere when dma access to this buffer has been completed and calls fence_signal() with each fence registed to each reservation object in fh->entries to notify dma access completion to other deivces. At this time, other devices blocked would be waked up and forward their next step. For more detail, in addition, this function does the following, - delete each reservation entry in fh->entries. - release each seqno_fence_dmabuf object in seqno_fence's sync_buf_list and call dma_buf_put() to put the reference count to dmabuf. Now the fence-helper framework is just WIP yet so there may be my missing points. If you are ok, I'd like to post it as RFC. Thanks, Inki Dae > + > +extern const struct fence_ops seqno_fence_ops; > + > +/** > + * to_seqno_fence - cast a fence to a seqno_fence > + * @fence: fence to cast to a seqno_fence > + * > + * Returns NULL if the fence is not a seqno_fence, > + * or the seqno_fence otherwise. 
> + */ > +static inline struct seqno_fence * > +to_seqno_fence(struct fence *fence) > +{ > + if (fence->ops !=3D &seqno_fence_ops) > + return NULL; > + return container_of(fence, struct seqno_fence, base); > +} > + > +/** > + * seqno_fence_init - initialize a seqno fence > + * @fence: seqno_fence to initialize > + * @lock: pointer to spinlock to use for fence > + * @sync_buf: buffer containing the memory location to signal on > + * @context: the execution context this fence is a part of > + * @seqno_ofs: the offset within @sync_buf > + * @seqno: the sequence # to signal on > + * @ops: the fence_ops for operations on this seqno fence > + * > + * This function initializes a struct seqno_fence with passed parameters, > + * and takes a reference on sync_buf which is released on fence destructio= n. > + * > + * A seqno_fence is a dma_fence which can complete in software when > + * enable_signaling is called, but it also completes when > + * (s32)((sync_buf)[seqno_ofs] - seqno) >=3D 0 is true > + * > + * The seqno_fence will take a refcount on the sync_buf until it's > + * destroyed, but actual lifetime of sync_buf may be longer if one of the > + * callers take a reference to it. > + * > + * Certain hardware have instructions to insert this type of wait condition > + * in the command stream, so no intervention from software would be needed. > + * This type of fence can be destroyed before completed, however a referen= ce > + * on the sync_buf dma-buf can be taken. It is encouraged to re-use the sa= me > + * dma-buf for sync_buf, since mapping or unmapping the sync_buf to the > + * device's vm can be expensive. > + * > + * It is recommended for creators of seqno_fence to call fence_signal > + * before destruction. This will prevent possible issues from wraparound at > + * time of issue vs time of check, since users can check fence_is_signaled > + * before submitting instructions for the hardware to wait on the fence. > + * However, when ops.enable_signaling is not called, it doesn't have to be > + * done as soon as possible, just before there's any real danger of seqno > + * wraparound. 
> + */ > +static inline void > +seqno_fence_init(struct seqno_fence *fence, spinlock_t *lock, > + struct dma_buf *sync_buf, uint32_t context, uint32_t seqn= o_ofs, > + uint32_t seqno, const struct fence_ops *ops) > +{ > + BUG_ON(!fence || !sync_buf || !ops->enable_signaling || !ops->wait); > + > + __fence_init(&fence->base, &seqno_fence_ops, lock, context, seqno); > + > + get_dma_buf(sync_buf); > + fence->ops =3D ops; > + fence->sync_buf =3D sync_buf; > + fence->seqno_ofs =3D seqno_ofs; > +} > + > +#endif /* __LINUX_SEQNO_FENCE_H */ > -- > 1.8.0.3 > > > _______________________________________________ > Linaro-mm-sig mailing list > Linaro-mm-sig(a)lists.linaro.org > http://lists.linaro.org/mailman/listinfo/linaro-mm-sig --===============6740014941294258013==-- From smoch@web.de Wed Jan 16 08:56:22 2013 From: Soeren Moch To: linaro-mm-sig@lists.linaro.org Subject: Re: [Linaro-mm-sig] [PATCH v2] mm: dmapool: use provided gfp flags for all dma_alloc_coherent() calls Date: Wed, 16 Jan 2013 09:55:55 +0100 Message-ID: <50F66B1B.40301@web.de> In-Reply-To: <50F61D86.4020801@web.de> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============7803057891938753438==" --===============7803057891938753438== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: 7bit On 16.01.2013 04:24, Soeren Moch wrote: > On 16.01.2013 03:40, Jason Cooper wrote: >> Soeren, >> >> On Wed, Jan 16, 2013 at 01:17:59AM +0100, Soeren Moch wrote: >>> On 15.01.2013 22:56, Jason Cooper wrote: >>>> On Tue, Jan 15, 2013 at 03:16:17PM -0500, Jason Cooper wrote: >>>>> If my understanding is correct, one of the drivers (most likely one) >>>>> either asks for too small of a dma buffer, or is not properly >>>>> deallocating blocks from the per-device pool. Either case leads to >>>>> exhaustion, and falling back to the atomic pool. Which subsequently >>>>> gets wiped out as well. >>>> >>>> If my hunch is right, could you please try each of the three dvb >>>> drivers >>>> in turn and see which one (or more than one) causes the error? >>> >>> In fact I use only 2 types of DVB sticks: em28xx usb bridge plus drxk >>> demodulator, and dib0700 usb bridge plus dib7000p demod. >>> >>> I would bet for em28xx causing the error, but this is not thoroughly >>> tested. Unfortunately testing with removed sticks is not easy, because >>> this is a production system and disabling some services for the long >>> time we need to trigger this error will certainly result in unhappy >>> users. >> OK, I could trigger the error ERROR: 1024 KiB atomic DMA coherent pool is too small! Please increase it with coherent_pool= kernel parameter! only with em28xx sticks and sata, dib0700 sticks removed. >> Just out of curiosity, what board is it? > > The kirkwood board? A modified Guruplug Server Plus. em28xx sticks: "TerraTec Cinergy HTC Stick HD" and "PCTV Quatro Stick" dib0700 sticks: "WinTV-NOVA-TD Stick" >> >>> I will see what I can do here. Is there an easy way to track the buffer >>> usage without having to wait for complete exhaustion? >> >> DMA_API_DEBUG > > OK, maybe I can try this. >> >>> In linux-3.5.x there is no such problem. Can we use all available memory >>> for dma buffers here on armv5 architectures, in contrast to newer >>> kernels? >> >> Were the loads exactly the same when you tested 3.5.x? > > Exactly the same, yes. > >> I looked at the >> changes from v3.5 to v3.7.1 for all four drivers you mentioned as well >> as sata_mv. 
>> >> The biggest thing I see is that all of the media drivers got shuffled >> around into their own subdirectories after v3.5. 'git show -M 0c0d06c' >> shows it was a clean copy of all the files. >> >> What would be most helpful is if you could do a git bisect between >> v3.5.x (working) and the oldest version where you know it started >> failing (v3.7.1 or earlier if you know it). >> > I did not bisect it, but Marek mentioned earlier that commit > e9da6e9905e639b0f842a244bc770b48ad0523e9 in Linux v3.6-rc1 introduced > new code for dma allocations. This is probably the root cause for the > new (mis-)behavior (due to my tests 3.6.0 is not working anymore). I don't want to say that Mareks patch is wrong, probably it triggers a bug somewhere else! (in em28xx?) > I'm not very familiar with arm mm code, and from the patch itself I > cannot understand what's different. Maybe CONFIG_CMA is default > also for armv5 (not only v6) now? But I might be totally wrong here, > maybe someone of the mm experts can explain the difference? > Regards, Soeren --===============7803057891938753438==-- From maarten.lankhorst@canonical.com Wed Jan 16 10:36:53 2013 From: Maarten Lankhorst To: linaro-mm-sig@lists.linaro.org Subject: Re: [Linaro-mm-sig] [PATCH 5/7] seqno-fence: Hardware dma-buf implementation of fencing (v4) Date: Wed, 16 Jan 2013 11:36:48 +0100 Message-ID: <50F682C0.3030009@canonical.com> In-Reply-To: MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============6169405995628430751==" --===============6169405995628430751== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Op 16-01-13 07:28, Inki Dae schreef: > 2013/1/15 Maarten Lankhorst : >> This type of fence can be used with hardware synchronization for simple >> hardware that can block execution until the condition >> (dma_buf[offset] - value) >=3D 0 has been met. >> >> A software fallback still has to be provided in case the fence is used >> with a device that doesn't support this mechanism. It is useful to expose >> this for graphics cards that have an op to support this. >> >> Some cards like i915 can export those, but don't have an option to wait, >> so they need the software fallback. >> >> I extended the original patch by Rob Clark. >> >> v1: Original >> v2: Renamed from bikeshed to seqno, moved into dma-fence.c since >> not much was left of the file. Lots of documentation added. >> v3: Use fence_ops instead of custom callbacks. 
Moved to own file >> to avoid circular dependency between dma-buf.h and fence.h >> v4: Add spinlock pointer to seqno_fence_init >> >> Signed-off-by: Maarten Lankhorst >> --- >> Documentation/DocBook/device-drivers.tmpl | 1 + >> drivers/base/fence.c | 38 +++++++++++ >> include/linux/seqno-fence.h | 105 +++++++++++++++++++++++++= +++++ >> 3 files changed, 144 insertions(+) >> create mode 100644 include/linux/seqno-fence.h >> >> diff --git a/Documentation/DocBook/device-drivers.tmpl b/Documentation/Doc= Book/device-drivers.tmpl >> index 6f53fc0..ad14396 100644 >> --- a/Documentation/DocBook/device-drivers.tmpl >> +++ b/Documentation/DocBook/device-drivers.tmpl >> @@ -128,6 +128,7 @@ X!Edrivers/base/interface.c >> !Edrivers/base/dma-buf.c >> !Edrivers/base/fence.c >> !Iinclude/linux/fence.h >> +!Iinclude/linux/seqno-fence.h >> !Edrivers/base/dma-coherent.c >> !Edrivers/base/dma-mapping.c >> >> diff --git a/drivers/base/fence.c b/drivers/base/fence.c >> index 28e5ffd..1d3f29c 100644 >> --- a/drivers/base/fence.c >> +++ b/drivers/base/fence.c >> @@ -24,6 +24,7 @@ >> #include >> #include >> #include >> +#include >> >> atomic_t fence_context_counter =3D ATOMIC_INIT(0); >> EXPORT_SYMBOL(fence_context_counter); >> @@ -284,3 +285,40 @@ out: >> return ret; >> } >> EXPORT_SYMBOL(fence_default_wait); >> + >> +static bool seqno_enable_signaling(struct fence *fence) >> +{ >> + struct seqno_fence *seqno_fence =3D to_seqno_fence(fence); >> + return seqno_fence->ops->enable_signaling(fence); >> +} >> + >> +static bool seqno_signaled(struct fence *fence) >> +{ >> + struct seqno_fence *seqno_fence =3D to_seqno_fence(fence); >> + return seqno_fence->ops->signaled && seqno_fence->ops->signaled(fe= nce); >> +} >> + >> +static void seqno_release(struct fence *fence) >> +{ >> + struct seqno_fence *f =3D to_seqno_fence(fence); >> + >> + dma_buf_put(f->sync_buf); >> + if (f->ops->release) >> + f->ops->release(fence); >> + else >> + kfree(f); >> +} >> + >> +static long seqno_wait(struct fence *fence, bool intr, signed long timeou= t) >> +{ >> + struct seqno_fence *f =3D to_seqno_fence(fence); >> + return f->ops->wait(fence, intr, timeout); >> +} >> + >> +const struct fence_ops seqno_fence_ops =3D { >> + .enable_signaling =3D seqno_enable_signaling, >> + .signaled =3D seqno_signaled, >> + .wait =3D seqno_wait, >> + .release =3D seqno_release >> +}; >> +EXPORT_SYMBOL_GPL(seqno_fence_ops); >> diff --git a/include/linux/seqno-fence.h b/include/linux/seqno-fence.h >> new file mode 100644 >> index 0000000..603adc0 >> --- /dev/null >> +++ b/include/linux/seqno-fence.h >> @@ -0,0 +1,105 @@ >> +/* >> + * seqno-fence, using a dma-buf to synchronize fencing >> + * >> + * Copyright (C) 2012 Texas Instruments >> + * Copyright (C) 2012 Canonical Ltd >> + * Authors: >> + * Rob Clark >> + * Maarten Lankhorst >> + * >> + * This program is free software; you can redistribute it and/or modify it >> + * under the terms of the GNU General Public License version 2 as publish= ed by >> + * the Free Software Foundation. >> + * >> + * This program is distributed in the hope that it will be useful, but WI= THOUT >> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or >> + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License = for >> + * more details. >> + * >> + * You should have received a copy of the GNU General Public License alon= g with >> + * this program. If not, see . 
>> + */ >> + >> +#ifndef __LINUX_SEQNO_FENCE_H >> +#define __LINUX_SEQNO_FENCE_H >> + >> +#include >> +#include >> + >> +struct seqno_fence { >> + struct fence base; >> + >> + const struct fence_ops *ops; >> + struct dma_buf *sync_buf; >> + uint32_t seqno_ofs; >> +}; > Hi maarten, > > I'm applying dma-fence v11 and seqno-fence v4 to exynos drm and have > some proposals. > > The above seqno_fence structure has only one dmabuf. Shouldn't it have > mutiple dmabufs? For example, in case of drm driver, when pageflip is > requested, one framebuffer could have one more gem buffer for NV12M > format. And this means that one more exported dmabufs should be > sychronized with other devices. Below is simple structure for it, The fence guards a single operation, as such I didn't feel like more than one dma-buf was needed to guard it. Have you considered simply attaching multiple fences instead? Each with their= own dma-buf. There has been some muttering about allowing multiple exclusive fences to be = attached, for arm soc's. But I'm also considering getting rid of the dma-buf member and add a function= call to retrieve it, since the sync dma-buf member should not be changing often, and it would zap 2 atom= ic ops on every fence, but I want it replaced by something that's not 10x more complicated. Maybe "int get_sync_dma_buf(fence, old_dma_buf, &new_dma_buf)" that will set = new_dma_buf =3D NULL if the old_dma_buf is unchanged, and return true + return a new reference to = the sync dma_buf if it's not identical to old_dma_buf. old_dma_buf can also be NULL or a dma_buf that belongs to a different fence->= context entirely. It might be capable of returning an error, in which case the fence would count as being signaled. Th= is could reduce the need for separately checking fence_is_signaled first. I think this would allow caching the synchronization dma_buf in a similar way= without each fence needing to hold a reference to the dma_buf all the time, even for fences that are onl= y used internally. > struct seqno_fence_dmabuf { > struct list_head list; > int id; > struct dmabuf *sync_buf; > uint32_t seqno_ops; > uint32_t seqno; > }; > > The member, id, could be used to identify which device sync_buf is > going to be accessed by. In case of drm driver, one framebuffer could > be accessed by one more devices, one is Display controller and another > is HDMI controller. So id would have crtc number. Why do you need this? the base fence already has a context member. In fact I don't see why you need a linked list, at worst you'd need a static = array since the amount of dma-bufs should already be known during allocation time. I would prefer to simply make reservation_object->fence_exclusive an array, s= ince it would be easier to implement, and there have been some calls that arm might need such a thing. > And seqno_fence structure could be defined like below, > > struct seqno_fence { > struct list_head sync_buf_list; > struct fence base; > const struct fence_ops *ops; > }; > > In addition, I have implemented fence-helper framework for sw sync as > WIP and below is intefaces for it, > > struct fence_helper { > struct list_head entries; > struct reservation_ticket ticket; > struct seqno_fence *sf; > spinlock_t lock; > void *priv; > }; > > int fence_helper_init(struct fence_helper *fh, void *priv, void > (*remease)(struct fence *fence)); > - This function is called at driver open so process unique context > would have a new seqno_fence instance. 
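Written out as a prototype, the idea floated above would be something like this (sketch only,
nothing here exists yet):

    /*
     * Return <0 on error (the caller then treats the fence as signaled),
     * 0 with *new_buf == NULL when the sync buffer is unchanged from old_buf,
     * or >0 with a new reference stored in *new_buf.
     */
    int get_sync_dma_buf(struct fence *fence, struct dma_buf *old_buf,
                         struct dma_buf **new_buf);

so a driver can keep using the dma-buf it already has mapped and only pays for a lookup when the
fence's backing store actually changed.
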
This function does just > seqno_fence_init call, initialize entries list and set device specific > fence release callback. > > bool fence_helper_check_sync_buf(struct fence_helper *fh, struct > dma_buf *sync_buf, int id); > - This function is called before dma is started and checks if same > sync_bufs had already be committed to reservation_object, > bo->fence_shared[n]. And id could be used to identy which device > sync_buf is going to be accessed by. > > int fence_helper_add_sync_buf(struct fence_helper *fh, struct dma_buf > *sync_buf, int id); > - This function is called if fence_helper_check_sync_buf() is true and > adds it seqno_fence's sync_buf_list wrapping sync_buf as > seqno_fence_dma_buf structure. With this function call, one > seqno_fence instance would have more sync_bufs. At this time, the > reference count to this sync_buf is taken. > > void fence_helper_del_sync_buf(struct fence_helper *fh, int id); > - This function is called if some operation is failed after > fence_helper_add_sync_buf call to release relevant resources. > > int fence_helper_init_reservation_entry(struct fence_helper *fh, > struct dma_buf *dmabuf, bool shared, int id); > - This function is called after fence_helper_add_sync_buf call and > calls reservation_entry_init function to set a reservation object of > sync_buf to a new reservation_entry object. And then the new > reservation_entry is added to fh->entries to track all sync_bufs this > device is going to access. > > void fence_helper_fini_reservation_entry(struct fence_helper *fh, int id); > - This function is called if some operation is failed after > fence_helper_init_reservation_entry call to releae relevant resources. > > int fence_helper_ticket_reserve(struct fence_helper *fh, int id); > - This function is called after fence_helper_init_reservation_entry > call and calls ticket_reserve function to reserve a ticket(locked for > each reservation entry in fh->entires) > > void fence_helper_ticket_commit(struct fence_helper *fh, int id); > - This function is called after fence_helper_ticket_reserve() is > called to commit this device's fence to all reservation_objects of > each sync_buf. After that, once other devices try to access these > buffers, they would be blocked and unlock each reservation entry in > fh->entires. > > int fence_helper_wait(struct fence_helper *fh, struct dma_buf *dmabuf, > bool intr); > - This function is called before fence_helper_add_sync_buf() is called > to wait for a signal from other devices. > > int fence_helper_signal(struct fence_helper *fh, int id); > - This function is called by device's interrupt handler or somewhere > when dma access to this buffer has been completed and calls > fence_signal() with each fence registed to each reservation object in > fh->entries to notify dma access completion to other deivces. At this > time, other devices blocked would be waked up and forward their next > step. > > For more detail, in addition, this function does the following, > - delete each reservation entry in fh->entries. > - release each seqno_fence_dmabuf object in seqno_fence's > sync_buf_list and call dma_buf_put() to put the reference count to > dmabuf. > > > Now the fence-helper framework is just WIP yet so there may be my > missing points. If you are ok, I'd like to post it as RFC. Way too complicated.. 
--===============6169405995628430751==-- From inki.dae@samsung.com Wed Jan 16 12:00:09 2013 From: Inki Dae To: linaro-mm-sig@lists.linaro.org Subject: Re: [Linaro-mm-sig] [PATCH 5/7] seqno-fence: Hardware dma-buf implementation of fencing (v4) Date: Wed, 16 Jan 2013 21:00:07 +0900 Message-ID: In-Reply-To: <50F682C0.3030009@canonical.com> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============1443439763342265041==" --===============1443439763342265041== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable 2013/1/16 Maarten Lankhorst : > Op 16-01-13 07:28, Inki Dae schreef: >> 2013/1/15 Maarten Lankhorst : >>> This type of fence can be used with hardware synchronization for simple >>> hardware that can block execution until the condition >>> (dma_buf[offset] - value) >=3D 0 has been met. >>> >>> A software fallback still has to be provided in case the fence is used >>> with a device that doesn't support this mechanism. It is useful to expose >>> this for graphics cards that have an op to support this. >>> >>> Some cards like i915 can export those, but don't have an option to wait, >>> so they need the software fallback. >>> >>> I extended the original patch by Rob Clark. >>> >>> v1: Original >>> v2: Renamed from bikeshed to seqno, moved into dma-fence.c since >>> not much was left of the file. Lots of documentation added. >>> v3: Use fence_ops instead of custom callbacks. Moved to own file >>> to avoid circular dependency between dma-buf.h and fence.h >>> v4: Add spinlock pointer to seqno_fence_init >>> >>> Signed-off-by: Maarten Lankhorst >>> --- >>> Documentation/DocBook/device-drivers.tmpl | 1 + >>> drivers/base/fence.c | 38 +++++++++++ >>> include/linux/seqno-fence.h | 105 ++++++++++++++++++++++++= ++++++ >>> 3 files changed, 144 insertions(+) >>> create mode 100644 include/linux/seqno-fence.h >>> >>> diff --git a/Documentation/DocBook/device-drivers.tmpl b/Documentation/Do= cBook/device-drivers.tmpl >>> index 6f53fc0..ad14396 100644 >>> --- a/Documentation/DocBook/device-drivers.tmpl >>> +++ b/Documentation/DocBook/device-drivers.tmpl >>> @@ -128,6 +128,7 @@ X!Edrivers/base/interface.c >>> !Edrivers/base/dma-buf.c >>> !Edrivers/base/fence.c >>> !Iinclude/linux/fence.h >>> +!Iinclude/linux/seqno-fence.h >>> !Edrivers/base/dma-coherent.c >>> !Edrivers/base/dma-mapping.c >>> >>> diff --git a/drivers/base/fence.c b/drivers/base/fence.c >>> index 28e5ffd..1d3f29c 100644 >>> --- a/drivers/base/fence.c >>> +++ b/drivers/base/fence.c >>> @@ -24,6 +24,7 @@ >>> #include >>> #include >>> #include >>> +#include >>> >>> atomic_t fence_context_counter =3D ATOMIC_INIT(0); >>> EXPORT_SYMBOL(fence_context_counter); >>> @@ -284,3 +285,40 @@ out: >>> return ret; >>> } >>> EXPORT_SYMBOL(fence_default_wait); >>> + >>> +static bool seqno_enable_signaling(struct fence *fence) >>> +{ >>> + struct seqno_fence *seqno_fence =3D to_seqno_fence(fence); >>> + return seqno_fence->ops->enable_signaling(fence); >>> +} >>> + >>> +static bool seqno_signaled(struct fence *fence) >>> +{ >>> + struct seqno_fence *seqno_fence =3D to_seqno_fence(fence); >>> + return seqno_fence->ops->signaled && seqno_fence->ops->signaled(f= ence); >>> +} >>> + >>> +static void seqno_release(struct fence *fence) >>> +{ >>> + struct seqno_fence *f =3D to_seqno_fence(fence); >>> + >>> + dma_buf_put(f->sync_buf); >>> + if (f->ops->release) >>> + f->ops->release(fence); >>> + else >>> + kfree(f); >>> +} >>> + >>> +static long seqno_wait(struct fence *fence, bool intr, signed long timeo= ut) >>> 
+{ >>> + struct seqno_fence *f =3D to_seqno_fence(fence); >>> + return f->ops->wait(fence, intr, timeout); >>> +} >>> + >>> +const struct fence_ops seqno_fence_ops =3D { >>> + .enable_signaling =3D seqno_enable_signaling, >>> + .signaled =3D seqno_signaled, >>> + .wait =3D seqno_wait, >>> + .release =3D seqno_release >>> +}; >>> +EXPORT_SYMBOL_GPL(seqno_fence_ops); >>> diff --git a/include/linux/seqno-fence.h b/include/linux/seqno-fence.h >>> new file mode 100644 >>> index 0000000..603adc0 >>> --- /dev/null >>> +++ b/include/linux/seqno-fence.h >>> @@ -0,0 +1,105 @@ >>> +/* >>> + * seqno-fence, using a dma-buf to synchronize fencing >>> + * >>> + * Copyright (C) 2012 Texas Instruments >>> + * Copyright (C) 2012 Canonical Ltd >>> + * Authors: >>> + * Rob Clark >>> + * Maarten Lankhorst >>> + * >>> + * This program is free software; you can redistribute it and/or modify = it >>> + * under the terms of the GNU General Public License version 2 as publis= hed by >>> + * the Free Software Foundation. >>> + * >>> + * This program is distributed in the hope that it will be useful, but W= ITHOUT >>> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or >>> + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License= for >>> + * more details. >>> + * >>> + * You should have received a copy of the GNU General Public License alo= ng with >>> + * this program. If not, see . >>> + */ >>> + >>> +#ifndef __LINUX_SEQNO_FENCE_H >>> +#define __LINUX_SEQNO_FENCE_H >>> + >>> +#include >>> +#include >>> + >>> +struct seqno_fence { >>> + struct fence base; >>> + >>> + const struct fence_ops *ops; >>> + struct dma_buf *sync_buf; >>> + uint32_t seqno_ofs; >>> +}; >> Hi maarten, >> >> I'm applying dma-fence v11 and seqno-fence v4 to exynos drm and have >> some proposals. >> >> The above seqno_fence structure has only one dmabuf. Shouldn't it have >> mutiple dmabufs? For example, in case of drm driver, when pageflip is >> requested, one framebuffer could have one more gem buffer for NV12M >> format. And this means that one more exported dmabufs should be >> sychronized with other devices. Below is simple structure for it, > The fence guards a single operation, as such I didn't feel like more than o= ne > dma-buf was needed to guard it. > > Have you considered simply attaching multiple fences instead? Each with the= ir own dma-buf. I thought each context per device should have one fence. If not so, I think we could use multiple fences instead. > There has been some muttering about allowing multiple exclusive fences to b= e attached, for arm soc's. > > But I'm also considering getting rid of the dma-buf member and add a functi= on call to retrieve it, since > the sync dma-buf member should not be changing often, and it would zap 2 at= omic ops on every fence, > but I want it replaced by something that's not 10x more complicated. > > Maybe "int get_sync_dma_buf(fence, old_dma_buf, &new_dma_buf)" that will se= t new_dma_buf =3D NULL > if the old_dma_buf is unchanged, and return true + return a new reference t= o the sync dma_buf if it's not identical to old_dma_buf. > old_dma_buf can also be NULL or a dma_buf that belongs to a different fence= ->context entirely. It might be capable of > returning an error, in which case the fence would count as being signaled. = This could reduce the need for separately checking > fence_is_signaled first. 
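As an illustration of the idea above only: the op does not exist yet, so the name get_sync_dma_buf(), its return convention and the cached-buffer handling below are all assumptions taken from this description.

/* assumes <linux/fence.h> and <linux/dma-buf.h>; get_sync_dma_buf() is the
 * proposed op, not an existing call */
static bool driver_refresh_sync_buf(struct fence *fence,
                                    struct dma_buf **cached_buf)
{
        struct dma_buf *new_buf;
        int ret;

        /* hypothetical semantics: new_buf == NULL means *cached_buf is
         * unchanged and still valid; otherwise a new reference is returned;
         * a negative return means the fence counts as already signaled */
        ret = get_sync_dma_buf(fence, *cached_buf, &new_buf);
        if (ret < 0)
                return false;

        if (new_buf) {
                if (*cached_buf)
                        dma_buf_put(*cached_buf);       /* drop stale reference */
                *cached_buf = new_buf;                  /* already referenced */
        }
        return true;
}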
> > I think this would allow caching the synchronization dma_buf in a similar w= ay without each fence needing > to hold a reference to the dma_buf all the time, even for fences that are o= nly used internally. > >> struct seqno_fence_dmabuf { >> struct list_head list; >> int id; >> struct dmabuf *sync_buf; >> uint32_t seqno_ops; >> uint32_t seqno; >> }; >> >> The member, id, could be used to identify which device sync_buf is >> going to be accessed by. In case of drm driver, one framebuffer could >> be accessed by one more devices, one is Display controller and another >> is HDMI controller. So id would have crtc number. > Why do you need this? the base fence already has a context member. > > In fact I don't see why you need a linked list, at worst you'd need a stati= c array since the amount of > dma-bufs should already be known during allocation time. > > I would prefer to simply make reservation_object->fence_exclusive an array,= since it would be easier to implement, > and there have been some calls that arm might need such a thing. > Right, the array could be used instead. I just had quick implemention so it's not perfect. > >> And seqno_fence structure could be defined like below, >> >> struct seqno_fence { >> struct list_head sync_buf_list; >> struct fence base; >> const struct fence_ops *ops; >> }; >> >> In addition, I have implemented fence-helper framework for sw sync as >> WIP and below is intefaces for it, >> >> struct fence_helper { >> struct list_head entries; >> struct reservation_ticket ticket; >> struct seqno_fence *sf; >> spinlock_t lock; >> void *priv; >> }; >> >> int fence_helper_init(struct fence_helper *fh, void *priv, void >> (*remease)(struct fence *fence)); >> - This function is called at driver open so process unique context >> would have a new seqno_fence instance. This function does just >> seqno_fence_init call, initialize entries list and set device specific >> fence release callback. >> >> bool fence_helper_check_sync_buf(struct fence_helper *fh, struct >> dma_buf *sync_buf, int id); >> - This function is called before dma is started and checks if same >> sync_bufs had already be committed to reservation_object, >> bo->fence_shared[n]. And id could be used to identy which device >> sync_buf is going to be accessed by. >> >> int fence_helper_add_sync_buf(struct fence_helper *fh, struct dma_buf >> *sync_buf, int id); >> - This function is called if fence_helper_check_sync_buf() is true and >> adds it seqno_fence's sync_buf_list wrapping sync_buf as >> seqno_fence_dma_buf structure. With this function call, one >> seqno_fence instance would have more sync_bufs. At this time, the >> reference count to this sync_buf is taken. >> >> void fence_helper_del_sync_buf(struct fence_helper *fh, int id); >> - This function is called if some operation is failed after >> fence_helper_add_sync_buf call to release relevant resources. >> >> int fence_helper_init_reservation_entry(struct fence_helper *fh, >> struct dma_buf *dmabuf, bool shared, int id); >> - This function is called after fence_helper_add_sync_buf call and >> calls reservation_entry_init function to set a reservation object of >> sync_buf to a new reservation_entry object. And then the new >> reservation_entry is added to fh->entries to track all sync_bufs this >> device is going to access. >> >> void fence_helper_fini_reservation_entry(struct fence_helper *fh, int id); >> - This function is called if some operation is failed after >> fence_helper_init_reservation_entry call to releae relevant resources. 
>> >> int fence_helper_ticket_reserve(struct fence_helper *fh, int id); >> - This function is called after fence_helper_init_reservation_entry >> call and calls ticket_reserve function to reserve a ticket(locked for >> each reservation entry in fh->entires) >> >> void fence_helper_ticket_commit(struct fence_helper *fh, int id); >> - This function is called after fence_helper_ticket_reserve() is >> called to commit this device's fence to all reservation_objects of >> each sync_buf. After that, once other devices try to access these >> buffers, they would be blocked and unlock each reservation entry in >> fh->entires. >> >> int fence_helper_wait(struct fence_helper *fh, struct dma_buf *dmabuf, >> bool intr); >> - This function is called before fence_helper_add_sync_buf() is called >> to wait for a signal from other devices. >> >> int fence_helper_signal(struct fence_helper *fh, int id); >> - This function is called by device's interrupt handler or somewhere >> when dma access to this buffer has been completed and calls >> fence_signal() with each fence registed to each reservation object in >> fh->entries to notify dma access completion to other deivces. At this >> time, other devices blocked would be waked up and forward their next >> step. >> >> For more detail, in addition, this function does the following, >> - delete each reservation entry in fh->entries. >> - release each seqno_fence_dmabuf object in seqno_fence's >> sync_buf_list and call dma_buf_put() to put the reference count to >> dmabuf. >> >> >> Now the fence-helper framework is just WIP yet so there may be my >> missing points. If you are ok, I'd like to post it as RFC. > Way too complicated.. The purpose to the fence-helper is to use the dma-fence more simply. With the fence-helper, we doesn't need to consider fence and reservation interfaces for it. All we have to do is to call only the fence-helper interfaces without considering two things(fence and reservation). For example(In consumer case), driver_open() { struct fence-helper *fh; ... ctx->fh =3D kzalloc(); ... fence_helper_init(fh, ...); } driver_addfb() { ... fence_helper_add_sync_buf(fh, sync_buf, ...); ... } driver_pageflip() { ... fence_helper_wait(fh, sync_buf, ...); fence_helper_init_reservation_entry(fh, sync_buf, ...); fence_helper_ticket_reserve(fh, ...); fence_helper_ticket_commit(fh, ...); ... } driver_pageflip_handler() { ... fence_helper_signal(fh, ...); } The above functions are called in the following order, 1. driver_open() -> 2. driver_addfb() -> 3. driver_pageflip() -> 4.driver_pageflip_handler() Step 3 and 4 would be called repeatedly. And also producer could use similar way. I'm not sure that I understand the dma-fence framework fully so there might be something wrong and better way. 
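To make the flow above concrete, here is the consumer side written out against the signatures listed earlier. The helpers are still WIP, so struct driver_ctx, the crtc id parameter and the error handling are illustrative assumptions only; the fence_helper declarations themselves exist in no header yet.

#include <linux/slab.h>
#include <linux/dma-buf.h>
#include <linux/seqno-fence.h>

struct driver_ctx {
        struct fence_helper fh;         /* WIP structure quoted above */
};

static void driver_fence_release(struct fence *fence)
{
        /* device specific cleanup, then free the seqno_fence */
        kfree(to_seqno_fence(fence));
}

static int driver_open(struct driver_ctx *ctx)
{
        return fence_helper_init(&ctx->fh, ctx, driver_fence_release);
}

static int driver_addfb(struct driver_ctx *ctx, struct dma_buf *sync_buf,
                        int crtc_id)
{
        /* add the buffer only when the check above says it is needed */
        if (fence_helper_check_sync_buf(&ctx->fh, sync_buf, crtc_id))
                return fence_helper_add_sync_buf(&ctx->fh, sync_buf, crtc_id);
        return 0;
}

static int driver_pageflip(struct driver_ctx *ctx, struct dma_buf *sync_buf,
                           int crtc_id)
{
        int ret;

        ret = fence_helper_wait(&ctx->fh, sync_buf, true);
        if (ret)
                return ret;

        ret = fence_helper_init_reservation_entry(&ctx->fh, sync_buf,
                                                  false, crtc_id);
        if (ret)
                return ret;

        ret = fence_helper_ticket_reserve(&ctx->fh, crtc_id);
        if (ret) {
                fence_helper_fini_reservation_entry(&ctx->fh, crtc_id);
                return ret;
        }

        fence_helper_ticket_commit(&ctx->fh, crtc_id);
        return 0;
}

static void driver_pageflip_handler(struct driver_ctx *ctx, int crtc_id)
{
        fence_helper_signal(&ctx->fh, crtc_id);
}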
Thanks, Inki Dae > > > _______________________________________________ > dri-devel mailing list > dri-devel(a)lists.freedesktop.org > http://lists.freedesktop.org/mailman/listinfo/dri-devel --===============1443439763342265041==-- From m.szyprowski@samsung.com Wed Jan 16 15:31:54 2013 From: Marek Szyprowski To: linaro-mm-sig@lists.linaro.org Subject: [Linaro-mm-sig] [PATCH 1/2] ARM: dma-mapping: add support for CMA regions placed in highmem zone Date: Wed, 16 Jan 2013 16:31:22 +0100 Message-ID: <1358350284-6972-2-git-send-email-m.szyprowski@samsung.com> In-Reply-To: <1358350284-6972-1-git-send-email-m.szyprowski@samsung.com> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============7161098145873779517==" --===============7161098145873779517== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable This patch adds missing pieces to correctly support memory pages served from CMA regions placed in high memory zones. Please note that the default global CMA area is still put into lowmem and is limited by optional architecture specific DMA zone. One can however put device specific CMA regions in high memory zone to reduce lowmem usage. Signed-off-by: Marek Szyprowski --- arch/arm/mm/dma-mapping.c | 61 +++++++++++++++++++++++++++++++++----------= -- 1 file changed, 45 insertions(+), 16 deletions(-) diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c index 6b2fb87..4080c37 100644 --- a/arch/arm/mm/dma-mapping.c +++ b/arch/arm/mm/dma-mapping.c @@ -186,16 +186,29 @@ static u64 get_coherent_dma_mask(struct device *dev) =20 static void __dma_clear_buffer(struct page *page, size_t size) { - void *ptr; /* * Ensure that the allocated pages are zeroed, and that any data * lurking in the kernel direct-mapped region is invalidated. 
*/ - ptr =3D page_address(page); - if (ptr) { - memset(ptr, 0, size); - dmac_flush_range(ptr, ptr + size); - outer_flush_range(__pa(ptr), __pa(ptr) + size); + if (!PageHighMem(page)) { + void *ptr =3D page_address(page); + if (ptr) { + memset(ptr, 0, size); + dmac_flush_range(ptr, ptr + size); + outer_flush_range(__pa(ptr), __pa(ptr) + size); + } + } else { + phys_addr_t base =3D __pfn_to_phys(page_to_pfn(page)); + phys_addr_t end =3D base + size; + while (size > 0) { + void *ptr =3D kmap_atomic(page); + memset(ptr, 0, PAGE_SIZE); + dmac_flush_range(ptr, ptr + PAGE_SIZE); + kunmap_atomic(ptr); + page++; + size -=3D PAGE_SIZE; + } + outer_flush_range(base, end); } } =20 @@ -243,7 +256,8 @@ static void __dma_free_buffer(struct page *page, size_t s= ize) #endif =20 static void *__alloc_from_contiguous(struct device *dev, size_t size, - pgprot_t prot, struct page **ret_page); + pgprot_t prot, struct page **ret_page, + const void *caller); =20 static void *__alloc_remap_buffer(struct device *dev, size_t size, gfp_t gfp, pgprot_t prot, struct page **ret_page, @@ -346,10 +360,11 @@ static int __init atomic_pool_init(void) goto no_pages; =20 if (IS_ENABLED(CONFIG_CMA)) - ptr =3D __alloc_from_contiguous(NULL, pool->size, prot, &page); + ptr =3D __alloc_from_contiguous(NULL, pool->size, prot, &page, + atomic_pool_init); else ptr =3D __alloc_remap_buffer(NULL, pool->size, GFP_KERNEL, prot, - &page, NULL); + &page, atomic_pool_init); if (ptr) { int i; =20 @@ -542,27 +557,41 @@ static int __free_from_pool(void *start, size_t size) } =20 static void *__alloc_from_contiguous(struct device *dev, size_t size, - pgprot_t prot, struct page **ret_page) + pgprot_t prot, struct page **ret_page, + const void *caller) { unsigned long order =3D get_order(size); size_t count =3D size >> PAGE_SHIFT; struct page *page; + void *ptr; =20 page =3D dma_alloc_from_contiguous(dev, count, order); if (!page) return NULL; =20 __dma_clear_buffer(page, size); - __dma_remap(page, size, prot); =20 + if (!PageHighMem(page)) { + __dma_remap(page, size, prot); + ptr =3D page_address(page); + } else { + ptr =3D __dma_alloc_remap(page, size, GFP_KERNEL, prot, caller); + if (!ptr) { + dma_release_from_contiguous(dev, page, count); + return NULL; + } + } *ret_page =3D page; - return page_address(page); + return ptr; } =20 static void __free_from_contiguous(struct device *dev, struct page *page, - size_t size) + void *cpu_addr, size_t size) { - __dma_remap(page, size, pgprot_kernel); + if (!PageHighMem(page)) + __dma_remap(page, size, pgprot_kernel); + else + __dma_free_remap(cpu_addr, size); dma_release_from_contiguous(dev, page, size >> PAGE_SHIFT); } =20 @@ -645,7 +674,7 @@ static void *__dma_alloc(struct device *dev, size_t size,= dma_addr_t *handle, else if (!IS_ENABLED(CONFIG_CMA)) addr =3D __alloc_remap_buffer(dev, size, gfp, prot, &page, caller); else - addr =3D __alloc_from_contiguous(dev, size, prot, &page); + addr =3D __alloc_from_contiguous(dev, size, prot, &page, caller); =20 if (addr) *handle =3D pfn_to_dma(dev, page_to_pfn(page)); @@ -739,7 +768,7 @@ static void __arm_dma_free(struct device *dev, size_t siz= e, void *cpu_addr, * Non-atomic allocations cannot be freed with IRQs disabled */ WARN_ON(irqs_disabled()); - __free_from_contiguous(dev, page, size); + __free_from_contiguous(dev, page, cpu_addr, size); } } =20 --=20 1.7.9.5 --===============7161098145873779517==-- From m.szyprowski@samsung.com Wed Jan 16 15:31:55 2013 From: Marek Szyprowski To: linaro-mm-sig@lists.linaro.org Subject: [Linaro-mm-sig] [PATCH 2/2] ARM: 
dma-mapping: use himem for DMA buffers for IOMMU-mapped devices Date: Wed, 16 Jan 2013 16:31:23 +0100 Message-ID: <1358350284-6972-3-git-send-email-m.szyprowski@samsung.com> In-Reply-To: <1358350284-6972-1-git-send-email-m.szyprowski@samsung.com> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============1984924372321461170==" --===============1984924372321461170== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable IOMMU can provide access to any memory page, so there is no point in limiting the allocated pages only to lowmem, once other parts of dma-mapping subsystem correctly supports himem pages. Signed-off-by: Marek Szyprowski --- arch/arm/mm/dma-mapping.c | 9 +++++++-- 1 file changed, 7 insertions(+), 2 deletions(-) diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c index 4080c37..9a6c8ce 100644 --- a/arch/arm/mm/dma-mapping.c +++ b/arch/arm/mm/dma-mapping.c @@ -1095,12 +1095,17 @@ static struct page **__iommu_alloc_buffer(struct devi= ce *dev, size_t size, return pages; } =20 + /* + * IOMMU can map any pages, so himem can also be used here + */ + gfp |=3D __GFP_NOWARN | __GFP_HIGHMEM; + while (count) { int j, order =3D __fls(count); =20 - pages[i] =3D alloc_pages(gfp | __GFP_NOWARN, order); + pages[i] =3D alloc_pages(gfp, order); while (!pages[i] && order) - pages[i] =3D alloc_pages(gfp | __GFP_NOWARN, --order); + pages[i] =3D alloc_pages(gfp, --order); if (!pages[i]) goto error; =20 --=20 1.7.9.5 --===============1984924372321461170==-- From m.szyprowski@samsung.com Wed Jan 16 15:32:15 2013 From: Marek Szyprowski To: linaro-mm-sig@lists.linaro.org Subject: [Linaro-mm-sig] [PATCH 0/2] ARM: dma-mapping: add highmem support for coherent allocation Date: Wed, 16 Jan 2013 16:31:21 +0100 Message-ID: <1358350284-6972-1-git-send-email-m.szyprowski@samsung.com> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============1180948752988680927==" --===============1180948752988680927== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Hello, This is the last missing piece to let us efficiently use large DMA buffers on systems with lots of memory, which have support for himem enabled. The first patch adds support for CMA regions placed in high memory zones, the second one also enables allocations of individual pages from high memory zone for IOMMU-mapped devices. Those two changes let us to significantly save low memory for other tasks. 
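For reference, a device specific CMA region is declared from board or SoC setup code. A minimal sketch, assuming the dma_declare_contiguous() interface of this era; the device, base address and size below are placeholders, not values taken from any real board:

#include <linux/platform_device.h>
#include <linux/dma-contiguous.h>
#include <linux/sizes.h>

extern struct platform_device my_video_dev;     /* placeholder device */

/* placeholder physical base chosen above the board's lowmem limit */
#define MY_CMA_BASE     0x60000000UL

static void __init board_reserve(void)
{
        /* private 128 MiB CMA area for this device, kept out of lowmem */
        dma_declare_contiguous(&my_video_dev.dev, SZ_128M, MY_CMA_BASE, 0);
}

With these patches applied, such a region can sit entirely in the highmem zone and still be used for coherent allocations, which is what frees up the corresponding amount of lowmem.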
Best regards Marek Szyprowski Samsung Poland R&D Center Patch summary: Marek Szyprowski (2): ARM: dma-mapping: add support for CMA regions placed in highmem zone ARM: dma-mapping: use himem for DMA buffers for IOMMU-mapped devices arch/arm/mm/dma-mapping.c | 70 +++++++++++++++++++++++++++++++++----------= -- 1 file changed, 52 insertions(+), 18 deletions(-) --=20 1.7.9.5 --===============1180948752988680927==-- From jason@lakedaemon.net Wed Jan 16 15:50:58 2013 From: Jason Cooper To: linaro-mm-sig@lists.linaro.org Subject: [Linaro-mm-sig] [PATCH] ata: sata_mv: fix sg_tbl_pool alignment Date: Wed, 16 Jan 2013 10:50:45 -0500 Message-ID: <20130116155045.GI25500@titan.lakedaemon.net> In-Reply-To: <50F66B1B.40301@web.de> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============0016862910899749198==" --===============0016862910899749198== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable On Wed, Jan 16, 2013 at 09:55:55AM +0100, Soeren Moch wrote: > On 16.01.2013 04:24, Soeren Moch wrote: > >On 16.01.2013 03:40, Jason Cooper wrote: > >>On Wed, Jan 16, 2013 at 01:17:59AM +0100, Soeren Moch wrote: > >>>On 15.01.2013 22:56, Jason Cooper wrote: > >>>>On Tue, Jan 15, 2013 at 03:16:17PM -0500, Jason Cooper wrote: > OK, I could trigger the error > ERROR: 1024 KiB atomic DMA coherent pool is too small! > Please increase it with coherent_pool=3D kernel parameter! > only with em28xx sticks and sata, dib0700 sticks removed. Did you test the reverse scenario? ie dib0700 with sata_mv and no em28xx. What kind of throughput are you pushing to the sata disk? > >>What would be most helpful is if you could do a git bisect between > >>v3.5.x (working) and the oldest version where you know it started > >>failing (v3.7.1 or earlier if you know it). > >> > >I did not bisect it, but Marek mentioned earlier that commit > >e9da6e9905e639b0f842a244bc770b48ad0523e9 in Linux v3.6-rc1 introduced > >new code for dma allocations. This is probably the root cause for the > >new (mis-)behavior (due to my tests 3.6.0 is not working anymore). >=20 > I don't want to say that Mareks patch is wrong, probably it triggers a > bug somewhere else! (in em28xx?) Of the four drivers you listed, none are using dma. sata_mv is the only one. If one is to believe the comments in sata_mv.c:~151, then the alignment is wrong for the sg_tbl_pool. Could you please try the following patch? thx, Jason. ---8<---------- >From 566c7e30285e4c31d76724ea4811b016b753f24f Mon Sep 17 00:00:00 2001 From: Jason Cooper Date: Wed, 16 Jan 2013 15:43:37 +0000 Subject: [PATCH] ata: sata_mv: fix sg_tbl_pool alignment If the comment is to be believed, the alignment should be 16B, and the size 4K. The current code sets both to 4K. On some arm boards (kirkwood), this causes: ERROR: 1024 KiB atomic DMA coherent pool is too small! Please increase it with coherent_pool=3D kernel parameter! Set alignment to 16B to prevent exhausting the atomic_pool. Signed-off-by: Jason Cooper --- drivers/ata/sata_mv.c | 9 ++++++--- 1 file changed, 6 insertions(+), 3 deletions(-) diff --git a/drivers/ata/sata_mv.c b/drivers/ata/sata_mv.c index 68f4fb5..e2e5a8a 100644 --- a/drivers/ata/sata_mv.c +++ b/drivers/ata/sata_mv.c @@ -148,6 +148,9 @@ enum { * CRPB needs alignment on a 256B boundary. Size =3D=3D 256B * ePRD (SG) entries need alignment on a 16B boundary. 
Size =3D=3D 16B */ + MV_CRQB_Q_ALIGN =3D 1024, + MV_CRPB_Q_ALIGN =3D 256, + MV_SG_TBL_ALIGN =3D 16, MV_CRQB_Q_SZ =3D (32 * MV_MAX_Q_DEPTH), MV_CRPB_Q_SZ =3D (8 * MV_MAX_Q_DEPTH), MV_MAX_SG_CT =3D 256, @@ -3975,17 +3978,17 @@ done: static int mv_create_dma_pools(struct mv_host_priv *hpriv, struct device *de= v) { hpriv->crqb_pool =3D dmam_pool_create("crqb_q", dev, MV_CRQB_Q_SZ, - MV_CRQB_Q_SZ, 0); + MV_CRQB_Q_ALIGN, 0); if (!hpriv->crqb_pool) return -ENOMEM; =20 hpriv->crpb_pool =3D dmam_pool_create("crpb_q", dev, MV_CRPB_Q_SZ, - MV_CRPB_Q_SZ, 0); + MV_CRPB_Q_ALIGN, 0); if (!hpriv->crpb_pool) return -ENOMEM; =20 hpriv->sg_tbl_pool =3D dmam_pool_create("sg_tbl", dev, MV_SG_TBL_SZ, - MV_SG_TBL_SZ, 0); + MV_SG_TBL_ALIGN, 0); if (!hpriv->sg_tbl_pool) return -ENOMEM; =20 --=20 1.8.1.1 --===============0016862910899749198==-- From smoch@web.de Wed Jan 16 17:07:38 2013 From: Soeren Moch To: linaro-mm-sig@lists.linaro.org Subject: Re: [Linaro-mm-sig] [PATCH] ata: sata_mv: fix sg_tbl_pool alignment Date: Wed, 16 Jan 2013 18:05:59 +0100 Message-ID: <50F6DDF7.9080605@web.de> In-Reply-To: <20130116155045.GI25500@titan.lakedaemon.net> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============0227405411521917423==" --===============0227405411521917423== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: 7bit On 16.01.2013 16:50, Jason Cooper wrote: > On Wed, Jan 16, 2013 at 09:55:55AM +0100, Soeren Moch wrote: >> On 16.01.2013 04:24, Soeren Moch wrote: >>> On 16.01.2013 03:40, Jason Cooper wrote: >>>> On Wed, Jan 16, 2013 at 01:17:59AM +0100, Soeren Moch wrote: >>>>> On 15.01.2013 22:56, Jason Cooper wrote: >>>>>> On Tue, Jan 15, 2013 at 03:16:17PM -0500, Jason Cooper wrote: > >> OK, I could trigger the error >> ERROR: 1024 KiB atomic DMA coherent pool is too small! >> Please increase it with coherent_pool= kernel parameter! >> only with em28xx sticks and sata, dib0700 sticks removed. > > Did you test the reverse scenario? ie dib0700 with sata_mv and no > em28xx. Maybe I can test this next night. > What kind of throughput are you pushing to the sata disk? Close to nothing. In the last test I had the root filesystem running on the sata disk plus a few 10 megabytes per hour. >>>> What would be most helpful is if you could do a git bisect between >>>> v3.5.x (working) and the oldest version where you know it started >>>> failing (v3.7.1 or earlier if you know it). >>>> >>> I did not bisect it, but Marek mentioned earlier that commit >>> e9da6e9905e639b0f842a244bc770b48ad0523e9 in Linux v3.6-rc1 introduced >>> new code for dma allocations. This is probably the root cause for the >>> new (mis-)behavior (due to my tests 3.6.0 is not working anymore). >> >> I don't want to say that Mareks patch is wrong, probably it triggers a >> bug somewhere else! (in em28xx?) > > Of the four drivers you listed, none are using dma. sata_mv is the only > one. usb_core is doing the actual DMA for the usb bridge drivers, I think. > If one is to believe the comments in sata_mv.c:~151, then the alignment > is wrong for the sg_tbl_pool. > > Could you please try the following patch? OK, what should I test first, the setup from last night (em28xx, no dib0700) plus your patch, or the reverse setup (dib0700, no em28xx) without your patch, or my normal setting (all dvb sticks) plus your patch? 
Regards, Soeren --===============0227405411521917423==-- From smoch@web.de Wed Jan 16 17:32:23 2013 From: Soeren Moch To: linaro-mm-sig@lists.linaro.org Subject: Re: [Linaro-mm-sig] [PATCH v2] mm: dmapool: use provided gfp flags for all dma_alloc_coherent() calls Date: Wed, 16 Jan 2013 18:32:09 +0100 Message-ID: <50F6E419.5080007@web.de> In-Reply-To: <50F66B1B.40301@web.de> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============2907164803197889738==" --===============2907164803197889738== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: 7bit On 16.01.2013 09:55, Soeren Moch wrote: > On 16.01.2013 04:24, Soeren Moch wrote: >> On 16.01.2013 03:40, Jason Cooper wrote: >>> Soeren, >>> >>> On Wed, Jan 16, 2013 at 01:17:59AM +0100, Soeren Moch wrote: >>>> On 15.01.2013 22:56, Jason Cooper wrote: >>>>> On Tue, Jan 15, 2013 at 03:16:17PM -0500, Jason Cooper wrote: >>>>>> If my understanding is correct, one of the drivers (most likely one) >>>>>> either asks for too small of a dma buffer, or is not properly >>>>>> deallocating blocks from the per-device pool. Either case leads to >>>>>> exhaustion, and falling back to the atomic pool. Which subsequently >>>>>> gets wiped out as well. >>>>> >>>>> If my hunch is right, could you please try each of the three dvb >>>>> drivers >>>>> in turn and see which one (or more than one) causes the error? >>>> >>>> In fact I use only 2 types of DVB sticks: em28xx usb bridge plus drxk >>>> demodulator, and dib0700 usb bridge plus dib7000p demod. >>>> >>>> I would bet for em28xx causing the error, but this is not thoroughly >>>> tested. Unfortunately testing with removed sticks is not easy, because >>>> this is a production system and disabling some services for the long >>>> time we need to trigger this error will certainly result in unhappy >>>> users. >>> > OK, I could trigger the error > ERROR: 1024 KiB atomic DMA coherent pool is too small! > Please increase it with coherent_pool= kernel parameter! > only with em28xx sticks and sata, dib0700 sticks removed. > >>> Just out of curiosity, what board is it? >> >> The kirkwood board? A modified Guruplug Server Plus. > em28xx sticks: "TerraTec Cinergy HTC Stick HD" and "PCTV Quatro Stick" > dib0700 sticks: "WinTV-NOVA-TD Stick" >>> >>>> I will see what I can do here. Is there an easy way to track the buffer >>>> usage without having to wait for complete exhaustion? >>> >>> DMA_API_DEBUG >> >> OK, maybe I can try this. >>> >>>> In linux-3.5.x there is no such problem. Can we use all available >>>> memory >>>> for dma buffers here on armv5 architectures, in contrast to newer >>>> kernels? >>> >>> Were the loads exactly the same when you tested 3.5.x? >> >> Exactly the same, yes. >> >>> I looked at the >>> changes from v3.5 to v3.7.1 for all four drivers you mentioned as well >>> as sata_mv. >>> >>> The biggest thing I see is that all of the media drivers got shuffled >>> around into their own subdirectories after v3.5. 'git show -M 0c0d06c' >>> shows it was a clean copy of all the files. >>> >>> What would be most helpful is if you could do a git bisect between >>> v3.5.x (working) and the oldest version where you know it started >>> failing (v3.7.1 or earlier if you know it). >>> >> I did not bisect it, but Marek mentioned earlier that commit >> e9da6e9905e639b0f842a244bc770b48ad0523e9 in Linux v3.6-rc1 introduced >> new code for dma allocations. This is probably the root cause for the >> new (mis-)behavior (due to my tests 3.6.0 is not working anymore). 
> > I don't want to say that Mareks patch is wrong, probably it triggers a > bug somewhere else! (in em28xx?) The em28xx sticks are using isochronous usb transfers. Is there a special handling for that? >> I'm not very familiar with arm mm code, and from the patch itself I >> cannot understand what's different. Maybe CONFIG_CMA is default >> also for armv5 (not only v6) now? But I might be totally wrong here, >> maybe someone of the mm experts can explain the difference? >> Regards, Soeren --===============2907164803197889738==-- From jason@lakedaemon.net Wed Jan 16 17:47:47 2013 From: Jason Cooper To: linaro-mm-sig@lists.linaro.org Subject: Re: [Linaro-mm-sig] [PATCH v2] mm: dmapool: use provided gfp flags for all dma_alloc_coherent() calls Date: Wed, 16 Jan 2013 12:47:36 -0500 Message-ID: <20130116174736.GJ25500@titan.lakedaemon.net> In-Reply-To: <50F6E419.5080007@web.de> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============5911933508655438834==" --===============5911933508655438834== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: 7bit On Wed, Jan 16, 2013 at 06:32:09PM +0100, Soeren Moch wrote: > On 16.01.2013 09:55, Soeren Moch wrote: > >On 16.01.2013 04:24, Soeren Moch wrote: > >>I did not bisect it, but Marek mentioned earlier that commit > >>e9da6e9905e639b0f842a244bc770b48ad0523e9 in Linux v3.6-rc1 introduced > >>new code for dma allocations. This is probably the root cause for the > >>new (mis-)behavior (due to my tests 3.6.0 is not working anymore). > > > >I don't want to say that Mareks patch is wrong, probably it triggers a > >bug somewhere else! (in em28xx?) > > The em28xx sticks are using isochronous usb transfers. Is there a > special handling for that? I'm looking at that now. It looks like the em28xx wants (as a maximum) 655040 bytes (em28xx-core.c:1088). There are 5 transfer buffers, with 64 max packets and 2047 max packet size (runtime reported max & 0x7ff). If it actually needs all of that, then the answer may be to just increase coherent_pool= when using that driver. I'll keep digging. thx, Jason. --===============5911933508655438834==-- From jason@lakedaemon.net Wed Jan 16 17:52:15 2013 From: Jason Cooper To: linaro-mm-sig@lists.linaro.org Subject: Re: [Linaro-mm-sig] [PATCH] ata: sata_mv: fix sg_tbl_pool alignment Date: Wed, 16 Jan 2013 12:52:03 -0500 Message-ID: <20130116175203.GK25500@titan.lakedaemon.net> In-Reply-To: <50F6DDF7.9080605@web.de> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============7878899220813269792==" --===============7878899220813269792== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: 7bit On Wed, Jan 16, 2013 at 06:05:59PM +0100, Soeren Moch wrote: > On 16.01.2013 16:50, Jason Cooper wrote: > >On Wed, Jan 16, 2013 at 09:55:55AM +0100, Soeren Moch wrote: > >>On 16.01.2013 04:24, Soeren Moch wrote: > >>>On 16.01.2013 03:40, Jason Cooper wrote: > >>>>On Wed, Jan 16, 2013 at 01:17:59AM +0100, Soeren Moch wrote: > >>>>>On 15.01.2013 22:56, Jason Cooper wrote: > >>>>>>On Tue, Jan 15, 2013 at 03:16:17PM -0500, Jason Cooper wrote: > > > >>OK, I could trigger the error > >> ERROR: 1024 KiB atomic DMA coherent pool is too small! > >> Please increase it with coherent_pool= kernel parameter! > >>only with em28xx sticks and sata, dib0700 sticks removed. > > > >Did you test the reverse scenario? ie dib0700 with sata_mv and no > >em28xx. > > Maybe I can test this next night. Please do, this will tell us if it is in the USB drivers or lower (something in common). 
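For reference, the bisect run requested earlier in the thread (and re-quoted below) boils down to the following; the endpoints come from this thread, and the load at each step is whatever setup reproduces the pool exhaustion:

  git bisect start
  git bisect bad v3.7.1          # oldest kernel known to show the error
  git bisect good v3.5           # last kernel known to work
  # build and boot the suggested commit, run the DVB/SATA load long enough
  # to either hit the error or be confident it survives, then mark it:
  git bisect bad                 # or: git bisect good
  # repeat until git names the first bad commit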
> >>>>What would be most helpful is if you could do a git bisect between > >>>>v3.5.x (working) and the oldest version where you know it started > >>>>failing (v3.7.1 or earlier if you know it). > >>>> > >>>I did not bisect it, but Marek mentioned earlier that commit > >>>e9da6e9905e639b0f842a244bc770b48ad0523e9 in Linux v3.6-rc1 introduced > >>>new code for dma allocations. This is probably the root cause for the > >>>new (mis-)behavior (due to my tests 3.6.0 is not working anymore). > >> > >>I don't want to say that Mareks patch is wrong, probably it triggers a > >>bug somewhere else! (in em28xx?) > > > >Of the four drivers you listed, none are using dma. sata_mv is the only > >one. > > usb_core is doing the actual DMA for the usb bridge drivers, I think. Yes, my mistake. I'd like to attribute that statement to pre-coffee rambling. :-) > >If one is to believe the comments in sata_mv.c:~151, then the alignment > >is wrong for the sg_tbl_pool. > > > >Could you please try the following patch? > > OK, what should I test first, the setup from last night (em28xx, no > dib0700) plus your patch, or the reverse setup (dib0700, no em28xx) > without your patch, or my normal setting (all dvb sticks) plus your > patch? if testing time is limited, please do the test I outlined at the top of this email. I've been digging more into the dma code and while I think the patch is correct, I don't see where it would fix your problem (yet). thx, Jason. --===============7878899220813269792==-- From jason@lakedaemon.net Wed Jan 16 18:35:29 2013 From: Jason Cooper To: linaro-mm-sig@lists.linaro.org Subject: Re: [Linaro-mm-sig] [PATCH] ata: sata_mv: fix sg_tbl_pool alignment Date: Wed, 16 Jan 2013 13:35:18 -0500 Message-ID: <20130116183518.GL25500@titan.lakedaemon.net> In-Reply-To: <20130116175203.GK25500@titan.lakedaemon.net> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============4880342542816515937==" --===============4880342542816515937== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: 7bit > > >On Wed, Jan 16, 2013 at 09:55:55AM +0100, Soeren Moch wrote: > > >>I don't want to say that Mareks patch is wrong, probably it triggers a > > >>bug somewhere else! (in em28xx?) Could you send the output of: lsusb -v -d VEND:PROD for the em28xx? thx, Jason. --===============4880342542816515937==-- From smoch@web.de Wed Jan 16 22:28:44 2013 From: Soeren Moch To: linaro-mm-sig@lists.linaro.org Subject: Re: [Linaro-mm-sig] [PATCH] ata: sata_mv: fix sg_tbl_pool alignment Date: Wed, 16 Jan 2013 23:26:21 +0100 Message-ID: <50F7290D.9090308@web.de> In-Reply-To: <20130116183518.GL25500@titan.lakedaemon.net> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============2114149469588745322==" --===============2114149469588745322== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: 7bit On 16.01.2013 19:35, Jason Cooper wrote: >>>> On Wed, Jan 16, 2013 at 09:55:55AM +0100, Soeren Moch wrote: >>>>> I don't want to say that Mareks patch is wrong, probably it triggers a >>>>> bug somewhere else! (in em28xx?) > > Could you send the output of: > > lsusb -v -d VEND:PROD > > for the em28xx? > > thx, > > Jason. > Here is the lsusb output for both of my em28xx sticks. 
Regards, Soeren Bus 001 Device 005: ID 0ccd:00b2 TerraTec Electronic GmbH Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 2.00 bDeviceClass 0 (Defined at Interface level) bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 64 idVendor 0x0ccd TerraTec Electronic GmbH idProduct 0x00b2 bcdDevice 1.00 iManufacturer 3 TERRATEC iProduct 1 Cinergy HTC Stick iSerial 2 123456789ABCD bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 305 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0x80 (Bus Powered) MaxPower 500mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 4 bInterfaceClass 255 Vendor Specific Class bInterfaceSubClass 0 bInterfaceProtocol 255 iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0001 1x 1 bytes bInterval 11 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x82 EP 2 IN bmAttributes 1 Transfer Type Isochronous Synch Type None Usage Type Data wMaxPacketSize 0x0000 1x 0 bytes bInterval 1 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x83 EP 3 IN bmAttributes 1 Transfer Type Isochronous Synch Type None Usage Type Data wMaxPacketSize 0x0000 1x 0 bytes bInterval 4 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x84 EP 4 IN bmAttributes 1 Transfer Type Isochronous Synch Type None Usage Type Data wMaxPacketSize 0x0000 1x 0 bytes bInterval 1 Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 1 bNumEndpoints 4 bInterfaceClass 255 Vendor Specific Class bInterfaceSubClass 0 bInterfaceProtocol 255 iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0001 1x 1 bytes bInterval 11 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x82 EP 2 IN bmAttributes 1 Transfer Type Isochronous Synch Type None Usage Type Data wMaxPacketSize 0x0000 1x 0 bytes bInterval 1 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x83 EP 3 IN bmAttributes 1 Transfer Type Isochronous Synch Type None Usage Type Data wMaxPacketSize 0x00c4 1x 196 bytes bInterval 4 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x84 EP 4 IN bmAttributes 1 Transfer Type Isochronous Synch Type None Usage Type Data wMaxPacketSize 0x03ac 1x 940 bytes bInterval 1 Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 2 bNumEndpoints 4 bInterfaceClass 255 Vendor Specific Class bInterfaceSubClass 0 bInterfaceProtocol 255 iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0001 1x 1 bytes bInterval 11 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x82 EP 2 IN bmAttributes 1 Transfer Type Isochronous Synch Type None Usage Type Data wMaxPacketSize 0x0ad0 2x 720 bytes bInterval 1 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x83 EP 3 IN bmAttributes 1 Transfer Type Isochronous Synch Type None Usage Type Data wMaxPacketSize 0x00c4 1x 196 bytes bInterval 4 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x84 EP 4 IN bmAttributes 1 Transfer Type Isochronous Synch Type None Usage Type Data wMaxPacketSize 0x03ac 1x 940 bytes 
bInterval 1 Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 3 bNumEndpoints 4 bInterfaceClass 255 Vendor Specific Class bInterfaceSubClass 0 bInterfaceProtocol 255 iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0001 1x 1 bytes bInterval 11 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x82 EP 2 IN bmAttributes 1 Transfer Type Isochronous Synch Type None Usage Type Data wMaxPacketSize 0x0c00 2x 1024 bytes bInterval 1 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x83 EP 3 IN bmAttributes 1 Transfer Type Isochronous Synch Type None Usage Type Data wMaxPacketSize 0x00c4 1x 196 bytes bInterval 4 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x84 EP 4 IN bmAttributes 1 Transfer Type Isochronous Synch Type None Usage Type Data wMaxPacketSize 0x03ac 1x 940 bytes bInterval 1 Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 4 bNumEndpoints 4 bInterfaceClass 255 Vendor Specific Class bInterfaceSubClass 0 bInterfaceProtocol 255 iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0001 1x 1 bytes bInterval 11 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x82 EP 2 IN bmAttributes 1 Transfer Type Isochronous Synch Type None Usage Type Data wMaxPacketSize 0x1300 3x 768 bytes bInterval 1 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x83 EP 3 IN bmAttributes 1 Transfer Type Isochronous Synch Type None Usage Type Data wMaxPacketSize 0x00c4 1x 196 bytes bInterval 4 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x84 EP 4 IN bmAttributes 1 Transfer Type Isochronous Synch Type None Usage Type Data wMaxPacketSize 0x03ac 1x 940 bytes bInterval 1 Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 5 bNumEndpoints 4 bInterfaceClass 255 Vendor Specific Class bInterfaceSubClass 0 bInterfaceProtocol 255 iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0001 1x 1 bytes bInterval 11 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x82 EP 2 IN bmAttributes 1 Transfer Type Isochronous Synch Type None Usage Type Data wMaxPacketSize 0x1380 3x 896 bytes bInterval 1 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x83 EP 3 IN bmAttributes 1 Transfer Type Isochronous Synch Type None Usage Type Data wMaxPacketSize 0x00c4 1x 196 bytes bInterval 4 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x84 EP 4 IN bmAttributes 1 Transfer Type Isochronous Synch Type None Usage Type Data wMaxPacketSize 0x03ac 1x 940 bytes bInterval 1 Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 6 bNumEndpoints 4 bInterfaceClass 255 Vendor Specific Class bInterfaceSubClass 0 bInterfaceProtocol 255 iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0001 1x 1 bytes bInterval 11 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x82 EP 2 IN bmAttributes 1 Transfer Type Isochronous Synch Type None Usage Type 
Data wMaxPacketSize 0x13c0 3x 960 bytes bInterval 1 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x83 EP 3 IN bmAttributes 1 Transfer Type Isochronous Synch Type None Usage Type Data wMaxPacketSize 0x00c4 1x 196 bytes bInterval 4 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x84 EP 4 IN bmAttributes 1 Transfer Type Isochronous Synch Type None Usage Type Data wMaxPacketSize 0x03ac 1x 940 bytes bInterval 1 Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 7 bNumEndpoints 4 bInterfaceClass 255 Vendor Specific Class bInterfaceSubClass 0 bInterfaceProtocol 255 iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0001 1x 1 bytes bInterval 11 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x82 EP 2 IN bmAttributes 1 Transfer Type Isochronous Synch Type None Usage Type Data wMaxPacketSize 0x1400 3x 1024 bytes bInterval 1 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x83 EP 3 IN bmAttributes 1 Transfer Type Isochronous Synch Type None Usage Type Data wMaxPacketSize 0x00c4 1x 196 bytes bInterval 4 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x84 EP 4 IN bmAttributes 1 Transfer Type Isochronous Synch Type None Usage Type Data wMaxPacketSize 0x03ac 1x 940 bytes bInterval 1 Device Qualifier (for other device speed): bLength 10 bDescriptorType 6 bcdUSB 2.00 bDeviceClass 0 (Defined at Interface level) bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 64 bNumConfigurations 1 Device Status: 0x0000 (Bus Powered) Bus 001 Device 009: ID 2304:0242 Pinnacle Systems, Inc. Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 2.00 bDeviceClass 0 (Defined at Interface level) bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 64 idVendor 0x2304 Pinnacle Systems, Inc. 
idProduct 0x0242 bcdDevice 1.00 iManufacturer 1 Pinnacle Systems iProduct 2 PCTV 510e iSerial 3 123456789012 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 305 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0x80 (Bus Powered) MaxPower 500mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 4 bInterfaceClass 255 Vendor Specific Class bInterfaceSubClass 0 bInterfaceProtocol 255 iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0001 1x 1 bytes bInterval 11 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x82 EP 2 IN bmAttributes 1 Transfer Type Isochronous Synch Type None Usage Type Data wMaxPacketSize 0x0000 1x 0 bytes bInterval 1 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x83 EP 3 IN bmAttributes 1 Transfer Type Isochronous Synch Type None Usage Type Data wMaxPacketSize 0x0000 1x 0 bytes bInterval 4 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x84 EP 4 IN bmAttributes 1 Transfer Type Isochronous Synch Type None Usage Type Data wMaxPacketSize 0x0000 1x 0 bytes bInterval 1 Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 1 bNumEndpoints 4 bInterfaceClass 255 Vendor Specific Class bInterfaceSubClass 0 bInterfaceProtocol 255 iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0001 1x 1 bytes bInterval 11 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x82 EP 2 IN bmAttributes 1 Transfer Type Isochronous Synch Type None Usage Type Data wMaxPacketSize 0x0000 1x 0 bytes bInterval 1 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x83 EP 3 IN bmAttributes 1 Transfer Type Isochronous Synch Type None Usage Type Data wMaxPacketSize 0x00c4 1x 196 bytes bInterval 4 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x84 EP 4 IN bmAttributes 1 Transfer Type Isochronous Synch Type None Usage Type Data wMaxPacketSize 0x03ac 1x 940 bytes bInterval 1 Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 2 bNumEndpoints 4 bInterfaceClass 255 Vendor Specific Class bInterfaceSubClass 0 bInterfaceProtocol 255 iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0001 1x 1 bytes bInterval 11 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x82 EP 2 IN bmAttributes 1 Transfer Type Isochronous Synch Type None Usage Type Data wMaxPacketSize 0x0ad0 2x 720 bytes bInterval 1 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x83 EP 3 IN bmAttributes 1 Transfer Type Isochronous Synch Type None Usage Type Data wMaxPacketSize 0x00c4 1x 196 bytes bInterval 4 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x84 EP 4 IN bmAttributes 1 Transfer Type Isochronous Synch Type None Usage Type Data wMaxPacketSize 0x03ac 1x 940 bytes bInterval 1 Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 3 bNumEndpoints 4 bInterfaceClass 255 Vendor Specific Class bInterfaceSubClass 0 bInterfaceProtocol 255 iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 
bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0001 1x 1 bytes bInterval 11 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x82 EP 2 IN bmAttributes 1 Transfer Type Isochronous Synch Type None Usage Type Data wMaxPacketSize 0x0c00 2x 1024 bytes bInterval 1 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x83 EP 3 IN bmAttributes 1 Transfer Type Isochronous Synch Type None Usage Type Data wMaxPacketSize 0x00c4 1x 196 bytes bInterval 4 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x84 EP 4 IN bmAttributes 1 Transfer Type Isochronous Synch Type None Usage Type Data wMaxPacketSize 0x03ac 1x 940 bytes bInterval 1 Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 4 bNumEndpoints 4 bInterfaceClass 255 Vendor Specific Class bInterfaceSubClass 0 bInterfaceProtocol 255 iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0001 1x 1 bytes bInterval 11 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x82 EP 2 IN bmAttributes 1 Transfer Type Isochronous Synch Type None Usage Type Data wMaxPacketSize 0x1300 3x 768 bytes bInterval 1 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x83 EP 3 IN bmAttributes 1 Transfer Type Isochronous Synch Type None Usage Type Data wMaxPacketSize 0x00c4 1x 196 bytes bInterval 4 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x84 EP 4 IN bmAttributes 1 Transfer Type Isochronous Synch Type None Usage Type Data wMaxPacketSize 0x03ac 1x 940 bytes bInterval 1 Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 5 bNumEndpoints 4 bInterfaceClass 255 Vendor Specific Class bInterfaceSubClass 0 bInterfaceProtocol 255 iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0001 1x 1 bytes bInterval 11 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x82 EP 2 IN bmAttributes 1 Transfer Type Isochronous Synch Type None Usage Type Data wMaxPacketSize 0x1380 3x 896 bytes bInterval 1 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x83 EP 3 IN bmAttributes 1 Transfer Type Isochronous Synch Type None Usage Type Data wMaxPacketSize 0x00c4 1x 196 bytes bInterval 4 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x84 EP 4 IN bmAttributes 1 Transfer Type Isochronous Synch Type None Usage Type Data wMaxPacketSize 0x03ac 1x 940 bytes bInterval 1 Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 6 bNumEndpoints 4 bInterfaceClass 255 Vendor Specific Class bInterfaceSubClass 0 bInterfaceProtocol 255 iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0001 1x 1 bytes bInterval 11 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x82 EP 2 IN bmAttributes 1 Transfer Type Isochronous Synch Type None Usage Type Data wMaxPacketSize 0x13c0 3x 960 bytes bInterval 1 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x83 EP 3 IN bmAttributes 1 Transfer Type Isochronous Synch Type None Usage Type Data wMaxPacketSize 0x00c4 1x 196 bytes bInterval 4 Endpoint 
Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x84 EP 4 IN bmAttributes 1 Transfer Type Isochronous Synch Type None Usage Type Data wMaxPacketSize 0x03ac 1x 940 bytes bInterval 1 Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 7 bNumEndpoints 4 bInterfaceClass 255 Vendor Specific Class bInterfaceSubClass 0 bInterfaceProtocol 255 iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0001 1x 1 bytes bInterval 11 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x82 EP 2 IN bmAttributes 1 Transfer Type Isochronous Synch Type None Usage Type Data wMaxPacketSize 0x1400 3x 1024 bytes bInterval 1 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x83 EP 3 IN bmAttributes 1 Transfer Type Isochronous Synch Type None Usage Type Data wMaxPacketSize 0x00c4 1x 196 bytes bInterval 4 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x84 EP 4 IN bmAttributes 1 Transfer Type Isochronous Synch Type None Usage Type Data wMaxPacketSize 0x03ac 1x 940 bytes bInterval 1 Device Qualifier (for other device speed): bLength 10 bDescriptorType 6 bcdUSB 2.00 bDeviceClass 0 (Defined at Interface level) bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 64 bNumConfigurations 1 Device Status: 0x0000 (Bus Powered) --===============2114149469588745322==-- From smoch@web.de Wed Jan 16 22:38:10 2013 From: Soeren Moch To: linaro-mm-sig@lists.linaro.org Subject: Re: [Linaro-mm-sig] [PATCH v2] mm: dmapool: use provided gfp flags for all dma_alloc_coherent() calls Date: Wed, 16 Jan 2013 23:36:35 +0100 Message-ID: <50F72B73.5070504@web.de> In-Reply-To: <20130116174736.GJ25500@titan.lakedaemon.net> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============9003752078851614362==" --===============9003752078851614362== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: 7bit On 16.01.2013 18:47, Jason Cooper wrote: > On Wed, Jan 16, 2013 at 06:32:09PM +0100, Soeren Moch wrote: >> On 16.01.2013 09:55, Soeren Moch wrote: >>> On 16.01.2013 04:24, Soeren Moch wrote: >>>> I did not bisect it, but Marek mentioned earlier that commit >>>> e9da6e9905e639b0f842a244bc770b48ad0523e9 in Linux v3.6-rc1 introduced >>>> new code for dma allocations. This is probably the root cause for the >>>> new (mis-)behavior (due to my tests 3.6.0 is not working anymore). >>> >>> I don't want to say that Mareks patch is wrong, probably it triggers a >>> bug somewhere else! (in em28xx?) >> >> The em28xx sticks are using isochronous usb transfers. Is there a >> special handling for that? > > I'm looking at that now. It looks like the em28xx wants (as a maximum) > 655040 bytes (em28xx-core.c:1088). There are 5 transfer buffers, with > 64 max packets and 2047 max packet size (runtime reported max & 0x7ff). > > If it actually needs all of that, then the answer may be to just > increase coherent_pool= when using that driver. I'll keep digging. I already tested with 4M coherent pool size and could not see significant improvement. Would it make sense to further increase the buffer size? 
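For reference, the maximum quoted above works out to 5 transfer buffers * 64 packets * 2047 bytes = 655040 bytes, roughly 640 KiB per em28xx stick, so two sticks already approach 1.25 MiB against the default 1024 KiB atomic pool. The pool size itself is only a boot parameter, e.g. set from U-Boot (the 8M below is just an example value, not a recommendation):

  setenv bootargs "${bootargs} coherent_pool=8M"
  boot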
Regards, Soeren --===============9003752078851614362==-- From smoch@web.de Wed Jan 16 23:10:20 2013 From: Soeren Moch To: linaro-mm-sig@lists.linaro.org Subject: Re: [Linaro-mm-sig] [PATCH] ata: sata_mv: fix sg_tbl_pool alignment Date: Thu, 17 Jan 2013 00:10:09 +0100 Message-ID: <50F73351.3050708@web.de> In-Reply-To: <20130116175203.GK25500@titan.lakedaemon.net> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============5385877580713027461==" --===============5385877580713027461== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: 7bit On 16.01.2013 18:52, Jason Cooper wrote: > On Wed, Jan 16, 2013 at 06:05:59PM +0100, Soeren Moch wrote: >> On 16.01.2013 16:50, Jason Cooper wrote: >>> On Wed, Jan 16, 2013 at 09:55:55AM +0100, Soeren Moch wrote: >>>> On 16.01.2013 04:24, Soeren Moch wrote: >>>>> On 16.01.2013 03:40, Jason Cooper wrote: >>>>>> On Wed, Jan 16, 2013 at 01:17:59AM +0100, Soeren Moch wrote: >>>>>>> On 15.01.2013 22:56, Jason Cooper wrote: >>>>>>>> On Tue, Jan 15, 2013 at 03:16:17PM -0500, Jason Cooper wrote: >>> >>>> OK, I could trigger the error >>>> ERROR: 1024 KiB atomic DMA coherent pool is too small! >>>> Please increase it with coherent_pool= kernel parameter! >>>> only with em28xx sticks and sata, dib0700 sticks removed. >>> >>> Did you test the reverse scenario? ie dib0700 with sata_mv and no >>> em28xx. >> >> Maybe I can test this next night. > > Please do, this will tell us if it is in the USB drivers or lower > (something in common). > >>>>>> What would be most helpful is if you could do a git bisect between >>>>>> v3.5.x (working) and the oldest version where you know it started >>>>>> failing (v3.7.1 or earlier if you know it). >>>>>> >>>>> I did not bisect it, but Marek mentioned earlier that commit >>>>> e9da6e9905e639b0f842a244bc770b48ad0523e9 in Linux v3.6-rc1 introduced >>>>> new code for dma allocations. This is probably the root cause for the >>>>> new (mis-)behavior (due to my tests 3.6.0 is not working anymore). >>>> >>>> I don't want to say that Mareks patch is wrong, probably it triggers a >>>> bug somewhere else! (in em28xx?) >>> >>> Of the four drivers you listed, none are using dma. sata_mv is the only >>> one. >> >> usb_core is doing the actual DMA for the usb bridge drivers, I think. > > Yes, my mistake. I'd like to attribute that statement to pre-coffee > rambling. :-) > >>> If one is to believe the comments in sata_mv.c:~151, then the alignment >>> is wrong for the sg_tbl_pool. >>> >>> Could you please try the following patch? >> >> OK, what should I test first, the setup from last night (em28xx, no >> dib0700) plus your patch, or the reverse setup (dib0700, no em28xx) >> without your patch, or my normal setting (all dvb sticks) plus your >> patch? > > if testing time is limited, please do the test I outlined at the top of > this email. I've been digging more into the dma code and while I think > the patch is correct, I don't see where it would fix your problem (yet). Unfortunately test time is limited, and the test has to run about 10 hours to trigger the error. I also think that sata is not causing the problem. Maybe your patch even goes in the wrong direction. Perhaps the dma memory pool is not too small, there might be enough memory available, but it is too much fragmented to satisfy larger block allocations. With different drivers allocating totally different block sizes aligned to bytes or words, perhaps we end up with lots of very small free blocks in the dma pool after several hours of runtime? 
So maybe it would help to align all allocations in the dma pool to 256B?

Regards,
Soeren

--===============5385877580713027461==--

From jani.nikula@linux.intel.com Thu Jan 17 08:41:18 2013
From: Jani Nikula
To: linaro-mm-sig@lists.linaro.org
Subject: Re: [Linaro-mm-sig] CDF discussions at FOSDEM
Date: Thu, 17 Jan 2013 10:42:19 +0200
Message-ID: <874nigw68k.fsf@intel.com>
In-Reply-To: <2665133.qfM3EnSmyB@avalon>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="===============3317073665659925766=="

--===============3317073665659925766==
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: quoted-printable

On Fri, 11 Jan 2013, Laurent Pinchart wrote:
> Would anyone be interested in meeting at the FOSDEM to discuss the Common
> Display Framework ? There will be a CDF meeting at the ELC at the end of
> February, the FOSDEM would be a good venue for European developers.

Yes, count me in,
Jani.

--===============3317073665659925766==--

From smoch@web.de Thu Jan 17 09:12:48 2013
From: Soeren Moch
To: linaro-mm-sig@lists.linaro.org
Subject: Re: [Linaro-mm-sig] [PATCH] ata: sata_mv: fix sg_tbl_pool alignment
Date: Thu, 17 Jan 2013 10:11:09 +0100
Message-ID: <50F7C02D.60305@web.de>
In-Reply-To: <20130116175203.GK25500@titan.lakedaemon.net>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="===============1075800083334540429=="

--===============1075800083334540429==
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit

On 16.01.2013 18:52, Jason Cooper wrote:
> On Wed, Jan 16, 2013 at 06:05:59PM +0100, Soeren Moch wrote:
>> On 16.01.2013 16:50, Jason Cooper wrote:
>>> On Wed, Jan 16, 2013 at 09:55:55AM +0100, Soeren Moch wrote:
>>>> On 16.01.2013 04:24, Soeren Moch wrote:
>>>>> On 16.01.2013 03:40, Jason Cooper wrote:
>>>>>> On Wed, Jan 16, 2013 at 01:17:59AM +0100, Soeren Moch wrote:
>>>>>>> On 15.01.2013 22:56, Jason Cooper wrote:
>>>>>>>> On Tue, Jan 15, 2013 at 03:16:17PM -0500, Jason Cooper wrote:
>>>
>>>> OK, I could trigger the error
>>>> ERROR: 1024 KiB atomic DMA coherent pool is too small!
>>>> Please increase it with coherent_pool= kernel parameter!
>>>> only with em28xx sticks and sata, dib0700 sticks removed.
>>>
>>> Did you test the reverse scenario? ie dib0700 with sata_mv and no
>>> em28xx.
>>
>> Maybe I can test this next night.
>
> Please do, this will tell us if it is in the USB drivers or lower
> (something in common).

So far there is no error with dib0700 + sata, without em28xx. But to be
sure that there is absolutely no problem with this setting we probably
need additional testing hours.

BTW, these dib0700 sticks use usb bulk transfers (and maybe smaller dma
buffers?).

Regards,
Soeren

--===============1075800083334540429==--

From arnd@arndb.de Thu Jan 17 10:49:47 2013
From: Arnd Bergmann
To: linaro-mm-sig@lists.linaro.org
Subject: Re: [Linaro-mm-sig] [PATCH v2] mm: dmapool: use provided gfp flags for all dma_alloc_coherent() calls
Date: Thu, 17 Jan 2013 10:49:30 +0000
Message-ID: <201301171049.30415.arnd@arndb.de>
In-Reply-To: <50F61D86.4020801@web.de>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="===============3474579917304816816=="

--===============3474579917304816816==
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit

On Wednesday 16 January 2013, Soeren Moch wrote:
> >> I will see what I can do here. Is there an easy way to track the buffer
> >> usage without having to wait for complete exhaustion?
> >
> > DMA_API_DEBUG
>
> OK, maybe I can try this.
> >
Any success with this? It should at least tell you if there is a
memory leak in one of the drivers.

	Arnd

--===============3474579917304816816==--

From daniel.vetter@ffwll.ch Thu Jan 17 12:29:28 2013
From: Daniel Vetter
To: linaro-mm-sig@lists.linaro.org
Subject: Re: [Linaro-mm-sig] CDF discussions at FOSDEM
Date: Thu, 17 Jan 2013 13:29:27 +0100
Message-ID:
In-Reply-To: <874nigw68k.fsf@intel.com>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="===============4871094475317491815=="

--===============4871094475317491815==
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: quoted-printable

On Thu, Jan 17, 2013 at 9:42 AM, Jani Nikula wrote:
> On Fri, 11 Jan 2013, Laurent Pinchart wrote:
>> Would anyone be interested in meeting at the FOSDEM to discuss the Common
>> Display Framework ? There will be a CDF meeting at the ELC at the end of
>> February, the FOSDEM would be a good venue for European developers.
>
> Yes, count me in,

Jesse, Ville and I should also be around. Do we have a slot fixed already?
-Daniel

--
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch

--===============4871094475317491815==--

From smoch@web.de Thu Jan 17 13:49:03 2013
From: Soeren Moch
To: linaro-mm-sig@lists.linaro.org
Subject: Re: [Linaro-mm-sig] [PATCH v2] mm: dmapool: use provided gfp flags for all dma_alloc_coherent() calls
Date: Thu, 17 Jan 2013 14:47:23 +0100
Message-ID: <50F800EB.6040104@web.de>
In-Reply-To: <201301171049.30415.arnd@arndb.de>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="===============6108668976359481783=="

--===============6108668976359481783==
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit

On 17.01.2013 11:49, Arnd Bergmann wrote:
> On Wednesday 16 January 2013, Soeren Moch wrote:
>>>> I will see what I can do here. Is there an easy way to track the buffer
>>>> usage without having to wait for complete exhaustion?
>>>
>>> DMA_API_DEBUG
>>
>> OK, maybe I can try this.
>>>
>
> Any success with this? It should at least tell you if there is a
> memory leak in one of the drivers.

Not yet, sorry. I have to do all the tests in my limited spare time.
Can you tell me what to search for in the debug output?

Soeren

--===============6108668976359481783==--

From r.schwebel@pengutronix.de Thu Jan 17 20:20:40 2013
From: Robert Schwebel
To: linaro-mm-sig@lists.linaro.org
Subject: Re: [Linaro-mm-sig] CDF discussions at FOSDEM
Date: Thu, 17 Jan 2013 21:20:38 +0100
Message-ID: <20130117202038.GP30136@pengutronix.de>
In-Reply-To: <2665133.qfM3EnSmyB@avalon>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="===============5902944413665782480=="

--===============5902944413665782480==
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit

On Fri, Jan 11, 2013 at 09:27:03PM +0100, Laurent Pinchart wrote:
> Would anyone be interested in meeting at the FOSDEM to discuss the Common
> Display Framework ? There will be a CDF meeting at the ELC at the end of
> February, the FOSDEM would be a good venue for European developers.

We are interested as well (Philipp, Michael, Sascha, me, maybe also some
of the others from the Pengutronix crew...).

rsc
--
Pengutronix e.K.                           |                             |
Industrial Linux Solutions                 | http://www.pengutronix.de/  |
Peiner Str. 6-8, 31137 Hildesheim, Germany | Phone: +49-5121-206917-0    |
Amtsgericht Hildesheim, HRA 2686           | Fax:   +49-5121-206917-5555 |

--===============5902944413665782480==--

From arnd@arndb.de Thu Jan 17 20:29:11 2013
From: Arnd Bergmann
To: linaro-mm-sig@lists.linaro.org
Subject: Re: [Linaro-mm-sig] [PATCH v2] mm: dmapool: use provided gfp flags for all dma_alloc_coherent() calls
Date: Thu, 17 Jan 2013 20:26:45 +0000
Message-ID: <201301172026.45514.arnd@arndb.de>
In-Reply-To: <50F800EB.6040104@web.de>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="===============6152380484520643983=="

--===============6152380484520643983==
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: quoted-printable

On Thursday 17 January 2013, Soeren Moch wrote:
> On 17.01.2013 11:49, Arnd Bergmann wrote:
> > On Wednesday 16 January 2013, Soeren Moch wrote:
> >>>> I will see what I can do here. Is there an easy way to track the buffer
> >>>> usage without having to wait for complete exhaustion?
> >>>
> >>> DMA_API_DEBUG
> >>
> >> OK, maybe I can try this.
> >>>
> >
> > Any success with this? It should at least tell you if there is a
> > memory leak in one of the drivers.
>
> Not yet, sorry. I have to do all the tests in my limited spare time.
> Can you tell me what to search for in the debug output?

Actually now that I've looked closer, you can't immediately see
all the mappings as I thought.

But please try enabling DMA_API_DEBUG in combination with this
one-line patch:

diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index 6b2fb87..3df74ac 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -497,6 +497,7 @@ static void *__alloc_from_pool(size_t size, struct page **ret_page)
 		pr_err_once("ERROR: %u KiB atomic DMA coherent pool is too small!\n"
 			    "Please increase it with coherent_pool= kernel parameter!\n",
 			    (unsigned)pool->size / 1024);
+		debug_dma_dump_mappings(NULL);
 	}
 	spin_unlock_irqrestore(&pool->lock, flags);

That will show every single allocation that is currently active. This lets
you see where all the memory went, and if there is a possible leak or
excessive fragmentation.

	Arnd

--===============6152380484520643983==--

From smoch@web.de Sat Jan 19 15:32:58 2013
From: Soeren Moch
To: linaro-mm-sig@lists.linaro.org
Subject: Re: [Linaro-mm-sig] [PATCH v2] mm: dmapool: use provided gfp flags for all dma_alloc_coherent() calls
Date: Sat, 19 Jan 2013 16:29:49 +0100
Message-ID: <50FABBED.1020905@web.de>
In-Reply-To: <201301172026.45514.arnd@arndb.de>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="===============4884483268715003250=="

--===============4884483268715003250==
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: quoted-printable

On 17.01.2013 21:26, Arnd Bergmann wrote:
> On Thursday 17 January 2013, Soeren Moch wrote:
>> On 17.01.2013 11:49, Arnd Bergmann wrote:
>>> On Wednesday 16 January 2013, Soeren Moch wrote:
>>>>>> I will see what I can do here. Is there an easy way to track the buffer
>>>>>> usage without having to wait for complete exhaustion?
>>>>>
>>>>> DMA_API_DEBUG
>>>>
>>>> OK, maybe I can try this.
>>>>>
>>>
>>> Any success with this? It should at least tell you if there is a
>>> memory leak in one of the drivers.
>>
>> Not yet, sorry. I have to do all the tests in my limited spare time.
>> Can you tell me what to search for in the debug output?
>
> Actually now that I've looked closer, you can't immediately see
> all the mappings as I thought.
>
> But please try enabling DMA_API_DEBUG in combination with this
> one-line patch:
>
> diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
> index 6b2fb87..3df74ac 100644
> --- a/arch/arm/mm/dma-mapping.c
> +++ b/arch/arm/mm/dma-mapping.c
> @@ -497,6 +497,7 @@ static void *__alloc_from_pool(size_t size, struct page **ret_page)
>  		pr_err_once("ERROR: %u KiB atomic DMA coherent pool is too small!\n"
>  			    "Please increase it with coherent_pool= kernel parameter!\n",
>  			    (unsigned)pool->size / 1024);
> +		debug_dma_dump_mappings(NULL);
>  	}
>  	spin_unlock_irqrestore(&pool->lock, flags);
>
> That will show every single allocation that is currently active. This lets
> you see where all the memory went, and if there is a possible leak or
> excessive fragmentation.
>
> Arnd
>

Please find attached a debug log generated with your patch. I used the
sata disk and two em28xx dvb sticks, no other usb devices, no ethernet
cable connected, tuners on saa716x-based card not used.

What I can see in the log: a lot of coherent mappings from sata_mv and
orion_ehci, a few from mv643xx_eth, no other coherent mappings. All
coherent mappings are page aligned, some of them (from orion_ehci) are
not really small (as claimed in __alloc_from_pool).

I don't believe there is a memory leak. When I restart vdr (the
application utilizing the dvb sticks), there is enough dma memory
available again.

Regards,
Soeren

--===============4884483268715003250==
Content-Type: text/x-log
Content-Transfer-Encoding: base64
Content-Disposition: attachment; filename="dma_debug.log"
MIME-Version: 1.0
[Attachment "dma_debug.log", base64-encoded in the original mail; decoded excerpt:

Jan 19 13:54:58 guruvdr kernel: ERROR: 1024 KiB atomic DMA coherent pool is too small!
Jan 19 13:54:58 guruvdr kernel: Please increase it with coherent_pool= kernel parameter!
Jan 19 13:54:58 guruvdr kernel: sata_mv sata_mv.0: coherent idx 0 P=20a03000 D=1f001000 L=1000 DMA_BIDIRECTIONAL
Jan 19 13:54:58 guruvdr kernel: sata_mv sata_mv.0: coherent idx 1 P=20a05000 D=1f002000 L=1000 DMA_BIDIRECTIONAL

The remaining lines of the attachment list the other active mappings in the
same format: page-sized (L=1000) coherent mappings from sata_mv and
orion-ehci, a few larger (L=eb00) coherent mappings and one tiny (L=1)
single mapping from orion-ehci, plus L=800 coherent and L=600 single
mappings from mv643xx_eth_port. The attachment is truncated here.]
OWU3MDAwIEw9MTAwMCBETUFfQklESVJFQ1RJT05BTApKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBr ZXJuZWw6IG9yaW9uLWVoY2kgb3Jpb24tZWhjaS4wOiBjb2hlcmVudCBpZHggMjQ0IFA9MjA4ZWMw MDAgRD0xZjllODAwMCBMPTEwMDAgRE1BX0JJRElSRUNUSU9OQUwKSmFuIDE5IDEzOjU0OjU4IGd1 cnV2ZHIga2VybmVsOiBvcmlvbi1laGNpIG9yaW9uLWVoY2kuMDogY29oZXJlbnQgaWR4IDI0NCBQ PTIwOGVkMDAwIEQ9MWY5ZTkwMDAgTD0xMDAwIERNQV9CSURJUkVDVElPTkFMCkphbiAxOSAxMzo1 NDo1OCBndXJ1dmRyIGtlcm5lbDogb3Jpb24tZWhjaSBvcmlvbi1laGNpLjA6IGNvaGVyZW50IGlk eCAyNDUgUD0yMDhlZTAwMCBEPTFmOWVhMDAwIEw9MTAwMCBETUFfQklESVJFQ1RJT05BTApKYW4g MTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IG9yaW9uLWVoY2kgb3Jpb24tZWhjaS4wOiBjb2hl cmVudCBpZHggMjQ1IFA9MjA4ZWYwMDAgRD0xZjllYjAwMCBMPTEwMDAgRE1BX0JJRElSRUNUSU9O QUwKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBvcmlvbi1laGNpIG9yaW9uLWVoY2ku MDogY29oZXJlbnQgaWR4IDI0NiBQPTIwOGYwMDAwIEQ9MWY5ZWMwMDAgTD0xMDAwIERNQV9CSURJ UkVDVElPTkFMCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogb3Jpb24tZWhjaSBvcmlv bi1laGNpLjA6IGNvaGVyZW50IGlkeCAyNDYgUD0yMDhmMTAwMCBEPTFmOWVkMDAwIEw9MTAwMCBE TUFfQklESVJFQ1RJT05BTApKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IG9yaW9uLWVo Y2kgb3Jpb24tZWhjaS4wOiBjb2hlcmVudCBpZHggMjQ3IFA9MjA4ZjIwMDAgRD0xZjllZTAwMCBM PTEwMDAgRE1BX0JJRElSRUNUSU9OQUwKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBv cmlvbi1laGNpIG9yaW9uLWVoY2kuMDogY29oZXJlbnQgaWR4IDI0NyBQPTIwOGYzMDAwIEQ9MWY5 ZWYwMDAgTD0xMDAwIERNQV9CSURJUkVDVElPTkFMCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtl cm5lbDogb3Jpb24tZWhjaSBvcmlvbi1laGNpLjA6IGNvaGVyZW50IGlkeCAyNDggUD0yMDhmNDAw MCBEPTFmOWYwMDAwIEw9MTAwMCBETUFfQklESVJFQ1RJT05BTApKYW4gMTkgMTM6NTQ6NTggZ3Vy dXZkciBrZXJuZWw6IG9yaW9uLWVoY2kgb3Jpb24tZWhjaS4wOiBjb2hlcmVudCBpZHggMjQ4IFA9 MjA4ZjUwMDAgRD0xZjlmMTAwMCBMPTEwMDAgRE1BX0JJRElSRUNUSU9OQUwKSmFuIDE5IDEzOjU0 OjU4IGd1cnV2ZHIga2VybmVsOiBvcmlvbi1laGNpIG9yaW9uLWVoY2kuMDogY29oZXJlbnQgaWR4 IDI0OSBQPTIwOGY2MDAwIEQ9MWY5ZjIwMDAgTD0xMDAwIERNQV9CSURJUkVDVElPTkFMCkphbiAx OSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogb3Jpb24tZWhjaSBvcmlvbi1laGNpLjA6IGNvaGVy ZW50IGlkeCAyNDkgUD0yMDhmNzAwMCBEPTFmOWYzMDAwIEw9MTAwMCBETUFfQklESVJFQ1RJT05B TApKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IG9yaW9uLWVoY2kgb3Jpb24tZWhjaS4w OiBjb2hlcmVudCBpZHggMjUwIFA9MjA4ZjgwMDAgRD0xZjlmNDAwMCBMPTEwMDAgRE1BX0JJRElS RUNUSU9OQUwKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBvcmlvbi1laGNpIG9yaW9u LWVoY2kuMDogY29oZXJlbnQgaWR4IDI1MCBQPTIwOGY5MDAwIEQ9MWY5ZjUwMDAgTD0xMDAwIERN QV9CSURJUkVDVElPTkFMCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogb3Jpb24tZWhj aSBvcmlvbi1laGNpLjA6IGNvaGVyZW50IGlkeCAyNTEgUD0yMDhmYTAwMCBEPTFmOWY2MDAwIEw9 MTAwMCBETUFfQklESVJFQ1RJT05BTApKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IG9y aW9uLWVoY2kgb3Jpb24tZWhjaS4wOiBjb2hlcmVudCBpZHggMjUxIFA9MjA4ZmIwMDAgRD0xZjlm NzAwMCBMPTEwMDAgRE1BX0JJRElSRUNUSU9OQUwKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2Vy bmVsOiBvcmlvbi1laGNpIG9yaW9uLWVoY2kuMDogY29oZXJlbnQgaWR4IDI1MiBQPTIwOGZjMDAw IEQ9MWY5ZjgwMDAgTD0xMDAwIERNQV9CSURJUkVDVElPTkFMCkphbiAxOSAxMzo1NDo1OCBndXJ1 dmRyIGtlcm5lbDogb3Jpb24tZWhjaSBvcmlvbi1laGNpLjA6IGNvaGVyZW50IGlkeCAyNTIgUD0y MDhmZDAwMCBEPTFmOWY5MDAwIEw9MTAwMCBETUFfQklESVJFQ1RJT05BTApKYW4gMTkgMTM6NTQ6 NTggZ3VydXZkciBrZXJuZWw6IG9yaW9uLWVoY2kgb3Jpb24tZWhjaS4wOiBjb2hlcmVudCBpZHgg MjUzIFA9MjA4ZmUwMDAgRD0xZjlmYTAwMCBMPTEwMDAgRE1BX0JJRElSRUNUSU9OQUwKSmFuIDE5 IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBvcmlvbi1laGNpIG9yaW9uLWVoY2kuMDogY29oZXJl bnQgaWR4IDI1MyBQPTIwOGZmMDAwIEQ9MWY5ZmIwMDAgTD0xMDAwIERNQV9CSURJUkVDVElPTkFM CkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogb3Jpb24tZWhjaSBvcmlvbi1laGNpLjA6 IGNvaGVyZW50IGlkeCAyNTQgUD0yMDkwMDAwMCBEPTFmOWZjMDAwIEw9MTAwMCBETUFfQklESVJF Q1RJT05BTApKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IG9yaW9uLWVoY2kgb3Jpb24t 
ZWhjaS4wOiBjb2hlcmVudCBpZHggMjU0IFA9MjA5MDEwMDAgRD0xZjlmZDAwMCBMPTEwMDAgRE1B X0JJRElSRUNUSU9OQUwKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBvcmlvbi1laGNp IG9yaW9uLWVoY2kuMDogY29oZXJlbnQgaWR4IDI1NSBQPTIwOTAyMDAwIEQ9MWY5ZmUwMDAgTD0x MDAwIERNQV9CSURJUkVDVElPTkFMCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogb3Jp b24tZWhjaSBvcmlvbi1laGNpLjA6IGNvaGVyZW50IGlkeCAyNTUgUD0yMDkwMzAwMCBEPTFmOWZm MDAwIEw9MTAwMCBETUFfQklESVJFQ1RJT05BTApKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJu ZWw6IG9yaW9uLWVoY2kgb3Jpb24tZWhjaS4wOiBjb2hlcmVudCBpZHggMjk2IFA9MjExMWQwMDAg RD0xY2E1MDAwMCBMPWViMDAgRE1BX0JJRElSRUNUSU9OQUwKSmFuIDE5IDEzOjU0OjU4IGd1cnV2 ZHIga2VybmVsOiBvcmlvbi1laGNpIG9yaW9uLWVoY2kuMDogY29oZXJlbnQgaWR4IDMwNCBQPTIx MTJkMDAwIEQ9MWNhNjAwMDAgTD1lYjAwIERNQV9CSURJUkVDVElPTkFMCkphbiAxOSAxMzo1NDo1 OCBndXJ1dmRyIGtlcm5lbDogb3Jpb24tZWhjaSBvcmlvbi1laGNpLjA6IGNvaGVyZW50IGlkeCAz MTIgUD0yMTEzZDAwMCBEPTFjYTcwMDAwIEw9ZWIwMCBETUFfQklESVJFQ1RJT05BTApKYW4gMTkg MTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IG9yaW9uLWVoY2kgb3Jpb24tZWhjaS4wOiBjb2hlcmVu dCBpZHggMzIwIFA9MjExNGQwMDAgRD0xY2E4MDAwMCBMPWViMDAgRE1BX0JJRElSRUNUSU9OQUwK SmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBzYXRhX212IHNhdGFfbXYuMDogY29oZXJl bnQgaWR4IDMyOCBQPTIwYTBkMDAwIEQ9MWZhOTAwMDAgTD0xMDAwIERNQV9CSURJUkVDVElPTkFM CkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogc2F0YV9tdiBzYXRhX212LjA6IGNvaGVy ZW50IGlkeCAzMjggUD0yMGEwZjAwMCBEPTFmYTkxMDAwIEw9MTAwMCBETUFfQklESVJFQ1RJT05B TApKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IG9yaW9uLWVoY2kgb3Jpb24tZWhjaS4w OiBjb2hlcmVudCBpZHggMzI4IFA9MjExNWQwMDAgRD0xY2E5MDAwMCBMPWViMDAgRE1BX0JJRElS RUNUSU9OQUwKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBzYXRhX212IHNhdGFfbXYu MDogY29oZXJlbnQgaWR4IDMyOSBQPTIwYTExMDAwIEQ9MWZhOTIwMDAgTD0xMDAwIERNQV9CSURJ UkVDVElPTkFMCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogc2F0YV9tdiBzYXRhX212 LjA6IGNvaGVyZW50IGlkeCAzMjkgUD0yMGExMzAwMCBEPTFmYTkzMDAwIEw9MTAwMCBETUFfQklE SVJFQ1RJT05BTApKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IHNhdGFfbXYgc2F0YV9t di4wOiBjb2hlcmVudCBpZHggMzQyIFA9MjBhNDcwMDAgRD0xZmFhYzAwMCBMPTEwMDAgRE1BX0JJ RElSRUNUSU9OQUwKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBzYXRhX212IHNhdGFf bXYuMDogY29oZXJlbnQgaWR4IDM0MiBQPTIwYTQ5MDAwIEQ9MWZhYWQwMDAgTD0xMDAwIERNQV9C SURJUkVDVElPTkFMCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogc2F0YV9tdiBzYXRh X212LjA6IGNvaGVyZW50IGlkeCAzNTkgUD0yMGE0YjAwMCBEPTFmYWNlMDAwIEw9MTAwMCBETUFf QklESVJFQ1RJT05BTApKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IHNhdGFfbXYgc2F0 YV9tdi4wOiBjb2hlcmVudCBpZHggMzU5IFA9MjBhNGQwMDAgRD0xZmFjZjAwMCBMPTEwMDAgRE1B X0JJRElSRUNUSU9OQUwKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBTQUE3MTZ4IEZG IDAwMDA6MDA6MDEuMDogc2luZ2xlIGlkeCAzOTIgUD0xZjMxMTAwMCBEPTFmMzExMDAwIEw9MTAw MCBETUFfVE9fREVWSUNFCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogU0FBNzE2eCBG RiAwMDAwOjAwOjAxLjA6IHNjYXRoZXItZ2F0aGVyIGlkeCAzOTMgUD0xZjMxMjAwMCBEPTFmMzEy MDAwIEw9MTAwMCBETUFfRlJPTV9ERVZJQ0UKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVs OiBTQUE3MTZ4IEZGIDAwMDA6MDA6MDEuMDogc2NhdGhlci1nYXRoZXIgaWR4IDM5MyBQPTFmMzEz MDAwIEQ9MWYzMTMwMDAgTD0xMDAwIERNQV9GUk9NX0RFVklDRQpKYW4gMTkgMTM6NTQ6NTggZ3Vy dXZkciBrZXJuZWw6IFNBQTcxNnggRkYgMDAwMDowMDowMS4wOiBzY2F0aGVyLWdhdGhlciBpZHgg Mzk0IFA9MWYzMTQwMDAgRD0xZjMxNDAwMCBMPTEwMDAgRE1BX0ZST01fREVWSUNFCkphbiAxOSAx Mzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogU0FBNzE2eCBGRiAwMDAwOjAwOjAxLjA6IHNjYXRoZXIt Z2F0aGVyIGlkeCAzOTQgUD0xZjMxNTAwMCBEPTFmMzE1MDAwIEw9MTAwMCBETUFfRlJPTV9ERVZJ Q0UKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBTQUE3MTZ4IEZGIDAwMDA6MDA6MDEu MDogc2NhdGhlci1nYXRoZXIgaWR4IDM5NSBQPTFmMzE2MDAwIEQ9MWYzMTYwMDAgTD0xMDAwIERN QV9GUk9NX0RFVklDRQpKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IFNBQTcxNnggRkYg 
MDAwMDowMDowMS4wOiBzY2F0aGVyLWdhdGhlciBpZHggMzk1IFA9MWYzMTcwMDAgRD0xZjMxNzAw MCBMPTEwMDAgRE1BX0ZST01fREVWSUNFCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDog U0FBNzE2eCBGRiAwMDAwOjAwOjAxLjA6IHNjYXRoZXItZ2F0aGVyIGlkeCAzOTYgUD0xZjMxODAw MCBEPTFmMzE4MDAwIEw9MTAwMCBETUFfRlJPTV9ERVZJQ0UKSmFuIDE5IDEzOjU0OjU4IGd1cnV2 ZHIga2VybmVsOiBTQUE3MTZ4IEZGIDAwMDA6MDA6MDEuMDogc2NhdGhlci1nYXRoZXIgaWR4IDM5 NiBQPTFmMzE5MDAwIEQ9MWYzMTkwMDAgTD0xMDAwIERNQV9GUk9NX0RFVklDRQpKYW4gMTkgMTM6 NTQ6NTggZ3VydXZkciBrZXJuZWw6IFNBQTcxNnggRkYgMDAwMDowMDowMS4wOiBzY2F0aGVyLWdh dGhlciBpZHggMzk3IFA9MWYzMWEwMDAgRD0xZjMxYTAwMCBMPTEwMDAgRE1BX0ZST01fREVWSUNF CkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogU0FBNzE2eCBGRiAwMDAwOjAwOjAxLjA6 IHNjYXRoZXItZ2F0aGVyIGlkeCAzOTcgUD0xZjMxYjAwMCBEPTFmMzFiMDAwIEw9MTAwMCBETUFf RlJPTV9ERVZJQ0UKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBTQUE3MTZ4IEZGIDAw MDA6MDA6MDEuMDogc2NhdGhlci1nYXRoZXIgaWR4IDM5OCBQPTFmMzFjMDAwIEQ9MWYzMWMwMDAg TD0xMDAwIERNQV9GUk9NX0RFVklDRQpKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IFNB QTcxNnggRkYgMDAwMDowMDowMS4wOiBzY2F0aGVyLWdhdGhlciBpZHggMzk4IFA9MWYzMWQwMDAg RD0xZjMxZDAwMCBMPTEwMDAgRE1BX0ZST01fREVWSUNFCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRy IGtlcm5lbDogU0FBNzE2eCBGRiAwMDAwOjAwOjAxLjA6IHNjYXRoZXItZ2F0aGVyIGlkeCAzOTkg UD0xZjMxZTAwMCBEPTFmMzFlMDAwIEw9MTAwMCBETUFfRlJPTV9ERVZJQ0UKSmFuIDE5IDEzOjU0 OjU4IGd1cnV2ZHIga2VybmVsOiBTQUE3MTZ4IEZGIDAwMDA6MDA6MDEuMDogc2NhdGhlci1nYXRo ZXIgaWR4IDM5OSBQPTFmMzFmMDAwIEQ9MWYzMWYwMDAgTD0xMDAwIERNQV9GUk9NX0RFVklDRQpK YW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IFNBQTcxNnggRkYgMDAwMDowMDowMS4wOiBz Y2F0aGVyLWdhdGhlciBpZHggNDAwIFA9MWYzMjAwMDAgRD0xZjMyMDAwMCBMPTEwMDAgRE1BX0ZS T01fREVWSUNFCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogU0FBNzE2eCBGRiAwMDAw OjAwOjAxLjA6IHNjYXRoZXItZ2F0aGVyIGlkeCA0MDAgUD0xZjMyMTAwMCBEPTFmMzIxMDAwIEw9 MTAwMCBETUFfRlJPTV9ERVZJQ0UKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBTQUE3 MTZ4IEZGIDAwMDA6MDA6MDEuMDogc2NhdGhlci1nYXRoZXIgaWR4IDQwMSBQPTFmMzIyMDAwIEQ9 MWYzMjIwMDAgTD0xMDAwIERNQV9GUk9NX0RFVklDRQpKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBr ZXJuZWw6IFNBQTcxNnggRkYgMDAwMDowMDowMS4wOiBzY2F0aGVyLWdhdGhlciBpZHggNDAxIFA9 MWYzMjMwMDAgRD0xZjMyMzAwMCBMPTEwMDAgRE1BX0ZST01fREVWSUNFCkphbiAxOSAxMzo1NDo1 OCBndXJ1dmRyIGtlcm5lbDogU0FBNzE2eCBGRiAwMDAwOjAwOjAxLjA6IHNpbmdsZSBpZHggNDAy IFA9MWYzMjUwMDAgRD0xZjMyNTAwMCBMPTEwMDAgRE1BX1RPX0RFVklDRQpKYW4gMTkgMTM6NTQ6 NTggZ3VydXZkciBrZXJuZWw6IFNBQTcxNnggRkYgMDAwMDowMDowMS4wOiBzY2F0aGVyLWdhdGhl ciBpZHggNDAzIFA9MWYzMjYwMDAgRD0xZjMyNjAwMCBMPTEwMDAgRE1BX0ZST01fREVWSUNFCkph biAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogU0FBNzE2eCBGRiAwMDAwOjAwOjAxLjA6IHNj YXRoZXItZ2F0aGVyIGlkeCA0MDMgUD0xZjMyNzAwMCBEPTFmMzI3MDAwIEw9MTAwMCBETUFfRlJP TV9ERVZJQ0UKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBTQUE3MTZ4IEZGIDAwMDA6 MDA6MDEuMDogc2NhdGhlci1nYXRoZXIgaWR4IDQwNCBQPTFmMzI4MDAwIEQ9MWYzMjgwMDAgTD0x MDAwIERNQV9GUk9NX0RFVklDRQpKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IFNBQTcx NnggRkYgMDAwMDowMDowMS4wOiBzY2F0aGVyLWdhdGhlciBpZHggNDA0IFA9MWYzMjkwMDAgRD0x ZjMyOTAwMCBMPTEwMDAgRE1BX0ZST01fREVWSUNFCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtl cm5lbDogU0FBNzE2eCBGRiAwMDAwOjAwOjAxLjA6IHNjYXRoZXItZ2F0aGVyIGlkeCA0MDUgUD0x ZjMyYTAwMCBEPTFmMzJhMDAwIEw9MTAwMCBETUFfRlJPTV9ERVZJQ0UKSmFuIDE5IDEzOjU0OjU4 IGd1cnV2ZHIga2VybmVsOiBTQUE3MTZ4IEZGIDAwMDA6MDA6MDEuMDogc2NhdGhlci1nYXRoZXIg aWR4IDQwNSBQPTFmMzJiMDAwIEQ9MWYzMmIwMDAgTD0xMDAwIERNQV9GUk9NX0RFVklDRQpKYW4g MTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IFNBQTcxNnggRkYgMDAwMDowMDowMS4wOiBzY2F0 aGVyLWdhdGhlciBpZHggNDA2IFA9MWYzMmMwMDAgRD0xZjMyYzAwMCBMPTEwMDAgRE1BX0ZST01f REVWSUNFCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogU0FBNzE2eCBGRiAwMDAwOjAw 
OjAxLjA6IHNjYXRoZXItZ2F0aGVyIGlkeCA0MDYgUD0xZjMyZDAwMCBEPTFmMzJkMDAwIEw9MTAw MCBETUFfRlJPTV9ERVZJQ0UKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBTQUE3MTZ4 IEZGIDAwMDA6MDA6MDEuMDogc2NhdGhlci1nYXRoZXIgaWR4IDQwNyBQPTFmMzJlMDAwIEQ9MWYz MmUwMDAgTD0xMDAwIERNQV9GUk9NX0RFVklDRQpKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJu ZWw6IFNBQTcxNnggRkYgMDAwMDowMDowMS4wOiBzY2F0aGVyLWdhdGhlciBpZHggNDA3IFA9MWYz MmYwMDAgRD0xZjMyZjAwMCBMPTEwMDAgRE1BX0ZST01fREVWSUNFCkphbiAxOSAxMzo1NDo1OCBn dXJ1dmRyIGtlcm5lbDogU0FBNzE2eCBGRiAwMDAwOjAwOjAxLjA6IHNjYXRoZXItZ2F0aGVyIGlk eCA0MDggUD0xZjMzMDAwMCBEPTFmMzMwMDAwIEw9MTAwMCBETUFfRlJPTV9ERVZJQ0UKSmFuIDE5 IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBTQUE3MTZ4IEZGIDAwMDA6MDA6MDEuMDogc2NhdGhl ci1nYXRoZXIgaWR4IDQwOCBQPTFmMzMxMDAwIEQ9MWYzMzEwMDAgTD0xMDAwIERNQV9GUk9NX0RF VklDRQpKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IFNBQTcxNnggRkYgMDAwMDowMDow MS4wOiBzY2F0aGVyLWdhdGhlciBpZHggNDA5IFA9MWYzMzIwMDAgRD0xZjMzMjAwMCBMPTEwMDAg RE1BX0ZST01fREVWSUNFCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogU0FBNzE2eCBG RiAwMDAwOjAwOjAxLjA6IHNjYXRoZXItZ2F0aGVyIGlkeCA0MDkgUD0xZjMzMzAwMCBEPTFmMzMz MDAwIEw9MTAwMCBETUFfRlJPTV9ERVZJQ0UKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVs OiBTQUE3MTZ4IEZGIDAwMDA6MDA6MDEuMDogc2NhdGhlci1nYXRoZXIgaWR4IDQxMCBQPTFmMzM0 MDAwIEQ9MWYzMzQwMDAgTD0xMDAwIERNQV9GUk9NX0RFVklDRQpKYW4gMTkgMTM6NTQ6NTggZ3Vy dXZkciBrZXJuZWw6IFNBQTcxNnggRkYgMDAwMDowMDowMS4wOiBzY2F0aGVyLWdhdGhlciBpZHgg NDEwIFA9MWYzMzUwMDAgRD0xZjMzNTAwMCBMPTEwMDAgRE1BX0ZST01fREVWSUNFCkphbiAxOSAx Mzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogU0FBNzE2eCBGRiAwMDAwOjAwOjAxLjA6IHNpbmdsZSBp ZHggNDExIFA9MWYzMzcwMDAgRD0xZjMzNzAwMCBMPTEwMDAgRE1BX1RPX0RFVklDRQpKYW4gMTkg MTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IFNBQTcxNnggRkYgMDAwMDowMDowMS4wOiBzY2F0aGVy LWdhdGhlciBpZHggNDEyIFA9MWYzMzgwMDAgRD0xZjMzODAwMCBMPTEwMDAgRE1BX0ZST01fREVW SUNFCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogU0FBNzE2eCBGRiAwMDAwOjAwOjAx LjA6IHNjYXRoZXItZ2F0aGVyIGlkeCA0MTIgUD0xZjMzOTAwMCBEPTFmMzM5MDAwIEw9MTAwMCBE TUFfRlJPTV9ERVZJQ0UKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBTQUE3MTZ4IEZG IDAwMDA6MDA6MDEuMDogc2NhdGhlci1nYXRoZXIgaWR4IDQxMyBQPTFmMzNhMDAwIEQ9MWYzM2Ew MDAgTD0xMDAwIERNQV9GUk9NX0RFVklDRQpKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6 IFNBQTcxNnggRkYgMDAwMDowMDowMS4wOiBzY2F0aGVyLWdhdGhlciBpZHggNDEzIFA9MWYzM2Iw MDAgRD0xZjMzYjAwMCBMPTEwMDAgRE1BX0ZST01fREVWSUNFCkphbiAxOSAxMzo1NDo1OCBndXJ1 dmRyIGtlcm5lbDogU0FBNzE2eCBGRiAwMDAwOjAwOjAxLjA6IHNjYXRoZXItZ2F0aGVyIGlkeCA0 MTQgUD0xZjMzYzAwMCBEPTFmMzNjMDAwIEw9MTAwMCBETUFfRlJPTV9ERVZJQ0UKSmFuIDE5IDEz OjU0OjU4IGd1cnV2ZHIga2VybmVsOiBTQUE3MTZ4IEZGIDAwMDA6MDA6MDEuMDogc2NhdGhlci1n YXRoZXIgaWR4IDQxNCBQPTFmMzNkMDAwIEQ9MWYzM2QwMDAgTD0xMDAwIERNQV9GUk9NX0RFVklD RQpKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IFNBQTcxNnggRkYgMDAwMDowMDowMS4w OiBzY2F0aGVyLWdhdGhlciBpZHggNDE1IFA9MWYzM2UwMDAgRD0xZjMzZTAwMCBMPTEwMDAgRE1B X0ZST01fREVWSUNFCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogU0FBNzE2eCBGRiAw MDAwOjAwOjAxLjA6IHNjYXRoZXItZ2F0aGVyIGlkeCA0MTUgUD0xZjMzZjAwMCBEPTFmMzNmMDAw IEw9MTAwMCBETUFfRlJPTV9ERVZJQ0UKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBT QUE3MTZ4IEZGIDAwMDA6MDA6MDEuMDogc2NhdGhlci1nYXRoZXIgaWR4IDQxNiBQPTFmMzQwMDAw IEQ9MWYzNDAwMDAgTD0xMDAwIERNQV9GUk9NX0RFVklDRQpKYW4gMTkgMTM6NTQ6NTggZ3VydXZk ciBrZXJuZWw6IFNBQTcxNnggRkYgMDAwMDowMDowMS4wOiBzY2F0aGVyLWdhdGhlciBpZHggNDE2 IFA9MWYzNDEwMDAgRD0xZjM0MTAwMCBMPTEwMDAgRE1BX0ZST01fREVWSUNFCkphbiAxOSAxMzo1 NDo1OCBndXJ1dmRyIGtlcm5lbDogU0FBNzE2eCBGRiAwMDAwOjAwOjAxLjA6IHNjYXRoZXItZ2F0 aGVyIGlkeCA0MTcgUD0xZjM0MjAwMCBEPTFmMzQyMDAwIEw9MTAwMCBETUFfRlJPTV9ERVZJQ0UK SmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBTQUE3MTZ4IEZGIDAwMDA6MDA6MDEuMDog 
c2NhdGhlci1nYXRoZXIgaWR4IDQxNyBQPTFmMzQzMDAwIEQ9MWYzNDMwMDAgTD0xMDAwIERNQV9G Uk9NX0RFVklDRQpKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IFNBQTcxNnggRkYgMDAw MDowMDowMS4wOiBzY2F0aGVyLWdhdGhlciBpZHggNDE4IFA9MWYzNDQwMDAgRD0xZjM0NDAwMCBM PTEwMDAgRE1BX0ZST01fREVWSUNFCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogU0FB NzE2eCBGRiAwMDAwOjAwOjAxLjA6IHNjYXRoZXItZ2F0aGVyIGlkeCA0MTggUD0xZjM0NTAwMCBE PTFmMzQ1MDAwIEw9MTAwMCBETUFfRlJPTV9ERVZJQ0UKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIg a2VybmVsOiBTQUE3MTZ4IEZGIDAwMDA6MDA6MDEuMDogc2NhdGhlci1nYXRoZXIgaWR4IDQxOSBQ PTFmMzQ2MDAwIEQ9MWYzNDYwMDAgTD0xMDAwIERNQV9GUk9NX0RFVklDRQpKYW4gMTkgMTM6NTQ6 NTggZ3VydXZkciBrZXJuZWw6IFNBQTcxNnggRkYgMDAwMDowMDowMS4wOiBzY2F0aGVyLWdhdGhl ciBpZHggNDE5IFA9MWYzNDcwMDAgRD0xZjM0NzAwMCBMPTEwMDAgRE1BX0ZST01fREVWSUNFCkph biAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogU0FBNzE2eCBGRiAwMDAwOjAwOjAxLjA6IHNp bmdsZSBpZHggNDIwIFA9MWYzNDkwMDAgRD0xZjM0OTAwMCBMPTEwMDAgRE1BX1RPX0RFVklDRQpK YW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IFNBQTcxNnggRkYgMDAwMDowMDowMS4wOiBz Y2F0aGVyLWdhdGhlciBpZHggNDIxIFA9MWYzNGEwMDAgRD0xZjM0YTAwMCBMPTEwMDAgRE1BX0ZS T01fREVWSUNFCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogU0FBNzE2eCBGRiAwMDAw OjAwOjAxLjA6IHNjYXRoZXItZ2F0aGVyIGlkeCA0MjEgUD0xZjM0YjAwMCBEPTFmMzRiMDAwIEw9 MTAwMCBETUFfRlJPTV9ERVZJQ0UKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBTQUE3 MTZ4IEZGIDAwMDA6MDA6MDEuMDogc2NhdGhlci1nYXRoZXIgaWR4IDQyMiBQPTFmMzRjMDAwIEQ9 MWYzNGMwMDAgTD0xMDAwIERNQV9GUk9NX0RFVklDRQpKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBr ZXJuZWw6IFNBQTcxNnggRkYgMDAwMDowMDowMS4wOiBzY2F0aGVyLWdhdGhlciBpZHggNDIyIFA9 MWYzNGQwMDAgRD0xZjM0ZDAwMCBMPTEwMDAgRE1BX0ZST01fREVWSUNFCkphbiAxOSAxMzo1NDo1 OCBndXJ1dmRyIGtlcm5lbDogU0FBNzE2eCBGRiAwMDAwOjAwOjAxLjA6IHNjYXRoZXItZ2F0aGVy IGlkeCA0MjMgUD0xZjM0ZTAwMCBEPTFmMzRlMDAwIEw9MTAwMCBETUFfRlJPTV9ERVZJQ0UKSmFu IDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBTQUE3MTZ4IEZGIDAwMDA6MDA6MDEuMDogc2Nh dGhlci1nYXRoZXIgaWR4IDQyMyBQPTFmMzRmMDAwIEQ9MWYzNGYwMDAgTD0xMDAwIERNQV9GUk9N X0RFVklDRQpKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IFNBQTcxNnggRkYgMDAwMDow MDowMS4wOiBzY2F0aGVyLWdhdGhlciBpZHggNDI0IFA9MWYzNTAwMDAgRD0xZjM1MDAwMCBMPTEw MDAgRE1BX0ZST01fREVWSUNFCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogU0FBNzE2 eCBGRiAwMDAwOjAwOjAxLjA6IHNjYXRoZXItZ2F0aGVyIGlkeCA0MjQgUD0xZjM1MTAwMCBEPTFm MzUxMDAwIEw9MTAwMCBETUFfRlJPTV9ERVZJQ0UKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2Vy bmVsOiBtdjY0M3h4X2V0aF9wb3J0IG12NjQzeHhfZXRoX3BvcnQuMTogc2luZ2xlIGlkeCA0MjQg UD0xY2I1MDAyMCBEPTFjYjUwMDIwIEw9NjAwIERNQV9GUk9NX0RFVklDRQpKYW4gMTkgMTM6NTQ6 NTggZ3VydXZkciBrZXJuZWw6IG12NjQzeHhfZXRoX3BvcnQgbXY2NDN4eF9ldGhfcG9ydC4xOiBz aW5nbGUgaWR4IDQyNCBQPTFjYjUwNzAwIEQ9MWNiNTA3MDAgTD02MDAgRE1BX0ZST01fREVWSUNF CkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogbXY2NDN4eF9ldGhfcG9ydCBtdjY0M3h4 X2V0aF9wb3J0LjE6IHNpbmdsZSBpZHggNDI0IFA9MWNiNTBkZTAgRD0xY2I1MGRlMCBMPTYwMCBE TUFfRlJPTV9ERVZJQ0UKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBtdjY0M3h4X2V0 aF9wb3J0IG12NjQzeHhfZXRoX3BvcnQuMTogc2luZ2xlIGlkeCA0MjQgUD0xY2I1MTRjMCBEPTFj YjUxNGMwIEw9NjAwIERNQV9GUk9NX0RFVklDRQpKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJu ZWw6IG12NjQzeHhfZXRoX3BvcnQgbXY2NDN4eF9ldGhfcG9ydC4xOiBzaW5nbGUgaWR4IDQyNCBQ PTFjYjUxYmEwIEQ9MWNiNTFiYTAgTD02MDAgRE1BX0ZST01fREVWSUNFCkphbiAxOSAxMzo1NDo1 OCBndXJ1dmRyIGtlcm5lbDogU0FBNzE2eCBGRiAwMDAwOjAwOjAxLjA6IHNjYXRoZXItZ2F0aGVy IGlkeCA0MjUgUD0xZjM1MjAwMCBEPTFmMzUyMDAwIEw9MTAwMCBETUFfRlJPTV9ERVZJQ0UKSmFu IDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBTQUE3MTZ4IEZGIDAwMDA6MDA6MDEuMDogc2Nh dGhlci1nYXRoZXIgaWR4IDQyNSBQPTFmMzUzMDAwIEQ9MWYzNTMwMDAgTD0xMDAwIERNQV9GUk9N X0RFVklDRQpKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IG12NjQzeHhfZXRoX3BvcnQg 
bXY2NDN4eF9ldGhfcG9ydC4xOiBzaW5nbGUgaWR4IDQyNSBQPTFjYjUyMjgwIEQ9MWNiNTIyODAg TD02MDAgRE1BX0ZST01fREVWSUNFCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogbXY2 NDN4eF9ldGhfcG9ydCBtdjY0M3h4X2V0aF9wb3J0LjE6IHNpbmdsZSBpZHggNDI1IFA9MWNiNTI5 NjAgRD0xY2I1Mjk2MCBMPTYwMCBETUFfRlJPTV9ERVZJQ0UKSmFuIDE5IDEzOjU0OjU4IGd1cnV2 ZHIga2VybmVsOiBtdjY0M3h4X2V0aF9wb3J0IG12NjQzeHhfZXRoX3BvcnQuMTogc2luZ2xlIGlk eCA0MjUgUD0xY2I1MzA0MCBEPTFjYjUzMDQwIEw9NjAwIERNQV9GUk9NX0RFVklDRQpKYW4gMTkg MTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IG12NjQzeHhfZXRoX3BvcnQgbXY2NDN4eF9ldGhfcG9y dC4xOiBzaW5nbGUgaWR4IDQyNSBQPTFjYjUzNzIwIEQ9MWNiNTM3MjAgTD02MDAgRE1BX0ZST01f REVWSUNFCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogbXY2NDN4eF9ldGhfcG9ydCBt djY0M3h4X2V0aF9wb3J0LjE6IHNpbmdsZSBpZHggNDI1IFA9MWNiNTNlMDAgRD0xY2I1M2UwMCBM PTYwMCBETUFfRlJPTV9ERVZJQ0UKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBTQUE3 MTZ4IEZGIDAwMDA6MDA6MDEuMDogc2NhdGhlci1nYXRoZXIgaWR4IDQyNiBQPTFmMzU0MDAwIEQ9 MWYzNTQwMDAgTD0xMDAwIERNQV9GUk9NX0RFVklDRQpKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBr ZXJuZWw6IFNBQTcxNnggRkYgMDAwMDowMDowMS4wOiBzY2F0aGVyLWdhdGhlciBpZHggNDI2IFA9 MWYzNTUwMDAgRD0xZjM1NTAwMCBMPTEwMDAgRE1BX0ZST01fREVWSUNFCkphbiAxOSAxMzo1NDo1 OCBndXJ1dmRyIGtlcm5lbDogbXY2NDN4eF9ldGhfcG9ydCBtdjY0M3h4X2V0aF9wb3J0LjE6IHNp bmdsZSBpZHggNDI2IFA9MWNiNTQ0ZTAgRD0xY2I1NDRlMCBMPTYwMCBETUFfRlJPTV9ERVZJQ0UK SmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBtdjY0M3h4X2V0aF9wb3J0IG12NjQzeHhf ZXRoX3BvcnQuMTogc2luZ2xlIGlkeCA0MjYgUD0xY2I1NGJjMCBEPTFjYjU0YmMwIEw9NjAwIERN QV9GUk9NX0RFVklDRQpKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IG12NjQzeHhfZXRo X3BvcnQgbXY2NDN4eF9ldGhfcG9ydC4xOiBzaW5nbGUgaWR4IDQyNiBQPTFjYjU1MmEwIEQ9MWNi NTUyYTAgTD02MDAgRE1BX0ZST01fREVWSUNFCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5l bDogbXY2NDN4eF9ldGhfcG9ydCBtdjY0M3h4X2V0aF9wb3J0LjE6IHNpbmdsZSBpZHggNDI2IFA9 MWNiNTU5ODAgRD0xY2I1NTk4MCBMPTYwMCBETUFfRlJPTV9ERVZJQ0UKSmFuIDE5IDEzOjU0OjU4 IGd1cnV2ZHIga2VybmVsOiBTQUE3MTZ4IEZGIDAwMDA6MDA6MDEuMDogc2NhdGhlci1nYXRoZXIg aWR4IDQyNyBQPTFmMzU2MDAwIEQ9MWYzNTYwMDAgTD0xMDAwIERNQV9GUk9NX0RFVklDRQpKYW4g MTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IFNBQTcxNnggRkYgMDAwMDowMDowMS4wOiBzY2F0 aGVyLWdhdGhlciBpZHggNDI3IFA9MWYzNTcwMDAgRD0xZjM1NzAwMCBMPTEwMDAgRE1BX0ZST01f REVWSUNFCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogbXY2NDN4eF9ldGhfcG9ydCBt djY0M3h4X2V0aF9wb3J0LjE6IHNpbmdsZSBpZHggNDI3IFA9MWNiNTYwNjAgRD0xY2I1NjA2MCBM PTYwMCBETUFfRlJPTV9ERVZJQ0UKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBtdjY0 M3h4X2V0aF9wb3J0IG12NjQzeHhfZXRoX3BvcnQuMTogc2luZ2xlIGlkeCA0MjcgUD0xY2I1Njc0 MCBEPTFjYjU2NzQwIEw9NjAwIERNQV9GUk9NX0RFVklDRQpKYW4gMTkgMTM6NTQ6NTggZ3VydXZk ciBrZXJuZWw6IG12NjQzeHhfZXRoX3BvcnQgbXY2NDN4eF9ldGhfcG9ydC4xOiBzaW5nbGUgaWR4 IDQyNyBQPTFjYjU2ZTIwIEQ9MWNiNTZlMjAgTD02MDAgRE1BX0ZST01fREVWSUNFCkphbiAxOSAx Mzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogbXY2NDN4eF9ldGhfcG9ydCBtdjY0M3h4X2V0aF9wb3J0 LjE6IHNpbmdsZSBpZHggNDI3IFA9MWNiNTc1MDAgRD0xY2I1NzUwMCBMPTYwMCBETUFfRlJPTV9E RVZJQ0UKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBTQUE3MTZ4IEZGIDAwMDA6MDA6 MDEuMDogc2NhdGhlci1nYXRoZXIgaWR4IDQyOCBQPTFmMzU4MDAwIEQ9MWYzNTgwMDAgTD0xMDAw IERNQV9GUk9NX0RFVklDRQpKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IFNBQTcxNngg RkYgMDAwMDowMDowMS4wOiBzY2F0aGVyLWdhdGhlciBpZHggNDI4IFA9MWYzNTkwMDAgRD0xZjM1 OTAwMCBMPTEwMDAgRE1BX0ZST01fREVWSUNFCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5l bDogU0FBNzE2eCBGRiAwMDAwOjAwOjAxLjA6IHNpbmdsZSBpZHggNDI5IFA9MWYzNWIwMDAgRD0x ZjM1YjAwMCBMPTEwMDAgRE1BX1RPX0RFVklDRQpKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJu ZWw6IFNBQTcxNnggRkYgMDAwMDowMDowMS4wOiBzY2F0aGVyLWdhdGhlciBpZHggNDMwIFA9MWYz NWMwMDAgRD0xZjM1YzAwMCBMPTEwMDAgRE1BX0ZST01fREVWSUNFCkphbiAxOSAxMzo1NDo1OCBn 
dXJ1dmRyIGtlcm5lbDogU0FBNzE2eCBGRiAwMDAwOjAwOjAxLjA6IHNjYXRoZXItZ2F0aGVyIGlk eCA0MzAgUD0xZjM1ZDAwMCBEPTFmMzVkMDAwIEw9MTAwMCBETUFfRlJPTV9ERVZJQ0UKSmFuIDE5 IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBTQUE3MTZ4IEZGIDAwMDA6MDA6MDEuMDogc2NhdGhl ci1nYXRoZXIgaWR4IDQzMSBQPTFmMzVlMDAwIEQ9MWYzNWUwMDAgTD0xMDAwIERNQV9GUk9NX0RF VklDRQpKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IFNBQTcxNnggRkYgMDAwMDowMDow MS4wOiBzY2F0aGVyLWdhdGhlciBpZHggNDMxIFA9MWYzNWYwMDAgRD0xZjM1ZjAwMCBMPTEwMDAg RE1BX0ZST01fREVWSUNFCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogU0FBNzE2eCBG RiAwMDAwOjAwOjAxLjA6IHNjYXRoZXItZ2F0aGVyIGlkeCA0MzIgUD0xZjM2MDAwMCBEPTFmMzYw MDAwIEw9MTAwMCBETUFfRlJPTV9ERVZJQ0UKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVs OiBTQUE3MTZ4IEZGIDAwMDA6MDA6MDEuMDogc2NhdGhlci1nYXRoZXIgaWR4IDQzMiBQPTFmMzYx MDAwIEQ9MWYzNjEwMDAgTD0xMDAwIERNQV9GUk9NX0RFVklDRQpKYW4gMTkgMTM6NTQ6NTggZ3Vy dXZkciBrZXJuZWw6IFNBQTcxNnggRkYgMDAwMDowMDowMS4wOiBzY2F0aGVyLWdhdGhlciBpZHgg NDMzIFA9MWYzNjIwMDAgRD0xZjM2MjAwMCBMPTEwMDAgRE1BX0ZST01fREVWSUNFCkphbiAxOSAx Mzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogU0FBNzE2eCBGRiAwMDAwOjAwOjAxLjA6IHNjYXRoZXIt Z2F0aGVyIGlkeCA0MzMgUD0xZjM2MzAwMCBEPTFmMzYzMDAwIEw9MTAwMCBETUFfRlJPTV9ERVZJ Q0UKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBTQUE3MTZ4IEZGIDAwMDA6MDA6MDEu MDogc2NhdGhlci1nYXRoZXIgaWR4IDQzNCBQPTFmMzY0MDAwIEQ9MWYzNjQwMDAgTD0xMDAwIERN QV9GUk9NX0RFVklDRQpKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IFNBQTcxNnggRkYg MDAwMDowMDowMS4wOiBzY2F0aGVyLWdhdGhlciBpZHggNDM0IFA9MWYzNjUwMDAgRD0xZjM2NTAw MCBMPTEwMDAgRE1BX0ZST01fREVWSUNFCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDog U0FBNzE2eCBGRiAwMDAwOjAwOjAxLjA6IHNjYXRoZXItZ2F0aGVyIGlkeCA0MzUgUD0xZjM2NjAw MCBEPTFmMzY2MDAwIEw9MTAwMCBETUFfRlJPTV9ERVZJQ0UKSmFuIDE5IDEzOjU0OjU4IGd1cnV2 ZHIga2VybmVsOiBTQUE3MTZ4IEZGIDAwMDA6MDA6MDEuMDogc2NhdGhlci1nYXRoZXIgaWR4IDQz NSBQPTFmMzY3MDAwIEQ9MWYzNjcwMDAgTD0xMDAwIERNQV9GUk9NX0RFVklDRQpKYW4gMTkgMTM6 NTQ6NTggZ3VydXZkciBrZXJuZWw6IFNBQTcxNnggRkYgMDAwMDowMDowMS4wOiBzY2F0aGVyLWdh dGhlciBpZHggNDM2IFA9MWYzNjgwMDAgRD0xZjM2ODAwMCBMPTEwMDAgRE1BX0ZST01fREVWSUNF CkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogU0FBNzE2eCBGRiAwMDAwOjAwOjAxLjA6 IHNjYXRoZXItZ2F0aGVyIGlkeCA0MzYgUD0xZjM2OTAwMCBEPTFmMzY5MDAwIEw9MTAwMCBETUFf RlJPTV9ERVZJQ0UKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBTQUE3MTZ4IEZGIDAw MDA6MDA6MDEuMDogc2NhdGhlci1nYXRoZXIgaWR4IDQzNyBQPTFmMzZhMDAwIEQ9MWYzNmEwMDAg TD0xMDAwIERNQV9GUk9NX0RFVklDRQpKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IFNB QTcxNnggRkYgMDAwMDowMDowMS4wOiBzY2F0aGVyLWdhdGhlciBpZHggNDM3IFA9MWYzNmIwMDAg RD0xZjM2YjAwMCBMPTEwMDAgRE1BX0ZST01fREVWSUNFCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRy IGtlcm5lbDogU0FBNzE2eCBGRiAwMDAwOjAwOjAxLjA6IHNpbmdsZSBpZHggNDM4IFA9MWYzNmQw MDAgRD0xZjM2ZDAwMCBMPTEwMDAgRE1BX1RPX0RFVklDRQpKYW4gMTkgMTM6NTQ6NTggZ3VydXZk ciBrZXJuZWw6IFNBQTcxNnggRkYgMDAwMDowMDowMS4wOiBzY2F0aGVyLWdhdGhlciBpZHggNDM5 IFA9MWYzNmUwMDAgRD0xZjM2ZTAwMCBMPTEwMDAgRE1BX0ZST01fREVWSUNFCkphbiAxOSAxMzo1 NDo1OCBndXJ1dmRyIGtlcm5lbDogU0FBNzE2eCBGRiAwMDAwOjAwOjAxLjA6IHNjYXRoZXItZ2F0 aGVyIGlkeCA0MzkgUD0xZjM2ZjAwMCBEPTFmMzZmMDAwIEw9MTAwMCBETUFfRlJPTV9ERVZJQ0UK SmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBzYXRhX212IHNhdGFfbXYuMDogY29oZXJl bnQgaWR4IDQ0MCBQPTIwOWU3MDAwIEQ9MWZiNzEwMDAgTD0xMDAwIERNQV9CSURJUkVDVElPTkFM CkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogU0FBNzE2eCBGRiAwMDAwOjAwOjAxLjA6 IHNjYXRoZXItZ2F0aGVyIGlkeCA0NDAgUD0xZjM3MDAwMCBEPTFmMzcwMDAwIEw9MTAwMCBETUFf RlJPTV9ERVZJQ0UKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBTQUE3MTZ4IEZGIDAw MDA6MDA6MDEuMDogc2NhdGhlci1nYXRoZXIgaWR4IDQ0MCBQPTFmMzcxMDAwIEQ9MWYzNzEwMDAg TD0xMDAwIERNQV9GUk9NX0RFVklDRQpKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IHNh 
dGFfbXYgc2F0YV9tdi4wOiBjb2hlcmVudCBpZHggNDQxIFA9MjA5ZTkwMDAgRD0xZmI3MzAwMCBM PTEwMDAgRE1BX0JJRElSRUNUSU9OQUwKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBT QUE3MTZ4IEZGIDAwMDA6MDA6MDEuMDogc2NhdGhlci1nYXRoZXIgaWR4IDQ0MSBQPTFmMzcyMDAw IEQ9MWYzNzIwMDAgTD0xMDAwIERNQV9GUk9NX0RFVklDRQpKYW4gMTkgMTM6NTQ6NTggZ3VydXZk ciBrZXJuZWw6IFNBQTcxNnggRkYgMDAwMDowMDowMS4wOiBzY2F0aGVyLWdhdGhlciBpZHggNDQx IFA9MWYzNzMwMDAgRD0xZjM3MzAwMCBMPTEwMDAgRE1BX0ZST01fREVWSUNFCkphbiAxOSAxMzo1 NDo1OCBndXJ1dmRyIGtlcm5lbDogc2F0YV9tdiBzYXRhX212LjA6IGNvaGVyZW50IGlkeCA0NDIg UD0yMDllYjAwMCBEPTFmYjc0MDAwIEw9MTAwMCBETUFfQklESVJFQ1RJT05BTApKYW4gMTkgMTM6 NTQ6NTggZ3VydXZkciBrZXJuZWw6IHNhdGFfbXYgc2F0YV9tdi4wOiBjb2hlcmVudCBpZHggNDQy IFA9MjA5ZWQwMDAgRD0xZmI3NTAwMCBMPTEwMDAgRE1BX0JJRElSRUNUSU9OQUwKSmFuIDE5IDEz OjU0OjU4IGd1cnV2ZHIga2VybmVsOiBTQUE3MTZ4IEZGIDAwMDA6MDA6MDEuMDogc2NhdGhlci1n YXRoZXIgaWR4IDQ0MiBQPTFmMzc0MDAwIEQ9MWYzNzQwMDAgTD0xMDAwIERNQV9GUk9NX0RFVklD RQpKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IFNBQTcxNnggRkYgMDAwMDowMDowMS4w OiBzY2F0aGVyLWdhdGhlciBpZHggNDQyIFA9MWYzNzUwMDAgRD0xZjM3NTAwMCBMPTEwMDAgRE1B X0ZST01fREVWSUNFCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogc2F0YV9tdiBzYXRh X212LjA6IGNvaGVyZW50IGlkeCA0NDMgUD0yMDllZjAwMCBEPTFmYjc2MDAwIEw9MTAwMCBETUFf QklESVJFQ1RJT05BTApKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IHNhdGFfbXYgc2F0 YV9tdi4wOiBjb2hlcmVudCBpZHggNDQzIFA9MjA5ZjEwMDAgRD0xZmI3NzAwMCBMPTEwMDAgRE1B X0JJRElSRUNUSU9OQUwKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBTQUE3MTZ4IEZG IDAwMDA6MDA6MDEuMDogc2NhdGhlci1nYXRoZXIgaWR4IDQ0MyBQPTFmMzc2MDAwIEQ9MWYzNzYw MDAgTD0xMDAwIERNQV9GUk9NX0RFVklDRQpKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6 IFNBQTcxNnggRkYgMDAwMDowMDowMS4wOiBzY2F0aGVyLWdhdGhlciBpZHggNDQzIFA9MWYzNzcw MDAgRD0xZjM3NzAwMCBMPTEwMDAgRE1BX0ZST01fREVWSUNFCkphbiAxOSAxMzo1NDo1OCBndXJ1 dmRyIGtlcm5lbDogc2F0YV9tdiBzYXRhX212LjA6IGNvaGVyZW50IGlkeCA0NDQgUD0yMDlmMzAw MCBEPTFmYjc4MDAwIEw9MTAwMCBETUFfQklESVJFQ1RJT05BTApKYW4gMTkgMTM6NTQ6NTggZ3Vy dXZkciBrZXJuZWw6IHNhdGFfbXYgc2F0YV9tdi4wOiBjb2hlcmVudCBpZHggNDQ0IFA9MjA5ZjUw MDAgRD0xZmI3OTAwMCBMPTEwMDAgRE1BX0JJRElSRUNUSU9OQUwKSmFuIDE5IDEzOjU0OjU4IGd1 cnV2ZHIga2VybmVsOiBTQUE3MTZ4IEZGIDAwMDA6MDA6MDEuMDogc2NhdGhlci1nYXRoZXIgaWR4 IDQ0NCBQPTFmMzc4MDAwIEQ9MWYzNzgwMDAgTD0xMDAwIERNQV9GUk9NX0RFVklDRQpKYW4gMTkg MTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IFNBQTcxNnggRkYgMDAwMDowMDowMS4wOiBzY2F0aGVy LWdhdGhlciBpZHggNDQ0IFA9MWYzNzkwMDAgRD0xZjM3OTAwMCBMPTEwMDAgRE1BX0ZST01fREVW SUNFCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogc2F0YV9tdiBzYXRhX212LjA6IGNv aGVyZW50IGlkeCA0NDUgUD0yMDlmNzAwMCBEPTFmYjdhMDAwIEw9MTAwMCBETUFfQklESVJFQ1RJ T05BTApKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IHNhdGFfbXYgc2F0YV9tdi4wOiBj b2hlcmVudCBpZHggNDQ1IFA9MjA5ZjkwMDAgRD0xZmI3YjAwMCBMPTEwMDAgRE1BX0JJRElSRUNU SU9OQUwKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBTQUE3MTZ4IEZGIDAwMDA6MDA6 MDEuMDogc2NhdGhlci1nYXRoZXIgaWR4IDQ0NSBQPTFmMzdhMDAwIEQ9MWYzN2EwMDAgTD0xMDAw IERNQV9GUk9NX0RFVklDRQpKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IFNBQTcxNngg RkYgMDAwMDowMDowMS4wOiBzY2F0aGVyLWdhdGhlciBpZHggNDQ1IFA9MWYzN2IwMDAgRD0xZjM3 YjAwMCBMPTEwMDAgRE1BX0ZST01fREVWSUNFCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5l bDogc2F0YV9tdiBzYXRhX212LjA6IGNvaGVyZW50IGlkeCA0NDYgUD0yMDlmYjAwMCBEPTFmYjdj MDAwIEw9MTAwMCBETUFfQklESVJFQ1RJT05BTApKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJu ZWw6IHNhdGFfbXYgc2F0YV9tdi4wOiBjb2hlcmVudCBpZHggNDQ2IFA9MjA5ZmQwMDAgRD0xZmI3 ZDAwMCBMPTEwMDAgRE1BX0JJRElSRUNUSU9OQUwKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2Vy bmVsOiBTQUE3MTZ4IEZGIDAwMDA6MDA6MDEuMDogc2NhdGhlci1nYXRoZXIgaWR4IDQ0NiBQPTFm MzdjMDAwIEQ9MWYzN2MwMDAgTD0xMDAwIERNQV9GUk9NX0RFVklDRQpKYW4gMTkgMTM6NTQ6NTgg 
Z3VydXZkciBrZXJuZWw6IFNBQTcxNnggRkYgMDAwMDowMDowMS4wOiBzY2F0aGVyLWdhdGhlciBp ZHggNDQ2IFA9MWYzN2QwMDAgRD0xZjM3ZDAwMCBMPTEwMDAgRE1BX0ZST01fREVWSUNFCkphbiAx OSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogc2F0YV9tdiBzYXRhX212LjA6IGNvaGVyZW50IGlk eCA0NDcgUD0yMDlmZjAwMCBEPTFmYjdlMDAwIEw9MTAwMCBETUFfQklESVJFQ1RJT05BTApKYW4g MTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IHNhdGFfbXYgc2F0YV9tdi4wOiBjb2hlcmVudCBp ZHggNDQ3IFA9MjBhMDEwMDAgRD0xZmI3ZjAwMCBMPTEwMDAgRE1BX0JJRElSRUNUSU9OQUwKSmFu IDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBTQUE3MTZ4IEZGIDAwMDA6MDA6MDEuMDogc2lu Z2xlIGlkeCA0NDcgUD0xZjM3ZjAwMCBEPTFmMzdmMDAwIEw9MTAwMCBETUFfVE9fREVWSUNFCkph biAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogU0FBNzE2eCBGRiAwMDAwOjAwOjAxLjA6IHNj YXRoZXItZ2F0aGVyIGlkeCA0NDggUD0xZjM4MDAwMCBEPTFmMzgwMDAwIEw9MTAwMCBETUFfRlJP TV9ERVZJQ0UKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBTQUE3MTZ4IEZGIDAwMDA6 MDA6MDEuMDogc2NhdGhlci1nYXRoZXIgaWR4IDQ0OCBQPTFmMzgxMDAwIEQ9MWYzODEwMDAgTD0x MDAwIERNQV9GUk9NX0RFVklDRQpKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IFNBQTcx NnggRkYgMDAwMDowMDowMS4wOiBzY2F0aGVyLWdhdGhlciBpZHggNDQ5IFA9MWYzODIwMDAgRD0x ZjM4MjAwMCBMPTEwMDAgRE1BX0ZST01fREVWSUNFCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtl cm5lbDogU0FBNzE2eCBGRiAwMDAwOjAwOjAxLjA6IHNjYXRoZXItZ2F0aGVyIGlkeCA0NDkgUD0x ZjM4MzAwMCBEPTFmMzgzMDAwIEw9MTAwMCBETUFfRlJPTV9ERVZJQ0UKSmFuIDE5IDEzOjU0OjU4 IGd1cnV2ZHIga2VybmVsOiBTQUE3MTZ4IEZGIDAwMDA6MDA6MDEuMDogc2NhdGhlci1nYXRoZXIg aWR4IDQ1MCBQPTFmMzg0MDAwIEQ9MWYzODQwMDAgTD0xMDAwIERNQV9GUk9NX0RFVklDRQpKYW4g MTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IFNBQTcxNnggRkYgMDAwMDowMDowMS4wOiBzY2F0 aGVyLWdhdGhlciBpZHggNDUwIFA9MWYzODUwMDAgRD0xZjM4NTAwMCBMPTEwMDAgRE1BX0ZST01f REVWSUNFCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogU0FBNzE2eCBGRiAwMDAwOjAw OjAxLjA6IHNjYXRoZXItZ2F0aGVyIGlkeCA0NTEgUD0xZjM4NjAwMCBEPTFmMzg2MDAwIEw9MTAw MCBETUFfRlJPTV9ERVZJQ0UKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBTQUE3MTZ4 IEZGIDAwMDA6MDA6MDEuMDogc2NhdGhlci1nYXRoZXIgaWR4IDQ1MSBQPTFmMzg3MDAwIEQ9MWYz ODcwMDAgTD0xMDAwIERNQV9GUk9NX0RFVklDRQpKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJu ZWw6IFNBQTcxNnggRkYgMDAwMDowMDowMS4wOiBzY2F0aGVyLWdhdGhlciBpZHggNDUyIFA9MWYz ODgwMDAgRD0xZjM4ODAwMCBMPTEwMDAgRE1BX0ZST01fREVWSUNFCkphbiAxOSAxMzo1NDo1OCBn dXJ1dmRyIGtlcm5lbDogU0FBNzE2eCBGRiAwMDAwOjAwOjAxLjA6IHNjYXRoZXItZ2F0aGVyIGlk eCA0NTIgUD0xZjM4OTAwMCBEPTFmMzg5MDAwIEw9MTAwMCBETUFfRlJPTV9ERVZJQ0UKSmFuIDE5 IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBTQUE3MTZ4IEZGIDAwMDA6MDA6MDEuMDogc2NhdGhl ci1nYXRoZXIgaWR4IDQ1MyBQPTFmMzhhMDAwIEQ9MWYzOGEwMDAgTD0xMDAwIERNQV9GUk9NX0RF VklDRQpKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IFNBQTcxNnggRkYgMDAwMDowMDow MS4wOiBzY2F0aGVyLWdhdGhlciBpZHggNDUzIFA9MWYzOGIwMDAgRD0xZjM4YjAwMCBMPTEwMDAg RE1BX0ZST01fREVWSUNFCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogU0FBNzE2eCBG RiAwMDAwOjAwOjAxLjA6IHNjYXRoZXItZ2F0aGVyIGlkeCA0NTQgUD0xZjM4YzAwMCBEPTFmMzhj MDAwIEw9MTAwMCBETUFfRlJPTV9ERVZJQ0UKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVs OiBTQUE3MTZ4IEZGIDAwMDA6MDA6MDEuMDogc2NhdGhlci1nYXRoZXIgaWR4IDQ1NCBQPTFmMzhk MDAwIEQ9MWYzOGQwMDAgTD0xMDAwIERNQV9GUk9NX0RFVklDRQpKYW4gMTkgMTM6NTQ6NTggZ3Vy dXZkciBrZXJuZWw6IFNBQTcxNnggRkYgMDAwMDowMDowMS4wOiBzY2F0aGVyLWdhdGhlciBpZHgg NDU1IFA9MWYzOGUwMDAgRD0xZjM4ZTAwMCBMPTEwMDAgRE1BX0ZST01fREVWSUNFCkphbiAxOSAx Mzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogU0FBNzE2eCBGRiAwMDAwOjAwOjAxLjA6IHNjYXRoZXIt Z2F0aGVyIGlkeCA0NTUgUD0xZjM4ZjAwMCBEPTFmMzhmMDAwIEw9MTAwMCBETUFfRlJPTV9ERVZJ Q0UKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBTQUE3MTZ4IEZGIDAwMDA6MDA6MDEu MDogc2luZ2xlIGlkeCA0NTYgUD0xZjM5MTAwMCBEPTFmMzkxMDAwIEw9MTAwMCBETUFfVE9fREVW SUNFCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogU0FBNzE2eCBGRiAwMDAwOjAwOjAx 
LjA6IHNjYXRoZXItZ2F0aGVyIGlkeCA0NTcgUD0xZjM5MzAwMCBEPTFmMzkzMDAwIEw9MTAwMCBE TUFfRlJPTV9ERVZJQ0UKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBTQUE3MTZ4IEZG IDAwMDA6MDA6MDEuMDogc2NhdGhlci1nYXRoZXIgaWR4IDQ1OCBQPTFmMzk0MDAwIEQ9MWYzOTQw MDAgTD0xMDAwIERNQV9GUk9NX0RFVklDRQpKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6 IFNBQTcxNnggRkYgMDAwMDowMDowMS4wOiBzY2F0aGVyLWdhdGhlciBpZHggNDU4IFA9MWYzOTUw MDAgRD0xZjM5NTAwMCBMPTEwMDAgRE1BX0ZST01fREVWSUNFCkphbiAxOSAxMzo1NDo1OCBndXJ1 dmRyIGtlcm5lbDogU0FBNzE2eCBGRiAwMDAwOjAwOjAxLjA6IHNjYXRoZXItZ2F0aGVyIGlkeCA0 NTkgUD0xZjM5NjAwMCBEPTFmMzk2MDAwIEw9MTAwMCBETUFfRlJPTV9ERVZJQ0UKSmFuIDE5IDEz OjU0OjU4IGd1cnV2ZHIga2VybmVsOiBTQUE3MTZ4IEZGIDAwMDA6MDA6MDEuMDogc2NhdGhlci1n YXRoZXIgaWR4IDQ1OSBQPTFmMzk3MDAwIEQ9MWYzOTcwMDAgTD0xMDAwIERNQV9GUk9NX0RFVklD RQpKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IFNBQTcxNnggRkYgMDAwMDowMDowMS4w OiBzY2F0aGVyLWdhdGhlciBpZHggNDYwIFA9MWYzOTgwMDAgRD0xZjM5ODAwMCBMPTEwMDAgRE1B X0ZST01fREVWSUNFCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogU0FBNzE2eCBGRiAw MDAwOjAwOjAxLjA6IHNjYXRoZXItZ2F0aGVyIGlkeCA0NjAgUD0xZjM5OTAwMCBEPTFmMzk5MDAw IEw9MTAwMCBETUFfRlJPTV9ERVZJQ0UKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBT QUE3MTZ4IEZGIDAwMDA6MDA6MDEuMDogc2NhdGhlci1nYXRoZXIgaWR4IDQ2MSBQPTFmMzlhMDAw IEQ9MWYzOWEwMDAgTD0xMDAwIERNQV9GUk9NX0RFVklDRQpKYW4gMTkgMTM6NTQ6NTggZ3VydXZk ciBrZXJuZWw6IFNBQTcxNnggRkYgMDAwMDowMDowMS4wOiBzY2F0aGVyLWdhdGhlciBpZHggNDYx IFA9MWYzOWIwMDAgRD0xZjM5YjAwMCBMPTEwMDAgRE1BX0ZST01fREVWSUNFCkphbiAxOSAxMzo1 NDo1OCBndXJ1dmRyIGtlcm5lbDogU0FBNzE2eCBGRiAwMDAwOjAwOjAxLjA6IHNjYXRoZXItZ2F0 aGVyIGlkeCA0NjIgUD0xZjM5YzAwMCBEPTFmMzljMDAwIEw9MTAwMCBETUFfRlJPTV9ERVZJQ0UK SmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBTQUE3MTZ4IEZGIDAwMDA6MDA6MDEuMDog c2NhdGhlci1nYXRoZXIgaWR4IDQ2MiBQPTFmMzlkMDAwIEQ9MWYzOWQwMDAgTD0xMDAwIERNQV9G Uk9NX0RFVklDRQpKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IFNBQTcxNnggRkYgMDAw MDowMDowMS4wOiBzY2F0aGVyLWdhdGhlciBpZHggNDYzIFA9MWYzOWUwMDAgRD0xZjM5ZTAwMCBM PTEwMDAgRE1BX0ZST01fREVWSUNFCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogU0FB NzE2eCBGRiAwMDAwOjAwOjAxLjA6IHNjYXRoZXItZ2F0aGVyIGlkeCA0NjMgUD0xZjM5ZjAwMCBE PTFmMzlmMDAwIEw9MTAwMCBETUFfRlJPTV9ERVZJQ0UKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIg a2VybmVsOiBTQUE3MTZ4IEZGIDAwMDA6MDA6MDEuMDogc2NhdGhlci1nYXRoZXIgaWR4IDQ2NCBQ PTFmM2EwMDAwIEQ9MWYzYTAwMDAgTD0xMDAwIERNQV9GUk9NX0RFVklDRQpKYW4gMTkgMTM6NTQ6 NTggZ3VydXZkciBrZXJuZWw6IFNBQTcxNnggRkYgMDAwMDowMDowMS4wOiBzY2F0aGVyLWdhdGhl ciBpZHggNDY0IFA9MWYzYTEwMDAgRD0xZjNhMTAwMCBMPTEwMDAgRE1BX0ZST01fREVWSUNFCkph biAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogU0FBNzE2eCBGRiAwMDAwOjAwOjAxLjA6IHNj YXRoZXItZ2F0aGVyIGlkeCA0NjUgUD0xZjNhMjAwMCBEPTFmM2EyMDAwIEw9MTAwMCBETUFfRlJP TV9ERVZJQ0UKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBTQUE3MTZ4IEZGIDAwMDA6 MDA6MDEuMDogc2luZ2xlIGlkeCA2MTUgUD0xZWNjZjAwMCBEPTFlY2NmMDAwIEw9MTAwMCBETUFf VE9fREVWSUNFCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogU0FBNzE2eCBGRiAwMDAw OjAwOjAxLjA6IHNjYXRoZXItZ2F0aGVyIGlkeCA2MTYgUD0xZWNkMDAwMCBEPTFlY2QwMDAwIEw9 MTAwMCBETUFfRlJPTV9ERVZJQ0UKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBTQUE3 MTZ4IEZGIDAwMDA6MDA6MDEuMDogc2NhdGhlci1nYXRoZXIgaWR4IDYxNiBQPTFlY2QxMDAwIEQ9 MWVjZDEwMDAgTD0xMDAwIERNQV9GUk9NX0RFVklDRQpKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBr ZXJuZWw6IFNBQTcxNnggRkYgMDAwMDowMDowMS4wOiBzY2F0aGVyLWdhdGhlciBpZHggNjE3IFA9 MWVjZDIwMDAgRD0xZWNkMjAwMCBMPTEwMDAgRE1BX0ZST01fREVWSUNFCkphbiAxOSAxMzo1NDo1 OCBndXJ1dmRyIGtlcm5lbDogU0FBNzE2eCBGRiAwMDAwOjAwOjAxLjA6IHNjYXRoZXItZ2F0aGVy IGlkeCA2MTcgUD0xZWNkMzAwMCBEPTFlY2QzMDAwIEw9MTAwMCBETUFfRlJPTV9ERVZJQ0UKSmFu IDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBTQUE3MTZ4IEZGIDAwMDA6MDA6MDEuMDogc2Nh 
dGhlci1nYXRoZXIgaWR4IDYxOCBQPTFlY2Q0MDAwIEQ9MWVjZDQwMDAgTD0xMDAwIERNQV9GUk9N X0RFVklDRQpKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IFNBQTcxNnggRkYgMDAwMDow MDowMS4wOiBzY2F0aGVyLWdhdGhlciBpZHggNjE4IFA9MWVjZDUwMDAgRD0xZWNkNTAwMCBMPTEw MDAgRE1BX0ZST01fREVWSUNFCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogU0FBNzE2 eCBGRiAwMDAwOjAwOjAxLjA6IHNjYXRoZXItZ2F0aGVyIGlkeCA2MTkgUD0xZWNkNjAwMCBEPTFl Y2Q2MDAwIEw9MTAwMCBETUFfRlJPTV9ERVZJQ0UKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2Vy bmVsOiBTQUE3MTZ4IEZGIDAwMDA6MDA6MDEuMDogc2NhdGhlci1nYXRoZXIgaWR4IDYxOSBQPTFl Y2Q3MDAwIEQ9MWVjZDcwMDAgTD0xMDAwIERNQV9GUk9NX0RFVklDRQpKYW4gMTkgMTM6NTQ6NTgg Z3VydXZkciBrZXJuZWw6IFNBQTcxNnggRkYgMDAwMDowMDowMS4wOiBzY2F0aGVyLWdhdGhlciBp ZHggNjIwIFA9MWVjZDgwMDAgRD0xZWNkODAwMCBMPTEwMDAgRE1BX0ZST01fREVWSUNFCkphbiAx OSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogU0FBNzE2eCBGRiAwMDAwOjAwOjAxLjA6IHNjYXRo ZXItZ2F0aGVyIGlkeCA2MjAgUD0xZWNkOTAwMCBEPTFlY2Q5MDAwIEw9MTAwMCBETUFfRlJPTV9E RVZJQ0UKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBTQUE3MTZ4IEZGIDAwMDA6MDA6 MDEuMDogc2NhdGhlci1nYXRoZXIgaWR4IDYyMSBQPTFlY2RhMDAwIEQ9MWVjZGEwMDAgTD0xMDAw IERNQV9GUk9NX0RFVklDRQpKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IFNBQTcxNngg RkYgMDAwMDowMDowMS4wOiBzY2F0aGVyLWdhdGhlciBpZHggNjIxIFA9MWVjZGIwMDAgRD0xZWNk YjAwMCBMPTEwMDAgRE1BX0ZST01fREVWSUNFCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5l bDogU0FBNzE2eCBGRiAwMDAwOjAwOjAxLjA6IHNjYXRoZXItZ2F0aGVyIGlkeCA2MjIgUD0xZWNk YzAwMCBEPTFlY2RjMDAwIEw9MTAwMCBETUFfRlJPTV9ERVZJQ0UKSmFuIDE5IDEzOjU0OjU4IGd1 cnV2ZHIga2VybmVsOiBTQUE3MTZ4IEZGIDAwMDA6MDA6MDEuMDogc2NhdGhlci1nYXRoZXIgaWR4 IDYyMiBQPTFlY2RkMDAwIEQ9MWVjZGQwMDAgTD0xMDAwIERNQV9GUk9NX0RFVklDRQpKYW4gMTkg MTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IFNBQTcxNnggRkYgMDAwMDowMDowMS4wOiBzY2F0aGVy LWdhdGhlciBpZHggNjIzIFA9MWVjZGUwMDAgRD0xZWNkZTAwMCBMPTEwMDAgRE1BX0ZST01fREVW SUNFCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogU0FBNzE2eCBGRiAwMDAwOjAwOjAx LjA6IHNjYXRoZXItZ2F0aGVyIGlkeCA2MjMgUD0xZWNkZjAwMCBEPTFlY2RmMDAwIEw9MTAwMCBE TUFfRlJPTV9ERVZJQ0UKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBTQUE3MTZ4IEZG IDAwMDA6MDA6MDEuMDogc2luZ2xlIGlkeCA2MjQgUD0xZWNlMTAwMCBEPTFlY2UxMDAwIEw9MTAw MCBETUFfVE9fREVWSUNFCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogU0FBNzE2eCBG RiAwMDAwOjAwOjAxLjA6IHNjYXRoZXItZ2F0aGVyIGlkeCA2MjUgUD0xZWNlMjAwMCBEPTFlY2Uy MDAwIEw9MTAwMCBETUFfRlJPTV9ERVZJQ0UKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVs OiBTQUE3MTZ4IEZGIDAwMDA6MDA6MDEuMDogc2NhdGhlci1nYXRoZXIgaWR4IDYyNSBQPTFlY2Uz MDAwIEQ9MWVjZTMwMDAgTD0xMDAwIERNQV9GUk9NX0RFVklDRQpKYW4gMTkgMTM6NTQ6NTggZ3Vy dXZkciBrZXJuZWw6IFNBQTcxNnggRkYgMDAwMDowMDowMS4wOiBzY2F0aGVyLWdhdGhlciBpZHgg NjI2IFA9MWVjZTQwMDAgRD0xZWNlNDAwMCBMPTEwMDAgRE1BX0ZST01fREVWSUNFCkphbiAxOSAx Mzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogU0FBNzE2eCBGRiAwMDAwOjAwOjAxLjA6IHNjYXRoZXIt Z2F0aGVyIGlkeCA2MjYgUD0xZWNlNTAwMCBEPTFlY2U1MDAwIEw9MTAwMCBETUFfRlJPTV9ERVZJ Q0UKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBTQUE3MTZ4IEZGIDAwMDA6MDA6MDEu MDogc2NhdGhlci1nYXRoZXIgaWR4IDYyNyBQPTFlY2U2MDAwIEQ9MWVjZTYwMDAgTD0xMDAwIERN QV9GUk9NX0RFVklDRQpKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IFNBQTcxNnggRkYg MDAwMDowMDowMS4wOiBzY2F0aGVyLWdhdGhlciBpZHggNjI3IFA9MWVjZTcwMDAgRD0xZWNlNzAw MCBMPTEwMDAgRE1BX0ZST01fREVWSUNFCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDog U0FBNzE2eCBGRiAwMDAwOjAwOjAxLjA6IHNjYXRoZXItZ2F0aGVyIGlkeCA2MjggUD0xZWNlODAw MCBEPTFlY2U4MDAwIEw9MTAwMCBETUFfRlJPTV9ERVZJQ0UKSmFuIDE5IDEzOjU0OjU4IGd1cnV2 ZHIga2VybmVsOiBTQUE3MTZ4IEZGIDAwMDA6MDA6MDEuMDogc2NhdGhlci1nYXRoZXIgaWR4IDYy OCBQPTFlY2U5MDAwIEQ9MWVjZTkwMDAgTD0xMDAwIERNQV9GUk9NX0RFVklDRQpKYW4gMTkgMTM6 NTQ6NTggZ3VydXZkciBrZXJuZWw6IFNBQTcxNnggRkYgMDAwMDowMDowMS4wOiBzY2F0aGVyLWdh 
dGhlciBpZHggNjI5IFA9MWVjZWEwMDAgRD0xZWNlYTAwMCBMPTEwMDAgRE1BX0ZST01fREVWSUNF CkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogU0FBNzE2eCBGRiAwMDAwOjAwOjAxLjA6 IHNjYXRoZXItZ2F0aGVyIGlkeCA2MjkgUD0xZWNlYjAwMCBEPTFlY2ViMDAwIEw9MTAwMCBETUFf RlJPTV9ERVZJQ0UKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBTQUE3MTZ4IEZGIDAw MDA6MDA6MDEuMDogc2NhdGhlci1nYXRoZXIgaWR4IDYzMCBQPTFlY2VjMDAwIEQ9MWVjZWMwMDAg TD0xMDAwIERNQV9GUk9NX0RFVklDRQpKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IFNB QTcxNnggRkYgMDAwMDowMDowMS4wOiBzY2F0aGVyLWdhdGhlciBpZHggNjMwIFA9MWVjZWQwMDAg RD0xZWNlZDAwMCBMPTEwMDAgRE1BX0ZST01fREVWSUNFCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRy IGtlcm5lbDogU0FBNzE2eCBGRiAwMDAwOjAwOjAxLjA6IHNjYXRoZXItZ2F0aGVyIGlkeCA2MzIg UD0xZWNmMDAwMCBEPTFlY2YwMDAwIEw9MTAwMCBETUFfRlJPTV9ERVZJQ0UKSmFuIDE5IDEzOjU0 OjU4IGd1cnV2ZHIga2VybmVsOiBTQUE3MTZ4IEZGIDAwMDA6MDA6MDEuMDogc2NhdGhlci1nYXRo ZXIgaWR4IDYzMiBQPTFlY2YxMDAwIEQ9MWVjZjEwMDAgTD0xMDAwIERNQV9GUk9NX0RFVklDRQpK YW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IFNBQTcxNnggRkYgMDAwMDowMDowMS4wOiBz aW5nbGUgaWR4IDYzMyBQPTFlY2YzMDAwIEQ9MWVjZjMwMDAgTD0xMDAwIERNQV9UT19ERVZJQ0UK SmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBTQUE3MTZ4IEZGIDAwMDA6MDA6MDEuMDog c2NhdGhlci1nYXRoZXIgaWR4IDYzNCBQPTFlY2Y0MDAwIEQ9MWVjZjQwMDAgTD0xMDAwIERNQV9G Uk9NX0RFVklDRQpKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IFNBQTcxNnggRkYgMDAw MDowMDowMS4wOiBzY2F0aGVyLWdhdGhlciBpZHggNjM0IFA9MWVjZjUwMDAgRD0xZWNmNTAwMCBM PTEwMDAgRE1BX0ZST01fREVWSUNFCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogU0FB NzE2eCBGRiAwMDAwOjAwOjAxLjA6IHNjYXRoZXItZ2F0aGVyIGlkeCA2MzUgUD0xZWNmNjAwMCBE PTFlY2Y2MDAwIEw9MTAwMCBETUFfRlJPTV9ERVZJQ0UKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIg a2VybmVsOiBTQUE3MTZ4IEZGIDAwMDA6MDA6MDEuMDogc2NhdGhlci1nYXRoZXIgaWR4IDYzNSBQ PTFlY2Y3MDAwIEQ9MWVjZjcwMDAgTD0xMDAwIERNQV9GUk9NX0RFVklDRQpKYW4gMTkgMTM6NTQ6 NTggZ3VydXZkciBrZXJuZWw6IFNBQTcxNnggRkYgMDAwMDowMDowMS4wOiBzY2F0aGVyLWdhdGhl ciBpZHggNjM2IFA9MWVjZjgwMDAgRD0xZWNmODAwMCBMPTEwMDAgRE1BX0ZST01fREVWSUNFCkph biAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogU0FBNzE2eCBGRiAwMDAwOjAwOjAxLjA6IHNj YXRoZXItZ2F0aGVyIGlkeCA2MzYgUD0xZWNmOTAwMCBEPTFlY2Y5MDAwIEw9MTAwMCBETUFfRlJP TV9ERVZJQ0UKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBTQUE3MTZ4IEZGIDAwMDA6 MDA6MDEuMDogc2NhdGhlci1nYXRoZXIgaWR4IDYzNyBQPTFlY2ZhMDAwIEQ9MWVjZmEwMDAgTD0x MDAwIERNQV9GUk9NX0RFVklDRQpKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IFNBQTcx NnggRkYgMDAwMDowMDowMS4wOiBzY2F0aGVyLWdhdGhlciBpZHggNjM3IFA9MWVjZmIwMDAgRD0x ZWNmYjAwMCBMPTEwMDAgRE1BX0ZST01fREVWSUNFCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtl cm5lbDogU0FBNzE2eCBGRiAwMDAwOjAwOjAxLjA6IHNjYXRoZXItZ2F0aGVyIGlkeCA2MzggUD0x ZWNmYzAwMCBEPTFlY2ZjMDAwIEw9MTAwMCBETUFfRlJPTV9ERVZJQ0UKSmFuIDE5IDEzOjU0OjU4 IGd1cnV2ZHIga2VybmVsOiBTQUE3MTZ4IEZGIDAwMDA6MDA6MDEuMDogc2NhdGhlci1nYXRoZXIg aWR4IDYzOCBQPTFlY2ZkMDAwIEQ9MWVjZmQwMDAgTD0xMDAwIERNQV9GUk9NX0RFVklDRQpKYW4g MTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IFNBQTcxNnggRkYgMDAwMDowMDowMS4wOiBzY2F0 aGVyLWdhdGhlciBpZHggNjM5IFA9MWVjZmUwMDAgRD0xZWNmZTAwMCBMPTEwMDAgRE1BX0ZST01f REVWSUNFCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogU0FBNzE2eCBGRiAwMDAwOjAw OjAxLjA6IHNjYXRoZXItZ2F0aGVyIGlkeCA2MzkgUD0xZWNmZjAwMCBEPTFlY2ZmMDAwIEw9MTAw MCBETUFfRlJPTV9ERVZJQ0UKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBTQUE3MTZ4 IEZGIDAwMDA6MDA6MDEuMDogc2NhdGhlci1nYXRoZXIgaWR4IDY0MCBQPTFlZDAwMDAwIEQ9MWVk MDAwMDAgTD0xMDAwIERNQV9GUk9NX0RFVklDRQpKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJu ZWw6IFNBQTcxNnggRkYgMDAwMDowMDowMS4wOiBzY2F0aGVyLWdhdGhlciBpZHggNjQwIFA9MWVk MDEwMDAgRD0xZWQwMTAwMCBMPTEwMDAgRE1BX0ZST01fREVWSUNFCkphbiAxOSAxMzo1NDo1OCBn dXJ1dmRyIGtlcm5lbDogU0FBNzE2eCBGRiAwMDAwOjAwOjAxLjA6IHNjYXRoZXItZ2F0aGVyIGlk 
eCA2NDEgUD0xZWQwMjAwMCBEPTFlZDAyMDAwIEw9MTAwMCBETUFfRlJPTV9ERVZJQ0UKSmFuIDE5 IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBTQUE3MTZ4IEZGIDAwMDA6MDA6MDEuMDogc2NhdGhl ci1nYXRoZXIgaWR4IDY0MSBQPTFlZDAzMDAwIEQ9MWVkMDMwMDAgTD0xMDAwIERNQV9GUk9NX0RF VklDRQpKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IFNBQTcxNnggRkYgMDAwMDowMDow MS4wOiBzaW5nbGUgaWR4IDY0MiBQPTFlZDA1MDAwIEQ9MWVkMDUwMDAgTD0xMDAwIERNQV9UT19E RVZJQ0UKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBTQUE3MTZ4IEZGIDAwMDA6MDA6 MDEuMDogc2NhdGhlci1nYXRoZXIgaWR4IDY0MyBQPTFlZDA2MDAwIEQ9MWVkMDYwMDAgTD0xMDAw IERNQV9GUk9NX0RFVklDRQpKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IFNBQTcxNngg RkYgMDAwMDowMDowMS4wOiBzY2F0aGVyLWdhdGhlciBpZHggNjQzIFA9MWVkMDcwMDAgRD0xZWQw NzAwMCBMPTEwMDAgRE1BX0ZST01fREVWSUNFCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5l bDogU0FBNzE2eCBGRiAwMDAwOjAwOjAxLjA6IHNjYXRoZXItZ2F0aGVyIGlkeCA2NDQgUD0xZWQw ODAwMCBEPTFlZDA4MDAwIEw9MTAwMCBETUFfRlJPTV9ERVZJQ0UKSmFuIDE5IDEzOjU0OjU4IGd1 cnV2ZHIga2VybmVsOiBTQUE3MTZ4IEZGIDAwMDA6MDA6MDEuMDogc2NhdGhlci1nYXRoZXIgaWR4 IDY0NCBQPTFlZDA5MDAwIEQ9MWVkMDkwMDAgTD0xMDAwIERNQV9GUk9NX0RFVklDRQpKYW4gMTkg MTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IFNBQTcxNnggRkYgMDAwMDowMDowMS4wOiBzY2F0aGVy LWdhdGhlciBpZHggNjQ1IFA9MWVkMGEwMDAgRD0xZWQwYTAwMCBMPTEwMDAgRE1BX0ZST01fREVW SUNFCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogU0FBNzE2eCBGRiAwMDAwOjAwOjAx LjA6IHNjYXRoZXItZ2F0aGVyIGlkeCA2NDUgUD0xZWQwYjAwMCBEPTFlZDBiMDAwIEw9MTAwMCBE TUFfRlJPTV9ERVZJQ0UKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBTQUE3MTZ4IEZG IDAwMDA6MDA6MDEuMDogc2NhdGhlci1nYXRoZXIgaWR4IDY0NiBQPTFlZDBjMDAwIEQ9MWVkMGMw MDAgTD0xMDAwIERNQV9GUk9NX0RFVklDRQpKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6 IFNBQTcxNnggRkYgMDAwMDowMDowMS4wOiBzY2F0aGVyLWdhdGhlciBpZHggNjQ2IFA9MWVkMGQw MDAgRD0xZWQwZDAwMCBMPTEwMDAgRE1BX0ZST01fREVWSUNFCkphbiAxOSAxMzo1NDo1OCBndXJ1 dmRyIGtlcm5lbDogU0FBNzE2eCBGRiAwMDAwOjAwOjAxLjA6IHNjYXRoZXItZ2F0aGVyIGlkeCA2 NDcgUD0xZWQwZTAwMCBEPTFlZDBlMDAwIEw9MTAwMCBETUFfRlJPTV9ERVZJQ0UKSmFuIDE5IDEz OjU0OjU4IGd1cnV2ZHIga2VybmVsOiBTQUE3MTZ4IEZGIDAwMDA6MDA6MDEuMDogc2NhdGhlci1n YXRoZXIgaWR4IDY0NyBQPTFlZDBmMDAwIEQ9MWVkMGYwMDAgTD0xMDAwIERNQV9GUk9NX0RFVklD RQpKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IFNBQTcxNnggRkYgMDAwMDowMDowMS4w OiBzY2F0aGVyLWdhdGhlciBpZHggNjQ4IFA9MWVkMTAwMDAgRD0xZWQxMDAwMCBMPTEwMDAgRE1B X0ZST01fREVWSUNFCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogU0FBNzE2eCBGRiAw MDAwOjAwOjAxLjA6IHNjYXRoZXItZ2F0aGVyIGlkeCA2NDggUD0xZWQxMTAwMCBEPTFlZDExMDAw IEw9MTAwMCBETUFfRlJPTV9ERVZJQ0UKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBT QUE3MTZ4IEZGIDAwMDA6MDA6MDEuMDogc2NhdGhlci1nYXRoZXIgaWR4IDY0OSBQPTFlZDEyMDAw IEQ9MWVkMTIwMDAgTD0xMDAwIERNQV9GUk9NX0RFVklDRQpKYW4gMTkgMTM6NTQ6NTggZ3VydXZk ciBrZXJuZWw6IFNBQTcxNnggRkYgMDAwMDowMDowMS4wOiBzY2F0aGVyLWdhdGhlciBpZHggNjQ5 IFA9MWVkMTMwMDAgRD0xZWQxMzAwMCBMPTEwMDAgRE1BX0ZST01fREVWSUNFCkphbiAxOSAxMzo1 NDo1OCBndXJ1dmRyIGtlcm5lbDogU0FBNzE2eCBGRiAwMDAwOjAwOjAxLjA6IHNjYXRoZXItZ2F0 aGVyIGlkeCA2NTAgUD0xZWQxNDAwMCBEPTFlZDE0MDAwIEw9MTAwMCBETUFfRlJPTV9ERVZJQ0UK SmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBTQUE3MTZ4IEZGIDAwMDA6MDA6MDEuMDog c2NhdGhlci1nYXRoZXIgaWR4IDY1MCBQPTFlZDE1MDAwIEQ9MWVkMTUwMDAgTD0xMDAwIERNQV9G Uk9NX0RFVklDRQpKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IFNBQTcxNnggRkYgMDAw MDowMDowMS4wOiBzaW5nbGUgaWR4IDY1MSBQPTFlZDE3MDAwIEQ9MWVkMTcwMDAgTD0xMDAwIERN QV9UT19ERVZJQ0UKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBTQUE3MTZ4IEZGIDAw MDA6MDA6MDEuMDogc2NhdGhlci1nYXRoZXIgaWR4IDY1MiBQPTFlZDE4MDAwIEQ9MWVkMTgwMDAg TD0xMDAwIERNQV9GUk9NX0RFVklDRQpKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IFNB QTcxNnggRkYgMDAwMDowMDowMS4wOiBzY2F0aGVyLWdhdGhlciBpZHggNjUyIFA9MWVkMTkwMDAg 
[Base64-encoded log attachment, truncated at both ends: a kernel DMA-API debug dump from host "guruvdr" (Jan 19 13:54:58) listing the active DMA mappings per device. The decoded lines have the form "<driver> <device>: <type> idx <n> P=<phys> D=<dma> L=<len> <direction>" and cover SAA716x FF 0000:00:01.0 (scatter-gather and single mappings, DMA_FROM_DEVICE/DMA_TO_DEVICE, L=1000), mv643xx_eth_port.0/.1 (single mappings, DMA_FROM_DEVICE, L=600, plus a few coherent DMA_BIDIRECTIONAL entries), sata_mv.0 (coherent DMA_BIDIRECTIONAL mappings, L=1000) and orion-ehci.0 (coherent DMA_BIDIRECTIONAL mappings).]
UkVDVElPTkFMCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogb3Jpb24tZWhjaSBvcmlv bi1laGNpLjA6IGNvaGVyZW50IGlkeCAxNzEgUD0yMDg1YjAwMCBEPTFmOTU3MDAwIEw9MTAwMCBE TUFfQklESVJFQ1RJT05BTApKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IG9yaW9uLWVo Y2kgb3Jpb24tZWhjaS4wOiBjb2hlcmVudCBpZHggMTcyIFA9MjA4NWMwMDAgRD0xZjk1ODAwMCBM PTEwMDAgRE1BX0JJRElSRUNUSU9OQUwKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBv cmlvbi1laGNpIG9yaW9uLWVoY2kuMDogY29oZXJlbnQgaWR4IDE3MiBQPTIwODVkMDAwIEQ9MWY5 NTkwMDAgTD0xMDAwIERNQV9CSURJUkVDVElPTkFMCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtl cm5lbDogb3Jpb24tZWhjaSBvcmlvbi1laGNpLjA6IGNvaGVyZW50IGlkeCAxNzMgUD0yMDg1ZTAw MCBEPTFmOTVhMDAwIEw9MTAwMCBETUFfQklESVJFQ1RJT05BTApKYW4gMTkgMTM6NTQ6NTggZ3Vy dXZkciBrZXJuZWw6IG9yaW9uLWVoY2kgb3Jpb24tZWhjaS4wOiBjb2hlcmVudCBpZHggMTczIFA9 MjA4NWYwMDAgRD0xZjk1YjAwMCBMPTEwMDAgRE1BX0JJRElSRUNUSU9OQUwKSmFuIDE5IDEzOjU0 OjU4IGd1cnV2ZHIga2VybmVsOiBvcmlvbi1laGNpIG9yaW9uLWVoY2kuMDogY29oZXJlbnQgaWR4 IDE3NCBQPTIwODYwMDAwIEQ9MWY5NWMwMDAgTD0xMDAwIERNQV9CSURJUkVDVElPTkFMCkphbiAx OSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogb3Jpb24tZWhjaSBvcmlvbi1laGNpLjA6IGNvaGVy ZW50IGlkeCAxNzQgUD0yMDg2MTAwMCBEPTFmOTVkMDAwIEw9MTAwMCBETUFfQklESVJFQ1RJT05B TApKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IG9yaW9uLWVoY2kgb3Jpb24tZWhjaS4w OiBjb2hlcmVudCBpZHggMTc1IFA9MjA4NjIwMDAgRD0xZjk1ZTAwMCBMPTEwMDAgRE1BX0JJRElS RUNUSU9OQUwKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBvcmlvbi1laGNpIG9yaW9u LWVoY2kuMDogY29oZXJlbnQgaWR4IDE3NSBQPTIwODYzMDAwIEQ9MWY5NWYwMDAgTD0xMDAwIERN QV9CSURJUkVDVElPTkFMCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogb3Jpb24tZWhj aSBvcmlvbi1laGNpLjA6IGNvaGVyZW50IGlkeCAxNzYgUD0yMDg2NDAwMCBEPTFmOTYwMDAwIEw9 MTAwMCBETUFfQklESVJFQ1RJT05BTApKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IG9y aW9uLWVoY2kgb3Jpb24tZWhjaS4wOiBjb2hlcmVudCBpZHggMTc2IFA9MjA4NjUwMDAgRD0xZjk2 MTAwMCBMPTEwMDAgRE1BX0JJRElSRUNUSU9OQUwKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2Vy bmVsOiBvcmlvbi1laGNpIG9yaW9uLWVoY2kuMDogY29oZXJlbnQgaWR4IDE3NyBQPTIwODY2MDAw IEQ9MWY5NjIwMDAgTD0xMDAwIERNQV9CSURJUkVDVElPTkFMCkphbiAxOSAxMzo1NDo1OCBndXJ1 dmRyIGtlcm5lbDogb3Jpb24tZWhjaSBvcmlvbi1laGNpLjA6IGNvaGVyZW50IGlkeCAxNzcgUD0y MDg2NzAwMCBEPTFmOTYzMDAwIEw9MTAwMCBETUFfQklESVJFQ1RJT05BTApKYW4gMTkgMTM6NTQ6 NTggZ3VydXZkciBrZXJuZWw6IG9yaW9uLWVoY2kgb3Jpb24tZWhjaS4wOiBjb2hlcmVudCBpZHgg MTc4IFA9MjA4NjgwMDAgRD0xZjk2NDAwMCBMPTEwMDAgRE1BX0JJRElSRUNUSU9OQUwKSmFuIDE5 IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBvcmlvbi1laGNpIG9yaW9uLWVoY2kuMDogY29oZXJl bnQgaWR4IDE3OCBQPTIwODY5MDAwIEQ9MWY5NjUwMDAgTD0xMDAwIERNQV9CSURJUkVDVElPTkFM CkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogb3Jpb24tZWhjaSBvcmlvbi1laGNpLjA6 IGNvaGVyZW50IGlkeCAxNzkgUD0yMDg2YTAwMCBEPTFmOTY2MDAwIEw9MTAwMCBETUFfQklESVJF Q1RJT05BTApKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IG9yaW9uLWVoY2kgb3Jpb24t ZWhjaS4wOiBjb2hlcmVudCBpZHggMTc5IFA9MjA4NmIwMDAgRD0xZjk2NzAwMCBMPTEwMDAgRE1B X0JJRElSRUNUSU9OQUwKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBvcmlvbi1laGNp IG9yaW9uLWVoY2kuMDogY29oZXJlbnQgaWR4IDE4MCBQPTIwODZjMDAwIEQ9MWY5NjgwMDAgTD0x MDAwIERNQV9CSURJUkVDVElPTkFMCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogb3Jp b24tZWhjaSBvcmlvbi1laGNpLjA6IGNvaGVyZW50IGlkeCAxODAgUD0yMDg2ZDAwMCBEPTFmOTY5 MDAwIEw9MTAwMCBETUFfQklESVJFQ1RJT05BTApKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJu ZWw6IG9yaW9uLWVoY2kgb3Jpb24tZWhjaS4wOiBjb2hlcmVudCBpZHggMTgxIFA9MjA4NmUwMDAg RD0xZjk2YTAwMCBMPTEwMDAgRE1BX0JJRElSRUNUSU9OQUwKSmFuIDE5IDEzOjU0OjU4IGd1cnV2 ZHIga2VybmVsOiBvcmlvbi1laGNpIG9yaW9uLWVoY2kuMDogY29oZXJlbnQgaWR4IDE4MSBQPTIw ODZmMDAwIEQ9MWY5NmIwMDAgTD0xMDAwIERNQV9CSURJUkVDVElPTkFMCkphbiAxOSAxMzo1NDo1 OCBndXJ1dmRyIGtlcm5lbDogb3Jpb24tZWhjaSBvcmlvbi1laGNpLjA6IGNvaGVyZW50IGlkeCAx 
ODIgUD0yMDg3MDAwMCBEPTFmOTZjMDAwIEw9MTAwMCBETUFfQklESVJFQ1RJT05BTApKYW4gMTkg MTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IG9yaW9uLWVoY2kgb3Jpb24tZWhjaS4wOiBjb2hlcmVu dCBpZHggMTgyIFA9MjA4NzEwMDAgRD0xZjk2ZDAwMCBMPTEwMDAgRE1BX0JJRElSRUNUSU9OQUwK SmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBvcmlvbi1laGNpIG9yaW9uLWVoY2kuMDog Y29oZXJlbnQgaWR4IDE4MyBQPTIwODcyMDAwIEQ9MWY5NmUwMDAgTD0xMDAwIERNQV9CSURJUkVD VElPTkFMCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogb3Jpb24tZWhjaSBvcmlvbi1l aGNpLjA6IGNvaGVyZW50IGlkeCAxODMgUD0yMDg3MzAwMCBEPTFmOTZmMDAwIEw9MTAwMCBETUFf QklESVJFQ1RJT05BTApKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IG9yaW9uLWVoY2kg b3Jpb24tZWhjaS4wOiBjb2hlcmVudCBpZHggMTg0IFA9MjA4NzQwMDAgRD0xZjk3MDAwMCBMPTEw MDAgRE1BX0JJRElSRUNUSU9OQUwKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBvcmlv bi1laGNpIG9yaW9uLWVoY2kuMDogY29oZXJlbnQgaWR4IDE4NCBQPTIwODc1MDAwIEQ9MWY5NzEw MDAgTD0xMDAwIERNQV9CSURJUkVDVElPTkFMCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5l bDogb3Jpb24tZWhjaSBvcmlvbi1laGNpLjA6IGNvaGVyZW50IGlkeCAxODUgUD0yMDg3NjAwMCBE PTFmOTcyMDAwIEw9MTAwMCBETUFfQklESVJFQ1RJT05BTApKYW4gMTkgMTM6NTQ6NTggZ3VydXZk ciBrZXJuZWw6IG9yaW9uLWVoY2kgb3Jpb24tZWhjaS4wOiBjb2hlcmVudCBpZHggMTg1IFA9MjA4 NzcwMDAgRD0xZjk3MzAwMCBMPTEwMDAgRE1BX0JJRElSRUNUSU9OQUwKSmFuIDE5IDEzOjU0OjU4 IGd1cnV2ZHIga2VybmVsOiBvcmlvbi1laGNpIG9yaW9uLWVoY2kuMDogY29oZXJlbnQgaWR4IDE4 NiBQPTIwODc4MDAwIEQ9MWY5NzQwMDAgTD0xMDAwIERNQV9CSURJUkVDVElPTkFMCkphbiAxOSAx Mzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogb3Jpb24tZWhjaSBvcmlvbi1laGNpLjA6IGNvaGVyZW50 IGlkeCAxODYgUD0yMDg3OTAwMCBEPTFmOTc1MDAwIEw9MTAwMCBETUFfQklESVJFQ1RJT05BTApK YW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IG9yaW9uLWVoY2kgb3Jpb24tZWhjaS4wOiBj b2hlcmVudCBpZHggMTg3IFA9MjA4N2EwMDAgRD0xZjk3NjAwMCBMPTEwMDAgRE1BX0JJRElSRUNU SU9OQUwKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBvcmlvbi1laGNpIG9yaW9uLWVo Y2kuMDogY29oZXJlbnQgaWR4IDE4NyBQPTIwODdiMDAwIEQ9MWY5NzcwMDAgTD0xMDAwIERNQV9C SURJUkVDVElPTkFMCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogb3Jpb24tZWhjaSBv cmlvbi1laGNpLjA6IGNvaGVyZW50IGlkeCAxODggUD0yMDg3YzAwMCBEPTFmOTc4MDAwIEw9MTAw MCBETUFfQklESVJFQ1RJT05BTApKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IG9yaW9u LWVoY2kgb3Jpb24tZWhjaS4wOiBjb2hlcmVudCBpZHggMTg4IFA9MjA4N2QwMDAgRD0xZjk3OTAw MCBMPTEwMDAgRE1BX0JJRElSRUNUSU9OQUwKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVs OiBvcmlvbi1laGNpIG9yaW9uLWVoY2kuMDogY29oZXJlbnQgaWR4IDE4OSBQPTIwODdlMDAwIEQ9 MWY5N2EwMDAgTD0xMDAwIERNQV9CSURJUkVDVElPTkFMCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRy IGtlcm5lbDogb3Jpb24tZWhjaSBvcmlvbi1laGNpLjA6IGNvaGVyZW50IGlkeCAxODkgUD0yMDg3 ZjAwMCBEPTFmOTdiMDAwIEw9MTAwMCBETUFfQklESVJFQ1RJT05BTApKYW4gMTkgMTM6NTQ6NTgg Z3VydXZkciBrZXJuZWw6IG9yaW9uLWVoY2kgb3Jpb24tZWhjaS4wOiBjb2hlcmVudCBpZHggMTkw IFA9MjA4ODAwMDAgRD0xZjk3YzAwMCBMPTEwMDAgRE1BX0JJRElSRUNUSU9OQUwKSmFuIDE5IDEz OjU0OjU4IGd1cnV2ZHIga2VybmVsOiBvcmlvbi1laGNpIG9yaW9uLWVoY2kuMDogY29oZXJlbnQg aWR4IDE5MCBQPTIwODgxMDAwIEQ9MWY5N2QwMDAgTD0xMDAwIERNQV9CSURJUkVDVElPTkFMCkph biAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogb3Jpb24tZWhjaSBvcmlvbi1laGNpLjA6IGNv aGVyZW50IGlkeCAxOTEgUD0yMDg4MjAwMCBEPTFmOTdlMDAwIEw9MTAwMCBETUFfQklESVJFQ1RJ T05BTApKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IG9yaW9uLWVoY2kgb3Jpb24tZWhj aS4wOiBjb2hlcmVudCBpZHggMTkxIFA9MjA4ODMwMDAgRD0xZjk3ZjAwMCBMPTEwMDAgRE1BX0JJ RElSRUNUSU9OQUwKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBvcmlvbi1laGNpIG9y aW9uLWVoY2kuMDogY29oZXJlbnQgaWR4IDE5MiBQPTIwODg0MDAwIEQ9MWY5ODAwMDAgTD0xMDAw IERNQV9CSURJUkVDVElPTkFMCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogb3Jpb24t ZWhjaSBvcmlvbi1laGNpLjA6IGNvaGVyZW50IGlkeCAxOTIgUD0yMDg4NTAwMCBEPTFmOTgxMDAw IEw9MTAwMCBETUFfQklESVJFQ1RJT05BTApKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6 
IG9yaW9uLWVoY2kgb3Jpb24tZWhjaS4wOiBjb2hlcmVudCBpZHggMTkzIFA9MjA4ODYwMDAgRD0x Zjk4MjAwMCBMPTEwMDAgRE1BX0JJRElSRUNUSU9OQUwKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIg a2VybmVsOiBvcmlvbi1laGNpIG9yaW9uLWVoY2kuMDogY29oZXJlbnQgaWR4IDE5MyBQPTIwODg3 MDAwIEQ9MWY5ODMwMDAgTD0xMDAwIERNQV9CSURJUkVDVElPTkFMCkphbiAxOSAxMzo1NDo1OCBn dXJ1dmRyIGtlcm5lbDogb3Jpb24tZWhjaSBvcmlvbi1laGNpLjA6IGNvaGVyZW50IGlkeCAxOTQg UD0yMDg4ODAwMCBEPTFmOTg0MDAwIEw9MTAwMCBETUFfQklESVJFQ1RJT05BTApKYW4gMTkgMTM6 NTQ6NTggZ3VydXZkciBrZXJuZWw6IG9yaW9uLWVoY2kgb3Jpb24tZWhjaS4wOiBjb2hlcmVudCBp ZHggMTk0IFA9MjA4ODkwMDAgRD0xZjk4NTAwMCBMPTEwMDAgRE1BX0JJRElSRUNUSU9OQUwKSmFu IDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBvcmlvbi1laGNpIG9yaW9uLWVoY2kuMDogY29o ZXJlbnQgaWR4IDE5NSBQPTIwODhhMDAwIEQ9MWY5ODYwMDAgTD0xMDAwIERNQV9CSURJUkVDVElP TkFMCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogb3Jpb24tZWhjaSBvcmlvbi1laGNp LjA6IGNvaGVyZW50IGlkeCAxOTUgUD0yMDg4YjAwMCBEPTFmOTg3MDAwIEw9MTAwMCBETUFfQklE SVJFQ1RJT05BTApKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IG9yaW9uLWVoY2kgb3Jp b24tZWhjaS4wOiBjb2hlcmVudCBpZHggMTk2IFA9MjA4OGMwMDAgRD0xZjk4ODAwMCBMPTEwMDAg RE1BX0JJRElSRUNUSU9OQUwKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBvcmlvbi1l aGNpIG9yaW9uLWVoY2kuMDogY29oZXJlbnQgaWR4IDE5NiBQPTIwODhkMDAwIEQ9MWY5ODkwMDAg TD0xMDAwIERNQV9CSURJUkVDVElPTkFMCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDog b3Jpb24tZWhjaSBvcmlvbi1laGNpLjA6IGNvaGVyZW50IGlkeCAxOTcgUD0yMDg4ZTAwMCBEPTFm OThhMDAwIEw9MTAwMCBETUFfQklESVJFQ1RJT05BTApKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBr ZXJuZWw6IG9yaW9uLWVoY2kgb3Jpb24tZWhjaS4wOiBjb2hlcmVudCBpZHggMTk3IFA9MjA4OGYw MDAgRD0xZjk4YjAwMCBMPTEwMDAgRE1BX0JJRElSRUNUSU9OQUwKSmFuIDE5IDEzOjU0OjU4IGd1 cnV2ZHIga2VybmVsOiBvcmlvbi1laGNpIG9yaW9uLWVoY2kuMDogY29oZXJlbnQgaWR4IDE5OCBQ PTIwODkwMDAwIEQ9MWY5OGMwMDAgTD0xMDAwIERNQV9CSURJUkVDVElPTkFMCkphbiAxOSAxMzo1 NDo1OCBndXJ1dmRyIGtlcm5lbDogb3Jpb24tZWhjaSBvcmlvbi1laGNpLjA6IGNvaGVyZW50IGlk eCAxOTggUD0yMDg5MTAwMCBEPTFmOThkMDAwIEw9MTAwMCBETUFfQklESVJFQ1RJT05BTApKYW4g MTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IG9yaW9uLWVoY2kgb3Jpb24tZWhjaS4wOiBjb2hl cmVudCBpZHggMTk5IFA9MjA4OTIwMDAgRD0xZjk4ZTAwMCBMPTEwMDAgRE1BX0JJRElSRUNUSU9O QUwKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBvcmlvbi1laGNpIG9yaW9uLWVoY2ku MDogY29oZXJlbnQgaWR4IDE5OSBQPTIwODkzMDAwIEQ9MWY5OGYwMDAgTD0xMDAwIERNQV9CSURJ UkVDVElPTkFMCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogb3Jpb24tZWhjaSBvcmlv bi1laGNpLjA6IGNvaGVyZW50IGlkeCAyMDAgUD0yMTBjMTAwMCBEPTFjOTkwMDAwIEw9ZWIwMCBE TUFfQklESVJFQ1RJT05BTApKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IG9yaW9uLWVo Y2kgb3Jpb24tZWhjaS4wOiBjb2hlcmVudCBpZHggMjAwIFA9MjA4OTQwMDAgRD0xZjk5MDAwMCBM PTEwMDAgRE1BX0JJRElSRUNUSU9OQUwKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBv cmlvbi1laGNpIG9yaW9uLWVoY2kuMDogY29oZXJlbnQgaWR4IDIwMCBQPTIwODk1MDAwIEQ9MWY5 OTEwMDAgTD0xMDAwIERNQV9CSURJUkVDVElPTkFMCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtl cm5lbDogb3Jpb24tZWhjaSBvcmlvbi1laGNpLjA6IGNvaGVyZW50IGlkeCAyMDEgUD0yMDg5NjAw MCBEPTFmOTkyMDAwIEw9MTAwMCBETUFfQklESVJFQ1RJT05BTApKYW4gMTkgMTM6NTQ6NTggZ3Vy dXZkciBrZXJuZWw6IG9yaW9uLWVoY2kgb3Jpb24tZWhjaS4wOiBjb2hlcmVudCBpZHggMjAxIFA9 MjA4OTcwMDAgRD0xZjk5MzAwMCBMPTEwMDAgRE1BX0JJRElSRUNUSU9OQUwKSmFuIDE5IDEzOjU0 OjU4IGd1cnV2ZHIga2VybmVsOiBvcmlvbi1laGNpIG9yaW9uLWVoY2kuMDogY29oZXJlbnQgaWR4 IDIwMiBQPTIwODk4MDAwIEQ9MWY5OTQwMDAgTD0xMDAwIERNQV9CSURJUkVDVElPTkFMCkphbiAx OSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogb3Jpb24tZWhjaSBvcmlvbi1laGNpLjA6IGNvaGVy ZW50IGlkeCAyMDIgUD0yMDg5OTAwMCBEPTFmOTk1MDAwIEw9MTAwMCBETUFfQklESVJFQ1RJT05B TApKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IG9yaW9uLWVoY2kgb3Jpb24tZWhjaS4w OiBjb2hlcmVudCBpZHggMjAzIFA9MjA4OWEwMDAgRD0xZjk5NjAwMCBMPTEwMDAgRE1BX0JJRElS 
RUNUSU9OQUwKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBvcmlvbi1laGNpIG9yaW9u LWVoY2kuMDogY29oZXJlbnQgaWR4IDIwMyBQPTIwODliMDAwIEQ9MWY5OTcwMDAgTD0xMDAwIERN QV9CSURJUkVDVElPTkFMCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogb3Jpb24tZWhj aSBvcmlvbi1laGNpLjA6IGNvaGVyZW50IGlkeCAyMDQgUD0yMDg5YzAwMCBEPTFmOTk4MDAwIEw9 MTAwMCBETUFfQklESVJFQ1RJT05BTApKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IG9y aW9uLWVoY2kgb3Jpb24tZWhjaS4wOiBjb2hlcmVudCBpZHggMjA0IFA9MjA4OWQwMDAgRD0xZjk5 OTAwMCBMPTEwMDAgRE1BX0JJRElSRUNUSU9OQUwKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2Vy bmVsOiBvcmlvbi1laGNpIG9yaW9uLWVoY2kuMDogY29oZXJlbnQgaWR4IDIwNSBQPTIwODllMDAw IEQ9MWY5OWEwMDAgTD0xMDAwIERNQV9CSURJUkVDVElPTkFMCkphbiAxOSAxMzo1NDo1OCBndXJ1 dmRyIGtlcm5lbDogb3Jpb24tZWhjaSBvcmlvbi1laGNpLjA6IGNvaGVyZW50IGlkeCAyMDUgUD0y MDg5ZjAwMCBEPTFmOTliMDAwIEw9MTAwMCBETUFfQklESVJFQ1RJT05BTApKYW4gMTkgMTM6NTQ6 NTggZ3VydXZkciBrZXJuZWw6IG9yaW9uLWVoY2kgb3Jpb24tZWhjaS4wOiBjb2hlcmVudCBpZHgg MjA2IFA9MjA4YTAwMDAgRD0xZjk5YzAwMCBMPTEwMDAgRE1BX0JJRElSRUNUSU9OQUwKSmFuIDE5 IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBvcmlvbi1laGNpIG9yaW9uLWVoY2kuMDogY29oZXJl bnQgaWR4IDIwNiBQPTIwOGExMDAwIEQ9MWY5OWQwMDAgTD0xMDAwIERNQV9CSURJUkVDVElPTkFM CkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogb3Jpb24tZWhjaSBvcmlvbi1laGNpLjA6 IGNvaGVyZW50IGlkeCAyMDcgUD0yMDhhMjAwMCBEPTFmOTllMDAwIEw9MTAwMCBETUFfQklESVJF Q1RJT05BTApKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IG9yaW9uLWVoY2kgb3Jpb24t ZWhjaS4wOiBjb2hlcmVudCBpZHggMjA3IFA9MjA4YTMwMDAgRD0xZjk5ZjAwMCBMPTEwMDAgRE1B X0JJRElSRUNUSU9OQUwKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBvcmlvbi1laGNp IG9yaW9uLWVoY2kuMDogY29oZXJlbnQgaWR4IDIwOCBQPTIxMGQxMDAwIEQ9MWM5YTAwMDAgTD1l YjAwIERNQV9CSURJUkVDVElPTkFMCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogb3Jp b24tZWhjaSBvcmlvbi1laGNpLjA6IGNvaGVyZW50IGlkeCAyMDggUD0yMDhhNDAwMCBEPTFmOWEw MDAwIEw9MTAwMCBETUFfQklESVJFQ1RJT05BTApKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJu ZWw6IG9yaW9uLWVoY2kgb3Jpb24tZWhjaS4wOiBjb2hlcmVudCBpZHggMjA4IFA9MjA4YTUwMDAg RD0xZjlhMTAwMCBMPTEwMDAgRE1BX0JJRElSRUNUSU9OQUwKSmFuIDE5IDEzOjU0OjU4IGd1cnV2 ZHIga2VybmVsOiBvcmlvbi1laGNpIG9yaW9uLWVoY2kuMDogY29oZXJlbnQgaWR4IDIwOSBQPTIw OGE2MDAwIEQ9MWY5YTIwMDAgTD0xMDAwIERNQV9CSURJUkVDVElPTkFMCkphbiAxOSAxMzo1NDo1 OCBndXJ1dmRyIGtlcm5lbDogb3Jpb24tZWhjaSBvcmlvbi1laGNpLjA6IGNvaGVyZW50IGlkeCAy MDkgUD0yMDhhNzAwMCBEPTFmOWEzMDAwIEw9MTAwMCBETUFfQklESVJFQ1RJT05BTApKYW4gMTkg MTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IG9yaW9uLWVoY2kgb3Jpb24tZWhjaS4wOiBjb2hlcmVu dCBpZHggMjEwIFA9MjA4YTgwMDAgRD0xZjlhNDAwMCBMPTEwMDAgRE1BX0JJRElSRUNUSU9OQUwK SmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBvcmlvbi1laGNpIG9yaW9uLWVoY2kuMDog Y29oZXJlbnQgaWR4IDIxMCBQPTIwOGE5MDAwIEQ9MWY5YTUwMDAgTD0xMDAwIERNQV9CSURJUkVD VElPTkFMCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogb3Jpb24tZWhjaSBvcmlvbi1l aGNpLjA6IGNvaGVyZW50IGlkeCAyMTEgUD0yMDhhYTAwMCBEPTFmOWE2MDAwIEw9MTAwMCBETUFf QklESVJFQ1RJT05BTApKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IG9yaW9uLWVoY2kg b3Jpb24tZWhjaS4wOiBjb2hlcmVudCBpZHggMjExIFA9MjA4YWIwMDAgRD0xZjlhNzAwMCBMPTEw MDAgRE1BX0JJRElSRUNUSU9OQUwKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBvcmlv bi1laGNpIG9yaW9uLWVoY2kuMDogY29oZXJlbnQgaWR4IDIxMiBQPTIwOGFjMDAwIEQ9MWY5YTgw MDAgTD0xMDAwIERNQV9CSURJUkVDVElPTkFMCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5l bDogb3Jpb24tZWhjaSBvcmlvbi1laGNpLjA6IGNvaGVyZW50IGlkeCAyMTIgUD0yMDhhZDAwMCBE PTFmOWE5MDAwIEw9MTAwMCBETUFfQklESVJFQ1RJT05BTApKYW4gMTkgMTM6NTQ6NTggZ3VydXZk ciBrZXJuZWw6IG9yaW9uLWVoY2kgb3Jpb24tZWhjaS4wOiBjb2hlcmVudCBpZHggMjEzIFA9MjA4 YWUwMDAgRD0xZjlhYTAwMCBMPTEwMDAgRE1BX0JJRElSRUNUSU9OQUwKSmFuIDE5IDEzOjU0OjU4 IGd1cnV2ZHIga2VybmVsOiBvcmlvbi1laGNpIG9yaW9uLWVoY2kuMDogY29oZXJlbnQgaWR4IDIx 
MyBQPTIwOGFmMDAwIEQ9MWY5YWIwMDAgTD0xMDAwIERNQV9CSURJUkVDVElPTkFMCkphbiAxOSAx Mzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogb3Jpb24tZWhjaSBvcmlvbi1laGNpLjA6IGNvaGVyZW50 IGlkeCAyMTQgUD0yMDhiMDAwMCBEPTFmOWFjMDAwIEw9MTAwMCBETUFfQklESVJFQ1RJT05BTApK YW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IG9yaW9uLWVoY2kgb3Jpb24tZWhjaS4wOiBj b2hlcmVudCBpZHggMjE0IFA9MjA4YjEwMDAgRD0xZjlhZDAwMCBMPTEwMDAgRE1BX0JJRElSRUNU SU9OQUwKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBvcmlvbi1laGNpIG9yaW9uLWVo Y2kuMDogY29oZXJlbnQgaWR4IDIxNSBQPTIwOGIyMDAwIEQ9MWY5YWUwMDAgTD0xMDAwIERNQV9C SURJUkVDVElPTkFMCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogb3Jpb24tZWhjaSBv cmlvbi1laGNpLjA6IGNvaGVyZW50IGlkeCAyMTUgUD0yMDhiMzAwMCBEPTFmOWFmMDAwIEw9MTAw MCBETUFfQklESVJFQ1RJT05BTApKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IG9yaW9u LWVoY2kgb3Jpb24tZWhjaS4wOiBjb2hlcmVudCBpZHggMjE2IFA9MjEwZTEwMDAgRD0xYzliMDAw MCBMPWViMDAgRE1BX0JJRElSRUNUSU9OQUwKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVs OiBvcmlvbi1laGNpIG9yaW9uLWVoY2kuMDogY29oZXJlbnQgaWR4IDIxNiBQPTIwOGI0MDAwIEQ9 MWY5YjAwMDAgTD0xMDAwIERNQV9CSURJUkVDVElPTkFMCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRy IGtlcm5lbDogb3Jpb24tZWhjaSBvcmlvbi1laGNpLjA6IGNvaGVyZW50IGlkeCAyMTYgUD0yMDhi NTAwMCBEPTFmOWIxMDAwIEw9MTAwMCBETUFfQklESVJFQ1RJT05BTApKYW4gMTkgMTM6NTQ6NTgg Z3VydXZkciBrZXJuZWw6IG9yaW9uLWVoY2kgb3Jpb24tZWhjaS4wOiBjb2hlcmVudCBpZHggMjE3 IFA9MjA4YjYwMDAgRD0xZjliMjAwMCBMPTEwMDAgRE1BX0JJRElSRUNUSU9OQUwKSmFuIDE5IDEz OjU0OjU4IGd1cnV2ZHIga2VybmVsOiBvcmlvbi1laGNpIG9yaW9uLWVoY2kuMDogY29oZXJlbnQg aWR4IDIxNyBQPTIwOGI3MDAwIEQ9MWY5YjMwMDAgTD0xMDAwIERNQV9CSURJUkVDVElPTkFMCkph biAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogb3Jpb24tZWhjaSBvcmlvbi1laGNpLjA6IGNv aGVyZW50IGlkeCAyMTggUD0yMDhiODAwMCBEPTFmOWI0MDAwIEw9MTAwMCBETUFfQklESVJFQ1RJ T05BTApKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IG9yaW9uLWVoY2kgb3Jpb24tZWhj aS4wOiBjb2hlcmVudCBpZHggMjE4IFA9MjA4YjkwMDAgRD0xZjliNTAwMCBMPTEwMDAgRE1BX0JJ RElSRUNUSU9OQUwKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBvcmlvbi1laGNpIG9y aW9uLWVoY2kuMDogc2luZ2xlIGlkeCAyMTkgUD0xZjFiNjU4MCBEPTFmMWI2NTgwIEw9MSBETUFf RlJPTV9ERVZJQ0UKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBvcmlvbi1laGNpIG9y aW9uLWVoY2kuMDogY29oZXJlbnQgaWR4IDIxOSBQPTIwOGJhMDAwIEQ9MWY5YjYwMDAgTD0xMDAw IERNQV9CSURJUkVDVElPTkFMCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogb3Jpb24t ZWhjaSBvcmlvbi1laGNpLjA6IGNvaGVyZW50IGlkeCAyMTkgUD0yMDhiYjAwMCBEPTFmOWI3MDAw IEw9MTAwMCBETUFfQklESVJFQ1RJT05BTApKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6 IG9yaW9uLWVoY2kgb3Jpb24tZWhjaS4wOiBjb2hlcmVudCBpZHggMjIwIFA9MjA4YmMwMDAgRD0x ZjliODAwMCBMPTEwMDAgRE1BX0JJRElSRUNUSU9OQUwKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIg a2VybmVsOiBvcmlvbi1laGNpIG9yaW9uLWVoY2kuMDogY29oZXJlbnQgaWR4IDIyMCBQPTIwOGJk MDAwIEQ9MWY5YjkwMDAgTD0xMDAwIERNQV9CSURJUkVDVElPTkFMCkphbiAxOSAxMzo1NDo1OCBn dXJ1dmRyIGtlcm5lbDogb3Jpb24tZWhjaSBvcmlvbi1laGNpLjA6IGNvaGVyZW50IGlkeCAyMjEg UD0yMDhiZTAwMCBEPTFmOWJhMDAwIEw9MTAwMCBETUFfQklESVJFQ1RJT05BTApKYW4gMTkgMTM6 NTQ6NTggZ3VydXZkciBrZXJuZWw6IG9yaW9uLWVoY2kgb3Jpb24tZWhjaS4wOiBjb2hlcmVudCBp ZHggMjIxIFA9MjA4YmYwMDAgRD0xZjliYjAwMCBMPTEwMDAgRE1BX0JJRElSRUNUSU9OQUwKSmFu IDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBvcmlvbi1laGNpIG9yaW9uLWVoY2kuMDogY29o ZXJlbnQgaWR4IDIyMiBQPTIwOGMwMDAwIEQ9MWY5YmMwMDAgTD0xMDAwIERNQV9CSURJUkVDVElP TkFMCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogb3Jpb24tZWhjaSBvcmlvbi1laGNp LjA6IGNvaGVyZW50IGlkeCAyMjIgUD0yMDhjMTAwMCBEPTFmOWJkMDAwIEw9MTAwMCBETUFfQklE SVJFQ1RJT05BTApKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IG9yaW9uLWVoY2kgb3Jp b24tZWhjaS4wOiBjb2hlcmVudCBpZHggMjIzIFA9MjA4YzIwMDAgRD0xZjliZTAwMCBMPTEwMDAg RE1BX0JJRElSRUNUSU9OQUwKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBvcmlvbi1l 
aGNpIG9yaW9uLWVoY2kuMDogY29oZXJlbnQgaWR4IDIyMyBQPTIwOGMzMDAwIEQ9MWY5YmYwMDAg TD0xMDAwIERNQV9CSURJUkVDVElPTkFMCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDog b3Jpb24tZWhjaSBvcmlvbi1laGNpLjA6IGNvaGVyZW50IGlkeCAyMjQgUD0yMTBmMTAwMCBEPTFj OWMwMDAwIEw9ZWIwMCBETUFfQklESVJFQ1RJT05BTApKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBr ZXJuZWw6IG9yaW9uLWVoY2kgb3Jpb24tZWhjaS4wOiBjb2hlcmVudCBpZHggMjI0IFA9MjA4YzQw MDAgRD0xZjljMDAwMCBMPTEwMDAgRE1BX0JJRElSRUNUSU9OQUwKSmFuIDE5IDEzOjU0OjU4IGd1 cnV2ZHIga2VybmVsOiBvcmlvbi1laGNpIG9yaW9uLWVoY2kuMDogY29oZXJlbnQgaWR4IDIyNCBQ PTIwOGM1MDAwIEQ9MWY5YzEwMDAgTD0xMDAwIERNQV9CSURJUkVDVElPTkFMCkphbiAxOSAxMzo1 NDo1OCBndXJ1dmRyIGtlcm5lbDogb3Jpb24tZWhjaSBvcmlvbi1laGNpLjA6IGNvaGVyZW50IGlk eCAyMjUgUD0yMDhjNjAwMCBEPTFmOWMyMDAwIEw9MTAwMCBETUFfQklESVJFQ1RJT05BTApKYW4g MTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IG9yaW9uLWVoY2kgb3Jpb24tZWhjaS4wOiBjb2hl cmVudCBpZHggMjI1IFA9MjA4YzcwMDAgRD0xZjljMzAwMCBMPTEwMDAgRE1BX0JJRElSRUNUSU9O QUwKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBvcmlvbi1laGNpIG9yaW9uLWVoY2ku MDogY29oZXJlbnQgaWR4IDIyNiBQPTIwOGM4MDAwIEQ9MWY5YzQwMDAgTD0xMDAwIERNQV9CSURJ UkVDVElPTkFMCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogb3Jpb24tZWhjaSBvcmlv bi1laGNpLjA6IGNvaGVyZW50IGlkeCAyMjYgUD0yMDhjOTAwMCBEPTFmOWM1MDAwIEw9MTAwMCBE TUFfQklESVJFQ1RJT05BTApKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IG9yaW9uLWVo Y2kgb3Jpb24tZWhjaS4wOiBjb2hlcmVudCBpZHggMjI3IFA9MjA4Y2EwMDAgRD0xZjljNjAwMCBM PTEwMDAgRE1BX0JJRElSRUNUSU9OQUwKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBv cmlvbi1laGNpIG9yaW9uLWVoY2kuMDogY29oZXJlbnQgaWR4IDIyNyBQPTIwOGNiMDAwIEQ9MWY5 YzcwMDAgTD0xMDAwIERNQV9CSURJUkVDVElPTkFMCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtl cm5lbDogb3Jpb24tZWhjaSBvcmlvbi1laGNpLjA6IGNvaGVyZW50IGlkeCAyMjggUD0yMGJhNjAw MCBEPTFmMWM4MDAwIEw9MTAwMCBETUFfQklESVJFQ1RJT05BTApKYW4gMTkgMTM6NTQ6NTggZ3Vy dXZkciBrZXJuZWw6IG9yaW9uLWVoY2kgb3Jpb24tZWhjaS4wOiBjb2hlcmVudCBpZHggMjI4IFA9 MjA4Y2MwMDAgRD0xZjljODAwMCBMPTEwMDAgRE1BX0JJRElSRUNUSU9OQUwKSmFuIDE5IDEzOjU0 OjU4IGd1cnV2ZHIga2VybmVsOiBvcmlvbi1laGNpIG9yaW9uLWVoY2kuMDogY29oZXJlbnQgaWR4 IDIyOCBQPTIwOGNkMDAwIEQ9MWY5YzkwMDAgTD0xMDAwIERNQV9CSURJUkVDVElPTkFMCkphbiAx OSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogb3Jpb24tZWhjaSBvcmlvbi1laGNpLjA6IGNvaGVy ZW50IGlkeCAyMjkgUD0yMDhjZTAwMCBEPTFmOWNhMDAwIEw9MTAwMCBETUFfQklESVJFQ1RJT05B TApKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IG9yaW9uLWVoY2kgb3Jpb24tZWhjaS4w OiBjb2hlcmVudCBpZHggMjI5IFA9MjA4Y2YwMDAgRD0xZjljYjAwMCBMPTEwMDAgRE1BX0JJRElS RUNUSU9OQUwKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBvcmlvbi1laGNpIG9yaW9u LWVoY2kuMDogY29oZXJlbnQgaWR4IDIzMCBQPTIwOGQwMDAwIEQ9MWY5Y2MwMDAgTD0xMDAwIERN QV9CSURJUkVDVElPTkFMCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogb3Jpb24tZWhj aSBvcmlvbi1laGNpLjA6IGNvaGVyZW50IGlkeCAyMzAgUD0yMDhkMTAwMCBEPTFmOWNkMDAwIEw9 MTAwMCBETUFfQklESVJFQ1RJT05BTApKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IG9y aW9uLWVoY2kgb3Jpb24tZWhjaS4wOiBjb2hlcmVudCBpZHggMjMxIFA9MjA4ZDIwMDAgRD0xZjlj ZTAwMCBMPTEwMDAgRE1BX0JJRElSRUNUSU9OQUwKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2Vy bmVsOiBvcmlvbi1laGNpIG9yaW9uLWVoY2kuMDogY29oZXJlbnQgaWR4IDIzMSBQPTIwOGQzMDAw IEQ9MWY5Y2YwMDAgTD0xMDAwIERNQV9CSURJUkVDVElPTkFMCkphbiAxOSAxMzo1NDo1OCBndXJ1 dmRyIGtlcm5lbDogb3Jpb24tZWhjaSBvcmlvbi1laGNpLjA6IGNvaGVyZW50IGlkeCAyMzIgUD0y MTEwMTAwMCBEPTFjOWQwMDAwIEw9ZWIwMCBETUFfQklESVJFQ1RJT05BTApKYW4gMTkgMTM6NTQ6 NTggZ3VydXZkciBrZXJuZWw6IG9yaW9uLWVoY2kgb3Jpb24tZWhjaS4wOiBjb2hlcmVudCBpZHgg MjMyIFA9MjA4ZDQwMDAgRD0xZjlkMDAwMCBMPTEwMDAgRE1BX0JJRElSRUNUSU9OQUwKSmFuIDE5 IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBvcmlvbi1laGNpIG9yaW9uLWVoY2kuMDogY29oZXJl bnQgaWR4IDIzMiBQPTIwOGQ1MDAwIEQ9MWY5ZDEwMDAgTD0xMDAwIERNQV9CSURJUkVDVElPTkFM 
CkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogb3Jpb24tZWhjaSBvcmlvbi1laGNpLjA6 IGNvaGVyZW50IGlkeCAyMzMgUD0yMDhkNjAwMCBEPTFmOWQyMDAwIEw9MTAwMCBETUFfQklESVJF Q1RJT05BTApKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IG9yaW9uLWVoY2kgb3Jpb24t ZWhjaS4wOiBjb2hlcmVudCBpZHggMjMzIFA9MjA4ZDcwMDAgRD0xZjlkMzAwMCBMPTEwMDAgRE1B X0JJRElSRUNUSU9OQUwKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBvcmlvbi1laGNp IG9yaW9uLWVoY2kuMDogY29oZXJlbnQgaWR4IDIzNCBQPTIwOGQ4MDAwIEQ9MWY5ZDQwMDAgTD0x MDAwIERNQV9CSURJUkVDVElPTkFMCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogb3Jp b24tZWhjaSBvcmlvbi1laGNpLjA6IGNvaGVyZW50IGlkeCAyMzQgUD0yMDhkOTAwMCBEPTFmOWQ1 MDAwIEw9MTAwMCBETUFfQklESVJFQ1RJT05BTApKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJu ZWw6IG9yaW9uLWVoY2kgb3Jpb24tZWhjaS4wOiBjb2hlcmVudCBpZHggMjM1IFA9MjA4ZGEwMDAg RD0xZjlkNjAwMCBMPTEwMDAgRE1BX0JJRElSRUNUSU9OQUwKSmFuIDE5IDEzOjU0OjU4IGd1cnV2 ZHIga2VybmVsOiBvcmlvbi1laGNpIG9yaW9uLWVoY2kuMDogY29oZXJlbnQgaWR4IDIzNSBQPTIw OGRiMDAwIEQ9MWY5ZDcwMDAgTD0xMDAwIERNQV9CSURJUkVDVElPTkFMCkphbiAxOSAxMzo1NDo1 OCBndXJ1dmRyIGtlcm5lbDogb3Jpb24tZWhjaSBvcmlvbi1laGNpLjA6IGNvaGVyZW50IGlkeCAy MzYgUD0yMDhkYzAwMCBEPTFmOWQ4MDAwIEw9MTAwMCBETUFfQklESVJFQ1RJT05BTApKYW4gMTkg MTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IG9yaW9uLWVoY2kgb3Jpb24tZWhjaS4wOiBjb2hlcmVu dCBpZHggMjM2IFA9MjA4ZGQwMDAgRD0xZjlkOTAwMCBMPTEwMDAgRE1BX0JJRElSRUNUSU9OQUwK SmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBvcmlvbi1laGNpIG9yaW9uLWVoY2kuMDog Y29oZXJlbnQgaWR4IDIzNyBQPTIwOGRlMDAwIEQ9MWY5ZGEwMDAgTD0xMDAwIERNQV9CSURJUkVD VElPTkFMCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogb3Jpb24tZWhjaSBvcmlvbi1l aGNpLjA6IGNvaGVyZW50IGlkeCAyMzcgUD0yMDhkZjAwMCBEPTFmOWRiMDAwIEw9MTAwMCBETUFf QklESVJFQ1RJT05BTApKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IG9yaW9uLWVoY2kg b3Jpb24tZWhjaS4wOiBjb2hlcmVudCBpZHggMjM4IFA9MjA4ZTAwMDAgRD0xZjlkYzAwMCBMPTEw MDAgRE1BX0JJRElSRUNUSU9OQUwKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBvcmlv bi1laGNpIG9yaW9uLWVoY2kuMDogY29oZXJlbnQgaWR4IDIzOCBQPTIwOGUxMDAwIEQ9MWY5ZGQw MDAgTD0xMDAwIERNQV9CSURJUkVDVElPTkFMCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5l bDogb3Jpb24tZWhjaSBvcmlvbi1laGNpLjA6IGNvaGVyZW50IGlkeCAyMzkgUD0yMDhlMjAwMCBE PTFmOWRlMDAwIEw9MTAwMCBETUFfQklESVJFQ1RJT05BTApKYW4gMTkgMTM6NTQ6NTggZ3VydXZk ciBrZXJuZWw6IG9yaW9uLWVoY2kgb3Jpb24tZWhjaS4wOiBjb2hlcmVudCBpZHggMjM5IFA9MjA4 ZTMwMDAgRD0xZjlkZjAwMCBMPTEwMDAgRE1BX0JJRElSRUNUSU9OQUwKSmFuIDE5IDEzOjU0OjU4 IGd1cnV2ZHIga2VybmVsOiBvcmlvbi1laGNpIG9yaW9uLWVoY2kuMDogY29oZXJlbnQgaWR4IDI0 MCBQPTIwOGU0MDAwIEQ9MWY5ZTAwMDAgTD0xMDAwIERNQV9CSURJUkVDVElPTkFMCkphbiAxOSAx Mzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogb3Jpb24tZWhjaSBvcmlvbi1laGNpLjA6IGNvaGVyZW50 IGlkeCAyNDAgUD0yMDhlNTAwMCBEPTFmOWUxMDAwIEw9MTAwMCBETUFfQklESVJFQ1RJT05BTApK YW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IG9yaW9uLWVoY2kgb3Jpb24tZWhjaS4wOiBj b2hlcmVudCBpZHggMjQxIFA9MjA4ZTYwMDAgRD0xZjllMjAwMCBMPTEwMDAgRE1BX0JJRElSRUNU SU9OQUwKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBvcmlvbi1laGNpIG9yaW9uLWVo Y2kuMDogY29oZXJlbnQgaWR4IDI0MSBQPTIwOGU3MDAwIEQ9MWY5ZTMwMDAgTD0xMDAwIERNQV9C SURJUkVDVElPTkFMCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogb3Jpb24tZWhjaSBv cmlvbi1laGNpLjA6IGNvaGVyZW50IGlkeCAyNDIgUD0yMDhlODAwMCBEPTFmOWU0MDAwIEw9MTAw MCBETUFfQklESVJFQ1RJT05BTApKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IG9yaW9u LWVoY2kgb3Jpb24tZWhjaS4wOiBjb2hlcmVudCBpZHggMjQyIFA9MjA4ZTkwMDAgRD0xZjllNTAw MCBMPTEwMDAgRE1BX0JJRElSRUNUSU9OQUwKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVs OiBvcmlvbi1laGNpIG9yaW9uLWVoY2kuMDogY29oZXJlbnQgaWR4IDI0MyBQPTIwOGVhMDAwIEQ9 MWY5ZTYwMDAgTD0xMDAwIERNQV9CSURJUkVDVElPTkFMCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRy IGtlcm5lbDogb3Jpb24tZWhjaSBvcmlvbi1laGNpLjA6IGNvaGVyZW50IGlkeCAyNDMgUD0yMDhl 
YjAwMCBEPTFmOWU3MDAwIEw9MTAwMCBETUFfQklESVJFQ1RJT05BTApKYW4gMTkgMTM6NTQ6NTgg Z3VydXZkciBrZXJuZWw6IG9yaW9uLWVoY2kgb3Jpb24tZWhjaS4wOiBjb2hlcmVudCBpZHggMjQ0 IFA9MjA4ZWMwMDAgRD0xZjllODAwMCBMPTEwMDAgRE1BX0JJRElSRUNUSU9OQUwKSmFuIDE5IDEz OjU0OjU4IGd1cnV2ZHIga2VybmVsOiBvcmlvbi1laGNpIG9yaW9uLWVoY2kuMDogY29oZXJlbnQg aWR4IDI0NCBQPTIwOGVkMDAwIEQ9MWY5ZTkwMDAgTD0xMDAwIERNQV9CSURJUkVDVElPTkFMCkph biAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogb3Jpb24tZWhjaSBvcmlvbi1laGNpLjA6IGNv aGVyZW50IGlkeCAyNDUgUD0yMDhlZTAwMCBEPTFmOWVhMDAwIEw9MTAwMCBETUFfQklESVJFQ1RJ T05BTApKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IG9yaW9uLWVoY2kgb3Jpb24tZWhj aS4wOiBjb2hlcmVudCBpZHggMjQ1IFA9MjA4ZWYwMDAgRD0xZjllYjAwMCBMPTEwMDAgRE1BX0JJ RElSRUNUSU9OQUwKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBvcmlvbi1laGNpIG9y aW9uLWVoY2kuMDogY29oZXJlbnQgaWR4IDI0NiBQPTIwOGYwMDAwIEQ9MWY5ZWMwMDAgTD0xMDAw IERNQV9CSURJUkVDVElPTkFMCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogb3Jpb24t ZWhjaSBvcmlvbi1laGNpLjA6IGNvaGVyZW50IGlkeCAyNDYgUD0yMDhmMTAwMCBEPTFmOWVkMDAw IEw9MTAwMCBETUFfQklESVJFQ1RJT05BTApKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6 IG9yaW9uLWVoY2kgb3Jpb24tZWhjaS4wOiBjb2hlcmVudCBpZHggMjQ3IFA9MjA4ZjIwMDAgRD0x ZjllZTAwMCBMPTEwMDAgRE1BX0JJRElSRUNUSU9OQUwKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIg a2VybmVsOiBvcmlvbi1laGNpIG9yaW9uLWVoY2kuMDogY29oZXJlbnQgaWR4IDI0NyBQPTIwOGYz MDAwIEQ9MWY5ZWYwMDAgTD0xMDAwIERNQV9CSURJUkVDVElPTkFMCkphbiAxOSAxMzo1NDo1OCBn dXJ1dmRyIGtlcm5lbDogb3Jpb24tZWhjaSBvcmlvbi1laGNpLjA6IGNvaGVyZW50IGlkeCAyNDgg UD0yMDhmNDAwMCBEPTFmOWYwMDAwIEw9MTAwMCBETUFfQklESVJFQ1RJT05BTApKYW4gMTkgMTM6 NTQ6NTggZ3VydXZkciBrZXJuZWw6IG9yaW9uLWVoY2kgb3Jpb24tZWhjaS4wOiBjb2hlcmVudCBp ZHggMjQ4IFA9MjA4ZjUwMDAgRD0xZjlmMTAwMCBMPTEwMDAgRE1BX0JJRElSRUNUSU9OQUwKSmFu IDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBvcmlvbi1laGNpIG9yaW9uLWVoY2kuMDogY29o ZXJlbnQgaWR4IDI0OSBQPTIwOGY2MDAwIEQ9MWY5ZjIwMDAgTD0xMDAwIERNQV9CSURJUkVDVElP TkFMCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogb3Jpb24tZWhjaSBvcmlvbi1laGNp LjA6IGNvaGVyZW50IGlkeCAyNDkgUD0yMDhmNzAwMCBEPTFmOWYzMDAwIEw9MTAwMCBETUFfQklE SVJFQ1RJT05BTApKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IG9yaW9uLWVoY2kgb3Jp b24tZWhjaS4wOiBjb2hlcmVudCBpZHggMjUwIFA9MjA4ZjgwMDAgRD0xZjlmNDAwMCBMPTEwMDAg RE1BX0JJRElSRUNUSU9OQUwKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBvcmlvbi1l aGNpIG9yaW9uLWVoY2kuMDogY29oZXJlbnQgaWR4IDI1MCBQPTIwOGY5MDAwIEQ9MWY5ZjUwMDAg TD0xMDAwIERNQV9CSURJUkVDVElPTkFMCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDog b3Jpb24tZWhjaSBvcmlvbi1laGNpLjA6IGNvaGVyZW50IGlkeCAyNTEgUD0yMDhmYTAwMCBEPTFm OWY2MDAwIEw9MTAwMCBETUFfQklESVJFQ1RJT05BTApKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBr ZXJuZWw6IG9yaW9uLWVoY2kgb3Jpb24tZWhjaS4wOiBjb2hlcmVudCBpZHggMjUxIFA9MjA4ZmIw MDAgRD0xZjlmNzAwMCBMPTEwMDAgRE1BX0JJRElSRUNUSU9OQUwKSmFuIDE5IDEzOjU0OjU4IGd1 cnV2ZHIga2VybmVsOiBvcmlvbi1laGNpIG9yaW9uLWVoY2kuMDogY29oZXJlbnQgaWR4IDI1MiBQ PTIwOGZjMDAwIEQ9MWY5ZjgwMDAgTD0xMDAwIERNQV9CSURJUkVDVElPTkFMCkphbiAxOSAxMzo1 NDo1OCBndXJ1dmRyIGtlcm5lbDogb3Jpb24tZWhjaSBvcmlvbi1laGNpLjA6IGNvaGVyZW50IGlk eCAyNTIgUD0yMDhmZDAwMCBEPTFmOWY5MDAwIEw9MTAwMCBETUFfQklESVJFQ1RJT05BTApKYW4g MTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IG9yaW9uLWVoY2kgb3Jpb24tZWhjaS4wOiBjb2hl cmVudCBpZHggMjUzIFA9MjA4ZmUwMDAgRD0xZjlmYTAwMCBMPTEwMDAgRE1BX0JJRElSRUNUSU9O QUwKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBvcmlvbi1laGNpIG9yaW9uLWVoY2ku MDogY29oZXJlbnQgaWR4IDI1MyBQPTIwOGZmMDAwIEQ9MWY5ZmIwMDAgTD0xMDAwIERNQV9CSURJ UkVDVElPTkFMCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogb3Jpb24tZWhjaSBvcmlv bi1laGNpLjA6IGNvaGVyZW50IGlkeCAyNTQgUD0yMDkwMDAwMCBEPTFmOWZjMDAwIEw9MTAwMCBE TUFfQklESVJFQ1RJT05BTApKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IG9yaW9uLWVo 
Y2kgb3Jpb24tZWhjaS4wOiBjb2hlcmVudCBpZHggMjU0IFA9MjA5MDEwMDAgRD0xZjlmZDAwMCBM PTEwMDAgRE1BX0JJRElSRUNUSU9OQUwKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBv cmlvbi1laGNpIG9yaW9uLWVoY2kuMDogY29oZXJlbnQgaWR4IDI1NSBQPTIwOTAyMDAwIEQ9MWY5 ZmUwMDAgTD0xMDAwIERNQV9CSURJUkVDVElPTkFMCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtl cm5lbDogb3Jpb24tZWhjaSBvcmlvbi1laGNpLjA6IGNvaGVyZW50IGlkeCAyNTUgUD0yMDkwMzAw MCBEPTFmOWZmMDAwIEw9MTAwMCBETUFfQklESVJFQ1RJT05BTApKYW4gMTkgMTM6NTQ6NTggZ3Vy dXZkciBrZXJuZWw6IG9yaW9uLWVoY2kgb3Jpb24tZWhjaS4wOiBjb2hlcmVudCBpZHggMjk2IFA9 MjExMWQwMDAgRD0xY2E1MDAwMCBMPWViMDAgRE1BX0JJRElSRUNUSU9OQUwKSmFuIDE5IDEzOjU0 OjU4IGd1cnV2ZHIga2VybmVsOiBvcmlvbi1laGNpIG9yaW9uLWVoY2kuMDogY29oZXJlbnQgaWR4 IDMwNCBQPTIxMTJkMDAwIEQ9MWNhNjAwMDAgTD1lYjAwIERNQV9CSURJUkVDVElPTkFMCkphbiAx OSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogb3Jpb24tZWhjaSBvcmlvbi1laGNpLjA6IGNvaGVy ZW50IGlkeCAzMTIgUD0yMTEzZDAwMCBEPTFjYTcwMDAwIEw9ZWIwMCBETUFfQklESVJFQ1RJT05B TApKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IG9yaW9uLWVoY2kgb3Jpb24tZWhjaS4w OiBjb2hlcmVudCBpZHggMzIwIFA9MjExNGQwMDAgRD0xY2E4MDAwMCBMPWViMDAgRE1BX0JJRElS RUNUSU9OQUwKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBzYXRhX212IHNhdGFfbXYu MDogY29oZXJlbnQgaWR4IDMyOCBQPTIwYTBkMDAwIEQ9MWZhOTAwMDAgTD0xMDAwIERNQV9CSURJ UkVDVElPTkFMCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogc2F0YV9tdiBzYXRhX212 LjA6IGNvaGVyZW50IGlkeCAzMjggUD0yMGEwZjAwMCBEPTFmYTkxMDAwIEw9MTAwMCBETUFfQklE SVJFQ1RJT05BTApKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IG9yaW9uLWVoY2kgb3Jp b24tZWhjaS4wOiBjb2hlcmVudCBpZHggMzI4IFA9MjExNWQwMDAgRD0xY2E5MDAwMCBMPWViMDAg RE1BX0JJRElSRUNUSU9OQUwKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBzYXRhX212 IHNhdGFfbXYuMDogY29oZXJlbnQgaWR4IDMyOSBQPTIwYTExMDAwIEQ9MWZhOTIwMDAgTD0xMDAw IERNQV9CSURJUkVDVElPTkFMCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogc2F0YV9t diBzYXRhX212LjA6IGNvaGVyZW50IGlkeCAzMjkgUD0yMGExMzAwMCBEPTFmYTkzMDAwIEw9MTAw MCBETUFfQklESVJFQ1RJT05BTApKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IHNhdGFf bXYgc2F0YV9tdi4wOiBjb2hlcmVudCBpZHggMzQyIFA9MjBhNDcwMDAgRD0xZmFhYzAwMCBMPTEw MDAgRE1BX0JJRElSRUNUSU9OQUwKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBzYXRh X212IHNhdGFfbXYuMDogY29oZXJlbnQgaWR4IDM0MiBQPTIwYTQ5MDAwIEQ9MWZhYWQwMDAgTD0x MDAwIERNQV9CSURJUkVDVElPTkFMCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogc2F0 YV9tdiBzYXRhX212LjA6IGNvaGVyZW50IGlkeCAzNTkgUD0yMGE0YjAwMCBEPTFmYWNlMDAwIEw9 MTAwMCBETUFfQklESVJFQ1RJT05BTApKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IHNh dGFfbXYgc2F0YV9tdi4wOiBjb2hlcmVudCBpZHggMzU5IFA9MjBhNGQwMDAgRD0xZmFjZjAwMCBM PTEwMDAgRE1BX0JJRElSRUNUSU9OQUwKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBT QUE3MTZ4IEZGIDAwMDA6MDA6MDEuMDogc2luZ2xlIGlkeCAzOTIgUD0xZjMxMTAwMCBEPTFmMzEx MDAwIEw9MTAwMCBETUFfVE9fREVWSUNFCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDog U0FBNzE2eCBGRiAwMDAwOjAwOjAxLjA6IHNjYXRoZXItZ2F0aGVyIGlkeCAzOTMgUD0xZjMxMjAw MCBEPTFmMzEyMDAwIEw9MTAwMCBETUFfRlJPTV9ERVZJQ0UKSmFuIDE5IDEzOjU0OjU4IGd1cnV2 ZHIga2VybmVsOiBTQUE3MTZ4IEZGIDAwMDA6MDA6MDEuMDogc2NhdGhlci1nYXRoZXIgaWR4IDM5 MyBQPTFmMzEzMDAwIEQ9MWYzMTMwMDAgTD0xMDAwIERNQV9GUk9NX0RFVklDRQpKYW4gMTkgMTM6 NTQ6NTggZ3VydXZkciBrZXJuZWw6IFNBQTcxNnggRkYgMDAwMDowMDowMS4wOiBzY2F0aGVyLWdh dGhlciBpZHggMzk0IFA9MWYzMTQwMDAgRD0xZjMxNDAwMCBMPTEwMDAgRE1BX0ZST01fREVWSUNF CkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogU0FBNzE2eCBGRiAwMDAwOjAwOjAxLjA6 IHNjYXRoZXItZ2F0aGVyIGlkeCAzOTQgUD0xZjMxNTAwMCBEPTFmMzE1MDAwIEw9MTAwMCBETUFf RlJPTV9ERVZJQ0UKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBTQUE3MTZ4IEZGIDAw MDA6MDA6MDEuMDogc2NhdGhlci1nYXRoZXIgaWR4IDM5NSBQPTFmMzE2MDAwIEQ9MWYzMTYwMDAg TD0xMDAwIERNQV9GUk9NX0RFVklDRQpKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IFNB 
QTcxNnggRkYgMDAwMDowMDowMS4wOiBzY2F0aGVyLWdhdGhlciBpZHggMzk1IFA9MWYzMTcwMDAg RD0xZjMxNzAwMCBMPTEwMDAgRE1BX0ZST01fREVWSUNFCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRy IGtlcm5lbDogU0FBNzE2eCBGRiAwMDAwOjAwOjAxLjA6IHNjYXRoZXItZ2F0aGVyIGlkeCAzOTYg UD0xZjMxODAwMCBEPTFmMzE4MDAwIEw9MTAwMCBETUFfRlJPTV9ERVZJQ0UKSmFuIDE5IDEzOjU0 OjU4IGd1cnV2ZHIga2VybmVsOiBTQUE3MTZ4IEZGIDAwMDA6MDA6MDEuMDogc2NhdGhlci1nYXRo ZXIgaWR4IDM5NiBQPTFmMzE5MDAwIEQ9MWYzMTkwMDAgTD0xMDAwIERNQV9GUk9NX0RFVklDRQpK YW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IFNBQTcxNnggRkYgMDAwMDowMDowMS4wOiBz Y2F0aGVyLWdhdGhlciBpZHggMzk3IFA9MWYzMWEwMDAgRD0xZjMxYTAwMCBMPTEwMDAgRE1BX0ZS T01fREVWSUNFCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogU0FBNzE2eCBGRiAwMDAw OjAwOjAxLjA6IHNjYXRoZXItZ2F0aGVyIGlkeCAzOTcgUD0xZjMxYjAwMCBEPTFmMzFiMDAwIEw9 MTAwMCBETUFfRlJPTV9ERVZJQ0UKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBTQUE3 MTZ4IEZGIDAwMDA6MDA6MDEuMDogc2NhdGhlci1nYXRoZXIgaWR4IDM5OCBQPTFmMzFjMDAwIEQ9 MWYzMWMwMDAgTD0xMDAwIERNQV9GUk9NX0RFVklDRQpKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBr ZXJuZWw6IFNBQTcxNnggRkYgMDAwMDowMDowMS4wOiBzY2F0aGVyLWdhdGhlciBpZHggMzk4IFA9 MWYzMWQwMDAgRD0xZjMxZDAwMCBMPTEwMDAgRE1BX0ZST01fREVWSUNFCkphbiAxOSAxMzo1NDo1 OCBndXJ1dmRyIGtlcm5lbDogU0FBNzE2eCBGRiAwMDAwOjAwOjAxLjA6IHNjYXRoZXItZ2F0aGVy IGlkeCAzOTkgUD0xZjMxZTAwMCBEPTFmMzFlMDAwIEw9MTAwMCBETUFfRlJPTV9ERVZJQ0UKSmFu IDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBTQUE3MTZ4IEZGIDAwMDA6MDA6MDEuMDogc2Nh dGhlci1nYXRoZXIgaWR4IDM5OSBQPTFmMzFmMDAwIEQ9MWYzMWYwMDAgTD0xMDAwIERNQV9GUk9N X0RFVklDRQpKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IFNBQTcxNnggRkYgMDAwMDow MDowMS4wOiBzY2F0aGVyLWdhdGhlciBpZHggNDAwIFA9MWYzMjAwMDAgRD0xZjMyMDAwMCBMPTEw MDAgRE1BX0ZST01fREVWSUNFCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogU0FBNzE2 eCBGRiAwMDAwOjAwOjAxLjA6IHNjYXRoZXItZ2F0aGVyIGlkeCA0MDAgUD0xZjMyMTAwMCBEPTFm MzIxMDAwIEw9MTAwMCBETUFfRlJPTV9ERVZJQ0UKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2Vy bmVsOiBTQUE3MTZ4IEZGIDAwMDA6MDA6MDEuMDogc2NhdGhlci1nYXRoZXIgaWR4IDQwMSBQPTFm MzIyMDAwIEQ9MWYzMjIwMDAgTD0xMDAwIERNQV9GUk9NX0RFVklDRQpKYW4gMTkgMTM6NTQ6NTgg Z3VydXZkciBrZXJuZWw6IFNBQTcxNnggRkYgMDAwMDowMDowMS4wOiBzY2F0aGVyLWdhdGhlciBp ZHggNDAxIFA9MWYzMjMwMDAgRD0xZjMyMzAwMCBMPTEwMDAgRE1BX0ZST01fREVWSUNFCkphbiAx OSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogU0FBNzE2eCBGRiAwMDAwOjAwOjAxLjA6IHNpbmds ZSBpZHggNDAyIFA9MWYzMjUwMDAgRD0xZjMyNTAwMCBMPTEwMDAgRE1BX1RPX0RFVklDRQpKYW4g MTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IFNBQTcxNnggRkYgMDAwMDowMDowMS4wOiBzY2F0 aGVyLWdhdGhlciBpZHggNDAzIFA9MWYzMjYwMDAgRD0xZjMyNjAwMCBMPTEwMDAgRE1BX0ZST01f REVWSUNFCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogU0FBNzE2eCBGRiAwMDAwOjAw OjAxLjA6IHNjYXRoZXItZ2F0aGVyIGlkeCA0MDMgUD0xZjMyNzAwMCBEPTFmMzI3MDAwIEw9MTAw MCBETUFfRlJPTV9ERVZJQ0UKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBTQUE3MTZ4 IEZGIDAwMDA6MDA6MDEuMDogc2NhdGhlci1nYXRoZXIgaWR4IDQwNCBQPTFmMzI4MDAwIEQ9MWYz MjgwMDAgTD0xMDAwIERNQV9GUk9NX0RFVklDRQpKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJu ZWw6IFNBQTcxNnggRkYgMDAwMDowMDowMS4wOiBzY2F0aGVyLWdhdGhlciBpZHggNDA0IFA9MWYz MjkwMDAgRD0xZjMyOTAwMCBMPTEwMDAgRE1BX0ZST01fREVWSUNFCkphbiAxOSAxMzo1NDo1OCBn dXJ1dmRyIGtlcm5lbDogU0FBNzE2eCBGRiAwMDAwOjAwOjAxLjA6IHNjYXRoZXItZ2F0aGVyIGlk eCA0MDUgUD0xZjMyYTAwMCBEPTFmMzJhMDAwIEw9MTAwMCBETUFfRlJPTV9ERVZJQ0UKSmFuIDE5 IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBTQUE3MTZ4IEZGIDAwMDA6MDA6MDEuMDogc2NhdGhl ci1nYXRoZXIgaWR4IDQwNSBQPTFmMzJiMDAwIEQ9MWYzMmIwMDAgTD0xMDAwIERNQV9GUk9NX0RF VklDRQpKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IFNBQTcxNnggRkYgMDAwMDowMDow MS4wOiBzY2F0aGVyLWdhdGhlciBpZHggNDA2IFA9MWYzMmMwMDAgRD0xZjMyYzAwMCBMPTEwMDAg RE1BX0ZST01fREVWSUNFCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogU0FBNzE2eCBG 
RiAwMDAwOjAwOjAxLjA6IHNjYXRoZXItZ2F0aGVyIGlkeCA0MDYgUD0xZjMyZDAwMCBEPTFmMzJk MDAwIEw9MTAwMCBETUFfRlJPTV9ERVZJQ0UKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVs OiBTQUE3MTZ4IEZGIDAwMDA6MDA6MDEuMDogc2NhdGhlci1nYXRoZXIgaWR4IDQwNyBQPTFmMzJl MDAwIEQ9MWYzMmUwMDAgTD0xMDAwIERNQV9GUk9NX0RFVklDRQpKYW4gMTkgMTM6NTQ6NTggZ3Vy dXZkciBrZXJuZWw6IFNBQTcxNnggRkYgMDAwMDowMDowMS4wOiBzY2F0aGVyLWdhdGhlciBpZHgg NDA3IFA9MWYzMmYwMDAgRD0xZjMyZjAwMCBMPTEwMDAgRE1BX0ZST01fREVWSUNFCkphbiAxOSAx Mzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogU0FBNzE2eCBGRiAwMDAwOjAwOjAxLjA6IHNjYXRoZXIt Z2F0aGVyIGlkeCA0MDggUD0xZjMzMDAwMCBEPTFmMzMwMDAwIEw9MTAwMCBETUFfRlJPTV9ERVZJ Q0UKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBTQUE3MTZ4IEZGIDAwMDA6MDA6MDEu MDogc2NhdGhlci1nYXRoZXIgaWR4IDQwOCBQPTFmMzMxMDAwIEQ9MWYzMzEwMDAgTD0xMDAwIERN QV9GUk9NX0RFVklDRQpKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IFNBQTcxNnggRkYg MDAwMDowMDowMS4wOiBzY2F0aGVyLWdhdGhlciBpZHggNDA5IFA9MWYzMzIwMDAgRD0xZjMzMjAw MCBMPTEwMDAgRE1BX0ZST01fREVWSUNFCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDog U0FBNzE2eCBGRiAwMDAwOjAwOjAxLjA6IHNjYXRoZXItZ2F0aGVyIGlkeCA0MDkgUD0xZjMzMzAw MCBEPTFmMzMzMDAwIEw9MTAwMCBETUFfRlJPTV9ERVZJQ0UKSmFuIDE5IDEzOjU0OjU4IGd1cnV2 ZHIga2VybmVsOiBTQUE3MTZ4IEZGIDAwMDA6MDA6MDEuMDogc2NhdGhlci1nYXRoZXIgaWR4IDQx MCBQPTFmMzM0MDAwIEQ9MWYzMzQwMDAgTD0xMDAwIERNQV9GUk9NX0RFVklDRQpKYW4gMTkgMTM6 NTQ6NTggZ3VydXZkciBrZXJuZWw6IFNBQTcxNnggRkYgMDAwMDowMDowMS4wOiBzY2F0aGVyLWdh dGhlciBpZHggNDEwIFA9MWYzMzUwMDAgRD0xZjMzNTAwMCBMPTEwMDAgRE1BX0ZST01fREVWSUNF CkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogU0FBNzE2eCBGRiAwMDAwOjAwOjAxLjA6 IHNpbmdsZSBpZHggNDExIFA9MWYzMzcwMDAgRD0xZjMzNzAwMCBMPTEwMDAgRE1BX1RPX0RFVklD RQpKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IFNBQTcxNnggRkYgMDAwMDowMDowMS4w OiBzY2F0aGVyLWdhdGhlciBpZHggNDEyIFA9MWYzMzgwMDAgRD0xZjMzODAwMCBMPTEwMDAgRE1B X0ZST01fREVWSUNFCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogU0FBNzE2eCBGRiAw MDAwOjAwOjAxLjA6IHNjYXRoZXItZ2F0aGVyIGlkeCA0MTIgUD0xZjMzOTAwMCBEPTFmMzM5MDAw IEw9MTAwMCBETUFfRlJPTV9ERVZJQ0UKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBT QUE3MTZ4IEZGIDAwMDA6MDA6MDEuMDogc2NhdGhlci1nYXRoZXIgaWR4IDQxMyBQPTFmMzNhMDAw IEQ9MWYzM2EwMDAgTD0xMDAwIERNQV9GUk9NX0RFVklDRQpKYW4gMTkgMTM6NTQ6NTggZ3VydXZk ciBrZXJuZWw6IFNBQTcxNnggRkYgMDAwMDowMDowMS4wOiBzY2F0aGVyLWdhdGhlciBpZHggNDEz IFA9MWYzM2IwMDAgRD0xZjMzYjAwMCBMPTEwMDAgRE1BX0ZST01fREVWSUNFCkphbiAxOSAxMzo1 NDo1OCBndXJ1dmRyIGtlcm5lbDogU0FBNzE2eCBGRiAwMDAwOjAwOjAxLjA6IHNjYXRoZXItZ2F0 aGVyIGlkeCA0MTQgUD0xZjMzYzAwMCBEPTFmMzNjMDAwIEw9MTAwMCBETUFfRlJPTV9ERVZJQ0UK SmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBTQUE3MTZ4IEZGIDAwMDA6MDA6MDEuMDog c2NhdGhlci1nYXRoZXIgaWR4IDQxNCBQPTFmMzNkMDAwIEQ9MWYzM2QwMDAgTD0xMDAwIERNQV9G Uk9NX0RFVklDRQpKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IFNBQTcxNnggRkYgMDAw MDowMDowMS4wOiBzY2F0aGVyLWdhdGhlciBpZHggNDE1IFA9MWYzM2UwMDAgRD0xZjMzZTAwMCBM PTEwMDAgRE1BX0ZST01fREVWSUNFCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogU0FB NzE2eCBGRiAwMDAwOjAwOjAxLjA6IHNjYXRoZXItZ2F0aGVyIGlkeCA0MTUgUD0xZjMzZjAwMCBE PTFmMzNmMDAwIEw9MTAwMCBETUFfRlJPTV9ERVZJQ0UKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIg a2VybmVsOiBTQUE3MTZ4IEZGIDAwMDA6MDA6MDEuMDogc2NhdGhlci1nYXRoZXIgaWR4IDQxNiBQ PTFmMzQwMDAwIEQ9MWYzNDAwMDAgTD0xMDAwIERNQV9GUk9NX0RFVklDRQpKYW4gMTkgMTM6NTQ6 NTggZ3VydXZkciBrZXJuZWw6IFNBQTcxNnggRkYgMDAwMDowMDowMS4wOiBzY2F0aGVyLWdhdGhl ciBpZHggNDE2IFA9MWYzNDEwMDAgRD0xZjM0MTAwMCBMPTEwMDAgRE1BX0ZST01fREVWSUNFCkph biAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogU0FBNzE2eCBGRiAwMDAwOjAwOjAxLjA6IHNj YXRoZXItZ2F0aGVyIGlkeCA0MTcgUD0xZjM0MjAwMCBEPTFmMzQyMDAwIEw9MTAwMCBETUFfRlJP TV9ERVZJQ0UKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBTQUE3MTZ4IEZGIDAwMDA6 
MDA6MDEuMDogc2NhdGhlci1nYXRoZXIgaWR4IDQxNyBQPTFmMzQzMDAwIEQ9MWYzNDMwMDAgTD0x MDAwIERNQV9GUk9NX0RFVklDRQpKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IFNBQTcx NnggRkYgMDAwMDowMDowMS4wOiBzY2F0aGVyLWdhdGhlciBpZHggNDE4IFA9MWYzNDQwMDAgRD0x ZjM0NDAwMCBMPTEwMDAgRE1BX0ZST01fREVWSUNFCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtl cm5lbDogU0FBNzE2eCBGRiAwMDAwOjAwOjAxLjA6IHNjYXRoZXItZ2F0aGVyIGlkeCA0MTggUD0x ZjM0NTAwMCBEPTFmMzQ1MDAwIEw9MTAwMCBETUFfRlJPTV9ERVZJQ0UKSmFuIDE5IDEzOjU0OjU4 IGd1cnV2ZHIga2VybmVsOiBTQUE3MTZ4IEZGIDAwMDA6MDA6MDEuMDogc2NhdGhlci1nYXRoZXIg aWR4IDQxOSBQPTFmMzQ2MDAwIEQ9MWYzNDYwMDAgTD0xMDAwIERNQV9GUk9NX0RFVklDRQpKYW4g MTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IFNBQTcxNnggRkYgMDAwMDowMDowMS4wOiBzY2F0 aGVyLWdhdGhlciBpZHggNDE5IFA9MWYzNDcwMDAgRD0xZjM0NzAwMCBMPTEwMDAgRE1BX0ZST01f REVWSUNFCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogU0FBNzE2eCBGRiAwMDAwOjAw OjAxLjA6IHNpbmdsZSBpZHggNDIwIFA9MWYzNDkwMDAgRD0xZjM0OTAwMCBMPTEwMDAgRE1BX1RP X0RFVklDRQpKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IFNBQTcxNnggRkYgMDAwMDow MDowMS4wOiBzY2F0aGVyLWdhdGhlciBpZHggNDIxIFA9MWYzNGEwMDAgRD0xZjM0YTAwMCBMPTEw MDAgRE1BX0ZST01fREVWSUNFCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogU0FBNzE2 eCBGRiAwMDAwOjAwOjAxLjA6IHNjYXRoZXItZ2F0aGVyIGlkeCA0MjEgUD0xZjM0YjAwMCBEPTFm MzRiMDAwIEw9MTAwMCBETUFfRlJPTV9ERVZJQ0UKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2Vy bmVsOiBTQUE3MTZ4IEZGIDAwMDA6MDA6MDEuMDogc2NhdGhlci1nYXRoZXIgaWR4IDQyMiBQPTFm MzRjMDAwIEQ9MWYzNGMwMDAgTD0xMDAwIERNQV9GUk9NX0RFVklDRQpKYW4gMTkgMTM6NTQ6NTgg Z3VydXZkciBrZXJuZWw6IFNBQTcxNnggRkYgMDAwMDowMDowMS4wOiBzY2F0aGVyLWdhdGhlciBp ZHggNDIyIFA9MWYzNGQwMDAgRD0xZjM0ZDAwMCBMPTEwMDAgRE1BX0ZST01fREVWSUNFCkphbiAx OSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogU0FBNzE2eCBGRiAwMDAwOjAwOjAxLjA6IHNjYXRo ZXItZ2F0aGVyIGlkeCA0MjMgUD0xZjM0ZTAwMCBEPTFmMzRlMDAwIEw9MTAwMCBETUFfRlJPTV9E RVZJQ0UKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBTQUE3MTZ4IEZGIDAwMDA6MDA6 MDEuMDogc2NhdGhlci1nYXRoZXIgaWR4IDQyMyBQPTFmMzRmMDAwIEQ9MWYzNGYwMDAgTD0xMDAw IERNQV9GUk9NX0RFVklDRQpKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IFNBQTcxNngg RkYgMDAwMDowMDowMS4wOiBzY2F0aGVyLWdhdGhlciBpZHggNDI0IFA9MWYzNTAwMDAgRD0xZjM1 MDAwMCBMPTEwMDAgRE1BX0ZST01fREVWSUNFCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5l bDogU0FBNzE2eCBGRiAwMDAwOjAwOjAxLjA6IHNjYXRoZXItZ2F0aGVyIGlkeCA0MjQgUD0xZjM1 MTAwMCBEPTFmMzUxMDAwIEw9MTAwMCBETUFfRlJPTV9ERVZJQ0UKSmFuIDE5IDEzOjU0OjU4IGd1 cnV2ZHIga2VybmVsOiBtdjY0M3h4X2V0aF9wb3J0IG12NjQzeHhfZXRoX3BvcnQuMTogc2luZ2xl IGlkeCA0MjQgUD0xY2I1MDAyMCBEPTFjYjUwMDIwIEw9NjAwIERNQV9GUk9NX0RFVklDRQpKYW4g MTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IG12NjQzeHhfZXRoX3BvcnQgbXY2NDN4eF9ldGhf cG9ydC4xOiBzaW5nbGUgaWR4IDQyNCBQPTFjYjUwNzAwIEQ9MWNiNTA3MDAgTD02MDAgRE1BX0ZS T01fREVWSUNFCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogbXY2NDN4eF9ldGhfcG9y dCBtdjY0M3h4X2V0aF9wb3J0LjE6IHNpbmdsZSBpZHggNDI0IFA9MWNiNTBkZTAgRD0xY2I1MGRl MCBMPTYwMCBETUFfRlJPTV9ERVZJQ0UKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBt djY0M3h4X2V0aF9wb3J0IG12NjQzeHhfZXRoX3BvcnQuMTogc2luZ2xlIGlkeCA0MjQgUD0xY2I1 MTRjMCBEPTFjYjUxNGMwIEw9NjAwIERNQV9GUk9NX0RFVklDRQpKYW4gMTkgMTM6NTQ6NTggZ3Vy dXZkciBrZXJuZWw6IG12NjQzeHhfZXRoX3BvcnQgbXY2NDN4eF9ldGhfcG9ydC4xOiBzaW5nbGUg aWR4IDQyNCBQPTFjYjUxYmEwIEQ9MWNiNTFiYTAgTD02MDAgRE1BX0ZST01fREVWSUNFCkphbiAx OSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogU0FBNzE2eCBGRiAwMDAwOjAwOjAxLjA6IHNjYXRo ZXItZ2F0aGVyIGlkeCA0MjUgUD0xZjM1MjAwMCBEPTFmMzUyMDAwIEw9MTAwMCBETUFfRlJPTV9E RVZJQ0UKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBTQUE3MTZ4IEZGIDAwMDA6MDA6 MDEuMDogc2NhdGhlci1nYXRoZXIgaWR4IDQyNSBQPTFmMzUzMDAwIEQ9MWYzNTMwMDAgTD0xMDAw IERNQV9GUk9NX0RFVklDRQpKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IG12NjQzeHhf 
ZXRoX3BvcnQgbXY2NDN4eF9ldGhfcG9ydC4xOiBzaW5nbGUgaWR4IDQyNSBQPTFjYjUyMjgwIEQ9 MWNiNTIyODAgTD02MDAgRE1BX0ZST01fREVWSUNFCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtl cm5lbDogbXY2NDN4eF9ldGhfcG9ydCBtdjY0M3h4X2V0aF9wb3J0LjE6IHNpbmdsZSBpZHggNDI1 IFA9MWNiNTI5NjAgRD0xY2I1Mjk2MCBMPTYwMCBETUFfRlJPTV9ERVZJQ0UKSmFuIDE5IDEzOjU0 OjU4IGd1cnV2ZHIga2VybmVsOiBtdjY0M3h4X2V0aF9wb3J0IG12NjQzeHhfZXRoX3BvcnQuMTog c2luZ2xlIGlkeCA0MjUgUD0xY2I1MzA0MCBEPTFjYjUzMDQwIEw9NjAwIERNQV9GUk9NX0RFVklD RQpKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IG12NjQzeHhfZXRoX3BvcnQgbXY2NDN4 eF9ldGhfcG9ydC4xOiBzaW5nbGUgaWR4IDQyNSBQPTFjYjUzNzIwIEQ9MWNiNTM3MjAgTD02MDAg RE1BX0ZST01fREVWSUNFCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogbXY2NDN4eF9l dGhfcG9ydCBtdjY0M3h4X2V0aF9wb3J0LjE6IHNpbmdsZSBpZHggNDI1IFA9MWNiNTNlMDAgRD0x Y2I1M2UwMCBMPTYwMCBETUFfRlJPTV9ERVZJQ0UKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2Vy bmVsOiBTQUE3MTZ4IEZGIDAwMDA6MDA6MDEuMDogc2NhdGhlci1nYXRoZXIgaWR4IDQyNiBQPTFm MzU0MDAwIEQ9MWYzNTQwMDAgTD0xMDAwIERNQV9GUk9NX0RFVklDRQpKYW4gMTkgMTM6NTQ6NTgg Z3VydXZkciBrZXJuZWw6IFNBQTcxNnggRkYgMDAwMDowMDowMS4wOiBzY2F0aGVyLWdhdGhlciBp ZHggNDI2IFA9MWYzNTUwMDAgRD0xZjM1NTAwMCBMPTEwMDAgRE1BX0ZST01fREVWSUNFCkphbiAx OSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogbXY2NDN4eF9ldGhfcG9ydCBtdjY0M3h4X2V0aF9w b3J0LjE6IHNpbmdsZSBpZHggNDI2IFA9MWNiNTQ0ZTAgRD0xY2I1NDRlMCBMPTYwMCBETUFfRlJP TV9ERVZJQ0UKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBtdjY0M3h4X2V0aF9wb3J0 IG12NjQzeHhfZXRoX3BvcnQuMTogc2luZ2xlIGlkeCA0MjYgUD0xY2I1NGJjMCBEPTFjYjU0YmMw IEw9NjAwIERNQV9GUk9NX0RFVklDRQpKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IG12 NjQzeHhfZXRoX3BvcnQgbXY2NDN4eF9ldGhfcG9ydC4xOiBzaW5nbGUgaWR4IDQyNiBQPTFjYjU1 MmEwIEQ9MWNiNTUyYTAgTD02MDAgRE1BX0ZST01fREVWSUNFCkphbiAxOSAxMzo1NDo1OCBndXJ1 dmRyIGtlcm5lbDogbXY2NDN4eF9ldGhfcG9ydCBtdjY0M3h4X2V0aF9wb3J0LjE6IHNpbmdsZSBp ZHggNDI2IFA9MWNiNTU5ODAgRD0xY2I1NTk4MCBMPTYwMCBETUFfRlJPTV9ERVZJQ0UKSmFuIDE5 IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBTQUE3MTZ4IEZGIDAwMDA6MDA6MDEuMDogc2NhdGhl ci1nYXRoZXIgaWR4IDQyNyBQPTFmMzU2MDAwIEQ9MWYzNTYwMDAgTD0xMDAwIERNQV9GUk9NX0RF VklDRQpKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6IFNBQTcxNnggRkYgMDAwMDowMDow MS4wOiBzY2F0aGVyLWdhdGhlciBpZHggNDI3IFA9MWYzNTcwMDAgRD0xZjM1NzAwMCBMPTEwMDAg RE1BX0ZST01fREVWSUNFCkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogbXY2NDN4eF9l dGhfcG9ydCBtdjY0M3h4X2V0aF9wb3J0LjE6IHNpbmdsZSBpZHggNDI3IFA9MWNiNTYwNjAgRD0x Y2I1NjA2MCBMPTYwMCBETUFfRlJPTV9ERVZJQ0UKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2Vy bmVsOiBtdjY0M3h4X2V0aF9wb3J0IG12NjQzeHhfZXRoX3BvcnQuMTogc2luZ2xlIGlkeCA0Mjcg UD0xY2I1Njc0MCBEPTFjYjU2NzQwIEw9NjAwIERNQV9GUk9NX0RFVklDRQpKYW4gMTkgMTM6NTQ6 NTggZ3VydXZkciBrZXJuZWw6IG12NjQzeHhfZXRoX3BvcnQgbXY2NDN4eF9ldGhfcG9ydC4xOiBz aW5nbGUgaWR4IDQyNyBQPTFjYjU2ZTIwIEQ9MWNiNTZlMjAgTD02MDAgRE1BX0ZST01fREVWSUNF CkphbiAxOSAxMzo1NDo1OCBndXJ1dmRyIGtlcm5lbDogbXY2NDN4eF9ldGhfcG9ydCBtdjY0M3h4 X2V0aF9wb3J0LjE6IHNpbmdsZSBpZHggNDI3IFA9MWNiNTc1MDAgRD0xY2I1NzUwMCBMPTYwMCBE TUFfRlJPTV9ERVZJQ0UKSmFuIDE5IDEzOjU0OjU4IGd1cnV2ZHIga2VybmVsOiBTQUE3MTZ4IEZG IDAwMDA6MDA6MDEuMDogc2NhdGhlci1nYXRoZXIgaWR4IDQyOCBQPTFmMzU4MDAwIEQ9MWYzNTgw MDAgTD0xMDAwIERNQV9GUk9NX0RFVklDRQpKYW4gMTkgMTM6NTQ6NTggZ3VydXZkciBrZXJuZWw6 IFNBQTcxNnggRkYgMDAwMDowMDowMS4wOiBzY2F0aGVyLWdhdGhlciBpZHggNDI4IFA9MWYzNTkw MDAgRD0xZjM1OTAwMCBMPTEwMDAgRE1BX0ZST01fREVWSUNFCkphbiAxOSAxMzo1NDo1OCBndXJ1 dmRyIGtlcm5lbDogU0FBNzE2eCBGRiAwMDAwOjAwOjAxLjA6IHNpbmdsZSBpZHggNDI5IFA9MWYz NWIwMDAgRD0xZjM1YjAwMCBMPTEwMDAgRE1BX1RPX0RFVklDRQpKYW4gMTkgMTM6NTQ6NTggZ3Vy dXZkciBrZXJuZWw6IFNBQTcxNnggRkYgMDAwMDowMDowMS4wOiBzY2F0aGVyLWdhdGhlciBpZHgg NDMwIFA9MWYzNWMwMDAgRD0xZjM1YzAwMCBMPTEwMDAgRE1BX0ZST01fREVWSUNFCkphbiAxOSAx 
[Remainder of the base64-encoded attachment omitted. Decoded, it is the DMA-API debug log referenced in the follow-up below: a kernel log from host guruvdr, timestamped Jan 19 13:54:58, listing every DMA mapping that was active at the time. Representative decoded entries (P= CPU physical address, D= DMA address, L= length in hex):

Jan 19 13:54:58 guruvdr kernel: SAA716x FF 0000:00:01.0: scather-gather idx 430 P=1f35d000 D=1f35d000 L=1000 DMA_FROM_DEVICE
Jan 19 13:54:58 guruvdr kernel: SAA716x FF 0000:00:01.0: single idx 438 P=1f36d000 D=1f36d000 L=1000 DMA_TO_DEVICE
Jan 19 13:54:58 guruvdr kernel: sata_mv sata_mv.0: coherent idx 440 P=209e7000 D=1fb71000 L=1000 DMA_BIDIRECTIONAL
Jan 19 13:54:58 guruvdr kernel: mv643xx_eth_port mv643xx_eth_port.0: single idx 824 P=1ee70020 D=1ee70020 L=600 DMA_FROM_DEVICE
Jan 19 13:54:58 guruvdr kernel: mv643xx_eth_port mv643xx_eth_port.1: coherent idx 918 P=2144c000 D=1ef2c000 L=1000 DMA_BIDIRECTIONAL

The bulk of the dump consists of page-sized (L=1000) scatter-gather and single streaming mappings from the SAA716x frontend, 0x600-byte single receive-buffer mappings from the two mv643xx_eth ports, and page-sized coherent mappings from sata_mv and mv643xx_eth.]
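The three entry types in the dump correspond directly to the DMA-mapping calls a driver makes; dma-debug records an entry when a mapping is created and drops it when the mapping is released. The sketch below is illustrative only (the device, buffer, scatterlist and sizes are hypothetical, not code from the saa716x, sata_mv or mv643xx_eth drivers), but it shows which call produces which kind of entry:

#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>
#include <linux/gfp.h>
#include <linux/errno.h>

/* Sketch: one example of each mapping type seen in the dump above.
 * "dev", "buf", "sgl"/"nents" and the sizes are placeholders.
 */
static int dump_entry_examples(struct device *dev, void *buf,
                               struct scatterlist *sgl, int nents)
{
        dma_addr_t single_handle, coherent_handle;
        void *coherent_cpu;
        int mapped;

        /* "single ... DMA_FROM_DEVICE": a streaming mapping of one buffer */
        single_handle = dma_map_single(dev, buf, 0x600, DMA_FROM_DEVICE);
        if (dma_mapping_error(dev, single_handle))
                return -ENOMEM;

        /* "scather-gather ...": dma_map_sg() creates one entry per segment */
        mapped = dma_map_sg(dev, sgl, nents, DMA_FROM_DEVICE);
        if (!mapped) {
                dma_unmap_single(dev, single_handle, 0x600, DMA_FROM_DEVICE);
                return -ENOMEM;
        }

        /* "coherent ... DMA_BIDIRECTIONAL": long-lived descriptor memory;
         * P= shows the CPU physical address, D= the address the device uses */
        coherent_cpu = dma_alloc_coherent(dev, PAGE_SIZE, &coherent_handle,
                                          GFP_KERNEL);
        if (!coherent_cpu) {
                dma_unmap_sg(dev, sgl, nents, DMA_FROM_DEVICE);
                dma_unmap_single(dev, single_handle, 0x600, DMA_FROM_DEVICE);
                return -ENOMEM;
        }

        /* ... hardware uses the mappings here ... */

        dma_free_coherent(dev, PAGE_SIZE, coherent_cpu, coherent_handle);
        dma_unmap_sg(dev, sgl, nents, DMA_FROM_DEVICE);
        dma_unmap_single(dev, single_handle, 0x600, DMA_FROM_DEVICE);
        return 0;
}

Streaming mappings ("single", "scather-gather") come and go with I/O, while "coherent" allocations typically live as long as the driver is loaded, which is consistent with the same few drivers accounting for all the coherent entries in the dump.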
--===============4884483268715003250==--

From andrew@lunn.ch Sat Jan 19 16:26:24 2013
From: Andrew Lunn
To: linaro-mm-sig@lists.linaro.org
Subject: Re: [Linaro-mm-sig] [PATCH v2] mm: dmapool: use provided gfp flags for all dma_alloc_coherent() calls
Date: Sat, 19 Jan 2013 17:24:29 +0100
Message-ID: <20130119162429.GB27825@lunn.ch>
In-Reply-To: <201301172026.45514.arnd@arndb.de>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="===============8162510452084981321=="

--===============8162510452084981321==
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: quoted-printable

On Thu, Jan 17, 2013 at 08:26:45PM +0000, Arnd Bergmann wrote:
> On Thursday 17 January 2013, Soeren Moch wrote:
> > On 17.01.2013 11:49, Arnd Bergmann wrote:
> > > On Wednesday 16 January 2013, Soeren Moch wrote:
> > >>>> I will see what I can do here. Is there an easy way to track the buffer
> > >>>> usage without having to wait for complete exhaustion?
> > >>>
> > >>> DMA_API_DEBUG
> > >>
> > >> OK, maybe I can try this.

I tried this. Not what I expected.
We have at least one problem with the ethernet driver:

WARNING: at lib/dma-debug.c:933 check_unmap+0x4b8/0x8a8()
mv643xx_eth_port mv643xx_eth_port.0: DMA-API: device driver failed to check map error[device address=0x000000001f22be00] [size=1536 bytes] [mapped as single]
Modules linked in:
[] (unwind_backtrace+0x0/0xf4) from [] (warn_slowpath_common+0x4c/0x64)
[] (warn_slowpath_common+0x4c/0x64) from [] (warn_slowpath_fmt+0x30/0x40)
[] (warn_slowpath_fmt+0x30/0x40) from [] (check_unmap+0x4b8/0x8a8)
[] (check_unmap+0x4b8/0x8a8) from [] (debug_dma_unmap_page+0x8c/0x98)
[] (debug_dma_unmap_page+0x8c/0x98) from [] (mv643xx_eth_poll+0x630/0x800)
[] (mv643xx_eth_poll+0x630/0x800) from [] (net_rx_action+0xcc/0x1d4)
[] (net_rx_action+0xcc/0x1d4) from [] (__do_softirq+0xa8/0x170)
[] (__do_softirq+0xa8/0x170) from [] (do_softirq+0x5c/0x6c)
[] (do_softirq+0x5c/0x6c) from [] (local_bh_enable+0xcc/0xdc)
[] (local_bh_enable+0xcc/0xdc) from [] (ip_finish_output+0x1c8/0x39c)
[] (ip_finish_output+0x1c8/0x39c) from [] (ip_local_out+0x28/0x2c)
[] (ip_local_out+0x28/0x2c) from [] (ip_queue_xmit+0x118/0x338)
[] (ip_queue_xmit+0x118/0x338) from [] (tcp_transmit_skb+0x3fc/0x8e4)
[] (tcp_transmit_skb+0x3fc/0x8e4) from [] (tcp_write_xmit+0x228/0xb08)
[] (tcp_write_xmit+0x228/0xb08) from [] (__tcp_push_pending_frames+0x30/0x9c)
[] (__tcp_push_pending_frames+0x30/0x9c) from [] (tcp_sendmsg+0x158/0xdc4)
[] (tcp_sendmsg+0x158/0xdc4) from [] (inet_sendmsg+0x38/0x74)
[] (inet_sendmsg+0x38/0x74) from [] (sock_aio_write+0x12c/0x138)
[] (sock_aio_write+0x12c/0x138) from [] (do_sync_write+0xa0/0xd0)
[] (do_sync_write+0xa0/0xd0) from [] (vfs_write+0x13c/0x144)
[] (vfs_write+0x13c/0x144) from [] (sys_write+0x44/0x70)
[] (sys_write+0x44/0x70) from [] (ret_fast_syscall+0x0/0x2c)
---[ end trace b75faa8779652e63 ]---

I'm getting about 4 errors reported a second from the ethernet driver. Before I look at issues with em28xx I will first try to get the noise from the ethernet driver sorted out.

Andrew

--===============8162510452084981321==--

From andrew@lunn.ch Sat Jan 19 19:03:28 2013
From: Andrew Lunn
To: linaro-mm-sig@lists.linaro.org
Subject: Re: [Linaro-mm-sig] [PATCH v2] mm: dmapool: use provided gfp flags for all dma_alloc_coherent() calls
Date: Sat, 19 Jan 2013 19:59:07 +0100
Message-ID: <20130119185907.GA20719@lunn.ch>
In-Reply-To: <50FABBED.1020905@web.de>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="===============0969819071486906701=="

--===============0969819071486906701==
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit

> Please find attached a debug log generated with your patch.
>
> I used the sata disk and two em28xx dvb sticks, no other usb devices,
> no ethernet cable connected, tuners on saa716x-based card not used.
>
> What I can see in the log: a lot of coherent mappings from sata_mv
> and orion_ehci, a few from mv643xx_eth, no other coherent mappings.
> All coherent mappings are page aligned, some of them (from orion_ehci)
> are not really small (as claimed in __alloc_from_pool).
>
> I don't believe in a memory leak. When I restart vdr (the application
> utilizing the dvb sticks) then there is enough dma memory available
> again.

Hi Soeren

We should be able to rule out a leak. Mount debugfs and then run

while [ /bin/true ] ; do cat /debug/dma-api/num_free_entries ; sleep 60 ; done

while you are capturing. See if the number goes down.
Andrew

--===============0969819071486906701==--

From arnd@arndb.de Sat Jan 19 20:06:18 2013
From: Arnd Bergmann
To: linaro-mm-sig@lists.linaro.org
Subject: Re: [Linaro-mm-sig] [PATCH v2] mm: dmapool: use provided gfp flags for all dma_alloc_coherent() calls
Date: Sat, 19 Jan 2013 20:05:19 +0000
Message-ID: <201301192005.20093.arnd@arndb.de>
In-Reply-To: <50FABBED.1020905@web.de>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="===============0400473926392473932=="

--===============0400473926392473932==
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: quoted-printable

On Saturday 19 January 2013, Soeren Moch wrote:
> What I can see in the log: a lot of coherent mappings from sata_mv and
> orion_ehci, a few from mv643xx_eth, no other coherent mappings.
> All coherent mappings are page aligned, some of them (from orion_ehci)
> are not really small (as claimed in __alloc_from_pool).

Right. Unfortunately, the output does not show which of the mappings
are atomic, so we still need to look through those that can be atomic
to understand what's going on. There are a few megabytes of coherent
mappings in total according to the output, but it seems that a significant
portion of them is atomic, which is a bit unexpected.

> I don't believe in a memory leak. When I restart vdr (the application
> utilizing the dvb sticks) then there is enough dma memory available
> again.

I found at least one source line that incorrectly uses an atomic
allocation, in ehci_mem_init():

	dma_alloc_coherent (ehci_to_hcd(ehci)->self.controller,
		ehci->periodic_size * sizeof(__le32),
		&ehci->periodic_dma, 0);

The last argument is the GFP_ flag, which should never be zero, as
that is implicit !wait. This function is called only once, so it
is not the actual culprit, but there could be other instances
where we accidentally allocate something as GFP_ATOMIC.

The total number of allocations I found for each type are

sata_mv: 66 pages (270336 bytes)
mv643xx_eth: 4 pages (16384 bytes)
orion_ehci: 154 pages (630784 bytes)
orion_ehci (atomic): 256 pages (1048576 bytes)

from the distribution of the numbers, it seems that there is exactly 1 MB
of data allocated between bus addresses 0x1f90000 and 0x1f9ffff, allocated
in individual pages. This matches the size of your pool, so it's definitely
something coming from USB, and no single other allocation, but it does not
directly point to a specific line of code.

One thing I found was that the ARM dma-mapping code seems buggy in the way
that it does a bitwise and between the gfp mask and GFP_ATOMIC, which does
not work because GFP_ATOMIC is defined by the absence of __GFP_WAIT.

I believe we need the patch below, but it is not clear to me if that issue
is related to your problem or not.
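(To make the flag arithmetic concrete: an illustrative helper, not part of the patch below; the helper name is made up, but the flag semantics are those of this kernel generation.)

#include <linux/gfp.h>

/*
 * Illustration only: GFP_ATOMIC is just __GFP_HIGH, so "gfp & GFP_ATOMIC"
 * really tests the "may use emergency pools" bit. A caller passing
 * GFP_NOWAIT (neither __GFP_WAIT nor __GFP_HIGH set) fails that test and
 * is sent down the blocking path even though it must not sleep. Testing
 * for the absence of __GFP_WAIT catches every non-sleeping caller.
 */
static bool must_use_atomic_pool(gfp_t gfp)
{
	return !(gfp & __GFP_WAIT);	/* rather than: gfp & GFP_ATOMIC */
}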
diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index 6b2fb87..c57975f 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -640,7 +641,7 @@ static void *__dma_alloc(struct device *dev, size_t size, dma_addr_t *handle,
 
 	if (is_coherent || nommu())
 		addr = __alloc_simple_buffer(dev, size, gfp, &page);
-	else if (gfp & GFP_ATOMIC)
+	else if (!(gfp & __GFP_WAIT))
 		addr = __alloc_from_pool(size, &page);
 	else if (!IS_ENABLED(CONFIG_CMA))
 		addr = __alloc_remap_buffer(dev, size, gfp, prot, &page, caller);
@@ -1272,7 +1273,7 @@ static void *arm_iommu_alloc_attrs(struct device *dev, size_t size,
 	*handle = DMA_ERROR_CODE;
 	size = PAGE_ALIGN(size);
 
-	if (gfp & GFP_ATOMIC)
+	if (!(gfp & __GFP_WAIT))
 		return __iommu_alloc_atomic(dev, size, handle);
 
 	pages = __iommu_alloc_buffer(dev, size, gfp, attrs);
8<-------

There is one more code path I could find, which is usb_submit_urb() =>
usb_hcd_submit_urb => ehci_urb_enqueue() => submit_async() =>
qh_append_tds() => qh_make(GFP_ATOMIC) => ehci_qh_alloc() =>
dma_pool_alloc() => pool_alloc_page() => dma_alloc_coherent()

So even for a GFP_KERNEL passed into usb_submit_urb, the ehci driver
causes the low-level allocation to be GFP_ATOMIC, because
qh_append_tds() is called under a spinlock. If we have hundreds
of URBs in flight, that will exhaust the pool rather quickly.

	Arnd

--===============0400473926392473932==--

From laurent.pinchart@ideasonboard.com Mon Jan 21 12:41:45 2013
From: Laurent Pinchart
To: linaro-mm-sig@lists.linaro.org
Subject: Re: [Linaro-mm-sig] CDF discussions at FOSDEM
Date: Mon, 21 Jan 2013 13:43:27 +0100
Message-ID: <52263038.NjR481MbN4@avalon>
In-Reply-To:
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="===============7858757433928637397=="

--===============7858757433928637397==
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit

Hi Daniel,

On Thursday 17 January 2013 13:29:27 Daniel Vetter wrote:
> On Thu, Jan 17, 2013 at 9:42 AM, Jani Nikula wrote:
> > On Fri, 11 Jan 2013, Laurent Pinchart wrote:
> >> Would anyone be interested in meeting at the FOSDEM to discuss the Common
> >> Display Framework ? There will be a CDF meeting at the ELC at the end of
> >> February, the FOSDEM would be a good venue for European developers.
> >
> > Yes, count me in,
>
> Jesse, Ville and me should also be around. Do we have a slot fixed already?

I've sent a mail to the FOSDEM organizers to request a hacking room for a
couple of hours Sunday. I'll let you know as soon as I get a reply.
-- Regards, Laurent Pinchart --===============7858757433928637397==-- From smoch@web.de Mon Jan 21 15:03:52 2013 From: Soeren Moch To: linaro-mm-sig@lists.linaro.org Subject: Re: [Linaro-mm-sig] [PATCH v2] mm: dmapool: use provided gfp flags for all dma_alloc_coherent() calls Date: Mon, 21 Jan 2013 16:01:24 +0100 Message-ID: <50FD5844.1010201@web.de> In-Reply-To: <201301192005.20093.arnd@arndb.de> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============1341450629738929304==" --===============1341450629738929304== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable On 01/19/13 21:05, Arnd Bergmann wrote: > I found at least one source line that incorrectly uses an atomic > allocation, in ehci_mem_init(): > > dma_alloc_coherent (ehci_to_hcd(ehci)->self.controller, > ehci->periodic_size * sizeof(__le32), > &ehci->periodic_dma, 0); > > The last argument is the GFP_ flag, which should never be zero, as > that is implicit !wait. This function is called only once, so it > is not the actual culprit, but there could be other instances > where we accidentally allocate something as GFP_ATOMIC. > > The total number of allocations I found for each type are > > sata_mv: 66 pages (270336 bytes) > mv643xx_eth: 4 pages =3D=3D (16384 bytes) > orion_ehci: 154 pages (630784 bytes) > orion_ehci (atomic): 256 pages (1048576 bytes) > > from the distribution of the numbers, it seems that there is exactly 1 MB > of data allocated between bus addresses 0x1f90000 and 0x1f9ffff, allocated > in individual pages. This matches the size of your pool, so it's definitely > something coming from USB, and no single other allocation, but it does not > directly point to a specific line of code. Very interesting, so this is no fragmentation problem nor something=20 caused by sata or ethernet. > One thing I found was that the ARM dma-mapping code seems buggy in the way > that it does a bitwise and between the gfp mask and GFP_ATOMIC, which does > not work because GFP_ATOMIC is defined by the absence of __GFP_WAIT. > > I believe we need the patch below, but it is not clear to me if that issue > is related to your problem or now. Out of curiosity I checked include/linux/gfp.h. GFP_ATOMIC is defined as=20 __GFP_HIGH (which means 'use emergency pool', and no wait), so this=20 patch should not make any difference for "normal" (GPF_ATOMIC /=20 GFP_KERNEL) allocations, only for gfp_flags accidentally set to zero.=20 So, can a new test with this patch help to debug the pool exhaustion? 
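(For reference, the definitions being discussed, an abridged excerpt in the spirit of include/linux/gfp.h from this kernel generation; only the flags mentioned in this thread are shown:)

#define __GFP_WAIT	((__force gfp_t)___GFP_WAIT)	/* Can wait and reschedule? */
#define __GFP_HIGH	((__force gfp_t)___GFP_HIGH)	/* Should access emergency pools? */

#define GFP_NOWAIT	(GFP_ATOMIC & ~__GFP_HIGH)
#define GFP_ATOMIC	(__GFP_HIGH)
#define GFP_KERNEL	(__GFP_WAIT | __GFP_IO | __GFP_FS)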
> diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c > index 6b2fb87..c57975f 100644 > --- a/arch/arm/mm/dma-mapping.c > +++ b/arch/arm/mm/dma-mapping.c > @@ -640,7 +641,7 @@ static void *__dma_alloc(struct device *dev, size_t siz= e, dma_addr_t *handle, > =20 > if (is_coherent || nommu()) > addr =3D __alloc_simple_buffer(dev, size, gfp, &page); > - else if (gfp & GFP_ATOMIC) > + else if (!(gfp & __GFP_WAIT)) > addr =3D __alloc_from_pool(size, &page); > else if (!IS_ENABLED(CONFIG_CMA)) > addr =3D __alloc_remap_buffer(dev, size, gfp, prot, &page, caller); > @@ -1272,7 +1273,7 @@ static void *arm_iommu_alloc_attrs(struct device *dev= , size_t size, > *handle =3D DMA_ERROR_CODE; > size =3D PAGE_ALIGN(size); > =20 > - if (gfp & GFP_ATOMIC) > + if (!(gfp & __GFP_WAIT)) > return __iommu_alloc_atomic(dev, size, handle); > =20 > pages =3D __iommu_alloc_buffer(dev, size, gfp, attrs); > 8<------- > > There is one more code path I could find, which is usb_submit_urb() =3D> > usb_hcd_submit_urb =3D> ehci_urb_enqueue() =3D> submit_async() =3D> > qh_append_tds() =3D> qh_make(GFP_ATOMIC) =3D> ehci_qh_alloc() =3D> > dma_pool_alloc() =3D> pool_alloc_page() =3D> dma_alloc_coherent() > > So even for a GFP_KERNEL passed into usb_submit_urb, the ehci driver > causes the low-level allocation to be GFP_ATOMIC, because > qh_append_tds() is called under a spinlock. If we have hundreds > of URBs in flight, that will exhaust the pool rather quickly. > Maybe there are hundreds of URBs in flight in my application, I have no=20 idea how to check this. It seems to me that bad reception conditions=20 (lost lock / regained lock messages for some dvb channels) accelerate=20 the buffer exhaustion. But even with a 4MB coherent pool I see the=20 error. Is there any chance to fix this in the usb or dvb subsystem (or=20 wherever)? Should I try to further increase the pool size, or what else=20 can I do besides using an older kernel? Soeren --===============1341450629738929304==-- From arnd@arndb.de Mon Jan 21 18:56:52 2013 From: Arnd Bergmann To: linaro-mm-sig@lists.linaro.org Subject: Re: [Linaro-mm-sig] [PATCH v2] mm: dmapool: use provided gfp flags for all dma_alloc_coherent() calls Date: Mon, 21 Jan 2013 18:55:25 +0000 Message-ID: <201301211855.25455.arnd@arndb.de> In-Reply-To: <50FD5844.1010201@web.de> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============5317681127209151571==" --===============5317681127209151571== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable On Monday 21 January 2013, Soeren Moch wrote: > On 01/19/13 21:05, Arnd Bergmann wrote: > > from the distribution of the numbers, it seems that there is exactly 1 MB > > of data allocated between bus addresses 0x1f90000 and 0x1f9ffff, allocated > > in individual pages. This matches the size of your pool, so it's definite= ly > > something coming from USB, and no single other allocation, but it does not > > directly point to a specific line of code. > Very interesting, so this is no fragmentation problem nor something=20 > caused by sata or ethernet. Right. > > One thing I found was that the ARM dma-mapping code seems buggy in the way > > that it does a bitwise and between the gfp mask and GFP_ATOMIC, which does > > not work because GFP_ATOMIC is defined by the absence of __GFP_WAIT. > > > > I believe we need the patch below, but it is not clear to me if that issue > > is related to your problem or now. > Out of curiosity I checked include/linux/gfp.h. 
GFP_ATOMIC is defined as=20 > __GFP_HIGH (which means 'use emergency pool', and no wait), so this=20 > patch should not make any difference for "normal" (GPF_ATOMIC /=20 > GFP_KERNEL) allocations, only for gfp_flags accidentally set to zero.=20 Yes, or one of the rare cases where someone intentionally does something like (GFP_ATOMIC & !__GFP_HIGH) or (GFP_KERNEL || __GFP_HIGH), which are both wrong. > So, can a new test with this patch help to debug the pool exhaustion? Yes, but I would not expect this to change much. It's a bug, but not likely the one you are hitting. > > So even for a GFP_KERNEL passed into usb_submit_urb, the ehci driver > > causes the low-level allocation to be GFP_ATOMIC, because > > qh_append_tds() is called under a spinlock. If we have hundreds > > of URBs in flight, that will exhaust the pool rather quickly. > > > Maybe there are hundreds of URBs in flight in my application, I have no=20 > idea how to check this. I don't know a lot about USB, but I always assumed that this was not a normal condition and that there are only a couple of URBs per endpoint used at a time. Maybe Greg or someone else with a USB background can shed some light on this. > It seems to me that bad reception conditions=20 > (lost lock / regained lock messages for some dvb channels) accelerate=20 > the buffer exhaustion. But even with a 4MB coherent pool I see the=20 > error. Is there any chance to fix this in the usb or dvb subsystem (or=20 > wherever)? Should I try to further increase the pool size, or what else=20 > can I do besides using an older kernel? There are two things that I think can be done if hundreds of URBs is indeed the normal working condition for this driver: * change the locking in your driver so it can actually call usb_submit_urb using GFP_KERNEL rather than GFP_ATOMIC * after that is done, rework the ehci_hcd driver so it can do the allocation inside of the submit_urb path to use the mem_flags rather than unconditional GFP_ATOMIC. Note that the problem you are seeing does not just exist in the case of the atomic coherent pool getting exhausted, but also on any platform that runs into an out-of-memory condition. Arnd --===============5317681127209151571==-- From gregkh@linuxfoundation.org Mon Jan 21 21:01:56 2013 From: Greg KH To: linaro-mm-sig@lists.linaro.org Subject: Re: [Linaro-mm-sig] [PATCH v2] mm: dmapool: use provided gfp flags for all dma_alloc_coherent() calls Date: Mon, 21 Jan 2013 13:01:50 -0800 Message-ID: <20130121210150.GA9184@kroah.com> In-Reply-To: <201301211855.25455.arnd@arndb.de> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============1458738105803066585==" --===============1458738105803066585== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable On Mon, Jan 21, 2013 at 06:55:25PM +0000, Arnd Bergmann wrote: > On Monday 21 January 2013, Soeren Moch wrote: > > On 01/19/13 21:05, Arnd Bergmann wrote: > > > from the distribution of the numbers, it seems that there is exactly 1 = MB > > > of data allocated between bus addresses 0x1f90000 and 0x1f9ffff, alloca= ted > > > in individual pages. This matches the size of your pool, so it's defini= tely > > > something coming from USB, and no single other allocation, but it does = not > > > directly point to a specific line of code. > > Very interesting, so this is no fragmentation problem nor something=20 > > caused by sata or ethernet. >=20 > Right. 
>=20 > > > One thing I found was that the ARM dma-mapping code seems buggy in the = way > > > that it does a bitwise and between the gfp mask and GFP_ATOMIC, which d= oes > > > not work because GFP_ATOMIC is defined by the absence of __GFP_WAIT. > > > > > > I believe we need the patch below, but it is not clear to me if that is= sue > > > is related to your problem or now. > > Out of curiosity I checked include/linux/gfp.h. GFP_ATOMIC is defined as = > > __GFP_HIGH (which means 'use emergency pool', and no wait), so this=20 > > patch should not make any difference for "normal" (GPF_ATOMIC /=20 > > GFP_KERNEL) allocations, only for gfp_flags accidentally set to zero.=20 >=20 > Yes, or one of the rare cases where someone intentionally does something li= ke > (GFP_ATOMIC & !__GFP_HIGH) or (GFP_KERNEL || __GFP_HIGH), which are both > wrong. >=20 > > So, can a new test with this patch help to debug the pool exhaustion? >=20 > Yes, but I would not expect this to change much. It's a bug, but not likely > the one you are hitting. >=20 > > > So even for a GFP_KERNEL passed into usb_submit_urb, the ehci driver > > > causes the low-level allocation to be GFP_ATOMIC, because > > > qh_append_tds() is called under a spinlock. If we have hundreds > > > of URBs in flight, that will exhaust the pool rather quickly. > > > > > Maybe there are hundreds of URBs in flight in my application, I have no=20 > > idea how to check this. >=20 > I don't know a lot about USB, but I always assumed that this was not > a normal condition and that there are only a couple of URBs per endpoint > used at a time. Maybe Greg or someone else with a USB background can > shed some light on this. There's no restriction on how many URBs a driver can have outstanding at once, and if you have a system with a lot of USB devices running at the same time, there could be lots of URBs in flight depending on the number of host controllers and devices and drivers being used. Sorry, greg k-h --===============1458738105803066585==-- From francescolavra.fl@gmail.com Tue Jan 22 15:12:47 2013 From: Francesco Lavra To: linaro-mm-sig@lists.linaro.org Subject: Re: [Linaro-mm-sig] [PATCH 4/7] fence: dma-buf cross-device synchronization (v11) Date: Tue, 22 Jan 2013 16:13:11 +0100 Message-ID: <50FEAC87.7090702@gmail.com> In-Reply-To: <1358253244-11453-5-git-send-email-maarten.lankhorst@canonical.com> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============1627606891093727163==" --===============1627606891093727163== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Hi, On 01/15/2013 01:34 PM, Maarten Lankhorst wrote: [...] > diff --git a/include/linux/fence.h b/include/linux/fence.h > new file mode 100644 > index 0000000..d9f091d > --- /dev/null > +++ b/include/linux/fence.h > @@ -0,0 +1,347 @@ > +/* > + * Fence mechanism for dma-buf to allow for asynchronous dma access > + * > + * Copyright (C) 2012 Canonical Ltd > + * Copyright (C) 2012 Texas Instruments > + * > + * Authors: > + * Rob Clark > + * Maarten Lankhorst > + * > + * This program is free software; you can redistribute it and/or modify it > + * under the terms of the GNU General Public License version 2 as publishe= d by > + * the Free Software Foundation. > + * > + * This program is distributed in the hope that it will be useful, but WIT= HOUT > + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or > + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License f= or > + * more details. 
> + * > + * You should have received a copy of the GNU General Public License along= with > + * this program. If not, see . > + */ > + > +#ifndef __LINUX_FENCE_H > +#define __LINUX_FENCE_H > + > +#include > +#include > +#include > +#include > +#include > +#include > +#include > + > +struct fence; > +struct fence_ops; > +struct fence_cb; > + > +/** > + * struct fence - software synchronization primitive > + * @refcount: refcount for this fence > + * @ops: fence_ops associated with this fence > + * @cb_list: list of all callbacks to call > + * @lock: spin_lock_irqsave used for locking > + * @priv: fence specific private data > + * @flags: A mask of FENCE_FLAG_* defined below > + * > + * the flags member must be manipulated and read using the appropriate > + * atomic ops (bit_*), so taking the spinlock will not be needed most > + * of the time. > + * > + * FENCE_FLAG_SIGNALED_BIT - fence is already signaled > + * FENCE_FLAG_ENABLE_SIGNAL_BIT - enable_signaling might have been called* > + * FENCE_FLAG_USER_BITS - start of the unused bits, can be used by the > + * implementer of the fence for its own purposes. Can be used in different > + * ways by different fence implementers, so do not rely on this. > + * > + * *) Since atomic bitops are used, this is not guaranteed to be the case. > + * Particularly, if the bit was set, but fence_signal was called right > + * before this bit was set, it would have been able to set the > + * FENCE_FLAG_SIGNALED_BIT, before enable_signaling was called. > + * Adding a check for FENCE_FLAG_SIGNALED_BIT after setting > + * FENCE_FLAG_ENABLE_SIGNAL_BIT closes this race, and makes sure that > + * after fence_signal was called, any enable_signaling call will have eith= er > + * been completed, or never called at all. > + */ > +struct fence { > + struct kref refcount; > + const struct fence_ops *ops; > + struct list_head cb_list; > + spinlock_t *lock; > + unsigned context, seqno; > + unsigned long flags; > +}; The documentation above should be updated with the new structure members context and seqno. > + > +enum fence_flag_bits { > + FENCE_FLAG_SIGNALED_BIT, > + FENCE_FLAG_ENABLE_SIGNAL_BIT, > + FENCE_FLAG_USER_BITS, /* must always be last member */ > +}; > + > +typedef void (*fence_func_t)(struct fence *fence, struct fence_cb *cb, voi= d *priv); > + > +/** > + * struct fence_cb - callback for fence_add_callback > + * @func: fence_func_t to call > + * @priv: value of priv to pass to function > + * > + * This struct will be initialized by fence_add_callback, additional > + * data can be passed along by embedding fence_cb in another struct. > + */ > +struct fence_cb { > + struct list_head node; > + fence_func_t func; > + void *priv; > +}; Documentation should be updated here too. > + > +/** > + * struct fence_ops - operations implemented for fence > + * @enable_signaling: enable software signaling of fence > + * @signaled: [optional] peek whether the fence is signaled > + * @release: [optional] called on destruction of fence > + * > + * Notes on enable_signaling: > + * For fence implementations that have the capability for hw->hw > + * signaling, they can implement this op to enable the necessary > + * irqs, or insert commands into cmdstream, etc. This is called > + * in the first wait() or add_callback() path to let the fence > + * implementation know that there is another driver waiting on > + * the signal (ie. hw->sw case). > + * > + * This function can be called called from atomic context, but not > + * from irq context, so normal spinlocks can be used. 
> + * > + * A return value of false indicates the fence already passed, > + * or some failure occured that made it impossible to enable > + * signaling. True indicates succesful enabling. > + * > + * Calling fence_signal before enable_signaling is called allows > + * for a tiny race window in which enable_signaling is called during, > + * before, or after fence_signal. To fight this, it is recommended > + * that before enable_signaling returns true an extra reference is > + * taken on the fence, to be released when the fence is signaled. > + * This will mean fence_signal will still be called twice, but > + * the second time will be a noop since it was already signaled. > + * > + * Notes on release: > + * Can be NULL, this function allows additional commands to run on > + * destruction of the fence. Can be called from irq context. > + * If pointer is set to NULL, kfree will get called instead. > + */ > + > +struct fence_ops { > + bool (*enable_signaling)(struct fence *fence); > + bool (*signaled)(struct fence *fence); > + long (*wait)(struct fence *fence, bool intr, signed long); > + void (*release)(struct fence *fence); > +}; Ditto. > + > +/** > + * __fence_init - Initialize a custom fence. > + * @fence: [in] the fence to initialize > + * @ops: [in] the fence_ops for operations on this fence > + * @lock: [in] the irqsafe spinlock to use for locking this fence > + * @context: [in] the execution context this fence is run on > + * @seqno: [in] a linear increasing sequence number for this context > + * > + * Initializes an allocated fence, the caller doesn't have to keep its > + * refcount after committing with this fence, but it will need to hold a > + * refcount again if fence_ops.enable_signaling gets called. This can > + * be used for other implementing other types of fence. > + * > + * context and seqno are used for easy comparison between fences, allowing > + * to check which fence is later by simply using fence_later. > + */ > +static inline void > +__fence_init(struct fence *fence, const struct fence_ops *ops, > + spinlock_t *lock, unsigned context, unsigned seqno) > +{ > + BUG_ON(!ops || !lock || !ops->enable_signaling || !ops->wait); > + > + kref_init(&fence->refcount); > + fence->ops =3D ops; > + INIT_LIST_HEAD(&fence->cb_list); > + fence->lock =3D lock; > + fence->context =3D context; > + fence->seqno =3D seqno; > + fence->flags =3D 0UL; > +} > + > +/** > + * fence_get - increases refcount of the fence > + * @fence: [in] fence to increase refcount of > + */ > +static inline void fence_get(struct fence *fence) > +{ > + if (WARN_ON(!fence)) > + return; > + kref_get(&fence->refcount); > +} > + > +extern void release_fence(struct kref *kref); > + > +/** > + * fence_put - decreases refcount of the fence > + * @fence: [in] fence to reduce refcount of > + */ > +static inline void fence_put(struct fence *fence) > +{ > + if (WARN_ON(!fence)) > + return; > + kref_put(&fence->refcount, release_fence); > +} > + > +int fence_signal(struct fence *fence); > +int __fence_signal(struct fence *fence); > +long fence_default_wait(struct fence *fence, bool intr, signed long); In the parameter list the first two parameters are named, and the last one isn't. Feels a bit odd... > +int fence_add_callback(struct fence *fence, struct fence_cb *cb, > + fence_func_t func, void *priv); > +bool fence_remove_callback(struct fence *fence, struct fence_cb *cb); > +void fence_enable_sw_signaling(struct fence *fence); > + > +/** > + * fence_is_signaled - Return an indication if the fence is signaled yet. 
> + * @fence: [in] the fence to check > + * > + * Returns true if the fence was already signaled, false if not. Since this > + * function doesn't enable signaling, it is not guaranteed to ever return = true > + * If fence_add_callback, fence_wait or fence_enable_sw_signaling > + * haven't been called before. > + * > + * It's recommended for seqno fences to call fence_signal when the > + * operation is complete, it makes it possible to prevent issues from > + * wraparound between time of issue and time of use by checking the return > + * value of this function before calling hardware-specific wait instructio= ns. > + */ > +static inline bool > +fence_is_signaled(struct fence *fence) > +{ > + if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags)) > + return true; > + > + if (fence->ops->signaled && fence->ops->signaled(fence)) { > + fence_signal(fence); > + return true; > + } > + > + return false; > +} > + > +/** > + * fence_later - return the chronologically later fence > + * @f1: [in] the first fence from the same context > + * @f2: [in] the second fence from the same context > + * > + * Returns NULL if both fences are signaled, otherwise the fence that woul= d be > + * signaled last. Both fences must be from the same context, since a seqno= is > + * not re-used across contexts. > + */ > +static inline struct fence *fence_later(struct fence *f1, struct fence *f2) > +{ > + bool sig1, sig2; > + > + /* > + * can't check just FENCE_FLAG_SIGNALED_BIT here, it may never have been > + * set called if enable_signaling wasn't, and enabling that here is > + * overkill. > + */ > + sig1 =3D fence_is_signaled(f1); > + sig2 =3D fence_is_signaled(f2); > + > + if (sig1 && sig2) > + return NULL; > + > + BUG_ON(f1->context !=3D f2->context); > + > + if (sig1 || f2->seqno - f2->seqno <=3D INT_MAX) I guess you meant (f2->seqno - f1->seqno). > + return f2; > + else > + return f1; > +} Regards, Francesco --===============1627606891093727163==-- From francescolavra.fl@gmail.com Tue Jan 22 16:46:47 2013 From: Francesco Lavra To: linaro-mm-sig@lists.linaro.org Subject: Re: [Linaro-mm-sig] [PATCH 6/7] reservation: cross-device reservation support Date: Tue, 22 Jan 2013 17:47:11 +0100 Message-ID: <50FEC28F.3090100@gmail.com> In-Reply-To: <1358253244-11453-7-git-send-email-maarten.lankhorst@canonical.com> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============8126875833713849733==" --===============8126875833713849733== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable On 01/15/2013 01:34 PM, Maarten Lankhorst wrote: > This adds support for a generic reservations framework that can be > hooked up to ttm and dma-buf and allows easy sharing of reservations > across devices. >=20 > The idea is that a dma-buf and ttm object both will get a pointer > to a struct reservation_object, which has to be reserved before > anything is done with the contents of the dma-buf. >=20 > Signed-off-by: Maarten Lankhorst > --- > Documentation/DocBook/device-drivers.tmpl | 2 + > drivers/base/Makefile | 2 +- > drivers/base/reservation.c | 251 ++++++++++++++++++++++++++= ++++ > include/linux/reservation.h | 182 ++++++++++++++++++++++ > 4 files changed, 436 insertions(+), 1 deletion(-) > create mode 100644 drivers/base/reservation.c > create mode 100644 include/linux/reservation.h [...] 
> diff --git a/include/linux/reservation.h b/include/linux/reservation.h > new file mode 100644 > index 0000000..fc2349d > --- /dev/null > +++ b/include/linux/reservation.h > @@ -0,0 +1,182 @@ > +/* > + * Header file for reservations for dma-buf and ttm > + * > + * Copyright(C) 2011 Linaro Limited. All rights reserved. > + * Copyright (C) 2012 Canonical Ltd > + * Copyright (C) 2012 Texas Instruments > + * > + * Authors: > + * Rob Clark > + * Maarten Lankhorst > + * Thomas Hellstrom > + * > + * Based on bo.c which bears the following copyright notice, > + * but is dual licensed: > + * > + * Copyright (c) 2006-2009 VMware, Inc., Palo Alto, CA., USA > + * All Rights Reserved. > + * > + * Permission is hereby granted, free of charge, to any person obtaining a > + * copy of this software and associated documentation files (the > + * "Software"), to deal in the Software without restriction, including > + * without limitation the rights to use, copy, modify, merge, publish, > + * distribute, sub license, and/or sell copies of the Software, and to > + * permit persons to whom the Software is furnished to do so, subject to > + * the following conditions: > + * > + * The above copyright notice and this permission notice (including the > + * next paragraph) shall be included in all copies or substantial portions > + * of the Software. > + * > + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS= OR > + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, > + * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL > + * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY C= LAIM, > + * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR > + * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR T= HE > + * USE OR OTHER DEALINGS IN THE SOFTWARE. > + */ > +#ifndef _LINUX_RESERVATION_H > +#define _LINUX_RESERVATION_H > + > +#include > +#include > +#include > + > +#define BUF_MAX_SHARED_FENCE 8 > + > +struct fence; > + > +extern atomic_long_t reservation_counter; > +extern const char reservation_object_name[]; > +extern struct lock_class_key reservation_object_class; > +extern const char reservation_ticket_name[]; > +extern struct lock_class_key reservation_ticket_class; > + > +struct reservation_object { > + struct ticket_mutex lock; > + > + u32 fence_shared_count; > + struct fence *fence_excl; > + struct fence *fence_shared[BUF_MAX_SHARED_FENCE]; > +}; > + > +struct reservation_ticket { > + unsigned long seqno; > +#ifdef CONFIG_DEBUG_LOCK_ALLOC > + struct lockdep_map dep_map; > +#endif > +}; > + > +/** > + * struct reservation_entry - reservation structure for a > + * reservation_object > + * @head: list entry > + * @obj_shared: pointer to a reservation_object to reserve > + * > + * Bit 0 of obj_shared is set to bool shared, as such pointer has to be > + * converted back, which can be done with reservation_entry_get. 
> + */ > +struct reservation_entry { > + struct list_head head; > + unsigned long obj_shared; > +}; > + > + > +static inline void > +reservation_object_init(struct reservation_object *obj) > +{ > + obj->fence_shared_count =3D 0; > + obj->fence_excl =3D NULL; > + > + __ticket_mutex_init(&obj->lock, reservation_object_name, > + &reservation_object_class); > +} > + > +static inline void > +reservation_object_fini(struct reservation_object *obj) > +{ > + int i; > + > + if (obj->fence_excl) > + fence_put(obj->fence_excl); > + for (i =3D 0; i < obj->fence_shared_count; ++i) > + fence_put(obj->fence_shared[i]); > + > + mutex_destroy(&obj->lock.base); > +} > + > +static inline void > +reservation_ticket_init(struct reservation_ticket *t) > +{ > +#ifdef CONFIG_DEBUG_LOCK_ALLOC > + /* > + * Make sure we are not reinitializing a held ticket: > + */ > + > + debug_check_no_locks_freed((void *)t, sizeof(*t)); > + lockdep_init_map(&t->dep_map, reservation_ticket_name, > + &reservation_ticket_class, 0); > +#endif > + mutex_acquire(&t->dep_map, 0, 0, _THIS_IP_); If CONFIG_DEBUG_LOCK_ALLOC is not defined, t->dep_map is not there. > + do { > + t->seqno =3D atomic_long_inc_return(&reservation_counter); > + } while (unlikely(!t->seqno)); > +} -- Francesco --===============8126875833713849733==-- From maarten.lankhorst@canonical.com Tue Jan 22 17:04:04 2013 From: Maarten Lankhorst To: linaro-mm-sig@lists.linaro.org Subject: Re: [Linaro-mm-sig] [PATCH 6/7] reservation: cross-device reservation support Date: Tue, 22 Jan 2013 18:04:00 +0100 Message-ID: <50FEC680.7030602@canonical.com> In-Reply-To: <50FEC28F.3090100@gmail.com> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============6733768416239109467==" --===============6733768416239109467== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Op 22-01-13 17:47, Francesco Lavra schreef: > On 01/15/2013 01:34 PM, Maarten Lankhorst wrote: >> This adds support for a generic reservations framework that can be >> hooked up to ttm and dma-buf and allows easy sharing of reservations >> across devices. >> >> The idea is that a dma-buf and ttm object both will get a pointer >> to a struct reservation_object, which has to be reserved before >> anything is done with the contents of the dma-buf. >> >> Signed-off-by: Maarten Lankhorst >> --- >> Documentation/DocBook/device-drivers.tmpl | 2 + >> drivers/base/Makefile | 2 +- >> drivers/base/reservation.c | 251 +++++++++++++++++++++++++= +++++ >> include/linux/reservation.h | 182 ++++++++++++++++++++++ >> 4 files changed, 436 insertions(+), 1 deletion(-) >> create mode 100644 drivers/base/reservation.c >> create mode 100644 include/linux/reservation.h > [...] >> diff --git a/include/linux/reservation.h b/include/linux/reservation.h >> new file mode 100644 >> index 0000000..fc2349d >> --- /dev/null >> +++ b/include/linux/reservation.h >> @@ -0,0 +1,182 @@ >> +/* >> + * Header file for reservations for dma-buf and ttm >> + * >> + * Copyright(C) 2011 Linaro Limited. All rights reserved. >> + * Copyright (C) 2012 Canonical Ltd >> + * Copyright (C) 2012 Texas Instruments >> + * >> + * Authors: >> + * Rob Clark >> + * Maarten Lankhorst >> + * Thomas Hellstrom >> + * >> + * Based on bo.c which bears the following copyright notice, >> + * but is dual licensed: >> + * >> + * Copyright (c) 2006-2009 VMware, Inc., Palo Alto, CA., USA >> + * All Rights Reserved. 
>> + * >> + * Permission is hereby granted, free of charge, to any person obtaining a >> + * copy of this software and associated documentation files (the >> + * "Software"), to deal in the Software without restriction, including >> + * without limitation the rights to use, copy, modify, merge, publish, >> + * distribute, sub license, and/or sell copies of the Software, and to >> + * permit persons to whom the Software is furnished to do so, subject to >> + * the following conditions: >> + * >> + * The above copyright notice and this permission notice (including the >> + * next paragraph) shall be included in all copies or substantial portions >> + * of the Software. >> + * >> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRES= S OR >> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILIT= Y, >> + * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHA= LL >> + * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY = CLAIM, >> + * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR >> + * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR = THE >> + * USE OR OTHER DEALINGS IN THE SOFTWARE. >> + */ >> +#ifndef _LINUX_RESERVATION_H >> +#define _LINUX_RESERVATION_H >> + >> +#include >> +#include >> +#include >> + >> +#define BUF_MAX_SHARED_FENCE 8 >> + >> +struct fence; >> + >> +extern atomic_long_t reservation_counter; >> +extern const char reservation_object_name[]; >> +extern struct lock_class_key reservation_object_class; >> +extern const char reservation_ticket_name[]; >> +extern struct lock_class_key reservation_ticket_class; >> + >> +struct reservation_object { >> + struct ticket_mutex lock; >> + >> + u32 fence_shared_count; >> + struct fence *fence_excl; >> + struct fence *fence_shared[BUF_MAX_SHARED_FENCE]; >> +}; >> + >> +struct reservation_ticket { >> + unsigned long seqno; >> +#ifdef CONFIG_DEBUG_LOCK_ALLOC >> + struct lockdep_map dep_map; >> +#endif >> +}; >> + >> +/** >> + * struct reservation_entry - reservation structure for a >> + * reservation_object >> + * @head: list entry >> + * @obj_shared: pointer to a reservation_object to reserve >> + * >> + * Bit 0 of obj_shared is set to bool shared, as such pointer has to be >> + * converted back, which can be done with reservation_entry_get. >> + */ >> +struct reservation_entry { >> + struct list_head head; >> + unsigned long obj_shared; >> +}; >> + >> + >> +static inline void >> +reservation_object_init(struct reservation_object *obj) >> +{ >> + obj->fence_shared_count =3D 0; >> + obj->fence_excl =3D NULL; >> + >> + __ticket_mutex_init(&obj->lock, reservation_object_name, >> + &reservation_object_class); >> +} >> + >> +static inline void >> +reservation_object_fini(struct reservation_object *obj) >> +{ >> + int i; >> + >> + if (obj->fence_excl) >> + fence_put(obj->fence_excl); >> + for (i =3D 0; i < obj->fence_shared_count; ++i) >> + fence_put(obj->fence_shared[i]); >> + >> + mutex_destroy(&obj->lock.base); >> +} >> + >> +static inline void >> +reservation_ticket_init(struct reservation_ticket *t) >> +{ >> +#ifdef CONFIG_DEBUG_LOCK_ALLOC >> + /* >> + * Make sure we are not reinitializing a held ticket: >> + */ >> + >> + debug_check_no_locks_freed((void *)t, sizeof(*t)); >> + lockdep_init_map(&t->dep_map, reservation_ticket_name, >> + &reservation_ticket_class, 0); >> +#endif >> + mutex_acquire(&t->dep_map, 0, 0, _THIS_IP_); > If CONFIG_DEBUG_LOCK_ALLOC is not defined, t->dep_map is not there. 
And mutex_acquire will not expand either, so it's harmless. :-) >> + do { >> + t->seqno =3D atomic_long_inc_return(&reservation_counter); >> + } while (unlikely(!t->seqno)); >> +} > -- > Francesco > --===============6733768416239109467==-- From arnd@arndb.de Tue Jan 22 18:14:52 2013 From: Arnd Bergmann To: linaro-mm-sig@lists.linaro.org Subject: Re: [Linaro-mm-sig] [PATCH v2] mm: dmapool: use provided gfp flags for all dma_alloc_coherent() calls Date: Tue, 22 Jan 2013 18:13:57 +0000 Message-ID: <201301221813.57741.arnd@arndb.de> In-Reply-To: <20130121210150.GA9184@kroah.com> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============7097384604480354668==" --===============7097384604480354668== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable On Monday 21 January 2013, Greg KH wrote: > >=20 > > I don't know a lot about USB, but I always assumed that this was not > > a normal condition and that there are only a couple of URBs per endpoint > > used at a time. Maybe Greg or someone else with a USB background can > > shed some light on this. >=20 > There's no restriction on how many URBs a driver can have outstanding at > once, and if you have a system with a lot of USB devices running at the > same time, there could be lots of URBs in flight depending on the number > of host controllers and devices and drivers being used. Ok, thanks for clarifying that. I read some more of the em28xx driver, and while it does have a bunch of URBs in flight, there are only five audio and five video URBs that I see simultaneously being submitted, and then resubmitted from their completion handlers. I think this means that there should be 10 URBs active at any given time in this driver, which does not explain why we get 256 allocations. I also noticed that the initial submissions are all atomic but don't need to, so it may be worth trying the patch below, which should also help in low-memory situations. We could also try moving the resubmission into a workqueue in order to let those be GFP_KERNEL, but I don't think that will help. 
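(As an aside, the "resubmit from the completion handler" pattern mentioned above looks roughly like the sketch below. This is purely illustrative and not taken from em28xx: the point is that the completion callback runs in interrupt context, so the resubmission can only use GFP_ATOMIC, and whatever the HCD allocates underneath it ends up drawing on the atomic coherent pool.)

#include <linux/usb.h>

/* Illustration only, not em28xx code: keep a fixed set of isoc URBs in
 * flight by resubmitting each one from its completion handler. */
static void example_isoc_complete(struct urb *urb)
{
	/* ... consume urb->transfer_buffer here ... */

	/* Interrupt context: must not sleep, so GFP_ATOMIC is the only option. */
	if (usb_submit_urb(urb, GFP_ATOMIC))
		dev_err(&urb->dev->dev, "isoc resubmit failed\n");
}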
Arnd diff --git a/drivers/media/usb/em28xx/em28xx-audio.c b/drivers/media/usb/em28= xx/em28xx-audio.c index 2fdb66e..8b789f4 100644 --- a/drivers/media/usb/em28xx/em28xx-audio.c +++ b/drivers/media/usb/em28xx/em28xx-audio.c @@ -177,12 +177,12 @@ static int em28xx_init_audio_isoc(struct em28xx *dev) struct urb *urb; int j, k; =20 - dev->adev.transfer_buffer[i] =3D kmalloc(sb_size, GFP_ATOMIC); + dev->adev.transfer_buffer[i] =3D kmalloc(sb_size, GFP_KERNEL); if (!dev->adev.transfer_buffer[i]) return -ENOMEM; =20 memset(dev->adev.transfer_buffer[i], 0x80, sb_size); - urb =3D usb_alloc_urb(EM28XX_NUM_AUDIO_PACKETS, GFP_ATOMIC); + urb =3D usb_alloc_urb(EM28XX_NUM_AUDIO_PACKETS, GFP_KERNEL); if (!urb) { em28xx_errdev("usb_alloc_urb failed!\n"); for (j =3D 0; j < i; j++) { @@ -212,7 +212,7 @@ static int em28xx_init_audio_isoc(struct em28xx *dev) } =20 for (i =3D 0; i < EM28XX_AUDIO_BUFS; i++) { - errCode =3D usb_submit_urb(dev->adev.urb[i], GFP_ATOMIC); + errCode =3D usb_submit_urb(dev->adev.urb[i], GFP_KERNEL); if (errCode) { em28xx_errdev("submit of audio urb failed\n"); em28xx_deinit_isoc_audio(dev); diff --git a/drivers/media/usb/em28xx/em28xx-core.c b/drivers/media/usb/em28x= x/em28xx-core.c index bed07a6..c5a2c4b 100644 --- a/drivers/media/usb/em28xx/em28xx-core.c +++ b/drivers/media/usb/em28xx/em28xx-core.c @@ -1166,7 +1166,7 @@ int em28xx_init_isoc(struct em28xx *dev, enum em28xx_mo= de mode, =20 /* submit urbs and enables IRQ */ for (i =3D 0; i < isoc_bufs->num_bufs; i++) { - rc =3D usb_submit_urb(isoc_bufs->urb[i], GFP_ATOMIC); + rc =3D usb_submit_urb(isoc_bufs->urb[i], GFP_KERNEL); if (rc) { em28xx_err("submit of urb %i failed (error=3D%i)\n", i, rc); --===============7097384604480354668==-- From smoch@web.de Wed Jan 23 14:38:07 2013 From: Soeren Moch To: linaro-mm-sig@lists.linaro.org Subject: Re: [Linaro-mm-sig] [PATCH v2] mm: dmapool: use provided gfp flags for all dma_alloc_coherent() calls Date: Wed, 23 Jan 2013 15:37:52 +0100 Message-ID: <50FFF5C0.60000@web.de> In-Reply-To: <201301221813.57741.arnd@arndb.de> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============1642823562600623933==" --===============1642823562600623933== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: 7bit On 22.01.2013 19:13, Arnd Bergmann wrote: > On Monday 21 January 2013, Greg KH wrote: >>> >>> I don't know a lot about USB, but I always assumed that this was not >>> a normal condition and that there are only a couple of URBs per endpoint >>> used at a time. Maybe Greg or someone else with a USB background can >>> shed some light on this. >> >> There's no restriction on how many URBs a driver can have outstanding at >> once, and if you have a system with a lot of USB devices running at the >> same time, there could be lots of URBs in flight depending on the number >> of host controllers and devices and drivers being used. I only use one host controller and (in this test) two usb devices with the same driver. > Ok, thanks for clarifying that. I read some more of the em28xx driver, > and while it does have a bunch of URBs in flight, there are only five > audio and five video URBs that I see simultaneously being submitted, > and then resubmitted from their completion handlers. I think this > means that there should be 10 URBs active at any given time in this > driver, which does not explain why we get 256 allocations. I think the audio part of the em28xx bridge is not used in my DVB tests. Are there other allocations from orion-ehci directly? 
Maybe something special for isochronous transfers (since there is no problem with my other dvb sticks using bulk transfers)? > I also noticed that the initial submissions are all atomic but don't > need to, so it may be worth trying the patch below, which should also > help in low-memory situations. We could also try moving the resubmission > into a workqueue in order to let those be GFP_KERNEL, but I don't think > that will help. I built a linux-3.7.4 with the em28xx patch and both of your dma-mapping.c patches. I still see the ERROR: 1024 KiB atomic DMA coherent pool is too small! Soeren --===============1642823562600623933==-- From maarten.lankhorst@canonical.com Wed Jan 23 14:56:52 2013 From: Maarten Lankhorst To: linaro-mm-sig@lists.linaro.org Subject: Re: [Linaro-mm-sig] [PATCH 4/7] fence: dma-buf cross-device synchronization (v11) Date: Wed, 23 Jan 2013 15:56:49 +0100 Message-ID: <50FFFA31.6000101@canonical.com> In-Reply-To: <50FEAC87.7090702@gmail.com> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============3032699983004803824==" --===============3032699983004803824== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Op 22-01-13 16:13, Francesco Lavra schreef: > Hi, > > On 01/15/2013 01:34 PM, Maarten Lankhorst wrote: > [...] >> diff --git a/include/linux/fence.h b/include/linux/fence.h >> new file mode 100644 >> index 0000000..d9f091d >> --- /dev/null >> +++ b/include/linux/fence.h >> @@ -0,0 +1,347 @@ >> +/* >> + * Fence mechanism for dma-buf to allow for asynchronous dma access >> + * >> + * Copyright (C) 2012 Canonical Ltd >> + * Copyright (C) 2012 Texas Instruments >> + * >> + * Authors: >> + * Rob Clark >> + * Maarten Lankhorst >> + * >> + * This program is free software; you can redistribute it and/or modify it >> + * under the terms of the GNU General Public License version 2 as publish= ed by >> + * the Free Software Foundation. >> + * >> + * This program is distributed in the hope that it will be useful, but WI= THOUT >> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or >> + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License = for >> + * more details. >> + * >> + * You should have received a copy of the GNU General Public License alon= g with >> + * this program. If not, see . >> + */ >> + >> +#ifndef __LINUX_FENCE_H >> +#define __LINUX_FENCE_H >> + >> +#include >> +#include >> +#include >> +#include >> +#include >> +#include >> +#include >> + >> +struct fence; >> +struct fence_ops; >> +struct fence_cb; >> + >> +/** >> + * struct fence - software synchronization primitive >> + * @refcount: refcount for this fence >> + * @ops: fence_ops associated with this fence >> + * @cb_list: list of all callbacks to call >> + * @lock: spin_lock_irqsave used for locking >> + * @priv: fence specific private data >> + * @flags: A mask of FENCE_FLAG_* defined below >> + * >> + * the flags member must be manipulated and read using the appropriate >> + * atomic ops (bit_*), so taking the spinlock will not be needed most >> + * of the time. >> + * >> + * FENCE_FLAG_SIGNALED_BIT - fence is already signaled >> + * FENCE_FLAG_ENABLE_SIGNAL_BIT - enable_signaling might have been called* >> + * FENCE_FLAG_USER_BITS - start of the unused bits, can be used by the >> + * implementer of the fence for its own purposes. Can be used in different >> + * ways by different fence implementers, so do not rely on this. >> + * >> + * *) Since atomic bitops are used, this is not guaranteed to be the case. 
>> + * Particularly, if the bit was set, but fence_signal was called right >> + * before this bit was set, it would have been able to set the >> + * FENCE_FLAG_SIGNALED_BIT, before enable_signaling was called. >> + * Adding a check for FENCE_FLAG_SIGNALED_BIT after setting >> + * FENCE_FLAG_ENABLE_SIGNAL_BIT closes this race, and makes sure that >> + * after fence_signal was called, any enable_signaling call will have eit= her >> + * been completed, or never called at all. >> + */ >> +struct fence { >> + struct kref refcount; >> + const struct fence_ops *ops; >> + struct list_head cb_list; >> + spinlock_t *lock; >> + unsigned context, seqno; >> + unsigned long flags; >> +}; > The documentation above should be updated with the new structure members > context and seqno. > >> + >> +enum fence_flag_bits { >> + FENCE_FLAG_SIGNALED_BIT, >> + FENCE_FLAG_ENABLE_SIGNAL_BIT, >> + FENCE_FLAG_USER_BITS, /* must always be last member */ >> +}; >> + >> +typedef void (*fence_func_t)(struct fence *fence, struct fence_cb *cb, vo= id *priv); >> + >> +/** >> + * struct fence_cb - callback for fence_add_callback >> + * @func: fence_func_t to call >> + * @priv: value of priv to pass to function >> + * >> + * This struct will be initialized by fence_add_callback, additional >> + * data can be passed along by embedding fence_cb in another struct. >> + */ >> +struct fence_cb { >> + struct list_head node; >> + fence_func_t func; >> + void *priv; >> +}; > Documentation should be updated here too. > >> + >> +/** >> + * struct fence_ops - operations implemented for fence >> + * @enable_signaling: enable software signaling of fence >> + * @signaled: [optional] peek whether the fence is signaled >> + * @release: [optional] called on destruction of fence >> + * >> + * Notes on enable_signaling: >> + * For fence implementations that have the capability for hw->hw >> + * signaling, they can implement this op to enable the necessary >> + * irqs, or insert commands into cmdstream, etc. This is called >> + * in the first wait() or add_callback() path to let the fence >> + * implementation know that there is another driver waiting on >> + * the signal (ie. hw->sw case). >> + * >> + * This function can be called called from atomic context, but not >> + * from irq context, so normal spinlocks can be used. >> + * >> + * A return value of false indicates the fence already passed, >> + * or some failure occured that made it impossible to enable >> + * signaling. True indicates succesful enabling. >> + * >> + * Calling fence_signal before enable_signaling is called allows >> + * for a tiny race window in which enable_signaling is called during, >> + * before, or after fence_signal. To fight this, it is recommended >> + * that before enable_signaling returns true an extra reference is >> + * taken on the fence, to be released when the fence is signaled. >> + * This will mean fence_signal will still be called twice, but >> + * the second time will be a noop since it was already signaled. >> + * >> + * Notes on release: >> + * Can be NULL, this function allows additional commands to run on >> + * destruction of the fence. Can be called from irq context. >> + * If pointer is set to NULL, kfree will get called instead. >> + */ >> + >> +struct fence_ops { >> + bool (*enable_signaling)(struct fence *fence); >> + bool (*signaled)(struct fence *fence); >> + long (*wait)(struct fence *fence, bool intr, signed long); >> + void (*release)(struct fence *fence); >> +}; > Ditto. > >> + >> +/** >> + * __fence_init - Initialize a custom fence. 
>> + * @fence: [in] the fence to initialize >> + * @ops: [in] the fence_ops for operations on this fence >> + * @lock: [in] the irqsafe spinlock to use for locking this fence >> + * @context: [in] the execution context this fence is run on >> + * @seqno: [in] a linear increasing sequence number for this context >> + * >> + * Initializes an allocated fence, the caller doesn't have to keep its >> + * refcount after committing with this fence, but it will need to hold a >> + * refcount again if fence_ops.enable_signaling gets called. This can >> + * be used for other implementing other types of fence. >> + * >> + * context and seqno are used for easy comparison between fences, allowing >> + * to check which fence is later by simply using fence_later. >> + */ >> +static inline void >> +__fence_init(struct fence *fence, const struct fence_ops *ops, >> + spinlock_t *lock, unsigned context, unsigned seqno) >> +{ >> + BUG_ON(!ops || !lock || !ops->enable_signaling || !ops->wait); >> + >> + kref_init(&fence->refcount); >> + fence->ops =3D ops; >> + INIT_LIST_HEAD(&fence->cb_list); >> + fence->lock =3D lock; >> + fence->context =3D context; >> + fence->seqno =3D seqno; >> + fence->flags =3D 0UL; >> +} >> + >> +/** >> + * fence_get - increases refcount of the fence >> + * @fence: [in] fence to increase refcount of >> + */ >> +static inline void fence_get(struct fence *fence) >> +{ >> + if (WARN_ON(!fence)) >> + return; >> + kref_get(&fence->refcount); >> +} >> + >> +extern void release_fence(struct kref *kref); >> + >> +/** >> + * fence_put - decreases refcount of the fence >> + * @fence: [in] fence to reduce refcount of >> + */ >> +static inline void fence_put(struct fence *fence) >> +{ >> + if (WARN_ON(!fence)) >> + return; >> + kref_put(&fence->refcount, release_fence); >> +} >> + >> +int fence_signal(struct fence *fence); >> +int __fence_signal(struct fence *fence); >> +long fence_default_wait(struct fence *fence, bool intr, signed long); > In the parameter list the first two parameters are named, and the last > one isn't. Feels a bit odd... > >> +int fence_add_callback(struct fence *fence, struct fence_cb *cb, >> + fence_func_t func, void *priv); >> +bool fence_remove_callback(struct fence *fence, struct fence_cb *cb); >> +void fence_enable_sw_signaling(struct fence *fence); >> + >> +/** >> + * fence_is_signaled - Return an indication if the fence is signaled yet. >> + * @fence: [in] the fence to check >> + * >> + * Returns true if the fence was already signaled, false if not. Since th= is >> + * function doesn't enable signaling, it is not guaranteed to ever return= true >> + * If fence_add_callback, fence_wait or fence_enable_sw_signaling >> + * haven't been called before. >> + * >> + * It's recommended for seqno fences to call fence_signal when the >> + * operation is complete, it makes it possible to prevent issues from >> + * wraparound between time of issue and time of use by checking the return >> + * value of this function before calling hardware-specific wait instructi= ons. 
>> + */ >> +static inline bool >> +fence_is_signaled(struct fence *fence) >> +{ >> + if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags)) >> + return true; >> + >> + if (fence->ops->signaled && fence->ops->signaled(fence)) { >> + fence_signal(fence); >> + return true; >> + } >> + >> + return false; >> +} >> + >> +/** >> + * fence_later - return the chronologically later fence >> + * @f1: [in] the first fence from the same context >> + * @f2: [in] the second fence from the same context >> + * >> + * Returns NULL if both fences are signaled, otherwise the fence that wou= ld be >> + * signaled last. Both fences must be from the same context, since a seqn= o is >> + * not re-used across contexts. >> + */ >> +static inline struct fence *fence_later(struct fence *f1, struct fence *f= 2) >> +{ >> + bool sig1, sig2; >> + >> + /* >> + * can't check just FENCE_FLAG_SIGNALED_BIT here, it may never have been >> + * set called if enable_signaling wasn't, and enabling that here is >> + * overkill. >> + */ >> + sig1 =3D fence_is_signaled(f1); >> + sig2 =3D fence_is_signaled(f2); >> + >> + if (sig1 && sig2) >> + return NULL; >> + >> + BUG_ON(f1->context !=3D f2->context); >> + >> + if (sig1 || f2->seqno - f2->seqno <=3D INT_MAX) > I guess you meant (f2->seqno - f1->seqno). > >> + return f2; >> + else >> + return f1; >> +} > Regards, > Francesco > Thanks for the review, how does this delta look? diff --git a/include/linux/fence.h b/include/linux/fence.h index d9f091d..831ed0a 100644 --- a/include/linux/fence.h +++ b/include/linux/fence.h @@ -42,7 +42,10 @@ struct fence_cb; * @ops: fence_ops associated with this fence * @cb_list: list of all callbacks to call * @lock: spin_lock_irqsave used for locking - * @priv: fence specific private data + * @context: execution context this fence belongs to, returned by + * fence_context_alloc() + * @seqno: the sequence number of this fence inside the executation context, + * can be compared to decide which fence would be signaled later. * @flags: A mask of FENCE_FLAG_* defined below * * the flags member must be manipulated and read using the appropriate @@ -83,6 +86,7 @@ typedef void (*fence_func_t)(struct fence *fence, struct fe= nce_cb *cb, void *pri =20 /** * struct fence_cb - callback for fence_add_callback + * @node: used by fence_add_callback to append this struct to fence::cb_list * @func: fence_func_t to call * @priv: value of priv to pass to function * @@ -98,8 +102,9 @@ struct fence_cb { /** * struct fence_ops - operations implemented for fence * @enable_signaling: enable software signaling of fence - * @signaled: [optional] peek whether the fence is signaled - * @release: [optional] called on destruction of fence + * @signaled: [optional] peek whether the fence is signaled, can be null + * @wait: custom wait implementation + * @release: [optional] called on destruction of fence, can be null * * Notes on enable_signaling: * For fence implementations that have the capability for hw->hw @@ -124,6 +129,16 @@ struct fence_cb { * This will mean fence_signal will still be called twice, but * the second time will be a noop since it was already signaled. * + * Notes on wait: + * Must not be NULL, set to fence_default_wait for default implementation. + * the fence_default_wait implementation should work for any fence, as long + * as enable_signaling works correctly. + * + * Must return -ERESTARTSYS if the wait is intr =3D true and the wait was in= terrupted, + * and remaining jiffies if fence has signaled. 
Can also return other error + * values on custom implementations, which should be treated as if the fence + * is signaled. For example a hardware lockup could be reported like that. + * * Notes on release: * Can be NULL, this function allows additional commands to run on * destruction of the fence. Can be called from irq context. @@ -133,7 +148,7 @@ struct fence_cb { struct fence_ops { bool (*enable_signaling)(struct fence *fence); bool (*signaled)(struct fence *fence); - long (*wait)(struct fence *fence, bool intr, signed long); + long (*wait)(struct fence *fence, bool intr, signed long timeout); void (*release)(struct fence *fence); }; =20 @@ -194,7 +209,7 @@ static inline void fence_put(struct fence *fence) =20 int fence_signal(struct fence *fence); int __fence_signal(struct fence *fence); -long fence_default_wait(struct fence *fence, bool intr, signed long); +long fence_default_wait(struct fence *fence, bool intr, signed long timeout); int fence_add_callback(struct fence *fence, struct fence_cb *cb, fence_func_t func, void *priv); bool fence_remove_callback(struct fence *fence, struct fence_cb *cb); @@ -254,7 +269,7 @@ static inline struct fence *fence_later(struct fence *f1,= struct fence *f2) =20 BUG_ON(f1->context !=3D f2->context); =20 - if (sig1 || f2->seqno - f2->seqno <=3D INT_MAX) + if (sig1 || f2->seqno - f1->seqno <=3D INT_MAX) return f2; else return f1; --===============3032699983004803824==-- From smoch@web.de Wed Jan 23 15:32:43 2013 From: Soeren Moch To: linaro-mm-sig@lists.linaro.org Subject: Re: [Linaro-mm-sig] [PATCH v2] mm: dmapool: use provided gfp flags for all dma_alloc_coherent() calls Date: Wed, 23 Jan 2013 16:30:53 +0100 Message-ID: <5100022D.9050106@web.de> In-Reply-To: <20130119185907.GA20719@lunn.ch> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============7367832478495419400==" --===============7367832478495419400== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable On 19.01.2013 19:59, Andrew Lunn wrote: >> Please find attached a debug log generated with your patch. >> >> I used the sata disk and two em28xx dvb sticks, no other usb devices, >> no ethernet cable connected, tuners on saa716x-based card not used. >> >> What I can see in the log: a lot of coherent mappings from sata_mv >> and orion_ehci, a few from mv643xx_eth, no other coherent mappings. >> All coherent mappings are page aligned, some of them (from orion_ehci) >> are not really small (as claimed in __alloc_from_pool). >> >> I don't believe in a memory leak. When I restart vdr (the application >> utilizing the dvb sticks) then there is enough dma memory available >> again. > > Hi Soeren > > We should be able to rule out a leak. Mount debugfg and then: > > while [ /bin/true ] ; do cat /debug/dma-api/num_free_entries ; sleep 60 ; d= one > > while you are capturing. See if the number goes down. > > Andrew Now I built a kernel with debugfs enabled. It is not clear to me what I can see from the dma-api/num_free_entries=20 output. After reboot (vdr running) I see decreasing numbers (3453 3452=20 3445 3430...), min_free_entries is lower (3390). Sometimes the output is=20 constant for several minutes ( 3396 3396 3396 3396 3396,...) 
Soeren --===============7367832478495419400==-- From jesse.barker@linaro.org Wed Jan 23 15:56:25 2013 From: Jesse Barker To: linaro-mm-sig@lists.linaro.org Subject: [Linaro-mm-sig] Common Display Framework BoF at ELC Date: Wed, 23 Jan 2013 07:56:23 -0800 Message-ID: MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============8537234742684632405==" --===============8537234742684632405== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: 7bit Hi all, I've just confirmed the schedule for ELC ( http://events.linuxfoundation.org/events/embedded-linux-conference/schedule), and we're on for Thursday at 4pm. I requested 2 slots to give us some flexibility (that's the "Part I & Part II" on the schedule). Whether you are attending the rest of ELC or not, if you care about CDF, please come by. cheers, Jesse --===============8537234742684632405==-- From andrew@lunn.ch Wed Jan 23 16:27:02 2013 From: Andrew Lunn To: linaro-mm-sig@lists.linaro.org Subject: Re: [Linaro-mm-sig] [PATCH v2] mm: dmapool: use provided gfp flags for all dma_alloc_coherent() calls Date: Wed, 23 Jan 2013 17:25:15 +0100 Message-ID: <20130123162515.GK13482@lunn.ch> In-Reply-To: <5100022D.9050106@web.de> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============2530020743754584924==" --===============2530020743754584924== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable On Wed, Jan 23, 2013 at 04:30:53PM +0100, Soeren Moch wrote: > On 19.01.2013 19:59, Andrew Lunn wrote: > >>Please find attached a debug log generated with your patch. > >> > >>I used the sata disk and two em28xx dvb sticks, no other usb devices, > >>no ethernet cable connected, tuners on saa716x-based card not used. > >> > >>What I can see in the log: a lot of coherent mappings from sata_mv > >>and orion_ehci, a few from mv643xx_eth, no other coherent mappings. > >>All coherent mappings are page aligned, some of them (from orion_ehci) > >>are not really small (as claimed in __alloc_from_pool). > >> > >>I don't believe in a memory leak. When I restart vdr (the application > >>utilizing the dvb sticks) then there is enough dma memory available > >>again. > > > >Hi Soeren > > > >We should be able to rule out a leak. Mount debugfg and then: > > > >while [ /bin/true ] ; do cat /debug/dma-api/num_free_entries ; sleep 60 ; done > > > >while you are capturing. See if the number goes down. > > > > Andrew > > Now I built a kernel with debugfs enabled. > It is not clear to me what I can see from the > dma-api/num_free_entries output.
After reboot (vdr running) I see > decreasing numbers (3453 3452 3445 3430...), min_free_entries is > lower (3390). Sometimes the output is constant for several minutes ( > 3396 3396 3396 3396 3396,...) We are interesting in the long term behavior. Does it gradually go down? Or is it stable? If it goes down over time, its clearly a leak somewhere. Andrew --===============2530020743754584924==-- From smoch@web.de Wed Jan 23 17:07:48 2013 From: Soeren Moch To: linaro-mm-sig@lists.linaro.org Subject: Re: [Linaro-mm-sig] [PATCH v2] mm: dmapool: use provided gfp flags for all dma_alloc_coherent() calls Date: Wed, 23 Jan 2013 18:07:00 +0100 Message-ID: <510018B4.9040903@web.de> In-Reply-To: <20130123162515.GK13482@lunn.ch> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============1377664974656863582==" --===============1377664974656863582== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable On 23.01.2013 17:25, Andrew Lunn wrote: > On Wed, Jan 23, 2013 at 04:30:53PM +0100, Soeren Moch wrote: >> On 19.01.2013 19:59, Andrew Lunn wrote: >>>> Please find attached a debug log generated with your patch. >>>> >>>> I used the sata disk and two em28xx dvb sticks, no other usb devices, >>>> no ethernet cable connected, tuners on saa716x-based card not used. >>>> >>>> What I can see in the log: a lot of coherent mappings from sata_mv >>>> and orion_ehci, a few from mv643xx_eth, no other coherent mappings. >>>> All coherent mappings are page aligned, some of them (from orion_ehci) >>>> are not really small (as claimed in __alloc_from_pool). >>>> >>>> I don't believe in a memory leak. When I restart vdr (the application >>>> utilizing the dvb sticks) then there is enough dma memory available >>>> again. >>> >>> Hi Soeren >>> >>> We should be able to rule out a leak. Mount debugfg and then: >>> >>> while [ /bin/true ] ; do cat /debug/dma-api/num_free_entries ; sleep 60 ;= done >>> >>> while you are capturing. See if the number goes down. >>> >>> Andrew >> >> Now I built a kernel with debugfs enabled. >> It is not clear to me what I can see from the >> dma-api/num_free_entries output. After reboot (vdr running) I see >> decreasing numbers (3453 3452 3445 3430...), min_free_entries is >> lower (3390). Sometimes the output is constant for several minutes ( >> 3396 3396 3396 3396 3396,...) > > We are interesting in the long term behavior. Does it gradually go > down? Or is it stable? If it goes down over time, its clearly a leak > somewhere. 
> Now (in the last hour) stable, occasionally lower numbers: 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396=20 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396=20 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396=20 3396 3396 3396 3396 3396 3396 3396 3396 3396 3365 3396 3394 3396 3396=20 3396 3396 3373 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396=20 3396 3353 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396=20 3394 3396 3396 3396 3396 3396 3396 3396 Before the last pool exhaustion going down: 3395 3395 3389 3379 3379 3374 3367 3360 3352 3343 3343 3343 3342 3336=20 3332 3324 3318 3314 3310 3307 3305 3299 3290 3283 3279 3272 3266 3265=20 3247 3247 3247 3242 3236 3236 Soeren --===============1377664974656863582==-- From francescolavra.fl@gmail.com Wed Jan 23 17:13:42 2013 From: Francesco Lavra To: linaro-mm-sig@lists.linaro.org Subject: Re: [Linaro-mm-sig] [PATCH 4/7] fence: dma-buf cross-device synchronization (v11) Date: Wed, 23 Jan 2013 18:14:07 +0100 Message-ID: <51001A5F.1080903@gmail.com> In-Reply-To: <50FFFA31.6000101@canonical.com> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============4351919051000065370==" --===============4351919051000065370== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable On 01/23/2013 03:56 PM, Maarten Lankhorst wrote: > Thanks for the review, how does this delta look? >=20 > diff --git a/include/linux/fence.h b/include/linux/fence.h > index d9f091d..831ed0a 100644 > --- a/include/linux/fence.h > +++ b/include/linux/fence.h > @@ -42,7 +42,10 @@ struct fence_cb; > * @ops: fence_ops associated with this fence > * @cb_list: list of all callbacks to call > * @lock: spin_lock_irqsave used for locking > - * @priv: fence specific private data > + * @context: execution context this fence belongs to, returned by > + * fence_context_alloc() > + * @seqno: the sequence number of this fence inside the executation contex= t, s/executation/execution Otherwise, looks good to me. -- Francesco --===============4351919051000065370==-- From smoch@web.de Wed Jan 23 17:22:02 2013 From: Soeren Moch To: linaro-mm-sig@lists.linaro.org Subject: Re: [Linaro-mm-sig] [PATCH v2] mm: dmapool: use provided gfp flags for all dma_alloc_coherent() calls Date: Wed, 23 Jan 2013 18:20:46 +0100 Message-ID: <51001BEE.9020201@web.de> In-Reply-To: <510018B4.9040903@web.de> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============0786687378914902965==" --===============0786687378914902965== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: 7bit On 23.01.2013 18:07, Soeren Moch wrote: > On 23.01.2013 17:25, Andrew Lunn wrote: >> On Wed, Jan 23, 2013 at 04:30:53PM +0100, Soeren Moch wrote: >>> On 19.01.2013 19:59, Andrew Lunn wrote: >>>>> Please find attached a debug log generated with your patch. >>>>> >>>>> I used the sata disk and two em28xx dvb sticks, no other usb devices, >>>>> no ethernet cable connected, tuners on saa716x-based card not used. >>>>> >>>>> What I can see in the log: a lot of coherent mappings from sata_mv >>>>> and orion_ehci, a few from mv643xx_eth, no other coherent mappings. >>>>> All coherent mappings are page aligned, some of them (from orion_ehci) >>>>> are not really small (as claimed in __alloc_from_pool). >>>>> >>>>> I don't believe in a memory leak. When I restart vdr (the application >>>>> utilizing the dvb sticks) then there is enough dma memory available >>>>> again. 
>>>> >>>> Hi Soeren >>>> >>>> We should be able to rule out a leak. Mount debugfg and then: >>>> >>>> while [ /bin/true ] ; do cat /debug/dma-api/num_free_entries ; sleep >>>> 60 ; done >>>> >>>> while you are capturing. See if the number goes down. >>>> >>>> Andrew >>> >>> Now I built a kernel with debugfs enabled. >>> It is not clear to me what I can see from the >>> dma-api/num_free_entries output. After reboot (vdr running) I see >>> decreasing numbers (3453 3452 3445 3430...), min_free_entries is >>> lower (3390). Sometimes the output is constant for several minutes ( >>> 3396 3396 3396 3396 3396,...) >> >> We are interesting in the long term behavior. Does it gradually go >> down? Or is it stable? If it goes down over time, its clearly a leak >> somewhere. >> > > Now (in the last hour) stable, occasionally lower numbers: > 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 > 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 > 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 > 3396 3396 3396 3396 3396 3396 3396 3396 3396 3365 3396 3394 3396 3396 > 3396 3396 3373 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 > 3396 3353 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 > 3394 3396 3396 3396 3396 3396 3396 3396 > > Before the last pool exhaustion going down: > 3395 3395 3389 3379 3379 3374 3367 3360 3352 3343 3343 3343 3342 3336 > 3332 3324 3318 3314 3310 3307 3305 3299 3290 3283 3279 3272 3266 3265 > 3247 3247 3247 3242 3236 3236 > Here I stopped vdr (and so closed all dvb_demux devices), the number was remaining the same 3236, even after restart of vdr (and restart of streaming). > Soeren --===============0786687378914902965==-- From andrew@lunn.ch Wed Jan 23 18:13:06 2013 From: Andrew Lunn To: linaro-mm-sig@lists.linaro.org Subject: Re: [Linaro-mm-sig] [PATCH v2] mm: dmapool: use provided gfp flags for all dma_alloc_coherent() calls Date: Wed, 23 Jan 2013 19:10:29 +0100 Message-ID: <20130123181029.GE20719@lunn.ch> In-Reply-To: <51001BEE.9020201@web.de> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============4233210284053414067==" --===============4233210284053414067== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: 7bit > >> > > > >Now (in the last hour) stable, occasionally lower numbers: > >3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 > >3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 > >3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 > >3396 3396 3396 3396 3396 3396 3396 3396 3396 3365 3396 3394 3396 3396 > >3396 3396 3373 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 > >3396 3353 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 > >3394 3396 3396 3396 3396 3396 3396 3396 > > > >Before the last pool exhaustion going down: > >3395 3395 3389 3379 3379 3374 3367 3360 3352 3343 3343 3343 3342 3336 > >3332 3324 3318 3314 3310 3307 3305 3299 3290 3283 3279 3272 3266 3265 > >3247 3247 3247 3242 3236 3236 > > > Here I stopped vdr (and so closed all dvb_demux devices), the number > was remaining the same 3236, even after restart of vdr (and restart > of streaming). So it does suggest a leak. Probably somewhere on an error path, e.g. its lost video sync. 
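The failure mode being suspected here usually has a very simple shape: an allocation whose matching free is skipped when a later step fails. A minimal sketch of that pattern, with made-up names and not taken from any driver in this thread:

#include <linux/dma-mapping.h>
#include <linux/errno.h>

/* Hypothetical "start the hardware" step; it only exists to give the
 * error path below something to fail on. */
static int example_submit(struct device *dev, void *buf, dma_addr_t dma);

static int example_start_streaming(struct device *dev)
{
	dma_addr_t dma;
	void *buf = dma_alloc_coherent(dev, 4096, &dma, GFP_ATOMIC);

	if (!buf)
		return -ENOMEM;

	if (example_submit(dev, buf, dma)) {
		/* BUG: returning here without dma_free_coherent() leaks
		 * this buffer on every failed (re)start of the stream. */
		return -EIO;
	}

	return 0;
}

Every failed restart would then permanently consume a slice of the 1024 KiB atomic pool, which would be consistent with the slow decline in num_free_entries seen above.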
Andrew --===============4233210284053414067==-- From inki.dae@samsung.com Thu Jan 24 14:52:31 2013 From: Inki Dae To: linaro-mm-sig@lists.linaro.org Subject: Re: [Linaro-mm-sig] [PATCH 5/7] seqno-fence: Hardware dma-buf implementation of fencing (v4) Date: Thu, 24 Jan 2013 23:52:28 +0900 Message-ID: In-Reply-To: <50F682C0.3030009@canonical.com> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============0197105819747586899==" --===============0197105819747586899== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable 2013/1/16 Maarten Lankhorst : > Op 16-01-13 07:28, Inki Dae schreef: >> 2013/1/15 Maarten Lankhorst : >>> This type of fence can be used with hardware synchronization for simple >>> hardware that can block execution until the condition >>> (dma_buf[offset] - value) >=3D 0 has been met. >>> >>> A software fallback still has to be provided in case the fence is used >>> with a device that doesn't support this mechanism. It is useful to expose >>> this for graphics cards that have an op to support this. >>> >>> Some cards like i915 can export those, but don't have an option to wait, >>> so they need the software fallback. >>> >>> I extended the original patch by Rob Clark. >>> >>> v1: Original >>> v2: Renamed from bikeshed to seqno, moved into dma-fence.c since >>> not much was left of the file. Lots of documentation added. >>> v3: Use fence_ops instead of custom callbacks. Moved to own file >>> to avoid circular dependency between dma-buf.h and fence.h >>> v4: Add spinlock pointer to seqno_fence_init >>> >>> Signed-off-by: Maarten Lankhorst >>> --- >>> Documentation/DocBook/device-drivers.tmpl | 1 + >>> drivers/base/fence.c | 38 +++++++++++ >>> include/linux/seqno-fence.h | 105 ++++++++++++++++++++++++= ++++++ >>> 3 files changed, 144 insertions(+) >>> create mode 100644 include/linux/seqno-fence.h >>> >>> diff --git a/Documentation/DocBook/device-drivers.tmpl b/Documentation/Do= cBook/device-drivers.tmpl >>> index 6f53fc0..ad14396 100644 >>> --- a/Documentation/DocBook/device-drivers.tmpl >>> +++ b/Documentation/DocBook/device-drivers.tmpl >>> @@ -128,6 +128,7 @@ X!Edrivers/base/interface.c >>> !Edrivers/base/dma-buf.c >>> !Edrivers/base/fence.c >>> !Iinclude/linux/fence.h >>> +!Iinclude/linux/seqno-fence.h >>> !Edrivers/base/dma-coherent.c >>> !Edrivers/base/dma-mapping.c >>> >>> diff --git a/drivers/base/fence.c b/drivers/base/fence.c >>> index 28e5ffd..1d3f29c 100644 >>> --- a/drivers/base/fence.c >>> +++ b/drivers/base/fence.c >>> @@ -24,6 +24,7 @@ >>> #include >>> #include >>> #include >>> +#include >>> >>> atomic_t fence_context_counter =3D ATOMIC_INIT(0); >>> EXPORT_SYMBOL(fence_context_counter); >>> @@ -284,3 +285,40 @@ out: >>> return ret; >>> } >>> EXPORT_SYMBOL(fence_default_wait); >>> + >>> +static bool seqno_enable_signaling(struct fence *fence) >>> +{ >>> + struct seqno_fence *seqno_fence =3D to_seqno_fence(fence); >>> + return seqno_fence->ops->enable_signaling(fence); >>> +} >>> + >>> +static bool seqno_signaled(struct fence *fence) >>> +{ >>> + struct seqno_fence *seqno_fence =3D to_seqno_fence(fence); >>> + return seqno_fence->ops->signaled && seqno_fence->ops->signaled(f= ence); >>> +} >>> + >>> +static void seqno_release(struct fence *fence) >>> +{ >>> + struct seqno_fence *f =3D to_seqno_fence(fence); >>> + >>> + dma_buf_put(f->sync_buf); >>> + if (f->ops->release) >>> + f->ops->release(fence); >>> + else >>> + kfree(f); >>> +} >>> + >>> +static long seqno_wait(struct fence *fence, bool intr, signed long timeo= 
ut) >>> +{ >>> + struct seqno_fence *f =3D to_seqno_fence(fence); >>> + return f->ops->wait(fence, intr, timeout); >>> +} >>> + >>> +const struct fence_ops seqno_fence_ops =3D { >>> + .enable_signaling =3D seqno_enable_signaling, >>> + .signaled =3D seqno_signaled, >>> + .wait =3D seqno_wait, >>> + .release =3D seqno_release >>> +}; >>> +EXPORT_SYMBOL_GPL(seqno_fence_ops); >>> diff --git a/include/linux/seqno-fence.h b/include/linux/seqno-fence.h >>> new file mode 100644 >>> index 0000000..603adc0 >>> --- /dev/null >>> +++ b/include/linux/seqno-fence.h >>> @@ -0,0 +1,105 @@ >>> +/* >>> + * seqno-fence, using a dma-buf to synchronize fencing >>> + * >>> + * Copyright (C) 2012 Texas Instruments >>> + * Copyright (C) 2012 Canonical Ltd >>> + * Authors: >>> + * Rob Clark >>> + * Maarten Lankhorst >>> + * >>> + * This program is free software; you can redistribute it and/or modify = it >>> + * under the terms of the GNU General Public License version 2 as publis= hed by >>> + * the Free Software Foundation. >>> + * >>> + * This program is distributed in the hope that it will be useful, but W= ITHOUT >>> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or >>> + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License= for >>> + * more details. >>> + * >>> + * You should have received a copy of the GNU General Public License alo= ng with >>> + * this program. If not, see . >>> + */ >>> + >>> +#ifndef __LINUX_SEQNO_FENCE_H >>> +#define __LINUX_SEQNO_FENCE_H >>> + >>> +#include >>> +#include >>> + >>> +struct seqno_fence { >>> + struct fence base; >>> + >>> + const struct fence_ops *ops; >>> + struct dma_buf *sync_buf; >>> + uint32_t seqno_ofs; >>> +}; >> Hi maarten, >> >> I'm applying dma-fence v11 and seqno-fence v4 to exynos drm and have >> some proposals. >> >> The above seqno_fence structure has only one dmabuf. Shouldn't it have >> mutiple dmabufs? For example, in case of drm driver, when pageflip is >> requested, one framebuffer could have one more gem buffer for NV12M >> format. And this means that one more exported dmabufs should be >> sychronized with other devices. Below is simple structure for it, > The fence guards a single operation, as such I didn't feel like more than o= ne > dma-buf was needed to guard it. > > Have you considered simply attaching multiple fences instead? Each with the= ir own dma-buf. > There has been some muttering about allowing multiple exclusive fences to b= e attached, for arm soc's. > > But I'm also considering getting rid of the dma-buf member and add a functi= on call to retrieve it, since > the sync dma-buf member should not be changing often, and it would zap 2 at= omic ops on every fence, > but I want it replaced by something that's not 10x more complicated. > > Maybe "int get_sync_dma_buf(fence, old_dma_buf, &new_dma_buf)" that will se= t new_dma_buf =3D NULL > if the old_dma_buf is unchanged, and return true + return a new reference t= o the sync dma_buf if it's not identical to old_dma_buf. > old_dma_buf can also be NULL or a dma_buf that belongs to a different fence= ->context entirely. It might be capable of > returning an error, in which case the fence would count as being signaled. = This could reduce the need for separately checking > fence_is_signaled first. > > I think this would allow caching the synchronization dma_buf in a similar w= ay without each fence needing > to hold a reference to the dma_buf all the time, even for fences that are o= nly used internally. 
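To make that proposal a bit more concrete, here is a rough caller-side sketch of the semantics described above; get_sync_dma_buf() does not exist anywhere yet, and the struct and field names are invented purely for illustration:

#include <linux/dma-buf.h>
#include <linux/fence.h>

/* Sketch of the proposed op: new_buf stays NULL when the cached sync
 * buffer is still the right one, otherwise a new reference is handed
 * back; an error means the fence should be treated as signaled.
 * None of this is an existing API. */
struct example_sync_cache {
	struct dma_buf *sync_buf;	/* last sync dma-buf we set up */
};

static int example_refresh_sync_buf(struct example_sync_cache *cache,
				    struct fence *fence)
{
	struct dma_buf *new_buf = NULL;
	int ret;

	ret = fence->ops->get_sync_dma_buf(fence, cache->sync_buf, &new_buf);
	if (ret < 0)
		return ret;		/* fence counts as signaled */

	if (new_buf) {
		if (cache->sync_buf)
			dma_buf_put(cache->sync_buf);
		cache->sync_buf = new_buf;	/* keep the new reference */
	}

	return 0;
}

The point of the helper is that a caller only redoes its sync-buffer setup when the buffer actually changes, instead of taking two atomic refcount operations on every fence.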
> >> struct seqno_fence_dmabuf { >> struct list_head list; >> int id; >> struct dmabuf *sync_buf; >> uint32_t seqno_ops; >> uint32_t seqno; >> }; >> >> The member, id, could be used to identify which device sync_buf is >> going to be accessed by. In case of drm driver, one framebuffer could >> be accessed by one more devices, one is Display controller and another >> is HDMI controller. So id would have crtc number. > Why do you need this? the base fence already has a context member. > There was my missing point. Please ignore 'id'. If the fence relevant things are contained in each context(in case of drm page flip, a event), each driver could call fence_signal() with proper fence. --===============0197105819747586899==-- From hdoyu@nvidia.com Mon Jan 28 08:33:29 2013 From: Hiroshi Doyu To: linaro-mm-sig@lists.linaro.org Subject: [Linaro-mm-sig] [RFC] ARM: dma-mapping: Return 0 if no ->set_dma_mask() Date: Mon, 28 Jan 2013 09:33:20 +0100 Message-ID: <20130128.103320.355771370936761237.hdoyu@nvidia.com> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============8223775004444247018==" --===============8223775004444247018== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable struct dma_map_ops iommu_ops doesn't have ->set_dma_mask, which causes crash when dma_set_mask() is called from some driver. Signed-off-by: Hiroshi Doyu --- arch/arm/include/asm/dma-mapping.h | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/arch/arm/include/asm/dma-mapping.h b/arch/arm/include/asm/dma-ma= pping.h index a58e0f5..95328bf 100644 --- a/arch/arm/include/asm/dma-mapping.h +++ b/arch/arm/include/asm/dma-mapping.h @@ -32,7 +32,11 @@ static inline void set_dma_ops(struct device *dev, struct = dma_map_ops *ops) =20 static inline int dma_set_mask(struct device *dev, u64 mask) { - return get_dma_ops(dev)->set_dma_mask(dev, mask); + struct dma_map_ops *ops =3D get_dma_ops(dev); + + if (ops->set_dma_mask) + return ops->set_dma_mask(dev, mask); + return 0; } =20 #ifdef __arch_page_to_dma --=20 1.7.9.5 --===============8223775004444247018==-- From smoch@web.de Mon Jan 28 21:02:21 2013 From: Soeren Moch To: linaro-mm-sig@lists.linaro.org Subject: Re: [Linaro-mm-sig] [PATCH v2] mm: dmapool: use provided gfp flags for all dma_alloc_coherent() calls Date: Mon, 28 Jan 2013 21:59:18 +0100 Message-ID: <5106E6A6.7010207@web.de> In-Reply-To: <20130123181029.GE20719@lunn.ch> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============1499235748211973873==" --===============1499235748211973873== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: 7bit On 23.01.2013 19:10, Andrew Lunn wrote: >>>> >>> >>> Now (in the last hour) stable, occasionally lower numbers: >>> 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 >>> 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 >>> 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 >>> 3396 3396 3396 3396 3396 3396 3396 3396 3396 3365 3396 3394 3396 3396 >>> 3396 3396 3373 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 >>> 3396 3353 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 >>> 3394 3396 3396 3396 3396 3396 3396 3396 >>> >>> Before the last pool exhaustion going down: >>> 3395 3395 3389 3379 3379 3374 3367 3360 3352 3343 3343 3343 3342 3336 >>> 3332 3324 3318 3314 3310 3307 3305 3299 3290 3283 3279 3272 3266 3265 >>> 3247 3247 3247 3242 3236 3236 >>> >> Here I stopped vdr (and so closed all dvb_demux devices), 
the number >> was remaining the same 3236, even after restart of vdr (and restart >> of streaming). > > So it does suggest a leak. Probably somewhere on an error path, > e.g. its lost video sync. > Now I activated the debug messages in em28xx. From the messages I see no correlation of the pool exhaustion and lost sync. Also I cannot see any error messages from the em28xx driver. I see a lot of init_isoc/stop_urbs (maybe EPG scan?) without draining the coherent pool (checked with 'cat /debug/dma-api/num_free_entries', which gave stable numbers), but after half an hour there are only init_isoc messages without corresponding stop_urbs messages and num_free_entries decreased until coherent pool exhaustion. Any idea where the memory leak is? What is allocating coherent buffers for orion-ehci? Soeren Jan 28 20:46:03 guruvdr kernel: em28xx #0/2-dvb: Using 5 buffers each with 64 x 940 bytes Jan 28 20:46:03 guruvdr kernel: em28xx #0 em28xx_init_isoc :em28xx: called em28xx_init_isoc in mode 2 Jan 28 20:46:03 guruvdr kernel: em28xx #1/2-dvb: Using 5 buffers each with 64 x 940 bytes Jan 28 20:46:03 guruvdr kernel: em28xx #1 em28xx_init_isoc :em28xx: called em28xx_init_isoc in mode 2 Jan 28 20:46:23 guruvdr kernel: em28xx #0 em28xx_stop_urbs :em28xx: called em28xx_stop_urbs Jan 28 20:46:23 guruvdr kernel: em28xx #1 em28xx_stop_urbs :em28xx: called em28xx_stop_urbs Jan 28 20:46:24 guruvdr kernel: em28xx #0/2-dvb: Using 5 buffers each with 64 x 940 bytes Jan 28 20:46:24 guruvdr kernel: em28xx #0 em28xx_init_isoc :em28xx: called em28xx_init_isoc in mode 2 Jan 28 20:46:24 guruvdr kernel: em28xx #1/2-dvb: Using 5 buffers each with 64 x 940 bytes Jan 28 20:46:24 guruvdr kernel: em28xx #1 em28xx_init_isoc :em28xx: called em28xx_init_isoc in mode 2 Jan 28 20:46:44 guruvdr kernel: em28xx #1 em28xx_stop_urbs :em28xx: called em28xx_stop_urbs Jan 28 20:46:44 guruvdr kernel: em28xx #0 em28xx_stop_urbs :em28xx: called em28xx_stop_urbs Jan 28 20:46:45 guruvdr kernel: em28xx #1/2-dvb: Using 5 buffers each with 64 x 940 bytes Jan 28 20:46:45 guruvdr kernel: em28xx #1 em28xx_init_isoc :em28xx: called em28xx_init_isoc in mode 2 Jan 28 20:46:45 guruvdr kernel: em28xx #0/2-dvb: Using 5 buffers each with 64 x 940 bytes Jan 28 20:46:45 guruvdr kernel: em28xx #0 em28xx_init_isoc :em28xx: called em28xx_init_isoc in mode 2 Jan 28 20:54:33 guruvdr kernel: ERROR: 1024 KiB atomic DMA coherent pool is too small! Jan 28 20:54:33 guruvdr kernel: Please increase it with coherent_pool= kernel parameter! 
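Regarding the question above about what allocates coherent buffers on behalf of orion-ehci: USB class drivers normally get their isochronous transfer buffers from usb_alloc_coherent(), whose DMA pools belong to the host controller device, so the mappings show up against orion_ehci rather than against em28xx itself. A minimal sketch of the allocation and the free that has to balance it on every path (illustrative only, not the actual em28xx code):

#include <linux/usb.h>

static struct urb *example_alloc_isoc_urb(struct usb_device *udev,
					   int npackets, size_t buf_size)
{
	struct urb *urb = usb_alloc_urb(npackets, GFP_KERNEL);

	if (!urb)
		return NULL;

	/* Drawn from the host controller's coherent DMA pools. */
	urb->transfer_buffer = usb_alloc_coherent(udev, buf_size, GFP_KERNEL,
						  &urb->transfer_dma);
	if (!urb->transfer_buffer) {
		usb_free_urb(urb);
		return NULL;
	}
	urb->transfer_flags |= URB_NO_TRANSFER_DMA_MAP;

	return urb;
}

static void example_free_isoc_urb(struct usb_device *udev, struct urb *urb,
				  size_t buf_size)
{
	/* Every allocation above needs exactly one of these, on all
	 * paths, or the atomic coherent pool is slowly exhausted. */
	usb_free_coherent(udev, buf_size, urb->transfer_buffer,
			  urb->transfer_dma);
	usb_free_urb(urb);
}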
--===============1499235748211973873==-- From jason@lakedaemon.net Tue Jan 29 00:14:06 2013 From: Jason Cooper To: linaro-mm-sig@lists.linaro.org Subject: Re: [Linaro-mm-sig] [PATCH v2] mm: dmapool: use provided gfp flags for all dma_alloc_coherent() calls Date: Mon, 28 Jan 2013 19:13:54 -0500 Message-ID: <20130129001354.GN1758@titan.lakedaemon.net> In-Reply-To: <5106E6A6.7010207@web.de> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============7415748048283413635==" --===============7415748048283413635== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable On Mon, Jan 28, 2013 at 09:59:18PM +0100, Soeren Moch wrote: > On 23.01.2013 19:10, Andrew Lunn wrote: > >>>> > >>> > >>>Now (in the last hour) stable, occasionally lower numbers: > >>>3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 > >>>3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 > >>>3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 > >>>3396 3396 3396 3396 3396 3396 3396 3396 3396 3365 3396 3394 3396 3396 > >>>3396 3396 3373 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 > >>>3396 3353 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 3396 > >>>3394 3396 3396 3396 3396 3396 3396 3396 > >>> > >>>Before the last pool exhaustion going down: > >>>3395 3395 3389 3379 3379 3374 3367 3360 3352 3343 3343 3343 3342 3336 > >>>3332 3324 3318 3314 3310 3307 3305 3299 3290 3283 3279 3272 3266 3265 > >>>3247 3247 3247 3242 3236 3236 > >>> > >>Here I stopped vdr (and so closed all dvb_demux devices), the number > >>was remaining the same 3236, even after restart of vdr (and restart > >>of streaming). > > > >So it does suggest a leak. Probably somewhere on an error path, > >e.g. its lost video sync. > > >=20 > Now I activated the debug messages in em28xx. From the messages I > see no correlation of the pool exhaustion and lost sync. Also I > cannot see any error messages from the em28xx driver. > I see a lot of init_isoc/stop_urbs (maybe EPG scan?) without > draining the coherent pool (checked with 'cat > /debug/dma-api/num_free_entries', which gave stable numbers), but > after half an hour there are only init_isoc messages without > corresponding stop_urbs messages and num_free_entries decreased > until coherent pool exhaustion. >=20 > Any idea where the memory leak is? What is allocating coherent > buffers for orion-ehci? Keeping in mind that I am completely unfamiliar with usb dvb, my best guess is that the problem is in em28xx-core.c:1131 According to your log messages, it is in mode 2, which is EM28XX_DIGITAL_MODE. There seem to be good hints in 86d38d1e [media] em28xx: pre-allocate DVB isoc transfer buffers I added the relevant parties to the To:... For Gianluca and Mauro, the whole thread may be found at: http://markmail.org/message/wm4wlgzoudixd4so#query:+page:1+mid:o7phz7cosmwpcs= rz+state:results thx, Jason. 
>=20 > Soeren >=20 >=20 > Jan 28 20:46:03 guruvdr kernel: em28xx #0/2-dvb: Using 5 buffers > each with 64 x 940 bytes > Jan 28 20:46:03 guruvdr kernel: em28xx #0 em28xx_init_isoc :em28xx: > called em28xx_init_isoc in mode 2 > Jan 28 20:46:03 guruvdr kernel: em28xx #1/2-dvb: Using 5 buffers > each with 64 x 940 bytes > Jan 28 20:46:03 guruvdr kernel: em28xx #1 em28xx_init_isoc :em28xx: > called em28xx_init_isoc in mode 2 > Jan 28 20:46:23 guruvdr kernel: em28xx #0 em28xx_stop_urbs :em28xx: > called em28xx_stop_urbs > Jan 28 20:46:23 guruvdr kernel: em28xx #1 em28xx_stop_urbs :em28xx: > called em28xx_stop_urbs > Jan 28 20:46:24 guruvdr kernel: em28xx #0/2-dvb: Using 5 buffers > each with 64 x 940 bytes > Jan 28 20:46:24 guruvdr kernel: em28xx #0 em28xx_init_isoc :em28xx: > called em28xx_init_isoc in mode 2 > Jan 28 20:46:24 guruvdr kernel: em28xx #1/2-dvb: Using 5 buffers > each with 64 x 940 bytes > Jan 28 20:46:24 guruvdr kernel: em28xx #1 em28xx_init_isoc :em28xx: > called em28xx_init_isoc in mode 2 > Jan 28 20:46:44 guruvdr kernel: em28xx #1 em28xx_stop_urbs :em28xx: > called em28xx_stop_urbs > Jan 28 20:46:44 guruvdr kernel: em28xx #0 em28xx_stop_urbs :em28xx: > called em28xx_stop_urbs > Jan 28 20:46:45 guruvdr kernel: em28xx #1/2-dvb: Using 5 buffers > each with 64 x 940 bytes > Jan 28 20:46:45 guruvdr kernel: em28xx #1 em28xx_init_isoc :em28xx: > called em28xx_init_isoc in mode 2 > Jan 28 20:46:45 guruvdr kernel: em28xx #0/2-dvb: Using 5 buffers > each with 64 x 940 bytes > Jan 28 20:46:45 guruvdr kernel: em28xx #0 em28xx_init_isoc :em28xx: > called em28xx_init_isoc in mode 2 > Jan 28 20:54:33 guruvdr kernel: ERROR: 1024 KiB atomic DMA coherent > pool is too small! > Jan 28 20:54:33 guruvdr kernel: Please increase it with > coherent_pool=3D kernel parameter! >=20 --===============7415748048283413635==-- From andrew@lunn.ch Tue Jan 29 11:04:19 2013 From: Andrew Lunn To: linaro-mm-sig@lists.linaro.org Subject: Re: [Linaro-mm-sig] [PATCH v2] mm: dmapool: use provided gfp flags for all dma_alloc_coherent() calls Date: Tue, 29 Jan 2013 12:02:28 +0100 Message-ID: <20130129110228.GA20242@lunn.ch> In-Reply-To: <5106E6A6.7010207@web.de> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============3566028567437137970==" --===============3566028567437137970== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: 7bit > Now I activated the debug messages in em28xx. From the messages I > see no correlation of the pool exhaustion and lost sync. Also I > cannot see any error messages from the em28xx driver. > I see a lot of init_isoc/stop_urbs (maybe EPG scan?) without > draining the coherent pool (checked with 'cat > /debug/dma-api/num_free_entries', which gave stable numbers), but > after half an hour there are only init_isoc messages without > corresponding stop_urbs messages and num_free_entries decreased > until coherent pool exhaustion. Hi Soeren em28xx_stop_urbs() is only called by em28xx_stop_streaming(). em28xx_stop_streaming() is only called by em28xx_stop_feed() when 0 == dvb->nfeeds. em28xx_stop_feed()and em28xx_start_feed() look O.K, dvb->nfeeds is protected by a mutex etc. Now, em28xx_init_isoc() is also called by buffer_prepare(). This uses em28xx_alloc_isoc() to do the actual allocation, and that function sets up the urb such that on completion the function em28xx_irq_callback() is called. 
It looks like there might be issues here: Once the data has been copied out, it resubmits the urb: urb->status = usb_submit_urb(urb, GFP_ATOMIC); if (urb->status) { em28xx_isocdbg("urb resubmit failed (error=%i)\n", urb->status); } However, if usb_submit_urb() fails, it looks like the urb is lost. If you look at other code submitting urbs you have this pattern: rc = usb_submit_urb(isoc_bufs->urb[i], GFP_ATOMIC); if (rc) { em28xx_err("submit of urb %i failed (error=%i)\n", i, rc); em28xx_uninit_isoc(dev, mode); return rc; } Do you have your build such that you would see "urb resubmit failed" in your logs? Are there any? Andrew --===============3566028567437137970==-- From laurent.pinchart@ideasonboard.com Tue Jan 29 11:27:12 2013 From: Laurent Pinchart To: linaro-mm-sig@lists.linaro.org Subject: Re: [Linaro-mm-sig] CDF discussions at FOSDEM Date: Tue, 29 Jan 2013 12:27:15 +0100 Message-ID: <2040571.zD1Nq6nRq3@avalon> In-Reply-To: <52263038.NjR481MbN4@avalon> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============4473537021016411429==" --===============4473537021016411429== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Hello, On Monday 21 January 2013 13:43:27 Laurent Pinchart wrote: > On Thursday 17 January 2013 13:29:27 Daniel Vetter wrote: > > On Thu, Jan 17, 2013 at 9:42 AM, Jani Nikula wrote: > > > On Fri, 11 Jan 2013, Laurent Pinchart wrote: > > >> Would anyone be interested in meeting at the FOSDEM to discuss the > > >> Common Display Framework? There will be a CDF meeting at the ELC at > > >> the end of February, the FOSDEM would be a good venue for European > > >> developers. > > > > > > Yes, count me in, > > > > Jesse, Ville and me should also be around. Do we have a slot fixed > > already? > > I've sent a mail to the FOSDEM organizers to request a hacking room for a > couple of hours Sunday. I'll let you know as soon as I get a reply. Just a quick follow-up. I've received information from the FOSDEM staff, there will be hacking rooms that can be reserved (on-site only) for 1h slots. They unfortunately won't have projectors, as they're not meant for talks. Another option would be to start early on Saturday, the X.org room is reported as being free from 9am to 11am. -- Regards, Laurent Pinchart --===============4473537021016411429==-- From m.szyprowski@samsung.com Tue Jan 29 11:31:18 2013 From: Marek Szyprowski To: linaro-mm-sig@lists.linaro.org Subject: Re: [Linaro-mm-sig] [RFC] ARM: dma-mapping: Return 0 if no ->set_dma_mask() Date: Tue, 29 Jan 2013 12:31:13 +0100 Message-ID: <5107B301.2060802@samsung.com> In-Reply-To: <20130128.103320.355771370936761237.hdoyu@nvidia.com> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============6534850263555270949==" --===============6534850263555270949== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Hello, On 1/28/2013 9:33 AM, Hiroshi Doyu wrote: > struct dma_map_ops iommu_ops doesn't have ->set_dma_mask, which causes > crash when dma_set_mask() is called from some driver. I think that the issue is a bit different. It looks like iommu_ops lacks the mandatory set_dma_mask callback. arm_dma_set_mask() can be used for it, so please update your patch to add this missing callback.
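For reference, the change being asked for amounts to pointing the IOMMU-backed dma_map_ops at the existing ARM implementation; only a sketch here, the real change follows later in this thread as v3 of the patch:

/* arch/arm/mm/dma-mapping.c, sketch: reuse the standard ARM
 * set_dma_mask implementation for the IOMMU-backed dma_map_ops. */
struct dma_map_ops iommu_ops = {
	/* ... existing map/unmap/sync callbacks ... */
	.set_dma_mask	= arm_dma_set_mask,
};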
> Signed-off-by: Hiroshi Doyu > --- > arch/arm/include/asm/dma-mapping.h | 6 +++++- > 1 file changed, 5 insertions(+), 1 deletion(-) > > diff --git a/arch/arm/include/asm/dma-mapping.h b/arch/arm/include/asm/dma-= mapping.h > index a58e0f5..95328bf 100644 > --- a/arch/arm/include/asm/dma-mapping.h > +++ b/arch/arm/include/asm/dma-mapping.h > @@ -32,7 +32,11 @@ static inline void set_dma_ops(struct device *dev, struc= t dma_map_ops *ops) > =20 > static inline int dma_set_mask(struct device *dev, u64 mask) > { > - return get_dma_ops(dev)->set_dma_mask(dev, mask); > + struct dma_map_ops *ops =3D get_dma_ops(dev); > + > + if (ops->set_dma_mask) > + return ops->set_dma_mask(dev, mask); > + return 0; > } > =20 > #ifdef __arch_page_to_dma Best regards --=20 Marek Szyprowski Samsung Poland R&D Center --===============6534850263555270949==-- From libv@skynet.be Tue Jan 29 11:47:19 2013 From: Luc Verhaegen To: linaro-mm-sig@lists.linaro.org Subject: Re: [Linaro-mm-sig] CDF discussions at FOSDEM Date: Tue, 29 Jan 2013 12:47:16 +0100 Message-ID: <20130129114716.GA4239@skynet.be> In-Reply-To: <2040571.zD1Nq6nRq3@avalon> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============5353603929802245957==" --===============5353603929802245957== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable On Tue, Jan 29, 2013 at 12:27:15PM +0100, Laurent Pinchart wrote: > Hello, >=20 > On Monday 21 January 2013 13:43:27 Laurent Pinchart wrote: > >=20 > > I've sent a mail to the FOSDEM organizers to request a hacking room for a > > couple of hours Sunday. I'll let you know as soon as I get a reply. >=20 > Just a quick follow-up. I've received information from the FOSDEM staff, th= ere=20 > will be hacking rooms that can be reserved (on-site only) for 1h slots. The= y=20 > unfortunately won't have projectors, as they're not meant for talks. >=20 > Another option would be to start early on Saturday, the X.org room is repor= ted=20 > as beeing free from 9am to 11am. >=20 > --=20 > Regards, >=20 > Laurent Pinchart As the organizer of the X.org devroom, i would have to state that the=20 latter is impossible. I tend to do a bit of room set-up, like put in=20 some power bars (a limited amount this year, as i only have been given=20 one day and it simply is not worth putting in the cabling for 100=20 sockets, and dragging all that kit over from Nuremberg, for just a=20 single day) and some other things. I need one hour at least for that on=20 saturday morning. DevRooms are also not supposed to open before 11:00 (which is already a=20 massive improvement over 2011 and the years before, where i was happy=20 to be able to put the cabling in at 12:00), and i tend to first get a=20 nod of approval from the on-site devrooms supervisor before i go in and=20 set up the room. So use the hackingroom this year. Things will hopefully be better next=20 year. Luc Verhaegen. 
--===============5353603929802245957==-- From smoch@web.de Tue Jan 29 11:51:37 2013 From: Soeren Moch To: linaro-mm-sig@lists.linaro.org Subject: Re: [Linaro-mm-sig] [PATCH v2] mm: dmapool: use provided gfp flags for all dma_alloc_coherent() calls Date: Tue, 29 Jan 2013 12:50:04 +0100 Message-ID: <5107B76C.6020704@web.de> In-Reply-To: <20130129110228.GA20242@lunn.ch> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============0257605380636531546==" --===============0257605380636531546== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable On 29.01.2013 12:02, Andrew Lunn wrote: >> Now I activated the debug messages in em28xx. From the messages I >> see no correlation of the pool exhaustion and lost sync. Also I >> cannot see any error messages from the em28xx driver. >> I see a lot of init_isoc/stop_urbs (maybe EPG scan?) without >> draining the coherent pool (checked with 'cat >> /debug/dma-api/num_free_entries', which gave stable numbers), but >> after half an hour there are only init_isoc messages without >> corresponding stop_urbs messages and num_free_entries decreased >> until coherent pool exhaustion. > > Hi Soeren > > em28xx_stop_urbs() is only called by em28xx_stop_streaming(). > > em28xx_stop_streaming() is only called by em28xx_stop_feed() > when 0 =3D=3D dvb->nfeeds. > > em28xx_stop_feed()and em28xx_start_feed() look O.K, dvb->nfeeds is > protected by a mutex etc. > > Now, em28xx_init_isoc() is also called by buffer_prepare(). This uses > em28xx_alloc_isoc() to do the actual allocation, and that function > sets up the urb such that on completion the function > em28xx_irq_callback() is called. > > It looks like there might be issues here: > > Once the data has been copied out, it resubmits the urb: > > urb->status =3D usb_submit_urb(urb, GFP_ATOMIC); > if (urb->status) { > em28xx_isocdbg("urb resubmit failed (error=3D%i)\n", > urb->status); > } > > However, if the ubs_submit_urb fails, it looks like the urb is lost. > > If you look at other code submitting urbs you have this pattern: > > rc =3D usb_submit_urb(isoc_bufs->urb[i], GFP_ATOMIC); > if (rc) { > em28xx_err("submit of urb %i failed (error=3D%i)\n= ", i, > rc); > em28xx_uninit_isoc(dev, mode); > return rc; > } > > Do you have your build such that you would see "urb resubmit failed" > in your logs? Are there any? I only had "urb resubmit failed" messages _after_ the coherent pool=20 exhaustion. So I guess something below the usb_submit_urb call is=20 allocating (too much) memory, sometimes. Or can dvb_demux allocate=20 memory and blame orion-ehci for it? Soeren --===============0257605380636531546==-- From laurent.pinchart@ideasonboard.com Tue Jan 29 12:11:00 2013 From: Laurent Pinchart To: linaro-mm-sig@lists.linaro.org Subject: Re: [Linaro-mm-sig] CDF discussions at FOSDEM Date: Tue, 29 Jan 2013 13:11:04 +0100 Message-ID: <1465762.gbACDvF2Tt@avalon> In-Reply-To: <20130129114716.GA4239@skynet.be> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============5990425631835368314==" --===============5990425631835368314== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Hi Luc, On Tuesday 29 January 2013 12:47:16 Luc Verhaegen wrote: > On Tue, Jan 29, 2013 at 12:27:15PM +0100, Laurent Pinchart wrote: > > On Monday 21 January 2013 13:43:27 Laurent Pinchart wrote: > > > I've sent a mail to the FOSDEM organizers to request a hacking room for > > > a couple of hours Sunday. I'll let you know as soon as I get a reply. 
> >=20 > > Just a quick follow-up. I've received information from the FOSDEM staff, > > there will be hacking rooms that can be reserved (on-site only) for 1h > > slots. They unfortunately won't have projectors, as they're not meant for > > talks. > >=20 > > Another option would be to start early on Saturday, the X.org room is > > reported as beeing free from 9am to 11am. >=20 > As the organizer of the X.org devroom, i would have to state that the > latter is impossible. I tend to do a bit of room set-up, like put in > some power bars (a limited amount this year, as i only have been given > one day and it simply is not worth putting in the cabling for 100 > sockets, and dragging all that kit over from Nuremberg, for just a > single day) and some other things. I need one hour at least for that on > saturday morning. No worries. It was just an idea. > DevRooms are also not supposed to open before 11:00 (which is already a > massive improvement over 2011 and the years before, where i was happy > to be able to put the cabling in at 12:00), and i tend to first get a > nod of approval from the on-site devrooms supervisor before i go in and > set up the room. >=20 > So use the hackingroom this year. Things will hopefully be better next > year. Saturday is pretty much out of question, given that most developers intereste= d=20 in CDF will want to attend the X.org talks. I'll try to get a room for Sunday= =20 then, but I'm not sure yet what time slots will be available. It would be=20 helpful if people interested in CDF discussions could tell me at what time=20 they plan to leave Brussels on Sunday. --=20 Regards, Laurent Pinchart --===============5990425631835368314==-- From hdoyu@nvidia.com Tue Jan 29 12:27:42 2013 From: Hiroshi Doyu To: linaro-mm-sig@lists.linaro.org Subject: [Linaro-mm-sig] [v2 1/1] ARM: dma-mapping: Call arm_dma_set_mask() if no ->set_dma_mask() Date: Tue, 29 Jan 2013 14:27:18 +0200 Message-ID: <1359462438-21006-1-git-send-email-hdoyu@nvidia.com> In-Reply-To: <5107B301.2060802@samsung.com> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============3294605965078175293==" --===============3294605965078175293== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable struct dma_map_ops iommu_ops doesn't have ->set_dma_mask, which causes crash when dma_set_mask() is called from some driver. 
Signed-off-by: Hiroshi Doyu --- arch/arm/include/asm/dma-mapping.h | 11 ++++++++--- 1 file changed, 8 insertions(+), 3 deletions(-) diff --git a/arch/arm/include/asm/dma-mapping.h b/arch/arm/include/asm/dma-mapping.h index 5b579b9..63cc49c 100644 --- a/arch/arm/include/asm/dma-mapping.h +++ b/arch/arm/include/asm/dma-mapping.h @@ -14,6 +14,7 @@ #define DMA_ERROR_CODE (~0) extern struct dma_map_ops arm_dma_ops; extern struct dma_map_ops arm_coherent_dma_ops; +extern int arm_dma_set_mask(struct device *dev, u64 dma_mask); static inline struct dma_map_ops *get_dma_ops(struct device *dev) { @@ -32,7 +33,13 @@ static inline void set_dma_ops(struct device *dev, struct dma_map_ops *ops) static inline int dma_set_mask(struct device *dev, u64 mask) { - return get_dma_ops(dev)->set_dma_mask(dev, mask); + struct dma_map_ops *ops = get_dma_ops(dev); + BUG_ON(!ops); + + if (ops->set_dma_mask) + return ops->set_dma_mask(dev, mask); + + return arm_dma_set_mask(dev, mask); } #ifdef __arch_page_to_dma @@ -112,8 +119,6 @@ static inline void dma_free_noncoherent(struct device *dev, size_t size, extern int dma_supported(struct device *dev, u64 mask); -extern int arm_dma_set_mask(struct device *dev, u64 dma_mask); - /** * arm_dma_alloc - allocate consistent memory for DMA * @dev: valid struct device pointer, or NULL for ISA and EISA-like devices -- 1.7.9.5 --===============3294605965078175293==-- From m.szyprowski@samsung.com Tue Jan 29 12:47:38 2013 From: Marek Szyprowski To: linaro-mm-sig@lists.linaro.org Subject: Re: [Linaro-mm-sig] [v2 1/1] ARM: dma-mapping: Call arm_dma_set_mask() if no ->set_dma_mask() Date: Tue, 29 Jan 2013 13:47:26 +0100 Message-ID: <5107C4DE.7060607@samsung.com> In-Reply-To: <1359462438-21006-1-git-send-email-hdoyu@nvidia.com> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============6471372153359131777==" --===============6471372153359131777== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Hello, On 1/29/2013 1:27 PM, Hiroshi Doyu wrote: > struct dma_map_ops iommu_ops doesn't have ->set_dma_mask, which causes > crash when dma_set_mask() is called from some driver.
> > Signed-off-by: Hiroshi Doyu > --- > arch/arm/include/asm/dma-mapping.h | 11 ++++++++--- > 1 file changed, 8 insertions(+), 3 deletions(-) > > diff --git a/arch/arm/include/asm/dma-mapping.h b/arch/arm/include/asm/dma-= mapping.h > index 5b579b9..63cc49c 100644 > --- a/arch/arm/include/asm/dma-mapping.h > +++ b/arch/arm/include/asm/dma-mapping.h > @@ -14,6 +14,7 @@ > #define DMA_ERROR_CODE (~0) > extern struct dma_map_ops arm_dma_ops; > extern struct dma_map_ops arm_coherent_dma_ops; > +extern int arm_dma_set_mask(struct device *dev, u64 dma_mask); > =20 > static inline struct dma_map_ops *get_dma_ops(struct device *dev) > { > @@ -32,7 +33,13 @@ static inline void set_dma_ops(struct device *dev, struc= t dma_map_ops *ops) > =20 > static inline int dma_set_mask(struct device *dev, u64 mask) > { > - return get_dma_ops(dev)->set_dma_mask(dev, mask); > + struct dma_map_ops *ops =3D get_dma_ops(dev); > + BUG_ON(!ops); > + > + if (ops->set_dma_mask) > + return ops->set_dma_mask(dev, mask); > + > + return arm_dma_set_mask(dev, mask); > } > =20 > #ifdef __arch_page_to_dma > @@ -112,8 +119,6 @@ static inline void dma_free_noncoherent(struct device *= dev, size_t size, > =20 > extern int dma_supported(struct device *dev, u64 mask); > =20 > -extern int arm_dma_set_mask(struct device *dev, u64 dma_mask); > - > /** > * arm_dma_alloc - allocate consistent memory for DMA > * @dev: valid struct device pointer, or NULL for ISA and EISA-like devic= es It can be done much simpler, please just add '.set_dma_mask =3D=20 arm_dma_set_mask,' to iommu_ops declaration. There is no point complicating the header files. Best regards --=20 Marek Szyprowski Samsung Poland R&D Center --===============6471372153359131777==-- From hdoyu@nvidia.com Tue Jan 29 12:57:21 2013 From: Hiroshi Doyu To: linaro-mm-sig@lists.linaro.org Subject: [Linaro-mm-sig] [v3 1/1] ARM: dma-mapping: Set arm_dma_set_mask() for iommu->set_dma_mask() Date: Tue, 29 Jan 2013 14:57:07 +0200 Message-ID: <1359464227-19669-1-git-send-email-hdoyu@nvidia.com> In-Reply-To: <5107C4DE.7060607@samsung.com> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============7281964724873337378==" --===============7281964724873337378== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: 7bit struct dma_map_ops iommu_ops doesn't have ->set_dma_mask, which causes crash when dma_set_mask() is called from some driver. 
Signed-off-by: Hiroshi Doyu --- arch/arm/mm/dma-mapping.c | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c index 6b2fb87..5dfc71f 100644 --- a/arch/arm/mm/dma-mapping.c +++ b/arch/arm/mm/dma-mapping.c @@ -1730,6 +1730,8 @@ struct dma_map_ops iommu_ops = { .unmap_sg = arm_iommu_unmap_sg, .sync_sg_for_cpu = arm_iommu_sync_sg_for_cpu, .sync_sg_for_device = arm_iommu_sync_sg_for_device, + + .set_dma_mask = arm_dma_set_mask, }; struct dma_map_ops iommu_coherent_ops = { @@ -1743,6 +1745,8 @@ struct dma_map_ops iommu_coherent_ops = { .map_sg = arm_coherent_iommu_map_sg, .unmap_sg = arm_coherent_iommu_unmap_sg, + + .set_dma_mask = arm_dma_set_mask, }; /** -- 1.7.9.5 --===============7281964724873337378==-- From daniel.vetter@ffwll.ch Tue Jan 29 14:19:39 2013 From: Daniel Vetter To: linaro-mm-sig@lists.linaro.org Subject: Re: [Linaro-mm-sig] CDF discussions at FOSDEM Date: Tue, 29 Jan 2013 15:19:38 +0100 Message-ID: In-Reply-To: <1465762.gbACDvF2Tt@avalon> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============5320997072428910834==" --===============5320997072428910834== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable On Tue, Jan 29, 2013 at 1:11 PM, Laurent Pinchart wrote: >> DevRooms are also not supposed to open before 11:00 (which is already a >> massive improvement over 2011 and the years before, where i was happy >> to be able to put the cabling in at 12:00), and i tend to first get a >> nod of approval from the on-site devrooms supervisor before i go in and >> set up the room. >> >> So use the hackingroom this year. Things will hopefully be better next >> year. > > Saturday is pretty much out of question, given that most developers interested > in CDF will want to attend the X.org talks. I'll try to get a room for Sunday > then, but I'm not sure yet what time slots will be available. It would be > helpful if people interested in CDF discussions could tell me at what time > they plan to leave Brussels on Sunday. I'll stay till Monday early morning, so no requirements from me. Adding a bunch of Intel guys who're interested, too. -Daniel -- Daniel Vetter Software Engineer, Intel Corporation +41 (0) 79 365 57 48 - http://blog.ffwll.ch --===============5320997072428910834==-- From daniel.vetter@ffwll.ch Tue Jan 29 15:48:33 2013 From: Daniel Vetter To: linaro-mm-sig@lists.linaro.org Subject: Re: [Linaro-mm-sig] CDF discussions at FOSDEM Date: Tue, 29 Jan 2013 16:50:40 +0100 Message-ID: <20130129155040.GR14766@phenom.ffwll.local> In-Reply-To: MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============6783532570008152557==" --===============6783532570008152557== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable On Tue, Jan 29, 2013 at 3:19 PM, Daniel Vetter wrote: > On Tue, Jan 29, 2013 at 1:11 PM, Laurent Pinchart > wrote: >>> DevRooms are also not supposed to open before 11:00 (which is already a >>> massive improvement over 2011 and the years before, where i was happy >>> to be able to put the cabling in at 12:00), and i tend to first get a >>> nod of approval from the on-site devrooms supervisor before i go in and >>> set up the room. >>> >>> So use the hackingroom this year. Things will hopefully be better next >>> year. >> >> Saturday is pretty much out of question, given that most developers interested >> in CDF will want to attend the X.org talks.
I'll try to get a room for Sunday >> then, but I'm not sure yet what time slots will be available. It would be >> helpful if people interested in CDF discussions could tell me at what time >> they plan to leave Brussels on Sunday. > > I'll stay till Monday early morning, so no requirements from me. Adding a > bunch of Intel guys who're interested, too. Ok, in the interest of pre-heating the discussion a bit I've written down my thoughts about display slave drivers. Adding a few more people and lists to make sure I haven't missed anyone ... Cheers, Daniel -- Display Slaves ============== A highly biased quick analysis from Daniel Vetter. A quick discussion about the issues surrounding some common framework for display slaves like panels, hdmi/DP/whatever encoders, ... Since these external chips are very often reused across different SoCs, it would be beneficial to share slave driver code between different chipset drivers. Caveat Emptor! --------------- Current output types and slave encoders already have to deal with a plethora of special cases and strange features. To avoid ending up with something not suitable for everyone, we should look at what's all supported already and how we could possibly deal with those things: - audio embedded into the display stream (hdmi/dp). x86 platforms with the HD Audio framework rely on ELD and forwarding certain events as interrupts through the hw between the display and audio side ... - hdmi/dp helpers: HDMI/DP are both standardized output connectors with nice complexity. DP is mostly about handling dp aux transactions and DPCD registers, hdmi mostly about infoframes and how to correctly set them up from the mode + edid. - dpms is 4 states in drm, even more in fbdev afaict, but real hw only supports on/off nowadays ... how should/do we care? - Fancy modes and how to represent them. Random list of things we need to represent somehow: broadcast/reduced rgb range for hdmi, yuv modes, different bpc modes (and handling how this affects bandwidth/clocks, e.g. i915 auto-dithers to 6bpc on DP if there's not enough), 3D hdmi modes (patches have floated on dri-devel for this), overscan compensation. Many of these things link in with e.g. the helper libraries for certain outputs, e.g. discovering DP sink capabilities or setting up the correct hdmi infoframe. - How to expose random madness as properties, e.g. backlight controllers, broadcast mode, enable/disable embedded audio (some screens advertise it, but don't like it). For additional fun I expect different users of a display slave driver to expect a different set of "standardized" properties. - Debug support: Register dumping, exposing random debugfs files, tracing. Preferably somewhat unified to keep things sane, since most often slave drivers are rather simple, but we expect quite a few different ones. - Random metadata surrounding a display sink, like output type. Or flags for support special modes (h/vsync polarity, interlaced/doublescan, pixel doubling, ...). - mode_fixup: Used a lot in drm-land to allow encoders to change the input mode, e.g. for lvds encoders which can do upscaling, or if the encoder supports progressive input with interlaced output and similar fancy stuff. See e.g. the intel sdvo encoder chip support. - Handling different control buses like i2c, direct access (not seen that yet), DSI, DP aux, some other protocols. - Handling of different display data standards like dsi (intel invented a few of its own, I'm sure we're not the only ones).
- hpd support/polling. Depending upon design hpd handling needs to be cooperative between slave and master, or is a slave only thing (which means the slave needs to be able to poke the master when something changes). Similarly, masters need to know which slaves require output polling. - Initializing of slave drivers: of/devicetree based, compiled-in static tables in the driver, dynamic discovery by i2c probing, lookup through some platform-specific firmware table (ACPI). Related is how to forward random platform init values to the drivers from these sources (e.g. the panel fixed modes) to the slave driver. - get_hw_state support. One of the major points in the i915 modeset rewrite which landed in 3.7 is that a lot of the hw state can be cross-checked with the sw tracking. Helps tremendously in tracking down driver (writer) fumbles ;-) - PSR/dsi command mode and how the start/stop frame dance should be handled. - Random funny expectations around the modeset sequence, i.e. when (and how often) the video stream should be enabled/disabled. In the worst case this needs some serious cooperation between master and slaves. Even more fun for trained output links like DP where a re-training and so restarting parts - or even the complete - modeset sequence could be required to happen any time. - There's more I'm sure, gfx hw tends to be insane ... Wishful Thinking ---------------- Ignoring reality, let's look at what the perfect display slave framework should achieve to be useful: - Should be simple to share code between different master drivers - display slave drivers tend to be boring assemblies of register definitions and banging the right magic values into them. Which also means that we should aim for a high level of unification so that using, understanding and debugging drivers is easy. - Since we expect drivers to be simple, even little amounts of impedance-matching code can kill the benefits of the shared code. Furthermore it should be possible to extend drivers with whatever subset of the above feature list is required by the subsystem/driver using a slave driver. Again, without incurring unnecessary amounts of impedance matching. Ofc, not all users of slave drivers will be able to use all the crazy features. Reality Check ------------- We already have tons of different slave encoder frameworks sprinkled all over the kernel, which support different sets of crazy features and are used by different subsystems. Furthermore each subsystem seems to have come up with its own way to describe metadata like display modes, all sorts of type enums, properties, helper functions for special output types. Conclusions: - Throwing away and rewriting all the existing code seems unwise, but we'll likely need tons of existing drivers with the new framework. - Unifying the metadata handling will be _really_ painful since it's deeply ingrained into each driver. Not unifying it otoh will lead to colossal amounts of impedance matching code. - The union of all the slave features used by all the existing frameworks is impressive, but also highly non-overlapping. Likely everyone has his own utterly "must-have" feature. Proposal -------- I have to admit that I'm not too much in favour of the current CDF. It has a bit of midlayer smell to it imo, and looks like it will make many of the mentioned corner-cases messy to enable. Also looking at things like the proposed generic video mode structure, it seems to lack some features e.g. drm_mode already has.
Which does not include new insanity like 3d modes or some advanced infoframes stuff. So instead I'll throw around a few ideas and principles: - s/framework/helper library/ Yes, I really hate midlayers and just coming up with a different name seems to go a long way towards saner apis. - I think we should reduce the scope of the initial version massively and instead increase the depth to fully cover everything. So instead of something which covers everything of a limited use-case from discovery, setup, modes handling and mode-setting, concentrate on only one operation. The actual mode-set seems to be the best case, since it usually involves a lot of the boring register bashing code. The first interface version would ignore everything else completely. - Shoot for the most powerful api for that little piece we're starting with, make it the canonical thing. I.e. for modeset we need a video mode thing, and imo it only makes sense if that's the native data structure for all involved subsystems. At least it should be the aim. Yeah, that means tons of work. Even more important is that the new datastructure supports every feature already supported in some insane way in one of the existing subsystems. Imo if we keep different datastructures everywhere, the impedance matching will eat up most of the code sharing benefits. - Since converting all involved subsystems we should imo just forget about fbdev. For obvious reasons I'm also leaning towards simply ditching the drm prefix from the drm defines and using those ;-) - I haven't used it in a driver yet, but mandating regmap (might need some improvements) should get us decent unification between drivers. And hopefully also an easy way to have unified debug tools. regmap already has trace points and a few other cool things. - We need some built-in way to drill direct paths from the master display driver to the slave driver for the different subsystems. Jumping through hoops (or even making it impossible) to extend drivers in funny ways would be a big step backwards. - Locking will be fun, especially once we start to add slave->master callbacks (e.g. for stopping/starting the display signal, hpd interrupts, ...). As a general rule I think we should aim for no locks in the slave driver, with the master owning the slave and ensuring exclusion with its own locks. Slaves which use shared resources and so need locks (everything doing i2c actually) may not call master callback functions with locks held. Then, once we've gotten things off the ground and have some slave encoder drivers which are actually shared between different subsystems/drivers/platforms or whatever, we can start to organically grow more common interfaces. Ime it's much easier to simply extract decent interfaces after the fact than trying to come up with them upfront. Now let's pour this into a more concrete form: struct display_slave_ops { /* modeset ops, e.g. prepare/modeset/commit from drm */ }; struct display_slave { struct display_slave_ops *ops; void *driver_private; }; I think even just that will be worth a lot of flames to come up with a good and agreeable interface for everyone. It'll probably be satisfactory to no one though. Then each subsystem adds its own magic, e.g. struct drm_encoder_slave { struct display_slave slave; /* everything else which is there already and not covered by the display * slave interface. */ }; Other subsystems/drivers like DSS would embed the struct display_slave in their own equivalent data-structure.
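To make the calling convention a bit more concrete, here is a purely illustrative sketch of how a master driver could drive such a slave during a modeset, with the ops struct filled in a bit - the op names, struct my_crtc and struct videomode are made up for the example, none of this is an existing interface:

struct videomode;	/* placeholder for whatever common mode struct we end up with */

struct display_slave_ops {
	/* illustrative modeset ops, roughly prepare/modeset/commit */
	int (*prepare)(struct display_slave *slave);
	int (*mode_set)(struct display_slave *slave, const struct videomode *vm);
	int (*commit)(struct display_slave *slave);
};

struct my_crtc {
	struct display_slave *slave;	/* the attached panel/encoder slave */
};

static int my_crtc_mode_set(struct my_crtc *crtc, const struct videomode *vm)
{
	struct display_slave *slave = crtc->slave;
	int ret;

	/* master owns the locking, slave ops get called without slave-side locks */
	ret = slave->ops->prepare(slave);
	if (ret)
		return ret;

	ret = slave->ops->mode_set(slave, vm);
	if (ret)
		return ret;

	/* program the display controller here, then let the slave light up */
	return slave->ops->commit(slave);
}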
So now we have the little problem that we want to have one single _slave_ driver codebase, but it should be able to support n different interfaces and potentially even more ways to be initialized and set up. Here's my idea how this could be tackled: 1. Smash everything into one driver file/directory. 2. Use a common driver structure which contains pointers/members for all possible use-cases. For each interface the driver supports, it'll allocate the same structure and put the pointer into foo->slave.driver_private. This way different entry points from different interfaces could use the same internal functions since all deal with the same structure. 3. Add whatever magic is required to set up the driver for different platforms. E.g. an of match, drm_encoder_slave i2c match and some direct function to set up hardcoded cases could all live in the same file. Getting the kernel Kconfig stuff right will be fun, but we should get by with adding tons more stub functions. That might mean that an of/devicetree platform build carries around a bit of gunk for x86 vbt matching maybe, but imo that shouldn't ever get out of hand size-wise. Once we have a few such shared drivers in place, and even more important, unified that part of the subsystem using them a bit, it should be painfully obvious which is the next piece to extract into the common display slave library interface. After all, they'll live right next to each other in the driver sources ;-) Eventually we should get into the real fun part like dsi bus support or command mode/PSR ... Those advanced things probably need to be optional. But imo the key part is that we aim for real unification in the users of display_slaves, so internally convert over everything to the new structures. That should also make code-sharing much easier, so that we could move existing helper functions to the common display helper library. Bikesheds --------- I.e. the boring details: - Where to put slave drivers? I'll vote for anything which does not include drivers/video ;-) - Maybe we want to start with a different part than modeset, or add a bit more on top. Though I really think we should start minimally and modesetting seemed like the most useful piece of the puzzle. - Naming the new interfaces. I'll have more asbestos suits on order ... - Can we just copy the new "native" interface structs from drm, pls? -- Daniel Vetter Software Engineer, Intel Corporation +41 (0) 79 365 57 48 - http://blog.ffwll.ch --===============6783532570008152557==-- From ville.syrjala@linux.intel.com Tue Jan 29 16:16:00 2013 From: Ville Syrjälä To: linaro-mm-sig@lists.linaro.org Subject: Re: [Linaro-mm-sig] CDF discussions at FOSDEM Date: Tue, 29 Jan 2013 18:15:54 +0200 Message-ID: <20130129161554.GR9135@intel.com> In-Reply-To: MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============2034952294842673089==" --===============2034952294842673089== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable On Tue, Jan 29, 2013 at 03:19:38PM +0100, Daniel Vetter wrote: > On Tue, Jan 29, 2013 at 1:11 PM, Laurent Pinchart > wrote: > >> DevRooms are also not supposed to open before 11:00 (which is already a > >> massive improvement over 2011 and the years before, where i was happy > >> to be able to put the cabling in at 12:00), and i tend to first get a > >> nod of approval from the on-site devrooms supervisor before i go in and > >> set up the room. > >> > >> So use the hackingroom this year.
Things will hopefully be better next > >> year. > > > > Saturday is pretty much out of question, given that most developers interested > > in CDF will want to attend the X.org talks. I'll try to get a room for Sunday > > then, but I'm not sure yet what time slots will be available. It would be > > helpful if people interested in CDF discussions could tell me at what time > > they plan to leave Brussels on Sunday. > > I'll stay till Monday early morning, so no requirements from me. Adding a > bunch of Intel guys who're interested, too. My return flight isn't until Monday afternoon. -- Ville Syrjälä Intel OTC --===============2034952294842673089==-- From marcus.xm.lorentzon@stericsson.com Tue Jan 29 19:35:58 2013 From: Marcus Lorentzon To: linaro-mm-sig@lists.linaro.org Subject: Re: [Linaro-mm-sig] CDF discussions at FOSDEM Date: Tue, 29 Jan 2013 20:35:28 +0100 Message-ID: <51082480.9070500@stericsson.com> In-Reply-To: <20130129155040.GR14766@phenom.ffwll.local> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============5972085523174835571==" --===============5972085523174835571== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable On 01/29/2013 04:50 PM, Daniel Vetter wrote: > On Tue, Jan 29, 2013 at 3:19 PM, Daniel Vetter wrote: > Ok, in the interest of pre-heating the discussion a bit I've written down > my thoughts about display slave drivers. Adding a few more people and > lists to make sure I haven't missed anyone ... > > Cheers, Daniel > -- > Display Slaves > ============== > > A highly biased quick analysis from Daniel Vetter. And here is my biased version as one of the initiators of the idea of CDF. I work with ARM SoCs (ST-Ericsson) and mobile devices (DSI/DPI panels). Of course some of these have the "PC" type of encoder devices like HDMI and eDP or even VGA. But from what I have seen most of these encoders are used by a few different SoCs (GPUs?). And using these types of encoders was quite straightforward from DRM encoders. My goal was to get some common code out of all the "mobile" panel encoders or "display module driver ICs" as some call them. Instead of tens of drivers (my assumption) you now have hundreds of drivers often using MIPI DSI/DPI/DBI or some similar interface. And lots of new ones come each year. There are probably more panel types than there are products on the market, since most products use more than one type of panel on the same product to secure sourcing for mass production (note multiple panels use the same driver IC). So that was the initial goal, to cover all of these, most of which are maintained per SoC/CPU out of kernel.org. If HDMI/DP etc fits in this framework, then that is just a nice bonus. I just wanted to give my history so we are not trying to include too many different types of encoders without an actual need. Maybe the I2C drm stuff is good enough for that type of encoder. But again, it would be nice with one suit that fits all ... I also like the idea to start out small. But if no support is added initially for the mobile panel types, then I think it will be hard to get all vendors to start pushing those drivers, because the benefit of doing so would be small. But maybe the CDF work with Linaro and Laurent could just be a second step of adding the necessary details to your really simple baseline.
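Just to give a feel for how small the simple mobile panel case really is, here is a rough sketch - all names are invented, this is not a proposal for the actual API - of the kind of panel ops I have in mind:

struct videomode;	/* placeholder for a common video mode struct */
struct panel_slave;

struct panel_slave_ops {
	int (*enable)(struct panel_slave *panel);	/* power up + init sequence */
	int (*disable)(struct panel_slave *panel);
	/* most mobile panels have exactly one fixed mode and no EDID */
	const struct videomode *(*get_fixed_mode)(struct panel_slave *panel);
};

struct panel_slave {
	const struct panel_slave_ops *ops;
	void *driver_data;
};

That, plus a way to talk to the bus the panel sits on, covers the vast majority of the driver ICs I mentioned above.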
And I also favor the helpers over framework=20 approach but I miss a big piece which is the ops for panel drivers to=20 call back to display controller (the video source stuff). Some inline comments below. > > A quick discussion about the issues surrounding some common framework for > display slaves like panels, hdmi/DP/whatever encoders, ... Since these exte= rnal > chips are very often reused accross different SoCs, it would be beneficial = to > share slave driver code between different chipset drivers. > > Caveat Emperor! > --------------- > > Current output types and slave encoders already have to deal with a pletori= a of > special cases and strange features. To avoid ending up with something not > suitable for everyone, we should look at what's all supported already and h= ow we > could possibly deal with those things: > > - audio embedded into the display stream (hdmi/dp). x86 platforms with the = HD > Audio framework rely on ELD and forwarding certain events as interrupts > through the hw between the display and audio side ... I would assume any driver handling audio/video/cec like HDMI would hook=20 itself up as an mfd device. And one of those exposed functions would be=20 the CDF part. Instead of pushing everything into the "display parts". At=20 least that is sort of what we do today and it keeps the audio, cec and=20 display parts nicely separated. > - hdmi/dp helpers: HDMI/DP are both standardized output connectors with nice > complexity. DP is mostly about handling dp aux transactions and DPCD > registers, hdmi mostly about infoframes and how to correctly set them up= from > the mode + edid. Yes, it is a mess. But we have managed to hide that below a simple panel=20 API similar to CDF/omap so far. > - dpms is 4 states in drm, even more in fbdev afaict, but real hw only supp= orts > on/off nowadays ... how should/do we care? Agreed, they should all really go away unless someone find a valid use case. > - Fancy modes and how to represent them. Random list of things we need to > represent somehow: broadcast/reduced rbg range for hdmi, yuv modes, diff= erent > bpc modes (and handling how this affects bandwidth/clocks, e.g. i915 > auto-dithers to 6bpc on DP if there's not enough), 3D hdmi modes (patche= s have > floated on dri-devel for this), overscan compensation. Many of these thi= ngs > link in with e.g. the helper libraries for certain outputs, e.g. discove= ring > DP sink capabilities or setting up the correct hdmi infoframe. Are you saying drm modes doesn't support this as of today? I have not=20 used these types of modes in DRM yet. Maybe the common video mode=20 patches is a good start. > - How to expose random madness as properties, e.g. backlight controllers, > broadcast mode, enable/disable embedded audio (some screens advertise it= , but > don't like it). For additional fun I expect different users of a display= slave > driver to expect different set of "standardized" properties. Some standardized properties would be nice :). Whatever is not standard=20 doesn't really matter. > - Debug support: Register dumping, exposing random debugfs files, tracing. > Preferably somewhat unified to keep things sane, since most often slave > drivers are rather simple, but we expect quite a few different ones. > > - Random metadata surrounding a display sink, like output type. Or flags for > support special modes (h/vsync polarity, interlaced/doublescan, pixel > doubling, ...). 
One thing that is needed is all the meta data related to the=20 control/data interface between display controller and encoder. Because=20 this has to be unified per interface type like DSI/DBI so the same CDF=20 driver can setup different display controllers. But I hope we could=20 split the "CDF API" (panel ops) from the control/data bus API=20 (host/source ops or CDF video source). > - mode_fixup: Used a lot in drm-land to allow encoders to change the input = mode, > e.g. for lvds encoders which can do upscaling, or if the encoder supports > progressive input with interlaced output and similar fancy stuff. See e.= g. the > intel sdvo encoder chip support. > > - Handling different control buses like i2c, direct access (not seen that y= et), > DSI, DP aux, some other protocols. This is actually the place I wanted to start. With vendor specific panel=20 drivers using common ops to access the bus (DSI/I2C/DBI etc). Then once=20 we have a couple of panel drivers we could unify the API making them do=20 their stuff (like the current CDF ops). Or even better, maybe these two=20 could be made completely separate and worked on in parallel. > - Handling of different display data standards like dsi (intel invented a f= ew of > its own, I'm sure we're not the only ones). > > - hpd support/polling. Depending upon desing hpd handling needs to be > cooperative between slave and master, or is a slave only thing (which me= ans > the slave needs to be able to poke the master when something changes). > Similarly, masters need to know which slaves require output polling. I prefer a slave only thing forwarded to the drm encoder which I assume=20 would be the drm equivalent of the display slave. At least I have not=20 seen any need to involve the display controller in hpd (which I assume=20 you mean by master). > - Initializing of slave drivers: of/devicetree based, compiled-in static ta= bles > in the driver, dynamic discovery by i2c probing, lookup through some > platform-specific firmware table (ACPI). Related is how to forward random > platform init values to the drivers from these sources (e.g. the panel f= ixed > modes) to the slave driver. I'm not that familiar with the bios/uefi world. But on our SoCs we=20 always have to show a splash screen from the boot loader (like bios,=20 usually little kernel, uboot etc). And so all probing is done by=20 bootloader and HW is running when kernel boot. And you are not allowed=20 to disrupt it either because that would yield visual glitches during=20 boot. So some way or the other the boot loader would need to transfer=20 the state to the kernel or you would have to reverse engineer the state=20 from hw at kernel probe. > - get_hw_state support. One of the major point in the i915 modeset rewrite = which > landed in 3.7 is that a lot of the hw state can be cross-checked with th= e sw > tracking. Helps tremendously in tracking down driver (writer) fumbles ;-) This sounds more like a display controller feature than a display slave=20 feature. > - PSR/dsi command mode and how the start/stop frame dance should be handled. Again, a vital piece for the many mobile driver ICs. And I think we have=20 several sources (STE, Renesas, TI, Samsung, ...) on how to do this and=20 tested in many products. So I hope this could be an early step in the=20 evolution. > - Random funny expectations around the modeset sequence, i.e. when (and how > often) the video stream should be enabled/disabled. In the worst case th= is > needs some serious cooperation between master and slaves. 
Even more fun for > trained output links like DP where a re-training and so restarting parts - or > even the complete - modeset sequence could be required to happen any time. > Again, we have several samples of platforms already doing this stuff. So we should be able to get a draft pretty early. From my experience, when to enable/disable the video stream could vary between versions of the same display controller. So I think it could be pretty hairy to get a single solution for all. Instead I think we need to leave some room for the master/slave to decide when to enable/disable. And to be able to do this we should try to have pretty specific ops on the slave and master. I'm not sure prepare/modeset/commit is specific enough unless we document what is expected to be done by the slave in each of these. > > - There's more I'm sure, gfx hw tends to be insane ... Yes, and one is the chain of slaves issue that is "common" on mobile systems. One example I have is dispc->dsi->dsi2dsi-bridge->dsi2lvds-bridge->lvds-panel. My proposal to hide this complexity in CDF was aggregate drivers. So from drm there will only be one master (dispc) and one slave (dsi2dsi). Then dsi2dsi will itself use another CDF/slave driver to talk to its slave. This way the top master (dispc) driver never has to care about this complexity. Whether this is possible to hide in practice we will see ... > > Wishful Thinking > ---------------- > > Ignoring reality, let's look at what the perfect display slave framework should > achieve to be useful: > > - Should be simple to share code between different master drivers - display slave > drivers tend to be boring assemblies of register definitions and banging the > right magic values into them. Which also means that we should aim for a high > level of unification so that using, understanding and debugging drivers is > easy. > > - Since we expect drivers to be simple, even little amounts of > impedance-matching code can kill the benefits of the shared code. Furthermore > it should be possible to extend drivers with whatever subset of the above > feature list is required by the subsystem/driver using a slave driver. Again, > without incurring unnecessary amounts of impedance matching. Ofc, not all > users of slave drivers will be able to use all the crazy features. This is also my fear. Which is why I wanted to start with one slave interface at a time. And maybe even have different "API"s for different types of panels. Like classic I2C encoders, DSI command mode "smart" panels, DSI video mode, DPI ... and then do another layer of helpers in drm encoders. That way a DSI command mode panel wouldn't have to be forced into the same shell as an I2C HDMI encoder as they are very different with very little overlap. > Reality Check > ------------- > > We already have tons of different slave encoder frameworks sprinkled all over > the kernel, which support different sets of crazy features and are used by > different. Furthermore each subsystem seems to have come up with its own way to > describe metadata like display modes, all sorts of type enums, properties, > helper functions for special output types. > > Conclusions: > > - Throwing away and rewriting all the existing code seems unwise, but we'll > likely need tons of existing drivers with the new framework. > > - Unifying the metadata handling will be _really_ painful since it's deeply > ingrained into each driver.
Not unifying it otoh will lead to colossal a= mounts > of impendance matching code. > > - The union of all the slave features used by all the existing frameworks is > impressive, but also highly non-overlapping. Likely everyone has his own > utterly "must-have" feature. > > Proposal > -------- > > I have to admit that I'm not too much in favour of the current CDF. It has = a bit > of midlayer smell to it imo, and looks like it will make many of the mentio= ned > corner-case messy to enable. Also looking at things the proposed generic vi= deo > mode structure it seems to lack some features e.g. drm_mode already has. Wh= ich > does not include new insanity like 3d modes or some advanced infoframes stu= ff. > > So instead I'll throw around a few ideas and principles: > > - s/framework/helper library/ Yes, I really hate midlayers and just coming = up > with a different name seems to go a long way towards saner apis. Me like, but I hope you agree to keep calling it CDF until it is merged.=20 We could call it Common Display Frelpers if you like ;) > - I think we should reduce the scope of the intial version massively and in= stead > increase the depth to fully cover everything. So instead of something wh= ich > covers everything of a limited use-case from discover, setup, modes hand= ling > and mode-setting, concentrate on only one operation. The actual mode-set= seems > to be the best case, since it usually involves a lot of the boring regis= ter > bashing code. The first interface version would ignore everything else > completely. To also cover and be useful to mobile panels I suggest starting with=20 on/off using a fixed mode initially. Because modeset is not used for=20 most mobile panels (they only have one mode). > - Shot for the most powerful api for that little piece we're starting with,= make > it the canonical thing. I.e. for modeset we need a video mode thing, and= imo > it only makes sense if that's the native data structure for all invovled > subsystems. At least it should be the aim. Yeah, that means tons of work= . Even > more important is that the new datastructure supports every feature alre= ady > support in some insane way in one of the existing subsystems. Imo if we = keep > different datastructures everywhere, the impendance matching will eat up= most > of the code sharing benefits. > > - Since converting all invovled subsystems we should imo just forget about > fbdev. For obvious reasons I'm also leaning towards simply ditching the > drm prefix from the drm defines and using those ;-) > > - I haven't used it in a driver yet, but mandating regmap (might need some > improvements) should get us decent unification between drivers. And hope= fully > also an easy way to have unified debug tools. regmap already has trace p= oints > and a few other cool things. Guideline for I2C slave drivers maybe? Do we really want to enforce how=20 drivers are implemented when it doesn't affect the API? Also, I don't think it fits in general for slaves. Since DSI/DBI have=20 not only registers but also operations you can execute using control=20 interface. > - We need some built-in way to drill direct paths from the master display d= river > to the slave driver for the different subsystems. Jumping through hoops = (or > even making it impossible) to extend drivers in funny ways would be a bi= g step > backwards. > > - Locking will be fun, especially once we start to add slave->master callba= cks > (e.g. for stopping/starting the display signal, hpd interrupts, ...). 
As= a > general rule I think we should aim for no locks in the slave driver, wit= h the > master owning the slave and ensure exclusion with its own locks. Slaves = which > use shared resources and so need locks (everything doing i2c actually) m= ay not > call master callback functions with locks held. Agreed, and I think we should rely on upper layers like drm as much as=20 possible for locking. > Then, once we've gotten things of the ground and have some slave encoder dr= ivers > which are actually shared between different subsystems/drivers/platforms or > whatever we can start to organically grow more common interfaces. Ime it's = much > easier to simply extract decent interfaces after the fact than trying to co= me > up. > > Now let's pour this into a more concrete form: > > struct display_slave_ops { > /* modeset ops, e.g. prepare/modset/commit from drm */ > }; > > struct display_slave { > struct display_slave_ops *ops; > void *driver_private; > }; > > I think even just that will be worth a lot of flames to come up with a good= and > agreeable interface for everyone. It'll probably satisfactory to no one tho= ugh. > > Then each subsystem adds it's own magic, e.g. > > struct drm_encoder_slave { > struct display_slave slave; > > /* everything else which is there already and not covered by the d= isplay > * slave interface. */ > }; I like the starting point. Hard to make it any more simple ;). But next=20 step would probably follow quickly. I also like the idea to have current=20 drivers aggregate the slave to make transition easier. CDF as it is now=20 is an all or nothing API. And since you don't care how slaves interact=20 with master (bus ops) I see the possibility still to separate "CDI=20 device API" and "CDF bus API". Which would allow using DSI bus API for=20 DSI panels and I2C bus API (or regmap) for I2C encoders instead of force=20 use of the video source API in all slave drivers. > Other subsystems/drivers like DSS would embed the struct display_slave in t= heir > own equivalent data-structure. > > So now we have the little problem that we want to have one single _slave_ d= river > codebase, but it should be able to support n different interfaces and > potentially even more ways to be initialized and set up. Here's my idea how= this > could be tackled: > > 1. Smash everything into one driver file/directory. > 2. Use a common driver structure which contains pointers/members for all > possible use-cases. For each interface the driver supports, it'll allocate = the > same structure and put the pointer into foo->slave.driver_private. This way > different entry points from different interfaces could use the same internal > functions since all deal with the same structure. > 3. Add whatever magic is required to set up the driver for different platfo= rms. > E.g. and of match, drm_encoder_slave i2c match and some direct function to = set > up hardcoded cases could all live in the same file. > > Getting the kernel Kconfig stuff right will be fun, but we should get by wi= th > adding tons more stub functions. That might mean that an of/devicetree plat= form > build carries around a bit of gunk for x86 vbt matching maybe, but imo that > shouldn't ever get out of hand size-wise. > > Once we have a few such shared drivers in place, and even more important, > unified that part of the subsystem using them a bit, it should be painfully > obvious which is the next piece to extract into the common display slave li= brary > interface. 
After all, they'll live right next to each another in the driver > sources ;-) > > Eventually we should get into the real fun part like dsi bus support or com= mand > mode/PSR ... Those advanced things probably need to be optional. > > But imo the key part is that we aim for real unification in the users of > display_slave's, so internally convert over everything to the new structure= s. > That should also make code-sharing much easier, so that we could move exist= ing > helper functions to the common display helper library. What about drivers that are waiting for CDF to be pushed upstream=20 instead of having to push another custom panel framework? I'm talking of=20 my own KMS driver ... but maybe I could put most of it in staging and=20 move relevant parts of DSI/DPI/HDMI panel drivers to "common" slave=20 drivers ... > Bikesheds > --------- > > I.e. the boring details: > > - Where to put slave drivers? I'll vote for anything which does not include > drivers/video ;-) drivers/video +1, drivers/gpu -1, who came up with putting KMS under=20 drivers/gpu ;) > - Maybe we want to start with a different part than modeset, or add a bit m= ore > on top. Though I really think we should start minimally and modesetting = seemed > like the most useful piece of the puzzle. As suggested, start with on/off and static/fixed mode would help single=20 resolution LCDs. Actually that is almost all that is needed for mobile=20 panels and what I intended to get from CDF :) > > - Naming the new interfaces. I'll have more asbestos suites on order ... Until you get them. Would it make sense to reuse the encoder name from=20 drm or is that to restrictive? > > - Can we just copy the new "native" interface structs from drm, pls? I hope you are not talking about the helper interfaces at least ;). But=20 if CDF is going to be the new drm helpers of choice for=20 encoder/connector parts. Then it sounds like CDF would replace most of=20 the old helpers. It would be far to many layers with the old helpers=20 too. And I think I recall Jesse wanting to deprecate/remove them too. Hopefully we could have some generic encoder/connector helper=20 implementations that only depend on CDF. /BR /Marcus --===============5972085523174835571==-- From daniel@ffwll.ch Tue Jan 29 21:44:48 2013 From: Daniel Vetter To: linaro-mm-sig@lists.linaro.org Subject: Re: [Linaro-mm-sig] CDF discussions at FOSDEM Date: Tue, 29 Jan 2013 22:46:56 +0100 Message-ID: <20130129214655.GU14766@phenom.ffwll.local> In-Reply-To: <51082480.9070500@stericsson.com> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============9107281293776416063==" --===============9107281293776416063== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable On Tue, Jan 29, 2013 at 08:35:28PM +0100, Marcus Lorentzon wrote: > On 01/29/2013 04:50 PM, Daniel Vetter wrote: > >On Tue, Jan 29, 2013 at 3:19 PM, Daniel Vetter = wrote: > >Ok, in the interest of pre-heating the discussion a bit I've written down > >my thoughts about display slave drivers. Adding a few more people and > >lists to make sure I haven't missed anyone ... > > > >Cheers, Daniel > >-- > >Display Slaves > >=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D > > > >A highly biased quick analysis from Daniel Vetter. > And here is my biased version as one of the initiators of the idea of CDF. Thanks a lot for your detailed answer. Some quick replies, I need to go through this more carefully and maybe send another mail. 
> I work with ARM SoCs (ST-Ericsson) and mobile devices (DSI/DPI > panels). Of course some of these have the "PC" type of encoder > devices like HDMI and eDP or even VGA. But from what I have seen > most of these encoders are used by few different SoCs(GPUs?). And > using these type of encoders was quite straight forward from DRM > encoders. My goal was to get some common code of all the "mobile" > panel encoders or "display module driver IC"s as some call them. > Instead of tens of drivers (my assumption) you now have hundreds of > drivers often using MIPI DSI/DPI/DBI or some similar interface. And > lots of new come each year. There are probably more panel types than > there are products on the market, since most products use more than > one type of panel on the same product to secure sourcing for mass > production (note multiple panels use same driver IC). > So that was the initial goal, to cover all of these, which most are > maintained per SoC/CPU out of kernel.org. If HDMI/DP etc fits in > this framework, then that is just a nice bonus. > I just wanted to give my history so we are not trying to include to > many different types of encoders without an actual need. Maybe the > I2C drm stuff is good enough for that type of encoders. But again, > it would be nice with one suit that fits all ... > I also like the idea to start out small. But if no support is added > initially for the mobile panel types. Then I think it will be hard > to get all vendors to start pushing those drivers, because the > benefit of doing so would be small. But maybe the CDF work with > Linaro and Laurent could just be a second step of adding the > necessary details to your really simple baseline. And I also favor > the helpers over framework approach but I miss a big piece which is > the ops for panel drivers to call back to display controller (the > video source stuff). Yeah, I think we have two main goals here for enabling code sharing for these output devices: 1. Basic panel support, with the panel usually glued onto the board, so squat runtime configuration required. Aim is to get the gazillion of out-of-tree drivers merged. 2. Allowing generic output encoder slaves to be used in a bunch of SoCs in. Summarizing my previous mail I fear that if we start with with the first point and don't take some of the mad features required to do the 2nd one right into account, we'll end up at a rather ugly spot. [cut] > >- hdmi/dp helpers: HDMI/DP are both standardized output connectors with ni= ce > > complexity. DP is mostly about handling dp aux transactions and DPCD > > registers, hdmi mostly about infoframes and how to correctly set them u= p from > > the mode + edid. > Yes, it is a mess. But we have managed to hide that below a simple > panel API similar to CDF/omap so far. Well, my concern is that we need to expose a bunch of special properties (both to the master driver and ultimately to userspace) which are rather hard to shovel through a simple panel abstraction. Ime from desktop graphics there's no limits to the insane usecases and devices people come up with and want to plug into your machine ;-) > >- dpms is 4 states in drm, even more in fbdev afaict, but real hw only sup= ports > > on/off nowadays ... how should/do we care? > Agreed, they should all really go away unless someone find a valid use case. > >- Fancy modes and how to represent them. 
Random list of things we need to > > represent somehow: broadcast/reduced rgb range for hdmi, yuv modes, different > > bpc modes (and handling how this affects bandwidth/clocks, e.g. i915 > > auto-dithers to 6bpc on DP if there's not enough), 3D hdmi modes (patches have > > floated on dri-devel for this), overscan compensation. Many of these things > > link in with e.g. the helper libraries for certain outputs, e.g. discovering > > DP sink capabilities or setting up the correct hdmi infoframe. > Are you saying drm modes doesn't support this as of today? I have > not used these types of modes in DRM yet. Maybe the common video > mode patches is a good start. All the stuff I've mentioned is supported in drm/i915 (or at least we have patches floating around), and on a quick look at the proposed video_mode I couldn't fit this all in. Some of the features are fully fleshed out, but I expect that we fill all the little tiny holes in the next few releases. > >- How to expose random madness as properties, e.g. backlight controllers, > > broadcast mode, enable/disable embedded audio (some screens advertise it, but > > don't like it). For additional fun I expect different users of a display slave > > driver to expect different set of "standardized" properties. > Some standardized properties would be nice :). Whatever is not > standard doesn't really matter. The problem is that we have a few 100klocs of driver code lying around in upstream, so if we switch standards there's some decent fun involved converting things. Or we need to add conversion functions all over the place, which seems rather ugly, too. > >- Debug support: Register dumping, exposing random debugfs files, tracing. > > Preferably somewhat unified to keep things sane, since most often slave > > drivers are rather simple, but we expect quite a few different ones. > > > >- Random metadata surrounding a display sink, like output type. Or flags for > > support special modes (h/vsync polarity, interlaced/doublescan, pixel > > doubling, ...). > One thing that is needed is all the meta data related to the > control/data interface between display controller and encoder. > Because this has to be unified per interface type like DSI/DBI so > the same CDF driver can setup different display controllers. But I > hope we could split the "CDF API" (panel ops) from the control/data > bus API (host/source ops or CDF video source). I guess we have two options for panels on such buses with special needs: - either add a bunch of optional functions to the common interfaces - or subclass the common interface/struct and add additional magic in there, i.e. struct dsi_slave { struct display_slave; struct dsi_panel_ops; /* whatever other magic we need for dsi, e.g. callbacks to the * source for start/stopping pixel data ... */ } The latter requires a bit more casting of struct pointers, but should be more flexible. Ime from i915 code it's not too onerous, e.g. for encoders we nest such C struct classes about 4 levels deep in the code: drm_encoder -> intel_encoder -> intel_dig_encoder -> intel_dp/hdmi/ddi So I think both approaches are doable. > >- mode_fixup: Used a lot in drm-land to allow encoders to change the input mode, > > e.g. for lvds encoders which can do upscaling, or if the encoder supports > > progressive input with interlaced output and similar fancy stuff. See e.g. the > > intel sdvo encoder chip support. > > > >- Handling different control buses like i2c, direct access (not seen that yet), > > DSI, DP aux, some other protocols.
> This is actually the place I wanted to start. With vendor specific > panel drivers using common ops to access the bus (DSI/I2C/DBI etc). > Then once we have a couple of panel drivers we could unify the API > making them do their stuff (like the current CDF ops). Or even > better, maybe these two could be made completely separate and worked > on in parallel. Hm, so starting with some DSI interface code, similarly to how we have i2c? tbh I have pretty much zero clue about how dsi exactly works, but growing different parts of a common panel infrastructure sounds intriguing. > >- Handling of different display data standards like dsi (intel invented a = few of > > its own, I'm sure we're not the only ones). > > > >- hpd support/polling. Depending upon desing hpd handling needs to be > > cooperative between slave and master, or is a slave only thing (which m= eans > > the slave needs to be able to poke the master when something changes). > > Similarly, masters need to know which slaves require output polling. > I prefer a slave only thing forwarded to the drm encoder which I > assume would be the drm equivalent of the display slave. At least I > have not seen any need to involve the display controller in hpd > (which I assume you mean by master). I've used pretty unclear definitions. Generally master is everything no behind the slave/panel interface. Call it display driver maybe ... For this case I don't expect that hpd involves any piece of hw on the master/driver side, but we need to somehow forward this to the usespace interfaces. At least in drm, dunno what other display drivers do here. > >- Initializing of slave drivers: of/devicetree based, compiled-in static t= ables > > in the driver, dynamic discovery by i2c probing, lookup through some > > platform-specific firmware table (ACPI). Related is how to forward rand= om > > platform init values to the drivers from these sources (e.g. the panel = fixed > > modes) to the slave driver. > I'm not that familiar with the bios/uefi world. But on our SoCs we > always have to show a splash screen from the boot loader (like bios, > usually little kernel, uboot etc). And so all probing is done by > bootloader and HW is running when kernel boot. And you are not > allowed to disrupt it either because that would yield visual > glitches during boot. So some way or the other the boot loader would > need to transfer the state to the kernel or you would have to > reverse engineer the state from hw at kernel probe. Actually reverse engineer the bios state from the actual hw state is what we now do for i915 ;-) Which is why we need the ->get_hw_state callback in some form. But that's just a result of some of the horrible things old firmware does, it /should/ be better on newer platforms. And hopefully the embedded ones aren't that massively screwed up ... Iirc the only current interface exposed by ACPI lets you get at the vendor boot splash and display it after you've taken over the hw. > >- get_hw_state support. One of the major point in the i915 modeset rewrite= which > > landed in 3.7 is that a lot of the hw state can be cross-checked with t= he sw > > tracking. Helps tremendously in tracking down driver (writer) fumbles ;= -) > This sounds more like a display controller feature than a display > slave feature. See above for why we have that in i915. And we do call down into slave encoders (Intel (s)dvo standards) on older hw. Might be we won't need that any more on SoC platforms (I do hope that's the case at least). 
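Coming back to the subclassing variant a bit further up: with a named member instead of the anonymous ones in that sketch, getting from the generic struct back to the dsi-specific one is just the usual container_of() dance - again purely illustrative, dsi_panel_ops is whatever we'd end up defining:

struct dsi_slave {
	struct display_slave base;		/* named member so container_of() works */
	const struct dsi_panel_ops *dsi_ops;	/* dsi-specific extras */
};

static inline struct dsi_slave *to_dsi_slave(struct display_slave *slave)
{
	return container_of(slave, struct dsi_slave, base);
}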
> >- PSR/dsi command mode and how the start/stop frame dance should be handle= d. > Again, a vital piece for the many mobile driver ICs. And I think we > have several sources (STE, Renesas, TI, Samsung, ...) on how to do > this and tested in many products. So I hope this could be an early > step in the evolution. One issue with start/stop callbacks I've discussed a bit with Jani Nikula and Rob Clark is locking rules around start/stop callbacks from the slave to the display source. Especially how to handle fun like blocking the dsi bus while we need to wait for the transfer window. > >- Random funny expectations around the modeset sequence, i.e. when (and how > > often) the video stream should be enabled/disabled. In the worst case t= his > > needs some serious cooperation between master and slaves. Even more fun= for > > trained output links like DP where a re-training and so restarting part= s - or > > even the complete - modeset sequence could be required to happen any ti= me. > Again, we have several samples of platforms already doing this > stuff. So we should be able to get a draft pretty early. From my > experience when to enable/disable video stream could vary between > versions of the same display controller. So I think it could be > pretty hairy to get a single solution for all. Instead I think we > need to leave some room for the master/slave to decide when to > enable/disable. And to be able to do this we should try to have > pretty specific ops on the slave and master. I'm not sure > prepare/modeset/commit is specific enough unless we document what is > expected to be done by the slave in each of these. Well, drm/i915 killed prepare/modeset/commit ops, we now have our own which semantics matching our hw. My concern here is mostly about fancier display buses with link training - e.g. on DP you can't just start/stop the pixel stream, but there's a nice dance involved to do it. > >- There's more I'm sure, gfx hw tends to be insane ... > Yes, and one is the chain of slaves issue that is "common" on mobile > systems. One example I have is > dispc->dsi->dsi2dsi-bridge->dsi2lvds-bridge->lvds-panel. > My proposal to hide this complexity in CDF was aggregate drivers. So > from drm there will only be one master (dispc) and one slave > (dsi2dsi). Then dsi2dsi will itself use another CDF/slave driver to > talk to its slave. This way the top master (dispc) driver never have > to care about this complexity. Whether this is possible to hide in > practice we will see ... I think even more fun would be to replace the lvds endpoint with hdmi, and the try to coax the infoframe control attributes down that pipeline (plus who's responsibilty it is to do the various adjustments to the pixels). [cut] > >- I think we should reduce the scope of the intial version massively and i= nstead > > increase the depth to fully cover everything. So instead of something w= hich > > covers everything of a limited use-case from discover, setup, modes han= dling > > and mode-setting, concentrate on only one operation. The actual mode-se= t seems > > to be the best case, since it usually involves a lot of the boring regi= ster > > bashing code. The first interface version would ignore everything else > > completely. > To also cover and be useful to mobile panels I suggest starting with > on/off using a fixed mode initially. Because modeset is not used for > most mobile panels (they only have one mode). Would that be start/stop a frame for manual refresh or enable/disable the display itself? 
Just curious what you're aiming for as the minimal useful thing here ... > >- Shoot for the most powerful api for that little piece we're starting with, make > > it the canonical thing. I.e. for modeset we need a video mode thing, and imo > > it only makes sense if that's the native data structure for all involved > > subsystems. At least it should be the aim. Yeah, that means tons of work. Even > > more important is that the new datastructure supports every feature already > > supported in some insane way in one of the existing subsystems. Imo if we keep > > different datastructures everywhere, the impedance matching will eat up most > > of the code sharing benefits. > > > >- Since converting all involved subsystems we should imo just forget about > > fbdev. For obvious reasons I'm also leaning towards simply ditching the > > drm prefix from the drm defines and using those ;-) > > > >- I haven't used it in a driver yet, but mandating regmap (might need some > > improvements) should get us decent unification between drivers. And hopefully > > also an easy way to have unified debug tools. regmap already has trace points > > and a few other cool things. > Guideline for I2C slave drivers maybe? Do we really want to enforce > how drivers are implemented when it doesn't affect the API? > Also, I don't think it fits in general for slaves. Since DSI/DBI > have not only registers but also operations you can execute using > control interface. Yeah, that was an idea for i2c guidelines. I guess if we have a different (sub)type for DSI we could gather helpers somewhere which are useful only for DSI. E.g. drm is in the process of growing some DP helpers shared among a few drivers. My idea behind being a bit more anal about standardization is that we expect tons of these drivers, and also that lots of different SoC platforms might share them. So trying to make them look similar and work in similar ways (where reasonable) to help enable existing drivers on new SoCs and debug issues seemed like something we should discuss a bit. > >- We need some built-in way to drill direct paths from the master display driver > > to the slave driver for the different subsystems. Jumping through hoops (or > > even making it impossible) to extend drivers in funny ways would be a big step > > backwards. > > > >- Locking will be fun, especially once we start to add slave->master callbacks > > (e.g. for stopping/starting the display signal, hpd interrupts, ...). As a > > general rule I think we should aim for no locks in the slave driver, with the > > master owning the slave and ensure exclusion with its own locks. Slaves which > > use shared resources and so need locks (everything doing i2c actually) may not > > call master callback functions with locks held. > Agreed, and I think we should rely on upper layers like drm as much > as possible for locking. > >Then, once we've gotten things off the ground and have some slave encoder drivers > >which are actually shared between different subsystems/drivers/platforms or > >whatever we can start to organically grow more common interfaces. Ime it's much > >easier to simply extract decent interfaces after the fact than trying to come > >up. > > > >Now let's pour this into a more concrete form: > > > >struct display_slave_ops { > > /* modeset ops, e.g.
prepare/modset/commit from drm */ > >}; > > > >struct display_slave { > > struct display_slave_ops *ops; > > void *driver_private; > >}; > > > >I think even just that will be worth a lot of flames to come up with a goo= d and > >agreeable interface for everyone. It'll probably satisfactory to no one th= ough. > > > >Then each subsystem adds it's own magic, e.g. > > > >struct drm_encoder_slave { > > struct display_slave slave; > > > > /* everything else which is there already and not covered by the = display > > * slave interface. */ > >}; > I like the starting point. Hard to make it any more simple ;). But > next step would probably follow quickly. I also like the idea to > have current drivers aggregate the slave to make transition easier. > CDF as it is now is an all or nothing API. And since you don't care > how slaves interact with master (bus ops) I see the possibility > still to separate "CDI device API" and "CDF bus API". Which would > allow using DSI bus API for DSI panels and I2C bus API (or regmap) > for I2C encoders instead of force use of the video source API in all > slave drivers. I didn't follow here which pieces you'd like to cut apart along which lines exactly ... Maybe some example structs or asci-art to help the clueless? Aside about the simplicity of the above: It's slightly tongue-in-check, I expect it to be a bit feature-full ;-) Just wanted to direct the discussion a bit into a minimal, but still useful interface, highly extensible. [cut] > >But imo the key part is that we aim for real unification in the users of > >display_slave's, so internally convert over everything to the new structur= es. > >That should also make code-sharing much easier, so that we could move exis= ting > >helper functions to the common display helper library. > What about drivers that are waiting for CDF to be pushed upstream > instead of having to push another custom panel framework? I'm > talking of my own KMS driver ... but maybe I could put most of it in > staging and move relevant parts of DSI/DPI/HDMI panel drivers to > "common" slave drivers ... Hm, I think I've missed your driver drm/kms driver. Links to source? I think reading through a drm driver using the current cdf would be nice, that way I'm at least familiar with one part of the code ;-) > >Bikesheds > >--------- > > > >I.e. the boring details: > > > >- Where to put slave drivers? I'll vote for anything which does not include > > drivers/video ;-) > drivers/video +1, drivers/gpu -1, who came up with putting KMS under > drivers/gpu ;) I think the main reason was to be as far away from fbdev/fbcon code as possible ;-) Also, we have gem/ttm in drm, which is all about PU part and not really about G .. > >- Maybe we want to start with a different part than modeset, or add a bit = more > > on top. Though I really think we should start minimally and modesetting= seemed > > like the most useful piece of the puzzle. > As suggested, start with on/off and static/fixed mode would help > single resolution LCDs. Actually that is almost all that is needed > for mobile panels and what I intended to get from CDF :) > > > >- Naming the new interfaces. I'll have more asbestos suites on order ... > Until you get them. Would it make sense to reuse the encoder name > from drm or is that to restrictive? On a quick check drm lacks names for DSI encoders/panels, so we might want to add those. And maybe a generic panel output type. 
I guess it would be good to take my caveats list above and strike off everything we don't need for basic dsi panel support, then figure out where to steal the definitions from. Common definitions will be hard to come by, e.g. after much bikesheds and deciding to use common fourcc codes for pixel layouts drm ended up with simply adding a bunch of its own fourcc codes since the ones negotiated with v4l didn't cut it. > >- Can we just copy the new "native" interface structs from drm, pls? > I hope you are not talking about the helper interfaces at least ;). Nope, the drm helpers are not the interfaces. Ofc, if we end up with a massively generic panel interface, we might add a few helpers to give slave/panel drivers an easy way to opt for sane default behaviour. E.g. handling a fixed panel mode and always returning that mode is something which is reinvented in drm a few times ... I probably should have written metadata structs/definitions, since that'll be the part which could get ugly if we end up with diverging standards. Interface functions obviously need to fit into what the hw bus at hand requires us to do (e.g. for DSI special cases). [Aside wrt drm helpers: With i915 we now have an imo rather nice example that the drm crtc are really just helpers, and that it's not too hard to come up with your own modeset infrastructure. On an established driver codebase even.] > But if CDF is going to be the new drm helpers of choice for > encoder/connector parts. Then it sounds like CDF would replace most > of the old helpers. It would be far to many layers with the old > helpers too. And I think I recall Jesse wanting to deprecate/remove > them too. Rob's tilcdc driver uses the drm crtc helpers and for the i2c encoder slaves he added a new set of helpers to easier integrate the crtc helpers with the existing drm_encoder_slave infrastructure. The end-result looks fairly reasonable imo. In general I think as long as we aim for the different libraries to be as orthogonal as possible so that drivers can pick and choose, more kinds of helpers doesn't really sound bad. On the drm side I've recently brushed up the crtc/output polling and fb helpers quite a bit, so drivers can now pick&choose (and i915 does only use some of them). Similarly for other helper ideas floating around like DSI, hdmi infoframe handling, dp aux stuff ... Of course I expect that we'll wrap things up into dwim() functions for all the common cases. > Hopefully we could have some generic encoder/connector helper > implementations that only depend on CDF. I'm not sure whether we should aim for that really - having a slave/panel driver with mostly common code and a wee bit of shim code once for drm and once for dss (or whatever else is out there) doesn't sound too horrible to me. But I agree that at least for new code we should aim to get this right from the start. 
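To make that shim idea a bit more concrete, a very rough sketch (nothing
below exists, all names are invented on the spot, and it presumes we grow
an enable/disable pair in display_slave_ops):

struct fancy_panel {
	struct display_slave slave;
	/* dsi/i2c handle, gpios, regulators, ... */
	bool enabled;
};

static int fancy_panel_enable(struct display_slave *s)
{
	struct fancy_panel *p = container_of(s, struct fancy_panel, slave);

	/* bus specific poking goes here (DCS commands, gpio toggling, ...) */
	p->enabled = true;
	return 0;
}

static void fancy_panel_disable(struct display_slave *s)
{
	struct fancy_panel *p = container_of(s, struct fancy_panel, slave);

	p->enabled = false;
}

static struct display_slave_ops fancy_panel_ops = {
	.enable = fancy_panel_enable,
	.disable = fancy_panel_disable,
};

/* drm-side shim, written once and shared by all such panels; a dss (or
 * whatever else) shim would forward its own vtable the same way: */
static void display_slave_encoder_dpms(struct drm_encoder *encoder, int mode)
{
	/* to_display_slave() is made up, depends on how the embedding
	 * ends up looking */
	struct display_slave *s = to_display_slave(encoder);

	if (mode == DRM_MODE_DPMS_ON)
		s->ops->enable(s);
	else
		s->ops->disable(s);
}

The panel driver itself never includes a drm (or dss) header, only the
shims do.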
Cheers, Daniel --=20 Daniel Vetter Software Engineer, Intel Corporation +41 (0) 79 365 57 48 - http://blog.ffwll.ch --===============9107281293776416063==-- From robdclark@gmail.com Wed Jan 30 01:07:17 2013 From: Rob Clark To: linaro-mm-sig@lists.linaro.org Subject: Re: [Linaro-mm-sig] [PATCH 2/7] mutex: add support for reservation style locks Date: Tue, 29 Jan 2013 19:07:15 -0600 Message-ID: In-Reply-To: <1358253244-11453-3-git-send-email-maarten.lankhorst@canonical.com> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============6212976851565780297==" --===============6212976851565780297== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable On Tue, Jan 15, 2013 at 6:33 AM, Maarten Lankhorst wrote: > Hi Maarten, This is a nice looking extension to avoid re-implementing a mutex in TTM/reservation code.. ofc, probably someone more familiar with mutex code should probably review, but probably a bit of explanation about what and why would be helpful. > mutex_reserve_lock, and mutex_reserve_lock_interruptible: > Lock a buffer with a reservation_id set. reservation_id must not be set t= o 0, > since this is a special value that means no reservation_id. > > Normally if reservation_id is not set, or is older than the reservation_i= d that's > currently set on the mutex, the behavior will be to wait normally. > > However, if the reservation_id is newer than the current reservation_id,= -EAGAIN > will be returned, and this function must unreserve all other mutexes and = then redo > a blocking lock with normal mutex calls to prevent a deadlock, then call > mutex_locked_set_reservation on successful locking to set the reservation= _id inside > the lock. It might be a bit more clear to write up how this works from the perspective of the user of ticket_mutex, separately from the internal implementation first, and then how it works internally? Ie, the mutex_set_reservation_fastpath() call is internal to the implementation of ticket_mutex, but -EAGAIN is something the caller of ticket_mutex shall deal with. This might give a clearer picture of how TTM / reservation uses this to prevent deadlock, so those less familiar with TTM could better understand. Well, here is an attempt to start a write-up, which should perhaps eventually be folded into Documentation/ticket-mutex-design.txt. But hopefully a better explanation of the problem and the solution will encourage some review of the ticket_mutex changes. =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D Basic problem statement: ----- ------- --------- GPU's do operations that commonly involve many buffers. Those buffers can be shared across contexts/processes, exist in different memory domains (for example VRAM vs system memory), and so on. And with PRIME / dmabuf, they can even be shared across devices. So there are a handful of situations where the driver needs to wait for buffers to become ready. If you think about this in terms of waiting on a buffer mutex for it to become available, this presents a problem because there is no way to guarantee that buffers appear in a execbuf/batch in the same order in all contexts. That is directly under control of userspace, and a result of the sequence of GL calls that an application makes. Which results in the potential for deadlock. 
The problem gets more complex when you consider that the kernel may need to migrate the buffer(s) into VRAM before the GPU operates on the buffer(s), which main in turn require evicting some other buffers (and you don't want to evict other buffers which are already queued up to the GPU), but for a simplified understanding of the problem you can ignore this. The algorithm that TTM came up with for dealing with this problem is quite simple. For each group of buffers (execbuf) that need to be locked, the caller would be assigned a unique reservation_id, from a global counter. In case of deadlock in the process of locking all the buffers associated with a execbuf, the one with the lowest reservation_id wins, and the one with the higher reservation_id unlocks all of the buffers that it has already locked, and then tries again. Originally TTM implemented this algorithm on top of an event-queue and atomic-ops, but Maarten Lankhorst realized that by merging this with the mutex code we could take advantage of the existing mutex fast-path code and result in a simpler solution, and so ticket_mutex was born. (Well, there where also some additional complexities with the original implementation when you start adding in cross-device buffer sharing for PRIME.. Maarten could probably better explain.) How it is used: --- -- -- ----- A very simplified version: int submit_execbuf(execbuf) { /* acquiring locks, before queuing up to GPU: */ seqno =3D assign_global_seqno(); retry: for (buf in execbuf->buffers) { ret =3D mutex_reserve_lock(&buf->lock, seqno); switch (ret) { case 0: /* we got the lock */ break; case -EAGAIN: /* someone with a lower seqno, so unreserve and try again: */ for (buf2 in reverse order starting before buf in execbuf->buffers) mutex_unreserve_unlock(&buf2->lock); goto retry; default: goto err; } } /* now everything is good to go, submit job to GPU: */ ... } int finish_execbuf(execbuf) { /* when GPU is finished: */ for (buf in execbuf->buffers) mutex_unreserve_unlock(&buf->lock); } =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D anyways, for the rest of the patch, I'm still going through the mutex/ticket_mutex code in conjunction with the reservation/fence patches, so for now just a couple very superficial comments. > These functions will return -EDEADLK instead of -EAGAIN if reservation_id= is the same > as the reservation_id that's attempted to lock the mutex with, since in t= hat case you > presumably attempted to lock the same lock twice. > > mutex_reserve_lock_slow and mutex_reserve_lock_intr_slow: > Similar to mutex_reserve_lock, except it won't backoff with -EAGAIN. This= is useful > after mutex_reserve_lock failed with -EAGAIN, and you unreserved all buff= ers so no > deadlock can occur. > > mutex_unreserve_unlock: > Unlock a buffer reserved with the previous calls. > > Missing at the moment, maybe TODO? > * lockdep warnings when wrongly calling mutex_unreserve_unlock or mutex_u= nlock, > depending on whether reservation_id was set previously or not. > - Does lockdep have something for this or do I need to extend struct mu= tex? > > * Check if lockdep warns if you unlock a lock that other locks were neste= d to. > - spin_lock(m); spin_lock_nest_lock(a, m); spin_unlock(m); spin_unlock(= a); > would be nice if it gave a splat. Have to recheck if it does, though.. > > Design: > I chose for ticket_mutex to encapsulate struct mutex, so the extra memory= usage and > atomic set on init will only happen when you deliberately create a ticket= lock. 
> > Since the mutexes are mostly meant to protect buffer object serialization= in ttm, not > much contention is expected. I could be slightly smarter with wakeups, bu= t this would > be at the expense at adding a field to struct mutex_waiter. Because this = would add > overhead to all cases where ticket_mutexes are not used, and ticket_mutex= es are less > performance sensitive anyway since they only protect buffer objects, I di= dn't want to > do this. It's still better than ttm always calling wake_up_all, which doe= s a > unconditional spin_lock_irqsave/irqrestore. > > I needed this in kernel/mutex.c because of the extensions to __lock_commo= n, which are > hopefully optimized away for all normal paths. > > Changes since RFC patch v1: > - Updated to use atomic_long instead of atomic, since the reservation_id w= as a long. > - added mutex_reserve_lock_slow and mutex_reserve_lock_intr_slow > - removed mutex_locked_set_reservation_id (or w/e it was called) > > Signed-off-by: Maarten Lankhorst > --- > include/linux/mutex.h | 86 +++++++++++++- > kernel/mutex.c | 317 ++++++++++++++++++++++++++++++++++++++++++++++= +--- > 2 files changed, 387 insertions(+), 16 deletions(-) > > diff --git a/include/linux/mutex.h b/include/linux/mutex.h > index 9121595..602c247 100644 > --- a/include/linux/mutex.h > +++ b/include/linux/mutex.h > @@ -62,6 +62,11 @@ struct mutex { > #endif > }; > > +struct ticket_mutex { > + struct mutex base; > + atomic_long_t reservation_id; > +}; > + > /* > * This is the control structure for tasks blocked on mutex, > * which resides on the blocked task's kernel stack: > @@ -109,12 +114,24 @@ static inline void mutex_destroy(struct mutex *lock) = {} > __DEBUG_MUTEX_INITIALIZER(lockname) \ > __DEP_MAP_MUTEX_INITIALIZER(lockname) } > > +#define __TICKET_MUTEX_INITIALIZER(lockname) \ > + { .base =3D __MUTEX_INITIALIZER(lockname) \ > + , .reservation_id =3D ATOMIC_LONG_INIT(0) } > + > #define DEFINE_MUTEX(mutexname) \ > struct mutex mutexname =3D __MUTEX_INITIALIZER(mutexname) > > extern void __mutex_init(struct mutex *lock, const char *name, > struct lock_class_key *key); > > +static inline void __ticket_mutex_init(struct ticket_mutex *lock, > + const char *name, > + struct lock_class_key *key) > +{ > + __mutex_init(&lock->base, name, key); > + atomic_long_set(&lock->reservation_id, 0); > +} > + > /** > * mutex_is_locked - is the mutex locked > * @lock: the mutex to be queried > @@ -133,26 +150,91 @@ static inline int mutex_is_locked(struct mutex *lock) > #ifdef CONFIG_DEBUG_LOCK_ALLOC > extern void mutex_lock_nested(struct mutex *lock, unsigned int subclass); > extern void _mutex_lock_nest_lock(struct mutex *lock, struct lockdep_map *= nest_lock); > + > extern int __must_check mutex_lock_interruptible_nested(struct mutex *lock, > unsigned int subclass); > extern int __must_check mutex_lock_killable_nested(struct mutex *lock, > unsigned int subclass); > > +extern int __must_check _mutex_reserve_lock(struct ticket_mutex *lock, > + struct lockdep_map *nest_lock, > + unsigned long reservation_id); > + > +extern int __must_check _mutex_reserve_lock_interruptible(struct ticket_mu= tex *, > + struct lockdep_map *nest_lock, > + unsigned long reservation_id); > + > +extern void _mutex_reserve_lock_slow(struct ticket_mutex *lock, > + struct lockdep_map *nest_lock, > + unsigned long reservation_id); > + > +extern int __must_check _mutex_reserve_lock_intr_slow(struct ticket_mutex = *, > + struct lockdep_map *nest_lock, > + unsigned long reservation_id); > + > #define mutex_lock(lock) 
mutex_lock_nested(lock, 0) > #define mutex_lock_interruptible(lock) mutex_lock_interruptible_nested(loc= k, 0) > #define mutex_lock_killable(lock) mutex_lock_killable_nested(lock, 0) > > #define mutex_lock_nest_lock(lock, nest_lock) \ > do { \ > - typecheck(struct lockdep_map *, &(nest_lock)->dep_map); \ > + typecheck(struct lockdep_map *, &(nest_lock)->dep_map); \ looks like that was unintended whitespace change..` > _mutex_lock_nest_lock(lock, &(nest_lock)->dep_map); \ > } while (0) > > +#define mutex_reserve_lock(lock, nest_lock, reservation_id) \ > +({ \ > + typecheck(struct lockdep_map *, &(nest_lock)->dep_map); \ > + _mutex_reserve_lock(lock, &(nest_lock)->dep_map, reservation_id); = \ > +}) > + > +#define mutex_reserve_lock_interruptible(lock, nest_lock, reservation_id) = \ > +({ \ > + typecheck(struct lockdep_map *, &(nest_lock)->dep_map); \ > + _mutex_reserve_lock_interruptible(lock, &(nest_lock)->dep_map, \ > + reservation_id); \ > +}) > + > +#define mutex_reserve_lock_slow(lock, nest_lock, reservation_id) \ > +do { \ > + typecheck(struct lockdep_map *, &(nest_lock)->dep_map); \ > + _mutex_reserve_lock_slow(lock, &(nest_lock)->dep_map, reservation_i= d); \ > +} while (0) > + > +#define mutex_reserve_lock_intr_slow(lock, nest_lock, reservation_id) \ > +({ \ > + typecheck(struct lockdep_map *, &(nest_lock)->dep_map); \ > + _mutex_reserve_lock_intr_slow(lock, &(nest_lock)->dep_map, \ > + reservation_id); \ > +}) > + > #else > extern void mutex_lock(struct mutex *lock); > extern int __must_check mutex_lock_interruptible(struct mutex *lock); > extern int __must_check mutex_lock_killable(struct mutex *lock); > > +extern int __must_check _mutex_reserve_lock(struct ticket_mutex *lock, > + unsigned long reservation_id); > +extern int __must_check _mutex_reserve_lock_interruptible(struct ticket_mu= tex *, > + unsigned long reservation_i= d); > + > +extern void _mutex_reserve_lock_slow(struct ticket_mutex *lock, > + unsigned long reservation_id); > +extern int __must_check _mutex_reserve_lock_intr_slow(struct ticket_mutex = *, > + unsigned long reservation_i= d); > + > +#define mutex_reserve_lock(lock, nest_lock, reservation_id) \ > + _mutex_reserve_lock(lock, reservation_id) > + > +#define mutex_reserve_lock_interruptible(lock, nest_lock, reservation_id) = \ > + _mutex_reserve_lock_interruptible(lock, reservation_id) > + > +#define mutex_reserve_lock_slow(lock, nest_lock, reservation_id) \ > + _mutex_reserve_lock_slow(lock, reservation_id) > + > +#define mutex_reserve_lock_intr_slow(lock, nest_lock, reservation_id) \ > + _mutex_reserve_lock_intr_slow(lock, reservation_id) > + > # define mutex_lock_nested(lock, subclass) mutex_lock(lock) > # define mutex_lock_interruptible_nested(lock, subclass) mutex_lock_interr= uptible(lock) > # define mutex_lock_killable_nested(lock, subclass) mutex_lock_killable(lo= ck) > @@ -167,6 +249,8 @@ extern int __must_check mutex_lock_killable(struct mute= x *lock); > */ > extern int mutex_trylock(struct mutex *lock); > extern void mutex_unlock(struct mutex *lock); > +extern void mutex_unreserve_unlock(struct ticket_mutex *lock); > + > extern int atomic_dec_and_mutex_lock(atomic_t *cnt, struct mutex *lock); > > #ifndef CONFIG_HAVE_ARCH_MUTEX_CPU_RELAX > diff --git a/kernel/mutex.c b/kernel/mutex.c > index a307cc9..8282729 100644 > --- a/kernel/mutex.c > +++ b/kernel/mutex.c > @@ -126,16 +126,119 @@ void __sched mutex_unlock(struct mutex *lock) > > EXPORT_SYMBOL(mutex_unlock); > > +/** > + * mutex_unreserve_unlock - release the mutex > + * @lock: the mutex to be 
released > + * > + * Unlock a mutex that has been locked by this task previously > + * with _mutex_reserve_lock*. > + * > + * This function must not be used in interrupt context. Unlocking > + * of a not locked mutex is not allowed. > + */ > +void __sched mutex_unreserve_unlock(struct ticket_mutex *lock) > +{ > + /* > + * mark mutex as no longer part of a reservation, next > + * locker can set this again > + */ > + atomic_long_set(&lock->reservation_id, 0); > + > + /* > + * The unlocking fastpath is the 0->1 transition from 'locked' > + * into 'unlocked' state: > + */ > +#ifndef CONFIG_DEBUG_MUTEXES > + /* > + * When debugging is enabled we must not clear the owner before tim= e, > + * the slow path will always be taken, and that clears the owner fi= eld > + * after verifying that it was indeed current. > + */ > + mutex_clear_owner(&lock->base); > +#endif > + __mutex_fastpath_unlock(&lock->base.count, __mutex_unlock_slowpath); > +} > +EXPORT_SYMBOL(mutex_unreserve_unlock); > + > +static inline int __sched > +__mutex_lock_check_reserve(struct mutex *lock, unsigned long reservation_i= d) > +{ > + struct ticket_mutex *m =3D container_of(lock, struct ticket_mutex, = base); > + unsigned long cur_id; > + > + cur_id =3D atomic_long_read(&m->reservation_id); > + if (!cur_id) > + return 0; > + > + if (unlikely(reservation_id =3D=3D cur_id)) > + return -EDEADLK; > + > + if (unlikely(reservation_id - cur_id <=3D LONG_MAX)) > + return -EAGAIN; > + > + return 0; > +} > + > +/* > + * after acquiring lock with fastpath or when we lost out in contested > + * slowpath, set reservation_id and wake up any waiters so they can rechec= k. > + */ I think that is a bit misleading, if I'm understanding correctly this is called once you get the lock (but in either fast or slow path) > +static __always_inline void > +mutex_set_reservation_fastpath(struct ticket_mutex *lock, > + unsigned long reservation_id, bool check_res) > +{ > + unsigned long flags; > + struct mutex_waiter *cur; > + > + if (check_res || config_enabled(CONFIG_DEBUG_LOCK_ALLOC)) { > + unsigned long cur_id; > + > + cur_id =3D atomic_long_xchg(&lock->reservation_id, > + reservation_id); > +#ifdef CONFIG_DEBUG_LOCK_ALLOC > + if (check_res) > + DEBUG_LOCKS_WARN_ON(cur_id && > + cur_id !=3D reservation_id); > + else > + DEBUG_LOCKS_WARN_ON(cur_id); > + lockdep_assert_held(&lock->base); > +#endif > + > + if (unlikely(cur_id =3D=3D reservation_id)) > + return; > + } else > + atomic_long_set(&lock->reservation_id, reservation_id); > + > + /* > + * Check if lock is contended, if not there is nobody to wake up > + */ > + if (likely(atomic_read(&lock->base.count) =3D=3D 0)) > + return; > + > + /* > + * Uh oh, we raced in fastpath, wake up everyone in this case, > + * so they can see the new reservation_id > + */ > + spin_lock_mutex(&lock->base.wait_lock, flags); > + list_for_each_entry(cur, &lock->base.wait_list, list) { > + debug_mutex_wake_waiter(&lock->base, cur); > + wake_up_process(cur->task); > + } > + spin_unlock_mutex(&lock->base.wait_lock, flags); > +} > + > /* > * Lock a mutex (possibly interruptible), slowpath: > */ > static inline int __sched > __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass, > - struct lockdep_map *nest_lock, unsigned long ip) > + struct lockdep_map *nest_lock, unsigned long ip, > + unsigned long reservation_id, bool res_slow) > { > struct task_struct *task =3D current; > struct mutex_waiter waiter; > unsigned long flags; > + int ret; > > preempt_disable(); > mutex_acquire_nest(&lock->dep_map, subclass, 
0, nest_lock, ip); > @@ -162,6 +265,12 @@ __mutex_lock_common(struct mutex *lock, long state, un= signed int subclass, > for (;;) { > struct task_struct *owner; > > + if (!__builtin_constant_p(reservation_id) && !res_slow) { > + ret =3D __mutex_lock_check_reserve(lock, reservatio= n_id); > + if (ret) > + goto err_nowait; > + } > + > /* > * If there's an owner, wait for it to either > * release the lock or go to sleep. > @@ -172,6 +281,13 @@ __mutex_lock_common(struct mutex *lock, long state, un= signed int subclass, > > if (atomic_cmpxchg(&lock->count, 1, 0) =3D=3D 1) { > lock_acquired(&lock->dep_map, ip); > + if (res_slow) { > + struct ticket_mutex *m; > + m =3D container_of(lock, struct ticket_mute= x, base); > + > + mutex_set_reservation_fastpath(m, reservati= on_id, false); > + } > + > mutex_set_owner(lock); > preempt_enable(); > return 0; > @@ -227,15 +343,16 @@ __mutex_lock_common(struct mutex *lock, long state, u= nsigned int subclass, > * TASK_UNINTERRUPTIBLE case.) > */ > if (unlikely(signal_pending_state(state, task))) { > - mutex_remove_waiter(lock, &waiter, > - task_thread_info(task)); > - mutex_release(&lock->dep_map, 1, ip); > - spin_unlock_mutex(&lock->wait_lock, flags); > + ret =3D -EINTR; > + goto err; > + } > > - debug_mutex_free_waiter(&waiter); > - preempt_enable(); > - return -EINTR; > + if (!__builtin_constant_p(reservation_id) && !res_slow) { > + ret =3D __mutex_lock_check_reserve(lock, reservatio= n_id); > + if (ret) > + goto err; > } > + > __set_task_state(task, state); > > /* didn't get the lock, go to sleep: */ > @@ -250,6 +367,28 @@ done: > mutex_remove_waiter(lock, &waiter, current_thread_info()); > mutex_set_owner(lock); > > + if (!__builtin_constant_p(reservation_id)) { > + struct ticket_mutex *m; > + struct mutex_waiter *cur; > + /* > + * this should get optimized out for the common case, > + * and is only important for _mutex_reserve_lock > + */ > + > + m =3D container_of(lock, struct ticket_mutex, base); > + atomic_long_set(&m->reservation_id, reservation_id); > + > + /* > + * give any possible sleeping processes the chance to wake = up, > + * so they can recheck if they have to back off from > + * reservations > + */ > + list_for_each_entry(cur, &lock->wait_list, list) { > + debug_mutex_wake_waiter(lock, cur); > + wake_up_process(cur->task); > + } > + } > + > /* set it to 0 if there are no waiters left: */ > if (likely(list_empty(&lock->wait_list))) > atomic_set(&lock->count, 0); > @@ -260,6 +399,19 @@ done: > preempt_enable(); > > return 0; > + > +err: > + mutex_remove_waiter(lock, &waiter, task_thread_info(task)); > + spin_unlock_mutex(&lock->wait_lock, flags); > + debug_mutex_free_waiter(&waiter); > + > +#ifdef CONFIG_MUTEX_SPIN_ON_OWNER > +err_nowait: > +#endif > + mutex_release(&lock->dep_map, 1, ip); > + > + preempt_enable(); > + return ret; > } > > #ifdef CONFIG_DEBUG_LOCK_ALLOC > @@ -267,7 +419,8 @@ void __sched > mutex_lock_nested(struct mutex *lock, unsigned int subclass) > { > might_sleep(); > - __mutex_lock_common(lock, TASK_UNINTERRUPTIBLE, subclass, NULL, _RE= T_IP_); > + __mutex_lock_common(lock, TASK_UNINTERRUPTIBLE, > + subclass, NULL, _RET_IP_, 0, 0); > } > > EXPORT_SYMBOL_GPL(mutex_lock_nested); > @@ -276,7 +429,8 @@ void __sched > _mutex_lock_nest_lock(struct mutex *lock, struct lockdep_map *nest) > { > might_sleep(); > - __mutex_lock_common(lock, TASK_UNINTERRUPTIBLE, 0, nest, _RET_IP_); > + __mutex_lock_common(lock, TASK_UNINTERRUPTIBLE, > + 0, nest, _RET_IP_, 0, 0); > } > > EXPORT_SYMBOL_GPL(_mutex_lock_nest_lock); > @@ -285,7 
+439,8 @@ int __sched > mutex_lock_killable_nested(struct mutex *lock, unsigned int subclass) > { > might_sleep(); > - return __mutex_lock_common(lock, TASK_KILLABLE, subclass, NULL, _RE= T_IP_); > + return __mutex_lock_common(lock, TASK_KILLABLE, > + subclass, NULL, _RET_IP_, 0, 0); > } > EXPORT_SYMBOL_GPL(mutex_lock_killable_nested); > > @@ -294,10 +449,63 @@ mutex_lock_interruptible_nested(struct mutex *lock, u= nsigned int subclass) > { > might_sleep(); > return __mutex_lock_common(lock, TASK_INTERRUPTIBLE, > - subclass, NULL, _RET_IP_); > + subclass, NULL, _RET_IP_, 0, 0); > } > > EXPORT_SYMBOL_GPL(mutex_lock_interruptible_nested); > + > +int __sched > +_mutex_reserve_lock(struct ticket_mutex *lock, struct lockdep_map *nest, > + unsigned long reservation_id) > +{ > + DEBUG_LOCKS_WARN_ON(!reservation_id); > + > + might_sleep(); > + return __mutex_lock_common(&lock->base, TASK_UNINTERRUPTIBLE, > + 0, nest, _RET_IP_, reservation_id, 0); > +} > +EXPORT_SYMBOL_GPL(_mutex_reserve_lock); > + > + > +int __sched > +_mutex_reserve_lock_interruptible(struct ticket_mutex *lock, > + struct lockdep_map *nest, > + unsigned long reservation_id) > +{ > + DEBUG_LOCKS_WARN_ON(!reservation_id); > + > + might_sleep(); > + return __mutex_lock_common(&lock->base, TASK_INTERRUPTIBLE, > + 0, nest, _RET_IP_, reservation_id, 0); > +} > +EXPORT_SYMBOL_GPL(_mutex_reserve_lock_interruptible); > + > +void __sched > +_mutex_reserve_lock_slow(struct ticket_mutex *lock, struct lockdep_map *ne= st, > + unsigned long reservation_id) > +{ > + DEBUG_LOCKS_WARN_ON(!reservation_id); > + > + might_sleep(); > + __mutex_lock_common(&lock->base, TASK_UNINTERRUPTIBLE, 0, > + nest, _RET_IP_, reservation_id, 1); > +} > +EXPORT_SYMBOL_GPL(_mutex_reserve_lock_slow); > + > +int __sched > +_mutex_reserve_lock_intr_slow(struct ticket_mutex *lock, > + struct lockdep_map *nest, > + unsigned long reservation_id) > +{ > + DEBUG_LOCKS_WARN_ON(!reservation_id); > + > + might_sleep(); > + return __mutex_lock_common(&lock->base, TASK_INTERRUPTIBLE, 0, > + nest, _RET_IP_, reservation_id, 1); > +} > +EXPORT_SYMBOL_GPL(_mutex_reserve_lock_intr_slow); > + > + > #endif > > /* > @@ -400,7 +608,8 @@ __mutex_lock_slowpath(atomic_t *lock_count) > { > struct mutex *lock =3D container_of(lock_count, struct mutex, count= ); > > - __mutex_lock_common(lock, TASK_UNINTERRUPTIBLE, 0, NULL, _RET_IP_); > + __mutex_lock_common(lock, TASK_UNINTERRUPTIBLE, 0, > + NULL, _RET_IP_, 0, 0); > } > > static noinline int __sched > @@ -408,7 +617,8 @@ __mutex_lock_killable_slowpath(atomic_t *lock_count) > { > struct mutex *lock =3D container_of(lock_count, struct mutex, count= ); > > - return __mutex_lock_common(lock, TASK_KILLABLE, 0, NULL, _RET_IP_); > + return __mutex_lock_common(lock, TASK_KILLABLE, 0, > + NULL, _RET_IP_, 0, 0); > } > > static noinline int __sched > @@ -416,8 +626,28 @@ __mutex_lock_interruptible_slowpath(atomic_t *lock_cou= nt) > { > struct mutex *lock =3D container_of(lock_count, struct mutex, count= ); > > - return __mutex_lock_common(lock, TASK_INTERRUPTIBLE, 0, NULL, _RET_= IP_); > + return __mutex_lock_common(lock, TASK_INTERRUPTIBLE, 0, > + NULL, _RET_IP_, 0, 0); > +} > + > +static noinline int __sched > +__mutex_lock_reserve_slowpath(atomic_t *lock_count, void *rid) > +{ > + struct mutex *lock =3D container_of(lock_count, struct mutex, count= ); > + > + return __mutex_lock_common(lock, TASK_UNINTERRUPTIBLE, 0, > + NULL, _RET_IP_, (unsigned long)rid, 0); > +} > + > +static noinline int __sched > 
+__mutex_lock_interruptible_reserve_slowpath(atomic_t *lock_count, void *ri= d) > +{ > + struct mutex *lock =3D container_of(lock_count, struct mutex, count= ); > + > + return __mutex_lock_common(lock, TASK_INTERRUPTIBLE, 0, > + NULL, _RET_IP_, (unsigned long)rid, 0); > } > + > #endif > > /* > @@ -473,6 +703,63 @@ int __sched mutex_trylock(struct mutex *lock) > } > EXPORT_SYMBOL(mutex_trylock); > > +#ifndef CONFIG_DEBUG_LOCK_ALLOC > +int __sched > +_mutex_reserve_lock(struct ticket_mutex *lock, unsigned long rid) > +{ > + int ret; > + > + might_sleep(); > + > + ret =3D __mutex_fastpath_lock_retval_arg(&lock->base.count, (void *= )rid, > + __mutex_lock_reserve_slowpath); > + > + if (!ret) { > + mutex_set_reservation_fastpath(lock, rid, true); > + mutex_set_owner(&lock->base); > + } > + return ret; > +} > +EXPORT_SYMBOL(_mutex_reserve_lock); > + > +int __sched > +_mutex_reserve_lock_interruptible(struct ticket_mutex *lock, unsigned long= rid) > +{ > + int ret; > + > + might_sleep(); > + > + ret =3D __mutex_fastpath_lock_retval_arg(&lock->base.count, (void *= )rid, > + __mutex_lock_interruptible_reserve_slowpath= ); > + > + if (!ret) { > + mutex_set_reservation_fastpath(lock, rid, true); > + mutex_set_owner(&lock->base); > + } > + return ret; > +} > +EXPORT_SYMBOL(_mutex_reserve_lock_interruptible); > + > +void __sched > +_mutex_reserve_lock_slow(struct ticket_mutex *lock, unsigned long rid) > +{ > + might_sleep(); > + __mutex_lock_common(&lock->base, TASK_UNINTERRUPTIBLE, > + 0, NULL, _RET_IP_, rid, 1); > +} > +EXPORT_SYMBOL(_mutex_reserve_lock_slow); > + > +int __sched > +_mutex_reserve_lock_intr_slow(struct ticket_mutex *lock, unsigned long rid) > +{ > + might_sleep(); > + return __mutex_lock_common(&lock->base, TASK_INTERRUPTIBLE, > + 0, NULL, _RET_IP_, rid, 1); > +} > +EXPORT_SYMBOL(_mutex_reserve_lock_intr_slow); > + > +#endif > + > /** > * atomic_dec_and_mutex_lock - return holding mutex if we dec to 0 > * @cnt: the atomic which we are to dec > -- > 1.8.0.3 > > _______________________________________________ > dri-devel mailing list > dri-devel(a)lists.freedesktop.org > http://lists.freedesktop.org/mailman/listinfo/dri-devel --===============6212976851565780297==-- From daniel@ffwll.ch Wed Jan 30 11:08:44 2013 From: Daniel Vetter To: linaro-mm-sig@lists.linaro.org Subject: Re: [Linaro-mm-sig] [PATCH 2/7] mutex: add support for reservation style locks Date: Wed, 30 Jan 2013 12:08:42 +0100 Message-ID: In-Reply-To: MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============4331496782411629093==" --===============4331496782411629093== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: 7bit On Wed, Jan 30, 2013 at 2:07 AM, Rob Clark wrote: > ========================== > Basic problem statement: > ----- ------- --------- > GPU's do operations that commonly involve many buffers. Those buffers > can be shared across contexts/processes, exist in different memory > domains (for example VRAM vs system memory), and so on. And with > PRIME / dmabuf, they can even be shared across devices. So there are > a handful of situations where the driver needs to wait for buffers to > become ready. If you think about this in terms of waiting on a buffer > mutex for it to become available, this presents a problem because > there is no way to guarantee that buffers appear in a execbuf/batch in > the same order in all contexts. That is directly under control of > userspace, and a result of the sequence of GL calls that an > application makes. 
Which results in the potential for deadlock. The > problem gets more complex when you consider that the kernel may need > to migrate the buffer(s) into VRAM before the GPU operates on the > buffer(s), which main in turn require evicting some other buffers (and > you don't want to evict other buffers which are already queued up to > the GPU), but for a simplified understanding of the problem you can > ignore this. > > The algorithm that TTM came up with for dealing with this problem is > quite simple. For each group of buffers (execbuf) that need to be > locked, the caller would be assigned a unique reservation_id, from a > global counter. In case of deadlock in the process of locking all the > buffers associated with a execbuf, the one with the lowest > reservation_id wins, and the one with the higher reservation_id > unlocks all of the buffers that it has already locked, and then tries > again. > > Originally TTM implemented this algorithm on top of an event-queue and > atomic-ops, but Maarten Lankhorst realized that by merging this with > the mutex code we could take advantage of the existing mutex fast-path > code and result in a simpler solution, and so ticket_mutex was born. > (Well, there where also some additional complexities with the original > implementation when you start adding in cross-device buffer sharing > for PRIME.. Maarten could probably better explain.) I think the motivational writeup above is really nice, but the example code below is a bit wrong > How it is used: > --- -- -- ----- > > A very simplified version: > > int submit_execbuf(execbuf) > { > /* acquiring locks, before queuing up to GPU: */ > seqno = assign_global_seqno(); > retry: > for (buf in execbuf->buffers) { > ret = mutex_reserve_lock(&buf->lock, seqno); > switch (ret) { > case 0: > /* we got the lock */ > break; > case -EAGAIN: > /* someone with a lower seqno, so unreserve and try again: */ > for (buf2 in reverse order starting before buf in > execbuf->buffers) > mutex_unreserve_unlock(&buf2->lock); > goto retry; > default: > goto err; > } > } > > /* now everything is good to go, submit job to GPU: */ > ... > } > > int finish_execbuf(execbuf) > { > /* when GPU is finished: */ > for (buf in execbuf->buffers) > mutex_unreserve_unlock(&buf->lock); > } > ========================== Since gpu command submission is all asnyc (hopefully at least) we don't unlock once it completes, but right away after the commands are submitted. Otherwise you wouldn't be able to submit new execbufs using the same buffer objects (and besides, holding locks while going back out to userspace is evil). The trick is to add a fence object for async operation (essentially a waitqueue on steriods to support gpu->gpu direct signalling). And updating fences for a given execbuf needs to happen atomically for all buffers, for otherwise userspace could trick the kernel into creating a circular fence chain. This wouldn't deadlock the kernel, since everything is async, but it'll nicely deadlock the gpus involved. Hence why we need ticketing locks to get dma_buf fences off the ground. Maybe wait for Maarten's feedback, then update your motivational blurb a bit? 
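Hand-wavy pseudo-code version of the async flow, in the same style as your
example (the fence helpers are made up here, see Maarten's fence patches
for the real interface):

int submit_execbuf(execbuf)
{
	/* same ticket-mutex dance as before to lock all buffers */
	...

	/* with _all_ locks held, atomically publish one fence for this
	 * job on every buffer, so no circular fence chains can form: */
	fence = fence_create(dev->context, ++dev->seqno);
	for (buf in execbuf->buffers)
		buf->fence = fence;

	/* kick the GPU and drop the locks right away, long before the
	 * job completes - no locks are held going back to userspace: */
	queue_job_to_gpu(execbuf, fence);
	for (buf in execbuf->buffers)
		mutex_unreserve_unlock(&buf->lock);
	return 0;
}

/* the next execbuf using one of these buffers only takes the lock long
 * enough to read buf->fence, then either sleeps on it or tells its own
 * gpu to wait for it; completion is signalled from the irq handler: */
void job_done_irq(fence)
{
	fence_signal(fence);
}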
Cheers, Daniel -- Daniel Vetter Software Engineer, Intel Corporation +41 (0) 79 365 57 48 - http://blog.ffwll.ch --===============4331496782411629093==-- From maarten.lankhorst@canonical.com Wed Jan 30 11:16:01 2013 From: Maarten Lankhorst To: linaro-mm-sig@lists.linaro.org Subject: Re: [Linaro-mm-sig] [PATCH 2/7] mutex: add support for reservation style locks Date: Wed, 30 Jan 2013 12:16:00 +0100 Message-ID: <510900F0.9050800@canonical.com> In-Reply-To: MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============0856266420773712221==" --===============0856266420773712221== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Op 30-01-13 02:07, Rob Clark schreef: > On Tue, Jan 15, 2013 at 6:33 AM, Maarten Lankhorst > wrote: > Hi Maarten, > > This is a nice looking extension to avoid re-implementing a mutex in > TTM/reservation code.. ofc, probably someone more familiar with mutex > code should probably review, but probably a bit of explanation about > what and why would be helpful. > >> mutex_reserve_lock, and mutex_reserve_lock_interruptible: >> Lock a buffer with a reservation_id set. reservation_id must not be set = to 0, >> since this is a special value that means no reservation_id. >> >> Normally if reservation_id is not set, or is older than the reservation_= id that's >> currently set on the mutex, the behavior will be to wait normally. >> >> However, if the reservation_id is newer than the current reservation_id= , -EAGAIN >> will be returned, and this function must unreserve all other mutexes and= then redo >> a blocking lock with normal mutex calls to prevent a deadlock, then call >> mutex_locked_set_reservation on successful locking to set the reservatio= n_id inside >> the lock. > It might be a bit more clear to write up how this works from the > perspective of the user of ticket_mutex, separately from the internal > implementation first, and then how it works internally? Ie, the > mutex_set_reservation_fastpath() call is internal to the > implementation of ticket_mutex, but -EAGAIN is something the caller of > ticket_mutex shall deal with. This might give a clearer picture of > how TTM / reservation uses this to prevent deadlock, so those less > familiar with TTM could better understand. > > Well, here is an attempt to start a write-up, which should perhaps > eventually be folded into Documentation/ticket-mutex-design.txt. But > hopefully a better explanation of the problem and the solution will > encourage some review of the ticket_mutex changes. > > =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D > Basic problem statement: > ----- ------- --------- > GPU's do operations that commonly involve many buffers. Those buffers > can be shared across contexts/processes, exist in different memory > domains (for example VRAM vs system memory), and so on. And with > PRIME / dmabuf, they can even be shared across devices. So there are > a handful of situations where the driver needs to wait for buffers to > become ready. If you think about this in terms of waiting on a buffer > mutex for it to become available, this presents a problem because > there is no way to guarantee that buffers appear in a execbuf/batch in > the same order in all contexts. That is directly under control of > userspace, and a result of the sequence of GL calls that an > application makes. Which results in the potential for deadlock. 
The > problem gets more complex when you consider that the kernel may need > to migrate the buffer(s) into VRAM before the GPU operates on the > buffer(s), which main in turn require evicting some other buffers (and > you don't want to evict other buffers which are already queued up to > the GPU), but for a simplified understanding of the problem you can > ignore this. > > The algorithm that TTM came up with for dealing with this problem is > quite simple. For each group of buffers (execbuf) that need to be > locked, the caller would be assigned a unique reservation_id, from a > global counter. In case of deadlock in the process of locking all the > buffers associated with a execbuf, the one with the lowest > reservation_id wins, and the one with the higher reservation_id > unlocks all of the buffers that it has already locked, and then tries > again. > > Originally TTM implemented this algorithm on top of an event-queue and > atomic-ops, but Maarten Lankhorst realized that by merging this with > the mutex code we could take advantage of the existing mutex fast-path > code and result in a simpler solution, and so ticket_mutex was born. > (Well, there where also some additional complexities with the original > implementation when you start adding in cross-device buffer sharing > for PRIME.. Maarten could probably better explain.) > > How it is used: > --- -- -- ----- > > A very simplified version: > > int submit_execbuf(execbuf) > { > /* acquiring locks, before queuing up to GPU: */ > seqno =3D assign_global_seqno(); You also need to make a 'lock' type for seqno, and lock it for lockdep purpos= es. This will be a virtual lock that will only exist in lockdep, but it's needed = for proper lockdep annotation. See reservation_ticket_init/fini. It's also important that seqno must not be = 0, ever. > retry: > for (buf in execbuf->buffers) { > ret =3D mutex_reserve_lock(&buf->lock, seqno); The lockdep class for this lock must be the same for all reservations, and fo= r maximum lockdep usability you want all the buf->lock lockdep class for all objects across all devices t= o be the same too. The __ticket_mutex_init in reservation_object_init does just that for you. :-) > switch (ret) { > case 0: > /* we got the lock */ > break; > case -EAGAIN: > /* someone with a lower seqno, so unreserve and try again: */ > for (buf2 in reverse order starting before buf in > execbuf->buffers) > mutex_unreserve_unlock(&buf2->lock); > goto retry; Almost correct, you need to re-regrab buf->lock after unreserving all other b= uffers with mutex_reserve_lock_slow, then goto retry, and skip over this bo w= hen doing the normal locking. The difference between mutex_reserve_lock and mutex_reserve_lock_slow is that= mutex_reserve_lock_slow will block indefinitely where mutex_reserve_lock wou= ld return -EAGAIN. mutex_reserve_lock_slow does not return an error code. mutex_reserve_lock_int= r_slow can return -EINTR if interrupted. > default: > goto err; > } > } > > /* now everything is good to go, submit job to GPU: */ > ... > } > > int finish_execbuf(execbuf) > { > /* when GPU is finished: */ > for (buf in execbuf->buffers) > mutex_unreserve_unlock(&buf->lock); > } > =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D Thanks for taking the effort into writing this. > anyways, for the rest of the patch, I'm still going through the > mutex/ticket_mutex code in conjunction with the reservation/fence > patches, so for now just a couple very superficial comments. 
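Spelled out in the same pseudo-code style, the corrected backoff would be
roughly this (untested, and glossing over the nest_lock/ticket argument and
the lockdep annotations from reservation_ticket_init/fini):

int submit_execbuf(execbuf)
{
	contended = NULL;
	seqno = assign_global_seqno();	/* must never be 0 */
retry:
	for (buf in execbuf->buffers) {
		if (buf == contended)
			continue;	/* already locked by the slowpath below */

		ret = mutex_reserve_lock(&buf->lock, seqno);
		if (ret == -EAGAIN) {
			/* drop every lock we hold, including one taken
			 * with the slow call in an earlier round: */
			for (buf2 in all buffers locked so far, in reverse)
				mutex_unreserve_unlock(&buf2->lock);

			/* then block on the contended lock without the
			 * -EAGAIN backoff and start the loop over: */
			mutex_reserve_lock_slow(&buf->lock, seqno);
			contended = buf;
			goto retry;
		} else if (ret)
			goto err;
	}
	/* all buffers locked, carry on as in your example */
	...
}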
> >> These functions will return -EDEADLK instead of -EAGAIN if reservation_i= d is the same >> as the reservation_id that's attempted to lock the mutex with, since in = that case you >> presumably attempted to lock the same lock twice. >> >> mutex_reserve_lock_slow and mutex_reserve_lock_intr_slow: >> Similar to mutex_reserve_lock, except it won't backoff with -EAGAIN. Thi= s is useful >> after mutex_reserve_lock failed with -EAGAIN, and you unreserved all buf= fers so no >> deadlock can occur. >> >> mutex_unreserve_unlock: >> Unlock a buffer reserved with the previous calls. >> >> Missing at the moment, maybe TODO? >> * lockdep warnings when wrongly calling mutex_unreserve_unlock or mutex_= unlock, >> depending on whether reservation_id was set previously or not. >> - Does lockdep have something for this or do I need to extend struct m= utex? >> >> * Check if lockdep warns if you unlock a lock that other locks were nest= ed to. >> - spin_lock(m); spin_lock_nest_lock(a, m); spin_unlock(m); spin_unlock= (a); >> would be nice if it gave a splat. Have to recheck if it does, though= .. >> >> Design: >> I chose for ticket_mutex to encapsulate struct mutex, so the extra memor= y usage and >> atomic set on init will only happen when you deliberately create a ticke= t lock. >> >> Since the mutexes are mostly meant to protect buffer object serializatio= n in ttm, not >> much contention is expected. I could be slightly smarter with wakeups, b= ut this would >> be at the expense at adding a field to struct mutex_waiter. Because this= would add >> overhead to all cases where ticket_mutexes are not used, and ticket_mute= xes are less >> performance sensitive anyway since they only protect buffer objects, I d= idn't want to >> do this. It's still better than ttm always calling wake_up_all, which do= es a >> unconditional spin_lock_irqsave/irqrestore. >> >> I needed this in kernel/mutex.c because of the extensions to __lock_comm= on, which are >> hopefully optimized away for all normal paths. >> >> Changes since RFC patch v1: >> - Updated to use atomic_long instead of atomic, since the reservation_id = was a long. 
>> - added mutex_reserve_lock_slow and mutex_reserve_lock_intr_slow >> - removed mutex_locked_set_reservation_id (or w/e it was called) >> >> Signed-off-by: Maarten Lankhorst >> --- >> include/linux/mutex.h | 86 +++++++++++++- >> kernel/mutex.c | 317 +++++++++++++++++++++++++++++++++++++++++++++= ++--- >> 2 files changed, 387 insertions(+), 16 deletions(-) >> >> diff --git a/include/linux/mutex.h b/include/linux/mutex.h >> index 9121595..602c247 100644 >> --- a/include/linux/mutex.h >> +++ b/include/linux/mutex.h >> @@ -62,6 +62,11 @@ struct mutex { >> #endif >> }; >> >> +struct ticket_mutex { >> + struct mutex base; >> + atomic_long_t reservation_id; >> +}; >> + >> /* >> * This is the control structure for tasks blocked on mutex, >> * which resides on the blocked task's kernel stack: >> @@ -109,12 +114,24 @@ static inline void mutex_destroy(struct mutex *lock)= {} >> __DEBUG_MUTEX_INITIALIZER(lockname) \ >> __DEP_MAP_MUTEX_INITIALIZER(lockname) } >> >> +#define __TICKET_MUTEX_INITIALIZER(lockname) \ >> + { .base =3D __MUTEX_INITIALIZER(lockname) \ >> + , .reservation_id =3D ATOMIC_LONG_INIT(0) } >> + >> #define DEFINE_MUTEX(mutexname) \ >> struct mutex mutexname =3D __MUTEX_INITIALIZER(mutexname) >> >> extern void __mutex_init(struct mutex *lock, const char *name, >> struct lock_class_key *key); >> >> +static inline void __ticket_mutex_init(struct ticket_mutex *lock, >> + const char *name, >> + struct lock_class_key *key) >> +{ >> + __mutex_init(&lock->base, name, key); >> + atomic_long_set(&lock->reservation_id, 0); >> +} >> + >> /** >> * mutex_is_locked - is the mutex locked >> * @lock: the mutex to be queried >> @@ -133,26 +150,91 @@ static inline int mutex_is_locked(struct mutex *lock) >> #ifdef CONFIG_DEBUG_LOCK_ALLOC >> extern void mutex_lock_nested(struct mutex *lock, unsigned int subclass); >> extern void _mutex_lock_nest_lock(struct mutex *lock, struct lockdep_map = *nest_lock); >> + >> extern int __must_check mutex_lock_interruptible_nested(struct mutex *loc= k, >> unsigned int subclass); >> extern int __must_check mutex_lock_killable_nested(struct mutex *lock, >> unsigned int subclass); >> >> +extern int __must_check _mutex_reserve_lock(struct ticket_mutex *lock, >> + struct lockdep_map *nest_lock, >> + unsigned long reservation_id); >> + >> +extern int __must_check _mutex_reserve_lock_interruptible(struct ticket_m= utex *, >> + struct lockdep_map *nest_lock, >> + unsigned long reservation_id); >> + >> +extern void _mutex_reserve_lock_slow(struct ticket_mutex *lock, >> + struct lockdep_map *nest_lock, >> + unsigned long reservation_id); >> + >> +extern int __must_check _mutex_reserve_lock_intr_slow(struct ticket_mutex= *, >> + struct lockdep_map *nest_lock, >> + unsigned long reservation_id); >> + >> #define mutex_lock(lock) mutex_lock_nested(lock, 0) >> #define mutex_lock_interruptible(lock) mutex_lock_interruptible_nested(lo= ck, 0) >> #define mutex_lock_killable(lock) mutex_lock_killable_nested(lock, 0) >> >> #define mutex_lock_nest_lock(lock, nest_lock) \ >> do { \ >> - typecheck(struct lockdep_map *, &(nest_lock)->dep_map); \ >> + typecheck(struct lockdep_map *, &(nest_lock)->dep_map); \ > looks like that was unintended whitespace change..` I think it was intentional, as it would be just above 80 lines otherwise. 
>> _mutex_lock_nest_lock(lock, &(nest_lock)->dep_map); \ >> } while (0) >> >> +#define mutex_reserve_lock(lock, nest_lock, reservation_id) \ >> +({ \ >> + typecheck(struct lockdep_map *, &(nest_lock)->dep_map); \ >> + _mutex_reserve_lock(lock, &(nest_lock)->dep_map, reservation_id); = \ >> +}) >> + >> +#define mutex_reserve_lock_interruptible(lock, nest_lock, reservation_id)= \ >> +({ \ >> + typecheck(struct lockdep_map *, &(nest_lock)->dep_map); \ >> + _mutex_reserve_lock_interruptible(lock, &(nest_lock)->dep_map, \ >> + reservation_id); \ >> +}) >> + >> +#define mutex_reserve_lock_slow(lock, nest_lock, reservation_id) \ >> +do { \ >> + typecheck(struct lockdep_map *, &(nest_lock)->dep_map); \ >> + _mutex_reserve_lock_slow(lock, &(nest_lock)->dep_map, reservation_= id); \ >> +} while (0) >> + >> +#define mutex_reserve_lock_intr_slow(lock, nest_lock, reservation_id) \ >> +({ \ >> + typecheck(struct lockdep_map *, &(nest_lock)->dep_map); \ >> + _mutex_reserve_lock_intr_slow(lock, &(nest_lock)->dep_map, \ >> + reservation_id); \ >> +}) >> + >> #else >> extern void mutex_lock(struct mutex *lock); >> extern int __must_check mutex_lock_interruptible(struct mutex *lock); >> extern int __must_check mutex_lock_killable(struct mutex *lock); >> >> +extern int __must_check _mutex_reserve_lock(struct ticket_mutex *lock, >> + unsigned long reservation_id); >> +extern int __must_check _mutex_reserve_lock_interruptible(struct ticket_m= utex *, >> + unsigned long reservation_= id); >> + >> +extern void _mutex_reserve_lock_slow(struct ticket_mutex *lock, >> + unsigned long reservation_id); >> +extern int __must_check _mutex_reserve_lock_intr_slow(struct ticket_mutex= *, >> + unsigned long reservation_= id); >> + >> +#define mutex_reserve_lock(lock, nest_lock, reservation_id) \ >> + _mutex_reserve_lock(lock, reservation_id) >> + >> +#define mutex_reserve_lock_interruptible(lock, nest_lock, reservation_id)= \ >> + _mutex_reserve_lock_interruptible(lock, reservation_id) >> + >> +#define mutex_reserve_lock_slow(lock, nest_lock, reservation_id) \ >> + _mutex_reserve_lock_slow(lock, reservation_id) >> + >> +#define mutex_reserve_lock_intr_slow(lock, nest_lock, reservation_id) \ >> + _mutex_reserve_lock_intr_slow(lock, reservation_id) >> + >> # define mutex_lock_nested(lock, subclass) mutex_lock(lock) >> # define mutex_lock_interruptible_nested(lock, subclass) mutex_lock_inter= ruptible(lock) >> # define mutex_lock_killable_nested(lock, subclass) mutex_lock_killable(l= ock) >> @@ -167,6 +249,8 @@ extern int __must_check mutex_lock_killable(struct mut= ex *lock); >> */ >> extern int mutex_trylock(struct mutex *lock); >> extern void mutex_unlock(struct mutex *lock); >> +extern void mutex_unreserve_unlock(struct ticket_mutex *lock); >> + >> extern int atomic_dec_and_mutex_lock(atomic_t *cnt, struct mutex *lock); >> >> #ifndef CONFIG_HAVE_ARCH_MUTEX_CPU_RELAX >> diff --git a/kernel/mutex.c b/kernel/mutex.c >> index a307cc9..8282729 100644 >> --- a/kernel/mutex.c >> +++ b/kernel/mutex.c >> @@ -126,16 +126,119 @@ void __sched mutex_unlock(struct mutex *lock) >> >> EXPORT_SYMBOL(mutex_unlock); >> >> +/** >> + * mutex_unreserve_unlock - release the mutex >> + * @lock: the mutex to be released >> + * >> + * Unlock a mutex that has been locked by this task previously >> + * with _mutex_reserve_lock*. >> + * >> + * This function must not be used in interrupt context. Unlocking >> + * of a not locked mutex is not allowed. 
>> + */ >> +void __sched mutex_unreserve_unlock(struct ticket_mutex *lock) >> +{ >> + /* >> + * mark mutex as no longer part of a reservation, next >> + * locker can set this again >> + */ >> + atomic_long_set(&lock->reservation_id, 0); >> + >> + /* >> + * The unlocking fastpath is the 0->1 transition from 'locked' >> + * into 'unlocked' state: >> + */ >> +#ifndef CONFIG_DEBUG_MUTEXES >> + /* >> + * When debugging is enabled we must not clear the owner before ti= me, >> + * the slow path will always be taken, and that clears the owner f= ield >> + * after verifying that it was indeed current. >> + */ >> + mutex_clear_owner(&lock->base); >> +#endif >> + __mutex_fastpath_unlock(&lock->base.count, __mutex_unlock_slowpath= ); >> +} >> +EXPORT_SYMBOL(mutex_unreserve_unlock); >> + >> +static inline int __sched >> +__mutex_lock_check_reserve(struct mutex *lock, unsigned long reservation_= id) >> +{ >> + struct ticket_mutex *m =3D container_of(lock, struct ticket_mutex,= base); >> + unsigned long cur_id; >> + >> + cur_id =3D atomic_long_read(&m->reservation_id); >> + if (!cur_id) >> + return 0; >> + >> + if (unlikely(reservation_id =3D=3D cur_id)) >> + return -EDEADLK; >> + >> + if (unlikely(reservation_id - cur_id <=3D LONG_MAX)) >> + return -EAGAIN; >> + >> + return 0; >> +} >> + >> +/* >> + * after acquiring lock with fastpath or when we lost out in contested >> + * slowpath, set reservation_id and wake up any waiters so they can reche= ck. >> + */ > I think that is a bit misleading, if I'm understanding correctly this > is called once you get the lock (but in either fast or slow path) Yes, but strictly speaking it does not need to be called on slow path, it wil= l do an atomic_xchg to set reservation_id, see that it had already set reservation_id, and skip the rest. :-) Maybe I should just return !__builtin_constant_p(reservation_id) in __mutex_l= ock_common to distinguish between fastpath and slowpath. That would cause __mutex_lock_common to return 0 for normal mutexes, 1 when r= eservation_id is set and slowpath is used. This would allow me to check whether mutex_set_reservation_fastpath needs to be called o= r not, and tighten up lockdep detection of mismatched mutex_reserve_lock / mutex_lock with mutex_unlock / mutex_reserve_unlock. ~Maarte --===============0856266420773712221==-- From robdclark@gmail.com Wed Jan 30 11:52:23 2013 From: Rob Clark To: linaro-mm-sig@lists.linaro.org Subject: Re: [Linaro-mm-sig] [PATCH 2/7] mutex: add support for reservation style locks Date: Wed, 30 Jan 2013 05:52:21 -0600 Message-ID: In-Reply-To: MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============3074008948968121492==" --===============3074008948968121492== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable On Wed, Jan 30, 2013 at 5:08 AM, Daniel Vetter wrote: > On Wed, Jan 30, 2013 at 2:07 AM, Rob Clark wrote: >> =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D >> Basic problem statement: >> ----- ------- --------- >> GPU's do operations that commonly involve many buffers. Those buffers >> can be shared across contexts/processes, exist in different memory >> domains (for example VRAM vs system memory), and so on. And with >> PRIME / dmabuf, they can even be shared across devices. So there are >> a handful of situations where the driver needs to wait for buffers to >> become ready. 
If you think about this in terms of waiting on a buffer >> mutex for it to become available, this presents a problem because >> there is no way to guarantee that buffers appear in a execbuf/batch in >> the same order in all contexts. That is directly under control of >> userspace, and a result of the sequence of GL calls that an >> application makes. Which results in the potential for deadlock. The >> problem gets more complex when you consider that the kernel may need >> to migrate the buffer(s) into VRAM before the GPU operates on the >> buffer(s), which main in turn require evicting some other buffers (and >> you don't want to evict other buffers which are already queued up to >> the GPU), but for a simplified understanding of the problem you can >> ignore this. >> >> The algorithm that TTM came up with for dealing with this problem is >> quite simple. For each group of buffers (execbuf) that need to be >> locked, the caller would be assigned a unique reservation_id, from a >> global counter. In case of deadlock in the process of locking all the >> buffers associated with a execbuf, the one with the lowest >> reservation_id wins, and the one with the higher reservation_id >> unlocks all of the buffers that it has already locked, and then tries >> again. >> >> Originally TTM implemented this algorithm on top of an event-queue and >> atomic-ops, but Maarten Lankhorst realized that by merging this with >> the mutex code we could take advantage of the existing mutex fast-path >> code and result in a simpler solution, and so ticket_mutex was born. >> (Well, there where also some additional complexities with the original >> implementation when you start adding in cross-device buffer sharing >> for PRIME.. Maarten could probably better explain.) > > I think the motivational writeup above is really nice, but the example > code below is a bit wrong > >> How it is used: >> --- -- -- ----- >> >> A very simplified version: >> >> int submit_execbuf(execbuf) >> { >> /* acquiring locks, before queuing up to GPU: */ >> seqno =3D assign_global_seqno(); >> retry: >> for (buf in execbuf->buffers) { >> ret =3D mutex_reserve_lock(&buf->lock, seqno); >> switch (ret) { >> case 0: >> /* we got the lock */ >> break; >> case -EAGAIN: >> /* someone with a lower seqno, so unreserve and try again: */ >> for (buf2 in reverse order starting before buf in >> execbuf->buffers) >> mutex_unreserve_unlock(&buf2->lock); >> goto retry; >> default: >> goto err; >> } >> } >> >> /* now everything is good to go, submit job to GPU: */ >> ... >> } >> >> int finish_execbuf(execbuf) >> { >> /* when GPU is finished: */ >> for (buf in execbuf->buffers) >> mutex_unreserve_unlock(&buf->lock); >> } >> =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D > > Since gpu command submission is all asnyc (hopefully at least) we > don't unlock once it completes, but right away after the commands are > submitted. Otherwise you wouldn't be able to submit new execbufs using > the same buffer objects (and besides, holding locks while going back > out to userspace is evil). right.. but I was trying to simplify the explanation for non-gpu folk.. maybe that was an over-simplification ;-) BR, -R > The trick is to add a fence object for async operation (essentially a > waitqueue on steriods to support gpu->gpu direct signalling). And > updating fences for a given execbuf needs to happen atomically for all > buffers, for otherwise userspace could trick the kernel into creating > a circular fence chain. 
This wouldn't deadlock the kernel, since > everything is async, but it'll nicely deadlock the gpus involved. > Hence why we need ticketing locks to get dma_buf fences off the > ground. > > Maybe wait for Maarten's feedback, then update your motivational blurb a bi= t? > > Cheers, Daniel > -- > Daniel Vetter > Software Engineer, Intel Corporation > +41 (0) 79 365 57 48 - http://blog.ffwll.ch --===============3074008948968121492==-- From inki.dae@samsung.com Thu Jan 31 09:32:16 2013 From: Inki Dae To: linaro-mm-sig@lists.linaro.org Subject: Re: [Linaro-mm-sig] [PATCH 4/7] fence: dma-buf cross-device synchronization (v11) Date: Thu, 31 Jan 2013 18:32:15 +0900 Message-ID: In-Reply-To: <1358253244-11453-5-git-send-email-maarten.lankhorst@canonical.com> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============1555718462163591273==" --===============1555718462163591273== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Hi, below is my opinion. > +struct fence; > +struct fence_ops; > +struct fence_cb; > + > +/** > + * struct fence - software synchronization primitive > + * @refcount: refcount for this fence > + * @ops: fence_ops associated with this fence > + * @cb_list: list of all callbacks to call > + * @lock: spin_lock_irqsave used for locking > + * @priv: fence specific private data > + * @flags: A mask of FENCE_FLAG_* defined below > + * > + * the flags member must be manipulated and read using the appropriate > + * atomic ops (bit_*), so taking the spinlock will not be needed most > + * of the time. > + * > + * FENCE_FLAG_SIGNALED_BIT - fence is already signaled > + * FENCE_FLAG_ENABLE_SIGNAL_BIT - enable_signaling might have been called* > + * FENCE_FLAG_USER_BITS - start of the unused bits, can be used by the > + * implementer of the fence for its own purposes. Can be used in different > + * ways by different fence implementers, so do not rely on this. > + * > + * *) Since atomic bitops are used, this is not guaranteed to be the case. > + * Particularly, if the bit was set, but fence_signal was called right > + * before this bit was set, it would have been able to set the > + * FENCE_FLAG_SIGNALED_BIT, before enable_signaling was called. > + * Adding a check for FENCE_FLAG_SIGNALED_BIT after setting > + * FENCE_FLAG_ENABLE_SIGNAL_BIT closes this race, and makes sure that > + * after fence_signal was called, any enable_signaling call will have eith= er > + * been completed, or never called at all. > + */ > +struct fence { > + struct kref refcount; > + const struct fence_ops *ops; > + struct list_head cb_list; > + spinlock_t *lock; > + unsigned context, seqno; > + unsigned long flags; > +}; > + > +enum fence_flag_bits { > + FENCE_FLAG_SIGNALED_BIT, > + FENCE_FLAG_ENABLE_SIGNAL_BIT, > + FENCE_FLAG_USER_BITS, /* must always be last member */ > +}; > + It seems like that this fence framework need to add read/write flags. In case of two read operations, one might wait for another one. But the another is just read operation so we doesn't need to wait for it. Shouldn't fence-wait-request be ignored? In this case, I think it's enough to consider just only write operation. For this, you could add the following, enum fence_flag_bits { ... FENCE_FLAG_ACCESS_READ_BIT, FENCE_FLAG_ACCESS_WRITE_BIT, ... 
}; And the producer could call fence_init() like below, __fence_init(..., FENCE_FLAG_ACCESS_WRITE_BIT,...); With this, fence->flags has FENCE_FLAG_ACCESS_WRITE_BIT as write operation and then other sides(read or write operation) would wait for the write operation completion. And also consumer calls that function with FENCE_FLAG_ACCESS_READ_BIT so that other consumers could ignore the fence-wait to any read operations. Thanks, Inki Dae --===============1555718462163591273==-- From maarten.lankhorst@canonical.com Thu Jan 31 09:53:29 2013 From: Maarten Lankhorst To: linaro-mm-sig@lists.linaro.org Subject: Re: [Linaro-mm-sig] [PATCH 4/7] fence: dma-buf cross-device synchronization (v11) Date: Thu, 31 Jan 2013 10:53:21 +0100 Message-ID: <510A3F11.2040000@canonical.com> In-Reply-To: MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============1515480821450271513==" --===============1515480821450271513== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Op 31-01-13 10:32, Inki Dae schreef: > Hi, > > below is my opinion. > >> +struct fence; >> +struct fence_ops; >> +struct fence_cb; >> + >> +/** >> + * struct fence - software synchronization primitive >> + * @refcount: refcount for this fence >> + * @ops: fence_ops associated with this fence >> + * @cb_list: list of all callbacks to call >> + * @lock: spin_lock_irqsave used for locking >> + * @priv: fence specific private data >> + * @flags: A mask of FENCE_FLAG_* defined below >> + * >> + * the flags member must be manipulated and read using the appropriate >> + * atomic ops (bit_*), so taking the spinlock will not be needed most >> + * of the time. >> + * >> + * FENCE_FLAG_SIGNALED_BIT - fence is already signaled >> + * FENCE_FLAG_ENABLE_SIGNAL_BIT - enable_signaling might have been called* >> + * FENCE_FLAG_USER_BITS - start of the unused bits, can be used by the >> + * implementer of the fence for its own purposes. Can be used in different >> + * ways by different fence implementers, so do not rely on this. >> + * >> + * *) Since atomic bitops are used, this is not guaranteed to be the case. >> + * Particularly, if the bit was set, but fence_signal was called right >> + * before this bit was set, it would have been able to set the >> + * FENCE_FLAG_SIGNALED_BIT, before enable_signaling was called. >> + * Adding a check for FENCE_FLAG_SIGNALED_BIT after setting >> + * FENCE_FLAG_ENABLE_SIGNAL_BIT closes this race, and makes sure that >> + * after fence_signal was called, any enable_signaling call will have eit= her >> + * been completed, or never called at all. >> + */ >> +struct fence { >> + struct kref refcount; >> + const struct fence_ops *ops; >> + struct list_head cb_list; >> + spinlock_t *lock; >> + unsigned context, seqno; >> + unsigned long flags; >> +}; >> + >> +enum fence_flag_bits { >> + FENCE_FLAG_SIGNALED_BIT, >> + FENCE_FLAG_ENABLE_SIGNAL_BIT, >> + FENCE_FLAG_USER_BITS, /* must always be last member */ >> +}; >> + > It seems like that this fence framework need to add read/write flags. > In case of two read operations, one might wait for another one. But > the another is just read operation so we doesn't need to wait for it. > Shouldn't fence-wait-request be ignored? In this case, I think it's > enough to consider just only write operation. > > For this, you could add the following, > > enum fence_flag_bits { > ... > FENCE_FLAG_ACCESS_READ_BIT, > FENCE_FLAG_ACCESS_WRITE_BIT, > ... 
> }; > > And the producer could call fence_init() like below, > __fence_init(..., FENCE_FLAG_ACCESS_WRITE_BIT,...); > > With this, fence->flags has FENCE_FLAG_ACCESS_WRITE_BIT as write > operation and then other sides(read or write operation) would wait for > the write operation completion. > And also consumer calls that function with FENCE_FLAG_ACCESS_READ_BIT > so that other consumers could ignore the fence-wait to any read > operations. > You can't put that information in the fence. If you use a fence to fence off = a hardware memcpy operation, there would be one buffer for which you would attach the fence in read mode a= nd another buffer where you need write access. ~Maarten --===============1515480821450271513==-- From daniel@ffwll.ch Thu Jan 31 09:55:17 2013 From: Daniel Vetter To: linaro-mm-sig@lists.linaro.org Subject: Re: [Linaro-mm-sig] [PATCH 4/7] fence: dma-buf cross-device synchronization (v11) Date: Thu, 31 Jan 2013 10:57:26 +0100 Message-ID: <20130131095726.GD5885@phenom.ffwll.local> In-Reply-To: MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============3741806017481186429==" --===============3741806017481186429== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable On Thu, Jan 31, 2013 at 06:32:15PM +0900, Inki Dae wrote: > Hi, >=20 > below is my opinion. >=20 > > +struct fence; > > +struct fence_ops; > > +struct fence_cb; > > + > > +/** > > + * struct fence - software synchronization primitive > > + * @refcount: refcount for this fence > > + * @ops: fence_ops associated with this fence > > + * @cb_list: list of all callbacks to call > > + * @lock: spin_lock_irqsave used for locking > > + * @priv: fence specific private data > > + * @flags: A mask of FENCE_FLAG_* defined below > > + * > > + * the flags member must be manipulated and read using the appropriate > > + * atomic ops (bit_*), so taking the spinlock will not be needed most > > + * of the time. > > + * > > + * FENCE_FLAG_SIGNALED_BIT - fence is already signaled > > + * FENCE_FLAG_ENABLE_SIGNAL_BIT - enable_signaling might have been calle= d* > > + * FENCE_FLAG_USER_BITS - start of the unused bits, can be used by the > > + * implementer of the fence for its own purposes. Can be used in differe= nt > > + * ways by different fence implementers, so do not rely on this. > > + * > > + * *) Since atomic bitops are used, this is not guaranteed to be the cas= e. > > + * Particularly, if the bit was set, but fence_signal was called right > > + * before this bit was set, it would have been able to set the > > + * FENCE_FLAG_SIGNALED_BIT, before enable_signaling was called. > > + * Adding a check for FENCE_FLAG_SIGNALED_BIT after setting > > + * FENCE_FLAG_ENABLE_SIGNAL_BIT closes this race, and makes sure that > > + * after fence_signal was called, any enable_signaling call will have ei= ther > > + * been completed, or never called at all. > > + */ > > +struct fence { > > + struct kref refcount; > > + const struct fence_ops *ops; > > + struct list_head cb_list; > > + spinlock_t *lock; > > + unsigned context, seqno; > > + unsigned long flags; > > +}; > > + > > +enum fence_flag_bits { > > + FENCE_FLAG_SIGNALED_BIT, > > + FENCE_FLAG_ENABLE_SIGNAL_BIT, > > + FENCE_FLAG_USER_BITS, /* must always be last member */ > > +}; > > + >=20 > It seems like that this fence framework need to add read/write flags. > In case of two read operations, one might wait for another one. But > the another is just read operation so we doesn't need to wait for it. 
> Shouldn't fence-wait-request be ignored? In this case, I think it's > enough to consider just only write operation. >=20 > For this, you could add the following, >=20 > enum fence_flag_bits { > ... > FENCE_FLAG_ACCESS_READ_BIT, > FENCE_FLAG_ACCESS_WRITE_BIT, > ... > }; >=20 > And the producer could call fence_init() like below, > __fence_init(..., FENCE_FLAG_ACCESS_WRITE_BIT,...); >=20 > With this, fence->flags has FENCE_FLAG_ACCESS_WRITE_BIT as write > operation and then other sides(read or write operation) would wait for > the write operation completion. > And also consumer calls that function with FENCE_FLAG_ACCESS_READ_BIT > so that other consumers could ignore the fence-wait to any read > operations. Fences here match more to the sync-points concept from the android stuff. The idea is that they only signal when a hw operation completes. Synchronization integration happens at the dma_buf level, where you can specify whether the new operation you're doing is exclusive (which means that you need to wait for all previous operations to complete), i.e. a write. Or whether the operation is non-excluses (i.e. just reading) in which case you only need to wait for any still outstanding exclusive fences attached to the dma_buf. But you _can_ attach more than one non-exclusive fence to a dma_buf at the same time, and so e.g. read a buffer objects from different engines concurrently. There's been some talk whether we also need a non-exclusive write attachment (i.e. allow multiple concurrent writers), but I don't yet fully understand the use-case. In short the proposed patches can do what you want to do, it's just that read/write access isn't part of the fences, but how you attach fences to dma_bufs. Cheers, Daniel --=20 Daniel Vetter Software Engineer, Intel Corporation +41 (0) 79 365 57 48 - http://blog.ffwll.ch --===============3741806017481186429==-- From laurent.pinchart@ideasonboard.com Thu Jan 31 10:53:24 2013 From: Laurent Pinchart To: linaro-mm-sig@lists.linaro.org Subject: Re: [Linaro-mm-sig] CDF discussions at FOSDEM Date: Thu, 31 Jan 2013 11:53:30 +0100 Message-ID: <1646498.hdCvE5WiAs@avalon> In-Reply-To: <2665133.qfM3EnSmyB@avalon> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============8090637905475621897==" --===============8090637905475621897== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: 8bit On Friday 11 January 2013 21:27:03 Laurent Pinchart wrote: > Hi everybody, > > Would anyone be interested in meeting at the FOSDEM to discuss the Common > Display Framework ? There will be a CDF meeting at the ELC at the end of > February, the FOSDEM would be a good venue for European developers. A quick follow-up on this. Given the late notice getting a room from the FOSDEM staff wasn't possible. There will be two meeting rooms available that can be reserved on-site only. They can accomodate aroudn 30 people but there will deliberately be no projector. They will be given on a first-come, first-serve basis for one hour time slots at most (see https://fosdem.org/2013/news/2013-01-31-bof- announce/). As room availability isn't guaranteed, and as one hour might be a bit short, I've secured an off-site but very close (http://www.openstreetmap.org/?lat=50.812924&lon=4.384506&zoom=18&layers=M - UrLab) room that can accomodate 12 people around a meeting table (more is possible, but it might get a bit tight then). I propose having the CDF discussion there on Sunday morning from 9am to 11am (please let me know ASAP if you can't make it at that time). 
Daniel Vetter Jani Nikula Marcus Lorentzon Laurent Pinchart Michael (from Pengutronix, not sure about the last name, sorry) Philipp Zabel Rob Clark Robert Schwebel Sascha Hauer Ville Syrjälä That's already 10 people. If someone else would like to attend the meeting please let me know. -- Regards, Laurent Pinchart --===============8090637905475621897==-- From s.hauer@pengutronix.de Thu Jan 31 11:02:22 2013 From: Sascha Hauer To: linaro-mm-sig@lists.linaro.org Subject: Re: [Linaro-mm-sig] CDF discussions at FOSDEM Date: Thu, 31 Jan 2013 12:02:21 +0100 Message-ID: <20130131110221.GT1906@pengutronix.de> In-Reply-To: <1646498.hdCvE5WiAs@avalon> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============0046589516305331141==" --===============0046589516305331141== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable On Thu, Jan 31, 2013 at 11:53:30AM +0100, Laurent Pinchart wrote: > On Friday 11 January 2013 21:27:03 Laurent Pinchart wrote: > > Hi everybody, > >=20 > > Would anyone be interested in meeting at the FOSDEM to discuss the Common > > Display Framework ? There will be a CDF meeting at the ELC at the end of > > February, the FOSDEM would be a good venue for European developers. >=20 > A quick follow-up on this. >=20 > Given the late notice getting a room from the FOSDEM staff wasn't possible.= =20 > There will be two meeting rooms available that can be reserved on-site only= .=20 > They can accomodate aroudn 30 people but there will deliberately be no=20 > projector. They will be given on a first-come, first-serve basis for one ho= ur=20 > time slots at most (see https://fosdem.org/2013/news/2013-01-31-bof- > announce/). >=20 > As room availability isn't guaranteed, and as one hour might be a bit short= ,=20 > I've secured an off-site but very close=20 > (http://www.openstreetmap.org/?lat=3D50.812924&lon=3D4.384506&zoom=3D18&lay= ers=3DM -=20 > UrLab) room that can accomodate 12 people around a meeting table (more is=20 > possible, but it might get a bit tight then). I propose having the CDF=20 > discussion there on Sunday morning from 9am to 11am (please let me know ASA= P=20 > if you can't make it at that time). >=20 > Daniel Vetter > Jani Nikula > Marcus Lorentzon > Laurent Pinchart > Michael (from Pengutronix, not sure about the last name, sorry) > Philipp Zabel > Rob Clark > Robert Schwebel > Sascha Hauer > Ville Syrj=C3=A4l=C3=A4 >=20 > That's already 10 people. If someone else would like to attend the meeting = > please let me know. If place is becomes tight I think Pengutronix doesn't have to be represented with 4 people, although all of us would be interested. Otherwise, yes, we have time on Sunday morning. Sascha --=20 Pengutronix e.K. | | Industrial Linux Solutions | http://www.pengutronix.de/ | Peiner Str. 
6-8, 31137 Hildesheim, Germany | Phone: +49-5121-206917-0 | Amtsgericht Hildesheim, HRA 2686 | Fax: +49-5121-206917-5555 | --===============0046589516305331141==-- From lars@metafoo.de Thu Jan 31 11:39:33 2013 From: Lars-Peter Clausen To: linaro-mm-sig@lists.linaro.org Subject: Re: [Linaro-mm-sig] CDF discussions at FOSDEM Date: Thu, 31 Jan 2013 12:40:36 +0100 Message-ID: <510A5834.8050308@metafoo.de> In-Reply-To: <1646498.hdCvE5WiAs@avalon> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============4753439931658049091==" --===============4753439931658049091== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable On 01/31/2013 11:53 AM, Laurent Pinchart wrote: > On Friday 11 January 2013 21:27:03 Laurent Pinchart wrote: >> Hi everybody, >> >> Would anyone be interested in meeting at the FOSDEM to discuss the Common >> Display Framework ? There will be a CDF meeting at the ELC at the end of >> February, the FOSDEM would be a good venue for European developers. >=20 > A quick follow-up on this. >=20 > Given the late notice getting a room from the FOSDEM staff wasn't possible.= =20 > There will be two meeting rooms available that can be reserved on-site only= .=20 > They can accomodate aroudn 30 people but there will deliberately be no=20 > projector. They will be given on a first-come, first-serve basis for one ho= ur=20 > time slots at most (see https://fosdem.org/2013/news/2013-01-31-bof- > announce/). >=20 > As room availability isn't guaranteed, and as one hour might be a bit short= ,=20 > I've secured an off-site but very close=20 > (http://www.openstreetmap.org/?lat=3D50.812924&lon=3D4.384506&zoom=3D18&lay= ers=3DM -=20 > UrLab) room that can accomodate 12 people around a meeting table (more is=20 > possible, but it might get a bit tight then). I propose having the CDF=20 > discussion there on Sunday morning from 9am to 11am (please let me know ASA= P=20 > if you can't make it at that time). >=20 > Daniel Vetter > Jani Nikula > Marcus Lorentzon > Laurent Pinchart > Michael (from Pengutronix, not sure about the last name, sorry) > Philipp Zabel > Rob Clark > Robert Schwebel > Sascha Hauer > Ville Syrj=C3=A4l=C3=A4 >=20 > That's already 10 people. If someone else would like to attend the meeting = > please let me know. >=20 If there's a free seat I'd like to attend as well. Thanks, - Lars --===============4753439931658049091==-- From robdclark@gmail.com Thu Jan 31 13:38:14 2013 From: Rob Clark To: linaro-mm-sig@lists.linaro.org Subject: Re: [Linaro-mm-sig] [PATCH 2/7] mutex: add support for reservation style locks Date: Thu, 31 Jan 2013 07:38:13 -0600 Message-ID: In-Reply-To: MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============3119106806316600719==" --===============3119106806316600719== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable On Wed, Jan 30, 2013 at 5:52 AM, Rob Clark wrote: > On Wed, Jan 30, 2013 at 5:08 AM, Daniel Vetter wrote: >> On Wed, Jan 30, 2013 at 2:07 AM, Rob Clark wrote: >>> =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D >>> Basic problem statement: >>> ----- ------- --------- >>> GPU's do operations that commonly involve many buffers. Those buffers >>> can be shared across contexts/processes, exist in different memory >>> domains (for example VRAM vs system memory), and so on. And with >>> PRIME / dmabuf, they can even be shared across devices. 
So there are >>> a handful of situations where the driver needs to wait for buffers to >>> become ready. If you think about this in terms of waiting on a buffer >>> mutex for it to become available, this presents a problem because >>> there is no way to guarantee that buffers appear in a execbuf/batch in >>> the same order in all contexts. That is directly under control of >>> userspace, and a result of the sequence of GL calls that an >>> application makes. Which results in the potential for deadlock. The >>> problem gets more complex when you consider that the kernel may need >>> to migrate the buffer(s) into VRAM before the GPU operates on the >>> buffer(s), which main in turn require evicting some other buffers (and >>> you don't want to evict other buffers which are already queued up to >>> the GPU), but for a simplified understanding of the problem you can >>> ignore this. >>> >>> The algorithm that TTM came up with for dealing with this problem is >>> quite simple. For each group of buffers (execbuf) that need to be >>> locked, the caller would be assigned a unique reservation_id, from a >>> global counter. In case of deadlock in the process of locking all the >>> buffers associated with a execbuf, the one with the lowest >>> reservation_id wins, and the one with the higher reservation_id >>> unlocks all of the buffers that it has already locked, and then tries >>> again. >>> >>> Originally TTM implemented this algorithm on top of an event-queue and >>> atomic-ops, but Maarten Lankhorst realized that by merging this with >>> the mutex code we could take advantage of the existing mutex fast-path >>> code and result in a simpler solution, and so ticket_mutex was born. >>> (Well, there where also some additional complexities with the original >>> implementation when you start adding in cross-device buffer sharing >>> for PRIME.. Maarten could probably better explain.) >> >> I think the motivational writeup above is really nice, but the example >> code below is a bit wrong >> >>> How it is used: >>> --- -- -- ----- >>> >>> A very simplified version: >>> >>> int submit_execbuf(execbuf) >>> { >>> /* acquiring locks, before queuing up to GPU: */ >>> seqno =3D assign_global_seqno(); >>> retry: >>> for (buf in execbuf->buffers) { >>> ret =3D mutex_reserve_lock(&buf->lock, seqno); >>> switch (ret) { >>> case 0: >>> /* we got the lock */ >>> break; >>> case -EAGAIN: >>> /* someone with a lower seqno, so unreserve and try again: = */ >>> for (buf2 in reverse order starting before buf in >>> execbuf->buffers) >>> mutex_unreserve_unlock(&buf2->lock); >>> goto retry; >>> default: >>> goto err; >>> } >>> } >>> >>> /* now everything is good to go, submit job to GPU: */ >>> ... >>> } >>> >>> int finish_execbuf(execbuf) >>> { >>> /* when GPU is finished: */ >>> for (buf in execbuf->buffers) >>> mutex_unreserve_unlock(&buf->lock); >>> } >>> =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D >> >> Since gpu command submission is all asnyc (hopefully at least) we >> don't unlock once it completes, but right away after the commands are >> submitted. Otherwise you wouldn't be able to submit new execbufs using >> the same buffer objects (and besides, holding locks while going back >> out to userspace is evil). > > right.. but I was trying to simplify the explanation for non-gpu > folk.. maybe that was an over-simplification ;-) > Ok, a bit expanded version.. I meant to send this yesterday, but I forgot.. 
=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D Basic problem statement: ----- ------- --------- GPU's do operations that commonly involve many buffers. Those buffers can be shared across contexts/processes, exist in different memory domains (for example VRAM vs system memory), and so on. And with PRIME / dmabuf, they can even be shared across devices. So there are a handful of situations where the driver needs to wait for buffers to become ready. If you think about this in terms of waiting on a buffer mutex for it to become available, this presents a problem because there is no way to guarantee that buffers appear in a execbuf/batch in the same order in all contexts. That is directly under control of userspace, and a result of the sequence of GL calls that an application makes. Which results in the potential for deadlock. The problem gets more complex when you consider that the kernel may need to migrate the buffer(s) into VRAM before the GPU operates on the buffer(s), which may in turn require evicting some other buffers (and you don't want to evict other buffers which are already queued up to the GPU), but for a simplified understanding of the problem you can ignore this. The algorithm that TTM came up with for dealing with this problem is quite simple. For each group of buffers (execbuf) that need to be locked, the caller would be assigned a unique reservation_id, from a global counter. In case of deadlock in the process of locking all the buffers associated with a execbuf, the one with the lowest reservation_id wins, and the one with the higher reservation_id unlocks all of the buffers that it has already locked, and then tries again. Originally TTM implemented this algorithm on top of an event-queue and atomic-ops, but Maarten Lankhorst realized that by merging this with the mutex code we could take advantage of the existing mutex fast-path code and result in a simpler solution, and so ticket_mutex was born. (Well, there where also some additional complexities with the original implementation when you start adding in cross-device buffer sharing for PRIME.. Maarten could probably better explain.) How it is used: --- -- -- ----- A very simplified version: int lock_execbuf(execbuf) { struct buf *res_buf =3D NULL; /* acquiring locks, before queuing up to GPU: */ seqno =3D assign_global_seqno(); retry: for (buf in execbuf->buffers) { if (buf =3D=3D res_buf) { res_buf =3D NULL; continue; } ret =3D mutex_reserve_lock(&buf->lock, seqno); if (ret < 0) goto err; } /* now everything is good to go, submit job to GPU: */ ... return 0; err: for (all buf2 before buf in execbuf->buffers) mutex_unreserve_unlock(&buf2->lock); if (res_buf) mutex_unreserve_unlock(&res_buf->lock); if (ret =3D=3D -EAGAIN) { /* we lost out in a seqno race, lock and retry.. */ mutex_reserve_lock_slow(&buf->lock, seqno); res_buf =3D buf; goto retry; } return ret; } int unlock_execbuf(execbuf) { /* when GPU is finished; */ for (buf in execbuf->buffers) mutex_unreserve_unlock(&buf->lock); } What Really Happens: ---- ------ ------- (TODO maybe this should be Documentation/dma-fence-reservation.txt and this file should just refer to it?? Well, we can shuffle things around later..) In real life, you want to keep the GPU operating asynchronously to the CPU as much as possible, and not have to wait to queue up more work for the GPU until the previous work is finished. So in practice, you are unlocking (unreserving) all the buffers once the execbuf is queued up to the GPU. 
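Roughly, the async submission path then looks something like this. This is only a
sketch in the same simplified pseudo-C style as the examples above; create_fence(),
attach_excl_fence() and queue_to_gpu() are made-up placeholder names, the real
fence/reservation calls are introduced by the following patches, and only
mutex_reserve_lock()/mutex_unreserve_unlock() and fence_signal() come from the
proposed code:

int submit_execbuf_async(execbuf)
{
    struct fence *fence;

    /* take all reservation (ticket) locks, as in lock_execbuf() above: */
    lock_execbuf(execbuf);

    /* create the fence that the hw will signal when this job completes
     * (placeholder, the fence patches provide the real constructor): */
    fence = create_fence();

    /* while still holding the locks, publish the fence on every buffer's
     * reservation object (placeholder helper): */
    for (buf in execbuf->buffers)
        attach_excl_fence(buf, fence);

    /* queue the job to the GPU, this does not block: */
    queue_to_gpu(execbuf, fence);

    /* drop the locks right away, the fence handles synchronization now: */
    for (buf in execbuf->buffers)
        mutex_unreserve_unlock(&buf->lock);

    return 0;
}

/* later, from the irq/completion path, when the hw is actually done: */
void execbuf_complete(execbuf)
{
    fence_signal(execbuf->fence);
}
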
The dma-buf fence objects, and the reservation code which manages the fence objects (and is the primary user of ticket_mutex) takes care of the synchronization of different buffer users from this point. If you really left the buffers locked until you got some irq back from the GPU to let you know that the GPU was finished, then you would be unable to queue up more rendering involving the same buffer(s), which would be quite horrible for performance. To just understand ticket_mutex, you can probably ignore this section. If you want to make your driver share buffers with a GPU properly, then you really need to be using reservation/fence, so you should read on. NOTE: the reservation object and fence are split out from dma-buf so that a driver can use them both for it's own internal buffers and for imported dma-bufs, without having to create a dma-buf for every internal buffer. For each rendering command queued up to the GPU, a fence object is created. You can think of the fence as a sort of waitqueue, except that (if it is supported by other devices waiting on the same buffer), it can be used for hw->hw signaling, so that CPU involvement is not required. A fence object is a transient, one-use, object, with two states. Initially it is created un-signaled. And later after the hw is done with the operation, it becomes signaled. (TODO probably should refer to a different .txt with more details about fences, hw->hw vs hw->sw vs sw->sw signaling, etc) The same fence is attached to the reservation_object of all buffers involved in a rendering command. In the fence_excl slot, if the buffer is being written to, otherwise in one of the fence_shared slots. (A buffer can safely have many readers at once.) If when preparing to submit the rendering command, a buffer has an un- signaled exclusive fence attached, then there must be some way to wait for that fence to become signaled before the hw uses that buffer. In the simple case, if that fence isn't one that the driver understands how to instruct the hw to wait for, then it must call fence_wait() to block until other devices have finished writing to the buffer. But if the driver has a way to instruct the hw to wait until the fence is signaled, it can just emit commands to instruct the GPU to wait in order to avoid blocking. NOTE: in actuality, if the fence is created on the same ring, and you therefore know that it will be signaled by the earlier render command before the hw sees the current render command, then inserting fence cmds to the hw can be skipped. =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D It made me realize we also need some docs about fence/reservation.. not sure if I'll get a chance before fosdem, but I can take a stab at that too BR, -R --===============3119106806316600719==-- From inki.dae@samsung.com Thu Jan 31 14:38:04 2013 From: Inki Dae To: linaro-mm-sig@lists.linaro.org Subject: Re: [Linaro-mm-sig] [PATCH 4/7] fence: dma-buf cross-device synchronization (v11) Date: Thu, 31 Jan 2013 23:38:03 +0900 Message-ID: In-Reply-To: <20130131095726.GD5885@phenom.ffwll.local> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============7286879524947483565==" --===============7286879524947483565== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable 2013/1/31 Daniel Vetter : > On Thu, Jan 31, 2013 at 06:32:15PM +0900, Inki Dae wrote: >> Hi, >> >> below is my opinion. 
>> >> > +struct fence; >> > +struct fence_ops; >> > +struct fence_cb; >> > + >> > +/** >> > + * struct fence - software synchronization primitive >> > + * @refcount: refcount for this fence >> > + * @ops: fence_ops associated with this fence >> > + * @cb_list: list of all callbacks to call >> > + * @lock: spin_lock_irqsave used for locking >> > + * @priv: fence specific private data >> > + * @flags: A mask of FENCE_FLAG_* defined below >> > + * >> > + * the flags member must be manipulated and read using the appropriate >> > + * atomic ops (bit_*), so taking the spinlock will not be needed most >> > + * of the time. >> > + * >> > + * FENCE_FLAG_SIGNALED_BIT - fence is already signaled >> > + * FENCE_FLAG_ENABLE_SIGNAL_BIT - enable_signaling might have been call= ed* >> > + * FENCE_FLAG_USER_BITS - start of the unused bits, can be used by the >> > + * implementer of the fence for its own purposes. Can be used in differ= ent >> > + * ways by different fence implementers, so do not rely on this. >> > + * >> > + * *) Since atomic bitops are used, this is not guaranteed to be the ca= se. >> > + * Particularly, if the bit was set, but fence_signal was called right >> > + * before this bit was set, it would have been able to set the >> > + * FENCE_FLAG_SIGNALED_BIT, before enable_signaling was called. >> > + * Adding a check for FENCE_FLAG_SIGNALED_BIT after setting >> > + * FENCE_FLAG_ENABLE_SIGNAL_BIT closes this race, and makes sure that >> > + * after fence_signal was called, any enable_signaling call will have e= ither >> > + * been completed, or never called at all. >> > + */ >> > +struct fence { >> > + struct kref refcount; >> > + const struct fence_ops *ops; >> > + struct list_head cb_list; >> > + spinlock_t *lock; >> > + unsigned context, seqno; >> > + unsigned long flags; >> > +}; >> > + >> > +enum fence_flag_bits { >> > + FENCE_FLAG_SIGNALED_BIT, >> > + FENCE_FLAG_ENABLE_SIGNAL_BIT, >> > + FENCE_FLAG_USER_BITS, /* must always be last member */ >> > +}; >> > + >> >> It seems like that this fence framework need to add read/write flags. >> In case of two read operations, one might wait for another one. But >> the another is just read operation so we doesn't need to wait for it. >> Shouldn't fence-wait-request be ignored? In this case, I think it's >> enough to consider just only write operation. >> >> For this, you could add the following, >> >> enum fence_flag_bits { >> ... >> FENCE_FLAG_ACCESS_READ_BIT, >> FENCE_FLAG_ACCESS_WRITE_BIT, >> ... >> }; >> >> And the producer could call fence_init() like below, >> __fence_init(..., FENCE_FLAG_ACCESS_WRITE_BIT,...); >> >> With this, fence->flags has FENCE_FLAG_ACCESS_WRITE_BIT as write >> operation and then other sides(read or write operation) would wait for >> the write operation completion. >> And also consumer calls that function with FENCE_FLAG_ACCESS_READ_BIT >> so that other consumers could ignore the fence-wait to any read >> operations. > > Fences here match more to the sync-points concept from the android stuff. > The idea is that they only signal when a hw operation completes. > > Synchronization integration happens at the dma_buf level, where you can > specify whether the new operation you're doing is exclusive (which means > that you need to wait for all previous operations to complete), i.e. a > write. Or whether the operation is non-excluses (i.e. just reading) in > which case you only need to wait for any still outstanding exclusive > fences attached to the dma_buf. 
But you _can_ attach more than one > non-exclusive fence to a dma_buf at the same time, and so e.g. read a > buffer objects from different engines concurrently. > > There's been some talk whether we also need a non-exclusive write > attachment (i.e. allow multiple concurrent writers), but I don't yet fully > understand the use-case. > > In short the proposed patches can do what you want to do, it's just that > read/write access isn't part of the fences, but how you attach fences to > dma_bufs. > Thanks for comments, Maarten and Daniel. I think I understand as your comment but I don't think that I understand fully the dma-fence mechanism. So I wish you to give me some advices for it. In our case, I'm applying the dma-fence to mali(3d gpu) driver as producer and exynos drm(display controller) driver as consumer. And the sequence is as the following: In case of producer, 1. call fence_wait to wait for the dma access completion of others. 2. And then the producer creates a fence and a new reservation entry. 3. And then it sets the given dmabuf's resv(reservation_object) to the new reservation entry. 4. And then it adds the reservation entry to entries list. 5. And then it sets the fence to all dmabufs of the entries list. Actually, this work is to set the fence to the reservaion_object of each dmabuf. 6. And then the producer's dma start. 7. Finally, when the dma start is completed, we get the entries list from a 3d job command(in case of mali core, pp job) and call fence_signal() with each fence of each reservation entry. >From here, is there my missing point? And I thought the fence from reservation entry at step 7 means that the producer wouldn't access the dmabuf attaching this fence anymore so this step wakes up all processes blocked. So I understood that the fence means a owner accessing the given dmabuf and we could aware of whether the owner commited its own fence to the given dmabuf to read or write through the fence's flags. If you give me some advices, I'd be happy. Thanks, Inki Dae > Cheers, Daniel > -- > Daniel Vetter > Software Engineer, Intel Corporation > +41 (0) 79 365 57 48 - http://blog.ffwll.ch > _______________________________________________ > dri-devel mailing list > dri-devel(a)lists.freedesktop.org > http://lists.freedesktop.org/mailman/listinfo/dri-devel --===============7286879524947483565==-- From daniel.vetter@ffwll.ch Thu Jan 31 14:49:10 2013 From: Daniel Vetter To: linaro-mm-sig@lists.linaro.org Subject: Re: [Linaro-mm-sig] [PATCH 4/7] fence: dma-buf cross-device synchronization (v11) Date: Thu, 31 Jan 2013 15:49:09 +0100 Message-ID: In-Reply-To: MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============4410276682211229549==" --===============4410276682211229549== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: 7bit On Thu, Jan 31, 2013 at 3:38 PM, Inki Dae wrote: > I think I understand as your comment but I don't think that I > understand fully the dma-fence mechanism. So I wish you to give me > some advices for it. In our case, I'm applying the dma-fence to > mali(3d gpu) driver as producer and exynos drm(display controller) > driver as consumer. > > And the sequence is as the following: > In case of producer, > 1. call fence_wait to wait for the dma access completion of others. > 2. And then the producer creates a fence and a new reservation entry. > 3. And then it sets the given dmabuf's resv(reservation_object) to the > new reservation entry. > 4. And then it adds the reservation entry to entries list. > 5. 
And then it sets the fence to all dmabufs of the entries list. > Actually, this work is to set the fence to the reservaion_object of > each dmabuf. > 6. And then the producer's dma start. > 7. Finally, when the dma start is completed, we get the entries list > from a 3d job command(in case of mali core, pp job) and call > fence_signal() with each fence of each reservation entry. > > From here, is there my missing point? Yeah, more or less. Although you need to wrap everything into ticket reservation locking so that you can atomically update fences if you have support for some form of device2device singalling (i.e. without blocking on the cpu until all the old users completed). At least that's the main point of Maarten's patches (and this does work with prime between a few drivers by now), but ofc you can use cpu blocking as a fallback. > And I thought the fence from reservation entry at step 7 means that > the producer wouldn't access the dmabuf attaching this fence anymore > so this step wakes up all processes blocked. So I understood that the > fence means a owner accessing the given dmabuf and we could aware of > whether the owner commited its own fence to the given dmabuf to read > or write through the fence's flags. The fence doesn't give ownership of the dma_buf object, but only indicates when the dma access will have completed. The relationship between dma_buf/reservation and the attached fences specify whether other hw engines can access the dma_buf, too (if the fence is non-exclusive). > If you give me some advices, I'd be happy. Rob and Maarten are working on some howtos and documentation with example code, I guess it'd be best to wait a bit until we have that. Or just review the existing stuff Rob just posted and reply with questions there. Cheers, Daniel -- Daniel Vetter Software Engineer, Intel Corporation +41 (0) 79 365 57 48 - http://blog.ffwll.ch --===============4410276682211229549==--
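
To connect the mali/exynos example above with the exclusive/non-exclusive
distinction Daniel describes, a rough sketch of the producer side follows. All
helper names here are illustrative placeholders (not the actual API from the
patch series); only fence_signal() and the ticket-lock scheme match the patches
being discussed:

int producer_submit(job)
{
    struct fence *fence;

    /* lock every dmabuf's reservation with the ticket mutexes, using the
     * same seqno/backoff scheme as in the lock_execbuf() example: */
    lock_all_reservations(job->bufs, job->seqno);

    /* fence that the 3d core will signal when the job is done (placeholder): */
    fence = job_create_fence(job);

    for (buf in job->bufs) {
        if (job_writes(job, buf))
            attach_exclusive_fence(buf->resv, fence);  /* writer: exclusive slot */
        else
            attach_shared_fence(buf->resv, fence);     /* reader: shared slot */
    }

    start_hw_job(job);                   /* asynchronous, does not block */
    unlock_all_reservations(job->bufs);  /* drop the ticket locks immediately */
    return 0;
}

/* pp-job irq handler: */
void producer_job_done(job)
{
    fence_signal(job->fence);
}

The consumer (the display controller) then only has to wait on, or program its
hardware to wait on, the exclusive fence it finds attached to the dmabuf before
scanning the buffer out; shared (read-only) fences left by other readers can be
ignored, which is the behaviour the proposed read/write flags were aiming for.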