This is a set of optimizations to SGX-related code; each patch is independent of the others. Because the second and third patches would conflict with each other, these patches are submitted together as one series.
---
v3 changes:
* split free_cnt count and spin lock optimization into two patches

v2 changes:
* review suggested changes
Tianjia Zhang (5):
  selftests/x86: Simplify the code to get vdso base address in sgx
  x86/sgx: Optimize the locking range in sgx_sanitize_section()
  x86/sgx: Optimize the free_cnt count in sgx_epc_section
  x86/sgx: Allows ioctl PROVISION to execute before CREATE
  x86/sgx: Remove redundant if conditions in sgx_encl_create
 arch/x86/kernel/cpu/sgx/driver.c   |  1 +
 arch/x86/kernel/cpu/sgx/ioctl.c    |  9 +++++----
 arch/x86/kernel/cpu/sgx/main.c     | 13 +++++--------
 tools/testing/selftests/sgx/main.c | 24 ++++--------------------
 4 files changed, 15 insertions(+), 32 deletions(-)
This patch uses the library function `getauxval(AT_SYSINFO_EHDR)` instead of the custom function `vdso_get_base_addr()` to obtain the base address of the vDSO, which simplifies the code.
Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
---
 tools/testing/selftests/sgx/main.c | 24 ++++--------------------
 1 file changed, 4 insertions(+), 20 deletions(-)
diff --git a/tools/testing/selftests/sgx/main.c b/tools/testing/selftests/sgx/main.c
index 724cec700926..365d01dea67b 100644
--- a/tools/testing/selftests/sgx/main.c
+++ b/tools/testing/selftests/sgx/main.c
@@ -15,6 +15,7 @@
 #include <sys/stat.h>
 #include <sys/time.h>
 #include <sys/types.h>
+#include <sys/auxv.h>
 #include "defines.h"
 #include "main.h"
 #include "../kselftest.h"
@@ -28,24 +29,6 @@ struct vdso_symtab {
 	Elf64_Word *elf_hashtab;
 };
-static void *vdso_get_base_addr(char *envp[])
-{
-	Elf64_auxv_t *auxv;
-	int i;
-
-	for (i = 0; envp[i]; i++)
-		;
-
-	auxv = (Elf64_auxv_t *)&envp[i + 1];
-
-	for (i = 0; auxv[i].a_type != AT_NULL; i++) {
-		if (auxv[i].a_type == AT_SYSINFO_EHDR)
-			return (void *)auxv[i].a_un.a_val;
-	}
-
-	return NULL;
-}
-
 static Elf64_Dyn *vdso_get_dyntab(void *addr)
 {
 	Elf64_Ehdr *ehdr = addr;
@@ -162,7 +145,7 @@ static int user_handler(long rdi, long rsi, long rdx, long ursp, long r8, long r9,
 	return 0;
 }
-int main(int argc, char *argv[], char *envp[])
+int main(int argc, char *argv[])
 {
 	struct sgx_enclave_run run;
 	struct vdso_symtab symtab;
@@ -203,7 +186,8 @@ int main(int argc, char *argv[], char *envp[])
 	memset(&run, 0, sizeof(run));
 	run.tcs = encl.encl_base;
-	addr = vdso_get_base_addr(envp);
+	/* Get vDSO base address */
+	addr = (void *)(uintptr_t)getauxval(AT_SYSINFO_EHDR);
 	if (!addr)
 		goto err;
The spin lock of sgx_epc_section only protects the page_list. The EREMOVE operation and the access to init_laundry_list do not need to be inside the spin lock's protection range. This patch reduces the range of the spin lock in sgx_sanitize_section() so that it only protects the operation on the page_list.
Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
---
 arch/x86/kernel/cpu/sgx/main.c | 11 ++++-------
 1 file changed, 4 insertions(+), 7 deletions(-)
diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
index c519fc5f6948..4465912174fd 100644
--- a/arch/x86/kernel/cpu/sgx/main.c
+++ b/arch/x86/kernel/cpu/sgx/main.c
@@ -41,20 +41,17 @@ static void sgx_sanitize_section(struct sgx_epc_section *section)
 		if (kthread_should_stop())
 			return;
-		/* needed for access to ->page_list: */
-		spin_lock(&section->lock);
-
 		page = list_first_entry(&section->init_laundry_list,
 					struct sgx_epc_page, list);

 		ret = __eremove(sgx_get_epc_virt_addr(page));
-		if (!ret)
+		if (!ret) {
+			spin_lock(&section->lock);
 			list_move(&page->list, &section->page_list);
-		else
+			spin_unlock(&section->lock);
+		} else
 			list_move_tail(&page->list, &dirty);

-		spin_unlock(&section->lock);
-
 		cond_resched();
 	}
On Sun, Jan 24, 2021 at 02:29:04PM +0800, Tianjia Zhang wrote:
The spin lock of sgx_epc_section only locks the page_list. The EREMOVE operation and init_laundry_list is not necessary in the protection range of the spin lock. This patch reduces the lock range of the spin lock in the function sgx_sanitize_section() and only protects the operation of the page_list.
Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
I prefer to use the word "optimize" only if there is supporting data to show that the code change has significant value.
/Jarkko
`section->free_cnt` represents the number of free pages in sgx_epc_section, and it is assigned once after initialization. In fact, just after initialization completes, the pages are on the `init_laundry_list` list and cannot be allocated. They must first be reclaimed by the EREMOVE performed in sgx_sanitize_section(), which is called from the kernel thread ksgxd, before they can be handed out as allocatable pages.

This patch moves the initialization of `section->free_cnt` from the initialization function `sgx_setup_epc_section()` to `sgx_sanitize_section()`, incrementing the count after each successful EREMOVE. This is more accurate: free_cnt then truly reflects the number of allocatable free pages in the EPC.
Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
Reviewed-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/kernel/cpu/sgx/main.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
index 4465912174fd..e455ec7b3449 100644
--- a/arch/x86/kernel/cpu/sgx/main.c
+++ b/arch/x86/kernel/cpu/sgx/main.c
@@ -48,6 +48,7 @@ static void sgx_sanitize_section(struct sgx_epc_section *section)
 		if (!ret) {
 			spin_lock(&section->lock);
 			list_move(&page->list, &section->page_list);
+			section->free_cnt++;
 			spin_unlock(&section->lock);
 		} else
 			list_move_tail(&page->list, &dirty);
@@ -643,7 +644,6 @@ static bool __init sgx_setup_epc_section(u64 phys_addr, u64 size,
 		list_add_tail(&section->pages[i].list, &section->init_laundry_list);
 	}

-	section->free_cnt = nr_pages;
 	return true;
 }
I could bet some money that this does not bring any significant performance gain.
On Sun, Jan 24, 2021 at 02:29:05PM +0800, Tianjia Zhang wrote:
`section->free_cnt` represents the free page in sgx_epc_section, which is assigned once after initialization. In fact, just after the initialization is completed, the pages are in the `init_laundry_list` list and cannot be allocated. This needs to be recovered by EREMOVE of function sgx_sanitize_section() before it can be used as a page that can be allocated. The sgx_sanitize_section() will be called in the kernel thread ksgxd.
This patch moves the initialization of `section->free_cnt` from the initialization function `sgx_setup_epc_section()` to the function `sgx_sanitize_section()`, and then accumulates the count after the
Use single quotes instead of hyphens.
successful execution of EREMOVE. This seems to be more reasonable, free_cnt will also truly reflect the allocatable free pages in EPC.
Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
Reviewed-by: Sean Christopherson <seanjc@google.com>
 arch/x86/kernel/cpu/sgx/main.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
index 4465912174fd..e455ec7b3449 100644
--- a/arch/x86/kernel/cpu/sgx/main.c
+++ b/arch/x86/kernel/cpu/sgx/main.c
@@ -48,6 +48,7 @@ static void sgx_sanitize_section(struct sgx_epc_section *section)
 		if (!ret) {
 			spin_lock(&section->lock);
 			list_move(&page->list, &section->page_list);
+			section->free_cnt++;
 			spin_unlock(&section->lock);
Someone can try to allocate a page while sanitize process is in progress.
I think it is better to keep critical sections in the form that when you leave from one, the global state is legit.
 		} else
 			list_move_tail(&page->list, &dirty);
@@ -643,7 +644,6 @@ static bool __init sgx_setup_epc_section(u64 phys_addr, u64 size,
 		list_add_tail(&section->pages[i].list, &section->init_laundry_list);
 	}

-	section->free_cnt = nr_pages;
 	return true;
 }
--
2.19.1.3.ge56e4f7
/Jarkko
Hi,
Sorry for the late reply.
On 1/28/21 1:40 AM, Jarkko Sakkinen wrote:
I could bet some money that this does not bring any significant performance gain.
Yes, this does not bring a performance gain. It is not a change made for performance; it mainly makes the value of free_cnt more accurate.
On Sun, Jan 24, 2021 at 02:29:05PM +0800, Tianjia Zhang wrote:
`section->free_cnt` represents the free page in sgx_epc_section, which is assigned once after initialization. In fact, just after the initialization is completed, the pages are in the `init_laundry_list` list and cannot be allocated. This needs to be recovered by EREMOVE of function sgx_sanitize_section() before it can be used as a page that can be allocated. The sgx_sanitize_section() will be called in the kernel thread ksgxd.
This patch moves the initialization of `section->free_cnt` from the initialization function `sgx_setup_epc_section()` to the function `sgx_sanitize_section()`, and then accumulates the count after the
Use single quotes instead of hyphens.
successful execution of EREMOVE. This seems to be more reasonable,
free_cnt will also truly reflect the allocatable free pages in EPC.
Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
Reviewed-by: Sean Christopherson <seanjc@google.com>
 arch/x86/kernel/cpu/sgx/main.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
index 4465912174fd..e455ec7b3449 100644
--- a/arch/x86/kernel/cpu/sgx/main.c
+++ b/arch/x86/kernel/cpu/sgx/main.c
@@ -48,6 +48,7 @@ static void sgx_sanitize_section(struct sgx_epc_section *section)
 		if (!ret) {
 			spin_lock(&section->lock);
 			list_move(&page->list, &section->page_list);
+			section->free_cnt++;
 			spin_unlock(&section->lock);
Someone can try to allocate a page while sanitize process is in progress.
I think it is better to keep critical sections in the form that when you leave from one, the global state is legit.
Do you mean moving the critical section to protect the entire while loop? Of course, that is also possible; sanitization is only needed during initialization, so the possibility of contention is very small.
Best regards, Tianjia
On Thu, Feb 11, 2021 at 02:04:12PM +0800, Tianjia Zhang wrote:
Hi,
Sorry for the late reply.
On 1/28/21 1:40 AM, Jarkko Sakkinen wrote:
I could bet some money that this does not bring any significant performance gain.
Yes, this does not bring performance gains. This is not a change for performance, mainly to make the value of free_cnt look more accurate.
On Sun, Jan 24, 2021 at 02:29:05PM +0800, Tianjia Zhang wrote:
`section->free_cnt` represents the free page in sgx_epc_section, which is assigned once after initialization. In fact, just after the initialization is completed, the pages are in the `init_laundry_list` list and cannot be allocated. This needs to be recovered by EREMOVE of function sgx_sanitize_section() before it can be used as a page that can be allocated. The sgx_sanitize_section() will be called in the kernel thread ksgxd.
This patch moves the initialization of `section->free_cnt` from the initialization function `sgx_setup_epc_section()` to the function `sgx_sanitize_section()`, and then accumulates the count after the
Use single quotes instead of hyphens.
successful execution of EREMOVE. This seems to be more reasonable,
free_cnt will also truly reflect the allocatable free pages in EPC.
Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
Reviewed-by: Sean Christopherson <seanjc@google.com>
 arch/x86/kernel/cpu/sgx/main.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
index 4465912174fd..e455ec7b3449 100644
--- a/arch/x86/kernel/cpu/sgx/main.c
+++ b/arch/x86/kernel/cpu/sgx/main.c
@@ -48,6 +48,7 @@ static void sgx_sanitize_section(struct sgx_epc_section *section)
 		if (!ret) {
 			spin_lock(&section->lock);
 			list_move(&page->list, &section->page_list);
+			section->free_cnt++;
 			spin_unlock(&section->lock);
Someone can try to allocate a page while sanitize process is in progress.
I think it is better to keep critical sections in the form that when you leave from one, the global state is legit.
Do you mean to move the critical section to protect the entire while loop? Of course, this is also possible, sanitize is a process only needed for initialization, and the possibility of conflict is very small.
Best regards, Tianjia
The big picture of this change to me, to be frank is that it's completely useless.
Please start with the picture.
/Jarkko
On 2/12/21 8:19 PM, Jarkko Sakkinen wrote:
On Thu, Feb 11, 2021 at 02:04:12PM +0800, Tianjia Zhang wrote:
Hi,
Sorry for the late reply.
On 1/28/21 1:40 AM, Jarkko Sakkinen wrote:
I could bet some money that this does not bring any significant performance gain.
Yes, this does not bring performance gains. This is not a change for performance, mainly to make the value of free_cnt look more accurate.
On Sun, Jan 24, 2021 at 02:29:05PM +0800, Tianjia Zhang wrote:
`section->free_cnt` represents the free page in sgx_epc_section, which is assigned once after initialization. In fact, just after the initialization is completed, the pages are in the `init_laundry_list` list and cannot be allocated. This needs to be recovered by EREMOVE of function sgx_sanitize_section() before it can be used as a page that can be allocated. The sgx_sanitize_section() will be called in the kernel thread ksgxd.
This patch moves the initialization of `section->free_cnt` from the initialization function `sgx_setup_epc_section()` to the function `sgx_sanitize_section()`, and then accumulates the count after the
Use single quotes instead of hyphens.
successful execution of EREMOVE. This seems to be more reasonable,
free_cnt will also truly reflect the allocatable free pages in EPC.
Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
Reviewed-by: Sean Christopherson <seanjc@google.com>
 arch/x86/kernel/cpu/sgx/main.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
index 4465912174fd..e455ec7b3449 100644
--- a/arch/x86/kernel/cpu/sgx/main.c
+++ b/arch/x86/kernel/cpu/sgx/main.c
@@ -48,6 +48,7 @@ static void sgx_sanitize_section(struct sgx_epc_section *section)
 		if (!ret) {
 			spin_lock(&section->lock);
 			list_move(&page->list, &section->page_list);
+			section->free_cnt++;
 			spin_unlock(&section->lock);
Someone can try to allocate a page while sanitize process is in progress.
I think it is better to keep critical sections in the form that when you leave from one, the global state is legit.
Do you mean to move the critical section to protect the entire while loop? Of course, this is also possible, sanitize is a process only needed for initialization, and the possibility of conflict is very small.
Best regards, Tianjia
The big picture of this change to me, to be frank is that it's completely useless.
Please start with the picture.
/Jarkko
I carefully considered your suggestion, and I will delete 2/5 and 3/5 in the next version.
Best regards, Tianjia
In the function sgx_create_enclave(), the direct assignment of attributes_mask means that the PROVISION ioctl can only be executed after the CREATE ioctl, which limits the flexibility of SGX developers.

This patch moves the assignment of `attributes_mask` from the function sgx_create_enclave() to the function sgx_open() to remove this restriction.
Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
---
 arch/x86/kernel/cpu/sgx/driver.c | 1 +
 arch/x86/kernel/cpu/sgx/ioctl.c  | 1 -
 2 files changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/kernel/cpu/sgx/driver.c b/arch/x86/kernel/cpu/sgx/driver.c
index f2eac41bb4ff..fba0d0bfe976 100644
--- a/arch/x86/kernel/cpu/sgx/driver.c
+++ b/arch/x86/kernel/cpu/sgx/driver.c
@@ -36,6 +36,7 @@ static int sgx_open(struct inode *inode, struct file *file)
 			return ret;
 	}

+	encl->attributes_mask = SGX_ATTR_DEBUG | SGX_ATTR_MODE64BIT | SGX_ATTR_KSS;
 	file->private_data = encl;

 	return 0;
diff --git a/arch/x86/kernel/cpu/sgx/ioctl.c b/arch/x86/kernel/cpu/sgx/ioctl.c
index 90a5caf76939..1c6ecf9fbeff 100644
--- a/arch/x86/kernel/cpu/sgx/ioctl.c
+++ b/arch/x86/kernel/cpu/sgx/ioctl.c
@@ -109,7 +109,6 @@ static int sgx_encl_create(struct sgx_encl *encl, struct sgx_secs *secs)
 	encl->base = secs->base;
 	encl->size = secs->size;
 	encl->attributes = secs->attributes;
-	encl->attributes_mask = SGX_ATTR_DEBUG | SGX_ATTR_MODE64BIT | SGX_ATTR_KSS;

 	/* Set only after completion, as encl->lock has not been taken. */
 	set_bit(SGX_ENCL_CREATED, &encl->flags);
On Sun, Jan 24, 2021 at 02:29:06PM +0800, Tianjia Zhang wrote:
In the function sgx_create_enclave(), the direct assignment operation of attributes_mask determines that the ioctl PROVISION operation must be executed after the ioctl CREATE operation, which will limit the flexibility of sgx developers.
This patch takes the assignment of `attributes_mask` from the function sgx_create_enclave() has been moved to the function sgx_open() to avoid this restriction.
Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
The commit message should make explicit that the API behaviour changes as a result. Please don't use hyphens in quoting.
/Jarkko
 arch/x86/kernel/cpu/sgx/driver.c | 1 +
 arch/x86/kernel/cpu/sgx/ioctl.c  | 1 -
 2 files changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/kernel/cpu/sgx/driver.c b/arch/x86/kernel/cpu/sgx/driver.c
index f2eac41bb4ff..fba0d0bfe976 100644
--- a/arch/x86/kernel/cpu/sgx/driver.c
+++ b/arch/x86/kernel/cpu/sgx/driver.c
@@ -36,6 +36,7 @@ static int sgx_open(struct inode *inode, struct file *file)
 			return ret;
 	}

+	encl->attributes_mask = SGX_ATTR_DEBUG | SGX_ATTR_MODE64BIT | SGX_ATTR_KSS;
 	file->private_data = encl;

 	return 0;
diff --git a/arch/x86/kernel/cpu/sgx/ioctl.c b/arch/x86/kernel/cpu/sgx/ioctl.c
index 90a5caf76939..1c6ecf9fbeff 100644
--- a/arch/x86/kernel/cpu/sgx/ioctl.c
+++ b/arch/x86/kernel/cpu/sgx/ioctl.c
@@ -109,7 +109,6 @@ static int sgx_encl_create(struct sgx_encl *encl, struct sgx_secs *secs)
 	encl->base = secs->base;
 	encl->size = secs->size;
 	encl->attributes = secs->attributes;
-	encl->attributes_mask = SGX_ATTR_DEBUG | SGX_ATTR_MODE64BIT | SGX_ATTR_KSS;

 	/* Set only after completion, as encl->lock has not been taken. */
 	set_bit(SGX_ENCL_CREATED, &encl->flags);
--
2.19.1.3.ge56e4f7
On 1/30/21 9:26 PM, Jarkko Sakkinen wrote:
On Sun, Jan 24, 2021 at 02:29:06PM +0800, Tianjia Zhang wrote:
In the function sgx_create_enclave(), the direct assignment operation of attributes_mask determines that the ioctl PROVISION operation must be executed after the ioctl CREATE operation, which will limit the flexibility of sgx developers.
This patch takes the assignment of `attributes_mask` from the function sgx_create_enclave() has been moved to the function sgx_open() to avoid this restriction.
Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
The commit message should explicit that the API behaviour changes as the result. Please don't use hyphens in quoting.
/Jarkko
Will be improved in the next version.
Best regards, Tianjia
 arch/x86/kernel/cpu/sgx/driver.c | 1 +
 arch/x86/kernel/cpu/sgx/ioctl.c  | 1 -
 2 files changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/kernel/cpu/sgx/driver.c b/arch/x86/kernel/cpu/sgx/driver.c
index f2eac41bb4ff..fba0d0bfe976 100644
--- a/arch/x86/kernel/cpu/sgx/driver.c
+++ b/arch/x86/kernel/cpu/sgx/driver.c
@@ -36,6 +36,7 @@ static int sgx_open(struct inode *inode, struct file *file)
 			return ret;
 	}

+	encl->attributes_mask = SGX_ATTR_DEBUG | SGX_ATTR_MODE64BIT | SGX_ATTR_KSS;
 	file->private_data = encl;

 	return 0;
diff --git a/arch/x86/kernel/cpu/sgx/ioctl.c b/arch/x86/kernel/cpu/sgx/ioctl.c
index 90a5caf76939..1c6ecf9fbeff 100644
--- a/arch/x86/kernel/cpu/sgx/ioctl.c
+++ b/arch/x86/kernel/cpu/sgx/ioctl.c
@@ -109,7 +109,6 @@ static int sgx_encl_create(struct sgx_encl *encl, struct sgx_secs *secs)
 	encl->base = secs->base;
 	encl->size = secs->size;
 	encl->attributes = secs->attributes;
-	encl->attributes_mask = SGX_ATTR_DEBUG | SGX_ATTR_MODE64BIT | SGX_ATTR_KSS;

 	/* Set only after completion, as encl->lock has not been taken. */
 	set_bit(SGX_ENCL_CREATED, &encl->flags);
--
2.19.1.3.ge56e4f7
In this scenario, there is no case where va_page is NULL, and errors have already been checked. The if condition here is redundant, so remove the condition check.
Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
---
 arch/x86/kernel/cpu/sgx/ioctl.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kernel/cpu/sgx/ioctl.c b/arch/x86/kernel/cpu/sgx/ioctl.c
index 1c6ecf9fbeff..b0b829f1b761 100644
--- a/arch/x86/kernel/cpu/sgx/ioctl.c
+++ b/arch/x86/kernel/cpu/sgx/ioctl.c
@@ -66,9 +66,11 @@ static int sgx_encl_create(struct sgx_encl *encl, struct sgx_secs *secs)
 	va_page = sgx_encl_grow(encl);
 	if (IS_ERR(va_page))
 		return PTR_ERR(va_page);
-	else if (va_page)
-		list_add(&va_page->list, &encl->va_pages);
-	/* else the tail page of the VA page list had free slots. */
+
+	if (WARN_ONCE(!va_page, "non-empty VA page list before ECREATE"))
+		return -EIO;
+
+	list_add(&va_page->list, &encl->va_pages);

 	/* The extra page goes to SECS. */
 	encl_size = secs->size + PAGE_SIZE;
On Sun, Jan 24, 2021 at 02:29:07PM +0800, Tianjia Zhang wrote:
In this scenario, there is no case where va_page is NULL, and the error has been checked. The if condition statement here is redundant, so remove the condition detection.
Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
 arch/x86/kernel/cpu/sgx/ioctl.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kernel/cpu/sgx/ioctl.c b/arch/x86/kernel/cpu/sgx/ioctl.c
index 1c6ecf9fbeff..b0b829f1b761 100644
--- a/arch/x86/kernel/cpu/sgx/ioctl.c
+++ b/arch/x86/kernel/cpu/sgx/ioctl.c
@@ -66,9 +66,11 @@ static int sgx_encl_create(struct sgx_encl *encl, struct sgx_secs *secs)
 	va_page = sgx_encl_grow(encl);
 	if (IS_ERR(va_page))
 		return PTR_ERR(va_page);
- else if (va_page)
list_add(&va_page->list, &encl->va_pages);
- /* else the tail page of the VA page list had free slots. */
- if (WARN_ONCE(!va_page, "non-empty VA page list before ECREATE"))
return -EIO;
So you just crashed machines that have panic-on-warn enabled. Don't do that for no reason, just handle the error and move on.
thanks,
greg k-h
On Sun, Jan 24, 2021, Greg KH wrote:
On Sun, Jan 24, 2021 at 02:29:07PM +0800, Tianjia Zhang wrote:
In this scenario, there is no case where va_page is NULL, and the error has been checked. The if condition statement here is redundant, so remove the condition detection.
Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
 arch/x86/kernel/cpu/sgx/ioctl.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kernel/cpu/sgx/ioctl.c b/arch/x86/kernel/cpu/sgx/ioctl.c
index 1c6ecf9fbeff..b0b829f1b761 100644
--- a/arch/x86/kernel/cpu/sgx/ioctl.c
+++ b/arch/x86/kernel/cpu/sgx/ioctl.c
@@ -66,9 +66,11 @@ static int sgx_encl_create(struct sgx_encl *encl, struct sgx_secs *secs)
 	va_page = sgx_encl_grow(encl);
 	if (IS_ERR(va_page))
 		return PTR_ERR(va_page);
- else if (va_page)
list_add(&va_page->list, &encl->va_pages);
- /* else the tail page of the VA page list had free slots. */
- if (WARN_ONCE(!va_page, "non-empty VA page list before ECREATE"))
return -EIO;
So you just crashed machines that have panic-on-warn enabled. Don't do that for no reason, just handle the error and move on.
The WARN will only fire if someone introduces a kernel bug. It's one part documentation, two parts helping detect future breakage. Even if the VA page management is significantly reworked, I'm having a hard time envisioning a scenario where adding VA pages before ECREATE would be anything but a kernel bug.
On Sun, Jan 24, 2021 at 02:29:07PM +0800, Tianjia Zhang wrote:
In this scenario, there is no case where va_page is NULL, and the error has been checked. The if condition statement here is redundant, so remove the condition detection.
Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
 arch/x86/kernel/cpu/sgx/ioctl.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kernel/cpu/sgx/ioctl.c b/arch/x86/kernel/cpu/sgx/ioctl.c
index 1c6ecf9fbeff..b0b829f1b761 100644
--- a/arch/x86/kernel/cpu/sgx/ioctl.c
+++ b/arch/x86/kernel/cpu/sgx/ioctl.c
@@ -66,9 +66,11 @@ static int sgx_encl_create(struct sgx_encl *encl, struct sgx_secs *secs)
 	va_page = sgx_encl_grow(encl);
 	if (IS_ERR(va_page))
 		return PTR_ERR(va_page);
- else if (va_page)
list_add(&va_page->list, &encl->va_pages);
- /* else the tail page of the VA page list had free slots. */
This is fine. The check does not make sense.
- if (WARN_ONCE(!va_page, "non-empty VA page list before ECREATE"))
return -EIO;
No need for this WARN_ONCE().
- list_add(&va_page->list, &encl->va_pages);
This is fine.
 	/* The extra page goes to SECS. */
 	encl_size = secs->size + PAGE_SIZE;
--
2.19.1.3.ge56e4f7
/Jarkko
On 1/30/21 10:33 PM, Jarkko Sakkinen wrote:
On Sun, Jan 24, 2021 at 02:29:07PM +0800, Tianjia Zhang wrote:
In this scenario, there is no case where va_page is NULL, and the error has been checked. The if condition statement here is redundant, so remove the condition detection.
Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
 arch/x86/kernel/cpu/sgx/ioctl.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kernel/cpu/sgx/ioctl.c b/arch/x86/kernel/cpu/sgx/ioctl.c
index 1c6ecf9fbeff..b0b829f1b761 100644
--- a/arch/x86/kernel/cpu/sgx/ioctl.c
+++ b/arch/x86/kernel/cpu/sgx/ioctl.c
@@ -66,9 +66,11 @@ static int sgx_encl_create(struct sgx_encl *encl, struct sgx_secs *secs)
 	va_page = sgx_encl_grow(encl);
 	if (IS_ERR(va_page))
 		return PTR_ERR(va_page);
- else if (va_page)
list_add(&va_page->list, &encl->va_pages);
- /* else the tail page of the VA page list had free slots. */
This is fine. The check does not make sense.
- if (WARN_ONCE(!va_page, "non-empty VA page list before ECREATE"))
return -EIO;
No need for this WARN_ONCE().
- list_add(&va_page->list, &encl->va_pages);
This is fine.
 	/* The extra page goes to SECS. */
 	encl_size = secs->size + PAGE_SIZE;
--
2.19.1.3.ge56e4f7
/Jarkko
Will be improved in the next version.
Thanks, Tianjia
On Sun, Jan 24, 2021 at 02:29:02PM +0800, Tianjia Zhang wrote:
This is an optimization of a set of sgx-related codes, each of which is independent of the patch. Because the second and third patches have conflicting dependencies, these patches are put together.
v3 changes:
- split free_cnt count and spin lock optimization into two patches
v2 changes:
- review suggested changes
Tianjia Zhang (5):
  selftests/x86: Simplify the code to get vdso base address in sgx
  x86/sgx: Optimize the locking range in sgx_sanitize_section()
  x86/sgx: Optimize the free_cnt count in sgx_epc_section
  x86/sgx: Allows ioctl PROVISION to execute before CREATE
  x86/sgx: Remove redundant if conditions in sgx_encl_create
I remember asking about the previous patches. I don't recall getting any responses to anything, basically.
/Jarkko