It would be very useful for user space to have a way of asking KVM ARM/ARM64 for a VCPU whose target type suits the underlying host but which has a particular set of features.

In other words, user space might not be interested in a particular target type for the VCPU, but it will certainly want a particular set of features in the VCPU.

This series implements the above method of specifying VCPU target CPU=Host from user space by extending the KVM_ARM_VCPU_INIT ioctl with a dummy target, KVM_ARM_TARGET_HOST, which means a target CPU the same as (or suitable to) the underlying host.

This change to the KVM_ARM_VCPU_INIT ioctl is backward compatible with its current semantics and hence does not break user space. The only difference is that the ioctl will now update the struct kvm_vcpu_init instance (passed by user space) with the target type and features of the VCPU that KVM ARM/ARM64 will emulate.
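For illustration, a minimal sketch of how user space could use the extended ioctl; the vcpu_fd is assumed to come from KVM_CREATE_VCPU, and error handling is trimmed:

/* Sketch only: ask for a host-matching VCPU, then read back what KVM chose. */
#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

int init_vcpu_as_host(int vcpu_fd)
{
	struct kvm_vcpu_init init;

	memset(&init, 0, sizeof(init));
	init.target = KVM_ARM_TARGET_HOST;	/* dummy "same as host" target */

	/* With this series the ioctl is in/out: on success, init.target and
	 * init.features[] describe the VCPU that KVM will actually emulate. */
	if (ioctl(vcpu_fd, KVM_ARM_VCPU_INIT, &init) < 0)
		return -1;

	return init.target;	/* e.g. KVM_ARM_TARGET_CORTEX_A15 on an A15 host */
}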
Anup Patel (3):
  ARM: KVM: Implement target CPU=Host
  ARM64: KVM: Implement target CPU=Host
  KVM: ARM: Update documentation for KVM_ARM_VCPU_INIT ioctl
 Documentation/virtual/kvm/api.txt |    5 ++++-
 arch/arm/include/asm/kvm_host.h   |    2 +-
 arch/arm/include/uapi/asm/kvm.h   |    1 +
 arch/arm/kvm/arm.c                |    9 ++++++++-
 arch/arm/kvm/guest.c              |   11 +++++++++--
 arch/arm64/include/asm/kvm_host.h |    2 +-
 arch/arm64/include/uapi/asm/kvm.h |    2 +-
 arch/arm64/kvm/guest.c            |   10 ++++++++--
 8 files changed, 33 insertions(+), 9 deletions(-)
This patch implements KVM_ARM_TARGET_HOST for KVM ARM.
If user space provides KVM_ARM_TARGET_HOST as the target type in the KVM_ARM_VCPU_INIT ioctl, we find the appropriate target type for the underlying host and return it to user space via struct kvm_vcpu_init.
Signed-off-by: Anup Patel <anup.patel@linaro.org>
Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
---
 arch/arm/include/asm/kvm_host.h |    2 +-
 arch/arm/include/uapi/asm/kvm.h |    1 +
 arch/arm/kvm/arm.c              |    9 ++++++++-
 arch/arm/kvm/guest.c            |   11 +++++++++--
 4 files changed, 19 insertions(+), 4 deletions(-)
diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index 7d22517..3bb6c2b 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -153,7 +153,7 @@ struct kvm_vcpu_stat {
 
 struct kvm_vcpu_init;
 int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
-			const struct kvm_vcpu_init *init);
+			struct kvm_vcpu_init *init);
 unsigned long kvm_arm_num_regs(struct kvm_vcpu *vcpu);
 int kvm_arm_copy_reg_indices(struct kvm_vcpu *vcpu, u64 __user *indices);
 struct kvm_one_reg;
diff --git a/arch/arm/include/uapi/asm/kvm.h b/arch/arm/include/uapi/asm/kvm.h
index c1ee007..644012e 100644
--- a/arch/arm/include/uapi/asm/kvm.h
+++ b/arch/arm/include/uapi/asm/kvm.h
@@ -63,6 +63,7 @@ struct kvm_regs {
 
 /* Supported Processor Types */
 #define KVM_ARM_TARGET_CORTEX_A15	0
+#define KVM_ARM_TARGET_HOST		0x0FFFFFFF
 #define KVM_ARM_NUM_TARGETS		1
 
 /* KVM_ARM_SET_DEVICE_ADDR ioctl id encoding */
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index 741f66a..d8e494e 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -698,6 +698,7 @@ int kvm_vm_ioctl_irq_line(struct kvm *kvm, struct kvm_irq_level *irq_level,
 long kvm_arch_vcpu_ioctl(struct file *filp,
 			 unsigned int ioctl, unsigned long arg)
 {
+	int err;
 	struct kvm_vcpu *vcpu = filp->private_data;
 	void __user *argp = (void __user *)arg;
 
@@ -708,8 +709,14 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
 		if (copy_from_user(&init, argp, sizeof(init)))
 			return -EFAULT;
 
-		return kvm_vcpu_set_target(vcpu, &init);
+		err = kvm_vcpu_set_target(vcpu, &init);
+		if (err)
+			return err;
+
+		if (copy_to_user(argp, &init, sizeof(init)))
+			return -EFAULT;
 
+		return 0;
 	}
 	case KVM_SET_ONE_REG:
 	case KVM_GET_ONE_REG: {
diff --git a/arch/arm/kvm/guest.c b/arch/arm/kvm/guest.c
index 152d036..050df63 100644
--- a/arch/arm/kvm/guest.c
+++ b/arch/arm/kvm/guest.c
@@ -198,12 +198,19 @@ int __attribute_const__ kvm_target_cpu(void)
 }
 
 int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
-			const struct kvm_vcpu_init *init)
+			struct kvm_vcpu_init *init)
 {
 	unsigned int i;
+	int phys_target = kvm_target_cpu();
+
+	if (phys_target < 0) {
+		return phys_target;
+	}
 
 	/* We can only do a cortex A15 for now. */
-	if (init->target != kvm_target_cpu())
+	if (init->target == KVM_ARM_TARGET_HOST)
+		init->target = phys_target;
+	else if (init->target != phys_target)
 		return -EINVAL;
 
 	vcpu->arch.target = init->target;
On 05.09.2013 16:46, Anup Patel wrote:
This patch implements KVM_ARM_TARGET_HOST for KVM ARM.
If user space provides KVM_ARM_TARGET_HOST as the target type in the KVM_ARM_VCPU_INIT ioctl, we find the appropriate target type for the underlying host and return it to user space via struct kvm_vcpu_init.
[...]
Acked-by: Claudio Fontana claudio.fontana@huawei.com
This patch implements KVM_ARM_TARGET_HOST for KVM ARM64.
If user space provides KVM_ARM_TARGET_HOST as the target type in the KVM_ARM_VCPU_INIT ioctl, we find the appropriate target type for the underlying host and return it to user space via struct kvm_vcpu_init.
Signed-off-by: Anup Patel <anup.patel@linaro.org>
Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
---
 arch/arm64/include/asm/kvm_host.h |    2 +-
 arch/arm64/include/uapi/asm/kvm.h |    2 +-
 arch/arm64/kvm/guest.c            |   10 ++++++++--
 3 files changed, 10 insertions(+), 4 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 0859a4d..9ee431c 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -150,7 +150,7 @@ struct kvm_vcpu_stat {
 
 struct kvm_vcpu_init;
 int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
-			const struct kvm_vcpu_init *init);
+			struct kvm_vcpu_init *init);
 unsigned long kvm_arm_num_regs(struct kvm_vcpu *vcpu);
 int kvm_arm_copy_reg_indices(struct kvm_vcpu *vcpu, u64 __user *indices);
 struct kvm_one_reg;
diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
index d9f026b..eeb5726 100644
--- a/arch/arm64/include/uapi/asm/kvm.h
+++ b/arch/arm64/include/uapi/asm/kvm.h
@@ -56,7 +56,7 @@ struct kvm_regs {
 #define KVM_ARM_TARGET_FOUNDATION_V8	1
 #define KVM_ARM_TARGET_CORTEX_A57	2
 #define KVM_ARM_TARGET_XGENE_POTENZA	3
-
+#define KVM_ARM_TARGET_HOST		0x0FFFFFFF
 #define KVM_ARM_NUM_TARGETS		4
 
 /* KVM_ARM_SET_DEVICE_ADDR ioctl id encoding */
diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index d7bf7d6..261e836 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -230,12 +230,18 @@ int __attribute_const__ kvm_target_cpu(void)
 }
 
 int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
-			const struct kvm_vcpu_init *init)
+			struct kvm_vcpu_init *init)
 {
 	unsigned int i;
 	int phys_target = kvm_target_cpu();
 
-	if (init->target != phys_target)
+	if (phys_target < 0) {
+		return phys_target;
+	}
+
+	if (init->target == KVM_ARM_TARGET_HOST)
+		init->target = phys_target;
+	else if (init->target != phys_target)
 		return -EINVAL;
 
 	vcpu->arch.target = phys_target;
On 05.09.2013 16:46, Anup Patel wrote:
This patch implements KVM_ARM_TARGET_HOST for KVM ARM64.
If user space provides KVM_ARM_TARGET_HOST as the target type in the KVM_ARM_VCPU_INIT ioctl, we find the appropriate target type for the underlying host and return it to user space via struct kvm_vcpu_init.
[...]
Acked-by: Claudio Fontana claudio.fontana@huawei.com
To implement the target type KVM_ARM_TARGET_HOST, we make the KVM_ARM_VCPU_INIT ioctl bi-directional so that KVM ARM/ARM64 can return the chosen target type to user space via struct kvm_vcpu_init.
Signed-off-by: Anup Patel <anup.patel@linaro.org>
Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
---
 Documentation/virtual/kvm/api.txt |    5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
index ef925ea..c0a3a05 100644
--- a/Documentation/virtual/kvm/api.txt
+++ b/Documentation/virtual/kvm/api.txt
@@ -2283,7 +2283,7 @@ current state. "addr" is ignored.
 Capability: basic
 Architectures: arm, arm64
 Type: vcpu ioctl
-Parameters: struct struct kvm_vcpu_init (in)
+Parameters: struct kvm_vcpu_init (in/out)
 Returns: 0 on success; -1 on error
 Errors:
   EINVAL:    the target is unknown, or the combination of features is invalid.
@@ -2303,6 +2303,9 @@ Possible features:
 	- KVM_ARM_VCPU_EL1_32BIT: Starts the CPU in a 32bit mode.
 	  Depends on KVM_CAP_ARM_EL1_32BIT (arm64 only).
 
+This ioctl returns updated struct kvm_vcpu_init showing VCPU target
+type and VCPU features that will be available.
+
4.83 KVM_GET_REG_LIST
Hi Anup,
I am no native English speaker, but this is my attempt at improving the descriptions:
On 05.09.2013 16:46, Anup Patel wrote:
To implement the target type KVM_ARM_TARGET_HOST, we make the KVM_ARM_VCPU_INIT ioctl bi-directional so that KVM ARM/ARM64 can return the chosen target type to user space via struct kvm_vcpu_init.
If I understood correctly:
To implement target type KVM_ARM_TARGET_HOST we change the "struct kvm_vcpu_init" input parameter into an in/out parameter, so that KVM ARM/ARM64 can inform user space about the chosen target type.
[...]
+This ioctl returns updated struct kvm_vcpu_init showing VCPU target
+type and VCPU features that will be available.
This ioctl returns an updated struct kvm_vcpu_init informing about the chosen VCPU target type and available VCPU features.
4.83 KVM_GET_REG_LIST
Ciao,
Claudio
On Fri, Sep 6, 2013 at 1:35 PM, Claudio Fontana claudio.fontana@huawei.com wrote:
Hi Anup,
I am no native English speaker, but this is my attempt at improving the descriptions:
On 05.09.2013 16:46, Anup Patel wrote:
To implement the target type KVM_ARM_TARGET_HOST, we make the KVM_ARM_VCPU_INIT ioctl bi-directional so that KVM ARM/ARM64 can return the chosen target type to user space via struct kvm_vcpu_init.
If I understood correctly:
To implement target type KVM_ARM_TARGET_HOST we change the "struct kvm_vcpu_init" input parameter into an in/out parameter, so that KVM ARM/ARM64 can inform user space about the chosen target type.
Yes, you got it right.
Actually, KVM will return both the VCPU target type and the VCPU features that it will be emulating for the VCPU.
[...]
Regards, Anup
On 5 September 2013 15:45, Anup Patel anup.patel@linaro.org wrote:
It would be very useful for user space to have a way of asking KVM ARM/ARM64 for a VCPU whose target type suits the underlying host but which has a particular set of features.

In other words, user space might not be interested in a particular target type for the VCPU, but it will certainly want a particular set of features in the VCPU.

This series implements the above method of specifying VCPU target CPU=Host from user space by extending the KVM_ARM_VCPU_INIT ioctl with a dummy target, KVM_ARM_TARGET_HOST, which means a target CPU the same as (or suitable to) the underlying host.
I thought the consensus on the call last week was to have an ioctl for "get best CPU for this host" and then for userspace to pass that value to VCPU_INIT ?
-- PMM
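For concreteness, the two-step flow described above would look roughly like the sketch below; KVM_ARM_PREFERRED_TARGET is a hypothetical name here, standing in for a separate "get best CPU for this host" query:

/* Sketch only: query the preferred target first, then init the VCPU. */
#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

static int init_vcpu_two_step(int vm_fd, int vcpu_fd)
{
	struct kvm_vcpu_init init;

	memset(&init, 0, sizeof(init));

	/* Hypothetical vm ioctl: ask which target best suits this host,
	 * without creating or initialising any VCPU. */
	if (ioctl(vm_fd, KVM_ARM_PREFERRED_TARGET, &init) < 0)
		return -1;

	/* User space now knows init.target up front (usable for migration
	 * or device tree generation), can add the features it wants, and
	 * only then initialises the VCPU with a fully known target. */
	return ioctl(vcpu_fd, KVM_ARM_VCPU_INIT, &init);
}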
On Thu, Sep 5, 2013 at 8:21 PM, Peter Maydell peter.maydell@linaro.org wrote:
On 5 September 2013 15:45, Anup Patel anup.patel@linaro.org wrote:
[...]
I thought the consensus on the call last week was to have an ioctl for "get best CPU for this host" and then for userspace to pass that value to VCPU_INIT ?
I had considered having a separate ioctl for "get best CPU for this host" (as per KVM call minutes) but the same thing is possible with existing KVM_ARM_VCPU_INIT so why not extend this ioctl. Also, I was finding it hard to come up with a use case where having a separate ioctl for "get best CPU for this host" would be better for user space.
Further, the approach proposed by this patch also brings the KVM_ARM_VCPU_INIT ioctl in line with what we do for KVM x86, that is, user space provides only the set of features it wants and KVM x86 will try to provide most of those features with a virtual CPU suitable to the host.
Another advantage I saw in extending KVM_ARM_VCPU_INIT ioctl is backward compatibility with current semantics. In other words, this patch does not break current KVMTOOL/QEMU and they can implement "-cpu host" whenever they wish without using any additional ioctl.
PFA, KVMTOOL patch which I used to test KVM_ARM_TARGET_HOST.
-- PMM
--Anup
On 06.09.2013, at 09:44, Anup Patel wrote:
On Thu, Sep 5, 2013 at 8:21 PM, Peter Maydell peter.maydell@linaro.org wrote:
On 5 September 2013 15:45, Anup Patel anup.patel@linaro.org wrote:
[...]
I thought the consensus on the call last week was to have an ioctl for "get best CPU for this host" and then for userspace to pass that value to VCPU_INIT ?
I had considered having a separate ioctl for "get best CPU for this host" (as per KVM call minutes) but the same thing is possible with existing KVM_ARM_VCPU_INIT so why not extend this ioctl. Also, I was finding
Because it means that everything as of the INIT call is reproducible by user space. There is no way the kernel can do magic behind our back to create a vcpu that user space couldn't create the same way on a different host when you want to live migrate.
it hard to come up with a use case where having a separate ioctl for "get best CPU for this host" would be better for user space.
It can store it and migrate it as part of its migration protocol (probably hard for QEMU's current work flow, but that's QEMU's issue).
Further, the approach proposed by this patch also brings the KVM_ARM_VCPU_INIT ioctl in line with what we do for KVM x86, that is, user space provides only the set of features it wants and KVM x86 will try to provide most of those features with a virtual CPU suitable to the host.
I don't think you fully grasped how the x86 feature negotiation thing works :).
Another advantage I saw in extending KVM_ARM_VCPU_INIT ioctl is backward compatibility with current semantics. In other words, this patch does not break current KVMTOOL/QEMU and they can implement "-cpu host" whenever they wish without using any additional ioctl.
It's the opposite actually. By making the ioctl parameter direction in/out you change the ioctl number, breaking the ABI, no?
Alex
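A note on the ABI point: with the usual asm-generic encoding, the parameter direction is baked into the ioctl number itself, so an "in" ioctl redefined as "in/out" would normally become a different number. A sketch, with illustrative macro names:

#include <linux/ioctl.h>
#include <linux/kvm.h>	/* KVMIO, struct kvm_vcpu_init */

/* _IOW encodes direction = write-only (user -> kernel); _IOWR encodes
 * read/write. The direction bits sit in the top bits of the 32-bit ioctl
 * number, so these two expand to different numbers for the same command. */
#define EXAMPLE_VCPU_INIT_IN	_IOW(KVMIO, 0xae, struct kvm_vcpu_init)
#define EXAMPLE_VCPU_INIT_INOUT	_IOWR(KVMIO, 0xae, struct kvm_vcpu_init)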
On Fri, Sep 06, 2013 at 09:54:23AM +0100, Alexander Graf wrote:
On 06.09.2013, at 09:44, Anup Patel wrote:
On Thu, Sep 5, 2013 at 8:21 PM, Peter Maydell peter.maydell@linaro.org wrote:
On 5 September 2013 15:45, Anup Patel anup.patel@linaro.org wrote:
[...]
I thought the consensus on the call last week was to have an ioctl for "get best CPU for this host" and then for userspace to pass that value to VCPU_INIT ?
I had considered having a separate ioctl for "get best CPU for this host" (as per KVM call minutes) but the same thing is possible with existing KVM_ARM_VCPU_INIT so why not extend this ioctl. Also, I was finding
Because it means that everything as of the INIT call is reproducible by user space. There is no way the kernel can do magic behind our back to create a vcpu that user space couldn't create the same way on a different host when you want to live migrate.
it hard to come up with a use case where having a separate ioctl for "get best CPU for this host" would be better for user space.
It can store it and migrate it as part of its migration protocol (probably hard for QEMU's current work flow, but that's QEMU's issue).
It's also better from the point of view of devicetree generation. For example, kvmtool needs to generate the correct architected timer interrupt numbers for the target CPU, so munging this into KVM_ARM_VCPU_INIT means we have to go off and figure out the host CPU separately anyway.
Will
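To illustrate the devicetree point, user space has to map the VCPU target back to per-CPU details such as timer interrupts; the helper below is a sketch with an illustrative mapping, not kvmtool's actual code:

/* Sketch only: the virtual architected timer PPI that user space puts
 * into the guest device tree depends on knowing the VCPU target. */
static int virt_timer_ppi_for_target(int target)
{
	switch (target) {
	case KVM_ARM_TARGET_CORTEX_A15:
		return 27;	/* virtual timer PPI (INTID 27) on Cortex-A15 */
	default:
		return -1;	/* unknown target: cannot build the DT */
	}
}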
On Fri, Sep 6, 2013 at 2:36 PM, Will Deacon will.deacon@arm.com wrote:
On Fri, Sep 06, 2013 at 09:54:23AM +0100, Alexander Graf wrote:
On 06.09.2013, at 09:44, Anup Patel wrote:
On Thu, Sep 5, 2013 at 8:21 PM, Peter Maydell peter.maydell@linaro.org wrote:
On 5 September 2013 15:45, Anup Patel anup.patel@linaro.org wrote:
[...]
I thought the consensus on the call last week was to have an ioctl for "get best CPU for this host" and then for userspace to pass that value to VCPU_INIT ?
I had considered having a separate ioctl for "get best CPU for this host" (as per KVM call minutes) but the same thing is possible with existing KVM_ARM_VCPU_INIT so why not extend this ioctl. Also, I was finding
Because it means that everything as of the INIT call is reproducible by user space. There is no way the kernel can do magic behind our back to create a vcpu that user space couldn't create the same way on a different host when you want to live migrate.
it hard to come up with a use case where having a separate ioctl for "get best CPU for this host" would be better for user space.
It can store it and migrate it as part of its migration protocol (probably hard for QEMU's current work flow, but that's QEMU's issue).
It's also better from the point of view of devicetree generation. For example, kvmtool needs to generate the correct architected timer interrupt numbers for the target CPU, so munging this into KVM_ARM_VCPU_INIT means we have to go off and figure out the host CPU separately anyway.
Please look at the patch carefully: we are returning the VCPU target type back to user space, so you can generate correct architected timer interrupt numbers for the target CPU.

In fact, you will be able to generate a complete DTB for the guest based on the VCPU target type returned by KVM.

For your reference, also look at the KVMTOOL patch that I attached in my previous reply.
Will
--Anup
On Fri, Sep 06, 2013 at 11:09:09AM +0100, Anup Patel wrote:
On Fri, Sep 6, 2013 at 2:36 PM, Will Deacon will.deacon@arm.com wrote:
On Fri, Sep 06, 2013 at 09:54:23AM +0100, Alexander Graf wrote:
On 06.09.2013, at 09:44, Anup Patel wrote:
it hard to come up with a use case where having a separate ioctl for "get best CPU for this host" would be better for user space.
It can store it and migrate it as part of its migration protocol (probably hard for QEMU's current work flow, but that's QEMU's issue).
It's also better from the point of view of devicetree generation. For example, kvmtool needs to generate the correct architected timer interrupt numbers for the target CPU, so munging this into KVM_ARM_VCPU_INIT means we have to go off and figure out the host CPU separately anyway.
Please look at the patch carefully: we are returning the VCPU target type back to user space, so you can generate correct architected timer interrupt numbers for the target CPU.
I did look at the patch, but I also looked at the overriding consensus in the feedback saying that you can't change the direction of the ioctl, so that would require what I said above.
Will
On Fri, Sep 6, 2013 at 4:19 PM, Will Deacon will.deacon@arm.com wrote:
On Fri, Sep 06, 2013 at 11:09:09AM +0100, Anup Patel wrote:
On Fri, Sep 6, 2013 at 2:36 PM, Will Deacon will.deacon@arm.com wrote:
On Fri, Sep 06, 2013 at 09:54:23AM +0100, Alexander Graf wrote:
On 06.09.2013, at 09:44, Anup Patel wrote:
it hard to come up with a use case where having a separate ioctl for "get best CPU for this host" would be better for user space.
It can store it and migrate it as part of its migration protocol (probably hard for QEMU's current work flow, but that's QEMU's issue).
It's also better from the point of view of devicetree generation. For example, kvmtool needs to generate the correct architected timer interrupt numbers for the target CPU, so munging this into KVM_ARM_VCPU_INIT means we have to go off and figure out the host CPU separately anyway.
Please look at the patch carefully: we are returning the VCPU target type back to user space, so you can generate correct architected timer interrupt numbers for the target CPU.
I did look at the patch, but I also looked at the overriding consensus in the feedback saying that you can't change the direction of the ioctl, so that would require what I said above.
Will
Instead of arguing more, I'll save everyone's time and revise this patch as needed by everyone.
--Anup
On Fri, Sep 06, 2013 at 11:51:07AM +0100, Anup Patel wrote:
On Fri, Sep 6, 2013 at 4:19 PM, Will Deacon will.deacon@arm.com wrote:
On Fri, Sep 06, 2013 at 11:09:09AM +0100, Anup Patel wrote:
On Fri, Sep 6, 2013 at 2:36 PM, Will Deacon will.deacon@arm.com wrote:
On Fri, Sep 06, 2013 at 09:54:23AM +0100, Alexander Graf wrote:
On 06.09.2013, at 09:44, Anup Patel wrote:
it hard to come up with a use case where having a separate ioctl for "get best CPU for this host" would be better for user space.
It can store it and migrate it as part of its migration protocol (probably hard for QEMU's current work flow, but that's QEMU's issue).
It's also better from the point of view of devicetree generation. For example, kvmtool needs to generate the correct architected timer interrupt numbers for the target CPU, so munging this into KVM_ARM_VCPU_INIT means we have to go off and figure out the host CPU separately anyway.
Please look at the patch carefully: we are returning the VCPU target type back to user space, so you can generate correct architected timer interrupt numbers for the target CPU.
I did look at the patch, but I also looked at the overriding consensus in the feedback saying that you can't change the direction of the ioctl, so that would require what I said above.
Will
Instead of arguing more, I'll save everyone's time and revise this patch as needed by everyone.
Cheers :)
Will
On Fri, Sep 6, 2013 at 2:24 PM, Alexander Graf agraf@suse.de wrote:
On 06.09.2013, at 09:44, Anup Patel wrote:
On Thu, Sep 5, 2013 at 8:21 PM, Peter Maydell peter.maydell@linaro.org wrote:
On 5 September 2013 15:45, Anup Patel anup.patel@linaro.org wrote:
[...]
I thought the consensus on the call last week was to have an ioctl for "get best CPU for this host" and then for userspace to pass that value to VCPU_INIT ?
I had considered having a separate ioctl for "get best CPU for this host" (as per KVM call minutes) but the same thing is possible with existing KVM_ARM_VCPU_INIT so why not extend this ioctl. Also, I was finding
Because it means that everything as of the INIT call is reproducible by user space. There is no way the kernel can do magic behind our back to create a vcpu that user space couldn't create the same way on a different host when you want to live migrate.
Yes, whatever we do in the INIT call is still reproducible.

Also, the final VCPU target type and VCPU features are returned back to user space.
it hard to come up with a use case where having a separate ioctl for "get best CPU for this host" would be better for user space.
It can store it and migrate it as part of its migration protocol (probably hard for QEMU's current work flow, but that's QEMU's issue).
Please look at the patch carefully.

The patch makes the KVM_ARM_VCPU_INIT ioctl bi-directional, which means we return the VCPU target type and VCPU features back to user space.
Further, the approach proposed by this patch also brings the KVM_ARM_VCPU_INIT ioctl in line with what we do for KVM x86, that is, user space provides only the set of features it wants and KVM x86 will try to provide most of those features with a virtual CPU suitable to the host.
I don't think you fully grasped how the x86 feature negotiation thing works :).
The feature negotiation implemented by this patch is as follows:
1. user space proposes a VCPU target type (or VCPU same as host) and VCPU features
2. KVM does the checking and determines the appropriate VCPU target type and VCPU features
3. KVM returns the VCPU target type and VCPU features back to user space

Now I am no x86 expert, but the above scheme of feature negotiation is generic enough.
Another advantage I saw in extending KVM_ARM_VCPU_INIT ioctl is backward compatibility with current semantics. In other words, this patch does not break current KVMTOOL/QEMU and they can implement "-cpu host" whenever they wish without using any additional ioctl.
It's the opposite actually. By making the ioctl parameter direction in/out you change the ioctl number, breaking the ABI, no?
Originally the ioctl was only "in" and so we are preserving the "in" semantics. That's why it is semantically backward compatible.
I have tested this patch with unmodified QEMU and KVMTOOL.
Alex
--Anup
On 06.09.2013, at 12:05, Anup Patel wrote:
On Fri, Sep 6, 2013 at 2:24 PM, Alexander Graf agraf@suse.de wrote:
On 06.09.2013, at 09:44, Anup Patel wrote:
On Thu, Sep 5, 2013 at 8:21 PM, Peter Maydell peter.maydell@linaro.org wrote:
On 5 September 2013 15:45, Anup Patel anup.patel@linaro.org wrote:
[...]
I thought the consensus on the call last week was to have an ioctl for "get best CPU for this host" and then for userspace to pass that value to VCPU_INIT ?
I had considered having a separate ioctl for "get best CPU for this host" (as per KVM call minutes) but the same thing is possible with existing KVM_ARM_VCPU_INIT so why not extend this ioctl. Also, I was finding
Because it means that everything as of the INIT call is reproducible by user space. There is no way the kernel can do magic behind our back to create a vcpu that user space couldn't create the same way on a different host when you want to live migrate.
Yes, whatever we do in the INIT call is still reproducible.

Also, the final VCPU target type and VCPU features are returned back to user space.
But you're combining two semantically separate things into a single operation. So now there is no way for the machine model to ask KVM what CPU it prefers without creating a CPU. So you have to first create a CPU, then somehow route the type information back into the machine model so that the machine model can create the appropriate timer.
This is messy without any good reason.
it hard to come up with a use case where having a separate ioctl for "get best CPU for this host" would be better for user space.
It can store it and migrate it as part of its migration protocol (probably hard for QEMU's current work flow, but that's QEMU's issue).
Please look at the patch carefully.

The patch makes the KVM_ARM_VCPU_INIT ioctl bi-directional, which means we return the VCPU target type and VCPU features back to user space.
Further, the approach proposed by this patch also brings the KVM_ARM_VCPU_INIT ioctl in line with what we do for KVM x86, that is, user space provides only the set of features it wants and KVM x86 will try to provide most of those features with a virtual CPU suitable to the host.
I don't think you fully grasped how the x86 feature negotiation thing works :).
The feature negotiation implemented by this patch is as follows:
1. user space proposes a VCPU target type (or VCPU same as host) and VCPU features
2. KVM does the checking and determines the appropriate VCPU target type and VCPU features
3. KVM returns the VCPU target type and VCPU features back to user space

Now I am no x86 expert, but the above scheme of feature negotiation is generic enough.
It's orders of magnitude less flexible than what x86 allows you to do. I'm not criticizing the intent here, but I'm saying that it's not even remotely close to anything x86 does. So referring to it here is pointless :).
Another advantage I saw in extending KVM_ARM_VCPU_INIT ioctl is backward compatibility with current semantics. In other words, this patch does not break current KVMTOOL/QEMU and they can implement "-cpu host" whenever they wish without using any additional ioctl.
It's the opposite actually. By making the ioctl parameter direction in/out you change the ioctl number, breaking the ABI, no?
Originally the ioctl was only "in" and so we are preserving the "in" semantics. That's why it is semantically backward compatible.
Great. So now we have an ioctl that says it's "in" in its ioctl descriptor, but really it's in/out. This really only works by accident because nobody is filtering the direction today.
Nack.
Alex
On 2013-09-06 11:24, Alexander Graf wrote:
On 06.09.2013, at 12:05, Anup Patel wrote:
On Fri, Sep 6, 2013 at 2:24 PM, Alexander Graf agraf@suse.de wrote:
On 06.09.2013, at 09:44, Anup Patel wrote:
[...]
Another advantage I saw in extending KVM_ARM_VCPU_INIT ioctl is backward compatibility with current semantics. In other words, this patch does not break current KVMTOOL/QEMU and they can implement "-cpu host" whenever they wish without using any additional ioctl.
It's the opposite actually. By making the ioctl parameter direction in/out you change the ioctl number, breaking the ABI, no?
Originally the ioctl was only "in" and so we are preserving the "in" semantics. That's why it is semantically backward compatible.
Great. So now we have an ioctl that says it's "in" in its ioctl descriptor, but really it's in/out. This really only works by accident because nobody is filtering the direction today.
Nack.
Agreed. We don't break the ABI, we don't try to fool the kernel. Please.
There have been previous suggestions on how to implement this feature; please consider them.
M.
On Fri, Sep 6, 2013 at 4:04 PM, Marc Zyngier marc.zyngier@arm.com wrote:
On 2013-09-06 11:24, Alexander Graf wrote:
On 06.09.2013, at 12:05, Anup Patel wrote:
On Fri, Sep 6, 2013 at 2:24 PM, Alexander Graf agraf@suse.de wrote:
On 06.09.2013, at 09:44, Anup Patel wrote:
[...]
Another advantage I saw in extending KVM_ARM_VCPU_INIT ioctl is backward compatibility with current semantics. In other words, this patch does not break current KVMTOOL/QEMU and they can implement "-cpu host" whenever they wish without using any additional ioctl.
It's the opposite actually. By making the ioctl parameter direction in/out you change the ioctl number, breaking the ABI, no?
Originally the ioctl was only "in" and so we are preserving the "in" semantics. That's why it is semantically backward compatible.
Great. So now we have an ioctl that says it's "in" in its ioctl descriptor, but really it's in/out. This really only works by accident because nobody is filtering the direction today.
Nack.
Agreed. We don't break the ABI, we don't try to fool the kernel. Please.
We are not breaking the ABI here and also not trying to fool the kernel.
There have been previous suggestions on how to implement this feature; please consider them.
I am not convinced as to why this approach is not better.
M.
-- Fast, cheap, reliable. Pick two.
--Anup
On Fri, Sep 6, 2013 at 3:54 PM, Alexander Graf agraf@suse.de wrote:
On 06.09.2013, at 12:05, Anup Patel wrote:
On Fri, Sep 6, 2013 at 2:24 PM, Alexander Graf agraf@suse.de wrote:
On 06.09.2013, at 09:44, Anup Patel wrote:
[...]
But you're combining two semantically separate things into a single operation. So now there is no way for the machine model to ask KVM what CPU it prefers without creating a CPU. So you have to first create a CPU, then somehow route the type information back into the machine model so that the machine model can create the appropriate timer.
This is messy without any good reason.
Machine models resembling real-world machines will always ask for a fixed VCPU target type, whereas generic machines will ask for a VCPU target type same as the host.

For example, a VExpress-A15 machine model will ask for KVM_ARM_TARGET_CORTEX_A15 whereas a mach-virt machine model will ask for KVM_ARM_TARGET_HOST.

I don't see how this will be messy for QEMU.
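A tiny sketch of that split; the enum and helper names below are hypothetical:

enum machine_model { MACHINE_VEXPRESS_A15, MACHINE_VIRT };

/* Board-like models pin the VCPU target; generic models follow the host. */
static int target_for_machine(enum machine_model model)
{
	switch (model) {
	case MACHINE_VEXPRESS_A15:
		return KVM_ARM_TARGET_CORTEX_A15;	/* models a real A15 board */
	case MACHINE_VIRT:
	default:
		return KVM_ARM_TARGET_HOST;		/* whatever suits the host */
	}
}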
[...]
Great. So now we have an ioctl that says it's "in" in its ioctl descriptor, but really it's in/out. This really only works by accident because nobody is filtering the direction today.
The patch series also updates the documentation.
Nack.
Alex
--Anup