On Sat, Sep 21, 2013 at 06:22:23AM +0900, Peter Maydell wrote:
> On 21 September 2013 04:50, Christoffer Dall <christoffer.dall@linaro.org> wrote:
>> On Fri, Sep 06, 2013 at 04:13:32PM +0100, Peter Maydell wrote:
>>>>     /* these registers are mainly used for save/restore of KVM state */
>>>>     uint8_t binary_point[2][NCPU]; /* [0]: group 0, [1]: group 1 */
>>>> -   uint32_t active_prio[4][NCPU]; /* implementation defined layout */
>>> You can't make this impdef in QEMU's state, that would mean we couldn't do migration between implementations which use different layouts. We need to pick a standard ("whatever the ARM v2 GIC implementation is" seems the obvious choice) and make the kernel convert if it's on an implementation which doesn't follow that version.
>> Implementation defined as in implementation defined in the architecture. I didn't think it would make sense to choose a format for an A15 implementation, for example, and then translate to that format for other cores using the ARM GIC. Wouldn't migration only be supported from the same QEMU model to the same QEMU model, in which case the format of this register would always be the same, and the kernel must return a format corresponding to the target CPU? Am I missing something here?
> I know it's architecturally impdef, but there are a couple of issues here:
> *) moving to the 'create me an irqchip' API is separating out the "which GIC do I have?" and "which CPU do I have?" questions somewhat, so it seems a shame to tangle them up again
Hmm, they are inherently coupled in hardware at least for v7 though, right?
> *) for getting TCG<->KVM and KVM-with-non-host-CPU cases right we need to do translation anyway, or at least think about it.
Why? For the first case, wouldn't we only support QEMU emulating the same model as KVM? And for the second, shouldn't the kernel behave the same and export the same state if you ask for a specific target, no matter what the underlying hardware is?
> So we need to at minimum specifically document what the format is for the CPUs we care about. At that point we might as well have a standard format. IIRC the GIC spec defines a "this is the sensible format" anyway.
>
> In practice, for the v7 and v8 CPUs we support, what format do they use?
It's not particularly clear as far as I can read it, but I'll have to investigate in more detail. Also, I'm not quite clear on how TCG GIC reads/writes would get translated to the proper format depending on the core in that case, and we would have to have code in the arm_gic_kvm file to detect the emulated target and do the translation. I'm failing to see the benefit.
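To make concrete what I think that translation would have to look like, here is a rough, untested sketch. gic_apr_to_canonical and host_prio_bits are names I'm making up purely for illustration; it assumes the host packs one active bit per preemption level into APR0 starting at bit 0 (the way a 5-bit-priority GIC like the A15's does), and it picks that one-bit-per-preemption-level layout over 32 levels as the "canonical" format:

    #include <stdint.h>
    #include <assert.h>

    /* Rough sketch only -- gic_apr_to_canonical and host_prio_bits are
     * made-up names, not anything in the patch.  Assumes the host
     * implementation packs one active bit per preemption level into
     * APR0 starting at bit 0, and defines the "canonical" layout as one
     * bit per preemption level across 32 levels, i.e. bit n set means
     * an interrupt with group priority (n << 3) is active.
     */
    static uint32_t gic_apr_to_canonical(uint32_t host_apr0, int host_prio_bits)
    {
        uint32_t canonical = 0;
        int levels = 1 << host_prio_bits;   /* preemption levels on the host */
        int i;

        /* sketch only handles 4- or 5-bit priority implementations */
        assert(host_prio_bits >= 4 && host_prio_bits <= 5);

        for (i = 0; i < levels; i++) {
            if (host_apr0 & (1U << i)) {
                /* scale the host level up to the 5-bit (32-level) space */
                canonical |= 1U << (i << (5 - host_prio_bits));
            }
        }
        return canonical;
    }

And we would need the inverse of that for the restore path, keyed off the same per-implementation knowledge.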
-Christoffer