On 18.11.2021 06:32, Juergen Gross wrote:
On 18.11.21 03:37, Stefano Stabellini wrote:
--- a/drivers/xen/xenbus/xenbus_probe.c
+++ b/drivers/xen/xenbus/xenbus_probe.c
@@ -951,6 +951,28 @@ static int __init xenbus_init(void)
		err = hvm_get_parameter(HVM_PARAM_STORE_PFN, &v);
		if (err)
			goto out_error;
/*
* Return error on an invalid value.
*
* Uninitialized hvm_params are zero and return no error.
* Although it is theoretically possible to have
* HVM_PARAM_STORE_PFN set to zero on purpose, in reality it is
* not zero when valid. If zero, it means that Xenstore hasn't
* been properly initialized. Instead of attempting to map a
* wrong guest physical address return error.
*/
if (v == 0) {
Make this "if (v == ULONG_MAX || v== 0)" instead? This would result in the same err on a new and an old hypervisor (assuming we switch the hypervisor to init params with ~0UL).
err = -ENOENT;
goto out_error;
}
/*
* ULONG_MAX is invalid on 64-bit because it is INVALID_PFN.
* On 32-bit, return an error to avoid truncation.
*/
if (v >= ULONG_MAX) {
err = -EINVAL;
goto out_error;
}
Does it make sense to keep the system running in case of truncation? This would be a 32-bit guest with more than 16TB of RAM, where the Xen tools decided to place the Xenstore ring page above the 16TB boundary. This is a completely insane scenario IMO.
A proper panic() in this case would make diagnosis much easier (though I have doubts that this will ever be hit).
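For illustration, a sketch of how both suggestions could be folded into xenbus_init(). This is not the submitted patch: the combined 0/ULONG_MAX check and the panic-on-truncation path are the reviewers' proposals, the panic message text is invented, and it assumes the usual kernel definitions (v is uint64_t, BITS_PER_LONG from the quoted file's includes):

		/*
		 * Sketch only (not the submitted patch): treat 0
		 * (uninitialized param) and ULONG_MAX (a hypervisor that
		 * initializes params with ~0UL, i.e. INVALID_PFN on 64-bit)
		 * the same way, so old and new hypervisors yield the same
		 * error.
		 */
		if (v == 0 || v == ULONG_MAX) {
			err = -ENOENT;
			goto out_error;
		}

#if BITS_PER_LONG == 32
		/*
		 * On 32-bit, a PFN above ULONG_MAX cannot be mapped at all;
		 * panic (as suggested) rather than silently truncate it.
		 */
		if (v > ULONG_MAX)
			panic("HVM_PARAM_STORE_PFN %llx does not fit in an unsigned long",
			      (unsigned long long)v);
#endif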
While I agree panic() may be an option here (albeit I'm not sure why that would be better than trying to cope with 0 and hence without xenbus), I'd like to point out that the amount of RAM assigned to a guest is unrelated to the choice of GFNs for the various "magic" items.
Jan