On Mon, Jan 27, 2025 at 04:42:56PM -0500, Hamza Mahfooz wrote:
On Mon, Jan 27, 2025 at 09:02:22PM +0000, Michael Kelley wrote:
From: Hamza Mahfooz <hamzamahfooz@linux.microsoft.com> Sent: Monday, January 27, 2025 10:10 AM
We should select PCI_HYPERV here, otherwise it's possible for devices to not show up as expected, at least not in an orderly manner.
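The patch hunk itself is not quoted in this excerpt; a change along those lines would presumably be a one-line Kconfig addition like the sketch below, where HYPERV as the option gaining the select is only an assumption:

  # Sketch only: the existing dependencies and help text of HYPERV are
  # omitted, and the option carrying the new select is assumed rather
  # than taken from the patch.
  config HYPERV
          tristate "Microsoft Hyper-V client drivers"
          select PCI_HYPERV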
The commit message needs more precision: What does "not show up" mean, and what does "not in an orderly manner" mean? And "it's possible" is vague -- can you be more specific about the conditions? Also, avoid the use of personal pronouns like "we".
But the commit message notwithstanding, I don't think this is a change that should be made. CONFIG_PCI_HYPERV refers to the VMBus device driver for handling vPCI (a.k.a. PCI pass-thru) devices. It's perfectly possible and normal for a VM on Hyper-V to not have any such devices, in which case the driver isn't needed and should not be forced to be included. (See Documentation/virt/hyperv/vpci.rst for more on vPCI devices.)
Ya, we ran into an issue where CONFIG_NVME_CORE=y and CONFIG_PCI_HYPERV=m caused the passed-through SSDs not to show up (i.e. they aren't visible to userspace). I guess it's because PCI_HYPERV has to load before the NVMe stuff for that workload. So, I thought it was reasonable to select PCI_HYPERV here to prevent someone else from shooting themselves in the foot. Though, I guess it's really on the distro guys to get that right.
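For concreteness, here is the combination described above as a .config fragment, plus one possible distro-side remedy (the remedy is only one option and an assumption, not something settled in this thread):

  # Combination reported above to break enumeration of passed-through
  # NVMe devices: the NVMe core is built in, but the Hyper-V vPCI
  # controller driver is a module that reportedly has to be loaded
  # before the NVMe devices can be probed.
  CONFIG_NVME_CORE=y
  CONFIG_PCI_HYPERV=m
  #
  # One possible remedy on the distro side (an assumption, not from the
  # thread): build the vPCI driver in as well, i.e. CONFIG_PCI_HYPERV=y,
  # or make sure the pci-hyperv module is included in the initramfs so
  # it loads early enough.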
Does inserting the PCI_HYPERV module trigger a (re)scan of the (v)PCI bus? If so, the passed-through NVMe devices should show up just fine, I suppose.
I agree with Michael that we should not select PCI_HYPERV by default. In some environments, it is not needed at all.
Thanks, Wei.