KVM: Raise the maximum number of user memslots
mainline inclusion
from mainline-v5.12-rc1
commit 4fc096a9
category: feature
bugzilla: https://gitee.com/openeuler/intel-kernel/issues/I7S3VQ
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=4fc096a99e01dd06dc55bef76ade7f8d76653245

This commit also removes the risc-v and loongarch versions of
KVM_USER_MEM_SLOTS, which never appeared in the upstream code but were
added in the backporting commits e91613ae ("RISC-V: Add initial skeletal
KVM support") and 622f37d3 ("LoongArch: kvm: add initial kvm support"),
presumably because this commit was missing at the time. So do the same
for risc-v and loongarch, making them use the generic definition of
KVM_USER_MEM_SLOTS.

----------------------------------------------------------------------

The current KVM_USER_MEM_SLOTS limits are arch-specific (512 on Power,
509 on x86, 32 on s390, 16 on MIPS), but they don't really need to be.
Memory slots are allocated dynamically in KVM when added, so the only
real limitation is the 'id_to_index' array, whose entries are 'short'.
We don't have any other statically defined structures sized by
KVM_MEM_SLOTS_NUM/KVM_USER_MEM_SLOTS.

A low KVM_USER_MEM_SLOTS can be a limiting factor for some
configurations. In particular, when QEMU tries to start a Windows guest
with Hyper-V SynIC enabled and e.g. 256 vCPUs, the limit is hit: SynIC
requires two pages per vCPU, and the guest is free to pick any GFN for
each of them. This fragments memslots, as QEMU wants a separate memslot
for each of these pages (which are supposed to act as 'overlay' pages),
so 256 vCPUs alone can require up to 2 x 256 = 512 memslots, more than
the 509 available on x86.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20210127175731.2020089-3-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Yu Zhang <yu.c.zhang@linux.intel.com>