7cc1d214 | 23-Apr-2024 | Will Deacon <will@kernel.org>
KVM: arm64: Avoid BBM when changing only s/w bits in Stage-2 PTE
Break-before-make (BBM) can be expensive, as transitioning via an invalid mapping (i.e. the "break" step) requires the completion of TLB invalidation and can also cause other agents to fault concurrently on the invalid mapping.
Since BBM is not required when changing only the software bits of a PTE, avoid the sequence in this case and just update the PTE directly.
Signed-off-by: Will Deacon <will@kernel.org> Signed-off-by: Fuad Tabba <tabba@google.com> Acked-by: Oliver Upton <oliver.upton@linux.dev> Link: https://lore.kernel.org/r/20240423150538.2103045-9-tabba@google.com Signed-off-by: Marc Zyngier <maz@kernel.org>
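For illustration only, a minimal sketch (plain C, not the kernel's own helpers) of the check this enables: if the old and new descriptors differ only in the software-reserved bits, the update needs no break-before-make. The mask name and value below are an assumption based on the architectural software bits [58:55], not the kernel's definition.

  #include <stdbool.h>
  #include <stdint.h>

  /* Bits [58:55] of a translation table descriptor are reserved for
   * software use (illustrative mask; the kernel has its own). */
  #define S2_PTE_SW_BITS  (((uint64_t)0xf) << 55)

  /* True when old and new descriptors differ only in software bits, so
   * no break-before-make (and no TLB invalidation) is required. */
  static bool only_sw_bits_changed(uint64_t old_pte, uint64_t new_pte)
  {
          return ((old_pte ^ new_pte) & ~S2_PTE_SW_BITS) == 0;
  }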
96171cfa | 23-Apr-2024 | Marc Zyngier <maz@kernel.org>
KVM: arm64: Check for PTE validity when checking for executable/cacheable
Don't just assume that the PTE is valid when checking whether it describes an executable or cacheable mapping.
This makes sure that we don't issue CMOs for invalid mappings.
Suggested-by: Will Deacon <will@kernel.org> Signed-off-by: Fuad Tabba <tabba@google.com> Acked-by: Oliver Upton <oliver.upton@linux.dev> Link: https://lore.kernel.org/r/20240423150538.2103045-8-tabba@google.com Signed-off-by: Marc Zyngier <maz@kernel.org>
4c36a156 | 27-Mar-2024 | Will Deacon <will@kernel.org>
KVM: arm64: Ensure target address is granule-aligned for range TLBI
When zapping a table entry in stage2_try_break_pte(), we issue range TLB invalidation for the region that was mapped by the table. However, we neglect to align the base address down to the granule size and so if we ended up reaching the table entry via a misaligned address then we will accidentally skip invalidation for some prefix of the affected address range.
Align 'ctx->addr' down to the granule size when performing TLB invalidation for an unmapped table in stage2_try_break_pte().
Cc: Raghavendra Rao Ananta <rananta@google.com> Cc: Gavin Shan <gshan@redhat.com> Cc: Shaoqin Huang <shahuang@redhat.com> Cc: Quentin Perret <qperret@google.com> Fixes: defc8cc7abf0 ("KVM: arm64: Invalidate the table entries upon a range") Signed-off-by: Will Deacon <will@kernel.org> Reviewed-by: Shaoqin Huang <shahuang@redhat.com> Reviewed-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20240327124853.11206-5-will@kernel.org Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
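A standalone sketch of the arithmetic behind the fix, assuming a fixed granule size purely for illustration (the kernel derives the size from the walk level):

  #include <stdint.h>

  /* Size mapped by one entry at the affected level; 2MiB is used here
   * only as an example (a level-2 block with 4KiB granules). */
  #define GRANULE_SIZE  ((uint64_t)1 << 21)

  /* Align the walk address down to the granule so the range TLBI covers
   * everything the zapped table mapped, not just the misaligned tail. */
  static uint64_t tlbi_base(uint64_t addr)
  {
          return addr & ~(GRANULE_SIZE - 1);
  }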
36e00832 | 27-Mar-2024 | Will Deacon <will@kernel.org>
KVM: arm64: Don't pass a TLBI level hint when zapping table entries
The TLBI level hints are for leaf entries only, so take care not to pass them incorrectly after clearing a table entry.
Cc: Gavin Shan <gshan@redhat.com> Cc: Marc Zyngier <maz@kernel.org> Cc: Quentin Perret <qperret@google.com> Fixes: 82bb02445de5 ("KVM: arm64: Implement kvm_pgtable_hyp_unmap() at EL2") Fixes: 6d9d2115c480 ("KVM: arm64: Add support for stage-2 map()/unmap() in generic page-table") Signed-off-by: Will Deacon <will@kernel.org> Reviewed-by: Shaoqin Huang <shahuang@redhat.com> Reviewed-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20240327124853.11206-3-will@kernel.org Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
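A minimal sketch of the rule with a made-up helper (not the kernel's API); 0 below is only a stand-in for "no hint":

  /* Pass a TLBI level hint only when a leaf entry was zapped; a zapped
   * table entry may have mapped entries at several levels below it, so
   * no hint can be given (0 stands in for "no level information"). */
  static int tlbi_level_hint(int zapped_leaf, int level)
  {
          return zapped_leaf ? level : 0;
  }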
f62d4c3e | 27-Mar-2024 | Will Deacon <will@kernel.org>
KVM: arm64: Don't defer TLB invalidation when zapping table entries
Commit 7657ea920c54 ("KVM: arm64: Use TLBI range-based instructions for unmap") introduced deferred TLB invalidation for the stage-2 page-table so that range-based invalidation can be used for the accumulated addresses. This works fine if the structure of the page-tables remains unchanged, but if entire tables are zapped and subsequently freed then we transiently leave the hardware page-table walker with a reference to freed memory thanks to the translation walk caches. For example, stage2_unmap_walker() will free page-table pages:
	if (childp)
		mm_ops->put_page(childp);
and issue the TLB invalidation later in kvm_pgtable_stage2_unmap():
	if (stage2_unmap_defer_tlb_flush(pgt))
		/* Perform the deferred TLB invalidations */
		kvm_tlb_flush_vmid_range(pgt->mmu, addr, size);
For now, take the conservative approach and invalidate the TLB eagerly when we clear a table entry. Note, however, that the existing level hint passed to __kvm_tlb_flush_vmid_ipa() is incorrect and will be fixed in a subsequent patch.
Cc: Raghavendra Rao Ananta <rananta@google.com> Cc: Shaoqin Huang <shahuang@redhat.com> Cc: Marc Zyngier <maz@kernel.org> Cc: Oliver Upton <oliver.upton@linux.dev> Signed-off-by: Will Deacon <will@kernel.org> Reviewed-by: Shaoqin Huang <shahuang@redhat.com> Reviewed-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20240327124853.11206-2-will@kernel.org Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
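A self-contained sketch of the ordering this change enforces; the TLBI helper below is a stand-in, not a kernel symbol:

  #include <stdint.h>
  #include <stdlib.h>

  static void tlbi_for_zapped_table(uint64_t ipa) { (void)ipa; } /* stand-in */

  /* Clear the table descriptor and invalidate the TLB (including walk
   * caches) eagerly, before the child table page is freed, so the
   * hardware walker can never follow a stale pointer into freed memory. */
  static void zap_table_entry(uint64_t *ptep, uint64_t ipa, void *childp)
  {
          *ptep = 0;                     /* "break": clear the descriptor */
          tlbi_for_zapped_table(ipa);    /* eager, not deferred           */
          free(childp);                  /* only now safe to free         */
  }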
c034ec84 | 24-Feb-2024 | Ankit Agrawal <ankita@nvidia.com>
KVM: arm64: Introduce new flag for non-cacheable IO memory
Currently, KVM for ARM64 maps at stage 2 memory that is considered device (i.e. it is not RAM) with DEVICE_nGnRE memory attributes; this setting overrides (as per the ARM architecture [1]) any device MMIO mapping present at stage 1, resulting in a set-up whereby a guest operating system cannot determine device MMIO mapping memory attributes on its own but it is always overridden by the KVM stage 2 default.
This set-up does not allow guest operating systems to select device memory attributes independently from KVM stage-2 mappings (refer to [1], "Combining stage 1 and stage 2 memory type attributes"), which turns out to be an issue in that guest operating systems (e.g. Linux) may request to map devices MMIO regions with memory attributes that guarantee better performance (e.g. gathering attribute - that for some devices can generate larger PCIe memory writes TLPs) and specific operations (e.g. unaligned transactions) such as the NormalNC memory type.
The default device stage 2 mapping was chosen in KVM for ARM64 since it was considered safer (i.e. it would not allow guests to trigger uncontained failures ultimately crashing the machine), but the aborts involved turn out to be asynchronous (SError) anyway, defeating the purpose.
Failure containability is a property of the platform and is independent of the memory type used for MMIO device memory mappings.
Actually, DEVICE_nGnRE memory type is even more problematic than Normal-NC memory type in terms of faults containability in that e.g. aborts triggered on DEVICE_nGnRE loads cannot be made, architecturally, synchronous (i.e. that would imply that the processor should issue at most 1 load transaction at a time - it cannot pipeline them - otherwise the synchronous abort semantics would break the no-speculation attribute attached to DEVICE_XXX memory).
This means that regardless of the combined stage1+stage2 mappings a platform is safe if and only if device transactions cannot trigger uncontained failures and that in turn relies on platform capabilities and the device type being assigned (i.e. PCIe AER/DPC error containment and RAS architecture[3]); therefore the default KVM device stage 2 memory attributes play no role in making device assignment safer for a given platform (if the platform design adheres to design guidelines outlined in [3]) and therefore can be relaxed.
For all these reasons, relax the KVM stage 2 device memory attributes from DEVICE_nGnRE to Normal-NC.
The NormalNC was chosen over a different Normal memory type default at stage-2 (e.g. Normal Write-through) to avoid cache allocation/snooping.
Relaxing S2 KVM device MMIO mappings to Normal-NC is not expected to trigger any issue on guest device reclaim use cases either (i.e. device MMIO unmap followed by a device reset) at least for PCIe devices, in that in PCIe a device reset is architected and carried out through PCI config space transactions that are naturally ordered with respect to MMIO transactions according to the PCI ordering rules.
Having Normal-NC S2 default puts guests in control (thanks to stage1+stage2 combined memory attributes rules [1]) of device MMIO regions memory mappings, according to the rules described in [1] and summarized here ([(S1) - stage1], [(S2) - stage 2]):
  S1           | S2        | Result
  NORMAL-WB    | NORMAL-NC | NORMAL-NC
  NORMAL-WT    | NORMAL-NC | NORMAL-NC
  NORMAL-NC    | NORMAL-NC | NORMAL-NC
  DEVICE<attr> | NORMAL-NC | DEVICE<attr>
It is worth noting that currently, to map devices MMIO space to user space in a device pass-through use case the VFIO framework applies memory attributes derived from pgprot_noncached() settings applied to VMAs, which result in device-nGnRnE memory attributes for the stage-1 VMM mappings.
This means that a userspace mapping for device MMIO space carried out with the current VFIO framework and a guest OS mapping for the same MMIO space may result in a mismatched alias as described in [2].
Defaulting KVM device stage-2 mappings to Normal-NC attributes does not change anything in this respect, in that the mismatched aliases would only affect (refer to [2] for a detailed explanation) ordering between the userspace and GuestOS mappings resulting stream of transactions (i.e. it does not cause loss of property for either stream of transactions on its own), which is harmless given that the userspace and GuestOS access to the device is carried out through independent transactions streams.
A Normal-NC flag is not present today. So add a new kvm_pgtable_prot (KVM_PGTABLE_PROT_NORMAL_NC) flag for it, along with its corresponding PTE value 0x5 (0b101) determined from [1].
Lastly, adapt the stage2 PTE property setter function (stage2_set_prot_attr) to handle the NormalNC attribute.
The entire discussion leading to this patch series may be followed through the following links. Link: https://lore.kernel.org/all/20230907181459.18145-3-ankita@nvidia.com Link: https://lore.kernel.org/r/20231205033015.10044-1-ankita@nvidia.com
[1] section D8.5.5 - DDI0487J_a_a-profile_architecture_reference_manual.pdf [2] section B2.8 - DDI0487J_a_a-profile_architecture_reference_manual.pdf [3] sections 1.7.7.3/1.8.5.2/appendix C - DEN0029H_SBSA_7.1.pdf
Suggested-by: Jason Gunthorpe <jgg@nvidia.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Will Deacon <will@kernel.org> Reviewed-by: Marc Zyngier <maz@kernel.org> Signed-off-by: Ankit Agrawal <ankita@nvidia.com> Link: https://lore.kernel.org/r/20240224150546.368-2-ankita@nvidia.com Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
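A rough sketch of the attribute selection described above; the macro and helper names are ours (not the kernel's stage2_set_prot_attr()), while the values follow the stage-2 MemAttr encodings referenced in the message:

  #include <stdint.h>

  /* Stage-2 MemAttr[3:0] encodings (FEAT_S2FWB not considered). */
  #define S2_MEMATTR_DEVICE_nGnRE  0x1   /* 0b0001 */
  #define S2_MEMATTR_NORMAL_NC     0x5   /* 0b0101, as noted above */

  enum io_prot { IO_DEVICE, IO_NORMAL_NC };

  /* Pick the stage-2 memory attribute for an MMIO mapping: Normal-NC
   * when the new flag is requested, DEVICE_nGnRE otherwise. */
  static uint64_t s2_mmio_memattr(enum io_prot prot)
  {
          return prot == IO_NORMAL_NC ? S2_MEMATTR_NORMAL_NC
                                      : S2_MEMATTR_DEVICE_nGnRE;
  }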
c60d847b | 12-Feb-2024 | Will Deacon <will@kernel.org>
KVM: arm64: Fix double-free following kvm_pgtable_stage2_free_unlinked()
kvm_pgtable_stage2_free_unlinked() does the final put_page() on the root page of the sub-tree before returning, so remove the additional put_page() invocations in the callers.
Cc: Ricardo Koller <ricarkol@google.com> Fixes: f6a27d6dc51b2 ("KVM: arm64: Drop last page ref in kvm_pgtable_stage2_free_removed()") Signed-off-by: Will Deacon <will@kernel.org> Reviewed-by: Oliver Upton <oliver.upton@linux.dev> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20240212193052.27765-1-will@kernel.org
0abc1b11 | 27-Nov-2023 | Ryan Roberts <ryan.roberts@arm.com>
KVM: arm64: Support up to 5 levels of translation in kvm_pgtable
FEAT_LPA2 increases the maximum levels of translation from 4 to 5 for the 4KB page case, when IA is >48 bits. While we can still use 4 levels for stage2 translation in this case (due to stage2 allowing concatenated page tables for first level lookup), the same kvm_pgtable library is used for the hyp stage1 page tables and stage1 does not support concatenation.
Therefore, modify the library to support up to 5 levels. Previous patches already laid the groundwork for this by refactoring code to work in terms of KVM_PGTABLE_FIRST_LEVEL and KVM_PGTABLE_LAST_LEVEL. So we just need to change these macros.
The hardware sometimes encodes the new level differently from the others: One such place is when reading the level from the FSC field in the ESR_EL2 register. We never expect to see the lowest level (-1) here since the stage 2 page tables always use concatenated tables for first level lookup and therefore only use 4 levels of lookup. So we get away with just adding a comment to explain why we are not being careful about decoding level -1.
For stage2, VTCR_EL2.SL2 is introduced to encode the new start level. However, since we always use concatenated page tables for first level lookup at stage2 (and therefore will never need the new extra level), we never touch this new field.
Reviewed-by: Oliver Upton <oliver.upton@linux.dev> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20231127111737.1897081-10-ryan.roberts@arm.com
419edf48 | 27-Nov-2023 | Ryan Roberts <ryan.roberts@arm.com>
KVM: arm64: Convert translation level parameter to s8
With the introduction of FEAT_LPA2, the Arm ARM adds a new level of translation, level -1, so levels can now be in the range [-1;3]. 3 is always the last level and the first level is determined based on the number of VA bits in use.
Convert level variables to use a signed type in preparation for supporting this new level -1.
Since the last level is always anchored at 3, and the first level varies to suit the number of VA/IPA bits, take the opportunity to replace KVM_PGTABLE_MAX_LEVELS with the 2 macros KVM_PGTABLE_FIRST_LEVEL and KVM_PGTABLE_LAST_LEVEL. This removes the assumption from the code that levels run from 0 to KVM_PGTABLE_MAX_LEVELS - 1, which will soon no longer be true.
Reviewed-by: Oliver Upton <oliver.upton@linux.dev> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20231127111737.1897081-9-ryan.roberts@arm.com
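A minimal illustration of the signed level range; the macro names mirror, but are not, the kernel's KVM_PGTABLE_FIRST_LEVEL/KVM_PGTABLE_LAST_LEVEL:

  #include <stdbool.h>
  #include <stdint.h>

  /* With FEAT_LPA2 the walk can start at level -1, so levels span [-1;3]
   * and need a signed type (s8 in the kernel, int8_t here). */
  #define PGT_FIRST_LEVEL  (-1)
  #define PGT_LAST_LEVEL   3

  static bool level_is_valid(int8_t level)
  {
          return level >= PGT_FIRST_LEVEL && level <= PGT_LAST_LEVEL;
  }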
bd412e2a | 27-Nov-2023 | Ryan Roberts <ryan.roberts@arm.com>
KVM: arm64: Use LPA2 page-tables for stage2 and hyp stage1
Implement a simple policy whereby if the HW supports FEAT_LPA2 for the page size we are using, always use LPA2-style page-tables for stage 2 and hyp stage 1 (assuming an nvhe hyp), regardless of the VMM-requested IPA size or HW-implemented PA size. When in use we can now support up to 52-bit IPA and PA sizes.
We use the previously created cpu feature to track whether LPA2 is supported for deciding whether to use the LPA2 or classic pte format.
Note that FEAT_LPA2 brings support for bigger block mappings (512GB with 4KB, 64GB with 16KB). We explicitly don't enable these in the library because stage2_apply_range() works on batch sizes of the largest used block mapping, and increasing the size of the batch would lead to soft lockups. See commit 5994bc9e05c2 ("KVM: arm64: Limit stage2_apply_range() batch size to largest block").
With the addition of LPA2 support in the hypervisor, the PA size supported by the HW must be capped with a runtime decision, rather than simply using a compile-time decision based on PA_BITS. For example, on a system that advertises 52 bit PA but does not support FEAT_LPA2, a 4KB or 16KB kernel compiled with LPA2 support must still limit the PA size to 48 bits.
Therefore, move the insertion of the PS field into TCR_EL2 out of __kvm_hyp_init assembly code and instead do it in cpu_prepare_hyp_mode() where the rest of TCR_EL2 is prepared. This allows us to figure out PS with kvm_get_parange(), which has the appropriate logic to ensure the above requirement. (and the PS field of VTCR_EL2 is already populated this way).
Reviewed-by: Oliver Upton <oliver.upton@linux.dev> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20231127111737.1897081-8-ryan.roberts@arm.com
be097997 | 22-Sep-2023 | Oliver Upton <oliver.upton@linux.dev>
KVM: arm64: Always invalidate TLB for stage-2 permission faults
It is possible for multiple vCPUs to fault on the same IPA and attempt to resolve the fault. One of the page table walks will actually update the PTE and the rest will return -EAGAIN per our race detection scheme. KVM elides the TLB invalidation on the racing threads as the return value is nonzero.
Before commit a12ab1378a88 ("KVM: arm64: Use local TLBI on permission relaxation") KVM always used broadcast TLB invalidations when handling permission faults, which had the convenient property of making the stage-2 updates visible to all CPUs in the system. However now we do a local invalidation, and TLBI elision leads to the vCPU thread faulting again on the stale entry. Remember that the architecture permits the TLB to cache translations that precipitate a permission fault.
Invalidate the TLB entry responsible for the permission fault if the stage-2 descriptor has been relaxed, regardless of which thread actually did the job.
Acked-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20230922223229.1608155-1-oliver.upton@linux.dev Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
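A sketch of the resulting policy with a stand-in invalidation helper (the kernel issues its own VMID/IPA invalidation here):

  #include <stdbool.h>
  #include <stdint.h>

  static void local_tlbi_ipa(uint64_t ipa) { (void)ipa; } /* stand-in */

  /* After a permission-fault walk: if the descriptor has been relaxed,
   * whether by this thread or by a racing one that made us return
   * -EAGAIN, invalidate the TLB entry that precipitated the fault. */
  static void finish_perm_fault(bool pte_relaxed, uint64_t ipa)
  {
          if (pte_relaxed)
                  local_tlbi_ipa(ipa);
  }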
fe49fd94 | 12-Oct-2023 | Marc Zyngier <maz@kernel.org>
KVM: arm64: Move VTCR_EL2 into struct s2_mmu
We currently have a global VTCR_EL2 value for each guest, even if the guest uses NV. This implies that the guest's own S2 must fit in the host's. This is odd, for multiple reasons:
- the PARange values and the number of IPA bits don't necessarily match: you can have 33 bits of IPA space, and yet you can only describe 32 or 36 bits of PARange
- When userspace sets the IPA space, it creates a contract with the kernel saying "this is the IPA space I'm prepared to handle". At no point does it constrain the guest's own IPA space as long as the guest doesn't try to use a [I]PA outside of the IPA space set by userspace
- We don't even try to hide the value of ID_AA64MMFR0_EL1.PARange.
And then there is the consequence of the above: if a guest tries to create a S2 that has for input address something that is larger than the IPA space defined by the host, we inject a fatal exception.
This is no good. For all intent and purposes, a guest should be able to have the S2 it really wants, as long as the *output* address of that S2 isn't outside of the IPA space.
For that, we need to have a per-s2_mmu VTCR_EL2 setting, which allows us to represent the full PARange. Move the vtcr field into the s2_mmu structure, which has no impact whatsoever, except for NV.
Note that once we are able to override ID_AA64MMFR0_EL1.PARange from userspace, we'll also be able to restrict the size of the shadow S2 that NV uses.
Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20231012205108.3937270-1-maz@kernel.org Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
bbbb6577 | 16-Oct-2023 | Mark Rutland <mark.rutland@arm.com>
arm64: Avoid cpus_have_const_cap() for ARM64_HAS_BTI
In system_supports_bti() we use cpus_have_const_cap() to check for ARM64_HAS_BTI, but this is not necessary and alternative_has_cap_*() or cpus_have_final_*cap() would be preferable.
For historical reasons, cpus_have_const_cap() is more complicated than it needs to be. Before cpucaps are finalized, it will perform a bitmap test of the system_cpucaps bitmap, and once cpucaps are finalized it will use an alternative branch. This used to be necessary to handle some race conditions in the window between cpucap detection and the subsequent patching of alternatives and static branches, where different branches could be out-of-sync with one another (or w.r.t. alternative sequences). Now that we use alternative branches instead of static branches, these are all patched atomically w.r.t. one another, and there are only a handful of cases that need special care in the window between cpucap detection and alternative patching.
Due to the above, it would be nice to remove cpus_have_const_cap(), and migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(), or cpus_have_cap() depending on their requirements. This will remove redundant instructions and improve code generation, and will make it easier to determine how each callsite will behave before, during, and after alternative patching.
When CONFIG_ARM64_BTI_KERNEL=y, the ARM64_HAS_BTI cpucap is a strict boot cpu feature which is detected and patched early on the boot cpu. All uses guarded by CONFIG_ARM64_BTI_KERNEL happen after the boot CPU has detected ARM64_HAS_BTI and patched boot alternatives, and hence can safely use alternative_has_cap_*() or cpus_have_final_boot_cap().
Regardless of CONFIG_ARM64_BTI_KERNEL, all other uses of ARM64_HAS_BTI happen after system capabilities have been finalized and alternatives have been patched. Hence these can safely use alternative_has_cap_*() or cpus_have_final_cap().
This patch splits system_supports_bti() into system_supports_bti() and system_supports_bti_kernel(), with the former handling where the cpucap affects userspace functionality, and the latter handling where the cpucap affects kernel functionality. The use of cpus_have_const_cap() is replaced by cpus_have_final_cap() in system_supports_bti(), and cpus_have_final_boot_cap() in system_supports_bti_kernel(). This will avoid generating code to test the system_cpucaps bitmap and should be better for all subsequent calls at runtime. The use of cpus_have_final_cap() and cpus_have_final_boot_cap() will make it easier to spot if code is changed such that these run before the ARM64_HAS_BTI cpucap is guaranteed to have been finalized.
Signed-off-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Mark Brown <broonie@kernel.org> Cc: Ard Biesheuvel <ardb@kernel.org> Cc: Suzuki K Poulose <suzuki.poulose@arm.com> Cc: Will Deacon <will@kernel.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
d8569fba | 16-Oct-2023 | Mark Rutland <mark.rutland@arm.com>
arm64: kvm: Use cpus_have_final_cap() explicitly
Much of the arm64 KVM code uses cpus_have_const_cap() to check for cpucaps, but this is unnecessary and it would be preferable to use cpus_have_final_cap().
For historical reasons, cpus_have_const_cap() is more complicated than it needs to be. Before cpucaps are finalized, it will perform a bitmap test of the system_cpucaps bitmap, and once cpucaps are finalized it will use an alternative branch. This used to be necessary to handle some race conditions in the window between cpucap detection and the subsequent patching of alternatives and static branches, where different branches could be out-of-sync with one another (or w.r.t. alternative sequences). Now that we use alternative branches instead of static branches, these are all patched atomically w.r.t. one another, and there are only a handful of cases that need special care in the window between cpucap detection and alternative patching.
Due to the above, it would be nice to remove cpus_have_const_cap(), and migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(), or cpus_have_cap() depending on their requirements. This will remove redundant instructions and improve code generation, and will make it easier to determine how each callsite will behave before, during, and after alternative patching.
KVM is initialized after cpucaps have been finalized and alternatives have been patched. Since commit:
d86de40decaa14e6 ("arm64: cpufeature: upgrade hyp caps to final")
... use of cpus_have_const_cap() in hyp code is automatically converted to use cpus_have_final_cap():
| static __always_inline bool cpus_have_const_cap(int num)
| {
|         if (is_hyp_code())
|                 return cpus_have_final_cap(num);
|         else if (system_capabilities_finalized())
|                 return __cpus_have_const_cap(num);
|         else
|                 return cpus_have_cap(num);
| }
Thus, converting hyp code to use cpus_have_final_cap() directly will not result in any functional change.
Non-hyp KVM code is also not executed until cpucaps have been finalized, and it would be preferable to extend the same treatment to this code and use cpus_have_final_cap() directly.
This patch converts instances of cpus_have_const_cap() in KVM-only code over to cpus_have_final_cap(). As all of this code runs after cpucaps have been finalized, there should be no functional change as a result of this patch, but the redundant instructions generated by cpus_have_const_cap() will be removed from the non-hyp KVM code.
Signed-off-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Marc Zyngier <maz@kernel.org> Cc: Oliver Upton <oliver.upton@linux.dev> Cc: Suzuki K Poulose <suzuki.poulose@arm.com> Cc: Will Deacon <will@kernel.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
7657ea92 | 11-Aug-2023 | Raghavendra Rao Ananta <rananta@google.com>
KVM: arm64: Use TLBI range-based instructions for unmap
The current implementation of the stage-2 unmap walker traverses the given range and, as part of break-before-make, performs a TLB invalidation with a DSB for every PTE. Repeating this combination across a large range can cause a performance bottleneck on some systems.
Hence, if the system supports FEAT_TLBIRANGE, defer the TLB invalidations until the entire walk is finished, and then use range-based instructions to invalidate the TLBs in one go. Condition deferred TLB invalidation on the system supporting FWB, as the optimization is entirely pointless when the unmap walker needs to perform CMOs.
Rename stage2_put_pte() to stage2_unmap_put_pte() as the function now serves the stage-2 unmap walker specifically, rather than acting generic.
Signed-off-by: Raghavendra Rao Ananta <rananta@google.com> Reviewed-by: Shaoqin Huang <shahuang@redhat.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20230811045127.3308641-15-rananta@google.com
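The deferral condition reduces to a simple predicate; a sketch under the stated assumptions (the flag parameters are ours, standing in for the kernel's feature checks):

  #include <stdbool.h>

  /* Defer per-PTE TLB invalidation during unmap only when a single
   * range-based TLBI can be issued at the end (FEAT_TLBIRANGE) and FWB
   * guarantees no CMOs are needed during the walk. */
  static bool unmap_defer_tlb_flush(bool has_tlbirange, bool has_fwb)
  {
          return has_tlbirange && has_fwb;
  }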
defc8cc7 | 11-Aug-2023 | Raghavendra Rao Ananta <rananta@google.com>
KVM: arm64: Invalidate the table entries upon a range
Currently, during operations such as a hugepage collapse, KVM flushes the entire VM's context using the 'vmalls12e1is' TLBI operation. Specifically, if the VM is faulting on many hugepages (say after dirty-logging), this creates a performance penalty for the guest whose pages have already been faulted earlier, as they would have to refill their TLBs again.
Instead, leverage kvm_tlb_flush_vmid_range() for table entries. If the system supports it, only the required range will be flushed; otherwise, it falls back to the previous mechanism.
Signed-off-by: Raghavendra Rao Ananta <rananta@google.com> Reviewed-by: Gavin Shan <gshan@redhat.com> Reviewed-by: Shaoqin Huang <shahuang@redhat.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20230811045127.3308641-14-rananta@google.com
117940aa | 11-Aug-2023 | Raghavendra Rao Ananta <rananta@google.com>
KVM: arm64: Define kvm_tlb_flush_vmid_range()
Implement the helper kvm_tlb_flush_vmid_range() that acts as a wrapper for range-based TLB invalidations. For the given VMID, use the range-based TLBI instructions to do the job, or fall back to invalidating all the TLB entries.
Signed-off-by: Raghavendra Rao Ananta <rananta@google.com> Reviewed-by: Gavin Shan <gshan@redhat.com> Reviewed-by: Shaoqin Huang <shahuang@redhat.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20230811045127.3308641-11-rananta@google.com
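A sketch of the wrapper's shape with stand-in helpers (not the kernel's TLBI implementations):

  #include <stdbool.h>
  #include <stdint.h>

  static bool have_tlbi_range;                                /* FEAT_TLBIRANGE? */
  static void tlbi_vmid_all(void) { }                         /* stand-in */
  static void tlbi_vmid_range(uint64_t a, uint64_t s) { (void)a; (void)s; }

  /* Invalidate [addr, addr + size) for the current VMID with a range
   * TLBI when supported, otherwise drop every entry for the VMID. */
  static void flush_vmid_range(uint64_t addr, uint64_t size)
  {
          if (have_tlbi_range)
                  tlbi_vmid_range(addr, size);
          else
                  tlbi_vmid_all();
  }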
df6556ad | 27-Jun-2023 | Oliver Upton <oliver.upton@linux.dev>
KVM: arm64: Correctly handle page aging notifiers for unaligned memslot
Userspace is allowed to select any PAGE_SIZE aligned hva to back guest memory. This is even the case with hugepages, although it is a rather suboptimal configuration as PTE level mappings are used at stage-2.
The arm64 page aging handlers have an assumption that the specified range is exactly one page/block of memory, which in the aforementioned case is not necessarily true. All together this leads to the WARN() in kvm_age_gfn() firing.
However, the WARN is only part of the issue as the table walkers visit at most a single leaf PTE. For hugepage-backed memory in a memslot that isn't hugepage-aligned, page aging entirely misses accesses to the hugepage beyond the first page in the memslot.
Add a new walker dedicated to handling page aging MMU notifiers capable of walking a range of PTEs. Convert kvm(_test)_age_gfn() over to the new walker and drop the WARN that caught the issue in the first place. The implementation of this walker was inspired by the test_clear_young() implementation by Yu Zhao [*], but repurposed to address a bug in the existing aging implementation.
Cc: stable@vger.kernel.org # v5.15 Fixes: 056aad67f836 ("kvm: arm/arm64: Rework gpa callback handlers") Link: https://lore.kernel.org/kvmarm/20230526234435.662652-6-yuzhao@google.com/ Co-developed-by: Yu Zhao <yuzhao@google.com> Signed-off-by: Yu Zhao <yuzhao@google.com> Reported-by: Reiji Watanabe <reijiw@google.com> Reviewed-by: Marc Zyngier <maz@kernel.org> Reviewed-by: Shaoqin Huang <shahuang@redhat.com> Link: https://lore.kernel.org/r/20230627235405.4069823-1-oliver.upton@linux.dev Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
6df696cd | 09-Jun-2023 | Oliver Upton <oliver.upton@linux.dev>
arm64: errata: Mitigate Ampere1 erratum AC03_CPU_38 at stage-2
AmpereOne has an erratum in its implementation of FEAT_HAFDBS that required disabling the feature on the design. This was done by reporting the feature as not implemented in the ID register, although the corresponding control bits were not actually RES0. This does not align well with the requirements of the architecture, which mandates these bits be RES0 if HAFDBS isn't implemented.
The kernel's use of stage-1 is unaffected, as the HA and HD bits are only set if HAFDBS is detected in the ID register. KVM, on the other hand, relies on the RES0 behavior at stage-2 to use the same value for VTCR_EL2 on any cpu in the system. Mitigate the non-RES0 behavior by leaving VTCR_EL2.HA clear on affected systems.
Cc: stable@vger.kernel.org Cc: D Scott Phillips <scott@os.amperecomputing.com> Cc: Darren Hart <darren@os.amperecomputing.com> Acked-by: D Scott Phillips <scott@os.amperecomputing.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20230609220104.1836988-2-oliver.upton@linux.dev Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
6537565f | 09-Jun-2023 | Marc Zyngier <maz@kernel.org>
KVM: arm64: Adjust EL2 stage-1 leaf AP bits when ARM64_KVM_HVHE is set
The EL2 stage-1 page-table format is subtly (and annoyingly) different when HCR_EL2.E2H is set.
Take the ARM64_KVM_HVHE configuration into account when setting the AP bits.
Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20230609162200.2024064-13-maz@kernel.org Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
f6a27d6d | 30-May-2023 | Oliver Upton <oliver.upton@linux.dev>
KVM: arm64: Drop last page ref in kvm_pgtable_stage2_free_removed()
The reference count on page table allocations is increased for every 'counted' PTE (valid or donated) in the table in addition to the initial reference from ->zalloc_page(). kvm_pgtable_stage2_free_removed() fails to drop the last reference on the root of the table walk, meaning we leak memory.
Fix it by dropping the last reference after the free walker returns, at which point all references for 'counted' PTEs have been released.
Cc: stable@vger.kernel.org Fixes: 5c359cca1faf ("KVM: arm64: Tear down unlinked stage-2 subtree after break-before-make") Reported-by: Yu Zhao <yuzhao@google.com> Signed-off-by: Oliver Upton <oliver.upton@linux.dev> Tested-by: Yu Zhao <yuzhao@google.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20230530193213.1663411-1-oliver.upton@linux.dev
b53d4a27 | 30-May-2023 | Mostafa Saleh <smostafa@google.com>
KVM: arm64: Use BTI for nvhe
CONFIG_ARM64_BTI_KERNEL compiles the kernel to support ARMv8.5-BTI. However, the nvhe code doesn't make use of it as it doesn't map any pages with the Guarded Page (GP) bit.
The kvm pgtable code is modified to map executable pages with the GP bit if BTI is enabled for the kernel.
At hyp init, SCTLR_EL2.BT is set to 1 to match EL1 configuration (SCTLR_EL1.BT1) set in bti_enable().
One difference between kernel and nvhe code is that the kernel maps .text with GP while nvhe maps all the executable pages; this means the nvhe code needs to deal with special initialization code coming from other executable sections (.idmap.text). For this we need to add a bti instruction at the beginning of __kvm_handle_stub_hvc, as it can be called by __host_hvc through a branch instruction (br) and, unlike SYM_FUNC_START, SYM_CODE_START doesn't add a bti instruction at the beginning, and it can't be modified to add one as it is used with vector tables. Another solution, which is more intrusive, is to convert __kvm_handle_stub_hvc to a function and inject "bti jc" instead of "bti c" in SYM_FUNC_START.
Signed-off-by: Mostafa Saleh <smostafa@google.com> Link: https://lore.kernel.org/r/20230530150845.2856828-1-smostafa@google.com Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
a9f0e3d5 | 22-May-2023 | Fuad Tabba <tabba@google.com>
KVM: arm64: Reload PTE after invoking walker callback on preorder traversal
The preorder callback on the kvm_pgtable_stage2_map() path can replace a table with a block, then recursively free the detached table. The higher-level walking logic stashes the old page table entry and then walks the freed table, invoking the leaf callback and potentially freeing pgtable pages prematurely.
In normal operation, the call to tear down the detached stage-2 is indirected and uses an RCU callback to trigger the freeing. RCU is not available to pKVM, which is where this bug is triggered.
Change the behavior of the walker to reload the page table entry after invoking the walker callback on preorder traversal, as it does for leaf entries.
Tested on Pixel 6.
Fixes: 5c359cca1faf ("KVM: arm64: Tear down unlinked stage-2 subtree after break-before-make") Suggested-by: Oliver Upton <oliver.upton@linux.dev> Signed-off-by: Fuad Tabba <tabba@google.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20230522103258.402272-1-tabba@google.com
a12ab137 | 26-Apr-2023 | Marc Zyngier <maz@kernel.org>
KVM: arm64: Use local TLBI on permission relaxation
Broadcast TLB invalidations (TLBIs) targeting the Inner Shareable Domain are usually less performant than their non-shareable variant. In particular, we observed some implementations that take milliseconds to complete parallel broadcasted TLBIs.
It's safe to use non-shareable TLBIs when relaxing permissions on a PTE in the KVM case. According to the ARM ARM (0487I.a) section D8.13.1 "Using break-before-make when updating translation table entries", permission relaxation does not need break-before-make. Specifically, R_WHZWS states that these are the only changes that require a break-before-make sequence: changes of memory type (Shareability or Cacheability), address changes, or changing the block size.
Signed-off-by: Marc Zyngier <maz@kernel.org> Signed-off-by: Ricardo Koller <ricarkol@google.com> Reviewed-by: Gavin Shan <gshan@redhat.com> Link: https://lore.kernel.org/r/20230426172330.1439644-13-ricarkol@google.com Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
8f5a3eb7 | 26-Apr-2023 | Ricardo Koller <ricarkol@google.com>
KVM: arm64: Add kvm_pgtable_stage2_split()
Add a new stage2 function, kvm_pgtable_stage2_split(), for splitting a range of huge pages. This will be used for eager-splitting huge pages into PAGE_SIZE pages. The goal is to avoid having to split huge pages on write-protection faults, and instead use this function to do it ahead of time for large ranges (e.g., all guest memory in 1G chunks at a time).
Signed-off-by: Ricardo Koller <ricarkol@google.com> Reviewed-by: Shaoqin Huang <shahuang@redhat.com> Reviewed-by: Gavin Shan <gshan@redhat.com> Link: https://lore.kernel.org/r/20230426172330.1439644-7-ricarkol@google.com Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
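A sketch of the intended usage pattern; the split call below is a hypothetical stand-in (the real API also takes the page-table structure and a cache of pre-allocated table pages):

  #include <stdint.h>

  #define SPLIT_CHUNK  ((uint64_t)1 << 30)   /* 1GiB of IPA per call */

  static int split_range(uint64_t ipa, uint64_t len) /* stand-in */
  {
          (void)ipa; (void)len;
          return 0;
  }

  /* Eagerly split huge mappings over [ipa, end) in chunks, so later
   * write-protection faults never have to split them lazily. */
  static int eager_split(uint64_t ipa, uint64_t end)
  {
          while (ipa < end) {
                  uint64_t len = end - ipa < SPLIT_CHUNK ? end - ipa : SPLIT_CHUNK;
                  int ret = split_range(ipa, len);
                  if (ret)
                          return ret;
                  ipa += len;
          }
          return 0;
  }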