#
9221222c |
| 09-Aug-2024 |
Juergen Gross <jgross@suse.com> |
xen: allow mapping ACPI data using a different physical address
When running as a Xen PV dom0 the system needs to map ACPI data of the host using host physical addresses, while those addresses can conflict with the guest physical addresses of the loaded Linux kernel. The same problem can arise when a PV guest is configured to use the host memory map.
This conflict can be solved by mapping the ACPI data to a different guest physical address, but mapping the data via acpi_os_ioremap() must still be possible using the host physical address, as this address might be generated by AML when referencing some of the ACPI data.
When configured to support running as a Xen PV domain, provide an implementation of acpi_os_ioremap() which is aware that the above mentioned translation of a host physical address to the guest physical address may be needed.
This modification requires #including linux/acpi.h in some sources which previously included asm/acpi.h directly.
Signed-off-by: Juergen Gross <jgross@suse.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Signed-off-by: Juergen Gross <jgross@suse.com>
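[ Editor's sketch, not part of the commit: the shape of an acpi_os_ioremap() that performs the described translation. The helper name xen_pv_acpi_translate_pa() is hypothetical; it would return the guest physical address the ACPI data was relocated to, or the input unchanged when no fixup is active. ]

    #include <linux/acpi.h>
    #include <linux/io.h>

    /* Hypothetical: host physical -> guest physical, identity when no fixup */
    phys_addr_t xen_pv_acpi_translate_pa(phys_addr_t hpa);

    void __iomem *acpi_os_ioremap(acpi_physical_address phys, acpi_size size)
    {
            return ioremap_cache(xen_pv_acpi_translate_pa(phys), size);
    }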
|
#
a97756cb |
| 09-Jul-2024 |
Xin Li (Intel) <xin@zytor.com> |
x86/fred: Enable FRED right after init_mem_mapping()
On 64-bit init_mem_mapping() relies on the minimal page fault handler provided by the early IDT mechanism. The real page fault handler is installed right afterwards into the IDT.
This is problematic on CPUs which have X86_FEATURE_FRED set because the real page fault handler retrieves the faulting address from the FRED exception stack frame and not from CR2, but that obviously does not work when FRED is not yet enabled in the CPU.
To prevent this, enable FRED right after init_mem_mapping() without interrupt stacks. Those are enabled later in trap_init() after the CPU entry area is set up.
[ tglx: Encapsulate the FRED details ]
Fixes: 14619d912b65 ("x86/fred: FRED entry/exit and dispatch code") Reported-by: Hou Wenlong <houwenlong.hwl@antgroup.com> Suggested-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Xin Li (Intel) <xin@zytor.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/all/20240709154048.3543361-4-xin@zytor.com
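[ Editor's sketch, not part of the commit: the reordering in simplified form. cpu_init_fred_exceptions() is the kernel's FRED setup routine; the exact invocation site and the encapsulation added by tglx may differ. ]

    /* setup_arch(), simplified */
    init_mem_mapping();

    /*
     * FRED must be on before the real #PF handler runs, because that
     * handler reads the fault address from the FRED stack frame, not
     * from CR2. Interrupt stacks are wired up later in trap_init(),
     * once the CPU entry area exists.
     */
    if (cpu_feature_enabled(X86_FEATURE_FRED))
            cpu_init_fred_exceptions();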
|
#
4db64279 |
| 24-Apr-2024 |
Tony Luck <tony.luck@intel.com> |
x86/cpu: Switch to new Intel CPU model defines
New CPU #defines encode vendor and family as well as model.
[ dhansen: vertically align macro and remove stray subject / ]
Signed-off-by: Tony Luck <tony.luck@intel.com> Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Link: https://lore.kernel.org/all/20240424181516.41887-1-tony.luck%40intel.com
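[ Editor's illustration of the conversion; setup_quirk() is a made-up consumer, the macros come from <asm/intel-family.h>. ]

    /* Old style: vendor, family and model checked separately */
    if (c->x86_vendor == X86_VENDOR_INTEL && c->x86 == 6 &&
        c->x86_model == INTEL_FAM6_SKYLAKE)
            setup_quirk(c);

    /* New style: x86_vfm encodes vendor, family and model in one value */
    if (c->x86_vfm == INTEL_SKYLAKE)
            setup_quirk(c);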
|
#
dbbe13a6 |
| 10-Apr-2024 |
Ingo Molnar <mingo@kernel.org> |
x86/cpu: Improve readability of per-CPU cpumask initialization code
In smp_prepare_cpus_common() and x2apic_prepare_cpu():
- use 'cpu' instead of 'i'
- use 'node' instead of 'n'
- use vertical alignment to improve readability
- better structure basic blocks
- reduce col80 checkpatch damage
Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: linux-kernel@vger.kernel.org
|
#
e0a9ac19 |
| 10-Apr-2024 |
Li RongQing <lirongqing@baidu.com> |
x86/cpu: Take NUMA node into account when allocating per-CPU cpumasks
per-CPU cpumasks are predominantly accessed from their own local CPUs, so allocate them node-local to improve performance.
[ mingo: Rewrote the changelog. ]
Signed-off-by: Li RongQing <lirongqing@baidu.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20240410030114.6201-1-lirongqing@baidu.com
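[ Editor's sketch of the node-local allocation; close to, but not necessarily identical to, the actual hunk. ]

    for_each_possible_cpu(cpu) {
            int node = cpu_to_node(cpu);

            /* Allocate each CPU's masks on that CPU's own NUMA node */
            zalloc_cpumask_var_node(&per_cpu(cpu_sibling_map, cpu),
                                    GFP_KERNEL, node);
            zalloc_cpumask_var_node(&per_cpu(cpu_core_map, cpu),
                                    GFP_KERNEL, node);
    }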
|
#
c90399fb |
| 22-Mar-2024 |
Thomas Gleixner <tglx@linutronix.de> |
x86/cpu: Ensure that CPU info updates are propagated on UP
The boot sequence evaluates CPUID information twice:
1) During early boot
2) When finalizing the early setup right before mitigations are selected and alternatives are patched.
In both cases the evaluation is stored in boot_cpu_data, but on UP the copying of boot_cpu_data to the per CPU info of the boot CPU happens between #1 and #2. So any update which happens in #2 is never propagated to the per CPU info instance.
Consolidate the whole logic and copy boot_cpu_data right before applying alternatives, as that's the point where boot_cpu_data is in its final state and not supposed to change anymore.
This also removes the voodoo mb() from smp_prepare_cpus_common() which had absolutely no purpose.
Fixes: 71eb4893cfaf ("x86/percpu: Cure per CPU madness on UP") Reported-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Tested-by: Guenter Roeck <linux@roeck-us.net> Link: https://lore.kernel.org/r/20240322185305.127642785@linutronix.de
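[ Editor's sketch, placement illustrative only: the consolidated copy sits right before alternatives are applied, when boot_cpu_data can no longer change. ]

    void __init arch_cpu_finalize_init(void)
    {
            identify_boot_cpu();
            /* ... */

            /* boot_cpu_data is final now; propagate it to the per CPU info */
            per_cpu(cpu_info, smp_processor_id()) = boot_cpu_data;

            alternative_instructions();
            /* ... */
    }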
|
#
71eb4893 |
| 04-Mar-2024 |
Thomas Gleixner <tglx@linutronix.de> |
x86/percpu: Cure per CPU madness on UP
On UP builds Sparse complains rightfully about accesses to cpu_info with per CPU accessors:
cacheinfo.c:282:30: sparse: warning: incorrect type in initializer (different address spaces)
cacheinfo.c:282:30: sparse:     expected void const [noderef] __percpu *__vpp_verify
cacheinfo.c:282:30: sparse:     got unsigned int *
The reason is that on UP builds cpu_info, which is a per CPU variable on SMP, is mapped to boot_cpu_data, which is a regular variable. There is a hideous accessor cpu_data() which tries to hide this, but it's not sufficient as some places require raw accessors and it generates worse code than the regular per CPU accessors.
Waste sizeof(struct cpuinfo_x86) memory on UP and provide the per CPU cpu_info unconditionally. This requires updating the CPU info on the boot CPU as SMP does. (Ab)use the weakly defined smp_prepare_boot_cpu() function and implement exactly that.
This allows using regular per CPU accessors unconditionally and paves the way to remove the cpu_data() hackery.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20240304005104.622511517@linutronix.de
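[ Editor's sketch of the (ab)use described above: x86 overrides the weak hook so UP updates the boot CPU's per CPU cpu_info exactly like SMP does. Names follow smpboot.c; the exact hook-up may differ. ]

    void __init smp_prepare_boot_cpu(void)
    {
            smp_store_boot_cpu_info();  /* copies boot_cpu_data into cpu_info */
    }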
|
#
71261072 |
| 04-Mar-2024 |
Thomas Gleixner <tglx@linutronix.de> |
smp: Consolidate smp_prepare_boot_cpu()
There is no point in having seven architectures implementing the same empty stub.
Provide a weak function in the init code and remove the stubs.
This also allows utilizing the function on UP, which is required to sanitize the per CPU handling on x86 UP.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20240304005104.567671691@linutronix.de
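[ Editor's note: per the message, the consolidation boils down to one weak empty function in generic init code, roughly: ]

    void __init __weak smp_prepare_boot_cpu(void)
    {
    }

Architectures that actually need boot CPU preparation (like x86 in the entry above) simply override it.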
|
#
89b0f15f |
| 13-Feb-2024 |
Thomas Gleixner <tglx@linutronix.de> |
x86/cpu/topology: Get rid of cpuinfo::x86_max_cores
Now that __num_cores_per_package and __num_threads_per_package are available, cpuinfo::x86_max_cores and the related math all over the place can be replaced with the ready-to-consume data.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Michael Kelley <mhklinux@outlook.com> Tested-by: Sohil Mehta <sohil.mehta@intel.com> Link: https://lore.kernel.org/r/20240213210253.176147806@linutronix.de
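[ Editor's illustration of a typical consumer conversion; the accessor exists in <asm/topology.h> after this series, the surrounding code is made up. ]

    unsigned int cores;

    /* Old: cached value in cpuinfo plus ad hoc math at the call sites */
    cores = c->x86_max_cores;

    /* New: ready-to-consume topology data */
    cores = topology_num_cores_per_package();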
|
#
8078f4d6 |
| 13-Feb-2024 |
Thomas Gleixner <tglx@linutronix.de> |
x86/cpu/topology: Rename smp_num_siblings
It's really a non-intuitive name. Rename it to __max_threads_per_core, which is obvious.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Michael Kelley <mhklinux@outlook.com> Tested-by: Sohil Mehta <sohil.mehta@intel.com> Link: https://lore.kernel.org/r/20240213210253.011307973@linutronix.de
|
#
380414be |
| 13-Feb-2024 |
Thomas Gleixner <tglx@linutronix.de> |
x86/cpu/topology: Use topology logical mapping mechanism
Replace the logical package and die management functionality and retrieve the logical IDs from the topology bitmaps.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Michael Kelley <mhklinux@outlook.com> Tested-by: Sohil Mehta <sohil.mehta@intel.com> Link: https://lore.kernel.org/r/20240213210252.901865302@linutronix.de
|
#
090610ba |
| 13-Feb-2024 |
Thomas Gleixner <tglx@linutronix.de> |
x86/cpu/topology: Use topology bitmaps for sizing
Now that all possible APIC IDs are tracked in the topology bitmaps, it's trivial to retrieve the real information from there.
This gets rid of the guesstimates for the maximal packages and dies per package as the actual numbers can be determined before a single AP has been brought up.
The number of SMT threads can now be determined correctly from the bitmaps in all situations. Up to now a system which has SMT disabled in the BIOS would still claim that it is SMT capable, because the lowest APIC ID bit is reserved for that and CPUID leaf 0xb/0x1f still enumerates the SMT domain accordingly. By calculating the bitmap weights of the SMT and the CORE domains and setting them in relation, the SMT-disabled-in-BIOS case is now correctly reported as not SMT capable.
It also handles the situation correctly when a hybrid system's boot CPU does not have SMT, as it takes the SMT capability of the APs fully into account.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Michael Kelley <mhklinux@outlook.com> Tested-by: Sohil Mehta <sohil.mehta@intel.com> Link: https://lore.kernel.org/r/20240213210252.681709880@linutronix.de
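[ Editor's sketch of the weight relation; apic_maps[] and the TOPO_*_DOMAIN indices mirror the topology code's domain bitmaps, but treat the exact names as illustrative. ]

    unsigned int smt_ids  = bitmap_weight(apic_maps[TOPO_SMT_DOMAIN].map,
                                          MAX_LOCAL_APIC);
    unsigned int core_ids = bitmap_weight(apic_maps[TOPO_CORE_DOMAIN].map,
                                          MAX_LOCAL_APIC);

    /* SMT disabled in BIOS: both domains track the same APIC IDs */
    bool smt_capable = smt_ids > core_ids;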
|
#
7c0edad3 |
| 13-Feb-2024 |
Thomas Gleixner <tglx@linutronix.de> |
x86/cpu/topology: Rework possible CPU management
Managing possible CPUs is an unreadable and incomprehensible maze. Aside from that it's backwards because it applies command line limits after registering all APICs.
Rewrite it so that it:
- Applies the command line limits upfront so that only the allowed number of APIC IDs can be registered.
- Applies possible late restrictions in an understandable way
- Uses simple min_t() calculations which are trivial to follow (sketched below).
- Provides a separate function for resetting to UP mode late in the bringup process.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Michael Kelley <mhklinux@outlook.com> Tested-by: Sohil Mehta <sohil.mehta@intel.com> Link: https://lore.kernel.org/r/20240213210252.290098853@linutronix.de
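[ Editor's sketch of the min_t() style limit application; the variable names are hypothetical. ]

    /* Upfront: command line limits cap how many APIC IDs may register */
    unsigned int allowed = min_t(unsigned int, cmdline_maxcpus,
                                 MAX_LOCAL_APIC);

    /* Late: possible later restrictions only ever shrink the result */
    nr_cpu_ids = min_t(unsigned int, registered_apic_ids, allowed);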
|
#
6055f6cf |
| 13-Feb-2024 |
Thomas Gleixner <tglx@linutronix.de> |
x86/smpboot: Make error message actually useful
"smpboot: native_kick_ap: bad cpu 33" is absolutely useless information.
Replace it with something meaningful which allows decoding the failure condition.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Michael Kelley <mhklinux@outlook.com> Tested-by: Sohil Mehta <sohil.mehta@intel.com> Link: https://lore.kernel.org/r/20240213210252.170806023@linutronix.de
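[ Editor's illustration of the before/after flavor; the exact wording of the new message may differ from the merged patch. ]

    /* Before: which condition failed? No way to tell. */
    pr_err("%s: bad cpu %d\n", __func__, cpu);

    /* After: name the condition that actually failed */
    pr_err("CPU %u has invalid APIC ID %x. Aborting bringup\n", cpu, apicid);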
|
#
58aa34ab |
| 13-Feb-2024 |
Thomas Gleixner <tglx@linutronix.de> |
x86/cpu/topology: Confine topology information
Now that all external fiddling with num_processors and disabled_cpus is gone, move the last user prefill_possible_map() into the topology code too and remove the global visibility of these variables.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Michael Kelley <mhklinux@outlook.com> Tested-by: Sohil Mehta <sohil.mehta@intel.com> Link: https://lore.kernel.org/r/20240213210251.994756960@linutronix.de
|
#
350b5e27 |
| 13-Feb-2024 |
Thomas Gleixner <tglx@linutronix.de> |
x86/mpparse: Remove the physid_t bitmap wrapper
physid_t is a wrapper around a bitmap. Just remove the onion layer and use the bitmap functionality directly.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Michael Kelley <mhklinux@outlook.com> Tested-by: Sohil Mehta <sohil.mehta@intel.com> Link: https://lore.kernel.org/r/20240212154639.994904510@linutronix.de
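[ Editor's sketch of the onion-layer removal: ]

    /* Before: wrapper type plus bespoke accessors */
    physid_mask_t phys_cpu_present_map;
    physid_set(apicid, phys_cpu_present_map);

    /* After: a plain bitmap and the standard bitmap API */
    DECLARE_BITMAP(phys_cpu_present_map, MAX_LOCAL_APIC);
    set_bit(apicid, phys_cpu_present_map);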
|
#
ace278e7 |
| 13-Feb-2024 |
Thomas Gleixner <tglx@linutronix.de> |
x86/smpboot: Teach it about topo.amd_node_id
When switching AMD over to the new topology parser, the match functions need to look at the new topo.amd_node_id member on AMD systems with the extended topology feature, as it then holds the node id information.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Juergen Gross <jgross@suse.com> Tested-by: Sohil Mehta <sohil.mehta@intel.com> Tested-by: Michael Kelley <mhklinux@outlook.com> Tested-by: Zhang Rui <rui.zhang@intel.com> Tested-by: Wang Wendy <wendy.wang@intel.com> Tested-by: K Prateek Nayak <kprateek.nayak@amd.com> Link: https://lore.kernel.org/r/20240212153625.082979150@linutronix.de
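[ Editor's sketch of a match function consulting the new member; the surrounding logic and the function name are illustrative. ]

    static bool match_node(struct cpuinfo_x86 *c, struct cpuinfo_x86 *o)
    {
            /* Extended topology: the node id lives in topo.amd_node_id */
            if (boot_cpu_has(X86_FEATURE_TOPOEXT))
                    return c->topo.amd_node_id == o->topo.amd_node_id;

            return c->topo.pkg_id == o->topo.pkg_id;
    }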
|
#
c3dd1995 |
| 16-Nov-2023 |
Kan Liang <kan.liang@linux.intel.com> |
x86/smp: Export symbol cpu_clustergroup_mask()
The Intel cstate PMU driver invokes topology_cluster_cpumask() to retrieve the CPU mask of a cluster. A modpost error is triggered since the symbol cpu_clustergroup_mask is not exported.
Signed-off-by: Kan Liang <kan.liang@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/r/20231116142245.1233485-2-kan.liang@linux.intel.com
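[ Editor's note: the fix itself is a one-line export, shown with its function for context; the body reflects smpboot.c's helper at the time, but verify against the tree. ]

    const struct cpumask *cpu_clustergroup_mask(int cpu)
    {
            return cpu_l2c_shared_mask(cpu);
    }
    EXPORT_SYMBOL_GPL(cpu_clustergroup_mask);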
|
#
0b62f6cb |
| 17-Oct-2023 |
Thomas Gleixner <tglx@linutronix.de> |
x86/microcode/32: Move early loading after paging enable
32-bit loads microcode before paging is enabled. The commit which introduced that has zero justification in the changelog. The cover letter has slightly more content, but it does not give any technical justification either:
"The problem in current microcode loading method is that we load a microcode way, way too late; ideally we should load it before turning paging on. This may only be practical on 32 bits since we can't get to 64-bit mode without paging on, but we should still do it as early as at all possible."
Handwaving word salad with zero technical content.
Someone claimed in an offlist conversation that this is required for curing the ATOM erratum AAE44/AAF40/AAG38/AAH41. That erratum requires a microcode update in order to make the usage of PSE safe. But during early boot, PSE is completely irrelevant and it is evaluated way later.
Neither is it relevant for the AP on single core HT enabled CPUs as the microcode loading on the AP is not doing anything.
On dual core CPUs there is a theoretical problem if an executable large page is split between enabling paging (including PSE) and loading the microcode. But that's only theoretical; it's practically irrelevant because the affected dual core CPUs are 64-bit enabled and therefore have paging and PSE enabled before loading the microcode on the second core. So why would it work on 64-bit but not on 32-bit?
The erratum:
"AAG38 Code Fetch May Occur to Incorrect Address After a Large Page is Split Into 4-Kbyte Pages
Problem: If software clears the PS (page size) bit in a present PDE (page directory entry), that will cause linear addresses mapped through this PDE to use 4-KByte pages instead of using a large page after old TLB entries are invalidated. Due to this erratum, if a code fetch uses this PDE before the TLB entry for the large page is invalidated then it may fetch from a different physical address than specified by either the old large page translation or the new 4-KByte page translation. This erratum may also cause speculative code fetches from incorrect addresses."
The practical relevance for this is exactly zero because there is no splitting of large text pages during early boot-time, i.e. between paging enable and microcode loading, and neither during CPU hotplug.
IOW, this "load microcode before paging enable" is yet another voodoo programming solution in search of a problem. What's worse is that it causes at least two serious problems:
1) When stackprotector is enabled, the microcode loader code has the stackprotector mechanics enabled. The read from the per CPU variable __stack_chk_guard is always accessing the virtual address either directly on UP or via %fs on SMP. In physical address mode this results in an access to memory above 3GB. So this works by chance as the hardware returns the same value when there is no RAM at this physical address. When there is RAM populated above 3G then the read is by chance the same as nothing changes that memory during the very early boot stage. That's not necessarily true during runtime CPU hotplug.
2) When function tracing is enabled, the relevant microcode loader functions and the functions invoked from there will call into the tracing code and evaluate global and per CPU variables in physical address mode. What could potentially go wrong?
Cure this and move the microcode loading after the early paging enable, use the new temporary initrd mapping and remove the gunk in the microcode loader which is required to handle physical address mode.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Link: https://lore.kernel.org/r/20231017211722.348298216@linutronix.de
|
#
fbe1bf1e |
| 15-Oct-2023 |
Linus Torvalds <torvalds@linux-foundation.org> |
Revert "x86/smp: Put CPUs into INIT on shutdown if possible"
This reverts commit 45e34c8af58f23db4474e2bfe79183efec09a18b, and the two subsequent fixes to it:
3f874c9b2aae ("x86/smp: Don't send I
Revert "x86/smp: Put CPUs into INIT on shutdown if possible"
This reverts commit 45e34c8af58f23db4474e2bfe79183efec09a18b, and the two subsequent fixes to it:
3f874c9b2aae ("x86/smp: Don't send INIT to non-present and non-booted CPUs")
b1472a60a584 ("x86/smp: Don't send INIT to boot CPU")
because it seems to result in hung machines at shutdown. Particularly some Dell machines, but Thomas says
"The rest seems to be Lenovo and Sony with Alderlake/Raptorlake CPUs - at least that's what I could figure out from the various bug reports.
I don't know which CPUs the DELL machines have, so I can't say it's a pattern.
I agree with the revert for now"
Ashok Raj chimes in:
"There was a report (probably this same one), and it turns out it was a bug in the BIOS SMI handler.
The client BIOSes were waiting for the lowest APIC ID to be the SMI rendezvous master. If this is MeteorLake, the BSP wasn't the one with the lowest APIC ID and it tripped here.
The BIOS change is also being pushed to others for assimilation :)
Server BIOSes had this correct for a while now"
and it does look likely to be some bad interaction between SMI and the non-BSP cores having been put into INIT (and thus unresponsive until reset).
Link: https://bbs.archlinux.org/viewtopic.php?pid=2124429 Link: https://www.reddit.com/r/openSUSE/comments/16qq99b/tumbleweed_shutdown_did_not_finish_completely/ Link: https://forum.artixlinux.org/index.php/topic,5997.0.html Link: https://bugzilla.redhat.com/show_bug.cgi?id=2241279 Acked-by: Thomas Gleixner <tglx@linutronix.de> Cc: Ashok Raj <ashok.raj@intel.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
#
f577cd57 |
| 12-Jul-2023 |
Peter Zijlstra <peterz@infradead.org> |
sched/topology: Rename 'DIE' domain to 'PKG'
While reworking the x86 topology code Thomas tripped over creating a 'DIE' domain for the package mask. :-)
Since these names are CONFIG_SCHED_DEBUG=y only, rename them to make them less ambiguous.
[ Shrikanth Hegde: rename on s390 as well. ] [ Valentin Schneider: also rename it in the comments. ] [ mingo: port to recent kernels & find all remaining occurrences. ]
Reported-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Acked-by: Valentin Schneider <vschneid@redhat.com> Acked-by: Mel Gorman <mgorman@suse.de> Acked-by: Heiko Carstens <hca@linux.ibm.com> Acked-by: Gautham R. Shenoy <gautham.shenoy@amd.com> Acked-by: Vincent Guittot <vincent.guittot@linaro.org> Link: https://lore.kernel.org/r/20230712141056.GI3100107@hirez.programming.kicks-ass.net
|
#
90781f0c |
| 14-Aug-2023 |
Thomas Gleixner <tglx@linutronix.de> |
x86/cpu/topology: Cure the abuse of cpuinfo for persisting logical ids
Per CPU cpuinfo is used to persist the logical package and die IDs. That's really not the right place simply because cpuinfo is subject to being reinitialized when a CPU goes through an offline/online cycle.
This works by chance today, but that's far from correct and neither obvious nor documented.
Add a per CPU data structure which persists those logical IDs, which allows cleaning up the CPUID evaluation code.
This is a temporary workaround until the larger topology management is in place, which makes all of this logical management mechanics obsolete.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Juergen Gross <jgross@suse.com> Tested-by: Sohil Mehta <sohil.mehta@intel.com> Tested-by: Michael Kelley <mikelley@microsoft.com> Tested-by: Peter Zijlstra (Intel) <peterz@infradead.org> Tested-by: Zhang Rui <rui.zhang@intel.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/r/20230814085113.292947071@linutronix.de
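[ Editor's sketch of such a persistent per CPU store; the type and field names are hypothetical. ]

    struct logical_ids {
            unsigned int logical_pkg_id;
            unsigned int logical_die_id;
    };

    /* Survives offline/online cycles, unlike per CPU cpuinfo */
    static DEFINE_PER_CPU(struct logical_ids, logical_ids);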
|
#
8aa2a417 |
| 14-Aug-2023 |
Thomas Gleixner <tglx@linutronix.de> |
x86/apic: Use u32 for cpu_present_to_apicid()
APIC IDs are used with random data types u16, u32, int, unsigned int, unsigned long.
Make it all consistently use u32 because that reflects the hardware register width, and fix up a few related usage sites for consistency's sake.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Juergen Gross <jgross@suse.com> Tested-by: Sohil Mehta <sohil.mehta@intel.com> Tested-by: Michael Kelley <mikelley@microsoft.com> Tested-by: Peter Zijlstra (Intel) <peterz@infradead.org> Tested-by: Zhang Rui <rui.zhang@intel.com> Reviewed-by: Arjan van de Ven <arjan@linux.intel.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/r/20230814085113.054064391@linutronix.de
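[ Editor's sketch of the resulting signature; the body is illustrative. ]

    u32 cpu_present_to_apicid(int mps_cpu)
    {
            if (mps_cpu < nr_cpu_ids && cpu_present(mps_cpu))
                    return per_cpu(x86_cpu_to_apicid, mps_cpu);

            return BAD_APICID;
    }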
|
#
6e290323 |
| 14-Aug-2023 |
Thomas Gleixner <tglx@linutronix.de> |
x86/cpu: Move cpu_l[l2]c_id into topology info
The topology IDs which identify the LLC and L2 domains clearly belong to the per CPU topology information.
Move them into cpuinfo_x86::cpuinfo_topo and get rid of the extra per CPU data and the related exports.
This also paves the way to do proper topology evaluation during early boot because it removes the only per CPU dependency for that.
No functional change.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Juergen Gross <jgross@suse.com> Tested-by: Sohil Mehta <sohil.mehta@intel.com> Tested-by: Michael Kelley <mikelley@microsoft.com> Tested-by: Peter Zijlstra (Intel) <peterz@infradead.org> Tested-by: Zhang Rui <rui.zhang@intel.com> Reviewed-by: Arjan van de Ven <arjan@linux.intel.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/r/20230814085112.803864641@linutronix.de
|
#
22dc9631 |
| 14-Aug-2023 |
Thomas Gleixner <tglx@linutronix.de> |
x86/cpu: Move logical package and die IDs into topology info
Yet another topology related data pair. Rename logical_proc_id to logical_pkg_id so it fits the common naming conventions.
No functional change.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Juergen Gross <jgross@suse.com> Tested-by: Sohil Mehta <sohil.mehta@intel.com> Tested-by: Michael Kelley <mikelley@microsoft.com> Tested-by: Peter Zijlstra (Intel) <peterz@infradead.org> Tested-by: Zhang Rui <rui.zhang@intel.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/r/20230814085112.745139505@linutronix.de
|