History log of /netbsd/sys/arch/xen/x86/x86_xpmap.c (Results 1 – 25 of 91)
Revision Date Author Comments
# e8cbb42e 20-Aug-2022 riastradh <riastradh@NetBSD.org>

x86: Split most of pmap.h into pmap_private.h or vmparam.h.

This way pmap.h only contains the MD definition of the MI pmap(9)
API, which loads of things in the kernel rely on, so changing x86
pmap internals no longer requires recompiling the entire kernel every
time.

Callers needing these internals must now use machine/pmap_private.h.
Note: This is not x86/pmap_private.h because it contains three parts:

1. CPU-specific (different for i386/amd64) definitions used by...

2. common definitions, including Xenisms like xpmap_ptetomach,
further used by...

3. more CPU-specific inlines for pmap_pte_* operations

So {amd64,i386}/pmap_private.h defines 1, includes x86/pmap_private.h
for 2, and then defines 3. Maybe we should split that out into a new
pmap_pte.h to reduce this trouble.
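
A minimal sketch of that layering, for illustration only (the macro and the
inline shown are placeholders, not the real definitions):

/* amd64/pmap_private.h (or i386/pmap_private.h) -- sketch, not the real file */

/* 1. CPU-specific definitions (placeholder example) */
#define EXAMPLE_CPU_SPECIFIC    512

/* 2. common x86 definitions, including Xenisms like xpmap_ptetomach() */
#include <x86/pmap_private.h>

/* 3. CPU-specific inlines for the pmap_pte_* operations (placeholder) */
static __inline void
example_pmap_pte_set(pt_entry_t *ptep, pt_entry_t pte)
{
        *ptep = pte;
}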

No functional change intended, other than that some .c files must
include machine/pmap_private.h when previously uvm/uvm_pmap.h
polluted the namespace with pmap internals.

Note: This migrates part of i386/pmap.h into i386/vmparam.h --
specifically the parts that are needed for several constants defined
in vmparam.h:

VM_MAXUSER_ADDRESS
VM_MAX_ADDRESS
VM_MAX_KERNEL_ADDRESS
VM_MIN_KERNEL_ADDRESS

Since i386 needs PDP_SIZE in vmparam.h, I added it there on amd64
too, just to keep things parallel.


# c8af1ce8 11-May-2022 bouyer <bouyer@NetBSD.org>

In bootstrap, after switching to a new page table, make sure that
now-unused memory is unmapped.


# 1c575f78 06-Sep-2020 riastradh <riastradh@NetBSD.org>

Fix fallout from previous uvm.h cleanup.

- pmap(9) needs uvm/uvm_extern.h.

- x86/pmap.h is not usable on its own; it is only usable if included
via uvm/uvm_extern.h (-> uvm/uvm_pmap.h -> machine/pmap.h).

- Make nvmm.h and nvmm_internal.h standalone.


# 936e469b 26-May-2020 bouyer <bouyer@NetBSD.org>

Adjust pmap_enter_ma() for the upcoming new Xen privcmd ioctl:
pass flags to xpq_update_foreign()
Introduce a pmap MD flag: PMAP_MD_XEN_NOTR, which causes xpq_update_foreign()
to use the MMU_PT_UPDATE_NO_TRANSLATE flag.
Make xpq_update_foreign() return the raw Xen error. This will cause
pmap_enter_ma() to return a negative error number in this case, but the
only user of this code path is privcmd.c and it can deal with it.
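
A hedged sketch of how such a flag could select the Xen MMU command
(illustrative only: example_update_foreign() is a made-up name, the real
xpq_update_foreign() signature may differ, and the header paths are
approximate):

#include <sys/types.h>          /* paddr_t */
#include <machine/pte.h>        /* pt_entry_t */
#include <xen/hypervisor.h>     /* HYPERVISOR_mmu_update(), mmu_update_t */

static int
example_update_foreign(paddr_t ptr, pt_entry_t val, int dom, u_int flags)
{
        mmu_update_t op;
        int ok;

        /* The low bits of 'ptr' select the MMU update command. */
        op.ptr = ptr | ((flags & PMAP_MD_XEN_NOTR) ?
            MMU_PT_UPDATE_NO_TRANSLATE : MMU_NORMAL_PT_UPDATE);
        op.val = val;

        /* Hand back the raw Xen error so privcmd can act on it. */
        return HYPERVISOR_mmu_update(&op, 1, &ok, dom);
}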

Add pmap_enter_gnt(), which maps a set of Xen grant entries at the
specified va in the specified pmap. Use the hooks implemented for EPT to
keep track of mapped grant entries in the pmap, and unmap them
when pmap_remove() is called. This requires pmap_remove() to be split
into a pmap_remove_locked(), to be called from pmap_remove_gnt().


# 64db16ed 06-May-2020 bouyer <bouyer@NetBSD.org>

xpq_queue_* use per-cpu queue; splvm() is enough to protect them.
remove the XXX SMP comments.
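
A hedged sketch of the protection pattern this implies (illustrative only,
not the actual queue code; header path approximate):

#include <sys/intr.h>           /* splvm(), splx() */

static void
example_xpq_queue_op(void)
{
        int s;

        /* The queue is per-CPU and only touched by its owner, so raising
         * the IPL is enough; no lock is needed. */
        s = splvm();
        /* ... append to or flush this CPU's MMU update queue ... */
        splx(s);
}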


# d712f2e8 06-May-2020 bouyer <bouyer@NetBSD.org>

KASSERT() that the per-cpu queues are run at IPL_VM after boot.


# 3dcfbe40 02-May-2020 bouyer <bouyer@NetBSD.org>

Introduce Xen PVH support in GENERIC.
This is compiled in with
options XENPVHVM
x86 changes:
- add Xen section and xen pvh entry points to locore.S. Set vm_guest
to VM_GUEST_XENPVH in this entry point.
Most of the boot procedure (especially page table setup and switch to
paged mode) is shared with native.
- change some x86_delay() to delay_func(), which points to x86_delay() for
native/HVM, and xen_delay() for PVH (see the sketch after this entry)

Xen changes:
- remove Xen bits from init_x86_64_ksyms() and init386_ksyms()
and move to xen_init_ksyms(), used for both PV and PVH
- set ISA no-legacy-devices property for PVH
- factor out code from Xen's cpu_bootconf() to xen_bootconf()
in xen_machdep.c
- set up a specific pvh_consinit() which starts with printk()
(which uses a simple hypercall that is available early) and switches to
xencons when we can use pmap_kenter_pa().
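
A minimal sketch of the delay indirection mentioned in the x86 changes above
(example_select_delay() is a made-up name; the extern declarations are
approximations for illustration):

extern void x86_delay(unsigned int);            /* native/HVM delay */
extern void xen_delay(unsigned int);            /* PVH delay via Xen */
extern int vm_guest;                            /* VM_GUEST_* value set at entry */

void (*delay_func)(unsigned int) = x86_delay;   /* default: native and HVM */

static void
example_select_delay(void)
{
        if (vm_guest == VM_GUEST_XENPVH)        /* set by the PVH entry point */
                delay_func = xen_delay;
}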


# 1dea2ad8 30-Oct-2019 maxv <maxv@NetBSD.org>

Switch to new PTE bits.


# ca72d326 09-Mar-2019 maxv <maxv@NetBSD.org>

Start replacing the x86 PTE bits.


# 7e36c976 07-Mar-2019 maxv <maxv@NetBSD.org>

Drop PG_RO, PG_KR and PG_PROT, they are useless and create confusion.


# 775e80c7 04-Feb-2019 cherry <cherry@NetBSD.org>

Bump up XEN source API compatibility to 0x00030208 from 0x00030201,

but maintain backwards source API compilation compatibility.

i.e., sources with config(5)
options __XEN_INTERFACE_VERSION__=0x00030201 # Xen 3.1 interface

should compile and run without problems.

Note that API version 0x00030201 is the lowest version we support now.


# 8dd19469 29-Jul-2018 maxv <maxv@NetBSD.org>

Reduce the confusion, rename a bunch of variables and reorg a little.
Tested on i386PAE-domU and amd64-dom0.


# fe418166 27-Jul-2018 maxv <maxv@NetBSD.org>

Try to reduce the confusion, rename:

l2_4_count -> PDIRSZ
count -> nL2
bootstrap_tables -> our_tables
init_tables -> xen_tables

No functional change.


# 4dec8862 26-Jul-2018 maxv <maxv@NetBSD.org>

Remove the non-PAE-i386 code of Xen. The branches are reordered so that
__x86_64__ comes first, e.g.:

#if defined(PAE)
/* i386+PAE */
#elif defined(__x86_64__)
/* amd64 */
#else
/* i386 */
#endif

becomes

#ifdef __x86_64__
/* amd64 */
#else
/* i386+PAE */
#endif

Tested on i386pae-domU and amd64-dom0.


# de740689 26-Jul-2018 maxv <maxv@NetBSD.org>

Retire XENDEBUG_LOW, and switch its only user to XENDEBUG.


# 603bc4f8 26-Jul-2018 maxv <maxv@NetBSD.org>

Merge the blocks. No functional change.


# 4752160d 26-Jul-2018 maxv <maxv@NetBSD.org>

Simplify the conditions; (PTP_LEVELS > 3) and (PTP_LEVELS > 2) are for
amd64, so use ifdef __x86_64__. No functional change.


# 47061bda 24-Jun-2018 jdolecek <jdolecek@NetBSD.org>

mark with XXXSMP all remaining spl*() and tsleep() calls


# f509a1c2 16-Sep-2017 maxv <maxv@NetBSD.org>

Move xpq_idx into cpu_info, to prevent false sharing between CPUs. Saves
10s when doing a './build.sh -j 3 kernel=GENERIC' on xen-amd64-domU.
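
A hedged before/after sketch of why this helps (names and sizes are
illustrative; the real change moves the index into struct cpu_info, reached
via curcpu()):

#define EXAMPLE_NCPU    64      /* placeholder CPU count */

/* Before: adjacent int counters for different CPUs can share a cache
 * line, so every update bounces that line between CPUs. */
static int example_idx_array[EXAMPLE_NCPU];

/* After: keep each CPU's index with the rest of that CPU's private
 * state, so updates stay in lines the CPU already owns. */
struct example_percpu {
        int     idx;
        char    pad[64 - sizeof(int)];  /* one cache line per entry */
};
static struct example_percpu example_idx[EXAMPLE_NCPU];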


# 35c40bd3 18-Mar-2017 maxv <maxv@NetBSD.org>

Style, and remove debug code that does not work anyway.


# ac63fe57 08-Mar-2017 maxv <maxv@NetBSD.org>

A few changes:
* Use markers to reduce false sharing.
* Remove XENDEBUG_SYNC and several debug messages, they are just useless.
* Remove xen_vcpu_*. They are unused and not optimized: if we really
wanted to flush ranges we should pack the VAs in a mmuext_op array
instead of performing several hypercalls in a loop (see the sketch
after this list).
* Start removing PG_k.
* KNF, reorder, simplify and remove stupid comments.
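
A hedged sketch of the batching idea mentioned for the xen_vcpu_* item above
(illustrative only; example_invlpg_range() is a made-up name and the header
paths are approximate):

#include <sys/cdefs.h>          /* __arraycount() */
#include <sys/param.h>          /* PAGE_SIZE, vaddr_t */
#include <xen/hypervisor.h>     /* HYPERVISOR_mmuext_op(), struct mmuext_op */

static void
example_invlpg_range(vaddr_t sva, vaddr_t eva)
{
        struct mmuext_op ops[32];
        unsigned int n = 0, done;
        vaddr_t va;

        for (va = sva; va < eva; va += PAGE_SIZE) {
                /* Pack one invalidation per page into the array... */
                ops[n].cmd = MMUEXT_INVLPG_LOCAL;
                ops[n].arg1.linear_addr = va;
                /* ...and issue a single hypercall per batch. */
                if (++n == __arraycount(ops) || va + PAGE_SIZE >= eva) {
                        (void)HYPERVISOR_mmuext_op(ops, n, &done, DOMID_SELF);
                        n = 0;
                }
        }
}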


# 0bd04880 02-Feb-2017 maxv <maxv@NetBSD.org>

Use __read_mostly on these variables, to reduce the probability of false
sharing.
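
For reference, a hedged usage example of the annotation (the variable name is
made up; the macro places the object in a rarely-written data section):

#include <sys/cdefs.h>          /* __read_mostly */
#include <sys/types.h>

/* Written once during bootstrap, then only read, so keep it out of
 * cache lines that are dirtied frequently. */
static paddr_t example_boot_pa __read_mostly;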


# 997bc263 22-Jan-2017 maxv <maxv@NetBSD.org>

Export xpmap_pg_nx, and put it in the page table pages. It does not change
anything, since Xen removes the X bit on these; but it is better for
consistency.


# 0dd7d2d7 06-Jan-2017 maxv <maxv@NetBSD.org>

Remove a few #if 0s, and explain what we are doing on PAE: the last two PAs
are entered in reversed order.


# 4f90bdf3 16-Dec-2016 maxv <maxv@NetBSD.org>

The way the xen dummy page is taken care of makes absolutely no sense at
all, with magic offsets here and there in different layers of the system.
It is just blind luck that everything has always worked as expected so
far.

Due to this wrong design we have a problem now: we allocate one physical
page for lapic, and it happens to overlap with the dummy page, which
causes the system to crash.

Fix this by keeping the dummy va directly in a variable instead of magic
offsets. The asm locore now increments the first pa to hide the dummy page
from machdep and pmap.
