History log of /qemu/accel/tcg/cputlb.c (Results 1 – 25 of 362)
Revision Date Author Comments
Revision tags: v8.2.3, v7.2.11, v9.0.0, v9.0.0-rc4, v9.0.0-rc3, v9.0.0-rc2, v9.0.0-rc1, v9.0.0-rc0, v8.2.2, v7.2.10, v8.2.1, v8.1.5, v7.2.9, v8.1.4, v7.2.8, v8.2.0, v8.2.0-rc4, v8.2.0-rc3
# 74781c08 06-Dec-2023 Philippe Mathieu-Daudé <philmd@linaro.org>

exec/cpu: Extract page-protection definitions to page-protection.h

Extract page-protection definitions from "exec/cpu-all.h"
to "exec/page-protection.h".

The list of files requiring the new header was generated
using:

$ git grep -wE \
'PAGE_(READ|WRITE|EXEC|RWX|VALID|ANON|RESERVED|TARGET_.|PASSTHROUGH)'

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Acked-by: Nicholas Piggin <npiggin@gmail.com>
Acked-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20240427155714.53669-3-philmd@linaro.org>
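
For context, here is a minimal sketch of the kind of definitions that
move into the new header. The values mirror the long-standing
"exec/cpu-all.h" flags, but treat the exact contents as illustrative:

    /* Sketch of "exec/page-protection.h" content (illustrative). */
    #define PAGE_READ   0x0001  /* page can be read */
    #define PAGE_WRITE  0x0002  /* page can be written */
    #define PAGE_EXEC   0x0004  /* page can be executed */
    #define PAGE_RWX    (PAGE_READ | PAGE_WRITE | PAGE_EXEC)
    #define PAGE_VALID  0x0008  /* user-mode emulation: mapping is valid */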


# aacfd8bb 03-Apr-2024 Philippe Mathieu-Daudé <philmd@linaro.org>

exec: Move CPUTLBEntry helpers to cputlb.c

The following CPUTLBEntry helpers are only used in accel/tcg/cputlb.c:
- tlb_index()
- tlb_entry()
- tlb_read_idx()
- tlb_addr_write()

Move them to this file, allowing us to remove the huge "cpu.h" header
inclusion from "exec/cpu_ldst.h".

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20240418192525.97451-13-philmd@linaro.org>
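
As a rough sketch of what two of these helpers do (modeled on the
cputlb.c code of this period; details are illustrative, not verbatim):
the per-MMU-index TLB is a power-of-two table indexed by the
page-number bits of the address.

    /* Sketch: pick the fast-path TLB slot for (mmu_idx, addr). */
    static inline uintptr_t tlb_index(CPUState *cpu, uintptr_t mmu_idx,
                                      vaddr addr)
    {
        uintptr_t size_mask = cpu->neg.tlb.f[mmu_idx].mask >> CPU_TLB_ENTRY_BITS;
        return (addr >> TARGET_PAGE_BITS) & size_mask;
    }

    /* Sketch: fetch the entry itself from the same table. */
    static inline CPUTLBEntry *tlb_entry(CPUState *cpu, uintptr_t mmu_idx,
                                         vaddr addr)
    {
        return &cpu->neg.tlb.f[mmu_idx].table[tlb_index(cpu, mmu_idx, addr)];
    }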


Revision tags: v8.2.0-rc2, v8.2.0-rc1, v7.2.7, v8.1.3, v8.2.0-rc0, v8.1.2, v8.1.1, v7.2.6, v8.0.5
# 51579d40 14-Sep-2023 Philippe Mathieu-Daudé <philmd@linaro.org>

exec: Reduce tlb_set_dirty() declaration scope

tlb_set_dirty() is only used in accel/tcg/cputlb.c,
where it is defined. Declare it statically, removing
the stub.

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Harsh Prateek Bora <harshpb@linux.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20240418192525.97451-11-philmd@linaro.org>


# 49fa457c 01-Mar-2024 Richard Henderson <richard.henderson@linaro.org>

accel/tcg: Add TLB_CHECK_ALIGNED

This creates a per-page mechanism for checking alignment.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240301204110.656742-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
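
A hedged sketch of how such a per-page flag is typically consumed on
the slow path; cpu_unaligned_access() is existing QEMU API, while the
surrounding shape and variable names are assumptions:

    /* Sketch: if the page was installed with TLB_CHECK_ALIGNED, reject
     * any access that is not naturally aligned for its size. */
    if (unlikely(flags & TLB_CHECK_ALIGNED) && (addr & (size - 1))) {
        cpu_unaligned_access(cpu, addr, access_type, mmu_idx, retaddr);
    }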


# a0ff4a87 01-Mar-2024 Richard Henderson <richard.henderson@linaro.org>

accel/tcg: Add tlb_fill_flags to CPUTLBEntryFull

Allow the target to set tlb flags to apply to all of the
comparators. Remove MemTxAttrs.byte_swap, as the bit is
not relevant to memory transactions, only the page mapping.
Adjust target/sparc to set TLB_BSWAP directly.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240301204110.656742-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
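
A sketch of the idea, assuming the field sits in CPUTLBEntryFull as the
subject line says (the variable names below are illustrative): the
target's tlb_fill hook sets the flags once, and the core applies them
to every comparator when the entry is installed.

    /* In the target's tlb_fill hook, e.g. a sparc byte-swapped ASI page: */
    full->tlb_fill_flags = TLB_BSWAP;

    /* In tlb_set_page_full(), OR the flags into all comparators
     * (and likewise the code comparator): */
    read_flags  |= full->tlb_fill_flags;
    write_flags |= full->tlb_fill_flags;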


# 6aba908d 19-Feb-2024 Jonathan Cameron <Jonathan.Cameron@huawei.com>

tcg: Avoid double lock if page tables happen to be in mmio memory.

On i386, after fixing the page walking code to work with pages in
MMIO memory (specifically CXL emulated interleaved memory),
a crash was seen in an interrupt handling path.

Useful part of backtrace

7 0x0000555555ab1929 in bql_lock_impl (file=0x555556049122 "../../accel/tcg/cputlb.c", line=2033) at ../../system/cpus.c:524
8 bql_lock_impl (file=file@entry=0x555556049122 "../../accel/tcg/cputlb.c", line=line@entry=2033) at ../../system/cpus.c:520
9 0x0000555555c9f7d6 in do_ld_mmio_beN (cpu=0x5555578e0cb0, full=0x7ffe88012950, ret_be=ret_be@entry=0, addr=19595792376, size=size@entry=8, mmu_idx=4, type=MMU_DATA_LOAD, ra=0) at ../../accel/tcg/cputlb.c:2033
10 0x0000555555ca0fbd in do_ld_8 (cpu=cpu@entry=0x5555578e0cb0, p=p@entry=0x7ffff4efd1d0, mmu_idx=<optimized out>, type=type@entry=MMU_DATA_LOAD, memop=<optimized out>, ra=ra@entry=0) at ../../accel/tcg/cputlb.c:2356
11 0x0000555555ca341f in do_ld8_mmu (cpu=cpu@entry=0x5555578e0cb0, addr=addr@entry=19595792376, oi=oi@entry=52, ra=0, ra@entry=52, access_type=access_type@entry=MMU_DATA_LOAD) at ../../accel/tcg/cputlb.c:2439
12 0x0000555555ca5f59 in cpu_ldq_mmu (ra=52, oi=52, addr=19595792376, env=0x5555578e3470) at ../../accel/tcg/ldst_common.c.inc:169
13 cpu_ldq_le_mmuidx_ra (env=0x5555578e3470, addr=19595792376, mmu_idx=<optimized out>, ra=ra@entry=0) at ../../accel/tcg/ldst_common.c.inc:301
14 0x0000555555b4b5fc in ptw_ldq (ra=0, in=0x7ffff4efd320) at ../../target/i386/tcg/sysemu/excp_helper.c:98
15 ptw_ldq (ra=0, in=0x7ffff4efd320) at ../../target/i386/tcg/sysemu/excp_helper.c:93
16 mmu_translate (env=env@entry=0x5555578e3470, in=0x7ffff4efd3e0, out=0x7ffff4efd3b0, err=err@entry=0x7ffff4efd3c0, ra=ra@entry=0) at ../../target/i386/tcg/sysemu/excp_helper.c:174
17 0x0000555555b4c4b3 in get_physical_address (ra=0, err=0x7ffff4efd3c0, out=0x7ffff4efd3b0, mmu_idx=0, access_type=MMU_DATA_LOAD, addr=18446741874686299840, env=0x5555578e3470) at ../../target/i386/tcg/sysemu/excp_helper.c:580
18 x86_cpu_tlb_fill (cs=0x5555578e0cb0, addr=18446741874686299840, size=<optimized out>, access_type=MMU_DATA_LOAD, mmu_idx=0, probe=<optimized out>, retaddr=0) at ../../target/i386/tcg/sysemu/excp_helper.c:606
19 0x0000555555ca0ee9 in tlb_fill (retaddr=0, mmu_idx=0, access_type=MMU_DATA_LOAD, size=<optimized out>, addr=18446741874686299840, cpu=0x7ffff4efd540) at ../../accel/tcg/cputlb.c:1315
20 mmu_lookup1 (cpu=cpu@entry=0x5555578e0cb0, data=data@entry=0x7ffff4efd540, mmu_idx=0, access_type=access_type@entry=MMU_DATA_LOAD, ra=ra@entry=0) at ../../accel/tcg/cputlb.c:1713
21 0x0000555555ca2c61 in mmu_lookup (cpu=cpu@entry=0x5555578e0cb0, addr=addr@entry=18446741874686299840, oi=oi@entry=32, ra=ra@entry=0, type=type@entry=MMU_DATA_LOAD, l=l@entry=0x7ffff4efd540) at ../../accel/tcg/cputlb.c:1803
22 0x0000555555ca3165 in do_ld4_mmu (cpu=cpu@entry=0x5555578e0cb0, addr=addr@entry=18446741874686299840, oi=oi@entry=32, ra=ra@entry=0, access_type=access_type@entry=MMU_DATA_LOAD) at ../../accel/tcg/cputlb.c:2416
23 0x0000555555ca5ef9 in cpu_ldl_mmu (ra=0, oi=32, addr=18446741874686299840, env=0x5555578e3470) at ../../accel/tcg/ldst_common.c.inc:158
24 cpu_ldl_le_mmuidx_ra (env=env@entry=0x5555578e3470, addr=addr@entry=18446741874686299840, mmu_idx=<optimized out>, ra=ra@entry=0) at ../../accel/tcg/ldst_common.c.inc:294
25 0x0000555555bb6cdd in do_interrupt64 (is_hw=1, next_eip=18446744072399775809, error_code=0, is_int=0, intno=236, env=0x5555578e3470) at ../../target/i386/tcg/seg_helper.c:889
26 do_interrupt_all (cpu=cpu@entry=0x5555578e0cb0, intno=236, is_int=is_int@entry=0, error_code=error_code@entry=0, next_eip=next_eip@entry=0, is_hw=is_hw@entry=1) at ../../target/i386/tcg/seg_helper.c:1130
27 0x0000555555bb87da in do_interrupt_x86_hardirq (env=env@entry=0x5555578e3470, intno=<optimized out>, is_hw=is_hw@entry=1) at ../../target/i386/tcg/seg_helper.c:1162
28 0x0000555555b5039c in x86_cpu_exec_interrupt (cs=0x5555578e0cb0, interrupt_request=<optimized out>) at ../../target/i386/tcg/sysemu/seg_helper.c:197
29 0x0000555555c94480 in cpu_handle_interrupt (last_tb=<synthetic pointer>, cpu=0x5555578e0cb0) at ../../accel/tcg/cpu-exec.c:844

Peter identified this as being due to the BQL already being
held when the page table walker encounters MMIO memory and attempts
to take the lock again. There are other examples of similar paths in
TCG, so this follows the approach taken in those: simply check whether
the lock is already held and, if it is, do not take it again.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Message-Id: <20240219173153.12114-4-Jonathan.Cameron@huawei.com>
[rth: Use BQL_LOCK_GUARD]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
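
A minimal sketch of the pattern this fix applies, using the
bql_locked()/bql_lock()/bql_unlock() API named elsewhere in this log
(the merged code uses the equivalent BQL_LOCK_GUARD() macro, and the
function below is a hypothetical stand-in for do_ld_mmio_beN()):

    /* Sketch: take the BQL around an MMIO access only if this thread
     * does not already hold it, e.g. inside an i386 page-table walk. */
    static uint64_t mmio_load_once(uint64_t (*read_fn)(void *), void *opaque)
    {
        bool release = false;
        uint64_t ret;

        if (!bql_locked()) {
            bql_lock();
            release = true;
        }
        ret = read_fn(opaque);
        if (release) {
            bql_unlock();
        }
        return ret;
    }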


# 3b916140 29-Jan-2024 Richard Henderson <richard.henderson@linaro.org>

include/exec: Change cpu_mmu_index argument to CPUState

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>


# a4a411fb 02-Jan-2024 Stefan Hajnoczi <stefanha@redhat.com>

Replace "iothread lock" with "BQL" in comments

The term "iothread lock" is obsolete. The APIs use Big QEMU Lock (BQL)
in their names. Update the code comments to use "BQL" instead of
"iothread lock"

Replace "iothread lock" with "BQL" in comments

The term "iothread lock" is obsolete. The APIs use Big QEMU Lock (BQL)
in their names. Update the code comments to use "BQL" instead of
"iothread lock".

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Paul Durrant <paul@xen.org>
Reviewed-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Harsh Prateek Bora <harshpb@linux.ibm.com>
Message-id: 20240102153529.486531-5-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>


# 195801d7 02-Jan-2024 Stefan Hajnoczi <stefanha@redhat.com>

system/cpus: rename qemu_mutex_lock_iothread() to bql_lock()

The Big QEMU Lock (BQL) has many names and they are confusing. The
actual QemuMutex variable is called qemu_global_mutex but it's commonly
referred to as the BQL in discussions and some code comments. The
locking APIs, however, are called qemu_mutex_lock_iothread() and
qemu_mutex_unlock_iothread().

The "iothread" name is historic and comes from when the main thread was
split into KVM vcpu threads and the "iothread" (now called the main
loop thread). I have contributed to the confusion myself by introducing
a separate --object iothread, a separate concept unrelated to the BQL.

The "iothread" name is no longer appropriate for the BQL. Rename the
locking APIs to:
- void bql_lock(void)
- void bql_unlock(void)
- bool bql_locked(void)

There are more APIs with "iothread" in their names. Subsequent patches
will rename them. There are also comments and documentation that will be
updated in later patches.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Paul Durrant <paul@xen.org>
Acked-by: Fabiano Rosas <farosas@suse.de>
Acked-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Acked-by: Peter Xu <peterx@redhat.com>
Acked-by: Eric Farman <farman@linux.ibm.com>
Reviewed-by: Harsh Prateek Bora <harshpb@linux.ibm.com>
Acked-by: Hyman Huang <yong.huang@smartx.com>
Reviewed-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Message-id: 20240102153529.486531-2-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>


# e2faabee 11-Nov-2023 Jessica Clarke <jrtc27@jrtc27.com>

accel/tcg: Forward probe size on to notdirty_write

Without this, we just dirty a single byte, and so if the caller writes
more than one byte to the host memory then we won't have invalidated any
translation blocks that start after the first byte and overlap those
writes. In particular, AArch64's DC ZVA implementation uses probe_access
(via probe_write), and so we don't invalidate the entire block, only the
TB overlapping the first byte (and, in the unusual case an unaligned VA
is given to the instruction, we also probe that specific address in
order to get the right VA reported on an exception, so will invalidate a
TB overlapping that address too). Since our IC IVAU implementation is a
no-op for system emulation that relies on the softmmu already having
detected self-modifying code via this mechanism, this means we have
observably wrong behaviour when jumping to code that has been DC ZVA'ed.
In practice this is an unusual thing for software to do, as in reality
the OS will DC ZVA the page and the application will go and write actual
instructions to it that aren't UDF #0, but you can write a test that
clearly shows the faulty behaviour.

For functions other than probe_access it's not clear what size to use
when 0 is passed in. Arguably a size of 0 shouldn't dirty at all, since
if you want to actually write then you should pass in a real size, but I
have conservatively kept the implementation as dirtying the first byte
in that case so as to avoid breaking any assumptions about that
behaviour.

Signed-off-by: Jessica Clarke <jrtc27@jrtc27.com>
Message-Id: <20231104031232.3246614-1-jrtc27@jrtc27.com>
[rth: Move the dirtysize computation next to notdirty_write.]
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
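
A sketch of the behaviour change described above; the names mirror
cputlb.c, but the snippet is illustrative rather than the exact diff:

    /* Before the fix, probe_access() always dirtied a single byte.
     * After it, the probed size is forwarded, conservatively clamped
     * to 1 byte when a zero size is passed in. */
    if (flags & TLB_NOTDIRTY) {
        int dirtysize = size == 0 ? 1 : size;
        notdirty_write(cpu, addr, dirtysize, full, retaddr);
    }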


# f4f826c0 18-Sep-2023 Philippe Mathieu-Daudé <philmd@linaro.org>

accel/tcg: Declare tcg_flush_jmp_cache() in 'exec/tb-flush.h'

"exec/cpu-common.h" is meant to contain the declarations
related to CPU usable with any accelerator / target
combination.

tcg_flush_jmp_cache() is specific to TCG, so restrict its
declaration by moving it to "exec/tb-flush.h".

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230918104153.24433-2-philmd@linaro.org>


# 6046f6e9 16-Sep-2023 Richard Henderson <richard.henderson@linaro.org>

accel/tcg: Fix condition for store_atom_insert_al16

Storing bytes under a mask is fundamentally a cmpxchg, not a straight store.
Use HAVE_CMPXCHG128 instead of HAVE_ATOMIC128_RW.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230916220151.526140-8-richard.henderson@linaro.org>
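
A self-contained sketch of why a masked store must be a cmpxchg loop
rather than a plain store; plain C11 64-bit atomics stand in here for
QEMU's 16-byte helpers:

    #include <stdatomic.h>
    #include <stdint.h>

    /* Merge only the bytes selected by 'mask' into *p, atomically.
     * The untouched bytes must be re-read and preserved, hence the
     * compare-and-swap loop: a straight store could clobber bytes
     * written concurrently by another thread. */
    static void store_under_mask(_Atomic uint64_t *p, uint64_t val,
                                 uint64_t mask)
    {
        uint64_t old = atomic_load(p);
        uint64_t new;

        do {
            new = (old & ~mask) | (val & mask);
        } while (!atomic_compare_exchange_weak(p, &old, new));
    }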


Revision tags: v8.1.0, v8.1.0-rc4, v8.1.0-rc3, v7.2.5, v8.0.4, v8.1.0-rc2, v8.1.0-rc1, v8.1.0-rc0, v8.0.3, v7.2.4
# 24a4d59a 03-Jul-2023 Richard Henderson <richard.henderson@linaro.org>

accel/tcg: Move HMP info jit and info opcount code

Move all of it into accel/tcg/monitor.c. This puts everything
about tcg that is only used by the monitor in the same place.

Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>


# 43e7a2d3 14-Sep-2023 Philippe Mathieu-Daudé <philmd@linaro.org>

accel/tcg: Make cpu-exec-common.c a target agnostic unit

cpu_in_serial_context() is not target specific, so
move its declaration to "internal-common.h" (which
we include in the 4 source files modified).

Remove the unused "exec/exec-all.h" header from
cpu-exec-common.c. There is no more target specific
code in this file: make it target agnostic.

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Anton Johansson <anjo@rev.ng>
Message-Id: <20230914185718.76241-12-philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>


# 4c268d6d 14-Sep-2023 Philippe Mathieu-Daudé <philmd@linaro.org>

accel/tcg: Rename target-specific 'internal.h' -> 'internal-target.h'

accel/tcg/internal.h contains target specific declarations.
Unit files including it become "target tainted": they can not
be compiled as target agnostic. Rename using the '-target'
suffix to make this explicit.

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Anton Johansson <anjo@rev.ng>
Message-Id: <20230914185718.76241-9-philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>


# 27c46fad 12-Sep-2023 Anton Johansson <anjo@rev.ng>

accel/tcg: move ld/st helpers to ldst_common.c.inc

A large chunk of ld/st functions are moved from cputlb.c and user-exec.c
to ldst_common.c.inc as their implementation is the same between both
modes.

Eventually, ldst_common.c.inc could be compiled into a separate
target-specific compilation unit and linked in with the targets.
This keeps CPUArchState usage out of cputlb.c (CPUArchState is
primarily used to access the mmu index in these functions).

Signed-off-by: Anton Johansson <anjo@rev.ng>
Message-Id: <20230912153428.17816-12-anjo@rev.ng>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
