Revision tags: v8.1.4, v7.2.8, v8.2.0, v8.2.0-rc4, v8.2.0-rc3

# 6bc30f19 | 05-Dec-2023 | Stefan Hajnoczi <stefanha@redhat.com>

graph-lock: remove AioContext locking

Stop acquiring/releasing the AioContext lock in bdrv_graph_wrlock()/bdrv_graph_unlock() since the lock no longer has any effect.
The distinction between bdrv_graph_wrunlock() and bdrv_graph_wrunlock_ctx() becomes meaningless and they can be collapsed into one function.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20231205182011.1976568-6-stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
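
As a rough sketch of the API collapse described above (illustrative only, not the verbatim QEMU diff):

    /* Before: the unlock variant depended on which lock the caller held. */
    bdrv_graph_wrunlock(bs);         /* re-acquires bs's AioContext lock  */
    bdrv_graph_wrunlock_ctx(ctx);    /* for when bs may no longer exist   */

    /* After: with AioContext locking gone, one parameterless form is enough. */
    bdrv_graph_wrunlock();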

Revision tags: v8.2.0-rc2, v8.2.0-rc1, v7.2.7, v8.1.3

# 6bc0bcc8 | 15-Nov-2023 | Kevin Wolf <kwolf@redhat.com>

block: Fix deadlocks in bdrv_graph_wrunlock()

bdrv_graph_wrunlock() calls aio_poll(), which may run callbacks that have a nested event loop. Nested event loops can depend on other iothreads making progress, so in order to allow them to make progress it must not hold the AioContext lock of another thread while calling aio_poll().
This introduces a @bs parameter to bdrv_graph_wrunlock() whose AioContext is temporarily dropped (which matches bdrv_graph_wrlock()), and a bdrv_graph_wrunlock_ctx() that can be used if the BlockDriverState doesn't necessarily exist any more when unlocking.
This also requires a change to bdrv_schedule_unref(), which was relying on the incorrectly taken lock. It needs to take the lock itself now. While this is a separate bug, it can't be fixed in a separate patch because otherwise the intermediate state would either deadlock or try to release a lock that we don't even hold.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20231115172012.112727-3-kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
[kwolf: Fixed up bdrv_schedule_unref()]
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
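
A minimal sketch of the shape this gives bdrv_graph_wrunlock() (simplified; the real function also releases the writer lock and runs deferred work):

    void bdrv_graph_wrunlock(BlockDriverState *bs)
    {
        AioContext *ctx = bs ? bdrv_get_aio_context(bs) : NULL;

        if (ctx) {
            /* Never poll while holding another thread's AioContext lock. */
            aio_context_release(ctx);
        }
        /* ... drop the writer lock and aio_poll() until callbacks settle ... */
        if (ctx) {
            aio_context_acquire(ctx);
        }
    }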

Revision tags: v8.2.0-rc0, v8.1.2

# e6e964b8 | 29-Sep-2023 | Kevin Wolf <kwolf@redhat.com>

block: Add assertion for bdrv_graph_wrlock()

bdrv_graph_wrlock() can't run in a coroutine (because it polls) and requires holding the BQL. We already have GLOBAL_STATE_CODE() to assert the latter. Assert the former as well and add a no_coroutine_fn marker.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20230929145157.45443-23-kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
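
The added checks amount to roughly the following (a sketch assuming QEMU's internal helpers and the function signature of this point in history; the locking body itself is elided):

    void no_coroutine_fn bdrv_graph_wrlock(BlockDriverState *bs)
    {
        GLOBAL_STATE_CODE();           /* existing check: BQL must be held      */
        assert(!qemu_in_coroutine());  /* new check: polling forbids coroutines */
        /* ... take the writer lock and poll until readers drain ... */
    }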

Revision tags: v8.1.1, v7.2.6, v8.0.5

# ac2ae233 | 11-Sep-2023 | Kevin Wolf <kwolf@redhat.com>

block: Introduce bdrv_schedule_unref()

bdrv_unref() is called by a lot of places that need to hold the graph lock (it naturally happens in the context of operations that change the graph). However, bdrv_unref() takes the graph writer lock internally, so it can't actually be called while already holding a graph lock without causing a deadlock.
bdrv_unref() also can't just become GRAPH_WRLOCK because it drains the node before closing it, and draining requires that the graph is unlocked.
The solution is to defer deleting the node until we don't hold the lock any more and draining is possible again.
Note that keeping images open for longer than necessary can create problems, too: You can't open an image again before it is really closed (if image locking didn't prevent it, it would cause corruption). Reopening an image immediately happens at least during bdrv_open() and bdrv_co_create().
In order to solve this problem, make sure to run the deferred unref in bdrv_graph_wrunlock(), i.e. the first possible place where we can drain again. This is also why bdrv_schedule_unref() is marked GRAPH_WRLOCK.
The output of iotest 051 is updated because the additional polling changes the order of HMP output, resulting in a new "(qemu)" prompt in the test output that was previously on a separate line and filtered out.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20230911094620.45040-6-kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
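
The deferral pattern can be modelled outside QEMU in a few lines (a standalone illustration; names like schedule_unref() and the Node type are invented here, and the real code also drains the node before deleting it):

    #include <assert.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct Node {
        int refcnt;
        struct Node *next_deferred;
    } Node;

    static Node *deferred_list;
    static bool wrlock_held;

    static void schedule_unref(Node *n)
    {
        assert(wrlock_held);            /* mirrors the GRAPH_WRLOCK marker */
        if (--n->refcnt > 0) {
            return;
        }
        /* Can't delete now: deletion needs draining, draining needs the
         * lock to be released. Queue the node instead. */
        n->next_deferred = deferred_list;
        deferred_list = n;
    }

    static void graph_wrunlock(void)
    {
        wrlock_held = false;
        /* First point where draining is safe again: run deferred unrefs. */
        while (deferred_list) {
            Node *n = deferred_list;
            deferred_list = n->next_deferred;
            free(n);                    /* QEMU would drain and close here */
        }
    }

    int main(void)
    {
        Node *n = calloc(1, sizeof(*n));
        n->refcnt = 1;

        wrlock_held = true;             /* graph_wrlock() */
        schedule_unref(n);              /* deletion deferred, not immediate */
        graph_wrunlock();               /* node actually freed here */
        puts("deferred unref ran at unlock");
        return 0;
    }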

Revision tags: v8.1.0, v8.1.0-rc4, v8.1.0-rc3, v7.2.5, v8.0.4, v8.1.0-rc2, v8.1.0-rc1, v8.1.0-rc0

# 3202d8e4 | 14-Jul-2023 | Michael Tokarev <mjt@tls.msk.ru>

block: spelling fixes

Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
Reviewed-by: Eric Blake <eblake@redhat.com>

Revision tags: v8.0.3, v7.2.4

# 3cce22de | 05-Jun-2023 | Kevin Wolf <kwolf@redhat.com>

Revert "graph-lock: Disable locking for now"
Now that bdrv_graph_wrlock() temporarily drops the AioContext lock that its caller holds, it can poll without causing deadlocks. We can now re-enable gra
Revert "graph-lock: Disable locking for now"
Now that bdrv_graph_wrlock() temporarily drops the AioContext lock that its caller holds, it can poll without causing deadlocks. We can now re-enable graph locking.
This reverts commit ad128dff0bf4b6f971d05eb4335a627883a19c1d.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20230605085711.21261-12-kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

# 31b2ddfe | 05-Jun-2023 | Kevin Wolf <kwolf@redhat.com>

graph-lock: Unlock the AioContext while polling

If the caller keeps the AioContext lock for a block node in an iothread, polling in bdrv_graph_wrlock() deadlocks if the condition isn't fulfilled immediately.
Now that all callers make sure to actually have the AioContext locked when they call bdrv_replace_child_noperm() like they should, we can change bdrv_graph_wrlock() to take a BlockDriverState whose AioContext lock the caller holds (NULL if it doesn't) and unlock it temporarily while polling.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20230605085711.21261-11-kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
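
In caller terms, the new contract looks roughly like this (a hedged usage sketch, not a real call site):

    /* Caller runs bs in an iothread and holds its AioContext lock: */
    bdrv_graph_wrlock(bs);    /* bs's lock is dropped internally while polling */

    /* Caller holds no AioContext lock: */
    bdrv_graph_wrlock(NULL);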

Revision tags: v8.0.2, v8.0.1, v7.2.3

# 80fc5d26 | 17-May-2023 | Kevin Wolf <kwolf@redhat.com>

graph-lock: Disable locking for now

In QEMU 8.0, we've been seeing deadlocks in bdrv_graph_wrlock(). They come from callers that hold an AioContext lock, which is not allowed during polling. In theory, we could temporarily release the lock, but callers are inconsistent about whether they hold a lock, and if they do, some are also confused about which one they hold. While all of this is fixable, it's not trivial, and the best course of action for 8.0.1 is probably just disabling the graph locking code temporarily.
We don't currently rely on graph locking yet. It is supposed to replace the AioContext lock eventually to enable multiqueue support, but as long as we still have the AioContext lock, it is sufficient without the graph lock. Once the AioContext lock goes away, the deadlock doesn't exist any more either and this commit can be reverted. (Of course, it can also be reverted while the AioContext lock still exists if the callers have been fixed.)
Cc: qemu-stable@nongnu.org
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20230517152834.277483-2-kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

# 71438d8d | 10-May-2023 | Kevin Wolf <kwolf@redhat.com>

graph-lock: Honour read locks even in the main thread

There are some conditions under which we don't actually need to do anything for taking a reader lock: Writing the graph is only possible from the main context while holding the BQL. So if a reader is running in the main context under the BQL and knows that it won't be interrupted until the next writer runs, we don't actually need to do anything.
This is the case if the reader code neither has a nested event loop (this is forbidden anyway while you hold the lock) nor is a coroutine (because a writer could run when the coroutine has yielded).
These conditions are exactly what bdrv_graph_rdlock_main_loop() asserts. They are not fulfilled in bdrv_graph_co_rdlock(), which always runs in a coroutine.
This deletes the shortcuts in bdrv_graph_co_rdlock() that skip taking the reader lock in the main thread.
Reported-by: Fiona Ebner <f.ebner@proxmox.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20230510203601.418015-9-kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
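
The asserted conditions can be pictured like this (a sketch; the real function lives in block/graph-lock.c and may differ in detail):

    void bdrv_graph_rdlock_main_loop(void)
    {
        GLOBAL_STATE_CODE();            /* main context, under the BQL         */
        assert(!qemu_in_coroutine());   /* a coroutine could yield to a writer */
        /* No reader count update needed: no writer can run before control
         * returns to the main loop. */
    }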

# 58a2e3f5 | 01-May-2023 | Stefan Hajnoczi <stefanha@redhat.com>

block: compile out assert_bdrv_graph_readable() by default

reader_count() is a performance bottleneck because the global aio_context_list_lock mutex causes thread contention. Put this debugging assertion behind a new ./configure --enable-debug-graph-lock option and disable it by default.
The --enable-debug-graph-lock option is also enabled by the more general --enable-debug option.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20230501173443.153062-1-stefanha@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
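
The compile-time gating follows the usual #ifdef pattern (an illustrative sketch; the exact config symbol and the wiring through the build system are assumptions here):

    #ifdef CONFIG_DEBUG_GRAPH_LOCK
    #define assert_bdrv_graph_readable() \
        assert(reader_count() > 0)            /* slow: contended global mutex */
    #else
    #define assert_bdrv_graph_readable() ((void)0)    /* compiled out */
    #endif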

Revision tags: v7.2.2, v8.0.0, v8.0.0-rc4, v8.0.0-rc3, v7.2.1, v8.0.0-rc2, v8.0.0-rc1, v8.0.0-rc0

# d805d8a2 | 09-Mar-2023 | Stefan Hajnoczi <stefanha@redhat.com>

block: convert bdrv_graph_wrlock() to AIO_WAIT_WHILE_UNLOCKED()

The following conversion is safe and does not change behavior:
      GLOBAL_STATE_CODE();
      ...
    - AIO_WAIT_WHILE(qemu_get_aio_context(), ...);
    + AIO_WAIT_WHILE_UNLOCKED(NULL, ...);
Since we're in GLOBAL_STATE_CODE(), qemu_get_aio_context() is our home thread's AioContext. Thus AIO_WAIT_WHILE() does not unlock the AioContext:
    if (ctx_ && in_aio_context_home_thread(ctx_)) {  \
        while ((cond)) {                             \
            aio_poll(ctx_, true);                    \
            waited_ = true;                          \
        }                                            \
And that means AIO_WAIT_WHILE_UNLOCKED(NULL, ...) can be substituted.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20230309190855.414275-4-stefanha@redhat.com>
Reviewed-by: Wilfred Mallawa <wilfred.mallawa@wdc.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>