5bdbaebc | 02-Feb-2024 |
Hanna Czenczek <hreitz@redhat.com> |
virtio: Re-enable notifications after drain
During drain, we do not care about virtqueue notifications, which is why we remove the handlers on it. When removing those handlers, whether vq notifications are enabled or not depends on whether we were in polling mode or not; if not, they are enabled (by default); if so, they have been disabled by the io_poll_start callback.
Because we do not care about those notifications after removing the handlers, this is fine. However, we have to explicitly ensure they are enabled when re-attaching the handlers, so we will resume receiving notifications. We do this in virtio_queue_aio_attach_host_notifier*(). If such a function is called while we are in a polling section, attaching the notifiers will then invoke the io_poll_start callback, re-disabling notifications.
Because we will always miss virtqueue updates in the drained section, we also need to poll the virtqueue once after attaching the notifiers.
Buglink: https://issues.redhat.com/browse/RHEL-3934 Signed-off-by: Hanna Czenczek <hreitz@redhat.com> Message-ID: <20240202153158.788922-3-hreitz@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
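As a rough illustration of the fix described above, here is a minimal sketch of the re-attach path (the surrounding function is hypothetical, not the actual patch; virtio_queue_set_notification(), virtio_queue_get_host_notifier() and event_notifier_set() are existing QEMU APIs):

    /* Hypothetical sketch of the tail end of virtio_queue_aio_attach_host_notifier*()
     * after this fix -- not the actual QEMU code. */
    static void attach_host_notifier_sketch(VirtQueue *vq)
    {
        /* ... handlers for the queue's host notifier are (re)attached here ... */

        /* Re-enable virtqueue notifications: they may still be disabled if the
         * io_poll_start callback ran while the handlers were being attached. */
        virtio_queue_set_notification(vq, 1);

        /* Kick the queue once so that anything the guest submitted while we
         * were drained (and not listening) gets processed now. */
        event_notifier_set(virtio_queue_get_host_notifier(vq));
    }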
72098a3a | 05-Dec-2023 |
Peter Krempa <pkrempa@redhat.com> |
stream: Allow users to request only format driver names in backing file format
Introduce a new flag 'backing-mask-protocol' for the block-stream QMP command which instructs the internals to use 'raw' instead of the protocol driver name when an image is used without a dummy 'raw' wrapper.
The flag is designed such that it can always be asserted by management tools, even when there isn't any update to backing files.
The flag will be used by libvirt so that the backing images still reference the proper format even after libvirt stops using the dummy raw driver (a raw driver with no other configuration). Libvirt needs this so that the images stay compatible with older libvirt versions, which didn't expect that a protocol driver name could appear in the backing file format field.
Signed-off-by: Peter Krempa <pkrempa@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru> Message-ID: <bbee9a0a59748a8893289bf8249f568f0d587e62.1701796348.git.pkrempa@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
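As a rough sketch of the masking rule described above (a hypothetical helper, not the code added by this patch; BlockDriver's format_name and protocol_name fields are real QEMU names): when the flag is set and the backing node sits directly on a protocol driver, 'raw' is recorded in the overlay's metadata instead of the protocol driver's name.

    /* Hypothetical illustration of the 'backing-mask-protocol' rule. */
    static const char *backing_fmt_for_metadata(BlockDriverState *backing_bs,
                                                bool backing_mask_protocol)
    {
        /* Protocol driver (file, nbd, ...) used directly, with no format
         * driver on top: older libvirt does not expect its name in the
         * backing file format field, so mask it as "raw". */
        if (backing_mask_protocol && backing_bs->drv->protocol_name) {
            return "raw";
        }
        /* Otherwise record the driver's format name as before. */
        return backing_bs->drv->format_name;
    }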
4b028cbe | 05-Dec-2023 |
Peter Krempa <pkrempa@redhat.com> |
commit: Allow users to request only format driver names in backing file format
Introduce a new flag 'backing-mask-protocol' for the block-commit QMP command which instructs the internals to use 'raw' instead of the protocol driver name when an image is used without a dummy 'raw' wrapper.
The flag is designed such that it can always be asserted by management tools, even when there isn't any update to backing files.
The flag will be used by libvirt so that the backing images still reference the proper format even after libvirt stops using the dummy raw driver (a raw driver with no other configuration). Libvirt needs this so that the images stay compatible with older libvirt versions, which didn't expect that a protocol driver name could appear in the backing file format field.
Signed-off-by: Peter Krempa <pkrempa@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru> Message-ID: <2cb46e37093ce793ea1604abc8bbb90f4c8e434b.1701796348.git.pkrempa@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
3cbc17ee | 12-Jul-2021 |
Paolo Bonzini <pbonzini@redhat.com> |
io_uring: move LuringState typedef to block/aio.h
The LuringState typedef is defined twice, in include/block/raw-aio.h and block/io_uring.c. Move it to include/block/aio.h, which is included everywhere the typedef is needed, since include/block/aio.h already has to define the forward reference to the struct.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
0b2675c4 | 02-Jan-2024 |
Stefan Hajnoczi <stefanha@redhat.com> |
Rename "QEMU global mutex" to "BQL" in comments and docs
The term "QEMU global mutex" refers to the same thing as the more widely used Big QEMU Lock ("BQL"). Update the code comments and documentation to use "BQL" instead of "QEMU global mutex".
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Acked-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Paul Durrant <paul@xen.org> Reviewed-by: Akihiko Odaki <akihiko.odaki@daynix.com> Reviewed-by: Cédric Le Goater <clg@kaod.org> Reviewed-by: Harsh Prateek Bora <harshpb@linux.ibm.com> Message-id: 20240102153529.486531-6-stefanha@redhat.com Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
195801d7 | 02-Jan-2024 |
Stefan Hajnoczi <stefanha@redhat.com> |
system/cpus: rename qemu_mutex_lock_iothread() to bql_lock()
The Big QEMU Lock (BQL) has many names and they are confusing. The actual QemuMutex variable is called qemu_global_mutex but it's commonly referred to as the BQL in discussions and some code comments. The locking APIs, however, are called qemu_mutex_lock_iothread() and qemu_mutex_unlock_iothread().
The "iothread" name is historic and comes from when the main thread was split into KVM vcpu threads and the "iothread" (now called the main loop thread). I have contributed to the confusion myself by introducing --object iothread, a separate concept unrelated to the BQL.
The "iothread" name is no longer appropriate for the BQL. Rename the locking APIs to:
- void bql_lock(void)
- void bql_unlock(void)
- bool bql_locked(void)
There are more APIs with "iothread" in their names. Subsequent patches will rename them. There are also comments and documentation that will be updated in later patches.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Reviewed-by: Paul Durrant <paul@xen.org> Acked-by: Fabiano Rosas <farosas@suse.de> Acked-by: David Woodhouse <dwmw@amazon.co.uk> Reviewed-by: Cédric Le Goater <clg@kaod.org> Acked-by: Peter Xu <peterx@redhat.com> Acked-by: Eric Farman <farman@linux.ibm.com> Reviewed-by: Harsh Prateek Bora <harshpb@linux.ibm.com> Acked-by: Hyman Huang <yong.huang@smartx.com> Reviewed-by: Akihiko Odaki <akihiko.odaki@daynix.com> Message-id: 20240102153529.486531-2-stefanha@redhat.com Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
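For orientation, a minimal sketch of what a caller's critical section looks like after this rename (the function and the work done under the lock are made up; the bql_*() names come from the list above, and the names in the comments are the historic ones being replaced):

    /* Hypothetical caller showing the renamed BQL API in use. */
    static void do_work_under_bql(void)
    {
        bql_lock();               /* was qemu_mutex_lock_iothread()   */
        assert(bql_locked());     /* was qemu_mutex_iothread_locked() */

        /* ... touch global state protected by the Big QEMU Lock ... */

        bql_unlock();             /* was qemu_mutex_unlock_iothread() */
    }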
897a06c6 | 20-Nov-2023 |
Philippe Mathieu-Daudé <philmd@linaro.org> |
iothread: Remove unused Error** argument in aio_context_set_aio_params
aio_context_set_aio_params() doesn't use its undocumented Error** argument. Remove it to simplify.
Note this removes a use of "unchecked Error**" in iothread_set_aio_context_params().
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Markus Armbruster <armbru@redhat.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Message-ID: <20231120171806.19361-1-philmd@linaro.org>
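For context, the prototype change implied by this commit is roughly the following (sketched from the commit text; the max_batch parameter name is an assumption):

    /* Before: an Error ** that the function never used. */
    void aio_context_set_aio_params(AioContext *ctx, int64_t max_batch,
                                    Error **errp);

    /* After: the unused parameter is gone. */
    void aio_context_set_aio_params(AioContext *ctx, int64_t max_batch);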
23c983c8 | 05-Dec-2023 |
Stefan Hajnoczi <stefanha@redhat.com> |
block: remove outdated AioContext locking comments
The AioContext lock no longer exists.
There is one noteworthy change:
- * More specifically, these functions use BDRV_POLL_WHILE(bs), which
- * requires the caller to be either in the main thread and hold
- * the BlockdriverState (bs) AioContext lock, or directly in the
- * home thread that runs the bs AioContext. Calling them from
- * another thread in another AioContext would cause deadlocks.
+ * More specifically, these functions use BDRV_POLL_WHILE(bs), which requires
+ * the caller to be either in the main thread or directly in the home thread
+ * that runs the bs AioContext. Calling them from another thread in another
+ * AioContext would cause deadlocks.
I am not sure whether deadlocks are still possible. Maybe they have just moved to the fine-grained locks that have replaced the AioContext lock. Since I am not sure if the deadlocks are gone, I have kept the substance unchanged and just removed mention of the AioContext.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Message-ID: <20231205182011.1976568-15-stefanha@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
9f8d2fdc | 05-Dec-2023 |
Stefan Hajnoczi <stefanha@redhat.com> |
aio: remove aio_context_acquire()/aio_context_release() API
Delete these functions because nothing calls them anymore.
I introduced these APIs in commit 98563fc3ec44 ("aio: add aio_context_acquire() and aio_context_release()") in 2014. It's with a sigh of relief that I delete these APIs almost 10 years later.
Thanks to Paolo Bonzini's vision for multi-queue QEMU, we got an understanding of where the code needed to go in order to remove the limitations of the original dataplane and of the IOThread/AioContext approach that followed it.
Emanuele Giuseppe Esposito had the splendid determination to convert large parts of the codebase so that they no longer needed the AioContext lock. This was a painstaking process, both in the actual code changes required and the iterations of code review that Emanuele eked out of Kevin and me over many months.
Kevin Wolf tackled multitudes of graph locking conversions to protect in-flight I/O from run-time changes to the block graph as well as the clang Thread Safety Analysis annotations that allow the compiler to check whether the graph lock is being used correctly.
And me, well, I'm just here to add some pizzazz to the QEMU multi-queue block layer :). Thank you to everyone who helped with this effort, including Eric Blake, code reviewer extraordinaire, and others who I've forgotten to mention.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Message-ID: <20231205182011.1976568-11-stefanha@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
95bbddf9 | 05-Dec-2023 |
Stefan Hajnoczi <stefanha@redhat.com> |
aio-wait: draw equivalence between AIO_WAIT_WHILE() and AIO_WAIT_WHILE_UNLOCKED()
Now that the AioContext lock no longer exists, AIO_WAIT_WHILE() and AIO_WAIT_WHILE_UNLOCKED() are equivalent.
A future patch will get rid of AIO_WAIT_WHILE_UNLOCKED().
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Message-ID: <20231205182011.1976568-10-stefanha@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
c43d5bc8 | 05-Dec-2023 |
Stefan Hajnoczi <stefanha@redhat.com> |
block: remove bdrv_co_lock()
The bdrv_co_lock() and bdrv_co_unlock() functions are already no-ops. Remove them.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Message-ID: <20231205182011.1976568-8-stefanha@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
b49f4755 | 05-Dec-2023 |
Stefan Hajnoczi <stefanha@redhat.com> |
block: remove AioContext locking
This is the big patch that removes aio_context_acquire()/aio_context_release() from the block layer and affected block layer users.
There isn't a clean way to split this patch and the reviewers are likely the same group of people, so I decided to do it in one patch.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Paul Durrant <paul@xen.org> Message-ID: <20231205182011.1976568-7-stefanha@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
6bc30f19 | 05-Dec-2023 |
Stefan Hajnoczi <stefanha@redhat.com> |
graph-lock: remove AioContext locking
Stop acquiring/releasing the AioContext lock in bdrv_graph_wrlock()/bdrv_graph_wrunlock() since the lock no longer has any effect.
The distinction between bdrv_graph_wrunlock() and bdrv_graph_wrunlock_ctx() becomes meaningless and they can be collapsed into one function.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Message-ID: <20231205182011.1976568-6-stefanha@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
6bc0bcc8 | 15-Nov-2023 |
Kevin Wolf <kwolf@redhat.com> |
block: Fix deadlocks in bdrv_graph_wrunlock()
bdrv_graph_wrunlock() calls aio_poll(), which may run callbacks that have a nested event loop. Nested event loops can depend on other iothreads making progress, so in order to allow them to make progress it must not hold the AioContext lock of another thread while calling aio_poll().
This introduces a @bs parameter to bdrv_graph_wrunlock() whose AioContext is temporarily dropped (which matches bdrv_graph_wrlock()), and a bdrv_graph_wrunlock_ctx() that can be used if the BlockDriverState doesn't necessarily exist any more when unlocking.
This also requires a change to bdrv_schedule_unref(), which was relying on the incorrectly taken lock. It needs to take the lock itself now. While this is a separate bug, it can't be fixed in a separate patch because otherwise the intermediate state would either deadlock or try to release a lock that we don't even hold.
Signed-off-by: Kevin Wolf <kwolf@redhat.com> Message-ID: <20231115172012.112727-3-kwolf@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> [kwolf: Fixed up bdrv_schedule_unref()] Signed-off-by: Kevin Wolf <kwolf@redhat.com>
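A minimal sketch of the calling convention this patch establishes (the surrounding function is hypothetical; bdrv_graph_wrlock()/bdrv_graph_wrunlock()/bdrv_graph_wrunlock_ctx() follow the commit text, and note that the newer commit listed above collapses the two unlock variants again):

    /* Hypothetical caller illustrating the convention described above. */
    static void replace_node_under_wrlock(BlockDriverState *bs,
                                          BlockDriverState *from,
                                          BlockDriverState *to)
    {
        bdrv_graph_wrlock(bs);
        bdrv_replace_node(from, to, &error_abort);
        bdrv_graph_wrunlock(bs);  /* bs's AioContext is dropped around aio_poll() */
    }

    /* When the BlockDriverState may no longer exist at unlock time, the
     * variant taking a raw AioContext can be used instead:
     *     bdrv_graph_wrunlock_ctx(ctx);
     */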
a4dbf3fe | 14-Nov-2023 |
Michael Tokarev <mjt@tls.msk.ru> |
include/block/ufs.h: spelling fix: setted
Fixes: bc4e68d362ec "hw/ufs: Initial commit for emulated Universal-Flash-Storage" Reviewed-by: Jeuk Kim <jeuk20.kim@samsung.com> Reviewed-by: Thomas Huth <thuth@redhat.com> Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
1f051dcb | 27-Oct-2023 |
Kevin Wolf <kwolf@redhat.com> |
block: Protect bs->file with graph_lock
Almost all functions that access bs->file already take the graph lock now. Add locking to the remaining users and finally annotate the struct field itself as protected by the graph lock.
Signed-off-by: Kevin Wolf <kwolf@redhat.com> Message-ID: <20231027155333.420094-25-kwolf@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
79a55866 | 27-Oct-2023 |
Kevin Wolf <kwolf@redhat.com> |
block: Add missing GRAPH_RDLOCK annotations
This adds GRAPH_RDLOCK to some driver callbacks that are already called with the graph lock held, and which will need the annotation because they access bs->file, but don't have it yet.
This also covers a few callbacks that were not marked GRAPH_RDLOCK before, but where updating BlockDriver is trivially possible.
Signed-off-by: Kevin Wolf <kwolf@redhat.com> Message-ID: <20231027155333.420094-21-kwolf@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
e2dd2737 | 27-Oct-2023 |
Kevin Wolf <kwolf@redhat.com> |
block: Introduce bdrv_co_change_backing_file()
bdrv_change_backing_file() is called both inside and outside coroutine context. This makes it difficult for it to take the graph lock internally. It also means that driver implementations need to be able to run outside of coroutines, too. Switch it to the usual model with a coroutine based implementation and a co_wrapper instead. The new function is marked GRAPH_RDLOCK.
As the co_wrapper now runs the function in the AioContext of the node (as it should always have done), this is not GLOBAL_STATE_CODE() any more.
Signed-off-by: Kevin Wolf <kwolf@redhat.com> Message-ID: <20231027155333.420094-20-kwolf@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
004915a9 | 27-Oct-2023 |
Kevin Wolf <kwolf@redhat.com> |
block: Protect bs->backing with graph_lock
Almost all functions that access bs->backing already take the graph lock now. Add locking to the remaining users and finally annotate the struct field itself as protected by the graph lock.
Signed-off-by: Kevin Wolf <kwolf@redhat.com> Message-ID: <20231027155333.420094-18-kwolf@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
ccd6a379 | 27-Oct-2023 |
Kevin Wolf <kwolf@redhat.com> |
block: Mark bdrv_replace_node() GRAPH_WRLOCK
Instead of taking the writer lock internally, require callers to already hold it when calling bdrv_replace_node(). Its callers may already want to hold the graph lock and so wouldn't be able to call functions that take it internally.
Signed-off-by: Kevin Wolf <kwolf@redhat.com> Message-ID: <20231027155333.420094-17-kwolf@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
d0f9fd94 | 27-Oct-2023 |
Kevin Wolf <kwolf@redhat.com> |
block: Mark bdrv_set_backing_hd_drained() GRAPH_WRLOCK
Instead of taking the writer lock internally, require callers to already hold it when calling bdrv_set_backing_hd_drained(). Basically everything in the function needs the lock, and its callers may already want to hold the graph lock and so wouldn't be able to call functions that take it internally.
Signed-off-by: Kevin Wolf <kwolf@redhat.com> Message-ID: <20231027155333.420094-14-kwolf@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
78a9c76e | 27-Oct-2023 |
Kevin Wolf <kwolf@redhat.com> |
block: Mark bdrv_cow_child() and callers GRAPH_RDLOCK
This adds GRAPH_RDLOCK annotations to declare that callers of bdrv_cow_child() need to hold a reader lock for the graph because it accesses bs->backing.
Signed-off-by: Kevin Wolf <kwolf@redhat.com> Message-ID: <20231027155333.420094-13-kwolf@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
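Several commits in this series follow the same annotation pattern; as a rough sketch of what it looks like at a call site (the function below is hypothetical; GRAPH_RDLOCK_GUARD() and bdrv_cow_child() are real QEMU names):

    /* Hypothetical coroutine caller: hold the graph reader lock for the
     * current scope, then access bs->backing via bdrv_cow_child(). */
    static int coroutine_fn has_cow_child_sketch(BlockDriverState *bs)
    {
        GRAPH_RDLOCK_GUARD();   /* reader lock held until end of scope */
        return bdrv_cow_child(bs) != NULL;
    }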
ec82cc41 | 27-Oct-2023 |
Kevin Wolf <kwolf@redhat.com> |
block: Mark bdrv_filter_child() and callers GRAPH_RDLOCK
This adds GRAPH_RDLOCK annotations to declare that callers of bdrv_filter_child() need to hold a reader lock for the graph because it accesses bs->file/backing.
Signed-off-by: Kevin Wolf <kwolf@redhat.com> Message-ID: <20231027155333.420094-12-kwolf@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
79bb7627 | 27-Oct-2023 |
Kevin Wolf <kwolf@redhat.com> |
block: Mark bdrv_chain_contains() and callers GRAPH_RDLOCK
This adds GRAPH_RDLOCK annotations to declare that callers of bdrv_chain_contains() need to hold a reader lock for the graph because it calls bdrv_filter_or_cow_bs(), which accesses bs->file/backing.
Signed-off-by: Kevin Wolf <kwolf@redhat.com> Message-ID: <20231027155333.420094-11-kwolf@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
9275fc72 | 27-Oct-2023 |
Kevin Wolf <kwolf@redhat.com> |
block: Mark bdrv_(un)freeze_backing_chain() and callers GRAPH_RDLOCK
This adds GRAPH_RDLOCK annotations to declare that callers of bdrv_(un)freeze_backing_chain() need to hold a reader lock for the graph because it calls bdrv_filter_or_cow_child(), which accesses bs->file/backing.
Use the opportunity to make bdrv_is_backing_chain_frozen() static; it has no external callers.
Signed-off-by: Kevin Wolf <kwolf@redhat.com> Message-ID: <20231027155333.420094-10-kwolf@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>