History log of /qemu/block/block-backend.c (Results 1 – 25 of 481)
Revision Date Author Comments
Revision tags: v9.0.0-rc2, v9.0.0-rc1
# bac09b09 22-Mar-2024 Fiona Ebner <f.ebner@proxmox.com>

block-backend: fix edge case in bdrv_next_cleanup() where BDS associated to BB changes

Same rationale as for commit "block-backend: fix edge case in
bdrv_next() where BDS associated to BB changes". The block graph might
change between the bdrv_next() call and the bdrv_next_cleanup() call,
so it could be that the associated BDS is not the same that was
referenced previously anymore. Instead, rely on bdrv_next() to set
it->bs to the BDS it referenced and unreference that one in any case.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Message-ID: <20240322095009.346989-4-f.ebner@proxmox.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>


# f6d38c9f 22-Mar-2024 Fiona Ebner <f.ebner@proxmox.com>

block-backend: fix edge case in bdrv_next() where BDS associated to BB changes

The old_bs variable in bdrv_next() is currently determined by looking
at the old block backend. However, if the block graph changes before
the next bdrv_next() call, it might be that the associated BDS is not
the same that was referenced previously. In that case, the wrong BDS
is unreferenced, leading to an assertion failure later:

> bdrv_unref: Assertion `bs->refcnt > 0' failed.

In particular, this can happen in the context of bdrv_flush_all(),
when polling for bdrv_co_flush() in the generated co-wrapper leads to
a graph change (for example with a stream block job [0]).

A racy reproducer:

> #!/bin/bash
> rm -f /tmp/backing.qcow2
> rm -f /tmp/top.qcow2
> ./qemu-img create /tmp/backing.qcow2 -f qcow2 64M
> ./qemu-io -c "write -P42 0x0 0x1" /tmp/backing.qcow2
> ./qemu-img create /tmp/top.qcow2 -f qcow2 64M -b /tmp/backing.qcow2 -F qcow2
> ./qemu-system-x86_64 --qmp stdio \
> --blockdev qcow2,node-name=node0,file.driver=file,file.filename=/tmp/top.qcow2 \
> <<EOF
> {"execute": "qmp_capabilities"}
> {"execute": "block-stream", "arguments": { "job-id": "stream0", "device": "node0" } }
> {"execute": "quit"}
> EOF

[0]:

> #0 bdrv_replace_child_tran (child=..., new_bs=..., tran=...)
> #1 bdrv_replace_node_noperm (from=..., to=..., auto_skip=..., tran=..., errp=...)
> #2 bdrv_replace_node_common (from=..., to=..., auto_skip=..., detach_subchain=..., errp=...)
> #3 bdrv_drop_filter (bs=..., errp=...)
> #4 bdrv_cor_filter_drop (cor_filter_bs=...)
> #5 stream_prepare (job=...)
> #6 job_prepare_locked (job=...)
> #7 job_txn_apply_locked (fn=..., job=...)
> #8 job_do_finalize_locked (job=...)
> #9 job_exit (opaque=...)
> #10 aio_bh_poll (ctx=...)
> #11 aio_poll (ctx=..., blocking=...)
> #12 bdrv_poll_co (s=...)
> #13 bdrv_flush (bs=...)
> #14 bdrv_flush_all ()
> #15 do_vm_stop (state=..., send_stop=...)
> #16 vm_shutdown ()

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Message-ID: <20240322095009.346989-3-f.ebner@proxmox.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
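
A minimal standalone model of the pattern this fix and the previous one
rely on (invented names and types, not QEMU code): the iterator keeps
the reference it took, so the next step unreferences exactly that node
even if the graph was rewritten in between. Build with:
cc -o ref_iter ref_iter.c

    #include <assert.h>
    #include <stdio.h>

    typedef struct Node {
        int refcnt;
    } Node;

    static void node_ref(Node *n)   { n->refcnt++; }
    static void node_unref(Node *n) { assert(n->refcnt > 0); n->refcnt--; }

    typedef struct Iter {
        Node *cur; /* the node this iterator referenced last time */
    } Iter;

    /* Advance: drop exactly the node we referenced before, not whatever
     * an external mapping (like blk_bs()) happens to point at now. */
    static Node *iter_next(Iter *it, Node *next)
    {
        if (it->cur) {
            node_unref(it->cur); /* safe even if the graph changed */
        }
        it->cur = next;
        if (next) {
            node_ref(next);
        }
        return next;
    }

    int main(void)
    {
        Node a = { .refcnt = 1 }, b = { .refcnt = 1 };
        Iter it = { 0 };

        iter_next(&it, &a);
        /* ...a graph change here may redirect external pointers to b... */
        iter_next(&it, &b);   /* still unreferences a, the right node */
        iter_next(&it, NULL); /* cleanup: drop whatever we still hold */
        printf("a=%d b=%d\n", a.refcnt, b.refcnt); /* back to 1 1 */
        return 0;
    }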


Revision tags: v9.0.0-rc0, v8.2.2, v7.2.10
# ad893672 02-Feb-2024 Hanna Czenczek <hreitz@redhat.com>

block-backend: Allow concurrent context changes

Since AioContext locks have been removed, a BlockBackend's AioContext
may really change at any time (only exception is that it is often
confined to a drained section, as noted in this patch). Therefore,
blk_get_aio_context() cannot rely on its root node's context always
matching that of the BlockBackend.

In practice, whether they match does not matter anymore anyway: Requests
can be sent to BDSs from any context, so anyone who requests the BB's
context should have no reason to require the root node to have the same
context. Therefore, we can and should remove the assertion to that
effect.

In addition, because the context can be set and queried from different
threads concurrently, it has to be accessed with atomic operations.

Buglink: https://issues.redhat.com/browse/RHEL-19381
Suggested-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Hanna Czenczek <hreitz@redhat.com>
Message-ID: <20240202144755.671354-2-hreitz@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
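
A standalone sketch of the "access the context pointer atomically" idea,
using C11 atomics rather than QEMU's qatomic_*() helpers; BlkLike and
CtxLike are invented stand-ins, not QEMU types:

    #include <stdatomic.h>
    #include <stdio.h>

    typedef struct CtxLike { const char *name; } CtxLike;

    typedef struct BlkLike {
        CtxLike *_Atomic ctx; /* may be rewritten by another thread */
    } BlkLike;

    static CtxLike *blk_get_ctx(BlkLike *blk)
    {
        /* No assertion that the root node's context matches: it may lag. */
        return atomic_load_explicit(&blk->ctx, memory_order_acquire);
    }

    static void blk_set_ctx(BlkLike *blk, CtxLike *ctx)
    {
        atomic_store_explicit(&blk->ctx, ctx, memory_order_release);
    }

    int main(void)
    {
        CtxLike main_ctx = { "main-loop" }, io_ctx = { "iothread0" };
        BlkLike blk = { &main_ctx };

        blk_set_ctx(&blk, &io_ctx); /* a concurrent setter elsewhere */
        printf("%s\n", blk_get_ctx(&blk)->name);
        return 0;
    }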


Revision tags: v8.2.1, v8.1.5, v7.2.9, v8.1.4, v7.2.8, v8.2.0, v8.2.0-rc4, v8.2.0-rc3
# 23c983c8 05-Dec-2023 Stefan Hajnoczi <stefanha@redhat.com>

block: remove outdated AioContext locking comments

The AioContext lock no longer exists.

There is one noteworthy change:

- * More specifically, these functions use BDRV_POLL_WHILE(bs), which
- * requires the caller to be either in the main thread and hold
- * the BlockdriverState (bs) AioContext lock, or directly in the
- * home thread that runs the bs AioContext. Calling them from
- * another thread in another AioContext would cause deadlocks.
+ * More specifically, these functions use BDRV_POLL_WHILE(bs), which requires
+ * the caller to be either in the main thread or directly in the home thread
+ * that runs the bs AioContext. Calling them from another thread in another
+ * AioContext would cause deadlocks.

I am not sure whether deadlocks are still possible. Maybe they have just
moved to the fine-grained locks that have replaced the AioContext. Since
I am not sure if the deadlocks are gone, I have kept the substance
unchanged and just removed mention of the AioContext.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-ID: <20231205182011.1976568-15-stefanha@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>


# b49f4755 05-Dec-2023 Stefan Hajnoczi <stefanha@redhat.com>

block: remove AioContext locking

This is the big patch that removes
aio_context_acquire()/aio_context_release() from the block layer and
affected block layer users.

There isn't a clean way to split this patch and the reviewers are likely
the same group of people, so I decided to do it in one patch.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Paul Durrant <paul@xen.org>
Message-ID: <20231205182011.1976568-7-stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>


# 6bc30f19 05-Dec-2023 Stefan Hajnoczi <stefanha@redhat.com>

graph-lock: remove AioContext locking

Stop acquiring/releasing the AioContext lock in
bdrv_graph_wrlock()/bdrv_graph_unlock() since the lock no longer has any
effect.

The distinction between bdrv_graph_wrunlock() and
bdrv_graph_wrunlock_ctx() becomes meaningless and they can be collapsed
into one function.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20231205182011.1976568-6-stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>


Revision tags: v8.2.0-rc2, v8.2.0-rc1, v7.2.7, v8.1.3
# 6bc0bcc8 15-Nov-2023 Kevin Wolf <kwolf@redhat.com>

block: Fix deadlocks in bdrv_graph_wrunlock()

bdrv_graph_wrunlock() calls aio_poll(), which may run callbacks that
have a nested event loop. Nested event loops can depend on other
iothreads making progress, so in order to allow them to make progress it
must not hold the AioContext lock of another thread while calling
aio_poll().

This introduces a @bs parameter to bdrv_graph_wrunlock() whose
AioContext is temporarily dropped (which matches bdrv_graph_wrlock()),
and a bdrv_graph_wrunlock_ctx() that can be used if the BlockDriverState
doesn't necessarily exist any more when unlocking.

This also requires a change to bdrv_schedule_unref(), which was relying
on the incorrectly taken lock. It needs to take the lock itself now.
While this is a separate bug, it can't be fixed in a separate patch because
otherwise the intermediate state would either deadlock or try to release
a lock that we don't even hold.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20231115172012.112727-3-kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
[kwolf: Fixed up bdrv_schedule_unref()]
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
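
A standalone model of the rule this fix enforces (pthread stand-ins and
invented names, not QEMU code): never hold another thread's lock while
polling for progress that depends on that thread; drop it around the
wait, as the @bs AioContext is dropped around aio_poll() here. Build
with: cc -pthread -o drop_while_poll drop_while_poll.c

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <unistd.h>

    static pthread_mutex_t ctx_lock = PTHREAD_MUTEX_INITIALIZER;
    static atomic_bool done;

    /* This thread needs ctx_lock to make progress. */
    static void *iothread(void *arg)
    {
        (void)arg;
        pthread_mutex_lock(&ctx_lock);
        atomic_store(&done, true);
        pthread_mutex_unlock(&ctx_lock);
        return NULL;
    }

    int main(void)
    {
        pthread_t t;

        pthread_mutex_lock(&ctx_lock); /* caller arrives holding the lock */
        pthread_create(&t, NULL, iothread, NULL);

        /* Polling while still holding ctx_lock would deadlock, so
         * temporarily drop it around the wait. */
        pthread_mutex_unlock(&ctx_lock);
        while (!atomic_load(&done)) {
            usleep(1000); /* stand-in for one aio_poll() iteration */
        }
        pthread_mutex_lock(&ctx_lock);

        pthread_mutex_unlock(&ctx_lock);
        pthread_join(t, NULL);
        printf("no deadlock\n");
        return 0;
    }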


# bb092d6d 15-Nov-2023 Kevin Wolf <kwolf@redhat.com>

block: Fix bdrv_graph_wrlock() call in blk_remove_bs()

While not all callers of blk_remove_bs() are correct in this respect,
the assumption in the function is that callers hold the AioContext lock
of the BlockBackend (this is required by the drain calls in it).

In order to avoid deadlock in the nested event loop, bdrv_graph_wrlock()
has then to be called with the root BlockDriverState as its parameter
instead of NULL, so that this AioContext lock is temporarily dropped.

Fixes: https://issues.redhat.com/browse/RHEL-1761
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20231115172012.112727-2-kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>


Revision tags: v8.2.0-rc0
# ad74751f 27-Oct-2023 Kevin Wolf <kwolf@redhat.com>

block: Mark bdrv_skip_filters() and callers GRAPH_RDLOCK

This adds GRAPH_RDLOCK annotations to declare that callers of
bdrv_skip_filters() need to hold a reader lock for the graph because it
calls bdrv_filter_child(), which accesses bs->file/backing.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20231027155333.420094-9-kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
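
These GRAPH_RDLOCK/GRAPH_WRLOCK markers are compile-time annotations. A
standalone sketch of how such a requirement propagates to callers,
written against clang's thread safety analysis attributes directly
(QEMU wraps the same attributes in its macros; all names below are
invented): check with clang -Wthread-safety -fsyntax-only tsa_sketch.c

    /* A capability standing in for the block graph lock. */
    struct __attribute__((capability("graph-lock"))) Graph { int dummy; };
    struct Graph graph;

    /* Touches bs->file/backing equivalents: needs the reader lock. */
    __attribute__((requires_shared_capability(graph)))
    int filter_child(void);

    /* The caller therefore carries the same requirement; without this
     * attribute, -Wthread-safety flags the call below. */
    __attribute__((requires_shared_capability(graph)))
    int skip_filters(void)
    {
        return filter_child();
    }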


# 03b9eaca 27-Oct-2023 Kevin Wolf <kwolf@redhat.com>

block: Mark bdrv_root_attach_child() GRAPH_WRLOCK

Instead of taking the writer lock internally, require callers to already
hold it when calling bdrv_root_attach_child(). These callers will
typically already hold the graph lock once the locking work is
completed, which means that they can't call functions that take it
internally.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20231027155333.420094-5-kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>


# 221caadc 27-Oct-2023 Kevin Wolf <kwolf@redhat.com>

block: Mark bdrv_probe_blocksizes() and callers GRAPH_RDLOCK

This adds GRAPH_RDLOCK annotations to declare that callers of
bdrv_probe_blocksizes() need to hold a reader lock for the graph because
it calls bdrv_filter_bs(), which accesses bs->file/backing.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20231027155333.420094-2-kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>


Revision tags: v8.1.2
# 277f2007 29-Sep-2023 Kevin Wolf <kwolf@redhat.com>

block: Mark bdrv_op_is_blocked() and callers GRAPH_RDLOCK

This adds GRAPH_RDLOCK annotations to declare that callers of
bdrv_op_is_blocked() need to hold a reader lock for the graph
because it calls bdrv_get_device_or_node_name(), which accesses the
parents list of a node.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20230929145157.45443-18-kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>


# 0bb79c97 29-Sep-2023 Kevin Wolf <kwolf@redhat.com>

qcow2: Mark qcow2_signal_corruption() and callers GRAPH_RDLOCK

This adds GRAPH_RDLOCK annotations to declare that callers of
qcow2_signal_corruption() need to hold a reader lock for the graph
because it calls bdrv_get_node_name(), which accesses the parents list
of a node.

For some places, we know that they will hold the lock, but we don't have
the GRAPH_RDLOCK annotations yet. In this case, add assume_graph_lock()
with a FIXME comment. These places will be removed once everything is
properly annotated.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20230929145157.45443-15-kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
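
The interim assume_graph_lock() pattern can be modelled with clang's
thread-safety assertion attribute (invented names below; QEMU's
assume_graph_lock() plays the same role): check with
clang -Wthread-safety -fsyntax-only assume_lock.c

    struct __attribute__((capability("graph-lock"))) Graph { int dummy; };
    struct Graph graph;

    __attribute__((requires_shared_capability(graph)))
    int get_node_name(void);

    /* Tells the analysis the capability is held from this point on. */
    __attribute__((assert_shared_capability(graph)))
    void assume_lock(void);

    int not_annotated_yet(void)
    {
        /* FIXME annotate the whole call chain instead. */
        assume_lock();
        return get_node_name();
    }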


# 2b3912f1 29-Sep-2023 Kevin Wolf <kwolf@redhat.com>

block: Mark bdrv_first_blk() and bdrv_is_root_node() GRAPH_RDLOCK

This adds GRAPH_RDLOCK annotations to declare that callers of
bdrv_first_blk() and bdrv_is_root_node() need to hold a reader lock
for the graph. These functions are the only functions in block-backend.c
that access the parent list of a node.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20230929145157.45443-5-kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>


Revision tags: v8.1.1, v7.2.6, v8.0.5
# 0f86afde 12-Sep-2023 Stefan Hajnoczi <stefanha@redhat.com>

block-backend: process zoned requests in the current AioContext

Process zoned requests in the current thread's AioContext instead of in
the BlockBackend's AioContext.

There is no need to use the BlockBackend's AioContext thanks to CoMutex
bs->wps->colock, which protects zone metadata.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-ID: <20230912231037.826804-5-stefanha@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>


# 46eb6e86 12-Sep-2023 Stefan Hajnoczi <stefanha@redhat.com>

block-backend: process I/O in the current AioContext

Switch blk_aio_*() APIs over to multi-queue by using
qemu_get_current_aio_context() instead of blk_get_aio_context(). This
change will allow devices to process I/O in multiple IOThreads in the
future.

I audited existing blk_aio_*() callers:
- migration/block.c: blk_mig_lock() protects the data accessed by the
completion callback.
- The remaining emulated devices and exports run with
qemu_get_aio_context() == blk_get_aio_context().

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-ID: <20230912231037.826804-4-stefanha@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
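
A standalone model of the switch (invented names, not QEMU code): the
submission path asks for the calling thread's context, in the spirit of
qemu_get_current_aio_context(), instead of the context stored in the
backend. Build with: cc -o current_ctx current_ctx.c

    #include <stdio.h>

    typedef struct Ctx { const char *name; } Ctx;

    /* Per-thread "current context", standing in for what
     * qemu_get_current_aio_context() would report. */
    static _Thread_local Ctx *current_ctx;

    typedef struct Blk { Ctx *home_ctx; } Blk;

    static void submit_io(Blk *blk)
    {
        /* Before: always blk->home_ctx. After: whoever runs right now,
         * which is what lets multiple IOThreads submit I/O. */
        Ctx *ctx = current_ctx;
        printf("I/O handled in %s (backend home: %s)\n",
               ctx->name, blk->home_ctx->name);
    }

    int main(void)
    {
        Ctx main_ctx = { "main-loop" };
        Ctx io_ctx = { "iothread0" };
        Blk blk = { &main_ctx };

        current_ctx = &io_ctx; /* pretend this thread is an IOThread */
        submit_io(&blk);       /* handled in the caller's context */
        return 0;
    }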


# 652b0dd8 12-Sep-2023 Stefan Hajnoczi <stefanha@redhat.com>

block: remove AIOCBInfo->get_aio_context()

The synchronous bdrv_aio_cancel() function needs the acb's AioContext so
it can call aio_poll() to wait for cancellation.

It turns out that all users run under the BQL in the main AioContext, so
this callback is not needed.

Remove the callback, mark bdrv_aio_cancel() GLOBAL_STATE_CODE just like
its blk_aio_cancel() caller, and poll the main loop AioContext.

The purpose of this cleanup is to identify bdrv_aio_cancel() as an API
that does not work with the multi-queue block layer.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-ID: <20230912231037.826804-2-stefanha@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Klaus Jensen <k.jensen@samsung.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>


# ede01e46 11-Sep-2023 Kevin Wolf <kwolf@redhat.com>

block: Mark bdrv_root_unref_child() GRAPH_WRLOCK

Instead of taking the writer lock internally, require callers to already
hold it when calling bdrv_root_unref_child(). These callers will
typically already hold the graph lock once the locking work is
completed, which means that they can't call functions that take it
internally.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-ID: <20230911094620.45040-20-kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>


# 3804e3cf 11-Sep-2023 Kevin Wolf <kwolf@redhat.com>

block: Mark bdrv_parent_perms_conflict() and callers GRAPH_RDLOCK

The function reads the parents list, so it needs to hold the graph lock.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-ID: <20230911094620.45040-14-kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

