History log of /qemu/blockjob.c (Results 101 – 125 of 289)
Revision Date Author Comments
Revision tags: v4.1.0, v4.1.0-rc5
# db725815 12-Aug-2019 Markus Armbruster <armbru@redhat.com>

Include qemu/main-loop.h less

In my "build everything" tree, changing qemu/main-loop.h triggers a
recompile of some 5600 out of 6600 objects (not counting tests and
objects that don't depend on qemu/osdep.h). It includes block/aio.h,
which in turn includes qemu/event_notifier.h, qemu/notify.h,
qemu/processor.h, qemu/qsp.h, qemu/queue.h, qemu/thread-posix.h,
qemu/thread.h, qemu/timer.h, and a few more.

Include qemu/main-loop.h only where it's needed. Touching it now
recompiles only some 1700 objects. For block/aio.h and
qemu/event_notifier.h, these numbers drop from 5600 to 2800. For the
others, they shrink only slightly.

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20190812052359.30071-21-armbru@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>

Revision tags: v4.1.0-rc4, v3.1.1, v4.1.0-rc3, v4.1.0-rc2
# cf312932 22-Jul-2019 Kevin Wolf <kwolf@redhat.com>

block-backend: Queue requests while drained

This fixes devices like IDE that can still start new requests from I/O
handlers in the CPU thread while the block backend is drained.

The basic assumption is that in a drain section, no new requests should
be allowed through a BlockBackend (blk_drained_begin/end don't exist;
we get drain sections only on the node level). However, there are two
special cases where requests should not be queued:

1. Block jobs: We already make sure that block jobs are paused in a
drain section, so they won't start new requests. However, if the
drain_begin is called on the job's BlockBackend first, it can happen
that we deadlock because the job stays busy until it reaches a pause
point - which it can't if its requests aren't processed any more.

The proper solution here would be to make all requests through the
job's filter node instead of using a BlockBackend. For now, just
disabling request queuing on the job BlockBackend is simpler.

2. In test cases where making requests through bdrv_* would be
cumbersome because we'd need a BdrvChild. Since we already have the
functionality to disable request queuing from 1., use it in tests,
too, for convenience.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
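
To picture the queuing rule described above, here is a minimal
self-contained C sketch; BlockBackendSketch and its fields are
illustrative stand-ins, not QEMU's real BlockBackend:

    #include <stdbool.h>

    /* Hypothetical, simplified stand-in for BlockBackend. */
    typedef struct BlockBackendSketch {
        int quiesce_counter;           /* > 0 inside a drain section */
        bool disable_request_queuing;  /* set for block jobs and tests */
        int queued_requests;           /* parked until drained_end */
    } BlockBackendSketch;

    /* Returns true if the request may start now, false if parked. */
    static bool blk_sketch_start_request(BlockBackendSketch *blk)
    {
        if (blk->quiesce_counter > 0 && !blk->disable_request_queuing) {
            blk->queued_requests++;    /* resumed when drain ends */
            return false;
        }
        return true;
    }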

# e037c09c 19-Jul-2019 Max Reitz <mreitz@redhat.com>

block: Do not poll in bdrv_do_drained_end()

We should never poll anywhere in bdrv_do_drained_end() (including its
recursive callees like bdrv_drain_invoke()), because it does not cope
well with graph changes. In fact, it has been written based on the
postulation that no graph changes will happen in it.

Instead, the callers that want to poll must poll, i.e. all currently
globally available wrappers: bdrv_drained_end(),
bdrv_subtree_drained_end(), bdrv_unapply_subtree_drain(), and
bdrv_drain_all_end(). Graph changes there do not matter.

They can poll simply by passing a pointer to a drained_end_counter and
wait until it reaches 0.

This patch also adds a non-polling global wrapper for
bdrv_do_drained_end() that takes a drained_end_counter pointer. We
need such a variant because no function called anywhere from
bdrv_do_drained_end() is allowed to poll any more. This includes
BdrvChildRole.drained_end(), which already must not poll according to
its interface documentation, but bdrv_child_cb_drained_end() just
violates that by invoking bdrv_drained_end() (which does poll).
Therefore, BdrvChildRole.drained_end() must take a *drained_end_counter
parameter, which bdrv_child_cb_drained_end() can pass on to the new
bdrv_drained_end_no_poll() function.

Note that we now have a pattern of all drained_end-related functions
either polling or receiving a *drained_end_counter to let the caller
poll based on that.

A problem with a single poll loop is that when the drained section in
bdrv_set_aio_context_ignore() ends, some nodes in the subgraph may be in
the old contexts, while others are in the new context already. To let
the collective poll in bdrv_drained_end() work correctly, we must not
hold a lock to the old context, so that the old context can make
progress in case it is different from the current context.

(In the process, remove the comment saying that the current context is
always the old context, because it is wrong.)

In all other places, all nodes in a subtree must be in the same context,
so we can just poll that. The exception of course is
bdrv_drain_all_end(), but that always runs in the main context, so we
can just poll NULL (like bdrv_drain_all_begin() does).

Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
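
The counter pattern can be sketched like this (a simplified,
self-contained illustration; run_pending_events_sketch stands in for
aio_poll() and is hypothetical):

    /* Stub event pump standing in for aio_poll(); here it just
     * pretends all scheduled asynchronous work has completed. */
    static void run_pending_events_sketch(int *counter)
    {
        *counter = 0;
    }

    /* Non-polling variant: registers asynchronous drained_end work
     * by bumping the counter; each completion decrements it again. */
    static void drained_end_no_poll_sketch(int *counter)
    {
        (*counter)++;  /* one pending asynchronous completion */
    }

    /* Only the top-level wrapper polls, after all recursion is done,
     * so graph changes during the poll can no longer confuse it. */
    static void drained_end_sketch(void)
    {
        int counter = 0;
        drained_end_no_poll_sketch(&counter);
        while (counter > 0) {
            run_pending_events_sketch(&counter);
        }
    }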

Revision tags: v4.1.0-rc1, v4.1.0-rc0
# b23c580c 06-Jun-2019 Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

block: drop bs->job

Drop remaining users of bs->job:
1. assertions that are already duplicated by assert(!bs->refcnt)
2. a trace point is not reason enough to change stream_start to return
a BlockJob pointer
3. restricting the creation of two jobs based on the same bs is a bad
idea, because:
3.1 some jobs create filters to be their main node, so this check
doesn't actually prevent creating a second job on the same real node
(which would create another filter node), though I hope that is
restricted by other mechanisms
3.2 even without bs->job we have two systems of permissions:
op-blockers and BLK_PERM
3.3 we may want to run several jobs on one node one day

And finally, drop bs->job pointer itself. Hurrah!

Suggested-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

# 8164102f 06-Jun-2019 Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

blockdev: blockdev_mark_auto_del: drop usage of bs->job

We are going to remove the bs->job pointer. Drop its usage in
blockdev_mark_auto_del: instead of looking at bs->job, check all
jobs for references to bs.

Suggested-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
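
The replacement lookup might look roughly like this sketch (JobSketch
and any_job_uses_node are hypothetical; the real code iterates block
jobs and inspects their BdrvChild lists):

    #include <stdbool.h>
    #include <stddef.h>

    /* Hypothetical flat job list; QEMU walks real jobs instead. */
    typedef struct JobSketch {
        struct JobSketch *next;
        void **nodes;      /* block nodes the job holds references to */
        size_t n_nodes;
    } JobSketch;

    static JobSketch *job_list;

    /* Instead of consulting bs->job, ask every job whether it
     * references the node in question. */
    static bool any_job_uses_node(void *bs)
    {
        for (JobSketch *job = job_list; job; job = job->next) {
            for (size_t i = 0; i < job->n_nodes; i++) {
                if (job->nodes[i] == bs) {
                    return true;
                }
            }
        }
        return false;
    }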

# a8d25326 23-May-2019 Markus Armbruster <armbru@redhat.com>

Include qemu-common.h exactly where needed

No header includes qemu-common.h after this commit, as prescribed by
qemu-common.h's file comment.

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20190523143508.25387-5-armbru@redhat.com>
[Rebased with conflicts resolved automatically, except for
include/hw/arm/xlnx-zynqmp.h hw/arm/nrf51_soc.c hw/arm/msf2-soc.c
block/qcow2-refcount.c block/qcow2-cluster.c block/qcow2-cache.c
target/arm/cpu.h target/lm32/cpu.h target/m68k/cpu.h target/mips/cpu.h
target/moxie/cpu.h target/nios2/cpu.h target/openrisc/cpu.h
target/riscv/cpu.h target/tilegx/cpu.h target/tricore/cpu.h
target/unicore32/cpu.h target/xtensa/cpu.h; bsd-user/main.c and
net/tap-bsd.c fixed up]

# 132ada80 24-Apr-2019 Kevin Wolf <kwolf@redhat.com>

block: Adjust AioContexts when attaching nodes

So far, we only made sure that updating the AioContext of a node
affected the whole subtree. However, if a node is newly attached to a
new parent, we also need to make sure that both the subtree of the node
and the parent are in the same AioContext. The attach operation now
tries to move the new child node to the parent's AioContext and
returns an error if this isn't possible.

BlockBackends now actually apply their AioContext to their root node.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
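
A rough sketch of the attach-time rule (all names here are
illustrative; the real code also recurses over the child's subtree):

    #include <stdbool.h>

    typedef struct CtxSketch CtxSketch;

    typedef struct NodeSketch {
        CtxSketch *ctx;
    } NodeSketch;

    /* Hypothetical feasibility check, permissive in this stub. */
    static bool node_can_set_ctx_sketch(NodeSketch *n, CtxSketch *ctx)
    {
        (void)n; (void)ctx;
        return true;
    }

    /* Attaching tries to pull the child into the parent's context and
     * fails instead of leaving the two in different contexts. */
    static int attach_child_sketch(NodeSketch *parent, NodeSketch *child)
    {
        if (child->ctx != parent->ctx) {
            if (!node_can_set_ctx_sketch(child, parent->ctx)) {
                return -1;
            }
            child->ctx = parent->ctx;
        }
        return 0;
    }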

# d861ab3a 25-Apr-2019 Kevin Wolf <kwolf@redhat.com>

block: Add BlockBackend.ctx

This adds a new parameter to blk_new() which requires its callers to
declare from which AioContext this BlockBackend is going to be used (or
the locks of which AioContext need to be taken anyway).

The given context is only stored and kept up to date when changing
AioContexts. Actually applying the stored AioContext to the root node
is saved for another commit.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>

# b441dc71 13-May-2019 Alberto Garcia <berto@igalia.com>

block: Make bdrv_root_attach_child() unref child_bs on failure

A consequence of the previous patch is that bdrv_attach_child()
transfers the reference to child_bs from the caller to parent_bs,
which will drop it on bdrv_close() or when someone calls
bdrv_unref_child().

But this only happens when bdrv_attach_child() succeeds. If it fails
then the caller is responsible for dropping the reference to child_bs.

This patch makes bdrv_attach_child() take the reference also when
there is an error, freeing the caller from having to do it.

A similar situation happens with bdrv_root_attach_child(), so the
changes on this patch affect both functions.

Signed-off-by: Alberto Garcia <berto@igalia.com>
Message-id: 20dfb3d9ccec559cdd1a9690146abad5d204a186.1557754872.git.berto@igalia.com
[mreitz: Removed now superfluous BdrvChild * variable in
bdrv_open_child()]
Signed-off-by: Max Reitz <mreitz@redhat.com>
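
The ownership rule can be condensed into a sketch (RefNodeSketch and
the failure flag are illustrative; the real function reports errors
through Error **):

    #include <stdbool.h>

    typedef struct RefNodeSketch {
        int refcnt;
    } RefNodeSketch;

    static void node_unref_sketch(RefNodeSketch *n)
    {
        n->refcnt--;
    }

    /* The function consumes the caller's reference to child_bs in both
     * outcomes: on success the parent owns it from now on, on failure
     * it is dropped right here, so the caller never cleans up. */
    static int attach_child_take_ref_sketch(RefNodeSketch *parent,
                                            RefNodeSketch *child_bs,
                                            bool fail)
    {
        if (fail) {
            node_unref_sketch(child_bs);
            return -1;
        }
        (void)parent;  /* parent keeps the reference until detach */
        return 0;
    }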

# 657e1203 06-May-2019 Kevin Wolf <kwolf@redhat.com>

blockjob: Remove AioContext notifiers

The notifiers made sure that the job is quiesced and that the
job->aio_context field is updated. The first part is unnecessary today
since bdrv_set_aio_context_ignore() drains the block node, and this
means draining the block job, too. The second part can be done in the
.set_aio_ctx callback of the block job BdrvChildRole.

The notifiers were problematic because they poll the AioContext while
the graph is in an inconsistent state with some nodes already in the new
context, but others still in the old context. So removing the notifiers
not only simplifies the code, but actually makes the code safer.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>

# 9ff7f0df 06-May-2019 Kevin Wolf <kwolf@redhat.com>

blockjob: Propagate AioContext change to all job nodes

Block jobs require that all of the nodes the job is using are in the
same AioContext. Therefore all BdrvChild objects of the job propagate
.(can_)set_aio_context to all other job nodes, so that the switch is
checked and performed consistently even if both nodes are in different
subtrees.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
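
As a sketch of the consistency check (hypothetical names; the real
propagation goes through the BdrvChild .can_set_aio_context callbacks
and tracks already-visited nodes):

    #include <stdbool.h>
    #include <stddef.h>

    typedef struct CtxSketch CtxSketch;

    typedef struct JobNodesSketch {
        void **nodes;      /* every node the job holds a child for */
        size_t n_nodes;
    } JobNodesSketch;

    /* Permissive stub for the per-node feasibility check. */
    static bool node_can_set_ctx_sketch(void *node, CtxSketch *ctx)
    {
        (void)node; (void)ctx;
        return true;
    }

    /* The switch is allowed only if *every* node of the job accepts
     * it, even when the nodes sit in entirely different subtrees. */
    static bool job_can_set_ctx_sketch(JobNodesSketch *job, CtxSketch *c)
    {
        for (size_t i = 0; i < job->n_nodes; i++) {
            if (!node_can_set_ctx_sketch(job->nodes[i], c)) {
                return false;
            }
        }
        return true;
    }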

Revision tags: v4.0.0, v4.0.0-rc4, v3.0.1, v4.0.0-rc3, v4.0.0-rc2, v4.0.0-rc1, v4.0.0-rc0
# 8d9648cb 19-Mar-2019 Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

blockjob: fix user pause in block_job_error_action

A job (especially mirror) may call block_job_error_action several
times before it actually pauses if it has several in-flight requests.

block_job_error_action then calls job_pause more than once, with the
result that a following block-job-resume QMP command can't actually
resume the job.

Fix it by not increasing the pause level in block_job_error_action if
user_paused is already set.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
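
The fix boils down to a guard like the one in this sketch (the types
are illustrative; the real fields live in Job/BlockJob):

    #include <stdbool.h>

    typedef struct JobPauseSketch {
        int pause_count;
        bool user_paused;
    } JobPauseSketch;

    static void job_pause_sketch(JobPauseSketch *job)
    {
        job->pause_count++;
    }

    /* Before the fix, every error bumped pause_count, so a single
     * block-job-resume could no longer bring it back to zero. */
    static void error_action_pause_sketch(JobPauseSketch *job)
    {
        if (!job->user_paused) {
            job->user_paused = true;
            job_pause_sketch(job);  /* at most once per user pause */
        }
    }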

Revision tags: v3.1.0, v3.1.0-rc5, v3.1.0-rc4, v3.1.0-rc3, v3.1.0-rc2, v3.1.0-rc1, v3.1.0-rc0
# cfe29d82 18-Sep-2018 Kevin Wolf <kwolf@redhat.com>

block: Use a single global AioWait

When draining a block node, we recurse to its parent and for subtree
drains also to its children. A single AIO_WAIT_WHILE() is then used to
wait for bdrv_drain_poll() to become true, which depends on all of the
nodes we recursed to. However, if the respective child or parent becomes
quiescent and calls bdrv_wakeup(), only the AioWait of the child/parent
is checked, while AIO_WAIT_WHILE() depends on the AioWait of the
original node.

Fix this by using a single AioWait for all callers of AIO_WAIT_WHILE().

This may mean that the draining thread gets a few more unnecessary
wakeups because an unrelated operation got completed, but we already
wake it up when something _could_ have changed rather than only if it
has certainly changed.

Apart from that, drain is a slow path anyway. In theory it would be
possible to use wakeups more selectively and still correctly, but the
gains are likely not worth the additional complexity. In fact, this
patch is a nice simplification for some places in the code.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
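
The single-waiter idea reduces to something like this sketch
(single-threaded and illustrative; the real AIO_WAIT_WHILE sleeps in
aio_poll rather than spinning):

    #include <stdbool.h>

    /* One global wait object: a wakeup from any node reaches the one
     * and only polling loop. */
    static int global_num_waiters_sketch;
    static bool wakeup_pending_sketch;

    static void bdrv_wakeup_sketch(void)
    {
        if (global_num_waiters_sketch > 0) {
            wakeup_pending_sketch = true;  /* maybe spurious, but safe */
        }
    }

    static void aio_wait_while_sketch(bool (*still_busy)(void))
    {
        global_num_waiters_sketch++;
        while (still_busy()) {
            wakeup_pending_sketch = false;
            /* real code: aio_poll(ctx, true) until bdrv_wakeup()
             * fires, then re-evaluate the condition */
        }
        global_num_waiters_sketch--;
    }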

# b5a7a057 07-Sep-2018 Kevin Wolf <kwolf@redhat.com>

blockjob: Lie better in child_job_drained_poll()

Block jobs claim in .drained_poll() that they are in a quiescent state
as soon as job->deferred_to_main_loop is true. This is obviously wrong;
they still have a completion BH to run. We only get away with this
because commit 91af091f923 added an unconditional aio_poll(false) to the
drain functions, but this is bypassing the regular drain mechanisms.

However, just removing this and telling that the job is still active
doesn't work either: The completion callbacks themselves call drain
functions (directly, or indirectly with bdrv_reopen), so they would
deadlock then.

As a better lie, report the job as active as long as the BH is
pending, but falsely call it quiescent from the point in the BH where
the completion callback is called. At this point, nested drain calls won't
deadlock because they ignore the job, and outer drains will wait for the
job to really reach a quiescent state because the callback is already
running.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
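
The refined report could be sketched as follows (the field names
mirror the description above, but the struct is illustrative):

    #include <stdbool.h>

    typedef struct JobDrainSketch {
        bool busy;                   /* coroutine still doing work */
        bool deferred_to_main_loop;  /* completion BH scheduled */
        bool in_completion_cb;       /* set inside the BH just before
                                      * the completion callback runs */
    } JobDrainSketch;

    static bool drained_poll_sketch(JobDrainSketch *job)
    {
        if (job->in_completion_cb) {
            /* The "better lie": claim quiescence so nested drains
             * issued by the completion callback don't deadlock. */
            return false;
        }
        /* Still active while the completion BH is merely pending. */
        return job->busy || job->deferred_to_main_loop;
    }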

# 34dc97b9 17-Aug-2018 Kevin Wolf <kwolf@redhat.com>

blockjob: Wake up BDS when job becomes idle

In the context of draining a BDS, the .drained_poll callback of block
jobs is called. If this returns true (i.e. there is still some activity
pending), the drain operation may call aio_poll() with blocking=true to
wait for completion.

As soon as the pending activity is completed and the job finally arrives
in a quiescent state (i.e. its coroutine either yields with busy=false
or terminates), the block job must notify the aio_poll() loop to wake
up, otherwise we get a deadlock if both are running in different
threads.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>

# 3ab72385 15-Aug-2018 Peter Xu <peterx@redhat.com>

qapi: Drop qapi_event_send_FOO()'s Error ** argument

The generated qapi_event_send_FOO() functions take an Error ** argument. They
can't actually fail, because all they do with the argument is passing it
to functions that can't fail: the QObject output visitor, and the
@qmp_emit callback, which is either monitor_qapi_event_queue() or
event_test_emit().

Drop the argument, and pass &error_abort to the QObject output visitor
and @qmp_emit instead.

Suggested-by: Eric Blake <eblake@redhat.com>
Suggested-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20180815133747.25032-4-peterx@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
[Commit message rewritten, update to qapi-code-gen.txt corrected]
Signed-off-by: Markus Armbruster <armbru@redhat.com>
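
In outline, the change to each generated emitter looks like this
sketch (qapi_event_send_foo_sketch is hypothetical; error_abort_sketch
stands in for QEMU's abort-on-error sentinel):

    typedef struct Error Error;

    /* Stand-in for QEMU's abort-on-error sentinel &error_abort. */
    static Error *error_abort_sketch;

    /* Before: void qapi_event_send_FOO(..., Error **errp);
     * After:  void qapi_event_send_FOO(...);  -- no errp parameter. */
    static void qapi_event_send_foo_sketch(void)
    {
        Error **errp = &error_abort_sketch;  /* failures would abort */
        (void)errp;
        /* ... run the QObject output visitor and @qmp_emit here;
         * neither can fail ... */
    }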

Revision tags: v3.0.0, v3.0.0-rc4, v2.12.1, v3.0.0-rc3, v3.0.0-rc2, v3.0.0-rc1, v3.0.0-rc0, v2.11.2, v2.12.0, v2.12.0-rc4, v2.12.0-rc3, v2.12.0-rc2, v2.12.0-rc1
# 89bd0305 22-Mar-2018 Kevin Wolf <kwolf@redhat.com>

block: Really pause block jobs on drain

We already requested that block jobs be paused in .bdrv_drained_begin,
but no guarantee was made that the job was actually inactive at the
point where bdrv_drained_begin() returned.

This introduces a new callback BdrvChildRole.bdrv_drained_poll() and
uses it to make bdrv_drain_poll() consider block jobs using the node to
be drained.

For the test case to work as expected, we have to switch from
block_job_sleep_ns() to qemu_co_sleep_ns() so that the test job is even
considered active and must be waited for when draining the node.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
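
The new poll step can be pictured with this sketch (the names are
illustrative; the real loop lives in the drain polling machinery):

    #include <stdbool.h>

    typedef struct JobPollSketch {
        bool busy;   /* not yet at a pause point */
    } JobPollSketch;

    /* New-style drained_poll(): true while the job is still active. */
    static bool job_drained_poll_sketch(JobPollSketch *job)
    {
        return job->busy;
    }

    /* drain no longer just *requests* the pause; it waits until the
     * poll callback reports quiescence. */
    static void drained_begin_sketch(JobPollSketch *job)
    {
        /* ... request pause ... */
        while (job_drained_poll_sketch(job)) {
            job->busy = false;  /* stub: pretend aio_poll() let the
                                 * job reach its pause point */
        }
    }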

# 9f6bb4c0 14-May-2018 Kevin Wolf <kwolf@redhat.com>

blockjob: Remove BlockJob.driver

BlockJob.driver is redundant with Job.driver and is only used in a few
remaining places. Remove it.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>


# 30a5c887 04-May-2018 Kevin Wolf <kwolf@redhat.com>

job: Move progress fields to Job

BlockJob has fields .offset and .len, which are actually misnomers today
because they are no longer tied to block device sizes, but just progress
counters. As such they make a lot of sense in generic Jobs.

This patch moves the fields to Job and renames them to .progress_current
and .progress_total to describe their function better.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
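
After the move, the generic fields can be used as in this small
sketch (JobProgressSketch is illustrative):

    #include <stdint.h>
    #include <stdio.h>

    typedef struct JobProgressSketch {
        uint64_t progress_current;  /* was BlockJob.offset */
        uint64_t progress_total;    /* was BlockJob.len */
    } JobProgressSketch;

    static void print_progress_sketch(const JobProgressSketch *job)
    {
        if (job->progress_total > 0) {
            printf("%.1f%%\n",
                   100.0 * job->progress_current / job->progress_total);
        }
    }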

# 2e1795b5 25-Apr-2018 Kevin Wolf <kwolf@redhat.com>

job: Add job_transition_to_ready()

The transition to the READY state was still performed in the BlockJob
layer, in the same function that sent the BLOCK_JOB_READY QMP event.

This patch brings the state transition to the Job layer and implements
the QMP event using a notifier called from the Job layer, like we
already do for other events related to state transitions.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>

# df956ae2 25-Apr-2018 Kevin Wolf <kwolf@redhat.com>

job: Add job_is_ready()

Instead of having a 'bool ready' in BlockJob, add a function that
derives its value from the job status.

At the same time, this fixes the behaviour to match what the QAPI
documentation promises for query-block-job: 'true if the job may be
completed'. When the ready flag was introduced in commit ef6dbf1e46e,
the flag never had to be reset to match the description because after
being ready, the jobs would immediately complete and disappear.

Job transactions and manual job finalisation were introduced only later.
With these changes, jobs may stay around even after having completed
(and they are not ready to be completed a second time); however, those
patches forgot to reset the ready flag.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
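
Deriving the value instead of storing it might look like this sketch
(the enum values are illustrative but follow the QAPI JobStatus names):

    #include <stdbool.h>

    typedef enum {
        JOB_SKETCH_RUNNING,
        JOB_SKETCH_READY,
        JOB_SKETCH_PENDING,
        JOB_SKETCH_CONCLUDED,
    } JobStatusSketch;

    /* Only READY means "the job may be completed"; a concluded job is
     * not ready any more, which the old stored flag got wrong. */
    static bool job_is_ready_sketch(JobStatusSketch status)
    {
        return status == JOB_SKETCH_READY;
    }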

# 5f9a6a08 24-Apr-2018 Kevin Wolf <kwolf@redhat.com>

job: Add job_dismiss()

This moves block_job_dismiss() to the Job layer.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>


# 198c49cc 24-Apr-2018 Kevin Wolf <kwolf@redhat.com>

job: Add job_yield()

This moves block_job_yield() to the Job layer.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>


# 3d70ff53 24-Apr-2018 Kevin Wolf <kwolf@redhat.com>

job: Move completion and cancellation to Job

This moves the top-level job completion and cancellation functions from
BlockJob to Job.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>


# 7eaa8fb5 23-Apr-2018 Kevin Wolf <kwolf@redhat.com>

job: Move transactions to Job

This moves the logic that implements job transactions from BlockJob to
Job.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>

