History log of /qemu/tests/qemu-iotests/176.out (Results 1 – 9 of 9)
Revision Date Author Comments
Revision tags: v9.0.0-rc2, v9.0.0-rc1, v9.0.0-rc0, v8.2.2, v7.2.10, v8.2.1, v8.1.5, v7.2.9
# 52df1a5b 23-Jan-2024 Abhiram Tilak <atp.exp@gmail.com>

qemu-img: Fix Column Width and Improve Formatting in snapshot list

When running the command `qemu-img snapshot -l SNAPSHOT`, the output of
VM_CLOCK (which measures the offset between host and VM clock) cannot
accommodate values on the order of thousands (4 digits).

This line [1] hints at the problem. Additionally, the column width for
the VM_CLOCK field was reduced from 15 to 13 spaces in commit b39847a5
in line [2], resulting in a shortage of space.

[1]:
https://gitlab.com/qemu-project/qemu/-/blob/master/block/qapi.c?ref_type=heads#L753
[2]:
https://gitlab.com/qemu-project/qemu/-/blob/master/block/qapi.c?ref_type=heads#L763

This patch restores the column width to 15 spaces and adjusts the
affected iotests accordingly. It also removes a potential source of
confusion by dropping whitespace from column headers: for example,
VM CLOCK becomes VM_CLOCK. Additionally, a '--' marker is now printed
when ICOUNT has no output, for clarity.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2062
Fixes: b39847a50553 ("migration: introduce icount field for snapshots")
Signed-off-by: Abhiram Tilak <atp.exp@gmail.com>
Message-ID: <20240123050354.22152-2-atp.exp@gmail.com>
[kwolf: Fixed up qemu-iotests 261 and 286]
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

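For context, here is one way to exercise the reformatted listing (a
minimal sketch; the image and snapshot names are placeholders, and the
exact column layout is described by the commit rather than reproduced
here):

    # create a scratch image, take an internal snapshot, and list it
    qemu-img create -f qcow2 scratch.qcow2 64M
    qemu-img snapshot -c snap0 scratch.qcow2
    qemu-img snapshot -l scratch.qcow2

Per the commit, the listing's headers no longer contain spaces
(VM CLOCK becomes VM_CLOCK), and ICOUNT prints '--' when it has no
value.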


# effd60c8 18-Jan-2024 Stefan Hajnoczi <stefanha@redhat.com>

monitor: only run coroutine commands in qemu_aio_context

monitor_qmp_dispatcher_co() runs in the iohandler AioContext that is not
polled during nested event loops. The coroutine currently reschedules
itself in the main loop's qemu_aio_context AioContext, which is polled
during nested event loops. One known problem is that QMP device-add
calls drain_call_rcu(), which temporarily drops the BQL, leading to all
sorts of havoc like other vCPU threads re-entering device emulation code
while another vCPU thread is waiting in device emulation code with
aio_poll().

Paolo Bonzini suggested running non-coroutine QMP handlers in the
iohandler AioContext. This avoids trouble with nested event loops. His
original idea was to move coroutine rescheduling to
monitor_qmp_dispatch(), but I resorted to moving it to qmp_dispatch()
because we don't know if the QMP handler needs to run in coroutine
context in monitor_qmp_dispatch(). monitor_qmp_dispatch() would have
been nicer since it's associated with the monitor implementation and not
as general as qmp_dispatch(), which is also used by qemu-ga.

A number of qemu-iotests need updated .out files because the order of
QMP events vs QMP responses has changed.

Solves Issue #1933.

Cc: qemu-stable@nongnu.org
Fixes: 7bed89958bfbf40df9ca681cefbdca63abdde39d ("device_core: use drain_call_rcu in in qmp_device_add")
Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=2215192
Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=2214985
Buglink: https://issues.redhat.com/browse/RHEL-17369
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-ID: <20240118144823.1497953-4-stefanha@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Tested-by: Fiona Ebner <f.ebner@proxmox.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

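The user-visible effect in the iotests is the relative order of QMP
responses and events in the .out files. A hedged illustration, in the
filtered form these tests use (the fields and the exact interleaving
shown here are illustrative, not copied from a particular test):

    { "execute": "quit" }
    {"return": {}}
    {"timestamp": {"seconds": TIMESTAMP, "microseconds": MICROSECONDS}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}

Before this change, the SHUTDOWN event could be interleaved differently
with the {"return": {}} response, which is why the .out files changed.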


Revision tags: v8.1.4, v7.2.8, v8.2.0, v8.2.0-rc4, v8.2.0-rc3, v8.2.0-rc2, v8.2.0-rc1, v7.2.7, v8.1.3, v8.2.0-rc0, v8.1.2, v8.1.1, v7.2.6, v8.0.5, v8.1.0, v8.1.0-rc4, v8.1.0-rc3, v7.2.5, v8.0.4, v8.1.0-rc2, v8.1.0-rc1, v8.1.0-rc0, v8.0.3, v7.2.4, v8.0.2, v8.0.1, v7.2.3, v7.2.2, v8.0.0, v8.0.0-rc4, v8.0.0-rc3, v7.2.1, v8.0.0-rc2, v8.0.0-rc1, v8.0.0-rc0, v7.2.0, v7.2.0-rc4, v7.2.0-rc3, v7.2.0-rc2, v7.2.0-rc1, v7.2.0-rc0, v7.1.0, v7.1.0-rc4, v7.1.0-rc3, v7.1.0-rc2, v7.1.0-rc1, v7.1.0-rc0, v7.0.0, v7.0.0-rc4, v7.0.0-rc3, v7.0.0-rc2, v7.0.0-rc1, v7.0.0-rc0, v6.1.1, v6.2.0, v6.2.0-rc4, v6.2.0-rc3, v6.2.0-rc2, v6.2.0-rc1, v6.2.0-rc0, v6.0.1, v6.1.0, v6.1.0-rc4, v6.1.0-rc3, v6.1.0-rc2, v6.1.0-rc1, v6.1.0-rc0, v6.0.0, v6.0.0-rc5, v6.0.0-rc4, v6.0.0-rc3, v6.0.0-rc2, v6.0.0-rc1, v6.0.0-rc0, v5.2.0, v5.2.0-rc4, v5.2.0-rc3, v5.2.0-rc2, v5.2.0-rc1, v5.2.0-rc0, v5.0.1, v5.1.0, v5.1.0-rc3, v5.1.0-rc2, v5.1.0-rc1, v5.1.0-rc0
# b66ff2c2 06-Jul-2020 Eric Blake <eblake@redhat.com>

iotests: Specify explicit backing format where sensible

There are many existing qcow2 images that specify a backing file but
no format. This has been the source of CVEs in the past, but has
become a more prominent problem now that libvirt has switched to
-blockdev. With the older -drive, at least the probing was always done
by qemu (so the only risk of a changed format between successive boots
of a guest was if qemu was upgraded and probed differently). But with
the newer -blockdev, libvirt must specify a format; if libvirt guesses
raw where the image was formatted as qcow2, this results in data
corruption visible to the guest; conversely, if libvirt guesses qcow2
where qemu was using raw, this can result in potential security holes,
so modern libvirt instead refuses to use images without an explicit
backing format.

The change in libvirt to reject images without explicit backing format
has pointed out that a number of tools have been far too reliant on
probing in the past. It's time to set a better example in our own
iotests of properly setting this parameter.

iotest calls to create, rebase, and convert are all impacted to some
degree. It's a bit annoying that we are inconsistent on the command
line: while all of those accept -o backing_file=...,backing_fmt=...,
the shortcuts differ. create and rebase have -b and -F, while convert
has -B but no -F. (amend has no shortcuts, but the previous patch just
deprecated the use of amend to change backing chains.)

Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20200706203954.341758-9-eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

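A hedged sketch of the explicit-backing-format spellings the message
enumerates (file names are placeholders):

    # create and rebase take the -b/-F shortcuts
    qemu-img create -f qcow2 -b base.qcow2 -F qcow2 overlay.qcow2
    qemu-img rebase -b new-base.qcow2 -F qcow2 overlay.qcow2

    # convert has -B but no -F shortcut, so spell both out via -o
    qemu-img convert -O qcow2 -o backing_file=base.qcow2,backing_fmt=qcow2 src.img dst.qcow2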


Revision tags: v4.2.1, v5.0.0, v5.0.0-rc4, v5.0.0-rc3, v5.0.0-rc2, v5.0.0-rc1, v5.0.0-rc0, v4.2.0, v4.2.0-rc5, v4.2.0-rc4, v4.2.0-rc3, v4.2.0-rc2, v4.1.1, v4.2.0-rc1, v4.2.0-rc0, v4.0.1, v3.1.1.1, v4.1.0, v4.1.0-rc5, v4.1.0-rc4, v3.1.1, v4.1.0-rc3, v4.1.0-rc2, v4.1.0-rc1, v4.1.0-rc0, v4.0.0, v4.0.0-rc4, v3.0.1, v4.0.0-rc3, v4.0.0-rc2, v4.0.0-rc1, v4.0.0-rc0, v3.1.0, v3.1.0-rc5
# 92548938 05-Dec-2018 Dominik Csapak <d.csapak@proxmox.com>

qmp: Split ShutdownCause host-qmp into quit and system-reset

It is interesting to know whether the shutdown cause was 'quit' or
'reset', especially when using "--no-reboot". In that case, a management
layer can now determine if the guest wanted a reboot or shutdown, and
can act accordingly.

This changes the output of the reason in the iotests from 'host-qmp'
to 'host-qmp-quit'. It does not break compatibility because the field
was introduced in the same version.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Message-Id: <20181205110131.23049-4-d.csapak@proxmox.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
[Commit message tweaked]
Signed-off-by: Markus Armbruster <armbru@redhat.com>

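An illustration of the resulting event, in the filtered form used by
the iotests (a sketch: the quit case is taken from this log; the reset
counterpart would presumably report a matching system-reset reason):

    {"timestamp": {"seconds": TIMESTAMP, "microseconds": MICROSECONDS}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}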


# ecd7a0d5 05-Dec-2018 Dominik Csapak <d.csapak@proxmox.com>

qmp: Add reason to SHUTDOWN and RESET events

This makes it possible to determine what the exact reason was for
a RESET or a SHUTDOWN. A management layer might need the specific reason
of those events to determine which cleanups or other actions it needs to do.

This patch also updates the iotests to the new expected output that includes
the reason.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Message-Id: <20181205110131.23049-3-d.csapak@proxmox.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
[Commit message tweaked]
Signed-off-by: Markus Armbruster <armbru@redhat.com>



Revision tags: v3.1.0-rc4, v3.1.0-rc3, v3.1.0-rc2, v3.1.0-rc1, v3.1.0-rc0, v3.0.0, v3.0.0-rc4, v2.12.1, v3.0.0-rc3, v3.0.0-rc2, v3.0.0-rc1, v3.0.0-rc0, v2.11.2, v2.12.0, v2.12.0-rc4, v2.12.0-rc3, v2.12.0-rc2, v2.12.0-rc1, v2.12.0-rc0, v2.11.1, v2.10.2, v2.11.0, v2.11.0-rc5, v2.11.0-rc4, v2.11.0-rc3, v2.11.0-rc2
# 2807746f 17-Nov-2017 Eric Blake <eblake@redhat.com>

iotests: Fix 176 on 32-bit host

The contents of a qcow2 bitmap are rounded up to a size that
matches the number of bits available for the granularity, but
that granularity differs for 32-bit hosts (our default 64k
cluster allows for 2M of bitmap coverage per 'long') and 64-bit
hosts (4M of bitmap coverage per 'long'). If the image is a
multiple of 2M but not 4M, then the number of bytes occupied by
the array of longs in memory differs between architectures, thus
resulting in different SHA256 hashes.

Furthermore (but untested by me), if our computation of the
SHA256 hash is at all endian-dependent because of how we store
data in memory, that's another variable we'd have to account
for (ideally, we specified the bitmap stored in qcow2 as
fixed-endian on disk, because the same qcow2 file must be
usable across any architecture; but that says nothing about
how we represent things in memory). But we already have test
165 to validate that bitmaps are stored correctly on disk,
while this test is merely testing that the bitmap exists.

So for this test, the easiest solution is to filter out the
actual hash value. Broken in commit 4096974e.

Reported-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-id: 20171117190422.23626-1-eblake@redhat.com
Reviewed-by: John Snow <jsnow@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>

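The arithmetic behind the mismatch, spelled out (one bit per cluster
at the default 64k granularity):

    # bitmap coverage per 'long' = bits per long * cluster size
    # 32-bit host: 32 * 64 KiB = 2 MiB per long
    # 64-bit host: 64 * 64 KiB = 4 MiB per long
    # an image sized as a multiple of 2 MiB but not of 4 MiB needs a
    # different number of longs, so the hashed in-memory array differs

A filter in the spirit of the fix might look like this (hypothetical;
the test's actual filter may differ):

    sed -e 's/"sha256": "[0-9a-f]\{64\}"/"sha256": HASH/'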


# 4096974e 17-Nov-2017 Eric Blake <eblake@redhat.com>

qcow2: fix image corruption on commit with persistent bitmap

If an image contains persistent bitmaps, we cannot use the
fast path of bdrv_make_empty() to clear the image during
qemu-img commit, because that will lose the clusters related
to the bitmaps.

Also leave a comment in qcow2_read_extensions to remind future
feature additions to think about fast-path removal, since we
just barely fixed the same bug for LUKS encryption.

It's a pain that qemu-img has not yet been taught to manipulate,
or even at a very minimum display, information about persistent
bitmaps; instead, we have to use QMP commands. It's also a
pain that only query-block and x-debug-block-dirty-bitmap-sha256
will allow bitmap introspection; but the former requires the
node to be hooked to a block device, and the latter is experimental.

Signed-off-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

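A hedged sketch of the QMP introspection the message refers to (the
node and bitmap names are placeholders; the command is experimental,
as noted above):

    { "execute": "x-debug-block-dirty-bitmap-sha256",
      "arguments": { "node": "drive0", "name": "bitmap0" } }
    {"return": {"sha256": "..."}}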


Revision tags: v2.11.0-rc1, v2.11.0-rc0, v2.10.1, v2.9.1, v2.10.0, v2.10.0-rc4, v2.10.0-rc3, v2.10.0-rc2, v2.10.0-rc1, v2.10.0-rc0, v2.8.1.1, v2.9.0, v2.9.0-rc5, v2.9.0-rc4, v2.9.0-rc3
# f82c5b17 31-Mar-2017 Eric Blake <eblake@redhat.com>

iotests: Improve image-clear tests on non-aligned image

Tweak 097 and 176 to operate on an image that is not cluster-aligned,
to give further coverage of clearing out an entire image, including
the recent fix to eliminate the difference between fast path (97) and
slow (176) for qcow2. Also tested on qcow (97 only, since qcow lacks
snapshots).

Signed-off-by: Eric Blake <eblake@redhat.com>
Message-id: 20170331185356.2479-4-eblake@redhat.com
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>

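One way to produce such a non-cluster-aligned image (a sketch; the
size is illustrative, chosen not to be a multiple of the default
64 KiB cluster):

    qemu-img create -f qcow2 test.qcow2 $((32 * 1024 * 1024 + 512))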


# 07ff948b 31-Mar-2017 Daniel P. Berrange <berrange@redhat.com>

iotests: fix 097 when run with qcow

The previous commit:

    commit a3e1505daec31ef56f0489f8c8fff1b8e4ca92bd
    Author: Eric Blake <eblake@redhat.com>
    Date:   Mon Dec 5 09:49:34 2016 -0600

        qcow2: Don't strand clusters near 2G intervals during commit

extended the 097 test case so that it did two passes, once
with an internal snapshot, once without.

qcow (v1) does not support internal snapshots, so this change
broke test 097 when run against qcow.

This splits 097 in two, creating a new 176 that tests the
internal snapshot codepath, effectively putting 097 back
to its content before the above commit.

Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Message-Id: <20170221115512.21918-8-berrange@redhat.com>
[eblake: test collisions: s/173/176/g]
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-id: 20170331185356.2479-2-eblake@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>

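For reference, the internal-snapshot codepath that 176 was split out
to cover can be exercised like this (a sketch; names and sizes are
placeholders, and it requires a format with internal snapshot support,
i.e. qcow2 but not qcow v1):

    qemu-img create -f qcow2 test.qcow2 1G
    qemu-img snapshot -c snap test.qcow2
    qemu-img snapshot -l test.qcow2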