# 0b2675c4 | 02-Jan-2024 | Stefan Hajnoczi <stefanha@redhat.com>
Rename "QEMU global mutex" to "BQL" in comments and docs
The term "QEMU global mutex" is identical to the more widely used Big QEMU Lock ("BQL"). Update the code comments and documentation to use "B
Rename "QEMU global mutex" to "BQL" in comments and docs
The term "QEMU global mutex" is identical to the more widely used Big QEMU Lock ("BQL"). Update the code comments and documentation to use "BQL" instead of "QEMU global mutex".
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Acked-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Paul Durrant <paul@xen.org>
Reviewed-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Harsh Prateek Bora <harshpb@linux.ibm.com>
Message-id: 20240102153529.486531-6-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>

Revision tags: v8.1.4, v7.2.8, v8.2.0, v8.2.0-rc4, v8.2.0-rc3

# 765ca516 | 04-Dec-2023 | Stefan Hajnoczi <stefanha@redhat.com>
virtio-scsi: don't lock AioContext around virtio_queue_aio_attach_host_notifier()
virtio_queue_aio_attach_host_notifier() does not require the AioContext lock. Stop taking the lock and add an explicit smp_wmb() because we were relying on the implicit barrier in the AioContext lock before.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20231204164259.1515217-3-stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
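
A minimal sketch of the ordering this commit relies on (the surrounding code is assumed, not quoted; smp_wmb() and the attach function are real QEMU symbols):

    /* Publish device state before the IOThread can observe the host
     * notifier. The explicit write barrier replaces the release
     * semantics that the AioContext lock/unlock pair used to provide
     * implicitly. */
    s->dataplane_started = true;
    smp_wmb();
    virtio_queue_aio_attach_host_notifier(vq, s->ctx);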

Revision tags: v8.2.0-rc2, v8.2.0-rc1, v7.2.7, v8.1.3, v8.2.0-rc0, v8.1.2, v8.1.1, v7.2.6, v8.0.5, v8.1.0, v8.1.0-rc4, v8.1.0-rc3, v7.2.5, v8.0.4, v8.1.0-rc2, v8.1.0-rc1, v8.1.0-rc0, v8.0.3, v7.2.4, v8.0.2, v8.0.1, v7.2.3

# 4ee4667d | 24-May-2023 | Philippe Mathieu-Daudé <philmd@linaro.org>
hw/virtio: Remove unnecessary 'virtio-access.h' header
None of these files use the VirtIO Load/Store API declared by "hw/virtio/virtio-access.h". This header probably crept in via copy/pasting, remove it.
Note, "virtio-access.h" is target-specific, so any file including it also become tainted as target-specific.
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Acked-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20230524093744.88442-10-philmd@linaro.org>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>

# 766aa2de | 16-May-2023 | Stefan Hajnoczi <stefanha@redhat.com>
virtio-scsi: implement BlockDevOps->drained_begin()
The virtio-scsi Host Bus Adapter provides access to devices on a SCSI bus. Those SCSI devices typically have a BlockBackend. When the BlockBackend enters a drained section, the SCSI device must temporarily stop submitting new I/O requests.
Implement this behavior by temporarily stopping virtio-scsi virtqueue processing when one of the SCSI devices enters a drained section. The new scsi_device_drained_begin() API allows scsi-disk to message the virtio-scsi HBA.
scsi_device_drained_begin() uses a drain counter so that multiple SCSI devices can have overlapping drained sections. The HBA only sees one pair of .drained_begin/end() calls.
After this commit, virtio-scsi no longer depends on hw/virtio's ioeventfd aio_set_event_notifier(is_external=true). This commit is a step towards removing the aio_disable_external() API.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20230516190238.8401-19-stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
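
A compilable sketch of the drain-counter pattern described above (the HBASketch type and helper names are illustrative stand-ins, not the actual QEMU structs):

    #include <assert.h>

    typedef struct {
        int drain_count;                 /* devices currently draining */
    } HBASketch;

    static void hba_drained_begin(HBASketch *hba) { /* stop virtqueues */ }
    static void hba_drained_end(HBASketch *hba)   { /* resume virtqueues */ }

    static void scsi_device_drained_begin(HBASketch *hba)
    {
        if (hba->drain_count++ == 0) {
            hba_drained_begin(hba);      /* only the first begin fires */
        }
    }

    static void scsi_device_drained_end(HBASketch *hba)
    {
        assert(hba->drain_count > 0);
        if (--hba->drain_count == 0) {
            hba_drained_end(hba);        /* only the last end fires */
        }
    }

Overlapping drained sections from multiple SCSI devices thus collapse into exactly one begin/end pair as seen by the HBA.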

# bd58ab40 | 16-May-2023 | Stefan Hajnoczi <stefanha@redhat.com>
virtio: make it possible to detach host notifier from any thread
virtio_queue_aio_detach_host_notifier() does two things:
1. It removes the fd handler from the event loop.
2. It processes the virtqueue one last time.
The first step can be performed by any thread and without taking the AioContext lock.
The second step may need the AioContext lock (depending on the device implementation) and runs in the thread where request processing takes place. virtio-blk and virtio-scsi therefore call virtio_queue_aio_detach_host_notifier() from a BH that is scheduled in AioContext.
The next patch will introduce a .drained_begin() function that needs to call virtio_queue_aio_detach_host_notifier(). .drained_begin() functions cannot call aio_poll() to wait synchronously for the BH. It is possible for a .drained_poll() callback to asynchronously wait for the BH, but that is more complex than necessary here.
Move the virtqueue processing out to the callers of virtio_queue_aio_detach_host_notifier() so that the function can be called from any thread. This is in preparation for the next patch.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20230516190238.8401-17-stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
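
After this change a caller that still needs the final virtqueue pass does it explicitly, roughly as follows (a fragment; both helper functions exist in QEMU, and the surrounding BH is assumed to run in the AioContext):

    /* Removing the fd handler is now safe from any thread. */
    virtio_queue_aio_detach_host_notifier(vq, ctx);

    /* The final pass over the virtqueue must still happen where request
     * processing takes place, e.g. inside a BH scheduled in the
     * AioContext. */
    virtio_queue_host_notifier_read(virtio_queue_get_host_notifier(vq));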

Revision tags: v7.2.2, v8.0.0, v8.0.0-rc4, v8.0.0-rc3

# 3edf660a | 04-Apr-2023 | Stefan Hajnoczi <stefanha@redhat.com>
aio-wait: avoid AioContext lock in aio_wait_bh_oneshot()
There is no need for the AioContext lock in aio_wait_bh_oneshot(). It's easy to remove the lock from existing callers and then switch from AIO_WAIT_WHILE() to AIO_WAIT_WHILE_UNLOCKED() in aio_wait_bh_oneshot().
Document that the AioContext lock should not be held across aio_wait_bh_oneshot(). Holding a lock across aio_poll() can cause deadlock so we don't want callers to do that.
This is a step towards getting rid of the AioContext lock.
Cc: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20230404153307.458883-1-stefanha@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
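
Why holding a lock across a synchronous wait deadlocks, as a generic sketch (plain pthreads, not QEMU's aio implementation; schedule_bh() and wait_for_bh() are hypothetical):

    #include <pthread.h>

    void schedule_bh(void (*fn)(void *), void *opaque);  /* hypothetical */
    void wait_for_bh(void);                              /* hypothetical */

    static pthread_mutex_t ctx_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Thread B: the BH needs the same lock, so neither side advances. */
    static void bh_fn(void *opaque)
    {
        pthread_mutex_lock(&ctx_lock);   /* blocks: thread A holds it */
        pthread_mutex_unlock(&ctx_lock);
    }

    /* Thread A: holds the lock while waiting for the BH to finish. */
    static void stop_device(void *dev)
    {
        pthread_mutex_lock(&ctx_lock);
        schedule_bh(bh_fn, dev);   /* bh_fn will run in thread B */
        wait_for_bh();             /* deadlock: bh_fn blocks on ctx_lock */
        pthread_mutex_unlock(&ctx_lock);
    }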

Revision tags: v7.2.1, v8.0.0-rc2, v8.0.0-rc1, v8.0.0-rc0, v7.2.0, v7.2.0-rc4, v7.2.0-rc3, v7.2.0-rc2, v7.2.0-rc1, v7.2.0-rc0, v7.1.0, v7.1.0-rc4, v7.1.0-rc3, v7.1.0-rc2

# 9a4b6a63 | 08-Aug-2022 | Stefan Hajnoczi <stefanha@redhat.com>
virtio-scsi: fix race in virtio_scsi_dataplane_start()
As soon as virtio_scsi_data_plane_start() attaches host notifiers the IOThread may start virtqueue processing. There is a race between IOThread virtqueue processing and virtio_scsi_data_plane_start() because it only assigns s->dataplane_started after attaching host notifiers.
When a virtqueue handler function in the IOThread calls virtio_scsi_defer_to_dataplane() it may see !s->dataplane_started and attempt to start dataplane even though we're already in the IOThread:
    #0  0x00007f67b360857c __pthread_kill_implementation (libc.so.6 + 0xa257c)
    #1  0x00007f67b35bbd56 raise (libc.so.6 + 0x55d56)
    #2  0x00007f67b358e833 abort (libc.so.6 + 0x28833)
    #3  0x00007f67b358e75b __assert_fail_base.cold (libc.so.6 + 0x2875b)
    #4  0x00007f67b35b4cd6 __assert_fail (libc.so.6 + 0x4ecd6)
    #5  0x000055ca87fd411b memory_region_transaction_commit (qemu-kvm + 0x67511b)
    #6  0x000055ca87e17811 virtio_pci_ioeventfd_assign (qemu-kvm + 0x4b8811)
    #7  0x000055ca87e14836 virtio_bus_set_host_notifier (qemu-kvm + 0x4b5836)
    #8  0x000055ca87f8e14e virtio_scsi_set_host_notifier (qemu-kvm + 0x62f14e)
    #9  0x000055ca87f8dd62 virtio_scsi_dataplane_start (qemu-kvm + 0x62ed62)
    #10 0x000055ca87e14610 virtio_bus_start_ioeventfd (qemu-kvm + 0x4b5610)
    #11 0x000055ca87f8c29a virtio_scsi_handle_ctrl (qemu-kvm + 0x62d29a)
    #12 0x000055ca87fa5902 virtio_queue_host_notifier_read (qemu-kvm + 0x646902)
    #13 0x000055ca882c099e aio_dispatch_handler (qemu-kvm + 0x96199e)
    #14 0x000055ca882c1761 aio_poll (qemu-kvm + 0x962761)
    #15 0x000055ca880e1052 iothread_run (qemu-kvm + 0x782052)
    #16 0x000055ca882c562a qemu_thread_start (qemu-kvm + 0x96662a)
This patch assigns s->dataplane_started before attaching host notifiers so that virtqueue handler functions that run in the IOThread before virtio_scsi_data_plane_start() returns correctly identify that dataplane does not need to be started. This fix is taken from the virtio-blk dataplane code and it's worth adding a comment in virtio-blk as well to explain why it works.
Note that s->dataplane_started does not need the AioContext lock because it is set before attaching host notifiers and cleared after detaching host notifiers. In other words, the IOThread always sees the value true and the main loop thread does not modify it while the IOThread is active.
Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=2099541
Reported-by: Qing Wang <qinwang@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20220808162134.240405-1-stefanha@redhat.com>
Reviewed-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
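
The shape of the fix, as a hedged fragment (the field and function names are taken from the commit message; the loop structure is assumed):

    /* Set the flag first: any handler that runs in the IOThread as soon
     * as a notifier is attached then sees dataplane as already started
     * and does not re-enter virtio_scsi_dataplane_start(). */
    s->dataplane_started = true;

    for (i = 0; i < vq_count; i++) {
        virtio_queue_aio_attach_host_notifier(vq[i], s->ctx);
    }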

Revision tags: v7.1.0-rc1, v7.1.0-rc0

# 38738f7d | 27-Apr-2022 | Stefan Hajnoczi <stefanha@redhat.com>
virtio-scsi: don't waste CPU polling the event virtqueue
The virtio-scsi event virtqueue is not emptied by its handler function. This is typical for rx virtqueues where the device uses buffers when some event occurs (e.g. a packet is received, an error condition happens, etc).
Polling non-empty virtqueues wastes CPU cycles. We are not waiting for new buffers to become available, we are waiting for an event to occur, so it's a misuse of CPU resources to poll for buffers.
Introduce the new virtio_queue_aio_attach_host_notifier_no_poll() API, which is identical to virtio_queue_aio_attach_host_notifier() except that it does not poll the virtqueue.
Before this patch the following command-line consumed 100% CPU in the IOThread polling and calling virtio_scsi_handle_event():
    $ qemu-system-x86_64 -M accel=kvm -m 1G -cpu host \
        --object iothread,id=iothread0 \
        --device virtio-scsi-pci,iothread=iothread0 \
        --blockdev file,filename=test.img,aio=native,cache.direct=on,node-name=drive0 \
        --device scsi-hd,drive=drive0
After this patch CPU is no longer wasted.
Reported-by: Nir Soffer <nsoffer@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Tested-by: Nir Soffer <nsoffer@redhat.com>
Message-id: 20220427143541.119567-3-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
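
Illustrative attachment code under the new API (both functions exist in QEMU; the queue variables are assumed):

    /* Command queues benefit from polling: they drain again once their
     * requests are processed. */
    virtio_queue_aio_attach_host_notifier(cmd_vq, ctx);

    /* The event queue stays non-empty by design (its buffers wait for an
     * event to occur), so polling it would spin forever. */
    virtio_queue_aio_attach_host_notifier_no_poll(event_vq, ctx);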

Revision tags: v7.0.0, v7.0.0-rc4, v7.0.0-rc3, v7.0.0-rc2, v7.0.0-rc1, v7.0.0-rc0, v6.1.1, v6.2.0, v6.2.0-rc4

# db608fb7 | 07-Dec-2021 | Stefan Hajnoczi <stefanha@redhat.com>
virtio: unify dataplane and non-dataplane ->handle_output()
Now that virtio-blk and virtio-scsi are ready, get rid of the handle_aio_output() callback. It's no longer needed.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
Message-id: 20211207132336.36627-7-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>

# d93d16c0 | 07-Dec-2021 | Stefan Hajnoczi <stefanha@redhat.com>
virtio: get rid of VirtIOHandleAIOOutput
The virtqueue host notifier API virtio_queue_aio_set_host_notifier_handler() polls the virtqueue for new buffers. AioContext previously required a bool progress return value indicating whether an event was handled or not. This is no longer necessary because the AioContext polling API has been split into a poll check function and an event handler function. The event handler is only run when we know there is work to do, so it doesn't return bool.
The VirtIOHandleAIOOutput function signature is now the same as VirtIOHandleOutput. Get rid of the bool return value.
Further simplifications will be made for virtio-blk and virtio-scsi in the next patch.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
Message-id: 20211207132336.36627-3-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
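
A generic sketch of the split (simplified types and signatures, not QEMU's exact aio API):

    #include <stdbool.h>

    typedef struct {
        unsigned shadow_avail_idx;   /* driver's published avail index */
        unsigned last_avail_idx;     /* last index already processed */
    } VQSketch;

    static void process_requests(VQSketch *vq);  /* stand-in */

    /* Cheap check run while polling: is there any work at all? */
    static bool vq_poll(void *opaque)
    {
        VQSketch *vq = opaque;
        return vq->shadow_avail_idx != vq->last_avail_idx;
    }

    /* Runs only when vq_poll() returned true, so it no longer needs to
     * report progress through a bool return value. */
    static void vq_handler(void *opaque)
    {
        process_requests(opaque);
    }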

Revision tags: v6.2.0-rc3, v6.2.0-rc2, v6.2.0-rc1, v6.2.0-rc0, v6.0.1, v6.1.0, v6.1.0-rc4, v6.1.0-rc3, v6.1.0-rc2, v6.1.0-rc1, v6.1.0-rc0

# 9cf4fd87 | 17-May-2021 | Greg Kurz <groug@kaod.org>
virtio: Clarify MR transaction optimization
The device model batching its ioeventfds in a single MR transaction is an optimization. Clarify this in virtio-scsi, virtio-blk and generic virtio code. Also clarify that the transaction must commit before closing ioeventfds so that no one is tempted to merge the loops in the start functions error path and in the stop functions.
Signed-off-by: Greg Kurz <groug@kaod.org>
Message-Id: <162125799728.1394228.339855768563326832.stgit@bahia.lan>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

Revision tags: v6.0.0, v6.0.0-rc5, v6.0.0-rc4, v6.0.0-rc3

# c4f5dcc4 | 07-Apr-2021 | Greg Kurz <groug@kaod.org>
virtio-scsi: Configure all host notifiers in a single MR transaction
This allows the virtio-scsi-pci device to batch the setup of all its host notifiers. This significantly improves boot time of VMs with a high number of vCPUs, e.g. from 6m5.563s down to 1m2.884s for a pseries machine with 384 vCPUs.
Note that memory_region_transaction_commit() must be called before virtio_bus_cleanup_host_notifier() because the latter might close ioeventfds that the transaction still assumes to be around when it commits.
Signed-off-by: Greg Kurz <groug@kaod.org>
Message-Id: <20210407143501.244343-5-groug@kaod.org>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
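
The batching pattern, as a hedged fragment (the transaction and notifier calls are real QEMU functions; the loop bounds and error handling are assumed):

    memory_region_transaction_begin();
    for (i = 0; i < vq_count; i++) {
        virtio_bus_set_host_notifier(bus, i, true);   /* batched */
    }
    /* One commit covers all queues. It must also come before any
     * virtio_bus_cleanup_host_notifier() call, which may close ioeventfds
     * that the still-pending transaction assumes to be around. */
    memory_region_transaction_commit();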

# 61fc57bf | 07-Apr-2021 | Greg Kurz <groug@kaod.org>
virtio-scsi: Set host notifiers and callbacks separately
Host notifiers are guaranteed to be idle until the callbacks are hooked up with virtio_queue_aio_set_host_notifier_handler(). They thus don't need to be set or unset with the AioContext lock held.
Do this outside the critical section, like virtio-blk already does: basically downgrade virtio_scsi_vring_init() to only set up the host notifier, and set the callback in the caller.
This will allow batching addition/deletion of ioeventfds in a single memory transaction, which is expected to greatly improve initialization time.
Signed-off-by: Greg Kurz <groug@kaod.org>
Message-Id: <20210407143501.244343-4-groug@kaod.org>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

Revision tags: v6.0.0-rc2, v6.0.0-rc1, v6.0.0-rc0

# 6f1a5c37 | 17-Dec-2020 | Maxim Levitsky <mlevitsk@redhat.com>
virtio-scsi: don't process IO on fenced dataplane
If virtio_scsi_dataplane_start fails, there is a small window when it drops the aio lock (in aio_wait_bh_oneshot) and the dataplane's AIO handler can still run during that window.
Since the lock is dropped after the dataplane was marked as fenced, use that flag to prevent the handler from doing any IO.
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
Message-Id: <20201217150040.906961-2-mlevitsk@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
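
The guard is conceptually an early return at the top of the handler (dataplane_fenced is the flag named in the commit; the handler body is assumed):

    static void virtio_scsi_data_plane_handle_cmd(VirtIODevice *vdev,
                                                  VirtQueue *vq)
    {
        VirtIOSCSI *s = VIRTIO_SCSI(vdev);

        if (s->dataplane_fenced) {
            return;   /* startup failed; do not touch the queues */
        }
        /* normal request processing follows */
    }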

# dec2bb14 | 17-Dec-2020 | Maxim Levitsky <mlevitsk@redhat.com>
virtio-scsi: don't uninitialize queues that we didn't initialize
Count the number of queues that we initialized and only deinitialize those that were initialized successfully.
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
Message-Id: <20201217150040.906961-3-mlevitsk@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
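
A compilable sketch of the partial-cleanup pattern (init_queue() and uninit_queue() are illustrative stand-ins):

    static int init_queue(int i)    { return 0; }   /* stand-in */
    static void uninit_queue(int i) { }             /* stand-in */

    static int setup_queues(int nvqs)
    {
        int i;

        for (i = 0; i < nvqs; i++) {
            if (init_queue(i) < 0) {
                goto fail;
            }
        }
        return 0;

    fail:
        /* Tear down only the i queues that were set up successfully. */
        while (i-- > 0) {
            uninit_queue(i);
        }
        return -1;
    }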

Revision tags: v5.2.0, v5.2.0-rc4, v5.2.0-rc3, v5.2.0-rc2, v5.2.0-rc1, v5.2.0-rc0, v5.0.1, v5.1.0, v5.1.0-rc3, v5.1.0-rc2, v5.1.0-rc1, v5.1.0-rc0, v4.2.1, v5.0.0, v5.0.0-rc4, v5.0.0-rc3, v5.0.0-rc2, v5.0.0-rc1, v5.0.0-rc0, v4.2.0, v4.2.0-rc5, v4.2.0-rc4, v4.2.0-rc3, v4.2.0-rc2, v4.1.1, v4.2.0-rc1, v4.2.0-rc0, v4.0.1, v3.1.1.1, v4.1.0, v4.1.0-rc5, v4.1.0-rc4, v3.1.1, v4.1.0-rc3, v4.1.0-rc2, v4.1.0-rc1, v4.1.0-rc0, v4.0.0, v4.0.0-rc4, v3.0.1, v4.0.0-rc3, v4.0.0-rc2, v4.0.0-rc1, v4.0.0-rc0, v3.1.0, v3.1.0-rc5, v3.1.0-rc4, v3.1.0-rc3, v3.1.0-rc2, v3.1.0-rc1, v3.1.0-rc0, v3.0.0, v3.0.0-rc4, v2.12.1, v3.0.0-rc3, v3.0.0-rc2, v3.0.0-rc1, v3.0.0-rc0, v2.11.2

# a1d30f28 | 13-Jun-2018 | Thomas Huth <thuth@redhat.com>
Replace '-enable-kvm' with '-accel kvm' in docs and help texts
The preferred way to select the KVM accelerator is to use "-accel kvm" these days, so let's be consistent in our documentation and help texts.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <1528866321-23886-3-git-send-email-thuth@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
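
As a minimal illustration (not taken from the commit itself), the two spellings below are equivalent, with the second being the preferred form:

    $ qemu-system-x86_64 -enable-kvm -m 1G disk.img
    $ qemu-system-x86_64 -accel kvm -m 1G disk.img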

Revision tags: v2.12.0, v2.12.0-rc4, v2.12.0-rc3, v2.12.0-rc2, v2.12.0-rc1, v2.12.0-rc0

# 184b9623 | 07-Mar-2018 | Stefan Hajnoczi <stefanha@redhat.com>
virtio-scsi: fix race between .ioeventfd_stop() and vq handler
If the main loop thread invokes .ioeventfd_stop() just as the vq handler function begins in the IOThread then the handler may lose the race for the AioContext lock. By the time the vq handler is able to acquire the AioContext lock the ioeventfd has already been removed and the handler isn't supposed to run anymore!
Use the new aio_wait_bh_oneshot() function to perform ioeventfd removal from within the IOThread. This way no races with the vq handler are possible.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 20180307144205.20619-4-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
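
Illustrative usage (aio_wait_bh_oneshot() is the real helper; the BH body and the s->ctx field are assumed from context):

    /* Runs in the IOThread, so it cannot race with the vq handler. */
    static void dataplane_stop_bh(void *opaque)
    {
        VirtIOSCSI *s = opaque;
        /* detach the ioeventfd handlers here */
    }

    /* Called from the main loop thread: */
    aio_wait_bh_oneshot(s->ctx, dataplane_stop_bh, s);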

Revision tags: v2.11.1

# 76143618 | 29-Jan-2018 | Gal Hammer <ghammer@redhat.com>
virtio: remove event notifier cleanup call on de-assign
The virtio_bus_set_host_notifier function no longer calls event_notifier_cleanup when an event notifier is removed.
The commit updates the code to match the new behavior and calls virtio_bus_cleanup_host_notifier after the notifier has been de-assigned and is no longer in use.
This change is a preparation to allow executing the virtio_bus_set_host_notifier function in a memory region transaction.
Signed-off-by: Gal Hammer <ghammer@redhat.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Tested-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

Revision tags: v2.10.2, v2.11.0, v2.11.0-rc5, v2.11.0-rc4, v2.11.0-rc3, v2.11.0-rc2, v2.11.0-rc1, v2.11.0-rc0, v2.10.1, v2.9.1, v2.10.0, v2.10.0-rc4

# 08e2c9f1 | 22-Aug-2017 | Paolo Bonzini <pbonzini@redhat.com>
scsi: move block/scsi.h to include/scsi/constants.h
Complete the transition by renaming this header, which was shared by block/iscsi.c and the SCSI emulation code.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>