9b562c64 | 24-Sep-2020 | Kevin Wolf <kwolf@redhat.com>
block/export: Remove magic from block-export-add
nbd-server-add tries to be convenient and adds two questionable features that we don't want to share in block-export-add, even for NBD exports:
1. When requesting a writable export of a read-only device, the export is silently downgraded to read-only. This should be an error in the context of block-export-add.
2. When using a BlockBackend name, unplugging the device from the guest will automatically stop the NBD server, too. This may sometimes be what you want, but it could also be very surprising. Let's keep things explicit with block-export-add. If the user wants to stop the export, they should tell us so.
Move these things into the nbd-server-add QMP command handler so that they apply only there.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20200924152717.287415-8-kwolf@redhat.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
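As a sketch of where the convenience logic ends up, the compatibility handler might look roughly like this (qmp_nbd_server_add and NbdServerAddOptions are real QEMU identifiers; the body is abridged and illustrative, not the actual code):

    /* Illustrative sketch: the two conveniences stay in the nbd-server-add
     * handler; the generic block-export-add path reports errors instead. */
    void qmp_nbd_server_add(NbdServerAddOptions *arg, Error **errp)
    {
        BlockBackend *on_eject_blk = blk_by_name(arg->device);

        /* 1. Writable export of a read-only device: silently downgrade
         *    here; block-export-add makes the same request an error. */
        if (arg->writable && on_eject_blk && blk_is_read_only(on_eject_blk)) {
            arg->writable = false;
        }

        /* ... create the export through the generic export machinery ... */

        /* 2. Register an eject notifier on the BlockBackend so that
         *    unplugging the device also stops this NBD export. */
    }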
b57e4de0 | 24-Sep-2020 | Kevin Wolf <kwolf@redhat.com>
qemu-nbd: Use raw block driver for --offset
Instead of implementing qemu-nbd --offset in the NBD code, just put a raw block node with the requested offset on top of the user image and rely on that doing the job.
This not only simplifies the nbd_export_new() interface and brings it closer to the set of options that the nbd-server-add QMP command offers, but it also eliminates a potential source of bugs in the NBD code, which previously had to add the offset manually in all relevant places.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20200924152717.287415-7-kwolf@redhat.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
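The mechanism can be sketched in a few lines (assuming QEMU's QDict and bdrv_open() APIs; variable names are illustrative):

    /* Put a raw format node with the requested offset on top of the user
     * image; the NBD code then sees offset 0 and needs no adjustment. */
    QDict *raw_opts = qdict_new();
    qdict_put_str(raw_opts, "driver", "raw");
    qdict_put_str(raw_opts, "file", bdrv_get_node_name(bs));
    qdict_put_int(raw_opts, "offset", dev_offset);
    bs = bdrv_open(NULL, NULL, raw_opts, flags, &local_err);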
56ee8626 | 24-Sep-2020 | Kevin Wolf <kwolf@redhat.com>
block/export: Add BlockExport infrastructure and block-export-add
We want to have a common set of commands for all types of block exports. Currently, this is only NBD, but we're going to add more types.
This patch adds the basic BlockExport and BlockExportDriver structs and a QMP command block-export-add that creates a new export based on the given BlockExportOptions.
qmp_nbd_server_add() becomes a wrapper around qmp_block_export_add().
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20200924152717.287415-5-kwolf@redhat.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
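The rough shape of the new structs (abridged and illustrative, not the exact definitions):

    typedef struct BlockExport BlockExport;

    typedef struct BlockExportDriver {
        BlockExportType type;   /* e.g. BLOCK_EXPORT_TYPE_NBD */
        /* Create a new export from the QAPI-described options. */
        BlockExport *(*create)(BlockExportOptions *opts, Error **errp);
    } BlockExportDriver;

    struct BlockExport {
        const BlockExportDriver *drv;   /* per-type operations */
        int refcount;                   /* lifetime management */
    };

qmp_block_export_add() looks up the driver matching the options' type and calls its create() callback; qmp_nbd_server_add() simply builds BlockExportOptions of type NBD and forwards them.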
dacbb6eb | 05-Feb-2020 | Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
nbd/server: use bdrv_dirty_bitmap_next_dirty_area
Use bdrv_dirty_bitmap_next_dirty_area for bitmap_to_extents. Since bdrv_dirty_bitmap_next_dirty_area is precise in its interface, the last chunk will never exceed the requested region. So we don't need dont_fragment, and the bitmap_to_extents() interface becomes clean enough to not require any comment.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20200205112041.6003-10-vsementsov@virtuozzo.com
Signed-off-by: John Snow <jsnow@redhat.com>
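A sketch of the resulting extent loop (the add_extent() helper and the exact bdrv_dirty_bitmap_next_dirty_area() signature are approximations, not QEMU's verbatim interface):

    /* Walk [offset, end): next_dirty_area never reports an area past
     * 'end', so no dont_fragment handling is needed for the last chunk. */
    int64_t start = offset;
    int64_t dirty_start, dirty_bytes;

    while (bdrv_dirty_bitmap_next_dirty_area(bitmap, start, end,
                                             &dirty_start, &dirty_bytes)) {
        add_extent(ea, dirty_start - start, 0);        /* clean gap */
        add_extent(ea, dirty_bytes, NBD_STATE_DIRTY);  /* dirty area */
        start = dirty_start + dirty_bytes;
    }
    add_extent(ea, end - start, 0);                    /* trailing clean */

A real implementation would also skip zero-length extents.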
89cbc7e3 | 05-Feb-2020 | Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
nbd/server: introduce NBDExtentArray
Introduce the NBDExtentArray class to handle extent list creation in a more controlled way and with fewer OUT parameters in functions.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20200205112041.6003-9-vsementsov@virtuozzo.com
Signed-off-by: John Snow <jsnow@redhat.com>
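An approximate shape of the class (illustrative, not the verbatim definition):

    typedef struct NBDExtentArray {
        NBDExtent *extents;      /* (length, flags) pairs, wire format */
        unsigned int nb_alloc;   /* allocated slots */
        unsigned int count;      /* used slots */
        uint64_t total_length;   /* sum of lengths added so far */
        bool can_add;            /* cleared once the array is sealed */
    } NBDExtentArray;

    /* Appends an extent, merging with the previous one when the flags
     * match, so callers no longer track the current extent themselves. */
    int nbd_extent_array_add(NBDExtentArray *ea, uint64_t length,
                             uint32_t flags);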
93676c88 | 14-Nov-2019 | Eric Blake <eblake@redhat.com>
nbd: Don't send oversize strings
Qemu as server currently won't accept export names larger than 256 bytes, nor create dirty bitmap names longer than 1023 bytes, so most uses of qemu as client or server have no reason to get anywhere near the NBD spec maximum of a 4k limit per string.
However, we weren't actually enforcing these limits: we ignored cases where the remote side violates the protocol on input, and we had several code paths where we could send oversize strings on output (for example, qemu-nbd --description could easily send more than 4k). Tighten things up as follows:
client:
- Perform bounds check on export name and dirty bitmap request prior to handing it to server
- Validate that copied server replies are not too long (ignoring NBD_INFO_* replies that are not copied is not too bad)

server:
- Perform bounds check on export name and description prior to advertising it to client
- Reject client name or metadata query that is too long
- Adjust things to allow full 4k name limit rather than previous 256 byte limit
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20191114024635.11363-4-eblake@redhat.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
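The checks reduce to comparisons against one constant (the name below matches what one would expect QEMU to use for the spec's 4k limit; treat it as illustrative):

    #define NBD_MAX_STRING_SIZE 4096   /* NBD spec limit per string */

    /* Client side: refuse to send an oversize export name. */
    if (strlen(export_name) > NBD_MAX_STRING_SIZE) {
        error_setg(errp, "export name too long to send to server");
        return -EINVAL;
    }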
61bc846d | 17-Sep-2019 | Eric Blake <eblake@redhat.com>
nbd: Grab aio context lock in more places
When iothreads are in use, the failure to grab the aio context results in an assertion failure during blk_unref, when trying to unlock a mutex that was not locked. In short, all calls to nbd_export_put need to be done while within the correct aio context. But since nbd_export_put can recursively reach itself via nbd_export_close, and recursively grabbing the context would deadlock, we can't do the context grab directly in those functions, but must do so in their callers.
Hoist the use of the correct aio_context from nbd_export_new() to its caller qmp_nbd_server_add(). Then tweak qmp_nbd_server_remove(), nbd_eject_notifier(), and nbd_export_close_all() to grab the right context, so that all callers during qemu now own the context before nbd_export_put() can call blk_unref().
Remaining uses in qemu-nbd don't matter (since that use case does not support iothreads).
Suggested-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20190917023917.32226-1-eblake@redhat.com>
Reviewed-by: Sergio Lopez <slp@redhat.com>
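The caller-side pattern can be sketched as follows (using the aio_context_acquire()/release() API QEMU had at the time; the export variable is abridged):

    AioContext *aio_context = blk_get_aio_context(exp->blk);

    /* Own the export's context before anything that can reach
     * blk_unref() via nbd_export_put(). */
    aio_context_acquire(aio_context);
    nbd_export_put(exp);   /* may recurse into nbd_export_close() */
    aio_context_release(aio_context);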
b4961249 | 12-Sep-2019 | Sergio Lopez <slp@redhat.com>
nbd/server: attach client channel to the export's AioContext
On creation, the export's AioContext is set to the same one as the BlockBackend, while the AioContext in the client QIOChannel is left untouched.
As a result, when using data-plane, nbd_client_receive_next_request() schedules coroutines in the IOThread AioContext, while the client's QIOChannel is serviced from the main_loop, potentially triggering the assertion at qio_channel_restart_[read|write].
To fix this, as soon as we have the export corresponding to the client, we call qio_channel_attach_aio_context() to attach the QIOChannel context to the export's AioContext. This matches the logic in blk_aio_attached().
RHBZ: https://bugzilla.redhat.com/show_bug.cgi?id=1748253
Signed-off-by: Sergio Lopez <slp@redhat.com>
Message-Id: <20190912110032.26395-1-slp@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
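A minimal sketch of the fix, assuming the structures named in the message (the wrapper function name is hypothetical; client->ioc is the client's QIOChannel):

    /* Once the client is bound to an export, service its channel from
     * the export's AioContext instead of the main loop. */
    static void nbd_attach_client_to_export(NBDClient *client, NBDExport *exp)
    {
        AioContext *ctx = blk_get_aio_context(exp->blk);

        qio_channel_attach_aio_context(client->ioc, ctx);
    }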
1b5c15ce | 07-Sep-2019 | Eric Blake <eblake@redhat.com>
nbd/client: Add hint when TLS is missing
I received an off-list report of failure to connect to an NBD server expecting an x509 certificate, when the client was attempting something similar to this command line:
$ ./x86_64-softmmu/qemu-system-x86_64 -name 'blah' -machine q35 -nodefaults \
  -object tls-creds-x509,id=tls0,endpoint=client,dir=$path_to_certs \
  -device virtio-scsi-pci,id=virtio_scsi_pci0,bus=pcie.0,addr=0x6 \
  -drive id=drive_image1,if=none,snapshot=off,aio=threads,cache=none,format=raw,file=nbd:localhost:9000,werror=stop,rerror=stop,tls-creds=tls0 \
  -device scsi-hd,id=image1,drive=drive_image1,bootindex=0
qemu-system-x86_64: -drive id=drive_image1,if=none,snapshot=off,aio=threads,cache=none,format=raw,file=nbd:localhost:9000,werror=stop,rerror=stop,tls-creds=tls0: TLS negotiation required before option 7 (go)
server reported: Option 0x7 not permitted before TLS
The problem? As specified, -drive is trying to pass tls-creds to the raw format driver instead of the nbd protocol driver, but before we get to the point where we can detect that raw doesn't know what to do with tls-creds, the nbd driver has already failed because the server complained. The fix to the broken command line? Pass '...,file.tls-creds=tls0' to ensure the tls-creds option is handed to nbd, not raw. But since the error message was rather cryptic, I'm trying to improve the error message.
With this patch, the error message adds a line:
qemu-system-x86_64: -drive id=drive_image1,if=none,snapshot=off,aio=threads,cache=none,format=raw,file=nbd:localhost:9000,werror=stop,rerror=stop,tls-creds=tls0: TLS negotiation required before option 7 (go)
Did you forget a valid tls-creds?
server reported: Option 0x7 not permitted before TLS
And with luck, someone grepping for that error message will find this commit message and figure out their command line mistake. Sadly, the only mention of file.tls-creds in our docs relates to an --image-opts use of PSK encryption with qemu-img as the client, rather than x509 certificate encryption with qemu-kvm as the client.
CC: Tingting Mao <timao@redhat.com>
CC: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20190907172055.26870-1-eblake@redhat.com>
[eblake: squash in iotest 233 fix]
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
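The hint itself amounts to one extra call in the error path (a sketch; the surrounding condition names are hypothetical, while error_append_hint() and nbd_opt_lookup() are real QEMU helpers):

    if (option_requires_tls && !tls_established) {
        error_setg(errp, "TLS negotiation required before option %" PRIu32
                   " (%s)", option, nbd_opt_lookup(option));
        error_append_hint(errp, "Did you forget a valid tls-creds?\n");
        return -EINVAL;
    }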
b491dbb7 | 23-Aug-2019 | Eric Blake <eblake@redhat.com>
nbd: Implement server use of NBD FAST_ZERO
The server side is fairly straightforward: we can always advertise support for detection of fast zero, and implement it by mapping the request to the block layer BDRV_REQ_NO_FALLBACK.
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20190823143726.27062-5-eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
[eblake: update iotests 223, 233]
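The mapping is essentially one flag translation in the WRITE_ZEROES path (a sketch with abridged surrounding code; the named flags and blk_pwrite_zeroes() are real QEMU identifiers):

    int flags = 0;

    if (request->flags & NBD_CMD_FLAG_FUA) {
        flags |= BDRV_REQ_FUA;
    }
    if (request->flags & NBD_CMD_FLAG_FAST_ZERO) {
        /* Fail fast (ENOTSUP) rather than fall back to slow writes. */
        flags |= BDRV_REQ_NO_FALLBACK;
    }
    ret = blk_pwrite_zeroes(exp->blk, request->from, request->len, flags);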
0a479545 | 23-Aug-2019 | Eric Blake <eblake@redhat.com>
nbd: Prepare for NBD_CMD_FLAG_FAST_ZERO
Commit fe0480d6 and friends added BDRV_REQ_NO_FALLBACK as a way to avoid wasting time on a preliminary write-zero request that will later be rewritten by actual data, if it is known that the write-zero request will use a slow fallback; but in doing so, could not optimize for NBD. The NBD specification is now considering an extension that will allow passing on those semantics; this patch updates the new protocol bits and 'qemu-nbd --list' output to recognize the bit, as well as the new errno value possible when using the new flag; while upcoming patches will improve the client to use the feature when present, and the server to advertise support for it.
The NBD spec recommends (but does not require) that ENOTSUP be avoided for all but failures of a fast zero (the only time it is mandatory to avoid an ENOTSUP failure is when fast zero is supported but not requested during write zeroes; the questionable use is returning ENOTSUP for other actions like a normal write request). However, clients that get an unexpected ENOTSUP will either already be treating it the same as EINVAL, or may appreciate the extra bit of information. We were equally loose in returning EOVERFLOW in more situations than recommended by the spec, so if it turns out to be a problem in practice, a later patch can tighten handling for both error codes.
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20190823143726.27062-3-eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
[eblake: tweak commit message, also handle EOPNOTSUPP]
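The new protocol bits, as they appear in the spec extension (values per the NBD protocol draft; shown here for orientation rather than as QEMU's exact header):

    #define NBD_FLAG_SEND_FAST_ZERO (1 << 11) /* export: fast zero support */
    #define NBD_CMD_FLAG_FAST_ZERO  (1 << 4)  /* request: fail if not fast */
    #define NBD_ENOTSUP             95        /* new errno on the wire */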
dbb38caa | 23-Aug-2019 | Eric Blake <eblake@redhat.com>
nbd: Improve per-export flag handling in server
When creating a read-only image, we are still advertising support for TRIM and WRITE_ZEROES to the client, even though the client should not be issuing those commands. But seeing this requires looking across multiple functions:
All callers to nbd_export_new() passed a single flag based solely on whether the export allows writes. Later, we then pass a constant set of flags to nbd_negotiate_options() (namely, the set of flags which we always support, at least for writable images), which is then further dynamically modified with NBD_FLAG_SEND_DF based on client requests for structured options. Finally, when processing NBD_OPT_EXPORT_NAME or NBD_OPT_EXPORT_GO we bitwise-or the original caller's flag with the runtime set of flags we've built up over several functions.
Let's refactor things to instead compute a baseline of flags as soon as possible which gets shared between multiple clients, in nbd_export_new(), and changing the signature for the callers to pass in a simpler bool rather than having to figure out flags. We can then get rid of the 'myflags' parameter to various functions, and instead refer to client for everything we need (we still have to perform a bitwise-OR for NBD_FLAG_SEND_DF during NBD_OPT_EXPORT_NAME and NBD_OPT_EXPORT_GO, but it's easier to see what is being computed). This lets us quit advertising senseless flags for read-only images, as well as making the next patch for exposing FAST_ZERO support easier to write.
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20190823143726.27062-2-eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
[eblake: improve commit message, update iotest 223]
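The refactoring can be sketched as computing the baseline once in nbd_export_new() (flag list abridged; the exact set lives in QEMU's nbd/server.c):

    uint16_t flags = NBD_FLAG_HAS_FLAGS | NBD_FLAG_SEND_FLUSH |
                     NBD_FLAG_SEND_FUA;

    if (readonly) {
        flags |= NBD_FLAG_READ_ONLY;   /* and no TRIM/WRITE_ZEROES */
    } else {
        flags |= NBD_FLAG_SEND_TRIM | NBD_FLAG_SEND_WRITE_ZEROES;
    }
    exp->nbdflags = flags;   /* shared baseline for every client */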