History log of /qemu/migration/file.c (Results 1 – 25 of 40)
Revision Date Author Comments
Revision tags: v9.0.0-rc2, v9.0.0-rc1
# bd4480b0 19-Mar-2024 Fabiano Rosas <farosas@suse.de>

migration: Revert mapped-ram multifd support to fd: URI

This reverts commit decdc76772c453ff1444612e910caa0d45cd8eac in full
and also the relevant migration-tests from
7a09f092834641b7a793d50a3a261073bbb404a6.

After the addition of the new QAPI-based migration address API in 8.2
we've been converting an "fd:" URI into a SocketAddress, missing the
fact that the "fd:" syntax could also be used for a plain file instead
of a socket. This is a problem because the SocketAddress is part of
the API, so we're effectively asking users to create a "socket"
channel to pass in a plain file.

The easiest way to fix this situation is to deprecate the usage of
both SocketAddress and "fd:" when used with a plain file for
migration. Since this has been possible since 8.2, we can wait until
9.1 to deprecate it.

For 9.0, however, we should avoid adding further support to migration
to a plain file using the old "fd:" syntax or the new SocketAddress
API, and instead require the usage of either the old-style "file:" URI
or the FileMigrationArgs::filename field of the new API with the
"/dev/fdset/NN" syntax, both of which are already supported.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
Link: https://lore.kernel.org/r/20240319210941.1907-1-farosas@suse.de
Signed-off-by: Peter Xu <peterx@redhat.com>
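
For reference, the two already-supported alternatives named above look roughly like this on HMP (the file name and fdset number are placeholders; the fdset variant assumes the fd was added to fdset 1 beforehand, e.g. via QMP add-fd):

    (qemu) migrate file:/path/to/migfile
    (qemu) migrate file:/dev/fdset/1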


Revision tags: v9.0.0-rc0
# 73f6f9a1 15-Mar-2024 Fabiano Rosas <farosas@suse.de>

migration/multifd: Ensure we're not given a socket for file migration

When doing migration using the fd: URI, QEMU will fetch the file
descriptor passed in via the monitor at
fd_start_outgoing|incoming_migration(), which means the checks at
migration_channels_and_transport_compatible() happen too soon and we
don't know at that point whether the FD refers to a plain file or a
socket.

For this reason, we've been allowing a migration channel of type
SOCKET_ADDRESS_TYPE_FD to pass the initial verifications in scenarios
where the socket migration is not supported, such as with fd + multifd.

The commit decdc76772 ("migration/multifd: Add mapped-ram support to
fd: URI") was supposed to add a second check prior to starting
migration to make sure a socket fd is not passed instead of a file fd,
but failed to do so.

Add the missing verification and update the comment explaining this
situation which is currently incorrect.

Fixes: decdc76772 ("migration/multifd: Add mapped-ram support to fd: URI")
Signed-off-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Peter Xu <peterx@redhat.com>
Link: https://lore.kernel.org/r/20240315032040.7974-2-farosas@suse.de
Signed-off-by: Peter Xu <peterx@redhat.com>
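
A minimal, self-contained sketch of the kind of verification described above (illustrative only, not the actual QEMU code, which goes through QIOChannel; the fd is assumed to come from the monitor):

    #include <stdbool.h>
    #include <sys/stat.h>

    /* Decide whether a monitor-supplied fd can be used for file migration.
     * A socket fd must be rejected here because the mapped-ram/multifd
     * file code needs a seekable regular file. */
    static bool fd_is_plain_file(int fd)
    {
        struct stat st;

        if (fstat(fd, &st) < 0) {
            return false;               /* can't tell, treat as unsupported */
        }
        return S_ISREG(st.st_mode);     /* sockets, pipes, etc. fail this */
    }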


# 74228c59 13-Mar-2024 Fabiano Rosas <farosas@suse.de>

migration: Fix iocs leaks during file and fd migration

The memory for the io channels is being leaked in three different ways
during file migration:

1) if the offset check fails we never drop the ioc reference;

2) we allocate an extra channel for no reason;

3) if multifd is enabled but channel creation fails when calling
dup(), we leave the previous channels around along with the glib
polling;

Fix all issues by restructuring the code to first allocate the
channels and only register the watches when all channels have been
created.

For multifd, the file and fd migrations can share code because both
are backed by a QIOChannelFile. For the non-multifd case, the fd needs
to be separate because it is backed by a QIOChannelSocket.

Fixes: 2dd7ee7a51 ("migration/multifd: Add incoming QIOChannelFile support")
Fixes: decdc76772 ("migration/multifd: Add mapped-ram support to fd: URI")
Reported-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
Link: https://lore.kernel.org/r/20240313212824.16974-2-farosas@suse.de
Signed-off-by: Peter Xu <peterx@redhat.com>
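
The restructuring described above follows a common create-everything-then-register pattern; a simplified sketch using plain fds instead of QIOChannels (function and variable names are made up for illustration):

    #include <stdbool.h>
    #include <unistd.h>

    /* Create every channel first and undo everything on the first failure;
     * the caller registers I/O watches only after this returns true, so no
     * half-set-up state is leaked. */
    static bool dup_all_channels(int src_fd, int *out_fds, int n)
    {
        for (int i = 0; i < n; i++) {
            out_fds[i] = dup(src_fd);
            if (out_fds[i] < 0) {
                while (i--) {
                    close(out_fds[i]);  /* release what was already created */
                }
                return false;
            }
        }
        return true;
    }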


# c827fafc 11-Mar-2024 Fabiano Rosas <farosas@suse.de>

migration: Fix error handling after dup in file migration

The file migration code was allowing a possible -1 from a failed call
to dup() to propagate into the new QIOFileChannel::fd before checking
for validity. Coverity doesn't like that, possibly due to the
lseek(-1, ...) call that would ensue before returning from the channel
creation routine.

Use the newly introduced qio_channel_file_dupfd() to properly check
the return of dup() before proceeding.

Fixes: CID 1539961
Fixes: CID 1539965
Fixes: CID 1539960
Fixes: 2dd7ee7a51 ("migration/multifd: Add incoming QIOChannelFile support")
Fixes: decdc76772 ("migration/multifd: Add mapped-ram support to fd: URI")
Reported-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: "Daniel P. Berrangé" <berrange@redhat.com>
Link: https://lore.kernel.org/r/20240311233335.17299-3-farosas@suse.de
Signed-off-by: Peter Xu <peterx@redhat.com>
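
The pitfall in plain POSIX terms (an illustrative sketch; the actual fix goes through the QIOChannelFile helper named above):

    #include <stdio.h>
    #include <unistd.h>

    /* dup() returns -1 on failure, and that value must be caught before it
     * is stored as a channel fd or handed to lseek(). */
    static int dup_checked(int fd)
    {
        int duped = dup(fd);

        if (duped < 0) {
            perror("dup");
            return -1;                  /* caller sees the error explicitly */
        }
        return duped;                   /* safe to lseek()/wrap from here on */
    }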


# 44fe138e 11-Mar-2024 Fabiano Rosas <farosas@suse.de>

migration/multifd: Allow zero pages in file migration

Currently, it's an error to have no data pages in the multifd file
migration because zero page detection is done in the migration thread
and zero pages don't reach multifd. This is enforced with the
pages->num assert.

We're about to add zero page detection on the multifd thread. Fix the
file_write_ramblock_iov() to stop considering p->iovs_num=0 an error.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
Link: https://lore.kernel.org/r/20240311180015.3359271-2-hao.xiang@linux.dev
Signed-off-by: Peter Xu <peterx@redhat.com>
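
Schematically, the change amounts to treating an empty iovec list as "nothing to write" rather than asserting (hypothetical helper, not the actual file_write_ramblock_iov()):

    #include <stdbool.h>
    #include <stddef.h>

    /* An empty iovec list now means "every page in this batch was detected
     * as a zero page", which is a successful no-op, not an error. */
    static bool flush_data_pages(size_t iovs_num)
    {
        if (iovs_num == 0) {
            return true;                /* nothing to write, not an error */
        }
        /* ... otherwise perform the actual positioned write ... */
        return true;
    }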


# a1bb5dd1 11-Mar-2024 Anthony PERARD <anthony.perard@citrix.com>

migration: Fix format in error message

In file_write_ramblock_iov(), "offset" is "uintptr_t" and not
"ram_addr_t". While usually they are both equivalent, this is not the
case with CONFIG_XEN_BACKEND.

Use the right format. This will fix build on 32-bit.

Fixes: f427d90b9898 ("migration/multifd: Support outgoing mapped-ram stream format")
Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
Link: https://lore.kernel.org/r/20240311123439.16844-1-anthony.perard@citrix.com
Signed-off-by: Peter Xu <peterx@redhat.com>
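
For reference, the portable way to print a uintptr_t in a message (a generic illustration, not the exact QEMU line):

    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        uintptr_t offset = 0x100000;

        /* "%" PRIxPTR always matches uintptr_t, including on 32-bit hosts
         * where ram_addr_t and uintptr_t can have different widths. */
        printf("write failed at offset 0x%" PRIxPTR "\n", offset);
        return 0;
    }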


Revision tags: v8.2.2, v7.2.10
# decdc767 29-Feb-2024 Fabiano Rosas <farosas@suse.de>

migration/multifd: Add mapped-ram support to fd: URI

If we receive a file descriptor that points to a regular file, there's
nothing stopping us from doing multifd migration with mapped-ram to
that file.

Enable the fd: URI to work with multifd + mapped-ram.

Note that the fds passed into multifd are duplicated because we want
to avoid cross-thread effects when doing cleanup (i.e. close(fd)). The
original fd doesn't need to be duplicated because monitor_get_fd()
transfers ownership to the caller.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Peter Xu <peterx@redhat.com>
Link: https://lore.kernel.org/r/20240229153017.2221-23-farosas@suse.de
Signed-off-by: Peter Xu <peterx@redhat.com>


# a49d15a3 29-Feb-2024 Fabiano Rosas <farosas@suse.de>

migration/multifd: Support incoming mapped-ram stream format

For the incoming mapped-ram migration we need to read the ramblock
headers, get the pages bitmap and send the host address of each
non-zero page to the multifd channel thread for writing.

Usage on HMP is:

(qemu) migrate_set_capability multifd on
(qemu) migrate_set_capability mapped-ram on
(qemu) migrate_incoming file:migfile

(the ram.h include needs to move because we've been previously relying
on it being included from migration.c. Now file.h will start including
multifd.h before migration.o is processed)

Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
Link: https://lore.kernel.org/r/20240229153017.2221-22-farosas@suse.de
Signed-off-by: Peter Xu <peterx@redhat.com>
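
A simplified, single-threaded sketch of the receive path described above, with made-up names and an assumed 4 KiB page size (the real code hands the host addresses to multifd channel threads instead of reading inline):

    #include <stddef.h>
    #include <stdint.h>
    #include <unistd.h>

    #define PAGE_SIZE 4096              /* assumption for the sketch */

    /* Walk the ramblock's page bitmap and read every present page from its
     * fixed offset in the migration file into the matching host address. */
    static int load_mapped_ram_block(int fd, uint8_t *host_base,
                                     const unsigned long *bitmap,
                                     size_t nr_pages, off_t pages_offset)
    {
        for (size_t i = 0; i < nr_pages; i++) {
            size_t w = i / (8 * sizeof(unsigned long));
            size_t b = i % (8 * sizeof(unsigned long));

            if (!(bitmap[w] >> b & 1)) {
                continue;               /* zero page: leave the memory as-is */
            }
            if (pread(fd, host_base + i * PAGE_SIZE, PAGE_SIZE,
                      pages_offset + (off_t)(i * PAGE_SIZE)) != PAGE_SIZE) {
                return -1;
            }
        }
        return 0;
    }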


# f427d90b 29-Feb-2024 Fabiano Rosas <farosas@suse.de>

migration/multifd: Support outgoing mapped-ram stream format

The new mapped-ram stream format uses a file transport and puts ram
pages in the migration file at their respective offsets and can be
done in parallel by using the pwritev system call which takes iovecs
and an offset.

Add support for enabling the new format along with multifd to make use
of the threading and page handling already in place.

This requires multifd to stop sending headers and leave the stream
format to the mapped-ram code. When it comes time to write the data, we
need to call a version of qio_channel_write that can take an offset.

Usage on HMP is:

(qemu) stop
(qemu) migrate_set_capability multifd on
(qemu) migrate_set_capability mapped-ram on
(qemu) migrate_set_parameter max-bandwidth 0
(qemu) migrate_set_parameter multifd-channels 8
(qemu) migrate file:migfile

Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
Link: https://lore.kernel.org/r/20240229153017.2221-21-farosas@suse.de
Signed-off-by: Peter Xu <peterx@redhat.com>
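
The parallelism rests on pwritev(2), which takes an explicit file offset and never touches the shared file position; a minimal sketch (hypothetical wrapper, not the real write path, and a real implementation also has to handle short writes):

    #include <sys/types.h>
    #include <sys/uio.h>

    /* Write one batch of pages at a fixed location in the migration file.
     * Because the offset is explicit, several channel threads can issue
     * such writes on the same fd concurrently without racing on the file
     * position. */
    static ssize_t write_batch_at(int fd, const struct iovec *iov, int iovcnt,
                                  off_t file_offset)
    {
        return pwritev(fd, iov, iovcnt, file_offset);
    }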


# 2dd7ee7a 29-Feb-2024 Fabiano Rosas <farosas@suse.de>

migration/multifd: Add incoming QIOChannelFile support

On the receiving side we don't need to differentiate between main
channel and threads, so whichever channel is defined first gets to be
the main one. And since there are no packets, use the atomic channel
count to index into the params array.

Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
Link: https://lore.kernel.org/r/20240229153017.2221-19-farosas@suse.de
Signed-off-by: Peter Xu <peterx@redhat.com>
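
The "atomic channel count" idea boils down to a fetch-and-increment per incoming connection; a standalone C11 sketch (QEMU uses its own atomics helpers, and the names here are made up):

    #include <stdatomic.h>

    static atomic_int created_channels;

    /* Every incoming file-backed connection takes the next slot: slot 0
     * becomes the main channel, the rest become multifd threads.
     * atomic_fetch_add returns the previous value, so indices stay unique
     * even when connections are accepted concurrently. */
    static int next_channel_index(void)
    {
        return atomic_fetch_add(&created_channels, 1);
    }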


# b7b03eb6 29-Feb-2024 Fabiano Rosas <farosas@suse.de>

migration/multifd: Add outgoing QIOChannelFile support

Allow multifd to open file-backed channels. This will be used when
enabling the mapped-ram migration stream format which expects a
seekable transport.

The QIOChannel read and write methods will use the preadv/pwritev
versions which don't update the file offset at each call so we can
reuse the fd without re-opening for every channel.

Contrary to the socket migration, the file migration doesn't need an
asynchronous channel creation process, so expose
multifd_channel_connect() and call it directly.

Note that this is just setup code and multifd cannot yet make use of
the file channels.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Peter Xu <peterx@redhat.com>
Link: https://lore.kernel.org/r/20240229153017.2221-18-farosas@suse.de
Signed-off-by: Peter Xu <peterx@redhat.com>


# d117ed06 29-Feb-2024 Fabiano Rosas <farosas@suse.de>

migration/multifd: Allow receiving pages without packets

Currently multifd does not need to have knowledge of pages on the
receiving side because all the information needed is within the
packets that come in the stream.

We're about to add support to mapped-ram migration, which cannot use
packets because it expects the ramblock section in the migration file
to contain only the guest pages data.

Add a data structure to transfer pages between the ram migration code
and the multifd receiving threads.

We don't want to reuse MultiFDPages_t for two reasons:

a) multifd threads don't really need to know about the data they're
receiving.

b) the receiving side has to be stopped to load the pages, which means
we can experiment with larger granularities than page size when
transferring data.

Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
Link: https://lore.kernel.org/r/20240229153017.2221-16-farosas@suse.de
Signed-off-by: Peter Xu <peterx@redhat.com>
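
The hand-off structure only needs to describe where data goes and how much of it there is; a hypothetical minimal layout (names invented for illustration, not the actual struct introduced by this commit):

    #include <stddef.h>
    #include <sys/types.h>

    /* Unlike MultiFDPages_t, this carries plain destination addresses: the
     * receiving threads do not need to know which ramblock or guest offset
     * the data belongs to, and the granularity may be larger than a page. */
    typedef struct {
        void  *host_addr;       /* where the thread should place the data */
        size_t size;            /* how much data to transfer */
        off_t  file_offset;     /* where it lives in the migration file */
    } RecvDataHandoff;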


Revision tags: v8.2.1, v8.1.5, v7.2.9, v8.1.4, v7.2.8, v8.2.0, v8.2.0-rc4, v8.2.0-rc3, v8.2.0-rc2, v8.2.0-rc1, v7.2.7, v8.1.3, v8.2.0-rc0
# 02afba63 23-Oct-2023 Fabiano Rosas <farosas@suse.de>

migration: Convert the file backend to the new QAPI syntax

Convert the file: URI to accept a FileMigrationArgs to be compatible
with the new migration QAPI.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20231023182053.8711-9-farosas@suse.de>
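
For comparison, a file migration expressed with the new QAPI arguments looks roughly like this on QMP (the file name is a placeholder and the offset is optional; the exact wire shape follows the 8.2 channels-based API):

    { "execute": "migrate",
      "arguments": {
        "channels": [
          { "channel-type": "main",
            "addr": { "transport": "file",
                      "filename": "/tmp/migfile",
                      "offset": 0 } } ] } }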


# 72a8192e 23-Oct-2023 Het Gala <het.gala@nutanix.com>

migration: convert migration 'uri' into 'MigrateAddress'

This patch parses 'migrate' and 'migrate-incoming' QAPI's 'uri'
string containing migration connection related information
and stores them inside well defined 'MigrateAddress' struct.

Fabiano fixed for "file" transport.

Suggested-by: Aravind Retnakaran <aravind.retnakaran@nutanix.com>
Signed-off-by: Het Gala <het.gala@nutanix.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20231023182053.8711-4-farosas@suse.de>
Message-ID: <20231023182053.8711-5-farosas@suse.de>
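
Conceptually the conversion is a prefix dispatch from the URI string into a tagged structure; a self-contained sketch with hypothetical types (the real code fills the QAPI-generated MigrateAddress and handles more transports):

    #include <stdbool.h>
    #include <string.h>

    /* Hypothetical stand-in for the QAPI-generated MigrateAddress. */
    typedef enum { ADDR_SOCKET, ADDR_FILE, ADDR_FD } AddrKind;
    typedef struct {
        AddrKind kind;
        const char *value;      /* filename, host:port, fd name, ... */
    } Addr;

    /* Map the old-style URI prefix onto the structured address. */
    static bool parse_uri(const char *uri, Addr *addr)
    {
        if (strncmp(uri, "file:", 5) == 0) {
            addr->kind = ADDR_FILE;
            addr->value = uri + 5;
        } else if (strncmp(uri, "fd:", 3) == 0) {
            addr->kind = ADDR_FD;
            addr->value = uri + 3;
        } else if (strncmp(uri, "tcp:", 4) == 0 ||
                   strncmp(uri, "unix:", 5) == 0) {
            addr->kind = ADDR_SOCKET;
            addr->value = uri;
        } else {
            return false;       /* unknown transport */
        }
        return true;
    }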

