273e0003 | 15-Dec-2022 | Eugenio Pérez <eperezma@redhat.com>

vdpa: allocate SVQ array unconditionally

SVQ may or may not run in a device depending on runtime conditions (for example, whether the device can move CVQ to its own group or not).

Allocate the SVQ array unconditionally at startup, since it's hard to move this allocation elsewhere.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Message-Id: <20221215113144.322011-9-eperezma@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

258a0394 | 15-Dec-2022 | Eugenio Pérez <eperezma@redhat.com>

vdpa: move SVQ vring features check to net/

The next patches will start control SVQ if possible. However, we no longer know at qemu boot whether that will be possible.

Since the moved checks will already be evaluated at net/ to know whether it is OK to shadow CVQ, move them there.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Message-Id: <20221215113144.322011-8-eperezma@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

a585fad2 | 15-Dec-2022 | Eugenio Pérez <eperezma@redhat.com>

vdpa: request iova_range only once

Currently the iova range is requested once per queue pair in the case of net. Reduce the number of ioctls by asking for it once at initialization and reusing that value for each vhost_vdpa.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Message-Id: <20221215113144.322011-7-eperezma@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Jason Wang <jasonwang@redhat.com>
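
For context, a minimal sketch of the mechanism this entry is about, written against the Linux vhost-vdpa uAPI rather than QEMU's own code (device_fd is an assumed, already-open vhost-vdpa descriptor): the usable IOVA window is queried once and then cached, instead of repeating the ioctl for every queue pair.

    #include <sys/ioctl.h>
    #include <linux/vhost.h>        /* VHOST_VDPA_GET_IOVA_RANGE */
    #include <linux/vhost_types.h>  /* struct vhost_vdpa_iova_range */

    /* Ask the device for its usable IOVA range [first, last] once at
     * initialization; callers reuse the cached result afterwards. */
    static int vdpa_get_iova_range(int device_fd,
                                   struct vhost_vdpa_iova_range *range)
    {
        return ioctl(device_fd, VHOST_VDPA_GET_IOVA_RANGE, range);
    }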

5fde952b | 15-Dec-2022 | Eugenio Pérez <eperezma@redhat.com>

vhost: move iova_tree set to vhost_svq_start

Since we don't know at qemu initialization whether we will use SVQ, let's allocate the iova_tree only if needed. To do so, accept it at SVQ start, not at initialization.

This avoids creating it if the device does not support SVQ.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Message-Id: <20221215113144.322011-5-eperezma@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

3cfb4d06 | 15-Dec-2022 | Eugenio Pérez <eperezma@redhat.com>

vhost: allocate SVQ device file descriptors at device start

The next patches will start control SVQ if possible. However, we no longer know at qemu boot whether that will be possible.

Delay creating the device file descriptors until we know it at device start. This avoids creating them if the device does not support SVQ.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Message-Id: <20221215113144.322011-4-eperezma@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

712c1a31 | 15-Dec-2022 | Eugenio Pérez <eperezma@redhat.com>

vdpa: use v->shadow_vqs_enabled in vhost_vdpa_svqs_start & stop

These functions used to rely on v->shadow_vqs != NULL to know whether they must start SVQ or not.

This is no longer valid, as qemu is going to allocate the SVQ array unconditionally (but will only start the SVQs conditionally).

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Message-Id: <20221215113144.322011-2-eperezma@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

c1a10086 | 15-Dec-2022 | Eugenio Pérez <eperezma@redhat.com>

vdpa: always start CVQ in SVQ mode if possible

Isolate the control virtqueue in its own group, allowing qemu to intercept control commands while letting the data plane run fully passthrough to the guest.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Message-Id: <20221215113144.322011-13-eperezma@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
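
As an illustration of the isolation step described above, here is a sketch against the Linux vhost-vdpa uAPI (not QEMU's implementation; device_fd and cvq_index are assumed inputs): look up the virtqueue group the CVQ belongs to and bind that group to a separate address space, so CVQ buffers can be shadowed while the data virtqueues stay in the passthrough ASID 0.

    #include <sys/ioctl.h>
    #include <linux/vhost.h>  /* VHOST_VDPA_GET_VRING_GROUP, VHOST_VDPA_SET_GROUP_ASID */

    /* Move the control virtqueue's group to ASID 1 so control commands can
     * be intercepted while the data plane keeps using ASID 0. */
    static int vdpa_isolate_cvq(int device_fd, unsigned int cvq_index)
    {
        struct vhost_vring_state group = { .index = cvq_index };

        /* Which virtqueue group does the CVQ belong to? The answer lands in .num. */
        if (ioctl(device_fd, VHOST_VDPA_GET_VRING_GROUP, &group) < 0) {
            return -1;
        }

        /* Bind that group to address space 1; the rest stays on ASID 0. */
        struct vhost_vring_state asid = { .index = group.num, .num = 1 };
        return ioctl(device_fd, VHOST_VDPA_SET_GROUP_ASID, &asid);
    }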

6188d78a | 15-Dec-2022 | Eugenio Pérez <eperezma@redhat.com>

vdpa: add shadow_data to vhost_vdpa

The memory listener that tells the device how to convert GPA to qemu's VA is registered against the CVQ vhost_vdpa. Memory listener translations are always ASID 0; CVQ ones are ASID 1 if supported.

Let's tell the listener whether it needs to register them in the iova tree or not.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Message-Id: <20221215113144.322011-12-eperezma@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

cd831ed5 | 15-Dec-2022 | Eugenio Pérez <eperezma@redhat.com>

vdpa: add asid parameter to vhost_vdpa_dma_map/unmap

So the caller can choose which ASID the mapping is destined for.

There is no need to update the batch functions, as at the moment they will always be called from memory listener updates. Memory listener updates will always update ASID 0, as it's the passthrough ASID.

All vhost devices' ASIDs are 0 at this moment.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Message-Id: <20221215113144.322011-10-eperezma@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
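
A hedged sketch of what an ASID-aware map request looks like at the uAPI level (illustrative only: vdpa_dma_map is a hypothetical helper, not QEMU's vhost_vdpa_dma_map, and device_fd is an assumed open vhost-vdpa descriptor). The point is simply that the caller now supplies the ASID the IOTLB update is destined for.

    #include <stdbool.h>
    #include <stdint.h>
    #include <unistd.h>
    #include <linux/vhost.h>
    #include <linux/vhost_types.h>  /* struct vhost_msg_v2, VHOST_IOTLB_* */

    /* Send one IOTLB update; 'asid' selects the address space the mapping
     * lands in (0 is the passthrough ASID; 1 could hold shadowed CVQ buffers). */
    static int vdpa_dma_map(int device_fd, uint32_t asid, uint64_t iova,
                            uint64_t size, uint64_t uaddr, bool readonly)
    {
        struct vhost_msg_v2 msg = {
            .type = VHOST_IOTLB_MSG_V2,
            .asid = asid,
            .iotlb = {
                .iova  = iova,
                .size  = size,
                .uaddr = uaddr,
                .perm  = readonly ? VHOST_ACCESS_RO : VHOST_ACCESS_RW,
                .type  = VHOST_IOTLB_UPDATE,
            },
        };

        return write(device_fd, &msg, sizeof(msg)) == sizeof(msg) ? 0 : -1;
    }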