/qemu/docs/system/devices/
- igb.rst: igb is a family of Intel's gigabit ethernet controllers; in QEMU, the 82576 is the model provided. Covers testing with an operating system other than DPDK, Linux, and Windows, notes that these devices are very similar (a change made for igb may apply to the related devices as well), and reminds contributors to run the included tests before submitting a change.
- ivshmem.rst: On Linux hosts, a shared memory device is available. In the basic syntax, hostmem names a host memory backend (for POSIX shared memory). Interrupt support requires a shared memory server and a chardev socket to connect to it; when using the server, each guest is assigned a VM ID (>= 0), which guests can read from a device register. An alternative to specifying the shared memory size via POSIX shm is also described.
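A minimal invocation sketch for the shared memory device described in ivshmem.rst, assuming a POSIX shared memory file on the host; the backend id (hostmem), path, and 4M size are illustrative:

```shell
# Create a host memory backend and attach the ivshmem-plain device to it.
# share=on is required so the guest and other peers see the same memory.
qemu-system-x86_64 \
    -object memory-backend-file,id=hostmem,size=4M,share=on,mem-path=/dev/shm/ivshmem \
    -device ivshmem-plain,memdev=hostmem
```

Interrupt support additionally needs the ivshmem server and a chardev socket, as the entry notes.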
- keyboard.rst: Keyboard layouts can be selected by DIP switch values or, in some cases, as a language string; several example forms all select a Swedish keyboard layout. Not all DIP switch values have a corresponding language code.
- net.rst: QEMU can emulate several network cards (per target) and connect them to a network backend on the host or an emulated hub. A TAP interface is the standard way to connect QEMU to a real network: QEMU adds a virtual NIC and the guest configures it as if it were a real ethernet card. A virtual ethernet driver exists for Windows 2000/XP systems. By default (when no option is specified), QEMU uses a completely user-mode network stack; the VM then behaves as if it were behind a firewall that blocks all incoming connections, and a DHCP client can configure guest networking automatically. QEMU can also simulate several hubs.
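The TAP backend described in net.rst might be wired up like this (a sketch; the e1000 NIC model and netdev id are illustrative, and creating the TAP interface usually requires root or a pre-configured tap device):

```shell
# Connect the guest NIC to a host TAP interface, making the guest
# reachable from the real network.
qemu-system-x86_64 \
    -netdev tap,id=net0 \
    -device e1000,netdev=net0
```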
- nvme.rst: The emulated NVMe controller has a number of parameters available. Namespaces may be attached to only a single controller at a time, or shared between controllers. Covers the limit on source ranges in a Copy command (a 0's based value), and SR-IOV secondary controllers, whose flexible resources can be assigned and then managed from the guest with nvme virt-mgmt, e.g. "nvme virt-mgmt /dev/nvme0 -c 1 -r 1 -a 8 -n 1".
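A basic sketch of the emulated NVMe controller with a single namespace, assuming a pre-created raw image; the serial string and image name are illustrative:

```shell
# Back the namespace with a raw image and wire it to the emulated
# NVMe controller; the serial parameter is mandatory.
qemu-system-x86_64 \
    -drive file=nvm.img,if=none,id=nvm \
    -device nvme,serial=deadbeef,drive=nvm
```

The many other parameters the entry mentions (shared namespaces, SR-IOV) build on this same controller/namespace pattern.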
- usb-u2f.rst: U2F keys, for services exposed to the internet, offer a strong second factor option for end users. The second factor is provided by a device implementing the U2F protocol; a USB U2F security key is a USB HID device. QEMU supports both pass-through of a host U2F key device to a VM (u2f-passthru, which in addition allows sharing a single key between guests, and requires a working libudev) and a completely software emulated U2F device (u2f-emulated).
- usb.rst: QEMU can emulate a PCI UHCI, OHCI, EHCI or XHCI USB controller, though in practice only a few combinations are useful. Covers the easiest way to add a UHCI controller to a pc machine, attaching a USB tablet, hub, and mass storage device (including one backed by a raw format disk image) to specific UHCI ports, a ready-made config file in docs, and using host USB devices on a Linux host.
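The controller-plus-device pattern from usb.rst might look like this on a pc machine (a sketch; the stick image path is illustrative):

```shell
# -usb enables the machine's UHCI controller; then plug in a tablet
# and a mass storage device backed by a raw disk image.
qemu-system-x86_64 -M pc -usb \
    -device usb-tablet \
    -drive if=none,id=stick,format=raw,file=/tmp/stick.img \
    -device usb-storage,drive=stick
```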
- vhost-user-input.rst: The Virtio input device is a paravirtualized device for input events. The vhost-user-input device implementation was designed to work with a daemon, and QEMU provides a backend implementation in contrib/vhost-user-input. Virtio input requires a guest Linux kernel built with the necessary support, and the QEMU invocation needs to create a chardev socket to communicate with the daemon.
- vhost-user-rng.rst: The vhost-user-rng device implementation was designed to work with a random number generator daemon. The QEMU invocation needs to create a chardev socket the device can use to communicate, as well as share the guest's memory over a memfd.
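The chardev-plus-memfd wiring described in vhost-user-rng.rst might be sketched as follows, assuming a daemon is already listening on the socket path shown; the socket path, id names, and 512M size are illustrative:

```shell
# Share guest memory over a memfd (required for vhost-user) and
# connect the device to the daemon's socket.
qemu-system-x86_64 \
    -object memory-backend-memfd,id=mem,size=512M,share=on \
    -numa node,memdev=mem \
    -chardev socket,path=/tmp/rng.sock,id=rng0 \
    -device vhost-user-rng-pci,chardev=rng0
```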
- vhost-user.rst: vhost-user allows device emulation outside of QEMU itself; a number of things are needed to do this. Devices take a chardev option specifying the ID of the --chardev device that connects via a socket to the vhost-user daemon. The daemon is a separate process that QEMU connects to over a socket following the vhost-user protocol; a number of daemons exist, and any daemon that meets the specification for a given device can be used. vhost-user-device is a generic development device. Guest memory is shared through objects referenced by file descriptors.
- virtio-gpu.rst: virtio-gpu requires a guest Linux kernel built with the necessary support. QEMU provides a 2D virtio-gpu backend and two accelerated backends (distinguished by device label), plus a vhost-user backend that runs the graphics stack in a separate process for improved isolation. Some configurations employ a software renderer for 3D graphics. The crosvm book provides directions on how to build a gfxstream-enabled rutabaga and launch a guest Wayland proxy. Surfaceless mode doesn't create a native window surface, but does copy from the render target to the Pixman buffer if a virtio-gpu 2D hypercall is issued.
- virtio-pmem.rst: The virtio pmem device is a paravirtualized persistent memory device. A virtio pmem device backed by a memory-backend-file can be created on the QEMU command line: the backend creates a file with the specified size, and "-device virtio-pmem-pci,id=nvdimm1,memdev=mem1" creates the virtio pmem device. Guests need to perform fsync/msync; this is different from a real nvdimm backend, and applications get a hint to perform fsync for write persistence.
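The backend-plus-device pairing from virtio-pmem.rst might be sketched as follows; the backing path and 4G size are illustrative:

```shell
# The memory-backend-file provides the persistent memory; the
# virtio-pmem-pci device exposes it to the guest.
qemu-system-x86_64 \
    -object memory-backend-file,id=mem1,share=on,mem-path=/tmp/virtio_pmem.img,size=4G \
    -device virtio-pmem-pci,id=nvdimm1,memdev=mem1
```

As the entry notes, the guest must still fsync/msync for write persistence, unlike with a real nvdimm backend.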
- virtio-snd.rst: The Virtio sound device is a paravirtualized sound card device; it requires a guest Linux kernel built with the necessary support. It implements capture and playback from inside a guest. One configuration is supported: the first stream will always be playback, and an optional second will always be capture. To specifically add virtualized sound devices, you have to specify a PCI device.
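Specifying the PCI sound device, as virtio-snd.rst describes, might look like this (a sketch assuming an ALSA host backend; the audiodev id is illustrative):

```shell
# Pair the virtio sound PCI device with a host audio backend.
qemu-system-x86_64 \
    -audiodev alsa,id=snd0 \
    -device virtio-sound-pci,audiodev=snd0
```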
/qemu/docs/system/
- gdb.rst: QEMU's gdbstub lets you debug a guest much the way that you might with a low-level debug facility like JTAG. Debugging a kernel inside a system image does present challenges, such as kernel preemption; a gdbserver is exposed via a port to the outside world. The model for parallel flows of execution is a two layer one: QEMU supports multiple "inferiors". If the machine has more than one CPU, QEMU exposes each CPU cluster as a separate inferior, where each CPU within the cluster is a separate "thread". Most QEMU machine types have identical CPUs, so once connected gdb will have a single inferior; a heterogeneous machine might have, say, a cluster with one E51 core and a second cluster with four other cores. Also covers connecting via a chardev created with the appropriate options.
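The usual way to expose the gdbstub described in gdb.rst is sketched below; TCP port 1234 is QEMU's default, and the kernel image name is illustrative:

```shell
# -s opens the gdbstub on TCP port 1234; -S pauses execution at startup
# so gdb can attach before the first instruction runs.
qemu-system-x86_64 -s -S -kernel vmlinux
```

From another terminal, run gdb against the same binary and attach with "target remote :1234".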
- generic-loader.rst: The generic loader can load images into memory and set a CPU's Program Counter. Addresses are treated as hexadecimal when prefixed with '0x'. Setting 'force-raw=on' forces the file to be treated as a raw image; the document also notes the assumptions currently made when a cpu-num is specified.
- guest-loader.rst: Aimed at a particular use case: loading hypervisor guests, as is typically done by a boot-loader like grub.
- images.rst: You can create a disk image with the qemu-img command, giving the size with a G suffix for gigabytes. In snapshot mode the disk images are read-only; when sectors are written, they go to a temporary file and can be committed back to the disk images with the "commit" monitor command (or C-a s on the serial console). Use the monitor command "savevm" to create a new VM snapshot, "loadvm" to restore one, and "delvm" to remove one. A VM snapshot is made of VM state info (its size is shown in "info snapshots") and a snapshot of every writable disk image; the size of a snapshot in a disk image is difficult to evaluate.
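The image-creation and snapshot-mode steps from images.rst can be sketched as (the image name and 10G size are illustrative):

```shell
# Create a 10 gigabyte qcow2 image; the G suffix selects gigabytes.
qemu-img create -f qcow2 disk.qcow2 10G

# Run in snapshot mode: writes go to a temporary file; use the
# monitor's "commit" command (or C-a s on the serial console) to
# write them back to the image.
qemu-system-x86_64 -snapshot -hda disk.qcow2
```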
- introduction.rst: QEMU's system emulation provides a virtual model of a machine (CPU, memory and emulated devices) to run a guest OS. It supports a number of hypervisors (known as accelerators) as well as a JIT known as the Tiny Code Generator (TCG). System emulation provides a wide range of device models, and there is a full featured block layer. QEMU provides a number of management interfaces, including a line-based monitor; the QEMU Monitor Protocol (QMP) is a well-defined interface for managing state. For a non-x86 system, a broad range of machine types is emulated, and a number of projects are dedicated to providing a more user friendly experience. Configuration covers, among other things, how a block device is stored and how network devices see the network.
- invocation.rst: disk_image is a raw hard disk image for IDE hard disk 0; some targets do not need a disk image. When option values contain commas, such as in "file=my,file" and "string=a,b", it's necessary to double the commas: for instance, "-fw_cfg name=z,string=a,,b" will be parsed as "-fw_cfg name=z,string=a,b".
/qemu/docs/system/i386/
- amd-memory-encryption.rst: Secure Encrypted Virtualization (SEV) is a feature found on AMD processors. Each encrypted VM is associated with a unique encryption key; if its data is accessed by a different entity using a different key, it does not appear in its unencrypted form. Firmware inside the AMD Secure Processor (AMD-SP) provides commands to support a common VM lifecycle, and there is an architectural means for the hypervisor to perform functions on behalf of a guest. The firmware can measure images and provide a measurement that can be used as an attestation of the guest; a policy carries several flags that restrict what can be done on a running SEV guest. For a SEV-ES guest, the measurement (a signature) also covers the encrypted VMSAs. Includes the command lines used to launch SEV and SEV-ES guests.
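A launch sketch for the SEV guest case described in amd-memory-encryption.rst; the cbitpos and reduced-phys-bits values depend on the host CPU, so the numbers below are illustrative:

```shell
# The sev-guest object carries the SEV launch parameters; the machine
# is pointed at it via confidential-guest-support.
qemu-system-x86_64 \
    -machine q35,confidential-guest-support=sev0 \
    -object sev-guest,id=sev0,cbitpos=47,reduced-phys-bits=1
```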
- hyperv.rst: In some cases, when implementing a hardware interface in software is slow, KVM implements Hyper-V enlightenments to make Windows and Hyper-V guests think they're running on top of a Hyper-V compatible hypervisor. Individual features include: telling the guest OS to disable watchdog timeouts (as it is running virtualized), stable Virtual Processor indices (needed e.g. when a VP index must be passed in a hypercall), the Hyper-V Synthetic Interrupt Controller (an extension of a local APIC that delivers SynIC messages and events to the guest, and a prerequisite for further features), an assurance that virtual processors will never share a physical core, and the Hyper-V synthetic debugger interface, a special interface used by Windows.
- kvm-pv.rst: KVM paravirtualized features, e.g. exposing a KVM-specific paravirtualized clocksource to the guest. Notes the version since which each feature is supported, and that some features can make the number of supported vCPUs for a given configuration lower.
- microvm.rst: microvm is a machine type inspired by Firecracker: a minimalist machine type without PCI nor ACPI support, designed for short-lived guests. As no current firmware is able to boot from a block device in this configuration, guests boot using a host-side kernel and, optionally, an initrd image. Includes an example VM created with both legacy and non-legacy devices, an example VM with all optional legacy features, and how to trigger a guest-initiated shut down (the machine type includes just a small set of system devices).
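A host-side kernel boot of a microvm guest, as described in microvm.rst, might be sketched as (the kernel, initrd, and command line are illustrative):

```shell
# microvm has no PCI or ACPI; boot directly from a host-side kernel
# and an optional initrd, with console on stdio.
qemu-system-x86_64 -M microvm \
    -m 512m -smp 2 \
    -kernel vmlinux -append "console=ttyS0 root=/dev/vda" \
    -initrd initrd.img \
    -nodefaults -nographic -serial stdio
```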
- sgx.rst: Intel Software Guard eXtensions (SGX) is a set of instructions and mechanisms. By default, QEMU does not assign EPC to a VM, i.e. fully enabling SGX in a VM requires explicit configuration; like other special memory types (e.g. hugetlbfs), EPC is exposed as a memory backend. QEMU does not artificially restrict the number of EPC sections exposed to a guest. On hardware, the size of the physical EPC must be a power of two (though software sees a subset of the full EPC, e.g. 92M or 128M) and the EPC must be naturally aligned; KVM SGX's virtual EPC is purely a software construct. Shows how to launch an SGX guest; utilizing SGX in the guest requires a kernel/OS with SGX support.
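Exposing EPC as a memory backend, as sgx.rst describes, might look like this (the 64M size is illustrative):

```shell
# EPC is exposed like any other memory backend and attached to the
# machine as an sgx-epc section.
qemu-system-x86_64 \
    -cpu host \
    -object memory-backend-epc,id=mem1,size=64M,prealloc=on \
    -M sgx-epc.0.memdev=mem1,sgx-epc.0.node=0
```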
- xen.rst: Xen guest support. Covers how the Xen identification is advertised to a Xen guest (with notes for when Hyper-V is also enabled); Xen grant tables, the means by which a Xen guest grants access to its memory; and the Xen PCI platform device, which is enabled automatically for a Xen guest and allows a guest to unplug all emulated devices in order to use paravirtual ones. To provide a Xen console device, define a character device and then a device that uses it. Network devices created by QEMU should automatically work and present a Xen network device to the guest, and disks can be presented as PV block devices (guest bootloaders typically use IDE to load the guest kernel). Also covers the Xen PV shim, a build of Xen itself designed to run inside a Xen HVM guest and provide memory management; the shim guest must have a Xen console or it will panic. The example provides the guest kernel command line after a separator.