1.\" Copyright (c) 2018 Vincenzo Maffione
2.\" All rights reserved.
3.\"
4.\" Redistribution and use in source and binary forms, with or without
5.\" modification, are permitted provided that the following conditions
6.\" are met:
7.\" 1. Redistributions of source code must retain the above copyright
8.\"    notice, this list of conditions and the following disclaimer.
9.\" 2. Redistributions in binary form must reproduce the above copyright
10.\"    notice, this list of conditions and the following disclaimer in the
11.\"    documentation and/or other materials provided with the distribution.
12.\"
13.\" THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
14.\" ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
15.\" IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
16.\" ARE DISCLAIMED.  IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
17.\" FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
18.\" DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
19.\" OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
20.\" HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
21.\" LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
22.\" OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
23.\" SUCH DAMAGE.
24.\"
.Dd December 11, 2018
.Dt PTNET 4
.Os
.Sh NAME
.Nm ptnet
.Nd Ethernet driver for passed-through netmap ports
.Sh SYNOPSIS
This network driver is included in
.Xr netmap 4 ,
and it can be compiled into the kernel by adding the following
line to your kernel configuration file:
.Bd -ragged -offset indent
.Cd "device netmap"
.Ed
.Sh DESCRIPTION
The
.Nm
device driver provides direct access to host netmap ports
from within a Virtual Machine (VM).
Applications running inside
the VM can access the TX/RX rings and buffers of a netmap port
that the hypervisor has passed through to the VM.
Hypervisor support for
.Nm
is currently available for QEMU/KVM.
Any
.Xr netmap 4
port can be passed through, including physical NICs,
.Xr vale 4
ports, netmap pipes, etc.
.Pp
The main use case for netmap passthrough is Network Function
Virtualization (NFV), where middlebox applications running within
VMs may need to process very high packet rates (e.g., 1-10 million
packets per second or more).
Note, however, that those applications
must use the device in netmap mode in order to achieve such rates.
In addition to the general advantages of netmap, the improved
performance of
.Nm
when compared to hypervisor device emulation or paravirtualization (e.g.,
.Xr vtnet 4 ,
.Xr vmx 4 )
comes from the hypervisor being completely bypassed in the data path.
For example, when using
.Xr vtnet 4
the VM has to convert each
.Xr mbuf 9
to a VirtIO-specific packet representation
and publish that to a VirtIO queue; on the hypervisor side, the
packet is extracted from the VirtIO queue and converted to a
hypervisor-specific packet representation.
The overhead of format conversions (and packet copies, in some cases) is not
incurred by
.Nm
in netmap mode, because mbufs are not used at all and the packet format
is the one defined by netmap (e.g.,
.Vt struct netmap_slot )
along the whole data path.
No format conversions or copies happen.
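.Pp
For example, an application running inside the VM can open the
passed-through port in netmap mode through the API described in
.Xr netmap 4 .
The following fragment is only an illustrative sketch, not part of the
driver; it assumes that the interface is named
.Dq Li ptnet0 :
.Bd -literal -offset indent
/*
 * Minimal receive loop over a ptnet interface in netmap mode.
 * The interface name "ptnet0" is an assumption used for
 * illustration purposes.
 */
#define NETMAP_WITH_LIBS
#include <net/netmap_user.h>
#include <poll.h>
#include <stdio.h>

int
main(void)
{
	struct nm_desc *d;
	struct nm_pkthdr h;
	struct pollfd pfd;
	unsigned char *buf;

	/* Bind to all rings of the passed-through port. */
	d = nm_open("netmap:ptnet0", NULL, 0, NULL);
	if (d == NULL)
		return (1);

	pfd.fd = NETMAP_FD(d);
	pfd.events = POLLIN;

	for (;;) {
		/* Wait for the host to publish new packets. */
		poll(&pfd, 1, -1);
		/* Walk the RX rings; buffers are accessed in place. */
		while ((buf = nm_nextpkt(d, &h)) != NULL)
			printf("received %u bytes at %p\en",
			    h.len, (void *)buf);
	}
	/* NOTREACHED */
	nm_close(d);
	return (0);
}
.Ed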
.Pp
It is also possible to use a
.Nm
device like a regular network interface, which interacts with the
.Fx
network stack (i.e., not in netmap mode).
However, in that case it is necessary to pay the cost of data copies
between mbufs and netmap buffers, which generally results in lower
TCP/UDP performance than
.Xr vtnet 4
or other paravirtualized network devices.
If the passed-through netmap port supports the VirtIO network header,
.Nm
is able to use it to support TCP/UDP checksum offload (for both transmit
and receive), TCP segmentation offload (TSO), and large receive offload
(LRO).
Currently,
.Xr vale 4
ports support the header.
Note that the VirtIO network header is generally not used in NFV
use cases, because middleboxes are not endpoints of TCP/UDP connections.
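.Pp
When not used in netmap mode, a
.Nm
interface is configured like any other Ethernet interface.
The following
.Xr ifconfig 8
invocation is only an example; it assumes that the interface is named
.Dq Li ptnet0
and uses a placeholder address:
.Bd -literal -offset indent
# ifconfig ptnet0 inet 192.0.2.1/24 up
.Ed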
.Sh TUNABLES
Tunables can be set at the
.Xr loader 8
prompt before booting the kernel or stored in
.Xr loader.conf 5 .
.Bl -tag -width "xxxxxx"
.It Va dev.netmap.ptnet_vnet_hdr
This tunable enables (1) or disables (0) the VirtIO network header.
If enabled,
.Nm
uses the same header used by
.Xr vtnet 4
to exchange offload metadata with the hypervisor.
If disabled, no header is prepended to transmitted and received
packets.
The metadata is necessary to support TCP/UDP checksum offloads,
TSO, and LRO.
The default value is 1.
.El
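.Pp
For example, the VirtIO network header can be disabled by adding the
following line to
.Xr loader.conf 5 :
.Bd -literal -offset indent
dev.netmap.ptnet_vnet_hdr="0"
.Ed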
.Sh SEE ALSO
.Xr netintro 4 ,
.Xr netmap 4 ,
.Xr vale 4 ,
.Xr virtio 4 ,
.Xr vmx 4 ,
.Xr ifconfig 8
.Sh HISTORY
The
.Nm
driver was written by
.An Vincenzo Maffione Aq Mt vmaffione@FreeBSD.org .
It first appeared in
.Fx 12.0 .