1.\" Copyright (c) 2018 Vincenzo Maffione
2.\" All rights reserved.
3.\"
4.\" Redistribution and use in source and binary forms, with or without
5.\" modification, are permitted provided that the following conditions
6.\" are met:
7.\" 1. Redistributions of source code must retain the above copyright
8.\"    notice, this list of conditions and the following disclaimer.
9.\" 2. Redistributions in binary form must reproduce the above copyright
10.\"    notice, this list of conditions and the following disclaimer in the
11.\"    documentation and/or other materials provided with the distribution.
12.\"
13.\" THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
14.\" ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
15.\" IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
16.\" ARE DISCLAIMED.  IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
17.\" FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
18.\" DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
19.\" OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
20.\" HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
21.\" LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
22.\" OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
23.\" SUCH DAMAGE.
24.\"
25.\" $FreeBSD$
26.\"
.Dd December 11, 2018
.Dt PTNET 4
.Os
.Sh NAME
.Nm ptnet
.Nd Ethernet driver for passed-through netmap ports
.Sh SYNOPSIS
This network driver is included in
.Xr netmap 4 ,
and it can be compiled into the kernel by adding the following
line to your kernel configuration file:
.Bd -ragged -offset indent
.Cd "device netmap"
.Ed
.Sh DESCRIPTION
The
.Nm
device driver provides direct access to host netmap ports,
from within a Virtual Machine (VM).
Applications running inside
the VM can access the TX/RX rings and buffers of a netmap port
that the hypervisor has passed through to the VM.
Hypervisor support for
.Nm
is currently available for QEMU/KVM.
Any
.Xr netmap 4
port can be passed through, including physical NICs,
.Xr vale 4
ports, netmap pipes, etc.
.Pp
The main use-case for netmap passthrough is Network Function
Virtualization (NFV), where middlebox applications running within
VMs may want to process very high packet rates (e.g., 1-10 million
packets per second or more).
Note, however, that those applications
must use the device in netmap mode in order to achieve such rates;
an example is sketched below.
In addition to the general advantages of netmap, the improved
performance of
.Nm
when compared to hypervisor device emulation or paravirtualization (e.g.,
.Xr vtnet 4 ,
.Xr vmx 4 )
comes from the hypervisor being completely bypassed in the data-path.
For example, when using
.Xr vtnet 4
the VM has to convert each
.Xr mbuf 9
to a VirtIO-specific packet representation
and publish that to a VirtIO queue; on the hypervisor side, the
packet is extracted from the VirtIO queue and converted to a
hypervisor-specific packet representation.
The overhead of format conversions (and packet copies, in some cases) is not
incurred by
.Nm
in netmap mode, because mbufs are not used at all, and the packet format
is the one defined by netmap (e.g.,
.Vt struct netmap_slot )
along the whole data-path.
No format conversions or copies happen.
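.Pp
As an illustration, a guest application can open the device in netmap
mode using the API described in
.Xr netmap 4 .
The following sketch receives packets this way; the interface name
.Dq Li ptnet0
is only an example and may differ on your system:
.Bd -literal -offset indent
#define NETMAP_WITH_LIBS
#include <net/netmap_user.h>

#include <poll.h>
#include <stdio.h>

int
main(void)
{
	struct nm_desc *d;
	struct nm_pkthdr h;
	struct pollfd pfd;

	/* Open the passed-through port in netmap mode. */
	d = nm_open("netmap:ptnet0", NULL, 0, NULL);
	if (d == NULL)
		return (1);
	pfd.fd = NETMAP_FD(d);
	pfd.events = POLLIN;
	for (;;) {
		/* Block until the RX rings have packets. */
		poll(&pfd, 1, -1);
		/* Consume all the packets currently available. */
		while (nm_nextpkt(d, &h) != NULL)
			printf("received %u bytes\en", h.len);
	}
	nm_close(d);
	return (0);
}
.Ed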
.Pp
It is also possible to use a
.Nm
device like a regular network interface, which interacts with the
.Fx
network stack (i.e., not in netmap mode).
However, in that case it is necessary to pay the cost of data copies
between mbufs and netmap buffers, which generally results in lower
TCP/UDP performance than
.Xr vtnet 4
or other paravirtualized network devices.
If the passed-through netmap port supports the VirtIO network header,
.Nm
is able to use it to support TCP/UDP checksum offload (for both transmit
and receive), TCP segmentation offload (TSO), and TCP large receive offload
(LRO).
Currently,
.Xr vale 4
ports support the header.
Note that the VirtIO network header is generally not used in NFV
use-cases, because middleboxes are not endpoints of TCP/UDP connections.
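.Pp
For regular (non-netmap) use, the interface can be configured like any
other Ethernet device with
.Xr ifconfig 8 ,
assuming here that it is attached as
.Dq Li ptnet0 :
.Bd -literal -offset indent
# ifconfig ptnet0 inet 192.168.1.1/24 up
.Ed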
108.Sh TUNABLES
109Tunables can be set at the
110.Xr loader 8
111prompt before booting the kernel or stored in
112.Xr loader.conf 5 .
113.Bl -tag -width "xxxxxx"
114.It Va dev.netmap.ptnet_vnet_hdr
115This tunable enables (1) or disables (0) the VirtIO network header.
116If enabled,
117.Nm
118uses the same header used by
119.Xr vtnet 4
120to exchange offload metadata with the hypervisor.
121If disabled, no header is prepended to transmitted and received
122packets.
123The metadata is necessary to support TCP/UDP checksum offloads,
124TSO, and LRO.
125The default value is 1.
126.El
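.Pp
For example, the VirtIO network header can be disabled by adding the
following line to
.Xr loader.conf 5 :
.Bd -literal -offset indent
dev.netmap.ptnet_vnet_hdr="0"
.Ed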
.Sh SEE ALSO
.Xr netintro 4 ,
.Xr netmap 4 ,
.Xr vale 4 ,
.Xr virtio 4 ,
.Xr vmx 4 ,
.Xr ifconfig 8
.Sh HISTORY
The
.Nm
driver was written by
.An Vincenzo Maffione Aq Mt vmaffione@FreeBSD.org .
It first appeared in
.Fx 12.0 .