1.\" $OpenBSD: gre.4,v 1.81 2021/01/08 21:26:34 kn Exp $
2.\" $NetBSD: gre.4,v 1.10 1999/12/22 14:55:49 kleink Exp $
3.\"
4.\" Copyright 1998 (c) The NetBSD Foundation, Inc.
5.\" All rights reserved.
6.\"
7.\" This code is derived from software contributed to The NetBSD Foundation
8.\" by Heiko W. Rupp <hwr@pilhuhn.de>
9.\"
10.\" Redistribution and use in source and binary forms, with or without
11.\" modification, are permitted provided that the following conditions
12.\" are met:
13.\" 1. Redistributions of source code must retain the above copyright
14.\"    notice, this list of conditions and the following disclaimer.
15.\" 2. Redistributions in binary form must reproduce the above copyright
16.\"    notice, this list of conditions and the following disclaimer in the
17.\"    documentation and/or other materials provided with the distribution.
18.\"
19.\" THIS SOFTWARE IS PROVIDED BY THE NETBSD FOUNDATION, INC. AND CONTRIBUTORS
20.\" ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
21.\" TO, THE  IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
22.\" PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE FOUNDATION OR CONTRIBUTORS
23.\" BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
24.\" CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
25.\" SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
26.\" INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
27.\" CONTRACT, STRICT  LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
28.\" ARISING IN ANY WAY  OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
29.\" POSSIBILITY OF SUCH DAMAGE.
30.\"
31.Dd $Mdocdate: January 8 2021 $
32.Dt GRE 4
33.Os
34.Sh NAME
35.Nm gre ,
36.Nm mgre ,
37.Nm egre ,
38.Nm nvgre
39.Nd Generic Routing Encapsulation network device
40.Sh SYNOPSIS
41.Cd "pseudo-device gre"
42.Sh DESCRIPTION
43The
44.Nm gre
45pseudo-device provides interfaces for tunnelling protocols across
46IPv4 and IPv6 networks using the Generic Routing Encapsulation (GRE)
47encapsulation protocol.
48.Pp
49GRE datagrams (IP protocol number 47) consist of a GRE header
50and an outer IP header for encapsulating another protocol's datagram.
51The GRE header specifies the type of the encapsulated datagram,
52allowing for the tunnelling of multiple protocols.
53.Pp
54Different tunnels between the same endpoints may be distinguished
55by an optional Key field in the GRE header.
56The Key field may be partitioned to carry flow information about the
57encapsulated traffic to allow better use of multipath links.
58.Pp
This pseudo-device provides the following clonable network interfaces:
.Bl -tag -width nvgreX
.It Nm gre
Point-to-point Layer 3 tunnel interfaces.
.It Nm mgre
Point-to-multipoint Layer 3 tunnel interfaces.
.It Nm egre
Point-to-point Ethernet tunnel interfaces.
.It Nm nvgre
Network Virtualization using Generic Routing Encapsulation
(NVGRE) overlay Ethernet network interfaces.
.It Nm eoip
MikroTik Ethernet over IP tunnel interfaces.
.El
.Pp
See
.Xr eoip 4
for information regarding MikroTik Ethernet over IP interfaces.
.Pp
All GRE packet processing in the system is allowed or denied by setting the
.Va net.inet.gre.allow
.Xr sysctl 8
variable.
To allow GRE packet processing, set
.Va net.inet.gre.allow
to 1.
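.Pp
For example, to enable GRE processing on a running system:
.Bd -literal -offset indent
# sysctl net.inet.gre.allow=1
.Ed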
.Pp
.Nm gre ,
.Nm mgre ,
.Nm egre ,
and
.Nm nvgre
interfaces can be created at runtime using the
.Ic ifconfig iface Ns Ar N Ic create
command or by setting up a
.Xr hostname.if 5
configuration file for
.Xr netstart 8 .
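.Pp
For example, a minimal
.Pa /etc/hostname.gre0
might look like this
(the addresses are placeholders only):
.Bd -literal -offset indent
tunnel 192.0.2.1 203.0.113.2
inet 10.0.0.1 255.255.255.252 10.0.0.2
up
.Ed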
.Pp
For correct operation, encapsulated traffic must not be routed
over the tunnel interface itself.
This can be arranged by adding a route to the tunnel destination
that is distinct from, or more specific than, the routes for the
hosts or networks reached via the tunnel interface.
Alternatively, the tunnel traffic may be configured in a routing
table separate from that of the encapsulated traffic.
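.Pp
For example, if the tunnel destination 203.0.113.2 is normally
reachable via the gateway 192.0.2.254, a host route keeps the
encapsulating traffic off the tunnel interface
(the addresses are placeholders only):
.Bd -literal -offset indent
# route add -host 203.0.113.2 192.0.2.254
.Ed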
.Ss Point-to-Point Layer 3 GRE tunnel interfaces (gre)
A
.Nm gre
tunnel can encapsulate IPv4, IPv6, and MPLS packets.
The MTU is set to 1476 by default to match the value used by Cisco
routers.
.Pp
.Nm gre
supports sending keepalive packets to the remote endpoint,
which allows tunnel failure to be detected.
To return keepalives, the remote host must be configured to forward
IP packets received from inside the tunnel back to the address of
the local tunnel endpoint.
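.Pp
For example, to probe the remote endpoint every 10 seconds and
consider the tunnel down after 3 missed replies
(the values are placeholders only):
.Bd -literal -offset indent
# ifconfig gre0 keepalive 10 3
.Ed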
.Pp
.Nm gre
interfaces may be configured to receive IPv4 packets in
Web Cache Communication Protocol (WCCP)
encapsulation by setting the
.Cm link0
flag on the interface.
WCCP reception may be enabled globally by setting the
.Va net.inet.gre.wccp
sysctl value to 1.
Some magic with the packet filter configuration
and a caching proxy like squid are needed
to do anything useful with these packets.
This sysctl requires
.Va net.inet.gre.allow
to also be set.
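.Pp
A minimal configuration for receiving WCCP traffic might look like
this, with A as the local address and D as the WCCP router
(both placeholders):
.Bd -literal -offset indent
# sysctl net.inet.gre.allow=1
# sysctl net.inet.gre.wccp=1
# ifconfig gre0 create
# ifconfig gre0 tunnel A D link0 up
.Ed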
.Ss Point-to-Multipoint Layer 3 GRE tunnel interfaces (mgre)
.Nm mgre
interfaces can encapsulate IPv4, IPv6, and MPLS packets.
Unlike a point-to-point interface,
.Nm mgre
interfaces are configured with an address on an IP subnet.
Peers on that subnet are mapped to the addresses of multiple tunnel
endpoints.
.Pp
The MTU is set to 1476 by default to match the value used by Cisco
routers.
.Ss Point-to-Point Ethernet over GRE tunnel interfaces (egre)
An
.Nm egre
tunnel interface carries Ethernet over GRE (EoGRE).
Ethernet traffic is encapsulated using Transparent Ethernet (0x6558)
as the protocol identifier in the GRE header, as per RFC 1701.
The MTU is set to 1500 by default.
.Ss Network Virtualization using GRE interfaces (nvgre)
.Nm nvgre
interfaces allow construction of virtual overlay Ethernet networks
on top of an IPv4 or IPv6 underlay network as per RFC 7637.
Ethernet traffic is encapsulated using Transparent Ethernet (0x6558)
as the protocol identifier in the GRE header, a 24-bit Virtual
Subnet ID (VSID), and an 8-bit FlowID.
.Pp
By default the MTU of an
.Nm nvgre
interface is set to 1500, and the Don't Fragment flag is set.
The MTU on the network interfaces carrying underlay network traffic
must be raised to accommodate this and the overhead of the NVGRE
encapsulation, or the
.Nm nvgre
interface must be reconfigured for less capable underlays.
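.Pp
For example, the MTU of the parent interface carrying the underlay
traffic can be raised to leave room for the encapsulation overhead,
or the
.Nm nvgre
MTU can be lowered and fragmentation permitted for a standard
1500-byte underlay
(the interface names and values are placeholders only):
.Bd -literal -offset indent
# ifconfig em0 mtu 1600
# ifconfig nvgre0 mtu 1450 -tunneldf
.Ed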
.Pp
The underlay network parameters on an
.Nm nvgre
interface are a unicast tunnel source address,
a multicast tunnel destination address,
and a parent network interface.
The unicast source address is used as the NVE Provider Address (PA)
on the underlay network.
The parent interface is used to identify which interface the multicast
group should be joined to.
.Pp
The multicast group is used to transport broadcast and multicast
traffic from the overlay to other participating NVGRE endpoints.
It is also used to flood unicast traffic to Ethernet addresses in
the overlay with an unknown association to an NVGRE endpoint.
Traffic received from other NVGRE endpoints,
either to the Provider Address or via the multicast group,
is used to learn associations between Ethernet addresses in the
overlay network and the Provider Addresses of NVGRE endpoints in
the underlay.
.Ss Programming Interface
.Nm gre ,
.Nm mgre ,
.Nm egre ,
and
.Nm nvgre
interfaces support the following
.Xr ioctl 2
calls for configuring tunnel options:
.Bl -tag -width indent -offset 3n
.It Dv SIOCSLIFPHYADDR Fa "struct if_laddrreq *"
Set the IPv4 or IPv6 addresses for the encapsulating IP packets.
The addresses may only be configured while the interface is down.
.Pp
.Nm gre
and
.Nm egre
interfaces support configuration of unicast IP addresses as the
tunnel endpoints.
.Pp
.Nm mgre
interfaces support configuration of a unicast local IP address,
and require an
.Dv AF_UNSPEC
destination address.
.Pp
.Nm nvgre
interfaces support configuration of a unicast IP address as the
local endpoint and a multicast group address as the destination
address.
.It Dv SIOCGLIFPHYADDR Fa "struct if_laddrreq *"
Get the addresses used for the encapsulating IP packets.
.It Dv SIOCDIFPHYADDR Fa "struct ifreq *"
Clear the addresses used for the encapsulating IP packets.
The addresses may only be cleared while the interface is down.
.It Dv SIOCSVNETID Fa "struct ifreq *"
Configure a virtual network identifier for use in the GRE Key header.
The virtual network identifier may only be configured while the
interface is down.
.Pp
.Nm gre ,
.Nm mgre ,
and
.Nm egre
interfaces configured with a virtual network identifier will enable
the use of the GRE Key header.
The Key is a 32-bit value by default, or a 24-bit value when the
virtual network flow identifier is enabled.
.Pp
.Nm nvgre
interfaces use the virtual network identifier in the 24-bit
Virtual Subnet Identifier (VSID),
also known as the
Tenant Network Identifier (TNI),
field of the GRE Key header.
.It Dv SIOCGVNETID Fa "struct ifreq *"
Get the virtual network identifier used in the GRE Key header.
.It Dv SIOCDVNETID Fa "struct ifreq *"
Disable the use of the virtual network identifier.
The virtual network identifier may only be disabled while the interface
is down.
.Pp
When the virtual network identifier is disabled on
.Nm gre ,
.Nm mgre ,
and
.Nm egre
interfaces, it disables the use of the GRE Key header.
.Pp
.Nm nvgre
interfaces do not support this ioctl as a
Virtual Subnet Identifier
is required by the protocol.
.It Dv SIOCSLIFPHYRTABLE Fa "struct ifreq *"
Set the routing table the tunnel traffic operates in.
The routing table may only be configured while the interface is down.
.It Dv SIOCGLIFPHYRTABLE Fa "struct ifreq *"
Get the routing table the tunnel traffic operates in.
.It Dv SIOCSLIFPHYTTL Fa "struct ifreq *"
Set the Time-To-Live field in IPv4 encapsulation headers, or the
Hop Limit field in IPv6 encapsulation headers.
.Pp
.Nm gre
and
.Nm mgre
interfaces configured with a TTL of -1 will copy the TTL in and out
of the encapsulated protocol headers.
.It Dv SIOCGLIFPHYTTL Fa "struct ifreq *"
Get the value used in the Time-To-Live field in an IPv4 encapsulation
header or the Hop Limit field in an IPv6 encapsulation header.
.It Dv SIOCSLIFPHYDF Fa "struct ifreq *"
Configure whether the tunnel traffic sent by the interface can be
fragmented or not.
This sets the Don't Fragment (DF) bit on IPv4 packets,
and disables fragmentation of IPv6 packets.
.It Dv SIOCGLIFPHYDF Fa "struct ifreq *"
Get whether the tunnel traffic sent by the interface can be fragmented
or not.
.It Dv SIOCSTXHPRIO Fa "struct ifreq *"
Set the priority value used in the Type of Service field in IPv4
headers, or the Traffic Class field in IPv6 headers.
Values may be from 0 to 7, or
.Dv IF_HDRPRIO_PACKET
to specify that the current priority of a packet should be used.
.Pp
.Nm gre
and
.Nm mgre
interfaces configured with a value of
.Dv IF_HDRPRIO_PAYLOAD
will copy the priority from encapsulated protocol headers.
.It Dv SIOCGTXHPRIO Fa "struct ifreq *"
Get the priority value used in the Type of Service field in IPv4
headers, or the Traffic Class field in IPv6 headers.
.El
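.Pp
These tunnel options are normally set with
.Xr ifconfig 8
rather than by issuing the
.Xr ioctl 2
calls directly.
The following commands roughly correspond to the ioctls above
(the interface name, addresses, and values are placeholders only):
.Bd -literal -offset indent
# ifconfig gre0 tunnel 192.0.2.1 203.0.113.2
# ifconfig gre0 vnetid 100
# ifconfig gre0 tunneldomain 1
# ifconfig gre0 tunnelttl 64
# ifconfig gre0 tunneldf
# ifconfig gre0 txprio 3
.Ed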
.Pp
.Nm gre ,
.Nm mgre ,
and
.Nm egre
interfaces support the following
.Xr ioctl 2
calls:
.Bl -tag -width indent -offset 3n
.It Dv SIOCSVNETFLOWID Fa "struct ifreq *"
Enable or disable the partitioning of the optional GRE Key header
into a 24-bit virtual network identifier and an 8-bit flow
identifier.
.Pp
The interface
must already be configured with a virtual network identifier before
enabling flow identifiers in the GRE Key header.
The configured virtual network identifier must also fit into 24 bits.
.It Dv SIOCGVNETFLOWID Fa "struct ifreq *"
Get the status of the partitioning of the GRE key.
.El
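.Pp
For example, to carve an 8-bit flow identifier out of the key, the
virtual network identifier must first be set to a value that fits
in 24 bits
(the interface name and value are placeholders only):
.Bd -literal -offset indent
# ifconfig gre0 vnetid 100
# ifconfig gre0 vnetflowid
.Ed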
.Pp
.Nm gre
interfaces support the following
.Xr ioctl 2
calls:
.Bl -tag -width indent -offset 3n
.It Dv SIOCSETKALIVE Fa "struct ifkalivereq *"
Enable the transmission of keepalive packets to detect tunnel failure.
.\" Keepalives may only be configured while the interface is down.
.Pp
Setting the keepalive period or count to 0 disables keepalives on
the tunnel.
.It Dv SIOCGETKALIVE Fa "struct ifkalivereq *"
Get the configuration of keepalive packets.
.El
.Pp
.Nm nvgre
interfaces support the following
.Xr ioctl 2
calls:
.Bl -tag -width indent -offset 3n
.It Dv SIOCSIFPARENT Fa "struct if_parent *"
Configure which interface will be joined to the multicast group
specified by the tunnel destination address.
The parent interface may only be configured while the interface is
down.
.It Dv SIOCGIFPARENT Fa "struct if_parent *"
Get the name of the interface used for multicast communication.
.It Dv SIOCDIFPARENT Fa "struct ifreq *"
Remove the configuration of the interface used for multicast
communication.
.\" bridge(4) ioctls should go here too.
.El
.Ss Security Considerations
The GRE protocol in all its flavours does not provide any integrated
security features.
GRE should only be deployed on trusted private networks,
or protected with IPsec to add authentication and encryption for
confidentiality.
IPsec is especially recommended when transporting GRE over the
public internet.
.Pp
The Packet Filter
.Xr pf 4
can be used to filter tunnel traffic with endpoint policies in
.Xr pf.conf 5 .
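.Pp
For example, GRE processing might be restricted to a single pair
of tunnel endpoints with rules like the following
(the interface name and addresses are placeholders only):
.Bd -literal -offset indent
block in on em0 proto gre
pass in on em0 proto gre from 203.0.113.2 to 192.0.2.1
.Ed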
.Pp
The Time-to-Live (TTL) value of a tunnel can be set to 1 or a low
value to restrict the traffic to the local network:
.Bd -literal -offset indent
# ifconfig gre0 tunnelttl 1
.Ed
.Sh EXAMPLES
.Ss Point-to-Point Layer 3 GRE tunnel interfaces (gre) example
.Bd -literal
Host X ---- Host A ------------ tunnel ------------ Cisco D ---- Host E
               \e                                      /
                \e                                    /
                 +------ Host B ------ Host C ------+
.Ed
.Pp
On Host A
.Pq Ox :
.Bd -literal -offset indent
# route add default B
# ifconfig greN create
# ifconfig greN A D netmask 0xffffffff up
# ifconfig greN tunnel A D
# route add E D
.Ed
.Pp
On Host D (Cisco):
.Bd -literal -offset indent
Interface TunnelX
 ip unnumbered D   ! e.g. address from Ethernet interface
 tunnel source D   ! e.g. address from Ethernet interface
 tunnel destination A
ip route C <some interface and mask>
ip route A mask C
ip route X mask tunnelX
.Ed
.Pp
OR
.Pp
On Host D
.Pq Ox :
.Bd -literal -offset indent
# route add default C
# ifconfig greN create
# ifconfig greN D A
# ifconfig greN tunnel D A
.Ed
.Pp
To reach Host A over the tunnel (from Host D), there has to be an
alias on Host A for the Ethernet interface:
.Pp
.Dl # ifconfig <etherif> alias Y
.Pp
and on the Cisco:
.Pp
.Dl ip route Y mask tunnelX
.Pp
.Nm gre
keepalive packets may be enabled with
.Xr ifconfig 8
like this:
.Bd -literal -offset indent
# ifconfig greN keepalive period count
.Ed
.Pp
This will send a keepalive packet every
.Ar period
seconds.
If no response is received in
.Ar count
*
.Ar period
seconds, the link is considered down.
To return keepalives, the remote host must be configured to forward packets:
.Bd -literal -offset indent
# sysctl net.inet.ip.forwarding=1
.Ed
.Pp
If
.Xr pf 4
is enabled then it is necessary to add a pass rule specifically for
the keepalive packets.
The rule must use
.Cm no state
because the keepalive packet enters the network stack multiple times.
In most cases the following should work:
.Bd -literal -offset indent
pass quick on gre proto gre no state
.Ed
.Ss Point-to-Multipoint Layer 3 GRE tunnel interfaces (mgre) example
.Nm mgre
can be used to build a point-to-multipoint tunnel network to several
hosts using a single
.Nm mgre
interface.
.Pp
In this example host A has an outer IP of 198.51.100.12, host
B has 203.0.113.27, and host C has 203.0.113.254.
.Pp
Addressing within the tunnel is done using 192.0.2.0/24:
.Bd -literal
                        +--- Host B
                       /
                      /
Host A --- tunnel ---+
                      \e
                       \e
                        +--- Host C
.Ed
.Pp
On Host A:
.Bd -literal -offset indent
# ifconfig mgreN create
# ifconfig mgreN tunneladdr 198.51.100.12
# ifconfig mgreN inet 192.0.2.1 netmask 0xffffff00 up
.Ed
.Pp
On Host B:
.Bd -literal -offset indent
# ifconfig mgreN create
# ifconfig mgreN tunneladdr 203.0.113.27
# ifconfig mgreN inet 192.0.2.2 netmask 0xffffff00 up
.Ed
.Pp
On Host C:
.Bd -literal -offset indent
# ifconfig mgreN create
# ifconfig mgreN tunneladdr 203.0.113.254
# ifconfig mgreN inet 192.0.2.3 netmask 0xffffff00 up
.Ed
.Pp
To reach Host B over the tunnel (from Host A), there has to be a
route on Host A specifying the next-hop:
.Pp
.Dl # route add -host 192.0.2.2 203.0.113.27 -iface -ifp mgreN
.Pp
Similarly, to reach Host A over the tunnel from Host B, a route must
be present on B with A's outer IP as next-hop:
.Pp
.Dl # route add -host 192.0.2.1 198.51.100.12 -iface -ifp mgreN
.Pp
The same tunnel interface can then be used between host B and C by
adding the appropriate routes, making the network any-to-any instead
of hub-and-spoke:
.Pp
On Host B:
.Dl # route add -host 192.0.2.3 203.0.113.254 -iface -ifp mgreN
.Pp
On Host C:
.Dl # route add -host 192.0.2.2 203.0.113.27 -iface -ifp mgreN
.Ss Point-to-Point Ethernet over GRE tunnel interfaces (egre) example
.Nm egre
can be used to carry Ethernet traffic between two endpoints over
an IP network, including the public internet.
This can also be achieved using
.Xr etherip 4 ,
but
.Nm egre
offers the ability to carry different Ethernet networks between the
same endpoints by using virtual network identifiers to distinguish
between them.
.Pp
For example, a pair of routers separated by the internet could
bridge several Ethernet networks using
.Nm egre
and
.Xr bridge 4 .
.Pp
In this example the first router has a public IP of 192.0.2.1, and
the second router has 203.0.113.2.
They are connecting the Ethernet networks on two
.Xr vlan 4
interfaces over the internet.
A separate
.Nm egre
tunnel is created for each VLAN and given different virtual network
identifiers so the routers can tell which network the encapsulated
traffic is for.
The
.Nm egre
interfaces are explicitly configured to provide the same MTU as the
.Xr vlan 4
interfaces (1500 bytes) with fragmentation enabled so they can be
carried over the internet, which has the same or lower MTU.
.Pp
At the first site:
.Bd -literal -offset indent
# ifconfig vlan0 vnetid 100
# ifconfig egre0 create
# ifconfig egre0 tunnel 192.0.2.1 203.0.113.2
# ifconfig egre0 vnetid 100
# ifconfig egre0 mtu 1500 -tunneldf
# ifconfig egre0 up
# ifconfig bridge0 add vlan0 add egre0 up
# ifconfig vlan1 vnetid 200
# ifconfig egre1 create
# ifconfig egre1 tunnel 192.0.2.1 203.0.113.2
# ifconfig egre1 vnetid 200
# ifconfig egre1 mtu 1500 -tunneldf
# ifconfig egre1 up
# ifconfig bridge1 add vlan1 add egre1 up
.Ed
.Pp
At the second site:
.Bd -literal -offset indent
# ifconfig vlan0 vnetid 100
# ifconfig egre0 create
# ifconfig egre0 tunnel 203.0.113.2 192.0.2.1
# ifconfig egre0 vnetid 100
# ifconfig egre0 mtu 1500 -tunneldf
# ifconfig egre0 up
# ifconfig bridge0 add vlan0 add egre0 up
# ifconfig vlan1 vnetid 200
# ifconfig egre1 create
# ifconfig egre1 tunnel 203.0.113.2 192.0.2.1
# ifconfig egre1 vnetid 200
# ifconfig egre1 mtu 1500 -tunneldf
# ifconfig egre1 up
# ifconfig bridge1 add vlan1 add egre1 up
.Ed
.Ss Network Virtualization using GRE interfaces (nvgre) example
NVGRE can be used to build a distinct logical Ethernet network
on top of another network.
.Nm nvgre
is therefore like a
.Xr vlan 4
interface configured on top of a physical Ethernet interface,
except it can sit on any IP network capable of multicast.
.Pp
The following shows a basic
.Nm nvgre
configuration and an equivalent
.Xr vlan 4
configuration.
In the examples, 192.168.0.1/24 will be the network configured on
the relevant virtual interfaces.
The NVGRE underlay network will be configured on 100.64.10.0/24,
and will use 239.1.1.100 as the multicast group address.
.Pp
The
.Xr vlan 4
interface relies only on Ethernet; it does not rely on IP configuration
on the parent interface:
.Bd -literal -offset indent
# ifconfig em0 up
# ifconfig vlan0 create
# ifconfig vlan0 parent em0
# ifconfig vlan0 vnetid 10
# ifconfig vlan0 inet 192.168.0.1/24
# ifconfig vlan0 up
.Ed
.Pp
.Nm nvgre
relies on IP configuration on the parent interface, and an MTU large
enough to carry the encapsulated traffic:
.Bd -literal -offset indent
# ifconfig em0 mtu 1600
# ifconfig em0 inet 100.64.10.1/24
# ifconfig em0 up
# ifconfig nvgre0 create
# ifconfig nvgre0 parent em0 tunnel 100.64.10.1 239.1.1.100
# ifconfig nvgre0 vnetid 10010
# ifconfig nvgre0 inet 192.168.0.1/24
# ifconfig nvgre0 up
.Ed
.Pp
NVGRE is intended for use in a multitenant datacentre environment to
provide each customer with distinct Ethernet networks as needed,
but without running into the limit on the number of VLAN tags, and
without requiring reconfiguration of the underlying Ethernet
infrastructure.
Another way to look at it is that NVGRE can be used to construct
multipoint Ethernet VPNs across an IP core.
.Pp
For example, if a customer has multiple virtual machines running in
.Xr vmm 4
on distinct physical hosts,
.Nm nvgre
and
.Xr bridge 4
can be used to provide network connectivity between the
.Xr tap 4
interfaces connected to the virtual machines.
If there are 3 virtual machines, all using tap0 on each host, and
those hosts are connected to the same network described above, an
.Nm nvgre
interface with a distinct virtual network identifier and multicast
group can be created for them.
The following assumes nvgre1 and bridge1 have already been created
on each host, and em0 has had the MTU raised:
.Pp
On physical host 1:
.Bd -literal -offset indent
# ifconfig em0 inet 100.64.10.10/24
# ifconfig nvgre1 parent em0 tunnel 100.64.10.10 239.1.1.111
# ifconfig nvgre1 vnetid 10011
# ifconfig bridge1 add nvgre1 add tap0 up
.Ed
.Pp
On physical host 2:
.Bd -literal -offset indent
# ifconfig em0 inet 100.64.10.11/24
# ifconfig nvgre1 parent em0 tunnel 100.64.10.11 239.1.1.111
# ifconfig nvgre1 vnetid 10011
# ifconfig bridge1 add nvgre1 add tap0 up
.Ed
.Pp
On physical host 3:
.Bd -literal -offset indent
# ifconfig em0 inet 100.64.10.12/24
# ifconfig nvgre1 parent em0 tunnel 100.64.10.12 239.1.1.111
# ifconfig nvgre1 vnetid 10011
# ifconfig bridge1 add nvgre1 add tap0 up
.Ed
.Pp
Being able to carry working multicast and jumbo frames over the
public internet is unlikely, which makes it difficult to use NVGRE
to extend Ethernet VPNs between different sites.
.Nm nvgre
and
.Nm egre
can be bridged together to provide such connectivity.
See the
.Nm egre
section for an example.
.Sh SEE ALSO
.Xr eoip 4 ,
.Xr inet 4 ,
.Xr ip 4 ,
.Xr netintro 4 ,
.Xr options 4 ,
.Xr hostname.if 5 ,
.Xr protocols 5 ,
.Xr ifconfig 8 ,
.Xr netstart 8 ,
.Xr sysctl 8
.Sh STANDARDS
.Rs
.%A S. Hanks
.%A "T. Li"
.%A D. Farinacci
.%A P. Traina
.%D October 1994
.%R RFC 1701
.%T Generic Routing Encapsulation (GRE)
.Re
.Pp
.Rs
.%A S. Hanks
.%A "T. Li"
.%A D. Farinacci
.%A P. Traina
.%D October 1994
.%R RFC 1702
.%T Generic Routing Encapsulation over IPv4 networks
.Re
.Pp
.Rs
.%A D. Farinacci
.%A "T. Li"
.%A S. Hanks
.%A D. Meyer
.%A P. Traina
.%D March 2000
.%R RFC 2784
.%T Generic Routing Encapsulation (GRE)
.Re
.Pp
.Rs
.%A G. Dommety
.%D September 2000
.%R RFC 2890
.%T Key and Sequence Number Extensions to GRE
.Re
.Pp
.Rs
.%A T. Worster
.%A Y. Rekhter
.%A E. Rosen
.%D March 2005
.%R RFC 4023
.%T Encapsulating MPLS in IP or Generic Routing Encapsulation (GRE)
.Re
.Pp
.Rs
.%A P. Garg
.%A Y. Wang
.%D September 2015
.%R RFC 7637
.%T NVGRE: Network Virtualization Using Generic Routing Encapsulation
.Re
.Pp
.Rs
.%U https://tools.ietf.org/html/draft-ietf-wrec-web-pro-00.txt
.%T Web Cache Coordination Protocol V1.0
.Re
.Pp
.Rs
.%U https://tools.ietf.org/html/draft-wilson-wrec-wccp-v2-00.txt
.%T Web Cache Coordination Protocol V2.0
.Re
.Sh AUTHORS
.An Heiko W. Rupp Aq Mt hwr@pilhuhn.de
.Sh CAVEATS
RFC 1701 and RFC 2890 describe a variety of optional GRE header
fields in the protocol that are not implemented in the
.Nm gre
and
.Nm egre
interface drivers.
The only optional field the drivers implement support for is the
Key header.
.Pp
.Nm gre
interfaces skip the redirect header in WCCPv2 GRE encapsulated packets.
.Pp
The NVGRE RFC specifies VSIDs 0 (0x0) to 4095 (0xfff) as reserved
for future use, and VSID 16777215 (0xffffff) for use for vendor-specific
endpoint communication.
The NVGRE RFC also explicitly states encapsulated Ethernet packets
must not contain IEEE 802.1Q (VLAN) tags.
The
.Nm nvgre
driver does not restrict the use of these VSIDs, and does not prevent
the configuration of child
.Xr vlan 4
interfaces or the bridging of VLAN tagged traffic across the tunnel.
These non-restrictions allow non-compliant tunnels to be configured
which may not interoperate with other vendors.