# Ipvlan Network Driver

### Getting Started

The Ipvlan driver is currently in experimental mode in order to incubate use
cases from Docker users and vet the implementation to ensure a hardened,
production-ready driver in a future release. Libnetwork now gives users total
control over both IPv4 and IPv6 addressing. The VLAN driver builds on top of
that by giving operators complete control of layer 2 VLAN tagging and even
Ipvlan L3 routing for users interested in underlay network integration. For
overlay deployments that abstract away physical constraints, see the
[multi-host overlay](https://docs.docker.com/network/network-tutorial-overlay/)
driver.

Ipvlan is a new twist on the tried and true network virtualization technique.
The Linux implementations are extremely lightweight because, rather than using
the traditional Linux bridge for isolation, they are simply associated with a
Linux Ethernet interface or sub-interface to enforce separation between networks
and connectivity to the physical network.

Ipvlan offers a number of unique features and plenty of room for further
innovations with the various modes. Two high-level advantages of these approaches
are the positive performance implications of bypassing the Linux bridge and the
simplicity of having fewer moving parts. Removing the bridge that traditionally
resides between the Docker host NIC and the container interface leaves a simple
setup consisting of container interfaces attached directly to the Docker host
interface. The result is easy access for external-facing services, as there is
no need for port mappings in these scenarios.

### Pre-Requisites

- The examples on this page are all single host and require Docker experimental
  features to be enabled.
- All of the examples can be performed on a single host running Docker. Any
  example using a sub-interface like `eth0.10` can be replaced with `eth0` or
  any other valid parent interface on the Docker host. Sub-interfaces with a `.`
  are created on the fly. The `-o parent` interface can also be left out of the
  `docker network create` altogether and the driver will create a `dummy`
  interface that will enable local host connectivity to perform the examples.
- Kernel requirements:
    - To check your current kernel version, use `uname -r`
    - Ipvlan requires Linux kernel v4.2+ (support for earlier kernels exists but
      is buggy); a quick check is shown below
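
A quick way to verify both requirements, assuming a distribution that ships its
kernel config in `/boot` (the path varies by distro):

```bash
# Check the running kernel version (should be 4.2 or newer)
$ uname -r

# Verify the kernel was built with ipvlan support (=m or =y when present)
$ grep IPVLAN /boot/config-$(uname -r)
CONFIG_IPVLAN=m
```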

### Ipvlan L2 Mode Example Usage

An example of the ipvlan `L2` mode topology is shown in the following image.
The driver is specified with the `-d driver_name` option, in this case `-d ipvlan`.

![Simple Ipvlan L2 Mode Example](images/ipvlan_l2_simple.png)

The parent interface in the next example, `-o parent=eth0`, is configured as follows:

```bash
$ ip addr show eth0
3: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    inet 192.168.1.250/24 brd 192.168.1.255 scope global eth0
```

Use the network from the host's interface as the `--subnet` in the
`docker network create`. The container will be attached to the same network as
the host interface as set via the `-o parent=` option.

Create the ipvlan network and run a container attaching to it:

```bash
# Ipvlan (-o ipvlan_mode= defaults to L2 mode if not specified)
$ docker network create -d ipvlan \
    --subnet=192.168.1.0/24 \
    --gateway=192.168.1.1 \
    -o ipvlan_mode=l2 \
    -o parent=eth0 db_net

# Start a container on the db_net network
$ docker run --net=db_net -it --rm alpine /bin/sh

# NOTE: the containers can NOT ping the underlying host interfaces as
# they are intentionally filtered by Linux for additional isolation.
```
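
From inside the container, other hosts on the parent network are reachable, but
the Docker host's own interfaces are not. A quick check, reusing the addresses
from the example above:

```bash
# Inside the container: the physical network gateway is reachable
$$ ping -c 4 192.168.1.1

# ...but the Docker host's own interface is filtered by the kernel
$$ ping -c 4 192.168.1.250
```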

The default mode for Ipvlan is `l2`. If `-o ipvlan_mode=` is left unspecified,
the default mode will be used. Similarly, if the `--gateway` is left empty, the
first usable address on the network will be set as the gateway. For example, if
the subnet provided in the network create is `--subnet=192.168.1.0/24` then the
gateway the container receives is `192.168.1.1`.
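
One way to confirm what the driver assigned is `docker network inspect`; a
trimmed sketch of the relevant IPAM fields:

```bash
# Print just the IPAM configuration of the db_net network
$ docker network inspect -f '{{json .IPAM.Config}}' db_net
[{"Subnet":"192.168.1.0/24","Gateway":"192.168.1.1"}]
```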

To help understand how this mode interacts with other hosts, the following
figure shows the same layer 2 segment between two Docker hosts as it applies to
Ipvlan L2 mode.

![Multiple Ipvlan Hosts](images/macvlan-bridge-ipvlan-l2.png)

The following will create the exact same network as the `db_net` network created
earlier, using the driver defaults of `--gateway=192.168.1.1` and `-o ipvlan_mode=l2`.

```bash
# Ipvlan (-o ipvlan_mode= defaults to L2 mode if not specified)
$ docker network create -d ipvlan \
    --subnet=192.168.1.0/24 \
    -o parent=eth0 db_net_ipv

# Start a container with an explicit name in daemon mode
$ docker run --net=db_net_ipv --name=ipv1 -itd alpine /bin/sh

# Start a second container and ping using the container name
# to see the docker included name resolution functionality
$ docker run --net=db_net_ipv --name=ipv2 -it --rm alpine /bin/sh
$$ ping -c 4 ipv1

# NOTE: the containers can NOT ping the underlying host interfaces as
# they are intentionally filtered by Linux for additional isolation.
```

The drivers also support the `--internal` flag that will completely isolate
containers on a network from any communications external to that network. Since
network isolation is tightly coupled to the network's parent interface, the result
of leaving the `-o parent=` option off of a `docker network create` is the exact
same as the `--internal` option. If the parent interface is not specified or the
`--internal` flag is used, a netlink type `dummy` parent interface is created
for the user and used as the parent interface, effectively isolating the network
completely.

The following two `docker network create` examples result in identical networks
that you can attach containers to:

```bash
# Empty '-o parent=' creates an isolated network
$ docker network create -d ipvlan \
    --subnet=192.168.10.0/24 isolated1

# Explicit '--internal' flag is the same:
$ docker network create -d ipvlan \
    --subnet=192.168.11.0/24 --internal isolated2

# Even the '--subnet=' can be left empty and the default
# IPAM subnet of 172.18.0.0/16 will be assigned
$ docker network create -d ipvlan isolated3

$ docker run --net=isolated1 --name=cid1 -it --rm alpine /bin/sh
$ docker run --net=isolated2 --name=cid2 -it --rm alpine /bin/sh
$ docker run --net=isolated3 --name=cid3 -it --rm alpine /bin/sh

# To attach to any of the containers, use `docker exec` and start a shell
$ docker exec -it cid1 /bin/sh
$ docker exec -it cid2 /bin/sh
$ docker exec -it cid3 /bin/sh
```
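
To confirm the isolation, try reaching an address outside the network from one
of the containers; with a `dummy` parent the traffic has nowhere to go, so the
ping is expected to fail (the target address here is just an example):

```bash
# Inside cid1: external destinations are unreachable on an isolated network
$$ ping -c 2 192.168.1.1
```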

### Ipvlan 802.1q Trunk L2 Mode Example Usage

Architecturally, Ipvlan L2 mode trunking is the same as Macvlan with regard to
gateways and L2 path isolation. There are nuances that can be advantageous for
CAM table pressure in ToR switches, one MAC per port, and MAC exhaustion on a
host's parent NIC, to name a few. The 802.1q trunk scenario looks the same. Both
modes adhere to tagging standards and integrate seamlessly with the physical
network for underlay integration and hardware vendor plugin integrations.

Hosts on the same VLAN are typically on the same subnet and almost always are
grouped together based on their security policy. In most scenarios, a multi-tier
application is tiered into different subnets because the security profile of each
process requires some form of isolation. For example, hosting your credit card
processing on the same virtual network as the frontend webserver would be a
regulatory compliance issue, along with circumventing the long-standing best
practice of layered defense-in-depth architectures. VLANs, or the equivalent VNI
(Virtual Network Identifier) when using the Overlay driver, are the first step
in isolating tenant traffic.

![Docker VLANs in Depth](images/vlans-deeper-look.png)

The Linux sub-interface tagged with a vlan can either already exist or will be
created when you call a `docker network create`. `docker network rm` will delete
the sub-interface. Parent interfaces such as `eth0` are not deleted, only
sub-interfaces with a netlink parent index > 0.

For the driver to add/delete the vlan sub-interfaces, the format needs to be
`interface_name.vlan_tag`. Other sub-interface naming can be used as the
specified parent, but the link will not be deleted automatically when
`docker network rm` is invoked.

The option to use either existing parent vlan sub-interfaces or let Docker manage
them enables the user to either completely manage the Linux interfaces and
networking or let Docker create and delete the vlan parent sub-interfaces
(netlink `ip link`) with no effort from the user.

For example: use `eth0.10` to denote a sub-interface of `eth0` tagged with the
vlan id of `10`. The equivalent `ip link` command would be
`ip link add link eth0 name eth0.10 type vlan id 10`.
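
To verify an existing sub-interface and its tag, `ip -d link` prints the vlan
details (output trimmed to the relevant line):

```bash
# The -d flag prints link details, including the 802.1q tag
$ ip -d link show eth0.10
    vlan protocol 802.1Q id 10
```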

The example creates the vlan tagged networks and then starts two containers to
test connectivity between containers. Different vlans cannot ping one another
without a router routing between the two networks. The default namespace is not
reachable per ipvlan design in order to isolate container namespaces from the
underlying host.

**Vlan ID 20**

In the first network, tagged and isolated by the Docker host, `eth0.20` is the
parent interface tagged with vlan id `20`, specified with `-o parent=eth0.20`.
Other naming formats can be used, but the links need to be added and deleted
manually using `ip link` or Linux configuration files. As long as the `-o parent`
interface exists, any name compliant with Linux netlink can be used.

```bash
# now add networks and hosts as you would normally by attaching to the master (sub)interface that is tagged
$ docker network create -d ipvlan \
    --subnet=192.168.20.0/24 \
    --gateway=192.168.20.1 \
    -o parent=eth0.20 ipvlan20

# in two separate terminals, start a Docker container and the containers can now ping one another.
$ docker run --net=ipvlan20 -it --name ivlan_test1 --rm alpine /bin/sh
$ docker run --net=ipvlan20 -it --name ivlan_test2 --rm alpine /bin/sh
```

**Vlan ID 30**

In the second network, tagged and isolated by the Docker host, `eth0.30` is the
parent interface tagged with vlan id `30`, specified with `-o parent=eth0.30`. The
`ipvlan_mode=` defaults to l2 mode `ipvlan_mode=l2`. It can also be explicitly
set with the same result, as shown in the next example.

```bash
# now add networks and hosts as you would normally by attaching to the master (sub)interface that is tagged.
$ docker network create -d ipvlan \
    --subnet=192.168.30.0/24 \
    --gateway=192.168.30.1 \
    -o parent=eth0.30 \
    -o ipvlan_mode=l2 ipvlan30

# in two separate terminals, start a Docker container and the containers can now ping one another.
$ docker run --net=ipvlan30 -it --name ivlan_test3 --rm alpine /bin/sh
$ docker run --net=ipvlan30 -it --name ivlan_test4 --rm alpine /bin/sh
```

The gateway is set inside of the container as the default gateway. That gateway
would typically be an external router on the network.

```bash
$$ ip route
default via 192.168.30.1 dev eth0
192.168.30.0/24 dev eth0  src 192.168.30.2
```

Example: Multi-Subnet Ipvlan L2 Mode, starting two containers on the same subnet
and pinging one another. In order for `192.168.114.0/24` to reach
`192.168.116.0/24`, an external router is required in L2 mode. L3 mode can route
between subnets that share a common `-o parent=`.

Secondary addresses on network routers are common: as an address space becomes
exhausted, another secondary address is added to an L3 vlan interface, commonly
referred to as a "switched virtual interface" (SVI).

```bash
$ docker network create -d ipvlan \
    --subnet=192.168.114.0/24 --subnet=192.168.116.0/24 \
    --gateway=192.168.114.254 --gateway=192.168.116.254 \
    -o parent=eth0.114 \
    -o ipvlan_mode=l2 ipvlan114

$ docker run --net=ipvlan114 --ip=192.168.114.10 -it --rm alpine /bin/sh
$ docker run --net=ipvlan114 --ip=192.168.114.11 -it --rm alpine /bin/sh
```
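
From inside the first container, the second container on the same subnet is
directly reachable over L2, using the addresses assigned above:

```bash
# Inside the 192.168.114.10 container
$$ ping -c 4 192.168.114.11
```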

A key takeaway is that operators have the ability to map their physical network
into their virtual network for integrating containers into their environment
with no operational overhauls required. NetOps simply drops an 802.1q trunk into
the Docker host. That virtual link would be the `-o parent=` passed in the network
creation. For untagged (non-VLAN) links, it is as simple as `-o parent=eth0`, and
for 802.1q trunks with VLAN IDs, each network gets mapped to the corresponding
VLAN/subnet from the network.

An example being, NetOps provides the VLAN IDs and the associated subnets for
VLANs being passed on the Ethernet link to the Docker host server. Those values
are simply plugged into the `docker network create` commands when provisioning
the Docker networks. These are persistent configurations that are applied every
time the Docker engine starts, which alleviates having to manage often complex
configuration files. The network interfaces can also be managed manually by
being pre-created, in which case Docker networking will never modify them and
will simply use them as parent interfaces. Example mappings from NetOps to
Docker network commands are as follows:

- VLAN: 10, Subnet: 172.16.80.0/24, Gateway: 172.16.80.1
    - `--subnet=172.16.80.0/24 --gateway=172.16.80.1 -o parent=eth0.10`
- VLAN: 20, IP subnet: 172.16.50.0/22, Gateway: 172.16.50.1
    - `--subnet=172.16.50.0/22 --gateway=172.16.50.1 -o parent=eth0.20`
- VLAN: 30, Subnet: 10.1.100.0/16, Gateway: 10.1.100.1
    - `--subnet=10.1.100.0/16 --gateway=10.1.100.1 -o parent=eth0.30`
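
For instance, the first mapping above plugs straight into a network create (the
network name `vlan10_net` here is arbitrary):

```bash
# VLAN 10 from the NetOps handoff, mapped to a Docker network
$ docker network create -d ipvlan \
    --subnet=172.16.80.0/24 \
    --gateway=172.16.80.1 \
    -o parent=eth0.10 vlan10_net
```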

### Ipvlan L3 Mode Example

Ipvlan will require routes to be distributed to each endpoint. The driver only
builds the Ipvlan L3 mode port and attaches the container to the interface. Route
distribution throughout a cluster is beyond the initial implementation of this
single host scoped driver. In L3 mode, the Docker host is very similar to a
router starting new networks in the container. They are on networks that the
upstream network will not know about without route distribution. For those
curious how Ipvlan L3 will fit into container networking, see the following
examples.

![Docker Ipvlan L3 Mode](images/ipvlan-l3.png)

Ipvlan L3 mode drops all broadcast and multicast traffic. This reason alone
makes Ipvlan L3 mode a prime candidate for those looking for massive scale and
predictable network integrations. It is predictable and in turn will lead to
greater uptimes because there is no bridging involved. Bridging loops have been
responsible for high profile outages that can be hard to pinpoint depending on
the size of the failure domain. This is due to the cascading nature of BPDUs
(Bridge Protocol Data Units) that are flooded throughout a broadcast domain (VLAN)
to find and block topology loops. Eliminating bridging domains, or at the least,
keeping them isolated to a pair of ToRs (top of rack switches) will reduce hard
to troubleshoot bridging instabilities. Ipvlan L2 mode is well suited for
isolated VLANs only trunked into a pair of ToRs that can provide a loop-free,
non-blocking fabric. The next step further is to route at the edge via Ipvlan L3
mode, which reduces the failure domain to the local host only.
314
315- L3 mode needs to be on a separate subnet as the default namespace since it
316  requires a netlink route in the default namespace pointing to the Ipvlan parent
317  interface.
318- The parent interface used in this example is `eth0` and it is on the subnet
319 `192.168.1.0/24`. Notice the `docker network` is **not** on the same subnet
320  as `eth0`.
321- Unlike ipvlan l2 modes, different subnets/networks can ping one another as
322  long as they share the same parent interface `-o parent=`.
323
324```bash
325$$ ip a show eth0
3263: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
327    link/ether 00:50:56:39:45:2e brd ff:ff:ff:ff:ff:ff
328    inet 192.168.1.250/24 brd 192.168.1.255 scope global eth0
329```
330
331- A traditional gateway doesn't mean much to an L3 mode Ipvlan interface since
332  there is no broadcast traffic allowed. Because of that, the container default
333  gateway simply points to the containers `eth0` device. See below for CLI output
334  of `ip route` or `ip -6 route` from inside an L3 container for details.
335
336The mode ` -o ipvlan_mode=l3` must be explicitly specified since the default
337ipvlan mode is `l2`.
338
339The following example does not specify a parent interface. The network drivers
340will create a dummy type link for the user rather than rejecting the network
341creation and isolating containers from only communicating with one another.

```bash
# Create the Ipvlan L3 network
$ docker network create -d ipvlan \
    --subnet=192.168.214.0/24 \
    --subnet=10.1.214.0/24 \
    -o ipvlan_mode=l3 ipnet210

# Start a container on each of the two subnets
$ docker run --net=ipnet210 --ip=192.168.214.10 -itd alpine /bin/sh
$ docker run --net=ipnet210 --ip=10.1.214.10 -itd alpine /bin/sh

# Test L3 connectivity from 192.168.214.0/24 to 10.1.214.0/24
$ docker run --net=ipnet210 --ip=192.168.214.9 -it --rm alpine ping -c 2 10.1.214.10

# Test L3 connectivity from 10.1.214.0/24 to 192.168.214.0/24
$ docker run --net=ipnet210 --ip=10.1.214.9 -it --rm alpine ping -c 2 192.168.214.10
```

> **Note**
>
> Notice that there is no `--gateway=` option in the network create. The option
> is ignored if one is specified in `l3` mode. Take a look at the container routing
> table from inside of the container:
>
> ```bash
> # Inside an L3 mode container
> $$ ip route
> default dev eth0
> 192.168.214.0/24 dev eth0  src 192.168.214.10
> ```

In order to ping the containers from a remote Docker host, or for a container to
be able to ping a remote host, the remote host or the physical network in between
needs to have a route pointing to the host IP address of the container's Docker
host eth interface. More on this as we evolve the Ipvlan `L3` story.
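
As a sketch of what that looks like, a remote host or upstream router would need
a static route for each container subnet via the Docker host's interface address
(`192.168.1.250` from the earlier `eth0` output):

```bash
# On a remote host or router: reach the container subnets via the Docker host
$ ip route add 192.168.214.0/24 via 192.168.1.250
$ ip route add 10.1.214.0/24 via 192.168.1.250
```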

### Dual Stack IPv4 IPv6 Ipvlan L2 Mode

- Not only does Libnetwork give you complete control over IPv4 addressing, but
  it also gives you total control over IPv6 addressing, as well as feature parity
  between the two address families.

- The next example will start with IPv6 only. Start two containers on the same
  VLAN `139` and ping one another. Since the IPv4 subnet is not specified, the
  default IPAM will provision a default IPv4 subnet. That subnet is isolated
  unless the upstream network is explicitly routing it on VLAN `139`.

```bash
# Create a v6 network
$ docker network create -d ipvlan \
    --subnet=2001:db8:abc2::/64 --gateway=2001:db8:abc2::22 \
    -o parent=eth0.139 v6ipvlan139

# Start a container on the network
$ docker run --net=v6ipvlan139 -it --rm alpine /bin/sh
```

View the container eth0 interface and v6 routing table:

```bash
# Inside the IPv6 container
$$ ip a show eth0
75: eth0@if55: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
    link/ether 00:50:56:2b:29:40 brd ff:ff:ff:ff:ff:ff
    inet 172.18.0.2/16 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 2001:db8:abc4::250:56ff:fe2b:2940/64 scope link
       valid_lft forever preferred_lft forever
    inet6 2001:db8:abc2::1/64 scope link nodad
       valid_lft forever preferred_lft forever

$$ ip -6 route
2001:db8:abc4::/64 dev eth0  proto kernel  metric 256
2001:db8:abc2::/64 dev eth0  proto kernel  metric 256
default via 2001:db8:abc2::22 dev eth0  metric 1024
```

Start a second container and ping the first container's v6 address.

```bash
# Test L2 connectivity over IPv6
$ docker run --net=v6ipvlan139 -it --rm alpine /bin/sh

# Inside the second IPv6 container
$$ ip a show eth0
75: eth0@if55: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
    link/ether 00:50:56:2b:29:40 brd ff:ff:ff:ff:ff:ff
    inet 172.18.0.3/16 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 2001:db8:abc4::250:56ff:fe2b:2940/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
    inet6 2001:db8:abc2::2/64 scope link nodad
       valid_lft forever preferred_lft forever

$$ ping6 2001:db8:abc2::1
PING 2001:db8:abc2::1 (2001:db8:abc2::1): 56 data bytes
64 bytes from 2001:db8:abc2::1%eth0: icmp_seq=0 ttl=64 time=0.044 ms
64 bytes from 2001:db8:abc2::1%eth0: icmp_seq=1 ttl=64 time=0.058 ms

2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.044/0.051/0.058/0.000 ms
```

The next example will set up a dual stack IPv4/IPv6 network with an example
VLAN ID of `140`.

Next create a network with two IPv4 subnets and one IPv6 subnet, all of which
have explicit gateways:

```bash
$ docker network create -d ipvlan \
    --subnet=192.168.140.0/24 --subnet=192.168.142.0/24 \
    --gateway=192.168.140.1 --gateway=192.168.142.1 \
    --subnet=2001:db8:abc9::/64 --gateway=2001:db8:abc9::22 \
    -o parent=eth0.140 \
    -o ipvlan_mode=l2 ipvlan140
```

Start a container and view eth0 and both v4 & v6 routing tables:

```bash
$ docker run --net=ipvlan140 --ip6=2001:db8:abc9::51 -it --rm alpine /bin/sh

$$ ip a show eth0
78: eth0@if77: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
    link/ether 00:50:56:2b:29:40 brd ff:ff:ff:ff:ff:ff
    inet 192.168.140.2/24 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 2001:db8:abc4::250:56ff:fe2b:2940/64 scope link
       valid_lft forever preferred_lft forever
    inet6 2001:db8:abc9::51/64 scope link nodad
       valid_lft forever preferred_lft forever

$$ ip route
default via 192.168.140.1 dev eth0
192.168.140.0/24 dev eth0  proto kernel  scope link  src 192.168.140.2

$$ ip -6 route
2001:db8:abc4::/64 dev eth0  proto kernel  metric 256
2001:db8:abc9::/64 dev eth0  proto kernel  metric 256
default via 2001:db8:abc9::22 dev eth0  metric 1024
```

Start a second container with a specific `--ip` v4 address and ping the first
container using IPv4 packets:

```bash
$ docker run --net=ipvlan140 --ip=192.168.140.10 -it --rm alpine /bin/sh
```
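
From inside this second container, the first container's v4 address
(`192.168.140.2` in the output above) is directly reachable:

```bash
# Inside the 192.168.140.10 container
$$ ping -c 4 192.168.140.2
```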

> **Note**
>
> Different subnets on the same parent interface in Ipvlan `L2` mode cannot ping
> one another. That requires a router to proxy-arp the requests with a secondary
> subnet. However, Ipvlan `L3` will route the unicast traffic between disparate
> subnets as long as they share the same `-o parent` parent link.

### Dual Stack IPv4 IPv6 Ipvlan L3 Mode

**Example:** Ipvlan L3 Mode Dual Stack IPv4/IPv6, Multi-Subnet w/ 802.1q Vlan Tag:118

As in all of the examples, a tagged VLAN interface does not have to be used. The
sub-interfaces can be swapped with `eth0`, `eth1`, `bond0` or any other valid
interface on the host other than the `lo` loopback.

The primary difference you will see is that L3 mode does not create a default
route with a next-hop, but rather sets a default route pointing to `dev eth0` only,
since ARP/Broadcasts/Multicast are all filtered by Linux as per the design. Since
the parent interface is essentially acting as a router, the parent interface IP
and subnet need to be different from the container networks. That is the opposite
of bridge and L2 modes, which need to be on the same subnet (broadcast domain)
in order to forward broadcast and multicast packets.

```bash
# Create an IPv6+IPv4 Dual Stack Ipvlan L3 network
# Gateways for both v4 and v6 are set to a dev e.g. 'default dev eth0'
$ docker network create -d ipvlan \
    --subnet=192.168.110.0/24 \
    --subnet=192.168.112.0/24 \
    --subnet=2001:db8:abc6::/64 \
    -o parent=eth0 \
    -o ipvlan_mode=l3 ipnet110

# Start a few containers on the network (ipnet110)
# in separate terminals and check connectivity
$ docker run --net=ipnet110 -it --rm alpine /bin/sh
# Start a second container specifying the v6 address
$ docker run --net=ipnet110 --ip6=2001:db8:abc6::10 -it --rm alpine /bin/sh
# Start a third specifying the IPv4 address
$ docker run --net=ipnet110 --ip=192.168.112.30 -it --rm alpine /bin/sh
# Start a 4th specifying both the IPv4 and IPv6 addresses
$ docker run --net=ipnet110 --ip6=2001:db8:abc6::50 --ip=192.168.112.50 -it --rm alpine /bin/sh
```

Interface and routing table outputs are as follows:

```bash
$$ ip a show eth0
63: eth0@if59: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
    link/ether 00:50:56:2b:29:40 brd ff:ff:ff:ff:ff:ff
    inet 192.168.112.2/24 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 2001:db8:abc4::250:56ff:fe2b:2940/64 scope link
       valid_lft forever preferred_lft forever
    inet6 2001:db8:abc6::10/64 scope link nodad
       valid_lft forever preferred_lft forever

# Note the default route is simply the eth device because ARPs are filtered.
$$ ip route
default dev eth0  scope link
192.168.112.0/24 dev eth0  proto kernel  scope link  src 192.168.112.2

$$ ip -6 route
2001:db8:abc4::/64 dev eth0  proto kernel  metric 256
2001:db8:abc6::/64 dev eth0  proto kernel  metric 256
default dev eth0  metric 1024
```
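
Connectivity between the endpoints started above can be checked from inside the
container shown, using the addresses assigned in the earlier `docker run`
commands:

```bash
# Inside the container above: reach the other v4 and v6 endpoints
$$ ping -c 2 192.168.112.30
$$ ping6 -c 2 2001:db8:abc6::50
```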
562
563> *Note*
564>
565> There may be a bug when specifying `--ip6=` addresses when you delete a
566> container with a specified v6 address and then start a new container with the
567> same v6 address it throws the following like the address isn't properly being
568> released to the v6 pool. It will fail to unmount the container and be left dead.
569
570```console
571docker: Error response from daemon: Address already in use.
572```

### Manually Creating 802.1q Links

**Vlan ID 40**

If a user does not want the driver to create the vlan sub-interface, it simply
needs to exist prior to the `docker network create`. If you have sub-interface
naming that is not `interface.vlan_id`, it is honored in the `-o parent=` option
again as long as the interface exists and is up.

Links, when manually created, can be named anything as long as they exist when
the network is created. Manually created links do not get deleted regardless of
the name when the network is deleted with `docker network rm`.

```bash
# create a new sub-interface tied to dot1q vlan 40
$ ip link add link eth0 name eth0.40 type vlan id 40

# enable the new sub-interface
$ ip link set eth0.40 up

# now add networks and hosts as you would normally by attaching to the master (sub)interface that is tagged
$ docker network create -d ipvlan \
    --subnet=192.168.40.0/24 \
    --gateway=192.168.40.1 \
    -o parent=eth0.40 ipvlan40

# in two separate terminals, start a Docker container and the containers can now ping one another.
$ docker run --net=ipvlan40 -it --name ivlan_test5 --rm alpine /bin/sh
$ docker run --net=ipvlan40 -it --name ivlan_test6 --rm alpine /bin/sh
```

**Example:** Vlan sub-interface manually created with any name:

```bash
# create a new sub-interface tied to dot1q vlan 40
$ ip link add link eth0 name foo type vlan id 40

# enable the new sub-interface
$ ip link set foo up

# now add networks and hosts as you would normally by attaching to the master (sub)interface that is tagged
$ docker network create -d ipvlan \
    --subnet=192.168.40.0/24 --gateway=192.168.40.1 \
    -o parent=foo ipvlan40

# in two separate terminals, start a Docker container and the containers can now ping one another.
$ docker run --net=ipvlan40 -it --name ivlan_test5 --rm alpine /bin/sh
$ docker run --net=ipvlan40 -it --name ivlan_test6 --rm alpine /bin/sh
```

Manually created links can be cleaned up with:

```bash
$ ip link del foo
```

As with all of the Libnetwork drivers, these can be mixed and matched, even as
far as running 3rd-party ecosystem drivers in parallel, for maximum flexibility
for the Docker user.