Published June 16, 2019

7 - 01 - Overlay_Consul



For multi-host networking, Docker provides the overlay driver, which lets users build multi-host networks on top of VXLAN.
In this lab, I will run Consul in a container to store the overlay network information.
Containers on host1 and host2 will then ping each other over the overlay network.


1. Run the Consul container on KVM1.
[peter@peter-KVM ~]$ docker run -d -p 8500:8500 -h consul --name consul progrium/consul -server -bootstrap
Unable to find image 'progrium/consul:latest' locally
latest: Pulling from progrium/consul
c862d82a67a2: Pull complete 
0e7f3c08384e: Pull complete 
0e221e32327a: Pull complete 
09a952464e47: Pull complete 
60a1b927414d: Pull complete 
4c9f46b5ccce: Pull complete 
417d86672aa4: Pull complete 
b0d47ad24447: Pull complete 
fd5300bd53f0: Pull complete 
a3ed95caeb02: Pull complete 
d023b445076e: Pull complete 
ba8851f89e33: Pull complete 
5d1cefca2a28: Pull complete 
Digest: sha256:8cc8023462905929df9a79ff67ee435a36848ce7a10f18d6d0faba9306b97274
Status: Downloaded newer image for progrium/consul:latest
d8316d411a302f05a1417057bccf001bf41f9367a6dccd05319f62bc2ae1a6da
[peter@peter-KVM ~]$ 
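
To verify that Consul is up before proceeding, you can query its HTTP status API (a quick sanity check; this is the standard Consul status endpoint, and the address assumes the KVM1 IP used throughout this lab):
[peter@peter-KVM ~]$ curl http://192.168.122.179:8500/v1/status/leader
It should print the address of the current Consul leader.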


2. After starting the Consul container, I can browse the Consul web UI at http://192.168.122.179:8500



3. Modify the Docker daemon configuration on host1 and host2.
Using $ sudo vi /etc/systemd/system/docker.service.d/10-machine.conf, add the following options to the ExecStart line in 10-machine.conf. --cluster-store points the daemon at the Consul key-value store, and --cluster-advertise tells the other daemons which interface and port to use to reach this one.
--cluster-store=consul://192.168.122.179:8500 --cluster-advertise=ens3:2376
peter@host1:~$ cat /etc/systemd/system/docker.service.d/10-machine.conf 
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --storage-driver overlay2 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=generic --cluster-store=consul://192.168.122.179:8500 --cluster-advertise=ens3:2376 
Environment=
peter@host1:~$ 
peter@host2:~$ cat /etc/systemd/system/docker.service.d/10-machine.conf 
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --storage-driver overlay2 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=generic --cluster-store=consul://192.168.122.179:8500 --cluster-advertise=ens3:2376 
Environment=
peter@host2:~$ 


4. Restart the Docker daemon on host1 and host2.
peter@host1:~$ systemctl daemon-reload 
==== AUTHENTICATING FOR org.freedesktop.systemd1.reload-daemon ===
Authentication is required to reload the systemd state.
Authenticating as: Peter,,, (peter)
Password: 
==== AUTHENTICATION COMPLETE ===
peter@host1:~$ 
peter@host1:~$ systemctl restart docker.service 
==== AUTHENTICATING FOR org.freedesktop.systemd1.manage-units ===
Authentication is required to restart 'docker.service'.
Authenticating as: Peter,,, (peter)
Password: 
==== AUTHENTICATION COMPLETE ===
peter@host1:~$

peter@host2:~$ systemctl daemon-reload
==== AUTHENTICATING FOR org.freedesktop.systemd1.reload-daemon ===
Authentication is required to reload the systemd state.
Authenticating as: Peter,,, (peter)
Password: 
==== AUTHENTICATION COMPLETE ===
peter@host2:~$
peter@host2:~$ systemctl restart docker.service
==== AUTHENTICATING FOR org.freedesktop.systemd1.manage-units ===
Authentication is required to restart 'docker.service'.
Authenticating as: Peter,,, (peter)
Password: 
==== AUTHENTICATION COMPLETE ===
peter@host2:~$ 
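
After the restart, you can confirm that the daemon picked up the new options. On Docker releases that still support the external key-value store, docker info reports these settings (a quick check, assuming the field names on your version match the grep pattern):
peter@host1:~$ docker info | grep -i cluster
It should show the Cluster Store and Cluster Advertise values configured above.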



5. Using Docker Machine, create the overlay network (ov_net01) on host1. (host2 will learn about it automatically in the next step.)
[peter@peter-KVM ~]$ eval $(docker-machine env host1)
[peter@peter-KVM ~ [host1]]$ 
[peter@peter-KVM ~ [host1]]$ docker network create -d overlay ov_net01
10a31605f5b02adf671c17065663f4f1e4b11b6496be6a45e5cd22683501fee1
[peter@peter-KVM ~ [host1]]$ 


6. host2 automatically has ov_net01, because host1 saved ov_net01 (10.0.0.0/24) to Consul, and Consul propagates this information to host2.
[peter@peter-KVM ~ [host1]]$ eval $(docker-machine env host2)
[peter@peter-KVM ~ [host2]]$ 
[peter@peter-KVM ~ [host2]]$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
cd60a1648cfc        bridge              bridge              local
dd3b165262af        host                host                local
fc25e708ecdf        none                null                local
10a31605f5b0        ov_net01            overlay             global
[peter@peter-KVM ~ [host2]]$ 
[peter@peter-KVM ~ [host2]]$ docker network inspect ov_net01
[
    {
        "Name": "ov_net01",
        "Id": "10a31605f5b02adf671c17065663f4f1e4b11b6496be6a45e5cd22683501fee1",
        "Created": "2019-06-16T12:01:23.481711106+08:00",
        "Scope": "global",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "10.0.0.0/24",
                    "Gateway": "10.0.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "ep-7b44556d7927e758dfd3382c77da6351d3daeb3b08a1d4ec9b9e72468df7ccbb": {
                "Name": "bbox1",
                "EndpointID": "7b44556d7927e758dfd3382c77da6351d3daeb3b08a1d4ec9b9e72468df7ccbb",
                "MacAddress": "02:42:0a:00:00:02",
                "IPv4Address": "10.0.0.2/24",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]
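
You can also look at what libnetwork actually stored in Consul. The key prefix below (docker/network/v1.0/) is the one libnetwork conventionally uses; treat it as an assumption and adjust if your version differs:
[peter@peter-KVM ~ [host2]]$ curl -s 'http://192.168.122.179:8500/v1/kv/docker/network/v1.0/network/?keys'
It should list one key per network, including ov_net01's network ID.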

7. Now, let's run a container on host1 that uses ov_net01. The container gets two interfaces:
eth0 (10.0.0.0/24) is on the overlay network (ov_net01)
eth1 (172.18.0.0/16) is on the bridge network (docker_gwbridge)
[peter@peter-KVM ~ [host1]]$ docker run -itd --name bbox1 --network ov_net01 busybox
f757416a264765248b86e0fadf782c1ab4a5d1a3a9092a5bbd3c4a7377b02f1a
[peter@peter-KVM ~ [host1]]$ 
[peter@peter-KVM ~ [host1]]$ docker exec bbox1 ip r
default via 172.18.0.1 dev eth1 
10.0.0.0/24 dev eth0 scope link  src 10.0.0.2 
172.18.0.0/16 dev eth1 scope link  src 172.18.0.2 
[peter@peter-KVM ~ [host1]]$ 


8. Create the container bbox2 on host2, also using ov_net01.
[peter@peter-KVM ~ [host2]]$ docker run -itd --name bbox2 --network ov_net01 busybox
Unable to find image 'busybox:latest' locally
latest: Pulling from library/busybox
8e674ad76dce: Pull complete 
Digest: sha256:7a4d4ed96e15d6a3fe8bfedb88e95b153b93e230a96906910d57fc4a13210160
Status: Downloaded newer image for busybox:latest
3106d8fbebe4b67d03b75745f884b2dc381205ce543b48ecec25cc774ba064e4
[peter@peter-KVM ~ [host2]]$ 
[peter@peter-KVM ~ [host2]]$ docker exec bbox2 ip r
default via 172.18.0.1 dev eth1 
10.0.0.0/24 dev eth0 scope link  src 10.0.0.3 
172.18.0.0/16 dev eth1 scope link  src 172.18.0.2 
[peter@peter-KVM ~ [host2]]$ 
[peter@peter-KVM ~ [host2]]$ docker network inspect docker_gwbridge 
[
    {
        "Name": "docker_gwbridge",
        "Id": "bc1af1e82e547a4622c21e368360178150ef8eafd6ef6f9f8ef0d1f7edcb6d7e",
        "Created": "2019-06-16T12:23:04.735848807+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.18.0.0/16",
                    "Gateway": "172.18.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "3106d8fbebe4b67d03b75745f884b2dc381205ce543b48ecec25cc774ba064e4": {
                "Name": "gateway_3849ca5347d3",
                "EndpointID": "3162019e2f6f2ffacbcf5edf32ce4d5b095e6e05bc77439bc65fc1aa42f5c494",
                "MacAddress": "02:42:ac:12:00:02",
                "IPv4Address": "172.18.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.enable_icc": "false",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.name": "docker_gwbridge"
        },
        "Labels": {}
    }
]
[peter@peter-KVM ~ [host2]]$ 


9. bbox2 can use the built-in DNS service to ping bbox1 by name via the overlay network; it can also reach outside networks via the bridge network (docker_gwbridge).
[peter@peter-KVM ~ [host2]]$ docker exec bbox2 ping -c 2 bbox1
PING bbox1 (10.0.0.2): 56 data bytes
64 bytes from 10.0.0.2: seq=0 ttl=64 time=0.838 ms
64 bytes from 10.0.0.2: seq=1 ttl=64 time=0.653 ms

--- bbox1 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.653/0.745/0.838 ms
[peter@peter-KVM ~ [host2]]$ 
[peter@peter-KVM ~ [host2]]$ docker exec bbox2 ping -c 2 10.253.4.206
PING 10.253.4.206 (10.253.4.206): 56 data bytes
64 bytes from 10.253.4.206: seq=0 ttl=63 time=0.425 ms
64 bytes from 10.253.4.206: seq=1 ttl=63 time=0.279 ms

--- 10.253.4.206 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.279/0.352/0.425 ms
[peter@peter-KVM ~ [host2]]$ 
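
To see the name resolution itself, you can query Docker's embedded DNS server directly from the container (busybox ships an nslookup applet, and the embedded resolver lives at 127.0.0.11 inside user-defined networks):
[peter@peter-KVM ~ [host2]]$ docker exec bbox2 nslookup bbox1
It should resolve bbox1 to 10.0.0.2.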


10. Check the VXLAN ID on host1 and host2. Docker keeps its network namespaces under /var/run/docker/netns, which ip netns does not scan by default, so a symlink makes them visible.
peter@host1:~$ sudo ln -s /var/run//docker/netns /var/run/netns
peter@host1:~$
peter@host1:~$ sudo ip netns
RTNETLINK answers: Invalid argument
RTNETLINK answers: Invalid argument
netns
c6f2b189e1cf (id: 1)
1-10a31605f5 (id: 0)
peter@host1:~$ 
peter@host1:~$ sudo ip netns exec 1-10a31605f5 ip -d l show
RTNETLINK answers: Invalid argument
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 promiscuity 0 addrgenmode eui64 
2: br0:  mtu 1450 qdisc noqueue state UP mode DEFAULT group default 
    link/ether 52:04:96:13:74:b7 brd ff:ff:ff:ff:ff:ff promiscuity 0 
    bridge forward_delay 1500 hello_time 200 max_age 2000 ageing_time 30000 stp_state 0 priority 32768 vlan_filtering 0 vlan_protocol 802.1Q addrgenmode eui64 
5: vxlan0:  mtu 1450 qdisc noqueue master br0 state UNKNOWN mode DEFAULT group default 
    link/ether 52:04:96:13:74:b7 brd ff:ff:ff:ff:ff:ff link-netnsid 0 promiscuity 1 
    vxlan id 256 srcport 0 0 dstport 4789 proxy l2miss l3miss ageing 300 udpcsum 
    bridge_slave state forwarding priority 32 cost 100 hairpin off guard off root_block off fastleave off learning on flood on addrgenmode eui64 
7: veth0@if6:  mtu 1450 qdisc noqueue master br0 state UP mode DEFAULT group default 
    link/ether de:fa:41:01:71:83 brd ff:ff:ff:ff:ff:ff link-netnsid 1 promiscuity 1 
    veth 
    bridge_slave state forwarding priority 32 cost 2 hairpin off guard off root_block off fastleave off learning on flood on addrgenmode eui64 
peter@host1:~$ 

peter@host2:~$ sudo ln -s /var/run//docker/netns /var/run/netns
peter@host2:~$ 
peter@host2:~$ sudo ip netns
3849ca5347d3 (id: 1)
1-10a31605f5 (id: 0)
peter@host2:~$ 
peter@host2:~$ sudo ip netns exec 1-10a31605f5 ip -d l 
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 promiscuity 0 addrgenmode eui64 
2: br0:  mtu 1450 qdisc noqueue state UP mode DEFAULT group default 
    link/ether a6:4d:70:14:8c:63 brd ff:ff:ff:ff:ff:ff promiscuity 0 
    bridge forward_delay 1500 hello_time 200 max_age 2000 ageing_time 30000 stp_state 0 priority 32768 vlan_filtering 0 vlan_protocol 802.1Q addrgenmode eui64 
6: vxlan0:  mtu 1450 qdisc noqueue master br0 state UNKNOWN mode DEFAULT group default 
    link/ether aa:1b:be:35:fe:be brd ff:ff:ff:ff:ff:ff link-netnsid 0 promiscuity 1 
    vxlan id 256 srcport 0 0 dstport 4789 proxy l2miss l3miss ageing 300 udpcsum 
    bridge_slave state forwarding priority 32 cost 100 hairpin off guard off root_block off fastleave off learning on flood on addrgenmode eui64 
8: veth0@if7:  mtu 1450 qdisc noqueue master br0 state UP mode DEFAULT group default 
    link/ether a6:4d:70:14:8c:63 brd ff:ff:ff:ff:ff:ff link-netnsid 1 promiscuity 1 
    veth 
    bridge_slave state forwarding priority 32 cost 2 hairpin off guard off root_block off fastleave off learning on flood on addrgenmode eui64 
peter@host2:~$ 
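Both hosts show vxlan id 256 on the vxlan0 device, confirming that ov_net01 maps to the same VXLAN segment (VNI 256) on host1 and host2. If you want to see the remote VTEP entries the overlay has learned, the bridge forwarding database inside the namespace shows them (a standard iproute2 command):
peter@host1:~$ sudo ip netns exec 1-10a31605f5 bridge fdb show dev vxlan0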


11. Docker overlay network isolation: different VXLAN networks are isolated from each other,
so containers on ov_net01 and ov_net02 cannot ping each other.
[peter@peter-KVM ~ [host1]]$ docker network create -d overlay ov_net02
5b6dba9348d0b31f4a5a8ca6abc5fd3c72823038d892765139bd9145cde647f6
[peter@peter-KVM ~ [host1]]$ 
[peter@peter-KVM ~ [host1]]$ docker run -itd --name bbox3 --network ov_net02 busybox
68e403cbf528b1e3a3d2c6916f8c44bf9ee0a428a7353efa5c28be035aa84a6a
[peter@peter-KVM ~ [host1]]$ 
[peter@peter-KVM ~ [host1]]$ docker exec -it bbox3 ip r
default via 172.18.0.1 dev eth1 
10.0.1.0/24 dev eth0 scope link  src 10.0.1.2 
172.18.0.0/16 dev eth1 scope link  src 172.18.0.3 
[peter@peter-KVM ~ [host1]]$ 
[peter@peter-KVM ~ [host1]]$ docker exec -it bbox3 ping -c 2 10.0.0.2
PING 10.0.0.2 (10.0.0.2): 56 data bytes

--- 10.0.0.2 ping statistics ---
2 packets transmitted, 0 packets received, 100% packet loss
[peter@peter-KVM ~ [host1]]$ 
[peter@peter-KVM ~ [host1]]$ docker exec -it bbox3 ping -c 2 172.17.0.2
PING 172.17.0.2 (172.17.0.2): 56 data bytes

--- 172.17.0.2 ping statistics ---
2 packets transmitted, 0 packets received, 100% packet loss
[peter@peter-KVM ~ [host1]]$ 
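
The isolation can also be confirmed at the VXLAN layer: ov_net02 gets its own namespace and vxlan device with a different VNI. Following the naming pattern from step 10, the namespace is likely 2-5b6dba9348 (an index plus the first ten characters of the network ID; check sudo ip netns for the exact name):
peter@host1:~$ sudo ip netns
peter@host1:~$ sudo ip netns exec 2-5b6dba9348 ip -d l show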


12. If I want bbox3 to ping bbox1, bbox3 needs to join ov_net01.
[peter@peter-KVM ~ [host1]]$ docker network connect ov_net01 bbox3
[peter@peter-KVM ~ [host1]]$ 
[peter@peter-KVM ~ [host1]]$ docker exec -it bbox3 ping -c 2 bbox1
PING bbox1 (10.0.0.2): 56 data bytes
64 bytes from 10.0.0.2: seq=0 ttl=64 time=0.529 ms
64 bytes from 10.0.0.2: seq=1 ttl=64 time=0.141 ms

--- bbox1 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.141/0.335/0.529 ms
[peter@peter-KVM ~ [host1]]$ 
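
After docker network connect, bbox3 is attached to both overlay networks, so it should now have an additional interface on 10.0.0.0/24. You can verify with:
[peter@peter-KVM ~ [host1]]$ docker exec bbox3 ip a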


13. We can also assign the overlay network's IP range with --subnet.
[peter@peter-KVM ~ [host1]]$ docker network create -d overlay --subnet 10.22.1.0/24 ov_net03
49e90f57185498b8cfe6bf15cced1e6e6a68822771a14258169db88e0ecc79eb
[peter@peter-KVM ~ [host1]]$ 
[peter@peter-KVM ~ [host1]]$ docker network inspect ov_net03
[
    {
        "Name": "ov_net03",
        "Id": "49e90f57185498b8cfe6bf15cced1e6e6a68822771a14258169db88e0ecc79eb",
        "Created": "2019-06-16T14:31:13.841271204+08:00",
        "Scope": "global",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "10.22.1.0/24"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]
[peter@peter-KVM ~ [host1]]$ 
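
Because ov_net03 has a user-specified subnet, containers on it can also be given a static address with --ip (bbox4 is a hypothetical container name for illustration):
[peter@peter-KVM ~ [host1]]$ docker run -itd --name bbox4 --network ov_net03 --ip 10.22.1.10 busybox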


First published / Last updated: 2019.06.16 / 2019.06.16
