
In Docker Swarm mode, an overlay network is created using the built-in key-value store when nodes join the cluster. Outside of swarm mode, the Docker engines need an external key-value store (Consul in this example) to share overlay network state. The diagram below illustrates the setup.
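For contrast, here is a brief hedged sketch of the two approaches; the swarm commands are shown only for comparison and are not part of this lab, while the daemon flags are the ones configured later in this section:

# Swarm mode: the built-in store is used, so an overlay can be created directly on a manager
docker swarm init
docker network create -d overlay mynet

# Without swarm mode (this lab): point each Docker daemon at an external key-value store
dockerd -H fd:// --cluster-store=consul://192.168.1.132:8500 --cluster-advertise=enp0s3:2376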

On the Consul server (192.168.1.132):

root@consulserver:~# wget https://releases.hashicorp.com/consul/1.8.3/consul_1.8.3_linux_amd64.zip


--2020-08-19 17:10:51-- https://releases.hashicorp.com/consul/1.8.3/consul_1.8.3_linux_amd64.zip
Resolving releases.hashicorp.com (releases.hashicorp.com)... 2a04:4e42:600::439, 2a04:4e42:400::439,
2a04:4e42:200::439, ...
Connecting to releases.hashicorp.com (releases.hashicorp.com)|2a04:4e42:600::439|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 42054014 (40M) [application/zip]
Saving to: ‘consul_1.8.3_linux_amd64.zip’
consul_1.8.3_linux_amd64.zip 100%
[=============================================================================>] 40.11M 592KB/s in 86s
2020-08-19 17:12:18 (476 KB/s) - ‘consul_1.8.3_linux_amd64.zip’ saved [42054014/42054014]
root@consulserver:~# unzip
Command 'unzip' not found, but can be installed with:
apt install unzip
root@consulserver:~# apt install unzip
Reading package lists... Done
Building dependency tree
Reading state information... Done
Suggested packages:
zip
The following NEW packages will be installed:
unzip
0 upgraded, 1 newly installed, 0 to remove and 23 not upgraded.
Need to get 167 kB of archives.
After this operation, 558 kB of additional disk space will be used.
Get:1 http://in.archive.ubuntu.com/ubuntu bionic/main amd64 unzip amd64 6.0-21ubuntu1 [167 kB]
Fetched 167 kB in 1s (258 kB/s)
Selecting previously unselected package unzip.
(Reading database ... 67636 files and directories currently installed.)
Preparing to unpack .../unzip_6.0-21ubuntu1_amd64.deb ...
Unpacking unzip (6.0-21ubuntu1) ...
Setting up unzip (6.0-21ubuntu1) ...
Processing triggers for mime-support (3.60ubuntu1) ...
Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
root@consulserver:~#
root@consulserver:~# unzip consul_1.8.3_linux_amd64.zip
Archive: consul_1.8.3_linux_amd64.zip
inflating: consul
root@consulserver:~# ls -l
total 151520
-rwxr-xr-x 1 root root 113091757 Aug 12 19:05 consul
-rw-r--r-- 1 root root 42054014 Aug 12 21:14 consul_1.8.3_linux_amd64.zip
root@consulserver:~# mv consul /usr/local/bin/
root@consulserver:~# consul --help
Usage: consul [--version] [--help] <command> [<args>]
Available commands are:
acl Interact with Consul's ACLs
agent Runs a Consul agent
catalog Interact with the catalog
config Interact with Consul's Centralized Configurations
connect Interact with Consul Connect
debug Records a debugging archive for operators
event Fire a new event
exec Executes a command on Consul nodes
force-leave Forces a member of the cluster to enter the "left" state
info Provides debugging information for operators.
intention Interact with Connect service intentions
join Tell Consul agent to join cluster
keygen Generates a new encryption key
keyring Manages gossip layer encryption keys
kv Interact with the key-value store
leave Gracefully leaves the Consul cluster and shuts down
lock Execute a command holding a lock
login Login to Consul using an auth method
logout Destroy a Consul token created with login
maint Controls node or service maintenance mode
members Lists the members of a Consul cluster
monitor Stream logs from a Consul agent
operator Provides cluster-level tools for Consul operators
reload Triggers the agent to reload configuration files
rtt Estimates network round trip time between nodes
services Interact with services
snapshot Saves, restores and inspects snapshots of Consul server state
tls Builtin helpers for creating CAs and certificates
validate Validate config files/directories
version Prints the Consul version
watch Watch for changes in Consul

Now, let's start a standalone Consul server. It can be run directly in the foreground with the command below, or set up as a systemd service as shown in the alternative that follows.

root@consulserver:~# consul agent -server -dev -ui -client 0.0.0.0 &

or

root@consulserver:~# groupadd --system consul


root@consulserver:~# useradd -s /sbin/nologin --system -g consul consul
root@consulserver:~# mkdir -p /var/lib/consul
root@consulserver:~# chown -R consul:consul /var/lib/consul
root@consulserver:~# chmod -R 775 /var/lib/consul
root@consulserver:~#
root@consulserver:~# mkdir /etc/consul.d
root@consulserver:~# chown -R consul:consul /etc/consul.d
root@consulserver:~#
root@consulserver:~# vi /etc/systemd/system/consul.service
root@consulserver:~# cat /etc/systemd/system/consul.service
[Unit]
Description=Consul Service Discovery Agent
Documentation=https://www.consul.io/
After=network-online.target
Wants=network-online.target
[Service]
Type=simple
User=consul
Group=consul
ExecStart=/usr/local/bin/consul agent -server -dev -ui \
-client=192.168.1.132 \
-advertise=192.168.1.132 \
-bind=192.168.1.132 \
-data-dir=/var/lib/consul \
-node=consulserver \
-config-dir=/etc/consul.d
ExecReload=/bin/kill -HUP $MAINPID
KillSignal=SIGINT
TimeoutStopSec=5
Restart=on-failure
SyslogIdentifier=consul
[Install]
WantedBy=multi-user.target
root@consulserver:~#
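Since the unit file is new and the service is not yet enabled (the status output below reports it as disabled), it is worth reloading systemd and, optionally, enabling the service at boot before starting it; a small hedged addition to the steps above:

systemctl daemon-reload
systemctl enable consul.service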
root@consulserver:~# systemctl start consul.service
root@consulserver:~# systemctl status consul.service
● consul.service - Consul Service Discovery Agent
Loaded: loaded (/etc/systemd/system/consul.service; disabled; vendor preset: enabled)
Active: active (running) since Wed 2020-08-19 18:09:51 UTC; 6s ago
Docs: https://www.consul.io/
Main PID: 2972 (consul)
Tasks: 7 (limit: 2824)
CGroup: /system.slice/consul.service
└─2972 /usr/local/bin/consul agent -server -dev -ui -advertise=192.168.1.132 -bind=192.168.1.132 -data-
dir=/var/lib/consul -node=consulserver -conf
Aug 19 18:09:51 consulserver consul[2972]: 2020-08-19T18:09:51.405Z [INFO] agent.leader: started routine:
routine="federation state pruning"
Aug 19 18:09:51 consulserver consul[2972]: 2020-08-19T18:09:51.409Z [INFO] agent.leader: started routine:
routine="CA root pruning"
Aug 19 18:09:51 consulserver consul[2972]: 2020-08-19T18:09:51.410Z [DEBUG] agent.server: Skipping self join
check for node since the cluster is too small
Aug 19 18:09:51 consulserver consul[2972]: 2020-08-19T18:09:51.410Z [INFO] agent.server: member joined, marking
health alive: member=consulserver
Aug 19 18:09:51 consulserver consul[2972]: 2020-08-19T18:09:51.655Z [DEBUG] agent: Skipping remote check since it
is managed automatically: check=serfHeal
Aug 19 18:09:51 consulserver consul[2972]: 2020-08-19T18:09:51.656Z [INFO] agent: Synced node info
Aug 19 18:09:51 consulserver consul[2972]: 2020-08-19T18:09:51.656Z [DEBUG] agent: Node info in sync
Aug 19 18:09:51 consulserver consul[2972]: 2020-08-19T18:09:51.704Z [INFO] agent.server: federation state anti-
entropy synced
Aug 19 18:09:51 consulserver consul[2972]: 2020-08-19T18:09:51.711Z [DEBUG] agent: Skipping remote check since it
is managed automatically: check=serfHeal
Aug 19 18:09:51 consulserver consul[2972]: 2020-08-19T18:09:51.712Z [DEBUG] agent: Node info in sync
root@consulserver:~#

root@consulserver:~# consul members


Node          Address             Status  Type    Build  Protocol  DC   Segment
consulserver  192.168.1.132:8301  alive   server  1.8.3  2         dc1  <all>
root@consulserver:~#
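As a quick sanity check that is not part of the original transcript, the key-value store and HTTP API can be exercised directly from any machine that can reach the Consul server:

# Confirm the HTTP API responds and a leader has been elected
curl http://192.168.1.132:8500/v1/status/leader

# Write and read back a scratch key (test/docker is used only for this check)
consul kv put -http-addr=http://192.168.1.132:8500 test/docker hello
consul kv get -http-addr=http://192.168.1.132:8500 test/docker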

On Docker engine servers:

root@ubuntuserverdemo:~# systemctl stop docker


Warning: Stopping docker.service, but it can still be activated by:
docker.socket
root@ubuntuserverdemo:~# systemctl status docker
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: inactive (dead) since Wed 2020-08-19 17:29:40 UTC; 4s ago
Docs: https://docs.docker.com
Process: 956 ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock (code=exited,
status=0/SUCCESS)
Main PID: 956 (code=exited, status=0/SUCCESS)
Aug 19 17:09:05 ubuntuserverdemo dockerd[956]: time="2020-08-19T17:09:05.679952455Z" level=info msg="Default bridge
(docker0) is assigned with an IP address 1
Aug 19 17:09:05 ubuntuserverdemo dockerd[956]: time="2020-08-19T17:09:05.833057924Z" level=info msg="Loading
containers: done."
Aug 19 17:09:09 ubuntuserverdemo dockerd[956]: time="2020-08-19T17:09:09.394537088Z" level=info msg="Docker daemon"
commit=48a66213fe graphdriver(s)=overlay2
Aug 19 17:09:09 ubuntuserverdemo dockerd[956]: time="2020-08-19T17:09:09.571466416Z" level=info msg="Daemon has
completed initialization"
Aug 19 17:09:10 ubuntuserverdemo systemd[1]: Started Docker Application Container Engine.
Aug 19 17:09:10 ubuntuserverdemo dockerd[956]: time="2020-08-19T17:09:10.537673012Z" level=info msg="API listen on
/var/run/docker.sock"
Aug 19 17:29:40 ubuntuserverdemo dockerd[956]: time="2020-08-19T17:29:40.357502516Z" level=info msg="Processing
signal 'terminated'"
Aug 19 17:29:40 ubuntuserverdemo dockerd[956]: time="2020-08-19T17:29:40.360590091Z" level=info msg="Daemon shutdown
complete"
Aug 19 17:29:40 ubuntuserverdemo systemd[1]: Stopping Docker Application Container Engine...
Aug 19 17:29:40 ubuntuserverdemo systemd[1]: Stopped Docker Application Container Engine.
root@ubuntuserverdemo:~#

root@ubuntuserverdemo:~# cat /lib/systemd/system/docker.service


[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStart=/usr/bin/dockerd -H fd:// --cluster-store=consul://192.168.1.132:8500 --cluster-advertise=enp0s3:2376 --containerd=/run/containerd/containerd.sock
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
# Both the old, and new location are accepted by systemd 229 and up, so using the old location
# to make them work for either version of systemd.
StartLimitBurst=3
# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
# this option work for either version of systemd.
StartLimitInterval=60s
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Comment TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this option.
TasksMax=infinity
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
root@ubuntuserverdemo:~#
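The two flags added to ExecStart are what wire the engine to Consul: --cluster-store points the daemon at the external key-value store, and --cluster-advertise tells it which interface and port to advertise to the other engines. On Docker versions that still support this legacy cluster store, the same settings can instead be placed in /etc/docker/daemon.json (remove them from ExecStart in that case, otherwise the daemon refuses to start because the options are specified twice); a minimal sketch:

{
  "cluster-store": "consul://192.168.1.132:8500",
  "cluster-advertise": "enp0s3:2376"
}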

root@ubuntuserverdemo:~# systemctl daemon-reload


root@ubuntuserverdemo:~# systemctl start docker
root@ubuntuserverdemo:~#
root@ubuntuserverdemo:~#
root@ubuntuserverdemo:~#
root@ubuntuserverdemo:~# systemctl status docker
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2020-08-19 17:31:48 UTC; 4s ago
Docs: https://docs.docker.com
Main PID: 2356 (dockerd)
Tasks: 12
CGroup: /system.slice/docker.service
└─2356 /usr/bin/dockerd -H fd:// --cluster-store=consul://192.168.1.132:8500 --cluster-
advertise=enp0s3:2376 --containerd=/run/containerd/container
Aug 19 17:31:47 ubuntuserverdemo dockerd[2356]: time="2020-08-19T17:31:47.919949045Z" level=warning msg="Your kernel
does not support cgroup rt period"
Aug 19 17:31:47 ubuntuserverdemo dockerd[2356]: time="2020-08-19T17:31:47.920133758Z" level=warning msg="Your kernel
does not support cgroup rt runtime"
Aug 19 17:31:47 ubuntuserverdemo dockerd[2356]: time="2020-08-19T17:31:47.920533941Z" level=info msg="Loading
containers: start."
Aug 19 17:31:48 ubuntuserverdemo dockerd[2356]: time="2020-08-19T17:31:48.190495945Z" level=info msg="Default bridge
(docker0) is assigned with an IP address
Aug 19 17:31:48 ubuntuserverdemo dockerd[2356]: time="2020-08-19T17:31:48.237918469Z" level=info msg="Loading
containers: done."
Aug 19 17:31:48 ubuntuserverdemo dockerd[2356]: time="2020-08-19T17:31:48.326400681Z" level=info msg="Docker daemon"
commit=48a66213fe graphdriver(s)=overlay2
Aug 19 17:31:48 ubuntuserverdemo dockerd[2356]: time="2020-08-19T17:31:48.326853019Z" level=info msg="Daemon has
completed initialization"
Aug 19 17:31:48 ubuntuserverdemo dockerd[2356]: time="2020-08-19T17:31:48.439223535Z" level=info msg="2020/08/19
17:31:48 [INFO] serf: EventMemberJoin: ubuntu
Aug 19 17:31:48 ubuntuserverdemo systemd[1]: Started Docker Application Container Engine.
Aug 19 17:31:48 ubuntuserverdemo dockerd[2356]: time="2020-08-19T17:31:48.481659878Z" level=info msg="API listen on
/var/run/docker.sock"
root@ubuntuserverdemo:~#
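With the daemon restarted, each engine registers itself and its networks in Consul. The stored keys can be browsed in the Consul web UI at http://192.168.1.132:8500/ui (the agent was started with -ui) or listed over the HTTP API; the docker/ prefix below is libnetwork's usual prefix and is stated here as an assumption, since the original transcript does not show it:

# List every key currently in the store
curl "http://192.168.1.132:8500/v1/kv/?keys"

# Or list just the keys under the docker/ prefix (assumed prefix)
consul kv get -http-addr=http://192.168.1.132:8500 -keys docker/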

Now let's create an overlay network on one Docker host (the 130 node) and verify that it shows up on the other node (131).

On the 130 node (ubuntuserverdemo):

root@ubuntuserverdemo:~# docker network ls


NETWORK ID NAME DRIVER SCOPE
90fc3bed93b6 bridge bridge local
199523bedd56 host host local
c716838a74fd none null local
root@ubuntuserverdemo:~#
root@ubuntuserverdemo:~# docker network create --driver overlay --subnet 192.168.0.0/24 visualpath
b7f8e4485bbdafba58928b51d0660ed50f85943790068f3056c1c720eac6e5d8
root@ubuntuserverdemo:~#

root@ubuntuserverdemo:~# docker network ls


NETWORK ID NAME DRIVER SCOPE
90fc3bed93b6 bridge bridge local
199523bedd56 host host local
c716838a74fd none null local
b7f8e4485bbd visualpath overlay global
root@ubuntuserverdemo:~#

On the 131 node (ubuntuserverdest):

root@ubuntuserverdest:~# docker network ls


NETWORK ID NAME DRIVER SCOPE
73913dbb4f1c bridge bridge local
199523bedd56 host host local
c716838a74fd none null local
b7f8e4485bbd visualpath overlay global
root@ubuntuserverdest:~#

Note: the ID of the overlay network (b7f8e4485bbd) is the same on both machines.
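To double-check that both engines share one and the same definition (a quick verification that is not in the original transcript), the network can be inspected on either host and the subnet compared with what was passed to docker network create:

docker network inspect visualpath

# or just print the subnet
docker network inspect -f '{{range .IPAM.Config}}{{.Subnet}}{{end}}' visualpath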

On the 130 Docker server, let's create a container named source and assign it an IP address from the overlay network created above.

root@ubuntuserverdemo:~# docker run -d -ti --ip 192.168.0.10 --net visualpath --name source ubuntu:18.04
Unable to find image 'ubuntu:18.04' locally
18.04: Pulling from library/ubuntu
7595c8c21622: Pull complete
d13af8ca898f: Pull complete
70799171ddba: Pull complete
b6c12202c5ef: Pull complete
Digest: sha256:a61728f6128fb4a7a20efaa7597607ed6e69973ee9b9123e3b4fd28b7bba100b
Status: Downloaded newer image for ubuntu:18.04
5fa594bd547c2bff55917b20352a83479fef01ab3caf6bec6d49e8c59027f3f3
root@ubuntuserverdemo:~# docker ps
CONTAINER ID   IMAGE          COMMAND       CREATED              STATUS              PORTS   NAMES
5fa594bd547c   ubuntu:18.04   "/bin/bash"   About a minute ago   Up About a minute           source
root@ubuntuserverdemo:~#
root@ubuntuserverdemo:~# docker container inspect --format "{{ .NetworkSettings.Networks.visualpath.IPAddress }}" source
192.168.0.10
root@ubuntuserverdemo:~#
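A container that is already running can also be attached to, or detached from, the overlay after the fact using Docker's standard network commands; a hedged sketch in which mycontainer and 192.168.0.20 are placeholders:

# Attach an existing container with a fixed address from the overlay subnet
docker network connect --ip 192.168.0.20 visualpath mycontainer

# Detach it again
docker network disconnect visualpath mycontainer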

Output from inside the source container created above:

root@5fa594bd547c:/# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
6: eth0@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
link/ether 02:42:c0:a8:00:0a brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.0.10/24 brd 192.168.0.255 scope global eth0
valid_lft forever preferred_lft forever
9: eth1@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:12:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet 172.18.0.2/16 brd 172.18.255.255 scope global eth1
valid_lft forever preferred_lft forever
root@5fa594bd547c:/#
root@5fa594bd547c:/#
root@5fa594bd547c:/# ip route
default via 172.18.0.1 dev eth1
172.18.0.0/16 dev eth1 proto kernel scope link src 172.18.0.2
192.168.0.0/24 dev eth0 proto kernel scope link src 192.168.0.10
root@5fa594bd547c:/#
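The routing table explains the two interfaces: traffic for the overlay subnet 192.168.0.0/24 leaves through eth0 (the VXLAN-backed overlay, hence the 1450 MTU), while everything else, including internet traffic, follows the default route through eth1 on docker_gwbridge and is NATed by the host. This can be checked per destination with ip route get; a small illustration that is not in the original output:

# Inside the source container
ip route get 192.168.0.2    # overlay peer: resolved via eth0
ip route get 8.8.8.8        # external address: resolved via eth1 (default route)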

Now log in to the 131 Docker server and create a container to ping the source container hosted on the 130 Docker server:

root@ubuntuserverdest:~# docker run -it --rm --net visualpath debian bash


Unable to find image 'debian:latest' locally
latest: Pulling from library/debian
d6ff36c9ec48: Already exists
Digest: sha256:1e74c92df240634a39d050a5e23fb18f45df30846bb222f543414da180b47a5d
Status: Downloaded newer image for debian:latest
root@ece425bbddfa:/#
root@ece425bbddfa:/# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
6: eth0@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
link/ether 02:42:c0:a8:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.0.2/24 brd 192.168.0.255 scope global eth0
valid_lft forever preferred_lft forever
9: eth1@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:12:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet 172.18.0.2/16 brd 172.18.255.255 scope global eth1
valid_lft forever preferred_lft forever
root@ece425bbddfa:/#
root@ece425bbddfa:/# ping 192.168.0.10
PING 192.168.0.10 (192.168.0.10) 56(84) bytes of data.
64 bytes from 192.168.0.10: icmp_seq=1 ttl=64 time=0.727 ms
64 bytes from 192.168.0.10: icmp_seq=2 ttl=64 time=1.08 ms
64 bytes from 192.168.0.10: icmp_seq=3 ttl=64 time=0.865 ms
^C
--- 192.168.0.10 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 6ms
rtt min/avg/max/mdev = 0.727/0.890/1.080/0.149 ms
root@ece425bbddfa:/#
root@ece425bbddfa:/# ping google.com
PING google.com (172.217.31.206) 56(84) bytes of data.
64 bytes from maa03s28-in-f14.1e100.net (172.217.31.206): icmp_seq=1 ttl=108 time=47.10 ms
64 bytes from maa03s28-in-f14.1e100.net (172.217.31.206): icmp_seq=2 ttl=108 time=66.10 ms
^C
--- google.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 2ms
rtt min/avg/max/mdev = 47.986/57.489/66.992/9.503 ms
root@ece425bbddfa:/#
root@ece425bbddfa:/#
exit
root@ubuntuserverdest:~#

Note: if we try to ping the source container (192.168.0.10) from the ubuntuserverdest Docker host machine itself, rather than from a container, the packets do not get through, since the host has no interface on the overlay network.
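A hedged illustration of that note, run from the host itself rather than from a container (expected to fail, so no output is shown here):

# On ubuntuserverdest, outside any container: the host has no interface on the overlay
ping -c 2 192.168.0.10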

On both docker servers:

root@ubuntuserverdemo:~# docker network ls


NETWORK ID NAME DRIVER SCOPE
90fc3bed93b6 bridge bridge local
76856d44fc6e docker_gwbridge bridge local
199523bedd56 host host local
c716838a74fd none null local
b7f8e4485bbd visualpath overlay global
root@ubuntuserverdemo:~#

root@ubuntuserverdest:~# docker network ls


NETWORK ID NAME DRIVER SCOPE
73913dbb4f1c bridge bridge local
c752686a85b0 docker_gwbridge bridge local
199523bedd56 host host local
c716838a74fd none null local
b7f8e4485bbd visualpath overlay global
root@ubuntuserverdest:~#

root@ubuntuserverdemo:~# docker container inspect source -f {{.NetworkSettings.SandboxKey}}


/var/run/docker/netns/a1c3fa17996a
root@ubuntuserverdemo:~#

root@ubuntuserverdemo:~# nsenter --net=/var/run/docker/netns/a1c3fa17996a ip addr


1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
6: eth0@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
link/ether 02:42:c0:a8:00:0a brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.0.10/24 brd 192.168.0.255 scope global eth0
valid_lft forever preferred_lft forever
9: eth1@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:12:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet 172.18.0.2/16 brd 172.18.255.255 scope global eth1
valid_lft forever preferred_lft forever
root@ubuntuserverdemo:~#

or

root@ubuntuserverdemo:~#
root@ubuntuserverdemo:~# nsenter --net=/var/run/docker/netns/a1c3fa17996a ip addr show eth0
6: eth0@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
link/ether 02:42:c0:a8:00:0a brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.0.10/24 brd 192.168.0.255 scope global eth0
valid_lft forever preferred_lft forever
root@ubuntuserverdemo:~#
root@ubuntuserverdemo:~#
root@ubuntuserverdemo:~#

root@ubuntuserverdemo:~# nsenter --net=/var/run/docker/netns/a1c3fa17996a ethtool -S eth0


NIC statistics:
peer_ifindex: 7
root@ubuntuserverdemo:~# nsenter --net=/var/run/docker/netns/a1c3fa17996a ethtool -S eth1
NIC statistics:
peer_ifindex: 10
root@ubuntuserverdemo:~#
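The peer_ifindex values tie the container's veth endpoints to their other ends: eth0's peer is interface index 7, which lives inside the overlay network's namespace (it appears there as veth0 attached to br0, as the overlay namespace listing further below shows), and eth1's peer is index 10, a host-side veth plugged into docker_gwbridge. Hedged commands to confirm both mappings:

# Interface index 7 inside the overlay namespace (veth0 on br0)
nsenter --net=/var/run/docker/netns/1-b7f8e4485b ip link show

# Interface index 10 on the host, attached to docker_gwbridge
ip -o link | grep '^10:'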

root@ubuntuserverdemo:~# docker inspect docker_gwbridge


[
    {
        "Name": "docker_gwbridge",
        "Id": "76856d44fc6ee28f1ecc9c9a1cb5a1a0f53ee174e16556bfaf5a84f23eaa5ef3",
        "Created": "2020-08-19T17:41:16.290204732Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.18.0.0/16",
                    "Gateway": "172.18.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "5fa594bd547c2bff55917b20352a83479fef01ab3caf6bec6d49e8c59027f3f3": {
                "Name": "gateway_a1c3fa17996a",
                "EndpointID": "ec54007590f6b0aa7044f4c5600c8a6e0961e732508caaa7d1397a137c924164",
                "MacAddress": "02:42:ac:12:00:02",
                "IPv4Address": "172.18.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.enable_icc": "false",            --> this bridge cannot be used for inter-container communication
            "com.docker.network.bridge.enable_ip_masquerade": "true",   --> traffic from containers is NATed to reach external networks
            "com.docker.network.bridge.name": "docker_gwbridge"
        },
        "Labels": {}
    }
]
root@ubuntuserverdemo:~#
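The Containers section confirms that the source container's eth1 (172.18.0.2) is the endpoint attached to docker_gwbridge, matching the addresses seen inside the container earlier. The attached endpoints can also be pulled out directly with a format string; a small example that is not part of the original run:

docker network inspect -f '{{json .Containers}}' docker_gwbridge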

root@ubuntuserverdemo:~# nsenter --net=/var/run/docker/netns/1-b7f8e4485b ip addr


1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
link/ether 4a:a7:b0:c6:85:0c brd ff:ff:ff:ff:ff:ff
inet 192.168.0.1/24 brd 192.168.0.255 scope global br0
valid_lft forever preferred_lft forever
5: vxlan0@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master br0 state UNKNOWN group default
link/ether a6:c3:12:2d:a9:df brd ff:ff:ff:ff:ff:ff link-netnsid 0
7: veth0@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master br0 state UP group default
link/ether 4a:a7:b0:c6:85:0c brd ff:ff:ff:ff:ff:ff link-netnsid 1
root@ubuntuserverdemo:~#
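Inside the overlay network's own namespace, br0 holds the overlay gateway address 192.168.0.1, vxlan0 carries the encapsulated traffic between the hosts, and veth0 is the peer of the source container's eth0. The VXLAN ID (VNI) in use can be read with ip -d; a hedged check, shown here without its output:

nsenter --net=/var/run/docker/netns/1-b7f8e4485b ip -d link show vxlan0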

Finally, the VXLAN encapsulation itself can be observed by capturing UDP port 4789 on the host's uplink while a container pings across the overlay (the capture below is from a similar two-host overlay setup, so the network name and addresses differ from the example above):

root@ubuntuserverdest:~# docker run -it --rm --net demonet debian ping 192.168.0.100
PING 192.168.0.100 (192.168.0.100): 56 data bytes
64 bytes from 192.168.0.100: icmp_seq=0 ttl=64 time=0.680 ms
64 bytes from 192.168.0.100: icmp_seq=1 ttl=64 time=0.503 ms
root@ubuntuserverdemo:~# sudo tcpdump -pni enp0s3 "port 4789"
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
12:55:53.652322 IP 10.0.0.11.64667 > 10.0.0.10.4789: VXLAN, flags [I] (0x08), vni 256
IP 192.168.0.2 > 192.168.0.100: ICMP echo request, id 1, seq 0, length 64
12:55:53.652409 IP 10.0.0.10.47697 > 10.0.0.11.4789: VXLAN, flags [I] (0x08), vni 256
IP 192.168.0.100 > 192.168.0.2: ICMP echo reply, id 1, seq 0, length 64

For etcd key-value store:
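The same setup works with etcd in place of Consul, since the legacy cluster store (libkv) also supports etcd and ZooKeeper backends. A minimal hedged sketch, assuming an etcd instance listening on its standard client port 2379 at a placeholder address:

# Point the Docker daemon at etcd instead of Consul
dockerd -H fd:// --cluster-store=etcd://192.168.1.132:2379 --cluster-advertise=enp0s3:2376 \
        --containerd=/run/containerd/containerd.sock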
