
49 - Docker Network Management and Single-Host Multi-Container Orchestration with Compose

source link: https://blog.51cto.com/mooreyxia/6004825


Default network configuration after installing Docker

  • After the Docker service is installed, a network interface named docker0 is created on every host by default, with the IP address 172.17.0.1/16
[root@ubuntu2204 ~]#ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:0c:29:df:99:92 brd ff:ff:ff:ff:ff:ff
altname enp2s1
altname ens33
inet 10.0.0.200/24 brd 10.0.0.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fedf:9992/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:23:4c:b7:1e brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever

Network configuration after creating a container

  • Each time a new container is created, the host gains a virtual interface that forms a pair with the container's interface, e.g. 7: veth6ef893c@if6 on the host and 6: eth0@if7 inside the container; the @if indexes show how the two ends are linked
[root@ubuntu2204 ~]#ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:0c:29:f6:07:67 brd ff:ff:ff:ff:ff:ff
altname enp2s1
altname ens33
inet 10.0.0.202/24 brd 10.0.0.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fef6:767/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:3d:00:d5:6c brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:3dff:fe00:d56c/64 scope link
valid_lft forever preferred_lft forever
7: veth6ef893c@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether 3e:71:3c:16:e0:16 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::3c71:3cff:fe16:e016/64 scope link
valid_lft forever preferred_lft forever
[root@ubuntu2204 ~]#docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
90aea87055d3 busybox:latest "tail -f /etc/hosts" 59 seconds ago Up 57 seconds docker-test1
[root@ubuntu2204 ~]#docker exec -it docker-test1 sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
6: eth0@if7: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
valid_lft forever preferred_lft forever
/ #
  • Each new container automatically gets an address from the 172.17.0.0/16 subnet; by default the first container gets 172.17.0.2, the second 172.17.0.3, and so on
[root@ubuntu2204 ~]#docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
90aea87055d3 busybox:latest "tail -f /etc/hosts" 6 minutes ago Up 6 minutes docker-test1
[root@ubuntu2204 ~]#docker run -d --name docker-test2 busybox:latest tail -f /etc/hosts
94d80db6d0191ce228b19ed4fe75aa7f173b9dffe188bb5eb83bd36116f00fd9
[root@ubuntu2204 ~]#docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
94d80db6d019 busybox:latest "tail -f /etc/hosts" 6 seconds ago Up 5 seconds docker-test2
90aea87055d3 busybox:latest "tail -f /etc/hosts" 7 minutes ago Up 7 minutes docker-test1
[root@ubuntu2204 ~]#ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:0c:29:f6:07:67 brd ff:ff:ff:ff:ff:ff
altname enp2s1
altname ens33
inet 10.0.0.202/24 brd 10.0.0.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fef6:767/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:3d:00:d5:6c brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:3dff:fe00:d56c/64 scope link
valid_lft forever preferred_lft forever
7: veth6ef893c@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether 3e:71:3c:16:e0:16 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::3c71:3cff:fe16:e016/64 scope link
valid_lft forever preferred_lft forever
9: veth8b35f7a@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether 1a:c7:b4:f0:b9:40 brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet6 fe80::18c7:b4ff:fef0:b940/64 scope link
valid_lft forever preferred_lft forever
[root@ubuntu2204 ~]#docker exec -it docker-test2 sh
/ #
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
8: eth0@if9: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
valid_lft forever preferred_lft forever
/ #
  • Each time a container is restarted, its veth interface and interface name change, and its IP address may change as well
#After restarting containers and creating new ones, the IPs originally used may be taken over by the new containers
[root@ubuntu2204 ~]#docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
575cc8edb1a5 busybox:latest "tail -f /etc/hosts" 3 minutes ago Exited (137) 22 seconds ago docker-test4
64eb242aecaa busybox:latest "tail -f /etc/hosts" 3 minutes ago Exited (137) 22 seconds ago docker-test3
94d80db6d019 busybox:latest "tail -f /etc/hosts" 45 minutes ago Exited (137) 22 seconds ago docker-test2
90aea87055d3 busybox:latest "tail -f /etc/hosts" 53 minutes ago Exited (137) 22 seconds ago docker-test1
[root@ubuntu2204 ~]#docker run -d --name docker-test5 busybox:latest tail -f /etc/hosts
ff723d5aa3ad1ffe0cdcaf7ef4e2aab3321ce856e4f7090d8b0c3c9b9ebc366f
[root@ubuntu2204 ~]#docker run -d --name docker-test6 busybox:latest tail -f /etc/hosts
a51b6cf27acc0d4ace8929c72c010da4108bdea223ee8e91515ffd81c2cefb88
[root@ubuntu2204 ~]#docker exec -it docker-test5 sh
/ #
/ # hostname -i
172.17.0.2
/ # exit
[root@ubuntu2204 ~]#docker exec -it docker-test6 sh
/ #
/ # hostname -i
172.17.0.3
  • After a container is created:
  • the container's ID on the host is added to /etc/hosts as a hostname mapped to the container's IP
  • the host-side veth interface is attached to the docker0 bridge
  • when the container stops, the veth interface is removed automatically
#The container's ID on the host is mapped as a hostname to the container's IP
[root@ubuntu2204 ~]#docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
90aea87055d3 busybox:latest "tail -f /etc/hosts" 19 minutes ago Up 4 minutes docker-test1
[root@ubuntu2204 ~]#docker exec -it docker-test1 sh
/ #
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
12: eth0@if13: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
valid_lft forever preferred_lft forever
/ # cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.2 90aea87055d3
/ #

#At this point the host's veth interface is bridged to docker0
[root@ubuntu2204 ~]#brctl show
bridge name bridge id STP enabled interfaces
docker0 8000.02423d00d56c no vethea0a56b
[root@ubuntu2204 ~]#ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:0c:29:f6:07:67 brd ff:ff:ff:ff:ff:ff
altname enp2s1
altname ens33
inet 10.0.0.202/24 brd 10.0.0.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fef6:767/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:3d:00:d5:6c brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:3dff:fe00:d56c/64 scope link
valid_lft forever preferred_lft forever
13: vethea0a56b@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether be:30:f6:9d:70:9e brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::bc30:f6ff:fe9d:709e/64 scope link
valid_lft forever preferred_lft forever

#After the container stops, the veth interface is removed automatically
[root@ubuntu2204 ~]#docker stop docker-test1
docker-test1
[root@ubuntu2204 ~]#ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:0c:29:f6:07:67 brd ff:ff:ff:ff:ff:ff
altname enp2s1
altname ens33
inet 10.0.0.202/24 brd 10.0.0.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fef6:767/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:3d:00:d5:6c brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:3dff:fe00:d56c/64 scope link
valid_lft forever preferred_lft forever

Containers on the same host can communicate with each other

By default:

  • different containers on the same host can communicate with each other
  • containers on different hosts use overlapping IP addresses and cannot communicate with each other by default
[root@ubuntu2204 ~]#docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
94d80db6d019 busybox:latest "tail -f /etc/hosts" 28 minutes ago Exited (137) 20 minutes ago docker-test2
90aea87055d3 busybox:latest "tail -f /etc/hosts" 35 minutes ago Exited (137) 10 minutes ago docker-test1
[root@ubuntu2204 ~]#docker start docker-test1
docker-test1
[root@ubuntu2204 ~]#docker start docker-test2
docker-test2
[root@ubuntu2204 ~]#docker exec -it docker-test1 sh
/ #
/ # ping 172.17.0.3
PING 172.17.0.3 (172.17.0.3): 56 data bytes
64 bytes from 172.17.0.3: seq=0 ttl=64 time=0.993 ms
64 bytes from 172.17.0.3: seq=1 ttl=64 time=0.087 ms
^C
--- 172.17.0.3 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.087/0.540/0.993 ms
/ # exit
[root@ubuntu2204 ~]#docker exec -it docker-test2 sh
/ #
/ # ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2): 56 data bytes
64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.092 ms
64 bytes from 172.17.0.2: seq=1 ttl=64 time=0.167 ms
64 bytes from 172.17.0.2: seq=2 ttl=64 time=0.129 ms
^C
--- 172.17.0.2 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.092/0.129/0.167 ms
/ # exit

Example: disabling communication between containers on the same host

#The dockerd option --icc=false disables communication between containers on the same host
Note: if "live-restore": true is set (containers keep running across daemon restarts), the containers must be stopped first for this to take effect

[root@ubuntu2204 ~]#vim /lib/systemd/system/docker.service
[root@ubuntu2204 ~]#cat /lib/systemd/system/docker.service|grep ExecStart
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --icc=false
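#Alternative (a sketch, not part of the original walkthrough): the same option can be set in /etc/docker/daemon.json instead of the ExecStart flag;
#do not configure it in both places, otherwise dockerd will refuse to start
#/etc/docker/daemon.json
{
  "icc": false
}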

#Create two containers and verify that they cannot communicate
[root@ubuntu2204 ~]#systemctl daemon-reload
[root@ubuntu2204 ~]#systemctl restart docker
[root@ubuntu2204 ~]#docker exec -it docker-test3 sh
Error response from daemon: Container 64eb242aecaad240f0acae6c63dff0f90572ef876f0b121038d21ac5c7d83f11 is not running
[root@ubuntu2204 ~]#docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
575cc8edb1a5 busybox:latest "tail -f /etc/hosts" 3 minutes ago Exited (137) 22 seconds ago docker-test4
64eb242aecaa busybox:latest "tail -f /etc/hosts" 3 minutes ago Exited (137) 22 seconds ago docker-test3
94d80db6d019 busybox:latest "tail -f /etc/hosts" 45 minutes ago Exited (137) 22 seconds ago docker-test2
90aea87055d3 busybox:latest "tail -f /etc/hosts" 53 minutes ago Exited (137) 22 seconds ago docker-test1
[root@ubuntu2204 ~]#docker run -d --name docker-test5 busybox:latest tail -f /etc/hosts
ff723d5aa3ad1ffe0cdcaf7ef4e2aab3321ce856e4f7090d8b0c3c9b9ebc366f
[root@ubuntu2204 ~]#docker run -d --name docker-test6 busybox:latest tail -f /etc/hosts
a51b6cf27acc0d4ace8929c72c010da4108bdea223ee8e91515ffd81c2cefb88
[root@ubuntu2204 ~]#docker exec -it docker-test5 sh
/ #
/ # hostname -i
172.17.0.2
/ # exit
[root@ubuntu2204 ~]#docker exec -it docker-test6 sh
/ #
/ # hostname -i
172.17.0.3
/ # ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2): 56 data bytes
exit
^C
--- 172.17.0.2 ping statistics ---
9 packets transmitted, 0 packets received, 100% packet loss
/ # exit
[root@ubuntu2204 ~]#docker exec -it docker-test5 sh
/ #
/ # ping 172.17.0.3
PING 172.17.0.3 (172.17.0.3): 56 data bytes
^C
--- 172.17.0.3 ping statistics ---
2 packets transmitted, 0 packets received, 100% packet loss
/ # exit

Changing the network configuration of the default docker0 bridge

By default Docker creates the docker0 bridge with the IP 172.17.0.1/16, which may conflict with the host's own network; it can be changed to an address in another subnet to avoid the conflict.

Example: changing the docker0 IP to a specified address

[root@ubuntu2204 ~]#vim /etc/docker/daemon.json
[root@ubuntu2204 ~]#cat /etc/docker/daemon.json |grep bip
"bip": "192.168.100.1/24",

#Before the change (the new bip only takes effect after restarting the docker service)
[root@ubuntu2204 ~]#ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:0c:29:f6:07:67 brd ff:ff:ff:ff:ff:ff
altname enp2s1
altname ens33
inet 10.0.0.202/24 brd 10.0.0.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fef6:767/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:3d:00:d5:6c brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:3dff:fe00:d56c/64 scope link
valid_lft forever preferred_lft forever

#After the change
[root@ubuntu2204 ~]#ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:0c:29:f6:07:67 brd ff:ff:ff:ff:ff:ff
altname enp2s1
altname ens33
inet 10.0.0.202/24 brd 10.0.0.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fef6:767/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:3d:00:d5:6c brd ff:ff:ff:ff:ff:ff
inet 192.168.100.1/24 brd 192.168.100.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:3dff:fe00:d56c/64 scope link
valid_lft forever preferred_lft forever

#Method 2
[root@ubuntu2204 ~]#vim /lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
--bip=192.168.100.1/24
[root@ubuntu2204 ~]#systemctl daemon-reload
[root@ubuntu2204 ~]#systemctl restart docker.service
#Note: do not mix the two methods, otherwise the docker service will fail to start

Using a custom bridge instead of the default network settings

Example: replacing the default docker0 with a custom bridge

[root@ubuntu2204 ~]#ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:0c:29:f6:07:67 brd ff:ff:ff:ff:ff:ff
altname enp2s1
altname ens33
inet 10.0.0.202/24 brd 10.0.0.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fef6:767/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:3d:00:d5:6c brd ff:ff:ff:ff:ff:ff
inet 192.168.100.1/24 brd 192.168.100.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:3dff:fe00:d56c/64 scope link
valid_lft forever preferred_lft forever

#Add a custom bridge
[root@ubuntu2204 ~]#apt -y install bridge-utils
正在读取软件包列表... 完成
正在分析软件包的依赖关系树... 完成
正在读取状态信息... 完成
bridge-utils 已经是最新版 (1.7-1ubuntu3)。
bridge-utils 已设置为手动安装。
升级了 0 个软件包,新安装了 0 个软件包, 要卸载 0 个软件包,有 45 个软件包未被升级。
[root@ubuntu2204 ~]#brctl addbr br0
[root@ubuntu2204 ~]#ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:0c:29:f6:07:67 brd ff:ff:ff:ff:ff:ff
altname enp2s1
altname ens33
inet 10.0.0.202/24 brd 10.0.0.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fef6:767/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:3d:00:d5:6c brd ff:ff:ff:ff:ff:ff
inet 192.168.100.1/24 brd 192.168.100.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:3dff:fe00:d56c/64 scope link
valid_lft forever preferred_lft forever
30: br0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether fe:2d:a4:eb:ce:5f brd ff:ff:ff:ff:ff:ff
[root@ubuntu2204 ~]#ip a a 192.168.200.1/24 dev br0
[root@ubuntu2204 ~]#ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:0c:29:f6:07:67 brd ff:ff:ff:ff:ff:ff
altname enp2s1
altname ens33
inet 10.0.0.202/24 brd 10.0.0.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fef6:767/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:3d:00:d5:6c brd ff:ff:ff:ff:ff:ff
inet 192.168.100.1/24 brd 192.168.100.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:3dff:fe00:d56c/64 scope link
valid_lft forever preferred_lft forever
30: br0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether fe:2d:a4:eb:ce:5f brd ff:ff:ff:ff:ff:ff
inet 192.168.200.1/24 scope global br0
valid_lft forever preferred_lft forever

[root@ubuntu2204 ~]#brctl show
bridge name bridge id STP enabled interfaces
br0 8000.fe2da4ebce5f no
docker0 8000.02423d00d56c no

#Point containers at the custom bridge
[root@ubuntu2204 ~]#vim /lib/systemd/system/docker.service
[root@ubuntu2204 ~]#cat /lib/systemd/system/docker.service |grep ExecStart
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock -b br0

[root@ubuntu2204 ~]#systemctl daemon-reload
[root@ubuntu2204 ~]#systemctl restart docker
[root@ubuntu2204 ~]#ps -ef |grep dockerd
root 6024 1 0 12:59 ? 00:00:00 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock -b br0
root 6153 1630 0 13:00 pts/1 00:00:00 grep --color=auto dockerd

#Verify
[root@ubuntu2204 ~]#docker start docker-test6
docker-test6
[root@ubuntu2204 ~]#docker exec -it docker-test6 sh
/ #
/ # hostname -i
192.168.200.2
/ #

Linking containers by name

When a new container is created, Docker automatically assigns its name, container ID, and IP address, so none of them is fixed. To communicate reliably with a specific target container, give it a fixed name; containers then reach the intended target through that fixed name.

There are two kinds of fixed names:

  • the container name
  • an alias for the container name

*Note: both approaches require at least two containers to work

Use case:

  • Containers on the same host can reach each other through custom container names. For example, a service might use nginx for static pages and tomcat for dynamic pages, plus a load balancer such as haproxy that dispatches requests to the nginx and tomcat containers. Because a container's internal IP is assigned dynamically at startup while a fixed container name stays relatively stable, name-based access fits this scenario well.

*Note: if the address of the linked container changes, the current container must be restarted for the change to take effect

Example: inter-container communication using container names

#Syntax:
--link list #Add link to another container
Format:
docker run --name <container name>                    #first create a container with the given name
docker run --link <target container ID or name>       #then create a new container that references it

1. Create the first container with a specified name
[root@ubuntu2204 ~]#docker run -d --name server1 busybox:latest tail -f /etc/hosts
91a59ab63444f51bec0e7c429e6e4ef52ec34f3df72fbde3936fa79cfa74cad3

2. Create the second container, referencing the first container's name
The first container's name is automatically added to the second container's /etc/hosts, so it can be reached by that name
[root@ubuntu2204 ~]#docker run -it --name server2 --link server1 busybox:latest tail -f /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.2 server1 91a59ab63444
172.17.0.3 3b1148c960cf

[root@ubuntu2204 ~]#docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3b1148c960cf busybox:latest "tail -f /etc/hosts" 41 seconds ago Up 40 seconds server2
91a59ab63444 busybox:latest "tail -f /etc/hosts" 2 minutes ago Up 2 minutes server1

3. Test name-based communication inside the containers
server2 ping server1
[root@ubuntu2204 ~]#docker exec -it server2 sh
/ #
/ # hostname -i
172.17.0.3
/ # ping server1
PING server1 (172.17.0.2): 56 data bytes
64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.632 ms
64 bytes from 172.17.0.2: seq=1 ttl=64 time=0.095 ms
64 bytes from 172.17.0.2: seq=2 ttl=64 time=0.109 ms
^C
--- server1 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.095/0.278/0.632 ms
/ #

server1 ping server2
[root@ubuntu2204 ~]#docker exec -it server1 sh
/ #
/ # hostname -i
172.17.0.2
/ # ping server2
ping: bad address 'server2'   --> communication fails; server1 has no hosts entry for server2, so the link only works one way
*Note: if the address of the linked container changes, the current container must be restarted for the change to take effect

*Use case

  • A custom container name may change later, and once it does, programs in other containers that call services by that name have to change with it; calling by the old name after a rename obviously fails, and updating everything each time is tedious. Custom aliases solve this: the container name can change freely as long as the alias stays the same.

Example: create a third container that links to the earlier container under an alias

#Syntax:
docker run --name <container name>
#first create a container with the given name
docker run --name <container name> --link <target container name>:"<alias1> <alias2> ..."
#then create the new container, giving the linked container one or more aliases

[root@ubuntu2204 ~]#docker run -d --name server3 --link server1:server1-alias busybox:latest tail -f /etc/hosts
7076a8091b7d13f76c481ed5b6d532ac7dd17feaaa4013a1a469ffccfed77ba2
[root@ubuntu2204 ~]#docker exec -it server3 sh
/ #
/ # ping server1-alias
PING server1-alias (172.17.0.2): 56 data bytes
64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.194 ms
64 bytes from 172.17.0.2: seq=1 ttl=64 time=0.206 ms
^C
--- server1-alias ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.194/0.200/0.206 ms
/ #

*Once an alias is set, the container can still be reached through the alias even if its name changes

Combined example: connecting WordPress and MySQL containers on the same host via container name and alias

[root@ubuntu2204 ~]#tree /data/dockerfile/
/data/dockerfile/
├── system
│ ├── alpine
│ │ ├── build.sh
│ │ └── Dockerfile
│ ├── centos
│ ├── debian
│ └── ubuntu
└── web
├── apache
├── jdk
├── nginx
│ └── 1.16.1-alpine
│ ├── build.sh
│ ├── Dockerfile
│ ├── index.html
│ ├── nginx-1.16.1.tar.gz
│ └── nginx.conf
└── tomcat

11 directories, 7 files

#Prepare the configuration files
[root@ubuntu2204 ~]#cd /data/dockerfile/web/
[root@ubuntu2204 web]#mkdir -pv lamp_docker/mysql/
mkdir: 已创建目录 'lamp_docker'
mkdir: 已创建目录 'lamp_docker/mysql/'
[root@ubuntu2204 web]#vim lamp_docker/env_mysql.list
[root@ubuntu2204 web]#cat lamp_docker/env_mysql.list
MYSQL_ROOT_PASSWORD=123456
MYSQL_DATABASE=wordpress
MYSQL_USER=wpuser
MYSQL_PASSWORD=wppass
[root@ubuntu2204 web]#vim lamp_docker/env_wordpress.list
[root@ubuntu2204 web]#cat lamp_docker/env_wordpress.list
WORDPRESS_DB_HOST=mysql:3306
WORDPRESS_DB_NAME=wordpress
WORDPRESS_DB_USER=wpuser
WORDPRESS_DB_PASSWORD=wppass
WORDPRESS_TABLE_PREFIX=wp_
[root@ubuntu2204 web]#vim lamp_docker/mysql/mysql_test.cnf
[root@ubuntu2204 web]#cat lamp_docker/mysql/mysql_test.cnf
[mysqld]
server-id=200
log-bin=mysql-bin
[root@ubuntu2204 web]#tree lamp_docker/
lamp_docker/
├── env_mysql.list
├── env_wordpress.list
└── mysql
└── mysql_test.cnf

1 directory, 3 files

#Pull the images
[root@ubuntu2204 web]#docker pull mysql
Using default tag: latest
[root@ubuntu2204 web]#docker pull wordpress
Using default tag: latest
[root@ubuntu2204 web]#docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
wordpress latest fcd4967b9728 7 hours ago 615MB
mysql latest 7484689f290f 5 weeks ago 538MB

#Run the containers
[root@ubuntu2204 ~]#docker run --name mysql -v /data/dockerfile/web/lamp_docker/mysql/:/etc/mysql/conf.d -v /data/mysql:/var/lib/mysql --env-file=/data/dockerfile/web/lamp_docker/env_mysql.list -d -p 3306:3306 mysql:latest
5b44ddd48a69c2f447f950859eba438890b03f15920f9826177bd9844c3461c2
[root@ubuntu2204 ~]#docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5b44ddd48a69 mysql:latest "docker-entrypoint.s…" 4 seconds ago Up 3 seconds 0.0.0.0:3306->3306/tcp, :::3306->3306/tcp, 33060/tcp mysql
[root@ubuntu2204 ~]#docker run -d --name wordpress --link mysql -v /data/wordpress:/var/www/html/wp-content --env-file=/data/dockerfile/web/lamp_docker/env_wordpress.list -p 80:80 wordpress
2935c076a5ddc9b23164aa1bf7fbfb6c8e96cb4a80c6b57e71621c973f030913
[root@ubuntu2204 ~]#docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2935c076a5dd wordpress "docker-entrypoint.s…" 5 seconds ago Up 2 seconds 0.0.0.0:80->80/tcp, :::80->80/tcp wordpress
5b44ddd48a69 mysql:latest "docker-entrypoint.s…" 49 seconds ago Up 48 seconds 0.0.0.0:3306->3306/tcp, :::3306->3306/tcp, 33060/tcp mysql
[root@ubuntu2204 ~]#ss -nlt
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0 4096 127.0.0.53%lo:53 0.0.0.0:*
LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
LISTEN 0 128 127.0.0.1:6010 0.0.0.0:*
LISTEN 0 128 127.0.0.1:6011 0.0.0.0:*
LISTEN 0 4096 0.0.0.0:3306 0.0.0.0:*
LISTEN 0 4096 127.0.0.1:33327 0.0.0.0:*
LISTEN 0 4096 0.0.0.0:80 0.0.0.0:*
LISTEN 0 128 [::]:22 [::]:*
LISTEN 0 128 [::1]:6010 [::]:*
LISTEN 0 128 [::1]:6011 [::]:*
LISTEN 0 4096 [::]:3306 [::]:*
LISTEN 0 4096 [::]:80 [::]:*

#Test

Docker network modes


Docker supports five network modes:

  • none - closed container: only a loopback interface, no outside access
  • bridge - bridged container: the container gets its own interface, paired with a veth interface on the host that is attached to the docker0 bridge
  • host - open container: no network devices of its own, shares the host's network
  • container - joined container: several containers share one externally facing network; besides that, only loopback
  • network-name - a user-defined (custom) network

Listing the networks shows the three default ones

[root@ubuntu2204 ~]#docker network ls
NETWORK ID NAME DRIVER SCOPE
425fb3851709 bridge bridge local
da5807acacf4 host host local
070e6fce9ea4 none null local

Newly created containers use bridge mode by default. When creating a container, the docker run command selects the network mode with the following options:

#Syntax
docker run --network <mode>
docker run --net=<mode>
<mode> can be one of the following:
none
bridge
host
container:<container name or ID>
<custom network name>

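As a quick illustration (a sketch only; the container names here are made up), the same image can be started under each mode like this:

docker run -d --name b-bridge --network bridge busybox:latest tail -f /etc/hosts
docker run -d --name b-host   --network host   busybox:latest tail -f /etc/hosts
docker run -d --name b-none   --network none   busybox:latest tail -f /etc/hosts
docker run -d --name b-joined --network container:b-bridge busybox:latest tail -f /etc/hosts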
bridge network mode

This is Docker's default mode: if no mode is specified, bridge is used, and it is also the most commonly used one. Each container created in this mode gets its own network configuration, such as an IP address, and is connected to a virtual bridge to communicate with the outside world.

  • Containers can communicate with external networks: outbound traffic goes through SNAT, and DNAT lets external hosts reach containers, which is why this mode is also called NAT mode
  • This mode requires ip_forward to be enabled on the host (see the sketch after this list)
  • Characteristics of bridge mode:
  • network isolation: containers on different hosts cannot reach each other directly; each host runs its own independent network
  • no manual configuration needed: containers automatically get an address from 172.17.0.0/16 by default, and this range can be changed
  • outbound Internet access: via the host's physical NIC and SNAT
  • external hosts cannot reach containers directly: inbound access can be allowed by configuring DNAT
  • lower performance: NAT translation adds overhead
  • tedious port management: every published container port must be mapped to a unique host port by hand, otherwise port conflicts occur
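A brief sketch of how SNAT and DNAT come into play (the port number and container name are illustrative; the image is the nginx-alpine:1.16.1 image used later in this article):

sysctl -w net.ipv4.ip_forward=1                                  #make sure forwarding is on (Docker normally enables it)
docker run -d --name web-demo -p 8080:80 nginx-alpine:1.16.1     #-p adds a DNAT rule: host:8080 -> container:80
iptables -t nat -vnL DOCKER                                      #shows the DNAT rule created by -p
iptables -t nat -vnL POSTROUTING                                 #shows the MASQUERADE (SNAT) rule for 172.17.0.0/16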

Example: inspecting the bridge network

[root@ubuntu2204 ~]#docker network inspect bridge
[
{
"Name": "bridge",
"Id": "425fb38517091dc12bdfcd64ec46d56c4f60a1eadd5e93fc650d6af644e3c69f",
"Created": "2023-01-12T10:29:53.504537158+08:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.17.0.0/16",
"Gateway": "172.17.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"2935c076a5ddc9b23164aa1bf7fbfb6c8e96cb4a80c6b57e71621c973f030913": {
"Name": "wordpress",
"EndpointID": "ca498497069804889fd87938efd0a898b6063af33ab513a222eb1b7d3149a160",
"MacAddress": "02:42:ac:11:00:03",
"IPv4Address": "172.17.0.3/16",
"IPv6Address": ""
},
"5b44ddd48a69c2f447f950859eba438890b03f15920f9826177bd9844c3461c2": {
"Name": "mysql",
"EndpointID": "c7fca3c7b4268682f1f4b5ff732022921b956486174b37533005433894ec0c95",
"MacAddress": "02:42:ac:11:00:02",
"IPv4Address": "172.17.0.2/16",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
},
"Labels": {}
}
]

ip_forward is enabled by default after Docker is installed

[root@ubuntu2204 ~]#cat /proc/sys/net/ipv4/ip_forward
1
[root@ubuntu2204 ~]#iptables -vnL -t nat
Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
247 13193 DOCKER all -- * * 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
1 60 DOCKER all -- * * 0.0.0.0/0 !127.0.0.0/8 ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
68 4306 MASQUERADE all -- * !docker0 172.17.0.0/16 0.0.0.0/0
0 0 MASQUERADE tcp -- * * 172.17.0.2 172.17.0.2 tcp dpt:3306
0 0 MASQUERADE tcp -- * * 172.17.0.3 172.17.0.3 tcp dpt:80

Chain DOCKER (2 references)
pkts bytes target prot opt in out source destination
9 817 RETURN all -- docker0 * 0.0.0.0/0 0.0.0.0/0
0 0 DNAT tcp -- !docker0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:3306 to:172.17.0.2:3306
119 6188 DNAT tcp -- !docker0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 to:172.17.0.3:80

#A running container can access the Internet directly
[root@ubuntu2204 ~]#docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
alpine-base v1.2 4f60390e7b99 3 days ago 250MB

[root@ubuntu2204 ~]#docker run -it --rm --name alpine1 alpine-base:v1.2 sh
/ # hostname -i
172.17.0.4
/ # ping www.51cto.com
PING www.51cto.com (203.107.44.140): 56 data bytes
64 bytes from 203.107.44.140: seq=0 ttl=127 time=34.074 ms
64 bytes from 203.107.44.140: seq=1 ttl=127 time=34.403 ms
64 bytes from 203.107.44.140: seq=2 ttl=127 time=32.170 ms
^C
--- www.51cto.com ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 32.170/33.549/34.403 ms

Host mode

  • A container started in host mode does not create its own virtual NIC; it uses the host's interfaces and IP address directly, so the network information seen inside the container is the host's, and the container is accessed via the host IP plus the container's port. Resources other than the network, such as the filesystem and processes, remain isolated from the host. Since the host network is used directly with no address translation, network performance is the best of all modes, but containers cannot reuse the same ports, so this mode suits services whose ports are fixed.

Characteristics of host network mode:

  • selected with --network host
  • shares the host's network
  • no network performance overhead
  • relatively simple network troubleshooting
  • no network isolation between containers
  • network usage cannot be accounted per container
  • difficult port management: port conflicts are likely
  • port mapping is not supported
#Check the host's network configuration
[root@ubuntu2204 ~]#ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
inet6 fe80::42:23ff:fe4c:b71e prefixlen 64 scopeid 0x20<link>
ether 02:42:23:4c:b7:1e txqueuelen 0 (以太网)
RX packets 1938 bytes 1431300 (1.4 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 2250 bytes 1508476 (1.5 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.0.0.200 netmask 255.255.255.0 broadcast 10.0.0.255
inet6 fe80::20c:29ff:fedf:9992 prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:df:99:92 txqueuelen 1000 (以太网)
RX packets 518671 bytes 641138793 (641.1 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 102250 bytes 8450750 (8.4 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (本地环回)
RX packets 456 bytes 45548 (45.5 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 456 bytes 45548 (45.5 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

[root@ubuntu2204 ~]#route -n
内核 IP 路由表
目标 网关 子网掩码 标志 跃点 引用 使用 接口
0.0.0.0 10.0.0.2 0.0.0.0 UG 0 0 0 eth0
10.0.0.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0

#Before starting the container, confirm that port 80/tcp is not open on the host
[root@ubuntu2204 ~]#ss -ntl|grep :80

#Create a container in host mode
[root@ubuntu2204 ~]#docker run -d --network host --name web1 -v /opt/nginx/html/:/data/nginx/html/ nginx-alpine:1.16.1
78be41fc15215e5d591ec35fa785fefcfca276b9cbe190abceee7e9e96e6d38d
[root@ubuntu2204 ~]#docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
78be41fc1521 nginx-alpine:1.16.1 "nginx" 5 seconds ago Up 4 seconds web1

#After the container is created, port 80/tcp is open on the host
[root@ubuntu2204 ~]#ss -ntl|grep :80
LISTEN 0 511 0.0.0.0:80 0.0.0.0:*

#Inside the container, the host's hostname and interface information is still shown
[root@ubuntu2204 ~]#docker exec -it web1 sh
/ # ifconfig
docker0 Link encap:Ethernet HWaddr 02:42:23:4C:B7:1E
inet addr:172.17.0.1 Bcast:172.17.255.255 Mask:255.255.0.0
inet6 addr: fe80::42:23ff:fe4c:b71e/64 Scope:Link
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:1938 errors:0 dropped:0 overruns:0 frame:0
TX packets:2250 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:1431300 (1.3 MiB) TX bytes:1508476 (1.4 MiB)

eth0 Link encap:Ethernet HWaddr 00:0C:29:DF:99:92
inet addr:10.0.0.200 Bcast:10.0.0.255 Mask:255.255.255.0
inet6 addr: fe80::20c:29ff:fedf:9992/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:519414 errors:0 dropped:0 overruns:0 frame:0
TX packets:102666 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:641195452 (611.4 MiB) TX bytes:8492753 (8.0 MiB)

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:456 errors:0 dropped:0 overruns:0 frame:0
TX packets:456 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:45548 (44.4 KiB) TX bytes:45548 (44.4 KiB)

/ #
/ # route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.0.0.2 0.0.0.0 UG 0 0 0 eth0
10.0.0.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0

none mode

With the none mode, Docker performs no network configuration at all for the container: no NIC, no IP, no routes, so by default it cannot communicate with the outside world. Interfaces and IPs would have to be added manually, so this mode is rarely used.

Characteristics of none mode

  • selected with --network none
  • no network functionality by default, cannot communicate with the outside
  • port mapping is not possible
  • suitable for test environments (a minimal sketch follows this list)
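A minimal sketch (the container name is made up) showing that a none-mode container only has a loopback interface:

docker run -it --rm --name none-test --network none busybox:latest sh
/ # ip a        #only lo is listed: no eth0, no IP address, no default route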

Container mode


A container created in this mode shares its network with a specified, already existing container instead of with the host. The new container does not create its own NIC or configure its own IP; it shares the IP and port range of the specified container, so its ports must not conflict with that container's ports. Apart from the network, the filesystem, process information, and so on remain isolated. Processes in the two containers can communicate over the lo interface.

Characteristics of container mode

  • selected with --network container:<name or ID>
  • isolated from the host's network namespace
  • the containers share one network namespace
  • suitable for frequent network communication between the containers
  • uses the other container's network directly; rarely used

Example: implementing WordPress with container mode

[root@ubuntu2204 ~]#tree /data/dockerfile/
/data/dockerfile/
├── system
│ ├── alpine
│ │ ├── build.sh
│ │ └── Dockerfile
│ ├── centos
│ ├── debian
│ └── ubuntu
└── web
├── apache
├── jdk
├── lamp_docker
│ ├── mysql
│ │ └── mysql_test.cnf
│ └── wordpress
│ ├── env_mysql.list
│ └── env_wordpress.list
├── nginx
│ └── 1.16.1-alpine
│ ├── build.sh
│ ├── Dockerfile
│ ├── index.html
│ ├── nginx-1.16.1.tar.gz
│ └── nginx.conf
└── tomcat

14 directories, 10 files

[root@ubuntu2204 ~]#cd /data/dockerfile/web/lamp_docker/
[root@ubuntu2204 lamp_docker]#cat mysql/mysql_test.cnf
[mysqld]
server-id=200

[root@ubuntu2204 lamp_docker]#cat wordpress/env_mysql.list
MYSQL_ROOT_PASSWORD=123456
MYSQL_DATABASE=wordpress
MYSQL_USER=wpuser
MYSQL_PASSWORD=wppass

[root@ubuntu2204 lamp_docker]#cat wordpress/env_wordpress.list
WORDPRESS_DB_HOST=mysql:3306
WORDPRESS_DB_NAME=wordpress
WORDPRESS_DB_USER=wpuser
WORDPRESS_DB_PASSWORD=wppass
WORDPRESS_TABLE_PREFIX=wp_

[root@ubuntu2204 lamp_docker]#docker run -d --name wordpress-con1 -v /data/wordpress:/var/www/html/wp-content --env-file=/data/dockerfile/web/lamp_docker/wordpress/env_wordpress.list -p 80:80 wordpress
c8aa35d9acc8ad94c0017a3e1dbfee281d38ac47622f6bae6497f55e969dbc19

[root@ubuntu2204 lamp_docker]#docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c8aa35d9acc8 wordpress "docker-entrypoint.s…" 7 seconds ago Up 6 seconds 0.0.0.0:80->80/tcp, :::80->80/tcp wordpress-con1

[root@ubuntu2204 lamp_docker]#docker run --network container:wordpress-con1 --name mysql-con1 -v /data/dockerfile/web/lamp_docker/mysql/:/etc/mysql/conf.d -v /data/mysql:/var/lib/mysql --env-file=/data/dockerfile/web/lamp_docker/wordpress/env_mysql.list -d mysql:latest
accfda68b7133e418c1a3a437b6be38b03b30a394fd042f922e8a6cef1cf33db

[root@ubuntu2204 lamp_docker]#docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
accfda68b713 mysql:latest "docker-entrypoint.s…" 5 seconds ago Up 4 seconds mysql-con1
c8aa35d9acc8 wordpress "docker-entrypoint.s…" 4 minutes ago Up 4 minutes 0.0.0.0:80->80/tcp, :::80->80/tcp wordpress-con1

[root@ubuntu2204 lamp_docker]#ss -nlt
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0 4096 127.0.0.53%lo:53 0.0.0.0:*
LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
LISTEN 0 128 127.0.0.1:6010 0.0.0.0:*
LISTEN 0 128 127.0.0.1:6011 0.0.0.0:*
LISTEN 0 4096 127.0.0.1:33327 0.0.0.0:*
LISTEN 0 4096 0.0.0.0:80 0.0.0.0:*
LISTEN 0 128 [::]:22 [::]:*
LISTEN 0 128 [::1]:6010 [::]:*
LISTEN 0 128 [::1]:6011 [::]:*
LISTEN 0 4096 [::]:80 [::]:*

Connecting containers across hosts


Two approaches:

  • Approach 1: bridge docker0 directly to the host's eth0 NIC - brctl addif docker0 eth0 (see the sketch below)
  • Approach 2: use NAT and static routes to connect containers across hosts

How approach 2 works: add a network route on each host, and containers on host A can then reach containers on host B.
*Note: this approach only suits small environments; complex or large networks should use Google's open-source Kubernetes (k8s) for interconnection
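Approach 1 is not demonstrated below; as a rough sketch (the command is the one named in the list above):

brctl addif docker0 eth0    #attach the host NIC to the docker0 bridge, putting containers on the physical layer-2 segment
#note: the host's own IP configuration usually has to be moved from eth0 to the bridge afterwards, which is one reason this approach is rarely used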

Example: using NAT to connect containers across hosts - as long as the host has one route that reaches the other side, its other interfaces can receive the traffic

#Docker's default subnet 172.17.0.0/16 is the same on every host, so the prerequisite for routing is that each host must use a different container subnet

#On the first host, A, change the docker0 subnet to 192.168.100.1/24
[root@ubuntu2204 ~]#vim /etc/docker/daemon.json
[root@ubuntu2204 ~]#cat /etc/docker/daemon.json
{
"registry-mirrors": [
"https://registry.docker-cn.com",
"http://hub-mirror.c.163.com",
"https://docker.mirrors.ustc.edu.cn"
],
"graph": "/data/docker",
"graph": "/data/docker",
"max-concurrent-downloads": 10,
"max-concurrent-uploads": 5,
"log-opts": {
"max-size": "300m",
"max-file": "2"
},
"live-restore": true,
"bip": "192.168.100.1/24",
"dns" : [ "114.114.114.114", "119.29.29.29"],
"dns-search": [ "mooreyxia.com", "mooreyxia.org"]
}
[root@ubuntu2204 ~]#systemctl restart docker
[root@ubuntu2204 ~]#ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:0c:29:df:99:92 brd ff:ff:ff:ff:ff:ff
altname enp2s1
altname ens33
inet 10.0.0.200/24 brd 10.0.0.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fedf:9992/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:23:4c:b7:1e brd ff:ff:ff:ff:ff:ff
inet 192.168.100.1/24 brd 192.168.100.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:23ff:fe4c:b71e/64 scope link
valid_lft forever preferred_lft forever

[root@ubuntu2204 ~]#route -n
内核 IP 路由表
目标 网关 子网掩码 标志 跃点 引用 使用 接口
0.0.0.0 10.0.0.2 0.0.0.0 UG 0 0 0 eth0
10.0.0.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
192.168.100.0 0.0.0.0 255.255.255.0 U 0 0 0 docker0


#On the second host, B, change the docker0 subnet to 192.168.0.1/24
[root@ubuntu2204 ~]#vim /etc/docker/daemon.json
[root@ubuntu2204 ~]#cat /etc/docker/daemon.json
{
"registry-mirrors": [
"https://registry.docker-cn.com",
"http://hub-mirror.c.163.com",
"https://docker.mirrors.ustc.edu.cn"
],
"graph": "/data/docker",
"graph": "/data/docker",
"max-concurrent-downloads": 10,
"max-concurrent-uploads": 5,
"log-opts": {
"max-size": "300m",
"max-file": "2"
},
"live-restore": true,
"bip": "192.168.0.1/24",
"dns" : [ "114.114.114.114", "119.29.29.29"],
"dns-search": [ "mooreyxia.com", "mooreyxia.org"]
}
[root@ubuntu2204 ~]#systemctl restart docker
[root@ubuntu2204 ~]#ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:0c:29:f6:07:67 brd ff:ff:ff:ff:ff:ff
altname enp2s1
altname ens33
inet 10.0.0.202/24 brd 10.0.0.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fef6:767/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:3d:00:d5:6c brd ff:ff:ff:ff:ff:ff
inet 192.168.0.1/24 brd 192.168.0.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:3dff:fe00:d56c/64 scope link
valid_lft forever preferred_lft forever
[root@ubuntu2204 ~]#route -n
内核 IP 路由表
目标 网关 子网掩码 标志 跃点 引用 使用 接口
0.0.0.0 10.0.0.2 0.0.0.0 UG 0 0 0 eth0
10.0.0.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
192.168.0.0 0.0.0.0 255.255.255.0 U 0 0 0 docker0

#Start one container on each host
#Start container server1 on the first host
[root@ubuntu2204 ~]#docker run -it --name server1 --rm alpine-base:v1.1 sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
42: eth0@if43: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:c0:a8:64:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.100.2/24 brd 192.168.100.255 scope global eth0
valid_lft forever preferred_lft forever
/ # ip route
default via 192.168.100.1 dev eth0
192.168.100.0/24 dev eth0 proto kernel scope link src 192.168.100.2
/ # route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.100.1 0.0.0.0 UG 0 0 0 eth0
192.168.100.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
/ # ping -c2 192.168.0.2
PING 192.168.0.2 (192.168.0.2): 56 data bytes
^C
--- 192.168.0.2 ping statistics ---
2 packets transmitted, 0 packets received, 100% packet loss

#Start container server2 on the second host
[root@ubuntu2204 ~]#docker run -it --name server2 --rm busybox:latest sh
/ #
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
40: eth0@if41: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
link/ether 02:42:c0:a8:00:02 brd ff:ff:ff:ff:ff:ff
inet 192.168.0.2/24 brd 192.168.0.255 scope global eth0
valid_lft forever preferred_lft forever
/ # ip route
default via 192.168.0.1 dev eth0
192.168.0.0/24 dev eth0 scope link src 192.168.0.2
/ # route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.0.1 0.0.0.0 UG 0 0 0 eth0
192.168.0.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
#At this point server1 on the first host and server2 on the second host cannot reach each other
/ # ping -c2 192.168.100.2
PING 192.168.100.2 (192.168.100.2): 56 data bytes
^C
--- 192.168.100.2 ping statistics ---
2 packets transmitted, 0 packets received, 100% packet loss


#Add static routes and iptables rules - on each host add a static route whose gateway is the other host's IP
#On the first host, add the static route and iptables rules
[root@ubuntu2204 ~]#route add -net 192.168.0.0/24 gw 10.0.0.202
[root@ubuntu2204 ~]#route -n
内核 IP 路由表
目标 网关 子网掩码 标志 跃点 引用 使用 接口
0.0.0.0 10.0.0.2 0.0.0.0 UG 0 0 0 eth0
10.0.0.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
192.168.0.0 10.0.0.202 255.255.255.0 UG 0 0 0 eth0
192.168.100.0 0.0.0.0 255.255.255.0 U 0 0 0 docker0

#Adjust the default firewall rules
[root@ubuntu2204 ~]#iptables -A FORWARD -s 10.0.0.0/24 -j ACCEPT
[root@ubuntu2204 ~]#iptables -vnL
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination

Chain FORWARD (policy DROP 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
2 168 DOCKER-USER all -- * * 0.0.0.0/0 0.0.0.0/0
2 168 DOCKER-ISOLATION-STAGE-1 all -- * * 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- * docker0 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
0 0 DOCKER all -- * docker0 0.0.0.0/0 0.0.0.0/0
2 168 ACCEPT all -- docker0 !docker0 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- docker0 docker0 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- * * 10.0.0.0/24 0.0.0.0/0

#Or simply allow all forwarding
[root@ubuntu2204 ~]#iptables -P FORWARD ACCEPT

#On the second host, add the static route and iptables rules
[root@ubuntu2204 ~]#route add -net 192.168.100.0/24 gw 10.0.0.200
[root@ubuntu2204 ~]#route -n
内核 IP 路由表
目标 网关 子网掩码 标志 跃点 引用 使用 接口
0.0.0.0 10.0.0.2 0.0.0.0 UG 0 0 0 eth0
10.0.0.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
192.168.0.0 0.0.0.0 255.255.255.0 U 0 0 0 docker0
192.168.100.0 10.0.0.200 255.255.255.0 UG 0 0 0 eth0
#Adjust the default firewall rules
[root@ubuntu2204 ~]#iptables -P FORWARD ACCEPT
[root@ubuntu2204 ~]#iptables -vnL
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
2 168 DOCKER-USER all -- * * 0.0.0.0/0 0.0.0.0/0
2 168 DOCKER-ISOLATION-STAGE-1 all -- * * 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- * docker0 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
0 0 DOCKER all -- * docker0 0.0.0.0/0 0.0.0.0/0
2 168 ACCEPT all -- docker0 !docker0 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- docker0 docker0 0.0.0.0/0 0.0.0.0/0

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination

Chain DOCKER (1 references)
pkts bytes target prot opt in out source destination

Chain DOCKER-ISOLATION-STAGE-1 (1 references)
pkts bytes target prot opt in out source destination
2 168 DOCKER-ISOLATION-STAGE-2 all -- docker0 !docker0 0.0.0.0/0 0.0.0.0/0
2 168 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0

Chain DOCKER-ISOLATION-STAGE-2 (1 references)
pkts bytes target prot opt in out source destination
0 0 DROP all -- * docker0 0.0.0.0/0 0.0.0.0/0
2 168 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0

Chain DOCKER-USER (1 references)
pkts bytes target prot opt in out source destination
63 5413 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0

#Test cross-host container connectivity
#Container server1 on host A pings container server2 on host B
/ # ping -c3 192.168.0.2
PING 192.168.0.2 (192.168.0.2): 56 data bytes
64 bytes from 192.168.0.2: seq=0 ttl=62 time=1.046 ms
64 bytes from 192.168.0.2: seq=1 ttl=62 time=1.002 ms
64 bytes from 192.168.0.2: seq=2 ttl=62 time=0.793 ms

--- 192.168.0.2 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.793/0.947/1.046 ms

#Capture packets on host B with tcpdump at the same time
[root@ubuntu2204 ~]#tcpdump -i eth0 -nn icmp
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), snapshot length 262144 bytes
22:06:34.935674 IP 10.0.0.200 > 192.168.0.2: ICMP echo request, id 12, seq 0, length 64
22:06:34.935884 IP 192.168.0.2 > 10.0.0.200: ICMP echo reply, id 12, seq 0, length 64
22:06:35.936352 IP 10.0.0.200 > 192.168.0.2: ICMP echo request, id 12, seq 1, length 64
22:06:35.936453 IP 192.168.0.2 > 10.0.0.200: ICMP echo reply, id 12, seq 1, length 64
22:06:36.937102 IP 10.0.0.200 > 192.168.0.2: ICMP echo request, id 12, seq 2, length 64
22:06:36.937168 IP 192.168.0.2 > 10.0.0.200: ICMP echo reply, id 12, seq 2, length 64


#Container server2 on host B pings container server1 on host A
/ # ping 192.168.100.2
PING 192.168.100.2 (192.168.100.2): 56 data bytes
64 bytes from 192.168.100.2: seq=0 ttl=62 time=2.160 ms
64 bytes from 192.168.100.2: seq=1 ttl=62 time=0.752 ms
64 bytes from 192.168.100.2: seq=2 ttl=62 time=0.851 ms
64 bytes from 192.168.100.2: seq=3 ttl=62 time=1.793 ms
^C
--- 192.168.100.2 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.752/1.389/2.160 ms
/ #

#Capture packets on host A with tcpdump at the same time
[root@ubuntu2204 ~]#tcpdump -i eth0 -nn icmp
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), snapshot length 262144 bytes
22:08:55.508620 IP 10.0.0.202 > 192.168.100.2: ICMP echo request, id 10, seq 0, length 64
22:08:55.508882 IP 192.168.100.2 > 10.0.0.202: ICMP echo reply, id 10, seq 0, length 64
22:08:56.509762 IP 10.0.0.202 > 192.168.100.2: ICMP echo request, id 10, seq 1, length 64
22:08:56.509823 IP 192.168.100.2 > 10.0.0.202: ICMP echo reply, id 10, seq 1, length 64
22:08:57.510146 IP 10.0.0.202 > 192.168.100.2: ICMP echo request, id 10, seq 2, length 64
22:08:57.510207 IP 192.168.100.2 > 10.0.0.202: ICMP echo reply, id 10, seq 2, length 64
22:08:58.511027 IP 10.0.0.202 > 192.168.100.2: ICMP echo request, id 10, seq 3, length 64
22:08:58.511168 IP 192.168.100.2 > 10.0.0.202: ICMP echo reply, id 10, seq 3, length 64

Docker Compose: a single-host container orchestration tool

docker-compose is a single-host orchestration service for Docker containers, a tool for managing multiple containers. For instance, it can resolve dependencies between containers: starting an nginx frontend that calls a tomcat backend means tomcat has to start first, and tomcat in turn depends on a database that must start before it. docker-compose resolves this kind of nested dependency, and it can replace manual docker commands for creating, starting, and stopping containers.

The docker command is analogous to an ad-hoc ansible command; a docker-compose file is analogous to an ansible-playbook YAML file.

Installing Docker Compose

[root@ubuntu2204 ~]#apt install -y docker-compose

#Command format
docker-compose --help
Define and run multi-container applications with Docker.
Usage:
docker-compose [-f <arg>...] [options] [COMMAND] [ARGS...]
docker-compose -h|--help
#Options:
-f, --file FILE           #specify an alternate Compose template file, default docker-compose.yml
-p, --project-name NAME   #specify the project name; defaults to the name of the current directory
--verbose                 #show more output
--log-level LEVEL         #set the log level (DEBUG, INFO, WARNING, ERROR, CRITICAL)
--no-ansi                 #do not print ANSI control characters
-v, --version             #show the version
#The following subcommands must be run in the directory that contains docker-compose.yml|yaml
config -q   #check the current configuration; prints nothing when there are no errors
up          #create and start containers
build       #build images
bundle      #generate a JSON-format Docker Bundle backup file, named after the current directory, from the compose file
create      #create services
down        #stop and remove containers and networks (images and volumes only with --rmi / -v)
events      #receive real-time events from containers, optionally in JSON log format
exec        #run a command inside a specified container
help        #show help information
images      #show image information
kill        #force-stop running containers
logs        #view container logs
pause       #pause services
port        #show port mappings
ps          #list containers
pull        #pull the images again; needed after the images have changed
push        #push images
restart     #restart services
rm          #remove stopped service containers
run         #run a one-off container
scale       #set the number of containers for a service (deprecated in newer versions)
start       #start services
stop        #stop services
top         #show the processes running inside the containers
unpause     #unpause services
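A typical workflow with these subcommands, run from the project directory (a sketch; the directory matches the example that follows):

cd /data/docker-compose            #directory that holds docker-compose.yml
docker-compose config -q           #validate the file; silent when it is correct
docker-compose up -d               #create and start all services in the background
docker-compose ps                  #list the project's containers
docker-compose logs -f wordpress   #follow one service's logs
docker-compose down                #stop and remove the containers and network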

Example: deploying a WordPress + MySQL application with Docker Compose

[root@ubuntu2204 ~]#mkdir /data/docker-compose
[root@ubuntu2204 ~]#cd /data/docker-compose
[root@ubuntu2204 docker-compose]#vim docker-compose.yml
[root@ubuntu2204 docker-compose]#docker-compose config
networks:
  wordpress-network:
    driver: bridge
    ipam:
      config:
      - subnet: 172.30.0.0/16
services:
  db:
    container_name: db
    environment:
      MYSQL_DATABASE: wordpress
      MYSQL_PASSWORD: '123456'
      MYSQL_ROOT_PASSWORD: '123456'
      MYSQL_USER: wordpress
    image: mysql:latest
    networks:
      wordpress-network: null
    restart: unless-stopped
    volumes:
    - dbdata:/var/lib/mysql:rw
  wordpress:
    container_name: wordpress
    depends_on:
      db:
        condition: service_started
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_NAME: wordpress
      WORDPRESS_DB_PASSWORD: '123456'
      WORDPRESS_DB_USER: wordpress
    image: wordpress:latest
    networks:
      wordpress-network: null
    ports:
    - published: 80
      target: 80
    restart: unless-stopped
    volumes:
    - wordpress:/var/www/html:rw
version: '3'
volumes:
  dbdata: {}
  wordpress: {}
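The docker-compose.yml that produces the normalized output above is not shown in the transcript; a sketch reconstructed from that output (compose file format 3) could look like this:

version: '3'
services:
  db:
    image: mysql:latest
    container_name: db
    restart: unless-stopped
    environment:
      MYSQL_ROOT_PASSWORD: '123456'
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: '123456'
    volumes:
      - dbdata:/var/lib/mysql
    networks:
      - wordpress-network
  wordpress:
    image: wordpress:latest
    container_name: wordpress
    restart: unless-stopped
    depends_on:
      - db
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_NAME: wordpress
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: '123456'
    ports:
      - "80:80"
    volumes:
      - wordpress:/var/www/html
    networks:
      - wordpress-network
volumes:
  dbdata:
  wordpress:
networks:
  wordpress-network:
    driver: bridge
    ipam:
      config:
        - subnet: 172.30.0.0/16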

[root@ubuntu2204 docker-compose]#docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
[root@ubuntu2204 docker-compose]#docker network ls
NETWORK ID NAME DRIVER SCOPE
a55c4e1d2007 bridge bridge local
da5807acacf4 host host local
070e6fce9ea4 none null local

#Normally, change into the directory containing docker-compose.yml before running docker-compose; without -d it runs in the foreground
[root@ubuntu2204 docker-compose]#docker-compose up -d
Creating network "docker-compose_wordpress-network" with driver "bridge"
Creating volume "docker-compose_wordpress" with default driver
Creating volume "docker-compose_dbdata" with default driver
Creating db ... done
Creating wordpress ... done
Attaching to db, wordpress
db | 2023-01-12 15:17:08+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.31-1.el8 started.
db | 2023-01-12 15:17:09+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
db | 2023-01-12 15:17:09+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.31-1.el8 started.
......
Creating db ... done
Creating wordpress ... done

#Test

I'm moore, let's keep it up together!!!

