Networking in Docker

Overview

View the network configuration on the current machine

[root@iZwz91h49n3mj8r232gqweZ ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:16:3e:0c:ae:fc brd ff:ff:ff:ff:ff:ff
    inet 172.16.252.139/24 brd 172.16.252.255 scope global dynamic eth0
       valid_lft 314691780sec preferred_lft 314691780sec
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP 
    link/ether 02:42:7d:a6:c6:8d brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
.....
34: veth6991d55@if33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP 
    link/ether 9e:24:c9:78:1f:8f brd ff:ff:ff:ff:ff:ff link-netnsid 11
36: veth20601e8@if35: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP 
    link/ether 5a:b1:3d:0f:0d:08 brd ff:ff:ff:ff:ff:ff link-netnsid 10
[root@iZwz91h49n3mj8r232gqweZ ~]# 

Diagram: how Docker networking fits together

[Figure: Docker bridge network topology — first-centos and second-centos attached to docker0 on the CentOS host]

1. first-centos and second-centos are two Docker containers (Containers) created on our own CentOS system.
2. Our CentOS system also has a docker0 NIC; this virtual bridge connects the containers on the current system to the host network (and, through NAT, to the outside world).

View the IP configuration inside a Docker container

[root@iZwz91h49n3mj8r232gqweZ ~]# docker exec -it first-centos ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
35: eth0@if36: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
[root@iZwz91h49n3mj8r232gqweZ ~]# docker exec -it second-centos ip a     
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
33: eth0@if34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
[root@iZwz91h49n3mj8r232gqweZ ~]# 
1. From the docker exec -it first-centos ip a output we can see that the container NIC eth0@if36
   is in fact one end of a veth pair whose other end is the host's veth20601e8@if35.
2. Likewise, docker exec -it second-centos ip a shows that the NIC eth0@if34
   is paired with the host's veth6991d55@if33 (see the verification sketch below).
3. Note that every container Docker creates on the default bridge lands in the same subnet as docker0, 172.17.0.x.
4. On the surface, hosts in the same subnet can simply ping each other; underneath, the traffic actually flows through the veth pairs and the docker0 bridge.
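
A minimal way to verify a pairing yourself: inside the container, /sys/class/net/eth0/iflink holds the interface index of the veth peer, which you can then look up on the host. The indices below are taken from the transcripts above and will differ on your machine.

# peer index of first-centos's eth0 (here it prints 36)
docker exec first-centos cat /sys/class/net/eth0/iflink
# find the host interface with that index -> 36: veth20601e8@if35
ip a | grep '^36:'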

Docker network commands

List all networks on the current machine: docker network ls

[root@iZwz91h49n3mj8r232gqweZ ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
757b74ac61c7        bridge              bridge              local
d3d1516d3a7c        harbor_harbor       bridge              local
6f425e496441        host                host                local
d082dd604405        none                null                local

Notes

1. NETWORK ID: the ID of the network
2. NAME: the name of the network
3. DRIVER: the network driver / network mode
   bridge: bridged mode. Docker allocates and configures an IP for every container,
           attaches the container to the docker0 virtual bridge,
           and uses the docker0 bridge plus iptables NAT rules to communicate with the host (see the sketch below).
   host: host mode. The container gets no virtual NIC or IP of its own; it uses the host's IP and ports directly.
   null: this mode disables the container's networking entirely.
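
To see the NAT side of bridged mode, you can dump the POSTROUTING chain on the host; a sketch (the exact rule text varies by Docker version):

# Docker masquerades traffic leaving the 172.17.0.0/16 bridge subnet
iptables -t nat -S POSTROUTING | grep 172.17
# typical output: -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE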

Inspect a network: docker network inspect ${network-name}

[root@iZwz91h49n3mj8r232gqweZ ~]# docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "757b74ac61c7d5f2148f7dfada40d6cc6cfe9ad73c924b4d2ff351dcdd55ea69",
        "Created": "2019-12-01T09:09:58.838895122+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "7aa9960b8e1ae75de034d88cd8cdcbd3d4307d49138ac6d427247ede01166147": {
                "Name": "first-centos",
                "EndpointID": "8b530ec7edac68dfa2903979b8ad5c39fed8aa8704868716df8ef0a7ac3d8d87",
                "MacAddress": "02:42:ac:11:00:02",
                "IPv4Address": "172.17.0.2/16",
                "IPv6Address": ""
            },
            "f987e94f9251c01a8b48c591fd7866adb6fd1fccee394acd2452325fa3796860": {
                "Name": "second-centos",
                "EndpointID": "9bb718d4e262cf13b2e0b48f56e1a126c906e51f06c98c50a08178a0393a45ba",
                "MacAddress": "02:42:ac:11:00:03",
                "IPv4Address": "172.17.0.3/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]

Notes

1. Containers: this bridge network is our default docker0; two containers were created on this docker0/bridge network,
   namely first-centos and second-centos. A template-based shortcut for listing them is sketched below.
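
If you only want the name-to-IP mapping, docker network inspect accepts a Go template via -f; a sketch, assuming a Docker version with template support:

docker network inspect -f '{{range .Containers}}{{println .Name .IPv4Address}}{{end}}' bridge
# first-centos 172.17.0.2/16
# second-centos 172.17.0.3/16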

Delete a network: docker network rm ${network-name}

[root@iZwz91h49n3mj8r232gqweZ ~]# docker network create customer-network1
dda4d56c4a822155e48506d226bd034c7bdd73969faf9dcf687431b23282e903
[root@iZwz91h49n3mj8r232gqweZ ~]# docker network rm customer-network1    
customer-network1
[root@iZwz91h49n3mj8r232gqweZ ~]# 
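
Related: docker network prune removes every network not currently used by any container (it asks for confirmation unless you pass -f):

docker network prune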

Create a network: docker network create ${network-name}

[root@iZwz91h49n3mj8r232gqweZ ~]# docker network create customer-network
fa4e855899d5a1e290c2c0b3fd724959a7f484455a4d8452205bdda5d616c317
[root@iZwz91h49n3mj8r232gqweZ ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
757b74ac61c7        bridge              bridge              local
fa4e855899d5        customer-network    bridge              local
d3d1516d3a7c        harbor_harbor       bridge              local
6f425e496441        host                host                local
d082dd604405        none                null                local
[root@iZwz91h49n3mj8r232gqweZ ~]# 

Notes

1. This creates a network of the bridge (bridged) type, which is the default driver.
[root@iZwz91h49n3mj8r232gqweZ ~]# docker network inspect customer-network
[
    {
        "Name": "customer-network",
        "Id": "fa4e855899d5a1e290c2c0b3fd724959a7f484455a4d8452205bdda5d616c317",
        "Created": "2019-12-08T10:27:32.313643881+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.19.0.0/16",
                    "Gateway": "172.19.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]
[root@iZwz91h49n3mj8r232gqweZ ~]# 

Notes

1. Inspecting with docker network inspect customer-network shows that
   this network was allocated a subnet of its own, 172.19.0.0/16 with gateway 172.19.0.1;
   the earlier docker0 network uses 172.17.x.x (e.g. 172.17.0.2), so the two are clearly different: one is 172.17, the other 172.19.
   If you need a specific subnet, you can pin it at creation time, as sketched below.
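
A sketch of pinning the subnet explicitly (the name customer-network2 and the addresses here are illustrative):

docker network create --driver bridge --subnet 172.19.0.0/16 --gateway 172.19.0.1 customer-network2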

Start a container on a specified network: docker run -d --name tomcat02 -p 8081:8081 --network customer-network tomcat

docker run -d --name ${Container-name} --network ${network-name} ${image-name}
[root@iZwz91h49n3mj8r232gqweZ ~]# docker run -d --name tomcat02 -p 8081:8081 --network customer-network tomcat
9559aa60858c970688a3c2e768e00ae1c07b6bec2bbdd8f55b8c70657dd3ed90 
[root@iZwz91h49n3mj8r232gqweZ ~]#
1. If no network is specified, the container is created on the default docker0 network.
2. Custom bridge networks also use the veth pair technique.
[root@iZwz91h49n3mj8r232gqweZ ~]# docker network inspect customer-network
[
    {
        "Name": "customer-network",
        "Id": "c2cfc0ed26761a909f1f273a937d954398070c1736d5cb3f5a306474faae7836",
        "Created": "2019-12-08T10:46:18.628042916+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.21.0.0/16",
                    "Gateway": "172.21.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "9559aa60858c970688a3c2e768e00ae1c07b6bec2bbdd8f55b8c70657dd3ed90": {
                "Name": "tomcat02",
                "EndpointID": "11142f2c0421fe292a6f9e60c62a497a17dc1452e3b1f11c6292adf6adcd9e5e",
                "MacAddress": "02:42:ac:15:00:02",
                "IPv4Address": "172.21.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]
1. The container tomcat02 was created successfully in the customer-network network, with IP 172.21.0.2,
   consistent with customer-network's subnet: both are 172.21. (The transcript above reflects a recreated customer-network — note the new Id — hence 172.21.0.0/16 rather than the 172.19.0.0/16 shown earlier.)
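
To see which networks a container is attached to and its IP on each, docker inspect also takes a Go template; a sketch, assuming template support:

docker inspect -f '{{range $name, $net := .NetworkSettings.Networks}}{{println $name $net.IPAddress}}{{end}}' tomcat02
# customer-network 172.21.0.2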

Attach a container to a network: docker network connect customer-network tomcat01

docker network connect ${network-name} ${Container-name}

Background

Containers that live on different networks may need to talk to each other, but because their subnets differ they cannot communicate directly; this section shows how to solve that.

The original tomcat01 container is on the default bridge docker0, with IP 172.17.0.4

[root@iZwz91h49n3mj8r232gqweZ ~]# docker exec -it tomcat01 ip a  
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
50: eth0@if51: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ac:11:00:04 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.0.4/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
[root@iZwz91h49n3mj8r232gqweZ ~]#

The customer-network bridge originally looks like this; it contains only one container, tomcat02

[root@iZwz91h49n3mj8r232gqweZ ~]# docker network inspect customer-network
[
    {
        "Name": "customer-network",
        "Id": "c2cfc0ed26761a909f1f273a937d954398070c1736d5cb3f5a306474faae7836",
        "Created": "2019-12-08T10:46:18.628042916+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.21.0.0/16",
                    "Gateway": "172.21.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "9559aa60858c970688a3c2e768e00ae1c07b6bec2bbdd8f55b8c70657dd3ed90": {
                "Name": "tomcat02",
                "EndpointID": "11142f2c0421fe292a6f9e60c62a497a17dc1452e3b1f11c6292adf6adcd9e5e",
                "MacAddress": "02:42:ac:15:00:02",
                "IPv4Address": "172.21.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]

Add the tomcat01 container to customer-network

[root@iZwz91h49n3mj8r232gqweZ ~]# docker network connect customer-network tomcat01
[root@iZwz91h49n3mj8r232gqweZ ~]# docker network inspect customer-network
[
    {
        "Name": "customer-network",
        "Id": "c2cfc0ed26761a909f1f273a937d954398070c1736d5cb3f5a306474faae7836",
        "Created": "2019-12-08T10:46:18.628042916+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.21.0.0/16",
                    "Gateway": "172.21.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "9559aa60858c970688a3c2e768e00ae1c07b6bec2bbdd8f55b8c70657dd3ed90": {
                "Name": "tomcat02",
                "EndpointID": "11142f2c0421fe292a6f9e60c62a497a17dc1452e3b1f11c6292adf6adcd9e5e",
                "MacAddress": "02:42:ac:15:00:02",
                "IPv4Address": "172.21.0.2/16",
                "IPv6Address": ""
            },
            "bed934410133064a8d2aff3b9d81c3cd0ff0d75a210f2a8e7ee3e007b21e3be8": {
                "Name": "tomcat01",
                "EndpointID": "7836d5be0d13572a88a1c0667684bc58bc71a3ea10a50fa800e9fadfba474ab3",
                "MacAddress": "02:42:ac:15:00:03",
                "IPv4Address": "172.21.0.3/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]
[root@iZwz91h49n3mj8r232gqweZ ~]#
[root@iZwz91h49n3mj8r232gqweZ ~]# docker exec -it tomcat01 ip a                   
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
50: eth0@if51: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ac:11:00:04 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.0.4/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
54: eth1@if55: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ac:15:00:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.21.0.3/16 brd 172.21.255.255 scope global eth1
       valid_lft forever preferred_lft forever
1. After connecting, inspecting the network shows that tomcat01 is now also attached to our custom bridge (customer-network) and was assigned a new IP, 172.21.0.3.
2. Looking at tomcat01's own IP info, the container now has two IPs: 172.17.0.4 (assigned earlier by the default bridge)
   and 172.21.0.3 (assigned by the customer-network bridge), so it can now also talk on the 172.21.0.x subnet. The attachment can be undone again, as sketched below.
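
The inverse operation is docker network disconnect, which detaches the container and releases its IP on that network:

docker network disconnect customer-network tomcat01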

Verifying communication from tomcat01 to tomcat02

[root@iZwz91h49n3mj8r232gqweZ ~]# docker exec -it tomcat01 ping 172.21.0.2
PING 172.21.0.2 (172.21.0.2) 56(84) bytes of data.
64 bytes from 172.21.0.2: icmp_seq=1 ttl=64 time=0.180 ms
64 bytes from 172.21.0.2: icmp_seq=2 ttl=64 time=0.065 ms
64 bytes from 172.21.0.2: icmp_seq=3 ttl=64 time=0.068 ms
64 bytes from 172.21.0.2: icmp_seq=4 ttl=64 time=0.066 ms
64 bytes from 172.21.0.2: icmp_seq=5 ttl=64 time=0.054 ms
64 bytes from 172.21.0.2: icmp_seq=6 ttl=64 time=0.078 ms

Verifying communication from tomcat02 to tomcat01

[root@iZwz91h49n3mj8r232gqweZ ~]# docker exec -it tomcat02 ping 172.21.0.3
PING 172.21.0.3 (172.21.0.3) 56(84) bytes of data.
64 bytes from 172.21.0.3: icmp_seq=1 ttl=64 time=0.153 ms
64 bytes from 172.21.0.3: icmp_seq=2 ttl=64 time=0.064 ms
64 bytes from 172.21.0.3: icmp_seq=3 ttl=64 time=0.072 ms
64 bytes from 172.21.0.3: icmp_seq=4 ttl=64 time=0.068 ms
64 bytes from 172.21.0.3: icmp_seq=5 ttl=64 time=0.079 ms
64 bytes from 172.21.0.3: icmp_seq=6 ttl=64 time=0.060 ms
64 bytes from 172.21.0.3: icmp_seq=7 ttl=64 time=0.064 ms
64 bytes from 172.21.0.3: icmp_seq=8 ttl=64 time=0.061 ms
64 bytes from 172.21.0.3: icmp_seq=9 ttl=64 time=0.059 ms
64 bytes from 172.21.0.3: icmp_seq=10 ttl=64 time=0.062 ms
As shown above, the two containers can ping each other in both directions.

DNS records in Docker

1. When we create containers (Containers) in bridged mode,
   containers in the same subnet can ping each other by IP;
   on a user-defined bridge, containers in the same subnet can also be reached by pinging the Container-name
   (you can confirm the resolver in use as sketched below).
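
Name resolution on user-defined networks goes through Docker's embedded DNS server at 127.0.0.11; a quick check, using the tomcat02 container from above:

docker exec -it tomcat02 cat /etc/resolv.conf
# nameserver 127.0.0.11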

User-defined bridge

Pinging by IP on a user-defined bridge

[root@iZwz91h49n3mj8r232gqweZ ~]# docker exec -it tomcat33 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
60: eth0@if61: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ac:15:00:04 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.21.0.4/16 brd 172.21.255.255 scope global eth0
       valid_lft forever preferred_lft forever
[root@iZwz91h49n3mj8r232gqweZ ~]# docker exec -it tomcat44 ip a   
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
62: eth0@if63: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ac:15:00:05 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.21.0.5/16 brd 172.21.255.255 scope global eth0
       valid_lft forever preferred_lft forever
[root@iZwz91h49n3mj8r232gqweZ ~]# docker exec -it tomcat33 ping 172.21.0.5
PING 172.21.0.5 (172.21.0.5) 56(84) bytes of data.
64 bytes from 172.21.0.5: icmp_seq=1 ttl=64 time=0.159 ms
64 bytes from 172.21.0.5: icmp_seq=2 ttl=64 time=0.055 ms
^Z64 bytes from 172.21.0.5: icmp_seq=3 ttl=64 time=0.048 ms
64 bytes from 172.21.0.5: icmp_seq=4 ttl=64 time=0.046 ms
64 bytes from 172.21.0.5: icmp_seq=5 ttl=64 time=0.070 ms

Reaching containers by Container-name on a user-defined bridge

[root@iZwz91h49n3mj8r232gqweZ ~]# docker run -d --name tomcat33 --network customer-network tomcat
8272fab13f79b529677c1a9effd59206ddf4cb8bdad5c60bee0d8cd91cec8b11
[root@iZwz91h49n3mj8r232gqweZ ~]# docker run -d --name tomcat44 --network customer-network tomcat  
b1d73622f15c028526da0fe760c406ca282ad57e31f26d073b357b48a29d4612
[root@iZwz91h49n3mj8r232gqweZ ~]# docker exec -it tomcat33 ping tomcat44
PING tomcat44 (172.21.0.5) 56(84) bytes of data.
64 bytes from tomcat44.customer-network (172.21.0.5): icmp_seq=1 ttl=64 time=0.178 ms
64 bytes from tomcat44.customer-network (172.21.0.5): icmp_seq=2 ttl=64 time=0.059 ms
64 bytes from tomcat44.customer-network (172.21.0.5): icmp_seq=3 ttl=64 time=0.063 ms
^Z64 bytes from tomcat44.customer-network (172.21.0.5): icmp_seq=4 ttl=64 time=0.077 ms
1. Note that on a user-defined bridge, each container you create automatically gets a DNS record mapping its Container-name to its IP; extra names can be attached too, as sketched below.
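
Additional DNS names can be attached with --network-alias; a sketch, where the container tomcat55 and the alias web are hypothetical:

docker run -d --name tomcat55 --network customer-network --network-alias web tomcat
docker exec -it tomcat33 ping web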

Default bridge

Note

1. On the default bridge, containers can ping each other by IP, but pinging by Container-name does not work:
[root@iZwz91h49n3mj8r232gqweZ ~]# docker run -d --name tomcat11 -p 8011:8011 tomcat
8287de436fbc29dc6a62034d8a30f7cc62eb4a4b76232f4478476648c9c8473e
[root@iZwz91h49n3mj8r232gqweZ ~]# docker run -d --name tomcat22 -p 8022:8022 tomcat    
24eec4c1fb3770ec7da3251e74d6ca360f736e0db9e7919051882256237b4ada
[root@iZwz91h49n3mj8r232gqweZ ~]# docker exec -it tomcat11 ping tomcat22
ping: tomcat22: Name or service not known
[root@iZwz91h49n3mj8r232gqweZ ~]# docker exec -it tomcat11 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
56: eth0@if57: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ac:11:00:05 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.0.5/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
[root@iZwz91h49n3mj8r232gqweZ ~]# docker exec -it tomcat22 ip a  
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
58: eth0@if59: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ac:11:00:06 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.0.6/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
[root@iZwz91h49n3mj8r232gqweZ ~]# docker exec -it tomcat22 ip 172.17.0.5
Object "172.17.0.5" is unknown, try "ip help".
[root@iZwz91h49n3mj8r232gqweZ ~]# docker exec -it tomcat22 ping 172.17.0.5
PING 172.17.0.5 (172.17.0.5) 56(84) bytes of data.
64 bytes from 172.17.0.5: icmp_seq=1 ttl=64 time=0.208 ms
64 bytes from 172.17.0.5: icmp_seq=2 ttl=64 time=0.066 ms
64 bytes from 172.17.0.5: icmp_seq=3 ttl=64 time=0.074 ms
[root@iZwz91h49n3mj8r232gqweZ ~]# docker exec -it tomcat22 ping tomcat33
ping: tomcat33: Name or service not known
[root@iZwz91h49n3mj8r232gqweZ ~]# 

Container-name pings fail on the default bridge

1. On the default bridge, pinging by Container-name does not work.
2. One way around this is the legacy --link option:
[root@iZwz91h49n3mj8r232gqweZ ~]# docker run -d --name tomcat03 tomcat
0b3a8700698b163e7fa3163f881d73f1c2fd164dba375af72deb95b925ef536c
[root@iZwz91h49n3mj8r232gqweZ ~]# docker run -d --name tomcat04 --link tomcat03 tomcat
fa991903e990420377674646fc2fcc2892ad930948a50c703233679da190d462
[root@iZwz91h49n3mj8r232gqweZ ~]#  
[root@iZwz91h49n3mj8r232gqweZ ~]# docker exec -it tomcat04 ping tomcat03
PING tomcat03 (172.17.0.7) 56(84) bytes of data.
64 bytes from tomcat03 (172.17.0.7): icmp_seq=1 ttl=64 time=0.209 ms
64 bytes from tomcat03 (172.17.0.7): icmp_seq=2 ttl=64 time=0.064 ms
[root@iZwz91h49n3mj8r232gqweZ ~]# docker exec -it tomcat03 ping tomcat04
ping: tomcat04: Name or service not known
[root@iZwz91h49n3mj8r232gqweZ ~]# 
1. Here tomcat04 can ping tomcat03, but tomcat03 cannot ping tomcat04,
   because the link was configured only on tomcat04's side when it started (--link is one-directional).
2. In general --link is discouraged; the recommended approach is a user-defined bridge, for example:
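
A sketch of the recommended alternative (the network name app-net is illustrative): connect both containers to a user-defined network, after which name resolution works in both directions:

docker network create app-net
docker network connect app-net tomcat03
docker network connect app-net tomcat04
docker exec -it tomcat03 ping tomcat04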

host network mode

1. In host network mode the container simply uses the same network stack as the host;
[root@iZwz91h49n3mj8r232gqweZ ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
757b74ac61c7        bridge              bridge              local
c2cfc0ed2676        customer-network    bridge              local
d3d1516d3a7c        harbor_harbor       bridge              local
6f425e496441        host                host                local
d082dd604405        none                null                local
[root@iZwz91h49n3mj8r232gqweZ ~]# docker run -d --name tomcat05 --network host tomcat
413f2f52cc292a7ceca7182b21ebe74ac6c16d0991d7b6cb79ccf6d56a450f99
[root@iZwz91h49n3mj8r232gqweZ ~]# 
[root@iZwz91h49n3mj8r232gqweZ ~]# docker exec -it tomcat05 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:16:3e:0c:ae:fc brd ff:ff:ff:ff:ff:ff
    inet 172.16.252.139/24 brd 172.16.252.255 scope global dynamic eth0
       valid_lft 314482680sec preferred_lft 314482680sec
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:7d:a6:c6:8d brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
6: br-d3d1516d3a7c: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:62:34:59:56 brd ff:ff:ff:ff:ff:ff
    inet 172.18.0.1/16 brd 172.18.255.255 scope global br-d3d1516d3a7c
       valid_lft forever preferred_lft forever
8: veth09c86df@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-d3d1516d3a7c state UP group default 
    link/ether d2:95:5b:b9:df:58 brd ff:ff:ff:ff:ff:ff link-netnsid 0
10: veth5aaebfd@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-d3d1516d3a7c state UP group default 
    link/ether de:d7:5a:73:3b:65 brd ff:ff:ff:ff:ff:ff link-netnsid 1
12: vethfa01851@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-d3d1516d3a7c state UP group default 
    link/ether e2:94:b6:a5:f6:f5 brd ff:ff:ff:ff:ff:ff link-netnsid 5
14: vethd6d7476@if13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-d3d1516d3a7c state UP group default 
    link/ether 1a:70:f7:4e:66:01 brd ff:ff:ff:ff:ff:ff link-netnsid 2
16: veth2881aeb@if15: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-d3d1516d3a7c state UP group default 
    link/ether f6:43:1a:e5:e7:8c brd ff:ff:ff:ff:ff:ff link-netnsid 4
18: vethf151547@if17: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-d3d1516d3a7c state UP group default 
    link/ether 9e:f6:a5:ed:09:59 brd ff:ff:ff:ff:ff:ff link-netnsid 3
20: veth0e78003@if19: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-d3d1516d3a7c state UP group default 
    link/ether f6:92:4c:29:23:91 brd ff:ff:ff:ff:ff:ff link-netnsid 6
22: vethf7699f4@if21: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-d3d1516d3a7c state UP group default 
    link/ether 82:99:f5:a3:ad:b7 brd ff:ff:ff:ff:ff:ff link-netnsid 7
24: vethed86455@if23: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-d3d1516d3a7c state UP group default 
    link/ether 66:d0:5f:bf:4f:dd brd ff:ff:ff:ff:ff:ff link-netnsid 8
26: veth9fe54e4@if25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-d3d1516d3a7c state UP group default 
    link/ether 7e:f0:6f:ef:f7:32 brd ff:ff:ff:ff:ff:ff link-netnsid 9
34: veth6991d55@if33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether 9e:24:c9:78:1f:8f brd ff:ff:ff:ff:ff:ff link-netnsid 11
36: veth20601e8@if35: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether 5a:b1:3d:0f:0d:08 brd ff:ff:ff:ff:ff:ff link-netnsid 10
43: br-c2cfc0ed2676: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:20:b2:65:00 brd ff:ff:ff:ff:ff:ff
    inet 172.21.0.1/16 brd 172.21.255.255 scope global br-c2cfc0ed2676
       valid_lft forever preferred_lft forever
53: vethca95279@if52: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-c2cfc0ed2676 state UP group default 
    link/ether 52:62:fd:05:cf:57 brd ff:ff:ff:ff:ff:ff link-netnsid 14
57: vethd3c057b@if56: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether 7e:37:ea:e0:a2:b2 brd ff:ff:ff:ff:ff:ff link-netnsid 15
59: veth538a160@if58: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether ce:67:04:9d:e6:34 brd ff:ff:ff:ff:ff:ff link-netnsid 16
61: veth55940bb@if60: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-c2cfc0ed2676 state UP group default 
    link/ether 9a:07:d7:91:8a:24 brd ff:ff:ff:ff:ff:ff link-netnsid 17
63: vethf2b1a18@if62: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-c2cfc0ed2676 state UP group default 
    link/ether 12:4e:b2:3e:81:c8 brd ff:ff:ff:ff:ff:ff link-netnsid 18
65: veth7808c0a@if64: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether ba:86:41:bc:2c:2b brd ff:ff:ff:ff:ff:ff link-netnsid 19
67: veth3c461cd@if66: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether c2:a4:27:ed:d1:28 brd ff:ff:ff:ff:ff:ff link-netnsid 20
[root@iZwz91h49n3mj8r232gqweZ ~]# 
1. Notice that the container's view of the network is in fact identical to the host's; see the sketch below.
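
Because the container shares the host's stack, any -p port mappings are ignored in host mode, and the service's own port is bound directly on the host; a sketch, assuming Tomcat listens on its default port 8080:

curl -I http://127.0.0.1:8080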

The host's IP configuration

[root@iZwz91h49n3mj8r232gqweZ ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:16:3e:0c:ae:fc brd ff:ff:ff:ff:ff:ff
    inet 172.16.252.139/24 brd 172.16.252.255 scope global dynamic eth0
       valid_lft 314482632sec preferred_lft 314482632sec
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP 
    link/ether 02:42:7d:a6:c6:8d brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
6: br-d3d1516d3a7c: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP 
    link/ether 02:42:62:34:59:56 brd ff:ff:ff:ff:ff:ff
    inet 172.18.0.1/16 brd 172.18.255.255 scope global br-d3d1516d3a7c
       valid_lft forever preferred_lft forever
8: veth09c86df@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-d3d1516d3a7c state UP 
    link/ether d2:95:5b:b9:df:58 brd ff:ff:ff:ff:ff:ff link-netnsid 0
10: veth5aaebfd@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-d3d1516d3a7c state UP 
    link/ether de:d7:5a:73:3b:65 brd ff:ff:ff:ff:ff:ff link-netnsid 1
12: vethfa01851@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-d3d1516d3a7c state UP 
    link/ether e2:94:b6:a5:f6:f5 brd ff:ff:ff:ff:ff:ff link-netnsid 5
14: vethd6d7476@if13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-d3d1516d3a7c state UP 
    link/ether 1a:70:f7:4e:66:01 brd ff:ff:ff:ff:ff:ff link-netnsid 2
16: veth2881aeb@if15: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-d3d1516d3a7c state UP 
    link/ether f6:43:1a:e5:e7:8c brd ff:ff:ff:ff:ff:ff link-netnsid 4
18: vethf151547@if17: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-d3d1516d3a7c state UP 
    link/ether 9e:f6:a5:ed:09:59 brd ff:ff:ff:ff:ff:ff link-netnsid 3
20: veth0e78003@if19: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-d3d1516d3a7c state UP 
    link/ether f6:92:4c:29:23:91 brd ff:ff:ff:ff:ff:ff link-netnsid 6
22: vethf7699f4@if21: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-d3d1516d3a7c state UP 
    link/ether 82:99:f5:a3:ad:b7 brd ff:ff:ff:ff:ff:ff link-netnsid 7
24: vethed86455@if23: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-d3d1516d3a7c state UP 
    link/ether 66:d0:5f:bf:4f:dd brd ff:ff:ff:ff:ff:ff link-netnsid 8
26: veth9fe54e4@if25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-d3d1516d3a7c state UP 
    link/ether 7e:f0:6f:ef:f7:32 brd ff:ff:ff:ff:ff:ff link-netnsid 9
34: veth6991d55@if33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP 
    link/ether 9e:24:c9:78:1f:8f brd ff:ff:ff:ff:ff:ff link-netnsid 11
36: veth20601e8@if35: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP 
    link/ether 5a:b1:3d:0f:0d:08 brd ff:ff:ff:ff:ff:ff link-netnsid 10
43: br-c2cfc0ed2676: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP 
    link/ether 02:42:20:b2:65:00 brd ff:ff:ff:ff:ff:ff
    inet 172.21.0.1/16 brd 172.21.255.255 scope global br-c2cfc0ed2676
       valid_lft forever preferred_lft forever
53: vethca95279@if52: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-c2cfc0ed2676 state UP 
    link/ether 52:62:fd:05:cf:57 brd ff:ff:ff:ff:ff:ff link-netnsid 14
57: vethd3c057b@if56: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP 
    link/ether 7e:37:ea:e0:a2:b2 brd ff:ff:ff:ff:ff:ff link-netnsid 15
59: veth538a160@if58: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP 
    link/ether ce:67:04:9d:e6:34 brd ff:ff:ff:ff:ff:ff link-netnsid 16
61: veth55940bb@if60: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-c2cfc0ed2676 state UP 
    link/ether 9a:07:d7:91:8a:24 brd ff:ff:ff:ff:ff:ff link-netnsid 17
63: vethf2b1a18@if62: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-c2cfc0ed2676 state UP 
    link/ether 12:4e:b2:3e:81:c8 brd ff:ff:ff:ff:ff:ff link-netnsid 18
65: veth7808c0a@if64: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP 
    link/ether ba:86:41:bc:2c:2b brd ff:ff:ff:ff:ff:ff link-netnsid 19
67: veth3c461cd@if66: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP 
    link/ether c2:a4:27:ed:d1:28 brd ff:ff:ff:ff:ff:ff link-netnsid 20
[root@iZwz91h49n3mj8r232gqweZ ~]# 

none mode

[root@iZwz91h49n3mj8r232gqweZ ~]# docker run -d --name tomcat06 --network none tomcat
3480def5997a3fe31a0679b44e6636f399683e7c7da3003c348b5551820961f3
[root@iZwz91h49n3mj8r232gqweZ ~]# docker exec -it tomcat06 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
[root@iZwz91h49n3mj8r232gqweZ ~]# 
1. Only the local loopback NIC (lo) is present; the container has no external network connectivity.

Overlay networks in Docker: solving multi-host communication

1. When multiple CentOS hosts each create Containers with Docker, containers on different hosts may end up with identical IPs, so the per-host bridge networks cannot communicate across machines; overlay networks solve this by putting containers on a shared cross-host network, as sketched below.
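
A minimal sketch of an overlay setup, assuming Swarm mode (overlay networks require a Swarm, or an external key-value store in older Docker releases; the names below are illustrative):

# on host A (becomes the Swarm manager)
docker swarm init
docker network create -d overlay --attachable multi-host-net
# on host B, after joining the swarm with the token printed by 'docker swarm init'
docker run -d --name tomcat-remote --network multi-host-net tomcat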
Source: blog.csdn.net/u014636209/article/details/103447230