CentOS 7 etcd Cluster Configuration

Preface

etcd is a distributed key-value store developed by CoreOS on top of the Raft consensus algorithm. It can be used for service discovery, shared configuration, and consistency guarantees (such as database leader election and distributed locks).

This environment is for a k8s cluster: while deploying k8s from binaries, the etcd cluster caused all sorts of problems, so I set aside some time to study etcd clustering properly.

There are three ways to bootstrap an etcd cluster:

  1. Static discovery
  2. etcd dynamic discovery
  3. DNS dynamic discovery: the cluster is discovered via DNS SRV lookups

This article mainly covers static discovery and DNS dynamic discovery, combined with self-signed TLS certificates, to create the cluster.
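
For reference, DNS discovery works by looking up SRV records named _etcd-server._tcp (or _etcd-server-ssl._tcp when peers use TLS) under the domain passed to etcd via --discovery-srv. Assuming a hypothetical zone k8s.cn that you would populate yourself, the records can be checked with:

[root@docker-01 ~]# dig +short SRV _etcd-server._tcp.k8s.cn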

Environment Preparation

This environment is the etcd cluster actually used by the k8s setup, and it is the environment used throughout this document.

Hostname         Role       IP             OS Version                            Kernel Version
docker01.k8s.cn  docker-01  192.168.1.222  CentOS Linux release 7.4.1708 (Core)  3.10.0-693.el7.x86_64
docker02.k8s.cn  docker-02  192.168.1.221  CentOS Linux release 7.4.1708 (Core)  3.10.0-693.el7.x86_64
docker03.k8s.cn  docker-03  192.168.1.223  CentOS Linux release 7.4.1708 (Core)  3.10.0-693.el7.x86_64
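
The examples below address the nodes by IP. If you also want the hostnames above to resolve without internal DNS, a minimal /etc/hosts addition on each node (a sketch, assuming the addresses from the table) would be:

[root@docker-01 ~]# cat >> /etc/hosts <<'EOF'
192.168.1.222 docker01.k8s.cn
192.168.1.221 docker02.k8s.cn
192.168.1.223 docker03.k8s.cn
EOF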

Installation

Run on all three machines:

[root@docker-01 ~]# yum install etcd -y
[root@docker-01 ~]# rpm -qa etcd
etcd-3.3.11-2.el7.centos.x86_64

Create the directories etcd needs (data for the keyspace, wal for the write-ahead log) plus the cert directory used for the TLS setup later; run on all three machines:

[root@docker-01 ~]# mkdir -p /data/k8s/etcd/{data,wal}
[root@docker-01 ~]# mkdir -p /etc/kubernetes/cert
[root@docker-01 ~]# chown -R etcd:etcd /data/k8s/etcd

Open ports 2379 (client traffic) and 2380 (peer traffic) in the firewall on all three machines (and, ideally, synchronize their clocks as well):

[root@docker-01 ~]# firewall-cmd --zone=public --add-port=2379/tcp --permanent
success
[root@docker-01 ~]#  firewall-cmd --zone=public --add-port=2380/tcp --permanent
success
[root@docker-01 ~]# firewall-cmd --reload
success
[root@docker-01 ~]# firewall-cmd --list-ports
2379/tcp 2380/tcp
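
The note about clock sync is worth acting on: etcd's peer health probes log warnings once clocks drift by more than a second, and exactly that shows up in the startup logs below. A minimal sketch using chrony, assuming the default CentOS 7 pool servers are reachable:

[root@docker-01 ~]# yum install chrony -y
[root@docker-01 ~]# systemctl enable chronyd
[root@docker-01 ~]# systemctl start chronyd
[root@docker-01 ~]# chronyc tracking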

Static Cluster

Configuration

docker-01 configuration file:

[root@docker-01 ~]# cat /etc/etcd/etcd.conf 
ETCD_DATA_DIR="/data/k8s/etcd/data"
ETCD_WAL_DIR="/data/k8s/etcd/wal"
ETCD_LISTEN_PEER_URLS="http://192.168.1.222:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.1.222:2379"
ETCD_MAX_SNAPSHOTS="5"
ETCD_MAX_WALS="5"
ETCD_NAME="etcd1"
ETCD_SNAPSHOT_COUNT="100000"
ETCD_HEARTBEAT_INTERVAL="100"
ETCD_ELECTION_TIMEOUT="1000"

ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.1.222:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.1.222:2379"

ETCD_INITIAL_CLUSTER="etcd1=http://192.168.1.222:2380,etcd2=http://192.168.1.221:2380,etcd3=http://192.168.1.223:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

docker-02 configuration file:

[root@docker-02 ~]# cat /etc/etcd/etcd.conf
ETCD_DATA_DIR="/data/k8s/etcd/data"
ETCD_WAL_DIR="/data/k8s/etcd/wal"
ETCD_LISTEN_PEER_URLS="http://192.168.1.221:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.1.221:2379"
ETCD_MAX_SNAPSHOTS="5"
ETCD_MAX_WALS="5"
ETCD_NAME="etcd2"
ETCD_SNAPSHOT_COUNT="100000"
ETCD_HEARTBEAT_INTERVAL="100"
ETCD_ELECTION_TIMEOUT="1000"

ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.1.221:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.1.221:2379"

ETCD_INITIAL_CLUSTER="etcd1=http://192.168.1.222:2380,etcd2=http://192.168.1.221:2380,etcd3=http://192.168.1.223:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

docker-03 configuration file:

[root@docker-03 ~]# cat /etc/etcd/etcd.conf 
ETCD_DATA_DIR="/data/k8s/etcd/data"
ETCD_WAL_DIR="/data/k8s/etcd/wal"
ETCD_LISTEN_PEER_URLS="http://192.168.1.223:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.1.223:2379"
ETCD_MAX_SNAPSHOTS="5"
ETCD_MAX_WALS="5"
ETCD_NAME="etcd3"
ETCD_SNAPSHOT_COUNT="100000"
ETCD_HEARTBEAT_INTERVAL="100"
ETCD_ELECTION_TIMEOUT="1000"

ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.1.223:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.1.223:2379"

ETCD_INITIAL_CLUSTER="etcd1=http://192.168.1.222:2380,etcd2=http://192.168.1.221:2380,etcd3=http://192.168.1.223:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
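
The three files above differ only in ETCD_NAME and the node's own IP; everything else is identical. A sketch that renders the file from two shell variables (set NAME and IP to each node's values first) can avoid copy-paste mistakes:

[root@docker-01 ~]# NAME=etcd1 IP=192.168.1.222    # etcd2/192.168.1.221 and etcd3/192.168.1.223 on the other nodes
[root@docker-01 ~]# cat > /etc/etcd/etcd.conf <<EOF
ETCD_DATA_DIR="/data/k8s/etcd/data"
ETCD_WAL_DIR="/data/k8s/etcd/wal"
ETCD_LISTEN_PEER_URLS="http://${IP}:2380"
ETCD_LISTEN_CLIENT_URLS="http://${IP}:2379"
ETCD_MAX_SNAPSHOTS="5"
ETCD_MAX_WALS="5"
ETCD_NAME="${NAME}"
ETCD_SNAPSHOT_COUNT="100000"
ETCD_HEARTBEAT_INTERVAL="100"
ETCD_ELECTION_TIMEOUT="1000"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://${IP}:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://${IP}:2379"
ETCD_INITIAL_CLUSTER="etcd1=http://192.168.1.222:2380,etcd2=http://192.168.1.221:2380,etcd3=http://192.168.1.223:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF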

Start and Test
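
Enable and start etcd on all three nodes at roughly the same time; the unit is a notify-type service, so systemctl start on the first node can block until enough peers are up to form a quorum. A sketch:

[root@docker-01 ~]# systemctl enable etcd
[root@docker-01 ~]# systemctl start etcd

In the transcript below firewalld ends up stopped outright; the "no route to host" messages in the log are what an unreachable peer port looks like. If you keep firewalld running instead, make sure the 2379/2380 rules added earlier are active on every node.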

[root@docker-01 ~]# systemctl stop firewalld
[root@docker-01 ~]# systemctl status etcd
● etcd.service - Etcd Server
   Loaded: loaded (/usr/lib/systemd/system/etcd.service; disabled; vendor preset: disabled)
   Active: active (running) since Mon 2020-01-13 15:43:24 CST; 2min 1s ago
 Main PID: 2298 (etcd)
   Memory: 31.9M
   CGroup: /system.slice/etcd.service
           └─2298 /usr/bin/etcd --name=etcd1 --data-dir=/data/k8s/etcd/data --listen-client-urls=http://192.168.1.222:2379

Jan 13 15:45:05 docker-01 etcd[2298]: raft.node: 164a311aff833bc1 elected leader c36e0ffc3c8f0b6 at term 70
Jan 13 15:45:10 docker-01 etcd[2298]: health check for peer b1eeb25e6baf68e0 could not connect: dial tcp 192.168.1.221:2380: connect: no route to host (pr..._MESSAGE")
Jan 13 15:45:10 docker-01 etcd[2298]: health check for peer b1eeb25e6baf68e0 could not connect: dial tcp 192.168.1.221:2380: connect: no route to host (pr...SNAPSHOT")
Jan 13 15:45:11 docker-01 etcd[2298]: peer b1eeb25e6baf68e0 became active
Jan 13 15:45:11 docker-01 etcd[2298]: established a TCP streaming connection with peer b1eeb25e6baf68e0 (stream Message reader)
Jan 13 15:45:11 docker-01 etcd[2298]: established a TCP streaming connection with peer b1eeb25e6baf68e0 (stream MsgApp v2 reader)
Jan 13 15:45:15 docker-01 etcd[2298]: the clock difference against peer b1eeb25e6baf68e0 is too high [2m1.808827774s > 1s] (prober "ROUND_TRIPPER_SNAPSHOT")
Jan 13 15:45:15 docker-01 etcd[2298]: the clock difference against peer b1eeb25e6baf68e0 is too high [2m1.808608709s > 1s] (prober "ROUND_TRIPPER_RAFT_MESSAGE")
Jan 13 15:45:15 docker-01 etcd[2298]: updated the cluster version from 3.0 to 3.3
Jan 13 15:45:15 docker-01 etcd[2298]: enabled capabilities for version 3.3
Hint: Some lines were ellipsized, use -l to show in full.

Check the cluster status. The first command uses the v3 API's endpoint health; the second uses the legacy v2 cluster-health:

[root@docker-01 ~]# ETCDCTL_API=3 etcdctl --endpoints=http://192.168.1.222:2379,http://192.168.1.221:2379,http://192.168.1.223:2379 endpoint health
http://192.168.1.221:2379 is healthy: successfully committed proposal: took = 4.237397ms
http://192.168.1.223:2379 is healthy: successfully committed proposal: took = 6.593361ms
http://192.168.1.222:2379 is healthy: successfully committed proposal: took = 6.935029ms
[root@docker-01 ~]# etcdctl --endpoints=http://192.168.1.222:2379,http://192.168.1.221:2379,http://192.168.1.223:2379 cluster-health
member c36e0ffc3c8f0b6 is healthy: got healthy result from http://192.168.1.223:2379
member 164a311aff833bc1 is healthy: got healthy result from http://192.168.1.222:2379
member b1eeb25e6baf68e0 is healthy: got healthy result from http://192.168.1.221:2379
cluster is healthy
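
With all members healthy, a quick functional check is to write a key through one endpoint and read it back through another. A sketch using the v3 API and a throwaway key:

[root@docker-01 ~]# ETCDCTL_API=3 etcdctl --endpoints=http://192.168.1.222:2379 put /test/hello world
OK
[root@docker-01 ~]# ETCDCTL_API=3 etcdctl --endpoints=http://192.168.1.221:2379 get /test/hello
/test/hello
world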

Reposted from www.cnblogs.com/liujunjun/p/12187829.html