This walkthrough uses a 3-node cluster (node1 (192.168.1.2), node2 (192.168.1.4), node3 (192.168.1.6)); all primary services are installed on node1, and the operating system is CentOS 7.6.
1. Environment preparation
Reference: 《大数据集群安装(一) Linux环境准备 步骤简单 详细》 (Big Data Cluster Installation (1): Linux Environment Preparation)
https://blog.csdn.net/qq_35260875/article/details/111315110
2. Download ZooKeeper
(1) Download
Download page: https://archive.apache.org/dist/zookeeper/
Pick the Apache release you need; this walkthrough uses version 3.6.2.
Package name: apache-zookeeper-3.6.2-bin.tar.gz
(2) Extract
Upload the downloaded archive with an FTP tool and place it in /usr/local on node1.
mv apache-zookeeper-3.6.2-bin.tar.gz /usr/local/
cd /usr/local
tar -zxvf apache-zookeeper-3.6.2-bin.tar.gz
# Rename the directory
mv apache-zookeeper-3.6.2-bin zookeeper-3.6.2
(3) Create a symbolic link
This makes later version upgrades easier.
ln -s zookeeper-3.6.2 zookeeper
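The upgrade path this symlink enables can be sketched in a throwaway directory (the paths below are illustrative, not the real installation):

```shell
# Simulate the version-symlink pattern in a scratch directory.
base=$(mktemp -d)
cd "$base"
mkdir zookeeper-3.6.2
ln -s zookeeper-3.6.2 zookeeper
readlink zookeeper   # prints zookeeper-3.6.2

# A later upgrade only repoints the link; scripts and environment
# variables keep referring to the stable "zookeeper" path.
mkdir zookeeper-3.7.0
ln -sfn zookeeper-3.7.0 zookeeper
readlink zookeeper   # prints zookeeper-3.7.0
```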
(4) Delete the archive
rm -f apache-zookeeper-3.6.2-bin.tar.gz
3. Edit the configuration
(1) Copy the file
On node1, create zoo.cfg from the shipped sample:
# Enter the configuration directory
cd /usr/local/zookeeper/conf
# Copy zoo_sample.cfg to zoo.cfg
cp zoo_sample.cfg zoo.cfg
The original file looks like this:
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/tmp/zookeeper
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
## Metrics Providers
#
# https://prometheus.io Metrics Exporter
#metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider
#metricsProvider.httpPort=7000
#metricsProvider.exportJvmInfo=true
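A quick sanity check on the timing settings above: initLimit and syncLimit are counted in ticks of tickTime milliseconds, so the defaults allow 20 s for a follower's initial sync and 10 s between a request and its acknowledgement. A minimal shell sketch of that arithmetic:

```shell
# Defaults from the sample zoo.cfg above; all timeouts derive from tickTime.
tickTime=2000    # length of one tick, in milliseconds
initLimit=10     # ticks a follower may take for the initial sync
syncLimit=5      # ticks allowed between a request and its ack

echo "initial sync timeout: $((tickTime * initLimit)) ms"   # 20000 ms
echo "request/ack timeout:  $((tickTime * syncLimit)) ms"   # 10000 ms
```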
(2) Modify the file
# On node1, edit zoo.cfg: change the data directory and add the cluster members
# Edit zoo.cfg (still in /usr/local/zookeeper/conf)
vim zoo.cfg
# Change the data directory
dataDir=/usr/local/zookeeper/data
# Append the following lines at the bottom
server.1=node1:2888:3888
server.2=node2:2888:3888
server.3=node3:2888:3888
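Each entry has the form server.&lt;myid&gt;=&lt;host&gt;:&lt;peer-port&gt;:&lt;election-port&gt;: port 2888 carries follower-to-leader synchronization traffic and 3888 is used for leader election. The append step can be sketched as below; a scratch file is used here so the snippet runs anywhere, while on node1 the target would be /usr/local/zookeeper/conf/zoo.cfg:

```shell
# Sketch: append the cluster membership to a zoo.cfg.
# CFG is a scratch file for demonstration; on node1 set
# CFG=/usr/local/zookeeper/conf/zoo.cfg instead.
CFG=$(mktemp)
cat >> "$CFG" <<'EOF'
server.1=node1:2888:3888
server.2=node2:2888:3888
server.3=node3:2888:3888
EOF
grep -c '^server\.' "$CFG"   # prints 3
```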
(3) Create the data directory
# Run on node1
mkdir /usr/local/zookeeper/data
(4) Create myid
# Run on node1
# Create an empty file named myid in the data directory and write this node's ID into it; ZooKeeper uses the ID to identify each member of the cluster
touch /usr/local/zookeeper/data/myid
echo 1 > /usr/local/zookeeper/data/myid
4. Synchronize the configuration
(1) Copy the ZooKeeper directory
# From node1, copy the ZooKeeper directory to the other two nodes
scp -r /usr/local/zookeeper-3.6.2 root@node2:/usr/local
scp -r /usr/local/zookeeper-3.6.2 root@node3:/usr/local
(2) Create symbolic links and update myid
# Run on node2
cd /usr/local
ln -s zookeeper-3.6.2 zookeeper
echo 2 > /usr/local/zookeeper/data/myid
# Run on node3
cd /usr/local
ln -s zookeeper-3.6.2 zookeeper
echo 3 > /usr/local/zookeeper/data/myid
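The pairing the three echo commands establish can be simulated locally (scratch directories standing in for the three nodes):

```shell
# Local sketch: each node's data directory holds a one-line myid file,
# and the number must match that node's server.N entry in zoo.cfg.
base=$(mktemp -d)
for n in 1 2 3; do
  mkdir -p "$base/node$n/data"
  echo "$n" > "$base/node$n/data/myid"
done
cat "$base/node2/data/myid"   # prints 2
```

If a node's myid disagrees with its server.N line, that node will fail to join the quorum, so this mapping is worth double-checking after the scp step.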
5. Configure environment variables
# Run the following on all three nodes, adding the two export lines to the file
vim /etc/profile
export ZOOKEEPER_HOME=/usr/local/zookeeper
export PATH=$ZOOKEEPER_HOME/bin:$PATH
# Make the variables take effect
source /etc/profile
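The effect of the two export lines can be checked in any shell: after sourcing, the ZooKeeper bin directory should be the first entry on PATH, which is why zkServer.sh can then be run without a full path.

```shell
# Sketch: the same two lines added to /etc/profile, applied directly.
export ZOOKEEPER_HOME=/usr/local/zookeeper
export PATH=$ZOOKEEPER_HOME/bin:$PATH

# The first PATH component should now be the ZooKeeper bin directory.
echo "$PATH" | cut -d: -f1   # prints /usr/local/zookeeper/bin
```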
6. Start ZooKeeper
(1) Start
# Run on all three nodes
zkServer.sh start
(2) Check the status
zkServer.sh status
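With all three nodes up, zkServer.sh status should report "Mode: leader" on exactly one node and "Mode: follower" on the other two. A small local sketch of that expectation (the Mode values below are illustrative sample output, not a captured log):

```shell
# Sketch: across a healthy 3-node ensemble there is exactly one leader.
# Sample Mode values as zkServer.sh status would report them, one per node:
modes='follower
leader
follower'
echo "$modes" | grep -c '^leader$'     # prints 1
echo "$modes" | grep -c '^follower$'   # prints 2
```

If every node instead reports "Error contacting service", recheck the myid files and the server.N lines before anything else.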