Prerequisites
An Elasticsearch cluster needs at least 3 machines. We prepare three Ubuntu 16.04 machines (192.168.71.181~183); the Elasticsearch version is 6.2.3.
Switch to a domestic (Aliyun) mirror
$ sudo cp /etc/apt/sources.list /etc/apt/sources.list.bak
$ sudo vim /etc/apt/sources.list
"""
# deb cdrom:[Ubuntu 16.04 LTS _Xenial Xerus_ - Release amd64 (20160420.1)]/ xenial main restricted
deb-src http://archive.ubuntu.com/ubuntu xenial main restricted
#Added by software-properties
deb http://mirrors.aliyun.com/ubuntu/ xenial main restricted
deb-src http://mirrors.aliyun.com/ubuntu/ xenial main restricted multiverse universe
#Added by software-properties
deb http://mirrors.aliyun.com/ubuntu/ xenial-updates main restricted
deb-src http://mirrors.aliyun.com/ubuntu/ xenial-updates main restricted multiverse universe
#Added by software-properties
deb http://mirrors.aliyun.com/ubuntu/ xenial universe
deb http://mirrors.aliyun.com/ubuntu/ xenial-updates universe
deb http://mirrors.aliyun.com/ubuntu/ xenial multiverse
deb http://mirrors.aliyun.com/ubuntu/ xenial-updates multiverse
deb http://mirrors.aliyun.com/ubuntu/ xenial-backports main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ xenial-backports main restricted universe multiverse
#Added by software-properties
deb http://archive.canonical.com/ubuntu xenial partner
deb-src http://archive.canonical.com/ubuntu xenial partner
deb http://mirrors.aliyun.com/ubuntu/ xenial-security main restricted
deb-src http://mirrors.aliyun.com/ubuntu/ xenial-security main restricted multiverse universe
#Added by software-properties
deb http://mirrors.aliyun.com/ubuntu/ xenial-security universe
deb http://mirrors.aliyun.com/ubuntu/ xenial-security multiverse
"""
$ sudo apt-get update
$ sudo apt-get update --fix-missing
Install Java
$ sudo apt-get install openjdk-8-jdk
$ sudo apt-get install apt-transport-https
Install Elasticsearch, Kibana, and Logstash
# Download the deb packages
$ wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.2.3.deb
$ wget https://artifacts.elastic.co/downloads/kibana/kibana-6.2.3-amd64.deb
$ wget https://artifacts.elastic.co/downloads/logstash/logstash-6.2.3.deb
# Install the packages
$ sudo dpkg -i elasticsearch-6.2.3.deb
$ sudo dpkg -i kibana-6.2.3-amd64.deb
$ sudo dpkg -i logstash-6.2.3.deb
"""
Install locations:
Elasticsearch:
  Home: /usr/share/elasticsearch
  Config: /etc/elasticsearch
  Environment variables: /etc/default/elasticsearch
  Data: /var/lib/elasticsearch
  Logs: /var/log/elasticsearch
Kibana:
  Home: /usr/share/kibana
  Config: /etc/kibana
  Data: /var/lib/kibana
"""
# Enable start on boot
$ sudo systemctl daemon-reload
$ sudo systemctl enable elasticsearch.service
$ sudo systemctl enable kibana.service
$ sudo systemctl enable logstash.service
# Start the services
$ sudo systemctl start elasticsearch.service
$ sudo systemctl start kibana.service
$ sudo systemctl start logstash.service
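Once the services are up, each node should answer on port 9200. A sketch of what to look for — the JSON below is a hypothetical sample of `curl -s localhost:9200` output, used here so the extraction line has something to run against; on a live node, replace the variable with the actual curl call:

```shell
# Hypothetical sample of `curl -s localhost:9200` on node-181:
response='{
  "name" : "node-181",
  "cluster_name" : "ccnu-resource-cluster",
  "version" : { "number" : "6.2.3" },
  "tagline" : "You Know, for Search"
}'
# Extract the cluster name to confirm the node came up with the right config:
echo "$response" | grep -o '"cluster_name" : "[^"]*"'
```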
Configuration
All three machines share the same base configuration in /etc/elasticsearch/elasticsearch.yml; only the node-specific values (node.name and the role flags) differ per node.
Configure 181 as the master node
"""
# Set the cluster name
cluster.name: ccnu-resource-cluster
node.name: node-181
node.master: true
node.data: false
node.ingest: false
network.host: 0.0.0.0
http.port: 9200
http.cors.enabled: true
http.cors.allow-origin: "*"
discovery.zen.ping.unicast.hosts: ["192.168.71.181"]
"""
Configure 182 as a data and ingest node
"""
# Set the cluster name
cluster.name: ccnu-resource-cluster
node.name: node-182
node.master: false
node.data: true
node.ingest: true
network.host: 0.0.0.0
http.port: 9200
http.cors.enabled: true
http.cors.allow-origin: "*"
discovery.zen.ping.unicast.hosts: ["192.168.71.181"]
"""
Configure 183 as a data and ingest node
"""
# Set the cluster name
cluster.name: ccnu-resource-cluster
node.name: node-183
node.master: false
node.data: true
node.ingest: true
network.host: 0.0.0.0
http.port: 9200
http.cors.enabled: true
http.cors.allow-origin: "*"
discovery.zen.ping.unicast.hosts: ["192.168.71.181"]
"""
Glossary
- cluster.name: must be identical on every node of the same cluster
- node.master: whether this node is eligible to be elected master; the master manages cluster-level state
- node.data: whether this node holds data and performs data-related work such as shard-level CRUD, search, and aggregations. These operations are CPU-, memory-, and I/O-intensive
- node.ingest: whether this node can run ingest pipelines that pre-process documents before indexing
- discovery.zen.ping.unicast.hosts: the seed list of hosts a new node pings to discover the cluster; it should list the master-eligible nodes
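The role flags above can be checked once the cluster has formed. The table below is a hypothetical sample of `curl -s 'localhost:9200/_cat/nodes?v'` output for this cluster; in the node.role column, m = master-eligible, d = data, i = ingest, and * marks the elected master:

```shell
# Hypothetical sample of _cat/nodes output for the three nodes configured above:
nodes='ip             node.role master name
192.168.71.181 m         *      node-181
192.168.71.182 di        -      node-182
192.168.71.183 di        -      node-183'
# Print each node name with its roles (skip the header row):
echo "$nodes" | awk 'NR > 1 { print $4, $2 }'
```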
Configure Kibana
$ sudo vim /etc/kibana/kibana.yml
"""
server.host: "192.168.71.181"
"""
Restart
$ sudo systemctl restart elasticsearch.service
$ sudo systemctl restart kibana.service
Maintenance
Check cluster and node status
curl localhost:9200/_cluster/health?pretty
curl localhost:9200/_nodes/http?pretty
curl localhost:9200/_nodes/jvm?pretty
curl localhost:9200/_nodes/ingest?pretty
curl localhost:9200/_nodes/os?pretty
curl localhost:9200/_nodes/plugins?pretty
curl localhost:9200/_nodes/process?pretty
curl localhost:9200/_nodes/settings?pretty
curl localhost:9200/_nodes/thread_pool?pretty
curl localhost:9200/_nodes/transport?pretty
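For scripted monitoring, the status field of `_cluster/health` (green / yellow / red) is usually what you want. A sketch using a hypothetical sample response; on a live cluster, replace the variable with the actual `curl -s localhost:9200/_cluster/health?pretty` call:

```shell
# Hypothetical sample of `curl -s localhost:9200/_cluster/health?pretty`:
health='{
  "cluster_name" : "ccnu-resource-cluster",
  "status" : "green",
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 2
}'
# Pull out the status field:
status=$(echo "$health" | grep '"status"' | cut -d'"' -f4)
echo "cluster status: $status"
```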
Troubleshooting
failed to send join request to master, with the same id but is a different node instance
Solution:
The issue was copying the elasticsearch folder from one node to another over scp. Elasticsearch saves the node id in the elasticsearch/data/ folder, so two nodes end up with the same id. Deleting the data folder on one node and restarting it brings the cluster up.
# WARNING: this deletes ALL index data stored on this node; only do it on a
# node whose data is replicated elsewhere or can be re-indexed.
$ sudo systemctl stop elasticsearch.service
$ sudo rm -rf /var/lib/elasticsearch/*
$ sudo systemctl restart elasticsearch.service