Introduction to ECK
Elastic Cloud on Kubernetes (ECK) builds on the Kubernetes operator pattern to automate the deployment, management, and orchestration of Elasticsearch, Kibana, and APM Server on a Kubernetes cluster.
ECK goes well beyond simplifying the initial deployment of Elasticsearch and Kibana on Kubernetes; it focuses on streamlining all day-2 operations, such as:
- Managing and monitoring multiple clusters
- Upgrading to new cluster versions with ease
- Scaling cluster capacity up and down
- Changing cluster configuration
- Dynamically resizing local storage (including Elastic Local Volume, a local storage driver)
- Taking backups
All Elasticsearch clusters launched on ECK are secured by default, meaning encryption is enabled and the cluster is protected by a strong default password from the moment it is created.
Official site: https://www.elastic.co/cn/elastic-cloud-kubernetes
Project repository: https://github.com/elastic/cloud-on-k8s
Deploying ECK
References:
https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-quickstart.html
https://github.com/elastic/cloud-on-k8s/tree/master/config/recipes/beats
https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-volume-claim-templates.html
https://github.com/elastic/cloud-on-k8s/tree/master/config/samples
Environment:
Prepare three nodes; in this setup the master node has been made schedulable for pods:
[root@master01 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master01 Ready master 7d v1.18.2
node01 Ready <none> 7d v1.18.2
node02 Ready <none> 7d v1.18.2
ECK version to deploy: v1.1.0
Preparing NFS storage
ECK data needs to be persisted. For a simple test you can use an emptyDir ephemeral volume, or use NFS, Rook, or another persistent storage backend. For this walkthrough, an NFS server is run temporarily in Docker on the master01 node to provide the storage backing the PVCs.
docker run -d \
  --name nfs-server \
  --privileged \
  --restart always \
  -p 2049:2049 \
  -v /nfs-share:/nfs-share \
  -e SHARED_DIRECTORY=/nfs-share \
  itsthenetwork/nfs-server-alpine:latest
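Before wiring the export into Kubernetes, you can sanity-check it with a quick test mount from any node that has the NFS client installed (nfs-utils, installed in a later step). This is a minimal sketch: 192.168.93.11 is master01's IP from this walkthrough, the NFSv4 export root is /, and /mnt/nfs-test is just an illustrative mount point:
mkdir -p /mnt/nfs-test
mount -t nfs4 192.168.93.11:/ /mnt/nfs-test
touch /mnt/nfs-test/test-file && ls /mnt/nfs-test   # should list test-file
umount /mnt/nfs-test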
Deploy nfs-client-provisioner to dynamically provision NFS storage. 192.168.93.11 is the IP address of the master01 node; since the server exports NFSv4, nfs.path can simply be set to /.
Here helm is used to install nfs-client-provisioner from the Alibaba Cloud helm repository.
helm repo add apphub https://apphub.aliyuncs.com
helm install nfs-client-provisioner \
  --set nfs.server=192.168.93.11 \
  --set nfs.path=/ \
  apphub/nfs-client-provisioner
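If the release installed correctly, the provisioner pod should be running in the current namespace; a simple grep avoids assuming the chart's exact labels:
kubectl get pods | grep nfs-client-provisioner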
Check the StorageClass that was created. The default name is nfs-client; this name is referenced below when deploying Elasticsearch:
[root@master01 ~]# kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
nfs-client cluster.local/nfs-client-provisioner Delete Immediate true 172m
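Optionally, nfs-client can be marked as the default StorageClass so that PVCs which omit storageClassName also land on NFS. The manifests below set storageClassName explicitly, so this step is not required here:
kubectl patch storageclass nfs-client \
  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'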
Install the NFS client and enable the rpcbind service on all nodes:
yum install -y nfs-utils
systemctl enable --now rpcbind
Installing the ECK operator
Deploy ECK version 1.1.0:
kubectl apply -f https://download.elastic.co/downloads/eck/1.1.0/all-in-one.yaml
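You can follow the operator logs to confirm it started cleanly:
kubectl -n elastic-system logs -f statefulset.apps/elastic-operator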
Check the created pod:
[root@master01 ~]# kubectl -n elastic-system get pods
NAME READY STATUS RESTARTS AGE
elastic-operator-0 1/1 Running 1 17m
Check the created CRDs. Three were registered: apmservers, elasticsearches, and kibanas:
[root@master01 ~]# kubectl get crd | grep elastic
apmservers.apm.k8s.elastic.co 2020-04-27T16:23:08Z
elasticsearches.elasticsearch.k8s.elastic.co 2020-04-27T16:23:08Z
kibanas.kibana.k8s.elastic.co 2020-04-27T16:23:08Z
Deploying Elasticsearch and Kibana
Download the example YAML from the GitHub release to the local machine; version 1.1.0 is used here:
curl -L -o cloud-on-k8s-1.1.0.tar.gz https://github.com/elastic/cloud-on-k8s/archive/1.1.0.tar.gz
tar -zxf cloud-on-k8s-1.1.0.tar.gz
cd cloud-on-k8s-1.1.0/config/recipes/beats/
Create the namespace:
kubectl apply -f 0_ns.yaml
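0_ns.yaml simply creates the beats namespace that all the following resources are deployed into; it is roughly equivalent to:
kubectl create namespace beats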
Deploy Elasticsearch and Kibana. Setting count to 3 deploys three Elasticsearch nodes (you could also start with a single node and scale out later); storageClassName is set to nfs-client; and the http section sets the service type to NodePort.
$ cat 1_monitor.yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: monitor
  namespace: beats
spec:
  version: 7.6.2
  nodeSets:
  - name: mdi
    count: 3
    config:
      node.master: true
      node.data: true
      node.ingest: true
      node.store.allow_mmap: false
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 50Gi
        storageClassName: nfs-client
  http:
    service:
      spec:
        type: NodePort
---
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: monitor
  namespace: beats
spec:
  version: 7.6.2
  count: 1
  elasticsearchRef:
    name: "monitor"
  http:
    service:
      spec:
        type: NodePort
Apply the YAML to deploy Elasticsearch and Kibana:
kubectl apply -f 1_monitor.yaml
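Cluster formation takes a few minutes; you can watch the resource until HEALTH turns green:
kubectl -n beats get elasticsearch monitor -w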
If the images cannot be pulled from docker.elastic.co, you can pull them from Docker Hub and retag them manually:
docker pull elastic/elasticsearch:7.6.2
docker pull elastic/kibana:7.6.2
docker tag elastic/elasticsearch:7.6.2 docker.elastic.co/elasticsearch/elasticsearch:7.6.2
docker tag elastic/kibana:7.6.2 docker.elastic.co/kibana/kibana:7.6.2
Check the created Elasticsearch and Kibana resources, including health status, version, and node count:
[root@master01 ~]# kubectl -n beats get elasticsearch
NAME HEALTH NODES VERSION PHASE AGE
monitor green 3 7.6.2 Ready 77m
[root@master01 ~]# kubectl -n beats get kibana
NAME HEALTH NODES VERSION AGE
monitor green 1 7.6.2 137m
Check the created pods:
[root@master01 ~]# kubectl -n beats get pods
NAME READY STATUS RESTARTS AGE
monitor-es-mdi-0 1/1 Running 0 109s
monitor-es-mdi-1 1/1 Running 0 9m
monitor-es-mdi-2 1/1 Running 0 3m26s
monitor-kb-54cbdf6b8c-jklqm 1/1 Running 0 9m
Check the created PVs and PVCs:
[root@master01 ~]# kubectl -n beats get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
elasticsearch-data-monitor-es-mdi-0 Bound pvc-882be3e2-b752-474b-abea-7827b492d83d 50Gi RWO nfs-client 3m33s
elasticsearch-data-monitor-es-mdi-1 Bound pvc-8e6ed97e-7524-47f5-b02c-1ff0d2af33af 50Gi RWO nfs-client 3m33s
elasticsearch-data-monitor-es-mdi-2 Bound pvc-31b5f80d-8fbd-4762-ab69-650eb6619a2e 50Gi RWO nfs-client 3m33s
[root@master01 ~]# kubectl -n beats get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-31b5f80d-8fbd-4762-ab69-650eb6619a2e 50Gi RWO Delete Bound beats/elasticsearch-data-monitor-es-mdi-2 nfs-client 3m35s
pvc-882be3e2-b752-474b-abea-7827b492d83d 50Gi RWO Delete Bound beats/elasticsearch-data-monitor-es-mdi-0 nfs-client 3m35s
pvc-8e6ed97e-7524-47f5-b02c-1ff0d2af33af 50Gi RWO Delete Bound beats/elasticsearch-data-monitor-es-mdi-1 nfs-client 3m35s
The actual data is stored under the /nfs-share directory on the master01 node:
[root@master01 ~]# tree /nfs-share/ -L 2
/nfs-share/
├── beats-elasticsearch-data-monitor-es-mdi-0-pvc-250c8eef-4b7e-4230-bd4f-36b911a1d61b
│ └── nodes
├── beats-elasticsearch-data-monitor-es-mdi-1-pvc-c1a538df-92df-4a8e-9b7b-fceb7d395eab
│ └── nodes
└── beats-elasticsearch-data-monitor-es-mdi-2-pvc-dc21c1ba-4a17-4492-9890-df795c06213a
└── nodes
Check the created services. The Elasticsearch and Kibana service types were changed to NodePort at deployment time, making them reachable from outside the cluster.
[root@master01 ~]# kubectl -n beats get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
monitor-es-http NodePort 10.96.82.186 <none> 9200:31575/TCP 9m36s
monitor-es-mdi ClusterIP None <none> <none> 9m34s
monitor-kb-http NodePort 10.97.213.119 <none> 5601:30878/TCP 9m35s
Elasticsearch has authentication enabled by default. Retrieve the password for the elastic user:
PASSWORD=$(kubectl -n beats get secret monitor-es-elastic-user -o=jsonpath='{.data.elastic}' | base64 --decode)
echo $PASSWORD
Accessing Elasticsearch
Access Elasticsearch in a browser:
https://192.168.93.11:31575/
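Or query it from the command line, reusing the $PASSWORD variable retrieved above (31575 is the NodePort shown in the service listing; -k skips verification of the self-signed certificate):
curl -u "elastic:$PASSWORD" -k https://192.168.93.11:31575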
Alternatively, access the Elasticsearch endpoint from inside the Kubernetes cluster (the test pod is started in the beats namespace so the monitor-es-http service name resolves):
[root@master01 ~]# kubectl -n beats run -it --rm centos --image=centos -- sh
sh-4.4#
sh-4.4# PASSWORD=gf4mgr5fsbstwx76b8zl8m2g
sh-4.4# curl -u "elastic:$PASSWORD" -k "https://monitor-es-http:9200"
{
"name" : "quickstart-es-default-2",
"cluster_name" : "quickstart",
"cluster_uuid" : "mrDgyhp7QWa7iVuY8Hx6gA",
"version" : {
"number" : "7.6.2",
"build_flavor" : "default",
"build_type" : "docker",
"build_hash" : "ef48eb35cf30adf4db14086e8aabd07ef6fb113f",
"build_date" : "2020-03-26T06:34:37.794943Z",
"build_snapshot" : false,
"lucene_version" : "8.4.0",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
Accessing Kibana
Access Kibana in a browser; the username and password are the same as for Elasticsearch. Choose "Explore on my own"; you can see that no index has been created yet.
https://192.168.93.11:30878/
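As an alternative to the NodePort, kubectl port-forward can expose Kibana locally (then browse to https://localhost:5601):
kubectl -n beats port-forward service/monitor-kb-http 5601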
Deploying Filebeat
Use the Docker Hub image and change the version to 7.6.2:
sed -i 's#docker.elastic.co/beats/filebeat:7.6.0#elastic/filebeat:7.6.2#g' 2_filebeat-kubernetes.yaml
kubectl apply -f 2_filebeat-kubernetes.yaml
Check the created pods:
[root@master01 beats]# kubectl -n beats get pods -l k8s-app=filebeat
NAME READY STATUS RESTARTS AGE
filebeat-dctrz 1/1 Running 0 9m32s
filebeat-rgldp 1/1 Running 0 9m32s
filebeat-srqf4 1/1 Running 0 9m32s
If you keep the original docker.elastic.co image references and they cannot be pulled, pull from Docker Hub and retag manually:
docker pull elastic/filebeat:7.6.2
docker tag elastic/filebeat:7.6.2 docker.elastic.co/beats/filebeat:7.6.2
docker pull elastic/metricbeat:7.6.2
docker tag elastic/metricbeat:7.6.2 docker.elastic.co/beats/metricbeat:7.6.2
Access Kibana again; the filebeat index can now be discovered. Fill in the index pattern, select @timestamp, and create the index pattern.
View the collected logs.
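The new index can also be confirmed directly against Elasticsearch, reusing $PASSWORD and the NodePort from earlier:
curl -u "elastic:$PASSWORD" -k "https://192.168.93.11:31575/_cat/indices?v"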
Deploying Metricbeat
sed -i 's#docker.elastic.co/beats/metricbeat:7.6.0#elastic/metricbeat:7.6.2#g' 3_metricbeat-kubernetes.yaml
kubectl apply -f 3_metricbeat-kubernetes.yaml
Check the created pods:
[root@master01 beats]# kubectl -n beats get pods -l k8s-app=metricbeat
NAME READY STATUS RESTARTS AGE
metricbeat-6956d987bb-c96nq 1/1 Running 0 76s
metricbeat-6h42f 1/1 Running 0 76s
metricbeat-dzkxq 1/1 Running 0 76s
metricbeat-lffds 1/1 Running 0 76s
Kibana now shows an additional metricbeat index.