Environment:
CentOS 7
192.168.59.130: jdk, zookeeper, kafka, filebeat, elasticsearch
192.168.59.131: jdk, zookeeper, kafka, logstash
192.168.59.132: jdk, zookeeper, kafka, kibana
I. Basic environment setup
1: Synchronize time on all 3 nodes
ntpdate pool.ntp.org
2: Disable the firewall and SELinux on all 3 nodes
systemctl stop firewalld
setenforce 0
3: Set the hostname on each of the 3 nodes
hostnamectl set-hostname kafka1   # on 192.168.59.130
hostnamectl set-hostname kafka2   # on 192.168.59.131
hostnamectl set-hostname kafka3   # on 192.168.59.132
4: Edit the hosts file on all 3 nodes
vim /etc/hosts
192.168.59.130 kafka1
192.168.59.131 kafka2
192.168.59.132 kafka3
5: Install the JDK
yum -y install jdk-8u131-linux-x64_.rpm
6: Install ZooKeeper on all 3 nodes
tar xzf zookeeper-3.4.14.tar.gz
mv zookeeper-3.4.14 /usr/local/zookeeper
cd /usr/local/zookeeper/conf/
mv zoo_sample.cfg zoo.cfg
Edit zoo.cfg and append the cluster member list:
vim zoo.cfg
server.1=192.168.59.130:2888:3888
server.2=192.168.59.131:2888:3888
server.3=192.168.59.132:2888:3888
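For reference, a minimal complete zoo.cfg might look like the following; everything except the three server.N lines is simply the zoo_sample.cfg defaults, so only the member list is actually added:

```
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/tmp/zookeeper
clientPort=2181
server.1=192.168.59.130:2888:3888
server.2=192.168.59.131:2888:3888
server.3=192.168.59.132:2888:3888
```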
Create the data directory (the sample config's default dataDir is /tmp/zookeeper):
mkdir /tmp/zookeeper
Set the myid on each node (it must match the server.N entry for that host):
echo "1" > /tmp/zookeeper/myid   # on 192.168.59.130
echo "2" > /tmp/zookeeper/myid   # on 192.168.59.131
echo "3" > /tmp/zookeeper/myid   # on 192.168.59.132
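Rather than typing a different echo on each node, the myid can be derived from the host's last IP octet, since the addressing here maps .130→1, .131→2, .132→3. A small sketch under that assumption:

```shell
#!/bin/sh
# Derive the ZooKeeper myid from the last octet of this node's IP.
# Assumes the scheme above: 192.168.59.130 -> 1, .131 -> 2, .132 -> 3.
ip=192.168.59.130               # substitute this node's address
myid=$(( ${ip##*.} - 129 ))     # ${ip##*.} strips everything up to the last dot
echo "$myid"                    # this value goes into /tmp/zookeeper/myid
```

On 192.168.59.131 the same script prints 2, so one script works unchanged on all three nodes.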
7: Start the ZooKeeper service
/usr/local/zookeeper/bin/zkServer.sh start
7.1 Check ZooKeeper status
/usr/local/zookeeper/bin/zkServer.sh status
8: Install Kafka on all 3 nodes
tar xzf kafka_2.11-2.2.0.tgz
mv kafka_2.11-2.2.0 /usr/local/kafka
Edit the broker configuration on each node:
vim /usr/local/kafka/config/server.properties
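The original does not show what is changed in server.properties. At minimum, each broker typically needs a unique broker.id, its own listener address, and the ZooKeeper connect string. A hedged sketch for 192.168.59.130 (the commented values are assumptions; adjust per node):

```
broker.id=0                                  # must be unique: e.g. 1 on kafka2, 2 on kafka3
listeners=PLAINTEXT://192.168.59.130:9092    # this node's own address
zookeeper.connect=192.168.59.130:2181,192.168.59.131:2181,192.168.59.132:2181
log.dirs=/tmp/kafka-logs
```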
9: Start Kafka
/usr/local/kafka/bin/kafka-server-start.sh -daemon /usr/local/kafka/config/server.properties
Verify the broker is listening on port 9092:
netstat -lptnu | grep 9092
tcp6 0 0 :::9092 :::* LISTEN 15555/java
10: Create a topic
/usr/local/kafka/bin/kafka-topics.sh --create --zookeeper 192.168.59.130:2181 --replication-factor 2 --partitions 3 --topic wg007
Created topic wg007.
10.1 Simulate a producer
cd /usr/local/kafka/bin/
./kafka-console-producer.sh --broker-list 192.168.59.130:9092 --topic wg007
>
10.2 Simulate a consumer
/usr/local/kafka/bin/kafka-console-consumer.sh --bootstrap-server 192.168.59.130:9092 --topic wg007 --from-beginning
10.3 List the current topics
/usr/local/kafka/bin/kafka-topics.sh --list --zookeeper 192.168.59.130:2181
__consumer_offsets
wg007
11: Install Filebeat (the log collector)
rpm -ivh filebeat-6.8.12-x86_64.rpm
cd /etc/filebeat/
Rename the original config file (effectively a backup):
mv filebeat.yml filebeat1.yml
vim filebeat.yml
with the following contents:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/messages
output.kafka:
  enabled: true
  hosts: ["192.168.59.130:9092","192.168.59.131:9092","192.168.59.132:9092"]
  topic: msg
Start the Filebeat service:
systemctl start filebeat
tailf /var/log/filebeat/filebeat
11.1 On any node, confirm the new topic exists:
/usr/local/kafka/bin/kafka-topics.sh --list --zookeeper 192.168.59.130:2181
Simulate a consumer to verify the data actually arrived:
/usr/local/kafka/bin/kafka-console-consumer.sh --bootstrap-server 192.168.59.130:9092 --topic msg --from-beginning
A stream of log data means it is working.
The next step is to have Logstash consume the data.
Install Logstash on 192.168.59.131:
yum -y install logstash-6.6.0.rpm
vim /etc/logstash/conf.d/msg.conf
input {
  kafka {
    bootstrap_servers => ["192.168.59.130:9092", "192.168.59.131:9092", "192.168.59.132:9092"]
    group_id => "logstash"
    topics => ["msg"]
    consumer_threads => 5
  }
}

output {
  elasticsearch {
    hosts => ["192.168.59.130:9200"]
    index => "msg-%{+YYYY.MM.dd}"
  }
}
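The %{+YYYY.MM.dd} in the index name is a Logstash date pattern, so Elasticsearch gets one index per day. Today's resulting index name can be previewed from the shell (note Logstash uses Joda-style YYYY, while date(1) spells the same thing %Y):

```shell
# Preview the index name Logstash would render for msg-%{+YYYY.MM.dd} today
date +msg-%Y.%m.%d
```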
Start the service:
systemctl start logstash
tailf /var/log/logstash/logstash-plain.log
ss -nltp |grep 9600
Install Elasticsearch on 192.168.59.130
yum -y install elasticsearch-6.6.2.rpm
vim /etc/elasticsearch/elasticsearch.yml
Modify lines 17, 23, 55, and 59 (in the 6.x default file these correspond to cluster.name, node.name, network.host, and http.port).
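The actual values are not shown in the original; a sketch consistent with the rest of this setup (the cluster name wg007 matches the log file tailed in the verification step, since Elasticsearch names its log after cluster.name — the other values are assumptions):

```
cluster.name: wg007              # line 17; the log file becomes wg007.log
node.name: kafka1                # line 23
network.host: 192.168.59.130     # line 55
http.port: 9200                  # line 59
```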
Start the service and verify it came up:
systemctl start elasticsearch
tailf /var/log/elasticsearch/wg007.log
Install Kibana on 192.168.59.132
yum -y install kibana-6.6.2-x86_64.rpm
vim /etc/kibana/kibana.yml
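The kibana.yml edits are not shown either; typically three settings are changed so Kibana listens externally and points at Elasticsearch (elasticsearch.hosts is the key name from 6.6 onward; the values below are assumptions matching this cluster):

```
server.port: 5601
server.host: "192.168.59.132"
elasticsearch.hosts: ["http://192.168.59.130:9200"]
```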
systemctl start kibana
Browse to 192.168.59.132:5601 to log in.
Done.