ELK platform practice (just playing around)

Lab 1: four hosts in total:
kvm1: ES cluster master
kvm2: ES cluster slave
(master-slave mode:
after the master receives log data it shards part of it onto the slave (a random portion); at the same time, master and slave each build replicas of their shards and place them on the other node, so no data is lost. If the master goes down, repointing the elasticsearch host in the client's log-collection config at the slave keeps ELK log collection and the web view running. A minimal config sketch follows this list.)
kvm3: runs Kibana, providing the web UI
kvm4: the client whose logs are to be collected
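A minimal sketch of what the two-node cluster configuration could look like (the cluster name and node names are illustrative assumptions, and the discovery setting follows the 2.x-era zen syntax in use when this was written):

# /etc/elasticsearch/elasticsearch.yml on kvm1; on kvm2 change node.name and network.host accordingly
cluster.name: my-es                                  # both nodes must share the same cluster name to join
node.name: kvm1
network.host: 192.168.122.82                         # listen on the node's own IP, not just localhost
http.port: 9200
discovery.zen.ping.unicast.hosts: ["kvm1", "kvm2"]   # let the two nodes discover each other

Once both nodes have joined, shard and replica placement across kvm1 and kvm2 can be checked with curl 'http://<kvm1 IP>:9200/_cat/shards?v'.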
Lab goal: collect the client's (kvm4's) logs and display them on the page provided by Kibana.
1.1
To make the test easy to verify, change where the test log is written

[root@client ~]# vim /etc/rsyslog.conf 
local6.*                                               /var/log/test.log
[root@client ~]# systemctl restart rsyslog.service
# write a few entries to the local6 facility to generate test data
[root@client ~]# logger -p local6.info  "zhangkaili"
[root@client ~]# logger -p local6.info  "zhangkaili"
[root@client ~]# logger -p local6.info  "zhangkaili"
[root@client ~]# logger -p local6.info  "zhangkaili"
[root@client ~]# logger -p local6.info  "zhangkaili"
[root@client ~]# logger -p local6.info  "zhangkaili"
[root@client ~]# logger -p local6.info  "zhangkaili"
# write a logstash pipeline that ships this log file to the ES cluster
[root@client ~]# cat file.conf 
input {
  file {
    path => "/var/log/test.log"
    type => "message"
    start_position => "beginning"
  }
}
output {
  elasticsearch {
    hosts => ["<kvm1 IP>:9200"]
    index => "test08-%{+YYYY.MM.dd}"
  }
}
[root@client ~]# /opt/logstash/bin/logstash -f file.conf &

Check the ES side:
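Besides the head-plugin screenshots, the index can also be checked from the command line (a quick sketch; the exact index name carries the date the test was run):

[root@kvm1 ~]# curl 'http://<kvm1 IP>:9200/_cat/indices?v' | grep test08                    # the test08-* index should be listed
[root@kvm1 ~]# curl 'http://<kvm1 IP>:9200/test08-*/_search?q=message:zhangkaili&pretty'    # shows the logger test entries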
You can see the test08 index, and its documents contain the test string zhangkaili that was written with logger. This method is rather slow; it took several minutes for the data to show up.
Lab 2: the client does not install logstash at all. Instead it enables log forwarding in rsyslog and pushes its logs over TCP port 514 to the ES cluster master, where logstash forwards them into the ES cluster.
(The client configures remote log forwarding to the elasticsearch node --> the elasticsearch node runs logstash with a syslog input listening on port 514 --> logstash receives the logs on 514 and writes them into elasticsearch.)

[root@client ~]# vim /etc/rsyslog.conf
# @@ forwards over TCP (a single @ would use UDP); the default port is 514
*.*                                                     @@<kvm1 IP>
[root@client ~]# systemctl restart rsyslog.service 
On kvm1:
[root@kvm1 ~]# cat file.conf
input {
  syslog {
    type => "system-syslog"
    host => "<kvm1 IP>"
    port => 514
  }
}
output {
  elasticsearch {
    hosts => ["<kvm1 IP>:9200"]
    index => "test09-%{+YYYY.MM.dd}"
  }
}
[root@kvm1 ~]# logstash -f file.conf &
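One caveat: port 514 is below 1024, so logstash can only bind it when run as root (as it is here). If logstash runs as an unprivileged user, a higher port works the same way; the 5514 below is just an illustrative choice, not from the original:

input {
  syslog {
    type => "system-syslog"
    host => "<kvm1 IP>"
    port => 5514          # unprivileged port, no root required
  }
}
# and on the client, forward to that port explicitly:
# *.*    @@<kvm1 IP>:5514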
The client writes some log entries:
[root@client ~]# logger "xiaxiaxiaxiaxia"
[root@client ~]# logger "xiaxiaxiaxiaxia"
[root@client ~]# logger "xiaxiaxiaxiaxia"
[root@client ~]# logger "xiaxiaxiaxiaxia"
[root@client ~]# logger "xiaxiaxiaxiaxia"
[root@client ~]# logger "xiaxiaxiaxiaxia"
[root@client ~]# logger "xiaxiaxiaxiaxia"
[root@client ~]# logger "xiaxiaxiaxiaxia"
[root@client ~]# logger "xiaxiaxiaxiaxia"
[root@client ~]# logger "xiaxiaxiaxiaxia"
[root@client ~]# logger "xiaxiaxiaxiaxia"

The test results:
You can see the test09 index, and its documents contain the test string xiaxiaxia (the rsyslog rule forwards every facility at every severity, so other system messages appear as well).
This method is noticeably faster than Lab 1.
Lab 3: TCP log collection practice
On kvm1 (both logstash and ES are installed; here we mainly exercise logstash):

[root@kvm1 ~]# cat tcp.conf 
input {
  tcp {
    host => "<kvm1 IP>"
    port => "6666"
  }
}
output {
  stdout {
    codec => "rubydebug"
  }
}
# stdout prints the events to the terminal; the output could instead push to the ES cluster
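For example, the stdout block could be swapped for an elasticsearch output along these lines (the tcp-test index name is my own choice for illustration):

output {
  elasticsearch {
    hosts => ["<kvm1 IP>:9200"]
    index => "tcp-test-%{+YYYY.MM.dd}"
  }
}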

On the client, run a quick test with the nc tool

[root@client ~]# nc <kvm1 IP>  6666 </etc/hosts
This sends the contents of /etc/hosts to kvm1 on port 6666.

Check the result on kvm1:

[root@kvm1 ~]# OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
Settings: Default filter workers: 1
Logstash startup completed
{
       "message" => "127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4",
      "@version" => "1",
    "@timestamp" => "2018-10-15T12:12:19.080Z",
          "host" => "192.168.122.7",
          "port" => 58352
}
{
       "message" => "::1         localhost localhost.localdomain localhost6 localhost6.localdomain6",
      "@version" => "1",
    "@timestamp" => "2018-10-15T12:12:19.081Z",
          "host" => "192.168.122.7",
          "port" => 58352
}

Lab 5: put a middleware buffer between the client and elasticsearch. Collected log entries are first written to the middleware and then read from it into elasticsearch, so that a problem on the elasticsearch side does not lose logs.
Install redis on kvm1 (192.168.122.82); the relevant config settings:

daemonize yes
bind 192.168.122.82   (the local address redis listens on; set it to the host's Ethernet IP so other machines can connect, rather than only 127.0.0.1)
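After editing the config, restart redis so the new bind address takes effect. A sketch, assuming redis was installed as a systemd service (a source install would instead be started with redis-server plus the config file path):

[root@kvm1 ~]# systemctl restart redis
[root@kvm1 ~]# ss -tnlp | grep 6379        # confirm redis is listening on 192.168.122.82:6379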

The test file contents:

[root@client ~]# cat /var/log/test.log
hello
everone
it is a long day without you my firend

On the client, write a logstash config that pushes the file's contents into redis on kvm1

[root@client ~]# cat redis_test.conf 
input {
    file {
      path => "/var/log/test.log"
      start_position => "beginning"
    }
}
output {
     redis {
        host => "192.168.122.82"
        port => "6379"
        db => "6"
        data_type => "list"
        key => "test"
     }
}
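Run it the same way as in Lab 1 (same standalone logstash path on the client):

[root@client ~]# /opt/logstash/bin/logstash -f redis_test.conf &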

Check redis on kvm1 (192.168.122.82); the entries have been stored successfully:

[root@kvm1 ~]# redis-cli -h 192.168.122.82
192.168.122.82:6379> info
192.168.122.82:6379> select 6
OK
192.168.122.82:6379[6]> keys *
1) "test"
192.168.122.82:6379[6]> LLEN test
(integer) 3
192.168.122.82:6379[6]> LINDEX test -1
"{\"message\":\"it is a long day without you my firend\",\"@version\":\"1\",\"@timestamp\":\"2018-10-15T12:50:06.379Z\",\"path\":\"/var/log/test.log\",\"host\":\"client\"}"
192.168.122.82:6379[6]> LINDEX test -2
"{\"message\":\"everone\",\"@version\":\"1\",\"@timestamp\":\"2018-10-15T12:50:06.379Z\",\"path\":\"/var/log/test.log\",\"host\":\"client\"}"
192.168.122.82:6379[6]> LINDEX test -3
"{\"message\":\"hello\",\"@version\":\"1\",\"@timestamp\":\"2018-10-15T12:50:06.379Z\",\"path\":\"/var/log/test.log\",\"host\":\"client\"}"

On kvm1, write a logstash config that reads the buffered entries back out of redis

# read from redis on the local port 6379 (i.e. the entries buffered in redis)
[root@kvm1 ~]# cat redis2.conf
input {
    redis {
      host => "192.168.122.82"
      port => "6379"
      db => "6"
      data_type => "list"
      key => "test"
   }
}
output {
    elasticsearch {
      hosts => ["192.168.122.82:9200"]
      index => "redis-in-%{+YYYY.MM.dd}"
    }
}
[root@kvm1 ~]# logstash -f redis2.conf 
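A quick way to confirm that logstash is actually draining the buffer is to check the list length in redis again; it should drop back to 0 once the entries have been consumed (same session style as above):

[root@kvm1 ~]# redis-cli -h 192.168.122.82 -n 6 LLEN test
# expected to return (integer) 0 after logstash has read and removed the buffered entries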

Check the ES page:
In this lab redis runs on the ES cluster node itself; in a real deployment it could instead sit on a separate host between the clients and the ES cluster.

Reposted from blog.csdn.net/weixin_42275939/article/details/83063809