The server side uses an ELK cluster to manage logs. The software installed is elasticsearch + kibana + logstash, and each component gets the x-pack plugin.

Notes:
1. OS: CentOS Linux release 7.5.1804 (Core)
2. ELK version: 6.3.0
3. X-Pack version: 6.3.0
4. elasticsearch is abbreviated as "es" below

Overview:
1. es: stores the data
2. logstash: collects logs and writes them into es
3. kibana: the visualization UI for es
4. x-pack: the access-control plugin for es, logstash, and kibana; it is the unified security plugin for the Elastic family

Downloads:
I. Download the 6.3.0 files from the official site:
1. Public signing key: https://artifacts.elastic.co/GPG-KEY-elasticsearch
2. es: https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.3.0.rpm
3. logstash: https://artifacts.elastic.co/downloads/logstash/logstash-6.3.0.rpm
4. kibana: https://artifacts.elastic.co/downloads/kibana/kibana-6.3.0-x86_64.rpm
5. x-pack: https://artifacts.elastic.co/downloads/packs/x-pack/x-pack-6.2.4.zip
6. Put the downloaded files and insert.sh in the same directory. insert.sh is as follows:
#!/bin/sh
# install es
rpm --import GPG-KEY-elasticsearch
shasum -a 512 -c elasticsearch-6.3.0.rpm.sha512
sudo rpm --install elasticsearch-6.3.0.rpm
# install kibana
shasum -a 512 -c kibana-6.3.0-x86_64.rpm.sha512
sudo rpm --install kibana-6.3.0-x86_64.rpm
# install logstash
shasum -a 512 -c logstash-6.3.0.rpm.sha512
sudo rpm --install logstash-6.3.0.rpm
# enable the services at boot
sudo /bin/systemctl daemon-reload
sudo /bin/systemctl enable elasticsearch.service
sudo /bin/systemctl enable kibana.service
sudo /bin/systemctl enable logstash.service
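The script verifies each rpm against a .sha512 file before installing. Elastic publishes one such checksum file alongside each artifact (assumption: the checksum URL is the download URL with ".sha512" appended, matching the elasticsearch checksum file the script already references). The sketch below shows the fetch pattern and, with a throwaway local file, why `-c` matters: plain `shasum -a 512 FILE` only prints a digest, while `shasum -a 512 -c FILE.sha512` actually compares it.

```shell
# Fetch pattern for the published checksum files (assumed URL layout):
#   curl -O https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.3.0.rpm.sha512

# Self-contained demonstration of checksum verification with a throwaway file:
echo "demo payload" > demo.rpm
shasum -a 512 demo.rpm > demo.rpm.sha512   # record the digest
shasum -a 512 -c demo.rpm.sha512 && echo "checksum verified"
rm -f demo.rpm demo.rpm.sha512
```

If verification fails, `shasum -c` prints `FAILED` and exits non-zero, so the `&&` chain stops before any install step would run.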
Installation:
I. Install ELK:
1. cd into the directory containing the downloaded files and run ./insert.sh
2. es install directory: /usr/share/elasticsearch
3. logstash install directory: /usr/share/logstash/
4. kibana install directory: /usr/share/kibana/
II. Install x-pack:
1. The plugin must be installed into elasticsearch, logstash, and kibana.
2. From each install directory, run the following (the path must point at the downloaded zip):
es plugin install: bin/elasticsearch-plugin install file:///path/to/file/x-pack-6.2.4.zip
logstash plugin install: bin/logstash-plugin install file:///path/to/file/x-pack-6.2.4.zip
kibana plugin install: bin/kibana-plugin install file:///path/to/file/x-pack-6.2.4.zip
3. Start es: sudo systemctl start elasticsearch.service
4. From the es install directory run bin/x-pack/setup-passwords auto. It generates three passwords, one each for es, logstash, and kibana.
III. Configure the components:
1. elasticsearch:
Run cd /etc/elasticsearch to enter the config directory.
Run vi elasticsearch.yml and set network.host: 0.0.0.0 to allow external connections, so front-end applications can query the data. Save and exit.
Restart the service: sudo systemctl restart elasticsearch.service
2. logstash:
Run cd /etc/logstash to enter the directory.
Run vi logstash.yml and set http.host: "0.0.0.0" to allow external connections for log collection.
Append the following at the end to enable monitoring via es:
xpack.monitoring.elasticsearch.url: "http://localhost:9200"
xpack.monitoring.elasticsearch.username: "elastic"
xpack.monitoring.elasticsearch.password: "********"
Restart the service: sudo systemctl restart logstash.service
3. kibana:
Run cd /etc/kibana to enter the directory.
Run vi kibana.yml and set server.host: "0.0.0.0"
Set the es credentials generated earlier:
es account: elasticsearch.username: "elastic"
es password: elasticsearch.password: "********"
Restart the service: sudo systemctl restart kibana.service
IV. Start/stop commands:
1. es:
sudo systemctl start elasticsearch.service
sudo systemctl stop elasticsearch.service
sudo systemctl restart elasticsearch.service
2. logstash:
sudo systemctl start logstash.service
sudo systemctl stop logstash.service
sudo systemctl restart logstash.service
3. kibana:
sudo systemctl start kibana.service
sudo systemctl stop kibana.service
sudo systemctl restart kibana.service
V. Log file locations:
1. es: /var/log/elasticsearch/
2. logstash: /var/log/logstash/
3. kibana: /var/log/kibana/
VI. Usage after startup:
Open kibana in a browser at http://localhost:5601/ and log in with the es account and password.
Configure logstash to collect logs:
1. Collect from log files: the files are watched for changes and new lines are collected incrementally.
2. Open a TCP port so applications can push data directly.
3. Enter the directory: cd /etc/logstash/conf.d/
4. Create a new config file: vi logs.conf and enter the following:
input {
    file {
        path => "/data/securityopdata/synctool/logs/*.log"
        type => "logfile"
        start_position => "beginning"
        #sincedb_path => "/dev/null"
        codec => multiline {
            pattern => "^%{TIMESTAMP_ISO8601}"
            what => "previous"
            negate => true
        }
        add_field => {
            host_name => "郜金丹的空间"
            project_name => "synctool"
        }
    }
    file {
        path => "/data/securityopdata/syncapi/logs/*.log"
        type => "logfile"
        start_position => "beginning"
        #sincedb_path => "/dev/null"
        codec => multiline {
            pattern => "^%{TIMESTAMP_ISO8601}"
            what => "previous"
            negate => true
        }
        add_field => {
            host_name => "郜金丹的空间"
            project_name => "syncapi"
        }
    }
}
filter {
    if [project_name] == "syncapi" or [project_name] == "synctool" {
        grok {
            match => {
                "message" => "%{TIMESTAMP_ISO8601:createTime}\s*\[\s*%{WORD:level}\s*\]\s*\[\s*(?<logger_name>.*?)\s*\]\s*(?<message>.*)"
            }
            # the grok pattern only captures "message", so only it needs overwriting
            overwrite => ["message"]
        }
        mutate {
            update => {"host" => "10.10.1.169"}
        }
    }
    date {
        match => ["createTime","yyyy-MM-dd HH:mm:ss","UNIX"]
    }
}
output {
    # print events to stdout for debugging; remove once the pipeline works
    stdout { codec => rubydebug }
    if [type] == "logfile" {
        elasticsearch {
            hosts => ["10.10.1.169:9200"]
            user => "elastic"
            password => "*******"
            index => "log-%{+YYYY.MM.dd}"
            document_type => "log"
        }
    } else {
        elasticsearch {
            hosts => ["10.10.1.169:9200"]
            user => "elastic"
            password => "*******"
        }
    }
    if [type] == "logfile" and ([level] == "ERROR" or [level] == "FATAL") {
        email {
            to => "[email protected]"
            from => "[email protected]"
            username => "[email protected]"
            password => "******"
            address => "mail.testin.cn"
            port => 587
            via => "smtp"
            use_tls => true
            subject => "Severe error in project %{project_name}"
            body => "Host name: %{host_name}\n Host IP: %{host}\n Project: %{project_name}\n Time: %{createTime}\n Level: %{level}\n Logger: %{logger_name}\n Message: %{message}"
            authentication => "plain"
        }
    }
}
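After saving logs.conf, it is worth checking the pipeline syntax before restarting the service. A sketch, assuming the rpm install layout described above (/usr/share/logstash for the binaries, /etc/logstash for settings):

```shell
# validate logs.conf without starting the pipeline (-t is the short form)
sudo -u logstash /usr/share/logstash/bin/logstash \
    --path.settings /etc/logstash \
    --config.test_and_exit \
    -f /etc/logstash/conf.d/logs.conf

# if the config is reported OK, restart the service and watch its log
sudo systemctl restart logstash.service
sudo tail -f /var/log/logstash/logstash-plain.log
```

Running the check as the logstash user avoids creating root-owned files that would later block the service from reading its own sincedb state.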