Deploying an ELK (Elasticsearch/Logstash/Kibana) log collection and analysis stack with Docker
First, install ELK (Elasticsearch/Logstash/Kibana).
Configuration files
- docker-compose.yml

```yaml
# docker-compose.yml
version: '3'
services:
  elasticsearch:
    image: elasticsearch:7.8.0
    container_name: elk-es
    restart: always
    environment:
      # Enable memory locking
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "TAKE_FILE_OWNERSHIP=true"
      # Start as a single node
      - discovery.type=single-node
    ulimits:
      # Lift the memory limits so memory locking can take effect
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ./logs/data:/usr/share/elasticsearch/data
      - ./logs:/usr/share/elasticsearch/logs
      - ./logs/plugins:/usr/share/elasticsearch/plugins
    ports:
      - "9200:9200"
  kibana:
    image: kibana:7.8.0
    container_name: elk-kibana
    restart: always
    depends_on:
      - elasticsearch # start Kibana only after Elasticsearch
    environment:
      ELASTICSEARCH_HOSTS: http://elk-es:9200
      I18N_LOCALE: zh-CN
    ports:
      - "5601:5601"
  logstash:
    image: logstash:7.8.0
    container_name: elk-logstash
    restart: always
    depends_on:
      - elasticsearch # start Logstash only after Elasticsearch
    environment:
      XPACK_MONITORING_ENABLED: "false"
      pipeline.batch.size: 10
    volumes:
      - ./conf/logstash/logstash-springboot.conf:/usr/share/logstash/pipeline/logstash.conf
    ports:
      - "4560:4560" # the TCP input port
```
- The Logstash pipeline config, logstash-springboot.conf
This configuration has a single input source. To distinguish logs from multiple systems, just configure multiple `input` blocks, one per system.

```
# Create the file logstash/logstash-springboot.conf with the following content
input {
  tcp {
    mode => "server"
    host => "0.0.0.0"
    port => 4560
    codec => json_lines
  }
}
output {
  elasticsearch {
    # must be reachable inside the compose network -- use the container name
    hosts => "elk-es:9200"
    # name of the log index in Elasticsearch
    index => "logstash-%{+YYYY.MM.dd}"
  }
}
```
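To check the TCP input independently of the Spring Boot application, you can hand-craft the newline-delimited JSON that the `json_lines` codec expects. This is a minimal sketch; the field names in the sample event are made up for illustration (the real logback encoder emits fields such as `@timestamp`, `level`, and `logger_name`):

```python
import json
import socket

def encode_event(event: dict) -> bytes:
    """Serialize one log event the way the json_lines codec expects:
    a single JSON object terminated by a newline."""
    return (json.dumps(event) + "\n").encode("utf-8")

def send_event(event: dict, host: str = "127.0.0.1", port: int = 4560) -> None:
    """Open a TCP connection to the Logstash input and push one event."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(encode_event(event))

# Build (but do not send) a sample event; call send_event(...) against a
# running stack to see it land in Elasticsearch.
line = encode_event({"message": "hello from a test client", "level": "INFO"})
```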
Files ready
- Once the files are prepared, the directory structure should look like this (note that the compose file mounts the pipeline config from `conf/logstash/`):

```
- conf
  - logstash
    - logstash-springboot.conf
- docker-compose.yml
```
- Run the startup command:

```shell
docker-compose up -d
```
- After startup, check that the containers are running (image versions and names should match the compose file above):

```
docker ps
CONTAINER ID  IMAGE                COMMAND                 CREATED      STATUS         PORTS                                                           NAMES
6cff523389dc  logstash:7.8.0       "/usr/local/bin/dock…"  6 hours ago  Up 14 minutes  5044/tcp, 0.0.0.0:4560->4560/tcp, :::4560->4560/tcp, 9600/tcp   elk-logstash
eac2af4bfa55  kibana:7.8.0         "/usr/local/bin/dumb…"  6 hours ago  Up 6 hours     0.0.0.0:5601->5601/tcp, :::5601->5601/tcp                       elk-kibana
6fb7fd998ecf  elasticsearch:7.8.0  "/tini -- /usr/local…"  6 hours ago  Up 2 minutes   0.0.0.0:9200->9200/tcp, :::9200->9200/tcp, 9300/tcp             elk-es
```
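Beyond `docker ps`, Elasticsearch itself can be probed at `http://localhost:9200/_cluster/health`. A small sketch of interpreting that response (the sample JSON below is abbreviated; a single-node cluster usually reports `yellow` because replica shards cannot be allocated):

```python
import json

def is_healthy(health_json: str) -> bool:
    """A cluster status of 'green' or 'yellow' means Elasticsearch is serving
    requests; 'red' means some primary shards are unavailable."""
    status = json.loads(health_json).get("status")
    return status in ("green", "yellow")

# Abbreviated sample of what GET /_cluster/health returns
sample = '{"cluster_name": "docker-cluster", "status": "yellow"}'
```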
- Visit port 5601 to see the Kibana UI.
Configuring the application
- Maven dependency:

```xml
<!-- logback encoder that ships log events to Logstash -->
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>6.6</version>
</dependency>
```
- logback-spring.xml

```xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <include resource="org/springframework/boot/logging/logback/base.xml"/>
    <appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <!-- Logstash host and exposed port; logback sends log events to this address -->
        <destination>127.0.0.1:4560</destination>
        <encoder charset="UTF-8" class="net.logstash.logback.encoder.LogstashEncoder"/>
    </appender>
    <root level="INFO">
        <appender-ref ref="LOGSTASH"/>
        <appender-ref ref="CONSOLE"/>
    </root>
</configuration>
```
- Write an endpoint that emits log lines; hit it and the logs will show up in Kibana:

```java
@GetMapping("/logs")
public String printLogs() {
    log.info(this.getClass().getSimpleName() + " info : " + LocalDateTime.now().getSecond());
    log.warn(this.getClass().getSimpleName() + " warn : " + LocalDateTime.now().getSecond());
    log.error(this.getClass().getSimpleName() + " error : " + LocalDateTime.now().getSecond());
    return "logs";
}
```
Then configure an index pattern in Kibana:

- Go to Kibana -> Discover.
- Enter the index pattern (e.g. `logstash-*`) and click Next.
- Create the index pattern and wait for it to finish.
- Go back to Kibana -> Discover to view the logs.
- Search.
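For reference, the `index => "logstash-%{+YYYY.MM.dd}"` setting in the Logstash output creates one index per day, which is why `logstash-*` works as the Kibana index pattern. A sketch of the naming scheme (assuming UTC event timestamps):

```python
import fnmatch
from datetime import datetime, timezone

def daily_index(ts):
    """Mirror Logstash's index => "logstash-%{+YYYY.MM.dd}": one index per day."""
    return ts.strftime("logstash-%Y.%m.%d")

name = daily_index(datetime(2021, 7, 8, tzinfo=timezone.utc))  # "logstash-2021.07.08"
# The Kibana index pattern "logstash-*" matches every daily index.
assert fnmatch.fnmatch(name, "logstash-*")
```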