filebeat + kafka + logstash + ES + Kibana 整合使用
2018-09-28 18:24
Environment
- JDK 1.8
- filebeat 6.4.1
- kafka 0.10.2.0
- logstash 6.4.1
- elasticsearch 6.4.1
- Kibana 6.4.1
kafka
- Create the topic:
kafka-topics --zookeeper 192.168.23.121,192.168.23.122,192.168.23.123 --create --partitions 3 --replication-factor 3 --topic nginx-data001
filebeat
- Edit the configuration file:
cd /etc/filebeat/
vi filebeat.yml
- Add the following to filebeat.yml; the input is the Nginx log file, the output is Kafka:
filebeat.inputs:
# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.
- type: log
  # Change to true to enable this input configuration.
  enabled: true
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /data01/datadir_test/nginx_logs/dataLKOne/access.log
output.kafka:
  enabled: true
  hosts: ["dsgcd4121:9092","dsgcd4122:9092","dsgcd4123:9092"]
  topic: 'nginx-data001'
  version: '0.10.2.0'
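Filebeat's kafka output ships each log line as a JSON document, which is why the logstash pipeline later uses `codec => "json"` and runs grok against the `message` field. A rough sketch of the shape of a filebeat 6.x event on the topic (the sample values below are made up for illustration):

```python
import json

# Approximate envelope of a filebeat 6.x kafka event (sample values are
# hypothetical); the raw nginx log line travels in the "message" field.
event = json.dumps({
    "@timestamp": "2018-09-28T10:24:00.000Z",
    "message": '192.168.23.200 - - [28/Sep/2018:18:00:00 +0800] "GET / HTTP/1.1" ...',
    "source": "/data01/datadir_test/nginx_logs/dataLKOne/access.log",
    "beat": {"hostname": "dsgcd4121", "version": "6.4.1"},
})

# Downstream, logstash's json codec turns this back into event fields:
doc = json.loads(event)
print(doc["source"])
```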
- Start filebeat:
service filebeat start
logstash
- Define custom logstash patterns:
mkdir -p /usr/local/logstash/patterns
vi /usr/local/logstash/patterns/nginx
- The nginx pattern file contains:
QS1 (.*?)
NGINXACCESS %{IPORHOST:clientip} - %{USERNAME:remote_user} \[%{HTTPDATE:timestamp}\] \"(?:%{WORD:http_verb} %{NOTSPACE:http_request}(?: HTTP/%{NUMBER:http_version})?|%{DATA:raw_http_request})\" (%{IPORHOST:http_host}.%{WORD:http_port}) %{NUMBER:response_status} %{NUMBER:response_length} (?:%{NUMBER:bytes_read}|-) %{QS1:referrer} %{QS1:agent} %{NUMBER:request_time:float} %{NUMBER:upstream_response_time:float}
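Before deploying the pattern, it can help to sanity-check the field layout against a sample line. The following is a plain-regex approximation of NGINXACCESS, not the exact grok semantics, and the sample log line is an assumption about the log format described above:

```python
import re

# Simplified stand-in for the NGINXACCESS grok pattern: same field order,
# looser sub-patterns (e.g. \S+ instead of IPORHOST).
NGINXACCESS = re.compile(
    r'(?P<clientip>\S+) - (?P<remote_user>\S+) '
    r'\[(?P<timestamp>[^\]]+)\] '
    r'"(?P<http_verb>\S+) (?P<http_request>\S+)(?: HTTP/(?P<http_version>[\d.]+))?" '
    r'(?P<http_host>\S+)\.(?P<http_port>\w+) '
    r'(?P<response_status>\d+) (?P<response_length>\d+) '
    r'(?P<bytes_read>\d+|-) '
    r'"(?P<referrer>[^"]*)" "(?P<agent>[^"]*)" '
    r'(?P<request_time>[\d.]+) (?P<upstream_response_time>[\d.]+)'
)

# Hypothetical access-log line matching the layout above.
line = ('192.168.23.200 - - [28/Sep/2018:18:00:00 +0800] '
        '"GET /index.html HTTP/1.1" 192.168.23.121.80 200 612 - '
        '"-" "curl/7.29.0" 0.003 0.002')

m = NGINXACCESS.match(line)
print(m.group('clientip'), m.group('response_status'))
# → 192.168.23.200 200
```

If a line does not match, logstash tags the event with `_grokparsefailure` instead of producing fields, so checking a sample like this first saves a round trip through the pipeline.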
- Create the pipeline configuration file:
cd /etc/logstash/conf.d/
vi nginx_datalkone.conf
- nginx_datalkone.conf contains:
input {
  kafka {
    enable_auto_commit => true
    auto_commit_interval_ms => "1000"
    codec => "json"
    bootstrap_servers => "192.168.23.121:9092,192.168.23.122:9092,192.168.23.123:9092"
    topics => ["nginx-data001"]
  }
}
filter {
  grok {
    patterns_dir => "/usr/local/logstash/patterns"
    match => { "message" => "%{NGINXACCESS}" }
    remove_field => ["message"]
  }
  urldecode {
    all_fields => true
  }
  geoip {
    source => "clientip"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "nginx-data-%{+YYYY.MM.dd}"
  }
}
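One detail worth noting in the output block: logstash expands `%{+YYYY.MM.dd}` from each event's `@timestamp` (evaluated in UTC, Joda-time format), so a new index is created per day. The equivalent in Python:

```python
from datetime import datetime, timezone

# Logstash's %{+YYYY.MM.dd} (Joda format) corresponds to strftime's
# %Y.%m.%d; the date comes from the event's @timestamp, in UTC.
def daily_index(ts):
    return 'nginx-data-' + ts.strftime('%Y.%m.%d')

print(daily_index(datetime(2018, 9, 28, tzinfo=timezone.utc)))
# → nginx-data-2018.09.28
```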
- Start logstash:
initctl start logstash
ElasticSearch
- Check whether data has been written with:
curl 'localhost:9200/_cat/indices?v'
# curl 'localhost:9200/_cat/indices?v'
health status index                 uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   .kibana               iLXOPq3wTc2PECVeXtXzOw   1   1          2            0      7.7kb          7.7kb
yellow open   nginx-data-2018.09.28 ORbbgojRTEGJVSl_jN2MXg   5   1          0            0       401b           401b
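The `_cat/indices` output is whitespace-delimited, so the doc counts are easy to pull out programmatically; note that `docs.count` for the new index can still read 0 right after startup, before logstash has flushed its first batch. A quick sketch over the sample output above:

```python
# Parse the whitespace-delimited _cat/indices sample shown above into a
# dict of index name -> docs.count.
cat = """health status index                 uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   .kibana               iLXOPq3wTc2PECVeXtXzOw   1   1          2            0      7.7kb          7.7kb
yellow open   nginx-data-2018.09.28 ORbbgojRTEGJVSl_jN2MXg   5   1          0            0       401b           401b"""

header, *rows = [line.split() for line in cat.splitlines()]
idx, cnt = header.index('index'), header.index('docs.count')
counts = {row[idx]: int(row[cnt]) for row in rows}
print(counts)
# → {'.kibana': 2, 'nginx-data-2018.09.28': 0}
```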
Configure the index pattern in Kibana.