
Enterprise Log Collection System: ELK Stack

2016-07-01 15:31
A note up front: the original write-up, "企业级日志收集系统——ELKstack", comes from the "把酒问苍天" blog; please keep this notice and the source link: http://79076431.blog.51cto.com/8977042/1793682. This article follows that post to build my own ELK log collection system.

ELK Stack Overview:

ELK Stack combines three open-source tools, Elasticsearch, Logstash, and Kibana, into a powerful real-time log collection and display system. The components play the following roles:
Logstash: the log collection tool. It can gather all kinds of logs from local disk, from network services (listening on its own ports to accept logs), or from a message queue, then filter and parse them and write them to Elasticsearch.
Elasticsearch: the distributed log storage/search engine. It supports clustering natively and can roll logs into one index per time period, which speeds up log queries and access.
Kibana: the web UI for log visualization. It displays the logs stored in Elasticsearch and can also build attractive dashboards.
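Logstash's role in this chain maps directly onto its configuration model: every pipeline is input, then filter, then output. A minimal skeleton looks like the following (a sketch only; the path and host values are illustrative, and the filter block may be left empty):

input {
  file { path => "/var/log/messages" }
}
filter {
}
output {
  elasticsearch { hosts => ["127.0.0.1:9200"] }
}

Every concrete configuration later in this article follows this same three-block shape.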

How ELK Stack helps operations work:

1. Most application logs are written to log files on the servers, and it is mostly developers who need to read them, yet developers usually have no login access to those servers. Whenever a developer needs a log, an operator has to fetch it from the server and hand it over. Imagine a company with 10 developers, each asking ops for a log once a day: that is a real, recurring workload that drags down operations efficiency. With ELK Stack deployed, developers can log in to Kibana and view logs themselves, with no operator in the loop, which takes that work off the operations team.
2. Logs come in many kinds and are scattered across hosts, which makes them hard to find. If a LAMP/LNMP site has an access failure, you may need to dig through logs to diagnose it: to read the Apache error log you log in to the Apache server, to read the database error log you log in to the database server, and so on. Now imagine a cluster of dozens of hosts. With ELK Stack deployed you open the Kibana page instead, and switching between log types is just a mouse click to change the index.

ELK Stack lab architecture:

This lab uses only two Linux virtual machines:
192.168.3.24: es1 + kibana, logstash (pulls logs from redis)
192.168.3.26: es2 + kibana, logstash + filebeat (ships logs to redis), nginx
Why a redis message queue sits in the middle:
1. It prevents log loss when Logstash and ES cannot communicate normally.
2. It prevents log loss when the log volume is too large for ES to absorb the write load.
3. Applications (php, java) can write their logs straight into the queue, completing log collection without an extra agent; see the sketch below.
Note: if redis becomes a scaling bottleneck as a queue, the more capable kafka or flume can take its place.
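Point 3 can be made concrete. Logstash's redis output pushes each event onto a redis list as a JSON string, so an application (or a quick shell test) can enqueue a log line the same way. A sketch, where the key name app-log and the JSON fields are purely illustrative:

redis-cli -h 192.168.3.26 RPUSH app-log '{"@timestamp":"2016-07-01T12:00:00+08:00","type":"app-log","message":"order created"}'

A Logstash redis input reading key app-log with data_type => "list" and codec => "json" would then pick the event up, exactly as configured for the nginx logs later in this article.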
Lab environment:
[root@localhost ~]# cat /etc/redhat-release
CentOS Linux release 7.2.1511 (Core)
[root@localhost ~]# uname -r
3.10.0-327.el7.x86_64
Software used:
1. jdk-8u92, official rpm
2. Elasticsearch 2.3.3, official rpm
3. Logstash 2.3.2, official rpm
4. Kibana 4.5.1, official rpm
5. Redis 3.2.1, remi rpm
6. nginx 1.6.3, official yum package
The official rpm packages can all be downloaded directly from the vendors' sites.
Deployment order:
1. Configure the Elasticsearch cluster
2. Configure the Logstash client (write data directly to the ES cluster; ship the system messages log)
3. Configure the redis message queue (Logstash writes data into the queue)
4. Deploy Kibana
5. Load-balance Kibana requests with nginx
6. Collect nginx logs
7. Walk through Kibana's reporting features
Configuration notes:
1. Clocks must be synchronized.
2. Turn off the firewall and SELinux.
3. When something breaks, check the logs.

Elasticsearch Cluster Installation and Configuration

1. Configure the Java environment
[root@localhost ~]# yum install -y jdk1.8.0_92
[root@localhost ~]# java -version
java version "1.8.0_92"
Java(TM) SE Runtime Environment (build 1.8.0_92-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.92-b14, mixed mode)
2. Install Elasticsearch
[root@localhost ~]# yum install -y elasticsearch
[root@localhost ~]# rpm -ql elasticsearch
/etc/elasticsearch
/etc/elasticsearch/elasticsearch.yml
/etc/elasticsearch/logging.yml
/etc/elasticsearch/scripts
/etc/init.d/elasticsearch
/etc/sysconfig/elasticsearch
/usr/lib/sysctl.d
/usr/lib/sysctl.d/elasticsearch.conf
/usr/lib/systemd/system/elasticsearch.service
3. Edit the configuration file
[root@localhost ~]# vim /etc/elasticsearch/elasticsearch.yml
17 cluster.name: "linux-ES"
23 node.name: es1
33 path.data: /elk/data
37 path.logs: /elk/logs
43 bootstrap.mlockall: true
54 network.host: 0.0.0.0
58 http.port: 9200
68 discovery.zen.ping.unicast.hosts: ["192.168.3.24", "192.168.3.26"]
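Because bootstrap.mlockall is enabled above, the elasticsearch process also needs permission to lock its heap in memory, or it will log mlockall warnings at startup. With the rpm install this is usually granted in /etc/sysconfig/elasticsearch (a sketch, assuming the rpm layout listed earlier):

# /etc/sysconfig/elasticsearch
MAX_LOCKED_MEMORY=unlimited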
4. Create the data directories and give elasticsearch ownership
[root@localhost ~]# mkdir -pv /elk/{data,logs}
[root@localhost ~]# chown -R elasticsearch.elasticsearch /elk
[root@localhost ~]# ll /elk
total 4
drwxr-xr-x. 3 elasticsearch elasticsearch 6 Jun 16 03:56 data
drwxr-xr-x. 2 elasticsearch elasticsearch 6 Jul  1 01:04 logs
5. Start ES and check that it is listening on ports 9200 and 9300
[root@localhost ~]# systemctl start elasticsearch.service
[root@localhost ~]# ss -tnl | grep "9200\|9300"
LISTEN     0      50          :::9200                    :::*
LISTEN     0      50          :::9300                    :::*
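Once the service is up, the same check can be done from the shell (a sketch):

curl -s 'http://192.168.3.24:9200/_cluster/health?pretty'

After the second node joins in the next step, "number_of_nodes" should read 2, with status green (or yellow while replicas allocate).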
6. Install ES2; the steps are the same as for ES1.
7. Check the status of both nodes.

Install cluster management plugins (head, kopf, etc.)

The head and kopf plugins give a very intuitive view of the ES cluster state and the data held in each index:
[root@localhost ~]# /usr/share/elasticsearch/bin/plugin install mobz/elasticsearch-head
[root@localhost ~]# /usr/share/elasticsearch/bin/plugin install lmenezes/elasticsearch-kopf
Access the plugins at:
http://192.168.3.24:9200/_plugin/head/
http://192.168.3.24:9200/_plugin/kopf/

Logstash Deployment

1. Configure the Java environment and install Logstash
[root@localhost ~]# yum -y install jdk1.8.0_92
[root@localhost ~]# yum -y install logstash
2. Verify Logstash input and output with a config file
[root@localhost ~]# vim /etc/logstash/conf.d/stdout.conf
input {
  stdin {}
}

output {
  stdout {
    codec => "rubydebug"
  }
}
[root@localhost ~]# /opt/logstash/bin/logstash -f /etc/logstash/conf.d/stdout.conf
Settings: Default pipeline workers: 2
Pipeline main started
hello
{
       "message" => "hello",
      "@version" => "1",
    "@timestamp" => "2016-07-01T06:47:06.502Z",
          "host" => "localhost.localdomain"
}
你好
{
       "message" => "你好",
      "@version" => "1",
    "@timestamp" => "2016-07-01T06:47:11.542Z",
          "host" => "localhost.localdomain"
}
3. Define output to Elasticsearch
[root@localhost ~]# vim /etc/logstash/conf.d/logstash.conf
input {
  stdin {}
}

output {
  elasticsearch {
    hosts => ["192.168.3.24:9200","192.168.3.26:9200"]
    index => "test"
  }
}
[root@localhost ~]# /opt/logstash/bin/logstash -f /etc/logstash/conf.d/logstash.conf
Settings: Default pipeline workers: 2
Pipeline main started
hello!
你好
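To confirm that those two lines actually reached the cluster, the test index can be queried directly (a sketch):

curl -s 'http://192.168.3.24:9200/test/_search?pretty&size=2'

Each typed line should come back as a document whose message field holds the input text.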
At this point we know Logstash can talk to Elasticsearch correctly. Next, collecting system logs.

4. Collect the system log with Logstash

Change the Logstash config file to the contents below and start the Logstash service; in head you should then see the messages log written into ES, with an index created for it.
[root@localhost ~]# vim /etc/logstash/conf.d/logstash.conf
input {
  file {
    type => "messagelog"
    path => "/var/log/messages"
    start_position => "beginning"
  }
}

output {
  file {
    path => "/tmp/123.txt"
  }
  elasticsearch {
    hosts => ["192.168.3.24:9200","192.168.3.26:9200"]
    index => "system-messages-%{+yyyy.MM.dd}"
  }
}

# Check the configuration file syntax:
/etc/init.d/logstash configtest
/opt/logstash/bin/logstash -f /etc/logstash/conf.d/logstash.conf --configtest
# Change the user Logstash runs as (so it can read /var/log/messages):
# vim /etc/init.d/logstash
LS_USER=root
LS_GROUP=root
# Start with the configuration file
[root@localhost ~]# /opt/logstash/bin/logstash -f /etc/logstash/conf.d/logstash.conf &
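Besides looking in head, the new index can be confirmed from the shell (a sketch):

curl -s 'http://192.168.3.24:9200/_cat/indices?v' | grep system-messages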
Collection succeeded: as the screenshot showed, a system-messages index was generated automatically.

Kibana Deployment

Note: I deploy Kibana on both ES nodes here and use nginx to load-balance between them; without a particular need for that, a single node is enough.

1. Install Kibana, one per ES node
[root@localhost ~]# yum -y install kibana
2. Configure Kibana; only the ES address needs to be set, everything else can stay at its defaults
[root@localhost ~]# vim /opt/kibana/config/kibana.yml
15 elasticsearch.url: "http://192.168.3.24:9200"
[root@localhost ~]# systemctl start kibana.service
[root@localhost ~]# ss -tnl | grep 5601
LISTEN     0      128          *:5601                     *:*
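A quick reachability check from the shell (a sketch):

curl -I http://192.168.3.24:5601/

Kibana 4 answers on port 5601; any HTTP response here means the UI is up.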
As shown in the screenshot, Kibana is up.

Filebeat Deployment and Log Collection
1. Install nginx and convert its access log to JSON
[root@localhost ~]# yum -y install nginx
[root@localhost ~]# vim /etc/nginx/nginx.conf
log_format access1 '{"@timestamp":"$time_iso8601",'
                   '"host":"$server_addr",'
                   '"clientip":"$remote_addr",'
                   '"size":$body_bytes_sent,'
                   '"responsetime":$request_time,'
                   '"upstreamtime":"$upstream_response_time",'
                   '"upstreamhost":"$upstream_addr",'
                   '"http_host":"$host",'
                   '"url":"$uri",'
                   '"domain":"$host",'
                   '"xff":"$http_x_forwarded_for",'
                   '"referer":"$http_referer",'
                   '"status":"$status"}';
access_log  /var/log/nginx/access.log  access1;
# Save the configuration file and start the service
[root@localhost ~]# systemctl start nginx
# Verify the nginx log is now JSON
[root@localhost ~]# tail /var/log/nginx/access.log
{"@timestamp":"2016-07-01T14:15:12+08:00","host":"192.168.3.26","clientip":"192.168.3.254","size":0,"responsetime":0.003,"upstreamtime":"0.003","upstreamhost":"192.168.3.24:5601","http_host":"192.168.3.26","url":"/bundles/src/ui/public/images/elk.ico","domain":"192.168.3.26","xff":"-","referer":"http://192.168.3.26/app/kibana","status":"304"}
{"@timestamp":"2016-07-01T14:15:12+08:00","host":"192.168.3.26","clientip":"192.168.3.254","size":172,"responsetime":0.030,"upstreamtime":"0.030","upstreamhost":"192.168.3.26:5601","http_host":"192.168.3.26","url":"/elasticsearch/_mget","domain":"192.168.3.26","xff":"-","referer":"http://192.168.3.26/app/kibana","status":"200"}
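Each line can also be validated as JSON mechanically (a sketch, assuming python is available on the host):

tail -1 /var/log/nginx/access.log | python -m json.tool

If the line parses, the pretty-printed object is echoed back; a parse error means the log_format above has a quoting mistake.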
2. Install tomcat and convert its access log to JSON
[root@localhost ~]# tar xf apache-tomcat-8.0.36.tar.gz -C /usr/local
[root@localhost ~]# cd /usr/local
[root@logstash1 local]# ln -sv apache-tomcat-8.0.36/ tomcat
[root@localhost ~]# vim /usr/local/tomcat/conf/server.xml
<Context path="" docBase="/web"/>
<Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
       prefix="localhost_access_log" suffix=".txt"
       pattern="{&quot;clientip&quot;:&quot;%h&quot;,&quot;ClientUser&quot;:&quot;%l&quot;,&quot;authenticated&quot;:&quot;%u&quot;,&quot;AccessTime&quot;:&quot;%t&quot;,&quot;method&quot;:&quot;%r&quot;,&quot;status&quot;:&quot;%s&quot;,&quot;SendBytes&quot;:&quot;%b&quot;,&quot;Query?string&quot;:&quot;%q&quot;,&quot;partner&quot;:&quot;%{Referer}i&quot;,&quot;AgentVersion&quot;:&quot;%{User-Agent}i&quot;}"/>
3. Install Filebeat on the web node and configure it to collect nginx logs and send them to Logstash
# Purpose: collect logs on the web node in real time and hand them to Logstash.
# Why not collect with Logstash on the web node itself? Logstash runs on the JVM and is comparatively heavy, while Filebeat is a lightweight shipper better suited to running on every web server.
4. Configure Filebeat to collect logs from two files and send them to Logstash
[root@localhost ~]# vim /etc/filebeat/filebeat.yml
filebeat:
  prospectors:
    -
      paths:
        - "/var/log/messages"          # collect the system log
      input_type: log
      document_type: nginx2-system-message
    -
      paths:
        - "/var/log/nginx/access.log"  # nginx access log
      input_type: log
      document_type: nginx2-nginx-log
  # registry_file: /var/lib/filebeat/registry
output:
  logstash:                            # send the collected lines to Logstash
    hosts: ["192.168.3.26:5044"]
  file:                                # local copy for debugging
    path: "/tmp"
    filename: filebeat.txt
shipper:
logging:
  to_files: true
  files:
    path: /tmp/mybeat
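Before starting the service, the YAML can be sanity-checked (a sketch; the -configtest flag is from the Filebeat 1.x line used here):

filebeat -configtest -c /etc/filebeat/filebeat.yml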
5. Configure Logstash to receive the nginx logs from Filebeat
[root@localhost ~]# vim /etc/logstash/conf.d/nginx-to-redis.conf
input {
  beats {
    port => 5044
    codec => "json"                    # events are JSON-encoded
  }
}

output {
  if [type] == "nginx2-system-message" {
    redis {
      data_type => "list"
      key => "nginx2-system-message"   # redis key to write to
      host => "192.168.3.26"           # redis server address
      port => "6379"
      db => "0"
    }
  }
  if [type] == "nginx2-nginx-log" {
    redis {
      data_type => "list"
      key => "nginx2-nginx-log"
      host => "192.168.3.26"
      port => "6379"
      db => "0"
    }
  }
  file {
    path => "/tmp/nginx2-%{+yyyy-MM-dd}messages.gz"   # test copy of the output
  }
}
6. Start Logstash and Filebeat
[root@localhost ~]# /etc/init.d/logstash start
[root@localhost ~]# /etc/init.d/filebeat start
7. Check the local output log
[root@localhost ~]# tail /tmp/nginx-2016-06-30messages.gz
{"message":"Jul  1 04:20:01 localhost systemd: Starting Session 30 of user root.","tags":["_jsonparsefailure","beats_input_codec_json_applied"],"@version":"1","@timestamp":"2016-06-30T20:20:10.409Z","type":"nginx1-system-message","input_type":"log","beat":{"hostname":"localhost.localdomain","name":"localhost.localdomain"},"offset":6841,"count":1,"fields":null,"source":"/var/log/messages","host":"localhost.localdomain"}
8. Install and configure redis
[root@localhost ~]# yum -y install redis
[root@localhost ~]# vim /etc/redis.conf
bind 0.0.0.0     # listen on all local addresses
daemonize yes    # run in the background
appendonly yes   # enable AOF persistence
[root@localhost ~]# systemctl start redis.service
Hit nginx to generate some log lines, then connect to redis and check whether they arrive:
[root@localhost ~]# redis-cli -h 192.168.3.26
192.168.3.26:6379> keys *
1) "nginx1-nginx-log"
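The depth of each queue can be watched as well, which is handy for spotting a consumer that has stopped draining it (a sketch, at the same redis prompt):

192.168.3.26:6379> llen nginx1-nginx-log

A length that only ever grows means the Logstash reader on the other side is not keeping up or not running.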
9. Collect the nginx logs on the other Logstash node
[root@localhost ~]# yum -y install logstash
[root@localhost ~]# vim /etc/logstash/conf.d/redis-to-elastic.conf
input {
  redis {
    host => "192.168.3.26"
    port => "6379"
    db => "0"
    key => "nginx2-system-message"
    data_type => "list"
    codec => "json"
  }
  redis {
    host => "192.168.3.26"
    port => "6379"
    db => "0"
    key => "nginx2-nginx-log"
    data_type => "list"
    codec => "json"
  }
}

filter {
  if [type] == "nginx2-nginx-log" {
    geoip {
      source => "clientip"
      target => "geoip"
#     database => "/etc/logstash/GeoLiteCity.dat"
      add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
      add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
    }
    mutate {
      convert => [ "[geoip][coordinates]", "float" ]
    }
  }
}

output {
  if [type] == "nginx2-system-message" {
    elasticsearch {
      hosts => ["192.168.3.24:9200","192.168.3.26:9200"]
      index => "nginx2-system-message-%{+yyyy.MM.dd}"
      manage_template => true
      flush_size => 2000
      idle_flush_time => 10
    }
  }
  if [type] == "nginx2-nginx-log" {
    elasticsearch {
      hosts => ["192.168.3.24:9200","192.168.3.26:9200"]
      index => "logstash1-nginx2-nginx-log-%{+yyyy.MM.dd}"
      manage_template => true
      flush_size => 2000
      idle_flush_time => 10
    }
  }
  file {
    path => "/tmp/log2-%{+yyyy-MM-dd}messages.gz"
    gzip => "true"
  }
}
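As with the earlier configs, this one can be syntax-checked before use (a sketch):

/opt/logstash/bin/logstash -f /etc/logstash/conf.d/redis-to-elastic.conf --configtest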
10. Start Logstash to collect the logs
[root@localhost ~]# /etc/init.d/logstash start
Verify the data is being written:
[root@localhost ~]# ll /elk/data/linux-ES/nodes/0/indices/
total 0
drwxr-xr-x 8 elasticsearch elasticsearch 59 Jul  1 11:26 logstash1-nginx1-nginx-log-2016.06.30
drwxr-xr-x 8 elasticsearch elasticsearch 59 Jul  1 12:36 logstash1-nginx2-nginx-log-2016.07.01
drwxr-xr-x 8 elasticsearch elasticsearch 59 Jul  1 04:01 nginx1-system-message-2016.06.30
drwxr-xr-x 8 elasticsearch elasticsearch 59 Jul  1 04:30 nginx1-system-message-2016.07.01
drwxr-xr-x 8 elasticsearch elasticsearch 59 Jul  1 12:36 nginx2-system-message-2016.07.01
drwxr-xr-x 8 elasticsearch elasticsearch 59 Jul  1 01:54 system-messages-2016.06.30
drwxr-xr-x 8 elasticsearch elasticsearch 59 Jul  1 10:16 system-messages-2016.07.01
drwxr-xr-x 8 elasticsearch elasticsearch 59 Jul  1 01:45 test
Configure nginx as a reverse proxy
upstream kibana {      # define the backend group
    server 192.168.3.24:5601 weight=1 max_fails=2 fail_timeout=2;
    server 192.168.3.26:5601 weight=1 max_fails=2 fail_timeout=2;
}
server {
    listen       80;
    server_name  192.168.3.26;
    location / {       # reverse-proxy every request to this host on to the kibana backends
        proxy_pass http://kibana/;
        index  index.html index.htm;
    }
}
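After editing, the config can be checked and loaded, then the proxy exercised (a sketch):

nginx -t && systemctl reload nginx
curl -I http://192.168.3.26/

The response headers should come back from Kibana via the proxy, with successive requests spread across the two upstream servers.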
Check the results in ES and Kibana.
Tags: log collection, ELK