Building an ELK + Redis Environment for Parsing NGINX Logs
2016-02-25 18:53
Purpose:
Previously, NGINX logs were analyzed by exporting them to a directory and loading them into an Oracle database with sqlldr plus shell scripts. That was acceptable at small volumes, but once volume grew both the loading and the querying became problems, so ELK was introduced.
How it works:
1. logstash acts as the log collector, periodically pushing NGINX log entries into Redis; a second logstash instance (the indexer) reads entries back out of Redis, parses them, and passes them on to Elasticsearch.
2. Elasticsearch receives the parsed log events from logstash and indexes and analyzes them.
3. Kibana queries Elasticsearch to present the logs as reports.
Installation Preparation
1. Create a user:
useradd log_user
passwd log_user
2. Server layout:
192.168.1.254: NGINX + logstash, acting as the log collection agent
192.168.1.253: Redis + logstash indexer + Elasticsearch + Kibana
Redis Installation
1. Download Redis: wget http://download.redis.io/releases/redis-3.0.7.tar.gz
2. Extract: tar xzf redis-3.0.7.tar.gz
3. cd redis-3.0.7
4. Build with make
5. Apply the relevant configuration
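Step 5 is left open above; as a minimal sketch, the only settings this setup strictly needs in redis.conf are the non-default port that the logstash configurations below point at (21031) and a bind address the agent host can reach (both values are taken from the configs later in this post, not from Redis defaults):

```conf
# redis.conf (minimal sketch for this setup)
port 21031            # port referenced by both logstash configs below
bind 192.168.1.253    # interface reachable from the agent at 192.168.1.254
daemonize yes         # run in the background
```

Redis can then be started from the build directory with src/redis-server redis.conf.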
Elasticsearch Installation
1. Download the package from https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-1.7.1.tar.gz
2. tar -xzvf elasticsearch-1.7.1.tar.gz
3. Enter the bin directory and run ./elasticsearch -d to start it as a daemon; the default port is 9200
4. Verify the installation: curl -X GET http://localhost:9200
Problem encountered:
The operating system ships with JDK 1.6, but Elasticsearch 1.2 and later require JDK 1.7 or above, so JDK 1.7 has to be installed:
(1) Download jdk-7u79-linux-x64.gz, extract it, and rename the directory to java7
(2) Add the environment variables to .bashrc in the current user's home directory:
#JAVA_HOME
JAVA_HOME=/home/log_user/java7
export JAVA_HOME
#CLASS_PATH
CLASS_PATH=$JAVA_HOME/lib/*.jar:$JAVA_HOME/jre/lib/*.jar
export CLASS_PATH
export LANG=zh_CN.GBK
export LANGUAGE=zh_CN.GB18030:zh_CN.GB2312:zh_CN
Logstash Installation
1. Download the package from https://download.elastic.co/logstash/logstash/logstash-1.5.4.tar.gz
2. Extract it
3. Verify it works with:
bin/logstash -e 'input{stdin{}}output{stdout{codec=>rubydebug}}'
Logstash Configuration
On 192.168.1.254 (the agent):
1. Create an etc directory for configuration files
2. Create the configuration file logstash-agent.conf with the following content:
input {
  file {
    type => "nginx access log"
    path => ["/home/info/nginx/logs/shq_server.log"]
  }
}
output {
  redis {
    host => "192.168.1.253"  # redis server
    port => "21031"
    data_type => "list"
    key => "logstash:redis"
  }
}
3. Start it: ./logstash -f ../etc/logstash-agent.conf
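With data_type => "list", the redis output pushes each event onto the tail of the logstash:redis key, and the indexer later pops from the head, giving FIFO delivery. A small Python sketch of that handoff (purely illustrative; a dict stands in for the Redis server):

```python
# Illustration of the agent -> Redis -> indexer handoff. The real pipeline
# performs these list operations on the Redis server at 192.168.1.253:21031.
fake_redis = {}  # stands in for the Redis server

def rpush(key, value):
    """Agent side: append an event to the tail of the list."""
    fake_redis.setdefault(key, []).append(value)

def lpop(key):
    """Indexer side: pop the oldest event from the head of the list."""
    items = fake_redis.get(key, [])
    return items.pop(0) if items else None

# The agent tails the NGINX log and pushes each new line as an event:
rpush("logstash:redis", "first log line")
rpush("logstash:redis", "second log line")

# The indexer consumes events in the order they were produced:
oldest = lpop("logstash:redis")  # "first log line"
```

Because the queue lives in Redis, the agent and indexer can run on different hosts and at different speeds without losing events.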
On 192.168.1.253 (the indexer):
1. Create an etc directory for configuration files
2. Create the configuration file logstash_indexer.conf with the following content:
input {
  redis {
    host => "192.168.1.253"
    port => "21031"
    data_type => "list"
    key => "logstash:redis"
    type => "redis-input"
  }
}
filter {
  ruby {
    init => "@kname = ['http_x_forwarded_for','time_local','request','status','body_bytes_sent','request_body','content_length','http_referer','http_user_agent','http_cookie','remote_addr','hostname','upstream_addr','upstream_response_time','request_time']"
    code => "event.append(Hash[@kname.zip(event['message'].split(' | '))])"
  }
  if [request] {
    ruby {
      init => "@kname = ['method','uri','verb']"
      code => "event.append(Hash[@kname.zip(event['request'].split(' '))])"
    }
    if [uri] {
      ruby {
        init => "@kname = ['url_path','url_args']"
        # split the uri (not the whole request) into path and query string
        code => "event.append(Hash[@kname.zip(event['uri'].split('?'))])"
      }
      kv {
        prefix => "url_"
        source => "url_args"
        field_split => "& "
        remove_field => [ "url_args","uri","request" ]
      }
    }
  }
  mutate {
    convert => [
      "body_bytes_sent", "integer",
      "content_length", "integer",
      "upstream_response_time", "float",
      "request_time", "float"
    ]
  }
}
output {
  elasticsearch {
    embedded => false
    protocol => "http"
    host => "localhost"
    port => "9200"
  }
}
3. Start it: ./logstash -f ../etc/logstash_indexer.conf
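The filter chain above can be traced in plain Python. The field names come from the config; the sample log line is made up for illustration and follows the log_format shown at the end of this post:

```python
# Python mirror of the indexer's filter chain: zip the field names against
# the ' | '-separated line, split the request, then kv-parse the query args.
KNAME = ['http_x_forwarded_for', 'time_local', 'request', 'status',
         'body_bytes_sent', 'request_body', 'content_length', 'http_referer',
         'http_user_agent', 'http_cookie', 'remote_addr', 'hostname',
         'upstream_addr', 'upstream_response_time', 'request_time']

def parse(message):
    # ruby filter 1: Hash[@kname.zip(message.split(' | '))]
    event = dict(zip(KNAME, message.split(' | ')))
    # ruby filter 2: "GET /shop?id=7 HTTP/1.1" -> method / uri / verb
    if event.get('request'):
        event.update(zip(['method', 'uri', 'verb'], event['request'].split(' ')))
    # ruby filter 3 + kv: uri -> url_path plus one url_* field per query arg
    if event.get('uri'):
        path, _, args = event['uri'].partition('?')
        event['url_path'] = path
        for pair in args.split('&'):
            if '=' in pair:
                k, v = pair.split('=', 1)
                event['url_' + k] = v
    # mutate/convert: coerce the numeric fields
    event['body_bytes_sent'] = int(event['body_bytes_sent'])
    event['content_length'] = int(event['content_length'])
    event['upstream_response_time'] = float(event['upstream_response_time'])
    event['request_time'] = float(event['request_time'])
    return event

# A made-up sample line in this post's log_format:
sample = ('1.2.3.4 | 25/Feb/2016:18:53:00 +0800 | GET /shop?id=7&page=2 HTTP/1.1'
          ' | 200 | 512 | - | 128 | - | Mozilla/5.0 | - | 10.0.0.1 | web01'
          ' | 10.0.0.2:8080 | 0.003 | 0.005')
evt = parse(sample)
```

The resulting event has typed numeric fields (so Kibana can aggregate on them) and one url_* field per query parameter, e.g. url_id and url_page for the sample above.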
Kibana Installation
1. Download the package from https://download.elastic.co/kibana/kibana/kibana-4.1.2-linux-x64.tar.gz
2. Extract it; the default port is 5601
3. Open http://192.168.1.253:5601/ in a browser and complete the configuration (use a recent browser, otherwise the page gets stuck on the loading screen indefinitely)
Installation Complete
The basic environment is now in place; what remains is processing the data as needed and thinking through multi-server deployment scenarios.
Other
NGINX log format:
log_format main "$http_x_forwarded_for | $time_local | $request | $status | $body_bytes_sent | "
                "$request_body | $content_length | $http_referer | $http_user_agent | "
                "$http_cookie | $remote_addr | $hostname | $upstream_addr | $upstream_response_time | $request_time";
References:
1. http://yanliu.org/2015/08/19/ELK-redis%E6%90%AD%E5%BB%BAnginx%E6%97%A5%E5%BF%97%E5%88%86%E6%9E%90%E5%B9%B3%E5%8F%B0/
2. http://kibana.logstash.es/content/logstash/examples/nginx-access.html (official)