ELK in Practice - Logstash multiline: Identifying Error Stacks
2016-04-15 00:14
Overview
When collecting logs with ELK, you usually also need to analyze them, for example to monitor error stacks in real time and trigger alerts. Error stacks typically span multiple lines, yet by default ELK treats each line as a separate event. So how do you handle multiple lines as one?
Logstash provides a multiline plugin in both its codecs and its filters. It matches individual lines and merges them with adjacent lines into a single input event.
This article shows how to use Logstash's multiline plugin to identify error stacks.
Test Environment
- One CentOS 7 host, acting as the ELK server
Test Approach
- Logstash monitors a log file
- The Logstash configuration enables multiline recognition
- A Python error stack is written to the log file by hand
In Practice
Logstash configuration file
The Logstash configuration file (logstash.conf.stack) is as follows:

```
input {
  file {
    path => "[your_path]/python_stack.log"
    start_position => "beginning"
  }
  file {
    path => "/var/log/messages"
  }
}
filter {
  multiline {
    pattern => ".*TRACE.*"
    what => "previous"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
  stdout {}
}
```
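Since multiline is available as both a codec and a filter (as noted in the overview), the same grouping could alternatively be attached directly to the file input as a codec. The following is only a sketch, not tested in this article, reusing the same pattern and path placeholder:

```
input {
  file {
    path => "[your_path]/python_stack.log"
    start_position => "beginning"
    codec => multiline {
      pattern => ".*TRACE.*"
      what => "previous"
    }
  }
}
```

The codec form groups lines per input before filtering, which avoids interleaving problems when several inputs feed one filter.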
Notes on the configuration:
Input
- Monitor the [your_path]/python_stack.log log file
- Monitor the /var/log/messages log file
Filter
- Match each line against .*TRACE.*. If the regex matches, the line belongs to the same log entry as the previous line; their contents are merged and processed as a single event.
Output
- Send events to Elasticsearch
- Print events to Logstash's standard output
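The filter's merge behavior can be illustrated with a small sketch (plain Python standing in for Logstash, not Logstash's actual implementation): a line matching the pattern is appended to the event started by the previous line, and anything else starts a new event. The sample lines are hypothetical, shortened versions of the stack used later in this article.

```python
import re

# Same pattern as in the Logstash config; "what => previous" means a
# matching line is merged into the event begun by the previous line.
PATTERN = re.compile(r".*TRACE.*")

def merge_multiline(lines):
    """Group raw log lines into events, mimicking the multiline filter."""
    events = []
    for line in lines:
        if PATTERN.match(line) and events:
            events[-1] += "\n" + line  # continuation of the previous event
        else:
            events.append(line)        # a new event begins
    return events

# Hypothetical sample: one ERROR line followed by TRACE continuation lines.
sample = [
    "2016-04-09 00:00:05.720 40113 ERROR [pkg] Exception during message handling",
    "2016-04-09 00:00:05.720 40113 TRACE [pkg] Traceback (most recent call last):",
    "2016-04-09 00:00:05.720 40113 TRACE [pkg] NoResultFound: No row was found for one()",
    "2016-04-09 00:00:05.720 40113 INFO [pkg] next event",
]
print(len(merge_multiline(sample)))  # → 2: the stack collapses into one event
```

The ERROR line and its two TRACE lines become one event; the trailing INFO line starts a second one, which is exactly what the filter above does to the real stack.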
Starting ELK
- Start Elasticsearch (see the earlier post on starting Elasticsearch)
- Start Logstash: bin/logstash -f [your_path]/logstash.conf.stack (see the earlier post on starting Logstash)
- Start Kibana (see the earlier post on starting Kibana)
Writing a Stack to the Log File
Write a Python stack to the log file (the same path configured in logstash.conf.stack), for example:

```
2016-04-09 00:00:05.712 40113 DEBUG [your_model]._drivers.amqp [req-958e5439-7657-42ac-973e-616ab154f471 ] UNIQUE_ID is d8fa7221771f439bb7975dc7740bab29. _add_unique_id /usr/lib/py
2016-04-09 00:00:05.713 40113 DEBUG [your_model]._drivers.amqp [req-958e5439-7657-42ac-973e-616ab154f471 ] UNIQUE_ID is 63223f5d931b499b999fd5114495e6f4. _add_unique_id /usr/lib/py
2016-04-09 00:00:05.717 40113 DEBUG [your_model]._drivers.amqp [-] unpacked context: {u'read_deleted': u'no', u'project_name': None, u'user_id': None, u'roles': [u'admin'], u'tenan
2016-04-09 00:00:05.720 40113 ERROR [your.package.path] [req-958e5439-7657-42ac-973e-616ab154f471 ] Exception during message handling: No row was found for one()
2016-04-09 00:00:05.720 40113 TRACE [your.package.path] Traceback (most recent call last):
2016-04-09 00:00:05.720 40113 TRACE [your.package.path]   File "/usr/lib/python2.7/site-packages/[your_package_path].py", line 142, in _dispatch_and_reply
2016-04-09 00:00:05.720 40113 TRACE [your.package.path]     executor_callback))
2016-04-09 00:00:05.720 40113 TRACE [your.package.path]   File "/usr/lib/python2.7/site-packages/[your_package_path].py", line 186, in _dispatch
2016-04-09 00:00:05.720 40113 TRACE [your.package.path]     executor_callback)
2016-04-09 00:00:05.720 40113 TRACE [your.package.path]   File "/usr/lib/python2.7/site-packages/[your_package_path].py", line 130, in _do_dispatch
2016-04-09 00:00:05.720 40113 TRACE [your.package.path]     result = func(ctxt, **new_args)
2016-04-09 00:00:05.720 40113 TRACE [your.package.path]   File "/usr/lib/python2.7/site-packages/neutron/api/rpc/handlers/l3_rpc.py", line 62, in sync_routers
2016-04-09 00:00:05.720 40113 TRACE [your.package.path]     return self._sync_routers_inner(context, **kwargs)
2016-04-09 00:00:05.720 40113 TRACE [your.package.path]   File "/usr/lib/python2.7/site-packages/neutron/api/rpc/handlers/l3_rpc.py", line 77, in _sync_routers_inner
2016-04-09 00:00:05.720 40113 TRACE [your.package.path]     router_ids, sfrouter_versions)
2016-04-09 00:00:05.720 40113 TRACE [your.package.path]   File "/usr/lib/python2.7/site-packages/neutron/api/rpc/handlers/l3_rpc.py", line 344, in _filter_sync_routers
2016-04-09 00:00:05.720 40113 TRACE [your.package.path]     return self.l3plugin._filter_sync_routers(context, router_ids, sfrouter_versions)
2016-04-09 00:00:05.720 40113 TRACE [your.package.path]   File "/usr/lib/python2.7/site-packages/neutron/db/l3_db.py", line 227, in _filter_sync_routers
2016-04-09 00:00:05.720 40113 TRACE [your.package.path]     filter_by(router_id=id).one())
2016-04-09 00:00:05.720 40113 TRACE [your.package.path]   File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line 2401, in one
2016-04-09 00:00:05.720 40113 TRACE [your.package.path]     raise orm_exc.NoResultFound("No row was found for one()")
2016-04-09 00:00:05.720 40113 TRACE [your.package.path] NoResultFound: No row was found for one()
2016-04-09 00:00:05.720 40113 TRACE [your.package.path]
2016-04-09 00:00:05.720 40113 TRACE [your.package.path] END
2016-04-09 00:00:05.720 40113 INFO [your.package.path] TEST logstash multiline
```
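Instead of pasting the stack by hand, this step can be scripted. The sketch below is an assumption, not part of the original setup: the log path is a hypothetical stand-in for "[your_path]/python_stack.log", and the sample entries are shortened.

```python
import os
import tempfile

def append_stack(log_path, entries):
    """Append pre-formatted log lines to the file Logstash is watching."""
    with open(log_path, "a") as f:
        for entry in entries:
            f.write(entry + "\n")

# Hypothetical stand-in for "[your_path]/python_stack.log".
log_path = os.path.join(tempfile.gettempdir(), "python_stack.log")
stack = [
    "2016-04-09 00:00:05.720 40113 ERROR [pkg] Exception during message handling",
    "2016-04-09 00:00:05.720 40113 TRACE [pkg] Traceback (most recent call last):",
    "2016-04-09 00:00:05.720 40113 TRACE [pkg] NoResultFound: No row was found for one()",
]
append_stack(log_path, stack)
```

Appending (mode "a") rather than overwriting matters here: the file input tails the file, so each appended batch shows up as new events.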
Verification
Once the stack has been written to the log file, the multi-line stack appears as a single entry in Logstash's standard output, and Kibana likewise displays the whole stack as one record.
This completes the hands-on walkthrough of multiline recognition with ELK.