
ELK 5.0.1 + Filebeat 5.0.1: Real-Time Monitoring of MongoDB Logs with Grok Parsing in Logstash

2017-02-16 14:24
For the installation and deployment of ELK 5.0.1, see the earlier post (ELK 5.0.1 + Filebeat 5.0.1 for Linux RHEL 6.6: Monitoring MongoDB Logs). This post focuses on using Filebeat to monitor MongoDB logs in real time and on parsing those logs with grok regular expressions in Logstash.

Once ELK 5.0.1 is deployed, install Filebeat on each database server whose MongoDB logs you want to monitor, so it can ship the log lines. First, edit the Filebeat configuration file:

[root@se122 filebeat-5.0.1]# pwd
/opt/filebeat-5.0.1
[root@se122 filebeat-5.0.1]# ls
data  filebeat  filebeat.full.yml  filebeat.template-es2x.json  filebeat.template.json  filebeat.yml  scripts
[root@se122 filebeat-5.0.1]# cat filebeat.yml 
filebeat:
  prospectors:
    -
      paths:
        - /root/rs0-0.log          # MongoDB log file that Filebeat monitors in real time
      document_type: mongodblog    # document type attached to the MongoDB log events sent to Logstash; must be set, because the Logstash filter matches on it
      input_type: log
  registry_file: /opt/filebeat-5.0.1/data/registry
output.logstash:
  hosts: ["10.117.194.228:5044"]   # IP address and port of the machine running the Logstash service
[root@se122 filebeat-5.0.1]# 
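Before pointing Filebeat at Logstash, it is worth validating the YAML. A minimal sketch, assuming the -configtest flag of Filebeat 5.x (it parses the configuration and exits without shipping anything):

[root@se122 filebeat-5.0.1]# /opt/filebeat-5.0.1/filebeat -configtest -c /opt/filebeat-5.0.1/filebeat.yml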

Next, edit the Logstash configuration file:

[root@rhel6 config]# pwd
/opt/logstash-5.0.1/config
[root@rhel6 config]# cat logstash_mongodb.conf 
#input {
#  stdin {}
#}
input {
  beats {
    host => "0.0.0.0"
    port => 5044
    type => "mongodblog"  # type tag for the MongoDB log events arriving from Filebeat
  }
}
filter {
  if [type] == "mongodblog" {  # only process the mongodblog events shipped by Filebeat
    grok {  # first-level parse: split the raw MongoDB log line into structured fields
      match => ["message","%{TIMESTAMP_ISO8601:timestamp}\s+%{MONGO3_SEVERITY:severity}\s+%{MONGO3_COMPONENT:component}\s+(?:\[%{DATA:context}\])?\s+%{GREEDYDATA:body}"]
    }
    if [component] =~ "WRITE" {
      grok {  # second-level parse of the body: extract the command_type, db_name, command, and spend_time fields
        match => ["body","%{WORD:command_type}\s+%{DATA:db_name}\s+\w+\:\s+%{GREEDYDATA:command}%{INT:spend_time}ms$"]
      }
    } else {
      grok {
        match => ["body","\s+%{DATA:db_name}\s+\w+\:\s+%{WORD:command_type}\s+%{GREEDYDATA:command}protocol.*%{INT:spend_time}ms$"]
      }
    }
    date {
      match => [ "timestamp", "UNIX", "YYYY-MM-dd HH:mm:ss", "ISO8601" ]
      remove_field => [ "timestamp" ]
    }
  }
}
output {
  elasticsearch {
    hosts => ["192.168.144.230:9200"]
    index => "mongod_log-%{+YYYY.MM}"
  }
  stdout {
    codec => rubydebug
  }
}
[root@rhel6 config]# 
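Grok patterns like the two above are easy to get subtly wrong, so it helps to check the configuration before starting Logstash for real. A sketch assuming Logstash 5.x's --config.test_and_exit option; the commented-out stdin input at the top of the file can also be re-enabled to paste sample MongoDB log lines interactively and watch how they are parsed:

[root@rhel6 config]# /opt/logstash-5.0.1/bin/logstash -f /opt/logstash-5.0.1/config/logstash_mongodb.conf --config.test_and_exit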

Then make sure all of the server-side ELK processes are running. The startup commands:

[elasticsearch@rhel6 ]$ /home/elasticsearch/elasticsearch-5.0.1/bin/elasticsearch
[root@rhel6 ~]# /opt/logstash-5.0.1/bin/logstash -f /opt/logstash-5.0.1/config/logstash_mongodb.conf 
[root@rhel6 ~]# /opt/kibana-5.0.1/bin/kibana
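If it is unclear whether Elasticsearch came up cleanly, a quick request against its REST endpoint (the address comes from the output section of the Logstash config) should return the cluster name and version metadata:

[root@rhel6 ~]# curl http://192.168.144.230:9200/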

Start Filebeat on the remote host to begin monitoring the MongoDB log:

[root@se122 filebeat-5.0.1]# /opt/filebeat-5.0.1/filebeat -e -c /opt/filebeat-5.0.1/filebeat.yml -d "Publish"
2017/02/16 05:50:40.931969 beat.go:264: INFO Home path: [/opt/filebeat-5.0.1] Config path: [/opt/filebeat-5.0.1] Data path: [/opt/filebeat-5.0.1/data] Logs path: [/opt/filebeat-5.0.1/logs]
2017/02/16 05:50:40.932036 beat.go:174: INFO Setup Beat: filebeat; Version: 5.0.1
2017/02/16 05:50:40.932167 logp.go:219: INFO Metrics logging every 30s
2017/02/16 05:50:40.932227 logstash.go:90: INFO Max Retries set to: 3
2017/02/16 05:50:40.932444 outputs.go:106: INFO Activated logstash as output plugin.
2017/02/16 05:50:40.932594 publish.go:291: INFO Publisher name: se122
2017/02/16 05:50:40.935437 async.go:63: INFO Flush Interval set to: 1s
2017/02/16 05:50:40.935473 async.go:64: INFO Max Bulk Size set to: 2048
2017/02/16 05:50:40.935745 beat.go:204: INFO filebeat start running.
2017/02/16 05:50:40.935836 registrar.go:66: INFO Registry file set to: /opt/filebeat-5.0.1/data/registry
2017/02/16 05:50:40.935905 registrar.go:99: INFO Loading registrar data from /opt/filebeat-5.0.1/data/registry
2017/02/16 05:50:40.936717 registrar.go:122: INFO States Loaded from registrar: 1
2017/02/16 05:50:40.936771 crawler.go:34: INFO Loading Prospectors: 1
2017/02/16 05:50:40.936860 prospector_log.go:40: INFO Load previous states from registry into memory
2017/02/16 05:50:40.936923 registrar.go:211: INFO Starting Registrar
2017/02/16 05:50:40.936939 sync.go:41: INFO Start sending events to output
2017/02/16 05:50:40.937148 spooler.go:64: INFO Starting spooler: spool_size: 2048; idle_timeout: 5s
2017/02/16 05:50:40.937286 prospector_log.go:67: INFO Previous states loaded: 1
2017/02/16 05:50:40.937404 crawler.go:46: INFO Loading Prospectors completed. Number of prospectors: 1
2017/02/16 05:50:40.937440 crawler.go:61: INFO All prospectors are initialised and running with 1 states to persist
2017/02/16 05:50:40.937478 prospector.go:106: INFO Starting prospector of type: log
2017/02/16 05:50:40.937745 log.go:84: INFO Harvester started for file: /root/rs0-0.log
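If no events appear in Logstash after startup, the log file may simply be idle. One way to force a fresh line into the MongoDB log, assuming the mongo shell is available on the database host, is to issue the same replSetGetStatus command that shows up in the sample event below; whether such a command actually gets logged depends on the server's log verbosity and slowms settings:

[root@se122 ~]# mongo --eval 'db.adminCommand({ replSetGetStatus: 1 })'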

As the last line shows, Filebeat is now monitoring the MongoDB log /root/rs0-0.log in real time. Switching to the foreground window where Logstash is running, we can see output like this:

{
        "severity" => "I",
          "offset" => 243843239,
      "spend_time" => "0",
      "input_type" => "log",
          "source" => "/root/rs0-0.log",
         "message" => "2017-02-04T14:03:30.025+0800 I COMMAND  [conn272] command admin.$cmd command: replSetGetStatus { replSetGetStatus: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:364 locks:{} protocol:op_query 0ms",
            "type" => "mongodblog",
            "body" => "command admin.$cmd command: replSetGetStatus { replSetGetStatus: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:364 locks:{} protocol:op_query 0ms",
         "command" => "{ replSetGetStatus: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:364 locks:{} ",
            "tags" => [
        [0] "beats_input_codec_plain_applied"
    ],
       "component" => "COMMAND",
      "@timestamp" => 2017-02-04T06:03:30.025Z,
         "db_name" => "admin.$cmd",
    "command_type" => "replSetGetStatus",
        "@version" => "1",
            "beat" => {
        "hostname" => "se122",
            "name" => "se122",
         "version" => "5.0.1"
    },
            "host" => "se122",
         "context" => "conn272"
}
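To confirm that the parsed events are also landing in Elasticsearch under the monthly index defined in the output section, the _cat API can be queried from any host:

[root@rhel6 ~]# curl 'http://192.168.144.230:9200/_cat/indices/mongod_log-*?v'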

This shows that Logstash filtered the incoming events as configured and parsed the mongodblog entries with the specified grok patterns. Note that the date filter stores @timestamp in UTC, so the local timestamp 2017-02-04T14:03:30.025+0800 from the log line appears as 2017-02-04T06:03:30.025Z. Next, create the index pattern in Kibana:

(Screenshot: creating the index pattern in Kibana.)

After that, the monitored MongoDB log entries can be browsed in a custom Kibana view:

(Screenshot: MongoDB log entries displayed in Kibana.)