Configuring filebeat + logstash to collect nginx logs (Part 1)

Category: Server · Nginx · Published: 6 years ago


Environment

192.168.33.10 # Elasticsearch and Kibana; 4 GB+ RAM recommended

192.168.33.11 # logstash; 4 GB+ RAM recommended

192.168.33.12 # filebeat and nginx; 2 GB+ RAM recommended

How it works

1. filebeat collects the nginx logs and sends them to logstash, so filebeat must be given the location of the nginx log files plus the address and port of logstash.

2. When logstash receives a log line, it parses it (the filter stage extracts the key fields, such as the timestamp, client IP, request URL, and HTTP status code) and then forwards the result to Elasticsearch.

3. Kibana displays the log data stored in Elasticsearch.

1. Install Elasticsearch and Kibana

See this earlier article.

2. Install and configure logstash

In this article, Logstash is installed on 192.168.33.11.

$ yum install logstash

$ cp /etc/logstash/logstash.yml /etc/logstash/logstash.yml.`date +%Y%m%d`
$ vim /etc/logstash/logstash.yml  # edit the following settings
...
path.data: /var/lib/logstash
path.config: /etc/logstash/conf.d
...
http.host: "192.168.33.11"
path.logs: /var/log/logstash
...
$ vim /etc/logstash/conf.d/logstash-nginx-es.conf  # create with the following content
input {
    beats {
        host => "0.0.0.0"
        port => 5400
    }
}

filter {
   grok {
      match => { "message" => "%{IPORHOST:remote_ip} - %{DATA:user_name} \[%{HTTPDATE:access_time}\] \"%{WORD:http_method} %{DATA:url} HTTP/%{NUMBER:http_version}\" %{NUMBER:response_code} %{NUMBER:body_sent_bytes} \"%{DATA:referrer}\" \"%{DATA:agent}\"" }
        }
}

output {
 elasticsearch {
   hosts => ["192.168.33.10:9200"]
   index => "weblogs_index_pattern-%{+YYYY.MM.dd}"    # weblogs_index_pattern becomes the index pattern you select in Kibana; rename it as you like
 }
 stdout { codec => rubydebug }
}
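A note on the `index` setting above: the `%{+YYYY.MM.dd}` suffix is Logstash's date-format sprintf (Joda-style codes), so each event is written to a per-day index. A rough Python sketch of that expansion, with Python's strftime codes standing in for the Joda ones:

```python
from datetime import datetime, timezone

def daily_index(prefix, when):
    """Expand a per-day index name, as %{+YYYY.MM.dd} does in a Logstash output."""
    return f"{prefix}-{when.strftime('%Y.%m.%d')}"

# Using the @timestamp from the sample events later in this article:
event_time = datetime(2018, 8, 29, 1, 8, 3, tzinfo=timezone.utc)
print(daily_index("weblogs_index_pattern", event_time))
# -> weblogs_index_pattern-2018.08.29
```

One index per day keeps indices small and makes retention easy (old days can be deleted wholesale).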

One thing worth noting from this conf file: a Logstash pipeline configuration always has this three-section structure:

input {
# the address and port logstash listens on, so that filebeat can send logs here
}
filter {
# how to parse the nginx log lines once they are received
}
output {
# where to send the parsed events, usually an Elasticsearch server
}
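The grok expression in the filter block is essentially a regular expression with named captures. As an illustration only (this is a simplified stand-in, not Logstash's actual pattern library; the real IPORHOST, HTTPDATE, etc. patterns are stricter), here is a rough Python equivalent that extracts the same fields:

```python
import re

# Named groups mirror the grok field names from the filter above.
NGINX_ACCESS = re.compile(
    r'(?P<remote_ip>\S+) - (?P<user_name>\S+) \[(?P<access_time>[^\]]+)\] '
    r'"(?P<http_method>\S+) (?P<url>\S+) HTTP/(?P<http_version>[\d.]+)" '
    r'(?P<response_code>\d+) (?P<body_sent_bytes>\d+) '
    r'"(?P<referrer>[^"]*)" "(?P<agent>[^"]*)"'
)

def parse_access_line(line):
    """Return the extracted fields as a dict, or None if the line doesn't match."""
    m = NGINX_ACCESS.match(line)
    return m.groupdict() if m else None

sample = ('192.168.33.11 - - [29/Aug/2018:01:08:02 +0000] '
          '"GET /?3 HTTP/1.1" 200 3700 "-" "curl/7.29.0"')
fields = parse_access_line(sample)
print(fields["response_code"], fields["url"])
# -> 200 /?3
```

If a line does not match, Logstash tags the event with `_grokparsefailure` instead of extracting fields; here the function simply returns None.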

Before starting logstash, test whether the configuration file works.

# test the configuration file (this may take several minutes to return)
$ cd /etc/logstash
$ /usr/share/logstash/bin/logstash --path.settings ./ -f ./conf.d/logstash-nginx-es.conf --config.test_and_exit
# "Configuration OK" means the file is valid

Next, a second test: can logstash actually capture and parse the nginx logs? Open two terminals on the logstash server (192.168.33.11): one keeps requesting nginx (192.168.33.12), the other runs logstash to test the parsing.

# first terminal: generate traffic
$  while true ; do n=$(( RANDOM % 10 )) ; curl "192.168.33.12/?$n" ; sleep $n ; done

# second terminal: test log parsing
$ cd /etc/logstash
$ /usr/share/logstash/bin/logstash -f ./conf.d/logstash-nginx-es.conf  # output like the following means logstash is parsing the nginx logs correctly
{
      "response_code" => "404",
             "source" => "/var/log/nginx/access.log",
    "body_sent_bytes" => "3650",
            "message" => "127.0.0.1 - - [29/Aug/2018:01:08:01 +0000] \"GET /server-status?auto= HTTP/1.1\" 404 3650 \"-\" \"Go-http-client/1.1\" \"-\"",
       "http_version" => "1.1",
        "http_method" => "GET",
         "@timestamp" => 2018-08-29T01:08:03.712Z,
               "host" => {
        "name" => "data2.node"
    },
             "offset" => 3431209,
        "access_time" => "29/Aug/2018:01:08:01 +0000",
              "agent" => "Go-http-client/1.1",
           "referrer" => "-",
                "url" => "/server-status?auto=",
               "beat" => {
         "version" => "6.4.0",
            "name" => "data2.node",
        "hostname" => "data2.node"
    },
         "prospector" => {
        "type" => "log"
    },
              "input" => {
        "type" => "log"
    },
          "user_name" => "-",
          "remote_ip" => "127.0.0.1",
               "tags" => [
        [0] "beats_input_codec_plain_applied"
    ],
           "@version" => "1"
}
{
      "response_code" => "200",
             "source" => "/var/log/nginx/access.log",
    "body_sent_bytes" => "3700",
            "message" => "192.168.33.11 - - [29/Aug/2018:01:08:02 +0000] \"GET /?3 HTTP/1.1\" 200 3700 \"-\" \"curl/7.29.0\" \"-\"",
       "http_version" => "1.1",
        "http_method" => "GET",
         "@timestamp" => 2018-08-29T01:08:03.712Z,
               "host" => {
        "name" => "data2.node"
    },
             "offset" => 3431326,
        "access_time" => "29/Aug/2018:01:08:02 +0000",
              "agent" => "curl/7.29.0",
           "referrer" => "-",
                "url" => "/?3",
               "beat" => {
         "version" => "6.4.0",
            "name" => "data2.node",
        "hostname" => "data2.node"
    },
         "prospector" => {
        "type" => "log"
    },
              "input" => {
        "type" => "log"
    },
          "user_name" => "-",
          "remote_ip" => "192.168.33.11",
               "tags" => [
        [0] "beats_input_codec_plain_applied"
    ],
           "@version" => "1"
}

Start logstash

# startup may take 1-2 minutes
$ systemctl enable logstash && \
systemctl restart logstash

# confirm startup (if the ports are not listening yet, don't worry: logstash is slow to start; check the logs below to troubleshoot)
$ netstat -antp | egrep '(:9600|:5400)'
tcp6       0      0 127.0.0.1:9600          :::*            LISTEN      7046/java
tcp6       0      0 :::5400                 :::*            LISTEN      7046/java
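The same check can be scripted instead of eyeballing netstat output. A minimal sketch using Python's standard library (the host and ports are the ones used in this article; adjust them to your environment):

```python
import socket

def port_is_listening(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Ports from this article: 5400 is the beats input, 9600 the logstash monitoring API.
for port in (5400, 9600):
    state = "listening" if port_is_listening("192.168.33.11", port) else "not reachable"
    print(port, state)
```

Note that 9600 is bound to 127.0.0.1 in the netstat output above, so it will only answer from the logstash host itself.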

Troubleshooting startup

$ tail -20 /var/log/messages
$ tail -20 /var/log/logstash/logstash-plain.log

3. Install and configure filebeat

Install filebeat on the server running nginx, here 192.168.33.12, so both nginx and filebeat go on that host.

$ yum install nginx
$ systemctl restart nginx
$ curl http://192.168.33.12    # make sure nginx is reachable
$ yum install filebeat
$ cp /etc/filebeat/filebeat.yml /etc/filebeat/filebeat.yml.`date +%Y%m%d`
$ vim /etc/filebeat/filebeat.yml  # edit or add the following
filebeat.inputs:
- type: log
  enabled: true
  paths:
    #- /var/log/messages  # comment out the system log
    - /var/log/nginx/*.log  # add the nginx logs
  exclude_files: ['.gz$']
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
#setup.kibana:           # disable output to kibana
  #host: "localhost:5601"
#output.elasticsearch:   # disable output to elasticsearch
  # Array of hosts to connect to.
  #hosts: ["localhost:9200"]
output.logstash:    # enable output to logstash
  hosts: ["192.168.33.11:5400"]
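The effect of `paths` plus `exclude_files` is a two-step selection: first match files by path glob, then drop any whose path matches an exclude regex. A sketch of that logic (illustrative only, not filebeat's actual code; a broader glob is used here so the exclude actually fires, and the doc's `.gz$` is written with the dot escaped):

```python
import fnmatch
import re

def select_files(candidates, glob_pattern, exclude_regexes):
    """Glob-match candidate paths, then drop any matching an exclude regex."""
    matched = [p for p in candidates if fnmatch.fnmatch(p, glob_pattern)]
    excludes = [re.compile(rx) for rx in exclude_regexes]
    return [p for p in matched if not any(rx.search(p) for rx in excludes)]

candidates = [
    "/var/log/nginx/access.log",
    "/var/log/nginx/error.log",
    "/var/log/nginx/access.log-20180829.gz",
    "/var/log/messages",
]
print(select_files(candidates, "/var/log/nginx/*", [r"\.gz$"]))
# -> ['/var/log/nginx/access.log', '/var/log/nginx/error.log']
```

The system log is excluded by the glob, while the rotated `.gz` archive is matched by the glob but dropped by the exclude regex.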

Start the filebeat service

$ systemctl enable filebeat && \
systemctl restart filebeat

# confirm the service started successfully
$ ps -ef | grep filebeat
root    8848    1  1 03:30 ?    00:00:02 /usr/share/filebeat/bin/filebeat -c /etc/filebeat/filebeat.yml -path.home...

Troubleshooting startup

$ tail -20 /var/log/messages
$ tail -20 /var/log/filebeat/filebeat

4. Customizing the Kibana view

Now open the Kibana UI (http://192.168.33.10:5601 in this example), click "Discover" in the left sidebar, and select the index pattern we defined.

At first glance the data looks cluttered, so here is how to customize the log view.


The final customized view:

