Deploying ELK to Collect Nginx Logs (Redis Buffer, Filebeat Collector)

Connecting Filebeat to Redis:

  • The Nginx log has already been configured in JSON format (to change your Nginx log format, copy the log configuration below into your Nginx config)
  • If Nginx is not installed yet, see the article linked below

CentOS 7.5: Compiling Nginx 1.14.2 from Source, in Detail

log_format main_json '{"client_ip": "$remote_addr",'
                     '"client_user": "$remote_user",'
                     '"local_time": "[$time_local]",'
                     '"request": "$request",'
                     '"response_time": "$request_time",'
                     '"upstream_time": "$upstream_response_time",'
                     '"status_num": "$status",'
                     '"response_size": "$body_bytes_sent",'
                     '"skip_link": "$http_referer",'
                     '"client_agent": "$http_user_agent",'
                     '"new_subject": "$http_x_forwarded_for"'
                     '}';

access_log  /var/log/nginx/access.log  main_json;
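To sanity-check that `main_json` really emits parseable JSON, one option is to pipe a line in that shape through Python's JSON parser (the sample values below are made up, not from a real request):

```shell
# Hypothetical sample line shaped like the main_json format above;
# the field values are illustrative only.
printf '%s\n' '{"client_ip": "172.18.1.50", "client_user": "-", "request": "GET / HTTP/1.0", "response_time": "0.002", "status_num": "200", "response_size": "11"}' \
  | python3 -m json.tool
```

One caveat: a raw `$request` or `$http_user_agent` containing a double quote would break this JSON; nginx 1.11.8+ supports `log_format main_json escape=json ...` to escape such characters.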
  • Be sure to keep the Filebeat modules disabled; every module file under modules.d/ should stay in the nginx.yml.disabled (disabled) state
    [root@web01 ~]# filebeat modules disable nginx
    Disabled nginx
Install the Redis database:
[root@web01 ~]# yum -y install redis
[root@web01 ~]# vim /etc/redis.conf +61
bind 0.0.0.0

[root@web01 ~]# systemctl start redis.service
[root@web01 ~]# systemctl enable redis.service
[root@web01 ~]# netstat -tunpl |grep redis
tcp        0      0 0.0.0.0:6379          0.0.0.0:*               LISTEN      2181/redis-server 1
Install the Filebeat package:
[root@web01 ~]# wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.6.0-x86_64.rpm
[root@web01 ~]# rpm -ivh filebeat-6.6.0-x86_64.rpm
warning: filebeat-6.6.0-x86_64.rpm: Header V4 RSA/SHA512 Signature, key ID d88e42b4: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:filebeat-6.6.0-1                 ################################# [100%]
Configure Filebeat:
[root@web01 ~]# vim /etc/filebeat/filebeat.yml

filebeat.inputs:
- type: log
  enabled: true    # enable this input
  paths:
    - /var/log/nginx/access.log   # path to the Nginx access log
  json.keys_under_root: true
  json.overwrite_keys: true
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false    # keep module config reloading off
output.redis:
    hosts: ["172.18.1.100"]    # Redis host
    key: "filebeat"    # name of the list key created in Redis
    db: 0
    timeout: 5
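One pitfall worth guarding against: Filebeat accepts exactly one output, and a misspelled section key (e.g. `outout.redis`) leaves no output defined, so Filebeat refuses to start. A crude sketch of a sanity check, run against a throwaway copy of the output section (hypothetical scratch path):

```shell
# Write just the output section to a scratch file (hypothetical path).
cat > /tmp/filebeat-output.yml <<'EOF'
output.redis:
    hosts: ["172.18.1.100"]
    key: "filebeat"
    db: 0
    timeout: 5
EOF

# Exactly one top-level key should start with "output." -- a typo
# like "outout.redis" would make this count 0.
grep -c '^output\.' /tmp/filebeat-output.yml    # -> 1
```

On a host with Filebeat installed, `filebeat test config` and `filebeat test output` give a more thorough check.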
Restart Filebeat:
[root@web01 ~]# systemctl restart filebeat.service
Check the number of entries in Redis:
  • Redis has been flushed and the Nginx log file emptied beforehand, so the count starts at zero
# llen returns how many entries the filebeat list key holds
[root@web01 ~]# redis-cli llen filebeat
(integer) 0

# ab load test: 1000 requests at a concurrency of 1000
[root@web01 ~]# ab -n 1000 -c 1000 http://172.18.1.100/

# the log entries are now buffered in the Redis database
[root@web01 ~]# redis-cli llen filebeat
(integer) 1000

[root@web01 ~]# redis-cli rpop filebeat
"{\"@timestamp\":\"2019-12-17T06:45:43.875Z\",\"@metadata\":{\"beat\":\"filebeat\",\"type\":\"doc\",\"version\":\"6.6.0\"},\"skip_link\":\"-\",\"log\":{\"file\":{\"path\":\"/var/log/nginx/access.log\"}},\"local_time\":\"[17/Dec/2019:14:45:43 +0800]\",\"input\":{\"type\":\"log\"},\"host\":{\"name\":\"web01.novalocal\"},\"source\":\"/var/log/nginx/access.log\",\"offset\":233766,\"new_subject\":\"-\",\"beat\":{\"hostname\":\"web01.novalocal\",\"version\":\"6.6.0\",\"name\":\"web01.novalocal\"},\"client_ip\":\"172.18.1.100\",\"client_agent\":\"ApacheBench/2.3\",\"client_user\":\"-\",\"request\":\"GET / HTTP/1.0\",\"status_num\":\"200\",\"response_size\":\"11\",\"prospector\":{\"type\":\"log\"}}"
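Because the input sets `json.keys_under_root: true`, the fields from the Nginx JSON line (`client_ip`, `status_num`, ...) sit at the top level of the event rather than nested under a `json` object, as the popped entry above shows. Extracting a field from such an event is then straightforward; a sketch using a trimmed copy of that event:

```shell
# Trimmed copy of the event popped above; pull out two top-level fields.
event='{"@timestamp":"2019-12-17T06:45:43.875Z","client_ip":"172.18.1.100","status_num":"200","request":"GET / HTTP/1.0"}'
printf '%s' "$event" \
  | python3 -c 'import json,sys; e=json.load(sys.stdin); print(e["client_ip"], e["status_num"])'
# -> 172.18.1.100 200
```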

Configure Logstash to pull the log data from Redis:

[root@logstash ~]# yum install java-1.8.0-openjdk -y
[root@logstash ~]# java -version

openjdk version "1.8.0_191"
OpenJDK Runtime Environment (build 1.8.0_191-b12)
OpenJDK 64-Bit Server VM (build 25.191-b12, mixed mode)
Download and install the package:
[root@logstash ~]# wget https://artifacts.elastic.co/downloads/logstash/logstash-6.6.0.rpm
[root@logstash ~]# rpm -ivh logstash-6.6.0.rpm 
warning: logstash-6.6.0.rpm: Header V4 RSA/SHA512 Signature, key ID d88e42b4: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:logstash-1:6.6.0-1               ################################# [100%]
Using provided startup.options file: /etc/logstash/startup.options
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
Successfully created system startup script for Logstash
Configure Logstash to fetch the Nginx logs from Redis:
[root@logstash ~]# vim /etc/logstash/conf.d/nginx.conf

input {
  redis {
    host => "172.18.1.100"    # Redis host
    port => "6379"    # Redis port
    db => "0"
    key => "filebeat"   # list key written by Filebeat in Redis
    data_type => "list"
  }
}

filter {
  mutate {
    convert => ["response_time","float"]    # index as a number, not text
    convert => ["upstream_time","float"]    # index as a number, not text
  }
}

output {
  elasticsearch {
    hosts => "http://172.18.1.76:9200"     # Elasticsearch host address
    manage_template => false
    index => "nginx-%{+YYYY.MM}"       # index name created in Elasticsearch
  }
}
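The `mutate`/`convert` step exists because every value in the JSON log line arrives as a string; without converting `response_time` and `upstream_time` to floats, Elasticsearch would treat them as text, and numeric aggregations in Kibana (averages, percentiles) would be wrong or unavailable. The difference between string and numeric ordering is easy to demonstrate:

```shell
# Lexicographic (string) ordering puts "10" before "2"; numeric does not.
printf '10\n2\n9\n' | sort      # string sort:  10 2 9
printf '10\n2\n9\n' | sort -n   # numeric sort: 2 9 10
```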
Check the number of entries in Redis:
  • Once Logstash pulls data from Redis, the consumed entries are removed, so the list eventually ends up empty.
[root@web01 ~]# redis-cli llen filebeat
(integer) 1000
Start Logstash:
[root@logstash ~]# systemctl start logstash
[root@logstash ~]# systemctl enable logstash
Check the Redis entry count again:
[root@web01 ~]# redis-cli llen filebeat
(integer) 0
List the indices in Elasticsearch:
[root@db01 ~]# curl '172.18.1.76:9200/_cat/indices?v' 
health status index                           uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   nginx-2019.12                   PhGgYHNHTm2vKiNepQuOFg   5   1      11000            0      3.1mb          1.5mb
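Since `_cat` endpoints return aligned plain text, they script well. A sketch that pulls `docs.count` for the nginx index out of a saved copy of the response above (on a live cluster you would pipe `curl -s '172.18.1.76:9200/_cat/indices?v'` instead):

```shell
# Saved copy of the _cat/indices response shown above.
cat > /tmp/indices.txt <<'EOF'
health status index         uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   nginx-2019.12 PhGgYHNHTm2vKiNepQuOFg   5   1      11000            0      3.1mb          1.5mb
EOF

# Column 3 is the index name, column 7 is docs.count.
awk '$3 ~ /^nginx-/ {print $7}' /tmp/indices.txt    # -> 11000
```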

Install the Kibana UI:

[root@db01 ~]# wget https://artifacts.elastic.co/downloads/kibana/kibana-6.6.0-x86_64.rpm
[root@db01 ~]# rpm -ivh kibana-6.6.0-x86_64.rpm
Configure Kibana to connect to Elasticsearch:
[root@db01 ~]# vim /etc/kibana/kibana.yml

server.port: 5601                                # port Kibana listens on
server.host: "172.18.1.76"                       # IP address Kibana listens on
elasticsearch.hosts: ["http://172.18.1.76:9200"] # address of the ES cluster master node Kibana connects to
Start Kibana:
[root@db01 ~]# systemctl start kibana.service
[root@db01 ~]# systemctl enable kibana.service
[root@db01 ~]# netstat -tunpl |grep 5601
tcp        0      0 172.18.1.76:5601        0.0.0.0:*               LISTEN      18542/node
  • The index-pattern creation steps are omitted here

Line chart: request counts per client IP:

Pie chart: request counts per status code:

Bar chart: request counts per URL:


Add the charts to a dashboard:
  • Search for the visualization names you just saved, then click to add each one to the dashboard.
