Collecting nginx logs with Filebeat, processing them in Logstash, and storing them in Elasticsearch (ELK)
A rough flow diagram of the pipeline:
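In text form (ports are the defaults used later in this post):

nginx (access_json.log) -> Filebeat -> Logstash (beats input, :5044) -> Elasticsearch (:9200)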
nginx configuration
The main idea is to write the log as JSON so it is easy to parse. This usually goes inside the http { } block:
log_format access_json '{"timestamp":"$time_iso8601",'
                       '"host":"$server_addr",'
                       '"clientIp":"$remote_addr",'
                       '"size":$body_bytes_sent,'
                       '"responseTime":$request_time,'
                       '"upstreamTime":"$upstream_response_time",'
                       '"upstreamHost":"$upstream_addr",'
                       '"httpHost":"$host",'
                       '"uri":"$uri",'
                       '"xff":"$http_x_forwarded_for",'
                       '"referer":"$http_referer",'
                       '"tcpXff":"$proxy_protocol_addr",'
                       '"httpUserAgent":"$http_user_agent",'
                       '"status":"$status"}';

access_log /home/log/access_json.log access_json;
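With this format each request becomes one JSON object per line. A line looks roughly like this (all values made up for illustration):

{"timestamp":"2021-08-20T10:15:30+08:00","host":"192.168.0.10","clientIp":"10.0.0.3","size":612,"responseTime":0.005,"upstreamTime":"0.004","upstreamHost":"127.0.0.1:8080","httpHost":"example.com","uri":"/index.html","xff":"-","referer":"-","tcpXff":"","httpUserAgent":"curl/7.68.0","status":"200"}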
filebeat
Installation
curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.14.0-linux-x86_64.tar.gz
tar -xzvf filebeat-7.14.0-linux-x86_64.tar.gz
Configuration (filebeat.yml). To collect from more sources, just add another - type: log block:
filebeat.inputs:
- type: log
  # Change to true to enable this input configuration.
  enabled: true
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /home/log/access_json.log
    #- c:\programdata\elasticsearch\logs\*
  fields:
    log_source: nginx
    log_type: www
  fields_under_root: true
  tags: ["nginx"]
Further down, configure the output to point at Logstash:
output.logstash:
  # The Logstash hosts
  hosts: ["192.168.0.1:5044"]
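Before starting for real, you can sanity-check the config file and the connection to Logstash with Filebeat's built-in test subcommands (run from the extracted directory, where filebeat.yml is picked up by default):

./filebeat test config
./filebeat test output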
Configure processors so the JSON in each log line is decoded into top-level fields:
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
  - decode_json_fields:
      fields: [message]
      target: ""
      overwrite_keys: false
      process_array: false
      max_depth: 1
We don't use the nginx config from Filebeat's modules here, because we want to work with the JSON-format logs directly.
The decode_json_fields processor lifts the JSON object inside the message field up to top-level fields (that's what target: "" does).
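After decode_json_fields runs, the event shipped to Logstash looks roughly like this (trimmed of beat metadata, values illustrative). Note that log_source and log_type also sit at the top level because of fields_under_root: true:

{
  "timestamp": "2021-08-20T10:15:30+08:00",
  "clientIp": "10.0.0.3",
  "uri": "/index.html",
  "status": "200",
  "log_source": "nginx",
  "log_type": "www",
  "tags": ["nginx"],
  "message": "{\"timestamp\":\"2021-08-20T10:15:30+08:00\", ...}"
}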
Start it; with no flags it reads filebeat.yml by default:
nohup ./filebeat -e &
Logstash configuration
Put all the pipeline config files in one directory, e.g. /etc/logstash/conf.d.
Here I set up one called logstash-nginx.conf (it has to end in .conf so the glob below matches it):
input {
  beats {
    port => 5044
    client_inactivity_timeout => 600   # close idle connections after 600 seconds
  }
}

output {
  stdout { codec => rubydebug }
  elasticsearch {
    index => "nginx_success_logs"
    hosts => ["localhost:9200"]
  }
}
If you have several configs, don't let them listen on the same port — each needs its own value for the port => 5044 shown above.
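For example, a second config for another log source might look like this (logstash-app.conf, port 5045, and the index name are all hypothetical):

input {
  beats {
    port => 5045   # must differ from the 5044 used above
  }
}

output {
  elasticsearch {
    index => "app_logs"
    hosts => ["localhost:9200"]
  }
}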
Then go to the Logstash directory and edit the config/pipelines.yml file; it's very simple:
- pipeline.id: main
  path.config: "/etc/logstash/conf.d/*.conf"
Just point path.config at the directory where you put the config files above.
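One caveat: as far as I know, all files matched by the glob are concatenated into a single pipeline, so events from every input flow through every output. If you want the configs isolated from each other, give each one its own pipeline instead (a sketch; the ids and file names are hypothetical):

- pipeline.id: nginx
  path.config: "/etc/logstash/conf.d/logstash-nginx.conf"
- pipeline.id: app
  path.config: "/etc/logstash/conf.d/logstash-app.conf"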
Check the config file:
bin/logstash -f logstash.conf -t
Then start it up. Note that -f expects a pipeline config, not pipelines.yml; when Logstash is started without -f, it reads config/pipelines.yml automatically:

nohup ./bin/logstash &
The crudest way to restart is to delete the data folder and run it again — I only allowed myself that brute-force shortcut while debugging.
The better option is automatic config reloading (again, no -f; pipelines.yml is picked up automatically):

./bin/logstash --config.reload.automatic
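Once events start flowing, you can check that the nginx_success_logs index shows up (assuming Elasticsearch runs on localhost:9200):

curl "localhost:9200/_cat/indices?v"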
Elasticsearch configuration