Collecting nginx logs with ELK

Date: 2019-08-20

Environment: CentOS 7 with JDK 1.8.0_221. The host's IP address is 192.168.1.8.

Logstash 6.0.0, installed from the RPM package:

rpm -ivh logstash-6.0.0.rpm

The configuration files live under /etc/logstash. Create a pipeline config for the nginx logs:

vim /etc/logstash/conf.d/logstash.nginx.conf

input {
  file {
    path => "/usr/local/nginx/logs/*.log"
    start_position => "beginning"
  }
}

filter {
    grok {
      match => {
        "message" => '%{IPORHOST:remote_ip} - %{DATA:user_name} \[%{HTTPDATE:time}\] "%{WORD:request_action} %{DATA:request} HTTP/%{NUMBER:http_version}" %{NUMBER:response} %{NUMBER:bytes} "%{DATA:referrer}" "%{DATA:agent}"'
      }
    }
    date {
      match => [ "time", "dd/MMM/yyyy:HH:mm:ss Z" ]
    }
}

output {
  elasticsearch {
     hosts => ["192.168.1.8:9200"]
     index => "logstash-nginx-access-log"
  }
  # Also print events to the console, for debugging
  stdout { codec => rubydebug }
}
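
With the pipeline written, the configuration can be checked and Logstash started. A minimal sketch, assuming the default RPM layout (binary under /usr/share/logstash and the systemd unit installed by the package):

# Validate the pipeline configuration without starting the service
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logstash.nginx.conf --config.test_and_exit
# Start Logstash as a service
systemctl start logstash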

Elasticsearch 6.0.0, installed from the tar.gz archive:

# Extract the archive
tar -zxvf elasticsearch-6.0.0.tar.gz
# Move it under /usr/local
mv elasticsearch-6.0.0 /usr/local

 vim /usr/local/elasticsearch-6.0.0/config/elasticsearch.yml

# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: my-tao
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: node8
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /usr/local/elasticsearch-6.0.0/data
#
# Path to log files:
#
path.logs: /usr/local/elasticsearch-6.0.0/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 192.168.1.8
#
# Set a custom port for HTTP:
#
http.port: 9200
#
# For more information, consult the network module documentation.

Elasticsearch refuses to start as root, so create a dedicated user, tao, to run it.
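
The post does not show the command used to create the account; a minimal sketch (assuming the user does not already exist) is:

useradd tao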

Change the ownership of the Elasticsearch directory to that user:

chown -R tao:tao /usr/local/elasticsearch-6.0.0

Switch to the tao user with su tao.

Starting Elasticsearch at this point fails with errors complaining that system resource limits are too low (the original post showed the error output in a screenshot that is not reproduced here).

So adjust the system limits for the tao user:

vim /etc/security/limits.conf

#<domain>      <type>  <item>         <value>
*       soft    nofile          65536
*       hard    nofile          131072
tao     soft    nproc           4096
tao     hard    nproc           4096
# Raise vm.max_map_count (the maximum number of memory map areas)
sudo sysctl -w vm.max_map_count=262144
# To persist the value across reboots, also add it to /etc/sysctl.conf, then reload:
sudo sysctl -p
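
With the limits raised, start Elasticsearch as the tao user and check it over HTTP. A minimal sketch using the paths and address from above:

su - tao
# -d runs Elasticsearch as a background daemon
/usr/local/elasticsearch-6.0.0/bin/elasticsearch -d
# Verify the node responds
curl http://192.168.1.8:9200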

Kibana 6.0.0, installed from the RPM package:

rpm -ivh kibana-6.0.0-x86_64.rpm

After installation, the configuration file lives under /etc/kibana.

vim /etc/kibana/kibana.yml

# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5601

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "192.168.1.8"

# Enables you to specify a path to mount Kibana at if you are running behind a proxy. This only affects
# the URLs generated by Kibana, your proxy is expected to remove the basePath value before forwarding requests
# to Kibana. This setting cannot end in a slash.
#server.basePath: ""

# The maximum payload size in bytes for incoming server requests.
#server.maxPayloadBytes: 1048576

# The Kibana server's name.  This is used for display purposes.
#server.name: "your-hostname"

# The URL of the Elasticsearch instance to use for all your queries.
elasticsearch.url: "http://192.168.1.8:9200"

# When this setting's value is true Kibana uses the hostname specified in the server.host
# setting. When the value of this setting is false, Kibana uses the hostname of the host
# that connects to this Kibana instance.
#elasticsearch.preserveHost: true

# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
# dashboards. Kibana creates a new index if the index doesn't already exist.
#kibana.index: ".kibana"
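
Finally, start Kibana and open it in a browser. A brief sketch, assuming the systemd unit installed by the RPM and the addresses configured above:

systemctl start kibana

Then browse to http://192.168.1.8:5601 and create an index pattern for the index written by Logstash (logstash-nginx-access-log) to explore the nginx logs in Discover.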

Original article: https://www.cnblogs.com/zt14/p/11380770.html