
ELK Log Analysis System in Practice

Published: 2020-07-09 14:54:43   Source: the web   Views: 16,320   Author: IT技術(shù)棧   Category: Big Data

1. Introduction to ELK

Official website:

    https://www.elastic.co/cn/

Chinese guide:

https://legacy.gitbook.com/book/chenryn/elk-stack-guide-cn/details
  • Since version 5.0, the ELK Stack has been renamed the Elastic Stack, which is roughly the ELK Stack plus Beats.

  • The ELK Stack consists of Elasticsearch, Logstash, and Kibana.

  • Elasticsearch is a real-time, full-text search and analytics engine that collects, analyzes, and stores data. It is a scalable, distributed system that exposes its search functionality through REST and Java APIs, and it is built on top of the Apache Lucene search library.

  • Logstash collects logs (it supports almost any type of log, including system logs, error logs, and custom application logs), parses them into JSON, and hands them to Elasticsearch.

  • Kibana is a web-based graphical interface for searching, analyzing, and visualizing the log data stored in Elasticsearch indices. It retrieves data through Elasticsearch's REST API and lets users build custom dashboards over their own data as well as query and filter the data in ad-hoc ways.

  • Beats is a family of lightweight log shippers. Early ELK architectures used Logstash to collect and parse logs, but Logstash is comparatively heavy on memory, CPU, and I/O; by contrast, the CPU and memory footprint of Beats is almost negligible.

  • X-Pack is an extension that adds security, alerting, monitoring, and reporting to the Elastic Stack; it is a commercial component, not open source.

2. ELK Architecture

(Figure: ELK architecture diagram)

3. Installing ELK

Environment preparation
1. Configure mutual hostname resolution on the nodes
[root@node-11 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.71.11.1  node-1
10.71.11.2  node-2
10.71.11.11 node-11

2. Install the JDK on every node

[root@node-11 ~]# yum install -y java-1.8.0-openjdk

Check the JDK version

[root@node-1 ~]# java -version
java version "1.8.0_161"
Java(TM) SE Runtime Environment (build 1.8.0_161-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.161-b12, mixed mode)

Note: Logstash does not currently support Java 9.

Install Elasticsearch

Note: run the following commands on all three nodes.

Import the GPG key

rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

Configure the yum repository

[root@node-1 ~]#  vi /etc/yum.repos.d/elastic.repo
[elasticsearch-6.x]
name=Elasticsearch repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

Refresh the yum cache

yum makecache

Because downloads from the repository are rather slow, install Elasticsearch from the RPM package instead.

RPM download page:

https://www.elastic.co/downloads/elasticsearch

Upload the downloaded RPM to each node and install it

rpm -ivh  elasticsearch-6.2.3.rpm
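
The RPM does not enable or start the service by itself; a short sketch of registering it with systemd (the actual start comes later, once the configuration is in place):

systemctl daemon-reload
systemctl enable elasticsearch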

Edit /etc/elasticsearch/elasticsearch.yml and add or modify the following settings

## Define the ELK cluster name and node name
cluster.name: cluster_elk
node.name: node-1
node.master: true
node.data: false

# Define the bind IP and port
network.host: 10.71.11.1
http.port: 9200

## Define the cluster nodes
discovery.zen.ping.unicast.hosts: ["node-1","node-2","node-11"]

Copy /etc/elasticsearch/elasticsearch.yml from node-1 to node-2 and node-11

[root@node-1 ~]# scp !$ node-2:/tmp/
scp /etc/elasticsearch/elasticsearch.yml node-2:/tmp/
elasticsearch.yml                                                                                                          100% 3001     3.6MB/s   00:00    
[root@node-1 ~]# scp /etc/elasticsearch/elasticsearch.yml node-11:/tmp/
root@node-11's password:
elasticsearch.yml     
[root@node-11 yum.repos.d]# cp /tmp/elasticsearch.yml /etc/elasticsearch/elasticsearch.yml
cp: overwrite ‘/etc/elasticsearch/elasticsearch.yml’? y
[root@node-11 yum.repos.d]# vim /etc/elasticsearch/elasticsearch.yml

On node-2, edit /etc/elasticsearch/elasticsearch.yml

# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
cluster.name: cluster_elk
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
node.name: node-2
node.master: false
node.data: true
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 10.71.11.2
# Set a custom port for HTTP:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.zen.ping.unicast.hosts: ["host1", "host2"]
discovery.zen.ping.unicast.hosts: ["node-1","node-2","node-11"]
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
#discovery.zen.minimum_master_nodes: 
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------

Modify /etc/elasticsearch/elasticsearch.yml on node-11 in the same way, as sketched below.
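
A sketch of the node-11 settings that differ from the node-1 template, assuming node-11 is a data-only node like node-2 (the original does not state its role explicitly):

cluster.name: cluster_elk
node.name: node-11
node.master: false
node.data: true
network.host: 10.71.11.11
http.port: 9200
discovery.zen.ping.unicast.hosts: ["node-1","node-2","node-11"]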

Start Elasticsearch on node-1

[root@node-1 ~]# systemctl start elasticsearch
[root@node-1 ~]# systemctl status  elasticsearch
● elasticsearch.service - Elasticsearch
   Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2018-04-12 21:11:28 CST; 12s ago
     Docs: http://www.elastic.co
Main PID: 17297 (java)
    Tasks: 67
   Memory: 1.2G
   CGroup: /system.slice/elasticsearch.service
           └─17297 /bin/java -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+AlwaysPre...

Apr 12 21:11:28 node-1 systemd[1]: Started Elasticsearch.
Apr 12 21:11:28 node-1 systemd[1]: Starting Elasticsearch...

Check the cluster log

[root@node-1 ~]# tail -f /var/log/elasticsearch/cluster_elk.log
[2018-04-12T21:11:34,704] [INFO ] [o.e.d.DiscoveryModule    ] [node-1] using discovery type [zen]
[2018-04-12T21:11:35,187] [INFO ] [o.e.n.Node               ]  [node-1] initialized
[2018-04-12T21:11:35,187] [INFO ] [o.e.n.Node               ] [node-1] starting ...
[2018-04-12T21:11:35,370] [INFO ] [o.e.t.TransportService   ] [node-1] publish_address {10.71.11.1:9300}, bound_addresses {10.71.11.1:9300}
[2018-04-12T21:11:35,380] [INFO ] [o.e.b.BootstrapChecks    ] [node-1] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2018-04-12T21:11:38,423] [INFO ] [o.e.c.s.MasterService    ] [node-1] zen-disco-elected-as-master ([0] nodes joined), reason: new_master {node-1}{PVxBZmElTXOHkzavFVFEnA}{xsTmwB7MTwu-8cwwALyTPA}{10.71.11.1}{10.71.11.1:9300}
[2018-04-12T21:11:38,428] [INFO ] [o.e.c.s.ClusterApplierService] [node-1] new_master {node-1}{PVxBZmElTXOHkzavFVFEnA}{xsTmwB7MTwu-8cwwALyTPA}{10.71.11.1}{10.71.11.1:9300}, reason: apply cluster state (from master [master {node-1}{PVxBZmElTXOHkzavFVFEnA}{xsTmwB7MTwu-8cwwALyTPA}{10.71.11.1}{10.71.11.1:9300} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)]])
[2018-04-12T21:11:38,442] [INFO ] [o.e.h.n.Netty4HttpServerTransport] [node-1] publish_address {10.71.11.1:9200}, bound_addresses {10.71.11.1:9200}
[2018-04-12T21:11:38,442] [INFO ] [o.e.n.Node               ] [node-1] started
[2018-04-12T21:11:38,449] [INFO ] [o.e.g.GatewayService     ] [node-1] recovered [0] indices into cluster_state

Check the cluster health on the master node

[root@node-1 ~]# curl '10.71.11.1:9200/_cluster/health?pretty'
{
  "cluster_name" : "cluster_elk",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 0,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
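
At this point only node-1 has joined (number_of_nodes is 1 and there are no data nodes yet). Assuming Elasticsearch has been installed and configured on node-2 and node-11 as described above, start it there too and re-run the health check; number_of_nodes should then report 3:

[root@node-2 ~]# systemctl start elasticsearch
[root@node-11 ~]# systemctl start elasticsearch
[root@node-1 ~]# curl '10.71.11.1:9200/_cluster/health?pretty'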

View the detailed cluster state on node-1


[root@node-1 ~]# curl '10.71.11.1:9200/_cluster/state?pretty'
{
  "cluster_name" : "cluster_elk",
  "compressed_size_in_bytes" : 226,
  "version" : 2,
  "state_uuid" : "-LLN7fEYQJiKZSLqitdOvQ",
  "master_node" : "PVxBZmElTXOHkzavFVFEnA",
  "blocks" : { },
  "nodes" : {
    "PVxBZmElTXOHkzavFVFEnA" : {
      "name" : "node-1",
      "ephemeral_id" : "xsTmwB7MTwu-8cwwALyTPA",
      "transport_address" : "10.71.11.1:9300",
      "attributes" : { }
    }
  },
  "metadata" : {
    "cluster_uuid" : "LaaRmRfRTfOY-ApuNz_nfA",
    "templates" : { },
    "indices" : { },
    "index-graveyard" : {
      "tombstones" : [ ]
    }
  },
  "routing_table" : {
    "indices" : { }
  },
  "routing_nodes" : {
    "unassigned" : [ ],
    "nodes" : { }
  },
  "snapshots" : {
    "snapshots" : [ ]
  },
  "restore" : {
    "snapshots" : [ ]
  },
  "snapshot_deletions" : {
    "snapshot_deletions" : [ ]
  }
}

Install Kibana

Note: perform the following steps on node-1.

yum install -y kibana

Note: installing through yum is relatively slow, so the RPM package is used instead.

Download kibana-6.2.3-x86_64.rpm, upload it to node-1, and install Kibana

https://www.elastic.co/downloads/kibana
[root@node-1 ~]# rpm -ivh kibana-6.2.3-x86_64.rpm
Preparing...                          ################################# [100%]
    package kibana-6.2.3-1.x86_64 is already installed

Edit /etc/kibana/kibana.yml

server.port: 5601    ## Listening port; Kibana listens on 5601 by default

server.host: "10.71.11.1"    ## Host name or IP to serve Kibana on. Note that without the X-Pack component you cannot set a Kibana login user and password, so if this is a public IP anyone can reach Kibana; if you bind an internal IP and port here and still need access from the public network, you can put an nginx reverse proxy in front of Kibana

elasticsearch.url: "http://10.71.11.1:9200"    ## How Kibana talks to Elasticsearch

logging.dest: /var/log/kibana.log    ## By default Kibana logs to /var/log/messages; here the log is redirected to a custom path, /var/log/kibana.log

Start the Kibana service

[root@node-1 ~]# systemctl start  kibana
[root@node-1 ~]# ps aux |grep kibana
kibana     650  109  0.0 944316 99684 ?        Rsl  10:59   0:02 /usr/share/kibana/bin/../node/bin/node --no-warnings /usr/share/kibana/bin/../src/cli -c /etc/kibana/kibana.yml
root       659  0.0  0.0 112660   976 pts/6    S+   10:59   0:00 grep --color=auto kib

Access Kibana in a browser: http://10.71.11.1:5601/
(Screenshot: the Kibana web interface)
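
From a terminal, a quick reachability check is a sketch like the following (Kibana serves plain HTTP on the configured host and port):

curl -I http://10.71.11.1:5601/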

Install Logstash

Note: unless otherwise stated, the following steps are performed on node-2.

Download logstash-6.2.3.rpm and upload it to node-2

https://www.elastic.co/downloads/logstash

Install the Logstash package

[root@node-2 ~]# ls logstash-6.2.3.rpm
logstash-6.2.3.rpm
[root@node-2 ~]# rpm -ivh logstash-6.2.3.rpm
Preparing...                          ################################# [100%]
Updating / installing...
   1:logstash-1:6.2.3-1               ################################# [100%]
Using provided startup.options file: /etc/logstash/startup.options
Successfully created system startup script for Logstash

Configure Logstash to collect syslog logs

Edit /etc/logstash/conf.d/syslog.conf

input {
  syslog {
    type => "system-syslog"
    port => 10514
  }
}
output {
  stdout {
    codec => rubydebug
  }
}

Check the configuration file for syntax errors


[root@node-2 ~]# cd /usr/share/logstash/bin/
[root@node-2 bin]#  ./logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/syslog.conf --config.test_and_exit
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
Configuration OK

Parameter notes:
--path.settings /etc/logstash/   points Logstash at its settings directory
-f   specifies the custom pipeline configuration file

Check whether port 10514 is listening

Edit /etc/rsyslog.conf and add the following line under the #### RULES #### section (a single @ forwards over UDP, @@ forwards over TCP):

[root@node-2 ~]# vi /etc/rsyslog.conf
*.*  @@10.71.11.2:10514

(Screenshot: the forwarding rule in rsyslog.conf)

After running the Logstash start command, the terminal does not return to the shell prompt; this is expected, because the pipeline defined in /etc/logstash/conf.d/syslog.conf prints incoming events to stdout. The start command is sketched below.
(Screenshot: Logstash running in the foreground)
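
A sketch of the foreground start command, which is the same invocation as the syntax check above without --config.test_and_exit:

[root@node-2 bin]# ./logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/syslog.conf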

Now open a second SSH session to node-2 and restart rsyslog.service there

[root@node-2 ~]# systemctl restart rsyslog.service

After running ssh node-2 in the new terminal (which generates fresh syslog entries), the other node-2 terminal, where Logstash is running, prints the corresponding log events, which confirms that Logstash is collecting system logs successfully.
(Screenshots: the ssh login and the events printed by Logstash)

The following steps are performed on node-2.

Edit /etc/logstash/conf.d/syslog.conf so that events go to Elasticsearch instead of stdout

input {
  syslog {
    type => "system-syslog"
    port => 10514
  }
}
output {
  elasticsearch {
    hosts => ["10.71.11.1:9200"]
    index => "system-syslog-%{+YYYY.MM}"  ## define the index name
  }
}

Verify the configuration syntax again

[root@node-2 bin]# ./logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/syslog.conf --config.test_and_exit

Change the owner of the Logstash data directory to the logstash user

[root@node-2 bin]# chown -R logstash /var/lib/logstash

因?yàn)閘ogstash服務(wù)過程需要一些時(shí)間,當(dāng)服務(wù)啟動成功后,9600和10514端口都會被監(jiān)聽
ELK日志分析系統(tǒng)實(shí)踐
ELK日志分析系統(tǒng)實(shí)踐
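
A sketch of starting the service and verifying the listeners (ss -lntp works equally well if netstat is not installed):

[root@node-2 ~]# systemctl start logstash
[root@node-2 ~]# netstat -lntp | grep -E '9600|10514'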

Note: the Logstash service log path is

/var/log/logstash/logstash-plain.log

Configure the collected logs in Kibana

(Screenshots: creating the index pattern in Kibana)

First, check the data index on Elasticsearch.
Edit /etc/logstash/logstash.yml on node-2 and add the line below, then restart Logstash so the change takes effect (see the sketch that follows):

http.host: "10.71.11.2"

(Screenshot: the http.host setting in logstash.yml)
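
A sketch of applying the change, assuming Logstash is managed by systemd as installed above:

[root@node-2 ~]# systemctl restart logstash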

Run the following command on node-1 to list the index information

[root@node-1 ~]# curl '10.71.11.1:9200/_cat/indices?v'
health status index                 uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   system-syslog-2018.04 3Za0b5rBTYafhsxQ-A1P-g   5   1   

Note: the index has been created successfully, which shows that Elasticsearch and Logstash are communicating properly.
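
To peek at the stored documents themselves, a sketch using the search API with the index name from the listing above:

[root@node-1 ~]# curl '10.71.11.1:9200/system-syslog-2018.04/_search?pretty&size=1'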

Get detailed information about an index. Note that indexname below is a literal placeholder, so Elasticsearch returns a 404; substitute the real index name, as sketched after the output.

[root@node-1 ~]# curl '10.71.11.1:9200/indexname?pretty'
{
  "error" : {
    "root_cause" : [
      {
        "type" : "index_not_found_exception",
        "reason" : "no such index",
        "resource.type" : "index_or_alias",
        "resource.id" : "indexname",
        "index_uuid" : "_na_",
        "index" : "indexname"
      }
    ],
    "type" : "index_not_found_exception",
    "reason" : "no such index",
    "resource.type" : "index_or_alias",
    "resource.id" : "indexname",
    "index_uuid" : "_na_",
    "index" : "indexname"
  },
  "status" : 404
}
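
A sketch of the same request with the real index name substituted:

[root@node-1 ~]# curl '10.71.11.1:9200/system-syslog-2018.04?pretty'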

Configuring nginx log collection
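
A minimal sketch of a Logstash pipeline for nginx access logs, assuming the default log path /var/log/nginx/access.log on node-2 and the same Elasticsearch host as above (the path and index name are illustrative):

input {
  file {
    path => "/var/log/nginx/access.log"
    start_position => "beginning"
    type => "nginx-access"
  }
}
output {
  elasticsearch {
    hosts => ["10.71.11.1:9200"]
    index => "nginx-access-%{+YYYY.MM}"
  }
}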

Collecting logs with Beats
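
A minimal Filebeat sketch for shipping system logs straight to Elasticsearch, assuming Filebeat 6.x is installed from the same Elastic repository (paths and hosts are illustrative):

filebeat.prospectors:
- type: log
  paths:
    - /var/log/messages
output.elasticsearch:
  hosts: ["10.71.11.1:9200"]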


