ELK (Elasticsearch, Logstash, Kibana) and Filebeat: Deployment and Practice

Published: 2020-08-05 20:41:05 · Author: dyc2005 · Category: System Operations

1. About ELK
ELK stands for:
Elasticsearch:
a distributed, highly scalable, near-real-time search and analytics engine, commonly abbreviated "es".
Logstash:
an open-source server-side data processing pipeline that ingests data from multiple sources simultaneously, transforms it, and then ships it to your favorite "stash", such as Elasticsearch.
Kibana:
an open-source analytics and visualization platform designed for Elasticsearch. You can use Kibana to search, view, and interact with the data stored in Elasticsearch indices, and easily perform advanced analysis and present the results as charts.
These three components together are what is usually called ELK.

2. Quick ELK deployment and configuration
1) Deployment environment:
CentOS 7; this article is based on the 7.x packages.
172.16.0.213 elasticsearch
172.16.0.217 elasticsearch
172.16.0.219 elasticsearch kibana
Kibana only needs to be deployed on one of the nodes.
2) Configure the official yum repository
Configure the repo on all three hosts:

$ cat /etc/yum.repos.d/elast.repo
[elasticsearch-7.x]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
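
Optionally, import the GPG key up front so the first install does not stop to prompt for it (plain rpm usage, not specific to this setup):

$ rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch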

3) Installation
$ cat /etc/hosts
172.16.0.213 ickey-elk-213
172.16.0.217 ickey-elk-217
172.16.0.219 ickey-elk-219
$ yum install elasticsearch -y

4) Configuration

$ cat /etc/elasticsearch/elasticsearch.yml
cluster.name: elk_test          ### cluster name
node.name: ickey-elk-217        ### node name; set per node
node.master: true
node.data: true
path.data: /var/log/elasticsearch/data
path.logs: /var/log/elasticsearch/logs
network.host: 172.16.0.217      ### this node's IP
transport.tcp.port: 9300
transport.tcp.compress: true
http.port: 9200
http.max_content_length: 100mb
bootstrap.memory_lock: true
discovery.seed_hosts: ["172.16.0.213","172.16.0.217","172.16.0.219"]
cluster.initial_master_nodes: ["172.16.0.213","172.16.0.217","172.16.0.219"]
gateway.recover_after_nodes: 2
gateway.recover_after_time: 5m
gateway.expected_nodes: 3
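
Note: with bootstrap.memory_lock: true, the elasticsearch service must also be allowed to lock memory or it will refuse to start. A minimal sketch using standard systemd mechanics:

$ systemctl edit elasticsearch
# add the following two lines in the editor that opens:
[Service]
LimitMEMLOCK=infinity
$ systemctl daemon-reload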

Adjust the JVM heap allocated to elasticsearch in /etc/elasticsearch/jvm.options:
-Xms4g
-Xmx4g
-Xms and -Xmx set the initial and maximum heap size and should be kept equal; as a rule of thumb the heap should be no more than about half of system memory.
Now start elasticsearch:
$ systemctl start elasticsearch
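
Once all three nodes are up, it is worth confirming that they actually formed a cluster; a quick check against any node with the standard cat APIs:

$ curl http://172.16.0.213:9200/_cat/nodes?v
$ curl http://172.16.0.213:9200/_cat/health?v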

5) Install kibana
Install it on 219 only:
$ yum install kibana -y
Configuration:

$ cat /etc/kibana/kibana.yml | egrep -v "(^$|^#)"
server.port: 5601
server.host: "172.16.0.219"
server.name: "ickey-elk-219"
elasticsearch.hosts: ["http://172.16.0.213:9200","http://172.16.0.217:9200","http://172.16.0.219:9200"]
elasticsearch.username: "kibana"
elasticsearch.password: "pass"
elasticsearch.requestTimeout: 40000
logging.dest: /var/log/kibana/kibana.log   # log destination; by default Kibana logs to /var/log/messages
i18n.locale: "zh-CN"     # Chinese UI
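
Then start it and confirm it is listening (standard systemd/ss usage; the log path above must exist and be writable by the kibana user):

$ systemctl enable kibana && systemctl start kibana
$ ss -lntp | grep 5601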

For detailed configuration options, see:
https://www.elastic.co/guide/cn/kibana/current/settings.html

3. Logstash installation, configuration, and practice
With es (storage and search) and kibana (visualization and search UI) in place, we still need the data collection side, which is handled by logstash and the beats; here we use logstash and filebeat.
Logstash is the heavyweight collector: its configuration is more involved, but it offers many customizable processing features. Besides installation, the common configurations are collected below.
1) Installation
Install via yum, using the same repo as above:
$ yum install logstash -y
Logstash requires a JDK, so first install Java JDK 1.8 or later;
here jdk-8u211-linux-x64.rpm is installed.

$ cat /etc/profile.d/java.sh
export JAVA_HOME=/usr/java/latest
export JAVA_BIN=${JAVA_HOME}/bin
export PATH=${PATH}:${JAVA_HOME}/bin
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export JAVA_HOME JAVA_BIN PATH CLASSPATH
export JRE_HOME=/usr/java/latest
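
Reload the profile and confirm the JDK is picked up:

$ source /etc/profile.d/java.sh
$ java -version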

After installation, run /usr/share/logstash/bin/system-install to generate the service scripts.
On CentOS 6, manage the service with:
initctl status|start|stop|restart logstash
On CentOS 7:
systemctl restart logstash

2) Practice configurations
Collecting nginx logs (run on the nginx server):

$ cat /etc/logstash/conf.d/nginx-172.16.0.14.conf
input {
  file {
    path => ["/var/log/nginx/test.log"]
    codec => json
    sincedb_path => "/var/log/logstash/null"
    discover_interval => 15
    stat_interval => 1
    start_position => "beginning"
  }
}

filter {
  date {
    locale => "en"
    timezone => "Asia/Shanghai"
    match => [ "timestamp", "ISO8601", "yyyy-MM-dd'T'HH:mm:ssZZ" ]
  }
  mutate {
    convert => [ "upstreamtime", "float" ]
  }
  mutate {
    gsub => ["message", "\x", "\\x"]
  }
  if [user_agent] {
    useragent {
      prefix => "remote_"
      source => "user_agent"
    }
  }
  if [request] {
    ruby {
      init => "@kname = ['method1','uri1','verb']"
      code => "new_event = LogStash::Event.new(Hash[@kname.zip(event.get('request').split(' '))])
               new_event.remove('@timestamp')
               new_event.remove('method1')
               event.append(new_event)"
      remove_field => [ "request" ]
    }
  }
  geoip {
    source => "clientRealIp"
    target => "geoip"
    database => "/tmp/GeoLite2-City.mmdb"
    add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
    add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
  }
  mutate {
    convert => [
      "[geoip][coordinates]", "float",
      "upstream_response_time", "float",
      "responsetime", "float",
      "body_bytes_sent", "integer",
      "bytes_sent", "integer"
    ]
  }
}

output {
  elasticsearch {
    hosts => ["172.16.0.219:9200"]
    index => "logstash-nginx-%{+YYYY.MM.dd}"
    workers => 1
    template_overwrite => true
  }
}
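
Before (re)starting logstash, the file can be syntax-checked with logstash's standard test flag:

$ /usr/share/logstash/bin/logstash --config.test_and_exit -f /etc/logstash/conf.d/nginx-172.16.0.14.conf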

Note that this requires the log format in nginx to be configured as JSON:

log_format logstash '{"@timestamp":"$time_iso8601",'
'"@version":"1",'
'"host":"$server_addr",'
'"size":$body_bytes_sent,'
'"domain":"$host",'
'"method":"$request_method",'
'"url":"$uri",'
'"request":"$request",'
'"status":"$status",'
'"referer":"$http_referer",'
'"user_agent":"$http_user_agent",'
'"body_bytes_sent":"$body_bytes_sent",'
'"bytes_sent":"$bytes_sent",'
'"clientRealIp":"$clientRealIp",'
'"forwarded_for":"$http_x_forwarded_for",'
'"responsetime":"$request_time",'
'"upstreamhost":"$upstream_addr",'
'"upstream_response_time":"$upstream_response_time"}';

Configure logstash to receive syslog:

$ cat /etc/logstash/conf.d/rsyslog-tcp.conf
input {
  syslog {
    type => "system-syslog"
    host => "172.16.0.217"
    port => 1514
  }
}

filter {
  if [type] == "system-syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

output {
  if [type] == "system-syslog" {
    elasticsearch {
      hosts => ["172.16.0.217:9200"]
      index => "logstash-%{type}-%{+YYYY.MM.dd}"
      #workers => 1
      template_overwrite => true
    }
  }
}

The clients need the following rsyslog rule:

$ tail -fn 1 /etc/rsyslog.conf
*.* @172.16.0.217:1514
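
Restart rsyslog on the client and send a test message (standard rsyslog/logger usage):

$ systemctl restart rsyslog
$ logger -t elk-test "hello from $(hostname)"

The message should then appear in the logstash-system-syslog-* index.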

Configure collection of hardware (device) syslog:

[yunwei@ickey-elk-217 ~]$ cat /etc/logstash/conf.d/hardware.conf

input {
  syslog {
    type => "hardware-syslog"
    host => "172.16.0.217"
    port => 514
  }
}

filter {
  if [type] == "hardware-syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

output {
  if [type] == "hardware-syslog" {
    elasticsearch {
      hosts => ["172.16.0.217:9200"]
      index => "logstash-%{type}-%{+YYYY.MM.dd}"
    }
  }
}
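
One caveat: ports below 1024 are privileged, and logstash normally runs as the unprivileged logstash user, so binding port 514 directly may fail. A common workaround (a sketch using plain iptables; adapt to your firewall setup) is to listen on an unprivileged port and redirect:

$ iptables -t nat -A PREROUTING -p udp --dport 514 -j REDIRECT --to-ports 1514
$ iptables -t nat -A PREROUTING -p tcp --dport 514 -j REDIRECT --to-ports 1514

After events start flowing, the new index can be confirmed with:

$ curl 'http://172.16.0.217:9200/_cat/indices/logstash-*?v'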

4. Filebeat installation, configuration, and practice

1) Overview
Filebeat was originally derived from the logstash-forwarder source code. In other words, filebeat is the new logstash-forwarder, and it is the first choice for the shipper role in the Elastic Stack.
[Figure from the official docs: the relationship between elasticsearch, logstash, filebeat, kafka, and redis]
2) Installation
Using the same yum repo as above:
$ yum install filebeat -y

3) Configuration: collecting runtime and php-fpm error logs

[root@ickey-app-api-52 yunwei]# cat /etc/filebeat/filebeat.yml
#=========================== Filebeat inputs =============================
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /home/wwwroot/*.ickey.cn/runtime/logs/*.log
  fields:
    type: "runtime"
  json.message_key: log
  json.keys_under_root: true
- type: log
  enabled: true
  paths:
    - /var/log/php-fpm/www-error.log
  fields:
    type: "php-fpm"
#============================= Filebeat modules ===============================
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: true
#==================== Elasticsearch template setting ==========================
setup.template.settings:
  index.number_of_shards: 2

#============================== Kibana =====================================
setup.kibana:
  host: "172.16.0.219:5601"
#============================= Elastic Cloud ==================================
output.elasticsearch:
  hosts: ["172.16.0.213:9200","172.16.0.217:9200","172.16.0.219:9200"]
  indices:
    - index: "php-fpm-log-%{+yyyy.MM.dd}"
      when.equals:
        fields.type: "php-fpm"
    - index: "runtime-log-%{+yyyy.MM.dd}"
      when.equals:
        fields.type: "runtime"
  pipelines:
    - pipeline: "php-error-pipeline"
      when.equals:
        fields.type: "php-fpm"

#================================ Processors =====================================
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~

#================================ Logging =====================================
logging.level: info
logging.to_files: true
logging.files:
  path: /var/log/filebeat
  name: filebeat
  keepfiles: 7
  permissions: 0644
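
Before starting the service, filebeat's built-in checks are handy (standard filebeat subcommands):

$ filebeat test config
$ filebeat test output
$ systemctl enable filebeat && systemctl start filebeat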

Notes:
The php-fpm error.log format looks like this:

[29-Oct-2019 11:33:01 PRC] PHP Fatal error:  Call to a member function getBSECollection() on null in /var/html/wwwroot/framework/Excel5.php on line 917

We need to extract the timestamp, the PHP fatal error message, and the offending file and line number. With logstash this would be done by a grok filter; with filebeat it is handled by an Elasticsearch ingest pipeline: filebeat ships the raw line to Elasticsearch, and the ingest pipeline parses it into the desired fields at index time.
The pipeline is therefore created in Elasticsearch (here on a node such as ickey-elk-213):

[root@ickey-elk-213 ~]# cat phperror-pipeline.json
{
  "description": "php error log pipeline",
  "processors": [
    {
      "grok": {
        "field": "message",
          "patterns": "%{DATA:datatime} PHP .*: %{DATA:errorinfo} in %{DATA:error-url} on line %{NUMBER:error-line}"
      }
    }
  ]
}

Apply it:

curl -H 'Content-Type: application/json' -XPUT 'http://localhost:9200/_ingest/pipeline/php-error-pipeline' -d@phperror-pipeline.json
Query it:
curl -H 'Content-Type: application/json' -XGET 'http://localhost:9200/_ingest/pipeline/php-error-pipeline'
Delete it:
curl -H 'Content-Type: application/json' -XDELETE 'http://localhost:9200/_ingest/pipeline/php-error-pipeline'
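
The pipeline can also be dry-run against a sample document before filebeat uses it, via the standard _simulate API (the sample line is the php-fpm entry shown above):

curl -H 'Content-Type: application/json' -XPOST 'http://localhost:9200/_ingest/pipeline/php-error-pipeline/_simulate' -d '
{
  "docs": [
    { "_source": { "message": "[29-Oct-2019 11:33:01 PRC] PHP Fatal error:  Call to a member function getBSECollection() on null in /var/html/wwwroot/framework/Excel5.php on line 917" } }
  ]
}'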

Collecting database (MySQL error) logs:

filebeat.inputs:
- type: log
  paths:
    - /var/log/mysql/mysql.err
  fields:
    type: "mysqlerr"
  exclude_lines: ['Note']            # drop informational "Note" entries
  multiline.pattern: '^[0-9]{4}.*'   # a new entry starts with a date, e.g. 2019-10-29
  multiline.negate: true
  multiline.match: after             # non-matching lines are appended to the previous entry
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: true
setup.template.settings:
  index.number_of_shards: 2
setup.kibana:
  host: "172.16.0.219:5601"
output.elasticsearch:
  hosts: ["172.16.0.213:9200"]
  indices:
    - index: "mysql-err-%{+yyyy.MM.dd}"
      when.equals:
        fields.type: "mysqlerr"
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
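
The multiline stanza is what glues stack traces to their parent entry. Illustratively, given a made-up mysql.err excerpt:

2019-10-29T11:33:01.123456Z 0 [ERROR] InnoDB: Cannot open datafile ./ibdata1
InnoDB: tablespace may be corrupt
2019-10-29T11:33:02.000000Z 0 [Warning] Aborting

the middle line does not start with a four-digit year, so it is appended to the preceding [ERROR] event instead of being indexed as a separate document.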

5. Install and configure elasticsearch-head

elasticsearch-head is an open-source web UI for graphically viewing and operating on the indices in es.
1) Installation

$ git clone https://github.com/mobz/elasticsearch-head.git
$ cd elasticsearch-head
$ npm install grunt --save --registry=https://registry.npm.taobao.org
└─┬ grunt@1.0.1
..... output omitted ....
├── path-is-absolute@1.0.1
└── rimraf@2.2.8
npm WARN elasticsearch-head@0.0.0 license should be a valid SPDX license expression
$ npm install --registry=https://registry.npm.taobao.org
npm WARN deprecated http2@3.3.7: Use the built-in module in node 9.0.0 or newer, instead
[ ............] - fetchMetadata: verb afterAdd /root/.npm/debug/2.6.9/package/package.json written

This step takes a while to finish.
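
For the head UI to talk to the cluster, Elasticsearch usually needs CORS enabled; a minimal addition to /etc/elasticsearch/elasticsearch.yml on each node (standard ES settings), followed by a restart:

http.cors.enabled: true
http.cors.allow-origin: "*"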

2) Set it up as a service that starts on boot

$ cat /usr/bin/elasticsearch-head
#!/bin/bash
# chkconfig: - 25 75
# description: starts and stops the elasticsearch-head

data="cd /usr/local/src/elasticsearch-head/; nohup npm run start > /dev/null 2>&1 & "

START() {
    eval $data && echo -e "elasticsearch-head start\033[32m ok\033[0m"
}

STOP() {
    ps -ef | grep grunt | grep -v "grep" | awk '{print $2}' | xargs kill -s 9 > /dev/null && echo -e "elasticsearch-head stop\033[32m ok\033[0m"
}

STATUS() {
    PID=$(ps aux | grep grunt | grep -v grep | awk '{print $2}')
}

case "$1" in
    start)
        START
        ;;
    stop)
        STOP
        ;;
    restart)
        STOP
        sleep 3
        START
        ;;
    *)
        echo "Usage: elasticsearch-head (start|stop|restart)"
        ;;
esac
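
A sketch of wiring it in, assuming the head checkout lives at /usr/local/src/elasticsearch-head as the script expects (chkconfig wants the script under /etc/init.d):

$ chmod +x /usr/bin/elasticsearch-head
$ cp /usr/bin/elasticsearch-head /etc/init.d/elasticsearch-head
$ chkconfig --add elasticsearch-head && chkconfig elasticsearch-head on
$ elasticsearch-head start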

Visit:
http://172.16.0.219:9100, as shown:
[Figure: the elasticsearch-head web UI showing the cluster's nodes and indices]
