This article walks through building an ELK log collection system for a Docker cluster. The content is detailed yet easy to follow and the steps are simple, so it should serve as a useful reference. Let's dive in.
ELK overview
ELK consists of three open-source tools: Elasticsearch, Logstash, and Kibana.

Elasticsearch is an open-source distributed search engine. Its features include distributed operation, zero configuration, automatic discovery, automatic index sharding, index replication, a RESTful API, multiple data sources, and automatic search load balancing.

Logstash is a fully open-source tool that collects and filters your logs and stores them for later use.

Kibana is likewise open source and free. It provides a friendly web interface for the log data held in Logstash and Elasticsearch, helping you aggregate, analyze, and search important logs.
Building the ELK platform with Docker
First, edit the Logstash configuration file, logstash.conf:
input {
  udp {
    port => 5000
    type => json
  }
}

filter {
  json {
    source => "message"
  }
}

output {
  elasticsearch {
    hosts => "elasticsearch:9200"  # send Logstash output to Elasticsearch; change this to your own host
  }
}
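Once the stack defined later in this article is running, you can sanity-check this input by firing a test event at it. A minimal sketch, assuming the 5001:5000/udp host port mapping from the docker-compose.yml below (the "level" and "msg" keys are just sample fields):

# Send a one-line JSON log to the Logstash UDP input;
# the json filter above will parse it into fields.
echo '{"level":"info","msg":"hello elk"}' | nc -u -w1 localhost 5001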
Next, we need to adjust how Kibana starts.

Write a startup script that waits until Elasticsearch is running before launching Kibana:
#!/usr/bin/env bash

# Wait for the elasticsearch container to be ready before starting Kibana.
echo "stalling for elasticsearch"
while true; do
    nc -q 1 elasticsearch 9200 2>/dev/null && break
done

echo "starting kibana"
exec kibana
Modify the Dockerfile to build a customized Kibana image:
FROM kibana:latest

RUN apt-get update && apt-get install -y netcat

COPY entrypoint.sh /tmp/entrypoint.sh
RUN chmod +x /tmp/entrypoint.sh

RUN kibana plugin --install elastic/sense

CMD ["/tmp/entrypoint.sh"]
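If you want to build this image by hand rather than through Compose (which does it for you via the "build: kibana/" entry shown later), something like the following works; the tag and directory name are illustrative:

# Build the custom Kibana image from the directory containing
# the Dockerfile and entrypoint.sh
docker build -t my-kibana kibana/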
You can also edit Kibana's configuration file to select the plugins you need:
# Kibana is served by a back end server. This controls which port to use.
port: 5601

# The host to bind the server to.
host: "0.0.0.0"

# The Elasticsearch instance to use for all your queries.
elasticsearch_url: "http://elasticsearch:9200"

# preserve_elasticsearch_host true will send the hostname specified in `elasticsearch`. If you set it to false,
# then the host you use to connect to *this* Kibana instance will be sent.
elasticsearch_preserve_host: true

# Kibana uses an index in Elasticsearch to store saved searches, visualizations
# and dashboards. It will create a new index if it doesn't already exist.
kibana_index: ".kibana"

# If your Elasticsearch is protected with basic auth, these are the user credentials
# used by the Kibana server to perform maintenance on the kibana_index at startup. Your Kibana
# users will still need to authenticate with Elasticsearch (which is proxied through
# the Kibana server)
# kibana_elasticsearch_username: user
# kibana_elasticsearch_password: pass

# If your Elasticsearch requires a client certificate and key
# kibana_elasticsearch_client_crt: /path/to/your/client.crt
# kibana_elasticsearch_client_key: /path/to/your/client.key

# If you need to provide a CA certificate for your Elasticsearch instance, put
# the path of the pem file here.
# ca: /path/to/your/ca.pem

# The default application to load.
default_app_id: "discover"

# Time in milliseconds to wait for Elasticsearch to respond to pings, defaults to
# request_timeout setting
# ping_timeout: 1500

# Time in milliseconds to wait for responses from the back end or Elasticsearch.
# This must be > 0
request_timeout: 300000

# Time in milliseconds for Elasticsearch to wait for responses from shards.
# Set to 0 to disable.
shard_timeout: 0

# Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying
# startup_timeout: 5000

# Set to false to have a complete disregard for the validity of the SSL
# certificate.
verify_ssl: true

# SSL for outgoing requests from the Kibana server (PEM formatted)
# ssl_key_file: /path/to/your/server.key
# ssl_cert_file: /path/to/your/server.crt

# Set the path to where you would like the process id file to be created.
# pid_file: /var/run/kibana.pid

# If you would like to send the log output to a file you can set the path below.
# This will also turn off the stdout log output.
log_file: ./kibana.log

# Plugins that are included in the build, and no longer found in the plugins/ folder
bundled_plugin_ids:
  - plugins/dashboard/index
  - plugins/discover/index
  - plugins/doc/index
  - plugins/kibana/index
  - plugins/markdown_vis/index
  - plugins/metric_vis/index
  - plugins/settings/index
  - plugins/table_vis/index
  - plugins/vis_types/index
  - plugins/visualize/index
Now let's write a docker-compose.yml to make bringing up the stack easier.

Adjust the ports to your needs and the configuration file paths to match your directory layout. The stack as a whole is fairly resource-hungry, so pick a reasonably well-provisioned machine.
elasticsearch:
  image: elasticsearch:latest
  command: elasticsearch -Des.network.host=0.0.0.0
  ports:
    - "9200:9200"
    - "9300:9300"
logstash:
  image: logstash:latest
  command: logstash -f /etc/logstash/conf.d/logstash.conf
  volumes:
    - ./logstash/config:/etc/logstash/conf.d
  ports:
    - "5001:5000/udp"
  links:
    - elasticsearch
kibana:
  build: kibana/
  volumes:
    - ./kibana/config/:/opt/kibana/config/
  ports:
    - "5601:5601"
  links:
    - elasticsearch
With that in place, a single command starts the whole ELK stack:

docker-compose up -d
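To confirm everything came up, you can check the container status and ping Elasticsearch directly; for example:

# List the Compose-managed containers and their state
docker-compose ps

# Elasticsearch should answer with its cluster metadata on the mapped port
curl http://localhost:9200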
Visit Kibana on port 5601, as configured above, to see whether it started successfully.
Collecting Docker logs with logspout
Next we use logspout to collect the Docker logs, modifying the logspout image to fit our needs.

Write the configuration file modules.go:
package main

import (
    _ "github.com/looplab/logspout-logstash"
    _ "github.com/gliderlabs/logspout/transports/udp"
)
Then write the Dockerfile:
FROM gliderlabs/logspout:latest
COPY ./modules.go /src/modules.go
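A rebuild sketch; the tag matches the image name used in the run command below:

# Build the customized logspout image (run from the directory that
# holds this Dockerfile and modules.go)
docker build -t jayqqaa12/logspout .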
Once the image is rebuilt, run it on each node:
docker run -d --name="logspout" \
  --volume=/var/run/docker.sock:/var/run/docker.sock \
  jayqqaa12/logspout logstash://your-logstash-address
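For example, with a concrete (purely illustrative) address, pointing at the host-side UDP port 5001 that docker-compose mapped to Logstash:

# The IP below is hypothetical; substitute the host running Logstash.
docker run -d --name="logspout" \
  --volume=/var/run/docker.sock:/var/run/docker.sock \
  jayqqaa12/logspout logstash://192.168.1.100:5001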
Now open Kibana and you will see the collected Docker logs.
Note that containers must write their logs to the console (stdout/stderr); only that output can be collected.
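As a quick illustration (the image and container name are arbitrary), a container whose process writes to stdout will show up in Kibana:

# busybox loop that logs a line to stdout every 5 seconds;
# logspout forwards it to Logstash automatically
docker run -d --name stdout-demo busybox sh -c \
  'while true; do echo "hello from busybox"; sleep 5; done'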
That covers building an ELK log collection system for a Docker cluster. Thanks for reading, and I hope you found it useful.