I. Environment
1. Server information
172.21.184.43 kafka、zk
172.21.184.44 kafka、zk
172.21.184.45 kafka、zk
172.21.244.7 ansible
2. Software versions
OS: CentOS Linux release 7.5.1804 (Core)
Kafka: kafka_2.11-2.2.0
ZooKeeper: 3.4.8
Ansible: 2.7.10
II. Configuration
1. Write the playbook configuration files. Run tree first to see the overall directory layout:
tree
.
├── kafka
│   ├── group_vars
│   │   └── kafka
│   ├── hosts
│   ├── kafkainstall.yml
│   └── templates
│       ├── server.properties-1.j2
│       ├── server.properties-2.j2
│       ├── server.properties-3.j2
│       └── server.properties.j2
└── zookeeper
    ├── group_vars
    │   └── zook
    ├── hosts
    ├── templates
    │   └── zoo.cfg.j2
    └── zooKeeperinstall.yml
2. Create the directories
mkdir /chj/ansibleplaybook/kafka/group_vars -p
mkdir /chj/ansibleplaybook/kafka/templates
mkdir /chj/ansibleplaybook/zookeeper/group_vars -p
mkdir /chj/ansibleplaybook/zookeeper/templates
3. Write the ZooKeeper deployment files
A. ZooKeeper group_vars file
vim /chj/ansibleplaybook/zookeeper/group_vars/zook
---
zk01server: 172.21.184.43
zk02server: 172.21.184.44
zk03server: 172.21.184.45
zookeeper_group: work
zookeeper_user: work
zookeeper_dir: /chj/data/zookeeper
zookeeper_appdir: /chj/app/zookeeper
zk01myid: 43
zk02myid: 44
zk03myid: 45
B. ZooKeeper template file
vim /chj/ansibleplaybook/zookeeper/templates/zoo.cfg.j2
tickTime=2000
initLimit=500
syncLimit=20
dataDir={{ zookeeper_dir }}
dataLogDir=/chj/data/log/zookeeper/
clientPort=10311
maxClientCnxns=1000000
server.{{ zk01myid }}={{ zk01server }}:10301:10331
server.{{ zk02myid }}={{ zk02server }}:10302:10332
server.{{ zk03myid }}={{ zk03server }}:10303:10333
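The server.{{ zkNNmyid }} IDs must match the myid each node writes to {{ zookeeper_dir }}/myid; the playbook in section D derives myid from the last octet of the host IP, which is why the group_vars use 43/44/45. A quick manual check on one node (a sketch, assuming hostname -i returns the node's 172.21.184.x address):
# On 172.21.184.43 both commands should print 43 (after the playbook has run)
hostname -i | cut -d '.' -f 4
cat /chj/data/zookeeper/myid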
C. ZooKeeper hosts file
vim /chj/ansibleplaybook/zookeeper/hosts
[zook]
172.21.184.43
172.21.184.44
172.21.184.45
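Before running the playbook, it is worth confirming that Ansible can reach the zook group and escalate privileges (a quick check, assuming key-based SSH and sudo are already set up):
cd /chj/ansibleplaybook/zookeeper/
ansible -i hosts zook -m ping -b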
D. ZooKeeper installation playbook
vim /chj/ansibleplaybook/zookeeper/zooKeeperinstall.yml
---
- hosts: "zook"
  gather_facts: no
  tasks:
    - name: Create zookeeper group
      group:
        name: '{{ zookeeper_group }}'
        state: present
      tags:
        - zookeeper_user
    - name: Create zookeeper user
      user:
        name: '{{ zookeeper_user }}'
        group: '{{ zookeeper_group }}'
        state: present
        createhome: no
      tags:
        - zookeeper_group
    - name: Check whether ZooKeeper is already installed
      stat:
        path: /chj/app/zookeeper
      register: node_files
    - debug:
        msg: "{{ node_files.stat.exists }}"
    - name: Install the JDK if no Java environment exists
      shell: if [ ! -f "/usr/local/jdk/bin/java" ];then echo "installing JDK"; curl -o /usr/local/jdk1.8.0_121.tar.gz http://download.pkg.chj.cloud/chj_jdk1.8.0_121.tar.gz; tar xf /usr/local/jdk1.8.0_121.tar.gz -C /usr/local/; cd /usr/local/; mv /usr/local/jdk1.8.0_121 jdk; ln -s /usr/local/jdk/bin/java /sbin/java; else echo "JDK already installed" ;fi
    - name: Download and unpack chj_zookeeper
      unarchive: src=http://ops.chehejia.com:9090/pkg/zookeeper.tar.gz dest=/chj/app/ copy=no
      when: not node_files.stat.exists
      register: unarchive_msg
    - debug:
        msg: "{{ unarchive_msg }}"
    - name: Create the ZooKeeper data and log directories
      shell: if [ ! -d "/chj/data/zookeeper" ] && [ ! -d "/chj/data/log/zookeeper" ];then echo "creating directories"; mkdir -p /chj/data/{zookeeper,log/zookeeper} ; else echo "directories already exist" ;fi
    - name: Fix directory ownership
      shell: chown work:work -R /chj/{data,app}
      when: not node_files.stat.exists
    - name: Write the ZooKeeper myid file
      shell: "hostname -i| cut -d '.' -f 4|awk '{print $1}' > /chj/data/zookeeper/myid"
    - name: Config zookeeper service
      template:
        src: zoo.cfg.j2
        dest: /chj/app/zookeeper/conf/zoo.cfg
        mode: 0755
    - name: Reload systemd
      command: systemctl daemon-reload
    - name: Restart ZooKeeper service
      shell: sudo su - work -c "/chj/app/zookeeper/console start"
    - name: Status ZooKeeper service
      shell: "sudo su - work -c '/chj/app/zookeeper/console status'"
      register: zookeeper_status_result
      ignore_errors: True
    - debug:
        msg: "{{ zookeeper_status_result }}"
4. Write the Kafka deployment files
A. Kafka group_vars file
vim /chj/ansibleplaybook/kafka/group_vars/kafka
---
kafka01: 172.21.184.43
kafka02: 172.21.184.44
kafka03: 172.21.184.45
kafka_group: work
kafka_user: work
log_dir: /chj/data/kafka
brokerid1: 1
brokerid2: 2
brokerid3: 3
zk_addr: 172.21.184.43:10311,172.21.184.44:10311,172.21.184.45:10311/kafka
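Note that zk_addr ends with the /kafka chroot, so all of Kafka's znodes live under /kafka rather than the ZooKeeper root. Once the brokers are up, the registered broker ids can be checked there (a sketch, assuming the ZooKeeper tarball keeps the standard bin/zkCli.sh layout under /chj/app/zookeeper):
# Should list [1, 2, 3] once all three brokers have registered
/chj/app/zookeeper/bin/zkCli.sh -server 172.21.184.43:10311 ls /kafka/brokers/ids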
B. Kafka template files
vim /chj/ansibleplaybook/kafka/templates/server.properties-1.j2
# In server.properties-2.j2 and server.properties-3.j2, set broker.id to brokerid2 and brokerid3 respectively
broker.id={{ brokerid1 }}
auto.create.topics.enable=false
auto.leader.rebalance.enable=true
broker.rack=/default-rack
compression.type=snappy
controlled.shutdown.enable=true
controlled.shutdown.max.retries=3
controlled.shutdown.retry.backoff.ms=5000
controller.message.queue.size=10
controller.socket.timeout.ms=30000
default.replication.factor=1
delete.topic.enable=true
fetch.message.max.bytes=10485760
fetch.purgatory.purge.interval.requests=10000
leader.imbalance.check.interval.seconds=300
leader.imbalance.per.broker.percentage=10
# In server.properties-2.j2 and server.properties-3.j2, use kafka02 and kafka03 for host.name and listeners
host.name={{ kafka01 }}
listeners=PLAINTEXT://{{ kafka01 }}:9092
log.cleanup.interval.mins=1200
log.dirs={{ log_dir }}
log.index.interval.bytes=4096
log.index.size.max.bytes=10485760
log.retention.bytes=-1
log.retention.hours=168
log.roll.hours=168
log.segment.bytes=1073741824
message.max.bytes=10000000
min.insync.replicas=1
num.io.threads=8
num.network.threads=3
num.partitions=1
num.recovery.threads.per.data.dir=1
num.replica.fetchers=1
offset.metadata.max.bytes=4096
offsets.commit.required.acks=-1
offsets.commit.timeout.ms=5000
offsets.load.buffer.size=5242880
offsets.retention.check.interval.ms=600000
offsets.retention.minutes=86400000
offsets.topic.compression.codec=0
offsets.topic.num.partitions=50
offsets.topic.replication.factor=3
transaction.state.log.replication.factor=3
transaction.state.log.min.isr=1
offsets.topic.segment.bytes=104857600
port=9092
producer.purgatory.purge.interval.requests=10000
queued.max.requests=500
replica.fetch.max.bytes=10485760
replica.fetch.min.bytes=1
replica.fetch.wait.max.ms=500
replica.high.watermark.checkpoint.interval.ms=5000
replica.lag.max.messages=4000
replica.lag.time.max.ms=10000
replica.socket.receive.buffer.bytes=65536
replica.socket.timeout.ms=30000
sasl.enabled.mechanisms=GSSAPI
sasl.mechanism.inter.broker.protocol=GSSAPI
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
socket.send.buffer.bytes=102400
zookeeper.connect={{ zk_addr }}
zookeeper.connection.timeout.ms=25000
zookeeper.session.timeout.ms=30000
zookeeper.sync.time.ms=2000
group.initial.rebalance.delay.ms=10000
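After the playbook has rendered the template on each broker, the per-node values are easy to spot-check (for example, 172.21.184.44 should show broker.id=2 and its own listener address):
# Run on each broker; only broker.id and the listener address should differ between nodes
grep -E '^(broker\.id|listeners|zookeeper\.connect)=' /chj/app/kafka/config/server.properties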
C. Kafka hosts file
vim /chj/ansibleplaybook/kafka/hosts
[kafka]
172.21.184.43
172.21.184.44
172.21.184.45
D. Kafka installation playbook
vim /chj/ansibleplaybook/kafka/kafkainstall.yml
---
- hosts: "kafka"
  gather_facts: yes
  tasks:
    - name: Obtain the eth0 ipv4 address
      debug: msg={{ ansible_default_ipv4.address }}
      when: ansible_default_ipv4.alias == "eth0"
    - name: Create kafka group
      group:
        name: '{{ kafka_group }}'
        state: present
      tags:
        - kafka_user
    - name: Create kafka user
      user:
        name: '{{ kafka_user }}'
        group: '{{ kafka_group }}'
        state: present
        createhome: no
      tags:
        - kafka_group
    - name: Check whether Kafka is already installed
      stat:
        path: /chj/app/kafka
      register: node_files
    - debug:
        msg: "{{ node_files.stat.exists }}"
    - name: Install the JDK if no Java environment exists
      shell: if [ ! -f "/usr/local/jdk/bin/java" ];then echo "installing JDK"; curl -o /usr/local/jdk1.8.0_121.tar.gz http://download.pkg.chj.cloud/chj_jdk1.8.0_121.tar.gz; tar xf /usr/local/jdk1.8.0_121.tar.gz -C /usr/local/; cd /usr/local/; mv /usr/local/jdk1.8.0_121 jdk; ln -s /usr/local/jdk/bin/java /sbin/java; else echo "JDK already installed" ;fi
    - name: Download and unpack kafka
      unarchive: src=http://ops.chehejia.com:9090/pkg/kafka.tar.gz dest=/chj/app/ copy=no
      when: not node_files.stat.exists
      register: unarchive_msg
    - debug:
        msg: "{{ unarchive_msg }}"
    - name: Create the Kafka data and log directories
      shell: if [ ! -d "/chj/data/kafka" ] && [ ! -d "/chj/data/log/kafka" ];then echo "creating directories"; mkdir -p /chj/data/{kafka,log/kafka} ; else echo "directories already exist" ;fi
    - name: Fix directory ownership
      shell: chown work:work -R /chj/{data,app}
      when: not node_files.stat.exists
    - name: Config kafka01 service
      template:
        src: server.properties-1.j2
        dest: /chj/app/kafka/config/server.properties
        mode: 0755
      when: ansible_default_ipv4.address == "172.21.184.43"
    - name: Config kafka02 service
      template:
        src: server.properties-2.j2
        dest: /chj/app/kafka/config/server.properties
        mode: 0755
      when: ansible_default_ipv4.address == "172.21.184.44"
    - name: Config kafka03 service
      template:
        src: server.properties-3.j2
        dest: /chj/app/kafka/config/server.properties
        mode: 0755
      when: ansible_default_ipv4.address == "172.21.184.45"
    - name: Reload systemd
      command: systemctl daemon-reload
    - name: Restart kafka service
      shell: sudo su - work -c "/chj/app/kafka/console start"
    - name: Status kafka service
      shell: "sudo su - work -c '/chj/app/kafka/console status'"
      register: kafka_status_result
      ignore_errors: True
    - debug:
        msg: "{{ kafka_status_result }}"
PS: The install pulls binary packages for the JDK, Kafka, and ZooKeeper; replace the download URLs above with addresses you can actually reach.
III. Deployment
1. Deploy the ZooKeeper cluster first
cd /chj/ansibleplaybook/zookeeper/
ansible-playbook -i hosts zooKeeperinstall.yml -b
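Before moving on to Kafka, it helps to confirm the quorum actually formed; each node should report Mode: leader or Mode: follower (a quick check, assuming nc is installed; 10311 is the clientPort from zoo.cfg.j2):
for ip in 172.21.184.43 172.21.184.44 172.21.184.45; do
  echo -n "$ip -> "; echo srvr | nc $ip 10311 | grep Mode
done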
2. Then deploy the Kafka cluster
cd /chj/ansibleplaybook/kafka/
ansible-playbook -i hosts kafkainstall.yml -b
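Since auto.create.topics.enable is set to false in server.properties, topics have to be created explicitly. A simple smoke test (a sketch, assuming the Kafka tarball keeps the standard bin/ layout under /chj/app/kafka; smoke-test is a throwaway topic name):
# Create a replicated test topic and check that all three brokers hold a replica
/chj/app/kafka/bin/kafka-topics.sh --bootstrap-server 172.21.184.43:9092 --create --topic smoke-test --partitions 3 --replication-factor 3
/chj/app/kafka/bin/kafka-topics.sh --bootstrap-server 172.21.184.43:9092 --describe --topic smoke-test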