
Deploying a Kubernetes v1.16.3 High-Availability Cluster with kubeadm

Published: 2020-07-06 20:30:01    Author: 羊皮裘老頭    Category: Cloud Computing

1. Environment

cat /etc/hosts
192.168.10.11  node1        # master1
192.168.10.14  node4        # master2
192.168.10.15  node5        # master3

Note: this was done on my own virtual machines, so only the master nodes were actually deployed. The commands that worker nodes need are listed as well; just run them where indicated.
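To make these hostnames resolvable everywhere, the entries can simply be appended to /etc/hosts on every node; a minimal sketch using this article's IPs and names:

cat >> /etc/hosts <<EOF
192.168.10.11  node1
192.168.10.14  node4
192.168.10.15  node5
EOF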


2. Environment configuration (run on both master and worker nodes)

1. Configure the Aliyun yum repository (optional)

curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo

rm -rf /var/cache/yum && yum makecache

2. Install dependency packages

yum install -y epel-release conntrack ipvsadm ipset jq sysstat curl iptables libseccomp

3. Disable the firewall

systemctl stop firewalld && systemctl disable firewalld

iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat && iptables -P FORWARD ACCEPT

4. Disable SELinux

setenforce 0

sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

5. Disable the swap partition

swapoff -a

sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

6. Load kernel modules

cat > /etc/sysconfig/modules/ipvs.modules <<EOF

#!/bin/bash

modprobe -- ip_vs

modprobe -- ip_vs_rr

modprobe -- ip_vs_wrr

modprobe -- ip_vs_sh

modprobe -- nf_conntrack_ipv4

modprobe -- br_netfilter

EOF

chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules
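A quick check that the modules actually loaded (note: on kernel 4.19 and later, nf_conntrack_ipv4 has been merged into nf_conntrack):

lsmod | grep -e ip_vs -e nf_conntrack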

7. Set kernel parameters

cat << EOF | tee /etc/sysctl.d/k8s.conf

net.bridge.bridge-nf-call-iptables=1

net.bridge.bridge-nf-call-ip6tables=1

net.ipv4.ip_forward=1

net.ipv4.tcp_tw_recycle=0

vm.swappiness=0

vm.overcommit_memory=1

vm.panic_on_oom=0

fs.inotify.max_user_watches=89100

fs.file-max=52706963

fs.nr_open=52706963

net.ipv6.conf.all.disable_ipv6=1

net.netfilter.nf_conntrack_max=2310720

EOF

sysctl -p /etc/sysctl.d/k8s.conf
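A quick sanity check that the bridge settings took effect (these keys only exist once br_netfilter is loaded):

sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
# expected:
# net.bridge.bridge-nf-call-iptables = 1
# net.ipv4.ip_forward = 1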

8. Install Docker

yum install -y yum-utils device-mapper-persistent-data lvm2

yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

yum makecache fast

yum install -y docker-ce-18.09.6

systemctl start docker

systemctl enable docker


After installation, adjust the service startup command, otherwise Docker will set the default policy of the iptables FORWARD chain to DROP.

In addition, kubeadm recommends systemd as the cgroup driver, so daemon.json must also be modified:

sed -i "13i ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT" /usr/lib/systemd/system/docker.service

tee /etc/docker/daemon.json <<-'EOF'

{ "exec-opts": ["native.cgroupdriver=systemd"] }

EOF

systemctl daemon-reload

systemctl restart docker
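To confirm that both changes took effect (output wording may vary slightly between Docker versions):

docker info | grep -i 'cgroup driver'    # should print: Cgroup Driver: systemd
iptables -S FORWARD | head -n 1          # should print: -P FORWARD ACCEPT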

9. Install kubeadm and kubelet

cat <<EOF > /etc/yum.repos.d/kubernetes.repo

[kubernetes]

name=Kubernetes

baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/

enabled=1

gpgcheck=0

repo_gpgcheck=0

gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

EOF

yum makecache fast

yum install -y kubelet-1.16.3 kubeadm-1.16.3 kubectl-1.16.3    # pin the versions to match the v1.16.3 target used below

systemctl enable kubelet

vim /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf

# Set kubelet's cgroup driver to systemd by appending --cgroup-driver=systemd
# to the existing KUBELET_KUBECONFIG_ARGS line (keep its other flags)
KUBELET_KUBECONFIG_ARGS=--cgroup-driver=systemd

systemctl daemon-reload

systemctl restart kubelet.service
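The same edit can also be scripted instead of done in vim; a minimal sketch that assumes the stock 10-kubeadm.conf layout shipped by the kubelet RPM (check the file first):

sed -i 's|^Environment="KUBELET_KUBECONFIG_ARGS=\(.*\)"$|Environment="KUBELET_KUBECONFIG_ARGS=\1 --cgroup-driver=systemd"|' \
    /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
systemctl daemon-reload && systemctl restart kubelet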

10. Pull the required images

kubeadm config images list | sed -e 's/^/docker pull /g' -e 's#k8s.gcr.io#registry.cn-hangzhou.aliyuncs.com/google_containers#g' | sh -x

docker images | grep registry.cn-hangzhou.aliyuncs.com/google_containers | awk '{print "docker tag",$1":"$2,$1":"$2}' | sed -e 's/registry.cn-hangzhou.aliyuncs.com\/google_containers/k8s.gcr.io/2' | sh -x

docker images | grep registry.cn-hangzhou.aliyuncs.com/google_containers | awk '{print "docker rmi """$1""":"""$2}' | sh -x
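A quick check that the retag and cleanup worked; only k8s.gcr.io-tagged images should remain:

docker images | grep k8s.gcr.io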


3. Install keepalived and haproxy (run on the master nodes)

    High availability in Kubernetes mainly means a highly available control plane: several sets of master components plus etcd members, with the worker nodes reaching the masters through a load balancer.

Here etcd is stacked (co-located) with the master node components:

(Diagram: etcd stacked together with the master components.)

The stacked etcd layout:
    needs fewer machines
    is simple to deploy and easy to manage
    scales out horizontally easily
    carries more risk: if one host dies, a master and an etcd member are lost at the same time, which noticeably reduces cluster redundancy.

    3.1 Install on the masters

yum install -y keepalived haproxy

    3.2 Edit the haproxy configuration file (identical on all three nodes)

global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    stats socket /var/lib/haproxy/stats
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000
listen stats
    bind                 *:1080
    stats auth           admin:awesomePassword
    stats refresh        5s
    stats realm          HAProxy\ Statistics
    stats uri            /admin?stats
frontend kubernetes-apiserver
   mode  tcp
   bind  *:8443
   option   tcplog
   default_backend     kubernetes-apiserver
backend kubernetes-apiserver
    balance     roundrobin
    mode        tcp
    server  node1 192.168.10.11:6443 check inter 5000 fall 2 rise 2 weight 1
    server  node4 192.168.10.14:6443 check inter 5000 fall 2 rise 2 weight 1
    server  node5 192.168.10.15:6443 check inter 5000 fall 2 rise 2 weight 1


    3.3 Edit the keepalived configuration file

    Node 1:

! Configuration File for keepalived
global_defs {
   router_id LVS_DEVEL
}
vrrp_script check_haproxy {
    script "/etc/keepalived/check_haproxy.sh"
    interval 3
    weight -2
    fall 10
    rise 2
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33         # physical NIC name of the host
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.10.16       # the VIP must be in the same subnet as the node IPs
    }
    track_script {
        check_haproxy
    }
}


    Node 2:

! Configuration File for keepalived
global_defs {
   router_id LVS_DEVEL
}
vrrp_script check_haproxy {
    script "/etc/keepalived/check_haproxy.sh"
    interval 3
    weight -2
    fall 10
    rise 2
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 80
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.10.16
    }
    track_script {
        check_haproxy
    }
}


    Node 3:

! Configuration File for keepalived
global_defs {
   router_id LVS_DEVEL
}
vrrp_script check_haproxy {
    script "/etc/keepalived/check_haproxy.sh"
    interval 3
    weight -2
    fall 10
    rise 2
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 60
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.10.16
    }
    track_script {
        check_haproxy
    }
}


    Run on all three masters:

cat > /etc/keepalived/check_haproxy.sh <<EOF
#!/bin/bash
systemctl status haproxy > /dev/null
if [[ \$? != 0 ]];then
        echo "haproxy is down, stopping keepalived"
        systemctl stop keepalived
fi
EOF
chmod +x /etc/keepalived/check_haproxy.sh
systemctl enable keepalived && systemctl start keepalived
systemctl enable haproxy && systemctl start haproxy
systemctl status keepalived && systemctl status haproxy
# If keepalived is not in the running state, restart it
systemctl restart keepalived


    The master that currently holds the VIP should now show 192.168.10.16 bound to its interface (for example in the output of ip addr show ens33).

keepalived and haproxy are now ready.


4. Initialize the cluster (run on node1)

kubeadm init \
  --kubernetes-version=v1.16.3 \
  --pod-network-cidr=10.244.0.0/16 \
  --apiserver-advertise-address=192.168.10.11 \
  --control-plane-endpoint 192.168.10.16:8443 --upload-certs

When the command finishes and prints the kubeadm join commands for additional control-plane nodes and for workers, the initialization succeeded; save that output for the join steps below.

    1. Configure kubectl for the user who will run it

mkdir -p $HOME/.kube

cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

chown $(id -u):$(id -g) $HOME/.kube/config
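A quick check at this point: the node registers but stays NotReady until a Pod network addon is installed in the next step.

kubectl get nodes
# example output:
# NAME    STATUS     ROLES    AGE   VERSION
# node1   NotReady   master   1m    v1.16.3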

    2. Install the Pod network

    Install the canal network addon:

wget https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/canal/rbac.yaml

wget https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/canal/canal.yaml

At this point canal.yaml needs to be modified. The exact change was shown only in screenshots in the original post, so review the manifest (Pod network CIDR, host interface and image settings) against your own environment before applying it.


    3. Then deploy the two manifests (a sketch of the presumed commands follows below). When every pod shows a Running status, the network addon has been deployed successfully.
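The deployment step was shown only as screenshots in the original; a minimal sketch of what it presumably ran, applying the two manifests downloaded above and then watching the pods:

kubectl apply -f rbac.yaml
kubectl apply -f canal.yaml
kubectl get pods -n kube-system -o wide    # wait until every pod is Running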

    4. Join the other master nodes

kubeadm join 192.168.10.16:8443 --token 4r7i1t.pu099ydf73ju2dq0 \
    --discovery-token-ca-cert-hash sha256:65547a2b5633ea663cf9edbde3a65c3d1eb4d0f932ac2c6c6fcaf77dcd86a55f \
    --control-plane --certificate-key e8aeb23b165bf87988b4b30a80635d35e45a14d958a10ec616190665c835dc6a


Run on any node:

kubectl get node
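Note that the kubeconfig copy from step 1 was only done on node1; to use kubectl on the newly joined masters as well, repeat it there:

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config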

    5. Test master high availability: shut down master1, then check the cluster from the other nodes (a sketch of the check follows below).
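The exact commands were only in screenshots; a sketch of the failover check, using the node and interface names from this article's environment:

# on node1 (the current MASTER):
shutdown -h now
# on node4 or node5:
ip addr show ens33    # the VIP 192.168.10.16 should now be bound here
kubectl get node      # the API keeps responding through the VIP; node1 eventually shows NotReady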


5. Join the worker nodes

kubeadm join 192.168.10.16:8443 --token 4r7i1t.pu099ydf73ju2dq0 \
    --discovery-token-ca-cert-hash sha256:65547a2b5633ea663cf9edbde3a65c3d1eb4d0f932ac2c6c6fcaf77dcd86a55f
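On any master, confirm the worker registered (the bootstrap token above expires after 24 hours; if it has, a fresh join command can be printed on a master with kubeadm token create --print-join-command):

kubectl get nodes -o wide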
