This article introduces how to deploy Kubernetes for high availability. It has real reference value; interested readers can follow along, and we hope you come away from it having gained something.
Kubernetes high availability comes down to keeping the API Server on the Master nodes available. The API Server is the single entry point for creating, reading, updating, and deleting every kind of Kubernetes resource object, making it the data bus and data hub of the entire system. Placing a load balancer in front of two Master nodes provides a stable foundation for container-cloud workloads.
Hostname | IP address | OS | Main software |
K8s-master01 | 192.168.200.111 | CentOS7.x | Etcd+Kubernetes |
K8s-master02 | 192.168.200.112 | CentOS7.x | Etcd+Kubernetes |
K8s-node01 | 192.168.200.113 | CentOS7.x | Etcd+Kubernetes+Flannel+Docker |
K8s-node02 | 192.168.200.114 | CentOS7.x | Etcd+Kubernetes+Flannel+Docker |
K8s-lb01 | 192.168.200.115 | CentOS7.x | Nginx+Keepalived |
K8s-lb02 | 192.168.200.116 | CentOS7.x | Nginx+Keepalived |
The LB cluster VIP is 192.168.200.200.
Configure the IP address, gateway, DNS (Alibaba Cloud's 223.5.5.5 is recommended), and other basic network settings on every host. Static IP addresses are strongly recommended: if a node's address changes, the cluster can no longer reach the API Server and becomes unusable.
Set the hostname on every host and add name-resolution records. The steps below use the k8s-master01 host as the example.
[root@localhost ~]# hostnamectl set-hostname k8s-master01
[root@localhost ~]# bash
[root@k8s-master01 ~]# cat <<EOF>> /etc/hosts
192.168.200.111 k8s-master01
192.168.200.112 k8s-master02
192.168.200.113 k8s-node01
192.168.200.114 k8s-node02
192.168.200.115 k8s-lb01
192.168.200.116 k8s-lb02
EOF
[root@k8s-master01 ~]# iptables -F
[root@k8s-master01 ~]# systemctl stop firewalld && systemctl disable firewalld
[root@k8s-master01 ~]# setenforce 0
[root@k8s-master01 ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
On the k8s-master01 host, create the directory /k8s and upload the prepared etcd-cert.sh and etcd.sh scripts into it. etcd-cert.sh generates the etcd certificates; etcd.sh sets up the etcd service, including its configuration file and startup unit.
[root@k8s-master01 ~]# mkdir /k8s
[root@k8s-master01 ~]# cd /k8s/
[root@k8s-master01 k8s]# ls
etcd-cert.sh  etcd.sh
Create the directory /k8s/etcd-cert and keep all certificates there for easier management.
[root@k8s-master01 k8s]# mkdir etcd-cert
[root@k8s-master01 k8s]# mv etcd-cert.sh etcd-cert
Upload the cfssl, cfssl-certinfo, and cfssljson packages, install them into /usr/local/bin, and grant execute permission.
[root@k8s-master01 k8s]# ls    # upload the cfssl, cfssl-certinfo, and cfssljson certificate-generation tools
cfssl  cfssl-certinfo  cfssljson  etcd-cert  etcd.sh
[root@k8s-master01 k8s]# mv cfssl* /usr/local/bin/
[root@k8s-master01 k8s]# chmod +x /usr/local/bin/cfssl*
[root@k8s-master01 k8s]# ls -l /usr/local/bin/cfssl*
-rwxr-xr-x 1 root root 10376657 Jul 21 2020 /usr/local/bin/cfssl
-rwxr-xr-x 1 root root  6595195 Jul 21 2020 /usr/local/bin/cfssl-certinfo
-rwxr-xr-x 1 root root  2277873 Jul 21 2020 /usr/local/bin/cfssljson
Create the CA and server certificates.
[root@k8s-master01 ~]# cd /k8s/etcd-cert/
[root@k8s-master01 etcd-cert]# cat etcd-cert.sh
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF

cat > ca-csr.json <<EOF
{
  "CN": "etcd CA",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing"
    }
  ]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

#-----------------------
# The hosts list below holds the etcd node IP addresses
# (note: no trailing comma after the last entry)
cat > server-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
    "192.168.200.111",
    "192.168.200.112",
    "192.168.200.113",
    "192.168.200.114"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
[root@k8s-master01 etcd-cert]# bash etcd-cert.sh
2021/01/28 15:20:19 [INFO] generating a new CA key and certificate from CSR
2021/01/28 15:20:19 [INFO] generate received request
2021/01/28 15:20:19 [INFO] received CSR
2021/01/28 15:20:19 [INFO] generating key: rsa-2048
2021/01/28 15:20:19 [INFO] encoded CSR
2021/01/28 15:20:19 [INFO] signed certificate with serial number 165215637414524108023506135876170750574821614462
2021/01/28 15:20:19 [INFO] generate received request
2021/01/28 15:20:19 [INFO] received CSR
2021/01/28 15:20:19 [INFO] generating key: rsa-2048
2021/01/28 15:20:19 [INFO] encoded CSR
2021/01/28 15:20:19 [INFO] signed certificate with serial number 423773750965483892371547928227340126131739080799
2021/01/28 15:20:19 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for websites. For more information see the Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org); specifically, section 10.2.3 ("Information Requirements").
[root@k8s-master01 etcd-cert]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem  etcd-cert.sh  server.csr  server-csr.json  server-key.pem  server.pem
[root@k8s-master01 ~]# cd /k8s/
Upload the etcd-v3.3.18-linux-amd64.tar.gz package and unpack it.
[root@k8s-master01 k8s]# tar xf etcd-v3.3.18-linux-amd64.tar.gz
[root@k8s-master01 k8s]# ls
etcd-cert/  etcd.sh  etcd-v3.3.18-linux-amd64/  etcd-v3.3.18-linux-amd64.tar.gz
[root@k8s-master01 k8s]# mkdir /opt/etcd/{cfg,bin,ssl} -p
[root@k8s-master01 k8s]# cd etcd-v3.3.18-linux-amd64
[root@k8s-master01 etcd-v3.3.18-linux-amd64]# ls
Documentation  etcd  etcdctl  README-etcdctl.md  README.md  READMEv2-etcdctl.md
[root@k8s-master01 etcd-v3.3.18-linux-amd64]# mv etcd etcdctl /opt/etcd/bin/
[root@k8s-master01 etcd-v3.3.18-linux-amd64]# cp /k8s/etcd-cert/*.pem /opt/etcd/ssl/
[root@k8s-master01 etcd-v3.3.18-linux-amd64]# ls /opt/etcd/ssl/
ca-key.pem  ca.pem  server-key.pem  server.pem
[root@k8s-master01 etcd-v3.3.18-linux-amd64]# cd /k8s/
[root@k8s-master01 k8s]# bash etcd.sh etcd01 192.168.200.111 etcd02=https://192.168.200.112:2380,etcd03=https://192.168.200.113:2380,etcd04=https://192.168.200.114:2380
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.
The script will appear to hang while starting the etcd service; in fact etcd is already running, and you can press Ctrl+C to break out (the process remains). The first etcd member blocks because it keeps trying to reach the other members, which have not been started yet.
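To confirm that etcd really is running after breaking out of the script, a quick check (a minimal sketch; the exact output will vary):

[root@k8s-master01 k8s]# ps -ef | grep [e]tcd    # the etcd process should be listed
[root@k8s-master01 k8s]# systemctl status etcd   # expect active (or activating until the peers come up)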
[root@k8s-master01 k8s]# scp -r /opt/etcd/ root@k8s-master02:/opt/
[root@k8s-master01 k8s]# scp -r /opt/etcd/ root@k8s-node01:/opt/
[root@k8s-master01 k8s]# scp -r /opt/etcd/ root@k8s-node02:/opt/
[root@k8s-master01 k8s]# scp /usr/lib/systemd/system/etcd.service root@k8s-master02:/usr/lib/systemd/system/
[root@k8s-master01 k8s]# scp /usr/lib/systemd/system/etcd.service root@k8s-node01:/usr/lib/systemd/system/
[root@k8s-master01 k8s]# scp /usr/lib/systemd/system/etcd.service root@k8s-node02:/usr/lib/systemd/system/
The other nodes must edit the copied configuration before use.
[root@k8s-master02 ~]# cat /opt/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd02"                                                  # change to the matching member name
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.200.112:2380"                # change to this node's IP address
ETCD_LISTEN_CLIENT_URLS="https://192.168.200.112:2379"              # change to this node's IP address
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.200.112:2380"     # change to this node's IP address
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.200.112:2379"           # change to this node's IP address
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.200.111:2380,etcd02=https://192.168.200.112:2380,etcd03=https://192.168.200.113:2380,etcd04=https://192.168.200.114:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
[root@k8s-node01 ~]# cat /opt/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd03"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.200.113:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.200.113:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.200.113:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.200.113:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.200.111:2380,etcd02=https://192.168.200.112:2380,etcd03=https://192.168.200.113:2380,etcd04=https://192.168.200.114:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
[root@k8s-node02 ~]# cat /opt/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd04"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.200.114:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.200.114:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.200.114:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.200.114:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.200.111:2380,etcd02=https://192.168.200.112:2380,etcd03=https://192.168.200.113:2380,etcd04=https://192.168.200.114:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
Run the following on all four hosts: master01, master02, node01, and node02.
[root@k8s-master01 k8s]# systemctl daemon-reload && systemctl restart etcd && systemctl enable etcd
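With etcd started on all four members, it is worth confirming quorum before moving on. A minimal sketch using etcdctl's cluster-health subcommand (the v2 API is the default in etcd 3.3) and the certificate paths set up earlier:

[root@k8s-master01 k8s]# /opt/etcd/bin/etcdctl \
  --ca-file=/opt/etcd/ssl/ca.pem \
  --cert-file=/opt/etcd/ssl/server.pem \
  --key-file=/opt/etcd/ssl/server-key.pem \
  --endpoints="https://192.168.200.111:2379,https://192.168.200.112:2379,https://192.168.200.113:2379,https://192.168.200.114:2379" \
  cluster-health

Every member should report healthy; if one is listed as unreachable, recheck its /opt/etcd/cfg/etcd file for a wrong name or IP address.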
Upload and unpack master.zip, which yields three scripts: apiserver.sh, controller-manager.sh, and scheduler.sh. Grant them execute permission; each Master service is started through one of these scripts.
[root@k8s-master01 ~]# cd /k8s/
[root@k8s-master01 k8s]# unzip master.zip
Archive:  master.zip
  inflating: apiserver.sh
  inflating: controller-manager.sh
  inflating: scheduler.sh
[root@k8s-master01 k8s]# chmod +x *.sh
Create /k8s/k8s-cert as the working directory for self-signed certificates; all certificates are generated there. In that directory, create the generation script k8s-cert.sh with the content shown below. Running it produces the CA certificate, the server key pair, the admin certificate, and the kube-proxy client certificate.
[root@k8s-master01 k8s]# mkdir /k8s/k8s-cert
[root@k8s-master01 k8s]# cd /k8s/k8s-cert/
[root@k8s-master01 k8s-cert]# vim k8s-cert.sh
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF

cat > ca-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

#-----------------------
# The hosts list holds the four node IPs plus 192.168.200.200, the VIP:
# once HA is in place the nodes reach the masters through the VIP,
# so the server certificate must be valid for it as well.
cat > server-csr.json <<EOF
{
  "CN": "kubernetes",
  "hosts": [
    "10.0.0.1",
    "127.0.0.1",
    "192.168.200.111",
    "192.168.200.112",
    "192.168.200.113",
    "192.168.200.114",
    "192.168.200.200",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

#-----------------------
cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

#-----------------------
cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
Running the k8s-cert.sh script generates eight certificate files.
[root@k8s-master01 k8s-cert]# bash k8s-cert.sh
2021/01/28 16:34:13 [INFO] generating a new CA key and certificate from CSR
2021/01/28 16:34:13 [INFO] generate received request
2021/01/28 16:34:13 [INFO] received CSR
2021/01/28 16:34:13 [INFO] generating key: rsa-2048
2021/01/28 16:34:13 [INFO] encoded CSR
2021/01/28 16:34:13 [INFO] signed certificate with serial number 308439344193766038756929834816982880388926996986
2021/01/28 16:34:13 [INFO] generate received request
2021/01/28 16:34:13 [INFO] received CSR
2021/01/28 16:34:13 [INFO] generating key: rsa-2048
2021/01/28 16:34:14 [INFO] encoded CSR
2021/01/28 16:34:14 [INFO] signed certificate with serial number 75368861589931302301330401750480744629388496397
2021/01/28 16:34:14 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for websites. For more information see the Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org); specifically, section 10.2.3 ("Information Requirements").
2021/01/28 16:34:14 [INFO] generate received request
2021/01/28 16:34:14 [INFO] received CSR
2021/01/28 16:34:14 [INFO] generating key: rsa-2048
2021/01/28 16:34:14 [INFO] encoded CSR
2021/01/28 16:34:14 [INFO] signed certificate with serial number 108292524112693440628246698004254871159937905177
2021/01/28 16:34:14 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for websites. For more information see the Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org); specifically, section 10.2.3 ("Information Requirements").
2021/01/28 16:34:14 [INFO] generate received request
2021/01/28 16:34:14 [INFO] received CSR
2021/01/28 16:34:14 [INFO] generating key: rsa-2048
2021/01/28 16:34:14 [INFO] encoded CSR
2021/01/28 16:34:14 [INFO] signed certificate with serial number 262399212790704249587468309931495790220005272357
2021/01/28 16:34:14 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for websites. For more information see the Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org); specifically, section 10.2.3 ("Information Requirements").
[root@k8s-master01 k8s-cert]# ls *.pem
admin-key.pem  admin.pem  ca-key.pem  ca.pem  kube-proxy-key.pem  kube-proxy.pem  server-key.pem  server.pem
[root@k8s-master01 k8s-cert]# ls *.pem | wc -l
8
After the certificates are generated, copy the CA and server certificates into the Kubernetes working directory. Create /opt/kubernetes/{cfg,bin,ssl} to hold configuration files, executables, and certificates respectively.
[root@k8s-master01 ~]# mkdir /opt/kubernetes/{cfg,bin,ssl} -p
[root@k8s-master01 ~]# cd /k8s/k8s-cert/
[root@k8s-master01 k8s-cert]# cp ca*pem server*pem /opt/kubernetes/ssl/
[root@k8s-master01 k8s-cert]# ls /opt/kubernetes/ssl/
ca-key.pem  ca.pem  server-key.pem  server.pem
Upload and unpack the Kubernetes server tarball, then copy the kube-apiserver, kubectl, kube-controller-manager, and kube-scheduler binaries into /opt/kubernetes/bin/.
[root@k8s-master01 ~]# cd /k8s/
[root@k8s-master01 k8s]# tar xf kubernetes-server-linux-amd64.tar.gz
[root@k8s-master01 k8s]# cd kubernetes/server/bin/
[root@k8s-master01 bin]# cp kube-apiserver kubectl kube-controller-manager kube-scheduler /opt/kubernetes/bin/
[root@k8s-master01 bin]# ls /opt/kubernetes/bin/
kube-apiserver  kube-controller-manager  kubectl  kube-scheduler
Create a token file named token.csv in /opt/kubernetes/cfg/. In essence this defines a user role, an administrative identity through which Node machines join the cluster. Before creating it, generate a random serial number with the head command to use as the token. The file's contents are shown below, where:
48be2e8be6cca6e349d3e932768f5d71 is the token;
kubelet-bootstrap is the role name;
10001 is the role ID;
"system:kubelet-bootstrap" is the bound superuser group.
[root@k8s-master01 bin]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
48be2e8be6cca6e349d3e932768f5d71
[root@k8s-master01 ~]# vim /opt/kubernetes/cfg/token.csv
48be2e8be6cca6e349d3e932768f5d71,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
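If you prefer, the two steps can be collapsed into a single hedged sketch that generates the token and writes token.csv in the same format in one pass:

BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > /opt/kubernetes/cfg/token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF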
Copy everything under /opt/kubernetes/ on k8s-master01 to k8s-master02.
[root@k8s-master01 ~]# ls -R /opt/kubernetes/
/opt/kubernetes/:        # three directories
bin  cfg  ssl
/opt/kubernetes/bin:     # the binaries
kube-apiserver  kube-controller-manager  kubectl  kube-scheduler
/opt/kubernetes/cfg:     # the token file
token.csv
/opt/kubernetes/ssl:     # the certificates
ca-key.pem  ca.pem  server-key.pem  server.pem
[root@k8s-master01 ~]# scp -r /opt/kubernetes/ root@k8s-master02:/opt
Run the apiserver.sh script; it takes two positional parameters. The first is the local IP address, the second the list of etcd cluster endpoints.
[root@k8s-master01 ~]# cd /k8s/
[root@k8s-master01 k8s]# bash apiserver.sh https://192.168.200.111 https://192.168.200.111:2379,https://192.168.200.112:2379,https://192.168.200.113:2379,https://192.168.200.114:2379
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
[root@k8s-master01 k8s]# ps aux | grep [k]ube
Check that the secure port 6443 and the local insecure HTTP port 8080 are listening on the k8s-master01 node.
[root@k8s-master01 k8s]# netstat -anpt | grep -E "6443|8080"
tcp   0   0 192.168.200.111:6443    0.0.0.0:*               LISTEN        39105/kube-apiserve
tcp   0   0 127.0.0.1:8080          0.0.0.0:*               LISTEN        39105/kube-apiserve
tcp   0   0 192.168.200.111:46832   192.168.200.111:6443    ESTABLISHED   39105/kube-apiserve
tcp   0   0 192.168.200.111:6443    192.168.200.111:46832   ESTABLISHED   39105/kube-apiserve
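As an extra sanity check, the insecure local port answers health probes without any credentials; a minimal sketch (assuming the standard /healthz endpoint of this kube-apiserver version):

[root@k8s-master01 k8s]# curl http://127.0.0.1:8080/healthz    # expect: ok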
Copy the kube-apiserver configuration file and the token.csv file from the /opt/kubernetes/cfg/ working directory to k8s-master02. On k8s-master02, edit the kube-apiserver configuration, changing the bind-address and advertise-address values to the local address.
[root@k8s-master01 k8s]# scp /opt/kubernetes/cfg/* root@k8s-master02:/opt/kubernetes/cfg/
On k8s-master02:
[root@k8s-master02 ~]# vim /opt/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.200.111:2379,https://192.168.200.112:2379,https://192.168.200.113:2379,https://192.168.200.114:2379 \
--bind-address=192.168.200.112 \        # change to this node's IP address
--secure-port=6443 \
--advertise-address=192.168.200.112 \   # change to this node's IP address
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--kubelet-https=true \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"
Copy the kube-apiserver.service unit from k8s-master01 to the /usr/lib/systemd/system directory on k8s-master02, then start the API Server on k8s-master02 and check the listening ports.
[root@k8s-master01 k8s]# scp /usr/lib/systemd/system/kube-apiserver.service root@k8s-master02:/usr/lib/systemd/system
On k8s-master02:
[root@k8s-master02 ~]# systemctl start kube-apiserver && systemctl enable kube-apiserver
[root@k8s-master02 ~]# netstat -anptu | grep -E "6443|8080"
tcp   0   0 192.168.200.112:6443   0.0.0.0:*   LISTEN   544/kube-apiserver
tcp   0   0 127.0.0.1:8080         0.0.0.0:*   LISTEN   544/kube-apiserver
On the k8s-master01 node, start the Scheduler service.

[root@k8s-master01 ~]# cd /k8s/
[root@k8s-master01 k8s]# ./scheduler.sh 127.0.0.1
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
[root@k8s-master01 k8s]# ps aux | grep [k]ube
Copy the kube-scheduler configuration file and the kube-scheduler.service unit from k8s-master01 to k8s-master02, then start the Scheduler on k8s-master02.
[root@k8s-master01 k8s]# scp /opt/kubernetes/cfg/kube-scheduler root@k8s-master02:/opt/kubernetes/cfg/
[root@k8s-master01 k8s]# scp /usr/lib/systemd/system/kube-scheduler.service root@k8s-master02:/usr/lib/systemd/system
On k8s-master02:
[root@k8s-master02 ~]# systemctl start kube-scheduler
[root@k8s-master02 ~]# systemctl enable kube-scheduler
On the k8s-master01 node, start the Controller-Manager service.
[root@k8s-master01 k8s]# ./controller-manager.sh 127.0.0.1
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
Copy the kube-controller-manager configuration file and the kube-controller-manager.service unit from k8s-master01 to the /opt/kubernetes/cfg directory on k8s-master02, then start the Controller-Manager on k8s-master02.
[root@k8s-master01 k8s]# scp /opt/kubernetes/cfg/kube-controller-manager root@k8s-master02:/opt/kubernetes/cfg/
[root@k8s-master01 k8s]# scp /usr/lib/systemd/system/kube-controller-manager.service root@k8s-master02:/usr/lib/systemd/system
On k8s-master02:
[root@k8s-master02 ~]# systemctl start kube-controller-manager
[root@k8s-master02 ~]# systemctl enable kube-controller-manager
On both k8s-master01 and k8s-master02, check the status of the components.
[root@k8s-master01 k8s]# /opt/kubernetes/bin/kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
etcd-3               Healthy   {"health":"true"}
[root@k8s-master02 ~]# /opt/kubernetes/bin/kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-1               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}
etcd-3               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
Perform the following on both node hosts; k8s-node01 is used as the example.
Install docker-ce.
wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum makecache fast
yum -y install docker-ce
systemctl start docker && systemctl enable docker
docker version
Configure the Alibaba Cloud registry mirror accelerator.
tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://vbgix9o1.mirror.aliyuncs.com"]
}
EOF
systemctl daemon-reload && systemctl restart docker
docker info
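To confirm Docker picked up the mirror, docker info lists it; a quick check:

docker info | grep -A 1 "Registry Mirrors"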
Docker is now installed on both node hosts, but the containers it runs still need the Flannel network component so that they can reach one another.
First, write the allocated subnet into etcd for Flannel to use. Information such as how routes are forwarded and how source and destination addresses are encapsulated is all stored in etcd.
The etcdctl command below lists the cluster endpoints separated by commas and uses set to store the network configuration as a key-value pair: the network is 172.17.0.0/16 and the backend type is vxlan.
Afterwards, check that docker0 on both node hosts (the Docker gateway) has an address in the 172.17.0.0/16 range.
[root@k8s-master01 ~]# cd /k8s/etcd-cert/
[root@k8s-master01 etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.200.111:2379,https://192.168.200.112:2379,https://192.168.200.113:2379,https://192.168.200.114:2379" set /coreos.com/network/config '{"Network":"172.17.0.0/16","Backend":{"Type":"vxlan"}}'
{"Network":"172.17.0.0/16","Backend":{"Type":"vxlan"}}
[root@k8s-node01 ~]# ifconfig docker0
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        ether 02:42:d6:c7:05:8b  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
Check the network configuration that was written:
[root@k8s-master01 etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.200.111:2379,https://192.168.200.112:2379,https://192.168.200.113:2379,https://192.168.200.114:2379" get /coreos.com/network/config
{"Network":"172.17.0.0/16","Backend":{"Type":"vxlan"}}
Upload the flannel-v0.12.0-linux-amd64.tar.gz package to both node hosts and unpack it.
Run this on both node hosts:
[root@k8s-node01 ~]# tar xf flannel-v0.12.0-linux-amd64.tar.gz
Create the k8s working directory on the node hosts and move the flanneld binary and the mk-docker-opts.sh script into it.
[root@k8s-node01 ~]# mkdir /opt/kubernetes/{cfg,bin,ssl} -p
[root@k8s-node01 ~]# mv mk-docker-opts.sh flanneld /opt/kubernetes/bin/
Copy the prepared flannel.sh script to both node hosts; it starts the Flannel service and creates its configuration. The script writes the configuration file /opt/kubernetes/cfg/flanneld, pointing Flannel at the etcd endpoints and the certificate and key files needed for authentication; it also writes the unit file /usr/lib/systemd/system/flanneld.service, registering Flannel as a custom system service managed by systemd.
Using k8s-node01 as the example:
[root@k8s-node01 ~]# cat /opt/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd03"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.200.113:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.200.113:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.200.113:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.200.113:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.200.111:2380,etcd02=https://192.168.200.112:2380,etcd03=https://192.168.200.113:2380,etcd04=https://192.168.200.114:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
[root@k8s-node01 ~]# cat flannel.sh
#!/bin/bash
ETCD_ENDPOINTS=${1:-"http://127.0.0.1:2379"}

cat <<EOF >/opt/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=${ETCD_ENDPOINTS} \
-etcd-cafile=/opt/etcd/ssl/ca.pem \
-etcd-certfile=/opt/etcd/ssl/server.pem \
-etcd-keyfile=/opt/etcd/ssl/server-key.pem"
EOF

cat <<EOF >/usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld
[root@k8s-node01 ~]# bash flannel.sh https://192.168.200.111:2379,https://192.168.200.112:2379,https://192.168.200.113:2379,https://192.168.200.114:2379
Configure Docker on both node hosts to connect through Flannel. docker.service needs Flannel for communication, so edit docker.service: add EnvironmentFile=/run/flannel/subnet.env so Docker uses Flannel's subnet, and add the $DOCKER_NETWORK_OPTIONS variable to the start command. Both changes follow the official documentation. The steps below use k8s-node01 as the example.
[root@k8s-node01 ~]# vim /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket containerd.service

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
EnvironmentFile=/run/flannel/subnet.env                                                            # add this line
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock    # add the variable
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always

# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
# Both the old, and new location are accepted by systemd 229 and up, so using the old location
# to make them work for either version of systemd.
StartLimitBurst=3

# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
# this option work for either version of systemd.
StartLimitInterval=60s

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Comment TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this option.
TasksMax=infinity

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process
OOMScoreAdjust=-500

[Install]
WantedBy=multi-user.target
On the two node hosts, the assigned subnets are 172.17.11.1/24 and 172.17.100.1/24 respectively; bip sets the subnet Docker uses at startup.
[root@k8s-node01 ~]# cat /run/flannel/subnet.env
DOCKER_OPT_BIP="--bip=172.17.11.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.17.11.1/24 --ip-masq=false --mtu=1450"
[root@k8s-node02 ~]# cat /run/flannel/subnet.env
DOCKER_OPT_BIP="--bip=172.17.100.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.17.100.1/24 --ip-masq=false --mtu=1450"
After modifying the unit file on both node hosts, restart the Docker service and inspect the docker0 interface on each.
[root@k8s-node01 ~]# systemctl daemon-reload && systemctl restart docker
[root@k8s-node01 ~]# ip add s docker0
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:d6:c7:05:8b brd ff:ff:ff:ff:ff:ff
    inet 172.17.11.1/24 brd 172.17.11.255 scope global docker0
       valid_lft forever preferred_lft forever
[root@k8s-node02 ~]# systemctl daemon-reload && systemctl restart docker
[root@k8s-node02 ~]# ip add s docker0
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:b8:77:89:4a brd ff:ff:ff:ff:ff:ff
    inet 172.17.100.1/24 brd 172.17.100.255 scope global docker0
       valid_lft forever preferred_lft forever
Run a busybox container on each node host. (busybox is a toolbox image bundling over three hundred common Linux commands; it is used here purely for testing.)
Inside the containers, the k8s-node01 container has address 172.17.11.2 and the k8s-node02 container 172.17.100.2, matching the subnets recorded in /run/flannel/subnet.env.
Then test with ping: if the k8s-node02 container can reach the k8s-node01 container's IP address, the two independent containers can communicate, which means the Flannel overlay works.
[root@k8s-node01 ~]# docker pull busybox
[root@k8s-node01 ~]# docker run -it busybox /bin/sh
/ # ip addr show eth0
9: eth0@if10: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue
    link/ether 02:42:ac:11:0b:03 brd ff:ff:ff:ff:ff:ff
    inet 172.17.11.2/24 brd 172.17.11.255 scope global eth0
       valid_lft forever preferred_lft forever
[root@k8s-node02 ~]# docker pull busybox
[root@k8s-node02 ~]# docker run -it busybox /bin/sh
/ # ip a s eth0
7: eth0@if8: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue
    link/ether 02:42:ac:11:64:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.100.2/24 brd 172.17.100.255 scope global eth0
       valid_lft forever preferred_lft forever
/ # ping -c 4 172.17.11.2
PING 172.17.11.2 (172.17.11.2): 56 data bytes
64 bytes from 172.17.11.2: seq=0 ttl=62 time=1.188 ms
64 bytes from 172.17.11.2: seq=1 ttl=62 time=0.598 ms
64 bytes from 172.17.11.2: seq=2 ttl=62 time=0.564 ms
64 bytes from 172.17.11.2: seq=3 ttl=62 time=0.372 ms
--- 172.17.11.2 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.372/0.680/1.188 ms
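Behind the scenes, each ping crosses a vxlan route that flanneld installed toward the other node's subnet. You can inspect it on the host; a minimal check (the subnets shown will match whatever leases your hosts received):

[root@k8s-node01 ~]# ip route | grep flannel.1
172.17.100.0/24 via 172.17.100.0 dev flannel.1 onlink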
From the k8s-master01 node, copy the kubelet and kube-proxy binaries to both node hosts.
[root@k8s-master01 ~]# cd /k8s/kubernetes/server/bin/
[root@k8s-master01 bin]# scp kubelet kube-proxy root@k8s-node01:/opt/kubernetes/bin/
[root@k8s-master01 bin]# scp kubelet kube-proxy root@k8s-node02:/opt/kubernetes/bin/
Upload node.zip to both node hosts and unpack it to obtain the proxy.sh and kubelet.sh scripts. k8s-node01 is shown as the example.
[root@k8s-node01 ~]# unzip node.zip
Archive:  node.zip
  inflating: proxy.sh
  inflating: kubelet.sh
[root@k8s-node02 ~]# unzip node.zip
Archive:  node.zip
  inflating: proxy.sh
  inflating: kubelet.sh
On the k8s-master01 node, create the kubeconfig working directory and upload the kubeconfig.sh script to /k8s/kubeconfig/. The script creates the TLS Bootstrapping Token, builds the kubelet bootstrapping kubeconfig (setting the cluster parameters, the client-authentication parameters, and the context parameters, and selecting the default context), and creates the kube-proxy kubeconfig file.
Look up the token, copy it into the client-authentication parameters, and update the token value in the kubeconfig.sh script.
[root@k8s-master01 ~]# mkdir /k8s/kubeconfig
[root@k8s-master01 ~]# cd /k8s/kubeconfig/
[root@k8s-master01 kubeconfig]# cat /opt/kubernetes/cfg/token.csv
48be2e8be6cca6e349d3e932768f5d71,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
[root@k8s-master01 kubeconfig]# vim kubeconfig.sh
# Create the TLS Bootstrapping Token
#BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
BOOTSTRAP_TOKEN=48be2e8be6cca6e349d3e932768f5d71
For convenience, append export PATH=$PATH:/opt/kubernetes/bin/ to the environment on both k8s-master01 and k8s-master02.
[root@k8s-master01 ~]# echo "export PATH=$PATH:/opt/kubernetes/bin/" >> /etc/profile [root@k8s-master01 ~]# source /etc/profile [root@k8s-master02 ~]# echo "export PATH=$PATH:/opt/kubernetes/bin/" >> /etc/profile [root@k8s-master02 ~]# source /etc/profile
Rename kubeconfig.sh to kubeconfig and run it with bash. The first argument is the current API Server's IP, which is written into the generated configuration; the second argument is the location of the kubernetes certificates. When it finishes, two files are produced: bootstrap.kubeconfig and kube-proxy.kubeconfig.
[root@k8s-master01 ~]# cd /k8s/kubeconfig/
[root@k8s-master01 kubeconfig]# mv kubeconfig.sh kubeconfig
[root@k8s-master01 kubeconfig]# bash kubeconfig 192.168.200.111 /k8s/k8s-cert/
Cluster "kubernetes" set.
User "kubelet-bootstrap" set.
Context "default" created.
Switched to context "default".
Cluster "kubernetes" set.
User "kube-proxy" set.
Context "default" created.
Switched to context "default".
[root@k8s-master01 kubeconfig]# ls
bootstrap.kubeconfig  kubeconfig  kube-proxy.kubeconfig  token.csv
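Before shipping the files to the nodes, you can verify what was written; kubectl config view prints the rendered file with the certificate data redacted, so the server address is easy to confirm:

[root@k8s-master01 kubeconfig]# kubectl config view --kubeconfig=bootstrap.kubeconfig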
Copy the bootstrap.kubeconfig and kube-proxy.kubeconfig files to both node hosts.
[root@k8s-master01 kubeconfig]# scp bootstrap.kubeconfig kube-proxy.kubeconfig root@k8s-node01:/opt/kubernetes/cfg/
[root@k8s-master01 kubeconfig]# scp bootstrap.kubeconfig kube-proxy.kubeconfig root@k8s-node02:/opt/kubernetes/cfg/
Create the bootstrap role and grant it permission to request signing from the API Server (this is the key step). Inspect bootstrap.kubeconfig on k8s-node01: when kubelet starts and wants to join the cluster, it asks the API Server to sign a certificate for it, and the kubeconfig tells it which address and port to contact to obtain that certificate.
[root@k8s-master01 kubeconfig]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
[root@k8s-node01 ~]# cat /opt/kubernetes/cfg/bootstrap.kubeconfig
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR2akNDQXFhZ0F3SUJBZ0lVTmdibUJxMkJpRkF5Z1lEVFpvb1p1a3V4QWZvd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1pURUxNQWtHQTFVRUJoTUNRMDR4RURBT0JnTlZCQWdUQjBKbGFXcHBibWN4RURBT0JnTlZCQWNUQjBKbAphV3BwYm1jeEREQUtCZ05WQkFvVEEyczRjekVQTUEwR0ExVUVDeE1HVTNsemRHVnRNUk13RVFZRFZRUURFd3ByCmRXSmxjbTVsZEdWek1CNFhEVEl4TURFeU9EQTRNamt3TUZvWERUSTJNREV5TnpBNE1qa3dNRm93WlRFTE1Ba0cKQTFVRUJoTUNRMDR4RURBT0JnTlZCQWdUQjBKbGFXcHBibWN4RURBT0JnTlZCQWNUQjBKbGFXcHBibWN4RERBSwpCZ05WQkFvVEEyczRjekVQTUEwR0ExVUVDeE1HVTNsemRHVnRNUk13RVFZRFZRUURFd3ByZFdKbGNtNWxkR1Z6Ck1JSUJJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBMk1GRzYyVDdMTC9jbnpvNGx4V0gKZGNOVnVkblkzRTl0S2ZvbThNaVZKcFRtVUhlYUhoczY2M1loK1VWSklnWXkwVXJzWGRyc2VPWDg2Nm9PcEN1NQpUajRKbEUxbXQ5b1NlOEhLeFVhYkRqVjlwd05WQm1WSllCOEZIMnZVaTZVZEVpOVNnVXF2OTZIbThBSUlFbTFhCmpLREc2QXRJRWFZdFpJQ1MyeVg5ZStPVXVCUUtkcDBCcGdFdUxkMko5OEpzSjkrRzV6THc5bWdab0t5RHBEeHUKVHdGRC9HK2k5Vk9mbTh7ZzYzVzRKMUJWL0RLVXpTK1Q3NEs0S3I5ZmhDbHp4ZVo3bXR1eXVxUkM2c1lrcXpBdApEbklmNzB1QWtPRzRYMU52eUhjVmQ5Rzg4ZEM3NDNSbFZGZGNvbzFOM0hoZ1FtaG12ZXdnZ0tQVjZHWGwwTkJnCkx3SURBUUFCbzJZd1pEQU9CZ05WSFE4QkFmOEVCQU1DQVFZd0VnWURWUjBUQVFIL0JBZ3dCZ0VCL3dJQkFqQWQKQmdOVkhRNEVGZ1FVRVJ0cngrWHB4andVQWlKemJnUEQ2bGJOUlFFd0h4WURWUjBqQkJnd0ZvQVVFUnRyeCtYcAp4andVQWlKemJnUEQ2bGJOUlFFd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFJWTBtdWxJK25BdE1KcWpSZXFnCmRuWk1Ya2U3ZGxIeHJKMmkvT3NaSXRoZUhYakMwMGdNWlRZSGV6WUxSKzl0MUNKV1lmUVdOR3V3aktvYitPaDUKMlE5SURUZmpJblhXcmU5VU5SNUdGNndNUDRlRzZreUVNbE9WcUc3L2tldERpNlRzRkZyZWJVY0FraEFnV0J1eApJWXJWb1ZhMFlCK3hhZk1KdTIzMnQ5VmtZZHovdm9jWGV1MHd1L096Z1dsUEJFNFBkSUVHRWprYW5yQTk5UCtGCjhSUkJudmVJcjR4S21iMlJIcEFYWENMRmdvNTc1c1hEQWNGbWswVm1KM2kzL3pPbmlsd3cwRmpFNFU2OVRmNWMKekhncE0vdmtLbG9aTjYySW44YUNtbUZTcmphcjJRem1Ra3FwWHRsQmdoZThwUjQ3UWhiZS93OW5DWGhsYnVySgpzTzQ9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
    server: https://192.168.200.200:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubelet-bootstrap
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: kubelet-bootstrap
  user:
    token: 48be2e8be6cca6e349d3e932768f5d71
On both node hosts, run the kubelet script and check with ps that the service started. Once running, kubelet automatically contacts the API Server to request a certificate. On k8s-master01, use get csr to see the incoming requests; a Pending state means the node is waiting for the cluster to issue its certificate.
[root@k8s-node01 ~]# bash kubelet.sh 192.168.200.113
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@k8s-node01 ~]# ps aux | grep [k]ube
[root@k8s-node02 ~]# bash kubelet.sh 192.168.200.114
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@k8s-node02 ~]# ps aux | grep [k]ube
[root@k8s-master01 kubeconfig]# kubectl get csr
NAME                                                   AGE    REQUESTOR           CONDITION
node-csr-4R-rZXsEI6Zf6HnklxVGZooXPwS11_8mTG5H5czDXTM   105s   kubelet-bootstrap   Pending
node-csr-Ha9B_rJVSOJ2OSTlOSiXwFbnwyLw_x4qUVfQfX-ks_4   48s    kubelet-bootstrap   Pending
From k8s-master01, issue the certificates to both node hosts. get csr then shows the certificates as issued, and get node shows both nodes joined to the cluster.
[root@k8s-master01 kubeconfig]# kubectl certificate approve node-csr-4R-rZXsEI6Zf6HnklxVGZooXPwS11_8mTG5H5czDXTM
certificatesigningrequest.certificates.k8s.io/node-csr-4R-rZXsEI6Zf6HnklxVGZooXPwS11_8mTG5H5czDXTM approved
[root@k8s-master01 kubeconfig]# kubectl certificate approve node-csr-Ha9B_rJVSOJ2OSTlOSiXwFbnwyLw_x4qUVfQfX-ks_4
certificatesigningrequest.certificates.k8s.io/node-csr-Ha9B_rJVSOJ2OSTlOSiXwFbnwyLw_x4qUVfQfX-ks_4 approved
[root@k8s-master01 kubeconfig]# kubectl get csr
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-4R-rZXsEI6Zf6HnklxVGZooXPwS11_8mTG5H5czDXTM   5m44s   kubelet-bootstrap   Approved,Issued
node-csr-Ha9B_rJVSOJ2OSTlOSiXwFbnwyLw_x4qUVfQfX-ks_4   4m47s   kubelet-bootstrap   Approved,Issued
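Approving each CSR by name gets tedious when several nodes bootstrap at once. A hedged convenience sketch that approves every outstanding request in one shot (fine for a lab; in production you would inspect each request first):

[root@k8s-master01 kubeconfig]# kubectl get csr -o name | xargs kubectl certificate approve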
Run the proxy.sh script on both node hosts.
[root@k8s-node01 ~]# bash proxy.sh 192.168.200.113
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
[root@k8s-node01 ~]# systemctl status kube-proxy.service
[root@k8s-node02 ~]# bash proxy.sh 192.168.200.114
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
[root@k8s-node02 ~]# systemctl status kube-proxy
[root@k8s-master02 ~]# kubectl get node
NAME              STATUS   ROLES    AGE   VERSION
192.168.200.113   Ready    <none>   25h   v1.12.3
192.168.200.114   Ready    <none>   25h   v1.12.3
On top of NodePort, Kubernetes can ask the underlying cloud platform to create a load balancer that distributes traffic across every Node as a backend. This mode requires support from the underlying cloud platform (GCE, for example).
Install and configure the Nginx service on the lb01 and lb02 hosts; lb01 is shown as the example.
[root@k8s-lb01 ~]# rpm -ivh epel-release-latest-7.noarch.rpm
[root@k8s-lb01 ~]# yum -y install nginx
[root@k8s-lb01 ~]# vim /etc/nginx/nginx.conf
events {
    worker_connections 1024;
}
stream {    # layer-4 proxying; stream is a sibling of http, so do not nest it inside the http block
    log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log /var/log/nginx/k8s-access.log main;
    upstream k8s-apiserver {    # the upstream points at port 6443 on both masters
        server 192.168.200.111:6443;
        server 192.168.200.112:6443;
    }
    server {    # listen on 6443 and reverse-proxy to the upstream
        listen 6443;
        proxy_pass k8s-apiserver;
    }
}
http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
    ……    # the rest of the default http block is unchanged
[root@k8s-lb01 ~]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
[root@k8s-lb01 ~]# systemctl start nginx && systemctl enable nginx
Created symlink from /etc/systemd/system/multi-user.target.wants/nginx.service to /usr/lib/systemd/system/nginx.service.
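To prove that the stream proxy really forwards to the masters, probe port 6443 on the LB itself. A minimal sketch: without credentials the apiserver typically answers with 401/403, and any well-formed HTTP response shows that the layer-4 proxying works (-k is needed because curl does not trust the self-signed cluster CA):

[root@k8s-lb01 ~]# curl -k https://127.0.0.1:6443/version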
Change the two Nginx nodes' home pages to tell them apart, then browse to both LB nodes to verify.
[root@k8s-lb01 ~]# echo "This is Master Server" > /usr/share/nginx/html/index.html [root@k8s-lb02 ~]# echo "This is Backup Server" > /usr/share/nginx/html/index.html
[root@k8s-lb01 ~]# yum -y install keepalived
[root@k8s-lb01 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
   router_id LVS_DEVEL
}
vrrp_script check_nginx {
    script "/etc/nginx/check_nginx.sh"
}
vrrp_instance VI_1 {
    state MASTER
    interface ens32
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.200.200
    }
    track_script {
        check_nginx
    }
}
[root@k8s-lb01 ~]# scp /etc/keepalived/keepalived.conf 192.168.200.116:/etc/keepalived/
[root@k8s-lb02 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
   router_id LVS_DEVEL
}
vrrp_script check_nginx {
    script "/etc/nginx/check_nginx.sh"
}
vrrp_instance VI_1 {
    state BACKUP        # changed from MASTER
    interface ens32
    virtual_router_id 51
    priority 90         # lower priority than the master
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.200.200
    }
    track_script {
        check_nginx
    }
}
On both LB nodes, create the health-check script: it counts the running nginx processes and, when the count is 0, stops the Keepalived service.
Run the following on both lb01 and lb02.
[root@k8s-lb01 ~]# vim /etc/nginx/check_nginx.sh
count=$(ps -ef|grep nginx|egrep -cv "grep|$$")
if [ "$count" -eq 0 ];then
    systemctl stop keepalived
fi
[root@k8s-lb01 ~]# chmod +x /etc/nginx/check_nginx.sh
[root@k8s-lb01 ~]# systemctl start keepalived && systemctl enable keepalived
Check the interface addresses: the floating IP 192.168.200.200 is present on the k8s-lb01 node, while k8s-lb02 has no floating address.
[root@k8s-lb01 ~]# ip a s ens32
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:3b:05:a3 brd ff:ff:ff:ff:ff:ff
    inet 192.168.200.115/24 brd 192.168.200.255 scope global noprefixroute ens32
       valid_lft forever preferred_lft forever
    inet 192.168.200.200/32 scope global ens32
       valid_lft forever preferred_lft forever
    inet6 fe80::e88c:df62:6a14:b1f3/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::5d90:6146:c376:1e0f/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::ea69:6f19:80be:abc3/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
[root@k8s-lb02 ~]# ip a s ens32
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:74:cc:8a brd ff:ff:ff:ff:ff:ff
    inet 192.168.200.116/24 brd 192.168.200.255 scope global noprefixroute ens32
       valid_lft forever preferred_lft forever
    inet6 fe80::5d90:6146:c376:1e0f/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
    inet6 fe80::e88c:df62:6a14:b1f3/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
    inet6 fe80::ea69:6f19:80be:abc3/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever
Verify failover: stop the Nginx service on k8s-lb01. Its floating IP disappears and the check script stops its Keepalived service; checking k8s-lb02, the floating IP is now bound there. When Nginx and Keepalived are started again on k8s-lb01, the floating IP moves back to the k8s-lb01 node.
[root@k8s-lb01 ~]# systemctl stop nginx
[root@k8s-lb01 ~]# ps aux | grep [k]eepalived
[root@k8s-lb02 ~]# ip a s ens32
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:74:cc:8a brd ff:ff:ff:ff:ff:ff
    inet 192.168.200.116/24 brd 192.168.200.255 scope global noprefixroute ens32
       valid_lft forever preferred_lft forever
    inet 192.168.200.200/32 scope global ens32
       valid_lft forever preferred_lft forever
    inet6 fe80::5d90:6146:c376:1e0f/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
    inet6 fe80::e88c:df62:6a14:b1f3/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
    inet6 fe80::ea69:6f19:80be:abc3/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever
Failback test:
[root@k8s-lb01 ~]# systemctl start nginx
[root@k8s-lb01 ~]# systemctl start keepalived
[root@k8s-lb01 ~]# ip a s ens32
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:3b:05:a3 brd ff:ff:ff:ff:ff:ff
    inet 192.168.200.115/24 brd 192.168.200.255 scope global noprefixroute ens32
       valid_lft forever preferred_lft forever
    inet 192.168.200.200/32 scope global ens32
       valid_lft forever preferred_lft forever
    inet6 fe80::e88c:df62:6a14:b1f3/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::5d90:6146:c376:1e0f/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::ea69:6f19:80be:abc3/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
[root@k8s-lb02 ~]# ip add s ens32
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:74:cc:8a brd ff:ff:ff:ff:ff:ff
    inet 192.168.200.116/24 brd 192.168.200.255 scope global noprefixroute ens32
       valid_lft forever preferred_lft forever
    inet6 fe80::e88c:df62:6a14:b1f3/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::5d90:6146:c376:1e0f/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::ea69:6f19:80be:abc3/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever
On both node hosts, edit the bootstrap.kubeconfig, kubelet.kubeconfig, and kube-proxy.kubeconfig files. All three point at the API Server's IP address, which must be updated to the VIP.
Run the following on both node01 and node02.
[root@k8s-node01 ~]# cd /opt/kubernetes/cfg/
[root@k8s-node01 cfg]# vim bootstrap.kubeconfig
……    # content omitted; change the server line to the VIP
    server: https://192.168.200.200:6443
……
[root@k8s-node01 cfg]# vim kubelet.kubeconfig
……
    server: https://192.168.200.200:6443
……
[root@k8s-node01 cfg]# vim kube-proxy.kubeconfig
……
    server: https://192.168.200.200:6443
……
[root@k8s-node01 cfg]# grep 200.200 *
bootstrap.kubeconfig:    server: https://192.168.200.200:6443
kubelet.kubeconfig:    server: https://192.168.200.200:6443
kube-proxy.kubeconfig:    server: https://192.168.200.200:6443
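Editing three files by hand on every node invites typos; an equivalent hedged one-liner with sed (assuming the old server line carries exactly the master01 address used earlier):

[root@k8s-node01 cfg]# sed -i 's#https://192.168.200.111:6443#https://192.168.200.200:6443#' \
  bootstrap.kubeconfig kubelet.kubeconfig kube-proxy.kubeconfig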
Restart the related services on both node hosts; run this on node01 and node02.
[root@k8s-node01 cfg]# systemctl restart kubelet
[root@k8s-node01 cfg]# systemctl restart kube-proxy
Tail Nginx's access log on the k8s-lb01 node. The entries show requests from both nodes being spread across both masters, so load balancing is in effect.
[root@k8s-lb01 ~]# tail -fn 200 /var/log/nginx/k8s-access.log
192.168.200.113 192.168.200.111:6443 - [29/Jan/2021:20:29:41 +0800] 200 1120
192.168.200.113 192.168.200.112:6443 - [29/Jan/2021:20:29:41 +0800] 200 1120
192.168.200.114 192.168.200.112:6443 - [29/Jan/2021:20:30:12 +0800] 200 1121
192.168.200.114 192.168.200.111:6443 - [29/Jan/2021:20:30:12 +0800] 200 1121
On the k8s-master01 node, create a Pod using the Nginx image.
[root@k8s-node01 ~]# docker pull nginx
[root@k8s-node02 ~]# docker pull nginx
[root@k8s-master01 ~]# kubectl run nginx --image=nginx
kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl create instead.
deployment.apps/nginx created
[root@k8s-master01 ~]# kubectl get pod
NAME                    READY   STATUS    RESTARTS   AGE
nginx-dbddb74b8-9f5m6   1/1     Running   1          21h
Grant permission to view logs.
[root@k8s-master01 ~]# kubectl create clusterrolebinding cluseter-system-anonymous --clusterrole=cluster-admin --user=system:anonymous
clusterrolebinding.rbac.authorization.k8s.io/cluseter-system-anonymous created
The -o wide flag prints the full network status: the container's IP is 172.17.11.2 and it is running on the node with IP 192.168.200.113.
[root@k8s-master01 ~]# kubectl get pods -o wide
NAME                    READY   STATUS    RESTARTS   AGE     IP            NODE              NOMINATED NODE
nginx-dbddb74b8-9f5m6   1/1     Running   0          4m27s   172.17.11.2   192.168.200.113   <none>
[root@k8s-node01 ~]# ip a s flannel.1
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
    link/ether a6:29:7d:74:2d:1a brd ff:ff:ff:ff:ff:ff
    inet 172.17.11.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::a429:7dff:fe74:2d1a/64 scope link
       valid_lft forever preferred_lft forever
Use curl against the Pod address 172.17.11.2. The access produces log entries; go back to k8s-master01 to read them and inspect the container. The other node can reach the address as well.
[root@k8s-node01 ~]# curl 172.17.11.2
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h2>Welcome to nginx!</h2>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
View the log output:
[root@k8s-master01 ~]# kubectl logs nginx-dbddb74b8-9f5m6
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
172.17.11.1 - - [29/Jan/2021:12:58:28 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.29.0" "-"
[root@k8s-master01 ~]# kubectl logs nginx-dbddb74b8-9f5m6
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
172.17.11.1 - - [29/Jan/2021:12:58:28 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.29.0" "-"
172.17.11.1 - - [29/Jan/2021:12:59:41 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.29.0" "-"
Thank you for reading this article to the end. We hope this walkthrough of "how to deploy Kubernetes for high availability" proves helpful, and that there is more related knowledge here waiting for you to explore.