Scaling out node nodes in a binary-deployed K8s production environment
Since the project's microservices are also deployed and maintained in the k8s cluster, scaling out node nodes is a routine task. When adding node nodes, make sure the container network across the whole cluster stays fully interconnected; this is a critical step. What follows is based on my own experience and is for reference only.
My cluster was deployed with the binary method, which makes adding node nodes quite straightforward.
Adding a node takes two steps: first, copy the configuration from an existing node to the new node; second, open up the container network so the new node can reach the rest of the cluster.
Here I add two node nodes at once.
Step 1:
[root@k8s-node3 ~]# mkdir -p /opt/kubernetes/{bin,ssl,cfg}
[root@k8s-node4 ~]# mkdir -p /opt/kubernetes/{bin,ssl,cfg}
[root@k8s-master1 ~]# scp /data/k8s/soft/kubernetes/server/bin/{kubelet,kube-proxy} root@192.168.30.25:/opt/kubernetes/bin/
[root@k8s-master1 ~]# scp /data/k8s/soft/kubernetes/server/bin/{kubelet,kube-proxy} root@192.168.30.26:/opt/kubernetes/bin/
[root@k8s-node1 ~]# scp -r /opt/kubernetes/ root@192.168.30.25:/opt
[root@k8s-node1 ~]# scp -r /opt/kubernetes/ root@192.168.30.26:/opt
[root@k8s-node1 ~]# scp /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@192.168.30.25:/usr/lib/systemd/system
[root@k8s-node1 ~]# scp /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@192.168.30.26:/usr/lib/systemd/system
Now operate on node3.
Delete the certificates among the copied files; they belong to node1 and need to be regenerated for the new node.
[root@k8s-node3 ~]# cd /opt/kubernetes/ssl/
[root@k8s-node3 ssl]# ls
kubelet-client-2019-11-07-14-37-36.pem kubelet-client-current.pem kubelet.crt kubelet.key
[root@k8s-node3 ssl]# rm -rf *
Delete them on node4 in the same way.
[root@k8s-node4 ~]# cd /opt/kubernetes/ssl/
[root@k8s-node4 ssl]# ls
kubelet-client-2019-11-07-14-37-36.pem kubelet-client-current.pem kubelet.crt kubelet.key
[root@k8s-node4 ssl]# rm -rf *
Modify the IPs: in the config files, change node1's IP to the third node's own address.
[root@k8s-node3 cfg]# grep 23 *
kubelet:--hostname-override=192.168.30.23 \
kubelet.config:address: 192.168.30.23
kube-proxy:--hostname-override=192.168.30.23 \
The same change applies when adding the fourth node, as sketched below.
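A quick way to make these edits is with sed (a sketch; 192.168.30.23 is node1's address found by the grep above, and each new node substitutes its own address):
[root@k8s-node3 cfg]# sed -i 's/192.168.30.23/192.168.30.25/g' kubelet kubelet.config kube-proxy
[root@k8s-node4 cfg]# sed -i 's/192.168.30.23/192.168.30.26/g' kubelet kubelet.config kube-proxy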
Remember that scaling out requires a Docker environment on the new nodes, so install docker-ce first.
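If docker-ce is not installed yet, a typical install on CentOS looks like this (a sketch; the repo URL and package manager are assumptions, adjust to your mirror and OS):
[root@k8s-node3 ~]# yum install -y yum-utils
[root@k8s-node3 ~]# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
[root@k8s-node3 ~]# yum install -y docker-ce
[root@k8s-node3 ~]# systemctl enable --now docker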
[root@k8s-node3 ~]# systemctl restart docker
[root@k8s-node3 ~]# docker -v
Docker version 19.03.4, build 9013bf583a
[root@k8s-node4 ~]# systemctl restart docker
[root@k8s-node4 ~]# docker -v
Docker version 19.03.4, build 9013bf583a
We also need the etcd files; copy them over as well, and then restart the services.
[root@k8s-node1 ~]# scp -r /opt/etcd/ root@192.168.30.25:/opt
[root@k8s-node1 ~]# scp -r /opt/etcd/ root@192.168.30.26:/opt
After changing these to the .25 host's own IP, start the services:
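Before starting, it does not hurt to reload systemd and enable the units so they come back after a reboot (an optional extra step, not part of the original flow):
[root@k8s-node3 cfg]# systemctl daemon-reload
[root@k8s-node3 cfg]# systemctl enable kubelet kube-proxy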
[root@k8s-node3 cfg]# systemctl restart kubelet
[root@k8s-node3 cfg]# systemctl restart kube-proxy.service
[root@k8s-node3 cfg]# ps -ef |grep kube
root 86738 1 6 21:27 ? 00:00:00 /opt/kubernetes/bin/kubelet --logtostderr=false --log-dir=/opt/kubernetes/log --v=4 --hostname-override=192.168.30.25 --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig --bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig --config=/opt/kubernetes/cfg/kubelet.config --cert-dir=/opt/kubernetes/ssl --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0
root 86780 1 35 21:28 ? 00:00:02 /opt/kubernetes/bin/kube-proxy --logtostderr=true --v=4 --hostname-override=192.168.30.25 --cluster-cidr=10.0.0.0/24 --proxy-mode=ipvs --kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig
root 86923 66523 0 21:28 pts/1 00:00:00 grep --color=auto kube
On the master we can see a new node requesting to join:
[root@k8s-master1 ~]# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-eH_jPNUBXJF6sIii9SvNz9fW71543MLjPvOYWeDteqo 90s kubelet-bootstrap Pending
node-csr-xLNLbvb3cibW-fyr_5Qyd3YuUYAX9DJgDwViu3AyXMk 31m kubelet-bootstrap Approved,Issued
Issue (approve) the certificate:
[root@k8s-master1 ~]# kubectl certificate approve node-csr-eH_jPNUBXJF6sIii9SvNz9fW71543MLjPvOYWeDteqo
certificatesigningrequest.certificates.k8s.io/node-csr-eH_jPNUBXJF6sIii9SvNz9fW71543MLjPvOYWeDteqo approved
[root@k8s-master1 ~]# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-eH_jPNUBXJF6sIii9SvNz9fW71543MLjPvOYWeDteqo 3m18s kubelet-bootstrap Approved,Issued
node-csr-xLNLbvb3cibW-fyr_5Qyd3YuUYAX9DJgDwViu3AyXMk 33m kubelet-bootstrap Approved,Issued
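If several new nodes join at once, the requests can also be approved in one go (a convenience sketch, not required):
[root@k8s-master1 ~]# kubectl get csr -o name | xargs kubectl certificate approve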
Check the node status:
[root@k8s-master1 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
192.168.30.23 Ready <none> 25m v1.15.1
192.168.30.24 Ready <none> 51s v1.15.1
192.168.30.25 Ready <none> 25m v1.15.1
192.168.30.26 Ready <none> 51s v1.15.1
Step 2:
Open up network communication between the containers; here I use flannel to manage it.
Prepare the Docker environment. We set it up earlier, but we still need to allocate a subnet to each new node, and flanneld and docker have to end up in the same subnet.
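In this kind of binary deployment the two usually end up in one subnet because flanneld writes the subnet it leases from etcd into an environment file (typically /run/flannel/subnet.env) that the docker.service unit copied below sources, so dockerd starts with a matching --bip. A quick sanity check on a new node (the file path is an assumption that depends on your flanneld options):
[root@k8s-node3 ~]# cat /run/flannel/subnet.env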
Deploy flannel on the newly added nodes by copying the deployment files over:
[root@k8s-node1 ~]# scp /usr/lib/systemd/system/{flanneld,docker}.service root@192.168.30.25:/usr/lib/systemd/system
[root@k8s-node1 ~]# scp /usr/lib/systemd/system/{flanneld,docker}.service root@192.168.30.26:/usr/lib/systemd/system
On node1, run the flannel.sh script and point it at all of our etcd endpoints:
[root@k8s-node1 ~]# ./flannel.sh https://192.168.30.21:2379,https://192.168.30.22:2379,https://192.168.30.23:2379,https://192.168.30.24:2379,https://192.168.30.25:2379,https://192.168.30.26:2379
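For reference, the generated /opt/kubernetes/cfg/flanneld should look roughly like this (a sketch built from the endpoints above; the certificate paths are assumptions and must match your /opt/etcd/ssl directory):
FLANNEL_OPTIONS="--etcd-endpoints=https://192.168.30.21:2379,https://192.168.30.22:2379,https://192.168.30.23:2379,https://192.168.30.24:2379,https://192.168.30.25:2379,https://192.168.30.26:2379 -etcd-cafile=/opt/etcd/ssl/ca.pem -etcd-certfile=/opt/etcd/ssl/server.pem -etcd-keyfile=/opt/etcd/ssl/server-key.pem"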
Then copy the flanneld config we just generated to the new nodes:
[root@k8s-node1 ~]# cd /opt/kubernetes/cfg/
[root@k8s-node1 cfg]# ls
bootstrap.kubeconfig flanneld kubelet kubelet.config kubelet.kubeconfig kube-proxy kube-proxy.kubeconfig
[root@k8s-node1 cfg]# scp flanneld root@192.168.30.25:/opt/kubernetes/cfg/
[root@k8s-node1 cfg]# scp flanneld root@192.168.30.26:/opt/kubernetes/cfg/
Restart the services on the new nodes.
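On each new node that means roughly the following (flanneld first, then docker so it picks up the new subnet; the enable step is an optional addition):
[root@k8s-node3 ~]# systemctl daemon-reload
[root@k8s-node3 ~]# systemctl enable flanneld docker
[root@k8s-node3 ~]# systemctl start flanneld
[root@k8s-node3 ~]# systemctl restart docker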
Check whether the flannel network is in the same subnet as docker:
[root@k8s-node3 ~]# ip a
5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:97:f5:6c:cd brd ff:ff:ff:ff:ff:ff
inet 172.17.25.1/24 brd 172.17.25.255 scope global docker0
valid_lft forever preferred_lft forever
6: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
link/ether b2:1a:97:5c:61:1f brd ff:ff:ff:ff:ff:ff
inet 172.17.25.0/32 scope global flannel.1
valid_lft forever preferred_lft forever
[root@k8s-node4 ~]# systemctl start flanneld
[root@k8s-node4 ~]# systemctl restart docker
[root@k8s-node4 ~]# ip a
5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:3f:3c:a8:62 brd ff:ff:ff:ff:ff:ff
inet 172.17.77.1/24 brd 172.17.77.255 scope global docker0
valid_lft forever preferred_lft forever
6: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
link/ether 96:1c:bc:ec:05:d6 brd ff:ff:ff:ff:ff:ff
inet 172.17.77.0/32 scope global flannel.1
Also test whether containers on the different nodes can all reach each other across each node's network.
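To pick target addresses on the other nodes, first list the pod IPs and the node each pod runs on (a helper step, not from the original write-up), then exec into one of the pods and ping across:
[root@k8s-master1 ~]# kubectl get pods -o wide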
[root@k8s-master1 ~]# kubectl exec -it nginx-deployment-7b8677db56-wkbzb /bin/sh
ping 172.17.79.2
PING 172.17.79.2 (172.17.79.2): 56 data bytes
64 bytes from 172.17.79.2: icmp_seq=0 ttl=62 time=0.703 ms
64 bytes from 172.17.79.2: icmp_seq=1 ttl=62 time=0.459 ms
^C--- 172.17.79.2 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.459/0.581/0.703/0.122 ms
ping 172.17.40.3
PING 172.17.40.3 (172.17.40.3): 56 data bytes
64 bytes from 172.17.40.3: icmp_seq=0 ttl=62 time=0.543 ms
64 bytes from 172.17.40.3: icmp_seq=1 ttl=62 time=0.404 ms
^C--- 172.17.40.3 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.404/0.474/0.543/0.070 ms
ping 172.17.6.3
PING 172.17.6.3 (172.17.6.3): 56 data bytes
64 bytes from 172.17.6.3: icmp_seq=0 ttl=62 time=0.385 ms
64 bytes from 172.17.6.3: icmp_seq=1 ttl=62 time=0.323 ms
^C--- 172.17.6.3 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.323/0.354/0.385/0.031 ms
The tests succeed; all of them are reachable.