Installing Kubernetes 1.12.1 on CentOS 7

Published: 2020-07-02 11:14:46  Author: 一王小可一  Category: Cloud Computing


Tags: CentOS-7, Kubernetes-1.12.1


Environment:

OS: CentOS 7, kernel 4.19.0-1.el7.elrepo.x86_64
Kubernetes: 1.12.1
Topology: one master and one node

Prerequisites (run on every server):

1. Enable iptables processing of bridged traffic

First check the current value with cat /proc/sys/net/bridge/bridge-nf-call-iptables; if it is already 1 you can skip the rest of this step, otherwise continue below to enable it.
1.1 Edit the config file

sed -i '7,9s/0/1/g' /usr/lib/sysctl.d/00-system.conf

1.2 Load the br_netfilter module (check whether it is already loaded with lsmod | grep netfilter; a sketch for loading it on every boot follows step 1.3)

modprobe br_netfilter

1.3 Apply the change

sysctl -p  /usr/lib/sysctl.d/00-system.conf
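
The modprobe above does not survive a reboot. As a minimal sketch (the drop-in file name is my own choice, not from the original setup), the module can be loaded automatically at boot via systemd-modules-load, and the result verified:

echo 'br_netfilter' > /etc/modules-load.d/br_netfilter.conf
lsmod | grep br_netfilter
cat /proc/sys/net/bridge/bridge-nf-call-iptables    # should now print 1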

2. Disable swap

2.1 Edit the config file

echo 'vm.swappiness = 0' >> /usr/lib/sysctl.d/00-system.conf

2.2 Apply the change

sysctl -p /usr/lib/sysctl.d/00-system.conf

2.3 Turn off swap

swapoff -a

2.4 Comment out the swap entry in /etc/fstab so swap is not mounted at boot (a sed one-liner is sketched below)

Before:
/dev/mapper/cl-swap     swap                    swap    defaults        0 0
After:
#/dev/mapper/cl-swap     swap                    swap    defaults        0 0
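
An equivalent sed one-liner, assuming the swap entry looks like the line above (GNU sed; review /etc/fstab afterwards):

sed -i '/^[^#].*swap/ s/^/#/' /etc/fstab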

3. Add /etc/hosts entries so the hostnames resolve

echo -e '192.168.2.168 node1.ztpt.com\n192.168.2.162 node2.ztpt.com\n192.168.2.170 node3.ztpt.com' >> /etc/hosts

4. Disable the iptables, SELinux, and firewalld services (the output below shows them already off; a sketch of the disabling commands follows)

[root@node1 ~]# getenforce 
Disabled
[root@node1 ~]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)
[root@node1 ~]# systemctl status iptables
● iptables.service - IPv4 firewall with iptables
   Loaded: loaded (/usr/lib/systemd/system/iptables.service; disabled; vendor preset: disabled)
   Active: inactive (dead)
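
The output above shows SELinux disabled and both firewall services stopped. If they are still enabled on your hosts, a minimal sketch of turning them off (setenforce 0 only switches SELinux to permissive until the next reboot; the config edit makes the change permanent):

setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
systemctl stop firewalld && systemctl disable firewalld
systemctl stop iptables && systemctl disable iptables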

5. Install docker, kubelet, kubectl, and kubeadm on all three servers

  • Install Docker CE (see also: https://blog.51cto.com/wangxiaoke/2174103)
    Add the Docker repository
    wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

    Install dependencies

    sudo yum install -y yum-utils device-mapper-persistent-data lvm2

    Install docker-ce

    yum install -y docker-ce

    Enable the docker service at boot

    systemctl enable docker.service 

    Configure a registry mirror (Aliyun provides a free mirror service)

    sudo tee /etc/docker/daemon.json <<-'EOF'
    {
    "registry-mirrors": ["https://kzflpq4b.mirror.aliyuncs.com"]
    }
    EOF
    sudo systemctl daemon-reload
    sudo systemctl restart docker
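
    To confirm the mirror configuration was picked up (the exact output formatting varies slightly between Docker versions):

    docker info | grep -A 1 -i 'registry mirrors'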
  • Install the Kubernetes components: kubelet, kubectl, and kubeadm
    Add the Kubernetes repository
    tee /etc/yum.repos.d/kubernetes.repo << EOF
    [Kubernetes]
    name=kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
    gpgcheck=1
    gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/apt-key.gpg
    EOF
    # rebuild the yum metadata cache
    sudo yum makecache

    Install kubelet, kubectl, and kubeadm
    Note: if yum complains about the GPG keys, import them manually with rpm --import (a sketch follows at the end of this step) or disable gpgcheck

    yum install -y kubelet kubectl kubeadm

    Enable kubelet at boot

    systemctl enable kubelet.service
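
    If yum does reject the packages over the GPG keys, they can be imported by hand using the same key URLs as in the repo file above:

    rpm --import https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
    rpm --import https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg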

Note: the steps that are common to every node end here; from this point on, pay attention to which steps run on the master and which run on the nodes.

Operations on the master

  • Initialize the master node
    Note: the official k8s.gcr.io registry is not reachable from mainland China, so there are two workarounds:
    ① Pull the images from a mirror on Aliyun or Docker Hub and retag them; people have even published scripts for this
    ② Configure a proxy for Docker

    Here I use the script approach (a short usage sketch follows the script).
    Script contents:
    #!/bin/sh
    # pull the images
    docker pull mirrorgooglecontainers/kube-apiserver:v1.12.1
    docker pull mirrorgooglecontainers/kube-controller-manager:v1.12.1
    docker pull mirrorgooglecontainers/kube-scheduler:v1.12.1
    docker pull mirrorgooglecontainers/kube-proxy:v1.12.1
    docker pull mirrorgooglecontainers/pause:3.1
    docker pull mirrorgooglecontainers/etcd:3.2.24
    docker pull coredns/coredns:1.2.2
    # retag to the names kubeadm expects
    docker tag mirrorgooglecontainers/kube-proxy:v1.12.1  k8s.gcr.io/kube-proxy:v1.12.1
    docker tag mirrorgooglecontainers/kube-scheduler:v1.12.1 k8s.gcr.io/kube-scheduler:v1.12.1
    docker tag mirrorgooglecontainers/kube-apiserver:v1.12.1 k8s.gcr.io/kube-apiserver:v1.12.1
    docker tag mirrorgooglecontainers/kube-controller-manager:v1.12.1 k8s.gcr.io/kube-controller-manager:v1.12.1
    docker tag mirrorgooglecontainers/etcd:3.2.24  k8s.gcr.io/etcd:3.2.24
    docker tag coredns/coredns:1.2.2 k8s.gcr.io/coredns:1.2.2
    docker tag mirrorgooglecontainers/pause:3.1  k8s.gcr.io/pause:3.1
    # remove the now-unneeded source images
    docker rmi mirrorgooglecontainers/kube-apiserver:v1.12.1
    docker rmi mirrorgooglecontainers/kube-controller-manager:v1.12.1
    docker rmi mirrorgooglecontainers/kube-scheduler:v1.12.1
    docker rmi mirrorgooglecontainers/kube-proxy:v1.12.1
    docker rmi mirrorgooglecontainers/pause:3.1
    docker rmi mirrorgooglecontainers/etcd:3.2.24
    docker rmi coredns/coredns:1.2.2
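
    Save the script to a file and run it with sh; the file name below is my own choice:

    sh pull-k8s-images.sh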

    Initialize the master with:

    kubeadm init --kubernetes-version=stable-1 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12

    The output looks like this (save it, you will need it later):

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.2.168:6443 --token j1v9o1.wxd0xz5mv1qgo6b1 --discovery-token-ca-cert-hash sha256:6ae6c734198b0a69e73c8d7b576e8692514e3aa642f9431d21234e86f35b316f

Following the prompt, run:

 mkdir -p $HOME/.kube
 sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
 sudo chown $(id -u):$(id -g) $HOME/.kube/config

Install flannel (this step pulls images and starts pods automatically; depending on your network it may take a while, so be patient):

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Check the node status

[root@node1 ~]# kubectl get node
NAME             STATUS     ROLES    AGE     VERSION
node1.ztpt.com   NotReady   master   6m45s   v1.12.1

List the namespaces

[root@node2 ~]# kubectl get namespace
NAME          STATUS   AGE
default       Active   12h
kube-public   Active   12h
kube-system   Active   12h

Check the pods (pay attention to the READY and STATUS columns; if something looks wrong, inspect the pod logs and the kubelet logs with the commands given later in this article, and as a last resort reset the initialization, also described below)

[root@node1 ~]# kubectl get pods --namespace=kube-system -o wide
NAME                                     READY   STATUS    RESTARTS   AGE   IP              NODE             NOMINATED NODE
coredns-576cbf47c7-5tgnm                 1/1     Running   1          18h   10.244.0.47     node1.ztpt.com   <none>
coredns-576cbf47c7-r9fr6                 1/1     Running   1          18h   10.244.0.46     node1.ztpt.com   <none>
etcd-node1.ztpt.com                      1/1     Running   1          18h   192.168.2.168   node1.ztpt.com   <none>
kube-apiserver-node1.ztpt.com            1/1     Running   1          18h   192.168.2.168   node1.ztpt.com   <none>
kube-controller-manager-node1.ztpt.com   1/1     Running   1          18h   192.168.2.168   node1.ztpt.com   <none>
kube-flannel-ds-amd64-rx9jw              1/1     Running   1          18h   192.168.2.168   node1.ztpt.com   <none>
kube-proxy-nnmpj                         1/1     Running   1          18h   192.168.2.168   node1.ztpt.com   <none>
kube-scheduler-node1.ztpt.com            1/1     Running   1          18h   192.168.2.168   node1.ztpt.com   <none>

Joining a node to the cluster (if you have lost the join command, see the end of this article)

First make sure the prerequisites above have been completed on the node, and that all pods on the master are healthy.
Then run the join command.

Because the registry is blocked, export the following three images from the master and import them on the node:
k8s.gcr.io/kube-proxy v1.12.1
quay.io/coreos/flannel v0.10.0-amd64
k8s.gcr.io/pause 3.1
Export with: docker save <image> > <image>.tar
Import with: docker load < <image>.tar
When I ran the join I got an error about a missing IPVS support module; loading the ip_vs_sh, ip_vs_wrr, ip_vs_rr, and ip_vs modules with modprobe fixed it (see the sketch below).
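
A concrete sketch of both preparation steps, assuming the node is reachable as node2.ztpt.com (the host name and .tar file names are illustrative):

# on the master: export the three images and copy them over
docker save k8s.gcr.io/kube-proxy:v1.12.1 > kube-proxy.tar
docker save quay.io/coreos/flannel:v0.10.0-amd64 > flannel.tar
docker save k8s.gcr.io/pause:3.1 > pause.tar
scp kube-proxy.tar flannel.tar pause.tar root@node2.ztpt.com:/root/

# on the node: import the images
docker load < kube-proxy.tar
docker load < flannel.tar
docker load < pause.tar

# on the node: load the IPVS modules the join complained about
modprobe ip_vs
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh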

kubeadm join 192.168.2.168:6443 --token 1evrs8.iz8bl6l77jtal4na --discovery-token-ca-cert-hash sha256:fd509be1a3362afbff39ed807b5c25ef7a5034feb6876df1b76c0a0d8eb637db

Removing a node

# first put the node into maintenance mode by draining it (node2.ztpt.com is the node name)
kubectl drain node2.ztpt.com --delete-local-data --force --ignore-daemonsets
# then delete the node
kubectl delete node node2.ztpt.com
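
After the node has been deleted, it is also common to run kubeadm reset on the removed node itself so its local kubelet and kubeadm state is cleaned up (the same reset command used in the notes below):

kubeadm reset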

Miscellaneous notes

If initialization fails, clean up with the following commands and then run the init again

kubeadm reset
ip link delete flannel.1
ip link delete cni0
rm -rf /var/lib/etcd/*

View the kubelet logs

[root@node2 ~]# journalctl -u kubelet -f

View a pod's logs

[root@node2 ~]# kubectl logs -f kube-apiserver-node2.ztpt.com --namespace=kube-system
# -f streams the output, like the -f in tail -f
# kube-apiserver-node2.ztpt.com is the pod name

What if you have forgotten the kubeadm join command?

  • The token used by kubeadm join is valid for 24 hours by default; once it has expired, create a new one with kubeadm token create
  • If you have simply forgotten the token, list the existing ones with kubeadm token list; if it has expired you still have to create a new one
  • If you have also lost the --discovery-token-ca-cert-hash value, recover it with openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //', then join the cluster with the new token and CA hash (a sketch follows this list)
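
A minimal sketch that puts these pieces together on the master; kubeadm can also print a ready-made join command for you:

# list existing tokens and generate a complete join command
kubeadm token list
kubeadm token create --print-join-command

# or assemble it manually from a fresh token and the CA cert hash
kubeadm token create
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'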