
k8s Deployment --- Master Node Component Deployment (Part 3)

Published: 2020-02-27 09:07:40  Source: Web  Views: 451  Author: SiceLc  Category: Cloud Computing

Introduction to the kube-apiserver component

  • kube-apiserver exposes HTTP REST interfaces for the create, read, update, delete, and watch operations on all Kubernetes resource objects (Pod, RC, Service, and so on); it is the data bus and data hub of the entire system.

Functions of kube-apiserver

  • Provides the REST API for cluster management (including authentication and authorization, data validation, and cluster state changes)
  • Acts as the hub for data exchange and communication between the other components (they query or modify data through the API Server; only the API Server operates on etcd directly)
  • Serves as the entry point for resource quota control
  • Provides a complete cluster security mechanism

How kube-apiserver works

(architecture diagram)

Accessing the Kubernetes API

  • Kubernetes serves its API through the kube-apiserver process, which runs on the single k8s master node. By default it listens on two ports:
    • Local (insecure) port
    • Receives HTTP requests
    • Defaults to 8080; the default can be changed with the API Server startup flag "--insecure-port"
    • The default IP address is "localhost"; it can be changed with the "--insecure-bind-address" startup flag
    • Unauthenticated, unauthorized HTTP requests reach the API Server through this port
    • Secure port
    • Defaults to 6443; the default can be changed with the "--secure-port" startup flag
    • The default IP address is the non-localhost network interface, set with the "--bind-address" startup flag
    • Receives HTTPS requests
    • Used for authentication based on token files, client certificates, and HTTP Basic auth
    • Used for policy-based authorization
    • HTTPS secure access control is not enabled by default
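
The two ports can be probed with curl. This is a minimal sketch, not part of the original walkthrough; the addresses match this article's environment, and the `unreachable` fallback only exists so the snippet also runs on a machine where no apiserver is up:

```shell
#!/bin/sh
# Probe the two default kube-apiserver ports described above.
probe() {
  # -k skips certificate verification on the HTTPS port; --max-time keeps
  # the probe short. Prints "unreachable" when nothing answers.
  curl -sk --max-time 2 "$1/version" || echo "unreachable"
}
probe "http://127.0.0.1:8080"       # local port: plain, unauthenticated HTTP
probe "https://192.168.80.12:6443"  # secure port: HTTPS requests
```

On the master built below, the first probe prints the version JSON; the secure port answers over HTTPS but may reject unauthenticated requests depending on the authorization settings.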

Introduction to the kube-controller-manager component

  • As the management and control center inside the cluster, kube-controller-manager manages Nodes, Pod replicas, service endpoints (Endpoints), Namespaces, ServiceAccounts, and resource quotas (ResourceQuota). When a Node goes down unexpectedly, the Controller Manager detects it promptly and triggers the automated repair flow, keeping the cluster in its desired working state.

Introduction to the kube-scheduler component

  • kube-scheduler exists as a pluggable component, which is precisely what makes it extensible and customizable. It acts as the scheduling decision-maker for the entire cluster, choosing the best placement for a container through two phases: predicates (filtering) and priorities (scoring).
  • The scheduler's main responsibility is to find the most suitable node in the cluster for each newly created Pod and schedule the Pod onto it
  • From all the nodes in the cluster, the scheduling algorithm first selects every node that could run the Pod
  • It then picks the optimal node from those candidates as the final result
  • The scheduler runs on the master node. Its core loop watches the apiserver for Pods whose PodSpec.NodeName is empty, creates a binding that records which node the Pod should run on, and writes the scheduling result back to the apiserver

Main responsibilities of kube-scheduler

  • Cluster high availability: if kube-scheduler is started with the leader-elect flag, leader election is performed through etcd (both kube-scheduler and kube-controller-manager use this one-active-leader, multiple-standby HA scheme)
  • Watching schedulable resources: through the list-watch mechanism it watches kube-apiserver for resource changes, mainly to Pods and Nodes
  • Node assignment: using predicate (Predicates) and priority (Priorities) policies, it assigns a Node to each pending Pod, binds the Pod by filling in nodeName, and writes the result to etcd through kube-apiserver
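
The predicate-then-priority flow can be sketched with a toy script. This is only an illustration of the two phases, not the real scheduler: the node names, memory figures, and the single "most free memory" rule are all invented for the sketch:

```shell
#!/bin/sh
# Toy sketch of the two scheduling phases: predicates filter out nodes
# that cannot run the pod, priorities rank the survivors.
nodes="node01:2048 node02:4096 node03:512"   # node:free-memory-MiB (invented)
request=1024                                 # the pod's memory request

best="" ; best_score=0
for entry in $nodes; do
  node=${entry%%:*}
  free=${entry##*:}
  # Predicate phase: the node must have enough free memory for the request.
  [ "$free" -ge "$request" ] || continue
  # Priority phase: prefer the node with the most free memory left.
  if [ "$free" -gt "$best_score" ]; then
    best=$node ; best_score=$free
  fi
done
echo "bind pod -> $best"   # prints: bind pod -> node02
```

node03 is eliminated by the predicate (512 < 1024) and node02 wins the priority round, so the binding goes to node02.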

Hands-on deployment

Environment

  • Master01: 192.168.80.12
  • Node01: 192.168.80.13
  • Node02: 192.168.80.14
  • This walkthrough continues from the previous article's Flannel deployment, so the environment is unchanged; this part deploys the components the master node needs

Deploying the kube-apiserver component

  • On master01, prepare the self-signed apiserver certificates
    [root@master01 k8s]# cd /mnt/           //enter the host's mount directory
    [root@master01 mnt]# ls
    etcd-cert     etcd-v3.3.10-linux-amd64.tar.gz     k8s-cert.sh                           master.zip
    etcd-cert.sh  flannel.sh                          kubeconfig.sh                         node.zip
    etcd.sh       flannel-v0.10.0-linux-amd64.tar.gz  kubernetes-server-linux-amd64.tar.gz
    [root@master01 mnt]# cp master.zip /root/k8s/      //copy the archive to the k8s working directory
    [root@master01 mnt]# cd /root/k8s/             //enter the k8s working directory
    [root@master01 k8s]# ls
    cfssl.sh   etcd-v3.3.10-linux-amd64            kubernetes-server-linux-amd64.tar.gz
    etcd-cert  etcd-v3.3.10-linux-amd64.tar.gz     master.zip
    etcd.sh    flannel-v0.10.0-linux-amd64.tar.gz
    [root@master01 k8s]# unzip master.zip               //unpack the archive
    Archive:  master.zip
    inflating: apiserver.sh
    inflating: controller-manager.sh
    inflating: scheduler.sh
    [root@master01 k8s]# mkdir /opt/kubernetes/{cfg,bin,ssl} -p      //create the working directories on master01 (the same directories were created on the node machines earlier)
    [root@master01 k8s]# mkdir k8s-cert        //create the self-signed certificate directory
    [root@master01 k8s]# cp /mnt/k8s-cert.sh /root/k8s/k8s-cert    //copy the mounted certificate script into the certificate directory under the k8s working directory
    [root@master01 k8s]# cd k8s-cert         //enter the directory
    [root@master01 k8s-cert]# vim k8s-cert.sh     //edit the copied script
    ...
    cat > server-csr.json <<EOF
    {
    "CN": "kubernetes",
    "hosts": [
      "10.0.0.1",
      "127.0.0.1",
      "192.168.80.12",          //change to the master01 IP address
      "192.168.80.11",          //add the master02 IP address, preparing for the multi-master setup later
      "192.168.80.100",         //add the VRRP (virtual) address, preparing for the load balancing setup later
      "192.168.80.13",          //change to the node01 IP address
      "192.168.80.14",          //change to the node02 IP address
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
    }
    EOF
    ...
    :wq
    [root@master01 k8s-cert]# bash k8s-cert.sh      //run the script to generate the certificates
    2020/02/10 10:59:17 [INFO] generating a new CA key and certificate from CSR
    2020/02/10 10:59:17 [INFO] generate received request
    2020/02/10 10:59:17 [INFO] received CSR
    2020/02/10 10:59:17 [INFO] generating key: rsa-2048
    2020/02/10 10:59:17 [INFO] encoded CSR
    2020/02/10 10:59:17 [INFO] signed certificate with serial number 10087572098424151492431444614087300651068639826
    2020/02/10 10:59:17 [INFO] generate received request
    2020/02/10 10:59:17 [INFO] received CSR
    2020/02/10 10:59:17 [INFO] generating key: rsa-2048
    2020/02/10 10:59:17 [INFO] encoded CSR
    2020/02/10 10:59:17 [INFO] signed certificate with serial number 125779224158375570229792859734449149781670193528
    2020/02/10 10:59:17 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
    websites. For more information see the Baseline Requirements for the Issuance and Management
    of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
    specifically, section 10.2.3 ("Information Requirements").
    2020/02/10 10:59:17 [INFO] generate received request
    2020/02/10 10:59:17 [INFO] received CSR
    2020/02/10 10:59:17 [INFO] generating key: rsa-2048
    2020/02/10 10:59:17 [INFO] encoded CSR
    2020/02/10 10:59:17 [INFO] signed certificate with serial number 328087687681727386760831073265687413205940136472
    2020/02/10 10:59:17 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
    websites. For more information see the Baseline Requirements for the Issuance and Management
    of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
    specifically, section 10.2.3 ("Information Requirements").
    2020/02/10 10:59:17 [INFO] generate received request
    2020/02/10 10:59:17 [INFO] received CSR
    2020/02/10 10:59:17 [INFO] generating key: rsa-2048
    2020/02/10 10:59:18 [INFO] encoded CSR
    2020/02/10 10:59:18 [INFO] signed certificate with serial number 525069068228188747147886102005817997066385735072
    2020/02/10 10:59:18 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
    websites. For more information see the Baseline Requirements for the Issuance and Management
    of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
    specifically, section 10.2.3 ("Information Requirements").
    [root@master01 k8s-cert]# ls *pem       //list them; 8 certificate files are generated
    admin-key.pem  admin.pem  ca-key.pem  ca.pem  kube-proxy-key.pem  kube-proxy.pem  server-key.pem  server.pem
    [root@master01 k8s-cert]# cp ca*pem server*pem /opt/kubernetes/ssl/   //copy the CA and server certificates into the ssl directory of the k8s working directory
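
Before moving on it is worth confirming that the host list actually made it into the signed server certificate: a missing Subject Alternative Name entry is a common cause of TLS errors from kubelet or kubectl later. On the real master the check is simply `openssl x509 -in /opt/kubernetes/ssl/server.pem -noout -text`; the sketch below demonstrates the same check on a throwaway self-signed certificate built only for illustration (requires OpenSSL 1.1.1+ for `-addext`):

```shell
#!/bin/sh
# Build a throwaway self-signed certificate carrying a few of the same
# SAN entries as server-csr.json, then print its SAN section the same
# way you would inspect the real server.pem.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/demo-key.pem -out /tmp/demo.pem \
  -subj "/CN=kubernetes" \
  -addext "subjectAltName=IP:10.0.0.1,IP:127.0.0.1,IP:192.168.80.12,DNS:kubernetes,DNS:kubernetes.default.svc.cluster.local"
openssl x509 -in /tmp/demo.pem -noout -text | grep -A1 "Subject Alternative Name"
# On the real server.pem this grep must list every address and DNS name
# from the hosts array edited above.
```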
  • Configure the apiserver

    [root@master01 k8s-cert]# cd ..      //back to the k8s working directory
    [root@master01 k8s]# tar zxvf kubernetes-server-linux-amd64.tar.gz      //unpack the server tarball
    kubernetes/
    kubernetes/server/
    kubernetes/server/bin/
    ...
    [root@master01 k8s]# cd kubernetes/server/bin/     //enter the unpacked binaries directory
    [root@master01 bin]# ls
    apiextensions-apiserver              kube-apiserver.docker_tag           kube-proxy
    cloud-controller-manager             kube-apiserver.tar                  kube-proxy.docker_tag
    cloud-controller-manager.docker_tag  kube-controller-manager             kube-proxy.tar
    cloud-controller-manager.tar         kube-controller-manager.docker_tag  kube-scheduler
    hyperkube                            kube-controller-manager.tar         kube-scheduler.docker_tag
    kubeadm                              kubectl                             kube-scheduler.tar
    kube-apiserver                       kubelet                             mounter
    [root@master01 bin]# cp kube-apiserver kubectl kube-controller-manager kube-scheduler /opt/kubernetes/bin/          //copy the key binaries into the bin directory of the k8s working directory
    [root@master01 bin]# cd /root/k8s/
    [root@master01 k8s]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '      //generate a random token
    c37758077defd4033bfe95a071689272
    [root@master01 k8s]# vim /opt/kubernetes/cfg/token.csv           //create token.csv; think of it as declaring a bootstrap management role
    c37758077defd4033bfe95a071689272,kubelet-bootstrap,10001,"system:kubelet-bootstrap"   //token,user,uid,group -- use the token generated above
    :wq
    [root@master01 k8s]# bash apiserver.sh 192.168.80.12 https://192.168.80.12:2379,https://192.168.80.13:2379,https://192.168.80.14:2379    //with the binaries, token, and certificates in place, run the apiserver script to start the service and generate its config file
    Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
    [root@master01 k8s]# ps aux | grep kube       //check that the process started successfully
    root      17088  8.7 16.7 402260 312192 ?       Ssl  11:17   0:08 /opt/kubernetes/bin/kube-apiserver --logtostderr=true --v=4 --etcd-servers=https://192.168.80.12:2379,https://192.168.80.13:2379,https://192.168.80.14:2379 --bind-address=192.168.80.12 --secure-port=6443 --advertise-address=192.168.80.12 --allow-privileged=true --service-cluster-ip-range=10.0.0.0/24 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction --authorization-mode=RBAC,Node --kubelet-https=true --enable-bootstrap-token-auth --token-auth-file=/opt/kubernetes/cfg/token.csv --service-node-port-range=30000-50000 --tls-cert-file=/opt/kubernetes/ssl/server.pem --tls-private-key-file=/opt/kubernetes/ssl/server-key.pem --client-ca-file=/opt/kubernetes/ssl/ca.pem --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem    --etcd-cafile=/opt/etcd/ssl/ca.pem --etcd-certfile=/opt/etcd/ssl/server.pem --etcd-keyfile=/opt/etcd/ssl/server-key.pem
    root      17101  0.0  0.0 112676   980 pts/0    S+   11:19   0:00 grep --color=auto kube
    [root@master01 k8s]# cat /opt/kubernetes/cfg/kube-apiserver    //inspect the generated config file
    
    KUBE_APISERVER_OPTS="--logtostderr=true \
    --v=4 \
    --etcd-servers=https://192.168.80.12:2379,https://192.168.80.13:2379,https://192.168.80.14:2379 \
    --bind-address=192.168.80.12 \
    --secure-port=6443 \
    --advertise-address=192.168.80.12 \
    --allow-privileged=true \
    --service-cluster-ip-range=10.0.0.0/24 \
    --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
    --authorization-mode=RBAC,Node \
    --kubelet-https=true \
    --enable-bootstrap-token-auth \
    --token-auth-file=/opt/kubernetes/cfg/token.csv \
    --service-node-port-range=30000-50000 \
    --tls-cert-file=/opt/kubernetes/ssl/server.pem  \
    --tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
    --client-ca-file=/opt/kubernetes/ssl/ca.pem \
    --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
    --etcd-cafile=/opt/etcd/ssl/ca.pem \
    --etcd-certfile=/opt/etcd/ssl/server.pem \
    --etcd-keyfile=/opt/etcd/ssl/server-key.pem"
    [root@master01 k8s]# netstat -ntap | grep 6443      //check that the secure port is listening
    tcp        0      0 192.168.80.12:6443      0.0.0.0:*               LISTEN      17088/kube-apiserve
    tcp        0      0 192.168.80.12:48320     192.168.80.12:6443      ESTABLISHED 17088/kube-apiserve
    tcp        0      0 192.168.80.12:6443      192.168.80.12:48320     ESTABLISHED 17088/kube-apiserve
    [root@master01 k8s]# netstat -ntap | grep 8080      //check that the local port is listening
    tcp        0      0 127.0.0.1:8080          0.0.0.0:*               LISTEN      17088/kube-apiserve
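
A note on the token.csv created above: its single line has the fixed format `token,user,uid,group`, and the token here is a 32-character hex string produced from 16 random bytes. A small self-contained sanity sketch, written under /tmp so it cannot disturb the real /opt/kubernetes/cfg/token.csv:

```shell
#!/bin/sh
# Regenerate a bootstrap token the same way as above and build a
# token.csv line, then verify the format: token,user,uid,group.
TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
echo "${TOKEN},kubelet-bootstrap,10001,\"system:kubelet-bootstrap\"" > /tmp/token.csv
# 16 random bytes hex-encode to exactly 32 characters.
grep -Eq '^[0-9a-f]{32},kubelet-bootstrap,10001,"system:kubelet-bootstrap"$' /tmp/token.csv \
  && echo "token.csv format ok"
```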
  • Configure the scheduler service
    [root@master01 k8s]# ./scheduler.sh 127.0.0.1       //run the script to start the service and generate its config file
    Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
    [root@master01 k8s]# systemctl status kube-scheduler.service      //check the service status
    ● kube-scheduler.service - Kubernetes Scheduler
    Loaded: loaded (/usr/lib/systemd/system/kube-scheduler.service; enabled; vendor preset: disabled)
    Active: active (running) since Mon 2020-02-10 11:22:13 CST; 2min 46s ago     //running
     Docs: https://github.com/kubernetes/kubernetes
     ...
  • Configure the controller-manager service
    [root@master01 k8s]# chmod +x controller-manager.sh       //make the script executable
    [root@master01 k8s]# ./controller-manager.sh 127.0.0.1    //run the script to start the service and generate its config file
    Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
    [root@master01 k8s]# systemctl status kube-controller-manager.service     //check the service status
    ● kube-controller-manager.service - Kubernetes Controller Manager 
    Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; enabled; vendor preset: disabled)
    Active: active (running) since Mon 2020-02-10 11:28:21 CST; 7min ago     //running
    ...
    [root@master01 k8s]# /opt/kubernetes/bin/kubectl get cs      //check component health
    NAME                 STATUS    MESSAGE             ERROR
    scheduler            Healthy   ok
    controller-manager   Healthy   ok
    etcd-2               Healthy   {"health":"true"}
    etcd-0               Healthy   {"health":"true"}
    etcd-1               Healthy   {"health":"true"}

    The master node components are now deployed.
