(Part 2) Building a Complete Kubernetes/K8s Cluster, v1.16

Published: 2020-07-29 09:16:34   Source: Internet   Author: dwl1988721   Column: System Operations

[Figure: single-node cluster architecture]
Single-node cluster
[Figure: multi-node cluster architecture]
Multi-node cluster. Note that the nodes connect to the load balancer, which forwards their traffic to the masters' kube-apiserver.
Cluster plan:

Role               IP                                         Components
K8S-master1        192.168.0.101                              kube-apiserver kube-controller-manager kube-scheduler etcd
K8S-master2        192.168.0.102                              kube-apiserver kube-controller-manager kube-scheduler etcd
K8S-node1          192.168.0.103                              kubelet kube-proxy docker etcd
K8S-node2          192.168.0.104                              kubelet kube-proxy docker etcd
K8S-load-balancer  192.168.0.106 (VIP), actual IP 192.168.0.105   Nginx (L4)

1. System initialization

## Disable the firewall:
systemctl stop firewalld
systemctl disable firewalld

## Disable SELinux:
setenforce 0                                        ## temporary
sed -i 's/enforcing/disabled/' /etc/selinux/config  ## permanent

## Disable swap:
swapoff -a       ## temporary
vim /etc/fstab   ## comment out the swap line for a permanent change

## Synchronize the system time:
ntpdate time.windows.com   ## ntpdate may need to be installed first; nodes can also sync against an internal time source:
ntpdate 192.168.0.101

## Add hosts entries:
vim /etc/hosts
192.168.0.101 k8s-master1
192.168.0.102 k8s-master2
192.168.0.103 k8s-node1
192.168.0.104 k8s-node2

## Set the hostname:
hostnamectl set-hostname k8s-master1
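
These initialization steps have to be repeated on every machine in the plan. A minimal sketch that pushes them out in one loop, assuming root SSH access from master1 and the host list from the table above (the fstab sed is an assumption-labeled alternative to editing the file by hand):

## Hypothetical helper: run the common initialization on every host in the plan.
for host in 192.168.0.101 192.168.0.102 192.168.0.103 192.168.0.104 192.168.0.105; do
  ssh root@$host "systemctl disable --now firewalld; setenforce 0; \
    sed -i 's/enforcing/disabled/' /etc/selinux/config; swapoff -a; \
    sed -i '/ swap / s/^/#/' /etc/fstab"   ## comment out the swap line permanently
done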

2. Installing the etcd cluster
(1) Certificate issuance (note: the etcd cluster uses mutual TLS certificates)

# cd TLS/etcd
Install the cfssl tools:
# ./cfssl.sh
#curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
#curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
#curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo
cp -rf cfssl cfssl-certinfo cfssljson /usr/local/bin
chmod +x /usr/local/bin/cfssl*
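
A quick sanity check that the tools are installed and on PATH (exact output varies by build):

cfssl version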

Modify the hosts field in the request file so it contains every etcd node IP:
# vi server-csr.json   (the CSR that carries the concrete hosts/SANs)
{
    "CN": "etcd",
    "hosts": [
        "192.168.0.101",
        "192.168.0.103",
        "192.168.0.104"
        ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing"
        }
    ]
}

The CA root certificate CSR (ca-csr.json):

{
    "CN": "etcd CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}

The CA configuration used to issue the etcd mutual-TLS certificates (ca-config.json):

{
  "signing": {
    "default": {
      "expiry": "876000h"
    },
    "profiles": {
      "www": {
         "expiry": "876000h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}

Generate the CA root certificate:

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

Issue the etcd certificate:

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
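
Optionally, before copying server.pem out, confirm that its SAN list really contains every etcd node IP. A short check using openssl, which is usually already installed:

## Print the Subject Alternative Name extension of the freshly issued certificate.
openssl x509 -in server.pem -noout -text | grep -A1 "Subject Alternative Name"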

Files needed for the etcd installation (mind the paths):
etcd.service goes to /usr/lib/systemd/system/; ca.pem, server.pem and server-key.pem go to /opt/etcd/ssl/; the etcd and etcdctl binaries go to /opt/etcd/bin/; etcd.conf goes to /opt/etcd/cfg/

[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd.conf
ExecStart=/opt/etcd/bin/etcd \
        --name=${ETCD_NAME} \
        --data-dir=${ETCD_DATA_DIR} \
        --listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
        --listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
        --advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
        --initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
        --initial-cluster=${ETCD_INITIAL_CLUSTER} \
        --initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
        --initial-cluster-state=new \
        --cert-file=/opt/etcd/ssl/server.pem \
        --key-file=/opt/etcd/ssl/server-key.pem \
        --peer-cert-file=/opt/etcd/ssl/server.pem \
        --peer-key-file=/opt/etcd/ssl/server-key.pem \
        --trusted-ca-file=/opt/etcd/ssl/ca.pem \
        --peer-trusted-ca-file=/opt/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

etcd.conf

#[Member]
ETCD_NAME="etcd-1"                                     ## node name, unique within the cluster
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"             ## data directory
ETCD_LISTEN_PEER_URLS="https://192.168.0.101:2380"     ## listen address for peer (cluster-internal) traffic
ETCD_LISTEN_CLIENT_URLS="https://192.168.0.101:2379"   ## listen address for clients, e.g. the apiserver

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.0.101:2380"   ## peer address advertised inside the cluster
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.0.101:2379"         ## client address advertised outside the cluster
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.0.101:2380,etcd-2=https://192.168.0.103:2380,etcd-3=https://192.168.0.104:2380"   ## names, addresses and peer ports of all cluster members
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"   ## cluster token, an arbitrary string, but it must be identical on every node
ETCD_INITIAL_CLUSTER_STATE="new"            ## cluster state: "new" for a new cluster, "existing" when joining an existing one
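
Before starting, the same binaries, certificates and unit file have to be present on every etcd node. A minimal distribution sketch, assuming /opt/etcd is already populated on master1 and root SSH access to the other two nodes:

## Copy the etcd tree and the systemd unit to the other etcd members.
for host in 192.168.0.103 192.168.0.104; do
  scp -r /opt/etcd root@$host:/opt/
  scp /usr/lib/systemd/system/etcd.service root@$host:/usr/lib/systemd/system/
done
## Then, on each copied node, edit /opt/etcd/cfg/etcd.conf: set ETCD_NAME to etcd-2 / etcd-3
## and change the four listen/advertise *_URLS values to that node's own IP.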

On all etcd nodes:

systemctl daemon-reload
systemctl restart etcd
systemctl enable etcd
## During startup each node waits for the others to join; etcd only finishes starting once all nodes are up. If there are problems, check the system log /var/log/messages.
## Verify that the cluster members are healthy and running:
/opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.0.101:2379,https://192.168.0.103:2379,https://192.168.0.104:2379" cluster-health

3. Installing the master node components
Self-sign the apiserver SSL certificates (note: this CA is not the same one used for etcd)
The CA root certificate:

vim ca-config.json
{
  "signing": {
    "default": {
      "expiry": "876000h"
    },
    "profiles": {
      "kubernetes": {
         "expiry": "876000h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}

vim ca-csr.json
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}

api-server

server-csr.json   (the ## annotations below are explanatory only and must not appear in the actual JSON file)
{
    "CN": "kubernetes",                  ##K8S證書官方規(guī)定使用默認(rèn)字段名
            "hosts": [                                 
            "10.0.0.1",                              ##service 內(nèi)部集群通信的第一個(gè)IP 地址
      "127.0.0.1",                        
      "kubernetes",                         ##官方規(guī)定的需要添加進(jìn)入證書的名稱
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local",
      "192.168.0.100",                    ##master api server 地址 包括自己本身,已經(jīng)需要訪問(wèn)的load balance 地址(通過(guò)lb 訪問(wèn)可以不用添加node地址)
      "192.168.0.101",
      "192.168.0.102",
      "192.168.0.103",
      "192.168.0.104",
      "192.168.0.105"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}

## The kube-proxy certificate for worker nodes; note the CN field
kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

Issue the certificates:

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
## generates the apiserver certificate and the kube-proxy certificate
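
After the three gencert runs, the working directory should contain both key pairs plus the CA. A quick check (the file names follow the -bare arguments used above):

ls *.pem
## ca-key.pem  ca.pem  kube-proxy-key.pem  kube-proxy.pem  server-key.pem  server.pem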

Install and start:

tar zxvf k8s-master.tar.gz
cd kubernetes
cp TLS/k8s/ssl/*.pem ssl
cp -rf kubernetes /opt
cp kube-apiserver.service kube-controller-manager.service kube-scheduler.service /usr/lib/systemd/system
systemctl start kube-apiserver
systemctl start kube-controller-manager
systemctl start kube-scheduler
systemctl enable kube-apiserver
systemctl enable kube-controller-manager
systemctl enable kube-scheduler
# Authorize kubelet TLS bootstrapping
cat /opt/kubernetes/cfg/token.csv 
##c47ffb939f5ca36231d9e3121a252940,kubelet-bootstrap,10001,"system:node-bootstrapper"
kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap
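
If you prefer your own bootstrap token instead of the shipped one, a minimal sketch for regenerating token.csv (format: token,user,uid,"group"); the assumption is that the same token then has to be written into bootstrap.kubeconfig on every node (see section 4.2):

## Generate a random 32-hex-character token and rewrite token.csv with it.
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
echo "${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,\"system:node-bootstrapper\"" > /opt/kubernetes/cfg/token.csv
cat /opt/kubernetes/cfg/token.csv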

kube-apiserver.conf   (the ## annotations are explanatory and should be removed from the real file)

KUBE_APISERVER_OPTS="--logtostderr=false \            ## log to files instead of stderr
--v=2 \                                               ## log verbosity level
--log-dir=/opt/kubernetes/logs \                      ## log directory
--etcd-servers=https://192.168.0.101:2379,https://192.168.0.103:2379,https://192.168.0.104:2379 \   ## etcd endpoints
--bind-address=192.168.0.101 \                        ## bind IP; a public address can also be used
--secure-port=6443 \                                  ## listening port
--advertise-address=192.168.0.101 \                   ## advertised address, usually the local IP; tells nodes which IP to connect to
--allow-privileged=true \                             ## allow containers to run with privileged rights
--service-cluster-ip-range=10.0.0.0/24 \              ## service IP range; services are assigned addresses from this block
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \   ## admission plugins for advanced features such as resource quotas and access control
--authorization-mode=RBAC,Node \                      ## authorization modes; RBAC roles are generally used
--enable-bootstrap-token-auth=true \                  ## enable bootstrap tokens so node kubelets get certificates issued automatically; the concrete permissions are defined in token.csv
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-32767 \               ## NodePort range exposed by services
--kubelet-client-certificate=/opt/kubernetes/ssl/server.pem \   ## client certificate for talking to kubelets
--kubelet-client-key=/opt/kubernetes/ssl/server-key.pem \
--tls-cert-file=/opt/kubernetes/ssl/server.pem  \     ## HTTPS certificate for the apiserver itself
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \                  ## etcd certificates
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem \
--audit-log-maxage=30 \                               ## audit log settings
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/opt/kubernetes/logs/k8s-audit.log"
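
For reference, a minimal sketch of the kube-apiserver.service unit that consumes this file; the unit shipped in k8s-master.tar.gz is the authoritative one, and the paths here are assumptions consistent with the config above:

[Unit]
Description=Kubernetes API Server

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-apiserver.conf
ExecStart=/opt/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target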

The kube-controller-manager.conf configuration file:

KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \   ## log to files
--v=2 \                        ## log verbosity level
--log-dir=/opt/kubernetes/logs \   ## log directory
--leader-elect=true \          ## leader election: only the apiserver needs an external load balancer for HA; kube-controller-manager achieves HA by electing a leader through etcd, so enabling this option is enough
--master=127.0.0.1:8080 \      ## apiserver address; we connect locally, 8080 is the insecure port the apiserver opens by default
--address=127.0.0.1 \          ## address this component listens on; local only, no external access needed
--allocate-node-cidrs=true \   ## allocate pod CIDRs to nodes so CNI network plugins can be used
--cluster-cidr=10.244.0.0/16 \ ## pod address pool
--service-cluster-ip-range=10.0.0.0/24 \   ## service IP range, the same one given to kube-apiserver

## Cluster signing certificates: when a node joins, its kubelet certificate is issued automatically by controller-manager using the CA configured below
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \

## CA and private key needed to sign service accounts
--root-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \

## Validity of the certificates issued to nodes: 10 years
--experimental-cluster-signing-duration=87600h0m0s"

kube-scheduler.conf

KUBE_SCHEDULER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--leader-elect \           ## leader election among multiple schedulers
--master=127.0.0.1:8080 \  ## apiserver address to connect to
--address=127.0.0.1"       ## listen on the local address only
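
Once all three master components are running, a quick health check from master1 (kubectl talks to the local insecure 8080 port here):

kubectl get cs
## scheduler, controller-manager and the three etcd members should all report Healthy.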

4. Deploying the node components
kubelet, kube-proxy, docker
4.1 Installing Docker

tar zxvf k8s-node.tar.gz
tar zxvf docker-18.09.6.tgz
mv docker/* /usr/bin
mkdir /etc/docker
mv daemon.json /etc/docker
mv docker.service /usr/lib/systemd/system
systemctl start docker
systemctl enable docker
docker info   ## check the Docker configuration, e.g. the registry settings
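
For reference, a minimal /etc/docker/daemon.json sketch; the tarball already ships one, the mirror URL below is only an example, and Docker's default cgroupfs driver must stay consistent with the cgroupDriver in kubelet-config.yml further down:

{
  "registry-mirrors": ["https://registry.docker-cn.com"]
}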

4.2 Installing kubelet and kube-proxy (remember to change the node name and the master IP for each node)

In bootstrap.kubeconfig, server must point to the master's IP
In kube-proxy.kubeconfig, server must point to the master's IP
In kubelet.conf, hostname-override (the registered node name) must be unique
In kube-proxy-config.yml, hostnameOverride (the registered node name) must be unique
Meaning of the configuration file suffixes:
.conf        # basic configuration file
.kubeconfig  # configuration for connecting to the apiserver
.yml         # main configuration file (dynamically reloadable)
kubernetes/
├── bin
│ ├── kubelet
│ └── kube-proxy
├── cfg
│ ├── bootstrap.kubeconfig   # configuration used to request the kubelet certificate
│ ├── kubelet.conf
│ ├── kubelet-config.yml     # dynamically adjustable kubelet configuration
│ ├── kube-proxy.conf
│ ├── kube-proxy-config.yml  # dynamically adjustable proxy configuration
│ └── kube-proxy.kubeconfig  # configuration for connecting to the apiserver
├── logs
└── ssl

vim kubelet.conf
## Contents:
KUBELET_OPTS="--logtostderr=false \   ## log to files
--v=2 \                               ## log verbosity level
--log-dir=/opt/kubernetes/logs \      ## log directory
--hostname-override=k8s-node1 \       ## node name; must be unique, change it on every node
--network-plugin=cni \                ## enable the CNI network plugin

## Paths of the referenced configuration files
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet-config.yml \
--cert-dir=/opt/kubernetes/ssl \      ## directory where the certificates issued to this node are stored
--pod-infra-container-image=lizhenliang/pause-amd64:3.0"   ## pause image used to start pods; this infra container mainly holds the pod's namespaces
bootstrap.kubeconfig
## Contents:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /opt/kubernetes/ssl/ca.pem
    server: https://192.168.0.101:6443   # master1 server IP (internal)
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubelet-bootstrap
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: kubelet-bootstrap
  user:
    token: c47ffb939f5ca36231d9e3121a252940
## The token must be identical to the one in /opt/kubernetes/cfg/token.csv.
## To avoid the complexity of issuing kubelet certificates by hand, Kubernetes provides the bootstrap mechanism, which automatically issues a kubelet certificate to every node that joins the cluster; everything that connects to the apiserver needs a certificate.

The bootstrap workflow (with token verification; once the request is approved, kubelet.kubeconfig is generated):
[Figure: kubelet TLS bootstrap workflow]

kubelet-config.yml
# Contents:
kind: KubeletConfiguration                  # configuration object type
apiVersion: kubelet.config.k8s.io/v1beta1   # API version
address: 0.0.0.0                            # listen address
port: 10250                                 # kubelet port
readOnlyPort: 10255                         # read-only port exposed by kubelet
cgroupDriver: cgroupfs                      # cgroup driver; must match the driver shown by docker info
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local                # cluster domain
failSwapOn: false                           # do not refuse to start when swap is enabled
# Authentication settings
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/kubernetes/ssl/ca.pem
# Authorization and pod resource/eviction tuning
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
kube-proxy.kubeconfig
# Contents:

apiVersion: v1
clusters:
- cluster:
    certificate-authority: /opt/kubernetes/ssl/ca.pem   # the CA certificate
    server: https://192.168.0.101:6443                  # master IP address (internal)
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kube-proxy
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: kube-proxy
  user:
    client-certificate: /opt/kubernetes/ssl/kube-proxy.pem
    client-key: /opt/kubernetes/ssl/kube-proxy-key.pem
kube-proxy-config.yml
# Contents:
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
address: 0.0.0.0                          # listen address
metricsBindAddress: 0.0.0.0:10249         # metrics address; monitoring scrapes its data from here
clientConnection:
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig   # kubeconfig to load
hostnameOverride: k8s-node1               # node name registered with Kubernetes; must be unique
clusterCIDR: 10.0.0.0/24
mode: ipvs                                # proxy mode; ipvs performs better, the default is iptables
ipvs:
  scheduler: "rr"
iptables:
  masqueradeAll: true
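
ipvs mode needs the ip_vs kernel modules (plus the ipvsadm/conntrack userland tools) on every node; kube-proxy falls back to iptables when ipvs is unusable. A minimal sketch to load and verify them on CentOS 7:

yum -y install ipvsadm conntrack
## On newer kernels the last module is nf_conntrack instead of nf_conntrack_ipv4.
for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do modprobe $m; done
lsmod | grep ip_vs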

Install and start:

## From the machine where the certificates were signed, distribute them to the node (3 files: ca.pem, kube-proxy.pem, kube-proxy-key.pem)
cd TLS/k8s
scp ca.pem kube-proxy*.pem root@192.168.0.103:/opt/kubernetes/ssl/
## On the node machine
tar zxvf k8s-node.tar.gz
mv kubernetes /opt
cp kubelet.service kube-proxy.service /usr/lib/systemd/system

## Change the IP address (the master address) in the following two files:
 grep 192 *
bootstrap.kubeconfig:    server: https://192.168.0.101:6443
kube-proxy.kubeconfig:    server: https://192.168.0.101:6443
## Change the hostname in the following two files (to the hostnames defined earlier):
grep hostname *
kubelet.conf:--hostname-override=k8s-node1 \
kube-proxy-config.yml:hostnameOverride: k8s-node1
## Start the services and enable them at boot
systemctl start kubelet
systemctl start kube-proxy
systemctl enable kubelet
systemctl enable kube-proxy
##啟動(dòng)后kubne-proxy會(huì)出現(xiàn)Failed to delete stale service IP 10.0.0.2 connections
yum -y install yum -y install conntrack 解決
###啟動(dòng)后在master上
kubectl get csr  ## 如果啟動(dòng)沒(méi)問(wèn)題 會(huì)顯示node 節(jié)點(diǎn)kubelet請(qǐng)求頒發(fā)證書
##允許給node 頒發(fā)證書 后半段是get csr 顯示的內(nèi)容
kubectl certificate approve node-csr-MYUxbmf_nmPQjmH3LkbZRL2uTO-_FCzDQUoUfTy7YjI
##查看node
kubectl get node 
##(會(huì)顯示NotReady  不影響 這是因?yàn)檫€沒(méi)不是cni 網(wǎng)絡(luò)插件的原因)

4.3 Installing the CNI plugins and the flannel network
CNI is the Kubernetes container network interface: any third-party network that wants to plug into Kubernetes must follow the CNI standard, and deploying CNI is what connects the cluster to that network.
What needs to be done:
install the CNI plugins on every node
install flannel from the master node
4.3.1 Installing the CNI plugins

# Download the CNI plugins package
wget https://github.com/containernetworking/plugins/releases/download/v0.8.5/cni-plugins-linux-amd64-v0.8.5.tgz

## Unpack the CNI package
mkdir -p /opt/cni/bin   # working directory
mkdir -p /etc/cni/net.d # configuration directory
tar -zxvf cni-plugins-linux-amd64-v0.8.5.tgz -C /opt/cni/bin

4.3.2 Installing flannel from the master node

kubectl apply -f kube-flannel.yaml
## flannel only needs to be applied from the master node
## Downloading this file requires ***; once it is on the server, run kubectl apply -f kube-flannel.yml (the images it references also require ***; pulling them directly from abroad tends to fail and is not recommended)
## The net-conf.json network in the YAML must match the cluster-cidr value in /opt/kubernetes/cfg/kube-controller-manager.conf (cat the file to check)
## The same applies if you use a network plugin other than flannel
## Once installed, every node runs one flannel pod

5. Deploying the web UI
Official deployment:

https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/
## Install on the master; remember to change the exposed NodePort
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml
kubectl apply -f recommended.yaml

Modified deployment:

kubectl apply -f dashboard.yaml
kubectl get pods -n kubernetes-dashboard
# Output:
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-76585494d8-sbzjv   1/1     Running   0          2m6s
kubernetes-dashboard-5996555fd8-fc7zf        1/1     Running   2          2m6s

# Check the exposed port
kubectl get pods,svc -n kubernetes-dashboard

# Output:
NAME                                TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
service/dashboard-metrics-scraper   ClusterIP   10.0.0.8     <none>        8000/TCP        16m
service/kubernetes-dashboard        NodePort    10.0.0.88    <none>        443:30001/TCP
# The dashboard is then reachable at any node IP on that port:
https://nodeip:30001

We log in with a token: create a service account and bind it to the default cluster-admin cluster role, as sketched below.
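
A minimal sketch of what dashboard-adminuser.yaml typically contains, matching the admin-user name that the token command below greps for (the kubernetes-dashboard namespace is assumed for dashboard v2.0):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard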

kubectl apply -f dashboard-adminuser.yaml
# Get the token
kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')

由于證書問(wèn)題 部分瀏覽器并不能登錄 比如chorm,這時(shí)候 還需要給dashboard 簽發(fā)一個(gè)自簽證書 以支持多種瀏覽器

mkdir key && cd key
# Generate the certificate
openssl genrsa -out dashboard.key 2048
openssl req -new -out dashboard.csr -key dashboard.key -subj '/CN=kubernetes-dashboard-certs'
openssl x509 -req -in dashboard.csr -signkey dashboard.key -out dashboard.crt
# Delete the existing certificate secret
kubectl delete secret kubernetes-dashboard-certs -n kubernetes-dashboard
# Create the new certificate secret
kubectl create secret generic kubernetes-dashboard-certs --from-file=dashboard.key --from-file=dashboard.crt -n kube-system
# Check the dashboard pod (for v2.0 use -n kubernetes-dashboard)
kubectl get pod -n kube-system
# Restart the dashboard pod by deleting it (for v2.0 use -n kubernetes-dashboard)
kubectl delete pod <pod name> -n kube-system

6. Deploying DNS (DNS provides name resolution for the services shown by kubectl get svc)
Purpose: a service can be reached by its name; the service then forwards traffic to the matching pods.

kubectl apply -f coredns.yaml
## Note: the clusterIP in coredns.yaml must match the clusterDNS value in /opt/kubernetes/cfg/kubelet-config.yml on the nodes, otherwise pods will fail to resolve names
## Check the DNS pod
kubectl get pods -n kube-system

## Output:
coredns-6d8cfdd59d-mw47j 1/1 Running 0 5m45s
## Test
## Deploy the busybox test tool
kubectl apply -f bs.yaml

## Check the pod we started
kubectl get pods

## Enter the container
kubectl exec -it busybox -- sh

## Test
ping kubernetes
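
DNS resolution can also be checked without an interactive shell, using nslookup (which busybox ships); on a working setup, kubernetes resolves to the service IP 10.0.0.1:

kubectl exec -it busybox -- nslookup kubernetes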