01. System initialization and global variables
Host allocation
Hostname | OS | IP address | VIP
dev-k8s-master1 | CentOS 7.6 | 172.19.201.244 | 172.19.201.242
dev-k8s-master2 | CentOS 7.6 | 172.19.201.249 |
dev-k8s-master3 | CentOS 7.6 | 172.19.201.248 |
dev-k8s-node1 | CentOS 7.6 | 172.19.201.247 |
dev-k8s-node2 | CentOS 7.6 | 172.19.201.246 |
dev-k8s-node3 | CentOS 7.6 | 172.19.201.243 |
flannel | 10.10.0.0/16 |
docker | 10.10.1.1/24 |
Hostname
Set the permanent hostname, then log back in:
?
hostnamectl set-hostname dev-k8s-master1
?
The configured hostname is stored in the /etc/hostname file.
Passwordless SSH login to the other nodes
Unless otherwise noted, all operations in this document are performed on dev-k8s-master1, and files and commands are then distributed from there, so that node needs an SSH trust relationship with every other node.
Allow the root account on dev-k8s-master1 to log in to all nodes without a password:
ssh-keygen -t rsa
ssh-copy-id root@dev-k8s-master1
...
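The same public key can be pushed to all of the remaining hosts in one pass; a minimal sketch, assuming the hostnames below resolve via /etc/hosts or DNS:

# distribute dev-k8s-master1's public key to every node (password prompted once per host)
for node in dev-k8s-master1 dev-k8s-master2 dev-k8s-master3 dev-k8s-node1 dev-k8s-node2 dev-k8s-node3; do
  ssh-copy-id -i ~/.ssh/id_rsa.pub root@${node}
done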
?
Update the PATH variable
Add the executable directory to the PATH environment variable:
echo 'PATH=/opt/k8s/bin:$PATH' >>/root/.bashrc
source /root/.bashrc
?
Install dependency packages
Install the dependencies on every machine:
CentOS:
yum install -y epel-release
yum install -y conntrack ntpdate ntp ipvsadm ipset jq iptables curl sysstat libseccomp wget
?
?
Disable the firewall
On every machine, stop the firewall, flush its rules, and set the default forward policy:
systemctl stop firewalld
systemctl disable firewalld
iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat
iptables -P FORWARD ACCEPT
?
Disable the swap partition
If swap is enabled, kubelet fails to start (this can be ignored by setting --fail-swap-on to false), so swap must be disabled on every machine. Also comment out the corresponding entries in /etc/fstab so the swap partition is not mounted again at boot:
?
swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
Disable SELinux
Disable SELinux, otherwise later K8S volume mounts may fail with Permission denied:
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
Load kernel modules
modprobe ip_vs_rr
modprobe br_netfilter
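To keep these modules loaded after a reboot, they can also be listed in a modules-load.d drop-in; a minimal sketch (the file name is arbitrary, and the extra ip_vs_* and nf_conntrack_ipv4 modules are commonly loaded as well because kube-proxy runs in ipvs mode later in this guide):

cat > /etc/modules-load.d/k8s.conf <<EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
br_netfilter
EOF
# load them immediately as well
for mod in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4 br_netfilter; do modprobe ${mod}; done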
?
Tune kernel parameters
cat > kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF
cp kubernetes.conf /etc/sysctl.d/kubernetes.conf
sysctl -p /etc/sysctl.d/kubernetes.conf
Set the system time zone
# adjust the system TimeZone
timedatectl set-timezone Asia/Shanghai
?
Disable unneeded services
systemctl stop postfix && systemctl disable postfix
Configure rsyslogd and systemd journald
systemd's journald is the default logging tool on CentOS 7; it records logs for the whole system, the kernel, and every service unit.
Compared with rsyslogd, journald has the following advantages:
it can log to memory or to the file system (by default it logs to memory, under /run/log/journal);
it can cap the disk space it uses and guarantee remaining free space;
it can limit log file size and retention time.
By default journald also forwards logs to rsyslog, so logs are written twice; /var/log/messages fills up with irrelevant entries, which makes later inspection harder and hurts performance.
# directory for persistent logs
mkdir /var/log/journal
mkdir /etc/systemd/journald.conf.d
cat > /etc/systemd/journald.conf.d/99-prophet.conf <<EOF
[Journal]
Storage=persistent
Compress=yes
SyncIntervalSec=5m
RateLimitInterval=30s
RateLimitBurst=1000
SystemMaxUse=10G
SystemMaxFileSize=200M
MaxRetentionSec=2week
ForwardToSyslog=no
EOF
?
systemctl restart systemd-journald
?
Create the required directories
Create the directories:
mkdir -p /opt/k8s/{bin,work} /etc/{kubernetes,etcd}/cert
Upgrade the kernel
yum -y update
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
yum --disablerepo="*" --enablerepo="elrepo-kernel" list available
yum --enablerepo=elrepo-kernel install kernel-lt.x86_64 -y
sudo awk -F\' '$1=="menuentry " {print i++ " : " $2}' /etc/grub2.cfg
sudo grub2-set-default 0
Install the kernel source files (optional; run after the kernel has been upgraded and the machine rebooted):
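The command itself is not shown in the original; a minimal sketch, assuming the kernel-lt packages from the elrepo-kernel repository enabled above are what you want:

# headers/devel matching the long-term kernel installed earlier (optional)
yum --enablerepo=elrepo-kernel install -y kernel-lt-devel kernel-lt-headers
# confirm the running kernel after the reboot
uname -r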
?
?
02. Create the CA certificate and key
Install the cfssl toolset
sudo mkdir -p /opt/k8s/cert && cd /opt/k8s
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
mv cfssl_linux-amd64 /opt/k8s/bin/cfssl
?
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
mv cfssljson_linux-amd64 /opt/k8s/bin/cfssljson
?
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
mv cfssl-certinfo_linux-amd64 /opt/k8s/bin/cfssl-certinfo
?
chmod +x /opt/k8s/bin/*
export PATH=/opt/k8s/bin:$PATH
?
Create the root certificate (CA)
The CA certificate is shared by every node in the cluster; only one CA certificate needs to be created, and all certificates created afterwards are signed by it.
Create the configuration file
The CA configuration file defines the usage profiles of the root certificate and their concrete parameters (usages, expiry, server auth, client auth, encryption, etc.); a specific profile is referenced later when signing other certificates.
cd /opt/k8s/work
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}
EOF
?
?
Create the certificate signing request file
cd /opt/k8s/work
cat > ca-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "4Paradigm"
    }
  ],
  "ca": {
    "expiry": "876000h"
  }
}
EOF
?
?
Generate the CA certificate and private key
cd /opt/k8s/work
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
ls ca*
?
Distribute the certificate files
Copy the generated CA certificate, private key, and config file to the /etc/kubernetes/cert directory on all nodes:
mkdir -p /etc/kubernetes/cert
scp ca*.pem ca-config.json root@${node_ip}:/etc/kubernetes/cert
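The scp above uses a ${node_ip} placeholder; a minimal sketch of looping over every node, using the addresses from the host table (the NODE_IPS variable name is illustrative):

# distribute the CA material to every node
NODE_IPS=(172.19.201.244 172.19.201.249 172.19.201.248 172.19.201.247 172.19.201.246 172.19.201.243)
for node_ip in ${NODE_IPS[@]}; do
  ssh root@${node_ip} "mkdir -p /etc/kubernetes/cert"
  scp ca*.pem ca-config.json root@${node_ip}:/etc/kubernetes/cert/
done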
?
?
03. Deploy the kubectl command-line tool
Download and distribute the kubectl binary
Download and unpack:
cd /opt/k8s/work
wget https://dl.k8s.io/v1.14.2/kubernetes-client-linux-amd64.tar.gz
tar -xzvf kubernetes-client-linux-amd64.tar.gz
?
Distribute it to all nodes that will use kubectl:
?
cd /opt/k8s/work
scp kubernetes/client/bin/kubectl root@dev-k8s-master1:/opt/k8s/bin/
chmod +x /opt/k8s/bin/*
Create the admin certificate and private key
kubectl talks to the apiserver over the https secure port; the apiserver authenticates and authorizes the certificate it presents.
As the cluster management tool, kubectl needs the highest privileges, so an admin certificate with those privileges is created here.
Create the certificate signing request:
cd /opt/k8s/work
cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:masters",
      "OU": "4Paradigm"
    }
  ]
}
EOF
?
?
Generate the certificate and private key:
cd /opt/k8s/work
cfssl gencert -ca=/opt/k8s/work/ca.pem \
?-ca-key=/opt/k8s/work/ca-key.pem \
?-config=/opt/k8s/work/ca-config.json \
?-profile=kubernetes admin-csr.json | cfssljson -bare admin
?
?
Create the kubeconfig file
kubeconfig is kubectl's configuration file; it contains everything needed to reach the apiserver: the apiserver address, the CA certificate, and the client certificate.
cd /opt/k8s/work
?
# set cluster parameters
kubectl config set-cluster kubernetes \
 --certificate-authority=/opt/k8s/work/ca.pem \
 --embed-certs=true \
 --server=https://172.19.201.242:8443 \
?--kubeconfig=kubectl.kubeconfig
?
# set client authentication parameters
kubectl config set-credentials admin \
?--client-certificate=/opt/k8s/work/admin.pem \
?--client-key=/opt/k8s/work/admin-key.pem \
?--embed-certs=true \
?--kubeconfig=kubectl.kubeconfig
?
# set context parameters
kubectl config set-context kubernetes \
?--cluster=kubernetes \
?--user=admin \
?--kubeconfig=kubectl.kubeconfig
?
# set the default context
kubectl config use-context kubernetes --kubeconfig=kubectl.kubeconfig
?
?
Distribute the kubeconfig file
Distribute it to every node that uses the kubectl command:
cd /opt/k8s/work
mkdir -p ~/.kube
scp kubectl.kubeconfig root@dev-k8s-master1:/root/.kube/config
?
?
?
?
04. Deploy haproxy + keepalived
Deploy keepalived [all master nodes]
Here keepalived provides the VIP (172.19.201.242) for haproxy and runs active/backup across the three haproxy instances, reducing the impact on the service when one haproxy fails.
?
Install keepalived
yum install -y keepalived
Configure keepalived:
[Note: make sure the VIP address is correct and each node has a different priority; the master1 node is MASTER, the other nodes are BACKUP; killall -0 checks by process name whether the process is alive]
cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived
global_defs {
   router_id LVS_DEVEL
}
vrrp_script check_haproxy {
    script "killall -0 haproxy"
    interval 3
    weight -2
    fall 10
    rise 2
}
vrrp_instance VI_1 {
    state MASTER
    interface eno1
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.19.201.242
    }
}
EOF
?
scp -pr /etc/keepalived/keepalived.conf root@dev-k8s-master2:/etc/keepalived/   (on the other master nodes)
?
1. killall -0 checks by process name whether a process is alive; if the command is missing, install it with yum install psmisc -y
2. state is MASTER on the first master node and BACKUP on the other master nodes
3. priority is each node's priority, range 0~250 (not a hard requirement)
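For example, after copying the file to the BACKUP nodes, only the state and priority lines change; a minimal sketch for dev-k8s-master2 (use an even lower priority, e.g. 80, on dev-k8s-master3):

# switch the copied config on dev-k8s-master2 to BACKUP with a lower priority than the MASTER's 100
sed -i -e 's/state MASTER/state BACKUP/' -e 's/priority 100/priority 90/' /etc/keepalived/keepalived.conf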
?
Start and verify the service
systemctl enable keepalived.service?
systemctl start keepalived.service
systemctl status keepalived.service?
ip address show eno1
?
Deploy haproxy [all master nodes]
Here haproxy reverse-proxies the apiserver, round-robining all requests across the master nodes. Compared with a keepalived-only active/backup setup, where a single master carries all the traffic, this is more balanced and robust.
?
Install haproxy
yum install -y haproxy
Configure haproxy [identical on all three master nodes]
cat > /etc/haproxy/haproxy.cfg << EOF
#---------------------------------------------------------------------
# Example configuration for a possible web application.  See the
# full configuration options online.
#
#   http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
#
#---------------------------------------------------------------------

#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    #
    # 1) configure syslog to accept network log events.  This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    #
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #   file. A line like the following can be added to
    #   /etc/sysconfig/syslog
    #
    #    local2.*                       /var/log/haproxy.log
    #
    log         127.0.0.1 local2

    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

#---------------------------------------------------------------------
# main frontend which proxys to the backends
#---------------------------------------------------------------------
frontend  kubernetes-apiserver
    mode                 tcp
    bind                 *:8443
    option               tcplog
    default_backend      kubernetes-apiserver

#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend kubernetes-apiserver
    mode        tcp
    balance     roundrobin
    server  dev-k8s-master1 172.19.201.244:6443 check
    server  dev-k8s-master2 172.19.201.249:6443 check
    server  dev-k8s-master3 172.19.201.248:6443 check

#---------------------------------------------------------------------
# collection haproxy statistics message
#---------------------------------------------------------------------
listen stats
    bind                 *:1080
    stats auth           admin:awesomePassword
    stats refresh        5s
    stats realm          HAProxy\ Statistics
    stats uri            /admin?stats
EOF
?
?
Copy the configuration file to the other two master nodes
scp -pr /etc/haproxy/haproxy.cfg root@dev-k8s-master2:/etc/haproxy/
?
Start and verify the service
systemctl enable haproxy.service
systemctl start haproxy.service
systemctl status haproxy.service
ss -lnt | grep -E "8443|1080"
?
05. Deploy the etcd cluster
Download and distribute the etcd binaries
Download the latest release from the etcd releases page:
cd /opt/k8s/work
wget https://github.com/coreos/etcd/releases/download/v3.3.13/etcd-v3.3.13-linux-amd64.tar.gz
tar -xvf etcd-v3.3.13-linux-amd64.tar.gz
?
Distribute the binaries to all nodes of the etcd cluster:
cd /opt/k8s/work
scp etcd-v3.3.13-linux-amd64/etcd* root@${node_ip}:/opt/k8s/bin
chmod +x /opt/k8s/bin/*
Create the etcd certificate and private key
Create the certificate signing request:
cd /opt/k8s/work
cat > etcd-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "172.19.201.244",
    "172.19.201.249",
    "172.19.201.248",
    "172.19.201.242"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "4Paradigm"
    }
  ]
}
EOF
?
?
Generate the certificate and private key:
cd /opt/k8s/work
cfssl gencert -ca=/opt/k8s/work/ca.pem \
? ?-ca-key=/opt/k8s/work/ca-key.pem \
? ?-config=/opt/k8s/work/ca-config.json \
? ?-profile=kubernetes etcd-csr.json | cfssljson -bare etcd
?
Distribute the generated certificate and private key to each etcd node:
cd /opt/k8s/work
mkdir -p /etc/etcd/cert
scp etcd*.pem root@dev-k8s-master1:/etc/etcd/cert/
?
Create the etcd systemd unit file
vim /etc/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/data/k8s/etcd/data
ExecStart=/opt/k8s/bin/etcd \
  --data-dir=/data/k8s/etcd/data \
  --wal-dir=/data/k8s/etcd/wal \
  --name=dev-k8s-master1 \
  --cert-file=/etc/etcd/cert/etcd.pem \
  --key-file=/etc/etcd/cert/etcd-key.pem \
  --trusted-ca-file=/etc/kubernetes/cert/ca.pem \
  --peer-cert-file=/etc/etcd/cert/etcd.pem \
  --peer-key-file=/etc/etcd/cert/etcd-key.pem \
  --peer-trusted-ca-file=/etc/kubernetes/cert/ca.pem \
  --peer-client-cert-auth \
  --client-cert-auth \
  --listen-peer-urls=https://172.19.201.244:2380 \
  --initial-advertise-peer-urls=https://172.19.201.244:2380 \
  --listen-client-urls=https://172.19.201.244:2379,http://127.0.0.1:2379 \
  --advertise-client-urls=https://172.19.201.244:2379 \
  --initial-cluster-token=etcd-cluster-0 \
  --initial-cluster=dev-k8s-master1=https://172.19.201.244:2380,dev-k8s-master2=https://172.19.201.249:2380,dev-k8s-master3=https://172.19.201.248:2380 \
  --initial-cluster-state=new \
  --auto-compaction-mode=periodic \
  --auto-compaction-retention=1 \
  --max-request-bytes=33554432 \
  --quota-backend-bytes=6442450944 \
  --heartbeat-interval=250 \
  --election-timeout=2000
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
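The unit above is written for dev-k8s-master1; on the other two etcd nodes the --name and the IPs in the listen/advertise URLs must be changed to that node's own values. A minimal sketch of bringing the service up once the file is in place (the data and WAL directories must exist before etcd starts):

mkdir -p /data/k8s/etcd/{data,wal}
systemctl daemon-reload && systemctl enable etcd && systemctl start etcd
systemctl status etcd | grep Active

Note that the first node started may appear to hang until the other cluster members come up.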
?
?
Verify the service status
After the etcd cluster is deployed, run the following on any etcd node:
cd /opt/k8s/work
ETCDCTL_API=3 /opt/k8s/bin/etcdctl \
? ?--endpoints=https://172.19.201.244:2379,https://172.19.201.249:2379,https://172.19.201.248:2379 \
? ?--cacert=/opt/k8s/work/ca.pem \
? ?--cert=/etc/etcd/cert/etcd.pem \
? ?--key=/etc/etcd/cert/etcd-key.pem endpoint health
?
Output: the cluster is healthy when every endpoint reports healthy.
?
View the current leader
ETCDCTL_API=3 /opt/k8s/bin/etcdctl \
 -w table --cacert=/opt/k8s/work/ca.pem \
 --cert=/etc/etcd/cert/etcd.pem \
 --key=/etc/etcd/cert/etcd-key.pem \
 --endpoints=https://172.19.201.244:2379,https://172.19.201.249:2379,https://172.19.201.248:2379 endpoint status
?
?
06. Deploy the flannel network
Download and distribute the flanneld binaries
Download the latest release from the flannel releases page:
cd /opt/k8s/work
mkdir flannel
wget https://github.com/coreos/flannel/releases/download/v0.11.0/flannel-v0.11.0-linux-amd64.tar.gz
tar -xzvf flannel-v0.11.0-linux-amd64.tar.gz -C flannel
?
Distribute the binaries to all cluster nodes:
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
scp flannel/{flanneld,mk-docker-opts.sh} root@dev-k8s-node1:/opt/k8s/bin/
chmod +x /opt/k8s/bin/*
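The scp above only covers dev-k8s-node1; a minimal sketch of distributing flanneld to every node (masters and workers), with a host list that mirrors the table at the top of this guide:

ALL_NODES=(dev-k8s-master1 dev-k8s-master2 dev-k8s-master3 dev-k8s-node1 dev-k8s-node2 dev-k8s-node3)
for node in ${ALL_NODES[@]}; do
  scp flannel/{flanneld,mk-docker-opts.sh} root@${node}:/opt/k8s/bin/
  ssh root@${node} "chmod +x /opt/k8s/bin/*"
done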
?
?
Create the flannel certificate and private key
flanneld reads and writes the network allocation information in the etcd cluster, and etcd has mutual x509 authentication enabled, so flanneld needs its own certificate and private key.
Create the certificate signing request:
cd /opt/k8s/work
cat > flanneld-csr.json <<EOF
{
  "CN": "flanneld",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "4Paradigm"
    }
  ]
}
EOF
?
?
This certificate is only used as a client certificate (flanneld presents it to etcd), so the hosts field is empty;
Generate the certificate and private key:
?
cfssl gencert -ca=/opt/k8s/work/ca.pem \
?-ca-key=/opt/k8s/work/ca-key.pem \
?-config=/opt/k8s/work/ca-config.json \
?-profile=kubernetes flanneld-csr.json | cfssljson -bare flanneld
?
Distribute the generated certificate and private key to all nodes (masters and workers):
cd /opt/k8s/work
mkdir -p /etc/flanneld/cert
scp flanneld*.pem root@dev-k8s-master1:/etc/flanneld/cert
?
?
Write the cluster Pod network information into etcd
Note: this step only needs to be executed once.
cd /opt/k8s/work
etcdctl \
?--endpoints=https://172.19.201.244:2379,https://172.19.201.249:2379,https://172.19.201.248:2379 \
?--ca-file=/opt/k8s/work/ca.pem \
?--cert-file=/opt/k8s/work/flanneld.pem \
?--key-file=/opt/k8s/work/flanneld-key.pem \
?set /kubernetes/network/config '{"Network":"'10.10.0.0/16'", "SubnetLen": 21, "Backend": {"Type": "vxlan"}}'
?
The current flanneld release (v0.11.0) does not support the etcd v3 API, so the configuration key and network data are written with the etcd v2 API;
The Pod network written here (${CLUSTER_CIDR}, a /16 in this case) must have a prefix smaller than SubnetLen, and it must match the --cluster-cidr parameter of kube-controller-manager;
?
Create the flanneld systemd unit file
cat /etc/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
Type=notify
ExecStart=/opt/k8s/bin/flanneld \
  -etcd-cafile=/etc/kubernetes/cert/ca.pem \
  -etcd-certfile=/etc/flanneld/cert/flanneld.pem \
  -etcd-keyfile=/etc/flanneld/cert/flanneld-key.pem \
  -etcd-endpoints=https://172.19.201.244:2379,https://172.19.201.249:2379,https://172.19.201.248:2379 \
  -etcd-prefix=/kubernetes/network \
  -iface=eno1 \
  -ip-masq
ExecStartPost=/opt/k8s/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=always
RestartSec=5
StartLimitInterval=0

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
?
Start the flanneld service
systemctl daemon-reload && systemctl enable flanneld && systemctl restart flanneld
?
Check the Pod subnets assigned to each flanneld
View the cluster Pod network (/16):
etcdctl \
?--endpoints=https://172.19.201.244:2379,https://172.19.201.249:2379,https://172.19.201.248:2379 \
?--ca-file=/etc/kubernetes/cert/ca.pem \
?--cert-file=/etc/flanneld/cert/flanneld.pem \
?--key-file=/etc/flanneld/cert/flanneld-key.pem \
?get /kubernetes/network/config
?
Output:
{"Network":"10.10.0.0/16", "SubnetLen": 21, "Backend": {"Type": "vxlan"}}
?
?
View the list of allocated Pod subnets (/21):
etcdctl \
?--endpoints=https://172.19.201.244:2379,https://172.19.201.249:2379,https://172.19.201.248:2379 \
?--ca-file=/etc/kubernetes/cert/ca.pem \
?--cert-file=/etc/flanneld/cert/flanneld.pem \
?--key-file=/etc/flanneld/cert/flanneld-key.pem \
?ls /kubernetes/network/subnets
?
Output (varies by deployment):
?
View the node IP and flannel interface address for one Pod subnet:
?
etcdctl \
?--endpoints=https://172.19.201.244:2379,https://172.19.201.249:2379,https://172.19.201.248:2379 \
?--ca-file=/etc/kubernetes/cert/ca.pem \
?--cert-file=/etc/flanneld/cert/flanneld.pem \
?--key-file=/etc/flanneld/cert/flanneld-key.pem \
?get ?/kubernetes/network/subnets/10.10.80.0-21
?
Output (varies by deployment):
Check a node's flannel network information
?
The flannel.1 interface address is the first IP (.0) of the assigned Pod subnet and is a /32 address;
[root@dev-k8s-node1 ~]# ip route show |grep flannel.1
?
?
Verify that the nodes can reach each other over the Pod network
After flannel is deployed on each node, check that a flannel interface was created (its name may be flannel0, flannel.0, flannel.1, etc.):

ssh dev-k8s-node2 "/usr/sbin/ip addr show flannel.1|grep -w inet"
?
From every node, ping all of the flannel interface IPs and make sure they are reachable:

ssh dev-k8s-node2 "ping -c 2 10.10.176.0"
?
?
07. Deploy the highly available kube-apiserver cluster
Create the kubernetes certificate and private key
Create the certificate signing request:
cd /opt/k8s/work
cat > kubernetes-csr.json <<EOF
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "172.19.201.244",
    "172.19.201.249",
    "172.19.201.248",
    "172.19.201.242",
    "10.254.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local."
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "4Paradigm"
    }
  ]
}
EOF
?
The kubernetes service IP is created automatically by the apiserver; it is usually the first IP of the network given by --service-cluster-ip-range (10.254.0.1 here, which is why that IP is listed in the hosts field above). It can be retrieved later with:

kubectl get svc kubernetes
?
?
Generate the certificate and private key:
cfssl gencert -ca=/opt/k8s/work/ca.pem \
?-ca-key=/opt/k8s/work/ca-key.pem \
?-config=/opt/k8s/work/ca-config.json \
?-profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
ls kubernetes*pem
?
Copy the generated certificate and private key files to all master nodes:
?
cd /opt/k8s/work
mkdir -p /etc/kubernetes/cert
scp kubernetes*.pem root@dev-k8s-master1:/etc/kubernetes/cert/
?
?
Create the encryption configuration file
cd /opt/k8s/work
cat > encryption-config.yaml <<EOF
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: $(head -c 32 /dev/urandom | base64)
      - identity: {}
EOF
?
Copy the encryption config file to the /etc/kubernetes directory on the master nodes:
?
cd /opt/k8s/work
scp encryption-config.yaml root@dev-k8s-master1:/etc/kubernetes/
?
?
Create the audit policy file
cd /opt/k8s/work
cat > audit-policy.yaml <<EOF
apiVersion: audit.k8s.io/v1beta1
kind: Policy
rules:
  # The following requests were manually identified as high-volume and low-risk, so drop them.
  - level: None
    resources:
      - group: ""
        resources:
          - endpoints
          - services
          - services/status
    users:
      - 'system:kube-proxy'
    verbs:
      - watch

  - level: None
    resources:
      - group: ""
        resources:
          - nodes
          - nodes/status
    userGroups:
      - 'system:nodes'
    verbs:
      - get

  - level: None
    namespaces:
      - kube-system
    resources:
      - group: ""
        resources:
          - endpoints
    users:
      - 'system:kube-controller-manager'
      - 'system:kube-scheduler'
      - 'system:serviceaccount:kube-system:endpoint-controller'
    verbs:
      - get
      - update

  - level: None
    resources:
      - group: ""
        resources:
          - namespaces
          - namespaces/status
          - namespaces/finalize
    users:
      - 'system:apiserver'
    verbs:
      - get

  # Don't log HPA fetching metrics.
  - level: None
    resources:
      - group: metrics.k8s.io
    users:
      - 'system:kube-controller-manager'
    verbs:
      - get
      - list

  # Don't log these read-only URLs.
  - level: None
    nonResourceURLs:
      - '/healthz*'
      - /version
      - '/swagger*'

  # Don't log events requests.
  - level: None
    resources:
      - group: ""
        resources:
          - events

  # node and pod status calls from nodes are high-volume and can be large, don't log responses for expected updates from nodes
  - level: Request
    omitStages:
      - RequestReceived
    resources:
      - group: ""
        resources:
          - nodes/status
          - pods/status
    users:
      - kubelet
      - 'system:node-problem-detector'
      - 'system:serviceaccount:kube-system:node-problem-detector'
    verbs:
      - update
      - patch

  - level: Request
    omitStages:
      - RequestReceived
    resources:
      - group: ""
        resources:
          - nodes/status
          - pods/status
    userGroups:
      - 'system:nodes'
    verbs:
      - update
      - patch

  # deletecollection calls can be large, don't log responses for expected namespace deletions
  - level: Request
    omitStages:
      - RequestReceived
    users:
      - 'system:serviceaccount:kube-system:namespace-controller'
    verbs:
      - deletecollection

  # Secrets, ConfigMaps, and TokenReviews can contain sensitive & binary data,
  # so only log at the Metadata level.
  - level: Metadata
    omitStages:
      - RequestReceived
    resources:
      - group: ""
        resources:
          - secrets
          - configmaps
      - group: authentication.k8s.io
        resources:
          - tokenreviews
  # Get responses can be large; skip them.
  - level: Request
    omitStages:
      - RequestReceived
    resources:
      - group: ""
      - group: admissionregistration.k8s.io
      - group: apiextensions.k8s.io
      - group: apiregistration.k8s.io
      - group: apps
      - group: authentication.k8s.io
      - group: authorization.k8s.io
      - group: autoscaling
      - group: batch
      - group: certificates.k8s.io
      - group: extensions
      - group: metrics.k8s.io
      - group: networking.k8s.io
      - group: policy
      - group: rbac.authorization.k8s.io
      - group: scheduling.k8s.io
      - group: settings.k8s.io
      - group: storage.k8s.io
    verbs:
      - get
      - list
      - watch

  # Default level for known APIs
  - level: RequestResponse
    omitStages:
      - RequestReceived
    resources:
      - group: ""
      - group: admissionregistration.k8s.io
      - group: apiextensions.k8s.io
      - group: apiregistration.k8s.io
      - group: apps
      - group: authentication.k8s.io
      - group: authorization.k8s.io
      - group: autoscaling
      - group: batch
      - group: certificates.k8s.io
      - group: extensions
      - group: metrics.k8s.io
      - group: networking.k8s.io
      - group: policy
      - group: rbac.authorization.k8s.io
      - group: scheduling.k8s.io
      - group: settings.k8s.io
      - group: storage.k8s.io

  # Default level for all other requests.
  - level: Metadata
    omitStages:
      - RequestReceived
EOF
?
Distribute the audit policy file:
?
cd /opt/k8s/work
scp audit-policy.yaml root@dev-k8s-master1:/etc/kubernetes/audit-policy.yaml
?
?
Create the certificate used later to access metrics-server
Create the certificate signing request:
cat > proxy-client-csr.json <<EOF
{
  "CN": "aggregator",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "4Paradigm"
    }
  ]
}
EOF
?
?
?
Generate the certificate and private key:
?
cfssl gencert -ca=/etc/kubernetes/cert/ca.pem \
?-ca-key=/etc/kubernetes/cert/ca-key.pem ?\
?-config=/etc/kubernetes/cert/ca-config.json ?\
?-profile=kubernetes proxy-client-csr.json | cfssljson -bare proxy-client
?
ls proxy-client*.pem
Copy the generated certificate and private key files to all master nodes:
?
scp proxy-client*.pem root@dev-k8s-master1:/etc/kubernetes/cert/
?
?
Create the kube-apiserver systemd unit file
cd /opt/k8s/work
vim /etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
WorkingDirectory=/data/k8s/k8s/kube-apiserver
ExecStart=/opt/k8s/bin/kube-apiserver \
  --advertise-address=172.19.201.244 \
  --default-not-ready-toleration-seconds=360 \
  --default-unreachable-toleration-seconds=360 \
  --feature-gates=DynamicAuditing=true \
  --max-mutating-requests-inflight=2000 \
  --max-requests-inflight=4000 \
  --default-watch-cache-size=200 \
  --delete-collection-workers=2 \
  --encryption-provider-config=/etc/kubernetes/encryption-config.yaml \
  --etcd-cafile=/etc/kubernetes/cert/ca.pem \
  --etcd-certfile=/etc/kubernetes/cert/kubernetes.pem \
  --etcd-keyfile=/etc/kubernetes/cert/kubernetes-key.pem \
  --etcd-servers=https://172.19.201.244:2379,https://172.19.201.249:2379,https://172.19.201.248:2379 \
  --bind-address=172.19.201.244 \
  --secure-port=6443 \
  --tls-cert-file=/etc/kubernetes/cert/kubernetes.pem \
  --tls-private-key-file=/etc/kubernetes/cert/kubernetes-key.pem \
  --insecure-port=0 \
  --audit-dynamic-configuration \
  --audit-log-maxage=15 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-truncate-enabled \
  --audit-log-path=/data/k8s/k8s/kube-apiserver/audit.log \
  --audit-policy-file=/etc/kubernetes/audit-policy.yaml \
  --profiling \
  --anonymous-auth=false \
  --client-ca-file=/etc/kubernetes/cert/ca.pem \
  --enable-bootstrap-token-auth \
  --requestheader-allowed-names="aggregator" \
  --requestheader-client-ca-file=/etc/kubernetes/cert/ca.pem \
  --requestheader-extra-headers-prefix="X-Remote-Extra-" \
  --requestheader-group-headers=X-Remote-Group \
  --requestheader-username-headers=X-Remote-User \
  --service-account-key-file=/etc/kubernetes/cert/ca.pem \
  --authorization-mode=Node,RBAC \
  --runtime-config=api/all=true \
  --enable-admission-plugins=NodeRestriction \
  --allow-privileged=true \
  --apiserver-count=3 \
  --event-ttl=168h \
  --kubelet-certificate-authority=/etc/kubernetes/cert/ca.pem \
  --kubelet-client-certificate=/etc/kubernetes/cert/kubernetes.pem \
  --kubelet-client-key=/etc/kubernetes/cert/kubernetes-key.pem \
  --kubelet-https=true \
  --kubelet-timeout=10s \
  --proxy-client-cert-file=/etc/kubernetes/cert/proxy-client.pem \
  --proxy-client-key-file=/etc/kubernetes/cert/proxy-client-key.pem \
  --service-cluster-ip-range=10.254.0.0/16 \
  --service-node-port-range=30000-32767 \
  --logtostderr=true \
  --v=2
Restart=on-failure
RestartSec=10
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
?
?
?
?
Start the kube-apiserver service
The working directory must be created before the service is started;
mkdir -p /data/k8s/k8s/kube-apiserver
systemctl daemon-reload && systemctl enable kube-apiserver && systemctl restart kube-apiserver
?
Print the data kube-apiserver has written into etcd
ETCDCTL_API=3 etcdctl \
? ?--endpoints=https://172.19.201.244:2379,https://172.19.201.249:2379,https://172.19.201.248:2379 \
? ?--cacert=/opt/k8s/work/ca.pem \
? ?--cert=/opt/k8s/work/etcd.pem \
? ?--key=/opt/k8s/work/etcd-key.pem \
? ?get /registry/ --prefix --keys-only
?
Check cluster information
$ kubectl cluster-info
Kubernetes master is running at https://172.19.201.242:8443
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
?
$ kubectl get all --all-namespaces
NAMESPACE ? NAME ? ? ? ? ? ? ? ? TYPE ? ? ? ?CLUSTER-IP ? EXTERNAL-IP ? PORT(S) ? AGE
default ? ? service/kubernetes ? ClusterIP ? 10.254.0.1 ? <none> ? ? ? ?443/TCP ? 12m
?
$ kubectl get componentstatuses
Check the ports kube-apiserver listens on
sudo netstat -lnpt|grep kube
?
Grant kube-apiserver permission to access the kubelet API
When running kubectl exec, run, logs and similar commands, the apiserver forwards the request to the kubelet's https port. The RBAC rule below grants the user name of the certificate used by the apiserver (kubernetes.pem, CN: kubernetes) access to the kubelet API:
kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes
?
?
?
08. Deploy the highly available kube-controller-manager cluster
Create the kube-controller-manager certificate and private key
Create the certificate signing request:
cd /opt/k8s/work
cat > kube-controller-manager-csr.json <<EOF
{
    "CN": "system:kube-controller-manager",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "hosts": [
      "127.0.0.1",
      "172.19.201.244",
      "172.19.201.249",
      "172.19.201.248",
      "172.19.201.242"
    ],
    "names": [
      {
        "C": "CN",
        "ST": "BeiJing",
        "L": "BeiJing",
        "O": "system:kube-controller-manager",
        "OU": "4Paradigm"
      }
    ]
}
EOF
?
Generate the certificate and private key:
cd /opt/k8s/work
cfssl gencert -ca=/opt/k8s/work/ca.pem \
?-ca-key=/opt/k8s/work/ca-key.pem \
?-config=/opt/k8s/work/ca-config.json \
?-profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
?
ls kube-controller-manager*pem
Distribute the generated certificate and private key to all master nodes:
?
cd /opt/k8s/work
scp kube-controller-manager*.pem root@dev-k8s-master1:/etc/kubernetes/cert/
?
?
Create and distribute the kubeconfig file
kube-controller-manager uses a kubeconfig file to access the apiserver; it provides the apiserver address, the embedded CA certificate, and the kube-controller-manager certificate:
?
cd /opt/k8s/work
kubectl config set-cluster kubernetes \
?--certificate-authority=/opt/k8s/work/ca.pem \
?--embed-certs=true \
?--server=https://172.19.201.242:8443 \
?--kubeconfig=kube-controller-manager.kubeconfig
?
kubectl config set-credentials system:kube-controller-manager \
?--client-certificate=kube-controller-manager.pem \
?--client-key=kube-controller-manager-key.pem \
?--embed-certs=true \
?--kubeconfig=kube-controller-manager.kubeconfig
?
kubectl config set-context system:kube-controller-manager \
?--cluster=kubernetes \
?--user=system:kube-controller-manager \
?--kubeconfig=kube-controller-manager.kubeconfig
?
kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
?
Distribute the kubeconfig to all master nodes:
?
cd /opt/k8s/work
scp kube-controller-manager.kubeconfig root@dev-k8s-master1:/etc/kubernetes/
Create the kube-controller-manager systemd unit file
cd /opt/k8s/work
cat /etc/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
WorkingDirectory=/data/k8s/k8s/kube-controller-manager
ExecStart=/opt/k8s/bin/kube-controller-manager \
  --port=0 \
  --secure-port=10252 \
  --bind-address=127.0.0.1 \
  --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \
  --authentication-kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \
  --authorization-kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \
  --service-cluster-ip-range=10.254.0.0/16 \
  --cluster-name=kubernetes \
  --cluster-signing-cert-file=/etc/kubernetes/cert/ca.pem \
  --cluster-signing-key-file=/etc/kubernetes/cert/ca-key.pem \
  --experimental-cluster-signing-duration=8760h \
  --root-ca-file=/etc/kubernetes/cert/ca.pem \
  --service-account-private-key-file=/etc/kubernetes/cert/ca-key.pem \
  --leader-elect=true \
  --controllers=*,bootstrapsigner,tokencleaner \
  --tls-cert-file=/etc/kubernetes/cert/kube-controller-manager.pem \
  --tls-private-key-file=/etc/kubernetes/cert/kube-controller-manager-key.pem \
  --use-service-account-credentials=true \
  --kube-api-qps=1000 \
  --kube-api-burst=2000 \
  --logtostderr=true \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
?
?
?
Create the directory
mkdir -p /data/k8s/k8s/kube-controller-manager
?
Start the service
systemctl daemon-reload && systemctl enable kube-controller-manager && systemctl restart kube-controller-manager
?
kube-controller-manager listens on port 10252 and serves https requests:
sudo netstat -lnpt | grep kube-cont
?
?
Grant kube-controller-manager the required permissions
kubectl create clusterrolebinding controller-manager:system:auth-delegator --user system:kube-controller-manager --clusterrole system:auth-delegator
?
?
kubectl describe clusterrole system:kube-controller-manager

kubectl get clusterrole|grep controller

kubectl describe clusterrole system:controller:deployment-controller
?
View the current leader
kubectl get endpoints kube-controller-manager --namespace=kube-system -o yaml
?
?
09. Deploy the highly available kube-scheduler cluster
Create the kube-scheduler certificate and private key
Create the certificate signing request:
cd /opt/k8s/work
cat > kube-scheduler-csr.json <<EOF
{
    "CN": "system:kube-scheduler",
    "hosts": [
      "127.0.0.1",
      "172.19.201.244",
      "172.19.201.249",
      "172.19.201.248",
      "172.19.201.242"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "ST": "BeiJing",
        "L": "BeiJing",
        "O": "system:kube-scheduler",
        "OU": "4Paradigm"
      }
    ]
}
EOF
?
Generate the certificate and private key:
cd /opt/k8s/work
?
cfssl gencert -ca=/opt/k8s/work/ca.pem \
?-ca-key=/opt/k8s/work/ca-key.pem \
?-config=/opt/k8s/work/ca-config.json \
?-profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler
?
?
Distribute the generated certificate and private key to all master nodes:
cd /opt/k8s/work
scp kube-scheduler*.pem root@dev-k8s-master1:/etc/kubernetes/cert/
?
?
Create and distribute the kubeconfig file
kube-scheduler uses a kubeconfig file to access the apiserver; it provides the apiserver address, the embedded CA certificate, and the kube-scheduler certificate:
?
cd /opt/k8s/work
kubectl config set-cluster kubernetes \
?--certificate-authority=/opt/k8s/work/ca.pem \
?--embed-certs=true \
?--server=https://172.19.201.242:8443 \
?--kubeconfig=kube-scheduler.kubeconfig
?
kubectl config set-credentials system:kube-scheduler \
?--client-certificate=kube-scheduler.pem \
?--client-key=kube-scheduler-key.pem \
?--embed-certs=true \
?--kubeconfig=kube-scheduler.kubeconfig
?
kubectl config set-context system:kube-scheduler \
?--cluster=kubernetes \
?--user=system:kube-scheduler \
?--kubeconfig=kube-scheduler.kubeconfig
?
kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig
?
Distribute the kubeconfig to all master nodes:
cd /opt/k8s/work
scp kube-scheduler.kubeconfig root@dev-k8s-master1:/etc/kubernetes/
Create the kube-scheduler configuration file
cd /opt/k8s/work
cat <<EOF | sudo tee kube-scheduler.yaml
apiVersion: kubescheduler.config.k8s.io/v1alpha1
kind: KubeSchedulerConfiguration
clientConnection:
  kubeconfig: "/etc/kubernetes/kube-scheduler.kubeconfig"
leaderElection:
  leaderElect: true
EOF
?
Distribute the kube-scheduler config file to all master nodes:
scp kube-scheduler.yaml root@dev-k8s-master1:/etc/kubernetes/
?
?
?
Create the kube-scheduler systemd unit file
cat /etc/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
WorkingDirectory=/data/k8s/k8s/kube-scheduler
ExecStart=/opt/k8s/bin/kube-scheduler \
  --config=/etc/kubernetes/kube-scheduler.yaml \
  --address=127.0.0.1 \
  --kube-api-qps=100 \
  --logtostderr=true \
  --v=2
Restart=always
RestartSec=5
StartLimitInterval=0

[Install]
WantedBy=multi-user.target
?
?
?
Create the directory
mkdir -p /data/k8s/k8s/kube-scheduler
?
Start the kube-scheduler service
systemctl daemon-reload && systemctl enable kube-scheduler && systemctl restart kube-scheduler
?
Check the service status
systemctl status kube-scheduler
?
View the exposed metrics
sudo netstat -lnpt |grep kube-sch

curl -s --cacert /opt/k8s/work/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem https://172.19.201.244:10259/metrics |head
?
View the current leader
$ kubectl get endpoints kube-scheduler --namespace=kube-system -o yaml
?
?
10. Deploy the docker component
Download and distribute the docker binaries
Download the latest release from the docker download page:
cd /opt/k8s/work
wget https://download.docker.com/linux/static/stable/x86_64/docker-18.09.6.tgz
tar -xvf docker-18.09.6.tgz
?
Distribute the binaries to all worker nodes:
cd /opt/k8s/work
scp docker/* root@dev-k8s-node1:/opt/k8s/bin/
ssh root@dev-k8s-node1 "chmod +x /opt/k8s/bin/*"
?
?
Create and distribute the systemd unit file on the worker nodes
cat /etc/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.io

[Service]
WorkingDirectory=/data/k8s/docker
Environment="PATH=/opt/k8s/bin:/bin:/sbin:/usr/bin:/usr/sbin"
EnvironmentFile=-/run/flannel/docker
ExecStart=/opt/k8s/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
Restart=on-failure
RestartSec=5
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
Delegate=yes
KillMode=process

[Install]
WantedBy=multi-user.target
?
?
?
dockerd sets the iptables FORWARD chain policy to DROP when it starts, so set it back to ACCEPT:
sudo /sbin/iptables -P FORWARD ACCEPT
?
Distribute the systemd unit file to all worker machines:
cd /opt/k8s/work
scp docker.service root@dev-k8s-node1:/etc/systemd/system/
?
Configure and distribute the docker configuration file
Use registry mirrors inside China to speed up image pulls and raise the download concurrency (dockerd must be restarted for the change to take effect):
cd /opt/k8s/work
Create the docker configuration file:
mkdir -p /etc/docker/ /data/k8s/docker/{data,exec}
cat /etc/docker/daemon.json
{
    "registry-mirrors": ["https://docker.mirrors.ustc.edu.cn","https://hub-mirror.c.163.com"],
    "insecure-registries": ["docker02:35000"],
    "max-concurrent-downloads": 20,
    "live-restore": true,
    "max-concurrent-uploads": 10,
    "debug": true,
    "data-root": "/data/k8s/docker/data",
    "exec-root": "/data/k8s/docker/exec",
    "log-opts": {
      "max-size": "100m",
      "max-file": "5"
    }
}
?
?
Distribute the docker config file to all worker nodes:
ssh root@dev-k8s-node1 "mkdir -p /etc/docker/ /data/k8s/docker/{data,exec}"
scp /etc/docker/daemon.json root@dev-k8s-node1:/etc/docker/daemon.json
?
Start the docker service
systemctl daemon-reload && systemctl enable docker && systemctl restart docker
?
?
Check the service status
systemctl status docker|grep active
?
?
?
?
12. Deploy the kubelet component
Create the kubelet bootstrap kubeconfig files
cd /opt/k8s/work
vim /opt/k8s/bin/environment.sh
#!/bin/bash
KUBE_APISERVER="https://172.19.201.242:8443"
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
NODE_NAMES=(dev-k8s-node1 dev-k8s-node2 dev-k8s-node3)


source /opt/k8s/bin/environment.sh
for node_name in ${NODE_NAMES[@]}
  do
    echo ">>> ${node_name}"

    # create a bootstrap token
    export BOOTSTRAP_TOKEN=$(kubeadm token create \
      --description kubelet-bootstrap-token \
      --groups system:bootstrappers:${node_name} \
      --kubeconfig ~/.kube/config)

    # set cluster parameters
    kubectl config set-cluster kubernetes \
      --certificate-authority=/etc/kubernetes/cert/ca.pem \
      --embed-certs=true \
      --server=${KUBE_APISERVER} \
      --kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig

    # set client authentication parameters
    kubectl config set-credentials kubelet-bootstrap \
      --token=${BOOTSTRAP_TOKEN} \
      --kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig

    # set context parameters
    kubectl config set-context default \
      --cluster=kubernetes \
      --user=kubelet-bootstrap \
      --kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig

    # set the default context
    kubectl config use-context default --kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig
  done
?
What is written into the kubeconfig is a token rather than a certificate; the certificates are issued later by kube-controller-manager.
View the tokens kubeadm created for the nodes:
kubeadm token list --kubeconfig ~/.kube/config
?
Distribute the bootstrap kubeconfig files to all worker nodes
scp -pr kubelet-bootstrap-dev-k8s-master1.kubeconfig root@dev-k8s-master1:/etc/kubernetes/kubelet-bootstrap.kubeconfig
Note: copy each file to its matching host
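A loop keeps the file-to-host mapping straight, as the note above requires; a minimal sketch using the NODE_NAMES list from environment.sh:

source /opt/k8s/bin/environment.sh
for node_name in ${NODE_NAMES[@]}; do
  # each node gets its own bootstrap kubeconfig under a fixed name
  scp kubelet-bootstrap-${node_name}.kubeconfig root@${node_name}:/etc/kubernetes/kubelet-bootstrap.kubeconfig
done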
Create and distribute the kubelet configuration file
cat > /etc/kubernetes/kubelet-config.yaml << EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: "/etc/kubernetes/cert/ca.pem"
authorization:
  mode: Webhook
clusterDomain: "cluster.local"
clusterDNS:
  - "10.254.0.2"
podCIDR: ""
maxPods: 220
serializeImagePulls: false
hairpinMode: promiscuous-bridge
cgroupDriver: cgroupfs
runtimeRequestTimeout: "15m"
rotateCertificates: true
serverTLSBootstrap: true
readOnlyPort: 0
port: 10250
address: "172.19.201.247"
EOF
?
?
Create and distribute the kubelet config file for each node (to the worker nodes), adjusting the address field to each node's own IP:
scp -pr /etc/kubernetes/kubelet-config.yaml root@dev-k8s-master2:/etc/kubernetes/
?
Create and distribute the kubelet systemd unit file
cat /etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/data/k8s/k8s/kubelet
ExecStart=/opt/k8s/bin/kubelet \
 --root-dir=/data/k8s/k8s/kubelet \
 --bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \
 --cert-dir=/etc/kubernetes/cert \
 --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
 --config=/etc/kubernetes/kubelet-config.yaml \
 --hostname-override=dev-k8s-node1 \
 --pod-infra-container-image=registry.cn-beijing.aliyuncs.com/k8s_images/pause-amd64:3.1 \
 --allow-privileged=true \
 --event-qps=0 \
 --kube-api-qps=1000 \
 --kube-api-burst=2000 \
 --registry-qps=0 \
 --image-pull-progress-deadline=30m \
 --logtostderr=true \
 --v=2
Restart=always
RestartSec=5
StartLimitInterval=0

[Install]
WantedBy=multi-user.target
?
?
Create and distribute the kubelet systemd unit file for each node:
scp -pr /etc/systemd/system/kubelet.service root@dev-k8s-node1:/etc/systemd/system/
?
Bootstrap token auth and the required permission grant:
kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --group=system:bootstrappers
?
Create the working directory
mkdir -p /data/k8s/k8s/kubelet
?
Turn off swap, otherwise kubelet will fail to start
/usr/sbin/swapoff -a
?
Start the kubelet service
systemctl daemon-reload ?&& systemctl restart kubelet && systemctl enable kubelet
?
Check whether the service is running
systemctl status kubelet |grep active
?
Automatically approve CSR requests
Create three ClusterRoleBindings, used respectively to auto-approve client certificates, client certificate renewals, and server certificate renewals:
cd /opt/k8s/work
cat > csr-crb.yaml <<EOF
# Approve all CSRs for the group "system:bootstrappers"
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: auto-approve-csrs-for-group
subjects:
- kind: Group
  name: system:bootstrappers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
  apiGroup: rbac.authorization.k8s.io
---
# To let a node of the group "system:nodes" renew its own credentials
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-client-cert-renewal
subjects:
- kind: Group
  name: system:nodes
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
  apiGroup: rbac.authorization.k8s.io
---
# A ClusterRole which instructs the CSR approver to approve a node requesting a
# serving cert matching its client cert.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: approve-node-server-renewal-csr
rules:
- apiGroups: ["certificates.k8s.io"]
  resources: ["certificatesigningrequests/selfnodeserver"]
  verbs: ["create"]
---
# To let a node of the group "system:nodes" renew its own server credentials
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-server-cert-renewal
subjects:
- kind: Group
  name: system:nodes
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: approve-node-server-renewal-csr
  apiGroup: rbac.authorization.k8s.io
EOF
?
Apply the configuration:
kubectl apply -f csr-crb.yaml
?
Check kubelet status
After a while (1 to 10 minutes), the CSRs of the three nodes are approved automatically.
Manually approve the server cert CSRs
For security reasons, the CSR-approving controllers do not automatically approve kubelet server certificate signing requests; they have to be approved manually.
kubectl get csr
kubectl certificate approve csr-bjtp4
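If several server CSRs are pending at once, they can be approved in one pass rather than one by one; a minimal sketch:

# approve every CSR still in Pending state
kubectl get csr | grep Pending | awk '{print $1}' | xargs -r kubectl certificate approve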
?
13. Deploy the kube-proxy component
kube-proxy runs on all worker nodes; it watches the apiserver for changes to services and endpoints and creates routing rules to load-balance service traffic.
?
Create the kube-proxy certificate
Create the certificate signing request:
cd /opt/k8s/work
cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "4Paradigm"
    }
  ]
}
EOF
?
?
Generate the certificate and private key:
cfssl gencert -ca=/opt/k8s/work/ca.pem \
?-ca-key=/opt/k8s/work/ca-key.pem \
?-config=/opt/k8s/work/ca-config.json \
?-profile=kubernetes ?kube-proxy-csr.json | cfssljson -bare kube-proxy
?
?
Create and distribute the kubeconfig file
kubectl config set-cluster kubernetes \
 --certificate-authority=/opt/k8s/work/ca.pem \
 --embed-certs=true \
 --server=https://172.19.201.242:8443 \
 --kubeconfig=kube-proxy.kubeconfig
?
kubectl config set-credentials kube-proxy \
?--client-certificate=kube-proxy.pem \
?--client-key=kube-proxy-key.pem \
?--embed-certs=true \
?--kubeconfig=kube-proxy.kubeconfig
?
kubectl config set-context default \
?--cluster=kubernetes \
?--user=kube-proxy \
?--kubeconfig=kube-proxy.kubeconfig
?
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
?
Distribute the kubeconfig file (copy to the worker nodes):
scp kube-proxy.kubeconfig root@dev-k8s-node1:/etc/kubernetes/
?
?
Create the kube-proxy configuration file
cat > /etc/kubernetes/kube-proxy-config.yaml <<EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
  kubeconfig: "/etc/kubernetes/kube-proxy.kubeconfig"
bindAddress: 172.19.201.247
clusterCIDR: 10.10.0.0/16
healthzBindAddress: 172.19.201.247:10256
hostnameOverride: dev-k8s-node1
metricsBindAddress: 172.19.201.247:10249
mode: "ipvs"
EOF
?
Note: adjust the config file on each node, filling in that node's own hostname and addresses.
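One way to avoid hand-editing every node is to generate a per-node file from the template above; a minimal sketch, using the node name/IP pairs from the host table (the array and output file names are illustrative):

NODE_NAMES=(dev-k8s-node1 dev-k8s-node2 dev-k8s-node3)
NODE_IPS=(172.19.201.247 172.19.201.246 172.19.201.243)
for i in "${!NODE_NAMES[@]}"; do
  # swap the sample address/hostname for each node's own values, then copy the result over
  sed -e "s/172.19.201.247/${NODE_IPS[$i]}/g" -e "s/dev-k8s-node1/${NODE_NAMES[$i]}/g" \
    /etc/kubernetes/kube-proxy-config.yaml > kube-proxy-config-${NODE_NAMES[$i]}.yaml
  scp kube-proxy-config-${NODE_NAMES[$i]}.yaml root@${NODE_NAMES[$i]}:/etc/kubernetes/kube-proxy-config.yaml
done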
?
Create and distribute the kube-proxy config file for each node (copy to all worker nodes):
scp -pr /etc/kubernetes/kube-proxy-config.yaml root@dev-k8s-node1:/etc/kubernetes/
? ?
?
Create and distribute the kube-proxy systemd unit file
cat /etc/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
WorkingDirectory=/data/k8s/k8s/kube-proxy
ExecStart=/opt/k8s/bin/kube-proxy \
  --config=/etc/kubernetes/kube-proxy-config.yaml \
  --logtostderr=true \
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
?
The working directory must be created first
mkdir -p /data/k8s/k8s/kube-proxy
?
Start the kube-proxy service
systemctl daemon-reload && systemctl enable kube-proxy && systemctl restart kube-proxy
?
Check the result and make sure the status is active (running)
systemctl status kube-proxy|grep active
?
View the listening ports and metrics
netstat -lnpt|grep kube-proxy
?
14. Deploy the coredns add-on
Modify the configuration file
Unpack the downloaded kubernetes-server-linux-amd64.tar.gz, then unpack the kubernetes-src.tar.gz found inside it.
cd /opt/k8s/work/kubernetes/
tar -xzvf kubernetes-src.tar.gz
?
The coredns directory is cluster/addons/dns:
cd /opt/k8s/work/kubernetes/cluster/addons/dns/coredns
cp coredns.yaml.base coredns.yaml
source /opt/k8s/bin/environment.sh
sed -i -e "s/__PILLAR__DNS__DOMAIN__/${CLUSTER_DNS_DOMAIN}/" -e "s/__PILLAR__DNS__SERVER__/${CLUSTER_DNS_SVC_IP}/" coredns.yaml
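The sed command relies on two variables from environment.sh that are not shown earlier; a minimal sketch of the expected definitions, using this guide's service network (the DNS service IP matches the clusterDNS value in kubelet-config.yaml, and the variable names are assumptions about your environment.sh):

# appended to /opt/k8s/bin/environment.sh
export CLUSTER_DNS_DOMAIN="cluster.local"
export CLUSTER_DNS_SVC_IP="10.254.0.2"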
Create coredns
kubectl create -f coredns.yaml
Check that coredns works
[root@dev-k8s-master1 test]# kubectl get pods -n kube-system
NAME                                    READY   STATUS    RESTARTS   AGE
coredns-6dcf4d5b7b-tvn26                1/1     Running   5          17h
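A quick way to confirm that cluster DNS actually resolves is to run a one-off pod against it; a minimal sketch, assuming the node can pull the busybox image (the pod name is arbitrary):

# should return the kubernetes service IP (10.254.0.1) via coredns
kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default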
?
15. Deploy the ingress add-on
Download the source package
wget https://github.com/kubernetes/ingress-nginx/archive/nginx-0.20.0.tar.gz
tar -zxvf nginx-0.20.0.tar.gz
?
Enter the working directory
cd ingress-nginx-nginx-0.20.0/deploy
Create the ingress controller
kubectl create -f mandatory.yaml
?
cd /opt/k8s/work/ingress-nginx-nginx-0.20.0/deploy/provider/baremetal
Create the ingress service
kubectl create -f service-nodeport.yaml
Check that ingress-nginx is running
kubectl get pods -n ingress-nginx
?
16. Deploy the dashboard add-on
Modify the configuration file
cd /opt/k8s/work/kubernetes/cluster/addons/dashboard
?
Modify the service definition to set the type to NodePort, so that the dashboard can be reached from outside at NodeIP:NodePort;
?
cat dashboard-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  type: NodePort  # add this line
  selector:
    k8s-app: kubernetes-dashboard
  ports:
  - port: 443
    targetPort: 8443
?
Apply all the definition files
$ ls *.yaml
dashboard-configmap.yaml ?dashboard-controller.yaml ?dashboard-rbac.yaml ?dashboard-secret.yaml ?dashboard-service.yaml
?
$ kubectl apply -f .
?
View the assigned NodePort
$ kubectl get svc kubernetes-dashboard -n kube-system
?
Create a login token and a kubeconfig file for the Dashboard
By default the dashboard only supports token authentication (not client certificate authentication), so when logging in with a kubeconfig file the token has to be written into that file.
?
Create a login token
kubectl create sa dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
ADMIN_SECRET=$(kubectl get secrets -n kube-system | grep dashboard-admin | awk '{print $1}')
DASHBOARD_LOGIN_TOKEN=$(kubectl describe secret -n kube-system ${ADMIN_SECRET} | grep -E '^token' | awk '{print $2}')
echo ${DASHBOARD_LOGIN_TOKEN}
Log in to the Dashboard with the token printed above.
?
Create a KubeConfig file that uses the token
kubectl config set-cluster kubernetes \
 --certificate-authority=/etc/kubernetes/cert/ca.pem \
 --embed-certs=true \
 --server=https://172.19.201.242:8443 \
?--kubeconfig=dashboard.kubeconfig
?
# set client authentication parameters, using the token created above
kubectl config set-credentials dashboard_user \
?--token=${DASHBOARD_LOGIN_TOKEN} \
?--kubeconfig=dashboard.kubeconfig
?
# set context parameters
kubectl config set-context default \
?--cluster=kubernetes \
?--user=dashboard_user \
?--kubeconfig=dashboard.kubeconfig
?
# set the default context
kubectl config use-context default --kubeconfig=dashboard.kubeconfig
Log in to the Dashboard with the generated dashboard.kubeconfig.
17. Troubleshooting
When a new node is added to the cluster, pods created on the new node cannot get an IP and report an error like:
Warning  FailedCreatePodSandBox  72s (x26 over 6m40s)  kubelet, dev-k8s-master2  Failed create pod sandbox: rpc error: code = Unknown desc = failed pulling image "registry.cn-beijing.aliyuncs.com/k8s_images/pause-amd64:3.1": Error response from daemon: pull access denied for registry.cn-beijing.aliyuncs.com/k8s_images/pause-amd64, repository does not exist or may require 'docker login'
?
Solution:
Perform the following on the node:
docker pull lc13579443/pause-amd64
docker tag lc13579443/pause-amd64 registry.cn-beijing.aliyuncs.com/k8s_images/pause-amd64:3.1
Restart kubelet
systemctl daemon-reload && systemctl restart kubelet