
Binary Deployment of a Kubernetes Cluster: Reference Guide (v1.15.0)

Published: 2020-08-02 03:47:05  Author: 風(fēng)的影子001  Category: Systems Operations (系統(tǒng)運(yùn)維)

I. Basic Concepts

1. Overview

Kubernetes (commonly written "k8s") is Google's open-source container cluster management system. Its design goal is to provide a platform for automated deployment, scaling, and operation of application containers across clusters of hosts.

Kubernetes usually works together with the Docker container engine, orchestrating multiple host clusters that run Docker containers.

2. Features

a. Automated container deployment
b. Automated scale-up/scale-down of container workloads
c. Load balancing between containers
d. Fast rollouts and fast rollbacks

3. Components

3.1 Master node components

The master node runs four components: api-server, scheduler, controller-manager, and etcd.

APIServer: exposes the RESTful Kubernetes API and is the single entry point for management commands; every create, delete, update, or query of a resource is handled by the APIServer and then persisted to etcd.

scheduler: its responsibility is simple: schedule Pods onto suitable Nodes. Treating the scheduler as a black box, its input is a Pod plus a list of Nodes, and its output is a binding of that Pod to one Node, i.e. the decision to deploy the Pod there. Kubernetes ships with a default scheduling algorithm, but it also exposes an interface so users can implement their own.

controller-manager: if the APIServer handles the "front office" work, the controller manager takes care of the "back office". Each resource type generally corresponds to a controller, and the controller manager is responsible for managing these controllers. For example, when we create a Pod through the APIServer, the APIServer's job is done once the Pod is created; keeping the Pod's actual state consistent with the desired state from then on is the controller manager's responsibility.

etcd: a highly available key-value store that Kubernetes uses to persist the state of every resource, which is what makes the RESTful API possible.

3.2 Node components

Each Node runs three main modules: kubelet, kube-proxy, and the container runtime.

runtime: the container runtime environment; at this version Kubernetes supports the Docker and rkt container runtimes.

kube-proxy: implements service discovery and reverse proxying in Kubernetes. For reverse proxying, kube-proxy forwards TCP and UDP connections, distributing client traffic to the backend Pods of a Service using round-robin by default. For service discovery, kube-proxy uses etcd's watch mechanism to monitor changes to Service and Endpoint objects in the cluster and maintains a Service-to-Endpoint mapping, so changes to backend Pod IPs are transparent to callers. kube-proxy also supports session affinity.

kubelet: the Master's agent on each Node and the most important module on it. It maintains and manages all containers on that Node, except for containers not created through Kubernetes. In essence, it drives each Pod's running state toward the desired state.

3.3 Pod

A Pod is the smallest unit of scheduling in k8s. Each Pod runs one or more closely related application containers, which share the IP and volumes of a "pause" container; this hard-to-kill pause container serves as the Pod's root container, and its status represents the status of the whole container group. Once created, a Pod is stored in etcd, then scheduled by the Master and bound to a Node, where that Node's kubelet instantiates it.

Each Pod is assigned its own Pod IP; Pod IP + ContainerPort together form an endpoint.

3.4 Service

A Service exposes an application. Pods have a life cycle and their own IP addresses, and as Pods are created and destroyed, something must ensure the various applications can still find each other. That is where the Service comes in: a Service, defined in YAML or JSON, is a logical grouping of Pods selected by some policy. More importantly, the individual Pod IPs are exposed to the network through the Service.

II. Installation and Deployment

There are several ways to deploy Kubernetes; this article uses the binary method.

1. Environment

Hostname     IP              Packages installed                                        OS version
k8s-master   192.168.248.65  kube-apiserver, kube-controller-manager, kube-scheduler   Red Hat Enterprise Linux Server release 7.3
k8s-node1    192.168.248.66  etcd, kubelet, kube-proxy, flannel, docker                Red Hat Enterprise Linux Server release 7.3
k8s-node2    192.168.248.67  etcd, kubelet, kube-proxy, flannel, docker                Red Hat Enterprise Linux Server release 7.3
k8s-node3    192.168.248.68  etcd, kubelet, kube-proxy, flannel, docker                Red Hat Enterprise Linux Server release 7.3

Software versions and download links

Versions

kubernetes version v1.15.0

etcd version v3.3.10

flannel version v0.11.0

Download links

kubernetes: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.15.md#v1150

server binaries: https://dl.k8s.io/v1.15.0/kubernetes-server-linux-amd64.tar.gz

node binaries: https://dl.k8s.io/v1.15.0/kubernetes-node-linux-amd64.tar.gz

etcd: https://github.com/etcd-io/etcd/releases

flannel: https://github.com/coreos/flannel/releases

2. Host preparation

Synchronize the system time

# ntpdate time1.aliyun.com
# echo "*/5 * * * * /usr/sbin/ntpdate -s time1.aliyun.com" > /var/spool/cron/root

Set the hostnames

# hostnamectl --static set-hostname k8s-master
# hostnamectl --static set-hostname k8s-node1
# hostnamectl --static set-hostname k8s-node2
# hostnamectl --static set-hostname k8s-node3

Add hosts entries

[root@k8s-master ~]# cat /etc/hosts
192.168.248.65 k8s-master
192.168.248.66 k8s-node1
192.168.248.67 k8s-node2
192.168.248.68 k8s-node3

Stop and disable firewalld and SELinux

# systemctl stop firewalld
# systemctl disable firewalld
# setenforce 0
# vim /etc/sysconfig/selinux
  SELINUX=disabled

Disable swap

# swapoff -a && sysctl -w vm.swappiness=0
# sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
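The sed expression above comments out any fstab line containing " swap " so swap stays off across reboots. A quick way to sanity-check it against a scratch copy before touching the real /etc/fstab (the device names below are made up):

```shell
# Build a scratch fstab with a swap entry (hypothetical devices)
cat > /tmp/fstab.demo <<'EOF'
/dev/mapper/rhel-root /    xfs  defaults 0 0
/dev/mapper/rhel-swap swap swap defaults 0 0
EOF

# Same command as above, pointed at the scratch file
sed -i '/ swap / s/^\(.*\)$/#\1/g' /tmp/fstab.demo

# Only the swap line is commented out; the root filesystem line is untouched
grep '^#' /tmp/fstab.demo
```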

Set kernel parameters

# cat /etc/sysctl.d/kubernetes.conf
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
# sysctl --system   # load the new settings

3. Kubernetes cluster installation

Install docker-ce on all node machines

# wget -P /etc/yum.repos.d/ https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# yum makecache
# yum install docker-ce-18.06.2.ce-3.el7 -y
# systemctl start docker && systemctl enable docker

Create the installation directories

# mkdir /data/{install,ssl_config} -pv
# mkdir /data/ssl_config/{etcd,kubernetes} -pv
# mkdir /cloud/k8s/etcd/{bin,cfg,ssl} -pv
# mkdir /cloud/k8s/kubernetes/{bin,cfg,ssl} -pv
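Brace expansion can be nested, so the last two mkdir commands collapse into one. Demonstrated here under /tmp so it can be run harmlessly:

```shell
# One command creates bin/cfg/ssl under both etcd and kubernetes
mkdir -p /tmp/demo/cloud/k8s/{etcd,kubernetes}/{bin,cfg,ssl}
find /tmp/demo/cloud -type d | sort
```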

Add environment variables

vim /etc/profile
######Kubernetes########
export PATH=$PATH:/cloud/k8s/etcd/bin/:/cloud/k8s/kubernetes/bin/

4. Create the TLS certificates

Download the certificate tooling

[root@k8s-master ~]# wget -P /usr/local/bin/ https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
[root@k8s-master ~]# wget -P /usr/local/bin/ https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
[root@k8s-master ~]# wget -P /usr/local/bin/ https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
[root@k8s-master ~]# mv /usr/local/bin/cfssl_linux-amd64 /usr/local/bin/cfssl
[root@k8s-master ~]# mv /usr/local/bin/cfssljson_linux-amd64 /usr/local/bin/cfssljson
[root@k8s-master ~]# mv /usr/local/bin/cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
[root@k8s-master ~]# chmod +x /usr/local/bin/*

Create the etcd certificates

# etcd CA signing config
[root@k8s-master etcd]# pwd
/data/ssl_config/etcd
[root@k8s-master etcd]# cat ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}

# etcd CA CSR
[root@k8s-master etcd]# cat ca-csr.json
{
    "CN": "etcd CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}

# etcd server CSR
[root@k8s-master etcd]# cat server-csr.json
{
    "CN": "etcd",
    "hosts": [
    "k8s-node3",
    "k8s-node2",
    "k8s-node1",
    "192.168.248.66",
    "192.168.248.67",
    "192.168.248.68"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}

# generate the etcd CA certificate and key
# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
# generate the etcd server certificate
# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
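The hosts list in server-csr.json becomes the certificate's Subject Alternative Name (SAN) list; etcd peers reject TLS connections on names or IPs it does not cover, so it is worth confirming the SANs made it into server.pem (cfssl-certinfo -cert server.pem works too). The snippet below builds a throwaway self-signed certificate with one of the same SANs purely to illustrate the openssl check; it assumes OpenSSL 1.1.1+ for the -addext flag:

```shell
# Throwaway cert with a SAN entry (demo only; in practice inspect server.pem)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/demo-key.pem -out /tmp/demo-cert.pem \
  -subj "/CN=etcd" \
  -addext "subjectAltName=DNS:k8s-node1,IP:192.168.248.66"

# The same check you would run against the real server.pem:
openssl x509 -in /tmp/demo-cert.pem -noout -text | grep -A1 'Subject Alternative Name'
```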

Create the kubernetes certificates

# kubernetes CA signing config
[root@k8s-master kubernetes]# pwd
/data/ssl_config/kubernetes
[root@k8s-master kubernetes]# cat ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}

# CA CSR
[root@k8s-master kubernetes]# cat ca-csr.json
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}

# API server CSR (the hosts list must cover every name and IP the apiserver is reached by)
[root@k8s-master kubernetes]# cat server-csr.json
{
    "CN": "kubernetes",
    "hosts": [
      "10.0.0.1",
      "127.0.0.1",
      "192.168.248.65",
      "k8s-master",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}

# Kubernetes proxy CSR
[root@k8s-master kubernetes]# cat kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

# generate the CA certificate
# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
# generate the api-server certificate
# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
# generate the kube-proxy certificate
# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

5. Deploy the etcd cluster (on all node machines)

Unpack and install the etcd binaries

# tar -xvf etcd-v3.3.10-linux-amd64.tar.gz
# cp etcd-v3.3.10-linux-amd64/{etcd,etcdctl} /cloud/k8s/etcd/bin/

Write the etcd configuration files (ETCD_NAME must be unique per member and must match the name used for that member's URL in ETCD_INITIAL_CLUSTER)

[root@k8s-node1 ~]# cat /cloud/k8s/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.248.66:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.248.66:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.248.66:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.248.66:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.248.66:2380,etcd02=https://192.168.248.67:2380,etcd03=https://192.168.248.68:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

[root@k8s-node2 ~]# cat /cloud/k8s/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.248.67:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.248.67:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.248.67:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.248.67:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.248.66:2380,etcd02=https://192.168.248.67:2380,etcd03=https://192.168.248.68:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

[root@k8s-node3 ~]# cat /cloud/k8s/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd03"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.248.68:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.248.68:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.248.68:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.248.68:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.248.66:2380,etcd02=https://192.168.248.67:2380,etcd03=https://192.168.248.68:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

Create the etcd systemd unit

[root@k8s-node1 ~]# cat /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/cloud/k8s/etcd/cfg/etcd
ExecStart=/cloud/k8s/etcd/bin/etcd \
--name=${ETCD_NAME} \
--data-dir=${ETCD_DATA_DIR} \
--listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=/cloud/k8s/etcd/ssl/server.pem \
--key-file=/cloud/k8s/etcd/ssl/server-key.pem \
--peer-cert-file=/cloud/k8s/etcd/ssl/server.pem \
--peer-key-file=/cloud/k8s/etcd/ssl/server-key.pem \
--trusted-ca-file=/cloud/k8s/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/cloud/k8s/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Copy the generated etcd certificates to all node machines

[root@k8s-master etcd]# pwd
/data/ssl_config/etcd
[root@k8s-master etcd]# scp *.pem k8s-node1:/cloud/k8s/etcd/ssl/
[root@k8s-master etcd]# scp *.pem k8s-node2:/cloud/k8s/etcd/ssl/
[root@k8s-master etcd]# scp *.pem k8s-node3:/cloud/k8s/etcd/ssl/

Start the etcd service

systemctl daemon-reload
systemctl enable etcd
systemctl start etcd

Check cluster health (run on any one node)

[root@k8s-node1 ssl]# etcdctl --ca-file=/cloud/k8s/etcd/ssl/ca.pem --cert-file=/cloud/k8s/etcd/ssl/server.pem --key-file=/cloud/k8s/etcd/ssl/server-key.pem --endpoints="https://192.168.248.66:2379,https://192.168.248.67:2379,https://192.168.248.68:2379" cluster-health
member 2830381866015ef6 is healthy: got healthy result from https://192.168.248.67:2379
member 355a96308320dc2a is healthy: got healthy result from https://192.168.248.66:2379
member a9a44d5d05a31ce0 is healthy: got healthy result from https://192.168.248.68:2379
cluster is healthy


6. Deploy the flannel network (all node machines)

Write the pod network configuration into etcd (run on any one node)

etcdctl --ca-file=/cloud/k8s/etcd/ssl/ca.pem \
--cert-file=/cloud/k8s/etcd/ssl/server.pem \
--key-file=/cloud/k8s/etcd/ssl/server-key.pem \
--endpoints="https://192.168.248.66:2379,https://192.168.248.67:2379,https://192.168.248.68:2379" \
set /coreos.com/network/config '{ "Network": "172.18.0.0/16", "Backend": {"Type": "vxlan"}}'
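etcd stores the value as an opaque string, so a typo in the JSON only surfaces later when flanneld fails to parse it. A cheap pre-flight check is to run the exact payload through a JSON parser first:

```shell
# Validate the flannel network config before writing it into etcd
echo '{ "Network": "172.18.0.0/16", "Backend": {"Type": "vxlan"}}' | python3 -m json.tool
```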


View the network configuration stored in etcd

# etcdctl --ca-file=/cloud/k8s/etcd/ssl/ca.pem \
--cert-file=/cloud/k8s/etcd/ssl/server.pem \
--key-file=/cloud/k8s/etcd/ssl/server-key.pem \
--endpoints="https://192.168.248.66:2379,https://192.168.248.67:2379,https://192.168.248.68:2379" \
get /coreos.com/network/config

[root@k8s-node1 ssl]# etcdctl --ca-file=/cloud/k8s/etcd/ssl/ca.pem \
> --cert-file=/cloud/k8s/etcd/ssl/server.pem \
> --key-file=/cloud/k8s/etcd/ssl/server-key.pem \
> --endpoints="https://192.168.248.66:2379,https://192.168.248.67:2379,https://192.168.248.68:2379" \
> ls /coreos.com/network/subnets
/coreos.com/network/subnets/172.18.95.0-24
/coreos.com/network/subnets/172.18.22.0-24
/coreos.com/network/subnets/172.18.54.0-24


Unpack and install the flannel plugin

# tar xf flannel-v0.11.0-linux-amd64.tar.gz
# mv flanneld mk-docker-opts.sh /cloud/k8s/kubernetes/bin/

Configure flannel

[root@k8s-node1 cfg]# cat /cloud/k8s/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=https://192.168.248.66:2379,https://192.168.248.67:2379,https://192.168.248.68:2379 -etcd-cafile=/cloud/k8s/etcd/ssl/ca.pem -etcd-certfile=/cloud/k8s/etcd/ssl/server.pem -etcd-keyfile=/cloud/k8s/etcd/ssl/server-key.pem"

Create the flanneld systemd unit

[root@k8s-node1 cfg]# cat /usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/cloud/k8s/kubernetes/cfg/flanneld
ExecStart=/cloud/k8s/kubernetes/bin/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStartPost=/cloud/k8s/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target

Configure Docker to start on the flannel-assigned subnet

[root@k8s-node1 cfg]# cat /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
#TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
# restart the docker process if it exits prematurely
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target

Start the services

systemctl daemon-reload
systemctl start flanneld
systemctl enable flanneld
systemctl restart docker

Verify the flannel network

From each node, ping the docker0 IP address of the other nodes; if the pings succeed, the flannel network plugin is deployed correctly.
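What "success" looks like on each node: flanneld records its subnet lease in /run/flannel/subnet.env, and mk-docker-opts.sh turns that into the range docker0 is moved onto. Below is a sample of that file (values match the subnet leases listed earlier; they will differ per node) and how to read it:

```shell
# Sample /run/flannel/subnet.env as written by flanneld (illustrative values)
cat > /tmp/subnet.env <<'EOF'
FLANNEL_NETWORK=172.18.0.0/16
FLANNEL_SUBNET=172.18.95.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
EOF

# Source it to see the per-node pod subnet that docker0 will use
. /tmp/subnet.env
echo "pod subnet on this node: ${FLANNEL_SUBNET}"
```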


7. Deploy the master node components

Unpack the master package

# tar xf kubernetes-server-linux-amd64.tar.gz
# cp kubernetes/server/bin/{kube-scheduler,kube-apiserver,kube-controller-manager,kubectl} /cloud/k8s/kubernetes/bin/

Copy the kubernetes certificates into place

# cp /data/ssl_config/kubernetes/*.pem /cloud/k8s/kubernetes/ssl/

Deploy the kube-apiserver component

Create the TLS Bootstrapping token

[root@k8s-master cfg]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '   # generate a random token string
[root@k8s-master cfg]# pwd
/cloud/k8s/kubernetes/cfg
[root@k8s-master cfg]# cat token.csv
a081e7ba91d597006cbdacfa8ee114ac,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
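The random string and token.csv can be produced in one step. A sketch (written to /tmp here; the real file lives at /cloud/k8s/kubernetes/cfg/token.csv):

```shell
# 16 random bytes -> 32 hex chars, then the token.csv line:
# format is token,user,uid,"group"
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
echo "${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,\"system:kubelet-bootstrap\"" > /tmp/token.csv
cat /tmp/token.csv
```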

apiserver configuration file

[root@k8s-master cfg]# cat kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.248.66:2379,https://192.168.248.67:2379,https://192.168.248.68:2379 \
--bind-address=192.168.248.65 \
--secure-port=6443 \
--advertise-address=192.168.248.65 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth \
--token-auth-file=/cloud/k8s/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/cloud/k8s/kubernetes/ssl/server.pem \
--tls-private-key-file=/cloud/k8s/kubernetes/ssl/server-key.pem \
--client-ca-file=/cloud/k8s/kubernetes/ssl/ca.pem \
--service-account-key-file=/cloud/k8s/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/cloud/k8s/etcd/ssl/ca.pem \
--etcd-certfile=/cloud/k8s/etcd/ssl/server.pem \
--etcd-keyfile=/cloud/k8s/etcd/ssl/server-key.pem"

kube-apiserver systemd unit

[root@k8s-master cfg]# cat /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/cloud/k8s/kubernetes/cfg/kube-apiserver
ExecStart=/cloud/k8s/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

Start the kube-apiserver service

[root@k8s-master cfg]# systemctl daemon-reload
[root@k8s-master cfg]# systemctl enable kube-apiserver
[root@k8s-master cfg]# systemctl start kube-apiserver
[root@k8s-master cfg]# ps -ef | grep kube-apiserver
root       1050      1  4 09:02         00:25:21 /cloud/k8s/kubernetes/bin/kube-apiserver --logtostderr=true --v=4 --etcd-servers=https://192.168.248.66:2379,https://192.168.248.67:2379,https://192.168.248.68:2379 --bind-address=192.168.248.65 --secure-port=6443 --advertise-address=192.168.248.65 --allow-privileged=true --service-cluster-ip-range=10.0.0.0/24 --admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction --authorization-mode=RBAC,Node --enable-bootstrap-token-auth --token-auth-file=/cloud/k8s/kubernetes/cfg/token.csv --service-node-port-range=30000-50000 --tls-cert-file=/cloud/k8s/kubernetes/ssl/server.pem --tls-private-key-file=/cloud/k8s/kubernetes/ssl/server-key.pem --client-ca-file=/cloud/k8s/kubernetes/ssl/ca.pem --service-account-key-file=/cloud/k8s/kubernetes/ssl/ca-key.pem --etcd-cafile=/cloud/k8s/etcd/ssl/ca.pem --etcd-certfile=/cloud/k8s/etcd/ssl/server.pem --etcd-keyfile=/cloud/k8s/etcd/ssl/server-key.pem
root       1888   1083  0 18:15 pts/0    00:00:00 grep --color=auto kube-apiserver

Deploy the kube-scheduler component

Create the kube-scheduler configuration file

[root@k8s-master cfg]# cat /cloud/k8s/kubernetes/cfg/kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect"

Create the kube-scheduler systemd unit

[root@k8s-master cfg]# cat /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/cloud/k8s/kubernetes/cfg/kube-scheduler
ExecStart=/cloud/k8s/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

Start the kube-scheduler service

[root@k8s-master cfg]# systemctl daemon-reload
[root@k8s-master cfg]# systemctl enable kube-scheduler.service
[root@k8s-master cfg]# systemctl start kube-scheduler.service
[root@k8s-master cfg]# ps -ef | grep kube-scheduler
root       1716      1  0 16:12         00:00:19 /cloud/k8s/kubernetes/bin/kube-scheduler --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect
root       1897   1083  0 18:21 pts/0    00:00:00 grep --color=auto kube-scheduler

Deploy the kube-controller-manager component

Create the kube-controller-manager configuration file

[root@k8s-master cfg]# cat /cloud/k8s/kubernetes/cfg/kube-controller-manager
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.0.0.0/24 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/cloud/k8s/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/cloud/k8s/kubernetes/ssl/ca-key.pem \
--root-ca-file=/cloud/k8s/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/cloud/k8s/kubernetes/ssl/ca-key.pem"

Create the kube-controller-manager systemd unit

[root@k8s-master cfg]# cat /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/cloud/k8s/kubernetes/cfg/kube-controller-manager
ExecStart=/cloud/k8s/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

Start the kube-controller-manager service

[root@k8s-master cfg]# systemctl daemon-reload
[root@k8s-master cfg]# systemctl enable kube-controller-manager
[root@k8s-master cfg]# systemctl start kube-controller-manager
[root@k8s-master cfg]# ps -ef | grep kube-controller-manager
root       1709      1  2 16:12         00:03:11 /cloud/k8s/kubernetes/bin/kube-controller-manager --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect=true --address=127.0.0.1 --service-cluster-ip-range=10.0.0.0/24 --cluster-name=kubernetes --cluster-signing-cert-file=/cloud/k8s/kubernetes/ssl/ca.pem --cluster-signing-key-file=/cloud/k8s/kubernetes/ssl/ca-key.pem --root-ca-file=/cloud/k8s/kubernetes/ssl/ca.pem --service-account-private-key-file=/cloud/k8s/kubernetes/ssl/ca-key.pem
root       1907   1083  0 18:29 pts/0    00:00:00 grep --color=auto kube-controller-manager

Check component status

[root@k8s-master cfg]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}

8. Deploy the node components (on all node machines)

Unpack the node package

[root@k8s-node1 install]# tar xf kubernetes-node-linux-amd64.tar.gz
[root@k8s-node1 install]# cp kubernetes/node/bin/{kubelet,kube-proxy} /cloud/k8s/kubernetes/bin/

Create the kubelet bootstrap.kubeconfig file

[root@k8s-master kubernetes]# cat environment.sh
# create the kubelet bootstrapping kubeconfig
BOOTSTRAP_TOKEN=a081e7ba91d597006cbdacfa8ee114ac
KUBE_APISERVER="https://192.168.248.65:6443"
# set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=./ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

# set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig

# set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

# use the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

# Run environment.sh to generate bootstrap.kubeconfig

Create the kubelet.kubeconfig file

[root@k8s-master kubernetes]# cat envkubelet.kubeconfig.sh
# create the kubelet kubeconfig
BOOTSTRAP_TOKEN=a081e7ba91d597006cbdacfa8ee114ac
KUBE_APISERVER="https://192.168.248.65:6443"

# set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=./ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kubelet.kubeconfig

# set client authentication parameters
kubectl config set-credentials kubelet \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=kubelet.kubeconfig

# set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet \
  --kubeconfig=kubelet.kubeconfig

# use the default context
kubectl config use-context default --kubeconfig=kubelet.kubeconfig

# Run envkubelet.kubeconfig.sh to generate kubelet.kubeconfig

Create the kube-proxy.kubeconfig file

[root@k8s-master kubernetes]# cat env_proxy.sh
# create the kube-proxy kubeconfig file

BOOTSTRAP_TOKEN=a081e7ba91d597006cbdacfa8ee114ac
KUBE_APISERVER="https://192.168.248.65:6443"

kubectl config set-cluster kubernetes \
  --certificate-authority=./ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
  --client-certificate=./kube-proxy.pem \
  --client-key=./kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

# Run env_proxy.sh to generate kube-proxy.kubeconfig

Copy the generated kubeconfig files to all node machines

[root@k8s-master kubernetes]# scp bootstrap.kubeconfig kubelet.kubeconfig kube-proxy.kubeconfig k8s-node1:/cloud/k8s/kubernetes/cfg/
[root@k8s-master kubernetes]# scp bootstrap.kubeconfig kubelet.kubeconfig kube-proxy.kubeconfig k8s-node2:/cloud/k8s/kubernetes/cfg/
[root@k8s-master kubernetes]# scp bootstrap.kubeconfig kubelet.kubeconfig kube-proxy.kubeconfig k8s-node3:/cloud/k8s/kubernetes/cfg/
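The three copies differ only in the hostname, so they collapse into a loop. Shown in dry-run form (echo prefixed) so the expansion can be seen without live hosts; drop the echo to actually copy:

```shell
# Dry run: prints the scp command for each node
for node in k8s-node1 k8s-node2 k8s-node3; do
  echo scp bootstrap.kubeconfig kubelet.kubeconfig kube-proxy.kubeconfig \
    "${node}:/cloud/k8s/kubernetes/cfg/"
done
```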

Create the kubelet parameter configuration file on every node machine

[root@k8s-node1 cfg]# cat kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.248.66
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.0.0.2"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true

[root@k8s-node2 cfg]# cat kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.248.67
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.0.0.2"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true

[root@k8s-node3 cfg]# cat kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.248.68
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.0.0.2"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true

Create the kubelet option files

[root@k8s-node1 cfg]# cat /cloud/k8s/kubernetes/cfg/kubelet
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=k8s-node1 \
--kubeconfig=/cloud/k8s/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/cloud/k8s/kubernetes/cfg/bootstrap.kubeconfig \
--config=/cloud/k8s/kubernetes/cfg/kubelet.config \
--cert-dir=/cloud/k8s/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"

[root@k8s-node2 cfg]# cat /cloud/k8s/kubernetes/cfg/kubelet
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=k8s-node2 \
--kubeconfig=/cloud/k8s/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/cloud/k8s/kubernetes/cfg/bootstrap.kubeconfig \
--config=/cloud/k8s/kubernetes/cfg/kubelet.config \
--cert-dir=/cloud/k8s/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"

[root@k8s-node3 cfg]# cat /cloud/k8s/kubernetes/cfg/kubelet
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=k8s-node3 \
--kubeconfig=/cloud/k8s/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/cloud/k8s/kubernetes/cfg/bootstrap.kubeconfig \
--config=/cloud/k8s/kubernetes/cfg/kubelet.config \
--cert-dir=/cloud/k8s/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"

Create the kubelet systemd unit

[root@k8s-node1 cfg]# cat /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/cloud/k8s/kubernetes/cfg/kubelet
ExecStart=/cloud/k8s/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target

Bind the kubelet-bootstrap user to the system:node-bootstrapper cluster role (without this binding the kubelet bootstrap will not succeed)

kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap

Start the kubelet service (all node machines)

[root@k8s-node1 cfg]# systemctl daemon-reload
[root@k8s-node1 cfg]# systemctl enable kubelet
[root@k8s-node1 cfg]# systemctl start kubelet
[root@k8s-node1 cfg]# ps -ef | grep kubelet
root       3306      1  2 09:02         00:14:47 /cloud/k8s/kubernetes/bin/kubelet --logtostderr=true --v=4 --hostname-override=k8s-node1 --kubeconfig=/cloud/k8s/kubernetes/cfg/kubelet.kubeconfig --bootstrap-kubeconfig=/cloud/k8s/kubernetes/cfg/bootstrap.kubeconfig --config=/cloud/k8s/kubernetes/cfg/kubelet.config --cert-dir=/cloud/k8s/kubernetes/ssl --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0
root      87181  12020  0 19:22 pts/0    00:00:00 grep --color=auto kubelet

On the master, approve the kubelet CSR requests

kubectl get csr
kubectl certificate approve $NAME
Each node is done once its CSR status becomes Approved,Issued.
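With three nodes there are three CSRs to approve one by one; `kubectl get csr -o name | xargs -r -n1 kubectl certificate approve` handles them all in one pass (this assumes every pending CSR should in fact be approved). The pipeline shape is shown below with echo standing in for kubectl so it can be exercised anywhere:

```shell
# Stand-in for: kubectl get csr -o name | xargs -r -n1 kubectl certificate approve
# (csr names below are hypothetical)
printf 'csr/node-csr-aaa\ncsr/node-csr-bbb\n' \
  | xargs -r -n1 echo kubectl certificate approve
```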

Check cluster and node status

[root@k8s-master kubernetes]# kubectl get cs,node
NAME                                 STATUS    MESSAGE             ERROR
componentstatus/controller-manager   Healthy   ok
componentstatus/scheduler            Healthy   ok
componentstatus/etcd-2               Healthy   {"health":"true"}
componentstatus/etcd-0               Healthy   {"health":"true"}
componentstatus/etcd-1               Healthy   {"health":"true"}

NAME             STATUS   ROLES    AGE    VERSION
node/k8s-node1   Ready    <none>   4d2h   v1.15.0
node/k8s-node2   Ready    <none>   4d2h   v1.15.0
node/k8s-node3   Ready    <none>   4d2h   v1.15.0

部署 node kube-proxy 組件

[root@k8s-node1?cfg]#?cat?/cloud/k8s/kubernetes/cfg/kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true?\
--v=4?\
--hostname-override=k8s-node1?\
--cluster-cidr=10.0.0.0/24?\
--kubeconfig=/cloud/k8s/kubernetes/cfg/kube-proxy.kubeconfig"

創(chuàng)建kube-proxy啟動文件

[root@k8s-node1 cfg]# cat /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=/cloud/k8s/kubernetes/cfg/kube-proxy
ExecStart=/cloud/k8s/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

Start the kube-proxy service

[root@k8s-node1 cfg]# systemctl daemon-reload
[root@k8s-node1 cfg]# systemctl enable kube-proxy
[root@k8s-node1 cfg]# systemctl start kube-proxy
[root@k8s-node1 cfg]# ps -ef |grep kube-proxy
root        966      1  0 09:02 ?        00:01:20 /cloud/k8s/kubernetes/bin/kube-proxy --logtostderr=true --v=4 --hostname-override=k8s-node1 --cluster-cidr=10.0.0.0/24 --kubeconfig=/cloud/k8s/kubernetes/cfg/kube-proxy.kubeconfig
root      87093  12020  0 19:22 pts/0    00:00:00 grep --color=auto kube-proxy
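Beyond checking the process with `ps`, kube-proxy can also be probed over HTTP: by default it serves a health endpoint on port 10256 (configurable via `--healthz-bind-address`/`--healthz-port`). This probe is a sketch added here, not one of the guide's original steps.

```shell
# Validate the HTTP status code returned by kube-proxy's healthz endpoint:
# only 200 counts as healthy.
healthz_ok() {
  [ "$1" = "200" ]
}

# Real usage on a node:
#   code=$(curl -s -o /dev/null -w '%{http_code}' http://127.0.0.1:10256/healthz)
#   healthz_ok "$code" && echo "kube-proxy healthy"
```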

Deploy the CoreDNS add-on

[root@k8s-master ~]# cat coredns.yaml
# Warning: This is a file generated from the base underscore template file: coredns.yaml.base

apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
      kubernetes.io/cluster-service: "true"
      addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
  labels:
      addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            upstream
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      serviceAccountName: coredns
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      containers:
      - name: coredns
        image: coredns/coredns:1.3.1
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.0.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
[root@k8s-master ~]# kubectl apply -f coredns.yaml
serviceaccount/coredns unchanged
clusterrole.rbac.authorization.k8s.io/system:coredns unchanged
clusterrolebinding.rbac.authorization.k8s.io/system:coredns unchanged
configmap/coredns unchanged
deployment.extensions/coredns unchanged
service/kube-dns unchanged

[root@k8s-master ~]# kubectl get deployment -n kube-system
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
coredns   3/3     3            3           33h
[root@k8s-master ~]# kubectl get deployment -n kube-system -o wide
NAME      READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                  SELECTOR
coredns   3/3     3            3           33h   coredns      coredns/coredns:1.3.1   k8s-app=kube-dns
[root@k8s-master ~]# kubectl get pod -n kube-system -o wide
NAME                      READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE   READINESS GATES
coredns-b49c586cf-nwzv6   1/1     Running   1          33h   172.18.54.3   k8s-node3   <none>           <none>
coredns-b49c586cf-qv5b9   1/1     Running   1          33h   172.18.22.3   k8s-node1   <none>           <none>
coredns-b49c586cf-rcqhc   1/1     Running   1          33h   172.18.95.2   k8s-node2   <none>           <none>
[root@k8s-master ~]# kubectl get svc -n kube-system -o wide
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE   SELECTOR
kube-dns   ClusterIP   10.0.0.2     <none>        53/UDP,53/TCP   33h   k8s-app=kube-dns
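Seeing the pods Running only proves CoreDNS started; a functional check resolves a service name from inside the cluster. The sketch below runs `nslookup` in a throwaway busybox pod; the `busybox:1.28` image and the parsing helper are illustrative assumptions, not part of the original guide.

```shell
# Extract the resolved address from nslookup output. The first IPv4 seen
# is usually the DNS server itself (10.0.0.2 here), so we keep the last
# IPv4 token, which is the answer for the queried name.
resolved_ip() {
  awk '{ for (i = 1; i <= NF; i++)
           if ($i ~ /^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$/) last = $i }
       END { print last }'
}

# Real usage:
#   kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never \
#     -- nslookup kubernetes.default
#   # On a healthy cluster the query is answered by kube-dns at 10.0.0.2.
```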

This completes the bare-bones deployment of Kubernetes v1.15.0.
