kube-controller-manager is a master-node component. The kube-controller-manager cluster consists of 3 nodes; after startup, a leader node is produced through a competitive election, while the other nodes stay blocked in standby. When the leader becomes unavailable, the remaining nodes hold a new election to produce a new leader, which keeps the service highly available.
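The current leader can be inspected at any time: in this Kubernetes version (v1.14) the lease is recorded as an annotation on the kube-controller-manager endpoints object in kube-system. A minimal sketch, assuming kubectl access to the cluster:

```shell
# Query the leader-election annotation; the holderIdentity field names the current leader node.
kubectl -n kube-system get endpoints kube-controller-manager \
  -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}' \
  | sed -n 's/.*"holderIdentity":"\([^"]*\)".*/\1/p'
```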
Note: all operations here are executed on the devops machine via the ansible tool. kube-controller-manager uses certificates in the following two situations: when communicating with kube-apiserver's secure port, and when serving Prometheus-format metrics on its own secure port (https, 10257).
#################### Variable parameter setting ######################
KUBE_NAME=kube-controller-manager
K8S_INSTALL_PATH=/data/apps/k8s/kubernetes
K8S_BIN_PATH=${K8S_INSTALL_PATH}/sbin
K8S_LOG_DIR=${K8S_INSTALL_PATH}/logs
K8S_CONF_PATH=/etc/k8s/kubernetes
KUBE_CONFIG_PATH=/etc/k8s/kubeconfig
CA_DIR=/etc/k8s/ssl
SOFTWARE=/root/software
VERSION=v1.14.2
PACKAGE="kubernetes-server-${VERSION}-linux-amd64.tar.gz"
DOWNLOAD_URL="https://github.com/devops-apps/download/raw/master/kubernetes/${PACKAGE}"
ETH_INTERFACE=eth2
LISTEN_IP=$(ifconfig | grep -A 1 ${ETH_INTERFACE} |grep inet |awk '{print $2}')
USER=k8s
SERVICE_CIDR=10.254.0.0/22
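Parsing ifconfig output is fragile across distributions; an alternative sketch for deriving LISTEN_IP with iproute2 (the interface name eth2 is the same assumption as above):

```shell
# Derive LISTEN_IP from iproute2's single-line output instead of ifconfig.
ETH_INTERFACE=eth2
LISTEN_IP=$(ip -4 -o addr show dev ${ETH_INTERFACE} 2>/dev/null | awk '{print $4}' | cut -d/ -f1)
echo "${LISTEN_IP}"
```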
Visit the official kubernetes GitHub address and download a stable release package to the local machine;
wget $DOWNLOAD_URL -P $SOFTWARE
Distribute the kubernetes package to each master node server;
sudo ansible master_k8s_vgs -m copy -a "src=${SOFTWARE}/$PACKAGE dest=${SOFTWARE}/" -b
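It may be worth verifying the copies before unpacking; a sketch comparing sha256 checksums (host group and paths are the ones defined above):

```shell
SOFTWARE=/root/software
PACKAGE=kubernetes-server-v1.14.2-linux-amd64.tar.gz
# Checksum of the local copy on the devops machine.
sha256sum ${SOFTWARE}/${PACKAGE} | awk '{print $1}'
# Checksums of the distributed copies; every line should match the one above.
sudo ansible master_k8s_vgs -m shell -a "sha256sum ${SOFTWARE}/${PACKAGE}" -b
```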
### 1.Check if the install directory exists.
if [ ! -d "$K8S_BIN_PATH" ]; then
mkdir -p $K8S_BIN_PATH
fi
if [ ! -d "$K8S_LOG_DIR/$KUBE_NAME" ]; then
mkdir -p $K8S_LOG_DIR/$KUBE_NAME
fi
if [ ! -d "$K8S_CONF_PATH" ]; then
mkdir -p $K8S_CONF_PATH
fi
if [ ! -d "$KUBE_CONFIG_PATH" ]; then
mkdir -p $KUBE_CONFIG_PATH
fi
### 2.Install kube-controller-manager binary of kubernetes.
if [ ! -f "$SOFTWARE/kubernetes-server-${VERSION}-linux-amd64.tar.gz" ]; then
wget $DOWNLOAD_URL -P $SOFTWARE >>/tmp/install.log 2>&1
fi
cd $SOFTWARE && tar -xzf kubernetes-server-${VERSION}-linux-amd64.tar.gz -C ./
cp -fp kubernetes/server/bin/$KUBE_NAME $K8S_BIN_PATH
ln -sf $K8S_BIN_PATH/$KUBE_NAME /usr/local/bin
chown -R $USER:$USER $K8S_INSTALL_PATH
chmod -R 755 $K8S_INSTALL_PATH
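A quick sanity check that the binary landed and the symlink resolves (paths are the variables defined above):

```shell
K8S_BIN_PATH=/data/apps/k8s/kubernetes/sbin
KUBE_NAME=kube-controller-manager
# The installed binary should be executable and report its version.
test -x "${K8S_BIN_PATH}/${KUBE_NAME}" && "${K8S_BIN_PATH}/${KUBE_NAME}" --version
# The symlink should point back at the install path.
test -L "/usr/local/bin/${KUBE_NAME}" && readlink "/usr/local/bin/${KUBE_NAME}"
```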
cd ${CA_DIR}
sudo ansible master_k8s_vgs -m copy -a "src=ca.pem dest=${CA_DIR}/" -b
sudo ansible master_k8s_vgs -m copy -a "src=ca-key.pem dest=${CA_DIR}/" -b
sudo ansible master_k8s_vgs -m copy -a \
"src=kube-controller-manager.pem dest=${CA_DIR}/" -b
sudo ansible master_k8s_vgs -m copy -a \
"src=kube-controller-manager-key.pem dest=${CA_DIR}/" -b
kube-controller-manager uses a kubeconfig file to connect to the apiserver service; the file provides the apiserver address, an embedded CA certificate, and the kube-controller-manager client certificate:
cd $KUBE_CONFIG_PATH
sudo ansible master_k8s_vgs -m copy -a \
"src=kube-controller-manager.kubeconfig dest=$KUBE_CONFIG_PATH/" -b
Note: if the kubeconfig and certificate files for each component were already synced in a previous section, this step can be skipped;
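To confirm the distributed kubeconfig points at the right apiserver, the embedded server address can be printed; a sketch (the grep assumes the standard kubeconfig YAML layout):

```shell
KUBE_CONFIG_PATH=/etc/k8s/kubeconfig
# Print the apiserver address embedded in the kubeconfig.
grep 'server:' ${KUBE_CONFIG_PATH}/kube-controller-manager.kubeconfig
```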
cat >/usr/lib/systemd/system/${KUBE_NAME}.service<<EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
User=${USER}
WorkingDirectory=${K8S_INSTALL_PATH}
ExecStart=${K8S_BIN_PATH}/${KUBE_NAME} \\
--port=10252 \\
--secure-port=10257 \\
--bind-address=${LISTEN_IP} \\
--address=127.0.0.1 \\
--kubeconfig=${KUBE_CONFIG_PATH}/${KUBE_NAME}.kubeconfig \\
--authentication-kubeconfig=${KUBE_CONFIG_PATH}/${KUBE_NAME}.kubeconfig \\
--authorization-kubeconfig=${KUBE_CONFIG_PATH}/${KUBE_NAME}.kubeconfig \\
--client-ca-file=${CA_DIR}/ca.pem \\
--service-cluster-ip-range=${SERVICE_CIDR} \\
--cluster-name=kubernetes \\
--cluster-signing-cert-file=${CA_DIR}/ca.pem \\
--cluster-signing-key-file=${CA_DIR}/ca-key.pem \\
--root-ca-file=${CA_DIR}/ca.pem \\
--service-account-private-key-file=${CA_DIR}/ca-key.pem \\
--leader-elect=true \\
--feature-gates=RotateKubeletServerCertificate=true \\
--horizontal-pod-autoscaler-use-rest-clients=true \\
--horizontal-pod-autoscaler-sync-period=10s \\
--concurrent-service-syncs=2 \\
--kube-api-qps=1000 \\
--kube-api-burst=2000 \\
--concurrent-gc-syncs=30 \\
--concurrent-deployment-syncs=10 \\
--terminated-pod-gc-threshold=10000 \\
--controllers=*,bootstrapsigner,tokencleaner \\
--requestheader-allowed-names="" \\
--requestheader-client-ca-file=${CA_DIR}/ca.pem \\
--requestheader-extra-headers-prefix="X-Remote-Extra-" \\
--requestheader-group-headers=X-Remote-Group \\
--requestheader-username-headers=X-Remote-User \\
--tls-cert-file=${CA_DIR}/kube-controller-manager.pem \\
--tls-private-key-file=${CA_DIR}/kube-controller-manager-key.pem \\
--use-service-account-credentials=true \\
--alsologtostderr=true \\
--logtostderr=false \\
--log-dir=${K8S_LOG_DIR}/${KUBE_NAME} \\
--flex-volume-plugin-dir=${K8S_INSTALL_PATH}/libexec/kubernetes \\
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
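The unit file above is generated locally; it still needs to be distributed and the service started on every master. A sketch using the same host group as the earlier ansible commands:

```shell
KUBE_NAME=kube-controller-manager
# Distribute the unit file to all masters.
sudo ansible master_k8s_vgs -m copy \
  -a "src=/usr/lib/systemd/system/${KUBE_NAME}.service dest=/usr/lib/systemd/system/" -b
# Reload systemd and start the service everywhere.
sudo ansible master_k8s_vgs -m shell \
  -a "systemctl daemon-reload && systemctl enable ${KUBE_NAME} && systemctl restart ${KUBE_NAME}" -b
```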
kube-controller-manager listens on ports 10252 and 10257; both expose /metrics and /healthz.
sudo netstat -ntlp | grep kube-con
tcp 0 0 127.0.0.1:10252 0.0.0.0:* LISTEN 2450/kube-controlle
tcp 0 0 10.10.10.22:10257 0.0.0.0:* LISTEN 2450/kube-controlle
Note: many installation documents close the insecure port and keep only the secure one, which makes the cluster status check report the error below. When you run kubectl get cs, the apiserver sends its health-check requests to 127.0.0.1. When controller-manager and scheduler run in cluster mode, they may not be on the same machine as kube-apiserver and are accessed over https, so controller-manager or scheduler is reported as Unhealthy even though they are working normally. This produces the error shown below, while the cluster is in fact healthy;
kubectl get componentstatuses
NAME STATUS MESSAGE ERROR
controller-manager Unhealthy dial tcp 127.0.0.1:10252: connect: connection refused
scheduler Unhealthy dial tcp 127.0.0.1:10251: connect: connection refused
etcd-0 Healthy {"health":"true"}
etcd-2 Healthy {"health":"true"}
etcd-1 Healthy {"health":"true"}
The normal output should be:
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-2 Healthy {"health":"true"}
etcd-1 Healthy {"health":"true"}
etcd-0 Healthy {"health":"true"}
Check whether the service is running:
systemctl status kube-controller-manager|grep Active
Make sure the status is active (running); otherwise check the logs to find the cause:
sudo journalctl -u kube-controller-manager
Note: the following commands are executed on a kube-controller-manager node.
Access via https:
curl -s --cacert /opt/k8s/work/ca.pem \
--cert /opt/k8s/work/admin.pem \
--key /opt/k8s/work/admin-key.pem \
https://10.10.10.22:10257/metrics |head
Access via http:
curl -s http://127.0.0.1:10252/metrics |head
The ClusterRole system:kube-controller-manager has very limited permissions; it can only create resource objects such as secrets and serviceaccounts. The permissions of the individual controllers are split out into the ClusterRoles system:controller:XXX:
$ kubectl describe clusterrole system:kube-controller-manager
Name: system:kube-controller-manager
Labels: kubernetes.io/bootstrapping=rbac-defaults
Annotations: rbac.authorization.kubernetes.io/autoupdate=true
PolicyRule:
Resources Non-Resource URLs Resource Names Verbs
--------- ----------------- -------------- -----
secrets [] [] [create delete get update]
endpoints [] [] [create get update]
serviceaccounts [] [] [create get update]
events [] [] [create patch update]
tokenreviews.authentication.k8s.io [] [] [create]
subjectaccessreviews.authorization.k8s.io [] [] [create]
configmaps [] [] [get]
namespaces [] [] [get]
*.* [] [] [list watch]
The "--use-service-account-credentials=true" flag must be added to kube-controller-manager's startup parameters, so that the main controller creates a corresponding ServiceAccount XXX-controller for each controller. The built-in ClusterRoleBinding system:controller:XXX then grants each XXX-controller ServiceAccount the permissions of the matching ClusterRole system:controller:XXX.
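The effect of the flag can be confirmed by listing the per-controller ServiceAccounts in kube-system; a sketch, assuming kubectl access:

```shell
# Each controller gets a ServiceAccount named XXX-controller in kube-system.
kubectl get serviceaccounts -n kube-system | grep 'controller'
```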
$ kubectl get clusterrole|grep controller
system:controller:attachdetach-controller 17d
system:controller:certificate-controller 17d
system:controller:clusterrole-aggregation-controller 17d
system:controller:cronjob-controller 17d
system:controller:daemon-set-controller 17d
system:controller:deployment-controller 17d
system:controller:disruption-controller 17d
system:controller:endpoint-controller 17d
system:controller:expand-controller 17d
system:controller:generic-garbage-collector 17d
system:controller:horizontal-pod-autoscaler 17d
system:controller:job-controller 17d
system:controller:namespace-controller 17d
system:controller:node-controller 17d
system:controller:persistent-volume-binder 17d
system:controller:pod-garbage-collector 17d
system:controller:pv-protection-controller 17d
system:controller:pvc-protection-controller 17d
system:controller:replicaset-controller 17d
system:controller:replication-controller 17d
system:controller:resourcequota-controller 17d
system:controller:route-controller 17d
system:controller:service-account-controller 17d
system:controller:service-controller 17d
system:controller:statefulset-controller 17d
system:controller:ttl-controller 17d
system:kube-controller-manager 17d
Take the deployment controller as an example:
$ kubectl describe clusterrole system:controller:deployment-controller
Name: system:controller:deployment-controller
Labels: kubernetes.io/bootstrapping=rbac-defaults
Annotations: rbac.authorization.kubernetes.io/autoupdate=true
PolicyRule:
Resources Non-Resource URLs Resource Names Verbs
--------- ----------------- -------------- -----
replicasets.apps [] [] [create delete get list patch update watch]
replicasets.extensions [] [] [create delete get list patch update watch]
events [] [] [create patch update]
pods [] [] [get list update watch]
deployments.apps [] [] [get list update watch]
deployments.extensions [] [] [get list update watch]
deployments.apps/finalizers [] [] [update]
deployments.apps/status [] [] [update]
deployments.extensions/finalizers [] [] [update]
deployments.extensions/status [] [] [update]
kubectl get endpoints kube-controller-manager --namespace=kube-system -o yaml
Pick one or two master nodes at random, stop the kube-controller-manager service, and check whether one of the other nodes acquires the leader role.
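One way to verify the failover: record the current holder, stop the service on that node, then poll the lease annotation until holderIdentity changes (the default lease duration is 15s, so the switch should happen within seconds). A sketch, assuming kubectl access; the poll loop is skipped when kubectl is unavailable:

```shell
get_leader() {
  kubectl -n kube-system get endpoints kube-controller-manager \
    -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}' \
    | sed -n 's/.*"holderIdentity":"\([^"]*\)".*/\1/p'
}
if command -v kubectl >/dev/null 2>&1; then
  old=$(get_leader)
  echo "current leader: ${old}"
  # Now run on that node:  sudo systemctl stop kube-controller-manager
  # ...then poll until a new holder appears:
  for i in 1 2 3 4 5 6; do
    new=$(get_leader)
    if [ -n "$new" ] && [ "$new" != "$old" ]; then
      echo "new leader: ${new}"
      break
    fi
    sleep 5
  done
fi
```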
On controller permissions and the use-service-account-credentials flag:
https://github.com/kubernetes/kubernetes/issues/48208
kubelet authentication and authorization:
https://kubernetes.io/docs/admin/kubelet-authentication-authorization/#kubelet-authorization
After installing kube-controller-manager, you still need to install kube-scheduler; see: kubernetes cluster installation guide: kube-scheduler component cluster deployment. The kube-controller-manager script can be obtained here.