This article explains in detail how to create a single-master cluster with kubeadm.
| Area | Maturity Level |
| --- | --- |
| Command line UX | GA |
| Implementation | GA |
| Config file API | beta |
| CoreDNS | GA |
| kubeadm alpha subcommands | alpha |
| High availability | alpha |
| DynamicKubeletConfig | alpha |
| Self-hosting | alpha |
kubeadm's overall feature state is GA. Some sub-features, such as the config file API, are still under active development. The implementation of cluster creation may change slightly as the tool evolves, but the overall implementation should be quite stable. By definition, any command under kubeadm alpha is supported at the alpha level.
- One or more machines running a deb/rpm-compatible operating system, such as Ubuntu or CentOS
- 2 GB or more of RAM per machine
- 2 or more CPUs on the master
- Full network connectivity among all machines in the cluster (a public or private network is fine)
For installing kubeadm, see https://my.oschina.net/jennerlo/blog/3007440
The master is the machine where the control plane components run, including etcd (the cluster database) and the API server (which the kubectl CLI communicates with).
Choose a pod network add-on, and check whether it requires any arguments to be passed to kubeadm init. Depending on which third-party provider you choose, you may need to set --pod-network-cidr to a provider-specific value. See Installing a pod network add-on.
(Optional) Unless otherwise specified, kubeadm uses the network interface associated with the default gateway to advertise the master's IP. To use a different network interface, pass the --apiserver-advertise-address=<ip-address> argument to kubeadm init. To deploy an IPv6 Kubernetes cluster using IPv6 addressing, you must specify an IPv6 address, for example --apiserver-advertise-address=fd00::101
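Putting the two optional flags above together, a hypothetical invocation might look like the sketch below. The IP address and pod CIDR are placeholders, not values prescribed by this article (10.244.0.0/16 happens to be Flannel's default):

```shell
# Hypothetical example: composing the kubeadm init flags discussed above.
# ADVERTISE_ADDR and POD_CIDR are placeholder values -- substitute your own.
ADVERTISE_ADDR="192.168.1.10"   # IP of the interface the API server should advertise
POD_CIDR="10.244.0.0/16"        # provider-specific pod CIDR (Flannel's default shown)

INIT_CMD="kubeadm init --apiserver-advertise-address=${ADVERTISE_ADDR} --pod-network-cidr=${POD_CIDR}"
echo "$INIT_CMD"   # run this command on the master, as root
```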
(Optional) Run kubeadm config images pull before kubeadm init to verify connectivity to the gcr.io registries.
The initialization command is:
kubeadm init <args>
For more information about kubeadm init arguments, see the kubeadm reference guide.
For a complete list of configuration options, see the configuration file documentation.
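As a rough sketch of what driving kubeadm init from such a config file can look like: the config file API is still beta, so the apiVersion and fields below are assumptions that may differ for your kubeadm version (recent versions can print their defaults with kubeadm config print init-defaults).

```shell
# A minimal sketch, not a complete config: write a ClusterConfiguration and
# pass it to kubeadm init. The apiVersion and podSubnet below are assumptions.
cat > kubeadm-config.yaml <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
networking:
  podSubnet: "10.244.0.0/16"   # placeholder pod CIDR
EOF

# kubeadm init --config kubeadm-config.yaml   # run on the master, as root
```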
To customize control plane components, including optional IPv6 assignment to the liveness probes of the control plane components and the etcd server, provide extra arguments to each component as described in Custom arguments.
To run kubeadm init again, you must first tear down the cluster.
If you join nodes with a different architecture to your cluster, create a separate Deployment or DaemonSet for kube-proxy and kube-dns on those nodes, because the Docker images for these components do not currently support multiple architectures.
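One hedged way to keep the stock kube-proxy DaemonSet off nodes of another architecture is a nodeSelector patch. The sketch below only writes the patch file; the label key beta.kubernetes.io/arch is an assumption that matches what kubelets of this era set automatically.

```shell
# Hypothetical sketch: pin the existing kube-proxy DaemonSet to amd64 nodes so
# that a separate per-architecture DaemonSet can serve the others. The label
# key is an assumption (kubelets of this era set beta.kubernetes.io/arch).
cat > kube-proxy-arch-patch.yaml <<'EOF'
spec:
  template:
    spec:
      nodeSelector:
        beta.kubernetes.io/arch: amd64
EOF

# Then, on the master:
# kubectl -n kube-system patch daemonset kube-proxy --patch "$(cat kube-proxy-arch-patch.yaml)"
```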
kubeadm init first runs a series of pre-flight checks to ensure the machine is ready to run Kubernetes. These checks emit warnings and exit on errors. kubeadm init then downloads and installs the cluster control plane components, which may take several minutes. The output should look like this:
[init] Using Kubernetes version: vX.Y.Z
[preflight] Running pre-flight checks
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [kubeadm-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.138.0.4]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] This often takes around a minute; or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 39.511972 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node master as master by adding a label and a taint
[markmaster] Master master tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: <token>
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the addon options listed at:
  http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>
To make kubectl work for your non-root user, run these commands:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
The token is used for mutual authentication between the master and joining nodes. The token included here is secret; keep it safe, because anyone with this token can add authenticated nodes to your cluster.
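If the join command is lost, the sha256:<hash> part can be recomputed from the cluster CA certificate. The sketch below defines that calculation; /etc/kubernetes/pki is where the init output above says the certificates are written, and the function assumes an RSA CA key, which is what kubeadm generates by default.

```shell
# Recompute the --discovery-token-ca-cert-hash value: a SHA-256 digest over the
# DER-encoded public key of the cluster CA certificate (assumes an RSA CA key).
compute_ca_hash() {
  openssl x509 -pubkey -in "$1" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | sed 's/^.* //'
}

# Example (on the master): compute_ca_hash /etc/kubernetes/pki/ca.crt
# A fresh token for the join command can be issued with: kubeadm token create
```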