How do you install a Kubernetes cluster with kubeadm? This article walks through the process in detail, in the hope that it gives anyone facing the same task a straightforward path to follow.
1. Set the hostnames
# master 192.168.2.211
hostnamectl set-hostname kube-master
# node1 192.168.2.212
hostnamectl set-hostname kube-node1
# node2 192.168.2.213
hostnamectl set-hostname kube-node2
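The kubeadm preflight checks later warn that the hostnames cannot be resolved, so it helps if each machine can resolve the others by name. A sketch of the matching host entries, written to a scratch file here; in practice append them to /etc/hosts on every machine:

```shell
# Host entries matching the three machines above; append to /etc/hosts
# on each machine in practice (a scratch file is used here).
tmp=$(mktemp)
cat <<'EOF' > "$tmp"
192.168.2.211 kube-master
192.168.2.212 kube-node1
192.168.2.213 kube-node2
EOF
entries=$(wc -l < "$tmp")
cat "$tmp"
rm -f "$tmp"
```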
2. Turn off the firewall
systemctl stop firewalld && systemctl disable firewalld
3. Disable SELinux; this change is required so that containers can access the host filesystem
# temporary
setenforce 0
# permanent
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/sysconfig/selinux
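The sed edit can be tried on a scratch copy of the config first; a minimal sketch (the sample file content is illustrative):

```shell
# Demonstrate the permanent SELinux change on a scratch copy of the config.
tmp=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$tmp"
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" "$tmp"
mode=$(grep '^SELINUX=' "$tmp")   # should now read SELINUX=disabled
echo "$mode"
rm -f "$tmp"
```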
4. Disable swap, otherwise kubeadm will report an error
# temporary
swapoff -a
# permanent: edit /etc/fstab and comment out the /dev/mapper/cl-swap line
vi /etc/fstab
The permanent change only takes effect after a reboot.
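The fstab edit can also be done non-interactively with sed; a sketch against a scratch copy (the sample entries are illustrative):

```shell
# Comment out the swap entry in a scratch copy of /etc/fstab.
tmp=$(mktemp)
printf '/dev/mapper/cl-root / xfs defaults 0 0\n/dev/mapper/cl-swap swap swap defaults 0 0\n' > "$tmp"
# Prefix any line mounting a swap filesystem with '#'.
sed -i '/ swap / s/^/#/' "$tmp"
swapline=$(grep 'cl-swap' "$tmp")   # the swap line, now commented out
echo "$swapline"
rm -f "$tmp"
```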
5. Install Docker
The official docs list the validated Docker versions as 1.11.1, 1.12.1, 1.13.1, 17.03, 17.06, 17.09 and 18.06 (https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.13.md#external-dependencies), but versions before 1.13 caused many problems in testing, so the current release, 18.09, is used here.
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install -y docker-ce
systemctl start docker && systemctl enable docker
Version:
[root@kube-master ~]# docker version
Client:
 Version:           18.09.0
 API version:       1.39
 Go version:        go1.10.4
 Git commit:        4d60db4
 Built:             Wed Nov 7 00:48:22 2018
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.0
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.4
  Git commit:       4d60db4
  Built:            Wed Nov 7 00:19:08 2018
  OS/Arch:          linux/amd64
  Experimental:     false
6. Install kubeadm, kubelet and kubectl
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet
The official repository is hosted outside the Great Firewall; if the server is in mainland China, the Alibaba mirror can be used instead (https://opsx.alibaba.com/mirror?lang=zh-cn).
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet
I did not run into this myself, but the official docs warn that on some RHEL/CentOS 7 systems traffic is routed incorrectly because iptables is bypassed. Make sure net.bridge.bridge-nf-call-iptables is set to 1 in the sysctl configuration:
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
During testing, older Docker versions turned out to use a cgroup driver other than cgroupfs, which made Kubernetes report errors. Check the current driver:
docker info | grep -i cgroup
If it is not cgroupfs, update the kubelet configuration accordingly:
vi /etc/default/kubelet
# set: KUBELET_EXTRA_ARGS=--cgroup-driver=<current driver>
systemctl daemon-reload
systemctl restart kubelet
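The driver string can be pulled out of the docker info output and dropped straight into the flag; a sketch that parses a sample line rather than live docker output:

```shell
# Extract the cgroup driver from a `docker info` line. The sample text is
# illustrative; on a real host use: docker info | grep -i 'cgroup driver'
line='Cgroup Driver: systemd'
driver=${line#Cgroup Driver: }
echo "KUBELET_EXTRA_ARGS=--cgroup-driver=$driver"
```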
7. Pull the images required by the cluster services
This step can be skipped, since initialization downloads the images automatically, but because of the Great Firewall it is better to have them ready in advance. First configure an HTTP proxy for Docker:
mkdir /etc/systemd/system/docker.service.d
cat <<EOF > /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://192.168.2.100:1080"
EOF
systemctl daemon-reload
systemctl restart docker
Then pull the images:
kubeadm config images pull --kubernetes-version v1.13.0
This pulls the control-plane images (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, etcd, coredns and pause), all tagged under k8s.gcr.io.
8. Initialize the master
kubeadm init --apiserver-advertise-address=192.168.2.211 --kubernetes-version=v1.13.0 --pod-network-cidr=10.244.0.0/16
--apiserver-advertise-address: the address the API server listens on, i.e. the master's IP
--kubernetes-version: the Kubernetes version to deploy, here 1.13.0
--pod-network-cidr: the address range allocated to the pod network. It must match the configuration of the network plugin installed in the next step; Flannel is used here
[init] Using Kubernetes version: v1.13.0
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.0. Latest validated version: 18.06
	[WARNING Hostname]: hostname "kube-master" could not be reached
	[WARNING Hostname]: hostname "kube-master": lookup kube-master on 192.168.2.1:53: no such host
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kube-master localhost] and IPs [192.168.2.211 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kube-master localhost] and IPs [192.168.2.211 127.0.0.1 ::1]
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kube-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.2.211]
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 23.502901 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "kube-master" as an annotation
[mark-control-plane] Marking the node kube-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node kube-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: thmy85.7ahn8zezyt6m39yy
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join 192.168.2.211:6443 --token thmy85.7ahn8zezyt6m39yy --discovery-token-ca-cert-hash sha256:576442c07cb68e731badb48f15d28d701fa184f1dffc78556ebe834f8a651021
9. Create the kubectl configuration
The output of the previous step asks you to create a config file; without it, kubectl commands fail with a connection error.
Follow the instructions:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Then run kubectl again:
[root@kube-master ~]# kubectl get nodes
NAME          STATUS     ROLES    AGE     VERSION
kube-master   NotReady   master   3m18s   v1.13.1
10. Install a pod network add-on so that pods can communicate with each other
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
11. Start the nodes
Switch to the node servers and repeat steps 1-7. Then run the command printed at the end of the master initialization in step 8, e.g.
kubeadm join 192.168.2.211:6443 --token thmy85.7ahn8zezyt6m39yy --discovery-token-ca-cert-hash sha256:576442c07cb68e731badb48f15d28d701fa184f1dffc78556ebe834f8a651021
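If the join command is lost, the --discovery-token-ca-cert-hash value can be recomputed from the cluster CA certificate with the openssl pipeline from the kubeadm docs. A sketch that generates a throwaway self-signed certificate so it can be run anywhere; on the master the real input would be /etc/kubernetes/pki/ca.crt:

```shell
# Recompute the sha256 hash used by --discovery-token-ca-cert-hash.
# A throwaway self-signed cert stands in for /etc/kubernetes/pki/ca.crt here.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo-ca" \
  -keyout "$tmp/ca.key" -out "$tmp/ca.crt" 2>/dev/null
hash=$(openssl x509 -pubkey -in "$tmp/ca.crt" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$hash"
rm -rf "$tmp"
```

An expired token (they last 24 hours by default) can also be replaced by running kubeadm token create --print-join-command on the master, which prints a complete, fresh join command.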
When that completes, go back to the master and run: watch kubectl get nodes, until every status becomes Ready
NAME          STATUS   ROLES    AGE     VERSION
kube-master   Ready    master   27m     v1.13.1
kube-node1    Ready    <none>   9m19s   v1.13.1
kube-node2    Ready    <none>   5m36s   v1.13.1
12. Test
On the master, create a file named nginx.yaml that defines a Deployment and exposes it through a Service.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deploy
spec:
  replicas: 2
  selector:
    matchLabels:
      run: nginx-deploy
  template:
    metadata:
      labels:
        run: nginx-deploy
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  labels:
    run: nginx-deploy
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30001
  selector:
    run: nginx-deploy
Run:
kubectl create -f nginx.yaml
watch kubectl get deploy
Once the deployment is ready, test the service.
If nginx responds on port 30001 of any node (e.g. http://192.168.2.212:30001), the service is up. Running docker ps on the two nodes shows that each of them started one nginx container.
I have also uploaded the images to Qiniu Cloud; they can be pulled and re-tagged with the following commands:
docker pull reg.qiniu.com/fgding/kube-proxy:v1.13.0
docker pull reg.qiniu.com/fgding/kube-controller-manager:v1.13.0
docker pull reg.qiniu.com/fgding/kube-apiserver:v1.13.0
docker pull reg.qiniu.com/fgding/kube-scheduler:v1.13.0
docker pull reg.qiniu.com/fgding/coredns:1.2.6
docker pull reg.qiniu.com/fgding/etcd:3.2.24
docker pull reg.qiniu.com/fgding/pause:3.1
docker tag reg.qiniu.com/fgding/kube-proxy:v1.13.0 k8s.gcr.io/kube-proxy:v1.13.0
docker tag reg.qiniu.com/fgding/kube-controller-manager:v1.13.0 k8s.gcr.io/kube-controller-manager:v1.13.0
docker tag reg.qiniu.com/fgding/kube-apiserver:v1.13.0 k8s.gcr.io/kube-apiserver:v1.13.0
docker tag reg.qiniu.com/fgding/kube-scheduler:v1.13.0 k8s.gcr.io/kube-scheduler:v1.13.0
docker tag reg.qiniu.com/fgding/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6
docker tag reg.qiniu.com/fgding/etcd:3.2.24 k8s.gcr.io/etcd:3.2.24
docker tag reg.qiniu.com/fgding/pause:3.1 k8s.gcr.io/pause:3.1
That is how to install a Kubernetes cluster with kubeadm. I hope the content above helps; if you still have questions, more related material is available on the Yisu Cloud industry news channel.