How do you deploy a Kubernetes cluster with kubeadm? Many people without prior experience are at a loss here, so this article walks through the procedure, the problems you are likely to hit, and how to solve them.
The OS used here is RHEL 7.5.
master, etcd: 192.168.10.101, hostname: master
node1: 192.168.10.103, hostname: node1
node2: 192.168.10.104, hostname: node2
All machines must be able to reach each other by hostname; edit /etc/hosts on every machine:
192.168.10.101 master
192.168.10.103 node1
192.168.10.104 node2
所有機(jī)子時(shí)間要同步
所有機(jī)子關(guān)閉防火墻和selinux。
master可以免密登錄全部機(jī)子。
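The exact commands for these prerequisites are not shown in the original; a minimal sketch, assuming chrony for time synchronization and the stock firewalld/SELinux setup of RHEL 7.5, run on every machine, could look like this:

# run on every machine (adjust to your environment)
systemctl enable --now chronyd                # keep the clocks in sync
chronyc sources                               # verify the time sources
systemctl disable --now firewalld             # stop and disable the firewall
setenforce 0                                  # SELinux permissive for the current boot
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config   # persist across reboots

# on the master only: passwordless SSH to all machines
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
for host in master node1 node2; do ssh-copy-id root@$host; done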
[Important]
Both cluster initialization and node joins pull images from Google's registry (k8s.gcr.io), which is not reachable from here, so the required images cannot be downloaded directly. I have already pushed the required images to a personal repository on Alibaba Cloud.
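Before pulling from a mirror you can ask kubeadm exactly which images it expects; the list below is reconstructed from the images used later in this article, and the output may differ slightly depending on the kubeadm version installed:

[root@master ~]# kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.12.0
k8s.gcr.io/kube-controller-manager:v1.12.0
k8s.gcr.io/kube-scheduler:v1.12.0
k8s.gcr.io/kube-proxy:v1.12.0
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.2.24
k8s.gcr.io/coredns:1.2.2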
Cluster component layout:
1. etcd cluster: master node only;
2. flannel: every node in the cluster;
3. Kubernetes master (kubernetes-master): master node only;
services started: kube-apiserver, kube-scheduler, kube-controller-manager
4. Kubernetes nodes (kubernetes-node): every node;
start the docker service first;
Kubernetes services started: kube-proxy, kubelet
Deployment with kubeadm:
1. master and nodes: install kubelet, kubeadm and docker
2. master: kubeadm init
3. nodes: kubeadm join
https://github.com/kubernetes/kubeadm/blob/master/docs/design/design_v1.10.md
(1) Configure the yum repositories
Version 1.12.0 is used here. Download page: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.12.md#downloads-for-v1120
The packages are installed with yum. First configure the docker repository; simply download the repo file from Alibaba Cloud:
[root@master ~]# curl -o /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Create the kubernetes repo file:
[root@master ~]# vim /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes Repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
enabled=1
Copy both repo files to /etc/yum.repos.d/ on the other nodes:
[root@master ~]# for i in 103 104; do scp /etc/yum.repos.d/{docker-ce.repo,kubernetes.repo} root@192.168.10.$i:/etc/yum.repos.d/; done
Install the repository signing key on all machines:
[root@master ~]# ansible all -m shell -a "curl -O https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg && rpm --import rpm-package-key.gpg"
(2) Install docker, kubelet, kubeadm and kubectl
[root@master ~]# yum install docker-ce kubelet kubeadm kubectl -y
(3) Kernel bridge parameters and firewall forwarding policy
[root@master ~]# echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables
[root@master ~]# echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
[root@master ~]# ansible all -m shell -a "iptables -P FORWARD ACCEPT"
Note: these changes are temporary and will be lost after a reboot.
To make them permanent, set the parameters in /usr/lib/sysctl.d/00-system.conf.
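A sketch of the permanent change (a drop-in file under /etc/sysctl.d/ works the same way as editing 00-system.conf directly):

# /usr/lib/sysctl.d/00-system.conf (or /etc/sysctl.d/k8s.conf)
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1

Apply it without rebooting:

[root@master ~]# sysctl --system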
(4) Edit the docker unit file and start docker
[root@master ~]# vim /usr/lib/systemd/system/docker.service
[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
#Environment="HTTPS_PROXY=http://www.ik8s.io:10080"
Environment="NO_PROXY=127.0.0.1/8,127.0.0.1/16"
Add the following line to the [Service] section:
Environment="NO_PROXY=127.0.0.1/8,127.0.0.1/16"
Start docker:
[root@master ~]# systemctl daemon-reload
[root@master ~]# systemctl start docker
[root@master ~]# systemctl enable docker
(5) Enable kubelet at boot
[root@master ~]# systemctl enable kubelet
(6) Initialize the cluster
Edit the kubelet configuration file so that the swap check does not block startup:
[root@master ~]# vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
Run the initialization:
[root@master ~]# kubeadm init --kubernetes-version=v1.12.0 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap [init] using Kubernetes version: v1.12.0 [preflight] running pre-flight checks [WARNING Swap]: running with swap on is not supported. Please disable swap [preflight/images] Pulling images required for setting up a Kubernetes cluster [preflight/images] This might take a minute or two, depending on the speed of your internet connection [preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull' [preflight] Some fatal errors occurred: [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.12.0: output: Error response from daemon: Get https://k8s.gcr.io/v2/: proxyconnect tcp: dial tcp 172.96.236.117:10080: connect: connection refused , error: exit status 1 [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.12.0: output: Error response from daemon: Get https://k8s.gcr.io/v2/: proxyconnect tcp: dial tcp 172.96.236.117:10080: connect: connection refused , error: exit status 1 [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-scheduler:v1.12.0: output: Error response from daemon: Get https://k8s.gcr.io/v2/: proxyconnect tcp: dial tcp 172.96.236.117:10080: connect: connection refused , error: exit status 1 [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-proxy:v1.12.0: output: Error response from daemon: Get https://k8s.gcr.io/v2/: proxyconnect tcp: dial tcp 172.96.236.117:10080: connect: connection refused , error: exit status 1 [ERROR ImagePull]: failed to pull image k8s.gcr.io/pause:3.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: proxyconnect tcp: dial tcp 172.96.236.117:10080: connect: connection refused , error: exit status 1 [ERROR ImagePull]: failed to pull image k8s.gcr.io/etcd:3.2.24: output: Error response from daemon: Get https://k8s.gcr.io/v2/: proxyconnect tcp: dial tcp 172.96.236.117:10080: connect: connection refused , error: exit status 1 [ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns:1.2.2: output: Error response from daemon: Get https://k8s.gcr.io/v2/: proxyconnect tcp: dial tcp 172.96.236.117:10080: connect: connection refused , error: exit status 1 [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...` [root@master ~]#
The images cannot be pulled because the Google registry is unreachable. Download the images to the local machine by another route, then run the initialization again.
Image download script: https://github.com/yanyuzm/k8s_images_script
I have uploaded the required images to Alibaba Cloud; running the following script is enough:
[root@master ~]# vim pull-images.sh
#!/bin/bash
images=(kube-apiserver:v1.12.0 kube-controller-manager:v1.12.0 kube-scheduler:v1.12.0 kube-proxy:v1.12.0 pause:3.1 etcd:3.2.24 coredns:1.2.2)
for ima in ${images[@]}
do
  docker pull registry.cn-shenzhen.aliyuncs.com/lurenjia/$ima
  docker tag registry.cn-shenzhen.aliyuncs.com/lurenjia/$ima k8s.gcr.io/$ima
  docker rmi -f registry.cn-shenzhen.aliyuncs.com/lurenjia/$ima
done
[root@master ~]# sh pull-images.sh
The images used are:
[root@master ~]# docker images
REPOSITORY                           TAG       IMAGE ID       CREATED        SIZE
k8s.gcr.io/kube-controller-manager   v1.12.0   07e068033cf2   2 weeks ago    164MB
k8s.gcr.io/kube-apiserver            v1.12.0   ab60b017e34f   2 weeks ago    194MB
k8s.gcr.io/kube-scheduler            v1.12.0   5a1527e735da   2 weeks ago    58.3MB
k8s.gcr.io/kube-proxy                v1.12.0   9c3a9d3f09a0   2 weeks ago    96.6MB
k8s.gcr.io/etcd                      3.2.24    3cab8e1b9802   3 weeks ago    220MB
k8s.gcr.io/coredns                   1.2.2     367cdc8433a4   6 weeks ago    39.2MB
k8s.gcr.io/pause                     3.1       da86e6ba6ca1   9 months ago   742kB
[root@master ~]#
Run the initialization again:
[root@master ~]# kubeadm init --kubernetes-version=v1.12.0 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap [init] using Kubernetes version: v1.12.0 [preflight] running pre-flight checks [preflight/images] Pulling images required for setting up a Kubernetes cluster [preflight/images] This might take a minute or two, depending on the speed of your internet connection [preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull' [kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [preflight] Activating the kubelet service [certificates] Generated front-proxy-ca certificate and key. [certificates] Generated front-proxy-client certificate and key. [certificates] Generated etcd/ca certificate and key. [certificates] Generated etcd/peer certificate and key. [certificates] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [192.168.10.101 127.0.0.1 ::1] [certificates] Generated apiserver-etcd-client certificate and key. [certificates] Generated etcd/server certificate and key. [certificates] etcd/server serving cert is signed for DNS names [master localhost] and IPs [127.0.0.1 ::1] [certificates] Generated etcd/healthcheck-client certificate and key. [certificates] Generated ca certificate and key. [certificates] Generated apiserver certificate and key. [certificates] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.10.101] [certificates] Generated apiserver-kubelet-client certificate and key. [certificates] valid certificates and keys now exist in "/etc/kubernetes/pki" [certificates] Generated sa key and public key. 
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf" [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf" [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf" [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf" [controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml" [controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml" [controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml" [etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml" [init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests" [init] this might take a minute or longer if the control plane images have to be pulled [apiclient] All control plane components are healthy after 71.135592 seconds [uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace [kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster [markmaster] Marking the node master as master by adding the label "node-role.kubernetes.io/master=''" [markmaster] Marking the node master as master by adding the taints [node-role.kubernetes.io/master:NoSchedule] [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "master" as an annotation [bootstraptoken] using token: qaqahg.5xbt355fl26wu8tg [bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials [bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token [bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster [bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace [addons] Applied essential addon: CoreDNS [addons] Applied essential addon: kube-proxy Your Kubernetes master has initialized successfully! To start using your cluster, you need to run the following as a regular user: mkdir -p $HOME/.kube sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config sudo chown $(id -u):$(id -g) $HOME/.kube/config You should now deploy a pod network to the cluster. Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/ You can now join any number of machines by running the following on each node as root: kubeadm join 192.168.10.101:6443 --token qaqahg.5xbt355fl26wu8tg --discovery-token-ca-cert-hash sha256:654f52a18fa04234c05eb38a001d92b9831982d06272e5a22b7d898bc6280e47 [root@master ~]#
OK, the initialization succeeded. The hints printed at the end are important:
Your Kubernetes master has initialized successfully! To start using your cluster, you need to run the following as a regular user: mkdir -p $HOME/.kube sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config sudo chown $(id -u):$(id -g) $HOME/.kube/config You should now deploy a pod network to the cluster. Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/ You can now join any number of machines by running the following on each node as root: kubeadm join 192.168.10.101:6443 --token qaqahg.5xbt355fl26wu8tg --discovery-token-ca-cert-hash sha256:654f52a18fa04234c05eb38a001d92b9831982d06272e5a22b7d898bc6280e47
master節(jié)點(diǎn):按照提示,做以下操作:
[root@master ~]# mkdir -p $HOME/.kube [root@master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config cp: overwrite ‘/root/.kube/config’? y [root@master ~]# chown $(id -u):$(id -g) $HOME/.kube/config [root@master ~]#
查看一下:
[root@master ~]# kubectl get componentstatus NAME STATUS MESSAGE ERROR controller-manager Healthy ok scheduler Healthy ok etcd-0 Healthy {"health": "true"} [root@master ~]# kubectl get cs NAME STATUS MESSAGE ERROR scheduler Healthy ok controller-manager Healthy ok etcd-0 Healthy {"health": "true"} [root@master ~]#
健康狀態(tài)。
查看集群節(jié)點(diǎn):
[root@master ~]# kubectl get nodes NAME STATUS ROLES AGE VERSION master NotReady master 110m v1.12.1 [root@master ~]#
只有master節(jié)點(diǎn),但處于NotReady狀態(tài)。因?yàn)闆](méi)有部署flannel。
(7) Install flannel
Project page: https://github.com/coreos/flannel
Run the following command:
[root@master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
[root@master ~]#
After this you may have to wait quite a while, because the flannel image has to be downloaded.
[root@master ~]# docker images
REPOSITORY                           TAG             IMAGE ID       CREATED        SIZE
k8s.gcr.io/kube-controller-manager   v1.12.0         07e068033cf2   2 weeks ago    164MB
k8s.gcr.io/kube-apiserver            v1.12.0         ab60b017e34f   2 weeks ago    194MB
k8s.gcr.io/kube-scheduler            v1.12.0         5a1527e735da   2 weeks ago    58.3MB
k8s.gcr.io/kube-proxy                v1.12.0         9c3a9d3f09a0   2 weeks ago    96.6MB
k8s.gcr.io/etcd                      3.2.24          3cab8e1b9802   3 weeks ago    220MB
k8s.gcr.io/coredns                   1.2.2           367cdc8433a4   6 weeks ago    39.2MB
quay.io/coreos/flannel               v0.10.0-amd64   f0fad859c909   8 months ago   44.6MB
k8s.gcr.io/pause                     3.1             da86e6ba6ca1   9 months ago   742kB
[root@master ~]#
OK, the flannel image has been downloaded. Check the nodes again:
[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE    VERSION
master   Ready    master   155m   v1.12.1
[root@master ~]#
OK, the master is now in the Ready state.
If the flannel image cannot be pulled, download it from Alibaba Cloud instead:
docker pull registry.cn-shenzhen.aliyuncs.com/lurenjia/flannel:v0.10.0-amd64
After downloading, re-tag the image:
docker tag registry.cn-shenzhen.aliyuncs.com/lurenjia/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64
Check the namespaces:
[root@master ~]# kubectl get ns
NAME          STATUS   AGE
default       Active   158m
kube-public   Active   158m
kube-system   Active   158m
[root@master ~]#
Check the pods in kube-system:
[root@master ~]# kubectl get pods -n kube-system
NAME                             READY   STATUS    RESTARTS   AGE
coredns-576cbf47c7-hfvcq         1/1     Running   0          158m
coredns-576cbf47c7-xcpgd         1/1     Running   0          158m
etcd-master                      1/1     Running   6          132m
kube-apiserver-master            1/1     Running   9          132m
kube-controller-manager-master   1/1     Running   33         132m
kube-flannel-ds-amd64-vqc9h      1/1     Running   3          41m
kube-proxy-z9xrw                 1/1     Running   4          158m
kube-scheduler-master            1/1     Running   33         132m
[root@master ~]#
1. Install docker-ce, kubelet and kubeadm on the nodes
[root@node1 ~]# yum install docker-ce kubelet kubeadm -y
[root@node2 ~]# yum install docker-ce kubelet kubeadm -y
2. Copy the kubelet configuration file from the master to the nodes
[root@master ~]# scp /etc/sysconfig/kubelet 192.168.10.103:/etc/sysconfig/
kubelet                                      100%   42    45.4KB/s   00:00
[root@master ~]# scp /etc/sysconfig/kubelet 192.168.10.104:/etc/sysconfig/
kubelet                                      100%   42     4.0KB/s   00:00
[root@master ~]#
3. Join the nodes to the cluster
Start docker and kubelet:
[root@node1 ~]# systemctl start docker kubelet
[root@node1 ~]# systemctl enable docker kubelet
[root@node2 ~]# systemctl start docker kubelet
[root@node2 ~]# systemctl enable docker kubelet
Join the nodes to the cluster:
[root@node1 ~]# kubeadm join 192.168.10.101:6443 --token qaqahg.5xbt355fl26wu8tg --discovery-token-ca-cert-hash sha256:654f52a18fa04234c05eb38a001d92b9831982d06272e5a22b7d898bc6280e47 --ignore-preflight-errors=Swap [preflight] running pre-flight checks [WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh] or no builtin kernel ipvs support: map[ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{} ip_vs:{} ip_vs_rr:{}] you can solve this problem with following methods: 1. Run 'modprobe -- ' to load missing kernel modules; 2. Provide the missing builtin kernel ipvs support [WARNING Swap]: running with swap on is not supported. Please disable swap [preflight] Some fatal errors occurred: [ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1 [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...` [root@node1 ~]# echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables [root@node1 ~]#
A preflight check fails; set the kernel parameter as the error message suggests (the echo command at the end of the output above) and run the join command again:
[root@node1 ~]# kubeadm join 192.168.10.101:6443 --token qaqahg.5xbt355fl26wu8tg --discovery-token-ca-cert-hash sha256:654f52a18fa04234c05eb38a001d92b9831982d06272e5a22b7d898bc6280e47 --ignore-preflight-errors=Swap [preflight] running pre-flight checks [WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh] or no builtin kernel ipvs support: map[ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{} ip_vs:{}] you can solve this problem with following methods: 1. Run 'modprobe -- ' to load missing kernel modules; 2. Provide the missing builtin kernel ipvs support [WARNING Swap]: running with swap on is not supported. Please disable swap [discovery] Trying to connect to API Server "192.168.10.101:6443" [discovery] Created cluster-info discovery client, requesting info from "https://192.168.10.101:6443" [discovery] Requesting info from "https://192.168.10.101:6443" again to validate TLS against the pinned public key [discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.10.101:6443" [discovery] Successfully established connection with API Server "192.168.10.101:6443" [kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace [kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [preflight] Activating the kubelet service [tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap... [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node1" as an annotation This node has joined the cluster: * Certificate signing request was sent to apiserver and a response was received. * The Kubelet was informed of the new secure connection details. Run 'kubectl get nodes' on the master to see this node join the cluster. [root@node1 ~]#
OK, node1 has joined the cluster.
[root@node2 ~]# echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables [root@node2 ~]# kubeadm join 192.168.10.101:6443 --token qaqahg.5xbt355fl26wu8tg --discovery-token-ca-cert-hash sha256:654f52a18fa04234c05eb38a001d92b9831982d06272e5a22b7d898bc6280e47 --ignore-preflight-errors=Swap [preflight] running pre-flight checks [WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs_rr ip_vs_wrr ip_vs_sh ip_vs] or no builtin kernel ipvs support: map[ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{}] you can solve this problem with following methods: 1. Run 'modprobe -- ' to load missing kernel modules; 2. Provide the missing builtin kernel ipvs support [WARNING Swap]: running with swap on is not supported. Please disable swap [discovery] Trying to connect to API Server "192.168.10.101:6443" [discovery] Created cluster-info discovery client, requesting info from "https://192.168.10.101:6443" [discovery] Requesting info from "https://192.168.10.101:6443" again to validate TLS against the pinned public key [discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.10.101:6443" [discovery] Successfully established connection with API Server "192.168.10.101:6443" [kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace [kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [preflight] Activating the kubelet service [tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap... [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node2" as an annotation This node has joined the cluster: * Certificate signing request was sent to apiserver and a response was received. * The Kubelet was informed of the new secure connection details. Run 'kubectl get nodes' on the master to see this node join the cluster. [root@node2 ~]#
OK, node2 has joined the cluster.
4. Manually pull the kube-proxy and pause images on the nodes
Run the following on each node:
for ima in kube-proxy:v1.12.0 pause:3.1;do docker pull registry.cn-shenzhen.aliyuncs.com/lurenjia/$ima && docker tag registry.cn-shenzhen.aliyuncs.com/lurenjia/$ima k8s.gcr.io/$ima && docker rmi -f registry.cn-shenzhen.aliyuncs.com/lurenjia/$ima ;done
5. Check the nodes from the master:
[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE     VERSION
master   Ready    master   3h20m   v1.12.1
node1    Ready    <none>   18m     v1.12.1
node2    Ready    <none>   17m     v1.12.1
[root@master ~]#
OK, every node is Ready. If a node still does not become healthy, restart its docker and kubelet services.
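A minimal troubleshooting sketch for an unhealthy node:

[root@node1 ~]# systemctl restart docker kubelet
[root@node1 ~]# systemctl status kubelet
[root@node1 ~]# journalctl -u kubelet -f        # watch the kubelet log for the actual error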
Check the pod details in kube-system:
[root@master ~]# kubectl get pods -n kube-system -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE coredns-576cbf47c7-hfvcq 1/1 Running 0 3h21m 10.244.0.3 master <none> coredns-576cbf47c7-xcpgd 1/1 Running 0 3h21m 10.244.0.2 master <none> etcd-master 1/1 Running 6 165m 192.168.10.101 master <none> kube-apiserver-master 1/1 Running 9 165m 192.168.10.101 master <none> kube-controller-manager-master 1/1 Running 33 165m 192.168.10.101 master <none> kube-flannel-ds-amd64-bd4d8 1/1 Running 0 21m 192.168.10.103 node1 <none> kube-flannel-ds-amd64-srhb9 1/1 Running 0 20m 192.168.10.104 node2 <none> kube-flannel-ds-amd64-vqc9h 1/1 Running 3 74m 192.168.10.101 master <none> kube-proxy-8bfvt 1/1 Running 1 21m 192.168.10.103 node1 <none> kube-proxy-gz55d 1/1 Running 1 20m 192.168.10.104 node2 <none> kube-proxy-z9xrw 1/1 Running 4 3h21m 192.168.10.101 master <none> kube-scheduler-master 1/1 Running 33 165m 192.168.10.101 master <none> [root@master ~]#
At this point the cluster is up. Let's look at the images used on each machine.
Master node:
[root@master ~]# docker images REPOSITORY TAG IMAGE ID CREATED SIZE k8s.gcr.io/kube-controller-manager v1.12.0 07e068033cf2 2 weeks ago 164MB k8s.gcr.io/kube-apiserver v1.12.0 ab60b017e34f 2 weeks ago 194MB k8s.gcr.io/kube-scheduler v1.12.0 5a1527e735da 2 weeks ago 58.3MB k8s.gcr.io/kube-proxy v1.12.0 9c3a9d3f09a0 2 weeks ago 96.6MB k8s.gcr.io/etcd 3.2.24 3cab8e1b9802 3 weeks ago 220MB k8s.gcr.io/coredns 1.2.2 367cdc8433a4 6 weeks ago 39.2MB quay.io/coreos/flannel v0.10.0-amd64 f0fad859c909 8 months ago 44.6MB k8s.gcr.io/pause 3.1 da86e6ba6ca1 9 months ago 742kB [root@master ~]#
node節(jié)點(diǎn):
[root@node1 ~]# docker images REPOSITORY TAG IMAGE ID CREATED SIZE k8s.gcr.io/kube-proxy v1.12.0 9c3a9d3f09a0 2 weeks ago 96.6MB quay.io/coreos/flannel v0.10.0-amd64 f0fad859c909 8 months ago 44.6MB k8s.gcr.io/pause 3.1 da86e6ba6ca1 9 months ago 742kB [root@node1 ~]# [root@node2 ~]# docker images REPOSITORY TAG IMAGE ID CREATED SIZE k8s.gcr.io/kube-proxy v1.12.0 9c3a9d3f09a0 2 weeks ago 96.6MB quay.io/coreos/flannel v0.10.0-amd64 f0fad859c909 8 months ago 44.6MB k8s.gcr.io/pause 3.1 da86e6ba6ca1 9 months ago 742kB [root@node2 ~]#
Run an nginx:
[root@master ~]# kubectl run nginx-deploy --image=nginx --port=80 --replicas=1 deployment.apps/nginx-deploy created [root@master ~]# kubectl get deploy NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE nginx-deploy 1 1 1 1 10s [root@master ~]# kubectl get pods -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE nginx-deploy-8c5fc574c-d8jxj 1/1 Running 0 18s 10.244.2.4 node2 <none> [root@master ~]#
在node節(jié)點(diǎn)上看看可不可以訪(fǎng)問(wèn)這個(gè)nginx:
[root@node1 ~]# curl -I 10.244.2.4 HTTP/1.1 200 OK Server: nginx/1.15.5 Date: Tue, 16 Oct 2018 12:02:34 GMT Content-Type: text/html Content-Length: 612 Last-Modified: Tue, 02 Oct 2018 14:49:27 GMT Connection: keep-alive ETag: "5bb38577-264" Accept-Ranges: bytes [root@node1 ~]#
It returns 200, so the access succeeded. Next, expose the deployment as a Service:
[root@master ~]# kubectl expose deployment nginx-deploy --name=nginx --port=80 --target-port=80 --protocol=TCP service/nginx exposed [root@master ~]# kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 21h nginx ClusterIP 10.104.88.59 <none> 80/TCP 51s [root@master ~]#
Start a busybox client and access the Service by name:
[root@master ~]# kubectl run client --image=busybox --replicas=1 -it --restart=Never If you don't see a command prompt, try pressing enter. / # / # wget -O - -q http://nginx:80 <!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> <style> body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style> </head> <body> <h2>Welcome to nginx!</h2> <p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p> <p>For online documentation and support please refer to <a href="http://nginx.org/">nginx.org</a>.<br/> Commercial support is available at <a href="http://nginx.com/">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p> </body> </html> / #
Delete the Service and create it again:
[root@master ~]# kubectl delete svc nginx service "nginx" deleted [root@master ~]# kubectl expose deployment nginx-deploy --name=nginx service/nginx exposed [root@master ~]# kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 22h nginx ClusterIP 10.110.52.68 <none> 80/TCP 8s [root@master ~]#
Create a deployment with multiple replicas:
[root@master ~]# kubectl run myapp --image=ikubernetes/myapp:v1 --replicas=2 deployment.apps/myapp created [root@master ~]# [root@master ~]# kubectl get deployment NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE myapp 2 2 2 2 49s nginx-deploy 1 1 1 1 36m [root@master ~]# kubectl get pods -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE client 1/1 Running 0 3m49s 10.244.2.6 node2 <none> myapp-6946649ccd-knd8r 1/1 Running 0 78s 10.244.2.7 node2 <none> myapp-6946649ccd-pfl2r 1/1 Running 0 78s 10.244.1.6 node1 <none> nginx-deploy-8c5fc574c-5bjjm 1/1 Running 0 12m 10.244.1.5 node1 <none> [root@master ~]#
Create a Service for myapp:
[root@master ~]# kubectl expose deployment myapp --name=myapp --port=80 service/myapp exposed [root@master ~]# kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 22h myapp ClusterIP 10.110.238.138 <none> 80/TCP 11s nginx ClusterIP 10.110.52.68 <none> 80/TCP 9m37s [root@master ~]#
Scale myapp out to 5 replicas:
[root@master ~]# kubectl scale --replicas=5 deployment myapp deployment.extensions/myapp scaled [root@master ~]# kubectl get pods NAME READY STATUS RESTARTS AGE client 1/1 Running 0 5m24s myapp-6946649ccd-6kqxt 1/1 Running 0 8s myapp-6946649ccd-7xj45 1/1 Running 0 8s myapp-6946649ccd-8nh9q 1/1 Running 0 8s myapp-6946649ccd-knd8r 1/1 Running 0 11m myapp-6946649ccd-pfl2r 1/1 Running 0 11m nginx-deploy-8c5fc574c-5bjjm 1/1 Running 0 23m [root@master ~]#
Edit the myapp Service:
[root@master ~]# kubectl edit svc myapp
  type: NodePort
Change type to NodePort.
[root@master ~]# kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 23h myapp NodePort 10.110.238.138 <none> 80:30937/TCP 35m nginx ClusterIP 10.110.52.68 <none> 80/TCP 44m [root@master ~]#
The assigned node port is 30937; from the physical machine open 192.168.10.101:30937 in a browser.
OK, the page is reachable.
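Instead of the interactive kubectl edit above, the same change can be made non-interactively; a sketch using kubectl patch (the node port is still assigned randomly unless you also set nodePort):

[root@master ~]# kubectl patch svc myapp -p '{"spec":{"type":"NodePort"}}'
service/myapp patched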
1. Resource types
Resources become objects once instantiated. The main categories are:
Workload: Pod, ReplicaSet, Deployment, StatefulSet, DaemonSet, Job, CronJob, ...
Service discovery and load balancing: Service, Ingress, ...
Configuration and storage: Volume, CSI, plus the special ConfigMap, Secret and DownwardAPI
Cluster-level resources: Namespace, Node, Role, ClusterRole, RoleBinding, ClusterRoleBinding
Metadata resources: HPA, PodTemplate, LimitRange
2. How resources are created:
The apiserver only accepts resource definitions in JSON format;
manifests can be written in YAML, which is converted to JSON automatically before being submitted to the apiserver.
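You can observe this yourself: kubectl renders the same object in either format, and a client-side dry run shows the JSON that would be submitted (the manifest file name below is only an example):

[root@master ~]# kubectl get svc kubernetes -o yaml | head -5
[root@master ~]# kubectl get svc kubernetes -o json | head -5
[root@master ~]# kubectl create -f some-manifest.yaml --dry-run -o json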
The manifest of most resources contains:
apiVersion: group/version; list the available ones with kubectl api-versions
kind: the resource type
metadata: metadata (name, namespace, labels, annotations)
Every resource has a reference path of the form /api/GROUP/VERSION/namespaces/NAMESPACE/TYPE/NAME, for example /api/v1/namespaces/default/pods/myapp-6946649ccd-c6m9b
spec: the desired state
status: the current state, maintained by the Kubernetes cluster itself
To view the definition of a resource type, for example a Pod:
[root@master ~]# kubectl explain pod
KIND:     Pod
VERSION:  v1
DESCRIPTION:
...
An example Pod resource definition:
[root@master ~]# mkdir maniteste
[root@master ~]# vim maniteste/pod-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-demo
  namespace: default
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
  - name: busybox
    image: busybox:latest
    command:
    - "/bin/sh"
    - "-c"
    - "sleep 5"
Create the resource:
[root@master ~]# kubectl create -f maniteste/pod-demo.yaml [root@master ~]# kubectl describe pods pod-demo Name: pod-demo Namespace: default Priority: 0 PriorityClassName: <none> Node: node2/192.168.10.104 Start Time: Wed, 17 Oct 2018 19:54:03 +0800 Labels: app=myapp tier=frontend Annotations: <none> Status: Running IP: 10.244.2.26
Access it and check the logs:
[root@master ~]# curl 10.244.2.26 Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a> [root@master ~]# kubectl logs pod-demo myapp 10.244.0.0 - - [17/Oct/2018:11:56:49 +0000] "GET / HTTP/1.1">
One pod running two containers.
Delete the pod: kubectl delete -f maniteste/pod-demo.yaml
1. View the definition of a pod's containers field: kubectl explain pods.spec.containers
Resource manifest notes:
Standalone Pod resource; manifest format:
top-level fields: apiVersion (group/version), kind, metadata (name, namespace, labels, annotations, ...), spec, status (read-only)
Pod resource:
  spec.containers <[]Object>
  - name <string>
    image <string>
    imagePullPolicy: Always | Never | IfNotPresent
2. Labels:
A label is a key=value pair. The key may contain letters, digits, "_", "-" and ".". The value may be empty; otherwise it must start and end with a letter or digit, and may contain letters, digits, "_", "-" and "." in between.
Apply a label:
[root@master ~]# kubectl get pods -l app --show-labels NAME READY STATUS RESTARTS AGE LABELS pod-demo 0/2 ContainerCreating 0 4m46s app=myapp,tier=frontend [root@master ~]# kubectl label pods pod-demo release=haha pod/pod-demo labeled [root@master ~]# kubectl get pods -l app --show-labels NAME READY STATUS RESTARTS AGE LABELS pod-demo 0/2 ContainerCreating 0 5m27s app=myapp,release=haha,tier=frontend [root@master ~]#
List pods that carry certain label keys:
[root@master ~]# kubectl get pods -l app,release NAME READY STATUS RESTARTS AGE pod-demo 0/2 ContainerCreating 0 7m43s [root@master ~]#
Label selectors:
Equality-based: =, ==, !=
For example: kubectl get pods -l release=stable
Set-based: KEY in (VALUE1,VALUE2,...), KEY notin (VALUE1,VALUE2,...), KEY, !KEY
[root@master ~]# kubectl get pods -l "release notin (stable,haha)" NAME READY STATUS RESTARTS AGE client 0/1 Error 0 46h myapp-6946649ccd-2lncx 1/1 Running 2 46h nginx-deploy-8c5fc574c-5bjjm 1/1 Running 2 46h [root@master ~]#
Many resources support embedded fields that define the label selector they use:
matchLabels: key/value pairs given directly
matchExpressions: a selector built from expressions of the form {key: "KEY", operator: "OPERATOR", values: [VAL1,VAL2,...]}
Operators: In and NotIn require the values field to be a non-empty list; Exists and DoesNotExist require it to be empty. A short example follows below.
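A sketch of what these look like inside a controller's selector (the keys and values here are purely illustrative):

selector:
  matchLabels:
    app: myapp
  matchExpressions:
  - key: release
    operator: In
    values: ["canary","stable"]
  - key: environment
    operator: Exists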
3. nodeSelector: select nodes by label
nodeName: pin a pod to a specific node
Label a node, for example:
[root@master ~]# kubectl label nodes node1 disktype=ssd
node/node1 labeled
[root@master ~]#
Modify the YAML file:
[root@master ~]# vim maniteste/pod-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-demo
  namespace: default
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    ports:
    - name: http
      containerPort: 80
  - name: busybox
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    command:
    - "/bin/sh"
    - "-c"
    - "sleep 5"
  nodeSelector:
    disktype: ssd
Recreate the pod:
[root@master ~]# kubectl delete pods pod-demo
pod "pod-demo" deleted
[root@master ~]# kubectl create -f maniteste/pod-demo.yaml
pod/pod-demo created
[root@master ~]#
4. annotations
Unlike labels, annotations cannot be used to select resource objects; they only attach extra "metadata" to an object.
Example:
apiVersion: v1
kind: Pod
metadata:
  name: pod-demo
  namespace: default
  labels:
    app: myapp
    tier: frontend
  annotations:
    haha.com/create_by: "hello world"
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    ports:
    - name: http
      containerPort: 80
  - name: busybox
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    command:
    - "/bin/sh"
    - "-c"
    - "sleep 3600"
  nodeSelector:
    disktype: ssd
5. Pod lifecycle
Phases: Pending, Running, Succeeded, Failed, Unknown
Important behaviors in the pod lifecycle: init containers and container probes (liveness, readiness).
restartPolicy: Always, OnFailure, Never; defaults to Always.
Probe types: ExecAction, TCPSocketAction, HTTPGetAction. A sketch of an init container follows below, with probe examples after it.
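Init containers are mentioned above but not demonstrated later, so here is a minimal sketch; the image and the wait-for-redis command are illustrative. Init containers run to completion, one after another, before the regular containers start:

apiVersion: v1
kind: Pod
metadata:
  name: init-demo
  namespace: default
spec:
  initContainers:
  - name: wait-for-redis
    image: busybox:latest
    command: ["/bin/sh","-c","until nslookup redis.default.svc.cluster.local; do sleep 2; done"]
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1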
ExecAction example:
[root@master ~]# vim liveness-exec.yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-pod
  namespace: default
spec:
  containers:
  - name: liveness-exec-container
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    command: ["/bin/sh","-c","touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 3600"]
    livenessProbe:
      exec:
        command: ["test","-e","/tmp/healthy"]
      initialDelaySeconds: 2
      periodSeconds: 3
Create it and watch:
[root@master ~]# kubectl create -f liveness-exec.yaml pod/liveness-exec-pod created [root@master ~]# kubectl get pods -w NAME READY STATUS RESTARTS AGE client 0/1 Error 0 3d liveness-exec-pod 1/1 Running 3 3m myapp-6946649ccd-2lncx 1/1 Running 4 3d nginx-deploy-8c5fc574c-5bjjm 1/1 Running 4 3d liveness-exec-pod 1/1 Running 4 4m
HTTPGetAction example:
[root@master ~]# vim liveness-httpGet.yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-httpget-pod
  namespace: default
spec:
  containers:
  - name: liveness-httpget-container
    image: ikubernetes/myapp:v1
    imagePullPolicy: IfNotPresent
    ports:
    - name: http
      containerPort: 80
    livenessProbe:
      httpGet:
        port: http
        path: /index.html
      initialDelaySeconds: 1
      periodSeconds: 3
[root@master ~]# kubectl create -f liveness-httpGet.yaml
pod/liveness-httpget-pod created
[root@master ~]#
Readiness probe:
[root@master ~]# vim readiness-httget.yaml
apiVersion: v1
kind: Pod
metadata:
  name: readiness-httpget-pod
  namespace: default
spec:
  containers:
  - name: readiness-httpget-container
    image: ikubernetes/myapp:v1
    imagePullPolicy: IfNotPresent
    ports:
    - name: http
      containerPort: 80
    readinessProbe:
      httpGet:
        port: http
        path: /index.html
      initialDelaySeconds: 1
      periodSeconds: 3
Container lifecycle: a postStart example:
[root@master ~]# vim poststart-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: poststart-pod
  namespace: default
spec:
  containers:
  - name: busybox-httpd
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh","-c","echo Home_Page >> /tmp/index.html"]
    #command: ['/bin/sh','-c','sleep 3600']
    command: ["/bin/httpd"]
    args: ["-f","-h /tmp"]
[root@master ~]# kubectl create -f poststart-pod.yaml
pod/poststart-pod created
[root@master ~]#
However, using /tmp as the web root this way will certainly not work: Kubernetes does not guarantee that the postStart hook runs before the container's main command, so httpd may start serving /tmp before index.html exists.
6. Pod controllers
There are several types of pod controllers:
ReplicaSet: creates the specified number of pod replicas on the user's behalf, keeps the replica count at the desired state, and supports rolling, automatic scale-up and scale-down.
A ReplicaSet consists of three main parts:
(1) the desired number of pod replicas
(2) a label selector that determines which pods it manages
(3) a pod template used to create new pods when the existing count falls short
It helps users manage stateless pods and precisely maintains the user-defined target count; however, a ReplicaSet is normally not used directly; a Deployment is used instead.
Deployment: works on top of ReplicaSets to manage stateless applications and is currently the controller of choice. It supports rolling updates and rollbacks and provides declarative configuration.
DaemonSet: ensures that every node in the cluster runs exactly one copy of a particular pod; typically used for system-level background tasks, for example an ELK stack.
Characteristics: the workload is stateless and must run as a daemon.
Job: exits as soon as the task completes; no restart or re-creation is needed.
CronJob: controls periodic tasks that do not need to run continuously in the background.
StatefulSet: manages stateful applications.
ReplicaSet (rs) example:
[root@master ~]# kubectl get deployments
NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
myapp          1         1         1            1           4d
nginx-deploy   1         1         1            1           4d1h
[root@master ~]# kubectl delete deploy myapp
deployment.extensions "myapp" deleted
[root@master ~]# kubectl delete deploy nginx-deploy
deployment.extensions "nginx-deploy" deleted
[root@master ~]# vim rs-demo.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
      release: canary
  template:
    metadata:
      name: myapp-pod
      labels:
        app: myapp
        release: canary
        environment: qa
    spec:
      containers:
      - name: myapp-conatainer
        image: ikubernetes/myapp:v1
        ports:
        - name: http
          containerPort: 80
[root@master ~]# kubectl create -f rs-demo.yaml
replicaset.apps/myapp created
Check the labels:
[root@master ~]# kubectl get pods --show-labels NAME READY STATUS RESTARTS AGE LABELS client 0/1 Error 0 4d run=client liveness-httpget-pod 1/1 Running 1 107m <none> myapp-fspr7 1/1 Running 0 75s app=myapp,environment=qa,release=canary myapp-ppxrw 1/1 Running 0 75s app=myapp,environment=qa,release=canary pod-demo 2/2 Running 0 3s app=myapp,tier=frontend readiness-httpget-pod 1/1 Running 0 86m <none> [root@master ~]#
Give pod-demo the label release=canary as well:
[root@master ~]# kubectl label pods pod-demo release=canary
pod/pod-demo labeled
Deployment example:
[root@master ~]# kubectl delete rs myapp
replicaset.extensions "myapp" deleted
[root@master ~]# vim deploy-demo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
      release: canary
  template:
    metadata:
      labels:
        app: myapp
        release: canary
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
        ports:
        - name: http
          containerPort: 80
[root@master ~]# kubectl create -f deploy-demo.yaml
deployment.apps/myapp-deploy created
[root@master ~]# kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
client                          0/1     Error     0          4d20h
liveness-httpget-pod            1/1     Running   2          22h
myapp-deploy-574965d786-5x42g   1/1     Running   0          70s
myapp-deploy-574965d786-dqzpd   1/1     Running   0          70s
pod-demo                        2/2     Running   3          20h
readiness-httpget-pod           1/1     Running   1          21h
[root@master ~]# kubectl get rs
NAME                      DESIRED   CURRENT   READY   AGE
myapp-deploy-574965d786   2         2         2       93s
[root@master ~]#
To change the replica count, edit deploy-demo.yaml and run kubectl apply -f deploy-demo.yaml,
or patch it directly: kubectl patch deployment myapp-deploy -p '{"spec":{"replicas":5}}', which sets the count to 5 replicas.
Other attributes can be patched the same way, for example:
[root@master ~]# kubectl patch deployment myapp-deploy -p '{"spec":{"strategy":{"rollingUpdate":{"maxSurge":1,"maxUnavailable":0}}}}'
deployment.extensions/myapp-deploy patched
[root@master ~]#
Update the image version:
[root@master ~]# kubectl set image deployment myapp-deploy myapp=ikubernetes/myapp:v3 && kubectl rollout pause deployment myapp-deploy
deployment.extensions/myapp-deploy image updated
deployment.extensions/myapp-deploy paused
[root@master ~]# kubectl rollout status deployment myapp-deploy
Waiting for deployment "myapp-deploy" rollout to finish: 1 out of 2 new replicas have been updated...
[root@master ~]# kubectl rollout resume deployment myapp-deploy
deployment.extensions/myapp-deploy resumed
[root@master ~]#
Roll back to a previous revision:
[root@master ~]# kubectl rollout undo deployment myapp-deploy --to-revision=1
deployment.extensions/myapp-deploy
[root@master ~]#
DaemonSet example:
On node1 and node2 run: docker pull ikubernetes/filebeat:5.6.5-alpine
Edit the YAML file:
[root@master ~]# vim ds-demo.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: myapp-ds
  namespace: default
spec:
  selector:
    matchLabels:
      app: filebeat
      release: stable
  template:
    metadata:
      labels:
        app: filebeat
        release: stable
    spec:
      containers:
      - name: filebeat
        image: ikubernetes/filebeat:5.6.5-alpine
        env:
        - name: REDIS_HOST
          value: redis.default.svc.cluster.local
        - name: REDIS_LOG_LEVEL
          value: info
[root@master ~]# kubectl apply -f ds-demo.yaml
daemonset.apps/myapp-ds created
[root@master ~]#
Modify the YAML file to add a redis Deployment alongside the filebeat DaemonSet:
[root@master ~]# vim ds-demo.yaml apiVersion: apps/v1 kind: Deployment metadata: name: redis namespace: default spec: replicas: 1 selector: matchLabels: app: redis role: logstor template: metadata: labels: app: redis role: logstor spec: containers: - name: redis image: redis:4.0-alpine ports: - name: redis containerPort: 6379 --- apiVersion: apps/v1 kind: DaemonSet metadata: name: filebeat-ds namespace: default spec: selector: matchLabels: app: filebeat release: stable template: metadata: labels: app: filebeat release: stable spec: containers: - name: filebeat image: ikubernetes/filebeat:5.6.5-alpine env: - name: REDIS_HOST value: redis.default.svc.cluster.local - name: REDIS_LOG_LEVEL value: info [root@master ~]# kubectl delete -f ds-demo.yaml [root@master ~]# kubectl apply -f ds-demo.yaml deployment.apps/redis created daemonset.apps/filebeat-ds created [root@master ~]#
Expose the redis port:
[root@master ~]# kubectl expose deployment redis --port=6379 service/redis exposed [root@master ~]# kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 5d20h myapp NodePort 10.110.238.138 <none> 80:30937/TCP 4d21h nginx ClusterIP 10.110.52.68 <none> 80/TCP 4d21h redis ClusterIP 10.97.196.222 <none> 6379/TCP 11s [root@master ~]#
Exec into the redis pod:
[root@master ~]# kubectl get pods NAME READY STATUS RESTARTS AGE redis-664bbc646b-sg6wk 1/1 Running 0 2m55s [root@master ~]# kubectl exec -it redis-664bbc646b-sg6wk -- /bin/sh /data # netstat -tnl Active Internet connections (only servers) Proto Recv-Q Send-Q Local Address Foreign Address State tcp 0 0 0.0.0.0:6379 0.0.0.0:* LISTEN tcp 0 0 :::6379 :::* LISTEN /data # nslookup redis.default.svc.cluster.local nslookup: can't resolve '(null)': Name does not resolve Name: redis.default.svc.cluster.local Address 1: 10.97.196.222 redis.default.svc.cluster.local /data # /data # redis-cli -h redis.default.svc.cluster.local redis.default.svc.cluster.local:6379> keys * (empty list or set) redis.default.svc.cluster.local:6379>
Exec into a filebeat pod:
[root@master ~]# kubectl get pods NAME READY STATUS RESTARTS AGE client 0/1 Error 0 4d21h filebeat-ds-bszfz 1/1 Running 0 6m2s filebeat-ds-w5nzb 1/1 Running 0 6m2s redis-664bbc646b-sg6wk 1/1 Running 0 6m2s [root@master ~]# kubectl exec -it filebeat-ds-bszfz -- /bin/sh / # printenv / # nslookup redis.default.svc.cluster.local / # kill -1 1
Update the DaemonSet image: [root@master ~]# kubectl set image daemonsets filebeat-ds filebeat=ikubernetes/filebeat:5.6.6-alpine
Service is one of the most central resource objects in Kubernetes; a Service can be understood as one "microservice" in a microservice architecture.
Simply put, a Service fronts a group of pods. Service and pods are tied together by labels: the pods of a given Service carry the same labels. Traffic to the pods behind a Service is load-balanced by kube-proxy, and every Service is assigned a globally unique virtual IP, the cluster IP, which stays unchanged for the Service's whole lifetime. Kubernetes also runs a DNS service that maps the Service name to its cluster IP.
kube-proxy modes: userspace, iptables, ipvs
Service types: ExternalName, ClusterIP, NodePort, LoadBalancer
DNS resource record: SVC_NAME.NS_NAME.DOMAIN.LTD.
The default cluster domain is svc.cluster.local., for example redis.default.svc.cluster.local.
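In a kubeadm-built cluster the kube-proxy settings live in a ConfigMap in kube-system; a sketch for checking (and, carefully, changing) the proxy mode:

[root@master ~]# kubectl get configmap kube-proxy -n kube-system -o yaml | grep mode
    mode: ""          # empty means the default, iptables, is in effect
# to switch to ipvs: edit the ConfigMap, then recreate the kube-proxy pods
[root@master ~]# kubectl edit configmap kube-proxy -n kube-system
[root@master ~]# kubectl delete pods -n kube-system -l k8s-app=kube-proxy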
[root@master ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        5d20h
myapp        NodePort    10.110.238.138   <none>        80:30937/TCP   4d22h
nginx        ClusterIP   10.110.52.68     <none>        80/TCP         4d22h
redis        ClusterIP   10.97.196.222    <none>        6379/TCP       29m
[root@master ~]# kubectl delete svc redis
[root@master ~]# kubectl delete svc nginx
[root@master ~]# kubectl delete svc myapp
[root@master ~]# vim redis-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: default
spec:
  selector:
    app: redis
    role: logstor
  clusterIP: 10.97.97.97
  type: ClusterIP
  ports:
  - port: 6379
    targetPort: 6379
[root@master ~]# kubectl apply -f redis-svc.yaml
service/redis created
[root@master ~]#
NodePort:
[root@master ~]# vim myapp-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: default
spec:
  selector:
    app: myapp
    role: canary
  clusterIP: 10.99.99.99
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080
[root@master ~]# kubectl apply -f myapp-svc.yaml
service/myapp created
[root@master ~]# kubectl patch svc myapp -p '{"spec":{"sessionAffinity":"ClientIP"}}'
service/myapp patched
[root@master ~]#
A headless Service (no cluster IP assigned):
[root@master ~]# vim myapp-svc-headless.yaml apiVersion: v1 kind: Service metadata: name: myapp-svc namespace: default spec: selector: app: myapp release: canary clusterIP: "None" ports: - port: 80 targetPort: 80 [root@master ~]# kubectl apply -f myapp-svc-headless.yaml service/myapp-svc created [root@master ~]# kubectl get svc -n kube-system NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 5d21h [root@master ~]# dig -t A myapp-svc.default.svc.cluster.local. @10.96.0.10 ; <<>> DiG 9.9.4-RedHat-9.9.4-61.el7_5.1 <<>> -t A myapp-svc.default.svc.cluster.local. @10.96.0.10 ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 32215 ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 5, AUTHORITY: 0, ADDITIONAL: 1 ;; OPT PSEUDOSECTION: ; EDNS: version: 0, flags:; udp: 4096 ;; QUESTION SECTION: ;myapp-svc.default.svc.cluster.local. IN A ;; ANSWER SECTION: myapp-svc.default.svc.cluster.local. 5 IN A 10.244.1.59 myapp-svc.default.svc.cluster.local. 5 IN A 10.244.2.51 myapp-svc.default.svc.cluster.local. 5 IN A 10.244.1.60 myapp-svc.default.svc.cluster.local. 5 IN A 10.244.1.58 myapp-svc.default.svc.cluster.local. 5 IN A 10.244.2.52 ;; Query time: 2 msec ;; SERVER: 10.96.0.10#53(10.96.0.10) ;; WHEN: Sun Oct 21 19:41:16 CST 2018 ;; MSG SIZE rcvd: 319 [root@master ~]#
Ingress can be loosely understood as an nginx running inside Kubernetes, acting as a load balancer.
Ingress involves two parts: the Ingress controller and the Ingress resources.
ingress-nginx: https://github.com/kubernetes/ingress-nginx, https://kubernetes.github.io/ingress-nginx/deploy/
1. Download the manifests
[root@master ~]# mkdir ingress-nginx
[root@master ~]# cd ingress-nginx
[root@master ingress-nginx]# for file in namespace.yaml configmap.yaml rbac.yaml with-rbac.yaml ; do curl -O https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/${file};done
2. Create the resources
[root@master ingress-nginx]# kubectl apply -f ./
3. Write the application manifests
[root@master ~]# mkdir maniteste/ingress [root@master ~]# cd maniteste/ingress [root@master ingress]#vim deploy-demo.yaml apiVersion: v1 kind: Service metadata: name: myapp namespace: default spec: selector: app: myapp release: canary ports: - name: http port: 80 targetPort: 80 --- apiVersion: apps/v1 kind: Deployment metadata: name: myapp-deploy namespace: default spec: replicas: 3 selector: matchLabels: app: myapp release: canary template: metadata: labels: app: myapp release: canary spec: containers: - name: myapp image: ikubernetes/myapp:v2 ports: - name: http containerPort: 80 [root@master ingress]# kubectl delete svc myapp [root@master ingress]# kubectl delete deployment myapp-deploy [root@master ingress]# kubectl apply -f deploy-demo.yaml service/myapp created deployment.apps/myapp-deploy created [root@master ingress]#
4. Create a NodePort Service for the ingress-nginx controller
If nodePort is not specified, a random port is assigned; a sketch of such a Service follows below.
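The Service manifest itself is not shown in the original; at the time, the ingress-nginx repository shipped a baremetal service-nodeport.yaml for exactly this purpose. Below is a minimal sketch of such a Service. The selector labels are an assumption, so verify them against the controller pod (kubectl get pods -n ingress-nginx --show-labels) before applying; 30443 matches the HTTPS URL used at the end of this article, and 30880 is only illustrative:

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx       # assumption: check the controller pod labels
    app.kubernetes.io/part-of: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30880
  - name: https
    port: 443
    targetPort: 443
    nodePort: 30443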
5. Create the Ingress for the app
[root@master ingress-nginx]# vim ingress-myapp.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-myapp
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: myapp.haha.com
    http:
      paths:
      - path:
        backend:
          serviceName: myapp
          servicePort: 80
[root@master ingress-nginx]# kubectl apply -f ingress-myapp.yaml
[root@master ~]# kubectl get ingresses
NAME            HOSTS            ADDRESS   PORTS   AGE
ingress-myapp   myapp.haha.com             80      58s
[root@master ~]#
Add myapp.haha.com to the hosts file of your physical machine, then open it in a browser.
You can look up the controller's node port with: kubectl get svc -n ingress-nginx
6. Deploy a Tomcat
[root@master ingress-nginx]# vim tomcat-deploy.yaml apiVersion: v1 kind: Service metadata: name: tomcat namespace: default spec: selector: app: tomcat release: canary ports: - name: http port: 8080 targetPort: 8080 - name: ajp port: 8009 targetPort: 8009 --- apiVersion: apps/v1 kind: Deployment metadata: name: tomcat-deploy namespace: default spec: replicas: 3 selector: matchLabels: app: tomcat release: canary template: metadata: labels: app: tomcat release: canary spec: containers: - name: tomcat image: tomcat:8.5.34-jre8-alpine ports: - name: http containerPort: 8080 - name: ajp containerPort: 8009 [root@master ingress-nginx]# vim ingress-tomcat.yaml apiVersion: extensions/v1beta1 kind: Ingress metadata: name: ingress-tomcat namespace: default annotations: kubernetes.io/ingress.class: "tomcat" spec: rules: - host: tomcat.haha.com http: paths: - path: backend: serviceName: tomcat servicePort: 8080 [root@master ingress-nginx]# kubectl apply -f tomcat-deploy.yaml [root@master ingress-nginx]# kubectl apply -f ingress-tomcat.yaml
Check Tomcat:
[root@master ~]# kubectl get pod NAME READY STATUS RESTARTS AGE myapp-deploy-7b64976db9-5ww72 1/1 Running 0 66m myapp-deploy-7b64976db9-fm7jl 1/1 Running 0 66m myapp-deploy-7b64976db9-s6f95 1/1 Running 0 66m tomcat-deploy-695dbfd5bd-6kx42 1/1 Running 0 5m54s tomcat-deploy-695dbfd5bd-f5d7n 0/1 ImagePullBackOff 0 5m54s tomcat-deploy-695dbfd5bd-v5d9d 1/1 Running 0 5m54s [root@master ~]# kubectl exec tomcat-deploy-695dbfd5bd-6kx42 -- netstat -tnl Active Internet connections (only servers) Proto Recv-Q Send-Q Local Address Foreign Address State tcp 0 0 127.0.0.1:8005 0.0.0.0:* LISTEN tcp 0 0 0.0.0.0:8009 0.0.0.0:* LISTEN tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN [root@master ~]#
You can pre-pull the image on the nodes with docker pull tomcat:8.5.34-jre8-alpine (note the ImagePullBackOff pod above).
Create an SSL certificate.
Create a private key:
[root@master ingress]# openssl genrsa -out tls.key 2048
Create a self-signed certificate:
[root@master ingress]# openssl req -new -x509 -key tls.key -out tls.crt -subj /C=CN/ST=Guangdong/L=Guangdong/O=DevOps/CN=tomcat.haha.com
To hand the certificate to the Ingress it must be stored as a Secret:
[root@master ingress]# kubectl create secret tls tomcat-ingress-secret --cert=tls.crt --key=tls.key
secret/tomcat-ingress-secret created
[root@master ingress]# kubectl get secret
NAME                    TYPE                                  DATA   AGE
default-token-kcvkv     kubernetes.io/service-account-token   3      8d
tomcat-ingress-secret   kubernetes.io/tls                     2      29s
[root@master ingress]#
The Secret type is kubernetes.io/tls.
Configure the Tomcat Ingress with TLS:
[root@master ingress]# vim ingress-tomcat-tls.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-tomcat-tls
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
  - hosts:
    - tomcat.haha.com
    secretName: tomcat-ingress-secret
  rules:
  - host: tomcat.haha.com
    http:
      paths:
      - path:
        backend:
          serviceName: tomcat
          servicePort: 8080
[root@master ingress]# kubectl apply -f ingress-tomcat-tls.yaml
ingress.extensions/ingress-tomcat-tls created
[root@master ingress]#
Open https://tomcat.haha.com:30443/ in a browser.