The previous article covered Kubernetes' replication mechanisms, which keep your deployments running and healthy automatically, without any manual intervention. This article moves on to another powerful Kubernetes feature: Services, which add a layer between clients and pods, providing a single point of access that makes pods much easier for clients to consume.

A Kubernetes Service is a resource that provides a single, stable point of access to a group of pods serving the same function. As long as the service exists, its IP address and port do not change; clients connect to that IP address and port, and each connection is routed to one of the pods backing the service.

Connections to the service are load-balanced across all backend pods. Which pods belong to which service is determined by the label selector set when the service is defined.
```
[d:\k8s]$ kubectl create -f kubia-rc.yaml
replicationcontroller/kubia created

[d:\k8s]$ kubectl get pod
NAME          READY   STATUS              RESTARTS   AGE
kubia-6dxn7   0/1     ContainerCreating   0          4s
kubia-fhxht   0/1     ContainerCreating   0          4s
kubia-fpvc7   0/1     ContainerCreating   0          4s
```
The pods are created from the earlier YAML file, whose template sets the label app: kubia, so the service YAML (a service can also be created with kubectl expose, introduced earlier) must specify the same label:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: kubia
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: kubia
```
The resource kind is Service. Two ports are specified: port is the port the service exposes, and targetPort is the port the process inside the pod listens on. Finally, the label selector determines which pods, namely those with the matching label, the service manages.
```
[d:\k8s]$ kubectl create -f kubia-svc.yaml
service/kubia created

[d:\k8s]$ kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP   6d15h
kubia        ClusterIP   10.96.191.193   <none>        80/TCP    4s

[d:\k8s]$ kubectl exec kubia-6dxn7 -- curl -s http://10.96.191.193
You've hit kubia-fhxht

[d:\k8s]$ kubectl exec kubia-6dxn7 -- curl -s http://10.96.191.193
You've hit kubia-fpvc7
```
After the service is created, you can see that kubia has been assigned a CLUSTER-IP, an internal IP address. To test it, use kubectl exec to run a command remotely inside an existing pod container; any of the three pod names will do. The pod running curl sends the request to the service, and the service decides which pod handles it, which is why repeated calls are answered by different pods. If you want all requests from a particular client to go to the same pod every time, set the service's sessionAffinity property to ClientIP.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: kubia
spec:
  sessionAffinity: ClientIP
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: kubia
```
Apart from the added sessionAffinity: ClientIP line, everything is the same as before.
```
[d:\k8s]$ kubectl delete svc kubia
service "kubia" deleted

[d:\k8s]$ kubectl create -f kubia-svc-client-ip-session-affinity.yaml
service/kubia created

[d:\k8s]$ kubectl get svc
NAME         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1     <none>        443/TCP   6d15h
kubia        ClusterIP   10.96.51.99   <none>        80/TCP    25s

[d:\k8s]$ kubectl exec kubia-6dxn7 -- curl -s http://10.96.51.99
You've hit kubia-fhxht

[d:\k8s]$ kubectl exec kubia-6dxn7 -- curl -s http://10.96.51.99
You've hit kubia-fhxht
```
If a pod listens on two or more ports, the service can expose multiple ports as well:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: kubia
spec:
  ports:
  - name: http
    port: 80
    targetPort: 8080
  - name: https
    port: 443
    targetPort: 8080
  selector:
    app: kubia
```
Because the Node.js app listens only on port 8080, both service ports here point at that same target port; let's check that both are reachable:
```
[d:\k8s]$ kubectl create -f kubia-svc-named-ports.yaml
service/kubia created

[d:\k8s]$ kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP          6d18h
kubia        ClusterIP   10.96.13.178   <none>        80/TCP,443/TCP   7s

[d:\k8s]$ kubectl exec kubia-6dxn7 -- curl -s http://10.96.13.178
You've hit kubia-fpvc7

[d:\k8s]$ kubectl exec kubia-6dxn7 -- curl -s http://10.96.13.178:443
You've hit kubia-fpvc7
```
Both ports reach the pods.
The service hardcodes port 8080, so if the target port ever changes, the service must change too. Instead, you can name the port in the pod template and refer to it by name in the service:
```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: kubia
spec:
  replicas: 3
  selector:
    app: kubia
  template:
    metadata:
      labels:
        app: kubia
    spec:
      containers:
      - name: kubia
        image: ksfzhaohui/kubia
        ports:
        - name: http
          containerPort: 8080
```
The earlier ReplicationController is slightly modified to name the port; the service YAML is modified accordingly to use the name:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: kubia
spec:
  ports:
  - port: 80
    targetPort: http
  selector:
    app: kubia
```
targetPort now refers to the port name http:
```
[d:\k8s]$ kubectl create -f kubia-rc2.yaml
replicationcontroller/kubia created

[d:\k8s]$ kubectl get pod
NAME          READY   STATUS    RESTARTS   AGE
kubia-4m9nv   1/1     Running   0          66s
kubia-bm6rx   1/1     Running   0          66s
kubia-dh87r   1/1     Running   0          66s

[d:\k8s]$ kubectl create -f kubia-svc2.yaml
service/kubia created

[d:\k8s]$ kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP   7d
kubia        ClusterIP   10.96.106.37   <none>        80/TCP    10s

[d:\k8s]$ kubectl exec kubia-4m9nv -- curl -s http://10.96.106.37
You've hit kubia-dh87r
```
Services give us a single, stable IP for reaching pods. But does every client first have to create the service, look up its CLUSTER-IP, and hand that to other pods? That would be far too cumbersome, so Kubernetes provides other ways to discover services.
When a pod starts, Kubernetes initializes a set of environment variables pointing to each service that exists at that moment. If a service is created before its client pods, processes in those pods can obtain the service's IP address and port from environment variables.
```
[d:\k8s]$ kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP   7d14h
kubia        ClusterIP   10.96.106.37   <none>        80/TCP    14h

[d:\k8s]$ kubectl get pod
NAME          READY   STATUS    RESTARTS   AGE
kubia-4m9nv   1/1     Running   0          14h
kubia-bm6rx   1/1     Running   0          14h
kubia-dh87r   1/1     Running   0          14h

[d:\k8s]$ kubectl exec kubia-4m9nv env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=kubia-4m9nv
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
KUBERNETES_SERVICE_HOST=10.96.0.1
KUBERNETES_SERVICE_PORT=443
NPM_CONFIG_LOGLEVEL=info
NODE_VERSION=7.10.1
YARN_VERSION=0.24.4
HOME=/root
```
Because these pods were created before the service, there is no information about it in their environment:
```
[d:\k8s]$ kubectl delete po --all
pod "kubia-4m9nv" deleted
pod "kubia-bm6rx" deleted
pod "kubia-dh87r" deleted

[d:\k8s]$ kubectl get pod
NAME          READY   STATUS    RESTARTS   AGE
kubia-599v9   1/1     Running   0          48s
kubia-8s8j4   1/1     Running   0          48s
kubia-dm6kr   1/1     Running   0          48s

[d:\k8s]$ kubectl exec kubia-599v9 env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=kubia-599v9
...
KUBIA_SERVICE_HOST=10.96.106.37
KUBIA_SERVICE_PORT=80
...
```
After deleting the pods, the newly created pods come after the service, and this time the environment contains KUBIA_SERVICE_HOST and KUBIA_SERVICE_PORT, the IP address and port of the kubia service; processes can now obtain the IP and port from these environment variables.
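As a minimal sketch (assuming the image ships curl, as the kubia image does), a process inside a client pod could use those variables directly:

```shell
# KUBIA_SERVICE_HOST and KUBIA_SERVICE_PORT are injected by Kubernetes
# into every pod started after the kubia service was created.
curl -s "http://${KUBIA_SERVICE_HOST}:${KUBIA_SERVICE_PORT}"
```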
The kube-system namespace contains a default service named kube-dns, backed by coredns pods:
```
[d:\k8s]$ kubectl get svc --namespace kube-system
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   9d

[d:\k8s]$ kubectl get po -o wide --namespace kube-system
NAME                       READY   STATUS    RESTARTS   AGE   IP           NODE       NOMINATED NODE   READINESS GATES
coredns-7f9c544f75-h3cwn   1/1     Running   0          9d    172.17.0.3   minikube   <none>           <none>
coredns-7f9c544f75-x2ttk   1/1     Running   0          9d    172.17.0.2   minikube   <none>           <none>
```
DNS queries from processes running in pods are answered by Kubernetes' own DNS server, which knows about every service running in the system. A client pod that knows a service's name can reach it through its fully qualified domain name (FQDN):
```
[d:\k8s]$ kubectl exec kubia-599v9 -- curl -s http://kubia.default.svc.cluster.local
You've hit kubia-8s8j4
```
Here kubia is the service name, default is the namespace the service lives in, and svc.cluster.local is the configurable cluster domain suffix used in all cluster-local service names. If the two pods are in the same namespace, both svc.cluster.local and default can be omitted, and the bare service name suffices:
```
[d:\k8s]$ kubectl exec kubia-599v9 -- curl -s http://kubia.default
You've hit kubia-dm6kr

[d:\k8s]$ kubectl exec kubia-599v9 -- curl -s http://kubia
You've hit kubia-dm6kr
```
```
d:\k8s>winpty kubectl exec -it kubia-599v9 -- sh
# curl -s http://kubia
You've hit kubia-dm6kr
# exit
```
Running a shell inside a pod container with kubectl exec avoids having to prefix every command with kubectl exec; winpty is used here because this is a Windows environment.
The services above are backed by one or more pods running inside the cluster, but sometimes you want to use the Kubernetes service features to expose an external service instead; this can be done either through an Endpoints resource or through an external service alias.

Services are not connected to pods directly; a resource sits between the two: the Endpoints resource.
```
[d:\k8s]$ kubectl describe svc kubia
Name:              kubia
Namespace:         default
Labels:            <none>
Annotations:       <none>
Selector:          app=kubia
Type:              ClusterIP
IP:                10.96.106.37
Port:              <unset>  80/TCP
TargetPort:        http/TCP
Endpoints:         172.17.0.10:8080,172.17.0.11:8080,172.17.0.9:8080
Session Affinity:  None
Events:            <none>

[d:\k8s]$ kubectl get pod -o wide
NAME          READY   STATUS    RESTARTS   AGE     IP            NODE       NOMINATED NODE   READINESS GATES
kubia-599v9   1/1     Running   0          3h61m   172.17.0.10   minikube   <none>           <none>
kubia-8s8j4   1/1     Running   0          3h61m   172.17.0.11   minikube   <none>           <none>
kubia-dm6kr   1/1     Running   0          3h61m   172.17.0.9    minikube   <none>           <none>
```
As you can see, the Endpoints are simply the pods' IPs and ports. When a client connects to the service, the service proxy selects one of these IP:port pairs and redirects the incoming connection to the server listening at that location.

If you create a service without a pod selector, Kubernetes does not create the Endpoints resource; you then need to create an Endpoints resource yourself to specify the endpoint list for the service.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: external-service
spec:
  ports:
  - port: 80
```
This definition specifies no selector:
```
[d:\k8s]$ kubectl create -f external-service.yaml
service/external-service created

[d:\k8s]$ kubectl get svc external-service
NAME               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
external-service   ClusterIP   10.96.241.116   <none>        80/TCP    74s

[d:\k8s]$ kubectl describe svc external-service
Name:              external-service
Namespace:         default
Labels:            <none>
Annotations:       <none>
Selector:          <none>
Type:              ClusterIP
IP:                10.96.241.116
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         <none>
Session Affinity:  None
Events:            <none>
```
Because no selector was specified, the Endpoints field of external-service is none; in this case the service's endpoints can be configured manually.
```yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: external-service
subsets:
- addresses:
  - ip: 172.17.0.9
  - ip: 172.17.0.10
  ports:
  - port: 8080
```
The Endpoints object must have the same name as the service and contain the list of target IP addresses and ports for the service:
```
[d:\k8s]$ kubectl create -f external-service-endpoints.yaml
endpoints/external-service created

[d:\k8s]$ kubectl describe svc external-service
Name:              external-service
Namespace:         default
Labels:            <none>
Annotations:       <none>
Selector:          <none>
Type:              ClusterIP
IP:                10.96.241.116
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         172.17.0.10:8080,172.17.0.9:8080
Session Affinity:  None
Events:            <none>

[d:\k8s]$ kubectl exec kubia-599v9 -- curl -s http://external-service
You've hit kubia-dm6kr
```
After the Endpoints resource is created, the Endpoints field of external-service lists the pod IP addresses and ports, and requests can again be made through kubectl exec.

The endpoints above point at IPs and ports inside the Kubernetes cluster, but they can just as well point at external ones; start a service outside Kubernetes:
```yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: external-service
subsets:
- addresses:
  - ip: 10.13.82.21
  ports:
  - port: 8080
```
The 10.13.82.21:8080 configured here is just an ordinary Tomcat server started on the local machine.
```
[d:\k8s]$ kubectl create -f external-service-endpoints2.yaml
endpoints/external-service created

[d:\k8s]$ kubectl create -f external-service.yaml
service/external-service created

[d:\k8s]$ kubectl exec kubia-599v9 -- curl -s http://external-service
ok
```
The test shows the external service's response is returned.

Instead of manually configuring a service's endpoints to expose an external service, you can give the external service an alias; for example, map 10.13.82.21 to the domain name api.ksfzhaohui.com:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: external-service
spec:
  type: ExternalName
  externalName: api.ksfzhaohui.com
  ports:
  - port: 80
```
To create a service that serves as an alias for an external service, set the type field of the service resource to ExternalName and specify the external service's domain name in externalName:
```
[d:\k8s]$ kubectl create -f external-service-externalname.yaml
service/external-service created

[d:\k8s]$ kubectl exec kubia-599v9 -- curl -s http://external-service:8080
ok
```
The test again returns the external service's response.

To expose services to external clients, Kubernetes provides three approaches: NodePort services, LoadBalancer services, and Ingress resources; each is introduced and demonstrated below.

Create a service and set its type to NodePort. For a NodePort service, Kubernetes reserves a port on all of its nodes (the same port number on every node) and forwards incoming connections on that port to the pods.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: kubia-nodeport
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30123
  selector:
    app: kubia
```
The service type is NodePort and the node port is 30123.
```
[d:\k8s]$ kubectl create -f kubia-svc-nodeport.yaml
service/kubia-nodeport created

[d:\k8s]$ kubectl get svc
NAME             TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
kubernetes       ClusterIP   10.96.0.1     <none>        443/TCP        31d
kubia-nodeport   NodePort    10.96.59.16   <none>        80:30123/TCP   3s

[d:\k8s]$ kubectl exec kubia-7fs6m -- curl -s http://10.96.59.16
You've hit kubia-m487j
```
For external access to the internal pod service, you need a node's IP. The node here is minikube; since this Minikube runs locally on the Windows system, its internal IP can be used directly.
```
[d:\k8s]$ kubectl get nodes -o wide
NAME       STATUS   ROLES    AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE              KERNEL-VERSION   CONTAINER-RUNTIME
minikube   Ready    master   34d   v1.17.0   192.168.99.108   <none>        Buildroot 2019.02.7   4.19.81          docker://19.3.5
```
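Given the node's INTERNAL-IP above, the NodePort service should then be reachable from the host machine along these lines (a sketch, assuming the node IP and node port shown earlier):

```shell
# A NodePort service answers on <node-ip>:<node-port> on every node
curl -s http://192.168.99.108:30123
```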
Whereas the NodePort approach exposes the pods on port 30123 of every node, a LoadBalancer service has its own unique, publicly accessible IP address. LoadBalancer is really an extension of NodePort that makes the service accessible through a dedicated load balancer.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: kubia-loadbalancer
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: kubia
```
The service type is LoadBalancer; no node port needs to be specified.
```
[d:\k8s]$ kubectl create -f kubia-svc-loadbalancer.yaml
service/kubia-loadbalancer created

[d:\k8s]$ kubectl get svc
NAME                 TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes           ClusterIP      10.96.0.1       <none>        443/TCP        31d
kubia-loadbalancer   LoadBalancer   10.96.207.113   <pending>     80:30038/TCP   7s
kubia-nodeport       NodePort       10.96.59.16     <none>        80:30123/TCP   32m
```
Although no node port was specified, node port 30038 was opened automatically after creation.

So the service can still be accessed NodePort-style (node IP + node port). It can also be accessed through the EXTERNAL-IP, but with Minikube there is no external IP address; it remains in the pending state forever.
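On Minikube, one way around the missing external IP is the built-in service helper, which prints a URL reachable from the host (assuming the service name created above):

```shell
# Prints a host-reachable URL (node IP + node port) for the service
minikube service kubia-loadbalancer --url
```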
When an external client connects to the service through a node port, the randomly chosen pod is not necessarily running on the same node that received the connection. This additional hop can be prevented by configuring the service to redirect external traffic only to pods running on the node that received the connection.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: kubia-nodeport-onlylocal
spec:
  type: NodePort
  externalTrafficPolicy: Local
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30124
  selector:
    app: kubia
```
This is done by setting the externalTrafficPolicy field in the service's spec section.

Every LoadBalancer service needs its own load balancer and its own public IP address, whereas a single Ingress needs only one public IP to provide access to many services. When a client sends an HTTP request to the Ingress, the host and path in the request determine which service the request is forwarded to.

Ingress resources only work if an Ingress controller is running in the cluster. Different Kubernetes environments use different controller implementations, and some provide no default controller at all; the Minikube used here requires enabling an add-on before the controller is available.
```
[d:\Program Files\Kubernetes\Minikube]$ minikube addons list
- addon-manager: enabled
- dashboard: enabled
- default-storageclass: enabled
- efk: disabled
- freshpod: disabled
- gvisor: disabled
- helm-tiller: disabled
- ingress: disabled
- ingress-dns: disabled
- logviewer: disabled
- metrics-server: disabled
- nvidia-driver-installer: disabled
- nvidia-gpu-device-plugin: disabled
- registry: disabled
- registry-creds: disabled
- storage-provisioner: enabled
- storage-provisioner-gluster: disabled
```
Listing all the add-ons shows that ingress is disabled, so it needs to be enabled:
```
[d:\Program Files\Kubernetes\Minikube]$ minikube addons enable ingress
* ingress was successfully enabled
```
After enabling it, inspect the pods in the kube-system namespace:
```
[d:\k8s]$ kubectl get pods -n kube-system
NAME                                        READY   STATUS              RESTARTS   AGE
coredns-7f9c544f75-h3cwn                    1/1     Running             0          55d
coredns-7f9c544f75-x2ttk                    1/1     Running             0          55d
etcd-minikube                               1/1     Running             0          55d
kube-addon-manager-minikube                 1/1     Running             0          55d
kube-apiserver-minikube                     1/1     Running             0          55d
kube-controller-manager-minikube            1/1     Running             2          55d
kube-proxy-xtbc4                            1/1     Running             0          55d
kube-scheduler-minikube                     1/1     Running             2          55d
nginx-ingress-controller-6fc5bcc8c9-nvcb5   0/1     ContainerCreating   0          8s
storage-provisioner                         1/1     Running             0          55d
```
A pod named nginx-ingress-controller is being created; it stays stuck pulling its image and reports the following error:
```
Failed to pull image "quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.26.1": rpc error: code = Unknown desc = context canceled
```
這是因?yàn)閲?guó)內(nèi)無法下載quay.io下面的鏡像,可以使用阿里云鏡像:
```
image: registry.aliyuncs.com/google_containers/nginx-ingress-controller:0.26.1
```
Take the deploy/static/mandatory.yaml file from the ingress-nginx project, change the image in it to the Alibaba Cloud mirror, and recreate the resources:
```
[d:\k8s]$ kubectl create -f mandatory.yaml
namespace/ingress-nginx created
configmap/nginx-configuration created
configmap/tcp-services created
configmap/udp-services created
serviceaccount/nginx-ingress-serviceaccount created
clusterrole.rbac.authorization.k8s.io/nginx-ingress-clusterrole created
role.rbac.authorization.k8s.io/nginx-ingress-role created
rolebinding.rbac.authorization.k8s.io/nginx-ingress-role-nisa-binding created
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-clusterrole-nisa-binding created
deployment.apps/nginx-ingress-controller created
```
Check the kube-system pods again:
```
[d:\k8s]$ kubectl get pods -n kube-system
NAME                                        READY   STATUS    RESTARTS   AGE
coredns-7f9c544f75-h3cwn                    1/1     Running   0          56d
coredns-7f9c544f75-x2ttk                    1/1     Running   0          56d
etcd-minikube                               1/1     Running   0          56d
kube-addon-manager-minikube                 1/1     Running   0          56d
kube-apiserver-minikube                     1/1     Running   0          56d
kube-controller-manager-minikube            1/1     Running   2          56d
kube-proxy-xtbc4                            1/1     Running   0          56d
kube-scheduler-minikube                     1/1     Running   2          56d
nginx-ingress-controller-6fc5bcc8c9-nvcb5   1/1     Running   0          10m
storage-provisioner                         1/1     Running   0          56d
```
nginx-ingress-controller is now in the Running state, so Ingress resources can be used.

With the Ingress controller up, an Ingress resource can be created:
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubia
spec:
  rules:
  - host: kubia.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: kubia-nodeport
          servicePort: 80
```
The resource kind is Ingress, with a single rule defined: all requests sent to kubia.example.com are forwarded to port 80 of the kubia-nodeport service.
```
[d:\k8s]$ kubectl get svc
NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes       ClusterIP   10.96.0.1       <none>        443/TCP        53d
kubia-nodeport   NodePort    10.96.204.104   <none>        80:30123/TCP   21h

[d:\k8s]$ kubectl create -f kubia-ingress.yaml
ingress.extensions/kubia created

[d:\k8s]$ kubectl get ingress
NAME    HOSTS               ADDRESS          PORTS   AGE
kubia   kubia.example.com   192.168.99.108   80      6m4s
```
The domain name must be mapped to the ADDRESS, 192.168.99.108, which is done by editing the hosts file; after that the service can be accessed by domain name, with requests ultimately forwarded to the kubia-nodeport service.
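The hosts entry for this might look as follows (on Windows the file is C:\Windows\System32\drivers\etc\hosts, on Linux /etc/hosts):

```text
192.168.99.108 kubia.example.com
```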
Roughly, a request flows like this: the browser first queries DNS for the domain, and DNS returns the controller's IP address; the client sends the request to the controller with kubia.example.com in the header; the controller determines from that header which service the client wants to access, looks up the pod IPs through the Endpoints object associated with the service, and forwards the request to one of the pods.

rules and paths are arrays, so multiple entries can be configured:
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubia2
spec:
  rules:
  - host: kubia.example.com
    http:
      paths:
      - path: /v1
        backend:
          serviceName: kubia-nodeport
          servicePort: 80
      - path: /v2
        backend:
          serviceName: kubia-nodeport
          servicePort: 80
  - host: kubia2.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: kubia-nodeport
          servicePort: 80
```
Multiple hosts and paths are configured; for convenience they all map to the same service here.
```
[d:\k8s]$ kubectl create -f kubia-ingress2.yaml
ingress.extensions/kubia2 created

[d:\k8s]$ kubectl get ingress
NAME     HOSTS                                  ADDRESS          PORTS   AGE
kubia    kubia.example.com                      192.168.99.108   80      41m
kubia2   kubia.example.com,kubia2.example.com   192.168.99.108   80      15m
```
The hosts file again needs entries for both domains; testing then works the same way.

Everything so far used HTTP; HTTPS requires configuring a certificate. When a client opens a TLS connection to the Ingress controller, the controller terminates TLS: traffic between the client and the controller is encrypted, while traffic between the controller and the pods is not. To make the controller do this, the certificate and private key must be attached to the Ingress.
```
[root@localhost batck-job]# openssl genrsa -out tls.key 2048
Generating RSA private key, 2048 bit long modulus
..................................................................+++
........................+++
e is 65537 (0x10001)
[root@localhost batck-job]# openssl req -new -x509 -key tls.key -out tls.cert -days 360 -subj /CN=kubia.example.com
[root@localhost batck-job]# ll
-rw-r--r--. 1 root root 1115 Feb 11 01:20 tls.cert
-rw-r--r--. 1 root root 1679 Feb 11 01:20 tls.key
```
Create a Secret from the two generated files:
```
[d:\k8s]$ kubectl create secret tls tls-secret --cert=tls.cert --key=tls.key
secret/tls-secret created
```
Now the Ingress object can be updated so that it also accepts HTTPS requests for kubia.example.com:
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubia
spec:
  tls:
  - hosts:
    - kubia.example.com
    secretName: tls-secret
  rules:
  - host: kubia.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: kubia-nodeport
          servicePort: 80
```
The tls section references the certificate secret.
```
[d:\k8s]$ kubectl apply -f kubia-ingress-tls.yaml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
ingress.extensions/kubia configured
```
Accessing the site over HTTPS in a browser now works.
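The same check from the command line might look like this (a sketch; -k is needed because the certificate above is self-signed):

```shell
# TLS is terminated at the Ingress controller; -k skips verification
# of the self-signed certificate generated earlier
curl -k https://kubia.example.com
```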
As long as a pod's labels match a service's pod selector, the pod serves as one of the service's backends; but a pod that is not yet ready cannot handle requests. This is what readiness probes are for: they check whether the pod is ready, and only when the check succeeds does the pod become a service backend that receives requests.

There are three types of readiness probes:
Exec probe: a process is executed inside the container, and the container's status is determined by the process's exit code;
HTTP GET probe: an HTTP GET request is sent to the container, and the HTTP status code of the response determines whether the container is ready;
TCP socket probe: a TCP connection is opened to a specified port of the container; if the connection is established, the container is considered ready.
Kubernetes invokes the probe periodically and acts on its result: if a pod reports that it is not ready, it is removed from the service; when the pod becomes ready again, it is re-added.
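As a sketch, the HTTP GET and TCP socket variants would be declared in a container spec roughly as follows (the path and port here are assumptions based on the kubia app; only an exec probe is actually used below):

```yaml
# HTTP GET probe: ready when the response status code indicates success
readinessProbe:
  httpGet:
    path: /
    port: 8080

# TCP socket probe (alternative fragment): ready when the port accepts a connection
readinessProbe:
  tcpSocket:
    port: 8080
```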
Edit the ReplicationController and add a readiness probe to the pod template:
```
[d:\k8s]$ kubectl edit rc kubia
libpng warning: iCCP: known incorrect sRGB profile
replicationcontroller/kubia edited

[d:\k8s]$ kubectl get pods
NAME          READY   STATUS    RESTARTS   AGE
kubia-7fs6m   1/1     Running   0          22d
kubia-m487j   1/1     Running   0          22d
kubia-q6z5w   1/1     Running   0          22d
```
Edit the ReplicationController as shown below, adding a readinessProbe:
```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: kubia
spec:
  replicas: 3
  selector:
    app: kubia
  template:
    metadata:
      labels:
        app: kubia
    spec:
      containers:
      - name: kubia
        image: ksfzhaohui/kubia
        ports:
        - containerPort: 8080
        readinessProbe:
          exec:
            command:
            - ls
            - /var/ready
```
The readiness probe periodically runs ls /var/ready inside the container. ls returns exit code 0 if the file exists and a non-zero code otherwise, so the probe succeeds only while the file exists.

Editing the ReplicationController does not replace the existing pods, so all the pods above still show READY 1/1, meaning they are ready to handle requests.
```
[d:\k8s]$ kubectl delete pod kubia-m487j
pod "kubia-m487j" deleted

[d:\k8s]$ kubectl get pods
NAME          READY   STATUS    RESTARTS   AGE
kubia-7fs6m   1/1     Running   0          22d
kubia-cxz5v   0/1     Running   0          114s
kubia-q6z5w   1/1     Running   0          22d
```
After deleting one pod, a replacement pod carrying the readiness probe is created immediately, and its READY column stays at 0 for a long time.

This article first introduced the basics of services: how to create and discover them; then the Endpoints resource that links services and pods; and finally, in detail, the three ways of exposing services to external clients.