Pods created by a Deployment are stateless. If such a Pod has a Volume mounted and then dies, the Replication Controller runs a replacement to preserve availability; but because the Pod is stateless, its association with the old Volume is severed when it dies, and the new Pod cannot find the data the old one was using. Users have no awareness that the underlying Pod died, yet once it does, the previously mounted disk can no longer be used.
Pod consistency: covers ordering (start/stop order) and network identity. This consistency is tied to the Pod itself and is independent of which node the Pod is scheduled to.
Stable ordering: for a StatefulSet with N replicas, each Pod is assigned a unique ordinal in the range [0, N).
Stable network identity: each Pod's hostname follows the pattern <statefulset name>-<ordinal>.
Stable storage: a PV is created for each Pod through volumeClaimTemplates. Deleting the StatefulSet or scaling it down does not delete the associated volumes.
template: Pods created from the template are identical in state (apart from name, IP, and domain name). In other words, any Pod can be deleted and replaced by a newly generated one.
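The stable identity described above shows up in the predictable DNS names a StatefulSet gives its Pods through a headless Service. A quick sketch of the naming pattern (the StatefulSet name `web`, service name `nginx`, and namespace `default` below are hypothetical examples, not from the manifests in this article):

```shell
# Each Pod of a StatefulSet gets the stable DNS name
#   <statefulset>-<ordinal>.<serviceName>.<namespace>.svc.cluster.local
# For a hypothetical StatefulSet "web" with serviceName "nginx" and 3 replicas:
for i in 0 1 2; do
  echo "web-${i}.nginx.default.svc.cluster.local"
done
```

Because the ordinal is part of the name, a recreated Pod comes back under the same DNS name, which is what allows peers (e.g. a MySQL replica addressing its primary) to keep working across Pod restarts.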
A typical example is MySQL with a primary/replica topology.
A common analogy: stateless services are like cattle, interchangeable and sent away when the time comes; stateful services are like pets, which are not given away but cared for over their whole lifetime.
StorageClass: automatically creates PVs.
Still needed: automatic creation of PVCs.
Like the ReplicaSet and Deployment resources, StatefulSet is implemented as a controller. It is managed mainly by three cooperating components: StatefulSetController, StatefulSetControl, and StatefulPodControl. StatefulSetController receives add/update/delete events from both the PodInformer and the ReplicaSetInformer and pushes them onto a work queue.
In its Run method, StatefulSetController starts multiple worker goroutines; these pull pending StatefulSet resources off the queue and synchronize them. The following walks through how Kubernetes synchronizes a StatefulSet.
[root@master yaml]# vim statefulset.yaml
apiVersion: v1
kind: Service
metadata:
  name: headless-svc
  labels:
    app: headless-svc
spec:
  ports:
  - port: 80
  selector:
    app: headless-pod
  clusterIP: None          # headless: no unified cluster IP
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: statefulset-test
spec:
  serviceName: headless-svc
  replicas: 3
  selector:
    matchLabels:
      app: headless-pod
  template:
    metadata:
      labels:
        app: headless-pod
    spec:
      containers:
      - name: myhttpd
        image: httpd
        ports:
        - containerPort: 80
Deployment: Deploy + RS + random suffix (the Pod name). Its Pods have no ordering and can be replaced arbitrarily.
1. headless-svc: a headless Service. It has no cluster IP, so it provides no load balancing. A StatefulSet requires ordered Pod names, and no Pod can be arbitrarily replaced; even after a Pod is recreated, its name stays the same. The headless Service provides a stable name for each backend Pod.
2. StatefulSet: defines the application itself.
3. volumeClaimTemplates: automatically creates a PVC, providing each backend Pod with its own dedicated storage.
[root@master yaml]# kubectl apply -f statefulset.yaml
[root@master yaml]# kubectl get svc
[root@master yaml]# kubectl get pod
// note that the Pods are created and named in order
Install the packages required by NFS:
[root@node02 ~]# yum -y install nfs-utils rpcbind
Create the shared directory:
[root@master ~]# mkdir /nfsdata
Grant access to the shared directory:
[root@master ~]# vim /etc/exports
/nfsdata *(rw,sync,no_root_squash)
Start nfs and rpcbind:
[root@master ~]# systemctl start nfs-server.service
[root@master ~]# systemctl start rpcbind
Verify the export:
[root@master ~]# showmount -e
[root@master yaml]# vim rbac-rolebind.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfs-provisioner-runner
rules:
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["watch", "create", "update", "patch"]
- apiGroups: [""]
  resources: ["services", "endpoints"]
  verbs: ["get", "create", "list", "watch", "update"]
- apiGroups: ["extensions"]
  resources: ["podsecuritypolicies"]
  resourceNames: ["nfs-provisioner"]
  verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
- kind: ServiceAccount
  name: nfs-provisioner
  namespace: default        # required field
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
[root@master yaml]# kubectl apply -f rbac-rolebind.yaml
[root@master yaml]# vim nfs-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccount: nfs-provisioner
      containers:
      - name: nfs-client-provisioner
        image: registry.cn-hangzhou.aliyuncs.com/open-ali/nfs-client-provisioner
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: bdqn
        - name: NFS_SERVER
          value: 192.168.1.21
        - name: NFS_PATH
          value: /nfsdata
      volumes:
      - name: nfs-client-root
        nfs:
          server: 192.168.1.21
          path: /nfsdata
[root@master yaml]# kubectl apply -f nfs-deployment.yaml
[root@master yaml]# kubectl get pod
[root@master yaml]# vim test-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: stateful-nfs
provisioner: bdqn            # links to the Deployment above via its PROVISIONER_NAME
reclaimPolicy: Retain
[root@master yaml]# kubectl apply -f test-storageclass.yaml
[root@master yaml]# kubectl get sc
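With the StorageClass in place, any PVC that references it is provisioned automatically. As an illustration, a standalone claim (the name `test-claim` below is hypothetical) could request storage from `stateful-nfs` like this; the StatefulSet that follows achieves the same effect through volumeClaimTemplates:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim                  # hypothetical name, for illustration only
  annotations:
    volume.beta.kubernetes.io/storage-class: stateful-nfs
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
```

Applying such a claim should cause the nfs-client provisioner to create a PV and a backing directory under /nfsdata, with no manual PV definition needed.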
[root@master yaml]# vim statefulset.yaml
apiVersion: v1
kind: Service
metadata:
  name: headless-svc
  labels:
    app: headless-svc
spec:
  ports:
  - port: 80
    name: myweb
  selector:
    app: headless-pod
  clusterIP: None
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: statefulset-test
spec:
  serviceName: headless-svc
  replicas: 3
  selector:
    matchLabels:
      app: headless-pod
  template:
    metadata:
      labels:
        app: headless-pod
    spec:
      containers:
      - image: httpd
        name: myhttpd
        ports:
        - containerPort: 80
          name: httpd
        volumeMounts:
        - mountPath: /mnt
          name: test
  volumeClaimTemplates:            # automatically creates a PVC with dedicated storage for each Pod
  - metadata:
      name: test
      annotations:                 # selects the StorageClass
        volume.beta.kubernetes.io/storage-class: stateful-nfs
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 100Mi
In this example:
The Service object named headless-svc (given by metadata: name) targets the application's Pods via selector: app: headless-pod and exposes port 80 under the name myweb. Because it is headless (clusterIP: None), it does not load-balance; instead it controls the network domain through which traffic reaches the Pods the StatefulSet deploys.
The StatefulSet named statefulset-test declares replicas: 3, so three Pods are created.
Its Pod template (spec: template) labels each Pod app: headless-pod, matching the Service selector.
The template spec runs a single container, myhttpd, from the httpd image, exposing containerPort 80 under the name httpd.
template: spec: volumeMounts mounts a volume named test at mountPath /mnt, the path inside the container where the storage volume is attached; the volumeClaimTemplates entry named test supplies that volume.
[root@master yaml]# kubectl apply -f statefulset.yaml
[root@master yaml]# kubectl get pod
If the first Pod fails, the subsequent Pods will not be created.
[root@master yaml]# kubectl get statefulsets
[root@master yaml]# kubectl exec -it statefulset-test-0 /bin/sh
# cd /mnt
# touch testfile
# exit
[root@master yaml]# ls /nfsdata/default-test-statefulset-test-0-pvc-bf1ae1d0-f496-4d69-b33b-39e8aa0a6e8d/
testfile
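The long directory name above is not random: the nfs-client provisioner names each backing directory `<namespace>-<pvcName>-<pvName>`, where the PVC name generated from a volumeClaimTemplate is itself `<template name>-<pod name>`. A small sketch reconstructing the path seen above (the PV name is the one observed in this cluster):

```shell
# Reconstruct the NFS backing directory for the first Pod's claim.
ns="default"                                       # namespace of the PVC
pvc="test-statefulset-test-0"                      # <claim template name>-<pod name>
pv="pvc-bf1ae1d0-f496-4d69-b33b-39e8aa0a6e8d"      # PV name assigned by the cluster
echo "/nfsdata/${ns}-${pvc}-${pv}"
```

Knowing this layout makes it easy to locate any Pod's data on the NFS server directly.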
Exercise: create a namespace under your own name and run all of the following resources in it. Use a StatefulSet to run an httpd web service with 3 Pods, where each Pod serves a different home page and each has its own dedicated persistent storage. Then delete one of the Pods, inspect the newly generated replacement, and summarize how it differs from Pods managed by a Deployment controller.
Note: the NFS service must be running.
[root@master yaml]# vim namespace.yaml
kind: Namespace
apiVersion: v1
metadata:
  name: xgp-lll          # name of the namespace
[root@master yaml]# kubectl apply -f namespace.yaml
[root@master yaml]# kubectl get namespaces
[root@master yaml]# vim rbac-rolebind.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
  namespace: xgp-lll
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfs-provisioner-runner   # ClusterRoles are cluster-scoped, so no namespace here
rules:
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["watch", "create", "update", "patch"]
- apiGroups: [""]
  resources: ["services", "endpoints"]
  verbs: ["get", "create", "list", "watch", "update"]
- apiGroups: ["extensions"]
  resources: ["podsecuritypolicies"]
  resourceNames: ["nfs-provisioner"]
  verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
- kind: ServiceAccount
  name: nfs-provisioner
  namespace: xgp-lll
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
[root@master yaml]# kubectl apply -f rbac-rolebind.yaml
[root@master yaml]# vim nfs-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  namespace: xgp-lll
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccount: nfs-provisioner
      containers:
      - name: nfs-client-provisioner
        image: registry.cn-hangzhou.aliyuncs.com/open-ali/nfs-client-provisioner
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: xgp
        - name: NFS_SERVER
          value: 192.168.1.21
        - name: NFS_PATH
          value: /nfsdata
      volumes:
      - name: nfs-client-root
        nfs:
          server: 192.168.1.21
          path: /nfsdata
[root@master yaml]# kubectl apply -f nfs-deployment.yaml
[root@master yaml]# kubectl get pod -n xgp-lll
[root@master yaml]# vim test-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: stateful-nfs         # StorageClasses are cluster-scoped, so no namespace is needed
provisioner: xgp             # links to the Deployment above via its PROVISIONER_NAME
reclaimPolicy: Retain
[root@master yaml]# kubectl apply -f test-storageclass.yaml
[root@master yaml]# kubectl get sc -n xgp-lll
apiVersion: v1
kind: Service
metadata:
  name: headless-svc
  namespace: xgp-lll
  labels:
    app: headless-svc
spec:
  ports:
  - port: 80
    name: myweb
  selector:
    app: headless-pod
  clusterIP: None
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: statefulset-test
  namespace: xgp-lll
spec:
  serviceName: headless-svc
  replicas: 3
  selector:
    matchLabels:
      app: headless-pod
  template:
    metadata:
      labels:
        app: headless-pod
    spec:
      containers:
      - image: httpd
        name: myhttpd
        ports:
        - containerPort: 80
          name: httpd
        volumeMounts:
        - mountPath: /usr/local/apache2/htdocs
          name: test
  volumeClaimTemplates:            # automatically creates a PVC with dedicated storage for each Pod
  - metadata:
      name: test
      annotations:                 # selects the StorageClass
        volume.beta.kubernetes.io/storage-class: stateful-nfs
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 100Mi
[root@master yaml]# kubectl apply -f statefulset.yaml
[root@master yaml]# kubectl get pod -n xgp-lll
First Pod:
[root@master yaml]# kubectl exec -it -n xgp-lll statefulset-test-0 /bin/bash
root@statefulset-test-0:/usr/local/apache2# echo 123 > /usr/local/apache2/htdocs/index.html
Second Pod:
[root@master yaml]# kubectl exec -it -n xgp-lll statefulset-test-1 /bin/bash
root@statefulset-test-1:/usr/local/apache2# echo 456 > /usr/local/apache2/htdocs/index.html
Third Pod:
[root@master yaml]# kubectl exec -it -n xgp-lll statefulset-test-2 /bin/bash
root@statefulset-test-2:/usr/local/apache2# echo 789 > /usr/local/apache2/htdocs/index.html
First Pod:
[root@master yaml]# cat /nfsdata/xgp-lll-test-statefulset-test-0-pvc-ccaa02df-4721-4453-a6ec-4f2c928221d7/index.html
123
Second Pod:
[root@master yaml]# cat /nfsdata/xgp-lll-test-statefulset-test-1-pvc-88e60a58-97ea-4986-91d5-a3a6e907deac/index.html
456
Third Pod:
[root@master yaml]# cat /nfsdata/xgp-lll-test-statefulset-test-2-pvc-4eb2bbe2-63d2-431a-ba3e-b7b8d7e068d3/index.html
789