Today let's talk about what the Kubernetes resource metrics API and custom metrics API are. Many people may not know much about them, so I have put together the following notes in the hope that you will get something out of this article.
Previously, Heapster was used to collect resource metrics before you could view them; Heapster is now being deprecated.
Since 1.8, a resource metrics API has been introduced for monitoring.
Resource metrics: metrics-server (core metrics)
Custom metrics: Prometheus plus k8s-prometheus-adapter (converts the data Prometheus collects into the metrics API format)
Prometheus running in Kubernetes needs k8s-prometheus-adapter to translate its data before Kubernetes can consume it.
New-generation architecture:
Core metrics pipeline:
Made up of the kubelet, metrics-server, and the APIs they expose through the API server; provides cumulative CPU usage, real-time memory usage, pod resource usage, and container disk usage.
Monitoring pipeline:
Collects all kinds of metrics from the system and exposes them to end users, storage systems, and the HPA. It covers the core metrics plus many non-core metrics; the non-core metrics cannot be interpreted by Kubernetes itself.
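Both pipelines plug into the API server through the aggregation layer: an APIService object tells the aggregator to proxy a whole API group (for example metrics.k8s.io) to a Service running inside the cluster. As a rough sketch, the registration that metrics-server ships with looks roughly like the following (field values may differ slightly between versions, so treat this as illustrative rather than the exact addon manifest):

apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io            # requests under /apis/metrics.k8s.io/v1beta1/ ...
  version: v1beta1
  service:
    name: metrics-server           # ... are proxied to this in-cluster Service
    namespace: kube-system
  insecureSkipTLSVerify: true
  groupPriorityMinimum: 100
  versionPriority: 100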
復(fù)制代碼
Chapter 2: Installing and deploying metrics-server
1. Download the YAML files and install
Project address: https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/metrics-server . Pick the branch or tag that matches your cluster version; mine is v1.10.0, so I use v1.10.0 here.
[root@k8s-master_01 manifests]# mkdir metrics-server
[root@k8s-master_01 manifests]# cd metrics-server
[root@k8s-master_01 metrics-server]# for file in auth-delegator.yaml auth-reader.yaml metrics-apiservice.yaml metrics-server-deployment.yaml metrics-server-service.yaml resource-reader.yaml;do wget https://raw.githubusercontent.com/kubernetes/kubernetes/v1.10.0/cluster/addons/metrics-server/$file;done    # remember to download the raw files
[root@k8s-master_01 metrics-server]# grep image: ./*    # check which images are used; if you can reach the public registries, ignore this, otherwise pull the images in advance (you can search for mirrors on Aliyun) and either edit the manifests or retag the images
./metrics-server-deployment.yaml: image: k8s.gcr.io/metrics-server-amd64:v0.2.1
./metrics-server-deployment.yaml: image: k8s.gcr.io/addon-resizer:1.8.1
[root@k8s-node_01 ~]# docker pull registry.cn-hangzhou.aliyuncs.com/criss/addon-resizer:1.8.1    # pull the images manually on every node; note this tag has no leading "v"
[root@k8s-node_01 ~]# docker pull registry.cn-hangzhou.aliyuncs.com/k8s-kernelsky/metrics-server-amd64:v0.2.1
[root@k8s-master_01 metrics-server]# grep image: metrics-server-deployment.yaml
        image: registry.cn-hangzhou.aliyuncs.com/k8s-kernelsky/metrics-server-amd64:v0.2.1
        image: registry.cn-hangzhou.aliyuncs.com/criss/addon-resizer:1.8.1
[root@k8s-master_01 metrics-server]# kubectl apply -f .
[root@k8s-master_01 metrics-server]# kubectl get pod -n kube-system
2. Verify
[root@k8s-master01 ~]# kubectl api-versions |grep metrics
metrics.k8s.io/v1beta1
[root@k8s-node01 ~]# kubectl proxy --port=8080    # open a second terminal and start the API proxy
[root@k8s-master_01 metrics-server]# curl http://localhost:8080/apis/metrics.k8s.io/v1beta1    # list which resources this API group contains
[root@k8s-master_01 metrics-server]# curl http://localhost:8080/apis/metrics.k8s.io/v1beta1/pods    # it may take a moment before data shows up
[root@k8s-master_01 metrics-server]# curl http://localhost:8080/apis/metrics.k8s.io/v1beta1/nodes
[root@k8s-node01 ~]# kubectl top node
NAME           CPU(cores)   CPU%      MEMORY(bytes)   MEMORY%
k8s-master01   176m         4%        3064Mi          39%
k8s-node01     62m          1%        4178Mi          54%
k8s-node02     65m          1%        2141Mi          27%
[root@k8s-node01 ~]# kubectl top pods
NAME                CPU(cores)   MEMORY(bytes)
node-affinity-pod   0m           1Mi
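If kubectl top returns errors or no data after a few minutes, it is worth checking that the aggregated API registered correctly and looking at the metrics-server logs. The commands below are a rough troubleshooting sketch; the label and container name come from the addon manifests, so adjust them if yours differ:

[root@k8s-master_01 metrics-server]# kubectl get apiservice v1beta1.metrics.k8s.io -o yaml    # look for an Available condition with status "True" under .status
[root@k8s-master_01 metrics-server]# kubectl -n kube-system logs -l k8s-app=metrics-server -c metrics-server    # errors about the kubelet summary API usually show up here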
3. Notes
1. In newer versions, such as v1.11 and above, a problem appears: metrics-server by default pulls data from the kubelet summary API, which it used to reach over port 10255. 10255 is a plain HTTP port, and presumably because HTTP was considered insecure it was disabled in favor of port 10250, which speaks HTTPS. So the data source has to be changed,
from
   - --source=kubernetes.summary_api:''
to
   - --source=kubernetes.summary_api:https://kubernetes.default?kubeletHttps=true&kubeletPort=10250&insecure=true
   # meaning: talk HTTPS on port 10250, but if the certificate cannot be verified, still allow the insecure, unauthenticated connection
[root@k8s-node01 deploy]# grep source=kubernetes metrics-server-deployment.yaml
2. [root@k8s-node01 deploy]# grep nodes/stats resource-reader.yaml    # in newer versions the ClusterRole does not include the nodes/stats permission, so it has to be added by hand
[root@k8s-node01 deploy]# cat resource-reader.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats      # add this line
  - namespaces
3. On v1.12.3 the following changes were also needed before the deployment succeeded (the RBAC change above is still required; other versions were not tested):
[root@k8s-master-01 metrics-server]# vim metrics-server-deployment.yaml
        command:        # change the metrics-server arguments to the following
        - /metrics-server
        - --metric-resolution=30s
        - --kubelet-port=10250
        - --kubelet-insecure-tls
        - --kubelet-preferred-address-types=InternalIP
        command:        # change the metrics-server-nanny arguments to the following
        - /pod_nanny
        - --config-dir=/etc/config
        - --cpu=40m
        - --extra-cpu=0.5m
        - --memory=40Mi
        - --extra-memory=4Mi
        - --threshold=5
        - --deployment=metrics-server-v0.3.1
        - --container=metrics-server
        - --poll-period=300000
        - --estimator=exponential
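You can verify the port discussion above directly against a kubelet: the summary data metrics-server consumes lives under /stats/summary. A rough way to poke at it is shown below; the node address is a placeholder, the token extraction assumes the metrics-server ServiceAccount from the addon, and the secure call only works once the nodes/stats permission has been granted:

# read-only HTTP port, if it is still enabled on your kubelets
curl http://<node-ip>:10255/stats/summary
# secure port; needs a bearer token from a service account that is allowed nodes/stats
TOKEN=$(kubectl -n kube-system get secret $(kubectl -n kube-system get sa metrics-server -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' | base64 -d)
curl -k -H "Authorization: Bearer $TOKEN" https://<node-ip>:10250/stats/summary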
Chapter 3: Installing and deploying Prometheus
Project address: https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/prometheus (the Prometheus addon only exists in v1.11.0 and later, so I deploy from the v1.11.0 tree)
1. Download the YAML files and prepare for deployment
[root@k8s-node01 ~]# cd /mnt/
[root@k8s-node01 mnt]# git clone https://github.com/kubernetes/kubernetes.git    # to keep things simple I just clone the whole kubernetes repository
[root@k8s-node01 mnt]# cd kubernetes/cluster/addons/prometheus/
[root@k8s-node01 prometheus]# git checkout v1.11.0
[root@k8s-node01 prometheus]# cd ..
[root@k8s-node01 addons]# cp -r prometheus /root/manifests/
[root@k8s-node01 manifests]# cd prometheus/
[root@k8s-node01 prometheus]# grep -w "namespace: kube-system" ./*    # by default the addon uses the kube-system namespace; we deploy it into its own namespace instead, which makes it easier to manage later
./alertmanager-configmap.yaml:  namespace: kube-system
......
[root@k8s-node01 prometheus]# sed -i 's/namespace: kube-system/namespace\: k8s-monitor/g' ./*
[root@k8s-node01 prometheus]# grep storage: ./*    # the installation needs two PVs, which we create below
./alertmanager-pvc.yaml:     storage: "2Gi"
./prometheus-statefulset.yaml:          storage: "16Gi"
[root@k8s-node01 prometheus]# cat pv.yaml    # note the storageClassName of the second PV
apiVersion: v1
kind: PersistentVolume
metadata:
  name: alertmanager
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: /data/volumes/v1
    server: 172.16.150.158
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: standard
spec:
  capacity:
    storage: 25Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: standard    # must match the storageClassName defined under volumeClaimTemplates in prometheus-statefulset.yaml
  nfs:
    path: /data/volumes/v2
    server: 172.16.150.158
[root@k8s-node01 prometheus]# kubectl create namespace k8s-monitor
[root@k8s-node01 prometheus]# mkdir node-exporter kube-state-metrics alertmanager prometheus    # put each component in its own directory to make deployment and management easier
[root@k8s-node01 prometheus]# mv node-exporter-* node-exporter
[root@k8s-node01 prometheus]# mv alertmanager-* alertmanager
[root@k8s-node01 prometheus]# mv kube-state-metrics-* kube-state-metrics
[root@k8s-node01 prometheus]# mv prometheus-* prometheus
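The pv.yaml above assumes an NFS server at 172.16.150.158 exporting /data/volumes/v1 and /data/volumes/v2. If that export does not exist yet, the setup would look something like the sketch below; the network range, export options, and service name are assumptions to adapt to your environment, and the nodes also need the NFS client packages installed:

[root@nfs-server ~]# mkdir -p /data/volumes/v1 /data/volumes/v2
[root@nfs-server ~]# cat >> /etc/exports <<EOF
/data/volumes/v1 172.16.150.0/24(rw,no_root_squash)
/data/volumes/v2 172.16.150.0/24(rw,no_root_squash)
EOF
[root@nfs-server ~]# exportfs -arv
[root@nfs-server ~]# systemctl enable --now nfs-server    # package/service name varies by distribution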
2. Install node-exporter (collects node-level metrics)
[root@k8s-node01 prometheus]# grep -r image: node-exporter/*
node-exporter/node-exporter-ds.yml: image: "prom/node-exporter:v0.15.2"    # not a k8s.gcr.io image, so it can be pulled even without access to the outside network; no need to download it in advance
[root@k8s-node01 prometheus]# kubectl apply -f node-exporter/
daemonset.extensions "node-exporter" created
service "node-exporter" created
[root@k8s-node01 prometheus]# kubectl get pod -n k8s-monitor
NAME                  READY     STATUS    RESTARTS   AGE
node-exporter-l5zdw   1/1       Running   0          1m
node-exporter-vwknx   1/1       Running   0          1m
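To confirm node-exporter is actually serving data before Prometheus scrapes it, you can hit its metrics endpoint on port 9100 directly. A quick check (the address is a placeholder, use a pod IP from the first command, or a node IP if the DaemonSet uses hostNetwork):

[root@k8s-node01 prometheus]# kubectl get pod -n k8s-monitor -o wide    # note the pod IPs
[root@k8s-node01 prometheus]# curl -s http://<pod-or-node-ip>:9100/metrics | head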
3. Install Prometheus
[root@k8s-master_01 prometheus]# kubectl apply -f pv.yaml
persistentvolume "alertmanager" configured
persistentvolume "standard" created
[root@k8s-master_01 prometheus]# kubectl get pv
NAME           CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM     STORAGECLASS   REASON    AGE
alertmanager   5Gi        RWO,RWX        Recycle          Available                                      9s
standard       25Gi       RWO            Recycle          Available                                      9s
[root@k8s-node01 prometheus]# grep -i image prometheus/*    # check whether any images need to be pulled in advance
[root@k8s-node01 prometheus]# vim prometheus-service.yaml    # by default the prometheus Service type is ClusterIP; change it to NodePort so it can be reached from outside the cluster
...
  type: NodePort
  ports:
    - name: http
      port: 9090
      protocol: TCP
      targetPort: 9090
      nodePort: 30090
...
[root@k8s-node01 prometheus]# kubectl apply -f prometheus/
[root@k8s-node01 prometheus]# kubectl get pod -n k8s-monitor
NAME                  READY     STATUS    RESTARTS   AGE
node-exporter-l5zdw   1/1       Running   0          24m
node-exporter-vwknx   1/1       Running   0          24m
prometheus-0          2/2       Running   0          1m
[root@k8s-node01 prometheus]# kubectl get svc -n k8s-monitor
NAME            TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
node-exporter   ClusterIP   None          <none>        9100/TCP         25m
prometheus      NodePort    10.96.9.121   <none>        9090:30090/TCP   22m
[root@k8s-master_01 prometheus]# kubectl get pv
NAME           CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                                       STORAGECLASS   REASON    AGE
alertmanager   5Gi        RWO,RWX        Recycle          Available                                                                        1h
standard       25Gi       RWO            Recycle          Bound       k8s-monitor/prometheus-data-prometheus-0   standard                 1h
Access the Prometheus web UI at any node's IP on the NodePort (30090 here).
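Besides the web UI, Prometheus' HTTP API can be queried directly, which is handy for confirming that the scrape targets were discovered before wiring anything else on top. A rough check (the node IP is a placeholder):

[root@k8s-node01 prometheus]# curl -s http://<node-ip>:30090/api/v1/targets | grep -o '"health":"[^"]*"' | sort | uniq -c    # count healthy vs unhealthy targets
[root@k8s-node01 prometheus]# curl -s 'http://<node-ip>:30090/api/v1/query?query=up' | head -c 300    # "up" should be 1 for every healthy target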
4. Deploy kube-state-metrics (turns the state of Kubernetes objects into metrics that Prometheus can scrape)
[root@k8s-node01 kube-state-metrics]# grep image: ./*
./kube-state-metrics-deployment.yaml: image: quay.io/coreos/kube-state-metrics:v1.3.0
./kube-state-metrics-deployment.yaml: image: k8s.gcr.io/addon-resizer:1.7
[root@k8s-node02 ~]# docker pull registry.cn-hangzhou.aliyuncs.com/ccgg/addon-resizer:1.7
[root@k8s-node01 kube-state-metrics]# vim kube-state-metrics-deployment.yaml    # point the addon-resizer image at the mirror pulled above
[root@k8s-node01 kube-state-metrics]# kubectl apply -f kube-state-metrics-deployment.yaml
deployment.extensions "kube-state-metrics" configured
[root@k8s-node01 kube-state-metrics]# kubectl get pod -n k8s-monitor
NAME                                  READY     STATUS    RESTARTS   AGE
kube-state-metrics-54849b96b4-dmqtk   2/2       Running   0          23s
node-exporter-l5zdw                   1/1       Running   0          2h
node-exporter-vwknx                   1/1       Running   0          2h
prometheus-0                          2/2       Running   0          1h
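kube-state-metrics exposes object-state metrics (deployments, pods, nodes, and so on) on its own /metrics endpoint, which the Prometheus configuration in the addon scrapes. A quick sanity check through a port-forward is sketched below; container port 8080 is the usual default for this version, so adjust it if your manifest differs:

[root@k8s-node01 kube-state-metrics]# kubectl -n k8s-monitor port-forward deploy/kube-state-metrics 8080 &
[root@k8s-node01 kube-state-metrics]# curl -s http://localhost:8080/metrics | grep '^kube_deployment_status_replicas' | head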
5. Deploy k8s-prometheus-adapter (exposes the Prometheus data as an aggregated API service)
Project address: https://github.com/DirectXMan12/k8s-prometheus-adapter
[root@k8s-master01 ~]# cd /etc/kubernetes/pki/
[root@k8s-master01 pki]# (umask 077; openssl genrsa -out serving.key 2048)
[root@k8s-master01 pki]# openssl req -new -key serving.key -out serving.csr -subj "/CN=serving"    # the CN must be "serving"
[root@k8s-master01 pki]# openssl x509 -req -in serving.csr -CA ./ca.crt -CAkey ./ca.key -CAcreateserial -out serving.crt -days 3650
[root@k8s-master01 pki]# kubectl create secret generic cm-adapter-serving-certs --from-file=serving.crt=./serving.crt --from-file=serving.key=./serving.key -n k8s-monitor    # the secret must be named cm-adapter-serving-certs
[root@k8s-master01 pki]# kubectl get secret -n k8s-monitor
[root@k8s-master01 pki]# cd
[root@k8s-node01 ~]# git clone https://github.com/DirectXMan12/k8s-prometheus-adapter.git
[root@k8s-node01 ~]# cd k8s-prometheus-adapter/deploy/manifests/
[root@k8s-node01 manifests]# grep namespace: ./*    # change the namespace to k8s-monitor everywhere except in the role bindings
[root@k8s-node01 manifests]# grep image: ./*    # the image does not need to be mirrored
[root@k8s-node01 manifests]# sed -i 's/namespace\: custom-metrics/namespace\: k8s-monitor/g' ./*    # do not replace it in the role-binding manifests
[root@k8s-node01 manifests]# kubectl apply -f ./
[root@k8s-node01 manifests]# kubectl get pod -n k8s-monitor
[root@k8s-node01 manifests]# kubectl get svc -n k8s-monitor
[root@k8s-node01 manifests]# kubectl api-versions |grep custom
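Once `kubectl api-versions` shows custom.metrics.k8s.io, the aggregated API can be queried just like the resource metrics API earlier. Which metric names appear depends entirely on the adapter's rules and on what Prometheus is scraping, so the second command below is only an example:

[root@k8s-node01 manifests]# kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1" | python -m json.tool | head -n 20    # lists every metric the adapter currently exposes
[root@k8s-node01 manifests]# kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/http_requests"    # example only; http_requests exists once an application exposing it (see Chapter 5) is running and scraped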
Chapter 4: Deploying Grafana for Prometheus
[root@k8s-master01 ~]# wget https://raw.githubusercontent.com/kubernetes-retired/heapster/master/deploy/kube-config/influxdb/grafana.yaml    # the prometheus addon does not ship a Grafana manifest, so I borrowed the one from the heapster project
[root@k8s-master01 ~]# egrep -i "influxdb|namespace|nodeport" grafana.yaml    # comment out the InfluxDB environment variables, change the namespace to k8s-monitor, and set the Service type to NodePort
[root@k8s-master01 ~]# kubectl apply -f grafana.yaml
[root@k8s-master01 ~]# kubectl get svc -n k8s-monitor
[root@k8s-master01 ~]# kubectl get pod -n k8s-monitor
Log in to Grafana and change the data source.
Configure the data source:
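When adding the data source, choose the Prometheus type. Since Grafana runs inside the same cluster, the URL can point at the Prometheus Service created earlier rather than at a NodePort; with the names used in this article it would look roughly like this:

http://prometheus.k8s-monitor.svc.cluster.local:9090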
Click Dashboards on the right-hand side to import Grafana's built-in Prometheus templates.
Go back to Home and pick the corresponding template from the drop-down list to view the data.
For example:
However, Grafana's built-in templates do not quite match the data, so we can download Kubernetes dashboards from the Grafana site instead: https://grafana.com/dashboards
Search the Grafana site for Kubernetes-related dashboards; if the search box does not respond to clicks, you can simply append the search term to the URL.
We use the "Kubernetes cluster (Prometheus)" dashboard as a test.
Click the dashboard you want and download its JSON file.
After the download finishes, import the file.
Choose to upload the file.
After importing, select the data source.
The dashboard shown after the import.
Chapter 5: Implementing HPA
1. Testing with the autoscaling/v1 API
[root@k8s-master01 alertmanager]# kubectl api-versions |grep autoscaling
autoscaling/v1
autoscaling/v2beta1
[root@k8s-master01 manifests]# cat deploy-demon.yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: default
spec:
  selector:
    app: myapp
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 32222
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v2
        ports:
        - name: httpd
          containerPort: 80
        resources:
          requests:
            memory: "64Mi"
            cpu: "100m"
          limits:
            memory: "128Mi"
            cpu: "200m"
[root@k8s-master01 manifests]# kubectl apply -f deploy-demon.yaml
[root@k8s-master01 manifests]# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP             47d
my-nginx     NodePort    10.104.13.148   <none>        80:32008/TCP        19d
myapp        NodePort    10.100.76.180   <none>        80:32222/TCP        16s
tomcat       ClusterIP   10.106.222.72   <none>        8080/TCP,8009/TCP   19d
[root@k8s-master01 manifests]# kubectl get pod
NAME                            READY     STATUS    RESTARTS   AGE
myapp-deploy-5db497dbfb-h7zcb   1/1       Running   0          16s
myapp-deploy-5db497dbfb-tvsf5   1/1       Running   0          16s
Test:
[root@k8s-master01 manifests]# kubectl autoscale deployment myapp-deploy --min=1 --max=8 --cpu-percent=60
deployment.apps "myapp-deploy" autoscaled
[root@k8s-master01 manifests]# kubectl get hpa
NAME           REFERENCE                 TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
myapp-deploy   Deployment/myapp-deploy   <unknown>/60%   1         8         0          22s
[root@k8s-master01 pod-dir]# yum install httpd-tools -y
[root@k8s-master01 pod-dir]# ab -c 1000 -n 5000000 http://172.16.150.213:32222/index.html
[root@k8s-master01 ~]# kubectl describe hpa
Name:               myapp-deploy
Namespace:          default
Labels:             <none>
Annotations:        <none>
CreationTimestamp:  Sun, 16 Dec 2018 20:34:41 +0800
Reference:          Deployment/myapp-deploy
Metrics:            ( current / target )
  resource cpu on pods (as a percentage of request):  178% (178m) / 60%
Min replicas:       1
Max replicas:       8
Conditions:
  Type            Status  Reason            Message
  ----            ------  ------            -------
  AbleToScale     False   BackoffBoth       the time since the previous scale is still within both the downscale and upscale forbidden windows
  ScalingActive   True    ValidMetricFound  the HPA was able to successfully calculate a replica count from cpu resource utilization (percentage of request)
  ScalingLimited  True    ScaleUpLimit      the desired replica count is increasing faster than the maximum scale rate
Events:
  Type    Reason             Age   From                       Message
  ----    ------             ----  ----                       -------
  Normal  SuccessfulRescale  19m   horizontal-pod-autoscaler  New size: 1; reason: All metrics below target
  Normal  SuccessfulRescale  2m    horizontal-pod-autoscaler  New size: 2; reason: cpu resource utilization (percentage of request) above target
[root@k8s-master01 ~]# kubectl get pod
NAME                            READY     STATUS    RESTARTS   AGE
myapp-deploy-5db497dbfb-6kssf   1/1       Running   0          2m
myapp-deploy-5db497dbfb-h7zcb   1/1       Running   0          24m
[root@k8s-master01 ~]# kubectl get hpa
NAME           REFERENCE                 TARGETS    MINPODS   MAXPODS   REPLICAS   AGE
myapp-deploy   Deployment/myapp-deploy   178%/60%   1         8         2          20m
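For reference, the replica count the controller aims for follows the standard HPA formula (paraphrased here, not taken from the describe output); the ScaleUpLimit condition above simply means the controller also rate-limits how quickly it will grow toward that number, which is why the size climbs in steps:

desiredReplicas = ceil( currentReplicas * currentMetricValue / targetMetricValue )
# illustrative arithmetic: 2 replicas averaging 178% CPU against a 60% target -> ceil(2 * 178 / 60) = 6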
2. Using autoscaling/v2beta1
[root@k8s-master01 pod-dir]# cat hpa-demo.yaml
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa-v2
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp-deploy
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      targetAverageUtilization: 55
  - type: Resource
    resource:
      name: memory
      targetAverageValue: 100Mi
[root@k8s-master01 pod-dir]# kubectl delete hpa myapp-deploy
horizontalpodautoscaler.autoscaling "myapp-deploy" deleted
[root@k8s-master01 pod-dir]# kubectl apply -f hpa-demo.yaml
horizontalpodautoscaler.autoscaling "myapp-hpa-v2" created
[root@k8s-master01 pod-dir]# kubectl get hpa
NAME           REFERENCE                 TARGETS                          MINPODS   MAXPODS   REPLICAS   AGE
myapp-hpa-v2   Deployment/myapp-deploy   <unknown>/100Mi, <unknown>/55%   1         10        0          6s
Test:
[root@k8s-master01 ~]# kubectl describe hpa
Name:               myapp-hpa-v2
Namespace:          default
Labels:             <none>
Annotations:        kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"autoscaling/v2beta1","kind":"HorizontalPodAutoscaler","metadata":{"annotations":{},"name":"myapp-hpa-v2","namespace":"default"},"spec":{...
CreationTimestamp:  Sun, 16 Dec 2018 21:07:25 +0800
Reference:          Deployment/myapp-deploy
Metrics:            ( current / target )
  resource memory on pods:                             1765376 / 100Mi
  resource cpu on pods (as a percentage of request):   200% (200m) / 55%
Min replicas:       1
Max replicas:       10
Conditions:
  Type            Status  Reason              Message
  ----            ------  ------              -------
  AbleToScale     True    SucceededRescale    the HPA controller was able to update the target scale to 4
  ScalingActive   True    ValidMetricFound    the HPA was able to successfully calculate a replica count from cpu resource utilization (percentage of request)
  ScalingLimited  False   DesiredWithinRange  the desired count is within the acceptable range
Events:
  Type    Reason             Age   From                       Message
  ----    ------             ----  ----                       -------
  Normal  SuccessfulRescale  18s   horizontal-pod-autoscaler  New size: 4; reason: cpu resource utilization (percentage of request) above target
[root@k8s-master01 ~]# kubectl get pod
NAME                            READY     STATUS    RESTARTS   AGE
myapp-deploy-5db497dbfb-5n885   1/1       Running   0          26s
myapp-deploy-5db497dbfb-h7zcb   1/1       Running   0          40m
myapp-deploy-5db497dbfb-z2tqd   1/1       Running   0          26s
myapp-deploy-5db497dbfb-zkjhw   1/1       Running   0          26s
[root@k8s-master01 ~]# kubectl describe hpa
Name:               myapp-hpa-v2
Namespace:          default
Labels:             <none>
Annotations:        kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"autoscaling/v2beta1","kind":"HorizontalPodAutoscaler","metadata":{"annotations":{},"name":"myapp-hpa-v2","namespace":"default"},"spec":{...
CreationTimestamp:  Sun, 16 Dec 2018 21:07:25 +0800
Reference:          Deployment/myapp-deploy
Metrics:            ( current / target )
  resource memory on pods:                             1765376 / 100Mi
  resource cpu on pods (as a percentage of request):   0% (0) / 55%
Min replicas:       1
Max replicas:       10
Conditions:
  Type            Status  Reason              Message
  ----            ------  ------              -------
  AbleToScale     False   BackoffBoth         the time since the previous scale is still within both the downscale and upscale forbidden windows
  ScalingActive   True    ValidMetricFound    the HPA was able to successfully calculate a replica count from memory resource
  ScalingLimited  False   DesiredWithinRange  the desired count is within the acceptable range
Events:
  Type    Reason             Age   From                       Message
  ----    ------             ----  ----                       -------
  Normal  SuccessfulRescale  6m    horizontal-pod-autoscaler  New size: 4; reason: cpu resource utilization (percentage of request) above target
  Normal  SuccessfulRescale  34s   horizontal-pod-autoscaler  New size: 1; reason: All metrics below target
[root@k8s-master01 ~]# kubectl get pod
NAME                            READY     STATUS    RESTARTS   AGE
myapp-deploy-5db497dbfb-h7zcb   1/1       Running   0          46m
3. Testing custom metrics with autoscaling/v2beta1
[root@k8s-master01 pod-dir]# cat ../deploy-demon-metrics.yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: default
spec:
  selector:
    app: myapp
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 32222
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: ikubernetes/metrics-app    # demo image that exposes a custom metric
        ports:
        - name: httpd
          containerPort: 80
[root@k8s-master01 pod-dir]# kubectl apply -f deploy-demon-metrics.yaml
[root@k8s-master01 pod-dir]# cat hpa-custom.yaml
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa-v2
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp-deploy
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Pods                       # note the metric type
    pods:
      metricName: http_requests      # custom metric exposed by the container
      targetAverageValue: 800m       # the "m" suffix means milli-units, so 800m is 0.8 of the http_requests value per pod on average
[root@k8s-master01 pod-dir]# kubectl apply -f hpa-custom.yaml
[root@k8s-master01 pod-dir]# kubectl describe hpa myapp-hpa-v2
Name:               myapp-hpa-v2
Namespace:          default
Labels:             <none>
Annotations:        kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"autoscaling/v2beta1","ks":{},"name":"myapp-hpa-v2","namespace":"default"},"spec":{...
CreationTimestamp:  Sun, 16 Dec 2018 22:09:32 +0800
Reference:          Deployment/myapp-deploy
Metrics:            ( current / target )
  "http_requests" on pods:  <unknown> / 800m
Min replicas:       1
Max replicas:       10
Events:             <none>
[root@k8s-master01 pod-dir]# kubectl get hpa
NAME           REFERENCE                 TARGETS          MINPODS   MAXPODS   REPLICAS   AGE
myapp-hpa-v2   Deployment/myapp-deploy   <unknown>/800m   1         10        2          5m
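Because the TARGETS column still shows <unknown>, it helps to confirm the adapter actually exposes http_requests for these pods before expecting the HPA to react. The commands below are a sketch: the node IP is a placeholder, and the metric only appears once Prometheus has scraped the metrics-app pods:

[root@k8s-master01 pod-dir]# kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/http_requests" | python -m json.tool
[root@k8s-master01 pod-dir]# ab -c 100 -n 100000 http://<node-ip>:32222/    # generate some traffic against the demo Service
[root@k8s-master01 pod-dir]# kubectl get hpa myapp-hpa-v2 -w    # watch the replica count follow the http_requests value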
After reading the above, do you have a better understanding of what the Kubernetes resource metrics API and custom metrics API are? If you want to learn more, follow the Yisu Cloud industry news channel. Thanks for your support.