An example analysis of the resource metrics API and custom metrics API in Docker


This article shares an example analysis of the resource metrics API and the custom metrics API in Docker. The editor found it quite practical, so it is shared here for your reference; let's walk through it together.

    Previously you had to use Heapster to collect resource metrics before you could view them, but Heapster is now being deprecated.

    Starting with Kubernetes v1.8, new functionality was introduced: resource metrics are exposed through the API.

    Resource metrics: metrics-server

    Custom metrics: Prometheus, k8s-prometheus-adapter

    The new-generation architecture therefore consists of:

    1) Core metrics pipeline: made up of the kubelet, metrics-server, and the API exposed by the API server; it provides cumulative CPU usage, real-time memory usage, pod resource usage, and container disk usage.

    2) Monitoring pipeline: collects all kinds of metrics from the system and serves them to end users, storage systems, and the HPA; it covers the core metrics plus many non-core metrics. Non-core metrics cannot be interpreted by Kubernetes itself.

    metrics-server is itself an API server, and it only collects metrics such as CPU usage and memory usage.

[root@master ~]# kubectl api-versions
admissionregistration.k8s.io/v1beta1
apiextensions.k8s.io/v1beta1
apiregistration.k8s.io/v1
apiregistration.k8s.io/v1beta1
apps/v1
apps/v1beta1
apps/v1beta2
authentication.k8s.io/v1
authentication.k8s.io/v1beta1
authorization.k8s.io/v1

  Resource metrics (metrics-server)


    Visit https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/metrics-server

    Download the files to a local directory. Note: be sure to download them from the directory that matches your own Kubernetes cluster version (mine is v1.11.2); otherwise the metrics-server pod will not run after installation.
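
    A possible way to fetch the matching files (an assumption: the GitHub source archive for the v1.11.2 tag is used; adjust the version to your cluster):

# fetch the Kubernetes source archive for the matching release and unpack it
wget https://github.com/kubernetes/kubernetes/archive/v1.11.2.tar.gz
tar xf v1.11.2.tar.gz
# the metrics-server manifests are under kubernetes-1.11.2/cluster/addons/metrics-server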

[root@master metrics-server]# cd kubernetes-1.11.2/cluster/addons/metrics-server
[root@master metrics-server]# ls
auth-delegator.yaml  metrics-apiservice.yaml         metrics-server-service.yaml
auth-reader.yaml     metrics-server-deployment.yaml  resource-reader.yaml

Note: the following needs to be modified:

metrics-server-deployment.yaml
# - --source=kubernetes.summary_api:''
- --source=kubernetes.summary_api:https://kubernetes.default?kubeletHttps=true&kubeletPort=10250&insecure=true
 
resource-reader.yaml
 resources:
  - pods
  - nodes
  - namespaces
  - nodes/stats  # newly added
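
    For context, after this change the rule in resource-reader.yaml looks roughly like the sketch below (the verbs and the rest of the ClusterRole are assumed to stay as shipped):

rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - namespaces
  - nodes/stats   # newly added
  verbs:
  - get
  - list
  - watch

    Then apply all the manifests: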
[root@master metrics-server]# kubectl apply -f ./
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
serviceaccount/metrics-server created
configmap/metrics-server-config created
deployment.extensions/metrics-server-v0.3.1 created
service/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
[root@master metrics-server]# kubectl get pods -n kube-system -o wide
NAME                                    READY     STATUS    RESTARTS   AGE       IP             NODE
metrics-server-v0.2.1-fd596d746-c7x6q   2/2       Running   0          1m        10.244.2.49    node2
[root@master metrics-server]# kubectl api-versions
metrics.k8s.io/v1beta1

    metrics.k8s.io now shows up in the api-versions output.
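
    The metrics API is served through the aggregation layer; the APIService object created by the apply step above (v1beta1.metrics.k8s.io) can also be checked directly:

# check that the aggregated metrics API is registered and available
kubectl get apiservice v1beta1.metrics.k8s.io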

[root@master ~]# kubectl proxy --port=8080
Starting to serve on 127.0.0.1:8080
[root@master ~]# curl http://localhost:8080/apis/metrics.k8s.io/v1beta1
{
  "kind": "APIResourceList",
  "apiVersion": "v1",
  "groupVersion": "metrics.k8s.io/v1beta1",
  "resources": [
    {
      "name": "nodes",
      "singularName": "",
      "namespaced": false,
      "kind": "NodeMetrics",
      "verbs": [
        "get",
        "list"
      ]
    },
    {
      "name": "pods",
      "singularName": "",
      "namespaced": true,
      "kind": "PodMetrics",
      "verbs": [
        "get",
        "list"
      ]
    }
  ]
[root@master metrics-server]#  curl http://localhost:8080/apis/metrics.k8s.io/v1beta1/pods
{
  "kind": "PodMetricsList",
  "apiVersion": "metrics.k8s.io/v1beta1",
  "metadata": {
    "selfLink": "/apis/metrics.k8s.io/v1beta1/pods"
  },
  "items": [
    {
      "metadata": {
        "name": "pod1",
        "namespace": "dev",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/dev/pods/pod1",
        "creationTimestamp": "2018-10-15T09:26:57Z"
      },
      "timestamp": "2018-10-15T09:26:00Z",
      "window": "1m0s",
      "containers": [
        {
          "name": "myapp",
          "usage": {
            "cpu": "0",
            "memory": "2940Ki"
          }
        }
      ]
    },
    {
      "metadata": {
        "name": "rook-ceph-osd-0-b9b94dc6c-ffs8z",
        "namespace": "rook-ceph",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/rook-ceph/pods/rook-ceph-osd-0-b9b94dc6c-ffs8z",
        "creationTimestamp": "2018-10-15T09:26:57Z"
      },
      "timestamp": "2018-10-15T09:26:00Z",
      "window": "1m0s",
      "containers": [
        {
[root@master metrics-server]#  curl http://localhost:8080/apis/metrics.k8s.io/v1beta1/nodes
{
  "kind": "NodeMetricsList",
  "apiVersion": "metrics.k8s.io/v1beta1",
  "metadata": {
    "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes"
  },
  "items": [
    {
      "metadata": {
        "name": "node2",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes/node2",
        "creationTimestamp": "2018-10-15T09:27:26Z"
      },
      "timestamp": "2018-10-15T09:27:00Z",
      "window": "1m0s",
      "usage": {
        "cpu": "90m",
        "memory": "1172044Ki"
      }
    },
    {
      "metadata": {
        "name": "master",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes/master",
        "creationTimestamp": "2018-10-15T09:27:26Z"
      },
      "timestamp": "2018-10-15T09:27:00Z",
      "window": "1m0s",
      "usage": {
        "cpu": "186m",
        "memory": "1582972Ki"
      }
    },
    {
      "metadata": {
        "name": "node1",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes/node1",
        "creationTimestamp": "2018-10-15T09:27:26Z"
      },
      "timestamp": "2018-10-15T09:27:00Z",
      "window": "1m0s",
      "usage": {
        "cpu": "68m",
        "memory": "1079332Ki"
      }
    }
  ]
}[root@master metrics-server]#

    Once there is data under items, resource usage for the nodes and pods is being collected. Note: if you don't see anything yet, wait a while; if items is still empty after a long time, check the metrics-server container logs for errors. To view the logs:

[root@master metrics-server]# kubectl get pods -n kube-system
NAME                                    READY     STATUS    RESTARTS   AGE
metrics-server-v0.2.1-84678c956-jdtr5   2/2       Running   0          14m
[root@master metrics-server]# kubectl logs metrics-server-v0.2.1-84678c956-jdtr5 -c metrics-server -n kube-system
I1015 09:26:57.117323       1 reststorage.go:93] No metrics for pod rook-ceph/rook-ceph-osd-prepare-node1-8r6lz
I1015 09:26:57.117336       1 reststorage.go:140] No metrics for container rook-ceph-osd in pod rook-ceph/rook-ceph-osd-prepare-node2-vnr97
I1015 09:26:57.117347       1 reststorage.go:93] No metrics for pod rook-ceph/rook-ceph-osd-prepare-node2-vnr97

    Now the kubectl top command can be used:

[root@master ~]# kubectl top nodes
NAME      CPU(cores)   CPU%      MEMORY(bytes)   MEMORY%   
master    131m         3%        1716Mi          46%       
node1     68m          1%        1169Mi          31%       
node2     96m          2%        1236Mi          33%
[root@master manifests]# kubectl top pods 
NAME                            CPU(cores)   MEMORY(bytes)   
myapp-deploy-69b47bc96d-dfpvp   0m           2Mi             
myapp-deploy-69b47bc96d-g9kkz   0m           2Mi
[root@master manifests]# kubectl top pods -n kube-system
NAME                                    CPU(cores)   MEMORY(bytes)   
canal-4h3ww                             11m          49Mi            
canal-6tdxn                             11m          49Mi            
canal-z2tp4                             11m          43Mi            
coredns-78fcdf6894-2l2cf                1m           9Mi             
coredns-78fcdf6894-dkkfq                1m           10Mi            
etcd-master                             14m          242Mi           
kube-apiserver-master                   26m          527Mi           
kube-controller-manager-master          20m          68Mi            
kube-flannel-ds-amd64-6zqzr             2m           15Mi            
kube-flannel-ds-amd64-7qtcl             2m           17Mi            
kube-flannel-ds-amd64-kpctn             2m           18Mi            
kube-proxy-9snbs                        2m           16Mi            
kube-proxy-psmxj                        2m           18Mi            
kube-proxy-tc8g6                        2m           17Mi            
kube-scheduler-master                   6m           16Mi            
kubernetes-dashboard-767dc7d4d-4mq9z    0m           12Mi            
metrics-server-v0.2.1-84678c956-jdtr5   0m           29Mi

Custom metrics (Prometheus)

    As you can see, metrics-server is now working normally. However, it only monitors CPU and memory; it cannot monitor other metrics, such as user-defined ones. For that we need another component: Prometheus.


    Deploying Prometheus is fairly involved.

    node_exporter is the agent that runs on each node;

    PromQL is the query language, roughly the equivalent of SQL, used to query the data;

    k8s-prometheus-adapter: Kubernetes cannot consume the metrics gathered by Prometheus directly, so k8s-prometheus-adapter is needed to convert them into an API that Kubernetes understands;

    kube-state-metrics aggregates cluster state data.


    Now let's start the deployment.

    Visit https://github.com/ikubernetes/k8s-prom

[root@master pro]# git clone https://github.com/iKubernetes/k8s-prom.git

    First create a namespace called prom:

[root@master k8s-prom]# kubectl apply -f namespace.yaml 
namespace/prom created

    Deploy node_exporter:

[root@master k8s-prom]# cd node_exporter/
[root@master node_exporter]# ls
node-exporter-ds.yaml  node-exporter-svc.yaml
[root@master node_exporter]# kubectl apply -f .
daemonset.apps/prometheus-node-exporter created
service/prometheus-node-exporter created
[root@master node_exporter]# kubectl get pods -n prom
NAME                             READY     STATUS    RESTARTS   AGE
prometheus-node-exporter-dmmjj   1/1       Running   0          7m
prometheus-node-exporter-ghz2l   1/1       Running   0          7m
prometheus-node-exporter-zt2lw   1/1       Running   0          7m

    Deploy Prometheus:

[root@master k8s-prom]# cd prometheus/
[root@master prometheus]# ls
prometheus-cfg.yaml  prometheus-deploy.yaml  prometheus-rbac.yaml  prometheus-svc.yaml
[root@master prometheus]# kubectl apply -f .
configmap/prometheus-config created
deployment.apps/prometheus-server created
clusterrole.rbac.authorization.k8s.io/prometheus created
serviceaccount/prometheus created
clusterrolebinding.rbac.authorization.k8s.io/prometheus created
service/prometheus created

    Check all the resources in the prom namespace:

[root@master prometheus]# kubectl get all -n prom
NAME                                     READY     STATUS    RESTARTS   AGE
pod/prometheus-node-exporter-dmmjj       1/1       Running   0          10m
pod/prometheus-node-exporter-ghz2l       1/1       Running   0          10m
pod/prometheus-node-exporter-zt2lw       1/1       Running   0          10m
pod/prometheus-server-65f5d59585-6l8m8   1/1       Running   0          55s
NAME                               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
service/prometheus                 NodePort    10.111.127.64   <none>        9090:30090/TCP   56s
service/prometheus-node-exporter   ClusterIP   None            <none>        9100/TCP         10m
NAME                                      DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/prometheus-node-exporter   3         3         3         3            3           <none>          10m
NAME                                DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/prometheus-server   1         1         1            1           56s
NAME                                           DESIRED   CURRENT   READY     AGE
replicaset.apps/prometheus-server-65f5d59585   1         1         1         56s

    As shown above, because the service is of type NodePort, the Prometheus application running in the container can be reached on port 30090 of the host.
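
    As a quick command-line check, the Prometheus HTTP API can be queried through the same NodePort. A hypothetical example (the node IP 172.16.1.100 is the one used elsewhere in this article, and the PromQL expression and its labels are only an illustration):

# run a PromQL query against the Prometheus HTTP API exposed on the NodePort
curl -sG 'http://172.16.1.100:30090/api/v1/query' \
  --data-urlencode 'query=sum(rate(container_cpu_usage_seconds_total[2m])) by (pod_name)'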


    It is best to mount PVC-backed storage, otherwise the monitoring data will be gone after a while.
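
    A minimal PVC for this could look like the sketch below (the name, size, and reliance on the cluster's default storage class are assumptions); it would then be mounted into the prometheus-server deployment at Prometheus's data directory:

# hypothetical PVC for Prometheus data in the prom namespace
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: prometheus-data
  namespace: prom
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi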

    Deploy kube-state-metrics, which is used to aggregate cluster state data:

[root@master k8s-prom]# cd kube-state-metrics/
[root@master kube-state-metrics]# ls
kube-state-metrics-deploy.yaml  kube-state-metrics-rbac.yaml  kube-state-metrics-svc.yaml
[root@master kube-state-metrics]# kubectl apply -f .
deployment.apps/kube-state-metrics created
serviceaccount/kube-state-metrics created
clusterrole.rbac.authorization.k8s.io/kube-state-metrics created
clusterrolebinding.rbac.authorization.k8s.io/kube-state-metrics created
service/kube-state-metrics created
[root@master kube-state-metrics]# kubectl get all -n prom
NAME                                      READY     STATUS    RESTARTS   AGE
pod/kube-state-metrics-58dffdf67d-v9klh   1/1       Running   0          14m
NAME                               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
service/kube-state-metrics         ClusterIP   10.111.41.139   <none>        8080/TCP         14m

    Deploy k8s-prometheus-adapter; this requires a self-made certificate:

[root@master k8s-prometheus-adapter]# cd /etc/kubernetes/pki/
[root@master pki]# (umask 077; openssl genrsa -out serving.key 2048)
Generating RSA private key, 2048 bit long modulus
...........................................................................................+++
...............+++
e is 65537 (0x10001)

    Create the certificate signing request:

[root@master pki]#  openssl req -new -key serving.key -out serving.csr -subj "/CN=serving"

    Sign the certificate:

[root@master pki]# openssl  x509 -req -in serving.csr -CA ./ca.crt -CAkey ./ca.key -CAcreateserial -out serving.crt -days 3650
Signature ok
subject=/CN=serving
Getting CA Private Key

    Create the secret that holds the certificate and key:

[root@master pki]# kubectl create secret generic cm-adapter-serving-certs --from-file=serving.crt=./serving.crt --from-file=serving.key=./serving.key  -n prom
secret/cm-adapter-serving-certs created

    Note: cm-adapter-serving-certs is the name referenced in the custom-metrics-apiserver-deployment.yaml file.
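
    In other words, the deployment mounts this secret as a volume. The relevant fragment looks roughly like the sketch below (based on the usual secret-volume pattern; check the actual custom-metrics-apiserver-deployment.yaml for the exact field names and volume name):

      volumes:
      - name: volume-serving-cert
        secret:
          secretName: cm-adapter-serving-certs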

[root@master pki]# kubectl get secrets -n prom
NAME                             TYPE                                  DATA      AGE
cm-adapter-serving-certs         Opaque                                2         51s
default-token-knsbg              kubernetes.io/service-account-token   3         4h
kube-state-metrics-token-sccdf   kubernetes.io/service-account-token   3         3h
prometheus-token-nqzbz           kubernetes.io/service-account-token   3         3h

    Deploy k8s-prometheus-adapter:

[root@master k8s-prom]# cd k8s-prometheus-adapter/
[root@master k8s-prometheus-adapter]# ls
custom-metrics-apiserver-auth-delegator-cluster-role-binding.yaml   custom-metrics-apiserver-service.yaml
custom-metrics-apiserver-auth-reader-role-binding.yaml              custom-metrics-apiservice.yaml
custom-metrics-apiserver-deployment.yaml                            custom-metrics-cluster-role.yaml
custom-metrics-apiserver-resource-reader-cluster-role-binding.yaml  custom-metrics-resource-reader-cluster-role.yaml
custom-metrics-apiserver-service-account.yaml                       hpa-custom-metrics-cluster-role-binding.yaml

    Because k8s v1.11.2 is not compatible with the latest k8s-prometheus-adapter release, the workaround is to go to https://github.com/DirectXMan12/k8s-prometheus-adapter/tree/master/deploy/manifests, download the latest custom-metrics-apiserver-deployment.yaml, and change the namespace inside it to prom; also download custom-metrics-config-map.yaml locally and change its namespace to prom as well.
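
    A possible way to do that from the command line (the raw URLs and the upstream namespace value are assumptions; adjust them to whatever the repository actually contains):

wget https://raw.githubusercontent.com/DirectXMan12/k8s-prometheus-adapter/master/deploy/manifests/custom-metrics-apiserver-deployment.yaml
wget https://raw.githubusercontent.com/DirectXMan12/k8s-prometheus-adapter/master/deploy/manifests/custom-metrics-config-map.yaml
# switch the namespace in both files to prom
sed -i 's/namespace: custom-metrics/namespace: prom/' custom-metrics-apiserver-deployment.yaml custom-metrics-config-map.yaml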

[root@master k8s-prometheus-adapter]# kubectl apply -f .
clusterrolebinding.rbac.authorization.k8s.io/custom-metrics:system:auth-delegator created
rolebinding.rbac.authorization.k8s.io/custom-metrics-auth-reader created
deployment.apps/custom-metrics-apiserver created
clusterrolebinding.rbac.authorization.k8s.io/custom-metrics-resource-reader created
serviceaccount/custom-metrics-apiserver created
service/custom-metrics-apiserver created
apiservice.apiregistration.k8s.io/v1beta1.custom.metrics.k8s.io created
clusterrole.rbac.authorization.k8s.io/custom-metrics-server-resources created
clusterrole.rbac.authorization.k8s.io/custom-metrics-resource-reader created
clusterrolebinding.rbac.authorization.k8s.io/hpa-controller-custom-metrics created
[root@master k8s-prometheus-adapter]# kubectl get all -n prom
NAME                                           READY     STATUS    RESTARTS   AGE
pod/custom-metrics-apiserver-65f545496-64lsz   1/1       Running   0          6m
pod/kube-state-metrics-58dffdf67d-v9klh        1/1       Running   0          4h
pod/prometheus-node-exporter-dmmjj             1/1       Running   0          4h
pod/prometheus-node-exporter-ghz2l             1/1       Running   0          4h
pod/prometheus-node-exporter-zt2lw             1/1       Running   0          4h
pod/prometheus-server-65f5d59585-6l8m8         1/1       Running   0          4h
NAME                               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
service/custom-metrics-apiserver   ClusterIP   10.103.87.246   <none>        443/TCP          36m
service/kube-state-metrics         ClusterIP   10.111.41.139   <none>        8080/TCP         4h
service/prometheus                 NodePort    10.111.127.64   <none>        9090:30090/TCP   4h
service/prometheus-node-exporter   ClusterIP   None            <none>        9100/TCP         4h
NAME                                      DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/prometheus-node-exporter   3         3         3         3            3           <none>          4h
NAME                                       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/custom-metrics-apiserver   1         1         1            1           36m
deployment.apps/kube-state-metrics         1         1         1            1           4h
deployment.apps/prometheus-server          1         1         1            1           4h
NAME                                                  DESIRED   CURRENT   READY     AGE
replicaset.apps/custom-metrics-apiserver-5f6b4d857d   0         0         0         36m
replicaset.apps/custom-metrics-apiserver-65f545496    1         1         1         6m
replicaset.apps/custom-metrics-apiserver-86ccf774d5   0         0         0         17m
replicaset.apps/kube-state-metrics-58dffdf67d         1         1         1         4h
replicaset.apps/prometheus-server-65f5d59585          1         1         1         4h

    Finally, all the resources in the prom namespace are in the Running state.

[root@master k8s-prometheus-adapter]# kubectl api-versions
custom.metrics.k8s.io/v1beta1

    The custom.metrics.k8s.io/v1beta1 API is now visible.
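
    As with metrics-server, the adapter registers an APIService (v1beta1.custom.metrics.k8s.io, created by the apply step above); its status can be checked with:

# verify that the custom metrics APIService is registered and available
kubectl get apiservice v1beta1.custom.metrics.k8s.io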

    Open a proxy:

[root@master k8s-prometheus-adapter]# kubectl proxy --port=8080

    Now the metric data is visible:

[root@master pki]# curl  http://localhost:8080/apis/custom.metrics.k8s.io/v1beta1/
 {
      "name": "pods/ceph_rocksdb_submit_transaction_sync",
      "singularName": "",
      "namespaced": true,
      "kind": "MetricValueList",
      "verbs": [
        "get"
      ]
    },
    {
      "name": "jobs.batch/kube_deployment_created",
      "singularName": "",
      "namespaced": true,
      "kind": "MetricValueList",
      "verbs": [
        "get"
      ]
    },
    {
      "name": "jobs.batch/kube_pod_owner",
      "singularName": "",
      "namespaced": true,
      "kind": "MetricValueList",
      "verbs": [
        "get"
      ]
    },
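
    Individual metrics from this list can also be read directly through the API. A hypothetical example (the namespace, pod selector, and metric name are placeholders; use values that actually exist in your cluster):

# read one custom metric for all pods in a namespace via the aggregated API
curl 'http://localhost:8080/apis/custom.metrics.k8s.io/v1beta1/namespaces/prom/pods/*/http_requests'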

    Now we can happily create an HPA (Horizontal Pod Autoscaler).

    In addition, Prometheus can be integrated with Grafana, as follows.

    First download the grafana.yaml file; visit https://github.com/kubernetes/heapster/blob/master/deploy/kube-config/influxdb/grafana.yaml

[root@master pro]# wget

    Modify the contents of grafana.yaml:

 Change namespace: kube-system to prom (there are two occurrences);
 Comment out the following two entries under env:
        - name: INFLUXDB_HOST
          value: monitoring-influxdb
 Add type: NodePort at the end of the Service spec:
 ports:
  - port: 80
    targetPort: 3000
  selector:
    k8s-app: grafana
  type: NodePort
[root@master pro]# kubectl apply -f grafana.yaml 
deployment.extensions/monitoring-grafana created
service/monitoring-grafana created
[root@master pro]# kubectl get pods -n prom
NAME                                       READY     STATUS    RESTARTS   AGE
monitoring-grafana-ffb4d59bd-gdbsk         1/1       Running   0          5s

    The Grafana pod is now running.

[root@master pro]# kubectl get svc -n prom
NAME                       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
monitoring-grafana         NodePort    10.106.164.205   <none>        80:32659/TCP     19m

    We can now visit the host IP: http://172.16.1.100:32659


    Then the corresponding data can be seen in the web UI.

    Log in to the website below and download a Grafana dashboard template for monitoring Kubernetes with Prometheus:

(screenshot: the Grafana dashboard template download site)

    Then import the downloaded template in the Grafana UI.


    After importing the template, the monitoring data is visible.


 HPA (Horizontal Pod Autoscaling)

    When pods come under pressure, the number of pods is automatically scaled according to the load to spread the pressure out.

    Currently HPA comes in two versions: v1 only supports core metrics (it can only scale pods based on CPU utilization), while v2 can additionally use custom metrics (covered later).

[root@master pro]# kubectl explain hpa.spec.scaleTargetRef
scaleTargetRef: the reference to the workload whose pod count the HPA adjusts
[root@master pro]# kubectl api-versions |grep auto
autoscaling/v1
autoscaling/v2beta1

    As shown above, both HPA v1 and HPA v2 (autoscaling/v1 and autoscaling/v2beta1) are supported.

    Next, let's use the command line to re-create a pod named myapp with resource limits:

[root@master ~]# kubectl run myapp --image=ikubernetes/myapp:v1 --replicas=1 --requests='cpu=50m,memory=256Mi' --limits='cpu=50m,memory=256Mi' --labels='app=myapp' --expose --port=80
service/myapp created
deployment.apps/myapp created
[root@master ~]# kubectl get pods
NAME                     READY     STATUS    RESTARTS   AGE
myapp-6985749785-fcvwn   1/1       Running   0          58s

    Now let's make the myapp pods scale horizontally and automatically with kubectl autoscale, which simply creates an HPA controller for them.

[root@master ~]# kubectl autoscale deployment myapp --min=1 --max=8 --cpu-percent=60
horizontalpodautoscaler.autoscaling/myapp autoscaled

    --min: the minimum number of pods

    --max: the maximum number of pods to scale out to

    --cpu-percent: the target CPU utilization
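
    The kubectl autoscale command above is roughly equivalent to creating the following HPA v1 object (a sketch for illustration):

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: myapp
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 1
  maxReplicas: 8
  targetCPUUtilizationPercentage: 60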

[root@master ~]# kubectl get hpa
NAME      REFERENCE          TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
myapp     Deployment/myapp   0%/60%    1         8         1          4m
[root@master ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
myapp        ClusterIP   10.105.235.197   <none>        80/TCP              19

    Next, change the service to type NodePort:

[root@master ~]# kubectl patch svc myapp -p '{"spec":{"type": "NodePort"}}'
service/myapp patched
[root@master ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
myapp        NodePort    10.105.235.197   <none>        80:31990/TCP        22m
[root@master ~]# yum install httpd-tools   # mainly to install the ab load-testing tool
[root@master ~]# kubectl get pods -o wide
NAME                     READY     STATUS    RESTARTS   AGE       IP            NODE
myapp-6985749785-fcvwn   1/1       Running   0          25m       10.244.2.84   node2

    Start load testing with the ab tool:

[root@master ~]# ab -c 1000 -n 5000000 http://172.16.1.100:31990/index.html
This is ApacheBench, Version 2.3 <$Revision: 1430300 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 172.16.1.100 (be patient)

    Wait a while and you will see the pod's CPU utilization reach 98%, so it needs to scale out to 2 pods:

[root@master ~]# kubectl describe hpa
resource cpu on pods  (as a percentage of request):  98% (49m) / 60%
Deployment pods:                                       1 current / 2 desired
[root@master ~]# kubectl top pods
NAME                     CPU(cores)   MEMORY(bytes)   
myapp-6985749785-fcvwn   49m (the total CPU we set is 50m)         3Mi
[root@master ~]#  kubectl get pods -o wide
NAME                     READY     STATUS    RESTARTS   AGE       IP             NODE
myapp-6985749785-fcvwn   1/1       Running   0          32m       10.244.2.84    node2
myapp-6985749785-sr4qv   1/1       Running   0          2m        10.244.1.105   node1

    We can see it has already scaled out to 2 pods automatically; wait a little longer and, as the CPU pressure keeps rising, it will scale out to 4 or more pods:

[root@master ~]#  kubectl get pods -o wide
NAME                     READY     STATUS    RESTARTS   AGE       IP             NODE
myapp-6985749785-2mjrd   1/1       Running   0          1m        10.244.1.107   node1
myapp-6985749785-bgz6p   1/1       Running   0          1m        10.244.1.108   node1
myapp-6985749785-fcvwn   1/1       Running   0          35m       10.244.2.84    node2
myapp-6985749785-sr4qv   1/1       Running   0          5m        10.244.1.105   node1

    As soon as the load test stops, the number of pods shrinks back to normal.

    Above we used HPA v1 for horizontal pod autoscaling; as mentioned before, HPA v1 can only scale pods horizontally based on CPU utilization.

    Next let's look at HPA v2, which can scale pods horizontally based on custom metric utilization.

    Before using HPA v2, delete the HPA v1 object created earlier so that it does not conflict with the v2 test:

[root@master hpa]# kubectl delete hpa myapp
horizontalpodautoscaler.autoscaling "myapp" deleted

    OK, now let's create an HPA v2 object:

[root@master hpa]# cat hpa-v2-demo.yaml 
apiVersion: autoscaling/v2beta1   # autoscaling/v2beta1 means this is an HPA v2 object
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa-v2
spec:
  scaleTargetRef: # the workload whose pod count this HPA adjusts
    apiVersion: apps/v1 # API version of the target workload
    kind: Deployment
    name: myapp
  minReplicas: 1 # minimum number of replicas
  maxReplicas: 10
  metrics: # which metrics are evaluated
  - type: Resource # evaluate a resource metric
    resource:
      name: cpu
      targetAverageUtilization: 55 # scale out when average pod CPU utilization exceeds 55%
  - type: Resource
    resource:
      name: memory # HPA v1 can only evaluate CPU; HPA v2 can also evaluate memory
      targetAverageValue: 50Mi # scale out when average pod memory usage exceeds 50Mi
[root@master hpa]# kubectl apply -f hpa-v2-demo.yaml 
horizontalpodautoscaler.autoscaling/myapp-hpa-v2 created
[root@master hpa]# kubectl get hpa
NAME           REFERENCE          TARGETS                MINPODS   MAXPODS   REPLICAS   AGE
myapp-hpa-v2   Deployment/myapp   3723264/50Mi, 0%/55%   1         10        1          37s

    We can see that there is currently only one pod:

[root@master hpa]# kubectl get pods -o wide
NAME                     READY     STATUS    RESTARTS   AGE       IP            NODE
myapp-6985749785-fcvwn   1/1       Running   0          57m       10.244.2.84   node2

    Start the load test:

[root@master ~]# ab -c 100 -n 5000000 http://172.16.1.100:31990/index.html

    Check what HPA v2 observes:

[root@master hpa]# kubectl describe hpa
Metrics:                                               ( current / target )
  resource memory on pods:                             3756032 / 50Mi
  resource cpu on pods  (as a percentage of request):  82% (41m) / 55%
Min replicas:                                          1
Max replicas:                                          10
Deployment pods:                                       1 current / 2 desired
[root@master hpa]# kubectl get pods -o wide
NAME                     READY     STATUS    RESTARTS   AGE       IP             NODE
myapp-6985749785-8frq4   1/1       Running   0          1m        10.244.1.109   node1
myapp-6985749785-fcvwn   1/1       Running   0          1h        10.244.2.84    node2

    It has automatically scaled out to 2 pods. Once the load test stops, the pod count will shrink back to normal.

    Going forward, with HPA v2 we can scale the pod count not only on CPU and memory usage but also on things like HTTP request concurrency.

    For example:

[root@master hpa]# cat hpa-v2-custom.yaml 
apiVersion: autoscaling/v2beta1  # autoscaling/v2beta1 means this is an HPA v2 object
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa-v2
spec:
  scaleTargetRef: # the workload whose pod count this HPA adjusts
    apiVersion: apps/v1 # API version of the target workload
    kind: Deployment
    name: myapp
  minReplicas: 1 # minimum number of replicas
  maxReplicas: 10
  metrics: # which metrics are evaluated
  - type: Pods # evaluate a per-pod custom metric
    pods:
      metricName: http_requests # the custom metric name
      targetAverageValue: 800m # a quantity; the m suffix means milli-units, so 800m is 0.8 per pod on average

Thank you for reading! That's all for this article on "An example analysis of the resource metrics API and custom metrics API in Docker". I hope the content above is of some help and lets you learn a bit more. If you think the article is good, feel free to share it so more people can see it!
