Horizontal Pod Autoscaling (HPA) is the Kubernetes feature that automatically scales Pods horizontally.
Introduction
A K8S cluster can scale a service out or in through the Replication Controller's scale mechanism, giving services elasticity.
K8S scaling comes in two forms:
- Manual scaling via `scale`; see k8s rolling updates (RollingUpdate).
- Automatic scaling via `autoscale`; see HPA below.
Automatic scaling itself falls into two categories:
- Horizontal scaling (scale out): adjusting the number of instances.
- Vertical scaling (scale up): adjusting the resources available to a single instance, e.g. more CPU or more memory.
HPA is the former. It can automatically scale the number of Pods based on CPU utilization or custom application metrics (supported for Replication Controllers, Deployments, and Replica Sets).
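As a quick illustration of the two modes, the commands below contrast one-off manual scaling with creating an autoscaler; they assume the nginx Deployment that is built later in this article.

```sh
# manual horizontal scaling: pin the Deployment to a fixed replica count
kubectl scale --replicas=3 deployment/nginx

# automatic horizontal scaling: HPA keeps CPU near 70% within [1, 5] replicas
# (roughly equivalent to the nginx-hpa-cpu.yml manifest used below)
kubectl autoscale deployment nginx --min=1 --max=5 --cpu-percent=70
```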
Two ways to obtain metrics:
- Heapster: heapster provides the metrics service, but the v1 API (autoscaling/v1) supports only CPU as the scaling metric. Other metrics such as memory, network traffic, and QPS are still in beta (autoscaling/v2beta1).
- Custom metrics: also in beta (autoscaling/v2beta1), but this requires developing a custom REST API, which adds complexity; furthermore, when pulling data from custom monitoring, only absolute values can be set, not utilization percentages.
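Whichever source is used, it's worth first confirming that Pod metrics are actually being collected. On this Heapster-based 1.9 cluster, `kubectl top` should already return numbers (a sanity check added here, not a step from the original walkthrough):

```sh
# verify the metrics pipeline end to end; on 1.9 these figures are served by Heapster
kubectl top node
kubectl top pod --all-namespaces
```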
Workflow
1. Create the HPA resource, setting the target CPU utilization plus the max/min replica counts. You must set the Pod resource request (`requests`), otherwise HPA will not work.
2. Every 30s, the controller manager queries the metrics source for resource usage (the interval can be changed via `--horizontal-pod-autoscaler-sync-period` in `kube-controller-manager.service`; a quick way to check this flag is shown after the list).
3. It then compares the observed usage against the target set at creation (sum of per-Pod averages / the target value) and computes the desired number of replicas.
4. The desired replica count is bounded by the max/min set in step 1: if within bounds, scale to it; if it exceeds the maximum, scale to the maximum.
5. Repeat steps 2-4.
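As an aside (my own check, not part of the original steps), on this systemd-managed cluster you can confirm which HPA-related flags the controller manager actually runs with:

```sh
# list any HPA-related flags in the controller manager unit file
# (if --horizontal-pod-autoscaler-sync-period is absent, the default of 30s applies)
systemctl cat kube-controller-manager | grep horizontal-pod-autoscaler
```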
Autoscaling algorithm
The HPA controller adjusts the replica count so that CPU utilization converges toward the target value; it does not match it exactly. The design also accounts for the fact that a scaling decision may take a while to show effect: for example, while a new Pod is being created in response to excess CPU demand, the system's CPU usage may still be climbing. So for a period after each decision, no further scaling decision is made: 3 minutes for scale-up and 5 minutes for scale-down (tunable via `--horizontal-pod-autoscaler-upscale-delay` and `--horizontal-pod-autoscaler-downscale-delay`).
- The HPA controller has a tolerance that absorbs a certain range of usage instability; it currently defaults to 0.1, again for the sake of system stability. For example, if the HPA policy triggers scaling when CPU utilization crosses 50%, scaling activity only starts once utilization goes above 55% or below 45%; HPA tries to keep Pod utilization inside that band.
- Each scale-up or scale-down computes the new replica count as: desiredReplicas = ceil(currentUtilization / targetUtilization × currentReplicas).
- A single scale-up never raises the replica count beyond 2× its current value. A worked example of this calculation follows below.
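To make the arithmetic concrete, here is a minimal shell sketch of the calculation (the variable names are mine, not HPA's), using the 1-replica, 180% reading that appears during the load test later:

```sh
current=1    # current replica count
usage=180    # observed average CPU utilization, as % of the CPU request
target=70    # targetCPUUtilizationPercentage; the 0.1 tolerance means no action within 63%..77%

# desired = ceil(current * usage / target), done with integer math
desired=$(( (current * usage + target - 1) / target ))
echo "$desired"    # prints 3, matching the observed jump from 1 to 3 replicas
```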
Environment
Role | IP | OS version |
---|---|---|
master | 192.168.1.201 | CentOS 7.4 |
etcd1 | 192.168.1.201 | CentOS 7.4 |
etcd2 | 192.168.1.202 | CentOS 7.4 |
etcd3 | 192.168.1.203 | CentOS 7.4 |
node1 | 192.168.1.204 | CentOS 7.4 |
node2 | 192.168.1.205 | CentOS 7.4 |
Component | Version |
---|---|
kubectl server | v1.9.2 |
kubectl client | v1.9.2 |
Go | go1.9.2 |
etcdctl | 3.2.15 |
etcd | 3.2.15 |
flanneld | v0.10.0 |
cfssl | 1.2.0 |
docker | 18.09.1-beta1 |
```
[root@master ~]# kubectl cluster-info
Kubernetes master is running at https://192.168.1.201:6443
Heapster is running at https://192.168.1.201:6443/api/v1/namespaces/kube-system/services/heapster/proxy
monitoring-grafana is running at https://192.168.1.201:6443/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
monitoring-influxdb is running at https://192.168.1.201:6443/api/v1/namespaces/kube-system/services/monitoring-influxdb/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
[root@master ~]#
```
```
[root@master ~]# kubectl -s http://192.168.1.201:8080 get componentstatuses
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
etcd-2               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}
scheduler            Healthy   ok
etcd-0               Healthy   {"health": "true"}
[root@master ~]#
```
```
[root@master ~]# kubectl get nodes
NAME            STATUS    ROLES     AGE       VERSION
192.168.1.204   Ready     <none>    21h       v1.9.2
192.168.1.205   Ready     <none>    21h       v1.9.2
[root@master ~]#
```
Deploying HPA
First prepare a K8S cluster; the cluster deployment itself is omitted here.
Create the nginx Deployment and Service
```
[root@master ~]# cat nginx.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx-hpa
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
          name: http
          protocol: TCP
        resources:
          requests:
            cpu: 0.01
            memory: 25Mi
          limits:
            cpu: 0.05
            memory: 60Mi
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx-hpa
spec:
  selector:
    app: nginx-hpa
  type: NodePort
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30080
[root@master ~]#
```
```
[root@master ~]# kubectl apply -f nginx.yml
[root@master ~]# kubectl get pod -o wide
NAME                     READY     STATUS    RESTARTS   AGE       IP            NODE
nginx-5dcf548595-bk9cr   1/1       Running   1          14h       172.30.94.2   192.168.1.205
[root@master ~]#
```
Create an HPA for the nginx application
```
[root@master ~]# cat nginx-hpa-cpu.yml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: nginx
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 70
[root@master ~]#
```
```
[root@master ~]# kubectl apply -f nginx-hpa-cpu.yml
[root@master ~]# kubectl get hpa
NAME        REFERENCE          TARGETS           MINPODS   MAXPODS   REPLICAS   AGE
nginx-hpa   Deployment/nginx   <unknown> / 70%   1         5         1          14h
[root@master ~]#
```
Q1
At this point nginx-hpa cannot read the current CPU usage (the TARGETS column shows `<unknown>`). After waiting a few minutes, `kubectl describe hpa` reports the following error:
```
[root@master ~]# kubectl describe hpa
Name:                           nginx-hpa
Namespace:                      default
Labels:                         <none>
Annotations:                    kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"autoscaling/v1","kind":"HorizontalPodAutoscaler","metadata":{"annotations":{},"name":"nginx-hpa","namespace":"default"},"spec":{"maxReplic...
CreationTimestamp:              Sat, 26 Jan 2019 22:23:08 +0800
Reference:                      Deployment/nginx
Metrics:                        ( current / target )
  resource cpu on pods (as a percentage of request):  <unknown> / 70%
Min replicas:                   1
Max replicas:                   5
Conditions:
  Type           Status  Reason                   Message
  ----           ------  ------                   -------
  AbleToScale    True    SucceededGetScale        the HPA controller was able to get the target's current scale
  ScalingActive  False   FailedGetResourceMetric  the HPA was unable to compute the replica count: unable to get metrics for resource cpu: unable to fetch metrics from API: the server could not find the requested resource (get pods.metrics.k8s.io)
Events:
  Type     Reason                        Age               From                       Message
  ----     ------                        ----              ----                       -------
  Warning  FailedComputeMetricsReplicas  1m (x12 over 3m)  horizontal-pod-autoscaler  failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from API: the server could not find the requested resource (get pods.metrics.k8s.io)
  Warning  FailedGetResourceMetric       1m (x13 over 3m)  horizontal-pod-autoscaler  unable to get metrics for resource cpu: unable to fetch metrics from API: the server could not find the requested resource (get pods.metrics.k8s.io)
[root@master ~]#
```
Roughly, this means HPA cannot fetch metrics through the API.
Fix:
Add the `--horizontal-pod-autoscaler-use-rest-clients=false` flag to `/etc/systemd/system/kube-controller-manager.service`, then restart the kube-controller-manager service.
kube-controller-manager's parameter --horizontal-pod-autoscaler-use-rest-clients defaults to true in k8s 1.9.0, while in k8s 1.8.x it defaults to false; change it to false and it works.
```
[root@master ~]# cat /etc/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/usr/local/k8s/bin/kube-controller-manager \
  --address=127.0.0.1 \
  --master=http://192.168.1.201:8080 \
  --allocate-node-cidrs=true \
  --service-cluster-ip-range=172.16.0.0/16 \
  --cluster-cidr=172.30.0.0/16 \
  --cluster-name=kubernetes \
  --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
  --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --root-ca-file=/etc/kubernetes/ssl/ca.pem \
  --leader-elect=true \
  --horizontal-pod-autoscaler-use-rest-clients=false \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
[root@master ~]#
```
```
[root@master ~]# systemctl daemon-reload
[root@master ~]# systemctl restart kube-controller-manager
```
Q2
After reconfiguring and restarting kube-controller-manager, recreate the HPA with `kubectl delete -f nginx-hpa-cpu.yml` followed by `kubectl apply -f nginx-hpa-cpu.yml`. A new error then appears:
```
[root@master ~]# kubectl describe hpa
Name:                           nginx-hpa
Namespace:                      default
Labels:                         <none>
Annotations:                    kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"autoscaling/v1","kind":"HorizontalPodAutoscaler","metadata":{"annotations":{},"name":"nginx-hpa","namespace":"default"},"spec":{"maxRepl...
CreationTimestamp:              Sun, 27 Jan 2019 00:18:02 +0800
Reference:                      Deployment/nginx
Metrics:                        ( current / target )
  resource cpu on pods (as a percentage of request):  <unknown> / 70%
Min replicas:                   1
Max replicas:                   5
Conditions:
  Type           Status  Reason                   Message
  ----           ------  ------                   -------
  AbleToScale    True    SucceededGetScale        the HPA controller was able to get the target's current scale
  ScalingActive  False   FailedGetResourceMetric  the HPA was unable to compute the replica count: unable to get metrics for resource cpu: failed to get pod resource metrics: an error on the server ("Error: 'dial tcp 172.30.9.4:8082: getsockopt: connection timed out'\nTrying to reach: 'http://172.30.9.4:8082/apis/metrics/v1alpha1/namespaces/default/pods?labelSelector=app%3Dnginx-hpa'") has prevented the request from succeeding (get services http:heapster:)
Events:
  Type     Reason                        Age               From                       Message
  ----     ------                        ----              ----                       -------
  Warning  FailedUpdateStatus            2m                horizontal-pod-autoscaler  Operation cannot be fulfilled on horizontalpodautoscalers.autoscaling "nginx-hpa": the object has been modified; please apply your changes to the latest version and try again
  Warning  FailedGetResourceMetric       24s (x3 over 4m)  horizontal-pod-autoscaler  unable to get metrics for resource cpu: failed to get pod resource metrics: an error on the server ("Error: 'dial tcp 172.30.9.4:8082: getsockopt: connection timed out'\nTrying to reach: 'http://172.30.9.4:8082/apis/metrics/v1alpha1/namespaces/default/pods?labelSelector=app%3Dnginx-hpa'") has prevented the request from succeeding (get services http:heapster:)
  Warning  FailedComputeMetricsReplicas  24s (x3 over 4m)  horizontal-pod-autoscaler  failed to get cpu utilization: unable to get metrics for resource cpu: failed to get pod resource metrics: an error on the server ("Error: 'dial tcp 172.30.9.4:8082: getsockopt: connection timed out'\nTrying to reach: 'http://172.30.9.4:8082/apis/metrics/v1alpha1/namespaces/default/pods?labelSelector=app%3Dnginx-hpa'") has prevented the request from succeeding (get services http:heapster:)
[root@master ~]#
```
This means HPA cannot connect to the heapster service, so the next step is to check whether heapster itself is misbehaving.
```
[root@master ~]# kubectl get pod -o wide -n kube-system
NAME                                   READY     STATUS    RESTARTS   AGE         IP           NODE
heapster-6d5c495969-2rgcr              1/1       Running   2          20h         172.30.9.4   192.168.1.204
kubernetes-dashboard-cbbf9945c-bkvbk   1/1       Running   2          20h         172.30.9.3   192.168.1.204
monitoring-grafana-67d68bf9c6-zv928    1/1       Running   2          20h         172.30.9.2   192.168.1.204
monitoring-influxdb-7c4c46745f-kbxgb   1/1       Running   0          <invalid>   172.30.9.5   192.168.1.204
[root@master ~]#
```
kube-dashboard shows that Pod CPU and memory stats are coming through heapster, which means heapster itself is working normally.
Next, curl the failing URL manually from the nodes; from node1 the request succeeds:
```
[root@node1 ~]# curl 'http://172.30.9.4:8082/apis/metrics/v1alpha1/namespaces/default/pods?labelSelector=app%3Dnginx-hpa'
{
  "metadata": {},
  "items": [
    {
      "metadata": {
        "name": "nginx-5dcf548595-bk9cr",
        "namespace": "default",
        "creationTimestamp": "2019-01-27T07:29:43Z"
      },
      "timestamp": "2019-01-27T07:29:00Z",
      "window": "1m0s",
      "containers": [
        {
          "name": "nginx",
          "usage": {
            "cpu": "0",
            "memory": "2820Ki"
          }
        }
      ]
    }
  ]
}
[root@node1 ~]#
```
Testing the same URL from kube-master, however, shows that it (and therefore HPA, which runs in the controller manager there) cannot reach heapster:
```
[root@master ~]# curl 'http://172.30.9.4:8082/apis/metrics/v1alpha1/namespaces/default/pods?labelSelector=app%3Dnginx-hpa'
curl: (7) Failed connect to 172.30.9.4:8082; Connection timed out
[root@master ~]#
```
Next, check the network itself: kube-master cannot even ping the heapster Pod IP.
```
[root@master ~]# ping 172.30.9.4
PING 172.30.9.4 (172.30.9.4) 56(84) bytes of data.
^C
--- 172.30.9.4 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 1002ms
[root@master ~]# telnet 172.30.9.4 8082
Trying 172.30.9.4...
telnet: connect to address 172.30.9.4: Connection timed out
[root@master ~]#
```
So the failure is caused by broken networking: kube-master has no route into the Pod network. The fix is to install the flannel network on kube-master. If the flannel interface has lost its IP address, restarting flannel (`systemctl restart flanneld`) resolves it.
```
[root@localhost ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:48:f6:1d brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.201/24 brd 192.168.1.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::22d8:9dda:6705:ec09/64 scope link
       valid_lft forever preferred_lft forever
3: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN
    link/ether 6e:05:c0:9c:34:3f brd ff:ff:ff:ff:ff:ff
    inet 172.30.13.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::6c05:c0ff:fe9c:343f/64 scope link
       valid_lft forever preferred_lft forever
[root@localhost ~]#
```
Test the network from kube-master to the heapster Pod again:
```
[root@master ~]# ping 172.30.9.4 -c 4
PING 172.30.9.4 (172.30.9.4) 56(84) bytes of data.
64 bytes from 172.30.9.4: icmp_seq=1 ttl=63 time=2.15 ms
64 bytes from 172.30.9.4: icmp_seq=2 ttl=63 time=1.27 ms
64 bytes from 172.30.9.4: icmp_seq=3 ttl=63 time=1.30 ms
64 bytes from 172.30.9.4: icmp_seq=4 ttl=63 time=1.66 ms

--- 172.30.9.4 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3003ms
rtt min/avg/max/mdev = 1.277/1.599/2.150/0.354 ms
[root@master ~]# telnet 172.30.9.4 8082
Trying 172.30.9.4...
telnet: connect to address 172.30.9.4: Connection refused
[root@master ~]#
```
Recreate the HPA from nginx-hpa-cpu.yml, then wait a few minutes…
```
[root@localhost ~]# kubectl delete -f nginx-hpa-cpu.yml
horizontalpodautoscaler "nginx-hpa" deleted
[root@localhost ~]#
[root@localhost ~]# kubectl apply -f nginx-hpa-cpu.yml
horizontalpodautoscaler "nginx-hpa" created
[root@localhost ~]#
```
OK, HPA can now reach heapster.
```
[root@localhost ~]# kubectl get hpa
NAME        REFERENCE          TARGETS    MINPODS   MAXPODS   REPLICAS   AGE
nginx-hpa   Deployment/nginx   0% / 70%   1         5         1          39s
[root@localhost ~]#
[root@localhost ~]# kubectl describe hpa
Name:                           nginx-hpa
Namespace:                      default
Labels:                         <none>
Annotations:                    kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"autoscaling/v1","kind":"HorizontalPodAutoscaler","metadata":{"annotations":{},"name":"nginx-hpa","namespace":"default"},"spec":{"maxRepl...
CreationTimestamp:              Sun, 27 Jan 2019 01:04:25 +0800
Reference:                      Deployment/nginx
Metrics:                        ( current / target )
  resource cpu on pods (as a percentage of request):  0% (0) / 70%
Min replicas:                   1
Max replicas:                   5
Conditions:
  Type            Status  Reason            Message
  ----            ------  ------            -------
  AbleToScale     True    ReadyForNewScale  the last scale time was sufficiently old as to warrant a new scale
  ScalingActive   True    ValidMetricFound  the HPA was able to succesfully calculate a replica count from cpu resource utilization (percentage of request)
  ScalingLimited  True    TooFewReplicas    the desired replica count is increasing faster than the maximum scale rate
Events:           <none>
[root@localhost ~]#
```
Testing HPA
So far, HPA supports three API versions: `autoscaling/v1`, `autoscaling/v2beta1`, and `autoscaling/v2beta2`. `autoscaling/v1` supports only CPU as a scaling metric; `autoscaling/v2beta1` adds support for custom metrics; `autoscaling/v2beta2` adds support for external metrics.
For details, see the official documentation: as of Kubernetes 1.11, fetching metrics from Heapster is deprecated and HPA is expected to use the aggregated metrics APIs instead.
> The HorizontalPodAutoscaler normally fetches metrics from a series of aggregated APIs (metrics.k8s.io, custom.metrics.k8s.io, and external.metrics.k8s.io). The metrics.k8s.io API is usually provided by metrics-server, which needs to be launched separately. See metrics-server for instructions. The HorizontalPodAutoscaler can also fetch metrics directly from Heapster. Note: FEATURE STATE: Kubernetes 1.11 deprecated. Fetching metrics from Heapster is deprecated as of Kubernetes 1.11.
autoscaling/v1
```
[root@master ~]# cat nginx-hpa-cpu.yml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: nginx
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 70
[root@master ~]#
```
This stress test targets only the CPU-based HPA.
Load-test command
```
[root@node1 ~]# cat test.sh
while true
do
    wget -q -O- http://192.168.1.204:30080
done
[root@node1 ~]# sh test.sh
```
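If a single request loop can't push CPU past the 70% target on your hardware, a heavier variant (my assumption, not part of the original test) runs several loops in parallel:

```sh
# hypothetical heavier load generator: 5 parallel request loops against the NodePort
for i in $(seq 1 5); do
    while true; do wget -q -O- http://192.168.1.204:30080 >/dev/null; done &
done
wait
```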
Observe the HPA's current load and the Pods
```
[root@master ~]# kubectl get hpa
NAME        REFERENCE          TARGETS    MINPODS   MAXPODS   REPLICAS   AGE
nginx-hpa   Deployment/nginx   0% / 70%   1         5         1          14h
[root@master ~]#
```
```
[root@master ~]# kubectl get hpa
NAME        REFERENCE          TARGETS     MINPODS   MAXPODS   REPLICAS   AGE
nginx-hpa   Deployment/nginx   14% / 70%   1         5         1          14h
[root@master ~]#
```
When load spikes, HPA creates new Pod replicas according to the configured rule (the Pod CPU target is 70%).
```
[root@master ~]# kubectl get hpa
NAME        REFERENCE          TARGETS      MINPODS   MAXPODS   REPLICAS   AGE
nginx-hpa   Deployment/nginx   180% / 70%   1         5         3          14h
[root@master ~]#
[root@master ~]# kubectl get pod -o wide
NAME                     READY     STATUS    RESTARTS   AGE       IP            NODE
nginx-5dcf548595-bk9cr   1/1       Running   1          15h       172.30.94.2   192.168.1.205
nginx-5dcf548595-pdndb   1/1       Running   0          1m        172.30.94.4   192.168.1.205
nginx-5dcf548595-z9d6h   1/1       Running   0          1m        172.30.94.3   192.168.1.205
[root@master ~]#
```
Under continued load, the replica count keeps growing (REPLICAS goes from 3 to 5).
```
[root@master ~]# kubectl get hpa
NAME        REFERENCE          TARGETS      MINPODS   MAXPODS   REPLICAS   AGE
nginx-hpa   Deployment/nginx   139% / 70%   1         5         5          14h
[root@master ~]#
[root@master ~]# kubectl get pod -o wide
NAME                     READY     STATUS              RESTARTS   AGE       IP            NODE
nginx-5dcf548595-9gmqf   0/1       ContainerCreating   0          39s       <none>        192.168.1.204
nginx-5dcf548595-bk9cr   1/1       Running             1          15h       172.30.94.2   192.168.1.205
nginx-5dcf548595-pdndb   1/1       Running             0          10m       172.30.94.4   192.168.1.205
nginx-5dcf548595-r7n4b   1/1       Running             0          39s       172.30.94.5   192.168.1.205
nginx-5dcf548595-z9d6h   1/1       Running             0          10m       172.30.94.3   192.168.1.205
[root@master ~]#
```
Once REPLICAS reaches the configured maximum, it stops growing even though CPU pressure remains high.
```
[root@master ~]# kubectl get hpa
NAME        REFERENCE          TARGETS      MINPODS   MAXPODS   REPLICAS   AGE
nginx-hpa   Deployment/nginx   112% / 70%   1         5         5          14h
[root@master ~]#
[root@master ~]# kubectl get pod -o wide
NAME                     READY     STATUS    RESTARTS   AGE       IP            NODE
nginx-5dcf548595-9gmqf   1/1       Running   0          2m        172.30.9.6    192.168.1.204
nginx-5dcf548595-bk9cr   1/1       Running   1          15h       172.30.94.2   192.168.1.205
nginx-5dcf548595-pdndb   1/1       Running   0          12m       172.30.94.4   192.168.1.205
nginx-5dcf548595-r7n4b   1/1       Running   0          2m        172.30.94.5   192.168.1.205
nginx-5dcf548595-z9d6h   1/1       Running   0          12m       172.30.94.3   192.168.1.205
[root@master ~]#
```
After the load test stops and CPU load drops, HPA automatically reduces the Pod count.
```
[root@master ~]# kubectl get hpa
NAME        REFERENCE          TARGETS     MINPODS   MAXPODS   REPLICAS   AGE
nginx-hpa   Deployment/nginx   40% / 70%   1         5         3          14h
[root@master ~]#
[root@master ~]# kubectl get pod -o wide
NAME                     READY     STATUS    RESTARTS   AGE       IP            NODE
nginx-5dcf548595-pdndb   1/1       Running   0          16m       172.30.94.4   192.168.1.205
nginx-5dcf548595-r7n4b   1/1       Running   0          6m        172.30.94.5   192.168.1.205
nginx-5dcf548595-z9d6h   1/1       Running   0          16m       172.30.94.3   192.168.1.205
[root@master ~]#
```
Over time, HPA keeps scaling down until it reaches the minimum Pod count (MINPODS).
```
[root@master ~]# kubectl get hpa
NAME        REFERENCE          TARGETS    MINPODS   MAXPODS   REPLICAS   AGE
nginx-hpa   Deployment/nginx   0% / 70%   1         5         1          15h
[root@master ~]#
[root@master ~]# kubectl get pod -o wide
NAME                     READY     STATUS    RESTARTS   AGE       IP            NODE
nginx-5dcf548595-z9d6h   1/1       Running   0          1h        172.30.94.3   192.168.1.205
[root@master ~]#
```
The same progression can be observed through kube-dashboard.
The HPA's event log records the scaling history:
```
[root@master ~]# kubectl describe hpa
Name:                           nginx-hpa
Namespace:                      default
Labels:                         <none>
Annotations:                    kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"autoscaling/v1","kind":"HorizontalPodAutoscaler","metadata":{"annotations":{},"name":"nginx-hpa","namespace":"default"},"spec":{"maxRepl...
CreationTimestamp:              Sun, 27 Jan 2019 01:04:25 +0800
Reference:                      Deployment/nginx
Metrics:                        ( current / target )
  resource cpu on pods (as a percentage of request):  0% (0) / 70%
Min replicas:                   1
Max replicas:                   5
Conditions:
  Type            Status  Reason            Message
  ----            ------  ------            -------
  AbleToScale     False   BackoffDownscale  the time since the previous scale is still within the downscale forbidden window
  ScalingActive   True    ValidMetricFound  the HPA was able to succesfully calculate a replica count from cpu resource utilization (percentage of request)
  ScalingLimited  True    TooFewReplicas    the desired replica count is increasing faster than the maximum scale rate
Events:
  Type    Reason             Age               From                       Message
  ----    ------             ----              ----                       -------
  Normal  SuccessfulRescale  41m (x2 over 1h)  horizontal-pod-autoscaler  New size: 5; reason: cpu resource utilization (percentage of request) above target
  Normal  SuccessfulRescale  29m (x2 over 1h)  horizontal-pod-autoscaler  New size: 3; reason: All metrics below target
  Normal  SuccessfulRescale  17m               horizontal-pod-autoscaler  New size: 2; reason: All metrics below target
  Normal  SuccessfulRescale  8m (x2 over 1h)   horizontal-pod-autoscaler  New size: 3; reason: cpu resource utilization (percentage of request) above target
  Normal  SuccessfulRescale  3m (x2 over 12m)  horizontal-pod-autoscaler  New size: 1; reason: All metrics below target
[root@master ~]#
```
autoscaling/v2beta1
`autoscaling/v2beta1` adds support for custom metrics.
```
[root@master ~]# cat nginx-hpa-v2beta1.yml
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: nginx
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: memory
      targetAverageUtilization: 70
  - type: Resource
    resource:
      name: cpu
      targetAverageUtilization: 70
[root@master ~]#
```
```
[root@master ~]# kubectl apply -f nginx-hpa-v2beta1.yml
```
After waiting a few minutes…
In the TARGETS column, the first value (10%) is memory utilization and the second (0%) is CPU utilization.
```
[root@master ~]# kubectl get hpa nginx-hpa
NAME        REFERENCE          TARGETS               MINPODS   MAXPODS   REPLICAS   AGE
nginx-hpa   Deployment/nginx   10% / 70%, 0% / 70%   1         5         1          51s
[root@master ~]#
```
```
[root@master ~]# kubectl describe hpa nginx-hpa
Name:                           nginx-hpa
Namespace:                      default
Labels:                         <none>
Annotations:                    kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"autoscaling/v2beta1","kind":"HorizontalPodAutoscaler","metadata":{"annotations":{},"name":"nginx-hpa","namespace":"default"},"spec":{"ma...
CreationTimestamp:              Mon, 28 Jan 2019 22:22:01 +0800
Reference:                      Deployment/nginx
Metrics:                        ( current / target )
  resource memory on pods (as a percentage of request):  10% (2670592) / 70%
  resource cpu on pods (as a percentage of request):     0% (0) / 70%
Min replicas:                   1
Max replicas:                   5
Conditions:
  Type            Status  Reason              Message
  ----            ------  ------              -------
  AbleToScale     True    ReadyForNewScale    the last scale time was sufficiently old as to warrant a new scale
  ScalingActive   True    ValidMetricFound    the HPA was able to succesfully calculate a replica count from memory resource utilization (percentage of request)
  ScalingLimited  False   DesiredWithinRange  the desired count is within the acceptable range
Events:           <none>
[root@master ~]#
```
autoscaling/v2beta2
Testing shows that k8s 1.9.2 does not yet support the `autoscaling/v2beta2` API version:
```
[root@master ~]# kubectl get hpa.v2beta2.autoscaling -o yaml
the server doesn't have a resource type "hpa" in group "v2beta2.autoscaling"
[root@master ~]#
```
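For reference only: on newer clusters (Kubernetes 1.12+) where `autoscaling/v2beta2` exists, the CPU HPA above would look roughly like the sketch below. This is untested on this 1.9.2 cluster.

```sh
# hypothetical nginx-hpa-v2beta2.yml, for clusters that serve autoscaling/v2beta2
cat <<'EOF' > nginx-hpa-v2beta2.yml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization          # v2beta2 nests the target under a "target" object
        averageUtilization: 70
EOF
```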
References:
http://blog.51cto.com/ylw6006/2113848
https://blog.frognew.com/2017/01/kubernetes-pod-scale.html
https://k8smeetup.github.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/
https://blog.csdn.net/qq_17016649/article/details/79297796
https://github.com/kubernetes/kubernetes/issues/57673