Previous posts in this series


  1. [Openshift] Installing Openshift Origin v3.11 and Deploying an App
  2. [Openshift] Deploying with the Openshift Web Console

Implementing Auto-Scaling with Openshift's HPA


In Openshift, the Horizontal Pod Autoscaler (HPA) can automatically scale the number of Pods in a ReplicaSet or Deployment based on a configured target CPU utilization.

For HPA to work, a Metrics-Server that monitors and collects Pod load metrics is required.
Below, we will deploy Metrics-Server on Openshift and confirm that metrics are being collected.

Deploying Metrics-Server


By default, the Openshift installation does not deploy metrics-server.
It has to be installed separately.

Use the Openshift Ansible playbook from the previous posts, as shown below.

# ansible-playbook -i inventory/hosts.localhost /usr/share/ansible/openshift-ansible/playbooks/metrics-server/config.yml -e openshift_metrics_server_install=true

... < output truncated > ...
  
PLAY RECAP ********************************************************************************************************************************************************************************************************************************************
localhost                  : ok=119  changed=13   unreachable=0    failed=0
  
  
INSTALLER STATUS **************************************************************************************************************************************************************************************************************************************
Initialization          : Complete (0:00:59)
metrics-server Install  : Complete (0:00:56)
Thursday 06 June 2019  13:28:05 +0200 (0:00:00.105)       0:01:56.691 *********
===============================================================================
Run variable sanity checks -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 23.51s
Gathering Facts -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 4.03s
Gather Cluster facts --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 3.00s
metrics_server : Ensure metrics-server namespace is present ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 2.80s
metrics_server : slurp ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2.44s
Initialize openshift.node.sdn_mtu -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2.28s
get openshift_current_version ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 2.09s
metrics_server : generate metrics-server keys -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 1.64s
metrics_server : generate metrics-server secret template --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 1.26s
metrics_server : Applying /tmp/openshift-metrics-server-ansible-56a8WR/templates/metrics-server-certs.yaml ------------------------------------------------------------------------------------------------------------------------------------- 1.17s
metrics_server : Applying /tmp/openshift-metrics-server-ansible-56a8WR/templates/metrics-server-sa.yaml ---------------------------------------------------------------------------------------------------------------------------------------- 1.14s
metrics_server : Applying /tmp/openshift-metrics-server-ansible-56a8WR/templates/metrics-server-resource-reader-rolebinding.yaml --------------------------------------------------------------------------------------------------------------- 1.14s
metrics_server : Applying /tmp/openshift-metrics-server-ansible-56a8WR/templates/metrics-server-service.yaml ----------------------------------------------------------------------------------------------------------------------------------- 1.13s
metrics_server : generate ca certificate chain ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 1.13s
metrics_server : Applying /tmp/openshift-metrics-server-ansible-56a8WR/templates/metrics-server-auth-delegator-rolebinding.yaml ---------------------------------------------------------------------------------------------------------------- 1.12s
metrics_server : Applying /tmp/openshift-metrics-server-ansible-56a8WR/templates/extension-apiserver-authentication-reader-metrics-server-rolebinding.yaml ------------------------------------------------------------------------------------- 1.09s
metrics_server : Applying /tmp/openshift-metrics-server-ansible-56a8WR/templates/metrics-server-apiservice.yaml -------------------------------------------------------------------------------------------------------------------------------- 1.08s
metrics_server : Applying /tmp/openshift-metrics-server-ansible-56a8WR/templates/metrics-server-deployment.yaml -------------------------------------------------------------------------------------------------------------------------------- 1.06s
metrics_server : Checking generation of Service metrics-server --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 1.03s
metrics_server : Determine change status of ClusterRoleBinding system:metrics-server ----------------------------------------------------------------------------------------------------------------------------------------------------------- 1.03s
[root@master openshift-ansible]#
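
If you prefer, the openshift_metrics_server_install=true variable can also be set in the [OSEv3:vars] section of the inventory file instead of being passed on the command line with -e; the playbook run itself is the same either way.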

Once the deployment completes, you can use the oc command to confirm that metrics-server is working properly.

[root@master ~]# oc adm top node
NAME                 CPU(cores)   CPU%      MEMORY(bytes)   MEMORY%
master.example.com   907m         11%       6146Mi          38%

// You can see that metrics-server has been deployed in the openshift-metrics-server namespace.

[root@master ~]#  oc adm top pod --all-namespaces
NAMESPACE                           NAME                                           CPU(cores)   MEMORY(bytes)
default                             docker-registry-1-9t2zp                        2m           32Mi
default                             registry-console-1-n4v9t                       0m           1Mi
default                             router-1-58cpz                                 4m           45Mi
kube-service-catalog                apiserver-pnsnf                                1m           51Mi
kube-service-catalog                controller-manager-qsq5m                       10m          22Mi
kube-system                         master-api-master.example.com                  96m          1061Mi
kube-system                         master-controllers-master.example.com          99m          532Mi
kube-system                         master-etcd-master.example.com                 36m          408Mi
openshift-ansible-service-broker    asb-1-k9nww                                    2m           19Mi
openshift-console                   console-5677c7c58d-n5z5h                       2m           9Mi
openshift-metrics-server            metrics-server-7bf4cf7dd4-svfs7                1m           30Mi
openshift-monitoring                alertmanager-main-0                            4m           25Mi
openshift-monitoring                alertmanager-main-1                            7m           25Mi
openshift-monitoring                alertmanager-main-2                            6m           26Mi
openshift-monitoring                cluster-monitoring-operator-6465f8fbc7-f4w4t   0m           35Mi
openshift-monitoring                grafana-6b9f85786f-z9gpj                       6m           42Mi
openshift-monitoring                kube-state-metrics-7449d589bc-qsrvh            7m           67Mi
openshift-monitoring                node-exporter-hjh46                            2m           21Mi
openshift-monitoring                prometheus-k8s-0                               64m          495Mi
openshift-monitoring                prometheus-k8s-1                               47m          485Mi
openshift-monitoring                prometheus-operator-6644b8cd54-cv2hf           0m           25Mi
openshift-node                      sync-fnd6m                                     0m           7Mi
openshift-sdn                       ovs-ssckw                                      5m           80Mi
openshift-sdn                       sdn-77shb                                      12m          37Mi
openshift-template-service-broker   apiserver-2mpk8                                4m           30Mi
openshift-web-console               webconsole-7df4f9f689-mxfkj                    15m          25Mi
sample-project                      chhanz-hello-example                           0m           0Mi
sample-project                      http-example-1-5dhj5                           3m           9Mi
sample-project                      sampleapp-2-hbmzs                              7m           222Mi
test-project                        load-generator-779c5f458c-g4kff                0m           0Mi
test-project                        mysample-3-cr8cp                               1m           215Mi
test-project                        mysample-3-dvrlp                               1m           206Mi
test-project                        mysample-3-npgq7                               1m           228Mi
test-project                        mysample-3-t9dwg                               1m           214Mi
test-project                        mysample-3-vtm5j                               1m           217Mi
test-project                        nginx-1-rjd86                                  0m           1Mi
test-project                        php-apache-b5b7bd9c8-hpcw8                     0m           9Mi
test-project                        ruby-ex-1-nkfxn                                1m           83Mi
[root@master ~]#

Note that it takes a little while after deployment before the oc adm top node command starts returning metrics.
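
If the metrics do not show up right away, one quick thing to check is whether the metrics API service registered by metrics-server is reporting as available (this assumes the default APIService name, v1beta1.metrics.k8s.io):

[root@master ~]# oc get apiservice v1beta1.metrics.k8s.io -o yaml

Look for an Available condition with status True under status.conditions; once it is present, the oc adm top commands should start returning data.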

Implementing HPA


To implement Auto-Scaling with HPA, create a new Project as shown below.

[root@master ~]# oc new-project hpa-project
Now using project "hpa-project" on server "https://master.example.com:8443".
 
You can add applications to this project with the 'new-app' command. For example, try:
 
    oc new-app centos/ruby-25-centos7~https://github.com/sclorg/ruby-ex.git
 
to build a new example application in Ruby.
[root@master ~]#
[root@master ~]# oc get po
No resources found.
[root@master ~]#
[root@master ~]#

Next, create an App that performs CPU-intensive work on every request, so that load can be generated and detected.

Creating and Building the Dockerfile

# ls
Dockerfile  index.php

# cat Dockerfile
FROM php:5-apache
ADD index.php /var/www/html/index.php
RUN chmod a+rx index.php
# cat index.php
<?php
  $x = 0.0001;
  for ($i = 0; $i <= 1000000; $i++) {
    $x += sqrt($x);
  }
  echo "OK!";
?>
# docker build -t hpa-example .
Sending build context to Docker daemon  3.072kB
Step 1/3 : FROM php:5-apache
5-apache: Pulling from library/php
5e6ec7f28fb7: Pull complete
cf165947b5b7: Pull complete
7bd37682846d: Pull complete
99daf8e838e1: Pull complete
ae320713efba: Pull complete
ebcb99c48d8c: Pull complete
9867e71b4ab6: Pull complete
936eb418164a: Pull complete
bc298e7adaf7: Pull complete
ccd61b587bcd: Pull complete
b2d4b347f67c: Pull complete
56e9dde34152: Pull complete
9ad99b17eb78: Pull complete
Digest: sha256:0a40fd273961b99d8afe69a61a68c73c04bc0caa9de384d3b2dd9e7986eec86d
Status: Downloaded newer image for php:5-apache
 ---> 24c791995c1e
Step 2/3 : ADD index.php /var/www/html/index.php
 ---> bcc3ff35ceb8
Step 3/3 : RUN chmod a+rx index.php
 ---> Running in d1e173fb27a3
Removing intermediate container d1e173fb27a3
 ---> f4bb43246866
Successfully built f4bb43246866
Successfully tagged hpa-example:latest
  
# docker push han0495/hpa-example
The push refers to repository [docker.io/han0495/hpa-example]
b42dc42a2d18: Pushed
056e15bb815c: Pushed
1aab22401f12: Mounted from library/php
13ab94c9aa15: Mounted from library/php
588ee8a7eeec: Mounted from library/php
bebcda512a6d: Mounted from library/php
5ce59bfe8a3a: Mounted from library/php
d89c229e40ae: Mounted from library/php
9311481e1bdc: Mounted from library/php
4dd88f8a7689: Mounted from library/php
b1841504f6c8: Mounted from library/php
6eb3cfd4ad9e: Mounted from library/php
82bded2c3a7c: Mounted from library/php
b87a266e6a9c: Mounted from library/php
3c816b4ead84: Mounted from library/php
latest: digest: sha256:07c8ebfbe5b8084b878d0ed4ebafe5baad124b0b932e4f56d81480fbf715f03f size: 3449
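
One detail worth noting: the image was built locally as hpa-example:latest but pushed as han0495/hpa-example (the Docker Hub account used in this example). For the push to succeed, the local image also needs to carry that repository name, for example:

# docker tag hpa-example han0495/hpa-example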

Deploying the App

Deploy the PHP App to Openshift.

[root@master ~]# oc login -u admin
Logged into "https://master.example.com:8443" as "admin" using existing credentials.
 
You have access to the following projects and can switch between them with 'oc project <projectname>':
 
    default
  * hpa-project
    kube-public
    kube-service-catalog
    kube-system
    management-infra
    openshift
    openshift-ansible-service-broker
    openshift-console
    openshift-infra
    openshift-logging
    openshift-metrics-server
    openshift-monitoring
    openshift-node
    openshift-sdn
    openshift-template-service-broker
    openshift-web-console
    sample-project
    test-project
 
Using project "hpa-project".
 
[root@master ~]# oc run hpa-example-pod --image=han0495/hpa-example --expose --port 80 --requests=cpu=200m
service/hpa-example-pod created
deploymentconfig.apps.openshift.io/hpa-example-pod created

[root@master ~]# oc get all
NAME                           READY     STATUS              RESTARTS   AGE
pod/hpa-example-pod-1-deploy   1/1       Running             0          9s
pod/hpa-example-pod-1-ngj64    0/1       ContainerCreating   0          6s
 
NAME                                      DESIRED   CURRENT   READY     AGE
replicationcontroller/hpa-example-pod-1   1         1         0         9s
 
NAME                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
service/hpa-example-pod   ClusterIP   172.30.193.122   <none>        80/TCP    9s
 
NAME                                                 REVISION   DESIRED   CURRENT   TRIGGERED BY
deploymentconfig.apps.openshift.io/hpa-example-pod   1          1         1         config

The deployment is now complete, as shown above.
The Pod was created with a CPU request of 200m; the HPA uses this request as the baseline when it calculates CPU utilization.
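
If you want to double-check that the request was applied, you can read it back from the DeploymentConfig (a quick check; the jsonpath below assumes a single container in the Pod template):

[root@master ~]# oc get dc hpa-example-pod -o jsonpath='{.spec.template.spec.containers[0].resources.requests.cpu}'

This should print 200m.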

Creating the HPA

Create an HPA and attach it to the workload we just deployed.

[root@master ~]# oc autoscale deploymentconfig.apps.openshift.io/hpa-example-pod --min 1 --max 8 --cpu-percent=70
horizontalpodautoscaler.autoscaling/hpa-example-pod autoscaled
 
 
[root@master ~]# oc describe hpa
Name:                                                  hpa-example-pod
Namespace:                                             hpa-project
Labels:                                                <none>
Annotations:                                           <none>
CreationTimestamp:                                     Mon, 24 Jun 2019 08:20:40 +0200
Reference:                                             DeploymentConfig/hpa-example-pod
Metrics:                                               ( current / target )
  resource cpu on pods  (as a percentage of request):  0% (0) / 70%
Min replicas:                                          1
Max replicas:                                          8
DeploymentConfig pods:                                 1 current / 1 desired
Conditions:
  Type            Status  Reason            Message
  ----            ------  ------            -------
  AbleToScale     True    ReadyForNewScale  the last scale time was sufficiently old as to warrant a new scale
  ScalingActive   True    ValidMetricFound  the HPA was able to successfully calculate a replica count from cpu resource utilization (percentage of request)
  ScalingLimited  True    TooFewReplicas    the desired replica count is increasing faster than the maximum scale rate
Events:           <none>
[root@master ~]#

The oc describe hpa output above shows the details of the HPA we just created.
When load occurs, the HPA scales the workload so that the average CPU utilization of its Pods stays around the 70% target. It keeps at least 1 Pod and can scale out to a maximum of 8 Pods.
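
For reference, the HPA picks its desired replica count using roughly desiredReplicas = ceil(currentReplicas × averageUtilization / targetUtilization), where utilization is measured as a percentage of each Pod's CPU request. For example, if a single Pod is running at 280% of its 200m request, the HPA will ask for ceil(1 × 280 / 70) = 4 replicas.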

Load Test


We will use the Grafana console that ships with Openshift to monitor the actual load on the Pod.

At this point CPU utilization is at 0%, so there is virtually no load.

Now let's generate some load on the Pod.

[root@master ~]# oc run -it load-generator --image=busybox
[root@master ~]# oc get all
NAME                          READY     STATUS    RESTARTS   AGE
pod/hpa-example-pod-1-ngj64   1/1       Running   0          16m
pod/load-generator-1-tl2s4    1/1       Running   0          29s
 
NAME                                      DESIRED   CURRENT   READY     AGE
replicationcontroller/hpa-example-pod-1   1         1         1         16m
replicationcontroller/load-generator-1    1         1         1         32s
 
NAME                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
service/hpa-example-pod   ClusterIP   172.30.193.122   <none>        80/TCP    16m
 
NAME                                                  REFERENCE                          TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
horizontalpodautoscaler.autoscaling/hpa-example-pod   DeploymentConfig/hpa-example-pod   0%/70%    1         8         1          13m
 
NAME                                                 REVISION   DESIRED   CURRENT   TRIGGERED BY
deploymentconfig.apps.openshift.io/hpa-example-pod   1          1         1         config
deploymentconfig.apps.openshift.io/load-generator    1          1         1         config
 
[root@master ~]# oc exec load-generator-1-f9xrd -ti /bin/sh
/ #
/ # while true; do wget -q -O- http://hpa-example-pod.hpa-project.svc.cluster.local; done

As shown above, a load-generator Pod was created from the busybox image.
The while true; do wget -q -O- http://hpa-example-pod.hpa-project.svc.cluster.local; done loop then sends continuous requests to the App to generate load.
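
While the load loop is running, you can also follow the autoscaler from another terminal (the -w flag streams updates as they happen):

[root@master ~]# oc get hpa hpa-example-pod -w

The current value in the TARGETS column will climb well past 70%, and the REPLICAS column will increase along with it.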

The HPA gradually starts to detect the load.

Once the HPA has detected the load, it starts Auto-Scaling the Pods to keep the average CPU utilization around the 70% target.

Under sustained load, the number of Pods has grown to 7.

Let's look at the Pod status in Grafana.

You can see that the heavy load has pushed the Pod count up to the configured maximum of 8.

Grafana also confirms that 8 Pods are running as a result of Auto-Scaling.
In this way, the HPA detects load and Auto-Scales the Pods so that the service keeps running normally.

Ending the Load Test

When the load stops, the HPA detects it and adjusts the number of Pods back down.
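
Keep in mind that scaling back down is intentionally not immediate: the controller waits through a cooldown period (on the order of a few minutes by default) before reducing the replica count, so that brief dips in load do not cause the Pod count to flap.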

In this post, we implemented Auto-Scaling using the HPA.
With Openshift's HPA, you can build and operate services that scale flexibly and remain continuously available.
