
Kubernetes Event-Driven Autoscaling

  • Writer: Marko Brkusanin
  • Jun 28
  • 3 min read

Introduction to KEDA Scaler for Kubernetes


Scaling Kubernetes applications efficiently has always been a challenge, but KEDA offers a game-changing solution.

KEDA (Kubernetes Event-driven Autoscaling) is an open-source project that allows Kubernetes applications to scale based on external events. Unlike traditional Kubernetes autoscalers that rely solely on CPU and memory metrics, KEDA enables scaling based on a variety of external sources, such as message queues, databases, and other event-driven architectures. The list of supported scalers has been growing since the moment KEDA was released. You can also contribute to this open-source project and create your own custom scaler.

Key Features of KEDA

  • Event-driven Scaling: KEDA allows applications to scale up or down based on the number of events in a queue or other metrics.

  • Integration with Prometheus: It can utilize Prometheus metrics for scaling decisions, providing flexibility in monitoring.

  • Support for Multiple Scalers: KEDA supports a variety of scalers, including those for Azure Queue Storage, Kafka, RabbitMQ, and more.

  • Easy to Deploy: KEDA can be easily deployed in a Kubernetes cluster and integrates seamlessly with existing applications.
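
As a sketch of what one of these scalers looks like in practice, a hypothetical Kafka trigger could be configured like this (the deployment, broker address, consumer group, and topic names are illustrative placeholders, not from this article):

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: kafka-scaledobject
  namespace: default
spec:
  scaleTargetRef:
    name: my-consumer-deployment    # hypothetical consumer deployment
  triggers:
  - type: kafka
    metadata:
      bootstrapServers: kafka:9092  # placeholder broker address
      consumerGroup: my-group       # placeholder consumer group
      topic: my-topic               # placeholder topic
      lagThreshold: "50"            # scale up when consumer lag exceeds 50
```

Here the number of pods follows the consumer lag on the topic rather than CPU or memory usage.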

How KEDA Works

KEDA operates by monitoring the specified metrics from various sources and adjusting the number of pods in a deployment accordingly. The process involves:

  1. Scaler Configuration: Define the scaler in a KEDA ScaledObject, specifying the trigger and the scaling logic.

  2. Monitoring: KEDA continuously monitors the defined metrics.

  3. Scaling Actions: Based on the metrics, KEDA scales the deployment up or down by adjusting the number of pods.

Setting Up KEDA in Your Kubernetes Cluster

To set up KEDA, follow these steps:


Install KEDA: Use Helm or kubectl to install KEDA in your Kubernetes cluster.

helm repo add kedacore https://kedacore.github.io/charts
helm repo update
helm install keda kedacore/keda --namespace keda --create-namespace

Create a ScaledObject: Define a ScaledObject YAML file that specifies the deployment and the desired scaling triggers.

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: cpu-scaledobject
  namespace: default
spec:
  scaleTargetRef:
    name: my-deployment
  triggers:
  - type: cpu
    metricType: Utilization # Allowed types are 'Utilization' or 'AverageValue'
    metadata:
      value: "50"

Deploy Your Application: Ensure your application is deployed and configured to work with KEDA.
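
For completeness, a minimal sketch of a deployment the ScaledObject above could target (the image and labels are illustrative assumptions; the resource requests are needed because the cpu trigger scales on utilization relative to requests):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment       # must match scaleTargetRef.name in the ScaledObject
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app           # illustrative label
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: app
        image: nginx:1.27   # placeholder image
        resources:
          requests:
            cpu: 100m       # required for Utilization-based cpu scaling
```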


KEDA works in conjunction with the Kubernetes built-in HPA and can use its settings. One issue I was facing is that KEDA can react to sudden CPU or memory spikes and scale up your app without actual need. This can be prevented using the built-in HPA's advanced configuration:

spec:
  scaleTargetRef:
    name: <deployment_name>
  minReplicaCount: 1
  maxReplicaCount: 3
  cooldownPeriod: 60
  advanced:
    horizontalPodAutoscalerConfig:
      behavior:
        scaleUp:
          stabilizationWindowSeconds: 300  # 5 minutes
          policies:
            - type: Pods
              value: 1
              periodSeconds: 120  # only allow adding 1 pod every 2 minutes
        scaleDown:
          stabilizationWindowSeconds: 300  # also smooth scale down more
          policies:
            - type: Percent
              value: 50  # reduce by 50% at most per 2 minutes
              periodSeconds: 120
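
To see what the scaleUp policy above does in practice, here is a small Python sketch (an illustration, not KEDA code) of how allowing at most 1 pod per 120-second period paces the scale-up:

```python
def scale_up_steps(current: int, desired: int, pods_per_period: int = 1) -> list[int]:
    """Simulate the scaleUp policy above: at most `pods_per_period` pods
    may be added per periodSeconds window, so reaching `desired` replicas
    takes several windows. Returns the replica count after each window."""
    steps = []
    while current < desired:
        current = min(current + pods_per_period, desired)
        steps.append(current)
    return steps

# Going from 1 replica to maxReplicaCount=3 takes two 120-second windows:
print(scale_up_steps(1, 3))  # [2, 3]
```

With these limits a sudden burst of load still only adds one pod every two minutes instead of jumping straight to the maximum.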

Let's break down this configuration:

  minReplicaCount: 1 -> minimum number of replicas for your deployment
  maxReplicaCount: 3 -> maximum number of replicas for your deployment
  cooldownPeriod: 60 -> period in seconds to wait after the last trigger reports active before scaling the workload back down (it only applies when scaling to zero)


scaleUp: stabilizationWindowSeconds: 300 -> the metric recommendation is evaluated over this time period. It means that brief CPU or memory spikes will not immediately trigger scaling; instead, behavior over the whole 5-minute window is considered.
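
This smoothing can be illustrated with a short Python sketch (an approximation of the HPA behavior, not KEDA code): for scale-up the HPA acts on the smallest replica recommendation seen within the stabilization window, and for scale-down on the largest, so lone spikes and dips are both ignored:

```python
def stabilized_recommendation(window: list[int], direction: str) -> int:
    """Approximate HPA stabilization: over the replica recommendations
    collected during stabilizationWindowSeconds, scale-up takes the
    minimum (a brief spike does not add pods) and scale-down takes the
    maximum (a brief dip does not remove pods)."""
    return min(window) if direction == "up" else max(window)

# A single spike to 5 replicas within the window does not scale up:
print(stabilized_recommendation([1, 1, 5, 1], "up"))    # 1
# A brief dip to 1 replica does not scale down from 3:
print(stabilized_recommendation([3, 1, 3, 3], "down"))  # 3
```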


Conclusion

KEDA is a powerful tool for managing the scalability of Kubernetes applications in an event-driven architecture. By leveraging KEDA, developers can ensure that their applications respond efficiently to varying workloads, ultimately improving resource utilization and reducing costs. Whether you are working with cloud-native applications or traditional workloads, KEDA can enhance your Kubernetes deployments. Using all three types of autoscaler in your Kubernetes cluster:

  1. Cluster autoscaler

  2. HPA (Horizontal Pod Autoscaling)

  3. VPA (Vertical Pod Autoscaling)

can make your app really powerful and resilient to all kinds of events that can occur in your environment.

If you already use Kubernetes, there is no reason not to start using KEDA today and optimize your Kubernetes deployments.
