A Kubernetes cluster helps you host a microservices architecture, and it lets you configure autoscaling of both nodes and pods based on application demand. But if you configure node autoscaling, there is a high chance that cluster nodes scale up successfully when there is resource demand yet don’t scale down as expected. According to the official documentation, there are several reasons that can stop the cluster autoscaler from scaling down:

  • Pods with a restrictive PodDisruptionBudget (see the PDB sketch at the end of this section).
  • Kube-system pods that:
    • are not run on the node by default, *
    • don’t have a pod disruption budget set, or their PDB is too restrictive (since CA 0.6).
  • Pods that are not backed by a controller object (so not created by a Deployment, ReplicaSet, Job, StatefulSet, etc.). *
  • Pods with local storage. *
  • Pods that cannot be moved elsewhere due to various constraints (lack of resources, non-matching node selectors or affinity, matching anti-affinity, etc.).
  • Pods that have the following annotation set:
    "cluster-autoscaler.kubernetes.io/safe-to-evict": "false"

* Items marked with an asterisk do not block scale-down if the pod has the following annotation (supported in CA 1.0.3 or later):
    "cluster-autoscaler.kubernetes.io/safe-to-evict": "true"

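For the PodDisruptionBudget case in the first bullet, scale-down is blocked whenever evicting a pod would violate the budget. A sketch, assuming the same hypothetical web app running 2 replicas: with minAvailable equal to the replica count, the budget allows zero disruptions, so a node hosting these pods can never be drained.

    # Sketch of a restrictive PDB (hypothetical app "web" with 2 replicas).
    # minAvailable equal to the replica count leaves zero disruption budget.
    apiVersion: policy/v1
    kind: PodDisruptionBudget
    metadata:
      name: web-pdb
    spec:
      minAvailable: 2    # lowering this (e.g. to 1) would permit scale-down evictions
      selector:
        matchLabels:
          app: web

You can confirm the budget leaves no headroom with kubectl get pdb, which reports ALLOWED DISRUPTIONS as 0 here. If scale-down still stalls, the autoscaler usually records its own view in the cluster-autoscaler-status ConfigMap in kube-system (when status writing is enabled), e.g. kubectl -n kube-system describe configmap cluster-autoscaler-status.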