
Containers autoscaling reference

Reviewed on 18 February 2025 | Published on 18 February 2025

Benefits of autoscaling

Autoscaling offers several benefits, including improved responsiveness and cost efficiency. By automatically adjusting the number of instances of your resource based on current demand, you ensure that your applications can handle varying loads without manual intervention. This not only enhances user experience by maintaining performance levels, but also helps reduce costs by only using resources when necessary. Additionally, autoscaling promotes optimal resource utilization, minimizing the risk of performance degradation during peak times.

Autoscaling can be based on exactly one of the following parameters:

  • Request concurrency
  • CPU percentage
  • RAM percentage

Request concurrency

Minimum and maximum scales

Minimum scale (min-scale)

This parameter sets the minimum number of instances your resource is allowed to scale down to:

  • If you set a value of 0, all instances of your resource will be terminated after 15 minutes of inactivity.

  • If you set a value of 1 or more, the corresponding number of instances of your resource will remain available at all times.

Customizing the minimum scale of a Serverless Container can help ensure that an instance remains pre-allocated and ready to handle requests, reducing delays associated with cold starts. However, this setting also impacts the cost of your Serverless Container.
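
For illustration, the Python sketch below shows how a scale-down floor such as min-scale behaves. The function and constant names are assumptions made for this example and are not part of Scaleway's API; only the 15-minute inactivity window and the floor semantics come from the documentation above.

```python
IDLE_SHUTDOWN_MINUTES = 15  # documented inactivity window before scaling to zero

def apply_min_scale(desired: int, min_scale: int, idle_minutes: float) -> int:
    """Clamp the autoscaler's desired instance count to the configured floor.

    min_scale = 0  -> all instances may be terminated after 15 minutes of inactivity
    min_scale >= 1 -> that many instances stay warm, avoiding cold starts at extra cost
    """
    if min_scale == 0 and idle_minutes >= IDLE_SHUTDOWN_MINUTES:
        return 0
    return max(desired, min_scale)
```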

Maximum scale (max-scale)

This parameter sets the maximum number of instances of your resource. You should adjust it based on your resource’s traffic spikes, keeping in mind that you may wish to limit the max scale to manage costs effectively.

When the maximum scale is reached, new requests are queued for processing. When the queue is full, the resource returns 503 errors for requests received beyond this point.
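
The queueing behavior can be pictured with the following hedged sketch. The queue capacity and function names are hypothetical; only the order of outcomes (scale up while below max-scale, then queue, then 503) reflects the documented behavior.

```python
from collections import deque

def admit_request(running_instances: int, max_scale: int,
                  queue: deque, queue_capacity: int) -> str:
    """Decide the fate of an incoming request: 'scale_up', 'queued', or 'rejected_503'."""
    if running_instances < max_scale:
        return "scale_up"        # below max-scale: another instance can absorb the load
    if len(queue) < queue_capacity:
        queue.append("request")  # max-scale reached: hold the request until an instance frees up
        return "queued"
    return "rejected_503"        # queue full: the resource answers with an HTTP 503 error
```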

Autoscaler behavior

The autoscaler adds new instances (scales up) when the defined number of concurrent requests per instance (80 by default) is reached.

The same autoscaler removes instances (scales down), to a minimum of one instance, when no requests are received for 30 seconds.

Scaling down to zero (if min-scale is set to 0) happens after 15 minutes of inactivity.
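
Put together, the concurrency-based behavior can be approximated by the sketch below. Scaleway does not publish the exact algorithm, so the names, signature, and ceiling-based sizing are assumptions; only the 80-request default, the 30-second scale-down delay, and the 15-minute scale-to-zero window come from the text above.

```python
import math

CONCURRENCY_TARGET = 80     # default concurrent requests per instance before scaling up
SCALE_DOWN_DELAY_S = 30     # delay before shrinking to a single instance
SCALE_TO_ZERO_S = 15 * 60   # inactivity window before scaling to zero (min-scale 0 only)

def desired_instances(concurrent_requests: int, idle_seconds: float,
                      current: int, min_scale: int, max_scale: int) -> int:
    """Approximate the documented concurrency-based scaling decision."""
    if concurrent_requests == 0:
        if min_scale == 0 and idle_seconds >= SCALE_TO_ZERO_S:
            return 0                   # scale to zero after 15 minutes of inactivity
        if idle_seconds >= SCALE_DOWN_DELAY_S:
            return max(min_scale, 1)   # shrink to one instance after 30 seconds without requests
        return current                 # too early to scale down
    wanted = math.ceil(concurrent_requests / CONCURRENCY_TARGET)
    return min(max(wanted, min_scale, 1), max_scale)
```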

Note

Redeploying your resource does not entail downtime. Instances are gradually replaced with new ones.

Old instances remain running to handle traffic, while new instances are brought up and verified before fully replacing the old ones. This method helps maintain application availability and service continuity throughout the update process.

CPU and RAM percentage

Minimum and maximum scales

Minimum scale (min-scale)

This parameter sets the minimum number of instances your resource is allowed to scale down to.

Note

For technical reasons, the minimum scale for CPU/RAM-based autoscaling is 1.

Maximum scale (max-scale)

This parameter sets the maximum number of instances of your resource. You should adjust it based on your resource’s CPU or RAM usage spikes, keeping in mind that you may wish to limit the max scale to manage costs effectively.

Autoscaler behavior

The autoscaler starts new instances when the existing instances' CPU or RAM usage exceeds the threshold you defined for a certain amount of time.

The same autoscaler removes instances when CPU or RAM usage drops and the remaining instances can absorb the load without exceeding the defined threshold.
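
As a rough illustration, the sketch below sizes the fleet so that average CPU or RAM usage per instance stays under the configured threshold. The proportional sizing and the function name are assumptions; the real controller also waits for the threshold to be exceeded for a certain amount of time before acting, which is omitted here for brevity.

```python
import math

def desired_instances_for_usage(current_instances: int, avg_usage_percent: float,
                                threshold_percent: float, max_scale: int) -> int:
    """Size the fleet so average CPU/RAM usage per instance stays below the threshold."""
    if avg_usage_percent <= 0:
        return 1                      # CPU/RAM-based autoscaling never scales below 1 instance
    # Total load redistributed over the new instance count should sit under the threshold.
    wanted = math.ceil(current_instances * avg_usage_percent / threshold_percent)
    return min(max(wanted, 1), max_scale)
```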
