Autoscaling
Autoscaling, also spelled auto scaling or auto-scaling, and sometimes called automatic scaling, is a method used in cloud computing whereby the amount of computational resources in a server farm, typically measured by the number of active servers, varies automatically based on the load on the farm. In practice, this means the number of servers a company pays for grows and shrinks as user activity on its web servers rises and falls. It is closely related to, and builds on, the idea of load balancing.
Advantages
Autoscaling offers the following advantages:
- For companies running their own web server infrastructure, autoscaling typically means allowing some servers to sleep during times of low load, saving on electricity costs.
- For companies using infrastructure hosted in the cloud, autoscaling can mean lower bills, because most cloud providers charge based on total usage rather than maximum capacity.
- Even for companies that cannot reduce the total compute capacity they run or pay for at any given time, autoscaling can help by allowing the company to run less time-sensitive workloads on machines that get freed up by autoscaling during times of low traffic.
- Autoscaling solutions, such as the one offered by Amazon Web Services, can also replace unhealthy instances, thereby protecting somewhat against hardware, network, and application failures.
- Autoscaling can offer greater uptime and availability in cases where production workloads are variable and unpredictable.
Terminology
In the list below, we use the terminology of Amazon Web Services; alternative names are noted, and terminology tied to the names of specific Amazon services is avoided.
Name | Meaning | Alternative names |
Instance | A single server or machine that is part of the group of machines subject to autoscaling | |
Autoscaling group | The collection of instances subject to autoscaling, along with all the associated policies and state information | Managed instance group |
Size | The number of instances currently part of the autoscaling group | |
Desired capacity | The number of instances the autoscaling group should have at any given point in time. If the current size is less than the desired capacity, the autoscaling group will try to launch new instances; if it is greater, the group will try to remove instances | |
Minimum size | A number of instances below which the desired capacity is not allowed to fall | |
Maximum size | A number of instances above which the desired capacity is not allowed to rise | |
Metric | A measurement associated with the autoscaling group, for which a time series of data points is generated regularly. Thresholds for metrics can be used to set autoscaling policies. Metrics can be based on aggregates of metrics for instances of the autoscaling group, or based on load balancers associated with the autoscaling group | |
Scaling policy | A policy that specifies a change to the autoscaling group's desired capacity in response to metrics crossing specific thresholds. Scaling policies can have associated cooldown periods, which prevent additional scaling actions from occurring immediately after a specific scaling action. Changes to desired capacity could be incremental or could specify a new value of the desired capacity. Policies that increase the desired capacity are called "scaling out" or "scaling up" policies, and policies that decrease the desired capacity are called "scaling in" or "scaling down" policies | |
Health check | A way for the autoscaling group to determine if the instances attached to it are functioning properly. A health check may be based on whether the instance still exists and is reachable, or it could be based on whether the instance is still registered and in service with an associated load balancer | |
Launch configuration | A description of the parameters and scripts used when launching a new instance. This includes the instance type, purchase options, possible availability zones for launch, machine image, and scripts to run on launch | Instance template |
Manual scaling | A scaling action executed manually | |
Scheduled scaling | A scaling policy that is executed at a specific time, for instance, a particular time of day, week, month, or year. See the section on scheduled autoscaling below | |
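The interaction between desired capacity, minimum size, maximum size, and scaling policies can be sketched as a small control loop. The Python below is illustrative only; the class, thresholds, and metric names are invented for this example and do not correspond to any particular provider's API.

```python
class AutoscalingGroup:
    """Toy model of an autoscaling group: tracks size and desired capacity,
    and clamps desired capacity between the minimum and maximum size."""

    def __init__(self, min_size, max_size, desired_capacity):
        self.min_size = min_size
        self.max_size = max_size
        self.desired_capacity = self._clamp(desired_capacity)
        self.instances = self.desired_capacity  # current size

    def _clamp(self, n):
        # Desired capacity may never fall below min_size or rise above max_size.
        return max(self.min_size, min(self.max_size, n))

    def apply_policy(self, change):
        # A scaling policy requests an incremental change to desired capacity.
        self.desired_capacity = self._clamp(self.desired_capacity + change)

    def reconcile(self):
        # The group launches or removes instances until size matches desired capacity.
        self.instances = self.desired_capacity


def reactive_step(group, avg_cpu, high=70.0, low=30.0):
    """One iteration of a reactive scaling loop: scale out when the metric is
    above the high threshold, scale in when it is below the low threshold."""
    if avg_cpu > high:
        group.apply_policy(+1)   # scale out
    elif avg_cpu < low:
        group.apply_policy(-1)   # scale in
    group.reconcile()
```

For example, a group created with `min_size=2, max_size=10, desired_capacity=4` grows to 5 instances after one step at 85% average CPU, and repeated steps at low CPU shrink it no further than the minimum size of 2.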
Practice
Amazon Web Services (AWS)
Amazon Web Services launched the Amazon Elastic Compute Cloud (EC2) service in August 2006, allowing developers to programmatically create and terminate instances. At initial launch, AWS did not offer autoscaling, but the ability to programmatically create and terminate instances gave developers the flexibility to write their own autoscaling code.
Third-party autoscaling software for AWS began appearing around April 2008, including tools by Scalr and RightScale. RightScale was used by Animoto, which was able to handle Facebook traffic by adopting autoscaling.
On May 18, 2009, Amazon launched its own autoscaling feature along with Elastic Load Balancing, as part of Amazon Elastic Compute Cloud. Autoscaling is now an integral component of Amazon's EC2 offering, and can be managed through a web browser or the command-line tool. In May 2016, autoscaling was also offered in the AWS ECS service.
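As a sketch of the command-line workflow, the following AWS CLI invocations create a launch configuration and an autoscaling group with a minimum, maximum, and desired capacity; all names, IDs, and values below are placeholders for illustration, not a recommended configuration.

```shell
# Create a launch configuration describing how new instances are started
# (image ID, instance type, and names are placeholders).
aws autoscaling create-launch-configuration \
    --launch-configuration-name example-lc \
    --image-id ami-12345678 \
    --instance-type t2.micro

# Create an autoscaling group that keeps between 2 and 10 instances,
# starting with a desired capacity of 4.
aws autoscaling create-auto-scaling-group \
    --auto-scaling-group-name example-asg \
    --launch-configuration-name example-lc \
    --min-size 2 \
    --max-size 10 \
    --desired-capacity 4 \
    --availability-zones us-east-1a
```

From here, scaling policies and health checks can be attached to the group so that capacity adjusts without further manual commands.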
On-demand video provider Netflix documented their use of autoscaling with Amazon Web Services to meet their highly variable consumer needs. They found that aggressive scaling up and delayed and cautious scaling down served their goals of uptime and responsiveness best.
In an article for TechCrunch, Zev Laderman, the co-founder and CEO of Newvem, a service that helps optimize AWS cloud infrastructure, recommended that startups use autoscaling in order to keep their Amazon Web Services costs low.
Various best practice guides for AWS use suggest using its autoscaling feature even in cases where the load is not variable. That is because autoscaling offers two other advantages: automatic replacement of any instances that become unhealthy for any reason, and automatic replacement of spot instances that get interrupted for price or capacity reasons, making it more feasible to use spot instances for production purposes. Netflix's internal best practices require every instance to be in an autoscaling group, and its conformity monkey terminates any instance not in an autoscaling group in order to enforce this best practice.
Microsoft's Windows Azure
On June 27, 2013, Microsoft announced that it was adding autoscaling support to its Windows Azure cloud computing platform. Documentation for the feature is available on the Microsoft Developer Network.
Oracle Cloud
Oracle Cloud allows server instances to automatically scale a cluster in or out by defining an auto-scaling rule. These rules are based on CPU and/or memory utilization and determine when to add or remove nodes.
Google Cloud Platform
On November 17, 2014, Google Compute Engine announced a public beta of its autoscaling feature for use in Google Cloud Platform applications. As of March 2015, the autoscaling tool was still in beta.
Kubernetes Horizontal Pod Autoscaler
The Horizontal Pod Autoscaler automatically scales the number of pods in a replication controller, deployment, replica set, or stateful set, based on observed CPU utilization.
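The core of the Horizontal Pod Autoscaler's decision is a simple ratio, documented by Kubernetes: the desired replica count is the current count scaled by the ratio of the observed metric to its target, rounded up. A minimal Python sketch of that calculation (the function name is ours, not part of the Kubernetes API):

```python
import math

def hpa_desired_replicas(current_replicas, current_metric, target_metric):
    """Replica count per the Horizontal Pod Autoscaler's scaling rule:
    desired = ceil(current * currentMetricValue / targetMetricValue)."""
    return math.ceil(current_replicas * current_metric / target_metric)
```

For example, with 4 pods averaging 90% CPU against a 50% target, the desired count is ceil(4 * 90 / 50) = 8; if observed utilization already matches the target, the count is unchanged.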
Alternative autoscaling decision approaches
Autoscaling by default uses a reactive decision approach for dealing with traffic scaling: scaling only happens in response to real-time changes in metrics. In some cases, particularly when the changes occur very quickly, this reactive approach is insufficient. Two other kinds of autoscaling decision approaches are described below.
Scheduled autoscaling approach
This is an approach to autoscaling where changes are made to the minimum size, maximum size, or desired capacity of the autoscaling group at specific times of day. Scheduled scaling is useful, for instance, when there is a known traffic increase or decrease at specific times of day, but the change is too sudden for reactive autoscaling to respond fast enough. AWS autoscaling groups support scheduled scaling.
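A scheduled scaling action can be sketched as a time-keyed adjustment to the group's settings, applied on top of the reactive loop. The schedule format below is invented for illustration; real providers such as AWS express recurring actions with cron-like expressions.

```python
# Map of hour-of-day -> (min_size, max_size, desired_capacity).
# This illustrative schedule pre-scales ahead of a known morning peak,
# which a purely reactive policy might respond to too late,
# and scales down for the quiet overnight period.
SCHEDULE = {
    8:  (6, 20, 10),   # raise capacity before the 9:00 traffic spike
    22: (2, 10, 3),    # lower capacity overnight
}

def apply_scheduled_actions(hour, group_settings):
    """Return updated (min, max, desired) settings if a scheduled action
    fires at this hour; otherwise return the settings unchanged."""
    return SCHEDULE.get(hour, group_settings)
```

For example, at hour 8 the settings `(2, 10, 3)` become `(6, 20, 10)`, while at hour 12 no action fires and the settings pass through unchanged.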
Predictive autoscaling
This approach to autoscaling uses predictive analytics. The idea is to combine recent usage trends with historical usage data, as well as other kinds of data, to predict future usage, and to autoscale based on these predictions.
For parts of their infrastructure and specific workloads, Netflix found that Scryer, their predictive analytics engine, gave better results than Amazon's reactive autoscaling approach. In particular, it was better for:
- Identifying huge spikes in demand in the near future and getting capacity ready a little in advance
- Dealing with large-scale outages, such as failure of entire availability zones and regions
- Dealing with variable traffic patterns, providing more flexibility on the rate of scaling out or in based on the typical level and rate of change in demand at various times of day
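The predictive approach above can be sketched as forecasting near-future load from historical data and provisioning capacity ahead of it. The forecast below, which simply averages the load observed in the same time slot on previous days, stands in for a real predictive analytics engine such as Scryer (whose internals are not public); all names and numbers here are illustrative assumptions.

```python
import math

def forecast_load(history, slot):
    """Predict load for a time slot by averaging the load observed in that
    slot on previous days. `history` is a list of per-day lists of load
    samples, one sample per time slot."""
    samples = [day[slot] for day in history]
    return sum(samples) / len(samples)

def capacity_for(load, per_instance_capacity, headroom=1.2):
    """Instances needed to serve the predicted load, with a safety margin
    (headroom) so capacity is ready a little in advance of the demand."""
    return math.ceil(load * headroom / per_instance_capacity)
```

For example, if slot 1 saw loads of 400 and 440 on the previous two days, the forecast is 420; with instances that each handle 100 units of load and 20% headroom, the group would pre-scale to 6 instances before the spike arrives.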