Engine Yard Developer Center

Autoscaling of Environments

Application instances in Engine Yard environments can be scaled horizontally and automatically, based on criteria you define: the number of instances running in the environment is increased or decreased as required, without user intervention. This feature leverages AWS Auto Scaling: you define scaling policies that AWS applies to your environment's application instances, and the EY Cloud platform then takes care of instance configuration, just as if the instances had been provisioned manually through our UI or API.

Things to take into account:

  • As it stands, this feature is for scaling application instances only. If you need to scale other types of instances, open a ticket with our Support Team to discuss other options.
  • Autoscaling is a per-environment configuration. On EY Cloud, a single autoscaling policy can't provision/terminate instances across environments.
  • Autoscaling is only possible on instances that live in a VPC. If your app instances aren't VPC native (that is, they are EC2-Classic, regardless of ClassicLink status), then you'll need to migrate them to a VPC. Contact our Support Team for assistance on how this can be achieved.
  • Time-based scaling is not available yet through the UI.
  • New instances will be created from fresh EBS volumes. Given the nature of how AWS autoscaling provisions new instances and how EY Cloud handles snapshots, having autoscaled instances use the latest snapshot is essentially not possible. If your environments rely on snapshots to bring up new app instances, contact our Support Team to evaluate which alternatives are possible.
  • When you terminate an environment, its autoscaling configuration will be deleted.



Creating an autoscaling group for an environment

To use autoscaling on a given environment, an autoscaling group has to be created at the environment level. Go to the 'More Options' section, find 'Autoscaling', and select it. If the option is greyed out, a list of reasons will be displayed. If you require any assistance resolving them, please contact Support.

A new window will appear:


(Don't worry about the dynamic scaling policies just yet; they are covered below.)

The default settings for an autoscaling group are one app instance as the minimum and 10 as the maximum. Adjust those as desired, as well as the 'health check grace' and 'default cooldown' values.

Please note that the app_master is not part of the autoscaling group, so that no scaling process can terminate it.

Click on 'Create Autoscaling Group' and wait while the autoscaling group is provisioned and configured. The following message will show up:


Reload the page; once provisioning is complete, you will see the in-place configuration:


At this point the autoscaling group is provisioned and active. A new setting appears, 'Desired capacity', which by default matches 'Minimum number of servers'. This value is the number of instances the autoscaler currently considers should be running in the environment, and it is set dynamically in response to the scaling policies. If you have left the default values, a new app instance will be added to the environment as soon as AWS triggers it.
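The interaction between the minimum, maximum, and desired capacity can be sketched as follows. This is a simplified illustration of how AWS-style capacity bounds behave, not EY Cloud or AWS code; the function and parameter names are invented for this example:

```python
def apply_scaling_action(desired, adjustment, minimum=1, maximum=10):
    """Return the new desired capacity after a scaling action.

    AWS clamps the desired capacity to the group's minimum and
    maximum bounds, so a policy can never scale the environment
    outside the range you configured.
    """
    return max(minimum, min(maximum, desired + adjustment))

# A scale-out policy that adds 2 instances:
print(apply_scaling_action(desired=9, adjustment=2))   # capped at the maximum: 10
# A scale-in policy that removes 3 instances:
print(apply_scaling_action(desired=2, adjustment=-3))  # floored at the minimum: 1
```

In other words, the policies move 'Desired capacity' up and down, while the minimum and maximum you set when creating the group act as hard limits.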

Defining autoscaling policies

The autoscaling policies in EY Cloud are very similar to the ones in AWS. While it's not strictly necessary to know them in detail, consider having a read of the AWS docs on the matter.

Click on 'Add New' to start working with policies:


Every policy has three sections:

  • The policy type and its name. Select which type of on-demand scaling you'll be using, and give the policy a name. The linked AWS doc provides a good explanation of the different on-demand types of scaling.


  • What will trigger the policy (its 'alarm'). Keep in mind that the 'metric' is aggregated over the 'number of periods' across all the instances that are already part of the autoscaling group, and the alarm triggers if the result matches the 'trigger value' according to the 'operand'. In this context, 'period length' is how long each evaluation period lasts, which in EY Cloud is related to the time it takes for an instance to be provisioned or terminated.


  • The action to take when the policy is triggered. Pay special attention to the 'cooldown' value, which is how long the autoscaling engine waits before triggering another action. This should account for how long it takes to provision a fully working instance in your environment, or to terminate one.
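Putting the alarm pieces together, the evaluation behaves roughly like the sketch below. This is a simplified, illustrative model of CloudWatch-style alarm logic, not actual AWS code, and all names are invented for the example:

```python
import operator

OPERANDS = {">": operator.gt, ">=": operator.ge,
            "<": operator.lt, "<=": operator.le}

def alarm_triggers(samples, aggregate, operand, trigger_value, num_periods):
    """Return True if the aggregated metric breaches the threshold
    in each of the last `num_periods` evaluation periods.

    `samples` is a list of per-period lists of metric values,
    collected across all instances already in the autoscaling group.
    """
    if len(samples) < num_periods:
        return False  # not enough data yet
    compare = OPERANDS[operand]
    return all(compare(aggregate(period), trigger_value)
               for period in samples[-num_periods:])

def mean(values):
    return sum(values) / len(values)

# Average CPU across the group above 70% for 3 consecutive periods:
cpu_by_period = [[55, 60], [72, 80], [75, 78], [90, 71]]
print(alarm_triggers(cpu_by_period, mean, ">", 70, 3))  # True
```

The scaling action then fires, and the cooldown suppresses further actions while the new instance comes up (or the terminated one drains), so that one sustained spike doesn't stack multiple scaling events.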


Click on 'Create' to create the policy, and then on 'Update Auto Scaling Group' to put the changes in place in AWS. A common mistake here is forgetting to 'Update', in which case you'll need to redo the changes to the policy.


Next steps


As with any change that involves major parts of the infrastructure, we advise testing policies on a non-production environment first. Pay special attention to the provisioning and termination times for your application, and adjust 'period length' and 'cooldown' accordingly.

Health check

Defining an HTTP health check ensures that new instances provisioned by the autoscaling process do not receive any traffic until they are fully configured and the application is running on them.

By default, both HAProxy and Amazon's xLB load balancers use a TCP health check. Such a check only requires a TCP connection to be accepted by Nginx on the instance, so it can pass while the configuration and deployment of the application are not yet complete. This leads to unconfigured instances being added to the load balancer and receiving requests, and therefore returning application errors to visitors.

It is therefore recommended that you add a fully-formed health check path: one that queries a dynamic page of your application and requires a database call to complete.
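To make the idea concrete, a fully-formed check might be backed by an endpoint like the following minimal WSGI sketch (standard-library Python only; the `/healthz` path, the SQLite database, and all names here are illustrative assumptions, not part of EY Cloud):

```python
import sqlite3

DB_PATH = ":memory:"  # stand-in for the application's real database

def app(environ, start_response):
    """Minimal WSGI app whose health check requires a real database
    round-trip, so it only passes once the app's backing services
    are actually reachable."""
    if environ["PATH_INFO"] == "/healthz":
        try:
            conn = sqlite3.connect(DB_PATH)
            conn.execute("SELECT 1")  # fails if the database is unreachable
            conn.close()
        except sqlite3.Error:
            start_response("503 Service Unavailable",
                           [("Content-Type", "text/plain")])
            return [b"unhealthy"]
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b"ok"]
    start_response("404 Not Found", [("Content-Type", "text/plain")])
    return [b"not found"]
```

Pointing the load balancer's health check at such a path, rather than relying on a bare TCP connect, keeps half-configured instances out of rotation.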

For HAProxy such a path can be added via the 'Edit Application' link, shown when viewing an Application in the EY dashboard. 

For Amazon Classic Load Balancers the health check path, port and protocol can be set by expanding out the relevant Load Balancer on the 'Tools -> Classic Load Balancers' page. To ensure the instances are passing the health check follow the 'View this load balancer in the AWS console' link, then select the health check tab and ensure all instances are 'healthy'.

For Amazon Application Load Balancers the health check path, port and protocol are configured for the Target Groups, which can be viewed from the link on the Environment page of the EY dashboard. To ensure the instances are passing the health check follow the 'View this load balancer in the AWS console' link for the relevant ALB on the 'Tools -> Application Load Balancers' page, then select 'Target Groups' in the list in the left-hand menu, then the 'Targets' tab and ensure all instances are 'healthy'.

If you require any assistance configuring or verifying health checks please contact Engine Yard Support.

