By Federico Rosario, Senior DevOps Engineer

*Views, thoughts, and opinions expressed in this post belong solely to the author, and not necessarily to SemanticBits.

I must have heard it a thousand times—“Turn out the lights!”—the battle cry of parents far and wide, cursing their electric bills each month. Of course, I was a child back then and knew nothing about such expenses. Fast forward a few decades. Now I’m the adult and the DevOps engineer who keeps an eye on the bill. Only, this time it’s not an electric bill but something more traumatic—the monthly AWS invoice! All these wonderful services don’t come free. Just look at this invoice, courtesy of parkmycloud.com.

All jokes aside, an invoice that large is nothing to laugh at. Luckily, the invoice we receive at SemanticBits isn’t that high. But we felt it was necessary to see how we could minimize costs in this area without disrupting the product.

The talented team at SemanticBits could probably engineer something fancy and impressive to solve this problem but, in the spirit of Occam’s razor, we looked for the simplest solution. In our current architecture, we rely heavily on autoscaling to ensure high availability. It’s convenient but expensive. So, that’s where we began.

Out of the six environments that I focus on, three are rarely used overnight or on the weekends, making them prime candidates for cost-cutting measures. Since we could accurately predict the load on those three environments, we were able to capitalize on scheduled scaling to essentially shut them down during overnight hours.

Amazon’s documentation explains how to use scheduled scaling either manually through their GUI or by issuing API calls through their command-line tool. This was useful but cumbersome for our needs: we regularly create new autoscaling groups and destroy the old ones as part of our rolling deployment strategy, so any manually attached scheduled action would have to be re-created after every deployment. However, it turned out that the solution was with us all along.
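For reference, the manual route looks roughly like this with the AWS CLI (the group name below is a placeholder, and the recurrence cron expression is evaluated in UTC); an action created this way is tied to one specific autoscaling group:

```shell
# Create a scheduled action that scales an example group to zero overnight.
# AWS evaluates the cron recurrence in UTC.
aws autoscaling put-scheduled-update-group-action \
  --auto-scaling-group-name my-app-asg \
  --scheduled-action-name night \
  --recurrence "00 02 * * 1-5" \
  --min-size 0 \
  --max-size 0 \
  --desired-capacity 0
```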

The tool of choice for provisioning our environments is Terraform. Lo and behold, a trivial search in the Terraform documentation revealed a configuration resource for autoscaling schedules! And here I thought we would need to write custom code around the AWS API.

Instead, here’s the gist:

resource "aws_autoscaling_schedule" "night" {
  scheduled_action_name  = "night"
  min_size               = 0
  max_size               = 0
  desired_capacity       = 0
  recurrence             = "00 02 * * 1-5" # Mon-Fri 02:00 UTC (10 PM EDT) -- AWS evaluates recurrence in UTC
  autoscaling_group_name = "${module.app.aws_autoscaling_group_name}"
}

resource "aws_autoscaling_schedule" "morning" {
  scheduled_action_name  = "morning"
  min_size               = 10
  max_size               = 10
  desired_capacity       = 10
  recurrence             = "00 11 * * 1-5" # Mon-Fri 11:00 UTC (7 AM EDT)
  autoscaling_group_name = "${module.app.aws_autoscaling_group_name}"
}

These two scheduled actions are created with every new autoscaling group.
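If you want to double-check that the actions actually landed on a freshly created group, the AWS CLI can list them (the group name below is a placeholder):

```shell
# List the scheduled actions attached to a given autoscaling group
aws autoscaling describe-scheduled-actions \
  --auto-scaling-group-name my-app-asg
```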

A quick peek back at the previous 24 hours shows that our instance count did, in fact, drop to zero for the nightly window we specified.

With all the money saved from shutting these unused environments down, we can now splurge on some shiny new memes! Enjoy!