This module helps you shut down AWS resources you don't use at night or during weekends to keep both your $ and CO₂ bills low!
It uses a lambda function and a few cronjobs to trigger a start or stop function at a given hour, on a subset of your AWS resources, selected by a tag.
It supports:
- AutoScalingGroups: suspends the ASG and terminates its instances. At start, it resumes the ASG, which launches new instances by itself.
- EKS node groups: if a node group is tagged, the ASG handler is used on its underlying ASG.
- RDS: runs the stop and start functions on the instances.
- EC2 instances: terminates the instances.
  ⚠️ It does not start them back, since the instances are terminated, not stopped. Use with caution.
The lambda function is idempotent, so you can launch it on an already stopped/started resource without any risk! This simplifies your job when planning crons.
AWS Instance Scheduler is the official AWS solution for this problem. It is a more complete solution, using a controller approach: a lambda regularly checks the current time and decides whether to start or stop resources. It is therefore more resilient.
However, it is also more complex and needs to be set up with CloudFormation.
A good rule of thumb to decide: if you have a few accounts and want to keep it simple, use this Terraform module. If you manage a multi-account cloud organization, check out the more complete and robust Instance Scheduler.
If you don't know much about crons, check https://cron.help/. Note that AWS cron expressions differ slightly from the standard syntax (see the annotated example below):
- They add a 6th field for the year.
- You cannot set '*' for both Day-of-week and Day-of-month; use '?' in one of them.
⏰ All the cron expressions are in UTC time! Check your current timezone and do the maths.
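For illustration, here is how the weekday expressions used later in this README break down (fields are Minutes, Hours, Day-of-month, Month, Day-of-week, Year):

# Minutes Hours Day-of-month Month Day-of-week Year
start = "0 6 ? * MON-FRI *"  # 06:00 UTC, Monday to Friday, any year
stop  = "0 18 ? * MON-FRI *" # 18:00 UTC, Monday to Friday, any year
# "?" is required in Day-of-month here because Day-of-week is already constrained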
This module can be installed using Padok's registry.
For example, if you want to shut down all staging resources during nights and weekends:
- each evening at 18:00 UTC (20:00 in France), stop resources with tag Env=staging
- each morning at 6:00 UTC (8:00 in France), start resources with tag Env=staging
module "aws_start_stop_scheduler" {
source = "github.com/padok-team/terraform-aws-start-stop-scheduler"
version = "v0.3.1"
name = "start_stop_scheduler"
schedules = [
{
name = "weekday_working_hours",
start = "0 6 ? * MON-FRI *",
stop = "0 18 ? * MON-FRI *",
tag_key = "Env",
tag_value = "staging",
}
]
# to adjust if you have a lot of resources to manage
# lamda_timeout = 600
}
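On the resource side, anything carrying the matching tag gets picked up. For illustration, a hypothetical RDS instance that the schedule above would stop and start (only the tags block matters here, the other arguments are placeholders):

resource "aws_db_instance" "staging" {
  # ... engine, instance_class, and other required arguments

  tags = {
    Env = "staging" # matches the schedule's tag_key / tag_value
  }
}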
You can choose to only start or stop a set of resources by omitting start or stop. For example, here is a valid configuration:
schedules = [
  {
    name      = "stop_at_night",
    start     = "",
    stop      = "0 18 ? * MON-FRI *",
    tag_key   = "Env",
    tag_value = "sandbox",
  }
]
You may also set up several schedules in the same module. The name parameter must be unique across schedules.
schedules = [
  {
    name      = "weekday_working_hours",
    start     = "0 6 ? * MON-FRI *",
    stop      = "0 18 ? * MON-FRI *",
    tag_key   = "Env",
    tag_value = "staging",
  },
  {
    name      = "stop_at_night",
    start     = "",
    stop      = "0 18 ? * MON-FRI *",
    tag_key   = "Env",
    tag_value = "sandbox",
  }
]
Check examples/asg for a complete example with AutoScalingGroups, and examples/rds for RDS.
You can also test the deployed lambda function with arbitrary arguments:
aws lambda invoke --function-name <function_name_from_output> --payload '{"action": "start", "tag": {"key": "Env", "value": "staging"}}' --cli-binary-format raw-in-base64-out out.txt
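To check what the function actually did, you can also tail its logs (assuming AWS CLI v2; the log group name is available in the module outputs):

aws logs tail <log_group_name_from_output> --follow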
Name | Version |
---|---|
archive | ~> 2.0 |
aws | ~> 4.0 |
Name | Description | Type | Default | Required |
---|---|---|---|---|
name | A name used to create resources in the module | string | n/a | yes |
schedules | List of maps containing the following keys: name (job name), start (cron expression for the start schedule), stop (cron expression for the stop schedule), tag_key and tag_value (target resources) | list(object({…})) | n/a | yes |
asg_schedule | Run the scheduler on AutoScalingGroups. | bool | true | no |
aws_regions | List of AWS regions where the scheduler will be applied. By default, targets the current region. | list(string) | null | no |
custom_iam_lambda_role | Use a custom role for the lambda. Useful if you cannot create IAM resources directly with your AWS profile, or to share a role between several resources. | bool | false | no |
custom_iam_lambda_role_arn | Custom role ARN used for the lambda. Used only if custom_iam_lambda_role is set to true. | string | null | no |
ec2_schedule | Run the scheduler on EC2 instances (only allows downscaling). | bool | false | no |
lambda_timeout | Amount of time your Lambda function has to run, in seconds. | number | 120 | no |
rds_schedule | Run the scheduler on RDS. | bool | true | no |
tags | Custom resource tags | map(string) | {} | no |
Name | Description |
---|---|
clouwatch_event_rules | Cloudwatch event rules generated by the module to trigger the lambda |
lambda_function_arn | The ARN of the Lambda function |
lambda_function_invoke_arn | The ARN to be used for invoking Lambda function from API Gateway |
lambda_function_last_modified | The date Lambda function was last modified |
lambda_function_log_group_arn | The ARN of the lambda's log group |
lambda_function_log_group_name | The name of the lambda's log group |
lambda_function_name | The name of the Lambda function |
lambda_function_version | Latest published version of your Lambda function |
lambda_iam_role_arn | The ARN of the IAM role used by Lambda function |
lambda_iam_role_name | The name of the IAM role used by Lambda function |
In some cases, you might not be able to create IAM resources with the same role used to create the lambda function, or you might want to share a common role between several modules. In that case, you can provide a custom IAM role to the module, which will be used instead of the one created inside the module.
You then need to set both variables custom_iam_lambda_role and custom_iam_lambda_role_arn.
module "aws_start_stop_scheduler" {
...
custom_iam_lambda_role = true
custom_iam_lambda_role_arn = aws_iam_role.lambda.arn
}
You have a full working example in examples/custom_role.
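For orientation, a minimal sketch of such a role is shown below. It only covers the trust policy so the Lambda service can assume the role; the permissions the scheduler actually needs are not shown, and the role name is hypothetical.

resource "aws_iam_role" "lambda" {
  name = "start-stop-scheduler-lambda" # hypothetical name

  # Allow the Lambda service to assume this role
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "lambda.amazonaws.com" }
    }]
  })
}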
If you are using Karpenter on its own node group, which then schedules pods on EC2 instances, you should:
- First scale down the node group running Karpenter, to prevent it from scaling up new instances.
- Then stop the EC2 instances.
When you scale the node group back up, Karpenter will schedule the pods on new instances. It will also clean up ghost Kubernetes nodes from the API server.
Here is an example of how to do it:
schedules = [
  {
    name      = "weekday_asg_working_hours",
    start     = "0 6 ? * MON-FRI *",
    stop      = "0 19 ? * MON-FRI *", # 19:00
    tag_key   = "scheduler",
    tag_value = "karpenter_node_group" # the EKS node group hosting Karpenter is tagged with this
  },
  {
    name      = "weekday_ec2_karpenter_working_hours",
    start     = "", # do not scale up
    stop      = "5 19 ? * MON-FRI *", # 19:05, 5 min after the ASG
    tag_key   = "scheduler",
    tag_value = "ec2_karpenter" # EC2 instances launched by Karpenter are tagged with this
  },
]

ec2_schedule = true # Karpenter spawns raw EC2 instances
To avoid any issues with application locks on databases (for example, during migrations), you should shut down databases after the application has been stopped. For this you may use two different schedules:
schedules = [
  {
    name      = "weekday_asg_working_hours",
    start     = "0 6 ? * MON-FRI *",
    stop      = "0 19 ? * MON-FRI *", # 30 min before the RDS
    tag_key   = "scheduler",
    tag_value = "asg"
  },
  {
    name      = "weekday_rds_working_hours",
    start     = "30 5 ? * MON-FRI *",
    stop      = "30 19 ? * MON-FRI *",
    tag_key   = "scheduler",
    tag_value = "rds"
  },
]
Refer to the contribution guidelines for information on contributing to this module.
Please open GitHub issues for any problems encountered when using the module, or for suggestions!
You can find the initial draft document here.
The project has the following folders and files:
- /: root folder
- /examples: examples for using this module
- /scripts: scripts for specific tasks on the module (see the Infrastructure section of this file)
- /test: folders with files for testing the module (see the Testing section of this file)
- /helpers: optional helper scripts for ease of use
- /main.tf: main file for this module, contains all the resources to create
- /variables.tf: all the variables for the module
- /output.tf: the outputs of the module
- /README.md: this file