This Terraform module creates the infrastructure required to host GitHub Actions self-hosted, auto-scaling runners on AWS spot instances. It provides the logic to handle the life cycle for scaling up and down using a set of AWS Lambda functions. Runners are scaled down to zero to avoid costs when no workflows are active.
NEW: Ephemeral runners available as beta feature.
NEW: Windows runners are available.
NEW: Examples for custom AMI are available.
- Motivation
- Overview
- Usages
- Examples
- Sub modules
- Debugging
- Requirements
- Providers
- Modules
- Resources
- Inputs
- Outputs
- Contribution
- Philips Forest
GitHub Actions self-hosted runners provide a flexible option to run CI workloads on the infrastructure of your choice. Currently, no option is provided to automate the creation and scaling of action runners. This module creates the AWS infrastructure to host action runners on spot instances. It provides Lambda modules to orchestrate the life cycle of the action runners.
Lambda is chosen as the runtime for two major reasons. First, it allows the creation of small components with minimal access to AWS and GitHub. Second, it provides a scalable setup with minimal costs that works at repo level and scales to organization level. The lambdas will create Linux-based EC2 instances with Docker to serve CI workloads that can run on Linux and/or Docker. The main goal is to support Docker-based workloads.
A logical question would be: why not Kubernetes? In the current approach, we stay close to how GitHub action runners are available today: the runner is installed on a host where the required software is available. Another logical choice would be AWS Auto Scaling groups. However, that choice would typically require granting GitHub far more permissions at the instance level, and besides that, scaling up and down is not trivial.
The moment a GitHub Actions workflow requiring a self-hosted runner is triggered, GitHub will try to find a runner which can execute the workload. This module reacts to GitHub's `check_run` event or `workflow_job` event for the triggered workflow and creates a new runner if necessary.
To receive the `check_run` or `workflow_job` event in the webhook (lambda), a webhook needs to be created in GitHub. The `workflow_job` event is the preferred option; the `check_run` option will be maintained for backward compatibility. The advantage of the `workflow_job` event is that the webhook checks if the received event can run on the configured runners by matching the labels, which avoids instances being scaled up and never used. The following options are available:
- `workflow_job`: (preferred option) create a webhook on enterprise, org or app level. Select this option for ephemeral runners.
- `check_run`: create a webhook on enterprise, org, repo or app level. When using the app option, the app needs to be installed on the repositories that use the self-hosted runners.

In both cases a webhook needs to be created; it can be defined on enterprise, org, repo, or app level.
In AWS an API Gateway endpoint is created that is able to receive the GitHub webhook events via HTTP POST. The gateway triggers the webhook lambda, which will verify the signature of the event. This check guarantees the event is sent by the GitHub App. The lambda only handles `workflow_job` or `check_run` events with status `queued` and matching runner labels (only for `workflow_job`). The accepted events are posted on an SQS queue. Messages on this queue will be delayed for a configurable amount of seconds (default 30 seconds) to give the available runners time to pick up this build.
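If the defaults do not fit your workloads, both the delay and the queue retention are configurable. A minimal sketch, using variables documented in the inputs below (the values shown are the defaults):

```hcl
# Queue tuning knobs for the build queue between the webhook and the scale-up lambda.
delay_webhook_event            = 30     # seconds an accepted event stays invisible before the scale-up lambda receives it
job_queue_retention_in_seconds = 86400  # seconds a job is held in the queue before it is purged
```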
The "scale up runner" lambda listens to the SQS queue and picks up events. The lambda runs various checks to decide whether a new EC2 spot instance needs to be created. For example, no instance is created if the build has already been started by an existing runner, or if the maximum number of runners has been reached.
The lambda first requests a registration token from GitHub, which is needed later by the runner to register itself. This avoids the EC2 instance, which installs the agent later in the process, needing administration permissions to register the runner. Next, the EC2 spot instance is created via the launch template. The launch template defines the specifications of the required instance and contains a `user_data` script. This script installs the required software and configures it. The registration token for the action runner is stored in SSM Parameter Store, from which the user data script fetches it, deleting it once retrieved. Once the user data script is finished, the action runner should be online, and the workflow will start in seconds.
Scaling down the runners is currently brute-forced: every configurable amount of minutes, a lambda checks every runner (instance) to see whether it is busy. If a runner is not busy, it is removed from GitHub and the instance is terminated in AWS. At the moment there seems to be no other option to scale down more smoothly.
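Both the check cadence and the protection window for freshly booted runners are configurable. A minimal sketch, using variables from the inputs below (the schedule shown is the default; the minimum running time is illustrative, the module picks an OS-based default when unset):

```hcl
# Scale-down cadence and a protection window for freshly booted runners.
scale_down_schedule_expression  = "cron(*/5 * * * ? *)"  # check for idle runners every 5 minutes
minimum_running_time_in_minutes = 5                      # don't terminate runners younger than 5 minutes
```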
Downloading the GitHub Action Runner distribution can occasionally be slow (more than 10 minutes). Therefore a lambda is introduced that synchronizes the action runner binary from GitHub to an S3 bucket. The EC2 instance fetches the distribution from the S3 bucket instead of from the internet.
Secrets and private keys are stored in SSM Parameter Store. These values are encrypted using the default KMS key for SSM, or a custom KMS key that you pass in.
Permissions are managed in several places. Below are the most important ones; for details check the Terraform sources.
- The GitHub App requires access to actions and to publish `workflow_job` events to the AWS webhook (API Gateway).
- The scale-up lambda should have access to EC2 for creating and tagging instances.
- The scale-down lambda should have access to EC2 to terminate instances.
Besides these permissions, the lambdas also need permission to CloudWatch (for logging and scheduling), SSM and S3. For more details about the required permissions see the documentation of the IAM module which uses permission boundaries.
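If your organization mandates permission boundaries, the roles created by the module can be constrained via the module inputs. A minimal sketch, assuming a pre-existing boundary policy (the ARN is a placeholder):

```hcl
# Attach a permissions boundary to all IAM roles created by the module.
role_permissions_boundary = "arn:aws:iam::123456789012:policy/my-boundary"  # placeholder ARN
```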
To support a number of use cases, the module has quite a few configuration options. We try to choose reasonable defaults. The examples also show how to configure the runners for the main cases.
- Org vs repo level. You can configure the module to connect the runners in GitHub on an org level and share the runners in your org, or set the runners on repo level and the module will install the runner to the repo. There can be multiple repos, but runners are not shared between repos.
- Check run vs workflow job event. You can configure the webhook in GitHub to send check run or workflow job events to the webhook. Workflow job events were introduced by GitHub in September 2021 and are designed to support scalable runners. We advise using the workflow job event when possible. You can set `runner_enable_workflow_job_labels_check = true` to let the webhook only accept jobs that match the configured labels (see the sketch after this list). The webhook will check the custom labels provided via the variable `runner_extra_labels` and the GitHub-managed labels: "self-hosted", OS and architecture. The OS and architecture are derived from the settings. By default the check is disabled.
- Linux vs Windows. You can configure the OS types linux and win. Linux will be used by default.
- Re-use vs ephemeral. By default runners are re-used until detected idle. Once idle, they will be removed from the pool. To improve security we are introducing ephemeral runners. Those runners are only used for one job. Ephemeral runners only work in combination with the workflow job event. We also suggest using a pre-built AMI to improve the start time of jobs.
- GitHub Cloud vs GitHub Enterprise Server (GHES). The runners support GitHub Cloud as well as GitHub Enterprise Server. For GHES we rely on our community for testing and support; we have no possibility to test GHES ourselves.
- Spot vs on-demand. The runners use either the EC2 spot or on-demand life cycle. Runners will be created via the AWS CreateFleet API. The module (scale up lambda) will request via the CreateFleet API to create instances in one of the subnets and of the specified instance types.
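As referenced in the event bullet above, a minimal sketch of the label check configuration (the custom label values are illustrative):

```hcl
# Only accept workflow_job events whose labels match the configured runner labels.
runner_enable_workflow_job_labels_check = true
runner_extra_labels                     = "default,example"  # custom labels, comma separated
```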
When using the default example or top-level module and specifying `instance_types` that match a Graviton/Graviton2 (ARM64) architecture (e.g. a1, t4g, or any 6th-gen `g` or `gd` type), you must also specify `runner_architecture = "arm64"`; the submodules will then automatically be configured to provision ARM64 AMIs and leverage GitHub's ARM64 action runner. See below for more details.
Examples are provided in the example directory. Please ensure you have installed the following tools.
- Terraform, or tfenv.
- Bash shell or compatible
- Docker (optional, to build lambdas without node).
- AWS cli (optional)
- Node and yarn (for lambda development).
The module supports two main scenarios for creating runners. On repository level, a runner will be dedicated to only one repository; no other repository can use the runner. On organization level, you can use the runner(s) for all the repositories within the organization. See the GitHub self-hosted runner instructions for more information. Before starting the deployment you have to choose one option.
The setup consists of running Terraform to create all AWS resources and manually configuring the GitHub App. The Terraform module requires configuration from the GitHub App and the GitHub app requires output from Terraform. Therefore you first create the GitHub App and configure the basics, then run Terraform, and afterwards finalize the configuration of the GitHub App.
Go to GitHub and create a new app. Beware that apps can be created for your organization or for a user. For now we support only organization-level apps.
- Create an app in GitHub
- Choose a name
- Choose a website (mandatory for GitHub, not used by the module).
- Disable the webhook for now (we will configure this later or create an alternative webhook).
- Permissions for all runners:
  - Repository:
    - `Actions`: Read-only (check for queued jobs)
    - `Checks`: Read-only (receive events for new builds)
    - `Metadata`: Read-only (default/required)
- Permissions for repo level runners only:
  - Repository:
    - `Administration`: Read & write (to register runner)
- Permissions for organization level runners only:
  - Organization:
    - `Self-hosted runners`: Read & write (to register runner)
- Save the new app.
- On the General page, make a note of the "App ID" and "Client ID" parameters.
- Generate a new private key and save the `app.private-key.pem` file.
To apply the Terraform module, the compiled lambdas (.zip files) need to be available either locally or in an S3 bucket. They can be either downloaded from the GitHub release page or built locally.
To read the files from S3, set the `lambda_s3_bucket` variable and the specific object key for each lambda.
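A minimal sketch of the S3-based configuration, using the bucket and key variables documented in the inputs below (the bucket name and object keys are placeholders):

```hcl
# Fetch the pre-built lambda artifacts from an S3 bucket instead of local files.
lambda_s3_bucket      = "my-lambda-bucket"                   # placeholder bucket name
webhook_lambda_s3_key = "github-runner/webhook.zip"          # placeholder object keys
runners_lambda_s3_key = "github-runner/runners.zip"
syncer_lambda_s3_key  = "github-runner/runner-binaries-syncer.zip"
```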
The lambdas can be downloaded manually from the release page or using the download-lambda Terraform module (requires `curl` to be installed on your machine). In the `download-lambda` directory, run `terraform init && terraform apply`. The lambdas will be saved to the same directory.
For local development you can build all the lambdas at once using `.ci/build.sh` or individually using `yarn dist`.
To create spot instances, the `AWSServiceRoleForEC2Spot` role needs to be added to your account. You can do that manually by following the AWS docs. To use Terraform for creating the role, either add the following resource or let the module manage the service linked role by setting `create_service_linked_role_spot` to `true`. Be aware this is an account-global role, so you may not want to manage it via a specific deployment.
```hcl
resource "aws_iam_service_linked_role" "spot" {
  aws_service_name = "spot.amazonaws.com"
}
```
Next, create a second Terraform workspace and instantiate the module, or adapt one of the examples.
Note that `github_app.key_base64` needs to be a base64-encoded string of the `.pem` file, i.e. the output of `base64 app.private-key.pem`. The decoded string can either be a multiline value or a single-line value with newlines represented as literal `\n` characters.
```hcl
module "github-runner" {
  source  = "philips-labs/github-runner/aws"
  version = "REPLACE_WITH_VERSION"

  aws_region = "eu-west-1"
  vpc_id     = "vpc-123"
  subnet_ids = ["subnet-123", "subnet-456"]

  environment = "gh-ci"

  github_app = {
    key_base64     = "base64string"
    id             = "1"
    webhook_secret = "webhook_secret"
  }

  webhook_lambda_zip                = "lambdas-download/webhook.zip"
  runner_binaries_syncer_lambda_zip = "lambdas-download/runner-binaries-syncer.zip"
  runners_lambda_zip                = "lambdas-download/runners.zip"
  enable_organization_runners       = true
}
```
Run Terraform using the following commands:

```bash
terraform init
terraform apply
```
The terraform output displays the API gateway url (endpoint) and secret, which you need in the next step.
The lambda for syncing the GitHub distribution to S3 is triggered via CloudWatch (by default once per hour). After deployment the function is triggered via S3 to ensure the distribution is cached.
At this point you have two options: either create a separate webhook (enterprise, org, or repo), or create the webhook in the App.
- Create a new webhook on repo level for a repo level runner, or on org (or enterprise) level for an org level runner.
- Provide the webhook url, which should be part of the Terraform output.
- Provide the webhook secret (`terraform output -raw <NAME_OUTPUT_VAR>`).
- In the "Permissions & Events" section, under the "Subscribe to Events" subsection, check either "Workflow Job" or "Check Run" (choose only one option!).
- In the "Install App" section, install the App in your organization, either in all or in selected repositories.
Go back to the GitHub App and update the following settings.
- Enable the webhook.
- Provide the webhook url, which should be part of the Terraform output.
- Provide the webhook secret (`terraform output -raw <NAME_OUTPUT_VAR>`).
- In the "Permissions & Events" section, under the "Subscribe to Events" subsection, check either "Workflow Job" or "Check Run" (choose only one option!).
Finally you need to ensure the app is installed to all or selected repositories.
Go back to the GitHub App and update the following settings.
- In the "Install App" section, install the App in your organization, either in all or in selected repositories.
The module supports two scenarios for managing the environment secrets and private key of the Lambda functions.
This is the default, no additional configuration is required.
You have to create and configure your KMS key. The module will use a context with key `Environment` and value `var.environment` as encryption context.
```hcl
resource "aws_kms_key" "github" {
  is_enabled = true
}

module "runners" {
  ...
  kms_key_arn = aws_kms_key.github.arn
  ...
}
```
The module basically supports two options for keeping a pool of runners: one is via a pool, which only supports org-level runners; the second option is keeping runners idle.
The pool is introduced in combination with the ephemeral runners and is primarily meant to ensure that if any event is unexpectedly dropped and no runner was created, the pool can pick up the job. The pool is maintained by a lambda. Each time the lambda is triggered, a check is performed to see whether the number of idle runners managed by the module meets the expected pool size. If not, the pool will be adjusted. Keep in mind that the scale down function is still active and will terminate instances that are detected as idle.
```hcl
pool_runner_owner = "my-org"                  # Org to which the runners are added
pool_config = [{
  size                = 20                    # size of the pool
  schedule_expression = "cron(* * * * ? *)"   # cron expression to trigger the adjustment of the pool
}]
```
The pool is NOT enabled by default and can be enabled by setting at least one object in the pool config list. The ephemeral example contains the configuration options (commented out).
The module will scale down to zero runners by default. By specifying an `idle_config`, idle runners can be kept active. The scale down lambda checks if any of the cron expressions matches the current time with a margin of 5 seconds. When there is a match, the number of runners specified in the idle config will be kept active. In case multiple cron expressions match, only the first one is taken into account. Below is an idle configuration for keeping runners active from 9 to 5 on working days.
```hcl
idle_config = [{
  cron      = "* * 9-17 * * 1-5"
  timeZone  = "Europe/Amsterdam"
  idleCount = 2
}]
```
Note: When using Windows runners it's recommended to keep a few runners warmed up due to the minutes-long cold start time.
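A sketch of such a warm pool for Windows, reusing the `idle_config` structure above (the always-matching cron expression and the values are illustrative):

```hcl
# Keep two Windows runners warm at all times to mask the cold start.
runner_os = "windows"
idle_config = [{
  cron      = "* * * * * *"   # matches always
  timeZone  = "Europe/Amsterdam"
  idleCount = 2
}]
```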
Cron expressions are parsed by cron-parser. The supported syntax:
```
* * * * * *
┬ ┬ ┬ ┬ ┬ ┬
│ │ │ │ │ │
│ │ │ │ │ └ day of week (0 - 7) (0 or 7 is Sun)
│ │ │ │ └───── month (1 - 12)
│ │ │ └────────── day of month (1 - 31)
│ │ └─────────────── hour (0 - 23)
│ └──────────────────── minute (0 - 59)
└───────────────────────── second (0 - 59, optional)
```
For time zones, please check the TZ database name column for the supported values.
Currently a beta feature! You can configure runners to be ephemeral; such runners will be used for only one job. The feature should be used in conjunction with listening for the workflow job event. Please consider the following:
- The scale down lambda is still active and should only remove orphaned instances, but there is no strict check in place. So ensure you configure `minimum_running_time_in_minutes` to a value that is high enough for your runner to boot and connect, to avoid it being terminated before executing a job.
- The messages sent from the webhook lambda to the scale-up lambda are by default delayed by SQS, to give available runners the chance to start the job before the decision is made to scale more runners. For ephemeral runners there is no need to wait; set `delay_webhook_event` to `0` (see the sketch after this list).
- To ensure runners are created in the order GitHub sends the events, we use a FIFO queue by default; this is mainly relevant for repo level runners. For ephemeral runners you can set `fifo_build_queue` to `false`.
- Errors related to scaling should be retried via SQS. You can configure `job_queue_retention_in_seconds` and `redrive_build_queue` to tune the behavior. We have no mechanism to guarantee an event is eventually processed, which means potentially no runner is created and the job in GitHub can time out after 6 hours.
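A sketch combining the settings from this list (all variables are documented in the inputs below):

```hcl
# Ephemeral runners: one job per runner, no webhook delay, standard (non-FIFO) queue.
enable_ephemeral_runners = true
delay_webhook_event      = 0
fifo_build_queue         = false
```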
The example for ephemeral runners is based on the default example. Have a look at the diff to see the major configuration differences.
This module also allows you to run agents from a prebuilt AMI to gain faster startup times. You can find more information in the image README.md.
Examples are located in the examples directory. The following examples are provided:
- Default: The default example of the module
- ARM64: Example usage with ARM64 architecture
- Ubuntu: Example usage of creating a runner using Ubuntu AMIs.
- Windows: Example usage of creating a runner using Windows as the OS.
- Ephemeral: Example usages of ephemeral runners based on the default example.
- Prebuilt Images: Example usages of deploying runners with a custom prebuilt image.
- Permissions boundary: Example usages of permissions boundaries.
The module contains several submodules. You can use the module via the main module, or assemble your own setup by initializing the submodules yourself.
The following submodules are the core of the module and are mandatory:
- runner-binaries-syncer - Syncs the action runner distribution.
- runners - Scales the action runners up and down
- webhook - Handles GitHub webhooks
The following sub modules are optional and are provided as example or utility:
- download-lambda - Utility module to download lambda artifacts from GitHub Release
- setup-iam-permissions - Example module to setup permission boundaries
When using the top-level module, configure `runner_architecture = "arm64"` and ensure the list of `instance_types` matches, as shown in the sketch below. When not using the top-level module, ensure these properties are set on the submodules.
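A minimal sketch of the ARM64 configuration (the instance types are illustrative Graviton2 examples):

```hcl
# Provision ARM64 runners on Graviton2 instance types.
runner_architecture = "arm64"
instance_types      = ["t4g.large", "t4g.xlarge"]
```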
In case the setup does not work as intended follow the trace of events:
- In the GitHub App configuration, the Advanced page displays all webhook events that were sent.
- In AWS CloudWatch, every lambda has a log group. Look at the logs of the `webhook` and `scale-up` lambdas.
- In AWS SQS you can see messages available or in flight.
- Once an EC2 instance is running, you can connect to it in the EC2 user interface using Session Manager (use `enable_ssm_on_runners = true`). Check the user data script using `cat /var/log/user-data.log`. By default several log files of the instances are streamed to AWS CloudWatch; look for a log group named `<environment>/runners`. In the log group you should see at least the log streams for the user data installation and the runner agent.
- Registered instances should show up in the Settings - Actions page of the repository or organization (depending on the installation mode).
| Name | Version |
|---|---|
| terraform | >= 0.14.1 |
| aws | ~> 3.38 |
| Name | Version |
|---|---|
| aws | ~> 3.38 |
| random | n/a |
| Name | Source | Version |
|---|---|---|
| runner_binaries | ./modules/runner-binaries-syncer | n/a |
| runners | ./modules/runners | n/a |
| ssm | ./modules/ssm | n/a |
| webhook | ./modules/webhook | n/a |
| Name | Type |
|---|---|
| aws_resourcegroups_group.resourcegroups_group | resource |
| aws_sqs_queue.queued_builds | resource |
| aws_sqs_queue.queued_builds_dlq | resource |
| aws_sqs_queue_policy.build_queue_dlq_policy | resource |
| aws_sqs_queue_policy.build_queue_policy | resource |
| random_string.random | resource |
| aws_iam_policy_document.deny_unsecure_transport | data source |
| Name | Description | Type | Default | Required |
|---|---|---|---|---|
| ami_filter | List of maps used to create the AMI filter for the action runner AMI. By default Amazon Linux 2 is used. | `map(list(string))` | `null` | no |
| ami_owners | The list of owners used to select the AMI of action runner instances. | `list(string)` | `[` | no |
| aws_partition | (optional) partition in the ARN namespace to use if not 'aws' | `string` | `"aws"` | no |
| aws_region | AWS region. | `string` | n/a | yes |
| block_device_mappings | The EC2 instance block device configuration. Takes the following keys: `device_name`, `delete_on_termination`, `volume_type`, `volume_size`, `encrypted`, `iops` | `map(string)` | `{}` | no |
| cloudwatch_config | (optional) Replaces the module default cloudwatch log config. See https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Agent-Configuration-File-Details.html for details. | `string` | `null` | no |
| create_service_linked_role_spot | (optional) create the service linked role for spot instances that is required by the scale-up lambda. | `bool` | `false` | no |
| delay_webhook_event | The number of seconds the event accepted by the webhook is invisible on the queue before the scale up lambda will receive the event. | `number` | `30` | no |
| disable_runner_autoupdate | Disable the auto update of the github runner agent. Be aware there is a grace period of 30 days; see also the GitHub article. | `bool` | `false` | no |
| enable_cloudwatch_agent | Enables the cloudwatch agent on the ec2 runner instances; the runner contains a default config. Configuration can be overridden via `cloudwatch_config`. | `bool` | `true` | no |
| enable_ephemeral_runners | Enable ephemeral runners; runners will only be used once. | `bool` | `false` | no |
| enable_managed_runner_security_group | Enables the default managed security group creation. Unmanaged security groups can be specified via `runner_additional_security_group_ids`. | `bool` | `true` | no |
| enable_organization_runners | Register runners to the organization, instead of repo level. | `bool` | `false` | no |
| enable_ssm_on_runners | Enable to allow access to the runner instances for debugging purposes via SSM. Note that this adds additional permissions to the runner instances. | `bool` | `false` | no |
| enabled_userdata | Should the userdata script be enabled for the runner. Set this to false if you are using your own prebuilt AMI. | `bool` | `true` | no |
| environment | A name that identifies the environment, used as prefix and for tagging. | `string` | n/a | yes |
| fifo_build_queue | Enable a FIFO queue to retain the order of events received by the webhook. Suggested to set to true for repo level runners. | `bool` | `false` | no |
| ghes_ssl_verify | GitHub Enterprise SSL verification. Set to 'false' when custom certificate (chains) is used for GitHub Enterprise Server (insecure). | `bool` | `true` | no |
| ghes_url | GitHub Enterprise Server URL. Example: https://github.internal.co - DO NOT SET IF USING PUBLIC GITHUB | `string` | `null` | no |
| github_app | GitHub app parameters, see your github app. Ensure the key is the base64-encoded `.pem` file (the output of `base64 app.private-key.pem`, not the content of `private-key.pem`). | `object({` | n/a | yes |
| idle_config | List of time periods that can be defined as cron expressions to keep a minimum amount of runners active instead of scaling down to 0. By defining this list you can ensure that in time periods that match the cron expression within 5 seconds a runner is kept idle. | `list(object({` | `[]` | no |
| instance_allocation_strategy | The allocation strategy for spot instances. AWS recommends to use `capacity-optimized`, however the AWS default is `lowest-price`. | `string` | `"lowest-price"` | no |
| instance_max_spot_price | Max price for spot instances per hour. This variable will be passed to the create fleet as max spot price for the fleet. | `string` | `null` | no |
| instance_profile_path | The path that will be added to the instance_profile; if not set the environment name will be used. | `string` | `null` | no |
| instance_target_capacity_type | Default lifecycle used for runner instances, can be either `spot` or `on-demand`. | `string` | `"spot"` | no |
| instance_type | [DEPRECATED] See `instance_types`. | `string` | `null` | no |
| instance_types | List of instance types for the action runner. Defaults are based on `runner_os` (amzn2 for linux and Windows Server Core for win). | `list(string)` | `[` | no |
| job_queue_retention_in_seconds | The number of seconds the job is held in the queue before it is purged. | `number` | `86400` | no |
| key_name | Key pair name. | `string` | `null` | no |
| kms_key_arn | Optional CMK Key ARN to be used for Parameter Store. This key must be in the current account. | `string` | `null` | no |
| lambda_principals | (Optional) add extra principals to the role created for execution of the lambda, e.g. for local testing. | `list(object({` | `[]` | no |
| lambda_s3_bucket | S3 bucket from which to specify lambda functions. This is an alternative to providing local files directly. | `any` | `null` | no |
| lambda_security_group_ids | List of security group IDs associated with the Lambda function. | `list(string)` | `[]` | no |
| lambda_subnet_ids | List of subnets in which the lambdas will be launched; the subnets need to be subnets in the `vpc_id`. | `list(string)` | `[]` | no |
| log_level | Logging level for lambda logging. Valid values are 'silly', 'trace', 'debug', 'info', 'warn', 'error', 'fatal'. | `string` | `"info"` | no |
| log_type | Logging format for lambda logging. Valid values are 'json', 'pretty', 'hidden'. | `string` | `"pretty"` | no |
| logging_kms_key_id | Specifies the kms key id to encrypt the logs with. | `string` | `null` | no |
| logging_retention_in_days | Specifies the number of days you want to retain log events for the lambda log group. Possible values are: 0, 1, 3, 5, 7, 14, 30, 60, 90, 120, 150, 180, 365, 400, 545, 731, 1827, and 3653. | `number` | `180` | no |
| market_options | [DEPRECATED] Replaced by `instance_target_capacity_type`. | `string` | `null` | no |
| minimum_running_time_in_minutes | The minimum time an ec2 action runner should be running before being terminated if not busy. | `number` | `null` | no |
| pool_config | The configuration for updating the pool. The `pool_size` to adjust to by the events triggered by the `schedule_expression`. For example you can configure a cron expression for week days to adjust the pool to 10 and another expression for the weekend to adjust the pool to 1. | `list(object({` | `[]` | no |
| pool_lambda_reserved_concurrent_executions | Amount of reserved concurrent executions for the pool lambda function. A value of 0 disables lambda from being triggered and -1 removes any concurrency limitations. | `number` | `1` | no |
| pool_lambda_timeout | Time out for the pool lambda in seconds. | `number` | `60` | no |
| pool_runner_owner | The pool will deploy runners to the GitHub org ID; set this value to the org to which you want the runners deployed. Repo level is not supported. | `string` | `null` | no |
| redrive_build_queue | Set options to attach an (optional) dead letter queue to the build queue, the queue between the webhook and the scale up lambda. You have the following options: 1. Disable by setting `enabled` to false. 2. Enable by setting `enabled` to true and `maxReceiveCount` to the number of max retries. | `object({` | `{` | no |
| repository_white_list | List of repositories allowed to use the github app. | `list(string)` | `[]` | no |
| role_path | The path that will be added to the role path for created roles; if not set the environment name will be used. | `string` | `null` | no |
| role_permissions_boundary | Permissions boundary that will be added to the created roles. | `string` | `null` | no |
| runner_additional_security_group_ids | (optional) List of additional security group IDs to apply to the runner. | `list(string)` | `[]` | no |
| runner_allow_prerelease_binaries | Allow the runners to update to prerelease binaries. | `bool` | `false` | no |
| runner_architecture | The platform architecture of the runner instance_type. | `string` | `"x64"` | no |
| runner_as_root | Run the action runner under the root user. Variable `runner_run_as` will be ignored. | `bool` | `false` | no |
| runner_binaries_s3_sse_configuration | Map containing server-side encryption configuration for the runner-binaries S3 bucket. | `any` | `{}` | no |
| runner_binaries_syncer_lambda_timeout | Time out of the binaries sync lambda in seconds. | `number` | `300` | no |
| runner_binaries_syncer_lambda_zip | File location of the binaries sync lambda zip file. | `string` | `null` | no |
| runner_boot_time_in_minutes | The minimum time for an EC2 runner to boot and register as a runner. | `number` | `5` | no |
| runner_ec2_tags | Map of tags that will be added to the launch template instance tag specifications. | `map(string)` | `{}` | no |
| runner_egress_rules | List of egress rules for the GitHub runner instances. | `list(object({` | `[` | no |
| runner_enable_workflow_job_labels_check | If set to true, all labels in the workflow job event are matched against the custom labels and GitHub labels (os, architecture and `self-hosted`). When the labels do not match, the event is dropped at the webhook. | `bool` | `false` | no |
| runner_extra_labels | Extra (custom) labels for the runners (GitHub). Separate each label by a comma. Label checks on the webhook can be enforced by setting `runner_enable_workflow_job_labels_check`. GitHub read-only labels should not be provided. | `string` | `""` | no |
| runner_group_name | Name of the runner group. | `string` | `"Default"` | no |
| runner_iam_role_managed_policy_arns | Attach AWS or customer-managed IAM policies (by ARN) to the runner IAM role. | `list(string)` | `[]` | no |
| runner_log_files | (optional) Replaces the module default cloudwatch log config. See https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Agent-Configuration-File-Details.html for details. | `list(object({` | `null` | no |
| runner_metadata_options | Metadata options for the ec2 runner instances. | `map(any)` | `{` | no |
| runner_os | The EC2 Operating System type to use for action runner instances (linux, windows). | `string` | `"linux"` | no |
| runner_run_as | Run the GitHub actions agent as user. | `string` | `"ec2-user"` | no |
| runners_lambda_s3_key | S3 key for runners lambda function. Required if using an S3 bucket to specify lambdas. | `any` | `null` | no |
| runners_lambda_s3_object_version | S3 object version for runners lambda function. Useful if S3 versioning is enabled on the source bucket. | `any` | `null` | no |
| runners_lambda_zip | File location of the lambda zip file for scaling runners. | `string` | `null` | no |
| runners_maximum_count | The maximum number of runners that will be created. | `number` | `3` | no |
| runners_scale_down_lambda_timeout | Time out for the scale down lambda in seconds. | `number` | `60` | no |
| runners_scale_up_lambda_timeout | Time out for the scale up lambda in seconds. | `number` | `30` | no |
| scale_down_schedule_expression | Scheduler expression to check every x for scale down. | `string` | `"cron(*/5 * * * ? *)"` | no |
| scale_up_reserved_concurrent_executions | Amount of reserved concurrent executions for the scale-up lambda function. A value of 0 disables lambda from being triggered and -1 removes any concurrency limitations. | `number` | `1` | no |
| subnet_ids | List of subnets in which the action runners will be launched; the subnets need to be subnets in the `vpc_id`. | `list(string)` | n/a | yes |
| syncer_lambda_s3_key | S3 key for syncer lambda function. Required if using an S3 bucket to specify lambdas. | `any` | `null` | no |
| syncer_lambda_s3_object_version | S3 object version for syncer lambda function. Useful if S3 versioning is enabled on the source bucket. | `any` | `null` | no |
| tags | Map of tags that will be added to created resources. By default resources will be tagged with name and environment. | `map(string)` | `{}` | no |
| userdata_post_install | Script to be run after the GitHub Actions runner is installed on the EC2 instances. | `string` | `""` | no |
| userdata_pre_install | Script to be run before the GitHub Actions runner is installed on the EC2 instances. | `string` | `""` | no |
| userdata_template | Alternative user-data template, replacing the default template. By providing your own user_data you have to take care of installing all required software, including the action runner. Variables `userdata_pre/post_install` are ignored. | `string` | `null` | no |
| volume_size | Size of runner volume. | `number` | `30` | no |
| vpc_id | The VPC for security groups of the action runners. | `string` | n/a | yes |
| webhook_lambda_s3_key | S3 key for webhook lambda function. Required if using an S3 bucket to specify lambdas. | `any` | `null` | no |
| webhook_lambda_s3_object_version | S3 object version for webhook lambda function. Useful if S3 versioning is enabled on the source bucket. | `any` | `null` | no |
| webhook_lambda_timeout | Time out of the webhook lambda in seconds. | `number` | `10` | no |
| webhook_lambda_zip | File location of the webhook lambda zip file. | `string` | `null` | no |
| Name | Description |
|---|---|
| binaries_syncer | n/a |
| queues | SQS queues. |
| runners | n/a |
| ssm_parameters | n/a |
| webhook | n/a |
We welcome contributions; please check out the contribution guide. Be aware we use pre-commit hooks to update the docs.
This module is part of the Philips Forest.
```
   ___                   _
  / __\__  _ __ ___  ___| |_
 / _\/ _ \| '__/ _ \/ __| __|
/ / | (_) | | |  __/\__ \ |_
\/   \___/|_|  \___||___/\__|
```
Infrastructure
Talk to the forestkeepers in the `runners`-channel on Slack.