Terraform modules for provisioning and managing AWS Glue resources.
A number of Glue resources are supported; refer to the modules directory for the full list and details of each module.
This project is part of our comprehensive "SweetOps" approach towards DevOps.
It's 100% Open Source and licensed under the APACHE2.
For a complete example, see examples/complete. The example provisions a Glue catalog database and a Glue crawler that crawls a public dataset in an S3 bucket and writes the metadata into the Glue catalog database. It also provisions an S3 bucket to hold a Glue job Python script, and a destination S3 bucket for the Glue job results. Finally, it provisions a Glue job pointing to the Python script in the S3 bucket, and a Glue trigger that runs the Glue job on a schedule. The Glue job processes the dataset, cleans up the data, and writes the result to the destination S3 bucket.
For an example of how to provision source and destination S3 buckets, a Glue Catalog database and table, and a Glue crawler that processes data in the source S3 bucket and writes the result into the destination S3 bucket, see examples/crawler.
For automated tests of the examples using bats and Terratest (which deploy and test the examples on AWS), see the test directory.
```hcl
locals {
  enabled          = module.this.enabled
  s3_bucket_source = module.s3_bucket_source.bucket_id
  role_arn         = module.iam_role.arn

  # The dataset used in this example consists of Medicare-Provider payment data downloaded from two Data.CMS.gov sites:
  # Inpatient Prospective Payment System Provider Summary for the Top 100 Diagnosis-Related Groups - FY2011, and Inpatient Charge Data FY 2011.
  # AWS modified the data to introduce a couple of erroneous records at the tail end of the file
  data_source = "s3://awsglue-datasets/examples/medicare/Medicare_Hospital_Provider.csv"
}

module "glue_catalog_database" {
  source = "cloudposse/glue/aws//modules/glue-catalog-database"
  # Cloud Posse recommends pinning every module to a specific version
  # version = "x.x.x"

  catalog_database_description = "Glue Catalog database for the data located in ${local.data_source}"
  location_uri                 = local.data_source

  attributes = ["payments"]
  context    = module.this.context
}

module "glue_catalog_table" {
  source = "cloudposse/glue/aws//modules/glue-catalog-table"
  # Cloud Posse recommends pinning every module to a specific version
  # version = "x.x.x"

  catalog_table_name        = "medicare"
  catalog_table_description = "Test Glue Catalog table"
  database_name             = module.glue_catalog_database.name

  storage_descriptor = {
    # Physical location of the table
    location = local.data_source
  }

  context = module.this.context
}

resource "aws_lakeformation_permissions" "default" {
  principal   = local.role_arn
  permissions = ["ALL"]

  table {
    database_name = module.glue_catalog_database.name
    name          = module.glue_catalog_table.name
  }
}

# Crawls the data in the S3 bucket and puts the results into a database in the Glue Data Catalog.
# The crawler will read the first 2 MB of data from that file, and recognize the schema.
# After that, the crawler will sync the table `medicare` in the Glue database.
module "glue_crawler" {
  source = "cloudposse/glue/aws//modules/glue-crawler"
  # Cloud Posse recommends pinning every module to a specific version
  # version = "x.x.x"

  crawler_description = "Glue crawler that processes data in ${local.data_source} and writes the metadata into a Glue Catalog database"
  database_name       = module.glue_catalog_database.name
  role                = local.role_arn
  schedule            = "cron(0 1 * * ? *)"

  schema_change_policy = {
    delete_behavior = "LOG"
    update_behavior = null
  }

  catalog_target = [
    {
      database_name = module.glue_catalog_database.name
      tables        = [module.glue_catalog_table.name]
    }
  ]

  context = module.this.context

  depends_on = [
    aws_lakeformation_permissions.default
  ]
}

# Source S3 bucket to store Glue Job scripts
module "s3_bucket_source" {
  source = "cloudposse/s3-bucket/aws"
  # Cloud Posse recommends pinning every module to a specific version
  # version = "x.x.x"

  acl                          = "private"
  versioning_enabled           = false
  force_destroy                = true
  allow_encrypted_uploads_only = true
  allow_ssl_requests_only      = true
  block_public_acls            = true
  block_public_policy          = true
  ignore_public_acls           = true
  restrict_public_buckets      = true

  attributes = ["source"]
  context    = module.this.context
}

resource "aws_s3_object" "job_script" {
  bucket        = local.s3_bucket_source
  key           = "data_cleaning.py"
  source        = "${path.module}/scripts/data_cleaning.py"
  force_destroy = true
  etag          = filemd5("${path.module}/scripts/data_cleaning.py")
  tags          = module.this.tags
}

# Destination S3 bucket to store Glue Job results
module "s3_bucket_destination" {
  source = "cloudposse/s3-bucket/aws"
  # Cloud Posse recommends pinning every module to a specific version
  # version = "x.x.x"

  acl                          = "private"
  versioning_enabled           = false
  force_destroy                = true
  allow_encrypted_uploads_only = true
  allow_ssl_requests_only      = true
  block_public_acls            = true
  block_public_policy          = true
  ignore_public_acls           = true
  restrict_public_buckets      = true

  attributes = ["destination"]
  context    = module.this.context
}

module "iam_role" {
  source = "cloudposse/iam-role/aws"
  # Cloud Posse recommends pinning every module to a specific version
  # version = "x.x.x"

  principals = {
    "Service" = ["glue.amazonaws.com"]
  }

  managed_policy_arns = [
    "arn:aws:iam::aws:policy/service-role/AWSGlueServiceRole"
  ]

  policy_document_count = 0
  policy_description    = "Policy for AWS Glue with access to EC2, S3, and Cloudwatch Logs"
  role_description      = "Role for AWS Glue with access to EC2, S3, and Cloudwatch Logs"

  context = module.this.context
}

module "glue_workflow" {
  source = "cloudposse/glue/aws//modules/glue-workflow"
  # Cloud Posse recommends pinning every module to a specific version
  # version = "x.x.x"

  workflow_description = "Test Glue Workflow"
  max_concurrent_runs  = 2

  context = module.this.context
}

module "glue_job" {
  source = "cloudposse/glue/aws//modules/glue-job"
  # Cloud Posse recommends pinning every module to a specific version
  # version = "x.x.x"

  job_description   = "Glue Job that runs data_cleaning.py Python script"
  role_arn          = local.role_arn
  glue_version      = var.glue_version
  worker_type       = "Standard"
  number_of_workers = 2
  max_retries       = 2
  # The job timeout in minutes
  timeout = 20

  command = {
    # The name of the job command. Defaults to `glueetl`.
    # Use `pythonshell` for Python Shell Job Type, or `gluestreaming` for Streaming Job Type.
    name            = "glueetl"
    script_location = format("s3://%s/data_cleaning.py", local.s3_bucket_source)
    python_version  = 3
  }

  context = module.this.context
}

module "glue_trigger" {
  source = "cloudposse/glue/aws//modules/glue-trigger"
  # Cloud Posse recommends pinning every module to a specific version
  # version = "x.x.x"

  workflow_name       = module.glue_workflow.name
  trigger_enabled     = true
  start_on_creation   = true
  trigger_description = "Glue Trigger that triggers a Glue Job on a schedule"
  schedule            = "cron(15 12 * * ? *)"
  type                = "SCHEDULED"

  actions = [
    {
      job_name = module.glue_job.name
      # The job run timeout in minutes. It overrides the timeout value of the job
      timeout = 10
    }
  ]

  context = module.this.context
}
```
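The examples/crawler example mentioned earlier crawls the source S3 bucket directly instead of using a catalog target. Below is a minimal, hedged sketch of that variation, reusing the database, role, and source bucket defined above. The `s3_target` input name is an assumption (it mirrors the `s3_target` block of the underlying `aws_glue_crawler` resource); confirm it against the glue-crawler module's inputs.

```hcl
# Hedged sketch: crawl the source S3 bucket and write the metadata into the Glue Catalog database
module "glue_crawler_s3" {
  source = "cloudposse/glue/aws//modules/glue-crawler"
  # Cloud Posse recommends pinning every module to a specific version
  # version = "x.x.x"

  crawler_description = "Glue crawler that crawls the source S3 bucket and writes the metadata into the Glue Catalog database"
  database_name       = module.glue_catalog_database.name
  role                = local.role_arn

  schema_change_policy = {
    delete_behavior = "LOG"
    update_behavior = null
  }

  # `s3_target` is an assumed input mirroring aws_glue_crawler's s3_target block
  s3_target = [
    {
      path = format("s3://%s", module.s3_bucket_source.bucket_id)
    }
  ]

  context = module.this.context
}
```

See examples/crawler for the complete, working configuration, including the destination bucket and the Glue Catalog table.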
Available targets:

Name | Description |
---|---|
help | Help screen |
help/all | Display help for all targets |
help/short | This help short screen |
lint | Lint terraform code |
Name | Version |
---|---|
terraform | >= 1.0 |
aws | >= 3.74.0 |
awsutils | >= 0.11.1 |
No providers.
Name | Source | Version |
---|---|---|
this | cloudposse/label/null | 0.25.0 |
No resources.
Name | Description | Type | Default | Required |
---|---|---|---|---|
additional_tag_map | Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`. This is for some rare cases where resources want additional configuration of tags and therefore take a list of maps with tag key, value, and additional configuration. | `map(string)` | `{}` | no |
attributes | ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`, in the order they appear in the list. New attributes are appended to the end of the list. The elements of the list are joined by the `delimiter` and treated as a single ID element. | `list(string)` | `[]` | no |
context | Single object for setting entire context at once. See description of individual variables for details. Leave string and numeric variables as `null` to use default value. Individual variable settings (non-null) override settings in context object, except for attributes, tags, and additional_tag_map, which are merged. | `any` | `{` | no |
delimiter | Delimiter to be used between ID elements. Defaults to `-` (hyphen). Set to `""` to use no delimiter at all. | `string` | `null` | no |
descriptor_formats | Describe additional descriptors to be output in the `descriptors` output map. Map of maps. Keys are names of descriptors. Values are maps of the form `{<br>  format = string<br>  labels = list(string)<br>}` (Type is `any` so the map values can later be enhanced to provide additional options.) `format` is a Terraform format string to be passed to the `format()` function. `labels` is a list of labels, in order, to pass to the `format()` function. Label values will be normalized before being passed to `format()` so they will be identical to how they appear in `id`. Default is `{}` (`descriptors` output will be empty). | `any` | `{}` | no |
enabled | Set to false to prevent the module from creating any resources | `bool` | `null` | no |
environment | ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT' | `string` | `null` | no |
id_length_limit | Limit `id` to this many characters (minimum 6). Set to `0` for unlimited length. Set to `null` to keep the existing setting, which defaults to `0`. Does not affect `id_full`. | `number` | `null` | no |
label_key_case | Controls the letter case of the `tags` keys (label names) for tags generated by this module. Does not affect keys of tags passed in via the `tags` input. Possible values: `lower`, `title`, `upper`. Default value: `title`. | `string` | `null` | no |
label_order | The order in which the labels (ID elements) appear in the `id`. Defaults to ["namespace", "environment", "stage", "name", "attributes"]. You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present. | `list(string)` | `null` | no |
label_value_case | Controls the letter case of ID elements (labels) as included in `id`, set as tag values, and output by this module individually. Does not affect values of tags passed in via the `tags` input. Possible values: `lower`, `title`, `upper` and `none` (no transformation). Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs. Default value: `lower`. | `string` | `null` | no |
labels_as_tags | Set of labels (ID elements) to include as tags in the `tags` output. Default is to include all labels. Tags with empty values will not be included in the `tags` output. Set to `[]` to suppress all generated tags. Notes: The value of the `name` tag, if included, will be the `id`, not the `name`. Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be changed in later chained modules. Attempts to change it will be silently ignored. | `set(string)` | `[` | no |
name | ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'. This is the only ID element not also included as a `tag`. The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input. | `string` | `null` | no |
namespace | ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique | `string` | `null` | no |
regex_replace_chars | Terraform regular expression (regex) string. Characters matching the regex will be removed from the ID elements. If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits. | `string` | `null` | no |
stage | ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release' | `string` | `null` | no |
tags | Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`). Neither the tag keys nor the tag values will be modified by this module. | `map(string)` | `{}` | no |
tenant | ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for | `string` | `null` | no |
No outputs.
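The requirements listed above translate into a root module's `terraform` block roughly as follows. This is a hedged sketch: the provider source addresses (`hashicorp/aws` and `cloudposse/awsutils`) are assumptions and should be confirmed against the `versions.tf` files in this repository.

```hcl
terraform {
  required_version = ">= 1.0"

  required_providers {
    # Provider sources below are assumptions; verify them against versions.tf
    aws = {
      source  = "hashicorp/aws"
      version = ">= 3.74.0"
    }
    awsutils = {
      source  = "cloudposse/awsutils"
      version = ">= 0.11.1"
    }
  }
}
```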
Like this project? Please give it a ★ on our GitHub! (it helps us a lot)
Are you using this project or any of our other projects? Consider leaving a testimonial. =)
Check out these related projects.
- terraform-aws-components - Catalog of terraform AWS components
For additional context, refer to some of these links.
- Glue Getting Started Guide - Guide for getting oriented with Glue and Spark
- Program AWS Glue ETL Scripts in Python - Documentation about the process of running ETL with AWS Glue and the Python programming language
- Python shell jobs in AWS Glue - Documentation about the process of configuring and running Python shell jobs in AWS Glue
- AWS Glue Jobs unit testing - Illustrates the execution of PyTest unit test cases for AWS Glue jobs in AWS CodePipeline using AWS CodeBuild projects
- AWS Glue knowledge center - Why does my AWS Glue crawler or ETL job fail with the error "Insufficient Lake Formation permission(s)"?
Got a question? We got answers.
File a GitHub issue, send us an email or join our Slack Community.
We are a DevOps Accelerator. We'll help you build your cloud infrastructure from the ground up so you can own it. Then we'll show you how to operate it and stick around for as long as you need us.
Work directly with our team of DevOps experts via email, slack, and video conferencing.
We deliver 10x the value for a fraction of the cost of a full-time engineer. Our track record is not even funny. If you want things done right and you need it done FAST, then we're your best bet.
- Reference Architecture. You'll get everything you need from the ground up built using 100% infrastructure as code.
- Release Engineering. You'll have end-to-end CI/CD with unlimited staging environments.
- Site Reliability Engineering. You'll have total visibility into your apps and microservices.
- Security Baseline. You'll have built-in governance with accountability and audit logs for all changes.
- GitOps. You'll be able to operate your infrastructure via Pull Requests.
- Training. You'll receive hands-on training so your team can operate what we build.
- Questions. You'll have a direct line of communication between our teams via a Shared Slack channel.
- Troubleshooting. You'll get help to triage when things aren't working.
- Code Reviews. You'll receive constructive feedback on Pull Requests.
- Bug Fixes. We'll rapidly work with you to fix any bugs in our projects.
Join our Open Source Community on Slack. It's FREE for everyone! Our "SweetOps" community is where you get to talk with others who share a similar vision for how to rollout and manage infrastructure. This is the best place to talk shop, ask questions, solicit feedback, and work together as a community to build totally sweet infrastructure.
Participate in our Discourse Forums. Here you'll find answers to commonly asked questions. Most questions will be related to the enormous number of projects we support on our GitHub. Come here to collaborate on answers, find solutions, and get ideas about the products and services we value. It only takes a minute to get started! Just sign in with SSO using your GitHub account.
Sign up for our newsletter that covers everything on our technology radar. Receive updates on what we're up to on GitHub as well as awesome new projects we discover.
Join us every Wednesday via Zoom for our weekly "Lunch & Learn" sessions. It's FREE for everyone!
Please use the issue tracker to report any bugs or file feature requests.
If you are interested in being a contributor and want to get involved in developing this project or help out with our other projects, we would love to hear from you! Shoot us an email.
In general, PRs are welcome. We follow the typical "fork-and-pull" Git workflow.
- Fork the repo on GitHub
- Clone the project to your own machine
- Commit changes to your own branch
- Push your work back up to your fork
- Submit a Pull Request so that we can review your changes
NOTE: Be sure to merge the latest changes from "upstream" before making a pull request!
Copyright © 2021-2023 Cloud Posse, LLC
See LICENSE for full details.
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
All other trademarks referenced herein are the property of their respective owners.
This project is maintained and funded by Cloud Posse, LLC. Like it? Please let us know by leaving a testimonial!
We're a DevOps Professional Services company based in Los Angeles, CA. We ❤️ Open Source Software.
We offer paid support on all of our projects.
Check out our other projects, follow us on twitter, apply for a job, or hire us to help with your cloud strategy and implementation.
Erik Osterman | Leo Przybylski | Andriy Knysh |
---|---|---|