- Clone this repo:
git clone git@github.com:aws-observability/aws-otel-test-framework.git
- Clone the ADOT Collector repo:
git clone git@github.com:aws-observability/aws-otel-collector.git
- Install Terraform: https://learn.hashicorp.com/tutorials/terraform/install-cli
- Install Docker Compose: https://docs.docker.com/compose/install/
- Run one of the test cases:
cd aws-otel-test-framework/terraform/mock
terraform init
terraform apply -var="testcase=../testcases/otlp_mock"
terraform destroy
- Builds the collector image from the directory ../aws-otel-collector
- Runs the collector, sample app, and mock server in Docker.
- Validates that the mock server receives data from the collector.
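While terraform apply is running, you can confirm that the three containers came up by listing them with Docker (a minimal check; the container names are generated by the framework and will vary):

```shell
# List running containers; the collector, sample app, and mock server
# started by the mock test case should all appear here.
docker ps
```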
If you want to debug on a specific platform, you can also use this testing framework to run your test case in multiple AWS environments, including EC2, ECS, and EKS.
- Docker installed locally
- AWS CLI installed and configured locally. Refer to: https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html
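If your AWS credentials are not configured yet, a minimal sketch using standard AWS CLI commands looks like this (the profile and region you enter are up to you):

```shell
# Configure default credentials and region interactively.
aws configure

# Verify that the credentials resolve to the account you expect to test in.
aws sts get-caller-identity
```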
First, create a unique S3 bucket identifier that will be appended to your S3 bucket names. This ensures that the S3 bucket names are globally unique. The UUID can be generated with any method of your choosing. See here for S3 bucket naming rules.
export TF_VAR_bucketUUID=$(dd if=/dev/urandom bs=1k count=1k | shasum | cut -b 1-8)
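Any generation method works; for example, a hedged alternative using uuidgen (available on most Linux and macOS systems), lower-cased and shortened so it stays valid in S3 bucket names:

```shell
# Generate a short, lowercase identifier suitable for appending to S3 bucket names.
export TF_VAR_bucketUUID=$(uuidgen | tr 'A-Z' 'a-z' | tr -d '-' | cut -c1-8)
echo "$TF_VAR_bucketUUID"
```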
Setup only needs to be run once. It creates:
- one IAM role
- one VPC
- one security group
- two ECR repos, one for sample apps and one for the mock server
- one Amazon Managed Service for Prometheus endpoint
- one S3 bucket and one DynamoDB table
Run
cd terraform/setup && terraform init && terraform apply -auto-approve
Then run
cd terraform/imagebuild && terraform init && terraform apply -auto-approve
This task builds and pushes the sample app and mock server images to the ECR repos so that the following tests can use them.
Remember: if you change the sample apps or the mock server, you need to rerun this imagebuild task.
Note: imagebuild publishes multi-arch images to ECR that are compatible with the linux/amd64 and linux/arm64 architectures.
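To confirm that a pushed image really carries both architectures, one option is to inspect its manifest list; the repository URI below is a placeholder for your own ECR repo:

```shell
# Inspect the manifest list; both linux/amd64 and linux/arm64 entries
# should be listed. Replace the URI placeholders with your actual ECR repository.
docker manifest inspect {{account-id}}.dkr.ecr.{{region}}.amazonaws.com/{{repo}}:{{tag}}
```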
Prerequisite:
- You are required to run the basic setup components once if you and other developers have not set them up before.
- Uncomment the backend configuration to share the setup's Terraform state.
Advantage:
- Avoids creating duplicate resources, such as the VPC, in the same account and hitting duplicate-resource errors when running test cases.
- Shares up-to-date resources with other developers instead of creating the required resources from scratch.
cd aws-otel-test-framework/terraform/setup
terraform init
terraform apply
Please build your image with the new component, push the image to Docker Hub, and record the image link; it will be used in your testing.
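A minimal build-and-push sequence could look like the following sketch; the repository name and tag are placeholders, and it assumes a Dockerfile at the collector repo root (adjust to match the build instructions in aws-otel-collector):

```shell
# Build an image containing your new component and push it to Docker Hub.
# The repository name and tag below are placeholders.
cd ../aws-otel-collector
docker build -t {{your dockerhub repo}}:{{your tag}} .
docker push {{your dockerhub repo}}:{{your tag}}
```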
cd terraform/ec2 && terraform init && terraform apply -auto-approve \
-var="aoc_image_repo={{the docker image repo name you just pushed}}" \
-var="aoc_version={{the aoc binary version}}" \
-var="testcase=../testcases/{{your test case folder name}}" \
-var-file="../testcases/{{your test case folder name}}/parameters.tfvars"
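For example, a hypothetical run against the otlp_mock test case might look like the sketch below (the image repo, version, and the presence of a parameters.tfvars file in that test case folder are all assumptions here):

```shell
# All values below are illustrative placeholders.
cd terraform/ec2 && terraform init && terraform apply -auto-approve \
  -var="aoc_image_repo=your-dockerhub-user/aws-otel-collector-test" \
  -var="aoc_version=v0.1-dev" \
  -var="testcase=../testcases/otlp_mock" \
  -var-file="../testcases/otlp_mock/parameters.tfvars"
```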
Don't forget to clean up your resources:
terraform destroy -auto-approve
cd terraform/ecs && terraform init && terraform apply -auto-approve \
-var="aoc_image_repo={{the docker image repo name you just pushed}}" \
-var="aoc_version={{ the docker image tag name}}" \
-var="testcase=../testcases/{{your test case folder name}}" \
-var-file="../testcases/{{your test case folder name}}/parameters.tfvars"
Don't forget to clean up your resources:
terraform destroy -auto-approve
Prerequisite: you are required to create an EKS cluster in your account. See the cdk_infra directory for deploying clusters.
cd terraform/eks && terraform init && terraform apply -auto-approve \
-var="aoc_image_repo={{the docker image you just pushed}}" \
-var="aoc_version={{ the docker image tag name}}" \
-var="testcase=../testcases/{{your test case folder name}}" \
-var-file="../testcases/{{your test case folder name}}/parameters.tfvars" \
-var="eks_cluster_name={{the eks cluster name in your account}}"
Don't forget to clean up your resources:
terraform destroy -auto-approve \
-var="eks_cluster_name={the eks cluster name in your account}"
cd terraform/eks_fargate_setup && terraform apply -auto-approve -var="eks_cluster_name=<your_cluster>"
Add -var="deployment_type=fargate" to the EKS creation statement.
Supported tests
- otlp_mock
Unsupported tests
- otlp_trace
  - This is because no STS role is given to the sample app.
Run the test:
cd terraform/eks && terraform apply -auto-approve \
-var="aoc_image_repo={{the docker image you just pushed}}" \
-var="aoc_version={{ the docker image tag name}}" \
-var="testcase=../testcases/{{your test case folder name}}" \
-var-file="../testcases/{{your test case folder name}}/parameters.tfvars" \
-var="eks_cluster_name={{the eks cluster name in your account}}" \
-var="deployment_type=fargate"
Don't forget to clean up your resources:
terraform destroy -auto-approve \
-var="eks_cluster_name={{the eks cluster name in your account}}" \
-var="deployment_type=fargate"
Prerequisite: you are required to build aotutil to check the patch status:
make build-aotutil
cd terraform/soaking && terraform init && terraform apply -auto-approve \
-var="testing_ami={{ami need to test with such as soaking_window}}" \
-var="aoc_image_repo={{the docker image you just pushed}}" \
-var="aoc_version={{ the docker image tag name}}" \
-var="testcase=../testcases/{{your test case folder name}}" \
-var-file="../testcases/{{your test case folder name}}/parameters.tfvars"
Don't forget to clean up your resources:
terraform destroy -auto-approve
cd terraform/soaking && terraform init && terraform apply -auto-approve \
-var="negative_soaking=true" \
-var="testing_ami={{ami need to test with such as soaking_window}}" \
-var="aoc_image_repo={{the docker image you just pushed}}" \
-var="aoc_version={{ the docker image tag name}}" \
-var="testcase=../testcases/{{your test case folder name}}" \
-var-file="../testcases/{{your test case folder name}}/parameters.tfvars"
Don't forget to clean up your resources:
terraform destroy -auto-approve
cd terraform/canary && terraform init && terraform apply -auto-approve \
-var="aoc_image_repo={{the docker image you just pushed}}" \
-var="aoc_version={{ the docker image tag name}}" \
-var="testcase=../testcases/{{your test case folder name}}" \
-var-file="../testcases/{{your test case folder name}}/parameters.tfvars"
Don't forget to clean up your resources:
terraform destroy -auto-approve
Batch testing allows a set of tests to be run synchronously. To do this, a test-case-batch file is required in the ./terraform directory.
The format of the test-case-batch file is as follows:
serviceName1 testCase1 additionalValues1
serviceName2 testCase2 additionalValues2
serviceName3 testCase3 additionalValues3
serviceNameN testCaseN additionalValuesN
The values for these fields are as follows:
- serviceName: one of EKS, EKS_ARM64, EKS_FARGATE, EKS_ADOT_OPERATOR, EKS_ADOT_OPERATOR_ARM_64, ECS, EC2
- testCase: must be an applicable test case in the terraform/testcases directory
- additionalValues: for EC2 tests the testing_ami value is expected; for ECS tests the launch_type variable is expected; for EKS_ARM64 tests a pipe-delimited string of region|clustername|amp_endpoint is expected
It is also expected that TF_VAR_aoc_version and TF_VAR_aoc_image_repo are set to valid values pointing to a Collector image and repository to utilize. Default aoc_image_repo values can be used, but TF_VAR_aoc_version must be specified.
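As an illustration only, a hypothetical test-case-batch file and the environment variables the batch run expects could be set up like this (every value, the AMI key, launch type, cluster details, image repo, and version, is a placeholder to replace with real ones from your account):

```shell
# Write a hypothetical ./terraform/test-case-batch file; all values are placeholders.
cat > terraform/test-case-batch <<'EOF'
EC2 otlp_mock soaking_window
ECS otlp_mock EC2
EKS_ARM64 otlp_mock us-west-2|my-arm64-cluster|https://aps-workspaces.us-west-2.amazonaws.com/workspaces/ws-placeholder
EOF

# Point the batch run at a Collector image and version (placeholders shown).
export TF_VAR_aoc_image_repo=your-dockerhub-user/aws-otel-collector-test
export TF_VAR_aoc_version=v0.1-dev
```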
To execute the tests, run
make execute-batch-test
To clean up the successful test run cache, run
make postBatchClean
## 3. Optional add-on
#### 3.1. Upload test case's Terraform state to S3 bucket
Prerequisite: you are required to run the test case before uploading any Terraform state to S3.
Advantage: records which resources were created by the test case and serves as a backup for destroying those resources when terraform destroy fails.
cd terraform/add_on/remote_state && terraform init && terraform apply \
-var="testcase=../../testcases/{{your test case folder name}}" \
-var="testing_id={{test case unique id}}" \
-var="folder_name={{folder name when uploading to s3}} \
-var="platform={{platform running (ec2, ecs, eks, canary,...)"\