- Development environment
- Enterprise-scale environment
- Collect detailed k8s logs
- Force terminate cluster
- Connect to a product pod
- Connect to the execution environment pod
- Connect to the RDS database
- Enable detailed resources monitoring
- Rebuild atlassian/dcapt docker image on the fly
- Run tests locally from docker container
- Run tests from execution environment pod
- Retry to copy run results from the execution environment pod to local
- Debug AWS required policies
To create the development environment:
- Navigate to the dc-app-performance-toolkit/app/util/k8s folder
- Set AWS credentials in the aws_envs file
- Set correct values in the dcapt-small.tfvars file: environment_name, products, license (see the illustrative snippet below)
- Run the install development environment command:
docker run --pull=always --env-file aws_envs \
-v "/$PWD/dcapt-small.tfvars:/data-center-terraform/conf.tfvars" \
-v "/$PWD/dcapt-snapshots.json:/data-center-terraform/dcapt-snapshots.json" \
-v "/$PWD/logs:/data-center-terraform/logs" \
-it atlassianlabs/terraform:2.9.3 ./install.sh -c conf.tfvars

Note: the install and uninstall commands have to use the same atlassianlabs/terraform:TAG image tag.
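The aws_envs file is passed to docker via --env-file, so it is expected to contain plain KEY=value lines, and dcapt-small.tfvars uses standard tfvars assignments. A minimal sketch of both files (the values and the exact license variable name are illustrative assumptions; follow the comments inside aws_envs and dcapt-small.tfvars themselves for the authoritative names):
# aws_envs (illustrative)
AWS_ACCESS_KEY_ID=your_access_key_id
AWS_SECRET_ACCESS_KEY=your_secret_access_key
AWS_SESSION_TOKEN=your_session_token_if_used

# dcapt-small.tfvars (illustrative values)
environment_name = "dcapt-product-small"
products         = ["jira"]
license          = "your product license key"  # the real variable name may be product-specific; check the file's comments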
To terminate the development environment:
- Navigate to the dc-app-performance-toolkit/app/util/k8s folder
- Set AWS credentials in the aws_envs file
- Run the uninstall development environment command:
docker run --pull=always --env-file aws_envs \
-v "/$PWD/dcapt-small.tfvars:/data-center-terraform/conf.tfvars" \
-v "/$PWD/dcapt-snapshots.json:/data-center-terraform/dcapt-snapshots.json" \
-v "/$PWD/logs:/data-center-terraform/logs" \
-it atlassianlabs/terraform:2.9.3 ./uninstall.sh -c conf.tfvars
To create the enterprise-scale environment:
- Navigate to the dc-app-performance-toolkit/app/util/k8s folder
- Set AWS credentials in the aws_envs file
- Set correct values in the dcapt.tfvars file: environment_name, products, license (same format as the dcapt-small.tfvars example above)
- Run the install enterprise-scale environment command:
docker run --pull=always --env-file aws_envs \
-v "/$PWD/dcapt.tfvars:/data-center-terraform/conf.tfvars" \
-v "/$PWD/dcapt-snapshots.json:/data-center-terraform/dcapt-snapshots.json" \
-v "/$PWD/logs:/data-center-terraform/logs" \
-it atlassianlabs/terraform:2.9.3 ./install.sh -c conf.tfvars

Note: the install and uninstall commands have to use the same atlassianlabs/terraform:TAG image tag.
To terminate the enterprise-scale environment:
- Navigate to the dc-app-performance-toolkit/app/util/k8s folder
- Set AWS credentials in the aws_envs file
- Run the uninstall enterprise-scale environment command:
docker run --pull=always --env-file aws_envs \
-v "/$PWD/dcapt.tfvars:/data-center-terraform/conf.tfvars" \
-v "/$PWD/dcapt-snapshots.json:/data-center-terraform/dcapt-snapshots.json" \
-v "/$PWD/logs:/data-center-terraform/logs" \
-it atlassianlabs/terraform:2.9.3 ./uninstall.sh -c conf.tfvars
Note: on unsuccessful deployment, detailed logs are generated automatically in the dc-app-performance-toolkit/app/util/logs/k8s_logs folder.

To generate detailed k8s logs:
- Navigate to the dc-app-performance-toolkit/app/util/k8s folder
- Set AWS credentials in the aws_envs file
- Run the following commands:
export ENVIRONMENT_NAME=your_environment_name
export REGION=us-east-2
docker run --pull=always --env-file aws_envs \
-v "/$PWD/k8s_logs:/data-center-terraform/k8s_logs" \
-v "/$PWD/logs:/data-center-terraform/logs" \
-it atlassianlabs/terraform:2.9.3 ./scripts/collect_k8s_logs.sh atlas-$ENVIRONMENT_NAME-cluster $REGION k8s_logs
To force terminate the cluster:
- Navigate to the dc-app-performance-toolkit/app/util/k8s folder
- Set AWS credentials in the aws_envs file
- Run the following commands:
export ENVIRONMENT_NAME=your_environment_name
export REGION=us-east-2
docker run --pull=always --env-file aws_envs \
--workdir="//data-center-terraform" \
--entrypoint="python" \
-v "/$PWD/terminate_cluster.py:/data-center-terraform/terminate_cluster.py" \
atlassian/dcapt terminate_cluster.py --cluster_name atlas-$ENVIRONMENT_NAME-cluster --aws_region $REGION
To connect to a product pod:
- Navigate to the dc-app-performance-toolkit/app/util/k8s folder
- Set AWS credentials in the aws_envs file
- Set your environment name and region:
export ENVIRONMENT_NAME=your_environment_name
export REGION=us-east-2
- SSH to the terraform container:
docker run --pull=always --env-file aws_envs \
-e ENVIRONMENT_NAME=$ENVIRONMENT_NAME \
-e REGION=$REGION \
-it atlassianlabs/terraform:2.9.3 bash
- Connect to the product pod. The example below is for the jira pod with number 0; for another product or pod number, change PRODUCT_POD accordingly (to list the available pods, see the example at the end of this section):
export PRODUCT_POD=jira-0
aws eks update-kubeconfig --name atlas-$ENVIRONMENT_NAME-cluster --region $REGION
kubectl exec -it $PRODUCT_POD -n atlassian -- bash
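If you are not sure which product pods exist or how they are numbered, a quick way to list them (run inside the same terraform container, after aws eks update-kubeconfig) is:
kubectl get pods -n atlassian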
To connect to the execution environment pod:
- Navigate to the dc-app-performance-toolkit folder
- Set AWS credentials in the aws_envs file
- Set your environment name and region:
export ENVIRONMENT_NAME=your_environment_name
export REGION=us-east-2
- SSH to the terraform container:
docker run --pull=always --env-file ./app/util/k8s/aws_envs \
-e ENVIRONMENT_NAME=$ENVIRONMENT_NAME \
-e REGION=$REGION \
-v "/$PWD:/data-center-terraform/dc-app-performance-toolkit" \
-it atlassianlabs/terraform:2.9.3 bash
- Copy the code base and connect to the execution environment pod:
aws eks update-kubeconfig --name atlas-$ENVIRONMENT_NAME-cluster --region $REGION
exec_pod_name=$(kubectl get pods -n atlassian -l=exec=true --no-headers -o custom-columns=":metadata.name")
kubectl exec -it "$exec_pod_name" -n atlassian -- rm -rf /dc-app-performance-toolkit
kubectl cp --retries 10 dc-app-performance-toolkit atlassian/"$exec_pod_name":/dc-app-performance-toolkit
kubectl exec -it "$exec_pod_name" -n atlassian -- bash
To connect to the RDS database:
- Navigate to the dc-app-performance-toolkit/app/util/k8s folder
- Set AWS credentials in the aws_envs file
- Export environment variables for environment name, region and product:
export ENVIRONMENT_NAME=your_environment_name
export REGION=us-east-2
export PRODUCT=jira  # PRODUCT options: jira/confluence/bitbucket. For jsm use jira as well.
- Start and ssh to the atlassianlabs/terraform docker container:
docker run --pull=always --env-file aws_envs \
-e ENVIRONMENT_NAME=$ENVIRONMENT_NAME \
-e REGION=$REGION \
-e PRODUCT=$PRODUCT \
-v "/$PWD/script-runner.yml:/data-center-terraform/script-runner.yml" \
-it atlassianlabs/terraform:2.9.3 bash
- Run the following commands one by one inside the docker container:
aws eks update-kubeconfig --name atlas-$ENVIRONMENT_NAME-cluster --region $REGION
kubectl apply -f script-runner.yml
rds_endpoint=$(aws rds --region $REGION describe-db-instances --filters "Name=db-instance-id,Values=atlas-${ENVIRONMENT_NAME}-${PRODUCT}-db" --query "DBInstances[].Endpoint.Address" --output text)
kubectl exec -it script-runner -- psql -h $rds_endpoint -d $PRODUCT -U atl$PRODUCT
- Default DB password: Password1!
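Once connected, standard psql meta-commands are handy for a quick sanity check: \conninfo shows the current connection details, \dt lists the tables in the current schema, and \q quits. For example:
\conninfo
\dt
\q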
To enable detailed CPU/Memory monitoring and Grafana dashboards for visualisation:
- Navigate to the dc-app-performance-toolkit/app/util/k8s folder
- Set AWS credentials in the aws_envs file
- Go to the dcapt.tfvars file -> Monitoring section
- Uncomment the following required variables and set them to true: monitoring_enabled and monitoring_grafana_expose_lb (see the example snippet after this list)
- Modify other optional variables if needed
- Run install.sh as described in Create enterprise-scale environment
- Get the Grafana URL:
export ENVIRONMENT_NAME=your_environment_name
export REGION=us-east-2
docker run --pull=always --env-file aws_envs \
-e ENVIRONMENT_NAME=$ENVIRONMENT_NAME \
-e REGION=$REGION \
-it atlassianlabs/terraform:2.9.3 bash
aws eks update-kubeconfig --name atlas-$ENVIRONMENT_NAME-cluster --region $REGION
kubectl get svc -n kube-monitoring | grep grafana
- Open the Grafana URL in the browser. Default Grafana creds: admin/prom-operator
- Go to Dashboards -> General -> select one of the available dashboards.
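A minimal sketch of how the Monitoring section of dcapt.tfvars could look with the two required variables uncommented (standard tfvars assignment syntax; any other optional monitoring variables are left at their defaults here):
# Monitoring section of dcapt.tfvars (illustrative)
monitoring_enabled           = true
monitoring_grafana_expose_lb = true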
In case any changes are needed in the atlassian/dcapt image:
- Modify the dc-app-performance-toolkit/Dockerfile file locally
- Run tests from the execution environment pod with the extra flag --docker_image_rebuild at the end (see the example command below)

Note: this option is not suitable for full-scale performance runs because the local network is a bottleneck.
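For example, a sketch of the jira command from the Run tests from execution environment pod section with the rebuild flag appended at the end (assuming the flag is simply added after the yml file, as "extra flag in the end" suggests; adjust the yml file for your product):
docker run --pull=always --env-file ./app/util/k8s/aws_envs \
-e REGION=us-east-2 \
-e ENVIRONMENT_NAME=$ENVIRONMENT_NAME \
-v "/$PWD:/data-center-terraform/dc-app-performance-toolkit" \
-v "/$PWD/app/util/k8s/bzt_on_pod.sh:/data-center-terraform/bzt_on_pod.sh" \
-it atlassianlabs/terraform:2.9.3 bash bzt_on_pod.sh jira.yml --docker_image_rebuild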
To run tests locally from the docker container:
- Navigate to the dc-app-performance-toolkit folder
- Select the needed product and run the command below (the example is for jira):
docker run --pull=always --shm-size=4g -v "/$PWD:/dc-app-performance-toolkit" atlassian/dcapt jira.yml
To run tests from the execution environment pod:
- Navigate to the dc-app-performance-toolkit folder
- Set AWS credentials in the aws_envs file
- Set the environment name:
export ENVIRONMENT_NAME=your_environment_name
- Select the needed product and run the command below (the example is for jira):
docker run --pull=always --env-file ./app/util/k8s/aws_envs \
-e REGION=us-east-2 \
-e ENVIRONMENT_NAME=$ENVIRONMENT_NAME \
-v "/$PWD:/data-center-terraform/dc-app-performance-toolkit" \
-v "/$PWD/app/util/k8s/bzt_on_pod.sh:/data-center-terraform/bzt_on_pod.sh" \
-it atlassianlabs/terraform:2.9.3 bash bzt_on_pod.sh jira.yml
To retry copying run results from the execution environment pod to local:
- Navigate to the dc-app-performance-toolkit folder
- Set AWS credentials in the aws_envs file
- Set the environment name:
export ENVIRONMENT_NAME=your_environment_name
- Run the following command to copy results from the execution environment pod to local:
docker run --pull=always --env-file ./app/util/k8s/aws_envs \
-e REGION=us-east-2 \
-e ENVIRONMENT_NAME=$ENVIRONMENT_NAME \
-v "/$PWD:/data-center-terraform/dc-app-performance-toolkit" \
-v "/$PWD/app/util/k8s/copy_run_results.sh:/data-center-terraform/copy_run_results.sh" \
-it atlassianlabs/terraform:2.9.3 bash copy_run_results.sh
To debug AWS required policies:
- Navigate to the dc-app-performance-toolkit/app/util/k8s folder
- Set AWS credentials in the aws_envs file
- Start and ssh to the atlassianlabs/terraform docker container:
docker run --pull=always --env-file aws_envs \
-it atlassianlabs/terraform:2.9.3 bash
- Make sure you have IAM policies named policy1 and policy2, created from policy1.json and policy2.json
- Run the following commands one by one inside the docker container to get the effective policy permissions:
ACCOUNT_ID=$(aws sts get-caller-identity --query 'Account' --output text)
POLICY_1_VERSION_ID=$(aws iam get-policy --policy-arn arn:aws:iam::$ACCOUNT_ID:policy/policy1 --query 'Policy.DefaultVersionId' --output text)
POLICY_2_VERSION_ID=$(aws iam get-policy --policy-arn arn:aws:iam::$ACCOUNT_ID:policy/policy2 --query 'Policy.DefaultVersionId' --output text)
aws iam get-policy-version --policy-arn arn:aws:iam::$ACCOUNT_ID:policy/policy1 --version-id $POLICY_1_VERSION_ID
aws iam get-policy-version --policy-arn arn:aws:iam::$ACCOUNT_ID:policy/policy2 --version-id $POLICY_2_VERSION_ID