
provider/aws: aws_codedeploy_deployment_group Panics when setting on_premises_instance_tag_filter #6617

Merged
merged 1 commit into master from f-aws-codedeploy-onpremisetag
May 11, 2016

Conversation

stack72
Contributor

@stack72 stack72 commented May 11, 2016

Fixes #6593

When setting on_premises_instance_tag_filter, Terraform was not
pushing the changes on the create (due to a spelling mistake). A second
apply would push the tags and then cause a panic. Terraform was building
an ec2.TagFilter struct without checking for optional values. When the
TagFilter was being dereferenced, it caused a panic.
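
The fix pattern can be sketched in isolation. This is a minimal, self-contained illustration, not the provider's actual code (the `TagFilter` struct and `flattenTagFilters` helper here are simplified stand-ins): every optional pointer field must be nil-checked before it is dereferenced.

```go
package main

import "fmt"

// TagFilter mimics the AWS SDK shape, where optional fields are pointers.
type TagFilter struct {
	Key   *string
	Type  *string
	Value *string
}

// flattenTagFilters converts SDK structs into Terraform's map form,
// guarding every pointer before dereferencing it.
func flattenTagFilters(list []*TagFilter) []map[string]string {
	result := make([]map[string]string, 0, len(list))
	for _, tf := range list {
		if tf == nil {
			continue // defensive: skip nil entries rather than append an empty map
		}
		l := make(map[string]string)
		if tf.Key != nil && *tf.Key != "" {
			l["key"] = *tf.Key
		}
		if tf.Type != nil && *tf.Type != "" {
			l["type"] = *tf.Type
		}
		if tf.Value != nil && *tf.Value != "" {
			l["value"] = *tf.Value
		}
		result = append(result, l)
	}
	return result
}

func main() {
	typ, val := "VALUE_ONLY", "staging-qa"
	// Key is nil here; without the nil checks, *tf.Key would panic.
	fmt.Println(flattenTagFilters([]*TagFilter{{Type: &typ, Value: &val}}))
	// prints: [map[type:VALUE_ONLY value:staging-qa]]
}
```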

The basic test is unaffected and a new test was added (to prove behavior with and without `on_premises_instance_tag_filter`):

make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSCodeDeployDeploymentGroup' 2>~/tf.log
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /vendor/)
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAWSCodeDeployDeploymentGroup -timeout 120m
=== RUN   TestAccAWSCodeDeployDeploymentGroup_basic
--- PASS: TestAccAWSCodeDeployDeploymentGroup_basic (98.84s)
=== RUN   TestAccAWSCodeDeployDeploymentGroup_onPremiseTag
--- PASS: TestAccAWSCodeDeployDeploymentGroup_onPremiseTag (41.09s)
PASS
ok      github.com/hashicorp/terraform/builtin/providers/aws    139.107s

@stack72 stack72 force-pushed the f-aws-codedeploy-onpremisetag branch from 588a08a to c082c12 Compare May 11, 2016 16:44
@stack72 stack72 changed the title provider/aws: aws_codedeploy_deployment_group Panics when setting on_premises_instance_tag_filter [WIP] provider/aws: aws_codedeploy_deployment_group Panics when setting on_premises_instance_tag_filter May 11, 2016
@stack72 stack72 force-pushed the f-aws-codedeploy-onpremisetag branch from c082c12 to 251c7aa Compare May 11, 2016 16:50
```diff
 	l["value"] = *tf.Value
 }
-if *tf.Type != "" {
+if tf.Type != nil && *tf.Type != "" {
 	l["type"] = *tf.Type
 }
 result = append(result, l)
```


Isn't this going to append an empty map if `tf` is nil?

Contributor Author

There is a range across the list, so if there is nothing in the list it won't even try to evaluate it. I am running all the tests now to see if this works as expected.
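
The point about `range` can be verified directly: ranging over a nil or empty slice executes the loop body zero times, so nothing gets appended. A tiny hypothetical sketch (`appendedCount` is illustrative, not provider code):

```go
package main

import "fmt"

// appendedCount counts how many times a range loop body would execute.
func appendedCount(filters []*string) int {
	n := 0
	for range filters {
		n++ // never runs for a nil or empty slice
	}
	return n
}

func main() {
	fmt.Println(appendedCount(nil))         // prints 0
	fmt.Println(appendedCount([]*string{})) // prints 0
	s := "x"
	fmt.Println(appendedCount([]*string{&s})) // prints 1
}
```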

@stack72 stack72 force-pushed the f-aws-codedeploy-onpremisetag branch from 251c7aa to ca0e20c Compare May 11, 2016 17:06
@stack72 stack72 force-pushed the f-aws-codedeploy-onpremisetag branch from ca0e20c to 12d2206 Compare May 11, 2016 17:20
@stack72 stack72 changed the title [WIP] provider/aws: aws_codedeploy_deployment_group Panics when setting on_premises_instance_tag_filter provider/aws: aws_codedeploy_deployment_group Panics when setting on_premises_instance_tag_filter May 11, 2016
@jen20
Contributor

jen20 commented May 11, 2016

This LGTM pending Travis.

@jdextraze

I just tried this; it created the tags on the first run, but then it failed:

Error applying plan:

2 error(s) occurred:

* aws_codedeploy_deployment_group.middleware-qa: unexpected EOF
* aws_codedeploy_deployment_group.middleware-dev: unexpected EOF

@stack72
Contributor Author

stack72 commented May 11, 2016

@jdextraze What config did you try? I added an acceptance test that passes, and I also used your config from the issue; both pass.

@stack72
Contributor Author

stack72 commented May 11, 2016

My TF plan looks as follows:

The Terraform execution plan has been generated and is shown below.
Resources are shown in alphabetical order for quick scanning. Green resources
will be created (or destroyed and then created if an existing resource
exists), yellow resources are being changed in-place, and red resources
will be destroyed.

Note: You didn't specify an "-out" parameter to save this plan, so when
"apply" is called, Terraform can't guarantee this is what will execute.

+ aws_codedeploy_app.foo_app
    name:      "" => "foo_app"
    unique_id: "" => "<computed>"

+ aws_codedeploy_deployment_group.foo
    app_name:                                        "" => "foo_app"
    deployment_config_name:                          "" => "CodeDeployDefault.OneAtATime"
    deployment_group_name:                           "" => "foo"
    on_premises_instance_tag_filter.#:               "" => "1"
    on_premises_instance_tag_filter.291055090.key:   "" => ""
    on_premises_instance_tag_filter.291055090.type:  "" => "VALUE_ONLY"
    on_premises_instance_tag_filter.291055090.value: "" => "staging-qa"
    service_role_arn:                                "" => "${aws_iam_role.foo_role.arn}"

+ aws_iam_role.foo_role
    arn:                "" => "<computed>"
    assume_role_policy: "" => "{\n    \"Version\": \"2012-10-17\",\n    \"Statement\": [\n        {\n            \"Sid\": \"\",\n            \"Effect\": \"Allow\",\n            \"Principal\": {\n                \"Service\": [\n                    \"codedeploy.amazonaws.com\"\n                ]\n            },\n            \"Action\": \"sts:AssumeRole\"\n        }\n    ]\n}\n"
    name:               "" => "foo_role"
    path:               "" => "/"
    unique_id:          "" => "<computed>"

+ aws_iam_role_policy.foo_policy
    name:   "" => "foo_policy"
    policy: "" => "{\n    \"Version\": \"2012-10-17\",\n    \"Statement\": [\n        {\n            \"Effect\": \"Allow\",\n            \"Action\": [\n                \"autoscaling:CompleteLifecycleAction\",\n                \"autoscaling:DeleteLifecycleHook\",\n                \"autoscaling:DescribeAutoScalingGroups\",\n                \"autoscaling:DescribeLifecycleHooks\",\n                \"autoscaling:PutLifecycleHook\",\n                \"autoscaling:RecordLifecycleActionHeartbeat\",\n                \"ec2:DescribeInstances\",\n                \"ec2:DescribeInstanceStatus\",\n                \"tag:GetTags\",\n                \"tag:GetResources\"\n            ],\n            \"Resource\": \"*\"\n        }\n    ]\n}\n"
    role:   "" => "${aws_iam_role.foo_role.id}"


Plan: 4 to add, 0 to change, 0 to destroy.

Then the apply looks as follows:

aws_codedeploy_app.foo_app: Creating...
  name:      "" => "foo_app"
  unique_id: "" => "<computed>"
aws_iam_role.foo_role: Creating...
  arn:                "" => "<computed>"
  assume_role_policy: "" => "{\n    \"Version\": \"2012-10-17\",\n    \"Statement\": [\n        {\n            \"Sid\": \"\",\n            \"Effect\": \"Allow\",\n            \"Principal\": {\n                \"Service\": [\n                    \"codedeploy.amazonaws.com\"\n                ]\n            },\n            \"Action\": \"sts:AssumeRole\"\n        }\n    ]\n}\n"
  name:               "" => "foo_role"
  path:               "" => "/"
  unique_id:          "" => "<computed>"
aws_iam_role.foo_role: Creation complete
aws_iam_role_policy.foo_policy: Creating...
  name:   "" => "foo_policy"
  policy: "" => "{\n    \"Version\": \"2012-10-17\",\n    \"Statement\": [\n        {\n            \"Effect\": \"Allow\",\n            \"Action\": [\n                \"autoscaling:CompleteLifecycleAction\",\n                \"autoscaling:DeleteLifecycleHook\",\n                \"autoscaling:DescribeAutoScalingGroups\",\n                \"autoscaling:DescribeLifecycleHooks\",\n                \"autoscaling:PutLifecycleHook\",\n                \"autoscaling:RecordLifecycleActionHeartbeat\",\n                \"ec2:DescribeInstances\",\n                \"ec2:DescribeInstanceStatus\",\n                \"tag:GetTags\",\n                \"tag:GetResources\"\n            ],\n            \"Resource\": \"*\"\n        }\n    ]\n}\n"
  role:   "" => "foo_role"
aws_iam_role_policy.foo_policy: Creation complete
aws_codedeploy_app.foo_app: Creation complete
aws_codedeploy_deployment_group.foo: Creating...
  app_name:                                        "" => "foo_app"
  deployment_config_name:                          "" => "CodeDeployDefault.OneAtATime"
  deployment_group_name:                           "" => "foo"
  on_premises_instance_tag_filter.#:               "" => "1"
  on_premises_instance_tag_filter.291055090.key:   "" => ""
  on_premises_instance_tag_filter.291055090.type:  "" => "VALUE_ONLY"
  on_premises_instance_tag_filter.291055090.value: "" => "staging-qa"
  service_role_arn:                                "" => "arn:aws:iam::881237884953:role/foo_role"
aws_codedeploy_deployment_group.foo: Creation complete

Apply complete! Resources: 4 added, 0 changed, 0 destroyed.

The state of your infrastructure has been saved to the path
below. This state is required to modify and destroy your
infrastructure, so keep it safe. To inspect the complete state
use the `terraform show` command.

State path: terraform.tfstate

Then Terraform refresh to check the state shows as follows:

aws_codedeploy_app.foo_app: Refreshing state... (ID: ada70e3d-25a8-40a6-a183-fdf8e33b90db:foo_app)
aws_iam_role.foo_role: Refreshing state... (ID: foo_role)
aws_iam_role_policy.foo_policy: Refreshing state... (ID: foo_role:foo_policy)
aws_codedeploy_deployment_group.foo: Refreshing state... (ID: 0cdc9de7-538c-48a5-906e-3f9d472abbef)

This all appears to work here.

@stack72 stack72 merged commit 06b5cf7 into master May 11, 2016
@stack72 stack72 deleted the f-aws-codedeploy-onpremisetag branch May 11, 2016 17:38
@stack72
Contributor Author

stack72 commented May 11, 2016

@jdextraze OK, now I'm uber curious as to what the EOF bug is. How and what did you build, and were there any other errors?

P.

@jdextraze

jdextraze commented May 11, 2016

@stack72 Sorry, it works. I had to clean my state first and wasn't using the full changes.

Great work man 👍

@stack72
Contributor Author

stack72 commented May 11, 2016

Spot on! Glad that's working :) thanks for letting me know

cristicalin pushed a commit to cristicalin/terraform that referenced this pull request May 24, 2016
@ghost

ghost commented Apr 25, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost ghost locked and limited conversation to collaborators Apr 25, 2020
Successfully merging this pull request may close these issues.

aws_codedeploy_deployment_group on_premises_instance_tag_filter not added after apply
3 participants