
provider/aws: Support Import aws_elasticache_cluster #8325

Closed
AMeng wants to merge 2 commits

Conversation

AMeng
Contributor

@AMeng AMeng commented Aug 19, 2016

Add support for importing AWS ElastiCache clusters.

Testing with a newly compiled binary:

$ terraform import aws_elasticache_cluster.my_cluster my_cluster
aws_elasticache_cluster.my_cluster: Importing from ID "my_cluster"...
aws_elasticache_cluster.my_cluster: Import complete!
  Imported aws_elasticache_cluster (ID: my_cluster)
aws_elasticache_cluster.my_cluster: Refreshing state... (ID: my_cluster)

Import success! The resources imported are shown above. These are
now in your Terraform state. Import does not currently generate
configuration, so you must do this next. If you do not create configuration
for the above resources, then the next `terraform plan` will mark
them for destruction.

This results in the following terraform.tfstate file:

{
    "version": 3,
    "terraform_version": "0.7.1",
    "serial": 0,
    "lineage": "67b45360-d9a2-4fdb-8ae4-1d1cc41d1dfa",
    "modules": [
        {
            "path": [
                "root"
            ],
            "outputs": {},
            "resources": {
                "aws_elasticache_cluster.my_cluster": {
                    "type": "aws_elasticache_cluster",
                    "depends_on": [],
                    "primary": {
                        "id": "my_cluster",
                        "attributes": {
                            "availability_zone": "us-east-1a",
                            "cache_nodes.#": "1",
                            "cache_nodes.0.address": "XXXXXXXXXX",
                            "cache_nodes.0.availability_zone": "us-east-1a",
                            "cache_nodes.0.id": "0001",
                            "cache_nodes.0.port": "6379",
                            "cluster_id": "my_cluster",
                            "engine": "redis",
                            "engine_version": "2.8.22",
                            "id": "my_cluster",
                            "maintenance_window": "sun:09:00-sun:10:00",
                            "node_type": "cache.t2.micro",
                            "num_cache_nodes": "1",
                            "security_group_names.#": "0",
                            "snapshot_retention_limit": "0",
                            "snapshot_window": "",
                            "subnet_group_name": "my_cluster",
                            "tags.%": "0"
                        },
                        "meta": {},
                        "tainted": false
                    },
                    "deposed": [],
                    "provider": "aws"
                }
            },
            "depends_on": []
        }
    ]
}
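Since import does not generate configuration, a matching config still has to be written by hand. A minimal example (attribute values taken from the state file above; adjust to your own cluster) might look like:

```hcl
resource "aws_elasticache_cluster" "my_cluster" {
  cluster_id         = "my_cluster"
  engine             = "redis"
  engine_version     = "2.8.22"
  node_type          = "cache.t2.micro"
  num_cache_nodes    = 1
  port               = 6379
  maintenance_window = "sun:09:00-sun:10:00"
  subnet_group_name  = "my_cluster"
}
```

With a config like this in place, the next `terraform plan` should show no changes instead of marking the imported resource for destruction.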

@stack72
Contributor

stack72 commented Aug 19, 2016

Hi @AMeng

Thanks for the PR here. Right now, this won't work with our nightly tests: we run our tests against us-west-2, and the config for the cluster you are importing points to us-east-1.

Therefore, as part of the import test, you need to follow the pattern laid out here: https://github.com/hashicorp/terraform/blob/master/builtin/providers/aws/import_aws_db_security_group_test.go#L11

You can then run the acceptance tests as follows:

make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSElasticacheCluster_importBasic'

This will show you whether the test passes.

thanks

Paul
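For reference, the pattern from the linked db_security_group import test, adapted to this resource, would look roughly like the following. This is only a sketch: it assumes the provider package's test harness (testAccPreCheck, testAccProviders, testAccCheckAWSElasticacheClusterDestroy) and a us-west-2 test config named testAccAWSElasticacheClusterConfig, so it is not runnable standalone.

```go
package aws

import (
	"testing"

	"github.com/hashicorp/terraform/helper/resource"
)

func TestAccAWSElasticacheCluster_importBasic(t *testing.T) {
	resourceName := "aws_elasticache_cluster.bar"

	resource.Test(t, resource.TestCase{
		PreCheck:     func() { testAccPreCheck(t) },
		Providers:    testAccProviders,
		CheckDestroy: testAccCheckAWSElasticacheClusterDestroy,
		Steps: []resource.TestStep{
			// Step 1: create the cluster from the shared test config.
			resource.TestStep{
				Config: testAccAWSElasticacheClusterConfig,
			},
			// Step 2: import it and verify the imported state matches
			// the state produced by the create step.
			resource.TestStep{
				ResourceName:      resourceName,
				ImportState:       true,
				ImportStateVerify: true,
			},
		},
	})
}
```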

@stack72 added the enhancement, waiting-response, and provider/aws labels on Aug 19, 2016
@AMeng
Contributor Author

AMeng commented Aug 19, 2016

I'm getting an error running those acceptance tests. My AWS account is VPC-enabled.

$ make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSElasticacheCluster_importBasic'
==> Checking that code complies with gofmt requirements...
/home/alex/gopath/bin/stringer
go generate $(go list ./... | grep -v /terraform/vendor/)
2016/08/19 11:47:48 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAWSElasticacheCluster_importBasic -timeout 120m
=== RUN   TestAccAWSElasticacheCluster_importBasic
--- FAIL: TestAccAWSElasticacheCluster_importBasic (6.98s)
    testing.go:265: Step 0 error: Error applying: 1 error(s) occurred:

        * aws_elasticache_security_group.bar: Error creating CacheSecurityGroup: InvalidParameterValue: Use of cache security groups is not permitted in this API version for your account.
            status code: 400, request id: XXXXXXXXXXXXXXX
FAIL
exit status 1
FAIL    github.com/hashicorp/terraform/builtin/providers/aws    6.993s
Makefile:47: recipe for target 'testacc' failed
make: *** [testacc] Error 1

@jonbender

@AMeng It seems that the top-level port attribute is missing from that state file. I pulled this PR into my branch, imported one of our Redis clusters, and created the corresponding aws_elasticache_cluster Terraform configuration; my plan showed a port => "" -> port => "6379" change.

@jonbender

I also noticed that security_group_ids was not imported into the state file, which causes my plan to come back with a change:

~ aws_elasticache_cluster.my-cluster
    security_group_ids.#: "" => "<computed>"
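The import gap reported above comes down to API values not being written back into state on read. As a rough, self-contained illustration only (the struct and helper names here are hypothetical, not the provider's actual code), collecting the security group IDs from an API response into a plain slice suitable for d.Set("security_group_ids", ...) might look like:

```go
package main

import "fmt"

// securityGroupMembership is a hypothetical stand-in for the AWS SDK's
// cache-cluster security group membership struct, which exposes the
// group ID as a *string.
type securityGroupMembership struct {
	SecurityGroupId *string
}

// flattenSecurityGroupIds dereferences the non-nil ID pointers into a
// []string that could be written back to state with d.Set.
func flattenSecurityGroupIds(groups []securityGroupMembership) []string {
	ids := make([]string, 0, len(groups))
	for _, g := range groups {
		if g.SecurityGroupId != nil {
			ids = append(ids, *g.SecurityGroupId)
		}
	}
	return ids
}

func main() {
	a, b := "sg-123", "sg-456"
	groups := []securityGroupMembership{{&a}, {&b}, {nil}}
	fmt.Println(flattenSecurityGroupIds(groups)) // prints [sg-123 sg-456]
}
```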

@stack72
Contributor

stack72 commented Sep 23, 2016

Hi @AMeng

Ok, I finally worked out the issue here! I have opened PR #9010 to fix it; it is based on all the work you added. Thanks for the work here and for helping me figure out the bug we had in setting values back to state!

Paul

@stack72 stack72 closed this Sep 23, 2016
@ghost

ghost commented Apr 22, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost ghost locked and limited conversation to collaborators Apr 22, 2020