
aws_elasticache_subnet_group throws an error on 2nd run #2178

Closed
mzupan opened this issue Jun 1, 2015 · 17 comments · Fixed by #2191

Comments

@mzupan (Contributor) commented Jun 1, 2015

If I create the following, then every apply after the first throws an error:

resource "aws_elasticache_subnet_group" "mts-redis" {
    name = "WEB-mts-redis"
    description = "subnet group for MTS redis cache"
    subnet_ids = ["${aws_subnet.mts.*.id}"]
}

The first apply works; the next run produces:

* 1 error(s) occurred:

* Error retrieving cache subnet group: &{[0xc20849edc0] <nil> {false}}

My state looks OK:

                "aws_elasticache_subnet_group.mts-redis": {
                    "type": "aws_elasticache_subnet_group",
                    "depends_on": [
                        "aws_subnet.mts"
                    ],
                    "primary": {
                        "id": "WEB-mts-redis",
                        "attributes": {
                            "description": "subnet group for MTS redis cache",
                            "id": "WEB-mts-redis",
                            "name": "WEB-mts-redis",
                            "subnet_ids.#": "3",
                            "subnet_ids.280445083": "subnet-1063f63b",
                            "subnet_ids.2858461270": "subnet-657d3312",
                            "subnet_ids.953240699": "subnet-4f375516"
                        }
                    }
                },
@catsby (Contributor) commented Jun 2, 2015

Hey @mzupan – I think I fixed this via #2191, do you have the means to check it out?

At present, ElastiCache subnet groups don't have any update functionality, but I added it in that PR.

Let me know!

@catsby (Contributor) commented Jun 2, 2015

Alas, I just noticed this:

subnet_ids = ["${aws_subnet.mts.*.id}"]

Do you have a configuration example that uses this splat format?

@mzupan (Contributor, Author) commented Jun 2, 2015

@catsby still getting the same error. Using version:

Terraform v0.6.0-dev (a2717acf81104f888b4f3661fc5192854a519b92)

Here's a sample config:

resource "aws_subnet" "mts" {
  count  = 3
  vpc_id = "${aws_vpc.main.id}"
  cidr_block = "${lookup(var.vpc-subnet, var.env)}.${count.index+20}.0/24"
  availability_zone = "${lookup(var.azs, concat("az", count.index))}"

  tags {
    Name = "Web - MTS ${lookup(var.vpc-subnet, var.env)}.${count.index+20}.0/24"
    CostCenter = "web"
  }
}
resource "aws_route_table_association" "mts" {
  subnet_id = "${element(aws_subnet.mts.*.id, count.index)}"
  route_table_id = "${aws_route_table.nat.id}"
  count = 3
}

resource "aws_elasticache_subnet_group" "mts-redis" {
    name = "WEB-mts-redis2"
    description = "subnet group for MTS redis cache"
    subnet_ids = ["${aws_subnet.mts.*.id}"]
}

The cache subnet group is created correctly; it just dies with that error on the second run.

@catsby (Contributor) commented Jun 2, 2015

The second run of what, exactly? After you create it successfully, what does plan show?
What are you applying that second time? Are you re-running the same plan that was used/created before?

@mzupan (Contributor, Author) commented Jun 2, 2015

After I apply and it gets created, a terraform plan throws that error.

@catsby (Contributor) commented Jun 2, 2015

That's very strange; I can't reproduce it. This is the config I'm using. See anything I'm missing?

provider "aws" {
  region = "us-west-2"
}
resource "aws_vpc" "default" {
  cidr_block = "10.250.0.0/16"
  tags {
    Name = "ec-sub-test"
  }
}

resource "aws_subnet" "private" {
  vpc_id = "${aws_vpc.default.id}"
  cidr_block = "10.250.3.0/24"
}

resource "aws_subnet" "private2" {
  vpc_id = "${aws_vpc.default.id}"
  cidr_block = "10.250.2.0/24"
}

resource "aws_elasticache_subnet_group" "test" {
  name = "test"
  description = "Test Memcached"
  subnet_ids = [
    "${aws_subnet.private.*.id}", 
    "${aws_subnet.private2.id}",
  ]
}

I assume you're using the b-aws-elasticache-subnet-updates branch (your version suggests so). Can you run make updatedeps and then re-build? There were some changes to the upstream AWS SDK; maybe that's the difference? 😦

@catsby (Contributor) commented Jun 2, 2015

I went ahead and merged #2191 as it was needed anyway.
I'm still curious to see what the problem is here, though; please let me know if you get a moment to run updatedeps and rebuild.

Thanks!

@mzupan (Contributor, Author) commented Jun 9, 2015

@catsby yeah, I still get the error, even in a new environment.

@thepastelsuit

@mzupan Did you ever resolve this? I'm not doing any splatting in my tf file, just deploying a subnet group and a cluster. It applies fine; the next plan breaks with that error.

* Error retrieving cache subnet group: &{[0xc2082bd480] <nil> {false}}

@mzupan (Contributor, Author) commented Jul 18, 2015

This is how I'm doing it now, and it's working off master:

resource "aws_elasticache_subnet_group" "logstash-redis" {
  name = "logstash-redis"
  description = "logstash-redis"
  subnet_ids = ["${aws_subnet.mgmt.*.id}"]
}

resource "aws_elasticache_cluster" "logstash-redis" {
  cluster_id = "web-logstash-redis"
  engine = "redis"
  node_type = "cache.m3.medium"
  num_cache_nodes = 1
  parameter_group_name = "default.redis2.8"
  port = 6379

  subnet_group_name = "${aws_elasticache_subnet_group.logstash-redis.name}"
  security_group_ids = ["${aws_security_group.internal-redis.id}"]

  tags {
    CostCenter = "web"
  }
}

It's been working.

@thepastelsuit

@mzupan Wow... I think I just figured out the problem. Testing the theory now...

@thepastelsuit

@mzupan Yep. I've solved the mystery! This is one of those fun issues that could be blamed on three different parties:

  1. An issue with AWS forcing lowercase on creation, but not doing the same when looking up a subnet group by name.
  2. An issue with Terraform not converting to lowercase in the specific case of elasticache_subnet_groups.
  3. Just me not reading the Terraform documentation, which says: "Name for the cache subnet group. This value is stored as a lowercase string."

I'd say it's likely easier for Terraform to just convert the string to lowercase than it would be to wait for AWS to fix their API. Any thoughts @catsby?
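Given that theory, the user-side workaround is simply to keep the name lowercase. A minimal sketch, adapted from the config earlier in the thread (the lowercased name is illustrative):

```hcl
resource "aws_elasticache_subnet_group" "mts-redis" {
    # AWS stores the group name lowercased, so using a lowercase name
    # keeps state consistent with what the API returns on lookup.
    name = "web-mts-redis"    # was "WEB-mts-redis"
    description = "subnet group for MTS redis cache"
    subnet_ids = ["${aws_subnet.mts.*.id}"]
}
```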

@mzupan (Contributor, Author) commented Jul 18, 2015

Looks like reason 3 might be why this issue originally failed.

Seems to me Terraform should just lowercase it to be on the safe side.
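On the provider side, that amounts to normalizing the name before it is written to state or compared. A standalone sketch of the normalization in Go (illustrative only; `normalizeSubnetGroupName` is a hypothetical helper, not the actual provider code):

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeSubnetGroupName mirrors what the ElastiCache API does on
// creation: subnet group names are stored lowercased, so the provider
// could lowercase the configured name before storing it in state.
func normalizeSubnetGroupName(name string) string {
	return strings.ToLower(name)
}

func main() {
	// "WEB-mts-redis" and "web-mts-redis" refer to the same group.
	fmt.Println(normalizeSubnetGroupName("WEB-mts-redis"))
}
```

In the real provider this kind of normalization would typically be attached to the resource schema (e.g. a state function on the name attribute), so plan diffs compare like against like.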

@phinze (Contributor) commented Jul 29, 2015

Hey @mzupan and @thepastelsuit. It sounds reasonable that Terraform could do a better job handling this situation. Can one of you open a fresh issue to talk about improving the behavior with mixed-case subnet group names?

@thrashr888 (Member)

I'm having this same issue. I also have a mixed-case id. It's non-trivial for me to switch the id to lowercase because it's populated with a variable that's used elsewhere.

@apparentlymart (Contributor)

I've attempted to address the issue with the API lowercasing the name in PR #3120.

@ghost commented May 1, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost ghost locked and limited conversation to collaborators May 1, 2020