Cannot create both read and write autoscaling target for DynamoDB. #2282

Closed
sforcier opened this issue Nov 14, 2017 · 5 comments

@sforcier

I cannot seem to create both a read and a write autoscaling target for my DynamoDB (DDB) table. This may be something I'm doing wrong, but I've looked at it enough that I think an issue submission is finally in order.

I have the following Terraform code:

resource "aws_appautoscaling_target" "asset_read_ddb" {
    max_capacity = "${var.assets_read_max}"
    min_capacity = "${var.assets_read_min}"
    resource_id = "table/${aws_dynamodb_table.main.name}"
    role_arn = "${aws_iam_role.asset_ddb_autoscaling.arn}"
    scalable_dimension = "dynamodb:table:ReadCapacityUnits"
    service_namespace = "dynamodb"
}

resource "aws_appautoscaling_policy" "asset_read_ddb" {
    name = "DynamoDBReadCapacityUtilization:${aws_appautoscaling_target.asset_read_ddb.resource_id}"
    policy_type = "TargetTrackingScaling"
    resource_id        = "${aws_appautoscaling_target.asset_read_ddb.resource_id}"
    scalable_dimension = "${aws_appautoscaling_target.asset_read_ddb.scalable_dimension}"
    service_namespace  = "${aws_appautoscaling_target.asset_read_ddb.service_namespace}"
    target_tracking_scaling_policy_configuration = {
        target_value = 70.0
        predefined_metric_specification = {
            predefined_metric_type = "DynamoDBReadCapacityUtilization"
        }
    }

    depends_on = ["aws_appautoscaling_target.asset_read_ddb", "aws_iam_role_policy.asset_ddb_autoscaling"]
}

resource "aws_appautoscaling_target" "asset_write_ddb" {
    max_capacity = "${var.assets_write_max}"
    min_capacity = "${var.assets_write_min}"
    resource_id = "table/${aws_dynamodb_table.main.name}"
    role_arn = "${aws_iam_role.asset_ddb_autoscaling.arn}"
    scalable_dimension = "dynamodb:table:WriteCapacityUnits"
    service_namespace = "dynamodb"
}

resource "aws_appautoscaling_policy" "asset_write_ddb" {
    policy_type = "TargetTrackingScaling"
    resource_id        = "${aws_appautoscaling_target.asset_write_ddb.resource_id}"
    scalable_dimension = "${aws_appautoscaling_target.asset_write_ddb.scalable_dimension}"
    service_namespace  = "${aws_appautoscaling_target.asset_write_ddb.service_namespace}"
    target_tracking_scaling_policy_configuration = {
        target_value = 70.0
        predefined_metric_specification = {
            predefined_metric_type = "DynamoDBWriteCapacityUtilization"
        }
    }

    depends_on = ["aws_appautoscaling_target.asset_write_ddb", "aws_iam_role_policy.asset_ddb_autoscaling"]
}

When applied, this results in the following error:

* aws_appautoscaling_policy.asset_write_ddb: Error putting scaling policy: ValidationException: Scalable dimension dynamodb:table:ReadCapacityUnits only supports the following predefined metric types: DynamoDBReadCapacityUtilization

Notice that the error says my write policy is having issues with ReadCapacityUnits. That's pretty weird and doesn't match my Terraform code. On further inspection, after this operation my terraform.tfstate file ends up with the following:

{
	"snip" : "I snipped this section but I still wanted it to be valid JSON.",
	"aws_appautoscaling_target.asset_read_ddb": {
		"type": "aws_appautoscaling_target",
		"depends_on": [
			"aws_dynamodb_table.main",
			"aws_iam_role.asset_ddb_autoscaling"
		],
		"primary": {
			"id": "table/p3asset",
			"attributes": {
				"id": "table/p3asset",
				"max_capacity": "10",
				"min_capacity": "1",
				"resource_id": "table/p3asset",
				"role_arn": "arn:aws:iam::<REDACTED>:role/p3assetassetdynamodbautoscaler",
				"scalable_dimension": "dynamodb:table:ReadCapacityUnits",
				"service_namespace": "dynamodb"
			},
			"meta": {},
			"tainted": false
		},
		"deposed": [],
		"provider": ""
	},
	"aws_appautoscaling_target.asset_write_ddb": {
		"type": "aws_appautoscaling_target",
		"depends_on": [
			"aws_dynamodb_table.main",
			"aws_iam_role.asset_ddb_autoscaling"
		],
		"primary": {
			"id": "table/p3asset",
			"attributes": {
				"id": "table/p3asset",
				"max_capacity": "10",
				"min_capacity": "1",
				"resource_id": "table/p3asset",
				"role_arn": "arn:aws:iam::<REDACTED>:role/p3assetassetdynamodbautoscaler",
				"scalable_dimension": "dynamodb:table:ReadCapacityUnits",
				"service_namespace": "dynamodb"
			},
			"meta": {},
			"tainted": false
		},
		"deposed": [],
		"provider": ""
	}
}

Notice that both targets have the same primary.id value. If I review the plan after the failed apply, I see something really strange:

-/+ aws_appautoscaling_target.asset_write_ddb (new resource required)
      id:                                                                                                      "table/p3asset" => <computed> (forces new resource)
      max_capacity:                                                                                            "10" => "10"
      min_capacity:                                                                                            "1" => "1"
      resource_id:                                                                                             "table/p3asset" => "table/p3asset"
      role_arn:                                                                                                "arn:aws:iam::<snip>:role/p3assetassetdynamodbautoscaler" => "arn:aws:iam::<snip>:role/p3assetassetdynamodbautoscaler"
      scalable_dimension:                                                                                      "dynamodb:table:ReadCapacityUnits" => "dynamodb:table:WriteCapacityUnits" (forces new resource)
      service_namespace:                                                                                       "dynamodb" => "dynamodb"

Notice that the scalable_dimension is due to be changed. But why was it ever ReadCapacityUnits in the first place?
My overall goal is simply to have both read and write autoscaling, so if there's a workaround, or if I'm doing something wrong, I'd appreciate the feedback.

Terraform Version

v0.10.8

Affected Resource(s)


  • aws_appautoscaling_target


Terraform Configuration Files

Embedded inline above.

Panic Output

N/A

Expected Behavior

My DDB table ends up with both read and write autoscaling targets/policies.

Actual Behavior

Only the read target is configured properly. Subsequent applies tend to "flip-flop" the destruction of the read/write targets (and the destroy takes more than the default 5-minute timeout for some reason).

Steps to Reproduce

  1. Run terraform apply against HCL that defines a table and the above targets/policies for read/write autoscaling.

Important Factoids

My statement above has all necessary factoids, IMO.

References

I could not find any similar references.

@sforcier
Author

From a quick review of the code (I'm not familiar with Go, but it's fairly easy to read), it seems that line 90 of resourceAwsAppautoscalingTargetCreate reads d.SetId(d.Get("resource_id").(string)). Since the resource_id is the same for both the DDB read and write targets, the two resources end up with the same ID. It seems like the target's ID should perhaps incorporate the scalable dimension in addition to the resource_id.

@bflad
Contributor

bflad commented Nov 14, 2017

@sforcier this should've been fixed in version 1.1.0 of the AWS provider. See: #1808. If it's still an issue, let us know; I went down the same rabbit hole and 1.1.0+ works fine in our environment.

Note: as of Terraform 0.10.x, providers are versioned separately from Terraform core. If you need help updating the provider, see: https://www.terraform.io/docs/configuration/providers.html#provider-versions
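In case it helps anyone else, a minimal sketch of what that provider pinning can look like on this Terraform generation (the region value is just a placeholder, not something from this issue; adjust the constraint to whatever minimum you need):

provider "aws" {
    region  = "us-east-1"   # placeholder region, adjust for your environment
    version = ">= 1.1.0"    # require an AWS provider release that includes the fix referenced above (#1808)
}

After adding the constraint, re-run terraform init so Terraform can fetch a provider version that satisfies it.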

@sforcier
Author

Ah... I wasn't aware that the providers update separately now. Thanks for the fast reply. I'll check that out.

@sforcier
Author

It seems to be working fine with 1.1.0 of the AWS provider. Thanks again.

@ghost

ghost commented Apr 10, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!

@ghost ghost locked and limited conversation to collaborators Apr 10, 2020