
Add resource_compute_region_autoscaler #544

Merged: 2 commits into hashicorp:master on Oct 9, 2017

Conversation

@usererror (Contributor) commented Oct 6, 2017

Why

Attempts to fix #535.

It seemed easy enough based on @rosbo's comment, and I think this works, but please let me know if I've missed anything! I tried to copy the regular resource_compute_autoscaler as closely as possible, and luckily they both use the same underlying type.

Tests

I ran the acceptance tests against one of my Google projects, and they all passed.

➜  google git:(region-autoscaler) ✗ TF_ACC=1 go test -v -run TestAccComputeRegionAutoscaler
=== RUN   TestAccComputeRegionAutoscaler_basic
--- PASS: TestAccComputeRegionAutoscaler_basic (70.90s)
=== RUN   TestAccComputeRegionAutoscaler_update
--- PASS: TestAccComputeRegionAutoscaler_update (83.31s)
PASS
ok      github.com/terraform-providers/terraform-provider-google/google    154.249s

return resourceComputeRegionAutoscalerRead(d, meta)
}

func flattenRegionAutoscalingPolicy(policy *compute.AutoscalingPolicy) []map[string]interface{} {
@usererror (author):
I'm not sure this needs to be redefined for the regional autoscaler, since they both use compute.Autoscaler anyway. I didn't actually touch this function, just renamed it.

@rosbo (Contributor) replied:
You can delete this method and reuse the one in the non-regional version

@rosbo (Contributor) left a review:


Thanks Scott for your contribution!

I added comments about reusing code between the regional and zonal autoscalers, as well as comments about problems I noticed that also affect the zonal autoscaler. The suggested refactoring should fix both at the same time, but if that is not the case for some of them, would you mind also fixing the zonal autoscaler? Thank you.


ForceNew: true,
},

"autoscaling_policy": &schema.Schema{
@rosbo:
You can extract this &schema.Schema definition and reuse it for both autoscalers.
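
For illustration, a minimal sketch of that extraction, assuming both resource schemas live in the same package. The helper name and the inner policy fields here are placeholders, not the provider's actual definitions:

```go
package google

import "github.com/hashicorp/terraform/helper/schema"

// autoscalingPolicySchema is a hypothetical shared definition of the
// autoscaling_policy block, so the zonal and regional autoscalers do not
// each carry their own copy. The inner fields are placeholders; the real
// block has more settings (cpu_utilization, cooldown_period, and so on).
func autoscalingPolicySchema() *schema.Schema {
	return &schema.Schema{
		Type:     schema.TypeList,
		Optional: true,
		Elem: &schema.Resource{
			Schema: map[string]*schema.Schema{
				"max_replicas": &schema.Schema{
					Type:     schema.TypeInt,
					Required: true,
				},
				"min_replicas": &schema.Schema{
					Type:     schema.TypeInt,
					Required: true,
				},
			},
		},
	}
}
```

Both resource schemas could then reference it as `"autoscaling_policy": autoscalingPolicySchema()` instead of repeating the block.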

}
}

func buildRegionAutoscaler(d *schema.ResourceData) (*compute.Autoscaler, error) {
@rosbo:
You can remove this method and use buildAutoscaler instead. In fact, I think you are not even using it and are already calling buildAutoscaler.

d.Set("self_link", scaler.SelfLink)
d.Set("name", scaler.Name)
d.Set("target", scaler.Target)
d.Set("region", regionUrl[len(regionUrl)-1])
@rosbo:
You can use the GetResourceNameFromSelfLink method here instead of splitting the URL by hand.
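
As a rough, standalone illustration of what such a helper does (this is not the provider's implementation, and the d.Set line in the comment is only an assumption about how it would slot into the Read function):

```go
package main

import (
	"fmt"
	"strings"
)

// getResourceNameFromSelfLink mimics the provider helper the reviewer refers
// to: it returns the last path segment of a GCP self link.
func getResourceNameFromSelfLink(link string) string {
	parts := strings.Split(link, "/")
	return parts[len(parts)-1]
}

func main() {
	region := "https://www.googleapis.com/compute/v1/projects/my-project/regions/us-central1"
	// In the resource Read function the call would presumably look like:
	//   d.Set("region", GetResourceNameFromSelfLink(scaler.Region))
	fmt.Println(getResourceNameFromSelfLink(region)) // prints: us-central1
}
```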


"autoscaling_policy": &schema.Schema{
Type: schema.TypeList,
Optional: true,
@rosbo:
Add MaxItems: 1, since we can only specify at most one autoscaling_policy. It should also not be Optional, since you must have at least one autoscaling_policy; please add Required: true instead.
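
Applied to the shared definition sketched earlier, the suggested change might look like this (still a sketch with placeholder fields):

```go
package google

import "github.com/hashicorp/terraform/helper/schema"

// Revised version of the shared sketch above with the reviewer's suggestions
// applied. With Required and MaxItems set, `terraform plan` already rejects a
// configuration with zero or with multiple autoscaling_policy blocks, which
// is what makes the runtime count check in the next hunk unnecessary.
func autoscalingPolicySchema() *schema.Schema {
	return &schema.Schema{
		Type:     schema.TypeList,
		Required: true, // at least one policy block must be present
		MaxItems: 1,    // and at most one is allowed
		Elem: &schema.Resource{
			// same placeholder policy fields as in the earlier sketch
			Schema: map[string]*schema.Schema{
				"max_replicas": &schema.Schema{Type: schema.TypeInt, Required: true},
			},
		},
	}
}
```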


aspCount := d.Get("autoscaling_policy.#").(int)
if aspCount != 1 {
return nil, fmt.Errorf("The autoscaler must have exactly one autoscaling_policy, found %d.", aspCount)
@rosbo:
You can remove this check once the requirement is enforced by the schema, as suggested above. The advantage of enforcing it in the schema is that it fails at terraform plan instead of terraform apply.

@@ -0,0 +1,262 @@
package google
@rosbo:
Can you also add import acceptance tests?

By convention, we put them in a separate file. See `import_compute_autoscaler_test.go`.
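
For reference, an import acceptance test in that style might look roughly like this. This is a sketch: the config is trimmed to a placeholder (the target self link is made up), and testAccPreCheck/testAccProviders are assumed to be the provider's usual acceptance-test helpers:

```go
package google

import (
	"testing"

	"github.com/hashicorp/terraform/helper/resource"
)

// Sketch of an import acceptance test in the style of
// import_compute_autoscaler_test.go. The config is trimmed to a placeholder,
// and testAccPreCheck/testAccProviders are assumed to be the provider's usual
// acceptance-test helpers.
func TestAccComputeRegionAutoscaler_importBasic(t *testing.T) {
	config := `
resource "google_compute_region_autoscaler" "foobar" {
  name   = "region-autoscaler-import-test"
  region = "us-central1"
  target = "https://www.googleapis.com/compute/v1/projects/my-project/regions/us-central1/instanceGroupManagers/my-igm"

  autoscaling_policy {
    max_replicas    = 5
    min_replicas    = 1
    cooldown_period = 60
  }
}
`

	resource.Test(t, resource.TestCase{
		PreCheck:  func() { testAccPreCheck(t) },
		Providers: testAccProviders,
		Steps: []resource.TestStep{
			// Create the resource with a normal apply first...
			resource.TestStep{
				Config: config,
			},
			// ...then import it and verify the imported state matches.
			resource.TestStep{
				ResourceName:      "google_compute_region_autoscaler.foobar",
				ImportState:       true,
				ImportStateVerify: true,
			},
		},
	})
}
```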

Commit pushed: Enforce single autoscaling policy requirement at terraform plan time

@usererror (author) commented:
Thanks for the review, Vincent! I fixed up the zonal autoscaler per your recommendations and got rid of the duplicate regional functions; both are much more readable now. I also added the import acceptance tests for the regional autoscaler and ran all the acceptance tests again. This was a pleasure to work on, and very easy to test!

Acceptance Tests

➜  google git:(region-autoscaler) ✗ TF_ACC=1 go test -v -run TestAccComputeAutoscaler
=== RUN   TestAccComputeAutoscaler_importBasic
--- PASS: TestAccComputeAutoscaler_importBasic (69.92s)
=== RUN   TestAccComputeAutoscaler_basic
--- PASS: TestAccComputeAutoscaler_basic (68.78s)
=== RUN   TestAccComputeAutoscaler_update
--- PASS: TestAccComputeAutoscaler_update (81.43s)
PASS
ok  	github.com/terraform-providers/terraform-provider-google/google	220.148s
➜  google git:(region-autoscaler) ✗ TF_ACC=1 go test -v -run TestAccComputeRegionAutoscaler
=== RUN   TestAccComputeRegionAutoscaler_importBasic
--- PASS: TestAccComputeRegionAutoscaler_importBasic (70.69s)
=== RUN   TestAccComputeRegionAutoscaler_basic
--- PASS: TestAccComputeRegionAutoscaler_basic (70.94s)
=== RUN   TestAccComputeRegionAutoscaler_update
--- PASS: TestAccComputeRegionAutoscaler_update (83.86s)
PASS
ok  	github.com/terraform-providers/terraform-provider-google/google	225.508s

@rosbo (Contributor) commented Oct 9, 2017

Awesome work! Thanks for adding support for this new resource.

@rosbo merged commit d574c6a into hashicorp:master on Oct 9, 2017
negz pushed a commit to negz/terraform-provider-google that referenced this pull request Oct 17, 2017
* Add resource_compute_region_autoscaler
* Add import acceptance tests, reuse zonal autoscaler code
* Enforce single autoscaling policy requirement at terraform plan time
luis-silva pushed a commit to luis-silva/terraform-provider-google that referenced this pull request May 21, 2019
@ghost commented Mar 30, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 hashibot-feedback@hashicorp.com. Thanks!

@ghost locked and limited the conversation to collaborators on Mar 30, 2020

Merging this pull request closes: google_compute_autoscaler does not support google_compute_region_instance_group_manager (#535)