
Update Terraform SDK to v2 #1027

Merged: 15 commits into hashicorp:master on Nov 4, 2020

Conversation

@dak1n1 (Contributor) commented Oct 5, 2020

Description

This PR updates the Terraform SDK to v2, which drops support for Terraform 0.11.

This PR also includes a feature created by Ilia Lazebnik in PR #910, adding allowed_topologies to the storageclass resource and data source.

Acceptance tests

  • Have you added an acceptance test for the functionality being added?
  • Have you run the acceptance tests on this branch?

Release Note

Release note for CHANGELOG:

IMPROVEMENTS:

* Update Terraform SDK to v2 and drop support for Terraform 0.11.
* Add `allowed_topologies` to StorageClass (#391, #891)

References

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

@dak1n1 dak1n1 force-pushed the sdkv2 branch 2 times, most recently from 729bc27 to 7b54990 Compare October 8, 2020 22:50
@dak1n1 (Contributor, Author) commented Oct 8, 2020

This isn't actually ready for review, but I'm marking it as such so I can run acceptance tests in TC.

@dak1n1 dak1n1 changed the title Update Terraform SDK to v2 [WIP] Update Terraform SDK to v2 Oct 8, 2020
@dak1n1 dak1n1 marked this pull request as ready for review October 8, 2020 22:52
@ghost ghost added size/XXL labels Oct 9, 2020
@dak1n1 dak1n1 mentioned this pull request Oct 19, 2020
@dak1n1 (Contributor, Author) commented Oct 23, 2020

Here are the results of acceptance tests run locally. https://gist.githubusercontent.com/dak1n1/d9a9e1c59a3ee959fc0185be3204fe5d/raw/7ae46bfdda01636f6abe1e8b94d37fd23720183d/gistfile1.txt

Also, upgrade-specific tests can be run on multiple resources using this:

TESTARGS="-run '_regression'" make testacc

@dak1n1 (Contributor, Author) commented Oct 23, 2020

The test is running in GKE here. https://ci-oss.hashicorp.engineering/viewLog.html?buildId=153868

Looks like I have some work to do on some data sources.

@jrhouston (Collaborator) left a comment

Thanks for this heroic effort @dak1n1, this was A LOT of work. 💪

I've looked at every file other than the vendor directory. I see that most of it is just adding Context and wrapping errors into Diagnostics.

I've made a few comments – some questions and mostly cosmetic stuff.

GNUmakefile (resolved)
return testAccKubernetesServiceConfig_basic(name) + `
data "kubernetes_service" "test" {
return fmt.Sprintf(`resource "kubernetes_service" "test" {
metadata {
Collaborator:

Nit: need to fix the formatting here to be spaces.

Contributor Author:

Done... across 20 files even. ha

kubernetes/provider.go (outdated, resolved)
kubernetes/provider_test.go (resolved)
kubernetes/provider_test.go (outdated, resolved)
kubernetes/resource_kubernetes_ingress.go (resolved)
})
}

func testAccCheckKubernetesServiceForceNew(old, new *api.Service, wantNew bool) resource.TestCheckFunc {
Collaborator:

If we're only checking the UID from the metadata I'd bet we could write a generic function for this using the dynamic client so we don't need to write this function for every resource. We can do that in another PR though.
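A hypothetical sketch of that suggestion: one generic check comparing object UIDs before and after an apply, rather than a hand-written `testAccCheck...ForceNew` helper per resource. The `object` type below is a stand-in for the metadata a dynamic client would return, not actual provider code:

```go
package main

import "fmt"

// object stands in for the object metadata a Kubernetes dynamic
// client would return; only the UID matters for this check.
type object struct {
	UID string
}

// checkForceNew returns an error when the observed recreate
// behaviour does not match the expectation: a changed UID means
// the resource was destroyed and recreated (ForceNew), an
// unchanged UID means an in-place update.
func checkForceNew(old, new object, wantNew bool) error {
	recreated := old.UID != new.UID
	if recreated != wantNew {
		return fmt.Errorf("expected recreated=%v, got recreated=%v", wantNew, recreated)
	}
	return nil
}

func main() {
	// Same UID: updated in place, so wantNew=false passes.
	fmt.Println(checkForceNew(object{UID: "a"}, object{UID: "a"}, false))
	// Different UID: recreated, so wantNew=true passes.
	fmt.Println(checkForceNew(object{UID: "a"}, object{UID: "b"}, true))
}
```

Because the check only touches UIDs, a single function like this could be reused across every resource type, as the review comment suggests.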

kubernetes/resource_kubernetes_storage_class.go (outdated, resolved)
kubernetes/resource_kubernetes_storage_class_test.go (outdated, resolved)
* Update import paths and functions to use Terraform SDKv2.
* Add support for testing multiple versions of the provider.
* Add `allowed_topologies` to the storageclass resource,
  authored by Ilia Lazebnik in PR
  hashicorp#910
* Fix acceptance tests and add upgrade tests.
@dak1n1 (Contributor, Author) commented Oct 29, 2020

I was going to link the TC test I'm running, but I'm number 62 in the queue! Maybe it will show up here tomorrow. https://ci-oss.hashicorp.engineering/viewQueued.html?itemId=154938

@dak1n1 dak1n1 changed the title [WIP] Update Terraform SDK to v2 Update Terraform SDK to v2 Oct 29, 2020
@dak1n1 (Contributor, Author) commented Oct 29, 2020

@jrhouston I think this PR is ready to go. I fixed a few more tests since running TC here, so we're close to a green run. https://ci-oss.hashicorp.engineering/viewLog.html?buildId=155031&

@alexsomesan (Member) left a comment

Wow, this was really quite an undertaking! Thanks a million for doing all of this.

I'm slowly going through all the changes.
I thought I'd leave comments in batches since it might be faster to address them while I keep reading through the PR.

.terraformrc Outdated
@@ -0,0 +1,10 @@
provider_installation {
Member:

Since this configuration is specific to our testing workflow, would it make sense to place it in a subfolder that highlights its purpose as a test asset? Like somewhere in a testdata folder or similar?

I'm trying to avoid it being misinterpreted by users as an example of how to deploy the provider (far-fetched, but I'd rather be safe).

Contributor Author:

Good idea! I do have a directory we can use for this: kubernetes/test-infra/external-providers. I'll use that.

GNUmakefile (resolved)

Schema: map[string]*schema.Schema{
"metadata": namespacedMetadataSchema("ingress", false),
"spec": {
Type: schema.TypeList,
Description: docIngress["spec"],
Computed: true,
MaxItems: 1,
Member:

Why was it necessary to remove these MaxItems attributes? Can there be a scenario where multiple spec entries would be present in the ingress resource?

Contributor Author:

The new SDK version doesn't allow using MaxItems together with Computed. It throws this error:

MaxItems is for configurable attributes, there's nothing to configure on computed-only field

Steps: []resource.TestStep{
{
{ // The first apply creates the resource. The second apply reads the resource using the data source.
Member:

Nice! I like this two-step approach to data sources. Things are easier to follow like this.

I just hope we don't end up masking any potential regressions in the data sources' ability to evaluate correctly in a single apply. That is how the vast majority of real-world users would use them.

Contributor Author:

Thanks. Yeah, there might actually be a regression... these used to work in one apply, but now they require two. When I looked up data sources in the Terraform docs and related issues, though, this seems to be the expected behavior.

Most data sources are evaluated before the resources they depend on. The exception is when the attribute being fetched is computed; only then will Terraform wait for the resource to be created before evaluating the data source.

Terraform only delays reading a data resource if its configuration includes a value that can't be determined until the apply step.

The pattern of reading something using a data resource in the same configuration where it's created does tend to lead to this sort of problem, and so we'd recommend avoiding that where possible. 

hashicorp/terraform#23242 (comment)

Most times, users should reference values from the resource itself, instead of using a data source. So I'm hoping we're setting the right example by doing this in two applies.
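As an illustration of that recommendation, referencing the managed resource's own attributes avoids the two-apply data-source pattern entirely. This is a hypothetical config fragment (the resource and output names are illustrative, not from the PR's tests):

```hcl
resource "kubernetes_namespace" "example" {
  metadata {
    name = "example"
  }
}

# Prefer referencing the resource directly, rather than reading it
# back through a data source in the same configuration, which may
# not be evaluated until a second apply:
output "ns_name" {
  value = kubernetes_namespace.example.metadata[0].name
}
```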

Member:
TBH, I wasn't aware of that nuance in the evaluation of data sources and computed attributes. Thanks for uncovering that bit of info. It does make a lot of sense given that it's the documented expected behaviour.

@@ -80,17 +87,15 @@ func testAccKubernetesDataSourcePVCConfig_basic(name string) string {
}

wait_until_bound = false
lifecycle {
Member:

Was this necessary to remove? Its purpose is to avoid producing diffs on clusters that use Calico networking, which injects some annotations. Without this, the test fails on Typhoon, for example.

Contributor Author:

Yeah, I removed this to get the test to pass. Here's what I have in my notes about it:

Error: Computed attribute cannot be set
on config992760292/terraform_plugin_test.tf line 2, in resource "kubernetes_persistent_volume_claim" "test":
metadata {

# solution: remove the ignore_changes block
   wait_until_bound = false
-  lifecycle {
-    ignore_changes = [metadata]
-  }

@dak1n1 (Contributor, Author) commented Nov 2, 2020

I updated the TC test to include the new .terraformrc path. It looks like it's running well now. https://ci-oss.hashicorp.engineering/viewLog.html?buildId=155606&

@jrhouston jrhouston mentioned this pull request Nov 2, 2020
2 tasks
@dak1n1 (Contributor, Author) commented Nov 2, 2020

That's better:

$ TESTARGS="-run 'googleCloud'" make testacc
=== RUN   TestAccKubernetesPersistentVolumeClaim_googleCloud_volumeMatch
--- PASS: TestAccKubernetesPersistentVolumeClaim_googleCloud_volumeMatch (148.40s)
=== RUN   TestAccKubernetesPersistentVolumeClaim_googleCloud_volumeUpdate
--- PASS: TestAccKubernetesPersistentVolumeClaim_googleCloud_volumeUpdate (76.47s)
=== RUN   TestAccKubernetesPersistentVolumeClaim_googleCloud_storageClass
--- PASS: TestAccKubernetesPersistentVolumeClaim_googleCloud_storageClass (13.65s)
=== RUN   TestAccKubernetesPersistentVolume_googleCloud_basic
--- PASS: TestAccKubernetesPersistentVolume_googleCloud_basic (80.44s)
=== RUN   TestAccKubernetesPersistentVolume_googleCloud_volumeSource
--- PASS: TestAccKubernetesPersistentVolume_googleCloud_volumeSource (72.04s)
PASS
ok      github.com/hashicorp/terraform-provider-kubernetes/kubernetes   391.045s

@@ -184,6 +184,7 @@ func TestAccKubernetesPersistentVolume_googleCloud_basic(t *testing.T) {
resource.TestCheckResourceAttr("kubernetes_persistent_volume.test", "spec.0.persistent_volume_source.0.gce_persistent_disk.#", "1"),
resource.TestCheckResourceAttr("kubernetes_persistent_volume.test", "spec.0.persistent_volume_source.0.gce_persistent_disk.0.pd_name", diskName),
),
ExpectNonEmptyPlan: true, // GCP adds label "goog-gke-volume" to the disk.
Member:

I wonder if doing this will mask a whole set of potential issues as we will be ignoring any unexpected diffs on the whole resource.
Regardless, I think this PR is already doing a lot, so it would be best to continue this conversation in a different context.

Contributor Author:

That's a good point. This kind of worries me too. It's a label placed on the google disk resource (a resource owned by another provider). I could probably use something like this:

lifecycle {
    ignore_changes = [labels]
  }

I might give it a test in another PR.

@alexsomesan (Member) left a comment

Again, thanks a lot for the effort of seeing this through. It's a massive chunk of work!

I've checked this branch in a few of the CI stages and it looks like it performs very similarly to master in terms of flaky tests.

I think we can go ahead and merge and address the test issues individually, like usual.

@dak1n1 (Contributor, Author) commented Nov 4, 2020

Thanks for the thorough reviews on this beast of a PR! @alexsomesan @jrhouston 🥳

@dak1n1 dak1n1 merged commit aae2fef into hashicorp:master Nov 4, 2020
@ghost commented Dec 5, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 hashibot-feedback@hashicorp.com. Thanks!

@ghost ghost locked as resolved and limited conversation to collaborators Dec 5, 2020