use remote state to read data from previous steps #782
Conversation
Thanks @daniel-cit, did an initial review and it looks great.
@@ -67,6 +67,21 @@ locals {
      "roles/compute.xpnAdmin",
    ],
  }

  granular_sa_seed_project = {
Let's add a comment here: this will allow individual stages to access state for all other stages (in case users want to restrict this).
Good catch. Actually, these roles are redundant with the `roles/storage.admin` role granted on the bucket in the `tf_cloudbuild_workspace` module, since we are using a single Terraform state bucket created in the seed project to host all the states, like we did before using the workspace module.
To be able to revoke the other service accounts' access to one of the state buckets, we would need to (sketched below):
- stop using the Terraform state bucket from the seed project
- create a Terraform state bucket for each workspace in the CI/CD project, setting `var.create_state_bucket` to true
- grant read roles on each bucket for the other service accounts that need to read the outputs

@bharathkkb should we do this change?
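A minimal sketch of what the last two items could look like. The module source path, the `state_bucket` output name, and the `state_reader_service_accounts` variable are assumptions for illustration, not the actual interface:

```hcl
# Sketch only: per-workspace state bucket plus read access for the service
# accounts of the other steps that need to consume its outputs.
module "tf_workspace" {
  source = "terraform-google-modules/bootstrap/google//modules/tf_cloudbuild_workspace" # source path assumed

  # ... existing workspace configuration ...

  # Create a dedicated state bucket for this workspace in the CI/CD project.
  create_state_bucket = true
}

# Read-only access to this workspace's state for the other stages' service accounts.
resource "google_storage_bucket_iam_member" "remote_state_readers" {
  for_each = toset(var.state_reader_service_accounts) # hypothetical variable

  bucket = module.tf_workspace.state_bucket # output name assumed
  role   = "roles/storage.objectViewer"
  member = "serviceAccount:${each.value}"
}
```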
I see, I think we can keep this in the PR for now and track it in a separate PR. My hope with the workspace module was that in the future we can split this out for more granular permissions.
  config = {
    bucket = "${var.backend_bucket}"
    prefix = "terraform/environments/production"
As a follow-up we should look at the UX for adding/deleting stages. An approach could be a dynamic "env" which we source from 1-org, but we can discuss further in an issue.
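For context, this `config` block is presumably part of a `terraform_remote_state` data source reading the production environment state from the shared GCS state bucket. A hedged sketch of that pattern; the data source name and the consumed `env_folder` output are invented for illustration, only the bucket/prefix shape comes from the diff above:

```hcl
# Sketch: read the production environment state from the shared GCS state bucket.
data "terraform_remote_state" "environments_production" {
  backend = "gcs"

  config = {
    bucket = var.backend_bucket
    prefix = "terraform/environments/production"
  }
}

locals {
  # Example consumption; "env_folder" is a hypothetical output of that step.
  production_folder = data.terraform_remote_state.environments_production.outputs.env_folder
}
```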
@@ -194,6 +194,7 @@ the following steps:
| Name | Description |
|------|-------------|
| cloudbuild\_project\_id | Project where CloudBuild configuration and terraform container image will reside. |
| common\_config | Common configuration data to be used in other steps. |
nit: After this is merged, let's update the docs to call out how we will use the common config created here throughout the rest of SFB.
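A hedged sketch of what such a doc example could look like. Only the `common_config` output name comes from the table above; the state prefix and the consuming step are assumptions:

```hcl
# Sketch: a later step reads the bootstrap state and reuses common_config.
data "terraform_remote_state" "bootstrap" {
  backend = "gcs"

  config = {
    bucket = var.backend_bucket
    prefix = "terraform/bootstrap/state" # prefix value assumed for illustration
  }
}

locals {
  common_config = data.terraform_remote_state.bootstrap.outputs.common_config
}
```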
	}
	return false, err
}

func TestBootstrap(t *testing.T) {
We can do this in a follow-up, but I am thinking now that we should add more rigor around testing stage interfaces, since remote state relies on them. Our tests will still catch this today, but slowly, depending on the test stage. This way we can fail fast by asserting that the outputs we expected are at least present.
Now that the shared env of 4-projects has been split from the same file it shared with the dev/non-prod/prod envs, we will fail faster if something happens in the shared env.
@bharathkkb would the context for fail fast be to fail all envs in 4-projects if one of them fails? Like if `bu1_development` fails, all other envs (`bu1_non_production`, ..., `bu2_production`) would not be executed?
Being able to deploy only one environment in the test, for example:
- 3-networks: shared, production
- 4-projects: bu1_shared, bu1_production
- 5-app-infra: bu1_production

would speed up the tests by at least 30~40 minutes.
@daniel-cit It is a slightly different ask; I opened #800 to track it. But I do agree that deploying one env is also something I am interested in trying out (opened #801 for that).
	require.NoError(t, err2)
	if !fExists {
		srcFile := path.Join(cwd, "../../../0-bootstrap/backend.tf.example")
		_, err3 := exec.Command("cp", srcFile, destFile).CombinedOutput()
Do we need to exec a copy? Why not just write the file, if it does not exist, with a backend config string (we can fix this in a follow-up after testing)?
We are using `cp` to also validate the file `backend.tf.example`.
@daniel-cit we should just read `backend.tf.example` and write it to `backend.tf`. If any of that call path fails, we can fail the test.
@daniel-cit a few comments; since this PR is big, let's get it in and fix the items in follow-up PRs.
Fixes #685