
Add secret for kubeadmin pre-idp user #771

Conversation

@sallyom (Contributor) commented Nov 30, 2018

This PR (together with openshift/origin#21580) configures an initial pre-identity-provider user that can access the registry, Prometheus, the web console, and Grafana, while not creating a role/identity/etc. that will have to be cleaned up once an admin sets up an IDP. The PR generates a bcrypt-hashed, randomly generated password rather than accepting any password a user supplies.

@enj has more info about the kubeadmin initial user.

The password/hash is a new asset, KubeadminPassword, with a file ${CLUSTER_DIR}/kubeadmin-password (similar to metadata.json generation). Also, I removed the unused Admin.Email. This unblocks access to UI token-based components while we have a cross-team discussion to iterate on the best place to create/store/log this secret.

@enj @mrogers950 what else has to happen to enable this initial user? I see openshift/origin#21580 merged.
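For context while reading the diff, here is a minimal, hedged sketch of the kind of generation the description talks about: a random 14-character password (ambiguous characters omitted) hashed with bcrypt. The character set, the skipped digit/special-character constraint, and the use of golang.org/x/crypto/bcrypt are illustrative assumptions, not the PR's exact code.

package main

import (
	"crypto/rand"
	"fmt"
	"math/big"

	"golang.org/x/crypto/bcrypt"
)

// randomPasswordAndHash returns a random 14-character password drawn from a
// character set with ambiguous characters (0, O, 1, l) removed, plus its bcrypt hash.
func randomPasswordAndHash() (string, []byte, error) {
	const chars = "abcdefghijkmnopqrstuvwxyzABCDEFGHIJKLMNPQRSTUVWXYZ23456789"
	buf := make([]byte, 14)
	for i := range buf {
		n, err := rand.Int(rand.Reader, big.NewInt(int64(len(chars))))
		if err != nil {
			return "", nil, err
		}
		buf[i] = chars[n.Int64()]
	}
	password := string(buf)
	// Only the bcrypt hash would ever be pushed to the cluster; the plaintext stays with the installer.
	hash, err := bcrypt.GenerateFromPassword([]byte(password), bcrypt.DefaultCost)
	return password, hash, err
}

func main() {
	pw, hash, err := randomPasswordAndHash()
	if err != nil {
		panic(err)
	}
	fmt.Printf("password: %s\nhash: %s\n", pw, hash)
}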

@openshift-ci-robot openshift-ci-robot added do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Nov 30, 2018
@@ -54,6 +62,13 @@ func (t *Tectonic) Generate(dependencies asset.Parents) error {
worker := &machines.Worker{}
master := &machines.Master{}
dependencies.Get(installConfig, clusterk8sio, worker, master)
kubeadminPassword, kubeadminPasswordHash, err := generateRandomPasswordHash()

Member:

Why are we generating this instead of asking the user if they want to supply a password?

Contributor Author:

This (together with openshift/origin#21580) creates an initial pre-identity-provider user that can access the registry, Prometheus, the web console, and Grafana, while not creating a bunch of roles/identities/objects that have to be cleaned up once an admin does set up an IDP. @enj definitely does not want to let a user choose this password; it has to be secure.

Member:

> @enj definitely does not want to let a user choose this password; it has to be secure.

14 random chars is less secure than my default ;). And I'd really like to not take responsibility for this :p. Can it be a day-2 operation?

Contributor:

We want to prevent the user from setting a password like "password" initially, configuring their IDP, and then ignoring the cleanup step (which is to delete the generated secret), leaving an easily guessed admin account. If we force a good password, then forgetting to clean up is less dangerous.

logrus.Infof("Install complete! Run 'export KUBECONFIG=%s' to manage your cluster.", kubeconfig)
logrus.Info("After exporting your kubeconfig, run 'oc -h' for a list of OpenShift client commands.")
// TODO: Direct users to web-console
// TODO: Get kubeadmin password, log here

Contributor:

We already give the user the admin kubeconfig. I'd rather tell people how to get/create the password.

Member:

> I'd rather tell people how to get/create the password.

How about we point folks at their admin kubeconfig (like we do now) and then drop a link to a "common day-2 operations" landing page? Where would something like that live? Master branch of https://github.com/openshift/openshift-docs ? Some sort of pre-release branch? Some other repo?

@sallyom (Contributor Author), Nov 30, 2018:

Without openshift/origin#21580, in order to have a user that can log in to the console/Prometheus/etc., you'd have to set up an identity provider. This kubeadmin user is expected to be removed upon setting up your IDP.

Contributor:

We need to be able to do OAuth flows for the console, etc., and that can't be done with just a kubeconfig.

Member:

> We need to be able to do OAuth flows for the console, etc., and that can't be done with just a kubeconfig.

But this PR is just pushing in a new secret, right? That's not installer-specific. At any time after the control plane comes up, you can use the admin kubeconfig and oc to push in the kubeadmin-password secret. You could wrap it up in an oc subcommand if constructing the password, hashing it, and putting it inside a secret were a concern. A benefit of that approach is that it would be opt-in, folks who are going to start setting up a fully-fledged identity provider immediately after the cluster comes up could skip it and have nothing around to clean up.
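For illustration, a day-2 push of such a secret with the admin kubeconfig might look roughly like the sketch below using client-go. The secret name kubeadmin, the kube-system namespace, the data key, and the context-taking Create signature are assumptions for this sketch, not necessarily what the PR or an oc subcommand would do.

package main

import (
	"context"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the admin kubeconfig the installer already writes.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// bcrypt hash of the generated password; the plaintext never needs to reach the cluster.
	hash := []byte("$2a$10$replace-with-real-bcrypt-hash")

	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "kubeadmin", Namespace: "kube-system"},
		Data:       map[string][]byte{"kubeadmin": hash},
	}
	if _, err := client.CoreV1().Secrets("kube-system").Create(context.TODO(), secret, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}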

Contributor:

@mrogers950
If you want the admin user to have both the admin kubeconfig and admin UI creds by default, that is understandable.

> We need to be able to do OAuth flows for the console, etc., and that can't be done with just a kubeconfig.

But I think you can most definitely do all the steps of the OAuth flow for the console with that admin kubeconfig.

@@ -54,6 +62,13 @@ func (t *Tectonic) Generate(dependencies asset.Parents) error {
worker := &machines.Worker{}
master := &machines.Master{}
dependencies.Get(installConfig, clusterk8sio, worker, master)
kubeadminPassword, kubeadminPasswordHash, err := generateRandomPasswordHash()

Contributor:

It doesn't seem quite right to me that the kube admin password is generated as part of the tectonic manifests asset. Should it be a separate asset that the tectonic manifests asset depends upon?

Contributor Author:

not sure, thinking

@sallyom (Contributor Author), Nov 30, 2018:

This is a secret that is required by the OAuth server to authenticate a pre-identity-provider user with an OAuth token. The password could be generated elsewhere, but the secret should be created during the tectonic asset generation, I think. I'm going to move the generatePassword function into a new asset, KubeadminPassword, and write it to ${CLUSTER_DIR}/kubeadmin-password.

@sallyom sallyom force-pushed the add-oauthadmin-preidp-password-secret branch 2 times, most recently from 5a496d3 to 8a13740 Compare November 30, 2018 23:27
@openshift-ci-robot openshift-ci-robot added size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. and removed size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Dec 1, 2018
@sallyom sallyom force-pushed the add-oauthadmin-preidp-password-secret branch from 1637a1f to 8d3b8c3 Compare December 1, 2018 00:58
@sallyom sallyom force-pushed the add-oauthadmin-preidp-password-secret branch 2 times, most recently from 277a7c9 to b7a38a2 Compare December 1, 2018 02:48
@sallyom (Contributor Author) commented Dec 1, 2018

/test e2e-aws

@sallyom sallyom changed the title WIP: add secret for kubeadmin pre-idp user Add secret for kubeadmin pre-idp user Dec 2, 2018
@openshift-ci-robot openshift-ci-robot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Dec 2, 2018
@enj (Contributor) commented Dec 3, 2018

@wking @abhinavdahiya @staebler let me explain the requirements that have been outlined by @smarterclayton and @derekwaynecarr:

  1. User runs installer and provides AWS credentials
  2. Installer completes, and outputs (at least) web console URL and username/password
  3. username/password must work against the console
  4. username/password must work with Prometheus, grafana, etc
  5. username/password must have cluster admin rights
  6. The user can explore the entire product through the web console, no oc required

This should make it clear that this is not a day two activity - it is expected to always be there. No configuration from the user is required to make it happen (IDP or otherwise). It is opt-out, not opt-in. We also know that this must be done via an OAuth flow as that is the only thing the web console / Prometheus / etc support.

The requirements from @openshift/sig-auth are:

  1. The user cannot set the password - it must be randomly generated
  2. The password is not stored on the cluster (only the hash is) - the installer must provision the password, as it will be the only thing that will ever know the password

We have a host of other requirements that I have taken care of inside the openshift OAuth server.

// generateRandomPasswordHash generates a hash of a random ASCII 14 char string with at least
// one digit and one special character.
func generateRandomPasswordHash() (string, []byte, error) {
rand.New(rand.NewSource(time.Now().UnixNano()))

Contributor Author:

done

// generateRandomPasswordHash generates a hash of a random ASCII 14 char string with at least
// one digit and one special character.
func generateRandomPasswordHash() (string, []byte, error) {
rand.New(rand.NewSource(time.Now().UnixNano()))

Contributor:

This should not be here.

Contributor Author:

fixed

rand.New(rand.NewSource(time.Now().UnixNano()))
const (
lowercase = "abcdefghijklmnopqrstuvwxyz"
uppercase = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

Contributor:

Let us avoid ambiguous characters such as 0 and O - even with the reduced entropy we still have ~100 bits.

Contributor Author:

took out 0 O 1 l

@sallyom sallyom force-pushed the add-oauthadmin-preidp-password-secret branch 7 times, most recently from b06096f to e9d5852 Compare December 4, 2018 00:34
Email: emailAddress.EmailAddress,
Password: password.Password,
SSHKey: sshPublicKey.Key,
Password: password.Password,

Contributor:

From what I can tell, the entire install config is stored in the kube-system/cluster-config-v1 config map. We do not want the password or the hash stored there.

@sallyom (Contributor Author), Dec 4, 2018:

OK, generating the password in pkg/asset/password/password.go, and the password will be stored in the install dir at ${INSTALL_DIR}/kubeadmin-password, similar to how AWS metadata is written to metadata.json.

@sallyom sallyom force-pushed the add-oauthadmin-preidp-password-secret branch from cb3b810 to 8f3dce1 Compare December 4, 2018 18:15
@openshift-ci-robot openshift-ci-robot removed the lgtm Indicates that a PR is ready to be merged. label Dec 4, 2018
@openshift-ci-robot commented Dec 4, 2018

@sallyom: The following test failed, say /retest to rerun them all:

Test name: ci/prow/e2e-libvirt | Commit: b7a38a2 | Rerun command: /test e2e-libvirt

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@sallyom sallyom force-pushed the add-oauthadmin-preidp-password-secret branch from 8f3dce1 to 7bd885c Compare December 4, 2018 18:26
@sallyom (Contributor Author) commented Dec 4, 2018

@abhinavdahiya I've addressed your comments. Also, I've moved the asset file to ${CLUSTER_DIR}/auth/kubeadmin-password per discussion with auth team.

@abhinavdahiya (Contributor):

nit: kubeadmin is weird, just admin

@sallyom what is the motivation for the kubeadmin-password naming? kube seems unnecessary, and admin-password seems more logical, as the user would log in with oc -u admin -p <admin-password>?

@sallyom sallyom force-pushed the add-oauthadmin-preidp-password-secret branch from 7bd885c to fcd8f73 Compare December 4, 2018 19:12
@sallyom (Contributor Author) commented Dec 4, 2018

@abhinavdahiya

> nit: kubeadmin is weird, just admin
>
> @sallyom what is the motivation for the kubeadmin-password naming? kube seems unnecessary, and admin-password seems more logical, as the user would log in with oc -u admin -p <admin-password>?

@abhinavdahiya kubeadmin was handed down from @enj's work; that name was decided upon after discussion with various people (not me :) )

@abhinavdahiya (Contributor):

> @abhinavdahiya kubeadmin was handed down from @enj's work; that name was decided upon after discussion with various people (not me :) )

would it be possible to link to that discussion? @enj

@abhinavdahiya (Contributor):

> would it be possible to link to that discussion? @enj

So it looks like the name was picked based on restrictions from the OpenShift user and OAuth APIs, as per @enj.

/lgtm

I'll let @wking give the /lgtm as he had comments: #771 (comment)

@sallyom (Contributor Author) commented Dec 4, 2018

@wking I see your point from #771 (comment)
With this PR, we are adding a secret to the cluster via the installer, to enable access to token-based components without any post-install/manual steps. The only cleanup would be to remove the secret. This special kube:admin user has no identity, no policy, etc. The only thing that authenticates this user is the secret. This may be required by clusters installed via other means; we can add this secret to a different install process as well. @enj, correct me please if I'm wrong.

@wking (Member) commented Dec 5, 2018

> ... to enable access to token-based components without any post-install/manual steps

So is the no-oc requirement trying to limit installation complexity, and not "maybe the user won't like/have oc"? I dunno what my issue is. Maybe just that "hey, we stuffed in an entry-level identity provider" feels like it slants the deck too much towards "toy cluster" vs. "production cluster". How do folks feel about adding a wizard question for this?

  If selected, this will enable the kubeadmin user used by the bootstrap authenticator.
? Add a basic identity provider (Y/n)

That's from:

question := &survey.Question{
	Prompt: &survey.Confirm{
		Message: "Add a basic identity provider",
		Help:    "If selected, this will enable the kubeadmin user used by the bootstrap authenticator.",
		Default: true,
	},
}

@wking (Member) commented Dec 5, 2018

Or maybe no prompt and add a boolean to the InstallConfig (NoBootstrapAuthenticator?), since folks creating production clusters are more likely to be willing to skip the wizard and provide their own seed files?
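A hypothetical sketch of what such an opt-out field might look like; NoBootstrapAuthenticator is only the name floated in this comment, not an actual InstallConfig field:

package types

// InstallConfig sketch; existing fields elided.
type InstallConfig struct {
	// NoBootstrapAuthenticator, if true, would skip generating the kubeadmin
	// password and its secret, leaving nothing to clean up after IDP setup.
	NoBootstrapAuthenticator bool `json:"noBootstrapAuthenticator,omitempty"`
}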

@smarterclayton (Contributor):

So the defaults should definitely align with what we want to achieve. For the next 4 months, there will be no production clusters installed with this. There will be innumerable kick-the-tires clusters, followed by a phase where seeding clusters starts to become real, followed by GA.

In the next four months or so, we need to convince everyone who tries it that we can deliver the whole enchilada. I agree disablement will be useful in the future (service delivery will want something like it). But since disablement is so easy post-install, I would want to focus on keeping our input options limited.

Doing this by default doesn’t meaningfully reduce the security profile of the cluster any more than the default admin kubeconfig does, and this is in many ways a superior flow for showing the coherent story we are being tasked to deliver. We can absolutely refine over the next month or two.

@derekwaynecarr (Member):

The installer collects two pieces of information today that have no purpose (user and pw).

The request is that we can demonstrate a frictionless initial experience for the upcoming presentation at KubeCon. If we can accept this change now, let's litigate whether the secret should be injected as a post-install step after KubeCon.

@derekwaynecarr (Member):

The console has already introduced support to alert a user logged in with this identity to configure a proper IDP.

@enj (Contributor) commented Dec 5, 2018

> > ... let me explain the requirements...
>
> Thank you :).
>
> > 1. username/password must work against the console
> > 2. username/password must work with Prometheus, grafana, etc
> > 3. username/password must have cluster admin rights
>
> As far as the installer is concerned, all of these are just "has to push the kubeadmin secret into the kube-system namespace", right?

Yup.

> > 1. The user can explore the entire product through the web console, no oc required
>
> I think this was the main bit we were missing from @sallyom's initial description. Is there more background on this discussion somewhere where we can read up on it? Is it just "some folks don't like command lines"? I agree that's a thing, but the installer is already a command-line tool. Anyone who wants to wrap the command-line in a web UI (or whatever) could also wrap oc and its (hypothetical) kubeadmin-secret-injector command. Were there other reasons?

I will send an email with details once I get a chance to collect my thoughts. As an aside, I think it would be cool to compile the installer into a WebAssembly module so you could run it from the browser 😄

> > This should make it clear that this is not a day two activity - it is expected to always be there. No configuration from the user is required to make it happen (IDP or otherwise). It is opt-out, not opt-in.
>
> So how do we opt out? And is there background discussion on opt-in vs. opt-out somewhere too?

The web console will have UI to help you add IDPs and remove the secret.

> > We also know that this must be done via an OAuth flow as that is the only thing the web console / Prometheus / etc support.
>
> I don't see anything OAuth about this PR. I assume that's something you handled in openshift/origin#21580 which we can ignore here?

Yes.

> > The requirements from @openshift/sig-auth are:
> >
> >   1. The user cannot set the password - it must be randomly generated
> >   2. The password is not stored on the cluster (only the hash is) - the installer must provision the password, as it will be the only thing that will ever know the password
>
> But they don't really care about the installer, right? It's just that once you have "password is not stored on the cluster" and "don't assume oc is present", you end up with "well, I guess that leaves the installer". One additional wrinkle is that the installer host may not have direct network access to the cluster (e.g. if the cluster is created via some cloud API, the fact that the installer can reach that cloud API doesn't mean the installer can reach the cluster that API created). But if you don't have network access to the cluster, you're probably not going to be poking around Prometheus, etc. either.

It is basically "whatever provisioned the cluster" since it cannot be done on the cluster.

> I'm currently on board with adding password generation and this secret to the installer's asset graph if we blindly accept the no-oc requirement. But I'm still not sold on the no-oc requirement, and I'd like to have a clearer picture of where that came from.

If you are new to kube/openshift, something like the web console is far easier to get started with. And to do that, you need to be able to log in as some user.

> Why would this only be a thing that folks who install via the installer will need, vs. something that any OpenShift cluster admin may need at one point or another?

Anything that provisions a cluster and wants to have a nice experience in the console would be expected to create this secret. The installer just happens to be the provisioner we care about.

> Also, where does the remover live? I think folks are more likely to remember to remove a resource if they added it in the first place (oc create bootstrap-identity and oc delete bootstrap-identity?). Vs. auto-created with the installer, and removed by oc or some web UI?

While we certainly make it obvious to people that they should remove it, it poses no security risk to the cluster. Nothing is going to brute force that password.

> So is the no-oc requirement trying to limit installation complexity, and not "maybe the user won't like/have oc"? I dunno what my issue is. Maybe just that "hey, we stuffed in an entry-level identity provider" feels like it slants the deck too much towards "toy cluster" vs. "production cluster". How do folks feel about adding a wizard question for this?

I believe you are looking at this incorrectly. This user is no different than the system:admin user that the kubeconfig provides - it just also works in the web console. "Connect this cluster to your corporate LDAP so you can see Prometheus" just does not cut it - the barrier to entry is too high. Also, if you lose control of the system:admin cert, you have to replace the whole CA since there is no revocation. At least with kubeadmin, all you have to do is delete the secret.

@wking (Member) commented Dec 5, 2018

> It is basically "whatever provisioned the cluster" since it cannot be done on the cluster.

This is what the admin kubeconfig allows you to assert. It doesn't have to be a single binary.

> If you are new to kube/openshift, something like the web console is far easier to get started with. And to do that, you need to be able to log in as some user.

Wouldn't these folks mostly want to use openshift.io or some other hosted offering? Or your asm module? ;)

> Nothing is going to brute force that password.

It's also just exposure. What if the backing code has bugs? What if someone finds a way to push this one secret and escalate themselves to admin status?

> "Connect this cluster to your corporate LDAP so you can see Prometheus" just does not cut it - the barrier to entry is too high.

I was thinking "install oc and run oc create bootstrap-identity", which is a bit lower ;). Still non-zero though.

> Also, if you lose control of the system:admin cert, you have to replace the whole CA since there is no revocation.

That is terrifying. Is there an issue tracking it?

Anyhow, I think opt-out would be an easy add, but yeah, we can land this now and revisit later.

/lgtm

@openshift-ci-robot openshift-ci-robot added the lgtm Indicates that a PR is ready to be merged. label Dec 5, 2018
@openshift-ci-robot:

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: ericavonb, sallyom, wking

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

Labels
approved: Indicates a PR has been approved by an approver from all required OWNERS files.
lgtm: Indicates that a PR is ready to be merged.
size/XXL: Denotes a PR that changes 1000+ lines, ignoring generated files.