This repository has been archived by the owner on Jun 14, 2019. It is now read-only.

Detect when a config map doesn't exist on a running pod for > X time, fail #171

Open
smarterclayton opened this issue Oct 3, 2018 · 4 comments

Comments

@smarterclayton
Contributor

A common failure mode is forgetting to create a dependent config map. We should do a slightly better job of conveying that condition back (either as a job failure or a log message).
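
For reference, when a pod mounts a config map that doesn't exist, the kubelet leaves the pod stuck in ContainerCreating and records FailedMount warning events against it, so the condition is already visible in the API; it just never reaches the job output. A minimal client-go sketch of pulling those events back out (the helper name and the "not found" message matching are illustrative, not existing ci-operator code, and the signatures assume a recent client-go):

```go
package podcheck

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// missingConfigEvents returns the messages of events indicating that a
// volume could not be mounted because its backing config map (or secret)
// is absent; the kubelet records these with reason "FailedMount".
func missingConfigEvents(client kubernetes.Interface, namespace, pod string) ([]string, error) {
	events, err := client.CoreV1().Events(namespace).List(context.TODO(), metav1.ListOptions{
		FieldSelector: fmt.Sprintf("involvedObject.name=%s", pod),
	})
	if err != nil {
		return nil, err
	}
	var messages []string
	for _, event := range events.Items {
		if event.Reason == "FailedMount" && strings.Contains(event.Message, "not found") {
			messages = append(messages, event.Message)
		}
	}
	return messages, nil
}
```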

@stevekuznetsov
Contributor

We should? Or k8s should?

@stevekuznetsov
Contributor

The configmaps we have issues with are:

  • CI Operator config
  • any of the assorted configmaps for the templates

The first is fixed now with an updated approach to uploading those configs.
The second should be protected from copy-pasta errors for extant templates by generating the Prow config; for net-new templates, agreed, this would be an issue.

@smarterclayton
Contributor Author

I think in this case the ci-operator "wait for pod success" job could easily report the error that k8s puts on the container once the config map has been missing for a period of time.

@stevekuznetsov
Contributor

Yes, and with secrets too.
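
A sketch of what that combined check could look like (the helper names are hypothetical and the polling loop is a simplification of whatever the real wait job does; signatures assume a recent client-go). When a container references a missing ConfigMap or Secret, the kubelet parks it in a waiting state with reason CreateContainerConfigError and a message naming the missing object, which a wait loop can surface once the condition outlasts a grace period:

```go
package podcheck

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// configError returns the kubelet's message if any container is stuck
// waiting on a missing ConfigMap or Secret; the waiting reason
// "CreateContainerConfigError" covers both kinds of reference.
func configError(pod *corev1.Pod) string {
	for _, statuses := range [][]corev1.ContainerStatus{pod.Status.InitContainerStatuses, pod.Status.ContainerStatuses} {
		for _, status := range statuses {
			if w := status.State.Waiting; w != nil && w.Reason == "CreateContainerConfigError" {
				return fmt.Sprintf("container %q: %s", status.Name, w.Message)
			}
		}
	}
	return ""
}

// waitForPodSuccess polls the pod until it succeeds, failing early when a
// missing-config condition has persisted for longer than grace.
func waitForPodSuccess(client kubernetes.Interface, namespace, name string, grace time.Duration) error {
	deadline := time.Now().Add(grace)
	for {
		pod, err := client.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		switch pod.Status.Phase {
		case corev1.PodSucceeded:
			return nil
		case corev1.PodFailed:
			return fmt.Errorf("pod %s/%s failed", namespace, name)
		}
		if msg := configError(pod); msg != "" && time.Now().After(deadline) {
			return fmt.Errorf("pod %s/%s is blocked on missing config: %s", namespace, name, msg)
		}
		time.Sleep(10 * time.Second)
	}
}
```

A watch would be more efficient than polling here, but polling keeps the sketch short.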
