
improve user experience about CNI errors #296

Closed
GheRivero opened this issue Jun 13, 2017 · 5 comments
Labels
area/UX · help wanted · kind/documentation · kind/feature · lifecycle/rotten · priority/important-longterm · triaged

Comments

@GheRivero

FEATURE REQUEST
One of the most common questions in the kubeadm channel is about "errors" with the CNI plugin. Before deploying any network add-on, people look at the cluster status and see the DNS pod stuck in "Pending" and the main node in a NotReady status (runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized).

To avoid this recurrent UX problem, kubeadm init should offer a way to specify which network add-on to deploy. There should be no default, and it should remain fully compatible with current kubeadm behaviour. Possible options:

  • Add a flag to kubeadm init to specify the add-on to deploy (--network-addon=). This could be a full URL to the manifest to use. (It could be extended to accept just the add-on name, but that would require kubeadm to maintain a map of all the solutions and their manifest locations.) There could be problems with current add-ons that use more than one manifest, like flannel (one for the RBAC groups and a second for the services); this is easily fixable by combining them into a single manifest.

  • Extend the config file to include the add-on. The config file mechanism already exists and can easily be extended with a new option.

  • Any other suggestions? This should be open to debate so we can reach full consensus.
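The config-file option could look something like the sketch below. Note this is purely illustrative: the networkAddon field does not exist in kubeadm, and the manifest URL is a placeholder, not a real add-on location.

```yaml
# kubeadm-config.yaml -- hypothetical sketch, passed via `kubeadm init --config`.
# The networkAddon field is NOT an existing kubeadm option; it is what the
# proposal above might look like.
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
networking:
  podSubnet: 10.244.0.0/16
# Hypothetical new option: URL of the network add-on manifest to apply once
# the control plane is up. No default; omitting it keeps current behaviour.
networkAddon: https://example.com/path/to/addon-manifest.yaml
```

kubeadm would then apply that manifest after the API server becomes reachable, so the node would leave NotReady without a separate manual step.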

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 15, 2018
@errordeveloper
Member

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 22, 2018
@timothysc timothysc added help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. triaged priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. labels Jan 31, 2018
@timothysc timothysc added kind/documentation Categorizes issue or PR as related to documentation. triaged and removed triaged labels Apr 7, 2018
@k8s-ci-robot k8s-ci-robot added kind/feature Categorizes issue or PR as related to a new feature. and removed kind/enhancement labels Jun 5, 2018
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 3, 2018
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Oct 3, 2018
@timothysc
Member

I no longer see this being reported. Closing.
If you feel otherwise and have reproduction information please re-open.
