
Move code that should be maintained to dedicated repos #762

Closed
bgrant0607 opened this issue Apr 12, 2016 · 40 comments
Labels: kind/velocity-improvement, lifecycle/frozen

Comments

@bgrant0607

contrib isn't monitored adequately for maintained code. Issues aren't triaged. PRs aren't reviewed. We need to fix those problems, but it's not a good idea to mix maintained and unmaintained code.

Examples:

  • addon-resizer
  • ingress and service-loadbalancer
  • test infra/utils and mungegithub
  • do we still need release-notes?
  • ansible?

cc @bprashanth @eparis @david-mcmahon @mikedanese @ixdy @Q-Lee

@bprashanth

Yeah, I've brought this up before. I think we decided (at the time) that we don't really get much from splitting Ingress out into another repo.

FWIW, both service-loadbalancer and ingress have e2es (https://github.com/kubernetes/kubernetes/blob/master/test/e2e/ingress.go) in the main repo that consume released images from https://github.com/kubernetes/contrib/releases. Unit tests run as part of contrib's pre-submit GitHub hooks. I usually include a line in the release notes when I bump the image in https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/cluster-loadbalancing/glbc/glbc-controller.yaml#L21.

It's a hand-wavy process that I'd like to improve.
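
Concretely, the "bump" is just editing the image tag in that manifest after pushing a new image from contrib. A rough sketch of the relevant excerpt (container name and tag are illustrative, not copied from the actual file):

```yaml
# Illustrative excerpt of cluster/addons/cluster-loadbalancing/glbc/glbc-controller.yaml.
# Cutting a release means pushing a new image from contrib, then bumping this tag here.
spec:
  containers:
  - name: l7-lb-controller                      # illustrative container name
    image: gcr.io/google_containers/glbc:0.6.0  # bump the tag, e.g. to 0.6.1
```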

The meta issue is: How do we manage cluster addons?

  • Where are they run?
    • If they're on the node, we need to be really careful about increasing resource requests and limits (see the sketch after this list)
    • If they're on the master, we need to allow users to swap them out (e.g., GCE L7 for nginx)
  • Where do they live?
    • kube-dns is checked into the main repo
    • heapster has its own repo, but I'm not sure about its e2es
    • Ingress is described above
    • cadvisor runs as part of the kubelet but still has its own repo/release
  • Are they all shipped by default, with every cluster?
    • Shipping everything increases the resources required for the smallest cluster
    • If we don't, the UX is worse and docs are harder to write (e.g., "start an Ingress controller, then create this")
  • How are they tested?
    • Ingress would benefit from a builder available to the community
    • Running e2es from contrib at HEAD would also be a big simplification
    • But if each of these gets a separate e2e builder, how do they block a release?
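
To ground the node-resource point in the first bullet: the scheduler reserves an addon's requests on the node whether or not they're used, so raising them shrinks the headroom of the smallest viable cluster. A minimal sketch of the pod-spec fields involved (all names and values are hypothetical):

```yaml
# Hypothetical addon container spec. `requests` are reserved on the node even
# when the addon is idle, which is why bumps here are felt cluster-wide;
# `limits` cap actual usage.
containers:
- name: example-addon               # hypothetical name
  image: gcr.io/example/addon:1.0   # hypothetical image
  resources:
    requests:
      cpu: 100m
      memory: 50Mi
    limits:
      cpu: 100m
      memory: 100Mi
```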

@spiffxp commented Apr 14, 2016

/cc @kubernetes/sig-testing; we've talked about moving test infra out before.

@bgrant0607

I would like to nuke contrib. It's no longer needed by the community.

@davidopp commented May 6, 2016

> I would like to nuke contrib. It's no longer needed by the community.

Can you clarify? What would we do with the stuff that's currently in kubernetes/contrib?

@bgrant0607

@davidopp See the PR description.

@davidopp commented May 6, 2016

Ah OK, so what you're saying is "once we move everything from contrib out of contrib, contrib will no longer be needed"? I can't really disagree with that. :)

@eparis commented May 6, 2016

Where does mungegithub go? I don't care, mind you....

@bgrant0607

@eparis mungegithub would go into something like "productivity-tools". Some repo with a clear set of responsible owners, who would review its PRs, triage its issues, build and push updates to the tool(s), etc.

@bgrant0607

Note that we might just delete some things (e.g., podex, which is totally broken at this point).

@spxtr commented May 6, 2016

mungegithub could go into test-infra.

@bprashanth

> contrib isn't monitored adequately for maintained code. Issues aren't triaged. PRs aren't reviewed. We need to fix those problems, but it's not a good idea to mix maintained and unmaintained code.

If there are projects suffering from this in subdirectories of contrib/, I feel like they'll suffer from it in another repo too.

I also think we should be clear about the bar for a new repo under the kubernetes/ org.

I want a place to incubate ideas without putting serious time and effort into making them production-ready. For example, the keepalived-vip project (https://github.com/kubernetes/contrib/tree/master/keepalived-vip) is a useful prototype of an IPVS replacement for kube-proxy. Splitting it out into kubernetes/keepalived-vip doesn't help; it'll still be only a prototype. Keeping it under bprashanth/keepalived-vip doesn't either, because I want people to use it, contribute to it, etc.

@bgrant0607

@bprashanth Github is optimized for small, single-purpose repos. Separate repos have several benefits.

  • Clear ownership, even without additional tooling
  • Independent management of admin and write access
  • Easy to set up independent CI
  • Small and focused enough to make notification subscription feasible
  • More easily independently forked/cloned
  • Can't accrue spurious dependencies

The bar should be something potentially reasonably useful that someone on the project is willing to maintain, at least for as long as it exists. If we're not going to maintain it (at least respond to PRs and issues), it belongs elsewhere.

@ingvagabund

Ansible will go under the kubernetes/kube-deploy repository (sooner or later).

@bgrant0607

As mentioned by @thockin in #1389 (comment) and in comments above, we should move mungegithub to another repo. I'm currently thinking github-tools. We also need to move the doc munger, as mentioned in kubernetes/kubernetes#29314. Other GitHub-related tools could go into that repo as well.

@thockin commented Jul 26, 2016

+1, having a distinct repo would further the goal of being repo-agnostic and generic.


@bgrant0607 commented Jul 26, 2016

A stab at suggesting where each thing should go. I'll gradually fill these in. Suggestions welcome.

  • 404-server: For testing Ingress. Move to ingress repo or test-infra.
  • addon-resizer: Move to kube-deploy.
  • ansible: Move to kube-deploy or merge with kargo and delete.
  • cluster-autoscaler: Move to cluster-autoscaler repo.
  • cni-plugins: Move to cni-plugins repo.
  • compare: Move to test-infra repo.
  • continuousdelivery: Move to kubernetes/examples (which should be moved to a new repo eventually).
  • diurnal: Not maintained. Delete.
  • dnsmasq: Move to kube-dns repo
  • docker-micro-benchmark:
  • election:
  • exec-healthz:
  • for-demos:
  • git-sync:
  • hack:
  • images:
  • ingress: Move to ingress repo.
  • init:
  • keepalived-vip
  • kubeform:
  • kubelet-to-gcm:
  • logging
  • micro-demos
  • mungegithub: Move to pr-bot repo
  • netperf-tester
  • perfdash:
  • pets:
  • podex: Obsolete. Delete.
  • prometheus: Move to kubernetes/examples.
  • recipes
  • release-notes: Move to release repo.
  • rescheduler: Move to rescheduler repo.
  • scale-demo
  • service-loadbalancer
  • test-utils: Move to test-infra repo.

@bprashanth

Filed #1441 for ingress

@rmmh commented Jul 28, 2016

Mungegithub should be in its own repo.

Travis CI should work well for almost all of these repos, since they should have small, fast test suites. This also means the submit-queue flow shouldn't be necessary; just merge when the CI status is green.
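
For a sense of scale, per-repo CI for a small Go project could be as little as a hypothetical .travis.yml like this (Go version and steps are illustrative):

```yaml
# Hypothetical minimal .travis.yml for a small single-language Go repo:
# fast unit tests on every PR, merge when the status is green.
language: go
go:
  - 1.6
script:
  - go vet ./...
  - go test ./...
```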

@bgrant0607

@rmmh If tests are fast, then the submit queue will be fast.

@rmmh commented Jul 28, 2016

OK, then we can make the submit-queue trigger a Travis build to test the merge commit. Travis is a much better experience for everyone involved.

@thockin commented Jul 28, 2016

dnsmasq should probably move into a kube-dns repo, along with the kube-dns code.


@spxtr commented Jul 28, 2016

I agree that Travis should be sufficient for most repos. However, Travis can be a bit of a pain when you have multiple languages, or when you want to do long or continuous testing. Travis also limits an org to 5 concurrent builds, which isn't great. I'm working on something better right now, but it won't be ready right away.

I would really rather not run the submit queue/mungers against every repo. Just click the big green merge button once it's passed CI.

@bgrant0607

Created pr-bot repo for mungegithub.

@kokhang commented Aug 10, 2016

I have this PR, #1343, which adds a new project to contrib. In light of this discussion, where should it go? I would like to make this project available and open to the community.

@bprashanth

@kokhang I'd rather hold off checking in an uber serviceloadbalancer implementation (#1343). Suggest keeping it in a local repo for now.

We will, at some point in the not-too-distant future, need to figure out what we do with serviceloadbalancer (https://github.com/kubernetes/contrib/tree/master/service-loadbalancer), because we're about to decommission contrib/ entirely. I will probably take some sort of community poll to figure out whether it's worth maintaining, depending on who's using it for what. If #1343 is feature-compatible, and we decide that we do need a serviceloadbalancer implementation, we should put the multi-backend version through the incubator (https://docs.google.com/document/d/1ugAd9Zj-jW3YHdrNVdktmvDMEWtChPqyGHfkwWdQ3zo/edit#).

> where will new service loadbalancers go?

@Q-Lee If we go the incubator route, they will become backends for #1343.

IMO, a NetScaler implementation is a great addition for on-premise k8s deployments.

#1440 is an Ingress controller for NetScaler. Probably the best place to keep it is an official NetScaler repo, like nginx does with https://github.com/nginxinc/kubernetes-ingress (we also have an nginx ingress, but they have different goals: ours tries to maintain cross-platform purity, while theirs tries to surface NGINX Plus features).

@thockin commented Aug 12, 2016

This is a two-edged sword, of course. If users who want to assemble a k8s cluster have to pull pieces from all over the internet to do it, the perception will be that this is a piecemeal, cobbled-together system. We have to find a balance.


@kokhang commented Aug 15, 2016

@bprashanth I am all for trying to get the multi-backend service LB (#1343) into the kubernetes incubator. Having it in my own repo would make it hard for the community to know about it and contribute to it.

Did this incubator process come about as part of the elimination of contrib/?

@vipulsabhaya

@kokhang @bprashanth Just to be clear, #1343 is not a drop-in replacement (yet) for service-loadbalancer. We would need to add L7 ingress support to it.

@thockin commented Aug 16, 2016

I don't MIND adding new repos under kubernetes, if we think that they will lead towards generally useful, generic components. I have not had a chance to digest this repo in particular, but maybe you should START with it in your own space, get it to a place where it's really useful to a few folks, and then talk about incubating it?

There's a certain gravitas that projects under kubernetes/* should have (I said SHOULD; not all do).


@bgrant0607

git-sync has its own repo:
https://github.com/kubernetes/git-sync

@thockin commented Aug 23, 2016

git-sync is an example of a really self-contained thing that doesn't even really have to live in kubernetes, except that we don't have a better place to stick it (and we want to rely on it, I guess). Many repos will not be so self-contained, but that's what we should strive for.


@porridge

@thockin when creating new repos, do you think there should be a naming convention? For example, in the case of #1441, should its new home be github.com/kubernetes/controller-ingress or just plain github.com/kubernetes/ingress?

Something worth thinking about before there are 10 pages of repos under kubernetes...

@thockin commented Nov 2, 2016

We could also use multiple orgs, since GitHub has no other nesting. Something like kubernetes-sidecars/git-sync and kubernetes-addons/ingress.

I don't know if we need to impose naming conventions yet, or, if we do, it is not clear what they should be...


@fejta-bot

Issues go stale after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Dec 18, 2017
@spiffxp commented Jan 2, 2018

/lifecycle frozen
The only way this repo ever goes away is if we find a home for, or explicitly deprecate, the code within.

@k8s-ci-robot added the lifecycle/frozen label Jan 2, 2018
@bgrant0607 removed the lifecycle/stale label Apr 8, 2019