
git auth with multiple keys for the same provider fail #2407

Closed
r0bj opened this issue Apr 15, 2020 · 20 comments
Labels
kind/feature Categorizes issue or PR as related to a new feature. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@r0bj commented Apr 15, 2020

Expected Behavior

Multiple SSH keys for the same host/provider (e.g. github.com) work.

Actual Behavior

Only the first SSH key for github.com works; the rest fail.

Steps to Reproduce the Problem

Two secrets with two different SSH keys, for two different GitHub repos:

---
apiVersion: v1
kind: Secret
metadata:
  name: ssh-key-repo1
  annotations:
    tekton.dev/git-0: github.com
type: kubernetes.io/ssh-auth
data:
  ssh-privatekey: <key>

---
apiVersion: v1
kind: Secret
metadata:
  name: ssh-key-repo2
  annotations:
    tekton.dev/git-0: github.com
type: kubernetes.io/ssh-auth
data:
  ssh-privatekey: <key>

ServiceAccount:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: fetch-repo
secrets:
- name: ssh-key-repo1
- name: ssh-key-repo2

It produces an .ssh/config like this:

Host github.com
    HostName github.com
    Port 22
    IdentityFile /tekton/home/.ssh/id_ssh-key-repo1
    IdentityFile /tekton/home/.ssh/id_ssh-key-repo2

For ssh itself this config is fine, but git effectively respects only the first IdentityFile clause: ssh offers the keys in order and GitHub authenticates with the first valid one, ignoring the rest. So in this case you can only authenticate to repo1; authentication to repo2 will fail, with GitHub complaining that the repository doesn't match.
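For a single clone you can confirm this by forcing one specific key explicitly. A minimal sketch (not part of the original report; GIT_SSH_COMMAND requires git 2.3+, and IdentitiesOnly=yes stops ssh from offering any other configured or agent-loaded keys):

# Force the repo2 deploy key for this one clone only.
GIT_SSH_COMMAND='ssh -i /tekton/home/.ssh/id_ssh-key-repo2 -o IdentitiesOnly=yes' \
  git clone git@github.com:example/repo2.git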

So in order to authenticate properly with git and multiple SSH keys, the .ssh/config file should look similar to this:

Host github.com-ssh-key-repo1
    HostName github.com
    Port 22
    IdentityFile /tekton/home/.ssh/id_ssh-key-repo1

Host github.com-ssh-key-repo2
    HostName github.com
    Port 22
    IdentityFile /tekton/home/.ssh/id_ssh-key-repo2

Unfortunately, the git client then also has to use those fake hosts:

git clone git@github.com-ssh-key-repo1:example/repo1.git

and

git clone git@github.com-ssh-key-repo2:example/repo2.git

For this reason the above workaround doesn't seem quite reasonable, since the git repo URL can be fetched dynamically from a git push webhook, in which case it arrives in the form git@github.com:example/repo1.git or git@github.com:example/repo2.git.
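One git-side sketch that can bridge this gap (illustrative only, not from the original discussion; insteadOf rules are prefix matches, so one entry per repo has to be maintained): git's url.<base>.insteadOf setting rewrites the incoming URL onto the per-key alias, so the clone command can keep the webhook-provided URL:

# Map each webhook-style URL onto its per-key host alias.
git config --global url."git@github.com-ssh-key-repo1:example/repo1.git".insteadOf "git@github.com:example/repo1.git"
git config --global url."git@github.com-ssh-key-repo2:example/repo2.git".insteadOf "git@github.com:example/repo2.git"

# After this, the original URL transparently uses the repo2 alias and key:
git clone git@github.com:example/repo2.git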

Additional Info

This is particularly painful with GitHub because it doesn't allow the same deploy key to be used for different repos, so you need a different key per repo, and that is how you hit this issue.

  • Kubernetes version:

    Output of kubectl version:

Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.4", GitCommit:"8d8aa39598534325ad77120c120a22b3a990b5ea", GitTreeState:"clean", BuildDate:"2020-03-12T23:41:24Z", GoVersion:"go1.14", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.4", GitCommit:"8d8aa39598534325ad77120c120a22b3a990b5ea", GitTreeState:"clean", BuildDate:"2020-03-12T20:55:23Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
  • Tekton Pipeline version:
Client version: 0.8.0
Pipeline version: v0.11.1
@r0bj (author) commented Apr 15, 2020

More thoughts about it: if it's not possible to create a single .ssh/config file that allows git to authenticate to multiple private GitHub repos, then it completely changes the EventListener use case. Instead of using one EventListener for multiple private repos (a single TriggerTemplate with a ServiceAccount), we would need many EventListeners, one for each private GitHub repo (each requiring a separate ServiceAccount and TriggerTemplate).
Am I wrong on this?

@r0bj (author) commented Apr 15, 2020

Or maybe we can at least make the git-init image (gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/git-init) used by the catalog's git-clone task (https://github.com/tektoncd/catalog/blob/v1beta1/git/git-clone.yaml) work properly and respect multiple IdentityFile clauses in .ssh/config?

@vdemeester (Member) commented

It is indeed one of the current limitations of creds-init. There are multiple potential solutions for this:

  • have one .ssh/config as proposed above
  • for each "duplicate provider", generate its own .ssh/config (and what comes with it: the ssh key, …); this would require the clone task to be able to refer to the correct one

> More thoughts about it: if it's not possible to create a single .ssh/config file that allows git to authenticate to multiple private GitHub repos, then it completely changes the EventListener use case. Instead of using one EventListener for multiple private repos (a single TriggerTemplate with a ServiceAccount), we would need many EventListeners, one for each private GitHub repo (each requiring a separate ServiceAccount and TriggerTemplate).
> Am I wrong on this?

PipelineRuns (or TaskRuns) are the ones that need the ServiceAccount, via TriggerTemplates; it's not directly at the EventListener level that you specify this. You could have a parametrized serviceAccount in your template so that, depending on the repository the event originates from, you use a different serviceAccount. I also think the EventListener, as of today, covers both use cases; it's not really opinionated about which one you should use.
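For illustration, a parametrized serviceAccount in a TriggerTemplate could look roughly like this (a sketch only: clone-pipeline and the serviceAccount param wiring from a TriggerBinding are hypothetical, and older Triggers releases use $(params.serviceAccount) instead of $(tt.params.serviceAccount)):

apiVersion: triggers.tekton.dev/v1alpha1
kind: TriggerTemplate
metadata:
  name: clone-template
spec:
  params:
    - name: serviceAccount  # set by a TriggerBinding based on the repo in the event
  resourcetemplates:
    - apiVersion: tekton.dev/v1beta1
      kind: PipelineRun
      metadata:
        generateName: clone-run-
      spec:
        serviceAccountName: $(tt.params.serviceAccount)
        pipelineRef:
          name: clone-pipeline  # hypothetical Pipeline doing the actual clone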

The git-init binary (and thus image) doesn't do anything specific related to authentication. It just reads .git-credentials, .ssh/config and .gitconfig, since it uses git and ssh under the hood.

/kind feature

@tekton-robot tekton-robot added the kind/feature Categorizes issue or PR as related to a new feature. label Apr 16, 2020
@bobcatfish (Collaborator) commented

Note that @sbwsg is proposing some improvements to creds-init in #2343, which I think takes a pretty firm divergence from the annotation-based approach we've had so far and makes usage more explicit.

@tekton-robot (Collaborator) commented

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

/close

Send feedback to tektoncd/plumbing.

@tekton-robot (Collaborator) commented

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.

/lifecycle stale

Send feedback to tektoncd/plumbing.

@tekton-robot (Collaborator) commented

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.

/lifecycle rotten

Send feedback to tektoncd/plumbing.

@tekton-robot (Collaborator) commented

@tekton-robot: Closing this issue.

In response to this:

> Rotten issues close after 30d of inactivity.
> Reopen the issue with /reopen.
> Mark the issue as fresh with /remove-lifecycle rotten.
>
> /close
>
> Send feedback to tektoncd/plumbing.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@tekton-robot tekton-robot added lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. labels Aug 14, 2020
@ghost commented Aug 14, 2020

/reopen
/remove-lifecycle stale
/remove-lifecycle rotten

@tekton-robot tekton-robot reopened this Aug 14, 2020
@tekton-robot (Collaborator) commented

@sbwsg: Reopened this issue.

In response to this:

> /reopen
> /remove-lifecycle stale
> /remove-lifecycle rotten

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@tekton-robot tekton-robot removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. labels Aug 14, 2020
@ghost commented Aug 14, 2020

This remains an issue, and we've had a user in Slack recently query us about it. I think the longer-term Credentials UX improvements I'm trying to push in #2343 will allow users to handle this problem, but I would like to keep it on the radar until optional workspaces or an equivalent feature are well supported.

@tekton-robot (Collaborator) commented

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.

/lifecycle stale

Send feedback to tektoncd/plumbing.

@tekton-robot tekton-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 12, 2020
@vdemeester (Member) commented

/remove-lifecycle stale

@tekton-robot tekton-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 13, 2020
@dmitry-mightydevops commented

Is my understanding correct that, if we have 2 repos defined via PipelineResources as follows:

apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: backend-repo
spec:
  type: git
  params:
    - name: url
      value: git@github.com:team/pga-backend.git
    - name: revision
      value: main

---

apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: frontend-repo
spec:
  type: git
  params:
    - name: url
      value: git@github.com:team/pga-frontend.git
    - name: revision
      value: main

then both of these have to use one and the same GitHub deploy key in each repo's settings?

@ghost commented Feb 10, 2021

> then both of these have to use one and the same GitHub deploy key in each repo's settings?

Are you using these PipelineResources with a single Task? If so, then yes, you will unfortunately need to use the same deploy key.

If you use them as part of separate Tasks in a Pipeline, you can assign each Task its own ServiceAccount + Secret, so the deploy keys can be different.

The more robust approach is to build a Pipeline that uses the git-clone Task from the catalog. Each instance can be given a different ServiceAccount with its own deploy key. There's more coordination involved (with workspaces, etc.) but the result is much more flexibility in how you can orchestrate.
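To make that concrete, here is a sketch of the per-Task ServiceAccount wiring (the Pipeline, task, and ServiceAccount names are hypothetical; taskRunSpecs is the v1beta1 mechanism for per-task overrides):

apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: clone-both-repos
spec:
  pipelineRef:
    name: clone-both-pipeline  # hypothetical Pipeline with two git-clone tasks
  taskRunSpecs:
    - pipelineTaskName: clone-backend   # gets the backend deploy key
      taskServiceAccountName: backend-sa
    - pipelineTaskName: clone-frontend  # gets the frontend deploy key
      taskServiceAccountName: frontend-sa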

@dmitry-mightydevops commented

Thank you Scott! Makes total sense!

@tekton-robot (Collaborator) commented

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale with a justification.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close with a justification.
If this issue should be exempted, mark the issue as frozen with /lifecycle frozen with a justification.

/lifecycle stale

Send feedback to tektoncd/plumbing.

@tekton-robot tekton-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 11, 2021
@tekton-robot (Collaborator) commented

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten with a justification.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close with a justification.
If this issue should be exempted, mark the issue as frozen with /lifecycle frozen with a justification.

/lifecycle rotten

Send feedback to tektoncd/plumbing.

@tekton-robot tekton-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jun 10, 2021
@tekton-robot (Collaborator) commented

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen with a justification.
Mark the issue as fresh with /remove-lifecycle rotten with a justification.
If this issue should be exempted, mark the issue as frozen with /lifecycle frozen with a justification.

/close

Send feedback to tektoncd/plumbing.

@tekton-robot (Collaborator) commented

@tekton-robot: Closing this issue.

In response to this:

> Rotten issues close after 30d of inactivity.
> Reopen the issue with /reopen with a justification.
> Mark the issue as fresh with /remove-lifecycle rotten with a justification.
> If this issue should be exempted, mark the issue as frozen with /lifecycle frozen with a justification.
>
> /close
>
> Send feedback to tektoncd/plumbing.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
