Local source code feature (using bundle images) #35
Conversation
(force-pushed from 0e8ab23 to c59927e, then to c2f12c4)
The PR is not considered complete yet, however it is working end to end with the respective https://github.com/shipwright-io/build version in the
(force-pushed from a4a0229 to d6d8214, then to d4069d8, then to 65621ae)
Hey @HeavyWombat
As a general convention across many of the upstream projects related to shipwright, we typically put the vendor / go.mod changes in a single commit, with a title like "bump(a list of what major repos were changed in go.mod)"
something like
bump(*): k8s.io/client-go, shipwright-io/build
and then it is easy for reviewers to separate in repo changes vs. vendor changes
So no mixture of files for in repo changes vs. vendor bump changes in a given commit.
Could you restructure your commits along these lines please?
I had no major comments yet on your in repo changes, but with some of the 800 odd files that are updated, it is not inconceivable I missed something.
thanks
(force-pushed from 65621ae to a823ef3)
Thanks for the feedback! I rearranged the commits to match that style.
(force-pushed from 7814891 to dacd79f)
A combination of nit type comments, some editorial / open discussion type things, and some side notes on future work that would touch these files in the possible near future @HeavyWombat
Also, I saw a vendor escape in one of your non-vendor commits while reviewing, but then also got a github notification that the commits had changed mid-review. Not sure what the change was, but if one of my comments is now a no-op, no worries.
/assign @gabemontero
/assign @otaviof
when the time comes and my comments are done, ideally @otaviof takes a pass at this since he is looking at the pluggable/alternative form of source upload.
thanks!!
(force-pushed from 4b49856 to 7108cac)
Hey @HeavyWombat
So my requests to move the polling and the generate name stuff have definitely been satisfied.
I'm generally OK with the reorganization around the multiple create path you cited earlier today.
That said, I'm going to defer to @otaviof for sign off on that. He previously had some strong opinions on how it should be structured related to the alternative, non bundle form of local source code support he is pursuing. Let's make sure he has taken a look and we reach consensus with him. I'll try to remind him during a team meeting he and I have tomorrow, if he does not chime in Wednesday AM Europe time, while I'm still asleep.
Lastly, I have some authentication questions, and potential concerns, but admittedly that may just be my lack of experience with the client being used. Let's see what your answers are and we'll go from there.
@@ -0,0 +1,10 @@
package cmd
there is, or at least was, a golang convention in some repos in the "greater k8s ecosystem" that employs a package with the name `dependencymagnet` for go mod items like this.
A current example: https://github.com/openshift/api/blob/master/dependencymagnet/doc.go
I seem to recall this at least being employed by upstream k8s in the past, but am not finding refs now (granted I only looked a few minutes)
@adambkaplan do you recall the history of `dependencymagnet`?
Feels like perhaps something we should agree upon and document in https://github.com/shipwright-io/build/blob/main/CONTRIBUTING.md or some such
But I do like the explicit nature of that vs. a file like this in a "typical" code path, when you might think there is "real" code other than these imports.
Unless we get consensus here in the PR discussion beforehand, let's discuss in the next community meeting.
And I'm not going to draw a hard line wrt blocking this PR on this. But ideally, if you are amenable @HeavyWombat, I prefer `dependencymagnet`.
In my other projects, I usually tried to place those "underscore" imports into the same package and source file where I implemented the Kubernetes cluster authentication and clientset retrieval. However, I could not really figure out a good place here, as the clientset retrieval calls might happen in different places, depending on what you run. So I figured an explicitly named file might do the trick here.
From what I can see with `dependencymagnet`, we have a slightly different use case here, as we always require these imports so that their respective `init` functions are called. I guess this is what we really need here.
Anyway, happy to refactor it if we have a better idea on how we want to name things and whether there is a better style for solving this kind of requirement.
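To make the "underscore import" idea concrete, a side-effect import file of the kind described might look like the sketch below. The package name and the exact set of auth plugin imports are illustrative, not taken from this PR:

```go
// Package dependencymagnet exists solely to hold side-effect imports.
// The underscore imports are never referenced directly; importing this
// package runs their init() functions, which register cloud-specific
// auth providers with k8s.io/client-go.
package dependencymagnet

import (
	// illustrative examples; the real list depends on which clusters
	// (GKE, OIDC, IBM Cloud, ...) need to be supported
	_ "k8s.io/client-go/plugin/pkg/client/auth/gcp"
	_ "k8s.io/client-go/plugin/pkg/client/auth/oidc"
)
```

Any package that builds a clientset would then import this package (plainly, not with an underscore) to pull the auth providers in.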
As I recall, `dependencymagnet` is an "always" thing ... I could be missing something, but I don't think I am :-). What made you think it is insufficient, @HeavyWombat? Did you try it and something was amiss?
Let's wait a little to see if others like @adambkaplan @otaviof chime in once they return / catch up from being out. Otherwise, let's definitely discuss this during the Monday community meeting. Even if it is a "we do what you have now, but agree on a different approach to be handled as a follow-up" during the meeting, I won't hold this past Monday.
I think I was concerned with the build tags: `// +build tools`. It looked like the package was "only" for dependencies that were kind of tools and only required at build time, not runtime.
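For contrast, the `// +build tools` pattern referred to here typically looks like the following sketch (the imported tool is a hypothetical example). Because the `tools` tag is never set during a normal build, the file only pins the dependency in go.mod, and its init functions never run:

```go
//go:build tools
// +build tools

// Package tools pins build-time tooling in go.mod without compiling
// it into any produced binary.
package tools

import (
	_ "github.com/onsi/ginkgo/ginkgo" // hypothetical: test runner CLI kept vendored
)
```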
yes, the build tool scenario is another reason to employ it
went back and revisited openshift/api: while the package alone handles the vendoring, there are no imports of it. In your case, an import of the package is needed for the init functions, and you get that import because the package you placed this in is imported for other functionality related here.
that said, I still find the naming here a bit lacking .... if not misleading, definitely not fully informative.
I think a tweak on the dependencymagnet approach could be better: changing the name of this file to `dependencymagnet.go` would be more descriptive of its intent. Also, it is generic, such that if we had future dependencies of this nature, other than "auth", we could add them here without having to worry about changing from `auth.go` to something else.
If you are amenable to my idea as is, great, go for it. But I certainly understand if you'd like to get a broader consensus amongst the community.
If so, let's bring it up on Monday's call.
return "", err
}

req, err := http.NewRequest("POST", "https://hub.docker.com/v2/users/login/", bytes.NewReader(loginData))
feels like we would have to log into quay.io, registry.redhat.io, gcr.io as well, no?
did you test against those registries and discover you did not need to log in?? I'm very surprised if that is the case.
feels like this method should be made such that the registry URL is a parameter vs. being hard coded, and then called from methods like `Prune` regardless of which registry.
I ran into a lot of issues I was not aware of before. Most notably, the somewhat special nature of DockerHub: even though I tried the DockerHub API examples, I always ended up with `403` when trying to delete a specific tag. It seems I am not alone with this behavior. So, long story short, I decided upon the provider switch:
- DockerHub: where we have to do some explicit logins and call their APIs.
- All the others: rely on the `go-containerregistry` library, which works with all registries that adhere to the spec (`quay.io`, or `icr.io`).
return err
}

auth, err := authn.DefaultKeychain.Resolve(ref.Context())
this alone, without ID / password, is sufficient authentication for well-known registries other than Docker Hub? ... how is it getting the creds?
I became quite a fan of the `go-containerregistry` library. It sure was written in a nice way. So, basically, this one line uses the local registry logins that were done (e.g. `docker login ...`, or `crane auth login`) and matches the available login tokens (not sure this is the correct wording) against the server mentioned in the `ref` (i.e. `ref` could be something like `quay.io/namespace/image:tag`). You can then use the `auth` variable in remote calls that require target authentication. This also works for DockerHub, but only for the registry spec calls that they support; they do not support all registry API calls. I also learned that other container registries do not necessarily support all the spec API calls either.
ah interesting .... thanks for the explanation and education @HeavyWombat :-)
Now, that said, I took a peek at https://github.com/google/go-containerregistry/blob/main/pkg/authn/keychain.go#L50-L59 and the godoc is a bit light there. It certainly makes some assumptions about the reader being aware of all that is entailed in implementing a keychain by interpreting docker config files.
Would you be amenable @HeavyWombat to augmenting/adding to your already wonderful comments, capturing your research and findings here for when others revisit this code in the months to come?
Definitely. Good idea. Will add a comment to better explain the magic behind this one line.
@HeavyWombat nice stuff!
/lgtm
Without going into too much detail, it turns out @HeavyWombat that @otaviof had to take an extra day or two away from the office. Hopefully he returns later this week and can sign off on your organizational changes for the watch-related stuff he crafted. But we'll play it by ear, and see if his time away becomes extensive; if need be, we'll move forward. Thanks for your patience.
Thanks for the update. No worries. We have no rush here. The more people that have time to review and play with the feature, the better.
I have a couple questions, otherwise it's a good set of changes.
// CreateBuildRun creates a BuildRun based on the given name and spec using the
// provided end-user options/settings, for example printing the pod container
// logs of the build.
func CreateBuildRun(
If I understand it correctly, we've moved the code from `pkg/cmd/build/run.go` to this new place, but why do we have it all convoluted in a single method? Back there we had `onEvent` and better separation of concerns.
My problem basically was that the `onEvent` function as a pointer receiver function might have done the trick; however, the fact that it was "connected" to `RunCommand` as a pointer receiver function made it unusable anywhere else. So it ended up being a function that was only used once and could only be used in one exact place.
Moving the code in-line as an ad-hoc function feels more in line with how similar packages deal with these kinds of functions, e.g. the Kubernetes `wait` package, or `sort.Slice()`, or similar. Plus, being in-line helps because you do not have to "expose" other fields in order to make them accessible, like the BuildRun name; being inline gives it access to the correct variables through the scope. This further simplified the code a bit. Also, with that, I was able to improve the output so that it does not print the "Pending" phase multiple times, since the in-line function has access to the helper variable that knows the previous state.
To be honest, I played around with a lot of different options and styles and this only represents the state I was happy with. It would have been probably easier if this was done as part of a pairing session, because just with the PR review one usually lacks a lot of the thought process that was involved.
I think we need more attention on this part of the code, more refactoring and more testing as well. So, I'm okay with merging this as it is, and then continuing to iterate in the next PRs 👍🏼
cool / thanks @otaviof
can you open an issue with any specifics / details you can provide, so we don't lose track?
Reference shipwright-io/build#717

Add `bundle` package that contains convenience code to push a local source code directory as a bundle image to a container registry. Add new flags for container image and local directory settings.
- Go version to 1.16
- github.com/google/go-containerregistry to `v0.6.0`
- github.com/schollz/progressbar/v3 to `v3.8.2`
- k8s.io/api to `v0.20.6`
- k8s.io/apimachinery to `v0.20.6`
- k8s.io/cli-runtime to `v0.20.2`
- k8s.io/client-go to `v0.20.6`
- github.com/shipwright-io/build to `v0.5.2-0.20210830191632-04ff81c93dd5`
Make sure to support IBM Cloud and Google Cloud Kubernetes services and their respective auth requirements. Add underscore imports to pull in required init functions.
(force-pushed from 7108cac to 55dec55)
/lgtm
/hold
@HeavyWombat @otaviof OK I finally had some cycles to check out this branch and try out the commands we don't have unit tests for yet that we mentioned during the community meeting. Most notably, the
Using the default/nightly strategy
Just tried it and I got a panic:
I did rebuild using the main branch and it works. The last little bit of the log:
So my recollection from our discussion yesterday, our options are:
related, but also independent of 1/2/3, @otaviof opens a follow-up issue with details for #35 (comment)
I'll start with you @HeavyWombat - do you have a preference between 1), 2), or 3)?
In case you missed it @HeavyWombat, #39 has merged and we now have a unit test for
It calls the method that is in the stack for the above panic ^^ and so should serve as validation for any code changes you want to continue to propose here
@otaviof is also close to merging e2e tests with #40
Once that happens, when I next have cycles here, I plan to add a
@HeavyWombat: PR needs rebase. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Will be redone.
Changes

Reference shipwright-io/build#717

- Add `bundle` package that contains convenience code to push a local source code directory as a bundle image to a container registry.
- Add new flags for container image and local directory settings.
- Add `github.com/google/go-containerregistry`.
- Add `github.com/schollz/progressbar/v3`.

Submitter Checklist

See the contributor guide for details on coding conventions, github and prow interactions, and the code review process.

Release Notes