first draft of pack direct podman call rfc #288
base: main
Conversation
Maintainers, As you review this RFC please queue up issues to be created using the following commands:
I'd like a little more motivation on why we want to avoid socket based communication.
text/0000-pack-direct-podman-call.md
Outdated
# Summary
[summary]: #summary

A flag `--container-engine` will be added to the `pack` CLI; the allowed flag values will be `docker` (the current behavior) and `podman`, to call podman directly (without a socket call).
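To make the proposed flag concrete, here is a minimal sketch of how the value could be validated. This uses the stdlib `flag` package rather than pack's actual CLI wiring, and `parseEngine` is a hypothetical helper, not pack code:

```go
package main

import (
	"flag"
	"fmt"
	"os"
)

// validEngines lists the values the hypothetical --container-engine flag accepts.
var validEngines = map[string]bool{"docker": true, "podman": true}

// parseEngine parses the flag and rejects unsupported engines;
// "docker" stays the default so current behavior is unchanged.
func parseEngine(args []string) (string, error) {
	fs := flag.NewFlagSet("pack", flag.ContinueOnError)
	engine := fs.String("container-engine", "docker", "container engine to use: docker (default) or podman")
	if err := fs.Parse(args); err != nil {
		return "", err
	}
	if !validEngines[*engine] {
		return "", fmt.Errorf("unsupported container engine %q", *engine)
	}
	return *engine, nil
}

func main() {
	engine, err := parseEngine(os.Args[1:])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("selected engine:", engine)
}
```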
Can we add a little motivation on why we want to avoid the socket call?
This is a way to make pack standalone: developers would be able to use pack without docker or podman, although they will still need a tool like skopeo to move OCI images.
Can we add a little motivation on why we want to avoid the socket call?
`podman` doesn't activate the socket by default; it needs to be started manually via systemd. https://github.com/containers/podman/blob/main/docs/tutorials/socket_activation.md
The concern with the socket is probably security. I don't know whether another process can interfere with an existing socket communication, but if that process can call podman itself, I guess there is no difference in security.
docker acts as root; podman doesn't need that, so you can use it on a computer without root access (like a company one). In that case, why bother launching a socket in user space? Moreover, as I have already said, using podman as a library we can make the pack utility standalone, which could be useful in cases where you don't run containers locally, because you would not need docker or podman at all.
text/0000-pack-direct-podman-call.md
Outdated
# How it Works
[how-it-works]: #how-it-works

The flag will change the initialization of the CommonAPIClient (interface defined in "github.com/docker/docker/client"), currently used to call docker, replacing it with an adapter that calls podman as a library.
Could we sketch out how the adapter would work?
the goal is to transmit/translate calls to an initialized Runtime https://github.com/containers/podman/blob/v4.5.1/libpod/runtime.go#L63
I can limit the adapter to the sub-interface DockerClient https://github.com/buildpacks/pack/blob/v0.29.0/pkg/client/docker.go#L14, because the other methods of the CommonAPIClient are not used within pack.
What do you think of an option in the configuration file in addition to the flag?
I like the direction that this RFC is taking. I've suggested expanding some explanations. In general, I'd like an RFC to contain enough information that we could hand-off the implementation to any developer.
# How it Works
[how-it-works]: #how-it-works
Do we need to expand on this section? I'd like a
- high-level overview of how-it-works
- details on the adapter interface
- discussion on how layers are cached
- discussion on where the output image ends up
I expect that we're proposing here to use an OCI registry for the layer cache and for output images. But I think we need to make all this explicit to RFC readers.
In order to store OCI images locally with the new mode, changes in the lifecycle tool will be needed too
I have added a short code sample to show how an adapter could be implemented, to answer point 2
I have dug deeper into the caching parts of pack and lifecycle; it doesn't seem necessary to modify them (volume caching is stored on the local host by mounting it into the container using the DockerClient interface; image caching uses registries (lifecycle uses github.com/google/go-containerregistry via imgutil))
In order to store OCI images locally with the new mode, changes in the lifecycle tool will be needed too
Could you talk about what these changes would look like?
In order to store OCI images locally with the new mode, changes in the lifecycle tool will be needed too
Could you talk about what these changes would look like?
As stated in the following comment, after a deeper check, that part doesn't need changes: it relies on the DockerClient interface or on an independent library not tied to the Docker socket
I'd love to know more about @AidanDelaney's points 1-4. Not being super familiar with podman myself, a little background information would be helpful.
1 & 2 have been detailed in the RFC document.
3. Layers are cached with a "VolumeCache" mechanism, which relies on the DockerClient interface.
4. podman stores local images in user space (/home/dvaumoron/.local/share/containers on my computer), while docker stores them in a common space using root rights (/var/lib/containers/storage on my computer).
it relies on the DockerClient interface or an independent library not tied to the Docker socket
Here's where we initialize the docker client in the lifecycle: https://github.com/buildpacks/lifecycle/blob/367b451eb708dff1aa6e5ba179e1c88c4b9917e2/priv/docker.go#L16
I'm assuming `shouldConnectSock` will be false in a daemonless scenario as pack will not mount the socket. But don't we need to specify other options to enable the podman wrapper (can you elaborate how this would be done - via some new input to the lifecycle or something else)? Or are you expecting to use the lifecycle in "OCI layout mode" and handle the processing in pack?
After some more checking: depending on which lifecycle command is launched, the fallback when the use-daemon flag is false will be either the layout version or the remote version, so the options passed to the lifecycle should be adapted to ensure correctness.
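The described fallback could be summarized as a small decision function. This is a hypothetical helper that mirrors the behavior described in the comment, not the lifecycle's actual code:

```go
package main

import "fmt"

// exportMode mirrors the fallback described above: when the daemon flag is
// false the lifecycle falls back to either OCI layout export or remote
// (registry) export, so pack must pass options consistent with that choice.
func exportMode(useDaemon, layout bool) string {
	switch {
	case useDaemon:
		return "daemon"
	case layout:
		return "layout"
	default:
		return "remote"
	}
}

func main() {
	fmt.Println(exportMode(false, true))  // layout fallback
	fmt.Println(exportMode(false, false)) // remote fallback
}
```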
I added clarifications to the RFC corresponding to my previous comment.
For reference, a previous discussion on replacing the daemon is at #201
Hi @dvaumoron. As I understand it, the proposal is to offer a new implementation of our DockerClient interface through the Adapter. The following diagram shows the components that depend on this docker client implementation; maybe it can help us think about potential edge cases we need to take care of.

```mermaid
classDiagram
  DockerClient <|-- docker_client
  DockerClient <|-- podmanAdapter
  ImageFactory <|-- pack_imageFactory
  ImageFetcher <|-- image_Fetcher
  LifecycleExecutor <|-- build_LifecycleExecutor
  BuildpackDownloader <|-- buildpack_buildpackDownloader
  class ImageFactory {
    <<interface>>
  }
  FetchOptions <.. ImageFetcher
  class ImageFetcher {
    <<interface>>
  }
  class BlobDownloader {
    <<interface>>
  }
  class LifecycleExecutor {
    <<interface>>
  }
  build_LifecycleExecutor o-- DockerClient
  class build_LifecycleExecutor {
  }
  class BuildpackDownloader {
    <<interface>>
  }
  class buildpack_buildpackDownloader {
  }
  class pack_Client {
  }
  class FetchOptions {
    +Bool Daemon
    +String Platform
    +PullPolicy PullPolicy
    +LayoutOption LayoutOption
  }
  pack_Client o-- ImageFactory
  pack_Client o-- ImageFetcher
  pack_Client o-- DockerClient
  pack_Client o-- BlobDownloader
  pack_Client o-- LifecycleExecutor
  pack_Client o-- BuildpackDownloader
  buildpack_buildpackDownloader o-- ImageFetcher
  class DockerClient {
    <<interface>>
    ImageHistory()
    ImageInspectWithRaw()
    ImageTag()
    ImageLoad()
    ImageSave()
    ImageRemove()
    ImagePull()
    Info()
    VolumeRemove()
    ContainerCreate()
    CopyFromContainer()
    ContainerInspect()
    ContainerRemove()
    CopyToContainer()
    ContainerWait()
    ContainerAttach()
    ContainerStart()
  }
  pack_imageFactory o-- DockerClient
  image_Fetcher o-- DockerClient
  class pack_imageFactory {
  }
  class image_Fetcher {
  }
```
From the diagram, I noticed:
I hope this can be helpful; I just tried to put my thoughts on paper and share them 😄. Also, can you fix the DCO error?
Hi @jjbustamante,

The methods in the DockerClient interface return structs; the goal is to populate them with the data returned by the ContainerEngine implementation. The Image implementation returned by the method NewImage from "github.com/buildpacks/imgutil/local" (called at https://github.com/buildpacks/pack/blob/main/pkg/client/client.go#L271), which uses the DockerClient interface, doesn't need to change (see https://github.com/buildpacks/imgutil/blob/main/local/local.go#L280 or https://github.com/buildpacks/imgutil/blob/main/local/save.go#L76 among other examples). Likewise, the Fetcher struct which implements the ImageFetcher interface uses the DockerClient interface (see https://github.com/buildpacks/pack/blob/main/pkg/image/fetcher.go#L56). The LifecycleExecutor struct uses the DockerClient interface too (see https://github.com/buildpacks/pack/blob/main/internal/build/lifecycle_executor.go#L49); the lifecycle part was unclear, so I checked (reported at #288 (comment)).

All my verifications lead to the same conclusion: by (good) design the DockerClient interface is the single point of contact with the Docker API, and a correct replacement with an adapter that calls the Podman API instead should be enough to make the feature work.
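Since the DockerClient interface is the single point of contact, the swap could be confined to one constructor. A minimal sketch, with all type names hypothetical (pack's real initialization is more involved):

```go
package main

import "fmt"

// ContainerClient is a stand-in for pack's DockerClient interface.
type ContainerClient interface {
	Info() string
}

type dockerSocketClient struct{}

func (dockerSocketClient) Info() string { return "docker via socket" }

type podmanLibraryClient struct{}

func (podmanLibraryClient) Info() string { return "podman via library" }

// newContainerClient is a hypothetical initialization switch: the single
// point where the --container-engine flag value would change behavior.
// Every other component keeps depending on the ContainerClient interface.
func newContainerClient(engine string) (ContainerClient, error) {
	switch engine {
	case "docker":
		return dockerSocketClient{}, nil
	case "podman":
		return podmanLibraryClient{}, nil
	}
	return nil, fmt.Errorf("unknown engine %q", engine)
}

func main() {
	c, err := newContainerClient("podman")
	if err != nil {
		panic(err)
	}
	fmt.Println(c.Info())
}
```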
I didn't get the point of the DCO check failure; it doesn't accept my Signed-off-by, but looking at 714a7f0, it seems possible to use your real name with an email that doesn't match, and the name associated with my GitHub account is correct. I need to dive deeper...
Signed-off-by: dvaumoron <dvaumoron@gmail.com>
Accept Aidan Delaney suggestion Co-authored-by: Aidan Delaney <aidan.delaney@gmail.com> Signed-off-by: dvaumoron <dvaumoron@gmail.com>
accept Aidan Delaney suggestion before adjustement Co-authored-by: Aidan Delaney <aidan.delaney@gmail.com> Signed-off-by: dvaumoron <dvaumoron@gmail.com>
I rebased and reworded my commits in order to have the expected Signed-off-by.
text/0000-pack-direct-podman-call.md
Outdated
# Motivation
[motivation]: #motivation

`pack` currently requires a container manager client API available via a socket. Both `docker` and `podman` are compatible with the [socket API](https://docs.docker.com/engine/api/v1.24/). This RFC proposes that `pack` be extended to support a "daemonless" mode. In "daemonless" mode, `pack` will directly use a container manager client API. This avoids running a `docker` or `podman` daemon and avoids opening a local socket. This will allow pack to be used as a full-userspace, standalone application on the OSes supported by podman (currently Linux, as Windows and macOS need podman machine).
`pack` uses the docker daemon both to spin up containers in which the lifecycle executes, as well as an export target in which to store application images. Does this RFC support both use cases? I assume so, but thought it best to clarify.
It depends on what kind of export we are talking about. If you mean export to a registry, pack uses the github.com/google/go-containerregistry library for that and does not depend on Docker.
If you mean local storage, pack calls the Docker socket via the DockerClient interface; changing this implementation will change the place where images are stored, but I don't think this is a problem.
Users using the new flag to avoid installing Docker will have podman if they need to run containers locally (without having to use the podman compatibility socket), or no tools at all because pack will be usable standalone.
Users with several tools including Docker would be able to use them to "send" images to Docker if they prefer to use the flag (podman or buildah (which use the same image storage) allow doing that).
Users with only Docker have no reason to use the flag; if they use it and want to run the generated images locally without installing more tools, they will need to export them to a registry and retrieve them from it. Maybe adding a warning when the flag is used would avoid this kind of mistake.
A callgraph of the functions that call the socket API while building the sample project would greatly help to estimate the critical path that needs to be patched.
accept Anatoli Babenia suggestion before adjustement Co-authored-by: Anatoli Babenia <anatoli@rainforce.org> Signed-off-by: Denis Vaumoron <dvaumoron@gmail.com>
Signed-off-by: dvaumoron <dvaumoron@gmail.com>
go-callvis doesn't seem to help for that; it does not keep track of objects passed as arguments. The diagram from @jjbustamante is more helpful than those I obtained with go-callvis.
To be more precise, you are limited to paths calling some part of one package (pkg/client) from another (cmd/pack); the resulting graph is not very readable.
This PR follows my message in buildpacks/pack#413