
Authentication on private registry #541

Open
matleh opened this issue Aug 28, 2014 · 36 comments

Comments

@matleh

matleh commented Aug 28, 2014

I am implementing a private docker-index, taking some guidance from http://docs.docker.com/reference/api/hub_registry_spec/. It seems to me that this documentation is not in line with the current implementation of the docker-registry (not sure about docker itself).

The docs talk about a cookie being used for communication between docker and docker-registry. Since docker-registry 0.7 cookies are no longer used (so says the changelog).

It looks like docker just reuses the token from the docker-index for each call it has to make to the registry during a "pull" or "push". But that means docker has to use a token with "access=write" to GET different resources before it PUTs the image data. The registry sends a 401 for any GET request made with a token with "access=write". The "push" still seems to work, but I am not sure about the consequences of these 401 (and some 409) responses.

Maybe I misunderstand something about the interaction between docker, registry and index?

A possible workaround could be to have the registry accept "access=write" tokens for GET requests (or to generalize: "read" is only valid for GET, "write" is valid for GET, PUT and POST and "delete" is valid for GET, PUT, POST and DELETE).
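To make that generalization concrete, here is a minimal sketch (plain Python, for illustration only, not the registry's actual code) of the rule I have in mind:

# Hypothetical rule: each access level implies the HTTP methods of the
# weaker levels plus its own.
ALLOWED_METHODS = {
    "read":   {"GET"},
    "write":  {"GET", "PUT", "POST"},
    "delete": {"GET", "PUT", "POST", "DELETE"},
}

def token_allows(access, method):
    """True if a token carrying `access` may perform `method`."""
    return method.upper() in ALLOWED_METHODS.get(access, set())

assert token_allows("write", "GET")     # a push token could still GET resources
assert not token_allows("read", "PUT")  # a pull token still cannot modify anything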

Still, the question remains when the index can invalidate a token. The docs say that a token is invalidated when the registry uses it to check the access rights. But since the registry has to use the same token several times, it seems we can currently only assign a static TTL to all tokens and invalidate them after that time span.

@samalba, I was told that you are the go-to-authority for everything related to the registry: what do you think?

@dmp42
Contributor

dmp42 commented Aug 28, 2014

@matleh there are unfortunate discrepancies between the official, docker operated index (hence API documentation) and what lives in the open-source registry.

That being said, I would rather encourage you to mimic the standalone behavior of the registry (in that mode, it doesn't need tokens from the index, and you are supposed to implement your own authentication means, say, using nginx auth).

Furthermore, you will likely not be able to use the official docker client against the official index with a custom registry.

I guess it depends on what you want to achieve.

@matleh
Author

matleh commented Aug 29, 2014

@dmp42 thanks for your answer. I am not sure whether I understand everything.

The documentation gives the impression that Docker is striving for a distributed architecture with a more or less clear divide of responsibilities between index, registry and docker client. The index is responsible for authentication and for knowing what exists and where it exists. The registries are responsible for storing the images and for contacting the index to see whether a request is allowed. The docker client first contacts the index and is then redirected to the right registry for the particular repository...

That architecture does make sense to me (even though it has some rough edges) and appears open for everybody to operate either a registry or both an index and a registry. But after I dug deeper, I get the impression that this architecture and this openness is not really where the actual implementation is heading (maybe because the Docker Hub is the main business model of Docker Inc.?). That is understandable from a business perspective but regrettable from an open-source perspective.

But on the other hand, all this code is there - it is possible to give an index_endpoint to the registry, it is possible to do "docker login some.private.registry", the code is there for this token handling and everything ... Are these just relics from the past?

To me it looks like there are only two "officially" supported scenarios:
(1) use the Docker Hub operated by Docker Inc. and have everything (also the registry) under their control
(2) use a docker-registry in standalone mode

(I know there is also quay.io, but that is more or less just a variation of (1))

(1) has the disadvantages
(a) I have to entrust all my intellectual property into the hands of Docker Inc.
(b) authentication and permission handling is limited to what the Docker Hub provides - that is, I can give access to my private repositories only to other Docker Hub users

(2) has the disadvantages
(a) there is no authentication taking place inside the docker-registry, so this is only suitable for public repositories or internal use

I want to be able to use a docker registry to distribute commercial software to customers. That is, I want the registry to be publicly available but have access to the repositories restricted to customers (and be able to create new customers ad hoc). As far as I can see, that kind of setup is currently not supported.

I will have a look into what can be done with nginx auth, which you suggested.

@bacongobbler
Contributor

I will have a look into what can be done with nginx auth, which you suggested.

Just gonna leave this one here: https://github.com/docker/docker-registry/blob/master/contrib/nginx.conf

If you want to use the registry to distribute software, it is completely possible to roll your own authentication system in front of the registry. It's not built into the open source registry as you have noted, and others have had to write their own authentication server in front of it.

However, if you just want end users to have the ability to retrieve images, then you can limit everything but GET requests with auth_basic inside nginx. That way, customers can view and retrieve images via docker pull but cannot push anything unless they have the right username/password. In this setup, the images are read-only unless they have the credentials necessary to push images. Does that work for you?
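
Roughly something like this (an untested sketch; the upstream name and htpasswd path are placeholders):

location / {
    proxy_pass http://docker-registry;

    # anonymous GET/HEAD (docker pull) passes through; anything that modifies
    # the registry (docker push) must present basic auth credentials
    limit_except GET HEAD {
        auth_basic           "Registry push";
        auth_basic_user_file conf.d/docker-registry.htpasswd;
    }
}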

@matleh
Author

matleh commented Aug 29, 2014

I made good progress today writing some authentication system in front of the registry using nginx and ngx_http_auth_request_module. Thanks for your suggestions.

The only problem I see with this is that the authentication server does not know which image-ids belong to which repositories, so it has to allow GETting of all images for everybody who is allowed to GET any repository. For our current setup, that might be tolerable (at least only known customers have access to the images, and without access to the repository one has to guess image-ids), but it would be nice to be able to close this security hole...

Maybe I will have a look at whether I can hook into the registry to gain access to this information.

@matleh matleh changed the title Use of tokens between registry and index unclear Authentication on private registry Sep 2, 2014
@matleh
Author

matleh commented Sep 2, 2014

Well, after I spent a considerable amount of time implementing access control with nginx and everything looked very nice, today came the disenchantment. It looks like access control with Basic Auth is broken with current versions of docker (I am on Arch Linux, which has docker version 1.2).

The first request to the registry is made with Authorization:Basic but after that, docker switches to using Authorization:Token (even if no token is provided in the response from the registry) which breaks access control checking based on Basic Auth information.

Looks like I can throw away my work of the last days and have to fork the docker-registry to implement the access control right within the registry itself.

Any other ideas or did I misunderstand/misuse anything?

@dmp42
Contributor

dmp42 commented Sep 2, 2014

@matleh do you run standalone? I need info about how you launch your registry and about your registry configuration in order to help you.

@matleh
Author

matleh commented Sep 3, 2014

I run the "registry" image (docker run -e STANDALONE=true -e STORAGE_PATH=/data -p 5001:5000 -v /var/registry-data:/data --name local_registry --rm registry) which is on version 0.8.1 and I use the docker client with version 1.2.0.

I have nginx as a reverse proxy before that with the following configuration:

pid /home/mat/checkouts/docker-auth/nginx.pid;
working_directory /home/mat/checkouts/docker-auth;
error_log /home/mat/checkouts/docker-auth/nginx-error.log info;

events {
    worker_connections  1024;
}

http {
    access_log /home/mat/checkouts/docker-auth/nginx-access.log;
    # the following temp-dirs are needed for nginx to start
    proxy_temp_path /home/mat/checkouts/docker-auth/tmp/proxy_temp 1 2;
    client_body_temp_path /home/mat/checkouts/docker-auth/tmp/client_body 1 2;
    fastcgi_temp_path /home/mat/checkouts/docker-auth/tmp/fastcgi 1 2;
    uwsgi_temp_path /home/mat/checkouts/docker-auth/tmp/uwsgi 1 2;
    scgi_temp_path /home/mat/checkouts/docker-auth/tmp/scgi 1 2;

    server {
        listen       5000;
        server_name  hub.example.com;

        location / {
            proxy_pass http://127.0.0.1:5001;
            proxy_set_header Host $host:5000;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header Authorization "";  # strip the credentials before they reach the registry
            proxy_hide_header X-Docker-Token;   # hide the registry's own token; we add ours below
            proxy_read_timeout 900;

            client_max_body_size 0;
            auth_request /_auth;                             # ask the auth app whether this request is allowed
            auth_request_set $token $upstream_http_x_token;  # capture the X-Token header it returns
            add_header X-Docker-Token $token;                # hand that token to the docker client
        }

        location /_ping {
            auth_basic off;
        }

        location /v1/_ping {
            auth_basic off;
        }

        location /_auth {
            proxy_pass http://127.0.0.1:8999/auth;
            proxy_pass_request_body off;
            proxy_set_header Content-Length "";
            proxy_set_header X-Original-URI $request_uri;
            proxy_set_header X-Original-Method $request_method;
        }
    }
}

There is a custom access-control application running on http://127.0.0.1:8999 which uses the username and password in the Authorization header and the requested URL to check if the request is allowed.

As you may see in the nginx configuration, I found a workaround for this problem: my access-control application hijacks the X-Docker-Token header and uses it to identify subsequent requests. Since the registry does not check the X-Docker-Token in standalone mode, this should work (I have not had time for extensive testing yet). But I wouldn't mind if I could do without that.
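
For anyone curious, the access-control application behind /_auth looks roughly like this (a heavily simplified Flask sketch, not my production code; the in-memory token store and check_permission() are placeholders):

import base64
import uuid

from flask import Flask, request

app = Flask(__name__)
sessions = {}  # issued token -> username (in-memory, for the sketch only)

def check_permission(user, method, uri):
    # placeholder: the real logic maps the repository in `uri` to the
    # customers that may read or write it
    return True

@app.route("/auth")
def auth():
    method = request.headers.get("X-Original-Method", "GET")
    uri = request.headers.get("X-Original-URI", "/")
    authz = request.headers.get("Authorization", "")

    if authz.startswith("Basic "):
        # first request: docker sends the credentials from "docker login"
        user, _, password = base64.b64decode(authz[6:]).decode().partition(":")
        # ... verify the password here ...
        token = uuid.uuid4().hex
        sessions[token] = user
    elif authz.startswith("Token "):
        # later requests: docker echoes back whatever X-Docker-Token we issued
        token = authz[len("Token "):].strip()
        user = sessions.get(token)
        if user is None:
            return "", 401
    else:
        return "", 401

    if not check_permission(user, method, uri):
        return "", 403
    # nginx captures this via auth_request_set and re-exposes it as X-Docker-Token
    return "", 200, {"X-Token": token}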

@jdiaz5513

This is somewhat annoying; I was planning on implementing auth on my own registry as well.

I think the problem here lies with docker; if it doesn't receive an auth token from the registry, it should continue to use basic auth instead. I'm still OK with docker-registry not handling auth on its own, but it should always be clear how to implement it for those who need it. That depends on reliable behavior from the docker client.

@dmp42
Contributor

dmp42 commented Sep 5, 2014

@jdiaz5513 simple auth on top of the registry should be straightforward: https://github.com/docker/docker-registry/blob/master/contrib/nginx.conf

Now, you are right, this is a problem with docker, and we plan on changing that.

@matleh
Author

matleh commented Sep 5, 2014

@dmp42 going by my experience of the last few days, the nginx.conf you linked to does not work any more, precisely for the reasons I wrote about above.

@dmp42
Contributor

dmp42 commented Sep 5, 2014

@shin- (or @bacongobbler if you have time) can you look into this and confirm that docker 1.2 works with the latest registry and proposed nginx simple auth config (or if it doesn't, investigate why)?

@matleh thanks

@dmp42 dmp42 added this to the 0.9 milestone Sep 5, 2014
@matleh
Author

matleh commented Sep 15, 2014

All right, what I wrote is only true when docker-registry is not running over HTTPS.

If docker-registry runs over HTTPS, docker just sends Basic Authorization headers with every request.
If docker-registry runs over HTTP, docker sends Basic Authorization headers with the first request and after that switches to Token Authorization. It is questionable whether that makes sense. If docker sends basic authentication data in plain text over HTTP once, it may as well continue doing so (it is insecure anyway). That way, one would at least have consistent behavior.

For anyone who reads this - be aware that to use docker-registry over HTTPS, one needs a "real" SSL certificate - self-signed certificates do not work, and there is no way to make docker accept them for testing and development purposes other than installing the test-CA certificate on the system that runs docker (see moby/moby#2687).
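
For reference, installing such a test-CA certificate looks roughly like this (Debian/Ubuntu paths shown as an example; other distributions keep their CA store elsewhere):

sudo cp test-ca.crt /usr/local/share/ca-certificates/test-ca.crt
sudo update-ca-certificates
sudo service docker restart   # restart so docker picks up the new CA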

@shin-
Contributor

shin- commented Sep 15, 2014

As advertised in the past, we do not support authentication over HTTP. If docker tries to send credentials over HTTP, then it is a bug and it needs to be fixed, but very clearly we have no intention to support non-HTTPS auth.

@tangicolin

But if you want to run the docker registry in high-availability mode, you will have the following schema:

Frontend: HAProxy ... HAProxy

Auth: nginx reverse proxy ... nginx reverse proxy

App: Docker registry ... Docker registry

Storage: Distributed storage

So for HAProxy we have three strategies:
SSL termination (not working, because the docker registry doesn't allow auth over HTTP)
SSL pass-through (working, but only at the TCP level; we can't use HAProxy HTTP features like URL routing)
SSL decode and re-encode (works, but a very complex setup)

And SSL decoding is not CPU-free; a feature allowing docker auth over HTTP would simplify the HA setup. It's not really a security issue if the network behind HAProxy is a local network.
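
For reference, the pass-through variant is roughly this (addresses are placeholders); everything stays in TCP mode, so no HTTP-level features:

frontend registry_https
    bind *:443
    mode tcp
    default_backend registry_nginx

backend registry_nginx
    mode tcp
    balance roundrobin
    # the nginx tier stays the TLS and auth endpoint
    server nginx1 10.0.0.11:443 check
    server nginx2 10.0.0.12:443 check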

@matleh
Author

matleh commented Sep 17, 2014

@tangicolin I don't think this is a problem in your setup, since it is docker and not the docker-registry that refuses to do Basic Auth over HTTP. So HAProxy can be the SSL endpoint.

@adamhadani

Pretty sure I'm running into the same or a related issue:

  1. I set up a docker-registry instance
  2. I set up an nginx front proxy to it which uses (not self-signed) SSL certs and HTTP basic auth
  3. The following flow for example fails on a 401 which I can trace in nginx logs (see immediately below):
docker pull foo/bar  # some example public container
docker tag foo/bar user:pass@docker-images-local.mydomain.com:443/bar
docker push user:pass@docker-images-local.mydomain.com:443/bar

Results in:

The push refers to a repository [...] (len: 1)
Sending image list
Pushing repository user:pass@docker-images-local.mydomain.com:443/bar (1 tags)
8dbd9e392a96: Pushing 
2014/09/17 16:10:00 HTTP code 401 while uploading metadata: invalid character '<' looking for beginning of value

and nginx logs show:

172.16.91.1 - default [17/Sep/2014:23:10:59 +0000] "GET /v1/_ping HTTP/1.1" 200 4 "-" "Go 1.1 package http"
172.16.91.1 - default [17/Sep/2014:23:10:59 +0000] "GET /v1/_ping HTTP/1.1" 200 4 "-" "Go 1.1 package http"
172.16.91.1 - default [17/Sep/2014:23:10:59 +0000] "PUT /v1/repositories/bar/ HTTP/1.1" 200 2 "-" "docker/1.2.0 go/go1.3.1 git-commit/fa7b24f kernel/3.16.1-tinycore64 os/linux arch/amd64"
172.16.91.1 - - [17/Sep/2014:23:10:59 +0000] "GET /v1/images/8dbd9e392a964056420e5d58ca5cc376ef18e2de93b5cc90e868a1bbc8318c1c/json HTTP/1.1" 401 194 "-" "docker/1.2.0 go/go1.3.1 git-commit/fa7b24f kernel/3.16.1-tinycore64 os/linux arch/amd64"
172.16.91.1 - - [17/Sep/2014:23:10:59 +0000] "PUT /v1/images/8dbd9e392a964056420e5d58ca5cc376ef18e2de93b5cc90e868a1bbc8318c1c/json HTTP/1.1" 401 194 "-" "docker/1.2.0 go/go1.3.1 git-commit/fa7b24f kernel/3.16.1-tinycore64 os/linux arch/amd64"

So you can see it switches to giving 401s on the last two HTTP requests, possibly related to other reports of docker 'switching' to Token auth or something like that?

@dmp42
Copy link
Contributor

dmp42 commented Sep 18, 2014

@adamhadani are you running the registry standalone? Can you copy your registry launch command and/or configuration?

Thanks

@matleh
Copy link
Author

matleh commented Sep 18, 2014

@adamhadani are you sure that the communication happens over HTTPS? Are you sure nginx only listens on port 443? Your nginx.conf would be helpful, too.

@adamhadani

Here's the nginx site config (this is a template populated in an Ansible deploy; final values are guaranteed to be valid). This template is virtually identical to the example file (https://github.com/docker/docker-registry/blob/master/contrib/nginx_1-3-9.conf) minus the port 80 redirect. I am not listening at all on port 80 (I verified this, and the nginx default site is deactivated as well), so I don't think this is the problem I'm seeing.

upstream docker-registry {
    server {{ facter_ipaddress_docker0 }}:5000;
}

server {
    listen 443 ssl;
    server_name {{ docker_registry_hostname }};

    ssl on;
    ssl_certificate      /etc/nginx/certs/{{ docker_registry_ssl_certificate }};
    ssl_certificate_key  /etc/nginx/certs/{{ docker_registry_ssl_certificate_key }};

    proxy_set_header Host $http_host; # required for docker client's sake
    proxy_set_header X-Real-IP $remote_addr; # pass on real client's IP
    proxy_set_header Authorization ""; # see https://github.com/dotcloud/docker-registry/issues/170

    client_max_body_size 0; # disable any limits to avoid HTTP 413 for large image uploads

    # required to avoid HTTP 411: see Issue #1486 (https://github.com/dotcloud/docker/issues/1486)
    chunked_transfer_encoding on;

    location / {
        auth_basic  "Restricted";
        auth_basic_user_file conf.d/docker-registry.htpasswd;

        proxy_pass http://docker-registry;
        proxy_set_header Host $host;
        proxy_read_timeout 900;
    }

    location /_ping {
        auth_basic off;
        proxy_pass http://docker-registry;
    }

    location /v1/_ping {
        auth_basic off;
        proxy_pass http://docker-registry;
    }
}

As for the docker-registry, I'm using the config_sample file (https://github.com/docker/docker-registry/blob/master/config/config_sample.yml) and passing in the docker environment vars:

SETTINGS_FLAVOR=local
SEARCH_BACKEND=sqlalchemy
DOCKER_REGISTRY_CONFIG=/etc/docker-registry/config.yml

The registry seems to work fine, by the way, when I just hit it directly via port 5000. My problems seem to revolve around the use of HTTP Basic Auth via the nginx proxy.

@adamhadani

Some more info: doing some packet sniffing, it looks like things go wrong after docker-registry returns this 'token' thing in a response header, e.g. the response to the 'PUT /v1/repositories/myapp/ HTTP/1.0' request:

HTTP/1.0 200 OK
Server: gunicorn/18.0
Date: Thu, 18 Sep 2014 22:28:22 GMT
Connection: close
X-Docker-Token: Token signature=8IRV2GB1G8FHRB9G,repository="library/myapp",access=write
X-Docker-Endpoints: docker-images-local.mydomain.com:443
Pragma: no-cache
Cache-Control: no-cache
Expires: -1
Content-Type: application/json
WWW-Authenticate: Token signature=8IRV2GB1G8FHRB9G,repository="library/myapp",access=write
Content-Length: 2
X-Docker-Registry-Version: 0.8.1
X-Docker-Registry-Config: local

After this response, the next request coming out of the docker client has a much shorter 'Authorization' HTTP header, which presumably triggers the 401. I can't see any other authentication-related HTTP header in there either:

X-Original-URI: /v1/images/<hash>/json
Host: auth
Connection: close
User-Agent: docker/1.2.0 go/go1.3.1 git-commit/fa7b24f kernel/3.16.1-tinycore64 os/linux arch/amd64
Authorization: Basic Og==
Accept-Encoding: gzip

@adamhadani

At the risk of getting ahead of myself, I looked around the docker_registry code. Inside https://github.com/docker/docker-registry/blob/master/docker_registry/index.py, there's a generate_headers function which seems to generate the WWW-Authenticate: Token ... headers.
If running in standalone mode automatically dictates that no token auth should be used, is that a potential bug? I.e. is that header an instruction to the docker client to switch to the token auth method, which then causes the basic auth failures in subsequent requests as I saw above?

@matleh
Author

matleh commented Sep 22, 2014

@adamhadani No, docker will not work if it does not get an X-Docker-Token header (at least in version 1.2.0, which I use). I had to include it in the response again after I tried to remove it, because docker started to complain.

Authorization: Basic Og== is just a : (base64-decoded), so docker sends basic auth with an empty username and password. I think docker does not fully support the "username:password@my.registry.url" syntax that you use. It works on the first call, but for subsequent calls docker does not remember the username and password and therefore sends empty ones instead (just my theory). I think you need to do docker login -u username -p password -e email my.registry.url and then just docker push my.registry.url/my/image.

@shin-
Contributor

shin- commented Sep 22, 2014

I think docker does not fully support the "username:password@my.registry.url" syntax that you use.

That is correct. The correct way to use auth over a private registry is to docker login on that private registry, then push the image normally.
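
For example (the registry hostname is a placeholder):

docker login my.registry.example.com           # prompts for username/password/email
docker tag foo/bar my.registry.example.com/bar
docker push my.registry.example.com/bar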

@shin-
Contributor

shin- commented Sep 22, 2014

Can I close this or is there anything that's still unclear regarding private registry auth?

@matleh
Author

matleh commented Sep 22, 2014

The inconsistent behaviour over HTTP and HTTPS is not really resolved, and neither is the lack of documentation.

@shin-
Contributor

shin- commented Sep 22, 2014

When pushing to a private registry over HTTP, do you not get this message? I'll look into adding a note for the HTTPS stuff in our docs.

$ docker push my.registry.io/ubuntu
The push refers to a repository [my.registry.io/ubuntu] (len: 1)
Sending image list
Pushing repository my.registry.io/ubuntu (1 tags)
511136ea3c5a: Pushing 
2014/04/08 16:40:27 HTTP code 401, Docker will not send auth headers over HTTP.

@adamhadani

Thanks for the replies, going to try these suggestions now and report back with findings.

@adamhadani

OK, so I got it to work using some combination of the feedback provided here. I'm going to report my steps and some points of interest in the hope that this helps other people who might come across this issue, and perhaps provide pointers for how to improve the documentation / make the behaviour more consistent.

Pushing to a private repo which is using HTTPS and Basic Auth

  • Log in using a user/pass combo which exists in the .htpasswd for the basic auth (the docker registry currently accepts any login credentials with 'account created'; however, the docker client presumably also uses the same login details for the HTTP basic auth). The correct login CLI invocation seems to be:
docker login -u <user> -p <password> -e doesnt@matter.com https://docker-images-local.mydomain.com

In the nginx proxy logs this command causes the following stream of requests; notice the 401 on the POST:

172.16.91.1 - - [23/Sep/2014:19:18:41 +0000] "GET /v1/_ping HTTP/1.1" 200 4 "-" "Go 1.1 package http"
172.16.91.1 - - [23/Sep/2014:19:18:41 +0000] "POST /v1/users/ HTTP/1.1" 401 194 "-" "Go 1.1 package http"
172.16.91.1 - default [23/Sep/2014:19:18:41 +0000] "GET /v1/users/ HTTP/1.1" 200 4 "-" "docker/1.2.0 go/go1.3.1 git-commit/fa7b24f kernel/3.16.1-tinycore64 os/linux arch/amd64"
  • Tag some image you already have using the following syntax in order to designate it as "push-able" to the private repo. In this case I'm just tagging and pushing a publicly available container I pulled earlier (dockerfile/nginx):
docker tag dockerfile/nginx docker-images-local.mydomain.com/nginx
  • Push the image using the same path you used for the tag command:
docker push docker-images-local.mydomain.com/nginx
  • Similarly, pulling is done using the same syntax:
docker pull docker-images-local.mydomain.com/nginx

So, a few things that are probably worth documenting (I've gone over a few blog posts and documentation pages and wasn't able to 'deduce' this, so it's not clear at all to most people, I assume, which is a shame, since otherwise setting up a private docker repo is very useful and was pretty straightforward):

  • Docker by default always tries to use HTTPS (port 443) first - no need to be explicit about ports / protocol in CLI invocations (if the 'docker login' command was invoked in the context of https://? or always?)
  • Docker by default will use the user/pass combo given in the 'docker login' command as the basis for the base64-encoded token in the HTTP 'Authorization' header (i.e. the basic auth credentials).
    Interestingly, the following syntax also seems to work (providing the user/pass inline in the URL in addition to the -u and -p params) and does not result in the POST failure I mentioned above in step 1, and there's no GET on the users resource following it either:
docker login -u <user> -p <pass> -e doesnt@matter.com https://<user>:<pass>@docker-images-local.mydomain.com
172.16.91.1 - default [23/Sep/2014:19:24:29 +0000] "GET /v1/_ping HTTP/1.1" 200 4 "-" "Go 1.1 package http"
172.16.91.1 - default [23/Sep/2014:19:24:29 +0000] "POST /v1/users/ HTTP/1.1" 201 14 "-" "Go 1.1 package http"
  • One needs to tag an image with the private repo URL / repo path before one can push to it. Personally, this syntax / logical flow doesn't make sense to me at all; being able to push images to different repositories should not necessitate a "tag" invocation. I would take a cue from the git / GitHub semantics of having repositories and remotes, and being able to add a remote, pull/push to different remotes, etc., but I digress.
  • As mentioned before in this thread and others, this whole flow will only work when using HTTPS (i.e. using an SSL certificate in the nginx proxy), as the docker client will not send credentials in the clear. This makes HTTPS a requirement if one also wishes to use basic auth.

@dephee

dephee commented Oct 2, 2014

Basic auth only works over HTTPS. A few things that helped me get HTTPS working:

  • restart docker after adding ca.pem to /etc/ssl/certs:
    sudo systemctl restart docker
  • log in using https:
    docker login https://yourdomain:443

After this, I can do docker pull and docker push.

@dmp42 dmp42 modified the milestones: 1.0, 0.9 Oct 6, 2014
@tzz

tzz commented Nov 5, 2014

After setting up with nginx auth (following https://github.com/docker/docker-registry/blob/master/contrib/nginx/nginx.conf and https://github.com/docker/docker-registry/blob/master/contrib/nginx/docker-registry.conf) to a socket file, the following magic worked (Fedora 20, nginx 1.4.7, docker-registry 0.8.1, docker-io 1.3.0):

docker login -u USER -p PASS -e whatever@pwnie.com https://USER:PASS@myhost
docker tag ubuntu USER:PASS@myhost/ubuntu
docker push USER:PASS@myhost/ubuntu

It's not ideal because the USER and PASS are exposed in the tags, but it's the only way I've found. I hope it helps research this issue.

@dmp42
Contributor

dmp42 commented Nov 5, 2014

@bacongobbler thoughts?

@paturuv

paturuv commented Jan 19, 2015

I am facing an issue connecting the docker client to the registry. I have the following:

  1. I have a private registry running on a server which is SSL-enabled (nginx)
  2. Basic authentication is implemented with nginx
  3. curl works fine
  4. When I try to log in using the docker client, it fails with the following error message:
    Error response from daemon: Server Error: Post https://example.domain.com/v1/users/: x509: certificate signed by unknown authority

I am using a CA certificate to enable SSL and that works fine with the curl command too... Not sure if this is due to the x509 certificate?

@hibooboo2

@paturuv The specific issue you are talking about is likely that you are not fully chaining all the certs.

I just went through this issue today when setting up a private registry using an SSL cert that has an intermediate one.

I had to append the SSL cert bundle to my own cert so that docker would resolve the full chain. To do this, you can do:

cat yourssl.crt sslcertbundle.crt > thecerttouse.crt

And then use the generated cert as the cert that nginx serves up.
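
In the nginx server block that means something like this (paths and the key filename are placeholders):

ssl_certificate      /etc/nginx/certs/thecerttouse.crt;   # own cert + intermediate bundle
ssl_certificate_key  /etc/nginx/certs/yourssl.key;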

@olibob

olibob commented Mar 16, 2015

@hibooboo2 could you please elaborate on how you fixed the issue? Where is sslcertbundle.crt located? Are you talking about /etc/ssl/certs/ca-certificates.crt in a boot2docker VM (docker-machine)?

@hgomez
Contributor

hgomez commented Mar 31, 2015

I'm still battling with this one: docker-registry 0.9.1, docker 1.5.0 and Apache HTTPd 2.4.

I used easy_install --user "docker-registry==0.9.1"

The initial request came with:

 Authorization:Basic ZWNkLWRlcGxveWVyOjEyMw==|Content-type:application/json|X-Docker-Token:true|Accept-Encoding:gzip

Successive requests came with:

Authorization:Token Token signature=061R11KL2ETBB6E4,repository="library/ecd-busybox",access=write|Accept-Encoding:gzip

The HTTPd front end didn't find Authorization:Basic and didn't forward the push request.

With the docker-registry image, started using docker run, I still get Authorization:Basic presented back.

What's the difference between the docker-registry image and docker-registry via pip/easy_install?

Thanks

@hibooboo2

@olibob The certs I am talking about are not in the boot2docker VM; they are in the nginx setup that does the reverse proxy to provide authentication for the registry.
