
Is it possible accessing host machine ports from Kind pods? #1200

Closed
fllaca opened this issue Dec 23, 2019 · 14 comments
Labels
kind/support Categorizes issue or PR as a support question.


@fllaca
Contributor

fllaca commented Dec 23, 2019

Hi!
we are planning to use Kind as part of our development environment on our laptops, in combination with Skaffold. We are having difficulty making connections from the applications running in pods inside Kind to the database and queues running on the host machine (usually also in another container).

To give you some context, we are basically using two or three approaches:

  1. Use the machine IP in the database config (not repeatable as this IP might change)
  2. use "host.docker.internal" (only works for Mac users)
  3. Deploy database and queues also in pods.
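As an aside on approach 1, the host IP need not be hard-coded in the config; it can be discovered at startup. A minimal Python sketch (the function name is ours; the UDP `connect` sends no packets, it only asks the kernel which source address it would pick for that destination):

```python
import socket

def guess_host_ip() -> str:
    """Best-effort guess of the machine's primary (non-loopback) IP.

    Connecting a UDP socket transmits nothing; it just makes the kernel
    choose the source address it would use to reach the target.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect(("192.0.2.1", 80))  # TEST-NET address; never actually contacted
        return s.getsockname()[0]
    except OSError:
        return "127.0.0.1"  # no usable route; fall back to loopback
    finally:
        s.close()

print(guess_host_ip())
```

This only mitigates the "IP might change" problem per process start; it does not make the address stable across restarts.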

Approach 3 is fine, but it gets a bit more complicated, especially if data needs to be persisted (it can be done with persistent volumes and Kind node mounts), as opposed to having these dependencies running in simple docker-compose files. Of course we have tried running our app in docker-compose as well (since Kind doesn't aim to be an alternative to docker-compose), but using Kind lets us make our local environment look closer to our pre-production and production deployments, and lets us test some K8s-related features (in our case, an embedded distributed cache (Hazelcast) that uses the K8s API for peer discovery).

Do you have any recommendation for this use case?
Thanks for this amazing project

@fllaca fllaca added the kind/support Categorizes issue or PR as a support question. label Dec 23, 2019
@fllaca
Contributor Author

fllaca commented Dec 24, 2019

I came up with a working experimental approach, but it requires some hacks and modifications to Kind. Steps:

  1. Using this modified version of Kind, create a cluster using this configuration:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  image: kindest/node:v1.16.3
  extraDockerOptions:
    - --network
    - integration-tests
- role: worker
  image: kindest/node:v1.16.3
  extraDockerOptions:
    - --network
    - integration-tests

("integration-tests" is a pre-existing docker network: docker network create integration-tests)

  2. This will make the loop described in Using bridge network for nodes #484 (comment) happen; the next steps are tricks to work around it.

  3. In my experiment, I deployed a socat DaemonSet with hostNetwork: true to expose the Docker embedded DNS on the k8s node IPs:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: socat-dns
  namespace: kube-system
  labels:
    k8s-app: socat-dns
spec:
  selector:
    matchLabels:
      name: socat-dns
  template:
    metadata:
      labels:
        name: socat-dns
    spec:
      hostNetwork: true
      containers:
      - name: socat
        image: alpine/socat
        args:
          - tcp-listen:5353,reuseaddr,fork
          - tcp:127.0.0.11:53
  4. Now modify the CoreDNS deployment, adding:
        env:
        - name: HOST_IP
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: status.hostIP
  5. ...and modify the CoreDNS ConfigMap with:
        # old config: 
        # forward . /etc/resolv.conf 
        forward . {$HOST_IP}:5353 {
          force_tcp
        }
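Conceptually, the socat relay in step 3 is just a fork-per-connection TCP proxy between the node IP and Docker's embedded DNS. A rough, self-contained Python sketch of the same idea (the helper name is ours; socat remains the actual tool used in the DaemonSet):

```python
import socket
import threading

def start_forwarder(target_host: str, target_port: int, listen_port: int = 0) -> int:
    """Rough analogue of `socat tcp-listen:PORT,reuseaddr,fork tcp:HOST:PORT`.

    Listens on 127.0.0.1 and relays each incoming TCP connection to the
    target. Returns the actual listening port (handy when listen_port=0).
    """
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", listen_port))
    srv.listen()

    def pipe(src: socket.socket, dst: socket.socket) -> None:
        # Copy bytes one way until the source side closes, then close the sink.
        try:
            while data := src.recv(4096):
                dst.sendall(data)
        except OSError:
            pass
        finally:
            dst.close()

    def accept_loop() -> None:
        while True:
            client, _ = srv.accept()
            upstream = socket.create_connection((target_host, target_port))
            # One thread per direction, like socat's fork-per-connection model.
            threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
            threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()

    threading.Thread(target=accept_loop, daemon=True).start()
    return srv.getsockname()[1]
```

This sketch only illustrates what the relay does; in the cluster, socat is the right tool for the job.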

And profit! With this setup you can resolve the names of containers attached to the Docker network from inside Kind pods! :D

$ docker run -d --network integration-tests --rm --name redis redis
$ kubectl run my-shell --rm -i --tty --image alpine -- sh
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
If you don't see a command prompt, try pressing enter.
/ # nc -vz redis.integration-tests 6379
redis.integration-tests (172.24.0.3:6379) open

For steps 3, 4 and 5 I made some custom manifests with those modifications, to apply easily with kubectl apply -f coredns-deploy.yaml -f coredns-configmap.yaml -f socat-ds.yaml.

I'm not super happy with the approach, it's kinda hacky, but it's an alternative 🤷‍♂ , and maybe it can bring some ideas to the conversation at #148 🤔 .

Do you think extraDockerOptions could be a good feature for Kind? It would allow great flexibility for adventurous users :P. If so, I'd be very happy to elaborate on it and open a PR.

PS: Sorry for the long text.

@BenTheElder
Member

Removing the forward breaks upstream DNS..?

I am working on docker networks support, please see other issues in the tracker for the problems with doing this properly. I'm currently developing a prototype.

We are not accepting docker flags anywhere in the API surface, we do not wish to be coupled to docker's CLI externally in any way and have discussed this in the past.

@BenTheElder
Member

This should be possible without user defined networks IIRC, I will have to dig out the details.

host.docker.internal should be possible to mimic on linux

@fllaca
Contributor Author

fllaca commented Dec 24, 2019

Removing the forward breaks upstream DNS..?

It doesn't remove it; it replaces the /etc/resolv.conf-based forward with an IP:port forward (using the hostIP and port 5353, which is tunneled to 127.0.0.11:53 by socat in the host container, i.e. the Kind node). The problem with the original /etc/resolv.conf in CoreDNS was that it contains "nameserver 127.0.0.11", mounted from the host Kind container. Since that is a loopback address, CoreDNS detects it as a loop and crashes. My experiment "tricks" that by using the hostIP, avoiding the loop. I tested that both kube-dns and the Docker embedded DNS can be used, as well as public DNS resolution (like google.com). Anyway, this was just playing around; I'm not happy with it as a solution.

We are not accepting docker flags anywhere in the API surface, we do not wish to be coupled to docker's CLI externally in any way and have discussed this in the past.

👍

host.docker.internal should be possible to mimic on linux

I will explore a bit more 🤔

@fllaca
Contributor Author

fllaca commented Dec 24, 2019

Finally I found a way to achieve this with qoomon/docker-host:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dockerhost
  labels:
    k8s-app: dockerhost
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: dockerhost
  template:
    metadata:
      labels:
        k8s-app: dockerhost
    spec:
      containers:
      - name: dockerhost
        image: qoomon/docker-host
        securityContext:
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        # Not needed on macOS:
        - name: DOCKER_HOST
          value: 172.17.0.1 # <-- docker bridge network default gateway

---
apiVersion: v1
kind: Service
metadata:
  name: dockerhost
spec:
  clusterIP: None # <-- Headless service
  selector:
    k8s-app: dockerhost

From inside a pod running in Kind:

/ # curl dockerhost:8080 -I
HTTP/1.1 200 OK
Server: nginx/1.17.6
Date: Tue, 24 Dec 2019 11:56:11 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 19 Nov 2019 12:50:08 GMT
Connection: keep-alive
ETag: "5dd3e500-264"
Accept-Ranges: bytes

Thanks @BenTheElder for pointing me in another direction! :) Do you think this trick is worth adding to the user guide documentation ("Accessing host machine ports")?

In any case, I think we can close this question. Thanks for the assistance!

@fllaca fllaca closed this as completed Dec 24, 2019
@s12chung

s12chung commented Jan 21, 2020

@fllaca how exactly did you get the final solution to work? I did a kubectl apply -f on your post above and tried to see if it worked, but it failed. Does it require the extra Docker options?

edit:
ah... found it. I left the docker bridge network default gateway (DOCKER_HOST) empty and it worked. I'm on macOS, so I guess it's different.

I think adding this to the docs would be great.

@fllaca
Contributor Author

fllaca commented Feb 1, 2020

hi @s12chung !
Yes, I had to set DOCKER_HOST when using a Linux host. On macOS, qoomon/docker-host uses the special DNS name "host.docker.internal" to forward the traffic, so DOCKER_HOST must not be set.

@mrbrandao

mrbrandao commented Jun 21, 2020

I think it is possible to get the same result by simply allowing the ports in the firewall, without needing a container to NAT connections to the Docker default gateway.
I had success with this:

  1. kind pod with a container name: test

  2. local container: myapp exposing port 8081

  3. Allow port in the firewall

iptables -I INPUT -p tcp --dport 8081 -j ACCEPT

Now, from inside the container test in the Kind cluster, you can run:
curl 172.17.0.1:8081
where 172.17.0.1 is your Docker bridge default gateway.
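The 172.17.0.1 address above is Docker's default bridge gateway, but it can be confirmed rather than assumed with `docker network inspect bridge`. A small Python sketch extracting the gateway from that command's JSON output (the sample below is abridged and its values hypothetical):

```python
import json

# Abridged sample of `docker network inspect bridge` output (hypothetical values):
inspect_output = """
[{"Name": "bridge",
  "IPAM": {"Driver": "default",
           "Config": [{"Subnet": "172.17.0.0/16", "Gateway": "172.17.0.1"}]}}]
"""

gateway = json.loads(inspect_output)[0]["IPAM"]["Config"][0]["Gateway"]
print(gateway)  # the bridge gateway; here 172.17.0.1
```

On a live system, `docker network inspect bridge -f '{{(index .IPAM.Config 0).Gateway}}'` should print the same value directly, without any parsing.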

@tqwewe

tqwewe commented Dec 11, 2021

I'm having trouble with this, sadly.
I have a separate Docker container running that I want to connect to from within a Kind k8s deployment.

I can connect from my host with 127.0.0.1:9092, but using the YAML config and dockerhost:9092 doesn't seem to connect.
I get:

 Failed to resolve 'dockerhost:9092': Name or service not known (after 7ms in state CONNECT)

@fllaca
Contributor Author

fllaca commented Nov 6, 2022

For those interested, I think I came up with a simpler solution to route traffic to the host machine, using headless or ExternalName services. The solution differs between Linux and Mac/Windows (the latter using Docker Desktop).

Linux solution: headless service + endpoint:

---
apiVersion: v1
kind: Endpoints
metadata:
  name: dockerhost
subsets:
- addresses:
  - ip: 172.17.0.1 # this is the gateway IP in the "bridge" docker network
---
apiVersion: v1
kind: Service
metadata:
  name: dockerhost
spec:
  clusterIP: None

Mac/Windows (Docker Desktop) solution: ExternalName service:

---
apiVersion: v1
kind: Service
metadata:
  name: dockerhost
spec:
  type: ExternalName
  externalName: host.docker.internal

@charandas

One use case for us is being able to locally develop an extension server that we get aggregated into the Kind cluster using apiservices.apiregistration.k8s.io. ExternalName works perfectly on macOS with host.docker.internal. For Linux, we tried using EndpointSlice but without much success.

Endpoints using the docker0 bridge IP plus a Service does work (but you have to list the ports in the Service, and it can't be headless).

@charandas

Working config for extension servers pointed to a locally running process on Linux:

---
apiVersion: v1
kind: Endpoints
metadata:
  name: api
subsets:
  - addresses:
      - ip: 172.17.0.1 # docker0 bridge ip
    ports:
      - appProtocol: https
        port: 8443
        protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  ports:
    - protocol: TCP
      appProtocol: https
      port: 8443
      targetPort: 8443
  

@Puneethgr

@fllaca @charandas I wanted to access the interfaces of the host machine from the kind pods. Does your solution work for accessing interfaces as well?
Please let me know if you have any input. Thank you for your solution.

@BenTheElder
Member

@fllaca @charandas I wanted to access the interfaces of the host machine from the kind pods. Does your solution work for accessing interfaces as well?

No, that's not going to work. You should probably investigate running the host as a single-node kubeadm cluster directly instead if you're going to directly interact with host devices.

An alternative would be hack/local-up-cluster.sh in kubernetes, but kubeadm is what kind uses under the hood.

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
