MidoNet Kubernetes Integration

Overview

This software provides a way to use MidoNet as a backend for Kubernetes networking. Namely, it provides the following Kubernetes networking functionalities:

  • Basic cluster network, that is, connectivity among Pods, Nodes, and the apiserver
  • Services with ClusterIP type (Note: externalIPs are ignored)

Supported versions

This software is tested with:

  • Kubernetes v1.10 (by the CI)
  • Kubernetes v1.11 (occasionally)
  • MidoNet 5.6

Limitations

  • Even if a Service has multiple Endpoints, only the endpoint that happens to be first is used; i.e. there is no load-balancing. MNA-1264

References

  • The doc directory contains internal documentation

  • The design doc might have more details

How to build

Prerequisite

  • docker

Build docker images

	% TAG=1.1
	% ./build.sh ${TAG}

The build process includes some unit tests and generates a JUnit-compatible result XML file (junit.xml) in the build directory for easy CI integration.

Upload docker images

See the output of the above build.sh script.
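
The exact image names and tags to push are printed by build.sh; the upload itself is a plain docker push. A hedged sketch with placeholder names (the real ones come from the script output):

	% docker push <registry>/<image>:<tag>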

Create docker manifest lists

	% TAG=1.1
	% ./createmanifest.sh ${TAG}

Note: This uses the "docker manifest" command, which is experimental. You might need to enable experimental features in your docker environment and/or upgrade your docker.
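
Depending on your docker version, experimental CLI features can be enabled with the DOCKER_CLI_EXPERIMENTAL environment variable or with "experimental": "enabled" in ~/.docker/config.json. For example:

	% export DOCKER_CLI_EXPERIMENTAL=enabled
	% docker manifest --help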

Upload docker manifest lists

See the output of the above createmanifest.sh script.
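
As with the images, createmanifest.sh prints the exact commands to run; a manifest list is pushed with "docker manifest push". A sketch with a placeholder name:

	% docker manifest push <registry>/<image>:${TAG}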

How to deploy

The following procedure assumes a kubeadm-based deployment, but something similar should apply to other deployment methods as well.

  1. Deploy MidoNet as usual.

    • Every Kubernetes node, including the master node, should run the MidoNet agent.
    • The following instructions assume that the MidoNet Host name and the Kubernetes Node name are the same for each node. This is usually the case because both are inferred from the hostname.
    • No need to create a tunnel zone. The integration will create its own automatically.
    • Some of these assumptions can be overridden with manual annotations.
  2. "kubeadm init" with Node IPAM enabled. (This integration relies on Node's spec.PodCIDR.)

	% kubeadm init --pod-network-cidr=10.1.0.0/16
  3. Remove kube-proxy. (It is neither necessary for nor compatible with this integration. Unfortunately, kubeadm sets it up unconditionally. cf. kubeadm issue 776)
	% kubectl -n kube-system delete ds kube-proxy
  4. After stopping kube-proxy, you might need to manually remove the iptables rules it installed. Note: the following commands remove many of the relevant rules but leave some of the rules and chains installed by kube-proxy. The simplest way to get a cleaner state is to reboot the system.
	% sudo iptables -t nat -F KUBE-SERVICES
	% sudo iptables -F KUBE-SERVICES
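    To see which kube-proxy rules remain afterwards, a quick check (assuming iptables-save is available on the node):
	% sudo iptables-save | grep KUBE-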
  5. Look at the manifests directory in this repository. Copy and edit midonet-kube-config.template.yaml to match your deployment. The modified file will be called midonet-kube-config.yaml hereafter.
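    A minimal sketch of this step (the template lives in the manifests directory; the destination filename is only the convention used below):
	% cp manifests/midonet-kube-config.template.yaml midonet-kube-config.yaml
	% ${EDITOR} midonet-kube-config.yaml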
  6. Apply the manifests.
	% kubectl apply -f midonet-kube-config.yaml
	% kubectl apply -f midonet-kube.yaml
  1. "Untaint" the master node if you want.
  8. If you have workers, do "kubeadm join" as usual.
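
After the manifests are applied and any workers have joined, a quick sanity check is to confirm that the Nodes become Ready and that the pods created by midonet-kube.yaml reach the Running state:

	% kubectl get nodes
	% kubectl -n kube-system get pods -o wide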

Troubleshooting

Translation custom resources

midonet-kube-controllers uses a CRD "Translation" to store the intermediate objects converted from Kubernetes resources like Pods and Services. You can investigate them with the kubectl command. "tr" is a short name for the CRD.

Examples

  • Get a list of Translation resources for a Service.
% kubectl -n kube-system get tr -l midonet.org/owner-uid=$(kubectl -n kube-system get svc kube-dns -o jsonpath='{.metadata.uid}')
NAME                                                                                                   AGE
service-port.3.kube-system-kube-dns-dns-10.96.0.10-17-53-a3982c897cb53bcf4ec5dd4895d5e191e28628a5      9d
service-port.3.kube-system-kube-dns-dns-tcp-10.96.0.10-6-53-4b9da0761bde315827a2f3a49560b52b3c7a53a5   9d
service.3.kube-dns                                                                                     9d
  • The same for an Endpoints.
% kubectl -n kube-system get tr -l midonet.org/owner-uid=$(kubectl -n kube-system get ep kube-dns -o jsonpath='{.metadata.uid}')
NAME                                                                                                      AGE
endpoints-port.3.kube-dns-dns-10.96.0.10-10.1.0.21-53-udp-54b27350879608cd686eb61cedb7273386a367c2        11s
endpoints-port.3.kube-dns-dns-10.96.0.10-10.1.0.22-53-udp-3adaf57ddc83531c51e037c7489c27ab6f671afa        10s
endpoints-port.3.kube-dns-dns-10.96.0.10-10.1.0.23-53-udp-1277c2fba4ef5db3a8a406311d1c1a534b09e171        10s
endpoints-port.3.kube-dns-dns-10.96.0.10-10.1.0.24-53-udp-9562d46816ee2c652d4d0f8a880489a13ec1e073        8s
endpoints-port.3.kube-dns-dns-10.96.0.10-10.1.1.145-53-udp-8c74d17e9a85139c771b0eb8321c000c88907c94       1h
endpoints-port.3.kube-dns-dns-10.96.0.10-10.1.1.177-53-udp-7afaaf3a2ba96d55bf2c9192a0cb4a902c8271f6       22s
endpoints-port.3.kube-dns-dns-tcp-10.96.0.10-10.1.0.21-53-tcp-6245126dfb5cd4878ccce4b468dd229e73ee7f6c    10s
endpoints-port.3.kube-dns-dns-tcp-10.96.0.10-10.1.0.22-53-tcp-1400a542aaf5e799316f80cb6daa0dc141f0902f    9s
endpoints-port.3.kube-dns-dns-tcp-10.96.0.10-10.1.0.23-53-tcp-7ebfb0755608d0cefb2939d92b18edd654fcb1ca    9s
endpoints-port.3.kube-dns-dns-tcp-10.96.0.10-10.1.0.24-53-tcp-f624318b925446dfa7016c1d2e1adfb4917f3fa9    12s
endpoints-port.3.kube-dns-dns-tcp-10.96.0.10-10.1.1.145-53-tcp-3302c499a8532147a976c74a96165bea98dc7034   1h
endpoints-port.3.kube-dns-dns-tcp-10.96.0.10-10.1.1.177-53-tcp-3ae2c8ecc12fb3c76ac73e6260b16bb35d8795ed   20s
  • Investigate one of those Translations.
% kubectl -n kube-system describe tr endpoints-port.3.kube-dns-dns-10.96.0.10-10.1.0.24-53-udp-9562d46816ee2c652d4d0f8a880489a13ec1e073
Name:         endpoints-port.3.kube-dns-dns-10.96.0.10-10.1.0.24-53-udp-9562d46816ee2c652d4d0f8a880489a13ec1e073
Namespace:    kube-system
Labels:       midonet.org/owner-uid=a55ca29d-2330-11e8-8d8e-fa163ec6ef35
Annotations:  
API Version:  midonet.org/v1
Kind:         Translation
Metadata:
  Cluster Name:
  Creation Timestamp:  2018-06-28T00:44:03Z
  Finalizers:
    midonet.org/deleter
  Generation:  0
  Owner References:
    API Version:     v1
    Kind:            Endpoints
    Name:            kube-dns
    UID:             a55ca29d-2330-11e8-8d8e-fa163ec6ef35
  Resource Version:  12040595
  Self Link:         /apis/midonet.org/v1/namespaces/kube-system/translations/endpoints-port.3.kube-dns-dns-10.96.0.10-10.1.0.24-53-udp-9562d46816ee2c652d4d0f8a880489a13ec1e073
  UID:               5adf0dce-7a6c-11e8-ba60-fa163ec6ef35
Resources:
  Body:    {"id":"5cfc9435-b3f4-5a79-8093-3c6a8c0f5fff","tenantId":"midonetkube","name":"KUBE-SEP-kube-dns/dns/10.96.0.10/10.1.0.24/53/UDP"}
  Kind:    Chain
  Parent:
  Body:    {"id":"e75a66ad-a7e6-569f-83bc-543d4a01107a","type":"jump","jumpChainId":"5cfc9435-b3f4-5a79-8093-3c6a8c0f5fff"}
  Kind:    Rule
  Parent:  2ba08097-dcb1-548e-817e-d259736a286c
  Body:    {"id":"0f0dbbff-40d9-537a-ae8f-3fdc533cb9fd","type":"dnat","flowAction":"accept","natTargets":[{"addressFrom":"10.1.1.177","addressTo":"10.1.1.177","portFrom":53,"portTo":53}]}
  Kind:    Rule
  Parent:  5cfc9435-b3f4-5a79-8093-3c6a8c0f5fff
  Body:    {"id":"2399be50-d5bd-5691-a6ff-8ffbc14df8a8","type":"snat","dlType":2048,"nwSrcAddress":"10.1.1.177","nwSrcLength":32,"flowAction":"continue","natTargets":[{"addressFrom":"10.96.0.10","addressTo":"10.96.0.10","portFrom":30000,"portTo":60000}]}
  Kind:    Rule
  Parent:  5cfc9435-b3f4-5a79-8093-3c6a8c0f5fff
Events:
  Type    Reason                   Age               From                      Message
  ----    ------                   ----              ----                      -------
  Normal  TranslationUpdatePushed  52s (x2 over 1m)  midonet-kube-controllers  Translation Update pushed to the backend

Prometheus

midonet-kube-controllers provides a few metrics for Prometheus.

% curl -s http://localhost:9453/metrics|grep -E "^# (HELP|TYPE) midonet_"
# HELP midonet_kube_controllers_midonet_client_request_duration_seconds Latency of MidoNet API call
# TYPE midonet_kube_controllers_midonet_client_request_duration_seconds histogram
# HELP midonet_kube_controllers_midonet_client_requests_total Number of MidoNet API calls
# TYPE midonet_kube_controllers_midonet_client_requests_total counter

Example queries

See the Prometheus Querying documentation for details.

  • Number of successful MidoNet API calls per second.
sum(rate(midonet_kube_controllers_midonet_client_requests_total{code=~"2.*"}[5m])) by (resource,method)
  • Latency of successful MidoNet API calls.
histogram_quantile(0.9, sum(rate(midonet_kube_controllers_midonet_client_request_duration_seconds_bucket{code=~"2.*"}[5m])) by (resource,method,le))

Go net/http/pprof

midonet-kube-controllers provides Go net/http/pprof on the same port as the Prometheus metrics.

% go tool pprof http://localhost:9453/debug/pprof/heap
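
The other standard net/http/pprof endpoints are exposed on the same port as well; for example, a 30-second CPU profile (a sketch using the default pprof URL paths):

% go tool pprof 'http://localhost:9453/debug/pprof/profile?seconds=30'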

Contribution

Submitting patches

We use GerritHub to submit patches.

We don't use GitHub pull requests.

Reviewing patches

Everyone is encouraged to review patches for this repository.

If you want to be notified of patches, you can add this repository to "Watched Projects" in your GerritHub settings.

We have a voting CI named "Midokura Bot". Unfortunately, its test logs are not publicly available. MNA-1209 If it votes -1 on your patch, please ask a Midokura employee to investigate the log.

Merging patches

Unless it's urgent, a patch should be reviewed by at least one person other than the submitter of the patch before being merged.

Right now, members of the GerritHub midonet group have permission to merge patches. If you are interested in becoming a member, please reach out to the existing members.

Issue tracker

Bugs and Tasks are tracked in the MidoNet JIRA. We might consider alternatives if the traffic goes up.

We don't use GitHub issues.

Release process

Right now, our releases are tags on the master branch.

  1. Create and push a git tag for the release. (See the sketch after this list.)

  2. Build and push the docker images. (See the above sections about docker images)

  3. Build and push the manifest list.

  4. Submit a patch to update docker image tags in our Kubernetes manifests.

  5. Review and merge the patch.
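
A minimal sketch of step 1, assuming the tag name matches the TAG used for the docker images (e.g. 1.1); the actual naming convention is up to the release manager:

	% git tag -a 1.1 -m "Release 1.1"
	% git push origin 1.1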
