
Update docs for windows container resources #7653

Merged 1 commit on Mar 9, 2018
83 changes: 67 additions & 16 deletions docs/getting-started-guides/windows/index.md
@@ -17,7 +17,7 @@ The Kubernetes control plane (API Server, Scheduler, Controller Manager, etc) co
{: .note}

## Get Windows Binaries
We recommend using the release binaries that can be found at [https://github.com/kubernetes/kubernetes/releases/latest](https://github.com/kubernetes/kubernetes/releases/latest). Under the CHANGELOG you can find the Node Binaries link for Windows-amd64, which will include kubeadm, kubectl, kubelet and kube-proxy.

If you wish to build the code yourself, please refer to detailed build instructions [here](https://docs.microsoft.com/en-us/virtualization/windowscontainers/kubernetes/compiling-kubernetes-binaries).

@@ -31,7 +31,7 @@ In Kubernetes version 1.9 or later, Windows Server Containers for Kubernetes are

## Networking
There are several supported network configurations with Kubernetes v1.9 on Windows, including both Layer-3 routed and overlay topologies using third-party network plugins.

1. [Upstream L3 Routing](#upstream-l3-routing-topology) - IP routes configured in upstream ToR
2. [Host-Gateway](#host-gateway-topology) - IP routes configured on each host
3. [Open vSwitch (OVS) & Open Virtual Network (OVN) with Overlay](#using-ovn-with-ovs) - overlay networks (supports STT and Geneve tunneling types)
@@ -47,7 +47,7 @@ An additional two CNI plugins [win-l2bridge (host-gateway) and win-overlay (vxla
The above networking approaches are already supported on Linux using a bridge interface, which essentially creates a private network local to the node. Similar to the Windows side, routes to all other pod CIDRs must be created in order to send packets via the "public" NIC.

### Windows
Windows supports the CNI network model and uses plugins to interface with the Windows Host Networking Service (HNS) to configure host networking and policy. At the time of this writing, the only publicly available CNI plugin from Microsoft is built from a private repo and available here: [wincni.exe](https://github.com/Microsoft/SDN/blob/master/Kubernetes/windows/cni/wincni.exe). It uses an l2bridge network created through HNS by an administrator using HNS PowerShell commands on each node, as documented in the [Windows Host Setup](#windows-host-setup) section below. Source code for the future CNI plugins will be made available publicly.

#### Upstream L3 Routing Topology
In this topology, networking is achieved using L3 routing with static IP routes configured in an upstream Top of Rack (ToR) switch/router. Each cluster node is connected to the management network with a host IP. Additionally, each node uses a local 'l2bridge' network with a pod CIDR assigned. All pods on a given worker node will be connected to the pod CIDR subnet ('l2bridge' network). In order to enable network communication between pods running on different nodes, the upstream router has static routes configured with pod CIDR prefix => Host IP.
@@ -65,7 +65,7 @@ The following diagram gives a general overview of the architecture and interacti

(The above image is from [https://github.com/openvswitch/ovn-kubernetes#overlay-mode-architecture-diagram](https://github.com/openvswitch/ovn-kubernetes#overlay-mode-architecture-diagram))

Due to its architecture, OVN has a central component which stores your networking intent in a database. Other components, such as kube-apiserver, kube-controller-manager, and kube-scheduler, can be deployed on that central node as well.

## Setting up Windows Server Containers on Kubernetes
To run Windows Server Containers on Kubernetes, you'll need to set up both your host machines and the Kubernetes node components for Windows. Depending on your network topology, routes may need to be set up for pod communication on different nodes.
@@ -76,7 +76,7 @@ To run Windows Server Containers on Kubernetes, you'll need to set up both your

##### Linux Host Setup

1. Linux hosts should be set up according to their respective distro documentation and the requirements of the Kubernetes version you will be using.
2. Configure Linux Master node using steps [here](https://github.com/MicrosoftDocs/Virtualization-Documentation/blob/live/virtualization/windowscontainers/kubernetes/creating-a-linux-master.md)
3. [Optional] CNI network plugin installed.

@@ -92,7 +92,7 @@ To run Windows Server Containers on Kubernetes, you'll need to set up both your

More detailed instructions can be found [here](https://github.com/MicrosoftDocs/Virtualization-Documentation/blob/live/virtualization/windowscontainers/kubernetes/getting-started-kubernetes-windows.md).

**Windows CNI Config Example**
Today, the Windows CNI plugin is based on wincni.exe code, with the following example configuration file. This is based on the ToR example diagram shown above, specifying the configuration to apply to Windows node-1. Of special interest are Windows node-1's pod CIDR (10.10.187.64/26) and the associated cbr0 gateway (10.10.187.66). The exception list specifies the Service CIDR (11.0.0.0/8), Cluster CIDR (10.10.0.0/16), and Management (or Host) CIDR (10.127.132.128/25).

Note: this file assumes that a user previously created 'l2bridge' host networks on each Windows node using `<Verb>-HNSNetwork` cmdlets, as shown in the `start-kubelet.ps1` and `start-kubeproxy.ps1` scripts linked above.
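For illustration, a minimal sketch of what such a wincni.exe configuration can look like, using the CIDR values just described (the `master` interface name and the exact field names are assumptions; the referenced example remains authoritative):

```json
{
  "cniVersion": "0.2.0",
  "name": "l2bridge",
  "type": "wincni.exe",
  "master": "Ethernet",
  "ipam": {
    "subnet": "10.10.187.64/26",
    "routes": [
      { "GW": "10.10.187.66" }
    ]
  },
  "AdditionalArgs": [
    {
      "Name": "EndpointPolicy",
      "Value": {
        "Type": "OutBoundNAT",
        "ExceptionList": [
          "11.0.0.0/8",
          "10.10.0.0/16",
          "10.127.132.128/25"
        ]
      }
    }
  ]
}
```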
@@ -229,7 +229,7 @@ Use your preferred method to start Kubernetes cluster on Linux. Please note that

## Support for kubeadm join

If your cluster has been created by [kubeadm](https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/),
and your networking is set up correctly using one of the methods listed above (networking is set up outside of kubeadm), you can use kubeadm to add a Windows node to your cluster. At a high level, you first have to initialize the master with kubeadm (Linux), then set up the CNI based networking (outside of kubeadm), and finally start joining Windows or Linux worker nodes to the cluster. For additional documentation and reference material, visit the kubeadm link above.

The kubeadm binary can be found at [Kubernetes Releases](https://github.com/kubernetes/kubernetes/releases), inside the node binaries archive. Adding a Windows node is not any different than adding a Linux node:
@@ -290,9 +290,9 @@ Secrets and ConfigMaps can be utilized in Windows Server Containers, but must be
data:
  username: YWRtaW4=
  password: MWYyZDFlMmU2N2Rm

---

apiVersion: v1
kind: Pod
metadata:
@@ -315,7 +315,7 @@ Secrets and ConfigMaps can be utilized in Windows Server Containers, but must be
  nodeSelector:
    beta.kubernetes.io/os: windows
```

Windows pod with configMap values mapped to environment variables

```yaml
@@ -351,14 +351,14 @@ spec:
  nodeSelector:
    beta.kubernetes.io/os: windows
```

### Volumes
Supported volume mounts include local, emptyDir, and hostPath. One thing to remember is that paths must either be escaped or use forward slashes, for example `mountPath: "C:\\etc\\foo"` or `mountPath: "C:/etc/foo"`.

Persistent Volume Claims are supported for supported volume types.

**Examples:**

Windows pod with a hostPath volume
```yaml
apiVersion: v1
@@ -380,9 +380,9 @@ Persistent Volume Claims are supported for supported volume types.
    hostPath:
      path: "C:\\etc\\foo"
```
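Mounting a Persistent Volume Claim looks the same as any other volume. A minimal sketch, assuming a claim named `my-windows-pvc` has already been created and bound:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pvc-pod
spec:
  containers:
  - name: pvc-container
    image: microsoft/iis
    volumeMounts:
    - name: data
      # Paths may use forward slashes or escaped backslashes
      mountPath: "C:/data"
  nodeSelector:
    beta.kubernetes.io/os: windows
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: my-windows-pvc
```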

Windows pod with multiple emptyDir volumes

```yaml
apiVersion: v1
kind: Pod
@@ -434,14 +434,65 @@ spec:

Windows stats use a hybrid model: pod- and container-level stats come from the CRI (via dockershim), while node-level stats come from the "winstats" package, which exports cAdvisor-like data structures using Windows-specific perf counters from the node.

### Container Resources

As of v1.10, container resources (CPU and memory) can be set for Windows containers.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: iis
spec:
  replicas: 3
  selector:
    matchLabels:
      app: iis
  template:
    metadata:
      labels:
        app: iis
    spec:
      containers:
      - name: iis
        image: microsoft/iis
        resources:
          limits:
            memory: "128Mi"
            cpu: 2
        ports:
        - containerPort: 80
```
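If you also want the scheduler to account for these resources, requests can be specified alongside limits in the same `resources` block. A minimal sketch of the container portion of the Deployment above, with illustrative values:

```yaml
      containers:
      - name: iis
        image: microsoft/iis
        resources:
          requests:
            memory: "128Mi"
            cpu: 1
          limits:
            memory: "256Mi"
            cpu: 2
        ports:
        - containerPort: 80
```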

### Hyper-V Containers

Hyper-V containers are supported as an experimental feature in v1.10. To create a Hyper-V container, the kubelet should be started with the feature gate `HyperVContainer=true`, and the Pod should include the annotation `experimental.windows.kubernetes.io/isolation-type=hyperv`.
Contributor review comment: @alinbalutoiu can we add support for hyper-v isolation in your Windows Services work as well?


```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: iis
spec:
  replicas: 3
  selector:
    matchLabels:
      app: iis
  template:
    metadata:
      labels:
        app: iis
      annotations:
        experimental.windows.kubernetes.io/isolation-type: hyperv
    spec:
      containers:
      - name: iis
        image: microsoft/iis
        ports:
        - containerPort: 80
```

## Known Limitations for Windows Server Containers with v1.9
Some of these limitations will be addressed by the community in future releases of Kubernetes
- Shared network namespace (compartment) with multiple Windows Server containers (shared kernel) per pod is only supported on Windows Server 1709 or later
- Using Secrets and ConfigMaps as volume mounts is not supported
- Mount propagation is not supported on Windows
- The StatefulSet functionality for stateful applications is not supported
- Horizontal Pod Autoscaling for Windows Server Container pods has not been verified to work end-to-end
- Hyper-V isolated containers are not supported.
- Windows container OS must match the Host OS. If it does not, the pod will get stuck in a crash loop.
- Under the networking models of L3 or Host GW, Kubernetes Services are inaccessible to Windows nodes due to a Windows issue. This is not an issue if using OVN/OVS for networking.
- Windows kubelet.exe may fail to start when running on Windows Server under VMware Fusion [issue 57110](https://github.com/kubernetes/kubernetes/pull/57124)