- Quick Start
- Next Steps
This guide will step through everything needed to build your first Kubernetes cluster and deploy a Windows web server on it. The steps include:
- Getting the right tools
- Completing an ACS-Engine apimodel which describes what you want to deploy
- Running ACS-Engine to generate Azure Resource Manager (ARM) templates
- Deploying your first Kubernetes cluster with Windows Server nodes
- Managing the cluster from your Windows machine
- Deploying your first app on the cluster
All of these steps can be done from any OS platform, so some sections are split out by Windows, Mac or Linux to provide the most relevant samples and scripts. If you have a Windows machine but want to use the Linux tools - no problem! Set up the Windows Subsystem for Linux and you can follow the Linux instructions on this page.
Note: Windows support for Kubernetes is still in beta and under active development. If you run into problems, please be sure to check the Troubleshooting page and active Windows issues in this repo, then help us by filing new issues for things that aren't already covered.
This guide needs a few important tools, which are available on Windows, Mac, and Linux:
- ACS-Engine - used to generate the Azure Resource Manager (ARM) template to automatically deploy a Kubernetes cluster
- Azure CLI - used to log into Azure, create resource groups, and deploy a Kubernetes cluster from a template
- Kubectl - "Kube control" tool used to manage Kubernetes clusters
- SSH - An SSH public key is needed when you deploy a cluster. It's used to connect to the Linux VMs running the cluster if you need to do more management or troubleshooting later.
Click the download link, and choose "Run". Click through the setup steps as needed.
Once it's installed, make sure you can connect to Azure with it. Open a new PowerShell window, then run az login. It will have you log in to Azure in your web browser, then return to the command line and show "You have logged in. Now let us find all the subscriptions to which you have access..." along with the list of subscriptions.
If you want other versions, check out the official instructions. For more help, check out the Azure CLI getting started page.
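If your account has more than one subscription, it can help to confirm which one is active before creating anything. This is an optional check using standard Azure CLI commands; substitute your own subscription name in the second command.
# Show the subscription that deployments will go to
az account show --output table
# Switch subscriptions if needed (accepts a name or ID)
az account set --subscription "My Subscription Name"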
Windows support is evolving rapidly, so be sure to use the latest ACS-Engine version (v0.20 or later).
1. Browse to the ACS-Engine releases page on GitHub.
2. Find the latest version, and download the file ending in -windows-amd64.zip.
3. Extract the acs-engine...-windows-amd64.zip file to a working folder such as c:\tools.
4. Check that it runs with .\acs-engine.exe version
PS C:\Users\patrick\acs-engine> .\acs-engine.exe version
Version: v0.20.6
GitCommit: 293adfda
GitTreeState: clean
5. Add the folder you created in step 3 to your path.
$ENV:Path += ';c:\tools'
# If you want to save the setting permanently, then run
$oldPath = [Environment]::GetEnvironmentVariable('Path', [EnvironmentVariableTarget]::User)
[Environment]::SetEnvironmentVariable('Path', $oldPath + ';c:\tools', [EnvironmentVariableTarget]::User)
The latest release of Kubernetes Control (kubectl) is available on the Kubernetes release page. Look for kubernetes-client-windows-amd64.tar.gz
and download it.
Windows 10 version 1803 already includes tar, so extract the archive and move kubectl.exe to the same folder (such as c:\tools) where you put acs-engine.exe. If you don't already have tar, then busybox-w32 is a good alternative. Download busybox.exe, then copy it to c:\tools\tar.exe. It must be named tar.exe for the next step to work.
tar xvzf C:\Users\patrick\Downloads\kubernetes-client-windows-amd64.tar.gz
Move-Item .\kubernetes\client\bin\kubectl.exe c:\tools
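As a quick sanity check, confirm that kubectl runs (this assumes c:\tools is already in your PATH as set up earlier):
# Print only the client version; no cluster is configured yet
kubectl version --client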
Windows 10 version 1803 comes with the Secure Shell (SSH) client as an optional feature installed at C:\Windows\system32\openssh. If you have ssh.exe and ssh-keygen.exe there, skip forward to Generate SSH key (Windows).
- Download the latest OpenSSH-Win64.zip file from Win32-OpenSSH releases
- Extract it to the same c:\tools folder, or another folder in your path
First, check if you already have an SSH key generated at ~\.ssh\id_rsa.pub
dir ~\.ssh\id_rsa.pub
dir : Cannot find path 'C:\Users\patrick\.ssh\id_rsa.pub' because it does not exist.
If the file already exists, then you can skip forward to Create a Resource Group and Service Principal.
If it does not exist, then run ssh-keygen.exe. Use the default file, and enter a passphrase if you wish to protect it. Be sure not to use an SSH key with a blank passphrase in production.
PS C:\Users\patrick\acs-engine> ssh-keygen.exe
Generating public/private rsa key pair.
Enter file in which to save the key (C:\Users\patrick/.ssh/id_rsa):
Created directory 'C:\Users\patrick/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in C:\Users\patrick/.ssh/id_rsa.
Your public key has been saved in C:\Users\patrick/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:... patrick@plang-g1
The key's randomart image is:
+---[RSA 2048]----+
...
+----[SHA256]-----+
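If you want to see the public key that will be passed to the cluster later, you can print it; this assumes the default path accepted above.
# Display the public key (the private key in id_rsa stays on your machine)
Get-Content ~\.ssh\id_rsa.pub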
Most of the needed tools are available with Homebrew. Use it or another package manager to install these:
- jq - helpful JSON processor
- azure-cli - for the az Azure command line tool
- kubernetes-cli - for the kubectl "Kube Control" management tool
Once you have those installed, make sure you can log into Azure. Open a new Terminal window, then run az login. It will have you log in to Azure in your web browser, then return to the command line and show "You have logged in. Now let us find all the subscriptions to which you have access..." along with the list of subscriptions.
Windows support is evolving rapidly, so be sure to use the latest ACS-Engine version (v0.20 or later).
1. Browse to the ACS-Engine releases page on GitHub.
2. Find the latest version, and download the file ending in -darwin-amd64.zip.
3. Extract the acs-engine...-darwin-amd64.zip file to a folder in your path such as /usr/local/bin.
4. Check that it runs with acs-engine version
$ acs-engine version
Version: v0.20.6
GitCommit: 293adfda
GitTreeState: clean
SSH is preinstalled, but you may need to generate an SSH key.
Open up Terminal, and make sure you have an SSH public key
$ ls ~/.ssh/id_rsa.pub
/home/patrick/.ssh/id_rsa.pub
If the file doesn't exist, run ssh-keygen
to create one.
These tools are included in most distributions. Use your typical package manager to make sure they're installed:
- jq - helpful JSON processor
- curl - to download files
- openssh or another ssh client
- tar
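For example, on a Debian or Ubuntu based distribution, something like the following should install everything in the list above; package names vary slightly on other distributions.
# jq for JSON processing, curl for downloads, the OpenSSH client, and tar
sudo apt-get update
sudo apt-get install -y jq curl openssh-client tar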
Packages for the az CLI are available for most distributions. Please follow the right link for your package manager: apt, yum, or zypper.
Now, make sure you can log into Azure. Open a new Terminal window, then run az login. It will have you log in to Azure in your web browser, then return to the command line and show "You have logged in. Now let us find all the subscriptions to which you have access..." along with the list of subscriptions.
Windows support is evolving rapidly, so be sure to use the latest ACS-Engine version (v0.20 or later).
1. Browse to the ACS-Engine releases page on GitHub.
2. Find the latest version, and download the file ending in -linux-amd64.zip.
3. Extract the acs-engine...-linux-amd64.zip file to a folder in your path such as /usr/local/bin.
4. Check that it runs with acs-engine version
$ acs-engine version
Version: v0.20.6
GitCommit: 293adfda
GitTreeState: clean
The latest release of Kubernetes Control (kubectl) is available on the Kubernetes release page. Look for kubernetes-client-linux-....tar.gz
and copy the link to it.
Download and extract it with curl & tar:
curl -L https://dl.k8s.io/v1.11.0/kubernetes-client-linux-amd64.tar.gz | tar xvzf -
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 161 100 161 0 0 304 0 --:--:-- --:--:-- --:--:-- 304
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0kubernetes/
kubernetes/client/
kubernetes/client/bin/
kubernetes/client/bin/kubectl
100 13.2M 100 13.2M 0 0 5608k 0 0:00:02 0:00:02 --:--:-- 8034k
Then copy it to /usr/local/bin
or another directory in your PATH
sudo cp kubernetes/client/bin/kubectl /usr/local/bin/
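You can confirm the binary is on your PATH and runs with a client-only version check:
# Print only the client version; no cluster is configured yet
kubectl version --client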
From a terminal, make sure you have an SSH public key
$ ls ~/.ssh/id_rsa.pub
/home/patrick/.ssh/id_rsa.pub
If the file doesn't exist, run ssh-keygen
to create one.
Now that we have the Azure CLI configured and an SSH key generated, it's time to create a resource group to hold the deployment.
ACS-Engine and Kubernetes also need access to deploy resources inside that resource group to build the cluster, as well as configure more resources such as Azure Load Balancers once the cluster is running. This is done using an Azure Service Principal. It's safest to create one with access just to the resource group so that once your deployment is deleted, the service principal can't be used to make other changes in your subscription.
az group create --location <location> --name <name> will create a group for you. Be sure to use a unique name for each cluster. If you need a list of available locations, run az account list-locations -o table.
PS C:\Users\patrick\acs-engine> az group create --location westus2 --name k8s-win1
{
  "id": "/subscriptions/df392461-0000-1111-2222-cd3aa2d911a6/resourceGroups/k8s-win1",
  "location": "westus2",
  "managedBy": null,
  "name": "k8s-win1",
  "properties": {
    "provisioningState": "Succeeded"
  },
  "tags": null
}
Now that the group is created, create a service principal with Contributor access for that group only
# Get the group id
$groupId = (az group show --resource-group <group name> --query id).Replace("""","")
# Create the service principal
$sp = az ad sp create-for-rbac --role="Contributor" --scopes=$groupId | ConvertFrom-JSON
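The $sp object now holds the values the apimodel will need later (appId and password). If you want to confirm it parsed correctly, printing the appId is enough; avoid echoing the password into logs or source control.
# The apimodel uses appId as the clientId and password as the secret
$sp.appId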
az group create --location <location> --name <name> will create a group for you. Be sure to use a unique name for each cluster. If you need a list of available locations, run az account list-locations -o table.
export RESOURCEGROUP=k8s-win1
export LOCATION=westus2
az group create --location $LOCATION --name $RESOURCEGROUP
Now that the group is created, create a service principal with Contributor access for that group only
# Get the group id
export RESOURCEGROUPID=$(az group show --resource-group $RESOURCEGROUP --query id | sed "s/\"//g")
# Create the service principal
export SERVICEPRINCIPAL=$(az ad sp create-for-rbac --role="Contributor" --scopes=$RESOURCEGROUPID)
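As with the PowerShell version, the appId and password fields from that JSON are what the apimodel needs later. A quick jq check that avoids printing the secret might look like:
# Confirm the service principal JSON parsed and contains an appId
echo $SERVICEPRINCIPAL | jq .appId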
Multiple samples are available in this repo under examples/windows. This guide will use the windows/kubernetes.json sample to deploy 1 Linux VM to run Kubernetes services, and 2 Windows nodes to run your Windows containers.
After downloading that file, you will need to
- Set windowsProfile.adminUsername and adminPassword. Be sure to check the Azure Windows VM username and password requirements first.
- Set a unique name for masterProfile.dnsPrefix. This will be the first part of the domain name you'll use to manage the Kubernetes cluster later
- Set the ssh public key that will be used to log into the Linux VM
- Set the Azure service principal for the deployments
You can use the same PowerShell window from earlier to run this next script, which does all of that for you. Be sure to replace $dnsPrefix with something unique and descriptive, and set $windowsUser and $windowsPassword to values that meet the requirements.
# Be sure to change these next 3 lines for your deployment
$dnsPrefix = "wink8s1"
$windowsUser = "winuser"
$windowsPassword = "Cr4shOverride!"
# Download template
Invoke-WebRequest -UseBasicParsing https://raw.githubusercontent.com/Azure/acs-engine/master/examples/windows/kubernetes.json -OutFile kubernetes-windows.json
# Load template
$inJson = Get-Content .\kubernetes-windows.json | ConvertFrom-Json
# Set dnsPrefix
$inJson.properties.masterProfile.dnsPrefix = $dnsPrefix
# Set Windows username & password
$inJson.properties.windowsProfile.adminPassword = $windowsPassword
$inJson.properties.windowsProfile.adminUsername = $windowsUser
# Copy in your SSH public key from `~/.ssh/id_rsa.pub` to linuxProfile.ssh.publicKeys.keyData
$inJson.properties.linuxProfile.ssh.publicKeys[0].keyData = [string](Get-Content "~/.ssh/id_rsa.pub")
# Set servicePrincipalProfile
$inJson.properties.servicePrincipalProfile.clientId = $sp.appId
$inJson.properties.servicePrincipalProfile.secret = $sp.password
# Save file
$inJson | ConvertTo-Json -Depth 5 | Out-File -Encoding ascii -FilePath "kubernetes-windows-complete.json"
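If you'd like to confirm the values landed in the completed apimodel before generating templates, reading a couple of fields back is a cheap check. This sketch just loads the file written above into a throwaway variable.
# Spot-check the completed apimodel ($outJson is only used for this check)
$outJson = Get-Content .\kubernetes-windows-complete.json | ConvertFrom-Json
$outJson.properties.masterProfile.dnsPrefix
$outJson.properties.windowsProfile.adminUsername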
Using the same terminal as before, you can use this script to download the template and fill it out. Be sure to set DNSPREFIX, WINDOWSUSER, and WINDOWSPASSWORD to meet the requirements.
export DNSPREFIX="wink8s1"
export WINDOWSUSER="winuser"
export WINDOWSPASSWORD="Cr4shOverride!"
curl -L https://raw.githubusercontent.com/Azure/acs-engine/master/examples/windows/kubernetes.json -o kubernetes.json
cat kubernetes.json | \
jq ".properties.masterProfile.dnsPrefix = \"$DNSPREFIX\"" | \
jq ".properties.linuxProfile.ssh.publicKeys[0].keyData = \"`cat ~/.ssh/id_rsa.pub`\"" | \
jq ".properties.servicePrincipalProfile.clientId = `echo $SERVICEPRINCIPAL | jq .appId`" | \
jq ".properties.servicePrincipalProfile.secret = `echo $SERVICEPRINCIPAL | jq .password`" | \
jq ".properties.windowsProfile.adminPassword = \"$WINDOWSPASSWORD\"" | \
jq ".properties.windowsProfile.adminUsername = \"$WINDOWSUSER\"" > kubernetes-windows-complete.json
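A quick spot-check of the completed file before generating templates can save a failed deployment; these jq queries just read back fields set above.
# Spot-check the completed apimodel
jq .properties.masterProfile.dnsPrefix kubernetes-windows-complete.json
jq .properties.windowsProfile.adminUsername kubernetes-windows-complete.json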
Now that the ACS-Engine cluster definition is complete, generate the Azure templates with acs-engine generate kubernetes-windows-complete.json
acs-engine.exe generate kubernetes-windows-complete.json
INFO[0000] Generating assets into _output/plangk8swin1...
This will generate a _output
directory with a subdirectory named after the dnsPrefix you set above. In this example, it's _output/plangk8swin1
.
It will also create a working Kubernetes client config file in the _output/<dnsprefix>/kubeconfig folder. We'll come back to that in a bit.
Get the paths to azuredeploy.json
and azuredeploy.parameters.json
from the last step, and pass them into az group deployment create --name <name for deployment> --resource-group <resource group name> --template-file <...azuredeploy.json> --parameters <...azuredeploy.parameters.json>
az group deployment create --name plangk8swin1-deploy --resource-group k8s-win1 --template-file "./_output/plangk8swin1/azuredeploy.json" --parameters "./_output/plangk8swin1/azuredeploy.parameters.json"
After several minutes, it will return the list of resources created in JSON. Look for masterFQDN
.
"masterFQDN": {
"type": "String",
"value": "plangk8swin1.westus2.cloudapp.azure.com"
},
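If you lose that output, the same value can be read back from the deployment's outputs at any time. This assumes the deployment and resource group names used above.
# Read masterFQDN back from the deployment outputs
az group deployment show --name plangk8swin1-deploy --resource-group k8s-win1 --query properties.outputs.masterFQDN.value -o tsv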
As mentioned earlier, acs-engine generate
also creates Kubernetes configuration files under _output/<dnsprefix>/kubeconfig
. There will be one per possible region, so find the one matching the region you deployed in.
In the example above with dnsprefix
=plangk8swin1
and the westus2
region, the filename would be _output/plangk8swin1/kubeconfig/kubeconfig.westus2.json
.
Set $ENV:KUBECONFIG
to the full path to that file.
$ENV:KUBECONFIG=(Get-Item _output\plangk8swin1\kubeconfig\kubeconfig.westus2.json).FullName
export KUBECONFIG=$(pwd)/_output/$DNSPREFIX/kubeconfig/kubeconfig.westus2.json
Once you have KUBECONFIG
set, you can verify the cluster is up with kubectl get node -o wide
.
kubectl get node -o wide
NAME                    STATUS    ROLES     AGE       VERSION   EXTERNAL-IP   OS-IMAGE                    KERNEL-VERSION      CONTAINER-RUNTIME
40336k8s9000            Ready     <none>    21m       v1.9.10   <none>        Windows Server Datacenter   10.0.17134.112      docker://17.6.2
40336k8s9001            Ready     <none>    20m       v1.9.10   <none>        Windows Server Datacenter   10.0.17134.112      docker://17.6.2
k8s-master-40336153-0   Ready     master    22m       v1.9.10   <none>        Ubuntu 16.04.5 LTS          4.15.0-1018-azure   docker://1.13.1
If you would like to manage the cluster over SSH, you can connect to the Linux master directly using the FQDN of the cluster:
ssh azureuser@plangk8swin1.westus2.cloudapp.azure.com
Kubernetes deployments are typically written in YAML files. This one will create a pod with a container running the IIS web server, and tell Kubernetes to expose it as a service with the Azure Load Balancer on an external IP.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: iis-1803
  labels:
    app: iis-1803
spec:
  replicas: 1
  template:
    metadata:
      name: iis-1803
      labels:
        app: iis-1803
    spec:
      containers:
      - name: iis
        image: microsoft/iis:windowsservercore-1803
        ports:
        - containerPort: 80
      nodeSelector:
        "beta.kubernetes.io/os": windows
  selector:
    matchLabels:
      app: iis-1803
---
apiVersion: v1
kind: Service
metadata:
  name: iis
spec:
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 80
  selector:
    app: iis-1803
Copy and paste that into a file called iis.yaml
, then run kubectl apply -f iis.yaml
. kubectl will show the deployment and service were created:
kubectl apply -f .\iis.yaml
deployment.apps/iis-1803 created
service/iis created
Now, you can check the status of the pod and service with kubectl get pod
and kubectl get service
respectively.
Initially, the pod will be in the ContainerCreating
state, and eventually go to Running
. The service will show <pending>
under EXTERNAL-IP
. Here's what the first progress will look like:
kubectl get pod
NAME READY STATUS RESTARTS AGE
iis-1803-6c49777598-h45cs 0/1 ContainerCreating 0 1m
kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
iis LoadBalancer 10.0.9.47 <pending> 80:31240/TCP 1m
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 46m
Since this is the first deployment, it will probably take several minutes for the Windows node to download and run the container. Later deployments will be faster because the large microsoft/windowsservercore
container will already be on disk.
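While you wait, you can stream status changes instead of re-running the command; kubectl's watch flag keeps printing updates until you press Ctrl+C.
# Watch pod status update as the image downloads and the container starts
kubectl get pod -w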
The service will eventually show an EXTERNAL-IP as well:
kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
iis LoadBalancer 10.0.9.47 13.66.203.178 80:31240/TCP 1m
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 46m
Once the pod is in Running
state, get the IP from kubectl get service
then visit http://<EXTERNAL-IP>
to test your web server.
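You can also test it from the command line, for example with curl (on Windows PowerShell, use curl.exe to bypass the Invoke-WebRequest alias). Substitute the EXTERNAL-IP reported by kubectl get service.
# A HEAD request should return an HTTP 200 from IIS
curl -I http://<EXTERNAL-IP>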
Once your Kubernetes cluster has been created, you will have a resource group containing:
- 1 master accessible by SSH on port 22 or kubectl on port 443
- A set of Windows and/or Linux nodes. The Windows nodes can be accessed through an RDP SSH tunnel via the master node by following the steps in Connecting to Windows Nodes.
These parts were all automatically created using the Azure Resource Manager template created by ACS-Engine:
- Master Components - The master runs the Kubernetes scheduler, API server, and controller manager. Port 443 is exposed for remote management with the kubectl CLI.
- Linux Nodes - The Kubernetes nodes run in an availability set. Azure load balancers are dynamically added to the cluster depending on exposed services.
- Windows Nodes - The Kubernetes Windows nodes run in an availability set.
- Common Components - All VMs run a kubelet, Docker, and a proxy.
- Networking - All VMs are assigned an IP address in the 10.240.0.0/16 network and are fully accessible to each other.
For more resources on Windows and ACS-Engine, continue reading:
- Customizing Windows Deployments
- More Examples
- Troubleshooting
- Using Kubernetes ingress for more flexibility in HTTP and HTTPS routing
If you'd like to learn more about Kubernetes in general, check out these guides:
- Kubernetes Bootcamp - shows you how to deploy, scale, update, and debug containerized applications.
- Kubernetes Userguide - provides information on running programs in an existing Kubernetes cluster.
- Kubernetes Examples - provides a number of examples on how to run real applications with Kubernetes.