Kubernetes - First Steps with the Kubernetes CLI

Before starting this section, make sure you have created a cluster on AWS with kops, as explained in the Create a Kubernetes cluster using kops section.

Kubernetes basic commands

Now that we have a cluster up and running we can start issuing some basic commands and deploy some simple resources.

In this part we will familiarize ourselves with the kubectl CLI tool and basic Kubernetes commands. We will first deploy a basic NGiNX pod and execute some commands to help you gain comfort with the Kubernetes environment from an end-user perspective. This helps get developers up and running, taking advantage of the Kubernetes application deployment capabilities, without having to worry about infrastructure-related complexities.
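
If you would like a quick sanity check that kubectl is talking to the cluster you just created (this assumes your kubeconfig already points at the kops cluster), these two commands confirm the current context and the API server endpoint:

$ kubectl config current-context
$ kubectl cluster-info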

Display nodes

This command will show all the nodes available in your Kubernetes cluster:

$ kubectl get nodes

It will show output similar to:

NAME                                           STATUS    ROLES     AGE      VERSION
ip-172-20-105-158.us-east-2.compute.internal   Ready     node      2d       v1.7.4
ip-172-20-124-26.us-east-2.compute.internal    Ready     master    2d       v1.7.4
ip-172-20-127-251.us-east-2.compute.internal   Ready     node      2d       v1.7.4
ip-172-20-52-35.us-east-2.compute.internal     Ready     master    2d       v1.7.4
ip-172-20-63-150.us-east-2.compute.internal    Ready     node      2d       v1.7.4
ip-172-20-71-14.us-east-2.compute.internal     Ready     node      2d       v1.7.4
ip-172-20-87-91.us-east-2.compute.internal     Ready     node      2d       v1.7.4
ip-172-20-94-153.us-east-2.compute.internal    Ready     master    2d       v1.7.4
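
The exact names, ages, and versions will differ in your cluster. If you want more per-node detail, such as IP addresses and OS information, the wide output format is an optional variation (the exact columns depend on your kubectl version):

$ kubectl get nodes -o wide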

Create your first pod

This command creates a deployment named nginx, which starts a pod running an nginx container in your cluster:

$ kubectl run nginx --image=nginx
deployment "nginx" created

Get the list of deployments:

$ kubectl get deployments
NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx     1         1         1            0           41s
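
The AVAILABLE count will move from 0 to 1 once the container image has been pulled and the pod has started. As an optional alternative to re-running the command above, the rollout status command blocks until the deployment is ready:

$ kubectl rollout status deployment/nginx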

Get the list of running pods:

$ kubectl get pods
NAME                     READY     STATUS    RESTARTS   AGE
nginx-4217019353-pmkzb   1/1       Running   0          1m

Get additional details for the pod by using the <pod-name> from the above output:

$ kubectl describe pod/nginx-4217019353-pmkzb
Name:           nginx-4217019353-pmkzb
Namespace:      default
Node:           ip-172-20-87-91.us-east-2.compute.internal/172.20.87.91
Start Time:     Fri, 01 Dec 2017 16:36:48 -0800
Labels:         pod-template-hash=4217019353
                run=nginx
Annotations:    kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"nginx-4217019353","uid":"e161abe9-d6f8-11e7-af8f-06c4465216f2","...
                kubernetes.io/limit-ranger=LimitRanger plugin set: cpu request for container nginx
Status:         Running
IP:             100.96.7.19
Created By:     ReplicaSet/nginx-4217019353
Controlled By:  ReplicaSet/nginx-4217019353
Containers:
  nginx:
    Container ID:   docker://2def6a6dc1594c337748abf6160ff81fab9fa6d734aca02b5f1afc4d395edc6b
    Image:          nginx
    Image ID:       docker-pullable://nginx@sha256:b81f317384d7388708a498555c28a7cce778a8f291d90021208b3eba3fe74887
    Port:           <none>
    State:          Running
      Started:      Fri, 01 Dec 2017 16:36:52 -0800
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        100m
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-cqht0 (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          True
  PodScheduled   True
Volumes:
  default-token-cqht0:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-cqht0
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.alpha.kubernetes.io/notReady:NoExecute for 300s
                 node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason                 Age   From                                                 Message
  ----    ------                 ----  ----                                                 -------
  Normal  Scheduled              46s   default-scheduler                                    Successfully assigned nginx-4217019353-pmkzb to ip-172-20-87-91.us-east-2.compute.internal
  Normal  SuccessfulMountVolume  46s   kubelet, ip-172-20-87-91.us-east-2.compute.internal  MountVolume.SetUp succeeded for volume "default-token-cqht0"
  Normal  Pulling                46s   kubelet, ip-172-20-87-91.us-east-2.compute.internal  pulling image "nginx"
  Normal  Pulled                 42s   kubelet, ip-172-20-87-91.us-east-2.compute.internal  Successfully pulled image "nginx"
  Normal  Created                42s   kubelet, ip-172-20-87-91.us-east-2.compute.internal  Created container
  Normal  Started                42s   kubelet, ip-172-20-87-91.us-east-2.compute.internal  Started container
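
describe gives a human-readable summary. If you would rather inspect the raw object, for example to see the full container spec, any resource can also be printed as YAML (substitute your own pod name):

$ kubectl get pod nginx-4217019353-pmkzb -o yaml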

By default, pods are created in the default namespace. In addition, the kube-system namespace is reserved for Kubernetes system pods. A list of all the pods in the kube-system namespace can be displayed as shown:

$ kubectl get pods --namespace kube-system
NAME                                                                  READY     STATUS    RESTARTS   AGE
dns-controller-3497129722-4pxd6                                       1/1       Running   0          28d
etcd-server-events-ip-172-20-124-26.us-east-2.compute.internal        1/1       Running   0          28d
etcd-server-events-ip-172-20-52-35.us-east-2.compute.internal         1/1       Running   0          28d
etcd-server-events-ip-172-20-94-153.us-east-2.compute.internal        1/1       Running   0          28d
etcd-server-ip-172-20-124-26.us-east-2.compute.internal               1/1       Running   0          28d
etcd-server-ip-172-20-52-35.us-east-2.compute.internal                1/1       Running   0          28d
etcd-server-ip-172-20-94-153.us-east-2.compute.internal               1/1       Running   0          28d
kube-apiserver-ip-172-20-124-26.us-east-2.compute.internal            1/1       Running   0          28d
kube-apiserver-ip-172-20-52-35.us-east-2.compute.internal             1/1       Running   0          28d
kube-apiserver-ip-172-20-94-153.us-east-2.compute.internal            1/1       Running   0          28d
kube-controller-manager-ip-172-20-124-26.us-east-2.compute.internal   1/1       Running   0          28d
kube-controller-manager-ip-172-20-52-35.us-east-2.compute.internal    1/1       Running   0          28d
kube-controller-manager-ip-172-20-94-153.us-east-2.compute.internal   1/1       Running   0          28d
kube-dns-1311260920-jgl0m                                             3/3       Running   0          28d
kube-dns-1311260920-tvpmp                                             3/3       Running   0          28d
kube-dns-autoscaler-1818915203-5kxrb                                  1/1       Running   0          28d
kube-proxy-ip-172-20-105-158.us-east-2.compute.internal               1/1       Running   0          28d
kube-proxy-ip-172-20-124-26.us-east-2.compute.internal                1/1       Running   0          28d
kube-proxy-ip-172-20-127-251.us-east-2.compute.internal               1/1       Running   0          28d
kube-proxy-ip-172-20-52-35.us-east-2.compute.internal                 1/1       Running   0          28d
kube-proxy-ip-172-20-63-150.us-east-2.compute.internal                1/1       Running   0          28d
kube-proxy-ip-172-20-71-14.us-east-2.compute.internal                 1/1       Running   0          28d
kube-proxy-ip-172-20-87-91.us-east-2.compute.internal                 1/1       Running   0          28d
kube-proxy-ip-172-20-94-153.us-east-2.compute.internal                1/1       Running   0          28d
kube-scheduler-ip-172-20-124-26.us-east-2.compute.internal            1/1       Running   0          28d
kube-scheduler-ip-172-20-52-35.us-east-2.compute.internal             1/1       Running   0          28d
kube-scheduler-ip-172-20-94-153.us-east-2.compute.internal            1/1       Running   0          28d
tiller-deploy-1114875906-k2pj2                                        1/1       Running   0          28d

Again, the exact output may vary, but your results should look similar to this.
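
If you would like to see every pod across all namespaces in a single listing, rather than querying one namespace at a time, you can use:

$ kubectl get pods --all-namespaces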

Get logs from the pod

Logs from the pod can be obtained as shown below (a freshly started nginx pod has no logs yet; check again after you have accessed the service):

$ kubectl logs <pod-name>
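
To stream the logs continuously instead of taking a one-time snapshot, add the follow flag (press Ctrl+C to stop):

$ kubectl logs -f <pod-name>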

Execute a shell on the running pod

This command will open a TTY to a shell in your pod:

$ kubectl get pods
$ kubectl exec -it <pod-name> /bin/bash

This opens a bash shell and allows you to look around the filesystem of the container.
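
Type exit (or press Ctrl+D) to leave the shell. If you only want to run a single command rather than an interactive shell, exec also accepts a one-off command; for example, listing the default document root of the official nginx image:

$ kubectl exec <pod-name> -- ls /usr/share/nginx/html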

Expose the deployment as a service (via NodePort)

By default, all Kubernetes resources are only accessible within the cluster. This command will expose the pod via a dynamically chosen port on the node itself. This will eventually allow the NGiNX deployment to be accessible from the Internet:

$ kubectl expose deployment nginx --type=NodePort --port=80 --target-port=80 --name=web
service "web" exposed

Exposing the deployment creates what Kubernetes calls a Service. You can see the newly published service:

$ kubectl get service
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP        PORT(S)        AGE
kubernetes   ClusterIP      100.64.0.1      <none>             443/TCP        2d
web          NodePort       100.67.45.110   <none>             80:32400/TCP   32s

We will learn more about Services and Deployments later in the workshop.
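
If you are curious about the details of the Service you just created, including the node port that was assigned and the pod endpoints behind it, you can describe it just like we described the pod earlier:

$ kubectl describe service web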

View the NGiNX deployment using the NodePort

First, we find the hostname of the node on which the nginx Pod is running:

$ kubectl get pods -l="run=nginx"
NAME                     READY     STATUS    RESTARTS   AGE
nginx-4217019353-pmkzb   1/1       Running   0          16m
$ kubectl get pod nginx-4217019353-pmkzb -o=jsonpath={.spec.nodeName}
ip-172-20-87-91.us-east-2.compute.internal

The -o flag allows us to choose a different output format, and choosing the jsonpath output format allows us to filter the resultant JSON down to the exact value we need.
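
As an optional shortcut, the same jsonpath technique can be combined with the label selector so that you never have to copy the pod name by hand (this assumes a single nginx pod is running, as in this exercise):

$ kubectl get pods -l run=nginx -o jsonpath='{.items[0].metadata.name}'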

Next we need the dynamic port the service opened for us on the node running the NGiNX pod.

$ kubectl get service web -o jsonpath={.spec.ports..nodePort}
32400

Next, we need to authorize the port on the node itself to receive traffic. We do this by finding the node's security group, then using the AWS CLI to authorize ingress on that port. Replace the Values filter value with the nodeName retrieved earlier.

$ aws ec2 describe-instances --filters "Name=private-dns-name,Values=ip-172-20-87-91.us-east-2.compute.internal" --query 'Reservations[*].Instances[*].SecurityGroups[*].GroupId'
 [
     [
         "sg-c0285fa8"
     ]
 ]

Next, we add an ingress rule to the security group opening up the port on the node to anyone (0.0.0.0/0). This is purely for demonstration purposes in this workshop and is not a best practice. Most services will be exposed behind a load balancer, which we cover in the next section. For the command below, replace the --group-id parameter with the GroupId you just retrieved, and the --port parameter value with the nodePort retrieved above.

$ aws ec2 authorize-security-group-ingress --group-id sg-c0285fa8 --protocol tcp --port 32400 --cidr 0.0.0.0/0
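
If you want to confirm the rule was added (using the same example group ID; substitute your own), you can list the group's ingress permissions:

$ aws ec2 describe-security-groups --group-ids sg-c0285fa8 --query 'SecurityGroups[0].IpPermissions'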

Finally, we can use the AWS CLI to retrieve the public hostname of the node itself.

$ aws ec2 describe-instances --filters "Name=private-dns-name,Values=ip-172-20-87-91.us-east-2.compute.internal" --query 'Reservations[*].Instances[*].NetworkInterfaces[*].PrivateIpAddresses[*].Association.PublicDnsName'
[
    [
        [
            "ec2-18-216-5-70.us-east-2.compute.amazonaws.com"
        ]
    ]
]

You should now be able to point a browser at, or use cURL against, <public hostname>:<node port> and see the default NGiNX homepage:

$ curl http://ec2-18-216-5-70.us-east-2.compute.amazonaws.com:32400
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>

Expose the deployment as a service (via a Load Balancer)

Delete the existing service and deployment:

$ kubectl delete service/web deployment/nginx
service "web" deleted
deployment "nginx" deleted

Create a new NGiNX deployment, this time with 2 pods.

$ kubectl run nginx --image=nginx --replicas=2
deployment "nginx" created

Next, we expose the deployment as a service, this time with a LoadBalancer instead of a NodePort:

$ kubectl expose deployment nginx --type=LoadBalancer --port=80 --target-port=80 --name=web
service "web" exposed

We can see the status of the load balancer creation by describing the service:

$ kubectl describe service web
Name:                     web
Namespace:                default
Labels:                   run=nginx
Annotations:              <none>
Selector:                 run=nginx
Type:                     LoadBalancer
IP:                       100.69.50.175
LoadBalancer Ingress:     ae70e9781d70e11e7af8f06c4465216f-893264534.us-east-2.elb.amazonaws.com
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  31997/TCP
Endpoints:                100.96.4.21:80,100.96.6.19:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:
  Type    Reason                Age   From                Message
  ----    ------                ----  ----                -------
  Normal  CreatingLoadBalancer  9s    service-controller  Creating load balancer
  Normal  CreatedLoadBalancer   8s    service-controller  Created load balancer

It will take a few minutes for the load balancer to be created and attached to the nodes.
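
If you would like to watch the service until the load balancer is provisioned rather than polling manually, the watch flag keeps the listing open and prints a new line whenever the service changes (press Ctrl+C to stop):

$ kubectl get service web --watch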

View the deployment as a service via the Load Balancer

In addition to seeing the hostname via the describe operation, we can also retrieve just the hostname of the load balancer via:

$ kubectl get service web -o jsonpath={.status.loadBalancer.ingress..hostname}
ae70e9781d70e11e7af8f06c4465216f-893264534.us-east-2.elb.amazonaws.com

Just as before, you can point a browser or cURL to that hostname (this time on port 80) and you should see the NGiNX homepage.
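
For example, the hostname lookup and the request can be combined into a single command (this assumes a bash-like shell and that DNS for the new load balancer has already propagated, which can take a few minutes):

$ curl http://$(kubectl get service web -o jsonpath={.status.loadBalancer.ingress..hostname})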

Delete resources and clean up

Delete all the Kubernetes resources created so far:

$ kubectl delete service/web deployment/nginx

Revoke access on the port you opened on the EC2 security group earlier:

$ aws ec2 revoke-security-group-ingress --group-id sg-c0285fa8 --protocol tcp --port 32400 --cidr 0.0.0.0/0
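
As a final check, you can list the remaining resources; only the built-in kubernetes service in the default namespace should be left:

$ kubectl get deployments,services,pods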