To create a Kubernetes cluster on AWS:
- Install kops 1.15.1
- For detailed usage, see this tutorial on using kops with AWS
- Install kubectl 1.17.0
- You may use kubectl aliases
- Install helm 3.0.3
- Create an SSH key pair
  - kops uses `~/.ssh/id_rsa.pub` by default
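If you do not have one yet, a minimal sketch to generate a key pair at the default path (the empty passphrase is only for convenience):

```
ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa -N ''
```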
- Create an account with these IAM permissions:
  - AmazonEC2FullAccess
  - AmazonRoute53FullAccess
  - AmazonS3FullAccess
  - IAMFullAccess
  - AmazonVPCFullAccess
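As a sketch, the same permissions can be granted with the AWS CLI (the `kops` group and user names here are illustrative):

```
aws iam create-group --group-name kops

for policy in AmazonEC2FullAccess AmazonRoute53FullAccess \
    AmazonS3FullAccess IAMFullAccess AmazonVPCFullAccess; do
  aws iam attach-group-policy \
    --policy-arn "arn:aws:iam::aws:policy/${policy}" \
    --group-name kops
done

aws iam create-user --user-name kops
aws iam add-user-to-group --user-name kops --group-name kops

# Prints the access key ID and secret to export below
aws iam create-access-key --user-name kops
```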
- Export your `AWS_ACCESS_KEY_ID`

```
export AWS_ACCESS_KEY_ID='<YOUR ACCESS KEY>'
```

- Export your `AWS_SECRET_ACCESS_KEY`

```
export AWS_SECRET_ACCESS_KEY='<YOUR SECRET KEY>'
```
- Create an S3 bucket to store the kops state
  - If you are familiar with Terraform, use this to create the bucket; otherwise, an AWS CLI sketch follows
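A minimal AWS CLI sketch (the bucket name is a placeholder and must be globally unique; the region matches the zone used below):

```
aws s3api create-bucket \
  --bucket '<YOUR BUCKET NAME>' \
  --region us-east-2 \
  --create-bucket-configuration LocationConstraint=us-east-2

# kops recommends enabling versioning on the state bucket
aws s3api put-bucket-versioning \
  --bucket '<YOUR BUCKET NAME>' \
  --versioning-configuration Status=Enabled
```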
- Edit the `env.sh` script and set the following variables

```
# Ohio, for example
export ZONES='us-east-2a'

# Use the .k8s.local suffix, telling kops to create a gossip-based cluster
export NAME='pets.k8s.local'

export KOPS_STATE_STORE='s3://YOUR BUCKET NAME'
```
- Execute the `cluster-create.sh` script

```
./cluster-create.sh
```
- Wait until the cluster is ready before creating the resources below
- To see the cluster status:

```
kubectl get nodes
```

You can go ahead when every node is `Ready`:

```
NAME                                          STATUS   ROLES    AGE   VERSION
ip-172-20-37-9.us-east-2.compute.internal     Ready    master   29m   v1.15.9
ip-172-20-46-165.us-east-2.compute.internal   Ready    node     28m   v1.15.9
ip-172-20-54-151.us-east-2.compute.internal   Ready    node     28m   v1.15.9
```
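Alternatively, kops itself can check readiness (a sketch, assuming `env.sh` has been sourced so `NAME` and `KOPS_STATE_STORE` are set):

```
source env.sh
kops validate cluster
```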
Check out these details
- Open the cluster configuration

```
source env.sh
kops edit cluster
```
- Edit the kubelet spec to add `authenticationTokenWebhook` and `authorizationMode`

```
kind: Cluster
metadata:
  name: xxx.xxx.com
spec:
  # ...
  kubelet:
    anonymousAuth: false
    authenticationTokenWebhook: true
    authorizationMode: Webhook
```
- Update the cluster

```
kops update cluster --yes
kops rolling-update cluster --yes
```
The metrics server provides data for the horizontal pod autoscaler.

```
helm install --namespace=kube-system \
  metrics-server helm/metrics-server
```
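Once the rollout has finished, a quick way to confirm that metrics are being collected (it may take a minute or two before data shows up):

```
kubectl top nodes
kubectl top pods --namespace kube-system
```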
- Execute the `create-kubedash.sh` script

```
./create-kubedash.sh
```
- After a few seconds, execute the command below to get the public DNS of the dashboard

```
kubectl describe svc \
  kubernetes-dashboard-public \
  -n kubernetes-dashboard
```
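If you only need the hostname, a jsonpath query is a handy alternative (the same pattern works for the other LoadBalancer services created below):

```
kubectl get svc kubernetes-dashboard-public \
  -n kubernetes-dashboard \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
```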
You should use this account to log in to the Kubernetes dashboard.

- Execute the `create-admin.sh` script

```
./create-admin.sh
```
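The dashboard login asks for a bearer token. Assuming `create-admin.sh` creates an `admin` service account in `kube-system` (an assumption about the script; adjust the name if it differs), the token can be printed with:

```
# Assumes a service account named "admin" in kube-system
kubectl -n kube-system describe secret \
  "$(kubectl -n kube-system get secret | grep admin | awk '{print $1}')"
```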
- Execute the command below:

```
helm install prometheus helm/prometheus
```
- The Prometheus server URL will be `http://prometheus-server.default.svc.cluster.local`

```
# Export the pod name
export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=server" -o jsonpath="{.items[0].metadata.name}")

# Start the port forwarding
kubectl --namespace default port-forward $POD_NAME 9090
```
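With the port-forward running, you can sanity-check the server from another terminal (Prometheus exposes a simple health endpoint):

```
# Expects an HTTP 200 with a short health message
curl http://localhost:9090/-/healthy
```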
- Add the bitnami helm charts repository

```
helm repo add bitnami https://charts.bitnami.com/bitnami
```
- Execute the command below:

```
helm install grafana bitnami/grafana
```

- Get the admin password

```
kubectl get secret grafana-secret \
  --namespace default \
  -o jsonpath="{.data.GF_SECURITY_ADMIN_PASSWORD}" | base64 --decode
```
- Expose Grafana with a public DNS

```
kubectl expose service grafana \
  --type=LoadBalancer \
  --name=grafana-public \
  -n default
```

- And get that public DNS

```
kubectl describe svc \
  grafana-public \
  -n default
```
- Add the elastic helm charts repository

```
helm repo add elastic https://helm.elastic.co
```

- Install Elasticsearch (details):

```
helm install elasticsearch elastic/elasticsearch --version 7.5.2
```

- Install Filebeat (details):

```
helm install filebeat elastic/filebeat --version 7.5.2
```

- Install Kibana (details):

```
helm install kibana elastic/kibana --version 7.5.2
```
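Before exposing Kibana, you can check that the Elasticsearch cluster came up; this sketch assumes the chart's default `elasticsearch-master` service name:

```
kubectl port-forward svc/elasticsearch-master 9200 &
curl 'http://localhost:9200/_cluster/health?pretty'
```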
- Expose Kibana with a public DNS:

```
kubectl expose deployment kibana-kibana \
  --type=LoadBalancer \
  --name=kibana-public \
  -n default
```

- After a few seconds, execute the command below to get the public DNS of Kibana

```
kubectl describe svc kibana-public -n default
```
- Create a namespace for MongoDB

```
kubectl create namespace 'a-namespace'
```
- Execute the command below:

```
helm install --namespace='a-namespace' mongodb helm/mongodb
```

- Get the root password

```
kubectl get secret \
  --namespace='a-namespace' \
  mongodb \
  -o jsonpath="{.data.mongodb-root-password}" \
  | base64 --decode
```
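To verify the installation, one way to open a client shell inside the cluster (a sketch: the `mongodb` host and the `bitnami/mongodb` image are assumptions about what the local chart deploys):

```
export MONGODB_ROOT_PASSWORD=$(kubectl get secret --namespace='a-namespace' mongodb \
  -o jsonpath="{.data.mongodb-root-password}" | base64 --decode)

# Assumes the chart created a service named "mongodb" in the namespace
kubectl run mongodb-client --rm -it \
  --namespace='a-namespace' \
  --image=bitnami/mongodb \
  --command -- mongo admin \
  --host mongodb \
  --authenticationDatabase admin \
  -u root -p "$MONGODB_ROOT_PASSWORD"
```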
- Install istioctl 1.4.3
  - See the Istio documentation for more details
- Deploy Istio by running this command:

```
istioctl manifest apply --set profile=demo
```
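To confirm the control plane is up, and optionally enable automatic sidecar injection for a namespace (the `a-namespace` label below is only an example):

```
kubectl get pods -n istio-system

# Optional: label a namespace for automatic sidecar injection
kubectl label namespace 'a-namespace' istio-injection=enabled
```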
- Execute the `cluster-delete.sh` script to tear down the cluster

```
./cluster-delete.sh
```