
Cloud Native Certified Kubernetes Administrator (CKA)

Exploring the Kubernetes Cluster via the Command Line

We have been given a Kubernetes cluster to inspect. In order to better understand the layout and the structure of this cluster, we must run the appropriate commands.

Log in to the Kube Master server using the credentials on the lab page (either in your local terminal, using the Instant Terminal feature, or using the public IP), and work through the objectives listed.

List all the nodes in the cluster.

Use the following command to list the nodes in your cluster:

kubectl get nodes

We should see three nodes: one master and two workers.

List all the pods in all namespaces.

Use the following command to list the pods in all namespaces:

kubectl get pods --all-namespaces

List all the namespaces in the cluster.

Use the following command to list all the namespaces in the cluster:

kubectl get namespaces

Here, we should see four namespaces: default, kube-public, kube-system, and web.

Check to see if there are any pods running in the default namespace.

Use the following command to list the pods in the default namespace:

kubectl get pods

We should see that there aren't any pods in the default namespace.

Find the IP address of the API server running on the master node.

Use the following command to find the IP address of the API server:

kubectl get pods --all-namespaces -o wide
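
The API server runs as a pod in the kube-system namespace, so the output can also be narrowed down (a hedged shortcut; on a kubeadm-built cluster the pod is typically named kube-apiserver-<master-hostname>):

kubectl get pods -n kube-system -o wide | grep apiserver

Because the API server pod runs on the host network, the IP shown for it matches the master node's IP.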

See if there are any deployments in this cluster.

Use the following command to check for any deployments in the cluster:

kubectl get deployments

We should see there aren't any deployments in the cluster.

Find the label applied to the etcd pod on the master node.

Use the following command to view the label on the etcd pod:

kubectl get pods --all-namespaces --show-labels -o wide
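
To avoid scanning the full list, the same information can be filtered down to the etcd pod (on a kubeadm cluster it typically carries labels such as component=etcd and tier=control-plane):

kubectl get pods -n kube-system --show-labels | grep etcd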

Installing and Testing the Components of a Kubernetes Cluster

We have been given three nodes, in which we must install the components necessary to build a running Kubernetes cluster. Once the cluster has been built and we have verified all nodes are in the ready status, we need to start testing deployments, pods, services, and port forwarding, as well as executing commands from a pod.

Log in to all three nodes (the controller/master and workers) using the credentials on the lab page (either in your local terminal, using the Instant Terminal feature, or using the public IPs), and work through the objectives listed.

Get the Docker gpg key, and add it to your repository.

  1. In all three terminals, run the following command to get the Docker gpg key:
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
  2. In all three terminals, add the Docker repository:
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

Get the Kubernetes gpg key, and add it to your repository.

  1. In all three terminals, run the following command to get the Kubernetes gpg key:
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
  2. In all three terminals, add the Kubernetes repository:
cat << EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
  3. In all three terminals, update the package index:
sudo apt-get update -y 
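
Optionally, confirm the new repositories are visible before installing; this is a quick sanity check using the same package names installed in the next objective:

apt-cache madison docker-ce | head -5
apt-cache madison kubeadm | head -5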

Install Docker, kubelet, kubeadm, and kubectl.

  1. In all three terminals, run the following command to install Docker, kubelet, kubeadm, and kubectl:
sudo apt-get install -y docker-ce=18.06.1~ce~3-0~ubuntu kubelet=1.13.5-00 \
kubeadm=1.13.5-00 kubectl=1.13.5-00
  2. In the Master node terminal only, initialize the Kubernetes cluster:
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
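
Note that kubeadm init prints a kubeadm join command near the end of its output; running that command on each worker is what joins them to the cluster. Its general shape is shown below (the token and hash are placeholders printed by your own init run, not literal values):

sudo kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>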

Set up local kubeconfig.

  1. In the master node terminal, run the following commands to set up local kubeconfig:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
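
With the kubeconfig in place, kubectl on the master can now reach the cluster; a quick check:

kubectl get nodes

Until a pod network add-on matching the 10.244.0.0/16 CIDR above (such as flannel) is applied and the workers are joined, the output will typically show only the master, in a NotReady state.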

Upgrading the Kubernetes Cluster Using kubeadm

We have been given a three-node cluster that is in need of an upgrade. In this hands-on lab, we must perform the upgrade to all of the cluster components, including kubeadm, kube-controller-manager, kube-scheduler, kubelet, and kubectl.

Log in to all three servers using the credentials on the lab page (either in your local terminal, using the Instant Terminal feature, or using the public IPs), and work through the objectives listed.

Get the latest version of kubeadm.

In the terminal where you're logged in to the Master node, use the following commands to create a variable and get the latest version of kubeadm:

export VERSION=$(curl -sSL https://dl.k8s.io/release/stable.txt)
export VERSION=v1.13.5
export ARCH=amd64
curl -sSL https://dl.k8s.io/release/${VERSION}/bin/linux/${ARCH}/kubeadm > kubeadm

Install kubeadm and verify it has been installed correctly.

Still in the Master node terminal, run the following commands to install kubeadm and verify the version:

sudo install -o root -g root -m 0755 ./kubeadm /usr/bin/kubeadm
sudo kubeadm version

Plan the upgrade in order to check for errors.

Still in the Master node terminal, use the following command to plan the upgrade:

sudo kubeadm upgrade plan

Perform the upgrade of the kube-scheduler and kube-controller-manager.

Still in the Master node terminal, use this command to apply the upgrade (also in the output of upgrade plan):

sudo kubeadm upgrade apply v1.13.5
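
To confirm the control plane was upgraded, check the reported server version (a quick sanity check; the client version still reflects the locally installed kubectl):

kubectl version --short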

Get the latest version of kubelet.

Now, in each node terminal, use the following commands to get the latest version of kubelet:

export VERSION=$(curl -sSL https://dl.k8s.io/release/stable.txt)
export VERSION=v1.13.5
export ARCH=amd64
curl -sSL https://dl.k8s.io/release/${VERSION}/bin/linux/${ARCH}/kubelet > kubelet

Install kubelet on each node and restart the kubelet service.

In each node terminal, use these commands to install kubelet and restart the kubelet service:

sudo install -o root -g root -m 0755 ./kubelet /usr/bin/kubelet
sudo systemctl restart kubelet.service

Verify the kubelet was installed correctly.

From the Master node terminal, use the following command to verify the kubelet was installed correctly (the VERSION column should now show v1.13.5 for each node):

kubectl get nodes

Get the latest version of kubectl.

In each node terminal, use the following command to get the latest version of kubectl:

curl -sSL https://dl.k8s.io/release/${VERSION}/bin/linux/${ARCH}/kubectl > kubectl

Install the latest version of kubectl.

In each node terminal, use the following command to install the latest version of kubectl:

sudo install -o root -g root -m 0755 ./kubectl /usr/bin/kubectl

Creating a Service and Discovering DNS Names in Kubernetes

We have been given a three-node cluster. Within that cluster, we must perform the following tasks in order to create a service and resolve the DNS names for that service. We will create the necessary Kubernetes resources in order to perform this DNS query.

To adequately complete this hands-on lab, we must have a working deployment, a working service, and be able to record the DNS name of the service within our Kubernetes cluster.

Log in to the Kube Master server using the credentials on the lab page (either in your local terminal, using the Instant Terminal feature, or using the public IP), and work through the objectives listed.

Create an nginx deployment, and verify it was successful.

  1. Use this command to create an nginx deployment:
kubectl run nginx --image=nginx
  2. Use this command to verify the deployment was successful:
kubectl get deployments

Create a service, and verify the service was successful.

  1. Use this command to create a service:
kubectl expose deployment nginx --port 80 --type NodePort
  2. Use this command to verify the service was created:
kubectl get services

Create a pod that will allow you to query DNS, and verify it’s been created.

  1. Using an editor of your choice (e.g., Vim and the command vim busybox.yaml), enter the following YAML to create the busybox pod spec:
 apiVersion: v1
 kind: Pod
 metadata:
   name: busybox
 spec:
   containers:
   - image: busybox:1.28.4
     command:
       - sleep
       - "3600"
     name: busybox
   restartPolicy: Always
  2. Use the following command to create the busybox pod:
kubectl create -f busybox.yaml
  3. Use the following command to verify the pod was created successfully:
kubectl get pods

Perform a DNS query to the service.

  1. Use the following command to query the DNS name of the nginx service:
kubectl exec busybox -- nslookup nginx
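
The same lookup also works with the fully qualified service name, which is the name recorded in the next objective:

kubectl exec busybox -- nslookup nginx.default.svc.cluster.local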

Record the DNS name.

  1. Record the DNS name of the service, which takes the form:
<service-name>.default.svc.cluster.local

For the nginx service created above, this is nginx.default.svc.cluster.local.

Scheduling Pods with Taints and Tolerations in Kubernetes

In this hands-on lab, we have been given a three-node cluster. Within that cluster, we must perform the following tasks to taint the production node in order to repel work. We will create the necessary taint to properly label one of the nodes “prod.” Then, we will deploy two pods — one to each environment. One pod spec will contain the toleration for the taint.

Log in to the Kube Master server using the credentials on the lab page (either in your local terminal, using the Instant Terminal feature, or using the public IP), and work through the objectives listed.

Taint one of the worker nodes to repel work.

  1. In the terminal, run the following command:
kubectl get nodes

This will list out the nodes, which we'll need for the following tasks.

  2. Use the following command to taint the node:
kubectl taint node <node_name> node-type=prod:NoSchedule

Here, <node_name> will be one of the worker node names you saw as a result of kubectl get nodes.
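
To confirm the taint was applied, describe the node and check its Taints field:

kubectl describe node <node_name> | grep -i taint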

Schedule a pod to the dev environment.

  1. Using an editor of your choice (e.g., Vim and the command vim dev-pod.yaml), enter the following YAML to specify a pod that will be scheduled to the dev environment:
apiVersion: v1
kind: Pod
metadata:
 name: dev-pod
 labels:
   app: busybox
spec:
 containers:
 - name: dev
   image: busybox
   command: ['sh', '-c', 'echo Hello Kubernetes! && sleep 3600']
  2. Use the following command to create the pod:
kubectl create -f dev-pod.yaml

Schedule a pod to the prod environment.

  1. Use the following YAML (called prod-deployment.yaml) to specify a pod that will be scheduled to the prod environment:
apiVersion: apps/v1
kind: Deployment
metadata:
 name: prod
spec:
 replicas: 1
 selector:
   matchLabels:
     app: prod
 template:
   metadata:
     labels:
       app: prod
   spec:
     containers:
     - args:
       - sleep
       - "3600"
       image: busybox
       name: main
     tolerations:
     - key: node-type
       operator: Equal
       value: prod
       effect: NoSchedule
  2. Use the following command to create the deployment:
kubectl create -f prod-deployment.yaml

Verify each pod has been scheduled to the correct environment.

  1. Use the following command to verify the pods have been scheduled:
kubectl get pods -o wide
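
In the NODE column, dev-pod should be running on the untainted worker (it carries no toleration for node-type=prod:NoSchedule), while the prod pod is free to land on the tainted node because its spec tolerates the taint. To list only the pods on a particular node (the node name below is a placeholder), a field selector can be used:

kubectl get pods -o wide --field-selector spec.nodeName=<node_name>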

Performing a Rolling Update of an Application in Kubernetes

In this hands-on lab, we have been given a three-node cluster. Within that cluster, we must deploy our application and then successfully update the application to a new version without causing any downtime.

Log in to the Kube Master server using the credentials on the lab page (either in your local terminal, using the Instant Terminal feature, or using the public IPs), and work through the objectives listed.

Create and roll out version 1 of the application, and verify a successful deployment.

  1. Use the following YAML named kubeserve-deployment.yaml to create your deployment:
 apiVersion: apps/v1
 kind: Deployment
 metadata:
   name: kubeserve
 spec:
   replicas: 3
   selector:
     matchLabels:
       app: kubeserve
   template:
     metadata:
       name: kubeserve
       labels:
         app: kubeserve
     spec:
       containers:
       - image: linuxacademycontent/kubeserve:v1
         name: app
  2. Create the deployment:
kubectl apply -f kubeserve-deployment.yaml --record
  3. Verify the deployment was successful:
kubectl rollout status deployments kubeserve
  4. Verify the app is at the correct version:
kubectl describe deployment kubeserve

Scale up the application to create high availability.

  1. Scale up your application to five replicas:
kubectl scale deployment kubeserve --replicas=5
  2. Verify the additional replicas have been created:
kubectl get pods

Create a service, so users can access the application.

  1. Create a service for your deployment:
kubectl expose deployment kubeserve --port 80 --target-port 80 --type NodePort
  2. Verify the service is present, and collect the cluster IP:
kubectl get services
  3. Verify the service is responding:
curl http://<ip-address-of-the-service>

Perform a rolling update to version 2 of the application, and verify its success.

  1. Start another terminal session to the same Kube Master server. There, use this curl loop command to see the version change as you perform the rolling update:
while true; do curl http://<ip-address-of-the-service>; done
  2. Perform the update in the original terminal session (while the curl loop is running in the new terminal session):
kubectl set image deployments/kubeserve app=linuxacademycontent/kubeserve:v2 --v 6
  3. View the additional ReplicaSet created during the update:
kubectl get replicasets
  4. Verify all pods are up and running:
kubectl get pods
  5. View the rollout history:
kubectl rollout history deployment kubeserve

Creating Persistent Storage for Pods in Kubernetes

In this hands-on lab, to decouple our storage from our pods, we will create a persistent volume to mount for use by our pods. We will deploy a mongodb image that will contain a MongoDB database. We will first create the persistent volume, then create the pod YAML for deploying the pod to mount the volume. We will then delete the pod and create a new pod, which will access that same volume.

Log in to the Kube Master server using the credentials on the lab page (either in your local terminal, using the Instant Terminal feature, or using the public IP), and work through the objectives listed.

Create a PersistentVolume.

  1. Use the following YAML spec for the PersistentVolume named mongodb-pv.yaml:
 apiVersion: v1
 kind: PersistentVolume
 metadata:
   name: mongodb-pv
 spec:
   storageClassName: local-storage
   capacity:
     storage: 1Gi
   accessModes:
     - ReadWriteOnce
   hostPath:
     path: "/mnt/data"
  2. Then, create the PersistentVolume:
kubectl apply -f mongodb-pv.yaml

Create a PersistentVolumeClaim.

  1. Use the following YAML spec for the PersistentVolumeClaim named mongodb-pvc.yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongodb-pvc
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  2. Then, create the PersistentVolumeClaim:
kubectl apply -f mongodb-pvc.yaml
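
The claim should bind to the PersistentVolume created above, since the storageClassName, access mode, and requested size all match; confirm before creating the pod:

kubectl get pvc mongodb-pvc

The STATUS column should show Bound, with mongodb-pv in the VOLUME column.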

Create a pod from the mongodb image, with a mounted volume to mount path /data/db.

  1. Use the following YAML spec for the pod named mongodb-pod.yaml:
 apiVersion: v1
 kind: Pod
 metadata:
   name: mongodb
 spec:
   containers:
   - image: mongo
     name: mongodb
     volumeMounts:
     - name: mongodb-data
       mountPath: /data/db
     ports:
     - containerPort: 27017
       protocol: TCP
   volumes:
   - name: mongodb-data
     persistentVolumeClaim:
       claimName: mongodb-pvc
  2. Then, create the pod:
kubectl apply -f mongodb-pod.yaml
  3. Verify the pod was created:
kubectl get pods

Access the node and view the data within the volume.

  1. Run the following command:
kubectl get nodes
  2. Connect to the worker node (get the <node_hostname> from the NAME column of the above output), using the same password as the Kube Master:
ssh <node_hostname>
  3. Switch to the /mnt/data directory:
cd /mnt/data
  4. List the contents of the directory:
ls

Delete the pod and create a new pod with the same YAML spec.

  1. Exit out of the worker node:
exit
  2. Delete the pod:
kubectl delete pod mongodb
  3. Create a new pod:
kubectl apply -f mongodb-pod.yaml

Verify the data still resides on the volume.

  1. Log in to the worker node again:
ssh <node_hostname>
  2. Switch to the /mnt/data directory:
cd /mnt/data
  3. List the contents of the directory:
ls

Creating a ClusterRole to Access a PV in Kubernetes (*)

We have been given access to a three-node cluster. Within that cluster, a PV has already been provisioned. We will need to make sure we can access the PV directly from a pod in our cluster. By default, pods cannot access PVs directly, so we will need to create a ClusterRole and test the access after it's been created. Every ClusterRole requires a ClusterRoleBinding to bind the role to a user, service account, or group. After we have created the ClusterRole and ClusterRoleBinding, we will try to access the PV directly from a pod.

Log in to the Kube Master server using the credentials on the lab page (either in your local terminal, using the Instant Terminal feature, or using the public IP), and work through the objectives listed.

View the Persistent Volume.

  1. Use the following command to view the Persistent Volume within the cluster:
kubectl get pv

Create a ClusterRole.

  1. Use the following command to create the ClusterRole:
kubectl create clusterrole pv-reader --verb=get,list \
--resource=persistentvolumes

Create a ClusterRoleBinding.

  1. Use the following command to create the ClusterRoleBinding:
kubectl create clusterrolebinding pv-test --clusterrole=pv-reader \
--serviceaccount=web:default
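
Optionally, verify the binding with kubectl's built-in authorization check, impersonating the default service account of the web namespace (the same subject used in the binding above):

kubectl auth can-i list persistentvolumes --as=system:serviceaccount:web:default

The command should return yes.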

Create a pod to access the PV.

  1. Use the following YAML (saved as curlpod.yaml) to create a pod that will proxy the connection and allow you to curl the address:
apiVersion: v1
kind: Pod
metadata:
  name: curlpod
  namespace: web
spec:
  containers:
  - image: tutum/curl
    command: ["sleep", "9999999"]
    name: main
  - image: linuxacademycontent/kubectl-proxy
    name: proxy
  restartPolicy: Always
  2. Use the following command to create the pod:
kubectl apply -f curlpod.yaml

Request access to the PV from the pod.

  1. Use the following command to open a shell inside the pod:
kubectl exec -it curlpod -n web -- sh
  2. From within the pod's shell, use the following command to curl the PV resource:
curl localhost:8001/api/v1/persistentvolumes
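
If the ClusterRole and ClusterRoleBinding are in place, the proxy sidecar listening on localhost:8001 forwards the request with the pod's service account credentials, and the API server responds with a JSON object of kind PersistentVolumeList; without the binding, the same request would be rejected with a Forbidden error. A quick way to check just the kind of the response:

curl -s localhost:8001/api/v1/persistentvolumes | grep '"kind"'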