We have been given a Kubernetes cluster to inspect. In order to better understand the layout and the structure of this cluster, we must run the appropriate commands.
Log in to the Kube Master server using the credentials on the lab page (either in your local terminal, using the Instant Terminal feature, or using the public IP), and work through the objectives listed.
Use the following command to list the nodes in your cluster:
kubectl get nodes
We should see three nodes: one master and two workers.
Use the following command to list the pods in all namespaces:
kubectl get pods --all-namespaces
Use the following command to list all the namespaces in the cluster:
kubectl get namespaces
Here, we should see four namespaces: default, kube-public, kube-system, and web.
Use the following command to list the pods in the default namespace:
kubectl get pods
We should see that there aren't any pods in the default namespace.
Use the following command to find the IP address of the API server:
kubectl get pods --all-namespaces -o wide
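On a kubeadm cluster, the kube-apiserver pod typically runs on the host network, so the IP shown for it in the wide output is the master node's IP. As an optional cross-check (not required by the lab), the following commands also reveal the API server endpoint:
kubectl get endpoints kubernetes
kubectl cluster-info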
Use the following command to check for any deployments in the cluster:
kubectl get deployments
We should see there aren't any deployments in the cluster.
Use the following command to view the labels on the etcd pod:
kubectl get pods --all-namespaces --show-labels -o wide
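If the wide, all-namespaces output is noisy, a more targeted query is possible. This sketch assumes a kubeadm-provisioned cluster, where the etcd static pod carries the component=etcd label:
kubectl get pods -n kube-system -l component=etcd --show-labels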
We have been given three nodes, on which we must install the components necessary to build a running Kubernetes cluster. Once the cluster has been built and we have verified that all nodes are in the Ready status, we will test deployments, pods, services, and port forwarding, as well as executing commands from a pod.
Log in to all three nodes (the controller/master and workers) using the credentials on the lab page (either in your local terminal, using the Instant Terminal feature, or using the public IPs), and work through the objectives listed.
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
cat << EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
sudo apt-get update -y
sudo apt-get install -y docker-ce=18.06.1~ce~3-0~ubuntu kubelet=1.13.5-00 \
  kubeadm=1.13.5-00 kubectl=1.13.5-00
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
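The commands above only bootstrap the control plane and configure kubectl for the current user. To finish building the cluster, you would typically install a pod network add-on and then join the workers. A minimal sketch follows, assuming Flannel (which matches the 10.244.0.0/16 pod CIDR used above; the manifest URL may have moved, so check the Flannel project for the current location) and placeholder join credentials taken from the kubeadm init output:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
sudo kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
Run the kubeadm join command on each worker node; kubectl get nodes on the master should then report all three nodes as Ready.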
We have been given a three-node cluster that is in need of an upgrade. In this hands-on lab, we must perform the upgrade to all of the cluster components, including kubeadm, kube-controller-manager, kube-scheduler, kubelet, and kubectl.
Log in to all three servers using the credentials on the lab page (either in your local terminal, using the Instant Terminal feature, or using the public IPs), and work through the objectives listed.
In the terminal where you're logged in to the Master node, use the following commands to set the version and architecture variables and download the kubeadm binary (the second export pins the version to v1.13.5, the version used in this lab):
export VERSION=$(curl -sSL https://dl.k8s.io/release/stable.txt)
export VERSION=v1.13.5
export ARCH=amd64
curl -sSL https://dl.k8s.io/release/${VERSION}/bin/linux/${ARCH}/kubeadm > kubeadm
Still in the Master node terminal, run the following commands to install kubeadm and verify the version:
sudo install -o root -g root -m 0755 ./kubeadm /usr/bin/kubeadm
sudo kubeadm version
Still in the Master node terminal, use the following command to plan the upgrade:
sudo kubeadm upgrade plan
Still in the Master node terminal, use this command to apply the upgrade (also in the output of upgrade plan):
sudo kubeadm upgrade apply v1.13.5
Now, in each node terminal, use the following commands to set the version variables and download the kubelet binary on each node:
export VERSION=$(curl -sSL https://dl.k8s.io/release/stable.txt)
export VERSION=v1.13.5
export ARCH=amd64
curl -sSL https://dl.k8s.io/release/${VERSION}/bin/linux/${ARCH}/kubelet > kubelet
In each node terminal, use these commands to install kubelet and restart the kubelet service:
sudo install -o root -g root -m 0755 ./kubelet /usr/bin/kubelet
sudo systemctl restart kubelet.service
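Note that in a production cluster you would typically drain each node before swapping out its kubelet and uncordon it afterward. The lab nodes can be upgraded in place, but a sketch of that extra step would look like this:
kubectl drain <node_name> --ignore-daemonsets
# ...replace and restart the kubelet as above...
kubectl uncordon <node_name>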
Use the following command to verify the kubelet was upgraded correctly on each node (the VERSION column should now show v1.13.5):
kubectl get nodes
In each node terminal, use the following command to download the matching version of kubectl:
curl -sSL https://dl.k8s.io/release/${VERSION}/bin/linux/${ARCH}/kubectl > kubectl
In each node terminal, use the following command to install the new kubectl binary:
sudo install -o root -g root -m 0755 ./kubectl /usr/bin/kubectl
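As an optional final check (not a required lab step), confirm the client and node versions; both should now report v1.13.5:
kubectl version --short
kubectl get nodes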
We have been given a three-node cluster. Within that cluster, we must create a service and resolve the DNS name for that service, creating the necessary Kubernetes resources along the way.
To complete this hands-on lab, we must have a working deployment and a working service, and we must record the DNS name of the service within our Kubernetes cluster.
Log in to the Kube Master server using the credentials on the lab page (either in your local terminal, using the Instant Terminal feature, or using the public IP), and work through the objectives listed.
kubectl run nginx --image=nginx
kubectl get deployments
kubectl expose deployment nginx --port 80 --type NodePort
kubectl get services
Create a file named busybox.yaml containing the following pod spec:
apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  containers:
  - image: busybox:1.28.4
    command:
      - sleep
      - "3600"
    name: busybox
  restartPolicy: Always
kubectl create -f busybox.yaml
kubectl get pods
kubectl exec busybox -- nslookup nginx
The fully qualified DNS name of the service follows the pattern <service-name>.default.svc.cluster.local, so for our service it is nginx.default.svc.cluster.local; record this name.
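As an optional check, you can confirm that the fully qualified name resolves as well by repeating the lookup with the full DNS name:
kubectl exec busybox -- nslookup nginx.default.svc.cluster.local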
In this hands-on lab, we have been given a three-node cluster. Within that cluster, we must taint the production node in order to repel work. We will create the necessary taint to designate one of the nodes as "prod". Then we will deploy two pods, one to each environment. One pod spec will contain the toleration for the taint.
Log in to the Kube Master server using the credentials on the lab page (either in your local terminal, using the Instant Terminal feature, or using the public IP), and work through the objectives listed.
kubectl get nodes
This will list out the nodes, which we'll need for the following tasks.
kubectl taint node <node_name> node-type=prod:NoSchedule
Here, <node_name> is one of the worker node names you saw in the output of kubectl get nodes.
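To confirm the taint was applied, you can describe the node and look for the Taints field (an optional check):
kubectl describe node <node_name> | grep -i taint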
Create a file named dev-pod.yaml containing the following pod spec:
apiVersion: v1
kind: Pod
metadata:
  name: dev-pod
  labels:
    app: busybox
spec:
  containers:
  - name: dev
    image: busybox
    command: ['sh', '-c', 'echo Hello Kubernetes! && sleep 3600']
kubectl create -f dev-pod.yaml
Create a file named prod-deployment.yaml containing the following deployment spec:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prod
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prod
  template:
    metadata:
      labels:
        app: prod
    spec:
      containers:
      - args:
        - sleep
        - "3600"
        image: busybox
        name: main
      tolerations:
      - key: node-type
        operator: Equal
        value: prod
        effect: NoSchedule
kubectl create -f prod-deployment.yaml
kubectl get pods -o wide
In this hands-on lab, we have been given a three-node cluster. Within that cluster, we must deploy our application and then successfully update the application to a new version without causing any downtime.
Log in to the Kube Master server using the credentials on the lab page (either in your local terminal, using the Instant Terminal feature, or using the public IPs), and work through the objectives listed.
Create a file named kubeserve-deployment.yaml containing the following deployment spec:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubeserve
spec:
  replicas: 3
  selector:
    matchLabels:
      app: kubeserve
  template:
    metadata:
      name: kubeserve
      labels:
        app: kubeserve
    spec:
      containers:
      - image: linuxacademycontent/kubeserve:v1
        name: app
kubectl apply -f kubeserve-deployment.yaml --record
kubectl rollout status deployments kubeserve
kubectl describe deployment kubeserve
kubectl scale deployment kubeserve --replicas=5
kubectl get pods
kubectl expose deployment kubeserve --port 80 --target-port 80 --type NodePort
kubectl get services
curl http://<ip-address-of-the-service>
while true; do curl http://<ip-address-of-the-service>; done
kubectl set image deployments/kubeserve app=linuxacademycontent/kubeserve:v2 --v 6
kubectl get replicasets
kubectl get pods
kubectl rollout history deployment kubeserve
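If the v2 rollout had caused problems, the same history could be used to roll back. A sketch of that recovery (not required by this lab):
# roll back to the previous revision
kubectl rollout undo deployment kubeserve
# or roll back to a specific revision taken from the history output
kubectl rollout undo deployment kubeserve --to-revision=1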
In this hands-on lab, to decouple our storage from our pods, we will create a persistent volume for our pods to mount. We will deploy the mongo image, which runs a MongoDB database. We will first create the persistent volume, then write the pod YAML to deploy a pod that mounts the volume. We will then delete the pod and create a new pod, which will access that same volume.
Log in to the Kube Master server using the credentials on the lab page (either in your local terminal, using the Instant Terminal feature, or using the public IP), and work through the objectives listed.
Create a file named mongodb-pv.yaml containing the following PersistentVolume spec:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongodb-pv
spec:
  storageClassName: local-storage
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
kubectl apply -f mongodb-pv.yaml
Create a file named mongodb-pvc.yaml containing the following PersistentVolumeClaim spec:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongodb-pvc
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
kubectl apply -f mongodb-pvc.yaml
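Before creating the pod, it can be worth checking that the claim has bound to the volume (an optional verification); the claim should show as Bound, or become Bound once a pod that uses it is scheduled, depending on the binding mode of the storage class:
kubectl get pvc
kubectl get pv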
Create a file named mongodb-pod.yaml containing the following pod spec:
apiVersion: v1
kind: Pod
metadata:
  name: mongodb
spec:
  containers:
  - image: mongo
    name: mongodb
    volumeMounts:
    - name: mongodb-data
      mountPath: /data/db
    ports:
    - containerPort: 27017
      protocol: TCP
  volumes:
  - name: mongodb-data
    persistentVolumeClaim:
      claimName: mongodb-pvc
kubectl apply -f mongodb-pod.yaml
kubectl get pods
kubectl get nodes
ssh <node_hostname>
cd /mnt/data
ls
exit
kubectl delete pod mongodb
kubectl apply -f mongodb-pod.yaml
ssh <node_hostname>
cd /mnt/data
ls
We have been given access to a three-node cluster. Within that cluster, a PV has already been provisioned. We will need to make sure we can access the PV directly from a pod in our cluster. By default, a pod's service account is not authorized to read PVs through the API, so we will need to create a ClusterRole and test the access after it's been created. Every ClusterRole requires a ClusterRoleBinding to bind the role to a user, service account, or group. After we have created the ClusterRole and ClusterRoleBinding, we will try to access the PV directly from a pod.
Log in to the Kube Master server using the credentials on the lab page (either in your local terminal, using the Instant Terminal feature, or using the public IP), and work through the objectives listed.
kubectl get pv
kubectl create clusterrole pv-reader --verb=get,list \
--resource=persistentvolumes
kubectl create clusterrolebinding pv-test --clusterrole=pv-reader \
--serviceaccount=web:default
Create a file named curlpod.yaml containing the following pod spec:
apiVersion: v1
kind: Pod
metadata:
  name: curlpod
  namespace: web
spec:
  containers:
  - image: tutum/curl
    command: ["sleep", "9999999"]
    name: main
  - image: linuxacademycontent/kubectl-proxy
    name: proxy
  restartPolicy: Always
kubectl apply -f curlpod.yaml
kubectl exec -it curlpod -n web -- sh
curl localhost:8001/api/v1/persistentvolumes
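You can also check the binding from outside the pod with kubectl's built-in authorization check; this sketch impersonates the default service account in the web namespace, which the ClusterRoleBinding above covers:
kubectl auth can-i list persistentvolumes --as=system:serviceaccount:web:default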