Kubernetes - Deploy Your Application at Scale in K8S
A Hands-On Workshop on Scaling Application on Kubernetes
- Author: Gineesh Madapparambath
- Status: COMPLETED
- Updated: 13 Jan 2024
Note: Prepared for Kubernetes Community Days Kerala 2024.
Abstract
Dive into the world of Kubernetes with our hands-on guide to deploying your application on a Kubernetes platform. This session walks you through the fundamental steps, from the initial deployment to exploring and accessing your application. We'll demystify the process of scaling your app to meet varying demand, guide you through seamless updates that keep your application in sync with the latest enhancements, and share insights into monitoring. Whether you're a newcomer or looking to brush up on your skills, join us for an interactive journey into the heart of Kubernetes deployment.
Target audience:
Beginners to Kubernetes
Takeaways for the attendee:
A basic understanding of how applications can be deployed and scaled on Kubernetes
Prerequisites
1. Kubernetes Cluster
A working Kubernetes cluster (see the options at the end of this document) - we are using a single-node minikube cluster for the demonstration.
2. kubectl
kubectl installed and configured.
3. Sample deployment YAML files
Access to the Demo Repository
4. Enable metrics-server
This command is intended for clusters created with minikube. If you are using a different method for your Kubernetes lab, install metrics-server using the approach appropriate for that setup.
$ minikube addons enable metrics-server
💡 metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
▪ Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
🌟 The 'metrics-server' addon is enabled
Ensure the metrics-server pod is running.
$ kubectl get po -n kube-system | grep metrics-server
metrics-server-7c66d45ddc-22svg 1/1 Running 0 2m1s
5. Get load test tool
You can use any available load-testing/benchmarking tool, but we are using a simple tool called hey in this workshop.
- Download the hey package for your operating system.
- Set executable permission and link or copy the file to a directory on your PATH (eg: ln -s ~/Downloads/hey_linux_amd64 ~/.local/bin/hey)
Check the hey repo for more details.
Exercise 1 - Deploy a simple nginx
Prepare the Application Deployment
Prepare the declarations for deployment and service.
Clone the repo - [iamgini/workshops-demos](https://github.com/iamgini/workshops-demos) - and verify the nginx.yaml
YAML file.
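As a reference, the manifest typically bundles the namespace, Deployment, and NodePort Service together. Below is a minimal sketch of what nginx.yaml may look like; the resource names and ports are taken from the command output in this exercise, but the image tag and labels are assumptions, so check the actual file in the repo.

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: webapp
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: webapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx          # assumed label; must match the Service selector
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25  # assumed tag
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport-service
  namespace: webapp
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 8080
    targetPort: 80
    nodePort: 30442        # matches the 8080:30442 mapping shown below
```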
Deploy Application
$ kubectl apply -f nginx.yaml
namespace/webapp created
deployment.apps/nginx-deployment created
service/nginx-nodeport-service created
$ kubectl get po,svc -n webapp
NAME READY STATUS RESTARTS AGE
pod/nginx-deployment-57c9fb648d-km4rv 1/1 Running 0 3m44s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/nginx-nodeport-service NodePort 10.111.52.104 <none> 8080:30442/TCP 3m44s
Note: you can also expose the application using an imperative command, as shown below. For demonstration purposes we are using the YAML files.
$ kubectl expose deployment nginx-deployment -n webapp --type=NodePort --port=8080
Verify application
You can use any service type; in this exercise we are using a NodePort service. Obtain the application URL with the command below. Note that this command is specific to minikube-based clusters; refer to the relevant documentation for other Kubernetes cluster setups.
$ minikube service --url nginx-nodeport-service -n webapp
http://192.168.49.2:30442
- nginx-nodeport-service - the service name (refer to nginx.yaml)
- -n webapp - the namespace
- http://192.168.49.2:30442 - the application URL returned by the command
You can verify application access using the returned URL via the CLI (curl) or from a web browser.
$ curl http://192.168.49.2:30442
Refer to How to access applications deployed in minikube Kubernetes cluster using NodePort to learn more.
Scaling Manually
kubectl scale deployment nginx-deployment -n webapp --replicas 3
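The same scaling can be done declaratively by setting spec.replicas in the Deployment manifest and re-applying it; a sketch, assuming the Deployment from nginx.yaml:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: webapp
spec:
  replicas: 3   # raise the desired count here, then: kubectl apply -f nginx.yaml
  # ...rest of the spec unchanged
```

The declarative route keeps the manifest as the source of truth, so a later kubectl apply will not silently undo a scale change.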
Create HorizontalPodAutoscaler
$ kubectl autoscale deployment nginx-deployment --cpu-percent=80 --min=1 --max=5 -n webapp
horizontalpodautoscaler.autoscaling/nginx-deployment autoscaled
- Targets the nginx-deployment for autoscaling.
- Scales based on CPU usage, aiming for 80% utilization.
- Allows replicas to scale between 1 and 5.
$ kubectl get hpa -n webapp
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
nginx-deployment Deployment/nginx-deployment 0%/80% 1 5 1 57s
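Note that the CPU percentage target is computed against the container's CPU request, so the Deployment must define one for the HPA to work. A sketch of the relevant container fragment; the 250m/64Mi values are assumptions, not taken from the repo:

```yaml
# Fragment of the container spec inside the Deployment.
# --cpu-percent=80 means "scale out when average usage exceeds
# 80% of this request" (here, 80% of 250m = 200m per pod).
resources:
  requests:
    cpu: 250m      # assumed value; adjust to your workload
    memory: 64Mi
```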
Increase the application load
$ hey -z 4m -c 15 http://192.168.49.2:<PORT>
- Runs for 4 minutes (-z 4m) to sustain load for a longer period.
- Uses 15 concurrent connections (-c 15) to generate higher load, aiming to push CPU usage closer to 80%.
Check Pods and load
$ watch kubectl top pods -n webapp
Every 2.0s: kubectl top pods -n webapp gmadappa: Sat Jan 13 12:42:33 2024
NAME CPU(cores) MEMORY(bytes)
nginx-deployment-57c9fb648d-6c4gf 240m 10Mi
nginx-deployment-57c9fb648d-94dzt 261m 10Mi
nginx-deployment-57c9fb648d-gm576 225m 10Mi
nginx-deployment-57c9fb648d-gsp8x 254m 10Mi
nginx-deployment-57c9fb648d-rhgrm 232m 10Mi
$ watch kubectl get po -n webapp
Every 2.0s: kubectl get po -n webapp iamgini: Sat Jan 13 12:42:17 2024
NAME READY STATUS RESTARTS AGE
nginx-deployment-57c9fb648d-6c4gf 1/1 Running 0 2m53s
nginx-deployment-57c9fb648d-94dzt 1/1 Running 0 3m53s
nginx-deployment-57c9fb648d-gm576 1/1 Running 0 113s
nginx-deployment-57c9fb648d-gsp8x 1/1 Running 0 21m
nginx-deployment-57c9fb648d-rhgrm 1/1 Running 0 2m53s
Clean up
Remove the resources and namespace from the cluster as part of housekeeping.
$ kubectl delete -f nginx.yaml
namespace "webapp" deleted
deployment.apps "nginx-deployment" deleted
service "nginx-nodeport-service" deleted
Exercise 2 - Deploy a Todo app
Prepare the Application Deployment
Prepare the declarations for deployment and service.
Clone the repo - [iamgini/workshops-demos](https://github.com/iamgini/workshops-demos) - and verify the todo-app.yaml
YAML file.
Deploy Application
$ kubectl apply -f todo-app.yaml
namespace/todo created
deployment.apps/todo-app created
service/todo-app created
Verify application
$ minikube service --url todo-app -n todo
http://192.168.49.2:31680
Create HorizontalPodAutoscaler
$ kubectl autoscale deployment todo-app --cpu-percent=80 --min=1 --max=5 -n todo
horizontalpodautoscaler.autoscaling/todo-app autoscaled
$ kubectl get hpa -n todo
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
todo-app Deployment/todo-app <unknown>/80% 1 5 1 59s
Note: TARGETS shows <unknown> until metrics-server has collected usage data for the pods, or if the containers do not define CPU requests.
Increase the application load
$ hey -z 4m -c 15 http://192.168.49.2:<PORT>
Check Pods and load
$ watch "kubectl get all -n todo;kubectl top pods -n todo"
Clean up
Remove the resources and namespace from the cluster as part of housekeeping.
$ kubectl delete -f todo-app.yaml
namespace "todo" deleted
deployment.apps "todo-app" deleted
service "todo-app" deleted
Exercise 3 - Deploy a WordPress site (2 tier)
Prepare the Application Deployment
Prepare the declarations for deployment and service.
Clone the repo - [iamgini/workshops-demos](https://github.com/iamgini/workshops-demos) - and verify the wordpress.yaml
YAML file.
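For context, a two-tier WordPress stack typically pairs a WordPress Deployment (exposed via the NodePort Service used in this exercise) with a database tier reachable only inside the cluster. A rough sketch follows; apart from wordpress-deployment and the wordpress namespace, all names, images, and the internal Service are assumptions, not taken from the repo:

```yaml
# Web tier
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress-deployment
  namespace: wordpress
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
      - name: wordpress
        image: wordpress:6          # assumed image
        env:
        - name: WORDPRESS_DB_HOST
          value: mysql              # assumed internal Service name
---
# Database tier, exposed only inside the cluster via ClusterIP
apiVersion: v1
kind: Service
metadata:
  name: mysql
  namespace: wordpress
spec:
  selector:
    app: mysql
  ports:
  - port: 3306
```

The point of the split is that only the web tier is reachable from outside; WordPress finds the database through the in-cluster DNS name of the ClusterIP Service.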
Deploy Application
$ kubectl apply -f wordpress.yaml
Verify application
$ minikube service --url wordpress-app -n wordpress
http://192.168.49.2:31680
Create HorizontalPodAutoscaler
$ kubectl autoscale deployment wordpress-deployment --cpu-percent=80 --min=1 --max=5 -n wordpress
Increase the application load
$ hey -z 4m -c 10 http://192.168.49.2:<PORT>
Check Pods and load
$ watch "kubectl get all -n wordpress;kubectl top pods -n wordpress"
Clean up
Remove the resources and namespace from the cluster as part of housekeeping.
$ kubectl delete -f wordpress.yaml
References & Resources
- Kubernetes Basics Modules
- Slides
- How to access applications deployed in minikube Kubernetes cluster using NodePort
- Accessing apps
Building a Kubernetes Cluster
You can try one or many of the following methods (we suggest trying all, if possible) to create your own Kubernetes cluster for learning and practicing.
- Let us test Kubernetes 1.29 with minikube (using VirtualBox, Docker or Podman)
- Exploring Kubernetes 1.29 with Kind (using Docker or Podman)
- Create Multi-node Kubernetes Cluster in 10 minutes (using Vagrant and Ansible)
- Create Kubernetes cluster with Kubespray
- Installing minikube with Vagrant and Ansible Video
- Create a Kubernetes cluster using kubeadm - Creating a cluster with kubeadm
- Kubernetes The Hard Way by Kelsey Hightower.
Simple HPA with CPU and Memory metrics
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 60
Advanced HPA with custom metrics (this requires a custom metrics API adapter, such as prometheus-adapter, running in the cluster):
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Pods
    pods:
      metric:
        name: nginx-request-count
      target:
        type: AverageValue
        averageValue: "100"