Getting Started with Kubernetes: A Hands-On Guide for Students and Enthusiasts
Published by Hindizubaan Technologies
Kubernetes has revolutionized how we deploy, scale, and manage containerized applications. Whether you're a student diving into cloud-native technologies or an enthusiast looking to expand your skill set, this comprehensive guide will take you from theory to practice with real, working examples.
By the end of this tutorial, you'll have hands-on experience creating pods, services, deployments, and more—all while understanding the underlying concepts that make Kubernetes so powerful.
What is Kubernetes and Why Should You Care?
Kubernetes, often abbreviated as K8s, is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications. Think of it as a sophisticated conductor orchestrating a complex symphony of microservices.
Why Kubernetes Matters:
- Scalability: Automatically scale applications based on demand
- Reliability: Self-healing capabilities that restart failed containers
- Portability: Run consistently across different environments (development, staging, production)
- Resource Efficiency: Optimal resource utilization across your cluster
Prerequisites: Setting Up Your Learning Environment
Before we dive into creating Kubernetes resources, you'll need a working environment. Here are the most accessible options for beginners:
Option 1: Minikube (Recommended for Beginners)
# Install Minikube (macOS)
brew install minikube
# Install Minikube (Linux)
curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube /usr/local/bin/
# Start your local cluster
minikube start
# Verify installation
kubectl cluster-info
Option 2: Docker Desktop with Kubernetes
Enable Kubernetes in Docker Desktop settings for a simple local setup.
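Once it's enabled, you can confirm kubectl is talking to the Docker Desktop cluster with two quick commands (a minimal sketch; docker-desktop is the context name Docker Desktop creates by default):
# Point kubectl at the Docker Desktop cluster
kubectl config use-context docker-desktop
# Confirm the single local node is Ready
kubectl get nodes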
Option 3: Kind (Kubernetes in Docker)
# Install Kind
go install sigs.k8s.io/kind@latest
# Create a cluster
kind create cluster --name learning-cluster
Understanding Kubernetes Architecture
Before creating resources, let's understand the key components:
Control Plane (Master Node) Components:
- API Server: The central management hub
- etcd: Distributed key-value store for cluster state
- Scheduler: Assigns pods to nodes
- Controller Manager: Maintains desired state
Worker Node Components:
- kubelet: Node agent that manages pods
- kube-proxy: Network proxy for services
- Container Runtime: Runs containers (Docker, containerd, etc.)
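You can see most of these components on your own cluster: they run as regular pods in the kube-system namespace (a quick sketch; exact pod names differ between Minikube, Kind, and managed clusters):
# List nodes and their roles
kubectl get nodes -o wide
# Control plane and node components show up as pods here
kubectl get pods -n kube-system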
Chapter 1: Your First Pod - The Building Block
A Pod is the smallest deployable unit in Kubernetes. Let's create your first one:
Creating a Simple Pod
Create a file called my-first-pod.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
    environment: learning
spec:
  containers:
  - name: nginx-container
    image: nginx:1.21
    ports:
    - containerPort: 80
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
Apply the configuration:
# Create the pod
kubectl apply -f my-first-pod.yaml
# Check if the pod is running
kubectl get pods
# Get detailed information
kubectl describe pod nginx-pod
# Check logs
kubectl logs nginx-pod
# Access the pod (for testing)
kubectl port-forward nginx-pod 8080:80
Visit localhost:8080 in your browser to see the nginx welcome page!
Understanding Pod Lifecycle
# Watch pod creation in real-time
kubectl get pods --watch
# Get pod details with more information
kubectl get pods -o wide
# Delete the pod
kubectl delete pod nginx-pod
Chapter 2: Services - Making Pods Accessible
Pods are ephemeral, but Services provide stable networking. Let's create a service to access our pods reliably.
Creating a ClusterIP Service
First, let's create a deployment with multiple replicas:
# nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.21
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: "64Mi"
            cpu: "100m"
          limits:
            memory: "128Mi"
            cpu: "200m"
Now create a service to expose these pods:
# nginx-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  labels:
    app: nginx
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: ClusterIP
Deploy and test:
# Apply the deployment and service
kubectl apply -f nginx-deployment.yaml
kubectl apply -f nginx-service.yaml
# Check your deployment
kubectl get deployments
kubectl get pods -l app=nginx
# Check your service
kubectl get services
kubectl describe service nginx-service
# Test the service
kubectl port-forward service/nginx-service 8080:80
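A ClusterIP service is only reachable from inside the cluster, so another useful test is to call it from a throwaway pod using its in-cluster DNS name (a sketch assuming the service is in the default namespace):
# Call the service by its cluster DNS name from a temporary pod
kubectl run curl-test --image=busybox:1.36 --rm -it --restart=Never -- \
  wget -qO- http://nginx-service.default.svc.cluster.local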
Creating a NodePort Service for External Access
# nginx-nodeport-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30080
  type: NodePort
# Apply the NodePort service
kubectl apply -f nginx-nodeport-service.yaml
# Get the service URL (for Minikube)
minikube service nginx-nodeport --url
# For other setups, access via node IP and port 30080
Chapter 3: ConfigMaps and Secrets - Managing Configuration
Real applications need configuration. Let's learn how to manage it properly.
Creating and Using ConfigMaps
# app-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  database_url: "postgresql://localhost:5432/myapp"
  debug_mode: "true"
  max_connections: "100"
  app.properties: |
    server.port=8080
    server.address=0.0.0.0
    logging.level.root=INFO
Using the ConfigMap in a Deployment:
# app-with-config.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-with-config
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
      - name: demo-container
        image: nginx:1.21
        env:
        - name: DATABASE_URL
          valueFrom:
            configMapKeyRef:
              name: app-config
              key: database_url
        - name: DEBUG_MODE
          valueFrom:
            configMapKeyRef:
              name: app-config
              key: debug_mode
        volumeMounts:
        - name: config-volume
          mountPath: /etc/config
      volumes:
      - name: config-volume
        configMap:
          name: app-config
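To confirm the ConfigMap actually reached the containers, check both the injected environment variables and the mounted files (a minimal verification against the deployment above):
# Environment variables injected from the ConfigMap
kubectl exec deployment/app-with-config -- env | grep -E 'DATABASE_URL|DEBUG_MODE'
# ConfigMap keys mounted as files
kubectl exec deployment/app-with-config -- ls /etc/config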
Working with Secrets
# Create a secret from command line
kubectl create secret generic db-secret \
  --from-literal=username=admin \
  --from-literal=password=secretpassword
# Or create from YAML (base64 encoded)
# db-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-secret-yaml
type: Opaque
data:
  username: YWRtaW4=             # base64 encoded 'admin'
  password: c2VjcmV0cGFzc3dvcmQ= # base64 encoded 'secretpassword'
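The values under data: must be base64 encoded yourself. These one-liners show how to encode a value and how to read one back out of the cluster (note the -n flag, which prevents a trailing newline from being encoded):
# Encode a value for the data: section
echo -n 'admin' | base64
# Decode a secret value straight from the cluster
kubectl get secret db-secret -o jsonpath='{.data.password}' | base64 --decode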
Using Secrets in Pods:
# app-with-secrets.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-with-secrets
spec:
  replicas: 1
  selector:
    matchLabels:
      app: secure-app
  template:
    metadata:
      labels:
        app: secure-app
    spec:
      containers:
      - name: secure-container
        image: nginx:1.21
        env:
        - name: DB_USERNAME
          valueFrom:
            secretKeyRef:
              name: db-secret
              key: username
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-secret
              key: password
Chapter 4: Persistent Storage with Volumes
Let's create a Deployment whose pods persist data using volumes.
Using Persistent Volumes and Claims
# persistent-volume.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: manual
  hostPath:
    path: /tmp/k8s-data
# persistent-volume-claim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: manual
# pod-with-storage.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-with-storage
spec:
  replicas: 1
  selector:
    matchLabels:
      app: data-app
  template:
    metadata:
      labels:
        app: data-app
    spec:
      containers:
      - name: data-container
        image: nginx:1.21
        volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: data-volume
        ports:
        - containerPort: 80
      volumes:
      - name: data-volume
        persistentVolumeClaim:
          claimName: data-pvc
Test persistent storage:
# Apply all storage configurations
kubectl apply -f persistent-volume.yaml
kubectl apply -f persistent-volume-claim.yaml
kubectl apply -f pod-with-storage.yaml
# Check PV and PVC status
kubectl get pv
kubectl get pvc
# Create some data: open a shell inside the running pod
kubectl exec -it deployment/app-with-storage -- bash
# Inside the pod's shell, write a file to the mounted volume, then exit
echo "Hello, persistent world!" > /usr/share/nginx/html/index.html
exit
# Delete and recreate the pod - data should persist
kubectl delete deployment app-with-storage
kubectl apply -f pod-with-storage.yaml
# Check if data is still there
kubectl port-forward deployment/app-with-storage 8080:80
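If the volume did its job, the page you wrote before deleting the deployment should come straight back from the brand-new pod (checked against the port-forward above):
# In another terminal, fetch the page served from the persistent volume
curl http://localhost:8080
# Expected output: Hello, persistent world!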
Chapter 5: Real-World Example - Complete Web Application
Let's bring everything together with a complete application stack:
Database Layer (PostgreSQL)
# postgres-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
data:
  POSTGRES_DB: "sampledb"
  POSTGRES_USER: "sampleuser"
---
apiVersion: v1
kind: Secret
metadata:
  name: postgres-secret
type: Opaque
data:
  POSTGRES_PASSWORD: c2FtcGxlcGFzc3dvcmQ= # base64: samplepassword
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:13
        envFrom:
        - configMapRef:
            name: postgres-config
        env:
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: postgres-secret
              key: POSTGRES_PASSWORD
        ports:
        - containerPort: 5432
        volumeMounts:
        - name: postgres-storage
          mountPath: /var/lib/postgresql/data
        resources:
          requests:
            memory: "256Mi"
            cpu: "250m"
          limits:
            memory: "512Mi"
            cpu: "500m"
      volumes:
      - name: postgres-storage
        persistentVolumeClaim:
          claimName: postgres-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: postgres-service
spec:
  selector:
    app: postgres
  ports:
  - protocol: TCP
    port: 5432
    targetPort: 5432
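Before wiring up the application layer, it's worth confirming the database accepts connections through its service. One way is a throwaway psql client pod (a sketch that reuses the credentials from the ConfigMap and Secret above):
# Connect to PostgreSQL through the service from a temporary pod
kubectl run psql-client --rm -it --image=postgres:13 --restart=Never -- \
  psql -h postgres-service -U sampleuser -d sampledb
# Enter 'samplepassword' when prompted; \q exits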
Application Layer (Node.js)
# app-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: web-app
        image: node:16-alpine
        command: ["sh", "-c"]
        args:
        - |
          echo 'const express = require("express");
          const app = express();
          app.get("/", (req, res) => {
            res.json({
              message: "Hello from Kubernetes!",
              hostname: process.env.HOSTNAME,
              timestamp: new Date().toISOString()
            });
          });
          app.get("/health", (req, res) => {
            res.json({ status: "healthy" });
          });
          app.listen(3000, () => {
            console.log("Server running on port 3000");
          });' > app.js &&
          npm init -y &&
          npm install express &&
          node app.js
        ports:
        - containerPort: 3000
        livenessProbe:
          httpGet:
            path: /health
            port: 3000
          # Give npm install time to finish before liveness failures can restart the container
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /health
            port: 3000
          initialDelaySeconds: 5
          periodSeconds: 5
        resources:
          requests:
            memory: "128Mi"
            cpu: "100m"
          limits:
            memory: "256Mi"
            cpu: "200m"
---
apiVersion: v1
kind: Service
metadata:
  name: web-app-service
spec:
  selector:
    app: web-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 3000
  type: LoadBalancer
Deploy the Complete Stack
# Deploy the database
kubectl apply -f postgres-config.yaml
# Deploy the application
kubectl apply -f app-deployment.yaml
# Check everything is running
kubectl get all
# Test the application
kubectl port-forward service/web-app-service 8080:80
# Scale the application
kubectl scale deployment web-app-deployment --replicas=5
# Watch the scaling in action
kubectl get pods -l app=web-app --watch
Chapter 6: Monitoring and Debugging
Essential Debugging Commands
# View cluster information
kubectl cluster-info
# Get all resources in current namespace
kubectl get all
# Describe resources for detailed information
kubectl describe pod <pod-name>
kubectl describe service <service-name>
kubectl describe deployment <deployment-name>
# View logs
kubectl logs <pod-name>
kubectl logs -f deployment/web-app-deployment
# Execute commands in pods
kubectl exec -it <pod-name> -- /bin/bash
# Check resource usage
kubectl top nodes
kubectl top pods
# Debug network issues
kubectl run debug-pod --image=busybox:1.36 --rm -it -- sh
# Inside the pod:
# nslookup web-app-service
# wget -O- http://web-app-service
Health Checks and Monitoring
Add proper health checks to your deployments:
spec:
  containers:
  - name: app
    image: your-app:latest
    livenessProbe:
      httpGet:
        path: /health
        port: 3000
      initialDelaySeconds: 30
      periodSeconds: 10
      failureThreshold: 3
    readinessProbe:
      httpGet:
        path: /ready
        port: 3000
      initialDelaySeconds: 5
      periodSeconds: 5
      successThreshold: 1
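When a probe fails, the kubelet records an event on the pod, which is usually the quickest way to see what's wrong (a quick sketch with a placeholder pod name):
# Probe configuration and recent failures for a pod
kubectl describe pod <pod-name> | grep -iE -A3 'liveness|readiness'
# Cluster events, newest last
kubectl get events --sort-by=.lastTimestamp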
Advanced Topics for Further Learning
Now that you've mastered the basics, here are areas to explore next:
Ingress Controllers
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: myapp.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-app-service
            port:
              number: 80
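Reaching this Ingress locally requires an ingress controller and a hosts entry for myapp.local. On Minikube that looks roughly like this (a sketch assuming the manifest above is saved as web-app-ingress.yaml):
# Enable the bundled NGINX ingress controller (Minikube)
minikube addons enable ingress
# Map myapp.local to the cluster, apply the Ingress, and test
echo "$(minikube ip) myapp.local" | sudo tee -a /etc/hosts
kubectl apply -f web-app-ingress.yaml
curl http://myapp.local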
Horizontal Pod Autoscaling
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-app-deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
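The autoscaler needs CPU metrics from metrics-server to make decisions. On a local cluster you can enable it and watch the HPA react (a sketch assuming the manifest above is saved as web-app-hpa.yaml):
# Provide the resource metrics the HPA relies on (Minikube)
minikube addons enable metrics-server
kubectl apply -f web-app-hpa.yaml
# Watch current vs. target CPU utilization and the replica count
kubectl get hpa web-app-hpa --watch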
Best Practices You Should Follow
1. Resource Management: Always set resource requests and limits
2. Health Checks: Implement liveness and readiness probes
3. Configuration: Use ConfigMaps and Secrets, never hardcode sensitive data
4. Labels: Use consistent labeling for organization and selection
5. Namespaces: Organize resources using namespaces in larger deployments (see the sketch after this list)
6. Security: Follow principle of least privilege with RBAC
7. Monitoring: Implement proper logging and monitoring from the start
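Namespaces in particular cost nothing to try (a minimal sketch; the namespace name learning is arbitrary):
# Create a namespace and deploy into it instead of default
kubectl create namespace learning
kubectl apply -f nginx-deployment.yaml -n learning
# List only what lives in that namespace
kubectl get all -n learning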
What's Next?
You've now built a solid foundation in Kubernetes! Here are suggested next steps:
1. Learn Helm: Package manager for Kubernetes
2. Explore Operators: Extend Kubernetes functionality
3. Study GitOps: Automated deployment patterns with ArgoCD or Flux
4. Practice Security: RBAC, NetworkPolicies, and Pod Security Standards
5. Cloud Integration: Learn cloud-specific features (EKS, GKE, AKS)
Troubleshooting Common Issues
Pod Stuck in Pending State:
kubectl describe pod <pod-name>
# Check for resource constraints or node selector issues
Service Not Accessible:
kubectl get endpoints <service-name>
# Verify pod labels match service selector
Image Pull Errors:
kubectl describe pod <pod-name>
# Check image name, tag, and registry accessibility
Practice Exercises
1. Create a multi-tier application with frontend, backend, and database
2. Implement rolling updates with zero downtime
3. Set up monitoring with resource limits and alerts
4. Practice disaster recovery by simulating node failures
5. Experiment with different service types and understand their use cases
---
Ready to Take Your Kubernetes Skills Further?
This hands-on guide has given you practical experience with the core concepts of Kubernetes. The best way to master these skills is through practice and experimentation.
At Hindizubaan Technologies, we help organizations implement robust, scalable Kubernetes solutions. Whether you're planning a migration to Kubernetes or optimizing existing deployments, contact us at [email protected] to learn how we can accelerate your cloud-native journey.
---
Hindizubaan Technologies specializes in platform engineering and cloud-native solutions. Our expertise spans from foundational Kubernetes implementations to advanced GitOps workflows and AI-powered operational automation.