kind — Local Kubernetes Cluster — Part 6

In this article, we will look at how we can get Services of type LoadBalancer in our kind cluster using MetalLB

Image source — kind documentation

Introduction

  • MetalLB provides a network load balancer implementation for our cluster
  • It allows us to create Kubernetes Services of type LoadBalancer in clusters that don’t run on a cloud provider
  • We will set up MetalLB in layer 2 (L2) mode
  • As long as the address pool lies within the Docker network that kind uses, we can send traffic to the load balancer’s external IP directly from the host

Usage

  • Create a simple cluster using the below configuration file
$ cat kind.yml 
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: dev
nodes:
- role: control-plane
- role: worker
- role: worker
$ kind create cluster --config kind.yml 
Creating cluster "dev" ...
✓ Ensuring node image (kindest/node:v1.26.3) 🖼
✓ Preparing nodes 📦 📦 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
✓ Joining worker nodes 🚜
Set kubectl context to "kind-dev"
You can now use your cluster with:

kubectl cluster-info --context kind-dev
$ kubectl get nodes
NAME                STATUS   ROLES           AGE   VERSION
dev-control-plane   Ready    control-plane   68s   v1.26.3
dev-worker          Ready    <none>          37s   v1.26.3
dev-worker2         Ready    <none>          37s   v1.26.3
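  • If you are scripting these steps, it can help to wait until every node reports Ready before moving on. A minimal sketch (the 120s timeout is an arbitrary choice, not something kind requires):
# Block until all nodes report the Ready condition, or fail after 2 minutes
$ kubectl wait --for=condition=Ready nodes --all --timeout=120s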
  • Deploy MetalLB using the default manifests and verify the components are up and running
$ kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.7/config/manifests/metallb-native.yaml
namespace/metallb-system created
customresourcedefinition.apiextensions.k8s.io/addresspools.metallb.io created
customresourcedefinition.apiextensions.k8s.io/bfdprofiles.metallb.io created
customresourcedefinition.apiextensions.k8s.io/bgpadvertisements.metallb.io created
customresourcedefinition.apiextensions.k8s.io/bgppeers.metallb.io created
customresourcedefinition.apiextensions.k8s.io/communities.metallb.io created
customresourcedefinition.apiextensions.k8s.io/ipaddresspools.metallb.io created
customresourcedefinition.apiextensions.k8s.io/l2advertisements.metallb.io created
serviceaccount/controller created
serviceaccount/speaker created
role.rbac.authorization.k8s.io/controller created
role.rbac.authorization.k8s.io/pod-lister created
clusterrole.rbac.authorization.k8s.io/metallb-system:controller created
clusterrole.rbac.authorization.k8s.io/metallb-system:speaker created
rolebinding.rbac.authorization.k8s.io/controller created
rolebinding.rbac.authorization.k8s.io/pod-lister created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:controller created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:speaker created
secret/webhook-server-cert created
service/webhook-service created
deployment.apps/controller created
daemonset.apps/speaker created
validatingwebhookconfiguration.admissionregistration.k8s.io/metallb-webhook-configuration created
$ kubectl -n metallb-system get pods
NAME                          READY   STATUS    RESTARTS   AGE
controller-577b5bdfcc-p7sb5   1/1     Running   0          76s
speaker-cgmm4                 1/1     Running   0          76s
speaker-gwfqr                 1/1     Running   0          76s
speaker-jk684                 1/1     Running   0          76s
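  • If the MetalLB configuration applied in the next step is rejected with a webhook connection error, the admission webhook is usually just not ready yet. A hedged sketch of a readiness wait — the app=metallb label is what the upstream manifests use, but verify it with --show-labels if in doubt:
# Wait until the MetalLB controller and speaker pods are Ready
$ kubectl wait --namespace metallb-system \
    --for=condition=ready pod \
    --selector=app=metallb \
    --timeout=90s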
  • As mentioned in the introduction, we are using MetalLB’s layer 2 mode.
    To complete the layer 2 configuration, we need to give MetalLB a range of IP addresses that it controls. This range needs to lie within the Docker network that kind uses.
$ docker network inspect -f '{{.IPAM.Config}}' kind
[{172.18.0.0/16 172.18.0.1 map[]} {fc00:f853:ccd:e793::/64 fc00:f853:ccd:e793::1 map[]}]
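  • The IPv4 entry (172.18.0.0/16 here) is the subnet we care about. A small variation of the same command prints only the subnets, in case the full IPAM block is noisy (a sketch, assuming the network keeps kind’s default name):
# Print just the subnet of each IPAM config entry, one per line
$ docker network inspect -f '{{range .IPAM.Config}}{{.Subnet}}{{"\n"}}{{end}}' kind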
  • We want our load balancer IP range to come from this subnet, so we configure MetalLB to use 172.18.255.200 to 172.18.255.250 by creating IPAddressPool and L2Advertisement resources.
  • Create the necessary MetalLB resources using the below manifest file.
$ cat metallb.yml 
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: kind
  namespace: metallb-system
spec:
  addresses:
  - 172.18.255.200-172.18.255.250

---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: kind
  namespace: metallb-system
spec:
  ipAddressPools:
  - kind
$ kubectl apply -f metallb.yml 
ipaddresspool.metallb.io/kind unchanged
l2advertisement.metallb.io/kind created
$ kubectl -n metallb-system get ipaddresspools
NAME   AUTO ASSIGN   AVOID BUGGY IPS   ADDRESSES
kind   true          false             ["172.18.255.200-172.18.255.250"]

$ kubectl -n metallb-system get l2advertisements
NAME   IPADDRESSPOOLS   IPADDRESSPOOL SELECTORS   INTERFACES
kind   ["kind"]

Deploy our Application

  • Create an Nginx pod using the below manifest file and verify its status
$ cat nginx.yml 
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    name: nginx
    ports:
    - containerPort: 80
$ kubectl apply -f nginx.yml 
pod/nginx created
$ kubectl get pods nginx
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          23s
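  • A bare Pod is enough for this demo, but the same labels work just as well on a Deployment if you want replicas behind the load balancer. A sketch of an equivalent Deployment (the name and replica count are arbitrary choices here):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    run: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      run: nginx
  template:
    metadata:
      labels:
        run: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
  • The LoadBalancer service created below selects on run: nginx, so it would pick up these replicas in exactly the same way.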
  • Expose the Nginx pod as a LoadBalancer service using the below manifest file
$ cat nginx-loadbalancer.yml 
apiVersion: v1
kind: Service
metadata:
  labels:
    run: nginx
  name: nginx
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: nginx
  type: LoadBalancer
$ kubectl apply -f nginx-loadbalancer.yml 
service/nginx created
  • Check the created nginx service; we can see an IP address from our pool in the EXTERNAL-IP column
$ kubectl get svc nginx 
NAME    TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)        AGE
nginx   LoadBalancer   10.96.43.161   172.18.255.200   80:30433/TCP   30s
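  • MetalLB simply assigned the first free address in the pool, which is why we got .200. If you want a specific address instead, MetalLB v0.13 documents a metallb.universe.tf/loadBalancerIPs annotation on the Service; a sketch (the chosen IP is an arbitrary example and must fall inside the pool):
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    run: nginx
  annotations:
    # Hypothetical pinned address; must be within the IPAddressPool range
    metallb.universe.tf/loadBalancerIPs: 172.18.255.210
spec:
  type: LoadBalancer
  selector:
    run: nginx
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80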
  • Access the application using the external IP and port
$ curl http://172.18.255.200:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
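  • If curl does not respond, two quick checks narrow things down: confirm the service has endpoints, and confirm the request actually reached nginx by looking at its access log.
# The ENDPOINTS column should list the nginx pod IP on port 80
$ kubectl get endpoints nginx
# nginx writes its access log to stdout, so the curl above should show up here
$ kubectl logs nginx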

Cleanup

  • Delete the cluster after use
$ kind delete cluster --name dev
Deleting cluster "dev" ...
Deleted nodes: ["dev-worker2" "dev-control-plane" "dev-worker"]
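  • If you only want to remove MetalLB and keep the cluster, deleting with the same manifest we applied earlier also works:
$ kubectl delete -f https://raw.githubusercontent.com/metallb/metallb/v0.13.7/config/manifests/metallb-native.yaml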