Load balancing is an important part of any resilient web application. Kubernetes supports a few options for external load balancing, but they are limited in features. After a brief discussion of those options and the features they lack, we’ll show how to build an advanced load balancing solution for Kubernetes on top of NGINX, utilizing Kubernetes features including Ingress, Annotations, and ConfigMap. We’ll conclude with a demo of how to use NGINX and NGINX Plus to expose services to the Internet.
Sched Link: http://sched.co/6Bc9
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
KubeCon EU 2016: Creating an Advanced Load Balancing Solution for Kubernetes with NGINX
1. Creating an Advanced Load Balancing Solution for Kubernetes with NGINX
Andrew Hutchings — Technical Product Manager, NGINX, Inc., @LinuxJedi
2. About LinuxJedi
• Kubernetes user for 4 days
• Worked at HP on OpenStack LBaaS and ATG
• Worked on several Open Source DBs
• Alopecia sufferer
3. Goals
• Basic and advanced load balancing
• Current load balancing options in Kubernetes
• Ingress resource
• Implementing an Ingress controller for NGINX
• Load balancing demo: exposing Kubernetes services to the Internet
4. Basic Load Balancing
A load balancer distributes requests among healthy servers
[Diagram: a load balancer (LB) in front of Server 1, Server 2, and Server 3]
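The idea on this slide can be sketched in a few lines of Go: a round-robin picker that distributes requests across servers while skipping any marked unhealthy. The server list, addresses, and health flags are illustrative, and a real balancer would set `Healthy` from active health checks.

```go
package main

import "fmt"

// Server is a backend with a health flag, as set by health checks.
type Server struct {
	Addr    string
	Healthy bool
}

// RoundRobin picks servers in turn, skipping unhealthy ones.
type RoundRobin struct {
	servers []*Server
	next    int
}

// Pick returns the next healthy server, or nil if none are available.
func (rr *RoundRobin) Pick() *Server {
	for i := 0; i < len(rr.servers); i++ {
		s := rr.servers[rr.next%len(rr.servers)]
		rr.next++
		if s.Healthy {
			return s
		}
	}
	return nil
}

func main() {
	rr := &RoundRobin{servers: []*Server{
		{Addr: "server1:80", Healthy: true},
		{Addr: "server2:80", Healthy: false}, // failed its health check
		{Addr: "server3:80", Healthy: true},
	}}
	for i := 0; i < 4; i++ {
		fmt.Println(rr.Pick().Addr) // alternates server1:80, server3:80
	}
}
```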
10. External: NodePort
apiVersion: v1
kind: Service
metadata:
  name: backend-service
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  selector:
    app: backend
$ kubectl create -f backend-service-nodeport.yaml
You have exposed your service on an external port on all nodes in your
cluster. If you want to expose this service to the external internet, you may
need to set up firewall rules for the service port(s) (tcp:31107) to serve traffic.
11. External: NodePort
Features
• TCP/UDP
• Health checks
[Diagram: kube-proxy and the same NodePort on every node, forwarding to backend pods (B) across the cluster]
24. NGINX Ingress Controller
1. Watch for Ingress resources
2. Watch for Services and Endpoints: to get the IP address of a service, or of its endpoints in the case of a headless service
3. Watch for Secrets
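The watch-and-reconcile shape of the steps above can be sketched without the Kubernetes client libraries: a channel of events stands in for client-go watches, and every event triggers a reconcile. The `Event` type and the `reconcile` body are illustrative; a real controller would render nginx.conf from the watched resources and reload NGINX.

```go
package main

import "fmt"

// Event is a simplified stand-in for a Kubernetes watch event on an
// Ingress, Service/Endpoints, or Secret resource.
type Event struct {
	Kind string // "Ingress", "Service", "Secret"
	Name string
}

// Controller tracks the resources it has seen; on every event it
// reconciles (in a real controller: regenerates the NGINX config
// and reloads NGINX).
type Controller struct {
	seen map[string]string // "Kind/Name" -> Name
}

func NewController() *Controller {
	return &Controller{seen: map[string]string{}}
}

// Run consumes events until the channel is closed, reconciling on each.
func (c *Controller) Run(events <-chan Event) {
	for ev := range events {
		c.seen[ev.Kind+"/"+ev.Name] = ev.Name
		c.reconcile()
	}
}

func (c *Controller) reconcile() {
	// Placeholder for: render nginx.conf, then `nginx -s reload`.
	fmt.Printf("reconciled %d resources\n", len(c.seen))
}

func main() {
	events := make(chan Event, 3)
	events <- Event{Kind: "Ingress", Name: "cafe-ingress"}
	events <- Event{Kind: "Secret", Name: "cafe-secret"}
	events <- Event{Kind: "Service", Name: "tea-svc"}
	close(events)
	NewController().Run(events)
}
```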
26. NGINX Ingress Controller
• NGINX Plus supports re-resolving DNS names at runtime every X seconds
• Doesn’t fail when a name can’t be resolved
• Simplifies implementation: no need to watch for Services and
Endpoints
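As a rough illustration of the NGINX Plus behavior described above: the `resolver` directive sets how often cached DNS answers are refreshed, and the `resolve` parameter on an upstream `server` re-resolves the service name at runtime. The resolver address and the 5-second interval below are assumptions, not values from the talk.

```nginx
# Assumed kube-dns address; "valid=5s" stands in for the talk's "every X seconds"
resolver 10.0.0.10 valid=5s;

upstream tea {
    # a shared memory zone is required for runtime re-resolution
    zone tea 64k;
    # NGINX Plus re-resolves this service DNS name at runtime,
    # so there is no need to watch Services and Endpoints
    server tea-svc.default.svc.cluster.local resolve;
}
```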
27. NGINX Ingress Controller
• As an example we took the GCE HTTP Load Balancer Ingress Controller: https://github.com/kubernetes/contrib/tree/master/ingress/controllers/gce
• Written in Go
• Different implementations for NGINX and NGINX Plus
• Deployed in the same container as NGINX; the Controller starts first and then launches NGINX
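A controller like this typically renders the NGINX configuration from a template. Here is a minimal sketch using Go's standard `text/template`; the `IngressRule` type and the server-block template are illustrative, not the project's actual ones.

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// IngressRule is a simplified view of one Ingress host rule.
type IngressRule struct {
	Host     string
	Upstream string // upstream/service backing this host
}

// serverTmpl is an illustrative template for one NGINX server block.
const serverTmpl = `server {
    listen 80;
    server_name {{.Host}};
    location / {
        proxy_pass http://{{.Upstream}};
    }
}
`

// RenderServer renders the NGINX server block for one Ingress rule.
func RenderServer(r IngressRule) (string, error) {
	t, err := template.New("server").Parse(serverTmpl)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := t.Execute(&buf, r); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	conf, err := RenderServer(IngressRule{Host: "cafe.example.com", Upstream: "tea-svc"})
	if err != nil {
		panic(err)
	}
	fmt.Print(conf)
}
```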
30. Demo
• tea-rc and tea-svc
• coffee-rc and headless coffee-svc
• Ingress resource cafe-ingress with TLS
• Secret cafe-secret
• NGINX Plus Ingress Controller nginx-plus-ingress-rc
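The demo's Ingress resource might look roughly like this: the resource, Secret, and service names come from the slides, while the API version, host, and paths are assumptions for illustration.

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cafe-ingress
spec:
  tls:
  - secretName: cafe-secret   # TLS cert/key from the demo's Secret
  rules:
  - host: cafe.example.com    # assumed hostname
    http:
      paths:
      - path: /tea
        backend:
          serviceName: tea-svc
          servicePort: 80
      - path: /coffee
        backend:
          serviceName: coffee-svc
          servicePort: 80
```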
31. NGINX Ingress Controller Wishlist
• Expose more NGINX features via Kubernetes resources (Annotations and ConfigMaps)
• Make it production-ready
• Improve it based on your feedback
32. The End
● Resources: http://tiny.cc/nginx-ingress
● NGINX: https://www.nginx.com/
● My site: http://linuxjedi.co.uk/
● Twitter: @LinuxJedi
● Freenode: LinuxJedi
● Email: linuxjedi@nginx.com