Architecture Overview: Kubernetes with Red Hat Enterprise Linux 7.1
What's this document?
Kubernetes is now supported with Red Hat Enterprise Linux 7.1 (RHEL 7.1)!
This document describes the architecture of Kubernetes as provided
with RHEL 7.1.
The description of OpenShift v3 is based on the Beta release. Details may
change in the GA version.
$ who am i
Etsuji Nakai
– Senior solution architect and cloud evangelist at Red Hat.
– The author of the “Professional Linux Systems” series.
• Translation offers from publishers are welcome ;-)
[Book covers: “Professional Linux Systems: Technology for Next Decade”, “Professional Linux Systems: Deployment and Management”, “Professional Linux Systems: Network Management”, “Self-study Linux: Deploy and Manage by yourself”]
Contents
Architecture of Kubernetes
Container deployment model
Definition file examples
Feature extension of OpenShift v3
References
Server configuration
[Figure: a single Kubernetes Master manages multiple Kubernetes Nodes (minions), each running Docker. etcd serves as the backend database (KVS), and a Docker Registry stores the container images. More minions can be added if necessary.]
Kubernetes manages multiple nodes (minions) from a single master.
– Clustering of multiple masters is not available at the moment. You may use an
active-standby configuration with standard HA tools for high availability.
– etcd (KVS) is used as the backend database. It can be configured as a scale-out cluster.
Network configuration
[Figure: etcd, the Kubernetes Master, the Docker Registry, and the minions all connect to a single service network (192.168.122.0/24); the minions' docker0 bridges join an internal network (10.1.0.0/16) configured as an overlay network.]
The physical network is simple: Kubernetes works just by connecting all servers to a
single service network.
However, you need to create an internal network for container communication, using an
overlay network.
– You may use Flannel, Open vSwitch, etc. as the overlay technology.
Internal network details
The internal network needs to be prepared independently of Kubernetes.
– Flannel is the most convenient tool for this purpose.
Flannel configures the internal network as follows:
– It assigns non-overlapping subnets to the Linux bridge (docker0) of each minion
(e.g. 10.1.x.0/24 with x=1,2,3,...).
– It creates a virtual interface "flannel.1" which works as a gateway to the other minions.
– The Linux kernel on each minion transfers packets from/to flannel.1 using VXLAN
encapsulation. (The Flannel daemon "flanneld" provides the information the kernel
needs for VXLAN processing.)
[Figure: minion01 and minion02 each run flanneld; their docker0 bridges use 10.1.1.0/24 (bridge address 10.1.1.1) and 10.1.2.0/24 (bridge address 10.1.2.1) respectively, while flannel.1 acts as the gateway to 10.1.0.0/16, encapsulating traffic over eth0.]
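The subnet assignment described above can be sketched in a few lines of Python. This is an illustrative sketch only: Flannel itself leases subnets through etcd, and the minion names here are made up.

```python
import ipaddress

# Carve non-overlapping /24 subnets for each minion's docker0 bridge
# out of the 10.1.0.0/16 internal network, Flannel-style.
internal = ipaddress.ip_network("10.1.0.0/16")
subnets = internal.subnets(new_prefix=24)
next(subnets)  # skip 10.1.0.0/24 so minions get 10.1.x.0/24 with x=1,2,...

minions = ["minion01", "minion02"]
assignment = {m: next(subnets) for m in minions}
for minion, subnet in assignment.items():
    bridge_ip = next(subnet.hosts())  # first usable address -> docker0 bridge
    print(minion, subnet, bridge_ip)
```

This reproduces the addressing in the diagram: minion01 gets 10.1.1.0/24 with bridge address 10.1.1.1, and minion02 gets 10.1.2.0/24 with 10.1.2.1.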
External access
[Figure: API requests go to the Kubernetes Master, image uploads go to the Docker Registry, and service access goes to the minions, all via the service network; containers communicate via the internal network.]
There are the following cases of external access:
– API requests are sent to the master.
– Services running in containers are accessed via the minions' external IPs through the
proxy mechanism (described later).
– The Docker registry is a component independent of Kubernetes. You may use a
registry server running in a container.
Pod
Kubernetes launches containers in units of pods. You specify the container images
inside a pod when launching a new pod.
– You can specify a single image when you want to launch a single container.
– Kubernetes monitors the status of the containers inside pods, and launches a new
one in the case of failure.
[Figure: a pod with containers A and B sharing a single virtual NIC attached to the docker0 Linux bridge.]
When you launch a container using Docker, a single NIC and private IP are assigned to it.
However, with some options, you can launch multiple containers sharing the same NIC
and private IP.
Kubernetes supports this configuration, and a group of containers sharing the same NIC
is called a "pod". You can aggregate containers which need to communicate via localhost
into a single pod.
– e.g. a pod with a PostgreSQL container and a pgadmin container.
– e.g. a pod with an application container which sends logs to syslog, and an rsyslogd
container.
Replication Controller
A replication controller activates the specified number of pods with the same
configuration. The typical use case is running multiple web servers for load balancing.
– The scheduler decides which minions are used to launch the pods.
– A new pod is launched in the case of failure, to keep the number of active pods.
– The number of pods can be changed dynamically. You may add an auto-scaling
mechanism on top of this.
You can launch a single pod with or without a replication controller.
– If you launch a pod with a replication controller (with "number = 1"), you can change
the number of pods later.
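The behavior described above amounts to a reconciliation loop: compare the desired replica count with the pods currently matching the selector, and start or stop pods accordingly. A toy Python sketch (a hypothetical helper, not the real controller logic):

```python
def reconcile(desired_replicas, running_pods):
    """Return the action a replication controller would take to keep
    the number of active pods equal to the desired count."""
    diff = desired_replicas - len(running_pods)
    if diff > 0:
        return ("start", diff)   # a pod failed, or replicas were increased
    if diff < 0:
        return ("stop", -diff)   # replicas were decreased
    return ("none", 0)

print(reconcile(2, ["apache-1"]))              # a failed pod gets replaced
print(reconcile(2, ["apache-1", "apache-2"]))  # desired state already reached
```

Changing the replica count dynamically is just a change of `desired_replicas`; the same loop then converges the cluster to the new state.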
Service
You need to define a service so that you can access the containers inside pods. A
private (and optionally public) IP is assigned to each service.
– You define a single service which aggregates the multiple pods running the same
image. Access to the "IP + port" associated with the service is forwarded to the
backend pods in a round-robin manner.
When defining a service, you need to explicitly specify a port number; a "private IP" is
assigned automatically. The private IP is used for access from other pods (not for
external use).
– Access to the private IP is received by the proxy daemon running on the local minion,
and forwarded to the backend pods.
– When launching a new pod, the private IPs and ports of the existing services are set in
environment variables inside the new containers.
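For example, a service named "frontend" would show up inside new containers as environment variables along these lines. This is a sketch: the exact variable set depends on the Kubernetes version, and the private IP below is made up.

```python
def service_env_vars(service_id, private_ip, port):
    # Derive Kubernetes-style environment variable names from the service id.
    prefix = service_id.upper().replace("-", "_")
    return {
        prefix + "_SERVICE_HOST": private_ip,
        prefix + "_SERVICE_PORT": str(port),
    }

env = service_env_vars("frontend", "10.254.0.5", 999)
print(env)
```

A container can thus find its backend services by name without hard-coding any addresses.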
[Figure: on each minion, the local proxy daemon receives packets sent to the service's private IP and forwards them, round-robin via the internal network, to pods on any minion.]
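The round-robin distribution performed by the proxy daemon can be sketched as follows (illustrative only: the backend pod addresses are made up, and the real proxy forwards TCP connections rather than picking strings):

```python
import itertools

# Backend pods currently matching the service's selector (hypothetical).
backends = ["10.1.1.2:80", "10.1.2.2:80", "10.1.3.2:80"]
_next_backend = itertools.cycle(backends)

def pick_backend():
    # Each new connection to the service IP+port goes to the next pod in turn.
    return next(_next_backend)

chosen = [pick_backend() for _ in range(4)]
print(chosen)  # wraps around to the first backend after the last one
```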
External access to services
[Figure: external users access the minions' public IPs; the proxy daemon on each minion receives packets to the service port and forwards them, round-robin via the internal network, to the backend pods.]
When defining a service, you need to specify "Public IPs" if you want to make it
accessible to external users.
– The public IPs correspond to the minions' IP addresses from which external users
can access the service.
– Packets to the corresponding minions (on the service port) are received by the
proxy daemon, and forwarded to the backend pods.
You can specify multiple public IPs for each service.
– This lets external users access the service via multiple minions, so that a specific
minion does not become a SPOF.
– An external mechanism to select and load-balance across minions is required.
Typically, you can use DNS load balancing.
Launching a single pod
The following is an example definition to launch a single pod.
– Resources defined in Kubernetes can be associated with any number of (key, value)
labels. Labels are used for references from other resources.
– Resources defined in Kubernetes are associated with a namespace. Only resources
in the same namespace can refer to each other.
{
"kind": "Pod",
"id": "apache",
"apiVersion": "v1beta1",
"labels": { "name": "apache" },
"namespace": "default",
"desiredState": {
"manifest": {
"id": "apache",
"restartPolicy": { "always": {} },
"version": "v1beta1",
"volumes": null,
"containers": [
{
"image": "fedora/apache",
"name": "my-fedora-apache",
"ports": [ { "containerPort": 80, "protocol": "TCP" } ]
}
]
}
}
}
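Since the definition file is plain JSON, it is easy to sanity-check before submitting it to the API. A minimal sketch that parses the container list of the pod definition above (only the fields relevant to the check are reproduced):

```python
import json

pod_json = """
{
  "kind": "Pod",
  "id": "apache",
  "desiredState": {
    "manifest": {
      "containers": [
        { "image": "fedora/apache",
          "name": "my-fedora-apache",
          "ports": [ { "containerPort": 80, "protocol": "TCP" } ] }
      ]
    }
  }
}
"""
pod = json.loads(pod_json)
containers = pod["desiredState"]["manifest"]["containers"]
for c in containers:
    print(c["name"], c["image"], [p["containerPort"] for p in c["ports"]])
```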
Launching multiple pods using a replication controller
The following is an example of launching multiple pods using a replication controller.
{
"kind": "ReplicationController",
"id": "apache-controller",
"apiVersion": "v1beta1",
"labels": { "name": "apache-controller" },
"namespace": "default",
"desiredState": {
"replicaSelector": { "name": "apache" },
"replicas": 2,
"podTemplate": {
"desiredState": {
"manifest": {
"id": "apache",
"containers": [
{
"image": "fedora/apache",
"name": "my-fedora-apache",
"ports": [ { "containerPort": 80, "protocol": "TCP" } ]
}
],
"restartPolicy": { "always": {} },
"version": "v1beta1",
"volumes": null
}
},
"labels": { "name": "apache" }
}
}
}
– "podTemplate" contains the definition of the pod; "replicaSelector" specifies the label
of the pods to be managed by this controller.
Associating a service with existing pods
The following is an example of associating a service with existing pods.
– A label is used to specify the backend pods.
– You need to specify the pair of ports (an externally visible port and the corresponding
container port).
– Public IPs are required if you need to make the service accessible to external users.
{
"kind": "Service",
"id": "frontend",
"apiVersion": "v1beta1",
"labels": { "name": "frontend" },
"namespace": "default",
"selector": { "name": "apache" },
"containerPort": 80,
"port": 999,
"publicIPs": [ "192.168.122.10", "192.168.122.11" ]
}
– "selector" specifies the label of the pods to associate with the service; "publicIPs"
lists the public IPs.
Feature extensions of OpenShift v3
OpenShift v3 utilizes Kubernetes as an internal engine. It will provide the following
feature extensions compared to "bare" Kubernetes.
– Internal network with Open vSwitch.
• Flannel is not well suited to low-latency communication. OpenShift v3 uses Open
vSwitch to provide a VXLAN overlay network with lower latency.
– Transparent service access with service URLs.
• External users otherwise need to use the minions' IP addresses to access services
running inside pods. OpenShift v3 associates a unique URL with each service, and
external users can access the service via that URL.
– Multi-tenancy.
• OpenShift v3 provides a multi-tenant interface utilizing the namespace
feature of Kubernetes.
– Source-to-Image automation.
• With bare Kubernetes, container images have to be built and uploaded outside of it.
OpenShift v3 provides an automated image-build feature: pushing source code
to git, running unit tests, building images, and uploading them to the registry.
References
OpenShift v3 Internal networking details
– http://www.slideshare.net/enakai/openshift-45465283