SERVICE DISCOVERY USING
ETCD, CONSUL, KUBERNETES
Presenter Name: Sreenivas Makam
Presented at: Open source Meetup Bangalore
Presentation Date: April 16, 2016
About me
• Senior Engineering Manager at Cisco Systems, Data Center group
• Personal blog at https://sreeninet.wordpress.com/ and my hacky code at https://github.com/smakam
• Author of the "Mastering CoreOS" book, published in February 2016 (https://www.packtpub.com/networking-and-servers/mastering-coreos)
• You can reach me on LinkedIn at https://in.linkedin.com/in/sreenivasmakam, Twitter handle: @srmakam
Death Star Architecture
Image from: http://www.slideshare.net/InfoQ/migrating-to-cloud-native-with-microservices
Sample Microservices Architecture
Image from: https://www.nginx.com/blog/introduction-to-microservices/
Monolith | Microservices
What should Service Discovery provide?
• Discovery – Services need to discover each other dynamically to get the IP address and port details needed to communicate with other services in the cluster.
• Health check – Only healthy services should participate in handling traffic; unhealthy services need to be dynamically pruned out.
• Load balancing – Traffic destined for a particular service should be dynamically load balanced across all instances providing that service.
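The three requirements above can be sketched together: a registry of instances, a health flag, and a round-robin balancer that only hands out healthy instances. This is an illustrative Python sketch, not tied to etcd, Consul or Kubernetes; all names in it are made up:

```python
import itertools

class ServiceRegistry:
    """Toy service registry: service name -> list of instance records."""
    def __init__(self):
        self._instances = {}   # name -> list of {"ip", "port", "healthy"}
        self._cursors = {}     # name -> round-robin counter

    def register(self, name, ip, port):
        self._instances.setdefault(name, []).append(
            {"ip": ip, "port": port, "healthy": True})

    def set_health(self, name, ip, healthy):
        for inst in self._instances.get(name, []):
            if inst["ip"] == ip:
                inst["healthy"] = healthy

    def discover(self, name):
        """Round-robin over healthy instances only; unhealthy ones are pruned."""
        healthy = [i for i in self._instances.get(name, []) if i["healthy"]]
        if not healthy:
            raise LookupError("no healthy instance for %s" % name)
        cursor = self._cursors.setdefault(name, itertools.count())
        inst = healthy[next(cursor) % len(healthy)]
        return inst["ip"], inst["port"]

registry = ServiceRegistry()
registry.register("web", "10.0.0.1", 80)
registry.register("web", "10.0.0.2", 80)
registry.set_health("web", "10.0.0.2", False)  # pruned from rotation
print(registry.discover("web"))                # only 10.0.0.1 while .2 is down
```

Real systems differ mainly in where each of these three pieces lives, which is what the next slides walk through.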
Client vs Server side Service discovery
Pictures from https://www.nginx.com/blog/service-discovery-in-a-microservices-architecture/
Client Discovery: the client talks to the Service registry and does load balancing itself; the client service needs to be Service-registry aware. E.g.: Netflix OSS.
Server Discovery: the client talks to a load balancer, and the load balancer talks to the Service registry; the client service need not be Service-registry aware. E.g.: Consul, AWS ELB.
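The difference boils down to who does the registry lookup. A minimal sketch, where a plain dict stands in for the registry and the addresses are invented:

```python
import random

# Stand-in for etcd/Consul: service name -> instance list
REGISTRY = {"payments": [("10.0.0.5", 8080), ("10.0.0.6", 8080)]}

def client_side_call(service):
    """Client-side: the caller queries the registry and balances itself."""
    instances = REGISTRY[service]        # client must be registry-aware
    ip, port = random.choice(instances)  # client-side load balancing
    return "GET http://%s:%d/" % (ip, port)

LB_ADDR = ("10.0.0.100", 80)             # well-known load-balancer address

def server_side_call(service):
    """Server-side: the caller only knows the LB; the LB consults the registry."""
    ip, port = LB_ADDR
    return "GET http://%s:%d/ (Host: %s)" % (ip, port, service)
```

In the client-side model every service carries discovery logic; in the server-side model that logic is centralized in the balancer.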
Service Discovery Components
• Service Registry – Maintains a database of services and provides an external API (HTTP/DNS) to interact with it. Typically implemented as a distributed key-value store.
• Registrator – Registers services dynamically to the Service registry by listening to service creation and deletion events.
• Health checker – Monitors service health dynamically and updates the Service registry appropriately.
• Load balancer – Distributes traffic destined for a service across its active instances.
Service discovery using etcd
• etcd can be used as the KV store for the Service registry.
• The service itself can directly update etcd, or a sidekick service can be used to update etcd with the service details.
• The sidekick service serves as the registrator.
• Other services can query the etcd database to do dynamic service discovery.
• The sidekick service does the health check for the main service.
Simple Discovery | Discovery using a sidekick service
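The sidekick pattern hinges on a TTL: the sidekick rewrites the key every cycle while the health check passes, and the key expires on its own if the sidekick dies. A minimal simulation of that behavior (time values are injected as arguments so the expiry logic is visible without sleeping; this is a sketch, not the etcd API):

```python
class TTLStore:
    """Toy etcd-like store where keys expire unless refreshed."""
    def __init__(self):
        self._data = {}  # key -> (value, expiry_time)

    def set(self, key, value, ttl, now):
        self._data[key] = (value, now + ttl)

    def delete(self, key):
        self._data.pop(key, None)

    def get(self, key, now):
        value, expiry = self._data.get(key, (None, -1))
        return value if now < expiry else None

def sidekick_tick(store, key, healthy, now):
    """One loop iteration of the sidekick: refresh on success, delete on failure."""
    if healthy:
        store.set(key, {"port": 8080}, ttl=30, now=now)
    else:
        store.delete(key)

store = TTLStore()
sidekick_tick(store, "/services/apache/10.0.0.1", healthy=True, now=0)
print(store.get("/services/apache/10.0.0.1", now=20))  # still registered
print(store.get("/services/apache/10.0.0.1", now=40))  # sidekick died -> expired
```

The systemd units on the next slides implement exactly this loop with `etcdctl set --ttl 30` every 20 seconds.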
Service discovery – etcd example
Apache service:
[Unit]
Description=Apache web server service on port %i
# Requirements
Requires=etcd2.service
Requires=docker.service
Requires=apachet-discovery@%i.service
# Dependency ordering
After=etcd2.service
After=docker.service
Before=apachet-discovery@%i.service
[Service]
# Let processes take awhile to start up (for first run Docker containers)
TimeoutStartSec=0
# Change killmode from "control-group" to "none" to let Docker remove
# work correctly.
KillMode=none
# Get CoreOS environmental variables
EnvironmentFile=/etc/environment
# Pre-start and Start
## Directives with "=-" are allowed to fail without consequence
ExecStartPre=-/usr/bin/docker kill apachet.%i
ExecStartPre=-/usr/bin/docker rm apachet.%i
ExecStartPre=/usr/bin/docker pull coreos/apache
ExecStart=/usr/bin/docker run --name apachet.%i -p ${COREOS_PUBLIC_IPV4}:%i:80 coreos/apache /usr/sbin/apache2ctl -D FOREGROUND
# Stop
ExecStop=/usr/bin/docker stop apachet.%i
Apache sidekick service:
[Unit]
Description=Apache web server on port %i etcd registration
# Requirements
Requires=etcd2.service
Requires=apachet@%i.service
# Dependency ordering and binding
After=etcd2.service
After=apachet@%i.service
BindsTo=apachet@%i.service
[Service]
# Get CoreOS environmental variables
EnvironmentFile=/etc/environment
# Start
## Test whether service is accessible and then register useful information
ExecStart=/bin/bash -c '\
while true; do \
  curl -f ${COREOS_PUBLIC_IPV4}:%i; \
  if [ $? -eq 0 ]; then \
    etcdctl set /services/apachet/${COREOS_PUBLIC_IPV4} \'{"host": "%H", "ipv4_addr": ${COREOS_PUBLIC_IPV4}, "port": %i}\' --ttl 30; \
  else \
    etcdctl rm /services/apachet/${COREOS_PUBLIC_IPV4}; \
  fi; \
  sleep 20; \
done'
# Stop
ExecStop=/usr/bin/etcdctl rm /services/apachet/${COREOS_PUBLIC_IPV4}
[X-Fleet]
# Schedule on the same machine as the associated Apache service
X-ConditionMachineOf=apachet@%i.service
Service discovery – etcd example (contd.)
3 node CoreOS cluster:
$ fleetctl list-machines
MACHINE IP METADATA
7a895214... 172.17.8.103 -
a4562fd1... 172.17.8.101 -
d29b1507... 172.17.8.102 -
Start 2 instances of the service:
fleetctl start apachet@8080.service apachet-discovery@8080.service
fleetctl start apachet@8081.service apachet-discovery@8081.service
See running services:
$ fleetctl list-units
UNIT MACHINE ACTIVE SUB
apachet-discovery@8080.service 7a895214.../172.17.8.103 active running
apachet-discovery@8081.service a4562fd1.../172.17.8.101 active running
apachet@8080.service 7a895214.../172.17.8.103 active running
apachet@8081.service a4562fd1.../172.17.8.101 active running
Check etcd database:
$ etcdctl ls / --recursive
/services
/services/apachet
/services/apachet/172.17.8.103
/services/apachet/172.17.8.101
$ etcdctl get /services/apachet/172.17.8.101
{"host": "core-01", "ipv4_addr": 172.17.8.101, "port": 8081}
$ etcdctl get /services/apachet/172.17.8.103
{"host": "core-03", "ipv4_addr": 172.17.8.103, "port": 8080}
Etcd with Load balancing
• The previous etcd example demonstrates the service database and health check; it does not provide DNS or load balancing.
• Load balancing can be achieved by combining etcd with confd or haproxy.
Etcd with confd – reference: https://www.digitalocean.com/community/tutorials/how-to-use-confd-and-etcd-to-dynamically-reconfigure-services-in-coreos
Etcd with haproxy – reference: http://adetante.github.io/articles/service-discovery-haproxy/
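Both approaches boil down to the same loop: watch the etcd keys, render a load-balancer config from them, reload the balancer. The rendering step can be sketched as follows (the haproxy-style output format and key/value shapes mirror the etcd example above but are simplified assumptions, not confd's actual template language):

```python
import json

def render_backend(entries):
    """Render haproxy-style backend lines from etcd service entries.

    entries: dict of etcd key -> JSON string, like the /services/apachet keys.
    """
    lines = ["backend apachet", "    balance roundrobin"]
    for key, raw in sorted(entries.items()):
        info = json.loads(raw)
        addr = key.rsplit("/", 1)[-1]  # last path segment is the instance IP
        lines.append("    server %s %s:%d check" % (addr, addr, info["port"]))
    return "\n".join(lines)

entries = {
    "/services/apachet/172.17.8.101": '{"host": "core-01", "port": 8081}',
    "/services/apachet/172.17.8.103": '{"host": "core-03", "port": 8080}',
}
print(render_backend(entries))
```

Confd automates exactly this: it re-renders the template whenever the watched keys change and then runs a reload command.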
Consul
• Has a distributed key-value store for storing the Service database.
• Provides comprehensive service health checking using both built-in checks and user-provided custom checks.
• Provides a REST-based HTTP API for external interaction.
• The Service database can be queried using DNS.
• Does dynamic load balancing.
• Supports a single data center and can be scaled to support multiple data centers.
• Integrates well with Docker.
• Integrates well with other HashiCorp tools.
Consul health check options
Consul provides the following options for health checks:
• Script-based check – A user-provided script is run periodically to verify the health of the service.
• HTTP-based check – A periodic HTTP check is done against the service IP and endpoint.
• TCP-based check – A periodic TCP check is done against the service IP and specified port.
• TTL-based check – The previous schemes are driven from the Consul server toward the service. In this case, the service is expected to periodically refresh a TTL counter in the Consul server.
• Docker container-based check – The health check application is packaged as a container, and Consul invokes the container periodically to do the health check.
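Two of these conventions are easy to pin down concretely. Script checks map the script's exit code to a status (exit 0 = passing, exit 1 = warning, anything else = critical), and a TTL check flips to critical if the service stops refreshing it. A small sketch of both rules (the function names are mine, not Consul's):

```python
def script_check_status(exit_code):
    """Consul's exit-code convention for script-based checks."""
    if exit_code == 0:
        return "passing"
    if exit_code == 1:
        return "warning"
    return "critical"

def ttl_check_status(last_refresh, ttl, now):
    """A TTL check stays passing only while the service keeps refreshing it."""
    return "passing" if now - last_refresh < ttl else "critical"

print(script_check_status(0))                   # passing
print(ttl_check_status(100, ttl=30, now=120))   # passing: refreshed 20s ago
print(ttl_check_status(100, ttl=30, now=140))   # critical: TTL expired
```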
Sample application with Consul
[Diagram: an Ubuntu container (HTTP client) talks through Consul (load balancer, DNS, service registry) to Nginx Container1 and Nginx Container2.]
• Two nginx containers serve as the web servers; the Ubuntu container serves as the HTTP client.
• Consul load balances requests between the two nginx web servers.
• Consul is used as the service registry, load balancer, health checker and DNS server for this application.
Consul web Interface
The following picture shows the Consul GUI with:
• 2 instances of the "http" service and 1 instance of the "consul" service.
• Health checks passing for both services.
Consul with manual registration
Service files:
http1_checkhttp.json:
{
  "ID": "http1",
  "Name": "http",
  "Address": "172.17.0.3",
  "Port": 80,
  "check": {
    "http": "http://172.17.0.3:80",
    "interval": "10s",
    "timeout": "1s"
  }
}
http2_checkhttp.json:
{
  "ID": "http2",
  "Name": "http",
  "Address": "172.17.0.4",
  "Port": 80,
  "check": {
    "http": "http://172.17.0.4:80",
    "interval": "10s",
    "timeout": "1s"
  }
}
Register services:
curl -X PUT --data-binary @http1_checkhttp.json http://localhost:8500/v1/agent/service/register
curl -X PUT --data-binary @http2_checkhttp.json http://localhost:8500/v1/agent/service/register
Service status:
$ curl -s http://localhost:8500/v1/health/checks/http | jq .
[
  {
    "ModifyIndex": 424,
    "CreateIndex": 423,
    "Node": "myconsul",
    "CheckID": "service:http1",
    "Name": "Service 'http' check",
    "Status": "passing",
    "Notes": "",
    "Output": "",
    "ServiceID": "http1",
    "ServiceName": "http"
  },
  {
    "ModifyIndex": 427,
    "CreateIndex": 425,
    "Node": "myconsul",
    "CheckID": "service:http2",
    "Name": "Service 'http' check",
    "Status": "passing",
    "Notes": "",
    "Output": "",
    "ServiceID": "http2",
    "ServiceName": "http"
  }
]
Consul health check – Good status
$ dig @172.17.0.1 http.service.consul SRV

; <<>> DiG 9.9.5-3ubuntu0.7-Ubuntu <<>> @172.17.0.1 http.service.consul SRV
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 34138
;; flags: qr aa rd; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 2
;; WARNING: recursion requested but not available

;; QUESTION SECTION:
;http.service.consul. IN SRV

;; ANSWER SECTION:
http.service.consul. 0 IN SRV 1 1 80 myconsul.node.dc1.consul.
http.service.consul. 0 IN SRV 1 1 80 myconsul.node.dc1.consul.

;; ADDITIONAL SECTION:
myconsul.node.dc1.consul. 0 IN A 172.17.0.4
myconsul.node.dc1.consul. 0 IN A 172.17.0.3
Consul health check – Bad status
$ dig @172.17.0.1 http.service.consul SRV

; <<>> DiG 9.9.5-3ubuntu0.7-Ubuntu <<>> @172.17.0.1 http.service.consul SRV
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 23330
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; QUESTION SECTION:
;http.service.consul. IN SRV

;; ANSWER SECTION:
http.service.consul. 0 IN SRV 1 1 80 myconsul.node.dc1.consul.

;; ADDITIONAL SECTION:
myconsul.node.dc1.consul. 0 IN A 172.17.0.3
Consul with Registrator
• Manual registration of service details to Consul is error-prone.
• The Gliderlabs Registrator open source project (https://github.com/gliderlabs/registrator) takes care of automatically registering/deregistering services by listening to Docker events and updating the Consul registry.
• Choosing the service IP address for the registration is critical. There are two choices:
– With the internal IP option, the container IP and port number get registered with Consul. This approach is useful when we want to access the service registry from within a container. Following is an example of starting Registrator using the "internal" IP option:
• docker run -d -v /var/run/docker.sock:/tmp/docker.sock --net=host gliderlabs/registrator -internal consul://localhost:8500
– With the external IP option, the host IP and port number get registered with Consul. It is necessary to specify the IP address manually; if it is not specified, the loopback address gets registered. Following is an example of starting Registrator using the "external" IP option:
• docker run -d -v /var/run/docker.sock:/tmp/docker.sock gliderlabs/registrator -ip 192.168.99.100 consul://192.168.99.100:8500
• Following is an example of registering an "http" service with 2 nginx servers using an HTTP check:
– docker run -d -p :80 -e "SERVICE_80_NAME=http" -e "SERVICE_80_ID=http1" -e "SERVICE_80_CHECK_HTTP=/" --name=nginx1 nginx
– docker run -d -p :80 -e "SERVICE_80_NAME=http" -e "SERVICE_80_ID=http2" -e "SERVICE_80_CHECK_HTTP=/" --name=nginx2 nginx
• Following is an example of registering an "http" service with 2 nginx servers using a TTL check:
– docker run -d -p :80 -e "SERVICE_80_NAME=http" -e "SERVICE_80_ID=http1" -e "SERVICE_80_CHECK_TTL=30s" --name=nginx1 nginx
– docker run -d -p :80 -e "SERVICE_80_NAME=http" -e "SERVICE_80_ID=http2" -e "SERVICE_80_CHECK_TTL=30s" --name=nginx2 nginx
Kubernetes Architecture
Kubernetes Service discovery components:
• SkyDNS is used to map the Service name to an IP address.
• etcd is used as the KV store for the Service database.
• The kubelet does the health check, and the replication controller takes care of maintaining the Pod count.
• Kube-proxy takes care of load balancing traffic to the individual Pods.
Kubernetes Service
• A Service is an L3-routable object with an IP address and port number.
• A Service gets mapped to Pods using selector labels. In the example on the right, "MyApp" is the label.
• The Service port gets mapped to the targetPort in the Pod.
• Kubernetes supports headless services. In this case the Service is not allocated an IP address; this allows users to choose their own service registration option.
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "my-service"
  },
  "spec": {
    "selector": {
      "app": "MyApp"
    },
    "ports": [
      {
        "protocol": "TCP",
        "port": 80,
        "targetPort": 9376
      }
    ]
  }
}
Kubernetes Service discovery Internals
• The Service name gets mapped to a virtual IP and port using SkyDNS.
• Kube-proxy watches for Service changes and updates iptables. Virtual IP to Service IP/port remapping is achieved using iptables.
• Kubernetes does not use DNS-based load balancing, to avoid some of the known issues associated with it.
Picture source: http://kubernetes.io/docs/user-guide/services/
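Conceptually, kube-proxy's iptables rules amount to a NAT table keyed by (virtual IP, port) that rewrites each new connection to one of the service's pod endpoints. A toy version of that lookup (the addresses and table shape are invented for illustration, not taken from a real cluster):

```python
import random

# (service virtual IP, service port) -> list of (pod IP, targetPort)
NAT_TABLE = {
    ("10.0.0.11", 80): [("172.17.0.3", 9376), ("172.17.0.4", 9376)],
}

def dnat(dst_ip, dst_port):
    """Rewrite a connection's destination the way kube-proxy's rules do."""
    endpoints = NAT_TABLE.get((dst_ip, dst_port))
    if not endpoints:
        return dst_ip, dst_port          # not a service VIP: pass through
    return random.choice(endpoints)      # pick one pod endpoint per connection

print(dnat("10.0.0.11", 80))  # one of the two pod endpoints
print(dnat("8.8.8.8", 53))    # untouched: not a service VIP
```

Because the rewrite happens per connection in the kernel, clients see a stable virtual IP while traffic is spread across pods, which is why DNS-based balancing is unnecessary here.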
Kubernetes Health check
• The kubelet can implement a health check to determine whether a container is healthy.
• The kubelet will kill the container if it is not healthy; the replication controller takes care of maintaining the endpoint count.
• The health check is defined in the Pod manifest.
• Currently, 3 options are supported for health checks:
– HTTP health check – The kubelet calls a web hook. A status code between 200 and 399 is considered success; anything else is a failure.
– Container exec – The kubelet executes a command inside the container. Exit status 0 is considered a success.
– TCP socket – The kubelet attempts to open a socket to the container. If it can establish a connection, the container is considered healthy; if it can't, it is considered a failure.
Pod with HTTP health check:
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-healthcheck
spec:
  containers:
  - name: nginx
    image: nginx
    # defines the health checking
    livenessProbe:
      # an http probe
      httpGet:
        path: /_status/healthz
        port: 80
      # length of time to wait for a pod to initialize
      # after pod startup, before applying health checking
      initialDelaySeconds: 30
      timeoutSeconds: 1
    ports:
    - containerPort: 80
Kubernetes Service Discovery options
• For internal service discovery, Kubernetes provides two options:
– Environment variable: When a new Pod is created, environment variables for existing services are injected. This allows services to talk to each other, but it enforces an ordering on service creation.
– DNS: Every service registers with the DNS service; using this, new services can find and talk to other services. Kubernetes provides the kube-dns service for this.
• For external service discovery, Kubernetes provides two options:
– NodePort: Kubernetes exposes the service through special ports (30000–32767) on the node IP address.
– LoadBalancer: Kubernetes interacts with the cloud provider to create a load balancer that redirects traffic to the Pods. This approach is currently available with GCE.
REDIS_MASTER_SERVICE_HOST=10.0.0.11
REDIS_MASTER_SERVICE_PORT=6379
REDIS_MASTER_PORT=tcp://10.0.0.11:6379
REDIS_MASTER_PORT_6379_TCP=tcp://10.0.0.11:6379
REDIS_MASTER_PORT_6379_TCP_PROTO=tcp
REDIS_MASTER_PORT_6379_TCP_PORT=6379
REDIS_MASTER_PORT_6379_TCP_ADDR=10.0.0.11
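Consuming the environment-variable form of discovery is just string handling: uppercase the service name, swap dashes for underscores, and read the `_SERVICE_HOST`/`_SERVICE_PORT` pair. A sketch using the redis-master variables shown above (the helper name is mine):

```python
import os

def service_addr(name):
    """Resolve a service from Kubernetes-style environment variables."""
    prefix = name.upper().replace("-", "_")
    host = os.environ["%s_SERVICE_HOST" % prefix]
    port = int(os.environ["%s_SERVICE_PORT" % prefix])
    return host, port

# Simulate the variables Kubernetes would inject into the Pod:
os.environ["REDIS_MASTER_SERVICE_HOST"] = "10.0.0.11"
os.environ["REDIS_MASTER_SERVICE_PORT"] = "6379"
print(service_addr("redis-master"))  # ('10.0.0.11', 6379)
```

The ordering constraint follows directly from this mechanism: the variables only exist in Pods created after the service.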
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically
  # create an external load-balanced IP for the frontend service.
  type: LoadBalancer
  ports:
  # the port that this service should serve on
  - port: 80
  selector:
    app: guestbook
    tier: frontend
Docker Service Discovery
• With Docker 1.9, container-name-to-IP-address mapping was done by updating "/etc/hosts" automatically.
• With the Docker 1.10 release, Docker added an embedded DNS server which does container name resolution within a user-defined network.
• Name resolution can be done for the container name (--name), network alias (--net-alias) and container link (--link). The port number is not part of DNS.
• With the Docker 1.11 release, Docker added DNS-based random load balancing for containers with the same network alias.
• Docker's service discovery is fairly primitive: it has no health checks and no comprehensive load balancing.
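The embedded DNS behavior reduces to two tables: exact container names map to single IPs, while a shared network alias maps to several IPs that are handed out randomly. A sketch with made-up addresses matching the example on the next slide:

```python
import random

# Toy tables standing in for Docker's embedded DNS server state
ALIASES = {"nginxnet": ["172.20.0.2", "172.20.0.3"]}  # shared --net-alias
NAMES = {"nginx1": "172.20.0.2", "nginx2": "172.20.0.3"}  # --name entries

def resolve(name):
    """Exact names resolve directly; aliases are randomly load balanced."""
    if name in NAMES:
        return NAMES[name]
    if name in ALIASES:
        return random.choice(ALIASES[name])  # random DNS load balancing (1.11)
    raise LookupError(name)

print(resolve("nginx1"))    # always 172.20.0.2
print(resolve("nginxnet"))  # either backend, varying per lookup
```

This is why repeated pings to the alias below land on different containers, while pings to a container name always hit the same one.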
Docker DNS in release 1.11
Create 3 containers in the "fe" network:
docker run -d --name=nginx1 --net=fe --net-alias=nginxnet nginx
docker run -d --name=nginx2 --net=fe --net-alias=nginxnet nginx
docker run -ti --name=myubuntu --net=fe --link=nginx1:nginx1link --link=nginx2:nginx2link ubuntu bash
DNS by network alias:
root@4d2d6e34120d:/# ping -c1 nginxnet
PING nginxnet (172.20.0.3) 56(84) bytes of data.
64 bytes from nginx2.fe (172.20.0.3): icmp_seq=1 ttl=64
time=0.852 ms
root@4d2d6e34120d:/# ping -c1 nginxnet
PING nginxnet (172.20.0.2) 56(84) bytes of data.
64 bytes from nginx1.fe (172.20.0.2): icmp_seq=1 ttl=64
time=0.244 ms
DNS by Container name:
root@4d2d6e34120d:/# ping -c1 nginx1
PING nginx1 (172.20.0.2) 56(84) bytes of data.
64 bytes from nginx1.fe (172.20.0.2): icmp_seq=1 ttl=64
time=0.112 ms
root@4d2d6e34120d:/# ping -c1 nginx2
PING nginx2 (172.20.0.3) 56(84) bytes of data.
64 bytes from nginx2.fe (172.20.0.3): icmp_seq=1 ttl=64
time=0.090 ms
DNS by link name:
root@4d2d6e34120d:/# ping -c1 nginx1link
PING nginx1link (172.20.0.2) 56(84) bytes of data.
64 bytes from nginx1.fe (172.20.0.2): icmp_seq=1 ttl=64
time=0.049 ms
root@4d2d6e34120d:/# ping -c1 nginx2link
PING nginx2link (172.20.0.3) 56(84) bytes of data.
64 bytes from nginx2.fe (172.20.0.3): icmp_seq=1 ttl=64
time=0.253 ms
References
• https://www.nginx.com/blog/service-discovery-in-a-microservices-architecture/
• http://jasonwilder.com/blog/2014/02/04/service-discovery-in-the-cloud/
• http://progrium.com/blog/2014/07/29/understanding-modern-service-discovery-with-docker/
• http://artplustech.com/docker-consul-dns-registrator/
• https://jlordiales.me/2015/01/23/docker-consul/
• Mastering CoreOS book – https://www.packtpub.com/networking-and-servers/mastering-coreos
• Kubernetes Services – http://kubernetes.io/docs/user-guide/services/
• Docker DNS server – https://docs.docker.com/engine/userguide/networking/configure-dns/, https://github.com/docker/libnetwork/pull/974
DEMO
Deploying windows containers with kubernetesDeploying windows containers with kubernetes
Deploying windows containers with kubernetes
 
Orchestration Tool Roundup - Arthur Berezin & Trammell Scruggs
Orchestration Tool Roundup - Arthur Berezin & Trammell ScruggsOrchestration Tool Roundup - Arthur Berezin & Trammell Scruggs
Orchestration Tool Roundup - Arthur Berezin & Trammell Scruggs
 
Managing Your Security Logs with Elasticsearch
Managing Your Security Logs with ElasticsearchManaging Your Security Logs with Elasticsearch
Managing Your Security Logs with Elasticsearch
 
The Challenges of Becoming Cloud Native
The Challenges of Becoming Cloud NativeThe Challenges of Becoming Cloud Native
The Challenges of Becoming Cloud Native
 
PostgreSQL High-Availability and Geographic Locality using consul
PostgreSQL High-Availability and Geographic Locality using consulPostgreSQL High-Availability and Geographic Locality using consul
PostgreSQL High-Availability and Geographic Locality using consul
 
Orchestrating Docker with Terraform and Consul by Mitchell Hashimoto
Orchestrating Docker with Terraform and Consul by Mitchell Hashimoto Orchestrating Docker with Terraform and Consul by Mitchell Hashimoto
Orchestrating Docker with Terraform and Consul by Mitchell Hashimoto
 
Load Balancing Applications with NGINX in a CoreOS Cluster
Load Balancing Applications with NGINX in a CoreOS ClusterLoad Balancing Applications with NGINX in a CoreOS Cluster
Load Balancing Applications with NGINX in a CoreOS Cluster
 
From nothing to Prometheus : one year after
From nothing to Prometheus : one year afterFrom nothing to Prometheus : one year after
From nothing to Prometheus : one year after
 
Webinar - 2020-09-23 - Escape the ticketing turmoil with Teleport PagerDuty &...
Webinar - 2020-09-23 - Escape the ticketing turmoil with Teleport PagerDuty &...Webinar - 2020-09-23 - Escape the ticketing turmoil with Teleport PagerDuty &...
Webinar - 2020-09-23 - Escape the ticketing turmoil with Teleport PagerDuty &...
 
Service discovery like a pro (presented at reversimX)
Service discovery like a pro (presented at reversimX)Service discovery like a pro (presented at reversimX)
Service discovery like a pro (presented at reversimX)
 
Docker Logging and analysing with Elastic Stack - Jakub Hajek
Docker Logging and analysing with Elastic Stack - Jakub Hajek Docker Logging and analysing with Elastic Stack - Jakub Hajek
Docker Logging and analysing with Elastic Stack - Jakub Hajek
 
Docker Logging and analysing with Elastic Stack
Docker Logging and analysing with Elastic StackDocker Logging and analysing with Elastic Stack
Docker Logging and analysing with Elastic Stack
 
Exploring Async PHP (SF Live Berlin 2019)
Exploring Async PHP (SF Live Berlin 2019)Exploring Async PHP (SF Live Berlin 2019)
Exploring Async PHP (SF Live Berlin 2019)
 
Catalyst MVC
Catalyst MVCCatalyst MVC
Catalyst MVC
 
UEMB200: Next Generation of Endpoint Management Architecture and Discovery Se...
UEMB200: Next Generation of Endpoint Management Architecture and Discovery Se...UEMB200: Next Generation of Endpoint Management Architecture and Discovery Se...
UEMB200: Next Generation of Endpoint Management Architecture and Discovery Se...
 
Scaling Docker Containers using Kubernetes and Azure Container Service
Scaling Docker Containers using Kubernetes and Azure Container ServiceScaling Docker Containers using Kubernetes and Azure Container Service
Scaling Docker Containers using Kubernetes and Azure Container Service
 
ITB2019 NGINX Overview and Technical Aspects - Kevin Jones
ITB2019 NGINX Overview and Technical Aspects - Kevin JonesITB2019 NGINX Overview and Technical Aspects - Kevin Jones
ITB2019 NGINX Overview and Technical Aspects - Kevin Jones
 
Bare Metal to OpenStack with Razor and Chef
Bare Metal to OpenStack with Razor and ChefBare Metal to OpenStack with Razor and Chef
Bare Metal to OpenStack with Razor and Chef
 

More from Sreenivas Makam

GKE Tip Series - Usage Metering
GKE Tip Series -  Usage MeteringGKE Tip Series -  Usage Metering
GKE Tip Series - Usage MeteringSreenivas Makam
 
GKE Tip Series how do i choose between gke standard, autopilot and cloud run
GKE Tip Series   how do i choose between gke standard, autopilot and cloud run GKE Tip Series   how do i choose between gke standard, autopilot and cloud run
GKE Tip Series how do i choose between gke standard, autopilot and cloud run Sreenivas Makam
 
Kubernetes design principles, patterns and ecosystem
Kubernetes design principles, patterns and ecosystemKubernetes design principles, patterns and ecosystem
Kubernetes design principles, patterns and ecosystemSreenivas Makam
 
Top 3 reasons why you should run your Enterprise workloads on GKE
Top 3 reasons why you should run your Enterprise workloads on GKETop 3 reasons why you should run your Enterprise workloads on GKE
Top 3 reasons why you should run your Enterprise workloads on GKESreenivas Makam
 
How Kubernetes helps Devops
How Kubernetes helps DevopsHow Kubernetes helps Devops
How Kubernetes helps DevopsSreenivas Makam
 
Docker Networking Tip - Macvlan driver
Docker Networking Tip - Macvlan driverDocker Networking Tip - Macvlan driver
Docker Networking Tip - Macvlan driverSreenivas Makam
 
Docker Networking Overview
Docker Networking OverviewDocker Networking Overview
Docker Networking OverviewSreenivas Makam
 
Docker Networking - Common Issues and Troubleshooting Techniques
Docker Networking - Common Issues and Troubleshooting TechniquesDocker Networking - Common Issues and Troubleshooting Techniques
Docker Networking - Common Issues and Troubleshooting TechniquesSreenivas Makam
 
Compare Docker deployment options in the public cloud
Compare Docker deployment options in the public cloudCompare Docker deployment options in the public cloud
Compare Docker deployment options in the public cloudSreenivas Makam
 
Docker Mentorweek beginner workshop notes
Docker Mentorweek beginner workshop notesDocker Mentorweek beginner workshop notes
Docker Mentorweek beginner workshop notesSreenivas Makam
 
Docker Security Overview
Docker Security OverviewDocker Security Overview
Docker Security OverviewSreenivas Makam
 
Docker 1.11 Presentation
Docker 1.11 PresentationDocker 1.11 Presentation
Docker 1.11 PresentationSreenivas Makam
 
Container Monitoring with Sysdig
Container Monitoring with SysdigContainer Monitoring with Sysdig
Container Monitoring with SysdigSreenivas Makam
 
CI, CD with Docker, Jenkins and Tutum
CI, CD with Docker, Jenkins and TutumCI, CD with Docker, Jenkins and Tutum
CI, CD with Docker, Jenkins and TutumSreenivas Makam
 
Docker 1.9 Feature Overview
Docker 1.9 Feature OverviewDocker 1.9 Feature Overview
Docker 1.9 Feature OverviewSreenivas Makam
 
Docker Networking - Current Status and goals of Experimental Networking
Docker Networking - Current Status and goals of Experimental NetworkingDocker Networking - Current Status and goals of Experimental Networking
Docker Networking - Current Status and goals of Experimental NetworkingSreenivas Makam
 

More from Sreenivas Makam (18)

GKE Tip Series - Usage Metering
GKE Tip Series -  Usage MeteringGKE Tip Series -  Usage Metering
GKE Tip Series - Usage Metering
 
GKE Tip Series how do i choose between gke standard, autopilot and cloud run
GKE Tip Series   how do i choose between gke standard, autopilot and cloud run GKE Tip Series   how do i choose between gke standard, autopilot and cloud run
GKE Tip Series how do i choose between gke standard, autopilot and cloud run
 
Kubernetes design principles, patterns and ecosystem
Kubernetes design principles, patterns and ecosystemKubernetes design principles, patterns and ecosystem
Kubernetes design principles, patterns and ecosystem
 
My kubernetes toolkit
My kubernetes toolkitMy kubernetes toolkit
My kubernetes toolkit
 
Top 3 reasons why you should run your Enterprise workloads on GKE
Top 3 reasons why you should run your Enterprise workloads on GKETop 3 reasons why you should run your Enterprise workloads on GKE
Top 3 reasons why you should run your Enterprise workloads on GKE
 
How Kubernetes helps Devops
How Kubernetes helps DevopsHow Kubernetes helps Devops
How Kubernetes helps Devops
 
Docker Networking Tip - Macvlan driver
Docker Networking Tip - Macvlan driverDocker Networking Tip - Macvlan driver
Docker Networking Tip - Macvlan driver
 
Docker Networking Overview
Docker Networking OverviewDocker Networking Overview
Docker Networking Overview
 
Docker Networking - Common Issues and Troubleshooting Techniques
Docker Networking - Common Issues and Troubleshooting TechniquesDocker Networking - Common Issues and Troubleshooting Techniques
Docker Networking - Common Issues and Troubleshooting Techniques
 
Compare Docker deployment options in the public cloud
Compare Docker deployment options in the public cloudCompare Docker deployment options in the public cloud
Compare Docker deployment options in the public cloud
 
Docker Mentorweek beginner workshop notes
Docker Mentorweek beginner workshop notesDocker Mentorweek beginner workshop notes
Docker Mentorweek beginner workshop notes
 
Devops in Networking
Devops in NetworkingDevops in Networking
Devops in Networking
 
Docker Security Overview
Docker Security OverviewDocker Security Overview
Docker Security Overview
 
Docker 1.11 Presentation
Docker 1.11 PresentationDocker 1.11 Presentation
Docker 1.11 Presentation
 
Container Monitoring with Sysdig
Container Monitoring with SysdigContainer Monitoring with Sysdig
Container Monitoring with Sysdig
 
CI, CD with Docker, Jenkins and Tutum
CI, CD with Docker, Jenkins and TutumCI, CD with Docker, Jenkins and Tutum
CI, CD with Docker, Jenkins and Tutum
 
Docker 1.9 Feature Overview
Docker 1.9 Feature OverviewDocker 1.9 Feature Overview
Docker 1.9 Feature Overview
 
Docker Networking - Current Status and goals of Experimental Networking
Docker Networking - Current Status and goals of Experimental NetworkingDocker Networking - Current Status and goals of Experimental Networking
Docker Networking - Current Status and goals of Experimental Networking
 

Recently uploaded

How to convert PDF to text with Nanonets
How to convert PDF to text with NanonetsHow to convert PDF to text with Nanonets
How to convert PDF to text with Nanonetsnaman860154
 
Maximizing Board Effectiveness 2024 Webinar.pptx
Maximizing Board Effectiveness 2024 Webinar.pptxMaximizing Board Effectiveness 2024 Webinar.pptx
Maximizing Board Effectiveness 2024 Webinar.pptxOnBoard
 
08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking Men08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking MenDelhi Call girls
 
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptxHampshireHUG
 
Understanding the Laravel MVC Architecture
Understanding the Laravel MVC ArchitectureUnderstanding the Laravel MVC Architecture
Understanding the Laravel MVC ArchitecturePixlogix Infotech
 
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...Miguel Araújo
 
Unblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen FramesUnblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen FramesSinan KOZAK
 
Enhancing Worker Digital Experience: A Hands-on Workshop for Partners
Enhancing Worker Digital Experience: A Hands-on Workshop for PartnersEnhancing Worker Digital Experience: A Hands-on Workshop for Partners
Enhancing Worker Digital Experience: A Hands-on Workshop for PartnersThousandEyes
 
SQL Database Design For Developers at php[tek] 2024
SQL Database Design For Developers at php[tek] 2024SQL Database Design For Developers at php[tek] 2024
SQL Database Design For Developers at php[tek] 2024Scott Keck-Warren
 
#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024BookNet Canada
 
A Call to Action for Generative AI in 2024
A Call to Action for Generative AI in 2024A Call to Action for Generative AI in 2024
A Call to Action for Generative AI in 2024Results
 
Presentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreterPresentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreternaman860154
 
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024BookNet Canada
 
Handwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed textsHandwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed textsMaria Levchenko
 
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking MenDelhi Call girls
 
The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024Rafal Los
 
How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerThousandEyes
 
Data Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt RobisonData Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt RobisonAnna Loughnan Colquhoun
 
Breaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountBreaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountPuma Security, LLC
 
Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)Allon Mureinik
 

Recently uploaded (20)

How to convert PDF to text with Nanonets
How to convert PDF to text with NanonetsHow to convert PDF to text with Nanonets
How to convert PDF to text with Nanonets
 
Maximizing Board Effectiveness 2024 Webinar.pptx
Maximizing Board Effectiveness 2024 Webinar.pptxMaximizing Board Effectiveness 2024 Webinar.pptx
Maximizing Board Effectiveness 2024 Webinar.pptx
 
08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking Men08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking Men
 
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
 
Understanding the Laravel MVC Architecture
Understanding the Laravel MVC ArchitectureUnderstanding the Laravel MVC Architecture
Understanding the Laravel MVC Architecture
 
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
 
Unblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen FramesUnblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen Frames
 
Enhancing Worker Digital Experience: A Hands-on Workshop for Partners
Enhancing Worker Digital Experience: A Hands-on Workshop for PartnersEnhancing Worker Digital Experience: A Hands-on Workshop for Partners
Enhancing Worker Digital Experience: A Hands-on Workshop for Partners
 
SQL Database Design For Developers at php[tek] 2024
SQL Database Design For Developers at php[tek] 2024SQL Database Design For Developers at php[tek] 2024
SQL Database Design For Developers at php[tek] 2024
 
#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
 
A Call to Action for Generative AI in 2024
A Call to Action for Generative AI in 2024A Call to Action for Generative AI in 2024
A Call to Action for Generative AI in 2024
 
Presentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreterPresentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreter
 
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
 
Handwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed textsHandwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed texts
 
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
 
The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024
 
How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected Worker
 
Data Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt RobisonData Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt Robison
 
Breaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountBreaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path Mount
 
Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)
 

Service Discovery using etcd, Consul and Kubernetes

  • 1. SERVICE DISCOVERY USING ETCD, CONSUL, KUBERNETES
    Presenter Name: Sreenivas Makam
    Presented at: Open source Meetup Bangalore
    Presentation Date: April 16, 2016
  • 2. About me
    • Senior Engineering Manager at Cisco Systems, Data Center group
    • Personal blog can be found at https://sreeninet.wordpress.com/ and my hacky code at https://github.com/smakam
    • Author of the "Mastering CoreOS" book, published in Feb 2016 (https://www.packtpub.com/networking-and-servers/mastering-coreos)
    • You can reach me on LinkedIn at https://in.linkedin.com/in/sreenivasmakam, Twitter handle - @srmakam
  • 3. Death Star Architecture
    Image from: http://www.slideshare.net/InfoQ/migrating-to-cloud-native-with-microservices
  • 4. Sample Microservices Architecture
    Image from https://www.nginx.com/blog/introduction-to-microservices/
    (Left: Monolith; Right: Microservices)
  • 5. What should Service Discovery provide?
    • Discovery – Services need to discover each other dynamically to get the IP address and port details needed to communicate with other services in the cluster.
    • Health check – Only healthy services should participate in handling traffic; unhealthy services need to be dynamically pruned out.
    • Load balancing – Traffic destined to a particular service should be dynamically load balanced across all instances providing that service.
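The three responsibilities above can be sketched as a toy in-memory registry. This is purely illustrative (real systems such as etcd and Consul keep this state in a distributed key-value store); all names here are made up for the sketch.

```python
class ServiceRegistry:
    """Toy in-memory service registry: discovery, health pruning,
    and round-robin load balancing in one place (illustrative only)."""

    def __init__(self):
        self._instances = {}  # service name -> {(ip, port): healthy?}
        self._rr = {}         # service name -> round-robin counter

    def register(self, name, ip, port):
        # Discovery: a new instance announces itself.
        self._instances.setdefault(name, {})[(ip, port)] = True

    def set_health(self, name, ip, port, healthy):
        # Health check: mark an instance healthy/unhealthy.
        self._instances[name][(ip, port)] = healthy

    def discover(self, name):
        # Only healthy instances are returned; unhealthy ones are pruned.
        return sorted(ep for ep, ok in self._instances.get(name, {}).items() if ok)

    def pick(self, name):
        # Load balancing: round-robin over the healthy instances.
        healthy = self.discover(name)
        if not healthy:
            raise LookupError("no healthy instance of %s" % name)
        i = self._rr.get(name, 0) % len(healthy)
        self._rr[name] = i + 1
        return healthy[i]
```

A sidekick/registrator process would call `register`, a health checker would call `set_health`, and clients (or a load balancer) would call `pick`.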
  • 6. Client-side vs. Server-side Service discovery
    Pictures from https://www.nginx.com/blog/service-discovery-in-a-microservices-architecture/
    Client Discovery – The client talks to the Service registry and does load balancing itself. The client service needs to be Service registry aware. E.g.: Netflix OSS.
    Server Discovery – The client talks to a load balancer, and the load balancer talks to the Service registry. The client service need not be Service registry aware. E.g.: Consul, AWS ELB.
  • 7. Service Discovery Components
    • Service Registry – Maintains a database of services and provides an external API (HTTP/DNS) to interact with it. Typically implemented as a distributed key-value store.
    • Registrator – Registers services dynamically to the Service registry by listening to service creation and deletion events.
    • Health checker – Monitors service health dynamically and updates the Service registry appropriately.
    • Load balancer – Distributes traffic destined for the service across active participants.
  • 8. Service discovery using etcd
    • etcd can be used as the KV store for the Service registry.
    • The service itself can directly update etcd, or a sidekick service can update etcd with the service details.
    • The sidekick service serves as the registrator.
    • Other services can query the etcd database to do dynamic service discovery.
    • The sidekick service does the health check for the main service.
    (Diagrams: Simple Discovery; Discovery using a sidekick service)
  • 9. Service discovery – etcd example

    Apache service:

    [Unit]
    Description=Apache web server service on port %i
    # Requirements
    Requires=etcd2.service
    Requires=docker.service
    Requires=apachet-discovery@%i.service
    # Dependency ordering
    After=etcd2.service
    After=docker.service
    Before=apachet-discovery@%i.service

    [Service]
    # Let processes take awhile to start up (for first run Docker containers)
    TimeoutStartSec=0
    # Change killmode from "control-group" to "none" to let Docker remove
    # work correctly.
    KillMode=none
    # Get CoreOS environmental variables
    EnvironmentFile=/etc/environment
    # Pre-start and Start
    ## Directives with "=-" are allowed to fail without consequence
    ExecStartPre=-/usr/bin/docker kill apachet.%i
    ExecStartPre=-/usr/bin/docker rm apachet.%i
    ExecStartPre=/usr/bin/docker pull coreos/apache
    ExecStart=/usr/bin/docker run --name apachet.%i -p ${COREOS_PUBLIC_IPV4}:%i:80 coreos/apache /usr/sbin/apache2ctl -D FOREGROUND
    # Stop
    ExecStop=/usr/bin/docker stop apachet.%i

    Apache sidekick service:

    [Unit]
    Description=Apache web server on port %i etcd registration
    # Requirements
    Requires=etcd2.service
    Requires=apachet@%i.service
    # Dependency ordering and binding
    After=etcd2.service
    After=apachet@%i.service
    BindsTo=apachet@%i.service

    [Service]
    # Get CoreOS environmental variables
    EnvironmentFile=/etc/environment
    # Start
    ## Test whether service is accessible and then register useful information
    ExecStart=/bin/bash -c ' while true; do curl -f ${COREOS_PUBLIC_IPV4}:%i; if [ $? -eq 0 ]; then etcdctl set /services/apachet/${COREOS_PUBLIC_IPV4} '{"host": "%H", "ipv4_addr": ${COREOS_PUBLIC_IPV4}, "port": %i}' --ttl 30; else etcdctl rm /services/apachet/${COREOS_PUBLIC_IPV4}; fi; sleep 20; done'
    # Stop
    ExecStop=/usr/bin/etcdctl rm /services/apachet/${COREOS_PUBLIC_IPV4}

    [X-Fleet]
    # Schedule on the same machine as the associated Apache service
    X-ConditionMachineOf=apachet@%i.service
  • 10. Service discovery – etcd example (contd.)

    3-node CoreOS cluster:
    $ fleetctl list-machines
    MACHINE     IP            METADATA
    7a895214... 172.17.8.103  -
    a4562fd1... 172.17.8.101  -
    d29b1507... 172.17.8.102  -

    Start 2 instances of the service:
    fleetctl start apachet@8080.service apachet-discovery@8080.service
    fleetctl start apachet@8081.service apachet-discovery@8081.service

    See running services:
    $ fleetctl list-units
    UNIT                            MACHINE                   ACTIVE  SUB
    apachet-discovery@8080.service  7a895214.../172.17.8.103  active  running
    apachet-discovery@8081.service  a4562fd1.../172.17.8.101  active  running
    apachet@8080.service            7a895214.../172.17.8.103  active  running
    apachet@8081.service            a4562fd1.../172.17.8.101  active  running

    Check the etcd database:
    $ etcdctl ls / --recursive
    /services
    /services/apachet
    /services/apachet/172.17.8.103
    /services/apachet/172.17.8.101
    $ etcdctl get /services/apachet/172.17.8.101
    {"host": "core-01", "ipv4_addr": 172.17.8.101, "port": 8081}
    $ etcdctl get /services/apachet/172.17.8.103
    {"host": "core-03", "ipv4_addr": 172.17.8.103, "port": 8080}
  • 11. etcd with Load balancing
    • The previous etcd example demonstrates the Service database and health check; it does not provide DNS or load balancing.
    • Load balancing can be achieved by combining etcd with confd or haproxy.
    etcd with confd – Reference: https://www.digitalocean.com/community/tutorials/how-to-use-confd-and-etcd-to-dynamically-reconfigure-services-in-coreos
    etcd with haproxy – Reference: http://adetante.github.io/articles/service-discovery-haproxy/
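The glue that confd provides can be sketched as: read the entries under /services/apachet/ from etcd and render each one into an haproxy backend stanza. The function below is only an illustration of that transformation (confd actually does this with Go templates, and the `check` keyword relies on haproxy's own health checking); it takes the already-parsed etcd values as dicts.

```python
def render_haproxy_backend(name, instances):
    """Render an haproxy backend stanza from etcd values like
    {"host": "core-01", "ipv4_addr": "172.17.8.101", "port": 8081}.
    Illustrative sketch of what a confd template produces."""
    lines = ["backend %s" % name, "    balance roundrobin"]
    for svc in instances:
        # One "server" line per registered instance; "check" enables
        # haproxy's own periodic health check on the backend.
        lines.append("    server %(host)s %(ipv4_addr)s:%(port)s check" % svc)
    return "\n".join(lines)
```

Re-rendering this on every etcd change (and reloading haproxy) is what turns the passive registry into an actual load balancer.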
  • 12. Consul
    • Has a distributed key-value store for storing the Service database.
    • Provides comprehensive service health checking using both built-in solutions and user-provided custom solutions.
    • Provides a REST-based HTTP API for external interaction.
    • The Service database can be queried using DNS.
    • Does dynamic load balancing.
    • Supports a single data center and can be scaled to support multiple data centers.
    • Integrates well with Docker.
    • Integrates well with other Hashicorp tools.
  • 13. Consul health check options
    Consul provides the following options for health checks:
    • Script based check – A user-provided script is run periodically to verify the health of the service.
    • HTTP based check – A periodic HTTP check is done against the service IP and endpoint address.
    • TCP based check – A periodic TCP check is done against the service IP and specified port.
    • TTL based check – The previous schemes are driven from the Consul server to the service. In this case, the service is expected to refresh a TTL counter in the Consul server periodically.
    • Docker Container based check – The health check application is available as a container, and Consul invokes the container periodically to do the health check.
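Each check type maps to a different field in the check definition that gets registered with the Consul agent. As a sketch, the helpers below build the JSON bodies for three of the types, using the same lowercase field style as the http1_checkhttp.json example in this deck (treat the exact field names as indicative; the Consul agent API documentation is authoritative):

```python
def http_check(url, interval="10s", timeout="1s"):
    # HTTP based check: Consul GETs `url` every `interval`.
    return {"http": url, "interval": interval, "timeout": timeout}

def tcp_check(addr, interval="10s"):
    # TCP based check: Consul attempts a TCP connect to "ip:port".
    return {"tcp": addr, "interval": interval}

def ttl_check(ttl="30s"):
    # TTL based check: direction is reversed - the service itself must
    # report in (e.g. hit /v1/agent/check/pass/<check-id>) within `ttl`,
    # otherwise the check turns critical.
    return {"ttl": ttl}
```

The key operational difference is the TTL check: the agent never probes the service, so a hung service that stops reporting is pruned automatically after the TTL expires.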
  • 14. Sample application with Consul
    Components: Ubuntu Container (http client), Nginx Container1, Nginx Container2, Consul (Load balancer, DNS, Service registry)
    • Two nginx containers serve as the web servers; the ubuntu container serves as the http client.
    • Consul load balances requests between the two nginx web servers.
    • Consul is used as the service registry, load balancer, health checker, and DNS server for this application.
  • 15. Consul web interface
    The following picture shows the Consul GUI with:
    • 2 instances of the "http" service and 1 instance of the "consul" service.
    • Health checks passing for both services.
  • 16. Consul with manual registration

    Service files:

    http1_checkhttp.json:
    {
      "ID": "http1",
      "Name": "http",
      "Address": "172.17.0.3",
      "Port": 80,
      "check": {
        "http": "http://172.17.0.3:80",
        "interval": "10s",
        "timeout": "1s"
      }
    }

    http2_checkhttp.json:
    {
      "ID": "http2",
      "Name": "http",
      "Address": "172.17.0.4",
      "Port": 80,
      "check": {
        "http": "http://172.17.0.4:80",
        "interval": "10s",
        "timeout": "1s"
      }
    }

    Register the services:
    curl -X PUT --data-binary @http1_checkhttp.json http://localhost:8500/v1/agent/service/register
    curl -X PUT --data-binary @http2_checkhttp.json http://localhost:8500/v1/agent/service/register

    Service status:
    $ curl -s http://localhost:8500/v1/health/checks/http | jq .
    [
      {
        "ModifyIndex": 424,
        "CreateIndex": 423,
        "Node": "myconsul",
        "CheckID": "service:http1",
        "Name": "Service 'http' check",
        "Status": "passing",
        "Notes": "",
        "Output": "",
        "ServiceID": "http1",
        "ServiceName": "http"
      },
      {
        "ModifyIndex": 427,
        "CreateIndex": 425,
        "Node": "myconsul",
        "CheckID": "service:http2",
        "Name": "Service 'http' check",
        "Status": "passing",
        "Notes": "",
        "Output": "",
        "ServiceID": "http2",
        "ServiceName": "http"
      }
    ]
  • 17. Consul health check – Good status

    dig @172.17.0.1 http.service.consul SRV

    ; <<>> DiG 9.9.5-3ubuntu0.7-Ubuntu <<>> @172.17.0.1 http.service.consul SRV
    ; (1 server found)
    ;; global options: +cmd
    ;; Got answer:
    ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 34138
    ;; flags: qr aa rd; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 2
    ;; WARNING: recursion requested but not available

    ;; QUESTION SECTION:
    ;http.service.consul. IN SRV

    ;; ANSWER SECTION:
    http.service.consul. 0 IN SRV 1 1 80 myconsul.node.dc1.consul.
    http.service.consul. 0 IN SRV 1 1 80 myconsul.node.dc1.consul.

    ;; ADDITIONAL SECTION:
    myconsul.node.dc1.consul. 0 IN A 172.17.0.4
    myconsul.node.dc1.consul. 0 IN A 172.17.0.3
  • 18. Consul health check – Bad status

    $ dig @172.17.0.1 http.service.consul SRV

    ; <<>> DiG 9.9.5-3ubuntu0.7-Ubuntu <<>> @172.17.0.1 http.service.consul SRV
    ; (1 server found)
    ;; global options: +cmd
    ;; Got answer:
    ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 23330
    ;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
    ;; WARNING: recursion requested but not available

    ;; QUESTION SECTION:
    ;http.service.consul. IN SRV

    ;; ANSWER SECTION:
    http.service.consul. 0 IN SRV 1 1 80 myconsul.node.dc1.consul.

    ;; ADDITIONAL SECTION:
    myconsul.node.dc1.consul. 0 IN A 172.17.0.3
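Each SRV answer above carries (priority, weight, port, target). A client consuming these records picks the lowest priority group and then selects within it by weight. The sketch below is a simplified version of the RFC 2782 selection algorithm (real resolvers and Consul's DNS interface have more rules, e.g. record shuffling):

```python
import random

def pick_srv(records, rand=random.random):
    """records: list of (priority, weight, port, target) tuples, as in
    an SRV answer section. Returns (target, port). Simplified sketch of
    RFC 2782 selection: lowest priority wins; within a priority group,
    selection is proportional to weight."""
    best = min(r[0] for r in records)
    group = [r for r in records if r[0] == best]
    weights = [r[1] or 1 for r in group]  # treat weight 0 as 1 for simplicity
    point = rand() * sum(weights)
    for rec, w in zip(group, weights):
        point -= w
        if point < 0:
            return (rec[3], rec[2])
    return (group[-1][3], group[-1][2])  # unreachable fallback
```

In the "good status" answer both records have priority 1 and weight 1, so traffic is split evenly; after the health check fails, only one record remains and all traffic goes to it.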
  • 19. Consul with Registrator
• Manual registration of service details with Consul is error-prone.
• The Gliderlabs Registrator open source project (https://github.com/gliderlabs/registrator) automatically registers/deregisters services by listening to Docker events and updating the Consul registry.
• Choosing the service IP address for registration is critical. There are two choices:
– With the internal IP option, the container IP and port number get registered with Consul. This approach is useful when we want to access the service registry from within a container. Following is an example of starting Registrator using the "internal" IP option:
docker run -d -v /var/run/docker.sock:/tmp/docker.sock --net=host gliderlabs/registrator -internal consul://localhost:8500
– With the external IP option, the host IP and port number get registered with Consul. It is necessary to specify the IP address manually; if it is not specified, the loopback address gets registered. Following is an example of starting Registrator using the "external" IP option:
docker run -d -v /var/run/docker.sock:/tmp/docker.sock gliderlabs/registrator -ip 192.168.99.100 consul://192.168.99.100:8500
• Following is an example of registering an "http" service with 2 nginx servers using an HTTP check (SERVICE_80_CHECK_HTTP takes the path to probe):
docker run -d -p :80 -e "SERVICE_80_NAME=http" -e "SERVICE_80_ID=http1" -e "SERVICE_80_CHECK_HTTP=/" --name=nginx1 nginx
docker run -d -p :80 -e "SERVICE_80_NAME=http" -e "SERVICE_80_ID=http2" -e "SERVICE_80_CHECK_HTTP=/" --name=nginx2 nginx
• Following is an example of registering an "http" service with 2 nginx servers using a TTL check:
docker run -d -p :80 -e "SERVICE_80_NAME=http" -e "SERVICE_80_ID=http1" -e "SERVICE_80_CHECK_TTL=30s" --name=nginx1 nginx
docker run -d -p :80 -e "SERVICE_80_NAME=http" -e "SERVICE_80_ID=http2" -e "SERVICE_80_CHECK_TTL=30s" --name=nginx2 nginx
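To make Registrator's mapping concrete, here is a rough Python sketch of how the SERVICE_* environment variables on a container turn into a registration record. This is our own simplification (hypothetical function, not Registrator's actual code), assuming the external-IP mode shown above.

```python
def service_from_env(env_list, address, port):
    # Simplified model of what Registrator derives from a container's
    # SERVICE_<port>_* environment variables on a Docker start event.
    env = dict(item.split("=", 1) for item in env_list)
    return {
        "ID": env["SERVICE_%d_ID" % port],
        "Name": env["SERVICE_%d_NAME" % port],
        "Address": address,   # container IP (-internal) or host IP (-ip)
        "Port": port,
    }

svc = service_from_env(
    ["SERVICE_80_NAME=http", "SERVICE_80_ID=http1", "SERVICE_80_CHECK_HTTP=/"],
    "192.168.99.100", 80)
print(svc)
```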
  • 20. Kubernetes Architecture
Kubernetes Service discovery components:
• SkyDNS maps a Service name to an IP address.
• Etcd is used as the KV store for the Service database.
• The Kubelet performs health checks; the replication controller maintains the Pod count.
• Kube-proxy load balances traffic to the individual pods.
  • 21. Kubernetes Service
• A Service is an L3 routable object with an IP address and port number.
• A Service gets mapped to pods using selector labels. In the example on the right, "MyApp" is the label.
• The Service port gets mapped to targetPort in the pod.
• Kubernetes supports headless services. In this case, the service is not allocated an IP address, which allows the user to choose their own service registration option.
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": { "name": "my-service" },
  "spec": {
    "selector": { "app": "MyApp" },
    "ports": [
      { "protocol": "TCP", "port": 80, "targetPort": 9376 }
    ]
  }
}
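The selector-to-pod mapping can be illustrated with a short sketch. The function and the pod data are our own invention for the example; the rule itself (a pod backs the service when its labels contain every key/value pair in the selector) is how Kubernetes selects endpoints.

```python
def endpoints_for(selector, pods):
    # A pod backs the service when its labels contain every
    # key/value pair in the service's selector.
    return [p["ip"] for p in pods
            if all(p["labels"].get(k) == v for k, v in selector.items())]

pods = [
    {"ip": "10.1.0.5", "labels": {"app": "MyApp"}},
    {"ip": "10.1.0.6", "labels": {"app": "MyApp", "tier": "web"}},
    {"ip": "10.1.0.9", "labels": {"app": "Other"}},
]
print(endpoints_for({"app": "MyApp"}, pods))  # → ['10.1.0.5', '10.1.0.6']
```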
  • 22. Kubernetes Service discovery internals
• The Service name gets mapped to a virtual IP and port using SkyDNS.
• Kube-proxy watches for Service changes and updates iptables. Virtual IP to Service IP/port remapping is achieved using iptables.
• Kubernetes does not use DNS-based load balancing, to avoid some of the known issues associated with it.
Picture source: http://kubernetes.io/docs/user-guide/services/
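A toy model of the remapping described above may help. The real kube-proxy programs iptables rules rather than running code per packet; this sketch (our own, with invented addresses) only shows the lookup: traffic to the service's virtual IP/port is redirected to one backing pod, picked probabilistically in iptables mode.

```python
import random

def remap(vip_table, vip, port):
    # Traffic addressed to the service's virtual IP/port is redirected
    # to one of the backing pods; iptables mode picks probabilistically.
    backends = vip_table[(vip, port)]
    return random.choice(backends)

vip_table = {("10.0.0.11", 80): [("10.1.0.5", 9376), ("10.1.0.6", 9376)]}
pod_ip, pod_port = remap(vip_table, "10.0.0.11", 80)
print(pod_ip, pod_port)  # one of the two pod endpoints, always port 9376
```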
  • 23. Kubernetes Health check
• The Kubelet can run a health check to determine whether a container is healthy.
• The Kubelet will kill the container if it is not healthy. The replication controller takes care of maintaining the endpoint count.
• The health check is defined in the Pod manifest.
• Currently, 3 options are supported for health checks:
– HTTP Health Checks - The Kubelet will call a web hook. If it returns a status code between 200 and 399, it is considered success; failure otherwise.
– Container Exec - The Kubelet will execute a command inside the container. If it exits with status 0, it is considered a success.
– TCP Socket - The Kubelet will attempt to open a socket to the container. If it can establish a connection, the container is considered healthy; if it can't, it is considered a failure.
Pod with HTTP health check:
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-healthcheck
spec:
  containers:
  - name: nginx
    image: nginx
    # defines the health checking
    livenessProbe:
      # an http probe
      httpGet:
        path: /_status/healthz
        port: 80
      # length of time to wait for a pod to initialize
      # after pod startup, before applying health checking
      initialDelaySeconds: 30
      timeoutSeconds: 1
    ports:
    - containerPort: 80
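The three probe verdicts can be captured in a few lines. This is our own sketch of the success criteria listed above, not Kubelet code; the function names are invented.

```python
def http_probe_ok(status_code):
    # HTTP check: status codes 200-399 count as success.
    return 200 <= status_code <= 399

def exec_probe_ok(exit_status):
    # Exec check: exit status 0 is success.
    return exit_status == 0

def tcp_probe_ok(connected):
    # TCP check: success iff a connection could be established.
    return bool(connected)

print(http_probe_ok(200), http_probe_ok(302), http_probe_ok(404))  # → True True False
```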
  • 24. Kubernetes Service Discovery options
• For internal service discovery, Kubernetes provides two options:
– Environment variables: When a new Pod is created, environment variables for previously created services are injected into it. This allows services to talk to each other, but it enforces ordering in service creation.
– DNS: Every service registers with the DNS service; using this, new services can find and talk to other services. Kubernetes provides the kube-dns service for this.
• For external service discovery, Kubernetes provides two options:
– NodePort: Kubernetes exposes the service through special ports (30000-32767) of the node IP address.
– LoadBalancer: Kubernetes interacts with the cloud provider to create a load balancer that redirects traffic to the Pods. This approach is currently available with GCE.
Environment variables injected for a "redis-master" service:
REDIS_MASTER_SERVICE_HOST=10.0.0.11
REDIS_MASTER_SERVICE_PORT=6379
REDIS_MASTER_PORT=tcp://10.0.0.11:6379
REDIS_MASTER_PORT_6379_TCP=tcp://10.0.0.11:6379
REDIS_MASTER_PORT_6379_TCP_PROTO=tcp
REDIS_MASTER_PORT_6379_TCP_PORT=6379
REDIS_MASTER_PORT_6379_TCP_ADDR=10.0.0.11
LoadBalancer service example:
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  type: LoadBalancer
  ports:
  # the port that this service should serve on
  - port: 80
  selector:
    app: guestbook
    tier: frontend
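The environment-variable naming scheme follows a fixed pattern, which the sketch below reconstructs from the REDIS_MASTER example above (our own helper, covering only the variables shown on the slide):

```python
def service_env_vars(name, ip, port, proto="tcp"):
    # Rebuild the variables Kubernetes injects for a service,
    # following the REDIS_MASTER pattern above.
    p = name.upper().replace("-", "_")
    addr = "%s://%s:%d" % (proto, ip, port)
    return {
        "%s_SERVICE_HOST" % p: ip,
        "%s_SERVICE_PORT" % p: str(port),
        "%s_PORT" % p: addr,
        "%s_PORT_%d_TCP" % (p, port): addr,
        "%s_PORT_%d_TCP_PROTO" % (p, port): proto,
        "%s_PORT_%d_TCP_PORT" % (p, port): str(port),
        "%s_PORT_%d_TCP_ADDR" % (p, port): ip,
    }

env = service_env_vars("redis-master", "10.0.0.11", 6379)
print(env["REDIS_MASTER_PORT_6379_TCP"])  # → tcp://10.0.0.11:6379
```

Because the variables must exist before a consuming Pod starts, this mechanism is what enforces the service-creation ordering mentioned above; DNS avoids that constraint.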
  • 25. Docker Service Discovery
• With Docker 1.9, container name to IP address mapping was done by automatically updating "/etc/hosts".
• With the Docker 1.10 release, Docker added an embedded DNS server which does container name resolution within a user-defined network.
• Name resolution works for the container name (--name), network alias (--net-alias) and container link (--link). Port numbers are not part of DNS.
• With the Docker 1.11 release, Docker added DNS-based random load balancing for containers sharing the same network alias.
• Docker's service discovery is still primitive: it has no health checks and no comprehensive load balancing.
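The embedded DNS behavior can be modeled as a simple table lookup. This sketch is our own (invented table and function); the point is that a shared alias maps to several container IPs and the record order is randomized, which is all the load balancing Docker 1.11 provides.

```python
import random

def resolve(dns_table, name):
    # Docker's embedded DNS maps a name or alias to container IPs;
    # with a shared --net-alias, record order is randomized.
    ips = dns_table[name]
    return random.sample(ips, len(ips))

dns_table = {
    "nginx1": ["172.20.0.2"],
    "nginx2": ["172.20.0.3"],
    "nginxnet": ["172.20.0.2", "172.20.0.3"],  # shared --net-alias
}
print(resolve(dns_table, "nginxnet"))  # both IPs, in random order
```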
  • 26. Docker DNS in release 1.11
Create 3 containers in the "fe" network:
docker run -d --name=nginx1 --net=fe --net-alias=nginxnet nginx
docker run -d --name=nginx2 --net=fe --net-alias=nginxnet nginx
docker run -ti --name=myubuntu --net=fe --link=nginx1:nginx1link --link=nginx2:nginx2link ubuntu bash
DNS by network alias:
root@4d2d6e34120d:/# ping -c1 nginxnet
PING nginxnet (172.20.0.3) 56(84) bytes of data.
64 bytes from nginx2.fe (172.20.0.3): icmp_seq=1 ttl=64 time=0.852 ms
root@4d2d6e34120d:/# ping -c1 nginxnet
PING nginxnet (172.20.0.2) 56(84) bytes of data.
64 bytes from nginx1.fe (172.20.0.2): icmp_seq=1 ttl=64 time=0.244 ms
DNS by container name:
root@4d2d6e34120d:/# ping -c1 nginx1
PING nginx1 (172.20.0.2) 56(84) bytes of data.
64 bytes from nginx1.fe (172.20.0.2): icmp_seq=1 ttl=64 time=0.112 ms
root@4d2d6e34120d:/# ping -c1 nginx2
PING nginx2 (172.20.0.3) 56(84) bytes of data.
64 bytes from nginx2.fe (172.20.0.3): icmp_seq=1 ttl=64 time=0.090 ms
DNS by link name:
root@4d2d6e34120d:/# ping -c1 nginx1link
PING nginx1link (172.20.0.2) 56(84) bytes of data.
64 bytes from nginx1.fe (172.20.0.2): icmp_seq=1 ttl=64 time=0.049 ms
root@4d2d6e34120d:/# ping -c1 nginx2link
PING nginx2link (172.20.0.3) 56(84) bytes of data.
64 bytes from nginx2.fe (172.20.0.3): icmp_seq=1 ttl=64 time=0.253 ms
  • 27. References
• https://www.nginx.com/blog/service-discovery-in-a-microservices-architecture/
• http://jasonwilder.com/blog/2014/02/04/service-discovery-in-the-cloud/
• http://progrium.com/blog/2014/07/29/understanding-modern-service-discovery-with-docker/
• http://artplustech.com/docker-consul-dns-registrator/
• https://jlordiales.me/2015/01/23/docker-consul/
• Mastering CoreOS book - https://www.packtpub.com/networking-and-servers/mastering-coreos
• Kubernetes Services - http://kubernetes.io/docs/user-guide/services/
• Docker DNS Server - https://docs.docker.com/engine/userguide/networking/configure-dns/, https://github.com/docker/libnetwork/pull/974
  • 28. DEMO
