Or, how NGINX can act as your stevedore, properly routing and accelerating HTTP and TCP traffic to pods of containers across a globally distributed environment.
NGINX can be used to manage and route your traffic across your distributed microservices architecture, offering a seamless interface to your customers and giving you granular management of backend service scaling and versions. Add in some caching and load balancing, and the efficiencies of an application delivery platform become apparent.
28. MORE INFORMATION AT NGINX.COM
This docker-compose.yml file builds:
- Consul for service discovery
- Registrator
- tutum/hello-world
- google/golang-hello
- NGINX Plus
sarah@ubuntu:~/service-discovery$ more docker-compose.yml
nginx:
  build: ./nginxplus
  links:
    - consul
  ports:
    - "9050:80"
    - "8080:8080"
consul:
  command: -server -bootstrap -advertise 10.0.2.15
  image: progrium/consul:latest
  ports:
    - "8300:8300"
    - "8400:8400"
    - "8500:8500"
    - "8600:53/udp"
registrator:
  command: consul://consul:8500
  image: progrium/registrator:latest
  links:
    - consul
  volumes:
    - "/var/run/docker.sock:/tmp/docker.sock"
service1:
  image: tutum/hello-world:latest
  environment:
    SERVICE_80_NAME: http
    SERVICE_NAME: service1
    SERVICE_TAGS: production
  ports:
    - "80"
service2:
  image: google/golang-hello:latest
  environment:
    SERVICE_80_NAME: http
    SERVICE_NAME: service2
    SERVICE_TAGS: production
  ports:
    - "8080"
sarah@ubuntu:~/service-discovery$
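Once this stack is up, Registrator watches the Docker socket and publishes each container into Consul, which serves the registrations over its DNS interface (container port 53/udp in the compose file above). One way the NGINX container could consume this is through a DNS resolver; the following is a minimal sketch under that assumption — the location path and the valid= interval are illustrative placeholders, not taken from the demo:

```nginx
resolver consul:53 valid=10s;    # Consul DNS, reachable through the compose link

server {
    listen 80;

    location /service1/ {
        # Using a variable forces NGINX to re-resolve the name at
        # request time instead of caching it once at startup, so new
        # instances registered in Consul are picked up automatically.
        set $backend "service1.service.consul";
        proxy_pass http://$backend;
    }
}
```

Another common pattern in Consul-plus-NGINX setups is to render upstream blocks from the Consul catalog with a tool such as consul-template and reload NGINX when the service list changes.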
sarah@ubuntu:~/service-discovery$ docker-compose build
consul uses an image, skipping
Building nginx...
Step 0 : FROM ubuntu:14.04
---> 6d4946999d4f
Step 1 : MAINTAINER NGINX Docker Maintainers "docker-maint@nginx.com"
---> Using cache
---> 339d0f20dc6e
…
sarah@ubuntu:~/service-discovery$ docker-compose up -d
Recreating servicediscovery_consul_1...
Recreating servicediscovery_nginx_1...
Recreating servicediscovery_registrator_1...
Recreating servicediscovery_service1_6...
Recreating servicediscovery_service2_1...
sarah@ubuntu:~/service-discovery$
Visit our kiosk tomorrow to go through this demo. It was based on a blog post for bellycard by @shanesveller, and built by @fymemon.
We’ve all heard the hype about microservices and how they make the world a better place for developers.
And now, since the services are independent, they can be written in different languages using different frameworks. So you gain the flexibility to choose the right tools for each service.
And another big advantage is that changes can also be made independently. As long as the interfaces don’t change you are free to roll out new versions of a service without having to worry about impacting other services.
The ease of making changes and deploying microservices allows you to dramatically reduce your release cycles and achieve rapid deployment or continuous delivery.
Once you have decided to take advantage of the benefits of a microservices architecture, you need to decide how you will deploy the services. It should come as no surprise that, with microservices being a very new way of doing things, you will want to look to new tools to deploy them. To take full advantage you will want to embrace containers, cloud, and DevOps. The rise of containerization has provided an ideal platform for hosting microservices. Containers are much lighter weight than full virtualization, allowing you to achieve far higher density with the same amount of resources, and they are much more DevOps friendly, allowing services to be created and scaled more easily.
But, with these positives come some of the very same facts reframed as negatives.
NGINX Plus can act as an HTTP router. It can handle a large number of incoming requests, inspecting each one and making sure they get to the correct service. And it supports scaling service instances up and down and making sure to only send requests to healthy instances by actively checking the health of each service instance.
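As a sketch of what that looks like in configuration — the upstream addresses and the /health URI below are placeholder assumptions, not part of the demo:

```nginx
upstream service1 {
    zone service1 64k;       # shared memory zone, required for active health checks
    server 172.17.0.4:80;
    server 172.17.0.5:80;
}

server {
    listen 80;

    location / {
        proxy_pass http://service1;
        # NGINX Plus active health check: probe each instance every 5 seconds,
        # mark it unhealthy after 2 failed probes and healthy after 2 passes.
        health_check uri=/health interval=5 fails=2 passes=2;
    }
}
```

With this in place, requests are load balanced only across instances whose probes are currently passing.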
In reality, microservices architectures look more like this. Here we show an aggregation layer at the front. This layer takes a single client request, makes multiple service requests, and aggregates the responses before returning them to the client. This is especially useful for mobile apps: because of the lower bandwidth and higher latency of mobile devices, bundling multiple requests can have a large impact on performance. But aggregation can also be used for non-mobile applications.
Then once the aggregation layer makes its service requests, each service can make requests to other services.
Since the aggregation layer and the services layer can scale independently, you need something to distribute the traffic, and this is where NGINX comes in. It can load balance client requests across the available aggregation servers, and it can also handle the requests from the aggregation layer to the services and from one service to another. In all cases it routes traffic only to healthy services, using NGINX Plus health checks. Services can be easily scaled using tools like Docker Compose, Kubernetes, Swarm, and the NGINX Plus dynamic configuration API, along with other automation infrastructure, allowing for intelligent routing based on factors such as URLs and headers, and letting you do A/B testing and more.
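A hedged sketch of the dynamic configuration piece: exposing the NGINX Plus API so an orchestration tool can add and remove upstream servers at runtime. The port, upstream name, and API version in the example call are assumptions, not values from the demo:

```nginx
upstream service1 {
    zone service1 64k;       # the zone makes the server list modifiable at runtime
    server 172.17.0.4:80;
}

server {
    listen 8080;

    location /api {
        api write=on;        # NGINX Plus REST API, read-write
        allow 127.0.0.1;     # restrict write access to localhost
        deny all;
    }
}
```

A scaling tool could then register a new instance with something like curl -X POST -d '{"server":"172.17.0.6:80"}' http://localhost:8080/api/9/http/upstreams/service1/servers. (Older NGINX Plus releases exposed the same capability through the upstream_conf module.)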
There are many tools vying for a place in the container orchestration ecosystem: Kubernetes, Mesos, Docker Compose, Swarm, and more.
The big trick in managing all of this complexity is going from one to many.