6.
100s of Microservices
1,000s of Releases / Day
10,000s of Virtual Machines
100K+ User actions / Second
81 M Customers Globally
1 B Time-series Metrics
10 B Hours of video streaming every quarter
10s of Ops Engineers
0 NOC
0 Data Centers
Source: Netflix: https://www.youtube.com/watch?v=UTKIT6STSVM
So what does Netflix think about DevOps?
• No DevOps
• Don't do a lot of Process / Procedures
• Freedom for Developers & be Accountable
• Trust the people you Hire
• No Controls / Silos / Walls / Fences
• Ownership – You Build it, You Run it.
7.
50M Paid Subscribers
100M Active Users
60 Countries
1,000+ Tech Employees
120+ Teams
Cross-Functional Teams with Full, End-to-End ownership of features
1,000+ Autonomous Microservices
Source: https://microcph.dk/media/1024/conference-microcph-2017.pdf
8. Microservices definition
4/1/2019 8
In short, the microservice architectural style is an approach to
developing a single application as a suite of small services, each
running in its own process and communicating with lightweight
mechanisms, often an HTTP resource API. These services are built
around business capabilities and independently deployable by
fully automated deployment machinery. There is a bare minimum
of centralized management of these services, which may be
written in different programming languages and use different
data storage technologies.
https://martinfowler.com/articles/microservices.html
By James Lewis and Martin Fowler
Bolo, definition kya hai? (Tell me, what's the definition?)
9. Microservices Characteristics
We can scale our operation independently, maintain
unparalleled system availability, and introduce new
services quickly without the need for massive
reconfiguration. —
Werner Vogels, CTO, Amazon Web Services
Modularity ... is to a technological economy
what the division of labor is to a
manufacturing one.
W. Brian Arthur,
author of The Nature of Technology
The key in making great and growable systems is
much more to design how its modules communicate
rather than what their internal properties and
behaviors should be.
Alan Kay, 1998 email to the Squeak-dev list
• Components via Services
• Organized around Business Capabilities
• Products NOT Projects
• Smart Endpoints & Dumb Pipes
• Decentralized Governance & Data Management
• Infrastructure Automation
• Design for Failure
• Evolutionary Design
10. When should I use them (Microservices)?
When you have:
• Strong Module Boundaries: Microservices reinforce modular structure, which is particularly important for larger teams.
• Independent Deployment: Simple services are easier to deploy, and since they are autonomous, they are less likely to cause system failures when they go wrong.
• Technology Diversity: With microservices you can mix multiple languages, development frameworks and data-storage technologies.
What's the Cost:
• Distribution: Distributed systems are harder to program, since remote calls are slow and always at risk of failure.
• Eventual Consistency: Maintaining strong consistency is extremely difficult for a distributed system, which means everyone has to manage eventual consistency.
• Operational Complexity: You need a mature operations team to manage lots of services, which are being redeployed regularly.
Source: https://www.martinfowler.com/microservices/
11. What is the right size for a Microservice?
• Rather than the size, what matters is the Business Function / Domain of the service.
• One Microservice may have half a dozen entities and another a couple of dozen entities. What's more important is the role the Microservice plays.
• Bounded Contexts from DDD help you decompose a large multi-domain Monolith into a Microservice for each Bounded Context.
• Focusing on User Stories will help you clearly define the boundaries of the Business Domain.
13. Monolithic vs. Microservices Example
Traditional Monolithic App using a Single Technology Stack: one deployment with a UI Layer, Web Services (WS), Business Logic (BL) and Database Layer (DL) over a single Database, containing the ShoppingCart, Order, Customer and Inventory modules. This 3-tier model is obsolete now.
Source: Gartner Market Guide for Application Platforms, Nov 23, 2016
Microservices with Multiple Technology Stacks: Micro Service 1 Customer (SE 8), Micro Service 2 Order, Micro Service 3 ShoppingCart, Micro Service 4 Inventory (EE 7) — each with its own UI Layer, Web Services, Business Logic and Database Layer. Traffic flows through an API Gateway (Zuul Edge Server), with a Load Balancer (Ribbon) and Circuit Breaker (Hystrix) in front of each service, Service Discovery (Eureka), and an Event Stream / Queues / Pub-Sub / Storage backbone connecting the services.
14. SOA vs. Microservices Example
Traditional Monolithic App with SOA: a UI Layer on top of an Enterprise Service Bus. The ESB provides Messaging (REST / SOAP, HTTP, MOM, JMS, ODBC / JDBC), Translation (Web Services, XML, WSDL, Addressing), and Security, Registry and Management. Producers, Consumers and 3rd Party Apps integrate through the bus over a Shared Database (ShoppingCart, Order, Customer, Inventory). Smart Pipes: a lot of business logic resides in the pipe.
Microservices with Multiple Technology Stacks: Micro Service 1 Customer (SE 8), Micro Service 2 Order, Micro Service 3 ShoppingCart — each with its own UI Layer, Web Services, Business Logic and Database Layer, fronted by an API Gateway with a Load Balancer and Circuit Breaker per service, Service Discovery, and an Event Stream / Queues / Pub-Sub / Storage backbone.
15. Microservices Deployment Model
Microservices with Multiple Technology Stacks – Software Stack for Networking. Users reach the API (Zuul) Gateway inside a Virtual Private Network; an HTTP Server bundles all the UI code. Supporting services: Service Discovery (Eureka) and Config Server (Spring). Each microservice has its own Load Balancer (LB = Ribbon) and Circuit Breaker (CB = Hystrix):
• Micro Service 1: Product (SE 8), with a 4-node cluster
• Micro Service 2: ShoppingCart (SE 8)
• Micro Service 3: Order (SE 8), with a 2-node cluster
• Micro Service 4: Customer, with a 2-node cluster
Each service has its own UI Layer, Web Services, Business Logic and Database Layer, and the services communicate over an Event Stream / Queues / Pub-Sub / Storage backbone.
16. Shopping Portal – Docker / Kubernetes – Network Stack
Users → Firewall → Load Balancer → Ingress, with routing based on Layers 3, 4 and 7 (/ui → UI Service, /productms → Product Service). Each Service (UI, Product, Review) has EndPoints and internal load balancers in front of its Pods: UI Pods on nodes N1 / N2, Product Pods on N3 / N4 with a MySQL Pod, Review Pods on N1 / N3 / N4. Service-to-service calls are resolved through Kube DNS. Kubernetes objects: Deployment / Replica / Pod on Nodes.
20. 12 Factor App Methodology
Factors and Descriptions:
1. Codebase – One codebase tracked in revision control
2. Dependencies – Explicitly declare dependencies
3. Configuration – Configuration-driven Apps
4. Backing Services – Treat backing services like DB, Cache as attached resources
5. Build, Release, Run – Separate Build and Run stages
6. Processes – Execute the App as one or more stateless processes
7. Port Binding – Export services with specific port binding
8. Concurrency – Scale out via the process model
9. Disposability – Maximize robustness with fast startup and graceful exit
10. Dev / Prod Parity – Keep Development, Staging and Production as similar as possible (check out the Shift-Left in DevOps, Slide 157)
11. Logs – Treat logs as event streams
12. Admin Processes – Run admin tasks as one-off processes (e.g. DB migration, software upgrades, etc.)
Source: https://12factor.net/
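Factor 3 (Configuration) and Factor 7 (Port Binding) can be sketched in a few lines of Python — a minimal illustration, where the environment-variable names (APP_PORT, DATABASE_URL) are hypothetical config keys chosen for this example:

```python
import os

# Factor 3: configuration comes from the environment, not from the code.
# APP_PORT and DATABASE_URL are illustrative keys, not a prescribed standard.
def load_config():
    return {
        "port": int(os.environ.get("APP_PORT", "8080")),        # local-dev default
        "db_url": os.environ.get("DATABASE_URL", "sqlite:///dev.db"),
    }

if __name__ == "__main__":
    cfg = load_config()
    # Factor 7: the service exports itself on a specific, configurable port.
    print(f"binding service on port {cfg['port']}")
```

The same binary can then run unchanged in Dev, Staging and Production (Factor 10) simply by changing the environment.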
21. Catalogues of Microservices
System Z Model from Spotify
• Different types of components System Z supports:
• Libraries
• Data Pipelines
• Views in the client
• Data Store
• Service
22. Microservices Pros and Cons
Pros:
1. Robust
2. Scalable
3. Testable (Local)
4. Easy to Change and Replace
5. Easy to Deploy
6. Technology Agnostic
Cons:
1. Adds Complexity
2. Skillset shortage
3. Confusion on getting the right size
4. Team needs to manage end-to-end of the Service (from UI to Backend to Running in Production).
23. Monolithic to Microservices
• FORRESTER RESEARCH
• MODERNIZATION JOURNEY
• ASSESS AND CLASSIFY YOUR APP PORTFOLIO
• PLAN AND PRIORITIZE
25. * For IT Services: they can do one more project of the same size with ZERO COST in platform licensing (based on a 20-developer pack at USD $50K per month; for a 3-month license, cost = $150K).
26. Scale Cube and Micro Services
1. Y Axis Scaling – Functional Decomposition: Business Function as a Service
2. Z Axis Scaling – Database Partitioning: avoid locks by Database Sharding
3. X Axis Scaling – Cloning of Individual Services for specific Service Scalability
27. Modernization Journey
• Start new features as Microservices: incrementally establish success early.
• Expose Legacy On-Premise Apps' APIs: if legacy apps can't be shifted to the Cloud.
• Refactor Monolithic features to Microservices: break down and deploy feature by feature.
• Containerize the Microservice: reduces costs, simplifies operations and gives a consistent environment between Dev, QA and Production.
• Monolith De-commission Plan: incrementally sunset the Monolith.
• Velocity as you transform: increase your Delivery Velocity along the Journey (Low at Present → High in the Future).
Inspired by a paper from IBM
28. Assess and Classify your App Portfolio
• Take inventory of your Apps: classify the Apps based on technology and complexity.
• Align Apps to your Business Priorities: identify the Apps that are critical for Modernization.
• Identify Business Modernization Requirements: create a Roadmap with a faster go-to-market strategy.
• Understand the Effort and Cost: evaluate all possible Modernization options — Lift & Shift, Container, Refactor, Expose APIs — against Business Value, Cost and Complexity.
Example apps plotted on that chart: Product Catalogue, Product Review, Inventory, Shopping Cart, Customer Profile, Order Management.
Inspired by a paper from IBM
29. Plan and Prioritize
Shopping Portal Features Prioritization. Weightage: Complexity 35%, Cost 25%, Value 40%; ratings Low = 1, Med = 3, High = 5, Very High = 7.

Feature           | Complexity    | Cost     | Value    | Score | Raw Total | Rank
Customer          | Med (3)       | Med (3)  | Low (1)  | 2.20  | 7         | 6
Product Reviews   | Med (3)       | High (5) | Med (3)  | 3.50  | 11        | 3
Product Catalogue | Med (3)       | Med (3)  | High (5) | 3.80  | 11        | 1
Shopping Cart     | High (5)      | Med (3)  | Med (3)  | 3.70  | 11        | 4
Order             | Very High (7) | Med (3)  | High (5) | 5.20  | 15        | 2
Inventory         | Very High (7) | High (5) | Med (3)  | 4.90  | 15        | 5

• Quick Wins: identify a feature / project which has good business value and is low in complexity.
• Project Duration: focus on shorter-duration projects with high Business Value.
• Prioritize: low-priority projects are good test cases but do not bring much business value.
Inspired by a paper from IBM
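The weighted score in the table above is just the sum of the ratings times the weightages, which can be checked with a few lines — a small sketch of the slide's arithmetic:

```python
# Weightage from the slide: Complexity 35%, Cost 25%, Value 40%.
# Ratings: Low = 1, Med = 3, High = 5, Very High = 7.
WEIGHTS = {"complexity": 0.35, "cost": 0.25, "value": 0.40}

def score(complexity, cost, value):
    """Weighted prioritization score, rounded to 2 decimals as on the slide."""
    return round(complexity * WEIGHTS["complexity"]
                 + cost * WEIGHTS["cost"]
                 + value * WEIGHTS["value"], 2)

print(score(3, 3, 1))   # Customer
print(score(7, 3, 5))   # Order
```

Note that the rank column is a judgment call (quick wins, duration), not simply the score order.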
30. Monolithic to Microservices Summary
1. Classify your Apps into the following areas:
1. Lift and Shift
2. Containerize
3. Refactor
4. Expose APIs
2. Prioritize High Business Value, Low Technical Complexity
3. Focus on Shorter Duration – From Specs to Operation
32. Servers / Virtual Machines / Containers
• Server: Hardware → OS → shared BINS / LIB → App 1, App 2, App 3.
• Type 1 Hypervisor (bare metal): Hardware → HYPERVISOR → one Guest OS + BINS / LIB per App 1, App 2, App 3.
• Type 2 Hypervisor (hosted): Hardware → Host OS → HYPERVISOR → one Guest OS + BINS / LIB per App 1, App 2, App 3.
• Container: Hardware → Host OS → BINS / LIB per App 1, App 2, App 3 — no Guest OS.
33. Docker containers are Linux Containers
DOCKER CONTAINER = CGROUPS + NAMESPACES + Copy-on-Write.
• CGROUPS: a kernel feature that groups processes and controls resource allocation — CPU, CPU Sets, Memory, Disk, Block I/O.
• NAMESPACES: the real magic behind containers — they create barriers between processes. Different namespaces: PID, Net, IPC, MNT. Linux Kernel Namespaces were introduced between kernels 2.6.15 – 2.6.26.
• Copy-on-Write Images: not a file system, not a VHD — basically a tar file with a hierarchy of arbitrary depth, which fits into a Docker Registry.
docker run replaced lxc-start (Docker originally used LXC under the hood).
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/resource_management_guide/ch01
34. Linux Kernel
Containers share the host's kernel:
• On a Linux host (HOST OS: Ubuntu), the Docker Client talks to the Docker Daemon, and CentOS, Alpine and Debian containers all run on the same Host (Linux) Kernel.
• On a Windows host (HOST OS: Windows 10), Nano Server and Server Core containers likewise share the host's Windows Kernel.
• All the containers will have the same Host OS Kernel.
• If you require a specific Kernel version then the Host Kernel needs to be updated.
35. How Docker works ….
Docker Client → Docker Daemon → Docker Hub (Images → Containers):
$ docker search ….
$ docker build ….
$ docker container create ..
$ docker container run ..
$ docker container start ..
$ docker container stop ..
$ docker container ls ..
$ docker push ….
$ docker swarm ..
1. Search for the Container
2. Docker Daemon sends the request to the Hub
3. Downloads the image
4. Runs the Container from the image
37. Kubernetes Architecture
Master Node (Control Plane):
• API Server (Port 443): RESTful; accepts declarative yaml / json ($ kubectl ….).
• Cluster Store (etcd): a key-value store. Using yaml or json you declare the desired state of the app; the state is stored in the Cluster Store.
• Scheduler.
• Controller Manager: Node Controller, End Point Controller, Deployment Controller, Pod Controller, …. Self-healing is done by Kubernetes using watch loops if the desired state is changed.
• Cloud Controller: for the cloud providers to manage nodes, services, routes, volumes etc.
Worker Node:
• Kubelet (Node Manager): Port 10255, gRPC, ProtoBuf.
• Container Runtime Interface: allows multiple implementations of containers from v1.7.
• Kube-Proxy: Network Proxy, TCP / UDP forwarding, IPTABLES / IPVS.
Key Aspects: Declarative Model and Desired State.
POD: the POD itself is a Linux container; the Docker container runs inside the POD. PODs with single or multiple containers (Sidecar Pattern) share the Cgroup, Volumes and Namespaces of the POD.
Deployment: updates and rollbacks, Canary release. ReplicaSet: self-healing, scalability, desired state.
Service: a Pod's IP address is dynamic, so communication should be based on a Service, which has a routable IP and a DNS name (e.g. DNS: a.b.com). Labels (e.g. BE, 1.2) play a critical role in ReplicaSet, Deployment & Services etc.; a Label Selector selects pods based on the Labels. Service types: Cluster IP, Node Port, Load Balancer, External Name, Ingress.
Declarative Model: every object declares apiVersion, kind, metadata and spec. Kinds include Pod, ReplicaSet, Deployment, Service, Endpoints, StatefulSet, Namespace, Resource Quota, Limit Range, Persistent Volume, Secrets; Istio kinds include Virtual Service, Gateway, SE, DR, Policy, MeshPolicy, RbacConfig, Prometheus, Rule, ListChecker ….
The K8s Cluster sits behind the Internet and a Firewall, and is partitioned into namespaces (Namespace 1, Namespace 2).
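The declarative model above (apiVersion, kind, metadata, spec) can be sketched with a minimal Deployment plus Service — the names (product-ms, app: product, the image) are illustrative, not from the deck:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: product-ms
spec:
  replicas: 3                  # desired state; the ReplicaSet self-heals to match it
  selector:
    matchLabels:
      app: product             # Label Selector picks Pods by Label
  template:
    metadata:
      labels:
        app: product
    spec:
      containers:
      - name: product
        image: example/product:1.0
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: product-service        # stable DNS name; Pod IPs stay dynamic
spec:
  selector:
    app: product
  ports:
  - port: 80
    targetPort: 8080
```

Applying this with `kubectl apply -f` stores the desired state in etcd; the watch loops then converge the cluster to it.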
38. Service Mesh – Sidecar Design Pattern
CB – Circuit Breaker, LB – Load Balancer, SD – Service Discovery.
The Service Mesh Control Plane holds all the rules for Routing and Service Discovery; each local Service Mesh sidecar downloads the rules from the Control Plane and keeps a local copy.
Data Plane: the Customer Microservice (UI Layer, Web Services, Business Logic) makes localhost calls such as http://localhost/api/order/; its Service Mesh sidecar (Router + Network Stack with CB, LB, SD) turns these into service-mesh calls to the Order Microservice's sidecar, which in turn calls http://localhost/api/payment/ downstream. The application (Process 1) and the sidecar (Process 2) run as two processes inside the same Microservice.
39. Shopping Portal – A/B Testing using Canary Deployment
Users → Firewall → Load Balancer → Istio Gateway → Virtual Service (/ui, /productms, /auth, /order). The UI Destination Rule splits UI Pods into Stable (v1, on nodes N1 / N2) and Canary (v2, on node N5): User X = Canary, all other users = Stable. Product and Review Services (with EndPoints, internal load balancers, Pods on nodes N1 / N3 / N4 and a MySQL Pod) are reached through service calls resolved by Kube DNS. Kubernetes objects: Deployment / Replica / Pod on Nodes; Istio objects: Gateway, Virtual Service, Destination Rule; the Istio Control Plane with the Envoy sidecar injected into each Pod.
Source: https://github.com/meta-magic/kubernetes_workshop
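The "User X = Canary, others = Stable" rule above is typically expressed as header-based routing in an Istio VirtualService plus subsets in a DestinationRule — a hedged sketch; host, subset and header names (ui-service, v1/v2, x-user) are illustrative:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ui
spec:
  hosts:
  - ui-service
  http:
  - match:
    - headers:
        x-user:                # illustrative header identifying "User X"
          exact: user-x
    route:
    - destination:
        host: ui-service
        subset: v2             # Canary
  - route:                     # everyone else
    - destination:
        host: ui-service
        subset: v1             # Stable
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: ui
spec:
  host: ui-service
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
```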
40. Shopping Portal – Traffic Shifting (Canary Deployment)
Same topology as the previous slide: Users → Firewall → Load Balancer → Istio Gateway → Virtual Service (/ui, /productms, /auth, /order) → UI / Product / Review Services with Destination Rules, EndPoints, internal load balancers and Pods across nodes (N1-N5, MySQL Pod), resolved through Kube DNS. Here the Virtual Service shifts traffic by weight: 10% = Canary (v2), 90% = Stable (v1).
Source: https://github.com/meta-magic/kubernetes_workshop
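The 10 / 90 split maps directly to route weights in the Istio VirtualService — a sketch, with illustrative host and subset names:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ui
spec:
  hosts:
  - ui-service
  http:
  - route:
    - destination:
        host: ui-service
        subset: v1
      weight: 90               # Stable
    - destination:
        host: ui-service
        subset: v2
      weight: 10               # Canary
```

Gradually raising the canary weight (10 → 50 → 100) completes the rollout without redeploying anything.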
41. Shopping Portal – Blue-Green Deployment
Same topology as the previous slides (Gateway → Virtual Service /ui, /productms, /auth, /order → UI / Product / Review Services with Destination Rules, EndPoints and internal load balancers; Kube DNS for service calls). Blue-Green Deployment: 100% = Stable (v1), with the Canary (v2) deployed alongside; when you want to shift to v2, change 100% to the Canary.
Source: https://github.com/meta-magic/kubernetes_workshop
42. Shopping Portal – Mirror Deployment
Same topology as the previous slides (Gateway → Virtual Service /ui, /productms, /auth, /order → UI / Product / Review Services with Destination Rules, EndPoints and internal load balancers; Kube DNS for service calls). Mirror Deployment: 100% = Stable (v1), Mirror = Canary (v2) — production data is mirrored to the new release for real-time testing.
Source: https://github.com/meta-magic/kubernetes_workshop
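Mirroring is also a VirtualService setting: live traffic is routed to Stable while a copy is sent fire-and-forget to the Canary, whose responses are discarded. A hedged sketch with illustrative names:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ui
spec:
  hosts:
  - ui-service
  http:
  - route:
    - destination:
        host: ui-service
        subset: v1             # 100% of live traffic stays on Stable
    mirror:
      host: ui-service
      subset: v2               # Canary receives a mirrored copy
```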
43. Shopping Portal – Fault Injection
Same topology as the previous slides (Gateway → Virtual Service /ui, /productms, /auth, /order → UI / Product / Review Services with Destination Rules, EndPoints and internal load balancers; Kube DNS for service calls; MySQL Pod on nodes N3 / N4). Fault Injection via the Istio Control Plane: Delay = 2 Sec, Abort = 10% — injected into the service path to test the resilience of the callers.
Source: https://github.com/meta-magic/kubernetes_workshop
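The "Delay = 2 Sec, Abort = 10%" rule translates to an Istio fault-injection block — a sketch; the target host (review-service) is chosen here purely for illustration:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: review
spec:
  hosts:
  - review-service
  http:
  - fault:
      delay:
        fixedDelay: 2s         # Delay = 2 Sec
        percentage:
          value: 100
      abort:
        httpStatus: 500        # Abort = 10% of requests with HTTP 500
        percentage:
          value: 10
    route:
    - destination:
        host: review-service
```

This lets you verify timeouts and circuit breakers without touching the application code.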
44. Container & Kubernetes Summary
1. Containers are NOT Virtual Machines
2. Containers are isolated area in the OS kernel
3. Kubernetes is a Container Orchestration
Platform.
4. Kubernetes abstracts the cloud vendor (AWS,
Azure, GCP) scalability features.
5. Kubernetes Concepts – Declarative Model,
Desired State and Current State.
45. Kafka
• CONCEPTS: QUEUES / PUB-SUB / EVENT STREAMING
• WHY IS IT DIFFERENT FROM TRADITIONAL MESSAGE QUEUES?
• DATA STORAGE / CLUSTER / DURABILITY
• PERFORMANCE
(C) Copyright Meta Magic Global Inc., New Jersey, USA
46. Kafka Core Concepts
• Publish & Subscribe: read and write streams of data like a messaging system.
• Process: write scalable stream processing apps that react to events in real-time.
• Store: store streams of data safely in a distributed, replicated, fault-tolerant cluster.
47. Traditional Queue / Pub-Sub vs. Kafka
Queues:
• Pros: data can be partitioned for scalability, for parallel processing by the same type of consumers.
• Cons: queues are NOT multi-subscriber — once a consumer reads the data, it is gone from the queue; ordering of records is lost in asynchronous parallel processing.
Pub-Sub:
• Pros: multiple subscribers can get the same data.
• Cons: scaling is difficult, as every message goes to every subscriber.
Kafka generalizes these two concepts. As with a queue, the consumer group allows you to divide up processing over a collection of processes (the members of the consumer group). As with publish-subscribe, Kafka allows you to broadcast messages to multiple consumer groups.
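The consumer-group idea can be sketched with a toy partition assignment — a simplified model for illustration, not the actual Kafka rebalancing protocol:

```python
# Toy model: partitions are divided among a group's members, so each record
# is processed once per group (queue semantics inside a group), while every
# group sees all partitions (pub-sub semantics across groups).
def assign_partitions(partitions, members):
    """Round-robin the partitions over the group's members."""
    assignment = {m: [] for m in members}
    for i, p in enumerate(partitions):
        assignment[members[i % len(members)]].append(p)
    return assignment

groups = {
    "billing": ["b1", "b2"],   # two members split the partitions
    "audit":   ["a1"],         # a single member consumes everything
}
for group, members in groups.items():
    print(group, assign_partitions([0, 1, 2, 3], members))
```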
48. Anatomy of a Topic
Source : https://kafka.apache.org/intro
• A Topic is a category or feed name to which
records are published.
• Topics in Kafka are always multi subscriber.
• That is, a Topic can have zero, one, or many
consumers that subscribe to the data
written to it.
• Each Partition is an ordered, immutable
sequence of records that is continually
appended to—a structured commit log.
• A Partition is nothing but a directory of Log
Files
• The records in the partitions are each assigned a sequential id number called
the offset that uniquely identifies each record within the partition.
49. Partition Log Segment
• Partition (Kafka’s Storage unit) is Directory of Log
Files.
• A partition cannot be split across multiple brokers or
even multiple disks
• Partitions are split into Segments
• Segments are two files: 000.log & 000.index
• Segments are named by their base offset. The base
offset of a segment is an offset greater than offsets in
previous segments and less than or equal to offsets in
that segment.
• Indexes store offsets relative to its segments base
offset
• Indexes map each offset to their message position in
the log and they are used to look up messages.
• Purging of data is based on oldest segment and one
segment at a time.
Partition data (offsets 0-9) is split into segments by base offset: Segment 0, Segment 3, Segment 6, and Segment 9 (active).

$ tree kafka-logs | head -n 6
kafka-logs
|──── SigmaHawk-2
| |──── 00000000006109871597.index
| |──── 00000000006109871597.log
| |──── 00000000007306321253.index
| |──── 00000000007306321253.log

Example for one Topic / Partition segment — 0000.index maps relative offset → position (4 bytes each); 0000.log holds offset, position, size, payload:
0000.index: (0, 0), (1, 7), (2, 11)
0000.log: (0, 0, 7, ABCDEFG), (1, 7, 4, ABCD), (2, 11, 9, ABCDEFGIJ)
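The segment-naming rule above makes offset lookup a simple search: pick the segment with the greatest base offset ≤ the target, then use the index inside it. A minimal sketch using the slide's numbers:

```python
import bisect

# Segments named by base offset, as on the slide: 0, 3, 6, 9 (active).
segment_bases = [0, 3, 6, 9]

def find_segment(offset):
    """Greatest base offset that is <= the target offset."""
    i = bisect.bisect_right(segment_bases, offset) - 1
    return segment_bases[i]

# 0000.index for segment 0: relative offset -> byte position in 0000.log.
index = {0: 0, 1: 7, 2: 11}

def position(offset):
    base = find_segment(offset)
    return index[offset - base]   # only segment 0's index is modeled here

print(find_segment(7))   # offset 7 lives in the segment with base 6
print(position(2))       # relative offset 2 -> position 11 in 0000.log
```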
50. Kafka Cluster – Topics & Partitions
• The partitions of the log are distributed over the servers in the Kafka cluster, with each server handling data and requests for a share of the partitions.
• Each partition has one server which acts as the "leader" and zero or more servers which act as "followers".
• Each server acts as a leader for some of its partitions and a follower for others, so load is well balanced within the cluster.
Example (Topic ABC, Partitions 0 and 1 across Brokers 1-5): Brokers 1 and 5 act as leaders, while Brokers 2, 3 and 4 act as followers.
Source: https://kafka.apache.org/intro
51. Record Commit Process
• Each partition is replicated across a configurable number of servers for fault tolerance.
• The leader handles all read and write requests for the partition, while the followers passively replicate the leader.
• If the leader fails, one of the followers will automatically become the new leader.
Flow (Topic 1 on Broker 1 as Leader; Brokers 2 and 3 as Followers): (1) the Producer sends a message with an offset (e.g. 777743) to the Leader; (2) the Followers replicate it; (3) the Leader commits and acks; (4) the Consumer reads the committed message.

Data Durability (from Kafka v0.8.0 onwards) – Producer configuration, acks:
• acks=0 (step 1 only): the producer will NOT wait for any acknowledgment from the server at all. The record is immediately added to the socket buffer and considered sent. No guarantee can be made that the server has received the record, and the retries configuration will not take effect (as the client won't generally know of any failures). The offset given back for each record will always be set to -1.
• acks=1 (steps 1, 3): the leader will write the record to its local log but will respond without awaiting full acknowledgement from all followers. Should the leader fail immediately after acknowledging the record but before the followers have replicated it, the record will be lost.
• acks=all / -1 (steps 1, 2, 3): the leader will wait for the full set of in-sync replicas to acknowledge the record. This guarantees that the record will not be lost as long as at least one in-sync replica remains alive. This is the strongest available guarantee; acks=all is equivalent to acks=-1.
Source: https://kafka.apache.org/documentation/#topicconfigs
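In producer-configuration terms this is a single setting — shown here as a minimal properties-style fragment (the rest of the producer config is omitted):

```properties
# acks=0   -> fire and forget (step 1 only)
# acks=1   -> leader ack (steps 1, 3)
# acks=all -> all in-sync replicas (steps 1, 2, 3); equivalent to acks=-1
acks=all
retries=3
```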
52. Replication – ISR (In-sync Replicas)
• Instead of a majority vote, Kafka dynamically maintains a set of in-sync replicas (ISR) that are caught up to the leader.
• Only members of this set are eligible for election as leader.
• A write to a Kafka partition is not considered committed until all in-sync replicas have received the write.
• This ISR set is persisted to ZooKeeper whenever it changes. Because of this, any replica in the ISR is eligible to be elected leader.
Failure walkthrough (L = Leader, F = Follower):
1. ISR = (A, B, C): Leader A has m1, m2, m3 and commits m1; m2 & m3 are not yet committed. Follower B has m1, m2; Follower C has m1.
2. ISR = (B, C): A fails and B is the new Leader; B commits m2.
3. ISR = (B, C): B commits new messages m4 and m5; C replicates m4 and m5.
4. ISR = (A, B, C): A comes back, restores to the last commit (dropping the uncommitted m3) and catches up to the latest messages m1, m2, m4, m5.
53. LinkedIn Kafka Cluster
• Brokers: 60
• Partitions: 50K
• Messages / Second: 800K
• MB / Second inbound: 300
• MB / Second outbound: 1024
The tuning looks fairly aggressive, but all of the brokers in that cluster have a 90% GC pause time of about 21 ms, and they're doing less than 1 young GC per second.
55. Kafka Summary
1. Combined Best of Queues and Pub / Sub
Model.
2. Data Durability
3. Fastest Messaging Infrastructure
4. Streaming capabilities
5. Replication
56. Architecture & Design Patterns
• INFRASTRUCTURE DESIGN PATTERNS
• CAPABILITY CENTRIC DESIGN
• DOMAIN DRIVEN DESIGN
• EVENT SOURCING & CQRS
• FUNCTIONAL REACTIVE PROGRAMMING
• UI DESIGN PATTERNS
• RESTFUL APIS AND VERSIONING
57. Infrastructure Design Patterns
• API GATEWAY
• LOAD BALANCER
• SERVICE DISCOVERY
• CIRCUIT BREAKER
• SERVICE AGGREGATOR
• LET-IT CRASH PATTERN
58. API Gateway Design Pattern
Users → Firewall → API Gateway. The API Gateway (Reverse Proxy Server) routes the traffic to the appropriate Microservices (Load Balancers).
• Monolithic side: users access the Monolithic App (UI Layer, WS, BL, DL over a single Database containing Shopping Cart, Order, Customer and Product) directly.
• Microservices side: each service sits behind its own Load Balancer and Circuit Breaker — the Product Microservice (SE 8, 4-node cluster) and the Customer Microservice (2-node cluster), each with its own UI Layer, Web Services, Business Logic and Database Layer.
59. Load Balancer Design Pattern
Users → Firewall → API Gateway (Reverse Proxy Server), which routes the traffic to the appropriate Microservices' Load Balancers (CB = Hystrix for the Circuit Breaker): the Product Microservice (SE 8, 4-node cluster) and the Customer Microservice (2-node cluster), each with its own UI Layer, Web Services, Business Logic and Database Layer.
Load Balancer Rules:
1. Round Robin
2. Based on Availability
3. Based on Response Time
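The first rule is easy to sketch — a minimal round-robin balancer; availability- and response-time-based strategies would pick differently. The instance addresses are illustrative:

```python
import itertools

class RoundRobinBalancer:
    """Cycle through service instances in order (Rule 1: Round Robin)."""

    def __init__(self, instances):
        self._cycle = itertools.cycle(instances)

    def next_instance(self):
        return next(self._cycle)

lb = RoundRobinBalancer(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
print([lb.next_instance() for _ in range(4)])
# -> ['10.0.0.1', '10.0.0.2', '10.0.0.3', '10.0.0.1']
```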
60. Service Discovery – NetFlix Network Stack Model
• In this model developers write code in every Microservice to register with the NetFlix Eureka Service Discovery Server.
• Load Balancers and the API Gateway also register with Service Discovery.
• Service Discovery informs the Load Balancers about the instance details (IP Addresses).
Users → Firewall → API Gateway → per-service Load Balancer + Circuit Breaker → Product Microservice (4-node cluster) and Customer Microservice (2-node cluster), all registered with Service Discovery.
61. Service Discovery – Kubernetes Model
• The API Gateway (Reverse Proxy Server) doesn't know the instances (IP Addresses) of the News Pods. It knows the IP address of the Services defined for each Microservice (News, Politics, Sports etc.).
• Services handle the dynamic IP Addresses of the pods. Services will automatically discover the new Pods based on Labels.
Service definition from the Kubernetes perspective: each Service (News, Politics, Sports) has EndPoints and internal load balancers in front of its Pods (spread across nodes N1-N4) and a backing DB; service calls resolve through Kube DNS behind the Reverse Proxy Server / API Gateway.
62. Circuit Breaker Pattern
If Product Review is not available, the Product service will return the product details with a message that the review is not available.
Users → Firewall → Reverse Proxy Server → Ingress (routing based on Layers 3, 4 and 7: /ui, /productms) → UI, Product and Review Services, each with EndPoints and internal load balancers in front of its Pods (across nodes N1-N4, with a MySQL Pod behind the Product service); service calls resolve through Kube DNS. Kubernetes objects: Deployment / Replica / Pod on Nodes.
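The fallback behavior described above can be sketched as a tiny circuit breaker — a simplified model (real breakers like Hystrix also add a half-open state and timeouts); the service names are illustrative:

```python
class CircuitBreaker:
    """Open the circuit after `threshold` consecutive failures."""

    def __init__(self, threshold=3):
        self.failures = 0
        self.threshold = threshold

    @property
    def open(self):
        return self.failures >= self.threshold

    def call(self, fn, fallback):
        if self.open:
            return fallback()          # short-circuit: skip the remote call
        try:
            result = fn()
            self.failures = 0          # success resets the failure count
            return result
        except Exception:
            self.failures += 1
            return fallback()

breaker = CircuitBreaker()

def fetch_reviews():                   # stand-in for the Review service call
    raise ConnectionError("review service down")

for _ in range(4):
    print(breaker.call(fetch_reviews, lambda: "review not available"))
```

Once the breaker is open, callers get the fallback immediately instead of waiting on a dead dependency.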
63. Service Aggregator Pattern
News Service Portal:
• News-category-wise Microservices (National, Politics, Sports), each with its own Service, EndPoints, internal load balancers and DB, with Pods across nodes N1-N4.
• An Aggregator Microservice (the News Service, behind /newservice on the Reverse Proxy Server / Ingress) aggregates all categories of news; service calls resolve through Kube DNS.
Auto Scaling:
• Sports events (IPL) spike the traffic for the Sports Microservice.
• Auto scaling happens for both the News and Sports Microservices.
Kubernetes objects: Deployment / Replica / Pod on Nodes.
65. Service Aggregator Pattern
Spotify Microservices:
• The Artist Microservice (behind /artist on the Reverse Proxy Server / Ingress) combines all the details from the Discography, Play Count and Playlist Microservices, each with its own Service, EndPoints, internal load balancers and DB, with Pods across nodes N1-N4; service calls resolve through Kube DNS.
Auto Scaling:
• The Artist and downstream Microservices scale automatically depending on the load factor.
Kubernetes objects: Deployment / Replica / Pod on Nodes.
66. Software Network Stack vs. Network Stack

# | Pattern           | Software Stack: Java | Software Stack: .NET | Kubernetes
1 | API Gateway       | Zuul Server          | SteelToe             | Istio Envoy
2 | Service Discovery | Eureka Server        | SteelToe             | Kube DNS
3 | Load Balancer     | Ribbon Server        | SteelToe             | Istio Envoy
4 | Circuit Breaker   | Hystrix              | SteelToe             |
5 | Config Server     | Spring Config        | SteelToe             | Secrets, Env – K8s Master
Web Sites: https://netflix.github.io/ | https://steeltoe.io/ | https://kubernetes.io/

Developers need to write code to integrate with the Software Stack (programming-language specific). For example, every microservice needs to subscribe to Service Discovery when the microservice boots up.
Service Discovery in Kubernetes is based on the Labels assigned to Pods and Services, and their Endpoints (IP Addresses) are dynamically mapped (DNS) based on the Label.
67. Let-it-Crash Design Pattern – Erlang Philosophy
• The Erlang view of the world is that everything is a process and that processes can
interact only by exchanging messages.
• A typical Erlang program might have hundreds, thousands, or even millions of processes.
• Letting processes crash is central to Erlang. It’s the equivalent of unplugging your router
and plugging it back in – as long as you can get back to a known state, this turns out to be
a very good strategy.
• To make that happen, you build supervision trees.
• A supervisor will decide how to deal with a crashed process. It will restart the process, or
possibly kill some other processes, or crash and let someone else deal with it.
• Two models of concurrency: Shared State Concurrency and Message Passing Concurrency. The programming world went one way (toward shared state); the Erlang community went the other way.
• Languages such as C, Java, C++, and so on have the notion that there is this stuff called state and that we can change it. The moment you share state, you need a locking mechanism such as a Mutex.
• Erlang has no mutable data structures (that’s not quite true, but it’s true enough). No
mutable data structures = No locks. No mutable data structures = Easy to parallelize.
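The supervision-tree idea translates to any language. Below is a minimal Java sketch of a supervisor (class and method names are illustrative, not Erlang's API): run the task from a known state, and simply re-run it when it crashes, instead of making the task defend against its own failure.

```java
// Minimal let-it-crash sketch: a supervisor restarts a crashed worker
// until it succeeds, instead of the worker defending against failure.
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Supplier;

public class Supervisor {
    /** Runs the task; on any exception, restarts it up to maxRestarts times. */
    public static <T> T supervise(Supplier<T> task, int maxRestarts) {
        RuntimeException last = null;
        for (int attempt = 0; attempt <= maxRestarts; attempt++) {
            try {
                return task.get();            // known good state: just re-run
            } catch (RuntimeException e) {
                last = e;                     // crash: remember and restart
            }
        }
        throw last;                           // escalate to our own supervisor
    }

    public static void main(String[] args) {
        AtomicInteger calls = new AtomicInteger();
        // Worker crashes twice, then succeeds; the supervisor absorbs the crashes.
        String result = supervise(() -> {
            if (calls.incrementAndGet() < 3) throw new IllegalStateException("crash");
            return "ok";
        }, 5);
        System.out.println(result + " after " + calls.get() + " runs"); // ok after 3 runs
    }
}
```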
68. Let-it-Crash Design Pattern
4/1/2019 68
1. The idea of Messages as the first class citizens of a system, has been
rediscovered by the Event Sourcing / CQRS community, along with a strong
focus on domain models.
2. Event Sourced Aggregates are a way to Model the Processes and NOT things.
3. Each component MUST tolerate a crash and restart at any point in time.
• All interaction between the components must tolerate that peers can crash.
This means ubiquitous use of timeouts and Circuit Breakers.
5. Each component must be strongly encapsulated so that failures are fully
contained and cannot spread.
• All requests sent to a component MUST be as self-describing as is practical, so
that processing can resume with as little recovery cost as possible after a
restart.
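Point 4 above asks for timeouts and circuit breakers on every peer interaction. A minimal circuit-breaker sketch in Java (illustrative only, not the Hystrix API; there is no half-open recovery state here):

```java
import java.util.function.Supplier;

// Minimal circuit breaker: after `threshold` consecutive failures the
// circuit opens and every call fails fast with the fallback, instead of
// hammering a peer that is already down.
public class CircuitBreaker {
    private final int threshold;
    private int consecutiveFailures = 0;

    public CircuitBreaker(int threshold) { this.threshold = threshold; }

    public <T> T call(Supplier<T> remote, T fallback) {
        if (consecutiveFailures >= threshold) return fallback; // open: fail fast
        try {
            T result = remote.get();
            consecutiveFailures = 0;                           // success: stay closed
            return result;
        } catch (RuntimeException e) {
            consecutiveFailures++;                             // count the crash
            return fallback;
        }
    }

    public static void main(String[] args) {
        CircuitBreaker breaker = new CircuitBreaker(2);
        for (int i = 0; i < 3; i++)
            System.out.println(breaker.call(() -> { throw new RuntimeException("down"); }, "cached"));
        // After 2 failures the circuit is open: the supplier is no longer invoked.
    }
}
```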
69. Let-it-Crash : Comparison Erlang Vs. Microservices Vs. Monolithic Apps
69
Erlang Philosophy | Microservices Architecture | Monolithic Apps (Java, C++, C#, Node JS ...)
1. Perspective
• Erlang: Everything is a Process.
• Microservices: Event Sourced Aggregates are a way to model the Process, NOT things.
• Monolithic: Things (defined as Objects) and Behaviors.
2. Crash Recovery
• Erlang: A Supervisor decides how to handle a crashed process.
• Microservices: Kubernetes monitors every Pod (Microservice), its Readiness and Health; K8s terminates a Pod whose health is bad and spawns a new one. The Circuit Breaker pattern is used to handle the fallback mechanism.
• Monolithic: Not available. Most monolithic apps are stateful, crash recovery must be handled manually, and languages other than Erlang focus on defensive programming.
3. Concurrency
• Erlang: Message Passing Concurrency.
• Microservices: Domain Events for state changes within a Bounded Context; Integration Events for external systems.
• Monolithic: Mostly Shared State Concurrency.
4. State
• Erlang: Stateless; mostly immutable structures.
• Microservices: Immutability is handled through Event Sourcing along with Domain Events and Integration Events.
• Monolithic: Predominantly stateful, with mutable structures and a Mutex as the locking mechanism.
5. First-class Citizen
• Erlang: Messages.
• Microservices: Messages are 1st-class citizens via the Event Sourcing / CQRS pattern, with a strong focus on Domain Models.
• Monolithic: Mutable objects, a strong focus on Domain Models, and synchronous communication.
70. Infrastructure Design Patterns Summary
4/1/2019 70
1. API Gateway
2. Service Discovery
3. Load Balancer
4. Circuit Breaker
5. Service Aggregator Pattern
6. Let It Crash Pattern
72. Business Solution & Business Process
4/1/2019 72
A Business Solution focuses on the entire journey of the user, which can run across multiple Microservices.
A Business Solution comprises a set of Business Processes.
A specific Microservice's functionality is focused on one Business Process / Concern.
A Business Process can be divided further into Business Functions.
Business Solution: Customer Dining Experience
Processes: Food Menu | Dining | Order | Kitchen | Payment
Functions: Browse Menu → Order Dinner → Dinner Served → Get Bill → Make Payment
73. 4/1/2019 73
Capability Centric Design
Vertically sliced Product Team
Business Centric Development
• Focus on Business Capabilities
• Entire team is aligned towards
Business Capability.
• From Specs to Operations – The
team handles the entire spectrum
of Software development.
• Every vertical will have its own Code Pipeline.
In a typical monolithic setup the team is divided based on technology / skill set (Front-End Team, Back-End Team, Database Team, QA/QC Team) rather than business functions. This leads not only to bottlenecks but also to a lack of understanding of the Business Domain.
[Diagram: three vertically sliced product teams, Business Capability 1, 2 and 3, each with its own Front-End, Back-End, Database and QA/QC.]
75. Capability Centric Design Summary
4/1/2019 75
1. Business Solutions
2. Business Process
3. Business Capabilities
4. Business Driven Teams (From Specs to Ops)
5. Outcome Oriented instead of Activity
Oriented.
77. 4/1/2019 77
Domain Driven Design
• STRATEGIC: BOUNDED CONTEXT, UBIQUITOUS LANGUAGE
• TACTICAL DESIGN: ENTITIES, AGGREGATE ROOT, VALUE OBJECT,
FACTORIES, REPOSITORY, EVENTS, SERVICES
• CASE STUDY: SHOPPING PORTAL
78. DDD: Bounded Context – Strategic Design
01-04-2019 78
• Bounded Context is a Specific Business Process / Concern.
• Components / Modules inside the Bounded Context are context specific.
• Multiple Bounded Contexts are linked using Context Mapping.
• One Team assigned to a Bounded Context.
• Each Bounded Context will have its own Source Code Repository.
• When the Bounded Context is being developed as a key strategic
initiative of your organization, it’s called the Core Domain.
• Within a Bounded Context the team must share a single language, the
Ubiquitous Language, used both in speech and in design / code implementation.
79. DDD: App User’s Journey & Bounded Context
4/1/2019
79
An App User’s Journey can run across
multiple Bounded Context / Micro
Services.
[Diagram: User Journey X and User Journey Y each span multiple Bounded Contexts.]
Understanding Bounded Context (DDD) of a Restaurant App
[Diagram: three Bounded Contexts. Dining Context: Order, Reservation, Tables. Kitchen Context: Recipes, Raw Materials, Frozen, Semi Cooked. Menu Context: Appetizer Veg, Appetizer Non Veg, Main Course Veg, Main Course Non Veg, Soft Drinks, Hot Drinks, Desserts. The Steward uses the Dining and Menu Contexts; the Chef uses the Kitchen Context.]
Source: Domain-Driven Design
Reference by Eric Evans
80. 4/1/2019 80
Ubiquitous Language: the vocabulary shared by all involved parties, used in all forms of spoken and written communication.
[Diagram: the Ubiquitous Language is shared by the Domain Expert, Analysts, Developers and QA, and shows up in design docs, test cases and code.]
Restaurant Context, Food Item: e.g. a Food Item (Navrathnakurma) can have a different meaning or different properties depending on the context.
• In the Menu Context it's a Veg Dish.
• In the Kitchen Context it's a recipe.
• In the Dining Context it carries more info related to user feedback etc.
DDD: Ubiquitous Language: Strategic Design
As a Restaurant Owner
I want to know who my Customers are
So that I can serve them better
Role-Feature-Reason Matrix
BDD – Behavior Driven Development
Given Customer John Doe exists
When Customer orders food
Then
Assign customer preferences
as Veg or Non Veg customer
BDD Construct
81. 01-04-2019 81
Hexagonal Architecture
Ports & Adapters
The layer between the Adapter and
the Domain is identified as the Ports
layer. The Domain is inside the port,
adapters for external entities are on
the outside of the port.
The notion of a “port” invokes the
OS idea that any device that adheres
to a known protocol can be plugged
into a port. Similarly many adapters
may use the Ports.
Source : http://alistair.cockburn.us/Hexagonal+architecture
https://skillsmatter.com/skillscasts/5744-decoupling-from-asp-net-hexagonal-architectures-in-net
[Diagram: the Domain Layer (the OrderProcessing Domain Service with the business rules, the Domain Models, and Order Data Validation) sits inside the Use Case Boundary / Bounded Context. Ports (the OrderService, OrderProcessing and Order Tracking Repository interfaces) surround it, and Adapters on the outside (the OrderService REST Service implementation, the Order Tracking JPA Repository implementation) connect it to the UI, Web Services, External Apps, the File system, the Database and other data stores.]
• Reduces Technical Debt
• Dependency Injection
• Auto Wiring
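The ports-and-adapters split in the diagram can be sketched in Java. The names loosely mirror the diagram (OrderProcessingService, OrderTrackingRepository), but the code itself is illustrative; the in-memory adapter stands in for the JPA one:

```java
import java.util.HashMap;
import java.util.Map;

public class HexagonalSketch {
    // Port: an interface owned by the domain. The domain never sees JPA or REST.
    interface OrderTrackingRepository {
        void save(String orderId, String status);
        String statusOf(String orderId);
    }

    // Domain service: business rules, written purely against the port.
    static class OrderProcessingService {
        private final OrderTrackingRepository repo;
        OrderProcessingService(OrderTrackingRepository repo) { this.repo = repo; }
        void placeOrder(String orderId) { repo.save(orderId, "PLACED"); }
        String track(String orderId)    { return repo.statusOf(orderId); }
    }

    // Adapter: one concrete implementation plugged into the port from outside.
    // A JPA or REST adapter would implement the same interface.
    static class InMemoryAdapter implements OrderTrackingRepository {
        private final Map<String, String> store = new HashMap<>();
        public void save(String id, String status) { store.put(id, status); }
        public String statusOf(String id) { return store.get(id); }
    }

    public static void main(String[] args) {
        // Dependency injection: the adapter is chosen at wiring time.
        OrderProcessingService service = new OrderProcessingService(new InMemoryAdapter());
        service.placeOrder("42");
        System.out.println(service.track("42")); // prints PLACED
    }
}
```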
82. Layered Architecture
01-04-2019 82
• Explicit Domain Models – Isolate your models from UI, Business
Logic.
• Domain Objects – Free of the Responsibility of displaying
themselves or storing themselves or managing App Tasks.
• Zero Dependency on Infrastructure, UI and Persistent Layers.
• Use Dependency Injection for Loosely Coupled Objects.
• All the Code for Domain Model in a Single Layer.
• Domain Model should be Rich enough to represent Business
Knowledge.
Source: DDD Reference by Eric Evans, Page 17
83. 4/1/2019 83
Domain Driven Design – Tactical Design
Source: Domain-Driven Design Reference by Eric Evans
84. DDD: Understanding Aggregate Root
84
[Diagram: an Order aggregate. The Order is the Aggregate Root, holding the Customer, Shipping Address, Line Items (*) and a Payment Strategy (Credit Card, Cash, Bank Transfer).]
Source: Martin Fowler : Aggregate Root
• An aggregate will have one of its component
objects be the aggregate root. Any references
from outside the aggregate should only go to the
aggregate root. The root can thus ensure the
integrity of the aggregate as a whole.
• Aggregates are the basic element of transfer of
data storage - you request to load or save whole
aggregates. Transactions should not cross
aggregate boundaries.
• Aggregates are sometimes confused with
collection classes (lists, maps, etc.).
• Aggregates are domain concepts (order, clinic visit,
playlist), while collections are generic. An
aggregate will often contain multiple collections,
together with simple fields.
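The rule that outside references go only through the root can be sketched in Java: line items are reachable only via the Order, so the root can guard the aggregate's invariants on every change (the order-total cap below is a made-up invariant for illustration):

```java
import java.util.ArrayList;
import java.util.List;

// Aggregate root sketch: LineItems are reachable only through Order,
// so the root can enforce the aggregate's invariants on every change.
public class Order {
    public static final int MAX_TOTAL = 1000;  // illustrative invariant

    public static final class LineItem {       // child entity, created by the root
        final String product; final int price;
        LineItem(String product, int price) { this.product = product; this.price = price; }
    }

    private final List<LineItem> items = new ArrayList<>();

    /** The only way to add an item: the root checks the invariant first. */
    public void addItem(String product, int price) {
        if (total() + price > MAX_TOTAL)
            throw new IllegalStateException("order total would exceed " + MAX_TOTAL);
        items.add(new LineItem(product, price));
    }

    public int total() { return items.stream().mapToInt(i -> i.price).sum(); }

    public static void main(String[] args) {
        Order order = new Order();
        order.addItem("book", 300);
        order.addItem("lamp", 400);
        System.out.println(order.total()); // prints 700
    }
}
```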
(C) Copyright MetaMagic Global Inc., New Jersey, USA 01-04-2019
85. DDD: Domain Events & Integration Events
01-04-2019 85
1. Domain Events represent something happened in a specific Domain.
2. Domain Events should be used to propagate STATE changes across
Multiple Aggregates within the Bounded Context.
3. The purpose of Integration Events is to propagate committed
transactions and updates to additional subsystems, whether they are
other microservices, Bounded Contexts or even external applications.
Source: Domain Events : Design and Implementation – Microsoft Docs – May 26, 2017
[Diagram: in the Domain Layer, the Order (Aggregate Root, 1) holds an Address (Value Object, 1) and OrderItems (Children, n), each with data and behavior. The Order raises an Order Created Domain Event; Event Handlers 1..n enforce consistency with other Aggregates, and one handler creates and publishes an Integration Event to the Event Bus. Example: an Order Placed Integration Event can be subscribed to by the Inventory system to update the inventory details.]
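The flow above, sketched in Java. The handler registry and the list standing in for the event bus are illustrative assumptions, not a real messaging API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Sketch of the flow in the diagram: a domain event is handled in-process
// inside the bounded context; one handler forwards an integration event to
// an event bus for other services (the "bus" here is just a list).
public class DomainEvents {
    static final class OrderCreated {              // domain event
        final String orderId;
        OrderCreated(String orderId) { this.orderId = orderId; }
    }

    static final List<String> eventBus = new ArrayList<>();            // stand-in bus
    static final List<Consumer<OrderCreated>> handlers = new ArrayList<>();

    /** In-process publication: all handlers run within the bounded context. */
    static void publish(OrderCreated event) {
        handlers.forEach(h -> h.accept(event));
    }

    public static void main(String[] args) {
        // Handler 1: enforce consistency with another aggregate (sketched as a log line).
        handlers.add(e -> System.out.println("reserve stock for order " + e.orderId));
        // Handler 2: publish the committed fact as an integration event.
        handlers.add(e -> eventBus.add("OrderPlaced:" + e.orderId));

        publish(new OrderCreated("42"));
        System.out.println(eventBus); // prints [OrderPlaced:42]
    }
}
```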
87. 4/1/2019 87
DDD: Use Case Order Module
Models
Value Object
• Currency
• Item Value
• Order Status
• Payment Type
• Record State
• Audit Log
Entity
• Order (Aggregate Root)
• Order Item
• Shipping Address
• Payment
DTO
• Order
• Order Item
• Shipping Address
• Payment
Domain Layer Adapters
• Order Repository
• Order Service
• Order Web Service
• Order Query Web Service
• Shipping Address Web Service
• Payment Web Service
Adapters consist of the actual implementations of the Ports, such as database access and web services APIs.
Converters are used to convert an Enum value to a proper Integer value in the database. For example, the Order Status Complete is mapped to the integer value 100 in the database.
Services / Ports
• Order Repository
• Order Service
• Order Web Service
Utils
• Order Factory
• Order Status Converter
• Record State Converter
Shopping Portal
88. Procedural Design Vs. Domain Driven Design
88
1. Anemic Entity Structure
2. Massive IF Statements
3. Entire Logic resides in Service
Layer
4. Type Dependent calculations are
done based on conditional checks
in Service Layer
Domain Driven Design with Java EE 6
By Adam Bien | Javaworld
Source: http://www.javaworld.com/article/2078042/java-app-dev/domain-driven-design-with-java-ee-6.html
89. Polymorphic Business Logic inside a Domain object
01-04-2019 89
Domain Driven Design with Java EE 6
By Adam Bien | Javaworld
Computation of the total cost is realized inside a rich Persistent Domain Object (PDO) and not inside a service. This simplifies creating very complex business rules.
Source: http://www.javaworld.com/article/2078042/java-app-dev/domain-driven-design-with-java-ee-6.html
90. Type Specific Computation in a Sub Class
90
We can change the
computation of the shipping
cost of a Bulky Item without
touching the remaining
classes.
It's easy to introduce a new Sub Class without affecting the computation of the total cost in the Load Class.
Domain Driven Design with Java EE 6
By Adam Bien | Javaworld
Source: http://www.javaworld.com/article/2078042/java-app-dev/domain-driven-design-with-java-ee-6.html
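A hedged sketch of the same idea (not Adam Bien's actual listing; the class names and cost rules are invented for illustration): the type-specific rule lives in the subclass, so the total-cost computation never changes when a new item type is added.

```java
import java.util.List;

// Polymorphic cost calculation: the type-specific rule lives in the subclass,
// so Load.totalCost() never changes when a new item type is introduced.
public class Load {
    static class Item {
        final int weight;
        Item(int weight) { this.weight = weight; }
        int shippingCost() { return weight * 2; }           // default rule
    }
    static class BulkyItem extends Item {
        BulkyItem(int weight) { super(weight); }
        @Override int shippingCost() { return weight * 5; } // type-specific rule
    }

    /** No if/else on type: each item computes its own cost. */
    static int totalCost(List<? extends Item> items) {
        return items.stream().mapToInt(Item::shippingCost).sum();
    }

    public static void main(String[] args) {
        int total = totalCost(List.of(new Item(10), new BulkyItem(10)));
        System.out.println(total); // 10*2 + 10*5 = 70
    }
}
```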
91. Object Construction : Procedural Way Vs. Builder Pattern
91
Procedural Way Builder Pattern
Source: http://www.javaworld.com/article/2078042/java-app-dev/domain-driven-design-with-java-ee-6.html
Domain Driven Design with Java EE 6
By Adam Bien | Javaworld
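The two construction styles can be contrasted with a small sketch. Below is the Builder side in Java (the Shipment class and its fields are illustrative, not from the article): construction reads declaratively and the resulting object is immutable, versus a procedural sequence of setters.

```java
// Builder pattern sketch: the object is assembled fluently and ends up
// immutable, unlike procedural construction via a chain of setters.
public class Shipment {
    private final String destination;
    private final int weightKg;
    private final boolean express;

    private Shipment(Builder b) {
        this.destination = b.destination; this.weightKg = b.weightKg; this.express = b.express;
    }

    static class Builder {
        private String destination = "";
        private int weightKg = 0;
        private boolean express = false;
        Builder to(String destination) { this.destination = destination; return this; }
        Builder weighing(int kg)       { this.weightKg = kg; return this; }
        Builder express()              { this.express = true; return this; }
        Shipment build()               { return new Shipment(this); }
    }

    public String describe() {
        return weightKg + "kg to " + destination + (express ? " (express)" : "");
    }

    public static void main(String[] args) {
        Shipment s = new Shipment.Builder().to("Oslo").weighing(12).express().build();
        System.out.println(s.describe()); // prints 12kg to Oslo (express)
    }
}
```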
93. 4/1/2019 93
Event Storming
• EVENT SOURCING / CQRS
• CASE STUDY: SHOPPING PORTAL
• CASE STUDY: RESTAURANT APP
• CASE STUDY: MOVIE BOOKING
• CASE STUDY: MOVIE STREAMING
94. Mind Shift : From Object Modeling to Process Modeling
4/1/2019 94
Developers with Strong Object Modeling experience
will have trouble making Events a first class citizen.
• How do I start Event Sourcing?
• Where do I Start on Event Sourcing / CQRS?
The Key is:
1. App User’s Journey
2. Business Process
3. Ubiquitous Language – DDD
4. Capability Centric Design
5. Outcome Oriented
How do you define your End User's Journey & Business Process? Event Storming is the best tool to define your process and its tasks.
• Think It
• Build It
• Run It
95. Event Storming – Concept
1. Process: Define your Business Processes. E.g. various aspects of Order Processing in an E-Commerce site, Movie Ticket Booking, a Patient visit in a Hospital.
2. Commands: Define the Commands (end-user interactions with your App) that execute the Process. E.g. Add Item to Cart is a Command.
3. Events: Commands generate the Events to be stored in the Event Store. E.g. Item Added Event (in the Shopping Cart).
4. Event Sourced Aggregate: The current state of the Aggregate is always derived from the Event Store. E.g. Shopping Cart, Order. This will be part of the rich Domain Model (Bounded Context) of the Microservice. (Write Data)
5. Projections: Projections focus on the View perspective of the Application. As the Read and Write models are different, you can have different Projections based on your View perspective. (Read Data)
96. 4/1/2019 96
Event Sourcing Intro
Standard CRUD Operations – Customer Profile – Aggregate Root
[Timeline: Profile Created (T1) → Title Updated (T2) → New Address Added (T3) → Notes Removed (T4); the current state is derived from these events.]
Event Sourcing and Derived Aggregate Root
Commands
1. Create Profile
2. Update Title
3. Add Address
4. Delete Notes
2
Events
1. Profile Created Event
2. Title Updated Event
3. Address Added Event
4. Notes Deleted Event
3
Current State of the
Customer Profile
4
Event store
Single Source of Truth
Greg
Young
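The "Derived" arrow above is just a fold over the event stream. A Java sketch, with an in-memory list standing in for the event store (the event names follow the slide; the string-pair encoding is an illustrative assumption):

```java
import java.util.ArrayList;
import java.util.List;

// Event sourcing sketch: the event store is the single source of truth;
// the aggregate's current state is derived by replaying (folding over) events.
public class CustomerProfile {
    private String title = "";
    private final List<String> addresses = new ArrayList<>();

    /** The fold step: apply one stored event to the state. */
    private void apply(String event, String payload) {
        if (event.equals("ProfileCreated") || event.equals("TitleUpdated")) title = payload;
        else if (event.equals("AddressAdded")) addresses.add(payload);
        else throw new IllegalArgumentException("unknown event: " + event);
    }

    /** Derive the current state from the whole stream, oldest first. */
    static CustomerProfile replay(List<String[]> eventStore) {
        CustomerProfile p = new CustomerProfile();
        for (String[] e : eventStore) p.apply(e[0], e[1]);
        return p;
    }

    String summary() { return title + " @ " + addresses; }

    public static void main(String[] args) {
        List<String[]> store = List.of(
            new String[]{"ProfileCreated", "Mr"},
            new String[]{"TitleUpdated", "Dr"},
            new String[]{"AddressAdded", "12 Main St"});
        System.out.println(replay(store).summary()); // prints Dr @ [12 Main St]
    }
}
```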
97. Event Sourcing & CQRS (Command and Query Responsibility Segregation)
• In traditional data management systems, both
commands (updates to the data) and queries
(requests for data) are executed against the same
set of entities in a single data repository.
• CQRS is a pattern that segregates the operations
that read data (Queries) from the operations that
update data (Commands) by using separate
interfaces.
• CQRS should only be used on specific portions of a
system in Bounded Context (in DDD).
• CQRS should be used along with Event Sourcing.
4/1/2019 97
MSDN – Microsoft https://msdn.microsoft.com/en-us/library/dn568103.aspx |
Martin Fowler : CQRS – http://martinfowler.com/bliki/CQRS.html
CQS :
Bertrand Meyer
Axon
Framework
For Java
Java Axon Framework Resource : http://www.axonframework.org
Greg
Young
98. 4/1/2019 98
Case Study: Restaurant Dining – Event Sourcing and CQRS
Customer Journey through the Dining Processes
Processes (1): Food Menu | Dining | Kitchen | Order | Payment
Commands (2):
• Food Menu: Add Drinks, Add Food, Update Food
• Dining: Open Table, Add Juice, Add Soda, Add Appetizer 1, Add Appetizer 2, Remove Soda, Add Food 1, Add Food 2, Place Order, Close Table
• Kitchen: Serve Drinks, Prepare Food, Serve Food
• Payment: Prepare Bill, Process Payment
Events (3):
• Food Menu: Drinks Added, Food Added, Food Updated, Food Discontinued
• Dining: Table Opened, Juice Added, Soda Added, Appetizer 1 Added, Appetizer 2 Added, Soda Removed, Food 1 Added, Food 2 Added, Order Placed, Table Closed
• Kitchen: Juice Served, Soda Served, Appetizer Served, Food Prepared, Food Served
• Payment: Bill Prepared, Payment Processed, Payment Approved, Payment Declined, Cash Paid
ES Aggregates / Microservices (4): Dining Order, Billable Order
When people arrive at the Restaurant and take a table, a Table is opened. They may then order drinks and food. Drinks are served immediately by the table staff; however, food must be cooked by a chef. Once the chef has prepared the food, it can be served. When the Table is closed, the bill is prepared.
99. Case Study: Shopping Site – Event Sourcing / CQRS
4/1/2019 99
Customer Journey through the Shopping Process
Processes: Catalogue | Customer | Shopping Cart | Order | Payment
Commands:
• Catalogue: Search Products, Add Products, Update Products
• Shopping Cart: Add to Cart, Remove Item, Update Quantity
• Order: Process Order, Select Address, Select Delivery Mode
• Payment: Proceed for Payment, Confirm Order for Payment, Cancel Order
Events:
• Catalogue: Product Added, Product Updated, Product Discontinued
• Shopping Cart: Item Added, Item Removed / Discontinued, Item Updated
• Order: Order Initiated, Address Selected, Delivery Mode Selected, Order Created
• Payment: Payment Initiated, Order Cancelled, Order Confirmed, OTP Sent, Payment Approved, Payment Declined
ES Aggregates / Microservices: Customer, Shopping Cart, Order
Commands are end-user interactions with the App, and based on the Commands (actions) Events are created. These Events include both Domain Events and Integration Events. Event Sourced Aggregates are derived using Domain Events. Each Microservice has its own separate database. Depending on the scalability requirement, each Microservice can be scaled separately; for example, Catalogue can be on a 50-node cluster compared to the Customer Microservice.
The purpose of this example is to demonstrate the concept of ES / CQRS through Event Storming principles.
100. Case Study: Movie Booking – Event Sourcing / CQRS
4/1/2019 100
Customer Journey through booking a Movie Ticket
Processes: Movies | Theatres | Food | Order | Payment
Commands:
• Movies: Search Movies, Add Movies, Update Movies
• Theatres: Select Movie, Select Theatre / Show, Select Tickets, Process Order
• Food: Select Food, Remove Food, Skip Food, Process Order
• Payment: Proceed for Payment, Confirm Order for Payment, Cancel Order
Events:
• Movies: Movie Added, Movie Updated, Movie Discontinued
• Theatres: Movie Added, Theatre / Show Added, Tickets Added, Order Initiated
• Food: Popcorn Added, Drinks Added, Popcorn Removed, Order Finalized
• Payment: Payment Initiated, Order Cancelled, Order Confirmed, OTP Sent, Payment Approved, Payment Declined
ES Aggregates / Microservices: Theatre, Show, Order
Commands are end-user interactions with the App, and based on the Commands (actions) Events are created. These Events include both Domain Events and Integration Events. Event Sourced Aggregates are derived using Domain Events. Each Microservice has its own separate database. Depending on the scalability requirement, each Microservice can be scaled separately; for example, Theatre can be on a 50-node cluster compared to the Food Microservice.
The purpose of this example is to demonstrate the concept of ES / CQRS through Event Storming principles.
101. Case Study: Movie Streaming – Event Sourcing / CQRS
4/1/2019 101
Customer Journey through streaming a Movie / TV Show
Processes: Discovery | Streaming | License | Subscription | Payment
Commands:
• Discovery: Search Movies, Add Movies, Update Movies
• Streaming: Request Streaming, Start Movie Streaming, Pause Movie Streaming
• License: Validate Streaming License, Validate Download License
• Subscription: Subscribe Monthly, Subscribe Annually
Events:
• Discovery: Movie Added, Movie Updated, Movie Discontinued
• Streaming: Streaming Requested, Streaming Started, Streaming Paused, Streaming Done
• License: Streaming Request Accepted, Streaming Request Denied
• Subscription: Monthly Subscription Added, Yearly Subscription Added
• Payment: Payment Approved, Payment Declined
ES Aggregates / Microservices: Stream List, Favorite List
Commands are end-user interactions with the App, and based on the Commands (actions) Events are created. These Events include both Domain Events and Integration Events. Event Sourced Aggregates are derived using Domain Events. Each Microservice has its own separate database. Depending on the scalability requirement, each Microservice can be scaled separately; for example, Discovery can be on a 50-node cluster compared to the Subscription Microservice.
The purpose of this example is to demonstrate the concept of ES / CQRS through Event Storming principles.
102. Event Sourcing & CQRS Summary
4/1/2019 102
1. Process
Ex. Various aspects of Order Processing in an E-Commerce Site, Movie Ticket Booking,
Patient visit in Hospital.
2. Commands
End-user interactions with your App to execute the Process. E.g. Add Item to Cart is a Command.
3. Events
Item Added Event (in the Shopping Cart).
4. Event Sourced Aggregate
Current state of the Aggregate is always derived from the Event Store. Eg. Shopping Cart
5. Read & Write: Separate Databases
103. 4/1/2019 103
Reactive Programming
• BUILDING BLOCKS: OBSERVABLE, OBSERVER, SCHEDULER, OPERATOR
• COMPARISON: ITERABLE (JAVA 6), STREAMS (JAVA 8), RX JAVA
• CASE STUDY: MERGE STREAMS, FILTER, SORT, TAKE
104. 4/1/2019 104
Functional Reactive Programming: 4 Building Blocks of RxJava
1. Observable: the source of the data stream (the sender).
2. Observer: listens for emitted values (the receiver).
3. Schedulers: used to manage and control concurrency. observeOn: the thread the Observable is executed on; subscribeOn: the thread subscribe is executed on.
4. Operators: let you transform, combine, manipulate, and work with the sequence of items emitted by Observables. E.g. content filtering, time filtering, transformation.
Source: http://reactivex.io/
105. 4/1/2019 105
Comparison: Iterable / Streams / Observable (Building Block 1)
• Java 6 – Iterable (blocking call): first-class visitor (consumer); serial operations.
• Java 8 – Streams (blocking call): parallel streams (10x speed), though onNext, onComplete and onError remain serial operations.
• RxJava (freedom): completely asynchronous operations.
Source Code: https://github.com/meta-magic/rxjava
106. 4/1/2019 106
Rx 2 Java Operators: Filter / Sort / FlatMap (Building Block 4)
Objective:
toSortedList() returns an Observable with a single List containing Fruits.
Using FlatMap to Transform Observable <List> to Observable <Fruit>
Rx Example 2
SourceCodeGitHub:https://github.com/meta-magic/Rx-Java-2
• Merge
• Filter
• Sort
• Take
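For comparison with the Rx chain (the actual RxJava code lives in the linked repo), here is the same merge / filter / sort / take pipeline written with plain Java 8 streams, which stay blocking; the fruit data is invented for illustration:

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

// The merge / filter / sort / take pipeline, written with plain Java 8
// streams for comparison (synchronous, unlike an RxJava Observable chain).
public class FruitPipeline {
    static List<String> mergeFilterSortTake(List<String> a, List<String> b) {
        return Stream.concat(a.stream(), b.stream())   // merge two streams
                .filter(f -> f.length() <= 6)          // content filtering
                .sorted()                              // sort
                .limit(3)                              // take(3)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> result = mergeFilterSortTake(
                List.of("Apple", "Pineapple", "Mango"),
                List.of("Banana", "Kiwi", "Watermelon"));
        System.out.println(result); // prints [Apple, Banana, Kiwi]
    }
}
```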
107. Functional Reactive Programming Summary
4/1/2019 107
1. Observable
Source of the Data Stream
2. Observer
Listens to emitted values
3. Scheduler
Used to manage and control concurrency.
4. Operators
Operators that let you Transform, Combine, Manipulate, and work with the
sequence of items emitted by Observables
109. 4/1/2019 109
UI Design Patterns
MVC / MVP / MVVM
[Diagram (MVC): the View passes calls to the Controller, the Controller manipulates the Model, and the Model fires events back to the View.]
• The Controller is responsible to process incoming
requests. It receives input from users via the View,
then process the user's data with the help of Model
and passing the results back to the View.
• Typically, it acts as the coordinator between the
View and the Model.
• The View Model is responsible for exposing methods,
commands, and other properties that helps to maintain
the state of the view, manipulate the model as the
result of actions on the view, and trigger events in the
view itself.
• There is a many-to-one relationship between View and ViewModel: many Views can be mapped to one ViewModel.
• Supports two-way data binding between View and
ViewModel.
[Diagram (MVVM): the View passes calls to the ViewModel, which manipulates the Model; the Model fires events, and the ViewModel updates the View.]
• The Presenter is responsible for handling all UI events on
behalf of the view. This receive input from users via the
View, then process the user's data with the help of Model
and passing the results back to the View.
• Unlike the View and Controller, the View and Presenter are completely decoupled from each other and communicate through an interface. Also, the Presenter does not manage the incoming request traffic as a Controller does.
• Supports two-way data binding.
[Diagram (MVP): the View passes calls to the Presenter (a 1:1 relationship), which manipulates the Model; the Model fires events, and the Presenter updates the View.]
110. 4/1/2019 110
UI Design Patterns
Flux / Redux
[Diagram (Flux): Action → Dispatcher → Store → View; every Action is sent to all Stores via callbacks the Stores register with the Dispatcher (one Dispatcher, many Stores).]
Controller-Views
• Listens to Store changes
• Emit Actions to Dispatcher
Dispatcher
• Single Dispatcher per Application
• Manages the Data Flow View to Model
• Receives Actions and dispatch them to Stores
Stores
• Contains state for a Domain (Vs. Specific Component)
• In Charge of modifying the Data
• Inform the views when the Data is changed by emitting the
Changed Event.
Flux Core Concepts
1. One way Data Flow
2. No Event Chaining
3. Entire App State is resolved in store before Views Update
4. Data Manipulation ONLY happen in one place (Store).
Actions
• Simple JS Objects
• Contains Name of the Action and Data (Payload)
• Action represent something that has happened.
• Has No Business Logic
111. 4/1/2019 111
UI Design Patterns
Redux
Actions
• Simple JS Objects
• Contains Name of the
Action and Data
(Payload)
• Has NO Business Logic
• Action represent
something that has
happened.
Store
• Multiple View layers can Subscribe
• View layer to Dispatch actions
• Single Store for the Entire Application
• Data manipulation logic moves out of
store to Reducers
Reducer
• Pure JS Functions
• No External calls
• Can combine multiple reducers
• A function that specifies how the state changes in response to an Action.
• Reducer does NOT modify the state. It returns the NEW State.
Redux Core Concepts
1. One way Data Flow
2. No Dispatcher compared to Flux
3. Immutable Store
Available for React & Angular
Middleware
• Handles external calls
• Multiple middlewares can be chained
[Diagram (Redux): the View dispatches an Action, which flows through the chained Middleware to the Reducers (multiple reducers can be combined); the Reducers produce the new State held in the single Store, and the Store updates the View.]
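The reducer contract, a pure function of (state, action) that returns new state instead of mutating, can be sketched even in Java (Redux itself is JavaScript; the string-based action encoding here is invented for illustration):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Reducer sketch: a pure function of (state, action) that never mutates
// the old state -- it returns a brand new one (Redux's core contract).
public class CounterReducer {
    static List<String> reduce(List<String> state, String action) {
        if (action.startsWith("ADD:")) {
            List<String> next = new ArrayList<>(state);     // copy, don't mutate
            next.add(action.substring(4));
            return Collections.unmodifiableList(next);
        }
        return state;                                       // unknown action: state unchanged
    }

    public static void main(String[] args) {
        List<String> s0 = Collections.emptyList();
        List<String> s1 = reduce(s0, "ADD:item");
        System.out.println(s0 + " -> " + s1); // prints [] -> [item]
    }
}
```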
112. UI Design Pattern Summary
4/1/2019 112
1. MVC
2. MVP
3. MVVM
4. Flux
5. Redux
Redux is a much better pattern if you are building complex enterprise
applications.
114. Distributed Transactions : 2 Phase Commit
2 PC or not 2 PC, Wherefore Art Thou XA?
01April2019
114
How does 2PC impact scalability?
• Transactions are committed in two phases.
• This involves communicating with every database (XA
Resources) involved to determine if the transaction will commit
in the first phase.
• During the second phase each database is asked to complete
the commit.
• While all of this coordination is going on, locks in all of the data
sources are being held.
• The longer duration locks create the risk of higher contention.
• Additionally, the two phases require more database
processing time than a single phase commit.
• The result is lower overall TPS in the system.
[Diagram: the Transaction Manager coordinates the XA Resources. Prepare Phase: Request to Prepare → Prepared. Commit Phase: Commit → Done.]
Source : Pat Helland (Amazon) : Life Beyond Distributed Transactions Distributed Computing : http://dancres.github.io/Pages/
Solution : Resilient System
• Event Based
• Design for failure
• Asynchronous Recovery
• Make all operations idempotent.
• Each DB operation is a 1 PC
115. Scalability Best Practices: Lessons from eBay
Best Practices Highlights
#1 Partition By Function
• Decouple the Unrelated Functionalities.
• Selling functionality is served by one set of applications, bidding by another, search by yet another.
• 16,000 App Servers in 220 different pools
• 1000 logical databases, 400 physical hosts
#2 Split Horizontally
• Break the workload into manageable units.
• eBay’s interactions are stateless by design
• All App Servers are treated equal and none retains any transactional state
• Data Partitioning based on specific requirements
#3
Avoid Distributed
Transactions
• 2 Phase Commit is a pessimistic approach comes with a big COST
• CAP Theorem (Consistency, Availability, Partition Tolerance). Apply any two at any point in time.
• @ eBay No Distributed Transactions of any kind and NO 2 Phase Commit.
#4
Decouple Functions
Asynchronously
• If Component A calls component B synchronously, then they are tightly coupled. For such systems to
scale A you need to scale B also.
• If Asynchronous A can move forward irrespective of the state of B
• SEDA (Staged Event Driven Architecture)
#5
Move Processing to
Asynchronous Flow
• Move as much processing towards Asynchronous side
• Anything that can wait should wait
#6 Virtualize at All Levels • Virtualize everything. eBay created their own O/R layer for abstraction
#7 Cache Appropriately • Cache Slow changing, read-mostly data, meta data, configuration and static data.
115
Source: http://www.infoq.com/articles/ebay-scalability-best-practices
116. 4/1/2019 116
Distributed Tx: SAGA Design Pattern instead of 2PC
Long Lived Transactions (LLTs) hold on to DB resources for relatively long periods of time, significantly delaying
the termination of shorter and more common transactions.
Source: SAGAS (1987) Hector Garcia Molina / Kenneth Salem,
Dept. of Computer Science, Princeton University, NJ, USA
T1 T2 Tn
Local Transactions
C1 C2 Cn-1
Compensating Transaction
Divide long–lived, distributed transactions into quick local ones with compensating actions for
recovery.
Travel: Flight Ticket & Hotel Booking Example. BASE (Basic Availability, Soft State, Eventual Consistency)
• T1 Room Reserved → C1 Cancel Room Reservation
• T2 Room Payment → C2 Cancel Room Payment
• T3 Seat Reserved → C3 Cancel Ticket Reservation
• T4 Ticket Payment
117. SAGA Design Pattern Features
4/1/2019 117
1. Backward Recovery (Rollback): T1 T2 T3 T4 → C3 C2 C1. Examples: Order Processing, Banking Transactions, Ticket Booking.
2. Forward Recovery with Save Points: T1 (sp) T2 (sp) T3 (sp). Example: updating individual scores in a Team Game.
• To recover from hardware failures, the SAGA needs to be persistent.
• Save Points are available for both Forward and Backward Recovery.
Source: SAGAS (1987) Hector Garcia Molina / Kenneth Salem, Dept. of Computer Science, Princeton University, NJ, USA
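The T1..Tn / C1..Cn-1 scheme above can be sketched directly: run the local transactions in order and, on a failure, run the compensations of the completed steps in reverse. A hedged Java sketch (the step and compensation lists are an illustrative encoding, not a saga framework):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// SAGA backward recovery sketch: every completed local transaction pushes
// its compensation; a failure unwinds the stack (T1 T2 then a crash -> C2 C1).
public class Saga {
    interface Step { void run() throws Exception; }   // a local transaction

    /** Runs steps in order; on failure, runs compensations of completed steps in reverse. */
    static boolean run(List<Step> steps, List<Runnable> compensations) {
        Deque<Runnable> undo = new ArrayDeque<>();
        for (int i = 0; i < steps.size(); i++) {
            try {
                steps.get(i).run();
                undo.push(compensations.get(i));       // remember how to undo this step
            } catch (Exception e) {
                undo.forEach(Runnable::run);           // backward recovery: Cn-1 ... C1
                return false;
            }
        }
        return true;                                   // saga completed
    }

    public static void main(String[] args) {
        List<String> log = new ArrayList<>();
        boolean ok = run(
            List.of(() -> log.add("room reserved"),
                    () -> log.add("room paid"),
                    () -> { throw new Exception("seat unavailable"); }),
            List.of(() -> log.add("cancel room reservation"),
                    () -> log.add("cancel room payment"),
                    () -> log.add("cancel seat")));
        System.out.println(ok + " " + log);
        // prints: false [room reserved, room paid, cancel room payment, cancel room reservation]
    }
}
```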
118. Handling Invariants – Monolithic to Micro Services
4/1/2019 118
In a typical Monolithic App, the Customer Credit Limit info and the order processing are part of the same App. The following is typical pseudo code.
• T1 Order Created (Order Microservice)
• T2 Credit Reserved (Customer Microservice)
• T3 Payment Processed (Payment Microservice)
Compensations:
• C1 Order Cancelled
• C2 Credit Cancelled due to payment failure
In the Microservices world with Event Sourcing, it is a distributed environment. The order is cancelled if the Credit is NOT available. If Payment Processing fails, the Credit Reserved is cancelled.
Begin Transaction
  If Order Value <= Available Credit
    Process Order
    Process Payments
End Transaction
Monolithic 2 Phase Commit
https://en.wikipedia.org/wiki/Invariant_(computer_science)
119. 4/1/2019 119
Use Case : Restaurant – Forward Recovery
Domain
The example focuses on the
concept of a Restaurant,
which tracks the visit of
an individual or group
to the Restaurant. When
people arrive at the
Restaurant and take a
table, a table is opened.
They may then order
drinks and food. Drinks
are served immediately
by the table staff;
however, food must be
cooked by a chef. Once
the chef has prepared
the food, it can be
served.
Dining / Billing / Payment
Source: http://cqrs.nu/tutorial/cs/01-design
Soda Cancelled
Table Opened
Juice Ordered
Soda Ordered
Appetizer Ordered
Soup Ordered
Food Ordered
Juice Served
Food Prepared
Food Served
Appetizer Served
Table Closed
Aggregate Root: Dining Order
Billed OrderT1
Payment CCT2
Payment CashT3
T1 (sp) T2 (sp) T3 (sp)
Event Stream
Aggregate Root : Food Bill
The transaction doesn't roll back if one payment
method fails; it moves forward to the NEXT one.
(sp = save point; C1 compensates a Network Error.)
120. Distributed Transaction Summary
4/1/2019 120
1. 2-Phase Commit
Doesn't scale well in a cloud environment.
2. SAGA Design Pattern
Raises compensating events when a local transaction fails.
3. SAGA Supports Rollbacks & Roll Forwards
A critical pattern for addressing distributed transactions.
122. 4/1/2019 122
RESTful Guidelines
1. Endpoints as nouns, NOT verbs
Ex. /catalogues
/orders
/catalogues/products
and NOT
/getProducts/
/updateProducts/
2. Use plurals
Ex. /catalogues/{catalogueId}
and NOT
/catalogue/{catalogueId}
3. Documenting
4. Paging
5. Use SSL
6. HTTP Methods
GET / POST / PUT / DELETE / OPTIONS / HEAD
7. HTTP Status Codes (Effective usage)
8. Versioning
Media Type Version
GET /account/5555 HTTP/1.1
Accept: application/vnd.catalogues.v1+json
URL path version
https://domain/v1/catalogues/products
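The two versioning styles above can be parsed with a few lines of code. This is a hedged sketch (the function names and the `vnd.<api>.vN+json` media-type shape follow the slide's examples, not any particular framework):

```python
import re

def media_type_version(accept_header):
    """Return e.g. 'v1' from 'application/vnd.catalogues.v1+json'."""
    m = re.search(r"vnd\.[\w.]*?\.(v\d+)\+", accept_header)
    return m.group(1) if m else None

def url_path_version(path):
    """Return e.g. 'v1' from '/v1/catalogues/products'."""
    m = re.match(r"/(v\d+)/", path)
    return m.group(1) if m else None
```

A gateway would typically fall back to a default version when neither function finds one.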
123. 4/1/2019 123
RESTful Guidelines – Query Examples
• Search all products
• Search products by catalogue ID
• Search products by catalogue ID & product ID
125. 4/1/2019 125
API Versioning

1. Media Type Versioning (Accept: application/vnd.api.article+xml; version=1.0)
   Adoption: Medium. Used by: GitHub.
   Pros: versions directly at the resource level; preserves the URI; close to the RESTful spec.
   Cons: harder to test; distorts the purpose of HTTP headers; tools required for testing.
2. Custom Header Versioning (X-API-Version: 2)
   Adoption: Medium. Used by: Microsoft.
   Pros: preserves the URI.
   Cons: harder to test; tools required for testing.
3. URI Versioning (api.example.com/v1/resource)
   Adoption: High. Used by: Google, Twitter, Amazon.
   Pros: most common method; versions can be explored using a browser; easy to use.
   Cons: disrupts RESTful compliance; a URI should represent a resource, not a version.
4. Domain Versioning (apiv1.example.com/resource)
   Adoption: Low. Used by: Facebook.
   Pros / Cons: same as URI versioning.
5. Request Parameter Versioning (GET /something/?version=0.1)
   Adoption: High. Used by: Pivotal, Netflix.
   Pros: similar to URI versioning.
   Cons: it can get messy.
6. Date Versioning (the first request saves the date)
   Adoption: Low. Used by: Clearbit.
   Pros: new APIs can be shipped without changing the endpoints.
   Cons: complex to implement; traceability is difficult.
126. Functional Reactive Programming Summary
4/1/2019 126
1. Observable
Emits a sequence of values that observers can subscribe to.
2. Observer
Listens to the emitted values.
3. Scheduler
Schedulers are used to manage and control concurrency.
4. Operators
Operators let you transform, combine, manipulate, and work with the
sequences of items emitted by Observables.
129. 4/1/2019 129
Microservices Testing Strategies
[Diagram: a shared Ubiquitous Language connects the Domain Expert, Analyst, Developers and QA across Design, Docs, Test Cases and Code.]
Mike Cohn's Testing Pyramid (top to bottom):
• E2E Testing (~10%)
• Integration Testing
• Contract Testing
• Component Testing
• Unit Testing (~70%)
The middle layers make up the remaining ~20%. Going down the
pyramid, the number of tests and their speed increase; going up,
the cost and time per test increase.
Test Pyramid: https://martinfowler.com/bliki/TestPyramid.html
130. 4/1/2019 130
Microservices Testing Strategy
Unit Testing
A unit test exercises the
smallest piece of testable
software in the application
to determine whether it
behaves as expected.
Source: https://martinfowler.com/articles/microservice-testing/#agenda
Component Testing
A component test limits the
scope of the exercised
software to a portion of the
system under test,
manipulating the system
through internal code
interfaces and using test
doubles to isolate the code
under test from other
components.
Integration Testing
An integration test verifies
the communication paths
and interactions between
components to detect
interface defects
Integration Contract Testing
An Integration Contract test is a
test at the boundary of an
external service verifying that it
meets the contract expected by a
consuming service.
End 2 End Testing
An end-to-end test verifies that a
system meets external
requirements and achieves its
goals, testing the entire system,
from end to end
"Just Say No to More End-to-End Tests" - Mike
Wacker, April 22, 2015. Google Testing Blog
131. 4/1/2019 131
Microservices Testing Scenarios / Tools
Contract Testing Scope
Integration Testing
Verifies the communication
paths and interactions
between components to
detect interface defects
Contract Testing
It is a test at the boundary of an
external service verifying that it
meets the contract expected by a
consuming service.
Payment Mock
Integration Contract Testing Scope
Test Double: Mountebank
Cart
Component Testing
Unit
Testing
Integration Testing Scope
Order
REST / HTTP or
Events / Kafka
Item ID,
Quantity,
Address..
Mock Order
Component Testing
A component test limits
the scope of the exercised
software to a portion of
the system under test.
Order
Payment
Unit
Testing
Firewall
Integration Testing Scope
REST / HTTP
Payment
Sandbox
Component
Testing
132. Testing Strategy Summary
4/1/2019 132
1. Unit Testing
A unit test exercises the smallest piece of testable software.
2. Component Testing
A component test limits the scope of the exercised software to a portion
of the system under test.
3. Contract Testing
It is a test at the boundary of an external service verifying that it meets the
contract expected by a consuming service
4. Integration Testing
It verifies the communication paths and interactions between components
to detect interface defects.
134. 4/1/2019 134
Build Small Container Images
• A simple Java web app with Ubuntu & Tomcat can have a size of
700 MB.
• Use an Alpine image as your base Linux OS.
• Alpine images are 10x smaller than base Ubuntu images.
• A smaller image size reduces the container's vulnerabilities.
• Ensure that only runtime environments are in your
container. For example, your Alpine + Java + Tomcat image
should contain only the JRE and NOT the JDK.
• Log the app output to the container's stdout and stderr.
1
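As a hedged sketch of the tips above (the image tags, paths and app name are illustrative, not prescriptions): a multi-stage build compiles with the JDK but ships only a JRE-on-Alpine runtime image, and the app logs to stdout/stderr.

```dockerfile
# Build stage: the JDK is needed only to compile.
FROM maven:3-jdk-8 AS build
WORKDIR /src
COPY . .
RUN mvn -q package

# Runtime stage: Alpine + JRE only, no JDK, no build tools.
FROM openjdk:8-jre-alpine
COPY --from=build /src/target/myapp.jar /app/myapp.jar
# App output goes to the container's stdout/stderr, not to log files.
ENTRYPOINT ["java", "-jar", "/app/myapp.jar"]
```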
135. 4/1/2019 135
Docker: To Root or Not to Root!
• Create multiple layers of images.
• Create a user account.
• Add runtime software based on the user
account.
• Run the app under the user account.
• This gives added security to the container.
• Add a security module (SELinux or AppArmor)
to increase security.
Alpine
JRE 8
Tomcat 8
My App 1
2
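The layering above can be sketched in a Dockerfile (illustrative names; `addgroup -S` / `adduser -S` are the Alpine/BusyBox forms for creating a system user):

```dockerfile
# Run the app as a non-root user for added security.
FROM openjdk:8-jre-alpine
RUN addgroup -S app && adduser -S app -G app
COPY --chown=app:app target/myapp.jar /home/app/myapp.jar
USER app
ENTRYPOINT ["java", "-jar", "/home/app/myapp.jar"]
```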
136. 4/1/2019 136
Docker: Container Security
1. Secure your HOST OS! Containers run on the host kernel.
2. No runtime software downloads inside the container.
Declare the software requirements at build time itself.
3. Download Docker base images only from authentic sources.
4. Limit resource utilization using container orchestrators
like Kubernetes.
5. Don't run anything in super-privileged mode.
3
137. 4/1/2019 137
Kubernetes: Naked Pods
• Never use a naked Pod, i.e. a Pod without any
ReplicaSet or Deployment. Naked Pods will never
get rescheduled if the Pod goes down.
• Never access a Pod directly from another Pod.
Always use a Service to access a Pod.
• Use labels to select the Pods { app: myapp, tier:
frontend, phase: test, deployment: v3 }.
• Never use the :latest image tag in a
production scenario.
4
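The points above can be combined in one manifest. A hedged sketch (names, labels, registry and tag are examples): a Deployment-managed Pod selected by labels, a pinned image tag, and a Service in front of it.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  labels: { app: myapp, tier: frontend }
spec:
  replicas: 3
  selector:
    matchLabels: { app: myapp, tier: frontend }
  template:
    metadata:
      labels: { app: myapp, tier: frontend, deployment: v3 }
    spec:
      containers:
      - name: myapp
        image: registry.example.com/myapp:1.4.2   # pinned tag, never :latest
        ports:
        - containerPort: 8080
---
# Access Pods through a Service, never directly.
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector: { app: myapp, tier: frontend }
  ports:
  - port: 80
    targetPort: 8080
```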
138. 4/1/2019 138
Kubernetes: Namespace
• A Kubernetes cluster starts with the default,
kube-system and kube-public Namespaces.
• Group your Services / Pods / Traffic Rules by
specific Namespace.
• This helps you apply specific network policies for
that Namespace, with an increase in security and
performance.
• Handle specific resource allocations for a
Namespace.
• If you have more than a dozen microservices, then
it's time to bring in Namespaces.
Service-Name.Namespace.svc.cluster.local
$ kubectl config set-context $(kubectl config current-context) --namespace=your-ns
The above command will let you switch the namespace to your namespace (your-ns).
5
139. 4/1/2019 139
Kubernetes: Pod Health Check
• Pod health checks are critical for increasing the overall
resiliency of the system.
• Readiness probes
• Liveness probes
• Ensure that all your Pods have readiness and
liveness probes.
• Choose the protocol wisely (HTTP, Command or
TCP).
6
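A hedged container-spec fragment showing both probes over HTTP (paths, port and timings are illustrative):

```yaml
containers:
- name: myapp
  image: registry.example.com/myapp:1.4.2
  readinessProbe:            # gate traffic until the app is ready
    httpGet: { path: /ready, port: 8080 }
    initialDelaySeconds: 5
    periodSeconds: 10
  livenessProbe:             # restart the container if it hangs
    httpGet: { path: /healthz, port: 8080 }
    initialDelaySeconds: 15
    periodSeconds: 20
```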
140. 4/1/2019 140
Kubernetes: Resource Utilization
• For the best Quality of Service, define the requests and limits
for your Pods.
• You can set specific resource constraints for a dev
Namespace to ensure that developers don't create
Pods with very large or very small resource
requests.
• A LimitRange can be set to ensure that containers
are not created with too-low or too-high resource
requests.
7
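A hedged LimitRange sketch for a dev Namespace (names and values are examples): containers without explicit requests/limits receive the defaults, and out-of-range containers are rejected.

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: dev-limits
  namespace: dev
spec:
  limits:
  - type: Container
    default:        { cpu: 500m, memory: 256Mi }   # default limits
    defaultRequest: { cpu: 100m, memory: 128Mi }   # default requests
    min:            { cpu: 50m,  memory: 64Mi }
    max:            { cpu: "2",  memory: 1Gi }
```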
141. 4/1/2019 141
Kubernetes: Pod Termination Lifecycle
• Make sure that the application handles the SIGTERM
signal.
• You can use a preStop hook.
• Set terminationGracePeriodSeconds: 60.
• Ensure that you clean up connections and any other
artefacts, ready for a clean shutdown of the app
(microservice).
• If the container is still running after the grace period,
Kubernetes sends a SIGKILL to shut down the Pod.
8
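A hedged Pod-spec fragment for the shutdown sequence above (image and sleep duration are illustrative):

```yaml
spec:
  terminationGracePeriodSeconds: 60   # SIGKILL only after this grace period
  containers:
  - name: myapp
    image: registry.example.com/myapp:1.4.2
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "sleep 5"]   # let in-flight requests drain
```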
142. 4/1/2019 142
Kubernetes: External Services
• There are systems that live outside the Kubernetes
cluster, such as
• databases, or
• external services in the cloud.
• You can create an Endpoints object with a specific IP address and
port, with the same name as the Service.
• You can create a Service with an ExternalName (URL),
which resolves via a CNAME record at the DNS level.
9
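Both options can be sketched as follows (names, IP and ports are illustrative):

```yaml
# Option 1: Service + manual Endpoints with the same name,
# pointing at a database outside the cluster.
apiVersion: v1
kind: Service
metadata: { name: external-db }
spec:
  ports:
  - port: 5432
---
apiVersion: v1
kind: Endpoints
metadata: { name: external-db }   # must match the Service name
subsets:
- addresses: [{ ip: 203.0.113.10 }]
  ports: [{ port: 5432 }]
---
# Option 2: ExternalName Service; DNS returns a CNAME to the external host.
apiVersion: v1
kind: Service
metadata: { name: payments-api }
spec:
  type: ExternalName
  externalName: payments.example.com
```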
143. 4/1/2019 143
Kubernetes: Upgrade Cluster
• Make sure the master is behind a load balancer.
• Upgrade the master.
• Scale up with an extra node.
• Drain the node, and
• Upgrade the node.
• The cluster keeps running even if the master is down.
Only kubectl and other master-specific functions will be
unavailable until the master is back up.
10
146. 4/1/2019 146
What is Kanban
Kanban is a method for managing the creation of
products with an emphasis on
• continual delivery, while
• not overburdening the development team.
Like Scrum, Kanban is a process designed to help
teams work together more effectively.
Kanban is a visual management method that originated with Taiichi Ohno
at Toyota.
147. 4/1/2019 147
Three Principles of Kanban
Source: https://resources.collab.net/agile-101/what-is-kanban
• Visualize what you do today (workflow): seeing all
the items in context of each other can be very
informative
• Limit the amount of work in progress (WIP): this
helps balance the flow-based approach so teams
don’t start and commit to too much work at once
• Enhance flow: when something is finished, the next
highest thing from the backlog is pulled into play
150. 4/1/2019 150
Kanban vs. Scrum
Roles & Responsibilities
  Kanban: no prescribed roles.
  Scrum: pre-defined roles of Scrum Master, Product Owner and team member.
Delivery Timelines
  Kanban: continuous delivery.
  Scrum: time-boxed sprints.
Delegation & Prioritization
  Kanban: work is pulled through the system (single-piece flow).
  Scrum: work is pulled through the system in batches (the sprint backlog).
Modifications
  Kanban: changes can be made at any time.
  Scrum: no changes allowed mid-sprint.
Measurement of Productivity
  Kanban: cycle time.
  Scrum: velocity.
When to Use?
  Kanban: more appropriate in operational environments with a high degree
  of variability in priority.
  Scrum: more appropriate in situations where work can be prioritized in
  batches that can be left alone.
Source: https://leankit.com/learn/kanban/kanban-vs-scrum/
151. 4/1/2019 151
Benefits of Kanban
• Shorter cycle times can deliver features faster.
• Responsiveness to change:
• When priorities change very frequently, Kanban is ideal.
• Balancing demand against throughput guarantees that the most
customer-centric features are always being worked on.
• Requires fewer organization / room set-up changes to get
started.
• Reduces waste and removes activities that don't add value to
the team/department/organization.
• Rapid feedback loops improve the chances of more motivated,
empowered and higher-performing team members.
153. ITIL – Service Life Cycle
153Source: https://www.flycastpartners.com/itil-service-lifecycle-guide/
• ITIL is a framework providing best-practice
guidelines on all aspects of end-to-end
service management.
• It covers the complete spectrum of People,
Processes, Products and Partners (v3).
Service: a means of delivering value to
customers by achieving the customers' desired
results while working within given constraints.
Incident: any unplanned disruption in an IT
service.
SLA (Service Level Agreement): a commitment
between a service provider and a client.
155. DevOps – Lean thinking
4/1/2019 155Source: Sanjeev Sharma, IBM, DevOps for Dummies
Systems of Record: critical
enterprise transactions; these
apps don't require
frequent changes.
Systems of Engagement: with the
introduction of rich web apps
and mobile apps, Systems of
Record were augmented by
Systems of Engagement.
Customers directly engage with
these apps, and these apps
require rapid releases.
DevOps Return on Investment
1. Enhanced Customer Experience
2. Increased Capacity to Innovate
3. Faster time to value
156. Shift Left – Operational Concerns
156
• Operations concerns move earlier in the software delivery life cycle, towards
development.
• The goal is to enable developers and the QA team to develop and test
software in environments that behave like the production system.
Development Environment
Build
Build
Build
Test Environment Stage Environment Production Environment
Continuous Integration
Unit
Testing
Component
Testing
Contract
Testing
Integration
Testing
Continuous Testing
Acceptance
Testing
Continuous Delivery
Continuous Monitoring
Fully
Automated
Shift Left moves operations earlier in the development cycle.
157. Infrastructure as Code
157
• Infrastructure as Code is a
critical capability for DevOps.
• It helps organizations
establish a fully automated
pipeline for Continuous Delivery.
• Infrastructure as Code is a software-defined environment that manages
the following:
• Network topologies, roles, relationships, network policies
• Deployment models, workloads, workload policies &
behaviors.
158. Stages of DevOps Delivery Pipeline
4/1/2019 158Source: Sanjeev Sharma, IBM, DevOps for Dummies
Application Release Management
Development Build Package
Repository
Test
Environment
Stage
Environment
Production
Environment
Application Deployment Automation
Cloud Provisioning
159. 5 Principles of DevOps (Philosophies)
159
Reduce
Organization
Silos
Accept
Failure as
Normal
Implement
Gradual
Change
Leverage Tooling & Automation
Measure
Everything
161. class SRE implements DevOps
161
Reduce
Organization
Silos
Accept
Failure as
Normal
Implement
Gradual
Change
Leverage Tooling & Automation
Measure
Everything
Share Ownership SLOs & Blameless PM Canary Deployment
Automate this year's job Measure toil & reliability
162. Service Levels – SLI / SLO
162
SLI – Service Level Indicator
For web sites:
an SLI is the percentage of requests
responded to in good health.
An SLI can also be a performance indicator:
the percentage of search results returned
under 50 milliseconds.
SLO – Service Level Objective
SLO is a goal built around SLI. It is
usually a percentage and is tied to a
period and it is usually measured in
a number of nines. Time periods
can be last 24 hours, last week, last
30 days, current quarter etc.
Uptime over the last 30 days:
• 90% (1 nine of uptime): you were down for 10% of the
period, i.e. three days out of the last thirty.
• 99% (2 nines of uptime): 1%, or 7.2 hours, of downtime
over the last thirty days.
• 99.9% (3 nines of uptime): 0.1%, or 43.2 minutes, of
downtime.
• 99.99% (4 nines of uptime): 0.01%, or 4.32 minutes, of
downtime.
• 99.999% (5 nines of uptime): 0.001%, or 26 seconds, of
downtime.
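The nines above follow directly from the period length; a quick sketch of the arithmetic (function name is illustrative):

```python
# Allowed downtime for an uptime SLO over a period (default: 30 days,
# as on the slide). 30 days = 43,200 minutes.
def downtime_minutes(slo_percent, period_days=30):
    total_minutes = period_days * 24 * 60
    return total_minutes * (1 - slo_percent / 100.0)

print(round(downtime_minutes(99.9), 1))   # 43.2 (minutes)
print(round(downtime_minutes(99.99), 2))  # 4.32
```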
163. SRE – Concept
4/1/2019 163
Bridge the Gap between Development & Operations
Developers want to ship features as fast as possible.
Operations want stability in production.
SRE empowers software developers to own the operation of their
applications in production.
Site Reliability Engineers spend 50% of their time on operations.
An SRE has a deep understanding of the application: the code, how
it runs, how it is configured and how it scales.
They monitor and manage support apart from their
development activities.
Source: https://stackify.com/site-reliability-engineering/
164. SRE – Responsibilities
4/1/2019 164
Proactively monitor and review application performance
Handle on-call and emergency support
Ensure software has good logging and diagnostics
Create and maintain operational runbooks
Help triage escalated support tickets
Work on feature requests, defects and other
development tasks
Contribute to overall product roadmap
Source: https://stackify.com/site-reliability-engineering/
165. 165
100s Microservices
1,000s Releases / Day
10,000s Virtual Machines
100K+ User actions / Second
81 M Customers Globally
1 B Time series Metrics
10 B Hours of video streaming
every quarter
Source: NetFlix: : https://www.youtube.com/watch?v=UTKIT6STSVM
10s OPs Engineers
0 NOC
0 Data Centers
So what do NetFlix think about DevOps?
No DevOps
Don’t do lot of Process / Procedures
Freedom for Developers & be Accountable
Trust people you Hire
No Controls / Silos / Walls / Fences
Ownership – You Build it, You Run it.
166. 4/1/2019 166
Design patterns are
solutions to general
problems that
software developers
face during software
development.
Design Patterns
168. 4/1/2019 168
References
1. Lewis, James, and Martin Fowler. “Microservices: A Definition of This New Architectural Term”, March 25, 2014.
2. Miller, Matt. “Innovate or Die: The Rise of Microservices”. The Wall Street Journal, October 5, 2015.
3. Newman, Sam. Building Microservices. O’Reilly Media, 2015.
4. Alagarasan, Vijay. “Seven Microservices Anti-patterns”, August 24, 2015.
5. Cockcroft, Adrian. “State of the Art in Microservices”, December 4, 2014.
6. Fowler, Martin. “Microservice Prerequisites”, August 28, 2014.
7. Fowler, Martin. “Microservice Tradeoffs”, July 1, 2015.
8. Humble, Jez. “Four Principles of Low-Risk Software Release”, February 16, 2012.
9. Zuul Edge Server, Ketan Gote, May 22, 2017
10. Ribbon, Hysterix using Spring Feign, Ketan Gote, May 22, 2017
11. Eureka Server with Spring Cloud, Ketan Gote, May 22, 2017
12. Apache Kafka, A Distributed Streaming Platform, Ketan Gote, May 20, 2017
13. Functional Reactive Programming, Araf Karsh Hamid, August 7, 2016
14. Enterprise Software Architectures, Araf Karsh Hamid, July 30, 2016
15. Docker and Linux Containers, Araf Karsh Hamid, April 28, 2015
169. 4/1/2019 169
References
Domain Driven Design
16. Oct 27, 2012 What I have learned about DDD Since the book. By Eric Evans
17. Mar 19, 2013 Domain Driven Design By Eric Evans
18. May 16, 2015 Microsoft Ignite: Domain Driven Design for the Database Driven Mind
19. Jun 02, 2015 Applied DDD in Java EE 7 and Open Source World
20. Aug 23, 2016 Domain Driven Design the Good Parts By Jimmy Bogard
21. Sep 22, 2016 GOTO 2015 – DDD & REST Domain Driven API’s for the Web. By Oliver Gierke
22. Jan 24, 2017 Spring Developer – Developing Micro Services with Aggregates. By Chris Richardson
23. May 17. 2017 DEVOXX – The Art of Discovering Bounded Contexts. By Nick Tune
Event Sourcing and CQRS
23. Nov 13, 2014 GOTO 2014 – Event Sourcing. By Greg Young
24. Mar 22, 2016 Spring Developer – Building Micro Services with Event Sourcing and CQRS
25. Apr 15, 2016 YOW! Nights – Event Sourcing. By Martin Fowler
26. May 08, 2017 When Micro Services Meet Event Sourcing. By Vinicius Gomes
170. 4/1/2019 170
References
27. MSDN – Microsoft https://msdn.microsoft.com/en-us/library/dn568103.aspx
28. Martin Fowler : CQRS – http://martinfowler.com/bliki/CQRS.html
29. Udi Dahan : CQRS – http://www.udidahan.com/2009/12/09/clarified-cqrs/
30. Greg Young : CQRS - https://www.youtube.com/watch?v=JHGkaShoyNs
31. Bertrand Meyer – CQS - http://en.wikipedia.org/wiki/Bertrand_Meyer
32. CQS : http://en.wikipedia.org/wiki/Command–query_separation
33. CAP Theorem : http://en.wikipedia.org/wiki/CAP_theorem
34. CAP Theorem : http://www.julianbrowne.com/article/viewer/brewers-cap-theorem
35. CAP 12 years how the rules have changed
36. EBay Scalability Best Practices : http://www.infoq.com/articles/ebay-scalability-best-practices
37. Pat Helland (Amazon) : Life beyond distributed transactions
38. Stanford University: Rx https://www.youtube.com/watch?v=y9xudo3C1Cw
39. Princeton University: SAGAS (1987) Hector Garcia Molina / Kenneth Salem
40. Rx Observable : https://dzone.com/articles/using-rx-java-observable
171. 4/1/2019 171
References – Microservices – Videos
41. Martin Fowler – Micro Services : https://www.youtube.com/watch?v=2yko4TbC8cI&feature=youtu.be&t=15m53s
42. GOTO 2016 – Microservices at NetFlix Scale: Principles, Tradeoffs & Lessons Learned. By R Meshenberg
43. Mastering Chaos – A NetFlix Guide to Microservices. By Josh Evans
44. GOTO 2015 – Challenges Implementing Micro Services By Fred George
45. GOTO 2016 – From Monolith to Microservices at Zalando. By Rodrigue Scaefer
46. GOTO 2015 – Microservices @ Spotify. By Kevin Goldsmith
47. Modelling Microservices @ Spotify : https://www.youtube.com/watch?v=7XDA044tl8k
48. GOTO 2015 – DDD & Microservices: At last, Some Boundaries By Eric Evans
49. GOTO 2016 – What I wish I had known before Scaling Uber to 1000 Services. By Matt Ranney
50. DDD Europe – Tackling Complexity in the Heart of Software By Eric Evans, April 11, 2016
51. AWS re:Invent 2016 – From Monolithic to Microservices: Evolving Architecture Patterns. By Emerson L, Gilt D. Chiles
52. AWS 2017 – An overview of designing Microservices based Applications on AWS. By Peter Dalbhanjan
53. GOTO Jun, 2017 – Effective Microservices in a Data Centric World. By Randy Shoup.
54. GOTO July, 2017 – The Seven (more) Deadly Sins of Microservices. By Daniel Bryant
55. Sept, 2017 – Airbnb, From Monolith to Microservices: How to scale your Architecture. By Melanie Cubula
56. GOTO Sept, 2017 – Rethinking Microservices with Stateful Streams. By Ben Stopford.
57. GOTO 2017 – Microservices without Servers. By Glynn Bird.
173. 4/1/2019 173
1. Simoorg : LinkedIn’s own failure-inducer framework. It was designed to be easy to extend, and
most of the important components are pluggable.
2. Pumba : A chaos testing and network emulation tool for Docker.
3. Chaos Lemur : Self-hostable application to randomly destroy virtual machines in a BOSH-managed
environment, as an aid to resilience testing of high-availability systems.
4. Chaos Lambda : Randomly terminate AWS ASG instances during business hours.
5. Blockade : Docker-based utility for testing network failures and partitions in distributed
applications.
6. Chaos-http-proxy : Introduces failures into HTTP requests via a proxy server.
7. Monkey-ops : Monkey-Ops is a simple service implemented in Go, which is deployed into an
OpenShift V3.X and generates some chaos within it. Monkey-Ops seeks some OpenShift
components like Pods or Deployment Configs and randomly terminates them.
8. Chaos Dingo : Chaos Dingo currently supports performing operations on Azure VMs and VMSS
deployed to an Azure Resource Manager-based resource group.
9. Tugbot : Testing in Production (TiP) framework for Docker.
Testing tools
Editor's Notes
DevOps Amazon: https://www.youtube.com/watch?v=mBU3AJ3j1rg
NetFlix: https://www.youtube.com/watch?v=UTKIT6STSVM
DevOps and SRE: https://www.youtube.com/watch?v=uTEL8Ff1Zvk
SLI, SLO, SLA : https://www.youtube.com/watch?v=tEylFyxbDLE
DevOps and SRE : Risks and Budgets : https://www.youtube.com/watch?v=y2ILKr8kCJU
Memory
You can limit the amount of RAM and swap space that can be used by a group of processes. It accounts for the memory used by the processes for their private use (their Resident Set Size, or RSS), but also for the memory used for caching purposes.
This is actually quite powerful, because traditional tools (ps, analysis of /proc, etc.) have no way to find out the cache memory usage incurred by specific processes. This can make a big difference, for instance, with databases.
A database will typically use very little memory for its processes (unless you do complex queries, but let’s pretend you don’t!), but can be a huge consumer of cache memory: after all, to perform optimally, your whole database (or at least, your “active set” of data that you refer to the most often) should fit into memory.
Limiting the memory available to the processes inside a cgroup is as easy as echo 1000000000 > /cgroup/polkadot/memory.limit_in_bytes (it will be rounded to a page size).
To check the current usage for a cgroup, inspect the pseudo-file memory.usage_in_bytes in the cgroup directory. You can gather very detailed (and very useful) information from memory.stat; the data contained in this file could justify a whole blog post by itself!
CPU
You might already be familiar with scheduler priorities, and with the nice and renice commands. Once again, control groups will let you define the amount of CPU, that should be shared by a group of processes, instead of a single one. You can give each cgroup a relative number of CPU shares, and the kernel will make sure that each group of process gets access to the CPU in proportion of the number of shares you gave it.
Setting the number of shares is as simple as echo 250 > /cgroup/polkadot/cpu.shares. Remember that those shares are just relative numbers: if you multiply everyone’s share by 10, the end result will be exactly the same. This control group also gives statistics in cpu.stat.
CPU sets
This is different from the cpu controller. In systems with multiple CPUs (i.e., the vast majority of servers, desktop & laptop computers, and even phones today!), the cpuset control group lets you define which processes can use which CPU.
This can be useful to reserve a full CPU to a given process or group of processes. Those processes will receive a fixed amount of CPU cycles, and they might also run faster because there will be less thrashing at the level of the CPU cache.
On systems with Non Uniform Memory Access (NUMA), the memory is split in multiple banks, and each bank is tied to a specific CPU (or set of CPUs); so binding a process (or group of processes) to a specific CPU (or a specific group of CPUs) can also reduce the overhead happening when a process is scheduled to run on a CPU, but accessing RAM tied to another CPU.
Block I/O
The blkio controller gives a lot of information about the disk accesses (technically, block devices requests) performed by a group of processes. This is very useful, because I/O resources are much harder to share than CPU or RAM.
A system has a given, known, and fixed amount of RAM. It has a fixed number of CPU cycles every second – and even on systems where the number of CPU cycles can change (tickless systems, or virtual machines), it is not an issue, because the kernel will slice the CPU time in shares of e.g. 1 millisecond, and there is a given, known, and fixed number of milliseconds every second (doh!). I/O bandwidth, however, is quite unpredictable. Or rather, as we will see, it is predictable, but the prediction isn’t very useful.
A hard disk with a 10ms average seek time will be able to do about 100 requests of 4 KB per second; but if the requests are sequential, typical desktop hard drives can easily sustain 80 MB/s transfer rates – which means 20000 requests of 4 kB per second.
The average throughput (measured in IOPS, I/O Operations Per Second) will be somewhere between those two extremes. But as soon as some application performs a task requiring a lot of scattered, random I/O operations, the performance will drop – dramatically. The system does give you some guaranteed performance, but this guaranteed performance is so low, that it doesn’t help much (that’s exactly the problem of AWS EBS, by the way). It’s like a highway with an anti-traffic jam system that would guarantee that you can always go above a given speed, except that this speed is 5 mph. Not very helpful, is it?
That’s why SSD storage is becoming increasingly popular. SSD has virtually no seek time, and can therefore sustain random I/O as fast as sequential I/O. The available throughput is therefore predictably good, under any given load.
Actually, there are some workloads that can cause problems; for instance, if you continuously write and rewrite a whole disk, you will find that the performance will drop dramatically. This is because read and write operations are fast, but erase, which must be performed at some point before write, is slow. This won’t be a problem in most situations.
An example use-case which could exhibit the issue would be to use SSD to do catch-up TV for 100 HD channels simultaneously: the disk will sustain the write throughput until it has written every block once; then it will need to erase, and performance will drop below acceptable levels.)
To get back to the topic – what’s the purpose of the blkio controller in a PaaS environment like dotCloud?
The blkio controller metrics will help detecting applications that are putting an excessive strain on the I/O subsystem. Then, the controller lets you set limits, which can be expressed in number of operations and/or bytes per second. It also allows for different limits for read and write operations. It allows to set some safeguard limits (to make sure that a single app won’t significantly degrade performance for everyone). Furthermore, once a I/O-hungry app has been identified, its quota can be adapted to reduce impact on other apps.
more
The pid namespace
This is probably the most useful for basic isolation.
Each pid namespace has its own process numbering. Different pid namespaces form a hierarchy: the kernel keeps track of which namespace created which other. A “parent” namespace can see its children namespaces, and it can affect them (for instance, with signals); but a child namespace cannot do anything to its parent namespace. As a consequence:
each pid namespace has its own “PID 1” init-like process;
processes living in a namespace cannot affect processes living in parent or sibling namespaces with system calls like kill or ptrace, since process ids are meaningful only inside a given namespace;
if a pseudo-filesystem like proc is mounted by a process within a pid namespace, it will only show the processes belonging to the namespace;
since the numbering is different in each namespace, it means that a process in a child namespace will have multiple PIDs: one in its own namespace, and a different PID in its parent namespace.
The last item means that from the top-level pid namespace, you will be able to see all processes running in all namespaces, but with different PIDs. Of course, a process can have more than 2 PIDs if there are more than two levels of hierarchy in the namespaces.
The net namespace
With the pid namespace, you can start processes in multiple isolated environments (let’s bite the bullet and call them “containers” once and for all). But if you want to run e.g. a different Apache in each container, you will have a problem: there can be only one process listening to port 80/tcp at a time. You could configure your instances of Apache to listen on different ports… or you could use the net namespace.
As its name implies, the net namespace is about networking. Each different net namespace can have different network interfaces. Even lo, the loopback interface supporting 127.0.0.1, will be different in each different net namespace.
It is possible to create pairs of special interfaces, which will appear in two different net namespaces, and allow a net namespace to talk to the outside world.
A typical container will have its own loopback interface (lo), as well as one end of such a special interface, generally named eth0. The other end of the special interface will be in the “original” namespace, and will bear a poetic name like veth42xyz0. It is then possible to put those special interfaces together within an Ethernet bridge (to achieve switching between containers), or route packets between them, etc. (If you are familiar with the Xen networking model, this is probably no news to you!)
Note that each net namespace has its own meaning for INADDR_ANY, a.k.a. 0.0.0.0; so when your Apache process binds to *:80 within its namespace, it will only receive connections directed to the IP addresses and interfaces of its namespace – thus allowing you, at the end of the day, to run multiple Apache instances, with their default configuration listening on port 80.
In case you were wondering: each net namespace has its own routing table, but also its own iptables chains and rules.
The ipc namespace
This one won't appeal much to you, unless you took your UNIX 101 a long time ago, back when it still taught IPC (InterProcess Communication)!
IPC provides semaphores, message queues, and shared memory segments.
While still supported by virtually all UNIX flavors, those features are considered by many people as obsolete, and superseded by POSIX semaphores, POSIX message queues, and mmap. Nonetheless, some programs – including PostgreSQL – still use IPC.
What's the connection with namespaces? Well, each IPC resource is accessed through a unique 32-bit ID. IPC implements permissions on resources, but nonetheless, an application could be surprised if it failed to access a given resource because its key had already been claimed by another process in a different container.
Enter the ipc namespace: processes within a given ipc namespace cannot access (or even see) IPC resources living in other ipc namespaces. And now you can safely run a PostgreSQL instance in each container without fearing IPC key collisions!
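Those 32-bit keys are typically derived with ftok(3), which mixes a file's inode and device numbers with a caller-chosen project ID. A pure-Python re-implementation of glibc's formula (a sketch; the real ftok lives in the C library) makes it easy to see why two unrelated applications can end up computing the same key:

```python
import os

def ftok(pathname, proj_id):
    """Mimic glibc's ftok(3): derive a SysV IPC key from a file's
    inode number, device number, and a project ID.  Only the low
    bits of each are kept, hence the possibility of collisions."""
    st = os.stat(pathname)
    return ((st.st_ino & 0xFFFF)
            | ((st.st_dev & 0xFF) << 16)
            | ((proj_id & 0xFF) << 24))

# Two processes stat'ing the same path with the same project ID compute
# the same key -- which is the point, but also the source of clashes
# between containers that share (parts of) a filesystem.
key = ftok("/tmp", ord("A"))
print(hex(key))
```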
The mnt namespace
You might already be familiar with chroot, a mechanism that lets you sandbox a process (and its children) within a given directory. The mnt namespace takes that concept one step further.
As its name implies, the mnt namespace deals with mountpoints.
Processes living in different mnt namespaces can see different sets of mounted filesystems – and different root directories. If a filesystem is mounted in a mnt namespace, it will be accessible only to those processes within that namespace; it will remain invisible for processes in other namespaces.
At first, it sounds useful, since it lets you sandbox each container within its own directory, hiding the other containers.
At a second glance, is it really that useful? After all, if each container is chroot‘ed in a different directory, container C1 won’t be able to access or see the filesystem of container C2, right? Well, that’s right, but there are side effects.
Inspecting /proc/mounts in a container will show the mountpoints of all containers. Also, those mountpoints will be relative to the original namespace, which can give some hints about the layout of your system – and maybe confuse some applications which would rely on the paths in /proc/mounts.
The mnt namespace makes the situation much cleaner, allowing each container to have its own mountpoints, and see only those mountpoints, with their path correctly translated to the actual root of the namespace.
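For reference, /proc/mounts is plain text with one mountpoint per line; a minimal parser over a sample excerpt (the sample lines below are illustrative, not taken from a real system) shows the fields involved:

```python
# Each line of /proc/mounts has six whitespace-separated fields:
# device, mountpoint, fstype, options, dump, pass.
SAMPLE = """\
rootfs / rootfs rw 0 0
proc /proc proc rw,nosuid,nodev,noexec 0 0
/dev/sda1 /var/lib/containers/c1 ext4 rw,relatime 0 0
"""

def parse_mounts(text):
    mounts = []
    for line in text.splitlines():
        device, mountpoint, fstype, options, _dump, _pass = line.split()
        mounts.append({"device": device, "mountpoint": mountpoint,
                       "fstype": fstype, "options": options.split(",")})
    return mounts

for m in parse_mounts(SAMPLE):
    print(m["mountpoint"], m["fstype"])
```

In a dedicated mnt namespace, a container reading this file sees only its own mountpoints, with paths expressed relative to its own root.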
The uts namespace
Finally, the uts namespace deals with one little detail: the hostname that will be “seen” by a group of processes.
Each uts namespace will hold a different hostname, and changing the hostname (through the sethostname system call) will only change it for processes running in the same namespace.
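From a process's point of view, the per-namespace hostname is simply what gethostname(2) returns; a trivial Python sketch (reading it needs no privileges, whereas sethostname requires CAP_SYS_ADMIN in the namespace):

```python
import socket

# gethostname() returns the hostname of the caller's uts namespace;
# inside a container, this is the container's own hostname, not the host's.
hostname = socket.gethostname()
print(hostname)
```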
The index file is made up of 8-byte entries: 4 bytes to store the offset relative to the base offset and 4 bytes to store the file position. Because the offset is stored relative to the base offset, only 4 bytes are needed per offset. For example, if the base offset is 10000000000000000000, rather than storing the subsequent offsets 10000000000000000001 and 10000000000000000002 in full, they are stored as just 1 and 2.
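The arithmetic above can be sketched in a few lines (the helper names are hypothetical; Kafka's actual index code lives in its Scala/Java broker, this just mirrors the layout of one 8-byte entry):

```python
import struct

BASE_OFFSET = 10000000000000000000  # taken from the example above

def pack_entry(absolute_offset, position):
    """Pack one index entry: 4-byte relative offset + 4-byte file position."""
    relative = absolute_offset - BASE_OFFSET
    return struct.pack(">II", relative, position)

def unpack_entry(entry):
    """Recover the absolute offset and file position from one entry."""
    relative, position = struct.unpack(">II", entry)
    return BASE_OFFSET + relative, position

entry = pack_entry(10000000000000000002, 4096)
print(len(entry))            # 8 bytes per entry
print(unpack_entry(entry))   # (10000000000000000002, 4096)
```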
Kafka wraps compressed messages together
Producers sending compressed messages will compress the batch together and send it as the payload of a wrapped message. And as before, the data on disk is exactly the same as what the broker receives from the producer over the network and sends to its consumers.
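The idea can be sketched as follows (a toy illustration, not Kafka's actual wire format): several messages are compressed into one payload, and that payload travels and is stored as a single wrapper message:

```python
import gzip

def wrap_batch(messages):
    """Compress a batch of messages into one wrapper payload (toy format:
    newline-separated; Kafka's real record-batch format is more involved)."""
    return gzip.compress(b"\n".join(messages))

def unwrap_batch(payload):
    """What a consumer does: decompress the wrapper, recover the batch."""
    return gzip.decompress(payload).split(b"\n")

batch = [b"msg-0", b"msg-1", b"msg-2"]
payload = wrap_batch(batch)            # stored and forwarded as-is by the broker
print(unwrap_batch(payload) == batch)  # True
```

The broker never recompresses: the bytes it writes to disk and sends to consumers are exactly the bytes the producer sent.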
https://thehoard.blog/how-kafkas-storage-internals-work-3a29b02e026
Durability - the ability to withstand wear, pressure, or damage.
http://martinfowler.com/bliki/DDD_Aggregate.html
Effective Aggregate Design By Vaughn Vernon
Part 1 : http://dddcommunity.org/wp-content/uploads/files/pdf_articles/Vernon_2011_1.pdf
Part 2 : http://dddcommunity.org/wp-content/uploads/files/pdf_articles/Vernon_2011_2.pdf
Part 3 : http://dddcommunity.org/wp-content/uploads/files/pdf_articles/Vernon_2011_3.pdf
Video
Part 2 : https://vimeo.com/33708293
In computer science, an invariant is a condition that can be relied upon to be true during execution of a program, or during some portion of it. It is a logical assertion that is held to always be true during a certain phase of execution.
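In aggregate terms, the aggregate root is the natural place to enforce such invariants. A minimal Python sketch (the Order aggregate and its names are illustrative, not from Vernon's articles):

```python
class Order:
    """Toy aggregate root enforcing one invariant: the total of all
    line amounts never exceeds the order's credit limit."""

    def __init__(self, credit_limit):
        self.credit_limit = credit_limit
        self.lines = []

    def add_line(self, amount):
        # All changes go through the root, so the invariant is checked
        # before any state mutation is committed.
        if self.total() + amount > self.credit_limit:
            raise ValueError("invariant violated: credit limit exceeded")
        self.lines.append(amount)

    def total(self):
        # Invariant: total() <= credit_limit holds after every operation.
        return sum(self.lines)

order = Order(credit_limit=100)
order.add_line(60)
order.add_line(30)
try:
    order.add_line(20)    # would bring the total to 110: rejected
except ValueError:
    pass
print(order.total())      # 90: the invariant held
```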