True to its name, Ananta provides cloud-scale load balancing. It addresses the limitations of traditional hardware load balancers by supporting 100 Gbps per VIP, rapid failover across thousands of VIPs, and tenant isolation so that overload in one tenant does not impact others. Ananta implements load balancing across three tiers - packet-level load spreading in routers, connection-level load balancing in commodity servers, and stateful NAT in the hosts - to achieve high scalability, availability, and flexibility.
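The tiered split can be illustrated with a highly simplified sketch. Everything here - the function names, the use of an MD5 five-tuple hash, the mux/DIP lists - is an illustrative assumption, not Ananta's actual implementation; the point is only that routers spread packets across muxes and each mux independently maps a connection to the same backend:

```python
import hashlib

def five_tuple_hash(pkt, n):
    """Hash the connection 5-tuple so every packet of one flow picks the same target."""
    key = f"{pkt['src_ip']}:{pkt['src_port']}->{pkt['dst_ip']}:{pkt['dst_port']}/{pkt['proto']}"
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % n

def router_ecmp(pkt, muxes):
    """Tier 1 (illustrative): the router spreads packets for a VIP across mux instances."""
    return muxes[five_tuple_hash(pkt, len(muxes))]

def mux_select_dip(pkt, dips):
    """Tier 2 (illustrative): any mux maps the connection to the same backend (DIP)
    without shared state, because all muxes compute the same hash."""
    return dips[five_tuple_hash(pkt, len(dips))]

pkt = dict(src_ip="10.0.0.1", src_port=12345, dst_ip="1.2.3.4", dst_port=80, proto="tcp")
muxes = ["mux0", "mux1", "mux2"]
dips = ["dip0", "dip1", "dip2", "dip3"]
print(router_ecmp(pkt, muxes), mux_select_dip(pkt, dips))
```

Tier 3 (stateful NAT in the hosts) then rewrites the VIP to the DIP per connection, which is what lets failover move VIPs without breaking established flows.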
8. 32 Regions Worldwide, 24 Generally Available
Central US: Iowa
West US: California
East US: Virginia
East US 2: Virginia
US Gov: Virginia
US Gov: Iowa
North Central US: Illinois
South Central US: Texas
Brazil South: Sao Paulo State
West Europe: Netherlands
North Europe: Ireland
China North*: Beijing
China South*: Shanghai
Japan East: Tokyo, Saitama
Japan West: Osaka
India South: Chennai
India Central: Pune
India West: Mumbai
East Asia: Hong Kong
SE Asia: Singapore
Australia South East: Victoria
Australia East: New South Wales
Canada East: Quebec City
Canada Central: Toronto
Germany North East: Magdeburg
Germany Central: Frankfurt
United Kingdom Regions (2)
Korea Regions (2)
US DoD West: TBA
US DoD East: TBA
*Operated by 21Vianet
(Map legend distinguished announced/not operational from operational regions.)
32 regions announced / 24 operational as of this slide (*)
(*) Currently 38 announced / 30 operational
9. $ azure location list --details
info: Executing command location list
+ Getting ARM registered providers
info: Getting locations...
data:
data: Location : eastasia
data: DisplayName : East Asia
data:
data:
[…]
12. Datacenter generations and Power Usage Effectiveness (PUE)
Generation 1 (Colocation): 2.0+ PUE; discrete servers; capacity-driven; 20-year technology
Generation 2 (Density): 1.4 - 1.6 PUE; rack-based; density & deployment; minimized resource impact
Generation 3 (Containment): 1.2 - 1.5 PUE; containers, PODs; scalability & sustainability; air & water economization; differentiated SLAs
Generation 4 (Modular): 1.12 - 1.20 PUE; deployment areas & ITPACs; no more traditional IT; right-sized; faster time-to-market; outside-air cooled
Generation 5 (Hyper-scale): 1.07 - 1.19 PUE; fully integrated; resilient software; common infrastructure; operational simplicity; flexible & scalable
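PUE is defined as total facility power divided by the power actually delivered to IT equipment, so the generational figures above translate directly into cooling-and-distribution overhead. A one-line illustration (function name and kW figures are examples, not slide data):

```python
def pue(total_facility_kw, it_kw):
    """Power Usage Effectiveness = total facility power / IT equipment power."""
    return total_facility_kw / it_kw

# A Generation-1 style colocation room: 1000 kW of IT gear plus another
# 1000 kW of cooling/power-distribution overhead gives PUE 2.0.
print(pue(2000, 1000))   # 2.0
# A hyper-scale facility at the same IT load needs only ~120 kW overhead
# to land in the Generation-4 band.
print(pue(1120, 1000))   # 1.12
```

At cloud scale the difference between PUE 2.0 and 1.12 is nearly half the total power bill, which is why the slide tracks it per generation.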
13.
14. S. Sankar, K. Vaid, M. Shaw, "Impact of Temperature on Hard Disk Drive Reliability in Large Datacenters," Microsoft, IEEE, 2011
Inlet Temperature and Impact on Hard Disk Failure Rates
Two chassis designs: HDDs in front (case runs ΔT 1°C above inlet) and a buried-HDD design (ΔT 20°C at the cold end, de-rated to ΔT 10°C at the hot end).

Inlet Temp     HDDs in Front: Case Temp / Relative AFR   Buried HDDs: Case Temp / Relative AFR
10°C (50°F)    11°C / 100%                               30°C / 100%
15°C (59°F)    16°C / 100%                               34°C / 100%
20°C (68°F)    21°C / 100%                               38°C / 100%
25°C (77°F)    26°C / 100%                               41°C / 106%
30°C (86°F)    31°C / 100%                               45°C / 131%
35°C (95°F)    36°C / 100%                               49°C / 153%
40°C (104°F)   41°C / 106%                               53°C / 189%
45°C (113°F)   46°C / 138%                               56°C / 231%
50°C (122°F)   51°C / 179%                               60°C / 281%

"Azure Network and Datacenter Infrastructure: Enterprise Quality at Cloud Scale," Microsoft Ignite 2015
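As a rough way to use the table programmatically, here is a nearest-neighbor lookup over the case-temperature/AFR pairs. The dictionary values are the relative AFR percentages from the table; the structure and function name are my own, and real reliability modeling would interpolate or fit a curve rather than snap to the nearest row:

```python
# Relative AFR (%) vs HDD case temperature (°C), from the table above.
# Left column pairs: the "HDDs in Front" design (case = inlet + 1°C).
AFR_BY_CASE_TEMP = {11: 100, 16: 100, 21: 100, 26: 100, 31: 100,
                    36: 100, 41: 106, 46: 138, 51: 179,
                    53: 189, 56: 231, 60: 281}

def relative_afr(case_temp_c):
    """Return the nearest tabulated relative AFR (%) for a case temperature."""
    nearest = min(AFR_BY_CASE_TEMP, key=lambda t: abs(t - case_temp_c))
    return AFR_BY_CASE_TEMP[nearest]

print(relative_afr(46))  # 138: noticeable AFR increase above ~45°C case temp
print(relative_afr(20))  # 100: flat failure rate at cool case temperatures
```

The takeaway matches the slide: failure rates are essentially flat until case temperatures pass roughly 40°C, which is what makes warmer inlet air (and hence better PUE) viable if the chassis keeps drive ΔT small.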
30. [Diagram] Two racks, each with a ToR switch and four servers (CS0-CS3 in one rack, SP0-SP3 in the other). In every server the FPGA sits between the NIC and the ToR, so all network traffic passes through it. The ToRs uplink into the L0 and L1/L2 network tiers.
Microsoft's Production Configurable Cloud," Mark Russinovich, Chief Technology Officer, Microsoft Azure, SCS Distinguished Lecture, 11/15/2016
October 15, 2016
31. [Diagram] The LTL transport engine and its interface to the Elastic Router:
• Elastic Router (multi-VC on-chip router) exchanges Credits, Virtual Channel, Data, and Header signals with both the transmit and receive sides.
• Transmit path: Connection Lookup against the Send Connection Table, driven by the Transmit State Machine; Send Frame Queue; Packetizer and Transmit Buffer; Unack'd Frame Store; Ethernet Encap into the 40G MAC+PHY, out to the Datacenter Network.
• Receive path: Ethernet Decap from the 40G MAC+PHY; Receive Connection Table and Receive State Machine; Depacketizer; Credit Management; Ack Receiver and Ack Generation.
• Solid links show Data flow; dotted links show ACK flow.
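The credit exchange in this diagram is classic credit-based flow control: a sender may transmit only while it holds credits, and the receiver returns one credit as it drains each frame from its buffer. A toy model (class name and buffer semantics are my assumptions, not the LTL hardware):

```python
class CreditLink:
    """Toy credit-based flow control: credits track free receiver buffer slots."""

    def __init__(self, buffer_slots):
        self.credits = buffer_slots  # one credit per free slot at the receiver
        self.rx_queue = []

    def send(self, frame):
        """Sender side: transmit only if a credit is available."""
        if self.credits == 0:
            return False             # back-pressure: receiver buffer is full
        self.credits -= 1
        self.rx_queue.append(frame)
        return True

    def receive(self):
        """Receiver side: drain one frame and return a credit to the sender."""
        frame = self.rx_queue.pop(0)
        self.credits += 1
        return frame

link = CreditLink(buffer_slots=2)
print(link.send("f0"), link.send("f1"))  # both accepted
print(link.send("f2"))                    # refused: no credits left
link.receive()                            # frees a slot, returns a credit
print(link.send("f2"))                    # now accepted
```

This is why the diagram separates data flow from ACK/credit flow: credits guarantee the receiver never overflows, so frames are dropped only by the network itself, and those are recovered from the Unack'd Frame Store.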
32.
41. “The problem I have right now? It is supply chain. I am not so worried about technology. We
have our Open Cloud Server, which I think is very compelling in that it offers some real economic
capabilities. But I have got to nurture my supply chain because traditionally we bought from
OEMs and now we are designing with ODMs so we can take advantage of prices and lower our
overall costs. So I am moving very, very quickly to build out new capacity, and I want to do it in
a very efficient and effective way and it is really about the commoditization of the infrastructure.”
( https://www.nextplatform.com/2016/09/26/rare-tour-microsofts-hyperscale-datacenters/ )
Rick Bakken, Sr. Director, Data Center Evangelism, Microsoft
71. • Today's Server to Tier 0
• Interconnect is based on 25G technology
• Links are 50G Ethernet (2x25G), based on the 25G Ethernet Consortium spec
• Bandwidth growth drove us to 50G
• Doesn't require an 802.3 specification here
• Tomorrow's Server to Tier 0
• Interconnect will be based on 50G PAM4 technology
• Expect links will be 100G Ethernet (2x50G)
• Choice for 802.3: create the specification, or let a consortium do it