Kayukov Valeriy
Optical System Engineer - Step Logic
April 20, 2016
Cisco Optical Networking – Disaster Recovery
Solutions (SAN over DWDM Calculation).
Cisco Support Community
Expert Series Webcast
• VMDC DCI Design
• Protocol Comparison
• Fiber Channel
• Cisco Transponders Solutions
• Cisco NCS 1002
• Cisco Transport Encryption
• Transport Optical Protection
Content of presentation
• Storage Networking
• Optical Recovery and Restoration
• Partnership Support Ecosystem
VMDC DCI Design
Recovery Point Objective (RPO) and the Recovery Time Objective (RTO)
How fast does the business need to recover?
• Two important objectives in the designing process are the Recovery Point Objective (RPO) and the Recovery Time
Objective (RTO).
• The RPO is the time period between backup points and describes the acceptable age of the data that must be restored
after a failure has occurred. For example, if a remote backup occurs every day at midnight and a site failure occurs at 11
pm, changes to data made within the last 23 hours will not be recoverable.
• The RTO describes the time needed to recover from the disaster. The RTO determines the acceptable length of time a
break in continuity can occur with minimal or no impact to business services. Options for replication generally fall into
one of several categories.
• A business continuity solution with strict RTO and RPO may require high-speed synchronous or near-synchronous
replication between sites as well as application clustering for immediate service recovery.
• A medium level Disaster Recovery (DR) solution may require high-speed replication that could be synchronous or
asynchronous with an RTO from several minutes to a few hours. Backup of non-critical application data that does not
require immediate access after a failure can be accomplished via tape vaulting. Recovery from tape has the greatest
RTO. In addition, other technologies such as Continuous Data Protection (CDP) can be used to achieve the appropriate RPO and RTO.
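As a quick illustration of the RPO arithmetic above, the sketch below (illustrative Python, not part of the original material; the values are hypothetical) computes the worst-case data-loss window from a backup interval and checks it against an RPO target.

```python
from datetime import timedelta

def worst_case_data_loss(backup_interval: timedelta) -> timedelta:
    """Worst-case age of restorable data: a failure happens just before the next backup."""
    return backup_interval

def meets_rpo(backup_interval: timedelta, rpo_target: timedelta) -> bool:
    """An RPO target can only be met if the backup interval does not exceed it."""
    return worst_case_data_loss(backup_interval) <= rpo_target

# Slide scenario: nightly backups, failure at 23:00 -> up to 23 hours of changes lost (24 h worst case).
nightly = timedelta(hours=24)
print(worst_case_data_loss(nightly))           # 1 day, 0:00:00
print(meets_rpo(nightly, timedelta(hours=1)))  # False: nightly backups cannot meet a 1-hour RPO
```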
VMDC DCI Design
Recovery Point Objective (RPO) and the Recovery Time Objective (RTO) Terms
• Business units should define the anticipated risks, depending on the region's business and the territorial coverage of possible disasters.
• The coverage of potential risks dictates the need for a second data center or a Disaster Recovery Site.
• Coverage also determines the transport technologies and the application architecture of the complete solution.
• VMDC DCI Design reduces infrastructure CAPEX/OPEX.
VMDC DCI Design
Basic Terminology and architecture of classic DRS Solution
Business Continuity
Workload Mobility
Disaster Recovery
Site Migrations
Load Balanced Workloads
Operations Maintenance
Operations Rebalancing
Application Clusters
Simplify the DCI Design Process for Operations Teams - Interconnecting Cloud Data Centers involves many infrastructure elements and application
components. The VMDC DCI validated design significantly reduces risk of implementation using Cisco’s latest product innovations
End-to-end Validation of the Application Environment - VMDC DCI delivers validated guidelines across the end-to-end layers of the cloud data center.
Competitive offerings only focus on a few elements. VMDC DCI spans different sites, addressing each Application element including WAN connections, tenancy, network containers,
distributed virtual switching, and L4-L7 services, hypervisor migration tools, and storage replication. This is a complete DCI solution.
Validates 2 of the most used DCI Design Options - VMDC DCI validates the most common design options to achieve 2 major Recovery Point Objective (RPO)
and Recovery Time Objective (RTO) targets. The first design option enables the movement of applications, their services, and network containers to support near zero RPO/RTO for
the most business critical functions. Less business critical applications can be mapped to a second option to achieve RPO/RTO targets of 15 minutes or more.
Minimal Disruption to the Application - VMDC DCI allows operators to preserve IP addresses of moved applications and their services between sites.
Reduction in CAPEX/OPEX for DCI Deployments - VMDC DCI helps customers align the correct DCI design to achieve application RPO/RTO targets. The most
stringent recovery targets typically require the highest CAPEX/OPEX. VMDC DCI provides a framework to map Applications to different Criticality Levels, and then select the most
cost effective design option that meets application requirements.
Planned Usage of Recovery Capacity - Recovery capacity at remote sites can be used for other applications during “normal operations” and “reclaimed” as needed
during recovery events. This “Reuse-Reclaim” design strategy allows for planned utilization of extra capacity and many-to-one resource sharing, reducing CAPEX/OPEX.
Multiple Hypervisors supported - Both VMware and Microsoft Hyper-V environments are supported.
DCI Use Cases Validated with Business Applications - VMDC DCI used traditional business applications across each workload migration and business
continuity use case. The test applications include Oracle database servers, Microsoft SharePoint and SQL, for single tier and multi-tier test applications.
Product Performance Measured across DCI Use Cases - The performance of Cisco products and Partner Products was measured and documented across
Metro/Geo environments. Performance limitations, design recommendations, and configurations are provided for Cisco and Partner products.
VMDC DCI Value Proposition
Simplify DCI Deployments, reduce design CAPEX/OPEX, Reuse-Reclaim Recovery Resources
Mgt Infrastructure
and Orchestration
Switching Fabric
Integrated
Compute
Stacks
WAN Edge / DCI
Storage
and Fabric
Extensions
Virtual
Switching
Services &
Containers
Virtual Storage
Volumes
Data Center 1
Cisco
Product
 OTV LAN Extension, Preserve IP Addressing of Applications
 IP WAN Transport with 10ms RTT across Metro distance
 External Path Re-direction thru routing update and orchestration
 Routing re-convergence to new site
 Stretched ESX Clusters and Server Affinity
 VMware Live vMotion across Metro sites
 Distributed vCenter spanning Metro sites
 Single and Multi-Tier Application migration strategy
 VMDC 3.0 FabricPath (Typical Design) with Multi-Tenancy
 Palladium Network Container
 Nexus 1000v with VSMs and VEMs across Metro sites
 Service and Security Profiles follow Application VMs
 Different Nexus 1000v’s mapped to Application Domains as needed
 Virtual volumes follow VM
 NetApp MetroCluster Synchronous Storage Replication
 ONTAP 8.1 Fabric MetroCluster, 160 Km long haul link (DWDM)
 FCoE to compute stack, Cisco MDS FC Switching for data replication
 Replicate Service Container to new site to support Mobile VM
 Virtual Mgt Infrastructure support across Metro
VMDC DCI Design Choices
Partner
Product
Route Optimization
Path Optimization
(LISP/ DNS / Manual )
Layer 2 Extension
(OTV / VPLS / E-VPN)
Stateful Services
(FW / SLB / IPSec / VSG)
VMware & Hyper-V
UCS / Geo-Clusters / Mobility
Distributed Virtual Switch
(Nexus 1000v)
Distributed Virtual Volumes
Container Orchestration
Storage Clusters
MDS Fabric and FCoE
Tenancy and QoS
 Stateful Services between sites
 Citrix SDX SLB at each site (no Metro extension)
 ASA 5500 FW Clustering at each site (no Metro extension)
VMDC DCI Design
Active-Active Metro Design Choices
Synchronous Data Replication
guarantees continuous data
integrity during the replication
process with no extra risk of data
loss.
The impact on the performance
of the application can be
significant and is highly
dependent on the distance
between the sites.
Metro Distances (depending on the application, 50-200 km max)
VMDC DCI Design
Synchronous Data Replication
Asynchronous replication
overcomes the performance
limitations of synchronous
replication, but some data loss
must be accepted.
The primary factors influencing
the amount of data loss are the
rate of change (ROC) of data on
the primary site, the link speed,
and the distance between sites.
Unlimited distances.
VMDC DCI Design
Asynchronous Data Replication
 Fiber Channel has released the 32G standard; 64G FC is coming soon.
 The PCI Special Interest Group roadmap targets PCIe 4.0 (16 GT/s) for Q4 2016.
 Primitive-based flow control commands make the FC stack faster.
 A resilient and secure network, isolated from the Internet.
Protocol comparison
Fiber Channel versus Ethernet
 200G Ethernet is being standardized in the 802.3bs project; 400G is coming.
 The IEEE plans to complete the P802.3bs 400 Gb/s standard in Q4 2017.
 Lower speed and a more complex OSI stack add overhead to communication.
 Cloud storage is software-defined, scale-out, and not Fiber Channel.
 Hyper-Converged Infrastructure (HCI) is showing hyper-growth.
 Growth of file and object storage: analyst firm IDC predicts file and object storage are growing at 24% per year.
 Fiber Channel SANs are faster than iSCSI SANs in terms of line rate. For example, the line rate of 1G FC (1.0625 Gbps) is higher than that of 1GE.
 Fiber Channel's state machine uses primitive sequences for flow control, and these primitives make it faster.
 FC latency is less than 3,500 ns and SAS latency is less than 6,500 ns, while iSCSI latency is significantly greater than Fiber Channel's because of TCP latency.
 Self-documenting, due to FC LOGINs and Name Server registration in the fabric.
Protocol comparison
FC/FCoE versus iSCSI
 In most cases iSCSI SANs are less expensive than Fiber Channel SANs.
 At first sight, iSCSI SANs are simpler to operate than Fibre Channel SANs, but in practice FC zoning provisioning is simpler than configuring iSCSI IQNs and IP addresses.
 Ethernet switches use cut-through (less than 5 µs) and store-and-forward (around 20 µs) forwarding.
Summary: it's more about what is right for your design and less about the technology.
FC is suitable for an isolated low-latency network, or for IBM System z environments.
Essentially the same in terms of
– Network-centric
– Similar management tools
– Same multipathing software (for iSCSI as well)
– Similar basic port types
▪ N_Ports / F_Ports vs. VN_Ports and VF_Ports
▪ E_Ports vs. VE_Ports
– Same scalability limits
▪ Number of domains
▪ Number of N_Ports / VN_Ports
▪ Number of hops
Protocol comparison
FC versus FCoE
 Buffer Credits Define the maximum amount of data that can be sent prior to an acknowledgement
 Buffer Credits are physical ASIC port or card memory resources and are finite in number as a function of cost
 Within a fabric, each port may have a different number of buffer credits
 The number of available buffer credits is communicated at fabric logon (FLOGI)
 BB Flow Control works on Link Layer (FC-1), EE Flow Control works on Transport Layer (FC-2)
Buffer-to-buffer (BB) credit flow control is implemented to limit the amount of data that a port may send, and is
based on the number and size of the frames sent from that port.
SAN switches can support two methods of flow control over an ISL:
 Virtual Channel (VC_RDY) flow control
 Receiver Ready (R_RDY) flow control
Fiber Channel
Buffer Credit Flow Control
VC_RDY flow control differentiates traffic across an ISL:
 The algorithm differentiates fabric-internal service traffic from the different data flows of end-to-end device traffic, to avoid head-of-line blocking. Service traffic is given a higher priority than other traffic.
 Multiple I/Os are multiplexed over a single ISL by
assigning different VCs to different I/Os and giving them
the same priority (unless QoS is enabled).
 I/O multiplexing balances the performance of different devices communicating across the ISL.
Fiber Channel
Virtual Channel (VC_RDY) versus Receiver Ready (R_RDY)
 R_RDY flow control is defined in the FC standards and has only a single lane or channel for all frame types.
 When switches are configured to use R_RDY flow control,
there are other mechanisms to enable QOS and avoid
head-of-line blocking problems.
Fiber Channel
B-to-B and E-to-E Flow Control Domains
 Used by Class 1 and Class 2 service
between 2 end nodes.
 Nodes monitor end to end flow control
between themselves, directors do not
participate.
 End to End flow control is always
managed between a specific pair of node
ports
BB_Credit management occurs between:
– One N_Port and one F_Port
– Two E_Ports
– Two N_Ports in a P2P topology
– In Arbitrated Loop different modes
When connecting switches across dark fiber or WDM communication links note:
 VC_RDY is the preferred method, but there are some distance extension devices that require the E_Port to be
configured for R_RDY.
 To allow “buffer-to-buffer credit spoofing”, disable the buffer-to-buffer state change (BB_SC) number on the ports (see the notes).
FC switches track the available BB_Credits in the
following manner:
 Before any data frames are sent, the transmitter sets a
counter equal to the BB_Credit value communicated by its
receiver during FLOGI
 For each data frame sent by the transmitter, the counter
is decremented by one
 Upon receipt of a data frame, the receiver sends a
status frame (R_RDY) to the transmitter indicating that
the data frame was received and the buffer is ready to
receive another data frame
 For each R_RDY received by the transmitter, the
counter is incremented by one
Fiber Channel
Buffer credit negotiation
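A minimal sketch of the BB_Credit bookkeeping described above, assuming a simple transmitter-side counter (an illustration of the logic, not real switch firmware): the counter starts at the value learned at FLOGI, is decremented per frame sent, and is incremented per R_RDY received.

```python
class BBCreditPort:
    """Illustrative transmitter-side BB_Credit counter (not a real FC implementation)."""

    def __init__(self, bb_credit_from_flogi: int):
        # Counter starts at the BB_Credit value advertised by the peer during FLOGI.
        self.available = bb_credit_from_flogi

    def can_send(self) -> bool:
        return self.available > 0

    def send_frame(self) -> None:
        if not self.can_send():
            raise RuntimeError("credit starvation: no BB_Credits left, transmission must pause")
        self.available -= 1   # one credit consumed per data frame sent

    def receive_r_rdy(self) -> None:
        self.available += 1   # one credit returned per R_RDY received


port = BBCreditPort(bb_credit_from_flogi=8)
for _ in range(8):
    port.send_frame()
print(port.can_send())   # False: all credits are in flight, the sender must wait for R_RDY
port.receive_r_rdy()
print(port.can_send())   # True again
```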
 Transmitting over the same distance at a higher speed requires more buffer credits.
 Full data frames are 2,148 bytes in size, with QoS enabled.
 Methods of BB calculation:
• Time length based
• Classic method
• Average based method
Fiber Channel
Buffer credits to Link speed
SRDF synchronous mode - the EMC storage device responds to the host that issued a write operation to the
source of the composite group after the EMC storage, which contains the target of the composite group, acknowledges that
it has received and checked the data.
SRDF asynchronous replication - the EMC storage device provides a consistent point-in-time image on the target
of the composite group, which is a short period of time behind the source of the composite group. Asynchronous mode is
managed in sessions.
Asynchronous mode transfers data in predefined timed cycles or in delta sets to ensure that data at the remote target of
the composite group site is in the dependent write consistent state.
Fiber Channel
EMC specific feature SRDF
The SiRT feature localizes the transfer-ready response to the local RF port, thereby eliminating an unnecessary acknowledgement round trip over the SAN and DWDM network.
The Single Round Trip (SiRT) feature is dynamically enabled for SRDF/S links longer than 12 km with blocks of up to 32 KB.
SAN switches and DWDM transponders measure link latency, and SiRT is disabled automatically when connected to such devices.
With SRDF it is recommended to use the fast write feature on network devices.
The SiRT feature has two modes – Off and Auto; Auto accelerates only write commands.
Fiber Channel
EMC specific feature SRDF (SiRT)
The fast write feature localizes the transfer-ready response between the transponder and the E_Port of the SAN switch.
This feature is transparent to the SRDF FC link and applies to all SRDF modes.
The transponder client FC port looks like a phantom target to the initiator.
Fiber Channel
EMC specific feature SRDF (Fast Write)
This method uses the frame propagation time in fiber to calculate a precise number of buffer credits.
Optimal number of credits is determined by:
 Distance (frame delivery time)
 Processing time at receiving port
 Link signaling rate
 Size of frames being transmitted
Optimal # BB_Credit = (Round-trip receiving time + Receiving_port
processing time) / Frame Transmission time
As the link speed increases, the frame transmission time is reduced; therefore, as we get faster iterations of FICON such as FICON Express4 and Express8, the number of credits needs to be increased to obtain full link utilization, even in a short-distance environment.
Reference link to the Brocade BB calculator:
http://community.brocade.com/t5/Storage-Networks/Fibre-Channel-Buffer-Credit-calculator-spreadsheet-February-2015/ba-p/70873
Fiber Channel
Time length based buffer credit calculation
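A hedged sketch of the time-length-based formula above. The ~5 µs/km fiber propagation delay, the 1.7 Gbps and 10.2 Gbps effective data rates, and the zero processing time are my own illustrative assumptions, so the results land near, but not exactly on, the chart values shown later.

```python
import math

def frame_tx_time_s(frame_bytes: int, data_rate_gbps: float) -> float:
    """Serialization time of one frame at the given effective data rate."""
    return frame_bytes * 8 / (data_rate_gbps * 1e9)

def time_based_bb_credits(distance_km: float,
                          data_rate_gbps: float,
                          frame_bytes: int = 2148,          # full FC frame incl. headers
                          processing_time_s: float = 0.0,   # assumed negligible here
                          fiber_delay_us_per_km: float = 5.0) -> int:
    """Optimal BB_Credit = (round-trip time + receiving-port processing time) / frame transmission time."""
    round_trip_s = 2 * distance_km * fiber_delay_us_per_km * 1e-6
    return math.ceil((round_trip_s + processing_time_s) /
                     frame_tx_time_s(frame_bytes, data_rate_gbps))

# Full-size frames over 150 km at assumed 2G FC and 10G FC effective data rates.
print(time_based_bb_credits(150, 1.7))    # 149 -> roughly 1 credit per km at 2G FC
print(time_based_bb_credits(150, 10.2))   # 891 (the 10GFC chart below shows ~750 with its own constants)
```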
1. Determine the desired distance in kilometers of the switch-to-switch connection.
2. Determine the speed that you will use for the long-distance connection.
3. Use one of the following formulas to calculate the reserved buffers for distance:
• If QoS is enabled: (Reserved_Buffer_for_Distance_Y) = (X * LinkSpeed / 2) + 6 + 14
• If QoS is not enabled: (Reserved_Buffer_for_Distance_Y) = (X * LinkSpeed / 2) + 6
The formulas use the following parameters: X = The distance determined in step 1 (in km).
LinkSpeed = The speed of the link determined in step 2.
6 = The number of buffer credits reserved for fabric services, multicast, and broadcast traffic. This
number is static.
14 = The number of buffer credits reserved for QoS. This number is static
Fiber Channel
Classic method buffer credit calculation
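The classic formula above transcribes directly into a short helper; the +6 fabric-services and +14 QoS reservations are the static values quoted on the slide.

```python
import math

def classic_bb_credits(distance_km: float, link_speed_gbps: float, qos_enabled: bool) -> int:
    """Reserved buffers for distance: (distance * link_speed / 2) + 6, plus 14 more when QoS is enabled."""
    reserved = distance_km * link_speed_gbps / 2 + 6
    if qos_enabled:
        reserved += 14
    return math.ceil(reserved)

# 150 km at 10G FC with QoS enabled: 150 * 10 / 2 + 6 + 14 = 770 credits,
# which matches the "classic method" value in the 10GFC comparison chart below.
print(classic_bb_credits(150, 10, qos_enabled=True))   # 770
print(classic_bb_credits(150, 10, qos_enabled=False))  # 756
```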
Allocating buffer credits based on average-size frames
In cases where the frame size is average, for example
1,024 bytes, you must allocate twice the buffer credits or
configure twice the distance in the long-distance LS
configuration mode. 1. Use the following formula to
calculate the value for the desired_distance parameter
needed for Fabric OS to determine the number of buffer
credits to allocate:
desired_distance = roundup [(real_estimated_distance *
2112) / average_payload_size]
2. Determine the speed you will use for the long-distance
connection.
Fiber Channel
Brocade SAN switches specific buffer credit formulas (from FOS Admin guide)
| Link speed | data_rate value |
| 1 Gbps | 1.0625 |
| 2 Gbps | 2.125 |
| 4 Gbps | 4.25 |
| 8 Gbps | 8.5 |
| 10 Gbps | 10.625 |
| 16 Gbps | 17 |
3. Look up the data_rate value for the speed of the connection in the table above.
4. Use the following formula to calculate the number of buffer credits to allocate:
buffer_credits = [desired_distance * (data_rate / 2.125)]
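A sketch of the Brocade average-frame-size procedure above, using the data_rate table from the slide; this follows the quoted FOS formulas and is not an official Brocade tool.

```python
import math

# data_rate values for each link speed, from the table above
DATA_RATE = {1: 1.0625, 2: 2.125, 4: 4.25, 8: 8.5, 10: 10.625, 16: 17.0}

def brocade_bb_credits(real_estimated_distance_km: float,
                       average_payload_bytes: int,
                       link_speed_gbps: int) -> int:
    """Steps 1-4: scale the distance for small average frames, then multiply by data_rate / 2.125."""
    desired_distance = math.ceil(real_estimated_distance_km * 2112 / average_payload_bytes)
    data_rate = DATA_RATE[link_speed_gbps]
    return math.floor(desired_distance * (data_rate / 2.125))

# 150 km of 10G FC: full-size (2112-byte) payloads vs. 1024-byte average payloads.
print(brocade_bb_credits(150, 2112, 10))   # 750, matching the 2112-payload row in the chart below
print(brocade_bb_credits(150, 1024, 10))   # 1550 (the chart's 1024-payload row shows 1547, computed without the intermediate round-up)
```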
Fiber Channel
BB Calculation for 10GFC – Classic method versus Timebased method
| Distance (km) | 50 | 60 | 70 | 80 | 90 | 100 | 110 | 120 | 130 | 140 | 150 |
| BB credits – classic method, 10GFC | 270 | 320 | 370 | 420 | 470 | 520 | 570 | 620 | 670 | 720 | 770 |
| BB credits – time-based method, 10GFC | 254 | 303 | 353 | 402 | 452 | 501 | 551 | 600 | 650 | 699 | 749 |
Fiber Channel
BB calculation – frames average-size based method, 10GFC (1024 and 2112 byte payloads)
| Distance (km) | 50 | 60 | 70 | 80 | 90 | 100 | 110 | 120 | 130 | 140 | 150 |
| BB credits – average-size frame method, 10GFC (1024-byte payload) | 516 | 619 | 722 | 825 | 928 | 1031 | 1134 | 1238 | 1341 | 1444 | 1547 |
| BB credits – average-size frame method, 10GFC (2112-byte payload) | 250 | 300 | 350 | 400 | 450 | 500 | 550 | 600 | 650 | 700 | 750 |
 All methods (classic, time-based, and average-frame-size based) give adequate results, which are just about the same. For example, for a 150 km distance a port needs about 749-770 credits.
 Assuming that the frame is a full-size frame, one buffer credit allows a device to send one payload of up to 2,112 bytes (2,148 bytes with headers).
 Assuming that each payload is 2,112 bytes, you need one credit per 1 km of link length at 2 Gbps -> smaller payloads require additional buffer credits to maintain link utilization.
Fiber Channel
Methods comparison
 In this case the DWDM system is not transparent to the SAN; the SAN switches connect to each other through a pair of transponders.
 Credit starvation occurs when the number of available credits reaches zero, preventing further FC frame transmission.
 Once starvation occurs, a timeout is triggered, causing link re-initialization.
 This situation led transponders to implement their own buffer credits and their own flow control (the “long distance” feature) to extend distance. Today this approach is a dead end.
Fiber Channel
Buffer credits on DWDM Transponders
Two new universal service cards replace the existing 2.5G and 10G transponders and muxponders:
• AR-MXP - maximum performance of up to 20 Gbit/s
• AR-XP - maximum capacity of up to 40 Gbit/s
• Supports all types of client interfaces and protocols
• Support for different modes of operation, with the possibility of combining functions
• The option to use two trunk ports to work as a protected transponder or muxponder
• Support for new features - 8G FC, OTU1 Muxing, Auto Sensing, etc.
• Pay-As-You-Grow licensing model that allows you to add the necessary functionality as needed
Cisco Transponders Solutions
Any Rate Muxponder and Any Rate Xponder – AR-MXP
Cisco Transponders Solutions
Any Rate Muxponder and Any Rate Xponder – AR-MXP
• 8x SFP slots and 2x XFP slots
• TDM functionality
• OC3 / STM-1, OC12 / STM4, OC48 / STM16
• Transparent transport of TDM services
• The ability to multiplex between any
ports
• OTN functionality
• Support OTU1 with G.975 FEC on client ports
• The ability to multiplex 4xOTU1 in OTU2
• Support for Standard G.709 FEC, G.975.1 I.4
and I.7 E-FEC on
OTU2 trunk ports
• Ethernet and SAN:
• 8x FE, 8xGE, 8x1G FC, 4x2G FC, 2x4G FC,
• 4x ISC-3 (STP)
• 1 x 8G FC TxP
• 4G FC TxP
• Video
 In this configuration the following transponders are used:
• The 10x10G and Multi-Rate transponders multiplex 10 channels of 10G into an OTU4 through the chassis backplane.
• The 200G transponder uses only its 200G trunk port to transport the OTU4 (line rate - 111.809G, transparent per the G.709 standard).
 Two 10x10G transponders can't be used with the 200G card, because it can't work in “skip slot” mode.
 Many operation modes; the HW is ready to support SAN technologies, with SW support to follow in a future release. Today it can be used to connect DCs using Ethernet technology. Contact your local Cisco CAMSE for details.
Cisco Transponders Solutions
10x10 TXP + 10x10 MR-MXP + 200G TXP (Muxponder mode)
 Recently released: a new 400G transponder with 2 DWDM 200G ports (XFP based), 6 QSFP+ ports and 4 combo QSFP+/QSFP28 ports.
 Many operation modes; the HW is ready to support SAN technologies, with SW support to follow in a future release. Today it can be used to connect DCs using Ethernet technology. Contact your local Cisco CAMSE for details.
 Applications:
– 200 Gbps Muxponder Application (Metro, Long Haul)
– Cisco 100 Gbps SR4 or LR4 2xTransponder Application (Ultra Long Haul)
– 10 Gbps, 40 Gbps Muxponder
– 10 Gbps, 40 Gbps and 100 Gbps Muxponder
– OTN Switching
 A BOM for a pair of transponders is attached.
Cisco Transponders Solutions
400 Gbps XPonder Line Card
Cisco Transport Encryption
NCS 2000 Transport Encryption Portfolio
10G Transport:
5 x 10G Encrypting Transponder
• Five independently encrypted 10G
streams. Multi-protocol support
• Grey (SR, LR, ER, ZR) or DWDM
(fixed or tunable) line side optics
100G / 200G Transport:
Multi-rate Encrypting Muxponder
• 100G CPAK SR/LR client or 10G / 40G
multiplexed payload
• Pairs with coherent DWDM trunk card for
transport over 100G or 200G wavelength
Cisco Transport Encryption
Multiple Simultaneous Operating Modes
Encrypted 10G Transponder
Encrypted 10G Muxponder (10G Muxponder upstream)
Encrypted 10G without DWDM
Unencrypted 10G Transponder
Unencrypted Regenerator
Cisco Transport Encryption
Multi-Rate Muxponder Line Card
• 10G, 40G, and 100G client card
• 2 x 10G SFP+, 2 x 40G QSFP+, and 1 x 100G CPAK ports
• 10G & 40G clients can be aggregated to the backplane or to the
CPAK port
• Clients can be aggregated to 100G or 200G DWDM trunk
• Aggregated client signal can be encrypted (2H 2015)
Diagram: Nx10G, Nx40G, and 100G client(s) feed the Multi-Rate Muxponder, which connects to a 100/200G WDM line card carrying a 100G or 200G wavelength.
Cisco Transport Encryption
200G DWDM Encryption Configurations
Diagrams: two 200G DWDM encryption configurations – a 200G muxponder client with CPAK on the trunk card, and a 200G muxponder client with no CPAK on the trunk card. Components shown: QSFP and SFP+ clients, Multi-Rate MXP, CPAK, and the 200G DWDM trunk.
NCS1k Highlights:
 At 2 RU, the system supports up to 2 Tbps of traffic in a complete DWDM system.
 Linux kernel with the 64-bit IOS XR OS in a Linux Container (LxC).
 The NCS 1002 has 2 redundant, field-replaceable AC & DC power supply units and 3 redundant, field-replaceable fans, plus a controller and SSD disk.
 Each NCS 1002 unit provides 20 QSFP-based client ports and 8 CFP2-ACO based DWDM trunk ports.
 Nine multiplexing configurations.
Cisco NCS 1002
All in one box
2x100G QSFP – 2x100G CFP2
Cisco NCS 1002
All in one box - configurations
5x100G QSFP – 2x250G CFP2
5x100G QSFP – 2x200G CFP2
20x10G QSFP – 2x100G CFP2
 The Cisco NCS 1002 is ideal when there is an urgent need to transport multiplexed, encrypted traffic over long-haul distances across a third-party DWDM optical transport system.
 The third-party DWDM system must support Alien Wavelength transport, or Black Link (ITU-T G.698.2), and multiplexing using the standard ITU 40-channel grid (odd/even) operating at 100 GHz.
 Cisco MSTP software supports third-party IPoDWDM transponders.
Cisco NCS 1002
All in one box – Alien Wavelength
 PSM-based protection has two variants (discussed later):
 PSM Section
 PSM Line
 Channel protection (not covered in this discussion, because the 15454-AD-XC-xx.x= will reach EOL soon)
 Transponder-based protection on the 400G transponder will work in future releases. FC links will NOT re-initialize in case of a shutdown of one trunk port.
Transport Optical Protection
PSM based protection, Splitter protection
 VIAVI Solutions develops the PSM modules for Cisco; VIAVI is an authorized technology partner and distributor of Polatis products.
 Inside the PSM are a 50%/50% splitter and an optical switch.
 In a PSM protection configuration, Raman amplifiers can't be used because the return loss (RL) of the optical switch is too high.
 The PSM adds significant loss to the total optical budget of the line, around 7 dB per link.
 The 400G transponder with PSM and trunk-based protection has not yet been tested.
Transport Optical Protection
PSM Multiplex Section / Line Protection
VIAVI/Polatis
optical switch
Sample 1 – PSM Section Protection, 8x8G FC, 150 km. Price – 900k USD.
Transport Optical Protection
Reference design
Sample 2 – PSM Section Protection, 10x8G FC, 10x10GE, 160 and 250 km. Price – 1,000k USD.
Sample 3 – Client-based protection, 10x8G FC, 10x10GE, dark fiber. Price – varies.
Sample 4 – Client-based protection, 10x8G FC, 10x10GE, 250 km. Price – 1,430k USD.
CTP *.mpz file
CTP *.mpz file
CTP *.mpz file
CTP *.mpz file
Storage networking
Cisco MDS 9700 Overview
Storage networking
Most Modern 9700 Modules
| Part Number | Product Name | No. of Port Groups | No. of Ports per Port Group | Bandwidth per Port Group (Gbps) |
| DS-X9248-256K9 | 48-port 8-Gbps Adv FC module | 8 | 6 | 32.4 (1) / 12.8 (2) |
| DS-X9232-256K9 | 32-port 8-Gbps Adv FC module | 8 | 4 | 32.4 (1) / 12.8 (2) |
| DS-X9248-96K9 | 48-port 8-Gbps FC module | 8 | 6 | 12.8 (3) |
| DS-X9448-768K9 | 48-port 16-Gbps FC module | 12 | 4 | 64 (4) |
(1) MDS 9513 with Fabric 3 module installed
(2) MDS 9506 (all) and MDS 9509 (all), or MDS 9513 with Fabric 2 module installed (more oversubscribed)
(3) MDS 9506 (all), MDS 9509 (all), or MDS 9513 (all)
(4) MDS 9700 (all)
The number of ISLs required between Cisco MDS
switches depends on the desired end-to-end
oversubscription ratio.
Examples of storage, server, and ISL combinations, all
with the same oversubscription ratio of 8:1.
 The first example has one 16-Gbps storage port
with eight 16-Gbps server ports traversing one 16-
Gbps ISL.
 The second example has one 16-Gbps storage port
with sixteen 8-Gbps server ports traversing one 16-
Gbps ISL.
 The third example has eight 16-Gbps storage ports with sixty-four 16-Gbps server ports traversing eight 16-Gbps ISLs.
Storage networking
SAN ISLs Oversubscription Ratio
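A small helper that reproduces the oversubscription arithmetic in the three examples above; treating the ratio as aggregate server bandwidth over aggregate ISL bandwidth is one common convention and an assumption on my part (in these examples the storage and ISL bandwidths are equal, so the result is the same either way).

```python
from fractions import Fraction

def oversubscription(server_ports: int, server_gbps: int,
                     isl_count: int, isl_gbps: int) -> Fraction:
    """Aggregate server bandwidth divided by aggregate ISL bandwidth across the fabric link."""
    return Fraction(server_ports * server_gbps, isl_count * isl_gbps)

# The three slide examples, all sized for an 8:1 ratio:
print(oversubscription(8, 16, 1, 16))    # 8  (eight 16G servers over one 16G ISL)
print(oversubscription(16, 8, 1, 16))    # 8  (sixteen 8G servers over one 16G ISL)
print(oversubscription(64, 16, 8, 16))   # 8  (sixty-four 16G servers over eight 16G ISLs)
```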
Sample: accurately size the SAN switch and modules to connect two fabrics over a 150 km distance with a total throughput of 20x10G FC.
Solution:
― A 150 km long-distance link with 10G FC (2112-byte payload) requires 750 buffer credits per port.
― Each MDS DS-X9448 module provides 4095 buffer credits.
― In our design the DS-X9448 module therefore provides 5 links of 10G FC at the given distance; only about 10% of its port density is utilized.
― The 10G Fiber Channel ports best utilize the OTU2 multiplexing sections of the 200G muxponder.
Storage networking
SAN Buffer credit usage on long haul links - Sample
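A quick check of the sizing sample above, assuming the ~750-credit figure for a 10G FC port at 150 km and a shared pool of 4095 buffer credits per DS-X9448 module, both taken from the slide (treating the pool as freely divisible between ports is my simplification).

```python
import math

CREDITS_PER_MODULE = 4095   # shared buffer credit pool per DS-X9448 module, as quoted on the slide
PORTS_PER_MODULE = 48       # 48-port 16-Gbps module

def long_haul_links_per_module(credits_per_link: int) -> int:
    """How many long-distance links one module can support from its shared credit pool."""
    return CREDITS_PER_MODULE // credits_per_link

def modules_needed(total_links: int, credits_per_link: int) -> int:
    return math.ceil(total_links / long_haul_links_per_module(credits_per_link))

links_per_module = long_haul_links_per_module(750)
print(links_per_module)                      # 5 links of 10G FC at 150 km per module
print(links_per_module / PORTS_PER_MODULE)   # ~0.10 -> only about 10% of the port density used
print(modules_needed(20, 750))               # 4 modules for the 20 x 10G FC requirement
```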
― When the IDC layer connects to the Storage Core, the oversubscription between the servers and the storage of each DC is the same.
― Moving the connection point of the IDC layer to the Storage Edge layer raises the oversubscription between the storage arrays of each DC and decreases the oversubscription between the servers of each DC.
― To overcome ISL oversubscription inconsistency, use ISL QoS zoning for each layer to dedicate bandwidth.
― A separate connection of the IDC SAN switch directly to the storage arrays is acceptable where full isolation and resilience are required.
― A separate IDC decreases the total number of LOGINs per fabric, which in turn raises fabric scalability.
Storage networking
SAN Design Reference
Optical recovery and restoration
Cisco TNCS-O with built-in OTDR function
The TNCS-O card has two XFP modules with the OTDR feature. The OTDR measurement does not affect the OSC channel because it uses a different wavelength.
The OTDR feature in the TNCS-O cards lets you do the following:
― Inspect the transmission on the fiber.
― Identify discontinuities or defects on the fiber.
― Measure the distance (max. 100 km) and magnitude of defects such as insertion loss, reflection loss, and so on.
― Monitor variations in scan values against configured threshold values periodically.
The OSC transmission ranges are:
― Standard range: 12 - 43 dB
― Reduced range: 5 - 30 dB
The Viavi Multiple Application Platform (MAP-200) is an
optical test and measurement platform optimized for cost-
effective development and manufacturing of optical
transmission network elements.
Competitors: Huawei (OSA+OS), ECINOPS (PM+OS)
Application samples:
― Check EDFA amplifiers in case of gain degradation, with BBS, OSA, filter and optical switch.
― Perform granular Raman (RAMAN-CTP, COP, EDRA) calibration with BBS, OSA, filter and optical switch.
― Perform annual CD and PMD measurements with optical switch, BBS, OSA and other modules.
― Perform L1-L4 tests (BERT, TrueSpeed, RFC 6349, RFC 2544 and others) with MTS8k/OTN platforms and an optical switch.
Partnership support ecosystem
VIAVI Solutions – MAP 200
MAP 200 Chassis, OSA, PM, BBS
and other modules
This sample shows the Cisco management suite (CPO) interoperating with the VIAVI measurement system in the case of degradation of an EDFA section in an NCS system. As a result, VIAVI performs advanced troubleshooting (fault detection), decides to disable the circuit, and notifies the OSS system that an RMA is needed.
Benefits of the application:
– A self-measuring, intelligent network
– The ability to make granular network upgrades without field engineers having to participate on site.
Partnership support ecosystem
VIAVI Solutions – MAP 200 – Application: check EDFA
The Xgig Analyzer is recognized as the gold standard for performance insight, with key capabilities:
 FC, FCIP, FCoE, Ethernet analysis
 Redundant link analysis
 Highly accurate timing clock for precise latency measurements
 Simple access to performance metrics such as:
- IOPS by LUN
- Min/Max/Average exchange completion time
- read and write MB/sec by LUN
- frames/sec by LUN
- errors by LUN
 Protocol visibility, including all errors and primitives
 Cascading feature availability
Partnership support ecosystem
VIAVI Solutions – MAP 200 / Xgig / Polatis
The most common application in DCI design is to use an optical switch (VIAVI/Polatis) with the Xgig Protocol Analyzer (VIAVI) to perform manual tests and analysis.
To make this automated and proactive, the application needs to be enhanced with the VIAVI Management Server.
Mgt Infrastructure
and Orchestration
Switching Fabric
Integrated
Compute
Stacks
WAN Edge / DCI
Storage
and Fabric
Extensions
Virtual
Switching
Services &
Containers
Virtual Storage
Volumes
Data Center 1
Cisco
Product
 IP WAN Transport with 10ms RTT across Metro distance
 VMDC 3.0 FabricPath (Typical Design)
 10GE over DWDM NCS 2000, NCS 1002
 NCS 10x10G + 200G Transponders or 400G XPonders
 NetApp MetroCluster Synchronous Storage Replication
 EMC VPLEX Synchronous Storage Replication
 Cisco MDS 9700 Directors for Fiber Channel and FICON
 FC or FICON over DWDM NCS 2000, NCS 1002
 Cisco Prime Optical Management Suite
 Cisco TNCS-O measurement system
 VIAVI Infrastructure measurement systems
VMDC DCI Design Choices in case of
Optical Transport
Partner
Product
Route Optimization
Path Optimization
(LISP/ DNS / Manual )
Layer 2 Extension
(OTV / VPLS / E-VPN)
Stateful Services
(FW / SLB / IPSec / VSG)
VMware & Hyper-V
UCS / Geo-Clusters / Mobility
Distributed Virtual Switch
(Nexus 1000v)
Distributed Virtual Volumes
Container Orchestration
Storage Clusters
MDS Fabric and FCoE
Tenancy and QoS
 Stateful Services between sites require L2 over distance
 ASA 5500 FW Clustering at each site
 Ethernet over FC or FICON over DWDM NCS 2000, NCS 1002
 NCS Any-Rate Transponders, 10x10G Transponders
VMDC DCI Design
Active-Active Metro Design Choices
Optical Transport for Mission Critical Applications
  • 1. Kayukov Valeriy Optical System Engineer - Step Logic April 20, 2016 Cisco Optical Networking – Disaster Recovery Solutions (SAN over DWDM Calculation). Cisco Support Community Expert Series Webcast
  • 2. • VMDC DCI Design • Protocol Comparison • Fiber Channel • Cisco Transponders Solutions • Cisco NCS 1002 • Cisco Transport Encryption • Transport Optical Protection Content of presentation • Storage Networking • Optical Recovery and Restoration • Partnership Support Ecosystem
  • 3. VMDC DCI Design Recovery Point Objective (RPO) and the Recovery Time Objective (RTO) How fast business need recovery?
  • 4. • Two important objectives in the designing process are the Recovery Point Objective (RPO) and the Recovery Time Objective (RTO). • The RPO is the time period between backup points and describes the acceptable age of the data that must be restored after a failure has occurred. For example, if a remote backup occurs every day at midnight and a site failure occurs at 11 pm, changes to data made within the last 23 hours will not be recoverable. • The RTO describes the time needed to recover from the disaster. The RTO determines the acceptable length of time a break in continuity can occur with minimal or no impact to business services. Options for replication generally fall into one of several categories. • A business continuity solution with strict RTO and RPO may require high-speed synchronous or near-synchronous replication between sites as well as application clustering for immediate service recovery. • A medium level Disaster Recovery (DR) solution may require high-speed replication that could be synchronous or asynchronous with an RTO from several minutes to a few hours. Backup of non-critical application data that does not require immediate access after a failure can be accomplished via tape vaulting. Recovery from tape has the greatest RTO. In addition other technologies such as Continuous Data Protection (CDP) can be used to find the appropriate RPO and RTO. VMDC DCI Design Recovery Point Objective (RPO) and the Recovery Time Objective (RTO) Terms
  • 5. • Business units should dictate the anticipated risks in depending on the region's business and territorial coverage of possible disasters. • Coverage of potential risks dictates the need for the appearance of the second data center or appearance of Disaster Recovery Site. • Also coverage determines transport technologies and application architecture of complete solution •VMDC DCI Design reduce CAPEX/OPEX of infrastructure VMDC DCI Design Basic Terminology and architecture of classic DRS Solution Business Continuity Workload Mobility Disaster Recovery Site Migrations Load Balanced Workloads Operations Maintenance Operations Rebalancing Application Clusters
  • 6. Simplify the DCI Design Process for Operations Teams - Interconnecting Cloud Data Centers involves many infrastructure elements and application components. The VMDC DCI validated design significantly reduces risk of implementation using Cisco’s latest product innovations End-to-end Validation of the Application Environment - VMDC DCI delivers validated guidelines across the end-to-end layers of the cloud data center. Competitive offerings only focus on a few elements. VMDC DCI spans different sites, addressing each Application element including WAN connections, tenancy, network containers, distributed virtual switching, and L4-L7 services, hypervisor migration tools, and storage replication. This is a complete DCI solution. Validates 2 of the most used DCI Design Options - VMDC DCI validates the most common design options to achieve 2 major Recovery Point Objective (RPO) and Recovery Time Objective (RTO) targets. The first design option enables the movement of applications, their services, and network containers to support near zero RPO/RTO for the most business critical functions. Less business critical applications can be mapped to a second option to achieve RPO/RTO targets of 15 minutes or more. Minimal Disruption to the Application - VMDC DCI allows operators to preserve IP addresses of moved applications and their services between sites. Reduction in CAPEX/OPEX for DCI Deployments - VMDC DCI helps customers align the correct DCI design to achieve application RPO/RTO targets. The most stringent recovery targets typically require the highest CAPEX/OPEX. VMDC DCI provides a framework to map Applications to different Criticality Levels, and then select the most cost effective design option that meets application requirements. Planned Usage of Recovery Capacity - Recovery capacity at remote sites can be used for other applications during “normal operations” and “reclaimed” as needed during recovery events. This “Reuse-Reclaim” design strategy allows for planned utilization of extra capacity and many-to-one resource sharing, reducing CAPEX/OPEX. Multiple Hypervisors supported - Both VMware and Microsoft Hyper-V environments are supported. DCI Use Cases Validated with Business Applications - VMDC DCI used traditional business applications across each workload migration and business continuity use case. The test applications include Oracle database servers, Microsoft SharePoint and SQL, for single tier and multi-tier test applications. Product Performance Measured across DCI Use Cases - The performance of Cisco products and Partner Products was measured and documented across Metro/Geo environments. Performance limitations, design recommendations, and configurations are provided for Cisco and Partner products. VMDC DCI Value Proposition Simply DCI Deployments, reduce CAPEX/OPEX of design, Reuse-Reclaim Recovery Resources
  • 7. Mgt Infrastructure and Orchestration Switching Fabric Integrated Compute Stacks WAN Edge / DCI Storage and Fabric Extensions Virtual Switching Services & Containers Virtual Storage Volumes Data Center 1 Cisco Product  OTV LAN Extension, Preserve IP Addressing of Applications  IP WAN Transport with 10ms RTT across Metro distance  External Path Re-direction thru routing update and orchestration  Routing re-convergence to new site  Stretched ESX Clusters and Server Affinity  VMware Live vMotion across Metro sites  Distributed vCenter spanning Metro sites  Single and Multi-Tier Application migration strategy  VMDC 3.0 FabricPath (Typical Design) with Multi-Tenancy  Palladium Network Container  Nexus 1000v with VSMs and VEMs across Metro sites  Service and Security Profiles follow Application VMs  Different Nexus 1000v’s mapped to Application Domains as needed  Virtual volumes follow VM  NetApp MetroCluster Synchronous Storage Replication  ONTAP 8.1 Fabric MetroCluster, 160 Km long haul link (DWDM)  FCoE to compute stack, Cisco MDS FC Switching for data replication  Replicate Service Container to new site to support Mobile VM  Virtual Mgt Infrastructure support across Metro VMDC DCI Design Choices Partner Product Route Optimization Path Optimization (LISP/ DNS / Manual ) Layer 2 Extension (OTV / VPLS / E-VPN) Stateful Services (FW / SLB / IPSec / VSG) VMware & Hyper-V UCS / Geo-Clusters / Mobility Distributed Virtual Switch (Nexus 1000v) Distributed Virtual Volumes Container Orchestration Storage Clusters MDS Fabric and FCoE Tenancy and QoS  Stateful Services between sites  Citrix SDX SLB at each site (no Metro extension)  ASA 5500 FW Clustering at each site (no Metro extension) VMDC DCI Design Active-Active Metro Design Choices
  • 8. Synchronous Data Replication guarantees continuous data integrity during the replication process with no extra risk of data loss. The impact on the performance of the application can be significant and is highly dependent on the distance between the sites. Metro Distances (depending on the Application can be 50-200km max) VMDC DCI Design Synchronous Data Replication
  • 9. Asynchronous replication overcomes the performance limitations of synchronous replication, but some data loss must be accepted. The primary factors influencing the amount of data loss are the rate of change (ROC) of data on the primary site, the link speed, and the distance between sites. Unlimited distances. VMDC DCI Design Asynchronous Data Replication
  • 10.  Fiber channel released 32G Standard, soon coming 64G FC.  PCI Special Interest Group roadmap PCIe 4.0 2016-4Q - 16 GT/s.  Primitive flow control commands makes FC stack faster.  Resilient and secure network, dedicated from Internet. Protocol comparison Fiber Channel versus Ethernet  Ethernet 200G have standardized 802.3bs protocol. Coming 400G.  IEEE organization plan to realize P802.3bs 400 Gb/s Standard at 2017-Q4  Lower speed, complex OSI stack makes demand in communication.  Cloud Storage Is Software-Defined, Scale-Out and Not Fiber Channel.  Hyper-Converged Infrastructure (HCI) Is Showing Hyper-Growth  Growth of File and Object Storage. Analyst IDC predicts file and object storage are growing at 24% per year.
  • 11.  Fiber Channel SANs are faster than iSCSI SANs in case of line rate. For sample, line rate of 1G (1.0625) FC higher than 1GE.  Fiber Channel’s state machine use primitive sequence to make flow control. This primitive makes it faster.  FC latency is less than 3500ps, SAS less than 6500 ps. But iSCSI latency should be significantly greater than Fiber Channel, because of TCP latency.  Self-documented due to FC LOGINs and Name Server Registration in fabric. Protocol comparison FCFCoE versus iSCSI  In most cases iSCSI SANs are always less expensive than Fiber Channel SANs.  At first sight, iSCSI SANs simpler to operate vs. Fibre Channel SANs. But in real zoning provisioning simpler than iSCSI’s pointing IQN and IP addreses.  Ethernet use « Cut Throught » (less than 5 ps) and « Store and Forward » (around 20 ps) technologis. Summary: It’s more about what is right for your design and less about technology. FC suitable for isolated low latency network, or in case of IBM System Z.
  • 12. Essentially the same in terms of – Network-centric – Similar management tools – Same multipathing software (for iSCSI as well) – Similar basic port types ▪ N_Ports / F_Ports vs. VN_Ports and VF_Ports ▪ E_Ports vs. VE_Ports – Same scalability limits ▪ Number of domains ▪ Number of N_Ports / VN_Ports ▪ Number of hops Protocol comparison FC versus FCoE
  • 13.  Buffer Credits Define the maximum amount of data that can be sent prior to an acknowledgement  Buffer Credits are physical ASIC port or card memory resources and are finite in number as a function of cost  Within a fabric, each port may have a different number of buffer credits  The number of available buffer credits is communicated at fabric logon (FLOGI)  BB Flow Control works on Link Layer (FC-1), EE Flow Control works on Transport Layer (FC-2) Buffer-to-buffer (BB) credit flow control is implemented to limit the amount of data that a port may send, and is based on the number and size of the frames sent from that port. SAN switches can support two methods of flow control over an ISL:  Virtual Channel (VC_RDY) flow control  Receiver Ready (R_RDY) flow control Fiber Channel Buffer Credit Flow Control
  • 14. VC_RDY flow control differentiates traffic across an ISL:  Algorithm differentiate fabric internal service traffic, and to differentiate different data flows of end-to-end device traffic to avoid head-of-line blocking. Service type of traffic is given a higher priority, then other.  Multiple I/Os are multiplexed over a single ISL by assigning different VCs to different I/Os and giving them the same priority (unless QoS is enabled).  I/O multiplexing gives function balancing the performance of different devices communicating across the ISL. Fiber Channel Virtual Channel (VC_RDY) versus Receiver Ready (R_RDY)  R_RDY flow control, is defined in FC standards and has only a single lane or channel for all frame types.  When switches are configured to use R_RDY flow control, there are other mechanisms to enable QOS and avoid head-of-line blocking problems.
  • 15. Fiber Channel B-to-B and E-to-E Flow Control Domains  Used by Class 1 and Class 2 service between 2 end nodes.  Nodes monitor end to end flow control between themselves, directors do not participate.  End to End flow control is always managed between a specific pair of node ports BB_Credit management occurs between: – One N_Port and one F_Port – Two E_Ports – Two N_Ports in a P2P topology – In Arbitrated Loop different modes When connecting switches across dark fiber or WDM communication links note:  VC_RDY is the preferred method, but there are some distance extension devices that require the E_Port to be configured for R_RDY.  To prefer “buffer-to-buffer credit spoofing” disable on ports buffer-to-buffer state change (BB_SC) number. (read in notes)
  • 16. FC switches track the available BB_Credits in the following manner:  Before any data frames are sent, the transmitter sets a counter equal to the BB_Credit value communicated by its receiver during FLOGI  For each data frame sent by the transmitter, the counter is decremented by one  Upon receipt of a data frame, the receiver sends a status frame (R_RDY) to the transmitter indicating that the data frame was received and the buffer is ready to receive another data frame  For each R_RDY received by the transmitter, the counter is incremented by one Fiber Channel Buffer credit negotiation
  • 17.  To transmit to the same distance, at higher speed, requires more buffer credits.  Full data frame 2148 bytes in size with QOS enabled.  Methods of BB calculation: • Time length based • Classic method • Average based method Fiber Channel Buffer credits to Link speed
  • 18. SRDF synchronous mode - the EMC storage device responds to the host that issued a write operation to the source of the composite group after the EMC storage, which contains the target of the composite group, acknowledges that it has received and checked the data. SRDF asynchronous replication - the EMC storage device provides a consistent point-in-time image on the target of the composite group, which is a short period of time behind the source of the composite group. Asynchronous mode is managed in sessions. Asynchronous mode transfers data in predefined timed cycles or in delta sets to ensure that data at the remote target of the composite group site is in the dependent write consistent state. Fiber Channel EMC specific feature SRDF
  • 19. SiRT feature localize transfer-ready response to local RF port, thereby reducing an unnecessary acknowledgement response trip over SAN and DWDM network, Single RoundTrip (SiRT) feature dynamically enabled for SRDF links/S links with distance more than 12km with blocks up to 32K. SAN switches and DWDM Transponders measure link latency and disable it automatically if connected to these devices. With SRDF recommended to use fast write feature on network devices. There are two modes of SiRT feature – Off and Auto. Auto –accelerate only write commands. Fiber Channel EMC specific feature SRDF (SiRT)
  • 20. The fast-write feature localizes the transfer-ready response between the transponder and the E_Port of the SAN switch. The feature is transparent to the SRDF FC link and applies to all SRDF modes. The transponder's client FC port appears as a phantom target to the initiator. Fiber Channel EMC specific feature SRDF (Fast Write)
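To illustrate why synchronous SRDF benefits from SiRT and fast write, a rough propagation-delay estimate can be made. The ~5 µs/km figure, the handshake counts, and the nominal processing time below are generic assumptions for the sketch, not EMC-published numbers:

```python
# Rough latency model (generic assumptions: ~5 us/km one-way propagation in fiber,
# a nominal remote-array processing time); not EMC-published figures.

def sync_write_penalty_ms(distance_km: float,
                          round_trips: int = 2,
                          remote_processing_ms: float = 0.5) -> float:
    """Added latency per replicated write. round_trips=2 approximates the classic
    transfer-ready handshake; round_trips=1 approximates SiRT / fast write."""
    one_way_ms = distance_km * 0.005
    return round_trips * 2 * one_way_ms + remote_processing_ms

print(sync_write_penalty_ms(150))                   # ~3.5 ms with two round trips
print(sync_write_penalty_ms(150, round_trips=1))    # ~2.0 ms with a single round trip
```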
  • 21. This method uses the frame propagation time in the fiber to calculate a precise number of buffer credits. The optimal number of credits is determined by: • Distance (frame delivery time) • Processing time at the receiving port • Link signaling rate • Size of the frames being transmitted. Optimal # BB_Credit = (Round-trip time + Receiving-port processing time) / Frame transmission time. As the link speed increases, the frame transmission time is reduced; therefore, with faster iterations of FICON such as FICON Express4 and Express8, the number of credits must be increased to obtain full link utilization, even in a short-distance environment. Reference link to the Brocade BB calculator: http://community.brocade.com/t5/Storage-Networks/Fibre-Channel-Buffer-Credit-calculator-spreadsheet-February-2015/ba-p/70873 Fiber Channel Time length based buffer credit calculation
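A minimal sketch of the time-based formula follows. The ~5 µs/km propagation constant and the default port processing time are assumptions for illustration, not values taken from the Brocade spreadsheet:

```python
import math

# Sketch of the time-based method; the ~5 us/km propagation constant and the
# default port processing time are assumptions, not Brocade spreadsheet values.

def time_based_bb_credits(distance_km: float,
                          line_rate_gbps: float,
                          frame_size_bytes: int = 2148,
                          port_processing_us: float = 2.0) -> int:
    round_trip_us = 2 * distance_km * 5.0                          # ~5 us per km, each way
    frame_tx_us = frame_size_bytes * 8 / (line_rate_gbps * 1000)   # frame serialization time
    return math.ceil((round_trip_us + port_processing_us) / frame_tx_us)

# 150 km of 10GFC with full-size frames: roughly 900+ credits with these constants;
# the spreadsheet charted on the following slides lands near 749 with its own assumptions.
print(time_based_bb_credits(150, 10.51875))
```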
  • 22. 1. Determine the desired distance in kilometers of the switch-to-switch connection. 2. Determine the speed that you will use for the long-distance connection. 3. Use one of the following formulas to calculate the reserved buffers for distance: • If QoS is enabled: Reserved_Buffer_for_Distance_Y = (X * LinkSpeed / 2) + 6 + 14 • If QoS is not enabled: Reserved_Buffer_for_Distance_Y = (X * LinkSpeed / 2) + 6 The formulas use the following parameters: X = the distance determined in step 1 (in km). LinkSpeed = the speed of the link determined in step 2 (in Gbps). 6 = the number of buffer credits reserved for fabric services, multicast, and broadcast traffic (static). 14 = the number of buffer credits reserved for QoS (static). Fiber Channel Classic method buffer credit calculation
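The classic formulas transcribe directly into code; this sketch simply evaluates them:

```python
import math

# Direct transcription of the classic formulas above (a sketch, not vendor code).

def classic_bb_credits(distance_km: float, link_speed_gbps: float,
                       qos_enabled: bool = False) -> int:
    reserved = math.ceil(distance_km * link_speed_gbps / 2) + 6    # 6 credits for fabric services
    if qos_enabled:
        reserved += 14                                             # 14 extra credits for QoS
    return reserved

print(classic_bb_credits(150, 10))                      # 756 without QoS
print(classic_bb_credits(150, 10, qos_enabled=True))    # 770 - matches the 10GFC chart value
```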
  • 23. Allocating buffer credits based on average-size frames. When the average frame size is smaller than full size, for example 1,024 bytes, you must allocate twice the buffer credits or configure twice the distance in long-distance (LS) configuration mode. 1. Use the following formula to calculate the desired_distance parameter that Fabric OS needs to determine the number of buffer credits to allocate: desired_distance = roundup[(real_estimated_distance * 2112) / average_payload_size]. 2. Determine the speed you will use for the long-distance connection. 3. Look up the data_rate value for that speed in the gigabit-value table: 1 Gbps → 1.0625; 2 Gbps → 2.125; 4 Gbps → 4.25; 8 Gbps → 8.5; 10 Gbps → 10.625; 16 Gbps → 17. 4. Use the following formula to calculate the number of buffer credits to allocate: buffer_credits = desired_distance * (data_rate / 2.125). Fiber Channel Brocade SAN switches specific buffer credit formulas (from FOS Admin guide)
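A sketch of the Fabric OS average-frame-size procedure quoted above, using the gigabit-value table from this slide:

```python
import math

# Sketch of the Fabric OS average-frame-size procedure quoted above,
# using the gigabit-value table from this slide.

GIGABIT_VALUE = {1: 1.0625, 2: 2.125, 4: 4.25, 8: 8.5, 10: 10.625, 16: 17.0}

def average_based_bb_credits(real_distance_km: float, speed_gbps: int,
                             average_payload_bytes: int = 2112) -> int:
    desired_distance = math.ceil(real_distance_km * 2112 / average_payload_bytes)
    data_rate = GIGABIT_VALUE[speed_gbps]
    return math.ceil(desired_distance * (data_rate / 2.125))

print(average_based_bb_credits(150, 10))                               # 750 with full-size frames
print(average_based_bb_credits(150, 10, average_payload_bytes=1024))   # 1550 (the chart, rounding only once, shows 1547)
```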
  • 24. Fiber Channel BB Calculation for 10GFC – Classic method versus Time-based method. Credits required per distance (km → classic / time-based): 50 → 270 / 254; 60 → 320 / 303; 70 → 370 / 353; 80 → 420 / 402; 90 → 470 / 452; 100 → 520 / 501; 110 → 570 / 551; 120 → 620 / 600; 130 → 670 / 650; 140 → 720 / 699; 150 → 770 / 749.
  • 25. Fiber Channel BB calculation – frames average-size based method, 10GFC. Credits required per distance (km → 1024-byte payload / 2112-byte payload): 50 → 516 / 250; 60 → 619 / 300; 70 → 722 / 350; 80 → 825 / 400; 90 → 928 / 450; 100 → 1031 / 500; 110 → 1134 / 550; 120 → 1238 / 600; 130 → 1341 / 650; 140 → 1444 / 700; 150 → 1547 / 750.
  • 26. • All three methods (classic, time-based, and average-frame-size based) give adequate and broadly similar results; for example, for a 150 km distance a port needs roughly 749–770 credits. • Assuming full-size frames, one buffer credit allows a device to send one payload of up to 2,112 bytes (2,148 bytes with headers). • With 2,112-byte payloads you need about one credit per km of link length at 2 Gbps; smaller payloads require additional buffer credits to maintain link utilization. Fiber channel Methods comparison
  • 27. • In this case the DWDM layer is not transparent to the SAN: the SAN switches connect to each other through a pair of transponders. • Credit starvation occurs when the number of available credits reaches zero, which prevents any further FC frame transmission. • Once starvation persists, a timeout is triggered and the link re-initializes. • This situation led transponder vendors to implement their own buffer credits and flow control ("long distance" features) to extend reach; today this approach is a dead end. Fiber Channel Buffer credits on DWDM Transponders
  • 28. Two new universal service cards replace the existing 2.5G and 10G transponders and muxponders: • AR-MXP – maximum performance up to 20 Gbit/s • AR-XP – maximum capacity up to 40 Gbit/s • Support for all types of client interfaces and protocols • Support for different operating modes, with the possibility of combining functions • Two trunk ports can be used to operate as a protected transponder or muxponder • Support for new features – 8G FC, OTU1 muxing, auto-sensing, etc. • Pay-as-you-grow licensing model that allows you to add functionality as needed Cisco Transponders Solutions Any Rate Muxponder and Any Rate Xponder – AR-MXP
  • 29. Cisco Transponders Solutions Any Rate Muxponder and Any Rate Xponder – AR-MXP • 8x SFP slots and 2x XFP slots • TDM functionality • OC3/STM-1, OC12/STM-4, OC48/STM-16 • Transparent transport of TDM services • The ability to multiplex between any ports • OTN functionality • Support for OTU1 with G.975 FEC on client ports • The ability to multiplex 4x OTU1 into OTU2 • Support for standard G.709 FEC and G.975.1 I.4 and I.7 E-FEC on OTU2 trunk ports • Ethernet and SAN: • 8x FE, 8x GE, 8x 1G FC, 4x 2G FC, 2x 4G FC, • 4x ISC-3 (STP) • 1x 8G FC TxP • 4G FC TxP • Video
  • 30. • In this configuration the following transponders are used: • 10x10G and Multi-Rate transponders multiplex ten 10G channels into OTU4 through the chassis backplane. • The 200G transponder uses only its 200G trunk port to transport the OTU4 (line rate 111.809G, transparent G.709 standard). • Two 10x10G transponders cannot be used with the 200G card, because it cannot operate in "skip slot" mode. • Many operating modes; the hardware is ready to support SAN technologies, and software support will come in a future release. Today it can be used to connect data centers over Ethernet. Contact your local Cisco CAMSE for details. Cisco Transponders Solutions 10x10 TXP + 10x10 MR-MXP + 200G TXP (Muxponder mode)
  • 31. • Recently released 400G transponder with 2x 200G DWDM trunk ports, 6 QSFP+ ports, and 4 combo QSFP+/QSFP28 ports. • Many operating modes; the hardware is ready to support SAN technologies, and software support will come in a future release. Today it can be used to connect data centers over Ethernet. Contact your local Cisco CAMSE for details. • Applications: – 200 Gbps muxponder application (Metro, Long Haul) – Cisco 100 Gbps SR4 or LR4 2x transponder application (Ultra Long Haul) – 10 Gbps, 40 Gbps muxponder – 10 Gbps, 40 Gbps and 100 Gbps muxponder – OTN switching • A BOM for a pair of transponders is attached. Cisco Transponders Solutions 400 Gbps XPonder Line Card
  • 32. Cisco Transport Encryption NCS 2000 Transport Encryption Portfolio 10G Transport: 5 x 10G Encrypting Transponder • Five independently encrypted 10G streams. Multi-protocol support • Grey (SR, LR, ER, ZR) or DWDM (fixed or tunable) line side optics 100G / 200G Transport: Multi-rate Encrypting Muxponder • 100G CPAK SR/LR client or 10G / 40G multiplexed payload • Pairs with coherent DWDM trunk card for transport over 100G or 200G wavelength
  • 33. Cisco Transport Encryption Multiple Simultaneous Operating Modes: • Encrypted 10G Transponder • Encrypted 10G Muxponder (10G Muxponder upstream) • Encrypted 10G without DWDM • Unencrypted 10G Transponder • Unencrypted Regenerator
  • 34. Cisco Transport Encryption Multi-Rate Muxponder Line Card • 10G, 40G, and 100G client card • 2x 10G SFP+, 2x 40G QSFP+, and 1x 100G CPAK ports • 10G and 40G clients can be aggregated to the backplane or to the CPAK port • Clients can be aggregated to a 100G or 200G DWDM trunk • The aggregated client signal can be encrypted (2H 2015) [Diagram: Nx10G / Nx40G / 100G clients → Multi-Rate Muxponder → 100/200G WDM line card → 100G or 200G wavelength]
  • 35. Cisco Transport Encryption 200G DWDM Encryption Configurations [Diagram: QSFP / SFP+ / CPAK clients on the Multi-Rate MXP and DWDM trunk card – 200G muxponder client with CPAK on the trunk card, and 200G muxponder client with no CPAK on the trunk card; both deliver a 200G wavelength]
  • 36. NCS1k highlights: • At 2 RU, the system supports up to 2 Tbps of traffic in a complete DWDM system. • Linux kernel with 64-bit IOS XR running in a Linux container (LxC). • The NCS 1002 has 2 redundant, field-replaceable AC or DC power supply units and 3 redundant, field-replaceable fans, plus a controller and SSD disk. • Each NCS 1002 unit provides 20 QSFP-based client ports and 8 CFP2-ACO-based DWDM trunk ports. • Nine multiplexing configurations are supported. Cisco NCS 1002 All in one box
  • 37. Cisco NCS 1002 All in one box – configurations: • 2x100G QSFP – 2x100G CFP2 • 5x100G QSFP – 2x250G CFP2 • 5x100G QSFP – 2x200G CFP2 • 20x10G QSFP – 2x100G CFP2
  • 38. • The Cisco NCS 1002 is ideally suited to cases where multiplexed, encrypted traffic must urgently be transported over long-haul distances across a 3rd-party DWDM optical transport system. • The 3rd-party DWDM system must support alien-wavelength transport, or Black Link (ITU G.698.2), and multiplexing on the standard ITU 40-channel grid (odd/even) at 100 GHz spacing. • Cisco MSTP software supports 3rd-party IPoDWDM transponders. Cisco NCS 1002 All in one box – Alien Wavelength
  • 39. • PSM-based protection has two variants (discussed later): • PSM Section • PSM Line • Channel protection is not covered in this discussion, because the 15454-AD-XC-xx.x= will soon be EOL. • Transponder-based protection on the 400G transponder will work in future releases; FC links will NOT re-initialize when one trunk port is shut down. Transport Optical Protection PSM based protection, Splitter protection
  • 40. • VIAVI Solutions develops the PSM modules for Cisco; VIAVI is an authorized technology partner and distributor of Polatis products. • Inside the PSM are a 50%/50% splitter and an optical switch. • In a PSM protection configuration Raman amplifiers cannot be used, because the return loss (RL) of the optical switch is too high. • The PSM adds significant loss to the total optical budget of the line, around 7 dB per link. • The 400G transponder with PSM and trunk-based protection has not yet been tested. Transport Optical Protection PSM Multiplex Section Line Protection VIAVI/Polatis optical switch
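A back-of-the-envelope span budget shows how the ~7 dB PSM penalty eats into the optical budget. The 0.25 dB/km attenuation and the connector loss below are generic planning assumptions, not Cisco CTP outputs:

```python
# Back-of-the-envelope span budget; the 0.25 dB/km attenuation and 1 dB of
# connector loss are generic planning assumptions, not Cisco CTP outputs.

def span_loss_db(distance_km: float,
                 fiber_db_per_km: float = 0.25,
                 psm_penalty_db: float = 7.0,
                 connectors_db: float = 1.0) -> float:
    return distance_km * fiber_db_per_km + psm_penalty_db + connectors_db

print(span_loss_db(150))                        # ~45.5 dB with the ~7 dB PSM penalty
print(span_loss_db(150, psm_penalty_db=0.0))    # ~38.5 dB for the same span without PSM
```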
  • 41. Transport Optical Protection Reference designs (a CTP *.mpz file is attached for each sample): • Sample 1 – PSM Section protection, 8x 8G FC, 150 km. Price – 900k USD. • Sample 2 – PSM Section protection, 10x 8G FC, 10x 10GE, 160 and 250 km. Price – 1,000k USD. • Sample 3 – Client-based protection, 10x 8G FC, 10x 10GE, dark fiber. Price – varies. • Sample 4 – Client-based protection, 10x 8G FC, 10x 10GE, 250 km. Price – 1,430k USD.
  • 43. Storage networking Most Modern 9700 Modules
  Part Number     | Product Name                  | Port Groups | Ports per Group | Bandwidth per Port Group (Gbps)
  DS-X9248-256K9  | 48-port 8-Gbps Adv FC module  | 8           | 6               | 32.4 (1) / 12.8 (2)
  DS-X9232-256K9  | 32-port 8-Gbps Adv FC module  | 8           | 4               | 32.4 (1) / 12.8 (2)
  DS-X9248-96K9   | 48-port 8-Gbps FC module      | 8           | 6               | 12.8 (3)
  DS-X9448-768K9  | 48-port 16-Gbps FC module     | 12          | 4               | 64 (4)
  (1) MDS 9513 with Fabric-3 module installed. (2) MDS 9506 (all) and MDS 9509 (all), or MDS 9513 with Fabric-2 module (more oversubscribed). (3) MDS 9506 (all), MDS 9509 (all), or MDS 9513 (all). (4) MDS 9700 (all).
  • 44. The number of ISLs required between Cisco MDS switches depends on the desired end-to-end oversubscription ratio. Examples of storage, server, and ISL combinations, all with the same oversubscription ratio of 8:1: • The first example has one 16-Gbps storage port and eight 16-Gbps server ports traversing one 16-Gbps ISL. • The second example has one 16-Gbps storage port and sixteen 8-Gbps server ports traversing one 16-Gbps ISL. • The third example has eight 16-Gbps storage ports and sixty-four 16-Gbps server ports traversing eight 16-Gbps ISLs. Storage networking SAN ISLs Oversubscription Ratio
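The 8:1 figures can be verified with a one-line ratio of aggregate server bandwidth to aggregate ISL bandwidth (a hypothetical helper, not a Cisco tool):

```python
# Hypothetical helper (not a Cisco tool) to verify the 8:1 figures above:
# oversubscription = aggregate server bandwidth / aggregate ISL bandwidth.

def oversubscription(server_ports: int, server_gbps: float,
                     isl_count: int, isl_gbps: float) -> float:
    return (server_ports * server_gbps) / (isl_count * isl_gbps)

print(oversubscription(8, 16, 1, 16))     # 8.0 -> first example
print(oversubscription(16, 8, 1, 16))     # 8.0 -> second example
print(oversubscription(64, 16, 8, 16))    # 8.0 -> third example
```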
  • 45. Sample: accurately size the SAN switches and modules to connect two fabrics over a 150 km distance with a total throughput of 20x 10G FC. Solution: ― A 150 km long-distance 10G FC link (2112-byte payload) requires about 750 buffer credits per port. ― Each DS-X9448 module provides 4095 buffer credits. ― In this design one module can therefore credit 5 long-distance 10G FC links, so only about 10% of its port density is utilized. ― The 10G Fiber Channel ports are best carried in the OTU2 multiplexing sections of the 200G muxponder. Storage networking SAN Buffer credit usage on long haul links – Sample
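The sizing above reduces to dividing the module's credit pool by the per-port requirement (the figures are the ones quoted on this slide):

```python
# The sizing exercise above as arithmetic; figures are the ones quoted on this slide.

MODULE_CREDIT_POOL = 4095     # buffer credits available on one DS-X9448 module
CREDITS_PER_PORT = 750        # 150 km of 10G FC with full-size frames
PORTS_PER_MODULE = 48

long_haul_links = MODULE_CREDIT_POOL // CREDITS_PER_PORT
print(long_haul_links)                                                   # 5 long-distance links per module
print(f"{long_haul_links / PORTS_PER_MODULE:.0%} of port density used")  # ~10%
```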
  • 46. ― When the IDC layer connects to the storage core, the oversubscription between the servers and the storage of each DC stays the same. ― Moving the IDC layer's connection point to the storage edge layer raises the oversubscription between the storage arrays of the two DCs and decreases the oversubscription between the servers of each DC. ― To overcome this ISL oversubscription inconsistency, use ISL QoS zoning per layer to dedicate bandwidth. ― A separate connection of the IDC SAN switch directly to the storage arrays is acceptable where full isolation and resilience are required. ― A separate IDC fabric decreases the total number of logins per fabric, which in turn improves fabric scalability. Storage networking SAN Design Reference
  • 47. Optical recovery and restoration Cisco TNCS-O with built-in OTDR function. The TNCS-O card has two XFP modules with the OTDR feature. The OTDR measurement does not affect the OSC channel because it uses a different wavelength. The OTDR feature in the TNCS-O cards lets you do the following: ― Inspect the transmission on the fiber. ― Identify discontinuities or defects on the fiber. ― Measure the distance (max. 100 km) and magnitude of defects such as insertion loss, reflection loss, and so on. ― Monitor variations in scan values against configured threshold values periodically. The OSC transmission ranges are: ― Standard range: 12 to 43 dB ― Reduced range: 5 to 30 dB
  • 48. The VIAVI Multiple Application Platform (MAP-200) is an optical test and measurement platform optimized for cost-effective development and manufacturing of optical transmission network elements. Competitors: Huawei (OSA+OS), ECINOPS (PM+OS). Application samples: ― Check EDFA amplifiers for gain degradation with a BBS, OSA, filter, and optical switch. ― Perform granular Raman (RAMAN-CTP, COP, EDRA) calibration with a BBS, OSA, filter, and optical switch. ― Perform annual CD and PMD measurements with an optical switch, BBS, OSA, and other modules. ― Perform L1–L4 tests (BERT, TrueSpeed, RFC 6349, RFC 2544, and others) with MTS8k/OTN platforms and an optical switch. Partnership support ecosystem VIAVI Solutions – MAP 200 MAP 200 chassis, OSA, PM, BBS and other modules
  • 49. This sample shows the Cisco management suite (CPO) interoperating with the VIAVI measurement system when an EDFA section in the NCS system degrades. As a result, VIAVI performs the advanced troubleshooting (fault detection), decides to disable the circuit, and notifies the OSS system that an RMA is required. Benefits of the application: – A self-measuring, intelligent network – The ability to perform granular network upgrades without field engineers having to participate on site. Partnership support ecosystem VIAVI Solutions – MAP 200 – Application check EDFA
  • 50. The Xgig Analyzer is recognized as the gold standard for performance insight, with key capabilities: • FC, FCIP, FCoE, and Ethernet analysis • Redundant link analysis • Highly accurate timing clock for precise latency measurements • Simple access to performance metrics such as: - IOPS by LUN - min/max/average exchange completion time - read and write MB/s by LUN - frames/s by LUN - errors by LUN • Protocol visibility, including all errors and primitives • Cascading feature availability Partnership support ecosystem VIAVI Solutions – MAP 200/Xgig/Polatis. The most common application in DCI design is to use the optical switch (VIAVI-Polatis) with the Xgig Protocol Analyzer (VIAVI) to perform manual tests and analysis. To make this automated and proactive, the application must be enhanced with the VIAVI Management Server.
  • 51. VMDC DCI Design Choices for Optical Transport (Active-Active Metro Design). [Diagram: data-center layers – management infrastructure and orchestration, switching fabric, integrated compute stacks, WAN edge / DCI, storage and fabric extensions, virtual switching, services and containers, virtual storage volumes – with Cisco and partner products mapped to each layer, including route/path optimization (LISP / DNS / manual), Layer 2 extension (OTV / VPLS / E-VPN), stateful services (FW / SLB / IPSec / VSG), VMware & Hyper-V, UCS / geo-clusters / mobility, distributed virtual switch (Nexus 1000v), distributed virtual volumes, container orchestration, storage clusters, MDS fabric and FCoE, tenancy and QoS.] • IP WAN transport with 10 ms RTT across metro distance • VMDC 3.0 FabricPath (typical design) • 10GE over DWDM NCS 2000, NCS 1002 • NCS 10x10G + 200G transponders or 400G XPonders • NetApp MetroCluster synchronous storage replication • EMC VPLEX synchronous storage replication • Cisco MDS 9700 Directors for Fiber Channel and FICON • FC or FICON over DWDM NCS 2000, NCS 1002 • Cisco Prime Optical management suite • Cisco TNCS-O measurement system • VIAVI infrastructure measurement systems • Stateful services between sites require L2 over distance • ASA 5500 FW clustering at each site • Ethernet, FC, or FICON over DWDM NCS 2000, NCS 1002 • NCS Any-Rate transponders, 10x10G transponders

Editor's Notes

  1. When synchronous replication is used, an update made to a data volume at the primary site is synchronously replicated to a data volume at a secondary site. This guarantees that the secondary site has an identical copy of the data at all times. The disadvantage of synchronous replication is that a write I/O operation acknowledgement is sent to the application only after the write I/O operation is acknowledged by the storage subsystem at both the primary and the secondary site. Before responding to the application, the storage subsystem must wait for the secondary subsystem I/O process to complete, resulting in an increased response time to the application. Thus, performance with synchronous replication is highly impacted by factors such as link latency and link bandwidth. Deployment is only practical when the secondary site is located close to the primary site. When evaluating the use of synchronous replication, an important consideration is the behavior of the storage subsystem when the connection between the primary and secondary subsystem is temporarily disrupted. For more details, see the “Recovery Point Objective” section below. Synchronous replication does not provide protection against data corruption and loss of data due to human errors. Snapshot technology must be used with synchronous replication to provide full protection against both loss of access to data (protected by replication) and loss of data due to data corruption (protected by creating snapshots).
  2. In an asynchronous mode of operation, I/O operations are written to the primary storage system and then sent to one or more remote storage systems at some later point in time. Due to the time lag, data on the remote systems is not always an exact mirror of the data at the primary site. This mode is ideal for disk-to-disk backup or taking a snapshot of data for offline processes, such as testing or business planning. The time lag enables data to be replicated over lower-bandwidth networks, but it does not provide the same level of protection as synchronous replication. Asynchronous replication is less sensitive to distance and link transmission speeds. However, because replication might be delayed, data can be lost if a communication failure occurs followed by a primary site outage. Considerations when sizing link requirements and configuring the storage subsystem include the following: • Incoming ROC (for more information about ROC, see the “Rate of Change” section below) • Speed of the replication process in the storage subsystem (how quickly data can be pushed out to the replication link) • Line speed and line latency • Size of the replication buffer (queue space) in the storage subsystem; the buffer has to be large enough to cope with peak ROCs
  3. SAN switches can support two methods of flow control over an ISL: Virtual Channel (VC_RDY) and Receiver Ready (R_RDY) flow control. VC_RDY is the default method and uses multiple lanes or channels, each with different buffer credit allocations, to prioritize traffic types and prevent head-of-line blocking.
  4. VC_RDY flow control differentiates traffic across an ISL. It serves two main purposes: to differentiate fabric internal traffic from end-to-end device traffic, and to differentiate different data flows of end-to-end device traffic to avoid head-of-line blocking. Fabric internal traffic is generated by switches that communicate with each other to exchange state information (such as link state information for routing and device information for Name Service). This type of traffic is given a higher priority so that switches can distribute the most up-to-date information across the fabric even under heavy device traffic. Additionally, multiple I/Os are multiplexed over a single ISL by assigning different VCs to different I/Os and giving them the same priority (unless QoS is enabled). Each I/O can have a fair share of the bandwidth, so that a large-size I/O will not consume the whole bandwidth and starve a small-size I/O, thus balancing the performance of different devices communicating across the ISL. When connecting switches across dark fiber or WDM communication links, VC_RDY is the preferred method, but there are some distance extension devices that require the E_Port to be configured for R_RDY. In order to configure R_RDY flow control on B-Series switches, use the portCfgISLMode command.
  5. When connecting switches across dark fiber or WDM communication links, VC_RDY is the preferred method, but there are some distance extension devices that require the E_Port to be configured for R_RDY. The BB_SC_N field (word 1, bits 15-12) specifies the buffer-to-buffer state change (BB_SC) number. The BB_SC_N field indicates that the sender of the port login (PLOGI), fabric login (FLOGI), or ISL (E or TE port) frame is requesting 2^BB_SC_N frames to be sent between two consecutive BB_SC send primitives, and twice that number of R_RDY primitives to be sent between two consecutive BB_SC receive primitives. This can fail the ISLs if used with optical equipment using distance extension (DE), also known as buffer-to-buffer credit spoofing.
  6. Refer to Fibre Channel gigabit values reference definition on page 127 for an approximation of the calculated number of buffer credits.