Build the Optimal Mainframe Storage Architecture With
Hitachi Data Systems and Brocade
Why Choose an IBM® FICON® Switched Network?
WHITEPAPER
By Bill Martin, Hitachi Data Systems
Stephen Guendert, PhD, Brocade
March 2013
Contents
Executive Summary
Introduction
Why Networked FICON Storage Is Better Than Direct-Attached Storage
Hitachi Virtual Storage Platform
Why Brocade Gen5 DCX 8510 Is the Best FICON Director
An Ideal Pairing: Hitachi Virtual Storage Platform and Brocade Gen5 DCX 8510
Why IT Should Choose Networked Storage for FICON Over Direct-Attached Storage
Technical Reasons for a Switched FICON Architecture
Business Reasons for a Switched FICON Architecture
Why Switched FICON: Summary
Hitachi Virtual Storage Platform
Scalability (3-D Scaling: Out, Up, Deep)
Performance
IBM 3390 and FICON Support
Hitachi Dynamic Provisioning
Hitachi Dynamic Tiering
Hitachi Remote Replication
Multiplatform Support
Cost-Savings Efficiencies
Brocade Gen5 DCX 8510 in Mainframe Environments
Reliability, Availability and Serviceability
Scalability
Pair the Two Platforms Together
Linux on the Mainframe
FICON and FCP Intermix
Private Cloud
Conclusion
Executive Summary
The IBM® System z® and newer zEnterprise® systems, or, in other words, mainframes, continue to be a critical foundation in
the IT infrastructure of many large companies today. An important element of the mainframe environment is the disk
storage system (subsystem) that is connected to the mainframe via channels. The overall reliability, availability and
performance of mainframe-based applications depend on this storage system.
The performance demands, capacity, reliability, flexibility, efficiency and cost-effectiveness of the storage system are
important aspects of any storage acquisition and configuration decision. The increasing demands for improved per-
formance, in other words, throughput (IOPS) and response time, make this storage system a critical element of the IT
infrastructure. Another key factor in configuring the storage system is the decision of how it should be connected to
the mainframe channels: direct-attached or through a switched IBM FICON®
network. This decision impacts the flex-
ibility, reliability and availability of the storage infrastructure and the efficiency of the storage administrators.
Hitachi Virtual Storage Platform (VSP) is an enterprise-class storage system that provides a comprehensive set of
storage and data services. These provide mainframe users with a cost-effective, highly reliable and available stor-
age platform that delivers outstanding performance, capacity and scalability. VSP supports the operating systems
used with IBM zEnterprise processors: z/OS®, z/VSE®, z/VM®, and Linux on System z. This industry-leading storage
system provides IBM 3390 disk drive support across a variety of disk drive types to meet the variety of performance
and capacity needs of mainframe environments. The platform provides an internal physical disk capacity of approxi-
mately 2.5PB per storage system. With externally attached storage, the VSP can support up to 255PB of storage
capacity. It supports 8Gb/sec FICON across all front-end ports for connectivity to the mainframe and 8Gb/sec Fibre
Channel for connecting external storage.
Using a FICON network configured with a switch or director to connect a storage system to the mainframe channels
can significantly enhance reliability, flexibility and availability of storage systems. At the same time, it can maximize
storage performance and throughput. A switched FICON network allows the implementation of a fan-in, fan-out con-
figuration, which allows maximum resource utilization and simultaneously helps localize failures, improving availability.
The Brocade Gen5 DCX 8510 is a backbone-class FICON or Fibre Channel director. The Brocade Gen5 DCX 8510
family of FICON directors provides the industry’s most powerful switching infrastructure for modern mainframe envi-
ronments. It provides the most reliable, scalable, efficient, cost-effective, high-performance foundation for today’s
highly virtualized mainframe environments. The Brocade Gen5 DCX 8510 builds upon years of innovation and experi-
ence and leverages the core technology of Brocade systems, providing over 99.999% uptime in the world’s most
demanding data centers. The Gen5 DCX 8510 supports the operating systems used with zEnterprise processors:
z/OS and z/OS.e, z/VSE, z/VM, Linux on System z, and zTPF for System z. This industry-leading FICON director sup-
ports 2, 4, 8, 10, and 16Gb/sec Fibre Channel links, FICON I/O traffic, and 1 gigabit Ethernet (GbE) or 10GbE links
on Fibre Channel over IP (FCIP) while providing 8.2Tb/sec chassis bandwidth.
The combination of switched FICON connectivity with Hitachi VSP connected to mainframe channels through a
Brocade Gen5 DCX 8510 Director provides a powerful, flexible and highly available solution. Together, they support
the storage features, performance and capacity needed for today’s mainframe environments.
Introduction
This paper explores both technical and business reasons for implementing a switched FICON architecture instead of
a direct-attached storage FICON architecture. It also explains why Hitachi Virtual Storage Platform and the Brocade
FICON Director together provide an outstanding, industry-leading solution for FICON environments.
With the many enhancements and improvements in mainframe I/O technology in the past 5 years, the question “Do I
need FICON switching technology, or should I go with direct-attached storage?” is frequently asked. With up to 320
FICON Express8S channels supported on an IBM zEnterprise z114, z196 and zEC12, why not just direct-attach the
control units? The short answer is that with all of the I/O improvements, switching technology is needed — now more
than ever. In fact, there are more reasons to use switched FICON than there were to use switched ESCON. Some of
these reasons are purely technical; others are more business-related.
Why Networked FICON Storage Is Better Than Direct-Attached Storage
The raw bandwidth of FICON Express8S running on IBM zEnterprise Systems is 40 times greater than the capabilities
of IBM ESCON®. The raw I/Os per second (IOPS) capacity of FICON Express8S channels is even more impressive,
particularly when a channel program utilizes the z High Performance FICON (zHPF) protocol. To utilize these tremen-
dous improvements, the FICON protocol is packet-switched and, unlike ESCON, capable of having multiple I/Os
occupy the same channel simultaneously.
FICON Express8S channels on zEnterprise processors can have up to 64 concurrent I/Os (open exchanges) to dif-
ferent devices. FICON Express8S channels running zHPF can have up to 750 concurrent I/Os on the zEnterprise
processor family. Only when a director or switch is used between the host and storage device can the true perfor-
mance potential inherent in these channel bandwidth and I/O processing gains be fully exploited.
Hitachi Virtual Storage Platform
Hitachi Virtual Storage Platform, with its vast functionality and throughput capability, is ideal for IBM mainframe
environments and provides a comprehensive set of storage and data services. The flexibility in configuring and par-
titioning VSP makes it ideal for mainframe environments, with multiple LPARs running multiple operating system
images in the same SYSPLEX.
The packaging, enhanced features and improved manageability of VSP provide mainframe users with a cost-
effective, highly reliable and available storage platform that delivers outstanding performance, capacity and scalability.
The storage platform easily supports both mainframe and open systems environments. For mainframe environments,
it supports z/OS, z/VSE and z/VM. Additionally, with many organizations considering the benefits of or running Linux
on IBM zEnterprise processors, VSP supports this capability for both CKD and FBA disk formats.
With support for FICON Express8S and the support of 2Gb, 4Gb and 8Gb FICON and 2Gb, 4Gb and 8Gb Fibre
Channel connectivity, this platform delivers industry-leading I/O performance. A VSP can have up to 24 front-end
directors with a total of 176 FICON ports. Each port can support more IOPS than a single zEnterprise FICON
Express8 channel can deliver. As a result, it is ideally suited for connectivity to the mainframe through a switched
FICON network.
Why Brocade Gen5 DCX 8510 Is the Best FICON Director
Emerging and evolving enterprise-critical workloads and higher density virtualization are continuing to push
the limits of SAN infrastructures. This is even truer in a data center with IBM zEnterprise and its support for
Microsoft® Windows® in the zEnterprise BladeCenter Extension (zBX). The Brocade Gen5 DCX 8510 family features
industry-leading 16Gb/sec performance and 8.2Tb/sec chassis bandwidth to address these next-generation I/O and
bandwidth-intensive application requirements. In addition, the Brocade Gen5 DCX 8510 provides unmatched slot-
to-slot and port performance, with 512Gb/sec bandwidth per slot (port card/blade). And this performance comes in
the most energy-efficient FICON director in the industry, using an average of less than 1 watt per Gb/sec, which is 15
times more efficient than competitive offerings.
The Brocade Gen5 DCX 8510 family enables high-speed replication and backup solutions over metro or WAN links
with native Fibre Channel (10Gb/sec or 16Gb/sec) and optional FCIP 1GbE or 10GbE extension support. These solu-
tions are accomplished by integrating this technology via a blade (FX8-24) or standalone switch (Brocade 7800).
Finally, this solution is accomplished with unsurpassed levels of reliability, availability and serviceability (RAS), based
upon more than 25 years of Brocade experience in the mainframe space. This experience includes defining the
FICON standards and authoring or co-authoring many of the FICON patents.
An Ideal Pairing: Hitachi Virtual Storage Platform and Brocade Gen5 DCX 8510
The IBM zEnterprise architecture is the highest performing, most scalable, cost-effective, energy-efficient platform in
mainframe computing history. To get the most out of your investment in IBM zEnterprise, you need a storage infra-
structure, that is, a DASD platform and FICON director, which can match the impressive capabilities of zEnterprise.
Hitachi Data Systems and Brocade, via VSP and Gen5 DCX 8510, together offer the highest performing and most
reliable, scalable, cost-effective and energy-efficient products in the storage and networking industry. The experience
of these 2 companies in the mainframe market, coupled with the capabilities of VSP and Gen5 DCX 8510, makes pair-
ing them with IBM’s zEnterprise the ideal “best in industry” storage architecture for mainframe data centers.
Why IT Should Choose Networked Storage for FICON Over Direct-Attached Storage
Direct-attached FICON storage might appear to be a great way to take advantage of FICON technology. However, a
closer examination will show why a switched FICON architecture is a better, more robust design for enterprise data
centers than direct-attached FICON.
Technical Reasons for a Switched FICON Architecture
There are 5 key technical reasons for connecting storage control units using switched FICON:
■■ Overcome buffer credit limitations on FICON Express8 channels.
■■ Build fan-in, fan-out architecture designs for maximizing resource utilization.
■■ Localize failures for improved availability.
■■ Increase scalability and enable flexible connectivity for continued growth.
■■ Leverage new FICON technologies.
FICON Channel Buffer Credits
When IBM introduced the availability of FICON Express8 channels, one very important change was the number of
buffer credits available on each port per 4-port FICON Express8 channel card. While FICON Express4 channels had
200 buffer credits per port on a 4-port FICON Express4 channel card, this changed to 40 buffer credits per port on
a FICON Express8 channel card. Organizations familiar with buffer credits will recall that the number of buffer credits
required for a given distance varies directly in a linear relationship with link speed. In other words, doubling the link
speed would double the number of buffer credits required to achieve the same performance at the same distance.
Also, organizations might recall the IBM System z10™ Statement of Direction concerning buffer credits:
“The FICON Express4 features are intended to be the last features to support extended distance
without performance degradation. IBM intends to not offer FICON features with buffer credits
for performance at extended distances. Future FICON features are intended to support up to
10km without performance degradation. Extended distance solutions may include FICON direc-
tors or switches (for buffer credit provision) or Dense Wave Division Multiplexers (for buffer credit
simulation).”
IBM held true to its statement, and the 40 buffer credits per port on a FICON Express8/FICON Express8S channel
card can support up to 10km of distance for full-frame size I/Os (2KB frames). What happens if an organization has
I/Os with smaller than full-size frames? The distance supported by the 40 buffer credits would decrease, since more frames must be in flight to keep the link full. It is likely that
at faster future link speeds, the distance supported will decrease to 5km or less.
A switched architecture allows organizations to overcome the buffer credit limitations on the FICON Express8/FICON
Express8S channel card. Depending upon the specific model, FICON directors and switches can have more than
1300 buffer credits available per port for long-distance connectivity.
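The arithmetic behind these figures can be sketched directly. The short Python example below is illustrative only: the roughly 5 microseconds per kilometer of fiber latency, the full-frame size and the effective wire rates are assumptions used for the calculation, not vendor specifications.

```python
# Rough buffer-credit estimate: to keep a link streaming, a port needs roughly
# one buffer credit for every frame "in flight" during the round trip.
# Illustrative sketch; fiber latency (~5 us/km), full-frame size and effective
# data rates are assumptions, not measured values.

FIBER_LATENCY_US_PER_KM = 5.0          # one-way propagation delay in optical fiber

def credits_needed(data_rate_mb_s: float, distance_km: float, frame_bytes: int = 2148) -> float:
    round_trip_us = 2 * distance_km * FIBER_LATENCY_US_PER_KM
    frame_time_us = frame_bytes / data_rate_mb_s      # MB/s is equivalent to bytes per microsecond
    return round_trip_us / frame_time_us

# FICON Express8 (~850 MB/s on the wire), full frames, 10km: ~40 credits,
# which matches the per-port allotment described above.
print(round(credits_needed(850, 10)))
# Half-size frames need roughly twice the credits to sustain the same distance.
print(round(credits_needed(850, 10, 1074)))
# A 16Gb/sec cascaded link at 100km needs several hundred credits, which only
# a director or switch port (1300+ credits) can supply.
print(round(credits_needed(1700, 100)))
```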
Fan-In, Fan-Out Architecture Designs
In the late 1990s, the open systems world started to implement Fibre Channel storage area networks (SANs) to over-
come the low utilization of resources inherent in a direct-attached storage architecture. SANs addressed this issue
through the use of fan-in and fan-out storage network designs. That is, multiple server host bus adapters (HBAs)
could be connected through a Fibre Channel switch to a single storage port: in other words, fan-in. Or a single-server
HBA could be connected through a Fibre Channel switch to multiple storage ports: that is, fan-out. These same prin-
ciples apply to a FICON storage network.
As a general rule, FICON Express8 and FICON Express8S channels offer different levels of performance, in terms of
IOPS and bandwidth, than the storage host adapter ports to which they are connected. Therefore, a direct-attached
FICON storage architecture may see very low channel or storage port utilization rates. To overcome this issue, fan-in
and fan-out storage network designs are used.
A switched FICON architecture allows a single channel to fan-out to multiple storage devices via switching, improving
overall resource utilization. This can be especially valuable if an organization’s environment has newer FICON chan-
nels, such as FICON Express8 or Express8S, but older tape drive technology. Figure 1 illustrates how a single FICON
channel can concurrently keep several tape drives running at full-rated speeds. The actual fan-out ratios for connec-
tivity to tape drives will, of course, depend on the specific tape drive and control unit; however, it is not unusual to see
a FICON Express8 or Express8S channel fan-out from a switch to 5 to 6 tape drives (a 1:5 or 1:6 fan-out ratio). The
same principles apply for fan-out to storage systems. The exact fan-out ratio is dependent on the storage system
model and host adapter capabilities for IOPS and/or bandwidth. On the other hand, several FICON channels could
be connected through a director or switch to a single storage port to maximize the port utilization and increase overall
I/O efficiency and throughput.
Figure 1. Switched FICON allows one channel to keep multiple tape drives fully utilized.
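As a rough sanity check on the fan-out ratios above, the following sketch simply divides channel bandwidth by device bandwidth. The data rates used are illustrative assumptions rather than measurements of any particular drive or host adapter.

```python
# Back-of-the-envelope fan-out: how many devices can one FICON channel keep busy?
def fan_out_ratio(channel_mb_s: float, device_mb_s: float) -> int:
    return int(channel_mb_s // device_mb_s)

CHANNEL_MB_S = 800        # approximate FICON Express8/8S effective bandwidth (assumed)
TAPE_DRIVE_MB_S = 140     # assumed native data rate of an older-generation tape drive

print(fan_out_ratio(CHANNEL_MB_S, TAPE_DRIVE_MB_S))   # 5, in line with the 1:5 to 1:6 ratios noted above
```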
Keep Failures Localized
In a direct-attached architecture, a failure anywhere in the path renders both the channel interface and the control
unit port unusable. The failure could be of an entire FICON channel card, a port on the channel card, a failure of the
cable, a failure of the entire storage host adapter card, or a failure of an individual port on the storage host adapter
card. In other words, a failure on any of these components will affect both the mainframe connection and the storage
connection. A direct-attached architecture therefore provides the worst possible reliability, availability and serviceability for FICON-attached storage.
With a switched architecture, failures are localized to only the affected FICON channel interface or control unit inter-
face, not both. The nonfailing side remains available, and if the storage side has not failed, other FICON channels
can still access that host adapter port via the switch or director (see Figure 2). This failure isolation, combined with
fan-in and fan-out architectures, allows for the most robust storage architectures, minimizing downtime and maximiz-
ing availability.
Figure 2. A FICON director isolates faults and improves availability.
Scalable and Flexible Connectivity
Direct-attached FICON does not easily allow for dynamic growth and scalability, since a single FICON channel card
port is tied to a single dedicated storage host adapter port. In such an architecture, there is a 1:1 relationship (no
fan-in or fan-out). Since there is a finite number of FICON channels available (dependent on the mainframe model or
machine type), growth in a mainframe storage environment with such an architecture can pose a problem. What hap-
pens if an organization needs more FICON connectivity, but has run out of FICON channels? FICON switching and
proper usage of fan-in and fan-out in the storage architecture design will go a long way toward improving scalability.
In addition, best-practice storage architecture designs include room for growth. With a switched FICON architecture,
adding a new storage system or port in a storage system is much easier: simply connect the new storage system
or port to the switch. This eliminates the need to open the channel cage in the mainframe to add new channel inter-
faces, reducing both capital and operational costs. This also gives managers more flexible planning options when
upgrades are necessary, since the urgency of upgrades is lessened.
What about the next generation of channels? The bandwidth capabilities of channels are growing at a much faster
rate than those of storage devices. As channel speeds increase, switches will allow data center managers to take
advantage of new technology as it becomes available, while protecting investments and minimizing costs.
Also, it is an IBM best-practice recommendation to use single-mode long-wave connections for FICON channels.
Storage vendors, however, often offer single-mode long-wave connections and multimode short-wave connections
on their storage systems, allowing organizations to decide which to use. The organization makes the decision based
on the trade-off between cost and reliability. Some organizations’ existing storage devices have a mix of single-mode
and multimode connections. Since they cannot directly connect a single-mode FICON channel to a multimode stor-
age host adapter, this could pose a problem. With a FICON director or switch in the path, however, organizations do
not need to change the storage host adapter ports to comply with the single-mode best-practice recommendation
for the FICON channels. The FICON switching device can have both types of connectivity. It can have single-mode
long-wave ports for attaching the FICON channels, and multimode short-wave ports for attaching the storage.
Furthermore, FICON switching elements at 2 different locations can be interconnected by fiber at distances of
100km or more, creating a cascaded FICON switched architecture. This setup is typically used in disaster recovery
and business continuance architectures. As previously discussed, FICON switching allows resources to be shared.
With cascaded FICON switching, those resources can be shared between geographically separated locations,
allowing data to be replicated or tape backups to be made at the alternate site, away from the primary site, with no
performance loss. Often, workloads will be distributed such that both the local and remote sites are primary produc-
tion sites, and each site uses the other as its backup.
While the fiber itself is relatively inexpensive, laying new fiber may require an expensive construction project. While
dense wave division multiplexing (DWDM) can help get more out of fiber connections, inter-switch links with up to
16Gb/sec of bandwidth are offered by switch vendors and can reduce the cost of DWDM or even eliminate the need
for DWDM. FICON switches maximize utilization of this valuable intersite fiber by allowing multiple environments to
share the same fiber link. In addition, FICON switching devices offer unique storage network management features,
such as ISL trunking and preferred pathing, which are not available with DWDM equipment.
FICON switches allow data center managers to further exploit intersite fiber sharing by enabling them to intermix
FICON and native Fibre Channel Protocol (FCP) traffic, which is known as Protocol Intermix Mode, or PIM. Even in
data centers where there is enough fiber to separate FICON and open systems traffic, preferred pathing features on
a FICON switch can be a great cost saver. With preferred paths established, certain cross-site fiber can be allocated
for the mainframe environment, while other fiber can be allocated for open systems. The ISLs can be configured such
that in the event of a failure, and only in the event of an ISL failure, the links would be shared by both open systems
and mainframe traffic.
Leverage New Technologies
Over the past 5 years, IBM has announced a series of technology enhancements that require the use of switched
FICON. These include:
■■ N_Port ID Virtualization (NPIV) support for z Linux.
■■ Dynamic Channel-Path Management (DCM).
■■ z/OS FICON Discovery and Auto-Configuration (zDAC).
NPIV virtualizes the Fibre Channel identifiers, allowing each Linux on System z image to appear as if it has its own
individual HBA when those images are, in fact, sharing FCP channels. This makes full support of LUN masking and
zoning possible. IBM announced support for NPIV on z Linux in 2005; today, NPIV is supported on the System z9®,
z10, z196 and z114. Until NPIV was supported on System z, adoption of Linux on System z had been relatively slow.
Since IBM began supporting NPIV on System z, adoption of Linux on System z has grown significantly:
IBM believes approximately 19% of MIPS shipping on new z196s are for Linux on System z implementations.
Implementation of NPIV on System z requires a switched architecture.
DCM is another feature that requires a switched FICON architecture. DCM provides the ability to have System z auto-
matically manage FICON I/O paths connected to storage systems in response to changing workload demands. Use
of DCM helps simplify I/O configuration planning and definition, reduces the complexity of managing I/O, dynamically
balances I/O channel resources, and enhances availability. DCM can best be summarized as a feature that allows for
more flexible channel configurations, by designating channels as “managed,” and proactive performance manage-
ment. DCM requires a switched FICON architecture because topology information is communicated via the switch or
director. The FICON switch must have a control unit port (CUP) license and be configured or defined as a control unit
in the hardware configuration definition (HCD).
z/OS FICON Discovery and Auto-Configuration (zDAC) is the latest technology enhancement for FICON. IBM intro-
duced zDAC as a follow-on to an earlier enhancement in which the FICON channels log into the Fibre Channel name
server on a FICON director. zDAC enables the automatic discovery and configuration of FICON-attached DASD and
tape devices. Essentially, zDAC automates a portion of the HCD Sysgen process. zDAC uses intelligent analysis to
help validate the System z and storage definitions’ compatibility, and uses built-in best practices to help configure
for high availability and avoid single points of failure. zDAC is transparent to existing configurations and settings. It is
invoked and integrated with the z/OS HCD and z/OS Hardware Configuration Manager (HCM). zDAC also requires a
switched FICON architecture.
IBM also introduced support for transport-mode FICON (known as z High Performance FICON, or zHPF) in October
2008 and announced further enhancements in July 2011. While not required for zHPF, a switched architecture is
recommended.
Business Reasons for a Switched FICON Architecture
In addition to the technical reasons described earlier, the following business reasons support implementing a
switched FICON architecture:
■■ Enable massive consolidation in order to reduce capital and operating expenses.
■■ Improve application performance at long distances.
■■ Support growth and enable effective resource sharing.
Massive Consolidation
With NPIV support on System z, server and I/O consolidation is very compelling (see Figure 3). IBM undertook a
well-publicized project at its internal data centers (Project Big Green) and consolidated 3900 open systems servers
onto 30 System z mainframes running Linux. IBM’s total cost of ownership (TCO) savings was calculated, taking into
account footprint reductions, power and cooling, and management simplification costs. The result was nearly 80%
TCO savings for a 5-year period. This scale of TCO savings is why 19% of new IBM mainframe processor shipments
are now being used for Linux.
Implementation of NPIV requires connectivity from the FICON (FCP) channel to a switching device (director or smaller
port-count switch) that supports NPIV. A special microcode load is installed on the FICON channel to enable it to
function as an FCP channel. NPIV allows the consolidation of up to 255 z Linux images (“servers”) behind each FCP
channel, using one port on a channel card and one port on the attached switching device for connecting these virtual
servers. This enables massive consolidation of many HBAs, each attached to its own switch port in the SAN.
As a best practice, IBM currently recommends configuring no more than 32 Linux images per FCP channel. Although
this level of I/O consolidation was possible before NPIV support on System z, implementing LUN masking and zoning
in the same manner as for open systems servers, SANs and storage was not possible until NPIV was supported with
Linux on System z.
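The port-count arithmetic behind this consolidation is easy to illustrate. The sketch below uses a hypothetical population of 256 Linux guests; only the 32-images-per-channel best practice and the 255-image architectural limit come from the text above.

```python
# Illustrative NPIV consolidation arithmetic (the guest count is a made-up example).
import math

LINUX_GUESTS = 256                # hypothetical number of Linux on System z images
IMAGES_PER_FCP_CHANNEL = 32       # IBM best-practice maximum per FCP channel
NPIV_ARCH_LIMIT = 255             # architectural limit of images per FCP channel

# Dedicated-HBA model: one host port and one switch port per "server".
ports_dedicated = LINUX_GUESTS

# NPIV model: guests share FCP channels; each channel consumes one channel port
# and one switch port.
channels_needed = math.ceil(LINUX_GUESTS / IMAGES_PER_FCP_CHANNEL)
ports_npiv = channels_needed

print(ports_dedicated, ports_npiv)                                  # 256 vs 8 switch ports
print(f"{1 - ports_npiv / ports_dedicated:.0%} fewer SAN ports")    # ~97% fewer
```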
NPIV implementation on System z has also resulted in consolidation and adoption of a common SAN for distributed
or open systems (FCP) and mainframe (FICON), commonly known as protocol intermix mode (PIM). While IBM has
supported PIM in System z environments since 2003, adoption rates were low until NPIV implementations for Linux
on System z picked up with the introduction of System z10 in 2008. With the z10, enhanced segregation and security
beyond simple zoning became possible through switch partitioning or virtual fabrics and logical switches. With 19% of
new mainframes being shipped for use with Linux on System z, it is safe to say that at least 19% of mainframe envi-
ronments are now running a shared PIM environment.
Leveraging enhancements in switching technology, performance and management, PIM users can now fully populate
the latest high-density directors with minimal or no oversubscription. They can use management capabilities such
as virtual fabrics or logical switches to fully isolate open systems ports and FICON ports in the same physical direc-
tor chassis. Rather than having more partially populated switching platforms that are dedicated to either mainframe
(FICON) or open systems (FCP), PIM allows for consolidation onto fewer physical switching devices, reducing man-
agement complexity and improving resource utilization. This, in turn, leads to lower operating costs, and a lower TCO
for the storage network. It also allows for a consolidated, simplified cabling infrastructure.
Figure 3. Organizations implement NPIV to consolidate I/O in z Linux environments.
Application Performance Over Distance
As previously discussed, the number of buffer credits per port on a 4-port FICON Express8 channel has been
reduced to 40, supporting up to 10km without performance degradation. What happens if an organization needs to
go beyond 10km for a direct-attached storage configuration? They will likely see performance degradation due to
insufficient buffer credits. Without a sufficient quantity of buffer credits, the “pipe” cannot be kept full with streaming
frames of data.
Switched FICON avoids this problem (see Figure 4). FICON directors and switches have a sufficient quantity of buffer
credits available on ports to allow them to stream frames at full-line performance rates with no bandwidth degrada-
tion. IT organizations that implement a cascaded FICON configuration between sites can, with the latest FICON
director platforms, stream frames at 16Gb/sec rates with no performance degradation for sites that are 100km apart.
Switched FICON technology also allows organizations to take advantage of hardware-based FICON protocol accel-
eration or emulation techniques for tape (reads and writes), as well as with zGM (z/OS Global Mirror, formerly known
as XRC, or Extended Remote Copy). This emulation technology is available on standalone extension switches or on a
blade in FICON directors. It allows the z/OS-initiated channel programs to be acknowledged locally at each site and
avoids the back-and-forth protocol handshakes that normally travel between remote sites. It also reduces the impact
of latency on application performance and delivers local-like performance over unlimited distances. In addition, this
acceleration or emulation technology optimizes bandwidth utilization.
Why is bandwidth efficiency so important? Intersite bandwidth is typically the most expensive budget component in an organization’s
multisite disaster recovery or business continuity architecture. Anything that can be done to improve the utilization
and/or reduce the bandwidth requirements between sites would likely lead to significant TCO savings.
Figure 4. Switched FICON with emulation allows optimized performance and bandwidth utilization over extended
distance.
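The latency being hidden here is simple propagation delay. The sketch below (assuming roughly 5 microseconds per kilometer of fiber and illustrative handshake counts) shows why removing cross-site round trips matters for tape and zGM channel programs.

```python
# Added latency per I/O from cross-site protocol round trips (illustrative).
FIBER_LATENCY_MS_PER_KM = 0.005            # one-way propagation delay in fiber

def added_latency_ms(distance_km: float, round_trips_per_io: int) -> float:
    return 2 * distance_km * FIBER_LATENCY_MS_PER_KM * round_trips_per_io

# An I/O needing 4 end-to-end exchanges over 100km pays ~4ms of pure propagation delay:
print(added_latency_ms(100, 4))            # 4.0
# With local acknowledgement (emulation), only the data crosses the link once:
print(added_latency_ms(100, 1))            # 1.0
```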
Enable Growth and Resource Sharing
Direct-attached storage forces a 1:1 relationship between host connectivity and storage connectivity. In other words,
each storage port on a storage system host adapter requires its own physical port connection on a FICON Express8
channel card. These channel cards are typically very expensive on a per-port basis — typically 4 to 6 times the cost
of a FICON director port. Also, there is a finite number of FICON Express8S channels available on a zEnterprise196
(a maximum of 320), as well as a finite number of host adapter ports in the storage system. If an organization has
a large configuration and a direct-attached FICON storage architecture, how does it plan to scale its environment?
What happens if an organization acquires a company and needs additional channel ports? A switched FICON infra-
structure allows cost-effective, seamless expansion to meet growth requirements.
Direct-attached FICON storage also typically results in underutilized host channel card ports and host adapter ports
in storage systems. FICON Express8 and FICON Express8S channels can comfortably perform at high channel
utilization rates, yet a direct-attached storage architecture typically sees channel utilization rates of 10% or less. As
illustrated in Figure 5, leveraging FICON directors or switches allows organizations to maximize channel utilization.
Figure 5. Switched FICON drives improved channel utilization, while preserving CHPIDs for growth.
It also is very important to keep traffic for tape drives streaming, and to avoid stopping and starting the tape drives, as
this leads to unwanted wear and tear of tape heads, cartridges, and the tape media itself. Using FICON acceleration
or emulation techniques, as described earlier, this can be accomplished with a configuration similar to the one shown
in Figure 6. Such a configuration requires solid analysis and planning, but it will pay dividends for an organization’s
FICON tape environment.
Figure 6. A well-planned configuration can maximize CHPID capacity utilization for FICON tape efficiency.
Finally, switches facilitate fan-in, which allows different hosts and LPARs whose I/O subsystems are not shared to
share the same assets. While some benefits may be realized immediately, the potential for value in future equipment
planning can be even greater. With the ability to share assets, equipment that would be too expensive for a single
environment can be deployed in a cost-saving manner. The most common example is to replace tape farms with
virtual tape systems. By reducing the number of individual tape drives, maintenance (service contracts), floor space,
power, tape handling and cooling costs are reduced. Virtual tape also improves the reliability of data recovery, allows
for significantly shorter recovery time objectives (RTO) and tighter recovery point objectives (RPO), and offers features such
as peer-to-peer copies. However, without the ability to share these systems, it may be difficult to amass sufficient
cost savings to justify the initial cost of virtual tape. And the only practical way to share these standalone tape sys-
tems or tape libraries is through a switch.
With disk storage systems, in addition to sharing the asset, it is sometimes desirable to share the data across mul-
tiple systems. The port limitations on a storage system may prohibit or limit this capability using direct-attached
(point-to-point) FICON channels. Again, the switch can provide a solution to this issue.
Even when there is no need to share devices during normal production, this capability can be very valuable in the
event of a failure. Data sets stored on tape can quickly be read by CPUs picking up workload that is already attached
to the same switch as the tape drives. Similarly, data stored on a storage system can be available as soon as a fault
is determined.
Switch features, such as preconfigured port prohibit or allow matrix tables, can ensure that access intended only for a
disaster scenario is prohibited during normal production.
Why Switched FICON: Summary
Direct-attached FICON might appear to be a great way to take advantage of FICON technology’s advances over
ESCON. However, a closer examination shows that switched FICON, similar to switched ESCON, is a better, more
robust architecture for enterprise data centers. Switched FICON offers:
■■ Better utilization of host channels and their performance capabilities.
■■ Scalability to meet growth requirements.
■■ Improved reliability, problem isolation and availability.
■■ Flexible connectivity to support evolving infrastructures.
■■ More robust business continuity implementations via cascaded FICON.
■■ Improved distance connectivity, with improved performance over extended distances.
■■ Support for new mainframe I/O technology enhancements such as NPIV, FICON DCM, zDAC and zHPF.
Switched FICON also provides many business advantages and potential cost savings, including:
■■ The ability to perform massive server, I/O and SAN consolidation, dramatically reducing capital and operating
expenses.
■■ Local-like application performance over any distance, allowing host and storage resources to reside wherever
business dictates.
■■ More effective resource sharing, improved utilization, reduced costs and improved recovery time.
With the growing trend toward increased usage of Linux on System z, and the cost advantages of NPIV implemen-
tations and PIM SAN architectures, direct-attached storage in a mainframe environment is becoming a thing of the
past. Investments made in switches for disaster recovery and business continuance are likely to pay the largest divi-
dends. Having access to alternative resources and multiple paths to those resources can result in significant savings
in the event of a failure. The advantages of a switched FICON infrastructure are simply too great to ignore.
Hitachi Virtual Storage Platform
Hitachi Data Systems has over 20 years of experience supporting IBM mainframe environments. A large portion of
the installed base of Hitachi storage systems connects to IBM z/OS and S/390® mainframes via ESCON and FICON networks.
Hitachi Virtual Storage Platform builds on this experience and introduces new features and packaging to improve
performance while lowering TCO. In addition to its new 3-D scaling architecture, it features lower power and cool-
ing requirements, high-density packaging based on industry-standard 19-inch racks, faster microprocessors and the
choice of disk drive types, including solid state disk (SSD), serial attached SCSI (SAS) and nearline SAS. This stor-
age platform provides an industry-leading, reliable and highly available storage system for mainframes in IBM z/OS
environments.
It supports z/OS, z/VSE and z/VM for zEnterprise. Additionally, with many organizations considering the benefits of
or running Linux on IBM zEnterprise processors, Virtual Storage Platform supports this capability for both count
key device (CKD) and fixed block architecture (FBA) disk formats. Hitachi has implemented support for many key
performance features in support of these operating systems running on zEnterprise, including PAV, HyperPAV, z/HPF,
Multiple Allegiance, MIDAW and Priority I/O Queuing. It also provides a unique mainframe storage management solu-
tion to deliver functionally compatible extended address volumes (EAV) for z/OS, data volume expansion (DVE) and IBM FlashCopy® SE (with space efficiency capability).
Hitachi Virtual Storage Platform is designed to be highly available and resilient. All critical components are imple-
mented in pairs. If a component fails, the paired component can take over the workload without an outage. With
its support of multiple RAID configurations, an organization’s data is protected in the event of a disk drive problem.
Additionally, with its industry-leading replication software and support of FlashCopy, FlashCopy SE and Hitachi
Compatible Software for IBM XRC®, providing the functionality of IBM z/OS Metro/Global Mirror, copies of data
can be maintained locally and at remote locations. This ensures its availability in case the primary copy becomes
unusable or is not accessible.
Scalability (3-D Scaling: Out, Up, Deep)
Hitachi Virtual Storage Platform can scale up to provide increased performance, capacity, throughput and connectiv-
ity. It can scale out by dynamically combining multiple units into a single logical system with shared resources. It can
also scale deep by dynamically virtualizing new and existing external storage systems. This 3-D scaling means that
VSP can grow nondisruptively to meet changing needs within the data center. It minimizes outages to extend the
platform and enhance functionality while providing flexibility in the configuration and choice of disk technology to meet
the specific needs of each environment.
The ability to scale deep is provided by Hitachi controller-based storage virtualization, which supports connectivity
to external storage. This enables organizations to further extend the life of existing storage assets, including stor-
age from a variety of other vendors. It also provides IBM mainframes the ability to connect to both enterprise and
midrange storage platforms, some of which can be configured with lower cost nearline SAS or SATA drives. This vir-
tualization of external storage can potentially extend the life of existing storage assets and reduce costs.
Three important benefits of scaling deep are:
■■ Enables the reuse of existing or legacy assets for less critical or less frequently accessed data.
■■ Simplifies management of external storage with common management and data protection for internal and
external storage.
■■ Supports the reuse of existing or legacy assets across data centers within a metro area network distance and
across global distances with the replication capabilities of the scale-up storage system.
Performance
Hitachi Virtual Storage Platform ushers in a new level of I/O throughput, response and scalability. It supports 8Gb
FICON (FICON Express8 and FICON Express8S) and enables a single VSP FICON 8Gb port to handle higher traffic
rates than can be delivered by a single zEnterprise FICON Express8 or FICON Express8S channel. This storage net-
working is critical to optimizing performance and maximizing throughput in mainframe environments.
IBM 3390 and FICON Support
This industry-leading storage system provides 3390 disk drive support through emulation across a variety of disk
drive types to meet the variety of performance and capacity needs of mainframe environments. The platform sup-
ports SSD flash drives, providing ultra-high-speed response with capacities of 200GB and 400GB, as well as
2.5-inch SAS drives, and nearline SAS drives. It can control up to 65,280 logical volumes and provides an internal
physical disk capacity of approximately 2.5PB per storage system. With externally attached storage, Hitachi Virtual
Storage Platform can support up to 255PB of storage capacity.
VSP supports 8Gb/sec FICON (FICON Express8 and FICON Express8S) across all front-end ports for connectivity to
the mainframe and 8Gb/sec Fibre Channel for connecting external storage. VSP supports high-performance FICON
(z/HPF) for z/OS. On the back end, it supports SAS, SATA and SSD drives, which are connected using the SAS 2
protocol with 6Gb/sec connectivity per back-end port.
Hitachi Dynamic Provisioning
Hitachi Dynamic Provisioning for Mainframe optimizes performance through extremely wide striping and more effec-
tive use of storage through thin provisioning (see Figure 7). In other words, it allocates storage to an application
without actually mapping the corresponding physical storage until it is used. This separation of allocation from physi-
cal mapping results in more effective use of physical storage with higher overall performance and rates of storage
utilization. Dynamic Provisioning also enables Dynamic Volume Expansion (DVE) of 3390 volumes and FlashCopy SE
for more efficient use of storage when creating local copies.
Figure 7. Hitachi Dynamic Provisioning for Mainframe optimizes performance.
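Conceptually, thin provisioning separates the virtual capacity a volume advertises from the physical pages that back it. The sketch below is a simplified model of that idea, assuming a shared page pool; it is not a description of the actual Hitachi Dynamic Provisioning implementation.

```python
# Simplified thin-provisioning model: physical pages are consumed only on first write.
class ThinPool:
    def __init__(self, physical_pages: int):
        self.free_pages = physical_pages

    def allocate_page(self) -> None:
        if self.free_pages == 0:
            raise RuntimeError("pool exhausted -- add physical capacity")
        self.free_pages -= 1

class ThinVolume:
    def __init__(self, pool: ThinPool, virtual_pages: int):
        self.pool, self.virtual_pages = pool, virtual_pages
        self.mapped = set()                    # virtual pages that are physically backed

    def write(self, page: int) -> None:
        if page not in self.mapped:            # map physical space on first write only
            self.pool.allocate_page()
            self.mapped.add(page)

pool = ThinPool(physical_pages=1_000)
vol = ThinVolume(pool, virtual_pages=10_000)   # 10x over-allocation of virtual capacity
vol.write(42)
print(len(vol.mapped), pool.free_pages)        # 1 physical page consumed, 999 still free
```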
Hitachi Dynamic Tiering
Hitachi Dynamic Tiering (HDT) for Mainframe enables the automatic movement of data between tiers. HDT moves
highly accessed blocks of data to the highest tier storage and migrates less frequently accessed data to the lowest
tiers. This significantly reduces the time storage administrators have to spend analyzing storage usage and managing
the movement of data to optimize performance. HDT complements z/OS System Managed Storage and can move
pages of data to the appropriate tier when needed rather than moving entire datasets.
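The policy HDT automates can be pictured as a simple placement problem: rank pages by how often they are accessed and fill the fastest tier first. The sketch below illustrates that idea only; it is not the actual Hitachi Dynamic Tiering algorithm.

```python
# Toy tiering policy: hottest pages land in tier 0 (e.g. SSD), cold pages spill downward.
def assign_tiers(page_access_counts: dict, tier_capacities: list) -> dict:
    hottest_first = sorted(page_access_counts, key=page_access_counts.get, reverse=True)
    placement, tier, used = {}, 0, 0
    for page in hottest_first:
        while tier < len(tier_capacities) - 1 and used >= tier_capacities[tier]:
            tier, used = tier + 1, 0           # current tier is full, move down one tier
        placement[page] = tier
        used += 1
    return placement

access = {"p1": 900, "p2": 15, "p3": 400, "p4": 2}       # simulated access counts per page
print(assign_tiers(access, tier_capacities=[1, 2, 10]))  # {'p1': 0, 'p3': 1, 'p2': 1, 'p4': 2}
```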
Hitachi Remote Replication
Business continuity is more important than ever in today’s business environment, as demonstrated by the natural
disasters and the physical intrusion and destruction of IT resources seen over the last few years. A loss of business-critical
data can force a company to its knees and even into bankruptcy. In addition, regulatory compliance requirements
demand a business continuity and disaster recovery plan, and the infrastructure to support that plan; companies that fail to comply face stiff fines and
business restrictions. Hitachi remote replication offerings provide the ability to copy critical data to off-site facilities
either within a metropolitan area and/or to distant remote locations. The combination of the enterprise-level Hitachi
Virtual Storage Platform with Brocade’s solutions to extend and optimize fabric connectivity facilitates the movement
of your business-critical data over longer distances. Together, they enable and enhance your ability to support busi-
ness continuity and disaster recovery.
Hitachi TrueCopy®
Hitachi TrueCopy synchronous software provides a continuous, nondisruptive, host-independent remote data replica-
tion solution for disaster recovery or data migration over distances within the same metropolitan area. It provides a
no-data-loss, rapid-restart solution (see Figure 8). For enterprise environments, TrueCopy synchronous software com-
bined with Hitachi Universal Replicator on Virtual Storage Platform allows for advanced 3 data center configurations.
This includes consistency across up to 12 storage systems in 1 site for optimal data protection.
Figure 8. Hitachi TrueCopy synchronous supports business continuity and disaster recovery efforts.
TrueCopy synchronous supports business continuity and disaster recovery efforts, improving business resilience. It
improves service levels by reducing planned and unplanned downtime of customer-facing applications. It enables fre-
quent, nondisruptive disaster recovery testing with an online copy of current and accurate production data. TrueCopy
synchronous can be seamlessly integrated into existing z/OS environments and controlled with familiar PPRC com-
mands or with Hitachi Business Continuity Manager software.
Hitachi Universal Replicator
Hitachi Universal Replicator provides asynchronous data replication across any distance for both internal Virtual
Storage Platform storage and external storage managed by VSP (see Figure 9). Universal Replicator provides
enterprise-class performance associated with storage system-based replication. At the same time, it provides resilient
business continuity without the need for remote host involvement, or redundant servers or replication appliances.
Universal Replicator maintains the integrity of replicated copies without impacting processing, even when replication
network outages occur or optimal bandwidth is not available. When compared to traditional methods of storage-
system-based replication, Universal Replicator leverages performance-optimized disk-based journals, resulting in
significantly reduced cache utilization and increased bandwidth utilization.
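The journal concept can be reduced to a small model: writes are acknowledged to the host as soon as they are recorded locally, and a separate task drains the journal to the remote system in original write order. The sketch below illustrates the idea under those assumptions; it is not the Universal Replicator implementation.

```python
# Minimal model of journal-based asynchronous replication.
from collections import deque

class JournalReplicator:
    def __init__(self):
        self.journal = deque()        # ordered; disk-backed journal volumes in a real system
        self.sequence = 0
        self.remote = {}              # remote copy: address -> data

    def host_write(self, address: str, data: bytes) -> None:
        self.sequence += 1
        self.journal.append((self.sequence, address, data))
        # The host I/O completes here; nothing waits on the long-distance link.

    def drain_once(self) -> None:
        if self.journal:              # ship the oldest entry first to preserve write order
            _, address, data = self.journal.popleft()
            self.remote[address] = data

rep = JournalReplicator()
rep.host_write("vol01:trk42", b"payload")
rep.drain_once()
print(rep.remote)                     # {'vol01:trk42': b'payload'}
```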
Universal Replicator ensures availability of up-to-date copies of data in up to 3 dispersed locations by leveraging the
synchronous capabilities of Hitachi TrueCopy synchronous. In the event of a disaster at the primary data center, the
delta resync feature of Universal Replicator enables fast failover and restart of the application without loss of data,
whether at the local or remote data center.
Figure 9. Hitachi Universal Replicator ensures availability of current copies of data in up to 3 dispersed locations.
Universal Replicator can be integrated into an IBM GDPS®
environment, providing a much more cost-effective and
complete recovery solution than the IBM alternative of z/OS Global Mirror. With Universal Replicator and TrueCopy
synchronous support of a 3 data center replication solution, VSP supports delta resync, which is similar to but more
efficient than z/OS Metro/Global Mirror Incremental Resync.
Hitachi Virtual Storage Platform also supports IBM z/OS Basic HyperSwap®, which is enabled by IBM Tivoli Storage
Productivity Center for Replication for System z Basic Edition (TPC-R). TPC-R enables the administrator to develop
a z/OS Basic HyperSwap configuration using VSP. Using VSP, the organization can create a z/OS Basic HyperSwap
plan for a 2 data center configuration with TrueCopy synchronous, or a 3 data center configuration with TrueCopy
synchronous, Universal Replicator and Business Continuity Manager. Initially, VSP will support a maximum of 3 stor-
age systems at each site.
VSP with Universal Replicator will support a 4 data center configuration and allow you to have 2 long-distance asynchronous
data paths and 2 synchronous paths. This solution offers you the ability to create multiple copies of data in many
locations and reduce the impact of data migration.
Hitachi Compatible Software for IBM® XRC®
Hitachi Compatible Replication software for IBM XRC is a cross-license technology between Hitachi Data Systems
and IBM that provides support for z/OS Global Mirror. This Hitachi software is fully compatible with IBM XRC and lets
administrators create and share server-based remote copies between Hitachi Virtual Storage Platform, the Hitachi
Universal Storage Platform family and IBM enterprise storage systems, such as the DS8000®
system. Hitachi Data
Systems is the only 3rd-party storage vendor capable of fully supporting IBM XRC command sets.
Hitachi Business Continuity Manager
Hitachi Business Continuity Manager enables centralized, enterprise-wide replication management for IBM z/OS
mainframe environments. Through a single, consistent interface based on the Time Sharing Option/Interactive System
Productivity Facility (TSO/ISPF), it uses full-screen panels to automate Hitachi Universal Replicator, Hitachi TrueCopy
synchronous (including multisite topologies) and in-system Hitachi ShadowImage Heterogeneous Replication soft-
ware operations.
This software feature automates complex disaster recovery and planned outage functions, resulting in reduced recov-
ery times. It also enables advanced, 3 data center disaster recovery configurations and extended consistency group
capabilities. Business Continuity Manager provides built-in capabilities for monitoring and managing critical perfor-
mance metrics and thresholds for proactive problem avoidance. It also delivers autodiscovery of enterprise-wide
storage configuration and replication objects, eliminating tedious, error-prone data entry that can cause outages.
Hitachi Business Continuity Manager integrates with the Hitachi replication management framework, Hitachi
Replication Manager software, for replication monitoring and continuous operations in mainframe (and open system)
environments.
Multiplatform Support
Hitachi Virtual Storage Platform can support multiple operating systems at the same time. Although many mainframe
organizations have been reluctant to share their storage platforms with open systems servers, the need to share stor-
age is becoming more important: Organizations are implementing Linux on System z. In addition, the introduction of
IBM zEnterprise BladeCenter Extension (zBX) for mainframe processors enables Microsoft Windows to operate as
part of zEnterprise servers. VSP can be configured to facilitate the isolation of disparate types of data. Additionally,
the FICON and Fibre Channel ports are completely separate and help ensure that critical mainframe data cannot be
accessed directly by open systems servers or clients.
Cost-Savings Efficiencies
This storage system is designed to lower TCO wherever possible. The physical packaging has been designed to
use standard-size racks and chassis. The internal layout supports front-to-back airflow, to facilitate the use of hot
and cold aisles and maximize the efficiency of data center cooling. In combination with very fast processors, denser
packaging and smaller batteries, the result is a smaller physical floor space and reduced heating and cooling requirements,
for very low power per square foot (kVA/sq ft). Operating expenditure (opex) is lower than in previous systems thanks to denser
packaging, blade architecture, low power memory, small form factor disks, SSD disk and flash-protected cache with
its smaller batteries. Hitachi Data Systems is committed to continuing to deliver more efficient packaging, resulting in
more sustainable products.
Brocade Gen5 DCX 8510 in Mainframe Environments
Now on its 5th generation (1G, 2G, 4G, 8G and 16G) of switching technology (Gen5), Brocade has deep experience
to draw on. The company has been in the mainframe storage networking business for more than 20 years, as far back
as the parallel channel extension technology of the late 1980s. Brocade has a history of thought leadership. It has 4
of its own FICON patents, as well as 5 FICON joint patents with IBM on technologies, such as the FICON bridge card
and control unit port (CUP). Brocade helped IBM develop Fibre Connection (FICON), and in 2000 the 1st IBM certified
FICON network infrastructure, using 1Gb/sec ED5000 Directors, was deployed. Brocade has the only FICON archi-
tecture certification program (BCAF) in the industry. Brocade manufactured the 9032-5 ESCON director for IBM, and
pioneered ESCON channel extension emulation technology. Brocade has continued its heritage of mainframe storage
networking thought leadership with 9 generations of FICON directors. These products include the current industry-
leading FICON directors, such as the DCX and DCX 8510, and FICON channel extension, such as the Brocade 7800
and FX8-24 extension blade.
Reliability, Availability and Serviceability
The largest corporations in the world literally run their businesses on mainframes. Government institutions in many
countries worldwide also rely on the mainframe for their critical computing needs. RAS qualities for these mission-
critical environments are of the utmost importance. Mainframe practitioners in these organizations avoid risk at all
costs. They never want to suffer an unscheduled outage, and they want to minimize if not outright eliminate sched-
uled or planned outages. Mainframes such as the IBM zEnterprise have historically been the rock-solid pillar in terms
of computing RAS. Mainframe practitioners have a history of creating I/O infrastructures that have “five nines” avail-
ability. For FICON channel connectivity to mainframe-attached storage, these same organizations have a requirement
for a FICON director platform that offers the same levels of RAS as the mainframe itself. The Brocade Gen5 DCX
8510 is the ideal FICON director for these RAS requirements.
The Brocade Gen5 DCX 8510 FICON Director features a modular, high-availability architecture that supports these
mission-critical mainframe environments. The Brocade Gen5 DCX 8510 chassis has been engineered from incep-
tion for “five nines” of availability by providing multiple fans (supporting hot aisle-cool aisle), multiple fan connectors,
dual core blade internal connectivity, dual control processors, dual power supplies, a passive backplane and dual
I/O timing clocks. These features and the switching design of the Brocade Gen5 DCX 8510 result in leading mean
time between failures (MTBF) and mean time to recovery or repair (MTTR) numbers. In a recent study performed with
a sample size of 26,593 Brocade products, the average downtime was 0.53 minutes per year, for an availability
rate of 99.99984%. It is this kind of availability that consistently leads OEM partners such as HDS to praise Brocade
products for their quality.
Scalability
With the advent of the zBX and the zEnterprise Unified Resource Manager, private cloud computing centered on the
IBM zEnterprise has emerged as a “hot topic.” Cloud computing requires a highly scalable (hyper-scale) storage net-
working architecture to support it. Hyper-Scale Inter-Chassis Link (ICL) is a unique Brocade Gen5 DCX 8510 feature
that provides connectivity among 2 or more Brocade 8510-4 or 8510-8 chassis. This is the 2nd generation of ICL
technology from Brocade, using optical QSFP (quad small form-factor pluggable) connectors. The 1st generation used a copper connector.
Each ICL connects the core routing blades of two 8510 chassis and provides up to 64Gb/sec of throughput within a
single cable. The Brocade 8510-8 allows up to 32 QSFP ports, and the 8510-4 allows up to 16 QSFP ports to help
preserve switch ports for end devices.
This 2nd generation of Brocade optical ICL technology, based on QSFP technology, provides a number of benefits
to the organization. Brocade has improved ICL connectivity over the use of copper connectors by upgrading to an
optical form factor. With this improvement, Brocade has also increased the distance of the connection from 2 meters
to 50 meters. QSFP combines 4 cables into 1 cable per port, significantly reducing the number of ISL cables the
customer needs to run. Since the QSFP connections reside on the core blades within each 8510, they do not use up
connections on the slot line cards. This improvement frees up to 33% of the available ports for additional server and
storage connectivity.
Dual-chassis backbone topologies connected through low-latency ICL connections are ideal in a FICON environment. The majority of FICON installations connect switches in dual or triangular topologies, using ISLs to meet the FICON requirement for low latency between switches. The new 64Gb/sec QSFP-based ICLs enable simpler, flatter, low-latency chassis topologies spanning distances of up to 50 meters with off-the-shelf cables. They reduce interswitch cables by 75% and preserve 33% of front-end ports for servers and storage, resulting in fewer cables and more usable ports in a smaller footprint.
Pair the Two Platforms Together
Traditional (z/OS) Mainframe Environments
In a “traditional” z/OS mainframe environment, RAS and performance are the key concerns for most organizations. These characteristics provide the stability for the mainframe-based applications on which the largest companies in the world run their businesses. Dr. Thomas E. Bell, winner of the Computer Measurement Group (CMG) Michelson Award for lifetime achievement in the computer performance field, once famously commented that “all CPUs wait at the same speed.” Likewise, Dr. Steve Guendert, a CMG board member, has commented in his blog that “The IBM zEnterprise is a hungry machine, and its users need to feed the I/O beast.” Response time means money in these environments: the ability to process transactions more rapidly gives companies a competitive advantage in today’s financial industry. Hitachi Virtual Storage Platform and Brocade DCX 8510, together, make sure the “I/O beast is fed.”
Linux on the Mainframe
A 2011 IDC report indicated that approximately 19% of the processing power on newly shipped mainframes is intended for Linux, and IBM has been quoted as saying that 32% of its zEnterprise installed base is running Integrated Facility for Linux (IFL) specialty engines. Whether Linux runs as a guest under z/VM or natively in an LPAR, it is an important trend that cannot be ignored, and it has been growing since the 2005 introduction of support for NPIV on System z. IT organizations are finding significant cost savings in moving to Linux on System z: in hardware acquisition, in software licensing, and in operational costs such as power and cooling. Hitachi VSP and Brocade Gen5 DCX 8510 are the ideal choice for these Linux environments. VSP offers powerful virtualization, support for NPIV, and both Dynamic Provisioning and Dynamic Tiering. Brocade Gen5 DCX 8510 offers full support for NPIV, and its Virtual Fabrics functionality allows highly secure separation of z/OS data traffic from Linux traffic on the FICON director.
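For readers less familiar with NPIV, the following purely conceptual Python sketch illustrates why it matters for Linux on System z: a single physical FCP channel can present multiple virtual WWPNs, so each Linux guest receives its own fabric identity that can be zoned and LUN-masked individually. This is a toy model, not a real z/VM or Linux interface, and the CHPID and WWPN values are invented for illustration.

```python
# Conceptual sketch only (not a real API): one physical FCP channel
# presenting several NPIV virtual WWPNs, one per Linux guest.

from dataclasses import dataclass, field
from typing import List

@dataclass
class VirtualPort:
    wwpn: str   # virtual WWPN presented to the fabric for one guest (made up)
    guest: str  # the Linux guest that owns this identity

@dataclass
class FcpChannel:
    chpid: str  # physical FCP channel identifier (illustrative)
    virtual_ports: List[VirtualPort] = field(default_factory=list)

    def add_guest(self, guest: str, wwpn: str) -> VirtualPort:
        """Register a per-guest virtual WWPN on this shared physical channel."""
        vport = VirtualPort(wwpn=wwpn, guest=guest)
        self.virtual_ports.append(vport)
        return vport

# One shared FCP channel carrying three individually zonable identities.
channel = FcpChannel(chpid="50")
for name, wwpn in [("linux01", "c05076ffe5000010"),
                   ("linux02", "c05076ffe5000011"),
                   ("linux03", "c05076ffe5000012")]:
    channel.add_guest(name, wwpn)

print(f"CHPID {channel.chpid} presents {len(channel.virtual_ports)} virtual WWPNs")
```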
FICON and FCP Intermix
FICON and FCP Intermix, or protocol intermix mode (PIM), is another growing trend in mainframe environments. Linux on System z has been the major driver of this trend, since it typically leads end users to run both FCP channels and FICON channels on the same mainframe. IBM’s recent announcement and general availability of support for Windows blade servers on the zEnterprise BladeCenter Extension (zBX) is likely to drive even broader adoption of PIM as a storage networking architecture. The virtualization, performance, scalability and tiering capabilities of Hitachi VSP make it an ideal disk storage platform for a PIM storage architecture. The DCX 8510’s performance and Virtual Fabrics capabilities, coupled with Brocade’s deep open systems SAN experience, make it the ideal director platform to pair with VSP in a PIM architecture.
Private Cloud
The ideas behind cloud computing are well known to experienced mainframers, who remember “service bureau computing.” Private cloud computing is a “hot topic”: adoption is growing, and the concept of IBM zEnterprise Systems at the center of a private cloud is gaining traction. Private cloud computing relies on extensive virtualization, not just of servers and applications but of everything in the data center, most notably the storage devices and the network. Hitachi Virtual Storage Platform paired with Brocade Gen5 DCX 8510 creates the ideal architecture for a mainframe-centric private cloud.
Conclusion
A networked FICON storage architecture for your mainframe is a well-documented industry best practice, for reasons both technical and financial. Networked storage architectures beat direct-attached architectures in terms of RAS, performance, scalability and long-run costs. The latest I/O enhancements to IBM mainframes, such as Dynamic Channel-path Management (DCM) and z/OS Discovery and Auto-Configuration (zDAC), require a networked storage architecture (with FICON directors) if the end user wishes to take advantage of them.
The IBM zEnterprise offers unprecedented performance, scalability and innovative new features, such as the zBX and support for Windows. Taking full advantage of zEnterprise requires an equally capable storage system and FICON director platform for connectivity. Hitachi Virtual Storage Platform paired with Brocade Gen5 DCX 8510 is the ideal combination for zEnterprise mainframes, whether intended for a traditional z/OS, Linux, PIM or private cloud environment. Hitachi Data Systems and Brocade have the experience to rely on, and VSP and DCX 8510 are the best platforms in the industry for mainframe data centers.
© Hitachi Data Systems Corporation 2013. All rights reserved. HITACHI is a trademark or registered trademark of Hitachi, Ltd. Universal Storage Platform, ShadowImage and TrueCopy
are trademarks or registered trademarks of Hitachi Data Systems Corporation. IBM, FICON, ESCON, System z, z/OS, zEnterprise, z/VM, z9, z10, s/390, z/VSE, FlashCopy, XRC, GDPS,
HyperSwap and DS8000 are trademarks or registered trademarks of International Business Machines. Microsoft and Windows are trademarks or registered trademarks of Microsoft
Corporation. All other trademarks, service marks, and company names are properties of their respective owners.
Notice: This document is for informational purposes only, and does not set forth any warranty, expressed or implied, concerning any equipment or service offered or to be offered by
Hitachi Data Systems Corporation.
WP-432-C DG March 2013
Corporate Headquarters
2845 Lafayette Street
Santa Clara, CA 95050-2639 USA
www.HDS.com
Regional Contact Information
Americas: +1 408 970 1000 or info@hds.com
Europe, Middle East and Africa: +44 (0) 1753 618000 or info.emea@hds.com
Asia Pacific: +852 3189 7900 or hds.marketing.apac@hds.com
Build the Optimal Mainframe Storage Architecture

  • 1. Build the Optimal Mainframe Storage Architecture With Hitachi Data Systems and Brocade Why Choose an IBM® FICON® Switched Network? DATA DRIVEN GLOBAL VISION CLOUD PLATFORM STRATEG ON POWERFUL RELEVANT PERFORMANCE SOLUTION CLO VIRTUAL BIG DATA SOLUTION ROI FLEXIBLE DATA DRIVEN V WHITEPAPER By Bill Martin, Hitachi Data Systems Stephen Guendert, PhD, Brocade March 2013
  • 2. WHITE PAPER 2 Contents Executive Summary 3 Introduction 4 Why Networked FICON Storage Is Better Than Direct-Attached Storage 4 Hitachi Virtual Storage Platform 4 Why Brocade Gen5 DCX 8510 Is the Best FICON Director 4 An Ideal Pairing: Hitachi Virtual Storage Platform and Brocade Gen5 DCX 8510 5 Why IT Should Choose Networked Storage for FICON Over Direct-Attached Storage 5 Technical Reasons for a Switched FICON Architecture 5 Business Reasons for a Switched FICON Architecture 10 Why Switched FICON: Summary 14 Hitachi Virtual Storage Platform 15 Scalability (3-D Scaling: Out, Up, Deep) 15 Performance 16 IBM 3390 and FICON Support 16 Hitachi Dynamic Provisioning 16 Hitachi Dynamic Tiering 17 Hitachi Remote Replication 17 Multiplatform Support 20 Cost-Savings Efficiencies 20 Brocade Gen5 DCX 8510 in Mainframe Environments 20 Reliability, Availability and Serviceability 21 Scalability 21 Pair the Two Platforms Together 22 Linux on the Mainframe 22 FICON and FCP Intermix 22 Private Cloud 22 Conclusion 23
  • 3. WHITE PAPER 3 Build the Optimal Mainframe Storage Architecture With Hitachi Data Systems and Brocade Executive Summary The IBM® System z® and newer zEnterprise® or, in other words, mainframes, continue to be a critical foundation in the IT infrastructure of many large companies today. An important element of the mainframe environment is the disk storage system (subsystem) that is connected to the mainframe via channels. The overall reliability, availability and performance of mainframe-based applications are dependent on this storage system. The performance demands, capacity, reliability, flexibility, efficiency and cost-effectiveness of the storage system are important aspects of any storage acquisition and configuration decision. The increasing demands for improved per- formance, in other words, throughput (IOPS) and response time, make this storage system a critical element of the IT infrastructure. Another key factor in configuring the storage system is the decision of how it should be connected to the mainframe channels: direct attached or through a switched IBM FICON® network. This decision impacts the flex- ibility, reliability and availability of the storage infrastructure and the efficiency of the storage administrators. Hitachi Virtual Storage Platform (VSP) is an enterprise-class storage system that provides a comprehensive set of storage and data services. These provide mainframe users with a cost-effective, highly reliable and available stor- age platform that delivers outstanding performance, capacity and scalability. VSP supports the operating systems used with IBM zEnterprise processors: z/OS® , z/VSE® , z/VM® , and Linux on System z. This industry-leading storage system provides IBM 3390 disk drive support across a variety of disk drive types to meet the variety of performance and capacity needs of mainframe environments. The platform provides an internal physical disk capacity of approxi- mately 2.5PB per storage system. With externally attached storage, the VSP can support up to 255PB of storage capacity. It supports 8Gb/sec FICON across all front-end ports for connectivity to the mainframe and 8Gb/sec Fibre Channel for connecting external storage. Using a FICON network configured with a switch or director to connect a storage system to the mainframe channels can significantly enhance reliability, flexibility and availability of storage systems. At the same time, it can maximize storage performance and throughput. A switched FICON network allows the implementation of a fan-in, fan-out con- figuration, which allows maximum resource utilization and simultaneously helps localize failures, improving availability. The Brocade Gen5 DCX 8510 is a backbone-class FICON or Fibre Channel director. The Brocade Gen5 DCX 8510 family of FICON directors provides the industry’s most powerful switching infrastructure for modern mainframe envi- ronments. It provides the most reliable, scalable, efficient, cost-effective, high-performance foundation for today’s highly virtualized mainframe environments. The Brocade Gen5 DCX 8510 builds upon years of innovation and experi- ence and leverages the core technology of Brocade systems, providing over 99.999% uptime in the world’s most demanding data centers. The Gen5 DCX 8510 supports the operating systems used with zEnterprise processors: z/OS and z/OS.e, z/VSE, z/VM, Linux on System z, and zTPF for System z. 
This industry-leading FICON director sup- ports 2, 4, 8, 10, and 16Gb/sec Fibre Channel links, FICON I/O traffic, and 1 gigabit Ethernet (GbE) or 10GbE links on Fibre Channel over IP (FCIP) while providing 8.2Tb/sec chassis bandwidth. The combination of switched FICON connectivity with Hitachi VSP connected to mainframe channels through a Brocade Gen5 DCX 8510 Director provides a powerful, flexible and highly available solution. Together, they support the storage features, performance and capacity needed for today’s mainframe environments.
  • 4. WHITE PAPER 4 Introduction This paper explores both technical and business reasons for implementing a switched FICON architecture instead of a direct-attached storage FICON architecture. It also explains why Hitachi Virtual Storage Platform and the Brocade FICON Director together provide an outstanding, industry-leading solution for FICON environments. With the many enhancements and improvements in mainframe I/O technology in the past 5 years, the question “Do I need FICON switching technology, or should I go with direct-attached storage?” is frequently asked. With up to 320 FICON Express8S channels supported on an IBM zEnterprise z114, z196 and zEC12, why not just direct-attach the control units? The short answer is that with all of the I/O improvements, switching technology is needed — now more than ever. In fact, there are more reasons to use switched FICON than there were to use switched ESCON. Some of these reasons are purely technical; others are more business-related. Why Networked FICON Storage Is Better Than Direct-Attached Storage The raw bandwidth of FICON Express8S running on IBM zEnterprise Systems is 40 times greater than the capabilities of IBM ESCON® . The raw I/Os per second (IOPS) capacity of FICON Express8S channels is even more impressive, particularly when a channel program utilizes the z High Performance FICON (zHPF) protocol. To utilize these tremen- dous improvements, the FICON protocol is packet-switched and, unlike ESCON, capable of having multiple I/Os occupy the same channel simultaneously. FICON Express8S channels on zEnterprise processors can have up to 64 concurrent I/Os (open exchanges) to dif- ferent devices. FICON Express8S channels running zHPF can have up to 750 concurrent I/Os on the zEnterprise processor family. Only when a director or switch is used between the host and storage device can the true perfor- mance potential inherent in these channel bandwidth and I/O processing gains be fully exploited. Hitachi Virtual Storage Platform Hitachi Virtual Storage Platform, with its vast functionality and throughput capability, is ideal for IBM mainframe environments and provides a comprehensive set of storage and data services. The flexibility in configuring and par- titioning VSP makes it ideal for mainframe environments, with multiple LPARS running multiple operating system images in the same SYSPLEX. The packaging, enhanced features and improved manageability of VSP provide mainframe users with a cost- effective, highly reliable and available storage platform that delivers outstanding performance, capacity and scalability. The storage platform easily supports both mainframe and open systems environments. For mainframe environments, it supports z/OS, z/VSE and z/VM. Additionally, with many organizations considering the benefits of or running LINUX on IBM zEnterprise processors, VSP supports this capability for both CKD and FBA disk formats. With support for FICON Express8S and the support of 2Gb, 4Gb and 8Gb FICON and 2Gb, 4Gb and 8Gb Fibre Channel connectivity, this platform delivers industry-leading I/O performance. A VSP can have up to 24 front-end directors with a total of 176 FICON ports. Each port can support more IOPS than a single zEnterprise FICON Express8 channel can deliver. As a result, it is ideally suited for connectivity to the mainframe through a switched FICON network. 
Why Brocade Gen5 DCX 8510 Is the Best FICON Director Emerging and evolving enterprise-critical workloads and higher density virtualization are continuing to push the limits of SAN infrastructures. This is even truer in a data center with IBM zEnterprise and its support for Microsoft® Windows® in the zEnterprise Blade Center Extension (zBX). The Brocade Gen5 DCX 8510 family features industry-leading 16Gb/sec performance, and 8.2Tb chassis bandwidth to address these next-generation I/O and
  • 5. WHITE PAPER 5 bandwidth-intensive application requirements. In addition, the Brocade Gen5 DCX 8510 provides unmatched slot- to-slot and port performance, with 512Gb/sec bandwidth per slot (port card/blade). And this performance comes in the most energy-efficient FICON director in the industry, using an average of less than 1 watt per Gb/sec, which is 15 times more efficient than competitive offerings. The Brocade Gen5 DCX 8510 family enables high-speed replication and backup solutions over metro or WAN links with native Fibre Channel (10Gb/sec or 16Gb/sec) and optional FCIP 1GbE or 10GbE extension support. These solu- tions are accomplished by integrating this technology via a blade (FX24-8) or standalone switch (Brocade 7800). Finally, this solution is accomplished with unsurpassed levels of reliability, availability and serviceability (RAS), based upon more than 25 years of Brocade experience in the mainframe space. This experience includes defining the FICON standards and authoring or co-authoring many of the FICON patents. An Ideal Pairing: Hitachi Virtual Storage Platform and Brocade Gen5 DCX 8510 The IBM zEnterprise architecture is the highest performing, most scalable, cost-effective, energy-efficient platform in mainframe computing history. To get the most out of your investment in IBM zEnterprise, you need a storage infra- structure, that is, a DASD platform and FICON director, which can match the impressive capabilities of zEnterprise. Hitachi Data Systems and Brocade, via VSP and Gen5 DCX 8510, together offer the highest performing and most reliable, scalable, cost-effective and energy-efficient products in the storage and networking industry. The experience of these 2 companies in the mainframe market, coupled with the capabilities of VSP and Gen5 DCX 8510, make pair- ing them with IBM’s zEnterprise the ideal “best in industry” storage architecture for mainframe data centers. Why IT Should Choose Networked Storage for FICON Over Direct-Attached Storage Direct-attached FICON storage might appear to be a great way to take advantage of FICON technology. However, a closer examination will show why a switched FICON architecture is a better, more robust design for enterprise data centers than direct-attached FICON. Technical Reasons for a Switched FICON Architecture There are 5 key technical reasons for connecting storage control units using switched FICON: ■■ Overcome buffer credit limitations on FICON Express8 channels. ■■ Build fan-in, fan-out architecture designs for maximizing resource utilization. ■■ Localize failures for improved availability. ■■ Increase scalability and enable flexible connectivity for continued growth. ■■ Leverage new FICON technologies. FICON Channel Buffer Credits When IBM introduced the availability of FICON Express8 channels, one very important change was the number of buffer credits available on each port per 4-port FICON Express8 channel card. While FICON Express4 channels had 200 buffer credits per port on a 4-port FICON Express4 channel card, this changed to 40 buffer credits per port on a FICON Express8 channel card. Organizations familiar with buffer credits will recall that the number of buffer credits required for a given distance varies directly in a linear relationship with link speed. In other words, doubling the link speed would double the number of buffer credits required to achieve the same performance at the same distance. 
Also, organizations might recall the IBM System z10™ Statement of Direction concerning buffer credits: “The FICON Express4 features are intended to be the last features to support extended distance without performance degradation. IBM intends to not offer FICON features with buffer credits for performance at extended distances. Future FICON features are intended to support up to
10km without performance degradation. Extended distance solutions may include FICON directors or switches (for buffer credit provision) or Dense Wave Division Multiplexers (for buffer credit simulation)."
IBM held true to its statement, and the 40 buffer credits per port on a FICON Express8/FICON Express8S channel card can support up to 10km of distance for full-size frame I/Os (2KB frames). What happens if an organization has I/Os with smaller than full-size frames? The distance supported by the 40 buffer credits would increase. It is likely that at faster future link speeds, the distance supported will decrease to 5km or less. A switched architecture allows organizations to overcome the buffer credit limitations of the FICON Express8/FICON Express8S channel card. Depending upon the specific model, FICON directors and switches can have more than 1300 buffer credits available per port for long-distance connectivity.
Fan-In, Fan-Out Architecture Designs
In the late 1990s, the open systems world started to implement Fibre Channel storage area networks (SANs) to overcome the low utilization of resources inherent in a direct-attached storage architecture. SANs addressed this issue through the use of fan-in and fan-out storage network designs. That is, multiple server host bus adapters (HBAs) could be connected through a Fibre Channel switch to a single storage port (fan-in), or a single server HBA could be connected through a Fibre Channel switch to multiple storage ports (fan-out). These same principles apply to a FICON storage network.
As a general rule, FICON Express8 and FICON Express8S channels offer different levels of performance, in terms of IOPS and bandwidth, than the storage host adapter ports to which they are connected. Therefore, a direct-attached FICON storage architecture may see very low channel or storage port utilization rates. To overcome this issue, fan-in and fan-out storage network designs are used.
A switched FICON architecture allows a single channel to fan out to multiple storage devices via switching, improving overall resource utilization. This can be especially valuable if an organization's environment has newer FICON channels, such as FICON Express8 or Express8S, but older tape drive technology. Figure 1 illustrates how a single FICON channel can concurrently keep several tape drives running at full-rated speeds. The actual fan-out ratios for connectivity to tape drives will, of course, depend on the specific tape drive and control unit; however, it is not unusual to see a FICON Express8 or Express8S channel fan out from a switch to 5 or 6 tape drives (a 1:5 or 1:6 fan-out ratio). The same principles apply for fan-out to storage systems. The exact fan-out ratio depends on the storage system model and host adapter capabilities for IOPS and/or bandwidth. On the other hand, several FICON channels can be connected through a director or switch to a single storage port to maximize port utilization and increase overall I/O efficiency and throughput.
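The relationship between buffer credits, link speed and distance described earlier in this section can be made concrete with a rule-of-thumb estimate. The sketch below assumes full-size (2KB) frames and roughly 1 buffer credit per 2km at 1Gb/sec, scaling linearly with link speed; this is an approximation for illustration only, not a substitute for vendor sizing guidance, but it is consistent with the figures quoted in this paper.

# Rule-of-thumb buffer credit estimate for full-size (2KB) FICON frames.
# Assumption: roughly 1 buffer credit per 2 km at 1 Gb/sec, scaling linearly
# with link speed. This matches the figures cited in this paper (40 credits
# for ~10 km at 8 Gb/sec, 200 credits for ~100 km at 4 Gb/sec).

def credits_required(distance_km: float, link_gbps: float) -> float:
    """Approximate buffer credits needed to keep a link streaming at full rate."""
    return 0.5 * link_gbps * distance_km

def distance_supported(credits: int, link_gbps: float) -> float:
    """Approximate distance (km) a port's buffer credits can sustain at full rate."""
    return credits / (0.5 * link_gbps)

if __name__ == "__main__":
    print(distance_supported(40, 8))    # FICON Express8 port: ~10 km
    print(distance_supported(200, 4))   # FICON Express4 port: ~100 km
    print(credits_required(100, 16))    # 16Gb/sec ISL over 100 km: ~800 credits

By this estimate, a 16Gb/sec inter-switch link spanning 100km needs on the order of 800 buffer credits, comfortably within the more than 1300 credits per port available on FICON directors.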
Figure 1. Switched FICON allows one channel to keep multiple tape drives fully utilized.
Keep Failures Localized
In a direct-attached architecture, a failure anywhere in the path renders both the channel interface and the control unit port unusable. The failure could be an entire FICON channel card, a single port on the channel card, the cable, an entire storage host adapter card, or an individual port on the storage host adapter card. In other words, a failure of any of these components affects both the mainframe connection and the storage connection. A direct-attached architecture therefore provides the worst possible reliability, availability and serviceability for FICON-attached storage.
With a switched architecture, failures are localized to only the affected FICON channel interface or control unit interface, not both. The nonfailing side remains available, and if the storage side has not failed, other FICON channels can still access that host adapter port via the switch or director (see Figure 2). This failure isolation, combined with fan-in and fan-out architectures, allows for the most robust storage architectures, minimizing downtime and maximizing availability.
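The difference in failure domains can also be shown with a small conceptual model. The sketch below is illustrative only; the CHPID and control unit port names are hypothetical, and the point is simply that, with a director in the path, surviving channels can still reach a healthy control unit port.

# Conceptual comparison of failure domains in direct-attached vs switched FICON.
# In a direct-attached design, a failed channel also takes its dedicated control
# unit (CU) port out of service; behind a director, other channels still reach it.

def reachable_cu_ports(channels, cu_ports, failed, switched):
    """Control unit ports still reachable after a set of component failures."""
    good_channels = [c for c in channels if c not in failed]
    good_ports = [p for p in cu_ports if p not in failed]
    if switched:
        # Any surviving channel can reach any surviving CU port via the director.
        return good_ports if good_channels else []
    # Direct-attached: channel i is hard-wired to CU port i.
    return [p for c, p in zip(channels, cu_ports)
            if c not in failed and p not in failed]

if __name__ == "__main__":
    chpids = ["CHPID-40", "CHPID-41"]          # hypothetical channel paths
    ports = ["CU-PORT-1A", "CU-PORT-2A"]       # hypothetical CU host adapter ports
    # A single channel failure: direct-attached also loses the paired CU port.
    print(reachable_cu_ports(chpids, ports, failed={"CHPID-40"}, switched=False))
    print(reachable_cu_ports(chpids, ports, failed={"CHPID-40"}, switched=True))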
Figure 2. A FICON director isolates faults and improves availability.
Scalable and Flexible Connectivity
Direct-attached FICON does not easily allow for dynamic growth and scalability, since a single FICON channel card port is tied to a single dedicated storage host adapter port. In such an architecture, there is a 1:1 relationship (no fan-in or fan-out). Since there is a finite number of FICON channels available (dependent on the mainframe model or machine type), growth in a mainframe storage environment with such an architecture can pose a problem. What happens if an organization needs more FICON connectivity but has run out of FICON channels? FICON switching and proper use of fan-in and fan-out in the storage architecture design will go a long way toward improving scalability. In addition, best-practice storage architecture designs include room for growth. With a switched FICON architecture, adding a new storage system, or a port in a storage system, is much easier: simply connect the new storage system or port to the switch. This eliminates the need to open the channel cage in the mainframe to add new channel interfaces, reducing both capital and operational costs. It also gives managers more flexible planning options when upgrades are necessary, since the urgency of upgrades is lessened.
What about the next generation of channels? The bandwidth capabilities of channels are growing at a much faster rate than those of storage devices. As channel speeds increase, switches will allow data center managers to take advantage of new technology as it becomes available, while protecting investments and minimizing costs.
Also, it is an IBM best-practice recommendation to use single-mode long-wave connections for FICON channels. Storage vendors, however, often offer both single-mode long-wave connections and multimode short-wave connections on their storage systems, allowing organizations to decide which to use based on the trade-off between cost and reliability. Some organizations' existing storage devices have a mix of single-mode and multimode connections. Since a single-mode FICON channel cannot be directly connected to a multimode storage host adapter, this could pose a problem. With a FICON director or switch in the path, however, organizations do not need to change the storage host adapter ports to comply with the single-mode best-practice recommendation for the FICON channels. The FICON switching device can have both types of connectivity: single-mode long-wave ports for attaching the FICON channels, and multimode short-wave ports for attaching the storage.
Furthermore, FICON switching elements at 2 different locations can be interconnected by fiber at distances of 100km or more, creating a cascaded FICON switched architecture. This setup is typically used in disaster recovery and business continuance architectures. As previously discussed, FICON switching allows resources to be shared. With cascaded FICON switching, those resources can be shared between geographically separated locations, allowing data to be replicated or tape backups to be made at the alternate site, away from the primary site, with no performance loss. Often, workloads are distributed such that both the local and remote sites are primary production sites, and each site uses the other as its backup.
While the fiber itself is relatively inexpensive, laying new fiber may require an expensive construction project. Dense wave division multiplexing (DWDM) can help get more out of fiber connections, but switch vendors now offer inter-switch links (ISLs) with up to 16Gb/sec of bandwidth, which can reduce the cost of DWDM or even eliminate the need for it. FICON switches maximize utilization of this valuable intersite fiber by allowing multiple environments to share the same fiber link. In addition, FICON switching devices offer unique storage network management features, such as ISL trunking and preferred pathing, which are not available with DWDM equipment.
FICON switches allow data center managers to further exploit intersite fiber sharing by enabling them to intermix FICON and native Fibre Channel Protocol (FCP) traffic, which is known as Protocol Intermix Mode, or PIM. Even in data centers where there is enough fiber to separate FICON and open systems traffic, preferred pathing features on a FICON switch can be a great cost saver. With preferred paths established, certain cross-site fiber can be allocated for the mainframe environment, while other fiber can be allocated for open systems. The ISLs can be configured so that only in the event of an ISL failure are the links shared by both open systems and mainframe traffic.
Leverage New Technologies
Over the past 5 years, IBM has announced a series of technology enhancements that require the use of switched FICON. These include:
■■ N_Port ID Virtualization (NPIV) support for z Linux.
■■ Dynamic Channel-Path Management (DCM).
■■ z/OS FICON Discovery and Auto-Configuration (zDAC).
NPIV allows for full support of LUN masking and zoning by virtualizing the Fibre Channel identifiers. This, in turn, allows each Linux on System z image to appear as if it has its own individual HBA, even though those images are in fact sharing FCP channels. IBM announced support for NPIV on z Linux in 2005; today, NPIV is supported on System z9®, z10, z196 and z114. Until NPIV was supported on System z, adoption of Linux on System z had been relatively slow. Since IBM began supporting NPIV on System z, adoption of Linux on System z has grown significantly: IBM believes approximately 19% of MIPS shipping on new z196s are for Linux on System z implementations. Implementation of NPIV on System z requires a switched architecture.
DCM is another feature that requires a switched FICON architecture. DCM provides the ability to have System z automatically manage FICON I/O paths connected to storage systems in response to changing workload demands.
Use of DCM helps simplify I/O configuration planning and definition, reduces the complexity of managing I/O, dynamically balances I/O channel resources, and enhances availability. DCM can best be summarized as a feature that allows for more flexible channel configurations (by designating channels as "managed") and for proactive performance management. DCM requires a switched FICON architecture because topology information is communicated via the switch or director. The FICON switch must have a control unit port (CUP) license and be configured or defined as a control unit in the hardware configuration definition (HCD).
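The balancing idea behind "managed" channels can be sketched at a conceptual level. The following is only an illustration of reassigning spare channel paths toward the control units showing the most demand; it is not IBM's DCM algorithm or interface, and the CHPID names, control unit names and demand figures are hypothetical.

# Conceptual sketch of dynamic channel balancing: spare ("managed") channel
# paths are handed to whichever control units currently show the highest
# uncovered demand. Illustration only; not the actual DCM implementation.

def assign_managed_channels(managed_channels, cu_demand):
    """Greedily assign each managed channel to the busiest control unit."""
    assignments = {cu: [] for cu in cu_demand}
    load = dict(cu_demand)                         # demand not yet covered
    for chpid in managed_channels:
        busiest = max(load, key=load.get)          # CU with most uncovered demand
        assignments[busiest].append(chpid)
        load[busiest] = max(0, load[busiest] - 1)  # one channel's worth covered
    return assignments

if __name__ == "__main__":
    # Hypothetical relative demand (units of channel capacity required).
    demand = {"CU-DASD-A": 3, "CU-DASD-B": 1, "CU-TAPE-C": 2}
    print(assign_managed_channels(["CHPID-50", "CHPID-51", "CHPID-52"], demand))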
z/OS FICON Discovery and Auto-Configuration (zDAC) is the latest technology enhancement for FICON. IBM introduced zDAC as a follow-on to an earlier enhancement in which the FICON channels log into the Fibre Channel name server on a FICON director. zDAC enables the automatic discovery and configuration of FICON-attached DASD and tape devices; essentially, it automates a portion of the HCD Sysgen process. zDAC uses intelligent analysis to help validate the compatibility of the System z and storage definitions, and uses built-in best practices to help configure for high availability and avoid single points of failure. zDAC is transparent to existing configurations and settings, and it is invoked and integrated with the z/OS HCD and z/OS Hardware Configuration Manager (HCM). zDAC also requires a switched FICON architecture.
IBM also introduced support for transport-mode FICON (known as z High Performance FICON, or zHPF) in October 2008 and announced further enhancements in July 2011. While a switched architecture is not required for zHPF, it is recommended.
Business Reasons for a Switched FICON Architecture
In addition to the technical reasons described earlier, the following business reasons support implementing a switched FICON architecture:
■■ Enable massive consolidation in order to reduce capital and operating expenses.
■■ Improve application performance at long distances.
■■ Support growth and enable effective resource sharing.
Massive Consolidation
With NPIV support on System z, server and I/O consolidation is very compelling (see Figure 3). IBM undertook a well-publicized project at its internal data centers (Project Big Green) and consolidated 3900 open systems servers onto 30 System z mainframes running Linux. IBM's total cost of ownership (TCO) savings was calculated taking into account footprint reductions, power and cooling, and management simplification costs. The result was nearly 80% TCO savings over a 5-year period. This scale of TCO savings is why 19% of new IBM mainframe processor shipments are now being used for Linux.
Implementation of NPIV requires connectivity from the FICON (FCP) channel to a switching device (a director or smaller port-count switch) that supports NPIV. A special microcode load is installed on the FICON channel to enable it to function as an FCP channel. NPIV allows the consolidation of up to 255 z Linux images ("servers") behind each FCP channel, using one port on a channel card and one port on the attached switching device for connecting these virtual servers. This enables massive consolidation of many HBAs, each previously attached to its own switch port in the SAN. As a best practice, IBM currently recommends configuring no more than 32 Linux images per FCP channel. Although this level of I/O consolidation was possible prior to NPIV support on System z, implementing LUN masking and zoning in the same manner as with open systems servers, SANs and storage was not possible until NPIV was supported with Linux on System z.
NPIV implementation on System z has also resulted in consolidation and adoption of a common SAN for distributed or open systems (FCP) and mainframe (FICON) traffic, commonly known as protocol intermix mode (PIM). While IBM has supported PIM in System z environments since 2003, adoption rates were low until NPIV implementations for Linux on System z picked up with the introduction of System z10 in 2008.
With the z10, enhanced segregation and security beyond simple zoning became possible through switch partitioning or virtual fabrics and logical switches. With 19% of new mainframes being shipped for use with Linux on System z, it is safe to say that at least 19% of mainframe environments are now running a shared PIM environment.
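The consolidation arithmetic behind these figures is straightforward, as the sketch below shows. It uses the 255-image architectural limit and the 32-image best-practice recommendation cited above; the 300-guest consolidation target is a hypothetical example.

import math

# NPIV consolidation arithmetic, using figures cited in this paper.
ARCHITECTURAL_MAX_PER_CHANNEL = 255   # NPIV limit per FCP channel
BEST_PRACTICE_PER_CHANNEL = 32        # current IBM recommendation

def fcp_channels_needed(linux_guests: int,
                        per_channel: int = BEST_PRACTICE_PER_CHANNEL) -> int:
    """FCP channels (and matching switch ports) needed for a given guest count."""
    return math.ceil(linux_guests / per_channel)

if __name__ == "__main__":
    guests = 300                                                       # hypothetical target
    print(fcp_channels_needed(guests))                                 # 10 at the 32:1 best practice
    print(fcp_channels_needed(guests, ARCHITECTURAL_MAX_PER_CHANNEL))  # 2 at the architectural limit
    # Without NPIV-style sharing, each consolidated server would keep its own
    # HBA and its own switch port: 300 ports instead of 10.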
Leveraging enhancements in switching technology, performance and management, PIM users can now fully populate the latest high-density directors with minimal or no oversubscription. They can use management capabilities such as virtual fabrics or logical switches to fully isolate open systems ports and FICON ports in the same physical director chassis. Rather than maintaining multiple partially populated switching platforms dedicated to either mainframe (FICON) or open systems (FCP) traffic, PIM allows for consolidation onto fewer physical switching devices, reducing management complexity and improving resource utilization. This, in turn, leads to lower operating costs and a lower TCO for the storage network. It also allows for a consolidated, simplified cabling infrastructure.
Figure 3. Organizations implement NPIV to consolidate I/O in z Linux environments.
Application Performance Over Distance
As previously discussed, the number of buffer credits per port on a 4-port FICON Express8 channel has been reduced to 40, supporting up to 10km without performance degradation. What happens if an organization needs to go beyond 10km with a direct-attached storage configuration? It will likely see performance degradation due to insufficient buffer credits: without a sufficient quantity of buffer credits, the "pipe" cannot be kept full with streaming frames of data.
Switched FICON avoids this problem (see Figure 4). FICON directors and switches have a sufficient quantity of buffer credits available on their ports to allow them to stream frames at full-line performance rates with no bandwidth degradation. IT organizations that implement a cascaded FICON configuration between sites can, with the latest FICON director platforms, stream frames at 16Gb/sec rates with no performance degradation for sites that are 100km apart.
Switched FICON technology also allows organizations to take advantage of hardware-based FICON protocol acceleration or emulation techniques for tape (reads and writes), as well as for zGM (z/OS Global Mirror, formerly known as XRC, or Extended Remote Copy). This emulation technology is available on standalone extension switches or on a blade in FICON directors. It allows the z/OS-initiated channel programs to be acknowledged locally at each site and avoids the back-and-forth protocol handshakes that normally travel between remote sites. It also reduces the impact of latency on application performance and delivers local-like performance over unlimited distances. In addition, this acceleration or emulation technology optimizes bandwidth utilization.
Why is bandwidth efficiency so important? Intersite bandwidth is typically the most expensive budget component in an organization's multisite disaster recovery or business continuity architecture. Anything that can be done to improve its utilization and/or reduce the bandwidth requirements between sites is likely to lead to significant TCO savings.
Figure 4. Switched FICON with emulation allows optimized performance and bandwidth utilization over extended distance.
Enable Growth and Resource Sharing
Direct-attached storage forces a 1:1 relationship between host connectivity and storage connectivity. In other words, each storage port on a storage system host adapter requires its own physical port connection on a FICON Express8 channel card. These channel cards are very expensive on a per-port basis, typically 4 to 6 times the cost of a FICON director port. Also, there is a finite number of FICON Express8S channels available on a zEnterprise 196 (a maximum of 320), as well as a finite number of host adapter ports in the storage system. If an organization has
a large configuration and a direct-attached FICON storage architecture, how does it plan to scale its environment? What happens if an organization acquires a company and needs additional channel ports? A switched FICON infrastructure allows cost-effective, seamless expansion to meet growth requirements.
Direct-attached FICON storage also typically results in underutilized host channel card ports and host adapter ports in storage systems. FICON Express8 and FICON Express8S channels can comfortably perform at high channel utilization rates, yet a direct-attached storage architecture typically sees channel utilization rates of 10% or less. As illustrated in Figure 5, leveraging FICON directors or switches allows organizations to maximize channel utilization.
Figure 5. Switched FICON drives improved channel utilization, while preserving CHPIDs for growth.
It is also very important to keep traffic for tape drives streaming, and to avoid stopping and starting the tape drives, as this leads to unwanted wear and tear on tape heads, cartridges and the tape media itself. Using FICON acceleration or emulation techniques, as described earlier, this can be accomplished with a configuration similar to the one shown in Figure 6. Such a configuration requires solid analysis and planning, but it will pay dividends for an organization's FICON tape environment.
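The fan-out and utilization argument can be quantified with a simple estimate. The throughput figures in the sketch below are hypothetical placeholders rather than measured values for any particular channel, tape drive or host adapter; the point is the shape of the arithmetic, which lines up with the 1:5 to 1:6 tape fan-out ratios discussed earlier.

import math

# Illustrative fan-out sizing: how many devices one FICON channel can keep
# busy, and what a 1:1 direct-attached design achieves instead. The numbers
# below are hypothetical placeholders, not vendor specifications.

def fan_out_ratio(channel_mbps: float, device_mbps: float) -> int:
    """Number of devices one channel can keep streaming (rounded down)."""
    return math.floor(channel_mbps / device_mbps)

def peak_direct_attached_utilization(channel_mbps: float, device_mbps: float) -> float:
    """Best-case channel utilization when one channel is dedicated to one device."""
    return device_mbps / channel_mbps

if __name__ == "__main__":
    channel = 800.0   # hypothetical effective channel throughput, MB/sec
    tape = 150.0      # hypothetical sustained tape drive rate, MB/sec
    print(fan_out_ratio(channel, tape))                          # 1:5 fan-out
    print(f"{peak_direct_attached_utilization(channel, tape):.0%}")
    # Even at its theoretical peak (~19%), a dedicated channel runs far below
    # capacity; observed 1:1 utilization in practice is often 10% or less.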
Figure 6. A well-planned configuration can maximize CHPID capacity utilization for FICON tape efficiency.
Finally, switches facilitate fan-in, which allows different hosts and LPARs whose I/O subsystems are not shared to share the same assets. While some benefits may be realized immediately, the potential value in future equipment planning can be even greater. With the ability to share assets, equipment that would be too expensive for a single environment can be deployed in a cost-saving manner. The most common example is replacing tape farms with virtual tape systems. By reducing the number of individual tape drives, maintenance (service contracts), floor space, power, tape handling and cooling costs are all reduced. Virtual tape also improves the reliability of data recovery, allows for significantly shorter recovery time objectives (RTO) and tighter recovery point objectives (RPO), and offers features such as peer-to-peer copies. However, without the ability to share these systems, it may be difficult to amass sufficient cost savings to justify the initial cost of virtual tape. And the only practical way to share these standalone tape systems or tape libraries is through a switch.
With disk storage systems, in addition to sharing the asset, it is sometimes desirable to share the data across multiple systems. The port limitations on a storage system may prohibit or limit this capability using direct-attached (point-to-point) FICON channels. Again, the switch can provide a solution to this issue. Even when there is no need to share devices during normal production, this capability can be very valuable in the event of a failure. Data sets stored on tape can quickly be read by CPUs that are picking up workload and are already attached to the same switch as the tape drives. Similarly, data stored on a storage system can be made available as soon as a fault is determined. Switch features, such as preconfigured port prohibit/allow matrix tables, can ensure that access intended only for a disaster scenario is prohibited during normal production.
Why Switched FICON: Summary
Direct-attached FICON might appear to be a great way to take advantage of FICON technology's advances over ESCON. However, a closer examination shows that switched FICON, like switched ESCON before it, is a better, more robust architecture for enterprise data centers. Switched FICON offers:
■■ Better utilization of host channels and their performance capabilities.
■■ Scalability to meet growth requirements.
■■ Improved reliability, problem isolation and availability.
■■ Flexible connectivity to support evolving infrastructures.
■■ More robust business continuity implementations via cascaded FICON.
■■ Improved distance connectivity, with improved performance over extended distances.
■■ New mainframe I/O technology enhancements, such as NPIV, FICON DCM, zDAC and zHPF.
Switched FICON also provides many business advantages and potential cost savings, including:
■■ The ability to perform massive server, I/O and SAN consolidation, dramatically reducing capital and operating expenses.
■■ Local-like application performance over any distance, allowing host and storage resources to reside wherever business dictates.
■■ More effective resource sharing, improved utilization, reduced costs and improved recovery time.
With the growing trend toward increased usage of Linux on System z, and the cost advantages of NPIV implementations and PIM SAN architectures, direct-attached storage in a mainframe environment is becoming a thing of the past. Investments made in switches for disaster recovery and business continuance are likely to pay the largest dividends. Having access to alternative resources and multiple paths to those resources can result in significant savings in the event of a failure. The advantages of a switched FICON infrastructure are simply too great to ignore.
Hitachi Virtual Storage Platform
Hitachi Data Systems has over 20 years of experience supporting IBM mainframe environments. A large portion of the installed base of Hitachi storage systems connects to IBM z/OS and S/390® mainframes via ESCON and FICON networks.
Hitachi Virtual Storage Platform builds on this experience and introduces new features and packaging to improve performance while lowering TCO. In addition to its new 3-D scaling architecture, it features lower power and cooling requirements, high-density packaging based on industry-standard 19-inch racks, faster microprocessors and a choice of disk drive types, including solid state disk (SSD), serial attached SCSI (SAS) and nearline SAS. This storage platform provides an industry-leading, reliable and highly available storage system for mainframes in IBM z/OS environments. It supports z/OS, z/VSE and z/VM for zEnterprise. Additionally, with many organizations considering the benefits of, or already running, Linux on IBM zEnterprise processors, Virtual Storage Platform supports this capability for both count key device (CKD) and fixed block architecture (FBA) disk formats. Hitachi has implemented many key performance features in support of these operating systems running on zEnterprise, including PAV, HyperPAV, z/HPF, Multiple Allegiance, MIDAW and Priority I/O Queuing. It also provides a unique mainframe storage management solution to deliver functionally compatible extended address volumes (EAV) for z/OS, data volume expansion (DVE), and IBM FlashCopy® SE (with space efficiency capability).
Hitachi Virtual Storage Platform is designed to be highly available and resilient. All critical components are implemented in pairs: if a component fails, the paired component can take over the workload without an outage. With its support of multiple RAID configurations, an organization's data is protected in the event of a disk drive problem. Additionally, with its industry-leading replication software and support of FlashCopy, FlashCopy SE and Hitachi Compatible Software for IBM XRC® (providing the functionality of IBM z/OS Metro/Global Mirror), copies of data can be maintained locally and at remote locations. This ensures data availability in case the primary copy becomes unusable or inaccessible.
Scalability (3-D Scaling: Out, Up, Deep)
Hitachi Virtual Storage Platform can scale up to provide increased performance, capacity, throughput and connectivity. It can scale out by dynamically combining multiple units into a single logical system with shared resources. It can
also scale deep by dynamically virtualizing new and existing external storage systems. This 3-D scaling means that VSP can grow nondisruptively to meet changing needs within the data center. It minimizes the outages needed to extend the platform and enhance functionality, while providing flexibility in the configuration and choice of disk technology to meet the specific needs of each environment.
The ability to scale deep is provided by Hitachi controller-based storage virtualization, which supports connectivity to external storage. This enables organizations to further extend the life of existing storage assets, including storage from a variety of other vendors. It also provides IBM mainframes the ability to connect to both enterprise and midrange storage platforms, some of which can be configured with lower-cost nearline SAS or SATA drives. This virtualization of external storage can potentially extend the life of existing storage assets and reduce costs. Three important benefits of scaling deep are:
■■ Enables the reuse of existing or legacy assets for less critical or less frequently accessed data.
■■ Simplifies management of external storage, with common management and data protection for internal and external storage.
■■ Supports the reuse of existing or legacy assets across data centers, within metro area network distances and across global distances, using the replication capabilities of the scale-up storage system.
Performance
Hitachi Virtual Storage Platform ushers in a new level of I/O throughput, response and scalability. It supports 8Gb FICON (FICON Express8 and FICON Express8S) and enables a single VSP FICON 8Gb port to handle the higher traffic rates that can be delivered by a single zEnterprise FICON Express8 or FICON Express8S channel. This storage networking is critical to optimizing performance and maximizing throughput in mainframe environments.
IBM 3390 and FICON Support
This industry-leading storage system provides 3390 disk drive support through emulation across a variety of disk drive types to meet the variety of performance and capacity needs of mainframe environments. The platform supports SSD flash drives, providing ultra-high-speed response with capacities of 200GB and 400GB, as well as 2.5-inch SAS drives and nearline SAS drives. It can control up to 65,280 logical volumes and provides an internal physical disk capacity of approximately 2.5PB per storage system. With externally attached storage, Hitachi Virtual Storage Platform can support up to 255PB of storage capacity. VSP supports 8Gb/sec FICON (FICON Express8 and FICON Express8S) across all front-end ports for connectivity to the mainframe and 8Gb/sec Fibre Channel for connecting external storage. VSP supports high-performance FICON (z/HPF) for z/OS. On the back end, it supports SAS, SATA and SSD drives, which are connected using the SAS 2 protocol with 6Gb/sec connectivity per back-end port.
Hitachi Dynamic Provisioning
Hitachi Dynamic Provisioning for Mainframe optimizes performance through extremely wide striping and more effective use of storage through thin provisioning (see Figure 7). In other words, it allocates storage to an application without actually mapping the corresponding physical storage until it is used. This separation of allocation from physical mapping results in more effective use of physical storage, with higher overall performance and higher rates of storage utilization.
Dynamic Provisioning also enables Dynamic Volume Expansion (DVE) of 3390 volumes and FlashCopy SE for more efficient use of storage when creating local copies.
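The separation of allocation from physical mapping described above can be illustrated with a small conceptual sketch. This models thin provisioning in general under simplified assumptions (a single pool, fixed-size pages, no space reclamation); it is not a representation of Hitachi Dynamic Provisioning's internal design, and the volume name and sizes are hypothetical.

# Conceptual thin provisioning: volumes are created at their full advertised
# size, but pool pages are mapped only when a page is first written.

class ThinPool:
    def __init__(self, physical_pages: int):
        self.physical_pages = physical_pages
        self.mapped = {}                         # (volume, page_index) -> pool page

    def create_volume(self, name: str, size_pages: int) -> "ThinVolume":
        return ThinVolume(name, size_pages, self)

    def map_page(self, key) -> None:
        if key not in self.mapped:
            if len(self.mapped) >= self.physical_pages:
                raise RuntimeError("pool exhausted: add capacity or reclaim space")
            self.mapped[key] = len(self.mapped)  # next free pool page

class ThinVolume:
    def __init__(self, name: str, size_pages: int, pool: ThinPool):
        self.name, self.size_pages, self.pool = name, size_pages, pool

    def write(self, page_index: int) -> None:
        # Physical capacity is consumed only on the first write to a page.
        self.pool.map_page((self.name, page_index))

if __name__ == "__main__":
    pool = ThinPool(physical_pages=1000)
    vol = pool.create_volume("PROD001", size_pages=5000)   # over-allocated volume
    for page in range(10):
        vol.write(page)
    print(len(pool.mapped), "of", pool.physical_pages, "pool pages actually in use")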
Figure 7. Hitachi Dynamic Provisioning for Mainframe optimizes performance.
Hitachi Dynamic Tiering
Hitachi Dynamic Tiering (HDT) for Mainframe enables the automatic movement of data between tiers. HDT moves highly accessed blocks of data to the highest storage tier and migrates less frequently accessed data to the lower tiers. This significantly reduces the time storage administrators have to spend analyzing storage usage and managing the movement of data to optimize performance. HDT complements z/OS System Managed Storage and can move pages of data to the appropriate tier when needed, rather than moving entire datasets.
Hitachi Remote Replication
Business continuity is more important than ever in today's business environment, as demonstrated by the natural disasters and the physical intrusion and destruction of IT resources seen over the last few years. A loss of business-critical data can force a company to its knees, and even into bankruptcy. In addition, regulatory compliance requirements demand a business continuity and disaster recovery plan, and an infrastructure to support that plan, under threat of stiff fines and business restrictions. Hitachi remote replication offerings provide the ability to copy critical data to off-site facilities, whether within a metropolitan area, at distant remote locations, or both. The combination of the enterprise-level Hitachi Virtual Storage Platform with Brocade's solutions to extend and optimize fabric connectivity facilitates the movement of your business-critical data over longer distances. Together, they enable and enhance your ability to support business continuity and disaster recovery.
Hitachi TrueCopy®
Hitachi TrueCopy synchronous software provides a continuous, nondisruptive, host-independent remote data replication solution for disaster recovery or data migration over distances within the same metropolitan area. It provides a no-data-loss, rapid-restart solution (see Figure 8). For enterprise environments, TrueCopy synchronous software combined with Hitachi Universal Replicator on Virtual Storage Platform allows for advanced 3 data center configurations. This includes consistency across up to 12 storage systems in 1 site for optimal data protection.
Figure 8. Hitachi TrueCopy synchronous supports business continuity and disaster recovery efforts.
TrueCopy synchronous supports business continuity and disaster recovery efforts, improving business resilience. It improves service levels by reducing planned and unplanned downtime of customer-facing applications. It enables frequent, nondisruptive disaster recovery testing with an online copy of current and accurate production data. TrueCopy synchronous can be seamlessly integrated into existing z/OS environments and controlled with familiar PPRC commands or with Hitachi Business Continuity Manager software.
Hitachi Universal Replicator
Hitachi Universal Replicator provides asynchronous data replication across any distance, for both internal Virtual Storage Platform storage and external storage managed by VSP (see Figure 9). Universal Replicator provides the enterprise-class performance associated with storage-system-based replication. At the same time, it provides resilient business continuity without the need for remote host involvement, redundant servers or replication appliances. Universal Replicator maintains the integrity of replicated copies without impacting processing, even when replication network outages occur or optimal bandwidth is not available. Compared to traditional methods of storage-system-based replication, Universal Replicator leverages performance-optimized, disk-based journals, resulting in significantly reduced cache utilization and increased bandwidth utilization.
Universal Replicator ensures the availability of up-to-date copies of data in up to 3 dispersed locations by leveraging the synchronous capabilities of Hitachi TrueCopy synchronous. In the event of a disaster at the primary data center, the delta resync feature of Universal Replicator enables fast failover and restart of the application without loss of data, whether at the local or remote data center.
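The 3 data center pattern just described, with a synchronous leg to a nearby site, an asynchronous journal-based leg to a distant site, and a delta resync between the surviving copies after a primary outage, can be sketched conceptually as follows. The model covers only the topology and the journal/delta idea under simplified assumptions; it does not represent the TrueCopy or Universal Replicator interfaces, and the site names and write counts are hypothetical.

# Conceptual 3-data-center model: synchronous metro copy, asynchronous
# journaled remote copy, and a delta-only resync after losing the primary.

class Site:
    def __init__(self, name: str):
        self.name = name
        self.applied = 0            # highest write sequence number applied here

class ThreeDataCenter:
    def __init__(self):
        self.primary = Site("primary")
        self.metro = Site("metro-sync")
        self.remote = Site("remote-async")
        self.journal = []           # writes not yet applied at the remote site

    def write(self, seq: int) -> None:
        # Synchronous leg: the metro copy is updated before the host I/O completes.
        self.primary.applied = seq
        self.metro.applied = seq
        # Asynchronous leg: the write is journaled and shipped later.
        self.journal.append(seq)

    def drain_journal(self, count: int) -> None:
        for _ in range(min(count, len(self.journal))):
            self.remote.applied = self.journal.pop(0)

    def delta_resync_after_primary_loss(self) -> int:
        # Only the writes the remote site has not yet applied move from the
        # metro copy; a full re-copy of the volumes is not required.
        delta = self.metro.applied - self.remote.applied
        self.remote.applied = self.metro.applied
        self.journal.clear()
        return delta

if __name__ == "__main__":
    dc = ThreeDataCenter()
    for seq in range(1, 101):
        dc.write(seq)
    dc.drain_journal(80)                              # remote site lags slightly
    print(dc.delta_resync_after_primary_loss())       # resyncs 20 writes, not 100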
Figure 9. Hitachi Universal Replicator ensures availability of current copies of data in up to 3 dispersed locations.
Universal Replicator can be integrated into an IBM GDPS® environment, providing a much more cost-effective and complete recovery solution than the IBM alternative of z/OS Global Mirror. With Universal Replicator and TrueCopy synchronous support of a 3 data center replication solution, VSP supports delta resync, which is similar to, but more efficient than, z/OS Metro/Global Mirror Incremental Resync.
Hitachi Virtual Storage Platform also supports IBM z/OS Basic HyperSwap®, which is enabled by IBM Tivoli Productivity Center for Replication for System z Basic Edition (TPC-R). With TPC-R, the administrator can develop a z/OS Basic HyperSwap configuration using VSP and create a z/OS Basic HyperSwap plan, for a 2 data center configuration with TrueCopy synchronous or a 3 data center configuration with TrueCopy synchronous, Universal Replicator and Business Continuity Manager. Initially, VSP will support a maximum of 3 storage systems at each site. VSP with Universal Replicator will support a 4 data center configuration, allowing 2 long-distance asynchronous data paths and 2 synchronous paths. This solution offers the ability to create multiple copies of data in many locations and reduces the impact of data migration.
Hitachi Compatible Software for IBM® XRC®
Hitachi Compatible Replication software for IBM XRC is a cross-license technology between Hitachi Data Systems and IBM that provides support for z/OS Global Mirror. This Hitachi software is fully compatible with IBM XRC and lets administrators create and share server-based remote copies between Hitachi Virtual Storage Platform, the Hitachi Universal Storage Platform family and IBM enterprise storage systems, such as the DS8000® system. Hitachi Data Systems is the only 3rd-party storage vendor capable of fully supporting IBM XRC command sets.
Hitachi Business Continuity Manager
Hitachi Business Continuity Manager enables centralized, enterprise-wide replication management for IBM z/OS mainframe environments. Through a single, consistent interface based on the Time Sharing Option/Interactive System Productivity Facility (TSO/ISPF), it uses full-screen panels to automate Hitachi Universal Replicator, Hitachi TrueCopy
synchronous (including multisite topologies) and in-system Hitachi ShadowImage Heterogeneous Replication software operations.
This software automates complex disaster recovery and planned outage functions, resulting in reduced recovery times. It also enables advanced 3 data center disaster recovery configurations and extended consistency group capabilities. Business Continuity Manager provides built-in capabilities for monitoring and managing critical performance metrics and thresholds for proactive problem avoidance. It also delivers autodiscovery of enterprise-wide storage configuration and replication objects, eliminating tedious, error-prone data entry that can cause outages. Hitachi Business Continuity Manager integrates with the Hitachi replication management framework, Hitachi Replication Manager software, for replication monitoring and continuous operations in mainframe (and open systems) environments.
Multiplatform Support
Hitachi Virtual Storage Platform can support multiple operating systems at the same time. Although many mainframe organizations have been reluctant to share their storage platforms with open systems servers, the need to share storage is becoming more important: organizations are implementing Linux on System z, and the introduction of the IBM zEnterprise BladeCenter Extension (zBX) for mainframe processors enables Microsoft Windows to operate as part of zEnterprise servers. VSP can be configured to facilitate the isolation of disparate types of data. Additionally, the FICON and Fibre Channel ports are completely separate and help ensure that critical mainframe data cannot be accessed directly by open systems servers or clients.
Cost-Savings Efficiencies
This storage system is designed to lower TCO wherever possible. The physical packaging has been designed to use standard-size racks and chassis. The internal layout supports front-to-back airflow, to facilitate the use of hot and cold aisles and maximize the efficiency of data center cooling. In combination with very fast processors, denser packaging and smaller batteries, the reduced physical floor space and lower heating and cooling requirements result in very low power consumption per square foot (kVA/sq. ft.). Operating expenditure (opex) is lower than in previous systems thanks to denser packaging, blade architecture, low-power memory, small-form-factor disks, SSD drives and flash-protected cache with its smaller batteries. Hitachi Data Systems is committed to continuing to deliver more efficient packaging, resulting in more sustainable products.
Brocade Gen5 DCX 8510 in Mainframe Environments
Now on its 5th generation (1G, 2G, 4G, 8G and 16G) of switching technology (Gen5), Brocade has the experience to rely on. The company has been in the mainframe storage networking business for more than 20 years, going as far back as the parallel channel extension technology of the late 1980s. Brocade has a history of thought leadership: it holds 4 of its own FICON patents, as well as 5 joint FICON patents with IBM on technologies such as the FICON bridge card and the control unit port (CUP). Brocade helped IBM develop Fibre Connection (FICON), and in 2000 the 1st IBM-certified FICON network infrastructure, using 1Gb/sec ED5000 Directors, was deployed. Brocade has the only FICON architecture certification program (BCAF) in the industry. Brocade manufactured the 9032-5 ESCON director for IBM, and pioneered ESCON channel extension emulation technology.
Brocade has continued its heritage of mainframe storage networking thought leadership with 9 generations of FICON directors. These products include the current industry-leading FICON directors, such as the DCX and DCX 8510, and FICON channel extension, such as the Brocade 7800 and FX8-24 extension blade.
Reliability, Availability and Serviceability
The largest corporations in the world literally run their businesses on mainframes. Government institutions in many countries worldwide also rely on the mainframe for their critical computing needs. RAS qualities for these mission-critical environments are of the utmost importance. Mainframe practitioners in these organizations avoid risk at all costs: they never want to suffer an unscheduled outage, and they want to minimize, if not outright eliminate, scheduled or planned outages. Mainframes such as the IBM zEnterprise have historically been the rock-solid pillar of computing RAS, and mainframe practitioners have a history of creating I/O infrastructures with "five nines" availability. For FICON channel connectivity to mainframe-attached storage, these same organizations require a FICON director platform that offers the same levels of RAS as the mainframe itself. The Brocade Gen5 DCX 8510 is the ideal FICON director for these RAS requirements.
The Brocade Gen5 DCX 8510 FICON Director features a modular, high-availability architecture that supports these mission-critical mainframe environments. The Brocade Gen5 DCX 8510 chassis has been engineered from inception for "five nines" of availability by providing multiple fans (supporting hot aisle/cold aisle layouts), multiple fan connectors, dual core blade internal connectivity, dual control processors, dual power supplies, a passive backplane and dual I/O timing clocks. These features and the switching design of the Brocade Gen5 DCX 8510 result in leading mean time between failures (MTBF) and mean time to recovery or repair (MTTR) numbers. In a recent study performed with a sample size of 26,593 Brocade products, the average downtime was 0.53 minutes per year, for an availability rate of 99.99984%. It is this kind of availability that consistently leads OEM partners such as HDS to praise Brocade products for their quality.
Scalability
With the advent of the zBX and the zEnterprise Unified Resource Manager, private cloud computing centered on the IBM zEnterprise has emerged as a "hot topic." Cloud computing requires a highly scalable (hyper-scale) storage networking architecture to support it. Hyper-Scale Inter-Chassis Link (ICL) is a unique Brocade Gen5 DCX 8510 feature that provides connectivity among 2 or more Brocade 8510-4 or 8510-8 chassis. This is the 2nd generation of ICL technology from Brocade, now using optical QSFP (Quad Small Form-factor Pluggable) connections; the 1st generation used a copper connector. Each ICL connects the core routing blades of two 8510 chassis and provides up to 64Gb/sec of throughput within a single cable. The Brocade 8510-8 allows up to 32 QSFP ports, and the 8510-4 allows up to 16 QSFP ports, to help preserve switch ports for end devices.
This 2nd generation of Brocade optical ICL technology provides a number of benefits to the organization. By upgrading from copper connectors to an optical form factor, Brocade has increased the distance of the connection from 2 meters to 50 meters. QSFP combines 4 cables into 1 cable per port, significantly reducing the number of ISL cables the customer needs to run. And since the QSFP connections reside on the core blades within each 8510, they do not use up connections on the slot line cards. This improvement frees up to 33% of the available ports for additional server and storage connectivity.
Dual-chassis backbone topologies connected through low-latency ICL connections are ideal in a FICON environment. The majority of FICON installations have switches connected in dual or triangular topologies, using ISLs to meet the FICON requirement for low latency between switches. The new 64Gb/sec QSFP-based ICLs enable simpler, flatter, low-latency chassis topologies spanning distances of up to 50 meters with off-the-shelf cables. They reduce interswitch cables by 75% and preserve 33% of front-end ports for servers and storage, leading to fewer cables and more usable ports in a smaller footprint.
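The ICL figures above follow from simple arithmetic, shown in the sketch below: 4 lanes of 16Gb/sec per QSFP cable yield 64Gb/sec per ICL, and replacing 4 discrete ISL cables with 1 QSFP cable is the quoted 75% cabling reduction.

# Arithmetic behind the quoted ICL figures.
LANES_PER_QSFP = 4      # optical lanes bundled into one QSFP cable
LANE_GBPS = 16          # Gb/sec per lane

icl_bandwidth_gbps = LANES_PER_QSFP * LANE_GBPS   # 64 Gb/sec per ICL cable
cable_reduction = 1 - 1 / LANES_PER_QSFP          # 0.75, i.e. 75% fewer cables

if __name__ == "__main__":
    print(icl_bandwidth_gbps, "Gb/sec per ICL cable")
    print(f"{cable_reduction:.0%} fewer interswitch cables")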
Pair the Two Platforms Together
Traditional (z/OS) Mainframe Environments
In a "traditional" z/OS mainframe environment, RAS and performance are the key concerns for most organizations. These characteristics provide the stability for the mainframe-based applications on which the largest companies in the world run their businesses. Dr. Thomas E. Bell, winner of the Computer Measurement Group (CMG) Michelson Award for lifetime achievement in the computer performance field, once famously commented that "all CPUs wait at the same speed." Likewise, Dr. Steve Guendert, a CMG board member, has commented in his blog that "The IBM zEnterprise is a hungry machine, and its users need to feed the I/O beast." Response time means money in these environments: the ability to process transactions more rapidly provides companies a competitive advantage in today's financial industry. Hitachi Virtual Storage Platform and Brocade DCX 8510, together, make sure the "I/O beast is fed."
Linux on the Mainframe
A 2011 IDC report indicated that, of all the mainframes being shipped, approximately 19% of the processing power is intended for Linux. And IBM has been quoted as saying that 32% of IBM's zEnterprise installed base is running integrated facility for Linux (IFL) specialty engines. Regardless of whether Linux is running as a guest under z/VM or natively in an LPAR, it is an important trend that cannot be ignored. This trend has been growing since the 2005 introduction of support for NPIV on System z. IT organizations are realizing that there are significant cost savings to be gained by moving to Linux on System z, in terms of hardware acquisition, software licensing and operational costs such as power and cooling. Hitachi VSP and Brocade Gen5 DCX 8510 are the ideal choice for these Linux environments. VSP offers very powerful virtualization, support for NPIV, and both Dynamic Provisioning and Dynamic Tiering. Brocade Gen5 DCX 8510 offers full support for NPIV, and its Virtual Fabrics functionality allows for highly secure separation of the z/OS data traffic from the Linux traffic on the FICON director.
FICON and FCP Intermix
FICON and FCP intermix, or protocol intermix mode (PIM), is another growing trend in mainframe environments. Linux on System z has been the major driver of this trend, as its very nature often leads mainframe end users to use both FCP channels and FICON channels on the mainframe. IBM's recent announcement and general availability of support for Windows blade servers on the zEnterprise BladeCenter Extension (zBX) is likely to drive even further adoption of PIM as a storage networking architecture. The virtualization, performance, scalability and tiering capabilities of Hitachi VSP make it an ideal disk storage platform for a PIM storage architecture. Its performance and Virtual Fabrics capabilities, coupled with Brocade's immense open systems SAN experience, make the DCX 8510 the ideal director platform to pair with VSP in a PIM architecture.
Private Cloud
The ideas behind cloud computing are well known to experienced mainframers, who remember "service bureau computing." Private cloud computing is a "hot topic": it is seeing a lot of adoption, and the concept of IBM zEnterprise systems at the center of a private cloud is gaining a lot of traction. Private cloud computing relies on extensive virtualization.
This virtualization is not limited to servers and applications; it extends to everything in the data center, most notably the storage devices and the network. Hitachi Virtual Storage Platform paired with Brocade Gen5 DCX 8510 creates the ideal architecture for a mainframe-centric private cloud.
Conclusion
A networked FICON storage architecture for your mainframe is a well-documented industry best practice, for a wide variety of reasons both technical and financial. Networked storage architectures beat direct-attached architectures in terms of RAS, performance, scalability and long-run costs. The latest I/O enhancements to IBM mainframes, such as Dynamic Channel-Path Management (DCM) and z/OS FICON Discovery and Auto-Configuration (zDAC), require a networked storage architecture (with FICON directors) if the end user wishes to take advantage of them.
The IBM zEnterprise offers unprecedented performance, scalability and innovative new features, such as the zBX and support for Windows. To take full advantage of a zEnterprise, the end user needs an equally capable storage system and FICON director platform for connectivity. Hitachi Virtual Storage Platform paired with Brocade Gen5 DCX 8510 is the ideal combination for zEnterprise mainframes, whether intended for a traditional z/OS, Linux, PIM or private cloud environment. Hitachi Data Systems and Brocade have the experience to rely on, and VSP and DCX 8510 are the best platforms in the industry for mainframe data centers.
© Hitachi Data Systems Corporation 2013. All rights reserved. HITACHI is a trademark or registered trademark of Hitachi, Ltd. Universal Storage Platform, ShadowImage and TrueCopy are trademarks or registered trademarks of Hitachi Data Systems Corporation. IBM, FICON, ESCON, System z, z/OS, zEnterprise, z/VM, z9, z10, S/390, z/VSE, FlashCopy, XRC, GDPS, HyperSwap and DS8000 are trademarks or registered trademarks of International Business Machines. Microsoft and Windows are trademarks or registered trademarks of Microsoft Corporation. All other trademarks, service marks, and company names are properties of their respective owners.
Notice: This document is for informational purposes only, and does not set forth any warranty, expressed or implied, concerning any equipment or service offered or to be offered by Hitachi Data Systems Corporation.
WP-432-C DG March 2013
Corporate Headquarters
2845 Lafayette Street
Santa Clara, CA 96050-2639 USA
www.HDS.com
Regional Contact Information
Americas: +1 408 970 1000 or info@hds.com
Europe, Middle East and Africa: +44 (0) 1753 618000 or info.emea@hds.com
Asia Pacific: +852 3189 7900 or hds.marketing.apac@hds.com