Power Systems with POWER 8
Enterprise Technical Sales
IBM Systems & Technology Group
IBM Power SystemsTM
© 2013 IBM Corporation
POWER7™ Technical Excellence
and Announcement Highlights
4
IBM’s Virtualization
5
IBM’s 39-year history of leadership in virtualization
1967: IBM develops the hypervisor that would become VM on the mainframe
1973: IBM announces the first machines to do physical partitioning
1987: IBM announces LPAR on the mainframe
1997: POWER LPAR design begins
2001: IBM introduces LPAR in POWER4™ based systems with AIX 5L™
2004: Advanced POWER™ Virtualization ships
Timeline reference http://www.levenez.com/unix/history.html#01
Customer quote source: rku.it case study published at http://www.ibm.com/software/success/cssdb.nsf/CS/JSTS-6KXPPG?OpenDocument&Site=eserverpseries
“In our opinion, they [System p servers] bring mainframe-quality virtualization capabilities to the world of AIX®.”
– Ulrich Klenke, CIO, rku.it, January 2006
Advanced POWER Virtualization
on IBM System p™ servers
6
IBM APV Benefits
 Can help lower the cost of
existing infrastructure
 Can increase business
flexibility and reduce the
complexity to grow your
infrastructure
 Deployed in production
by a significant number of
System p clients5
Advanced POWER Virtualization on IBM System p
1) Advanced POWER Virtualization (APV) is an optionally orderable feature on IBM System p, 2) Partition Load Manager (PLM) is not supported on OpenPower / Linux Partitions, 3) Only available on select models, 4) “Business Case for IBM System p5 Virtualization,” Economic Benefits of IT Simplification. International Technology Group,
02/10/2006. Study methodology: Companies in financial services, manufacturing and retail with $15 Billion+ revenues and total 200,000+ employees focusing on UNIX® large enterprise environments with multiple, broad-ranging applications. Study compared the cost of the company's workload running on multiple vendor servers and
employing minimal virtualization to the cost of the company's workload running on the p5-510, 550, 570, 590 and 595 – all using Advanced POWER Virtualization [APV]. APV is standard on System p5 590 and 595. Other System p servers have the option to add APV except the System p5 185. This cost analysis was performed for financial
services, manufacturing and retail example environments with an overall average savings of up to 62% in TCO savings by virtualizing and consolidating on the System p servers. For further information, see the white paper at: http://www-03.ibm.com/systems/p/library /consult/ itg_p5virtualization.pdf Total Cost of Ownership may not
be reduced in each consolidation case. TCO depends on the specific customer environment, the existing environments and staff, and the consolidation potential. , 5) IBM sales Statistics, *All statements regarding IBM future directions and intent are subject to change or withdrawal without notice and represent goals and objectives only.
Any reliance on these Statements of General Direction is at the relying party's sole risk and will not create liability or obligation for IBM. IBM Confidential
Advanced POWER Virtualization1
Micro-Partitioning™
• Share processors across multiple partitions; minimum 1/10th of a processor
• Create up to 10 micro-partitions for each System p5 processor
• Resize without rebooting your system
Virtual I/O Server
• Share Ethernet, SCSI and Fibre Channel disks
Partition Load Manager2
• Automatically balance processor and memory requests
Integrated Virtualization Manager3
• Manage a single system without an HMC
Partition Mobility (4Q07*)
• Move a running partition from one POWER6 server to another with no downtime
Linux on POWER
7
Databases
Applications
Web
System p virtualization
 Proven mainframe-inspired
Hypervisor
 Outstanding RAS
 Low Overhead
 Easy-to-use management
interface
 AIX 5L™ and Linux
 Deployed in production
by a significant number of
System p clients*
*IBM Sales Statistics
8
How does it work?
9
Advanced POWER Virtualization Option
Virtual I/O Server
– Shared Ethernet
– Shared SCSI and Fibre Channel-attached disk subsystems
– Supports AIX 5L V5.3 and Linux* partitions
Micro-Partitioning
– Share processors across multiple partitions
– Minimum partition 1/10th processor
– AIX 5L V5.3, Linux*, or i5/OS**
Partition Load Manager****
– Balances processor and memory requests
Managed via HMC or IVM***
[Diagram: a dynamically resizable pool of CPUs hosts micro-partitions (AIX 5L V5.3, Linux, i5/OS V5R3**) alongside dedicated AIX 5L V5.2 and Linux partitions; a Virtual I/O Server partition provides Ethernet and storage sharing over virtual I/O paths; PLM agents in managed partitions report to the Partition Load Manager server; the Hypervisor underlies all partitions.]
* SLES 9 or RHEL AS 4 and above
**Available on selected p5-570, p5-590 and p5-595 models
***Available on System p5 560Q and below as well as the BladeCenter® JS21
****Available for AIX 5L V5.2 or above (RPQ required on POWER4)
10
Advanced POWER Virtualization Option – overview chart (identical to slide 9), repeated here as a lead-in to the next topic.
11
Micro-Partitioning technology
Partitioning options
– Micro-partitions: up to 254*
– Configured via the HMC
Number of logical processors
– Minimum/maximum
Entitled capacity
– In units of 1/100 of a CPU
– Minimum 1/10 of a CPU
Variable weight
– % share (priority) of surplus capacity
Capped or uncapped partitions
[Diagram: dynamic LPARs on whole processors (AIX 5L V5.2, AIX 5L V5.3) alongside micro-partitions (Linux, i5/OS V5R3**, AIX 5L V5.3, AIX 5L V5.3, Linux) drawing their entitled capacity, between min and max, from a pool of 6 CPUs under the Hypervisor.]
*on p5-590 and p5-595
** on p5-570, p5-590 and p5-595
Micro-Partitioning technology allows each processor to be subdivided into as many as 10 “virtual servers”, helping to consolidate UNIX® and Linux applications.
Note: Micro-partitions are available via the optional Advanced POWER Virtualization (POWER Hypervisor and VIOS) features.
12
How to create a Micro-Partitioning
LPAR?
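A micro-partition is normally created through the HMC graphical wizard introduced here. As a command-line illustration (a minimal sketch, not taken from the slide), the same shared-processor LPAR can be defined with the HMC CLI; the managed-system name p5-570, the partition name web01 and all attribute values are assumptions, and the mksyscfg command is shown on several lines for readability but should be entered as a single line.
mksyscfg -r lpar -m p5-570 -i "name=web01,profile_name=default,lpar_env=aixlinux,
  proc_mode=shared,min_proc_units=0.1,desired_proc_units=0.5,max_proc_units=2.0,
  min_procs=1,desired_procs=2,max_procs=4,sharing_mode=uncap,uncap_weight=128,
  min_mem=1024,desired_mem=4096,max_mem=8192"
lssyscfg -r lpar -m p5-570 -F name,state          (confirm the new partition exists)
The 0.1 / 0.5 / 2.0 processing-unit values correspond to the minimum, desired and maximum entitled capacity described on the previous slide; sharing_mode=uncap with a weight of 128 lets the partition consume surplus capacity in proportion to its priority.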
13
Physical Processors and Virtual
Processors
14
IBM System p5: Simultaneous multithreading
[Diagram: per-cycle utilization of the execution units (FX0, FX1, LS0, LS1, FP0, FP1, BRX, CRL) on POWER4 (single threaded) versus POWER5+ (simultaneous multithreading), with each cycle marked as Thread0 active, Thread1 active, or no thread active; system throughput rises from ST to SMT.]
 Utilizes unused execution unit cycles
 Presents symmetric multiprocessing (SMP) programming model to software
 Natural fit with superscalar out-of-order execution core
 Dispatch two threads per processor: “It’s like doubling the number of processors.”
 Net result:
– Better performance
– Better processor utilization
Appears as four CPUs per chip to the operating system (AIX 5L V5.3 and Linux)
15
# smtctl -m off
# smtctl -m on
Simultaneous Multi-Threading (On and Off)
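A brief usage note (assuming an AIX 5L V5.3 or later partition, since SMT is controlled per operating system instance): smtctl with no arguments reports the current mode, and the -w flag chooses whether a change takes effect immediately or at the next boot.
# smtctl                         (report the current SMT mode and thread status)
# smtctl -m off -w now           (disable SMT immediately)
# smtctl -m on -w boot           (re-enable SMT at the next reboot)
# lparstat 1 1                   (the System configuration banner includes the smt= setting)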
16
Advanced POWER Virtualization Option – overview chart (identical to slide 9), repeated here as a lead-in to the next topic.
17
Virtual Ethernet (Partition Communication)
 Dedicated or Shared Partitions
 Virtual I/O Server is not necessary
[Diagram: five partitions (three AIX 5L V5.3 and two Linux), each with its own TCP/IP stack, communicating over 2 VLANs through the Hypervisor.]
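From inside a client partition the virtual adapters look like ordinary Ethernet devices. The commands below are a minimal sketch (not from the slide), assuming an AIX 5L V5.3 LPAR whose virtual Ethernet adapter was defined from the HMC; the device names ent0 and en0 are examples.
# lsdev -Cc adapter | grep ent       (virtual Ethernet adapters show up as entX devices)
# entstat -d ent0 | grep -i vlan     (displays the Port VLAN ID and any additional VLAN tags)
# ifconfig en0                       (the matching enX interface is configured with TCP/IP as usual)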
18
Creating a Virtual I/O Server
19
Virtual I/O Server - Ethernet Sharing
 Configured like a standard Ethernet
 Can have multiple connections per partition
 Virtual “MAC” addressing
 Each adapter can support 16 virtual Ethernet LANs
[Diagram: two Virtual I/O Server partitions bridge the virtual Ethernet traffic of the client partitions (AIX 5L V5.3 and Linux, each with its own TCP/IP stack) through the Hypervisor to the external IP network.]
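The bridging function shown here is implemented by a Shared Ethernet Adapter (SEA) on the Virtual I/O Server. A minimal sketch from the VIOS restricted shell (padmin user) follows; the device names (ent0 as the physical adapter, ent1 as the trunked virtual adapter) and the default VLAN ID of 1 are assumptions for illustration.
$ lsdev -type adapter                                    (identify the physical and virtual Ethernet adapters)
$ mkvdev -sea ent0 -vadapter ent1 -default ent1 -defaultid 1
$ lsmap -all -net                                        (verify the new Shared Ethernet Adapter mapping)
Client partitions then simply configure TCP/IP on their virtual adapters; no physical adapter is needed in the clients.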
20
How does it work? Example
Virtual I/O Server Ethernet Sharing
21
Virtual I/O Server Disk Sharing
 One physical drive can appear to be multiple logical drives
– LUNs appear as individual logical drives
 Minimizes the number of adapters
 Can have mixed configuration (virtual and real adapters)
 SCSI and Fibre supported
 Supports AIX 5L V5.3 and Linux partitions
[Diagram: two Virtual I/O Server partitions, each with its own SCSI and Fibre Channel adapters, export logical drives (1A–5A from VIOS 1, 1B–5B from VIOS 2) across the Hypervisor layer to the client partitions (AIX 5L V5.3 and Linux), which mirror the A and B copies for redundancy.]
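On the Virtual I/O Server, disk sharing is set up by mapping a backing device (a whole disk or a logical volume) to a virtual SCSI server adapter. The sketch below runs in the VIOS padmin shell and assumes hdisk2 as the backing device and vhost0 as the virtual SCSI server adapter for the target client; the names are illustrative.
$ lsdev -virtual                                   (list virtual devices, including the vhostX server adapters)
$ mkvdev -vdev hdisk2 -vadapter vhost0 -dev vtscsi_web01
$ lsmap -vadapter vhost0                           (show which backing devices are exported to that client)
On the AIX 5L V5.3 client, the exported device appears as an ordinary hdisk behind a virtual SCSI client adapter and can be mirrored against a disk served by the second VIOS, as in the diagram.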
22
Virtual I/O Server Disk Sharing (How does it work?)
23
Virtual I/O Server Disk Sharing (Examples)
24
Advanced POWER Virtualization enhancements
Virtual I/O Server (VIOS) 1.3
• VIOS Monitoring through PTX and Topas
• Performance Enhancements for Virtual SCSI and Virtual Ethernet
Integrated Virtualization Manager (IVM)
• Decrease downtime - Resize and modify partitions without server disruption with support for dynamic
LPAR
Leverage System p Virtualization and Reduce your server TCO by up to 60%*
* “Business Case for IBM System p5 Virtualization,” Economic Benefits of IT Simplification. International Technology Group, February 10,
2006
Topas CEC Monitor Interval: 10 Thu Jul 28 17:04:57 2005
Partitions Memory (GB) Processors
Shr: 3 Mon:24.6 InUse: 2.7 Shr:1.5 PSz: 3 Shr_PhysB: 0.27
Ded: 3 Avl: - Ded: 5 APP: 2.6 Ded_PhysB: 2.70
Host OS M Mem InU Lp Us Sy Wa Id PhysB Ent %EntC Vcsw PhI
-------------------------------------shared-------------------------------------
ptoolsl3 A53 c 4.1 0.4 2 20 13 5 62 0.17 0.50 35.5 218 5
ptoolsl2 A53 C 4.1 0.4 4 14 1 0 84 0.08 0.50 15.0 209 1
ptoolsl5 A53 U 4.1 0.4 4 0 0 0 99 0.02 0.50 0.1 205 2
------------------------------------dedicated-----------------------------------
ptoolsl1 A53 S 4.1 0.5 4 100 0 0 0 2.00
ptoolsl4 A53 4.1 0.5 2 20 10 0 70 0.60
ptoolsl6 A52 4.1 0.5 1 5 5 12 88 0.10
Topas (part of AIX 5.3) PTX (AIX LPP)
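The cross-partition report above is produced by the topas CEC monitor. Assuming AIX 5L V5.3 partitions with the performance tools installed, it can be started as shown; the interval value is an example.
# topas -C        (cross-partition/CEC view: shared vs. dedicated partitions, pool size and entitlement consumed)
# topas -i 10     (conventional single-partition view, sampling every 10 seconds)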
25
Advanced POWER Virtualization Option – overview chart (identical to slide 9), repeated here as a lead-in to the next topic.
26
Partition Load Manager for AIX 5L
• Policy-based, automatic partition resource tuning
• Dynamically adjust CPU and memory allocation
Before resource tuning (unbalanced resource allocation): Test LPAR 3 CPUs, CRM LPAR 5 CPUs, Finance LPAR 6 CPUs, each running a PLM agent.
After resource tuning (allocation adjusted to business priority by the PLM server): Test LPAR 1 CPU, CRM LPAR 3 CPUs, Finance LPAR 10 CPUs.
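Under the covers, PLM drives dynamic LPAR (DLPAR) operations of the kind that can also be issued manually from the HMC command line. The sketch below is illustrative only (the managed-system name p5-570 and the partition names test01 and finance01 are assumptions) and shows the sort of reallocation PLM automates.
chhwres -r proc -m p5-570 -o m -p test01 -t finance01 --procunits 0.5     (move 0.5 processing units from test01 to finance01)
chhwres -r mem -m p5-570 -o a -p finance01 -q 1024                        (add 1024 MB of memory to finance01)
lshwres -r proc -m p5-570 --level lpar -F lpar_name,curr_proc_units       (check the resulting entitlements)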
27
PLM – Configuration by WebSM
28
Advanced POWER Virtualization Option – overview chart (identical to slide 9), repeated here as a lead-in to the next topic.
29
Hardware Management Console (HMC)
Models available:
§ 7310-C05 (desktop)
§ 7310-CR4 (rack-mount)
Licensed Machine Code:
§ Supports POWER5 and POWER5+ processor-based servers only
Requirements:
§ Required for CoD and clustering environments and some RAS functions
§ Optional for APV on standalone servers
Ethernet support:
§ POWER5 provides Ethernet support of the HMC
[Rack diagram: multiple running servers, including 7212 and x330 units.]
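Besides the graphical interface, the HMC provides a command-line interface over SSH. The queries below are a small sketch for reference (not from the slide); the managed-system name p5-570 is an assumption.
lssyscfg -r sys -F name,type_model,serial_num,state      (list the managed systems controlled by this HMC)
lssyscfg -r lpar -m p5-570 -F name,lpar_env,state        (list the partitions on one managed system)
lshmc -V                                                 (display the HMC code level)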
30
Integrated Virtualization Manager
31
System p APV vs. HP/Sun/VMware
Source: competitive analysis, company websites
32
Advanced POWER Virtualization Web site
http://www-03.ibm.com/systems/p/apv/index.html
“The logical partition [LPAR] capability of the System p5 server was the key
factor in our decision, enabling us to run multiple independent systems on
the same physical machine. In our opinion, IBM leads the market in this
area.”
1
- Wolfgang Franz, IT Manager, Bionorica AG. December 2005
Your one-stop
shop for System p
virtualization info:
-Discussion Forums
-Certifications
-Case Studies
-Whitepapers
-Education
1) Bionorica case study published at http://www-306.ibm.com/software/success/cssdb.nsf/CS/DNSD-6KBFWW?OpenDocument&Site=eserverpseries
33
Extra Tools
34
Easily size for your virtualized environment
FREE! IBM System Planning Tool
Download From: http://www.ibm.com/servers/eserver/support/tools/systemplanningtool/
Design and size System p
partitions
Export system configuration for
new server orders
Automatically deploy your
system plan through the
Hardware Management Console
(V5.2)
35
Easily track usage and accounting
costs in a System p virtualized
environment
• Reduces errors and provides value
audit trails
• Measures, analyses and reports the
utilization and costs of different
computing resources:
• Servers
• Storage
• Networks
• Databases
• Virtualized environments
• Messaging
• And, other shared services
IBM Tivoli® Usage and Accounting Manager –
one central repository with automation and auditing
Unique accounting features in AIX 5L V5.3
allow System p clients to benefit from
detailed reporting not available to other
UNIX platforms.
New!
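The AIX 5L V5.3 accounting features referred to here are provided by the Advanced Accounting subsystem, which writes interval usage records that usage-and-accounting tools can pick up. A minimal sketch, assuming a root shell on AIX 5L V5.3 and an illustrative data-file path:
# acctctl fadd /var/aacct/acctdata.0 50     (register a 50 MB Advanced Accounting data file)
# acctctl on                                (turn Advanced Accounting on)
# acctctl isystem 15                        (write system-interval usage records every 15 minutes)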
36
 What is System p Application Virtual
Environment for x86 Linux
(System p AVE - x86)
 Supports installation and running of existing
32-bit x86 Linux applications1,2
 Creates an x86 Linux application
environment running on Linux on System p
 Extends value of IBM System p and
BladeCenter JS21 to x86 Linux apps
How does it work?
 Dynamically translates and maps x86 Linux
instructions to POWER
 Mapping and caching techniques are used
to enhance application performance within
the System p AVE-x86 environment
System p Application Virtual Environment for x86 Linux
Operating system
call mapping
Dynamic binary
translation
Allows software written for x86 Linux to just run on IBM System p servers running Linux
x86 Linux Applications
Linux on POWER
(1) No direct hardware access and no kernel access
(2) IA-32 instruction set architecture (x86) *
* As defined by the 1997 Intel Architecture Software Developer's Manual consisting of Basic
Architecture (Order Number 243190), Instruction Set Reference Manual (Order Number 243191) and
the System Programming Guide (Order Number 243192) all dated 1997.
37
Virtualization Benefits
38
“The centralized, virtualized CRM environment powered by
eServer p5 [System p] servers provides superior infrastructure
support, improved efficiency in our people and systems and truly
optimized operations.”
6
Help improve application performance
…and increase business responsiveness
These companies
reported increased
application performance
with IBM technologies
including System p
virtualization:
KCA DEUTAG: Reduced SAP
reporting from 2.5 hours to
20 minutes
1
Alstrom: Doubled SAP
transactions per second
2
Sinopec: Completed ERP
reporting in 30-35%
less time
3
rku.it: Improved SAP
response time by 33%
4
39
Driven by the
increasing numbers
of physical
systems, systems
management
has become the
dominant
component of IT
costs and is
growing rapidly
Many Servers, Much Capacity, Low Utilization =
$140B unutilized server assets
40
POWER6
Virtualization
Enhancements
41
Partition Mobility: Active and Inactive LPARs
Active Partition Mobility
 Active Partition Migration is the actual movement of a running LPAR from one physical machine to another without disrupting* the operation of the OS and applications running in that LPAR.
 Applicability
 Workload consolidation (e.g. many to one)
 Workload balancing (e.g. move to larger system)
 Planned CEC outages for maintenance/upgrades
 Impending CEC outages (e.g. hardware warning received)
Inactive Partition Mobility
 Inactive Partition Migration transfers a partition that is logically ‘powered off’ (not running) from one system to another.
Partition Mobility is supported on POWER6™ with AIX 5.3, AIX 6.1 and Linux.
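Partition Mobility is driven from the HMC, which coordinates the source and target systems and their mover-service VIOS partitions. As a hedged illustration (no commands appear on the slide), the HMC CLI sequence below validates and then performs an active migration; the system names source-p6 and target-p6 and the partition name web01 are assumptions.
migrlpar -o v -m source-p6 -t target-p6 -p web01     (validate that the partition can be migrated)
migrlpar -o m -m source-p6 -t target-p6 -p web01     (perform the active migration)
lslparmigr -r lpar -m source-p6                      (monitor the migration state)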
42
Move active Micro-Partitions between systems
§ Reduce the impact of planned outages
§ Relocate workloads to enable growth
§ Provision new technology with no disruption to service
Virtualized SAN and Network Infrastructure
IBM System p – Partition Mobility
43
What Are Workload Partitions?
Separate regions of application space within a single AIX image
• Partitioned system capacity
– Each Workload Partition obtains a regulated share of the processor and memory resources
– Each Workload Partition has separate network and filesystems and many system services (e.g. telnetd, etc.)
• Separate administrative control
– Each partition is a separate administrative and security domain
• Shared system resources
– I/O devices
– Processor
– Operating system
– Shared library and text
[Diagram: Workload Partitions A through E running inside a single AIX 6.1 image.]
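For context, the AIX 6.1 commands that create and manage such partitions are short; a minimal sketch follows, with the WPAR name billing used as an example.
# mkwpar -n billing       (create a system WPAR named billing)
# startwpar billing       (activate it)
# lswpar                  (list WPARs and their states)
# clogin billing          (log in to the WPAR from the global AIX environment)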
44
What is AIX Application Mobility?
The ability to move a Workload Partition from one server to another
Provides outage avoidance and multi-system workload balancing
Policy based automation can provide more efficient resource usage
[Diagram: a Workload Partition Manager applies policy to relocate Workload Partitions (Dev, Web, Database, QA, Data Mining, ERP, Billing) between two AIX images, AIX # 1 and AIX # 2.]
45
AIX 6 Workload Partitions can be used in Logical Partitions
[Diagram: on top of the Power Hypervisor, dedicated-processor LPARs (Finance, Planning) sit alongside a micro-partition processor pool hosting LPARs (Americas, EMEA, Asia) and a VIO Server; the individual LPARs contain one or more WPARs (WPAR1, WPAR2, WPAR3).]
46
What About SUN Containers? (System p AIX vs. SUN Solaris)
Highest isolation with Logical Partitions: AIX +, Solaris -
LPAR available across entire product line: AIX +, Solaris -
Live relocation of a LPAR to another system*: AIX +, Solaris -
Live relocation of a WPAR / container*: AIX +, Solaris -
System WPAR / container*: AIX =, Solaris =
Application WPAR / container*: AIX +, Solaris -
Single system management of WPAR / container*: AIX =, Solaris =
Multi-system management of WPAR / container*: AIX +, Solaris -
Policy based relocation of WPAR / container*: AIX +, Solaris -
System commands are WPAR / container ready*: AIX +, Solaris -
WPAR Resource isolation – memory and processor*: AIX =, Solaris =
WPAR Resource isolation – thread, process, paging*: AIX +, Solaris -
WPAR Processor regulation based on Fair Share*: AIX =, Solaris =
WPAR Processor regulation based on Percentage*: AIX +, Solaris -
*Planned
47
When to use Workload Partitions
Requirement / Micro-Partitions / Workload Partitions
Hardware enforced Isolation
Minimal number of AIX images
Server Consolidation
Greatest Flexibility
Cross system workload management
Move workload between systems
48
Consolidations: Virtual Servers or Application Regions?
Goal: Consolidation. Two approaches are compared – LPARs with Partition Mobility (part of APV), where each application has its own AIX LPAR and there are multiple LPARs per server, versus WPARs, where each application is in its own AIX WPAR and there are multiple WPARs per AIX image.
Fault Isolation – LPAR: full isolation; OS faults will only impact the specific LPAR. WPAR: limited isolation; OS faults will bring down all regions, but happen infrequently.
Security Isolation – LPAR: complete isolation between OSes (shared nothing). WPAR: only user-space-level isolation; the kernel level is still exposed.
OS levels – LPAR: different OS levels in LPARs are possible. WPAR: has to be the same across all WPARs.
OS Service/Fix Level – LPAR: OS service fixes can be applied to individual LPAR OS images. WPAR: OS service fixes affect all WPARs, negatively impacting multi-application OS dependency.
ISV Software Cost – LPAR: ISVs will count only the number of processors in the LPAR. WPAR: many ISVs price based on the number of CPUs and will count all processors in the OS image.
Efficiency – LPAR: good, but little sharing of OS resources leads to lower efficiency than with the region approach. WPAR: very good, due to extensive sharing of OS resources such as code or text.
System Admin Costs – LPAR: system admin costs don’t go up linearly per OS image, as cluster systems-management tools lower the cost of managing multiple OS images. WPAR: a TCO analysis based on the number of OS images would favor regions, although in reality system administration costs will shift to the per-region level.
Resource Granularity – LPAR: fine grained. WPAR: very fine grained.
Mobility – LPAR: active LPAR movement in 2007; requires POWER6. WPAR: active WPAR movement in 2007; requires AIX 5.4 (can run on P4 and up).
Each approach comes with pros and cons, so the choice depends on customer needs and preferences.
49
Integrated Virtual Ethernet - How it Works
[Diagram: two connectivity options, Option 1 or Option 2. Option 1: LPARs #1–#3 and a VIOS LPAR use Virtual Ethernet Drivers attached to the Power Hypervisor’s Virtual Ethernet Switch and VNET Packet Router, which connects to the IVE adapter port. Option 2: the LPARs’ Ethernet Drivers attach through the Power Hypervisor’s VNET Packet Router directly to IVE adapter ports, providing native performance and software transparency for AIX and Linux.]
50
51
Questions?
53
Thank You
Merci
Grazie
Gracias
Obrigado
Danke
Editor's Notes

  1. IBM has a long history of leadership in the area of virtualization. This is not a new technology – it has been around since 1967 on the mainframe, and was first developed for the POWER processor back in 1997. Since then we have been refining the technology to make it more reliable, scalable and better able to serve your business needs.
1967: IBM develops the hypervisor that would eventually become VM on the mainframe
1973: IBM announces S/370 model 158 and model 168, the first two machines to do physical partitioning
1987: PR/SM is announced (LPAR on the mainframe)
1990: ES/9000 family is announced; this is the last IBM mainframe to support physical partitioning
1997: POWER LPAR design begins
1999: System i LPAR announced
2001: System i ships sub-processor LPAR support / System p ships whole-processor LPAR support
2001: LPAR introduced in POWER4™ with AIX 5L™ V5.1
2004: Micro-Partitioning LPAR and Virtual I/O with POWER5™ and AIX 5L V5.3
  2. Virtualization is a critical requirement and key decision factor for most clients. Nearly 40 years after IBM pioneered virtualization on the mainframe, we continue to expand the reach of virtualization across our systems, including our UNIX and Linux System p servers. IBM delivers real Advanced POWER Virtualization capabilities today that competitors cannot match: 1) Mature and field-proven technology: Advanced POWER Virtualization is deployed on 40% of all System p5 CPUs; it is used by companies large and small for business-critical applications in both production and test environments. 2) Numerous customer case studies as well as research from external analysts have shown reduced TCO of up to 62% by deploying System p virtualization. 3) POWER LPAR development started back in 1997 and has been refined ever since. 4) APV makes it easier to grow your infrastructure to meet changing business needs; new "virtual" servers can be deployed in a matter of minutes.
Advanced POWER Virtualization (APV) is an optionally orderable feature which includes: Micro-Partitioning™ – shared resource pools, highly granular partitions; Virtual LAN – high-speed inter-partition communication; Virtual I/O – shared network, storage and Fibre Channel adapters; POWER Hypervisor™ – mainframe-inspired, enterprise RAS/availability (comes standard on all pSeries systems); multi-OS support – AIX 5L, RHEL4, SLES9, i5/OS.
Micro-Partitioning allows multiple partitions to share one physical processor – up to 10 partitions per physical processor and up to 254 partitions active at the same time. There are two major deployment models: 1) Responsive OS images in shared-processor LPARs with higher utilization. Historically you needed to set aside capacity for unexpected peaks and run at low utilization rates; with micro-partitions, capacity can be increased rapidly (around 10 ms), so high-priority workloads get resources when needed while lower-priority and batch workloads consume all unused resources. 2) Server consolidation of small, underutilized servers using multiple sub-CPU micro-partitions.
A partition may be defined with a processor capacity as small as 0.1 processing units, which represents 1/10 of a physical processor. Each processor can be shared by up to 10 shared-processor partitions. The shared-processor partitions are dispatched and time-sliced on the physical processors under control of the POWER Hypervisor. Micro-Partitioning is supported across the entire POWER5 product line, from entry to high-end systems. Shared-processor partitions still need dedicated memory, but the partition's I/O requirements can be supported through Virtual Ethernet and the Virtual SCSI server. Utilizing all virtualization features, support for up to 254 shared-processor partitions is possible. The shared-processor partitions are created and managed by the HMC. When you start creating a partition, you have to choose between a shared-processor partition and a dedicated-processor partition, and you have to define the resources that belong to the partition, such as memory and I/O resources.
For shared-processor partitions, you have to specify the following partition attributes, which define the dimensions and performance characteristics of shared partitions: minimum, desired and maximum processor capacity; minimum, desired and maximum number of virtual processors; capped or uncapped mode; and a variable capacity weight.
Partition Load Manager: policy-based, automatic partition resource tuning that dynamically adjusts CPU and memory allocation. Supported on AIX 5L™ V5.3 and V5.2, and on POWER4™ and POWER5™ systems.
Workload Manager now supports automatically switching resource policies based on time of day, reducing administrator workload and helping to maximize system utilization. Workload Manager can enforce per-process CPU, I/O and connect-time limits by warning and killing processes that exceed those limits, protecting the system against errant or malicious processes. The administrator can also set per-class limits on the number of threads, processes or logins, and can be automatically notified if these thresholds are reached. "Processor sets" allow the administrator to specify which physical processors are assigned to a process or class; memory affinity provides the same function for memory, ensuring maximum performance for high-performance computing workloads that need to minimize memory latency. Note that processor sets are only applicable to SMP environments, not LPAR. Additional WLM resources were added to broaden the capabilities to manage diverse workloads.
Dynamic LPAR and CUoD strengthen our total workload management capabilities, which include WLM and LPAR technology. The LPAR support delivered with the Regatta p690 server last December offered much greater resource-partitioning granularity than competitive partitioning implementations, with minimum partition resource allocations down to a single processor or I/O adapter. In AIX 5.2, dynamic LPAR support provides the capability to dynamically add and delete partition resources, including processors, memory and I/O adapters, without requiring a reboot. AIX is also enabled for up to 32 partitions, although no hardware supports that configuration as of the announcement date. The dynamic reconfiguration APIs give applications the ability to detect and adjust to changes in the resources available in a partition; for example, Oracle plans to take advantage of this feature in 2003. Dynamic Capacity on Demand allows the customer to "turn on" unused processors as their workload increases, providing non-disruptive (no IPL required) capacity upgrades. Note: CUoD supports only processors in AIX 5.2; memory will be added later. Hot sparing: for customers using CUoD, additional reliability comes from having the system automatically substitute an unlicensed processor for one that is failing.
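To make the capped/uncapped and capacity-weight attributes above concrete, here is a minimal, illustrative Python sketch (not IBM code) of how spare capacity in a shared processor pool could be distributed to uncapped partitions in proportion to their weights, while capped partitions stay at their entitlement. All partition names, entitlements and weights are invented examples, and the real hypervisor also limits uncapped partitions by their virtual-processor count, which is omitted here.

```python
# Illustrative sketch only: distributing spare shared-pool capacity to
# uncapped micro-partitions in proportion to their capacity weight.
# (A real hypervisor also caps growth by virtual processors; omitted here.)

from dataclasses import dataclass

@dataclass
class MicroPartition:
    name: str
    entitlement: float   # guaranteed processing units (0.1 = 1/10 processor)
    capped: bool         # capped partitions never exceed their entitlement
    weight: int = 0      # uncapped weight: relative share of spare capacity

def distribute_pool(partitions, pool_capacity):
    """Return the processing units given to each partition this interval."""
    # Every partition gets its entitlement first (guaranteed capacity).
    alloc = {p.name: p.entitlement for p in partitions}
    spare = pool_capacity - sum(alloc.values())
    uncapped = [p for p in partitions if not p.capped and p.weight > 0]
    total_weight = sum(p.weight for p in uncapped)
    # Spare cycles go to uncapped partitions, split by relative weight.
    for p in uncapped:
        alloc[p.name] += spare * p.weight / total_weight
    return alloc

if __name__ == "__main__":
    pool = [
        MicroPartition("web",   entitlement=0.5, capped=False, weight=128),
        MicroPartition("batch", entitlement=0.3, capped=False, weight=64),
        MicroPartition("test",  entitlement=0.2, capped=True),
    ]
    # A 2-core shared pool with 1.0 processing unit currently unused.
    print(distribute_pool(pool, pool_capacity=2.0))
```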
LPARs share a pool of physical processors: each LPAR is given a "share" of the physical processing power within a pool, software fault isolation is maintained for shared-processor LPARs, and more than 100 shared-processor LPARs can run in a single system. This supports the efficient use of resources: LPARs (AIX images) yield idle processor cycles, there is no need to reserve dedicated resources to deal with spikes in capacity requirements, and LPAR spikes can be handled from the shared pool. Dedicated-processor partitions continue to be supported. Virtual Ethernet supports in-memory network connections between LPARs and reduces the need for physical Ethernet adapters for some LPARs, except for external connections. Virtual disk reduces the need for dedicated physical disk resources; this is essential for shared-processor LPAR support, where many LPARs may be hosted through separate VIO AIX LPARs.
Virtual I/O Server – Virtual Ethernet: Virtual Ethernet enables inter-partition communication without the need for physical network adapters in each partition. It allows the administrator to define in-memory point-to-point connections between partitions. These connections exhibit characteristics similar to high-bandwidth Ethernet connections and support multiple protocols (IPv4, IPv6, and ICMP). Virtual Ethernet requires a POWER5 system with either AIX 5L V5.3 or the appropriate level of Linux, plus a Hardware Management Console (HMC) to define the Virtual Ethernet devices. Virtual Ethernet does not require the purchase of any additional features or software, such as the Advanced POWER Virtualization feature. Virtual Ethernet is also called Virtual LAN or even VLAN, which can be confusing because these terms are also used in network topology; the Virtual Ethernet, which uses virtual devices, has nothing to do with the VLAN known from network topology, which divides a LAN into sub-LANs. In summary: enables inter-partition communication; in-memory point-to-point connections; physical network adapters are not needed; similar to high-bandwidth Ethernet connections; supports multiple protocols (IPv4, IPv6, and ICMP); no Advanced POWER Virtualization feature required; needs POWER5™ systems, AIX 5L V5.3 or the appropriate Linux level, and a Hardware Management Console (HMC).
Virtual SCSI: Virtual SCSI is based on a client/server relationship. The Virtual I/O Server owns the physical resources and acts as the server; the logical partitions access the virtual I/O resources provided by the Virtual I/O Server as clients. The virtual I/O resources are assigned using an HMC. Often the Virtual I/O Server partition is also referred to as the hosting partition and the client partitions as hosted partitions. Virtual SCSI enables sharing of adapters as well as disk devices. To make a physical or a logical volume available to a client partition, it is assigned to a virtual SCSI server adapter in the Virtual I/O Server partition. The client partition accesses its assigned disks through a virtual SCSI client adapter; it sees standard SCSI devices and LUNs through this virtual adapter. Virtual SCSI resources can be assigned and removed dynamically: on the HMC, virtual SCSI target and server adapters can be assigned to and removed from a partition using dynamic logical partitioning, and the mapping between physical and virtual resources on the Virtual I/O Server can also be changed dynamically. This chart shows an example where one physical disk is split into two logical volumes inside the Virtual I/O Server.
Each of the two client partitions is assigned one logical volume, which it accesses through a virtual I/O adapter (vSCSI client adapter). Inside the partition, the disk is seen as a normal hdisk.
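As a rough illustration of the virtual SCSI client/server relationship described above, the following Python sketch (not IBM code; the disk size, partition names and device names are invented) models a Virtual I/O Server that owns a physical disk, carves it into logical volumes and maps each one to a client partition, which then sees it as an hdisk.

```python
# Illustrative model of the virtual SCSI hosting relationship described above.
# All names (disk size, partition and device names) are invented examples.

class VirtualIOServer:
    """Owns physical storage and exports logical volumes to client LPARs."""

    def __init__(self, physical_disk_gb):
        self.free_gb = physical_disk_gb
        self.mappings = []          # (logical_volume, client_partition)

    def export_logical_volume(self, client, size_gb):
        if size_gb > self.free_gb:
            raise ValueError("not enough space left on the physical disk")
        self.free_gb -= size_gb
        lv_name = f"lv_{client.name}"
        self.mappings.append((lv_name, client.name))
        # The client's virtual SCSI client adapter surfaces the LV as an hdisk.
        client.add_hdisk(lv_name, size_gb)

class ClientPartition:
    def __init__(self, name):
        self.name = name
        self.hdisks = []            # what the OS inside the partition sees

    def add_hdisk(self, backing_lv, size_gb):
        self.hdisks.append((f"hdisk{len(self.hdisks)}", backing_lv, size_gb))

if __name__ == "__main__":
    vios = VirtualIOServer(physical_disk_gb=146)   # one 146 GB physical disk
    db  = ClientPartition("db_lpar")
    app = ClientPartition("app_lpar")
    vios.export_logical_volume(db, 100)
    vios.export_logical_volume(app, 46)
    print(db.hdisks, app.hdisks, vios.free_gb)
```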
  3. Virtualization is a critical requirement and key decision factor for most clients. Nearly 40 years after IBM pioneered virtualization on the mainframe, we continue to expand the reach of virtualization across our systems, including our UNIX and Linux System p servers. IBM delivers real Advanced POWER Virtualization capabilities today that competitors cannot match: 1) Mature and field-proven technology: Advanced POWER Virtualization is deployed on 40% of all System p5 CPUs; it is used by companies large and small for business-critical applications in both production and test environments. 2) Numerous customer case studies as well as research from external analysts have shown reduced TCO of up to 62% by deploying System p virtualization. 3) POWER LPAR development started back in 1997 and has been refined ever since. 4) APV makes it easier to grow your infrastructure to meet changing business needs; new "virtual" servers can be deployed in a matter of minutes.
  4. What this chip brings that is new in performance: clock speed is not the same as performance; the architectures are different (only when comparing Intel with Intel can you say they are the same thing). SMT – Simultaneous Multithreading. Databases do not use floating-point units ("gone forever"). Logical units sit idle – SMT allows use of the execution units at the same time. Which kind of application does not benefit: single-process, single-threaded applications. Which kind benefits: all the others – DB, SAP, etc. Around a 30% gain in SAP with this technology, and you can turn it on and off (30% more machine).
  5.–7. Virtual Ethernet Switch: Ethernet implemented by the firmware. Interesting in a three-tier environment (DB – virtual Ethernet – App); no physical internal network is needed. It brings flexibility – configure the environment according to the business needs, not the other way around.
  8.–10. EMIF. I/O virtualization – how do you give each partition access to disks? How do you access the network? Through a virtualization partition: you do not need to put a controller in every partition, and a 146 GB disk can be sliced up among the partitions by the machine's virtualization partition.
  11. For the August 2006 GA, we will introduce a number of enhancements to Advanced POWER Virtualization. Virtual I/O Server enhancements: VIOS monitoring through Topas and PTX; ISV/IHV support for the iSCSI TOE adapter, iSCSI direct-attached N3700 storage subsystem and HP storage. Virtual SCSI functional enhancements: support for SCSI Reserve/Release for limited configurations; changeable queue depth; updating virtual device capacity nondisruptively so that the virtual disk can "grow" without requiring a reconfiguration; configurable fast-fail time (number of retries on failure); and error log enhancements. TCP/IP acceleration: large block send. Integrated Virtualization Manager enhancements: DLPAR support for memory and processors in managed partitions; GUI support for System Plan management, including the Logical Partition (LPAR) Deployment Wizard; Web UI support for IP configuration; a task manager for long-running tasks; various usability enhancements, including the ability to create a new partition based on an existing one; and IBM Director integration (to be released with the next revision of IBM Director).
  12. Now let's talk about the competitive view of virtualization. Basically, read through the chart. The thing to note is our leadership in nearly all areas.
  13. Check out the APV website for links to:
-- Discussion forums: moderated Q&A sections monitored by our development team.
-- APV wiki: part of the AIX Collaboration Center, this wiki will help keep you up to date in the world of System p virtualization.
-- Certifications: IBM has released a new certification program, IBM Certified Systems Expert – IBM System p5 Virtualization Technical Support AIX 5L V5.3. This certification provides companies with an added degree of confidence that their technician is highly qualified to determine customer and business requirements for virtualization; certified engineers are experts in planning, designing, implementing, managing and troubleshooting System p5 virtualization solutions.
-- Customer case studies: there are over 40 APV customer success stories. Visit the APV homepage to find out more.
-- Whitepapers: check out the whitepapers on APV, such as the ITG whitepaper, which demonstrates the cost savings possible when moving to System p virtualization.
  14. System Planning Tool enhancements. To support your virtualization planning needs, the System Planning Tool (SPT) is available as a no-charge download from http://www.ibm.com/servers/eserver/support/tools/systemplanningtool/. You can use the System Planning Tool to design System p and System i partitions. The resulting design, represented by a system plan, can then be imported onto your Hardware Management Console (HMC) V5.2, where, via the new System Plans feature, you can automate configuration of the partitions designed in the System Planning Tool. The System Plans feature of the HMC also allows generation of system plans via the mksysplan command. Highlights of the System Planning Tool include: integration of the Performance Monitor and Workload Estimator (WLE) sizing tools to aid in sizing workloads for your system; export of the system configuration to IBM for order processing via the IBM Sales Configurator (eConfig); export of the system configuration to the HMC for configuration of POWER-based systems; system plans enabling automated deployment using the HMC; a Web browser interface; interactive reports; a help system; and support for files created with the LPAR Validation Tool (LVT).
  15. Additional services are available to deploy IBM Tivoli Usage and Accounting Manager. IBM QuickStart Services for Usage and Accounting Manager help you quickly achieve systems management objectives and realize rapid time to value. Other deliverables (depending on the duration of the services agreed to – up to 10 days on site): perform the installation and configuration of IBM Tivoli Usage and Accounting Manager in a limited environment; demonstrate client data processed through IBM Tivoli Usage and Accounting Manager; produce sample invoices and reports; provide knowledge transfer and best practices; and deliver a deployment summary with recommendations for next steps.
  16. Optimizes and caches frequently executed code - conceptually similar to a Just-in-Time (JIT) compiler for Java ™
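Purely to illustrate the "optimize and cache frequently executed code" idea in this note, here is a toy Python sketch; the threshold, decorator and functions are invented, and this is in no way how the product itself is implemented.

```python
# Toy illustration of "optimize and cache frequently executed code":
# once a code path has run often enough, keep a cached, pre-computed form.
# Threshold and functions are arbitrary; not the product's actual mechanism.

import functools

HOT_THRESHOLD = 3

def hot_path_cache(func):
    counts, cache = {}, {}

    @functools.wraps(func)
    def wrapper(*args):
        counts[args] = counts.get(args, 0) + 1
        if args in cache:                     # already "optimized": reuse it
            return cache[args]
        result = func(*args)                  # slow "cold path" execution
        if counts[args] >= HOT_THRESHOLD:     # frequently executed: cache it
            cache[args] = result
        return result

    return wrapper

@hot_path_cache
def expensive(n):
    return sum(i * i for i in range(n))

for _ in range(5):
    expensive(10_000)   # becomes a cache hit after it turns "hot"
```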
  17. Leveraging System p and APV can also help to improve application performance and increase business responsiveness. The following companies experienced a significant application performance increase by leveraging IBM technologies, including System p virtualization: rku.it – improved SAP response time by 33%; Alstrom – doubled SAP transactions per second; Sinopec – ERP reporting completed in 30–50% less time; Compudatacenter – some processes cut from two hours to ten minutes; kca deutag – supported a 100% increase in SAP users, with SAP reporting cut from 2.5 hours to 20 minutes.
  18. Application Mobility is an optional capability that will allow an administrator to move a running WPAR from one system to another using advanced checkpoint restart capabilities that will make the movement transparent to the end user.
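To sketch the checkpoint/restart flow that Application Mobility relies on, here is a conceptual Python illustration only; the state captured (process list, open files) is an invented stand-in, and none of this reflects the actual AIX mechanism.

```python
# Conceptual sketch of move-by-checkpoint/restart for a workload partition.
# The captured state is a placeholder; this is not AIX or WPAR Manager code.

import pickle

class WorkloadPartition:
    def __init__(self, name, processes, open_files):
        self.name = name
        self.processes = processes
        self.open_files = open_files
        self.running = True

    def checkpoint(self):
        """Freeze the WPAR and serialize its state."""
        self.running = False
        return pickle.dumps({"name": self.name,
                             "processes": self.processes,
                             "open_files": self.open_files})

    @classmethod
    def restart(cls, blob):
        """Recreate the WPAR from a checkpoint on the target system."""
        state = pickle.loads(blob)
        return cls(state["name"], state["processes"], state["open_files"])

def relocate(wpar, source_host, target_host):
    blob = wpar.checkpoint()                        # quiesce and capture state
    target_host[wpar.name] = WorkloadPartition.restart(blob)   # resume on target
    del source_host[wpar.name]                      # transparent to end users

source_host, target_host = {}, {}
source_host["billing"] = WorkloadPartition("billing", ["app", "db_client"], ["/tmp/log"])
relocate(source_host["billing"], source_host, target_host)
print(target_host["billing"].running, list(source_host))
```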
  19. This slide shows the use of Workload Partitions within dedicated LPARs and micro-partitions. This leverages the best capabilities of each technology. For example, the Finance LPAR needs the highest degree of isolation, and its workload demands a dedicated set of processor resources. The Planning workload also requires a dedicated number of processors, but there are two main work areas within the Planning function (perhaps Strategic Planning and Operations Management) that we would like to combine within the same system image, using Workload Partitions to keep the performance resources and administration separate. Each WPAR within each logical partition offers separate administration, security and management.
  20. As you can see from this chart, IBM System p provides the most complete and capable virtualization functionality for our clients, starting with LPARs. LPARs provide the greatest degree of isolation and flexibility in the UNIX industry today. Although WPARs and SUN Containers have similar characteristics, WPARs offer many advantages over Containers:
- WPARs offer Application WPARs – essentially just a lightweight wrapper around an application that provides enhanced manageability.
- System p offers consolidated, cross-system management of WPARs via the WPAR Manager.
- WPARs can be relocated between systems via the Live Application Mobility capability, without requiring the application to be restarted.
- WPARs can be relocated between systems automatically, via a policy set by the client.
- WPARs offer additional resource isolation and control.