
Optimize oracle on VMware (April 2011)

Talk on optimization of Oracle databases on VMware ESX. Oracle OpenWorld 2010.


  1. Optimize Oracle on VMware
     Guy Harrison, Director, R&D Melbourne (@guyharrison)
  2. Introductions
  3. (image slide)
  4. (image slide)
  5. (image slide)
  6. Agenda
     - Motivations for virtualization
     - VMware ESX resource management: memory, CPU, IO
     - Paravirtualization (OVM) vs. Hardware Assisted Virtualization (ESX)
  7. Motivations for Virtualization
  8. Resistance to database virtualization
  9. DB virtualization is happening
  10. (image slide: headlines from Mar 2010 and May 2009)
  11. Oracle virtualization is lagging...
  12. Managing ESX memory
      - ESX can "overcommit" memory: the sum of all VM physical memory allocations can exceed actual ESX physical memory
      - Memory is critical to Oracle server performance: SGA memory reduces datafile IO; PGA memory reduces sort and hash IO
      - ESX uses four methods to share memory: memory page sharing, "ballooning", ESX swapping, and memory compression
      - The DBA needs to configure memory carefully to avoid disaster
  13. Configuring VM memory
      (Diagram callouts: VMs compete for memory in this range; relative memory priority for this VM; maximum memory for the VM (dynamic); minimum memory for this VM)
  14. Monitoring VM memory
  15. ESX and VM memory
      (Diagram: VM virtual memory, effective VM physical memory, ESX virtual memory, ESX physical memory, ESX swap)
  16. ESX Ballooning
      (Diagram: vmmemctl "balloon" inside the VM; VM swap; apparent vs. effective VM physical memory; ESX swap)
  17. ESX Ballooning
      - As memory pressure grows, the ESX balloon driver (vmmemctl) forces the VM to page out memory to the VM swap file
  18. ESX Ballooning
      - Inside the VM, paging to the swap file is observable
      - The guest OS determines which pages are paged out
      - If LOCK_SGA=TRUE, the SGA should not be paged
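Because ballooning shows up as ordinary guest-level paging, it can be spotted from inside a Linux VM by watching the `pswpin`/`pswpout` counters in `/proc/vmstat`. A minimal sketch of that check follows; the helper names are my own, but the counter names are the real Linux ones:

```python
# Sketch: detect guest-level paging (a symptom of ESX ballooning) by
# comparing two samples of /proc/vmstat-style counter text.
# parse_swap_counters and paging_delta are illustrative helper names.

def parse_swap_counters(vmstat_text: str) -> dict:
    """Extract the pages-swapped-in/out counters from /proc/vmstat content."""
    counters = {}
    for line in vmstat_text.splitlines():
        name, _, value = line.partition(" ")
        if name in ("pswpin", "pswpout"):
            counters[name] = int(value)
    return counters

def paging_delta(before: dict, after: dict) -> dict:
    """Pages swapped between two samples; a growing pswpout under memory
    pressure suggests the balloon driver is forcing the guest to page."""
    return {k: after[k] - before[k] for k in ("pswpin", "pswpout")}

if __name__ == "__main__":
    sample1 = "pgpgin 100\npswpin 10\npswpout 50\n"
    sample2 = "pgpgin 900\npswpin 10\npswpout 480\n"
    print(paging_delta(parse_swap_counters(sample1),
                       parse_swap_counters(sample2)))
```

In practice the two samples would come from reading `/proc/vmstat` a few seconds apart inside the guest.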
  19. ESX Swapping
      (Diagram: VM virtual memory, effective VM physical memory, ESX virtual memory, ESX physical memory, ESX swap)
  20. ESX Swapping
      (Diagram: apparent vs. effective VM physical memory; ESX virtual memory; ESX physical memory; ESX swap)
  21. ESX Swapping
      - ESX swaps VM memory out to the ESX swap file
  22. ESX Swapping
      - Within the VM, this swapping cannot be detected
      - ESX determines which memory pages go to disk
      - It occurs particularly when VMware Tools is not installed
      - Even if LOCK_SGA=TRUE, SGA memory might end up on disk
  23. Avoiding ballooning and swapping
      - Memory reservations help avoid ballooning and ESX swapping
  24. Other VMware memory management thoughts
      - Memory page sharing: multiple VMs can share an identical page of memory (Oracle code pages, etc.)
      - Modern chipsets reduce memory management overhead via multiple hardware page-table layers
      - Memory compression (new in ESX 4.1): pages are compressed and written to a cache rather than to disk
      - Swapping is more expensive than ballooning: it is slower to restore memory; the OS and Oracle get no choice about what gets paged; and "double paging" can occur, where both the guest and ESX page the same block of memory
  25. Ballooning vs. Swapping
      - Swingbench workload running on an Oracle database (from a VMware whitepaper)
  26. VMware memory recommendations
      - Paging or swapping of PGA or SGA is almost always a Very Bad Thing(tm)
      - Use memory reservations to avoid swapping
      - Install VMware Tools to minimize swapping
      - Ballooning is far preferable to swapping (swapping risks double paging, paging of the SGA, high swap-in latency, etc.)
      - Set memory reservation = PGA + SGA + process overhead
      - Be realistic about memory requirements: on physical machines we are used to consuming all available memory; in a VM, use only the memory you need, freeing the rest for other VMs
      - Oracle advisories (or Spotlight) can show how much memory is needed
      - Reduce the VM reservation and Oracle memory targets in tandem to release memory
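The sizing rule above (reservation = PGA + SGA + process overhead) is simple arithmetic; a sketch follows. The per-process overhead (~4 MB) and OS headroom figures are illustrative assumptions, not values from the talk:

```python
# Sketch of the slide's sizing rule: memory reservation = SGA + PGA +
# per-process overhead. The 4 MB/process and 512 MB OS headroom defaults
# are assumptions for illustration; measure your own workload.

def vm_memory_reservation_mb(sga_mb: int, pga_target_mb: int,
                             process_count: int,
                             per_process_overhead_mb: float = 4.0,
                             os_headroom_mb: int = 512) -> int:
    """Return a reservation (MB) large enough that neither ballooning
    nor ESX swapping should touch Oracle's memory."""
    oracle_mb = sga_mb + pga_target_mb + process_count * per_process_overhead_mb
    return int(oracle_mb + os_headroom_mb)

# Example: 8 GB SGA, 2 GB PGA aggregate target, 200 server processes
print(vm_memory_reservation_mb(8192, 2048, 200))
```

Shrinking SGA/PGA targets and the VM reservation together, as the slide suggests, releases memory back to the ESX pool.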
  27. (image slide)
  28. VMware CPU management
      - If there are more virtual CPUs than ESX CPUs, VCPUs must sometimes wait
      - Time "stops" inside the VM when this occurs
      - For multi-CPU VMs, it's (nearly) all or nothing
      - A VCPU can be in one of three states: associated with an ESX CPU but idle; associated with an ESX CPU and executing instructions; or waiting for an ESX CPU to become available
      - Shares and reservations determine which VM wins access to the ESX CPUs
  29. Configuring VM CPU
      (Diagram callouts: VMs compete for CPU in this range; shares determine the relative CPU allocated when competing)
  30. CPU utilization in the VM
      - "CPU Ready" is the amount of time a VM spends waiting on ESX for CPU
      - Inside the VM, CPU statistics can be misleading
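Tools such as vCenter report CPU ready as milliseconds accumulated over a sampling interval, so it helps to normalize it to a percentage. A sketch follows; the 20-second interval is the common vCenter real-time-chart default, but treat the exact interval as an assumption and check your collector's settings:

```python
# Sketch: convert a "CPU ready" summation value (milliseconds accumulated
# over a sampling interval) into a percentage of the interval.
# The 20 s default interval is an assumption; verify it for your collector.

def cpu_ready_percent(ready_ms: float, interval_s: float = 20.0,
                      vcpus: int = 1) -> float:
    """Percentage of the interval the VM's vCPUs spent waiting for a
    physical CPU; sustained values above a few percent suggest contention."""
    return 100.0 * ready_ms / (interval_s * 1000.0 * vcpus)

# 2000 ms of ready time over a 20 s sample on a 2-vCPU VM
print(cpu_ready_percent(2000, 20.0, 2))  # 5% of the interval spent waiting
```

This is the "penalty" of competing with other VMs that the CPU recommendations slide says to monitor.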
  31. SMP for vCPUs
      - ESX usually has to schedule all vCPUs for a VM simultaneously; the more vCPUs, the harder this is
      - Some CPU is also needed for ESX itself, so more is not always better
      - (Thanks to Carl Bradshaw for permission to reprint this diagram from his Oracle on VMware whitepaper)
  32. ESX CPU performance comparisons
      - VT enabled, vs. a 2-core 1.8 GHz physical machine
  33. Programmatic performance
  34. Programmatic performance (2)
  35. ESX CPU recommendations
      - Use up-to-date chipsets and ESX software
      - Allocate as few VCPUs as possible to each VM
      - Use reservations and shares to prioritise access to ESX CPU
      - Performance of CPU-critical workloads may be disappointing on older hardware
      - Monitor ESX ready time to determine the "penalty" of competing with other virtual machines
  36. Typical VMware disk configuration
  37. Disk resource allocation
      - Disk shares can be used to prioritize IO bandwidth
      - However, ESX often does not know the underlying storage architecture
  38. Performant VMware disk configuration
  39. Optimal configuration
      - See "Oracle Database Scalability in VMware ESX"
      - Each virtual disk directly mapped via RDM to a dedicated RAID 0 (+1) group
      - 41 spindles!
  40. VMware disk configuration
      - Follow normal best practice for physical disks: avoid sharing disk workloads; use dedicated datastores on VMFS
      - Align virtual disks to physical disks?
      - Consider Raw Device Mapping (RDM)
      - If you can't optimize IO, avoid IO: tune, tune, tune SQL; prefer indexed access paths; configure memory well
      - Don't forget about temp IO (sorts, hash joins)
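The "align virtual disks" point boils down to a divisibility check: a partition whose starting offset is a multiple of the array's stripe-element size avoids split IOs that straddle stripe boundaries. A sketch, using 64 KB as an illustrative stripe-element size:

```python
# Sketch of the virtual-disk alignment check. The 64 KB stripe-element
# size is an illustrative assumption; real arrays vary, so use the value
# your storage vendor documents.

def is_aligned(partition_start_bytes: int,
               stripe_element_bytes: int = 64 * 1024) -> bool:
    """True if the partition start falls on a stripe-element boundary,
    so guest IOs don't get split across two stripe elements."""
    return partition_start_bytes % stripe_element_bytes == 0

# Classic misalignment: old MSDOS partitioning started at sector 63
print(is_aligned(63 * 512))    # 31.5 KB offset: misaligned
print(is_aligned(2048 * 512))  # modern 1 MiB start: aligned
```

The same check applies at every layer: VMFS datastore, virtual disk, and guest partition should all start on aligned boundaries.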
  41. (image slide)
  42. (image slide)
  43. Paravirtualization vs. "Hardware Virtualization"
      - Virtualization is not emulation: wherever possible, the hypervisor runs native OS code against the underlying hardware
      - Because a virtualized operating system runs outside the privileged x86 "ring 0", direct calls to hardware need special handling
      - The three main approaches are: Full Virtualization (VMware on older hardware); Paravirtualization (Xen, Oracle VM); Hardware Assisted Virtualization (Intel VT, AMD-V)
  44. Full virtualization
      - Hardware calls from the guest are handled by the hypervisor, either by catching the calls as they occur at run time or by rewriting the VM image at load time (binary translation)
      - Requires no special hardware; supports any guest OS
      - Relatively poor performance
      - Used by ESX on older chipsets
      (Diagram: Guest OS, Hypervisor, Ring 0, Hardware)
  45. Hardware Assisted virtualization
      - Intel VT and AMD-V chips add a "root mode"; the guest can issue instructions from non-root ring 0, and the CPU diverts these to the hypervisor
      - No changes to the OS required
      - Good performance, but requires modern chipsets
      (Diagram: root mode vs. non-root mode; Guest, Hypervisor, Ring 0, Hardware)
  46. Paravirtualization
      - The guest operating system is rewritten to translate device calls into "hypercalls"
      - Hypercalls are handled by a special VM (dom0 in Xen/OVM)
      - Good performance, but requires a modified guest OS
      - Xen can use either paravirtualization or hardware assist
      (Diagram: Guest (domU), Guest (dom0), Hypervisor, Ring 0, Hardware)
  50. Paravirtualization and RAC
      - According to Oracle, only a paravirtualization solution can guarantee clock synchronization between the members of a RAC cluster
      - For this reason, RAC is not officially supported on VMware, but it is supported on Xen-based OVM
      - Single-instance databases are fully supported, but not certified, on ESX
      - See My Oracle Support Note 249212.1
  51. References
      - My blog ( )
  52. End of Presentation
