2. Topics Covered
- Introduction
- Root Cause Analysis
- Performance Characteristics: CPU, Networking, Memory, Disk
- Virtual Machine Optimisation
- ESXTop
- vm-support
- Service Console
- Resource Groups
- Design Guidelines
- Capacity Planner: limitations and cautions
- Conclusion
- Reference Articles
3. Introduction
- Multiple layers of virtualisation are used to increase service levels, availability and manageability.
- However, those same layers often mask performance and configuration issues, making them harder to troubleshoot and correct.
- The worst outcome is that performance issues after a virtualisation project create the perception that VMware reduces performance, undermining future confidence in VMware.
9. Monitoring Performance
- Do not rely on guest tools alone; they can show high CPU and memory utilisation and measure latency and throughput of disk and network interfaces.
- Use the virtualisation layer to diagnose the cause:
  - The guest is unaware of the virtualisation workload.
  - Guest OSs account for time differently under virtualisation.
  - The guest has no visibility of the resources actually available.
10. Performance Analysis Tools
- esxtop (Service Console only)
- resxtop (remote command-line utilities)
- Performance graphs in vCenter
11. esxtop
- esxtop can be run:
  - Interactively
  - In batch mode (e.g. esxtop -a -b > analysis.csv)
- Load batch output into Windows perfmon or MS Excel.
- Two keys to remember:
  - h : help
  - f : fields to display
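A minimal batch-collection sketch building on the command above (the interval, sample count, host name and output paths are illustrative, not prescriptive):

    # Capture all counters (-a) in batch mode (-b): 60 samples (-n)
    # at a 10-second delay (-d), then open the CSV in perfmon or Excel.
    esxtop -a -b -d 10 -n 60 > /tmp/analysis.csv

    # The same collection run remotely with resxtop; host and user
    # are placeholders for your environment.
    resxtop --server esx01.example.com --username root -a -b -d 10 -n 60 > analysis.csv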
12. esxtop basics
- Host resources (summary line)
- Name of the resource pool, virtual machine or world
- Number of worlds (NWLD column)
13. Performance Characteristics

    Resource     Symptoms
    CPU          Slow processing, high CPU wait
    Networking   Packet loss, slow network
    Memory       Slow processing, swapping to disk
    Disk         Log stalls, deep disk queues

Common results: slow application performance, reduced user experience, data loss and corruption.
14. CPU
- ESX scheduler
- Basic world states: Ready / Run / Wait
- CPU states in esxtop: ready (%RDY) / usage (%USED) / wait (%WAIT)
- Service Console
- Virtual machine limits / shares / reservations (a hedged .vmx sketch follows below)
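As an illustration of the per-VM CPU controls named above, the corresponding .vmx entries look roughly like this sketch. All values are examples; these settings are normally managed through the VI Client rather than edited by hand.

    # Annotations after each line are explanatory, not part of the .vmx file.
    sched.cpu.min = "500"        # reservation: 500 MHz guaranteed
    sched.cpu.max = "2000"       # limit: capped at 2000 MHz
    sched.cpu.shares = "normal"  # shares: low/normal/high or a numeric value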
23. VMware Tools balloon driver
- Reclaims memory by inflating inside the guest, forcing the guest OS to page to its own disk.
25. Memory
- Ballooning can be disabled (value 0) or limited per virtual machine with sched.mem.maxmemctl.
- The default ceiling is 65% of VM memory and can be controlled at host level (Mem.CtlMaxPercent).
- Ballooning is only an issue in resource-contention scenarios, or for latency-sensitive VMs (e.g. Citrix).
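A hedged sketch of both controls; the datastore path and the 50% figure are placeholders, and the per-VM change requires the VM to be powered off:

    # Per VM: set the balloon ceiling to 0 MB (disables ballooning)
    # by adding the option to the VM's .vmx file.
    echo 'sched.mem.maxmemctl = "0"' >> /vmfs/volumes/datastore1/vm01/vm01.vmx

    # Host-wide: read, then lower, the ballooning ceiling (default 65%)
    # from the Service Console.
    esxcfg-advcfg -g /Mem/CtlMaxPercent
    esxcfg-advcfg -s 50 /Mem/CtlMaxPercent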