VMware Performance Tuning Best Practices - Processor


Processor virtualization adds varying amounts of overhead, depending on how much of the virtual machine's workload can run directly on the physical processor and on the cost of virtualizing the remainder.

For applications that are processor-bound (that is, most of the application's time is spent executing instructions rather than waiting for external events such as user interaction, device input, or data retrieval), any processor virtualization overhead translates into a reduction in overall performance.

Applications that are not processor-bound can still deliver comparable performance because processor cycles remain available to absorb the virtualization overhead.

VMware recommends the following practices and configurations for optimal processor performance:

  1. When configuring virtual machines, allow for the processor overhead that the ESX server itself requires for virtualization. Take care not to excessively overcommit processor resources, both in overall processor utilization and in the total number of virtual processors allocated.
  2. Match the speed of the virtualized processors to the physical socket speed on the host.
  3. Each allocated virtual processor corresponds to one hardware processor core. For optimal cycle efficiency, VMware allocates as many physical cores as possible from the same socket to serve a virtual machine's cycles. If more virtual processors are assigned than are available from a single socket, the hypervisor schedules the remaining cycles on cores in another socket until all virtual processors are scheduled.
  4. In VMware-based virtualization, allocating more processors than will be regularly used results in additional overhead, including queuing for available cores on the physical hosts. VMware recommends a target utilization trending around 70% of allocated resources. If your guest operating system and applications will regularly use only two processors, allocate only two; allocating four in this case only degrades performance through queuing and overhead.
  5. Do not use virtual symmetric multi-processing (SMP) if your application is single threaded and does not benefit from the additional virtual processors.
  6. Note: Virtual machines configured with virtual processors that are not used still impose resource requirements on the ESX server. In some guest operating systems, each unused virtual processor still consumes timer interrupts and executes the guest operating system's idle loop, which translates into real processor consumption.

  7. When running multi-threaded or multi-process applications in a virtual machine with multiple processors, it can help to pin guest operating system threads or processes to specific processors. Migrating processes between virtual processors in the guest operating system incurs a small processor overhead. If migration is frequent, pinning the process to a particular processor eliminates this overhead.
  8. 64-bit guests and applications can have better performance than corresponding 32-bit versions.
  9. The guest operating system timer rate can have an impact on performance.
    • Linux® guests keep time by counting timer interrupts.
    • Unpatched 2.4 and earlier kernels program the virtual system timer to request clock interrupts at 100 Hz (100 interrupts per second).
    • Some 2.6 kernels request interrupts at 1000 Hz, others at 250 Hz, so you should check your kernel to determine the actual rate.
    • These rates apply to uniprocessor (UP) kernels only; SMP Linux kernels request additional timer interrupts.
    • Microsoft® Windows operating systems timer rates are specific to the version of Microsoft Windows and the Windows hardware abstraction layer (HAL) installed.
    • For most uniprocessor Windows installations, the base timer rate is usually 100 Hz. Virtual machines running Microsoft Windows request 1000 interrupts per second if they run applications that use the Microsoft Windows multimedia timer service; avoid such multimedia applications where possible.
    • If you have a choice, use guest operating systems that require lower timer rates.
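As a back-of-the-envelope illustration of the sizing guidance in items 1 and 4 above, the following sketch picks the smallest virtual processor count whose 70% utilization point covers a guest's measured peak demand. The function name, its signature, and the sample numbers are illustrative assumptions, not part of any VMware tool or API.

```python
# Hypothetical right-sizing sketch: choose a vCPU count so that the guest's
# measured peak demand lands near VMware's suggested ~70% utilization target.
import math

TARGET_UTILIZATION = 0.70  # trend allocated vCPUs around 70% busy

def recommended_vcpus(peak_demand_cores: float) -> int:
    """Smallest whole vCPU count whose 70% point covers the peak demand."""
    return max(1, math.ceil(peak_demand_cores / TARGET_UTILIZATION))

# A guest whose workload peaks at ~1.3 cores' worth of work:
print(recommended_vcpus(1.3))  # -> 2; allocating 4 would only add queuing overhead
```

The point of the calculation is the direction of the trade-off: rounding up past the 70% target buys nothing, because the extra virtual processors just queue for physical cores.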
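The in-guest pinning described in item 7 can be sketched on a Linux guest with the standard library's `os.sched_setaffinity`; this is one possible mechanism, and the CPU number used here is illustrative. (Windows guests would use a different API, and `os.sched_setaffinity` is Linux-specific.)

```python
# Minimal sketch of pinning a process to one virtual processor inside a
# Linux guest (item 7), so it stops migrating between vCPUs.
import os

def pin_to_cpu(cpu: int) -> None:
    """Restrict the calling process to a single virtual processor."""
    os.sched_setaffinity(0, {cpu})  # pid 0 means the current process

pin_to_cpu(0)
print(os.sched_getaffinity(0))  # the process is now restricted to vCPU 0
```

Pinning only pays off when migrations are frequent; for a process that rarely moves between virtual processors, the saved overhead is negligible.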
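To see why the timer rates in item 9 matter to the host, consider that the hypervisor must deliver roughly rate x virtual-processor-count interrupts per second per guest. The sketch below treats the per-vCPU rate as uniform, which is a simplification, and the VM mix is hypothetical; the 100 Hz and 1000 Hz figures come from the article.

```python
# Rough illustration of aggregate guest timer load on the hypervisor:
# total timer interrupts scale with timer rate x virtual processor count.
def timer_interrupts_per_second(vms: list[tuple[int, int]]) -> int:
    """vms: one (timer rate in Hz, virtual processor count) pair per guest."""
    return sum(rate * vcpus for rate, vcpus in vms)

# Ten 2.6-kernel Linux guests at 1000 Hz with 2 vCPUs each, versus
# ten guests at 100 Hz: a 10x difference in timer work for the host.
print(timer_interrupts_per_second([(1000, 2)] * 10))  # -> 20000
print(timer_interrupts_per_second([(100, 2)] * 10))   # -> 2000
```

This is why the article recommends guest operating systems with lower timer rates when you have a choice: the difference compounds across every virtual processor on the host.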