roconnor
Enthusiast

Recommended virtual hardware for ESX 4.1 VMs

Hi all

I want to clarify what the VMware-recommended virtual hardware is for Windows 2008 R2 64-bit virtual machines.

When I create a new Windows VM on ESX 4.1 (using hardware version 7), it gets E1000 network adapters and an LSI Logic SAS SCSI controller.

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=100180...

That article says ‘the VMXNET 3 adapter is the next generation’ and that it is supported on 32-bit and 64-bit versions of Microsoft Windows XP, 2003, 2008, and 2008 R2.

Is that to say VMware thinks it is better than the E1000 adapter, and so I should replace the default E1000 with VMXNET 3? Is the E1000 adapter approaching the end of its lifecycle?

We have hundreds of VMs to migrate to ESX 4; should we consider changing the adapters from E1000 to VMXNET 3 as part of the virtual hardware upgrade?

As for the controller, the VI Client help file says ‘if your virtual machine and guest operating system support SAS, choose LSI SAS to maintain future compatibility.’ So is VMware saying to use PVSCSI only in very specific circumstances: no MSCS clusters, high I/O, no snapshots, no host memory overcommitment?

Again, same question: as part of the virtual hardware upgrade, should we change the controller to SAS? Or should we do as Duncan suggests in thread 211320 and use PVSCSI for all but the cluster and Linux VMs?
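For scoping, here is a rough sketch of how I might audit what the existing VMs are using today, via the pyVmomi Python bindings; the vCenter hostname and credentials are placeholders, and this is just one illustrative way to pull the NIC and SCSI controller types, not anything VMware prescribes:

    import ssl

    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Connect to vCenter; hostname and credentials below are placeholders.
    ctx = ssl._create_unverified_context()  # lab use only; verify certificates in production
    si = SmartConnect(host="vcenter.example.local", user="administrator",
                      pwd="secret", sslContext=ctx)
    content = si.RetrieveContent()

    # Walk every VM and report its virtual NIC and SCSI controller device classes.
    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.VirtualMachine], True)
    for vm in view.view:
        nics = [type(dev).__name__ for dev in vm.config.hardware.device
                if isinstance(dev, vim.vm.device.VirtualEthernetCard)]
        ctrls = [type(dev).__name__ for dev in vm.config.hardware.device
                 if isinstance(dev, vim.vm.device.VirtualSCSIController)]
        print("%s  NICs: %s  SCSI: %s" % (vm.name, nics, ctrls))

    Disconnect(si)

That at least tells us how many VMs an adapter change would actually touch.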

Thanks to http://virtualizationeh.ca/2010/10/04/pvscsi-lsi-sas-or-parallel-what-vscsi-adapter-should-i-choose/ for your excellent article, and http://communities.vmware.com/thread/211320  

Thanks in advance

Russ

2 Replies
roconnor
Enthusiast

Here is the answer I got back from VMware Support:

Best Network adapter

************************

‘VMware recommends using the VMXNET3 driver, which is optimized for performance.

It is recommended to change the adapter on running VMs from E1000 to VMXNET 3 if possible.’
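To make the ‘change the adapter’ step concrete: as far as I know the device class can't be edited in place, so the usual approach is to remove the E1000 and add a VMXNET 3 NIC on the same port group (the guest sees a brand-new NIC, so its IP settings need re-entering and the MAC address changes unless you pin it). A rough pyVmomi sketch, assuming vm is an already-retrieved VirtualMachine object:

    # Rough sketch: replace a VM's first E1000 NIC with a VMXNET 3 NIC on the same network.
    # Assumes 'vm' is a vim.VirtualMachine already retrieved from the inventory, and that
    # VMware Tools (with the vmxnet3 driver) is installed in the guest.
    from pyVmomi import vim

    old_nic = next(d for d in vm.config.hardware.device
                   if isinstance(d, vim.vm.device.VirtualE1000))

    remove = vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.remove,
        device=old_nic)

    new_nic = vim.vm.device.VirtualVmxnet3()
    new_nic.backing = old_nic.backing              # reuse the same port-group backing
    new_nic.connectable = vim.vm.device.VirtualDevice.ConnectInfo(
        startConnected=True, connected=True)

    add = vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
        device=new_nic)

    spec = vim.vm.ConfigSpec(deviceChange=[remove, add])
    vm.ReconfigVM_Task(spec=spec)                  # power off first, or hot-remove/add where supported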

Best SCSI adapter/s

********************

‘Regarding PVSCSI: the test results show that PVSCSI is better than LSI Logic, except under one condition: the virtual machine is performing fewer than 2,000 IOPS and issuing more than 4 outstanding I/Os. This issue is fixed in vSphere 4.1, so the PVSCSI virtual adapter can be used with good performance even under this condition (http://kb.vmware.com/kb/1017652). VMware suggests using PVSCSI where possible. PVSCSI is not supported for MSCS or Linux (except RHEL 5).

LSI Logic Parallel and SAS are equal in terms of performance; however, to be future-proof, VMware recommends using SAS when it is required.’
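For the PVSCSI side, here is a similarly rough sketch of adding a paravirtual SCSI controller to an existing VM. New data disks would then be attached to it; putting the boot disk on PVSCSI needs the driver present in the guest first, so treat this as illustrative only (again assuming vm is a retrieved VirtualMachine):

    # Rough sketch: add a paravirtual (PVSCSI) controller on SCSI bus 1 to an existing VM.
    # 'vm' is an already-retrieved vim.VirtualMachine; new data disks can then use this controller.
    from pyVmomi import vim

    pvscsi = vim.vm.device.ParaVirtualSCSIController()
    pvscsi.busNumber = 1
    pvscsi.sharedBus = vim.vm.device.VirtualSCSIController.Sharing.noSharing

    add_ctrl = vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
        device=pvscsi)

    spec = vim.vm.ConfigSpec(deviceChange=[add_ctrl])
    vm.ReconfigVM_Task(spec=spec)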

OK, so as I see it:

New/template VMs on hardware version 7

****************************************

  • Windows 2008 64-bit VMs

    • VMXNET3 Network adapter
    • PVSCSI or LSI Logic SAS *1

  • Windows 2003 32-bit VMs
    • VMXNET3 Network adapter
    • LSI Logic Parallel *2

  • Pending tests for RHEL

Notes

*1 Our environment is a mix of older Windows, MSCS, and Linux RHEL 4 and 5, so to keep things simple we will use LSI Logic SAS as the standard and PVSCSI for high-performance VMs.

*2 I would prefer to use LSI Logic SAS, but when we create an empty VM (not a clone) it comes up with LSI Logic Parallel. Pending more tests.

Now let me put it in terms of updating hundreds of VMs from 3.5 to 4.1. I asked whether the E1000 network adapters or LSI Logic SCSI adapters are approaching the end of their lifecycle, and whether, if we don't change the adapters, we are likely to run into end-of-support issues in a couple of years...

The reply from VMware Support:

‘Please note that the E1000 and LSI Logic SCSI adapters are NOT approaching end of life.

It will not be necessary to change the adapter on running VMs.’

That’s all for now; when I have completed tests on Windows 2003 64/32-bit and RHEL 5 64/32-bit, I will add the results.

Russ

LucasAlbers
Expert

We have slightly fewer than 600 VMs.

We are on 4.0, waiting for 4.1 U1 before we consider upgrading.

Our team argued for days about whether it would make sense to switch from E1000 to VMXNET 3.

We are split down the middle on whether it makes sense to switch.

E1000 is slower, but in practice the difference is negligible for most workloads.

E1000 supports Kickstart installs, an important point for mass deployment of RHEL.

On some workloads VMXNET 2 is faster, on others VMXNET 3 is faster; it depends on the average packet size.

A while ago I did some stress testing, vMotioning machines back and forth in perpetuity.

Occasionally (less than 1% of the time, perhaps as low as 0.01%) the VMXNET 3 cards wouldn't see the network; this never happened with the Flexible card.

My original argument was that VMXNET 3 was the newest and fastest and should be the default.

My opinion has since changed: the E1000 or Flexible driver has been around the longest and should be the most stable, as it has been time-tested the longest.

My default is to just go with whatever VMware suggests, unless I think the machine will be heavily utilized.

If it's a new server with a new OS, I will also give it a VMXNET 3.

If I need to change the MAC address from inside the VM, I give it VMXNET 2.

Some of our systems do scalability load testing with E1000 cards (RHEL 5.5).

The limiting factor is not the cards. I think vSphere 4.0 with Update 2 now supports TSO offload for the E1000 cards.

I cannot remember at exactly what point this support appeared.
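(If you want to verify from inside a RHEL guest whether TSO is actually enabled on the vNIC, something like the sketch below, which just parses ethtool -k, will show it; the interface name eth0 is a placeholder.)

    # Rough sketch: check TCP segmentation offload (TSO) on a guest NIC from inside
    # a RHEL guest by parsing the output of `ethtool -k`. "eth0" is a placeholder.
    import subprocess

    out = subprocess.check_output(["ethtool", "-k", "eth0"]).decode()
    for line in out.splitlines():
        if "tcp-segmentation-offload" in line or "tcp segmentation offload" in line:
            print(line.strip())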

Proper kernel settings have more of an effect on CPU, as they reduce the timer interrupts, with any network card.

The newer network cards generally have a lower background CPU load.

I just spent the last month migrating some machines from VMware Server to ESXi 4.0.

If you are considering Windows machines, you have to remember that Windows activation can bite you in the ass if you change:

  • CPUs
  • memory
  • disk controller
  • hardware version

In addition, the foreign-language versions of Windows are pickier about deciding when to reactivate.

If you don't have the newer VMware Tools installed and you upgrade from hardware version 4 to hardware version 7, it will always reactivate.

Relying on synthetic benchmarks alone to determine optimal performance is suboptimal.

For example, synthetic benchmarks suggested that virtualizing MySQL would give us terrible performance, based on comparing raw disk I/O between physical hardware and virtual hardware. In practice our virtualized MySQL servers worked better than physical.

MySQL does row locking on database drops, which prevents multiple database creates and drops from occurring asynchronously, so having multiple small MySQL servers instead of one big physical server dramatically improved our multi-database performance.

On VM upgrades or conversions, leave the hardware alone unless you are forced to change it.

Then, if you find that a machine has performance issues, or you think it will, you can focus on that small subset and change the hardware.

Focus your manual effort on the items that have immediate benefit.