VMware Cloud Community
henber
Contributor

Copy rate: physical Win2008R2 vs. virtual Win2008R2

Hello, I have a question regarding copy speed on a physical vs. a virtual server, both connected to the same storage, through the same protocol (FC), through the same fabric and so on.

Storage : XIV

4 Gb FC

Virtual ENV:

OS: Win2008R2

SCSI adapter: LSI SAS or PVSCSI (tested both)

6 paths per HBA on the ESX host against the XIV

ESX:

Queue depth: 64

ESXi 5

Physical server:

OS: Win2008R2

QLogic 4 Gb HBA x 2

6 paths per HBA against the XIV

Q:

When I copy a large file inside the virtual server I get a write rate of approx. 50-60 MB/s.

When I copy the same file to a disk on the physical server I get approx. 500 MB/s.

The only difference I can see is that the VM is on a VMFS5 datastore (native, not upgraded), shared with 2 other VMs (neither of them high-IO).

Am I missing something? Do I have to configure ESXi 5 differently to get full speed against the storage? Are there any advanced settings I could use to test anything else?

I have changed the HBA queue depth and Disk.SchedNumReqOutstanding to 64 on the host. What I have not tried yet is changing the queue depth of the paravirtual (PVSCSI) driver in the Windows guest to 64.
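For reference, the host-side changes were made roughly like this (treat it as a sketch: the module name depends on your HBA driver, qla2xxx is just the QLogic example, and the module parameter change needs a host reboot):

esxcli system settings advanced set -o /Disk/SchedNumReqOutstanding -i 64
esxcli system module parameters set -m qla2xxx -p ql2xmaxqdepth=64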

Thanks in advance for any answers.

Br

5 Replies
rickardnobel
Champion

henber wrote:

Q:

When I copy a large file inside the virtual server I get a write rate of approx. 50-60 MB/s.

When I copy the same file to a disk on the physical server I get approx. 500 MB/s.

The difference might be in where the write IOs actually end up, so a couple of clarifying questions:

When you copy the file in the virtual machine, is it a large file inside a virtual disk that you copy to another location on that same disk?

When you do the same on the physical server, how do you copy the file? From the FC SAN to local storage in the physical server?

My VMware blog: www.rickardnobel.se
henber
Contributor

Hello, thank you for the answer.

Sorry for my slightly unclear description; the copying is done on an FC volume.

So the VM has its disk on a VMFS5 volume, and I do the copy on that same drive, e.g. the D: drive.

On the physical server I do the copy on its D: drive, which is an FC LUN on the same XIV, zoned the same way in the same FC switches. We have tried cleaning the fibre and also changed ports etc. for the ESX server.

Do you have any suggestions as to why the difference is so huge?

Br

Henrik

henber
Contributor

Ok, let's spice it up a little bit 🙂

After some more testing today against the VM mentioned above we still have not solved it; we get about 50-60 MB written/s and 700 writes/s against that .vmdk file.

For further testing today we set the physical server aside and concentrated on a new VM that we placed on the same ESXi 5 host and the same datastore, residing on the IBM XIV over FC. We added a disk to the new VM on a PVSCSI adapter, just like on the VM we have problems with.

We then ran the same IO test against this VM and got about 270-330 MB written/s and 3800-4200 writes/s in esxtop.
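(For anyone who wants to compare: roughly speaking we are looking at the virtual machine disk view in esxtop, i.e. press v, and reading the WRITES/s and MBWRTN/s columns for the VM.)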

So this is our config:

VM A (problem VM)

VM hardware version: 8

OS: Windows 2008R2

disk: SCSI 0:0 (LSI SAS) 40 GB OS disk

        SCSI 1:0 (PVSCSI) 400 GB data disk, 64k alignment

vCPU: 4

MEM: 16 GB

VM B with higher throughput against storage

VM hardware version: 8

OS: Windows 2008R2

disk: SCSI 0:0 (LSI SAS) 40 GB OS disk

        SCSI 1:0 (PVSCSI) 400 GB data disk, 64k alignment

vCPU: 4

MEM: 16 GB

Both these VMs are on the same ESXi 5 host, which has queue depth 64, on a 1.8 TB datastore (VMFS5, created from scratch) from the IBM XIV (FC, round-robin). Both VMs have the same Windows patches, the same set of software, the same version of VMware Tools, the same PVSCSI driver and so on...
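If anyone wants to double-check the multipathing part, the path selection policy can be verified on the host with something like this (the naa identifier below is just a placeholder for the XIV LUN):

esxcli storage nmp device list
esxcli storage nmp device set -d naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_RR

Since both VMs live on the same device this should look identical for them anyway.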

VM B gets about 270-330 MB written/s and VM A gets about 50-60 MB written/s. We have seen this both from the storage side and monitored in esxtop.

The only difference that we know of is that on VM A, the SCSI 1:0 disk was not originally created on a PVSCSI adapter but on an LSI SAS adapter; the PVSCSI adapter was added to the VM, and the disk moved to it, after the disk was created. There were no problems installing the drivers etc. in Windows.

So what is the difference between these two VMs?

There are no differences in shares, no limits, no constraints in the config... they are in the same resource pool!

Are there any settings in Windows regarding PVSCSI that someone knows about? As Duncan Epping mentions here http://www.yellow-bricks.com/2011/03/04/no-one-likes-queues/ the default queue depth for the PVSCSI guest driver is 64. We have tried adding that value to the registry as described in this KB: http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&docType=kc&externalId=1017423&sl...
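For completeness, the registry value we added was something along these lines (quoted from memory, so please double-check the exact path and syntax against the KB before using it), followed by a reboot of the guest:

reg add HKLM\SYSTEM\CurrentControlSet\services\pvscsi\Parameters\Device /v DriverParameter /t REG_SZ /d "MaxQueueDepth=64"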

No difference there either.

I would be really happy if someone had suggestions that could help us resolve this... pretty annoying thing...

BR

Henrik

BharatR
Hot Shot

Hi,

henber wrote:


When I copy a large file inside the virtual server I get a write rate of approx. 50-60 MB/s.

VM B gets about 270-330 MB written/s and VM A gets about 50-60 MB written/s.

Try disabling the TCP auto-tuning level on the VM you are copying files from:

netsh interface tcp show global
netsh interface tcp set global autotuninglevel=disabled

The first line lets you verify the current setting; if it is 'normal' (or otherwise enabled) you can disable it with the second line. It is a nice feature, it just sometimes needs some fine-tuning.

Also, have you by any chance unticked/disabled the IPv6 protocol in the 2008 R2 LAN properties? That is worth checking as well.

Best regards, BharatR -- VCP4 Certification #79230. If you find this information useful, please award points for "correct" or "helpful".
henber
Contributor

Thank you for the answer BharatR, but unfortunately the IO test we are using does not do any copying over the network. It creates a file locally on the VMs and tests writes and reads against that file, so it is only local operations. But just to be sure I tested your suggestions, with the same result as before.

Today we will try to remove the PVSCSI disk from VM A, uninstall VMware Tools with its drivers, reinstall them, add the disk back to the VM, do a rescan of the disks so that all buses are updated, and try again. Hopefully this will correct whatever bug/feature we are hitting.
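(The rescan inside the guest will just be the plain diskpart one, i.e. start diskpart and run rescan at the DISKPART prompt, nothing fancier than that.)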

We have also tried mounting the disk created on VM B to VM A's PVSCSI adapter, with the same result... hehe, funny thing, this!

BR
