VMware Cloud Community
rkelleyrtp
Contributor

Slow vStorage Migration on 10G network - 30MB/sec max

Greetings all,
I am running into a performance problem with vStorage Migration and am hoping someone can help out.  When doing a storage migration from one data store to another, I am only getting about 30MB/sec thruput - even though all devices (ESXi servers and NFS servers) have 10G connectivity. 
A description of my setup:
  • ESXi 4.0U3 server with 10G Intel x520-DA (82599) cards, Intel Nehalem 2.9GHz quad-core CPUs, 32GB RAM, USB boot drive
  • The ESXi server has a single vSwitch using a single 10G NIC configured for 9000-byte MTU (on both the VMkernel and vSwitch ports)
  • Two NFS servers with 10G Intel x520-DA cards, LSI 8888ELP RAID controllers, Intel 2.3GHz CPUs, 4GB RAM, CentOS 5.7, 12x1TB drives in RAID-6.  The servers can read/write at 400+MB/sec across NFS mount points (using async options)
  • Cisco N5010 10G network switch (no vPC) with jumbo frames enabled

I have confirmed the ESXi servers can read/write to the NFS storage at >100MB/sec using the following tests:
  • From the ESXi CLI, I can copy from one NFS datastore to another at >100MB/sec
  • From the ESXi CLI, I can "wget" from one NFS server to /dev/null at 400+MB/sec (wget http://nfs1/file1 -O /dev/null)
  • From the ESXi CLI, I can "wget" from one NFS server to another NFS mount point at 125+MB/sec (wget http://nfs1/file1 -O /vmfs/volumes/nfs1/file1)
  • A VM running on one of the NFS datastores can perform a local write at >100MB/sec using "dd" (dd if=/dev/zero of=file1 bs=1M count=1000)
  • Running iperf between the NFS servers gives >900MB/sec network throughput (no packet loss)
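For scale, here is a quick back-of-the-envelope calculation (just shell arithmetic on the 40GB VMDK size and the rates above, nothing measured) showing what the gap means in wall-clock time:

```shell
# Estimated migration time for a 40GB (40*1024 MB) VMDK
# at the observed 30MB/sec versus the ~100MB/sec the storage can sustain.
vmdk_mb=$((40 * 1024))
echo "at 30MB/sec:  $((vmdk_mb / 30)) seconds"    # ~1365s, roughly 23 minutes
echo "at 100MB/sec: $((vmdk_mb / 100)) seconds"   # ~409s, roughly 7 minutes
```

So the throttling more than triples the migration time for a single modest VMDK.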

I have been doing a lot of testing and have narrowed the problem down to the storage migration function.  Although the ESXi servers can read and write to the network and NFS servers at >100MB/sec, Storage vMotion tops out at 30MB/sec.  My test VM is a CentOS VM with a 40GB thick-provisioned VMDK (powered off during the storage migration), and no other VMs are running on the ESXi server.  I have fully patched both the NFS and ESXi servers, and I have even loaded the latest Intel driver package from VMware ("ixgbe" driver 3.4.23) on both ESXi servers.
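In case anyone wants to compare host settings, these are the advanced options I checked along the way (a sketch only: esxcfg-advcfg -g just reads a value, and these particular option paths are ones I believe exist on ESXi 4.x - verify the names on your own host before relying on them):

```shell
# Read-only check of NFS/TCP advanced settings on an ESXi 4.x host
esxcfg-advcfg -g /Net/TcpipHeapSize   # TCP/IP heap allocated at boot
esxcfg-advcfg -g /Net/TcpipHeapMax    # maximum TCP/IP heap size
esxcfg-advcfg -g /NFS/MaxVolumes      # number of NFS mounts allowed
```

None of these changed the 30MB/sec ceiling for me, which is why I suspect the throttle is inside the migration/data-mover path rather than in the NFS client settings.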

What could be causing the storage migration tool to throttle the transfer?
Thanks for any pointers...
-Ron
3 Replies
LIAI
Contributor

Did you ever find a solution to this? We're seeing a similar issue.

colinprea
Contributor

Yes, we're seeing very much the same problem too.

NetApp 6080 filers, 10GbE connectivity; Storage vMotion is like treacle, even though host-to-host vMotion runs in excess of 2Gbps (that's gigabits, not gigabytes).

I would really, really appreciate any light that anyone can shed on this.

brianlhogan
Contributor

I am having the same problem.  Did you ever find a resolution?

1)  Migrate/Storage vMotion over a 10GbE link is fast (>600,000kbps) with a powered-on VM.

2)  Migrate/Storage vMotion over a 10GbE link is slow (~600kbps) with a powered-off VM.

vCenter 5.1 and ESXi 5.1.
