VMware Cloud Community
systimax
Contributor

Max speed of Win VM to VM transfer on same host, same storage?

Can someone tell me and explain why the max speed of transferring data from one Windows VM to another Windows VM that sits on the same host and same storage would max out at 60 MB/s? (That's what I have been told.)

Since you're bypassing the physical network card associated with the VMs' network, why would you be limited to anything below the speed of the 10 Gig virtual NIC in the VM?

I assume the VM is still using the VMkernel NIC connection to the SAN to read and write data. However, since I have 10 Gig FC, I think it would be faster.

Any insight would be great.

thanks

chriswahl
Virtuoso

For your data transfer example, the limitation would be the read and write speeds of the underlying storage. If the storage can only handle reading and writing at 60 MBps, the network throughput is not relevant. In your case, the same storage has to do both operations, which taxes it further (it is both the read source and write destination).
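As a rough sketch (the array figure below is just an assumed number for illustration, not a measurement of your array), a VM-to-VM copy on the same datastore turns every byte into one read plus one write against the same storage:

# Rough sketch in Python. The throughput figure is an assumption, not a
# measurement. A VM-to-VM copy on one datastore makes the array serve the
# reads and absorb the writes at the same time.

array_total_mbps = 120.0   # assumed sustained MB/s the array can deliver

# Every byte copied is read once and written once against the same array,
# so the copy itself moves at roughly half the array's total throughput.
copy_rate_mbps = array_total_mbps / 2

print(f"Effective VM-to-VM copy rate: ~{copy_rate_mbps:.0f} MB/s")   # ~60 MB/s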

Also, FC SAN comes in speeds of 2/4/8 Gb, unless you mean iSCSI or NFS.

VCDX #104 (DCV, NV) ஃ WahlNetwork.com ஃ @ChrisWahl ஃ Author, Networking for VMware Administrators
systimax
Contributor

Storage is a Nimble CS220. Sorry, I was typing and talking. I have 10 GbE using a fiber connection, not Fibre Channel.

I thought the array could do more than 120 MB/s, as it's a flash hybrid. It can do at least 120 MB/s with a physical host copying data to a VM. It's only VM to VM that seems to bottleneck around 60 MB/s. Plus, I would assume that 120 MB/s from physical to VM is a 1 Gb Ethernet limitation.

For background, I am using two 1 Gb NICs from the ESX host with VMware round robin. Maybe the issue is that I only have two round robin physical NICs and that can only max out at 120 MB/s, and in this case 50% is for reads (60 MB/s) and 50% is for writes (60 MB/s).
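Here is the back-of-the-envelope math I am working from (the overhead number and the 50/50 read/write split are just my assumptions):

# Back-of-the-envelope; all numbers are assumptions on my part.
link_gbps = 1.0                # raw line rate of one NIC
protocol_overhead = 0.05       # guess at TCP/IP + iSCSI framing cost

usable_mb_per_s = link_gbps * 1000 / 8 * (1 - protocol_overhead)   # ~119 MB/s

# If the copy's reads and writes end up splitting that bandwidth 50/50:
reads_mb_per_s = usable_mb_per_s / 2
writes_mb_per_s = usable_mb_per_s / 2

print(f"Usable per 1 Gb link: ~{usable_mb_per_s:.0f} MB/s")
print(f"Reads: ~{reads_mb_per_s:.0f} MB/s, writes: ~{writes_mb_per_s:.0f} MB/s")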

Does this make sense?

rlund
Enthusiast

I believe Nimble is iSCSI only today.

Roger Lund

Roger Lund, Minnesota VMUG leader, Blogger, VMware and IT Evangelist. My Blog: http://itblog.rogerlund.net & http://www.vbrainstorm.com
systimax
Contributor

It is. But you can use fiber modules (SFP+) or copper Ethernet to connect the Nimble to your physical switch.

I'm guessing, like I said in the last post, that the 60 MB/s is due to round robin and two 1 Gb NICs, with reads and writes taking 50% each.

I would probably need to do LACP or some other bonding to get higher than 120 MBps.

chriswahl
Virtuoso

That sounds reasonable. I have an iSCSI NAS in my home lab that is connected via LACP to the switch, which allows me to send about 216 MBps down a pair of 1 GbE links. If you were limited to one link and were doing both reads and writes, a limitation of 60 MBps sounds about right.
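If it helps, here's a toy sketch of why the aggregate only climbs when multiple flows hash onto different LAG members (the addresses, the hash, and the per-link number are all made up for illustration; any single flow still rides one link):

# Toy sketch: LACP balances traffic per flow, so a single flow stays on one
# member link. The addresses, hash, and per-link rate below are made-up
# illustration values, not anything from my lab.

links = 2                # member links in the LAG
link_mbps = 108          # assumed usable MB/s per 1 GbE link (2 x 108 ~= 216)

def flow_hash(src_ip, dst_ip, port):
    # Crude stand-in for a switch's src/dst hash policy.
    return (sum(src_ip.encode()) + sum(dst_ip.encode()) + port) % links

flows = [
    ("10.0.0.10", "10.0.0.50", 3260),    # iSCSI session 1
    ("10.0.0.11", "10.0.0.50", 3260),    # iSCSI session 2
]

links_in_use = {flow_hash(*f) for f in flows}
print(f"{len(flows)} flows land on {len(links_in_use)} link(s): "
      f"~{len(links_in_use) * link_mbps} MB/s aggregate")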

Details on this process can be found here, if interested: http://wahlnetwork.com/2012/07/25/creating-a-link-aggregation-group-for-a-vsphere-lab-video/

VCDX #104 (DCV, NV) ஃ WahlNetwork.com ஃ @ChrisWahl ஃ Author, Networking for VMware Administrators
systimax
Contributor

Great video. I have two links from the ESX host to the storage target using VMware's round robin. However, from what I understand, even though I have two 1 Gb links the max is still around 120 MB/s (and even if I had more 1 Gb links it would still be 120), hence the 60 read / 60 write.

I'll try LACP. Thanks for helping.
