VMware Cloud Community
cyber_smity
Contributor
Jump to solution

Performance difference between VMFS3, NFS, and iSCSI?

So currently we've got all our VMs running directly off of the local VMFS3 datastores. Since we want to be able to use VMotion between different VMware servers without shutting down the VMs, we've been researching moving the VMs over to an NFS or iSCSI solution. My question is this: if all the data for a VM must then be transmitted over the network to reach the shared storage location, won't this essentially create a bottleneck in the performance of the virtual machine?

The way I see it, these SATA drives can do 3+ Gbps, while a Gigabit network would max out at only 1 Gbps. So shouldn't this degrade performance?
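
Here's the back-of-the-envelope arithmetic behind my concern, sketched in Python (interface ceilings only, assuming SATA II's 8b/10b encoding; it says nothing about what the disks themselves actually deliver):

    # Rough payload ceilings implied by the raw link rates being compared.
    # SATA II signals at 3.0 Gbit/s but uses 8b/10b encoding, so usable
    # payload is ~80% of that; Gigabit Ethernet carries ~1 Gbit/s of payload.
    def gbit_to_mb_per_s(gbit_per_s):
        """Convert Gbit/s of payload to MB/s (1 Gbit/s = 125 MB/s)."""
        return gbit_per_s * 1000.0 / 8.0

    sata_ii_ceiling = gbit_to_mb_per_s(3.0 * 0.8)  # ~300 MB/s per SATA port
    gige_ceiling    = gbit_to_mb_per_s(1.0)        # ~125 MB/s per GbE link

    print(f"SATA II interface ceiling: ~{sata_ii_ceiling:.0f} MB/s")
    print(f"Gigabit Ethernet ceiling:  ~{gige_ceiling:.0f} MB/s")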

Hopefully somebody can shed some light on this.

Can anyone speak from personal experience about running a Windows Server-based virtual machine on a shared storage location that serves enterprise-level applications? I'd be very curious to hear of any experiences and of the observed performance.

Reply
0 Kudos
1 Solution

5 Replies
zemotard
Hot Shot
Jump to solution






Best Regards

If this information is useful for you, please consider awarding points for "Correct" or "Helpful".

cyber_smity
Contributor
Jump to solution

This is very helpful! The only thing that's missing is a comparison between these and a local disk (a non-shared VMFS3 datastore). In a perfect world I'd like to see how that stacks up as well.

Reply
0 Kudos
Jae_Ellers
Virtuoso
Jump to solution

To avoid the bottleneck you need to have enough physical NICs (pNICs), and you should segregate your traffic so that your service console, VMotion, virtual machine, and storage traffic are all on different pNIC teams.

At a minimum this takes 4 pNICs. You can play games with 802.1q trunking to limit your exposure and minimize the NIC count; however, you'll still probably want 6 pNICs if you're doing IP-based storage.

A reference (best-practice) architecture would take 8:

2 for the COS (service console)

2 for VMotion

2 for VMs

2 for NFS or iSCSI

If each of these is teamed, you can withstand a physical switch or pNIC outage and keep your systems up; a rough sketch of that layout follows.
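
To make the separation concrete, here's a hypothetical sketch of that reference layout as a simple data structure (the vSwitch names and vmnic numbering are illustrative assumptions only, not anything prescribed):

    # Hypothetical 8-pNIC reference layout: each traffic type gets its own
    # vSwitch backed by a team of two uplinks, so any single pNIC or physical
    # switch can fail without taking that traffic type down.
    reference_layout = {
        "vSwitch0": {"traffic": "service console (COS)", "uplinks": ["vmnic0", "vmnic1"]},
        "vSwitch1": {"traffic": "VMotion",               "uplinks": ["vmnic2", "vmnic3"]},
        "vSwitch2": {"traffic": "virtual machines",      "uplinks": ["vmnic4", "vmnic5"]},
        "vSwitch3": {"traffic": "NFS / iSCSI storage",   "uplinks": ["vmnic6", "vmnic7"]},
    }

    def unprotected_traffic(layout):
        """Return traffic types that lose connectivity if a single uplink dies."""
        return [v["traffic"] for v in layout.values() if len(v["uplinks"]) < 2]

    print(unprotected_traffic(reference_layout))  # [] -- every team has a standby

With only 4 or 6 pNICs you end up collapsing some of these onto shared teams (leaning on 802.1q VLANs to keep them logically separate), which is exactly the compromise described next.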

That said, I make it work with 4 NICs and take my chances. I intend to change this with the next round of hardware, and I'm using mostly FC storage anyway.

2 pNICs are teamed and carry COS and VM traffic

1 pNIC carries VMotion and some crazy test VLANs

1 pNIC carries limited iSCSI storage.

"Do as I say, not as I do".

-=-=-=-=-=-=-=-=-=-=-=-

Check my blog: http://blog.mr-vm.com

Now contributing to http://www.vmprofessional.com

-=-=-=-=-=-=-=-=-=-=-=-

Reply
0 Kudos
Jae_Ellers
Virtuoso
Jump to solution

On decent hardware (an IBM x3850), here's a rough comparison with ESX 3.0.0:

Local disk sux0r. The attached image compares Fibre Channel (FC), iSCSI with a TOE adapter (itoe), software iSCSI (inic), and local disk, with 4 VMs running Iometer with 2 threads each and sweeping through block sizes from 16 KB to 512 KB.

FC peaks at 320+ MB/sec

iSCSI with TOE (itoe) peaks at 140+ MB/sec

Software iSCSI (inic) peaks at 120 MB/sec

Local disk peaks at 45 MB/sec.

Spindle count, my friend, spindle count.
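
To put some very rough numbers behind that point (both constants below are assumptions for illustration, not anything measured in this test):

    # Very rough arithmetic behind the "spindle count" point. Both constants
    # are illustrative assumptions, not measurements from this comparison.
    PER_SPINDLE_MB_S = 40.0   # assumed sustained sequential MB/s per spindle
    GIGE_WIRE_MB_S = 125.0    # payload ceiling of a single 1 GbE link

    peaks_mb_s = {"FC": 320, "iSCSI w/ TOE": 140, "SW iSCSI": 120, "local disk": 45}

    for name, peak in peaks_mb_s.items():
        equivalent_spindles = peak / PER_SPINDLE_MB_S
        print(f"{name}: ~{peak} MB/s, roughly {equivalent_spindles:.1f} spindles'"
              f" worth of sequential throughput")

    print(f"(A single GbE link tops out around {GIGE_WIRE_MB_S:.0f} MB/s of payload.)")

A few local spindles simply can't match the throughput of a well-populated array, whatever protocol sits in front of it.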

-=-=-=-=-=-=-=-=-=-=-=-

Check my blog: http://blog.mr-vm.com

Now contributing to http://www.vmprofessional.com

-=-=-=-=-=-=-=-=-=-=-=-

cyber_smity
Contributor
Jump to solution

That is exactly the information I was looking for, thank you!

On a side note, any chance you could provide some more hardware details on a recommended Fibre Channel setup? Thanks!!

Reply
0 Kudos