jwdaigle
Contributor

Performance of shared datastore with iSCSI versus local storage of VMs

I'm new to the ESXi world, and I'm experimenting with different configurations as I settle on best practices. We currently store VMs on the local disks of the ESXi hosts. As you all already know, this is a pain when we want to move a VM from one host to another.

Someone in another thread suggested using shared storage for all the VM hosts.

The best way for me to do this is via iSCSI.

But it bothers me that all disk access will go over the network (since the VM is stored "remotely" from the host). What about VMs that do a lot of disk access (a transactional database, for instance)?

So I guess my question is: what is the relative performance difference for a VM doing a lot of disk access when that VM is stored on an iSCSI box versus on local storage? Let's assume equivalent performance for the disk subsystem on both machines (VM host and iSCSI box).

Are there any references that discuss this?

Thanks!

10 Replies
Gerrit_Lehr
Commander

That really depends on your configuration. A dedicated 1 Gb or even 10 Gb connection to an iSCSI array with several disks can indeed deliver much better performance than local storage. But as I said, it depends.
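
To put rough numbers on the link ceilings involved, here is a quick back-of-envelope sketch in Python (all throughput figures are assumed ballpark values, not measurements from any particular setup):

# Back-of-envelope comparison of transport ceilings.
# All figures are rough assumptions for illustration only.
links_mb_s = {
    "1 GbE iSCSI (raw wire speed)": 1_000_000_000 / 8 / 1e6,  # 125 MB/s
    "1 GbE iSCSI (effective, assumed)": 110,    # after TCP/IP + iSCSI overhead
    "10 GbE iSCSI (effective, assumed)": 1100,
    "single local SATA disk, sequential (assumed)": 100,
}
for name, mb_s in links_mb_s.items():
    print(f"{name:45s} ~{mb_s:6.0f} MB/s")

Which is why "it depends": a single 1 GbE link sits near the sequential throughput of one local disk, while an array behind 10 GbE can far exceed it.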

Kind Regards,

Gerrit Lehr

If you found this or other information useful, please consider awarding points for "Correct" or "Helpful".

krowczynski
Virtuoso

Hi,

it depends on your environment.

We have four hosts running only iSCSI and have no problems with performance.

You can, for example, create a separate VLAN for the iSCSI traffic.

MCP, VCP3, VCP4
J1mbo
Virtuoso

The issue is that iSCSI is typically on 1 GigE, which limits maximum throughput to about 120 MB/s per link (aggregation capabilities depend on the device in question and how it's configured).

However, for database workloads the issue is not sequential block IO - a single £50 SATA drive can approach 100 MB/s - but random throughput. A 16-drive iSCSI array with 15k RPM disks will obviously be massively quicker than the £50 drive for that workload. Similarly, a 16-drive DAS array with decent controllers would also be quicker. The question is whether you'd be putting in a similar array locally.
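
To illustrate the random-IO side with rough numbers, a short sketch (the per-disk IOPS figures and RAID write penalties are ballpark assumptions, not vendor data):

# Rough usable random-IOPS estimate for a disk array.
# Per-disk IOPS and RAID write penalties are assumed ballpark values.
def array_iops(disks, iops_per_disk, write_fraction, raid_write_penalty):
    """Each front-end write costs `raid_write_penalty` back-end IOs."""
    raw = disks * iops_per_disk
    return raw / ((1 - write_fraction) + write_fraction * raid_write_penalty)

# Single 7.2k SATA drive, no RAID, 30% writes:
print(round(array_iops(1, 75, 0.3, 1)))     # ~75 IOPS
# 16 x 15k RPM drives in RAID 10 (write penalty 2), 30% writes:
print(round(array_iops(16, 175, 0.3, 2)))   # ~2154 IOPS

Sequential MB/s barely differs between the two, but on random IO the array is more than an order of magnitude quicker - and random IO is what a transactional database actually exercises.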

In my experience, getting numbers out of the manufacturers can be tough, though (see my question on it here).

Please award points to any useful answer.

AntonVZhbankov
Immortal

What performance are you talking about? Megabytes per second or IOPS?

iSCSI will definitely lose to local storage on megabytes per second, but do you really use 90 MB/s? Everything else depends on so many things...

iSCSI has higher latency, without any doubt, but do you really need low latency? How many IOPS do you want? What disks will you use, how many disks, what RAID controller?
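
As a rough illustration of why the added latency may or may not matter (all component figures below are assumed typical values, not measurements):

# Where an iSCSI read's latency comes from, very roughly.
# All component figures are assumed typical values for illustration.
disk_service_ms = 6.0     # seek + rotation on a 10k/15k spindle (assumed)
gige_rtt_ms = 0.3         # switched 1 GbE LAN round trip (assumed)
target_stack_ms = 0.2     # iSCSI target processing (assumed)

local_ms = disk_service_ms
iscsi_ms = disk_service_ms + gige_rtt_ms + target_stack_ms
print(f"local: {local_ms:.1f} ms, iSCSI: {iscsi_ms:.1f} ms "
      f"(+{(iscsi_ms - local_ms) / local_ms:.0%})")

Against a mechanical disk's own service time, the network adds only single-digit percent; it starts to dominate only when the back end is fast enough (many spindles, or cache hits) that the disk term shrinks.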


---

VMware vExpert '2009

http://blog.vadmin.ru

EMCCAe, HPE ASE, MCITP: SA+VA, VCP 3/4/5, VMware vExpert XO (14 stars)
VMUG Russia Leader
http://t.me/beerpanda
TimPhillips
Enthusiast

If you want to use shared storage, you really have two options: iSCSI and FC.

iSCSI is cheap, powerful, and limited only by your Ethernet bandwidth, while FC gives more performance but costs much more money; it's very expensive. iSCSI is the best option, and I've been using it for a long time. I would also recommend buying not an HBA or a NetApp, but a SATA disk array (SAS is better for direct connection, or with a 10Gb connection for maximum performance at that cost) plus software iSCSI. I wouldn't recommend "free" solutions like FreeNAS or Openfiler; better to buy robust, proven solutions such as StarWind, DataCore, or LeftHand.

AntonVZhbankov
Immortal

>If you want to use shared storage, you really have two options: iSCSI and FC.

Actually, there are also NFS and DAS (for small installations with 2-3 hosts).

>I wouldn't recommend "free" solutions like FreeNAS or Openfiler; better to buy robust, proven solutions such as StarWind, DataCore, or LeftHand.

Or maybe an entry-level hardware storage array from NetApp, HP, EMC, etc.


---

VMware vExpert '2009

http://blog.vadmin.ru

EMCCAe, HPE ASE, MCITP: SA+VA, VCP 3/4/5, VMware vExpert XO (14 stars)
VMUG Russia Leader
http://t.me/beerpanda
LucasAlbers
Expert

I think NFS scales to more than just 2 or 3 hosts.

In our experience, using NFS for database server data volumes gives crap performance.

We have had OK performance using NFS for the OS and file-server VMDKs, but not so good when using it to mount SQL Server data VMDKs.

TimPhillips
Enthusiast

To Anton: Thank you, I know about DAS and NAS solutions, but this thread has been about iSCSI from the beginning; that's why I posted about it.

To LucasAlbers: NFS scales to far more than 2-3 hosts, but it's at its best when you share files and nothing more, while iSCSI is a more flexible, more manageable solution.

LucasAlbers
Expert

To TimPhillips,

Why would you pick iSCSI over NFS?

Why is iSCSI more flexible and manageable?

Rumple
Virtuoso

NFS on a NetApp, with a single-tray aggregate of 10k drives, can host well over 80-100 typical VMs (including servers with local databases like BES) across multiple hosts.

Openfiler, on the other hand, probably won't handle it quite so well.

Personally, given the choice, I will never go back to using anything but NFS, for the file-level vs. LUN-level locking alone...
