VMware Cloud Community
xianmacx
Contributor

NFS NAS SSD and bottlenecks...

Hello everyone,

We are doing some testing for vSphere 5 and I am trying to set up some fast shared storage.  There are no RAID requirements as it's just for testing.

Here is what I am looking at:

I have 3 hosts, each with 6x 1 Gb NICs.

Gigabit network switch

I am looking at building a barebones physical machine with a 120 GB SSD on SATA III.  I would use Openfiler or the like to set up NFS/iSCSI storage.  This system has one 1 Gb NIC.  The drive is very fast, spec'd at 60,000 IOPS.

Based on this, I would think I can run 5-10 VMs on this shared storage with no issues (except drive size).

My questions:  Is one 1 Gb NIC from each host going to be enough bandwidth?  Do I need to team two?

Where will my bottlenecks be with this configuration?

Will I ever even come close to fully utilizing the drive's performance in this setup?  Should I opt for a slightly cheaper SATA II SSD at about 15,000 IOPS?

What overall performance do you think I can expect from the VMs living on this datastore?


Again, I need no redundancy, just sheer performance from shared storage.


Thanks,

Ian

mcowger
Immortal

1) No SATA III drive will be able to actually service 60K IOPS.  Be happy if you get 20K.

2) You will easily saturate 1 Gbit with such a drive (rough numbers below).

3) No.  I'd go for the cheaper 15K IOPS drive.

4) 5 VMs should be fine, depending on their workload.  Your controller and the link will be your bottleneck.

A single host running Openfiler and a single SSD (hence only one queue) is not a performance setup :)
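
Rough back-of-the-envelope to show why the link saturates first (the IOPS figure and the 8 KB average IO size are assumptions, adjust for your workload):

# Does the SSD out-run a single 1 GbE link?  All numbers are assumptions for illustration.
iops = 20_000                     # realistic ceiling for a consumer SATA SSD (assumed)
io_size_bytes = 8 * 1024          # 8 KB average IO size (assumed; depends on workload)

drive_mb_s = iops * io_size_bytes / 1_000_000        # ~164 MB/s the drive could push
link_mb_s = 1_000_000_000 / 8 / 1_000_000 * 0.9      # ~112 MB/s usable on 1 GbE after ~10% protocol overhead (assumed)

print(f"Drive: ~{drive_mb_s:.0f} MB/s, 1 GbE link: ~{link_mb_s:.0f} MB/s")
# Drive: ~164 MB/s, 1 GbE link: ~112 MB/s -> the NIC saturates well before the SSD does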

--Matt VCDX #52 blog.cowger.us
xianmacx
Contributor

Matt,

Thank you for the response.

So how does the number of queues factor in and affect performance?  I thought we could just compare IOPS to IOPS?

How can you get an idea of when your 1 Gb link will be fully saturated?

Thanks in advance, sorry for the silly questions,

Ian

mcowger
Immortal

The number of queues affects the number of IOs that can happen at the same time.  With only one drive, you have only one queue.  That queue is likely to be your bottleneck.
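
To put a number on it, Little's Law (achievable IOPS is roughly IOs in flight divided by per-IO latency) is a quick sketch - the latency figures here are assumed, not measured:

# Little's Law: IOPS ~= outstanding IOs / per-IO latency.  Latency figures are assumptions.
def iops(outstanding_ios, latency_ms):
    return outstanding_ios / (latency_ms / 1000.0)

print(iops(outstanding_ios=1, latency_ms=0.2))    # ~5,000 IOPS if only 1 IO is ever in flight
print(iops(outstanding_ios=32, latency_ms=0.2))   # ~160,000 IOPS if you could keep 32 in flight

So a single device queue caps how many IOs can be outstanding, which caps IOPS no matter what the drive is spec'd at.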

As far as IOPS on a 1 Gbit link goes - that's entirely dependent on the IO size; there's no way to predict without knowing the IO size.
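
For a feel of the ceiling, here is the theoretical maximum IOPS a 1 Gbit link can carry at a few common IO sizes (this ignores NFS/iSCSI/TCP overhead, so real numbers will be lower):

# Theoretical IOPS ceiling of a 1 Gbit/s link at different IO sizes (upper bounds only).
link_bytes_per_sec = 1_000_000_000 / 8        # 1 Gbit/s = 125 MB/s

for io_kb in (4, 8, 32, 64):
    ceiling = link_bytes_per_sec / (io_kb * 1024)
    print(f"{io_kb:>2} KB IOs: ~{ceiling:,.0f} IOPS max")

#  4 KB IOs: ~30,518 IOPS max
#  8 KB IOs: ~15,259 IOPS max
# 32 KB IOs: ~3,815 IOPS max
# 64 KB IOs: ~1,907 IOPS max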

--Matt VCDX #52 blog.cowger.us
NinjaHideout
Enthusiast

You usually need shared storage when you want to test stuff like DRS / HA / FT.

Depending on what you want to experiment with, you might want to dedicate a NIC from each host for FT / vMotion traffic.

As for the NAS box, to squeeze the most performance out of it, I would add at least one more NIC and team it with the existing one.

xianmacx
Contributor

OK Matt, so here is another option that I didn't consider.

NAS Option 1

1x SATA III SSD with a theoretical 20,000 IOPS

1 NIC

NAS Option 2

6x Ultra320 15K SCSI drives, RAID 0

1 NIC

So since you can't just compare IOPS to IOPS, how would Option 1 compare to Option 2?  Option 1 has much higher IOPS, but only one queue.  Option 2 has lower IOPS but six queues.
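
To frame it roughly (the per-drive numbers here are my assumptions - 15K spindles are usually quoted around 150-180 IOPS each, and 20,000 IOPS is the assumed realistic ceiling for the SSD):

# Very rough comparison of the two options; all figures are assumptions, not measurements.
ssd_iops = 20_000                    # Option 1: single SATA III SSD (assumed realistic ceiling)
per_disk_iops = 180                  # typical quoted figure for a 15K spindle (assumed)
raid0_iops = 6 * per_disk_iops       # Option 2: six spindles striped, ~1,080 IOPS aggregate

print(f"Option 1 (1x SSD):        ~{ssd_iops:,} IOPS at the drive")
print(f"Option 2 (6x 15K RAID 0): ~{raid0_iops:,} IOPS at the drives")
# Both options sit behind the same ~1 GbE link (roughly 15K IOPS at 8 KB IOs), so
# neither headline number is reachable over the wire in this setup anyway.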

Thank you for your help!

Ian
