Contributor

Adding more disks to vSAN

We have 8 hosts running vSAN 6.2 with 2 disk groups per host. Each disk group has 1x 400GB SATA SSD (Dell S3610) and 6x 600GB 10K RPM SAS HDDs. Each server still has 12 empty drive slots available. To expand the storage on each server, I am thinking about getting:

  • 2x Intel DC P3700 400GB PCIe (NVMe) SSDs + 12x 600GB 10K RPM SAS HDDs

I read somewhere that it's not recommended to mix different models of drives in a server. Since the P3700 PCIe has better performance than the Dell S3610, will this cause any issues?

Or

  • 2x 400GB SATA SSDs (Dell S3610) + 10x 600GB 10K RPM SAS HDDs

Each server would then have 2 disk groups of (1 SSD + 6 HDDs) and 2 disk groups of (1 SSD + 5 HDDs). Is it OK to have different numbers of disks in different disk groups on the same server?

Any comments and suggestions are appreciated.

4 Replies
VMware Employee

Ideally you want to stay with similar hardware. Mixing drives may cause the faster drives to be throttled down to the speed of the slower ones. It is also important to check the controller against the disks: if the total queue depth of your disks exceeds the queue depth of the controller, you will run into performance problems. You could also add another controller to split the load and increase the number of failure domains, though that is of course more costly. Also, try to stay away from SAS expanders if possible.

Hope this helps

Contributor

Hi GreatWhiteTec,

Thank you very much for the info. It sounds like option 2 would be the more reliable solution. The servers are Dell R730xd, each with 26 drive slots, and the controller is a Dell PERC H730 Mini. According to the vSAN Compatibility Guide, the controller has a queue depth of 895. Could you please let me know how I can find out the total queue depth required by the disks?

Contributor

If you stick with the same type of drives, which I would also suggest, you can see their queue depth by using "esxtop" on one of your current hosts.

Open esxtop and press "u" to switch to the disk device view. Look at the DQLEN column to see the queue depth of each device. You may have to press "f" for field selection and make sure "F: QSTATS" is selected.

You can also verify your controller's queue depth in esxtop by pressing "d" for the adapter view, then "f" for fields and "d" to select QSTATS. The value in AQLEN is your controller's queue depth.
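
If you'd rather script it than read esxtop interactively, the per-device value also appears as a "Device Max Queue Depth" line in the output of `esxcli storage core device list`, which you can sum with awk. A minimal sketch (the sample output below is hypothetical stand-in text, since this isn't running on a live host):

```shell
# On a live ESXi host you would pipe the real command instead:
#   esxcli storage core device list \
#     | awk -F': *' '/Device Max Queue Depth/ {sum += $2} END {print sum}'
# Hypothetical sample output stands in for the live command here.
sample_output='   Device Max Queue Depth: 64
   Device Max Queue Depth: 64
   Device Max Queue Depth: 64'

# Sum the per-device queue depths across all listed devices.
total=$(printf '%s\n' "$sample_output" \
  | awk -F': *' '/Device Max Queue Depth/ {sum += $2} END {print sum}')

echo "Total device queue depth: $total"
```

Compare that total against the AQLEN value for your controller.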

Additionally, since you're running a controller that's based on the LSI 3108 chipset, are you aware of this KB: https://kb.vmware.com/kb/2144936 ?

Contributor

esxtop does show the controller has a queue depth of 895, and each drive has a queue depth of 64. With 14 drives, that's 896, which just exceeds the controller. Looks like we'll have to get a second controller if we're going to add more drives.
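
As a quick sanity check on that arithmetic (using the figures from this thread: 895 for the PERC H730 Mini per the vSAN Compatibility Guide, DQLEN 64 per drive):

```shell
# Queue-depth math from the numbers in this thread.
controller_qd=895   # Dell PERC H730 Mini, per the vSAN Compatibility Guide
drive_qd=64         # DQLEN reported by esxtop for each disk
drives=14           # 2 disk groups x (1 SSD + 6 HDDs)

total=$((drives * drive_qd))
echo "14 drives -> total queue depth $total"   # 896, already over 895

# Largest drive count a single controller covers without oversubscribing:
max_drives=$((controller_qd / drive_qd))
echo "Max drives within controller queue depth: $max_drives"   # 13
```

So even the current 14-drive layout slightly oversubscribes the controller, and any added drives only widen the gap.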

We're running the latest ESXi 6.0u2 patch, so we should be good.

Thank you, jb_bhs!
