VMware Cloud Community
barondavid
Contributor

ESXi on an SD card and VMs on a RAID 6 NAS: performance?

Hi!

I am planning an ESXi environment with one HP ProLiant DL380 G6, which I already own.

The server has an internal SD card slot, and according to this article it supports running ESXi from an internal SD card.

Q1

For anyone who has tried this: what are the downsides, apart from the lack of redundancy on the SD card? I have had a hard time finding any feedback on this.

Q2

I plan to run the VMs on a NAS via iSCSI (an N8800 from Thecus; I have another thread about whether this should work or not here).

With this config I don't need any hard drives in the DL380.

However, I don't know if even a well-performing NAS is good enough for my VMs, or if I need to run the OSes on internal drives with higher performance.

My first plan was to run the OS partitions on a RAID array of internal 10k SAS drives and keep all the data partitions on the NAS.

But since I learned about running ESXi from SD or USB, I am thinking about the possibility of running the DL380 without any hard drives at all.

But, as stated above, will there be enough performance for this?

Bear in mind that my environment is and will remain VERY SMALL: a maximum of seven Windows Server 2008 VMs and about 150 users in total.
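For rough sizing I made a back-of-envelope sketch of the demand side; every per-VM and per-user figure below is my own assumption, not a measurement:

```python
# Back-of-envelope I/O demand estimate; all figures are assumptions.
VMS = 7                      # Windows Server 2008 guests
USERS = 150                  # total users across all VMs

IOPS_PER_IDLE_VM = 15        # assumed OS background I/O per guest
IOPS_PER_ACTIVE_USER = 0.5   # assumed average I/O per logged-on user

demand = VMS * IOPS_PER_IDLE_VM + USERS * IOPS_PER_ACTIVE_USER
print(f"Estimated steady-state demand: ~{demand:.0f} IOPS")  # ~180
```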

mikelane
Expert

I don't know how much this will help, but I can share my experience of setting up Openfiler with ESXi.

Both ESXi and Openfiler ran on 45 W AMD 2.3 GHz dual-core CPUs; the Openfiler box had 2 GB RAM with software RAID 5 (3x 500 GB).

ESXi was using the software iSCSI initiator, with gigabit Ethernet between the two boxes.

[Attached: HD Tach result from a single VM running on ESXi.]

You may well fare better with your setup. In the end I decided to go with local SATA storage on the ESXi host, as I only have one ESXi server, so centralized storage was not such a big deal for me.
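If you want a quick number without HD Tach, a crude timing loop like this gives a rough sequential-read figure from inside a guest. The file name is a placeholder; pre-create a file larger than the guest's RAM so the OS cache doesn't serve the reads:

```python
# Crude sequential-read timing to run inside a VM.
import time

TEST_FILE = "testfile.bin"   # placeholder: pre-create a file bigger than guest RAM
BLOCK = 1024 * 1024          # read in 1 MB chunks

start = time.time()
total = 0
with open(TEST_FILE, "rb") as f:
    while True:
        chunk = f.read(BLOCK)
        if not chunk:
            break
        total += len(chunk)
elapsed = time.time() - start
print(f"{total / 2**20:.0f} MB in {elapsed:.1f} s "
      f"({total / 2**20 / elapsed:.1f} MB/s sequential read)")
```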

RParker
Immortal

First, the performance order is SSD -> SAN -> local SAS/SCSI -> local SATA.

Of course that's also the descending price order. But you are better off using a SAN rather than local disk; a NAS will be fine too, but local disk has limitations on performance. We have internal disks, and I thought that since they were ultra-fast drives they would be better, and they were, up to a point. When you load up 10 or 15 VMs, spindle count becomes a HUGE issue. Once those spindles get busy you will see diminishing returns on performance, whereas a SAN/NAS solution should not have this problem, provided you have 20 spindles to throw at your RAID. RAID 6 is good; I don't really see much of a performance hit vs RAID 5, and the extra parity is nice. So that's a good idea.
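To put the spindle-count point into numbers, here is a minimal sketch using the common rule-of-thumb write penalties (RAID 10: 2, RAID 5: 4, RAID 6: 6); the 80 IOPS per 7.2k SATA spindle and the 70% read mix are assumptions:

```python
# Rule-of-thumb effective IOPS for one RAID set.
# Assumptions: 80 random IOPS per 7.2k SATA spindle, 70% reads.
def effective_iops(spindles, per_drive, write_penalty, read_fraction=0.7):
    raw = spindles * per_drive
    # Reads carry no penalty; each logical write costs write_penalty disk I/Os.
    return raw * read_fraction + raw * (1 - read_fraction) / write_penalty

for name, penalty in [("RAID 10", 2), ("RAID 5", 4), ("RAID 6", 6)]:
    print(f"8 spindles, {name}: ~{effective_iops(8, 80, penalty):.0f} IOPS")
```

With these assumptions RAID 6 lands only slightly below RAID 5, which matches the point above.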

I went to an EMC lab the other day; a single SSD outperforms MOST RAID arrays on the SAN. SSD performance is ridiculous, but they are cost-prohibitive.

I wouldn't use iSCSI, however; NFS is better, and they use the same network. iSCSI requires a LUN, while NFS can use any backing storage and you don't have to format it as VMFS. That is something to consider if you want to grow your storage over time, since you can't resize VMFS (although the next version of ESX will have this capability). But even if future versions can resize VMFS, it will still be new; NFS is more stable for the moment.

barondavid
Contributor

Thanks a lot for your answers, interesting info.

But I still don't have a clue whether there's a chance that my 7 VMs running Windows Server 2008 will be capped too much by running from the NAS. The NAS will run with eight 7.2k RPM SATA disks; they are high-performing SATA disks, but still not a SCSI RAID. I have the opportunity to run RAID 10 on it instead of RAID 6, and I will consider that and test how much better performance it gives me. The 1 TB SATA drives are 70% cheaper than the 300 GB 2.5" SAS 10k drives.

The NAS has an Intel Celeron M 2.0 GHz and 1 GB DDR2 RAM; it has very good reviews performance-wise (here's one). The disks I will load it up with are Seagate Barracuda ES.2, 1 TB, 3.5", SATA-300, 7,200 rpm, 32 MB buffer.

The internal drives I would load it up with (if I have to) are 10k RPM 300 GB SAS drives; I would keep the OS partitions and swap files on these drives and store the data on the NAS. Shouldn't that outperform my NAS, even though I hope the NAS is enough on its own?

RParker
Immortal
(Accepted solution)

But I still don't have a clue whether there's a chance that my 7 VMs running Windows Server 2008 will be capped too much by running from the NAS. The NAS will run with eight 7.2k RPM SATA disks; they are high-performing SATA disks, but still not a SCSI RAID. I have the opportunity to run RAID 10 on it instead of RAID 6, and I will consider that and test how much better performance it gives me. The 1 TB SATA drives are 70% cheaper than the 300 GB 2.5" SAS 10k drives.

Yes, there is a good reason SATA is cheaper than SAS. SATA drives don't have the same command queuing that SAS drives use, and the SATA backplane relies on the CPU, whereas SAS can offload this to the card. The drives themselves appear to be competitive, but they are a far cry from SAS/SCSI performance: you will get good performance up until you start doing simultaneous I/O and hitting them heavily, then you will start to see the performance degrade.

Still, if you do testing and you are happy with it, great. I really hope it works well. RAID 10 may be a good idea with SATA; that may help make up for the performance loss a little.

Spindle count is the easiest way to get the best performance: the more physical disks you throw at a RAID of any type, the better off you will be.

Seagate Barracuda ES.2, 1 TB, 3.5", SATA-300, 7,200 rpm, 32 MB buffer

vs. a 15k SAS drive (roughly 2x the IOPS).
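Putting rough per-spindle figures on that comparison (both numbers are common rules of thumb, not benchmarks of these specific drives):

```python
# Rough per-drive random IOPS rules of thumb (assumed, not measured).
SATA_7K2 = 80    # 7,200 rpm SATA, Barracuda ES.2 class
SAS_15K = 175    # 15,000 rpm SAS

for n in (1, 8):
    print(f"{n}x SATA 7.2k: ~{n * SATA_7K2} IOPS   "
          f"{n}x SAS 15k: ~{n * SAS_15K} IOPS")
print(f"Per-spindle ratio: ~{SAS_15K / SATA_7K2:.1f}x")
```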

barondavid
Contributor

OK, more good info on this; I will keep it in mind. The problem, as always, is the budget. But as the environment will grow slowly, I will try the NAS solution first and let the SAS drives wait; if I see a need for better performance, I always have the option to buy some internal SAS drives and move some or all of the OS partitions there. As said, the environment is small, so hopefully I can plan carefully when to execute disk-intensive tasks on the VMs, such as updates, restarts and backups. I will also try to dedicate a lot of RAM to minimize paging and swapping.

The way I'm thinking about this setup, there will be a maximum of 7-8 VMs before I start planning another server (and maybe another storage solution able to handle both hosts). 8 VMs will run on 8 well-performing SATA drives. The 8 drives will be in RAID 10, so if I keep to simple, theoretical reasoning, that should be somewhere near having 2 VMs on one RAID 0 array of 2 disks, which shouldn't be a problem in a very small environment with about 10-20 users per VM.
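Putting rough numbers on that comparison (reusing the assumed 80 IOPS per 7.2k SATA spindle and the 70% read mix from earlier in the thread):

```python
# Per-VM share of the proposed 8-drive RAID 10 array; figures assumed.
raw = 8 * 80                           # eight 7.2k SATA spindles
effective = raw * 0.7 + raw * 0.3 / 2  # 70% reads, RAID 10 write penalty of 2
print(f"~{effective / 8:.0f} effective IOPS per VM with 8 equally busy VMs")
```

Sixty-odd effective IOPS per VM may well be enough for lightly loaded servers with 10-20 users each, but only testing will tell.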

It's probably pretty silly to compare it this way, but of course I will need to test this hard before the decision is made.

I will start with the NAS and without internal storage and go from there.

What would be a good way to test this? Running disk performance utilities inside multiple VMs simultaneously?
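If I go the script route instead of a utility, something like this rough random-read loop, started at the same time in every VM, might do; the file name is a placeholder, and the file should be bigger than the guest's RAM so caching doesn't skew the numbers:

```python
# Rough random-read load generator to run simultaneously in each VM.
import os
import random
import time

TEST_FILE = "testfile.bin"   # placeholder: pre-create a file bigger than guest RAM
BLOCK = 4096                 # 4 KB random reads
DURATION = 60                # seconds per run

size = os.path.getsize(TEST_FILE)
ops = 0
deadline = time.time() + DURATION
with open(TEST_FILE, "rb") as f:
    while time.time() < deadline:
        f.seek(random.randrange(size - BLOCK))  # random offset in the file
        f.read(BLOCK)
        ops += 1
print(f"~{ops / DURATION:.0f} random-read IOPS over {DURATION} s")
```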
