VMware Cloud Community
Grevane
Enthusiast

ESXi Beginner - Advice and a few pointers...

Hi, I'm after a little advice from some of you "experts".

I'm getting a little bogged down in the wealth of information about VMware, and it's tricky to pick out the relevant parts, so I thought I'd ask here, since this forum seems to have a lot of very helpful people on it.

I'll start by giving you an overview of my hardware config.

ML115 G1

8GB RAM

Adaptec 3405 controller (on the HCL)

Dual-port Intel Pro/1000 MT Server Adapter

4x 1TB SATA drives

The reason I got the 3405 was to get around the "issues" with the onboard MP55 SATA controller. That controller also has some major performance problems, so I decided to just "bin" it rather than try all those workarounds involving editing config files, etc.

This isn't a "production" server. This is merely an installation to give me (a) a non-production VM environment that I can experiment with and learn from, and (b) a way to make the most of the hardware resources I have at home, i.e. not have to buy any more physical hardware.

Ok, onto my first question.

How would "you" setup the drive/storage in this system?

So far, I can see a few alternatives.

Firstly, I tried the method of extracting "VMware-VMvisor-big-3.5.0_Update_3-123629.i386.dd", restoring it to a USB drive, and using that as the boot device.

This works fine and gets the server up and running, but I've run into problems with the 2TB LUN size limit. I've read a few threads about this, and it appears VMware suffers from the same boot device size limitations as Windows. Although VMware sees the full 2.7TB array (if I use all 4 disks in RAID 5), I can only create a datastore in the first 744.9GB and the rest of the space is unavailable (I gather this is because VMware effectively subtracts 2TB from what it can see and only makes the remainder available).
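For what it's worth, here's a rough sketch of the arithmetic as I understand it (my own reading of those threads, not anything official - the exact figures depend on drive and controller overhead):

```python
# Back-of-the-envelope check of the 2TB wrap-around I'm seeing (my assumption,
# not an official formula). Figures are approximate, in binary GiB.
GIB_PER_1TB_DRIVE = 931.5                 # what a "1TB" SATA drive shows up as

drives = 4
raid5_usable = (drives - 1) * GIB_PER_1TB_DRIVE   # RAID 5 parity costs one drive
print(raid5_usable)                       # ~2794.5 GiB -- the 2.7TB the array reports

LIMIT = 2048.0                            # the 2TB-per-LUN limit, in GiB
visible = raid5_usable - LIMIT if raid5_usable > LIMIT else raid5_usable
print(visible)                            # ~746.5 GiB -- about where my 744.9GB datastore tops out

# Splitting the space into LUNs that each stay under 2048 GiB (e.g. ~750GB plus
# ~1.98TB) should sidestep the wrap-around entirely.
```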

I also imagine that another way around this is to split the space into two arrays (for demonstration purposes, let's say 750GB and 1.98TB). Although I haven't tested this yet, I would expect it to get around the aforementioned size problem.

And then it occurred to me that I could just create a third array of 1-2GB to "install" ESXi onto, and thus avoid using the USB stick at all (again, not tested yet).

And that was when it occurred to me that it might be prudent to ask people who really know VMware how they would configure the disks in this system. I've also always found I learn better by reading or listening to how experts in a particular field approach a problem, so I can see and understand the logic that led them to their conclusion.

So how would you configure these drives?

With or without USB boot device?

1, 2 or 3 arrays?

And thanks in advance for any help/pointers.

Kind Regards,

Jamie

Accepted Solution (see RParker's reply below)
4 Replies
imclaren
Contributor

Hi,

RAID 5 is generally considered a bad idea due to performance issues.

I'm not familiar with the Adaptec card, but if it has two distinct channels, I'd be tempted to set up 2x RAID 1 arrays. The alternative is one RAID 10 (or 0+1) array using the 4 disks. Either way, you're going to get a net 2TB. I'm assuming you want fault tolerance in the array, of course.
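Rough capacity sums for the options, just to illustrate (assuming 4x 1TB drives and ignoring any controller overhead):

```python
# Illustrative usable-capacity sums for 4x 1TB drives (ignores controller and
# formatting overhead).
drive_tb = 1.0
drives = 4

two_raid1 = 2 * drive_tb              # two mirrored pairs           -> 2TB net
raid10    = (drives // 2) * drive_tb  # striped mirrors              -> 2TB net
raid5     = (drives - 1) * drive_tb   # one drive's worth of parity  -> 3TB net

print(two_raid1, raid10, raid5)       # 2.0 2.0 3.0
```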

I'd prefer to boot from a hard disk rather than a flash drive. You're not going to lose much capacity by doing so.

Cheers,

Iain

Grevane
Enthusiast

Hi Iain,

Thanks very much for the reply mate.

To be honest, I previously ran this server with Win2K8 in RAID 0+1, due to (a) the performance issues with RAID 5 on the onboard MP55 controller and (b) the boot drive space limitations in Windows. I got this controller partly because it's one of the cheaper ones on the HCL (and much better than a second-hand HP E200 controller, for example) and partly because the RAID performance of that MP55 chipset is really crap with Win2K8.

I also did a quick test install while reading these forums. Everything is still RAID 5, but I set up 3 arrays:

(1) 1GB (ESXi)

(2) 745GB (Datastore1)

(3) 2TB (Datastore2)

VMware installed fine and all the space was pre-assigned to datastores, unlike with the "USB workaround" method, i.e. arrays 2 and 3 above are set up as expected.
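Quick sanity check on my numbers (my own back-of-the-envelope figures, same rough assumptions as before):

```python
# Check that the three arrays fit in the RAID 5 space and that no single LUN
# exceeds the 2TB limit (my own rough figures, in GiB).
arrays_gib = [1.0, 745.0, 2048.0]       # (1) ESXi, (2) Datastore1, (3) Datastore2

raid5_usable = 3 * 931.5                # ~2794.5 GiB from 4x 1TB in RAID 5
assert sum(arrays_gib) <= raid5_usable  # the split fits inside the array
assert max(arrays_gib) <= 2048.0        # nothing trips the per-LUN limit
print(raid5_usable - sum(arrays_gib))   # ~0.5 GiB to spare
```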

Is RAID 5 vs RAID 0+1 a huge performance hit for VMs? This isn't a production box and is only ever going to be running a few lightly used servers, and although I'd like to add more disks, the ML115 chassis doesn't really allow for much more than 4.

Am I really going to see a "real life" noticeable difference by using RAID 5, or would you expect it only to be noticeable on a production box under load (or by testing with HDTach or similar)?

Edit: Also, I wasn't sure whether I should "team" all the NICs in this server via the "customise" (F2) local option, or whether I should set up only the management NIC via F2 and then assign the remaining 2 NICs via VIC...

Regards,

Jamie

Message was edited by: Grevane (Sorry, thought of one other question I was unsure about)

RParker
Immortal

"RAID 5 is generally considered a bad idea due to performance issues."

I would disagree, and so would NetApp. They actually use RAID 4 on their SANs (the parity lives on one or two dedicated disks per array instead of being striped across every drive, which avoids hitting all the drives).

We ran benchmarks, and the biggest differences come from the RAID card's cache and the number of drives. 4 drives is cutting it close; 6 would be better and 8 better still, but performance isn't bad for RAID 5.

Maybe RAID 10 is better, but when you lose half your disks to mirroring, how good a deal can it be? It's not worth the cost in drive space.
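To put some rough numbers on the space trade-off (illustrative sums only, assuming 1TB drives and ignoring overhead):

```python
# Rough usable-capacity comparison per drive count, assuming 1TB drives
# (illustrative only; ignores controller and filesystem overhead).
def raid5_usable(drives):
    return drives - 1            # lose one drive's worth to parity

def raid10_usable(drives):
    return drives // 2           # lose half the drives to mirroring

for n in (4, 6, 8):
    print(f"{n} drives: RAID 5 = {raid5_usable(n)}TB, RAID 10 = {raid10_usable(n)}TB")
# 4 drives: RAID 5 = 3TB, RAID 10 = 2TB
# 6 drives: RAID 5 = 5TB, RAID 10 = 3TB
# 8 drives: RAID 5 = 7TB, RAID 10 = 4TB
```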







Grevane
Enthusiast

Hi mate, thanks for the reply/input....

Since this is a "home" installation, I wasn't too worried about the eventual performance hit with RAID 5 (although I was interested in that info as a topic of conversation). The interesting thing I've found (and one of the reasons I chose the Adaptec 3405) is that my read/write speeds don't appear much different from an average single desktop SATA drive (or so it seems at the moment through VIC; I haven't tested with HDTach or similar yet), but in principle I'm getting better read/write across 3 VMs using this 3405 than I was getting from the onboard MP55 controller "just" running Windows 2008.

My impression, with regard to all this read/write speed concern, is that there's a much bigger benefit (or hit) from choosing the right (or wrong) controller than from changing how the arrays/disks are set up, if that makes sense... i.e. people can spend ages testing RAID 0+1, 5, 10, etc. to find the optimum layout, but an extra £50 on a more powerful RAID controller would ultimately make a MUCH bigger difference... Does that sound reasonable?

With respect to what you were saying about RAID 10, my main limitation is the ML115 chassis... with 4x SATA disks, a dual-port Intel NIC and the 3405, things are getting a little cramped in there. If I wanted to put any more drives in (and I can, using SAS expanders with that controller - up to 128 devices, I believe), I would need to do the classic "hang them by their power & SATA cables" trick... lol, but I'm not sure there's even hanging space left in there ;-)

But yeah, I'd certainly agree so far that the RAID 5 performance I'm seeing isn't that bad... certainly not as bad as I've read from people using the HP E200 controller (20-30MB/s), as I'm easily getting double that at the moment (and that was while one of the arrays was still building).

Thank you again for your input; it really is appreciated, since there is SO much to take in... I kinda got sick of reading PDFs... it's much "nicer" to learn from humans occasionally than from loads of large, tedious PDFs :-)
