VMware Cloud Community
damccumb
Contributor

Some design Ideas

I am about to do a big hardware upgrade on my ESX server and SAN. Currently I have about 20 VMs running on two datastores of about 1.5 TB each. We are upgrading to a new SAN with over 15 TB. Though I love all the new space, I am not sure how to design the LUNs. We plan to move a lot more VMs onto our systems. Plus, the new SANs are iSCSI. Any thoughts on LUN design? Lots of small LUNs? Fewer big LUNs? How to separate the VMs? By type? By size? How do you all out there in the world do it? Thanks

-Dave

8 Replies
SparkFan
VMware Employee

How best to separate your VMs really depends on your VM function layout as well as your network topology.

Some points from me to share:

1. A LUN should not be larger than 2 TB, since ESX doesn't support LUNs above that size, and a single ESX host can address no more than 256 LUNs.

2. Active VMs consume overhead memory and CPU resources on the ESX host. Keep an eye on overall ESX performance and the percentage of resources utilized; heavy overload will cause problems for ESX.

3. Prepare LUNs for particular purposes: a backup LUN, a finance LUN, an IT training LUN, and so on. Also reserve additional room in each LUN for the different VMs. For one-off and short-term VMs you can reserve less space, say filling half of the LUN; for long-running VMs, filling no more than 3/4 of the LUN leaves room for future requirements.
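The limits in point 1 can be sanity-checked with a quick sketch (Python; the 2 TB and 256-LUN figures are the ones quoted above, so confirm them against the configuration maximums for your ESX version):

```python
# Sanity-check a proposed LUN layout against the ESX limits quoted above.
# The figures (2 TB max LUN size, 256 LUNs per host) come from this thread;
# verify them against the docs for your ESX release before relying on them.
MAX_LUN_TB = 2.0
MAX_LUNS_PER_HOST = 256

def check_layout(lun_sizes_tb):
    """Return a list of problems with a proposed LUN layout; empty means OK."""
    problems = []
    if len(lun_sizes_tb) > MAX_LUNS_PER_HOST:
        problems.append(
            f"{len(lun_sizes_tb)} LUNs exceeds the {MAX_LUNS_PER_HOST}-LUN host limit")
    for i, size in enumerate(lun_sizes_tb):
        if size > MAX_LUN_TB:
            problems.append(
                f"LUN {i} is {size} TB, over the {MAX_LUN_TB} TB limit")
    return problems

# The OP's current layout (two 1.5 TB LUNs) passes cleanly.
print(check_layout([1.5, 1.5]))  # []
```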

-Spark

JohnADCO
Expert

Some of the design considerations will be SAN model dependent, especially on dual-controller SANs.

Some SANs do controller ownership by disk group, others by virtual disk, and others strictly by defined LUN targets.

I almost always give each VM its own LUN, though; this is a design choice I have made through experience. iSCSI reservation issues can add up quickly when multiple VMs are using the same LUN on many iSCSI SANs.

robert_jensen
Hot Shot

We use 500 GB LUNs.

The reason for that is that it's easy to manage.

The guides that I have read all talk about 400-600 GB LUNs, but it all depends on how easy it is for you to manage.

I think it's important to be able to put different disk files on different LUNs, so that you can spread your disk load across LUNs.

I once tried to have a LUN for each VM, but I found it too hard to manage.

/Robert

kcollo
Contributor

It really does depend on the storage system and implementation. I prefer small LUNs for production/critical VMs; here we stay at ~300 GB per LUN. You can get away with larger datastores on faster drives (15k RPM, solid state) without taking a huge performance hit. That is in the SAN environment, of course. How are/will you be connecting to the storage? We maintain ~20 TB of storage for the ESX environment and have found that smaller datastores spread across more spindles work best for us.

Kevin Goodman

http://blog.colovirt.com

glynnd1
Expert

I almost always give each VM its own LUN, though; this is a design choice I have made through experience. iSCSI reservation issues can add up quickly when multiple VMs are using the same LUN on many iSCSI SANs.

John, surely you jest? Given that ESX can only see a limited number of LUNs or iSCSI targets, this would greatly limit the number of VMs one could have in a cluster. Yes, SCSI reservations can be a problem, but your solution is a bit drastic.

polysulfide
Expert

The more LUNs you have your VMs spread across, the better the performance will be, as the data will be spread across more physical spindles and caches. This of course assumes that you will push your storage near its limits. Every configuration has a different magic number of VMDK files per LUN that works well. Find the magic number; 10 is a good number on modern storage if you don't want to do your own research. Size your LUNs to store that many active VMDK files. Size will vary based on the services hosted on that LUN.
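As a back-of-the-envelope illustration of the sizing rule above (Python; the 25% headroom figure is my own assumption, not from the post):

```python
def lun_size_gb(vmdk_sizes_gb, headroom=0.25):
    """Size a LUN to hold the given active VMDKs plus free-space
    headroom for snapshots, VM swap, and growth. The 25% default
    headroom is an assumed figure -- tune it to your environment."""
    return sum(vmdk_sizes_gb) * (1 + headroom)

# Ten 40 GB VMDKs per LUN with 25% headroom -> a 500 GB LUN,
# which lands in the 400-600 GB range mentioned elsewhere in this thread.
print(lun_size_gb([40] * 10))  # 500.0
```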

Sometimes an application may justify its own LUN, or even multiple LUNs; a high-performance DB, for example. You might put the OS disk on a shared LUN, put the logs on their own LUN, and the database itself on yet another private LUN.

If you put too many VMs on too few LUNs you'll see performance degrade long before you'll see storage capacity go down.

If it was useful, give me credit

Jason White - VCP

JohnADCO
Expert

Obviously not the answer if you're running a lot of VMs. But for the 24 or so we run per host it's fine, and definitely a good way to go. The OP did say 20 VMs, you know. Not possible at the data-center level, in other words. :)

PS: He did ask for design ideas, you know. This is one I highly suggest for smaller shops with extra hard-hitting VMs.

damccumb
Contributor

WOW guys, thanks for all the replies. Sorry for being MIA; I have not had a chance to sit on the forums and read everyone's responses.

To clarify a bit: our current setup is a fibre-connected EMC SAN, about 3 TB in size. This SAN is about 5 years old and is on its last leg. I have split this up into two LUNs, Storage A (1.5 TB) and Storage B (1.5 TB). So far we have 17-20 VMs split between the two LUNs, with about 300-400 GB remaining on both. This has worked well so far, but I think that as I grow I may want to look into a better-designed storage platform.

We are moving to a new iSCSI EqualLogic solution with two SANs, giving me a total of about 20 TB of space between the two.

I have thought more and more about making LUNs based on function. We have a lot of our web servers on VMware right now, so that would be an easy LUN, but past that I have a lot of one-off function stuff that would end up as one VM per LUN, which I don't want going forward. Also, why not make your LUNs 2 TB? Reading some of the posts, I see that people are making LUNs of 500 GB or so; I assume you have these LUNs on certain RAID groups or spindles suited to the function of the VMs. I have more space right now than I know what to do with. At the rate we have grown in the past year and a half with VMware, I don't expect to use 10 TB in the next 2 or 3 years. So I know I have a good problem here, with more than enough space. I think I may just mirror what I have in the current SAN and play with it from there. I could always move stuff around later. Hmm, I don't know.

My biggest question/fear is iSCSI itself. I would love to hear any gotchas or tips about planning/designing an iSCSI environment. To be honest, with Storage VMotion and some of the cool features of the EqualLogic boxes, I think I can move VMs around even if I don't like my design. Or at least until I get too many VMs to manage, but hopefully I will have my storage figured out by then.

I am still playing with iSCSI this week, but I hope to start moving some production VMs over. So thanks again for the responses, and any other ideas or suggestions would be greatly appreciated. Thanks

-Dave
