VMware Cloud Community
TG-Mtown
Contributor

one VM per LUN

Hello all

I hope this question has not been answered before... but I am having a hard time finding data to either support or negate this setup.

We have created separate LUNs for each of our VM C: drives and also for any additional drives. We are following the best practice of allocating swap space equal to memory, plus approximately 20% extra for snapshots, logs, etc.
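
For illustration, that sizing rule works out to something like the sketch below (the example numbers are made up; only the swap-equals-memory and ~20% figures come from our setup, and whether the 20% applies to the vmdk alone or to vmdk plus swap is a detail you'd adjust):

def per_vm_lun_size_gb(vmdk_gb, memory_gb, overhead=0.20):
    """Estimate the LUN size for one VM: vmdk capacity plus a swap file
    equal to configured memory, plus ~20% extra for snapshots and logs."""
    return (vmdk_gb + memory_gb) * (1 + overhead)

# example numbers only: a 30 GB C: drive on a VM with 4 GB of RAM
print(per_vm_lun_size_gb(30, 4))  # -> roughly 40.8 GB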

Does anyone have any good information on why this is a bad idea or why it may degrade performance? I realize there is more management involved, but we feel it helps us conserve SAN space.

thanks for your input.

Ted

outbacker
Enthusiast

I've been doing it like that for years, as it helps replication to our DR site to have multiple small LUNs as opposed to one big one. The only trouble I've had is swap files getting too large because users always want more memory for their apps, and when VMFS was upgraded from ESX 2 to ESX 3 it needed an extra couple of gigs for overhead. I've made the LUNs a minimum of 5 GB larger than the vmdk and have had no issues. And if push comes to shove, you could move the swap file to local ESX storage, or make a memory reservation equal to the memory size to eliminate the swap file altogether.
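
As a back-of-the-envelope sketch of that rule of thumb (the function and the example figures are just for illustration):

def lun_size_gb(vmdk_gb, memory_gb, memory_reservation_gb=0, headroom_gb=5):
    """Per-VM LUN size: vmdk plus the swap file (memory minus reservation)
    plus at least 5 GB of headroom for overhead."""
    swap_gb = max(memory_gb - memory_reservation_gb, 0)  # full reservation means no swap file
    return vmdk_gb + swap_gb + headroom_gb

# example: a 30 GB vmdk on a VM with 4 GB of RAM
print(lun_size_gb(30, 4))                           # 39 GB with a full-size swap file
print(lun_size_gb(30, 4, memory_reservation_gb=4))  # 35 GB when memory is fully reserved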

depping
Leadership

I usually keep the C:\ and D:\ together, but that's mainly due to the VCB issues that used to exist.

Duncan


khughes
Virtuoso

Besides the extra work, which you are obviously willing to do, my only concern would be if you needed to expand the vmdk file for one of your servers: with only small LUNs it wouldn't fit, unless you made a larger new LUN and moved it there first. IMO it's just too much work for me to think about, and it adds complexity to the setup, but it's your setup, and if you have it documented and are comfortable with it, I don't see any real drawbacks.

- Kyle

outbacker
Enthusiast

It can be a pain; I have 152 datastores right now, but a good spreadsheet and descriptive labels make it work. I suppose Storage VMotion would come in handy if a vmdk needed to grow and you had to carve out a new, larger disk. Haven't gone down that road yet.

TG-Mtown
Contributor

Great info...

We only have about 18 VMs and will probably not go over 30-40 total, so I think we will have no trouble managing so few datastores.

thanks for the input!

T

outbacker
Enthusiast

I remember when we used to say that. 30-40. ha!

TG-Mtown
Contributor

Yes, you can set up one VM per LUN/VMFS as long as you are prepared to manage all the individual datastores. Also be aware of sizing the LUNs appropriately for swap space and logs/snapshots. I also found that if you use Vizioncore vRanger, it wants 6 GB or 10% free space for each VM, which means adding even more to your LUN.
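
Roughly, that extra free space works out like this (assuming "whichever is larger" is the right reading of the 6 GB / 10% figure; the datastore sizes are just examples):

def vranger_free_space_gb(datastore_gb):
    """Free space vRanger expects on a datastore: 6 GB or 10% of its size,
    taking the larger of the two (my reading of the requirement)."""
    return max(6.0, 0.10 * datastore_gb)

print(vranger_free_space_gb(40))   # 6.0 GB (10% would only be 4 GB)
print(vranger_free_space_gb(100))  # 10.0 GB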

alex0
Enthusiast

One VM per LUN may work for a small shop but it doesn't scale to an enterprise level.

On a large ESX farm you will quickly hit the limit of 256 LUNs.

Personally I think it causes unnecessary overhead in managing extra LUNs.

Also, if your LUNs all share the same underlying disk array, you're still hitting the same disks, so splitting into smaller LUNs (one per VM) isn't going to give you much of a performance benefit.

I don't see that one VM per LUN is going to degrade performance; however, I doubt it improves it (unless you have one LUN per disk array), and it causes unnecessary complexity and overhead.

Ken_Cline
Champion

I'm in agreement with Alex. One of the big reasons for "going virtual" is to simplify your environment. Carving out one LUN per logical drive seems to me to be an inordinate amount of management overhead -- and for no good reason. As was said above, it may be workable in a small environment, but each ESX host is limited to seeing 256 LUNs. Since all hosts in a cluster should see the same storage configuration, that limits your cluster to 256 LUNs. Do the math and you run out of LUNs very quickly.
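
Just to put rough numbers on that (purely illustrative, assuming two LUNs per VM for a C: and a D: drive):

MAX_LUNS = 256  # limit on LUNs an ESX host (and so a cluster sharing storage) can see

def luns_needed(vm_count, luns_per_vm=2):
    """LUNs consumed when every logical drive (e.g. C: and D:) gets its own LUN."""
    return vm_count * luns_per_vm

for vms in (40, 100, 128, 200):
    used = luns_needed(vms)
    print(vms, used, "fits" if used <= MAX_LUNS else "exceeds the 256-LUN limit")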

Ken Cline

Technical Director, Virtualization

Wells Landers

TVAR Solutions, A Wells Landers Group Company

VMware Communities User Moderator
