VMware Cloud Community
xtiva2000
Contributor

Yet another LUN configuration question..

Hi there,

I am a newbie to the VMware and SAN world. As we are implementing VMware ESX 3, I wanted to get some more ideas on the configuration for our small implementation.

We will have 2 identical Dell servers, each with 2 HBAs, connected through 2 Brocade switches to an EMC Clariion CX300 SAN with 5 x 76GB disks for FLARE and 10 x 146GB disks.

We will only have 5 VMs, as below.

Server | C: size | D: size

W2K3 DC | 20 GB | 70 GB

W2K File/Print, DHCP Server | 20 GB | 300 GB

Domino Server | 20 GB | 350 GB

MS SMS Server | 20 GB | 130 GB

Sage Platinum with Pervasive SQL | 20 GB | 50 GB

I was thinking of having:

1 LUN of 100 GB for the OS (C:) partitions

1 LUN of 350 GB for the Domino data partition

1 LUN of 550 GB for the rest of the data/application partitions

Also, as for the RAID groups, I believe VMware recommends 1 LUN per RAID group, but as we only have 10 x 146GB disks I am not sure how I can create 3 RAID groups to meet our needs.

146GB x 2 in RAID 1 for the OS partition

146GB x 4 in RAID 5 for Domino

That does not leave me enough disks for the 550 GB required for the other data.

Another solution would be to use the disk space left over on the FLARE drives for the OS partition and use 146GB x 6 in RAID 5 for the rest of the data partitions.
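To sanity-check my own numbers, here is the rough usable-capacity arithmetic I am working from (a quick Python sketch using raw disk sizes; real bound capacity on the CX300 will be somewhat lower):

```python
# Rough usable capacity for the RAID layouts I am considering
# (raw disk sizes only; formatted capacity will be lower).

def raid1_usable(disks, size_gb):
    # RAID 1: mirrored pairs, so half the raw capacity is usable
    return disks * size_gb // 2

def raid5_usable(disks, size_gb):
    # RAID 5: roughly one disk's worth of capacity goes to parity
    return (disks - 1) * size_gb

# Option 1: 2 + 4 split leaves only 4 disks for the big data LUN
print("2 x 146GB RAID 1:", raid1_usable(2, 146), "GB  (OS LUN needs 100 GB)")
print("4 x 146GB RAID 5:", raid5_usable(4, 146), "GB  (Domino LUN needs 350 GB)")
print("4 x 146GB RAID 5:", raid5_usable(4, 146), "GB  (data LUN needs 550 GB - too small)")

# Option 2: OS partition on the FLARE disks instead
print("4 x 146GB RAID 5:", raid5_usable(4, 146), "GB  (Domino LUN needs 350 GB)")
print("6 x 146GB RAID 5:", raid5_usable(6, 146), "GB  (other data LUN needs 550 GB)")
```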

Hope to get some expert opinion on this.

Thanks in advance.

8 Replies
daniel_uk
Hot Shot

Hi,

Ideally you want to have all your LUNs as large chunks of, say, 500-600GB in a RAID 5, and then put your smaller (less than 250GB) VMDK files on these. Setting up your LUNs in ESX the way you described will become a management nightmare.

For the larger disks like your Domino data disk (over 250GB), I would use raw disks instead. With the disk space you have, it is best to plan for the smallest number of large VMFS volumes possible and factor in how much raw disk space you will need.

This will provide information on RAW LUNs, as it was a confusing concept to me at first!
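As a rough illustration of that rule of thumb (a sketch only; the 250GB cut-off is just the figure I use, and the disk sizes are the ones from your table):

```python
# Sketch of the rule of thumb above: smaller virtual disks live as VMDKs on a
# shared VMFS volume, anything over ~250GB gets a raw LUN (RDM) instead.

RDM_THRESHOLD_GB = 250

vm_data_disks = {
    "W2K3 DC D:": 70,
    "File/Print/DHCP D:": 300,
    "Domino D:": 350,
    "SMS D:": 130,
    "Sage/Pervasive D:": 50,
}

for name, size_gb in vm_data_disks.items():
    placement = "RDM (raw LUN)" if size_gb > RDM_THRESHOLD_GB else "VMDK on shared VMFS"
    print(f"{name:<22} {size_gb:>4} GB -> {placement}")
```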

Thanks

Dan

RParker
Immortal

You are new, and that's why making smaller LUNs and having multiple LUNs makes sense to you. But as you grow you will quickly discover that it's better to keep it simple.

Don't use different-sized LUNs; that will become a nightmare later. Keep the LUNs relatively small, but not too small... 350 GB is quite low.

1 LUN of 100 GB for the OS (C:) partitions

1 LUN of 350 GB for the Domino data partition

1 LUN of 550 GB for the rest of the data/application partitions

It should be more like:

3 LUNs of 500 GB each. Then add more 500 GB LUNs later. You want to keep this as balanced as possible. Give your volumes as many spindles as possible, and divide each volume into LUNs of equal size. Trust me: you can't resize a VMFS volume, and you are heading for extents, I can tell.
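To make that concrete, here is a trivial sketch of the idea (the 1500 GB pool size is just a hypothetical figure for illustration):

```python
# Carve whatever usable space a RAID group gives you into identically sized
# VMFS LUNs, and add more LUNs of the same size later rather than resizing
# (growing a VMFS volume in ESX 3 means extents, which you want to avoid).

LUN_SIZE_GB = 500

def equal_luns(usable_gb, lun_size_gb=LUN_SIZE_GB):
    count = usable_gb // lun_size_gb
    leftover = usable_gb - count * lun_size_gb
    return count, leftover

# Hypothetical example: ~1500 GB of usable space in the array
count, leftover = equal_luns(1500)
print(f"{count} x {LUN_SIZE_GB} GB LUNs, {leftover} GB left over")
```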

You are not going to equal the performance of a physical host, so don't even try. Use best practices for VMs (1 vCPU, minimal memory). VMs are not disk I/O havens, and you aren't going to notice performance increases by dividing your LUNs this way.

You can still keep separate drives inside the guests, but put them all on the same LUN.

Bigger environments separate some of their drives, but believe me, that's not the norm.

bfent
Enthusiast

Having multiple partitions on a VM is not beneficial, with the exception of specialized servers (email, database, etc). I would not recommend a D: drive on your DC server unless you are doing more with your DC than AD (which is not best practice). I am not familiar with Domino or Sage but, taking into account it is email and SQL, I would suggest 3 RAID groups and 6 LUNs (I've configured multiple LUNs on RAID groups before and have never seen any problems with VMware). The RAID groups would consist of:

RAID5 using the first 5 drives (FLARE should only be using a small portion) - create a LUN with the remaining space to use for the VMs' OS (C:) drives.

RAID5 using the next 6 drives - create 2 LUNs, one for the file/print/DHCP server and one for the Domino server

RAID10 using the remaining 4 drives - create 2 LUNs, one for the SMS Server and one for the Pervasive SQL database (it is recommended to use RAID10 for SQL)
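Rough usable capacity for that layout (a quick sketch using raw disk sizes; the real bound capacity after FLARE and overhead will be lower):

```python
# Back-of-the-envelope capacity per RAID group in the layout above.

def raid5_usable(disks, size_gb):
    return (disks - 1) * size_gb      # roughly one disk of parity

def raid10_usable(disks, size_gb):
    return disks * size_gb // 2       # mirrored pairs

print("RG1 RAID5  5 x 76GB :", raid5_usable(5, 76), "GB (minus FLARE) -> one LUN for the C: drives")
print("RG2 RAID5  6 x 146GB:", raid5_usable(6, 146), "GB -> file/print LUN + Domino LUN")
print("RG3 RAID10 4 x 146GB:", raid10_usable(4, 146), "GB -> SMS LUN + Pervasive SQL LUN")
```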

Ideally, a second enclosure would allow a better configuration for future growth.

Texiwill
Leadership

Hello,

Based on performance, you may want to keep some of your C:/D: drives on the same VMFS, and split out the D: partition only for those VMs that really need the performance. C: is not hit all that much in some cases, while D: may be.

This all varies based on applications, and the number of VMs. Once things are installed you will be performing some balancing to get the best performance. That could mean moving VMDKs from LUN to LUN, etc.

The only 'rules' that apply are:

1) no more than 12-15 VMs per LUN

2) Set a minimum size for an RDM (I use 100GB; anything over that gets an RDM. Some people use 250GB, others use 1TB)

3) You will have to balance things across LUNs/controllers/etc.
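If it helps, here is a trivial sketch of how I apply rules 1 and 2 (the thresholds are just the ones quoted above; adjust to taste):

```python
# Tiny checker for the rules of thumb above.

MAX_VMS_PER_LUN = 15     # rule 1: no more than 12-15 VMs per VMFS LUN
RDM_MIN_GB = 100         # rule 2: anything over this size gets an RDM

def placement(vdisk_gb):
    """Decide VMFS vs RDM for a single virtual disk (rule 2)."""
    return "RDM" if vdisk_gb > RDM_MIN_GB else "VMFS"

def lun_vm_count_ok(vm_count):
    """Rule 1: keep the number of VMs per VMFS LUN in check."""
    return vm_count <= MAX_VMS_PER_LUN

print(placement(70), placement(350))            # VMFS RDM
print(lun_vm_count_ok(5), lun_vm_count_ok(20))  # True False
```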


Best regards,

Edward L. Haletky

VMware Communities User Moderator

====

Author of the book 'VMWare ESX Server in the Enterprise: Planning and Securing Virtualization Servers', Copyright 2008 Pearson Education. As well as the Virtualization Wiki at http://www.astroarch.com/wiki/index.php/Virtualization

daniel_uk
Hot Shot

I always split them to ensure I get redundancy in the event of a VMFS going bang! This is only for critical VMs like a DC or DB server.

All of this is less of an issue now with the addition of Storage VMotion in 3.5, so you can evaluate the performance side with peace of mind, knowing you can shift VMs wherever you want.

Thanks

Dan

RickPollock
Enthusiast

VMware support recommended 350GB LUNs as the average size. That's what we went with and everything seems to run great!

Milton21
Hot Shot

I try to group LUNs by performance.

LUN 1: RAID 10 of 12 disks

Runs a DB, web, and Exchange server

LUN 2: RAID 5 of 5 disks

Runs a file server and a web server

I try to group and move disk I/O around.
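Roughly how I size that (a sketch with generic rule-of-thumb numbers, not CX300 specs; the per-spindle IOPS figure and the 70/30 read/write mix are assumptions):

```python
# Estimate what each RAID group can sustain, using generic per-spindle IOPS
# and the usual RAID write penalties.

DISK_IOPS = 150                           # rough figure for one 15k FC spindle
WRITE_PENALTY = {"raid10": 2, "raid5": 4}

def group_iops(disks, raid, read_pct=0.7):
    raw = disks * DISK_IOPS
    write_pct = 1.0 - read_pct
    # effective host-visible IOPS once the write penalty is applied
    return raw / (read_pct + write_pct * WRITE_PENALTY[raid])

print("LUN 1: 12-disk RAID 10 ~", int(group_iops(12, "raid10")), "IOPS")
print("LUN 2:  5-disk RAID 5  ~", int(group_iops(5, "raid5")), "IOPS")
```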

Milton21
Hot Shot

RAID 10 across all 10 disks, 1 LUN.
