VMware Cloud Community
ablej
Hot Shot

VMware Datastore 1:1 Volume/LUN Relationship

We are currently implementing VMware on NetApp FC. We've been told that it is a best practice to use datastores with a 1:1 volume/LUN relationship. I have never heard this before, but would like to hear others' thoughts on it.

David Strebel www.david-strebel.com If you find this information useful, please award points for "correct" or "helpful"
6 Replies
julianwood
Enthusiast

Although this is technically feasible, you will drive yourself mad managing so many LUNs.

We use iSCSI in some offices and NFS in others. For the iSCSI offices we standardised on 400GB LUNs. This was an educated-guess figure: we decided that 400GB would be big enough to hold enough VMs without having to provision LUNs too often, yet small enough that we could restore one from tape or clone a LUN (without FlexClone) without it being too big.

We group our VMs into multiple LUNs in separate volumes based on function and, importantly, on replication schedules for DR.

Name your volumes, qtrees and LUNs intelligently so you know exactly what is going on.

We always create a qtree in any volume as it gives you more flexibility during migrations.

It's easier to explain by example. We have a dedicated volume for BCP servers in our London site that needs to be SnapMirrored to a remote site at midnight, so we call the volume v_lonvm_srvbcp1_00h00. We keep nightly online snapshots for 3 days.

In that volume we create non-space-guaranteed LUNs at 400GB. (The current volume size is 180GB with A-SIS deduplication.) At the moment we have two LUNs (2 x 400GB LUNs taking 180GB of filer space):

/vol/v_lonvm_srvbcp1_00h00/q_lonvm_srvbcp1_00h00/lonvm_srvbcp1_00h00-1.lun

/vol/v_lonvm_srvbcp1_00h00/q_lonvm_srvbcp1_00h00/lonvm_srvbcp1_00h00-2.lun
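
For anyone wanting to build something similar, here is a rough sketch of the Data ONTAP 7-mode commands involved. The aggregate name, igroup name, initiator IQN and sizes are made up, and the volume guarantee line is optional; check the syntax against your ONTAP version:

vol create v_lonvm_srvbcp1_00h00 aggr0 500g
vol options v_lonvm_srvbcp1_00h00 guarantee none
qtree create /vol/v_lonvm_srvbcp1_00h00/q_lonvm_srvbcp1_00h00
lun create -s 400g -t vmware -o noreserve /vol/v_lonvm_srvbcp1_00h00/q_lonvm_srvbcp1_00h00/lonvm_srvbcp1_00h00-1.lun
igroup create -i -t vmware ig_lon_esx iqn.1998-01.com.vmware:esx01
lun map /vol/v_lonvm_srvbcp1_00h00/q_lonvm_srvbcp1_00h00/lonvm_srvbcp1_00h00-1.lun ig_lon_esx
snap sched v_lonvm_srvbcp1_00h00 0 3 0
sis on /vol/v_lonvm_srvbcp1_00h00

The snap sched line keeps the 3 nightly snapshots, and sis on enables A-SIS deduplication on the volume.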

If we needed to add more servers and the LUNs were running out of space, we could add another 400GB LUN without consuming any more disk space on the filer, and only increase the volume size if the additional data that A-SIS couldn't dedupe required it.

We then have normal production servers that are not required in DR in a volume called v_lonvm_srvprod1.

LUNs are:

/vol/v_lonvm_srvprod1/q_lonvm_srvprod1/lonvm_srvprod1-1.lun

/vol/v_lonvm_srvprod1/q_lonvm_srvprod1/lonvm_srvprod1-2.lun

Test Servers: v_lonvm_srvtest1

/vol/v_lonvm_srvtest1/q_lonvm_srvtest1/lonvm_srvtest1-1.lun

/vol/v_lonvm_srvtest1/q_lonvm_srvtest1/lonvm_srvtest1-2.lun

/vol/v_lonvm_srvtest1/q_lonvm_srvtest1/lonvm_srvtest1-3.lun

/vol/v_lonvm_srvtest1/q_lonvm_srvtest1/lonvm_srvtest1-4.lun

/vol/v_lonvm_srvtest1/q_lonvm_srvtest1/lonvm_srvtest1-5.lun

Workstations: v_lonvm_ws1

/vol/v_lonvm_ws1/q_lonvm_ws1/lonvm_ws1-1.lun

/vol/v_lonvm_ws1/q_lonvm_ws1/lonvm_ws1-2.lun

As you can see, it is very easy to work out which VMs are in which volumes.

Our next stage is to qtree SnapMirror all VMs to another filer in a remote site for backups and keep snapshots for 14 days, so we only need to back up VMs to tape once every two weeks. (We need to keep VMs on tape for regulatory reasons.)
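
As a rough sketch (the destination filer and volume names are made up), the relationship would be driven by an entry in /etc/snapmirror.conf on the destination filer:

lonfiler1:/vol/v_lonvm_srvbcp1_00h00/q_lonvm_srvbcp1_00h00 remfiler1:/vol/v_remvm_bcp1/q_lonvm_srvbcp1_00h00 - 0 0 * *

The four schedule fields are minute, hour, day of month and day of week, so 0 0 * * fires at midnight daily. The first transfer is kicked off with snapmirror initialize -S source destination, and something like snap sched v_remvm_bcp1 0 14 0 on the destination keeps the 14 nightly snapshots.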

Having said all of this, if budgets permitted I would very happily get rid of iSCSI and go for NFS: no hassles with LUNs, native thin provisioning, only one export per volume, simplicity all round, and more than enough performance.
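
For comparison, an NFS datastore is roughly two commands end to end; a sketch with made-up names, exporting the volume on the filer and then mounting it from the ESX 3.x service console:

exportfs -p rw=esx01:esx02,root=esx01:esx02 /vol/v_lonvm_nfs1
esxcfg-nas -a -o lonfiler1 -s /vol/v_lonvm_nfs1 lonvm_nfs1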

We run 200 VMs in this setup in London, and recently one of our iSCSI paths died, so we were left with 200 VMs backed by the filer through a single gigabit uplink, using only 60% of its bandwidth. So don't believe all the scare stories you hear about I/O; test for yourself.

We have a bigger office running purely NFS with 800 VMs, and it is a breeze to provision and manage, but the NFS license is another cost that we currently need to avoid elsewhere.

http://WoodITWork.com
RParker
Immortal

We've been told that it is a best practice to use datastores with a 1:1 volume/LUN relationship. I have never heard this before, but would like to hear others' thoughts on it.

Are you talking about 1 VMFS volume on 1 LUN? That would be correct. If you are talking about 1 VM per LUN, I don't think that's accurate; NetApp would not recommend that. In the first place, NetApp recommends NFS volumes and NOT LUNs, since they want you to use their dedupe technology and ONTAP software tools.

I would call them for clarification on that. Are you sure you 'heard' that correctly?

RParker
Immortal

We use iSCSI in some offices and NFS in others. For the iSCSI offices we standardised on 400GB LUNs. This was an educated-guess figure: we decided that 400GB would be big enough to hold enough VMs without having to provision LUNs too often, yet small enough that we could restore one from tape or clone a LUN (without FlexClone) without it being too big.

These figures should be taken with a grain of salt. Think outside the box: what if your VMs are 100GB? What if they are 10GB? The size of the LUN isn't a hard-and-fast number; it's a starting point.

So you should take into consideration not only the size of the LUN but also its contents. The limit comes from having too many open files per LUN/VMFS volume, and the recommendation is anywhere from 10 to no more than 20 VMs per LUN. That's the more accurate number to size by, not just the capacity of the LUN. For example, 15 VMs at roughly 25GB each works out to about 375GB, which fits the 400GB LUN size mentioned above.

Also, for restores/backups, you probably wouldn't restore an entire LUN at once except in disaster recovery. A typical restore would involve restoring VMs on an individual basis using VCB, so you don't have to restore a whole LUN and wait for it to complete. Restoring a single VM at a time gets you a faster recovery, because more often than not you only need certain VMs up before others.
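
For example, a full-VM export of a single VM with the vcbMounter tool that ships with the VCB framework looks something like this (the host, credentials, VM name and paths are all made up):

vcbMounter -h virtualcenter01 -u backupadmin -p password -a name:webserver01 -r /backups/webserver01 -t fullvm

You can then restore just that one VM rather than pulling back a whole LUN.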

julianwood
Enthusiast

I completely agree with your sizing comments. There is no one-size-fits-all; I was listing some of the things we thought about when sizing our LUNs based on our requirements.

When people have been advised, or are thinking of, 1 VM per LUN, they need to see other opinions.

Yes, you probably wouldn't need to restore an entire LUN, but if a LUN were corrupted or you had a catastrophic failure of your storage, you need to think about what you would do.

Also, if we needed to restore a VM from, say, a year ago from tape, we would need to restore an entire LUN.

Also, we don't use VCB, so we wouldn't have the option to restore just a single VM's files. We're looking at NetBackup and its VCB integration, but then again we may go NFS, in which case we'd just back up the NFS mount and have no need for VCB.

http://WoodITWork.com
Texiwill
Leadership

Hello,

The rule is: 1 VMFS to 1 LUN

This is due to the way ESX performs its locking: it locks an entire LUN via SCSI reservations. If you have more than one VMFS per LUN, then a lock on one will lock everything.

If you mean 1 LUN per NetApp volume, that could also be related to locking issues.

However, it has always been many VMDKs per VMFS/LUN.
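
In practice that means each LUN carries exactly one VMFS datastore spanning the whole device. On ESX 3.x the format step looks something like this (the datastore name and device path are examples; the LUN needs a partition first):

vmkfstools -C vmfs3 -S ProdDatastore1 /vmfs/devices/disks/vmhba1:0:12:1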


Best regards,
Edward L. Haletky
VMware Communities User Moderator
====
Author of the book 'VMware ESX Server in the Enterprise: Planning and Securing Virtualization Servers', Copyright 2008 Pearson Education.
Blue Gears and SearchVMware Pro Blogs -- Top Virtualization Security Links -- Virtualization Security Round Table Podcast

--
Edward L. Haletky
vExpert XIV: 2009-2023,
VMTN Community Moderator
vSphere Upgrade Saga: https://www.astroarch.com/blogs
GitHub Repo: https://github.com/Texiwill
julianwood
Enthusiast

That is correct, although the original question was about a 1:1 volume/LUN relationship.

You can definitely have multiple LUNs per volume, but only one VMFS datastore per LUN.

http://WoodITWork.com