VMware Cloud Community
vmgdk
Contributor

VM swap file location when using NetApp SMVI and tiered storage

I'm reading the NetApp best practice guide for NetApp SMVI. It suggests creating a second VM disk for the swap space of each VM. We have a two-tier storage system: FC drives and SATA drives. My question is which tier the VM swap should live on. From a performance standpoint I think FC, but if the swap is underutilized, then putting it on FC is a waste of pricey disk.

9 Replies
Brad_C
Contributor

Two things:

First, I'm not an expert, but I would think this depends on how far overcommitted your ESX host/cluster memory is. If you aren't that far overcommitted, guest OSes shouldn't be hitting the pagefile that hard, should they? If you plan on having pagefiles used extensively with host memory overcommitted, you may want to go with FC instead of SATA for the performance. If you don't see much pagefile usage in your VMs, maybe go SATA. Again, not an expert here.

Second, I have a question. I'm following the NetApp VMware storage best practices guide too. I'm trying to figure out whether the document is suggesting that each VM's swap disk needs to be on a datastore that is on a separate NetApp volume, or whether they can all live on the same datastore. It is unclear because the diagram only shows the configuration for a single VM.

Hope you get this figured out.

-Brad C.

vmgdk
Contributor

Brad,

Thanks for the help. In the end I just asked myself: do you want your swap file slow or fast? So I went with FC disk for the swap.

For your question: if you go into the ESX config options to configure a datastore swap file location on the SAN, you can only configure one swap location as far as I can tell. So they all pile onto the same LUN for swap. I have 100 VMs right now all using a 200GB swap LUN. There is plenty of free space at this point, but as you point out, it all depends on memory overcommit.
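For rough sizing of a shared swap LUN like that: ESX creates a per-VM swapfile equal to the VM's configured memory minus its memory reservation, so the swap datastore needs room for the sum of those across all VMs. A minimal sketch of that arithmetic (the VM names and sizes below are made-up illustrations, not anyone's real inventory):

```python
# Sketch: estimate the swap-datastore capacity a set of VMs needs.
# ESX sizes each VM's swapfile as (configured memory - memory reservation),
# so the worst case for the shared swap LUN is the sum over all VMs.

def swapfile_gb(configured_mem_gb, reservation_gb=0):
    """Worst-case swapfile size for one VM: memsize minus reservation."""
    return max(configured_mem_gb - reservation_gb, 0)

# (name, configured memory GB, memory reservation GB) - illustrative only
vms = [("web01", 4, 0), ("db01", 16, 8), ("app01", 8, 0)]

total_swap_gb = sum(swapfile_gb(mem, res) for _, mem, res in vms)
print(total_swap_gb)  # 20
```

Raising a VM's memory reservation shrinks its swapfile, which is one lever for keeping a shared swap LUN small.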

Geoff

Brad_C
Contributor

This will be the first time I've tried to change the VMware swap file location, so I wasn't sure whether it is configured per ESX host, per datastore, or what. I will be testing configs tomorrow. Thanks for the information.

-Brad

Jasemccarty
Immortal

You can modify the swapfile location in the cluster configuration, and when guests are moved between hosts, the swapfile will move from being stored with the VM to being stored on the alternate location.

This does not require the guest to be powered off, and it doesn't require any special changes to the cluster, other than the swapfile setting.

Jase McCarty

http://www.jasemccarty.com

Co-Author of VMware ESX Essentials in the Virtual Data Center

(ISBN:1420070274) from Auerbach

Brad_C
Contributor

I think I probably need to read more about VMware backups and restores to better understand which VMware files are required for a restore and what exactly needs to be backed up. I have two open questions in the ESX 3.5 community; one is about the proper datastore/volume/LUN config and swap/guest OS temp locations to use if I want to use NetApp snapshots for backup. Have a look at that post if you can find it for the specifics. I'm not at my computer, so I can't get the link. Search "netapp" in the 3.5 community.

vmgdk
Contributor

Are you talking about the NetApp product SMVI (SnapManager for Virtual Infrastructure)?

Proper datastore/volume/LUN config is not as straightforward as I first thought. You have to know what you want out of the VMware/NetApp relationship; once you know that, you can determine how to configure datastore/volume/LUN. It's not one-size-fits-all, unfortunately. I followed a few white papers from NetApp as a guide, but had to make adjustments to fit my particular environment.

For the swap backup question, the answer is: don't back it up. If you are putting swap on a separate datastore, it does not need to be backed up, especially if you are using NetApp snapshots. You will tear through a ton of disk space if you try to back up swap on a NetApp, and for what? Nothing. One of the big reasons to put swap on another datastore is so it's disk space you don't have to back up; if swap lived with the VM, you would be backing it up. A big waste of space.
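To put a rough number on that waste: each snapshot retains the blocks that changed since the previous one, and swapfile blocks can churn heavily under memory pressure. A back-of-the-envelope sketch with purely illustrative figures (VM count, swap size, retention, and change rate here are all assumptions, not measurements from any real system):

```python
# Sketch: rough upper bound on snapshot space consumed by swapfiles
# that sit on a backed-up datastore. All numbers are illustrative guesses.

def swap_snapshot_waste_gb(vm_count, swap_gb_per_vm, snapshots_retained,
                           change_rate=1.0):
    """Worst-case extra snapshot space: change_rate is the fraction of
    each swapfile rewritten between consecutive snapshots."""
    return vm_count * swap_gb_per_vm * snapshots_retained * change_rate

# e.g. 100 VMs, 2 GB swap each, 7 daily snapshots retained, full churn
print(swap_snapshot_waste_gb(100, 2, 7))  # 1400.0
```

Even at a modest change rate, that space buys nothing at restore time, which is the argument for keeping swap on its own unprotected datastore.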

Brad_C
Contributor

Exactly. I had gathered the same thing from the best practice doc. I'm not planning on using SMVI, but rather scripts to do snapshots. My main question is whether it is OK to put the virtual disks my VMs will use for "temporary and transient data" on the same datastore that holds all the VMware swap files that will not be backed up. Then I'd have one big LUN/datastore holding all that temp data for all VMs, and a separate volume/LUN/datastore for the primary data of each group of VMs that share the same backup schedule. Each different backup schedule would get its own volume/LUN/datastore for VM primary data. Is this how you ended up doing it? By the way, thank you so much for answering my questions.

-Brad

007Roberto
Contributor

When following the NetApp best practice of hosting VM swapfiles on a separate datastore (as stated in TR-3428), as it pertains to snapshots: what is the maximum number of VMs that should have their swapfile on that datastore?

cfizz34
Contributor

Does anyone know if NetApp has a best practice guide for setting up the swap (virtual and guest swap) datastore with regard to thin provisioning? I know VMware has a best practice, taken from page 28 of the vSphere 5 Best Practices Guide:

"Regardless of the storage type or location used for the regular swap file, for the best performance, and to avoid the possibility of running out of space, swap files should not be placed on thin-provisioned storage."
