VMware Cloud Community
jgreenwald235
Contributor

How to configure ESXi and vSphere to run VMs stored on a single NAS on different hosts

Here is my proposed architecture and questions:

  1. Assume my lab environment has 3 computers ("Hosts"):
  • An i7 quad-core PC running Windows 10, with 64 GB of RAM and 2 TB of SSD storage, running Workstation Pro 14 with some VM images stored on it locally.
  • A 2010 Mac Pro (model 5,1) with two 6-core Xeon processors, 64 GB of RAM, and 2 TB of SSD storage, running Fusion 10 with some different VM images stored on it locally.
  • A Dell R710 server with two 6-core Xeon CPUs, 96 GB of RAM, and 2 TB of SSD storage.
  • All machines are connected by a gigabit Ethernet network with statically assigned IP addresses. This can easily be upgraded to 10 gigabit if needed.

Without using vSphere or ESXi, I would need to connect to each machine individually to access the local installation of Workstation or Fusion and start the VMs I want to run on that machine. Switching between RDP and VNC and managing multiple connections to the three hosts is a hassle.

Managing 30+ VM images across 3 different host computers would be a pain. Ideally, I would like to put all the VMware images on a single file server/NAS and be able to start any VM on any of the 3 host computers, and to control, monitor, and start the VMs from a single central console.

For example: I want VM #1 to run on the Mac today. But after shutting down VM #1, I want to run the same VM on the Dell server tomorrow.

From what I have read and experimented with, I believe this is possible. I believe I would need to run ESXi on each machine (after converting the VM images) and use the vSphere console to centrally manage the ESXi instances, which would all see the single central group of VM images as part of their datastore, allowing me to register VMs.

My questions are:

  • Is it possible to do this? And if so, how do I configure all this to have:
    1. All the VM instances physically stored in one central location (file server/NAS), while selecting a particular VM to execute on a chosen server, using that server's CPUs and memory, without physically copying the VM files over to that server.

  • Would I need to install ESXi on each computer host and "register" the centrally stored VM images with each ESXi instance, and then use the central vSphere console to see the ESXi instances and pick which ESXi instance/host a particular VM will run on?

  • I can easily install ESXi directly on the hardware of the Dell server. But installing it directly on the Mac is problematic, as I would need the Mac to dual-boot between ESXi and macOS because I also use it as a video workstation. It would also be an issue to install ESXi directly on the PC, as I need the PC to run Windows natively to teach my classes on the web.
    1. So I'm wondering: if I installed and ran ESXi as a virtual machine itself, under Workstation 14 on the PC host and under Fusion on the Mac host, and gave that ESXi VM "access" to all the memory and CPUs of the host, would it be able to start one of the centrally stored VMs locally, and thus use the RAM and CPUs of that local machine, while still giving me access to three separate ESXi instances: one running directly on the Dell server host, and the other two running as VMs on the PC host and Mac host respectively?
  • What is the best (fastest and most reliable quality of converted output) way to convert the existing VM images? OVF or vCenter Converter?

8 Replies
raidzero
Enthusiast

The main issue I see (and I'm not sure of your exact situation) is that you will need a license for vCenter and a paid license for ESXi to centrally manage the environment.  Free ESXi will install fine but can't be used with vCenter.

Additionally, if you are talking about just running lab machines, vCenter probably won't do a lot for you with things like HA/DRS/vMotion.

For my money, and given the problem you are trying to solve (powering on VMs on various servers), I would convert both the PC (or run nested) and the server to free ESXi, connect them to shared storage, and then use PowerCLI to script powering on or cold-relocating VMs.  For 30 VMs that's over 5 GB of memory per VM, which is really high for a lab in my experience.  You can leave the Mac as a desktop, or even run nested on it as well if you like.

ESXi uses file locking on datastores, so even if hosts aren't connected to the same cluster, or the same vCenter, they will still respect the clustered file system shared between them.  So you can power off a VM, unregister it, register it on a different host, and power it up, all without causing corruption or inconsistencies.
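
A rough PowerCLI sketch of that cold-relocate flow, in case it helps (untested; the IPs, credentials, VM name, and datastore path are placeholders to swap for your own):

```powershell
# Connect to both free ESXi hosts directly (no vCenter needed for this)
$src = Connect-VIServer -Server 192.168.1.10 -User root   # Dell; prompts for the password
$dst = Connect-VIServer -Server 192.168.1.11 -User root   # nested ESXi on the PC

# Shut the guest down cleanly (needs VMware Tools), then wait for power-off
$vm = Get-VM -Name "VM1" -Server $src
Stop-VMGuest -VM $vm -Confirm:$false
while ((Get-VM -Name "VM1" -Server $src).PowerState -ne "PoweredOff") { Start-Sleep -Seconds 5 }

# Unregister from the first host; the files stay put on the shared datastore
Remove-VM -VM $vm -Confirm:$false

# Register the same .vmx on the second host and power it up
$newVm = New-VM -VMFilePath "[shared-datastore] VM1/VM1.vmx" -VMHost (Get-VMHost -Server $dst)
Start-VM -VM $newVm
```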

jgreenwald235
Contributor

Thank you so much - that answers most of my questions. But of course, with things like this - you give me a cookie and I want your soul 😉

HA/DRS/vMotion are not big needs for me, as this "lab environment" is a home test/presentation environment where I run and test various server combinations (database RAC, WebLogic clusters...) and use them as demos when I teach classes. I'm not providing a "live" lab environment for students to work in. My apologies, I should have been clearer.

I plan to do daily backups of the VMs to a separate NAS, and do not plan to move them around. If I can choose (or change via unregister/register) which host to start a VM from, via a central location/console or the CLI, that is enough.

Some additional questions:

1. Running "nested", I assume, means running ESXi inside of VMware Workstation 14 or Fusion 10 as a virtual machine itself.

2. So, if I understand this correctly, my process is:

A. Convert my VMs to a format that will upload into a datastore. Does this have to be OVF or can I use vCenter Converter? Which is better? And I'll test it myself as well.

B. Put all my SSDs into the Dell, on which I'll install free ESXi, and that will be my "shared storage" that the nested free ESXi instances on the PC and Mac will see.

C. Decide which VMs will typically run on which host, e.g. VMs with large CPU/memory needs go to the Dell server or the Mac (running ESXi nested as a VM itself?), in order to minimize having to unregister/re-register VMs.

I'm unclear as to how the shared storage works across ESXi datastores. Does each free ESXi instance "see" the "same" network-shared files (on the Dell server) as its own datastore? Is that how they "respect" the file locking? And is that how they "see" VMs to register/unregister?

And, do you name your servers as "pets or cattle": https://www.theregister.co.uk/2013/03/18/servers_pets_or_cattle_cern/

thank you again 😉

joe

raidzero
Enthusiast

Yep, nested means virtualization inside of virtualization, essentially.

Somebody else might have a better idea about conversion.  I see this VMware Knowledge Base article talking specifically about Workstation to ESXi as a pretty straightforward function of the Workstation software.  And this VMware Knowledge Base article looks like similar functionality in Fusion.  You can also just do a backup and restore if you have that functionality (and it sounds like you do), although this may be more time consuming.  VMware Converter, I think, requires vCenter.

The only thing that kind of jumps out at me for your plan is this:

>>>B. Put all my SSDs into the Dell, on which I'll install free ESXi, and that will be my "shared storage" that the nested free ESXi instances on the PC and Mac will see.

You won't be able to share, by default, storage on the Dell as a NAS device through ESXi.  I thought you  might have an external NAS. 

You can spin up a Linux VM (free), present it with the Dell storage, and then share that out as NFS storage which your other hosts can connect to.  The only question is where you store the Linux VM itself.

You could just carve the storage up as one big device and do everything there.  But I would personally check whether the Dell server has a RAID controller that can do logical segmenting.  That is what I would likely do: take all my disks and put them in one big logical RAID group, then carve out, say, 30-50 GB as a separate logical disk.  You can install ESXi on that disk and it will use its remaining space as a datastore, which you can deploy the Linux VM on.  Then you can use the rest of the space as a separate datastore for one or more virtual drives attached to the Linux VM, which can share that space out as an NFS export.

The only screwy, Inception-ist part here is that you are now nesting storage.  So your Dell server will mount this NFS export, which is technically the same data space as the local storage it is presenting to the Linux VM.  But as long as you don't deploy other stuff on the "local" datastore and only deploy on the NFS datastore, this should work.  At least, it works in my head.

This is all a lot easier with an external NAS like a synology, etc.

Basically you will have an NFS export which is addressable at the IP address of your Linux instance.  Then your hosts all connect to this and that is their NFS datastore.  The hosts all see the same file system, the same folders, the same VMs, and the same VMX files, and each VM would only exist (be registered) on one host at a time.  But even if the hosts aren't together in vCenter, there is file locking that prevents an ESXi host from attempting to boot a VM which is already running on another server (for a more technical explanation, google vmkfstools and the vmkfstools -D CLI option).  So it won't be possible for a host to accidentally stomp on data which another host is accessing (like, say, you and I both reading and writing the same file on a file share), which is a core issue for shared storage and clusters.

You'll essentially have two (or three) separate hosts, but they are accessing the same datastore safely.  And with some scripting work in PowerCLI you can pretty easily do as I said before: power down, unregister, register on the new host, power up.  Or you can even do this manually through the GUI if you want.
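
If it helps, connecting the hosts to that NFS export is also scriptable; here's a rough PowerCLI sketch (again untested, and the host IPs, export IP/path, and datastore name are just placeholders for whatever your Linux VM or synology ends up using):

```powershell
# Mount the same NFS export on every host so they all see an identical datastore
$esxHosts = "192.168.1.10", "192.168.1.11"            # Dell + nested ESXi on the PC
foreach ($ip in $esxHosts) {
    $conn = Connect-VIServer -Server $ip -User root   # prompts for the password
    New-Datastore -Nfs -VMHost (Get-VMHost -Server $conn) -Name "nfs-vms" `
        -NfsHost "192.168.1.50" -Path "/volume1/vms"  # IP and path of the NFS export
}

# Sanity check: browse the datastore and list the .vmx files available to register
# ("ha-datacenter" is the default datacenter name when connected straight to a host)
Get-ChildItem -Path "vmstore:\ha-datacenter\nfs-vms\" -Recurse |
    Where-Object { $_.Name -like "*.vmx" }
```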

Hope that helps.  Pets vs cattle: for me personally I typically just do cattle.  labesx01, labesx02, labad01, labad02, etc.  But everyone is different.  My labs are usually pretty transient as I rebuild often.

jgreenwald235
Contributor

That all makes total sense and I can easily see how to architect it and what the topology will be - which in itself is both a bit scary and gratifying at the same time - and a testament to your clarity of explanation - thank you 😉 . I started down this path literally last Saturday.

As it turns out... I *do* have a Synology NAS with plenty of space and will try that. My main issue is speed: my network is 1 gigabit (with Cat 6a cabling), and I can get a VM to start up/shut down over the network from the NAS only a few seconds slower than it takes to boot the VM locally off an SSD, which is pretty good. 🙂

Suspend and resume is easily 10 times slower. 🙁

If I just go with the Synology NAS and the 1 Gb network, then it's all basically done and next up is the scripting... but I wanted a faster network, so I also added a 10 GbE NIC to the Dell (a refurb from OrangeComputers. Highly recommended. Ex: Dell R710, dual 6-core E5-5660 Xeons, 32 GB RAM, 600 GB SAS, RAID controller, DVD, and a 10 GbE NIC for $430 with a 90-day warranty. Pretty hard to beat - and they are super helpful and friendly.) While I realize 10 GbE is not needed with HDDs, I plan to use SSDs (actually M.2 SSDs on PCIe cards) as the main storage, and I think the 10 GbE will show a noticeable improvement in suspend and resume. If not - well, no knowledge is lost and my 10 GbE investment is less than $100.

Do you think I'll see much improvement with a 10 GbE network over the 1 Gb Synology - or is it not worth the hassle?

But since the Dell will only be for the VMware install - nothing else - your suggestion of exposing the Linux VM as an NFS server might work.

I'll test and see, of course. If the 10 GbE makes a huge difference - like more than 20% in start/shutdown/suspend/resume speeds - then, as an alternative to the Synology and the Linux VM, I'm thinking of buying a cheap Dell T310 ($100 on eBay), putting FreeNAS on it with all the drives, and making it into a 10 GbE-based NAS for hosting the VMs only.

joe

wesmcm
Contributor

I use a T310 for FreeNAS, and it works well.  Cooling through the front is decent enough for the drives, CPU upgrades are pretty easy (took mine up to a Xeon X3470), RAM upgrades are easy if you stay dual channel (such as 4x4GB), and most importantly (to me) there are plenty of full-height PCIe slots.  I have a cross-flashed (to LSI IT) H200 in the x16 slot, and two Intel X520-DA1 cards, one in each x8 slot.  Load during a scrub can get pretty high, but other than that, it's a smooth and snappy NAS with 4x4TB HGST and 16GB ECC.  (The only downside: the 8-year-old iDRAC is hosed, and this is a one-owner system, not an eBay special.)

jgreenwald235
Contributor

Wonderful!  That makes sense; that might be the 10 GbE shared-storage NAS host I'm looking for, leaving the Dell R710 as pure ESXi.

I'd buy the T310 right now, but I have a concern about noise from it. I want this in my home office closet - I can shut the door, mostly: I do need some airflow 😉

I currently have the 2010 Mac Pro (dual 6-core Xeon) and a water-cooled i7-6700 PC running in there and can barely hear them both. The Dell R710 arriving next week concerns me, of course, but YouTube videos seem to show that it's not too loud once it quiets down.

Question: what's your opinion of the T310 noise? I've watched videos, and at startup it's quite loud but then drops off fast.

I plan to use the extra PCIe slots for PCIe-based M.2 SSDs and maybe one HDD for local backups/clones/templates, so CPU utilization and hard-drive heat should be minimal. Full backups would go out via 1 Gb Ethernet to the Synology NAS box.

Thanks!

jgreenwald235
Contributor

Just curious... why two Intel X520-DA1 cards? Because it's the NAS, it goes to two other machines, and you didn't want to get an (expensive) 10 GbE switch?

Why not a dual-port card, to take advantage of the two slots?

wesmcm
Contributor

I happen to have three 2-port X520s in 1U servers, so I use my 1-port cards elsewhere, where space is less of a concern.  There is a 10G switch involved - a Quanta LB6M.

The T310 is not a loud server by any means, at any time, if you've used a 1U anything.  The redundant power supplies at startup are the #1 noise source.  The system fan is large, ducted, and essentially silent.
