VMware Cloud Community
RichardL-Moto
Contributor

vMotion stack questions - 1st time poster

Please move if this is the wrong Forum.

I am REAL new to VMware but have been tasked with determining the viability of something. I work at Motorola as an RF engineer, so my networking knowledge is limited to my RNI (radio network infrastructure). We use straight ESXi v6 as the backbone for many of our containers, running on DL380 Gen8 through Gen10 servers depending on the size of our radio core. A customer has asked us to determine the possibility of introducing vCenter to manage our two hosts, so that a third host could further maintain core functionality in a stuff-hitting-the-fan scenario. Let me explain.

We have two VMSs, each running 5-6 containers, almost all Linux, plus two Windows DCs. Our Zone Controller container, the most critical one, has ZC1 on one host and ZC2 on a second host. We have proprietary software that handles the heartbeating and health of each ZC instance. One is always active and the other is on standby with an exact, realtime copy of the ZC database. I am not looking to vMotion those; that would be unsupported by Moto. Our setup fails over unbelievably fast when we force a ZC roll.

What this customer is asking is: is it possible to introduce a third host under a vCenter umbrella for all three, and then, if VMS01 AND VMS02 both seem to be in distress, vMotion the ZC VM to that third host? They realize that if it is possible, the recovery time would be seconds, compared to the milliseconds the two VMSs enjoy. So with this basic background and almost zero knowledge, I am trying to wrap my head around multi-NIC vMotion across three hosts, and doing it without physical switches if possible.

Our VMSs are loaded with unused NICs. I think we have 14 NICs, of which 5 are used. Among the unused ones we have an HPE Ethernet 10Gb 2-port 530T in each of the two DL380 Gen10 servers. I do not have a 10Gb card on the 3rd new host YET; I am willing to purchase one based on responses here.

THIS IS IN A LAB and only to see if it is feasible.

With three hosts, how do I cross-connect all three? We can get into vMotion and the switch type (VSS or VDS) later, but my first question is: do I have to purchase another 10Gb card for the new 3rd host? And then, because I am peering (my word) each host to the other two, do I really need to add a second two-port card to each of them, giving four 10Gb ports?

The reason I ask is this:

SvrA's 10Gb ports get cabled to SvrB and I do the active/standby setup for vMotion between them. But what about SvrC? I would have no more 10Gb ports left on SvrA or B to go to SvrC.

So do I add another card to SvrA and SvrB, and then two in SvrC, so that I have links from each machine to the other two?

Am I way overthinking this, or is that accurate from a HW perspective? Because in my scenario we don't know whether SvrA or SvrB holds the active Zone Controller, so we would have to vMotion both to SvrC.

If I need to add this HW, how would you propose I roll this out? From what I have read, it seems like a VDS is better suited from a vCenter perspective.

Last question: if this is feasible, what does my non-routing IP structure look like? When I am not using a physical switch, I am confused about whether each vmkernel I build needs to be on a DIFFERENT IP scheme, since they are direct peers with no routing. Does anything I have written make sense?

-Richard

grimsrue
Enthusiast

Okay. So. There is a lot to unpack here.

Based on what I read, it seems you have two servers connected to each other using 10Gb NICs and, I assume, crossover cables? The servers are not, or will not be, connected to any type of routed network? How do you access your VMs remotely? The current ESXi servers are running VMs that I assume interact with the radio network in some way, so are these servers cabled to the radio network, and do you access your VMs through it?

Based on how you answer the questions above, I can give you a better idea of what you need. That being said... generally vMotion needs some type of network connectivity for heartbeat. I personally do not think you could use vMotion by just interconnecting 10Gb NICs to each other across servers; at least, I have never tried it. You would need at least a small network switch capable of 10Gb to make vMotion function correctly.

Also, to simplify things, you do not have to run a separate VMK for vMotion. You can use vmk0, the interface your host management runs on, to do vMotion, BUT all three hosts have to be connected to the same switch or be on subnets that are routable to each other in some way. If the radio network lets your servers talk to each other, then you can use that network for vMotion. If not, you will want a separate switch to cable a spare NIC to from each server. You can then use a private subnet (e.g. 192.168.0.0/24) to run your vMotions over.
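
If you end up scripting this in your lab, here is a minimal sketch using pyVmomi (the vSphere Python SDK) of that vmk0 approach. The vCenter address and credentials are placeholders, and I am assuming vmk0 is the management vmknic on every host:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Lab only: skip certificate validation.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local",
                  user="administrator@vsphere.local",
                  pwd="***", sslContext=ctx)
content = si.RetrieveContent()

# Walk every host under vCenter and tag vmk0 for vMotion traffic,
# the API equivalent of ticking the "vMotion" box on the vmkernel NIC.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    host.configManager.virtualNicManager.SelectVnicForNicType("vmotion", "vmk0")
    print(f"vMotion enabled on vmk0 of {host.name}")
view.Destroy()
Disconnect(si)
```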

Something else you need to consider: the three servers need to be running the same CPU model. If not, you are going to have to enable EVC mode for vMotion to function. (Both EVC and vMotion are features of vCenter.) EVC mode level-sets your CPU instruction set across all the servers. If one server has a newer CPU than the others, your vMotion will fail, because the VMs running on the newer CPU may be using instruction sets that the older CPU on another server does not have. You will have to either migrate the VMs off a server to turn EVC mode on, or at least power them down. vCenter/vSphere 6.7 introduced the ability to turn EVC mode on for individual VMs, but you have to shut the VM down first to enable it.
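
For what it's worth, the same EVC setting can be driven through the API. A hedged sketch, reusing the `content` object from above; the cluster name "LabCluster" and the "intel-haswell" baseline key are assumptions (the valid keys come from the EVC manager itself):

```python
from pyVim.task import WaitForTask
from pyVmomi import vim

# Find the cluster by name ("LabCluster" is an assumed lab name).
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "LabCluster")
view.Destroy()

evc_mgr = cluster.EvcManager()
# Discover which EVC baselines this cluster's hosts can support.
print([mode.key for mode in evc_mgr.evcState.supportedEVCMode])
# Level-set the whole cluster to the Haswell instruction set.
WaitForTask(evc_mgr.ConfigureEvcMode_Task("intel-haswell"))
```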

One last thing: all three servers need to share the same datastore(s) for fast vMotion. If all three servers have their own separate datastores, then your migrations will have to be combined "storage + server" vMotions, and those take significantly longer. Seconds can become minutes or hours.
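
To make the difference concrete, here is a hedged pyVmomi sketch of the migration call itself; the VM, host, and datastore names are made up for illustration. With shared storage you leave `spec.datastore` unset and only the compute side moves, which is what keeps the cutover in seconds:

```python
from pyVim.task import WaitForTask
from pyVmomi import vim

def find(content, vimtype, name):
    """Hypothetical helper: look up a managed object by name."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vimtype], True)
    obj = next(o for o in view.view if o.name == name)
    view.Destroy()
    return obj

vm = find(content, vim.VirtualMachine, "ZC1")            # assumed VM name
dest = find(content, vim.HostSystem, "svrc.lab.local")   # assumed 3rd host

spec = vim.vm.RelocateSpec()
spec.host = dest
spec.pool = dest.parent.resourcePool
# Uncomment only when storage is NOT shared; this adds a full disk copy
# and turns a seconds-long cutover into a minutes-to-hours migration:
# spec.datastore = find(content, vim.Datastore, "svrc-local-ds")
WaitForTask(vm.RelocateVM_Task(spec,
                               vim.VirtualMachine.MovePriority.highPriority))
```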

RichardL-Moto
Contributor

Based on what I read, it seems you have two servers connected to each other using 10Gb NICs and, I assume, crossover cables? The servers are not, or will not be, connected to any type of routed network? How do you access your VMs remotely? The current ESXi servers are running VMs that I assume interact with the radio network in some way, so are these servers cabled to the radio network, and do you access your VMs through it?

The two existing hosts are not connected directly to each other. We use a wide array of VLANs for core services on a 10.x.x.x network, which our proprietary heartbeating/health software uses to turn up one or the other Zone Controller. VMS 1 and 2 are 10.x.233.121 and .122; x is determined by a unique core configuration, should we connect to other 911 cores in a geographic area. This is part of a regional internal IP plan for interoperability, and really not needed for this convo. Our subsite communications (remote broadcast simulcast) are a totally different scheme, routed through and changed at our firewalls to microwave or fiber backhaul to the subsite. Again, not really needed for the convo, but I thought I would answer. So yes, they are connected to a routed network in the core through Junipers. In my scenario, the VM on the 3rd host would come up as either 10.x.233.121 or .122. Not that it matters: our RNI doesn't route out to the internet and isn't connected to the internet. We go to the core to interact with the core; there is no remote access. We do have monitoring capability, but there is an electrical GAP between the IP-supplied data and a SCADA-like interface back to our network monitoring datacenters.

The ZCs are lean machines using local storage. We do have a DAS that holds a backup of the zone database, but that is for restorative purposes only. The ZC database is actually small, as its only job is to authenticate a radio as authorized on the network and determine which talkgroups (channels) it is allowed to communicate on. So moving the DB is not a heavy lift time-wise. All radio traffic is IP simulcast.

In my lab I have created a two-host vMotion IP stack with the hosts direct-connected to each other. I get confused about how to do three.

I didn't know about the CPU compatibility issue, but I am familiar with what you speak of. I have been messing with v7, where it seems you can pick a lowest-common-denominator (LCD) chip type. I think in my testing all three worked at the Haswell LCD; all shared the least capable subset of instructions. I only have licenses for 6.x at this time. I will eventually have v7 licenses that aren't hamstrung by being 60-day licenses.

Does 6.7 allow for that "least common denominator" chip selection? I can take each VMS offline to do the configuration; we do that routinely to patch Linux or update our applications, hence the reason for the two ZCs.

Thanks for responding to my question, and hopefully this sheds a little more light.
grimsrue
Enthusiast

Good to know that each ESXi host is connected to a central switch.

Now you have three ways of making use of vMotion. (You have to use vCenter for vMotion.)
The first is to use the primary network connection of each host (vmk0) as your vMotion connection. It's just a matter of editing vmk0 and checking the box for "vMotion" on each server.
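
If you want to confirm from a script which interface each host has actually selected for vMotion, here is a small pyVmomi sketch; `hosts` is assumed to be a list of vim.HostSystem objects, gathered with a container view as in the earlier sketch:

```python
# Report the vmkernel NICs each host has selected for vMotion traffic.
for host in hosts:
    cfg = host.configManager.virtualNicManager.QueryNetConfig("vmotion")
    print(host.name, "vMotion vmknics:", list(cfg.selectedVnic or []))
```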

The second way is to create a separate VLAN for vMotion, create a portgroup on the vSS/vDS, assign the VLAN to the portgroup, create a new VMK on each host, assign it an IP (no GW), and attach that VMK to your new portgroup. You can then modify the new VMK to carry vMotion only.
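
Scripted against a standard vSwitch, that second option looks roughly like this; the switch name, VLAN ID, and addressing are lab assumptions, and `host` is one vim.HostSystem (repeat per host):

```python
from pyVmomi import vim

net_sys = host.configManager.networkSystem

# Portgroup for vMotion on VLAN 150 of vSwitch1 (both assumed values).
pg = vim.host.PortGroup.Specification(
    name="vMotion-PG", vlanId=150, vswitchName="vSwitch1",
    policy=vim.host.NetworkPolicy())
net_sys.AddPortGroup(pg)

# New vmkernel NIC with a static, gateway-less address on the private subnet.
nic = vim.host.VirtualNic.Specification()
nic.ip = vim.host.IpConfig(dhcp=False,
                           ipAddress="192.168.0.11",  # .12/.13 on the others
                           subnetMask="255.255.255.0")
vmk = net_sys.AddVirtualNic("vMotion-PG", nic)

# Restrict the new vmknic to vMotion traffic.
host.configManager.virtualNicManager.SelectVnicForNicType("vmotion", vmk)
```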

The third way is the same as the second, except you place the vMotion portgroup and VMK on the dedicated vMotion TCP/IP stack, and you then assign a GW to that VMK.
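
The third option only changes the vmknic creation. A hedged sketch continuing from the snippet above; "vmotion" is the well-known key of the dedicated TCP/IP stack:

```python
# Same portgroup as before, but the vmknic joins the "vmotion" TCP/IP stack.
nic = vim.host.VirtualNic.Specification()
nic.ip = vim.host.IpConfig(dhcp=False, ipAddress="192.168.0.11",
                           subnetMask="255.255.255.0")
nic.netStackInstanceKey = "vmotion"  # dedicated vMotion TCP/IP stack
net_sys.AddVirtualNic("vMotion-PG", nic)
# The stack carries its own gateway, configured separately in the host's
# network settings; vMotion then flows over this stack, not the default one.
```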

You can use a host-to-host connection for two hosts, but when you get to three you have to use a switch. You could probably experiment and try connecting a second cable from each of the two hosts to the third host, but I don't think that will really work.


I would keep it simple and just use the host's vmk0 as your vMotion connection. You have a small enough setup that it will not hurt anything to run your host connectivity and vMotion over the same VMK and IP address.

The next part is your hardware: CPU models and storage.
EVC mode will be needed if you are running different CPU generations between servers. You will create a cluster, add the hosts, go to the "Configure" tab on the cluster, and you will see VMware EVC. That is where you set the Intel or AMD generation for the whole cluster. It is the same for individual VMs: click on the VM, go to its configure tab, VMware EVC. (See the sketch after this list.)
Whole cluster: all VMs need to be shut down first.
Individual VMs: the VM needs to be shut down first.
ESXi/vCenter 6.7 or later is required for individual-VM EVC.
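
A hedged sketch of the per-VM variant (vSphere 6.7+), reusing `evc_mgr` and the `find` helper from the earlier snippets; the VM name is assumed, and the VM must be powered off:

```python
from pyVim.task import WaitForTask
from pyVmomi import vim

# Pull the Haswell baseline's CPU feature mask from the cluster EVC manager
# and apply it to a single powered-off VM.
haswell = next(mode for mode in evc_mgr.evcState.supportedEVCMode
               if mode.key == "intel-haswell")
vm = find(content, vim.VirtualMachine, "ZC1")  # assumed VM name
WaitForTask(vm.ApplyEvcModeVM_Task(haswell.featureMask, True))
```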

Storage has to be shared for fast vMotion. If you do not have shared storage, you will essentially be copying the files that make up the whole VM to different storage when you migrate the VMs to a different server. The size of the VM determines how long that copy takes (minutes vs. hours).

The subnet the VMs run on needs to be available to each of the three servers, so you will need to make sure the VLANs are trunked to the switch interfaces of each server's primary network connections.
If you are running Virtual Standard Switches (vSS), then the portgroup names and the standard switch names must be identical across all three hosts (a quick check is sketched below).
A Virtual Distributed Switch (vDS) is shared across all three hosts; you just attach your hosts to it.
A vSS exists separately on each host.
Each host is attached to the one vDS.
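
A quick way to sanity-check that naming on standard switches (pyVmomi again, `hosts` as before):

```python
# Collect each host's standard-switch portgroup names and flag mismatches,
# since a migrated VM can only attach to a portgroup name the host offers.
names = {h.name: sorted(pg.spec.name for pg in h.config.network.portgroup)
         for h in hosts}
reference = next(iter(names.values()))
for hostname, pgs in names.items():
    if pgs != reference:
        print(f"{hostname} differs: {pgs}")
```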

Let me know if this makes sense to you. I can try to break it down a bit more, but once you have vCenter up and a cluster created, some of this will make more sense.

RichardL-Moto
Contributor

Ok, so I went out and purchased an 8-port 10GbE Netgear switch. I decided cross-connect wasn't making good sense.

I have read all of these articles on the web about multi-NIC vMotion. All three hosts now have two 10GbE ports each.

I get creating the vMotion solution using a standard vSS or vDS, but I deal in the physical world, so don't these three hosts HAVE to connect to the Netgear switch I just bought? The part about breaking it down would help; I am struggling with the lingo.

I am confused about how this new switch should be configured and/or connected to the core LAN switch IP structure, or whether to leave it just providing connectivity to the three hosts, with no uplink into my radio network and therefore on a totally different private IP structure.

RichardL-Moto
Contributor

@jackyjoy123

What is the point of your useless post? You offer no insight, nothing, EXCEPT for some stupid links.
