VMware Cloud Community
AnoniMoose
Contributor

Unable to Migrate between hosts

I just took over a network at my new job and am currently working on flattening it. The previous individuals that managed it had a flair for the extravagant, and the network was overbuilt. In my opinion, it's still a bit hefty. A quick background:

We run 3 hosts (2 HP Proliant DL380 Gen9s, and a Supermicro)

The 2 HP hosts are using the HP customized images

A total of about 10 virtual machines, though I'm still trimming that down

They had about 6 VLANs when I started here, which was equal to the number of people that work in the office. I've got this down to 2 that I'm really using.

The two HPs are running 6.7, and the Supermicro is running 6.0, as I've been having some issues upgrading it (something I'll get to after a while).

I would consider myself novice with ESXi, but quick to catch on. I use it at home, and at work so I have a decent understanding. My skills are overwhelmingly lacking with Linux, which is a work in progress at the moment.

Now to the issue. As I've been trying to move the servers around to clear up a VM so I can do a fresh install I've moved 1 host to the base network. Right now I have one host on our 192.168.0.x network, and 2 hosts on the 192.168.20.x network. My work PC is on the former. I can ping all of them, access each host's GUI, and they all show up in vCenter without issue. After I moved the initial host over I couldn't get to vCenter, but I could still ping it. I uninstalled and reinstalled it on a different host (one that's still on the .20.x network). I can access vCenter, and all that good stuff, but when I try to migrate a VM to another host I get an error.

In vSphere I get "Cannot connect to host" and on the host I get "Failed - The specified key, name, or identifier 'vpxuser' already exists." Before rebuilding vCenter it worked, as I'd already migrated a few VMs around.

I apologize for my extremely long winded post, and I'll be more than happy to provide information. It just might take me a bit to figure out how to pull everything. I've checked some of the KB articles, but I can't seem to get one that works.
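One thing the KB articles seem to point toward (I haven't verified this on my setup yet): the vpxuser error can mean a stale vpxuser account was left on the host by the old vCenter. Assuming SSH/ESXi Shell is enabled on the host, something like this should show and remove it before the host is re-added to vCenter:

```shell
# list local accounts on the ESXi host; look for a leftover vpxuser
esxcli system account list

# remove the stale vpxuser left behind by the old vCenter
esxcli system account remove --id=vpxuser
```

Reconnecting the host in vCenter should then recreate vpxuser automatically.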

9 Replies
daphnissov
Immortal

I'm a little confused as to what you had and what you now have in the way of configuration.

As I've been trying to move the servers around to clear up a VM so I can do a fresh install I've moved 1 host to the base network.

This sentence I don't understand.

Right now I have one host on our 192.168.0.x network, and 2 hosts on the 192.168.20.x network.

I take this to mean you have changed the management vmkernel IP for one host to be 192.168.0.x/24 and the other two hosts are on 192.168.20.0/24. What did you have prior to this change? Are your hosts added to vCenter (6.7, I assume) by IP address or FQDN? So when you say "moved", I'm trying to understand what you've done as well as what your goals for doing this work are. This will, I think, help identify the cause of your issue. And which host is experiencing this issue? One of the two on version 6.7 or the one on 6.0?

AnoniMoose
Contributor

Thank you for the reply Daph, and you're correct, my phrasing is awkward. What I meant by "VM" was actually "host".

The previous sysadmin had things all spread out. I'll clarify a bit further.

When I first took over, all of the hosts were on the 192.168.20.x network. In order to make sure I didn't mess the whole network up, I started moving the individual hosts over one at a time. The first host I moved over had vCenter (you're correct, 6.7) on it. Despite me being able to get to the host, vCenter wasn't accessible to me anymore. That's what initiated me redeploying the vCenter server. Right now my intention is to get all of the hosts on the 192.168.0.x network.

Right now it sits as follows:

Host 192.168.20.151 is the Supermicro; that host is what currently has vCenter on it. Right now, I'm not trying to move anything to it because that's the one that's currently on 6.0. I'm also not trying to move anything from it to the others. Other than running vCenter, it's not really involved.

Host 192.168.0.152 is the host that has the VMs I'm trying to migrate to the other host, which is 192.168.20.152. The reasoning behind me moving the VMs off of 192.168.0.152 is that I want to wipe it and start over. The previous admin had different vmkernel ports set up to handle individual tasks (vMotion) on their own VLANs, which makes no sense for an office this size.

I hope this helps clarify a bit more. I'm more than happy to post screenshots (if possible; I know new accounts sometimes have a posting limit), or any other detailed information.

daphnissov
Immortal

Ok, this makes more sense. Now, could you show some screenshots of your vCenter's inventory and each host's networking config? Particularly the vSwitches, vmkernel ports, and physical NICs. Also, do you have shared storage across these three hosts? If so, details on that and how it's connected, please.

AnoniMoose
Contributor

This is for the server I'm moving from (it's just those two VMs I want to move; the other two are getting trashed).

Attachments: Vswitch1-2-0-152.JPG, Vswitch0-0-152.JPG, NICS.JPG, Kernels.JPG

daphnissov
Immortal

Ok, so if you want to move this host to another subnet (talking its management vmkernel port), here's what I'd suggest:

  1. Clean up the vmkernel ports first. You have several you can remove, including one (presumably) just called "vmkernel" which makes no sense. Remove things that are obviously unused so as not to get confused. You have a single vSwitch with two uplinks providing connectivity to the management vmkernel port as well as the VMs you care about.
  2. The management vmkernel port is using no VLAN tag, which means that subnet is the native VLAN on your trunk. So add the VLAN for the new network (if non-existent) to the trunk to which you want to move the vmkernel port.
  3. Remove this host from inventory. The VMs will remain up and available during this process.
  4. Go into the DCUI of the host and change the networking to the IP you want, not forgetting to input the new VLAN ID that corresponds to the network. Ping the new IP to ensure it's available.
  5. Log back into your vCenter and re-add this host using its new IP. If all goes well, it should pop back up in inventory with all VMs.

Let me know if there was anything about this procedure you didn't understand.
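If you'd rather do the re-IP from an SSH/ESXi Shell session instead of the DCUI, a rough sketch (vmk0, the IP/netmask, the port group name, and the VLAN ID below are placeholders for your environment, and note that changing the management IP over SSH will drop your session):

```shell
# set a new static management IP on vmk0 (placeholder values)
esxcli network ip interface ipv4 set -i vmk0 -t static -I 192.168.0.153 -N 255.255.255.0

# tag the management port group with the new network's VLAN ID
esxcli network vswitch standard portgroup set -p "Management Network" --vlan-id 10

# confirm the change
esxcli network ip interface ipv4 get -i vmk0
```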

AnoniMoose
Contributor

I understand.

I'd like to get vMotion, management, etc. all on the same network/VLAN (which is what I gathered you are instructing me to do in your post). In fact, I'm planning on doing away with every VLAN except for 200, which is our security camera VLAN, and the native VLAN. The cameras are completely separate though; they don't interact with the servers/hosts.

Through providing these screenshots I stumbled on the fact that he made a VLAN 143 with no gateway, uplink, etc. He just has the 2nd or 3rd network port on each host set to use the 192.168.143.x network. Then he has 3 switch ports set as access with 143 untagged on them, which the ports from the servers plug into. I'm assuming there's something in there that's giving me fits with vMotion not migrating the VMs as well. I feel a bit dumb for not connecting all of that earlier, but this definitely put me on the right track.

Thank you again, your help is much appreciated.

daphnissov
Immortal

I would honestly recommend you keep a dedicated VLAN for vMotion and put all vMotion vmkernel ports on that VLAN. It doesn't necessarily have to be dedicated vmnics (pNICs), although from your screenshot you will have plenty of ports. Regardless of whether it's physical or logical, even in small environments it is a good idea to segregate vMotion. But you can re-do that config at any time as it's not dependent upon management vmkernel connectivity.
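For the record, creating that dedicated vMotion vmkernel port can be scripted roughly like this from the host CLI (the port group name, vSwitch, vmk number, VLAN ID, and IP are all examples, not your actual values):

```shell
# port group for vMotion on the existing management vSwitch
esxcli network vswitch standard portgroup add -p vMotion -v vSwitch0
esxcli network vswitch standard portgroup set -p vMotion --vlan-id 30

# vmkernel port with its own IP on the vMotion VLAN
esxcli network ip interface add -i vmk1 -p vMotion
esxcli network ip interface ipv4 set -i vmk1 -t static -I 192.168.30.151 -N 255.255.255.0

# enable the vMotion service on that vmkernel port
esxcli network ip interface tag add -i vmk1 -t VMotion
```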

AnoniMoose
Contributor

So in essence what I'll be looking at as a final configuration is:

A management VLAN (which can include vMotion, etc.) assigned to one vSwitch and one pNIC (maybe 2 for redundancy).

The other pNICs can be assigned to the native VLAN, and that's what will be used as the network for the VMs (which would also be best to have 2 ports).

At some point, once I make sure everything works, I want to create a vmkernel port for, at least, vMotion so that it runs "alone"?

I just want to make sure I understand it properly.

daphnissov
Immortal

Let me spell this out a little more clearly, as I didn't do a good job before. Here's what I'd recommend in your case, which gives you what you want but also sets you up for a level of resiliency:

  1. Take two vmnics (vmnic == physical NIC) per host and make them both trunks. Trunk down the VLAN for management and the VLAN for vMotion.
    1. The management vmkernel port goes on this vSwitch with vmnicX active. The vMotion vmkernel port (so separate IP, separate VLAN) goes on the same vSwitch with vmnicY active. Each vmkernel port can fail over to the other vmnic if one link fails. This gives you a dedicated 1 GbE interface for vMotion traffic on a separate VLAN, and it gives you resiliency in case either link fails.
  2. Take two more vmnics and put them on a second vSwitch. Make both access ports for the virtual machine VLAN you want. This can also be a trunk if you have multiple VLANs for VM traffic. Even if you have only one, I'd probably do a trunk anyway and tag it so that if you expand in the future and need to add more VLAN IDs, you can do so non-disruptively.
    1. Both vmnics go into a team on the same vSwitch. All virtual machine port groups get connected here. The teaming policy can simply be route based on originating virtual port ID. With identical config, you again have failover resiliency. There will also be some degree of load sharing (note: I said "load sharing", not "load balancing") because both vmnics are essentially active.
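If you want to script it, step 2 above could be sketched like this from the host CLI (the vSwitch name, vmnic numbers, port group name, and VLAN ID are examples only):

```shell
# second vSwitch with two uplinks for VM traffic
esxcli network vswitch standard add -v vSwitch1
esxcli network vswitch standard uplink add -v vSwitch1 -u vmnic2
esxcli network vswitch standard uplink add -v vSwitch1 -u vmnic3

# both uplinks active, route based on originating virtual port ID
esxcli network vswitch standard policy failover set -v vSwitch1 -a vmnic2,vmnic3 -l portid

# tagged port group for the VM VLAN
esxcli network vswitch standard portgroup add -p "VM Network" -v vSwitch1
esxcli network vswitch standard portgroup set -p "VM Network" --vlan-id 20
```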

And you still have several vmnics free after this, which gives you flexibility in the future for other vmkernel services if you want those. This also makes it fairly simple to maintain and to know how traffic ingresses/egresses the hosts.