VMware Cloud Community
eeg3
Commander

Migrating to ESXi and dealing with VMkernel changes

We are currently running ESX 4.1 in our cluster, and with the release of 4.1u1, I wanted to migrate to ESXi. However, this has been a bit challenging due to the changes in the service console.

Since I can no longer have two different gateways, I have to keep all of the VMkernel traffic on the same gateway. Previously, my service console ran on 10.157.188.x, while my iSCSI traffic ran on 10.72.66.x.

If I move all of my VMkernels to 10.72.66.x, then VMware HA will not enable, and I can't seem to join the host back to the cluster whose service consoles are on the 10.157.188.x network.

How can I get around that?

Blog: http://blog.eeg3.net
5 Replies
ats0401
Enthusiast

I'm not sure I understand your setup and what you are asking, but you should have a separate VMkernel for iSCSI and for management.

There is no more service console in ESXi.

So just make sure your management VMkernel is in the same subnet as your ESX hosts' service consoles, and it should join the cluster just fine.

iSCSI vmkernel - 10.72.66.x

management vmkernel - 10.157.188.x
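As a minimal sketch, the two VMkernel ports above could be created from the ESXi console with the `esxcfg-*` utilities. The vSwitch names, port group names, and host addresses below are assumptions — substitute your own:

```shell
# Management VMkernel on the routable subnet (port group name is an example)
esxcfg-vswitch -A "Management Network" vSwitch0
esxcfg-vmknic -a -i 10.157.188.10 -n 255.255.255.0 "Management Network"
# The single VMkernel default gateway lives on the management subnet
esxcfg-route 10.157.188.1

# iSCSI VMkernel on the storage subnet (no gateway needed if it's non-routed)
esxcfg-vswitch -A "iSCSI" vSwitch1
esxcfg-vmknic -a -i 10.72.66.10 -n 255.255.255.0 "iSCSI"
```

Because both VMkernel ports sit on their local subnets, only the management traffic ever needs the default gateway; the iSCSI VMkernel reaches the storage array directly on 10.72.66.x.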

AndreTheGiant
Immortal
Immortal
Jump to solution

HA can also work on other VMkernel interfaces... just use the advanced settings.
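For reference, the HA advanced options are set under Cluster Settings > VMware HA > Advanced Options. A sketch of the relevant `das.*` options — the port group names and isolation address are assumptions for this setup, not values from the thread:

```
# Tell HA which port groups to use for heartbeats
das.allowNetwork0 = Management Network
das.allowNetwork1 = iSCSI
# Add an isolation address reachable from the chosen network
das.isolationaddress0 = 10.72.66.1
das.usedefaultisolationaddress = false
```

After changing these options, HA must be reconfigured on each host for them to take effect.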

Andre

Andrew | http://about.me/amauro | http://vinfrastructure.it/ | @Andrea_Mauro
bulletprooffool
Champion

Hi,

We had a similar design, where we used a second NIC for NFS storage.

The solution was to create a second VMkernel port on the host and attach it directly to the storage network (non-routable).

As everything was on the same IP range, no default gateway (DG) was required, and the added bonus was that storage traffic was always 100% isolated and secure.
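A sketch of that design on the host — the vSwitch, uplink, port group name, and addresses here are assumptions, not values from the original setup:

```shell
# Dedicated vSwitch with its own NIC, carrying only storage traffic
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -A "Storage" vSwitch1
# VMkernel port on the same IP range as the filer -- no gateway required
esxcfg-vmknic -a -i 10.72.66.10 -n 255.255.255.0 "Storage"
```

Since the VMkernel port and the filer share a subnet, traffic never touches the default gateway, which is what keeps the storage path isolated.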

One day I will virtualise myself . . .
eeg3
Commander

Alan van Wyk wrote:

Hi,

We had a similar design, where we used a second NIC for NFS storage.

The solution was to create a second VMkernel port on the host and attach it directly to the storage network (non-routable).

As everything was on the same IP range, no default gateway (DG) was required, and the added bonus was that storage traffic was always 100% isolated and secure.

This seems to work. I can actually still ping the storage-network VMkernels even though they have the wrong gateway configured. It seems like this should have broken the external traffic. Do you think this will cause any problems, such as a network loop?

Blog: http://blog.eeg3.net
bulletprooffool
Champion

Nope - worked perfectly for us.

One day I will virtualise myself . . .