VMware Cloud Community
JimBernsteinSV
Enthusiast

Can't ping new vmkernel IP addresses

I created some new bonded iSCSI vmkernel portgroups for our 10G storage on a new vSwitch, but I can't ping either of the IP addresses assigned to them (10.1.45.187 and .188). I have the same setup on another host and it works just fine. I swapped the connections on the 10G switch with the ones from the working host, as well as the cables, so I know it's not a switch port, cable, or switch config issue. I even swapped the IP addresses with the working ones just to make sure.

But if I SSH into another host on the same network, I can ping these vmkernel IP addresses, which I can't do from my desktop. I can also ping the vmkernel IP addresses on the working host from my desktop, which is on the same subnet as the non-working host. From a host on a different subnet I can't ping these vmkernel IPs, but I can ping the working host's vmkernel ports just fine.

I’m sure it’s just something simple that I’m missing… I hope!

a_p_
Leadership

If I understand this correctly, it actually works as designed from a networking point of view. Without routing you will not be able to access IP addresses on a different subnet. What I can think of is that someone created a static route for the iSCSI network on the "working" ESXi host (see VMware KB: Configuring static routes for vmkernel ports on an ESXi host), which, from a security point of view, should not really be done.
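
Just to illustrate what that KB describes, such a route would be checked for and added with something like this (the 10.1.40.0/24 network and the 10.1.45.1 gateway below are placeholders, not addresses from your environment):

~ # esxcfg-route -l                            # list the current VMkernel routes
~ # esxcfg-route -a 10.1.40.0/24 10.1.45.1     # add a static route to a remote network via a gateway on the iSCSI subnet (not recommended for iSCSI)

If the first command only shows the local subnets and the default gateway, no such route exists on that host.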

André

JimBernsteinSV
Enthusiast

The different subnets should be configured to talk to each other on the switches, and as far as I know nothing was configured on the working host. The 1G iSCSI connection on the non-working host works fine, and it's on the same subnet as the 10G connection, so I would think the 10G connection should work as well.

There were no static routes set up for the 10G connection on the working host. I set up the vmkernel ports and their addresses myself, so someone would have had to know beforehand which IPs I was going to use, unless they allowed the whole subnet.

Assuming I am using the right command, here are the routes on the non-working host:

~ # esxcfg-route -l
VMkernel Routes:
Network          Netmask          Gateway          Interface
10.1.43.0        255.255.255.0    Local Subnet     vmk0
10.1.44.0        255.255.255.0    Local Subnet     vmk5
10.1.45.0        255.255.255.0    Local Subnet     vmk1
10.1.48.0        255.255.255.0    Local Subnet     vmk2
169.254.95.0     255.255.255.0    Local Subnet     vmk4
default          0.0.0.0          10.1.43.241      vmk0

And on the working host:

~ # esxcfg-route -l
VMkernel Routes:
Network          Netmask          Gateway          Interface
10.1.43.0        255.255.255.0    Local Subnet     vmk0
10.1.44.0        255.255.255.0    Local Subnet     vmk4
10.1.45.0        255.255.255.0    Local Subnet     vmk1
10.1.48.0        255.255.255.0    Local Subnet     vmk3
default          0.0.0.0          10.1.43.241      vmk0

I notice it doesn't show all the vmk interfaces.
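
For reference, the full list of vmk interfaces can be pulled with:

~ # esxcfg-vmknic -l                       # lists every VMkernel NIC with its IP, netmask and MAC
~ # esxcli network ip interface ipv4 get   # IPv4 settings per vmk via esxcli

I believe esxcfg-route -l only shows one interface per subnet, which would be why the other bound iSCSI vmks don't appear.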

chriswahl
Virtuoso

I created some new bonded iSCSI vmkernel portgroups for our 10G storage on a new vSwitch

You mentioned bonding - do you mean that you are using link aggregation on your iSCSI network adapters? If so, that's something I would avoid.

Additionally, unless you're using port binding, this iSCSI configuration appears to be invalid. You have 4 different vmks on the iSCSI subnet, and the kernel router is only going to select the first one in the routing table. This is why your output shows the route for your 10.1.45.0 network as going over vmk1.
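
For reference, port binding is done per vmk against the software iSCSI adapter, along these lines (vmhba33 here is just an example adapter name - check what yours is called first):

~ # esxcli iscsi adapter list                                     # find the software iSCSI adapter name
~ # esxcli iscsi networkportal add --adapter vmhba33 --nic vmk1   # bind a vmk to the adapter

Each bound vmk also needs a portgroup with exactly one active uplink (all other uplinks set to unused).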

VCDX #104 (DCV, NV) ஃ WahlNetwork.com ஃ @ChrisWahl ஃ Author, Networking for VMware Administrators
JimBernsteinSV
Enthusiast

I am using port binding and have it working fine on 2 other hosts set up the same way. I'll attach some screenshots of the non-working host in question. I watched your video (good stuff) and it appears to be set up the same way I did mine.

chriswahl
Virtuoso

Right on. So the only issue here is that you can't ping the vmk interfaces from your desktop? Are they able to hit the iSCSI storage array?

VCDX #104 (DCV, NV) ஃ WahlNetwork.com ஃ @ChrisWahl ஃ Author, Networking for VMware Administrators
JimBernsteinSV
Enthusiast

I'm not sure what you mean by "hit the iSCSI storage array." I can access LUNs on the new 10G storage from this host, but I don't know if the connection is going over the 10G cards or the 1G cards that are still connected. The 10G and 1G switches are connected, so there is a path to the 10G storage from the 1G switch.

chriswahl
Virtuoso

The storage array should be able to tell you which initiators have logged in. And the ESXi host will show a path for each initiator / target pairing.
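
From the ESXi side, a quick way to check the logged-in sessions:

~ # esxcli iscsi session list              # one session per bound vmk / target pairing
~ # esxcli iscsi session connection list   # should also show the local (vmk) IP used by each connection

If the 10 GbE vmk IPs show up there, those initiators are talking to the array.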

VCDX #104 (DCV, NV) ஃ WahlNetwork.com ஃ @ChrisWahl ஃ Author, Networking for VMware Administrators
JimBernsteinSV
Enthusiast

If it's a single software iSCSI adapter, can the storage tell which network card/vmk is being used? It's HP LeftHand storage. In the storage I see the IQN for the VMware iSCSI initiator, but it doesn't tell me much. I'm new to this storage, so I may be missing something.

I see some volumes on the 10G storage but don't know whether it's using the 10G or 1G cards to connect to them (red marks in the image).

chriswahl
Virtuoso

If it's a single software iSCSI adapter, can the storage tell which network card/vmk is being used?

Indeed. There's one software adapter, but since you bound 4 unique vmks, you have 4 initiators. Each one has its own IP. Most storage arrays will show you all the initiators logged in with a list of their IPs, which is important if you're trying to mask certain IP ranges for various security or control purposes.
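
You can also confirm the four bound vmks (your four initiators) straight from the host, assuming vmhba34 is the software iSCSI adapter:

~ # esxcli iscsi networkportal list --adapter vmhba34   # lists the vmks bound to the software iSCSI adapter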

But I digress. We're in the same boat; I've never used LeftHand storage. Based on your picture, it looks like you have one target configured for each of the 37 LUNs (devices) and 4 paths each (one per initiator). Thus, 148 paths.

The key is the C#:T#:L# value. These are the channel, target, and LUN numbers. For vSphere, each channel # represents one of your iSCSI initiators. Each of your devices should have a connection to C0, C1, C2, and C3 (all four bound vmk interfaces). The three red marks you made show C0, C1, and C2 - I assume there's a C3 hiding down there somewhere? This would indicate that you do, in fact, have four connections that are Active to the storage array - a pair of 1 GbE and a pair of 10 GbE.

If you want to get deep in the weeds, SSH into the trouble host and issue esxcli storage core path list. This will display all of the paths to your block devices; you'd want to look at your paths from vmhba34 (the software iSCSI adapter).
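
The output is fairly long, so narrowing it down helps, for example:

~ # esxcli storage core path list | grep -i vmhba34   # pulls out the runtime names (vmhba34:C#:T#:L#) and adapter lines
~ # esxcli storage core path list -d naa.xxxxxxxx     # full detail for one device (substitute a real device ID)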

As for why you can't ping the interface from your desktop - I'd start by seeing if the router can see the MAC addresses of your 10 GbE vmk interfaces, and if it has a valid way to route traffic back to your desktop. Though iSCSI traffic really should be isolated to the point where only devices on the iSCSI subnet can talk to one another.
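
To grab the vmk MAC addresses for comparison against the switch/router tables:

~ # esxcli network ip interface list   # shows each vmk with its MAC address and the portgroup it sits on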

VCDX #104 (DCV, NV) ஃ WahlNetwork.com ஃ @ChrisWahl ஃ Author, Networking for VMware Administrators
JimBernsteinSV
Enthusiast

After looking around in the storage manager a bit more, I found the initiators for one of the LUNs on the new storage. It shows all 4 of the vmk IP addresses (attachment). So the new 10G connections are working, but why I can't ping them is another story.

Like I mentioned, I swapped cables, switch port connections, and IP addresses with ones I could ping, and still no go. And I can ping the new 10G vmk addresses on other hosts that are set up the same way. Could it just be something funky with the vSwitch, where I should just delete it and make a new one? I can use vmkping from a different host to ping the vmk IP address successfully.
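
For anyone following along, the test from the other host is basically this (vmk1 being the iSCSI vmk on that host - adjust to suit):

~ # vmkping 10.1.45.187                    # plain ping of the vmk IP from a host on the iSCSI subnet
~ # vmkping -I vmk1 10.1.45.187            # on 5.1/5.5 you can force the outgoing vmk with -I
~ # vmkping -d -s 8972 10.1.45.187         # don't-fragment, jumbo-sized packets, worth trying if the iSCSI network runs MTU 9000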

As I was typing this I decided to try it again, and I can't ping the original 1G vmk IP addresses on that host either. This host is in a new vCenter 5.5 environment that I am setting up, while the others that I can ping are in our current/production 5.0 environment. I removed this host from the 5.0 environment and added it to the 5.5 one without wiping it or starting over; I just added it and upgraded it to 5.5. Could that have something to do with it? I also tried another host that I stole from the 5.0 environment and can't ping that either. Nothing was changed on these hosts except for moving them over to 5.5 and upgrading them.

I have a non-upgraded host that I moved over as well, and pinging its iSCSI vmk IPs works, but it's attached to different storage on a different subnet/location, so I don't know if that tells us anything or not.

chriswahl
Virtuoso

As long as your iSCSI connections are Active and show in the storage array, you're good to go.

Pinging a vmk for storage traffic isn't all that common (or necessary) from non-iSCSI network(s). Likely the host simply has no way to route traffic to you, since the configured default gateway is 10.1.43.241.
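
You can double-check which way the host would send those replies:

~ # esxcli network ip route ipv4 list   # the 5.5 equivalent of esxcfg-route -l

Per your earlier output, everything off the local subnets goes out via 10.1.43.241 on vmk0, not the 10 GbE vmks.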

VCDX #104 (DCV, NV) ஃ WahlNetwork.com ஃ @ChrisWahl ஃ Author, Networking for VMware Administrators
JimBernsteinSV
Enthusiast

Yeah, it just comes in handy for testing connectivity. So you don't think moving and upgrading the hosts did something to them? I still have other hosts to move, so after I import them but before I upgrade them to 5.5, I'll see if I can ping them; that way I'll know whether it's the upgrade or not.

JimBernsteinSV
Enthusiast

Don't ask me why I didn't find this before, but here is the answer to my question. Can I give myself points?

VMware KB: Change to ICMP ping response behavior in ESXi 5.1 and ESXi 5.5

pgoggins
Enthusiast

Saw this as well when trying to set up some external monitoring of the ESXi hosts (this also affects ESXi 6.0).

If you want this functionality, please fill out the product request form: http://www.vmware.com/company/contact/contactus.html?department=prod_request. Hopefully they will get the hint that it is useful for troubleshooting and third-party monitoring.

----------------------------------------------------------- Blog @ http://www.liquidobject.com