VMware Cloud Community
toddysm
Contributor

ESXi 5.5 Cannot connect to Management Network

Hello Everybody,

I am brand new to VMware - I just installed ESXi 5.5 on two blade servers and am trying to configure them and connect them to the vSphere Client, but I am experiencing network connectivity issues. No matter what configuration I set up for the Management Network, I am not able to ping the servers or ping any of the other machines on my network. My network is in the 192.168.1.x range and there is a DHCP server running. If I configure ESXi to use DHCP, I get an error that it couldn't connect to the DHCP server. If I configure it to use a static IP, I am still not able to ping any other machines.
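
For reference, here is roughly how I set the static IP from the ESXi shell (vmk0 and the 192.168.1.x addresses are just my values - adjust as needed):

# set a static IPv4 address on the management VMkernel interface
esxcli network ip interface ipv4 set -i vmk0 -t static -I 192.168.1.50 -N 255.255.255.0
# set the default gateway
esxcfg-route 192.168.1.1
# test reachability from the VMkernel network stack
vmkping 192.168.1.1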

I also tried using a 10.1.0.x network by configuring the host with its second adapter, but that didn't work either.

The machines themselves are working fine, because if I install Windows or Linux on them they are able to connect to the network. Are there any specifics to configuring the management network on ESXi hosts?

Both machines are experiencing the same issue. Logs do not yield any useful information.

Best,

Toddy

13 Replies
a_p_
Leadership

Welcome to the Community,

Please provide some more information about the environment (blade vendor/model, chassis) and how you configured the Management Network during installation. Blades usually come with at least 2 network adapters. Can you confirm that both of them are properly connected and that the physical switch ports are configured correctly? If only one of them is set up, did you select the correct one during installation? You may also try changing the Management Network's uplink (vmnic) from the ESXi host's DCUI to see whether this helps.
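
If you prefer the command line over the DCUI, a rough sketch for checking the NICs and moving the uplink (vmnic0/vmnic1 and vSwitch0 are assumptions based on a default installation):

# check link state of all physical NICs
esxcli network nic list
# add the second NIC as an uplink to the standard switch
esxcli network vswitch standard uplink add -u vmnic1 -v vSwitch0
# optionally remove the old uplink once vmnic1 is confirmed working
esxcli network vswitch standard uplink remove -u vmnic0 -v vSwitch0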

André

Geoff_Rose
Enthusiast

If you click on one of the hosts, then Configuration > Networking, you should see your standard switch. You can then go into its properties and check the settings for the Management Network.

My initial thought was that the network on the blade chassis switches wasn't set up properly, but that seems unlikely since Windows/Linux can connect to the network.

Are you able to ping the gateway from the hosts?

Are there any VLANs in use?
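
For example (192.168.1.254 is a placeholder for your gateway, and VLAN 10 is a placeholder VLAN ID):

# ping the gateway from the host's VMkernel interface
vmkping 192.168.1.254
# list port groups and their VLAN IDs
esxcli network vswitch standard portgroup list
# tag the Management Network port group with a VLAN if the switch port is a trunk
esxcli network vswitch standard portgroup set -p "Management Network" -v 10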

warring
Enthusiast

I would start with the blade chassis switch.

VCP510-DCV
toddysm
Contributor

Thank you guys for the suggestions!

@Andre: The blades are custom built with 2x Intel Xeon CPUs/16GB RAM and 2x Intel 82573L NICs (no brand name)

@Geoff: I am not able to ping anything from the host - every ping fails, both outbound from the hosts and from other machines on the network to the hosts.

@warring: what do you mean by the "blade chassis switch"?

I updated the NIC driver with net-igb-4.2.16.8-1OEM.550.0.0.1198611.x86_64.vib, assuming that the issue was with the driver. It turned out that didn't help. Now I am trying to figure out whether I can find something more useful in the logs, but to be honest the interface is far from user friendly.
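
For reference, this is roughly how I installed the driver (the /tmp path is just where I copied the file; I rebooted afterwards):

# install the driver VIB from a local path, then reboot
esxcli software vib install -v /tmp/net-igb-4.2.16.8-1OEM.550.0.0.1198611.x86_64.vib
# check the kernel log for NIC/driver messages
tail -n 50 /var/log/vmkernel.log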

warring
Enthusiast

With all the IBM blade chassis (not sure about the others) there is a Cisco switch with ports for each blade slot that need to be enabled. You can find the switch IP from the blade management console and then SSH to it. Or am I talking nonsense?? So the machines running on that host can talk to other machines, and the host is seen by Virtual Center?

VCP510-DCV
toddysm
Contributor

warring

Here is what esxcli network vswitch standard list returns:

Name: vSwitch0
Class: etherswitch
Num Ports: 1536
Used Ports: 4
Configured Ports: 128
MTU: 1500
CDP Status: listen
Beacon Enabled: false
Beacon Interval: 1
Beacon Threshold: 3
Beacon Required By:
Uplinks: vmnic0
Portgroups: VM Network, Management Network

toddysm
Contributor

I am actually talking about the host - the host machine cannot connect to the network. I don't have any VMs yet because I cannot manage the host (i.e., connect it to Virtual Center or the vSphere Client).

Geoff_Rose
Enthusiast

OK, so it sounds like each blade is individually wired to your switches? Usually (this varies depending on the vendor though - UCS is different) the blades have internal connections to 2 switches that live in the chassis, which you then connect to your other switches - it cuts down on cabling since you don't cable all the blades to the switches.

It looks like the host is seeing the NIC, so the driver should be OK.

So when you installed Windows/Linux on them, could you ping them from another computer that is not part of the blades?

toddysm
Contributor

@Geoff: Correct - if I install Windows on the blades I can ping them from the rest of the network (as well as ping the rest of the network). If I install ESXi 5.5, though, I cannot. Attached are a few logs that I managed to transfer from the blades, in case they are helpful.

Alistar
Expert

Hello,

Can you post a screenshot of your virtual switch config? Another helpful bit would be the CDP information that is uncovered by pressing the "chat bubble" icon next to your vSwitch config. The output of the "esxcli network nic list" command via SSH would be nice to have here as well.
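
For example, from an SSH session on the host (vmnic0 is an assumption - substitute your NIC name):

# list all physical NICs with link status, driver and MAC
esxcli network nic list
# detailed driver/firmware information for a single NIC
esxcli network nic get -n vmnic0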

I am quite baffled by a message in vmkernel.log:

DMA: 612: DMA Engine 'vmnic0' created using mapper 'DMANull'.

I don't think that 'Null' is the right mapper for Direct Memory Access. Are you sure your e1000e drivers and the Intel NIC firmware are properly up-to-date?

2015-01-05T22:14:44.540Z cpu7:33333)<6>e1000e: Intel(R) PRO/1000 Network Driver - 1.1.2-NAPI
2015-01-05T22:14:44.540Z cpu7:33333)<6>e1000e: Copyright(c) 1999 - 2009 Intel Corporation.
2015-01-05T22:14:45.047Z cpu7:33333)<6>0000:04:00.0: vmnic0: Intel(R) PRO/1000 Network Connection
2015-01-05T22:14:45.228Z cpu7:33333)<6>0000:05:00.0: vmnic1: Intel(R) PRO/1000 Network Connection
2015-01-05T22:14:44.540Z cpu7:33333)PCI: driver e1000e is looking for devices
2015-01-05T22:14:44.540Z cpu7:33333)DMA: 612: DMA Engine 'vmklnxpci-0:4:0.0' created using mapper 'DMANull'.
2015-01-05T22:14:44.540Z cpu7:33333)DMA: 612: DMA Engine 'vmklnxpci-0:4:0.0' created using mapper 'DMANull'.
2015-01-05T22:14:44.540Z cpu7:33333)DMA: 612: DMA Engine 'vmklnxpci-0:4:0.0' created using mapper 'DMANull'.
2015-01-05T22:14:44.540Z cpu7:33333)DMA: 657: DMA Engine 'vmklnxpci-0:4:0.0' destroyed.

I somehow don't think it is coping well with the '09 drivers on the newest ESXi revision.
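
To double-check which driver and firmware versions are actually loaded, something like this should work over SSH (vmnic0 assumed):

# driver name, driver version and firmware version of the NIC
esxcli network nic get -n vmnic0
# list installed driver VIBs
esxcli software vib list | grep -i e1000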

Edit: some posts I have found pertain to the use of SR-IOV or NPIV - are you sure you have these disabled in the BIOS/EFI?

Stop by my blog if you'd like 🙂 I dabble in vSphere troubleshooting, PowerCLI scripting and NetApp storage - and I share my journeys at http://vmxp.wordpress.com/
toddysm
Contributor

OK, as it turns out the hardware doesn't seem to be supported by ESXi 5.5 (some missing drivers or something that nobody can figure out). I installed ESXi 5.0 and it seems to be working fine. Thank you all for the good suggestions - at least they were helpful for a faster ramp-up 🙂

Geoff_Rose
Enthusiast

If you have support, it might be worth a call to discuss it with VMware - they might be able to check and advise you whether a working driver is coming, or tell you if there is a workaround to get it working on 5.5.

Alistar
Expert

Usually VMware searches the vendor's repositories for drivers or recommends using a branded, partner-supported image. If this blade is custom-built, I guess there is little VMware could have done except perhaps find a newer driver.

For a "community-supported" NIC driver you could have used GLRoman's driver Net-e1000e - V-Front VIBSDepot Wiki (works like a charm), but since you already solved this by reverting to an earlier version, there is not much to discuss further Smiley Happy

Stop by my blog if you'd like 🙂 I dabble in vSphere troubleshooting, PowerCLI scripting and NetApp storage - and I share my journeys at http://vmxp.wordpress.com/