surfup
Enthusiast

IBM HS21 NIC Problem?

Hi all,

I have an IBM BladeCenter T chassis with three HS21 blades running ESX 3.0.2 build 61618 (the latest build?). Each blade is configured with two dual-port NICs: the internal dual-port NIC is a Broadcom NetXtreme II BCM5798 and the external dual-port NIC is a NetXtreme BCM5704S.

I configured the first port of the internal dual-port NIC for the Service Console and iSCSI on vSwitch0. For the virtual machine network on vSwitch1, I bonded the second internal port and the first external port. The second external port is dedicated to VMotion.
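For reference, here is roughly how the layout can be checked from the service console; the vmnic numbering below is only how I assume the ports map, so treat it as a sketch rather than my exact output:

    esxcfg-nics -l      # list the physical NICs and their link state
    esxcfg-vswitch -l   # list vSwitches, port groups and which vmnics are uplinks

    # What I expect to see, roughly:
    # vSwitch0 - uplink vmnic0 (Service Console + iSCSI)
    # vSwitch1 - uplinks vmnic1, vmnic2 (virtual machine network, bonded)
    # vSwitch2 - uplink vmnic3 (VMotion)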

I created VM1 on the ESX1 host and cloned it to VM2 on the ESX2 host. Both VMs are connected to vSwitch1. We have some custom applications that we push from VM1 to VM2, such as configuring VM2 as a Windows domain controller, installing ISA 2006, SQL 2000, and so on. The problem is that during the installation process, whenever we need to reboot VM2 we lose network connectivity, meaning I cannot ping the VM's IP address. I have to reboot the VM a second time before I can ping its IP address again.

I first noticed this "reboot the VM twice to make it pingable" behavior when I cloned the VM. At the time I wasn't sure why, but now it is becoming a real problem because it stops the application install process.

I checked the NIC settings and they all look good. The one thing I am not certain about is the way I bonded the NICs on vSwitch1 for the virtual machine network.

Is there a NIC log file I can check? Any other suggestions or advice would be greatly appreciated.

PS: This is my first time working with IBM blades, and I already don't like them. :)

Cheers,

FredPeterson
Expert

This doesn't sound like a problem with the blade at all; if it were, you'd need to restart the networking service in the service console, reboot the host, or, even worse, restart the BladeCenter switch module. More likely it's something odd with the spanning tree delay as the topology change passes through the BladeCenter switch modules to the core switches, since you're bouncing through two layers of switching: first the BladeCenter module, then the core. It could also be a misconfiguration of the EtherChannel-type setup on the core switches, so that traffic from the BladeCenter modules isn't handled correctly.
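If it is spanning tree, the first thing I would look at is whether the core switch ports facing the BladeCenter uplinks are set to portfast/trunkfast; otherwise every link bounce sits in listening/learning for 30+ seconds. Something along these lines on the Cisco side, assuming IOS, and with the interface name only an example:

    show spanning-tree interface GigabitEthernet0/1
    configure terminal
     interface GigabitEthernet0/1
      spanning-tree portfast trunk   ! only on a port you are sure faces a host/edge device

Don't take that as gospel for your environment, it's just where I would start looking.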

When you bond NICs, as long as they are on the same subnet it shouldn't matter, but in my opinion you generally want to bond the NICs that are meant to be redundant. So bond both internal ports together and both external ports together, and try to steer away from bonding NICs that are connected to the same physical switch.
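If you do want to shuffle which physical NICs back which vSwitch, it's just link/unlink operations from the service console; the vmnic numbers below are placeholders, so check esxcfg-nics -l for your actual mapping first:

    esxcfg-vswitch -U vmnic2 vSwitch1   # unlink a pNIC from the vSwitch
    esxcfg-vswitch -L vmnic3 vSwitch1   # link a different pNIC as an uplink
    esxcfg-vswitch -l                   # confirm the uplink list afterwards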

When the VM isn't pingable from a non-VM server on the same subnet, can another VM on the same vSwitch still ping it?

surfup
Enthusiast

Thanks for the info.

As I read your reply, one thing that struck me is that both of the bonded NICs are connected to the same external Cisco switch. Could this be causing the problem? Note that all four NICs are connected to the same external switch, since we are working in the lab at the moment. Any suggestions?

Anyway, for testing I deleted the vSwitch that was connected to the bonded NICs and created two vSwitches, each bound to one of the NICs. From a Windows command prompt on the same IP subnet, I pinged the VM's IP address with the -t option so I could continuously see whether the host replies. In VirtualCenter I then switched the VM between the two vSwitches: on the vSwitch connected to the internal NIC1 the VM is not pingable, but as soon as I switch it to the vSwitch connected to the external NIC2 I can ping the VM.
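For reference, the test setup was roughly the following; the vSwitch and port group names are just the ones I made up for the test:

    esxcfg-vswitch -a vSwitchTest1
    esxcfg-vswitch -L vmnic1 vSwitchTest1
    esxcfg-vswitch -A "Test PG 1" vSwitchTest1
    esxcfg-vswitch -a vSwitchTest2
    esxcfg-vswitch -L vmnic2 vSwitchTest2
    esxcfg-vswitch -A "Test PG 2" vSwitchTest2

and from the Windows box on the same subnet:

    ping -t <vm-ip-address>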

This concerns me, because the internal NIC0 is connected to vSwitch0, which carries the Service Console and iSCSI, and those are working fine at the moment. I also could not ping the external NIC3, which is dedicated to VMotion.
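Since the VMotion interface is a VMkernel port rather than part of the VM network, I assume the right way to test it from the host itself is something like this (the IP is just a placeholder):

    esxcfg-vmknic -l                  # list VMkernel NICs and the VMotion IP
    vmkping <other-host-vmotion-ip>   # ping through the VMkernel stack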
