VMware Cloud Community
VirtualMikeS
Contributor

Dell 1955, 9 blades, VMs don't see network

I have a Dell 1855 chassis containing nine 1955 blades. Each of the nine blades has ESXi 4.1 (build 260247) installed and a single IP address, although each blade has two NICs.

All 18 NIC ports are plugged into a single unmanaged Dell 24-port switch.

All nine ESX hosts are accessible over the network, and I connect to and manage each with vSphere Client 4.1.0, build 258902.

I used vSphere Converter to migrate 4 VMs from an ESX 2.5 host to the first blade in the chassis, and those 4 VMs are running fine.

The problem is that any VM I build on the other 8 hosts gets no network connectivity. The VMs build without problems and have functional virtual NICs, but ifconfig on the Linux machines reports the network as unreachable, and a Windows 7 machine sees nothing beyond its own NIC.

If I unregister the machine from a blade and re-register it on Blade 1 (no vMotion, just shared storage), it comes up on Blade 1 and has full network connectivity.

From each ESX host's SSH command line, DNS works and I can ping any host on the network or internet. But a VM on any blade except Blade 1 has no network connectivity at all.

Any tips on what to look for?

Each host is configured identically, successfully uses Active Directory for vSphere authentication, and uses NTP for time synchronization. I tried setting the VM Network virtual switch on a few of the blades to different VLAN IDs (from the default of 1), to no avail.
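In case it helps with suggestions: from each host's SSH shell I can run the standard commands to compare Blade 1's virtual networking with the other blades, e.g.:

esxcfg-nics -l       # physical NICs: link state, speed and duplex
esxcfg-vswitch -l    # vSwitches, port groups, VLAN IDs and uplinks
esxcfg-vmknic -l     # VMkernel (management) interfaces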

7 Replies
logiboy123
Expert

You could try using host profiles to push the networking configuration from host 1 to the other hosts and see whether it wants to make any changes.

If you don't have Enterprise Plus licensing but this is a new environment, you can still try the above:

1) Remove the license key from your environment. This should revert you to the grace period with full functionality.

2) Right click on host 1 and create a profile from it.

3) Right click on host 2 and add it to the profile created from host 1.

4) Put host 2 into maintenance mode and apply the profile. A list of changes will appear, and once the host is out of maintenance mode this may resolve your issue.

Another troubleshooting step would be to plug a different host into the same network cables you are using for host 1. That would rule out a cabling or switch configuration issue.

I would also confirm how the networking is set up on the chassis. I have never used a Dell chassis before, but I presume the NICs are configured through a console of some sort. It could be something as simple as not having added them to a group or defined them as part of a trunk, etc.

VirtualMikeS
Contributor

Thanks for those tips.

This morning I set the VLAN ID on vSwitch0 to 4095, and the VMs on that blade were then able to reach the network.

I then did the same on each blade in the chassis, and networking now works for all VMs. I don't know whether this is particular to my setup of a single blade chassis with 18 ports going to a single unmanaged switch, but these are the only ESX hosts where I've had to set the VLAN to 4095 to get the VMs to see the network.
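For the record, the equivalent change from a host's SSH shell would be something along these lines (assuming the default port group name "VM Network" on vSwitch0; adjust the names to match your own configuration):

esxcfg-vswitch -l                                 # confirm the vSwitch and port group names
esxcfg-vswitch -v 4095 -p "VM Network" vSwitch0   # set the port group VLAN ID to 4095 (pass all VLANs)
esxcfg-vswitch -l                                 # verify the change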

logiboy123
Expert

It sounds like you are using the Virtual Guest Tagging (VGT) VLAN method. Best practice is to use Virtual Switch Tagging (VST). Please see the following article for more information:
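As a rough illustration of the difference (the VLAN ID of 10 and the port group name below are only placeholders), the two methods differ only in the VLAN ID on the port group:

esxcfg-vswitch -v 4095 -p "VM Network" vSwitch0   # VGT: port group passes all VLANs and the guests do the tagging
esxcfg-vswitch -v 10 -p "VM Network" vSwitch0     # VST: the vSwitch tags/untags traffic with VLAN 10, guests stay untagged

With VST the physical switch ports the uplinks connect to need to be trunk ports carrying the VLANs you use.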

Regards,

Paul Kelly

VirtualMikeS
Contributor

Informative post, thanks.

It doesn't describe how to enable VST, but it implies VST is configured on the physical switch the ESX hosts are patched into, using any VLAN ID other than 4095. Would I configure those physical switch ports as trunk ports rather than access ports?

In my case, these blades are patched into an unmanaged switch, so I have no way of configuring those ports. Would I configure the next-hop switch to enable trunking on the port the unmanaged switch is patched into?

logiboy123
Expert

As I understand it, if you set a VLAN on the managed-switch port that is the uplink for your unmanaged switch, then all ports on the unmanaged switch will carry that VLAN.

If you want to separate your networking out using VLAN tagging, which is definitely the recommended approach, then I'm fairly confident you will need to buy a managed switch.

You could keep the unmanaged switch for, say, iSCSI networking and move everything else to a managed switch. If you end up needing more than one port group for VM networks that use different VLANs, then there is no point using the unmanaged switch for that traffic.

Regards,

Paul Kelly

logiboy123
Expert

Two NICs per host is not a lot. To get any redundancy at all you would have to create a single vSwitch per host and run Management, vMotion, Fault Tolerance, iSCSI and VM networking from that one vSwitch. This is definitely not the recommended approach.

I would have:

Management/vMotion/FT - vSwitch0 - 2 NICs

iSCSI - vSwitch1 - 2 NICs

VM Networking - vSwitch3 - 2 NICs

If you cannot get the extra NICs, then you absolutely should be using VLAN tagging from a managed switch. I use VLAN tagging at all times; it adds another layer of security for very little effort.
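As a very rough sketch, building that sort of layout from the ESXi command line would look something like the following (the vmnic numbers, port group name and VLAN ID are only examples; substitute whatever matches your hardware):

esxcfg-vswitch -a vSwitch1                  # create a new vSwitch for storage
esxcfg-vswitch -L vmnic2 vSwitch1           # attach the first uplink
esxcfg-vswitch -L vmnic3 vSwitch1           # attach the second uplink for redundancy
esxcfg-vswitch -A "iSCSI" vSwitch1          # add a port group for the storage VMkernel NIC
esxcfg-vswitch -v 20 -p "iSCSI" vSwitch1    # tag the storage traffic with its own VLAN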

Regards,

Paul Kelly

logiboy123
Expert

Actually I realise I may have confused several issues here.

First off, if you are using shared networking infrastructure for storage, management and VM networking, then there is not much point in having separate vSwitches. After all, once any traffic hits a physical switch it is on a shared medium anyway.

However, in a lot of my engagements the management layer is physically separated from the rest of the environment, and in that scenario it is worth using a separate vSwitch.

Further, when using jumbo frames for iSCSI I set the MTU at the vSwitch level, so that's why I separate out the storage vSwitch, regardless of whether I have shared networking infrastructure or not.
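On ESXi 4.1 I set that from the command line, roughly like this (the vSwitch name, port group and IP address are only placeholders, and the VMkernel NIC has to be created with the jumbo MTU):

esxcfg-vswitch -m 9000 vSwitch1                                   # raise the vSwitch MTU to 9000
esxcfg-vmknic -a -i 10.0.1.21 -n 255.255.255.0 -m 9000 "iSCSI"    # create the iSCSI VMkernel NIC with MTU 9000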

Lastly I always use VLAN tagging, shared networking infrastructure or not.

Regards,

Paul Kelly
