I know this is more a Dell question than a VMware question, but I figure some of you are using the same blade hardware as me and may be able to provide a quicker answer than Dell. We have a new M1000e blade enclosure containing four M600 servers (each with six NICs) and six M6220 interconnect switches. The plan is to dedicate two of the switches to iSCSI and connect the other four back to our core switch, aggregating all four external links on each switch. We plan to use those four switches for the VM data network, service console, and VMotion (using VLANs).
We have configured link aggregation on the M6220s back to the ProCurve 5412 core switch, and the link is up. All of the internal switch ports, as well as the LAG, are set to operate in trunk mode, and we have created VLANs for the service console (20) and VMotion (10). These VLANs are tagged on the internal ports and on the LAG uplink. On the HP side, the ports that connect to the M6220s are untagged for the default VLAN (1) and tagged for our other VLANs (10, 20) - the same port configuration we use for other VMware hosts that connect directly to the HP switch.
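For reference, this is roughly what we have on each side. The port names, channel-group number, and trunk group are illustrative (I'm writing the syntax from memory, so treat this as a sketch rather than a verified config):

```
# --- Dell M6220 (PowerConnect-style CLI), internal blade-facing port ---
interface ethernet 1/g1
  switchport mode trunk                     # untagged on VLAN 1, tagged for the rest
  switchport trunk allowed vlan add 10,20   # tag VMotion (10) and service console (20)
exit

# --- M6220 LAG uplink to the core ---
interface port-channel 1
  switchport mode trunk
  switchport trunk allowed vlan add 10,20
exit

# --- HP ProCurve 5412 side (ports A1-A4 in the trunk are illustrative) ---
trunk A1-A4 trk1 lacp    # aggregate the four uplinks
vlan 1
  untagged trk1          # default VLAN untagged
vlan 10
  tagged trk1
vlan 20
  tagged trk1
```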
Unless I have something configured incorrectly, this setup should let me specify the VLAN ID in VMware, and the tagged traffic should pass through the M6220 switches to our core switch, which then routes between the VLANs.
When I install VMware ESX on one of the blades, I specify the first NIC for the service console, set VLAN ID 20, and assign static IP addressing for that subnet. But when I try to ping the host from a workstation, I can't reach it - the requests just time out.
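On the ESX side, I've been checking and setting the service console VLAN from the console. This is a sketch assuming ESX 3.x defaults (vSwitch0 and a port group named "Service Console"); adjust the names if yours differ:

```
# List vSwitches, port groups, and their VLAN IDs
esxcfg-vswitch -l

# Set VLAN 20 on the Service Console port group
# ("Service Console" on vSwitch0 is the ESX 3.x default)
esxcfg-vswitch -v 20 -p "Service Console" vSwitch0

# Verify the console interface, then try pinging the gateway from the host
esxcfg-vswif -l
ping <gateway-ip>
```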
Is there anyone out there with a similar configuration (Dell blades with M6220s using VLANs) who can tell me what I'm doing wrong? I'm fairly sure the VLAN configuration on the M6220s is the problem, but I can't pinpoint it...
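In case it helps, these are the checks I've been running on each side to verify the VLAN membership (again from memory, so the exact syntax may be slightly off):

```
# On the M6220s - confirm VLANs 10 and 20 are tagged on the internal
# ports and on the port-channel uplink
show vlan
show interfaces switchport ethernet 1/g1

# On the ProCurve 5412 - confirm the trunk is up and tagged correctly
show trunks
show vlans
```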