VMware Cloud Community
mtsm
Contributor

HP C7000 + ESX 3.5 + GBE2C

Hi guys, I need a little help. My company just bought a new c7000 fully populated with sixteen small blades, each with 2 internal NICs and two mezzanine cards for HBAs and NICs, so each blade will have 4 NICs and 2 HBAs. Of course they didn't involve my team in buying this equipment, so I was thinking of the following. The first enclosure (where the sixteen blades are) has 4 GbE2c switches with 16 ports each:

two NICs in an EtherChannel for SC + VMkernel (VLAN tagging enabled, SC and VMkernel in distinct VLANs)

two NICs in an EtherChannel for VM traffic (many VLANs passing through)
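
Roughly what I had in mind on the ESX side for the first pair, just to show the VLAN tagging part (the VLAN IDs, vmnic numbers, and IP here are only examples, not our final ones):

esxcfg-vswitch -L vmnic1 vSwitch0                    # add the second NIC of the pair to vSwitch0
esxcfg-vswitch -v 21 -p "Service Console" vSwitch0   # tag the SC portgroup into its own VLAN
esxcfg-vswitch -A "VMkernel" vSwitch0                # add a VMkernel portgroup
esxcfg-vswitch -v 60 -p "VMkernel" vSwitch0          # tag VMkernel into a distinct VLAN
esxcfg-vmknic -a -i 10.0.60.11 -n 255.255.255.0 "VMkernel"   # example IP for the vmkernel interface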

So the HP technicians powered on the blades and gave me access. When I look at the port mapping I see a little problem: each NIC is connected to a different switch (NIC 0 to switch 0, NIC 1 to switch 1, NIC 2 to switch 2, NIC 3 to switch 3), so my idea of creating the EtherChannel/trunk is gone, since I can't aggregate NICs that land on different switches. Does anyone know if it's possible to change that mapping logically, so I can put NIC 0 and NIC 1 on the same switch? I didn't find anything in the manuals, and on the console I can only view it. I am attaching an image to show what I am talking about.

Does anyone have an idea for a better architecture than the one I was proposing?

Here is the link to the image in case the attachment doesn't work:

CiscoKid
Enthusiast

Me again. I would do the following: configure vSwitch0 with Service Console and VMkernel portgroups and assign it one NIC of each type (onboard and mezzanine). Then create vSwitch1 with the remaining NICs and assign all of your Virtual Machine Network portgroups to that switch. As for EtherChanneling, you are correct, you cannot do it at the access layer (server to switch) here. However, you can have multiple ISLs that can be EtherChanneled at the distribution layer. How many VMs do you plan on running per server?
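
From the service console it would look something like this (your vmnic numbering will almost certainly differ, so treat these names and the VLAN ID as placeholders):

esxcfg-vswitch -L vmnic0 vSwitch0                      # onboard NIC for SC/VMkernel
esxcfg-vswitch -L vmnic2 vSwitch0                      # mezzanine NIC for SC/VMkernel
esxcfg-vswitch -a vSwitch1                             # new switch for VM traffic
esxcfg-vswitch -L vmnic1 vSwitch1                      # remaining onboard NIC
esxcfg-vswitch -L vmnic3 vSwitch1                      # remaining mezzanine NIC
esxcfg-vswitch -A "VM Network VLAN35" vSwitch1         # one portgroup per VM VLAN
esxcfg-vswitch -v 35 -p "VM Network VLAN35" vSwitch1   # tag it
esxcfg-vswitch -l                                      # verify the layout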

mtsm
Contributor

Hi man, thanks again for your answer.

So isn't there a way to change that mapping so I can have one internal NIC and one mezzanine NIC on the same switch and create my channel? My concern is about balancing: with no channel/trunk on the switch side, won't I have performance/balancing issues?

So for the config you talked about, vSwitch0 would get SC + VMkernel with one internal NIC and one mezzanine NIC, and load balancing based on virtual port ID, right? Same for the other two NICs.

For the ISLs I will work with the networking team to get as many links as I can; I was thinking of the fully meshed architecture, like this:

We are planning on about 160 VMs for the 16 blades, so about 10 per host. Each blade has 32 GB of RAM and two quad-core processors.

Thanks for the EVA tips.

CiscoKid
Enthusiast

Okay, so you have BL480c blades? If that is the case you should have the following mapping if you are using Cisco 3020s for ICB1&2:

Server1 in Bay1/9 will have ports G0/1 and G0/9 in ICB1&2

Server2 in Bay2/10 will have ports G0/2 and G0/10 in ICB1&2

Server3 in Bay3/11 will have ports G0/3 and G0/11 in ICB1&2

....

CiscoKid
Enthusiast

My bad, I researched your switches and it looks like these are basic switches and not Cisco. In this case you will not be able to get all 4 NICs connected to ICB1&2; you have to spread them across multiple interconnect bays. We use the Cisco 3020s and they allow us to use all 4 ports on ICB1&2. What happens to the iLO port map if you remove the other 2 switches? Does it see the other NICs as disconnected?

CiscoKid
Enthusiast

As far as the number of VMs per host goes, I think you will be fine with the 2 x 1 Gb ports you will use on vSwitch1 without EtherChannel to the access layer.
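
Rough math, just as a sanity check: 10 VMs spread by virtual port ID across 2 x 1 Gb uplinks works out to about 5 VMs per uplink, so even if every VM pushed traffic at once each would still average around 200 Mbps, which is plenty for typical workloads.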

mtsm
Contributor

It's an HP GbE2c switch (Nortel-based, I think). Take a look at this image: http://xs140.xs.to/xs140/09262/myblade156.png. So I have:

Server 1 embedded NICs: bay 1 port 1 / bay 2 port 1

Mezzanine slot 1 (HBAs): bay 3 port 1 / bay 4 port 1

Mezzanine slot 2 (NICs): bay 5 port 1 / bay 6 port 1

Bays 7 and 8 are empty. I really can't believe I can't change that mapping; this machine sucks for virtualized environments.

mtsm
Contributor

I really didn't find any option to remove switches or change the port mapping. I will try to read some more tomorrow and let you know. Thanks again for the tips.

CiscoKid
Enthusiast

Well, it's not the server, it's the interconnect switch that is the limitation. I really enjoy BL480c blades as ESX hosts; they provide an excellent platform for a mid-range, 2-CPU ESX host. Best bang for the buck. We started out with 8 BL480c blades configured with 32 GB RAM with the intention that the consolidation ratio was going to be 10:1. Our initial scope was to virtualize 166 physical servers on 16 BL480c servers, and we found that the consolidation ratio is more like 15:1 to 17:1 depending on workloads. We started running out of memory before CPU, so we upgraded memory to 48 GB and everything is nice and stable. We have not implemented 8 of the 16 hosts yet, but when we do we will employ them as a separate HA/DRS cluster and make sure to stagger the physical servers across 3 c7000 chassis so we are protected from a chassis failure.

sclingan
Contributor

I have the same setup pretty much.

You can't use EtherChannel, but your VMs will split across the NICs in the VM vSwitch, so you get 2 x 1 Gb for your VMs. By default ESX balances based on the originating virtual port ID, so each VM is assigned to a particular NIC, and therefore a particular interconnect switch, and gets a share of at most 1 Gb.

I used to run rack-mount 4-CPU boxes with heaps more NICs than two, and those were trunked. But that might have been overkill; I find the BL460 goes okay for bandwidth with only 2 Gb and maybe 16 VMs max. I typically get about 10 VMs per blade.

As I think was mentioned, you can trunk out of each Nortel switch into the rest of your network infrastructure to give more bandwidth to the entire chassis.

If you run a lot of esx hosts in that chassis you will need this.

This basic config gives you link-failure redundancy for the NICs in your vSwitch, but since the link is in the backplane it really only gives you redundancy for when an interconnect switch fails. Upstream path failures will not cause the vSwitch to swap to the other NIC, as I discovered recently.

If you want more redundancy it gets a little complicated. There are a few options.

In one scenario you need to enable the crossbar between the switches (ports 17 and 18) and enable STP. Note this will mean you are only using the trunk from one of the interconnects to provide all your upstream bandwidth. If that trunk fails, STP will cut you over to the other trunk coming out of the other interconnect.

ESX hosts connected to the interconnect switch with the inactive trunk will reach upstream via the crossbar to the interconnect with the active trunk.

In this scenario your trunk might need to be 3 or 4 ports to get the required bandwidth.

If you need more than 2 x 1 Gb of bandwidth for your ESX host then you can use 10 Gb Ethernet, I guess (the Flex-10 option), but 2 x 10 Gb is a lot of traffic for a 2-CPU blade, and as mentioned you're going to run out of memory or CPU before you run out of bandwidth.

Oh, there is a quad port card option but you need another two interconnect switches.

There's a doco from HP on how to configure your network for redundancy; I can't find it though.

mtsm
Contributor

Since you have the same setup as me, let me ask you one thing. I want to create an internal VLAN for my VMkernel traffic just inside the GbE2c, so I created a VLAN 60 inside the switch. To make it work between two switches I just have to associate VLAN 60 with ports 17 and 18, right? But to communicate with the other two switches I have to go out to the outside world, correct? So VLAN 60 also needs to be created on my Cisco border switches (where the uplinks are connected). Is that correct?

sclingan
Contributor

Yes, that's correct. You need to create the VLAN on all switches and allow it on the uplinks/trunks, the crosslink ports 17 and 18, and the ESX ports (1-16 as necessary).
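
On the Cisco border switch side it's just the standard VLAN-plus-trunk config, something like this (I run ProCurve here, so double-check the exact syntax for your platform; the interface name and VLAN list are only examples):

vlan 60
 name vmkernel
!
interface GigabitEthernet0/24
 description uplink to GbE2c
 switchport trunk encapsulation dot1q
 switchport mode trunk
 switchport trunk allowed vlan 21,35,60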

But rather than me telling you, read this doco (found it):

http://h71028.www7.hp.com/ERC/downloads/4AA1-0779ENW.pdf

In the examples they use Cisco border switches (like yours) with the GbE2c interconnect switch, so just pick one of the topologies and follow the instructions.

We use HP ProCurve but I figured it all out...

Cheers

mtsm
Contributor

Hehe, we had a big mess here. So on our first enclosure I have four switches: A, B, C, D.

A and B are ISL-connected via ports 17/18 and ports 20/21; A is connected to our Catalyst switch X, and B to our Catalyst switch Y.

C and D are ISL-connected via ports 17/18 and ports 20/21; C is connected to our Catalyst switch X, and D to our Catalyst switch Y.

So we configured VLANs 21, 35, and 60 on our Catalysts.

Then I configured VLANs 21, 35, and 60 on ports 20, 21, 17, and 18 of all my GbE2c switches (A, B, C, D).

When I configured the last ports, 17/18 on switch A, to pass VLANs 20, 21, 35, 60, we got a big loop and the whole datacenter network stopped working, heh. We had to run and disconnect the cables from ports 20/21 on switch A. Then I removed VLANs 20, 21, 35, 60 from ports 17/18 (switch A) and connected the cables again, and no more loops. VLAN 35 is working; I have one blade connected to port 1 and everything is fine.

I have STP on for all my 17, 18, 20, 21 ports. As I am not a network specialist... WTF is wrong? I followed the document from your message (topology 3), which is with STP off, but as we have not two but four switches I decided to leave STP on. My network went down anyway.
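
If it helps, I can grab output from the Catalyst side with something like this (the interface name is just an example, not one of our real ports):

show interfaces trunk
show spanning-tree vlan 60
show spanning-tree interface GigabitEthernet0/24 detail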

Can anyone help?

mtsmbr
Contributor

Well, I'm happy now, using Cisco UCS; HP is too complicated.
