Hello,
I have a bit of a conundrum I've been trying to work out. I have a c7000 blade chassis with four pNICs, each connected to a Cisco Catalyst Blade Switch 3020. I have three production VLANs and a DMZ, and I'm planning to run the DMZ and production networks directly into the four Cisco blade switches. The DMZ is a copper connection to a Cisco 3750-G. The production VLANs will come in over two fiber runs, each going back to one of our two core switches (currently used for FT). I'm then going to do VST and create a Prod/DMZ vSwitch and a second vSwitch for VMotion/SC. That way I have redundant ports for both VMotion/SC and the Prod/DMZ networks. Since the 3020s are aware of all the VLANs, I plan on alternating which pNICs I use for Prod/DMZ versus SC/VMotion on each ESX host.
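For what it's worth, a layout like that can be sketched from the classic ESX service console with `esxcfg-vswitch`. The vSwitch names, port group names, VLAN IDs, and vmnic numbering below are placeholders for illustration, not your actual values:

```sh
# Sketch only -- names, VLAN IDs, and vmnic numbers are placeholders.

# vSwitch0: Prod/DMZ, with VST tagging done on the port groups
esxcfg-vswitch -a vSwitch0                       # create the vSwitch
esxcfg-vswitch -L vmnic0 vSwitch0                # uplink: onboard pNIC
esxcfg-vswitch -L vmnic2 vSwitch0                # uplink: riser pNIC (different adapter)
esxcfg-vswitch -A "Prod-VLAN10" vSwitch0         # add a port group
esxcfg-vswitch -v 10 -p "Prod-VLAN10" vSwitch0   # tag it with VLAN 10 (VST)
esxcfg-vswitch -A "DMZ-VLAN99" vSwitch0
esxcfg-vswitch -v 99 -p "DMZ-VLAN99" vSwitch0

# vSwitch1: Service Console + VMotion on the other onboard/riser pair
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic1 vSwitch1
esxcfg-vswitch -L vmnic3 vSwitch1
```

Because the 3020 ports are trunked with all the VLANs, adding another VLAN later is just another port group plus a `-v` tag, with no physical recabling.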
Is this best practice? I wish I had 6 pNics.
Thanks,
Jim
My only point was (and some might suggest it's a pretty minor point) that the total number of physical NIC ports available shouldn't be the only thing considered when you're deciding on the design. It gets tricky when one adapter has more ports than another. Protecting against adapter failure is different when you combine a 2-port NIC and a 4-port NIC (6 total ports) than when you combine three 2-port NICs (also 6 ports). The total port count is the same, but when protecting services (VM networks, SC networks, FT networks, iSCSI networks, VMotion networks, etc.) from going down due to an adapter failure, it is functionally different. It's almost better to think of that 2-port-plus-4-port combo as 4 usable ports, with a couple of extra ports you're willing to run non-redundant.
I've probably just added to the confusion now.
Having just 4 NICs is limiting your options. But you are right: use two for Prod/DMZ and two for VMotion/SC. Teamed this way you have two active links in a redundant mode on each vSwitch.
Thanks for the sanity check.
It's pretty expensive, but I'd like to get quad-port NICs on the blades and purchase two more Catalyst 3020s. Now if only HP had a trade-in for the dual-port NIC risers.
Jim
We have the exact same setup, 3020s and all.
One warning - document the MAC addresses of your NICs and map them to the switches. If you use NICs connected to different physical switches within your vSwitch, you can perform maintenance on the switches without interrupting operation.
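On the point about documenting MACs: on classic ESX the service console can dump the vmnic-to-MAC mapping directly, which makes it easy to cross-reference against the MAC tables on the 3020s. A quick sketch (the output file path is just an example):

```sh
# List every pNIC with its driver, link state, speed, and MAC address
esxcfg-nics -l

# Save the mapping somewhere permanent so it survives host rebuilds
esxcfg-nics -l > /root/vmnic-mac-map.txt
```

With that list in hand you can match each vmnic's MAC to a physical port on the Cisco side and know exactly which blade switch you can take down safely.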
Remember also that even if you had your dream of bumping up to a higher-density NIC, you still have to plan your design for protection against ADAPTER failure, not just NIC port failure (the former is almost always what happens when there is a failure).
Thanks for the heads up, everyone. Adapter failure is a possibility, but in the blade scenario you have two NICs on board and two/four on your riser card, so if your onboard NICs fail the riser card will still work.
I'm trying to wrap my head around what you are saying. If I have two vSwitches, one for VMotion/SC and the other for Prod/DMZ, and I alternate one onboard NIC and one riser NIC to vSwitch0 and the opposite pair to vSwitch1, I would still be covered, correct?
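If it helps to sanity-check that pairing, the service console on classic ESX can show which uplinks landed on which vSwitch; comparing that against the pNIC list confirms each vSwitch really has one onboard and one riser adapter:

```sh
# Show every vSwitch with its port groups and uplink vmnics;
# each vSwitch should list one onboard and one riser NIC
esxcfg-vswitch -l

# Cross-reference which vmnic is onboard vs. riser (by PCI slot/driver)
esxcfg-nics -l
```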
Jim
Hey Sketchy
Thanks for the response; I understand what you are saying. It's just unfortunate how the blade chassis is designed versus a traditional server. I currently have two onboard NICs, a QLogic FC HBA, and two NICs on a riser card.
Thanks!
Jim
My comments come with total sympathy! ...I have a handful of Dell PowerEdge M6xx blades. They are fantastic units, but they can only max out at 6 ports each, with one onboard adapter and two additional 2-port adapters on the riser card. I wish I had at least a few more ports. Although, with the Dells, they did recently come out with a new switch that fits in the blade enclosure chassis and increases the adapter ports to the internal switching. More $$ though. ...oh well.
Your answer is Xsigo.
But, for most situations, VLANs and trunking will be more than good enough with 2 "data" NICs.
Thanks everyone much appreciated. Great conversation.
Jim