VMware Cloud Community
jorgemgonz
Contributor

ESXi5 on HP BL685c-G7 Blades-What is the Max # Flex-NICs allowed?

Hi there. This is a question for anyone out there with experience installing ESX(i) on HP blade systems. I have experience with BL460s, but I have a question regarding the full-height blades. We are currently deploying HP BL685c G7 blades - 8 per c7000 chassis - and plan to run ESXi 5 U1 on them. The BL685s come with two dual-port CNAs, which yields 4 LOMs per blade. If we carve out all of the possible bandwidths - 4 FlexNICs per LOM - we are left with 16 FlexNICs. Two FlexNICs will be used for FC, which leaves 14 Ethernet FlexNICs that can be presented to the ESXi 5 OS.

If we present all 14 FlexNICs to ESXi 5 as vmnics, are we in violation of the VMware ESXi 5 configuration maximum of 8 10GbE NICs per host?

Would we be in an unsupported configuration?

Is it best to just carve out the required bandwidth and present a maximum of 8 Ethernet FlexNICs to the ESXi 5 OS?
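For what it's worth, the carving arithmetic above can be written down as a quick sanity check (a sketch only; the maximum of 8 comes from the vSphere 5 Configuration Maximums guide):

```python
# Quick sanity check of the FlexNIC arithmetic above (numbers from the post;
# the ESXi 5 limit is from the vSphere 5 Configuration Maximums guide).
LOM_PORTS = 2 * 2            # two dual-port CNAs -> 4 LOM ports per blade
FLEXNICS_PER_LOM = 4         # Virtual Connect carves up to 4 FlexNICs per port
FC_FLEXNICS = 2              # two FlexNICs consumed as FlexHBAs for FC

total_flexnics = LOM_PORTS * FLEXNICS_PER_LOM      # 16
ethernet_flexnics = total_flexnics - FC_FLEXNICS   # 14

ESXI5_MAX_10GBE = 8          # supported 10GbE pNICs per ESXi 5 host

print(f"Ethernet FlexNICs carved: {ethernet_flexnics}")
print(f"Over the supported maximum by: {ethernet_flexnics - ESXI5_MAX_10GBE}")
```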

Thanks in advance for your help - Jorge

4 Replies
joshodgers
Enthusiast

I would suggest that presenting more than 4 x 10Gb NICs to an ESXi host is overkill in any case, but as per the configuration maximums, as you suggested, 8 is the maximum. Also note the maximum relating to combinations of 10Gb and 1Gb Ethernet ports, which is six 10Gb and four 1Gb ports.

In my opinion, you should consider using Network I/O Control and logical separation with VLANs and dvPortGroups.

If you did in fact present a larger number of 10Gb connections to the blades running your ESXi hosts, the bottleneck would still be the uplink ports exiting the blade chassis, which in a lot of cases offer much less than the total bandwidth presented internally.

For example, I have used the layout below in large environments with a lot of success.

2 x 1Gb for ESXi management (Active/Standby) on vSwitch0

2 x 10Gb for vMotion (Active vmNICx / Standby vmNICy), FT (Active vmNICy / Standby vmNICx), and IP storage (two VMkernel ports, active on alternate vmNICs), running NIOC with IP storage given the highest share value, on dvSwitch0

2 x 10Gb for VM networking on dvSwitch1 (Route based on physical NIC load)

The physical network in this case would be all 802.1Q trunks.
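The active/standby alternation above can be sketched as a small data structure and checked; the vmnic numbers and portgroup names here are illustrative placeholders, not taken from any real host:

```python
# Hypothetical mapping of the layout above; vmnic numbering is illustrative only.
layout = {
    "vSwitch0":  {"Management":   {"active": "vmnic0", "standby": "vmnic1"}},
    "dvSwitch0": {
        "vMotion":      {"active": "vmnic2", "standby": "vmnic3"},
        "FT":           {"active": "vmnic3", "standby": "vmnic2"},
        "IP-Storage-A": {"active": "vmnic2", "standby": "vmnic3"},
        "IP-Storage-B": {"active": "vmnic3", "standby": "vmnic2"},
    },
    "dvSwitch1": {"VM-Network": {"active": "vmnic4", "standby": "vmnic5"}},
}

# The point of the alternation: vMotion and FT are active on different 10Gb
# uplinks, so neither starves the other on the same port in normal operation.
dvs0 = layout["dvSwitch0"]
assert dvs0["vMotion"]["active"] != dvs0["FT"]["active"]

# Every portgroup still has a standby path on the other uplink of the pair.
for pg in dvs0.values():
    assert pg["active"] != pg["standby"]
```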

Josh Odgers | VCDX #90 | Blog: www.joshodgers.com | Twitter @josh_odgers
jorgemgonz
Contributor

Hi Josh,

Thank you for your input. Much appreciated.

Jorge

joshodgers
Enthusiast

You're welcome. If you could mark my reply as "Correct" or "Helpful", that would be great. Cheers

Josh Odgers | VCDX #90 | Blog: www.joshodgers.com | Twitter @josh_odgers
Gkeerthy
Expert

As mentioned by Josh, if you have an Enterprise Plus license with NIOC, that is a great solution. If you only have Standard or Enterprise licensing, see the below:

[Attached image: blade-traa1.jpg]

Refer to the links below and my blog, where I have given an example of how to lay out traffic in the blades with a standard switch:

http://pibytes.wordpress.com/

http://communities.vmware.com/message/2081040#2081040

Please don't forget to award points for 'Correct' or 'Helpful' if you found the comment useful. (vExpert, VCP-Cloud, VCAP5-DCD, VCP4, VCP5, MCSE, MCITP)