VMware Cloud Community
mikera
Contributor

HBA & NIC config

I am about to install my first ESX Server and have a question about the best way to configure my HBA and NICs.

We have a Dell PE6850 with dual onboard Gb NIC and a single PCI Gb NIC. We also have a Qlogic 4052 for connection to our EQ PS100E. My question is what is the best way to utilize the Gb NIC ports? We will only have this one ESX host for now, but will be adding a second ESX host sometime next year. So no HA or VMotion for now. Will we need another Gb NIC or two either now or in the future?

If I was to dedicate the single PCI Gb NIC to the SAN how should I split the iSCSI traffic between the HBA ports and the NIC?

If I need additional NICs please indicate your recommendation for the best setup for all NIC and HBA ports. I like the idea of using MPIO round-robin, if possible, but believe this won't be possible via the HBA, correct?

Thanks,

Mike

17 Replies
Cloneranger
Hot Shot

Hi,

I would dedicate one NIC to the service console,

and use the other two in one vSwitch for your VMs - what you can do with these in terms of failover/load balancing depends on your switch, but ESX does a pretty good job on its own.

You will get load balancing on outgoing traffic, but you will need to configure your switch to get it on the inbound path.

Just use the HBA on its own for iSCSI connectivity.

You will need more NICs in the future:

at least one for VMotion, and I also like to have at least one redundant link for the service console.

I use 12 NICs per host into six switches, but that's a little overkill. ;)

Mike_Deardurff
Enthusiast

Definitely go with the dedicated NIC for the service console. I would also set up one for VMotion (future), 2-3 for a teamed production network, and maybe 1 for DMZ/extra. I have also found it good practice to implement an internal vSwitch with no physical NIC attached and use it for isolating VMs or as a secured staging area. Just a thought.
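For what it's worth, that internal-only switch takes two commands from the ESX service console (a sketch assuming ESX 3.x; the switch and port group names here are just examples):

```
# Create a vSwitch with no physical uplinks (name is an example)
esxcfg-vswitch -a vSwitchInternal

# Add a port group to attach the isolated/staging VMs to
esxcfg-vswitch -A "Staging" vSwitchInternal

# Verify - the new switch should show no uplinks in the list
esxcfg-vswitch -l
```

Because no vmnic is ever linked with -L, traffic on that switch can't leave the host.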

Read up on your MPIO in regard to the level of redundancy it offers, too. You may want to grab another HBA for your iSCSI connection. From memory, MPIO works best in an active-active setup where both SPs have access to the LUNs at the same time. MPIO with active-passive will not give you a completely redundant solution.

Also, remember that iSCSI traffic over a standard NIC (no TOE) will behave much differently than over an iSCSI HBA or TOE NIC. I don't know what kind of effects you would see if you tried to balance traffic between them. For sure, the overhead will be worse when using a standard NIC.

-Mike

mikera
Contributor

So just to be clear, is this a good setup?

NIC1 (onboard)-----|
                   |-(teamed)-> production network
NIC2 (onboard)-----|

NIC3 (PCI)--------> service console

HBA1 -------------> iSCSI array
HBA2 -------------> iSCSI array

If not please clarify. I'm planning to add another NIC down the road for VMotion. I'd be interested in any recommendations for a specific setup that will support 2 ESX hosts and the EQ PS100E that would provide active-active MPIO capabilities.

Thanks,

Mike

christianZ
Champion

As I remember, you can use either the software initiator or an iSCSI HBA.

I prefer iSCSI HBAs (max. 2, e.g. 1x 4052 or 2x 4050). Use all the NICs for your networks.

langonej
Enthusiast

I would make this small adjustment:

NIC1 (onboard)-----|
                   |-(teamed)-> production network
NIC2 (onboard)-----|

NIC3 (PCI)--------> service console

NIC1 (onboard)-----|
NIC2 (onboard)-----|--- Virtual Switch (all 3 NICs teamed)
NIC3 (PCI)---------|

Port Group for Virtual Machines:
  NIC Teaming tab -> Override:
    NIC1 and NIC2 active, NIC3 standby.

Port Group for Service Console:
  NIC Teaming tab -> Override:
    NIC3 active, NIC1 and NIC2 standby.

Similar approach for VMotion port group if you're using it.
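From the service console, that single-vSwitch layout looks roughly like this (a sketch assuming ESX 3.x; the vmnic numbering is an assumption - check esxcfg-nics -l to map your onboard and PCI ports):

```
# One vSwitch carrying all three uplinks
esxcfg-vswitch -a vSwitch0
esxcfg-vswitch -L vmnic0 vSwitch0
esxcfg-vswitch -L vmnic1 vSwitch0
esxcfg-vswitch -L vmnic2 vSwitch0

# Port groups for the VMs and the service console
esxcfg-vswitch -A "VM Network" vSwitch0
esxcfg-vswitch -A "Service Console" vSwitch0
```

The per-port-group active/standby override itself is set in the VI Client: edit the port group, NIC Teaming tab, check "Override vSwitch failover order", then move the adapters between Active and Standby.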

langonej
Enthusiast

Start reading my last post from:

NIC1 (onboard)-----|
NIC2 (onboard)-----|--- Virtual Switch (all 3 NICs teamed)
NIC3 (PCI)---------|

I accidentally copied in your config above mine and I don't know how to edit my post.

Texiwill
Leadership

Hello,

Another option is:

NIC1 -> SC port group plus a VMotion port group, with no bridges between the port groups.

NIC2 & NIC3 -> Teamed for VM Network

This way your SC is not on the same network as your VMs. I like to keep those quite separate for security reasons.

Best regards,

Edward

--
Edward L. Haletky
vExpert XIV: 2009-2023,
VMTN Community Moderator
vSphere Upgrade Saga: https://www.astroarch.com/blogs
GitHub Repo: https://github.com/Texiwill
bertdb
Virtuoso

If you team for redundancy, don't team your two onboard NICs: they are on the same part of the PCI bus and might fail together.

Better to team one onboard NIC with the PCI NIC (or all three together, as suggested).

langonej
Enthusiast

No redundancy in your solution for the SC, though.

I'd take the comment above, apply it to my post, and use NICs 1 and 3 as active for the VMs with 2 as standby, and NIC 2 as active for the SC with 1 and 3 as standby.

mikera
Contributor

These are all good and helpful posts. I do have a question about IP schemes for my SAN vs. production networks. Currently our SAN and LAN are on the same subnet - I was told this would make it easier to manage the PS100E than trying to put it on a separate subnet (of course, I could have misinterpreted something). But lately I've been reading several posts where people describe separating their LAN and SAN networks for security reasons.

My question is how should I configure each of the interfaces above if I follow the recommendations to team NICs 1&3 for the VMs and use NIC 2 for the SC. We use a 192.168.1.x subnet so what would be my ideal setup? Should I definitely separate the SAN and LAN into different subnets? If I do so then how would I be able to manage the SAN from my workstation?

My current plan is to have a single Gb switch dedicated between the ESX host and the SAN. This switch will be connected to the LAN switch so I can manage the SAN from my workstation. Next year I'll be adding a second switch for redundancy. Will all of this work OK, or have I gone about this the wrong way?
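For reference, a separated scheme might look something like this (the 192.168.2.x subnet is only an example - the thread so far only mentions 192.168.1.x):

```
LAN / production:  192.168.1.0/24  - VMs, service console, workstations
SAN / iSCSI:       192.168.2.0/24  - QLA4052 ports, PS100E group and member IPs
Management path:   the single uplink from the SAN switch to a LAN switch,
                   routed (or a management station with a second interface
                   on the SAN subnet)
```

With the SAN on its own subnet, only hosts that are explicitly given a route (or an interface) into it can reach the array, which is the security benefit people describe.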

Forgive my networking ignorance. I'm still somewhat green when it comes to network design. Thanks again for all the great suggestions!!

Mike

bertdb
Virtuoso

Correct - you can't use both for the same SAN; multipathing and load balancing won't work between a hardware and a software initiator, or vice versa.

mikera
Contributor

Sorry, I should have been more clear.

I'm not trying to multipath between the HBAs and the NICs. I'm going to dedicate the HBAs to the SAN. However, my question is whether I should create a completely separate subnet for the SAN to keep it segregated from my production network. Is that necessary if I'm dedicating the HBAs to the SAN and letting all my network traffic flow through the NICs? It seems easiest just to keep things as they are with a single subnet.

langonej
Enthusiast

I'm no iSCSI guru, but I'd guess you might not need separate subnets, but you will definitely want separate VLANs.

mikera
Contributor

I have one switch dedicated to the SAN that is mostly separate from my production switches. The only connection between the two is a single patch cable between my SAN switch and one of my production switches so that I can connect to the SAN for management purposes. Is a separate VLAN still necessary in this setup?

langonej
Enthusiast

Not if it's physically separate.

mikera
Contributor

OK, so I think I have this ready to go, but I'd like some validation from the gurus (you guys). :)

Does this look like the best way to configure ESX networking?:

Set up a single vSwitch using all 3 NICs in a team. Set NICs 1 & 3 as Active for the VM port group and Standby for the SC port group. Then set NIC 2 as Active for the SC port group and Standby for the VM port group.

Or would it be better to create 2 separate vSwitches and dedicate NICs 1&3 to one vSwitch and NIC 2 to the other vSwitch? That would seem more complicated but does that config give me any benefits vs a single vSwitch?

Remember, in the future I'll be adding another dual-port NIC and making use of VMotion, HA and DRS when we set up our second host. For now I just need to best utilize the 3 NICs that we have. Thanks.

Texiwill
Leadership

Hello,

I use dedicated vSwitch/pNIC combinations, but that only works if you have plenty of NIC ports available. I like the physical separation; with VLANs and port groups, though, it is not strictly necessary.

As for iSCSI, your SC needs to see the iSCSI network (be on the same subnet, or use a netmask that allows it to reach both subnets). The main reason is to handle authentication, which is not part of the vmkernel. The VM network does not need this.

Best regards,

Edward

--
Edward L. Haletky
vExpert XIV: 2009-2023,
VMTN Community Moderator
vSphere Upgrade Saga: https://www.astroarch.com/blogs
GitHub Repo: https://github.com/Texiwill