Good day everybody,
I tried to run a search for what I was looking for, but can't seem to find anything relevant to ESX 3.5, so here goes...
We just bought a few new Dell PowerEdge 2950s that we are going to be using as ESX hosts. We are also using an EqualLogic SAN for our storage, so we're going to need additional NICs.
Last week my network admin and I took an unofficial VMware 'Boot Camp', and one thing that was stressed was that you should avoid mixing different brands of NIC. This is going to be easier said than done, however, as the two onboard ports on these servers are Broadcom and the only 4-port PCIe NICs I can find are Intel-made.
Does anyone have any experience here? Will it be problematic to mix Broadcom and Intel NICs in the same host? Just a quick note: if we do add in Intel cards, we would not be port-grouping/teaming Intel and Broadcom NICs together. My thought is to use the onboard Broadcoms for the COS and hypervisor, then one Intel NIC for LAN traffic and one Intel NIC for SAN traffic.
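To make that concrete, here's roughly what I'm picturing at the service console level once the Intel card is in (vmnic numbers, port group names and the IP below are just placeholders at this point):

```
# List the physical NICs ESX sees (the two onboard Broadcoms plus the
# Intel quad-port once it's installed)
esxcfg-nics -l

# vSwitch0 already has vmnic0 (first onboard Broadcom) from the install;
# add the second onboard port so the COS/VMkernel has a team of two
esxcfg-vswitch -L vmnic1 vSwitch0

# vSwitch1: one Intel port for VM/LAN traffic
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -A "VM Network" vSwitch1

# vSwitch2: one Intel port for iSCSI/SAN traffic, with a VMkernel port
esxcfg-vswitch -a vSwitch2
esxcfg-vswitch -L vmnic3 vSwitch2
esxcfg-vswitch -A "iSCSI" vSwitch2
esxcfg-vmknic -a -i 10.0.10.11 -n 255.255.255.0 "iSCSI"
```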
Thoughts here?
Thanks in advance!
I took an unofficial VMware 'Boot Camp' last week, and one thing that was stressed was that you should avoid mixing different brands of NIC.
This is BS.
Mixing nic brands is not problematic at all and is fully supported.
Perhaps I am mixing terms here, and I figure I should clear that up.
Same thing with NIC chipsets?
Also, since I have ya, does the software iSCSI initiator support Gigabit connections in ESX 3.5? We were also told that 3.02 only did 100Mb.
Same thing with NIC chipsets?
Yes, same thing with NIC chipsets.
ESX doesn't care about the precise models of the underlying NICs in a team: they are driven in their own, separate layer, and ESX interacts with them through a completely generic interface. So as long as a device is capable of sending and receiving Ethernet frames, it can be added to a team. This is different from Windows, where load balancing is (usually) implemented at the driver layer and thus requires homogeneous NICs.
The only minor issue you can expect from mixing NICs in a team is a slight variation in performance. Because some NICs may be faster than others, the networking performance of your VMs can vary depending on the teaming decision ESX makes while forwarding frames. I've seen people team Gigabit NICs and 100Mbit NICs together (and it works fine as far as network connectivity is concerned) and then wonder why their throughput was so bumpy.
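If you want to double-check what's actually in a team and how fast each member is, the service console will tell you:

```
# Driver, link state and speed/duplex of every physical NIC
esxcfg-nics -l

# Which vmnics are uplinks on which vSwitch (i.e. who's actually in the team)
esxcfg-vswitch -l
```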
Also, since I have ya, does the software iSCSI initiator support Gigabit connections in ESX 3.5? We were also told that 3.02 only did 100Mb.
I'm no iSCSI expert, but I don't see why we would have any limitation on Gigabit links.
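For what it's worth, enabling the software initiator in 3.5 is only a few service console commands once a VMkernel port is on the iSCSI network; the discovery IP and vmhba name below are just examples, so check what your host actually reports:

```
# Open the iSCSI client port (3260) in the Service Console firewall
esxcfg-firewall -e swISCSIClient

# Enable the software iSCSI initiator and confirm it's on
esxcfg-swiscsi -e
esxcfg-swiscsi -q

# Add the EqualLogic group IP as a SendTargets discovery address
# (the vmhba number assigned to the software initiator varies per host)
vmkiscsi-tool -D -a 10.0.10.50 vmhba32

# Rescan the software initiator for LUNs
esxcfg-swiscsi -s
```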
You sir (or ma'am) are a fountain of knowledge - thank you very much!
That'll do it for my base networking question, but I'll have more. And likely more regarding the iSCSI management once we get 'er going.
I'll be around...
We're also using PE2950s with Intel quad-port cards in them (the VT model, I think), and I'm even doing active/standby connections across the Broadcom and Intel vmnics for the VM and SC networks. No issues so far (though our storage is fibre-attached). For a client we have the same on order, but their storage will be iSCSI-attached (and the servers will come with dual quad-port Intel cards); we're not anticipating any issues with that apart from not being able to use TOE on the quad-port cards for the iSCSI connections.
Make sure you're on the latest ESX patch though as I think EMC fixed an issue with quad-port cards recently.
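If you want to confirm what a host is running before chasing it, a quick check from the service console covers it:

```
# ESX version and build number (compare against the patch release notes)
vmware -v

# Patch bundles already installed on this host
esxupdate query

# Which driver each NIC uses (e1000 for the Intel quad-ports, bnx2/tg3
# for the onboard Broadcoms) plus current link speed/duplex
esxcfg-nics -l
```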
We're working through a possible teaming problem here with Broadcom and Intel mixed.
We get intermittent failures on VMotion and on sftp uploads to the datastores through the service console.
It's intermittent, though (not good), and tough for us to isolate.
It seems to take as much as a full 3GB DVD upload to fail, or a couple of VMotions.
We moved the Intel on one box and the Broadcom on the other to unused on the vSwitch, and have done about 3GB of transfer with sftp and about five VMotions so far.
vmnic0 = Broadcom, vmnic6 = Intel, with ESX 3.5 Update 3.
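For anyone following along, this is roughly how you can swap an uplink in and out from the service console while testing (vmnic/vSwitch names here are ours; marking the adapter Unused in the VI Client teaming policy, which is what we actually did, gets you a similar effect without unlinking it):

```
# Pull the suspect uplink out of the vSwitch before the test run
esxcfg-vswitch -U vmnic6 vSwitch0

# ...run the load: a few VMotions plus a multi-GB sftp to a datastore...

# Put it back and confirm the uplink list afterwards
esxcfg-vswitch -L vmnic6 vSwitch0
esxcfg-vswitch -l
```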
