dbeatty1954
Contributor

ESX 4.1 Networking

First, I need to say I am not a VM person; I work on the networking side.  We are having a discussion about how best to incorporate 10Gb NICs into our environment.  I have read an Intel document discussing the use of their 10Gb NICs with VMware.  Currently we have up to ten 1Gb connections per server into a Cisco C4900M switch.  The cost is very high, and I am sure you can imagine what the rack looks like with up to eight ESX servers per rack.  The Intel document mentions vDS a few times, and I am pretty sure we don't use vDS in our environment.  I want to be able to port channel/trunk two 10Gb connections together and pass all of our traffic across this port channel.  This traffic will include the service consoles, vMotion and the VM server traffic.  The system admins of our VMs don't feel this can be done; they still want a separate copper connection for the service console, and possibly vMotion.  The networking staff feels this can be run across the port channel by creating a vSwitch and then creating port groups that are members of that vSwitch, and of course we will be adding the VLAN tag to each port group.  Is anybody doing this who is willing to provide some insight into trunking ESX servers?  I have also attached the Intel document I read.  Thanks for any info you can provide.
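To make the question concrete, here is roughly the layout we on the network side have in mind, written out as a sketch with the pyVmomi Python SDK. This is only an illustration, not something we have run: the host name, credentials, vmnic numbers, port group names and VLAN IDs are all placeholders, and it only shows the vSwitch and VLAN-tagged port group layout (the actual Service Console/VMkernel interfaces would be separate steps).

from pyVim.connect import SmartConnect
from pyVmomi import vim
import ssl

# Connect straight to one ESX host (placeholder hostname and credentials)
si = SmartConnect(host='esx01.example.com', user='root', pwd='secret',
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
host = view.view[0]
net_sys = host.configManager.networkSystem

# One standard vSwitch backed by both 10GbE uplinks (example vmnic names)
vss_spec = vim.host.VirtualSwitch.Specification()
vss_spec.numPorts = 128
vss_spec.bridge = vim.host.VirtualSwitch.BondBridge(nicDevice=['vmnic4', 'vmnic5'])
net_sys.AddVirtualSwitch(vswitchName='vSwitch1', spec=vss_spec)

# VLAN-tagged port groups for each traffic type (example VLAN IDs)
for name, vlan in [('Service Console', 10), ('vMotion', 20), ('VM Network', 30)]:
    pg_spec = vim.host.PortGroup.Specification()
    pg_spec.name = name
    pg_spec.vlanId = vlan
    pg_spec.vswitchName = 'vSwitch1'
    pg_spec.policy = vim.host.NetworkPolicy()
    net_sys.AddPortGroup(portgrp=pg_spec)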

8 Replies
cjscol
Expert

Of course you can do this, just the same as you could have the Management Port/Service Console, vMotion and virtual machine traffic on a single vSwitch with only 2 x 1Gb Ethernet NICs in a small implementation.

If you have Distributed Switches, you can also take advantage of Network I/O Control to limit how much bandwidth each service uses on the vSwitch; see http://www.vmware.com/files/pdf/techpaper/VMW_Netioc_BestPractices.pdf
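For reference, a rough pyVmomi sketch of what the NIOC side of that looks like on a distributed switch. The pool keys, share values and function name here are examples only and I have not run this exact code, so treat it as a pointer to the API rather than a recipe.

from pyVmomi import vim

def set_nioc_shares(dvs):
    # Turn on Network I/O Control on the vDS
    dvs.EnableNetworkResourceManagement(enable=True)

    # Existing system pools, keyed by name (keys such as 'vmotion', 'management',
    # 'virtualMachine' -- check dvs.networkResourcePool for the exact set)
    existing = {p.key: p for p in dvs.networkResourcePool}

    specs = []
    for key, shares in [('vmotion', 50), ('management', 50), ('virtualMachine', 100)]:
        alloc = vim.DVSNetworkResourcePoolAllocationInfo()
        alloc.shares = vim.SharesInfo(level='custom', shares=shares)
        alloc.limit = -1                      # -1 = no hard limit
        specs.append(vim.DVSNetworkResourcePoolConfigSpec(
            key=key,
            configVersion=existing[key].configVersion,
            allocationInfo=alloc))

    dvs.UpdateNetworkResourcePool(configSpec=specs)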

I have implemented a solution using 2 x 10GbE ports for all traffic, without Distributed Switches and based on ESXi 4.1, for the same reasons you describe.  The original design was going to be 6 ESXi hosts with 10 x 1Gb ports, but I changed this to 6 ESXi hosts with 2 x 10GbE ports.  The servers had 2 onboard 1Gb ports, but these were never cabled up.  Six hosts with 10 Ethernet ports each would have been a lot of cabling to manage.  It would also have required 2 x 24-port 1Gb Ethernet modules in each of the 2 core switches, as we would have needed 30 connections to each core switch, whereas the new solution only required 12 cables and one 8-port 10Gb Ethernet module in each of the 2 core switches.

Calvin Scoltock VCP 2.5, 3.5, 4, 5 & 6 VCAP5-DCD VCAP5-DCA http://pelicanohintsandtips.wordpress.com/blog LinkedIn: https://www.linkedin.com/in/cscoltock
dbeatty1954
Contributor

Thanks Calvin, that is what we as network types thought, but we are getting some disagreement from the VM side of the house.  It seems pretty clear: port channel and trunk the 10Gb, create a vSwitch, and add port groups to that vSwitch.  Again, thanks for the info.

rickardnobel
Champion

dbeatty1954 wrote:

It seems pretty clear: port channel and trunk the 10Gb, create a vSwitch, and add port groups to that vSwitch.

Another thing: if you are using a port channel, you must also change the NIC teaming load balancing policy on the vSwitch to "IP hash".
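If it helps, the change looks something like this with the pyVmomi Python SDK. The vSwitch name is a placeholder, and this assumes a static EtherChannel on the physical side, since LACP is not supported on standard vSwitches.

from pyVmomi import vim

def set_ip_hash(host, vswitch_name='vSwitch1'):
    net_sys = host.configManager.networkSystem
    vss = next(s for s in net_sys.networkInfo.vswitch if s.name == vswitch_name)

    spec = vss.spec                                   # start from the current spec
    if spec.policy is None:
        spec.policy = vim.host.NetworkPolicy()
    if spec.policy.nicTeaming is None:
        spec.policy.nicTeaming = vim.host.NetworkPolicy.NicTeamingPolicy()
    spec.policy.nicTeaming.policy = 'loadbalance_ip'  # "Route based on IP hash"

    net_sys.UpdateVirtualSwitch(vswitchName=vswitch_name, spec=spec)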

My VMware blog: www.rickardnobel.se
Josh26
Virtuoso
Accepted Solution

dbeatty1954 wrote:

Thanks Calvin, that is what we as network types thought, but we are getting some disagreement from the VM side of the house.  It seems pretty clear: port channel and trunk the 10Gb, create a vSwitch, and add port groups to that vSwitch.  Again, thanks for the info.

Hi,

My advice would be to start by being clear about what you mean by a port channel. Network types like us want to talk LACP; however, VMware doesn't support this in general.

A lot of the advice is based on older technology. When VM traffic was on a 1Gb NIC, it was possible to flood it to the point that management traffic out that NIC would drop. Separate copper became a best practice that was almost required.

Being realistic, no VM is going to flood a 10GbE NIC in that way. We've tried, with iperf, and we still couldn't bring a management network offline. Regardless of any old advice you read, there is no reason this can't work. In fact, all our lab environments use a single 1Gb NIC for management, vMotion, VM traffic, and iSCSI. I wouldn't do it outside a lab, but the point is, it does work fine.

Edit: I'll also point out that the HCL for 10GbE NICs is very small. Get one that's listed. I had a client who wanted to use a certain brand of NIC, and it was a complete disaster, with the vendor supplying non-HCL drivers.

cjscol
Expert

If you are using iSCSI and want to run it over the 10GbE vSwitch, then as far as I know you will not be able to configure Link Aggregation on the two 10GbE ports.  In the case of iSCSI, to provide failover you should create two VMkernel ports on the vSwitch for iSCSI, one using one of the physical NICs and the other using the other physical NIC.
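Roughly, those two VMkernel ports would look like this in pyVmomi, with each port group overriding the NIC order so it only uses one uplink. The port group names, VLAN, IPs and vmnic names are placeholders, and binding the VMkernel ports to the software iSCSI adapter is a separate step not shown here.

from pyVmomi import vim

def add_iscsi_vmk(host, pg_name, active_nic, ip, netmask, vswitch='vSwitch1'):
    net_sys = host.configManager.networkSystem

    # Port group with a NIC-order override: one active uplink, the other left unused
    pg_spec = vim.host.PortGroup.Specification()
    pg_spec.name = pg_name
    pg_spec.vlanId = 40                               # example iSCSI VLAN
    pg_spec.vswitchName = vswitch
    pg_spec.policy = vim.host.NetworkPolicy()
    pg_spec.policy.nicTeaming = vim.host.NetworkPolicy.NicTeamingPolicy()
    pg_spec.policy.nicTeaming.nicOrder = vim.host.NetworkPolicy.NicOrderPolicy(
        activeNic=[active_nic])
    net_sys.AddPortGroup(portgrp=pg_spec)

    # VMkernel interface with a static IP on that port group
    nic_spec = vim.host.VirtualNic.Specification()
    nic_spec.ip = vim.host.IpConfig(dhcp=False, ipAddress=ip, subnetMask=netmask)
    net_sys.AddVirtualNic(portgroup=pg_name, nic=nic_spec)

# e.g. add_iscsi_vmk(host, 'iSCSI-A', 'vmnic4', '10.0.40.11', '255.255.255.0')
#      add_iscsi_vmk(host, 'iSCSI-B', 'vmnic5', '10.0.40.12', '255.255.255.0')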

Calvin Scoltock VCP 2.5, 3.5, 4, 5 & 6 VCAP5-DCD VCAP5-DCA http://pelicanohintsandtips.wordpress.com/blog LinkedIn: https://www.linkedin.com/in/cscoltock
dbeatty1954
Contributor

Some of what I read was the Intel article that I attached to my original message.  It discussed using the Intel 10GbE NICs with VMware 4.1.  I agree; some of the stats we were able to see in the VMware virtual console seemed to confirm that network utilization was minimal.  I think the highest spike we saw on any ESX server was about 4Gb, and that was a consolidation of all traffic through all interfaces on the ESX host.  I agree, I don't think we could flood a 10Gb link, let alone two 10Gb links in a port channel.

As for the HCL, per the system admins this card is on the HCL.

Thanks for the info.

dbeatty1954
Contributor

Yes, I did see that IP hash was a requirement.  Thanks.

dbeatty1954
Contributor

No Calvin, iSCSI is running on fiber straight back to our SAN.  iSCSI will not be traversing the trunk.  I will leave iSCSI to our storage group.  Thanks again for the info.
