VMware Cloud Community
kellino
Enthusiast

TOE adapters in ESX, and VM NIC settings

We are using TOE (TCP Offload Engine) adapters in our ESX3 servers.

I noticed that inside a Windows VM there are several options for the VMware Accelerated AMD PCNet NIC, including:

TCP/IP Offload : Off (Default)

Does this need to be enabled inside each VM in order to take advantage of the TOE capabilities of our NICs?

Thanks!

21 Replies
garneja
Enthusiast

Does this need to be enabled inside each VM in order to take advantage of the TOE capabilities of our NICs?

Probably wouldn't work. There is no TOE support in ESX3.

kellino
Enthusiast

Even though the NICs are on the HCL, and a VMworld presentation suggested the use of TOE NICs because they would reduce CPU usage inside VMs?

garneja
Enthusiast

I can tell you for sure that only L2 functionality on the NICs is exercised. Which VMworld presentation suggested TOE support?

kellino
Enthusiast

I can't recall, and I'd have to dig for it. This spring I was looking at one of the VMworld 2005 presentations, and there was a section on best practices. It talked about things like removing unnecessary devices (USB, etc.) to reduce CPU polling. At one point there was a bullet point -- maybe even a whole slide -- that said to explore adapters with TCP/IP offloading abilities because they could reduce the CPU cycles consumed by VMs.

I'm also wondering what the "TCP/IP Offload" option in the VMware AMD PCNet driver does. I can't seem to find any detail on how this setting is used.
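
For anyone else poking at the same question, here's a rough Python sketch (my own, nothing official from VMware or AMD) that dumps whatever settings each NIC driver exposes inside a Windows guest, so you can at least see which registry value the "TCP/IP Offload" option maps to. It assumes Python is available in the guest and that the driver keeps its settings under the standard network-adapter class key; I don't know the exact value name the PCNet driver uses, which is why it just prints everything.

import winreg

# Standard Windows network-adapter device class key (an assumption on my part;
# adjust if your guest keeps driver parameters elsewhere).
NET_CLASS = r"SYSTEM\CurrentControlSet\Control\Class\{4D36E972-E325-11CE-BFC1-08002BE10318}"

def dump_adapter_settings():
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, NET_CLASS) as cls:
        i = 0
        while True:
            try:
                sub = winreg.EnumKey(cls, i)   # "0000", "0001", ...
            except OSError:
                break
            i += 1
            if not sub.isdigit():              # skip e.g. the "Properties" subkey
                continue
            with winreg.OpenKey(cls, sub) as adapter:
                try:
                    desc, _ = winreg.QueryValueEx(adapter, "DriverDesc")
                except OSError:
                    continue
                print(f"[{sub}] {desc}")
                j = 0
                while True:
                    try:
                        name, value, _ = winreg.EnumValue(adapter, j)
                    except OSError:
                        break
                    j += 1
                    print(f"    {name} = {value!r}")

if __name__ == "__main__":
    dump_adapter_settings()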

Jae_Ellers
Virtuoso

The QLA4010 is on the 3.0 HCL. Rumor suggests the 405x may be on the 3.0.1 HCL.

It needs to be listed as an iSCSI adapter, not just as a supported NIC.

Performance with 3.0 was worse using hardware iSCSI, hence it is listed as experimental in 3.0. It should be supported for production in 3.0.1, along with a couple more cards.

-=-=-=-=-=-=-=-=-=-=-=-=-=-=- http://blog.mr-vm.com http://www.vmprofessional.com -=-=-=-=-=-=-=-=-=-=-=-=-=-=-
kellino
Enthusiast

HP's NC370T card is marketed as a multifunction adapter, and HP refers to it as an "iSCSI HBA":

http://h18004.www1.hp.com/products/servers/networking/nc370t/index.html

...however on the ESX3 HCL it is listed as a network adapter and not as an iSCSI HBA.

Are we to infer that the TOE capabilities of any adapter are only supported if it shows up in the iSCSI section of the HCL?

I know this is relatively new in the industry. I'm hoping VMware will help provide clarity soon, as this is about as unclear as it gets.

Jae_Ellers
Virtuoso

Yep, it needs to be in the storage section, not the network section. See the QLA4010.

-=-=-=-=-=-=-=-=-=-=-=-=-=-=- http://blog.mr-vm.com http://www.vmprofessional.com -=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Mork
Enthusiast

I attended a VMware seminar a couple of weeks ago, and they were very emphatic about TOE support (or the lack thereof) with regard to NICs.

There is currently one and only one supported adapter, which is the QLogic one.

None of the multifunction cards are supported, and they aren't expected to be for some time yet.

The inference I got was that if you're serious about your VMware implementation in Production, you'll be using SAN based storage, not NAS or iSCSI.

Mind you, that makes perfect sense to me...

Jae_Ellers
Virtuoso

Yep, it makes sense if you've got FC infrastructure. We don't, so my directive is to use iSCSI until I can't.

We've provisioned and migrated 1 TB across 4 LUNs and 33 VMs, with another 50 or so to go. So far so good. NetApp FAS3050 and software iSCSI. We'll go to TOEs when it's recommended by VMware. We already have a couple of QLA4010s installed.

-=-=-=-=-=-=-=-=-=-=-=-=-=-=- http://blog.mr-vm.com http://www.vmprofessional.com -=-=-=-=-=-=-=-=-=-=-=-=-=-=-
kellino
Enthusiast

"The inference I got was that if you're serious about your VMware implementation in Production, you'll be using SAN based storage, not NAS or iSCSI. "

I agree completely. We don't even use iSCSI and all our storage is on a 4GB SAN.

I was simply interested in the benefits gained in network bandwidth and reduced CPU overhead by offloading TCP/IP processing out of the VMs and onto hardware.

Mork
Enthusiast

Lacking FC infrastructure will always stop you from using a SAN :)

Are you finding any performance issues with using iSCSI at all?

Have you dedicated switches etc. to this or do you share with your prod network?

Just curious as I haven't had anything to do with iSCSI yet and was wondering how it would go.

garneja
Enthusiast

I was simply interested in the benefits gained in network bandwidth and reduced CPU overhead by offloading TCP/IP processing out of the VMs and onto hardware.

No TOE capability is implemented for VMs or for ESX's internal TCP/IP stack today. iSCSI, though, is an altogether different matter. Hardware iSCSI initiators have to use TCP/IP offload to talk the iSCSI protocol. Those on the HCL are completely supported, I'm sure.

TOE doesn't make sense with the latest breed of processors and 1Gig hardware. 10Gig, on the other hand, will probably need some form of hardware acceleration. When 10Gig becomes widely available, expect ESX to start exploiting these hardware acceleration features.

BillBauman
Contributor

Hi, everyone. Since I was looking for some info on the TP Mode option and found this, I figured I'd try to help out a bit.

Having read this thread, I see there are about three separate topics getting confused. I know some of you know this and some might not, so I'm not "talking down" or anything, just clarifying from my point of view. I say all that so no one gets upset; it's my standard disclaimer from the start. If I get something wrong, or you have a different opinion, please just let me know, don't yell, ok? :)

Topics:

1) iSCSI vs. Fibre Channel for SAN infrastructure - connections to Hosts basically

2) TCP Offloading for NICs at the Host level (TOE)

3) TCP/IP Offload in the Guest VM

Seeing as I do get to work with all this stuff, I can say the first topic is practically religion for some people. That said, iSCSI is poised to take the SMB market by storm in 2007. Stay tuned. :) I won't get into the debate, but FC SANs aren't going away and iSCSI SANs aren't taking over. It's like saying there'll never be another hard drive internal to a server, or that no one will ever buy another FC SAN because they're expensive. Let's not kid ourselves, people still buy mainframes, don't they? :)

The second topic is whether or not the Host (read: the Host OS) supports TOE. That's where TOE would have to occur. The idea of doing TOE at the Guest OS level is silly. Think about it: its purpose is to do a physical thing to improve performance. Enabling TOE at the Guest OS level would be like enabling a virtual co-processor. The best it could do is virtualize the instruction set and still have to go back down through the VMkernel to do the "work".

That said, TOE at the Host level is Guest-OS-independent. It just takes the packets coming through and adds/strips the TCP headers so that the CPU doesn't have to. A note on this: Intel's I/OAT is the TOE alternative, and you'll see it does very little to help CPU performance - imagine that, Intel wants to sell more CPUs.

All of that being said, TOE is only supported on one major OS as of today - Windows Server 2003 R2 SP1 with the Scalable Networking Pack installed. So TOE for NICs at the Host OS level of VMware can't work as of today. It'd be fantastic if it did. Stay tuned. :) BTW, TOE has immediate benefits for CPU offloading in Windows with only one gigabit card. It does not necessarily improve throughput all that much until you reach 2 gigabit interfaces driven to 100% utilization or more.
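
To make the "adds/strips the TCP headers" part concrete, here's a toy Python sketch of one piece of that per-packet work: the ones'-complement Internet checksum the host CPU otherwise has to compute over outgoing segments. This is purely illustrative (nothing from ESX or Windows); a checksum-offload or TOE NIC does this kind of work in silicon instead.

# Toy illustration only: the ones'-complement Internet checksum (RFC 1071 style)
# that the host CPU computes over outgoing segments when nothing is offloaded.
# A checksum-offload or TOE NIC does this kind of per-packet work in hardware.
def internet_checksum(data: bytes) -> int:
    if len(data) % 2:                              # pad odd-length data
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]      # sum 16-bit words
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return ~total & 0xFFFF

if __name__ == "__main__":
    fake_segment = b"\x45\x00\x00\x3c" * 256       # stand-in for header + payload bytes
    print(hex(internet_checksum(fake_segment)))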

The third topic actually refers to TCP checksum offloading. It's poor-man's TOE, done with a little bit of driver help. I've seen it have some benefits on physical Windows boxes. In a Guest, again, all you're doing is virtualizing the hardware, so enabling a hardware/driver-based feature isn't going to help you; it'll probably only increase processor utilization. Which is exactly what it did when I ran a test on a Windows Server 2003 Guest virtual machine just now: my network throughput dropped a couple percent, nothing big, and proc utilization went up about 5-10%, nothing big, but as expected.
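
If anyone wants to repeat a rough version of that test, the sketch below is roughly what I mean: blast data over a TCP socket for a fixed time and compare throughput and guest CPU with the offload setting on and off. It assumes Python plus the third-party psutil package inside the guest, and the target host/port are placeholders for whatever TCP sink you point it at.

# Rough harness: push data over a TCP socket for a fixed time and report
# throughput next to guest CPU utilization. Run it with the NIC's offload
# setting on, then off, and compare. Needs the third-party psutil package;
# the target address is a placeholder for whatever TCP sink you use.
import socket
import time

import psutil

TARGET = ("192.0.2.10", 5001)   # placeholder sink, e.g. `nc -lk 5001` on another box
DURATION = 10                   # seconds to transmit
CHUNK = b"x" * 65536

def run():
    sent = 0
    psutil.cpu_percent(None)                 # prime the CPU counter
    with socket.create_connection(TARGET) as s:
        deadline = time.time() + DURATION
        while time.time() < deadline:
            s.sendall(CHUNK)
            sent += len(CHUNK)
    mbit = sent * 8 / DURATION / 1e6
    print(f"throughput ~{mbit:.0f} Mbit/s, guest CPU ~{psutil.cpu_percent(None):.0f}%")

if __name__ == "__main__":
    run()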

Net: Leave TCP/IP Offload off in your Guest VMs.

Use iSCSI (TOE-based) hardware initiators for SAN/disk access.

TOE for NICs (network-based access) isn't there yet, but it would help a lot.

I hope I've helped to clarify here. :)

I swear I used paragraphs, but it's not taking.

-Bill

avarcher
Commander

This is all good stuff; can I offer a bit of crystal-ball-gazing? In the near to medium term we will get I/O virtualisation, which I see as the ability of the guest OS to harness the capabilities of the underlying hardware - with potential for good things on SAN, and the ability of the guest OS to use TOE - but would this TOE aspect be worthwhile?

Cheers, Andy.

locosta
Contributor

Hi

Is there any new information about full TOE support in VMware ESX 3.x?

Regards

Artemis
Contributor

*Bump*

Any new word on TNIC support on ESX3?

MarkE100
Enthusiast

Rumour has it TOE support will be in 3.1.0, along with lots of other nice new networking features:

Support for InfiniBand network cards

Support for 10Gbit Ethernet network cards

Support for TCP/IP Offload Engine (TOE) network cards

Support for network load balancing algorithms

Support for IPv6 in virtual networking

Support for Cisco Discovery Protocol (CDP)

specVM
Contributor

Any update on this?

My VMs get pegged 100% when trying to saturate their 1Gbit NIC.

The host supports TOE on its bonded 2Gbit connection, which is very nice.

It would be great if the VMs could leverage the offloading.

I really want to virtualize my file server, but the lack of TOE in the VMs is putting me off.

RenaudL
Hot Shot

My VMs get pegged 100% when trying to saturate their 1Gbit NIC.

There's definitely something wrong in that case. Which virtual NIC are you using? This is very important, as most offloading features are only available with Enhanced vmxnet.
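
If you're not sure which virtual NIC a VM is configured with, one quick way to check is the ethernetN.virtualDev entries in its .vmx file. Here's a small sketch (my own; the path is a placeholder) that prints them. Values like "vlance" or "e1000" mean you're not on the vmxnet path at all, and the exact string your build uses for Enhanced vmxnet can depend on the ESX version.

# Quick check of which virtual NIC type(s) a VM is configured with, by reading
# the ethernetN.virtualDev entries from its .vmx file. The path is a placeholder.
import re

VMX_PATH = "/vmfs/volumes/datastore1/myvm/myvm.vmx"   # placeholder path

def nic_types(vmx_path):
    pattern = re.compile(r'^(ethernet\d+)\.virtualDev\s*=\s*"([^"]+)"', re.IGNORECASE)
    found = {}
    with open(vmx_path) as f:
        for line in f:
            match = pattern.match(line.strip())
            if match:
                found[match.group(1)] = match.group(2)   # e.g. "vlance", "e1000", "vmxnet"
    return found

if __name__ == "__main__":
    for nic, dev in sorted(nic_types(VMX_PATH).items()):
        print(f"{nic}: {dev}")

Note that if a NIC has no virtualDev line at all, it's simply using the default device for that virtual hardware, so an empty result doesn't tell you much by itself.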
