We currently have a live Oracle server that has an LACP connection over four 1 Gb NICs to a NetApp FAS2040. Unfortunately it's failing, so we're going to create a VM. We have it online right now, but I have a question. The VMware host has four 1 Gb NICs trunked out; will a single VMXNET3 NIC be able to push as much traffic as the four trunked NICs could, or do I need to create a similar setup? Has anyone else done something like this?
Thanks in advance.
VMXNET3 supports up to a 10 Gb/sec transfer rate, so you can definitely have your uplinks on LACP and present one virtual VMXNET3 NIC to the VM and attain those speeds, although your bottleneck will most probably be your disk IOPS.
See this document for more detailed info on VMXNET3 performance:
http://www.vmware.com/pdf/vsp_4_vmxnet3_perf.pdf
regards,
Rick
What guest OS is this? (My experience is mostly with Linux.) My preference would be to simplify things by having a single VMXNET3 NIC in the VM. VMXNET3 usually presents itself as a 10 GbE NIC, but I think I've seen it run faster than that between VMs on a single host (and between VMs on blades within a single HP blade chassis). It would be worth doing a quick sanity test, though, if you've got the VM running.
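For that sanity test, here's a minimal sketch of a raw TCP push in Python (the port, chunk and total sizes below are arbitrary values I've picked; it also assumes Python is present on both ends, which a stock Solaris 10 guest may not have - a tool like iperf between the VM and another host does the same job):

#!/usr/bin/env python
# Minimal TCP throughput sanity check: run "server" on one end,
# "client <server-ip>" on the other. Port and sizes are hypothetical.
import socket, sys, time

PORT = 5201               # assumed free port
CHUNK = 1024 * 1024       # 1 MiB send buffer
TOTAL = 2 * 1024**3       # push 2 GiB so the link stays busy for a while

def server():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", PORT))
    srv.listen(1)
    conn, addr = srv.accept()
    received, start = 0, time.time()
    while True:
        data = conn.recv(CHUNK)
        if not data:
            break
        received += len(data)
    elapsed = time.time() - start
    print("received %.1f MiB in %.1f s = %.2f Gbit/s"
          % (received / 2.0**20, elapsed, received * 8 / elapsed / 1e9))

def client(host):
    sock = socket.create_connection((host, PORT))
    payload = b"\0" * CHUNK
    sent = 0
    while sent < TOTAL:
        sock.sendall(payload)
        sent += len(payload)
    sock.close()

if __name__ == "__main__":
    client(sys.argv[2]) if sys.argv[1] == "client" else server()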
It's a Solaris 10 machine, sorry. Thought I had put that in there. Thanks for the info, guys. That information helps quite a bit; I'll run some testing for the heck of it, though.
Hi there - you will find that the trunked ports will not aggregate traffic into the VM. As far as the vNIC is concerned it will use only one path, so your maximum is 1 Gb, I'm afraid, unless you can hook up the NetApp in such a way that you can present it over four vNICs -> subnets. We have had some people try this with some success, I believe. You're using NFS, right?
Looking quickly around, this one might be a good place to start.
vSwitches do support LACP, but only level 0 (if I recall the terminology correctly), which is for failover and not aggregation of traffic. For that reason we generally avoid referring to LACP, as it creates confusion!
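To make the single-path point concrete, here is a rough sketch of how an IP-hash style load-balancing policy chooses an uplink (my understanding of the mechanism, so treat the details as an assumption; choose_uplink is just a hypothetical helper, not a vSphere API). The point is that one source/destination IP pair always hashes to the same uplink, so a single storage session tops out at 1 Gb no matter how many NICs are in the trunk:

# Rough sketch of an IP-hash style uplink selection: XOR the last octets
# of source and destination IP, modulo the number of uplinks.
def choose_uplink(src_ip, dst_ip, num_uplinks):
    src_last = int(src_ip.split(".")[-1])   # last octet of source IP
    dst_last = int(dst_ip.split(".")[-1])   # last octet of destination IP
    return (src_last ^ dst_last) % num_uplinks

# One VM talking to one storage target IP: the same single uplink every time,
# so that session is capped at 1 Gb regardless of the trunk width.
print(choose_uplink("10.0.0.50", "10.0.0.200", 4))
print(choose_uplink("10.0.0.50", "10.0.0.200", 4))   # same answer again

# Presenting the storage on several target IPs (or subnets, as suggested
# above) gives the hash different inputs, so different sessions can land
# on different uplinks.
for target in ("10.0.0.200", "10.0.0.201", "10.0.0.202", "10.0.0.203"):
    print(target, "-> uplink", choose_uplink("10.0.0.50", target, 4))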
HTH.
dB
Unfortunately no, it's iSCSI. We had a hard time getting the vendor who maintains the database (an Oracle-based electronic medical records system) to even agree to a virtual server; when we got them to agree, they would only work with iSCSI. Our current Oracle server is failing slowly but surely, and we're building this as a temporary solution, although if performance is adequate then we'll use it as a permanent one.
We currently have four NICs from each ESXi server set up as an LACP trunk across an HP 5406zl, and four NICs from each head on the NetApp SAN also set up as an LACP trunk.
I think I need to elaborate on this more, as it's a bit more complex than that.
The ESX-to-SAN connection is an NFS share. However, the way things are set up, we have an iSCSI LUN on the NetApp holding the actual database, which the current physical server attaches to. The virtual machine itself is stored on the NFS volume, but connects via iSCSI to the LUN holding the actual database. Hope that's a bit clearer.
That's a good point by dBurgess - I hadn't realised that limitation existed in vSwitches (it seems quite a bad one too - why can't the links be aggregated virtually if they can be physically?).
Therefore I've three suggestions for you to consider:
HTH
Simon
You can use the same technique for iSCSI arrays; I'd check with NetApp to see if that can work for them. Unless I'm totally mistaken, the LACP trunk won't give you what you are looking for, though. Could be 1 gig is enough - do you have any stats from the existing setup?
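If there are no stats handy, one rough way to get them is to sample the NIC byte counters on the existing server over an interval. A minimal sketch, assuming the psutil package can be installed there (an assumption on my part - on Solaris 10 you may be better served by the native kstat/netstat counters):

# Rough sketch: sample per-NIC byte counters twice and report the average
# throughput in between. Assumes the psutil package is available.
import time
import psutil

INTERVAL = 60  # seconds between samples

before = psutil.net_io_counters(pernic=True)
time.sleep(INTERVAL)
after = psutil.net_io_counters(pernic=True)

for nic, stats in after.items():
    rx = stats.bytes_recv - before[nic].bytes_recv
    tx = stats.bytes_sent - before[nic].bytes_sent
    print("%-8s rx %6.1f Mbit/s  tx %6.1f Mbit/s"
          % (nic, rx * 8 / INTERVAL / 1e6, tx * 8 / INTERVAL / 1e6))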
Sent from mobile... dB