VMware Cloud Community
stcchc
Contributor

Oracle VMware network question

We currently have a live Oracle server with an LACP connection over four 1 Gb NICs to a NetApp FAS2040. Unfortunately it's failing, so we're going to create a VM. We have it online right now, but I have a question. The VMware host also has four 1 Gb NICs trunked out: will a single VMXNET3 NIC be able to push as much traffic as the four trunked NICs could, or do I need to create a similar setup? Has anyone else done something like this?

Thanks in advance.

7 Replies
Simon_H
Enthusiast

What guest OS is this? (My experience is mostly Linux.) My preference would be to simplify things by having a single VMXNET3 NIC in the VM. VMXNET3 usually looks like a 10 GbE NIC, but I think I've seen it working faster than that between VMs on a single host (and between VMs on blades within a single HP blade chassis). It would be worth doing a quick sanity test, though, if you've got the VM running.
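If it's useful, here's a rough sketch of the kind of sanity test I mean - a throwaway Python TCP throughput check (the port number and duration are just placeholders, and iperf will do a far better job if you can install it on the guest):

```python
#!/usr/bin/env python
"""Rough TCP throughput sanity check between two hosts.

Run "python netcheck.py server" on the VM, then
"python netcheck.py client <vm-ip>" from another machine.
Numbers are only ballpark figures - use iperf for anything serious.
"""
import socket
import sys
import time

PORT = 5201                  # arbitrary test port, change if it clashes
CHUNK = 64 * 1024            # 64 KiB send/receive buffer
DURATION = 10                # seconds the client pushes data for

def server():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", PORT))
    srv.listen(1)
    conn, _addr = srv.accept()
    total, start = 0, time.time()
    while True:
        data = conn.recv(CHUNK)
        if not data:
            break
        total += len(data)
    elapsed = time.time() - start
    print("received %.1f MB in %.1f s -> %.2f Gbit/s"
          % (total / 1e6, elapsed, total * 8 / elapsed / 1e9))

def client(host):
    sock = socket.create_connection((host, PORT))
    payload = b"x" * CHUNK
    total, start = 0, time.time()
    while time.time() - start < DURATION:
        sock.sendall(payload)
        total += len(payload)
    sock.close()
    elapsed = time.time() - start
    print("sent %.1f MB in %.1f s -> %.2f Gbit/s"
          % (total / 1e6, elapsed, total * 8 / elapsed / 1e9))

if __name__ == "__main__":
    if len(sys.argv) >= 2 and sys.argv[1] == "server":
        server()
    elif len(sys.argv) >= 3 and sys.argv[1] == "client":
        client(sys.argv[2])
    else:
        print("usage: netcheck.py server | client <host>")
```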

Pikee99
Contributor

VMXNET3 supports up to a 10 Gb/s transfer rate, so you can definitely keep your uplinks on LACP and present one virtual VMXNET3 NIC to the VM and still attain those speeds, although your bottleneck will most probably be your disk IOPS.

See this document for more detailed info on VMXNET3 performance:

http://www.vmware.com/pdf/vsp_4_vmxnet3_perf.pdf

regards,

Rick

=============== Riccardo Ventura Twitter: @Hypervise Linked in: http://ca.linkedin.com/pub/riccardo-ventura/4/b5/827 My Blog: http://hypervise.wordpress.com
stcchc
Contributor

It's a Solaris 10 machine, sorry - thought I had put that in there. Thanks for the info, guys. That information helps quite a bit; I'll run some testing for the heck of it, though.

dburgess
VMware Employee

Hi there - you will find that the trunked ports will not aggregate traffic into the VM. As far as the vNIC is concerned it will use only one path, so your maximum is 1 Gb/s I'm afraid, unless you can hook up the NetApp in such a way that you can present it over four vNICs -> subnets. We have had some people try this with some success, I believe. You're using NFS, right?

Looking quickly around, this one might be a good place to start:

http://wahlnetwork.com/2012/04/27/nfs-on-vsphere-technical-deep-dive-on-multiple-subnet-storage-traf...

vSwitches do support LACP, but only level 0 (if I recall the terminology correctly), which is for failover and not aggregation of traffic. For that reason we generally avoid referring to it as LACP, as it creates confusion!
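To make that concrete, here's a very rough Python sketch - a simplified illustration only, not the exact hash the vSwitch uses - of why a single VM-to-filer conversation only ever lands on one uplink, and why presenting the storage on several IPs/subnets gives the team something to spread across:

```python
# Simplified illustration (not VMware's exact algorithm) of IP-hash style
# teaming: the hash of (source IP, destination IP) always picks the same
# uplink, so one VM talking to one filer IP only ever uses one 1 Gb link.
import socket
import struct

UPLINKS = ["vmnic0", "vmnic1", "vmnic2", "vmnic3"]

def ip_to_int(ip):
    return struct.unpack("!I", socket.inet_aton(ip))[0]

def pick_uplink(src_ip, dst_ip):
    # XOR the two addresses and take the result modulo the uplink count.
    return UPLINKS[(ip_to_int(src_ip) ^ ip_to_int(dst_ip)) % len(UPLINKS)]

# One VM IP talking to one filer IP: every packet hashes identically.
print(pick_uplink("10.0.0.50", "10.0.0.10"))   # always the same vmnic
print(pick_uplink("10.0.0.50", "10.0.0.10"))   # ...no matter how often you ask

# Presenting the storage on several IPs/subnets lets different flows land
# on different uplinks (roughly what the linked article sets up):
for filer_ip in ("10.0.1.10", "10.0.2.10", "10.0.3.10", "10.0.4.10"):
    print("%s -> %s" % (filer_ip, pick_uplink("10.0.0.50", filer_ip)))
```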

HTH.

dB

stcchc
Contributor

Unfortunately no, it's iSCSI. We had a hard time getting the vendor who maintains the database (an Oracle-based electronic medical records system) to even agree to a virtual server, and they would only work with iSCSI when we got them to agree. Our current Oracle server is failing slowly but surely, and we're building this as a temporary solution, unless performance is adequate, in which case we'll use it as a permanent one.

We currently have four NICs from each ESXi server set up as an LACP trunk across an HP 5406zl, and four NICs from each head on the NetApp SAN also set up as an LACP trunk.

I think I need to elaborate on this more, as it's a bit more complex than that.

The ESX-to-SAN connection is an NFS share. However, the way things are set up, we have an iSCSI LUN on the NetApp holding the actual database, which the current physical server attaches to. The virtual machine itself is stored on the NFS volume but connects via iSCSI to the LUN holding the actual database. Hope that's a bit clearer!

Simon_H
Enthusiast

That's a good point by dBurgess - I hadn't realised that limitation existed in vSwitches (it seems quite a bad one too - why can't the links be aggregated virtually if they can be physically?).

Therefore I've three suggestions for you to consider:

  1. Pass the NICs straight through to the VM guest (i.e. using VT-d / IOMMU) and then bond the NICs within the OS itself. Of course, this will require that those NICs are dedicated to that single VM, so if that's all of them on the host you may need to keep a couple back for ESXi / any other VMs, or else buy some extra NICs. I'm guessing if you're running iSCSI you perhaps already have dedicated NICs. You'll also have to look into Solaris bonding - I think you can do active/active now, but you'll need to check.
  2. If your 4 NICs are used for both iSCSI and general network traffic, are you sure you need more than 1 Gbps for iSCSI? Could you manage with the iSCSI NICs (active/passive) in a separate VLAN (if not already) and a separate 1 Gb vNIC in the VM just for storage traffic? (See the sketch after this list for a quick way to check.)
  3. It looks as though your switch chassis can have up to 24 10 GbE ports (http://h10010.www1.hp.com/wwpc/ca/en/sm/WF06b/12136296-12136298-12136298-12136298-12136304-12388288-...) - would it be worth buying the appropriate 10 GbE modules and a couple of 10 GbE NICs for the server to increase the network bandwidth to the VM? Alternatively, if that switch is due to be replaced fairly soon, maybe you could install some new 10 GbE networking alongside the existing kit and trunk into the core?
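Regarding point 2, here's a quick Python sketch of the sort of check I mean - it just samples the interface byte counters over an interval. It assumes the psutil package is available; on Solaris 10 you'd more likely pull the same numbers out of kstat or the switch port counters:

```python
# Quick-and-dirty check of whether the existing NICs actually push more
# than ~1 Gbit/s: sample the per-interface byte counters over an interval.
# Assumes the third-party psutil package is installed; adjust NIC to the
# interface name you care about (e.g. "e1000g0") or leave None for all.
import time
import psutil

INTERVAL = 10   # seconds between samples
NIC = None

def snapshot():
    counters = psutil.net_io_counters(pernic=True)
    if NIC:
        counters = {NIC: counters[NIC]}
    return {name: (c.bytes_sent, c.bytes_recv) for name, c in counters.items()}

before = snapshot()
time.sleep(INTERVAL)
after = snapshot()

for name in sorted(before):
    tx = (after[name][0] - before[name][0]) * 8.0 / INTERVAL / 1e9
    rx = (after[name][1] - before[name][1]) * 8.0 / INTERVAL / 1e9
    print("%-10s  tx %.2f Gbit/s   rx %.2f Gbit/s" % (name, tx, rx))
```

If the busiest interface is nowhere near 1 Gbit/s during a heavy database period, a single vNIC plus a dedicated storage vNIC may well be all you need.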

HTH

Simon

dburgess
VMware Employee

You can use the same technique for iSCSI arrays; I'd check with NetApp to see if that can work for them. Unless I'm totally mistaken, the LACP trunk won't give you what you are looking for, though. Could be 1 Gb is enough - do you have any stats from the existing setup?

Sent from mobile... dB
