What guest OS is this? (My experience is mostly Linux.) My preference would be to simplify things by having a single VMXNET3 NIC in the VM. VMXNET3 usually presents as a 10GbE NIC, but I think I've seen it run faster than that between VMs on a single host (and between VMs on blades within a single HP blade chassis). It would be worth doing a quick sanity test, though, if you've got the VM running.
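For that sanity test, something like iperf gives a quick read on raw throughput between the VM and another host. A rough sketch (assumes iperf is installed on both ends; the IP address is a placeholder):

```shell
# On the receiving host (another VM, or a box on the far side of the trunk):
iperf -s

# On the guest VM: 4 parallel streams for 30 seconds.
# 192.168.1.50 is a placeholder for the receiver's IP.
iperf -c 192.168.1.50 -t 30 -P 4
```

Between two VMs on the same host you may well see multi-gigabit numbers; a single stream across the physical trunk is the more interesting test.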
VMXNET3 supports up to a 10Gb/sec transfer rate, so you can definitely have your uplinks on LACP and present one virtual VMXNET3 NIC to the VM and attain those speeds, although your bottleneck will most probably be your disk IOPS.
See this document for more detailed info on VMXNET3 performance.
It's a Solaris 10 machine, sorry - thought I had put that in there. Thanks for the info, guys. That information helps quite a bit; I'll run some testing for the heck of it though.
Hi there - you will find that the trunked ports will not aggregate traffic into the VM. As far as the vNIC is concerned it will use only one path, so your maximum is 1Gb/s I'm afraid, unless you can hook up the NetApp in such a way that you can present it over 4 vNICs -> subnets. We have had some people try this with some success, I believe. You're using NFS, right?
Looking quickly around, this one might be a good place to start.
vSwitches do support LACP but only level 0 (if I recall the terminology correctly) which is for failover and not aggregation of traffic. For that reason we generally avoid referring to LACP as it creates confusion!
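To illustrate why a single flow can't be spread across links: with IP-hash load balancing, the uplink is chosen from the source/destination IP pair, so one connection (such as a single iSCSI session) always lands on the same 1Gb link. A toy sketch of that selection - the octet values and the XOR-mod formula here are simplified assumptions for illustration, not VMware's exact implementation:

```shell
# Toy model of IP-hash uplink selection: one src/dst pair -> one uplink,
# every time. Octet values below are made-up examples.
src=10       # last octet of the VM's IP (assumed)
dst=20       # last octet of the SAN's IP (assumed)
uplinks=4    # NICs in the trunk

echo "chosen uplink: $(( (src ^ dst) % uplinks ))"   # -> chosen uplink: 2
```

Because the hash inputs never change for a given pair of endpoints, that session is pinned to one physical NIC; only many distinct src/dst pairs spread across the trunk.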
Unfortunately no, it's iSCSI. We had a hard time getting the vendor who maintains the database (an Oracle-based electronic medical records system) to even agree to a virtual server; when they did agree, they would only work with iSCSI. Our current Oracle server is failing slowly but surely, and we're building this as a temporary solution - if performance is adequate then we'll use it as a permanent one.
We currently have 4 NICs from each ESXi server set up as an LACP trunk across an HP 5406zl, and 4 NICs from each head on the NetApp SAN also set up as an LACP trunk.
I think I need to elaborate on this more, as it's a bit more complex than that.
The ESX-to-SAN connection is an NFS share. However, the way things are set up is we have an iSCSI LUN on the NetApp holding the actual database, which the current physical server attaches to. The virtual machine itself is stored on the NFS volume, but connects via iSCSI to the LUN holding the actual database. Hope that's a bit clearer.
That's a good point by dBurgess - I hadn't realised that limitation existed in vSwitches (it seems quite a bad one too - why can't the links be aggregated virtually if they can be physically?).
Therefore I've three suggestions for you to consider:
- Pass the NICs straight through to the VM guest (i.e. using VT-d / IOMMU), and then bond the NICs within the OS itself. Of course this will require that those NICs are dedicated for that single VM so if that's all of them on the host then you may need to keep a couple back for ESXi / any other VMs, or else buy some extra NICs. I'm guessing if you're running iSCSI you perhaps already have dedicated NICs. You'll also have to look into Solaris bonding - I think you can do active/active now but you'll need to check.
- If your 4 NICs are used for both iSCSI and general network traffic are you sure you need more than 1Gbps for iSCSI - could you manage with the iSCSI NICs (active/passive) in a separate VLAN (if not already) and a separate 1Gb vNIC in the VM just for storage traffic?
- It looks as though your switch chassis can have up to 24 10GbE ports (http://h10010.www1.hp.com/wwpc/ca/en/sm/WF06b/12136296-12136298-12136298-12136298-12136304-12388288-71606335.html?dnr=1) - would it be worth buying the appropriate 10GbE modules and a couple of 10GbE NICs for the server to increase the network bandwidth to the VM? Alternatively if that switch is due to be replaced fairly soon maybe you can install some new 10GbE networking alongside the existing and trunk into the core?
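For the first suggestion (passing the NICs through and bonding in the guest), Solaris 10 can do active/active aggregation with dladm. Roughly like this - the device names (e1000g0/e1000g1) and IP are placeholders, so check your actual passed-through device names and make sure the switch side is configured for LACP before trying it:

```shell
# Create an 802.3ad link aggregation (key 1) over two NICs (names assumed).
dladm create-aggr -d e1000g0 -d e1000g1 1

# Enable active LACP negotiation on the aggregation.
dladm modify-aggr -l active 1

# Plumb and address the aggregated link (IP is a placeholder).
ifconfig aggr1 plumb 192.168.1.100/24 up

# Verify the aggregation state and member ports.
dladm show-aggr
```

Note the same single-flow caveat applies here too: one iSCSI session still hashes to one member link, so aggregation in the guest mainly helps when there are multiple concurrent flows.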
You can use the same technique for iSCSI arrays; I'd check with NetApp to see if that can work for them. Unless I'm totally mistaken, the LACP trunk won't give you what you are looking for, though. Could be 1Gb is enough - do you have any stats from the existing setup?
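If you don't have stats yet, a few minutes of sampling on the existing Solaris box would tell you whether you're anywhere near saturating a single 1Gb link. Standard Solaris tools, sketched below - the interface name is a placeholder for whatever your box actually uses:

```shell
# Network interface packet counters every 5 seconds (bge0 is a placeholder).
netstat -i -I bge0 5

# Extended disk I/O stats every 5 seconds - watch kr/s, kw/s and %b (busy).
iostat -xn 5

# Per-device activity, 10 samples at 5-second intervals.
sar -d 5 10
```

If the existing physical server rarely pushes past a few hundred Mb/s of storage traffic, the 1Gb-per-flow ceiling on the trunk may be a non-issue for the temporary box.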
Sent from mobile... dB