bhirst
Contributor

Can’t get ESX 3.5 to push data through two VMKernel physical adapters simultaneously


Guys, really scratching my head here: I have two physical NICs on my VMkernel, but no matter what I do, I can't get traffic to go over both gigabit links simultaneously.



Test setup:

• ESX server with two VMs, each running IOMeter against NAS NFS-mounted file systems

• Separately, each VM can saturate a gigabit link using IOMeter

Here's what's really going to flip your noodle: when I issue the command "port-channel load-balance src-dst-ip" on my 3750, the traffic switches to the other port! (See attached images for proof.)
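That behavior is consistent with how hash-based EtherChannel balancing works. The exact Cisco 3750 hash is internal to the platform, but a commonly described scheme XORs low-order bits of the source and destination IPs modulo the number of member links, so a minimal sketch (hypothetical addresses) shows why one src/dst pair always rides exactly one link, and why switching the load-balance method can move that one flow to the other port without ever using both:

```python
# Sketch of an EtherChannel "src-dst-ip" balancing decision. The real Cisco
# hash is platform-internal; this assumes the commonly described XOR scheme.
def member_link(src_last_octet: int, dst_last_octet: int, links: int = 2) -> int:
    """Pick a port-channel member link for one src/dst IP pair."""
    return (src_last_octet ^ dst_last_octet) % links

# One VMkernel IP talking to one NAS IP (hypothetical 10.0.10.5 -> 10.0.10.50):
# every frame hashes to the same member link, no matter how many VMs push I/O
# through that single connection.
print([member_link(5, 50) for _ in range(4)])  # the same link, four times
```

Changing the algorithm changes which single link the pair hashes to, which is exactly "the traffic switches to the other port" rather than aggregation.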



Configuration background:

Hardware: DL380G5 / 2x Cisco 3750's stacked

VMKernel: ip-hash team vmnic1,4

Cisco Etherchannel Config:

interface Port-channel40
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 10
 switchport mode trunk
 spanning-tree portfast trunk

Etherchannel summary:

Switch1#show etherchannel 40 sum
Flags:  D - down          P - in port-channel
        I - stand-alone   s - suspended
        H - Hot-standby (LACP only)
        R - Layer3        S - Layer2
        U - in use        f - failed to allocate aggregator
        u - unsuitable for bundling
        w - waiting to be aggregated
        d - default port

Group  Port-channel  Protocol  Ports
------+-------------+---------+----------------------------
40     Po40(SU)      -         Gi1/0/18(P)  Gi2/0/18(P)

Rumple
Virtuoso

You should also add a native VLAN that doesn't carry any traffic that is going to cross that trunk, to ensure it tags all traffic, or it may not work correctly. More good housekeeping for ESX than anything.

Now, from what I understand (and I've been known to be wrong most times), without actually setting up an EtherChannel it is not going to give you aggregate bandwidth to a single target. It's going to pick a NIC and send the traffic over it. Only if you EtherChannel those links and set the port to a trunk will it use the aggregate bandwidth.

If you have two VMs and you start them with that setting, you will only see 1 Gb of bandwidth, since it's not the VMs making the data connection; it's the VMkernel making the connection to the NAS. So you could start five VMs and it's still only a single connection to the NAS.

dominic7
Virtuoso

Behind the VMkernel port is a virtual Ethernet interface, and just like a virtual Ethernet interface from a VM, it can only be bound to a single physical NIC at any one time. So while you can provide an aggregated link, I don't think you'll ever be able to utilize more than one physical NIC unless VMware reworks their design. The same is true for the VMware software iSCSI initiator. The load-balancing options deal with how the virtual interfaces are spread across the physical NICs.

kjb007
Immortal

I think the problem you're running into here is that you're using an NFS-mounted datastore. For NFS, the mount actually happens from the Service Console, which is one IP, connecting to the NFS server, which is also one IP. When you're using the src-dst-ip algorithm, your connections are balanced using source and destination IP, which in your case is the same every time, so no load balancing occurs. As long as you have one source IP and one destination IP, you can only use one NIC. If you change the setup to something that provides more than one combination, you can balance your load. In your scenario, that means you cannot use NFS. If you use iSCSI, then you can balance the load by using multiple targets, if your SAN supports it. Otherwise, you could mount iSCSI inside your VMs themselves, and your traffic will be balanced in that scenario as well.

Hope that helps,

-KjB

vExpert/VCP/VCAP vmwise.com / @vmwise -KjB
bhirst
Contributor

Guys – thank you for the replies, much appreciated. I will make sure I tag this question as answered by someone.

What I’m reacting to is this document : “[H2756 - Using EMC Celerra IP Storage with VMware Infrastructure 3 over iSCSI and NFS|http://www.vmware.com/resources/techresources/1036] ”

Are you guys saying (in your posts above) that a VMkernel can only utilize ONE gigabit link to access datastores mounted on NFS shares? That would imply that distribution of network load across multiple interfaces is not possible?

Excerpt from Document H2756:

Storage Access Path High Availability options

The ESX hosts offer several advanced networking options to improve the service level of the VMkernel IP storage interface. NIC teaming in ESX Server 3 provides options for load balancing and redundancy... Consider the use of NIC teams with IP or port hashing to distribute the load across multiple network interfaces on the ESX Server host. When multiple Celerra iSCSI targets or NFS file systems are provisioned for ESX Server, NIC teaming can be combined with Celerra advanced network functionality to route the sessions across multiple Data Mover network interfaces for NFS and iSCSI sessions. The NIC teaming feature will allow for the use of multiple NICs on the VMkernel switch. Logical network connections configured for link aggregation on Celerra provide a session-based load-balancing solution that can improve throughput to the virtual machines. Figure 6 provides a topological diagram of how EtherChannel can be used with ESX and Celerra.


THANKS

kjb007
Immortal

Unfortunately, I think that would have to be a yes. As I stated above, NFS is only mounted by the Service Console. If you had two separate NFS servers and mounted each as a separate datastore through the Service Console, then you would get the side benefit of load balancing, as long as the src-dst IP combinations are different.

-KjB

vExpert/VCP/VCAP vmwise.com / @vmwise -KjB
bhirst
Contributor

Do I need multiple iSCSI targets to see load balancing over the interfaces? I mounted an iSCSI LUN via RDM (physical), turned on IOMeter, and I still only see traffic over a single link.

kjb007
Immortal

Sorry to say yes again, but yes. Otherwise, your array has to support multiple sessions per target. Alternatively, if you have multiple IP addresses on your array, you can map the LUNs twice that way.

-KjB

vExpert/VCP/VCAP vmwise.com / @vmwise -KjB
bhirst
Contributor

KjB - I'm really close to getting this right: I have two IP addresses on my array and two physical NICs assigned to my VMkernel. The iSCSI targets come up twice as expected, and everything looks great; however, traffic still only goes over one interface! I'm really starting to think that link aggregation on the ESX side is a myth.

kjb007
Immortal

Ok, the infrastructure is complete. Now let's verify the policy and teaming are good as well. Look in the VI client networking configuration for your iSCSI vSwitch: in the NIC teaming section, under failover, make sure neither NIC is marked standby and that both are in active mode.

-KjB

vExpert/VCP/VCAP vmwise.com / @vmwise -KjB
bhirst
Contributor

Both are indeed set to active.

p.s. I just read this here ( )

NIC Teaming...

A given virtual machine cannot use more than one physical Ethernet adapter at any given time unless ...

That kind of blows my mind... does that require teaming virtual adapters INSIDE the guest OS?

kjb007
Immortal

Since the teaming is done at the ESX level, this should not apply. Now, you have two target IP addresses, so your src-dst-ip policy should balance between the physical NICs. How is your storage mapped? Do you have two datastores mapping to two LUNs, one from each target?
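With two target IPs, a quick sketch (assuming the commonly described XOR-of-low-bits hash; the real Cisco hash is platform-internal, and all addresses here are hypothetical) shows how two iSCSI sessions can land on different member links under a src-dst-ip policy:

```python
# Two array target IPs behind the same "src-dst-ip" policy (assumed XOR hash;
# addresses are hypothetical).
def member_link(src_last_octet: int, dst_last_octet: int, links: int = 2) -> int:
    return (src_last_octet ^ dst_last_octet) % links

src = 5             # VMkernel port, e.g. 10.0.10.5
targets = [20, 21]  # array target IPs, e.g. 10.0.10.20 and 10.0.10.21
print([member_link(src, t) for t in targets])  # the two sessions split across both links
```

Whether any given pair of addresses actually splits depends on the bits the hash uses, so adjacent target IPs are a reasonable choice when you want the sessions on different links.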

-KjB

vExpert/VCP/VCAP vmwise.com / @vmwise -KjB
bhirst
Contributor

See the attachment for how the LUNs present in VMware. My IOMeter machine has one of the LUNs mounted as a raw physical RDM.

kjb007
Immortal

Ok, you will either have to map both LUNs to the same VM and then run IOMeter, or have a second VM connecting to the second LUN and run IOMeter in each.

-KjB

vExpert/VCP/VCAP vmwise.com / @vmwise -KjB

bhirst
Contributor

Ok, cool! Looks like it's working! I see RX on two vmnics! Looks like TX is only going out on one, though... 1/2 solved.

p.s. know off the top of your head why I can't create an rdmp.vmdk on an NFS volume?

WARNING: NFS: 2281: Unable to create IOMeter_Small_3-rdmp.vmdk, nfsclient does not support creation of type: 8

kjb007
Immortal

Actually, I think so. The difference between NFS and an iSCSI/FC SAN is that with iSCSI and FC, you have access to the raw LUN and can get at the disk itself for SCSI locks. With NFS, you don't have access to the disk, only to the file you are accessing, since the filesystem exists on the NFS server.

-KjB

vExpert/VCP/VCAP vmwise.com / @vmwise -KjB
bhirst
Contributor

Makes sense - how the heck can I support VMotion if I have to store the mapping file on the local disk?

kjb007
Immortal

Well, you can't do RDM from an NFS location, but you can still create a VMFS datastore from an NFS share. From there, you create your VMs with their disks existing on the datastore. Now you have an NFS datastore which can be mounted from all your servers, which will support VMotion.

-KjB

vExpert/VCP/VCAP vmwise.com / @vmwise -KjB
bhirst
Contributor

Create a VMFS datastore from an NFS share? How is that accomplished?

kjb007
Immortal

Sorry, I meant NFS datastore, not VMFS datastore. NFS datastores are used quite often to store templates and images. I borrowed the steps below from another post; thanks to dkaur.

Creating an NFS datastore is a two-step process:

1) For NFS access, you will need to create a VMkernel port. The properties of the vSwitch holding the Service Console port should allow you to add a new connection of type VMkernel, with the IP address and subnet mask for the VMkernel port.

2) You can use the storage link to Add Storage. The Add Storage wizard prompts you for the storage type, where you should select "Network File System", then asks for a datastore name, the name of the NFS server, and a directory on the NFS server, if needed.

http://communities.vmware.com/thread/94937

vExpert/VCP/VCAP vmwise.com / @vmwise -KjB