scale21
Enthusiast

Issues with 10GbE speeds in 4.1

I've got a bit of a problem here.

We just picked up a new IBM SAN and hooked it up over 10GbE following IBM's recommendations and best practices.

Here is my situation:

Hosts

HP DL360 G7 servers, currently connected over 1GbE to an older SAN (which we are trying to migrate away from).

Each server runs ESX 4.1.

vSwitch0 = service console

vSwitch1 = 1GbE vMotion (on its own VLAN/subnet)

vSwitch2 = 1GbE VM network (on its own VLAN/subnet)

vSwitch3 = 1GbE storage network (on its own VLAN/subnet)

vSwitch4 = 10GbE storage network (isolated HP 10GbE switch)

Our 10GbE cards in the hosts are HP NC552SFPs running the latest VMware drivers (4.1.334.0) and firmware (4.1.450.16).

I am using the VMware software iSCSI adapter. All of our data is in VMDK files; there is no in-guest mapping to the storage.
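In case it helps, here is roughly how I double-check the layout from the service console (ESX 4.1 commands from memory; vmhba33 and the port group names are just placeholders from my setup):

# list vSwitches, their port groups and uplinks
esxcfg-vswitch -l

# list physical NICs and link speeds (the NC552SFP ports should show 10000Mbps)
esxcfg-nics -l

# list VMkernel interfaces (the storage vmk should sit on vSwitch4)
esxcfg-vmknic -l

# show which vmk ports are bound to the software iSCSI adapter
esxcli swiscsi nic list -d vmhba33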

Now my question:

I can't seem to get past 1Gb speeds to our new storage.

For instance, if I Storage vMotion two VMs to a new volume on the new storage, vMotion both VMs so they run on the same host, and then try to copy a 25GB file from a share on one VM to the other VM inside the guest OS, the best I can get is about 60-80 MB/sec.

If I look at esxtop I see utilization on vSwitch2 and vSwitch4, which makes sense, but it is only at 1Gb speeds.

Is there a direct relationship between your VM NICs and your storage NICs, where you are only as fast as your slowest connection?

Even though the VMs are on the same host and datastore, does the transfer still use the VM network and the physical NICs, or is it all internal and only utilizing the 10GbE?

Both of the VMs I'm testing with have VMXNET3 adapters.

So far I have tried jumbo frames on my 10GbE NICs, vSwitch4, and the new SAN, and performance was the same or worse, so that didn't help.
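For what it's worth, this is roughly how I set and verified the jumbo frames (syntax from memory, so double-check it; StorageNet and the IP addresses are placeholders for my storage port group and addresses):

# set a 9000-byte MTU on the storage vSwitch
esxcfg-vswitch -m 9000 vSwitch4

# in 4.1 the storage vmknic has to be removed and recreated with the larger MTU
esxcfg-vmknic -d StorageNet
esxcfg-vmknic -a -i 10.0.0.11 -n 255.255.255.0 -m 9000 StorageNet

# confirm the MTU column, then ping the SAN with a jumbo-sized packet
esxcfg-vmknic -l
vmkping -s 8972 -d 10.0.0.50   # -d sets don't-fragment; drop it if your build doesn't accept it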

If I copy a 22GB file to the same place on the same VM, my storage NICs show activity. Windows reports the speed as about 100 MB/sec, or what I would expect a 1Gb connection to do at full tilt.

I have also tried connecting a physical server (same make, model, hardware), loading Windows 2008 R2 on it, and using the Microsoft iSCSI initiator to mount the storage, and I was able to copy a 12GB file in a few seconds at 1.8+ GB/sec, so I know faster is possible.

What am I missing?

3 Replies
Rumple
Virtuoso

You are measuring the wrong interfaces with your tests.

Any copies of data between VMs, or any svMotions, are utilizing the 1Gbit NICs and not touching the 10Gbit NICs.

Personally, I would drop all the 1Gbit NICs and just utilize the 10G only, since you are not going to be saturating that 10G link anytime soon across multiple hosts.

If you want to see SAN throughput you are going to need to do activities that relate to storage (like running Iometer inside a VM).

That will cause disk I/O, which will use the 10G link.
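While that runs, watch the physical NIC and the iSCSI adapter rather than the guest; something like this in esxtop (interactive keys, from memory):

# run esxtop, then:
#   n - network view: the vmnic uplinked to the storage vSwitch should show MbTX/s and MbRX/s climbing
#   d - disk adapter view: the software iSCSI vmhba shows MBREAD/s and MBWRTN/s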

We have 7 hosts running 2x 10Gb links for everything (SAN, vMotion, mgmt, client network). The only thing I did was cap egress bandwidth on the vMotion port group at 3mbps, because an ESX host will run up to 8 concurrent vMotions at once and could saturate the link, bringing down the storage.
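That is just the standard traffic shaping settings on the vMotion port group in the vSphere Client; the values below are illustrative, not what I actually run:

vSwitch properties > vMotion port group > Edit > Traffic Shaping
  Status:            Enabled
  Average Bandwidth: <cap, in Kbps>
  Peak Bandwidth:    <cap, in Kbps>
  Burst Size:        <KB>

Note that on a standard vSwitch this only shapes outbound (egress) traffic.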

With 250 VMs on 7 hosts we actually see very little real traffic on the 10Gb link unless I do a vMotion.

PS - if those 10G cards are dual port and you have 2 per host, you are running an unsupported configuration, as ESX 4.1 only supports 4x 10Gb NICs, or 2x 10Gb plus 4x 1Gbit, or 8x 1Gb (I think).

You cannot disable one of the ports on the dual-port 10Gbit cards, so with 4 physical 10Gb ports plus multiple 1Gbit ports you may end up with random ports working and not working after reboots.

One other thing: VMware is really designed to ensure that multiple systems can fully utilize any given resource, so running a single VM with a single busy disk doesn't really prove anything, because one VM typically can't utilize all the available resources.

Run Iometer on 20 VMs at once and you will crush that storage link; a single VM, not so much.

Ethan44
Enthusiast

Hi

Welcome to the communities.

Please let us know the answer to the first question, about the media.

Is it FC or LAN?

I assume it must be FC; if so, recheck the network configuration.

"a journey of a thousand miles starts  with a single step."
scale21
Enthusiast

Hmm, interesting.

I see my 10GbE links using only about 1Gb speeds, which would make sense if the data actually is traversing the VM NICs to get to the storage NICs. I always thought that if the VMs were on the same host (and same datastore), the communication between the VMs was all done across the bus of the machine and not the physical NICs.

I'm trying to sort out in my mind how I would have both VM and storage traffic traveling across the same 10GbE link.

I don't know if that is possible in our scenario.

Our vSwitch that contains our VMs has about 20 different port groups (different VLANs for different purposes).

Our vSwitch that contains our storage is just that: storage.

We do have the dual-port 10GbE cards, with only one interface being utilized on each host. There is one of those cards per server, so we have 2x 10GbE ports and are currently only using one of the two interfaces (for storage).

I do not currently have enough ports to employ the second 10GbE interface on each host for our VM network.

Our physical switch that handles VM traffic and client requests is 1Gb, and the vSwitch for that traffic is teamed with two 1GbE cards.

I assume, then, that this needs to get bumped up to 10GbE to get 10GbE speeds (even though in my testing I've been on the same host and same datastore)?

I have also been using Iometer in my VMs on our 10GbE storage network but am getting the same type of results. I have everything in VMDK files currently, so there is no direct-attached iSCSI storage from the VM; I have not attempted that yet. I am confused by how slow Iometer appears to run in a guest against a VMDK file (D: drive) in Windows.

I do have one host here that I can play with that is totally empty. I am curious to know how I can get storage and VM traffic working over the same 10GbE interface. I thought having both kinds of traffic on the same NIC was a bad idea, but I could segment them with more VLANs, maybe, as sketched below. I've got some playing around to do from the sounds of it.
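On the empty host I will probably try something like this to put a tagged VM port group alongside the storage port group on the 10GbE vSwitch (the port group names and VLAN IDs are just placeholders):

# add a VM port group on the 10GbE vSwitch and tag it
esxcfg-vswitch -A "VM Network 10G" vSwitch4
esxcfg-vswitch -v 100 -p "VM Network 10G" vSwitch4

# keep the storage port group on its own VLAN
esxcfg-vswitch -v 200 -p StorageNet vSwitch4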

Another thing that is confusing me:

If I am logged into a VM, and that VM is on our 10GbE storage, I have one path to the 10GbE storage, which is 10Gb. If I take a 20GB file and make a copy of it in the same directory (which is stored in the VMDK on the 10GbE storage), I only see about 100 MB/sec.

I would think this means I am limited in some other way. I know the disks can do faster than that; from a direct-attached physical server via iSCSI I get 1.8 GB/s, for instance.
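Next I am going to watch the latency counters in esxtop while that copy runs, along these lines (interactive keys, from memory):

# esxtop during the file copy:
#   u - disk device view: DAVG/cmd is array/path latency, KAVG/cmd is time spent in the VMkernel
#   v - virtual machine disk view: per-VM throughput and latency
# high DAVG with low throughput would point at the array or path rather than the 10GbE network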
