I currently have one SBS 2003 VM on an ESXi 4 host connected to an HP ProCurve 1800-24G switch. My ESXi host (a Dell R610 server) has 6 gigabit ports. I want to set this up as one 6 Gbit trunk, as we use it for transferring and capturing HD video. My SBS guest VM is set up with the vmxnet3 driver, which shows as a 10 Gbit connection. I have all the adapters active on the host, have set load balancing to "Route based on IP hash", and the switch is set up with these ports as a trunk. All appears to work OK, however I am only getting around 400 Mbps when transferring files. This is actually worse than when I had one Gb port in use and SBS was not set up as a VM!
All ports are detected as 1000Base-T when set to auto, but just for good measure I've forced them to 1000. I can't see where the problem lies. There is a firmware update for the 1800-24G which adds some options when setting up trunks, as follows:
'Trunks — There are now six configuration options on the Trunks -> Membership configuration screen: SMAC XOR DMAC XOR IP-Info, SMAC, DMAC, SMAC XOR DMAC, Pseudo-Randomized, and IP-Info.'
Which is the best option to set up the trunk on the switch? Before I update the firmware I want to make sure I have the correct options set. I've done a lot of reading through the forums and as far as I can see everything is correct. Any help appreciated!
I've attached some snaps of my configuration. I've disabled two of the adapters in these snaps as they were on a different card. The configuration is the same and I still get 400 Mbps.
The device I'm writing to is connected to the VM with VMDirectPath and a Fibre Channel HBA. I get 280 MB/sec average writing to the RAID from WITHIN the VM, so that's not the bottleneck.



I'm assuming my configuration is right if nobody has replied saying it's not! Maybe someone can recommend a decent switch that doesn't cost the earth which they know works well with trunking and ESXi?
Are you transferring between one source and destination pair? If so, the "IP hash" will be the same throughout the transfer and only one pNIC will be used. If you start moving files around from multiple clients to/from your SBS you should see greater aggregate throughput ... providing your storage subsystem is up to the task, of course.
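To see why a single client/server pair never spreads across uplinks, here is a simplified model of "Route based on IP hash" in Python. The exact hash ESXi uses is an implementation detail; this sketch only illustrates the principle that the (source IP, destination IP) pair deterministically picks one uplink, so one transfer stream is capped at one pNIC's bandwidth.

```python
# Simplified model of ESXi "Route based on IP hash" uplink selection.
# Assumption: any deterministic hash of (src IP, dst IP) mod the number of
# uplinks behaves this way; the XOR below is illustrative, not ESXi's exact code.
import ipaddress

def uplink_for(src: str, dst: str, num_uplinks: int) -> int:
    """Pick an uplink index from the (src, dst) IP pair (illustrative only)."""
    s = int(ipaddress.ip_address(src))
    d = int(ipaddress.ip_address(dst))
    return (s ^ d) % num_uplinks

# One client talking to the server: every packet of the transfer hashes to
# the same uplink, so a single file copy can never exceed 1 Gbps.
print(uplink_for("192.168.1.10", "192.168.1.2", 6))
print(uplink_for("192.168.1.10", "192.168.1.2", 6))  # same uplink both times

# Different clients can land on different uplinks, which is where the
# aggregate throughput of the trunk actually comes from.
for client in ("192.168.1.10", "192.168.1.11", "192.168.1.12"):
    print(client, "->", uplink_for(client, "192.168.1.2", 6))
```

The IPs and the 6-uplink count are just placeholders matching the setup described above.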
Also be aware that, to the best of my knowledge, dynamic LACP is not supported. Not sure if the 1800-24G does static LACP?
The 1800-24G does support static, and that particular trunk is set up as such. I've forced the trunk to 1 Gb for all 4 ports as well now, but that makes no difference. Even if nobody else is working on the network and I transfer files from one client (which also has 2 teamed NICs, but I've tried unteamed as well), from a 10-disk RAID 0 with 400 MB/s read/write to the server's RAID at 200 MB/s (minimum), I still only get 400 Mbps max. I wouldn't mind if I got 1 Gb/s from each client, but I don't even get that. That's what is puzzling me.
Most clients work off one wireless base station and don't do any file transfers across the network, so they have little to no impact on network activity. We have 2 Macs and 2 Windows machines that I really need the full bandwidth for; 1 Gbps per client would be great.
Hi,
I have the same environment as domb.
I couldn't get the maximum throughput.
Any advice, VMware?
I'm pretty sure you're being limited by the ProCurve 1800 switch... This was one of the factors I considered when looking at switches (last year) for my lab. I ended up going with the 2510G-24, which has a full CLI in addition to the limited web interface (the web interface being your only option on the 1800 switch)...
Still, I wouldn't advise trying to force the traffic at the switch level as you are. Better to gang the NICs up at the host, then configure the switch to pass the traffic along to the target. If you're using iSCSI, you'll also want jumbo frames properly enabled on every item in that path: the storage array, the switch, and the vSwitches (which must be done via the CLI on the host)...
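For reference, on ESX/ESXi 4.x the vSwitch MTU has to be set from the CLI as mentioned above. A minimal sketch, assuming the vSwitch is named vSwitch0 and using example IP/portgroup values (adjust all of these to your own setup):

```shell
# Set MTU 9000 (jumbo frames) on the vSwitch -- names/values are examples
esxcfg-vswitch -m 9000 vSwitch0

# List vSwitches to verify the new MTU took effect
esxcfg-vswitch -l

# VMkernel NICs must be created with the larger MTU as well;
# the IP, netmask, and "iSCSI" portgroup name here are placeholders
esxcfg-vmknic -a -i 192.168.1.50 -n 255.255.255.0 -m 9000 "iSCSI"
```

Remember the switch ports and the storage array interfaces need matching jumbo-frame settings, or you'll get fragmentation or drops instead of a speedup.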
VMware VCP4
Consider awarding points for "helpful" and/or "correct" answers.
Hi golddiggie ,
I might move my VMware hosts to my 3500 switch now, since the 1800 is pretty limited for VMware.
How did you set up your 2510G?
Currently, ESX is using IP hash with a trunk on the switch.
Should I go with LACP or... any advice?
Thanks,
Uri
I was eventually able to get my farm trunked to the 1800 switch I own, but I don't trust it for production. At times one of the hosts will completely disconnect and cause headaches. Looks like I'm going to have to explain the cost of a 2510G to the wife now..... what a bummer.
