I have been reading the posts about the network speed issues people are experiencing for a while, and I have also been trying the iSCSI and vSphere best practices for about a month.
My setup consists of 2 x Dell 2950s, each with 16GB of RAM, a 2.88GHz quad-core Intel processor, and 6 x 1 Gb ports connected to the HP 2824 switch.
The 1st Dell 2950 has vSphere installed and one Windows 2008 guest OS configured as a DC/file/print server.
The 2nd Dell 2950 has ESX 3.5 installed and 9 other Windows 2003/Linux guest OSes configured.
The 3rd server, a Dell 2850, is a standalone Windows 2008 backup server with a 3GHz Xeon CPU, 3GB of RAM, and 2 x 1 Gb Ethernet ports.
I also have a Dell EqualLogic PS5000E connected to the HP 2824.
My problem seems to be with the transfer rate to/from and between the VMware boxes, as well as between the VMs and the standalone Dell 2850 box.
On either Dell 2950, when I do a normal copy-and-paste file transfer to/from the iSCSI box (the Dell EqualLogic PS5000E), I get anything between 4MB and 34MB per sec depending on the file.
On either Dell 2950, when I do a normal copy-and-paste file transfer to/from any other VM, I get anything between 4MB and 34MB per sec depending on the file.
From either Dell 2950 to/from the Dell 2850, a normal copy-and-paste file transfer also gets anything between 4MB and 34MB per sec depending on the file.
However, if I do multiple copies, say 5, I can get anything between 4MB and 22MB per sec concurrently for each copy. This applies to both Dell 2950s.
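In case it's useful, this is roughly how I run the 5-way concurrent test; the source and destination paths below are just placeholders:

import os, shutil, time
from concurrent.futures import ThreadPoolExecutor

def timed_copy(src, dst):
    # Copy one file and return its individual throughput in MB/s.
    size = os.path.getsize(src)
    start = time.time()
    shutil.copyfile(src, dst)
    return size / (time.time() - start) / 1e6

# Placeholder paths: local source files, destination on the mapped iSCSI drive.
pairs = [(rf"C:\test\file{i}.bin", rf"Z:\test\file{i}.bin") for i in range(5)]
with ThreadPoolExecutor(max_workers=5) as pool:
    for rate in pool.map(lambda p: timed_copy(*p), pairs):
        print(f"{rate:.1f} MB/s")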
Now, if I do the same file transfer to/from the Dell EqualLogic PS5000E using the Dell 2850, I get a remarkable 100+MB per sec sustained for a single copy, and about 20MB per sec each on 5 simultaneous copies.
The question is: how do I get the VMs to do the same thing as the Dell 2850 box and give me maximum throughput for a single file transfer?
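For reference, the back-of-envelope ceiling for a single 1 Gb link looks like this; the 0.9 efficiency factor for TCP/IP and iSCSI overhead is just an assumption:

# Rough single-stream ceiling for one 1 Gb Ethernet link.
line_rate = 1_000_000_000 / 8 / 1_000_000   # 125 MB/s on the wire
overhead = 0.9                              # assumed TCP/IP + iSCSI framing loss
print(f"theoretical {line_rate:.0f} MB/s, realistic ~{line_rate * overhead:.0f} MB/s")

So the Dell 2850's 100+MB per sec is essentially wire speed, which makes the 34MB per sec ceiling in the VMs look like a bottleneck somewhere other than the link itself.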
As opposed to a file copy, have you tried something like SQLIO or Iometer to see what type of throughput you're getting from the VM to the storage? Running those tools will show you how well the pipe performs at various I/O block sizes.
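If you'd rather script it than install the tools, a minimal sequential-write sketch along these lines gives a similar block-size sweep; the test path is a placeholder for a drive backed by the iSCSI LUN, and it only approximates what SQLIO/Iometer measure:

import os, time

TEST_FILE = r"E:\throughput_test.bin"   # placeholder path on the iSCSI-backed drive
TOTAL_MB = 256                          # amount of data written per block size

for block_kb in (4, 32, 64, 128, 4096):
    block = os.urandom(block_kb * 1024)
    count = (TOTAL_MB * 1024) // block_kb
    start = time.time()
    with open(TEST_FILE, "wb") as f:
        for _ in range(count):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())            # force the data out of the OS cache
    print(f"{block_kb:>5} KB blocks: {TOTAL_MB / (time.time() - start):6.1f} MB/s")

os.remove(TEST_FILE)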
Thanks for the reply.
I have run Iometer and got the following results:
4MB blocks give me about 60MB per sec
128K blocks give me about 10MB per sec
64K blocks give me about 5.6MB per sec
32K blocks give me about 34MB per sec
16K blocks give me about 20MB per sec
4K blocks give me about 10MB per sec
512-byte blocks give me about 0.64MB per sec
Does that mean 4MB blocks will work well for speed, but that I will lose lots of space on my storage due to small files?
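As far as I can tell, the Iometer block size is only the I/O transfer size, so it costs no disk space by itself; space is wasted by the filesystem allocation unit (cluster) size, since each file gets rounded up to whole clusters. A quick sketch of that slack math, using made-up file sizes:

# Estimate slack space: every file is rounded up to whole allocation units.
def slack(file_sizes, cluster_bytes):
    allocated = sum(-(-s // cluster_bytes) * cluster_bytes for s in file_sizes)
    return allocated - sum(file_sizes)

sizes = [1_500, 12_000, 300, 48_000] * 10_000   # 40,000 small files (made up)
for cluster in (4 * 1024, 64 * 1024, 4 * 1024 * 1024):
    print(f"{cluster // 1024:>5} KB clusters waste {slack(sizes, cluster) / 1e9:.2f} GB")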
Thanks in advance.
Thanks for your reply as well.
It's just a basic setup: everything is on one subnet, and all servers connect directly to the HP 2824 with Gb ports in the same rack.
All leads are Cat 5e, 2.5m long.
The vSphere host has 6 x 1 Gb ports connected to the HP 2824.
vSphere is located on the Dell 2950, and the guest is on the iSCSI LUN, utilizing 2 of the 1 Gb ports.
No jumbo frames or anything like that; it's all a vanilla network.
The guest is Windows 2008, configured to use 2 network cards and two IP addresses, mapped to the other 2 x 1 Gb ports, shared with the management console.
That leaves me with 2 spare ports on VMware for now.
Let's just look at one example: the difference in speed between a VMware guest and a standalone server.
The standalone Windows 2008 box can transfer files at up to 114MB per sec, with 100MB per sec sustained, for a single copy to the same mapped drive/iSCSI target through the iSCSI initiator.
The guest on VMware can do at most 34MB per sec per single copy if I'm lucky.
When I was testing, I tried the LUN mapped from VMware and also from the iSCSI initiator in Windows 2008; both came back with the same results.
Does VMware throttle the Ethernet in any way?
Hope that's enough info.
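One more check that might narrow it down: a raw TCP transfer between the guest and the 2850, which takes the disks and the file-copy engine out of the picture. A minimal sketch; the port and sizes are arbitrary:

import socket, sys, time

PORT, CHUNK, TOTAL_MB = 5001, 64 * 1024, 512

if sys.argv[1] == "server":
    # Receiver: run this on one box, then point the client at its IP.
    srv = socket.socket()
    srv.bind(("", PORT))
    srv.listen(1)
    conn, _ = srv.accept()
    received, start = 0, time.time()
    while True:
        data = conn.recv(CHUNK)
        if not data:
            break
        received += len(data)
    print(f"{received / (time.time() - start) / 1e6:.1f} MB/s received")
else:
    # Sender: python tcptest.py client <server-ip>
    cli = socket.create_connection((sys.argv[2], PORT))
    payload = b"\x00" * CHUNK
    for _ in range((TOTAL_MB * 1024 * 1024) // CHUNK):
        cli.sendall(payload)
    cli.close()

If this also tops out around 34MB per sec, the limit is in the virtual network path; if it runs near wire speed, the bottleneck is on the storage side.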
I have found the settings after installing the latest Intel driver set, proadmin V14; I would guess they are also present in the older proadmin versions.
The settings are under the Advanced tab.
I'm now capable of transfers between 0 and 60MB per sec.
Are you using the E1000 driver in the VM or the new vmxnet3 network card? You could also remove the E1000 NIC, use the show-hidden-devices trick to remove the vNIC, and then install the new NIC as vmxnet3 to see how your transfer speeds are.