VMware Cloud Community
sepsupport
Contributor

iSCSI Performance Problem under 10 Gbps

We got a new iSCSI SAN (HP LeftHand P4500) running at 10 Gbps.

Server hardware: HP ProLiant DL380 G7 with two Intel X5660 CPUs, 96 GB RAM, and an HP NC550SFP dual-port LAN adapter. All server and networking components are certified for Windows and VMware.

We ran a performance test with Windows 2008 R2 as the OS and the software iSCSI initiator, installed directly on the hardware. Then we ran the same test with exactly the same hardware, but with Windows 2008 R2 running as a virtual machine on vSphere 4.1 (build 348481).

To compare the performance, we used Intel's Iometer with 64 KB block reads and writes.

The results are frustrating:

[attachment: iscsi-vmware.jpg]

Windows 2008 R2 running directly on the hardware:

- read: 662 MB/s
- write: 430 MB/s

Windows 2008 R2 running as a VM on VMware 4.1:

- read: 100 MB/s
- write: 157 MB/s

Windows 2008 R2 running as a VM on VMware 4.1, using the Microsoft Windows iSCSI initiator:

- read: 71 MB/s
- write: 122 MB/s

I don't understand:

- why writing is faster than reading
- why it is so much slower than on a physical server

Since the data transfer on the physical server is well above 1 Gbps, could it be that VMware is not able to handle 10 Gbps?

What can we do to improve the performance?
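As a quick sanity check (my own back-of-envelope math, not from the original post), converting line rates to MB/s shows why the VM numbers look suspiciously like a 1 Gbps ceiling:

```shell
# Theoretical maximum throughput per link speed, in decimal MB/s,
# ignoring Ethernet/IP/iSCSI protocol overhead:
#   MB/s = Mbit/s / 8
echo $((1000 / 8))    # 1 Gbps  -> 125 MB/s
echo $((10000 / 8))   # 10 Gbps -> 1250 MB/s
```

The VM results (100-157 MB/s) sit right around the 125 MB/s that a single 1 Gbps path can deliver, while the bare-metal read of 662 MB/s clearly required more than one gigabit of bandwidth.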

Gfuss
Contributor

sepsupport,

How are you running Iometer for the VMware tests?  Directly on ESX or inside a guest OS? 

While not explaining your results, I can say from my personal experience (no performance tests) that P4500 performance can be affected by a lot of variables (network RAID level, number of hosts, number of shelves in the cluster). You also need to ensure you purchase the correct model for your needs, as they each offer different drive types (some include midline drives), which can adversely affect read/write performance.

The product is feature rich (licenses included), but you need to ensure it fits the environment it will be used in.

------------- Gfuss
sepsupport
Contributor

On HP's recommendation we configured the LeftHand with flow control, no jumbo frames, and active-passive bonding.

When we look at the results of the physical server with Windows 2008 R2, the results are very good.

So I don't think the problem is on the LeftHand side.

We are running the VM on the ESX server on a LeftHand LUN, and for the test we used another LUN on the LeftHand.

I have the feeling that our VMware server is not able to handle 10 Gbps and we are therefore hitting a 1 Gbps limit.

idle-jam
Immortal

Are you able to look at the performance chart in the vSphere Client and see what the total throughput of the vmnic is during that period? From there you can estimate whether it's running at 1 GbE or 10 GbE.
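Another way to watch vmnic throughput (a sketch of my own, not from the reply above) is esxtop directly on the host:

```shell
# On the ESX 4.x host (SSH or local console):
esxtop          # start the interactive performance monitor
# then press:
#   n           switch to the network view
# Watch the MbTX/s and MbRX/s columns for the vmnic carrying the
# iSCSI traffic while Iometer runs. Values that plateau near
# ~1000 Mb/s suggest a 1 Gbps bottleneck somewhere in the path.
```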

rogard
Expert

Are you using the software HBA?

sepsupport
Contributor

In the network configuration of the ESX host it shows 10000 Full Duplex for the 10G adapters, but when I go to the performance chart of my virtual machine it shows only about 1 Gbps during the performance test.

Since the HP 10G adapters are only certified as network adapters, I use the VMware software iSCSI initiator.
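The negotiated link speed can also be confirmed from the host's command line (a generic sketch; the vmnic name and driver shown are illustrative assumptions, not from this thread):

```shell
# List physical NICs with driver, negotiated speed and duplex:
esxcfg-nics -l
# Output includes lines of the form (illustrative example):
#   vmnic2  0000:06:00.00  be2net  Up  10000Mbps  Full  ...
# A NIC reported at 10000Mbps that still delivers only ~1 Gbps
# points at the layers above the link (vSwitch, iSCSI port binding,
# path selection, storage) rather than the link speed itself.
```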

Gfuss
Contributor

Are you using the VMXNET3 driver on the VM?

------------- Gfuss
Josh26
Virtuoso

Hi,

I'm interested in how the vmware native iSCSI was setup.

Can you post a screenshot of your network configuration and storage adapters tab?

sepsupport
Contributor

We are using VMXNET3 network adapters.

This is our configuration:

Network adapters:

[attachment: nic.jpg]

Networking:

[attachment: vswitch1.jpg]

Configured explicit vmkNIC-to-vmnic binding:

[attachment: nic teaming.jpg]

Only one active adapter for each VMkernel port:

[attachment: storage.jpg]

Adapter Binding

esxcli swiscsi nic add -n vmk0 -d vmhba37

esxcli swiscsi nic add -n vmk1 -d vmhba37
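After binding, the configuration can be verified (assuming the ESX 4.x `esxcli swiscsi` syntax used above):

```shell
# List the VMkernel ports bound to the software iSCSI adapter:
esxcli swiscsi nic list -d vmhba37
# Both vmk0 and vmk1 should appear in the output. With two bound
# ports, a round-robin path selection policy on the LUN lets iSCSI
# traffic spread across both uplinks instead of using only one.
```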

rgard
Contributor

I realize this is an old thread, but I wanted to point out something that was probably your issue. The VMXNET3 driver shows up as a 10-gig connection to the guest OS, but it performs at 1 gig at this time. I am hopeful that updates to the driver and virtual switch will be coming soon and should help.

I have to wonder whether simply running a standard virtual disk off the datastore, instead of presenting an iSCSI LUN to the Windows VM, would give you better performance at 10 gig when using NFS.

I have to give a VM a large amount of space, and I am tempted to just use a standard virtual disk over my 10-gig NFS instead of an iSCSI LUN to the Windows VM that would run at 1 gig.

Anyone following this thread have any articles or experience in this?

ellism
Contributor

From your post it looks like you got the HP P4500 and blades talking at 10G? I thought maybe you could give me some help.

I have an HP c7000 with 4 BL460c G7 blades, Flex-10 and Virtual Connect, two P4500 nodes (with 10GbE cards) and an Arista Networks 10GbE switch. I have been trying to get VMware to see the storage for over a month now, and HP has not been much help. I had VMware tech support look at what I have done and they said my configuration is fine. It is a very simple setup for now: I created 3 x 1G connections and 1 x 7G connection out of the Flex-10. The 7G connection using jumbo frames was going to be my data path to the HP P4500. I can ping from the Arista switch to my P4500 with MTU size 9000 with zero packet loss, but when I try this from the blades it either hangs or has 80% packet loss. HP has already replaced the Flex-10 10GbE modules and that did not fix anything. HP has verified that all my firmware and drivers are up to date, so I am at a loss here as to what is going on.

If you have any ideas please let me know.
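One way to test the jumbo-frame path end to end from the ESX side (a sketch; the target address is a placeholder, not from this thread):

```shell
# From the ESX host, send a full-size jumbo frame that must not be
# fragmented: 8972 = 9000 MTU - 20 (IP header) - 8 (ICMP header).
# 10.0.0.50 stands in for the P4500's iSCSI address.
vmkping -d -s 8972 10.0.0.50
# If this hangs or drops packets while a plain "vmkping 10.0.0.50"
# works, some hop in the path (Flex-10, vSwitch, VMkernel port) is
# not configured for MTU 9000.
```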

bobvance
Enthusiast

@ ellism

Did you ever get your problem resolved?

bv

ellism
Contributor

Bob,

Yes I did. It turned out to be incompatible CX4 cables that HP sent me. They sent two different types, and one type would not work with my Arista 10G switch.

bobvance
Enthusiast

Wow, that's interesting. Thanks.

stupid cables.....

bv
