td3201
Contributor

iscsi performance issues

Hello,

I have some checking to do on the layer 2 side as well as on the SAN, but assuming that is all OK, I want to verify my logic with you guys to make sure we are configured correctly.

I have a dedicated vSwitch with two NICs in it for iSCSI. Here is a dump of that config:

Switch Name     Num Ports  Used Ports  Configured Ports  MTU   Uplinks
vSwitch1        64         6           64                1500  vmnic1,vmnic3

PortGroup Name  VLAN ID  Used Ports  Uplinks
iSCSI-SC        0        1           vmnic3,vmnic1
iSCSI-VMK       0        1           vmnic3,vmnic1

Interface  Port Group  IP Address  Netmask        Broadcast   MAC Address        MTU   TSO MSS  Enabled
vmk0       iSCSI-VMK   10.1.0.105  255.255.255.0  10.1.0.255  00:50:56:7c:4a:eb  1500  40960    true

Name    Port Group  IP Address  Netmask        Broadcast   Enabled  DHCP
vswif1  iSCSI-SC    10.1.0.104  255.255.255.0  10.1.0.255  true     false

Name    PCI       Driver  Link  Speed     Duplex  MTU   Description
vmnic3  0c:00.01  igb     Up    1000Mbps  Full    1500  Intel Corporation Intel(R) Gigabit VT Quad Port Server Adapter
vmnic1  07:00.00  bnx2    Up    1000Mbps  Full    1500  Broadcom Corporation Broadcom NetXtreme II BCM5708 1000Base-T

# esxcfg-swiscsi -q

Software iSCSI is enabled

Here is a simple speed test of an iscsi volume from the service console:

# time dd if=/dev/zero of=foo bs=1M count=1024
1024+0 records in
1024+0 records out

real    0m51.614s
user    0m0.000s
sys     0m0.780s

Here is another host on a different SAN, but it at least shows the kind of performance I am getting from that box:

# time dd if=/dev/zero of=foo bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 3.65729 seconds, 294 MB/s

real    0m3.671s
user    0m0.002s
sys     0m2.252s

Clearly we are getting a lot better performance on the other Linux box: roughly 20 MB/s here (1024 MB in 51.6 s) versus 294 MB/s there. Granted, it is writing to ext3 rather than VMFS, but I don't see that accounting for a gap that large. There are some underlying network things I still need to check, and I will, but I wanted to start a dialog here to see if anyone has any gotchas.
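For reference, a version of the same test that forces the data out to disk before the timer stops (so the cache doesn't flatter the number) would look roughly like this; throughput is then just 1024 MB divided by the 'real' time:

# time sh -c "dd if=/dev/zero of=foo bs=1M count=1024 && sync"
# rm foo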

Paul_Lalonde
Commander

Hi,

Don't use the 'dd' command from the service console to benchmark your ESX iSCSI performance... it's doomed to fail to give you the numbers you're looking for :-)

Instead, run it from inside a Linux virtual machine.

Service console I/O goes through Linux calls that are 'translated' by the VMkernel, so it will not illustrate real performance.

For the most meaningful test, run Iometer inside a Windows VM and also on a physical Windows box connected to the iSCSI SAN. That will give you an accurate reading.

PS. How did you get 294 MB/s on the physical Linux host? A single GigE link tops out at roughly 117 MB/s, so unless you have multipath drivers or device-mapper configured to use multiple GigE links, you're getting cached results, not real ones.
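To see whether that box really is spreading I/O across links, something like the following on the Linux host will show it (assuming the bond is named bond0 and dm-multipath is in use; adjust the names to your setup):

# cat /proc/net/bonding/bond0
# multipath -ll

If neither shows multiple active slaves/paths, the 294 MB/s figure is almost certainly the page cache talking.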

Regards,

Paul

td3201
Contributor

Doh, I didn't even do the math on the Linux side. It is bonded.

td3201
Contributor

I performed the tests you recommended on Windows boxes that were both very quiet during the tests.

Physical
========
MBps           - 8.64
Read MBps      - 5.78
Write MBps     - 2.85
Transactions/s - 4424

VM
========
MBps           - 5.78
Read MBps      - 3.88
Write MBps     - 1.90
Transactions/s - 2962

A couple of other notes: the physical box uses MPIO and jumbo frames, while I am using software iSCSI on the VM side, so I don't expect these numbers to match.

1. Should I bump the MTU up to 9000? (A rough sketch of what I think that would involve is below.)

2. Any other tips?
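If it turns out to be supported on my build, I assume the change would look something like this on ESX 3.5, reusing the vmk0 settings from my first post. I'd double-check the exact flags against the docs first (especially whether esxcfg-vmknic accepts -m), and the vmkernel port has to be removed and re-created since the MTU can't be changed in place:

# esxcfg-vswitch -m 9000 vSwitch1
# esxcfg-vmknic -d iSCSI-VMK
# esxcfg-vmknic -a -i 10.1.0.105 -n 255.255.255.0 -m 9000 iSCSI-VMK
# vmkping -s 8972 <SAN group IP>

The vmkping with a large payload is just to verify jumbo frames actually pass end to end through the switches and the array.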

td3201
Contributor

I need to take a step back and talk a bit about architecture. There has been a lot of conversation surrounding what I want to do with sw-iscsi. MPIO is not a reality, and it appears that jumbo frames are not supported. Without enabling hw-iscsi, I think my last (and only) option for increasing performance is to somehow distribute my load across multiple LUNs. I am not certain how to do this with EqualLogic, as they don't really use that terminology or have that type of architecture on the front end: I create a volume, it lands on a member that has its own IP address, but all initiation happens through a single group IP address.

Anyone know how to do this?
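For reference, my guess at the ESX side is that after creating each additional volume on the group, a rescan is all that's needed for it to show up as its own LUN that I can drop a VMFS datastore on. Either of these should do it, I think (vmhba40 being what I believe the software initiator is called on my hosts; correct me if that's off):

# esxcfg-swiscsi -s
# esxcfg-rescan vmhba40

I could then spread VMs/VMDKs across those datastores, but I'd like to know whether that actually balances load across EqualLogic members or whether there is a better approach.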

chrisy
Enthusiast

You will probably find that your best performance option is to use the iSCSI initiator inside the virtual machine. If you set up two extra physical NICs on the storage network and create two virtual NICs in the VM, you will have MPIO and 2 Gbit/s of available bandwidth. In addition, for Windows VMs you'll gain access to the hardware VSS support on the SAN and to EqualLogic features like SQL/Exchange integration. Performance should be excellent at the cost of some CPU.

The downside is that you're proliferating LUNs: the boot drive will be on a VMDK, but all the other drives will need their own SAN volume. Of course, these volumes are not presented to ESX but only to the IP addresses of the virtual NICs. The other downside is that if the VM is Windows 2000 or an old version of Linux, the iSCSI support is not so good.
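As a rough illustration, from inside a Linux VM with open-iscsi, discovery and login against the group IP look something like this (the group IP and the volume IQN here are just placeholders for whatever your array reports):

# iscsiadm -m discovery -t sendtargets -p <group IP>
# iscsiadm -m node -T iqn.2001-05.com.equallogic:example-volume -p <group IP> --login

Windows VMs do the equivalent through the Microsoft iSCSI initiator, which is also where the MPIO and VSS pieces plug in.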

Chris
