VMware Cloud Community
Sebson
Contributor

ESXi 5 single iSCSI vs multiple iSCSI

Hello,

I am new to ESXi. I recently set up a new ESXi 5.1 server; my hardware configuration is:

Motherboard: Intel DQ77MK (2x 1 Gb NIC)

i5-3470T

32 GB RAM

1 Gb NIC (Intel 82574L)

1 Gb NIC (Intel 82574L) - VMkernel port for iSCSI (separate LAN)

local SATA 320 GB for a shared VMFS datastore

local SATA 74 GB WD Raptor for VM swap

Today I bought a 60 GB SSD and plan to use it for host swapping to SSD. I don't know how much faster a VM would be if it ran from the local SSD compared to iSCSI?

Otherwise, the VMs are on an iSCSI disk on a Synology DS1511+ NAS (with thin provisioning & VMware VAAI support).

My iSCSI LUN was created on a volume (created by the NAS). I read somewhere that there is a performance difference if you create the iSCSI LUN first (before any volume is created).

So I am planning (after a full backup of the NAS) to create the iSCSI LUN first, then the volume.

My question is:

Is it better (for performance) to create:

a) one iSCSI LUN + one iSCSI target per VM (so 10 VMs = 10 iSCSI LUNs + 10 iSCSI targets), or

b) one iSCSI LUN with multiple iSCSI targets on it, shared by multiple VMs?

Any other suggestion is welcome.

14 Replies
clarkwayne
Enthusiast

Hi,

I don't believe having multiple iSCSI targets/LUNs will do anything for you, because all of them will be going through the same 1 Gb link.

If you use your SSD for host swapping, I doubt it will ever be used because you have 32 GB of RAM; host swapping really only kicks in once you have overcommitted your RAM. If your SSD can hit speeds of over 125 MB/s, you are better off using it locally to store your VMs, because accessing it through iSCSI will bottleneck you on the 1 Gb link.
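For reference, that 125 MB/s figure is simply the line rate of a 1 Gb link divided by 8 bits per byte; real iSCSI throughput lands a bit lower once protocol overhead is taken into account. A quick back-of-the-envelope check, sketched in Python (the 10% overhead is an assumed ballpark, not a measured value):

```python
# Rough ceiling for storage traffic over a single 1 Gb/s link.
link_bits_per_second = 1_000_000_000            # 1 Gb/s line rate

theoretical_mb_s = link_bits_per_second / 8 / 1_000_000
print(f"Theoretical maximum: {theoretical_mb_s:.0f} MB/s")        # ~125 MB/s

# Assume roughly 10% lost to TCP/iSCSI protocol overhead (ballpark only).
print(f"Realistic ceiling:   {theoretical_mb_s * 0.9:.0f} MB/s")  # ~112 MB/s
```

A decent local SATA SSD can sustain several times that, which is why keeping it local beats exporting it over a single 1 Gb iSCSI path.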

Thanks,

-Clark.

My Blog: http://vflent.com
elgreco81
Expert

Hi,

As clarkwayne said, I think SSD drives for host swapping will help you only if you are actively swapping to disk now. It could also help you achieve higher consolidation ratios, where swapping will happen and the VMs' active memory gets read from disk.

With ATS, SCSI locks at the LUN level disappear (if your hosts run ESXi 5 with VMFS5 and VAAI-capable storage, I think), so I don't see the point in dedicating one LUN to a single VM. It made sense before, when SCSI reservations locked the whole LUN and the other hosts had to wait until a single host finished writing its metadata.

I don't know about any performance difference between creating the volume from the NAS or from the vSphere Client. I would use the vSphere Client, as it will set things up optimally for the vSphere environment (I'm talking about alignment).

Now, as a safety measure, I wouldn't put more than 10 VMs per LUN. As I said before, with ATS it is not a performance issue but a safety one: if for some reason one of your datastores (I'm assuming a 1:1 relationship between datastore and LUN) gets corrupted, you don't lose everything.
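If you want to verify that ATS (hardware-assisted locking) is actually in play for the Synology LUN, ESXi reports the VAAI primitives per device via `esxcli storage core device vaai status get`. Here is a minimal sketch that runs that command and flags devices without ATS support; it assumes you run it where `esxcli` is on the path (e.g. the ESXi shell) and that the output follows the usual indented per-device layout:

```python
# Sketch: list storage devices whose ATS (hardware-assisted locking) primitive
# is not reported as "supported" by ESXi's VAAI status output.
# Assumes esxcli is available locally and its usual output layout.
import subprocess

proc = subprocess.Popen(
    ["esxcli", "storage", "core", "device", "vaai", "status", "get"],
    stdout=subprocess.PIPE,
    universal_newlines=True,
)
output, _ = proc.communicate()

device = None
for line in output.splitlines():
    if line and not line[0].isspace():        # unindented lines name the device
        device = line.strip()
    elif "ATS Status:" in line:
        status = line.split(":", 1)[1].strip()
        if status != "supported":
            print("%s -> ATS %s" % (device, status))
```

On a VMFS5 datastore backed by a VAAI-capable array, ATS should show as supported; otherwise locking falls back to SCSI reservations.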

Regards,

elgreco81

Please remember to mark this question as answered if you think it is, and to reward the people who helped you by giving them the available points accordingly. IT blog in Spanish - http://chubascos.wordpress.com
Sebson
Contributor

I would like to thank you both for your answers and suggestions.

This is my lab environment, so I am testing and learning...

I am surprised that my NFS share (from the Synology) performs faster than iSCSI (with all features enabled & hardware acceleration supported)... in particular, the read/write latency for iSCSI shows terrible spikes.

elgreco81
Expert

Mmmm... that's good to know! As soon as I can get my hands on the lab again, I will give it a try with an Openfiler virtual appliance!

Thank you!

elgreco81

Please remember to mark this question as answered if you think it is, and to reward the people who helped you by giving them the available points accordingly. IT blog in Spanish - http://chubascos.wordpress.com
Sebson
Contributor

I also tried FreeNAS 8.2 x64 as a VM running on ESXi 5.1:

1) Local disk (VMFS-formatted and presented as a virtual HDD) inside FreeNAS: worse performance than NFS & iSCSI from the Synology, and worse than a local HDD on ESXi.

2) DirectPath I/O passthrough of an Adaptec 1430 II SA: the device started normally (but didn't see the RAID 10 disk array I had created), so I later created a RAID-Z pool in ZFS and then an iSCSI target on it... bad performance as well, plus a lot of stability trouble (for example, an HDD reported as failed that worked normally when I took it out and checked it).

I know the Adaptec 1430 II SA is not supported in ESXi (it wasn't natively recognized), so it's possible that a supported RAID adapter would work fine.

clarkwayne
Enthusiast

Could you please create a Windows 7 VM on the Synology, run the attached Iometer test, and upload the results?

I've been playing with different storage appliances (FreeNAS 8.0, 8.2, and 8.3; Openfiler 2.99 Final; and OpenIndiana + Napp-IT). By far, I am getting the best performance and stability out of OpenIndiana with Napp-IT. I would like to see whether my appliance is comparable to the Synology you have, and whether I should consider buying one for my lab.

Thanks,

-Clark.

My Blog: http://vflent.com
Sebson
Contributor

Here are the results from my iSCSI and NFS (both Synology) and the local SSD (Intel 330, 60 GB).

VM = Windows 7 Pro / 4 GB RAM, 2 vCPUs

It's the same VM migrated to different datastores.

Those results are really hard for me to read. I tried to import them into Excel, but it didn't work very well (the comma-separated values didn't split into cells). Is there any program or other way to get those results into something more readable?
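One option is to pull just the summary rows out of the Iometer CSV with a small script before opening it in a spreadsheet. A minimal sketch in Python; the header and column names ("Target Type", "Access Specification Name", "IOps", "MBps", "Average Response Time") are assumptions based on a typical Iometer results file and may need adjusting for your version:

```python
# Minimal sketch: extract the aggregated "ALL" rows from an Iometer results CSV
# so they are easier to read or paste into a spreadsheet.
# Column/header names below are assumptions for a typical Iometer CSV.
import csv
import sys

WANTED = ["IOps", "MBps", "Average Response Time"]

with open(sys.argv[1], newline="") as f:
    rows = list(csv.reader(f))

header = None
for row in rows:
    if row and row[0].strip("'") == "Target Type":
        header = [cell.strip("'") for cell in row]     # results header line
    elif header and row and row[0] == "ALL":
        record = dict(zip(header, row))                # one access specification
        spec = record.get("Access Specification Name", "?")
        print(spec, *(record.get(col, "?") for col in WANTED))
```

Run it as, for example, `python iometer_summary.py results.csv` (the file names are just placeholders for your own).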

Sebson
Contributor

I did some more tests with the same VM on two local datastores (local HDDs).

I found a Perl script to convert the results to HTML:

http://dev.studio2012.si/io/

some graphs:

http://dev.studio2012.si/io/graph.html

I don't know what the problem is, but the iSCSI results for Random-8k-70%Read & RealLife-60%Rand-65%Read (throughput & IOPS) are much lower than on any other datastore.

I would like to see the results from your OpenIndiana + Napp-IT setup if you can attach them.

clarkwayne
Enthusiast

I have attached the file.

The OS disk for the virtual appliance is on a Seagate Barracuda 1 TB 7200 rpm drive w/32 MB cache.

I have 4 virtual hard disks attached to it that reside on 4 separate Seagate Barracuda 2 TB 7200 rpm drives w/64 MB cache.

I have 2 virtual hard disks that reside on a single OCZ Agility 3 240 GB SSD.

The 4 disks are set up in a RAID-Z configuration, with the 2 SSD-backed virtual disks used for read and write caching.

I need to buy a basic SATA card, pass the four 2 TB disks through to the virtual appliance, and see if that improves performance.

How many disks, and which models, are running in your Synology?

My Blog: http://vflent.com
Sebson
Contributor

Thank you for the posted results. I already compared them with mine, and the difference is huge.

First of all, in your case (did you run the same Iometer test?) your CPU is always at 0%, and all results are way better than mine, except the avg. response time at Max Throughput-50%Read and what looks like an error in the avg. response time at Max Throughput-100%Read.

So you are using that setup on OpenIndiana and Napp-IT; how did you set up the SSD for read/write caching - inside Napp-IT, or with some third-party tool for ESXi?

You said that you are using OpenIndiana and Napp-IT as a virtual machine... without passthrough? So you imported local datastores (HDDs) formatted as VMFS by ESXi?

I am using 3x WD Caviar Black 2 TB 7200 rpm drives w/64 MB cache inside my Synology.

Thanks.

clarkwayne
Enthusiast

I did run it with Iometer.

There is an option within Napp-IT to set up read and write caching; it's really easy. Best practice suggests a write-cache device (also known as the ZIL) roughly the size of your RAM. For the read cache (also known as L2ARC), it depends on how much RAM you have (the primary ARC) and what your usage is like: RAM is the first thing ZFS caches to, and it only spills over to the SSD once the RAM is full. My tests start out at the same speeds yours show, but the MB/s and IOPS keep increasing and the latency/response time keeps decreasing until about two-thirds of the way through, where it peaks and stays there.
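To put rough numbers on that rule of thumb, here is a tiny illustrative calculation; the RAM figure and the L2ARC multiplier are assumed examples for a small lab box, not Napp-IT recommendations:

```python
# Illustrative ZFS cache sizing per the rule of thumb above.
ram_gb = 8                     # RAM assigned to the storage VM (assumed example)

zil_gb = ram_gb                # write cache (ZIL): roughly the size of RAM
l2arc_gb = ram_gb * 4          # read cache (L2ARC): workload dependent;
                               # 4x RAM is just an assumed starting point

print("ZIL: ~%d GB   L2ARC: ~%d GB" % (zil_gb, l2arc_gb))
```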

I created a virtual disk the size of the hard drive on each of the 4 drives.

I assume you are running RAID 5, correct?

For some of the speeds, remember that I am running the virtual appliance on the same physical host, so even though the appliance has an e1000 NIC, the traffic stays inside the host and can exceed the theoretical maximum of a physical 1 Gb connection. Since you have a 1 Gb physical connection, you are limited to its theoretical maximum, which is why your sequential read speeds should be lower than mine; but for the real-life test, OpenIndiana + Napp-IT does perform a lot better.

In Openfiler, if you use iSCSI, I know for a fact that while running the test you will start getting latency-deterioration messages. If you use NFS, the write/random-write speed gets really low. I think I remember FreeNAS doing the same thing, but it was a little better.

With OpenIndiana + Napp-IT, everything works like a charm. (I am using NFS by the way).

My Blog: http://vflent.com
Sebson
Contributor

Thank you for your answers.

I am using Synology SHR (similar to RAID 5).

I will give OpenIndiana + Napp-IT a try; it looks very promising.

Could you tell me how you set up the HDDs: via a passthrough RAID card, or did you create virtual disks the size of each hard drive inside ESXi and then import them into Napp-IT?

Thank you again.

clarkwayne
Enthusiast

Basically, make a VM and add three virtual disks of 1820 GB each (that's probably the usable space the ESXi box sees on the hard drives once formatted), thin provisioned, specifying which datastore each disk goes on (so you end up with one 1820 GB thin-provisioned virtual disk on each WD Black).

My Blog: http://vflent.com
clarkwayne
Enthusiast

Have you gotten your OI system working? If so, can you post the benchmarks?

Thanks.

My Blog: http://vflent.com