While looking at more cost-effective storage methods for a new VMware ESX 3.5 installation, I realised that iSCSI is not something I have ever approached in the past.
I am interested to know what best practices and practical uses you guys round these parts have found for iSCSI. Can it really be a replacement for FC-based storage paths?
Any comments and thoughts are welcome here; I'm really just open to getting a better overview of iSCSI's uses! :smileycool:
Thanks,
P.
Thank you for your response.
I did have a read through that particular PDF document before posting, and certainly got a lot of useful information from that.
What I was really after from this thread was more individuals' experiences, and what they are using iSCSI for in comparison to FC-based storage.
For example, I would have thought iSCSI would not have been suitable for high disk I/O... but I wanted to know if there were any scenarios where anyone was using it in relatively intensive environments, etc. That kind of thing.
P.
ok, sorry
I can say that the new VI 3.5 version has increased iSCSI performance. I have experience with a scenario of 3 ESX hosts (HP DL systems) and one EMC AX150 running 12 VMs (including an Exchange 2003 VM and a SQL 2000 VM, none of them I/O intensive).
Best practice, they say, is to use a separate VLAN for iSCSI (better still if you use a separate, dedicated physical switch).
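For example, a very rough sketch of a dedicated iSCSI VMkernel network from the ESX 3.5 service console (the NIC name, VLAN ID and IP addresses are only placeholders for your own environment):

# create a dedicated vSwitch with its own uplink for iSCSI
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1

# port group on its own VLAN (20 is just an example ID)
esxcfg-vswitch -A iSCSI vSwitch1
esxcfg-vswitch -v 20 -p iSCSI vSwitch1

# VMkernel interface that will carry the iSCSI traffic
esxcfg-vmknic -a -i 192.168.20.11 -n 255.255.255.0 iSCSI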
It's very difficult to compare FC vs iSCSI. It depends on a lot of factors: storage type and characteristics, disk type (SAS is better than SATA), RAID type (for read or write performance), number of LUNs, number of VMs, I/O access type, NIC type (iSCSI software initiator or hardware initiator; hardware is better than software for CPU overhead). All of this has to be weighed against the cost of the architecture (typically Gbit switches are already present and FC switches are not).
I hope this info helps to clarify your ideas.
Bye Alberto
Thanks Alberto... exactly the kind of feedback I am looking for. As time goes on, and the more I read, the more I am certainly coming to like iSCSI in terms of its flexibility. Although, just like a full-blown FC setup, it does need some planning and consideration.
I would of course thought that isolation of the IP iSCSI side of things is a must, as with any storage paths of this nature. But all very valid comments, and much appreciated.
Hopefully some more guys will send me some feedback this way :smileygrin: hint
P.
Can it really be a replacement for FC based storage paths?
Yes yes yes yes. I just pulled out my last fiber card today!!! We migrated from fiber-connected EMC storage to an iSCSI EqualLogic (now Dell) box. We're using the software initiator in ESX and, performance-wise, it actually seems faster. So let's see: it's faster, cheaper, and MUCH easier to manage.
ESX doesn't yet support jumbo frames for iSCSI but I hear it's coming.
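For what it's worth, getting the software initiator going is only a few service console commands; the firewall service name is the standard one, but the vmhba name and target IP below are just examples from my setup, so adjust for yours:

# open the firewall and enable the ESX software iSCSI initiator
esxcfg-firewall -e swISCSIClient
esxcfg-swiscsi -e

# point it at the array's send-target discovery address (IP and vmhba name are examples)
vmkiscsi-tool -D -a 192.168.20.100 vmhba32

# rescan to pick up the new LUNs
esxcfg-rescan vmhba32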
That's exactly what I wanted someone to say! I was very keen on using iSCSI as an FC replacement simply because of cost really, although I appreciate that decent gigabit switches are similar in cost to lower-end FC switches... but there was something holding me back!
It is good to know there are configurations out there totally reliant on iSCSI, and even more so ones that have been moved from FC. I just had some concerns about performance, but figured that, in all fairness, in most cases VMs don't utilise the full capacity of FC storage paths anyway... which is why I considered iSCSI a good option.
So, is it recommended to use TOE cards as opposed to gateways etc? What kind of things have you guys got in place there?
Woooo for iSCSI!
P.
TOE cards are not supported in 3.5. That's not to say a TOE card will hurt anything; the NICs I have in my Dell servers are all TOE-capable, but it's not enabled. Honestly though, I'm not sure you'll need it. TOE is designed to offload some of the TCP/IP processing from the CPU to the NIC (where it should have been in the first place). If you are worried about CPU cycles enough to even think about TOE adapters, you may have bigger issues. You'll run out of RAM on a host server long before you run out of CPU cycles. I have not noticed even the slightest increase in CPU usage since we went to iSCSI. Granted, I didn't do any benchmarking or before-and-after testing, but CPU utilization on each host has remained fairly steady.
One thing though: jumbo frames are still not supported for iSCSI. They are supported for network traffic on the VMs but not for iSCSI traffic on the host. This is a HUGE disappointment in 3.5. Not a deal-breaker for me, but it would be nice to have.
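If you do want jumbo frames on the VM networking side (where they are supported), it's basically just an MTU setting on the vSwitch carrying the VM traffic, something like the line below (the vSwitch name is an example, and it does nothing for the host's iSCSI path):

# set a 9000-byte MTU on the vSwitch that carries VM network traffic
esxcfg-vswitch -m 9000 vSwitch0

The guest still needs a vNIC and OS configuration that actually use the larger MTU, of course.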
Also, one of the third-party vendors I deal with for VMware has said that they have not done a fiber VMware implementation since iSCSI has been supported. This is a fairly large vendor; they wrote many of the 3.5 white papers you see on VMware's site.