VMware Cloud Community
georgemason
Contributor

Real world ESXi4 SAN advice needed

Hi folks,

Am in the process of setting up a 2 node vSphere environment using HP DL360/380 servers and an HP MSA 2312i iSCSI SAN. Got it all just about designed with the exception of the SAN switches. Before I go any further, I must stress that for this installation, bottom line is EVERYTHING - there is not a penny to spare.

We were initially hoping to go with HP 2510G switches but have now discovered that they support jumbo frames OR flow control, but not both, and the consensus seems to be that this is a no-go.

What I would like to know (and I know it's a "how long is a piece of string" question, but I'm trying to get a feel for this) is how much of an issue not having either jumbo frames or flow control on the SAN ports would be. I see that the 2910al switches do support both JF and FC, but they are three times the price and therefore a very tough sell. I guess the other logical question out of this is: if I can only have one of flow control or jumbo frames, which is the better one to go for?

The environment will be used to virtualise 10-15 servers, currently mostly on Win2003 R2 but moving to 2008 in the future - Exchange, SQL, file server, the usual suspects. Nothing too strenuous; in fact I think the new kit will be fairly lightly loaded.

Thanks in advance for any comments.

George

8 Replies
JonT
Enthusiast

The question you need to ask yourself is what sort of I/O goals you have. If throughput is what you specifically need, you should be able to work around the jumbo frames and flow control issue by simply adding another dual-port NIC to each of your two hosts and adding those ports to your design for the iSCSI traffic. If response time is more your objective, none of this will really affect that aside from port speed, which I assume will be 1Gb copper. If you use fibre or 10Gb copper (pricey), then you can improve the response time to your SAN.

Mostly what you need to plan for is separating the vmkernel and VM guest traffic. You will see somewhat lower IOPS to your SAN with no jumbo frames/flow control, but it won't be detrimental if you increase the number of ports in the port group that is set up for iSCSI (see the sketch below). The cost of another dual-port NIC is probably measured in a hundred or two dollars, versus the cost of that 2910 switch, which is likely to be in the thousands.
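
To give an idea of what that looks like in practice, here's a rough sketch of a dedicated iSCSI vSwitch with two vmkernel ports bound to the software iSCSI adapter, run from the ESX 4 console. The vmnic/vmk/vmhba numbers and IP addresses are only examples and will differ in your environment, and the 1:1 active-uplink override on each port group still has to be set in the vSphere Client:

    # Dedicated vSwitch for iSCSI, using the two ports of the extra NIC
    esxcfg-vswitch -a vSwitch2
    esxcfg-vswitch -L vmnic2 vSwitch2
    esxcfg-vswitch -L vmnic3 vSwitch2

    # One port group and one vmkernel port per uplink (example IPs)
    esxcfg-vswitch -A iSCSI1 vSwitch2
    esxcfg-vswitch -A iSCSI2 vSwitch2
    esxcfg-vmknic -a -i 10.10.10.11 -n 255.255.255.0 iSCSI1
    esxcfg-vmknic -a -i 10.10.10.12 -n 255.255.255.0 iSCSI2

    # Bind both vmkernel ports to the software iSCSI adapter and verify
    # (check your vmk/vmhba numbers with esxcfg-vmknic -l and esxcfg-scsidevs -a)
    esxcli swiscsi nic add -n vmk1 -d vmhba33
    esxcli swiscsi nic add -n vmk2 -d vmhba33
    esxcli swiscsi nic list -d vmhba33

With both vmkernel ports bound you get two paths per LUN to the array, which is where the extra throughput headroom comes from rather than from jumbo frames.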

Hope this helps!

georgemason
Contributor

That's very helpful info - thanks! I have already designed the LAN/SAN infrastructure so the VM storage traffic is never going to contend with the guest network traffic - they'll be on completely separate physical switches. To be honest I think the infrastructure will support the number of VMs we plan to run on it without JF or flow control, but since they're available I see no sense in not using them.

Any thoughts on which is more useful, in a run-of-the-mill virtualisation installation?

Thanks again.

G

cebomholt
Enthusiast

If I had to choose between them, I would go with flow control. Jumbos can help performance depending on workload, but flow control is ALWAYS a good practice for iSCSI. Especially if you are oversubscribing bandwidth at all, I think flow control is probably more important than jumbos.
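
On the host side, the NIC pause-frame settings can be checked and set from the ESX service console with ethtool - just a sketch, the vmnic name is an example, flow control also has to be enabled on the matching switch ports, and the setting may not survive a reboot unless you script it:

    # Show the current flow control (pause) settings on an iSCSI uplink
    ethtool -a vmnic2

    # Enable receive and transmit pause frames on that uplink
    ethtool -A vmnic2 rx on tx on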

JRink
Enthusiast

IMHO, skipping Jumbo Frames or Flow Control is fine.

I use a dual-controller HP MSA 2312i SAN at one client with (2) 1810G-8 switches (we use MPIO in ESX), without either flow control or jumbo frames (even though the 1810G-8 does support both), and we run 14 VMs on the ESX host without any problems: file/print server, Oracle server, SQL server, Exchange server, web server, etc.

If money is an issue, I wouldn't even bother looking at the 2910 series - check out the el-cheapo 1810G-8 switch for $155. I'm using 8 of them in 4 different ESX environments right now, and they've been more than sufficient for iSCSI so far (in some cases I do run them with both flow control and jumbo frames enabled, too).

georgemason
Contributor

That's really interesting - the 1810G switches do BOTH jumbos and flow control? Am surprised at that when the next model up doesn't. I need to do some more research I think!

georgemason
Contributor

Found this thread, which discusses the same issue: http://communities.vmware.com/thread/186569

The feeling I get from everything that I've read is that whilst the 1800 series can handle the job on paper, they might end up choking under load. I think we're going to stick with the design we had chalked out, and buy 2 x 2510G 24 port switches for the SAN - they might not support both jumbos and flow control but by the sounds of it we'll be ok with just flow control.

Thinking about it a bit more, I guess ESX has only really started supporting jumbo frames in the last release (or two?), so it can't be that crucial for most installations, given that it's a very mature product these days.

Thanks to all for your help and suggestions.

George

georgemason
Contributor

I think we're settled on the 2510Gs. I've installed one to test, and with no config on the switch (and currently only a single GbE NIC to the switch and one controller in the MSA) we're seeing 30MB/s in IOmeter when configured for 100% sequential 64KB reads (as in the IOmeter quick start documentation). I've heard of people getting as much as 100MB/s from a pair of GbE NICs with flow control etc. enabled, so I'm hoping that with a bit of tweaking we can get somewhere nearer to that.

I'd be interested to hear people's comments on the above and whether my expectations are reasonable given the hardware - I just ran the same tests on a box with local SAS storage and got sequential reads in the order of 72MB/s, so I hope I can improve considerably on the SAN figures above.
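
For anyone comparing numbers, the host side can be watched with esxtop while IOmeter runs - a rough sketch, and the adapter name is just whatever your software iSCSI HBA shows up as:

    # Run esxtop on the host during the IOmeter test
    esxtop
    # 'd' switches to the disk adapter view - MBREAD/s against the software
    # iSCSI adapter (vmhba33 here as an example) is the figure to compare with IOmeter
    # 'n' switches to the network view to see per-vmnic throughput

A single GbE link tops out at roughly 110-120MB/s before protocol overhead, so 30MB/s is well below the wire limit and suggests there's tuning headroom.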

Cheers.

George

georgemason
Contributor

Hi all,

Just thought I'd post a response to this and close it off now that the project is complete, if only to help others in the position I was in when I was scouring the web for info to set my expectations.

We did end up going for the 2510G-24 switches for the SAN side of the infrastructure, as after reading extensively around the issue it seemed that, for the size of setup we were building, having both flow control and jumbo frames was "nice to have" but not essential. The switches were configured as redundant paths to the storage using the MRU multipathing policy. To recap the earlier discussion: the 2510G-24 switches support flow control but not jumbo frames, and the extra price of the 2910al switches that support both was too much for this project.
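
For anyone recreating this, the path policy is easy to verify from the console - a sketch for ESX 4, with the naa identifier below being just a placeholder for the real LUN ID:

    # List devices with their current path selection policy (look for VMW_PSP_MRU)
    esxcli nmp device list

    # The policy can be set per device if a LUN ever comes up with the wrong one
    esxcli nmp device setpolicy --device naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_MRU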

The system has now been running for about three months on the configuration above. At first my perception was that the throughput wasn't sufficient: I was using IOmeter within the VMs to load the storage, and I found I couldn't achieve anywhere near the sequential read rate I thought I should be getting. From a single Windows 2003 VM I could pull about 15-20MB/s from the storage; with multiple VMs doing the same I could get somewhere near 60MB/s before they all started to drop off, suggesting that overheads within each VM were limiting the throughput I was seeing.

After extensive testing, including "real world" testing, we decided the performance was about right - or at the very least sufficient - and the servers were put into production. VMs were added incrementally and the load on both hosts and storage was carefully monitored. Since then the system's performance has been absolutely fine for the size of infrastructure (2 hosts, about 15 VMs, mostly Win2k3 with a couple of Win2k8 - Exchange, SQL, file server, DCs etc. - serving about 120 users). The VMs operate without a hint of slowdown from the storage, and having monitored the throughput on the switches we are seeing far less load in production than we were throwing at the storage with IOmeter.
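
For anyone doing similar monitoring, vscsiStats on the ESX service console can also show per-VM disk latency histograms, which is a nice cross-check against the switch counters - a sketch, where the world group ID comes from the -l listing:

    # List running VMs and their world group IDs
    /usr/lib/vmware/bin/vscsiStats -l

    # Start collecting for one VM, print its latency histogram, then stop
    /usr/lib/vmware/bin/vscsiStats -s -w <worldGroupID>
    /usr/lib/vmware/bin/vscsiStats -p latency -w <worldGroupID>
    /usr/lib/vmware/bin/vscsiStats -x -w <worldGroupID>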

In short, the setup with the HP 2510G-24 switches worked out fine. I would have no hesitation in using the same kit again - I've had good experiences with HP networking kit elsewhere, and this project was no exception.

Hope this info is useful to someone!

George
