VMware Cloud Community
averylarry2
Contributor

Recommended SAN?

I'm looking for real-world information on iSCSI SANs. If I set up ESX (Essentials Plus), I'm a little worried about the data throughput when trying to run a bunch of VMs from one SAN. Obviously I understand:

1) Using a local disk for ESX boot and for the memory swap files will help a lot.

2) The relative "busy-ness" of the VM's will be the primary factor.

My primary data usage will be the main file server (150 employees, ~100GB of data, nothing much beyond Word/Excel files) and the email system (fairly heavy usage). Add maybe a dozen misc. VMs.

I would expect a single 1Gb port on an iSCSI SAN to likely choke. Do you think a single SAN enclosure with 2, 4, 6, 8 ??? ports would keep up?

I feel like I'm rambling. I'm trying to get a grasp on how to proactively try to keep the SAN from becoming the bottleneck. If I buy a nice server that has the processors and memory to run 20 virtual machines, but they are slow because they are constantly waiting on the SAN, then that's not beneficial.

Also -- I probably can't spend more than $15,000 on SAN enclosures (minus hard drives). Impossible?

9 Replies
golddiggie
Champion

Look at the Dell MD3000i iSCSI storage system... It has dual Gb controllers (dual ports each, if I recall correctly). Fill it with high-speed SAS drives (10k or 15k) and the IOPS of the storage shouldn't have any trouble handling the load you described. There are people posting here who have these iSCSI devices in production and will be able to better guide you on how to carve up the storage. I would tend to go with a single RAID 10 volume on the chassis, but others might advise splitting it into two arrays (both RAID 10). You can get 300GB (and larger) 15k rpm SAS drives (up to 600GB, actually), so it all depends on how much space you need and how much money you really want to spend in the end (300GB 15k rpm SAS drives are in the $300-$350 each price range right now).
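For a rough sanity check on the IOPS side, here's the kind of back-of-the-envelope math I'd do (in Python, so you can plug in your own numbers). The per-drive IOPS figure, spindle count, and read/write mix are my assumptions, not Dell specs:

# Rough IOPS estimate for a RAID 10 set of 15k rpm SAS drives.
# Assumed figures only: ~175 IOPS per 15k spindle, RAID 10 write penalty of 2,
# and a 70/30 read/write mix -- measure your real workload before trusting this.
drives = 14                  # e.g. an MD3000i shelf with 14 usable spindles
iops_per_drive = 175         # assumed average for a 15k rpm SAS drive
read_ratio, write_ratio = 0.7, 0.3
raid10_write_penalty = 2     # each logical write costs two physical writes

raw_iops = drives * iops_per_drive
effective_iops = raw_iops / (read_ratio + write_ratio * raid10_write_penalty)
print(f"Raw spindle IOPS: {raw_iops}")
print(f"Host-visible IOPS at a 70/30 mix: {effective_iops:.0f}")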

An advantage of the MD3000i is that you can expand the storage later (with the MD1000, I believe), so you won't be locked into a static storage amount. Do plan for growth with the storage, though. Take what you need now, think about what you'll use for the VMs, and then triple it. That should be a fairly healthy storage size to start with. At one of my previous jobs we saw groups' storage requirements (on the SAN) pretty much double every year. I would also advise against telling people how much actual (or usable) storage you have available. If you do that, they'll think it's all available for them to use (when it's really not). I would also suggest planning on adding storage growth to each budget year, so that you don't need to fight hard to get dollars next year when you're running low on space...
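If it helps make the budgeting argument, here's a tiny sketch of what "double every year" does to a starting allocation (the starting figure and growth rate are placeholders, plug in your own):

# Project SAN usage if consumption roughly doubles every year.
# Starting usage and growth rate are placeholder values, not measurements.
current_use_gb = 500
growth_per_year = 2.0
years = 3

usage = current_use_gb
for year in range(1, years + 1):
    usage *= growth_per_year
    print(f"Year {year}: ~{usage:.0f} GB of SAN space needed")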

VCP4

averylarry2
Contributor

I'm worried that 2 Gb ports won't be enough. Our email server is already kinda slow (400Gb data).

Any thoughts on the iStor iS325 with a 10Gb port?

golddiggie
Champion

Unless you already have a 10Gb backbone in place, you're just going to be wasting money and resources getting 10Gb storage. You'll need at least 10Gb NICs in the ESX/ESXi host, and a 10Gb switch, for them to communicate.

With the MD3000i having dual controllers (typically set up as active/passive, so that if one fails the other picks up immediately), I don't see that as an issue. At least not for a single host. You'll want to have MORE NICs in place for the VM traffic to the rest of the network. Plan on two active and two redundant iSCSI NICs (that's four ports there), one active and one redundant console port (another pair), and then at least two active and two redundant for VM traffic. That's ten Gb NIC ports in the host to start with. I would go with quad-port cards, since you'll probably want to at least double the VM traffic ports based on your concerns.

I would also put two switches in place for the ESX environment, with the host connecting to both, and the two controllers on the storage connecting to both as well. Have the switches connected to each other too (such as the HP ProCurve 2900 series with the 10Gb connection between them) so that you won't have any downtime/data loss when one does go hinky. For the production environment, build the redundancy so that you don't have all the traffic going through any single switch, or module in a switch. The better you balance the entire network aspect, the better the entire system will perform.
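To make the port count concrete, here's that tally as a quick sketch (the exact split is my suggestion, not a VMware requirement -- adjust the pairs to your own design):

# Gb NIC ports per host for the layout described above.
port_plan = {
    "iSCSI (active)": 2,
    "iSCSI (redundant)": 2,
    "Service console (active)": 1,
    "Service console (redundant)": 1,
    "VM traffic (active)": 2,
    "VM traffic (redundant)": 2,
}
for role, count in port_plan.items():
    print(f"{role}: {count}")
print(f"Total Gb ports per host: {sum(port_plan.values())}")   # ten, hence quad-port cards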

For the 400Gb (you sure it's not 400GB?) of data on your email server, how much of that is transmitted at any single point in time? Run the VMware capacity planner on your environment for at least a week (longer if possible) so you can eliminate/reduce bottlenecks before you build it up.

What are you going to use for servers? If you're getting brand new ones, get the 5500 or 5600 series Xeons, dual socket. Get as much memory as you can get them to buy as well: a minimum of 24GB, with 48GB being a better idea. Get two ESX hosts, along with vCenter, HA, and DRS. You need to properly balance all loads across every aspect of the environment... Of course, the capacity planner will give you more details as to what you need to fit what you have now. Still, always plan on things growing/increasing each year, so don't assume that what works today will still work in two years' time (unless you plan ahead and go with what some may call 'overkill' or over-engineering).
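On the memory number, a simple sizing sketch like this (the VM count and per-VM allocation are made-up examples, not your real workload) shows why 24GB is really the floor and 48GB is more comfortable:

# Rough host memory sizing; VM count and sizes are example values only.
vms_per_host = 10
avg_vm_memory_gb = 3          # assumed average allocation per VM
hypervisor_overhead_gb = 2    # rough allowance for ESX itself and per-VM overhead
ha_headroom = 1.5             # leave room to absorb VMs from a failed host (HA/DRS)

needed_gb = (vms_per_host * avg_vm_memory_gb + hypervisor_overhead_gb) * ha_headroom
print(f"Suggested RAM per host: ~{needed_gb:.0f} GB")   # ~48 GB with these numbers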

VCP4

averylarry2
Contributor

Interesting. I thought 10Gb on the SAN so I could run 3 hosts with dual Gb ports each, perhaps bonded. Less configuration -- much like the old days when gigabit networking first came out. You'd have a bunch of workstations still running 100Mb, but you could still get better throughput with your servers running Gb.

Email server -- 400 gigabytes. It's email -- I know most of it just sits there. But we do have people with 20,000+ emails that have to be populated each time they leave their mailbox and click back on it. It's still not huge bandwidth, but I don't want to worry about fighting with 5, 10, 15? other VMs.

golddiggie
Champion

Unless you're ready to go ahead and get the other 10Gb hardware to make a single-10Gb-port SAN work, I wouldn't do it. Plus, I would never get any storage array/device that has just ONE connection port. No single point of failure allowed when it comes to any production environment. If you're looking at <1TB of space needed, at least for this budget year (and into next), then you can build up the MD3000i with the 146GB 15k rpm SAS drives. Going RAID 10, that will give you just about 1TB of space (minus formatting and such).
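Quick math on that claim (the drive count per RAID 10 set and the formatting overhead are my assumptions for a full shelf with one hot spare):

# Usable space from 146GB 15k SAS drives in RAID 10.
drives_in_raid10 = 14        # assumed: 15-bay shelf minus one hot spare
drive_size_gb = 146
format_overhead = 0.93       # rough allowance for formatting/metadata

usable_gb = (drives_in_raid10 / 2) * drive_size_gb * format_overhead
print(f"Usable RAID 10 capacity: ~{usable_gb:.0f} GB")   # just shy of 1TB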

I'd get the people with 20,000+ messages to use PST files and store that mail better. Having it all in the mailbox sets them up for major problems.

Using good switches will help to eliminate/alleviate some bandwidth bottlenecks, but you need to locate all of them in order to plan how to either eliminate or at least reduce their impact.

Something else to keep in mind: get a total amount you're going to be allowed to spend on the storage INCLUDING spindles. Drives going into the array won't be cheap if you want decent size AND performance. You'll want a minimum of 10k rpm or 15k rpm SAS or SAS 2.0 drives there (SAS 2.0 drives are 6Gbps and fairly new).

Instead of looking at small storage players, look at the large manufacturers that have been in the enterprise market for some time now: EMC, HP, EqualLogic, Dell, even IBM. If you have sales reps with them, contact them and get quotes from each for the performance you're looking for and within the budget you set. You're probably looking at more than $15k for the entire package. I think you'll be lucky to get a Tier 1 storage array with high IOPS for under $50k, at least in something over a couple of TB of usable space (never count the per-disk space, only the total space configured under either RAID 10 or at least RAID 60 with some spares). For example, an EqualLogic array with 16 1TB drives came down to about 11TB after being set up under RAID 50.
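That EqualLogic figure lines up roughly with this kind of calculation (the group layout, spare count, and overhead factor are my guesses, not the array's actual configuration):

# Approximate usable space for 16 x 1TB drives under RAID 50.
drives = 16
drive_size_tb = 1.0          # marketed (decimal) terabytes
spares = 1                   # assumed hot spare
parity_groups = 2            # RAID 50 = striped RAID 5 groups, one parity drive each

data_drives = drives - spares - parity_groups
raw_tb = data_drives * drive_size_tb
binary_tb = raw_tb * (10**12) / (2**40)   # decimal TB -> TiB as the OS reports it
usable_tb = binary_tb * 0.95              # small allowance for formatting/reserve
print(f"Approximate usable space: ~{usable_tb:.1f} TB")   # roughly the 11TB mentioned above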

VCP4

averylarry2
Contributor

So what's your experience running VMs from a single gigabit port on a SAN? Let's say 2 hosts, 7 VMs per host, each host using a different Gb port as primary with a failover (not load balance) to the other port. How likely do you think the Gb connection to the SAN will become the bottleneck?

golddiggie
Champion

averylarry2 wrote: So what's your experience running VMs from a single gigabit port on a SAN? Let's say 2 hosts, 7 VMs per host, each host using a different Gb port as primary with a failover (not load balance) to the other port. How likely do you think the Gb connection to the SAN will become the bottleneck?

That's NOT the way you do things. You set up a Gb switch with VLANs for iSCSI and vMotion, and put the VM traffic on a different VLAN. Do NOT directly connect the iSCSI storage to the hosts... Ever... You let the Ethernet controllers within the iSCSI appliance manage the connections properly, according to the actual load.

Reach out to the VMware partner you went through to get your original configuration. If you didn't use one, then get a recommendation from your regional VMware sales rep as to who to use. Get them to design a PROPER iSCSI implementation, and network, for you and explain all the little details of why it works that way.

VCP4

averylarry2
Contributor

We're passing by each other somewhere.

I thought I understood your original idea with the Dell SAN to be what I described . . . It has 2 Gb ports, right? One would be primary and the other would be failover for any attached host (flip primary/secondary for the 2nd host). I wasn't describing any host-host or host-network connections. Simply the host-SAN connections.

If you only had 1 host and 1 SAN, why would you need a switch? Does it actually do something better than direct patch cables between the host and SAN? (Again, not for host-host or host-network traffic.)

I guess the point of my last question was this -- if a host ends up with only a single active gigabit connection to a SAN enclosure, will that be enough bandwidth to run 7 VMs? In theory, of course, given that the 7 VMs don't (regularly) exceed 1Gb of bandwidth, it should be fine. But in the real world . . . ?
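To put numbers on my own question, this is the back-of-the-envelope I keep doing (the per-VM storage throughput is a pure guess, which is exactly my problem):

# Will 7 VMs sharing one active Gb iSCSI path run out of bandwidth?
# The per-VM figure is guessed; real numbers would have to come from capacity planner.
link_mb_per_s = 125               # theoretical ceiling of a 1Gb link (1000 Mbit / 8)
usable_fraction = 0.8             # headroom for protocol overhead and bursts
vms = 7
avg_vm_storage_mb_per_s = 5       # guessed steady-state storage I/O per VM

demand = vms * avg_vm_storage_mb_per_s
budget = link_mb_per_s * usable_fraction
print(f"Steady-state demand: {demand} MB/s out of ~{budget:.0f} MB/s usable")
print("Looks fine on average -- the worry is when several VMs burst at once.")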

golddiggie
Champion

Get the MD3000i with DUAL ethernet controllers, which have TWO Gb ports each. You could opt to have one controller set for fail-over in case the primary goes down. This gives you TWO Gb ethernet pipes to the VMware host/environment.

A Gb switch is the best way to connect the devices together. Get a managed switch with MORE ports than you think you'll use so that you have room to grow. I opted for a 24-port switch in my lab so that I could use it for everything. I went with an HP ProCurve 2510G-24 model since it has a full CLI, jumbo frame support, and more -- not the least of which is the full lifetime warranty it carries as an HP ProCurve switch. A switch will also ensure that PROPER packet traffic management is taking place. Going without one is a colossally bad idea in my book.

In real-world situations things are constantly changing and evolving. In a business environment you're going to be 10000x better off designing this correctly from day one than having to go back and redo it all (or partially) later on. You can get a quality (HP ProCurve) switch for under a grand (the 2510G-24 cost me about $700 some 6-12 months ago), or you can go for one of the more advanced switches for around $2k-$3k.

VCP4
