VMware Cloud Community
Osm3um
Enthusiast

SAN suggestions, please

Currently running ESX 3.0.2 on two servers attached to an EMC AX150i: Exchange 2003, SharePoint, file servers, DCs, etc., about 9-10 servers in total. Not running HA, DRS, etc. (although HA will be included with 3.5, so we will be running HA soon). I would like to stick with shared storage.

I have a Dell 2950 loaded with 6x 146GB SAS drives that will be available for use in Q1 2008. From what I gather, I can load LeftHand software on it for about $10,000. I have also been looking at LeftHand's VSA virtual appliance for about $5,000 plus a VMware license.

Alternatively, it seems I can get a Dell MD3000i with about 6x 500GB SATA for less than $10,000.

Any thoughts would be appreciated. It seems a shame not to use the 2950 (although I can find a use for it elsewhere), but it also seems it might be more cost-effective to buy a whole new unit (i.e. the MD3000i).

Any thoughts.....

Thanks

Bob

Cloneranger
Hot Shot

6x 146GB SAS drives will leave 6x 500GB SATA standing still, and I am not just talking 20-30% here; it's more like 3 to 5 times the performance, depending on the type of I/O you do.

I would not recommend SATA for any moderately I/O-intensive database, and Exchange 2003 is extremely I/O intensive.

Hairyman
Enthusiast

Hi there,

We are running 3 instances of Exchange 2007 on a Dell/EMC CX3-20 (15x 146GB 2Gb FC disks) attached to two 1955 blades with 16GB of RAM each, and have no issues. Approx. 200 mailboxes so far.

Loading the GUI console from within the VM for Exchange 2007 takes some time initially, but once loaded it runs very well; no complaints from end users.

Cheers

Aaron

Cloneranger
Hot Shot

To be fair, the kit you are talking about is in a different league to the initial poster's.

6x SAS on a LeftHand box, or 6x SATA on a Dell iSCSI SAN, wouldn't compare to 15x 146GB FC disks.

I wouldn't imagine your setup would have an issue with just 200 mailboxes; I would expect it to support 2 or 3 times that without problems.

kingsfan01
Enthusiast

Sorry I can't give you any real-world performance opinions yet, but I am doing the same thing.

I am about to re-commission a pair of HP DL380s (6x 300GB 15K SCSI) using the LeftHand re-commission program for $15,900. I looked into loading the VSA as an alternative but opted to re-commission for a few reasons:

1) If you go VSA, you need an underlying OS. If you go ESX, you are paying at least $1,500 plus the cost of the VSA. If you go a cheaper route (Windows Server with VMware Server), you lose the benefits of a faster Linux OS and take on additional management hassle.

2) If you go VSA, note that you only get one network interface, which will severely limit your throughput as well as your ability to separate your iSCSI storage traffic from your LAN traffic (the ideal setup).

3) The pricing for the re-commission program isn't too far off from the VSA. If you have a relationship with a local vendor, you can usually get them to cut the price a bit.

In addition to the DL380s I'm going to re-commission (for our DR site), I will be implementing a pair of LeftHand HP DL320s (12x 146GB 15K SAS each) for our primary SAN. From everything I have seen and read, the LeftHand SAN should be a great complement to our VMware infrastructure, which currently consists of VMware Standard on servers with local storage.

One last tip: the LeftHand engineer I have been dealing with recommends breaking up the SAN such that your VMs reside together in LUNs but receive RDMs for storage (especially for file services, SQL, Exchange, and any other I/O-intensive application). You also get the benefits of the LH SAN like synchronous and asynchronous replication, Remote Copy, etc.

LeftHand - "Best practice is to use VMDK only for the boot drive of the VM. Additional drives for things like, file shares, Exchange logs and DB, SQL, etc, should be RDM LUNs off the SAN. This cuts out some layers of unnecessary virtualization (vmdk through vmfs) and allows for better portability between virtual and physical. For example, if you mounted your RDMed exchange volumes on one of your physical windows boxes it would look like a NTFS volume with exchange data. If you used VMDK it would look like a bad partition to your physical windows servers."

Tyler

Osm3um
Enthusiast

I am working with my reseller to get a good price on the raw software for my Dell 2950.

However, I still find the VSA very interesting.

Can you elaborate on the negatives? For example, you mentioned it only has one network connection.

Thanks,

Bob

dslarve
Contributor

Unless you're ready to pony up for a SAN, the Dell DAS and iSCSI solutions don't really seem to add a lot of value in contrast to sticking with a loaded-up 2950, at least in my experience. We started with a 6850, then moved to 2950s with dual quad-core CPUs, 24GB of RAM, and 6x 750GB SATA drives. Yes, I know SATA isn't going to measure up to the I/O of a 15K SCSI disk, but it's more cost-effective and (at least for us) meets our needs until we move to an FC SAN (which are getting cheaper every day).

We typically have 15-20 servers running concurrently, most with relatively demanding ERP application and DB servers. This configuration buys us 2x 1.5TB datastores, both RAID 5. We have a few of these, and they have worked nicely, almost too nicely in fact: more work to justify the SAN purchase. Once you get into the 3-4TB neighborhood, though, DAS quickly exhausts its ROI, and other issues will dictate the need for an enterprise-grade SAN.

So in the end (from what I've found), it's just as cost-effective to get a loaded-up 2950 as an MD3000 that will not scale like a true EMC CX box. Whatever you do, validate that the solution is VMware supported.

Osm3um
Enthusiast

I was messing with the VSA this evening and see what you mean about the single network port. I will not be able to get back to it until Wed., but would it be possible to:

Bind the VSA NIC to physical NIC A through VM Switch A, and connect NIC A to a switch/network dedicated to iSCSI traffic.

Create a Windows VM which would be dual-homed, with one NIC going to VM Switch A and one going to VM Switch B, where NIC B is connected to the general network.

This, in theory, would keep the iSCSI traffic off of the general network and enable management of the SAN via the Windows VM.
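To make that concrete, here is roughly what I have in mind on the ESX service console (the vSwitch and vmnic names are just placeholders):

    # vSwitch A: dedicated iSCSI segment, uplinked to physical NIC A (vmnic1 here)
    esxcfg-vswitch -a vSwitchA
    esxcfg-vswitch -L vmnic1 vSwitchA
    esxcfg-vswitch -A "iSCSI" vSwitchA
    # vSwitch B: general LAN, uplinked to physical NIC B (vmnic2 here)
    esxcfg-vswitch -a vSwitchB
    esxcfg-vswitch -L vmnic2 vSwitchB
    esxcfg-vswitch -A "LAN" vSwitchB
    # The VSA's single vNIC attaches to the "iSCSI" port group; the dual-homed
    # management VM gets one vNIC on "iSCSI" and one on "LAN"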

Thanks,

Bob

kingsfan01
Enthusiast

Bob,

Your scenario would definitely work for management of the VSA (it could also be done with a dual-homed physical PC, saving the cost of software licensing for the VM). While this will separate your iSCSI traffic from your general LAN traffic, keep in mind that you will still lose some of the features of re-purposing.

1) Re-purposed machines (Dell/HP/etc.) will have dual NICs for link aggregation and failover, whereas you are limited to the one virtual link on the VSA. Technically you can add an additional NIC to your vSwitch for link failover (see the sketch after this list), but you are still limited to 1Gbps throughput.

2) OS overhead. To run the VSA, you need either ESX or VMware Server (in which case you would need a separate OS). If the VSA were the route you chose, I'd run it off of the ESX server, as I would think it would run better than on a Windows box (I have no data to back that statement up, just personal experience). If you go that route, you would also need to weigh the cost of the underlying OS plus the VSA against just re-commissioning.

3) Capacity. The VSA is limited to 2TB of storage capacity. If you are running the VSA solely off of the 2950, you will be fine; if you are running the VSA from a server connected to the MD3000i storage array, you will not be able to make use of all of the array's capacity. Are you limited to 6 drives on the MD3000i for budget reasons? It appears that array can support 15 drives.
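On point 1: adding a standby uplink is a one-liner (a sketch, reusing the placeholder names from your layout above):

    # Add a second physical uplink to the iSCSI vSwitch for link failover;
    # per-session throughput is still capped at a single 1Gbps link
    esxcfg-vswitch -L vmnic3 vSwitchA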

If you went the route of the MD3000i with the VSA, you could get pretty creative with your options. One thing you may want to explore is setting up an ESX server with 2 (or more) controller cards connected to the MD3000i. You can set up a couple (or more) of datastores, each linked to a separate array on the MD3000i through a separate controller card. Each datastore would be used by an independent VSA, which would allow for an iSCSI SAN across DAS. If you were to go this route, I'd connect each VSA to its own physical NIC; and if you set up at least 2-way replication on the VSA for your volumes, you should be able to get both SAN redundancy (across VSAs) and 2Gbps aggregate link speed. You would still have a couple of single points of failure, but this could be lessened by running two ESX servers off of the MD3000i with the above-mentioned setup.

What were you planning on using the VSAs for primarily? I was exploring their use for our DR environment, which the VSA seems particularly well suited for, but I wouldn't consider it for my data center. In the end I decided that in the off chance we need to do a host or site failover, I'd much rather have the hardware benefits of the re-commissioned SAN than the VSA (the VSA is capable of performing all of the software functions of the physical SAN).

I think it will come down to a personal and company operational decision based on costs, management time, desired feature sets, etc.

If you would like, PM me and I'll send you the contact info for my LH vendor; they may be able to offer a better deal on the software than your vendor.

Tyler

deploylinux
Enthusiast

Take the six SAS drives out of the 2950 and put them in the MD3000i; you can then order just the chassis, probably for under $7K with Gold support. The best reason for getting an MD3000i is the drive interoperability with all 9th-generation Dell servers, and Dell will support the configuration. We filled all 15 drives in a primary chassis and 15 drives in an attached MD1000 just by migrating drives from ESX boxes which had previously been using local storage.

FYI, we did our own evaluation of the VSA vs. the MD3000i, and several issues ended up pushing us towards the MD3000i:

1) The VSA's performance seems to be limited by some local storage issues and I/O bugs in ESX (according to the LeftHand engineers we discussed our results with, some workloads wouldn't run faster than 66MB/s no matter what hardware you threw at them).

2) The MD3000i supports redundant controllers, each with 512MB of cache, updated synchronously.

3) The base VSA is limited to 2TB. I think that is because of ESX 3's limit on VMDK sizes, but it was unclear when this issue would be addressed or whether you could just set up multiple VMDKs (see the sketch after this list).

4) The initial VSA quotes we received were essentially equal in price to the MD3000i chassis.
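On point 3: if multiple VMDKs do turn out to be workable, carving them is straightforward (a sketch; the paths and sizes are placeholders, and note that the per-file ceiling on VMFS3 depends on the block size chosen when the datastore was formatted):

    # Carve several smaller VMDKs for the VSA rather than one oversized disk
    vmkfstools -c 500g /vmfs/volumes/datastore1/vsa/vsa-data1.vmdk
    vmkfstools -c 500g /vmfs/volumes/datastore1/vsa/vsa-data2.vmdk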

Osm3um
Enthusiast

Well, that is simply brilliant. Seriously, that did not even cross my mind. I was going through all of these fancy configurations in my head trying to figure out how to go; isn't it weird how sometimes the simplest option is the best? As far as that goes, I can get six more 146GB SAS drives if I replace the primary drives in my other 2950s with smaller drives.

I am still excited about the VSA, especially as it can be used for iSCSI for free even after 30 days. The fees come in with the advanced options; the basic iSCSI SAN piece is $0.

Thanks, I feel kind of stupid....

Bob

Osm3um
Enthusiast

I was/am looking at using the VSA as a cheap way to create an iSCSI SAN. I figured this would get my foot in the door for later on when we expand our network further, especially as the VSA can be used for free (without the advanced features). In the short term I just need a target to point my ESX boxes at, no replication, etc. I am also thinking that this sort of thing is going to be big in the small-to-mid market in the future, what with ESX being embedded on the motherboard.
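As far as pointing the ESX boxes at a target goes, from what I've seen so far it comes down to a few ESX 3.x service-console commands (a sketch; the discovery IP is a placeholder):

    # Open the firewall for the iSCSI client and enable the software initiator
    esxcfg-firewall -e swISCSIClient
    esxcfg-swiscsi -e
    # Add the target's address for send-targets discovery (vmhba40 is the
    # default software iSCSI adapter), then rescan for LUNs
    vmkiscsi-tool -D -a 192.168.10.10 vmhba40
    esxcfg-rescan vmhba40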

The thing I am really struggling with is the single NIC on the VSA. Unless I am missing something, that seems very strange since, from what I understand, iSCSI traffic is supposed to be kept separate from normal traffic.

Bob
