The MSA1000 is SCSI only, I'm sure you mean MSA1500?
Have you looked at the EVA 3000 SAN unit...
I referred to the full support (2.5.1) provided by IBM for the DS storage family.
IBM usually supports products not yet supported by software vendors.
Here's my 2 cents.
SATA SANs are generally tier 3 storage. They should not be used as primary tier 1 storage. The drives only have a 30% duty cycle and will not stand up to the heavy use of multiple operating systems booting from them and then hammering away at a 70% - 100% duty cycle rate.
You are asking for performance problems as well as drive failures if you use this type of storage for a purpose that it was not intended. It is intended for data that is not critical and that is not accessed on a regular basis.
You are proposing to boot multiple operating systems, house swap files and data on these drives and they will not hold up under that kind of stress.
I went through a lot of analysis on this subject recently and decided on fibre channel drives for our SAN. Since their implementation I have not been disappointed and our performance monitoring has verified that the SATA drives would have been swamped under the load.
I will say that not all of the drives need to be fibre; there are many data drives that can easily live on SATA. I would look for a SAN like the HP where you can mix and match SATA and fibre. That way you can put your performance-critical systems on fibre drives while saving money on the less critical ones.
One company: EMC
We had an IBM and it was junk. Got an EMC Clariion and never looked back.
I have to agree with the minster. You can get a CX300 with a few fibre-channel drives, one switch to start with, and two nice HP servers for under $100K.
Just to give you an idea:
$55K CX300 w/6 146GB drives + two switches & maintenance
$30K 2x DL385 2xDualcore 2.4 16GB RAM + drives or less...
$15K for VMware licenses or so....
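A quick sanity check on the breakout above (these are the poster's ballpark figures, not vendor quotes):

```python
# Rough budget check for the quoted configuration.
# Prices are the forum poster's ballpark numbers, for illustration only.
items = {
    "CX300 w/6x 146GB drives + two switches & maintenance": 55_000,
    "2x DL385 (2x dual-core 2.4GHz, 16GB RAM) + drives": 30_000,
    "VMware licenses": 15_000,
}

total = sum(items.values())
for name, price in items.items():
    print(f"${price:>7,}  {name}")
print(f"${total:>7,}  total (budget: $100,000)")
```

It lands right at the $100K ceiling, so the "or less" on the server line is what gives you any headroom.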
I'm going through the SAN decision making process as well right now. What are the problems that make the IBM SAN "junk"?
Just to represent the other storage vendors, we are putting an HP EVA 8000 with our production ESX environment.
But for a more basic starting configuration you could look at starting with something like an EVA 4000.
One advantage you could get by going with the EVA is that you could get end to end support from HP for the servers through storage including VMware.
That was one of the main reasons we went with HP end to end. That way we know we have a "certified" configuration all the way through.
As for the MSA1000, just read all of the topics I have written over the last year. The MSA1000 is great for about 4 dual-proc ESX servers with 7-15 virtual machines per server. The MSA will only support 30,000 IOPS, and that is based on the drive specs since the caching algorithm is horrible. An MSA1000 should only be marketed for test environments and should never see the production floor.
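A back-of-envelope version of that sizing claim (the per-VM IOPS figure is an assumed average for illustration; real workloads vary widely):

```python
# Rough headroom check against the 30,000 IOPS ceiling quoted above.
# servers and vms_per_server come from the post; iops_per_vm is an
# assumed average per-VM load, not a measured number.
msa_iops_ceiling = 30_000
servers = 4
vms_per_server = 15   # upper end of the 7-15 range quoted
iops_per_vm = 100     # assumption for illustration

demand = servers * vms_per_server * iops_per_vm
print(f"estimated demand: {demand:,} IOPS vs ceiling {msa_iops_ceiling:,}")
```

At that assumed load the 4-server/60-VM case fits with room to spare, which matches the poster's "great for about 4 dual-proc ESX servers" but shows how quickly heavier per-VM loads would eat the ceiling.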
I know purchase cost is important, but also have a look at the management side of the SAN. Will the SAN allow you to redefine existing logical drives? Will it allow you to delete an existing logical drive or only the last created? Will it support your business needs for the next 2 years?
We have an MSA1000 and it works fine for our three 2-CPU servers, which mainly host test and development VMs plus about 10 production VMs. But the IO is really bad compared with our EVA5000. On the positive side, the MSA1000 is really easy to manage. The downside besides IO is that you can only delete the last drive created, meaning that if I have created 10 logical drives and no longer need the first one, I can't delete it without first deleting the other 9 logical drives... So you end up with a lot of downtime when redefining the logical drives...
We use an MSA-1500 for dev right now which is sufficient for what we need.
I have heard, though, that the newer firmware for the MSA-1000 is making some noticeable performance improvements for people.
I hadn't heard about the LUN deleting issue regarding only being able to delete the last one created on the MSA1000 though.
If you need to keep everything under $100K, you may want to consider three 2-way dual-core ESX hosts instead of two 4-way ESX hosts. It will cost less, and you only have to VMotion 1/3 of the guests at a time when servicing a host (such as an ESX version upgrade) instead of 1/2.
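The maintenance-impact arithmetic behind that suggestion, with an assumed total VM count for illustration:

```python
# With N hosts, servicing one means moving 1/N of the guests, and the
# remaining N-1 hosts absorb that load. guests is an assumed total.
guests = 30  # assumed total VM count, for illustration

for hosts in (2, 3):
    per_host = guests / hosts            # normal VMs per host
    survivor_load = guests / (hosts - 1) # VMs per host during maintenance
    print(f"{hosts} hosts: {per_host:.0f} VMs/host normally; "
          f"move {per_host:.0f} VMs, survivors carry {survivor_load:.0f} each")
```

With two hosts, one outage doubles the load on the survivor; with three hosts, the survivors only go from 10 VMs each to 15.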
I haven't tested an x86 box from Sun yet, but right now their Sun Fire X4200 is looking better than Dell's 2850 (limited to 12GB of RAM), and their 40Z is looking fairly good as a 4-way box. I am strongly considering Sun for our next VMware server.
If you are certain that a SATA array will provide enough performance and want to keep costs down, then you could consider Western Scientific's Tornado with a couple of QLogic SANbox 5200 switches. From the testing I've done, and the benchmarks in terms of IOPS from the MSA and AX100, I think it is the fastest. I haven't seen benchmarks of the DS4000.