VMware Cloud Community
ATL2VEGAS
Enthusiast

Shared Storage Solution for 3 Host Cluster ESX 4 needed

Hello,

I would like to know what type of shared storage is being used for a small 3-4 host cluster. I am trying to evaluate a shared storage solution for a 3-host cluster and wanted to see who is using what. I am looking at the MSA2000sa G2 as a possible solution but wanted some feedback from the community.

Thanks,

AC

Andre Chambers Contract IT Consultant Medium & Enterprise Business Solutions AndreChambersLV@gmail.com 702-203-9068
23 Replies
mehul96
Enthusiast

We are using a NetApp FAS270 and it works well.

Igor_The_Great
Enthusiast

What are your requirements?

One of the cheaper options would be the Iomega ix2 - 1TB for ~$300, using NFS.

-Igor

If you found this or any other answer useful please consider the use of the Helpful or correct buttons to award points.

AntonVZhbankov
Immortal

The MSA2000 G2 is a very interesting solution, perfectly suited to a small deployment.

I have an MSA2012fc with SAS disks - it works pretty fast.


---

VMware vExpert '2009

http://blog.vadmin.ru

EMCCAe, HPE ASE, MCITP: SA+VA, VCP 3/4/5, VMware vExpert XO (14 stars)
VMUG Russia Leader
http://t.me/beerpanda
DenisJones
Contributor

Using an IBM BladeCenter with 6-disk SATA-300 storage and StarWind Server - all fine for now.

smithdt62
Contributor

DenisJones,

Do I understand you are using SATA drives for shared VMFS?

I thought this was not possible due to SATA not supporting the SCSI reservation commands used by ESX for managing writes to shared volumes.

d

lmorel
Contributor

I am very much interested as well. I want to set my cluster up the exact same way: ESX 4 and 3 hosts sharing the MSA2000sa. I read somewhere on a forum that the MSA2000sa could accommodate up to 4 ports, meaning we could use 3 single connections out of 4 to connect each host to the MSA. But I also read on HP's website that going with dual ports per host creates redundancy. A vendor told me that the MSA2000sa can only accommodate 2 hosts, and I assume they don't even consider going with a single connection per host.

What do you guys think? How much risk would we be taking by going with single instead of dual connections?

wardgtr
Contributor

The MSA 2000sa G2 uses SAS interfaces, so you are going to be limited in expanding to more ESX hosts. I would go with the MSA2012i; it can scale better. Both systems are solid low-budget SANs.

VMware Technical Consultant

SHI

Bryan
lmorel
Contributor

Thank you for the quick response.

We are consolidating servers; we have around 150 employees and about a dozen servers. Basic network with one Exchange server, one SQL server, one intranet box, and the rest are scattered file servers. We like to keep multiple copies of files for versioning reasons. We are a civil engineering firm doing mostly CAD and GIS work. I guess I am wondering whether we could get away with an MSA2000sa for a few years and most likely go with whatever technology is available then, or whether we should stick with your idea of going with iSCSI up front. I completely agree with you about iSCSI being able to scale better down the road. Is the price difference between the 2000i and the 2000sa significant?

wardgtr
Contributor

Price-wise they are pretty close. Just check out HP.com or ask your local sales person to give you a quote on both. This past year I have done about 15 installs like what you are describing: 2 ESX hosts, an MSA2012i, 1-2 HP ProCurve switches, running 10-15 VMs.

VMware Technical Consultant

SHI

Bryan
lmorel
Contributor

2 hosts instead of 3? I thought the industry standard is 3 for a fully redundant solution. Plus that's what my bundle comes with, 3 seats.

ATL2VEGAS
Enthusiast

After several meetings with HP we decided on the MSA2000sa G2. The old MSA2000 could only accommodate 2 hosts; the MSA2000sa G2 has 8 SAS ports that can accommodate 4 hosts. If you purchase the dual-controller model, each host can be wired to both controllers. We run in a 24/7/365 environment and purchased our first cluster as a pilot. The 3-host HP ProLiant system with the MSA helped us consolidate 45 current and legacy systems into the cluster with ease. We decided not to go with iSCSI for several reasons. The first is that we have no plans to expand beyond 4 hosts. The direct SAS connection allowed us to configure ample storage without dealing with VLANs or a network upgrade. I've attached a document that outlines the new MSA2000sa G2.

-Andre

Andre Chambers Contract IT Consultant Medium & Enterprise Business Solutions AndreChambersLV@gmail.com 702-203-9068
lmorel
Contributor

Sweet!! There is hope. I think that's exactly why we would want to go SAS: we don't plan on scaling up, probably ever.

Thank you all for the input! I am going to quote both SAS and iSCSI anyway, as it seems the price difference will be minimal. Again, your ideas and comments are extremely helpful!

wardgtr
Contributor

2 hosts for budget reasons.

Bryan Ward VCP MCSE CCNA

VMware Technical Consultant

SHI

Bryan
whitesj
Contributor

We're using an EqualLogic PS6500E (24TB raw) and it does great. The features and performance are excellent for the value (and capacity).

vidkun
Contributor

I'll be honest and admit up front that I didn't read through this entire thread. However, I would not recommend anything less than iSCSI for shared storage. We had horrible performance on NFS. Currently we are running 3 hosts off of an old Dell PE2950 with a RAID 5 array, for total storage just shy of 5TB. This storage server is running OpenFiler (a free Linux-based network storage OS) and sharing it all out as iSCSI targets across our gigabit Ethernet. We currently run anywhere from 20-40 VMs at a time off of this setup. If you need more storage than that, then you will of course need a different server that can handle more drives. However, for a small infrastructure like you are describing, you could easily get by without having to shell out big bucks for high-end gear like Fibre Channel or hardware iSCSI.

ConstantinV
Hot Shot

iSCSI has some advantages compared to SAS. For example, SAS cabling is limited to about 8 meters, while iSCSI has no practical distance limit; SAS speed is up to 3Gb/s, while iSCSI runs at your Ethernet speed, etc. iSCSI is a more flexible solution, IMHO.

Starwind Software Developer

VCP 4/5, VCAP-DCD 5, VCAP-DCA 5, VCAP-CIA 5, vExpert 2012, 2013, 2014, 2015
ATL2VEGAS
Enthusiast

I would agree that iSCSI provides more flexibility when it comes to distance. However, let's look at raw speed. With 1Gb iSCSI you would have to bond 1Gb connections in order to achieve higher speeds, and depending on your bonding/multipathing setup you may or may not achieve speeds higher than 1Gb.

10Gb iSCSI solutions are also available, but at a much higher cost. The HP SC08Ge SAS HBA provides 2 x 3Gb connections to the SAS storage controller with an effective data rate of 2 x 2.4Gb, or roughly 4.8Gb, natively, without using special bonding/multipathing software. If your storage LAN isn't set up correctly you may not even achieve the native 1Gb speed of iSCSI. Since my servers are located in the same rack as the storage, a direct 2 x 3Gb lane to a dual-controller storage array with nothing in between was the most economical option for high-end database performance. We had 3 major vendors bring in comparable iSCSI solution demos and none could outperform the direct-attached solution.

We also have to look at the max speed of SAS, which is currently 3Gb (6Gb for the second generation). Most devices out there right now support 3Gb, with 6Gb well on the way. The HP G6 servers are already shipping with 6Gb drives in them, so it won't be long before a direct-attached SAS SAN solution hits 6Gb. At that point you will have dual-ported 6Gb drives, giving you access to 12Gb of bandwidth on a dual-controller array.
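
To put rough numbers on that comparison, here is a quick back-of-the-envelope sketch in Python; the ~80% efficiency factor and the NIC counts are illustrative assumptions only, not measured figures.

```python
# Rough aggregate-bandwidth comparison of the links discussed above.
# The 0.8 efficiency factor is an illustrative assumption, not a benchmark.

def sas_lanes_gbps(lanes=2, lane_rate_gbps=3.0, efficiency=0.8):
    """Aggregate SAS bandwidth, e.g. 2 x 3Gb lanes at ~80% effective rate."""
    return lanes * lane_rate_gbps * efficiency

def bonded_gige_gbps(nics=1, nic_rate_gbps=1.0, efficiency=0.8):
    """Aggregate iSCSI bandwidth over one or more bonded gigabit NICs."""
    return nics * nic_rate_gbps * efficiency

print(f"2 x 3Gb SAS  : ~{sas_lanes_gbps():.1f} Gb/s effective")
print(f"1 x 1GbE NIC : ~{bonded_gige_gbps():.1f} Gb/s effective")
print(f"4 x 1GbE NICs: ~{bonded_gige_gbps(nics=4):.1f} Gb/s effective")
```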

-Andre

I.T. Manager

Andre Chambers Contract IT Consultant Medium & Enterprise Business Solutions AndreChambersLV@gmail.com 702-203-9068
lmorel
Contributor

Yeah, I don't care about the distance limitation either. And after reading the performance comparison between the MSA2000sa G2 and the iSCSI model, man, there is a gap there. I guess we can bond a bunch of runs over gigabit, but like you mentioned, things start to get expensive, and 10Gb even more so. I think for the size of our business the MSA2000sa is going to be just fine. And it's a big thing for us, as we have never used a SAN before, so it's a good way to start.

vidkun
Contributor

You would also need to factor in many other issues though, such as the size of the company and the budget. Does it warrant the expense of such a high-end piece of gear? How does the cost compare to a simple server with a RAID array running a software iSCSI target? How many hosts will be connecting to this shared storage and pulling data? Does that number warrant the massive network bandwidth? For example, a single gigabit Ethernet link is far more than enough to serve 3 hosts running 20-40 VMs simultaneously.

Then there's the fact that the max theoretical throughput of SAS is currently 3Gb (that's a small b, not a big one; it's bits, not bytes) and will eventually double to 6Gbps. That 6Gbps works out to 768MBps (bytes there) of theoretical max throughput. With gigabit Ethernet, you will see typical maximum transfers on good server-quality NICs of about 100-130MBps (bytes again). Then factor in that it is very unlikely you will come anywhere close to reaching that max throughput on the SAS drives themselves, so cut it roughly in half to get about 384MBps of I/O throughput.
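
That bits-versus-bytes arithmetic can be sanity-checked with a few lines of Python, using 1Gb = 1024Mb as in the figures above; the halving factor for realistic drive throughput is a rule of thumb, not a measured value.

```python
# Convert link rates quoted in gigabits per second to megabytes per second
# (8 bits per byte, 1 Gb taken as 1024 Mb to match the figures above).

def gbps_to_mbps(gbps, bits_per_byte=8, mb_per_gb=1024):
    return gbps * mb_per_gb / bits_per_byte

sas_6g = gbps_to_mbps(6)   # ~768 MBps theoretical for 6Gbps SAS
sas_3g = gbps_to_mbps(3)   # ~384 MBps theoretical for 3Gbps SAS
gige = gbps_to_mbps(1)     # ~128 MBps theoretical for gigabit Ethernet

print(f"6Gbps SAS: {sas_6g:.0f} MBps theoretical, ~{sas_6g / 2:.0f} MBps if cut in half")
print(f"3Gbps SAS: {sas_3g:.0f} MBps theoretical")
print(f"1GbE     : {gige:.0f} MBps theoretical (typically 100-130 MBps in practice)")
```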

So again, is the extra cost for all that high-end gear (given your intended business situation) justified over throwing 3 decent gigabit NICs into a server running something like OpenFiler with iSCSI targets? Then set up multipathing on the 3 hosts with a preferred path for each host, so that each host accesses the target on a separate NIC (i.e. a full gigabit NIC per host).
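
As a rough illustration of that layout - the host names and portal IPs below are hypothetical, and the assignment is just the static one-NIC-per-host scheme described above:

```python
# Hypothetical static path layout: the storage server exposes one iSCSI
# portal per gigabit NIC, and each ESX host prefers a different portal,
# so each host effectively gets a dedicated ~1Gb/s path to the target.

PORTALS = ["10.0.10.1", "10.0.10.2", "10.0.10.3"]  # one IP per NIC on the storage box
HOSTS = ["esx01", "esx02", "esx03"]

preferred_path = {host: PORTALS[i % len(PORTALS)] for i, host in enumerate(HOSTS)}

for host, portal in preferred_path.items():
    print(f"{host} -> preferred iSCSI portal {portal} (~1Gb/s dedicated)")
```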
