VMware Cloud Community
menora
Contributor

Refreshing hardware

Hi all,

We are looking at replacing our existing server hardware. We have 120 users, 15 VMs, 2-3 TB of total data, and a gigabit iSCSI connection. We are currently using 3 x Intel ESX 3.5 servers and 2 x SANmelody SAN storage nodes (with thin provisioning and synchronous mirroring for automatic storage failover). We don't currently have performance issues; it's just that the hardware is almost 4 years old. The bosses want branded servers this time, so we got two solutions from different suppliers. Please leave your comments and preferences.

Solution A:

3 x HP DL360 G7 (dual X5650, 60 GB RAM each)

HP LeftHand P4500 G2 SAN, 14.4 TB

HP D2D backup server

HP 1/8 G2 tape autoloader

HP Data Protector

Based on the existing gigabit connectivity.

Solution B:

2 x IBM x3650 M3 (dual E5650, 96 GB RAM each)

2 x IBM DS3524 (18 x 300 GB 10k SAS and 6 x 1 TB 7.2k SAS)

Remote mirroring software (async mirroring)

IBM x3650 with Veeam as D2D backup

IBM TS2250 tape drive

FC switches

Moving to FC connectivity.

The costs of both quotes are very close. Uptime and reliability are the highest priorities for the company (even our existing system has no performance issues); ease of management and maintenance is the next consideration, followed by provision for vSphere 5.

Please provide your expert opinion.

whynotq
Commander

Why only 60 GB RAM in the HP as opposed to 96 GB in the IBM?

Apart from that, my only concern would be that I have seen regular network connectivity issues with the Broadcom driver for the onboard NICs. There are a few KB articles and forum threads about this.

Beyond that, I can't pick any holes. It's a shame you are moving away from Intel, as I really like those servers and recommend them regularly.

regards

Paul

smsf
Contributor

I have a similar environment and am looking for a similar solution, so I'll be checking this thread quite often.

Here's the post regarding my planned upgrade.

Menora, how have the SANmelody units performed, and how are you measuring performance?

menora
Contributor

Thanks for your comment, Paul.

HP based the solution on our standard license. They said it will make even more sense when we upgrade to vSphere 5, as we are not going to utilize more than 60 GB of RAM per server anyway. Also, if one host goes down, the IBM option leaves only 96 GB of usable RAM and no redundancy (2 x 96 GB minus one host), whereas the HP option still leaves around 120 GB across two running hosts (3 x 60 GB minus one host).

I need to do some reading and ask them about the Broadcom issues; thanks for the tip.

I personally prefer Intel servers too. Unfortunately, most of the consulting companies are saying they are "white box" and that VMware will not provide full support. With branded servers you just need to give the model number and they can assist easily, since the servers are on the VMware Compatibility Guide. What gets me is that every single part we bought from Intel is also listed in the VMware guide; the only thing they can question is that we run SANmelody on the Intel storage server (this specific combination is not certified by VMware), although SANmelody is on the SAN storage compatibility guide too. Anyway, I can't argue with management, and there's no point challenging the consultants.

menora
Contributor

Interesting, smsf; we do have a very similar environment.

SANmelody did very well for its price. We started with the 3 TB bundle (including thin provisioning and synchronous mirroring), then upgraded to 8 TB. Initially we used some 15k SAS disks for the VMs and some 7.2k SATA for user data. It started OK but became a real disaster when a SATA HDD failed on one server: the RAID degraded, and users complained that network files were very slow to open, save, and close, especially with archive PST files open over the network, which made it even worse.

I could quickly fail the storage path over to the second SAN, so I was able to replace the faulty SATA HDD without affecting performance and uptime, but rebuilding the RAID and the mirror after the replacement still took time and performance; with over 1 TB of user data, it took us 5 days to get everything back to normal. Since then we have replaced all the SATA HDDs with 15k SAS, and everything now runs smoothly without any major issues. Even when we lost a SAS HDD, it only took 1 day to get back to normal, and it was seamless to the end users.

If you do this, make sure you either run direct crossover cables between the two SANs and from SAN to ESX for dedicated connections, or set up a VLAN for each connection (a rough sketch of the VLAN approach follows below). You would probably need to upgrade to at least something like the Netgear JGS524E switch.
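In case it helps anyone reading later, here is a rough sketch of the VLAN approach scripted with pyVmomi. This is not what we actually run (pyVmomi is newer than our setup), and the host name, credentials, uplink NIC, VLAN ID, and IP address are made-up examples, so adjust them to your own layout:

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Connect to one ESX(i) host; host name and credentials are illustrative.
ctx = ssl._create_unverified_context()
si = SmartConnect(host='esx01.example.local', user='root', pwd='secret',
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    host = content.searchIndex.FindByDnsName(dnsName='esx01.example.local',
                                             vmSearch=False)
    net = host.configManager.networkSystem

    # Dedicated vSwitch for iSCSI, bound to its own physical uplink,
    # with jumbo frames enabled.
    net.AddVirtualSwitch(
        vswitchName='vSwitch_iSCSI',
        spec=vim.host.VirtualSwitch.Specification(
            numPorts=64, mtu=9000,
            bridge=vim.host.VirtualSwitch.BondBridge(nicDevice=['vmnic2'])))

    # Port group tagged with the (made-up) iSCSI VLAN 20.
    net.AddPortGroup(portgrp=vim.host.PortGroup.Specification(
        name='iSCSI', vlanId=20, vswitchName='vSwitch_iSCSI',
        policy=vim.host.NetworkPolicy()))

    # Jumbo-frame VMkernel interface that carries the iSCSI traffic.
    net.AddVirtualNic(portgroup='iSCSI',
        nic=vim.host.VirtualNic.Specification(
            ip=vim.host.IpConfig(dhcp=False, ipAddress='10.0.20.11',
                                 subnetMask='255.255.255.0'),
            mtu=9000))
finally:
    Disconnect(si)

The same VLAN ID then needs to be tagged on the physical switch ports; the SAN-to-SAN links can instead run as straight crossovers, which avoids the switch entirely.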

It also benefits from more RAM: with 8 or 16 GB of RAM for the software to cache data, performance will be even better (we only have 4 GB on each SAN).

We don't have any problems running on the gigabit network; it's very comfortable. The only time users can feel the performance change is when the two DB servers and the email server are peaking; then I get calls asking why it takes a bit of time to open and save documents.

The major concern from the IT managers is that these are Intel-built servers (some people call them "white box"), and some consultants said VMware can refuse support if the exact configuration is not listed in the Compatibility Guide. Don't get me wrong: SANmelody and all the Intel hardware we've got are listed in the guide, but it's not as simple as branded hardware where you can just quote the model number.

If we didn't have to consider this, I would definitely go for Intel servers, as you can get more powerful and up-to-date hardware for the same price.

smsf
Contributor

Menora,

Thanks for the insight into SANmelody. A few years ago I proposed a SAN when we moved to ESX, but the cost was prohibitive and a SAN was deemed a bit overkill, so I skipped it. Now, with 2 ESXi hosts and a 3rd on the way, I'm a bit overdue for a SAN. However, I still have crazy budget constraints.

I'm now taking a serious look at StarWind HA combined with off-the-shelf R510 servers.

Here's a link discussing Starwind + off-the-shelf servers vs. EqualLogic. Here's another discussion about using Virtual Storage Arrays like Starwind.

A solution with Starwind HA, dual SAN nodes, dual dedicated gigE switches and vSphere license is right around $25K.

Here's the breakdown (based on my research thus far):

[attached image: cost breakdown table]

What do you think?

Also, I have concerns about using the Netgear GigE unmanaged switches, but these guys do have some good points.

As for crossover cables: if I don't physically separate my SAN nodes, I'll probably do that. I may even go with 10GbE NICs and use a crossover for the node-to-node connection. That lets me add a 10GbE switch and separate the nodes in the future (when 10GbE switches come down in price).

menora
Contributor

Thanks, smsf. This setup looks pretty similar to what we have got, except we run VLANs to isolate all the traffic between SAN and ESX, as well as the traffic from ESX to the core network.

It looks like the StarWind software does the same job as SANmelody. I personally haven't used it before, so I can't really comment on it. I can find it listed in the VMware Compatibility Guide, so I can't see any problem. Again, though, some people may say a branded array is still better because the software and hardware come together and are tested by the vendor, and they will say your combination is not fully supported by VMware support.

My experience with software SANs is that you need very good support: quick and skilled support contacts. I have had the best support experience with VMware and DataCore (SANmelody). Once I logged a level 1 support call with DataCore; a tech called me back 5 minutes later, and I gave him everything he requested. He knew what he was doing and how to solve the problem; he logged in remotely and guided me through the fix. This is extremely important when something happens: they don't leave me feeling nervous and uncertain. Depending on your location, DataCore also provides a local toll-free number plus an engineer on call 24x7. At least I don't have to call an overseas number when something goes wrong.

I'm not saying they are the best, but at a minimum the StarWind software should provide a similar level of service.

Regards,

Patrick

smsf
Contributor

Thanks Patrick.

I'll definitely focus on support with the Starwind solution.

At the same time, matching your config A may not be out of our budget (it'll be quite a stretch, but I'll try), so I'm very interested in what you find in your research.

I'm looking into hardware (EqualLogic, EMC VNX) vs. software (LeftHand P2000i, LeftHand P4000, StarWind). Whichever config gives the most value, reliability, and performance will win.

Also, what are your thoughts on the connectivity backbone? I plan to start with both SAN nodes together in my data center, then move the 2nd SAN node (and eventually one of my VM hosts) over to a near-site server room.

I'm thinking dual GigE unmanaged switches (fastest) for VM hosts/iSCSI clients to SAN, then a 10GbE crossover for SAN-to-SAN. Later on, add 10GbE switches to extend to the other site (when 10GbE switches become more affordable).

What do you think?

menora
Contributor

Sorry for the late reply.

If you can put the project on hold, please wait for vSphere 5 to be released, because it will include a new component called VSA (VMware vSphere Storage Appliance). With this you can turn the hosts' internal storage into shared storage, and it allows storage failover like a SAN. This may fit your tight budget better: you can get more hosts with more internal RAID storage, save money on the SAN, and still have basic SAN-like functions.

http://www.vmware.com/products/datacenter-virtualization/vsphere/vsphere-storage-appliance/features....

Regarding your question: I personally haven't had a problem with our gigabit iSCSI connection so far, and down the track, if we need more throughput, upgrading to 10GbE should be quite easy anyway. Until money is no object, I'll stay on the existing Ethernet network.

logiboy123
Expert

A couple of points:

1) Introducing fibre into your network will add an unnecessary extra layer of work, overhead, and complexity. It is a small shop; there is no reason iSCSI can't do the job you are trying to fill. Make sure you get switches capable of VLAN tagging and jumbo frames if you want all traffic to flow through the same network; otherwise just buy a dumb switch and run all iSCSI traffic through it from SAN to hosts.

2) HP Data Protector is absolute rubbish; avoid it if you can. A Veeam or CommVault solution will serve you better and possibly be more cost-effective; it depends on whether HP is throwing DP in as a freebie or an extra.

3) Generally speaking, I have had the most success with HP servers, especially with regard to management and problem resolution. IBM servers are very good though, and worthy of your time. Dell servers, in my experience, are rubbish, especially their storage solutions (not including Compellent, which was only recently acquired and is still kind of okay, but won't be if it is fully integrated into Dell's model). Don't get me wrong: Dell desktops almost always win, especially on price, and they are reasonable machines, but Dell just doesn't know how to make good servers.

4) LeftHand is awesome. The product is fully featured out of the box, with no hidden costs or required license upgrades. Once you buy a LeftHand you get all the functionality, unlike other storage vendors.

5) Consider buying Ent+ licensing, as it comes with NIOC, SIOC, and Storage DRS. These are incredibly useful and actually more affordable in a small environment.

6) Consider getting smaller and faster disks for your array. I prefer 15k, of course; try very hard to convince your boss that you don't want 7.2k, and push for a minimum of 10k. You said yourself that you are currently only using 3 TB.

Regards,

Paul

menora
Contributor

Thanks, Paul.

1) Introducing fibre into your network will add an unnecessary extra layer of work, overhead, and complexity. It is a small shop; there is no reason iSCSI can't do the job you are trying to fill. Make sure you get switches capable of VLAN tagging and jumbo frames if you want all traffic to flow through the same network; otherwise just buy a dumb switch and run all iSCSI traffic through it from SAN to hosts.

     I totally agree. I know FC has its advantages and benefits, but only if you are confident in configuring and managing it.

2) HP Data Protector is absolute rubbish; avoid it if you can. A Veeam or CommVault solution will serve you better and possibly be more cost-effective; it depends on whether HP is throwing DP in as a freebie or an extra.

     The only reason we kept this software in the quote is that we would like a complete HP solution, so we only have to contact one vendor whenever there is a problem. They actually gave us a good bargain price for the DP software since we quoted a full HP solution; no freebie, of course.

3) Generally speaking, I have had the most success with HP servers, especially with regard to management and problem resolution. IBM servers are very good though, and worthy of your time. Dell servers, in my experience, are rubbish, especially their storage solutions (not including Compellent, which was only recently acquired and is still kind of okay, but won't be if it is fully integrated into Dell's model). Don't get me wrong: Dell desktops almost always win, especially on price, and they are reasonable machines, but Dell just doesn't know how to make good servers.

     Well, if I had a choice, I wouldn't even consider these vendors; I would stay on my Intel hardware. Actually, with the new VSA feature in vSphere 5 I could do this, but of course I would have to run it past management and convince them it is not a "white box".

4) LeftHand is awesome. The product is fully featured out of the box, with no hidden costs or required license upgrades. Once you buy a LeftHand you get all the functionality, unlike other storage vendors.

     I haven't used it myself, but from what it can do, it is miles better than the IBM DS3524.

5) Consider buying Ent+ licensing, as it comes with NIOC, SIOC, and Storage DRS. These are incredibly useful and actually more affordable in a small environment.

     I asked a long time ago; the managers will not consider it.

6) Consider getting smaller and faster disks for your array. I prefer 15k, of course; try very hard to convince your boss that you don't want 7.2k, and push for a minimum of 10k. You said yourself that you are currently only using 3 TB.

     No, 7.2k HDDs will never be my cup of tea in a server environment...

logiboy123
Expert

I neglected to comment on the VSA:

IMO, don't bother. It costs a lot of money and will give you at most 1/4 of the disk capacity you throw at it (that's a whopping 75% overhead!). It will never give you the throughput and reliability of a proper SAN solution.
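To put numbers on that 1/4 figure (this is my understanding of how VSA arrives at it; the disk sizes are illustrative): VSA requires RAID10 across each host's local disks, which halves the raw capacity, and it then mirrors every datastore to a second host over the network, which halves it again. So three hosts with 8 x 1 TB each give 24 TB raw, 12 TB after local RAID10, and roughly 6 TB usable, i.e. 25% of the disks you paid for.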

http://www.theregister.co.uk/2011/07/19/vsa_virtual_filer/

http://vmguy.com/wordpress/index.php/archives/1685

If they threw it in for free with Ent+, it might be worth upgrading to that tier of license; alas, the list price is $5,995.

Regards,

Paul

menora
Contributor

Well, it is certainly for smaller companies, especially ones that can't afford big money but still want the good bits. As usual, you get what you pay for!
