queensbridgeitt
Contributor

Planning a SAN for VMware in a school.

Hi

At our school I am currently looking at replacing some of our physical servers and migrating them to VMware. We currently run VMware V3 on 2 separate hypervisor hosts, but only for web and proxy services at the moment.

I want to migrate our Windows Active Directory, DNS, DHCP, print and Exchange services over to a virtualised platform.

We currently use an Overland Snap Server 520 as an iSCSI device, but with its SATA disks it is running at capacity and we suffer performance problems.

I am looking to replace it. Has anyone used the EqualLogic iSCSI PS6000 range or the HP MSA2324fc range? I am currently investigating these two options for the school.

The HP is a Fibre Channel SAN aimed at the entry level, but the quote I have had back for the EqualLogic is far greater than for the MSA, even though the EqualLogic is iSCSI.

I have always heard that, given the option, you should go for Fibre Channel over iSCSI, because 4Gb server-to-SAN bandwidth will give far greater performance than 1Gb iSCSI server-to-SAN traffic. I have also heard that Fibre Channel tends to be more expensive.
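As a rough sanity check on that claim, the raw line rates alone work out as follows (a back-of-the-envelope sketch only; it ignores protocol overhead and link encoding, which lower real-world numbers on both sides):

```python
# Back-of-the-envelope link throughput comparison.
# Raw line rate only: ignores TCP/IP overhead on iSCSI and
# 8b/10b encoding on Fibre Channel, so real figures are lower.

def link_throughput_mbps(gigabits_per_sec):
    """Convert a raw line rate in Gb/s to MB/s (8 bits per byte)."""
    return gigabits_per_sec * 1000 / 8

fc_4gb = link_throughput_mbps(4)     # 4Gb Fibre Channel
iscsi_1gb = link_throughput_mbps(1)  # 1Gb iSCSI over GigE

print(f"4Gb FC:    {fc_4gb:.0f} MB/s raw")
print(f"1Gb iSCSI: {iscsi_1gb:.0f} MB/s raw")
print(f"Ratio:     {fc_4gb / iscsi_1gb:.0f}x")
```

On paper the FC link has four times the bandwidth, but as the replies below note, whether that translates into real performance depends heavily on the disks behind it.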

Is there something I'm missing with the MSA range? Why is it so much cheaper for a Fibre Channel SAN?

Would anyone recommend either of these two in a VMware setup? I appreciate the EqualLogic is fairly new, but links to any useful videocasts, whitepapers or news on integration with VMware would be really helpful.

Thanks

7 Replies
AndreTheGiant
Immortal

I know the Dell products well (less know-how on other storage).

EqualLogic is a good product.

But the MD3000i could also work fine.

How many servers do you have to consolidate?

Andre

**if you found this or any other answer useful please consider allocating points for helpful or correct answers

Andre | http://about.me/amauro | http://vinfrastructure.it/ | @Andrea_Mauro
queensbridgeitt
Contributor

2 hosts. The plan will be:

Host 1

VM1 - Web server Ubuntu Linux

VM2 - Server 2003 with primary DNS, DHCP, DC, print. Will be migrated over to 2008 over time.

VM3 - Server 2008 with Exchange 2007

VM4 - File Server in Server 2008

Host 2

VM1 - Squid proxy Ubuntu Linux

VM2 - Server 2003 Secondary DNS and DC

VM3 - Server 2003 with Sophos and WSUS

VM4 - Server 2003 with SQL Server 2005.

Both hosts will have 24GB of RAM and dual quad-core Intel Xeon processors, and the datastores will be located on the SAN.

This is the initial plan, but I want headroom in case a host fails: I want to be able to transfer services from one host to another using VMotion.

So in total 8 servers initially to consolidate.

AndreTheGiant
Immortal

For your environment you could use an MD3000i with two different RAID 5 (or RAID 10) groups, two controllers, and two dedicated switches (on the SAN side).
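The trade-off between those two RAID layouts can be sketched numerically. This is an illustrative sketch only; the disk count and disk size below are assumptions, not the MD3000i's actual configuration:

```python
# Rough usable-capacity and write-penalty comparison for the two RAID
# layouts mentioned above. Disk count and size are illustrative only.

def raid_usable_gb(level, disks, disk_gb):
    """Usable capacity of a single RAID group (hot spares not counted)."""
    if level == "raid5":
        return (disks - 1) * disk_gb   # one disk's worth of parity
    if level == "raid10":
        return disks // 2 * disk_gb    # mirrored pairs
    raise ValueError(level)

# Back-end I/Os generated per front-end random write.
WRITE_PENALTY = {"raid5": 4, "raid10": 2}

for level in ("raid5", "raid10"):
    gb = raid_usable_gb(level, disks=6, disk_gb=450)
    print(f"{level}: {gb} GB usable, write penalty {WRITE_PENALTY[level]}x")
```

RAID 5 gives more usable space from the same disks; RAID 10 halves capacity but does half the back-end work per random write, which matters for VM workloads.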

It will probably cost about one third of the EqualLogic solution.

Andre

Texiwill
Leadership

Hello,

The HP MSA provides quite a bit of functionality. I used to use an MSA1000; it was a good solid box. But I have since upgraded to an IBM DS3400. I chose not to go with the MSA2000, mostly for cost/power reasons. It is still a solid box. At the moment I prefer FC over iSCSI, mainly due to the immediate performance gain of a 4Gb link over a 1Gb link. That is, unless you can afford 10Gb iSCSI links....

Price will be important, and so will manageability. I went with what was easier to manage. The MD3000i has the same management screens as the IBM DS3400; both are quite simple. If you want to do something truly inexpensive, look into running OpenFiler on a solid physical box with multiple GigE links.


Best regards, Edward L. Haletky, VMware Communities User Moderator, VMware vExpert 2009, DABCC Analyst
Now available on Rough Cuts: 'VMware vSphere(TM) and Virtual Infrastructure Security: Securing ESX and the Virtual Environment'
Also available: 'VMWare ESX Server in the Enterprise'
SearchVMware Pro (http://www.astroarch.com/wiki/index.php/Blog_Roll) | Blue Gears | Top Virtualization Security Links | Virtualization Security Round Table Podcast

--
Edward L. Haletky
vExpert XIV: 2009-2022,
VMTN Community Moderator
vSphere Upgrade Saga: https://www.astroarch.com/blogs
GitHub Repo: https://github.com/Texiwill
twoodland
Enthusiast

iSCSI vs Fibre Channel is not just about storage bandwidth; it is also about the speed of your disks, so I would not get too caught up on choosing one over the other. The focus should be the speed of the disks in the solution. I'd much rather have iSCSI with 15k rpm SAS drives than Fibre Channel with 7200 rpm SATA drives. Spindle speed is very important, as is the size of the cache on the storage processors.
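The spindle-speed point can be made concrete with the usual service-time approximation (average seek plus half a rotation). The seek times below are typical vendor figures, not measurements from any specific drive:

```python
# Approximate random-IOPS capability of a single spindle:
# service time = average seek + rotational latency (half a revolution).
# Seek times are typical published figures, assumed for illustration.

def disk_iops(rpm, avg_seek_ms):
    rotational_latency_ms = 0.5 * 60000 / rpm  # half a revolution in ms
    service_time_ms = avg_seek_ms + rotational_latency_ms
    return 1000 / service_time_ms              # random I/Os per second

sas_15k = disk_iops(rpm=15000, avg_seek_ms=3.5)
sata_72k = disk_iops(rpm=7200, avg_seek_ms=8.5)

print(f"15k SAS:   ~{sas_15k:.0f} IOPS per spindle")
print(f"7.2k SATA: ~{sata_72k:.0f} IOPS per spindle")
```

A 15k SAS spindle delivers well over twice the random IOPS of a 7200 rpm SATA spindle, which is why the drives can matter more than the transport.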

If you do decide to go with iSCSI, make sure you get hardware adapters rather than software initiators, and make sure you have redundant paths to your storage. With a single switch, a physical switch failure takes your environment down; with multiple paths, if one switch fails your hosts fail over to the other adapter.

I'm not a huge fan of the MSA products from HP; more often than not, I've heard customers complain about poor performance. If you do go MSA, make sure it's set up according to HP's specifications, so that if you do have issues you can get support from HP.

Storage is fairly critical, so weigh all of the options, not just storage bandwidth. More often than not, when I've heard people complain about guest performance, it has been because of storage bottlenecks, typically the speed of their disks.

Also, as a side suggestion, I recommend you move all other services off your domain controller. A domain controller should not host other services (other than DNS, if you are using Microsoft DNS with an Active Directory-integrated zone). You are asking for problems, especially if that domain controller holds FSMO roles.

Texiwill
Leadership

Hello,

Speed of link + speed of disk + number of spindles per LUN will all be very important. You really want a way to carve LUNs across as many disks as possible, to get a large number of spindles involved. Try to do apples-to-apples comparisons when looking at solutions. All things being equal (speed of disk + number of spindles), the next things to compare are link speed, available queue depth and manageability.

If you saddle your FC SAN or iSCSI array with slow disks, expect bad performance.
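The link-plus-disk-plus-spindles point above can be sketched as a simple capacity model. All the inputs here (spindle count, per-disk IOPS, write mix) are illustrative assumptions, not figures for any particular array:

```python
# Random IOPS is bounded by the spindles behind the LUN; sequential MB/s
# is bounded by the link. All numbers below are illustrative assumptions.

def lun_random_iops(spindles, iops_per_disk, write_penalty, write_fraction):
    """Front-end random IOPS a LUN can sustain across its spindles.

    Each front-end read costs 1 back-end I/O; each front-end write
    costs `write_penalty` back-end I/Os (2 for RAID 10, 4 for RAID 5).
    """
    backend_per_frontend = (1 - write_fraction) + write_fraction * write_penalty
    return spindles * iops_per_disk / backend_per_frontend

# Example: 12 x 15k spindles in RAID 10, 30% random writes.
iops = lun_random_iops(spindles=12, iops_per_disk=180,
                       write_penalty=2, write_fraction=0.3)
print(f"~{iops:.0f} front-end random IOPS from 12 spindles")

# The sequential ceiling, by contrast, is set by the link:
print(f"1Gb iSCSI sequential ceiling: ~{1 * 1000 / 8:.0f} MB/s raw")
```

Doubling the spindles behind a LUN roughly doubles its random IOPS, while the link speed only caps large sequential transfers, which is why slow disks hurt regardless of FC vs iSCSI.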



Raj4mDNF
Contributor

Hi,

I think you could also consider StoneFly/DNF products for your solution.

We are running more than 15 VMware servers on StoneFly and are very satisfied.

Please visit www.dnfstorage.com and www.stonefly.com for further information.

Regards,

Raj
