VMware Cloud Community
aspx
Contributor

How to implement the best solution

Hello,

I need some help.

It's my first project with virtualization, and I have many questions about the components.

My budget for the first year is limited, so I want to design a small but scalable solution that can be expanded after the first year.

In the second year, I want to add two more servers to the infrastructure and upgrade the Standard licenses to Enterprise.

I need 7 virtual machines, and for this first year redundancy is not a priority.

I have the following two scenarios:

1) iSCSI Solution

1x Dell AX150i Base, iSCSI

2x Dell PE 1950, Xeon 5130 2.0 GHz / 4 MB cache, 1333 MHz FSB

2x Intel PRO/1000 PT dual-port server adapter, Gigabit NIC, copper, PCIe x4

8 GB RAM

1x VMware Standard

2) Fibre Channel Solution

1x Dell CX300

1x Dell PE 2950, Xeon 5130 2.0 GHz / 4 MB cache, 1333 MHz FSB

1x Intel PRO/1000 PT dual-port server adapter, Gigabit NIC, copper, PCIe x4

16 GB RAM

1x Brocade 200E 4 Gb FC switch, single tier, 8 SW SFPs, RoHS compliant

1x VMware Standard

The two solutions cost the same. I need help choosing and tailoring the one that will grow best.

By the way... is it possible to implement some redundancy with either of these solutions?

Best Regards,

aspx

12 Replies
christianZ
Champion

1. For your FC solution you need FC HBAs for your host (maybe forgotten).

I think the FC solution gives you much more expandability (additional disk shelves, not possible with the AX150), and it would be much faster.

But remember that FC is complex (especially when you don't have any FC experience yet).

Maybe the QLogic FC switches would be less expensive?

For an iSCSI solution I would prefer EqualLogic or LeftHand, where you can use iSCSI HBAs too.

If you want to go with EMC, the CX300i is an alternative too (iSCSI HBAs are supported).

...and redundancy is possible with all of these systems.

Hope that helps.

glynnd1
Expert

Given that redundancy is not a priority, have you considered going with ESX Starter and running the 7 VMs on local storage? If you want to add redundancy, get a second server and replicate the VMDKs to it on a regular basis - should the physical host crash, you can power on the replicated VMs. When you later add a SAN, you have only wasted some local hard disks.
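As a rough illustration of that replication approach (a minimal sketch only - a real setup would push the copies over the network with rsync or a vendor tool, and the directory paths here are hypothetical), a periodic job could copy across any VMDK that is new or has changed since the last run:

```python
import os
import shutil

def replicate_vmdks(src_dir, dst_dir):
    """Copy VMDK files that are new or changed since the last pass.

    A changed file is detected by a differing size or a newer
    modification time, so unchanged multi-GB disks are not recopied
    on every run.
    """
    os.makedirs(dst_dir, exist_ok=True)
    copied = []
    for name in sorted(os.listdir(src_dir)):
        if not name.endswith(".vmdk"):
            continue
        src = os.path.join(src_dir, name)
        dst = os.path.join(dst_dir, name)
        src_stat = os.stat(src)
        if os.path.exists(dst):
            dst_stat = os.stat(dst)
            if (src_stat.st_size == dst_stat.st_size
                    and int(src_stat.st_mtime) <= int(dst_stat.st_mtime)):
                continue  # unchanged since the last replication pass
        shutil.copy2(src, dst)  # copy2 preserves mtime for the next check
        copied.append(name)
    return copied
```

Scheduled nightly against the second server's storage, a pass like this only moves the disks that actually changed.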

Remember, to get full use of Enterprise you need to buy VirtualCenter as well, plus an SQL & OS license.

Price-wise I don't think there is much difference between the 1950 and the 2950, but you may be more limited in PCI slots with the 1950.

When buying, decide now if you are ever going to max out the RAM at 32 GB; if so, go with 4 GB DIMMs now. Yes, this will cost you some extra $ now, but at least the memory won't be sitting on the shelf in 18 months.

It is only an extra $150 per CPU to go to the Xeon E5345; while you will not need the CPU power for just 7 VMs, it will give you lots of headroom for next year.

A few questions:

Do you need FC?

How many VMs will you be running on these 4 servers this time next year?

How many VMs do you think you can run on a 2950 dual quad core with 32 GB RAM?

aspx
Contributor

In the FC solution, FC HBAs were considered; sorry for not pointing that out. The host has two QLogic QLE2460 single-port 4 Gbps Fibre Channel PCI Express HBA cards.

In the FC scenario I have only one host for the first year, which means that if it stops, the whole structure will stop functioning. I think if we want to assure scalability, this is a compromise we have to make for the first year due to our budget limits.

Nevertheless, if you had to design a small solution with the following constraints:

- Fibre Channel or iSCSI

- budget: $40,000

- scalable solution (preferably into the next year)

- redundant solution

how would you design it?

Thanks

Best Regards

aspx


aspx
Contributor

"Given that redundancy is not a priority, have you considered going with ESX Starter and running the 7 VMs on local storage? If you want to add redundancy, get a second server and replicate the VMDKs to it on a regular basis - should the physical host crash, you can power on the replicated VMs."

Well, I haven't considered that, but how could I replicate the VMDKs to the other server? The problem is that I need at least 1000 GB/year.

"Remember, to get full use of Enterprise you need to buy VirtualCenter as well, plus an SQL & OS license."

Yes, I am aware of this.

"Do you need FC?" It's not important.

"How many VMs will you be running on these 4 servers this time next year?"

In the beginning we will need about 7, and by the end of the first year we expect more or less 10 VMs.

"How many VMs do you think you can run on a 2950 dual quad core with 32 GB RAM?"

We think about 14 to 16 VMs (2 GB RAM for each machine).
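That estimate can be sanity-checked with some simple arithmetic (the 2 GB reserve for the service console and hypervisor overhead is my assumption, not a VMware figure):

```python
def max_vms(host_ram_gb, ram_per_vm_gb, overhead_gb=2):
    # RAM left for guests after a hypervisor/service-console reserve
    usable = host_ram_gb - overhead_gb
    return usable // ram_per_vm_gb

print(max_vms(32, 2))  # 15, in line with the 14-16 estimate above
print(max_vms(8, 2))   # 3, what the Starter 8 GB limit would leave for guests
```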

The same question that I asked Christian:

Nevertheless, if you had to design a small solution with the following constraints:

- Fibre Channel or iSCSI

- budget: $40,000

- scalable solution (preferably into the next year)

- redundant solution

- space for files and databases: 1000 GB/year

- 7 VMs for the first year

how would you design it?

Thanks

Best Regards

aspx

christianZ
Champion

If you can get a cheap CX300, get it with e.g. 12 FC disks (146 or 300 GB, depending on what capacity you need for now).

An alternative here: the LSI Engenio 3992 (IBM DS4700).

http://www.eurostor.com/english/ES9500.E.php

http://www-03.ibm.com/systems/storage/disk/ds4000/ds4700/

http://www.lsi.com/storage_home/products_home/external_raid/3994_storage_system/index.html

- don't forget "SANshare Storage Partitioning" here

If that is too expensive, then maybe the DS3400 with SAS disks (Engenio 1932).

An FC switch from QLogic (SANbox 5600):

http://www.qlogic.com/products/fc_san_switchs.asp

Server: Dell PE 2950, 2x quad-core Xeon 2.0 GHz/1333 MHz, + 1 dual-port Gb NIC, 16 or 32 GB RAM (the difference here is ~5000 EUR), + 2x QLogic QLA2460 FC HBAs

1x VMware VI3 Standard (later you can get VirtualCenter, and when you get a second ESX host you can upgrade to Enterprise - if needed).
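As a quick sanity check on a 12-disk shelf like that, usable space in a single RAID 5 group with one hot spare works out as follows (the group and spare counts are assumptions; match them to the actual array layout):

```python
def usable_tb(disks, disk_gb, raid5_groups=1, hot_spares=1):
    """Usable capacity of a RAID 5 shelf in TB: one parity disk per
    RAID group plus any hot spares come off the top."""
    data_disks = disks - raid5_groups - hot_spares
    return data_disks * disk_gb / 1000

# 12x 146 GB vs 12x 300 GB, against the ~1000 GB/year growth mentioned
print(usable_tb(12, 146))  # ~1.46 TB: tight after the first year
print(usable_tb(12, 300))  # ~3.0 TB: roughly three years of headroom
```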

aspx
Contributor

Hello christianZ,

Thanks for your help.

You have given good advice, and I appreciate it.

You have given me a list of SANs, some of which are not in the "Storage / SAN Compatibility Guide for ESX Server 3.x" - for example the ES-9500 FC/FC(SATA) RAID. Is it fully compatible with VMware? Can I use all the functionality of VMware Enterprise with this SAN?

Best regards

aspx

christianZ
Champion

That is the Engenio 3xxx (IBM DS4700) - Engenio sells only through OEM partners.

It was only an example to see what it would really cost (you can check the prices with IBM, but those are list prices).

glynnd1
Expert

Before spending a dime, Ricardo, you need to spend some time figuring out how many resources your VMs will need. The reason I mention this is that you are on a tight budget and need to make sure you spend the money in just the right places. Pay particular attention to disk I/O, as this will dictate what you need from local or SAN storage - because SAN storage is expensive.

Also, if your 7 VMs can live within 8 GB of RAM, you can get by on Starter for the first while.

Also, in a small environment the cost of software licenses & support can gobble up a large proportion of the budget.

For $11k you can get a Dell 2950 with dual quad-core 2.33 GHz, 8 GB RAM (2x 4 GB DIMMs so you can later max out at 32 GB), and 6x 400 GB in RAID 5 with a hot spare, giving you 1.5 TB of local storage. Add in Starter for $1k plus a few $ for support (you have to buy at least 1 year).

The hard limit you have on that config is the 8 GB of memory that Starter imposes, but you can run a lot in 8 GB if you pay attention. Another 8 GB will cost $1,200, or max out at 32 GB for $3,900, but then you need Standard at $3,750 to use it.

The two on-board NICs will be sufficient for the Service Console and the VMs, but later you can add additional NICs to support VMotion & improve your redundancy story.

If you've put some critical applications on the above ESX host, any downtime of that host will be a bigger issue. So something like esxReplicator will let you replicate your VMs to a second 2950.

So for ~$26k you could easily run 7 VMs and have very good redundancy. As you wish to expand, add RAM and upgrade to Standard, and now support, say, 14 VMs.

Yes, this solution is not as sexy as having a SAN and ESX Enterprise, but it can be expanded, either by adding more RAM or later more systems (the replication gets a little more complex).
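For what it's worth, the arithmetic behind that ~$26k figure can be sketched like this (the support/replication-tool line item is an assumed filler to round out the total; the server and Starter prices are the ones quoted above):

```python
# Rough cost model for the two-host, local-storage design above.
SERVER = 11_000           # Dell 2950: dual quad-core, 8 GB RAM, 1.5 TB RAID 5
STARTER_LICENSE = 1_000   # ESX Starter, per host

def two_host_total(support_and_tools=2_000):
    # Two servers, two Starter licenses, plus support + replication tooling
    return 2 * (SERVER + STARTER_LICENSE) + support_and_tools

print(two_host_total())  # 26000
```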

christian has listed some SAN options, and you previously mentioned the AX150 and CX300; both are available in iSCSI or FC versions. If your environment is going to be pretty small, say under 20 VMs with none of them generating high I/O, you might be able to get away with the AX150. It has a max of ~1,500 IOPS (going from memory) and ~320 MB/s. The CX300 is higher than that, but costs ~$30k for 3 TB raw. An EqualLogic PS100 retails for $35k (probably less now; that price is 6 months old).
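A rough way to reason about whether an array like the AX150 has enough I/O headroom (both the per-VM IOPS figure and the 70% safety margin are assumptions to replace with your own measurements):

```python
def vms_within_iops(array_max_iops, iops_per_vm, headroom=0.7):
    """How many VMs fit under an array's IOPS ceiling, keeping a
    safety margin below the vendor's quoted maximum."""
    return int(array_max_iops * headroom // iops_per_vm)

# AX150 at ~1500 IOPS peak, light VMs averaging ~50 IOPS each
print(vms_within_iops(1500, 50))  # 21 -> "under 20 VMs" is about right
```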

Just some random thoughts.

aspx
Contributor

Hello christianZ,

Thanks for your help.

aspx

aspx
Contributor

Hello David,

Thanks for your suggestions.

One more question, and it's the last one ;)

I've just got a quotation for an IBM infrastructure that includes two servers:

IBM x3650, Xeon Dual-Core 5160 3.0 GHz/1333 MHz/2x2 MB L2, 2x 512 MB

9 GB RAM

2x Intel PRO/1000 PT Dual Port Server Adapter

2x Emulex 2x 4 Gb FC HBA PCI-E Controller

and an IBM System Storage DS3400 dual-controller SAN, 7x 146 GB 15K SAS, connected together with FC and no switch, plus another server for VirtualCenter: x3550, Xeon 5130 2.0 GHz/1333 MHz/2x2 MB L2, 2x 512 MB, 1 GB RAM...

It's close to budget. Nevertheless, I couldn't find much information about the SAN,

and I'm having doubts about the connection between the servers and the SAN...

Can I use all the benefits of VMware Enterprise with this SAN?

This solution also includes licenses for VMware Enterprise and VirtualCenter Management Server.

Is it a good solution?

aspx

glynnd1
Expert

A SAN is a SAN in terms of being able to do VMotion etc. The requirement for VMotion is shared storage.

Yes, it is possible to directly connect two servers to an array like that, but I would confirm that this will be supported. Just because IBM says it will work doesn't mean it will; you want to know that VMware will support you.

Also, you'll need to do some rework when you want to expand - by rework I mean adding FC switches and downtime on the existing environment.

If you want to shave some costs, you could drop the physical server for VirtualCenter; it is supported to run it in a VM. There are some reasons not to do this - some people feel that the management server should not be inside the environment it manages, as that can make debugging problems difficult. Personally, I'll be running my next VC server in a VM, at least for the first while.

Given that you are going to be directly connecting the servers to the storage, you may want to take steps to ensure you can max out these servers, i.e. quad-core CPUs. Those servers can take up to 48 GB of RAM, so you'll get many VMs in there.

christianZ
Champion

"IBM x3650, Xeon Dual-Core 5160 3.0 GHz/1333 MHz/2x2 MB L2, 2x 512 MB, 9 GB RAM, 2x Intel PRO/1000 PT Dual Port Server Adapter, 2x Emulex 2x 4 Gb FC HBA PCI-E Controller, and an IBM System Storage DS3400 dual-controller SAN, 7x 146 GB 15K SAS, connected together with FC and no switch, plus another server for VirtualCenter (x3550, Xeon 5130 2.0 GHz, 1 GB RAM)... Can I use all the benefits of VMware Enterprise with this SAN? This solution also includes licenses for VMware Enterprise and VirtualCenter Management Server."

For me it looks good - OK, you don't have any FC switches here, but I think it will work, and you have IBM on board. This solution can be expanded later - check the RAM chips in the servers so that you can expand to more RAM later without swapping anything out (512 MB chips would be bad for that).

And as posted before, you can run VC in a VM - you can spare yourself the additional server, I think.

QLogic has small FC switches in its lineup too - the 1400 series, not expensive.
