VMware Cloud Community
BridgeBDE
Contributor

Need help regarding hardware (servers + storage)

Hi everyone!

We want to put all of our production servers and test systems on ESX. We also need some redundancy, so that if one server fails, the other can host all the systems in the meantime.

We contacted Dell for a solution, and they came up with 2x PE1950 and 1x MD3000. I know we can't do any HA or VMotion with the MD3000, but for our needs it would already be enough if we could start those systems manually on the other server.

As far as I understand, both servers share the same storage and lock the VMDKs while in use. Am I correct?

I've listed the hardware we're going to buy below, and I would really appreciate it if someone could take a look at it and give their opinion on this system:

Cheers

2x PE1950 III Quad-Core Xeon E5420 2.5GHz/2x6MB 1333FSB

PE1950 PCIe riser card (2 slots)

16GB (8x2GB Dual Rank DIMMs) 667MHz FBD

PE1950 III - Additional Quad-Core Xeon E5420 2.5GHz/2x6MB 1333FSB

2x 160GB SATA 7,200 rpm 3.5" hard drive, hot-plug capable

SAS 6i/R Integrated Controller for C2/C3/C6 1 S

DVD-ROM SATA

PE1950 III redundant power supply, no power cord

Intel PRO 1000PT Dual Port Server Adapter, Gigabit NIC, Cu, PCIe x4

TCP/IP Offload Engine 2P

VMware VI 3.5, Foundation, 2 CPU - 1 Yr SnS Only

VMware VI 3.5, Foundation 2 CPU, License Only, 1yr, NFI

PE1950 OpenManage Kit

PE1950 III - C3,MSSR1, ADD IN PERC 5i/6i or SAS6iR, min 2 / max 2

1Yr Basic Warranty - Next Business Day - Minimum Warranty

Base Warranty 1 S

3Yr ProSupport for IT and Next Business Day On-Site Service

1 cluster crossover cable (kit)

1x PowerVault MD3000 external SAS RAID array with 2 single-port controllers

5x 750GB SATA II 7.2k 3.5" hard drive

No SAS 5/E option 1 S

Base Warranty 1 S

1Yr Basic Warranty - Next Business Day - Minimum Warranty

3Yr ProSupport for IT and 4hr On-Site After Diagnosis

9 Replies
glynnd1
Expert

Unless you know that 16GB is going to be sufficient, I would consider changing "16GB (8x2GB Dual Rank DIMMs) 667MHz FBD" to "16GB (4x4GB Dual Rank DIMMs) 667MHz FBD". This will cost you an additional ~$350 USD now, but leaves you the option of upgrading to 32GB in the future.

Bear in mind, one of your servers needs to be able to run all of your VMs should one of them fail or be off-line for maintenance.

I'm not sure that SATA is currently supported for booting ESX. I'll check some docs; in the meantime, maybe someone else can chime in.

As far as I understand, both servers share the same storage and lock the VMDKs while in use. Am I correct?

Pretty much. All your ESX hosts will have visibility to the same storage. The host that is running a VM will write a lock file into that VM's directory while the VM is running.

Occasionally an ESX host will issue a SCSI reservation against the entire LUN when doing certain operations, which prevents other ESX hosts from doing the same, but these events are short-lived.
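The lock-file behaviour is conceptually the same as advisory file locking on an ordinary filesystem. Purely as an illustration (the file name and the use of `fcntl.flock` are mine; this is not how VMFS is actually implemented), here is the idea in Python:

```python
import fcntl

# Hypothetical illustration of VM locking: the host running the VM holds
# an exclusive lock, and any other host trying to power it on is refused.
path = "/tmp/vm_demo.lck"
open(path, "w").close()

host_a = open(path)                      # "host A" powers on the VM
fcntl.flock(host_a, fcntl.LOCK_EX | fcntl.LOCK_NB)

host_b = open(path)                      # "host B" tries the same VM
try:
    fcntl.flock(host_b, fcntl.LOCK_EX | fcntl.LOCK_NB)
    print("host B got the lock")         # would mean no protection at all
except BlockingIOError:
    print("host B refused: VM already locked by host A")

fcntl.flock(host_a, fcntl.LOCK_UN)       # host A powers the VM off
fcntl.flock(host_b, fcntl.LOCK_EX | fcntl.LOCK_NB)  # now host B succeeds
```

This is why manually registering and starting the VMs on the surviving host works: once the failed host no longer holds the locks, the other host can take them.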

glynnd1
Expert

What do you know, you can, I stand corrected.

ESX Server 3 and VirtualCenter Installation Guide : Installing VMware ESX Server Software : Preparing to Install : Installation on SATA Drives

Installation on SATA Drives

When installing ESX Server on SATA drives, consider the following:

Ensure that your SATA drives are connected through supported SAS controllers:

• mptscsi_pcie — LSI1068E (LSISAS3442E)

• mptscsi_pcix — LSI1068 (SAS 5)

• aacraid_esx30 — IBM serveraid 8k SAS controller

• cciss — Smart Array P400/256 controller

• megaraid_sas — Dell PERC 5.0.1 controller

Do not use SATA disks to create VMFS datastores shared across multiple ESX Server hosts.

See ESX Server 3 Requirements for complete hardware requirements. See ESX Server Partitioning for a description of partitioning requirements.

BridgeBDE
Contributor

Pretty much. All your ESX hosts will have visibility to the same storage. The host that is running a VM will write a lock file into that VM's directory while the VM is running.

Occasionally an ESX host will issue a SCSI reservation against the entire LUN when doing certain operations, which prevents other ESX hosts from doing the same, but these events are short-lived.

But it won't affect any running systems on the other host, will it?

16GB should be fine. If that scenario should occur, we'll limit our test systems for the duration.

To get back to my main issue (just to be really sure, as I'm solely in charge of the whole project :D):

Both servers connected to the direct-attached storage see the same partition, can access the same files, and lock the ones in use. If one server should die for whatever reason, I can easily start those systems from my other running server, right?

PS: I don't really need vCenter for only two servers, do I?

azn2kew
Champion

From an architecture standpoint you should have at least 2 ESX hosts with shared storage (iSCSI, NFS, FC), and all of that storage should have redundant paths (multipathing to the HBAs, or an IP alias for NFS), which you have in place with the 2 single-port controllers on your MD3000 box. Keep an N+1 design in mind: if any host fails, the remaining host(s) should have sufficient resources to handle the restarted guests. You can run roughly 8-12 VMs on each host with 16GB total. It varies depending on your memory allocation, but the average is about 4 VMs per core.
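As a rough sanity check of the N+1 point, here is the arithmetic for the two-host setup in this thread (the per-VM memory figure and service-console reservation are my assumptions, not from the post):

```python
# Rough N+1 sizing sketch for the two-host setup discussed above.
hosts = 2
cores_per_host = 8          # 2x quad-core E5420 per PE1950
ram_per_host_gb = 16
vms_per_core = 4            # rule-of-thumb average cited above

# N+1: one host must be able to carry everything by itself
surviving_hosts = hosts - 1
max_vms_by_cpu = surviving_hosts * cores_per_host * vms_per_core
print("CPU ceiling on one host:", max_vms_by_cpu, "VMs")

# Memory is usually the real limit: assume ~1.5 GB per VM and reserve
# ~2 GB for the service console (both figures are assumptions).
per_vm_gb = 1.5
max_vms_by_ram = int((surviving_hosts * ram_per_host_gb - 2) / per_vm_gb)
print("RAM ceiling on one host:", max_vms_by_ram, "VMs")
```

In other words, with 16GB per host, memory rather than CPU will cap the number of VMs one surviving host can carry, which is why the 4x4GB DIMM option with headroom to 32GB is worth considering.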

Your host should at least have 4-6 NICs with combination:

1. NIC1 -> SC/VMotion

2. NIC2 -> VMotion/SC

3. NIC3-4 -> VM Network

4. NIC5-6 -> DMZ, iSCSI, backup (whatever you want)
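On the ESX 3.5 service console, a layout along those lines could be sketched with `esxcfg-vswitch` roughly as follows (a sketch only: the vmnic numbering and portgroup names are assumptions, and NIC teaming/failover order would still be set in the VI Client):

```shell
# Sketch only -- vmnic numbering and portgroup names are assumptions.
# vSwitch0: Service Console + VMotion on two uplinks (each the other's standby)
esxcfg-vswitch -a vSwitch0
esxcfg-vswitch -L vmnic0 vSwitch0
esxcfg-vswitch -L vmnic1 vSwitch0
esxcfg-vswitch -A "Service Console" vSwitch0
esxcfg-vswitch -A "VMotion" vSwitch0

# vSwitch1: VM traffic on two uplinks
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -L vmnic3 vSwitch1
esxcfg-vswitch -A "VM Network" vSwitch1
```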

Steps to implement:

1. Assemble your storage systems.

2. Rack your Dell servers, update firmware/BIOS, and stress test them (48 hours).

3. Create the VirtualCenter database & VUM (and install the vCenter Server components).

4. Plan your networking configuration, portgroups, and all general settings.

5. Install the ESX 3.5 U4 hosts, connect with the VI Client, and apply the latest patches.

6. Grant permissions to the users/groups who manage the hosts and guests.

7. Configure your storage (iSCSI, NFS, FC), whatever you have in place.

8. Standardize your VM provisioning and create gold templates.

Consult with your storage vendor for best practices under VMware ESX 3.5. SCSI reservation conflicts happen when too many VMDKs (guests) are hosted on the same LUN and compete for I/O.

If you found this information useful, please consider awarding points for "Correct" or "Helpful". Thanks!!!

Regards,

Stefan Nguyen

iGeek Systems Inc.

VMware, Citrix, Microsoft Consultant

glynnd1
Expert

They are very short-lived and only occur at certain times; for your environment you can ignore this. In an environment with, say, 450 VMs spread over 10 hosts, rebooting all the VMs at once would cause a lot of reservations and some issues, but that is nothing you need to worry about.

Oops! I missed that you said MD3000 and not MD3000i. There is a big difference here. I was thinking the non-i model isn't what you want, since I believed only one server can see the storage, but I'm not very familiar with the product. It is possible that it can be configured to behave the way you want. And it is on the VMware hardware list as Switched SAS... something I need to read up on.

You can get three ESX Foundation licenses plus VirtualCenter Foundation for something like $3,500 USD, pretty cheap for what it can do; look for the VMware Infrastructure Foundation Acceleration Kit. Given that one license of ESX Foundation costs $1,500, you are only spending an extra $500. Is it essential? No, but it is useful.
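For what it's worth, the licensing math works out like this (list prices as quoted above):

```python
# Quick check of the licensing math above (list prices as quoted, USD).
foundation_license = 1500
acceleration_kit = 3500       # 3x ESX Foundation + VirtualCenter Foundation

two_standalone = 2 * foundation_license
extra_for_kit = acceleration_kit - two_standalone
print("Extra cost of the kit over two standalone licenses:", extra_for_kit)
```

So for $500 more than two standalone ESX licenses, the kit adds a third ESX license plus VirtualCenter Foundation.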

BridgeBDE
Contributor

I don't know if iSCSI is the right solution for us. As far as I know it's slower than attaching the storage directly, and we would also need additional switches for that purpose.

glynnd1
Expert

Well, you learn something new every day.

You can share a LUN between two servers from a Switched SAS array, i.e. the MD3000 you are looking at. Of course, you are limited to two servers.

You'd be surprised how fast iSCSI is. Just because it is Ethernet doesn't mean it is slow. At my last job we had a NetApp FAS3020 with FC and SATA disks connected to eight dual quad-core, 32GB RAM ESX hosts running ~160 VMs, and they recently bought four more dual quad-core servers with 64GB to support ~120 VMs.

Granted, it all depends on what your VMs are doing. In your case you may be doing much higher disk I/O and feel the need for faster disks, but a number of the VMs in the above environment were processing real-time weather and flight data.

Having said that, if you are not going to grow beyond two ESX hosts, going switched SAS does keep things simple. I'll have to see about borrowing one of those...

Good luck.

BridgeBDE
Contributor

I'm really happy that what I'm planning is actually possible. Thank you very much for your help!

Hope I'll get everything up and running :)

PS: I thought so because iSCSI runs at 1Gb, while the direct controller connection runs at 3Gb. Also, the switch might slow down the overall traffic...

But it really seems to be much faster in practice than I expected (from my view at least).
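For the raw numbers behind that comparison (signalling rates only, not real-world throughput; the 8b/10b overhead figure for SAS is my assumption here):

```python
# Raw link-speed comparison behind the 1Gb vs 3Gb point above.
# These are signalling rates, not what you will see from real workloads.
iscsi_gbit = 1.0              # gigabit Ethernet
sas_gbit = 3.0                # 3Gb/s SAS link to the MD3000

iscsi_mb_s = iscsi_gbit * 1000 / 8    # ~125 MB/s of raw payload rate
sas_mb_s = sas_gbit * 1000 / 10       # 8b/10b encoding: 10 bits per byte
print("iSCSI raw:", iscsi_mb_s, "MB/s  SAS raw:", sas_mb_s, "MB/s")
```

The gap on paper is real, but as the post above suggests, typical VM workloads are bound by disk seeks and IOPS long before they saturate even a single gigabit link.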

glynnd1
Expert

Glad I could help.

Have fun with ESX, it really is a cool product.
