VMware Cloud Community
dmca
Contributor

Hardware Planning

I am new to VMware and VMware planning, and I am trying to put together a plan for a new branch we are opening for a maximum of 35 people. I want all our server apps to run on VMs at this new site, and I am looking for some advice on the hardware I am planning to use. In total I can foresee 6 VMs so far, and was planning the following hardware:

Servers:

2 x Dell PowerEdge 2950 (ESX Servers)

1 x Dell PowerEdge 1950 (VirtualCenter Server)

SAN:

1 x Dell MD3000i (4 x 400GB SAS Disks)

or

1 x Dell AX150 (with Fibre Channels)

Switch:

2 x Dell PowerConnect 5324 Switch

Up until the other day I was leaning towards the AX150; however, I was told today that if I wanted to leverage VMotion & HA (which I do), then I would also need to invest in a fibre switch, as direct fibre ('loop') attachment of the two PowerEdge servers to an AX150 is not supported for HA. Is this true?

I realise that the MD3000i is not currently supported by VMware, but my understanding is that it will be in December of this year. Does anyone know if the MD3000i supports HA?

Also, is the switch I have mentioned overkill for this solution? Would a simpler Dell PowerConnect 2724 switch do the job just as well?

20 Replies
ejward
Expert

How much memory in the 2950's? The more, the better; CPU is really not that important. I've got a density of about 14 VMs per 2950 with 16GB of RAM. For a SAN, we use fiber now but are moving to iSCSI. In testing, the VMware iSCSI software initiator is giving us the same performance as our 2Gb fiber, and we don't have to buy $1000 fiber cards. Our new iSCSI device is an EqualLogic and, as of yesterday, Dell just bought them. I'm not sure what that says about all the EMC stuff Dell sells with the Dell name on it. I believe the AX150 supports iSCSI, so I would imagine you could use it for HA and VMotion without having to buy a fiber switch. If you are going to use HA, why not run VirtualCenter as a VM? With just a couple of ESX servers and a few VMs, you won't have performance issues.
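As a rough sanity check, the density numbers above can be sketched with simple arithmetic (a hedged example; the Service Console reserve and the per-VM average below are assumptions for illustration, not measured figures):

```shell
# Hypothetical VM-density estimate for a 16GB PowerEdge 2950.
HOST_RAM_GB=16
OVERHEAD_GB=2        # assumed reserve for ESX / Service Console
AVG_VM_RAM_GB=1      # assumed average allocation per VM
VMS_PER_HOST=$(( (HOST_RAM_GB - OVERHEAD_GB) / AVG_VM_RAM_GB ))
echo "${VMS_PER_HOST} VMs per host"    # → 14 with these numbers
```

With an average nearer 2GB per VM the same arithmetic gives 7 VMs per host, so the per-VM allocation dominates the sizing.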

Dave_Mishchenko
Immortal

You'll probably want to wait for the MD3000i to be supported by VMware before you get one. You can get HA/DRS to work, but as you can see in this thread, you have to jump through some hoops - http://communities.vmware.com/message/772741.

As mentioned, VirtualCenter will run fine in a VM - http://www.vmware.com/pdf/vi3_vc_in_vm.pdf

Have you considered just sticking with local storage and not using shared storage? You would lose HA, but you would also eliminate the MD3000i (or whatever else) as a single point of failure. You could go with local storage and then replicate the VMs from one host to another; if you lost one host, you could simply restart the VMs on the other host.
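One hedged sketch of that copy-between-hosts approach, run from the ESX Service Console (the host names, datastore paths and VM name here are made-up assumptions; `vmkfstools -i` clones a virtual disk, and the copy should be taken while the VM is powered off):

```shell
# Hypothetical local-to-local VM replication between two ESX hosts.
# All names and paths below are illustrative assumptions.
VM=fileserver01
SRC=/vmfs/volumes/esx1-local/${VM}
DST=/vmfs/volumes/esx2-local/${VM}

# Export a copy of the virtual disk on the source host...
vmkfstools -i ${SRC}/${VM}.vmdk ${SRC}/${VM}-copy.vmdk

# ...then push the disk and config files over to the second host.
scp ${SRC}/${VM}-copy.vmdk ${SRC}/${VM}-copy-flat.vmdk \
    ${SRC}/${VM}.vmx root@esx2:${DST}/
```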

ejward
Expert

Have you considered just sticking with local storage and not using shared storage? You would lose HA, but you would also eliminate the MD3000i (or whatever else) as a single point of failure. You could go with local storage and then replicate the VMs from one host to another; if you lost one host, you could simply restart the VMs on the other host.

You could also save some money by not having to buy Enterprise Edition. You can squeeze a whole lot of storage into a 2950 with SAS drives (6 x 300GB). Or, if you wait for the next version of ESX (due out in a month or so, I think), you can use serial ATA drives and squeeze 6 x 750GB into a 2950.

Texiwill
Leadership

Hello,

Hardware truly depends on what you are trying to do, and on how many networks you have to deal with:

  • Generally 6 NIC ports are necessary for full redundancy and security in a basic install (2 for SC, 2 for vMotion, 2 for VMs).

  • Throw in iSCSI or NFS and another 2 NIC ports should be used for full redundancy and security.

  • Local vs. remote storage depends on whether you want to use vMotion or not, and on how much failover you need.

  • How many VMs will be used by these 35 people? This will tell you how much memory and CPU you need, as well as network. Do you need more network capacity to handle the bandwidth?
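On ESX 3.x, that 6-port layout could be sketched from the Service Console roughly as follows (a hedged example; the vmnic numbering and port group names are assumptions, and the same layout can equally be built in the VI Client):

```shell
# Hypothetical 6-pNIC layout: one vSwitch per traffic type, two uplinks each.
esxcfg-vswitch -a vSwitch0                    # Service Console
esxcfg-vswitch -L vmnic0 vSwitch0
esxcfg-vswitch -L vmnic1 vSwitch0
esxcfg-vswitch -A "Service Console" vSwitch0

esxcfg-vswitch -a vSwitch1                    # vMotion
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -L vmnic3 vSwitch1
esxcfg-vswitch -A "VMotion" vSwitch1

esxcfg-vswitch -a vSwitch2                    # VM traffic
esxcfg-vswitch -L vmnic4 vSwitch2
esxcfg-vswitch -L vmnic5 vSwitch2
esxcfg-vswitch -A "VM Network" vSwitch2
```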

Have you run some tests on the existing setup somewhere? Do you have existing utilization numbers to help you decide how much of everything you need? You may find you need a bigger box in which to place more network cards or a larger amount of disk, etc., or a different design to handle the hardware you have chosen.

Best regards,

Edward L. Haletky, author of the forthcoming 'VMware ESX Server in the Enterprise: Planning and Securing Virtualization Servers', publishing January 2008, (c) 2008 Pearson Education. Available on Rough Cuts at http://safari.informit.com/9780132302074

--
Edward L. Haletky
vExpert XIV: 2009-2023,
VMTN Community Moderator
vSphere Upgrade Saga: https://www.astroarch.com/blogs
GitHub Repo: https://github.com/Texiwill
dmca
Contributor

Hi, thank you for all the replies.

Just to clarify a little of what I am trying to do: I have three countries - Country A, Country B and Country C.

Country A - Primary Site:

1 x PowerEdge 1950 (VirtualCenter)

2 x PowerEdge 2950 (ESX, 3 x 73GB SATA, RAID 5, 4 x 1Gb NICs, 16GB RAM)

1 x MD3000i (4 x 400GB SAS, RAID 5)

Country A - Backup Site:

1 x PowerEdge 1950 (VirtualCenter)

2 x PowerEdge 2950 (ESX, 3 x 400GB SAS, RAID 5, 4 x 1Gb NICs, 16GB RAM)

Country B

2 x PowerEdge 2950 (ESX, 3 x 73GB SATA, RAID 5, 4 x 1Gb NICs, 16GB RAM)

1 x MD3000i (4 x 400GB SAS, RAID 5)

Country C

2 x PowerEdge 2950 (ESX, 3 x 73GB SATA, RAID 5, 4 x 1Gb NICs)

1 x MD3000i (4 x 400GB SAS, RAID 5)

I plan to manage all countries' VMs through the one PE1950 in Country A, across VPN connections.

For my backup I was planning on attaching a tape drive directly to the MD3000i at each site; each country will act as a backup for the other. Should I lose Country B, VMs will get couriered to Country A and restored onto the Country A MD3000i, and likewise for Country C.

If I were to lose the Country A office, then I would go to the backup site and restore VMs to run on a PE2950 with no iSCSI. It would be bare-essential servers only.

If I were to lose just the MD3000i in an office, then I am not 100% sure what I would do. While I would love to have a DR site in each country with a backup MD3000i, it just isn't in the budget. Perhaps a 2950 fully loaded with disks might be an option?

ejward
Expert

The current version of ESX doesn't support installing on SATA drives. If you try to configure a VMware 2950 on Dell's site with SATA drives, it will tell you.

Texiwill
Leadership

Hello,

I would bump up your NICs to 6 x 1Gb ports (for full redundancy/performance/security) and your local RAID 5 to 4 disks (3 + 1 spare). I would also consider using the same hardware, SAS or SATA, at each site so that parts are easy to stockpile and ship as necessary, and I would have enough local space just in case the SAN fails and the network link to the backup site fails. Plan in as much redundancy as you can afford.

Best regards,

Edward L. Haletky
dmca
Contributor

Thanks all for the replies,

I have been looking on the Dell site and have spec'd a PE2950 with the following:

-


2 x Quad Core Intel® Xeon® X5355, 2x4MB Cache, 2.66GHz, 1333MHz FSB

16GB,667MHz FBD (8X2GB),2R

1x6 Backplane for 3.5-inch Hard Drives

4 x 146GB, SAS, 3.5-inch, 15,000 rpm Hard Drive (hot plug)

Riser with PCI-X Support (2x PCI-X 64/133 slots, 1x PCIe x8 slot)

C4 Integrated SAS/SATA, RAID 5 using add-in PERC 5/i controller, min 3 / max 6 or 8 Hard Drives

PERC 5/i, x6 Backplane, Integrated RAID Controller Card

Dual embedded Broadcom® NetXtreme II 5708 Gigabit Ethernet NIC

2x Broadcom® NetXtreme II 5708 1-Port Gb Ethernet NIC w/TOE, Cu, PCIe

8X DVD Drive, Internal, Half Height

-


I took on board what you said about ensuring the local PEs could handle the VMs if the iSCSI was lost, so I plan to have them be 146GB SAS drives with one as redundancy.

I am also going to request an additional two 1Gb NICs when ordering so that I have a total of 6.

Is it true that ESX won't run on SATA and that I need to buy SAS drives?

These new branches are completely different from what we currently have anywhere else. The 6 VMs at each site will consist of:

1 x Exchange 2007

1 x Domain Controller & DNS

1 x Fax Server

1 x SCCM 2007 & Symantec Server

1 x Blackberry Server

1 x Application Server

I am pretty confident the utilization will be low; the application server will most likely have the greatest utilization stats. I was planning to set the ESX boxes up as follows:

ESX1 - Exchange, Blackberry, Fax, Domain Controller

ESX2 - Application Server, SCCM 2007 & Symantec

Am I going down the right track here?

If I lost the MD3000i, is it as simple as restoring the backed-up VMs directly to a PE2950 and starting them with the VC, or am I oversimplifying?

Thanks for all the advice so far!

ejward
Expert

It certainly sounds like a good plan. Currently, ESX will not recognize SATA drives; Dell won't sell you a server with SATA drives if you buy VMware through them, so if you're getting VMware somewhere else, don't get SATA drives in the servers. The next version of ESX (v3.5) does work with SATA - I've currently got 4 Dell servers with SATA drives running the beta. I have no idea when it is being released, but I think the beta is over and they now have a release candidate, so a real release can't be too far away. This isn't Microsoft we're talking about. :smileylaugh:

pyosifov
Enthusiast

Actually, you don't need to put all that money into a SAN if you don't really need it; NAS or iSCSI would do the trick for you. A SAN is an expensive solution for such a small organization - the best solution, of course, but the most expensive as well. For VMotion and HA to work you only need shared storage, whether that is SAN, NAS or iSCSI. You should also plan some extra resources on your physical machines in case of a hardware problem.

ejward
Expert

Our VMware rep says that there are very few new deployments of VMware that use fiber-attached storage; since v3, it's all iSCSI now. At VMworld this year the same was true: there was very little talk of fiber-attached storage. It was all iSCSI.

Texiwill
Leadership

Hello,

I think you are on the proper track. As for restoration directly onto the PE2950, yes, that is one possibility. As part of my backup strategy, I like to place a copy of the VM directly on a VMFS on the local storage. This way you do not need to restore if the iSCSI server has issues - just boot the local copy. The backup strategy could be:

  • Use vcbMounter locally to copy the VM from the remote storage to the local VMFS.

  • Perform your normal backup mechanism.
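A hedged sketch of that two-step strategy using vcbMounter from the Service Console (the VM name, credentials and datastore paths are illustrative assumptions; check the Consolidated Backup documentation for the exact options in your release):

```shell
# Step 1 (hypothetical): copy a full VM image from shared storage
# to a folder on the local VMFS volume.
vcbMounter -h virtualcenter -u backupuser -p 'secret' \
    -a name:exchange01 \
    -r /vmfs/volumes/esx1-local/backups/exchange01 \
    -t fullvm

# Step 2: run the normal backup job against that local copy,
# e.g. sweep /vmfs/volumes/esx1-local/backups to tape.
```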

This has the advantage of not needing to restore from tape in case of failure. Going with 6 pNICs is a good choice. In this situation I would place your storage (iSCSI) network on the vSwitch for your Service Console: while not the highest level of performance, the SC must participate in the iSCSI network so that it has access to it no matter what, so this is generally acceptable from a security perspective. It could, however, affect performance of the VMs during backups, as the SC is also your backup channel. With only 6 pNICs you have to make some choices somewhere. I would still shoot for 8 ports, perhaps using 4-port Intel MT or Intel VT cards. Offloading your iSCSI I/O is a good thing to do if you can.

I think many new sales of ESX use iSCSI, but there are still a LOT of SAN-based solutions out there, and more coming on board; currently it is still the fastest I/O solution if you look at just the raw numbers. That will change when 10Gb is available, but I think SAN will run 10Gb as well as iSCSI, so it may just be a dead heat. But that is some way off in the future...

Best regards,

Edward L. Haletky
dmca
Contributor

Thank you Edward for your help on this.

I will look into the 8-port idea (4-port NICs); I'll speak with our Dell rep and see what he can offer.

I'm not sure I quite understand what you mean when you say "Offloading your iSCSI IO is a good thing to do if you can."

I think I will also go with your idea of placing a backup of the VMs onto the VMFS of the PEs, as it gives better DR capabilities. I will just overwrite it every second night to keep space requirements down.

I think that by using NICs instead of FC I can do it more cheaply, and when 10Gb NICs come out it will hopefully just be a case of purchasing the new 10Gb cards and swapping them in, as well as buying a new 10Gb switch. This will still be a lot cheaper than buying a full-out SAN, I would imagine.

Look forward to your book coming out.

ejward
Expert

I think the idea is to put all your iSCSI traffic on its own switch. There are ways to optimize a switch for iSCSI traffic that you would not use for regular network traffic.

Texiwill
Leadership

Hello,

By 'offloading' I meant that the iSCSI traffic should not have to share the pNIC/vSwitch with anything else; if it does, you may see performance issues, perhaps lag, as the vSwitch/pNIC gets overloaded. I would have 2 pSwitches in use for redundancy reasons.

May I use your network questions in some upcoming blog posts? I will be writing some after the book is published. The book is in the proof stage at the moment, and still on target.

Best regards,

Edward L. Haletky
dmca
Contributor

Edward / ejward

I was planning to use:

2 x Dell PowerConnect 5324 Switches in order to give redundancy.

However, having said that, I was also looking at using a lesser physical switch, as I thought the Dell PowerConnect 5324 might be a little overspec'd for the solution. So I was looking at the Dell PowerConnect 2724 as a possible alternative.

If I were to create VLANs then I should be able to manage the performance a bit better, I think?

By all means use my question in your Blog.

dmca
Contributor

Hi,

I wanted to do some testing on an existing server I have, but it has SATA drives. Where could I get hold of a copy of the 3.5 beta that supports SATA? You mentioned that you were running it in a test lab at the moment.

Texiwill
Leadership

Hello,

I think it would work as long as your VLAN terminates at the pSwitch (EST, external switch tagging); that way you get the pNIC redundancy at the vSwitch level.

Best regards,

Edward L. Haletky
ejward
Expert

Edward / ejward

I was planning to use:

2 x Dell PowerConnect 5324 Switches in order to give redundancy.

However, having said that, I was also looking at using a lesser physical switch, as I thought the Dell PowerConnect 5324 might be a little overspec'd for the solution. So I was looking at the Dell PowerConnect 2724 as a possible alternative.

If I were to create VLANs then I should be able to manage the performance a bit better, I think?

By all means use my question in your blog.

Here are the recommended switch settings for iSCSI on Cisco switches:

"

  • It is recommended that you do not use Spanning-Tree (STP) on switch ports that connect end nodes (iSCSI initiators or storage array network interfaces). If you want to use STP or RSTP (preferable to STP), enable Cisco's PortFast option on each switch port that connects end nodes. PortFast will reduce network interruptions that occur when devices restart.

Note that the use of Spanning-Tree for a single-cable connection between switches is encouraged, as is the use of trunking for multiple-cable connections between switches.

  • It is recommended that you configure Flow Control on each switch port that handles iSCSI traffic. If your server is using a software iSCSI initiator and NIC combination to handle iSCSI traffic, you must also enable Flow Control on the NICs to obtain the performance benefit.

  • It is recommended that you disable unicast storm control on each switch that handles iSCSI traffic. However, the use of broadcast and multicast storm control is encouraged.

  • Configure Jumbo Frames on each switch that handles iSCSI traffic. If your server is using a software iSCSI initiator and NIC combination to handle iSCSI traffic, you must also enable Jumbo Frames on the NICs to obtain the performance benefit (or reduce CPU overhead) and ensure consistent behavior.

Notes: Do not enable Jumbo Frames on switches unless Jumbo Frames is also configured on the NICs; otherwise, behavior may be inconsistent. "

I'm no expert, but when I sent this to the people who manage our network switches, they told me to buy my own switches for iSCSI.
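Rolled into an interface stanza, those recommendations might look something like this on a Cisco IOS switch (a hedged sketch only; the port number is a placeholder, and the jumbo-frame and storm-control syntax in particular vary by platform, e.g. some Catalyst models use a global `system mtu jumbo 9000` instead of a per-port MTU):

```
! Hypothetical per-port settings for an iSCSI-facing switch port
interface GigabitEthernet0/1
 description iSCSI initiator / array port
 spanning-tree portfast        ! end node - skip STP listening/learning
 flowcontrol receive on        ! honor pause frames from the end node
 no storm-control unicast      ! per the recommendation above
 mtu 9000                      ! jumbo frames (platform-dependent syntax)
```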
