VMware Cloud Community
Snr_Whippy
Contributor

My dream of 40-50 Tier 1-2 VMs on 3 hosts using iSCSI

Tier 2 as in various IIS web servers and reasonably quiet applications. Any databases will be on physical servers.

Is it a dream? I am desperately trying to design an infrastructure which can accommodate this number of guests on one iSCSI SAN.

I have been looking at various models of iSCSI SAN. One in particular with a sensible price that caught my eye was the PS5000E from EqualLogic.

This particular model uses 7.2K SATA II drives, normally configured as RAID 50: 4 TB raw with about 2.5 TB of usable storage.

Has anyone used this model or something similar, and how many VMs have they been running on there without performance degradation? Just a rough idea would give me some direction.

This model claims 60,000 IOPS, which seems to be somewhat disputed on various forums, as they make use of caching to achieve that figure.
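To put a rough number on what the spindles alone could do, here is a back-of-envelope sketch. The drive count, per-spindle IOPS and read/write mixes below are assumptions, not EqualLogic's specs:

```python
# Back-of-envelope IOPS for a 16-drive 7.2K SATA array in RAID 50.
# All figures are assumptions, not vendor specs: ~75 random IOPS per
# 7.2K SATA spindle and a RAID 5/50 write penalty of 4 back-end I/Os.

SPINDLES = 16          # assumed drive count in the array
IOPS_PER_SPINDLE = 75  # assumed random IOPS for a single 7.2K SATA disk
WRITE_PENALTY = 4      # each host write costs ~4 back-end I/Os in RAID 5/50

def host_visible_iops(read_pct):
    """Estimated front-end random IOPS for a given read percentage."""
    raw = SPINDLES * IOPS_PER_SPINDLE
    write_frac = (100 - read_pct) / 100
    return raw / (read_pct / 100 + write_frac * WRITE_PENALTY)

for mix in (100, 70, 50):
    print(f"{mix}% read: ~{host_visible_iops(mix):.0f} IOPS")

# Roughly 500-1,200 uncached IOPS, which is why the 60,000 figure only
# makes sense as a cache-hit number.
```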

6 Replies
chrisfmss
Enthusiast

We have two PS5000Es with 1 TB SATA disks, and we have 15 LUNs (Exchange server, file server, SQL for SCCM 2007, VMware), and performance is great. If you want performance, you must use a good switch for your iSCSI LAN; we have two Cisco 3750-Es. Have you looked at the new PS5500E with 48 TB or 24 TB in 4U?

azn2kew
Champion

Your SAN should be good enough, but it depends on your ESX host hardware. Are the hosts quad-core or eight-core? Remember you can generally place 6-8 VMs per core, so 3 hosts x 4 cores x 8 VMs/core = 96 VMs easily. It depends, though: the low-usage IIS servers you mention are an ideal workload for virtualisation, and you can sometimes place even more when usage is low. You have to do some testing before going live.
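Putting that rule of thumb into a quick sketch, using only the numbers from this thread (nothing vendor-specific):

```python
# The 6-8 VMs per core rule of thumb, with the numbers quoted above.
hosts = 3
cores_per_host = 4     # quad-core example used above; 8 with dual quad-core
vms_per_core = 8       # upper end of the 6-8 VMs/core guideline

print(hosts * cores_per_host * vms_per_core)  # 96 VMs, well above the 40-50 target
```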

If you found this information useful, please consider awarding points for "Correct" or "Helpful". Thanks!!!

Regards,

Stefan Nguyen

iGeek Systems Inc.

VMware, Citrix, Microsoft Consultant

Snr_Whippy
Contributor

The hardware will be:

3 hosts, each with 2 x 3 GHz quad-core CPUs

24 GB RAM

10 NICs (2 onboard + 2 quad-port cards)

I am still undecided on whether to use the QLogic hardware iSCSI initiators. Most people I talk to don't seem too interested in the hardware initiators, but surely the more CPU you can save for the VMs the better?

Apart from the cost of the hardware initiators, and the fact that only a particular QLogic card seems to actually be supported, I don't see why they shouldn't be used.

Also, I am toying with using one of these for the iSCSI storage; has anyone had any experience with these specifically:

IBM N3300, N3600 and N3700

Dell EqualLogic PS5000E and PS5000X

EMC CLARiiON AX4.

I might have to go for the EMC CLARiiON AX4; I'm not sure what the performance of these is, though.


TomHowarth
Leadership

From the numbers you are quoting, your hardware (consultant's hat on) should be meaty enough to handle the load of 40 to 50 servers. Even taking into account an N+1 architecture, this means a maximum of 20-25 guests per physical host in a failure situation; that is a maximum of roughly 3.1 guests per logical core.
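Putting that N+1 case into a quick sketch, using the host spec quoted above:

```python
# Worst case with one of the three hosts down (N+1), using the spec above.
guests = 50
surviving_hosts = 2
cores_per_host = 8     # 2 x quad-core per host

guests_per_host = guests / surviving_hosts         # 25 guests per host
guests_per_core = guests_per_host / cores_per_host
print(guests_per_host, round(guests_per_core, 2))  # 25.0 per host, ~3.12 per core
```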

Memory should be good to go; remember that ESX has the balloon driver and also shared memory pages (transparent page sharing), which significantly improve memory utilisation on a host.

If you have the slots and budget, then a hardware iSCSI initiator is a valuable addition for improving performance on the host servers; that being said, the VMware software iSCSI initiator is quite efficient.

I am curious to understand your reasoning for wanting so many NICs. In a small SMB-type environment you could get away with four:

NIC 1 for the SC, with failover to NIC 2

NIC 2 for the VMkernel, with failover to NIC 1

Bonded pair on NICs 3+4 (IP hash) for the production network; this would provide good bandwidth to your guests.

Six NICs bring better resiliency, as the SC and VMkernel networks can then be bonded as well.

The only reason I would have that many NICs is in a multi-production-VLAN environment.
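For illustration, here is a rough sketch of that four-NIC layout as a data structure with a simple redundancy check. The port group and NIC names are placeholders, not actual ESX identifiers:

```python
# Illustrative description of the four-NIC layout above; the port group
# and NIC names are placeholders, not real ESX identifiers.
layout = {
    "Service Console":    {"active": ["nic1"], "standby": ["nic2"]},
    "VMkernel (iSCSI)":    {"active": ["nic2"], "standby": ["nic1"]},
    "Production Network":  {"active": ["nic3", "nic4"], "standby": []},  # IP-hash bonded pair
}

# Every traffic type should survive the loss of any single NIC.
for portgroup, team in layout.items():
    uplinks = team["active"] + team["standby"]
    assert len(uplinks) >= 2, f"{portgroup} has no failover path"
    print(f"{portgroup}: uplinks {uplinks} -> redundant")
```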

If you found this or any other answer useful please consider the use of the Helpful or correct buttons to award points

Tom Howarth, VCP / VCAP / vExpert
VMware Communities User Moderator
Blog: http://www.planetvm.net
Contributing author on VMware vSphere and Virtual Infrastructure Security: Securing ESX and the Virtual Environment
Contributing author on VCP VMware Certified Professional on VSphere 4 Study Guide: Exam VCP-410
chrisfmss
Enthusiast

My setup for iSCSI: 2 ports for the ESX initiator and 2 ports for the MS initiator within the VM. With this setup I use the ESX initiator for the C: drive and the MS initiator for the data drive, and I don't see much CPU overhead.

Snr_Whippy
Contributor

I was going with 10 NICs as I needed 6 at a minimum to include the DMZ.

Also, if needed, I could dedicate VMs to sets of adapters so they weren't all crammed down the one pipe, if that makes any sense?
