VMware Cloud Community
Razorhog
Contributor

Advice on Dell Server Purchase + ESXi

I am wanting to virtualize some of my servers running on old hardware. I'd like to get a Dell server with ESXi embedded, and an iSCSI SAN. My Dell rep has suggested a PowerEdge 2950 III with dual quad-core Xeon 5460s and 32 GB of RAM. She wanted to put a RAID 5 array in the server, but I've heard that a SAN is a better choice for VMs. Any suggestions there?

This host would be running approximately 3 Windows XP machines (very low utilization), 1 NetWare 6.5 server with library database software, 1 NetWare 6.5 server running ZENworks (how will pushing/pulling computer images affect things?), GroupWise Messenger, iFolder, and iPrint. I'd also hope to have room to grow with this host for future servers.

My concern is the library software server and the ZENworks server. The library server is currently running on an old no-name server with a single Xeon 2.4 GHz CPU and 4 GB of RAM. CPU usage is minimal, and it shows 3 GB of free RAM most of the time.

The ZENworks server has similar specs but dual processors. It is a Gateway 975. It is idle 99% of the time, unless we image a machine from/to it.

I assume I need at least 4 physical network ports on the host so that a bottleneck doesn't happen. Would it be wise to assign one network port to each "more intensive" VM and let the other, smaller VMs just share a port -- roughly along the lines of the sketch below?
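To make that concrete, here is the kind of layout I'm picturing. This is only a rough, untested sketch that drives the vSphere remote CLI's vicfg-vswitch from Python; the host name, credentials, vmnic numbers, and port group names are placeholders I made up:

```python
# Rough sketch (untested): split VM traffic across physical NICs with vicfg-vswitch
# from the vSphere remote CLI. Host, credentials, and all names are placeholders.
import subprocess

CONN = ["--server", "esxi01.example.local", "--username", "root", "--password", "changeme"]

def vswitch(*args):
    # vicfg-vswitch ships with the vSphere CLI and must be on the PATH
    subprocess.check_call(["vicfg-vswitch"] + CONN + list(args))

# vSwitch1 with two uplinks for the "more intensive" VMs (library DB, ZENworks imaging)
vswitch("-a", "vSwitch1")               # create the vSwitch
vswitch("-L", "vmnic1", "vSwitch1")     # attach first physical uplink
vswitch("-L", "vmnic2", "vSwitch1")     # attach second physical uplink
vswitch("-A", "Heavy-VMs", "vSwitch1")  # port group the busier VMs connect to

# vSwitch2 with a single shared uplink for the low-utilization XP guests and the rest
vswitch("-a", "vSwitch2")
vswitch("-L", "vmnic3", "vSwitch2")
vswitch("-A", "Light-VMs", "vSwitch2")
```

Is that a sane way to carve things up, or is there a better convention?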

Please give any suggestions, hints, etc. I'm learning! Thanks! :)

ivp2k9
Enthusiast

External (in the case of my VMs) most likely means CIFS. Don't ask -- it's a long (and partly stupid) story involving stupid, yet de facto indispensable, apps from the former USSR. (I am currently experimenting with iSCSI and software initiators from within a VM, though.)

The point I was trying to make was the difference between internal storage (such as a RAID array built from a physical server's internal HDDs, or VMware virtual disks for a VM) and external storage (such as managed eSAS RAID enclosures, CIFS, NFS, iSCSI, or FC), and why the latter is usually the smarter choice these days -- the flexibility to move and share resources between servers.

As far as ESXi usage of external storage goes -- yes, you got it right. The SAN would be the place for the datastores where your VMs physically reside. Yet with $10k to spend, an iSCSI or FC SAN is out of consideration. I'd say the way to go (as opposed to das uberbox) is two less powerful boxes, not one (4 or 8 GT cores with 16-24 GB RAM per server), plus an MD1000 (think of it as an external HDD enclosure you can share in a fancy way between 2-4 computers). This is better in terms of overall availability (and likely future scaling and investment protection as well) and should provide more or less adequate performance (maybe somewhat worse than an internal RAID 5 with 4x 15k RPM drives, though).
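To put some very rough numbers on that -- the per-VM RAM figures below are just my guesses from your descriptions, not measurements, so adjust them to taste:

```python
# Back-of-envelope sizing sketch; every per-VM figure is a guess, not a measurement.
vms_gb = {
    "XP-1": 1, "XP-2": 1, "XP-3": 1,   # low-utilization XP guests, ~1 GB each
    "NetWare-library": 4,              # library database server (matches its current box)
    "NetWare-ZENworks": 2,             # imaging server, mostly idle
    "GroupWise-Messenger": 1,
    "iFolder": 1,
    "iPrint": 1,
}

host_ram_gb = 16        # the smaller end of the 16-24 GB per-server range above
hypervisor_gb = 1       # rough allowance for ESXi itself

allocated = sum(vms_gb.values()) + hypervisor_gb
print(f"{allocated} GB allocated of {host_ram_gb} GB -> {host_ram_gb - allocated} GB headroom")
```

Even with those allocations, the whole list fits on one of the smaller boxes, which is exactly what makes the second box usable as a hot spare.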

I understand your concern about backing up 50 GB or so of VMs from two hosts. However, if you still plan daily backups of one uberbox, two boxes are not going to make a huge difference -- at the very least, you can use one as a "hot spare" host for the VMs normally hosted on the other one (God bless external storage -- the sharing/migration is easy) with absolutely no external software or scripts. If your load is usually light (as you were describing), chances are one smaller box will be able to fully perform the duty you wanted to assign to the uberbox. The ability to assign datastores and folders to whichever server you like in a mouse click (of course, not as fine-grained a control as you get with proper VI) may prove to be extremely useful at times.

P.S. The MD1000 can later be integrated into a SAN solution of some sort -- you do not have to replace it or modify it in any way. Simply put an MD3000i or NX-something in front of it, and your drives are available over IP :)

FusionHosting
Contributor

I'd recommend you evaluate an actual VMware backup program like Veeam Backup. It's a lot easier to manage and works with ESXi.

s1xth
VMware Employee

I would recommend the MD3000 over the MD1000 because of the SAS HBAs on the 3000. You get better performance on that box than on the 1000 -- roughly 30-40% better. Do you HAVE to go with a 1000? No. But just so you are aware of the differences.

http://www.virtualizationimpact.com http://www.handsonvirtualization.com Twitter: @jfranconi
s1xth
VMware Employee

Veeam Backup is great and all, but it's slow unless you are running full-blown ESX with the service console, so Veeam can leverage the console for backup speed.

http://www.virtualizationimpact.com http://www.handsonvirtualization.com Twitter: @jfranconi
Razorhog
Contributor

Well, I've talked with my boss about this some, and he likes the idea of an iSCSI SAN - which means it might be bumped into the budget...

I need to explore all the options and learn about implementing iSCSI, so I can present it with enough knowledge to explain how it works and why it would benefit us.

Razorhog
Contributor

Which brings me to some questions, of course :)

If I were to get a SAN, does ESXi work with a physical HBA, or do you have to use software initiators?

When setting up the SAN, how do you determine what will be used for VMFS, and what will be used for other things? Does the entire array have to be formatted in VMFS, or can I have a volume within the array that is formatted with VMFS, and in that VMFS have multiple LUNs? Trying to make sense of this...

ivp2k9
Enthusiast

It's all about the money... The lower-end enclosure (not the MD1000 -- silly me, I meant the MD1120, at least for a dual-box setup) would easily fit the stated budget; the MD3000-class box, I'm not so sure.

Besides, "fancier" HBAs (hate dell for this) may actually work worse under flavors of *NIX...

P.S. Did Zarathustra really prohibit using identical controllers for different MDxxxx devices?

Razorhog
Contributor

Can anyone help explain the questions from my previous post? I suppose different brands of SAN devices are going to be different. This might be a stupid question, but how do I access the SAN from a regular workstation so that I could use it for storage and/or copy VMs to/from it?

ivp2k9
Enthusiast

Any recent x86 machine running a recent OS (all Windows versions, Linux 2.6.x, FreeBSD 7, OpenSolaris, NetBSD 4 -- the list could go on) can access volumes on almost any conceivable iSCSI SAN. All you need is a sufficiently fast IP network connection (like a gigabit or 10-gigabit Ethernet NIC, but any technology really works -- even ATM or PoS). That is the beauty of iSCSI. Most of the mentioned platforms are capable of acting as iSCSI targets (servers, in plain language) as well. Of course, a number of specialized applications are better off with a real SCSI HBA and/or some custom software. But you see, people have learned they can live with software that is "good enough" (© Microsoft Corp), so I do not quite see why not live with "good enough" iSCSI.
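For example, from a Linux workstation with the open-iscsi tools installed, attaching to a target is basically two commands (wrapped in Python here only for illustration; the portal address is a placeholder for your SAN's IP):

```python
# Minimal sketch: discover and log in to an iSCSI target from a Linux machine
# with open-iscsi installed (run as root). The portal address is a placeholder.
import subprocess

portal = "192.168.1.50:3260"   # your SAN controller's iSCSI portal

# Ask the portal which targets it exposes
subprocess.check_call(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", portal])

# Log in to the discovered targets; the LUNs then appear as ordinary local
# block devices (/dev/sdX) that you can partition and format like any disk
subprocess.check_call(["iscsiadm", "-m", "node", "--login"])
```

On Windows, the stock Microsoft iSCSI initiator gives you the same thing through a GUI.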

Using the stock Microsoft iSCSI initiator on a stock Dell SC1435 with stock Broadcom cards and a Dell/EMC AX-150i, I was getting over 850 Mbit/s of transfers between initiator and target (100+ MB/s of actual file copying) nearly 2 years ago. That was (and still is, for the folks using that setup today) good enough for me. The technology, you know, does not get worse with time these days -- but you'd probably have to see a working iSCSI SAN before finally deciding for yourself.
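Those figures are just wire-speed arithmetic -- the measured megabits per second divided by 8 bits per byte:

```python
# Sanity check on the numbers quoted above
measured_mbit_per_s = 850
print(measured_mbit_per_s / 8)   # -> 106.25 MB/s, i.e. the "100+ MB/s" of file copying
```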
