Hi guys
I'm new to VI3 and looking for some help. I've been tasked with consolidating five physical boxes in the company's data centre, each with dual 2.4 GHz Xeon CPUs, 2 GB of RAM and 4 x 36 GB disks in a RAID 5 set. Sounds easy enough, but these are all public-facing application servers with high availability, failover boxes etc.
Whatever solution I suggest must have the same level of resilience. I've picked up some good material from the VMware and Dell sites and came up with this:
VI3 Enterprise, so I can use HA, VMotion and Consolidated Backup, on two Dell dual-socket PE2950s, each with one quad-core CPU fitted (leaving room to scale up), 8 GB of RAM, 2 x 36 GB in RAID 1 and 4 x 73 GB in RAID 5.
I also need to keep the cost down, so I think a SAN is out of the question for now. I'd install VI3 on the mirrored disks, put the VMFS on the RAID 5 set and run VCMS on a desktop. Would a NAS be better for the VMFS?
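Roughing out the memory side of it (the ESX overhead and headroom figures below are just guesses on my part, not from any sizing guide):

```python
# Back-of-the-envelope memory check for the consolidation above.
# Box counts and RAM sizes are from the post; the ESX overhead and
# headroom figures are only guesses for illustration.

SOURCE_BOXES = 5
RAM_PER_SOURCE_GB = 2        # each existing box has 2 GB RAM
HOSTS = 2
RAM_PER_HOST_GB = 8          # each PE2950 would have 8 GB RAM

ESX_OVERHEAD_GB = 1.0        # assumed Service Console / hypervisor overhead per host
HEADROOM = 0.20              # assumed 20% of memory kept free per host

vm_ram_needed = SOURCE_BOXES * RAM_PER_SOURCE_GB
usable_per_host = (RAM_PER_HOST_GB - ESX_OVERHEAD_GB) * (1 - HEADROOM)

print(f"RAM the five workloads use today : {vm_ram_needed:.1f} GB")
print(f"Usable RAM per host (assumed)    : {usable_per_host:.1f} GB")
print(f"Usable RAM with both hosts up    : {usable_per_host * HOSTS:.1f} GB")
print(f"Usable RAM with one host down    : {usable_per_host * (HOSTS - 1):.1f} GB")
# If the last figure is below the first, an HA failover would mean
# memory overcommit until the failed host comes back.
```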
Would this setup allow me to use VMotion and HA to their full capacity?
I do plan to speak with whatever vendor we end up using once we get to that stage, but it would be good to know whether I'm on the right track or not. Any help would be great.
Thanks
R.
Hi,
Since you are not planning on using a SAN, DRS, VMotion and HA will not work. If you want to make use of these features (and given the setup, I think you would), you should use a SAN. Sure, this costs more, but also think about future possibilities: expanding your ESX setup is much cheaper once you do have a SAN, and the extras you get with ESX (DRS, HA, VMotion) add up as well.
ESX gets "cheaper" the more you consolidate. So if expanding the setup in the future is an option, it might be worthwhile to do some math on those figures. The reduction in power consumption, cooling and datacenter space alone might cover the extra costs.
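By "do some math" I mean something like the sketch below; every figure in it (power draw, kWh price, rack space cost) is made up purely as an example, so plug in your own datacenter numbers:

```python
# Back-of-the-envelope consolidation savings. All inputs are made-up
# example figures -- substitute your own datacenter numbers.

OLD_SERVERS = 5
NEW_SERVERS = 2

WATTS_PER_SERVER = 350          # assumed average draw per 2U server
POWER_PRICE_KWH = 0.12          # assumed price per kWh
COOLING_FACTOR = 0.5            # assumed 0.5 W of cooling per W of IT load
RACK_UNIT_COST_YEAR = 150.0     # assumed yearly cost per rack unit
UNITS_PER_SERVER = 2            # both old and new boxes are 2U

HOURS_PER_YEAR = 24 * 365

def yearly_cost(servers):
    """Power + cooling + rack space for a number of servers, per year."""
    power_kw = servers * WATTS_PER_SERVER * (1 + COOLING_FACTOR) / 1000.0
    power_cost = power_kw * HOURS_PER_YEAR * POWER_PRICE_KWH
    space_cost = servers * UNITS_PER_SERVER * RACK_UNIT_COST_YEAR
    return power_cost + space_cost

saving = yearly_cost(OLD_SERVERS) - yearly_cost(NEW_SERVERS)
print(f"Estimated yearly saving: {saving:,.0f} (in whatever currency you priced power in)")
```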
For VMotion, DRS and HA to work, you need shared storage. Since you are not going with a SAN, there is a cheaper option that in my experience is actually not too bad: iSCSI. iSCSI (provided that your network is stable and fast) will give you shared storage and allow VMotion, DRS and HA.
You can install Linux or Windows on a server packed with disks, install iSCSI target software onto it, connect ESX to the target and format the LUNs as VMFS.
I have tested this setup and found it efficient in a test lab environment, and I am confident it can work well in production at a fraction of the cost of a Fibre Channel SAN.
Regards
Rynardt
VCP - VI3
While iSCSI might be a nice solution, I would not recommend using a Linux box with iSCSI software on it if your environment is all about uptime, uptime and uptime. HA might work very nicely for host failure, but who ever really thinks about SAN failure? That is much more dramatic in most setups. As soon as you start "saving" money by using ordinary physical servers as iSCSI boxes, you start adding risk to your environment. Uptime costs money; do not try to save money at the cost of your night's sleep.
Linux and uptime? What are you on about? In our datacenters the Windows machines are the ones with uptime problems. Linux is a much more robust server OS. Why on earth do you think more than 70% of the web servers on the internet are sitting on some Unix/Linux system running the Apache web server? If you want DOWNTIME, GO WINDOWS!
It's not the OS, it's the platform: an x86 server has many single points of failure that are not present on enterprise-class SANs (FC or iSCSI).
Regards,
Iain
I do understand the point you are making, but it's either iSCSI or a nice expensive SAN, and if you are going to be spending anyway, just go and buy an EMC Symmetrix!
The point I am trying to make is simple: without shared storage, there is no High Availability. Now, does anyone have a better idea than iSCSI if a SAN is not an option? And don't even try to say "NAS".
Well then, it's either iSCSI or no HA, DRS and VMotion, hey? Or, as I said before, if you don't like it, go and spend £900K on an EMC Symmetrix.
I'd argue there is no point putting in a SAN/iSCSI box for HA if that box has the same MTBF as your ESX host. If anything you are increasing your likelihood of a failure, because every VM now depends on the storage box as well as on a host.
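To put a rough number on that, here is a sketch with invented availability figures (99.9% for every box, purely for illustration, not measured):

```python
# Rough illustration of why adding a shared-storage box with the same
# reliability as a host changes the failure picture. The availability
# figures are invented for illustration only.

BOX_AVAILABILITY = 0.999        # assumed: any single box is up 99.9% of the time

# Standalone server: an application is down only when that one box is down.
standalone = BOX_AVAILABILITY

# Two ESX hosts with HA plus ONE shared-storage box of the same quality:
# the VMs survive a single host failure, but not a storage failure.
both_hosts_down = (1 - BOX_AVAILABILITY) ** 2
cluster_side = 1 - both_hosts_down              # at least one host up
with_shared_storage = cluster_side * BOX_AVAILABILITY

print(f"Single physical server:            {standalone:.6f}")
print(f"2-host HA cluster + 1 storage box: {with_shared_storage:.6f}")
# With these made-up numbers the cluster ends up slightly WORSE than a
# single server for any one application, because the lone storage box
# sits in series with everything else.
```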
That said, you don't need to spend £900K; we recently put in an iSCSI box for £19K that has far fewer single points of failure than an x86 server platform.
Regards,
Iain
Hi Rynardt,
Why so aggressive about the stability or instability of Linux versus Windows? I am not focusing AT ALL on the operating system running the iSCSI host, but merely on the fact that you should not go ahead and create an HA environment with a single underlying server for ALL your shared storage. Servers break down. They always do, someday. HA prevents a big disaster when an ESX host fails, but what about your iSCSI box? The point I am trying to make is the same one over and over again: if you decide to do HA, then make sure you have the infrastructure (and thus the $$) for it. We have a saying for this in Holland; roughly translated it comes down to "you can't sit in the front row for a quarter"...
If you decide to go for a setup like this (using an iSCSI box), you should consider the fact that the iSCSI box is a single point of failure. What do you do if it breaks down? What steps do you take, and how much downtime (and loss of data) is acceptable? Most people I know do NOT like the idea of SAN failure...
And when buying a SAN, you can actually start off just a little cheaper than a Symmetrix
PS: By "iSCSI box" I mean a regular server running iSCSI. A dedicated iSCSI appliance is of course more reliable (and more expensive)...
I said you should buy a Symmetrix because I am almost identical to Chandler Bing on Friends. Of course I was not being serious; we are talking truckloads of cash for one of those.
I just misunderstood when you said "Linux box". I only later realised you actually meant an x86 box.
Sorry for the slight aggression.
I do agree with you that it's a single point of failure, which is why it's only being used in my test lab; for the live environment we use an EMC CLARiiON CX.
Fair enough. Sorry for any confusion. Indeed, I meant to say that x86 servers, or any server for that matter, tend to stop working one day.
So, for the original poster: look into your wallet and decide which way to go!
Hello,
Storage options depend on cost, MTBF, and your usage. No matter what, per LUN there is a single point of failure for all devices: the disks themselves. If, say, you use RAID 5, you can handle one disk failure before something bad happens. So you must understand all there is to know about the possible failure modes and how to alleviate them (RAID 1 vs RAID 5 vs RAID 10 vs ADG/RAID 6 vs RAID 50).
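As a quick illustration of what those levels mean with, say, 36 GB disks (textbook layouts only, not a tuning recommendation):

```python
# Usable capacity and tolerated disk failures for common RAID levels,
# using 36 GB disks as an example. Textbook layouts only.

DISK_GB = 36

def raid_summary(level, disks):
    """Return (usable GB, worst-case disk failures survived)."""
    if level == "RAID 1":      # mirrored pair
        return DISK_GB * disks // 2, 1
    if level == "RAID 5":      # single parity
        return DISK_GB * (disks - 1), 1
    if level == "RAID 6":      # double parity (ADG)
        return DISK_GB * (disks - 2), 2
    if level == "RAID 10":     # striped mirrors; worst case one disk per mirror pair
        return DISK_GB * disks // 2, 1
    raise ValueError(f"unknown level: {level}")

for level, disks in [("RAID 1", 2), ("RAID 5", 4), ("RAID 6", 4), ("RAID 10", 4)]:
    usable, survives = raid_summary(level, disks)
    print(f"{level:8} on {disks} x {DISK_GB} GB disks: "
          f"{usable:4d} GB usable, survives at least {survives} disk failure(s)")
```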
iSCSI and SAN devices also suffer from the same thing. There is a single point of failure for everything: yes, there are multiple paths, storage processors, LUNs, etc., but the chassis itself is a single point of failure, so people sometimes use multiple devices. I have seen single-SAN environments do very bad things. There is no way around this except to have data in other locations so you can stay running. This is where DR comes in; you will need it whether you use iSCSI, SAN, or NFS.
So iSCSI limitations:
Performance: 1 Gb connections (~700 Mbps usable throughput in each direction)
Requires another Server Box/or specialized device
Supports IDE, SATA, SAS, SCSI (recommendation is always SCSI)
Block IO protocol
So SAN limitations:
Performance: 2 Gb or 4 Gb connections
Requires another specialized device
Supports SATA, SAS, SCSI (recommendation is always SCSI)
Block IO protocol
So NFS Limitations:
Performance: 1 Gb connections (~700 Mbps usable throughput in each direction)
Requires another Server Box/or specialized device
Supports IDE, SATA, SAS, SCSI (recommendation is always SCSI)
Non-block (file-level) IO protocol
As you can see, a SAN is faster in theory, but both can involve specialized hardware; that is about all the difference there is. MTBF for the disks is the same, and the uptime of the device is about the same as well, depending on what you use for iSCSI. Linux/FreeNAS make very good iSCSI servers and plenty are used in production today; most specialized iSCSI appliances actually run Linux internally. Windows is used as well, but has a lower MTBF in my opinion.
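To rough out that "faster in theory" point in numbers, using the ~70%-of-line-rate rule of thumb from the lists above (an assumption, not a benchmark):

```python
# Theoretical line rates vs. a rough "usable" figure for each transport.
# The 70% efficiency factor is only the rule of thumb quoted above,
# not a measured benchmark.

EFFICIENCY = 0.70   # assumed fraction of line rate actually usable

links_mbps = {
    "iSCSI / NFS over 1 Gb Ethernet": 1000,
    "Fibre Channel SAN at 2 Gb":      2000,
    "Fibre Channel SAN at 4 Gb":      4000,
}

for name, mbps in links_mbps.items():
    usable_mbps = mbps * EFFICIENCY
    usable_mbytes = usable_mbps / 8          # megabits -> megabytes per second
    print(f"{name:32} ~{usable_mbps:4.0f} Mbps usable (~{usable_mbytes:.0f} MB/s)")
```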
So what do you do for DR? For business continuity purposes you could have another SAN or iSCSI server ready to go, or you could also have local storage available. I recommend the latter quite a bit: that way, if the remote storage has issues, you can switch to local storage for your most important VMs, which as part of my backup mechanism are also placed on local VMFS storage.
IMHO iSCSI, NFS and SAN all have issues. Depending on the usage and DR plan, one is often better than the others. I have used them all for customers and they all work. In many cases it comes down to understanding the limitations, how to tune them, and the costs involved. You always want your test environment to mimic your production environment as closely as possible.
Performance-wise it goes like this: local storage first, then SAN, then iSCSI, and last NFS.
For an inexpensive VMware Starter License setup, I would suggest Local Storage + NFS.
For an inexpensive VMware Foundation License setup, I would suggest Local Storage + iSCSI or an entry-level SAN (MSA1000 or equivalent)
For an inexpensive VMware Enterprise License setup, I would suggest Local Storage + any enterprise SAN or iSCSI server.
Compare your options, contrast the MTBFs, and pick the best components you can for whichever remote storage device you can afford. Currently I would always use SCSI disks, as they are the fastest and most reliable. SATA is coming close with the 10K devices, but they currently have a lower MTBF. That is changing rapidly as well.
Best regards,
Edward L. Haletky, author of the forthcoming 'VMWare ESX Server in the Enterprise: Planning and Securing Virtualization Servers', publishing January 2008, (c) 2008 Pearson Education.