VMware Cloud Community
Frosticle
Contributor

Needing some advice on storage

Hoping a few people can offer their opinions (all care, no responsibility!) as to how I can best proceed to upgrade our VMware infrastructure.

Current situation:

-- 1 x IBM x3650 M2 rack server with 16GB RAM and 1TB internal RAID 5 storage (8 x 146GB SAS drives, I think)

- 5 x guest O/S, mostly Windows Server 2003, incl. one small SQL Server

-- 2 x physical Windows servers

- 1 x small web server

- 1 x Exchange 2007 server

n.b. the Exchange Server also acts as a File Server, and has an IBM EXP3000 system storage device (IBM MegaRAID 8480 adaptor) attached with 4TB of RAID 0 storage on it, formatted as NTFS (don't ask me why it's RAID 0; it was like that when I got here!).

I have budget to purchase a 2nd IBM x3650 M2 with an identical configuration (16GB RAM and 1TB internal storage). I will then convert my physical servers to virtual machines and retire the physical boxes down to our DR site.

I'm looking for some feedback on the EXP3000 storage device. It has 4 x 1TB drives in it, however this could be expanded to 8x or even 12x I suppose. The spec sheets I can find say that it has "2 x mini SAS connectors" and can offer "3Gbps port data transfer" speeds.

Because my longer term objective is to implement vMotion, will the EXP3000 do the job as the "shared storage" for both ESXi servers and perform almost as well as (or better than) the internal drive storage in the x3650's, or will it struggle? In other words, will I need to plan to purchase a new SAN down the track if I want to move to shared storage and vMotion?
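
My understanding is that the acid test for vMotion is simply whether both hosts can see the same datastores. Here's a minimal sketch of how I'd verify that with the pyVmomi Python SDK once both hosts are up (the vCenter hostname and credentials below are placeholders):

```python
# Sketch: list the datastores each ESXi host can see, via pyVmomi.
# Hostname/credentials are placeholders; certificate checking is
# disabled for lab use only.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.local",
                  user="administrator", pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        names = sorted(ds.summary.name for ds in host.datastore)
        print(f"{host.name}: {names}")
    # For vMotion, every datastore holding VM disks should show up
    # under more than one host in this listing.
finally:
    Disconnect(si)
```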

7 Replies
Frosticle
Contributor

OK, one likely issue ... my EXP3000 definitely has 4 x 1TB drives in it ... but the specs for the EXP3000 on the IBM website say:

>> Drives supported

>> SAS 15,000 rpm capacities: 73 GB, 146 GB, 300 GB, 450 GB

>> SATA 7,200 rpm capacities: 500 GB, 750 GB, 1.0 TB

>> Solid State Disk: 50 GB SATA (System x direct attach only via the ServeRAID MR10M SAS/SATA controller adapter)

So my drives must be SATA ... therefore I assume this means I will have to purchase a bunch of new SAS drives (which is fine, I suppose, as they will offer much faster performance).

So if I put SAS drives in it, will it run just as fast as if I put SAS drives into the x3650 internal storage bays?

golddiggie
Champion

The callout of "2 x mini SAS connectors", to me, sounds like it will be DAS (Direct Attached Storage), not NAS/SAN (i.e. over the network). If you really want to plan for vMotion, and DR/HA/SRM in the future, you'll need network-attached storage that both hosts can access. If you're tight budget-wise, look to iSCSI solutions. Have the solution populated with SAS drives to increase the IOPS. Otherwise, you'll want to get solutions with a lot of drives (12 or 16 spindles running either RAID 10 or 50). At my last company we picked up an EqualLogic 16TB iSCSI solution for around 40k (I don't remember the exact number, since we put some additional hardware and services onto the same purchase). That was added to an existing 4TB solution and yielded about 13TB of actual usable storage space. Adding the second chassis to the existing iSCSI array gave us a performance boost as well.
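
As a back-of-the-envelope illustration of why spindle count and RAID level matter so much, here's a rough Python sketch. The per-disk IOPS figures are common rules of thumb, not vendor specs, and the example configurations are assumptions:

```python
# Rough array IOPS estimate. Per-disk figures are rules of thumb, not
# vendor specs. RAID write penalty: RAID 0 = 1, RAID 10 = 2, RAID 5 = 4.
PER_DISK_IOPS = {"7.2k SATA": 80, "10k SAS": 130, "15k SAS": 180}

def effective_iops(spindles, disk_type, write_penalty, write_fraction=0.3):
    """Front-end IOPS once the RAID write penalty is paid on writes."""
    raw = spindles * PER_DISK_IOPS[disk_type]
    return raw / ((1 - write_fraction) + write_fraction * write_penalty)

configs = [
    (4, "7.2k SATA", 1),   # the EXP3000 as it stands (RAID 0)
    (16, "7.2k SATA", 2),  # 16-spindle iSCSI array, RAID 10
    (16, "15k SAS", 2),    # same array with 15k SAS spindles
]
for spindles, disk, penalty in configs:
    print(f"{spindles} x {disk}, write penalty {penalty}: "
          f"~{effective_iops(spindles, disk, penalty):.0f} IOPS")
```

The point being: sixteen spindles in RAID 10 buy you several times the IOPS of four, before you even touch the drive type.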

I would stay clear of large amounts of local storage for production ESX hosts. Go with the smallest drives you can get in the server when ordering it. Get 10,000 or 15,000 RPM SAS drives (73GB is more than enough, no need for the 146GB drives) and configure them as a mirrored pair. You don't need anything more in the host; everything else should be on shared storage. I would also go with a pair of hosts at your primary site to enable HA there (if you need to work on one host, you have the second one to take the load). Otherwise, you'll always have to schedule downtime for any hardware work on the host. With the pair, and HA, you'll be able to use Maintenance Mode, vMotion, etc. to ensure five nines of uptime for any virtual server in the production environment. Later, when you can get the funds allocated, get a pair of brand new servers for the primary location and move the two you'll have from earlier to the DR location. This is a fairly common practice.

Also, with the pair of servers, you'll be able to virtualize Exchange along with everything else you have in the environment (listed above). I would move the file server off of the Exchange server ASAP. If you must, make a dedicated virtual server to share out a LUN, or several LUNs, depending on the actual amount of information you need to share. No point in putting more traffic onto the Exchange server than necessary.

Frosticle
Contributor

You're right, the EXP3000 is directly connected to the physical file server. So that matches the description of Direct Attach Storage.

I assume that this is pretty comparable to internal storage in terms of performance. How would iSCSI compare to this? The idea of pulling your data off a network-attached device fills me with apprehension. Can it really keep up with the performance of internal drives?
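
To put some rough numbers on my apprehension, here's the theoretical-link arithmetic I keep coming back to (raw line rates only; protocol overhead, latency and caching will all pull real numbers down):

```python
# Theoretical link throughput only; real-world figures are lower once
# protocol overhead, latency and controller caching come into play.
links_gbps = {
    "3Gb SAS (EXP3000 mini-SAS, per lane)": 3.0,
    "1GbE iSCSI (single NIC)": 1.0,
    "1GbE iSCSI (4 NICs, multipathed)": 4.0,
}
for name, gbps in links_gbps.items():
    print(f"{name}: ~{gbps * 1000 / 8:.0f} MB/s")
```

So a single GbE link tops out around 125 MB/s against 375 MB/s for one 3Gb SAS lane, which I gather is why multipathing (and spindle count) matter so much on iSCSI.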

Anyway, it sounds like we'd be up for some new hardware. Will have to wait and see what flexibility I have in the budget I guess.

golddiggie
Champion

A little research will go a long way for you. Reach out to manufacturers like EMC, NetApp, HP and EqualLogic and pose the question to them. I know that when we added the second EqualLogic chassis to our environment, mating it to the first one, the performance of the storage went through the roof. Of course, we're also talking 16 spindles per array/chassis.

It will really depend on how much you can allocate, funds-wise, to the storage. Don't go cheap, but don't go overboard either. Our total purchase of our VMware environment, including three servers, VMware licensing, two EqualLogic storage arrays, two Gb switches for vMotion and storage networking, as well as another switch for another segment of the LAN, came to under 100k. For what we got in capabilities, performance, and flexibility, it was money well spent (and a bargain in my opinion).

With how far iSCSI and network-attached storage have come in recent years, they perform just as well as, if not better than, what you're running now. If you want ultra-high IOPS for some ultra-high-demand applications, then go for SSD inside the storage. Otherwise, 10k or 15k SAS drives will serve you well. In fact, if we had SAS drives in our EqualLogic arrays, they would have been insanely fast. As it stands, they were populated with 7200rpm SATA drives. Performance was still well within acceptable levels for all servers residing on them. That could have also been due to the servers we were using (brand new Dell R710s with the 5520 Xeons inside).

AnatolyVilchins

I think the best plan is to purchase the second IBM x3650 M2 and then run both of your Windows servers as storage VMs in a High Availability setup. The second storage node can be located far from the primary. That will provide more reliability and Business Continuity for your system.

Here:

http://www.starwindsoftware.com/vmware-availability-guide

iSCSI Software Support Department

http://www.starwindsoftware.com

Kind Regards, Anatoly Vilchinsky
golddiggie
Champion
Champion

The issue will be using DAS in that configuration. If you lose the connection to one of the servers, and as a result the storage connected to it, HA will not be able to bring those VMs online on the other host.

Frosticle
Contributor

OK, thanks all for your posts. I think for now I will have to restructure the plan to look something like this:

STAGE 1 - get a 2nd IBM x3650 M2 server now with 1TB internal storage and run without iSCSI or vMotion/HA

... then in a year or so ...

STAGE 2 - get 2 new production servers and an iSCSI array (a major investment) and move the old servers to disaster recovery (where they can run from their 2 x 1TB internal RAID storage without needing iSCSI)

It's not ideal, as I had hoped to make better use of the EXP3000, but budget constraints prevent me from going iSCSI in the next 18 months. (We have a major investment happening in software that takes precedence, but once that software has been written, attention will turn to infrastructure growth and performance.)
