VMware Cloud Community
TahoeTech
Contributor

Possible NAS storage solution? Promise VTrak 15100 (U320 SCSI) as storage for ESXi/VirtualCenter?

I have several Promise VTrak units (both 12110 and 15100) that I would like to use as storage for a proof-of-concept project using ESXi, a few blade enclosures, and some 1U servers in a "datacenter"-type setup. The VTraks are 12- and 15-bay SATA drive enclosures with two U320 SCSI channels on the back.

I would like to install ESXi on the blades and 1U servers, have the storage live on the Promise VTraks, and set up a test environment for deploying web servers and DB servers. I see lots of people using SAN solutions, but I don't want to just throw these VTraks away.

Is this possible? I didn't see them on the HCL, but as we all know, that doesn't mean it isn't possible...

Thanks to anyone who can provide insight!

7 Replies
Craig_Baltzer
Expert

Likely the most straightforward solution would be to connect the Promise VTraks to one of your 1U servers and dedicate it as your "storage" server. You could run something like OpenFiler on it to make the storage available via iSCSI or NFS to the ESXi systems running on the blades...
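
For what it's worth, OpenFiler of that vintage drives the iSCSI Enterprise Target under the hood, so the end result of the GUI clicking is roughly the config below. A minimal sketch only; the device path and IQN are assumptions for illustration:

    # /etc/ietd.conf -- what OpenFiler's web GUI generates for an iSCSI export
    Target iqn.2008-11.local.storage:vtrak.lun0
        # /dev/sdb1 is an assumed name for the VTrak as Linux sees it
        Lun 0 Path=/dev/sdb1,Type=fileio
        MaxConnections 1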

TahoeTech
Contributor

Wow, I had just come back here to ask that VERY question...

Currently I have a VTrak 15100 (a 15-bay SATA enclosure with 1TB drives = 15TB) hooked up via U320 SCSI to a Windows Server 2003 machine, where it shows up as a 15TB E:\ drive.

So, with your suggestion above (and my research), which of these options is the best or correct course of action?

OPTION 1:

My VTrak IS compatible with Linux, so I could wipe the Server 2003 OS and install OpenFiler. In OpenFiler the 15TB would show up as a block device (/dev/sda2 or whatever Linux calls it) instead of E:\. I could then use the OpenFiler web management GUI to publish that 15TB, essentially turning the 1U server into an iSCSI SAN, correct?

OPTION 2:

Or would I have to keep the Server 2003 OS with the 15TB E:\ drive and create a VM or bare-metal OpenFiler machine that connects to the Server 2003 box (which is directly attached to the VTrak)?

OPTION 3:

What about NFS? Is that a possible scenario WITHOUT using Openfiler?

I am hoping I can go with option 1, as that seems like the solution with the least overhead. What kind of performance hit would I see if I went with option 1? I plan to use this OpenFiler SAN as the storage for all the virtual machines' OS disks (and their data) across 4 or 5 physical ESXi hosts.

Most, if not all, of the VMs will be web servers, some with heavy traffic.

I am still new to Linux, have never used OpenFiler, and only just found out about it (10 minutes ago), so forgive me if I sound like I don't know what I am talking about.

Craig_Baltzer
Expert

Yes, that's the basic idea. There is a "bare metal" OpenFiler version, basically a stripped-down Linux with OpenFiler pre-installed, that may already include drivers for the VTrak. If not, you could still try it by adding the VTrak Linux drivers after the fact. OpenSolaris is another option that has an iSCSI target and supports NFS, and a number of the folks around here have used CentOS as the Linux distro for their iSCSI setup. If you want to stay on Windows and have budget to spend a little $, you can look at the StarWind iSCSI target from RocketDivision.
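
Whichever distro you pick, it's worth confirming the kernel actually sees the U320 HBA and the array before layering anything on top. A quick sanity check might look like this (the device name is an assumption):

    # Does Linux see the SCSI adapter and the VTrak LUN?
    cat /proc/scsi/scsi       # the Promise VTRAK should be listed as a direct-access device
    dmesg | grep -i scsi      # confirm the U320 adapter driver loaded
    cat /proc/partitions      # the array should show up as a large block device, e.g. /dev/sdb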

Keep in mind that the largest single extent (i.e. a LUN) that ESX can deal with is 2TB, so even though you've got 15TB you'd need to present it in 2TB chunks (from the ESX side you'd have 8 VMFS datastores, or you could use multiple extents to stitch the 2TB chunks back together into a larger VMFS volume). Using multiple extents in a single volume is not usually recommended, as the loss of a single extent can mean the loss of the entire volume, plus potential performance issues (disk "hot spots"). VMware's configuration-maximums document is a handy reference to the limits that can be supported under ESX.
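
One way to do that carving on the storage server is LVM: split the 15TB into sub-2TB logical volumes and export each one as its own LUN. A rough sketch, with device and volume names assumed:

    # Slice the 15TB array into <2TB logical volumes, one per exported LUN
    pvcreate /dev/sdb                        # the VTrak as Linux sees it (assumed name)
    vgcreate vtrak /dev/sdb
    for i in 0 1 2 3 4 5 6; do
        lvcreate -L 1900G -n lun$i vtrak     # stay safely under the 2TB extent limit
    done
    lvcreate -l 100%FREE -n lun7 vtrak       # the ~1TB remainder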

As far as performance goes, it really depends on the VTrak and the server you're using. The SATA drives will be slower than, say, SAS drives (7.2K vs. 15K RPM), and the network will be slower than, say, fibre channel; however, a fair number of people are using iSCSI/NFS with quite satisfactory results. The best thing you can do is set up the configuration and then do a bit of testing with something like IOMETER to get an idea of IOPS, transfer rates, etc...
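
If you want a quick smoke test before setting up IOMETER (which is what will give you real IOPS and latency numbers), a crude sequential check from the storage server itself could be something like this (path assumed):

    # Crude sequential write/read test -- no substitute for IOMETER's IOPS numbers
    dd if=/dev/zero of=/mnt/vtrak/testfile bs=1M count=4096 oflag=direct
    dd if=/mnt/vtrak/testfile of=/dev/null bs=1M iflag=direct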

Erik_Zandboer
Expert

Might be a cool solution... OK, here it goes. I have a setup that looks very much like yours: two whiteboxes, both with a parallel SCSI controller, and an EONstor, which is an 8-bay SATA drive box with two channels on the back, just like yours! The EONstor can present LUNs on either of the two channels, or on both.

So I have two boot LUNs, each visible to only a single ESX node, while all other LUNs are visible to both nodes. That is the shared storage, and it works! Even VMotion works. I have no internal hard disks in the nodes. Take care, though: it is totally unsupported.

So basically your storage box needs two SEPARATE channels on which you can show and/or hide LUNs, and each of your (two) nodes needs a parallel SCSI interface (preferably U160 or U320); then you can try to get it to work!
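
If you try it on ESX hosts with a service console, you could sanity-check the masking per node with something like this (the adapter name is an assumption; ESXi would need the remote CLI equivalents):

    # Rescan the parallel SCSI adapter and list what this node can see
    esxcfg-rescan vmhba1      # vmhba1 is an assumed adapter name
    esxcfg-mpath -l           # shared LUNs should appear on both nodes; each boot LUN on only one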

Visit my blog at http://www.vmdamentals.com
Craig_Baltzer
Expert
Accepted solution

So, taking your options in turn...

  1. This is probably the best option from a performance perspective. It may take a little while to get the Linux drivers sorted out if they aren't already in the OpenFiler bare-metal package.

  2. This option might work; however, the multiple hops to the storage (i.e. ESX -> OpenFiler -> W2K3 -> storage) won't likely give you the best performance, and there are lots of moving parts to complicate things. I'd try to avoid this one if possible; if you want to stick with Windows to drive the VTrak, look at StarWind from RocketDivision as an alternative to OpenFiler/Linux in option #1.

  3. NFS from W2K3/W2K8 is possible as well. There have been a few anecdotal accounts of trying it and finding the performance lacking, but you may find it "good enough" for your needs.

Of all the options, #1 is likely to give you the best performance, and you could do it with OpenFiler, Windows + StarWind, OpenSolaris, or another Linux distribution of your choice...

TahoeTech
Contributor

What about NFS? I understand how I would set up and connect everything if I chose to go iSCSI (using option 1 above), but how would NFS work with my setup? Would I use OpenFiler again and just choose NFS instead of iSCSI?

I guess what I am now wondering is: how can I make my infrastructure as fast as possible with the hardware I already own?

What is the difference between the two choices? Performance-wise, is iSCSI or NFS the better solution?

Also, you mentioned 2TB chunks... How would that work when creating VMs? Would I have a pool of, say, 7 x 2TB chunks and 1 x 1TB chunk to allocate to VMs? (I haven't installed ESXi / VirtualCenter 2.5 or the client yet, so I don't know what to expect or how I will control everything.)

And a side question (maybe I should start a new thread?): is ESXi a better choice for me than ESX Server 3.5?

Really appreciate the help! Thanks!

Craig_Baltzer
Expert

You'd need "something" to present the disks from the VTrak over NFS; that could be Windows 2003 R2/2008 (built-in NFS services), OpenFiler, FreeNAS, pretty much any Linux distro, OpenSolaris, etc. Lots of options there.
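
On the Linux side, for example, the moving parts are just an export plus a mount from ESX. A minimal sketch; the IPs, export path, and datastore label are assumptions:

    # /etc/exports on the storage server -- ESX needs root access, hence no_root_squash
    /exports/vmfs  192.168.1.0/24(rw,sync,no_root_squash)

    # On the ESX side (or via the remote CLI for ESXi), mount the export as a datastore
    esxcfg-nas -a -o 192.168.1.10 -s /exports/vmfs nfs-vtrak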

From a performance perspective, it depends on the implementation, I think. There are folks who feel they get great performance with NFS, and others who feel the same about their iSCSI implementation. The great thing about the approach you're considering is that it can be set up for both iSCSI and NFS, and then you can do some benchmarking to see which performs better for your particular configuration. Personally I use iSCSI: in my configurations it provides good performance, and it "looks" like any other SAN disk, so I can treat it the same as storage coming from a fibre channel SAN, create RDMs, move storage between physical machines and VMs, and so on.
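
To illustrate that last point: an RDM is just a small mapping file in a datastore that points at the raw LUN, so the VM does its I/O straight to the iSCSI disk. A sketch only; the device path and file names are assumptions:

    # Create a virtual-compatibility RDM backed by the iSCSI LUN (names assumed)
    vmkfstools -r /vmfs/devices/disks/vmhba32:0:0:0 /vmfs/volumes/vtrak-ds1/db01/db01-rdm.vmdk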

For the 2TB chunks, if you make them individual volumes, think of them as just a set of "drive letters" under Windows. When you create a VM, you allocate storage to it (i.e. create one or more virtual disks) and pick a datastore to house each virtual disk. Unless you are going to have one VM that needs more than 2TB on a single disk (unlikely unless you're running W2K8 and using GPT disks to break the 2TB partition limit), having multiple 2TB datastores works well, and it also helps you manage things like SCSI reservation conflicts (if all your VMs are on a single volume, you may start to see performance issues from SCSI reservations as the number of VMs grows).
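
In practice you'd do this from the VI Client, but underneath it's just creating a .vmdk inside whichever datastore you pick, e.g. (datastore and VM names assumed):

    # Create a 100GB virtual disk for a VM inside one of the 2TB datastores
    vmkfstools -c 100g /vmfs/volumes/vtrak-ds1/web01/web01.vmdk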

With ESXi you have the option of using it for free (forever), whereas with ESX you have to buy it after the evaluation period is over. Keep in mind you just get "basic" virtualization with the free ESXi; if you want the "fancy stuff" like management via VirtualCenter, VMotion, HA, DRS, etc., you'll need to buy licenses and will wind up at exactly the same cost as ESX. ESXi doesn't have the service console, so there is less to patch, and on supported hardware it can be installed on and booted from USB. ESX has the service console, so you can (in a fully supported fashion) run configuration, monitoring, troubleshooting, etc. from the ESX server console itself. So "better" really comes down to the features you need and how much budget you have to spend...