This is also something I am researching. We are planning to move our four servers onto one ESX server (a DC/file-and-print server, a mail server, a SQL server, and a secondary file-and-print server).
We are a print house with 24 employees, so the load should be fine, but we deal with a huge amount of data from a lot of clients. We want to move to a solution where our entire archive is always online (4.6 TB at the moment).
We are considering 1.5 TB of RAID 5 SAS for virtual disks and current projects plus 8x 1 TB drives as archive. We would need two RAID cards to do this, so once you do the math a DroboPro is in the same ballpark. Or we could get a DroboPro and put everything on it, virtual disks and storage. Or a couple of 1 TB SATA drives for the virtual disks and a DroboPro for all project data.
Anyone have any ideas?
P.S. sorry for hijacking the thread.
Did you have a chance to try out the DroboPro's iSCSI with ESX?
I recently bought a DroboPro and hooked it up to ESXi. Read performance was good, about 85-100 MB/s, but writes to the Drobo were only about 5 MB/s. I was able to speak with one of the Drobo guys, and he said the VMFS block size needs to be 8 MB, which helps with speed, but I noticed only a slight improvement. They will be releasing a best practices guide in a few weeks, but he did say that for now I can format the Drobo as NTFS, go to the VM, and add a hard disk as RAW. That should help, since the problem could be how ESX writes to the DroboPro. They are in the process of officially supporting ESXi, but he did say that in their tests the write times were slower than the read times. He was not sure how much slower, but said he would not be surprised if it were half.
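For anyone wanting to try the RAW-disk workaround mentioned above, the mapping is done with vmkfstools on the ESX host. This is only a sketch: the device and datastore paths below are hypothetical placeholders, and exact flags can vary between ESX releases.

```shell
# Hypothetical sketch of creating a raw device mapping (RDM) so a VM
# writes to the DroboPro LUN directly instead of through VMFS.
# Device and datastore paths are placeholders -- substitute your own.

# List the iSCSI devices the host sees, to find the DroboPro LUN:
ls /vmfs/devices/disks/

# Create a physical-compatibility RDM pointer file on an existing
# VMFS datastore (-z passes SCSI commands straight through):
vmkfstools -z /vmfs/devices/disks/vmhba33:0:0:0 \
    /vmfs/volumes/local-datastore/drobo-rdm/drobo-rdm.vmdk

# Then add drobo-rdm.vmdk to the VM as an existing hard disk; the guest
# formats the disk itself (e.g. NTFS), bypassing VMFS for data writes.
```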
The Drobo is fast when hooked up to a Windows 2003 Server (NTFS or FAT32) or a Mac (FAT32): a constant 100 MB/s read and 90 MB/s write. I really think it is VMFS that is causing the problem.
I am also seeing unacceptable performance when using the DroboPro with ESX, around 5 MB/s or less write speeds. Quite frankly this is pathetic; an IDE drive from the late 90s would be faster. However, using the Microsoft iSCSI initiator the performance is exceptional. In your post you mentioned adding the drive as RAW; are you referring to using the drive as attached storage rather than iSCSI? Curious whether you have had any success with performance tuning on the DroboPro.
At first I tried using the DroboPro before the latest firmware update (1.1.3), but after updating the Drobo I set up a few 2 TB LUNs (one for each VM) with 8 MB block sizes and it works much better now. I moved two VMs from the local RAID of 10K SCSI drives to the Drobo and was getting about 120 Mbps file transfer speed, which is really great. When I copied a 1 GB file, it took less than a minute. I now have the two VMs running on the DroboPro and they do run slower, but I assume it is the same latency one would expect from a more expensive iSCSI NAS. I am not sure how many VMs can run on it at once, though. I wish it had another iSCSI port, because that could help with speed. If you go to Drobo's website, they now have a best practices guide for running it with ESX, since they are VMware certified. The firmware update made the difference for me.
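For reference, the 8 MB block size is chosen when the LUN is formatted as VMFS. A sketch of doing this from the ESX service console; the device path and volume label are hypothetical placeholders, and on VMFS3 the block size can only be set at format time.

```shell
# Hypothetical sketch: format a DroboPro LUN as VMFS3 with an 8 MB
# block size (-b), which raises the maximum file size to 2 TB.
# The partition path and label are placeholders.
vmkfstools -C vmfs3 -b 8m -S drobo-lun0 \
    /vmfs/devices/disks/vmhba33:0:0:0:1
```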
The DroboPro I am working with has the latest firmware version on it. Which version of ESX are you running the DroboPro with? If I could see 120MB/sec I would be thrilled.
I am running version 3.5. Did you follow the best practices guide on their website? I know the block size makes a big difference. I also heard that the more drives in the Drobo, the better: I have 4 drives in it right now, but supposedly performance would be better with at least 6. I am not seeing 120 MB/s on the ESX performance chart; I calculated that by transferring 1 GB and 120 GB files onto the Drobo from ESX and timing them. ESXi only shows about 5000 KBps for some reason.
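For what it's worth, the hand-timing method above makes it easy to mix up units (Mbps vs MB/s, plus the KBps the ESX chart reports), which may explain the conflicting numbers in this thread. A small sketch of the arithmetic, using hypothetical numbers rather than the timings reported here:

```python
# Compute sustained throughput from a timed file copy.
# The file size and elapsed time are hypothetical examples,
# not measurements from this thread.

def throughput(bytes_copied: int, seconds: float) -> float:
    """Sustained throughput in megabytes per second (MB/s)."""
    return bytes_copied / seconds / 1_000_000

size = 1 * 1000**3    # a 1 GB test file
elapsed = 8.5         # measured wall-clock seconds (hypothetical)

mb_s = throughput(size, elapsed)
print(f"{mb_s:.1f} MB/s = {mb_s * 8:.0f} Mbps")  # megabytes vs megabits
```

Note that roughly 117 MB/s is about 941 Mbps, i.e. a saturated gigabit link, which is why a single-GbE iSCSI target can never report much more than ~120 MB/s.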
Well shoot. As you can see from my previous posts, I was also getting about 5 MB/s throughput on the DroboPro with ESX. However, after messing around with the volumes on the DroboPro, re-seating all of the drives, reformatting all of the drives, and fighting just to get it to respond as it did prior to a hard fault of the ESX host while writing to the DroboPro (a fresh ESX installation, by the way), I am now getting 50-55 MB/s throughput. Now I have to take a step back and try to figure out how I fixed it. Quite frankly, I am a little wary of that type of resolution. When will it rear its ugly head again? When I need it most?
For those who are wondering, in my opinion the DroboPro is not quite ready for a production VMware environment. I saw performance anywhere from 5 MB/s to 55 MB/s with no change in configuration while cloning a VM from an EqualLogic to the DroboPro. Given that the DroboPro is VMware certified, I had high hopes for an inexpensive and solid iSCSI target when I purchased it. For those who plan to use the DroboPro with the Microsoft iSCSI Software Initiator, I was fairly impressed with the product, both in performance and ease of configuration.
In case someone at Data Robotics reads this post, here are my thoughts on enhancing the DroboPro:
1. Add an option to create a volume without a filesystem type. Although it's not really an issue, formatting it as NTFS gets confusing when the intent is to add the LUN to VMware and format it with VMFS.
2. Create a web interface for configuring the DroboPro for those that intend to use the device as an iSCSI target. Keep the Drobo Dashboard for USB/Firewire use.
3. Enable security features such as ACLs and CHAP.
4. Provide some way of knowing what the DroboPro is currently doing. I realize the engineers want the backend of the DroboPro locked down, but a simple status indicator would be nice while we wait 15 minutes for the DroboPro to figure out what it is doing.
I am actually disappointed that I have to return the DroboPro. I would like to spend more time troubleshooting, but work duties make the "calls" and I need a functioning, simple iSCSI target yesterday. For those who are wondering, I repurposed an older Dell server with some external eSATA storage and installed the open-source OpenFiler product. Performance is pretty good with OpenFiler. If you want more robust SAN capabilities, try OpenSolaris and COMSTAR; just keep in mind that OpenSolaris is picky about hardware, and performance is DIRECTLY affected by the hardware you choose.
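For anyone curious about the COMSTAR route, the basic flow is to carve a ZFS volume, register it as a SCSI logical unit, and expose it over iSCSI. This is only a hypothetical sketch: pool and volume names are placeholders, the LU GUID is truncated, and exact commands vary between OpenSolaris builds.

```shell
# Hypothetical sketch: expose a ZFS volume as an iSCSI target with
# COMSTAR on OpenSolaris. Names and sizes are placeholders.

svcadm enable stmf                          # start the COMSTAR framework
svcadm enable -r svc:/network/iscsi/target  # start the iSCSI target service

zfs create -V 500G tank/esxlun0             # carve out a 500 GB zvol
stmfadm create-lu /dev/zvol/rdsk/tank/esxlun0   # register it as a SCSI LU
stmfadm add-view 600144F0...                # expose the LU (GUID truncated)
itadm create-target                         # create a default iSCSI target
```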
That is it for now... thanks for the original post.