VMware Cloud Community
nuttervm
Contributor

Support for PCIe SSD?

Has anyone tried to get one of the OCZ Technology RevoDrive PCIe SSD devices working on ESXi 4.1? (e.g. http://www.newegg.com/Product/ProductList.aspx?Submit=ENE&DEPA=0&Order=BESTMATCH&Description=revodri...) I have seen other people expressing similar sentiments via Google searches, but have never seen anyone say they tried it and are using it successfully.

I know it isn't on the HCL yet, but I wonder if anyone has taken a chance and given it a shot. I would LOVE to put one of these in my server to host the VMs that require the best performance, and/or use FreeNAS with this drive as a read/write cache on a ZFS volume of spinning media.

Similarly, does anyone know if support for it is on the roadmap for a future patch release?

For the search engines:

OCZ RevoDrive OCZSSDPX-1RVD0050 PCI-Express x4 50GB PCI Express MLC Internal Solid State Drive (SSD): http://www.newegg.com/Product/Product.aspx?Item=N82E16820227596&cm_re=revodrive-_-20-227-596-_-Product

11 Replies
DSTAVERT
Immortal

Without a specific reference in the HCL it would be a try-and-see. Even if it worked, would it be reliable? Since it would need device support (a driver) in ESXi, I would look through the documentation to see what type of controller it appears as to the hardware and to an OS. It would be worth a support call to OCZ; see if you can get them interested in testing it out.







-- David -- VMware Communities Moderator
nuttervm
Contributor

I believe it is a SandForce controller in a striped RAID that is hidden from the user, but I need to verify that. I saw similar questions on OCZ's forums asking if it worked in VMware, but there are no answers.
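
One low-risk way to verify that before buying: boot the card in any Linux box and dump the numeric PCI IDs, which is also what any ESXi driver mapping would key on. A minimal sketch, assuming a live CD with pciutils installed; the filter terms are guesses:

```python
#!/usr/bin/env python3
# Sketch: identify what the RevoDrive enumerates as, using lspci.
import subprocess

# -nn prints both the device name and the [vendor:device] ID pair,
# which is exactly what an ESXi driver map would need.
out = subprocess.check_output(["lspci", "-nn"]).decode()

for line in out.splitlines():
    # Look for anything RAID/SATA-ish; the RevoDrive reportedly shows
    # up as a Silicon Image SATA controller, but verify on real hardware.
    if "RAID" in line or "SATA" in line:
        print(line)
```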

Your idea to give their support line a call is a good one and probably the best option we have if no one speaks up and says they've tried it. I was hoping there would be at least one maverick with perhaps a bit too much cash who gave it a shot :)

For those who are just curious: these drives are actually much faster than your "standard" SAS/SATA-interface SSD devices. Check out the performance specs and you will be impressed. It's not just a normal SSD with a different interface for no good reason.

DSTAVERT
Immortal

"I was hoping there would be at least one maverick with perhaps a bit too much cash who gave it a shot :)"

You could be that person ;)







-- David -- VMware Communities Moderator
kbotc
Contributor
Accepted Solution

I will be that person:

It does not work in 4.1. You can make it work via some unsupported hacking and loading an unsupported driver from the Linux kernel. Keep in mind that it is really a software RAID, so you will end up with several smaller drives appearing in ESXi rather than a single large, fast drive. I also had an issue with one of the drives disappearing (probably a hardware failure, but it is still worth knowing anecdotally).
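
kbotc doesn't spell out the method, but the community approach known at the time was repacking the ESXi "oem.tgz" overlay with a vmkernel-ported driver plus a PCI ID mapping. A very rough sketch of the packaging step, entirely unsupported; the module name (sata_sil24.o), the PCI ID (1095:3124, Silicon Image SiI 3124), and the file layout are all assumptions to check against lspci -nn and your own ESXi build:

```python
#!/usr/bin/env python3
# Sketch: build an oem.tgz overlay that maps a PCI ID to a driver
# module so the vmkernel binds it to the card at boot.
import os
import tarfile

STAGING = "overlay"
# Format: vendor:device  subvendor:subdevice  class  module
MAP_LINE = "1095:3124 0000:0000 storage sata_sil24\n"

os.makedirs(os.path.join(STAGING, "etc/vmware"), exist_ok=True)
with open(os.path.join(STAGING, "etc/vmware/simple.map"), "w") as f:
    f.write(MAP_LINE)

# The driver itself must already be built against the vmkernel ABI and
# dropped into overlay/mod/ -- a stock Linux .ko will not load.
with tarfile.open("oem.tgz", "w:gz") as tgz:
    for rel in ("mod/sata_sil24.o", "etc/vmware/simple.map"):
        tgz.add(os.path.join(STAGING, rel), arcname=rel)

print("Copy oem.tgz over the stock one on the boot volume and reboot.")
```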

DSTAVERT
Immortal

The best use of something like this would be on a supported platform, as shared storage or to augment storage as a cache, etc.

Thanks for posting your experience.

-- David -- VMware Communities Moderator
nuttervm
Contributor

It's too bad that they don't support it yet and that it doesn't work cleanly even via custom Linux drivers. I knew these devices were essentially SSD RAID arrays, but I thought the firmware handled all that and abstracted it from the consumer/user. I didn't know they were software "fakeraid", but they can definitely still be useful...

You can easily couple this kind of storage as a ZFS read/write cache that sits in front of a large spinning-media array. We are doing this in our lab (but with a simple SATA SSD instead of one of these screamers) and the performance is absolutely awesome.
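
For anyone wanting to replicate that setup, a minimal sketch of the pool commands, assuming a FreeNAS-style box with the zpool tool on PATH; the pool and device names are placeholders. Note that ZFS splits "read/write cache" into two roles: cache (L2ARC) accelerates reads, while log (SLOG) only absorbs synchronous writes.

```python
#!/usr/bin/env python3
# Sketch: attach an SSD to an existing pool as read cache and sync log.
import subprocess

POOL = "tank"            # pool of spinning disks (placeholder)
L2ARC_DEV = "/dev/da2"   # SSD partition for the read cache
SLOG_DEV = "/dev/da3"    # SSD partition for the sync-write log

# Attach the SSD as a read cache (L2ARC)...
subprocess.check_call(["zpool", "add", POOL, "cache", L2ARC_DEV])
# ...and as a dedicated intent-log device for synchronous writes.
subprocess.check_call(["zpool", "add", POOL, "log", SLOG_DEV])

# "zpool status tank" should now list both under cache/logs.
subprocess.check_call(["zpool", "status", POOL])
```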

Or, of course, you can relegate this storage to a single-purpose datastore that hosts the VMs that need uncompromising performance, but I don't think I would do that these days with ZFS available.

DSTAVERT
Immortal

Adding these as locally attached storage just isn't useful in an ESX(i) environment; the lack of VM mobility is perhaps the biggest reason. It won't be long before spinning disks are relegated to nearline or offline.

-- David -- VMware Communities Moderator
nuttervm
Contributor

I agree that directly attached storage limits mobility, and that in turn limits the use cases this kind of technology can be deployed for. However, don't make the mistake of assuming that everyone running ESXi in a high-performance environment NEEDS vMotion, is running clusters, etc. My lab isn't the same as your datacenter :)

FYI, in our lab we are using a network storage appliance shared between two ESXi hosts, so your point is well taken. My hope was for a hybrid approach: SSD-cache-enhanced ZFS network storage, plus a local PCIe SSD in each host that needs uncompromising performance.

DSTAVERT
Immortal

I agree that every situation is different, but at present there are issues with SSD drives: no support for TRIM, for one, and situations where the current crop gets slower over time. In my opinion, one of the biggest reasons for virtualization is mobility. If you need absolute performance, make it physical.

-- David -- VMware Communities Moderator
dkvello
Enthusiast

You could put a couple of these in a standalone server with 10GbE Ethernet and/or 8Gb FC HBAs and plug it into your SAN for block storage or NFS/iSCSI.

Then install NexentaStor http://www.nexentastor.org/projects/site/wiki/CommunityEdition on the server and voilà: shared PCIe SSD for those demanding VMs.

I would probably go for a good server with the latest Nehalem CPU (for HW accelerated encryption/decryption)

Here's an example:

http://v-reality.info/2010/06/using-nexentastor-zfs-storage-appliance-with-vsphere/
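
A minimal sketch of the appliance-side steps this suggests, using raw ZFS commands rather than the NexentaStor GUI; the device names, dataset name, and datastore label are placeholders:

```python
#!/usr/bin/env python3
# Sketch: pool the PCIe SSDs and export a filesystem over NFS for ESXi.
import subprocess

def sh(*args):
    subprocess.check_call(list(args))

# Stripe the two PCIe SSD devices into one pool (no redundancy here;
# mirroring or SvSAN-style replication would address that).
sh("zpool", "create", "ssdpool", "/dev/da1", "/dev/da2")
sh("zfs", "create", "ssdpool/vmstore")
sh("zfs", "set", "sharenfs=on", "ssdpool/vmstore")

# On each ESXi 4.x host, the share is then added as an NFS datastore,
# e.g.: esxcfg-nas -a -o <appliance-ip> -s /ssdpool/vmstore ssd-ds
```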

fabianl
Contributor

I see support for this as being really useful in the following scenario:

A 2- or 3-host vSphere cluster with a virtual SAN such as SvSAN.

One of these cards in each of two hosts, with 10GbE for data replication between them.

Outcome:

Replicated SAN storage on uber-fast SSD without the need to buy a proper SAN.

Total cost: ~$10,000 for roughly 1TB:

2x $3,000 for the 960GB version of the OCZ cards

1x $2,000 SvSAN HA license

2x $300 10GbE network cards

I am actually contemplating doing this with regular enterprise SSD drives, as my servers only have 6 drive bays and I need 2,000 IOPS and 1TB of storage to host several server VMs and about 30 virtual desktops.

I could possibly do this with 6x 200GB SSDs in each server.
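
A quick back-of-envelope check on that 6-bay option: usable capacity swings a lot with RAID level, and all the figures below are assumptions rather than vendor specs.

```python
#!/usr/bin/env python3
# Sketch: does 6x 200GB SSD meet a 1TB / 2,000 IOPS requirement?
drives = 6
per_drive_gb = 200
required_gb = 1000
ssd_read_iops = 10000  # conservative guess for a SATA-era SSD

layouts = {
    "RAID5":  (drives - 1) * per_drive_gb,  # one drive of parity
    "RAID6":  (drives - 2) * per_drive_gb,  # two drives of parity
    "RAID10": drives // 2 * per_drive_gb,   # mirrored pairs
}

for name, usable_gb in layouts.items():
    ok = "meets" if usable_gb >= required_gb else "misses"
    print(f"{name}: {usable_gb} GB usable -> {ok} the 1 TB target")

# Even one SSD comfortably clears 2,000 IOPS on reads; the open
# questions are sustained write IOPS and, as noted below, long-term
# behaviour without TRIM.
print(f"Aggregate read IOPS (naive): {drives * ssd_read_iops}")
```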

I am just concerned about the long-term viability, due to the lack of TRIM support and potentially no idle time for garbage collection on the SSDs.

Anyone have any views on this?
