VMware Horizon Community
DarrenBull
Contributor

Anyone use SSD SAN for View (evaluating Whiptail xlr8r)

Hi,

We're looking at running our View environment on an SSD SAN, specifically a 1.5 TB Whiptail XLR8r appliance. It deduplicates, so we should easily be able to fit our environment onto it at very high speed. However, it's not certified by VMware (although Whiptail can point me to people who are using it successfully). It is Citrix certified, though, and the performance results have been impressive.

So, has anyone used a Whiptail, or does anyone have general comments on SSD SANs for View?

Cheers

22 Replies
jimrmclean
Contributor

Currently, we keep our replicas and linked clones (C drive) on SSD.  For some pools, we also keep the persistent disks there and for some pools we just keep the persistent disks on regular SAN storage.  VDI on SSD is a different animal than VDI on traditional storage.  We found that it's not just the raw number of IOPS that these things can handle, but the fact that latency is something like 0.1 ms (worst case) versus 5-8 ms for normal disk.  Latency kills VDI, so when latency is gone, those virtual desktops perform better and respond more quickly than most physical desktops.
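The latency point above can be made concrete with a quick back-of-envelope calculation, using the figures from this post (roughly 0.1 ms worst-case SSD latency versus 5-8 ms for spinning disk) and an assumed burst of dependent I/Os, such as an application launch:

```python
# Back-of-envelope: time to complete a burst of dependent I/Os serially.
# Latency figures are the ones quoted above; the burst size is an assumption.
ssd_latency_ms = 0.1    # worst-case SSD latency from the post
disk_latency_ms = 8.0   # high end of the 5-8 ms range for normal disk
burst_ios = 200         # hypothetical app launch issuing 200 dependent I/Os

ssd_time_ms = burst_ios * ssd_latency_ms
disk_time_ms = burst_ios * disk_latency_ms

print(f"SSD:  {ssd_time_ms:.0f} ms")   # 20 ms
print(f"Disk: {disk_time_ms:.0f} ms")  # 1600 ms
```

Even before IOPS limits come into play, the same burst that feels instant on SSD takes over a second and a half on disk, which is why desktops on SSD feel snappier than physical machines.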

As far as cost goes, you're best off contacting the vendor, since the price we paid may be more (or less) than what you are offered.  It is pricey since it is SSD, but when you compare what it would cost on traditional storage to get the same kind of I/O, it suddenly seems relatively inexpensive.  Combine that with the small form factor (2U), the fact that it draws less than 400 W, and that it doesn't generate much heat, and suddenly it makes no sense at all to buy 14+ shelves (x 24 drives each) worth of SAS drives to try to get close to the IOPS this thing can generate.
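For a rough sense of the shelf comparison above: the per-drive IOPS figure below is an assumed rule of thumb (roughly 175 random IOPS for a 15K SAS drive), not a number from this thread.

```python
# Rough spinning-disk IOPS math behind the "14+ shelves" comparison.
iops_per_sas_drive = 175   # assumed rule of thumb for a 15K SAS drive
drives_per_shelf = 24      # shelf size from the post
shelves = 14               # shelf count from the post

total_iops = iops_per_sas_drive * drives_per_shelf * shelves
print(total_iops)  # 58800
```

That's 336 drives, plus the rack space, power, and cooling to run them, to approach what a single 2U SSD appliance can deliver.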

Definitely do your homework and compare some of the top vendors in the SSD market.  There has been a lot of innovation and many new products brought out in the last 12 months, so it would be wise to understand the differences before rushing to buy a specific solution.  We learned the hard way.

rabbiw
Contributor

Hello, we have just completed our evaluation of the Whiptail XLR8r. We use NetApp for VDI storage for 300 users and had a few issues with I/O on the NetApp: the number of 15K disks required to support the necessary I/O was cost prohibitive. As a short-term solution we went with NetApp Flash Cache to limit the read I/Os, buying us more time to investigate a better strategic solution.

We evaluated the Whiptail appliance under production load, running workloads we would not even attempt on the standard disk-based storage, such as continual reboots and a scheduled antivirus scan of all drives on all VDI desktops, all the while measuring the user experience in real terms. I am glad to say there was minimal effect on the user experience. The same test on the NetApp caused the VDI desktops to be unavailable for hours while they all competed for I/O.

Our strategy at the next refresh will be a tactical solution for VDI storage using Whiptail, with general file and database storage on NetApp. I have had a lot of discussions with NetApp about the upcoming mixed SSD/HDD volumes and aggregates, and this looks quite interesting. For now we will proceed with a mixed infrastructure of Whiptail and NetApp.

Jaggy201110141
Contributor

Hi All,

Just got my new Whiptail storage, and it is very impressive on day one. I ran an I/O test using Iometer to generate storage load on both the Whiptail and NetApp (SAS + 512 GB Flash Cache) arrays. The simulated traffic was similar to the expected VDI load: 80% writes and 20% reads, based around 4K random I/O. The Whiptail was able to generate 69,100 IOPS, compared to the NetApp 3240, which generated only 33,100.
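To put those numbers side by side, here is a quick sketch that applies the 80/20 write/read split and the 4K block size from the test to each reported total (the IOPS figures are the ones quoted above; the per-operation split and throughput are derived from them):

```python
# Workload profile from the Iometer test above: 4K random, 80% write / 20% read.
block_size_kb = 4
write_pct, read_pct = 0.80, 0.20

# Total IOPS as reported in the post.
results = {"Whiptail XLR8r": 69100, "NetApp 3240": 33100}

for array, iops in results.items():
    writes = iops * write_pct           # estimated write IOPS
    reads = iops * read_pct             # estimated read IOPS
    mb_per_s = iops * block_size_kb / 1024  # rough throughput at 4K blocks
    print(f"{array}: {iops} IOPS ({writes:.0f} w / {reads:.0f} r), ~{mb_per_s:.0f} MB/s")
```

At a 4K block size that works out to roughly 270 MB/s versus 129 MB/s, a bit over a 2x advantage for the SSD array on this write-heavy profile.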

I would say go for Whiptail and get a smaller NetApp/EMC for CIFS.
