VMware Cloud Community
codypetry
Enthusiast

Decided on: Compellent SC4020 all-flash array

I've decided to go with an all-flash Compellent array.  I got a great deal on an SC4020 with 1 x 6-pack of SLC and 3 x 6-packs of eMLC (32 TB raw)!

If Dell can keep offering other people the deal they gave me, none of these flash start-up companies stand a chance; not even remotely.  They are the only ones who even offer SLC.

I love how the array works. Everything lands in Tier 1 first, as RAID 10 on SLC, so the SLC absorbs all of the writes.  The system can then organize the data and use a proper wear-leveling algorithm to move it down into eMLC.

Originally I really wanted to go NFS so I could get per-VM IOPS/metrics.  However, that's a moot point if the array can kick out 300K IOPS.

I've been using FC for years and am sticking with it; I'll just upgrade the switching to 8 Gbps.  The array supports 16 Gbps Fibre Channel, but I don't want to incur the expense of 16 Gbps switching and mezzanine cards.

I should have this new all-flash Compellent in our shop and online in 2-3 weeks, and I'll post a small review once I get it up.  I've been using HDS for years, so this will be my first non-Hitachi array.

6 Replies
jedijeff
Enthusiast

That is a nice system. I have 3 SC8000 systems, and I'm hoping to grab one of these 4020s with all flash soon myself.

aweaver614
Contributor

When running the SC4020 (or any Compellent array) in virtual port mode, is the VMware ESXi host supposed to see 8 paths (for the 8 virtual WWNs) or only 4?

I'm currently deploying an SC4020, and pretty much the only way I can get ESXi to acknowledge that the virtual WWNs exist on the second controller is to force a volume to be mapped to the second controller.

Everything seems to work; I just don't know what the intended result would be. Eight ports spread across two fabrics with 8 virtual WWNs means, to me, that 4 virtual WWNs should be visible on fabric A and 4 should be visible on fabric B.

Does anyone know?

Phoung
Contributor

If you have 2 fault domains, usually one for each of your 2 fabric switches (I am assuming you run 2 fabrics), then you should see half of your front-end ports as paths to each LUN, i.e. if you have 16 front-end ports, you should see 8 paths to each LUN.
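
A quick way to sanity-check that from the host side, if you have a Python prompt handy in the ESXi shell, is something along these lines. This is only a rough sketch that counts the "Device:" lines esxcli prints for each path, not an official tool:

#!/usr/bin/env python
# Rough sketch: count how many paths ESXi sees per device/LUN by parsing
# the plain-text output of "esxcli storage core path list".
import subprocess
from collections import defaultdict

def paths_per_device():
    out = subprocess.check_output(
        ["esxcli", "storage", "core", "path", "list"]).decode()
    counts = defaultdict(int)
    for line in out.splitlines():
        line = line.strip()
        if line.startswith("Device:"):
            # esxcli prints one "Device:" line per path, so counting
            # them gives the number of paths per LUN.
            counts[line.split(":", 1)[1].strip()] += 1
    return counts

if __name__ == "__main__":
    for device, n in sorted(paths_per_device().items()):
        # With 2 fault domains and 16 front-end ports you'd expect 8 here.
        print("%-45s %d paths" % (device, n))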

Also be aware that Compellent recommends the Round Robin MPIO policy for your datastores. And do not forget to set the specific QLogic or Emulex HBA settings on your ESXi hosts for the Compellent; there are a couple of port-down/timeout settings needed to ensure failover works.
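
Here is roughly what those settings look like from the ESXi shell; I'm wrapping the esxcli calls in Python only for readability. The COMPELNT vendor string, the VMW_SATP_DEFAULT_AA plugin, and the QLogic parameter names are assumptions on my part, so double-check them against Dell's Compellent/vSphere best-practices guide for your release before running anything:

#!/usr/bin/env python
# Sketch of the host-side settings mentioned above -- not an official Dell script.
import subprocess

def run(cmd):
    print(" ".join(cmd))
    subprocess.check_call(cmd)

# Claim rule so newly discovered Compellent LUNs default to Round Robin.
run(["esxcli", "storage", "nmp", "satp", "rule", "add",
     "--satp", "VMW_SATP_DEFAULT_AA",
     "--vendor", "COMPELNT",
     "--psp", "VMW_PSP_RR",
     "--description", "Dell Compellent - Round Robin"])

# Existing LUNs have to be switched individually (or reclaimed after a reboot), e.g.:
# run(["esxcli", "storage", "nmp", "device", "set",
#      "--device", "naa.6000d31000xxxxxx", "--psp", "VMW_PSP_RR"])

# The QLogic/Emulex port-down and login retry timeouts go in as driver module
# parameters; the module and parameter names vary by driver version, so take
# the exact values from the best-practices doc. QLogic example (assumed names):
# run(["esxcli", "system", "module", "parameters", "set",
#      "-m", "qlnativefc", "-p", "ql2xloginretrycount=60 qlport_down_retry=60"])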

jedijeff
Enthusiast

If you ever get UNMAP to work successfully on Replays, drop me a private message!

hillapp
Contributor

Congrats! Hope it's going well with the gear.  Very underrated system.

codypetry
Enthusiast

I agree completely!  I had no idea what a dream virtual RAID groups would be.  I love how efficient this Compellent array is.  With my Hitachi system you would end up with a bunch of wasted space as non-contiguous chunks scattered all over the place, whereas the Compellent is completely virtualized in how it operates internally and actually uses the space efficiently.  UNMAP is also working just fine on my SC4020: if I delete data off a LUN, or even within a VMDK, I can see that space free up on the SAN.
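
For anyone who wants to try the same thing, this is more or less what I run from the ESXi shell to reclaim free blocks on a datastore (ESXi 5.5-style esxcli syntax; "FlashDS01" is just a placeholder label, substitute your own):

#!/usr/bin/env python
# Minimal sketch: ask VMFS to UNMAP its free blocks so the array can reclaim them.
import subprocess

DATASTORE = "FlashDS01"   # placeholder -- use your own datastore label

subprocess.check_call(["esxcli", "storage", "vmfs", "unmap",
                       "--volume-label", DATASTORE,
                       "--reclaim-unit", "200"])   # blocks reclaimed per pass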

Now I can't wait to kick all of my Hitachi systems out the door and replace them with Compellents.

This all-flash SC4020 is crazy fast too.  My latencies are consistently sub-millisecond, and my throughput from one VMDK to another (testing large file transfers) is 400-500 MBps over 8 Gbps Fibre Channel.
