VMware Cloud Community
buymycorp7
Contributor

Need to upgrade a hybrid 126TB (raw) vSAN array across a 5-host cluster

I sold my client a 5-host ESXi 6.0 U2 cluster with vSAN 6.0 in early 2016. Late last year we upgraded to ESXi 6.5 U2...but since VMware technical support had no idea how long a block-by-block vSAN 6.0 to vSAN 6.6 on-disk upgrade would take, my client has left it running the vSAN 6.0 on-disk format. I certainly can't blame them...and I would never sell hybrid vSAN again even if my life depended on it. It complicates EVERYTHING. Want to put a host running hybrid vSAN into maintenance mode? You'd better plan that out carefully and expect it to take 4+ hours.
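For context, this is roughly how I script the maintenance-mode entry today; a minimal pyVmomi sketch, with the vCenter address, credentials, and host name as placeholders. The point is the vSAN decommission mode: "ensureObjectAccessibility" only rebuilds what it must instead of doing a full data evacuation, and even then the resync is what burns the hours on hybrid.

```python
import ssl
from pyVim.connect import SmartConnect
from pyVim.task import WaitForTask
from pyVmomi import vim

# Placeholder connection details.
ctx = ssl._create_unverified_context()  # lab shortcut; use verified certs in production
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local", pwd="***", sslContext=ctx)

# Find the host to evacuate (name is a placeholder).
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "esx01.example.local")

# vSAN decommission mode: noAction | ensureObjectAccessibility | evacuateAllData.
# ensureObjectAccessibility re-protects only the components it has to,
# instead of moving everything off the host's disk groups.
spec = vim.host.MaintenanceSpec(
    vsanMode=vim.vsan.host.DecommissionMode(objectAction="ensureObjectAccessibility"))

WaitForTask(host.EnterMaintenanceMode_Task(
    timeout=0, evacuatePoweredOffVms=True, maintenanceSpec=spec))
```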

Anyway, enough venting. I want to create the two L3 VPNs between their production cluster and a new on-demand 3-host VMware Cloud on AWS SDDC, live migrate their workloads over to AWS, and then run the (long overdue) vSAN 6.6 on-disk upgrade on their production cluster. My questions are:

1. Am I forgetting anything?

2. They don't have NSX on their production cluster, but I'm planning on selling them NSX Enterprise Plus SPLA licenses (~$20/month per protected VM) prior to the live migration, since NSX is what's running on the AWS side. Any advice?

3. Windows licensing, Trend Micro Deep Security licensing, and Veeam licensing. I sold this client Windows Server Datacenter licensing with Software Assurance, Trend Micro Deep Security licensing, and, you guessed it, Veeam licensing. Since I don't intend to instantiate new Windows guests while in the AWS environment, I'm betting/hoping it is kosher to run these properly licensed Windows workloads in AWS. I'm not even going to try to migrate the Trend licenses - we'll have to roll the dice on viruses. Which leaves Veeam...which again can't migrate its licenses. So my plan is to take snapshots regularly while the VMs live in the cloud (rough sketch below) and then take a fresh full Veeam backup the minute a server is live migrated back. Please don't be shy in telling me my plan sucks.
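Here's the shape of the stopgap snapshot job I have in mind while Veeam is out of the picture; a rough pyVmomi sketch (the VM name and retention count are made up), run from a scheduler. To be clear, these would be ordinary vSphere snapshots on the SDDC's vSAN datastore rather than EBS snapshots, and a snapshot chain is not a real backup - it's just better than nothing.

```python
from pyVim.task import WaitForTask
from pyVmomi import vim

def _flatten(tree):
    # Depth-first walk of the snapshot tree, oldest first.
    out = []
    for node in tree:
        out.append(node)
        out.extend(_flatten(node.childSnapshotList))
    return out

def snapshot_vm(si, vm_name, keep=7):
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == vm_name)

    # Quiesced, no memory: filesystem-consistent for Windows guests
    # running VMware Tools.
    WaitForTask(vm.CreateSnapshot_Task(
        name="interim", description="stopgap while Veeam is unlicensed here",
        memory=False, quiesce=True))

    # Prune the oldest snapshot once we exceed the retention count
    # (removeChildren=False merges it into its child, keeping the rest).
    chain = _flatten(vm.snapshot.rootSnapshotList) if vm.snapshot else []
    if len(chain) > keep:
        WaitForTask(chain[0].snapshot.RemoveSnapshot_Task(removeChildren=False))
```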

4. Uhm...did I mention that my client has a mission-critical ~12TB file-server VM? Well, they do. Even if I could devote a 500Mbps circuit to it alone, the transfer would take roughly 50 hours. They have ESXi 6.5, which the FAQ says is all that is required for live migration. Question: has anyone live migrated such an enormous VM? Can anyone confirm that it would remain available on-prem after triggering the vMotion to AWS but before the vMotion completed, and vice versa, remain available in AWS after triggering the return vMotion? Anyone think this part of my plan sucks? If so, what would you do?
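To show my work on the 50 hours: that figure is essentially line rate for 12 decimal TB over 500Mbps, and any real-world protocol overhead pushes it longer. Quick sanity check (pure arithmetic; the 80% efficiency figure is just a guess):

```python
def transfer_hours(size_tb, link_mbps, efficiency=1.0):
    bits = size_tb * 1e12 * 8                       # decimal TB -> bits
    return bits / (link_mbps * 1e6 * efficiency) / 3600

print(f"{transfer_hours(12, 500):.0f} h at line rate")            # ~53 h
print(f"{transfer_hours(12, 500, 0.8):.0f} h at 80% efficiency")  # ~67 h
```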

Thank you.

1 Solution

Accepted Solutions
buymycorp7
Contributor

I had an enlightening call with VMware technical and sales resources. This entry may contain mistakes, but they are unintentional. First: you can't blend license types. This customer has perpetual ESXi and vSAN licensing and therefore may not use SPLA NSX. Let's set that aside for the moment.

"vSAN rebalancing" kicks in on a hybrid array (SSD cache in front of magnetic capacity drives, versus all-flash) when any capacity disk goes past roughly 80% used - in other words, when its free space drops below ~20%. Rebalancing is the endless reshuffling of the large components (up to 255GB apiece) that make up every virtual disk object on your array...and yeah, it destroys performance and is to be avoided like the plague.

In VMware Cloud on AWS, only i3 hosts are available at present, with ~8TB usable each. But instead of letting you drift into rebalancing at 80% used, the service automatically (mandatorily) adds an additional host to your cluster once free space falls to 25%. Translation: each VMware Cloud on AWS i3 host effectively gives you ~5.5TB of usable space...go over that and you'll incur an additional $8.36/hr for a new i3 host. I'll be fair and admit that the advanced vSAN features, compression and deduplication (only available on all-flash vSAN arrays, which VMware Cloud on AWS obviously is), could stretch that 5.5TB significantly. However, my client's ~12TB file server won't fit even if I devoted an entire host to it alone. Setting that issue aside...my napkin math says migrating my client's 5-host cluster to VMware Cloud on AWS would require 9 hosts, maybe 8 with the advanced vSAN features; at best that's $66.88/hr.

Having said that, our discussion turned to a different client of mine who has SPLA ESXi and therefore could use SPLA NSX. That client needs to earn their PCI RoC, so we spoke about NSX's capabilities. Just one example that stuck with me: NSX rules/policies attached to a VM are applied between the VM and its own vNIC! Think about that! Absolutely amazing. Hope this helped.
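Since someone will ask how I got to 9 hosts, here's the arithmetic as a quick sketch. The per-host figures are from the call above; the ~48TB of used capacity is my assumption (it's what a 9-host figure implies), and the dedupe/compression ratio is a pure guess.

```python
import math

RATE_PER_HOST_HR = 8.36       # $/hr per i3 host (from the call)
EFFECTIVE_TB_PER_HOST = 5.5   # usable TB before Elastic DRS adds a host (from the call)

def hosts_needed(used_tb, efficiency=1.0):
    # An on-demand SDDC starts at 3 hosts, per the cluster in my original question.
    return max(3, math.ceil(used_tb / (EFFECTIVE_TB_PER_HOST * efficiency)))

for eff in (1.0, 1.1):  # 1.1 = a modest dedupe/compression gain (guess)
    n = hosts_needed(48, eff)  # 48TB used is my assumption
    print(f"x{eff} efficiency: {n} hosts -> ${n * RATE_PER_HOST_HR:.2f}/hr")
# -> 9 hosts ($75.24/hr) raw; 8 hosts ($66.88/hr) with a little dedupe
```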
