VMware Cloud Community
MABEs
Contributor

vSAN stretched cluster limitations

Hi,

I have two questions.

1. Is the host limit of (15+15+1) hosts in a stretched cluster global, or per cluster in the vSAN environment?

2. The new vSAN 6.6 release increases the write cache to 1.6 TB. Does this mean the 600 GB write-cache limit is gone?

Thanks!

//Marcus

4 Replies
TheBobkin
Champion

Hello Marcus,

Correct, the stretched-cluster maximum configuration is currently limited to 30 data nodes plus the witness (15+15+1).

However, the maximum for a regular non-stretched cluster is 64 nodes in 6.x (6.0/6.1/6.2/6.5/6.6) and 32 nodes in 5.5:

http://www.virten.net/vmware/vmware-vsphere-esx-and-vcenter-configuration-maximums/#vsan_maximums

https://blogs.vmware.com/virtualblocks/2015/05/29/20-common-vsan-questions/
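For reference, here is a minimal pyVmomi sketch (not from the original post; pyVmomi is an assumption, and the vCenter address and credentials are placeholders) that counts the hosts in each cluster so you can check them against these maximums:

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details -- replace with your own vCenter and credentials.
ctx = ssl._create_unverified_context()  # lab use only; verify certificates in production
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="changeme",
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    for cluster in view.view:
        # 15+15 data nodes is the stretched-cluster maximum; 64 nodes for a regular 6.x cluster
        print("{}: {} hosts".format(cluster.name, len(cluster.host)))
    view.DestroyView()
finally:
    Disconnect(si)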

The usable write-cache capacity on cache-tier SSDs is still 600 GB in vSAN 6.6.

However, cache devices larger than 600 GB can still be used; the additional capacity increases the expected endurance (lifetime) of the device compared to smaller ones.

Bob

-o- If you found this comment useful, please click the 'Helpful' button and/or select it as 'Answer'. Please ask follow-up questions if you have any -o-

MABEs
Contributor

Hi Bob,

Can I create 5 vSAN clusters with 30 data nodes each within the environment?

TheBobkin
Champion

Sure, you could create 5 separate vSAN clusters with 30 nodes each.

They could be managed by the same vCenter; however, they won't share vsanDatastores. Each cluster will have its own distinct vsanDatastore.
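A minimal pyVmomi sketch along the same lines as above (again, the vCenter address and credentials are placeholders and not from this thread) that lists the vSAN datastore each cluster exposes, illustrating that every cluster gets its own:

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only
si = SmartConnect(host="vcenter.example.local",       # placeholder vCenter
                  user="administrator@vsphere.local",  # placeholder credentials
                  pwd="changeme",
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    for cluster in view.view:
        # A vSAN datastore reports summary.type == "vsan"; each enabled cluster has its own
        vsan_ds = [ds.name for ds in cluster.datastore if ds.summary.type == "vsan"]
        print("{}: {}".format(cluster.name, vsan_ds))
    view.DestroyView()
finally:
    Disconnect(si)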

Bob

Jasemccarty
Immortal

Correct!

Jase McCarty - @jasemccarty