It is appealing to deploy an SDDC and to "just use it".
While this is certainly possible, you might run into quite a few challenges.
As an example, you should verify your storage needs and set the right storage policies for your VMs.
Failure to do so can lead to increased storage consumption, which can cause additional nodes to be added at extra cost.
To avoid any storage-related challenges, my colleague Glenn Sizemore, one of VMware's Technical Marketing Architects focusing on storage in VMware Cloud on AWS, crafted the following blog article, which outlines all the important settings and explains the responsibilities in detail:
Happy reading and, as always, please feel free to reach out in case you have any further questions.
We, the Customer Success Team, are here to help you get the most out of the VMC service.
All the best,
Great article! The biggest and most important concept to get your head around as a VMware administrator is the need to take into account ALL of the storage overheads in a software-defined storage world. In traditional IT silos, the "storage team" presents the storage for the VMware administrators to consume, so by the time it reaches VMware you have, for example, 1 TB of usable space (even though the physical storage layer underpinning it all may actually be consuming 2.5 TB). With VMC on AWS, vSAN, or any SDS solution, you can no longer just say "my VM will consume 100 GB of that 1 TB": the overheads are built into the object itself, so a 100 GB vmdk may actually consume 250 GB of that 1 TB, rather than the overhead being absorbed at the physical storage layer. Utilising the right storage policies, and understanding what consumption is factored into each policy, is paramount in a software-defined storage world.
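To make that overhead concrete, here is a minimal sketch in Python using the commonly documented vSAN protection multipliers (2x for RAID-1/FTT=1, 3x for RAID-1/FTT=2, 1.33x for RAID-5/FTT=1, 1.5x for RAID-6/FTT=2). The function name and table are illustrative only; real consumption also includes checksum, metadata, and swap overheads not modelled here, so treat the numbers as approximations:

```python
# Illustrative only: approximate raw-capacity multipliers for common
# vSAN storage-policy protection schemes. Actual consumption also
# includes checksum, metadata, and swap overheads not modelled here.
POLICY_MULTIPLIER = {
    ("RAID-1", 1): 2.0,   # FTT=1, mirroring: 2 full copies of the data
    ("RAID-1", 2): 3.0,   # FTT=2, mirroring: 3 full copies of the data
    ("RAID-5", 1): 1.33,  # FTT=1, erasure coding: 3 data + 1 parity
    ("RAID-6", 2): 1.5,   # FTT=2, erasure coding: 4 data + 2 parity
}

def raw_consumption_gb(vmdk_gb: float, raid: str, ftt: int) -> float:
    """Approximate raw vSAN capacity a vmdk consumes under a policy."""
    return vmdk_gb * POLICY_MULTIPLIER[(raid, ftt)]

# A 100 GB vmdk under RAID-1/FTT=1 consumes roughly 200 GB of raw capacity;
# under RAID-6/FTT=2 it consumes roughly 150 GB.
print(raw_consumption_gb(100, "RAID-1", 1))  # 200.0
print(raw_consumption_gb(100, "RAID-6", 2))  # 150.0
```

This is why sizing against "usable" capacity alone can be misleading: the policy attached to each object decides how much raw capacity it actually draws from the cluster.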