grosas's Posts

Hi Krzysztof, there's no documentation at the moment that lists the internal validations in detail, but I can give some here. For VM validations, there are some things validated at the source:
- You already mentioned ISOs.
- HCX also looks at snapshots,
- MAC addresses,
- VMware Tools,
- Changed Block Tracking,
- encryption configurations (VM encryption / vMotion encryption),
- SCSI bus configs,
- disk persistence modes,
- VMX/hardware version,
- virtual hardware (USB, URI serial ports, floppies),
- raw device mappings,
- audio cards,
- NVMe controllers,
- guest customization inputs,
- power state,
- watchdog/precision clock devices,
- user permissions for the migration,
- disk values.
At the destination, there are several checks related to compatibility:
- hardware version,
- storage policies,
- migration type support,
- migration container connections,
- seed/checkpoint data,
- general things like "will the VM fit on the storage" and "can the VM run on the host",
- Mobility Agent (MA) health,
- vMotion and replication health at the target.
This is not exhaustive, just to give you an idea of what the validation button does. It doesn't do anything related to environmental or point-in-time performance, or point-in-time activity on the VM. In that sphere you can do some baselining and testing to understand the environment's capability to deal with VM change, and there are some alarms that will show up on the dashboard when noisy conditions are detected, but neither of those is tied to the per-migration validation function.
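If you're scripting your own wave planning, the source-side checks above can be approximated as a simple pre-flight filter. A minimal sketch, assuming a hypothetical per-VM inventory dict — the attribute names and the hardware-version threshold are illustrative, not HCX's internal API:

```python
# Hypothetical pre-flight filter loosely modeled on the source-side validations
# listed above. Attribute names and thresholds are illustrative only.

def preflight_issues(vm: dict) -> list[str]:
    """Return a list of conditions that would likely block or flag a migration."""
    issues = []
    if vm.get("mounted_iso"):
        issues.append("connected ISO/CD-ROM device")
    if vm.get("snapshot_count", 0) > 0:
        issues.append(f"{vm['snapshot_count']} snapshot(s) present")
    if not vm.get("tools_running", False):
        issues.append("VMware Tools not running")
    if vm.get("raw_device_mappings"):
        issues.append("raw device mapping attached")
    if vm.get("hw_version", 0) < 9:          # illustrative minimum, not HCX's
        issues.append(f"virtual hardware version {vm['hw_version']} too old")
    return issues

vm = {"mounted_iso": True, "snapshot_count": 2,
      "tools_running": True, "hw_version": 13}
print(preflight_issues(vm))
```

Running this against a real inventory export would let you triage the easy blockers (ISOs, snapshots) before ever pressing the validation button.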
Hi @lvaibhavt  This is the closest thing. WAF/CDN service ranges:
- connect.hcx.vmware.com uses the Imperva/Incapsula WAF: https://docs.imperva.com/howto/c85245b7
- hybridity-depot.vmware.com is backed by the Akamai CDN: https://learn.akamai.com/en-us/webhelp/origin-ip-acl/origin-ip-acl-guide/GUID-E5AD1B2B-BDA1-4C3F-87DE-B0CDBDD1E1B0.html
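If you maintain the firewall rules yourself, once you've pulled the published ranges from the pages above you can sanity-check logged IPs against them. A sketch using Python's standard ipaddress module; the CIDRs below are placeholders, not the real Imperva/Akamai ranges:

```python
# Check whether a firewall-logged IP falls inside the allowed WAF/CDN ranges.
# ALLOWED_RANGES holds placeholder CIDRs -- substitute the ranges published
# on the Imperva and Akamai pages linked above.
import ipaddress

ALLOWED_RANGES = ["192.0.2.0/24", "198.51.100.0/24"]  # placeholders

def in_allowed_range(ip: str) -> bool:
    """True if ip belongs to any CIDR in ALLOWED_RANGES."""
    addr = ipaddress.ip_address(ip)
    return any(addr in ipaddress.ip_network(cidr) for cidr in ALLOWED_RANGES)

print(in_allowed_range("192.0.2.10"))   # inside the placeholder ranges
```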
The purpose of this guide is to provide VMware HCX® best practices for a multi-cloud environment, typically consisting of an on-premises data center and VMware hybrid cloud offerings. This guide focuses on VMware Cloud™ on AWS, Azure VMware Solution, Google Cloud VMware Engine, and Oracle Cloud VMware Solution; still, the design principles can be applied to any multi-cloud architecture. This guide describes VMware HCX multi-cloud best practices and implementation considerations. Although considerable effort went into collating the best practices, some deployment scenarios may not be covered, and this guide is not intended as a comprehensive reference for implementing VMware HCX in every design. The following topics will be covered:
• VMware HCX overview
• Use cases for VMware HCX multi-cloud
• Multi-cloud connectivity and security design considerations
• VMware HCX multi-cloud site pairing and service mesh considerations
• VMware HCX workload migrations and network extension considerations
• Compatibility and interoperability considerations
• Supportability considerations
• Licensing considerations
• VMware HCX cloud-specific considerations
[Published ~ Mid 2022 in VMware Cloud Techzone - Cloud Migration]
[Author: Caio Oliveira]
To see the details of what is happening in the management plane during the operation, you can tail -f /common/log/admin/app.log on the HCX Connector and HCX Cloud Managers.
1. Across both sites you will see the NE bridge become disabled.
2a. The destination T1's gateway becomes connected.
2b. In parallel at the source, the original gateway/SVI etc. should be manually disabled (this is a coordination with your network team if you don't control the first-hop gateways).
3a. The routing configuration in the new VCF should be able to advertise the newly connected subnet very quickly.
3b. The original route also needs to age out or be removed.
Mileage varies depending on the coordination of the activities and the overall routing configuration. My recommendation is to run through the process end to end with a test network so you can understand route propagation/age-out timings in your environment.
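If you want to watch for the bridge state change without staring at the whole log, here's a small Python stand-in for `tail -f /common/log/admin/app.log | grep -i bridge`. The "bridge" keyword is illustrative; match whatever strings your app.log actually emits:

```python
# Follow a log file like `tail -f` and surface only matching lines.
# The keyword and log path reflect the discussion above; adjust to taste.
import time
from typing import Iterable, Iterator

def match_lines(lines: Iterable[str], keyword: str = "bridge") -> Iterator[str]:
    """Yield only the lines that mention the keyword (case-insensitive)."""
    for line in lines:
        if keyword.lower() in line.lower():
            yield line.rstrip()

def follow(path: str) -> Iterator[str]:
    """Generator behaving like `tail -f`: yields new lines as they appear."""
    with open(path) as f:
        f.seek(0, 2)              # jump to the end of the file
        while True:
            line = f.readline()
            if line:
                yield line
            else:
                time.sleep(0.5)   # no new data yet; poll again

# Usage on the manager appliance (runs until interrupted):
# for hit in match_lines(follow("/common/log/admin/app.log")):
#     print(hit)
```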
This document is superseded by HCX Availability Guide 1.0.
About the document: The VMware HCX Availability Guide provides information to help users understand known configurations that affect the availability of migrated virtual machines, extended networks, and VMware HCX systems. This document provides best practices for improved business continuity outcomes while using HCX. Audience: This information is for migration and cloud architects, systems administrators, and any reader with an interest in the implementation of highly available HCX deployments. It is assumed that readers have familiarity with VMware HCX, vSphere, and NSX, and have basic knowledge of the systems underpinning HCX services. [Prepared using VMware HCX 4.3.0]
The HCX Network Underlay Characterization and Performance Outcomes technical paper provides information to help HCX users understand the relationships between the network underlay and VMware® HCX. With HCX performance, various dimensions of environmental and load data need to be considered. One of those dimensions is the network underlay and the HCX performance derived from the underlay's capabilities. In this regard, characterizing an underlay for HCX means understanding whether the underlay meets the minimum HCX requirements for providing successful virtual machine migrations and network extension services, and understanding baseline performance outcomes for given underlay conditions (even with the inclusion of IPsec VPN or SD-WAN services, which were previously not supported for HCX implementations). This document attempts to put these considerations in perspective and provides guidance on how to verify whether performance is optimal for the given environment and parameters. [Prepared October 2021 with HCX 4.2] [Updated to 1.1 March 2022 - Corrections]
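As a back-of-the-envelope illustration of the kind of baselining the paper discusses: given an underlay's usable bandwidth, you can estimate how long an initial sync of a VM should take. The 70% efficiency derate below is an assumption for tunnel/TCP overhead, not a measured HCX figure:

```python
# Rough underlay sizing: estimate hours to move a VM's disks over a link.
# The efficiency derate is an assumption; real outcomes also depend on
# latency, loss, and concurrent migrations.

def transfer_hours(vm_gb: float, link_mbps: float, efficiency: float = 0.7) -> float:
    """Estimate hours to transfer vm_gb over a link_mbps underlay."""
    usable_mbps = link_mbps * efficiency          # derate for protocol overhead
    gb_per_hour = usable_mbps * 3600 / 8 / 1000   # Mbit/s -> GB per hour
    return vm_gb / gb_per_hour

print(round(transfer_hours(500, 1000), 2))  # 500 GB over 1 Gbps -> ~1.59 hours
```

Comparing an estimate like this against what you actually observe is a quick way to tell whether the underlay, rather than HCX itself, is the bottleneck.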
HCX Mobility Optimized Networking (MON) is an enterprise capability of the VMware HCX Network Extension (HCX-NE) feature. MON enables optimized application mobility for virtual machine application groups that span multiple segmented networks or for virtual machines with inter-VLAN dependencies, as well as for hybrid applications, throughout the migration cycle. Migrated virtual machines can be configured to access the internet and AWS S3 storage buckets optimally, without experiencing the network tromboning effect. This technical paper describes the HCX Mobility Optimized Networking technology in the context of VMware Cloud on AWS.
It helps to see the full error message from the HCX Cloud Manager, but this usually comes up when there is no NSX-T Overlay Transport Zone available at the destination. Specifically, the selected HCX deployment cluster in the Compute Profile has to be part of the Overlay TZ. This error can also happen if a DVS was selected instead of the NSX-T Overlay TZ in the Compute Profile screen where you select objects for Network Extension (the screen says "select DVS", so it's misleading, but it will be fixed in an upcoming release).
Hi Trevor, I'm not sure I'm understanding the question, but you can edit the HCX Manager IP address configuration in the appliance management interface (port 9443), under the Administration tab > Network Settings > General Network.
Yes. With an HCX network extension, you don't need to configure the subnets; the network extension operation creates an overlay-based broadcast domain extension in either Azure VMware Solution or VMC. Migrated VMs keep the same IP configuration and the same MAC addresses, and everything continues to communicate like business as usual. Then at some point you can unextend the network (disable the L2 bridge and shut off the on-prem gateway) to transition the entire gateway/subnet function to the cloud.
Nice! Thanks for sharing!
Recently completed this technical paper. Here's a synopsis right from the doc: the purpose of this document is to analyze HCX service behavior (as it relates to planned and unplanned service-impacting events), and to explore configuration and architecture practices to maximize availability during those types of events. The document gives a synopsis of HCX operations, and the details about those operations as they relate to service or workload availability. The following topics will be covered:
- Understanding Service Impact During Planned HCX Migration Operations
- Understanding Service Impact During Planned HCX Network Extension Operations
- Understanding Service Impact During Planned HCX Service Updates
- Understanding Unplanned HCX Service Impact Scenarios
- Considerations and Best Practices for HCX Service Availability
[VMware HCX Service Availability & Resiliency v1.0]
Sounds good. Just keep in mind this is a workaround for your testing: if you're using a network extension, you will have to manage the power states at the origin, because this operation assumes the source ceases to exist at the "disaster side". A coordinated bulk migration will more gracefully manage the states to minimize downtime.
Hi Simon, the Bulk and RAV migration workflows don't support it today; it is being considered as an enhancement. Although it's not quite a coordinated migration, as a workaround to reduce the egress cost you could use the HCX Disaster Recovery workflows to consume the original seed data and recover the VM to the original site once the changes are replicated.
Hi trevordavismsft - good to see you here! Just to let you know, there is an HCX-specific place here in the VMTN communities. Opaque networks come from NSX, so I would try to see what the network extension service is logging in app.log on the HCX Cloud side.
Hi tgrayatshi, there is a dedicated HCX place in VMTN. Did you ever work through the issue? Using VSS port groups for the vMotion Network Profile is supported. The documentation entry you mentioned is about VSS management support, which came later; support for VSS vMotion networks existed prior to that release note's existence, which is why it's not specifically mentioned.
There's a communities page for HCX that you can move this to. Does your configuration already have a Service Mesh, or only an HCX Manager?
Just wanted to mention there is now a dedicated place for HCX​ discussions in communities! What was the resolution to this issue? Looks like you may have been registering an unsupported NSX-T version.
Hi Daniel, there is a place for HCX inquiries now. Replication-Assisted vMotion (RAV) is one of the HCX migration types: it combines the VMware HBR replication engine (used for initial seeding and continuous replication of disks at large parallel scale) with the vMotion protocol (for the memory state and transaction integrity in the last phase of the migration). Feel free to break in the HCX page with any questions.