ZibiM's Posts

This is resolved in NSX-T 3.1: https://fojta.wordpress.com/2020/11/12/nsx-t-3-1-sharing-transport-vlan-between-hosts-and-edge-nodes/
Seems kinda like something for vCloud Director. It works roughly in the manner you desire: you provide complete IaaS for your customers together with full networking isolation based on NSX-V Edge or NSX-T T1. Unfortunately this is only for Cloud Providers and Telcos.
Yes, that's right. We are looking at plenty of nodes in different formats, and at setting up different Edge Clusters for different T0 roles. At the last VMworld there was a Nimish Desai session concerning Large Scale and Providers (see the landing page). Around the 34-minute mark the fun part begins: multi-tier T0 with VRF lite. This is really a maze that will be hard to get right.
Cloud provider environment with strict network separation for customers (tenants). The requirement for that seems to be either a T0 with VRF-lite sub-T0s, or separate T0s per tenant. Imagine having 100+ pairs of T0s. Anyway, I'm really looking forward to the NSX-T Implementation Guidelines for vCloud Director.
Word of mouth from last week: it will be really soon, it's in the validation phase. We need to be patient a little bit longer.
Hello. A couple of things to consider:
1. NSX-T 3.0 brought VRF-lite -> you can create one main T0 and then any number of subordinate T0s, one per VRF (tenant). Those subordinate T0s should not count toward the limit of 1 T0 per node.
2. You can create Edges based on VMs -> just create as many VM Edges as you need for your T0s.
3. You can create multiple Edge clusters -> you can, for example, have a dedicated Bare Metal Edge cluster for your VRF-enabled T0s and an Edge VM cluster for the independent T0s.
IMHO the limit of 1 T0 per node is valid mostly for enterprise workloads, i.e. the cases where the T0 really is the sole big pipe from the enterprise DC networks toward the north. As such it's crucial for the availability and performance of the whole environment. For cloud providers the T0s are really small: the majority of tenants can be small, and not many of them will be of any meaningful size.
Back to your question: I'd guess it really depends. Size of the Edges, the number of services running on them, the T0 topology (active-active or active-standby?).
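To make point 1 concrete, here is a minimal sketch of building the Policy API payloads for VRF-lite Tier-0s hanging off one parent T0. It assumes the NSX-T 3.0 Policy API shape where a VRF Tier-0 carries a vrf_config pointing at the parent's policy path; the gateway names and IDs are purely illustrative.

```python
import json

def vrf_tier0_payload(vrf_name: str, parent_t0_id: str) -> dict:
    """Build a Policy API payload for a VRF-lite Tier-0 that is
    subordinate to an existing parent Tier-0 (NSX-T 3.0+).
    IDs and display names here are hypothetical examples."""
    return {
        "resource_type": "Tier0",
        "id": vrf_name,
        "display_name": vrf_name,
        # vrf_config marks this Tier-0 as a VRF on the parent gateway,
        # so it shares the parent's Edge cluster and uplinks instead of
        # consuming its own "1 T0 per node" slot.
        "vrf_config": {
            "tier0_path": f"/infra/tier-0s/{parent_t0_id}",
        },
    }

# One payload per tenant VRF, all anchored on the same provider T0.
payloads = [vrf_tier0_payload(f"tenant-{i:03d}-vrf", "provider-t0")
            for i in range(3)]
print(json.dumps(payloads[0], indent=2))
```

Each payload would then be PATCHed to the Policy API under its own Tier-0 ID; the point is that all tenant VRFs share one parent gateway's Edge footprint.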
I participated in some VMworld sessions regarding this topic. The support you are talking about should come with vCD 10.2, and it should arrive "soon". As usual there were disclaimers and some ambiguity regarding exact timelines. My personal feeling is that it will appear in about 1-2 months. The biggest question really is how long you want to wait after GA for the usual bugs to be shaken out.
Finally, it's officially released. Thank you!
Yes, this presentation is for NSX-T only; unfortunately I cannot say much about NSX-V. Please bear in mind that NSX-V has a 1:1 relationship with vCenter, and its NSX infrastructure is not expected to be spread across 2 separate sites. In your situation I'd rather check whether it is possible to use Fault Tolerance, something like a single NSX Controller at each site with FT enabled. I don't know if this is a supported scenario, though.
Hello. First of all, are you asking about NSX-V or NSX-T? Which version? In regard to NSX-T, please check Dimitri Desmidt's presentation about NSX design for multisite. In short, splitting NSX components between just 2 sites is not recommended. You either keep everything on one site with metro-cluster ability to fail over to the 2nd site, or you go with the design that spreads NSX controllers across 3 sites (1 per site) with an external load balancer in front.
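The reason 2 sites are not enough comes down to majority quorum: with 3 controllers, one of the two sites must host 2 of them, and losing that site loses quorum. A tiny sketch of that argument (site names are made up):

```python
def survives_any_single_site_failure(placement: dict, total: int = 3) -> bool:
    """Check that a controller placement keeps majority quorum
    (more than half of `total`) after losing any single site.
    `placement` maps site name -> controllers hosted there."""
    quorum = total // 2 + 1  # 2 of 3
    return all(total - lost >= quorum for lost in placement.values())

# 2-site split: one site necessarily hosts 2 of the 3 controllers,
# so losing that site leaves 1 controller -- no quorum.
print(survives_any_single_site_failure({"site-a": 2, "site-b": 1}))   # False
# 1 controller per site across 3 sites tolerates any single-site loss.
print(survives_any_single_site_failure({"site-a": 1, "site-b": 1, "site-c": 1}))  # True
```

This is why the supported multisite design is 3 sites with 1 controller each, not a 2-site split.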
Hi Tarun. Try to identify what is causing such CPU usage. I once had a similar situation where an edge got hit by 100% CPU usage out of nowhere, and it was caused by an overload of incoming network traffic from the internet:
1. There was a rate limit on this particular network.
2. The network got hit by a DDoS.
3. The internet routers started to drop about 50% of incoming packets in order to stay within the rate limit.
4. The NSX Edge started to consume 100% CPU; it was busy trying to maintain TCP sessions, retransmitting dropped packets, and so on.
Regards
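The retransmission pressure in step 4 can be spotted from kernel TCP counters (e.g. `netstat -s` on a Linux-based appliance). A minimal sketch that parses such output and computes the retransmit ratio; the sample text is made up to match the ~50% drop scenario, and counter wording varies between kernel versions:

```python
import re

# Fabricated excerpt of the TCP section of `netstat -s` (Linux);
# real counter names differ slightly across kernels.
SAMPLE = """\
Tcp:
    184230 segments sent out
    92115 segments retransmitted
"""

def retransmit_ratio(netstat_output: str) -> float:
    """Return retransmitted/sent segment ratio from `netstat -s` text."""
    sent = int(re.search(r"(\d+) segments sent out", netstat_output).group(1))
    retrans = int(re.search(r"(\d+) segments retransmitted",
                            netstat_output).group(1))
    return retrans / sent

ratio = retransmit_ratio(SAMPLE)
# A ratio anywhere near 0.5 matches "routers drop ~50% of packets".
print(f"retransmit ratio: {ratio:.2f}")  # prints "retransmit ratio: 0.50"
```

A healthy link usually sits well under a few percent; a ratio this high points upstream (rate limiting, DDoS scrubbing) rather than at the edge itself.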