ewy's Posts

Were you able to do this successfully with Okta? We don't have an ADFS setup, but our security team has told us Okta can emulate it!? The closest I have found to that is https://www.okta.com/integrations/okta-mfa-for-microsoft-adfs/ and it clearly shows you need an ADFS server.
@Great_White_Tec I think you meant you wouldn't send it through the primary site. I agree, but in my case the gateway used to send witness traffic to the witness site is part of an HSRP group, which means the gateway will (should) always be available even after the primary site fails.

I have been reading about implementing WTS (witness traffic separation) since it is available on the version I am running, but I have a couple of things I want to confirm first. If I understand it correctly, I would add another vmkernel to each host tagged for witness traffic (I could use vmk0 but would rather not). For instance:

- Site A hosts: vmkWitness 10.10.3.0/24 (VLAN 103), gw 10.10.3.1, only located on Site A
- Site B hosts: vmkWitness 10.10.4.0/24 (VLAN 104), gw 10.10.4.1, only located on Site B

and then add static routes to each host as follows:

- Site A hosts: esxcli network ip route ipv4 add -n 10.10.1.0/24 -g 10.10.3.1
- Site B hosts: esxcli network ip route ipv4 add -n 10.10.1.0/24 -g 10.10.4.1
- Witness host:
  To reach Site A hosts: esxcli network ip route ipv4 add -n 10.10.3.0/24 -g 10.10.1.1
  To reach Site B hosts: esxcli network ip route ipv4 add -n 10.10.4.0/24 -g 10.10.1.1

10.10.1.1 is the witness host subnet gateway located on the witness site. Something very similar to Setup Step 5: Validate Networking | vSAN Stretched Cluster Guide | VMware
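For anyone trying the same thing, here is a minimal sketch of that route setup as it would look on the ESXi shell, assuming the subnets and gateways listed above (the vmk number and the witness host's witness-traffic IP are illustrative placeholders, not from my environment):

```shell
# --- Run on each Site A host: reach the witness subnet via the Site A witness gateway
esxcli network ip route ipv4 add -n 10.10.1.0/24 -g 10.10.3.1

# --- Run on each Site B host: same route via the Site B witness gateway
esxcli network ip route ipv4 add -n 10.10.1.0/24 -g 10.10.4.1

# --- Run on the witness host: return routes to both data-site witness subnets
esxcli network ip route ipv4 add -n 10.10.3.0/24 -g 10.10.1.1
esxcli network ip route ipv4 add -n 10.10.4.0/24 -g 10.10.1.1

# --- Verify the routes took effect, then test reachability from a data host.
# vmk3 and 10.10.1.10 are placeholders: use your actual witness-tagged vmkernel
# and the witness host's witness-traffic IP.
esxcli network ip route ipv4 list
vmkping -I vmk3 -c 3 10.10.1.10
```

If the vmkping fails while the route list looks right, check the physical gateway/VLAN path before touching the hosts again.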
I need some help validating this network config for a stretched cluster. I'll summarize what's on the attached (not so good) diagram:

- The cluster is running 6.7 U1.
- Each data site has a Nexus vPC pair as its core.
- L2 between data sites. The vSAN VLAN has an EIGRP-advertised SVI which is part of an HSRP group with 4 members (1 SVI per Nexus, so 2 per site). (This is one of the key things I want to validate.)
- Each host has a static route which uses the vSAN SVI (10.10.1.0 255.255.255.0 10.10.0.0 vmk2 MANUAL). (I want to make sure this is recommended, or whether I should use something else.)
- das.usedefaultisolationaddress = false
- HA advanced settings are configured with the following das.isolationaddress(1-4) IPs, in that order: 10.10.0.2 (Site A), 10.10.0.4 (Site B), 10.10.0.3 (Site A), 10.10.0.5 (Site B). (I want to validate this too.)
- As you can see, vSAN witness traffic always flows through the primary site (suboptimal routing; not a big deal unless HSRP failover time is too high, which could cause vSAN issues after a site failure while 10.10.0.1 becomes available again).

Questions:
1- Should I advertise (route) the vSAN SVIs?
2- Can I use a different route for witness traffic to avoid traversing the ISL?
3- Are the HA advanced settings (isolation addresses) configured properly?

What would you recommend to improve this design? Don't be shy with the details. I understand this is very networking-heavy, but that is what we have to deal with when using Ethernet-based distributed storage systems, which HCI is. As we always hear, network reliability is key to HCI. Thanks in advance, folks! Looking forward to your responses.
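In case it helps anyone reviewing this, here is a quick validation sketch from the ESXi shell using the IPs quoted above; it only checks the static route and reachability of each candidate isolation address, nothing more (the vmk2 interface name is from my config, yours may differ):

```shell
# --- Confirm the vSAN static route is present on the host
esxcli network ip route ipv4 list

# --- Add it if missing: witness subnet reachable via the vSAN SVI (HSRP VIP)
esxcli network ip route ipv4 add -n 10.10.1.0/24 -g 10.10.0.1

# --- Spot-check each candidate das.isolationaddress from the vSAN vmkernel.
# An isolation address is only useful if every host can ping it on that vmk.
for ip in 10.10.0.2 10.10.0.4 10.10.0.3 10.10.0.5; do
  vmkping -I vmk2 -c 3 "$ip"
done
```

Any address a host cannot reach on the tagged vmkernel is not doing its job as an isolation address, so I'd run this from hosts on both sites.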
Having the attached errors show up in a few VMs after an HA event. I am unable to create snapshots. It is not a space issue, as a KB suggested it might be; these VMs are running on a vSAN datastore. Thanks in advance for your help!
Can someone from VMware clarify this (non-consistent backups when the VMware Snapshot Provider is disabled)? We have several non-2019 servers too where this "fix" has been applied, and the assumption was that the snapshots are app-consistent as long as the app can be quiesced, not that we were removing the possibility of it being quiesced and being fooled into thinking all is good. Also, if so, where in this architecture would this service sit? Thanks
Well, it is set to 100% now and the warning is still there.
I will try increasing the number as suggested, but I am also trying to understand the logic so that I can apply it to my capacity planning. (At this point I have no clue as to what resource is causing the alarm to go off; memory seems the most obvious.) I have tried reading multiple articles, including the latest clustering deep dive book, and I still can't grasp this HA admission control logic. Anyway, here is additional info on my cluster: my VM memory reservations add up to 40 GB and CPU to 41200 MHz; like I said, small. My total cluster resources are 1.34 THz CPU and 10.48 TB memory. Based on the information provided in the article I linked in a previous reply, and assuming the algorithm uses active memory (not consumed; used = active + overhead), everything tells me I have enough even to set a 0% degradation and be OK after half my cluster goes down. Please see a screenshot of my cluster resources attached. Again, thanks for taking the time and for your quick response!
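Just to show the arithmetic I'm doing, here is a tiny sketch comparing the reservations above against the capacity left after a 50% failover reserve. To be clear, this is my simplification, not VMware's actual algorithm: real HA admission control also adds per-VM memory overhead to each reservation and can behave differently with default (zero) reservations.

```python
# Rough sanity check of percentage-based admission control, using the
# figures quoted above. Simplified: ignores per-VM memory overhead.

def capacity_after_reserve(total, reserve_pct):
    """Capacity left for reservations once the failover percentage is set aside."""
    return total * (1 - reserve_pct / 100)

total_cpu_mhz = 1.34e6        # 1.34 THz of cluster CPU, expressed in MHz
total_mem_gb = 10.48 * 1024   # 10.48 TB of cluster memory, in GB
reserved_cpu_mhz = 41200      # sum of VM CPU reservations
reserved_mem_gb = 40          # sum of VM memory reservations

cpu_ok = reserved_cpu_mhz <= capacity_after_reserve(total_cpu_mhz, 50)
mem_ok = reserved_mem_gb <= capacity_after_reserve(total_mem_gb, 50)
print(cpu_ok, mem_ok)  # both True: reservations fit easily after the 50% reserve
```

By this math the warning makes no sense for my numbers, which is exactly why I suspect the trigger is something other than raw reservations.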
But I have enough resources to tolerate half the cluster going down, or so I think. I think I understand what this setting does, and I don't want a performance impact in the event of a failure; that is why I set it to 0. My active memory utilization is really low (I read here that this is the metric used). CPU utilization is low too; see images attached. Thanks for the quick reply!
I do; now that I know what to look for, googling it was easier. Found it here: Deduplication and Compression | vSAN Space Efficiency Technologies | VMware. Thanks a.p.
Setup: vSAN stretched cluster running ESXi 6.7 U1, all flash.

"Issue": When I try to create a VM storage policy with object space reservation set to 25, 50, or 75%, the vSAN datastore doesn't appear as a compatible one, but it does if I choose fully thin or thick. I have plenty of space available. It doesn't matter what other settings I try (RAID 1/5/6, local, stretched); it is always the same. I haven't been able to find any special requirements for being able to create this type of policy. What am I missing?
If you have it deployed already, the built-in health check is a good start (make sure everything is green there). We used two tools: Live Optics to assess and size (run it for at least 7 days) before purchasing and to get a better idea of our workloads (avg IO size, peak IOPS, network throughput, etc.). Then, once deployed, we used HCIBench (a VMware fling) to conduct benchmarking with data sets as close as possible to the Live Optics reports. Hope this helps
Can someone tell me what OS is running on:
1- vRealize Network Insight (on-prem appliances)
2- Network Insight Proxy appliance (deployed on-prem when Network Insight is used as SaaS on VMware Cloud)
I can't find this info anywhere. Thanks in advance.
Is this bug back, or what am I missing? This is a vSAN stretched cluster running 6.7 U1. The HA admission control configuration can be seen in the pics attached. I am using cluster resource percentage (50%) as recommended for vSAN stretched clusters. The weird thing is that in the HTML5 interface the host failures option is available but in the vSphere Web Client it is not (which I think is how it should be in the HTML5 client); not sure if this is a bug/issue, just letting you know. I have plenty of resources, as you can see in the picture attached; I can lose half of the cluster and still have plenty to support my operation. Yet the "Insufficient configured resources to satisfy the desired vSphere HA failover level on the cluster" warning is still showing. What did I do wrong, or what am I not understanding? PS: I do have about 3 VMs with "small" CPU/memory reservations. Thanks
Here is my scenario: I have an existing Windows vCenter Server 6.5d with an embedded PSC. I am planning to upgrade it to 6.5 U2 and then deploy a new VCSA 6.7 with an embedded PSC (same SSO domain and site as the existing Windows one) and configure ELM between them. Is this supported? I may have misunderstood an online blog where it mentions that it is, but only in greenfield deployments... if that's the case, that really sucks! Thanks in advance for your help!