burchell99's Posts

Did you ever get anywhere with this? I get the same thing with vCenter 8.0 U2 for Azure AD:

"Could not create indirect identity provider: Failed to create identity provider with IDP name Azure AD for tenant customer on host xxx.domain.com"

(The host name is just an example.)
I don't think you can include the short name as a SAN. I have come across the same problem:
- CN was the FQDN
- SANs included the IP and the short name
- Removed the SANs, as they were not essential for us
Same problem. Two identical vCenters, joined to AD in the standard way and rebooted. One shows as joined in the GUI, so I can add an identity provider as normal and then layer on group memberships. The other is joined to AD (the computer object exists) but the GUI shows it hasn't joined; /opt/likewise/bin/domainjoin-cli query shows it is joined. No idea why!

Version: 7.0.3 Build: 19234570
You are using /24, which is the entire 192.168.0.x block. The rules are read in order, so the first rule allows your 192.168.0.20 and the traffic never reaches the deny. For a single IP you need the prefix to be /32.

Note: this is untested, but I've been researching the same thing today and that is my understanding.

Subnet Cheat Sheet – 24 Subnet Mask, 30, 26, 27, 29, and other IP Address CIDR Network References (freecodecamp.org)
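To illustrate the first-match behaviour described above, here is a minimal sketch using Python's ipaddress module. The rule tables are hypothetical, not taken from any real firewall config; the point is only that a broad /24 allow evaluated first shadows a later /32 deny:

```python
import ipaddress

# Hypothetical rule tables, (network, action); first matching network wins.
rules_broken = [
    ("192.168.0.0/24", "allow"),   # matches the whole 192.168.0.x block
    ("192.168.0.20/32", "deny"),   # never reached for .20
]
rules_fixed = [
    ("192.168.0.20/32", "deny"),   # single-host deny evaluated first
    ("192.168.0.0/24", "allow"),
]

def first_match(rules, src):
    """Return the action of the first rule whose network contains src."""
    addr = ipaddress.ip_address(src)
    for net, action in rules:
        if addr in ipaddress.ip_network(net):
            return action
    return "default-deny"

print(first_match(rules_broken, "192.168.0.20"))  # allow - the /24 shadows the deny
print(first_match(rules_fixed, "192.168.0.20"))   # deny
```

Reordering so the more specific /32 rule comes first (or using /32 for the host you want to block, as noted above) is what makes the deny take effect.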
It appears these warnings are cosmetic: if the rest of the config matches up, you can select Ignore after it eventually fails on the check. Everything else in the upgrade seemed to work! I am always nervous about these vSphere Replication upgrades as I've had some really bad experiences over the years.
Same error while upgrading the main vSphere Replication appliance itself. It seems like there is some confusion between the source and target regarding IPv6, even though I am using IPv4. I have highlighted where I think the "check" is failing. If anyone has experience of this I am all ears. Thank you.
Hello

I have been working through steps in a lab to upgrade vSphere Replication from 6.1.2 to 8.1.2, without success. I checked the upgrade paths and it seems I am on a supported path.

We have a vSphere Replication server on 6.1.2.2 and an add-on server on 6.1.2.1 (this setup was advised by support due to a replication bug we previously experienced). I am trying to upgrade the add-on servers first, as per the documentation, but when I deploy the new 8.1.2 appliance and start the migration it fails with:

ERROR network_validation Config flags for interface do not match. (eth0)

I tried to troubleshoot this and ensure the config did match (DNS, search domain, etc.) but it still failed. I also validated that I had an OVF context on both the main server and the add-on server, but the same thing keeps happening. I can select Ignore and try to register, but I am unsure of the impact.

Has anyone managed to upgrade from 6.1.2 to 8.1.2? I would love to hear your experiences. A log example of the error is attached.
I now see that SRM 8.2 is out, which has an appliance option. I will upgrade my Windows/SQL installation from 6.1.2.2 to 8.2 and then carry out the migration process noted in "Migrate from Site Recovery Manager for Windows to Site Recovery Manager Virtual Appliance", which will move us from Windows and SQL to a single appliance. Hopefully this is of use to others.
Hello

Is it possible or supported to migrate from a SQL Server (2012) database to the embedded DB and keep the data? I found posts about going from embedded to SQL, but we want to do the opposite. We are currently running SRM 6.1.2.2.

We are planning a migration to SRM 8.1.1, so I understand a workaround might be to export the config, re-install with the embedded DB, and import the config, but I wondered if there are alternative supported or unsupported options available on our current version.

Thank you
Seems this resolved itself. I checked the compatibility just now to find good news. Not sure when it changed, but that is useful.
Thanks for the reply.

My main issue is that Server 2019 is just a branch update from Server 2016, much as Windows 10 does. Windows 10 is supported on ESXi 5.5 and above. When Server 2016 was released two years ago, support was backdated to the GA release of ESXi 5.5, which at the time was three years old! The month after the 2016 GA, vSphere 6.5 was released, so that system is fully supported across the entire 5.5–6.7 range. ESXi 6.0 U3 was released in February 2017, some 20 months ago, so not supporting Server 2019 seems strange, especially as the 6.0 track has full support for a further 18 months from today.

I appreciate you can't answer and we don't know, but I am stuck between a rock and a hard place. We would be looking to upgrade to newer vSphere code in the second half of 2019, but will need Server 2019 before this, early next year. Here's hoping! :S
Hello

I am running vSphere 6.0 U3 and have a requirement to keep this environment for another year. I also have a requirement for Server 2019, which went GA last week. I see that the tech preview was only supported on ESXi 6.5 and ESXi 6.7 in the Compatibility Guide: VMware Compatibility Guide - Guest/Host Search

Does anyone know if there are plans for the GA release to be supported on ESXi 6.0? The hypervisor remains in general support until March 2020, so I am hoping so, but if anyone has any details I am all ears. Thanks
It's actually listed as fixed in 8.1.0.1:

NEW: When you configure virtual machines for replication, the Site Recovery user interface might show an error "Server Error Response with status: 0 for URL: null". When you are using the Site Recovery HTML5 user interface to select VMs to replicate to the recovery site, the UI might throw an error "Server Error Response with status: 0 for URL: null". This issue is fixed.

However, as 8.1.0.4 is the newest release, I would advise going to that code. My guess (and it's only that) is that you are on the GA 8.1 release, hence the problem.
Thanks. I wonder if anyone who has done this can confirm. My concern is that 8.1 merges the vSphere Replication and SRM features into a single plugin/icon in the web clients (which then goes external to the new HTML5 interface in SRM 8.1). What would happen to the plugins and buttons post-upgrade for the old 6.5.1 site pair? I worry I would essentially be locked out of the older site pair and unable to access its GUI. Same question for the vSphere Replication icon: will it remain or disappear? A good one for some lab testing maybe, but if anyone has done this I'd be very interested to hear!
Not familiar with that version. From the release notes, 8.1.0.4 was released less than two weeks ago:

VMware Site Recovery Manager 8.1.0.4 | 24 AUG 2018 | Build 9569154
VMware Site Recovery Manager 8.1.0.3 | 12 JUN 2018 | Build 8738384
VMware Site Recovery Manager 8.1.0.2 | 18 MAY 2018 | Build 8527244
VMware Site Recovery Manager 8.1.0.1 | 20 APR 2018 | Build 8311425

Note: VMware Site Recovery Manager 8.1.0.1 | 20 APR 2018 | Build 8311425 replaces the previously released VMware Site Recovery Manager 8.1 | 17 APR 2018 | Build 8255892.
Thanks for the reply. Both site pairs would eventually be upgraded. I am just trying to work out if this needs to be a big bang (not ideal) or can be phased, with the other pair remaining on 6.5.1 and working for a short period.
Are you running the latest release of 8.1? Last time I checked it was 8.1.0.4, with many fixes; worth considering. If you are already on that code, it sounds like a support call. Have you had other problems with SRM 8.1?
Did you ever get an answer on this? Is it just for the IP customization, or for pre/post commands, or is it a more important requirement?
Hello

We have four vCenters running 6.5 U1 in ELM, running vSphere Replication and SRM 6.5.1 in two separate site pairs.

If we upgrade one of the site pairs from 6.5.1 to 8.1, can the other pair continue to work on 6.5.1, or do they all need upgrading at the same time? Thanks
Thanks for the reply.

Additional VR servers were added to remote sites because the replication happens in that country, which is thousands of miles away from the primary data centres. I configured the replication using the add-on VR server and it works as expected.

We have the embedded primary vSphere Replication servers, both in the UK and working fine with 60+ replications. The add-on appliances are in a remote country, with currently one replication configured to use the add-on.

My problem is the network traffic being sent to discover all host and datastore mappings within that vCenter (not replication traffic, just the initial discovery). Support note that the logs contain "750+ repetitions for the same host-datastore combination" and have asked me to increase the CPU and RAM resources on the add-on appliance, which I can do, but I find it hard to believe that's the root cause.

Has anyone seen this before? I am continuing to work with support. We previously had vSphere Replication 5.5 set up in the same topology and configuration and had no experience like this.

Here is an extract of the log analysis (host names have been changed):

---------------------------------------
Searching for the references for specific hosts shows 750+ repetitions for the same host-datastore combination, e.g.:
grep "accessible true" _var_log_vmware_hbrsrv*log | awk '{print $7,$9}' | grep host-87 | sort | uniq -c
     765 host-87 /vmfs/volumes/59302ddb-6aec7dc4-4b6f-0025b56210be
     765 host-87 /vmfs/volumes/59369879-25e5f99c-9f65-0025b5621000
     765 host-87 /vmfs/volumes/5936989f-c24c1d19-5832-0025b5621000
     765 host-87 /vmfs/volumes/593698d0-435294c6-e47a-0025b5621000
     765 host-87 /vmfs/volumes/59369909-177fb6d1-f680-0025b5621000
     765 host-87 /vmfs/volumes/59369957-19dd6dd8-2326-0025b5621000
     765 host-87 /vmfs/volumes/5936999d-dc17bf42-b1e4-0025b5621000
     765 host-87 /vmfs/volumes/593699e2-d8eefcf2-f053-0025b5621000
     765 host-87 /vmfs/volumes/59369a2e-de9ebc18-f769-0025b5621000
     765 host-87 /vmfs/volumes/59369a7a-602c5e98-6a21-0025b5621000
     765 host-87 /vmfs/volumes/59369acb-bf8cad98-0401-0025b5621000
     765 host-87 /vmfs/volumes/59369cd7-24490508-1d64-0025b5621014
     765 host-87 /vmfs/volumes/59369d15-0947c92c-5c02-0025b5621014
     765 host-87 /vmfs/volumes/59369d5d-3d62e08a-2502-0025b5621014
     765 host-87 /vmfs/volumes/59369d95-85aea7f9-0059-0025b5621014
     765 host-87 /vmfs/volumes/59369dcc-22a7847b-7463-0025b5621014
     765 host-87 /vmfs/volumes/59369dfc-38dc64dc-b2b5-0025b5621014
     765 host-87 /vmfs/volumes/59369e28-eead2b5c-c8e7-0025b5621014
     765 host-87 /vmfs/volumes/59369e53-df715bb8-9c97-0025b5621014
     765 host-87 /vmfs/volumes/59369e85-f865fd52-37e3-0025b5621014
     765 host-87 /vmfs/volumes/59369ed9-f78276b8-4d27-0025b5621014
     765 host-87 /vmfs/volumes/598d610a-cce9c00a-5462-0025b5621154
     765 host-87 /vmfs/volumes/59e1d92e-ec292b0d-f16c-0025b5621014
     765 host-87 /vmfs/volumes/59e1d980-188f20ae-6d3b-0025b5621014
     765 host-87 /vmfs/volumes/59e1da19-34f9854e-6e0a-0025b5621014
     765 host-87 /vmfs/volumes/59e1da4e-e5d7659b-1bb4-0025b5621014
     765 host-87 /vmfs/volumes/59e1da7e-fea1c702-f398-0025b5621014
     765 host-87 /vmfs/volumes/59e1dab2-3261a01f-f7de-0025b5621014
     765 host-87 /vmfs/volumes/59eba7e6-fdbf9d18-17e6-0025b56210dc
     765 host-87 /vmfs/volumes/59eba81f-482d1508-256a-0025b56210dc
     765 host-87 /vmfs/volumes/59eba842-a5935f28-29c7-0025b56210dc
     765 host-87 /vmfs/volumes/5a09f7e7-95af91e1-3811-0025b5621168
     765 host-87 /vmfs/volumes/5a09f94d-ee363e1d-d5fc-0025b5621168
     765 host-87 /vmfs/volumes/5a0a20ad-be9563f0-739f-0025b5621168

Entries repeat in this kind of pattern:

_var_log_vmware_hbrsrv-216.log:2018-01-16T16:03:23.692Z verbose hbrsrv[7F4E524DC760] [Originator@6876 sub=Host opID=hs-init-75f6efae] Host: host-87 Datastore: /vmfs/volumes/59302ddb-6aec7dc4-4b6f-0025b56210be -> 59302ddb-6aec7dc4-4b6f-0025b56210be (name esx01-localstorage) accessible true
_var_log_vmware_hbrsrv-216.log:2018-01-16T16:03:27.607Z verbose hbrsrv[7F4E4B123700] [Originator@6876 sub=Host] Host: host-87 Datastore: /vmfs/volumes/59302ddb-6aec7dc4-4b6f-0025b56210be -> 59302ddb-6aec7dc4-4b6f-0025b56210be (name esx01-localstorage) accessible true
_var_log_vmware_hbrsrv-216.log:2018-01-16T16:03:32.498Z verbose hbrsrv[7F4E4B42F700] [Originator@6876 sub=Host] Host: host-87 Datastore: /vmfs/volumes/59302ddb-6aec7dc4-4b6f-0025b56210be -> 59302ddb-6aec7dc4-4b6f-0025b56210be (name esx01-localstorage) accessible true
_var_log_vmware_hbrsrv-216.log:2018-01-16T16:03:37.162Z verbose hbrsrv[7F4E4B7BD700] [Originator@6876 sub=Host] Host: host-87 Datastore: /vmfs/volumes/59302ddb-6aec7dc4-4b6f-0025b56210be -> 59302ddb-6aec7dc4-4b6f-0025b56210be (name esx01-localstorage) accessible true
_var_log_vmware_hbrsrv-216.log:2018-01-16T16:03:41.706Z verbose hbrsrv[7F4E4B4B1700] [Originator@6876 sub=Host] Host: host-87 Datastore: /vmfs/volumes/59302ddb-6aec7dc4-4b6f-0025b56210be -> 59302ddb-6aec7dc4-4b6f-0025b56210be (name esx01-localstorage) accessible true
_var_log_vmware_hbrsrv-216.log:2018-01-16T16:03:45.897Z verbose hbrsrv[7F4E4B268700] [Originator@6876 sub=Host] Host: host-87 Datastore: /vmfs/volumes/59302ddb-6aec7dc4-4b6f-0025b56210be -> 59302ddb-6aec7dc4-4b6f-0025b56210be (name esx01-localstorage) accessible true
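For anyone wanting to do the same counting without the grep/awk/sort/uniq pipeline above, here is a small Python sketch of the idea. The sample lines below are made up (shortened datastore names, invented timestamps) but follow the same shape as the hbrsrv log extract; it tallies how often each host-datastore pair appears:

```python
import re
from collections import Counter

# Hypothetical sample lines in the same shape as the hbrsrv extract above.
sample = """\
2018-01-16T16:03:23.692Z verbose hbrsrv[7F4E524DC760] [Originator@6876 sub=Host] Host: host-87 Datastore: /vmfs/volumes/aaa -> aaa (name ds1) accessible true
2018-01-16T16:03:27.607Z verbose hbrsrv[7F4E4B123700] [Originator@6876 sub=Host] Host: host-87 Datastore: /vmfs/volumes/aaa -> aaa (name ds1) accessible true
2018-01-16T16:03:32.498Z verbose hbrsrv[7F4E4B42F700] [Originator@6876 sub=Host] Host: host-87 Datastore: /vmfs/volumes/bbb -> bbb (name ds2) accessible true
"""

# Capture the host id and datastore path from "accessible true" lines.
pattern = re.compile(r"Host: (\S+) Datastore: (\S+) .*accessible true")

counts = Counter()
for line in sample.splitlines():
    m = pattern.search(line)
    if m:
        counts[(m.group(1), m.group(2))] += 1

# Print in the same "count host datastore" shape as uniq -c.
for (host, ds), n in sorted(counts.items()):
    print(n, host, ds)
```

Run against the real log files (e.g. by reading them line by line instead of the sample string), this gives the same per-combination counts that support quoted, so you can watch whether the repetition count keeps climbing.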