NFerrar's Posts

Just found this in the v4.3.1 release notes too:

"Deploying a second vRealize Suite Lifecycle Manager fails. If you have multiple instances of VMware Cloud Foundation in the same SSO domain and you try to deploy vRealize Suite Lifecycle Manager on both, the second deployment will fail with the message 'Add vCenter Server and Data Center to vRealize Suite Lifecycle Manager Failed'. Workaround: Use a single vRealize Suite Lifecycle Manager to manage instances of VMware Cloud Foundation in the same SSO domain."
Although now I'm confused again... This is the main link I was following when I came up with the need for a region B specific vRSLCM: https://docs.vmware.com/en/VMware-Cloud-Foundation/4.3/vcf-vrslcm-wsa-design/GUID-D59D9A51-F829-4472-9683-84185A790E9C.html

Not only does the logical architecture diagram imply a vRSLCM in region B, but the text also seems to back that up. Under the "Multiple VMware Cloud Foundation Instances" section of the table at the bottom it states: "In each VMware Cloud Foundation instance, a vRealize Suite Lifecycle Manager appliance deployed on the cross-instance NSX segment" (although I still don't see why the region B vRSLCM would need to be on a cross-region AVN) and "vRealize Suite Lifecycle Manager in each additional VMware Cloud Foundation instance provides life cycle management for: vRealize Log Insight".

That link doesn't specifically say there's federation linking the VCF instances, but surely saying "multiple VCF instances" is only relevant in that context? Has the architecture changed with v4.3.x (the VVS link was for v4.2)?
Ah, cheers for that link (and the help provided), I'd missed it when googling! It's the first time I've seen it clearly documented that you deploy the region B vRealize components from the region A vRSLCM.
Thanks - that was my thinking too. The one I'm still not sure about, though, is the region B vRSLCM. It will never move (and I assume goes on the region B AVN), and presumably you wouldn't want it dependent on the cross-region Workspace ONE Access cluster (which usually runs out of region A), but it would benefit from connecting to a Workspace ONE Access instance so you can use your AD to authenticate access to it.
Hi,

My understanding is that the cross-region vRealize Suite components are the ones that link to the cross-region Workspace ONE Access. However, could someone clarify whether the local region components (vRealize Suite and NSX-T) point at the region-specific standalone Workspace ONE Access instances in their local region? The region B configuration documentation I've found doesn't seem to mention it.

So for example:

- NSX-T Global (primary) Manager (deployed in region A) connects to the region A standalone Workspace ONE Access instance
- NSX-T Global (secondary) Manager (deployed in region B) connects to the region B standalone Workspace ONE Access instance
- vRLI in region A connects to the region A standalone Workspace ONE Access instance
- vRLI in region B connects to the region B standalone Workspace ONE Access instance
- vRSLCM in region B connects to the region B standalone Workspace ONE Access instance
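To make the question concrete, the mapping I'm assuming can be sketched as a quick table in Python. The component and instance names below are just labels for illustration, not real API identifiers:

```python
# Hypothetical mapping: each region-local component -> (its region, the
# standalone Workspace ONE Access instance I'm assuming it connects to).
# Names are illustrative labels only, not actual vCenter/NSX objects.
components = {
    "NSX-T Global Manager (primary)":   ("A", "WSA standalone A"),
    "NSX-T Global Manager (secondary)": ("B", "WSA standalone B"),
    "vRLI region A":                    ("A", "WSA standalone A"),
    "vRLI region B":                    ("B", "WSA standalone B"),
    "vRSLCM region B":                  ("B", "WSA standalone B"),
}

# The assumption I'm asking someone to confirm: every region-local
# component authenticates against the WSA instance in its own region.
assert all(wsa.endswith(region) for region, wsa in components.values())
print("every region-local component maps to its local WSA instance")
```

This is just the pattern as I understand it from the diagrams - the cross-region components (which link to the cross-region WSA) are deliberately left out of the table.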
Hi,

Whilst VVDs were still a thing VMware published a spreadsheet that collated all the design decisions. I can't find anything similar now things have changed to Design Guides and VVSs - is there one? I know the VVSs and Design Guides each have a design decision appendix, but for ESXi, for example, there's design decision info in both the Management and VI Design Guides and also in the IAM VVS (and possibly other places - I still need to look through all the VVSs...).

Also, is there any update to the NIST Compliance Kit? I need to document all design decisions for a VCF 4.3.1 deployment, and most of these are driven by VMware guidance/mandates, so having a full set of them in one location for reference would be helpful.
"Changing the preferred SAN path manually can affect your environment, it's 100%!" Could you explain why, though? I get that a path failure, or even manually disabling the active I/O path, might cause some I/O to be lost or need to be resent, but I was rather hoping that if all you're doing is changing the preferred path, ESXi would do it in a more controlled manner (i.e. finish the active I/O on the current path, then switch to the new preferred path). It must have a mechanism to do this itself when you use a Round Robin policy?

I'm also not sure I follow how your use of a Round Robin policy for the cut-over helps. When you shut down the port on the switch, if the active I/O is on that path at the time, isn't that going to cause the same issue as a path failure? And with Round Robin don't you have less control over which path is in use when you shut down the switch port? I'm far from an expert on storage policies and how the I/O works at a low level, though, so maybe I'm just not understanding why the Round Robin policy would help.

We have an active/active SAN but use a Fixed path policy (though I'd be fine with temporarily changing it to Round Robin if that would guarantee the fabric switch cut-over wouldn't cause an issue in a guest VM). We've got around 200 VMs (with 16GB+ vRAM) that likely won't vMotion, so doing this properly by putting a host into maintenance mode is going to be a nightmare to arrange downtime for.
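To check my own understanding of the difference between the two policies, here's a toy model of the scheduling behaviour. This is purely illustrative - the device names are made up and this is not how ESXi's path selection plugins are actually implemented internally:

```python
# Toy model of the two path-selection policies being discussed. Purely
# an illustration of the scheduling behaviour, not ESXi internals.

def fixed_policy(paths, preferred, n_ios):
    """Fixed PSP: every I/O goes down the preferred path while it is alive."""
    return [preferred] * n_ios

def round_robin_policy(paths, iops_limit, n_ios):
    """RR PSP: rotate to the next active path after iops_limit I/Os."""
    schedule = []
    path_idx = 0
    for i in range(n_ios):
        schedule.append(paths[path_idx])
        if (i + 1) % iops_limit == 0:               # hit the IOPS count
            path_idx = (path_idx + 1) % len(paths)  # rotate to the next path
    return schedule

paths = ["vmhba1:C0:T0:L1", "vmhba2:C0:T0:L1"]  # made-up runtime names

# Fixed: all I/O rides one path until the preferred path is changed.
print(fixed_policy(paths, paths[0], n_ios=4))

# Round Robin (iops_limit of 2 so the rotation is visible in a short run):
# I/O alternates between paths, which is why, at the moment you shut the
# switch port, the in-flight I/O could be on either path.
print(round_robin_policy(paths, iops_limit=2, n_ios=6))
```

This is what's behind my question: with Round Robin the path in use at switch-port-shutdown time depends on where the rotation happens to be, which feels like less control, not more.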
Hi,

We're changing the SAN fabric switches in our environment. Fortunately the hosts all have spare HBA ports, so we have cabled those up (and zoned them for a test LUN/datastore). It's a vSphere ESXi v5.5 16-host cluster (yes, I'm aware this isn't supported anymore!). However, I'm not sure how best to do the cut-over.

In testing, a manual preferred path change via the GUI (whilst copy and zip operations were running in a VM on a test datastore whose preferred path I was flipping) didn't seem to interrupt I/O, but I realise that's no guarantee that, say, a DB transaction on a busy SQL server wouldn't be affected. I can't find any low-level details about what happens during a manual preferred path change, though. Presumably if it doesn't interrupt I/O that's already in progress (but just sends the next I/O up the new preferred path) then it should always be invisible to a guest VM (isn't that similar to a round robin pathing policy hitting its IOPS count?).

The alternative is putting a host in maintenance mode and changing the preferred path on each datastore once all the VMs are off it, but we have a separate issue whereby several of our critical app VMs won't vMotion: it's a 1Gb vMotion network, and the vRAM size plus rate of change of memory means vMotion times out. So that means incurring downtime, which is going to be a pain to arrange.

Has anyone done this sort of cut-over with online VMs running on the host you're making the change on (and has any advice)? I'm also thinking that if we do it online we probably don't want to script it: we'd either need a script for each host/datastore combination (at which point running them is more hassle than just using the GUI), or, if we did all datastores on a host at the same time, a mass pathing change might trigger an issue in itself.
Just a follow-up on this... I tried with vRNI v5.2 Thick instead but hit the same issue; v5.2 Thin passed the validation checks but failed instantly on submit with a "failed to save details. please try again" error.

VMware support advised changing the environment name I was using (but without saying why, or to what). I guessed and removed the space from it, but this was a couple of days later, and in the meantime a colleague had changed the deployment request back to v5.3 Thin - it now passed the checks and the deployment was successful. So I still don't understand why previous attempts with v5.3 Thin didn't work (let alone why Thick to an 80TB datastore doesn't work), but at least it deployed in the end. If I get a chance later I'll see if a space in the environment name was the cause, but that seems unlikely to me...
Hi,

Has anyone successfully deployed Network Insight v5.3.0 from vRSLCM 8.1? I get a failure on an infrastructure pre-validation check for available disk space, even though the vSAN datastore being targeted has almost 80TB free (I've tried thin and thick with the same result, and targeting a smaller datastore gives the same error). We have successfully deployed Identity Manager v3.3.0 and vRA v8.1.0 through vRSLCM to the same datastore.

The only thing different I can think of with vRNI v5.3.0 is that it wasn't supported initially with vRSLCM v8.1, but I added the product update pack from VMware Solution Exchange to enable support for it. The v5.3.0 binaries mapped OK and every other validation check passes. I've got a case open with VMware support but they've not provided much help so far (there are no vSAN health issues and the datastore is mapped to all hosts in the cluster).

I'm going to import the vRNI v5.2.0 binaries shortly and will try to deploy from those instead. I was just curious whether anyone had successfully deployed vRNI v5.3.0 fresh via vRSLCM v8.1, or whether there's something I'm missing as to why that's not supported (the VMware Product Interoperability Matrices indicate to me that it should be?). Thanks!
Do you need to connect to the 192.168.x.x subnet from outside the virtual environment?
Hi,

We have a couple of existing hosts with Intel Xeon Gold 6130 CPUs (Skylake) running ESXi v6.5u3 in a cluster managed by VCSA v6.7u3 (there wasn't a Dell v6.7 ESXi image available at the time the hosts were built). I need to add a third host to the cluster, but the current model of the server has Intel Xeon Gold 5128 CPUs (Cascade Lake), so I figured I'll probably need to turn on EVC mode.

In vCenter, whilst I can see Skylake EVC mode as an option, the hosts fail the check for that mode ("the host CPU lacks features required by that mode", but no details of which features). At host level Skylake isn't shown as an EVC option, whereas it is at cluster level. Looking at the VMware compatibility checker, though, the 6130s should be fine for Skylake EVC mode. I'm wondering: is this something being masked in the BIOS (I'll need to check when I can arrange host downtime, but I didn't do any fancy BIOS config when they were deployed), or is it to do with the ESXi version on the hosts (I thought supported EVC modes were determined purely by the CPU model and vCenter version)?

I'm planning to upgrade the hosts to v6.7u3 anyway (I was waiting for the third host's capacity to do this without VM downtime, but that's not an option anymore as I need to enable EVC), but from what I've read I can't see this fixing the issue.

It's not the end of the world - I can use Broadwell EVC mode for the cluster if needs be - but if there's something I can fix now (or at least when downtime is arranged and the third host is delivered) to have Skylake available, I'd rather do that.
Thanks, both were useful answers, seems I need a plan B!
Hi,

Whilst I know a VCSA v6.7 appliance can't manage ESXi v5.5 hosts, can it actually run on one?

The reason I ask is we have a large internal vCenter & ESXi v5.5u3 environment, but also several ESXi hosts running in gateways etc. The internal cluster is due a complete refresh, but until then we have an issue: any new hosts deployed outside this environment are forced onto ESXi v5.5, because we want to keep the vCenter running on the internal cluster. As a stop-gap I want to build another vSphere environment with a VCSA v6.7 managing any new non-internal-cluster gateway ESXi hosts (also running v6.7), but with the VCSA itself running on an internal core cluster v5.5u3 host.

Is that supported? I can't see the VCSA doing anything low-level enough (just to run) that it would care it's on a v5.5u3 host, but the HCL/Googling info all seems to be about the host versions you can manage, not the versions it can run on.

Cheers, Nick
Yeah, I'm thinking the issue is maybe that you can't licence a 5.1 ESXi server with the new-type ROBO licence? I've added 5.5U2 ESXi hosts fine and they're happily using ROBO licences (although they were running in eval mode previously).
Hi,

Hoping someone can help me sort out a licensing issue I have...

We have several existing ESXi 5.1 hosts at branches (1 per branch), currently licensed using the free edition of ESXi. I want to add these servers into a new central vCenter 5.5 Standard server so they can be managed via that. To licence this we've purchased 125 VM ROBO licences (Standard).

My plan was just to add the hosts one at a time to vCenter and change them to ROBO licences. The first snag I hit was that you can't add a host licensed with free ESXi to vCenter. Fair enough - I thought I'd just change it back to eval mode in order to add it, then change it to a ROBO licence. However, it looks like the eval mode was fully used before the free licence was added, so I can't change it back to eval mode.

I then noticed we had a spare 2 CPU 5.5 Enterprise licence on the vCenter, so I added the host in (changing it to use this licence during the import process). That worked, and the host is now in vCenter using the 5.5 Enterprise key (although it shows as using a 5.1 Enterprise key of the same number); it didn't complain that the licence key was invalid despite me using a 5.5 key to licence a 5.1 host. However, when I went to change it to the ROBO key I got an error, "A specified parameter was not correct: licenseKey", and it won't let me change it to the ROBO licence.

It's not clear to me why I'm getting this error. Is there a specific issue with changing a host from a host licence to a ROBO licence (I haven't found anything on this)? I did find this article, VMware KB: Assigning license to an ESXi host fails with the error: A specified parameter was not correct: licenseKey, which indicates it could be a licence version number issue, but I would have thought I'd have hit that error when I first applied the 5.5 host licence to the host - and that worked fine.

The ROBO licence in vCenter just says v5 rather than v5.5, so is a ROBO licence version-point specific, meaning I'd need to downgrade it to a 5.1 licence, or is something else going on here?

Thanks, Nick