dmgeurts's Posts

For clarity, I'm trying to use the new API (/api/session) rather than the old one (/vmware/cis/session). Edited the original post. I got caught out by a simple line feed...
Found plenty of detail about using the REST API, but not how to troubleshoot a failing login. I've verified the credentials, and UI login works fine with them, so I'm confused why this fails:

AUTH64=$(printf 'user@domain.com:************' | base64)
curl -k -X POST https://vcenter.some.domain.com/api/session -H "Authorization: Basic ${AUTH64}"

{"error_type":"UNAUTHENTICATED","messages":[{"args":[],"default_message":"Authentication required.","id":"com.vmware.vapi.endpoint.method.authentication.required"}]}

So far I've tried 'username', 'domain/username' and 'username@domain'. I'm using FreeIPA as the LDAP backend, and the local admin user doesn't work either. What on earth am I doing wrong?!

Silly error in the end that took way too much time: echo appends a line feed, which base64 happily includes in the encoded string. printf is a better option here (one could also use echo with a flag to suppress the line feed). The example above has been amended to use printf.
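The newline pitfall described above is easy to demonstrate side by side; the credentials below are placeholders, not real ones:

```shell
CRED='user@domain.com:secret'   # placeholder credentials

# echo appends a trailing newline, which base64 encodes into the token,
# so the Authorization header no longer matches the raw credentials:
echo "$CRED" | base64

# printf emits the string exactly as given, producing a valid Basic auth
# token (echo -n works too, where supported):
AUTH64=$(printf '%s' "$CRED" | base64)
echo "$AUTH64"
```

The resulting token is then used exactly as in the curl call above: `-H "Authorization: Basic ${AUTH64}"`.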
Did you ever find a solution for this?

Actually, the following resolved it. Use at your own risk, YMMV, etc.: Host >> Configuration >> System >> Advanced System Settings. Here, change `VMkernel.Boot.disableACSCheck` to `TRUE`.
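If you prefer the host CLI over the UI, the same boot option can, to my knowledge, also be toggled with esxcli (a sketch, assuming SSH access to the host; verify the setting name on your build before relying on it, and note that `VMkernel.Boot.*` options require a reboot to take effect):

```shell
# Show the current value of the ACS check boot option (corresponds to
# VMkernel.Boot.disableACSCheck in Advanced System Settings):
esxcli system settings kernel list -o disableACSCheck

# Disable the ACS capability check; reboot the host afterwards:
esxcli system settings kernel set -s disableACSCheck -v TRUE
```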
I have a couple of HPE DL380 servers which have been running v7.0.2 for a while, and after reading reports that these can run v7.0.3, albeit with warnings about officially unsupported CPUs, I took the plunge on one of the servers. But I'm running into an issue where I suspect Lifecycle Manager is not allowing me to keep an old driver for an Intel P4500 NVMe. Driver 2.6.0 works, but 2.7.0 and up require a C620 chipset, which my gen8 and gen9 servers do not have. Manually installing the VIB works after uninstalling the v2.7.x VIB, but I suspect Lifecycle Manager will want to revert it.

Lifecycle Manager (vSphere Client v7.0 U3c) is being very odd about additional drivers: I'm seeing cosmetic glitches, and it's refusing to add the v2.6.0 or v2.6.1 Intel NVMe driver to the baseline. It doesn't show any errors; it just silently completes without adding the driver to the baseline.

Is there a way to pin a VIB or keep an old driver around in a 'newer' baseline? If not, I'll just need to manually revert the driver each time I update the cluster.
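For reference, the manual driver swap mentioned above looks roughly like this on the host. This is a sketch only: the VIB name and file path are illustrative assumptions, not taken from my hosts, so check `esxcli software vib list` for the real component name first:

```shell
# Put the host in maintenance mode first, then find the exact VIB name:
esxcli software vib list | grep -i nvme

# Remove the newer driver (the name here is a placeholder; use the one
# reported by the list command above):
esxcli software vib remove -n intel-nvme-vmd

# Install the known-good v2.6.0 VIB from a local file (path is illustrative):
esxcli software vib install -v /vmfs/volumes/datastore1/intel-nvme-vmd-2.6.0.vib

# Reboot the host so the older driver loads.
```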
Just to confirm, adding a PCIe NIC and disabling the onboard LOM resolved the issue for me. And it wasn't as expensive as I'd feared it would be.
Ah, I was afraid of that, and it sucks big time, as I have two servers: one with a 10Gb module and a 1Gb NIC, and the other the other way round, so the NIC numbers are not the same on both servers. This is going to be an expensive issue to fix...
This looks similar, but the offered solution doesn't work. https://communities.vmware.com/t5/VMware-vSphere-Discussions/vSphere-6-7-Configuration-Quickstart-Physical-adapter-does-not/m-p/2887444/highlight/false#M41286
I've got the same issue after recreating my cluster, while it was fine before. I should mention that I'm on vSphere v7.0.
I had my dual-server vSAN cluster running, but had to juggle (kinda rebuild) it to get EVC enabled. And now completing the cluster configuration is giving me grief. The current state is that Quickstart has configured one of the two servers, but is refusing to configure the second. The error I get is the following:

"Some of newly added hosts to the cluster contain a network configuration that is incompatible with the network configuration of the cluster. Physical adapters of host esx02.mgmt.domain.com on Distributed Switch vDS_20G does not match the input."

I've read somewhere that removing one of the redundant links on the vDS can help. I've done this in turn for both vmnics, and no dice. How on earth do I get this working, and where would you start digging to get to the root cause? As far as I can tell the configuration, apart from the vmnic numbering, is identical between the two nodes.

vSphere 7.0.2, with the vSAN witness on a third node in the datacenter that is not part of the cluster in question.

Cluster nodes, 2 HPE servers:
DL380p gen8 - vmnic0 & 1
DL380 gen9 - vmnic4 & 5

vSAN is configured and working.
LACP (LAG) - 2x 10GbE for VMs, vMotion and vSAN.
LACP (LAG) - 2x 1GbE for management.
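One place I'd start digging: compare the actual uplink-to-vmnic mapping on each host against what the vDS expects, since the two hosts use different vmnic numbers. A sketch, run over SSH on each ESXi host:

```shell
# Show the distributed switch as this host sees it, including which
# vmnics back which uplinks:
esxcli network vswitch dvs vmware list

# List physical NICs with link state and speed, to confirm the vmnic
# numbering on this particular host:
esxcli network nic list
```

If the uplink names map to different vmnics than Quickstart is trying to assign, that mismatch would be consistent with the "does not match the input" error.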
Just encountered this on a VxRail installation. Did you find a solution?