RealQuiet's Posts

SDDC is on 4.5.2 and I have generated the SoS logs, which allowed me to query the logs from the SDDC Manager UI. I am looking for the log name that stores login successes and failures when using SSH or console access with the local vcf and/or root account.
Does anyone know where SDDC Manager stores security logs for vcf or root logins (successes and failures)? I was able to find the UI logins under sddc-manager-ui-activity.log but cannot find security logs for SSH or console logins with the root/vcf accounts. Any help would be appreciated.
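Not a VCF-specific answer, but on Linux appliances SSH successes and failures are normally written by sshd through syslog (often /var/log/auth.log, or the journal; the exact path on SDDC Manager is an assumption on my part). A minimal sketch of the message patterns to look for, using fabricated sample lines:

```python
# Fabricated sshd sample lines; real entries land in syslog
# (e.g. /var/log/auth.log -- the path is an assumption, not confirmed for VCF).
sample = """\
Jan 10 09:15:01 sddc-manager sshd[1234]: Accepted password for vcf from 10.0.0.5 port 51514 ssh2
Jan 10 09:16:44 sddc-manager sshd[1235]: Failed password for root from 10.0.0.9 port 51520 ssh2
Jan 10 09:17:02 sddc-manager sshd[1236]: pam_unix(sshd:session): session opened for user vcf
"""
# Successes say "Accepted ...", failures say "Failed password ..."
logins = [l for l in sample.splitlines()
          if "Accepted password" in l or "Failed password" in l]
print(len(logins))  # 2: one success, one failure
```

Grepping the real log for those two phrases should surface both outcomes for the vcf and root accounts.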
Did you check your proxy settings? Make sure they are not cleared out and that you have the correct entries for when it should bypass the proxy.

/etc/sysconfig/proxy

NO_PROXY="localhost,127.0.0.1,<vcsa_fqdn>,<vcsa_ip>,<sddc_fqdn>,<sddc_ip>"
I am trying to create a custom ISO to bring into my SDDC Manager and followed the directions from https://docs.vmware.com/en/VMware-Cloud-Foundation/4.3/vcf-admin/GUID-D43A3FAC-682E-46F7-8342-03364EE5D2CC.html

Get-DepotBaseImages "D:\VCF_ESXI_7.0u2c\VMware-ESXi-7.0U2c-18426014-depot.zip"

Version             Vendor       Release date
-------             ------       ------------
7.0.2-0.20.18426014 VMware, Inc. 08/23/2021 23:00:00
7.0.2-0.15.18295176 VMware, Inc. 08/23/2021 23:00:00

Get-DepotAddons "D:\VCF_ESXI_7.0u2c\lnv-esx-7.0.2-custom-20210717-21C_addon.zip"

Name Version            ID                     Vendor       Release date
---- -------            --                     ------       ------------
LVO  7.0.2-LVO.702.10.7 LVO:7.0.2-LVO.702.10.7 Lenovo, Inc. 07/16/2021 17:15:18

The issue is that it spits out TWO depot base images. The .20 is a bugfix and the .15 is a security fix. The JSON fails if I list both, but works if I list one. What is the solution here?

This will fail because two base images are listed:

{
  "add_on": {
    "name": "LVO",
    "version": "7.0.2-LVO.702.10.7"
  },
  "base_image": {
    "version": ["7.0.2-0.20.18426014", "7.0.2-0.15.18295176"]
  },
  "components": null,
  "hardware_support": null,
  "solutions": null
}
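For what it's worth, base_image.version looks like it expects a single version string rather than an array. A sketch (purely illustrative; picking the higher build number is my assumption, not documented behavior) of emitting a spec with just one of the two reported base images:

```python
import json

# The two base images reported by Get-DepotBaseImages; keep only the one
# with the higher trailing build number (assumption: the bugfix build).
versions = ["7.0.2-0.20.18426014", "7.0.2-0.15.18295176"]
chosen = max(versions, key=lambda v: int(v.rsplit(".", 1)[-1]))

spec = {
    "add_on": {"name": "LVO", "version": "7.0.2-LVO.702.10.7"},
    "base_image": {"version": chosen},  # single string, not a list
    "components": None,
    "hardware_support": None,
    "solutions": None,
}
print(chosen)
print(json.dumps(spec, indent=2))
```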
I appreciate that information! I left it out during the bring-up; good to know it is not required for something this simple.
This turned out to be a valid way to create the workload domain using vmnic0 and vmnic2 instead of vmnic0 and vmnic1:

"vmNics": [
     {
          "id": "vmnic0",
          "vdsName": "VDS01"
     },
     {
          "id": "vmnic2",
          "vdsName": "VDS01"
     }
]

I am not sure what moveToNvds does or how it is used if I want to create the VM workload vDS during the bring-up. I am assuming that the NSX-T configuration will automatically attach to VDS01 because it is declared with "isUsedByNsxt".
Ok, if I am reading it correctly, then I would need something akin to this:

"vdsSpecs": [
     {
          "isUsedByNsxt": true,
          "name": "VDS01",
          "portGroupSpecs": [
               {
                    "name": "VDS01-pg-mgmt",
                    "transportType": "MANAGEMENT"
               },
               {
                    "name": "VDS01-pg-vsan",
                    "transportType": "VSAN"
               },
               {
                    "name": "VDS01-pg-vmotion",
                    "transportType": "VMOTION"
               }
          ]
     }
]

At which point the "isUsedByNsxt": true flag would place the NSX-T configuration on this vDS as well. VDS01 would carry vMotion, vSAN, ESXi management, and NSX-T.

For the hostNetworkSpec, can it IGNORE vmnics, or does each one have to be assigned a vDS and port group configuration?

Configuration 1: vmnic1 and vmnic3 carry VM workloads

"vmNics": [
     {
          "id": "vmnic0",
          "vdsName": "VDS01",
          "moveToNvds": true
     },
     {
          "id": "vmnic1",
          "vdsName": "VDS02",
          "moveToNvds": false
     },
     {
          "id": "vmnic2",
          "vdsName": "VDS01",
          "moveToNvds": true
     },
     {
          "id": "vmnic3",
          "vdsName": "VDS02",
          "moveToNvds": false
     }
]

Configuration 2: can we bypass configuring vmnic1 and vmnic3 and configure that vDS later?

"vmNics": [
     {
          "id": "vmnic0",
          "vdsName": "VDS01",
          "moveToNvds": true
     },
     {
          "id": "vmnic2",
          "vdsName": "VDS01",
          "moveToNvds": true
     }
]
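Before submitting either variant, a quick consistency check can be scripted: every vmNic must reference a declared vDS, and any vmnic left out of vmNics is simply not wired up at bring-up (whether the API accepts omitted vmnics is exactly the open question). A sketch, with names mirroring the example above:

```python
# Cross-check a candidate hostNetworkSpec (configuration 2) against the
# declared vdsSpecs; all names are from the example, nothing official.
vds_specs = [{"name": "VDS01", "isUsedByNsxt": True}]
vm_nics = [
    {"id": "vmnic0", "vdsName": "VDS01", "moveToNvds": True},
    {"id": "vmnic2", "vdsName": "VDS01", "moveToNvds": True},
]
declared = {v["name"] for v in vds_specs}
dangling = [n["id"] for n in vm_nics if n["vdsName"] not in declared]
unassigned = sorted({"vmnic0", "vmnic1", "vmnic2", "vmnic3"}
                    - {n["id"] for n in vm_nics})
print(dangling)    # [] -> every vmNic maps to a declared vDS
print(unassigned)  # vmnics the spec would leave untouched
```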
I need to bring up a workload domain on hosts configured with two network cards, each with 2 ports. For redundancy and per our legacy networking practices:

Card 1 port 1 - vmnic0 attached to Switch A - vSAN, vMotion, ESXi, NSX-T
Card 1 port 2 - vmnic1 attached to Switch B - workload VLANs
Card 2 port 1 - vmnic2 attached to Switch A - vSAN, vMotion, ESXi, NSX-T
Card 2 port 2 - vmnic3 attached to Switch B - workload VLANs

I am using the APIs to create a workload domain and was wondering how to script it to use vmnic0 and vmnic2 instead of vmnic0 and vmnic1. Is it done under the hostNetworkSpec? If so, what value do I put in for moveToNvds?

"hostSpecs": [
     {
          "id": "***host id***",
          "licenseKey": "LICENSE_KEY",
          "hostNetworkSpec": {
               "vmNics": [
                    {
                         "id": "vmnic0",
                         "vdsName": "VDS01",
                         "moveToNvds": true
                    },
                    {
                         "id": "vmnic1",
                         "vdsName": "VDS02",
                         "moveToNvds": false
                    },
                    {
                         "id": "vmnic2",
                         "vdsName": "VDS01",
                         "moveToNvds": true
                    },
                    {
                         "id": "vmnic3",
                         "vdsName": "VDS02",
                         "moveToNvds": false
                    }
               ]
          }
     }
],

Any help would be appreciated. I am hoping that I am not going too far down the wrong path, so I am asking for directions.
If you have an Administrator password that logs in during the customization process, make sure that you re-enter it. We use it in our customization, and after upgrading from vCenter 6.0 to 6.5 we had to re-enter it. I also saw this when upgrading from 6.5 to 6.7. Perhaps the encryption keys change during the upgrade process.
What is the processor licensing requirement for VCF? Is it a minimum of 2 processors per node, or can it be 1 processor per node? We have a quote for nodes using 2 processors per management node; it seems excessive to use 2 processors.
Tried the disjoin and rejoin. Did it using the GUI, then the shell. Same result. Went through the hassle of setting up LDAP, and it works. No victory for me; now I need to document this, as it is different from all the other VCSAs in the environment. So it works, I am just not happy about the implementation. The answer: if user groups do not work using AD Integration, then use LDAP.
During my research I also ran across forum postings saying that using LDAP would work. It is just frustrating because we have 5 other VCSAs that do not have this problem. This particular VCSA resides in the same subnet as the DC it would be communicating with, so there are no firewalls causing issues. If I cannot find a solution by Tuesday, then I will try the LDAP solution. Not ideal, as it would be configured differently than the rest of the environment.
Running into an issue with my fresh vCenter 6.7 install. This is probably familiar to some of you out there, so I hope you have a resolution that I can use.

"Unable to login because you do not have permissions on any vCenter Server systems connected to this client."

vCenter is joined to the domain and it can resolve the user objects and user groups from AD. If I add a user group (which I am a part of) at any level (nested in vSphere.local\Administrators, global permissions, or the vCenter root), I cannot log in. However, if I add my user account to global permissions or the vCenter root, then it works. Nesting under vSphere.local\Administrators is not working for groups or users.

AD Group
Nested under Administrators = FAIL
Global Permissions = FAIL
vCenter (any level) = FAIL

AD User
Nested under Administrators = FAIL
Global Permissions = PASS
vCenter (any level) = PASS

Also, I redeployed vCenter from scratch, and the first thing I did was rejoin it to AD (after resetting the AD object) and try this. Same issue. For auditing and permission-management reasons, user groups are preferred. Any help would be appreciated. For now I am just adding the team's user accounts to the vCenter root with the Administrator role.
Yes, it is an AF-8 sizing. I am actually gathering IO data this week to see what our needs are, working on a data capture of at least a week. We have a follow-the-sun model, so workloads vary depending on the shift. Thank you for your link; it accounts for changes to SSD endurance, and now I am seeing errors in my calculations for the caching tier change. I may revert it back to 400GB... but I do like the endurance and the ability of a larger caching drive to hold objects longer before having to write them to the capacity tier. I was using older information from: https://cormachogan.com/2015/05/19/vsan-6-0-part-10-10-cache-recommendation-for-af-vsan/
I actually pushed up to 800GB; they started with 400GB, and I thought I could push performance more if it had the 600GB to leave objects in as long as possible. Personally, I feel that losing a disk group because a cache drive wears out is a bit much, especially when I consider that if one wore out, and vSAN is wearing them evenly, then the remaining cache tier drives may follow. If anything, I want the capacity drives to wear out first and, if the system is still around, then the caching tier. I spoke with the vendor about my SATA concerns, and they still recommended SATA over SAS for the capacity tier. They put me at ease over that; of course, I still want to hear from others' experiences. Anyone have VDI experience with an SSD SAS caching tier and an SSD SATA capacity tier? Your input would be appreciated.
Doing a build-out for a VDI cluster, and we are getting quotes for an AF-vSAN. The vSAN will be running RAID 6 with compression, dedup, and encryption enabled, with a workload of Windows 10 full clones and persistent data. The vendor is quoting the following mix:

Cache: 800GB SSD SAS 12Gb x 2 (4KB reads @ 240k, 4KB writes @ 130k)
Capacity: 1.92TB SSD SATA 6Gb x 6 (4KB reads @ 90k, 4KB writes @ 50k)

Each node will have two disk groups, each comprising an 800GB cache drive and 3 x 1.92TB capacity drives. I have some room in the budget to jump up to 1.6TB SAS drives for the capacity tier. This increases cost per drive and I lose 1.8TB of space per node. The gain is a capacity tier where each drive has 4KB reads @ 200k and 4KB writes @ 90k.

Does anyone here have experience with using a mixed SAS cache and SATA capacity tier for an AF-vSAN deployment? For VDI, would you save money and use a mix of SAS and SATA, or SAS across both tiers?
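For rough budgeting, here is the raw vs. RAID-6 usable math for the quoted layout. This is a sketch: the 6-node count is my assumption (vSAN RAID 6 needs at least 6 hosts), and the 4/6 factor ignores dedup/compression gains and slack-space overhead.

```python
# Per node: 2 disk groups x 3 x 1.92 TB capacity drives = 6 drives.
drives_per_node = 6
drive_tb = 1.92
nodes = 6  # assumed minimum host count for vSAN RAID 6

raw_tb = drives_per_node * drive_tb * nodes
usable_tb = raw_tb * 4 / 6  # RAID 6 stripe: 4 data + 2 parity
print(round(raw_tb, 2), round(usable_tb, 2))  # 69.12 46.08
```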
Yeah, looking at this more closely, it gets quite costly to run RAID 6 with a fault tolerance of two. I am currently favoring the SAS SSD configurations due to cost.
I have a small VDI environment, only 200 Windows 10 systems, with 8Gb Fibre Channel connecting to the storage array. We are looking at buying new equipment and have a good-sized budget for an HCI solution using vSAN. Off the bat, I expect storage latency to drop just from having it in-rack, but I want to know if NVMe SSD with an Optane caching tier is worth it. Keep in mind we will be encrypting the vSAN. I know, I know... it is workload dependent. I am just trying to get a baseline on whether Optane is overkill for VDI.

I am considering 3 SSD configurations:
1. All NVMe with Optane cache
2. NVMe cache + SAS storage
3. All SAS

Also, please let me know if I am overlooking another storage technology that is better for VDI.
I agree that CPU resources are not often highly utilized. The issue is that CPU Ready time increases and performance is impacted by the process delay. The proposed configuration is almost identical in core count, so the hope is that CPU Ready time does not increase. I am concerned that there is some nuance in how resources are handled with two socket hosts vs one socket hosts that will significantly increase CPU Ready time even when core counts are nearly identical.
Found my answer: VMware does not support ESXi hosts with 1 socket. The supported configurations are 2, 3, and 4 sockets. The dream is dead.

--------------------------

With higher core counts, is it now OK to move away from 2 sockets and start using 1-socket ESXi hosts? I am working on reducing cost in our environment and wanted to know what the best practice is regarding ESXi hosts and socket count.

Here is the current cluster configuration:

Hosts: 3
Sockets per host: 2
Cores per socket: 10
Physical cores: 60
Logical processors (HT enabled): 120

There are 25 VMs in the cluster with a total of 66 vCPUs assigned to them. To reduce licensing cost I would like to replace these hosts with the following:

Hosts: 3
Sockets per host: 1
Cores per socket: 18 or 22
Physical cores: 54 or 66
Logical processors (HT enabled): 108 or 132

I feel like we are trapped in the practice of needing two sockets, per previous core limitations, when one socket per host is now sufficient.

Message was edited by: Michael N (Answer Found)
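The core-count comparison above boils down to the vCPU-to-physical-core ratio, which is what drives CPU Ready. A quick sketch with the numbers from the post (ratios only; it ignores scheduling nuances like NUMA placement):

```python
# 66 vCPUs against each candidate cluster's physical core count.
vcpus = 66
clusters = {
    "current (3 hosts x 2 x 10)": 3 * 2 * 10,   # 60 cores
    "proposed (3 hosts x 1 x 18)": 3 * 1 * 18,  # 54 cores
    "proposed (3 hosts x 1 x 22)": 3 * 1 * 22,  # 66 cores
}
for name, cores in clusters.items():
    print(name, round(vcpus / cores, 2))
```

The 18-core option actually raises oversubscription slightly (1.22 vs 1.1), while the 22-core option brings it to 1.0.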