rcstevensonaz's Posts

It looks like Intel RST Volume Management Device (VMD) with RAID 1 for NVMe drives has been supported since early 2022 for both boot and data volumes: Using Intel VMD driver for vSphere to create NVMe RAID1. I have no idea whether using Intel VMD for RAID 1 is a good idea or not. But RAID 0, even if it were supported, would remain a horrible idea regardless.
Thanks for clarifying. In my case, root and administrator@vsphere.local use the same password. Is there any issue with special characters (e.g., "(", ")", and "^")?
As a reference for others, Tim's comment refers to Option 4 when you run /usr/lib/vmware-vmca/bin/certificate-manager. But it asks for a valid SSO password to perform certificate operations. Can anyone explain what that is? (It is not the root password.)
I noticed the exact same thing this weekend building a new ESXi 6.0 server with an Intel 530 SSD 256GB and an Intel 730 SSD 480GB. Building the VMFS took about half an hour for the 256GB 530; I went to bed and have no idea how many hours the 480GB 730 took. These are on SATA 0 and SATA 1 of a Supermicro X9SCM (C204) motherboard.

And, as was noted earlier, the ESXi server itself was essentially unresponsive during this process: the vSphere Client was not updating, performance measurements were not updating, and access through the console was very sluggish. I did not have any VMs running, so I don't know whether they would also have been affected during this time. The first time, I thought the machine was hung and rebooted it; then I tried again, left the machine running with the VMFS formatting, and eventually it completed.
In regard to tschuld's comment on his configuration of 1 CPU and 12GB, this configuration matches any of the basic consumer LGA 1366 motherboards populated with memory as follows: 4GB sticks in one DIMM bank, or 2GB sticks in both DIMM banks. If a person is using an LGA 1366-based motherboard (e.g., Core i7), this is about as basic as you can get. The only lower option is 2GB sticks in a single DIMM bank (6GB). It seems VMware has not factored tri-channel boards into their 8GB limit.
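The slot math above can be sketched as simple arithmetic. This is just an illustration of the layouts described in the post; `bank_total_gb` is a hypothetical helper, and the 3-slots-per-bank figure assumes a standard tri-channel LGA 1366 board (one DIMM per channel per bank, two banks, six slots total):

```python
# LGA 1366 tri-channel boards: 3 memory channels, typically 2 banks,
# so one "bank" is 3 DIMM slots (one per channel).
SLOTS_PER_BANK = 3

def bank_total_gb(stick_gb, banks=1):
    """Total memory from populating `banks` full banks with `stick_gb` sticks."""
    return SLOTS_PER_BANK * banks * stick_gb

config_a = bank_total_gb(4, banks=1)  # 4GB sticks in one bank
config_b = bank_total_gb(2, banks=2)  # 2GB sticks in both banks
minimum  = bank_total_gb(2, banks=1)  # 2GB sticks in a single bank

print(config_a, config_b, minimum)  # 12 12 6
```

Both 12GB layouts land over an 8GB limit, while the only smaller balanced option drops all the way to 6GB.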
It would be nice if the limits at least took tri-channel motherboards into consideration, not just dual-channel. I'm just playing with this for home use and have a single-CPU motherboard (i7 930) with all 6 memory slots populated with 2GB modules. This puts my very humble machine over the 8GB limit. So on a tri-channel board, you can either populate only half of the DIMM slots or you need to find 1GB sticks instead of 2GB sticks. It seems they should go with at least 12GB to allow 2GB sticks on both dual-channel (i.e., 4 x 2GB) and tri-channel (i.e., 6 x 2GB) motherboards.
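To make the comparison concrete, here is a minimal sketch of fully populated dual- versus tri-channel boards with 2GB sticks against the 8GB limit (a hypothetical helper for illustration only):

```python
# Fully populated boards with commodity 2GB sticks:
# dual-channel = 4 slots, tri-channel (LGA 1366) = 6 slots.
def total_gb(slots, stick_gb=2):
    """Total memory when every slot holds a `stick_gb` GB stick."""
    return slots * stick_gb

dual_channel = total_gb(4)  # 4 x 2GB = 8GB  -> exactly at the 8GB limit
tri_channel  = total_gb(6)  # 6 x 2GB = 12GB -> over the 8GB limit

print(dual_channel, tri_channel)  # 8 12
```

A 12GB ceiling would cover both layouts with cheap 2GB sticks; an 8GB ceiling forces tri-channel boards to half-populate or use 1GB modules.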