krobertsIAA's Posts

This answer helped me and was much simpler than any other method. It can also be automated, since it's done through PowerCLI. Had I seen it sooner, I'd have voted it the correct answer. Tested with IBM RSA cards.
1. I can attest to #1 being OK; we are at Haswell-level EVC now with no issues.
2. DRS will balance based on performance and your p:v ratios.
3. I wouldn't say you HAVE to buy all-new CPUs. That is what EVC is for: creating a common baseline. Just keep cycling hosts and stay up to date with ESXi versions to get those new baselines.
It is 6.5, which is relatively new, so it is POSSIBLE that there is a bug; I'm no vExpert or VCDX, but I'll put that out there. My actionable theory would be the vMotion network. As others have stated: make sure the vMotion checkbox is checked at the port group level, the VLAN ID matches, the IPs are in the same subnet, and only one port group has vMotion enabled. Then do some vmkpings through SSH if you can.
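To make that vmkernel check concrete, this is roughly what I'd run from an SSH session on each host. The interface name and peer IP below are examples only; substitute your own vMotion vmkernel interface and the vMotion address of the other host.

```shell
# List the vmkernel interfaces with their IPs and netmasks,
# to confirm the vMotion vmks are in the same subnet.
esxcli network ip interface ipv4 get

# Ping the peer host's vMotion IP out of the vMotion vmkernel
# interface (vmk1 and 192.168.50.12 are example values).
vmkping -I vmk1 192.168.50.12

# If the hosts use the dedicated vMotion TCP/IP stack (6.x),
# force the ping through that netstack instead:
vmkping ++netstack=vmotion -I vmk1 192.168.50.12
```

If the plain vmkping works but vMotion still fails, that points back at the port group configuration rather than basic connectivity.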
Maybe give this a shot? VMware Knowledge Base. Mount the drives, but before resetting the password, look for where all the space has gone. Delete old logs, then reset the password.
Hello All,

Just curious if I missed a patch that changed the behavior or if I've created some anomaly. We raised our cluster-level EVC a couple of months ago across the board (3 different clusters). We set up batches/groups to begin power cycling VMs, raising the hardware level and EVC in one go. To prep, I ran a PowerCLI script that would schedule the hardware level increase upon the next reboot. This worked as expected in testing in our non-production cluster:

1. Run the PowerCLI script against the group of VMs
2. Power down the VMs
     2.a Hardware level raised to maximum for ESXi 5.5 (10)
3. Power the VMs on
     3.a VM assumes the highest EVC that the HW level and cluster permit
4. Profit

We performed step 1 on VMs several weeks ahead of time, as it should have little effect. Raising the HW level does not usually hurt anything (we found some bugs, but they're out of scope for this thread).

What we observed was:

1. Run the PowerCLI script against the group of VMs several days/weeks ahead of time
2. Some VMs reboot due to scheduled tasks
     2.a Hardware level raises to 10
     2.b EVC raises to maximum for the HW level and cluster setting
     2.c No power-off of the VMs is noted
3. Unexpected early profit

The third cluster has EVC disabled, so we cannot use it for testing. Production and non-production clusters are in different, non-linked vCenters, both 5.5 U3. We're happy to have the easiest upgrade ever, but it was unexpected. Curious if others have seen this. For reference, I attached the PowerCLI script we used.
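For anyone curious about the scheduling trick, it looks roughly like the sketch below. This is a minimal reconstruction, not the attached script itself: it assumes an existing Connect-VIServer session, "MyVM" is a placeholder name, and it uses the vSphere API's ScheduledHardwareUpgradeInfo object, which PowerCLI exposes through the VM's ExtensionData.

```powershell
# Minimal sketch (not the attached script): schedule a hardware-version
# upgrade to run at the VM's next clean power cycle.
# Assumes you are already connected via Connect-VIServer.

$vm = Get-VM -Name "MyVM"   # placeholder VM name

$spec = New-Object VMware.Vim.VirtualMachineConfigSpec
$spec.ScheduledHardwareUpgradeInfo = New-Object VMware.Vim.ScheduledHardwareUpgradeInfo
$spec.ScheduledHardwareUpgradeInfo.UpgradePolicy = "onSoftPowerOff"  # only on a clean guest shutdown
$spec.ScheduledHardwareUpgradeInfo.VersionKey    = "vmx-10"          # max HW level for ESXi 5.5

# Push the reconfigure task; the upgrade itself happens at next power cycle.
$vm.ExtensionData.ReconfigVM_Task($spec) | Out-Null
```

Using "onSoftPowerOff" rather than "always" is the conservative choice, since it only triggers on a clean guest OS shutdown, but note that a scheduled-task reboot counts as exactly that, which may explain the early upgrades we saw.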