BenLiebowitz's Posts

When trying to press F2, did you click into the window first? Try pressing the space bar or a different key. Does the window light up? It should look similar to this: the screenshot you provided looks like you're not clicked in...
What OS is selected for the VM in Settings? I found this KB article; it's for vSphere 5, but the process should still be the same. Installing ESXi 5.x in VMware Workstation (2034803) | VMware KB
When you find the solution, please let us know what it was.
VMs in vSphere and in VMware Workstation can be suspended and brought back online later. As for hibernation, that's an option within the guest OS of your VM. VMware Workstation - Using Suspend & Resume: Using Suspend and Resume
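The same suspend/resume behavior can be driven from PowerCLI. A minimal sketch, assuming a hypothetical VM name and an existing vCenter/Workstation connection:

```powershell
# Hypothetical VM name; replace with your own.
$vm = Get-VM -Name "MyVM"

# Suspend writes the VM's memory state to disk and stops execution.
Suspend-VM -VM $vm -Confirm:$false

# Start-VM resumes a suspended VM right where it left off.
Start-VM -VM $vm
```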
Are you sure the host can see the storage? It sounds like the host may not be seeing the storage array.
Another option is to build new. I went this route when I went from 4.1 to 5.5. We built a new vSphere host at 5.5 using an extra host we pulled out of the cluster, then deployed a new vCenter server. We then removed another host, rebuilt it with the 5.5 ISO, and joined it to the new vCenter in the same cluster as the first host. From there, we removed one host at a time from the 4.1 vCenter and joined them (live) into the 5.5 vCenter, in the same cluster as the first two (making sure DRS & HA were off). We were then able to migrate the VMs to the 5.5 hosts with zero downtime. As we moved VMs, we rebuilt the 4.1 hosts with 5.5 until all hosts were at 5.5. I usually prefer this method as you're getting a clean install each time. However, like TheBobkin​ said, you'd need to go to 5.x first before going to 6.5.
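The "join a host live into the new vCenter" step above can also be scripted. A hedged sketch with hypothetical server, cluster, and credential names; `-Force` is what lets a host still registered to the old vCenter be taken over by the new one (running VMs keep running):

```powershell
# Connect to the NEW vCenter (hypothetical name).
Connect-VIServer -Server "new-vc55.example.com"

# Adding a host that is still registered to the old vCenter requires -Force,
# which disconnects it from the old vCenter without disturbing running VMs.
Add-VMHost -Name "esx01.example.com" -Location (Get-Cluster -Name "Prod") `
    -User "root" -Password "VMware1!" -Force

Disconnect-VIServer -Confirm:$false
```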
I have to agree with bspagna89​, this sounds like the same issue we had with our HP Blades.  Please confirm what hardware you're running on and what version of ESXi so we can help troubleshoot your issue.
Did you check out the troubleshooting steps in this KB? Troubleshooting VMware High Availability (HA) issues in VMware vCenter Server 5.x and 6.0 (2004429) | VMware KB
You're probably right about the reboot... the host I was getting the error on turned out to have a hardware issue. I just did another one using the Update-Entity cmdlet and it's working fine now. Thanks!
Hi, I'm trying to use PowerCLI to upgrade some hosts from 5.5 to 6.0. I'm not sure if it's failing because, when you do it manually via Update Manager, it prompts you to accept the EULA?

Here's the code I'm using:

get-baseline -name "HP ESXi 6.0 U2" | remediate-inventory -entity $VMHOST -confirm:$false

This works fine when I install normal patch or extension baselines, but with the upgrade baseline I'm having issues. The error says the hardware is incompatible, but I know that isn't the case; everything has been verified on the HCL.

Here's my error:

remediate-inventory : 5/2/2017 10:45:26 AM Update-Entity    The operation for the entity "VMHOST" failed with the following message: "Hardware configuration of host VMHOST is incompatible. Check scan results for details."
At D:\ps1\vmware\VUM_HostUpgrade.ps1:62 char:40
+ ... "HP ESXi 6.0 U2" | remediate-inventory -entity $VMHOST -confirm:$false
+                    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo      : NotSpecified: (:) [Update-Entity], ExtendedFault
+ FullyQualifiedErrorId : Client20_TaskServiceImpl_CheckServerSideTaskUpda    tes_OperationFailed,VMware.VumAutomation.Commands.RemediateInventory

C'mon, Community Warrior LucD​, put on your cape and save me! Anyone?
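Since the error says "Check scan results for details", one hedged approach is to re-scan the host against the upgrade baseline from PowerCLI and read the compliance results back before remediating. A sketch using the Update Manager cmdlets, assuming `$VMHOST` is already populated as in the script above:

```powershell
$baseline = Get-Baseline -Name "HP ESXi 6.0 U2"

# Make sure the upgrade baseline is actually attached to the host.
Attach-Baseline -Baseline $baseline -Entity $VMHOST

# Trigger a fresh compliance scan against the attached baselines...
Test-Compliance -Entity $VMHOST

# ...then inspect the per-baseline results the error is pointing at.
Get-Compliance -Entity $VMHOST -Detailed
```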
No, I'm looking for overall count, but I'll keep this as a reference.    Thanks!
You're the MAN! That worked great! I added this to the end to clear the screen and display the count (the sum pipeline needs to be in parentheses, otherwise Write-Host prints before the sum is calculated):

clear
write-host ""
write-host "Total Powered On VMs: " ($sum | Measure-Object -Sum | select -ExpandProperty Sum)

Thanks again Luc!
I have a script that gives me the total count of powered-on VMs in each environment, but I would love for it to automatically add the values together and display an overall total, instead of exporting each vCenter's count to a CSV. For example, I'd like it to see VC01=100, VC02=200, VC03=50, VC04=250, VC05=100, and report Total = 700 Powered On VMs. Anyone have any suggestions?

------------------------
# Build array for each vCenter with vDS switch
$array = "vc01", "vc02", "vc03", "vc04", "vc05"
for($count=0; $count -lt $array.length; $count++)
{
# Connect to all vCenter Servers, one at a time.
connect-viserver $array[$count]
# Get VM counts & export to CSV
Get-VMHost | Get-VM | where-object {$_.PowerState -eq "PoweredOn"} | Measure-Object | export-csv -notypeinformation c:\ben\vmware\$($array[$count])_vm_count.csv
# Disconnect from vCenter Servers
disconnect-viserver -confirm:$false
}
------------------------

Thanks in advance!
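One way to get the overall total is to accumulate the counts in a variable inside the loop instead of exporting a CSV per vCenter. A sketch reusing the same (hypothetical) vCenter names:

```powershell
$array = "vc01", "vc02", "vc03", "vc04", "vc05"
$total = 0

foreach ($vc in $array) {
    Connect-VIServer -Server $vc | Out-Null

    # Count powered-on VMs on this vCenter and add to the running total.
    $count = (Get-VM | Where-Object { $_.PowerState -eq "PoweredOn" }).Count
    Write-Host "$vc : $count"
    $total += $count

    Disconnect-VIServer -Server $vc -Confirm:$false
}

Write-Host "Total Powered On VMs: $total"
```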
My plan is to use the 1gb links in standby mode. This way, if the 10gb link goes down, the 1gb uplinks will become active. However, what I'd like to do is set specific 1gb uplinks to be used for vMotion, Mgmt, etc. I don't really have a lab to test this on, so I wanted to ask here...
I have an environment with multiple hosts running ESXi 5.5. Each host has two 10gb uplinks to two Dell Force10 switches. One of those switches died, and we're currently running single threaded while they look to replace the switch. I'd like to use FOUR 1gb NICs as failover for the one 10gb uplink that's still active. Can I add 1gb uplinks to a vDS configured with 10gb uplinks? I assume I just need to increase the uplinks on the switch from two to six and set the new connections as standby, yes? Or do I need to create a new vDS, and if so, how do I configure it to be failover for the first? Thanks in advance!
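For reference, the "raise the uplink count and set the new NICs as standby" idea could be scripted along these lines. A hedged sketch with hypothetical switch, host, portgroup, and vmnic names; the uplink port names depend on your vDS:

```powershell
$vds = Get-VDSwitch -Name "dvSwitch01"

# Raise the uplink count from 2 to 6 to make room for the four 1Gb NICs.
Set-VDSwitch -VDSwitch $vds -NumUplinkPorts 6

# Attach the host's 1Gb physical NICs to the vDS.
$vmhost = Get-VMHost -Name "esx01.example.com"
$nics = Get-VMHostNetworkAdapter -VMHost $vmhost -Physical -Name vmnic2,vmnic3,vmnic4,vmnic5
Add-VDSwitchPhysicalNetworkAdapter -DistributedSwitch $vds -VMHostPhysicalNic $nics -Confirm:$false

# Per portgroup, keep the 10Gb uplinks active and specific 1Gb uplinks standby.
Get-VDPortgroup -VDSwitch $vds -Name "Management" |
    Get-VDUplinkTeamingPolicy |
    Set-VDUplinkTeamingPolicy -ActiveUplinkPort "Uplink1","Uplink2" `
        -StandbyUplinkPort "Uplink3","Uplink4"
```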
I've never worried about moving powered-off VMs via PowerCLI... Typically I patch via PowerCLI and the VMs migrate via DRS, and anything left is unavailable for about 30 minutes at most. If that doesn't work, doing it manually may work too... You'd need to script disabling DRS, moving the VMs, and then setting maintenance mode. Finally, you'll want to re-enable DRS. This post from Damian Karlson from way back may get you started: Ghetto host evacuation: PowerCLI — DAMIAN KARLSON
I think adding the -Evacuate switch to the Set-VMHost cmdlet should move the powered-off VMs.
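A minimal sketch of that, assuming a hypothetical host name; -Evacuate migrates the powered-off and suspended VMs off the host as it enters maintenance mode, while DRS handles the running ones:

```powershell
$vmhost = Get-VMHost -Name "esx01.example.com"   # hypothetical name

# Enter maintenance mode and evacuate powered-off/suspended VMs too.
Set-VMHost -VMHost $vmhost -State Maintenance -Evacuate

# ... patch / reboot the host ...

# Bring it back into service when done.
Set-VMHost -VMHost $vmhost -State Connected
```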
Based on your image, there's no redundancy for your Mgmt/vMotion/VM Network portgroups. There are two different ways you can go about this: you can keep it all in the same vSwitch like you have now, add two additional uplinks, and segregate the traffic for each portgroup to a specific VMNIC, listing the others as failover. Or, separate the Mgmt, vMotion, and VM Network portgroups into separate vSwitches, and set up each with dual uplinks. My QA environment has hosts with eight 1gb RJ45 ports on the back: two connections for NFS, two for vMotion, and four for Mgmt & VM Network (with two VMNICs active for each set and the other two as failover, and vice versa). Hope this helps!
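The "active for one portgroup, failover for the other" arrangement can be set per portgroup on a standard vSwitch. A hedged sketch with hypothetical host, vSwitch, portgroup, and vmnic names:

```powershell
$vmhost = Get-VMHost -Name "esx01.example.com"
$vswitch = Get-VirtualSwitch -VMHost $vmhost -Name "vSwitch0"

# Pin vMotion to vmnic1 with vmnic0 as its failover...
Get-VirtualPortGroup -VirtualSwitch $vswitch -Name "vMotion" |
    Get-NicTeamingPolicy |
    Set-NicTeamingPolicy -MakeNicActive vmnic1 -MakeNicStandby vmnic0

# ...and the reverse for Management, so each portgroup has redundancy.
Get-VirtualPortGroup -VirtualSwitch $vswitch -Name "Management Network" |
    Get-NicTeamingPolicy |
    Set-NicTeamingPolicy -MakeNicActive vmnic0 -MakeNicStandby vmnic1
```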
I'm not an expert, so LucD‌ can correct me, but something like this may work (note the outer loop needs a brace, not a parenthesis):

foreach ($vm in (Get-VM)) {
  $hdsrc = Get-HardDisk -VM $vm -Datastore $srcds
  foreach ($hd in $hdsrc) {
    Move-HardDisk $hd -Datastore $dstds
  }
}
Hey Luc, just wanted to thank you and the original poster for this thread... I have a task to migrate some servers from one SAN to a new one, and two of the VMs have 27 VMDKs spread across more than 10 datastores. I was going to migrate them by hand using the wizard, but when I saw this thread I realized I could script it AND run it with the -RunAsync switch, doing them all at once with the VM powered off. Thanks!
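For anyone following along, the -RunAsync variant might look like this. A sketch with a hypothetical VM name, where `$srcds` and `$dstds` are datastore objects as in the earlier snippet; each move is kicked off as an async task so the migrations run in parallel:

```powershell
# Start every disk move as an async task (VM powered off)...
$tasks = foreach ($hd in Get-HardDisk -VM (Get-VM -Name "BigVM") -Datastore $srcds) {
    Move-HardDisk -HardDisk $hd -Datastore $dstds -Confirm:$false -RunAsync
}

# ...then block until all of them finish.
Wait-Task -Task $tasks
```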