SparkRezaRafiee's Posts

As Daphnissov advised, iSCSI can be set up in different ways, but the best approach is to put it on a layer-2 connected network, so the iSCSI initiator and the iSCSI targets are on the same subnet with an MTU size of 9000. To create redundancy, you will need two different subnets, uplinks, initiators and targets. But in any kind of deployment, you should have ping working in both directions between the iSCSI initiator and the storage. So if you cannot ping the iSCSI storage from the ESXi host using vmkping, the first step is to fix network connectivity: check physical network connectivity, the VMkernel adapter, the iSCSI initiators, MTU, the configuration on the iSCSI storage, etc.
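To verify connectivity and jumbo frames from the ESXi shell, a quick check along these lines can help (vmk1 and the target IP 192.168.10.20 are placeholders for your iSCSI VMkernel port and the array's portal address):

```shell
# Basic reachability test from the iSCSI VMkernel port (vmk1 here is an assumption)
vmkping -I vmk1 192.168.10.20

# Jumbo-frame test: 8972-byte payload + 28 bytes of ICMP/IP headers = 9000 bytes,
# with -d to disallow fragmentation. If this fails while the basic ping works,
# the MTU is misconfigured somewhere along the path (vSwitch, vmk, physical switch
# or storage array).
vmkping -I vmk1 -d -s 8972 192.168.10.20
```

If the fragmentation test fails, check the MTU at every hop rather than just on the ESXi side, since a single 1500-byte link in the path is enough to break it.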
Usually restarting the VMware Tools service and then starting the Virtual Disk service fixes the issue. In some cases that doesn't work, and rebooting the VM fixes the problem. If you see a VSS error in the Application event log with a description like the one below:

Volume Shadow Copy Service error: Unexpected error DeviceIoControl(\\?\fdc#generic_floppy_drive#6&3b4c39bd&1&0#{53f5630d-b6bf-11d0-94f2-00a0c91efb8b} - 0000000000000558,0x00560000,0000000000000000,0,00000000003EAD60,4096,[0]).  hr = 0x80070001, Incorrect function.

Then the workaround to resolve the issue is:
- Go to Device Manager.
- Disable the floppy drive, then disable the floppy disk controller (the floppy drive disappears in Device Manager).
- Disable VSS application quiescing using VMware Tools:
1. Open the C:\ProgramData\VMware\VMware Tools\Tools.conf file in a text editor, such as Notepad. If the file does not exist, create it.
2. Add these lines to the file:
[vmbackup]
vss.disableAppQuiescing = true
3. Save and close the file.
4. Restart the VMware Tools service for the changes to take effect: click Start > Run, type services.msc, and click OK. Then right-click the VMware Tools service and click Restart.

Once the changes were implemented, the hot-clone of the VM completed successfully. See also VMware KB 1031298.
Your scripts can't go wrong mate
Your script is valid. I just wanted to share a script of mine that does a similar thing in a different way. I should have put a descriptive comment on my post. Let me edit it.
LucD's script is valid. The script below is cluster based: you provide a cluster name, it finds all RDM disks attached to all VMs in that cluster, and then it marks those RDM devices as perennially reserved on the ESXi hosts.

$ClusterName = "Cluster Name"

# Get all hosts in the target cluster
$TargetCluster = Get-Cluster -Name $ClusterName | Get-VMHost

# Find the ScsiCanonicalName for all RDM disks attached to VMs in the target cluster
$RDMDisks = Get-Cluster -Name $ClusterName | Get-VM |
    Get-HardDisk -DiskType "RawPhysical","RawVirtual" |
    Select-Object -Unique ScsiCanonicalName

# List the RDM devices that were found
$RDMDisks

# Retrieve an EsxCli instance for each host and flag each RDM device
foreach ($RDMDisk in $RDMDisks) {
    foreach ($hst in $TargetCluster) {
        $esxcli = Get-EsxCli -VMHost $hst
        # Set the configuration to "PerenniallyReserved".
        # setconfig method: void setconfig(boolean detached, string device, boolean perenniallyreserved)
        $esxcli.storage.core.device.setconfig($false, $RDMDisk.ScsiCanonicalName, $true)
    }
}

# Disconnect the connection objects created for the target cluster
Disconnect-VIServer * -Confirm:$false | Out-Null
If the host profile is going to alter any DNS-related configuration (i.e. DNS servers, domain name, domain search, etc.), you get that error message. The workaround is to dis-join the host from the domain and re-apply the host profile, which will also re-join the ESXi host to the domain.
Hi Paulo, You can run the below command and leave the SSH window open: pktcap-uw --vmk vmk0 -o /tmp/test.pcap (replace "vmk0" with the vmk# of the management VMkernel adapter of the ESXi host if it's not vmk0). The packet capture will continuously capture the traffic of the VMkernel into the test.pcap file, and when you want to stop it, press Ctrl-C multiple times. (Do not stop it with Ctrl-Z, as that may leave the process running in the background and it won't release the output file.) Then open the file using Wireshark, which is quite user friendly and easy to use. Cheers, Reza
Hi Paulo, I would suggest checking the performance graphs of both the ESXi host and the vCenter Server at the time the issue occurred and looking for potential high latency or high CPU/RAM usage. Also check the timestamp of the alert and see if there was any backup job running at that time. If not, then the potential cause can be layer-3 network latency, especially if you have a firewall doing the inter-VLAN routing. To capture the network traffic you can use pktcap-uw --vmk vmk# -o file.pcap in the ESXi shell and then open the captured file with Wireshark, as it is easier to view the contents of the pcap in Wireshark. Also check the hostd.log file and look for heartbeats. Regards, Reza
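To look for the heartbeat entries in hostd.log directly on the host, something along these lines works from an SSH session (the log path below is the usual ESXi location; adjust it if your logs are redirected to a scratch partition):

```shell
# Show the most recent heartbeat-related entries in hostd.log
grep -i heartbeat /var/log/hostd.log | tail -n 20
```

Gaps or long pauses between heartbeat entries around the alert timestamp are a good hint that the host was too busy or the management network was congested at that moment.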
Hi Paul, I would suggest checking the events for the VCSA and seeing if there was a backup, snapshot or any other task running on the VCSA at the same time that the host became disconnected. In large and enterprise-scale environments this usually happens with wide management subnets, network congestion, low heartbeat timeout values, or when the VC is too busy with lots of tasks in the queue. I have also seen that issue when creating snapshots or backups of the vCenter Server, and high storage latency causes it as well. Cheers, Reza
Hi Osaidaz, If it's a Windows 2012 Standard edition license, I believe it would be OK to re-use the license as long as you are not reassigning it to a new VM more than once every 90 days. So if you build a virtualized cluster and are vMotioning or Live Migrating VMs (each licensed with individual copies of Windows Server Standard) around, manually or by DRS, then you need to make sure that the server has the appropriate number of licenses assigned to it prior to the workload running on it. In other words, it must be licensed for the maximum number of physical processors of any host available in the cluster. So the proper version of Windows 2012 to implement in a VMware environment is Windows 2012 Datacenter: you license it per physical processor, and then you can have unlimited instances of Windows 2012 VMs implemented in your VMware SDDC. Windows licenses can be used as below in a virtualized environment:

Windows Server Standard: Assign 1 license to the host (which may be used for Hyper-V, or not used for Xen or VMware) and get 1 free license for a VM on that host.
Windows Server Enterprise: Assign 1 license to the host (same as Standard) and get up to 4 free licenses (with downgrade rights) for VMs on that host.
Windows Server Datacenter: Assign 2 (minimum) per-proc (socket, not core) licenses to the host and get unlimited free licenses (with downgrade rights) for VMs on that host.

You can get more info from the Microsoft document on Windows licensing in virtualized environments below: https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&uact=8&ved=0ahUKEwiptYTGvYbcAhWE2LwKHaI7CfUQFghC… Cheers, Reza
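As a rough worked example of the Datacenter math (the host and socket counts here are made-up illustration numbers, not a licensing recommendation; always confirm the rules against your Microsoft agreement):

```shell
# Hypothetical cluster: 3 ESXi hosts, 2 sockets each (illustrative numbers only)
HOSTS=3
SOCKETS_PER_HOST=2

# Datacenter edition: per-socket licensing with a 2-per-host minimum,
# then unlimited Windows Server VMs on each licensed host
PER_HOST=$(( SOCKETS_PER_HOST > 2 ? SOCKETS_PER_HOST : 2 ))
DC_LICENSES=$(( HOSTS * PER_HOST ))
echo "Datacenter licenses for the cluster: $DC_LICENSES"   # prints 6
```

With Standard instead, each individually licensed VM would need license coverage on every host it can vMotion to, which is why Datacenter usually wins out once DRS is in play.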
Hi Bob, The safest way to restart the management services is to evacuate the VMs, put the host in maintenance mode, and then restart the management services. That said, restarting the management services won't impact any running VMs unless there's a specific configuration in place, i.e. vSAN or LACP, in which cases we can exclude the required services from the services.sh job or alternatively restart the management services individually. Thanks for pointing that out; I should have mentioned checking for LACP or vSAN in that environment before restarting the management services. Cheers, Reza
The HPE Custom ESXi ISO should be OK.
Revert the changes, then storage-migrate the VM to a different datastore if possible. But before moving it, after reverting the changes, check the lock on the VMDK file again and make sure it's not locked. If that doesn't resolve the issue, the workaround would be:
- Remove the VM from inventory and re-add it
- Remove the ESXi host from inventory and re-add it
- Remove the flag from the vCenter Server inventory DB manually
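To check which host holds the lock, you can dump the file's lock metadata from an SSH session on one of the hosts (the datastore and file names below are placeholders for your own paths):

```shell
# Dump lock metadata for the VMDK; the "owner" field in the output ends with
# the MAC address of the host holding the lock (all zeros means no host lock)
vmkfstools -D /vmfs/volumes/MyDatastore/MyVM/MyVM-flat.vmdk
```

Matching that MAC against the management NICs of your hosts tells you where the locking process (often a stale backup or snapshot helper) is still running.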
That's a known issue with the QLogic driver, which creates a massive log file and fills up the /tmp folder. To resolve the issue, upgrade the QLogic driver on the ESXi host. The workaround is to delete the ql_ima.log file; you will then need to restart the management services on the ESXi host to release the space in the /tmp folder. To restart the management services, SSH to the host and run "services.sh restart".

** Generally, restarting the management services won't impact running VMs on that host, but in some cases it can cause issues, i.e. when using LACP in a vDS. So check and make sure that it's safe to restart the management services beforehand. Alternatively you can restart just the hostd and vpxa services by running:
/etc/init.d/hostd restart
/etc/init.d/vpxa restart

To check the disk space, SSH to the host and run "vdf -h". Also, if you cannot upgrade the QLogic driver due to hardware compatibility etc., you can create a cron job to delete the ql_ima.log file every day, which will prevent the /tmp folder from filling up.
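For the cron workaround, a sketch of the ESXi crontab change might look like this (the 02:00 schedule is an arbitrary choice; also note that on ESXi, edits to root's crontab do not persist across reboots unless you re-add the line from a startup script such as /etc/rc.local.d/local.sh):

```shell
# Append a daily 02:00 cleanup of the QLogic log to root's crontab
echo "0 2 * * * rm -f /tmp/ql_ima.log" >> /var/spool/cron/crontabs/root

# Restart crond so the new entry is picked up
kill $(cat /var/run/crond.pid)
/usr/lib/vmware/busybox/bin/busybox crond
```

This only buys time between driver upgrades; the fixed driver remains the real resolution.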
I know this thread was created a long time ago, but some people still have this issue. It can be due to the iSCSI login timeout. I had the same issue and managed to resolve it by tuning the iSCSI login timeout from 5 seconds (the default) to 30 seconds, and also changing the NOOP interval and NOOP timeout values to 30 seconds. But as long as both VMKs are active, there are no dead paths to the iSCSI targets, and all paths are visible, and assuming you have already checked the network IP addresses, subnet mask and MTU size, the alert can be ignored.
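On recent ESXi versions those timeouts can be adjusted per iSCSI adapter with esxcli, roughly like this (vmhba64 is a placeholder for your software iSCSI adapter name; list the parameters first to confirm the key names and valid ranges on your build):

```shell
# List the current iSCSI parameters and their valid ranges for the adapter
esxcli iscsi adapter param get -A vmhba64

# Raise the login timeout and NOOP interval/timeout to 30 seconds
esxcli iscsi adapter param set -A vmhba64 -k LoginTimeout -v 30
esxcli iscsi adapter param set -A vmhba64 -k NoopOutInterval -v 30
esxcli iscsi adapter param set -A vmhba64 -k NoopOutTimeout -v 30
```

A rescan of the adapter afterwards is a good idea so the new timeouts apply to fresh sessions.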