rguhr's Posts

RuneH has posted some interesting details about this problem in the HPE community: https://community.hpe.com/t5/proliant-servers-ml-dl-sl/outage-due-to-self-launched-reboots-by-spp-multiple-reboots/m-p/7200200/highlight/true#M184049
Thanks for the answer. It is particularly interesting that VMware ESXi is not the only affected OS and that this could be a general "feature" of the update process via SPP ISO/iLO.
Since vSphere 7 it is no longer possible to limit Storage vMotion via advanced settings: https://communities.vmware.com/t5/vMotion-Resource-Management/Storage-vMotion-Max-Cost-per-datastore/m-p/2910772/highlight/true#M4797
Hey, a few days ago we had an outage because our VMware hypervisors rebooted themselves after an HPE SPP was applied and the host was booted back into the OS.

The hypervisor was taken out of maintenance mode and the next host was started. However, on two HPE ProLiant DL365 Gen10 systems, the hypervisor rebooted again (by itself) after a few minutes (unfortunately there were already a few VMs on the system at that point).

Has anyone observed this behavior before? Or is it even desired behavior?

The HPE documentation points out that some components are only updated to the latest version after several SPP applications. Is there perhaps a new mechanism that recognizes this and automatically restarts the SPP/update process on the hypervisor?
An alternative to limiting Storage vMotion would be VAAI (vStorage APIs for Array Integration), which offloads Storage vMotions to the storage array. We have already implemented this, but unfortunately the fine print says that for NFS datastores only powered-off VMs can benefit from the offload (source: NetApp whitepaper).
Limiting the vMotion traffic type has no effect on Storage vMotion traffic. I tested with vMotion limits of 1 Gbit/s, 3 Gbit/s, 500 Mbit/s and 50 Mbit/s.
We had a network outage yesterday because Storage vMotion was using too much bandwidth (to NFS datastores).

Is there any way to limit Storage vMotion? For example, it is possible to set limits under System traffic, but there it only says vMotion - is Storage vMotion included? (We don't want to limit "normal" NFS traffic.)
Thanks for your answer. VMware support also suggested VMware Cloud Director Availability (VCDA). It is a pity that VCD has not integrated this directly itself. Some other ways could be (not tested):
* Convert the VM to a vApp template, place the template in a catalog, subscribe to the catalog on the other side and then deploy the template
* Convert the VM to a vApp, download the vApp as an OVA and import the OVA on the other side
* Other 3rd-party tools like Veeam
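The OVA route could probably also be scripted with VMware's ovftool, which understands vcloud:// source and target locators. This is an untested sketch; the host names, org, vDC and vApp names are placeholders:

```shell
# Export the vApp from site A as an OVA (ovftool prompts for credentials).
# "SourceOrg", "SourceVDC" and "app01" are placeholder names.
ovftool "vcloud://admin@vcd-a.example.com:443?org=SourceOrg&vdc=SourceVDC&vapp=app01" app01.ova

# Import the OVA into site B.
ovftool app01.ova "vcloud://admin@vcd-b.example.com:443?org=TargetOrg&vdc=TargetVDC&vapp=app01"
```

Whether this preserves VCD-level metadata (guest customization, storage policies) would need to be verified.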
Hi, we set up multisite with two VCD instances. Customers can now view and manage their virtual data centers across sites with a single login.

However, we have not found a way for customers to migrate workloads from one site to another. The normal way within the same site would be the Move action on VMs (you need a target vApp for this), but in the Move dialog you can only select vApps from the same site.

Does migration work differently for multisite, or is there really no option for customers at the moment (without help from the service provider)? (The underlying vCenters can migrate workloads via Cross vCenter Migration.)
Hi, is it possible to change the system name of a VCD setup after the initial deployment?

For the installation ID we have

${VCLOUD_HOME}/bin/cell-management-tool mac-address-management

but system-setup should probably not be run again?

${VCLOUD_HOME}/bin/cell-management-tool --help
[...]
system-setup - Performs one-time system setup of the server group.
[...]
${VCLOUD_HOME}/bin/cell-management-tool system-setup --help
usage: cell-management-tool system-setup [options]
 --email <arg>             Required - Admin email
 --full-name <arg>         Required - Admin fullname
 -h,--help                 Print this message
 --installation-id <arg>   Required - Installation ID. Range: [1..63]
 --password <arg>          Required (if unattended mode) - Admin password
 --serial-number <arg>     Optional - License serial number
 --system-name <arg>       Required - System name
 --unattended              Optional - Unattended mode does not prompt for the administrator password, which you must supply on the command line.
 --user <arg>              Required - Admin username
Got a reply from VMware support: vMotion routes are only supported with the dedicated vMotion TCP/IP stack. Source: https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.vcenterhost.doc/GUID-3B41119A-1276-404B-8BFB-A32409052449.html#:~:text=Network%20Configuration,Stack%20of%20an%20ESXi%20Host
Hi, we have two datacenter locations that use different L3 networks. We would like to be able to migrate VMs from one DC to the other.

Networks:
sideA-mgmt    (e.g. 10.10.1.0/24)
sideA-vmotion (e.g. 10.10.2.0/24)
sideB-mgmt    (e.g. 10.20.1.0/24)
sideB-vmotion (e.g. 10.20.2.0/24)

My first idea would be to use one combined VMkernel adapter for vMotion (hot) and provisioning (cold migration) traffic and set a route on both datacenter sides. I tested this solution and it worked, but then I found this ~7-year-old KB article that suggests routes for vMotion (and probably provisioning too?) are not officially supported on the default stack: https://kb.vmware.com/s/article/2108823

Is this still the case, so that I have to use dedicated stacks for routability of vMotion/provisioning?

My backup idea would be to create two VMkernel adapters, one for vMotion and one for provisioning, each on its own stack, and then just put a default route into these stacks / routing tables. (I would then need two IP addresses from the respective vMotion network, one for vMotion and one for provisioning. The gateway remains the same. I have not yet checked whether this causes problems with the routing table assignment. In the worst case I would need a third network per location.)

Are both ways supported, or only the dedicated-stack variant? Are routes safe across ESXi (minor/major) upgrades?

Regards, Robért
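For the backup idea, the per-stack routing can be configured with esxcli, since each netstack keeps its own routing table. A rough sketch for side A; vmk2, the portgroup name and the IPs are placeholder assumptions:

```shell
# Create a VMkernel adapter on the dedicated vMotion TCP/IP stack
# ("PG-vMotion" is a placeholder portgroup name)
esxcli network ip interface add -i vmk2 -p PG-vMotion -N vmotion
esxcli network ip interface ipv4 set -i vmk2 -I 10.10.2.11 -N 255.255.255.0 -t static

# Default route in the vMotion stack's own routing table (side A gateway)
esxcli network ip route ipv4 add -n default -g 10.10.2.1 -N vmotion

# Verify the per-stack routing table
esxcli network ip route ipv4 list -N vmotion
```

The provisioning adapter would be set up the same way on the provisioning netstack, which is where the second IP address per location comes from.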
Thanks @Macleud, this works for us.
We use a shared catalog to distribute some basic ISOs to our customers. Unfortunately a lot of users don't remove the ISOs after they are done with them. To keep the shared catalog clean, we have to remove/update the ISOs from time to time.

We have a PowerCLI command to find VMs with media connected at the vSphere level:

Get-VM | Where-Object {$_.PowerState -eq "PoweredOn"} | Get-CDDrive | Where-Object {$_.IsoPath -ne $null} | Select-Object Parent,IsoPath | Format-Table -AutoSize

But we need a version of this at the VMware Cloud Director level. Sometimes the VCD database thinks the media is still connected somewhere, but at the vSphere level it is already gone. Get-Media and Get-CIVM apparently have no property that tracks this. It doesn't need to be a PowerCLI cmdlet - a SELECT query against the Postgres database would also work.
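As a stopgap until a VCD-level query turns up, the two views can be diffed outside PowerCLI. A minimal sketch, assuming you dump the catalog media names (e.g. via Get-Media) and the connected ISO filenames (via the one-liner above) into two text files, one entry per line:

```shell
# media_catalog.txt : one media name per line (from the shared catalog)
# connected_isos.txt: one connected ISO filename per line (from vSphere)
sort -u media_catalog.txt  > catalog.sorted
sort -u connected_isos.txt > connected.sorted

# Lines only in the first file = catalog media not connected to any VM,
# i.e. candidates for removal/update
comm -23 catalog.sorted connected.sorted
```

This only catches the vSphere side, so it would still miss the stale "connected" entries that exist only in the VCD database.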