roland_geiser's Posts

Hello

We are on vSphere 7 and run the upgrade procedure for our HPE ESXi hosts with VUM, still using the update baselines. In earlier releases we entered the HPE URLs from the HPE vibsdepot (http://vibsdepot.hpe.com/index.xml, http://vibsdepot.hpe.com/index-drv.xml) in Lifecycle Manager. But now everything seems to be included in the baseline "Partner provided Addons for ESXi" and probably also in the baseline "VMware Certified Async Drivers for ESXi".

How do you handle that? Do you still attach the HPE URLs in addition? And when you want to upgrade the HPE-specific drivers and tools (without upgrading the OS, so not using the HPE custom ISO), how do you make sure that you can filter only the HPE drivers/tools and install/update only those? In principle I want to update all the tools and drivers that make the difference between the vanilla VMware stock image and the HPE custom image by using Lifecycle Manager baselines (the base is always an HPE custom image where all tools and drivers are already present, so it is mostly an update of them).

Best regards
Roland
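For the filtering part, one low-tech way I use to see which VIBs make the difference between the stock image and the HPE custom image is to compare the `esxcli software vib list` output of an HPE host against a host (or offline bundle) built from the vanilla image. A minimal sketch in Python, assuming the two listings have been saved to text files first; the file names and the vendor strings are assumptions, adjust them to what your hosts actually report:

```python
# Compare two saved `esxcli software vib list` outputs and show the vendor-specific VIBs.
# Assumption: each data line looks like "<name>  <version>  <vendor>  <acceptance>  <date>".

def parse_vib_list(path):
    """Return {vib_name: (version, vendor)} from a saved `esxcli software vib list` output."""
    vibs = {}
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) >= 3 and not line.startswith(("Name", "---")):
                name, version, vendor = parts[0], parts[1], parts[2]
                vibs[name] = (version, vendor)
    return vibs

hpe_host = parse_vib_list("hpe_host_vibs.txt")      # hypothetical file names
stock    = parse_vib_list("stock_image_vibs.txt")

# VIBs present on the HPE host but not in the stock image, or carrying an HPE vendor string.
hpe_specific = {
    name: info for name, info in hpe_host.items()
    if name not in stock or info[1] in ("HPE", "Hewlett-Packard")
}

for name, (version, vendor) in sorted(hpe_specific.items()):
    print(f"{name:40s} {version:30s} {vendor}")
```

The resulting list is then the set of drivers/tools I would expect to see covered by the HPE depot or the partner add-on baseline.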
I can answer that question myself in the meantime: according to VMware, there is currently no e-mail alerting. This feature should apparently be available with release 4.4, which is coming soon.
Hello

On our old Usage Meter appliances in version 3.x there was a way to send e-mails when problems arose, for example when a monitored vCenter Server was not reachable. In that case an e-mail was sent. I have now upgraded to version 4.3 and cannot find any settings regarding alarms. Is there no way to be alerted when there are issues on the appliance? Does anyone know how to configure some alerting on 4.x?

Best regards
Roland
The Clustering Deep Dive eBook is really a great read. Compliments!
I have checked the active memory on the cluster: 'Active Guest Memory' is somewhat over 300 GB. So as described above, the warning "Running VMs utilization cannot satisfy the configured failover resources on the cluster" pops up when the available unreserved memory for the VMs is approximately 0.3 TB (Host failures cluster tolerates = 7). If I have 0.7 TB of unreserved memory available (Host failures cluster tolerates = 6), the warning does not appear. So this confirms the theory that VMware is in fact working with active memory.
Hello

I sometimes have trouble doing the VCSA backup in the VAMI. My job then stops with an error, the VAMI does not respond properly and freezes, and when I try to log in again after a browser refresh I still have the issue. It seems that the connection to the FTP server sometimes does not work correctly, probably due to a firewall. However, copying the json file has helped every time in my case. So great tip, thank you!
Hello Erich

Thank you for your input. I saw that information too. It seems that "memory actively used" is indeed the value that VMware uses for its calculation. Because I have to set 'Host failures cluster tolerates' to 7, which means there is only a very small amount of (non-reserved) memory available, there seems to be evidence that they use active memory.

I know what active memory means. Several tools, like for example the "Oversized VM Report" from Veeam One, base their calculations on active memory. So I think Veeam calculates the active memory over a certain period, takes the peak value and proposes that peak plus a buffer of ten or twenty percent as the recommended memory configuration. I don't know how this is calculated by vCOPS, that would be interesting to know.

For me, if I see a VM with for example 16 GB of RAM and I see that active memory is constantly around 1 GB, I know I can configure the VM with less memory without problems. I will then probably go to 8 GB, or to 4 GB or even 2 GB, but the more aggressively I decrease the memory, the longer I observe the VM afterwards - probably by means of the guest operating system's memory counters - to make sure the VM still performs well. I think if I configure a VM with exactly the (peak) amount of active memory, there is a good chance of a performance degradation. But if VMware really does calculate with "memory actively used" = active memory, in their eyes the same performance is guaranteed after a VM restart...? And we know that active memory during a VM restart is also very high for a couple of minutes...

So if I check my VCSA: configured memory 16 GB, active memory = 2 GB. Memory utilization according to the VAMI Memory Utilization Trending chart: 50%. Would anybody really configure the memory to 2 GB and expect the same performance? I am not sure whether this calculation is too aggressive.
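To illustrate the heuristic I described (peak active memory over an observation period, plus a ten to twenty percent buffer, rounded up to a size one would actually configure), here is a small sketch; the sample values and the candidate sizes are made up for the example:

```python
# Right-sizing heuristic: peak active memory + buffer, rounded up to a candidate size.
active_mem_samples_gb = [0.8, 1.1, 0.9, 1.3, 1.0, 1.2]   # hypothetical samples over a period
buffer_factor = 1.2                                       # 20 % buffer on top of the peak
candidate_sizes_gb = [1, 2, 4, 8, 16, 32]                 # sizes one would actually configure

peak = max(active_mem_samples_gb)
target = peak * buffer_factor
recommended = next(size for size in candidate_sizes_gb if size >= target)

print(f"peak active: {peak:.1f} GB, peak + buffer: {target:.2f} GB, "
      f"recommended configured memory: {recommended} GB")
```

Which is exactly where my doubt lies: a VM sized down to its active-memory peak can still behave much worse than the active counter alone suggests.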
This is true for the percentage-based failover capacity. But it is not the same as the 'Performance degradation VMs tolerate' setting.
Hello

No, I don't think it is based on consumed memory. Because when I set 'Host failures cluster tolerates' to 6, my available memory in a failure state is 5 TB * 0.4 = 2 TB. Memory consumed in the cluster is 2.87 TB. But as mentioned: in this configuration the warning does not appear! I have to set it to 7 hosts, where the available non-reserved memory is only about 0.3 TB. Only then does the warning pop up.

So probably VMware is really calculating with 'active memory' (plus perhaps a small percentage of buffer), and if a VM gets its active memory in the failure state it is OK for them and "there is no performance degradation"...?

I know that this setting has nothing to do with reservations. It is rather made for people who don't work with reservations for every VM, to get a feeling for how their VMs will perform in a disaster. Because we all know that without reservations we can start almost unlimited VMs, but they will have poor performance, and swapping occurs when the cluster is heavily overloaded...
Hello

Yes, the upgrade procedure is the same. You can upgrade directly from VCSA 6.0 to 6.7 (from 5.5 you need an intermediate step to 6.0 or 6.5; you cannot upgrade directly to 6.7).

The VCSA name will be the same as it was on your old VCSA. You will probably have to change the VM label in the inventory to the same name the old VCSA had and rename the old one to "VCSAName_old" or something like that.

You can manage clusters with ESXi hosts on both 6.5 and 6.7. Of course, if version 6.7 has a new feature, you won't have it available on the 6.5 ESXi...

For configuration maximums see https://configmax.vmware.com/guest and http://sdebbeche.com/wp-content/uploads/2016/11/vsphere-65-configuration-maximums.pdf , the latter is for vSphere 6.5. In vSphere 6.7 the numbers changed slightly, see http://vsphere-land.com/news/configuration-maximum-changes-in-vsphere-6-7.html for the differences.

best regards
Hi

"Can we power on the 5.5 vCenter Server Appliance and revert it back to the snapshot which we took prior to the 6.5 upgrade? If so, would it take the IP address back, which was exported to the new 6.5 appliance during the migration process?"

I think yes. Especially if you have not yet upgraded your ESXi hosts, this should work.

"If not, can we do a fresh installation of the 6.5 vCenter Server Appliance and attach the ESXi hosts to it? If so, are there any dependencies (vCenter settings, network settings, ESXi host licensing, vCenter licensing) we need to be aware of?"

I would also recommend a fresh installation of the VCSA. Your environment with only three hosts does not seem to be very complex. And if you use standard switches, their configuration is already in place and you will see all the port groups on the new VCSA.

If your old vCenter is running, it is also possible to set up a new VCSA in parallel (new IP and hostname) and then just detach a host (with all its VMs) from the old vCenter and attach it to the new VCSA. All VMs should then be visible in the new VCSA. This as a proposal in case you cannot get the upgrade to work.

best regards
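If you go the fresh-install route, attaching the hosts can also be scripted. A minimal pyVmomi sketch, assuming a datacenter named "DC1" already exists on the new VCSA and that certificate verification is simply disabled for the lab case; hostnames, credentials and the datacenter name are placeholders:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Connect to the new VCSA (certificate verification disabled for simplicity).
ctx = ssl._create_unverified_context()
si = SmartConnect(host="new-vcsa.example.com", user="administrator@vsphere.local",
                  pwd="***", sslContext=ctx)
content = si.RetrieveContent()

# Find the target datacenter by name (placeholder name).
dc = next(e for e in content.rootFolder.childEntity
          if isinstance(e, vim.Datacenter) and e.name == "DC1")

# Connection spec for the ESXi host; force=True takes the host over even if it is
# still registered with the old vCenter.
spec = vim.host.ConnectSpec(hostName="esx01.example.com", userName="root",
                            password="***", force=True)
# Note: depending on the environment the task may require the host's SSL
# thumbprint to be supplied in spec.sslThumbprint.

# Add the host as a standalone host under the datacenter's host folder.
task = dc.hostFolder.AddStandaloneHost_Task(spec=spec, addConnected=True)

Disconnect(si)
```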
Hello

I am trying to find out which memory metric the 'Performance degradation VMs tolerate' setting in the Admission Control menu is based on.

Let's do an example: I have 10 ESXi hosts, each with 512 GB RAM, resulting in a total of 5 TB of RAM. Used memory in the cluster is 2.87 TB (if I manually sum up the consumed memory of all VMs, I get virtually the same amount). Some VMs have reservations, most VMs don't. The total amount of reservations in the cluster is 1.2 TB.

Admission Control is configured like this: normally I configure 'Host failures cluster tolerates' to five hosts, so 50 percent of my "stretched over two sites" cluster can fail. In this example, let's configure this setting to six hosts. This results in this graphic: above we see the gray shaded bar, that is my 60% failover capacity. I also see the reserved memory in blue. So in this state, every VM with reservations of course receives its reservation, and the rest of the available memory (for the VMs without or with only partial reservations) is the light gray bar. This is approximately (5 TB * 0.4 - 1.2 TB) = 0.8 TB of RAM.

From this it follows that:
in the normal state (10 hosts available) the VMs are using 2.87 TB (thereof 1.2 TB reserved) --> unreserved capacity occupied by all VMs: 1.67 TB
in a failover state (4 hosts available) the VMs can only use 2 TB (thereof still 1.2 TB reserved) --> unreserved capacity occupied by all VMs: 0.8 TB

So in a failover state (4 hosts running), the VMs without reservations have to move closer together. But it seems that there is still enough capacity, and there is no warning that the running VMs' utilization cannot satisfy the configured failover resources on the cluster. Although the VMs have less memory available, there is not yet a performance degradation!

OK, let's try with 'Host failures cluster tolerates' set to 7 hosts. Of course the gray shaded bar gets longer, the blue bar for reserved memory stays the same, and the gap between them (the non-reserved memory) gets very small, only approximately 0.3 TB. So now, after waiting a few minutes, the cluster complains about insufficient failover resources...

So I am curious which memory metric is used here for the calculation. I don't think it is only active memory? Because a calculation based only on active memory seems too aggressive in my opinion, and with only active memory taken into consideration one can expect some performance degradation. Is it perhaps active memory plus a certain percentage as a buffer? Does anyone know more about how this calculation works? Because I would like to get a feeling for how my VMs will perform in a failover state (apart from the algorithm telling me that the VMs don't have memory issues and that "the same performance is guaranteed...").

Best regards
Roland
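Just to make the arithmetic from the example explicit, here is the same calculation as a few lines of Python, using the rounded numbers from the post:

```python
# Cluster capacity vs. admission control failover reservation (numbers from the post).
hosts = 10
total_tb = 5.0                      # 10 hosts x 512 GB, rounded to 5 TB as in the example
reserved_by_vms_tb = 1.2            # sum of VM-level memory reservations

for host_failures_tolerated in (6, 7):
    failover_fraction = host_failures_tolerated / hosts        # 0.6 or 0.7 reserved for failover
    usable_tb = total_tb * (1 - failover_fraction)              # capacity left for running VMs
    unreserved_tb = usable_tb - reserved_by_vms_tb               # what the unreserved VMs share
    print(f"host failures tolerated = {host_failures_tolerated}: "
          f"usable {usable_tb:.1f} TB, unreserved {unreserved_tb:.1f} TB")

# With 6 tolerated failures the unreserved pool is ~0.8 TB; with 7 it shrinks to ~0.3 TB,
# which is roughly the ~300 GB of 'Active Guest Memory' measured on the cluster - the point
# where the "cannot satisfy the configured failover resources" warning starts to appear.
```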
Hello

Thanks for your answer. I didn't expect that HA and DRS wouldn't work in this scenario with two vDS. I thought they would work, because when I do a vMotion for test purposes from "vDS A" to another "vDS B" with identically labelled port groups, the port group name is suggested correctly in the wizard, so I was assuming that HA would work as well...

I know, a vDS should normally not break. But a fatal configuration failure could probably break it in the worst case. Or an upgrade of a vDS to a later version could fail, and the connectivity of VMkernel adapters and virtual machines could be lost. But of course: if you are right and DRS and - even more important - HA don't work with two vDS, I really have to go with only one...

Best regards and thank you
Roland
Hello

I need to build a stretched cluster between two datacenters. There are 4 hosts on each site, so a total of 8 hosts in the cluster. I have HPE Peer Persistence LUNs implemented, so synchronously mirrored LUNs protect the VMs, and if one site fails, the VMs will be restarted automatically by vSphere HA on the hosts of the second site. Admission Control host failures is set to 4, so one site can fail completely and 50% of the resources are reserved by admission control.

Until now I have a vDS implemented on only one datacenter. In my new scenario described above, I face the question whether I should configure only one vDS across all hosts (simple setup, easy maintenance) or go with two vDS. The latter means that the four hosts in "datacenter A" share one vDS and the four in "datacenter B" share another vDS. This is a little more complicated to maintain. The most important goal, however, is reliability and system stability. So if I have only one vDS and it breaks, in the worst case the networking on all hosts/VMs is affected. With 2 vDS I have a good chance that half of the hosts/VMs still have network connectivity.

One question I am not sure about: will DRS work and move VMs between two ESXi hosts with different vDS? (The port groups will be labelled with the same name; I know that this works in vSphere 6.5 in the meantime, if the two vDS are located in different network folders.)

What would you recommend? Which approach would you suggest under the assumption that stability is most important? Will DRS work in this scenario?

Best regards
Roland
Hello Same issue here, we are running ESXi 6.5 U1 with the latest HPE Agents and every host is logging the event between 20 and 60 times per 24 hours. According to HPE, these log entries can be ignored: HPE Support document - HPE Support Center
Thanks for your answer.

What would be the value for the worst case allocation in my scenario? For the VMs with no reservation, is it calculated like this: 80 GB / 15 VMs = 5.3 GB worst case allocation? (15 VMs are competing for 80 GB of available (= non-reserved) memory.)

Best regards
Roland
I have some comprehension questions concerning HA Resource Reservation.

Let's assume I have a vSphere cluster with two nodes. Each node has 100 GB of RAM, so total RAM = 200 GB in my cluster. I configure Admission Control with a failover capacity of 50%. Now there is, for example, one VM with 20 GB of RAM and a 100% RAM reservation. In addition there are 15 other VMs with 10 GB RAM each, and none of them has a reservation. To keep the calculation simple, let's assume there is no memory overhead.

So I think the Resource Reservation calculation is as follows:
Total capacity: 200 GB
Used reservation: 120 GB (50% from Admission Control and 20 GB from one VM)
Available reservation: 80 GB
The HA state of the cluster should be green, right?

Now a node failure happens. I now have 100 GB of RAM left in the cluster. The VM with the 20 GB reservation is powered on - its memory is guaranteed. All remaining 15 VMs with no reservations have to share the remaining 80 GB of memory, so they compete for memory.

Is my assumption correct that Admission Control calculates the resources by considering the configured host failover capacity plus all memory reservations? And if there are enough resources to satisfy the needed amount of memory, the admission control state is okay? Are the failover resources reported as okay even if there are plenty of other VMs without memory reservations, as mentioned above, that have enough memory when the cluster is healthy but have to compete for memory in case of a host failure?

What would happen if I configure 'Performance degradation VMs tolerate' to 0%? Would there be a warning in case of a loss of 50% of the memory, if the memory of each VM would become less than the memory allocated in the healthy state?

Will the reported Worst Case Allocation value for the VMs with no reservation get lower if resources are short in case of a host failure? In my lab, every VM reports its configured amount of memory as Worst Case Allocation... so what has to happen for the Worst Case Allocation value to get lower? (I know that if there are memory reservations, the worst case allocation will be at least the reservation.)

best regards
Roland
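Here is a small worked sketch of the accounting I think is happening, with the numbers from the example (this is just my reading of it, not an official formula):

```python
# HA admission control accounting for the two-node example (my interpretation).
total_gb = 2 * 100                 # two nodes with 100 GB each
failover_capacity = 0.50           # 50 % of cluster memory reserved for failover
vm_reservation_gb = 20             # the one VM with a 100 % reservation
unreserved_vms = 15                # VMs with 10 GB each and no reservation

used_gb = total_gb * failover_capacity + vm_reservation_gb   # 100 + 20 = 120 GB
available_gb = total_gb - used_gb                             # 80 GB -> cluster stays green

# After one node fails, only 100 GB remain; the reserved VM still gets its 20 GB,
# and the 15 unreserved VMs compete for the rest.
surviving_gb = 100
share_per_vm_gb = (surviving_gb - vm_reservation_gb) / unreserved_vms   # ~5.3 GB each

print(f"used: {used_gb:.0f} GB, available: {available_gb:.0f} GB, "
      f"worst-case share per unreserved VM: {share_per_vm_gb:.1f} GB")
```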
I have the same issue and clicking the log in button instead of pressing enter has helped in my case, thank you!
Thanks for your answer.

10 GbE for vMotion is really very fast. If you have an ESXi host with 256 GB of RAM, it's impressive how fast the VMs are moved to another ESXi host.

I am currently testing a configuration with only 2 NICs of 10 GbE each, and I built it as proposed here: vSphere networking configuration with 10 GbE | VMwaremine - Mine of knowledge about virtualization (scenario one). I think this design makes sense and I have plenty of speed for vMotion, management and all the trunk port groups. (vDS is not available because I don't have Enterprise Plus licenses.)

Best regards
Roland
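For that two-NIC standard-switch design, the per-port-group active/standby order can also be set with pyVmomi. A minimal sketch, assuming vSwitch0 with vmnic0/vmnic1 as uplinks, a vMotion port group on VLAN 20, and the usual opposite failover order for the management port group; all names, credentials and VLAN IDs are placeholders:

```python
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="***", sslContext=ctx)
host = si.RetrieveContent().searchIndex.FindByDnsName(dnsName="esx01.example.com",
                                                      vmSearch=False)

# Port group "vMotion" on vSwitch0, VLAN 20, with explicit failover order:
# vmnic1 active, vmnic0 standby (the management port group would use the reverse order).
teaming = vim.host.NetworkPolicy.NicTeamingPolicy(
    policy="failover_explicit",
    nicOrder=vim.host.NetworkPolicy.NicOrderPolicy(activeNic=["vmnic1"],
                                                   standbyNic=["vmnic0"]))
pg_spec = vim.host.PortGroup.Specification(
    name="vMotion", vlanId=20, vswitchName="vSwitch0",
    policy=vim.host.NetworkPolicy(nicTeaming=teaming))

host.configManager.networkSystem.AddPortGroup(portgrp=pg_spec)
```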
Hello

Is this statement correct: "When I cold migrate a (powered off) VM from one ESXi host's local storage to another ESXi host's local storage via vCenter, the traffic flows through the management traffic port group, not the vMotion port group"?

I ask because when I do a hot migration, regardless of whether it is a vMotion or a vMotion with Storage vMotion, the traffic flows through the vMotion port group. That is the behavior I expect. But why does the traffic not flow through the vMotion port group as well when I do a cold migration? Or is there a misconfiguration? I have different port groups on different NICs, there are different subnets configured, and every port group has just one traffic type assigned.

I am just wondering: what if I have a 10 GbE vMotion network and a 1 GbE management network? When doing a cold migration, the traffic will choose the slow 1 GbE connection? Everybody gives as much bandwidth as possible to vMotion, but in that scenario the traffic goes through the management network? I know - in most cases we are doing a vMotion or Storage vMotion. But I was astonished when I noticed that the traffic chooses the management port group and not the vMotion port group...

Is this really the default behavior as described?

Best regards
Roland
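As far as I know, since vSphere 6.0 a VMkernel adapter can be tagged for "Provisioning" traffic (or placed on the Provisioning TCP/IP stack), so that cold migration, cloning and snapshot (NFC) data no longer has to go over the management interface; without such an adapter it falls back to management, which would match what I am seeing. A small pyVmomi sketch to check which VMkernel adapter is currently selected for each traffic type; hostnames and credentials are placeholders:

```python
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="***", sslContext=ctx)
host = si.RetrieveContent().searchIndex.FindByDnsName(dnsName="esx01.example.com",
                                                      vmSearch=False)

# For each traffic type, show which vmkernel adapters are currently selected.
vnm = host.configManager.virtualNicManager
for cfg in vnm.info.netConfig:
    if cfg.nicType in ("management", "vmotion", "vSphereProvisioning"):
        # selectedVnic holds keys that reference the candidate vmknics
        selected = set(cfg.selectedVnic or [])
        devices = [v.device for v in cfg.candidateVnic if v.key in selected]
        print(f"{cfg.nicType:20s} -> {devices if devices else 'none selected'}")
```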