sk84's Accepted Solutions

There is no official evaluation mode for vCloud Director 9.x because vCD 9.x is only available for Service and Cloud Providers with an active VCPP contract. And within VCPP, vCD is free. Therefore a trial period makes no sense for this product.
Since vSphere 6.7, only TLSv1.2 is enabled by default. In addition, there is a tool for managing the TLS protocols: Managing TLS Protocol Configuration with the TLS Configurator Utility. But since you didn't specify your version, other vSphere versions may look different. Whether changing the SSL/TLS settings has an impact depends mainly on third-party software. vSphere itself (vCenter and the ESXi hosts) works fine with higher TLS versions from 6.5 onwards. However, if you are using other software (backup software, monitoring tools, or other VMware products in older versions), it may no longer work. And if you use the vSphere (Web) Client with an older browser that does not support TLS v1.2, you won't be able to connect to the vCenter Server.
The security team called this a security vulnerability. Seriously? The security team has no idea. It's only a security problem if access is not regulated and unauthorized people can reach this host...
You must also put the request in quotation marks. Otherwise the "&" character is interpreted by the shell (run the task in the background) and is no longer part of the request passed to curl. Therefore your request is always truncated at the "&" character and you always see the same page. So the correct command would be:

curl -i -k -H "Accept:application/*+xml;version=31.0" -H "x-vcloud-authorization:f9c49bfdghkjkfgd4ebf4d7e2d89c17b" -X GET "https://my-vcd/api/query?type=adminUser&page=2"
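As a quick illustration (using only shell parameter expansion, no real API call), this sketch shows what the shell would effectively hand to curl if the URL were left unquoted:

```shell
# The full request URL from the answer above.
full='https://my-vcd/api/query?type=adminUser&page=2'
# Without quotes the shell treats "&" as "run in background", so curl only
# ever sees the part of the URL before the "&":
truncated="${full%%&*}"
echo "what curl receives unquoted: $truncated"
echo "what curl receives quoted:   $full"
```

That truncated request has no page parameter at all, which is why you always got the same (first) page back.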
Also the VMware compatibility hardware site lists the HBA as SAS device.

Yeah. Because it's a SAS HBA.

My concern is that since the HBA does not have memory for caching and BBU the performance will be really slow.

You will not get any performance benefits, but the performance should not be worse than the slowest hard disk. And with a missing BBU you only risk losing some I/O during a power outage of the server. In my opinion, these restrictions are justifiable for a homelab.

Are there any problems with ESXi and HBA RAID arrays?

As long as this HBA card is on the HCL, there should be no problems. But it is important to use a supported driver/firmware combination. See: VMware Compatibility Guide - I/O Device Search

The RAID will be built with: 1) Two 3TB HDD RAID 1 2) Two 500GB SSD RAID 0

I would reconsider whether you really want to make a RAID 0 for the SSD datastore. The performance gain is relatively unimportant in this case and you get double the storage space, but if one SSD dies, all VM data on this datastore is lost. Usually it makes more sense to configure two single datastores with one SSD each. If one SSD fails, then at least not all data is gone. And you can also distribute the virtual disks over both datastores.
So the above test was done to verify if it is expected behavior. Is there a way to have the link state be propagated to the virtual switch?

Yes, as already explained, this is the expected behavior. The virtual switch knows the link state of the physical NICs, but the link state is not forwarded to virtual machines, and there is also no mechanism like "Link State Tracking". If your physical NIC supports SR-IOV, you can attach a virtual function of the physical NIC "directly" to a virtual machine. Or build your application / guest OS cluster with 3 nodes, where one of them acts as a witness. That's the only proper cluster design. The pairing of 2 nodes can always lead to a split-brain scenario and can therefore not seriously be called a cluster solution.
Put in place by our engineering dept, but supported by my group. Engineering decides on when/if it gets any updates or upgrades to new versions.

In this case your engineering team did not do a good job. vCloud Usage Meter 3.5 has been End of Support since April 1, 2019, and VMware requires all service and cloud providers to upgrade to at least version 3.6 (or better, 3.6.1 Hotpatch 2). According to your VCPP contract and the VCPP Product Usage Guide, you are obliged to do this. Providers who had not updated by March 31, 2019 may be subject to a compliance review. See: Usage Meter 3.5 End of General Support | VMware Cloud Provider Blog

Version 3.6.1 also has some bugfixes concerning the metering of vROps. Maybe this already solves your problem. Therefore I strongly recommend updating to version 3.6.1 Hotpatch 2 as soon as possible.
Okay. Good point. I didn't check the notes on the Lenovo Compatibility Matrix. In that case you shouldn't use the Lenovo custom image, but the standard ESXi 6.0 U3 image from VMware: Download VMware vSphere
Yes. You can unpublish the plugin with the following command:

python publish.py -H HOST -u USERNAME -p PASSWORD -a unpublish

See "python publish.py --help":

usage: publish.py [-h] -H HOST -u USERNAME -p PASSWORD
                  [-a {install,delete,publish,unpublish,register}]

Install Operations Plugin

optional arguments:
  -h, --help            show this help message and exit
  -H HOST, --host HOST  vCloud Host
  -u USERNAME, --username USERNAME
                        vCloud Username
  -p PASSWORD, --password PASSWORD
                        vCloud Password
  -a {install,delete,publish,unpublish,register}, --action {install,delete,publish,unpublish,register}
                        action to perform
Elastic Allocation Pools only play a role for you as a provider: where and how the resources are allocated (on VM level or resource pool level, for example). And the CPU guarantee value also only exists for you as a provider to overcommit the CPU resources. If you guarantee 10%, only 10% of the used CPU resources will be set as a reservation in vSphere. But for the customer it looks like 100%. So, if the CPU resources are completely used, the customer cannot power on any further VMs.

As an example: 1 GHz vCPU speed, an allocation pool with 12 GHz, and a 10% CPU guarantee. The customer configures VMs with 12 vCPUs in total. In the vCD GUI the customer now sees that 100% of the CPU resources are in use. But in vSphere only 1.2 GHz are reserved for the customer. So you as a provider can overbook the CPU resources. That's the advantage of the allocation pool model in contrast to the reservation pool model, where all resources are bound to the customer.
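The arithmetic behind this example can be sketched in a few lines of shell (the numbers are the hypothetical ones from above, expressed in MHz to keep the math integer-based):

```shell
pool_mhz=12000      # 12 GHz allocation pool shown to the tenant
vcpu_mhz=1000       # 1 GHz configured vCPU speed
vcpus=12            # total vCPUs the customer has configured
guarantee_pct=10    # 10% CPU guarantee set by the provider

used_mhz=$((vcpus * vcpu_mhz))                    # what the tenant "uses"
reserved_mhz=$((used_mhz * guarantee_pct / 100))  # what vSphere actually reserves
echo "tenant view:  $((100 * used_mhz / pool_mhz))% of the pool consumed"
echo "vSphere view: ${reserved_mhz} MHz reserved"
```

The gap between the 12000 MHz the tenant sees as consumed and the 1200 MHz actually reserved is exactly the headroom the provider can overcommit.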
It seems that the permissions on the NFS share are still not correct:

2019-05-01 16:23:48 | Mounting NFS file share...
2019-05-01 16:23:48 | System ip0 is: 10.200.1.6
chown: changing ownership of '/opt/vmware/vcloud-director/data/transfer/foo': Operation not permitted
chown: changing ownership of '/opt/vmware/vcloud-director/data/transfer': Operation not permitted
ls: cannot access '/opt/vmware/vcloud-director/data/transfer/appliance-nodes': No such file or directory
ls: cannot access '/opt/vmware/vcloud-director/data/transfer/cells': No such file or directory

The share must be readable and writable for the vcloud user (user ID 1000 or 1001; I'm not sure which one the appliances use in 9.7). And because of this permission error, vCD is not able to write the pg_hba.conf file to the NFS share, and therefore the other nodes can't connect to the master DB:

2019-05-01 16:23:54,935 | ERROR | main | ConfigAgent | Could not connect to database: FATAL: no pg_hba.conf entry for host "10.200.1.7", user "vcloud", database "vcloud", SSL off |
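For reference, a minimal /etc/exports entry on a Linux NFS server that avoids exactly this chown error might look like the line below. The path and network are placeholders for your environment; no_root_squash is the important option, because without it the appliance's root user is not allowed to chown the transfer directory to the vcloud user:

```
/nfs/vcd-transfer  10.200.1.0/24(rw,sync,no_subtree_check,no_root_squash)
```

After editing /etc/exports, re-export with "exportfs -ra" and retry the deployment.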
Basically the information in this blog article is still correct. Meanwhile there is also the vSAN Ready Node Configurator available, which is mentioned in the article: https://vsansizer.vmware.com/
The questions cannot all be answered concretely, as the answers differ a lot from use case to use case.

1./2. Basically you only need 2 hosts for HA. But if a part is broken or a host fails, you are not protected until the part or host is replaced. Experience shows that it can sometimes take a while until the error is localized and the hardware is replaced. Also, you are not protected against failures during maintenance. Therefore it is generally considered best practice that a production cluster has at least 3 hosts. However, it is a risk assessment and a question of your infrastructure and also of your hardware support contract. If you can handle a failure of one or more days, or there is an alternative for the running VMs, it might be better to have only 2 hosts to save money.

3. I'm not familiar with low-budget storage systems. Here I would simply get offers and compare. But I recommend choosing a storage system from a well-known manufacturer, and maybe something with an SSD cache and automated storage tiering. The storage is one of the most important components in a virtualized setup, both in terms of reliability and performance. So I wouldn't take the cheapest storage from an unknown vendor for production VMs.

4. With vSAN, I would make comparisons. It can be cheaper. But if you can't or don't want to run an external witness appliance or host, you definitely need 3 hosts for a vSAN cluster.

5. This would basically be feasible and an alternative, but it also has some disadvantages. Above all, it's pointless with your concept. With the small number of hosts and CPU sockets you can license the environment with an Essentials (Plus) license, and HA is already included. This is one of the cheapest licensing models, and it works more easily and better than vSphere Replication.

6. How many CPUs and cores per CPU you need depends on the workload and VM sizing. So I can't answer that in general. But (as an example): If you only have 7-10 VMs with a normal workload and the VMs have an average of 4 vCPUs, then I would probably recommend using only 1 CPU with 12-20 cores for cost reasons.
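To make the last example concrete: assuming a conservative consolidation ratio of 4 vCPUs per physical core (an assumption for illustration, not a fixed rule; the right ratio depends entirely on the workload), the rough core count can be computed like this:

```shell
vms=10          # upper end of the 7-10 VM estimate
avg_vcpus=4     # average vCPUs per VM
ratio=4         # assumed vCPU:pCore consolidation ratio (workload-dependent)

total_vcpus=$((vms * avg_vcpus))
# Round up: (a + b - 1) / b is integer ceiling division.
needed_cores=$(((total_vcpus + ratio - 1) / ratio))
echo "~${needed_cores} physical cores needed"
```

With these numbers you end up at 10 physical cores, which a single 12-20 core CPU covers with headroom.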
Shared storage is not necessary for FT and does not help you in this situation. The only requirement that must be met at this point is that all hosts on which an FT VM can be placed must be in a cluster with HA and FT enabled. And as soon as the Primary and Secondary VMs no longer see each other, the Secondary VM becomes active as the new Primary VM, and vSphere creates a new Secondary VM as an identical copy of the new Primary VM. In this way, split-brain situations are avoided. You can check the FT status in the "vSphere Fault Tolerance" area on the Summary tab of the FT VMs. There you can see the status, e.g. "Protected" or "Unprotected", and also which VM is the Primary and which the Secondary.
vSAN 2017 Specialist is just a digital badge. There is no PDF certificate for this. But you can claim and display the badge via Acclaim: VMware vSAN 2017 Specialist - Acclaim
Yes. Here you can read what you need to keep in mind: Remediation Specifics of Hosts That Are Part of a vSAN Cluster
As already said, vCD 9.x licenses are only available through VCPP.

Q. How is vCloud Director packaged and how may it be purchased?
A. vCloud Director licenses are fulfilled through the VMware Cloud Provider Program, with a specific number of licenses available at each contract level. Once enrolled in the program, the product can be downloaded from myVMware.
https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/products/vcloud/vmware-vcloud-director-faq.pdf

And it doesn't matter which use case you have. Whether internally or as a service provider, you receive vCD licenses only via a VCPP contract. License usage metering is then covered by the vCloud Usage Meter (a separate appliance). However, if you only want to provide a self-service portal for VM workloads to internal dev teams, vRealize Automation might be a better option for you.
Can we create a host group in cluster DRS for these resource pools (in vCenter Server) and then can it be implemented in vCloud Director?

You can create 2 different resource pools and PVDCs, and you can create DRS groups and rules. But you cannot bring these things together. A resource pool works cluster-wide on all ESXi hosts, and that doesn't get you anywhere here. And for DRS you can only create VM groups and host groups; you cannot select resource pools. So you would need an affinity rule for all VMs which should/must run on these hosts and, in addition, an anti-affinity rule for all VMs which should/must not run on these two hosts. In the end, you would have 1 host group with these 2 ESXi hosts, 2 VM groups (one with the customer's VMs and one with all other VMs), and 2 DRS rules. And this must be statically configured or managed through a (PowerCLI) script outside of vCD, because vCD cannot select a DRS VM group during VM deployment. That's why I said it's hard to manage.
Hello TheBobkin,

Yes. It's confusing, and I also had an SR open a few weeks ago because I couldn't find this setting (SR #19060154301). The KB article explicitly mentions the vSphere Client, not the vSphere Web Client ("In the vSphere Client, click Configure..."). And in the HTML5 user interface, I could find this setting at the vCenter level as well.
Okay. Since you have reconfigured the storage policy, did you try to delete the ESG appliance configurations, as I mentioned? These values are also defined after deployment. You can see them by selecting the Edge Gateway in the Networking & Security tab of the vSphere (Web) Client, under Manage -> Settings -> Configuration -> NSX Edge Appliances. If you select one of the two appliances there and click on the red X, it will not delete your entire ESG configuration, as that is stored in the NSX Manager. Only the resource configurations of the ESG VMs and the ESG VMs themselves are deleted. If you can delete the two entries there and you have already changed the default storage policy, a redeploy in vCD should work.