
> Does the existing VMware KMIP certification process cover this solution? I.e., the solution does not leverage any more KMIP capabilities than are used today by vSphere, vSAN, and vTA encryption?

[Radostin] Correct. We ask vCenter to handle all operations with KMIP, so whatever they support as infrastructure, we also support.

> I understand the sovereign cloud motivation. However, it seems to me that there are many more gaps to close. A vCenter/Director administrator with sufficient privileges can access tenant VM consoles and VM disks today even if they do not have access to the key provider credentials. Do you have a roadmap for closing such gaps? Without such a roadmap it seems premature to name this sovereign cloud.

[Radostin] Manish Arora is the PM for Sovereign Cloud, so maybe he can comment on the roadmap there. I understand the concern about the VM consoles, but how can they access the encrypted VM disks?

> In our particular case, we operate Director for our tenants and we also operate a multi-tenant key provider for them. It is our vision that we will connect the customer org directly to the customer's key provider instance, including the establishment of credentials, all transparently to the customer. Our desire is that the customer need not manage network connectivity, key provider credentials, or enrollment of particular VDCs in their org. The customer may still revoke either their root key or individual keys within the key provider instance as a means of retaining the right of cryptographic erasure.

[Radostin] Let me confirm that I understand you correctly. This basically means that you as a provider manage the KMS and manage the encryption keys on behalf of your tenants. The provider sets up the connectivity, authenticates to the KMS, and also specifies that key1 is to be used for encryption of the VMs in the tenant's Org VDC1 (or the full org, for the sake of the example).
For your tenant, all of the above would be fully transparent and they would not have to go through BYOE to set up anything. But still, they would be able to log in to their KMIP tenant and observe the keys (key1) which VCD is using there.

If the above is true, what workflows do you expect the tenant to be able to do with the encryption keys in their KMIP? For example, since you as a provider set up key1 for this customer, the customer cannot just log in to their KMIP tenant and revoke key1, because this would break their VMs until they are re-encrypted with a valid key. This means the tenant admin needs to call their provider and ask them to replace key1 with key2 and then revoke key1, because in this setup the provider manages the keys.

I will be happy to jump on a call and discuss the use cases - I believe it will be much faster and more efficient. If you are OK, maybe @jaskaranv can help us arrange it? Thanks again for your feedback!
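[Editorial illustration] The rotation sequence described above (provider creates key2, re-encrypts, then revokes key1) can be sketched as a toy model. This is pure Python for illustration only - `KeyStore`, `rekey`, and the key names are hypothetical, not an actual KMIP client or the add-on's API:

```python
# Toy model of the provider-managed key lifecycle discussed above.
class KeyStore:
    def __init__(self):
        self.keys = {}     # key name -> state ("active" | "revoked")
        self.org_key = {}  # org VDC -> key currently encrypting its VMs

    def create(self, name):
        self.keys[name] = "active"

    def assign(self, org_vdc, name):
        if self.keys.get(name) != "active":
            raise ValueError(f"{name} is not an active key")
        self.org_key[org_vdc] = name

    def revoke(self, name):
        # Refuse to revoke a key that still encrypts a VDC's VMs -- this is
        # the breakage the tenant would cause by revoking key1 directly
        # without asking the provider to rekey first.
        if name in self.org_key.values():
            raise RuntimeError(f"{name} is still in use; rekey first")
        self.keys[name] = "revoked"

def rekey(store, org_vdc, old, new):
    """Provider-side rotation: bring in `new`, re-encrypt, then revoke `old`."""
    store.create(new)
    store.assign(org_vdc, new)  # VMs re-encrypted under the new key
    store.revoke(old)

store = KeyStore()
store.create("key1")
store.assign("OrgVDC1", "key1")
rekey(store, "OrgVDC1", "key1", "key2")
print(store.keys["key1"], store.org_key["OrgVDC1"])  # revoked key2
```

The ordering constraint (assign before revoke) is exactly why the tenant cannot safely revoke key1 unilaterally in the provider-managed setup.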
Thank you, Radostin. Here is an additional question and a few brief replies:

Does the existing VMware KMIP certification process cover this solution? I.e., the solution does not leverage any more KMIP capabilities than are used today by vSphere, vSAN, and vTA encryption?

I understand the sovereign cloud motivation. However, it seems to me that there are many more gaps to close. A vCenter/Director administrator with sufficient privileges can access tenant VM consoles and VM disks today even if they do not have access to the key provider credentials. Do you have a roadmap for closing such gaps? Without such a roadmap it seems premature to name this sovereign cloud.

In our particular case, we operate Director for our tenants and we also operate a multi-tenant key provider for them. It is our vision that we will connect the customer org directly to the customer's key provider instance, including the establishment of credentials, all transparently to the customer. Our desire is that the customer need not manage network connectivity, key provider credentials, or enrollment of particular VDCs in their org. The customer may still revoke either their root key or individual keys within the key provider instance as a means of retaining the right of cryptographic erasure. Thanks!
Hi @smoonen , Thanks for your feedback and for taking the next step in testing the solution add-on!

> It's interesting to me that this solution exists in a space in between provider and tenant. Unlike the org SSO configuration, there is some responsibility on the provider's part to establish connectivity to the KMS. However, there is still a tenant responsibility to authenticate with the KMS.

[Radostin] Yes, you are right. One of the primary use cases of the solution is Sovereign Cloud, where tenants want to own and control their encryption keys. The provider registers the KMS because they have to ensure the connectivity between VC and KMS, carefully review the KMS certificate, and decide whether they want to trust it. This trust is stored in the underlying vCenter. When they publish the KMS to their tenants, the tenant admin needs to authenticate with their own credentials because it is their own KMS or their own space in a multi-tenant KMS.

> In the enterprise context, I can understand why this might be a good arrangement. However, in a cloud context, it is likely that the cloud provider is managing both Director and the KMS. In this case the cloud provider might have means of managing both the network connectivity to the KMS as well as the authentication to the KMS. As a result: 1. It seems desirable to me for this solution to optionally allow the provider to manage authentication instead of the tenant. In this case, the provider should be able not only to publish the KMS to a tenant org, but there should be a way to ensure that existing and new tenant VDCs are enrolled in the KMS rather than in the default key provider.

[Radostin] It seems the idea is for the provider to be able to manage multiple KMS systems and then publish those to their tenants so that they do not use the default key provider.
This departs from the initial purpose of the solution, which is to allow tenants to control their own encryption keys, but it is a good use case, and if there is wider need for it we can plan to add it in one of the next releases. Help me understand the use case - why do you want to manage multiple KMS systems and share KMS1 with tenant1 and KMS2 with tenant2? If you as a provider manage the encryption keys for your tenants and this is transparent to them, do they care in which KMS you store the keys?

> 2. It seems highly desirable to me for APIs or CLIs to be exposed allowing the provider to automate all of this configuration. The documentation as currently written only shows UI operations.

[Radostin] The solution add-on comes with a full set of APIs which support all of the operations you see in the UI. In fact, the UI uses this API entirely to perform its operations, so once you install the add-on, which can be done through a CLI, you can use those APIs to build your own experience.

> In fact I had pictured that this feature would be offered by establishing a vCenter KMS connection and selecting a Key Provider per org, much like vCenter today allows you to select a Key Provider per cluster. It's interesting to me that you've chosen to implement this as a solution add-on instead.

[Radostin] In VCD, solution add-ons are the way to deliver solutions and extensions faster, and our providers can monetize them, so expect to see more of these in the future.

> Thanks for the opportunity to review this early!

[Radostin] We look forward to more of your feedback!
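[Editorial illustration] Automation against VCD typically starts by obtaining a bearer token from the standard cloudapi session endpoint, which subsequent API calls (including any add-on APIs) can then use. A minimal sketch - the endpoint and header names follow the public VMware Cloud Director API; the hostname and API version are placeholders to adjust for your deployment, and the request is only constructed here, not sent:

```python
import base64
from urllib.request import Request

def provider_session_request(host: str, user: str, password: str) -> Request:
    # Provider (system) sessions authenticate as user@System via HTTP Basic.
    creds = base64.b64encode(f"{user}@System:{password}".encode()).decode()
    return Request(
        f"https://{host}/cloudapi/1.0.0/sessions/provider",
        method="POST",
        headers={
            "Accept": "application/json;version=37.0",
            "Authorization": f"Basic {creds}",
        },
    )

req = provider_session_request("vcd.example.com", "administrator", "password")
# Sending this request returns the bearer token in the
# X-VMWARE-VCLOUD-ACCESS-TOKEN response header; pass it as
# "Authorization: Bearer <token>" on subsequent cloudapi calls.
```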
I've reviewed the documentation - thanks - but haven't had a chance to test this yet.

It's interesting to me that this solution exists in a space in between provider and tenant. Unlike the org SSO configuration, there is some responsibility on the provider's part to establish connectivity to the KMS. However, there is still a tenant responsibility to authenticate with the KMS.

In the enterprise context, I can understand why this might be a good arrangement. However, in a cloud context, it is likely that the cloud provider is managing both Director and the KMS. In this case the cloud provider might have means of managing both the network connectivity to the KMS as well as the authentication to the KMS. As a result:

1. It seems desirable to me for this solution to optionally allow the provider to manage authentication instead of the tenant. In this case, the provider should be able not only to publish the KMS to a tenant org, but there should be a way to ensure that existing and new tenant VDCs are enrolled in the KMS rather than in the default key provider.

2. It seems highly desirable to me for APIs or CLIs to be exposed allowing the provider to automate all of this configuration. The documentation as currently written only shows UI operations.

In fact I had pictured that this feature would be offered by establishing a vCenter KMS connection and selecting a Key Provider per org, much like vCenter today allows you to select a Key Provider per cluster. It's interesting to me that you've chosen to implement this as a solution add-on instead.

Thanks for the opportunity to review this early!
I could manage the cluster (i.e.: kubectl get nodes, get pods etc)
I think the answer to the root CA issue is to add the certificate to "Cluster Certificates (Optional)" in the "CSE Management" window. Will try and see if it works.
Sorry for the delay, I am still looking into this with the engineering team. Beyond logging in, were you able to view/edit any TMC resources when using the `Cloud Administrator` role?
Thanks a lot! Seems it helped.
I managed to delete it manually by:

curl -ks -H "Accept: application/json;version=37.0" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer ${VCLOUD_ACCESS_TOKEN}" \
  -X DELETE \
  https://$VCD_HOSTNAME/cloudapi/1.0.0/entities/urn:vcloud:entity:vmware:solutions_add_on_instance:72f202b9-a8a9-46ac-8ebd-9fa4490d0f0b

The next problem is that the CSI 4.1 plugin does not have a certificate section during cluster creation. I will need to find a way to add the certificate after cluster creation.
Deleting the cluster does not remove entries from the VCD database related to the solution. There are steps on page 32 to delete the solution. Could you try those steps if you haven't already? You may need to follow the steps on page 31 to mark the solution as FAILED. I will reach out to the engineering team to get some next steps if that doesn't work.
I will discuss this with the engineering team and get back to you.
I am having issues with the installation of TMC-SM. I got to page 22 of the installation - I tried to install the TMC-SM add-on instance. It ran for some time, then stopped and remained in the 'In progress' state. So now I am not able to either delete it or create a new one (as only 1 instance is supported). I tried deleting the tmc Kubernetes cluster from the VCD UI and the project from Harbor and recreated those from scratch, but it seems something else needs to be cleaned up. I have attached the log from the tmc instance installation. Could you help with deleting it?
I am not sure if "CSE4" is referring to a VM or vApp, or if that is just some hardcoded name of no consequence to the search. I am asking because I have just updated CSE to 4.1 and deleted the previous vApp/VM (IIRC both called CSE4). Would be great to have some help with this, as I need to remove this instance and reinstall it.

root@PhotonOS-001 [ ~ ]# /mnt/cdrom/linux.run delete instance --name $TMC_SM_INSTANCE_NAME \
    --accept --host $VCD_HOSTNAME --username $VCD_USERNAME \
    --certificate-file /tmp/vcd.pem --encryption-key ${TMC_SM_ENCRYPTION_KEY} \
    --accept --password $VCD_EXT_PASSWORD
INFO  [0019] Triggering action                          action=hook event=PreDelete
INFO  [0021] All global roles are ready to delete       action=hook event=PreDelete
INFO  [0021] cluster:tmc                                action=hook event=PreDelete
INFO  [0021] Get Solution Org                           action=hook event=PreDelete
INFO  [0021] Solution Org: CSE                          action=hook event=PreDelete
INFO  [0021] Search CSE4 Cluster                        action=hook event=PreDelete
ERROR [0021] Failed to find any cse cluster in org CSE  action=hook event=PreDelete
ERROR [0021] Failed to delete instance 'tmc'            name=tmc
ERROR [0021] Failed to find any cse cluster in org CSE: exit status 23: failed to execute trigger hook  errorCode=5012120012191213
This is a known issue and will be fixed in GA.
I believe this can be solved by adding the root CAs to the kapp-controller pods.

Generate a ca-certificates.crt file with the contents of all CAs to be trusted:

rm -f ca-certificates.crt
cat rootCA.crt >> ca-certificates.crt  # Repeat for all trusted CAs

Load the certificate bundle into Kubernetes and update the kapp-controller deployment to include it in all pods:

kubectl create -n tkg-system configmap kapp-controller-ca-certificates --from-file=ca-certificates.crt
cat <<EOF | kubectl patch -n tkg-system deployment/kapp-controller --patch-file=/dev/stdin
spec:
  template:
    spec:
      containers:
      - name: kapp-controller
        volumeMounts:
        - mountPath: /etc/ssl/certs/ca-certificates.crt
          subPath: ca-certificates.crt
          name: ca-certificates
          readOnly: true
      volumes:
      - configMap:
          name: kapp-controller-ca-certificates
        name: ca-certificates
EOF

The kapp-controller pods will restart with the new configuration and should start working. You can follow the kapp-controller logs for more details:

kubectl -n tkg-system logs -f deployment/kapp-controller
Currently, I can log in to the TMC CLI in the following ways:

1) Using LDAP accounts with the `Cloud Administrator` role
2) Using LDAP accounts with the `tmc:admin` role
3) Using the local accounts `tmc-admin`, `tmc-member`, or any other local accounts with the `tmc:admin` or `tmc:member` role assigned to them

I cannot authenticate to the TMC CLI from LDAP/local accounts/groups for which I have authentication configured in the TMC GUI Access section. See the screenshot that shows the current access policy.

To me, it seems like the `tmc:admin` or `tmc:member` roles are necessary to log on to the TMC CLI and subsequently access the K8s API via, say, kubectl. However, having those roles automatically gives admin access to TMC-managed K8s clusters, which defeats the purpose of RBAC. Am I missing something?
I am unable to reconcile the tanzu-standard repo due to a certificate error. How can I import or trust the certificate authority for the Harbor host to overcome this issue?
I am getting this error: "API Error: Failed to list cluster's integrations: Not Implemented: please try again later (unimplemented)" as a red banner. Any hints on how to solve this? Thanks.
Introduction

The Installation Guide includes console commands to install prerequisites, prepare clusters, and install Tanzu Mission Control Self-Managed. Some of these commands are lengthy and are not easy to copy-paste out of the PDF document. This article provides a duplicate form of these commands so it is easier to follow along with the Installation Guide. This article does not include every step. Be sure to follow the Installation Guide and refer back to this article for complex commands.

Deploy Installer VM

# tdnf install -y git jq openssl-c_rehash tar unzip
# curl -L --output /usr/local/bin/kubectl \
    https://dl.k8s.io/release/v1.23.10/bin/linux/amd64/kubectl && chmod +x /usr/local/bin/kubectl
# curl -L https://github.com/carvel-dev/kapp-controller/releases/download/v0.46.1/kctrl-linux-amd64 \
    -o /tmp/kctrl && install /tmp/kctrl /usr/local/bin && rm /tmp/kctrl

Increase the capacity of /tmp to hold images prior to upload:

# umount /tmp && mount -t tmpfs -o size=10G tmpfs /tmp

Mount the solution ISO to the Installer VM:

# sed -i '/\/mnt\/cdrom/d' /etc/fstab
# mount /dev/sr0 /mnt/cdrom -t udf -o ro

Create a self-signed certificate authority:

# openssl req -x509 -sha256 -days 1825 -newkey rsa:2048 \
    -keyout $HOME/rootCA.key -out $HOME/rootCA.crt \
    -nodes -extensions v3_ca \
    -subj "/C=US/ST=CA/L=Palo Alto/O=CompanyName/OU=OrgName/CN=TMC-SM VCD Tech Preview Issuing CA"
# ls rootCA.*

Deploy Harbor

Configure certificates:

# export KUBECONFIG=$PWD/kubeconfig-harbor.txt
# kubectl create secret tls -n cert-manager selfsigned-ca-pair \
    --cert=$HOME/rootCA.crt --key=$HOME/rootCA.key
# cat <<EOF | kubectl apply -f -
{
  "apiVersion": "cert-manager.io/v1",
  "kind": "ClusterIssuer",
  "metadata": { "name": "selfsigned-ca-clusterissuer" },
  "spec": { "ca": { "secretName": "selfsigned-ca-pair" } }
}
EOF

Deploy Contour and Harbor

1. Set environment variables with configuration values.
# IP address to associate with the Load Balancer for Harbor
export HARBOR_LOAD_BALANCER_IP="10.11.12.13"

# Desired hostname for the Harbor service. This must be configured to point to
# the IP address above.
export HARBOR_HOSTNAME="harbor.${HARBOR_LOAD_BALANCER_IP}.**bleep**.io"

# This will be used as the initial password for the "admin" user
export HARBOR_ADMIN_PASSWORD="AdminPassword"

2. Prepare a values file for the Contour installation.

# cat <<EOF > contour-packageinstall-values.yaml
envoy:
  service:
    type: LoadBalancer
    loadBalancerIP: ${HARBOR_LOAD_BALANCER_IP}
EOF

3. Deploy Contour using the Tanzu package.

# kctrl package install \
    -i contour \
    -n tanzu-system \
    --package contour.tanzu.vmware.com \
    --version 1.20.2+vmware.2-tkg.1 \
    --values-file contour-packageinstall-values.yaml

4. Create a certificate for the Harbor services using the ClusterIssuer resource.

# kubectl create ns tanzu-system-registry
# cat <<EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: ${HARBOR_HOSTNAME}
  namespace: tanzu-system-registry
spec:
  secretName: ${HARBOR_HOSTNAME}-tls
  duration: 8760h # 365d
  renewBefore: 720h # 30d
  subject:
    organizations:
    - MyOrgName
  isCA: false
  privateKey:
    algorithm: RSA
    encoding: PKCS1
    size: 2048
  usages:
  - server auth
  - client auth
  dnsNames:
  - ${HARBOR_HOSTNAME}
  ipAddresses:
  - ${HARBOR_LOAD_BALANCER_IP}
  issuerRef:
    name: selfsigned-ca-clusterissuer
    kind: ClusterIssuer
    group: cert-manager.io
EOF

5. Prepare a values file for the Harbor installation.

# cat <<EOF > harbor-packageinstall-values.yaml
secretKey: $(head -1 /dev/random | base64 | head -c 16)
core:
  secret: $(head -1 /dev/random | base64 | head -c 16)
  xsrfKey: $(head -1 /dev/random | base64 | head -c 32)
jobservice:
  secret: $(head -1 /dev/random | base64 | head -c 16)
registry:
  secret: $(head -1 /dev/random | base64 | head -c 16)
database:
  password: $(head -1 /dev/random | base64 | head -c 16)
hostname: ${HARBOR_HOSTNAME}
harborAdminPassword: ${HARBOR_ADMIN_PASSWORD}
tlsCertificateSecretName: ${HARBOR_HOSTNAME}-tls
notary:
  enabled: false
persistence:
  persistentVolumeClaim:
    registry:
      size: 128Gi
EOF

6. Deploy Harbor using the Tanzu package.

# kctrl package install \
    -i harbor \
    -n tanzu-system \
    --package harbor.tanzu.vmware.com \
    --version 2.6.1+vmware.1-tkg.1 \
    --values-file harbor-packageinstall-values.yaml

Deploy TMC-SM for VCD

Configure certificates:

# export KUBECONFIG=$PWD/kubeconfig-tmc.txt
# kubectl create secret tls -n cert-manager selfsigned-ca-pair \
    --cert=$HOME/rootCA.crt --key=$HOME/rootCA.key
# cat <<EOF | kubectl apply -f -
{
  "apiVersion": "cert-manager.io/v1",
  "kind": "ClusterIssuer",
  "metadata": { "name": "selfsigned-ca-clusterissuer" },
  "spec": { "ca": { "secretName": "selfsigned-ca-pair" } }
}
EOF

Install the Solution Add-On

1. Set environment variables with the desired configuration settings.

export VCD_HOSTNAME=vcd.example.com
export VCD_USERNAME=administrator
export VCD_EXT_PASSWORD=password
export TMC_SM_INSTANCE_NAME=VALUE_REQUIRED
export TMC_SM_ENCRYPTION_KEY=MySuperSecretKeyThatIRemember

# Provide the Kubernetes cluster name for TMC deployment,
# e.g., tkgm-tmc-cluster
export TMC_SM_KUBE_CLUSTER_NAME=VALUE_REQUIRED

# Provide the DNS zone to configure TMC endpoints, i.e., tmc.mydomain.com
export TMC_SM_DNS_ZONE=VALUE_REQUIRED

# Provide the Load Balancer IP of Contour Envoy, i.e., 10.11.12.23. The TMC
# DNS zone should be mapped to this IP.
export TMC_SM_LOAD_BALANCER_IP=VALUE_REQUIRED

# Provide the Harbor project path for pushing/pulling TMC packages during
# installation, i.e., harbor.mydomain.com/myproject
export TMC_SM_HARBOR_URL=harbor.slz.vcd.local/tmc

# Provide the Harbor username for Basic authentication
export TMC_SM_HARBOR_USERNAME=robot\$tmc

# Provide the Harbor password for Basic authentication
export VCD_EXT_INPUT_HARBOR_PASSWORD=VALUE_REQUIRED

# Provide the base64-encoded CA bundle in PEM format of the Harbor server.
# It is required if the Harbor server certificate is not signed by a
# well-known certificate authority.
export VCD_EXT_INPUT_HARBOR_CA_BUNDLE=$(cat $HOME/rootCA.crt | base64 -w0)

############
# Optional Settings
############

# Set the MinIO root user name. Defaults to minioadmin.
export VCD_EXT_INPUT_MINIO_ROOT_USERNAME=

# Set the MinIO root user password. If left blank, a random password will be
# generated. Format: no less than 8 chars, at least 1 digit, at least 1
# special char (@$!%*#.,-_=*), at least 1 letter, i.e., P@ssw0rd
export VCD_EXT_INPUT_MINIO_ROOT_PASSWORD=

# Set TMC's PostgreSQL password. If left blank, a random password will be
# generated. Format: no less than 8 chars, at least 1 digit, at least 1
# special char (@$!%*#.,-_=*), at least 1 letter, i.e., P@ssw0rd
export VCD_EXT_INPUT_POSTGRES_PASSWORD=S3cretPGP@ssw0rd

# Set the default Grafana admin user name. Defaults to admin.
export VCD_EXT_INPUT_GRAFANA_ADMIN_USERNAME=

# Set the default Grafana admin user password. If left blank, a random
# password will be generated. Format: no less than 8 chars, at least 1 digit,
# at least 1 special char (@$!%*#.,-_=*), at least 1 letter, i.e., P@ssw0rd
export VCD_EXT_INPUT_GRAFANA_ADMIN_PASSWORD=

# Sets the timeout in seconds for TMC installation. Defaults to 3600.
export VCD_EXT_INPUT_DEPLOY_TIMEOUT=3600

2. Load the Harbor rootCA.crt.

# cp $HOME/rootCA.crt /etc/ssl/certs/harbor.pem && rehash_ca_certificates.sh
# timeout 1 openssl s_client -quiet -verify_return_error ${HARBOR_HOSTNAME}:443

3. Download the VCD certificate to a file.

# /mnt/cdrom/linux.run get certificates --host $VCD_HOSTNAME \
    --output /tmp/vcd.pem \
    --accept

4. Configure VCD to trust the TMC-SM VCD Integration Solution Add-On.

# /mnt/cdrom/linux.run trust --host $VCD_HOSTNAME \
    --username $VCD_USERNAME \
    --certificate-file /tmp/vcd.pem \
    --accept

5. Create the solution add-on instance.

# /mnt/cdrom/linux.run create instance --name $TMC_SM_INSTANCE_NAME \
    --host $VCD_HOSTNAME \
    --username $VCD_USERNAME \
    --certificate-file /tmp/vcd.pem \
    --encryption-key ${TMC_SM_ENCRYPTION_KEY} \
    --input-kube-cluster-name=${TMC_SM_KUBE_CLUSTER_NAME} \
    --input-cert-provider=cluster-issuer \
    --input-cert-cluster-issuer-name=selfsigned-ca-clusterissuer \
    --input-dns-zone=${TMC_SM_DNS_ZONE} \
    --input-contour-envoy-load-balancer-ip=${TMC_SM_LOAD_BALANCER_IP} \
    --input-harbor-url=${TMC_SM_HARBOR_URL} \
    --input-harbor-username=${TMC_SM_HARBOR_USERNAME} \
    --accept
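[Editorial illustration] Several of the optional inputs above (MinIO, PostgreSQL, Grafana) share the same stated password format: at least 8 characters, at least 1 digit, at least 1 letter, and at least 1 special character from @$!%*#.,-_=*. A small helper to sanity-check a candidate value before exporting it - a hypothetical convenience for illustration, not part of the add-on:

```python
import re

# Special characters the installer's comments say are accepted.
SPECIALS = r"@$!%*#.,\-_=*"

def valid_tmc_password(pw: str) -> bool:
    """Check the documented format: >=8 chars, >=1 digit, >=1 letter, >=1 special."""
    return (
        len(pw) >= 8
        and re.search(r"\d", pw) is not None
        and re.search(r"[A-Za-z]", pw) is not None
        and re.search(f"[{SPECIALS}]", pw) is not None
    )

print(valid_tmc_password("P@ssw0rd"))  # True
print(valid_tmc_password("password"))  # False: no digit or special char
```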