elihuj's Posts

Thank you daphnissov. Made the change and will monitor.
In order to monitor storage connectivity issues, we have the "Cannot connect to storage" alarm configured. Whenever we mount or unmount a Veeam NFS datastore, it triggers the alarm. I've tried disabling alarm actions on the datastore. Additionally, I've set the "Cannot connect to storage" trigger to "Datastore ID not equal to" the ID of the Veeam datastore. Unfortunately, the alarms persist. Is there a better way to silence unnecessary datastore alarms?
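In case it helps anyone else, per-entity alarm actions can also be toggled through the AlarmManager API rather than the UI. A minimal PowerCLI sketch (the vCenter address and datastore name are placeholders, and this flips the same per-entity switch the UI exposes, so it's worth confirming the alarm isn't actually firing on a parent object like the host or cluster):

```powershell
# Connect to vCenter (address is a placeholder)
Connect-VIServer -Server vcenter.example.com

# Target only the Veeam NFS datastore (name is an assumption)
$ds = Get-Datastore -Name 'VeeamNFS'

# AlarmManager is a vCenter singleton; EnableAlarmActions suppresses
# action firing (email/SNMP) for alarms on this one entity. The alarm
# state still changes color in the UI, but no actions run.
$alarmMgr = Get-View AlarmManager
$alarmMgr.EnableAlarmActions($ds.ExtensionData.MoRef, $false)
```

If the alarm keeps firing after this, the trigger is likely evaluating on a parent object, where disabling actions on the datastore itself has no effect.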
Yes, we're using two non-routable VLANs and RR as the PSP.
I have a perplexing issue with a client SQL server. The backstory is that their SQL jobs started taking incrementally longer and longer to complete. They say nothing has changed on the SQL Server side. This doesn't seem to be host related, as the issue persists from host to host. As part of our troubleshooting, we upgraded their hosts with 10Gb NICs to the SAN. The problem persisted. As we continued to troubleshoot, we moved their SQL VM to local storage on the host. They said that after the move the VM began performing flawlessly. I am not really sure why the VM would perform better on local disks. They moved from a LUN made up of 24x 15K disks in a RAID 10 to 2x 7.2K disks in a RAID 1. I completed a series of benchmark tests using FIO, hdparm, and qperf. Every test I have run has the SAN outperforming the local disks (in some cases by a great deal). The only thing I can really think of is latency. Their particular SAN utilizes copper instead of fiber. I can't imagine that would make such a significant impact, though. Any ideas or thoughts are appreciated. Thank you.
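For what it's worth, latency really can explain this. Most benchmarks run at high queue depth and measure bandwidth, but a SQL job that issues one I/O at a time (queue depth 1) measures pure round-trip latency, and a local write-back cache can have a shorter round trip than any network hop. A back-of-envelope model, where the latency figures are assumptions for illustration, not measurements:

```powershell
# At queue depth 1 the job can never overlap I/Os, so elapsed time is
# simply count * per-I/O latency -- bandwidth never enters the equation.
$ioCount        = 1000000   # I/Os a job issues one at a time
$sanLatencyMs   = 0.5       # assumed per-I/O round trip to the SAN over copper
$localLatencyMs = 0.25      # assumed local RAID 1 write-back cache hit

$sanSeconds   = $ioCount * $sanLatencyMs   / 1000   # 500 seconds
$localSeconds = $ioCount * $localLatencyMs / 1000   # 250 seconds
```

Under these assumed numbers the "slower" local mirror finishes the serial workload in half the time, even though the SAN wins every throughput benchmark. Re-running FIO with iodepth=1 and small block sizes would show whether this matches the client's reality.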
Hello CSvec, thank you. Your reply was very informative. It looks like workflows need to be tailored for use by the Event Broker service, is that correct?
Just an update. I did notice that when I view the Properties of a blueprint during deployment, the correct vRO properties that I pass from Orchestrator are present. What I can't figure out is why they are not showing up in the blueprint itself.
I had a question about assigning vRO workflows to blueprints. I followed this link to associate a workflow with my blueprint. Specifically, step 2, where it talks about assigning a state change workflow to a blueprint. After assigning a workflow to a blueprint via the BuildingMachine stub, I refresh my blueprint in vRA and do not have any custom properties added. What I ended up doing was manually adding the following as Custom Properties in my blueprint:

ExternalWFStubs.MachineProvisioned
ExternalWFStubs.MachineProvisioned.diskSize

ExternalWFStubs.MachineProvisioned has a value of the workflow ID I am calling from vRO. When I run a request from my blueprint now, I do have an option to set the disk size, and the workflow does execute correctly. This does not seem like the best way to do this, however. Is there something I am missing in the initial configuration?
Thank you for the reply Sreec. I'm currently using a single Edge device (with connectivity to my External network) with my ORG networks connected. Is that the typical configuration with vCD, or is there a best practice for Edge deployment? I have not tried it with a vApp network and its own Edge. I will try that and see how it works.
Hello Sreec. When I view a VM within my vApp, I do not see an IP address under the "External IP" column. I have my VM connected to a routed ORG network, and it receives a private IP without issue. The ESG is set up to use my External Network. I configured a NAT rule on the ESG, and the VM is accessible via its public IP. My question is whether it's possible to have something like a floating pool of public IPs that could be dynamically allocated to the VM for its external IP. Or will it always require a NAT/firewall rule to provide public connectivity (without a direct network)?
Thank you for the reply Sreec. So external IPs need to be manually assigned each time? There's no way to assign them dynamically?
I have a question about VMs receiving external IP addresses. I created an External Network pool with public IP addresses and have it configured within my ESG. Within the ESG, I sub-allocated a pool of addresses. When I deploy a VM, it receives a private IP address from my ORG network but no external IP. Is there a way to get it allocated dynamically?
Are you receiving these errors in an HA cluster?
Thank you for the reply Nick. Yes, there are plenty of memory resources available at the host level. The swapping/ballooning has dropped on the pool since Friday, but it's still happening. Interestingly, even when I remove the memory limit on the pool, the reclamation still occurs.
I know this is an old topic, but I came across it today while troubleshooting the same error and figured I'd post my solution. After changing some settings on a resource pool (which included setting a memory reservation on a VM within that pool), I noticed the "Unable to apply DRS..." message on one of my hosts. I removed the memory reservation and power cycled the VM, but the message persisted. Restarting the management agents, and other troubleshooting methods described in this post did not work. I decided to check out the .vmx file for the VM, and wouldn't you know the memory reservation was still present. I removed it, and powered up the VM and verified no more error messages in my hostd.log file. Why the reservation parameter stuck around in the .vmx file after I removed it, I have no idea.
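For anyone checking their own .vmx for the same thing, the memory reservation is stored in the sched.mem.min parameter. The value is in MB, and 2048 below is just an example:

```
sched.mem.min = "2048"
```

If that line survives after the reservation is removed in the UI, editing it out with the VM powered off (so the change isn't overwritten) should match what worked above.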
I have a resource pool configured with a handful of VMs and a set memory limit of 72GB. Currently the VMs in the pool are consuming roughly 40GB of the available host memory. My question is why I see swapping, compression, and ballooning within the pool when there are plenty of host resources available. For example, one of the VMs is configured with 8GB of memory. When I checked the resource allocation, half of it was ballooned, with a very small amount swapped. I did verify that VMware Tools is installed and up to date on all the VMs within the pool.
I was actually able to resolve my issue. Here's what I did in case anyone has a similar problem. Within the CreateClusterResilienceTable function, I changed this:

$ClusterView = Get-View -ViewType "ClusterComputeResource" -Filter @{"Name" = $ClusterTemp.Name}

to this:

$ClusterView = Get-View -ViewType "ClusterComputeResource" | Where {$_.Name -eq $clusterTemp.Name}

After the change, I had no more errors and the resiliency part reported correctly.
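That fix makes sense given how Get-View's -Filter works: it matches the value as a regular expression, so a cluster whose name is a substring of another cluster's name matches both, the property expansion comes back as an array, and array-minus-scalar throws the op_Subtraction error. A minimal repro with made-up cluster names:

```powershell
# Stand-ins for two cluster views (names are made up):
$views = @(
    [pscustomobject]@{ Name = 'Prod';    TotalMemoryGB = 256 },
    [pscustomobject]@{ Name = 'Prod-DR'; TotalMemoryGB = 128 }
)

# Regex-style match, like -Filter @{"Name" = $ClusterTemp.Name}:
# 'Prod' matches both names, so two views come back.
$filterStyle = @($views | Where-Object { $_.Name -match 'Prod' })
$filterStyle.Count      # 2 -- TotalMemoryGB would now be an array,
                        # and array - scalar fails with op_Subtraction

# Exact match, like the Where {$_.Name -eq ...} fix:
$exact = $views | Where-Object { $_.Name -eq 'Prod' }
$free  = $exact.TotalMemoryGB - 16      # scalar arithmetic works: 240
```

This also explains why only one cluster triggered the errors: that cluster's name was presumably a regex match for another cluster's name.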
Thank you hussainbte. Is it always best practice to use a single socket with more cores? For example, on a 2-socket, 6-core host, do you get better performance with your VM configured as 1 socket with 6 cores versus 2 sockets with 3 cores?
Thank you for the replies, I did a lot of reading on it last night as well. So despite having 24 logical cores, for a 12-vCPU VM on a host with 12 physical CPUs, the scheduler will still schedule the vCPUs on the pCPUs, correct? From what I read on Hyper-Threading, a second execution thread is provided to an existing core. When one thread is idle or waiting, the other thread can execute instructions. This can increase efficiency if there is enough CPU idle time to allow scheduling two threads. However, in my case the CPU scheduler is utilizing all 12 physical cores fully, not leaving any room for the second execution threads to provide any benefit. Is this correct?
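The reasoning above can be put into a rough capacity model. Hyper-Threading adds a second hardware thread per core, not a second core, so the uplift is commonly quoted in the 10-30% range rather than 100% (the 25% below is an assumed upper-end figure, not a measurement):

```powershell
# 24 logical CPUs on 12 cores deliver nowhere near 24 cores of throughput.
$physicalCores   = 12
$htUplift        = 0.25                                # assumed; typical range ~10-30%
$coreEquivalents = $physicalCores * (1 + $htUplift)    # ~15 "core equivalents", not 24
```

So a 12-vCPU VM keeping all 12 physical cores busy leaves the sibling threads with no idle cycles to exploit, and the host reports fully consumed CPU, which matches what the benchmark showed.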
I have a host with two Intel E5-2430 processors (2 sockets, 6 cores each) with Hyper-Threading enabled, for a total of 24 logical processors. One of the VMs we deployed was configured with 12 vCPUs (2 sockets, 6 cores). During our benchmarking tests, I noticed that this VM consumed all available CPU cycles on the host. With HT enabled, shouldn't this only utilize half of the resources available?
I'm seeing error messages for one of my clusters during the resiliency report.

Gathering Cluster Resilience Information for Cluster
Method invocation failed because [System.Object[]] does not contain a method named 'op_Subtraction'.
At C:\Scripts\CPReport-v2.1-Community-Edition.ps1:496 char:4
+             $ClusterFreeMemory = $ClusterTotalMemory - $ClusterUsedMemory - $ACPolicyInte ...
+             ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidOperation: (op_Subtraction:String) [], RuntimeException
    + FullyQualifiedErrorId : MethodNotFound

Method invocation failed because [System.Object[]] does not contain a method named 'op_Subtraction'.
At C:\Scripts\CPReport-v2.1-Community-Edition.ps1:499 char:4
+             $ClusterFreeMemoryPercentage = $ClusterFreeMemoryPercentage - $ACPolicyIntege ...
+             ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidOperation: (op_Subtraction:String) [], RuntimeException
    + FullyQualifiedErrorId : MethodNotFound

Method invocation failed because [System.Object[]] does not contain a method named 'op_Division'.
At C:\Scripts\CPReport-v2.1-Community-Edition.ps1:512 char:4
+             $HAReservedMemory = ($ACPolicyInteger/100) * ($ClusterTemp | Get-VMHost | Mea ...
+    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidOperation: (op_Division:String) [], RuntimeException
    + FullyQualifiedErrorId : MethodNotFound

Creating chart...
Exception calling "DataBindXY" with "2" argument(s): "Data points insertion error. Number of X values is less than Y values
Parameter name: xValue"
At C:\Scripts\CPReport-v2.1-Community-Edition.ps1:608 char:2
+     $Chart.Series["Data"].Points.DataBindXY($NameArray, $ValueArray) #Modified by M ...
+     ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (:) [], MethodInvocationException
    + FullyQualifiedErrorId : ArgumentOutOfRangeException

I saw someone had a similar issue, but did not see a resolution. It only happens with one cluster. Any ideas as to why?