Tocano's Posts

Looking at Get-Module -ListAvailable, it lists the modules as being in /root/.local/share/powershell/Modules/. Since this directory doesn't exist on the appliance, I'm assuming this path reflects something inside a container. So is it possible to drop a module folder somewhere on the appliance that gets picked up by the vRO PowerShell environment? Thanks
Thank you for that reference. Knew it likely had swagger docs for that somewhere but couldn't find the URL for it.
I'm running a vRA8.1 instance with embedded vRO. I have a Powershell action that I've gotten to work, but I'm trying to move a hard-coded value out of the code and into a Configuration Element/Asset. However, I'm struggling to figure out how to access the Configuration via Powershell. I've considered creating another action (this one Javascript) to pull the Configuration data and return it to the main Powershell action, but I've also not figured out how to execute other actions from Powershell yet either. If I need to access something like a vRO REST API to do this, then so be it, but I'm struggling to find documentation on a vRO REST API on vRA8. Any help would be appreciated.
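For what it's worth, here's roughly the set of REST calls I'd expect to be involved in reading a Configuration Element from outside a workflow. The host is a placeholder and the endpoint paths are my best understanding of the vRA 8 / embedded-vRO APIs, so verify them against the swagger docs on your own instance before relying on them:

```shell
# Cheat-sheet of the calls involved. The host below is a placeholder, and the
# paths reflect my understanding of the vRA 8 identity flow and the vRO REST
# API -- check them against https://<vra-fqdn>/vco/api/docs on your appliance.
VRA='https://vra.example.com'

cat <<EOF
# 1. Get a refresh token from the vRA identity service
curl -sk $VRA/csp/am/api/login -H 'Content-Type: application/json' -d '{"username":"<user>","password":"<pass>"}'

# 2. Exchange the refresh token for a bearer (access) token
curl -sk $VRA/iaas/api/login -H 'Content-Type: application/json' -d '{"refreshToken":"<refresh-token>"}'

# 3. List vRO Configuration Elements, then GET one by id to read its attributes
curl -sk $VRA/vco/api/configurations -H 'Authorization: Bearer <access-token>'
EOF
```

From a PowerShell action, the same three calls could then be made with Invoke-RestMethod against the configuration element's id.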
Looks like the blog author reorganized his URLs (took the dates out) and broke the link given above. Here's the updated link: https://www.drewgreen.net/fix-for-cron-failing-on-vmware-vcenter-server-appliance-vcsa-6-5/

To summarize (in case the page gets lost again): errors occur with the default /etc/pam.d/crond config:

    account required pam_access.so
    account include  password-auth
    session required pam_loginuid.so
    session include  password-auth
    auth    include  password-auth

First, make a backup copy of this file, then edit the original, replacing the three "password-auth" references with "system-auth":

    account required pam_access.so
    account include  system-auth
    session required pam_loginuid.so
    session include  system-auth
    auth    include  system-auth

This resolved my issue with cron jobs failing to run as well.
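If you'd rather script the edit than do it by hand, something like the sketch below should do it (usual caveats: snapshot the VCSA or back the file up first). It's shown here against a sample of the config text so the effect is visible:

```shell
# Sample of the stock /etc/pam.d/crond content quoted above
crond_config='account required pam_access.so
account include  password-auth
session required pam_loginuid.so
session include  password-auth
auth    include  password-auth'

# On the appliance you would first back up the real file:
#   cp /etc/pam.d/crond /etc/pam.d/crond.bak
# and then apply the same substitution in place:
#   sed -i 's/password-auth/system-auth/g' /etc/pam.d/crond

# Applied to the sample text so the result is visible:
printf '%s\n' "$crond_config" | sed 's/password-auth/system-auth/g'
```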
I tend to be very heavy-handed when it comes to logging for debugging purposes. However, after a disk out-of-space situation, I am trying to use System.debug more often, to avoid flooding the logs when NOT debugging/troubleshooting. My intuition was that these messages would only be displayed either when the entire vRO server is in debug mode, or when working on a workflow and hitting the 'Debug' button. But that's not what I'm seeing: it still only seems to display System.log level events or higher. It appears that one either enables debug logging across the entire vRO server, or not at all. Is there some setting I have overlooked that keeps even the 'Debug' button from displaying debug messages? Is this possible? Thank you
I see several actions and workflows that will assign Services/Resources/CatalogItems to an Entitlement. However, I do not see any facility to unassign any. I wondered if I could at least clear one out by setting the vCACCAFEEntitledService[] to an empty array and then updating the entitlement. So I tried:

    vCACCAFEEntitlementObj.entitledServices = [];

But this errors with:

    [E] Property or method 'entitledServices' not found on object vCACCAFEEntitlement

What's odd, though, is that on the previous line I was debugging and did:

    System.log("Existing Entitlement Services: " + vCACCAFEEntitlementObj.entitledServices.toSource());

And this works just fine, resulting in a JS object string that shows the contents of the entitled services. So entitledServices does seem to exist, but it throws the error on the next line when I try to set it. Very confusing. Any help would be appreciated.
I did do a Get- after the Set- and it showed what I expected. But seeing $global:DefaultVIServer.ExtensionData.Client.ServiceTimeout still set to 300 is what tipped me off. I then did a Disconnect-VIServer on the existing session I'd been working in, and when I reconnected, the value was updated to -1 and Get-Log did not time out. So for those who may read this in the future: it appears that Set-PowerCLIConfiguration -WebOperationTimeoutSeconds does not affect existing PowerCLI vCenter/ESXi connections, only future ones. Thanks Luc for the hint in the right direction.
So we have a script that takes an ESXi host and just does:

    Connect-VIServer [ESXihostname] -Credential (Import-Clixml [credfile])
    Get-Log -Bundle -DestinationPath E:\Logs\

However, almost exactly 5 minutes after Get-Log starts executing, the cmdlet consistently errors with:

    Get-Log : 3/23/2017 12:05:02 PM    Get-Log        The operation has timed out
    At line:1 char:1
    + Get-Log -Bundle -DestinationPath E:\Logs\
    + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        + CategoryInfo: NotSpecified: (:) [Get-Log], ViError
        + FullyQualifiedErrorId : Client20_QueryServiceImpl_WaitForUpdates_ViError,VMware.VimAutomation.ViCore.Cmdlets.Commands.GetLog

And for the life of me, I cannot find a timeout that extends this. I've tried/checked:

    Set-PowerCLIConfiguration -WebOperationTimeoutSeconds -1
    $vmHost | Get-AdvancedSetting -Name 'UserVars.ESXiShellTimeOut'   (equals 0)
    $vmHost | Get-AdvancedSetting -Name 'UserVars.ESXiShellInteractiveTimeOut'   (equals 900)

There's an old KB about modifying the timeout on the query service, but that applied to vCenter 4 and doesn't seem applicable now. So I'm struggling to find any setting that modifies this timeout and allows a Get-Log execution to complete (the vSphere Client takes just over 15 minutes to run it). Any help would be appreciated.
So I'm trying to update a vRA Reservation, specifically a Custom Property item in the extension data. I have fetched the vCACCAFEReservation and its extension data object (a vCACCAFELiteralMap). In there is an item, say 'AcctNum', which is currently set to '1234' (a string, I believe) and which I wish to set to '0987'. So I remove the item from the extension data:

    resExtData.remove("AcctNum");

This works. Then I create a new value and place it in the extensionData object:

    var newValue = vCACCAFEStringLiteral.fromString("0987");
    resExtData.put("AcctNum", newValue);

According to the link above, .put() takes a string key and a com.vmware.vcac.platform.content.literals.Literal value, which vCACCAFEStringLiteral.fromString() should return. Then I update the reservation object with the modified extension data:

    resObj.setExtensionData(resExtData);

That appears to work. Then I fetch the needed REST client service object(s) to communicate with vRA:

    resClient = vCACCAFEHost.createReservationClient();
    resService = resClient.getReservationReservationService();

And this all works. But then I apply the updated reservation to the live reservation:

    resService.updateReservation(resObj);

And I get an error at that point:

    Error in (Workflow:ActionDev / Set Reservation Data (item3)#73) Invalid custom property: AcctNum. Supported datatypes are BooleanLiteral, StringLiteral and SecureStringLiteral.

The Custom Property exists, and I believe I'm using a proper datatype, a StringLiteral. Plus, wouldn't the earlier .put() or .setExtensionData() calls fail if it were not an acceptable format/datatype? So I'm not sure what the issue is. Any help would be appreciated.
We recently tried to standardize our use of PS in vRO. So we created a generic runPSScript action to which we pass the PS script filepath and the parameters (as a Properties object), and it executes everything: it fetches the proper PS host, builds the parameter string from the parameter properties, executes the PS script, then calls another action which parses the horrific mess of data returned by the PS plugin to pull out whatever object or list of objects is returned. The real goal is essentially to strip out all the excessive, unnecessary markup and just return the actual data objects/values produced by the PS script. Attaching the parsePSReturnObject action we have.

NOTE 1: It's very simplistic at this point and just handles the most basic return types. If you get into complex objects with recursive sub-objects, it will probably fail.

NOTE 2: Any standard PS echo/Write-Output type of stuff will show up as part of the list of unlabeled string outputs. We specifically made the decision not to use this as a data return type. Instead, we require that anyone writing a PS script for this environment either return a single string or make sure that multiple strings are labeled, to avoid the complexity of trying to sort log lines from intended return values.

See if you can import that and see if it works for your needs.
While vmx.log.destination still seems to work in 6.0, vmx.log.syslogID apparently does not. I have no idea why, but setting vmx.log.destination lets me redirect the VM logs into syslog (and thus into things like Log Insight); the value of that is limited, though, if I cannot place a unique label on the entries. Otherwise it's virtually impossible to track long-term data, since every time the VM migrates or powers off/on it gets a new PID.
That's odd. It sounds like you have some strange build issue on that system, and the #Requires statement happens to act as a workaround for it. It seems that #Requires is a script-specific feature (based on this). That is, it isn't a flag one can set on a PS session in general; it is a line in a script that tells the engine how to run that specific script. So I'd be surprised if it were something you could set in a profile.
Are you trying to avoid doing something like: Get-Module -ListAvailable VMware* | Import-Module at the beginning of any scripts you write?
This is kind of ridiculous. It's been a year, and I've just spent multiple days still fighting this same issue, running into exactly the same problems as the other posters. Is it that difficult to just add a:

    GET https://{vraHost}/api/tenants/{tenantId}/directories/{id}/sync

that executes the 'Sync Now' function on the indicated directory? That would seem to avoid the problems with trying to perform a PUT on the directory to "auto trigger" the sync, as well as discourage the use of the internal vIDM API to work around this limitation.
I've been delving into Log Insight recently and have a few questions:

1. If I create an alert, how can I make it available for others on my team to access and edit? (I'm hoping it isn't silo'd like it seems.) I'm not talking about it as a dashboard item, just as an alert. I should be able to create an alert and place it in a shared list that can be modified by others on my team. I assume this functionality is available; I'm simply not finding it.

2. Can I customize the text of email alerts and reference pieces of the events within them? For example, if I create an alert, I'd like to be able to specify the text of the email alert like: "In vCenter [[source]], host [[hostname]] ([[vmw_cluster]]) just reported a 'Problem' event: [[logline]]"

3. Can I send an alert notification to vRealize Orchestrator? This seems to make sense as a method for reacting to trigger conditions in Log Insight, but I can't seem to figure out how to do it.

Log Insight is a very impressive and hugely useful tool. Just a few things weren't quite straightforward to me. Appreciate any guidance.
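On question 3, one approach I'd expect to work (untested sketch): Log Insight alerts can target a webhook, and vRO workflows can be started over REST, so a small shim can translate one into the other. The payload field name and workflow id below are made up, and the execution endpoint and parameter envelope reflect my reading of the vRO REST API, so verify them against your instance:

```shell
# Sketch: turn a Log Insight webhook alert into a vRO workflow execution.
# ALERT_NAME stands in for a field pulled from the webhook payload, and the
# workflow id is a placeholder; the /vco/api/workflows/{id}/executions path
# and parameter envelope are my understanding of the vRO REST API.
ALERT_NAME='Host problem event'
WORKFLOW_ID='00000000-0000-0000-0000-000000000000'

body=$(cat <<EOF
{"parameters":[{"name":"alertName","type":"string","value":{"string":{"value":"$ALERT_NAME"}}}]}
EOF
)

# The actual POST (commented out; needs a live vRO and a bearer token):
# curl -sk -X POST "https://vro.example.com/vco/api/workflows/$WORKFLOW_ID/executions" \
#      -H 'Authorization: Bearer <token>' -H 'Content-Type: application/json' -d "$body"
echo "$body"
```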
I also have multiple vSpheres/vCenters. Is there some requirement that you have to use the credential passthru approach? I connect to each one using a clixml credential file. You can create one easily like this:

    Get-Credential | Export-Clixml [scriptsPath]\Credentials\[username].clixml

I then have a function (loaded in my profile) that lets me specify just a vCenter and a username:

    function vConnect ($vCenterName, $credFileName) {
        Connect-VIServer $vCenterName -Credential (Import-Clixml "[scriptsPath]\Credentials\$($credFileName).clixml")
    }

So to connect, all I have to type is:

    > vConnect [vCenterName] [username]

This gives me the flexibility to connect to different vCenters as different users in different contexts with very little difficulty.

NOTE: Be aware that the clixml files are tied to a specific user account. Whichever user you are when you create them will be the only user who can use that clixml file. So plan accordingly.
What it sounds like you're looking for is called "orphaned vmdks". You can Google for "powercli find orphaned vmdks" and find myriad different scripts out there with different approaches for finding these.
Great point. Most of our clusters are 10+ nodes. For the few small clusters, I enforce a 2-hour wait after initiating one unmap before I initiate another, since 2-2.5 hours seems to cover most of our datastores. May be different for different environments.
I'd be careful with this approach. Since the PowerCLI implementation of ESXCLI seems to have a hard 30-minute timeout, any datastore that takes longer than 30 minutes to unmap (most >2TB, in my experience) will *appear* to error in PowerCLI, and it will move on to the next DS. But since the error doesn't actually end the unmap execution, one could have 2, 3, 4, etc. unmaps all taking place at the same time. If the set of datastores happens to include a consecutive run of datastores all in the same cluster (likely), then the | Select -First 1 approach will likely result in a single host executing all of those concurrent unmaps. While the unmaps may not create significant load on the host, depending on the hardware environment, they could place additional load on the storage environment/fabric at a single, particular location, which in some heavy-load environments may cause performance issues.

My preference is to do something like:

    $esx = $ds | Get-VMHost | Where {$_.ConnectionState -eq "Connected" -and $_.ExtensionData.Runtime.StandbyMode -eq 'none' -and $_.ExtensionData.Runtime.InMaintenanceMode -eq $false} | Get-Random

That way, you're not likely to get the same host for unmaps in the same cluster, thus distributing the load a bit.
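For illustration, the random-selection step on its own, sketched in shell for anyone scripting this outside PowerCLI (the hostnames are made up, standing in for the cluster's eligible hosts):

```shell
# Made-up list standing in for a cluster's eligible hosts (connected, not in
# standby or maintenance mode)
hosts='esx01.example.com
esx02.example.com
esx03.example.com'

# Pick one at random, as Get-Random does above, so repeated unmaps in the
# same cluster land on different hosts
pick=$(printf '%s\n' "$hosts" | shuf -n 1)
echo "$pick"
```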
In addition to LucD's reference and the aid in those threads, be aware that as of 6.0, PowerCLI appears to have a ~30-minute esxcli session limit. So the PowerCLI esxcli unmap command will *appear* to time out and fail if you're running unmap against a larger datastore. However, the actual command still appears to keep running on the host. You can tail the hostd.log file on the host that is executing the unmap and you'll see lines like:

    Unmap: Async Unmapped 200 blocks from volume 54538a18-952a3a54-0b78-f8db8871541b

These lines continue to appear even after the command seems to have failed from the PowerCLI perspective. One side effect of this is that if you're looping through a set of datastores to unmap, PowerCLI will move to the next one and initiate another unmap before the first has finished, so you may have multiple datastores executing unmaps concurrently. So take care to either place a sleep command in the loop to provide sufficient spacing and mitigate too many concurrent executions, or at least monitor the performance of your storage subsystem. In addition, you may want to randomize which host in a cluster is used to execute the unmap on a given datastore, to avoid having a single host executing all the unmaps.
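A quick way to confirm the unmap is still running after PowerCLI has given up, using the sample hostd.log line above (on a live host you'd tail the log over SSH instead):

```shell
# On the host doing the unmap, you would watch progress with:
#   tail -f /var/log/hostd.log | grep 'Unmap'
# Here, the same grep is applied to the sample line quoted above to pull out
# the progress counter:
sample='Unmap: Async Unmapped 200 blocks from volume 54538a18-952a3a54-0b78-f8db8871541b'
echo "$sample" | grep -o 'Unmapped [0-9]* blocks'
# -> Unmapped 200 blocks
```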