ericr999's Posts

Hi Legioon, I have the same setup as you do, running 7.3 on all my hosts. In prod I have 2 CPUs and 12 GB of RAM. My dev is sized smaller, since only 1-2 workflows run at the same time, but it still has 2 CPUs and 8 GB of RAM. The link provided by qc4vmware should work; I think that's the one I used in the past. Also, make sure the memory you set for vRO isn't the full amount showing in the VM: you should keep at least 2 GB for the OS. Doing that did improve the service for me, but also make sure your plugins are up to date and that the remote services have their certificates in order; all of this can cause issues and slowdowns. With earlier versions of vRO, like 7.0, we had issues that forced us to reboot every week or so. I had a case open and we never figured it out, so I updated the OS and things improved. Let us know how your memory adjustment went.
Hi Ilian, Yeah, my previous comment was about his plugin, because the custom header works fine with the RESTOperation, but I just can't download a binary file with that plugin. The one you provided seems good, except I can't grab the file I need since it's protected by a custom header that acts like an API key. I'll send him a message. Thanks!
No way of ignoring certificate errors? Not a big deal, I'll just have to ask the team to fix the issue with their server's certificate. Also, is there a way to set a custom header? Looking at the API Explorer, the plugin seems limited, and I don't have enough knowledge yet to play inside a .dar file.
ahhh yeah, that would be acceptable for me. Thanks!!
Hello, I'm trying to automate Nessus with vRO so that when we build machines, once everything is done, we can run a first scan and retrieve the report. Nessus can generate a .csv that is stored in a ZIP file. I'm trying to download that ZIP file into a Resource Element, but from what I see, the RESTOperation object stores the reply as a string in response.contentAsString. I used the generate workflow for the REST operation and added a few lines, if I remember correctly. It looks like this:

//prepare request
//Do not edit
System.log("==== Execution du WF POST ======");
restOperation.urlTemplate = restOperation.urlTemplate + uri;
var inParamtersValues = [];
var request = restOperation.createRequest(inParamtersValues, content);
var noToken = new Number(token);

//set the request content type
request.contentType = "application/zip";
request.setHeader("X-SecurityCenter", noToken);
System.log("Request: " + request);
System.error("Request URL: " + request.fullUrl);

//Customize the request here
//request.setHeader("headerName", "headerValue");

//execute request
//Do not edit
var response = request.execute();
System.log("request: " + request.contentType);
System.log("response: " + response.contentType);

//prepare output parameters
System.log("Response: " + response);
statusCode = response.statusCode;
statusCodeAttribute = statusCode;
System.log("Status code: " + statusCode);
contentLength = response.contentLength;
headers = response.getAllHeaders();
contentAsString = response.contentAsString;
System.log("Content as string: " + contentAsString);

But I don't want to retrieve the content as a string; is there another way to receive a binary file via vRO? My next step would be to run this on another machine with curl, maybe. Thanks,
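To illustrate why pulling a ZIP through contentAsString is a problem (independent of vRO itself), here's a small plain-JS sketch, runnable in Node rather than the vRO engine: decoding raw ZIP bytes as text is lossy, while a base64 round trip keeps the payload intact. This is only a demonstration of the byte-corruption issue, not vRO API code.

```javascript
// A few bytes of a ZIP header plus some non-UTF-8 bytes, as a stand-in
// for a real binary response body.
const zipBytes = Buffer.from([0x50, 0x4b, 0x03, 0x04, 0x80, 0xff, 0xfe, 0x00]);

// Treating the body as a UTF-8 string (roughly what contentAsString does):
// invalid sequences get replaced, so the original bytes are lost.
const asString = zipBytes.toString('utf8');
const roundTripped = Buffer.from(asString, 'utf8');
console.log(roundTripped.equals(zipBytes)); // false

// A base64 encoding survives string transport and decodes to the exact bytes.
const asBase64 = zipBytes.toString('base64');
const restored = Buffer.from(asBase64, 'base64');
console.log(restored.equals(zipBytes)); // true
```

So if the plugin only exposes the body as a string, the ZIP will arrive corrupted; you'd need either a plugin that hands back raw bytes (or base64), or an external fetch with something like curl, as mentioned above.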
I'd like to add more info to what daphnissov said. Personally I run 2 nodes in a cluster environment behind a load balancer and it works great, but I run it in active/passive mode, because execution logs are not kept on both nodes, only on the node the workflow ran on. So it's not very practical when you want to review a specific execution: you always have to browse through all your instances. When you only have two it's not too bad, but you still have to figure out where the execution ran.
Hi Tom, That is great news! And this will work with a standalone vRO? I don't have a vRA server at the moment. Thanks!
Hi Ilian, That's a very good plugin!! Thanks a lot!!
Hello Juan, What do you mean by new ID? A new workflow ID? Or a new version number of the Workflows/Actions/CE etc.? How do you sync with Git? Do you use a plugin?
This is not really related to vRO itself. But in my case, vRO launches a workflow that does a bunch of things first, then, depending on the results, launches a PowerShell script. That script is sensitive: it deletes/creates/moves things in AD. Since it touches AD, the security team asked us to set up a mechanism to hash the second script. That hash is stored in the first script, and the hash of the first script is stored in a vault, more specifically CyberArk. So if the first script is changed, CyberArk's validation fails and I can't retrieve a password for the second script, so the run fails. If the first script is good, and the hash found in the first script matches the second script, the script is allowed to run. So anyway, I was wondering what mechanism people use to monitor/protect their scripts? If I have to change something in the scripts, I must change the hash in the scripts and contact the security team to update the stored hash value, and only me and one other colleague are allowed to call them. I know I could use Tripwire to monitor the files, but the current method is not very practical and I have to contact the other team frequently. I was trying to find something that makes more sense. Thanks,
I had the same issue, went through the same procedure over and over with support, and we haven't found a solution so far. In my case, I only get this error in prod, where I have 2 vRO servers, and only on the second node. In preprod I also have two vRO servers, both linked to another Active Directory, and both nodes work without issue. Also, the reset will not reset the SSO authentication done within the vRO server, but it will reset the authentication with the Control Center.

This issue also made me realize that with a cluster and SSO, 7.3 is not ready yet. I mean the cluster part works fine, but I don't have enough control over the cluster. When the cluster is enabled and SSO is activated for the Control Center, you lose the ability to connect to node 1 or node 2 specifically, so you cannot restart a specific node. Reading the documentation about clustering and load balancing, something is not right: if vRO node 1 is down and you want to restart it, you access the Control Center, which is now load balanced. The health monitor is based on the documentation page, which will probably work since it's monitoring the Control Center service, but what you actually need to restart is the vRO server, and you can't reach node 1 over HTTP directly because the load balancer could redirect you to whichever node it decides. Anyway, I already reported that to the dev team.

HTTP access is vital to me, since our security team prevents us from using SSH; SSH should only be enabled in extreme cases. For this reason, I've removed SSO authentication from the Control Center and now only use root. One more detail: when doing the reset and reconfiguration, you will need the account/password to authenticate with your vCenter/PSC. That's important to know, since in my case I don't have that account and must always get the team that does...
I'm running into the same situation. At first we were developing workflows using a lot of System.log and System.error. But when developing bigger workflows, with wrappers within wrappers and so on, I kept hitting situations where a variable from the main workflow had one value and then got replaced by the time it reached the second or third workflow. So I decided to create an action that shows the in-parameter names and values of the current workflow via System.debug. It works quite nicely so far. But I've come to realize that System.debug output only shows up if, in the Control Center, the scripting log level is set to DEBUG. That setting is probably fine for dev/preprod; what about prod? Does it only enable the extra output where the code actually calls System.debug? If I add that action to all my workflows I should see quite an increase in activity in the logs and in Log Insight. I guess if I activate this in prod I'll have to monitor the server, but having this can be quite useful sometimes, even in prod. Do you have pointers on how I could send custom output to a different log file and still show it in the console window?
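In case it helps, the core of my "dump my inputs" action is just formatting name/value pairs; here it is as a pure function (so it can be tested outside vRO — the actual action passes the workflow's parameter names and values in and sends each line to System.debug):

```javascript
// Format each in-parameter as a "IN name = value" line. JSON.stringify
// makes strings, numbers, arrays, and nulls all readable in the log.
function formatInParameters(names, values) {
  return names.map(function (name, i) {
    return 'IN ' + name + ' = ' + JSON.stringify(values[i]);
  });
}
```

In the vRO action itself, you'd loop over the returned lines and call System.debug on each one, so the output only appears when the scripting log level is DEBUG.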
Hi jasnyder, That's a nice workaround for the overwriting issue; it would be a good habit to adopt for now. It's just sad that the product has been around for a long time now, and Git has been around for quite some time too, and yet vRO has no real strong support for versioning, easy comparison between versions, or branching. But for now, I think you have a good suggestion. Can't wait to have something better, though!
Would you care to share your Java params? So far I've increased the memory in the VM and adjusted the setenv.sh file for the initial JVM memory usage and its maximum value, but I haven't seen a huge gain in performance. So from my understanding, you run a single vRO node in production? That's interesting; I was also considering that in a future version I might go back to a single node with a Postgres DB. On our side we are 3 devs and probably around 300 workflows. Yeah, that's what I'm missing: a dev environment with all the components. But since all these environments are maintained by other teams, they sometimes neglect to build a dev environment, because they personally have no use for it or no resources to maintain it. I guess I'll have to find a way to ask all the other teams to provide me with a dev environment. Thanks for sharing your situation!
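For context, the kind of change I made in setenv.sh looks roughly like this. Treat it as an illustrative fragment only: the exact file path and variable name depend on the vRO appliance version, so check your own setenv.sh before copying anything.

```shell
# Illustrative fragment of setenv.sh heap tuning (variable name and path
# vary by vRO version -- verify against your appliance before editing).
# -Xms sets the initial JVM heap, -Xmx the maximum. Per the sizing advice
# above, keep the max a couple of GB below the VM's memory so the OS
# still has room.
JVM_OPTS="$JVM_OPTS -Xms4g -Xmx8g"
```

After changing it, the vRO server service needs a restart for the new heap settings to take effect.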
Nice to know I'm not the only one in this situation. Maybe together we'll be able to build a case and make them understand this is really an issue. I'm thinking the exact same thing as you: setting up a vRO instance in dev for each developer. But for that I would have to use the PostgreSQL database; right now we use the SQL Server database because our production environment does, due to the load balancing. So right now, how do you manage the code? Have you developed methods that help in some way? Also, if I ever manage to get a dev vRO for each developer, I don't know how I'll connect all of them to all the external services, like IPAM, vCenter, CFEngine/Ansible. We currently don't have dev environments for each of these systems; sometimes in dev I have to work against a preprod vCenter, or against the production CFEngine, etc.
Hello, I was wondering how everyone else is managing code in vRO when there are multiple developers on the same node. Some of my team sometimes forget to increase the version of a workflow, and sometimes we test things on the same workflows and overwrite each other. So do you normally share vRO nodes? Do you use a plugin? Have you found a way to integrate with Git? I'm working on integrating my workflows into GitLab using the CodeGuardian plugin, but we will still share the vRO node and might overwrite each other. So how do you work in vRO on a team of devs, with multiple environments (dev/preprod/prod)? Thanks!
Nice, you are getting closer! Make sure you have the right permissions set on the folders or workflows you want to run, that the group we created is in the vSphere tenant for authentication, and that the user used to trigger the execution is in AD and was added to that VROSCCM group in the PSC. Getting a 401 is actually a good sign. I guess you already double-checked, but make sure the password is good, and for the username it should work if you use the format username@yourad.fqdn.local, not the domain of the vSphere tenant. Have you tried making it work with a REST client in Firefox, for example? That helped me a lot. Thanks!
In a lot of workflows, when I have to update an external system, like a REST API, about the status of a workflow, I add a block that retries the operation 3 times with a sleep of 60 seconds, just in case the REST API is down; if it's down for more than 3 minutes, I send an email with the status. Is there a better way than repeating this block over and over in all my workflows?
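One option would be moving the retry policy into a single reusable action so the workflows just call it. A minimal sketch of what that action's script could look like (plain JS here; in vRO the sleep callback would be System.sleep(60000), and the names are illustrative):

```javascript
// Retry an operation up to `attempts` times, sleeping between failures.
// Throws the last error if every attempt fails, so the caller can catch
// it once and send the notification email.
function retry(operation, attempts, sleepFn) {
  var lastError;
  for (var i = 0; i < attempts; i++) {
    try {
      return operation();
    } catch (e) {
      lastError = e;
      if (i < attempts - 1) sleepFn(); // e.g. System.sleep(60000) in vRO
    }
  }
  throw lastError;
}
```

Each workflow then wraps its REST call in one line, e.g. `retry(function () { return request.execute(); }, 3, function () { System.sleep(60000); })`, instead of duplicating the loop everywhere.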
Ilian, you always have the right solution for me! Also, I'm still learning the vSphere API; do you have suggestions for books/courses that could help me improve my programming skills in that area? I'm not a super guru in coding either; I know my way around, but most of the time I'm searching the web for suggestions. Thanks again for your precious help!
Hello, I'm looking to replicate the functionality in vCenter that finds related objects. For example, right now I'd like to see all the ESX clusters linked to a datastore cluster. I can't seem to find a direct relation, and browsing through the MOB is giving me a headache! I can't find a good pattern that would always work to retrieve that info. Has anyone else tried this before? Thanks!
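One traversal that might work, based on the vSphere object model: a datastore cluster's child datastores each list their host mounts (Datastore.host → DatastoreHostMount.key → HostSystem), and a host's parent is its compute resource/cluster. Here is that logic sketched over plain objects; the property names mirror the vSphere API, but the vRO scripting wrappers may expose them slightly differently, so treat this as a starting point rather than working vRO code.

```javascript
// Given a datastore-cluster-like object, collect the distinct names of
// the clusters whose hosts mount any of its child datastores.
function clustersForDatastoreCluster(dsCluster) {
  var clusters = {};
  dsCluster.childEntity.forEach(function (datastore) {
    datastore.host.forEach(function (mount) {
      // DatastoreHostMount.key is the HostSystem; its parent is the
      // ClusterComputeResource (for standalone hosts it would be a
      // ComputeResource instead).
      var cluster = mount.key.parent;
      clusters[cluster.name] = cluster;
    });
  });
  return Object.keys(clusters);
}
```

Note the standalone-host caveat: HostSystem.parent is only a ClusterComputeResource when the host is actually in a cluster, so a real implementation would want to check the parent's type.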