Hi,
I imported a workflow from vRO 7.5 into my new vRO 8.1 environment.
This workflow writes some files to the following path on the vRO appliance: /data-vra-scripts/folder.
According to this VMware article, I need to edit js-io-rights.conf:
Rules in the js-io-rights.conf File Permitting Write Access to the vRealize Orchestrator System
However, the file does not exist at the path mentioned in the article (/data/vco/var/run/vco/js-io-rights.conf).
I did find it at the following path: /data/vco/usr/lib/vco/app-server/conf/js-io-rights.conf
I tried editing the file at /data/vco/usr/lib/vco/app-server/conf/js-io-rights.conf, but my workflow still fails when creating the file.
I also tried copying js-io-rights.conf to the path mentioned in the article, but the errors still occur.
By the way, in the past, in my old environment, I edited this file and it eventually allowed me to create and edit files.
Any idea?
Hi,
I don't think you'll be able to write to an arbitrary file system location, unless it is mounted as a volume into the Docker container's file system.
So, for example, if you try to create a file /data-vra-scripts/folder/myfile.txt in vRO scripting and write some data into it, on the appliance's file system (outside the vRO server container) it will end up somewhere under /data/docker/overlay2/<overlay-id>/...
Oh, that is something new to me.
So how can I write a file to the appliance?
I have some workflows which collect info from different elements in my network and save it in files on the appliance.
This helps me present that information to the user when running a new workflow (in the presentation tab).
It is faster to load the info from files than to run the actions which collect it.
Thanks,
If a workflow X writes some data to file /data-vra-scripts/folder/myfile.txt, then another workflow Y will be able to read it from exactly the same location. Within vRO scripting, the location looks just like in 7.x.
The actual file location within Docker overlay file system matters only when you try to access this file externally, outside vRO server container.
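As a sketch, this write-then-read pattern looks like the following in vRO scripting (FileWriter and FileReader are vRO system classes, so this runs only inside a vRO workflow; the path is the one from this thread, and it must be permitted in js-io-rights.conf):

```javascript
// Workflow X: write some data to a file on the vRO server container's file system
var fw = new FileWriter("/data-vra-scripts/folder/myfile.txt");
fw.open();
fw.clean();                       // truncate any previous content
fw.writeLine("some cached data");
fw.close();

// Workflow Y: read it back from exactly the same path
var fr = new FileReader("/data-vra-scripts/folder/myfile.txt");
fr.open();
var content = fr.readAll();
fr.close();
System.log(content);
```

The path looks identical in both workflows because both run inside the same vRO server container; the Docker overlay location is invisible at this level.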
This is the flow I use:
Workflow X writes to a file.
Workflow Y reads from the same file.
I use JavaScript code:
var fw = Filewrite("/data-vra-script/folder/newfile.txt);
fw.open()
fw.write("test");
fw.close();
However, the workflow fails because it does not find the file...
Again, it is the same workflow I used in vRO 7.5.
The code should work (of course, after fixing the syntax errors the shown snippet is full of).
Could you show the content of your js-io-rights.conf file? And the exact error you are getting?
var fw = new FileWriter("/data-vra-scripts/attributes/newfile.txt");
fw.open();
fw.clean();
fw.write("test");
fw.close();
2020-08-19 22:10:32.000 +03:00 INFO __item_stack:/item1
2020-08-19 22:10:32.000 +03:00 ERROR Cannot create file, reason : No such file or directory
2020-08-19 22:10:32.000 +03:00 ERROR Workflow execution stack:
***
item: 'test to file/item1', state: 'failed', business state: 'null', exception: 'Cannot create file, reason : No such file or directory'
workflow: 'test to file' (2d75176f-4abd-483e-b48b-b4884d3ebf69)
| 'no inputs'
| 'no outputs'
| 'no attributes'
*** End of execution stack.
-rwx /
+rwx /var/run/vco/
-rwx /etc/vco/app-server/security/
+rx /etc/vco/
+rx /usr/lib/vco/
+rx /var/log/vco/
+rwx /data-vra-scripts/scripts/
+rwx /data-vra-scripts/attributes/
Does the directory /data-vra-scripts/attributes/ exist in the vRO server container? If yes, what are its ownership/permission attributes?
I would expect such an error if the directory does not exist, or if it exists on the appliance file system but not on the vRO container file system.
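One quick way to check from an SSH session on the appliance (the pod name below is the example from this thread; yours will differ):

```shell
# find the vco-app pod name in the prelude namespace
kubectl -n prelude get pods | grep vco-app

# check the directory from inside the vRO server container
kubectl -n prelude exec vco-app-7d874bd699-k6bc4 -c vco-server-app -- \
    ls -ld /data-vra-scripts/attributes/
```

If `ls` reports "No such file or directory", the directory exists only on the appliance file system, not inside the container.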
I'm not sure I understand what the vRO server container is.
How can I check and configure the folder to be part of the vRO server container?
Right now, when logging in to the appliance through SSH (PuTTY) and typing the command df -h, I see the mount /data-vra-scripts that I created on a new disk I added to the appliance.
In version 8.x, there is a change in the way vRO, PostgreSQL, and other services are deployed and run in the appliance: they now run as Kubernetes pods.
When you establish an SSH connection to the appliance, you can enumerate the running Kubernetes pods in a given namespace with a command like:
kubectl -n prelude get pods
The above command will return a list of several pods, one of them having a name similar to vco-app-7d874bd699-k6bc4 (the suffix after vco-app- will vary). That's the pod where the vRO server, vRO Control Center, and several other containers are running. To open a Bash terminal in the vRO server container, use a command like the following:
kubectl -n prelude exec -it vco-app-7d874bd699-k6bc4 -c vco-server-app -- bash
In this Bash terminal, create the directory /data-vra-scripts/attributes/ to be used by your scripting code. Note that this directory is different from the /data-vra-scripts/attributes/ directory on the appliance outside the vRO server container (it will be mapped to some directory in the Docker overlay file system).
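Putting the steps together, a minimal sketch (the pod name is the example from above and will differ in your environment; the ownership line is an assumption — match whatever user the vRO server process actually runs as):

```shell
# step 1: open a shell in the vRO server container
kubectl -n prelude exec -it vco-app-7d874bd699-k6bc4 -c vco-server-app -- bash

# step 2 (inside the container): create the directory the scripting code expects
mkdir -p /data-vra-scripts/attributes
chown vco:vco /data-vra-scripts/attributes   # assumption: vRO server runs as user "vco"
```

The directory must also be covered by a +rwx rule in js-io-rights.conf, as in the file contents posted earlier.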
Ok, I will try your commands.
But now I have another question:
One of my workflows creates a big file.
This is the reason I added a new 25 GB disk to the appliance and mounted it to /data-vra-scripts.
How can I do this in the container?
Because vRO now runs under Kubernetes, you're going to have a hard time making this work without a lot of complexity. With this new model, you need to think about other ways to accomplish what you're trying to do; the better approach is to mount a remote share somewhere and handle your file I/O there. Otherwise, you're talking about manipulating the K8s deployment and pods to use hostPath or PersistentVolumes, which is another ball of wax entirely.
The problem with a share is latency.
I can create an action which connects to this share through a remote PowerShell host,
then link this action to the presentation tab in my workflow.
However, when running this workflow, it may take a few long seconds until it opens and the data is populated for the user.
Is there another solution for this problem?
Maybe something new in the new version?
I tried your commands, and now the workflow creates the files.
However, if I shut down and start the components following the VMware article, the folder /data-vra-scripts disappears.
Starting and stopping vRealize Automation
One more question: can I stop and start only the vco-server, instead of stopping and starting all the components and waiting 20 minutes until they come up?
When I want to restart only the vRO server, I usually open a Bash terminal to the vRO server container (using the command I posted previously) and then execute kill 1 (1 is the PID of the vRO server's Java process). This will cause Kubernetes to start a new instance of it, which should be up and running quickly. The same works for restarting vRO Control Center if you open a Bash terminal to its container.
Another option is to use the kubectl command to delete the entire vco-app-<id> pod, if you want to restart the vRO server and vRO Control Center at the same time.
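For reference, the two restart options as commands (the pod name is the earlier example; look up yours with kubectl get pods first):

```shell
# option 1: restart only the vRO server process inside its container
# (PID 1 is the server's Java process; Kubernetes restarts the container)
kubectl -n prelude exec vco-app-7d874bd699-k6bc4 -c vco-server-app -- kill 1

# option 2: delete the whole pod; Kubernetes recreates it,
# restarting vRO server and vRO Control Center together
kubectl -n prelude delete pod vco-app-7d874bd699-k6bc4
```

Either way, anything written to the container's own file system is lost on restart, which is the behavior discussed below.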
What about the issue that the folder and files I created in the vco-server container are deleted when restarting vCO?
Any idea?
Sorry, I'm not familiar enough with Kubernetes to say what the recommended approach for working with persistent folders is. I'd suggest checking whether there is something related in the documentation, and if not, probably contacting VMware support.
What is it you're trying to create, exactly? And why are you trying to create such a large file? Perhaps if you explain what you're trying to accomplish, rather than starting with the technical implementation, there might be a better way.
As I said, writing locally now that vRO is under K8s is going to be a problem. Your files are disappearing on restart because containers inside pods are immutable; a new pod is created, which wipes the file system. Changing that requires changing the manifest, which means altering system-level details; that isn't going to work well and won't be supported. So, again, state what your goals are, but know that you're going to have to change your processes and not write huge files to the local vRO system.
See my other response here. This is normal behavior when running a container under Kubernetes. This is one of the reasons why your desired method isn't going to work.
I use this mount/disk for two goals:
1. Cache files -
in order to run workflows and populate data for the user faster (as I described in my previous comments).
The actions which are linked to inputs in the presentation read from those files.
2. I have two separate vCenters which are connected to my vRO.
I want to copy a template from one vCenter to the other.
Until vRO 7.5, I used a plugin named OvaTransfer to export and import templates.
I export the template from vc1 to this large disk and import it to vc2.