Can anybody suggest a proper way (without the use of external components like files, a DB, AMQP, etc.) of exchanging data between workflow instances running in parallel?
I have tried to use Configuration Element attributes (getAttributeWithKey(), setAttributeWithKey()), but it seems their values are cached at the moment the WF instance starts and aren't updated while the instance is running.
How about Resource Elements, or files combined with the locking system? That way you can use CSV, XML, or another plain-text format. As far as I recall, Resource Elements can be up to 4 MB in size, so general text data should be no problem here.
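A rough sketch of that pattern, a read-modify-write on a Resource Element guarded by a named lock. The mock objects at the top only stand in for vCO's built-ins so the snippet runs outside of vCO; in a real scriptable task, LockingSystem and MimeAttachment are provided by the platform and `resource` would be an input attribute of type ResourceElement. The lock name and owner string are illustrative:

```javascript
// --- Mocks standing in for vCO built-ins (not needed inside vCO) ---
var locks = {};
var LockingSystem = {
    lockAndWait: function (lockId, owner) { locks[lockId] = owner; },
    unlock: function (lockId, owner) { delete locks[lockId]; }
};
function MimeAttachment() { this.content = ""; this.mimeType = "text/plain"; }
var resource = {                                    // mock ResourceElement
    _data: "counter=0",
    getContentAsMimeAttachment: function () {
        var m = new MimeAttachment(); m.content = this._data; return m;
    },
    setContentFromMimeAttachment: function (m) { this._data = m.content; }
};

// --- The actual pattern: serialize access with a named lock, then ---
// --- read-modify-write the Resource Element's text content.       ---
var owner = "wf-instance-1";                        // e.g. workflow.id in vCO
LockingSystem.lockAndWait("myResourceLock", owner); // blocks until acquired
try {
    var mime = resource.getContentAsMimeAttachment();
    var value = parseInt(mime.content.split("=")[1], 10) + 1;
    var updated = new MimeAttachment();
    updated.mimeType = "text/plain";
    updated.content = "counter=" + value;
    resource.setContentFromMimeAttachment(updated);
} finally {
    LockingSystem.unlock("myResourceLock", owner);   // always release
}
```

Any plain-text format works for the content; the lock just ensures two parallel instances never interleave their read and write.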
I'll try Resource Elements.
Meanwhile, it looks like it is possible to force an update of the ConfigurationElement by putting the WF instance into a wait-signal state for a second (with a Waiting Timer element).
I'm not sure, though, whether this update is guaranteed.
It is possible to update a config element by doing it from a nested workflow. Having a nested workflow forces the config element to be reloaded, so you get the last saved values before modifying it again.
That said, you must use lockAndWait to make sure you do not write to / load from the same config element from multiple places at once.
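The lockAndWait pattern around a config element update might look like the sketch below. Again, the mocks at the top only replace vCO's built-ins so the snippet runs standalone; inside vCO, `configElement` would be an attribute of type ConfigurationElement, and the script would live in the nested workflow so the element is re-read from the repository on each run. The lock name, owner, and "counter" attribute are assumptions for illustration:

```javascript
// --- Mocks standing in for vCO built-ins (not needed inside vCO) ---
var locks = {};
var LockingSystem = {
    lockAndWait: function (id, owner) { locks[id] = owner; },
    unlock: function (id, owner) { delete locks[id]; }
};
var configElement = {
    _attrs: { counter: 41 },
    getAttributeWithKey: function (key) { return { value: this._attrs[key] }; },
    setAttributeWithKey: function (key, value) { this._attrs[key] = value; }
};

// --- Read-modify-write under a named lock. Put this inside the ---
// --- *nested* workflow so the config element gets reloaded.     ---
var owner = "wf-" + new Date().getTime();   // unique owner, e.g. workflow.id
LockingSystem.lockAndWait("configElementLock", owner);
try {
    var current = configElement.getAttributeWithKey("counter").value;
    configElement.setAttributeWithKey("counter", current + 1);
} finally {
    LockingSystem.unlock("configElementLock", owner);
}
```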
Another, newer option is to use the DynamicTypes (vCO 5.5.1) putInCache / getFromCache methods. That said, this may not be a good option if you use vCO clustering.
Unfortunately, this method doesn't work.
Here is the test scenario - I have four workflows: A, B, C, D.
"A" - reads the attribute, changes it depending on old value, and then writes it back using setAttributeWithKey() method of ConfigurationElement object. Then it displays new value on the console.
"B" - reads the value of the attribute using getAttributeWithKey() method and displays the value on the console.
"C" - Repeatedly in loop uses workflow "A" (as nested workflow item) with a small delay of 2000 ms. Delay is generated using System.sleep() method. So the value of the attribute is changing all the time.
"D" - Repeatedly in loop uses workflow "B" (as nested workflow item) with a delay of 1000 ms, also using System.sleep() method.
"C" performs read & write operation using nested workflow and works fine, as I see the attribute value changing in the configuration panel of vCO client.
"D" performs only read operation using nested workflow and it should display updated values, but all the time it shows the value which was in the attribute at the time of "D" workflow start.
However, if I replace the System.sleep() call with a Waiting Timer element (which is not suitable for the real scenario), then "D" starts displaying updated values.
Maybe there's another way to force workflow to reread the configuration element?
The way to force a re-read is to use a "nested workflow" element from the palette, not just a regular workflow call.
Ah, I see.
Never used that one before. I prefer to call workflows asynchronously from a scriptable task, which gives more control over their execution.
This way it does work indeed. Nevertheless, this method isn't suitable for my real scenario, since an asynchronous WF launch introduces too big a delay: 4-5 seconds before the workflow is actually started.
I'm trying to implement a custom locking mechanism with queues of workflow instances waiting for a lock on a specific string (LockingSystem doesn't provide queuing capability, AFAIK). Since 90% of the code sections that will run under a lock are very short (<= 1 s), having 4-5 seconds of overhead for acquiring the lock and another 4-5 seconds for releasing it is too much. Especially considering that some WF instances may want to acquire and release a lock every 2 seconds, and I can have up to 50-80 such workflow instances running in parallel.
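The queue bookkeeping itself can be sketched in plain JavaScript (no vCO APIs): each lock name maps to a FIFO queue of owners, and the owner at the head of the queue holds the lock. The function and lock names are illustrative; the important caveat is that in vCO this table must live in shared storage (e.g. the SQL plug-in table mentioned below, or some cache), since parallel workflow runs don't share script-local variables:

```javascript
// FIFO lock-queue bookkeeping: lock name -> ordered list of waiting owners.
var queues = {};

function requestLock(lockName, owner) {
    if (!queues[lockName]) queues[lockName] = [];
    queues[lockName].push(owner);           // join the queue
}

function holdsLock(lockName, owner) {
    var q = queues[lockName];
    return !!q && q[0] === owner;           // head of the queue owns the lock
}

function releaseLock(lockName, owner) {
    var q = queues[lockName];
    if (q && q[0] === owner) q.shift();     // hand the lock to the next waiter
}

// A workflow would call requestLock(), then poll holdsLock() in a loop
// (with System.sleep between checks), and call releaseLock() when done.
requestLock("vmName", "wf-1");
requestLock("vmName", "wf-2");
// wf-1 is at the head and proceeds; wf-2 waits its turn.
```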
So far I'm using a database via the SQL Plugin for storing the queue records, which gives me approximately 1 s (or 100%) of overhead, and I'm still looking for a faster solution.
Calling a workflow from a scriptable task works as well and, as you noticed, will be faster.
For a faster lock, you can try the DynamicTypes (vCO 5.5.1) putInCache / getFromCache methods if you are not running a clustered vCO. Since this is in-memory caching, it should be faster than the SQL plug-in.