If it's a version update to the workflow content (i.e., the workflow has the same ID) I would expect any running instance to use the version it started with. The next invocation would then use the updated version.
IMO, if this is done in a sane way, the workflow execution context should be populated with the entire workflow definition at invocation time, and that definition would then be executed until the context terminates (completion of the workflow or a terminating error).
I'd also assume that the underlying Java system uses synchronization to ensure that a version update cannot happen while an execution context is loading the workflow definition.
If you are changing the workflow ID attached to the policy then I'd expect similar behaviour (i.e., the policy invokes the workflow with the ID it references at the time of invocation).
Lastly, I would imagine there is some variance to allow for here in cases where a workflow can invoke other workflows/actions, as these are indirectly referenced by the workflow definition and would be lazily loaded as needed. This means that changes to the versions of those items might affect the execution of the invoked workflow AFTER it has started (e.g., if a workflow does some polling actions or has a long processing window).
I'm spit-balling a bit here, but I'm also a Java developer, so the above is how I'd expect it to work if the designers were sane and reasonable.
Without being 100% certain, I think that vRO will not preload the entire workflow definition at the moment it starts and use it until completion, but instead will pick up the updated content of the workflow items when the execution point reaches them. This could lead to undefined behavior, depending on how big the changes in the updated version are.
I think it could be verified easily. For example, write a workflow with a couple of scriptable tasks, the first one taking a relatively long time to execute (e.g., containing System.sleep()). Start the workflow and, while it is working inside the first scriptable task, import a new version of the same workflow, or edit it in the vRO client by making some changes to the scripting in the second scriptable task. When execution leaves the first scriptable task and enters the second, check whether it executes the original content of the second task, or the content modified while the first task was still running.
The ideal case would be if vRO content were under a source control system: when you start the workflow, you would know exactly which changeset is current at the moment execution starts, and you could fetch all referenced child workflows/actions/other content as needed using that same changeset number. But we are still not there.
I got curious, so I tried this out, and the results are a little interesting.
I created a very simple workflow like this
The first step like this
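(The original post showed the task as a screenshot. A minimal sketch of what step 1 presumably contained; the `System` stub exists only so the snippet runs outside vRO, where `System` is built in:)

```javascript
// Stub so the sketch runs outside vRO; inside vRO, System is provided.
var System = System || {
    log: function (msg) { console.log(msg); },
    sleep: function (ms) { /* vRO blocks the task here; no-op in the stub */ }
};

System.log("Step 1: sleeping for 2 minutes...");
System.sleep(2 * 60 * 1000); // leaves time to edit and save the workflow mid-run
System.log("Step 1: done");
```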
It will sleep for 2 minutes, so I have time to change something in the workflow definition and get it saved before the sleep completes.
The other step like this
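(Again shown as a screenshot in the original. Step 2 presumably logged the workflow version, something like the sketch below; the stubs make it runnable outside vRO, and the stub values are made up for illustration. In vRO, `workflow` is the running WorkflowToken and `rootWorkflow` its workflow definition:)

```javascript
// Stubs so the sketch runs outside vRO; inside vRO, both objects are provided.
var System = System || { log: function (msg) { console.log(msg); } };
var workflow = workflow || { rootWorkflow: { version: "0.0.1" } }; // hypothetical stand-in

// Logging the version makes a mid-run edit to the definition visible.
System.log("Step 2 sees workflow version: " + workflow.rootWorkflow.version);
```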
So, the first time I ran the workflow, as soon as it had started I edited the definition, just upped the version to 0.0.2, and saved it back. I made no other code change.
So, the log output there shows that the running workflow reads the new version after the update, which suggests that iiliev is correct - the workflow is read as needed while it runs.
Next, I tried another test by making a simple code change to step 2 while the workflow was running.
Log output was this
The additional line was not executed, so it seems that all the script items within the workflow were loaded into the context once at invocation time... which conversely suggests that I was correct - the workflow definition is read in at invocation time.
I think that for more complex arrangements this may not be something you'd want to depend on, though - e.g., actions or other workflow calls within your workflow.
Anyway, an interesting experiment.