ahola
Enthusiast

SQL Plugin - connection pool & retries


Folks -

We are using vCO workflows that perform a lot of CRUD operations against the database. Since these are long-running workflows under a decent amount of load, we want to make sure the solution we have implemented is performant and reliable. To this end:

1. We would like to ensure that we can reuse database connections - is there a notion of connection pools or an equivalent?

2. What about reconnecting to the database when it goes down? Is that something one has to code into the workflow, say via exception handlers?

I am sure others have run into similar issues - would appreciate any/all feedback that you may have.

Cheers.


Accepted Solutions
bbonev
VMware Employee

Hi,

Unfortunately the SQL plug-in doesn't support connection pooling. We've never considered it. We will discuss it and may include it in the next update release of the plug-in.

Regarding your second question - we don't keep connections open for a long time, and every DB call is atomic. When you execute one of the CRUD workflows, a new connection is created, the corresponding SQL query is executed and committed, and the connection is closed. If there's a problem, like a disconnection from the Oracle server, the proper way to ensure your operation has succeeded is to write some custom error handling using the scripting capabilities of the Orchestrator. In pseudo code:

var hasPassed = false;
for (var i = 0; i < maxRetries && !hasPassed; i++) {

    try {

        // execute the statement either through the exposed JDBC API or through the CRUD actions

        hasPassed = true;

    } catch (error) {

        // log the problem so the loop can retry

    }

}
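A slightly fuller, self-contained sketch of the same retry pattern (plain JavaScript; `retry`, `maxRetries`, and `flakyInsert` are illustrative names, not part of the plug-in's API - the operation callback stands in for a CRUD action or JDBC call and should throw on failure):

```javascript
// Minimal bounded-retry helper: runs "operation" until it succeeds or
// maxRetries attempts have been made, and reports the outcome.
function retry(operation, maxRetries) {
    var lastError = null;
    for (var i = 0; i < maxRetries; i++) {
        try {
            // Success: return the operation's result and the attempt count.
            return { ok: true, value: operation(), attempts: i + 1 };
        } catch (error) {
            // In Orchestrator you would log here (e.g. System.warn);
            // we just remember the last error and loop again.
            lastError = error;
        }
    }
    return { ok: false, error: String(lastError), attempts: maxRetries };
}

// Usage: a hypothetical insert that fails twice before succeeding.
var calls = 0;
function flakyInsert() {
    calls++;
    if (calls < 3) { throw new Error("connection refused"); }
    return "inserted";
}

var result = retry(flakyInsert, 5);
// result.ok === true, result.value === "inserted", result.attempts === 3
```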

Hope this helps. Any further feedback would be great.

Best regards,

Boyan


2 Replies
ahola
Enthusiast

Boyan -

Thanks for your prompt reply. Sorry for the delay as I did not get notified about this response.

1. Yes, connection pooling can be a big deal, especially for workflows that do a lot of DB-related processing, since setting up and tearing down connections can be quite expensive. We will have to see whether this is going to be an issue for our application. Do you know if I could work around this by having, say, a long-running workflow instance that is responsible only for DB-related updates and that reuses the same connection(s) for a long time? Any DB-related update would then call into this workflow instance.

2. For handling errors, we are currently using logic similar to what you have proposed. That should be OK for now.
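To make the cost argument in point 1 concrete, here is a toy sketch of what a pool would save - connections are opened once and handed out repeatedly instead of being re-created per call. This is plain illustrative JavaScript; `ConnectionPool` and its factory are hypothetical and not part of the SQL plug-in:

```javascript
// Toy fixed-size pool: keeps released "connections" in an idle list and
// reuses them on the next acquire, so the expensive open happens rarely.
function ConnectionPool(factory, size) {
    this.factory = factory; // creates a new connection (the expensive step)
    this.idle = [];         // open connections waiting to be reused
    this.created = 0;       // how many real opens we paid for
    this.size = size;       // max idle connections to keep around
}

ConnectionPool.prototype.acquire = function () {
    if (this.idle.length > 0) {
        return this.idle.pop();   // reuse an already-open connection
    }
    this.created++;               // only here do we pay for a real open
    return this.factory();
};

ConnectionPool.prototype.release = function (conn) {
    if (this.idle.length < this.size) {
        this.idle.push(conn);     // keep it open for the next caller
    }
};

// Usage: 100 sequential "queries" end up sharing one underlying connection.
var pool = new ConnectionPool(function () { return { open: true }; }, 4);
for (var i = 0; i < 100; i++) {
    var conn = pool.acquire();
    // ... run the statement with conn ...
    pool.release(conn);
}
// pool.created === 1
```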

Cheers,

Shashi
