How to connect a database and use table columns in BPMN files

Hi @salaboy,
I want to connect a database and use its data in my BPMN file.
Could you please help me connect a database and use the table columns in BPMN files?

Please point me to sample code if it is available.

thanks
rk

I want to connect the application's own database and use its data as part of the BPMN flow.
I will not touch the default Zeebe database.

thanks
rk

Just IMHO:

I think it's not a good idea to couple Zeebe with the application's database. It would mix different responsibilities: workflow orchestration and domain functions. Not really clean from an architectural point of view.

Have a good day )


@Gorynych,

I think you are not understanding my requirement… I won't combine Zeebe and application transactions into a single DB:

Zeebe-related transactions will be saved in the Zeebe database.
Application-related transactions will be saved in the application DB.

As part of the entire application, Zeebe should integrate with it; the respective transactions will be validated, rules will be applied, and based on that the whole workflow will be completed.

So we should use both databases…

In this requirement, once the workflow starts executing it should get the details from the application database and process them.

thanks
rk

"get the details from the application database and process them"

This is what I'm talking about.
Zeebe should process the workflow.
Zeebe should not process data; domain services should do that.


Thank you @Gorynych,

If there is an example on GitHub, please share the URL.

thanks
rk

You can check out the examples for the client implementation you're using. For Java, for example:

https://docs.zeebe.io/clients/java-client-examples/data-pojo.html

As I understand it, you want to pull data from the database and have your workflow execute based on that data. So you'd read your data the way you normally do, then set your workflow variable(s) accordingly.
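
For illustration, here is a minimal sketch of that idea (not an official example): a job worker that queries the application database over plain JDBC and completes the job with only the columns the workflow needs. The job type `fetch-customer`, the table, the columns, the variable names and the connection settings are all made up; adjust them to your setup (older Zeebe releases use the `io.zeebe.client` package instead of `io.camunda.zeebe.client`).

```java
import io.camunda.zeebe.client.ZeebeClient;   // older releases: io.zeebe.client.ZeebeClient
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.Map;

public class FetchCustomerWorker {

  public static void main(String[] args) throws Exception {
    ZeebeClient client = ZeebeClient.newClientBuilder()
        .gatewayAddress("localhost:26500")
        .usePlaintext()
        .build();

    // Worker for a service task with the (hypothetical) job type "fetch-customer".
    client.newWorker()
        .jobType("fetch-customer")
        .handler((jobClient, job) -> {
          long customerId = ((Number) job.getVariablesAsMap().get("customerId")).longValue();

          try (Connection con = DriverManager.getConnection(
                   "jdbc:postgresql://localhost:5432/appdb", "app", "secret");
               PreparedStatement ps = con.prepareStatement(
                   "SELECT credit_limit, status FROM customers WHERE id = ?")) {
            ps.setLong(1, customerId);
            try (ResultSet rs = ps.executeQuery()) {
              rs.next();
              // Only the columns the workflow needs become process variables.
              jobClient.newCompleteCommand(job.getKey())
                  .variables(Map.of(
                      "creditLimit", rs.getBigDecimal("credit_limit"),
                      "customerStatus", rs.getString("status")))
                  .send()
                  .join();
            }
          } catch (Exception e) {
            jobClient.newFailCommand(job.getKey())
                .retries(job.getRetries() - 1)
                .errorMessage(e.getMessage())
                .send()
                .join();
          }
        })
        .open();

    // Keep the worker running (simplified for this sketch).
    Thread.currentThread().join();
  }
}
```

Gateway conditions in the BPMN can then refer to `creditLimit` or `customerStatus` directly, while the rest of the application data stays in the application DB.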

Ran into the same problem.
I don't think it is a good idea to store big POJO objects in the process variables as suggested by @Gorynych.

In my case I need to evaluate conditions using just a couple of fields of a 1 MB JSON object.
I believe pulling and storing such a big object in a process variable will hurt overall performance at scale.

Something like data-connectors would be very useful for this.

For now I'm considering a workaround using BPMN preprocessing before deployment:

  1. Replace all condition evaluations that refer to external data with temporary variables.
  2. Inject service tasks into the BPMN which evaluate the expression using a job worker and store the results in those temporary variables (a rough sketch follows below).

What I don't like about this approach: the process becomes visually messy, which defeats the main purpose of BPMN, being human friendly.
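
To make step 2 concrete, here is a rough sketch under the same assumptions as above (the job type, variable names and field paths are all invented): the injected service task's worker loads the large document on the application side, reads only the two fields the condition needs, and completes the job with a single temporary boolean, so the 1 MB object never becomes a process variable.

```java
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import io.camunda.zeebe.client.ZeebeClient;   // older releases: io.zeebe.client.ZeebeClient
import java.util.Map;

public class EvaluateOrderConditionWorker {

  private static final ObjectMapper MAPPER = new ObjectMapper();

  public static void main(String[] args) throws Exception {
    ZeebeClient client = ZeebeClient.newClientBuilder()
        .gatewayAddress("localhost:26500")
        .usePlaintext()
        .build();

    // Worker for the injected service task (hypothetical job type).
    client.newWorker()
        .jobType("evaluate-order-condition")
        .handler((jobClient, job) -> {
          String orderId = (String) job.getVariablesAsMap().get("orderId");

          // Load the large JSON on the application side (DB, REST call, object store, ...).
          String bigJson = loadOrderDocument(orderId);

          // Read only the two fields the gateway condition actually uses.
          JsonNode root = MAPPER.readTree(bigJson);
          double amount = root.path("totals").path("amount").asDouble();
          String region = root.path("customer").path("region").asText();

          boolean needsManualApproval = amount > 10_000 || "HIGH_RISK".equals(region);

          // Only one small temporary variable enters the process state.
          jobClient.newCompleteCommand(job.getKey())
              .variables(Map.of("needsManualApproval", needsManualApproval))
              .send()
              .join();
        })
        .open();

    // Keep the worker running (simplified for this sketch).
    Thread.currentThread().join();
  }

  private static String loadOrderDocument(String orderId) {
    // Placeholder: fetch the ~1 MB JSON from the application database or a service.
    throw new UnsupportedOperationException("wire this to your own data source");
  }
}
```

The gateway condition then just checks `needsManualApproval`, and the temporary variable can be ignored or cleaned up afterwards.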


An alternative to data-connectors could be an option to evaluate expressions externally (e.g. with job workers).
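
A rough sketch of how that could look with today's API (everything here is hypothetical: the generic job type, the custom header names, the lookup helper): one reusable worker that is configured per service task through custom headers, so the model needs only one kind of "evaluate externally" task instead of many bespoke ones.

```java
import io.camunda.zeebe.client.ZeebeClient;   // older releases: io.zeebe.client.ZeebeClient
import java.util.Map;

public class GenericExpressionWorker {

  public static void main(String[] args) throws Exception {
    ZeebeClient client = ZeebeClient.newClientBuilder()
        .gatewayAddress("localhost:26500")
        .usePlaintext()
        .build();

    // One reusable worker; each service task configures it through custom headers,
    // so the large data never becomes a process variable and the model stays small.
    client.newWorker()
        .jobType("evaluate-external")                        // hypothetical job type
        .handler((jobClient, job) -> {
          Map<String, String> headers = job.getCustomHeaders();
          String column = headers.get("column");             // e.g. "credit_limit"
          double threshold = Double.parseDouble(headers.get("threshold"));
          String resultVariable = headers.get("resultVariable");

          long entityId = ((Number) job.getVariablesAsMap().get("entityId")).longValue();
          double value = readColumn(entityId, column);       // placeholder lookup

          jobClient.newCompleteCommand(job.getKey())
              .variables(Map.of(resultVariable, value > threshold))
              .send()
              .join();
        })
        .open();

    // Keep the worker running (simplified for this sketch).
    Thread.currentThread().join();
  }

  private static double readColumn(long entityId, String column) {
    // Placeholder: read a single column value from the application database.
    throw new UnsupportedOperationException("wire this to your own data source");
  }
}
```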