Tracking workflows - Zeebe vs Operate?

Hi there,

My understanding is that Zeebe and Operate are complementary from a CQRS perspective.

Zeebe (based on event sourcing) manages and controls the state of the workflow using append-only log structures, i.e. the write model. Operate, on the other hand, computes the snapshot (the read model) by consuming the events emitted by Zeebe (via Elasticsearch).
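To make the split concrete, here is a minimal event-sourcing/CQRS sketch in Python: an append-only log acts as the write model, and a projection derives the current state by replaying events. This is illustrative only — the event names and fields are invented and are not Zeebe's actual record format.

```python
# Minimal CQRS/event-sourcing sketch: an append-only event log (write model)
# and a projection that replays it to compute a snapshot (read model).
# Event types and fields are invented for illustration, not Zeebe's schema.

events = []  # append-only log: the "write model"

def emit(event_type, instance_id):
    events.append({"type": event_type, "instance": instance_id})

def project_state(log):
    """Replay the log to compute the current state per workflow instance,
    analogous to how Operate builds its index from exported records."""
    state = {}
    for e in log:
        if e["type"] == "WORKFLOW_STARTED":
            state[e["instance"]] = "ACTIVE"
        elif e["type"] == "WORKFLOW_COMPLETED":
            state[e["instance"]] = "COMPLETED"
    return state

emit("WORKFLOW_STARTED", "order-1")
emit("WORKFLOW_STARTED", "order-2")
emit("WORKFLOW_COMPLETED", "order-1")

print(project_state(events))
# {'order-1': 'COMPLETED', 'order-2': 'ACTIVE'}
```

Because the projection is rebuilt from the log after the fact, the read side lags the write side slightly — the same eventual consistency you see between Zeebe and Operate.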

Now, there might be use cases where processes are implemented as decentralised choreography across multiple services (no central coordinator), and we can’t/don’t want to change that to an orchestrated model by dropping in Zeebe.

But visibility/process monitoring is still important to us. In theory, as long as the individual services generate the proper events and those are streamed to Operate, it should be able to provide the overarching perspective.

Essentially, I am referring to slides 22 and 24 of this presentation, which indicate that Zeebe can also be used purely for tracking processes, even if all processes use choreography.

My two questions:

  1. What is the split between Zeebe and Operate (my understanding: Zeebe manages workflows, Operate tracks them)?
  2. Can I use Zeebe/Operate in a “tracking only” scenario to provide “logical flow visibility”, where the microservices actively listen to events and execute actions in a decentralised way? All events are available on Kafka.

thanks, Nick

Nick - again great questions :slight_smile:

You are right: Zeebe is the runtime workflow engine (the command part in CQRS) and Operate, with an underlying Elasticsearch, is the query part. Zeebe can export all records it processes to Elasticsearch, from where they are loaded into Operate’s Elasticsearch index to serve all the queries you need (so it is eventually consistent).

For pure event tracking I see two possibilities now:

  1. Run a proper Zeebe broker that just listens to events. As a side product, you will be able to see everything in Operate as well. I showed an example of this in the Kafka talk.
  2. We are currently teaching https://camunda.com/products/optimize/ to process generic events, do simple process discovery, and provide all the visibility and analysis Optimize is already capable of (currently it can only read data from the Camunda engine). We have a working prototype internally and are looking for users who are interested in this; drop me a private mail if this sounds good and I can put you in touch with the product manager of Optimize (he is really eager to learn about the exact requirements and scenarios).
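For option 1, the bridging step is to turn each Kafka record into the three parts a Zeebe publish-message command needs: a message name, a correlation key, and variables. A pure-mapping sketch in Python, where the incoming event fields (`eventType`, `orderId`, `payload`) are invented for illustration — your topic schema will differ:

```python
# Sketch of mapping choreography events from Kafka into Zeebe messages so a
# "tracking" workflow instance can follow along. The Kafka event fields
# (eventType, orderId, payload) are invented for illustration.

import json

def to_zeebe_message(kafka_value: bytes) -> dict:
    """Map a raw Kafka record value to the parts of a Zeebe publish-message
    command: message name, correlation key, and variables."""
    event = json.loads(kafka_value)
    return {
        # message name: matched against message catch events in the model
        "name": event["eventType"],
        # correlation key: routes the message to one workflow instance
        "correlation_key": event["orderId"],
        "variables": event.get("payload", {}),
    }

msg = to_zeebe_message(
    b'{"eventType": "OrderShipped", "orderId": "order-42", '
    b'"payload": {"carrier": "DHL"}}'
)
print(msg["name"], msg["correlation_key"])
# OrderShipped order-42
```

With a real broker you would hand this result to a Zeebe client’s publish-message command; the tracking workflow model then consists of message catch events with matching names, so Operate shows where each instance currently stands.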

Currently it is not easily possible to “just” dump the Kafka events into Operate directly. Before you try that, I think it is easier to dump them into Elasticsearch (or the like) and build your own visualization using https://bpmn.io/.
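The core of that do-it-yourself route is grouping the raw events by a correlation id and ordering them by timestamp, so each process instance gets a timeline you can index into Elasticsearch and overlay on a diagram with bpmn.io. A minimal sketch, with invented event fields (`traceId`, `activity`, `ts`):

```python
# Group choreography events into per-instance timelines for custom
# visualization. Event fields (traceId, activity, ts) are invented here.

from collections import defaultdict

def build_timelines(events):
    """Return {correlation id -> list of activities in timestamp order}."""
    timelines = defaultdict(list)
    for e in sorted(events, key=lambda e: e["ts"]):
        timelines[e["traceId"]].append(e["activity"])
    return dict(timelines)

events = [
    {"traceId": "order-7", "activity": "OrderPlaced",  "ts": 1},
    {"traceId": "order-7", "activity": "PaymentDone",  "ts": 2},
    {"traceId": "order-8", "activity": "OrderPlaced",  "ts": 3},
    {"traceId": "order-7", "activity": "OrderShipped", "ts": 4},
]

print(build_timelines(events))
# {'order-7': ['OrderPlaced', 'PaymentDone', 'OrderShipped'],
#  'order-8': ['OrderPlaced']}
```

Each activity name can then be matched to a BPMN element id and highlighted on the diagram, which gives you a basic “where is this instance right now” view without Operate.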

WDYT?
Bernd