Native code with no orchestration and no network boundaries will always be faster than code that is orchestrated or that has to invoke other services over the network**.
The trade-off is that you have to put the complexity of the system orchestration somewhere.
If you want the fastest way to do orchestration, then depending on your load you may be better off using Camunda and putting your implementation code in the same JVM as the engine.
But if you have massive streams of data, you will hit vertical scaling limits: a single JVM, or the relational database backing a BPMN engine, cannot be scaled horizontally.
Zeebe uses gRPC as its network protocol. It’s a highly efficient binary protocol over a persistent HTTP/2 connection, so there is no connection setup cost per request, as there typically is with REST.
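To make the persistent-connection point concrete, here is a minimal sketch using Python's standard library. It uses HTTP/1.1 keep-alive as a stand-in for gRPC's persistent HTTP/2 channel; the principle (one connection reused for many requests, no handshake per call) is the same. Everything here is illustrative, not Zeebe code.

```python
# Sketch: one TCP connection reused for several requests.
# HTTP/1.1 keep-alive stands in for gRPC's persistent HTTP/2 channel.
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class EchoHandler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # enables keep-alive

    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), EchoHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# One connection, many requests: no connect/handshake cost per call.
conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
statuses = []
for _ in range(3):
    conn.request("GET", "/")
    resp = conn.getresponse()
    statuses.append(resp.status)
    resp.read()  # drain the body so the connection can be reused
conn.close()
server.shutdown()
print(statuses)  # [200, 200, 200]
```

With a fresh connection per request you would pay a TCP (and usually TLS) handshake each time; over a persistent channel that cost is paid once.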
You will probably find native code faster and more efficient at low system complexity, and an embedded JVM engine with its database faster at low scale (no network hop).
It’s when you go to massive scale that database locks and vertical scaling limits kick in, and Zeebe is faster. It will never be as fast as hardwired code, but BPMN’s value proposition is business agility: the speed at which you can understand and modify your system’s behaviour, not raw execution speed.
Having said that, at low scale you can get good performance by running the Zeebe broker and the workers on the same machine. But Zeebe is really designed to scale throughput roughly linearly, past the limits that Camunda’s architecture can reach, while maintaining visibility at both design time and runtime via BPMN.
So Zeebe is faster (and keeps operating) at loads higher than Camunda can handle, because it uses a sharded in-memory state store and a replicated append-only event log rather than a traditional RDBMS; and your system is faster to comprehend and modify than hardwired code, because of BPMN***.
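To make the storage model concrete, here is a minimal, illustrative sketch of the core idea: current state lives in memory, every change is appended to an immutable log, and state can always be rebuilt by replaying that log. This is not Zeebe’s actual implementation (the real broker shards the log across partitions and replicates it via Raft), and all names below are made up for illustration.

```python
# Sketch: append-only event log with an in-memory state projection.
# Illustrative only -- sharding and replication are deliberately omitted.
from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    instance_id: str
    element: str  # the BPMN element the process instance has reached

log: list[Event] = []       # append-only: events are never updated in place
state: dict[str, str] = {}  # in-memory view: instance id -> current element

def apply(event: Event) -> None:
    """Append the event, then update the in-memory projection."""
    log.append(event)
    state[event.instance_id] = event.element

def rebuild() -> dict[str, str]:
    """Recover state by replaying the log (what a restarted node would do)."""
    rebuilt: dict[str, str] = {}
    for event in log:
        rebuilt[event.instance_id] = event.element
    return rebuilt

apply(Event("order-1", "ReceiveOrder"))
apply(Event("order-1", "ChargeCard"))
apply(Event("order-2", "ReceiveOrder"))
assert rebuild() == state  # replaying the log reproduces current state
print(state)  # {'order-1': 'ChargeCard', 'order-2': 'ReceiveOrder'}
```

Because writes are sequential appends and reads hit an in-memory projection, there are no row locks to contend on, which is the property that lets this design scale where a shared RDBMS becomes the bottleneck.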
** Assuming optimal design/configuration.
*** Caveat: like any technology, you can still make it run slowly, or make it incomprehensible, if you don’t design and configure your system appropriately.