Cluster setup on OpenShift

Hi,
we are trying to install a Zeebe cluster on our on-prem OpenShift and we are getting the same issue that is described in zeebe-io/zeebe-helm#11. We use OpenShift version 4.4.0.
I’ve also tried 0.23.1-non-root and the pod seems to function properly, but then there’s another issue with the gateway while using this older version.
What are our options for starting the latest version?

The Helm config I use:

clusterSize: "1"
partitionCount: "1"
replicationFactor: "1"
cpuThreadCount: "1"
ioThreadCount: "1"
pvcSize: "5Gi" 

resources: 
  requests:
    cpu: 500m
    memory: 1Gi
  limits:
    cpu: 1000m
    memory: 2Gi

elasticsearch:
  image: "elasticsearch"
  esMajorVersion: "6"
  replicas: 1
  minimumMasterNodes: 1
  volumeClaimTemplate:
    resources:
      requests:
        storage: 5Gi
  podSecurityPolicy:
    spec:
      privileged: false
  podSecurityContext:
    fsGroup: 1001380000
    runAsUser: 1001380000
  securityContext:
    runAsUser: 1001380000
  sysctlInitContainer:
    enabled: false

I’ve bypassed this by removing the volume mount for startup.sh. Then it starts up correctly. What are the consequences of removing it? Would something break?

volumeMounts:
  - name: config
    mountPath: /usr/local/bin/startup.sh
    subPath: startup.sh

Hey @Karolis, I recently helped another user with this. You can simply set the defaultMode of that volume to 0777 and leave the volume mount in place. The script helps set up the nodes so they can find each other; while it’s not strictly necessary (in the sense that you could configure it yourself), it’s quite helpful, so I would propose leaving it.
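
For reference, here is a minimal sketch of what that config volume could look like once the mode is set (the volume name matches the mount above; the ConfigMap name is an assumption based on typical chart defaults, so check the rendered StatefulSet for the actual name):

volumes:
  - name: config
    configMap:
      name: zeebe-cluster-zeebe    # assumed name of the ConfigMap generated by the chart
      defaultMode: 0777            # makes startup.sh executable under OpenShift's arbitrary UID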

See issue https://github.com/zeebe-io/zeebe-cluster-helm/issues/101 for more.

These days I unfortunately don’t have much time for the Helm charts, but maybe @salaboy can help you more :slight_smile:
