Helm charts resulting in pending pods

Hi,

I’m following the Zeebe Kubernetes Helm charts guide, running locally on a Kubernetes kind cluster.

After I’ve created the cluster, installed the Helm charts, and waited a few minutes, the output of ‘kubectl get pods’ looks like this:

NAME                                   READY   STATUS    RESTARTS   AGE
elasticsearch-master-0                 0/1     Running   0          2m28s
elasticsearch-master-1                 0/1     Pending   0          2m28s
elasticsearch-master-2                 0/1     Pending   0          2m28s
test1-zeebe-0                          0/1     Running   0          2m28s
test1-zeebe-1                          0/1     Running   0          2m28s
test1-zeebe-2                          0/1     Pending   0          2m28s
test1-zeebe-gateway-5bc696bb98-425bv   1/1     Running   0          2m28s

Running ‘kubectl describe pod test1-zeebe-2’ gives this:

Name:           test1-zeebe-2
Namespace:      default
Priority:       0
Node:           <none>
Labels:         app.kubernetes.io/component=broker
                app.kubernetes.io/instance=test1
                app.kubernetes.io/managed-by=Helm
                app.kubernetes.io/name=zeebe-cluster
                app.kubernetes.io/version=0.23.4
                controller-revision-hash=test1-zeebe-77cb77ff7f
                helm.sh/chart=zeebe-cluster-0.0.128
                statefulset.kubernetes.io/pod-name=test1-zeebe-2
Annotations:    <none>
Status:         Pending
IP:
IPs:            <none>
Controlled By:  StatefulSet/test1-zeebe
Containers:
  zeebe-cluster:
    Image:       camunda/zeebe:0.23.4
    Ports:       9600/TCP, 26501/TCP, 26502/TCP
    Host Ports:  0/TCP, 0/TCP, 0/TCP
    Limits:
      cpu:     1
      memory:  4Gi
    Requests:
      cpu:      500m
      memory:   2Gi
    Readiness:  http-get http://:9600/ready delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      ZEEBE_BROKER_CLUSTER_CLUSTERNAME:                test1-zeebe
      ZEEBE_LOG_LEVEL:
      ZEEBE_BROKER_CLUSTER_PARTITIONSCOUNT:            3
      ZEEBE_BROKER_CLUSTER_CLUSTERSIZE:                3
      ZEEBE_BROKER_CLUSTER_REPLICATIONFACTOR:          3
      ZEEBE_BROKER_THREADS_CPUTHREADCOUNT:             2
      ZEEBE_BROKER_THREADS_IOTHREADCOUNT:              2
      ZEEBE_BROKER_GATEWAY_ENABLE:                     false
      ZEEBE_BROKER_EXPORTERS_ELASTICSEARCH_CLASSNAME:  io.zeebe.exporter.ElasticsearchExporter
      ZEEBE_BROKER_EXPORTERS_ELASTICSEARCH_ARGS_URL:   http://elasticsearch-master:9200
      ZEEBE_BROKER_NETWORK_COMMANDAPI_PORT:            26501
      ZEEBE_BROKER_NETWORK_INTERNALAPI_PORT:           26502
      ZEEBE_BROKER_NETWORK_MONITORINGAPI_PORT:         9600
      K8S_POD_NAME:                                    test1-zeebe-2 (v1:metadata.name)
      JAVA_TOOL_OPTIONS:                               -XX:MaxRAMPercentage=25.0 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/usr/local/zeebe/data -XX:ErrorFile=/usr/local/zeebe/data/zeebe_error%p.log -XX:+ExitOnOutOfMemoryError
    Mounts:
      /exporters from exporters (rw)
      /usr/local/bin/startup.sh from config (rw,path="startup.sh")
      /usr/local/zeebe/config/application.yaml from config (rw,path="application.yaml")
      /usr/local/zeebe/data from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-gns7d (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  data-test1-zeebe-2
    ReadOnly:   false
  config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      test1-zeebe-cluster
    Optional:  false
  exporters:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  default-token-gns7d:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-gns7d
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  Warning  FailedScheduling  3s (x7 over 4m21s)  default-scheduler  0/1 nodes are available: 1 Insufficient memory.
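
To double-check the scheduler’s complaint, the requests above can be compared against the node’s allocatable resources (just a sketch; ‘kind-control-plane’ is kind’s default single-node name and may differ on your setup):

kubectl get nodes
kubectl describe node kind-control-plane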

Looks like I have “Insufficient memory” on the node … how should I proceed?

Hey @jonas

Either get a bigger machine or reduce the resource constraints in the Helm values.
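
For example, a small values file along these lines should shrink the footprint enough for a single kind node (a sketch only: the value names below are assumptions based on the zeebe-cluster chart, so verify them against the chart’s values.yaml before using):

# values-small.yaml (hypothetical file name)
clusterSize: 1
partitionCount: 1
replicationFactor: 1
cpuThreadCount: 1
ioThreadCount: 1
resources:
  requests:
    cpu: 250m
    memory: 512Mi
  limits:
    cpu: 500m
    memory: 1Gi
elasticsearch:
  replicas: 1

# assuming the release was installed as ‘test1’ from the zeebe chart repo
helm upgrade test1 zeebe/zeebe-cluster --values values-small.yaml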

Greets
Chris

@jonas Yeah… that is probably due to the resource constraints.
Can you try the following profile -> https://github.com/zeebe-io/zeebe-helm-profiles/blob/master/zeebe-dev-profile.yaml with:

helm install test-core zeebe/zeebe-full --values https://raw.githubusercontent.com/zeebe-io/zeebe-helm-profiles/master/zeebe-dev-profile.yaml
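
That profile trims the deployment down to a single Zeebe broker (one partition, replication factor 1) and a single Elasticsearch node, so everything fits on a developer machine.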

Works perfectly fine, thanks!

PS> kubectl get pods
NAME                                                       READY   STATUS    RESTARTS   AGE
elasticsearch-master-0                                     1/1     Running   0          4m43s
test-core-nginx-ingress-controller-7df489fc58-bxjw5        1/1     Running   0          4m43s
test-core-nginx-ingress-default-backend-77d6fbf76b-kpt2n   1/1     Running   0          4m43s
test-core-operate-6cdf4cf7cb-rhlcd                         1/1     Running   1          4m43s
test-core-zeebe-0                                          1/1     Running   0          4m43s
test-core-zeebe-gateway-9b5947cc5-dtn7r                    1/1     Running   0          4m43s

PS> .\zbctl.exe --insecure status
Cluster size: 1
Partitions count: 1
Replication factor: 1
Gateway version: 0.23.4
Brokers:
  Broker 0 - test-core-zeebe-0.test-core-zeebe.default.svc.cluster.local:26501
    Version: 0.23.4
    Partition 1 : Leader

Now I will try the same on GKE.

@Jonas thanks for reporting back! I always test those charts on GKE, so you shouldn’t have a problem there. Remember that this is a developer profile meant for trying things out. You might also want to check out Camunda Cloud, which basically gives you the same thing without you needing to deploy the engine yourself :slight_smile:

SaaS offerings aren’t in the scope of the evaluation of different workflow engines we’re doing at the moment. Thanks for pointing it out, though.
