Connection Troubleshooting

If your PulseStream agent isn’t connecting to the control plane, use this guide to diagnose and fix common issues.

First, check that the agent pod is running:

```sh
kubectl get pods -n pulsestream-agent
```

If the pod is in CrashLoopBackOff or Error, check the logs:

```sh
kubectl logs -n pulsestream-agent deployment/pulsestream-agent --tail=50
```
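When the log volume is large, filtering for likely failure keywords can surface the relevant lines faster (the keyword list below is only a suggestion, not an exhaustive set of agent error strings):

```sh
# Pull a larger window of recent logs and keep only lines that look like failures
kubectl logs -n pulsestream-agent deployment/pulsestream-agent --tail=500 \
  | grep -iE "error|denied|unauthorized|timeout"
```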

The agent receives its credentials automatically during provisioning. Verify the agent config secret exists:

```sh
kubectl get secret -n pulsestream-agent agent-config
```

If the secret is missing, the provisioning token may have expired. Generate a new one from Settings → Connection and re-apply the manifest.
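One way to re-apply cleanly after generating a new token, assuming you saved the downloaded manifest as `pulsestream-agent.yaml` (the filename is an example; use whatever you downloaded):

```sh
# Remove any stale secret so the new provisioning token takes effect,
# then re-apply the install manifest from Settings → Connection
kubectl delete secret -n pulsestream-agent agent-config --ignore-not-found
kubectl apply -f pulsestream-agent.yaml
```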

The agent needs outbound HTTPS access to the PulseStream control plane:

```sh
kubectl exec -n pulsestream-agent deployment/pulsestream-agent -- \
  curl -s -o /dev/null -w "%{http_code}" https://app.pulsestream.ai/health
```

A 200 response confirms connectivity. If you get a timeout, check your cluster’s egress rules and network policies.
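Two quick checks can narrow down an egress failure: whether a NetworkPolicy restricts traffic from the agent namespace, and whether DNS resolves from inside the pod. Note that `getent` may not be present in minimal container images; if it is missing, `nslookup` or `curl -v` are alternatives:

```sh
# List NetworkPolicies that may be restricting egress from the agent namespace
kubectl get networkpolicy -n pulsestream-agent

# Verify DNS resolution from inside the pod
kubectl exec -n pulsestream-agent deployment/pulsestream-agent -- \
  getent hosts app.pulsestream.ai
```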

If pods stay in Pending after installation, the most common cause is a missing StorageClass. Check PVC status:

```sh
kubectl get pvc -n pulsestream-agent
```

If PVCs show Pending, your cluster may not have a default StorageClass. Fix by specifying one explicitly:

```sh
helm upgrade pulsestream-agent oci://public.ecr.aws/b2g7w6t0/charts/pulsestream-agent \
  --namespace pulsestream-agent \
  --set postgresql.primary.persistence.storageClass=gp3 \
  --set opensearch.persistence.storageClass=gp3
```

Replace gp3 with the appropriate StorageClass for your cluster (e.g., standard for GKE, default for AKS).
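To see which StorageClasses your cluster offers, and which one (if any) is the default, you can list them; the second command is the standard Kubernetes way to mark a class as the cluster default (`gp3` is an example name):

```sh
# The default class is annotated "(default)" next to its name
kubectl get storageclass

# Optionally mark a class as the cluster default
kubectl patch storageclass gp3 -p \
  '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
```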

An agent reported as disconnected means the control plane hasn’t received a heartbeat from it recently. Common causes:

  1. Agent not deployed — Follow the installation guide
  2. Wrong API key — Regenerate the key in Settings and update the Helm release
  3. Network issues — Check firewall rules and egress policies
  4. Agent crashlooping — Check pod logs for startup errors
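A quick way to distinguish these cases is to look at recent events and pod status in the agent namespace:

```sh
# Recent events, oldest first; look for image pull, scheduling,
# or permission errors
kubectl get events -n pulsestream-agent --sort-by=.lastTimestamp

# A high RESTARTS count suggests crashlooping; READY 0/1 suggests failing probes
kubectl get pods -n pulsestream-agent
```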

If the agent is intermittently losing its connection, the cause is often one of the following:

  • Resource pressure — Increase CPU/memory limits in the Helm values
  • Network instability — Check for packet loss between the cluster and PulseStream
  • Pod eviction — Check if the node is under resource pressure
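To check for resource pressure and evictions, the following commands are a reasonable starting point (`kubectl top` requires metrics-server to be installed in the cluster):

```sh
# Current CPU/memory usage per pod
kubectl top pod -n pulsestream-agent

# Scan pod state for OOM kills or evictions
kubectl describe pod -n pulsestream-agent | grep -iE "oomkilled|evicted"
```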

If none of the above resolves your issue, reach out to support with:

  1. Agent pod logs (kubectl logs)
  2. Agent pod description (kubectl describe pod)
  3. Your workspace ID (visible in Settings)
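The first two items can be collected into a small bundle before contacting support (the output directory name below is arbitrary):

```sh
# Collect diagnostics for support into ./pulsestream-diagnostics
mkdir -p pulsestream-diagnostics
kubectl logs -n pulsestream-agent deployment/pulsestream-agent --tail=1000 \
  > pulsestream-diagnostics/agent.log
kubectl describe pod -n pulsestream-agent \
  > pulsestream-diagnostics/pod-describe.txt
kubectl get events -n pulsestream-agent --sort-by=.lastTimestamp \
  > pulsestream-diagnostics/events.txt
```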