Run multiple statsd exporter in Kubernetes as a deployment

When you hear about statsd exporter, you might confuse it with statsd. It's okay; you are not the only one. Statsd runs as a daemon: it receives metrics from statsd clients and sends aggregates to backends like Graphite for storage and later visualisation. The statsd exporter, on the other hand, receives metrics from statsd clients (or from a statsd server itself) and exposes them as Prometheus metrics. You can then configure Prometheus or VictoriaMetrics to scrape the /metrics path of the statsd exporter.
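To make the distinction concrete, here is a minimal sketch (in Python, with a hypothetical metric name) of what a statsd client actually sends: plain-text lines of the form name:value|type, fired over UDP. These are the same lines the statsd exporter parses and re-exposes as Prometheus metrics.

```python
import socket

def statsd_line(name: str, value: float, metric_type: str) -> str:
    # statsd wire format: <metric name>:<value>|<type>
    # type is "c" (counter), "g" (gauge), "ms" (timer), or "h" (histogram)
    return f"{name}:{value}|{metric_type}"

def send_metric(line: str, host: str = "localhost", port: int = 9125) -> None:
    # statsd metrics are fire-and-forget UDP datagrams; no response expected
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(line.encode("utf-8"), (host, port))
    sock.close()

line = statsd_line("page_views", 1, "c")
print(line)  # page_views:1|c
send_metric(line)
```

Because the transport is UDP, a client never blocks on (or even notices) a missing server, which is why statsd instrumentation is considered safe to leave in production code paths.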

In this article, I will explain the steps to run multiple statsd exporters in Kubernetes as a deployment.


Deploy statsd exporter in Kubernetes:

We need to create three kubernetes objects:

  • A Deployment to run three statsd exporter pods
  • A ClusterIP Service which accepts metrics over both UDP and TCP
  • A ConfigMap which contains the statsd exporter mapping config file

apiVersion: apps/v1
kind: Deployment
metadata:
  name: statsd-exporter
  namespace: <namespace>
spec:
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  replicas: 3
  selector:
    matchLabels:
      app: statsd-exporter
  template:
    metadata:
      annotations:
        prometheus.io/port: "9102"
        prometheus.io/scrape: "true"
      labels:
        app: statsd-exporter
    spec:
      containers:
      - args:
        - --statsd.mapping-config=/etc/statsd.yml
        - --statsd.listen-udp=:9125
        - --statsd.listen-tcp=:9125
        - --web.listen-address=:9102
        - --log.level=info

        volumeMounts:
        - mountPath: /etc/statsd.yml
          subPath: statsd.yml
          name: statsd-yaml
          readOnly: true
        image: prom/statsd-exporter:v0.17.0
        name: statsd-exporter
        lifecycle:
          preStop:
            exec:
              command: ["sleep", "240"]
        livenessProbe:
          failureThreshold: 1
          exec:
            command:
            - wget
            - --no-verbose
            - --tries=1
            - --spider
            - http://localhost:9102/metrics
          initialDelaySeconds: 90
          periodSeconds: 60
          successThreshold: 1
          timeoutSeconds: 10
        readinessProbe:
          failureThreshold: 1
          exec:
            command:
            - wget
            - --no-verbose
            - --tries=1
            - --spider
            - http://localhost:9102/metrics
          initialDelaySeconds: 5
          periodSeconds: 3
          successThreshold: 1
          timeoutSeconds: 10

        ports:
        - containerPort: 9125
          protocol: UDP
        - containerPort: 9125
          protocol: TCP
        - containerPort: 9102
          protocol: TCP

        resources:
          requests:
            cpu: 200m
            memory: 200Mi
      terminationGracePeriodSeconds: 300  
      volumes:
        - name: statsd-yaml
          configMap:
            name: statsd
---
apiVersion: v1
kind: Service
metadata:
  name: statsd-exporter
  namespace: <namespace>
spec:
  ports:
  - name: ingress-tcp
    port: 9125
    protocol: TCP
    targetPort: 9125
  - name: ingress-udp
    port: 9125
    protocol: UDP
    targetPort: 9125
  selector:
    app: statsd-exporter
  type: ClusterIP
---
apiVersion: v1
data:
  statsd.yml: |
    defaults:
        # By default, all histograms use the following buckets
        timer_type: histogram
        ttl: 20m
        buckets: [5, 30, 50, 80, 100, 200, 300, 400, 500, 600, 700, 800, 900, 1000, 2000, 3000, 4000, 5000, 6000, 7000, 8000, 9000, 10000]
    mappings:
      # Example of a specific metric that could use a different set of buckets
      # (identical here; included just as a sample)
    - match: sample_metrics.*
      timer_type: histogram
      buckets: [5, 30, 50, 80, 100, 200, 300, 400, 500, 600, 700, 800, 900, 1000, 2000, 3000, 4000, 5000, 6000, 7000, 8000, 9000, 10000, 15000, 25000, 35000, 45000, 60000]
      name: "sample_metrics"

kind: ConfigMap
metadata:
  name: statsd
  namespace: <namespace>

Apply the YAML file to create the three Kubernetes objects we need. The newly created service endpoint statsd-exporter.the_name_space.svc.cluster.local on port 9125 is what your application will use to send metrics to the statsd exporter.
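A quick note on how the histogram buckets in the mapping config above behave: Prometheus histogram buckets are cumulative, so each observed value increments every bucket whose upper bound (le) is greater than or equal to the value, plus the implicit +Inf bucket. A small sketch, using the default bucket list from the ConfigMap:

```python
# Default buckets from the statsd.yml ConfigMap above
BUCKETS = [5, 30, 50, 80, 100, 200, 300, 400, 500, 600, 700, 800, 900,
           1000, 2000, 3000, 4000, 5000, 6000, 7000, 8000, 9000, 10000]

def cumulative_bucket_counts(observations):
    # An observation of 120 increments every bucket with le >= 120,
    # plus the implicit +Inf bucket that counts all observations.
    counts = {le: 0 for le in BUCKETS}
    counts["+Inf"] = 0
    for value in observations:
        for le in BUCKETS:
            if value <= le:
                counts[le] += 1
        counts["+Inf"] += 1
    return counts

counts = cumulative_bucket_counts([3, 120, 12000])
print(counts[5])       # 1 -> only the observation of 3
print(counts[200])     # 2 -> 3 and 120
print(counts["+Inf"])  # 3 -> all observations
```

This is why any value above the largest bucket (10000 here) only shows up in +Inf; widen the bucket list, as the sample_metrics mapping does, if you need resolution at higher values.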

 

Configure statsd exporter to be scraped by Prometheus:

Now we need to scrape the metrics exported by the statsd exporter using Prometheus or VictoriaMetrics. The statsd exporter exposes metrics at the path /metrics on port 9102.

If you noticed, I added the following annotations in the statsd-exporter deployment YAML file, which instruct Prometheus to scrape the metrics on port 9102 at path /metrics.

 annotations:
  prometheus.io/port: "9102"
  prometheus.io/scrape: "true"

Go to the Prometheus console and make sure the statsd exporter pods show up as targets.
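Keep in mind that the prometheus.io/* annotations are a community convention, not something Prometheus understands natively; they only take effect if your Prometheus config has relabel rules that honour them. A typical scrape job looks roughly like this (adjust to your own setup):

```yaml
# Hypothetical scrape job honouring the prometheus.io/* pod annotations
- job_name: kubernetes-pods
  kubernetes_sd_configs:
  - role: pod
  relabel_configs:
  # Only keep pods annotated with prometheus.io/scrape: "true"
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
    action: keep
    regex: "true"
  # Scrape the port given in the prometheus.io/port annotation
  - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
    action: replace
    regex: ([^:]+)(?::\d+)?;(\d+)
    replacement: $1:$2
    target_label: __address__
```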

 

Push metrics to statsd exporter:

Once you confirm that Prometheus scrapes the statsd exporter metrics, you can go ahead and configure your application to push metrics to the statsd exporter. You can also test by pushing a metric manually with the nc (netcat) command. One example is,

echo "example_metrics:1|h" | nc -u -w0 statsd-exporter.the_name_space.svc.cluster.local 9125

 

While plotting the metrics in Prometheus or Grafana, make sure to use an aggregation function like sum() or avg() to combine the metrics from all statsd exporter pods.
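For example, assuming the sample_metrics histogram from the mapping config above, queries like these aggregate across all three exporter pods (the metric name is the one defined in the ConfigMap; the time window is illustrative):

```promql
# Total event rate across all statsd exporter pods
sum(rate(sample_metrics_count[5m]))

# 95th percentile; sum by (le) merges the per-pod buckets before the quantile
histogram_quantile(0.95, sum by (le) (rate(sample_metrics_bucket[5m])))
```

Merging the buckets with sum by (le) before calling histogram_quantile is important; computing a quantile per pod and then averaging gives a statistically meaningless result.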

Good day.
