
How to deploy Graphite & Grafana

Tommaso Doninelli

Deploy and monitor Graphite and Grafana on HakunaCloud

Why monitor at all? Because you cannot improve what you do not measure. Graphite stores and renders time-series metrics, statsd aggregates counters and timers from your applications before flushing them to Graphite, and Grafana builds dashboards on top of it all. In this post we deploy the whole stack on HakunaCloud.


Networking

Create a network that will host all the Graphite nodes, Grafana and, eventually, memcached. Graphite and statsd do not support authentication, so we deploy a separate network to limit service exposure.

beekube network create monitoring

Persistent volumes

How much storage do we need? We can estimate it with an online whisper file-size calculator.

1s:1d,10s:3d,1m:7d,5m:60d,10m:365d

With this retention schema, 30 metrics per host across 1,000 hosts take roughly 60 GB.
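As a sanity check, the figure can be reproduced by hand: whisper stores roughly 12 bytes per datapoint (timestamp plus value), and we ignore the small per-archive header here.

```shell
# Points per archive for 1s:1d,10s:3d,1m:7d,5m:60d,10m:365d
# (86400 s/day; 8640 ten-second slots/day; 1440 min/day; 288 five-minute
# slots/day; 144 ten-minute slots/day)
points=$(( 86400 + 3*8640 + 7*1440 + 60*288 + 365*144 ))
bytes_per_metric=$(( points * 12 ))
echo "$points points, ~$(( bytes_per_metric / 1024 / 1024 )) MiB per metric"
# 30 metrics x 1000 hosts:
echo "total ~$(( 30 * 1000 * bytes_per_metric / 1024 / 1024 / 1024 )) GiB"
```

About 64 GiB, which matches the ~60 GB estimate and motivates the 100 GB data volume below.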

So, let’s create the volumes

beekube volume create metrics_graphite_config --size 5
beekube volume create metrics_graphite_data --size 100
beekube volume create metrics_statsd_config --size 5

The volume names are self-explanatory.

Mapped Ports

|Public | Host| Container | Service|
|-------|---- | --------- | ---------|
|  NO   |  80 |        80 | [nginx](https://www.nginx.com/resources/admin-guide/) |
|  NO   |2003 |      2003 | [carbon receiver - plaintext](http://graphite.readthedocs.io/en/latest/feeding-carbon.html#the-plaintext-protocol) |
|  NO   |2004 |      2004 | [carbon receiver - pickle](http://graphite.readthedocs.io/en/latest/feeding-carbon.html#the-pickle-protocol) |
|  NO   |2023 |      2023 | [carbon aggregator - plaintext](http://graphite.readthedocs.io/en/latest/carbon-daemons.html#carbon-aggregator-py) |
|  NO   |2024 |      2024 | [carbon aggregator - pickle](http://graphite.readthedocs.io/en/latest/carbon-daemons.html#carbon-aggregator-py) |
|  NO   |8080 |      8080 | Graphite internal gunicorn port (without Nginx proxying). |
|  YES  |8125 |      8125 | [statsd](https://github.com/etsy/statsd/blob/master/docs/server.md) |
|  NO   |8126 |      8126 | [statsd admin](https://github.com/etsy/statsd/blob/master/docs/admin_interface.md) |

We're going to expose only statsd, on port 8125/udp. The other ports will be reachable only by other containers on the monitoring network (like Grafana).

If you ever need to map the remaining ports on the host as well, add the corresponding flags to the run command: `-p 80:80 -p 2003:2003 -p 2004:2004 -p 2023:2023 -p 2024:2024 -p 8126:8126`

beekube run \
 --name graphite01 \
 --restart=always \
 --cpus 8 --memory 8192m \
 --network monitoring \
 -v metrics_graphite_config:/opt/graphite/conf \
 -v metrics_graphite_data:/opt/graphite/storage \
 -v metrics_statsd_config:/opt/statsd/config \
 -p 8125:8125/udp \
 -p 2023:2023 \
 graphiteapp/graphite-statsd

Change the configuration

The graphiteapp/graphite-statsd container ships with default values for all configuration files. We probably want to change the default data retention: in our case, we are tracking metrics at 1-second resolution. To change the data retention we have to:

Stop the graphite container

beekube stop graphite01

How do you change a file inside a Docker volume? In HakunaCloud, we can start a dummy container running an SSH server and access the file system through it:

beekube run --name ssh \
    -p 46587:22 \
    -v metrics_graphite_config:/graphite \
    -v metrics_statsd_config:/statsd \
    -v metrics_graphite_data:/graphite_data \
    -e SSH_ROOT_PASS=ChangeMe44 \
    beekube/bubuntu
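Then connect over SSH on the mapped port (the host name is a placeholder for your node's address; 46587 is the port mapped above):

```
ssh -p 46587 root@<your-node-address>
```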

Remember to update the flush interval of statsd as well, in the udp.js and tcp.js files under the statsd config volume.
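As a sketch, the relevant part of /opt/statsd/config/udp.js might look like the following (flushInterval is in milliseconds; the exact file layout depends on the image version, so treat the keys below as an illustration):

```
{
  port: 8125,
  flushInterval: 1000,        // flush to carbon every second, matching the 1s retention
  graphiteHost: "127.0.0.1",
  graphitePort: 2003
}
```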

Edit the retention policy in storage-schemas.conf, inside the Graphite config volume (mounted at /graphite in the ssh container). See the README of graphiteapp/graphite-statsd for the full list of configuration files.
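A minimal storage-schemas.conf applying the retention schema from the sizing estimate above could look like this (a sketch; the catch-all pattern is an assumption, you may want finer-grained sections per metric prefix):

```
[default]
pattern = .*
retentions = 1s:1d,10s:3d,1m:7d,5m:60d,10m:365d
```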

Existing whisper files keep the retention they were created with, so delete them all (note: this erases any metrics stored so far): find /graphite_data/whisper -name '*.wsp' -delete

Now, restart the container - that’s it

beekube start graphite01

Push test metrics

while true; do echo -e "alpha.counter:$((RANDOM % 100))|c\nbeta.counter:$((RANDOM % 100))|c" | socat -t 0 STDIN UDP:127.0.0.1:8125; sleep 1; done
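After a minute or so, the counters should be queryable through the Graphite render API on the internal gunicorn port (graphite01 is reachable by name only from the monitoring network; the exact metric path depends on statsd's legacyNamespace setting):

```
curl 'http://graphite01:8080/render?target=stats.alpha.counter&from=-5min&format=json'
```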

Deploy Grafana
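The Grafana deployment mirrors the Graphite one. A sketch using the official grafana/grafana image on the same monitoring network (the volume name, size, and port mapping are assumptions):

```
beekube volume create metrics_grafana_data --size 5

beekube run \
 --name grafana01 \
 --restart=always \
 --network monitoring \
 -v metrics_grafana_data:/var/lib/grafana \
 -p 3000:3000 \
 grafana/grafana
```

Inside Grafana, add a Graphite data source pointing at http://graphite01:8080 and start building dashboards.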

Tommaso Doninelli

CEO @ HakunaCloud

10 years as CTO, former Software Engineer at Amazon AWS, Cloud Solution Architect with projects in the US, Europe and the United Arab Emirates.

"I am a DevOps and automation advocate; you can test, deploy, analyze and improve even your grandma's recipes."