Full monitoring system: Graphite, collectd and StatsD – Part 3

Introduction

In our three-part tutorial on installing and configuring a full monitoring system, we have already installed both Graphite and collectd on an Ubuntu 16.04 server.

  • Graphite is the graphing component: it stores the collected metrics and displays the required data
  • collectd is the daemon that collects system metrics and flushes them to Graphite

In this tutorial we’ll install and configure StatsD, a simple daemon written in Node.js that collects and aggregates arbitrary metrics (whereas collectd is used for system-related metrics). For instance, with StatsD it’s possible to collect information at the application level, which is useful during development.
Let’s look at the process for installing and configuring StatsD on Ubuntu 16.04 and flushing data to Graphite.
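
To give an idea of what “arbitrary data” means: applications talk to StatsD through a tiny plain-text protocol over UDP, one metric per line. A minimal sketch (the bucket name myapp.logins is just a made-up example):

```shell
# StatsD wire format: <bucket>:<value>|<type>
# common types: c = counter, g = gauge, ms = timing in milliseconds
bucket="myapp.logins"    # hypothetical application metric
msg="${bucket}:1|c"      # increment the counter by 1
echo "$msg"              # -> myapp.logins:1|c
# once StatsD is running, such a line can be sent with netcat:
#   printf '%s' "$msg" | nc -u -w1 localhost 8125
```

Since the transport is UDP, sending a metric never blocks the application, which makes it cheap to instrument even hot code paths.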

Getting started – Install Node.js

As said in the introduction, StatsD is a daemon that runs on Node.js. It’s not available as an Ubuntu package, so we will build one ourselves!
First of all, install the following packages:

# apt install nodejs debhelper devscripts

Install StatsD

Create a directory in which to build StatsD (creating a directory under /opt requires root privileges), for example:

$ sudo mkdir /opt/statsd_build
$ sudo chown "$USER" /opt/statsd_build

There, clone the source code:

$ cd /opt/statsd_build
$ git clone https://github.com/etsy/statsd.git

Next, build the package:

$ cd statsd
$ dpkg-buildpackage

At the end, a new .deb package will be available in /opt/statsd_build. Stop Carbon before installing it: StatsD starts as soon as it is installed, and any metrics flushed before the storage schema is configured would be created with the wrong retention settings.

# systemctl stop carbon-cache
# dpkg -i statsd*.deb

The StatsD service starts automatically upon installation. Stop it for now and start Carbon again:

# systemctl stop statsd
# systemctl start carbon-cache

Now, it’s time to configure the daemon.

Configure StatsD

To configure StatsD, edit its configuration file. On Ubuntu, the installation places two of them in the /etc/statsd directory. Edit localConfig.js:

# $EDITOR /etc/statsd/localConfig.js

In that file, there is code that looks something like this:

{
  graphitePort: 2003
, graphiteHost: "localhost"
, port: 8125
}

It’s not required to change these values, since they match the ones used during the Graphite configuration.

The only change required here is to turn off legacy namespacing. StatsD used it in the past to organize its data, but it’s no longer needed because a more intuitive naming scheme has been introduced. So:

{
  graphitePort: 2003
, graphiteHost: "localhost"
, port: 8125
, graphite: {
    legacyNamespace: false
  }
}

Save and exit.
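
While editing this file, it is worth knowing that StatsD flushes its aggregates to Carbon every 10 seconds by default. The interval can be set explicitly with the flushInterval key (in milliseconds); a sketch based on the defaults shipped in StatsD’s exampleConfig.js:

```javascript
{
  graphitePort: 2003
, graphiteHost: "localhost"
, port: 8125
, flushInterval: 10000   // ms between flushes to Carbon (StatsD's default)
, graphite: {
    legacyNamespace: false
  }
}
```

The flush interval matters when configuring the storage schema: Carbon should not be asked to store points more often than StatsD sends them.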

Configure a storage schema

As with collectd, the next step is to configure a storage schema for StatsD. The syntax is the same as the one used for the other daemon. So, edit the storage schemas configuration file:

# $EDITOR /etc/carbon/storage-schemas.conf

Above the default block, add this new block:

[statsd]
pattern = ^stats.*
retentions = 20s:1d, 5m:7d, 10m:1y

Note that the smallest interval in the retentions line should match StatsD’s flush interval (10 seconds by default; with the 20s retention above, the flush interval would need to be raised to match). If StatsD flushes more often than the schema stores points, later values overwrite earlier ones in the same slot and data is lost!
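
To get a feel for what this schema costs on disk: the number of points in each archive is the archive length divided by its interval, and Whisper stores each point in 12 bytes. A quick back-of-the-envelope check:

```shell
# points per archive = archive length / point interval (both in seconds)
p1=$(( 24*3600 / 20 ))        # 20s resolution for 1 day
p2=$(( 7*24*3600 / 300 ))     # 5m resolution for 7 days
p3=$(( 365*24*3600 / 600 ))   # 10m resolution for 1 year
total=$(( p1 + p2 + p3 ))
echo "$p1 $p2 $p3"                        # -> 4320 2016 52560
echo "$(( total * 12 )) bytes per metric" # roughly 0.7 MB, ignoring the small header
```

So every metric StatsD creates costs well under a megabyte with this schema, which is why storing one Whisper file per metric stays manageable.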

Configure data aggregation

The last step is to configure a way to aggregate data. Edit the storage aggregation configuration file:

# $EDITOR /etc/carbon/storage-aggregation.conf

There should be content like this:

[min]
pattern = \.min$
xFilesFactor = 0.1
aggregationMethod = min

[max]
pattern = \.max$
xFilesFactor = 0.1
aggregationMethod = max

[sum]
pattern = \.count$
xFilesFactor = 0
aggregationMethod = sum

[default_average]
pattern = .*
xFilesFactor = 0.5
aggregationMethod = average

It’s important to notice that metrics for minimum and maximum values should not be averaged. If they were, it wouldn’t be possible to preserve the lowest and highest points.
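
A small worked example makes this concrete. Suppose three fine-grained .upper values fall into one coarse bucket during downsampling (the values are made up):

```shell
# three fine-grained .upper values collapsing into one coarse point
vals="40 95 30"
max=0; sum=0; n=0
for v in $vals; do
  sum=$(( sum + v )); n=$(( n + 1 ))
  if [ "$v" -gt "$max" ]; then max=$v; fi
done
echo "average would store: $(( sum / n ))"  # -> 55: the 95 spike disappears
echo "aggregationMethod=max stores: $max"   # -> 95
```

Averaging would report a calm 55 for a window that actually peaked at 95, which is exactly the information a .upper series exists to preserve.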

To configure downsampling for the statistics StatsD actually sends (its timers produce .lower, .upper, .sum and .count series), the storage-aggregation.conf file should be edited as follows:

[min]
pattern = \.lower$
xFilesFactor = 0.1
aggregationMethod = min

[max]
pattern = \.upper(_\d+)?$
xFilesFactor = 0.1
aggregationMethod = max

[sum]
pattern = \.sum$
xFilesFactor = 0
aggregationMethod = sum

[count]
pattern = \.count$
xFilesFactor = 0
aggregationMethod = sum

[default_average]
pattern = .*
xFilesFactor = 0.5
aggregationMethod = average

This means that, when downsampling, only the minimum value is kept for metrics ending in .lower, and only the maximum for those ending in .upper (or .upper_N, as produced for percentile thresholds).

Save and close the file.

Conclusions

Once everything is configured, restart all the services:

# systemctl stop carbon-cache 
# systemctl start carbon-cache
# systemctl start statsd

And that’s it! Just go to http://localhost using a web browser and visualize the graphed data; the StatsD metrics appear under the stats tree.
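
To verify end to end that StatsD data is reaching Graphite, one option is to send a test counter and query Graphite’s render API for it. A sketch (the metric name test.deploy is hypothetical; with legacy namespacing turned off, counters show up under stats.counters.<name>.count):

```shell
# build the render API URL for a hypothetical test counter
target="stats.counters.test.deploy.count"
url="http://localhost/render?target=${target}&format=json&from=-10min"
echo "$url"
# with all services running, exercise the whole pipeline:
#   echo "test.deploy:1|c" | nc -u -w1 localhost 8125
#   sleep 11     # wait for at least one StatsD flush (10s by default)
#   curl "$url"  # the JSON datapoints should include the counter
```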
