
Logz.io Infrastructure Monitoring: Building Grafana Visualizations

 4 years ago
source link: https://logz.io/blog/logz-io-infrastructure-monitoring-grafana-visualizations/

Yesterday, my colleague Mike Elsmore wrote a blog post about sending metrics to Logz.io Infrastructure Monitoring – now let’s analyze them by building Grafana visualizations!

Once you’ve started to send metric data to Logz.io, how do you visualize and interpret that data so that it’s useful for you? In Logz.io Infrastructure Monitoring, we use Grafana to provide dashboards and bring meaningful information to light. Grafana is a hugely popular open source project (like most of the Logz.io stack) that connects to Elasticsearch, allowing for powerful querying and visualization of data.

The dashboard is the heart of infrastructure monitoring. Grafana open source offers a rich set of capabilities for creating custom panels and dashboards. Logz.io offers dozens of useful pre-built dashboards, which you can duplicate to create your own, review for ideas for specific panels, or copy useful snippets from. You can find pre-built dashboards for many popular AWS services, Microsoft Azure services, SQL and NoSQL databases (such as MySQL, Postgres, Redis, and MongoDB), operating systems, Docker monitoring, and Kubernetes monitoring in container environments.

Building Grafana Visualizations with Logz.io

Dashboards are most powerful when you can filter metrics based on relevant variables, so you can zoom in on your topic of investigation. Before creating your dashboard’s panels, it’s best to put some thought into which variables are relevant in your environment and add them to your dashboard (although, of course, you can add or edit variables later on). Variables can be found under Dashboard Settings (the cogwheel ⚙️) → Variables:


Infrastructure Monitoring: Grafana Dashboard Settings (cogwheel) → Variables

Start by defining your variable through a query. To do this, familiarize yourself with the variable query syntax in the Grafana documentation. You can use Grafana’s dashboard templating capabilities in your query. You can also create nested variables, enabling you to fine-tune your investigation gradually. For instance, in a clustered environment you may first home in on a cluster, and then on a specific node in that cluster.
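As a sketch of what such variable queries can look like against an Elasticsearch data source (the field names `cluster` and `node` here are hypothetical, not from a real dashboard), a terms-lookup variable and a nested variable that depends on it might be defined as:

```json
{"find": "terms", "field": "cluster"}
```

```json
{"find": "terms", "field": "node", "query": "cluster:$cluster"}
```

The second query only returns nodes belonging to whichever cluster is currently selected in the first drop-down, which is what enables the gradual fine-tuning described above.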

Once defined, the variables will appear at the top of your dashboard as drop-down selections. You can now parameterize panels based on these variables using the $<variable-name> notation.
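For instance, assuming hypothetical variables named `cluster` and `node`, a panel’s Lucene query could reference the current drop-down selections like this:

```
cluster: $cluster AND node: $node
```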

Now it’s time to add your panels of interest. You will find many available visualizations you can use:


Grafana panel visualization is highly customizable

The panel visualization is highly customizable. For a detailed explanation of each part, please see the official Grafana documentation.

Lucene Queries and Grafana

The heart of the panel is the query. Here you will define the data source (your Logz.io metrics account), a Lucene query defining the documents of interest (like the “Where” clause in an RDBMS), the metric you monitor (like the “Select” clause) and its aggregation function (max, min, sum, average, percentiles, etc.), the grouping, and the sampling time parameters. You can use the variables you’ve defined as parameters, as seen in the example below:
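To make that mapping concrete, here is a sketch of how the parts of a panel query line up with their SQL analogues (the field and metric names are illustrative only):

```
Query:     type:telegraf AND host:$host        <- the "Where" clause (Lucene)
Metric:    Average  cpu_usage_percent          <- the "Select" clause + aggregation
Group by:  Terms  host.keyword
Then by:   Date Histogram  @timestamp (interval: auto)
```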


You can use the variables you’ve defined as parameters in Grafana visualizations

Logz.io Infrastructure Monitoring Visualization

Logz.io Infrastructure Monitoring provides built-in roll-ups of each of your metrics by MIN, MAX, AVG, and SUM. For your metric of choice, you’ll simply have these four roll-up versions in addition to the metric itself, denoted by a respective suffix of $MIN, $MAX, $AVG, or $SUM. Why is that useful? Consider a case where you’d like to see peaks of memory utilization over an extended period (a few days or more), in which the fine-grain metrics have already been rolled up. You’d get a more accurate result calculating the peak (the Max function) over the Max’ed metrics than over averaged ones. Getting that out of the box for any metric is pretty neat.
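The reasoning behind taking the max over the MAX roll-ups can be sketched in a few lines of Python (the sample values and window size here are made up purely for illustration):

```python
# Why querying Max over $MAX roll-ups preserves peaks that
# averaged roll-ups would flatten.
raw = [10, 12, 95, 11, 9, 14, 13, 10]  # fine-grain samples, one spike at 95
window = 4                             # roll-up window size

# Roll up the raw samples two ways: keep the max vs. the average per window.
rollup_max = [max(raw[i:i + window]) for i in range(0, len(raw), window)]
rollup_avg = [sum(raw[i:i + window]) / window for i in range(0, len(raw), window)]

peak_from_max = max(rollup_max)  # the true peak (95) survives the roll-up
peak_from_avg = max(rollup_avg)  # the spike is flattened to 32.0
print(peak_from_max, peak_from_avg)  # -> 95 32.0
```

Querying the $MAX series with a Max aggregation is the left-hand computation; averaging first (the default roll-up behavior in many systems) would hide the spike, as the right-hand value shows.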

There are many more capabilities for writing complex queries (even across multiple sources!) and creating simple yet powerful visualizations, so go ahead and explore them. Tomorrow, look out for a post on how to link your metrics with your logs, how to add Markers, and how to define alerts!

