
Introducing the IRONdb Prometheus Adapter


Prometheus, an open-source project from the CNCF, is an infrastructure and service monitoring system that has become popular due to its ease of deployment and general-purpose feature set. Prometheus supports features such as metric collection, alerting, and metric visualization, but it falls short when it comes to long-term data retention.

Prometheus is based on autonomous, single-server nodes that rely on local storage. While this model has its advantages, it introduces some obvious data storage scaling issues. As a result, Prometheus tends to be deployed with shorter retention intervals, which can limit its overall utility.

Today we’re happy to introduce the Beta of our IRONdb Prometheus adapter. IRONdb is Circonus’s internally developed TSDB. It’s a drop-in solution for organizations struggling to scale Prometheus, or for those frustrated with maintaining a high-availability metrics infrastructure. Prometheus users who integrate with IRONdb unlock the potential for historical analysis of their metric data, while simultaneously benefiting from IRONdb’s support for replication and clustering.

Here’s a high-level overview of the features that a Prometheus installation gains by adding IRONdb into its data storage architecture.

Features

                             | IRONdb                                        | Prometheus without IRONdb
Storage node cluster ceiling | >100                                          | 1
Data retention               | Years                                         | Weeks
High Availability            | Yes                                           | No
Partitioning methods         | Automatic                                     | Sharding
Consistency methods          | Immediate per node, catches up across nodes   | None
Replication methods          | Configurable replication factor, 2 by default | Multi-datacenter capability by federation
Server-side scripts          | Yes, in Lua                                   | No
Data scheme                  | Schema-free                                   | Yes
Data typing                  | Supports numeric, text, and histogram data    | Supports numeric data
API                          | RESTful HTTP API                              | RESTful HTTP/JSON API

How it Works

The IRONdb Prometheus Adapter provides remote read and write capabilities between Prometheus and IRONdb, allowing IRONdb to serve as seamless metric storage for Prometheus installations. No other data storage solution can operate at the scale of cardinality that IRONdb can, and a single IRONdb cluster can support many individual Prometheus instances.


Multiple Prometheus instances supported by IRONdb nodes, through our adapter (created in Go)

The Go Gopher created by Renee French is licensed under Creative Commons.

Once configured, the adapter takes the protobuf messages Prometheus sends on remote write and converts them into Circonus’s raw flatbuffer message format, which allows performant writes into IRONdb.

When Prometheus needs data from IRONdb, it makes a request to the configured ‘remote_read’ endpoint, which points at the adapter. The adapter proxies the request to IRONdb, receives the response, and translates it into a form Prometheus can understand.

Implementation Details

Prometheus support is handled within the adapter by a read handler and a write handler. Both take snappy-encoded protobuf messages as the request payload from Prometheus and decode them. Once a message has been decoded and classified as a read or a write operation, the adapter prepares the corresponding IRONdb-specific submission or rollup retrieval, as outlined below.
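
To make the decode step concrete, here is a minimal sketch of a write handler’s decoding, assuming the standard prompb, gogo protobuf, and snappy packages. The function name and package layout are illustrative, not the adapter’s actual code; the read handler follows the same pattern but unmarshals into a prompb.ReadRequest.

// Sketch only: decode a Prometheus remote-write request body.
package adapter

import (
	"io/ioutil"
	"net/http"

	"github.com/gogo/protobuf/proto"
	"github.com/golang/snappy"
	"github.com/prometheus/prometheus/prompb"
)

// decodeWriteRequest turns the snappy-compressed protobuf body that
// Prometheus sends into a prompb.WriteRequest.
func decodeWriteRequest(r *http.Request) (*prompb.WriteRequest, error) {
	compressed, err := ioutil.ReadAll(r.Body)
	if err != nil {
		return nil, err
	}
	// Remote read/write payloads use snappy block-format compression.
	raw, err := snappy.Decode(nil, compressed)
	if err != nil {
		return nil, err
	}
	var req prompb.WriteRequest
	if err := proto.Unmarshal(raw, &req); err != nil {
		return nil, err
	}
	return &req, nil
}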

For write operations, the decoded request is converted into a list of metrics and then sent to an available IRONdb node. The process through which the adapter translates the Prometheus time-series request is shown here:

for _, ts := range timeseries {
	// convert metric and labels to IRONdb format:
	for _, sample := range ts.GetSamples() {
		// MakeMetric takes in a flatbuffer builder, the metric from
		// the prometheus metric family and results in an offset for
		// the metric inserted into the builder
		mOffset, err := MakeMetric(b, ts.GetLabels(), sample, accountID, checkName, checkUUID)
		if err != nil {
			return []byte{}, errors.Wrap(err,
				"failed to encode metric to flatbuffer")
		}
		// keep track of all of the metric offsets so we can build the
		// MetricList Metrics Vector
		offsets = append(offsets, mOffset)
	}
}

For remote read capabilities, the Prometheus protobuf request contains a query, which we extract from the request payload. We take each query matcher from the request and fold them all into a single stream tag query for IRONdb, as shown here:

switch m.Type {
case prompb.LabelMatcher_EQ:
	// query equal
	tag := fmt.Sprintf(`b"%s":b"%s"`, matcherName, matcherValue)
	snowthTagQuery.WriteString(tag)
	streamTags = append(streamTags, tag)
case prompb.LabelMatcher_NEQ:
	// query not equal
	tag := fmt.Sprintf(`b"%s":b"%s"`, matcherName, matcherValue)
	snowthTagQuery.WriteString("not(")
	snowthTagQuery.WriteString(tag)
	snowthTagQuery.WriteByte(')')
case prompb.LabelMatcher_RE:
	// query regular expression
	tag := fmt.Sprintf(`b"%s":b/%s/`, matcherName, matcherValue)
	snowthTagQuery.WriteString(tag)
	streamTags = append(streamTags, tag)
case prompb.LabelMatcher_NRE:
	// query not regular expression
	tag := fmt.Sprintf(`b"%s":b/%s/`, matcherName, matcherValue)
	snowthTagQuery.WriteString("not(")
	snowthTagQuery.WriteString(tag)
	snowthTagQuery.WriteByte(')')
}

After we retrieve all of the matched metric names from the IRONdb find-tags API, we perform a rollup request against IRONdb, encode the results as a Prometheus QueryResult set, and return them to the caller.
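
As a rough sketch of that final encoding step, the response construction looks something like the following, assuming the standard prompb types (whose exact field layouts have shifted slightly across Prometheus versions). The rollupValue type, the metricName parameter, and the single-label time series are illustrative stand-ins for data fetched from IRONdb, not the adapter’s actual types.

// Sketch only: encode rollup data as a Prometheus read response.
package adapter

import (
	"net/http"

	"github.com/gogo/protobuf/proto"
	"github.com/golang/snappy"
	"github.com/prometheus/prometheus/prompb"
)

// rollupValue is a hypothetical stand-in for one rollup datapoint from IRONdb.
type rollupValue struct {
	TimestampMS int64 // milliseconds since epoch
	Value       float64
}

// writeReadResponse converts the rollup data for one matched metric into a
// prompb.ReadResponse and writes it back to Prometheus as snappy-compressed
// protobuf.
func writeReadResponse(w http.ResponseWriter, metricName string, rollups []rollupValue) error {
	ts := &prompb.TimeSeries{
		Labels: []prompb.Label{{Name: "__name__", Value: metricName}},
	}
	for _, r := range rollups {
		ts.Samples = append(ts.Samples, prompb.Sample{
			Value:     r.Value,
			Timestamp: r.TimestampMS,
		})
	}
	resp := &prompb.ReadResponse{
		// One QueryResult per query in the original ReadRequest.
		Results: []*prompb.QueryResult{
			{Timeseries: []*prompb.TimeSeries{ts}},
		},
	}
	data, err := proto.Marshal(resp)
	if err != nil {
		return err
	}
	// Prometheus expects a snappy-compressed protobuf response body.
	w.Header().Set("Content-Type", "application/x-protobuf")
	w.Header().Set("Content-Encoding", "snappy")
	_, err = w.Write(snappy.Encode(nil, data))
	return err
}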

Getting Started

The repository includes a quick-start guide with everything you need to know to build and run the adapter. An included Makefile handles all of the required build tasks. After building, you can run the Prometheus Adapter either directly or through Docker.


Configuration Screenshot

You can adjust the behavior of the IRONdb Prometheus adapter through the “prometheus.yml” config file. The config file specifies how often Prometheus reads from and writes to remote stores; the default scrape interval of 15 seconds may not fit your needs.
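
For reference, the relevant parts of a prometheus.yml look roughly like this. The adapter’s listen address and endpoint paths below are placeholders; consult the repository’s getting-started guide for the actual values.

global:
  scrape_interval: 15s      # default scrape cadence; raise or lower to fit your needs

# Point Prometheus at the adapter for remote storage (address and paths are placeholders).
remote_write:
  - url: "http://localhost:8080/write"

remote_read:
  - url: "http://localhost:8080/read"

Both remote_write and remote_read accept a list of endpoints, so the adapter can run alongside any other remote storage you already have configured.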

What’s Next?

The IRONdb Prometheus Adapter opens the door for Prometheus to integrate with the suite of alerting and analytics tools in the Circonus monitoring platform. Once data is stored in IRONdb, it’s a simple matter to set up Prometheus to integrate with Circonus. We’re working towards this goal right now.

Later, once we’re finished with the Beta, we’ll release the Adapter for use with all on-premise IRONdb installations.

You can learn more about IRONdb here, or contact us to participate in the IRONdb Prometheus Adapter Beta.

