Vector: A High-Performance Logs, Metrics, & Events Router
source link: https://github.com/timberio/vector
Chat/Forum • Mailing List • Install
Vector is a high-performance observability data router. It makes collecting, transforming, and sending logs, metrics, and events easy. It decouples data collection & routing from your services, giving you control and data ownership, among many other benefits.
Built in Rust, Vector places a high value on performance, correctness, and operator friendliness. It compiles to a single static binary and is designed to be deployed across your entire infrastructure, serving both as a lightweight agent and a highly efficient service, making the process of getting data from A to B simple and unified.
Documentation
About
Setup
- Installation - docker, apt, homebrew, yum, download archives, and more
- Getting started
- Deployment - topologies, roles
Usage
- Configuration - sources, transforms, sinks
- Administration - starting, stopping, reloading, updating
- Guides
Resources
Features
- Fast - Built in Rust, Vector is fast and memory efficient. No runtime. No garbage collector.
- Correct - Obsessed with getting the details right.
- Vendor Neutral - Does not favor a specific storage. Fair, open, with the user's best interest in mind.
- Agent or Service - One simple tool to get data from A to B. Deploys as an agent or service.
- Logs, Metrics, or Events - Logs, metrics, and events. Collect, unify, and ship all observability data.
- Correlate Logs & Metrics - Derive metrics from logs, add shared context with transforms.
- Clear Guarantees - A guarantee support matrix helps you understand your tradeoffs.
- Easy To Deploy - Cross-compiles to a single static binary with no runtime.
- Hot Reload - Reload configuration on the fly, without skipping a beat.
Performance
| Test | Vector | Filebeat | FluentBit | FluentD | Logstash | Splunk UF | Splunk HF |
| --- | --- | --- | --- | --- | --- | --- | --- |
| TCP to Blackhole | 86 MiB/s | n/a | 64.4 MiB/s | 27.7 MiB/s | 40.6 MiB/s | n/a | n/a |
| File to TCP | 76.7 MiB/s | 7.8 MiB/s | 35 MiB/s | 26.1 MiB/s | 3.1 MiB/s | 40.1 MiB/s | 39 MiB/s |
| Regex Parsing | 13.2 MiB/s | n/a | 20.5 MiB/s | 2.6 MiB/s | 4.6 MiB/s | n/a | 7.8 MiB/s |
| TCP to HTTP | 26.7 MiB/s | n/a | 19.6 MiB/s | <1 MiB/s | 2.7 MiB/s | n/a | n/a |
| TCP to TCP | 69.9 MiB/s | 5 MiB/s | 67.1 MiB/s | 3.9 MiB/s | 10 MiB/s | 70.4 MiB/s | 7.6 MiB/s |

To learn more about our performance tests, please see the Vector test harness.
Correctness
| Test | Vector | Filebeat | FluentBit | FluentD | Logstash | Splunk UF | Splunk HF |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Disk Buffer Persistence | ✅ | ✅ | ❌ | ❌ | ⚠️ | ✅ | ✅ |
| File Rotate (create) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| File Rotate (copytruncate) | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | ✅ |
| File Truncation | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| Process (SIGHUP) | ✅ | ❌ | ❌ | ❌ | ⚠️ | ✅ | ✅ |
| JSON (wrapped) | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ |

To learn more about our correctness tests, please see the Vector test harness.
Installation
Run the following in your terminal, then follow the on-screen instructions.
curl https://sh.vector.dev -sSf | sh
Or view platform specific installation instructions.
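The sources, transforms, and sinks described below are wired together in a single TOML configuration file, with each downstream component naming its upstream components via `inputs`. A minimal sketch of a tail-parse-print pipeline (file paths and component names here are illustrative, and option names may vary between Vector versions):

```toml
# vector.toml — illustrative tail → parse → print pipeline

[sources.app_logs]
  type = "file"                      # tail one or more local files
  include = ["/var/log/app/*.log"]   # placeholder path

[transforms.parsed]
  type = "json_parser"               # parse each line as JSON
  inputs = ["app_logs"]              # consume events from the source above

[sinks.out]
  type = "console"                   # print events for inspection
  inputs = ["parsed"]
  encoding = "json"
```

It can then be run with `vector --config vector.toml`.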
Sources
| Name | Description |
| --- | --- |
| `file` | Ingests data through one or more local files and outputs log events. |
| `journald` | Ingests data through log records from journald and outputs log events. |
| `kafka` | Ingests data through Kafka 0.9 or later and outputs log events. |
| `statsd` | Ingests data through the StatsD UDP protocol and outputs metric events. |
| `stdin` | Ingests data through standard input (STDIN) and outputs log events. |
| `syslog` | Ingests data through the Syslog 5424 protocol and outputs log events. |
| `tcp` | Ingests data through the TCP protocol and outputs log events. |
| `udp` | Ingests data through the UDP protocol and outputs log events. |
| `vector` | Ingests data through another upstream Vector instance and outputs log and metric events. |
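Each source is declared as a `[sources.<name>]` table whose `type` selects one of the components above; the remaining options are type-specific. A sketch of two network sources (the addresses are placeholders, and the option names are assumptions based on early Vector releases):

```toml
# Illustrative source declarations — option names may differ by version

[sources.syslog_in]
  type = "syslog"
  mode = "tcp"                # syslog can listen over tcp, udp, or a unix socket
  address = "0.0.0.0:514"     # placeholder listen address

[sources.metrics_in]
  type = "statsd"
  address = "127.0.0.1:8125"  # placeholder StatsD UDP listen address
```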
Transforms
| Name | Description |
| --- | --- |
| `add_fields` | Accepts log events and allows you to add one or more fields. |
| `coercer` | Accepts log events and allows you to coerce event fields into fixed types. |
| `field_filter` | Accepts log and metric events and allows you to filter events by a field's value. |
| `grok_parser` | Accepts log events and allows you to parse a field value with Grok. |
| `json_parser` | Accepts log events and allows you to parse a field value as JSON. |
| `log_to_metric` | Accepts log events and allows you to convert logs into one or more metrics. |
| `lua` | Accepts log events and allows you to transform events with a full embedded Lua engine. |
| `regex_parser` | Accepts log events and allows you to parse a field's value with a Regular Expression. |
| `remove_fields` | Accepts log and metric events and allows you to remove one or more event fields. |
| `sampler` | Accepts log events and allows you to sample events with a configurable rate. |
| `tokenizer` | Accepts log events and allows you to tokenize a field's value by splitting on white space, ignoring special wrapping characters, and zipping the tokens into ordered field names. |
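Transforms chain off sources (or other transforms) via `inputs`. As a sketch of the "Correlate Logs & Metrics" feature above, a `regex_parser` followed by `log_to_metric` could derive a counter from a parsed log field (the field names, regex, and options here are illustrative, not verbatim from the docs):

```toml
# Illustrative: derive a metric from logs — details may differ by version

[transforms.parse]
  type = "regex_parser"
  inputs = ["app_logs"]       # placeholder upstream source name
  regex = '^(?P<host>[\w.]+) (?P<user>\w+) (?P<bytes_in>\d+)'

[transforms.to_metric]
  type = "log_to_metric"
  inputs = ["parse"]

  [[transforms.to_metric.metrics]]
    type = "counter"
    field = "bytes_in"        # increment by this parsed field
    name = "bytes_in_total"   # hypothetical metric name
```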
Sinks
| Name | Description |
| --- | --- |
| `aws_cloudwatch_logs` | Batches log events to AWS CloudWatch Logs via the PutLogEvents API endpoint. |
| `aws_kinesis_streams` | Batches log events to AWS Kinesis Data Streams via the PutRecords API endpoint. |
| `aws_s3` | Batches log events to AWS S3 via the PutObject API endpoint. |
| `blackhole` | Streams log and metric events to a blackhole that simply discards data, designed for testing and benchmarking purposes. |
| `clickhouse` | Batches log events to Clickhouse via the HTTP interface. |
| `console` | Streams log and metric events to the console, STDOUT or STDERR. |
| `elasticsearch` | Batches log events to Elasticsearch via the _bulk API endpoint. |
| `http` | Batches log events to a generic HTTP endpoint. |
| `kafka` | Streams log events to Apache Kafka via the Kafka protocol. |
| `prometheus` | Exposes metric events to the Prometheus metrics service. |
| `splunk_hec` | Batches log events to a Splunk HTTP Event Collector. |
| `tcp` | Streams log events to a TCP connection. |
| `vector` | Streams log events to another downstream Vector instance. |
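A single stream can fan out to several sinks by listing the same upstream component in each sink's `inputs`. An illustrative sketch (the bucket, region, and host values are placeholders, and option names may vary between versions):

```toml
# Illustrative: one parsed stream shipped to both S3 and Elasticsearch

[sinks.archive]
  type = "aws_s3"
  inputs = ["parsed"]                # placeholder upstream transform name
  bucket = "my-log-archive"          # placeholder bucket
  region = "us-east-1"

[sinks.search]
  type = "elasticsearch"
  inputs = ["parsed"]                # same stream, second destination
  host = "http://localhost:9200"     # placeholder Elasticsearch endpoint
```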
Companies Using Vector In Production
License
Copyright 2019, Vector Authors. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use these files except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
Developed with ❤️ by Timber.io