
Ring election

Contents

  • Getting started
  • Overview and rationale
  • Use cases
  • How to integrate as driver/library
  • Configuration
  • Monitoring API
  • TODO List
  • High Level Diagram
  • How to contribute
  • How to run tests
  • Versioning
  • License

Getting started

Try it out!

docker-compose up

Check the assigned partitions at localhost:9000/status, or change the port to 9001/9002 to query the other nodes.

Try stopping and restarting a container and observe the behaviour.
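
For example, stopping and restarting one node lets you watch partitions being rebalanced across the remaining nodes. The service name below is a placeholder; use the ones defined in [docker-compose.yaml].

# stop one node and check how its partitions are reassigned
docker-compose stop node2
curl http://localhost:9000/status
# bring it back and watch the partitions rebalance again
docker-compose start node2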

If you want to develop new features or fix a bug, you can do that without Docker images; just configure the environment variables correctly (you can see them in the [docker-compose.yaml] file).

See the "How to integrate as driver/library" section to learn how to integrate this library and build a distributed ring on top of ring-election!

Overview and rationale

In modern systems it is often necessary to distribute the application load so that the system scales and each piece of data is processed by a single instance.

Ring-election is a driver that implements a distributed algorithm assigning to each node the partitions to work on. In a simple use case, each node obtains the data belonging to the partitions it owns and works on them.

The algorithm assigns one or more partitions to each node.

A node is removed if it does not send a heartbeat for a while; this process is called the heart check.

Each node in the ring has an ID and a priority; if the leader node dies, the node with the lowest priority is elected as the new leader.

If a node is added to or removed from the cluster, the allocated partitions are rebalanced.
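
As a toy illustration (this is not the library's actual assignment code), distributing a fixed number of partitions over the current members, and recomputing whenever a member joins or leaves, gives the kind of rebalancing described above:

// toy sketch: spread partitions over the current nodes round-robin
const assign = (numPartitions, nodes) => {
  const assignment = new Map(nodes.map(node => [node, []]));
  for (let p = 0; p < numPartitions; p++) {
    assignment.get(nodes[p % nodes.length]).push(p);
  }
  return assignment;
};

console.log(assign(10, ['node1', 'node2', 'node3']));
// if node3 is removed, the same 10 partitions are redistributed
console.log(assign(10, ['node1', 'node2']));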

What does the ring-election driver offer you?

  • A default partitioner that, for a given object, returns the partition it is assigned to (see the sketch after this list).
  • A leader election mechanism.
  • Failure detection between nodes.
  • Assignment and rebalancing of partitions between nodes.
  • Automatic re-election of the leader.
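
A minimal sketch of how such a default partitioner can work (illustrative only; the library's own implementation may differ) is to hash the key and take it modulo the number of partitions, so the same key always lands on the same partition:

// illustrative hash-based partitioner: same key, same partition
const crypto = require('crypto');
const NUM_PARTITIONS = 10;

const toPartition = key => {
  const digest = crypto.createHash('md5').update(String(key)).digest();
  return digest.readUInt32BE(0) % NUM_PARTITIONS;
};

console.log(toPartition('KEY')); // deterministic partition for 'KEY'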

What problems can you solve with this driver?

  • Scalability
  • High availability
  • Concurrency between nodes in a cluster
  • Automatic failover

Use cases

This section introduces what you can build on top of ring-election, using it as a driver/library.

Distributed Scheduler

Each scheduler instance works on its assigned partitions.

A real implementation of this use case is available here: https://github.com/pioardi/hurricane-scheduler
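
A hedged sketch of that pattern, using the follower API shown later in this README (assuming partitions() returns the list of assigned partition ids and defaultPartitioner maps a key to one of them; the job list and runJob below are stand-ins for your own scheduler logic):

const ring = require('ring-election');
ring.follower.createClient();

// hypothetical jobs and runner; replace with your scheduler's own logic
const jobs = [{ key: 'report-1' }, { key: 'report-2' }, { key: 'cleanup' }];
const runJob = job => console.log('running', job.key);

setInterval(() => {
  // only run jobs whose partition is assigned to this node, so every
  // job is executed by exactly one instance in the cluster
  const assigned = ring.follower.partitions();
  jobs
    .filter(job => assigned.includes(ring.follower.defaultPartitioner(job.key)))
    .forEach(runJob);
}, 60000);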


Distributed lock

Distributed cache

Distributed computing

How to integrate as driver/library

How to run a leader

const ring = require('ring-election');
let leader = ring.leader;
leader.createServer();
// if you want a REST API for monitoring, invoke startMonitoring
leader.startMonitoring();
// to get ring info
ring.leader.ring();
// Your leader will be the coordinator.

How to run a follower

const ring = require('ring-election');
let follower = ring.follower;
follower.createClient();
// if you want a REST API for monitoring, invoke startMonitoring
follower.startMonitoring();
// to get ring info
ring.follower.ring();
// to get assigned partitions
let assignedPartitions = ring.follower.partitions();
// now assume that this follower creates some data
// and you want to partition this data
let partition = ring.follower.defaultPartitioner('KEY');
// save your data including the partition on a storage
// you will be the only one in the cluster working on the partitions assigned to you.

Try it out!

docker image build -t ring-election .
docker-compose up

See the examples folder for more advanced examples.

Configuration

PORT: The port the leader listens on; default is 3000.

TIME_TO_RECONNECT: The time in ms a follower waits before connecting to a new leader; default is 3000.

HEARTH_BEAT_FREQUENCY: The interval at which a follower sends a heartbeat; default is 1000.

HEARTH_BEAT_CHECK_FREQUENCY: The interval at which the leader performs the heart check; default is 3000.

LOG_LEVEL: One of the winston logging levels (https://www.npmjs.com/package/winston#logging-levels); default is info.

NUM_PARTITIONS: Number of partitions to distribute across the cluster; default is 10.

SEED_NODES: Comma-separated hostnames and ports of the leader node(s), e.g. hostname1:port,hostname2:port.

MONITORING_PORT: Port on which the REST monitoring service is exposed; default is 9000.
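
As an illustration, a node could be configured through these variables in docker-compose (the service below is hypothetical; see [docker-compose.yaml] for the real layout):

# hypothetical service definition using the variables above
node1:
  image: ring-election
  environment:
    - PORT=3000
    - NUM_PARTITIONS=10
    - HEARTH_BEAT_FREQUENCY=1000
    - HEARTH_BEAT_CHECK_FREQUENCY=3000
    - TIME_TO_RECONNECT=3000
    - SEED_NODES=node1:3000
    - MONITORING_PORT=9000
    - LOG_LEVEL=info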

Monitoring API

To monitor your cluster, contact any node at the path /status (HTTP GET), or contact a follower node at /partitions (HTTP GET).
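
For example, with the default MONITORING_PORT (the exact response payload depends on your cluster state):

# ring status from any node
curl http://localhost:9000/status
# partitions assigned to a follower node
curl http://localhost:9001/partitions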

TODO List

Re-add a client to the cluster after it was removed, once it sends a heartbeat again.

Tag 1.0 and publish on npm.

High Level Diagram


How to contribute

Take a task from the TODO list, develop a new feature, or fix a bug, and open a pull request.

How to run tests

Unit tests

npm run test

Integration tests

cd test/integration

./integration.sh

npm run integration-test

Versioning

We use SemVer (http://semver.org/) for versioning.

License

This project is licensed under the MIT License - see the [LICENSE.md] file for details

