
Linux Administration: Deploying Elasticsearch cluster

source link: http://www.linux-admins.net/2016/05/deploying-elasticsearch-cluster.html


Elasticsearch (ES) is a distributed, scalable search and analytics engine that enables fast data retrieval [1]. It exposes a RESTful API, provides Java and Python libraries, and can be extended with various plugins, such as the ZooKeeper cluster-integration plugin.

In this post I'll deploy a small Elasticsearch cluster consisting of one Nginx load balancer, two ES client nodes, three ES master nodes and two ES data nodes.

The roles of the three ES node types are:

- Client nodes: act as load balancers, routing queries and indexing requests. Client nodes hold no data.
- Data nodes: hold the data, merge segments and execute queries. The data nodes are the main workers.
- Master nodes: manage the cluster and elect a leading master using unicast discovery. The master nodes hold the cluster configuration and the mappings of all the indexes in the cluster.

By default, a simple one-node deployment configures the ES server to be both a master and a data node. To scale further, however, you'll need multiple data nodes, at least three master-eligible nodes (to prevent split-brain scenarios) and two client nodes to route the requests and results.
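The value that discovery.zen.minimum_master_nodes should be set to follows from the number of master-eligible nodes: quorum = (masters / 2) + 1. A quick sketch of the arithmetic for a three-master cluster:

```shell
# Quorum needed to avoid split brain: (master-eligible nodes / 2) + 1
MASTER_ELIGIBLE=3
QUORUM=$(( MASTER_ELIGIBLE / 2 + 1 ))
echo "discovery.zen.minimum_master_nodes: $QUORUM"   # prints: discovery.zen.minimum_master_nodes: 2
```

With three master-eligible nodes and a quorum of two, the cluster can lose one master and still elect a leader.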

Installing and configuring an ES cluster is rather simple. First download the Oracle JRE and install it on all ES nodes:

[es-nodes]$ lsb_release -d
Description: Ubuntu 14.04.4 LTS
[es-nodes]$ cd /usr/src
[es-nodes]$ wget http://download.oracle.com/otn-pub/java/jdk/8u92-b14/jre-8u92-linux-x64.tar.gz
[es-nodes]$ tar zxfv jre-8u92-linux-x64.tar.gz
[es-nodes]$ mkdir /usr/lib/jvm/ && mv jre1.8.0_92/ /usr/lib/jvm/
[es-nodes]$ update-alternatives --install /usr/bin/java java /usr/lib/jvm/jre1.8.0_92/bin/java 2000
[es-nodes]$ export JAVA_HOME=/usr/lib/jvm/jre1.8.0_92/
[es-nodes]$ echo "export JAVA_HOME=/usr/lib/jvm/jre1.8.0_92/" > /etc/profile.d/oraclejdk.sh
[es-nodes]$ java -version
java version "1.8.0_92"
Java(TM) SE Runtime Environment (build 1.8.0_92-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.92-b14, mixed mode)

With Java installed, download and install Elasticsearch on all ES nodes:

[es-nodes]$ wget https://download.elastic.co/elasticsearch/release/org/elasticsearch/distribution/deb/elasticsearch/2.3.3/elasticsearch-2.3.3.deb
[es-nodes]$ dpkg -i elasticsearch-2.3.3.deb

Next, configure the different types of ES nodes, starting with the three masters:

[es-master-n01,02,03]$ cat /etc/elasticsearch/elasticsearch.yml
cluster.name: test-cluster   # must be the same on all ES nodes in the cluster
node.name: es-master-n01     # replace with n02 and n03 on the other masters
node.data: false
node.master: true            # this is what defines the node as a master
network.host: 10.176.66.106  # replace with the IPs of n02 and n03
discovery.zen.ping.unicast.hosts: ["10.176.66.106", "10.176.66.108", "10.176.66.113"] # the IPs of all ES masters in the cluster
discovery.zen.minimum_master_nodes: 2 # quorum: (master-eligible nodes / 2) + 1

The two data nodes:

[es-data-n01,02]$ cat /etc/elasticsearch/elasticsearch.yml
cluster.name: test-cluster
node.name: es-data-n01
node.data: true
node.master: false
network.host: 10.176.66.73
discovery.zen.ping.unicast.hosts: ["10.176.66.106", "10.176.66.108", "10.176.66.113"]
discovery.zen.minimum_master_nodes: 2

And finally the two client nodes (note that the main difference between the node types is the node.data and node.master stanzas):

[es-client-n01,02]$ cat /etc/elasticsearch/elasticsearch.yml
cluster.name: test-cluster
node.name: es-client-n01
node.data: false
node.master: false
network.host: 10.176.6.154
discovery.zen.ping.unicast.hosts: ["10.176.66.106", "10.176.66.108", "10.176.66.113"]
discovery.zen.minimum_master_nodes: 2

On Ubuntu, Elasticsearch ships with configurable defaults in /etc/default/elasticsearch. Let's set the heap size to 2GB and the maximum number of open file descriptors on all nodes in the cluster:

[es-nodes]$ cat /etc/default/elasticsearch
ES_HEAP_SIZE=2g
ES_STARTUP_SLEEP_TIME=5
MAX_OPEN_FILES=65535
#ES_HOME=/usr/share/elasticsearch
#CONF_DIR=/etc/elasticsearch
#DATA_DIR=/var/lib/elasticsearch
#LOG_DIR=/var/log/elasticsearch
#PID_DIR=/var/run/elasticsearch
#ES_HEAP_NEWSIZE=
#ES_DIRECT_SIZE=
#ES_JAVA_OPTS=
#ES_RESTART_ON_UPGRADE=true
#ES_GC_LOG_FILE=/var/log/elasticsearch/gc.log
#ES_USER=elasticsearch
#ES_GROUP=elasticsearch
#MAX_LOCKED_MEMORY=unlimited
#MAX_MAP_COUNT=262144
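The 2GB figure is just what fits these small test nodes. A commonly cited rule of thumb is to give Elasticsearch about half of the machine's RAM while staying below the compressed-oops threshold; the exact cap below is an assumption for illustration, not an official limit. A rough sketch of that calculation on a Linux host:

```shell
# Sketch: derive an ES_HEAP_SIZE suggestion from total RAM.
# Rule of thumb: half of RAM, capped below ~32 GB so the JVM can
# keep using compressed ordinary object pointers.
TOTAL_KB=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
HEAP_MB=$(( TOTAL_KB / 1024 / 2 ))
CAP_MB=31744   # ~31 GB, a conservative cap (assumption)
if [ "$HEAP_MB" -gt "$CAP_MB" ]; then HEAP_MB=$CAP_MB; fi
echo "ES_HEAP_SIZE=${HEAP_MB}m"
```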

Start all the master, data and client nodes and watch the cluster form and elect a leading master:

[es-master-n01]$ /etc/init.d/elasticsearch start
 * Starting Elasticsearch Server                                 [ OK ]
[es-master-n01]$ cat /var/log/elasticsearch/test-cluster.log
[2016-05-25 15:11:41,772][INFO ][node ] [es-master-n01] version[2.3.3], pid[6048], build[218bdf1/2016-05-17T15:40:04Z]
[2016-05-25 15:11:41,772][INFO ][node ] [es-master-n01] initializing ...
[2016-05-25 15:11:42,478][INFO ][plugins ] [es-master-n01] modules [reindex, lang-expression, lang-groovy], plugins [], sites []
[2016-05-25 15:11:42,504][INFO ][env ] [es-master-n01] using [1] data paths, mounts [[/ (/dev/xvda1)]], net usable_space [73.7gb], net total_space [78.6gb], spins? [no], types [ext4]
[2016-05-25 15:11:42,504][INFO ][env ] [es-master-n01] heap size [2007.3mb], compressed ordinary object pointers [true]
[2016-05-25 15:11:42,505][WARN ][env ] [es-master-n01] max file descriptors [65535] for elasticsearch process likely too low, consider increasing to at least [65536]
[2016-05-25 15:11:44,695][INFO ][node ] [es-master-n01] initialized
[2016-05-25 15:11:44,695][INFO ][node ] [es-master-n01] starting ...
[2016-05-25 15:11:44,885][INFO ][transport ] [es-master-n01] publish_address {10.176.66.106:9300}, bound_addresses {10.176.66.106:9300}
[2016-05-25 15:11:44,891][INFO ][discovery ] [es-master-n01] test-cluster/eaf0BN5vR9eostNn0UtmJw
[2016-05-25 15:11:48,097][INFO ][cluster.service ] [es-master-n01] detected_master {es-master-n02}{DYCjY5H_QguQEhVlo74tVw}{10.176.66.108}{10.176.66.108:9300}{data=false, master=true}, added {{es-data-n02}{2YUeN6r3TNaHikfE3iNp4g}{10.176.66.101}{10.176.66.101:9300}{master=false},{es-client-n02}{RCIgWbNBTiSXfDgSDBX8qg}{10.176.6.154}{10.176.6.154:9300}{data=false, master=false},{es-master-n02}{DYCjY5H_QguQEhVlo74tVw}{10.176.66.108}{10.176.66.108:9300}{data=false, master=true},{es-client-n01}{GpYXjp4MRums-U9wqmYDLg}{10.176.3.48}{10.176.3.48:9300}{data=false, master=false},{es-data-n01}{LaqNrBQyR4Khb97h_wTBCA}{10.176.66.73}{10.176.66.73:9300}{master=false},{es-master-n03}{Tn6ZgIK0TBSErRgFelGo2g}{10.176.66.113}{10.176.66.113:9300}{data=false, master=true},}, reason: zen-disco-receive(from master [{es-master-n02}{DYCjY5H_QguQEhVlo74tVw}{10.176.66.108}{10.176.66.108:9300}{data=false, master=true}])
[2016-05-25 15:11:48,230][INFO ][http ] [es-master-n01] publish_address {10.176.66.106:9200}, bound_addresses {10.176.66.106:9200}
[2016-05-25 15:11:48,230][INFO ][node ] [es-master-n01] started

Fail the current master and observe the election of a new one:

[2016-05-24 19:27:27,838][INFO ][discovery.zen ] [es-master-n01] master_left [{es-master-n02}{RSxbeHM6QyqDcHJ0edqSqw}{10.176.66.108}{10.176.66.108:9300}{data=false, master=true}], reason [shut_down]
[2016-05-24 19:27:27,839][WARN ][discovery.zen ] [es-master-n01] master left (reason = shut_down), current nodes: {{es-master-n03}{Tn6ZgIK0TBSErRgFelGo2g}{10.176.66.113}{10.176.66.113:9300}{data=false, master=true},{es-client-n02}{RCIgWbNBTiSXfDgSDBX8qg}{10.176.6.154}{10.176.6.154:9300}{data=false, master=false},{es-client-n01}{GpYXjp4MRums-U9wqmYDLg}{10.176.3.48}{10.176.3.48:9300}{data=false, master=false},{es-master-n01}{n_Wun5GLSeeLcpciPOgHkw}{10.176.66.106}{10.176.66.106:9300}{data=false, master=true},}
[2016-05-24 19:27:27,841][INFO ][cluster.service ] [es-master-n01] removed {{es-master-n02}{RSxbeHM6QyqDcHJ0edqSqw}{10.176.66.108}{10.176.66.108:9300}{data=false, master=true},}, reason: zen-disco-master_failed ({es-master-n02}{RSxbeHM6QyqDcHJ0edqSqw}{10.176.66.108}{10.176.66.108:9300}{data=false, master=true})
[2016-05-24 19:27:35,599][INFO ][cluster.service ] [es-master-n01] detected_master {es-master-n03}{Tn6ZgIK0TBSErRgFelGo2g}{10.176.66.113}{10.176.66.113:9300}{data=false, master=true}, added {{es-master-n02}{DYCjY5H_QguQEhVlo74tVw}{10.176.66.108}{10.176.66.108:9300}{data=false, master=true},}, reason: zen-disco-receive(from master [{es-master-n03}{Tn6ZgIK0TBSErRgFelGo2g}{10.176.66.113}{10.176.66.113:9300}{data=false, master=true}])
[2016-05-24 19:38:19,106][INFO ][cluster.service ] [es-master-n01] added {{es-data-n01}{LaqNrBQyR4Khb97h_wTBCA}{10.176.66.73}{10.176.66.73:9300}{master=false},}, reason: zen-disco-receive(from master [{es-master-n03}{Tn6ZgIK0TBSErRgFelGo2g}{10.176.66.113}{10.176.66.113:9300}{data=false, master=true}])
[2016-05-24 19:43:24,792][INFO ][cluster.service ] [es-master-n01] added {{es-data-n02}{2YUeN6r3TNaHikfE3iNp4g}{10.176.66.101}{10.176.66.101:9300}{master=false},}, reason: zen-disco-receive(from master [{es-master-n03}{Tn6ZgIK0TBSErRgFelGo2g}{10.176.66.113}{10.176.66.113:9300}{data=false, master=true}])

Elasticsearch exposes a great API for checking the cluster status and gathering various metrics [2]:

[es-client-n01]$ curl -XGET 'http://10.176.3.48:9200/_cluster/health?pretty=true'
{
  "cluster_name" : "test-cluster",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 7,
  "number_of_data_nodes" : 2,
  "active_primary_shards" : 5,
  "active_shards" : 10,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
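When scripting against the health endpoint, for example in a monitoring check, it's handy to pull out just the status field. A minimal sketch using sed; health_status is a hypothetical helper, and in practice jq is the more robust choice:

```shell
# Hypothetical helper: extract "status" from a _cluster/health response
health_status() {
  sed -n 's/.*"status" *: *"\([a-z]*\)".*/\1/p' <<< "$1"
}

# Example with a canned response; a live check would feed it
# the output of: curl -s http://10.176.3.48:9200/_cluster/health
RESPONSE='{ "cluster_name" : "test-cluster", "status" : "green", "timed_out" : false }'
health_status "$RESPONSE"   # prints: green
```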

[es-client-n01]$ curl -X GET 'http://10.176.3.48:9200/_cluster/stats?human&pretty'
{
  "timestamp" : 1464127115210,
  "cluster_name" : "test-cluster",
  "status" : "green",
  "indices" : {
    "count" : 1,
    "shards" : {
      "total" : 10,
      "primaries" : 5,
      "replication" : 1.0,
      "index" : {
        "shards" : { "min" : 10, "max" : 10, "avg" : 10.0 },
        "primaries" : { "min" : 5, "max" : 5, "avg" : 5.0 },
        "replication" : { "min" : 1.0, "max" : 1.0, "avg" : 1.0 }
      }
    },
    "docs" : { "count" : 0, "deleted" : 0 },
    "store" : { "size" : "1.5kb", "size_in_bytes" : 1590, "throttle_time" : "0s", "throttle_time_in_millis" : 0 },
    "fielddata" : { "memory_size" : "0b", "memory_size_in_bytes" : 0, "evictions" : 0 },
    "query_cache" : { "memory_size" : "0b", "memory_size_in_bytes" : 0, "total_count" : 0, "hit_count" : 0, "miss_count" : 0, "cache_size" : 0, "cache_count" : 0, "evictions" : 0 },
    "completion" : { "size" : "0b", "size_in_bytes" : 0 },
    "segments" : { "count" : 0, "memory" : "0b", "memory_in_bytes" : 0, "terms_memory" : "0b", "terms_memory_in_bytes" : 0, "stored_fields_memory" : "0b", "stored_fields_memory_in_bytes" : 0, "term_vectors_memory" : "0b", "term_vectors_memory_in_bytes" : 0, "norms_memory" : "0b", "norms_memory_in_bytes" : 0, "doc_values_memory" : "0b", "doc_values_memory_in_bytes" : 0, "index_writer_memory" : "0b", "index_writer_memory_in_bytes" : 0, "index_writer_max_memory" : "4.8mb", "index_writer_max_memory_in_bytes" : 5120000, "version_map_memory" : "0b", "version_map_memory_in_bytes" : 0, "fixed_bit_set" : "0b", "fixed_bit_set_memory_in_bytes" : 0 },
    "percolate" : { "total" : 0, "time" : "0s", "time_in_millis" : 0, "current" : 0, "memory_size_in_bytes" : -1, "memory_size" : "-1b", "queries" : 0 }
  },
  "nodes" : {
    "count" : { "total" : 7, "master_only" : 3, "data_only" : 2, "master_data" : 0, "client" : 0 },
    "versions" : [ "2.3.3" ],
    "os" : {
      "available_processors" : 14,
      "allocated_processors" : 14,
      "mem" : { "total" : "6.1gb", "total_in_bytes" : 6655361024 },
      "names" : [ { "name" : "Linux", "count" : 7 } ]
    },
    "process" : {
      "cpu" : { "percent" : 0 },
      "open_file_descriptors" : { "min" : 264, "max" : 291, "avg" : 271 }
    },
    "jvm" : {
      "max_uptime" : "2.6h",
      "max_uptime_in_millis" : 9637801,
      "versions" : [ { "version" : "1.8.0_92", "vm_name" : "Java HotSpot(TM) 64-Bit Server VM", "vm_version" : "25.92-b14", "vm_vendor" : "Oracle Corporation", "count" : 7 } ],
      "mem" : { "heap_used" : "623.1mb", "heap_used_in_bytes" : 653377880, "heap_max" : "6.8gb", "heap_max_in_bytes" : 7394164736 },
      "threads" : 200
    },
    "fs" : { "total" : "393gb", "total_in_bytes" : 422067834880, "free" : "385.7gb", "free_in_bytes" : 414192263168, "available" : "368.6gb", "available_in_bytes" : 395856375808 },
    "plugins" : [ ]
  }
}

[es-client-n01]$ curl -X GET 'http://10.176.3.48:9200/_cat/health?v'
epoch      timestamp cluster      status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
1464127244 22:00:44  test-cluster green           7         2     10   5    0    0        0             0                  -                100.0%

[es-client-n01]$ curl -X GET 'http://10.176.3.48:9200/_cat/master?v'
id                     host          ip            node
Tn6ZgIK0TBSErRgFelGo2g 10.176.66.113 10.176.66.113 es-master-n03

[es-client-n01]$ curl -X GET 'http://10.176.3.48:9200/_cat/nodeattrs?v'
node          host          ip            attr   value
es-client-n01 10.176.3.48   10.176.3.48   data   false
es-client-n01 10.176.3.48   10.176.3.48   master false
es-client-n02 10.176.6.154  10.176.6.154  data   false
es-client-n02 10.176.6.154  10.176.6.154  master false
es-data-n01   10.176.66.73  10.176.66.73  master false
es-master-n01 10.176.66.106 10.176.66.106 data   false
es-master-n01 10.176.66.106 10.176.66.106 master true
es-master-n02 10.176.66.108 10.176.66.108 data   false
es-master-n02 10.176.66.108 10.176.66.108 master true
es-data-n02   10.176.66.101 10.176.66.101 master false
es-master-n03 10.176.66.113 10.176.66.113 data   false
es-master-n03 10.176.66.113 10.176.66.113 master true

And finally, to create and retrieve an index:

[es-client-n01]$ curl -X POST '10.176.3.48:9200/test_index/'
{"acknowledged":true}
[es-client-n01]$ curl -X GET '10.176.3.48:9200/test_index/' | python -mjson.tool
{
    "test_index": {
        "aliases": {},
        "mappings": {},
        "settings": {
            "index": {
                "creation_date": "1464126352243",
                "number_of_replicas": "1",
                "number_of_shards": "5",
                "uuid": "n_i3T0i4TLyjfkJspd1ukA",
                "version": {
                    "created": "2030399"
                }
            }
        },
        "warmers": {}
    }
}
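Index names are restricted: they must be lowercase and avoid a handful of special characters. A small client-side sanity check can catch bad names before the POST; valid_index_name is a hypothetical helper, and its pattern is a simplification of the real rules:

```shell
# Hypothetical helper: rough client-side check for an ES index name
# (simplified: lowercase letters, digits, '_', '-', '.'; no leading '_')
valid_index_name() {
  case "$1" in
    _*|''|*[!a-z0-9_.-]*) return 1 ;;
    *) return 0 ;;
  esac
}

valid_index_name test_index && echo "ok"          # prints: ok
valid_index_name "Test Index" || echo "invalid"   # prints: invalid
```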

We can connect directly to the two client nodes to perform operations, or front them with a load balancer:

[es-lb-n01]$ apt-get update && apt-get install -y nginx
[es-lb-n01]$ cat /etc/nginx/sites-available/elasticsearch_lb
upstream elasticsearch_client_nodes {
    least_conn;
    server 10.176.3.48:9200;
    server 10.176.6.154:9200;
}

server {
    listen 9292;
    server_name es-lb-n01.example.net;

    access_log /var/log/nginx/elasticsearch_lb/access.log;
    error_log /var/log/nginx/elasticsearch_lb/error.log;

    location / {
        proxy_pass http://elasticsearch_client_nodes;
        proxy_http_version 1.1;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
    }
}

[es-lb-n01]$ ln -s /etc/nginx/sites-available/elasticsearch_lb /etc/nginx/sites-enabled/elasticsearch_lb
[es-lb-n01]$ mkdir /var/log/nginx/elasticsearch_lb/
[es-lb-n01]$ /etc/init.d/nginx restart
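One optional refinement, offered as an assumption rather than part of the original setup: adding a keepalive pool to the upstream block lets Nginx reuse connections to the client nodes instead of opening a new one per request. The config above already sets proxy_http_version 1.1, which upstream keepalive requires, along with a cleared Connection header:

```
upstream elasticsearch_client_nodes {
    least_conn;
    server 10.176.3.48:9200;
    server 10.176.6.154:9200;
    keepalive 15;   # pool of idle upstream connections kept open per worker
}

# and inside the location / block:
#     proxy_set_header Connection "";
```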

To test, connect to the LB:

[workstation]$ curl http://es-lb-n01.example.net:9292/
{
  "name" : "es-client-n01",
  "cluster_name" : "test-cluster",
  "version" : {
    "number" : "2.3.3",
    "build_hash" : "218bdf10790eef486ff2c41a3df5cfa32dadcfde",
    "build_timestamp" : "2016-05-17T15:40:04Z",
    "build_snapshot" : false,
    "lucene_version" : "5.5.0"
  },
  "tagline" : "You Know, for Search"
}

To export an index from ES and import it into a different ES server, we can use a tool called elasticdump:

[es-client-n01]$ curl -sL https://deb.nodesource.com/setup_4.x | sudo bash -
[es-client-n01]$ apt-get install -y nodejs
[es-client-n01]$ npm install elasticdump
[es-client-n01]$ ./node_modules/elasticdump/bin/elasticdump --input=http://localhost:9200/your_index --output=/tmp/your_index.json --type=data
[es-client-n01]$ ./node_modules/elasticdump/bin/elasticdump --input=http://localhost:9200/your_index --output=/tmp/your_index_mapping.json --type=mapping

[es-new-client-n01]$ ./node_modules/elasticdump/bin/elasticdump --input=your_index.json --output=http://localhost:9200/your_index
Fri, 19 Aug 2016 15:42:26 GMT | starting dump
Fri, 19 Aug 2016 15:42:26 GMT | got 100 objects from source file (offset: 0)
...
Fri, 19 Aug 2016 15:43:17 GMT | Total Writes: 170114
Fri, 19 Aug 2016 15:43:17 GMT | dump complete
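When several indexes need exporting, the two elasticdump invocations per index can be wrapped in a small loop. A sketch; dump_index is a hypothetical helper, shown as a dry run that echoes the commands (drop the echo to actually execute them, and it assumes elasticdump is on the PATH):

```shell
# Hypothetical helper: print the mapping and data dump commands for one index
dump_index() {
  local es="$1" index="$2" out="$3"
  echo elasticdump --input="$es/$index" --output="$out/${index}_mapping.json" --type=mapping
  echo elasticdump --input="$es/$index" --output="$out/$index.json" --type=data
}

for idx in test_index logstash-2016.08.21; do
  dump_index http://localhost:9200 "$idx" /tmp
done
```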

Here are a few other useful examples:

[es-nodes]$ curl -XGET localhost:9200/_cat/shards?v
logstash-2016.08.21 14 p STARTED 23923583 37.1gb 10.2.44.32  es-data-n06.prod.us-east-1.example.com
logstash-2016.08.21 14 r STARTED 23923583 37gb   10.2.79.190 es-data-n24.prod.us-east-1.example.com
logstash-2016.08.21 3  r STARTED 12041352 18.5gb 10.2.37.163 es-data-n10.prod.us-east-1.example.com
logstash-2016.08.21 3  p STARTED 12041352 18.4gb 10.2.68.222 es-data-n16.prod.us-east-1.example.com
logstash-2016.08.21 18 r STARTED 28428955 46.3gb 10.2.58.85  es-data-n23.prod.us-east-1.example.com
logstash-2016.08.21 18 p STARTED 28428955 46.2gb 10.2.26.208 es-data-n01.prod.us-east-1.example.com
logstash-2016.08.21 0  r STARTED 28879857 54.2gb 10.2.22.252 es-data-n17.prod.us-east-1.example.com
logstash-2016.08.21 0  p STARTED 28879857 53.4gb 10.2.56.170 es-data-n07.prod.us-east-1.example.com
logstash-2016.08.28 19 r STARTED 25119161 42.8gb 10.2.20.19  es-data-n21.prod.us-east-1.example.com

[es-nodes]$ curl -XGET localhost:9200/_cat/allocation?v
93 3.3tb 3.5tb 268.3gb 3.8tb 93 10.2.37.163 10.2.37.163 es-data-n10.prod.us-east-1.example.com
88 3.3tb 3.5tb 259.2gb 3.8tb 93 10.2.63.160 10.2.63.160 es-data-n19.prod.us-east-1.example.com
96 3.1tb 3.3tb 464.5gb 3.8tb 88 10.2.78.75  10.2.78.75  es-data-n08.prod.us-east-1.example.com
97 3.1tb 3.3tb 502.9gb 3.8tb 87 10.2.40.3   10.2.40.3   es-data-n18.prod.us-east-1.example.com
90 3.2tb 3.4tb 363.1gb 3.8tb 90 10.2.33.40  10.2.33.40  es-data-n02.prod.us-east-1.example.com
93 3.3tb 3.5tb 347.8gb 3.8tb 91 10.2.56.170 10.2.56.170 es-data-n07.prod.us-east-1.example.com
99 3.2tb 3.4tb 442.7gb 3.8tb 88 10.2.72.159 10.2.72.159 es-data-n04.prod.us-east-1.example.com

[es-nodes]$ curl -XGET localhost:9200/_cat/nodes?v
host        ip          heap.percent ram.percent load node.role master name
10.2.17.1   10.2.17.1    4 75 0.00 c - kibana-n01.prod.us-east-1.example.com/customer
10.2.79.190 10.2.79.190 56 99 1.25 d - es-data-n24.prod.us-east-1.example.com
10.2.44.32  10.2.44.32  53 98 0.76 d - es-data-n06.prod.us-east-1.example.com
10.2.57.92  10.2.57.92  58 99 3.73 d - es-data-n03.prod.us-east-1.example.com
10.2.25.203 10.2.25.203 47 64 0.01 - - es-client-n03.prod.us-east-1.example.com
10.2.63.147 10.2.63.147 62 99 0.83 d - es-data-n11.prod.us-east-1.example.com
10.2.58.85  10.2.58.85  71 94 0.42 d - es-data-n23.prod.us-east-1.example.com
10.2.18.31  10.2.18.31  21 32 2.31 - - logstash-n01.prod.us-east-1.example.com
10.2.34.196 10.2.34.196 62 92 0.28 d - es-data-n22.prod.us-east-1.example.com
10.2.48.85  10.2.48.85   1 94 1.20 d - es-data-n36.prod.us-east-1.example.com
10.2.23.252 10.2.23.252 15 70 0.00 - * es-master-n01.prod.us-east-1.example.com
10.2.29.56  10.2.29.56  54 99 4.51 d - es-data-n05.prod.us-east-1.example.com
10.2.23.186 10.2.23.186 67 99 1.45 d - es-data-n13.prod.us-east-1.example.com

[es-nodes]$ curl -XPUT localhost:9200/_cluster/settings?pretty -d '{"transient":{"cluster.routing.allocation.disk.watermark.low": "90%", "cluster.routing.allocation.disk.watermark.high": "95%", "cluster.info.update.interval": "1m"}}'
{
  "acknowledged" : true,
  "persistent" : { },
  "transient" : {
    "cluster" : {
      "routing" : {
        "allocation" : {
          "disk" : {
            "watermark" : { "low" : "90%", "high" : "95%" }
          }
        }
      },
      "info" : {
        "update" : { "interval" : "1m" }
      }
    }
  }
}
[es-nodes]$ curl -s -XPUT localhost:9200/_cluster/settings?pretty -d '{"transient":{"cluster.routing.allocation.cluster_concurrent_rebalance": "4"}}'
[es-nodes]$ curl -s -XPUT localhost:9200/_cluster/settings?pretty -d '{"transient":{"cluster.routing.allocation.node_concurrent_recoveries": "4"}}'
[es-nodes]$ curl -s -XPUT localhost:9200/_cluster/settings?pretty -d '{"transient":{"indices.recovery.max_bytes_per_sec": "400mb"}}'
[es-nodes]$ curl -s -XPUT localhost:9200/_cluster/settings?pretty -d '{"transient":{"indices.recovery.concurrent_streams": "6"}}'
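Since the transient-settings PUT follows the same pattern every time, a tiny wrapper can cut the boilerplate. A sketch; transient_body is a hypothetical helper that only builds the JSON body:

```shell
# Hypothetical helper: build the JSON body for one transient cluster setting
transient_body() {
  printf '{"transient":{"%s": "%s"}}' "$1" "$2"
}

transient_body indices.recovery.max_bytes_per_sec 400mb
# prints: {"transient":{"indices.recovery.max_bytes_per_sec": "400mb"}}
```

It would then be used as: curl -s -XPUT localhost:9200/_cluster/settings?pretty -d "$(transient_body indices.recovery.max_bytes_per_sec 400mb)"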

Resources:

[1]. https://www.elastic.co/guide/en/elasticsearch/reference/current/getting-started.html
[2]. https://www.elastic.co/guide/en/elasticsearch/reference/current/cat.html

