Restore index elasticsearch 1.3.4 #9455

Closed
fdelelis opened this issue Jan 28, 2015 · 34 comments
@fdelelis

Hi,
I'm seeing some curious behavior when I restore an index or the whole cluster. After restoring with the _restore API, I can't query the restored index for an unexpectedly long time ("IndexMissingException"). Then, after a while, I can.
I don't know if this is normal. Has anyone run into this issue?
Thanks.

@clintongormley
Contributor

Hi @fdelelis

You need to monitor the state of the index with the cluster health API. From http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-snapshots.html#_monitoring_snapshot_restore_progress :

The restore process piggybacks on the standard recovery mechanism of Elasticsearch. As a result, standard recovery monitoring services can be used to monitor the state of restore. When a restore operation is executed, the cluster typically goes into the red state, because the restore starts by "recovering" the primary shards of the restored indices. During this operation the primary shards become unavailable, which manifests itself in the red cluster state. Once recovery of the primary shards is complete, Elasticsearch switches to the standard replication process that creates the required number of replicas; at this moment the cluster switches to the yellow state. Once all required replicas are created, the cluster switches to the green state.
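
For example, a minimal sketch of such monitoring with curl (the index name and timeout here are illustrative):

curl -XGET 'http://localhost:9200/_cluster/health/prueba1?wait_for_status=yellow&timeout=60s'

This blocks until the primaries of the index are allocated (yellow status) or the timeout expires; the same call without an index name waits on the cluster as a whole.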

@fdelelis
Author

Hi,

I am restoring a single-node cluster with 2 MB of documents. When it finishes, the index state is open but searches fail, and I noticed that after about 5 minutes I can search the index again. The procedure is: delete index, search (fails), restore index, search (fails for up to 5 minutes). Is there anything else we might be missing?

Thanks


@clintongormley
Contributor

What does the cluster health API report during the period that the index is unavailable?

@fdelelis
Author

Hi,

Cluster health:

{
  "cluster_name": "ddoles-des-c1",
  "status": "yellow",
  "timed_out": false,
  "number_of_nodes": 1,
  "number_of_data_nodes": 1,
  "active_primary_shards": 158,
  "active_shards": 158,
  "relocating_shards": 0,
  "initializing_shards": 0,
  "unassigned_shards": 158
}

Yellow because the replica shards can't be allocated.
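
A single-node cluster can only reach green if its indices carry no replicas; a minimal sketch of dropping them via the index settings API (index name as above):

curl -XPUT 'http://localhost:9200/prueba1/_settings' -d '{
  "index": { "number_of_replicas": 0 }
}'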

Cluster state:

{
  "cluster_name": "ddoles-des-c1",
  "version": 72,
  "master_node": "7EVvKamZSXa-qkkGUDT_EQ",
  "blocks": {},
  "nodes": {
    "7EVvKamZSXa-qkkGUDT_EQ": {
      "name": "srv-vln-des1",
      "transport_address": "inet[/192.168.67.207:9300]",
      "attributes": {}
    }
  },
  "metadata": {
    "templates": {},
    "indices": {
      "prueba1": {
        "state": "open",
        "settings": {
          "index": {
            "uuid": "TQlWfo83R6O3kfZih_gmBw",
            ......................
The prueba1 index is open.

And when I search the index:

{
  "error": "IndexMissingException[[prueba1] missing]",
  "status": 404
}


@clintongormley
Contributor

@imotov any ideas here?

@clintongormley clintongormley added the :Distributed Coordination/Snapshot/Restore Anything directly related to the `_snapshot/*` APIs label Jan 28, 2015
@imotov
Contributor

imotov commented Jan 28, 2015

@fdelelis which repository are you using? Could you run the restore with the wait_for_completion=true flag, try searching right after the command returns (it might take a while), and let us know if the search still fails?
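
Per-shard restore/recovery progress can also be inspected with the cat recovery API; a sketch (index name as in this thread, output columns vary by version):

curl -XGET 'http://localhost:9200/_cat/recovery/prueba1?v'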

@imotov imotov self-assigned this Jan 28, 2015
@fdelelis
Author

Hi,

The repository type is fs. The command launched:

curl -XPOST 'http://x.x.x.x:9200/_snapshot/backup_test/snapshot_20150127-17:40/_restore?wait_for_completion=true' -d '{
  "indices": "prueba1"
}'

{"snapshot":{"snapshot":"snapshot_20150127-17:40","indices":["prueba1"],"shards":{"total":5,"failed":0,"successful":5}}}

GET /prueba1/_search

{
  "error": "IndexMissingException[[prueba1] missing]",
  "status": 404
}

In case it is helpful: when I delete an index, search (missing), create the same index again, and search again, it still fails.

Thanks.


@imotov
Contributor

imotov commented Jan 28, 2015

@fdelelis I am not sure I understand what you mean by "and search again, neither is." Does that mean the search works or that it doesn't? Could you post the complete cluster state right after the restore operation returns, and another complete cluster state once the index is searchable? You can email them to me at igor.motov@elasticsearch.com if you don't want to post them here.

@imotov
Contributor

imotov commented Jan 29, 2015

Just want to summarize what we have found so far. It looks like in @fdelelis's cluster the open index operation takes a very long time, whether it is performed directly via the open index command or indirectly as part of the restore process.

@imotov
Contributor

imotov commented Jan 29, 2015

@fdelelis sorry, I cannot reproduce this problem, and I don't see how something like this could happen based on your description. In other words, it sounds like we are missing some crucial piece of information here. Could you try reproducing this problem on a brand new cluster, write down the steps you took to reproduce it, and send those steps to us together with the log file and configuration file from your node? Since it fails on both the restore and open index operations, please use the open index operation for your reproduction, since it involves fewer moving pieces (it is basically a subset of the restore operation). Thank you!

@fdelelis
Author

fdelelis commented Feb 2, 2015

Hi Igor,

Just in case there was any error in the configuration, my configuration is:

cluster.name: des-c1
#node.rack: rack1
node.name: des1
#node.name: ${HOSTNAME}
#path.conf: /opt/elasticsearch/config
#path.data: /opt/elasticsearch/data
#path.work: /opt/elasticsearch/work
#path.logs: /opt/elasticsearch/logs
#path.plugins: /opt/elasticsearch/plugins
discovery.zen.minimum_master_nodes: 1
#discovery.zen.minimum_master_nodes: 2
network.bind_host: 0.0.0.0
network.publish_host: _eth0:ipv4_
gateway.type: local
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["des1-priv1","des2-priv1","des3-priv1"]
discovery.zen.ping.timeout: 60
bootstrap.mlockall: true
index.mapper.dynamic: false
action.auto_create_index: false
action.disable_delete_all_indices: true

I only have one node up. Do you see anything wrong?

Thanks.


@fdelelis
Author

fdelelis commented Feb 4, 2015

Hi Igor,

Sorry, I have configured another cluster on the same server. The steps are:

POST /prueba1/_close --> OK
GET /prueba1/_search --> indexmissing
POST /prueba1/_open --> OK
GET /prueba1/_search --> "IndexClosedException[[prueba1] closed]"

Can you help me?
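
One way to double-check what the master thinks the index state is would be the filtered cluster state API; a sketch (assuming the metadata filter is available on this version):

curl -XGET 'http://localhost:9200/_cluster/state/metadata/prueba1?pretty'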


@imotov
Contributor

imotov commented Feb 4, 2015

@fdelelis this is what I get when I follow the steps you described:

~/Demo› curl -O -L https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-1.3.4.tar.gz
~/Demo› tar -xzf elasticsearch-1.3.4.tar.gz 
~/Demo› cd elasticsearch-1.3.4 
~/Demo/elasticsearch-1.3.4› vi config/elasticsearch.yml 
~/Demo/elasticsearch-1.3.4› cat config/elasticsearch.yml
cluster.name: des-c1
node.name:  des1
discovery.zen.minimum_master_nodes: 1
network.bind_host: 0.0.0.0
network.publish_host: _en0:ipv4_
gateway.type: local
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["des1-priv1","des2-priv1","des3-priv1"]
discovery.zen.ping.timeout: 60
bootstrap.mlockall: true
index.mapper.dynamic: false
action.auto_create_index: false
action.disable_delete_all_indices: true
~/Demo/elasticsearch-1.3.4› bin/elasticsearch -d
~/Demo/elasticsearch-1.3.4› curl -XPUT localhost:9200/prueba1
{"acknowledged":true}
~/Demo/elasticsearch-1.3.4› curl -XPOST localhost:9200/prueba1/_close 
{"acknowledged":true}
~/Demo/elasticsearch-1.3.4› curl -XGET localhost:9200/prueba1/_search
{"error":"IndexClosedException[[prueba1] closed]","status":403}
~/Demo/elasticsearch-1.3.4› curl -XPOST localhost:9200/prueba1/_open
{"acknowledged":true}
~/Demo/elasticsearch-1.3.4› curl -XGET localhost:9200/prueba1/_search
{"took":26,"timed_out":false,"_shards":{"total":5,"successful":5,"failed":0},"hits":{"total":0,"max_score":null,"hits":[]}}

Do you have any plugins installed? Does it happen only with this index or with all indices on the server?
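
If the plugins are a suspect, they can be listed and temporarily removed with the 1.x plugin manager; a sketch (plugin names taken from this thread):

bin/plugin --list
bin/plugin --remove head
bin/plugin --remove mapper-attachments

Then restart the node and retry the close/open cycle.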

@fdelelis
Author

fdelelis commented Feb 4, 2015

Hi Igor,

I have the head plugin installed. It happens with all indices. Could it be the plugin?

Thanks

@imotov
Contributor

imotov commented Feb 4, 2015

@fdelelis does it happen when you create a brand new empty index as in my example?

@fdelelis
Author

fdelelis commented Feb 4, 2015

I always index some docs. I haven't tried with an empty index.

@imotov
Contributor

imotov commented Feb 4, 2015

Could you try an empty index?

@fdelelis
Author

fdelelis commented Feb 4, 2015 via email

Ok. When I get to the office I will try it. When I have the results I will get in touch with you. Thank you very much.

@fdelelis
Author

fdelelis commented Feb 5, 2015

Hi Igor,

I should mention that we have the mapper-attachments and head plugins installed. I created an empty index and the results were the same:

POST /prueba4
{
"settings": {
"index": {
"analysis": {
"analyzer": {
"ddol_analyzer": {
"type": "custom",
"filter": [
"lowercase"
],
"tokenizer": "whitespace"
}
}
},
"number_of_replicas": "0",
"number_of_shards": "5"
}
},
"mappings": {
"invoice": {
"dynamic": "strict",
"_all": {
"auto_boost": true
},
"_timestamp": {
"enabled": true,
"format": "dd-MM-yyyy HH:mm:ss||yyyy-MM-dd'T'HH:mm:ss.SSSZZ"
},
"_source": {
"excludes": [
"notas.*",
"attach.metadata"
]
},
"properties": {
"attach": {
"properties": {
"filename": {
"type": "string",
"index_options": "docs",
"analyzer": "ddol_analyzer",
"null_value": "",
"include_in_all": true
},
"filepath": {
"type": "string",
"index": "not_analyzed",
"null_value": "",
"include_in_all": false
},
"metadata": {
"type": "string",
"index_options": "docs",
"null_value": "",
"include_in_all": true
}
}
},
"base_imponible": {
"type": "float",
"doc_values": true,
"include_in_all": true
},
"centro": {
"properties": {
"denominacion": {
"type": "string",
"boost": 2,
"null_value": "",
"include_in_all": true
},
"direccion": {
"type": "string",
"boost": 0.1,
"null_value": "",
"include_in_all": true
},
"idcentro": {
"type": "integer",
"doc_values": true
}
}
},
"cliente": {
"properties": {
"cif": {
"type": "string",
"boost": 2,
"index_options": "docs",
"analyzer": "ddol_analyzer",
"null_value": "",
"include_in_all": true
},
"direccion": {
"type": "string",
"boost": 1.5,
"index_options": "docs",
"analyzer": "ddol_analyzer",
"null_value": "",
"include_in_all": true
},
"idcliente": {
"type": "integer",
"doc_values": true
},
"razonsocial": {
"type": "string",
"boost": 2,
"index_options": "docs",
"analyzer": "ddol_analyzer",
"fields": {
"raw": {
"type": "string",
"index": "not_analyzed"
}
},
"null_value": "",
"include_in_all": true
}
}
},
"enviada": {
"type": "boolean",
"null_value": false
},
"fecha": {
"type": "date",
"doc_values": true,
"format": "dd/MM/yyyy||yyyy-MM-dd",
"null_value": "0000-01-01",
"include_in_all": true
},
"fechaValidada": {
"type": "date",
"doc_values": true,
"format": "dd/MM/yyyy||yyyy-MM-dd",
"null_value": "0000-01-01",
"include_in_all": true
},
"fechaenvrec": {
"type": "date",
"doc_values": true,
"format": "dd/MM/yyyy||yyyy-MM-dd",
"null_value": "0000-01-01",
"include_in_all": true
},
"filename": {
"type": "string",
"boost": 2,
"index_options": "docs",
"analyzer": "ddol_analyzer",
"null_value": "",
"include_in_all": true
},
"filepath": {
"type": "string",
"boost": 2,
"index": "not_analyzed",
"norms": {
"enabled": true
},
"null_value": "",
"include_in_all": false
},
"firmada": {
"type": "boolean",
"null_value": false
},
"firmada_ok": {
"type": "boolean",
"null_value": false
},
"idempresa": {
"type": "integer",
"doc_values": true,
"null_value": -1
},
"idfactura": {
"type": "integer",
"doc_values": true,
"null_value": -1
},
"idmoneda": {
"type": "string",
"index_options": "docs",
"null_value": "",
"include_in_all": true
},
"idorganizacion": {
"type": "integer",
"doc_values": true,
"null_value": -1
},
"impuestos_repercutidos": {
"type": "float",
"doc_values": true,
"null_value": 0,
"include_in_all": true
},
"leida": {
"type": "boolean",
"null_value": false
},
"lote": {
"type": "integer",
"doc_values": true,
"null_value": -1,
"include_in_all": true
},
"notas": {
"properties": {
"contenido": {
"type": "string",
"index_options": "docs",
"null_value": "",
"include_in_all": true
},
"titulo": {
"type": "string",
"include_in_all": true
}
}
},
"numero": {
"type": "string",
"boost": 1.5,
"index_options": "docs",
"analyzer": "ddol_analyzer",
"fields": {
"raw": {
"type": "string",
"index": "not_analyzed"
}
},
"null_value": "",
"include_in_all": true
},
"peso": {
"type": "long",
"boost": 2,
"norms": {
"enabled": true
},
"null_value": -1,
"include_in_all": true
},
"proveedor": {
"properties": {
"cif": {
"type": "string",
"boost": 2,
"index_options": "docs",
"analyzer": "ddol_analyzer",
"include_in_all": true
},
"direccion": {
"type": "string",
"boost": 1.1,
"index_options": "docs",
"include_in_all": true
},
"idproveedor": {
"type": "string",
"index": "no"
},
"razonsocial": {
"type": "string",
"boost": 2,
"index_options": "docs",
"fields": {
"raw": {
"type": "string",
"index": "not_analyzed"
}
},
"include_in_all": true
}
}
},
"serie": {
"type": "string",
"boost": 1.5,
"index_options": "docs",
"analyzer": "ddol_analyzer",
"fields": {
"raw": {
"type": "string",
"index": "not_analyzed"
}
},
"null_value": "",
"include_in_all": true
},
"tieneAttachs": {
"type": "boolean",
"null_value": false
},
"tieneNotas": {
"type": "boolean",
"null_value": false
},
"tipo": {
"type": "string",
"norms": {
"enabled": false
},
"index_options": "docs",
"null_value": "",
"include_in_all": false
},
"total": {
"type": "float",
"doc_values": true,
"null_value": 0,
"include_in_all": true
},
"validada": {
"type": "boolean",
"null_value": false
}
}
}
}

}

GET /prueba4/_search:

{
"took": 1,
"timed_out": false,
"_shards": {
"total": 5,
"successful": 5,
"failed": 0
},
"hits": {
"total": 0,
"max_score": null,
"hits": []
}
}

POST /prueba4/_close:

{
"acknowledged": true
}

GET /prueba4/_search:

{
"error": "IndexClosedException[[prueba4] closed]",
"status": 403
}

POST /prueba4/_open:

{
"acknowledged": true
}

GET /prueba4/_search:

{
"error": "IndexClosedException[[prueba4] closed]",
"status": 403
}

I appreciate the help and await your response.

Thanks


@imotov
Contributor

imotov commented Feb 5, 2015

@fdelelis I still cannot reproduce it. I have a few questions though.

  1. How long does it take for your single-node cluster to get back to the yellow state when you restart it?
  2. After the 5 minutes, when the index is finally searchable, how does the node perform in general?
  3. Does it take a while for an index to become searchable when you first create it?

It would also really help us understand the issue if you could do the following experiment: create a new index, close it, open it, and while it is still not searchable execute the following commands and send me the output together with the log file from this node.

curl "http://localhost:9200/_nodes"
curl "http://localhost:9200/_nodes/stats"
curl "http://localhost:9200/_nodes/hot_threads?threads=100"

Thank you!

@fdelelis
Author

fdelelis commented Feb 5, 2015

Hi Igor,

  1. The restart is immediate.
  2. The node performs normally. I'm not sure what you mean; on this node I can search, create indices, etc. quite fast.
  3. When I create an index, I can search it instantly.

I executed the following steps:

  • POST /prueba2
  • POST /prueba2/_close
  • POST /prueba2/_open

And ran the commands:

curl "http://localhost:9200/_nodes http://localhost:9200/_nodes"

{
"cluster_name": "ddoles-prueba",
"nodes": {
"eQ-Jqzy2TDO5OLMXQswMXg": {
"name": "srv-vln-des1",
"transport_address": "inet[/192.168.67.207:9300]",
"host": "srv-vln-des1.datadec-online.com",
"ip": "192.168.62.205",
"version": "1.3.4",
"build": "a70f3cc",
"http_address": "inet[/192.168.67.207:9200]",
"settings": {
"index": {
"mapper": {
"dynamic": "false"
}
},
"node": {
"name": "srv-vln-des1"
},
"bootstrap": {
"mlockall": "true"
},
"gateway": {
"type": "local"
},
"name": "srv-vln-des1",
"pidfile": "/var/run/elasticsearch/elasticsearch.pid",
"path": {
"data": "/opt/elasticsearch/data",
"work": "/tmp/elasticsearch",
"home": "/usr/share/elasticsearch",
"conf": "/etc/elasticsearch",
"logs": "/var/log/elasticsearch"
},
"action": {
"auto_create_index": "false",
"disable_delete_all_indices": "true"
},
"cluster": {
"name": "ddoles-prueba"
},
"config": "/etc/elasticsearch/elasticsearch.yml",
"discovery": {
"zen": {
"minimum_master_nodes": "1",
"ping": {
"multicast": {
"enabled": "false"
},
"timeout": "60"
}
}
},
"network": {
"bind_host": "0.0.0.0",
"publish_host": "eth0:ipv4"
}
},
"os": {
"refresh_interval_in_millis": 1000,
"available_processors": 2,
"cpu": {
"vendor": "Intel",
"model": "Westmere E56xx/L56xx/X56xx (Nehalem-C)",
"mhz": 2799,
"total_cores": 2,
"total_sockets": 2,
"cores_per_socket": 1,
"cache_size_in_bytes": 4096
},
"mem": {
"total_in_bytes": 1930407936
},
"swap": {
"total_in_bytes": 3232755712
}
},
"process": {
"refresh_interval_in_millis": 1000,
"id": 2240,
"max_file_descriptors": 65535,
"mlockall": false
},
"jvm": {
"pid": 2240,
"version": "1.7.0_71",
"vm_name": "OpenJDK 64-Bit Server VM",
"vm_version": "24.65-b04",
"vm_vendor": "Oracle Corporation",
"start_time_in_millis": 1423147524686,
"mem": {
"heap_init_in_bytes": 268435456,
"heap_max_in_bytes": 1056309248,
"non_heap_init_in_bytes": 24313856,
"non_heap_max_in_bytes": 224395264,
"direct_max_in_bytes": 1056309248
},
"gc_collectors": [
"ParNew",
"ConcurrentMarkSweep"
],
"memory_pools": [
"Code Cache",
"Par Eden Space",
"Par Survivor Space",
"CMS Old Gen",
"CMS Perm Gen"
]
},
"thread_pool": {
"generic": {
"type": "cached",
"keep_alive": "30s",
"queue_size": -1
},
"index": {
"type": "fixed",
"min": 2,
"max": 2,
"queue_size": "200"
},
"snapshot_data": {
"type": "scaling",
"min": 1,
"max": 5,
"keep_alive": "5m",
"queue_size": -1
},
"bench": {
"type": "scaling",
"min": 1,
"max": 1,
"keep_alive": "5m",
"queue_size": -1
},
"get": {
"type": "fixed",
"min": 2,
"max": 2,
"queue_size": "1k"
},
"snapshot": {
"type": "scaling",
"min": 1,
"max": 1,
"keep_alive": "5m",
"queue_size": -1
},
"merge": {
"type": "scaling",
"min": 1,
"max": 1,
"keep_alive": "5m",
"queue_size": -1
},
"suggest": {
"type": "fixed",
"min": 2,
"max": 2,
"queue_size": "1k"
},
"bulk": {
"type": "fixed",
"min": 2,
"max": 2,
"queue_size": "50"
},
"optimize": {
"type": "fixed",
"min": 1,
"max": 1,
"queue_size": -1
},
"warmer": {
"type": "scaling",
"min": 1,
"max": 1,
"keep_alive": "5m",
"queue_size": -1
},
"flush": {
"type": "scaling",
"min": 1,
"max": 1,
"keep_alive": "5m",
"queue_size": -1
},
"search": {
"type": "fixed",
"min": 6,
"max": 6,
"queue_size": "1k"
},
"percolate": {
"type": "fixed",
"min": 2,
"max": 2,
"queue_size": "1k"
},
"management": {
"type": "scaling",
"min": 1,
"max": 5,
"keep_alive": "5m",
"queue_size": -1
},
"refresh": {
"type": "scaling",
"min": 1,
"max": 1,
"keep_alive": "5m",
"queue_size": -1
}
},
"network": {
"refresh_interval_in_millis": 5000,
"primary_interface": {
"address": "192.168.64.205",
"name": "eth0",
"mac_address": "52:54:FC:11:09:88"
}
},
"transport": {
"bound_address": "inet[/0.0.0.0:9300]",
"publish_address": "inet[/192.168.67.207:9300]"
},
"http": {
"bound_address": "inet[/0.0.0.0:9200]",
"publish_address": "inet[/192.168.67.207:9200]",
"max_content_length_in_bytes": 104857600
},
"plugins": [
{
"name": "mapper-attachments",
"version": "2.3.2",
"description": "Adds the attachment type allowing to parse
difference attachment formats",
"jvm": true,
"site": false
},
{
"name": "head",
"version": "NA",
"description": "No description found.",
"url": "/_plugin/head/",
"jvm": false,
"site": true
}
]
}
}
}

curl "http://localhost:9200/_nodes/stats http://localhost:9200/_nodes/stats"

{
"cluster_name": "ddoles-prueba",
"nodes": {
"eQ-Jqzy2TDO5OLMXQswMXg": {
"timestamp": 1423148142461,
"name": "srv-vln-des1",
"transport_address": "inet[/192.168.67.207:9300]",
"host": "srv-vln-des1.datadec-online.com",
"ip": [
"inet[/192.168.67.207:9300]",
"NONE"
],
"indices": {
"docs": {
"count": 3,
"deleted": 0
},
"store": {
"size_in_bytes": 39433,
"throttle_time_in_millis": 0
},
"indexing": {
"index_total": 0,
"index_time_in_millis": 0,
"index_current": 0,
"delete_total": 0,
"delete_time_in_millis": 0,
"delete_current": 0
},
"get": {
"total": 0,
"time_in_millis": 0,
"exists_total": 0,
"exists_time_in_millis": 0,
"missing_total": 0,
"missing_time_in_millis": 0,
"current": 0
},
"search": {
"open_contexts": 0,
"query_total": 5,
"query_time_in_millis": 121,
"query_current": 0,
"fetch_total": 0,
"fetch_time_in_millis": 0,
"fetch_current": 0
},
"merges": {
"current": 0,
"current_docs": 0,
"current_size_in_bytes": 0,
"total": 0,
"total_time_in_millis": 0,
"total_docs": 0,
"total_size_in_bytes": 0
},
"refresh": {
"total": 15,
"total_time_in_millis": 0
},
"flush": {
"total": 0,
"total_time_in_millis": 0
},
"warmer": {
"current": 0,
"total": 20,
"total_time_in_millis": 136
},
"filter_cache": {
"memory_size_in_bytes": 0,
"evictions": 0
},
"id_cache": {
"memory_size_in_bytes": 0
},
"fielddata": {
"memory_size_in_bytes": 0,
"evictions": 0
},
"percolate": {
"total": 0,
"time_in_millis": 0,
"current": 0,
"memory_size_in_bytes": -1,
"memory_size": "-1b",
"queries": 0
},
"completion": {
"size_in_bytes": 0
},
"segments": {
"count": 3,
"memory_in_bytes": 122280,
"index_writer_memory_in_bytes": 0,
"version_map_memory_in_bytes": 0
},
"translog": {
"operations": 0,
"size_in_bytes": 0
},
"suggest": {
"total": 0,
"time_in_millis": 0,
"current": 0
}
},
"os": {
"timestamp": 1423148142603,
"uptime_in_millis": 7595801,
"load_average": [
1.2,
1.02,
1.04
],
"cpu": {
"sys": 9,
"user": 15,
"idle": 69,
"usage": 24,
"stolen": 0
},
"mem": {
"free_in_bytes": 717250560,
"used_in_bytes": 1213157376,
"free_percent": 48,
"used_percent": 51,
"actual_free_in_bytes": 938508288,
"actual_used_in_bytes": 991899648
},
"swap": {
"used_in_bytes": 712716288,
"free_in_bytes": 2520039424
}
},
"process": {
"timestamp": 1423148142664,
"open_file_descriptors": 172,
"cpu": {
"percent": 0,
"sys_in_millis": 780,
"user_in_millis": 7880,
"total_in_millis": 8660
},
"mem": {
"resident_in_bytes": 223014912,
"share_in_bytes": 5459968,
"total_virtual_in_bytes": 2587729920
}
},
"jvm": {
"timestamp": 1423148142664,
"uptime_in_millis": 617978,
"mem": {
"heap_used_in_bytes": 131464672,
"heap_used_percent": 12,
"heap_committed_in_bytes": 251002880,
"heap_max_in_bytes": 1056309248,
"non_heap_used_in_bytes": 36657144,
"non_heap_committed_in_bytes": 38076416,
"pools": {
"young": {
"used_in_bytes": 105074712,
"max_in_bytes": 139591680,
"peak_used_in_bytes": 139591680,
"peak_max_in_bytes": 139591680
},
"survivor": {
"used_in_bytes": 17432568,
"max_in_bytes": 17432576,
"peak_used_in_bytes": 17432568,
"peak_max_in_bytes": 17432576
},
"old": {
"used_in_bytes": 8957392,
"max_in_bytes": 899284992,
"peak_used_in_bytes": 8957392,
"peak_max_in_bytes": 899284992
}
}
},
"threads": {
"count": 37,
"peak_count": 40
},
"gc": {
"collectors": {
"young": {
"collection_count": 1,
"collection_time_in_millis": 71
},
"old": {
"collection_count": 0,
"collection_time_in_millis": 0
}
}
},
"buffer_pools": {
"direct": {
"count": 32,
"used_in_bytes": 2905930,
"total_capacity_in_bytes": 2905930
},
"mapped": {
"count": 0,
"used_in_bytes": 0,
"total_capacity_in_bytes": 0
}
}
},
"thread_pool": {
"generic": {
"threads": 1,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 5,
"completed": 125
},
"index": {
"threads": 0,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 0,
"completed": 0
},
"snapshot_data": {
"threads": 0,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 0,
"completed": 0
},
"bench": {
"threads": 0,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 0,
"completed": 0
},
"get": {
"threads": 0,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 0,
"completed": 0
},
"snapshot": {
"threads": 0,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 0,
"completed": 0
},
"merge": {
"threads": 0,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 0,
"completed": 0
},
"suggest": {
"threads": 0,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 0,
"completed": 0
},
"bulk": {
"threads": 0,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 0,
"completed": 0
},
"optimize": {
"threads": 0,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 0,
"completed": 0
},
"warmer": {
"threads": 1,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 1,
"completed": 15
},
"flush": {
"threads": 0,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 0,
"completed": 0
},
"search": {
"threads": 6,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 6,
"completed": 6
},
"percolate": {
"threads": 0,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 0,
"completed": 0
},
"management": {
"threads": 2,
"queue": 0,
"active": 1,
"rejected": 0,
"largest": 2,
"completed": 42
},
"refresh": {
"threads": 0,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 0,
"completed": 0
}
},
"network": {
"tcp": {
"active_opens": 5907143,
"passive_opens": 2812328,
"curr_estab": 66,
"in_segs": 732894264,
"out_segs": 720415851,
"retrans_segs": 73626,
"estab_resets": 345234,
"attempt_fails": 4854077,
"in_errs": 6,
"out_rsts": 2252160
}
},
"fs": {
"timestamp": 1423148142666,
"total": {
"total_in_bytes": 13411287040,
"free_in_bytes": 7898660864,
"available_in_bytes": 7898660864,
"disk_reads": 1341845,
"disk_writes": 2659943,
"disk_io_op": 4001788,
"disk_read_size_in_bytes": 140784985600,
"disk_write_size_in_bytes": 83741012480,
"disk_io_size_in_bytes": 224525998080,
"disk_queue": "0",
"disk_service_time": "2.2"
},
"data": [
{
"path": "/opt/elasticsearch/data/ddoles-prueba/nodes/0",
"mount": "/",
"dev": "/dev/vda2",
"total_in_bytes": 13411287040,
"free_in_bytes": 7898660864,
"available_in_bytes": 7898660864,
"disk_reads": 1341845,
"disk_writes": 2659943,
"disk_io_op": 4001788,
"disk_read_size_in_bytes": 140784985600,
"disk_write_size_in_bytes": 83741012480,
"disk_io_size_in_bytes": 224525998080,
"disk_queue": "0",
"disk_service_time": "2.2"
}
]
},
"transport": {
"server_open": 13,
"rx_count": 0,
"rx_size_in_bytes": 0,
"tx_count": 0,
"tx_size_in_bytes": 0
},
"http": {
"current_open": 2,
"total_opened": 15
},
"fielddata_breaker": {
"maximum_size_in_bytes": 633785548,
"maximum_size": "604.4mb",
"estimated_size_in_bytes": 0,
"estimated_size": "0b",
"overhead": 1.03,
"tripped": 0
}
}
}
}

curl "http://localhost:9200/_nodes/hot_threads?threads=100
http://localhost:9200/_nodes/hot_threads?threads=100"

::: [srv-vln-des1][eQ-Jqzy2TDO5OLMXQswMXg][srv-vln-des1.datadec-online.com][inet[/192.168.67.207:9300]]

0.2% (1ms out of 500ms) cpu usage by thread

'elasticsearch[srv-vln-des1][scheduler][T#1]'
10/10 snapshots sharing following 9 elements
sun.misc.Unsafe.park(Native Method)

java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)

java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)

java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)

java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)

java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)

java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)

java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
java.lang.Thread.run(Thread.java:745)

0.1% (354.5micros out of 500ms) cpu usage by thread

'elasticsearch[srv-vln-des1][transport_client_timer][T#1]{Hashed wheel
timer #1}'
10/10 snapshots sharing following 5 elements
java.lang.Thread.sleep(Native Method)

org.elasticsearch.common.netty.util.HashedWheelTimer$Worker.waitForNextTick(HashedWheelTimer.java:483)

org.elasticsearch.common.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:392)

org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
java.lang.Thread.run(Thread.java:745)

0.0% (113.8micros out of 500ms) cpu usage by thread

'elasticsearch[srv-vln-des1][[timer]]'
10/10 snapshots sharing following 2 elements
java.lang.Thread.sleep(Native Method)

org.elasticsearch.threadpool.ThreadPool$EstimatedTimeThread.run(ThreadPool.java:530)

0.0% (62.8micros out of 500ms) cpu usage by thread

'elasticsearch[srv-vln-des1][transport_client_boss][T#1]{New I/O boss #5}'
10/10 snapshots sharing following 14 elements
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)

org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:415)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)

org.elasticsearch.common.netty.channel.socket.nio.NioClientBoss.run(NioClientBoss.java:42)

org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)

org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)

java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)

java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
java.lang.Thread.run(Thread.java:745)

0.0% (43.2micros out of 500ms) cpu usage by thread

'elasticsearch[srv-vln-des1][transport_client_worker][T#4]{New I/O worker
#4}'
10/10 snapshots sharing following 15 elements
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)

org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:415)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)

org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)

org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)

org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)

java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)

java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
java.lang.Thread.run(Thread.java:745)

0.0% (37.1micros out of 500ms) cpu usage by thread

'elasticsearch[srv-vln-des1][transport_client_worker][T#2]{New I/O worker
#2}'
10/10 snapshots sharing following 15 elements
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)

org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:415)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)

org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)

org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)

org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)

java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)

java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
java.lang.Thread.run(Thread.java:745)

0.0% (35.6micros out of 500ms) cpu usage by thread

'elasticsearch[srv-vln-des1][http_server_worker][T#3]{New I/O worker #13}'
10/10 snapshots sharing following 15 elements
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)

org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:415)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)

org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)

org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)

org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)

java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)

java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
java.lang.Thread.run(Thread.java:745)

0.0% (34.4micros out of 500ms) cpu usage by thread

'elasticsearch[srv-vln-des1][http_server_worker][T#4]{New I/O worker #14}'
10/10 snapshots sharing following 15 elements
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)

org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:415)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)

org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)

org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)

org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)

java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)

java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
java.lang.Thread.run(Thread.java:745)

0.0% (30.7micros out of 500ms) cpu usage by thread

'elasticsearch[srv-vln-des1][http_server_worker][T#2]{New I/O worker #12}'
10/10 snapshots sharing following 15 elements
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)

org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:415)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)

org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)

org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)

org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)

java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)

java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
java.lang.Thread.run(Thread.java:745)

0.0% (29.8micros out of 500ms) cpu usage by thread

'elasticsearch[srv-vln-des1][transport_server_worker][T#3]{New I/O worker
#8}'
10/10 snapshots sharing following 15 elements
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)

org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:415)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)

org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)

org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)

org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)

java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)

java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
java.lang.Thread.run(Thread.java:745)

0.0% (24.7micros out of 500ms) cpu usage by thread

'elasticsearch[srv-vln-des1][transport_server_worker][T#4]{New I/O worker
#9}'
10/10 snapshots sharing following 15 elements
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)

org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:415)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)

org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)

org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)

org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)

java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)

java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
java.lang.Thread.run(Thread.java:745)

0.0% (22.1micros out of 500ms) cpu usage by thread

'elasticsearch[srv-vln-des1][http_server_worker][T#1]{New I/O worker #11}'
10/10 snapshots sharing following 15 elements
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)

org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:415)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)

org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)

org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)

org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)

java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)

java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
java.lang.Thread.run(Thread.java:745)

0.0% (21.2micros out of 500ms) cpu usage by thread

'elasticsearch[srv-vln-des1][transport_client_worker][T#1]{New I/O worker
#1}'
10/10 snapshots sharing following 15 elements
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)

org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:415)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)

org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)

org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)

org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)

java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)

java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
java.lang.Thread.run(Thread.java:745)

0.0% (19.2micros out of 500ms) cpu usage by thread

'elasticsearch[srv-vln-des1][transport_server_worker][T#1]{New I/O worker
#6}'
10/10 snapshots sharing following 15 elements
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)

org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:415)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)

org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)

org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)

org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)

java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)

java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
java.lang.Thread.run(Thread.java:745)

0.0% (18.7micros out of 500ms) cpu usage by thread

'elasticsearch[srv-vln-des1][transport_server_worker][T#2]{New I/O worker
#7}'
10/10 snapshots sharing following 15 elements
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)

org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:415)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)

org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)

org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)

org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)

java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)

java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
java.lang.Thread.run(Thread.java:745)

0.0% (18micros out of 500ms) cpu usage by thread

'elasticsearch[srv-vln-des1][transport_client_worker][T#3]{New I/O worker
#3}'
10/10 snapshots sharing following 15 elements
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)

org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:415)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)

org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)

org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)

org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)

java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)

java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
java.lang.Thread.run(Thread.java:745)

0.0% (0s out of 500ms) cpu usage by thread

'elasticsearch[srv-vln-des1][transport_server_boss][T#1]{New I/O server
boss #10}'
10/10 snapshots sharing following 14 elements
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:102)

org.elasticsearch.common.netty.channel.socket.nio.NioServerBoss.select(NioServerBoss.java:163)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)

org.elasticsearch.common.netty.channel.socket.nio.NioServerBoss.run(NioServerBoss.java:42)

org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)

org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)

java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)

java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
java.lang.Thread.run(Thread.java:745)

0.0% (0s out of 500ms) cpu usage by thread 'Reference Handler'
 10/10 snapshots sharing following 3 elements
   java.lang.Object.wait(Native Method)
   java.lang.Object.wait(Object.java:503)
   java.lang.ref.Reference$ReferenceHandler.run(Reference.java:133)

0.0% (0s out of 500ms) cpu usage by thread 'elasticsearch[srv-vln-des1][clusterService#updateTask][T#1]'
 10/10 snapshots sharing following 8 elements
   sun.misc.Unsafe.park(Native Method)
   java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
   java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
   java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:539)
   java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
   java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
   java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   java.lang.Thread.run(Thread.java:745)

0.0% (0s out of 500ms) cpu usage by thread 'elasticsearch[srv-vln-des1][riverClusterService#updateTask][T#1]'
 10/10 snapshots sharing following 8 elements
   sun.misc.Unsafe.park(Native Method)
   java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
   java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
   java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
   java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
   java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
   java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   java.lang.Thread.run(Thread.java:745)

0.0% (0s out of 500ms) cpu usage by thread 'elasticsearch[srv-vln-des1][master_mapping_updater]'
 10/10 snapshots sharing following 6 elements
   sun.misc.Unsafe.park(Native Method)
   java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
   java.util.concurrent.LinkedTransferQueue.awaitMatch(LinkedTransferQueue.java:731)
   java.util.concurrent.LinkedTransferQueue.xfer(LinkedTransferQueue.java:644)
   java.util.concurrent.LinkedTransferQueue.poll(LinkedTransferQueue.java:1145)
   org.elasticsearch.cluster.action.index.MappingUpdatedAction$MasterMappingUpdater.run(MappingUpdatedAction.java:363)

0.0% (0s out of 500ms) cpu usage by thread 'elasticsearch[srv-vln-des1][management][T#1]'
 10/10 snapshots sharing following 9 elements
   sun.misc.Unsafe.park(Native Method)
   java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
   java.util.concurrent.LinkedTransferQueue.awaitMatch(LinkedTransferQueue.java:731)
   java.util.concurrent.LinkedTransferQueue.xfer(LinkedTransferQueue.java:644)
   java.util.concurrent.LinkedTransferQueue.poll(LinkedTransferQueue.java:1145)
   java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
   java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
   java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   java.lang.Thread.run(Thread.java:745)

0.0% (0s out of 500ms) cpu usage by thread 'elasticsearch[srv-vln-des1][[ttl_expire]]'
 10/10 snapshots sharing following 5 elements
   sun.misc.Unsafe.park(Native Method)
   java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
   java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2176)
   org.elasticsearch.indices.ttl.IndicesTTLService$Notifier.await(IndicesTTLService.java:325)
   org.elasticsearch.indices.ttl.IndicesTTLService$PurgerThread.run(IndicesTTLService.java:147)

0.0% (0s out of 500ms) cpu usage by thread 'elasticsearch[srv-vln-des1][http_server_boss][T#1]{New I/O server boss #15}'
 10/10 snapshots sharing following 14 elements
   sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
   sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
   sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
   sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
   sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
   sun.nio.ch.SelectorImpl.select(SelectorImpl.java:102)
   org.elasticsearch.common.netty.channel.socket.nio.NioServerBoss.select(NioServerBoss.java:163)
   org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
   org.elasticsearch.common.netty.channel.socket.nio.NioServerBoss.run(NioServerBoss.java:42)
   org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
   org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
   java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   java.lang.Thread.run(Thread.java:745)

0.0% (0s out of 500ms) cpu usage by thread 'Signal Dispatcher'
 unique snapshot
 unique snapshot
 unique snapshot
 unique snapshot
 unique snapshot
 unique snapshot
 unique snapshot
 unique snapshot
 unique snapshot
 unique snapshot

0.0% (0s out of 500ms) cpu usage by thread 'Finalizer'
 10/10 snapshots sharing following 4 elements
   java.lang.Object.wait(Native Method)
   java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135)
   java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:151)
   java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:209)

0.0% (0s out of 500ms) cpu usage by thread 'DestroyJavaVM'
 unique snapshot
 unique snapshot
 unique snapshot
 unique snapshot
 unique snapshot
 unique snapshot
 unique snapshot
 unique snapshot
 unique snapshot
 unique snapshot

0.0% (0s out of 500ms) cpu usage by thread 'elasticsearch[keepAlive/1.3.4]'
 10/10 snapshots sharing following 8 elements
   sun.misc.Unsafe.park(Native Method)
   java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
   java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:834)
   java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:994)
   java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1303)
   java.util.concurrent.CountDownLatch.await(CountDownLatch.java:236)
   org.elasticsearch.bootstrap.Bootstrap$3.run(Bootstrap.java:225)
   java.lang.Thread.run(Thread.java:745)

0.0% (0s out of 500ms) cpu usage by thread 'elasticsearch[srv-vln-des1][management][T#2]'
 10/10 snapshots sharing following 9 elements
   sun.misc.Unsafe.park(Native Method)
   java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
   java.util.concurrent.LinkedTransferQueue.awaitMatch(LinkedTransferQueue.java:731)
   java.util.concurrent.LinkedTransferQueue.xfer(LinkedTransferQueue.java:644)
   java.util.concurrent.LinkedTransferQueue.poll(LinkedTransferQueue.java:1145)
   java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
   java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
   java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   java.lang.Thread.run(Thread.java:745)

0.0% (0s out of 500ms) cpu usage by thread 'elasticsearch[srv-vln-des1][warmer][T#1]'
 10/10 snapshots sharing following 9 elements
   sun.misc.Unsafe.park(Native Method)
   java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
   java.util.concurrent.LinkedTransferQueue.awaitMatch(LinkedTransferQueue.java:735)
   java.util.concurrent.LinkedTransferQueue.xfer(LinkedTransferQueue.java:644)
   java.util.concurrent.LinkedTransferQueue.take(LinkedTransferQueue.java:1137)
   java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
   java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
   java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   java.lang.Thread.run(Thread.java:745)

0.0% (0s out of 500ms) cpu usage by thread 'elasticsearch[srv-vln-des1][search][T#2]'
 10/10 snapshots sharing following 10 elements
   sun.misc.Unsafe.park(Native Method)
   java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
   java.util.concurrent.LinkedTransferQueue.awaitMatch(LinkedTransferQueue.java:735)
   java.util.concurrent.LinkedTransferQueue.xfer(LinkedTransferQueue.java:644)
   java.util.concurrent.LinkedTransferQueue.take(LinkedTransferQueue.java:1137)
   org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:162)
   java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
   java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
   java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   java.lang.Thread.run(Thread.java:745)

0.0% (0s out of 500ms) cpu usage by thread 'elasticsearch[srv-vln-des1][search][T#1]'
 10/10 snapshots sharing following 10 elements
   sun.misc.Unsafe.park(Native Method)
   java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
   java.util.concurrent.LinkedTransferQueue.awaitMatch(LinkedTransferQueue.java:735)
   java.util.concurrent.LinkedTransferQueue.xfer(LinkedTransferQueue.java:644)
   java.util.concurrent.LinkedTransferQueue.take(LinkedTransferQueue.java:1137)
   org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:162)
   java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
   java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
   java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   java.lang.Thread.run(Thread.java:745)

0.0% (0s out of 500ms) cpu usage by thread 'elasticsearch[srv-vln-des1][search][T#6]'
 10/10 snapshots sharing following 10 elements
   sun.misc.Unsafe.park(Native Method)
   java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
   java.util.concurrent.LinkedTransferQueue.awaitMatch(LinkedTransferQueue.java:735)
   java.util.concurrent.LinkedTransferQueue.xfer(LinkedTransferQueue.java:644)
   java.util.concurrent.LinkedTransferQueue.take(LinkedTransferQueue.java:1137)
   org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:162)
   java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
   java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
   java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   java.lang.Thread.run(Thread.java:745)

0.0% (0s out of 500ms) cpu usage by thread 'elasticsearch[srv-vln-des1][search][T#5]'
 10/10 snapshots sharing following 10 elements
   sun.misc.Unsafe.park(Native Method)
   java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
   java.util.concurrent.LinkedTransferQueue.awaitMatch(LinkedTransferQueue.java:735)
   java.util.concurrent.LinkedTransferQueue.xfer(LinkedTransferQueue.java:644)
   java.util.concurrent.LinkedTransferQueue.take(LinkedTransferQueue.java:1137)
   org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:162)
   java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
   java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
   java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   java.lang.Thread.run(Thread.java:745)

0.0% (0s out of 500ms) cpu usage by thread 'elasticsearch[srv-vln-des1][search][T#4]'
 10/10 snapshots sharing following 10 elements
   sun.misc.Unsafe.park(Native Method)
   java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
   java.util.concurrent.LinkedTransferQueue.awaitMatch(LinkedTransferQueue.java:735)
   java.util.concurrent.LinkedTransferQueue.xfer(LinkedTransferQueue.java:644)
   java.util.concurrent.LinkedTransferQueue.take(LinkedTransferQueue.java:1137)
   org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:162)
   java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
   java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
   java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   java.lang.Thread.run(Thread.java:745)

0.0% (0s out of 500ms) cpu usage by thread 'elasticsearch[srv-vln-des1][search][T#3]'
 10/10 snapshots sharing following 10 elements
   sun.misc.Unsafe.park(Native Method)
   java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
   java.util.concurrent.LinkedTransferQueue.awaitMatch(LinkedTransferQueue.java:735)
   java.util.concurrent.LinkedTransferQueue.xfer(LinkedTransferQueue.java:644)
   java.util.concurrent.LinkedTransferQueue.take(LinkedTransferQueue.java:1137)
   org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:162)
   java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
   java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
   java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   java.lang.Thread.run(Thread.java:745)

Thanks. I await your response.


@imotov
Contributor

imotov commented Feb 5, 2015

@fdelelis Is this behavior specific to this particular machine? Can you reproduce it on another computer? If you can, it would be great if you could compress the data directory and send it to us for analysis.
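For reference, a minimal sketch of packaging the data directory, assuming the RPM default path /var/lib/elasticsearch (adjust if path.data points elsewhere):

# stop indexing first, then archive the data directory (the path is an assumption)
sudo tar czf elasticsearch-data.tar.gz -C /var/lib elasticsearch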

@fdelelis
Author

fdelelis commented Feb 5, 2015

Hi,

I couldn't reproduce it on another computer, but I would need to know what the cause is. I am sending you the data directory.
Thanks


@imotov
Contributor

imotov commented Feb 5, 2015

@fdelelis If it doesn't reproduce on another computer, having the data might not help us much. You should try comparing the two computers and figuring out the difference. In particular, please compare the file systems where the data is stored. Is it slow on a virtualized machine and fast on non-virtualized hardware? Try narrowing it down to the single factor that makes the difference.

Based on your data we checked all the usual suspects. Besides the large swap on your computer, nothing jumps out. However, we think that if swap were the issue you would have seen slowdowns in other areas as well. It might still be a good idea to make sure that Elasticsearch doesn't swap.
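On Elasticsearch 1.x that means setting bootstrap.mlockall: true in elasticsearch.yml (assuming the RPM default config path /etc/elasticsearch/elasticsearch.yml) and restarting the node; a sketch of verifying that the setting took effect:

# the node's process info reports whether the heap is locked in RAM
curl "http://localhost:9200/_nodes/process?pretty"
# look for "mlockall" : true in the response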

@fdelelis
Author

fdelelis commented Feb 5, 2015

Hi Igor,

I am going to try comparing with a non-virtualized machine. I would also like to ask whether this version of Elasticsearch could have a bug, because I compared two virtualized machines, one with version 1.1.1.1 and the other with 1.3.4.1. The latter is where I have the problem; version 1.1.1.1 works fine.

Thank you very much.

@imotov
Contributor

imotov commented Feb 5, 2015

@fdelelis when you said "one of them with version 1.1.1.1 and the other with 1.3.4.1", which software were you talking about?

@fdelelis
Author

fdelelis commented Feb 6, 2015

I am referring to Elasticsearch. I want to upgrade to 1.4.

Thanks


@dadoonet
Member

dadoonet commented Feb 6, 2015

We don't have four digits in Elasticsearch version numbers; that's why @imotov asked you this.
I was reading the thread again and had a hard time understanding what exactly is happening here.

It also looks like you have a lot of shards on a single machine, yet when you asked for node stats there were only 3 documents. That's super strange.
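A quick way to cross-check those shard and document counts, assuming the default port 9200:

curl "http://localhost:9200/_cat/indices?v"   # per-index shard and document counts
curl "http://localhost:9200/_cat/shards?v"    # per-shard allocation and state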

@fdelelis
Author

fdelelis commented Feb 6, 2015

Hi David,

I use RPM installations, so the versions are 1.1 and 1.3.

That is because there is another cluster on the same machine, and I started a second one to test; the old Elasticsearch cluster is down.

Could it be a bug in version 1.3? Versions 1.1 and 1.4 work fine.

Thanks
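For reference, the exact version a node is running can be checked against its root endpoint (assuming the default port):

curl "http://localhost:9200/"
# the response includes "version" : { "number" : "1.3.4", ... }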


@imotov
Contributor

imotov commented Feb 6, 2015

@fdelelis I cannot think of any bugs that were introduced after 1.1, fixed in 1.4, and match your description. However, upgrading might be a good idea regardless. Any chance you can send us your log file from the failing version?
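On an RPM install the log file should live under /var/log/elasticsearch, named after the cluster (which defaults to "elasticsearch"); a sketch, assuming those defaults:

tail -n 200 /var/log/elasticsearch/elasticsearch.log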

@fdelelis
Author

fdelelis commented Feb 9, 2015

Hi Igor,
I will upgrade the cluster tomorrow. I will send the results and logs.

Thanks

@fdelelis
Author

Hi Igor,

Sorry for not sending you the results earlier. After the upgrade, I can open the index correctly, but I think I had an error in the Sense application that I use to check it.

Thanks.
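For anyone hitting the same symptom, one way to block until a restored index is searchable is the cluster health API with wait_for_status; a sketch, assuming a hypothetical index named my_index:

curl "http://localhost:9200/_cluster/health/my_index?wait_for_status=yellow&timeout=5m&pretty"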


@imotov
Contributor

imotov commented Feb 23, 2015

@fdelelis should I close the issue then?

@fdelelis
Author

Yes Igor.

Thanks a lot.


@imotov imotov closed this as completed Feb 23, 2015