a problem about elasticsearch
jcbms <jcbm...@...>
janusgraph: 0.2.0, elasticsearch: 6.1.1. Can you tell me why this happens?

06:21:01,169 ERROR ElasticSearchIndex:604 - Failed to execute bulk Elasticsearch mutation
java.lang.RuntimeException: error while performing request
    at org.elasticsearch.client.RestClient$SyncResponseListener.get(RestClient.java:681)
    at org.elasticsearch.client.RestClient.performRequest(RestClient.java:219)
    at org.elasticsearch.client.RestClient.performRequest(RestClient.java:191)
    at org.janusgraph.diskstorage.es.rest.RestElasticSearchClient.performRequest(RestElasticSearchClient.java:320)
    at org.janusgraph.diskstorage.es.rest.RestElasticSearchClient.bulkRequest(RestElasticSearchClient.java:249)
    at org.janusgraph.diskstorage.es.ElasticSearchIndex.mutate(ElasticSearchIndex.java:601)
    at org.janusgraph.diskstorage.indexing.IndexTransaction$1.call(IndexTransaction.java:160)
    at org.janusgraph.diskstorage.indexing.IndexTransaction$1.call(IndexTransaction.java:157)
    at org.janusgraph.diskstorage.util.BackendOperation.executeDirect(BackendOperation.java:69)
    at org.janusgraph.diskstorage.util.BackendOperation.execute(BackendOperation.java:55)
    at org.janusgraph.diskstorage.indexing.IndexTransaction.flushInternal(IndexTransaction.java:157)
    at org.janusgraph.diskstorage.indexing.IndexTransaction.commit(IndexTransaction.java:138)
    at org.janusgraph.diskstorage.BackendTransaction.commitIndexes(BackendTransaction.java:141)
    at org.janusgraph.graphdb.database.StandardJanusGraph.commit(StandardJanusGraph.java:751)
    at org.janusgraph.graphdb.transaction.StandardJanusGraphTx.commit(StandardJanusGraphTx.java:1374)
    at DNSSubmit.submit(DNSSubmit.java:104)
    at LoadData$4.run(LoadData.java:160)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
jcbms <jcbm...@...>
The full stack trace is:
06:31:12,386 ERROR ElasticSearchIndex:604 - Failed to execute bulk Elasticsearch mutation
java.lang.RuntimeException: error while performing request
    at org.elasticsearch.client.RestClient$SyncResponseListener.get(RestClient.java:681)
    at org.elasticsearch.client.RestClient.performRequest(RestClient.java:219)
    at org.elasticsearch.client.RestClient.performRequest(RestClient.java:191)
    at org.janusgraph.diskstorage.es.rest.RestElasticSearchClient.performRequest(RestElasticSearchClient.java:320)
    at org.janusgraph.diskstorage.es.rest.RestElasticSearchClient.bulkRequest(RestElasticSearchClient.java:249)
    at org.janusgraph.diskstorage.es.ElasticSearchIndex.mutate(ElasticSearchIndex.java:601)
    at org.janusgraph.diskstorage.indexing.IndexTransaction$1.call(IndexTransaction.java:160)
    at org.janusgraph.diskstorage.indexing.IndexTransaction$1.call(IndexTransaction.java:157)
    at org.janusgraph.diskstorage.util.BackendOperation.executeDirect(BackendOperation.java:69)
    at org.janusgraph.diskstorage.util.BackendOperation.execute(BackendOperation.java:55)
    at org.janusgraph.diskstorage.indexing.IndexTransaction.flushInternal(IndexTransaction.java:157)
    at org.janusgraph.diskstorage.indexing.IndexTransaction.commit(IndexTransaction.java:138)
    at org.janusgraph.diskstorage.BackendTransaction.commitIndexes(BackendTransaction.java:141)
    at org.janusgraph.graphdb.database.StandardJanusGraph.commit(StandardJanusGraph.java:751)
    at org.janusgraph.graphdb.transaction.StandardJanusGraphTx.commit(StandardJanusGraphTx.java:1374)
    at DNSSubmit.submit(DNSSubmit.java:104)
    at LoadData$4.run(LoadData.java:160)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.util.concurrent.TimeoutException
    at org.apache.http.nio.pool.AbstractNIOConnPool.processPendingRequest(AbstractNIOConnPool.java:364)
    at org.apache.http.nio.pool.AbstractNIOConnPool.processNextPendingRequest(AbstractNIOConnPool.java:344)
    at org.apache.http.nio.pool.AbstractNIOConnPool.release(AbstractNIOConnPool.java:318)
    at org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager.releaseConnection(PoolingNHttpClientConnectionManager.java:303)
    at org.apache.http.impl.nio.client.AbstractClientExchangeHandler.releaseConnection(AbstractClientExchangeHandler.java:239)
    at org.apache.http.impl.nio.client.MainClientExec.responseCompleted(MainClientExec.java:387)
    at org.apache.http.impl.nio.client.DefaultClientExchangeHandlerImpl.responseCompleted(DefaultClientExchangeHandlerImpl.java:168)
    at org.apache.http.nio.protocol.HttpAsyncRequestExecutor.processResponse(HttpAsyncRequestExecutor.java:436)
    at org.apache.http.nio.protocol.HttpAsyncRequestExecutor.inputReady(HttpAsyncRequestExecutor.java:326)
    at org.apache.http.impl.nio.DefaultNHttpClientConnection.consumeInput(DefaultNHttpClientConnection.java:265)
    at org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:81)
    at org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:39)
    at org.apache.http.impl.nio.reactor.AbstractIODispatch.inputReady(AbstractIODispatch.java:114)
    at org.apache.http.impl.nio.reactor.BaseIOReactor.readable(BaseIOReactor.java:162)
    at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvent(AbstractIOReactor.java:337)
    at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvents(AbstractIOReactor.java:315)
    at org.apache.http.impl.nio.reactor.AbstractIOReactor.execute(AbstractIOReactor.java:276)
    at org.apache.http.impl.nio.reactor.BaseIOReactor.execute(BaseIOReactor.java:104)
    at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor$Worker.run(AbstractMultiworkerIOReactor.java:588)
    ... 1 more

06:31:12,389 ERROR StandardJanusGraph:755 - Error while commiting index mutations for transaction [2382] on index: search
org.janusgraph.core.JanusGraphException: Could not execute operation due to backend exception
    at org.janusgraph.diskstorage.util.BackendOperation.execute(BackendOperation.java:57)
    at org.janusgraph.diskstorage.indexing.IndexTransaction.flushInternal(IndexTransaction.java:157)
    at org.janusgraph.diskstorage.indexing.IndexTransaction.commit(IndexTransaction.java:138)
    at org.janusgraph.diskstorage.BackendTransaction.commitIndexes(BackendTransaction.java:141)
    at org.janusgraph.graphdb.database.StandardJanusGraph.commit(StandardJanusGraph.java:751)
    at org.janusgraph.graphdb.transaction.StandardJanusGraphTx.commit(StandardJanusGraphTx.java:1374)
    at DNSSubmit.submit(DNSSubmit.java:104)
    at LoadData$4.run(LoadData.java:160)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.janusgraph.diskstorage.PermanentBackendException: Unknown exception while executing index operation
    at org.janusgraph.diskstorage.es.ElasticSearchIndex.convert(ElasticSearchIndex.java:301)
    at org.janusgraph.diskstorage.es.ElasticSearchIndex.mutate(ElasticSearchIndex.java:605)
    at org.janusgraph.diskstorage.indexing.IndexTransaction$1.call(IndexTransaction.java:160)
    at org.janusgraph.diskstorage.indexing.IndexTransaction$1.call(IndexTransaction.java:157)
    at org.janusgraph.diskstorage.util.BackendOperation.executeDirect(BackendOperation.java:69)
    at org.janusgraph.diskstorage.util.BackendOperation.execute(BackendOperation.java:55)
    ... 10 more
Caused by: java.lang.RuntimeException: error while performing request
    at org.elasticsearch.client.RestClient$SyncResponseListener.get(RestClient.java:681)
    at org.elasticsearch.client.RestClient.performRequest(RestClient.java:219)
    at org.elasticsearch.client.RestClient.performRequest(RestClient.java:191)
    at org.janusgraph.diskstorage.es.rest.RestElasticSearchClient.performRequest(RestElasticSearchClient.java:320)
    at org.janusgraph.diskstorage.es.rest.RestElasticSearchClient.bulkRequest(RestElasticSearchClient.java:249)
    at org.janusgraph.diskstorage.es.ElasticSearchIndex.mutate(ElasticSearchIndex.java:601)
    ... 14 more
Caused by: java.util.concurrent.TimeoutException
    at org.apache.http.nio.pool.AbstractNIOConnPool.processPendingRequest(AbstractNIOConnPool.java:364)
    at org.apache.http.nio.pool.AbstractNIOConnPool.processNextPendingRequest(AbstractNIOConnPool.java:344)
    at org.apache.http.nio.pool.AbstractNIOConnPool.release(AbstractNIOConnPool.java:318)
    at org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager.releaseConnection(PoolingNHttpClientConnectionManager.java:303)
    at org.apache.http.impl.nio.client.AbstractClientExchangeHandler.releaseConnection(AbstractClientExchangeHandler.java:239)
    at org.apache.http.impl.nio.client.MainClientExec.responseCompleted(MainClientExec.java:387)
    at org.apache.http.impl.nio.client.DefaultClientExchangeHandlerImpl.responseCompleted(DefaultClientExchangeHandlerImpl.java:168)
    at org.apache.http.nio.protocol.HttpAsyncRequestExecutor.processResponse(HttpAsyncRequestExecutor.java:436)
    at org.apache.http.nio.protocol.HttpAsyncRequestExecutor.inputReady(HttpAsyncRequestExecutor.java:326)
    at org.apache.http.impl.nio.DefaultNHttpClientConnection.consumeInput(DefaultNHttpClientConnection.java:265)
    at org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:81)
    at org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:39)
    at org.apache.http.impl.nio.reactor.AbstractIODispatch.inputReady(AbstractIODispatch.java:114)
    at org.apache.http.impl.nio.reactor.BaseIOReactor.readable(BaseIOReactor.java:162)
    at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvent(AbstractIOReactor.java:337)
    at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvents(AbstractIOReactor.java:315)
    at org.apache.http.impl.nio.reactor.AbstractIOReactor.execute(AbstractIOReactor.java:276)
    at org.apache.http.impl.nio.reactor.BaseIOReactor.execute(BaseIOReactor.java:104)
    at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor$Worker.run(AbstractMultiworkerIOReactor.java:588)

On Wednesday, July 25, 2018 at 2:30:27 PM UTC+8, jcbms wrote:
jcbms <jcbm...@...>
I want to know whether this problem will lose data, and how to prevent it.

On Wednesday, July 25, 2018 at 2:30:27 PM UTC+8, jcbms wrote:
Ted Wilmes <twi...@...>
Hi jcbms,

This sort of thing usually means you're overloading your Elasticsearch server(s). Perhaps you're committing too much in a single transaction, or you don't have enough resources to support the load you're placing on the ES server. I'd suggest watching the ES metrics and turning down the concurrency of your load and/or your batch sizes, or perhaps expanding your ES resources.

--Ted

On Wednesday, July 25, 2018 at 1:37:00 AM UTC-5, jcbms wrote:
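[To make that concrete, here is a minimal sketch of committing in smaller batches, assuming a JanusGraph handle `graph`; the `domain` label, `name` property, and batch size of 100 are illustrative stand-ins, not taken from this thread.]

import org.janusgraph.core.JanusGraph;
import org.janusgraph.core.JanusGraphTransaction;
import org.janusgraph.core.JanusGraphVertex;

public class BatchedLoader {
    // Illustrative batch size; tune it while watching the ES metrics.
    private static final int BATCH_SIZE = 100;

    public static void load(JanusGraph graph, Iterable<String> names) {
        JanusGraphTransaction tx = graph.newTransaction();
        int count = 0;
        for (String name : names) {
            // "domain" and "name" are hypothetical; substitute your own schema.
            JanusGraphVertex v = tx.addVertex("domain");
            v.property("name", name);
            if (++count % BATCH_SIZE == 0) {
                tx.commit();                 // flushes this batch's index mutations to ES
                tx = graph.newTransaction();
            }
        }
        tx.commit();                         // commit the trailing partial batch
    }
}

Smaller commits keep each bulk request to ES bounded, so a slow index stalls one small batch rather than timing out a huge one.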
Jason Plurad <plu...@...>
The data is still stored in the storage backend (Cassandra, HBase). You could run a reindex operation to get ES populated with the data after addressing some of the possible data overload situations that Ted mentioned. See also this thread from the Elasticsearch forum: https://discuss.elastic.co/t/restclient-java-lang-runtimeexception-error-while-performing-request/81078

On Wednesday, July 25, 2018 at 11:08:26 AM UTC-4, Ted Wilmes wrote:
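[For reference, a sketch of what that reindex can look like through the management API, assuming the mixed index is the one named "search" from the log above and that the default in-process reindex job is acceptable for the data size.]

import java.util.concurrent.ExecutionException;

import org.janusgraph.core.JanusGraph;
import org.janusgraph.core.schema.JanusGraphManagement;
import org.janusgraph.core.schema.SchemaAction;

public class ReindexSearch {
    public static void run(JanusGraph graph) throws InterruptedException, ExecutionException {
        JanusGraphManagement mgmt = graph.openManagement();
        // REINDEX rebuilds the mixed index from the data that is still
        // safely stored in the storage backend (Cassandra/HBase).
        mgmt.updateIndex(mgmt.getGraphIndex("search"), SchemaAction.REINDEX).get();
        mgmt.commit();
    }
}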
Vincent Praveen <vincent2...@...>
Hi Jason,
We are also facing the same issue and want to find out the root cause. These are the steps we take: we ingest 1000 edges in a single commit, and I understand the error comes from the ES side. We assume that either ES cannot complete the indexing of the 1000 inserted records within 30 seconds, or a 1000-record batch is too large for it to process. (Note: we have always used 1000-edge batches and there was no problem earlier; we already have about 200M+ edges inside.) We want to know what could have triggered this message, and which steps happen in which order, so we can dig deeper.

Questions:
1. After we ingest the 1000 edges and run commit, ES takes over and runs the indexing. If this step fails, will the indexing not happen at all for these 1000 records?
2. We noticed a recent slowness in our ingestion speed. Could this be due to this ES index issue, and if so, is there anything we can do to overcome it?
3. For each batch commit, does JG wait for ES to complete the indexing before moving on to the next batch?
4. We have multiple error messages in our logs and want to know the impact on the index and any possible data loss.

Regards,
Vincent

On Wednesday, August 8, 2018 at 10:07:03 PM UTC+8, Jason Plurad wrote:
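[On question 3, the stack traces earlier in this thread show the index flush running synchronously inside commit(): BackendTransaction.commitIndexes sits directly on the commit path. On question 1, a hedged sketch of catching the failure at commit time, assuming the failure propagates as a JanusGraphException in your version; in some versions it may only be logged, as in the trace above.]

import org.janusgraph.core.JanusGraphException;
import org.janusgraph.core.JanusGraphTransaction;

public class GuardedCommit {
    // Returns true if the commit (including the ES flush) succeeded.
    public static boolean tryCommit(JanusGraphTransaction tx, long batchId) {
        try {
            tx.commit();
            return true;
        } catch (JanusGraphException e) {
            // The storage-backend write may already be durable while only the
            // index flush failed, so re-inserting blindly risks duplicates;
            // record the batch for a later targeted REINDEX instead.
            System.err.println("Index flush failed for batch " + batchId + ": " + e.getMessage());
            return false;
        }
    }
}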
Abhay Pandit <abha...@...>
Hi Vincent,

Recently I faced a lot of mutation errors. The simple solution is just to upgrade your JanusGraph to v0.5.2. Hope this helps you.

Thanks,
Abhay

On Tue, 18 Aug 2020 at 14:04, Vincent Praveen <vincent2...@...> wrote:
Vincent Praveen <vincent2...@...>
Hi Abhay,
Thanks for your suggestion. Just curious: isn't this issue related to ES rather than JG? How would upgrading JG help with ES's handling of indexes and mutations?

Regards,
Vincent

On Wednesday, August 19, 2020 at 2:21:29 AM UTC+8, ab...@... wrote:
Anshul Sharma <sharma.a...@...>
Hi JG Team,
Would like to add to the same email conversation. As we understand it, the error below occurs because JanusGraph is waiting for the index response from Elasticsearch. Is there any way we can increase the listener timeout? Does JG provide any metric related to it?

ERROR org.janusgraph.diskstorage.es.ElasticSearchIndex - Failed to execute bulk Elasticsearch mutation
java.io.IOException: listener timeout after waiting for [30000] ms
    at org.elasticsearch.client.RestClient$SyncResponseListener.get(RestClient.java:661)

Thanks & Regards,
Anshul Sharma

On Wednesday, August 19, 2020 at 7:58:38 AM UTC+5:30, vinc...@... wrote:
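[The [30000] ms in that message is the ES 6.x low-level REST client's max-retry timeout, i.e. its "listener timeout", which defaults to 30 seconds. JanusGraph constructs this client internally, so whether your JanusGraph version exposes a config key for it under the index.[X].elasticsearch.* namespace needs checking against its docs; purely as an illustration of the underlying knob, raising it on a raw client looks like the sketch below, where the host and the 120000 ms value are examples.]

import org.apache.http.HttpHost;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestClientBuilder;

public class LongerListenerTimeout {
    public static RestClient build() {
        RestClientBuilder builder = RestClient.builder(new HttpHost("localhost", 9200, "http"));
        // The "listener timeout" is maxRetryTimeoutMillis in the ES 6.x
        // low-level client (this setting was removed in 7.x).
        builder.setMaxRetryTimeoutMillis(120_000);
        return builder.build();
    }
}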
anjanisingh22@...
Hi Anshul,
I am facing the same issue. Did you get any solution for it?

Thanks,
Anjani
Is it ES [the software] that is bottlenecking, or could it be the HW you have it running on? If the HW isn't the issue, have you been able to trace where the issue is in ES? But if not, I'd be remiss not to put in a plug for Scylla as a better-performing option as a JanusGraph data store. Hope you get it resolved!

On Fri, Jun 11, 2021, 1:37 AM <anjanisingh22@...> wrote: