
Re: Is there a way to override script and response timeout for some longer running gremlin queries

Stephen Mallette <spmal...@...>
 

 https://issues.apache.org/jira/browse/TINKERPOP-1342

might be on my todo list actually

On Wed, Jul 25, 2018 at 1:21 PM Jason Plurad <plu...@...> wrote:
No, there's no way to override the timeout on a per-request basis. Sounds like it could be an interesting enhancement.

On Sunday, July 22, 2018 at 11:50:09 PM UTC-4, Mitesh Patel wrote:
Hi Jason,
Thanks for your reply. I am aware of scriptEvaluationTimeout, but I am looking for something more on-the-fly: overriding the timeout (via Gremlin Server commands) for some longer-running queries. Are there any such commands? E.g., there are call methods on awaitGraphIndexStatus, but I did not find anything helpful there.

Thanks,
Mitesh

On Saturday, July 21, 2018 at 8:11:33 PM UTC+5:30, Jason Plurad wrote:
To increase the timeout on the Gremlin Server, update the scriptEvaluationTimeout value (defaults to 30 seconds) in the gremlin-server.yaml
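As a sketch, the relevant fragment of gremlin-server.yaml might look like this (the value is in milliseconds; 120000 is just an illustrative choice, and the rest of the file shipped with your version may differ):

```yaml
# gremlin-server.yaml (fragment)
# scriptEvaluationTimeout is expressed in milliseconds (default 30000)
scriptEvaluationTimeout: 120000
```

A restart of the Gremlin Server is needed for the change to take effect.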

Composite indexes work best when they are highly selective with a low number of matches.

On Thursday, July 19, 2018 at 11:17:12 AM UTC-4, Mitesh Patel wrote:
Hi,
I am trying to run some Gremlin queries which take longer than the 30-second timeout set in my gremlin-server conf file.
Is there any mechanism to override the timeout for certain queries?

For example, a simple query:
g.V().has('property_idx','indexed').count()

Although property_idx is indexed, it still runs for more than 30 seconds since the data is huge. Is there a way to override the timeout for such queries?

- Regards,
Mitesh

--
You received this message because you are subscribed to the Google Groups "JanusGraph users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to janusgra...@....
To view this discussion on the web visit https://groups.google.com/d/msgid/janusgraph-users/4e307e0a-def0-4c67-a30e-13ec4799878c%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: user define id question

Jason Plurad <plu...@...>
 

I'm not certain whether you would hit conflicts. You would need to dig a bit into the id assignment source code to verify whether this is true.


On Monday, July 23, 2018 at 4:39:49 AM UTC-4, jcbms wrote:
Can you tell me what the reason for the conflicts would be?

On Saturday, July 21, 2018 at 11:13:33 PM UTC+8, Jason Plurad wrote:
I believe you would potentially run into conflicts. I don't think that configuration property is intended to be toggled on/off.

You might be better off creating a property named "id" to store your custom identifier and creating a composite index against it.
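A rough console sketch of that suggestion, run against a fresh schema (names are illustrative; note that a property literally named "id" is easy to confuse with T.id in Gremlin, so a name like "extId" may be safer):

```groovy
// Sketch only: store the external identifier as a normal property
// backed by a unique composite index
mgmt = graph.openManagement()
extId = mgmt.makePropertyKey('extId').dataType(String.class).make()
mgmt.buildIndex('byExtId', Vertex.class).addKey(extId).unique().buildCompositeIndex()
mgmt.commit()

g = graph.traversal()
g.addV('thing').property('extId', 'ext-42').iterate()
g.V().has('extId', 'ext-42')  // lookup served by the composite index
```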


On Friday, July 20, 2018 at 4:53:32 AM UTC-4, jcbms wrote:
If I set this property:
graph.set-vertex-id=true
and import the data successfully, and then I want to switch to JanusGraph's automatic ID assignment, what should I do? Will the IDs JanusGraph assigns conflict with my old IDs?


Re: Is there a way to override script and response timeout for some longer running gremlin queries

Jason Plurad <plu...@...>
 

No, there's no way to override the timeout on a per-request basis. Sounds like it could be an interesting enhancement.


On Sunday, July 22, 2018 at 11:50:09 PM UTC-4, Mitesh Patel wrote:
Hi Jason,
Thanks for your reply. I am aware of scriptEvaluationTimeout, but I am looking for something more on-the-fly: overriding the timeout (via Gremlin Server commands) for some longer-running queries. Are there any such commands? E.g., there are call methods on awaitGraphIndexStatus, but I did not find anything helpful there.

Thanks,
Mitesh

On Saturday, July 21, 2018 at 8:11:33 PM UTC+5:30, Jason Plurad wrote:
To increase the timeout on the Gremlin Server, update the scriptEvaluationTimeout value (defaults to 30 seconds) in the gremlin-server.yaml

Composite indexes work best when they are highly selective with a low number of matches.

On Thursday, July 19, 2018 at 11:17:12 AM UTC-4, Mitesh Patel wrote:
Hi,
I am trying to run some Gremlin queries which take longer than the 30-second timeout set in my gremlin-server conf file.
Is there any mechanism to override the timeout for certain queries?

For example, a simple query:
g.V().has('property_idx','indexed').count()

Although property_idx is indexed, it still runs for more than 30 seconds since the data is huge. Is there a way to override the timeout for such queries?

- Regards,
Mitesh


Re: Trying to improve query performance with vertex-centric indexes

Ted Wilmes <twi...@...>
 

Hi Clemens,
VCIs are local to each vertex and won't help with g.E().has(...) queries, which are global lookups. You will see them used in cases where you traverse a vertex's edges and filter on an edge property, following a pattern like this: g.V(123).outE().has('tstamp', gt(100)).inV().
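A minimal sketch of defining a vertex-centric index that can serve that local pattern (label and key names are illustrative, assuming a fresh schema):

```groovy
mgmt = graph.openManagement()
knows = mgmt.makeEdgeLabel('knows').make()
tstamp = mgmt.makePropertyKey('tstamp').dataType(Long.class).make()
// Vertex-centric index: orders each vertex's 'knows' edges by 'tstamp'
mgmt.buildEdgeIndex(knows, 'knowsByTstamp', Direction.BOTH, Order.decr, tstamp)
mgmt.commit()

// The index can serve this per-vertex filter...
g.V(123).outE('knows').has('tstamp', gt(100)).inV()
// ...but not a global scan like g.E().has('tstamp', gt(100))
```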

--Ted


On Wednesday, July 25, 2018 at 10:05:49 AM UTC-5, Clemens Viehl wrote:
Hello,

I'm trying to understand and use vertex-centric indexing in order to speed up traversals (like shortest path), but whatever I tried, I couldn't get them to be used in my queries.
Setup is JanusGraph 0.2.1 + Elasticsearch as provided with Janus + Cassandra 3.11.2 running with java.version 1.8.0_162.

In the following example I'm creating just a very small graph:

graph = JanusGraphFactory.open('conf/janusgraph-cql-es.properties')
mgmt = graph.openManagement()
// Vertex related initialisations
nodeIdKey = mgmt.makePropertyKey('nodeId').dataType(String.class).make()
idxNodeId = mgmt.buildIndex('idx_nodeId', Vertex.class).addKey(nodeIdKey).unique().buildCompositeIndex()
mgmt.setConsistency(idxNodeId, ConsistencyModifier.LOCK)
idxNodeId.getIndexStatus(nodeIdKey)
nodePropertyKey = mgmt.makePropertyKey('nodeProperty').dataType(String.class).make()
idxNodeProperty = mgmt.buildIndex('idx_nodeProperty', Vertex.class).addKey(nodePropertyKey).buildCompositeIndex()
idxNodeProperty.getIndexStatus(nodePropertyKey)
// Edge related initialisations
relationLabel = mgmt.makeEdgeLabel('relation').make()
edgePropertyKey = mgmt.makePropertyKey('edgeProperty').dataType(String.class).make()
idxVertexCentric = mgmt.buildEdgeIndex(relationLabel, 'idx_VertexCentric', Direction.BOTH, edgePropertyKey)
idxVertexCentric.getIndexStatus()
mgmt.commit()

// Creating some nodes
g = graph.traversal()
graph.addVertex('node').property('nodeId','4711')
g.V().has('nodeId','4711').property('nodeProperty','aaaa')
graph.addVertex('node').property('nodeId','4712')
g.V().has('nodeId','4712').property('nodeProperty','bbbb')
graph.addVertex('node').property('nodeId','4713')
g.V().has('nodeId','4713').property('nodeProperty','cccc')

from = g.V().has('nodeId','4711').next()
to = g.V().has('nodeId','4712').next()

// Creating an edge
from.addEdge('relation', to).property('nodeProperty', 'dddd')

// Check if indexes are used - in the profile output I can see that the composite index is used
g.V().has('node', 'nodeProperty','bbbb').profile()

// Expected usage of vertex centric index - but it isn't used here; my question is: why?
g.E().has('node', 'edgeProperty','dddd').profile()
g.E().has('node', 'edgeProperty', textContains('dddd')).profile()

Something I noticed during my testing that might be related: calling awaitRelationIndexStatus takes a long time (1m), and although the actualStatus looks good (ENABLED), the succeeded=false looks alarming to me.

gremlin> mgmt.awaitRelationIndexStatus(graph, 'idx_VertexCentric', 'relation').call()
==>RelationIndexStatusReport[succeeded=false, indexName='idx_VertexCentric', relationTypeName='relation', actualStatus=ENABLED, targetStatus=[REGISTERED], elapsed=PT1M0.129S]


Is this the reason why the vertex centric index isn't used? If so: what's most likely the cause?
For these tests I'm starting Cassandra, ElasticSearch and Gremlin shell via their windows scripts as they are (out of the box) without configuration changes.

Please note that I'm still quite new to the graph DB world, so I might miss something pretty basic...
Thanks in advance for any help!

Best regards,

Clemens


Re: a problem about elasticsearch

Ted Wilmes <twi...@...>
 

Hi jcbms,
This sort of thing usually means you're overloading your Elasticsearch server(s). Perhaps you're committing too much in a single transaction or you do not have enough resources to support the load you're placing on the ES server? I'd suggest watching the ES metrics and turning down the concurrency of your load and/or batch sizes, or perhaps expanding your ES resources.
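As a sketch of the "commit less per transaction" suggestion (the collection name and batch size are illustrative, not recommendations):

```groovy
// Sketch: commit in smaller batches instead of one huge transaction,
// so each commit produces a smaller Elasticsearch bulk request
batchSize = 1000  // illustrative; tune while watching ES metrics
records.eachWithIndex { rec, i ->
    graph.addVertex('item').property('name', rec)
    if ((i + 1) % batchSize == 0) {
        graph.tx().commit()
    }
}
graph.tx().commit()  // flush the final partial batch
```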

--Ted


On Wednesday, July 25, 2018 at 1:37:00 AM UTC-5, jcbms wrote:
I want to know whether this problem will lose data, and how to prevent it.

On Wednesday, July 25, 2018 at 2:30:27 PM UTC+8, jcbms wrote:
janusgraph : 0.2.0
elasticsearch: 6.1.1

Can you tell me why this happens?
06:21:01,169 ERROR ElasticSearchIndex:604 - Failed to execute bulk Elasticsearch mutation
java.lang.RuntimeException: error while performing request
at org.elasticsearch.client.RestClient$SyncResponseListener.get(RestClient.java:681)
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:219)
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:191)
at org.janusgraph.diskstorage.es.rest.RestElasticSearchClient.performRequest(RestElasticSearchClient.java:320)
at org.janusgraph.diskstorage.es.rest.RestElasticSearchClient.bulkRequest(RestElasticSearchClient.java:249)
at org.janusgraph.diskstorage.es.ElasticSearchIndex.mutate(ElasticSearchIndex.java:601)
at org.janusgraph.diskstorage.indexing.IndexTransaction$1.call(IndexTransaction.java:160)
at org.janusgraph.diskstorage.indexing.IndexTransaction$1.call(IndexTransaction.java:157)
at org.janusgraph.diskstorage.util.BackendOperation.executeDirect(BackendOperation.java:69)
at org.janusgraph.diskstorage.util.BackendOperation.execute(BackendOperation.java:55)
at org.janusgraph.diskstorage.indexing.IndexTransaction.flushInternal(IndexTransaction.java:157)
at org.janusgraph.diskstorage.indexing.IndexTransaction.commit(IndexTransaction.java:138)
at org.janusgraph.diskstorage.BackendTransaction.commitIndexes(BackendTransaction.java:141)
at org.janusgraph.graphdb.database.StandardJanusGraph.commit(StandardJanusGraph.java:751)
at org.janusgraph.graphdb.transaction.StandardJanusGraphTx.commit(StandardJanusGraphTx.java:1374)
at DNSSubmit.submit(DNSSubmit.java:104)
at LoadData$4.run(LoadData.java:160)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)


Trying to improve query performance with vertex-centric indexes

Clemens Viehl <c.v...@...>
 

Hello,

I'm trying to understand and use vertex-centric indexing in order to speed up traversals (like shortest path), but whatever I tried, I couldn't get them to be used in my queries.
Setup is JanusGraph 0.2.1 + Elasticsearch as provided with Janus + Cassandra 3.11.2 running with java.version 1.8.0_162.

In the following example I'm creating just a very small graph:

graph = JanusGraphFactory.open('conf/janusgraph-cql-es.properties')
mgmt = graph.openManagement()
// Vertex related initialisations
nodeIdKey = mgmt.makePropertyKey('nodeId').dataType(String.class).make()
idxNodeId = mgmt.buildIndex('idx_nodeId', Vertex.class).addKey(nodeIdKey).unique().buildCompositeIndex()
mgmt.setConsistency(idxNodeId, ConsistencyModifier.LOCK)
idxNodeId.getIndexStatus(nodeIdKey)
nodePropertyKey = mgmt.makePropertyKey('nodeProperty').dataType(String.class).make()
idxNodeProperty = mgmt.buildIndex('idx_nodeProperty', Vertex.class).addKey(nodePropertyKey).buildCompositeIndex()
idxNodeProperty.getIndexStatus(nodePropertyKey)
// Edge related initialisations
relationLabel = mgmt.makeEdgeLabel('relation').make()
edgePropertyKey = mgmt.makePropertyKey('edgeProperty').dataType(String.class).make()
idxVertexCentric = mgmt.buildEdgeIndex(relationLabel, 'idx_VertexCentric', Direction.BOTH, edgePropertyKey)
idxVertexCentric.getIndexStatus()
mgmt.commit()

// Creating some nodes
g = graph.traversal()
graph.addVertex('node').property('nodeId','4711')
g.V().has('nodeId','4711').property('nodeProperty','aaaa')
graph.addVertex('node').property('nodeId','4712')
g.V().has('nodeId','4712').property('nodeProperty','bbbb')
graph.addVertex('node').property('nodeId','4713')
g.V().has('nodeId','4713').property('nodeProperty','cccc')

from = g.V().has('nodeId','4711').next()
to = g.V().has('nodeId','4712').next()

// Creating an edge
from.addEdge('relation', to).property('nodeProperty', 'dddd')

// Check if indexes are used - in the profile output I can see that the composite index is used
g.V().has('node', 'nodeProperty','bbbb').profile()

// Expected usage of vertex centric index - but it isn't used here; my question is: why?
g.E().has('node', 'edgeProperty','dddd').profile()
g.E().has('node', 'edgeProperty', textContains('dddd')).profile()

Something I noticed during my testing that might be related: calling awaitRelationIndexStatus takes a long time (1m), and although the actualStatus looks good (ENABLED), the succeeded=false looks alarming to me.

gremlin> mgmt.awaitRelationIndexStatus(graph, 'idx_VertexCentric', 'relation').call()
==>RelationIndexStatusReport[succeeded=false, indexName='idx_VertexCentric', relationTypeName='relation', actualStatus=ENABLED, targetStatus=[REGISTERED], elapsed=PT1M0.129S]


Is this the reason why the vertex centric index isn't used? If so: what's most likely the cause?
For these tests I'm starting Cassandra, ElasticSearch and Gremlin shell via their windows scripts as they are (out of the box) without configuration changes.

Please note that I'm still quite new to the graph DB world, so I might miss something pretty basic...
Thanks in advance for any help!

Best regards,

Clemens


Re: a problem about elasticsearch

jcbms <jcbm...@...>
 

I want to know whether this problem will lose data, and how to prevent it.

On Wednesday, July 25, 2018 at 2:30:27 PM UTC+8, jcbms wrote:

janusgraph : 0.2.0
elasticsearch: 6.1.1

Can you tell me why this happens?
06:21:01,169 ERROR ElasticSearchIndex:604 - Failed to execute bulk Elasticsearch mutation
java.lang.RuntimeException: error while performing request
at org.elasticsearch.client.RestClient$SyncResponseListener.get(RestClient.java:681)
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:219)
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:191)
at org.janusgraph.diskstorage.es.rest.RestElasticSearchClient.performRequest(RestElasticSearchClient.java:320)
at org.janusgraph.diskstorage.es.rest.RestElasticSearchClient.bulkRequest(RestElasticSearchClient.java:249)
at org.janusgraph.diskstorage.es.ElasticSearchIndex.mutate(ElasticSearchIndex.java:601)
at org.janusgraph.diskstorage.indexing.IndexTransaction$1.call(IndexTransaction.java:160)
at org.janusgraph.diskstorage.indexing.IndexTransaction$1.call(IndexTransaction.java:157)
at org.janusgraph.diskstorage.util.BackendOperation.executeDirect(BackendOperation.java:69)
at org.janusgraph.diskstorage.util.BackendOperation.execute(BackendOperation.java:55)
at org.janusgraph.diskstorage.indexing.IndexTransaction.flushInternal(IndexTransaction.java:157)
at org.janusgraph.diskstorage.indexing.IndexTransaction.commit(IndexTransaction.java:138)
at org.janusgraph.diskstorage.BackendTransaction.commitIndexes(BackendTransaction.java:141)
at org.janusgraph.graphdb.database.StandardJanusGraph.commit(StandardJanusGraph.java:751)
at org.janusgraph.graphdb.transaction.StandardJanusGraphTx.commit(StandardJanusGraphTx.java:1374)
at DNSSubmit.submit(DNSSubmit.java:104)
at LoadData$4.run(LoadData.java:160)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)


Re: a problem about elasticsearch

jcbms <jcbm...@...>
 

The full stack trace is:


06:31:12,386 ERROR ElasticSearchIndex:604 - Failed to execute bulk Elasticsearch mutation
java.lang.RuntimeException: error while performing request
at org.elasticsearch.client.RestClient$SyncResponseListener.get(RestClient.java:681)
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:219)
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:191)
at org.janusgraph.diskstorage.es.rest.RestElasticSearchClient.performRequest(RestElasticSearchClient.java:320)
at org.janusgraph.diskstorage.es.rest.RestElasticSearchClient.bulkRequest(RestElasticSearchClient.java:249)
at org.janusgraph.diskstorage.es.ElasticSearchIndex.mutate(ElasticSearchIndex.java:601)
at org.janusgraph.diskstorage.indexing.IndexTransaction$1.call(IndexTransaction.java:160)
at org.janusgraph.diskstorage.indexing.IndexTransaction$1.call(IndexTransaction.java:157)
at org.janusgraph.diskstorage.util.BackendOperation.executeDirect(BackendOperation.java:69)
at org.janusgraph.diskstorage.util.BackendOperation.execute(BackendOperation.java:55)
at org.janusgraph.diskstorage.indexing.IndexTransaction.flushInternal(IndexTransaction.java:157)
at org.janusgraph.diskstorage.indexing.IndexTransaction.commit(IndexTransaction.java:138)
at org.janusgraph.diskstorage.BackendTransaction.commitIndexes(BackendTransaction.java:141)
at org.janusgraph.graphdb.database.StandardJanusGraph.commit(StandardJanusGraph.java:751)
at org.janusgraph.graphdb.transaction.StandardJanusGraphTx.commit(StandardJanusGraphTx.java:1374)
at DNSSubmit.submit(DNSSubmit.java:104)
at LoadData$4.run(LoadData.java:160)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.util.concurrent.TimeoutException
at org.apache.http.nio.pool.AbstractNIOConnPool.processPendingRequest(AbstractNIOConnPool.java:364)
at org.apache.http.nio.pool.AbstractNIOConnPool.processNextPendingRequest(AbstractNIOConnPool.java:344)
at org.apache.http.nio.pool.AbstractNIOConnPool.release(AbstractNIOConnPool.java:318)
at org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager.releaseConnection(PoolingNHttpClientConnectionManager.java:303)
at org.apache.http.impl.nio.client.AbstractClientExchangeHandler.releaseConnection(AbstractClientExchangeHandler.java:239)
at org.apache.http.impl.nio.client.MainClientExec.responseCompleted(MainClientExec.java:387)
at org.apache.http.impl.nio.client.DefaultClientExchangeHandlerImpl.responseCompleted(DefaultClientExchangeHandlerImpl.java:168)
at org.apache.http.nio.protocol.HttpAsyncRequestExecutor.processResponse(HttpAsyncRequestExecutor.java:436)
at org.apache.http.nio.protocol.HttpAsyncRequestExecutor.inputReady(HttpAsyncRequestExecutor.java:326)
at org.apache.http.impl.nio.DefaultNHttpClientConnection.consumeInput(DefaultNHttpClientConnection.java:265)
at org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:81)
at org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:39)
at org.apache.http.impl.nio.reactor.AbstractIODispatch.inputReady(AbstractIODispatch.java:114)
at org.apache.http.impl.nio.reactor.BaseIOReactor.readable(BaseIOReactor.java:162)
at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvent(AbstractIOReactor.java:337)
at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvents(AbstractIOReactor.java:315)
at org.apache.http.impl.nio.reactor.AbstractIOReactor.execute(AbstractIOReactor.java:276)
at org.apache.http.impl.nio.reactor.BaseIOReactor.execute(BaseIOReactor.java:104)
at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor$Worker.run(AbstractMultiworkerIOReactor.java:588)
... 1 more
06:31:12,389 ERROR StandardJanusGraph:755 - Error while commiting index mutations for transaction [2382] on index: search
org.janusgraph.core.JanusGraphException: Could not execute operation due to backend exception
at org.janusgraph.diskstorage.util.BackendOperation.execute(BackendOperation.java:57)
at org.janusgraph.diskstorage.indexing.IndexTransaction.flushInternal(IndexTransaction.java:157)
at org.janusgraph.diskstorage.indexing.IndexTransaction.commit(IndexTransaction.java:138)
at org.janusgraph.diskstorage.BackendTransaction.commitIndexes(BackendTransaction.java:141)
at org.janusgraph.graphdb.database.StandardJanusGraph.commit(StandardJanusGraph.java:751)
at org.janusgraph.graphdb.transaction.StandardJanusGraphTx.commit(StandardJanusGraphTx.java:1374)
at DNSSubmit.submit(DNSSubmit.java:104)
at LoadData$4.run(LoadData.java:160)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.janusgraph.diskstorage.PermanentBackendException: Unknown exception while executing index operation
at org.janusgraph.diskstorage.es.ElasticSearchIndex.convert(ElasticSearchIndex.java:301)
at org.janusgraph.diskstorage.es.ElasticSearchIndex.mutate(ElasticSearchIndex.java:605)
at org.janusgraph.diskstorage.indexing.IndexTransaction$1.call(IndexTransaction.java:160)
at org.janusgraph.diskstorage.indexing.IndexTransaction$1.call(IndexTransaction.java:157)
at org.janusgraph.diskstorage.util.BackendOperation.executeDirect(BackendOperation.java:69)
at org.janusgraph.diskstorage.util.BackendOperation.execute(BackendOperation.java:55)
... 10 more
Caused by: java.lang.RuntimeException: error while performing request
at org.elasticsearch.client.RestClient$SyncResponseListener.get(RestClient.java:681)
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:219)
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:191)
at org.janusgraph.diskstorage.es.rest.RestElasticSearchClient.performRequest(RestElasticSearchClient.java:320)
at org.janusgraph.diskstorage.es.rest.RestElasticSearchClient.bulkRequest(RestElasticSearchClient.java:249)
at org.janusgraph.diskstorage.es.ElasticSearchIndex.mutate(ElasticSearchIndex.java:601)
... 14 more
Caused by: java.util.concurrent.TimeoutException
at org.apache.http.nio.pool.AbstractNIOConnPool.processPendingRequest(AbstractNIOConnPool.java:364)
at org.apache.http.nio.pool.AbstractNIOConnPool.processNextPendingRequest(AbstractNIOConnPool.java:344)
at org.apache.http.nio.pool.AbstractNIOConnPool.release(AbstractNIOConnPool.java:318)
at org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager.releaseConnection(PoolingNHttpClientConnectionManager.java:303)
at org.apache.http.impl.nio.client.AbstractClientExchangeHandler.releaseConnection(AbstractClientExchangeHandler.java:239)
at org.apache.http.impl.nio.client.MainClientExec.responseCompleted(MainClientExec.java:387)
at org.apache.http.impl.nio.client.DefaultClientExchangeHandlerImpl.responseCompleted(DefaultClientExchangeHandlerImpl.java:168)
at org.apache.http.nio.protocol.HttpAsyncRequestExecutor.processResponse(HttpAsyncRequestExecutor.java:436)
at org.apache.http.nio.protocol.HttpAsyncRequestExecutor.inputReady(HttpAsyncRequestExecutor.java:326)
at org.apache.http.impl.nio.DefaultNHttpClientConnection.consumeInput(DefaultNHttpClientConnection.java:265)
at org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:81)
at org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:39)
at org.apache.http.impl.nio.reactor.AbstractIODispatch.inputReady(AbstractIODispatch.java:114)
at org.apache.http.impl.nio.reactor.BaseIOReactor.readable(BaseIOReactor.java:162)
at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvent(AbstractIOReactor.java:337)
at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvents(AbstractIOReactor.java:315)
at org.apache.http.impl.nio.reactor.AbstractIOReactor.execute(AbstractIOReactor.java:276)
at org.apache.http.impl.nio.reactor.BaseIOReactor.execute(BaseIOReactor.java:104)
at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor$Worker.run(AbstractMultiworkerIOReactor.java:588)


On Wednesday, July 25, 2018 at 2:30:27 PM UTC+8, jcbms wrote:

janusgraph : 0.2.0
elasticsearch: 6.1.1

Can you tell me why this happens?
06:21:01,169 ERROR ElasticSearchIndex:604 - Failed to execute bulk Elasticsearch mutation
java.lang.RuntimeException: error while performing request
at org.elasticsearch.client.RestClient$SyncResponseListener.get(RestClient.java:681)
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:219)
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:191)
at org.janusgraph.diskstorage.es.rest.RestElasticSearchClient.performRequest(RestElasticSearchClient.java:320)
at org.janusgraph.diskstorage.es.rest.RestElasticSearchClient.bulkRequest(RestElasticSearchClient.java:249)
at org.janusgraph.diskstorage.es.ElasticSearchIndex.mutate(ElasticSearchIndex.java:601)
at org.janusgraph.diskstorage.indexing.IndexTransaction$1.call(IndexTransaction.java:160)
at org.janusgraph.diskstorage.indexing.IndexTransaction$1.call(IndexTransaction.java:157)
at org.janusgraph.diskstorage.util.BackendOperation.executeDirect(BackendOperation.java:69)
at org.janusgraph.diskstorage.util.BackendOperation.execute(BackendOperation.java:55)
at org.janusgraph.diskstorage.indexing.IndexTransaction.flushInternal(IndexTransaction.java:157)
at org.janusgraph.diskstorage.indexing.IndexTransaction.commit(IndexTransaction.java:138)
at org.janusgraph.diskstorage.BackendTransaction.commitIndexes(BackendTransaction.java:141)
at org.janusgraph.graphdb.database.StandardJanusGraph.commit(StandardJanusGraph.java:751)
at org.janusgraph.graphdb.transaction.StandardJanusGraphTx.commit(StandardJanusGraphTx.java:1374)
at DNSSubmit.submit(DNSSubmit.java:104)
at LoadData$4.run(LoadData.java:160)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)


a problem about elasticsearch

jcbms <jcbm...@...>
 

janusgraph : 0.2.0
elasticsearch: 6.1.1

Can you tell me why this happens?
06:21:01,169 ERROR ElasticSearchIndex:604 - Failed to execute bulk Elasticsearch mutation
java.lang.RuntimeException: error while performing request
at org.elasticsearch.client.RestClient$SyncResponseListener.get(RestClient.java:681)
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:219)
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:191)
at org.janusgraph.diskstorage.es.rest.RestElasticSearchClient.performRequest(RestElasticSearchClient.java:320)
at org.janusgraph.diskstorage.es.rest.RestElasticSearchClient.bulkRequest(RestElasticSearchClient.java:249)
at org.janusgraph.diskstorage.es.ElasticSearchIndex.mutate(ElasticSearchIndex.java:601)
at org.janusgraph.diskstorage.indexing.IndexTransaction$1.call(IndexTransaction.java:160)
at org.janusgraph.diskstorage.indexing.IndexTransaction$1.call(IndexTransaction.java:157)
at org.janusgraph.diskstorage.util.BackendOperation.executeDirect(BackendOperation.java:69)
at org.janusgraph.diskstorage.util.BackendOperation.execute(BackendOperation.java:55)
at org.janusgraph.diskstorage.indexing.IndexTransaction.flushInternal(IndexTransaction.java:157)
at org.janusgraph.diskstorage.indexing.IndexTransaction.commit(IndexTransaction.java:138)
at org.janusgraph.diskstorage.BackendTransaction.commitIndexes(BackendTransaction.java:141)
at org.janusgraph.graphdb.database.StandardJanusGraph.commit(StandardJanusGraph.java:751)
at org.janusgraph.graphdb.transaction.StandardJanusGraphTx.commit(StandardJanusGraphTx.java:1374)
at DNSSubmit.submit(DNSSubmit.java:104)
at LoadData$4.run(LoadData.java:160)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)


Re: Heartbeat error when using BulkUploader

ste...@...
 

Also, the longest line in the adjacency list is 17,179,092 characters long. That equates to about 128,000 edges for that particular vertex.


On Tuesday, July 24, 2018 at 11:37:07 AM UTC-7, s...@... wrote:

It happened again, so I've included a screenshot of the error:



On Monday, July 23, 2018 at 11:13:14 PM UTC-7, Debasish Kanhar wrote:
Hi,

Can you share the full stack trace or, better, full logs? Maybe that will give us more clarity on why you are facing this error. It can be due to any number of reasons; I also remember getting a heartbeat timeout when my connection to the backend Cassandra was failing.

Also, sharing the configuration you are specifying may help.

On Tuesday, 24 July 2018 10:17:12 UTC+5:30, s...@... wrote:
I'm consistently having issues using the bulkUploadVertexProgram to load a very large (800 GB) graph into JanusGraph. I have it split up into 110 files (time-based edge separation) so the files are all about 8 GB each, and I am iterating over the files one by one to upload. I haven't been able to get past the first file. I have 64 cores with Spark executor and worker memory of 6 GB each (416 GB on the machine). I've tried a number of different configurations to no avail. What is really killing productivity is how long it takes for the system to fail (about 5 hours), which makes it hard to iterate on debugging. The latest error I'm having is:



[Stage 5:==
03:13:44 WARN  org.apache.spark.rpc.netty.NettyRpcEnv  - Ignored message: HeartbeatResponse(false)
03:14:31 WARN  org.apache.spark.rpc.netty.NettyRpcEndpointRef  - Error sending message [message = Heartbeat(driver,[Lscala.Tuple2;@3f7a6c8,BlockManagerId(driver, localhost, 40607))] in 1 attempts
org.apache.spark.rpc.RpcTimeoutException: Futures timed out after [10 seconds]. This timeout is controlled by spark.executor.heartbeatInterval
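The warning itself points at the relevant knob. As a hedged sketch only (values are illustrative, not recommendations), these are Spark properties that are sometimes raised when executors stall on long GC pauses during bulk loads:

```properties
# Spark configuration fragment -- illustrative values only
# spark.network.timeout must be >= spark.executor.heartbeatInterval
spark.executor.heartbeatInterval=60s
spark.network.timeout=600s
```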


This is an urgent issue for me and any help is greatly appreciated.


Re: Heartbeat error when using BulkUploader

ste...@...
 

It happened again, so I've included a screenshot of the error:



On Monday, July 23, 2018 at 11:13:14 PM UTC-7, Debasish Kanhar wrote:
Hi,

Can you share the full stack trace or, better, full logs? Maybe that will give us more clarity on why you are facing this error. It can be due to any number of reasons; I also remember getting a heartbeat timeout when my connection to the backend Cassandra was failing.

Also, sharing the configuration you are specifying may help.

On Tuesday, 24 July 2018 10:17:12 UTC+5:30, s...@... wrote:
I'm consistently having issues using the bulkUploadVertexProgram to load a very large (800 GB) graph into JanusGraph. I have it split up into 110 files (time-based edge separation) so the files are all about 8 GB each, and I am iterating over the files one by one to upload. I haven't been able to get past the first file. I have 64 cores with Spark executor and worker memory of 6 GB each (416 GB on the machine). I've tried a number of different configurations to no avail. What is really killing productivity is how long it takes for the system to fail (about 5 hours), which makes it hard to iterate on debugging. The latest error I'm having is:



[Stage 5:==
03:13:44 WARN  org.apache.spark.rpc.netty.NettyRpcEnv  - Ignored message: HeartbeatResponse(false)
03:14:31 WARN  org.apache.spark.rpc.netty.NettyRpcEndpointRef  - Error sending message [message = Heartbeat(driver,[Lscala.Tuple2;@3f7a6c8,BlockManagerId(driver, localhost, 40607))] in 1 attempts
org.apache.spark.rpc.RpcTimeoutException: Futures timed out after [10 seconds]. This timeout is controlled by spark.executor.heartbeatInterval


This is an urgent issue for me and any help is greatly appreciated.


Re: Difference in the notation of the edge and vertex properties

Daniel Kuppitz <me@...>
 

Vertex properties are, by default, multi-valued properties; that's why the toString() notation shows them as lists/arrays. Edge properties, on the other hand, only support single values.
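The difference can be seen with a small console sketch (using an in-memory TinkerGraph, so no JanusGraph backend is required; labels and values are illustrative):

```groovy
graph = TinkerGraph.open()
g = graph.traversal()

// one vertex property, one edge property
v = g.addV('Student').property('status', 'Pass').next()
g.V(v).as('a').addV('College').addE('studentLinkCollege').from('a').property('city', 'delhi').iterate()

g.V(v).valueMap()   // vertex values come back wrapped in lists, e.g. [status:[Pass]]
g.E().valueMap()    // edge values are plain key/value pairs, e.g. [city:delhi]
```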

Cheers,
Daniel


On Tue, Jul 24, 2018 at 12:28 AM, shrikant pachauri <sk.pa...@...> wrote:
Hi,

I was trying to fire some simple queries, like getting the value map of a single vertex and edge. Say I have defined some properties on the edges and vertices;
now suppose I fire two queries on the JanusGraph (backend Cassandra):

1. g.V(4568).valueMap()
    it would give result like 
   [readOnly:[false],status:[Pass],uuid:[11978cd0-3e07-4218-9fe6-936d3eb242ba],_label:[Student]] 

2. g.E().valueMap()
    it would return result like

   [prefixKey:studentLinkCollege,uuid:72731396-34d4-4336-91b4-4b3e662b6244, city: delhi, type: secondary]



 Now as you can see the properties in the vertices are in square braces while in edges is more of key value pair. Now I just want to know if this is a bug or some part of design?
 One more thing if its a part of design what might be possible reasons behind this behaviour.



Regards,
Shrikant 






--
You received this message because you are subscribed to the Google Groups "JanusGraph users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to janusgraph-users+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/janusgraph-users/202cbede-e6ee-435e-8aac-4618ab3814ea%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: Heartbeat error when using BulkUploader

ste...@...
 

I am using Cassandra. Is it possibly something on the Cassandra side that is failing? In a previous configuration I noticed a GC error that appeared to come from Cassandra.


On Monday, July 23, 2018 at 11:13:14 PM UTC-7, Debasish Kanhar wrote:
Hi,

Can you share the full stack trace or better logs? Maybe that will give us more clarity on why you are facing this error. This can be due to any number of reasons; I also remember getting heartbeat timeouts when my connection to the backend Cassandra was failing.

Also, sharing the configuration you are specifying might help.

On Tuesday, 24 July 2018 10:17:12 UTC+5:30, s...@... wrote:
I'm consistently having issues using the BulkLoaderVertexProgram to load a very large (800 GB) graph into JanusGraph. I have it split up into 110 files (time-based edge separation) so the files are all about 8 GB, and I am iterating over the files one by one to upload. I haven't been able to get past the first file. I have 64 cores with Spark executor and worker memory of 6 GB each (416 GB on the machine). I've tried a number of different configurations to no avail. What is really killing productivity is how long it takes for the system to fail (about 5 hours), which makes it hard to iterate on debugging. The latest error I'm having is:

[Stage 5:==03:13:44 WARN  org.apache.spark.rpc.netty.NettyRpcEnv  - Ignored message: HeartbeatResponse(false)
03:14:31 WARN  org.apache.spark.rpc.netty.NettyRpcEndpointRef  - Error sending message [message = Heartbeat(driver,[Lscala.Tuple2;@3f7a6c8,BlockManagerId(driver, localhost, 40607))] in 1 attempts
org.apache.spark.rpc.RpcTimeoutException: Futures timed out after [10 seconds]. This timeout is controlled by spark.executor.heartbeatInterval


This is an urgent issue for me and any help is greatly appreciated.


Re: Heartbeat error when using BulkUploader

ste...@...
 

Thanks for the response, Debasish. Here is my configuration for the read graph:

gremlin.graph=org.apache.tinkerpop.gremlin.hadoop.structure.HadoopGraph
gremlin.hadoop.graphInputFormat=org.apache.tinkerpop.gremlin.hadoop.structure.io.script.ScriptInputFormat
gremlin.hadoop.graphOutputFormat=org.apache.tinkerpop.gremlin.hadoop.structure.io.gryo.GryoOutputFormat
gremlin.hadoop.jarsInDistributedCache=true

gremlin.hadoop.inputLocation=data/sample-bulk-import-data
gremlin.hadoop.scriptInputFormat.script=scripts/bulk-import.groovy
storage.batch-loading=true
gremlin.hadoop.defaultGraphComputer=org.apache.tinkerpop.gremlin.spark.process.computer.SparkGraphComputer

gremlin.hadoop.outputLocation=/path/to/persist/location
gremlin.spark.graphStorageLevel=DISK_ONLY
gremlin.spark.persistStorageLevel=DISK_ONLY

#
# SparkGraphComputer Configuration
#
spark.master=local[*]
spark.serializer=org.apache.spark.serializer.KryoSerializer
spark.executor.memory=6g
spark.driver.memory=6g
spark.local.dir=/janusgraph/external/spark

Here is the configuration for the write graph:

gremlin.graph=org.janusgraph.core.JanusGraphFactory

storage.backend=cassandrathrift
storage.batch-loading=true
storage.cassandra.frame-size-mb=1000
schema.default=none

ids.block-size=25000

storage.hostname=<three IPs for cassandra ring>
storage.cassandra.keyspace=test_graph
storage.read-time=200000
storage.write-time=20000

#I've commented the next two out, but they were used to build the keyspace
#storage.cassandra.replication-strategy-options=asia-southeast1_asia_cassandra,2
#storage.cassandra.replication-strategy-class=org.apache.cassandra.locator.NetworkTopologyStrategy
storage.cassandra.write-consistency-level=ONE
storage.cassandra.read-consistency-level=ONE
#storage.cassandra.atomic-batch-mutate=false

index.edge.backend=lucene
index.edge.directory=/janusgraph/data/edgeindex

# Whether to enable JanusGraph's database-level cache, which is shared
# across all transactions. Enabling this option speeds up traversals by
# holding hot graph elements in memory, but also increases the likelihood
# of reading stale data.  Disabling it forces each transaction to
# independently fetch graph elements from storage before reading/writing
# them.
cache.db-cache = true
cache.db-cache-clean-wait = 20
cache.db-cache-time = 180000
cache.db-cache-size = 0.5

Here is the script to run the bulk upload:

import groovy.io.FileType

folder = new File('/janusgraph/external/import/adjacency-list')
done_folder = new File('/janusgraph/external/import/done')
folder.eachFileRecurse FileType.FILES, { file ->
    if (file.name.endsWith(".csv")) {
        println(file.absolutePath)

        graph = GraphFactory.open("conf/coral/read-graph.properties")
        graph.configuration().setInputLocation(file.absolutePath)
        graph.configuration().setProperty("gremlin.hadoop.scriptInputFormat.script", "/janusgraph/scripts/bulk-import.groovy")
        blvp = BulkLoaderVertexProgram.build().intermediateBatchSize(10000).writeGraph('conf/coral/write-graph.properties').create(graph)
        graph.compute(SparkGraphComputer).program(blvp).submit().get()
        graph.close()
        file.renameTo(new File(done_folder, file.getName()))
    }
}


On Monday, July 23, 2018 at 11:13:14 PM UTC-7, Debasish Kanhar wrote:
Hi,

Can you share the full stack trace or better logs? Maybe that will give us more clarity on why you are facing this error. This can be due to any number of reasons; I also remember getting heartbeat timeouts when my connection to the backend Cassandra was failing.

Also, sharing the configuration you are specifying might help.

On Tuesday, 24 July 2018 10:17:12 UTC+5:30, s...@... wrote:
I'm consistently having issues using the BulkLoaderVertexProgram to load a very large (800 GB) graph into JanusGraph. I have it split up into 110 files (time-based edge separation) so the files are all about 8 GB, and I am iterating over the files one by one to upload. I haven't been able to get past the first file. I have 64 cores with Spark executor and worker memory of 6 GB each (416 GB on the machine). I've tried a number of different configurations to no avail. What is really killing productivity is how long it takes for the system to fail (about 5 hours), which makes it hard to iterate on debugging. The latest error I'm having is:

[Stage 5:==03:13:44 WARN  org.apache.spark.rpc.netty.NettyRpcEnv  - Ignored message: HeartbeatResponse(false)
03:14:31 WARN  org.apache.spark.rpc.netty.NettyRpcEndpointRef  - Error sending message [message = Heartbeat(driver,[Lscala.Tuple2;@3f7a6c8,BlockManagerId(driver, localhost, 40607))] in 1 attempts
org.apache.spark.rpc.RpcTimeoutException: Futures timed out after [10 seconds]. This timeout is controlled by spark.executor.heartbeatInterval


This is an urgent issue for me and any help is greatly appreciated.


Re: Heartbeat error when using BulkUploader

Debasish Kanhar <d.k...@...>
 

Hi,

Can you share the full stack trace or better logs? Maybe that will give us more clarity on why you are facing this error. This can be due to any number of reasons; I also remember getting heartbeat timeouts when my connection to the backend Cassandra was failing.

Also, sharing the configuration you are specifying might help.

On Tuesday, 24 July 2018 10:17:12 UTC+5:30, s...@... wrote:
I'm consistently having issues using the BulkLoaderVertexProgram to load a very large (800 GB) graph into JanusGraph. I have it split up into 110 files (time-based edge separation) so the files are all about 8 GB, and I am iterating over the files one by one to upload. I haven't been able to get past the first file. I have 64 cores with Spark executor and worker memory of 6 GB each (416 GB on the machine). I've tried a number of different configurations to no avail. What is really killing productivity is how long it takes for the system to fail (about 5 hours), which makes it hard to iterate on debugging. The latest error I'm having is:

[Stage 5:==03:13:44 WARN  org.apache.spark.rpc.netty.NettyRpcEnv  - Ignored message: HeartbeatResponse(false)
03:14:31 WARN  org.apache.spark.rpc.netty.NettyRpcEndpointRef  - Error sending message [message = Heartbeat(driver,[Lscala.Tuple2;@3f7a6c8,BlockManagerId(driver, localhost, 40607))] in 1 attempts
org.apache.spark.rpc.RpcTimeoutException: Futures timed out after [10 seconds]. This timeout is controlled by spark.executor.heartbeatInterval


This is an urgent issue for me and any help is greatly appreciated.


Heartbeat error when using BulkUploader

ste...@...
 

I'm consistently having issues using the BulkLoaderVertexProgram to load a very large (800 GB) graph into JanusGraph. I have it split up into 110 files (time-based edge separation) so the files are all about 8 GB, and I am iterating over the files one by one to upload. I haven't been able to get past the first file. I have 64 cores with Spark executor and worker memory of 6 GB each (416 GB on the machine). I've tried a number of different configurations to no avail. What is really killing productivity is how long it takes for the system to fail (about 5 hours), which makes it hard to iterate on debugging. The latest error I'm having is:

[Stage 5:==03:13:44 WARN  org.apache.spark.rpc.netty.NettyRpcEnv  - Ignored message: HeartbeatResponse(false)
03:14:31 WARN  org.apache.spark.rpc.netty.NettyRpcEndpointRef  - Error sending message [message = Heartbeat(driver,[Lscala.Tuple2;@3f7a6c8,BlockManagerId(driver, localhost, 40607))] in 1 attempts
org.apache.spark.rpc.RpcTimeoutException: Futures timed out after [10 seconds]. This timeout is controlled by spark.executor.heartbeatInterval


This is an urgent issue for me and any help is greatly appreciated.
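The exception above names the controlling setting, spark.executor.heartbeatInterval. One common mitigation is to raise it, along with the overall network timeout, in the Spark section of the read-graph properties file; spark.network.timeout must stay larger than the heartbeat interval. The values below are illustrative only, not tested against this setup:

```properties
# Illustrative values only; tune for your cluster.
# The heartbeat interval must remain smaller than the network timeout.
spark.executor.heartbeatInterval=60s
spark.network.timeout=600s
```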


Re: JanusGraph with Kerberized ElasticSearch indexing backend?

Hoda Moradi <hoda.mo...@...>
 

Thanks for your update. Are the new changes going to take care of authentication in Solr as well?


On Friday, July 20, 2018 at 8:13:14 PM UTC-5, jackson.christopher.lee wrote:
Hi Hoda,

We are in the process of finalizing changes to make it work in such an environment (Kerberized Solr/Hbase). We should have a Pull Request with the necessary changes in by the end of the month. So hopefully it will make it into the next release.

Regards,
Christopher Jackson

On Jul 20, 2018, at 12:29 PM, Hoda Moradi <hod...@...> wrote:

Hi Christopher,

I have the same issue. Did you make it work with secure Solr? I am using HBase as my backend and Solr as my index backend. Both are Kerberized.

On Friday, June 29, 2018 at 10:45:53 AM UTC-5, jackson.christopher.lee wrote:
Hi Folks,

Curious if anyone is using JanusGraph 0.2.0 with Elasticsearch as the indexing backend in a Kerberized environment? If so, did it work out of the box, or did you have to work around issues?

We are currently using Kerberized Solr and it doesn't seem to be supported (see https://github.com/JanusGraph/janusgraph/issues/1138). We are thinking of trying ES to see whether it has the same issues, but we don't want to put effort there if it does.

I would greatly appreciate any feedback anyone can give. 

Regards,
Christopher Jackson



Re: question about vertex-centric index

jcbms <jcbm...@...>
 

Thanks a lot, you are a master of JanusGraph!

On Saturday, 21 July 2018 at 23:01:33 UTC+8, Jason Plurad wrote:

Vertex-centric indexes (VCI) are stored in the storage backend (Cassandra, HBase). An external indexing backend is not required.

The key aspect of the VCI is the ordering of the edges based on an edge property. When a query comes in that filters on that edge property, it is faster to select the matching edges when they are sorted. If they are not sorted, JanusGraph would need to iterate through all the incident edges to determine whether the matching condition is met.

> JanusGraph automatically builds vertex-centric indexes per edge label and property key.

I think you need to take that one part at a time.

* For incident edges on a vertex, they are sorted by edge label by default. In the Graph of the Gods, the "brother" edges would come before the "lives" edges. Now let's say, for example, a vertex had 1,000 "brother" edges and 2,000 "lives" edges. All the "brother" edges would be found together, but if you wanted to more specifically find brothers by age, it would be useful to have a VCI on the "age" edge property.
* For properties on a vertex, they are sorted by property name. This comes more into play if you have a list or set cardinality.
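The payoff of keeping incident edges sorted by the indexed property can be sketched abstractly. This is a hypothetical Python model, not JanusGraph internals: it contrasts a linear scan over all incident edges with a binary-search jump into edges pre-sorted by a "time" key, answering the same range filter as has('time', inside(lo, hi)):

```python
import bisect

# Hypothetical model: incident edges stored as (time, edge_id) tuples.
edges = sorted((t, f"e{t}") for t in range(0, 10000, 5))
times = [t for t, _ in edges]  # the sort-key column

def range_scan_linear(lo, hi):
    """Without an index: touch every incident edge."""
    return [e for t, e in edges if lo < t < hi]

def range_scan_indexed(lo, hi):
    """With a vertex-centric index: jump straight to the matching slice."""
    left = bisect.bisect_right(times, lo)   # first time strictly > lo
    right = bisect.bisect_left(times, hi)   # first time >= hi
    return [e for _, e in edges[left:right]]

# Both answer the same question, e.g. has('time', inside(10, 20)),
# but the indexed version inspects only the matching slice.
assert range_scan_linear(10, 20) == range_scan_indexed(10, 20)
```

The linear scan is O(n) in the number of incident edges; the sorted lookup is O(log n) to locate the slice plus the size of the result, which is why a supernode with thousands of "battled" edges still answers the range query quickly.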


On Thursday, July 19, 2018 at 4:32:14 AM UTC-4, jcbms wrote:
JanusGraph automatically builds vertex-centric indexes per edge label and property key. That means, even with thousands of incident battled edges, queries like g.V(h).out('mother') or g.V(h).values('age') are efficiently answered by the local index
What does this mean? If I add a property to an edge, will JanusGraph automatically build a local index? Do I not need to define the schema?

On Thursday, 19 July 2018 at 16:19:50 UTC+8, jcbms wrote:
long time no see 

9.2. Vertex-centric Indexes 

I have some questions about this vertex-centric index. Where is this index stored? In HBase or Elasticsearch? How does JanusGraph build this index for good performance? For example, with
g.V(h).outE('battled').has('time', inside(10, 20)).inV()
does JanusGraph store the vertex-centric index in HBase, with the out edges ordered by time? Or does JanusGraph use Elasticsearch for the index? And what is the difference from an edge mixed index, like
 mgmt.buildIndex("UniqueEdgeKey", Edge.class).addKey(edgeKey).buildMixedIndex()




Re: user define id question

jcbms <jcbm...@...>
 

Can you tell me what would cause the conflicts?

On Saturday, 21 July 2018 at 23:13:33 UTC+8, Jason Plurad wrote:

I believe you would potentially run into conflicts. I don't think that it is intended to toggle on/off that configuration property.

You might be better off creating a property named "id" to store your custom identifier and creating a composite index against it.


On Friday, July 20, 2018 at 4:53:32 AM UTC-4, jcbms wrote:
If I set this property
graph.set-vertex-id=true
and import the data successfully, but then want to let JanusGraph assign IDs automatically, what should I do? Will the IDs JanusGraph assigns conflict with my old IDs?


Re: Is there a way to override script and response timeout for some longer running gremlin queries

pal...@...
 

Hi Jason,
Thanks for your reply. I am aware of scriptEvaluationTimeout, but I am looking for a way to override the timeout on the fly (via Gremlin Server commands) for some longer-running queries. Are there any such commands? For example, there are call methods like awaitGraphIndexStatus, but I did not find anything helpful there.

Thanks,
Mitesh


On Saturday, July 21, 2018 at 8:11:33 PM UTC+5:30, Jason Plurad wrote:
To increase the timeout on the Gremlin Server, update the scriptEvaluationTimeout value (defaults to 30 seconds) in the gremlin-server.yaml

Composite indexes work best when they are highly selective with a low number of matches.
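For reference, the setting Jason mentions is a server-wide value in gremlin-server.yaml; a minimal fragment (the 120000 value is illustrative, the default is 30000 ms, and raising it affects every request on the server) might look like:

```yaml
# gremlin-server.yaml (fragment) -- value is in milliseconds.
# Raising this from the 30000 ms default allows longer-running scripts,
# but it applies globally, not per query.
scriptEvaluationTimeout: 120000
```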

On Thursday, July 19, 2018 at 11:17:12 AM UTC-4, Mitesh Patel wrote:
Hi,
I am trying to run some Gremlin queries which take longer than the 30-second timeout set in my gremlin-server conf file.
Is there any mechanism to override the timeout for certain queries?

For example, a simple query:
g.V().has('property_idx','indexed').count()

Although property_idx is indexed, it still runs for more than 30 seconds since the data is huge. Is there a way to override the timeout for such queries?

- Regards,
Mitesh
