
Re: com.datastax.driver.core.exceptions.BusyPoolException with CQL backend

Scott P <scott_p...@...>
 

A single thread is doing all of the CRUD operations, and it commits after roughly every 10,000 element touches. Could the number of deltas in the transaction be too large?
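
To make the commit pattern concrete, here is a minimal Gremlin Console sketch of that kind of loop (everything except the single writer thread and the periodic commit is an assumption, including the label and property names); lowering the commit interval this way would be one test of whether the number of deltas per transaction is the problem:

// Hypothetical single-threaded write loop; label, property, and value names are made up.
// Assumes an open JanusGraph instance bound to 'graph', as in the Gremlin Console.
g = graph.traversal()
count = 0
batchSize = 1000                      // smaller than the current ~10,000 touches per commit
(1..5000).each { i ->
    g.addV('label1').property('propA', "value-" + i).iterate()
    if (++count % batchSize == 0) {
        graph.tx().commit()           // flush the accumulated deltas to the storage backend
    }
}
graph.tx().commit()                   // commit the final partial batch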


On Wednesday, August 30, 2017 at 12:52:52 PM UTC-4, Jason Plurad wrote:
Looks like you can set a batch size, and the default is 20. Let us know if it helps.

storage.cql.batch-statement-size=20

Related reading on the Cassandra Java Driver FAQ.


On Wednesday, August 30, 2017 at 12:29:36 PM UTC-4, scott_patterson wrote:
I'm trying out the new CQL backend (storage.backend=cql) with Cassandra from the master branch (0.2.0), and I'm consistently hitting this error after a few hours of CRUD operations. The same workload ran with no problems on 0.1.0 using the storage.backend=cassandra driver.

Is this a known issue that will be addressed in the final 0.2.0 release? Any recommendations on configuration options to adjust the pool or connection sizes?



Re: com.datastax.driver.core.exceptions.BusyPoolException with CQL backend

Scott P <scott_p...@...>
 

I think I am iterating all of the results, but I'm not certain. I looked through my traversal code and categorized it into these three patterns.

a) Most of my traversals end with toList() or iterate(), such as these.
JanusGraph.traversal().V().has("propA", "value1").has("propB", graphUrl).hasLabel("label1").toList()
JanusGraph.traversal().V().has("propB", "value2").hasLabel("label2").property("propC", "value3").drop().iterate()

b) There are a couple of snippets where I already have a Vertex object and I do g.V(v).next(). I was expecting that a lookup by a Vertex object will only ever have a single result, so it should be okay to just call next().
Vertex v = input;
JanusGraph.traversal().V(v).next()

c) And I found a couple of suspect snippets where I end the traversal with hasNext() to check whether something exists.
JanusGraph.traversal().V().has("propD", "value4").hasLabel("label3").hasNext()

Could any of these be leaking a connection?


On Wednesday, August 30, 2017 at 1:21:03 PM UTC-4, Robert Dale wrote:
Are you iterating your results all the way out?

Robert Dale

On Wed, Aug 30, 2017 at 12:52 PM, Jason Plurad <p...@...> wrote:
Looks like you can set a batch size, and the default is 20. Let us know if it helps.

storage.cql.batch-statement-size=20

Related reading on the Cassandra Java Driver FAQ.



On Wednesday, August 30, 2017 at 12:29:36 PM UTC-4, scott_patterson wrote:
I'm trying out the new CQL backend (storage.backend=cql) with Cassandra from the master branch (0.2.0), and I'm consistently hitting this error after a few hours of CRUD operations. The same workload ran with no problems on 0.1.0 using the storage.backend=cassandra driver.

Is this a known issue that will be addressed in the final 0.2.0 release? Any recommendations on configuration options to adjust the pool or connection sizes?




Re: com.datastax.driver.core.exceptions.BusyPoolException with CQL backend

Robert Dale <rob...@...>
 

Are you iterating your results all the way out?

Robert Dale

On Wed, Aug 30, 2017 at 12:52 PM, Jason Plurad <plu...@...> wrote:
Looks like you can set a batch size, and the default is 20. Let us know if it helps.

storage.cql.batch-statement-size=20

Related reading on the Cassandra Java Driver FAQ.



On Wednesday, August 30, 2017 at 12:29:36 PM UTC-4, scott_patterson wrote:
I'm trying out the new CQL backend (storage.backend=cql) with Cassandra from the master branch (0.2.0), and I'm consistently hitting this error after a few hours of CRUD operations. The same workload ran with no problems on 0.1.0 using the storage.backend=cassandra driver.

Is this a known issue that will be addressed in the final 0.2.0 release? Any recommendations on configuration options to adjust the pool or connection sizes?




Re: com.datastax.driver.core.exceptions.BusyPoolException with CQL backend

Jason Plurad <plu...@...>
 

Looks like you can set a batch size, and the default is 20. Let us know if it helps.

storage.cql.batch-statement-size=20

Related reading on the Cassandra Java Driver FAQ.
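
For reference, a minimal sketch of setting this option when opening the graph from the Gremlin Console; the hostname value is a placeholder, and only storage.cql.batch-statement-size itself is taken from the suggestion above:

// Hypothetical graph construction; the hostname is a placeholder,
// and 20 is the default mentioned above, so adjust the value and re-test.
graph = JanusGraphFactory.build().
    set('storage.backend', 'cql').
    set('storage.hostname', '127.0.0.1').
    set('storage.cql.batch-statement-size', 20).
    open()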


On Wednesday, August 30, 2017 at 12:29:36 PM UTC-4, scott_patterson wrote:
I'm trying out the new CQL backend (storage.backend=cql) with Cassandra from the master branch (0.2.0), and I'm consistently hitting this error after a few hours of CRUD operations. The same workload ran with no problems on 0.1.0 using the storage.backend=cassandra driver.

Is this a known issue that will be addressed in the final 0.2.0 release? Any recommendations on configuration options to adjust the pool or connection sizes?



com.datastax.driver.core.exceptions.BusyPoolException with CQL backend

scott_p...@...
 

I'm trying out the new CQL backend (storage.backend=cql) with Cassandra from the master branch (0.2.0), and I'm consistently hitting this error after a few hours of CRUD operations. The same workload ran with no problems on 0.1.0 using the storage.backend=cassandra driver.

Is this a known issue that will be addressed in the final 0.2.0 release? Any recommendations on configuration options to adjust the pool or connection sizes?

org.janusgraph.core.JanusGraphException: Could not execute operation due to backend exception
at org.janusgraph.diskstorage.util.BackendOperation.execute(BackendOperation.java:57)
at org.janusgraph.diskstorage.keycolumnvalue.cache.CacheTransaction.persist(CacheTransaction.java:95)
at org.janusgraph.diskstorage.keycolumnvalue.cache.CacheTransaction.flushInternal(CacheTransaction.java:137)
at org.janusgraph.diskstorage.keycolumnvalue.cache.CacheTransaction.commit(CacheTransaction.java:200)
at org.janusgraph.diskstorage.BackendTransaction.commitStorage(BackendTransaction.java:133)
at org.janusgraph.graphdb.database.StandardJanusGraph.commit(StandardJanusGraph.java:729)
at org.janusgraph.graphdb.transaction.StandardJanusGraphTx.commit(StandardJanusGraphTx.java:1374)
at org.janusgraph.graphdb.tinkerpop.JanusGraphBlueprintsGraph$GraphTransaction.doCommit(JanusGraphBlueprintsGraph.java:272)
at org.apache.tinkerpop.gremlin.structure.util.AbstractTransaction.commit(AbstractTransaction.java:105)
... 13 more
Caused by: org.janusgraph.diskstorage.TemporaryBackendException: Could not successfully complete backend operation due to repeated temporary exceptions after PT1M40S
at org.janusgraph.diskstorage.util.BackendOperation.executeDirect(BackendOperation.java:101)
at org.janusgraph.diskstorage.util.BackendOperation.execute(BackendOperation.java:55)
... 21 more
Caused by: org.janusgraph.diskstorage.TemporaryBackendException: Temporary failure in storage backend
at org.janusgraph.diskstorage.cql.CQLKeyColumnValueStore.lambda$null$2(CQLKeyColumnValueStore.java:123)
at org.janusgraph.diskstorage.cql.CQLKeyColumnValueStore$$Lambda$184.00000000BC05D670.apply(Unknown Source)
at io.vavr.API$Match$Case0.apply(API.java:3174)
at io.vavr.API$Match.of(API.java:3137)
at org.janusgraph.diskstorage.cql.CQLKeyColumnValueStore.lambda$static$3(CQLKeyColumnValueStore.java:120)
at org.janusgraph.diskstorage.cql.CQLKeyColumnValueStore$$Lambda$65.00000000C02B77E0.apply(Unknown Source)
at org.janusgraph.diskstorage.cql.CQLStoreManager.mutateManyUnlogged(CQLStoreManager.java:415)
at org.janusgraph.diskstorage.cql.CQLStoreManager.mutateMany(CQLStoreManager.java:346)
at org.janusgraph.diskstorage.locking.consistentkey.ExpectedValueCheckingStoreManager.mutateMany(ExpectedValueCheckingStoreManager.java:79)
at org.janusgraph.diskstorage.keycolumnvalue.cache.CacheTransaction$1.call(CacheTransaction.java:98)
at org.janusgraph.diskstorage.keycolumnvalue.cache.CacheTransaction$1.call(CacheTransaction.java:95)
at org.janusgraph.diskstorage.util.BackendOperation.executeDirect(BackendOperation.java:69)
... 22 more
Caused by: java.util.concurrent.ExecutionException: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /x.x.x.x:9042 (com.datastax.driver.core.exceptions.BusyPoolException: [/x.x.x.x] Pool is busy (no available connection and the queue has reached its max size 256)))
at com.google.common.util.concurrent.AbstractFuture$Sync.getValue(AbstractFuture.java:299)
at com.google.common.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:286)
at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:116)
at io.vavr.concurrent.Future$$Lambda$71.00000000C00F2990.apply(Unknown Source)
at io.vavr.control.Try.of(Try.java:62)
at io.vavr.concurrent.FutureImpl.lambda$run$2(FutureImpl.java:199)
at io.vavr.concurrent.FutureImpl$$Lambda$72.00000000C00F3670.run(Unknown Source)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:522)
... 4 more
Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /x.x.x.x:9042 (com.datastax.driver.core.exceptions.BusyPoolException: [/x.x.x.x] Pool is busy (no available connection and the queue has reached its max size 256)))
at com.datastax.driver.core.RequestHandler.reportNoMoreHosts(RequestHandler.java:211)
at com.datastax.driver.core.RequestHandler.access$1000(RequestHandler.java:46)
at com.datastax.driver.core.RequestHandler$SpeculativeExecution.findNextHostAndQuery(RequestHandler.java:275)
at com.datastax.driver.core.RequestHandler$SpeculativeExecution$1.onFailure(RequestHandler.java:338)
at com.google.common.util.concurrent.Futures$6.run(Futures.java:1310)
at com.google.common.util.concurrent.MoreExecutors$DirectExecutor.execute(MoreExecutors.java:457)
at com.google.common.util.concurrent.Futures$ImmediateFuture.addListener(Futures.java:106)
at com.google.common.util.concurrent.Futures.addCallback(Futures.java:1322)
at com.google.common.util.concurrent.Futures.addCallback(Futures.java:1258)
at com.datastax.driver.core.RequestHandler$SpeculativeExecution.query(RequestHandler.java:297)
at com.datastax.driver.core.RequestHandler$SpeculativeExecution.findNextHostAndQuery(RequestHandler.java:272)
at com.datastax.driver.core.RequestHandler.startNewExecution(RequestHandler.java:115)
at com.datastax.driver.core.RequestHandler.sendRequest(RequestHandler.java:95)
at com.datastax.driver.core.SessionManager.executeAsync(SessionManager.java:132)
at org.janusgraph.diskstorage.cql.CQLStoreManager.lambda$null$24(CQLStoreManager.java:406)
at org.janusgraph.diskstorage.cql.CQLStoreManager$$Lambda$106.00000000C0938C00.apply(Unknown Source)
at io.vavr.collection.Iterator$35.getNext(Iterator.java:1632)
at io.vavr.collection.AbstractIterator.next(AbstractIterator.java:34)
at io.vavr.collection.Iterator$34.getNext(Iterator.java:1510)
at io.vavr.collection.AbstractIterator.next(AbstractIterator.java:34)
at io.vavr.collection.Iterator$34.getNext(Iterator.java:1510)
at io.vavr.collection.AbstractIterator.next(AbstractIterator.java:34)
at io.vavr.collection.Traversable.foldLeft(Traversable.java:471)
at io.vavr.concurrent.Future.sequence(Future.java:549)
at org.janusgraph.diskstorage.cql.CQLStoreManager.mutateManyUnlogged(CQLStoreManager.java:388)
at org.janusgraph.diskstorage.cql.CQLStoreManager.mutateMany(CQLStoreManager.java:346)
at org.janusgraph.diskstorage.locking.consistentkey.ExpectedValueCheckingStoreManager.mutateMany(ExpectedValueCheckingStoreManager.java:79)
at org.janusgraph.diskstorage.keycolumnvalue.cache.CacheTransaction$1.call(CacheTransaction.java:98)
at org.janusgraph.diskstorage.keycolumnvalue.cache.CacheTransaction$1.call(CacheTransaction.java:95)
at org.janusgraph.diskstorage.util.BackendOperation.executeDirect(BackendOperation.java:69)
at org.janusgraph.diskstorage.util.BackendOperation.execute(BackendOperation.java:55)
at org.janusgraph.diskstorage.keycolumnvalue.cache.CacheTransaction.persist(CacheTransaction.java:95)
at org.janusgraph.diskstorage.keycolumnvalue.cache.CacheTransaction.flushInternal(CacheTransaction.java:137)
at org.janusgraph.diskstorage.keycolumnvalue.cache.CacheTransaction.commit(CacheTransaction.java:200)
at org.janusgraph.diskstorage.BackendTransaction.commitStorage(BackendTransaction.java:133)
at org.janusgraph.graphdb.database.StandardJanusGraph.commit(StandardJanusGraph.java:729)
at org.janusgraph.graphdb.transaction.StandardJanusGraphTx.commit(StandardJanusGraphTx.java:1374)
at org.janusgraph.graphdb.tinkerpop.JanusGraphBlueprintsGraph$GraphTransaction.doCommit(JanusGraphBlueprintsGraph.java:272)
at org.apache.tinkerpop.gremlin.structure.util.AbstractTransaction.commit(AbstractTransaction.java:105)
... 13 more


Re: extras that provide business value

Misha Brukman <mbru...@...>
 

Re: visualization: you can use Gephi or Cytoscape (both open source). In addition, several commercial graph visualization vendors are working on integrations with JanusGraph that will be announced in the near future.

On Tue, Aug 29, 2017 at 12:52 AM, <an...@...> wrote:
Janus is such a compelling database to switch to. Right now we use Neo4j. There are a couple of features that Neo4j has that are very useful, and I was wondering if they are on the roadmap for Janus?

1. Bulk uploading via CSV files. The Neo4j import tool is powerful; it's like a language dedicated to bulk loading existing data. There are many cases where this is a must-have.
2. Data visualization via browser. Pretty data visualizations make everyone happy.




Re: extras that provide business value

Robert Dale <rob...@...>
 

Not speaking for the roadmap...

1. Usually it takes a trivial amount of Groovy to parse a CSV file and add the nodes (see the sketch after this list).
2. You can point this at a Gremlin Server: https://github.com/bricaud/graphexp
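
A minimal sketch of the kind of Groovy loader meant in point 1, runnable from the Gremlin Console; the properties file path, CSV file name, column layout, and label/property names are all assumptions for illustration:

// Hypothetical CSV loader; assumes a header-less people.csv with two columns (name,age).
graph = JanusGraphFactory.open('conf/janusgraph-cql.properties')
g = graph.traversal()
new File('people.csv').eachLine { line ->
    def parts = line.split(',')
    g.addV('person').property('name', parts[0]).property('age', parts[1].toInteger()).iterate()
}
graph.tx().commit()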

Robert Dale

On Tue, Aug 29, 2017 at 12:52 AM, <an...@...> wrote:
Janus is such a compelling database to switch to. Right now we use Neo4j. There are a couple of features that Neo4j has that are very useful, and I was wondering if they are on the roadmap for Janus?

1. Bulk uploading via CSV files. The Neo4j import tool is powerful; it's like a language dedicated to bulk loading existing data. There are many cases where this is a must-have.
2. Data visualization via browser. Pretty data visualizations make everyone happy.




Re: filtering a path

Daniel Kuppitz <me@...>
 

I don't know how to write Scala code, so you'll always have to convert my code snippets from Java to Scala ;).

Again, the snippet assumes that all vertices on a certain path have a num value greater than 50. If that's not what you want, then how do you want to treat edges that are connected to vertices that don't match the filter criteria?

Let me demonstrate the problem. Let's say you want to find all paths in the modern graph that end at v[1]:

gremlin> endVertex = g.V(1).next(); g.V().repeat(bothE().otherV().simplePath()).until(__.is(endVertex)).path()
==>[v[2],e[7][1-knows->2],v[1]]
==>[v[3],e[9][1-created->3],v[1]]
==>[v[3],e[11][4-created->3],v[4],e[8][1-knows->4],v[1]]
==>[v[4],e[8][1-knows->4],v[1]]
==>[v[4],e[11][4-created->3],v[3],e[9][1-created->3],v[1]]
==>[v[5],e[10][4-created->5],v[4],e[8][1-knows->4],v[1]]
==>[v[5],e[10][4-created->5],v[4],e[11][4-created->3],v[3],e[9][1-created->3],v[1]]
==>[v[6],e[12][6-created->3],v[3],e[9][1-created->3],v[1]]
==>[v[6],e[12][6-created->3],v[3],e[11][4-created->3],v[4],e[8][1-knows->4],v[1]]

Another way to do that would be:

gremlin> endVertex = g.V(1).next(); g.V().as("a").repeat(bothE().as("a").otherV().as("a").simplePath()).until(__.is(endVertex)).select(all, "a")
==>[v[2],e[7][1-knows->2],v[1]]
==>[v[3],e[9][1-created->3],v[1]]
==>[v[3],e[11][4-created->3],v[4],e[8][1-knows->4],v[1]]
==>[v[4],e[8][1-knows->4],v[1]]
==>[v[4],e[11][4-created->3],v[3],e[9][1-created->3],v[1]]
==>[v[5],e[10][4-created->5],v[4],e[8][1-knows->4],v[1]]
==>[v[5],e[10][4-created->5],v[4],e[11][4-created->3],v[3],e[9][1-created->3],v[1]]
==>[v[6],e[12][6-created->3],v[3],e[9][1-created->3],v[1]]
==>[v[6],e[12][6-created->3],v[3],e[11][4-created->3],v[4],e[8][1-knows->4],v[1]]

Using this approach, you can filter out certain vertices (e.g. you only want person vertices):

gremlin> endVertex = g.V(1).next(); g.V().choose(hasLabel("person"), __.as("a"), identity()).repeat(bothE().as("a").otherV().choose(hasLabel("person"), __.as("a"), identity()).simplePath()).until(__.is(endVertex)).select(all, "a")
==>[v[2],e[7][1-knows->2],v[1]]
==>[e[9][1-created->3],v[1]]
==>[e[11][4-created->3],v[4],e[8][1-knows->4],v[1]]
==>[v[4],e[8][1-knows->4],v[1]]
==>[v[4],e[11][4-created->3],e[9][1-created->3],v[1]]
==>[e[10][4-created->5],v[4],e[8][1-knows->4],v[1]]
==>[e[10][4-created->5],v[4],e[11][4-created->3],e[9][1-created->3],v[1]]
==>[v[6],e[12][6-created->3],e[9][1-created->3],v[1]]
==>[v[6],e[12][6-created->3],e[11][4-created->3],v[4],e[8][1-knows->4],v[1]]

As you can see, this keeps some edges that no longer make any sense.

Cheers,
Daniel

On Wed, Aug 30, 2017 at 7:14 AM, Yair Ogen <yair...@...> wrote:
That didn't compile. 

I changed it to use Key:

.repeat(_.outE().inV().has(Key[Long]("num"),gt(50)).simplePath())

Oddly enough, it filters out everything, although some vertices clearly do have a value higher than 50 in this property.




Re: Janus Graph benchmarking

Misha Brukman <mbru...@...>
 

There are several existing graph database benchmarking frameworks. If you do end up benchmarking JanusGraph with one of them or another tool, we'd love to see the results!

On Wed, Aug 30, 2017 at 1:25 AM, <nav...@...> wrote:
Hello, 

Can someone let me know if there is a benchmarking tool for graph databases (particularly for JanusGraph), like YCSB for NoSQL databases and cassandra-stress specifically for Cassandra?
Anyone using JanusGraph in production, or even in development with a data set big enough to benchmark, might have tried this, and I am wondering how they measure the performance of their graph engine.

Thanks.
Naveen



Re: filtering a path

Yair Ogen <yair...@...>
 

That didn't compile. 

I changed it to use Key:

.repeat(_.outE().inV().has(Key[Long]("num"),gt(50)).simplePath())

Oddly enough, it filters out everything, although some vertices clearly do have a value higher than 50 in this property.



Re: filtering a path

Daniel Kuppitz <me@...>
 

even those in between start and end

Does that mean you want to exclude the whole path, or only the matching vertices on the path? If the latter, then what about the edges? Taking out a single vertex leaves two invalid edges on the path.
If the former, then it's:

...repeat(outE().inV().has("num", gt(50)).simplePath())...

Cheers,
Daniel
 

On Wed, Aug 30, 2017 at 6:34 AM, <yair...@...> wrote:
What's the best way to filter a path based on a vertex property?

I am using the gremlin-scala lib.

This is the code:

val paths = startVertex.asScala().start()
      .repeat(_.outE().inV().simplePath())
      .until(_.is(endVertex.vertex))
      .path()
      .toList()


This works great. Now I want to add a filter that will filter out any vertices (even those in between start and end) in the path where property("num") > 50.

It seems that the filter API only applies to the end vertex?



filtering a path

yair...@...
 

What's the best way to filter a path based on a vertex property?

I am using the gremlin-scala lib.

This is the code:

val paths = startVertex.asScala().start()
      .repeat(_.outE().inV().simplePath())
      .until(_.is(endVertex.vertex))
      .path()
      .toList()


This works great. Now I want to add a filter that will filter out any vertices (even those in between start and end) in the path where property("num") > 50.

It seems that the filter API only applies to the end vertex?


Scala client

yair...@...
 

Does anyone know of a good scala client?

Should I use 'gremlin-scala_2.12'?


Janus Graph benchmarking

nav...@...
 

Hello, 

Can someone let me know if there is a benchmarking tool for graph databases (particularly for JanusGraph), like YCSB for NoSQL databases and cassandra-stress specifically for Cassandra?
Anyone using JanusGraph in production, or even in development with a data set big enough to benchmark, might have tried this, and I am wondering how they measure the performance of their graph engine.

Thanks.
Naveen


Re: New committer: David Clement

sju...@...
 

Welcome, David! It's great to have you on board!


On Tuesday, August 29, 2017 at 9:20:12 AM UTC-5, Jason Plurad wrote:
On behalf of the JanusGraph Technical Steering Committee (TSC), I'm pleased to welcome a new committer on the project!

David Clement has submitted several good pull requests which enhanced the functionality for the indexing backends, both ES and Solr. He has been thorough and quite responsive to the feedback offered in the reviews.


Re: Proper way to define metaproperties in schema

David Brown <dave...@...>
 

Thank you for looking into this for me. In general, Janus seems pretty spiffy. Nice work done here so far.


On Tuesday, August 29, 2017 at 4:57:44 PM UTC-4, Jason Plurad wrote:
Thanks for the reproduction scenario.
I opened an issue to track this: https://github.com/JanusGraph/janusgraph/issues/487

On Tuesday, August 29, 2017 at 12:50:54 PM UTC-4, David Brown wrote:
For the record, the output of the same script when run against TinkerGraph 3.2.6 is as expected:

properties: [{'string_prop': [vp[string_prop->dave]], 'integer_prop': [vp[integer_prop->1]], 'float_prop': [vp[float_prop->1.1]]}]
string prop: [vp[string_prop->dave]]
float_prop: [vp[float_prop->1.1]]
integer_prop: [vp[integer_prop->1]]


On Monday, August 28, 2017 at 1:44:18 PM UTC-4, David Brown wrote:
Hello JanusGraph users,

I have been experimenting with Janus, and using the automatic schema generation, metaproperties work as expected. However, when I set `schema.default=none` in the conf and define my own schema, metaproperties seem to quit working--metaproperty data is no longer returned in the Gremlin Server response. How should metaproperties be defined in the schema? I can't seem to find this information in the documentation. I can provide example schema definitions if necessary.

Thanks,

Dave


Re: Proper way to define metaproperties in schema

Jason Plurad <plu...@...>
 

Thanks for the reproduction scenario.
I opened an issue to track this: https://github.com/JanusGraph/janusgraph/issues/487


On Tuesday, August 29, 2017 at 12:50:54 PM UTC-4, David Brown wrote:
For the record, the output of the same script when run against TinkerGraph 3.2.6 is as expected:

properties: [{'string_prop': [vp[string_prop->dave]], 'integer_prop': [vp[integer_prop->1]], 'float_prop': [vp[float_prop->1.1]]}]
string prop: [vp[string_prop->dave]]
float_prop: [vp[float_prop->1.1]]
integer_prop: [vp[integer_prop->1]]


On Monday, August 28, 2017 at 1:44:18 PM UTC-4, David Brown wrote:
Hello JanusGraph users,

I have been experimenting with Janus, and using the automatic schema generation, metaproperties work as expected. However, when I set `schema.default=none` in the conf and define my own schema, metaproperties seem to quit working--metaproperty data is no longer returned in the Gremlin Server response. How should metaproperties be defined in the schema? I can't seem to find this information in the documentation. I can provide example schema definitions if necessary.

Thanks,

Dave


Re: New committer: David Clement

Jerry He <jerr...@...>
 

Congrats and welcome, David!


On Tue, Aug 29, 2017 at 7:20 AM Jason Plurad <plu...@...> wrote:
On behalf of the JanusGraph Technical Steering Committee (TSC), I'm pleased to welcome a new committer on the project!

David Clement has submitted several good pull requests which enhanced the functionality for the indexing backends, both ES and Solr. He has been thorough and quite responsive to the feedback offered in the reviews.



Re: Proper way to define metaproperties in schema

David Brown <dave...@...>
 

For the record, the output of the same script when run against TinkerGraph 3.2.6 is as expected:

properties: [{'string_prop': [vp[string_prop->dave]], 'integer_prop': [vp[integer_prop->1]], 'float_prop': [vp[float_prop->1.1]]}]
string prop: [vp[string_prop->dave]]
float_prop: [vp[float_prop->1.1]]
integer_prop: [vp[integer_prop->1]]


On Monday, August 28, 2017 at 1:44:18 PM UTC-4, David Brown wrote:
Hello JanusGraph users,

I have been experimenting with Janus, and using the automatic schema generation, metaproperties work as expected. However, when I set `schema.default=none` in the conf and define my own schema, metaproperties seem to quit working--metaproperty data is no longer returned in the Gremlin Server response. How should metaproperties be defined in the schema? I can't seem to find this information in the documentation. I can provide example schema definitions if necessary.

Thanks,

Dave


Re: Proper way to define metaproperties in schema

David Brown <dave...@...>
 

Ok, I built from source. It appears that this is happening specifically with properties defined as Floats. The following script in gremlin_python (3.2.6) illustrates this problem:
from gremlin_python.driver.client import Client
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection
from gremlin_python.structure.graph import Graph
from gremlin_python.process.traversal import Cardinality


client = Client('ws://localhost:8182/gremlin', 'g')
rc = DriverRemoteConnection('ws://localhost:8182/gremlin', 'g')
g = Graph().traversal().withRemote(rc)

schema_msg = """mgmt = graph.openManagement()
                string_prop = mgmt.makePropertyKey('string_prop').dataType(String.class).cardinality(Cardinality.LIST).make()
                float_prop = mgmt.makePropertyKey('float_prop').dataType(Float.class).cardinality(Cardinality.LIST).make()
                integer_prop = mgmt.makePropertyKey('integer_prop').dataType(Integer.class).cardinality(Cardinality.LIST).make()
                mgmt.commit()"""

client.submit(schema_msg)


v = g.addV('person').property(Cardinality.list_, 'string_prop', 'dave')\
                    .property(Cardinality.list_,'float_prop', 1.1)\
                    .property(Cardinality.list_,'integer_prop', 1).next()
props = g.V(v.id).propertyMap().toList()
print("properties: {}".format(props))
string_prop = g.V(v.id).properties('string_prop').hasValue('dave').toList()
print("string prop: {}".format(string_prop))
float_prop = g.V(v.id).properties('float_prop').hasValue(1.1).toList()
print("float_prop: {}".format(float_prop))
integer_prop = g.V(v.id).properties('integer_prop').hasValue(1).toList()
print("integer_prop: {}".format(integer_prop))

The output of this script is:

properties: [{'integer_prop': [vp[integer_prop->1]], 'float_prop': [vp[float_prop->1.1]], 'string_prop': [vp[string_prop->dave]]}]
string prop: [vp[string_prop->dave]]
float_prop: []
integer_prop: [vp[integer_prop->1]]

Am I missing something here? Or is there a problem with floats and the hasValue step? Or maybe this is gremlin_python specific...
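
One way to narrow down whether this is gremlin_python-specific would be to repeat the float lookup from the Gremlin Console against the same graph; a hypothetical check, reusing the property names from the script above and assuming the vertex it created still exists:

// Hypothetical Gremlin Console check; property names come from the Python script above.
g.V().has('string_prop', 'dave').properties('float_prop').hasValue(1.1f)
g.V().has('string_prop', 'dave').properties('integer_prop').hasValue(1)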

On Monday, August 28, 2017 at 1:44:18 PM UTC-4, David Brown wrote:
Hello JanusGraph users,

I have been experimenting with Janus, and using the automatic schema generation, metaproperties work as expected. However, when I set `schema.default=none` in the conf and define my own schema, metaproperties seem to quit working--metaproperty data is no longer returned in the Gremlin Server response. How should metaproperties be defined in the schema? I can't seem to find this information in the documentation. I can provide example schema definitions if necessary.

Thanks,

Dave