Required Capacity Error - JanusGraph on Cassandra
Joe Obernberger
Hi all - I'm getting the error below when executing this query:
List<Object> correlationIDsListSource = traversal.V().has("source", source).outE("correlation").has("type", type).has("range", range).values("cID").limit(1000).toList();

I'm not sure if it's timing out, or if something else is wrong? Any ideas on what to check? Thank you!

Caused by: org.janusgraph.diskstorage.PermanentBackendException: Permanent exception while executing backend operation EdgeStoreQuery
    at org.janusgraph.diskstorage.util.BackendOperation.executeDirect(BackendOperation.java:79)
    at org.janusgraph.diskstorage.util.BackendOperation.execute(BackendOperation.java:52)
    ... 39 more
Caused by: org.janusgraph.core.JanusGraphException: Exception in JanusGraph
    at org.janusgraph.diskstorage.keycolumnvalue.cache.ExpirationKCVSCache.getSlice(ExpirationKCVSCache.java:104)
    at org.janusgraph.diskstorage.BackendTransaction$1.call(BackendTransaction.java:274)
    at org.janusgraph.diskstorage.BackendTransaction$1.call(BackendTransaction.java:271)
    at org.janusgraph.diskstorage.util.BackendOperation.executeDirect(BackendOperation.java:66)
    ... 40 more
Caused by: com.google.common.util.concurrent.UncheckedExecutionException: java.lang.IllegalArgumentException: required capacity -2147483615 is negative, likely caused by integer overflow
    at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2055)
    at com.google.common.cache.LocalCache.get(LocalCache.java:3966)
    at com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4863)
    at org.janusgraph.diskstorage.keycolumnvalue.cache.ExpirationKCVSCache.getSlice(ExpirationKCVSCache.java:97)
    ... 43 more
Caused by: java.lang.IllegalArgumentException: required capacity -2147483615 is negative, likely caused by integer overflow
    at org.janusgraph.diskstorage.util.ArrayUtil.growSpace(ArrayUtil.java:33)
    at org.janusgraph.diskstorage.util.StaticArrayEntryList.ensureSpace(StaticArrayEntryList.java:454)
    at org.janusgraph.diskstorage.util.StaticArrayEntryList.of(StaticArrayEntryList.java:418)
    at org.janusgraph.diskstorage.util.StaticArrayEntryList.ofStaticBuffer(StaticArrayEntryList.java:355)
    at org.janusgraph.diskstorage.cql.function.slice.AbstractCQLSliceFunction.fromResultSet(AbstractCQLSliceFunction.java:60)
    at org.janusgraph.diskstorage.cql.function.slice.CQLSimpleSliceFunction.getSlice(CQLSimpleSliceFunction.java:40)
    at org.janusgraph.diskstorage.cql.function.slice.AbstractCQLSliceFunction.getSlice(AbstractCQLSliceFunction.java:48)
    at org.janusgraph.diskstorage.cql.CQLKeyColumnValueStore.getSlice(CQLKeyColumnValueStore.java:359)
    at org.janusgraph.diskstorage.keycolumnvalue.KCVSProxy.getSlice(KCVSProxy.java:82)
    at org.janusgraph.diskstorage.keycolumnvalue.cache.ExpirationKCVSCache.lambda$getSlice$1(ExpirationKCVSCache.java:99)
    at com.google.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4868)
    at com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3533)
    at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2282)
    at com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2159)
    at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2049)
    ... 46 more

org.janusgraph.core.JanusGraphException: Could not execute operation due to backend exception
    at org.janusgraph.diskstorage.util.BackendOperation.execute(BackendOperation.java:54)
    at org.janusgraph.diskstorage.BackendTransaction.executeRead(BackendTransaction.java:488)
    at org.janusgraph.diskstorage.BackendTransaction.edgeStoreQuery(BackendTransaction.java:271)
    at org.janusgraph.graphdb.database.StandardJanusGraph.edgeQuery(StandardJanusGraph.java:490)
    at org.janusgraph.graphdb.transaction.StandardJanusGraphTx$2.lambda$execute$1(StandardJanusGraphTx.java:1320)
    at org.janusgraph.graphdb.query.profile.QueryProfiler.profile(QueryProfiler.java:107)
    at org.janusgraph.graphdb.query.profile.QueryProfiler.profile(QueryProfiler.java:99)
    at org.janusgraph.graphdb.query.profile.QueryProfiler.profile(QueryProfiler.java:95)
    at org.janusgraph.graphdb.transaction.StandardJanusGraphTx$2.lambda$execute$2(StandardJanusGraphTx.java:1320)
    at org.janusgraph.graphdb.vertices.CacheVertex.loadRelations(CacheVertex.java:73)
    at org.janusgraph.graphdb.transaction.StandardJanusGraphTx$2.execute(StandardJanusGraphTx.java:1320)
    at org.janusgraph.graphdb.transaction.StandardJanusGraphTx$2.execute(StandardJanusGraphTx.java:1231)
    at org.janusgraph.graphdb.query.QueryProcessor$LimitAdjustingIterator.getNewIterator(QueryProcessor.java:206)
    at org.janusgraph.graphdb.query.LimitAdjustingIterator.hasNext(LimitAdjustingIterator.java:69)
    at org.janusgraph.graphdb.util.CloseableIteratorUtils$1.computeNext(CloseableIteratorUtils.java:49)
    at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:146)
    at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:141)
    at org.janusgraph.graphdb.query.ResultSetIterator.nextInternal(ResultSetIterator.java:55)
    at org.janusgraph.graphdb.query.ResultSetIterator.<init>(ResultSetIterator.java:45)

-Joe
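A note for readers who land on this thread with the same stack trace: the "required capacity ... is negative" exception is thrown while JanusGraph grows the in-memory array that holds the edge slice of a single vertex past Integer.MAX_VALUE, so it usually points at a vertex with an enormous number of edges (Joe confirms the super-node situation further down in the thread). A quick way to check which of the matching vertices are super nodes is sketched below; this is not from the original post, it simply reuses the traversal and source variables from the query above and assumes the usual TinkerPop imports (java.util.Map, org.apache.tinkerpop.gremlin.structure.T, org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.__).

// Count outgoing "correlation" edges per matching vertex, capped at 100,000 so the check
// itself is less likely to trip over the same overflow by materializing a huge edge row.
Map<Object, Object> edgeCounts = traversal.V().has("source", source)
        .group()
        .by(T.id)
        .by(__.outE("correlation").limit(100_000).count())
        .next();
// Any vertex reporting the full 100,000 has at least that many edges and is a super-node candidate.
edgeCounts.forEach((id, count) ->
        System.out.println("vertex " + id + ": " + count + " correlation edges"));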
Hi Joe,
I have no detailed knowledge of the JanusGraph backend code myself, but just a few questions for clarification (so that others have more hints about the cause of the issue):
- roughly how many edges do the vertices hit by this query have?
- how large are the string values of the cID property?
- did you modify the default partitioning in any way?

Best wishes, Marc
Joe Obernberger
Hi Marc - as usual you are on the right path. The number of edges on the nodes in question is very high, so any sort of query on them is slow. The query was timing out; I'm not sure what that error message means, but when I do the same query in Gremlin, it just runs and runs. Unfortunately, I'm ending up with lots of super nodes in this graph. The string size of the cID property is small. I've not modified the partitioning. I did try to use vertex cut, but had some issues with node IDs that seemed to appear out of nowhere - i.e. they were never created, but appeared in the edges list. It was odd.

Looking at Cassandra, there are some very large partitions:

nodetool tablehistograms graphsource.graphindex

-Joe
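For context on the "vertex cut" attempt: JanusGraph's mechanism for this is a partitioned vertex label, which has to be declared in the schema before any vertices with that label are created, and which internally represents one logical vertex by several partition vertices with their own IDs - which may be related to the node IDs that seemed to appear out of nowhere. A minimal sketch, assuming an already opened JanusGraph instance named graph and using the label name "source" purely as an example:

// Declare a partitioned ("vertex cut") vertex label via the management API.
// Assumed imports: org.janusgraph.core.JanusGraph, org.janusgraph.core.schema.JanusGraphManagement
JanusGraphManagement mgmt = graph.openManagement();
mgmt.makeVertexLabel("source").partition().make();  // label name is an example, not from the thread's schema
mgmt.commit();

The number of partitions such a vertex is cut into is governed by the cluster.max-partitions setting.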
hadoopmarc@...
Hi Joe,
With "an index on type and range" you really mean:
Indeed, supernodes have little value in traversing graphs. Maybe you can remove the worst ones (they probably have little meaning) or make them into a property on the attached vertices. If the supernodes are not the ones you want to traverse in your query, maybe a label constraint in the index can help.

Best wishes, Marc
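If the "index on type and range" under discussion is meant to speed up the outE("correlation").has("type", ...).has("range", ...) part of Joe's query on vertices with many edges, the relevant JanusGraph feature is a vertex-centric index on the "correlation" edge label, which sorts and filters a vertex's edges by those keys instead of scanning the whole edge row. A rough sketch, assuming an open JanusGraph instance named graph and that the "correlation" label and the "type" and "range" property keys already exist (an index added afterwards only covers new edges unless a reindex job is run):

// Build a vertex-centric index on the "correlation" edge label over the "type" and "range" keys.
// Assumed imports: org.janusgraph.core.EdgeLabel, org.janusgraph.core.PropertyKey,
// org.janusgraph.core.schema.JanusGraphManagement, org.apache.tinkerpop.gremlin.structure.Direction
JanusGraphManagement mgmt = graph.openManagement();
EdgeLabel correlation = mgmt.getEdgeLabel("correlation");
PropertyKey type = mgmt.getPropertyKey("type");
PropertyKey range = mgmt.getPropertyKey("range");
mgmt.buildEdgeIndex(correlation, "correlationByTypeAndRange", Direction.BOTH, type, range);
mgmt.commit();

The label constraint Marc mentions corresponds to the complementary option for the global lookup: a composite graph index built with indexOnly(...) restricts the index to a single label, so the initial vertex lookup does not have to consult entries for other labels.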