
Re: Problem adding edges with the same label

Miroslav Smiljanic
 

The same problem does not exist when using the InMemory storage backend.
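For reference, a quick way to check this on the inmemory backend from the Gremlin Console (a sketch; vertex ids will differ):

gremlin> graph = JanusGraphFactory.build().set('storage.backend', 'inmemory').open()
gremlin> g = graph.traversal()
gremlin> v1 = g.addV().next(); v2 = g.addV().next(); v3 = g.addV().next()
gremlin> g.addE('child').from(v1).to(v2).next()
gremlin> g.addE('child').from(v2).to(v3).next()   // the second 'child' edge succeeds here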




Problem adding edges with the same label

Miroslav Smiljanic
 

Hi All,

I have a setup using the Cassandra backend (fully managed Azure Cosmos DB).

It seems it is not possible to add two edges with the same label.

gremlin> g.addV()
==>v[4288]
gremlin> g.addV()
==>v[4200]
gremlin> g.addV()
==>v[4344]
gremlin> g.addE('child').from(__.V(4288)).to(__.V(4200))
==>e[t4-3b4-t1-38o][4288-child->4200]
gremlin> g.addE('child').from(__.V(4200)).to(__.V(4344))
java.lang.NullPointerException
Type ':help' or ':h' for help.
Display stack trace? [yN]


This is the error in the server log:

388895 [gremlin-server-exec-7] WARN  org.apache.tinkerpop.gremlin.server.op.AbstractEvalOpProcessor  - Exception processing a script on request [RequestMessage{, requestId=a04edabb-920e-4526-ab3a-1d1d22a8ed82, op='eval', processor='', args={gremlin=g.addE('child').from(__.V(4200)).to(__.V(4344)), userAgent=Gremlin Console/1.21.0, batchSize=64}}].
java.lang.NullPointerException
at org.janusgraph.graphdb.transaction.StandardJanusGraphTx.getUniquenessLock(StandardJanusGraphTx.java:720)
at org.janusgraph.graphdb.transaction.StandardJanusGraphTx.addEdge(StandardJanusGraphTx.java:784)
at org.janusgraph.graphdb.transaction.StandardJanusGraphTx.addEdge(StandardJanusGraphTx.java:774)
at org.janusgraph.graphdb.vertices.AbstractVertex.addEdge(AbstractVertex.java:188)
at org.janusgraph.graphdb.vertices.AbstractVertex.addEdge(AbstractVertex.java:45)
at org.apache.tinkerpop.gremlin.process.traversal.step.map.AddEdgeStartStep.processNextStart(AddEdgeStartStep.java:137)
at org.apache.tinkerpop.gremlin.process.traversal.step.util.AbstractStep.hasNext(AbstractStep.java:150)
at org.apache.tinkerpop.gremlin.process.traversal.util.DefaultTraversal.hasNext(DefaultTraversal.java:222)
at org.apache.tinkerpop.gremlin.server.op.AbstractOpProcessor.handleIterator(AbstractOpProcessor.java:97)
at org.apache.tinkerpop.gremlin.server.op.AbstractEvalOpProcessor.lambda$evalOpInternal$5(AbstractEvalOpProcessor.java:263)
at org.apache.tinkerpop.gremlin.groovy.engine.GremlinExecutor.lambda$eval$0(GremlinExecutor.java:283)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)

Does anyone know what could be the problem?

Regards,
Miroslav 


Drop graph not working

51kumarakhil@...
 

Hi all.
I'm creating dynamic graphs using ConfiguredGraphFactory on Bigtable. I'm able to create them, but when I try to delete them, it seems they only get deleted from the JanusGraph server and not from Bigtable.
When I try to access a dropped graph, it says the graph doesn't exist, yet I can still see it in Bigtable. When I try to manually delete the graph from Bigtable, the server throws an error like "graph not found".
I used the command below to delete the graph:
> ConfiguredGraphFactory.drop('graph_name')

:_(
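For reference, a way to check from the console what was actually dropped (a sketch; the graph name is illustrative):

gremlin> ConfiguredGraphFactory.drop('graph_name')
gremlin> ConfiguredGraphFactory.getGraphNames()                  // the dropped graph should no longer be listed
gremlin> ConfiguredGraphFactory.getConfiguration('graph_name')   // should no longer return a configuration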


Re: Commit progress listener

marcorizzi82@...
 

Thanks a lot Marc for the answer and the link to the gremlin-users group.

I was wondering if there's something "implementation specific" in JanusGraph that could help in getting such progress information during a commit, even if it is outside the Gremlin/TinkerPop specifications.

Thanks again,
Marco


Re: NullPointerException comparing PredicateCondition (in equals method)

hadoopmarc@...
 

Hi Albert,

I tried with the GraphOfTheGods graph on janusgraph-0.5.3:
gremlin> g.V().has(label, without('god')).has(label, without('location')).values('name')
14:17:41 WARN  org.janusgraph.graphdb.transaction.StandardJanusGraphTx  - Query requires iterating over all vertices [(~label <> null AND ~label <> god AND ~label <> location)]. For better performance, use indexes
==>nemean
==>hercules
==>saturn
==>cerberus
==>alcmene
==>hydra

gremlin> g.V().has(label, eq('god')).has(label, eq(null))
14:22:31 WARN  org.janusgraph.graphdb.transaction.StandardJanusGraphTx  - Query requires iterating over all vertices [(~label = god AND ~label = null)]. For better performance, use indexes
gremlin>
So, I guess you did not mean to say that these traversals should fail.

If it has anything to do with the way you use the PredicateCondition class, can you give a code example where you just instantiate two PredicateConditions and have an equals() call fail?
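For illustration, such a repro might look like this (an untested sketch against the janusgraph-0.5.x internals, not a confirmed failure):

import org.janusgraph.core.attribute.Cmp
import org.janusgraph.graphdb.query.condition.PredicateCondition

a = new PredicateCondition('~label', Cmp.EQUAL, 'god')   // non-null value
b = new PredicateCondition('~label', Cmp.EQUAL, null)    // null value
a.equals(b)   // false, no NPE, since a's value is non-null
b.equals(a)   // an NPE is expected here if equals() dereferences b's null value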

Best wishes,    Marc


Re: JanusGraph-0.6.0: Unable to open a connection with JanusGraphFactory with CL=ONE when quorum is lost

Umesh Gade
 

Hi Marc,
Thanks for the reply. Yes, we are also doing thorough testing to check for any compatibility issues, and so far we haven't found any except this one. The strange thing is that the issue posted here is NOT present with JanusGraph 0.5.3 + Cassandra 4.0.
This issue has something to do with "storage.cql.only-use-local-consistency-for-system-operations". We could solve it by setting that flag to TRUE for a 2-node cluster, but the problem remains with a 3+ node cluster.
Is there any new touchpoint for this flag during the JanusGraphFactory.open(...) call, added in janusgraph-0.6.0?
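For reference, the flag in question as it would appear in the graph properties, with the value from the two-node workaround described above:

storage.cql.only-use-local-consistency-for-system-operations=true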



Re: JanusGraph-0.6.0: Unable to open a connection with JanusGraphFactory with CL=ONE when quorum is lost

hadoopmarc@...
 

Hi Umesh,

I assume that checking the compatibility matrix at https://docs.janusgraph.org/changelog/ made you post the additional comment about cassandra 4.0.0 :-)

Indeed, support of cassandra 4.0 is still an open issue.

Best wishes,    Marc


Re: Commit progress listener

hadoopmarc@...
 

Hi Marco,

Interesting idea! From what I read about the EventStrategy, this is a feature stemming from TinkerPop and questions about it could best be addressed there.

Note, however, that Gremlin OLTP traversals are executed mainly depth-first, so any new events introduced would rather be at the level of branching in the graph than at the level of traversal steps. Possibly, execution-based events would only be useful after barrier steps, which break the depth-first pattern; passing such a barrier would therefore give a sense of progress in the execution of a traversal.
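As an illustration of that last point, in a traversal like the following (labels made up), no result is emitted until every traverser has reached the barrier, so passing the barrier marks a well-defined point in the execution:

g.V().hasLabel('person').out('knows').barrier().values('name')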

Best wishes,    Marc


Commit progress listener

marcorizzi82@...
 

Hi all,
I've successfully been using an EventStrategy listener in my application, with both the DefaultEventQueue and the TransactionalEventQueue: they provide a way for the listener to be invoked while the events occur or after they have been committed, i.e. respectively before and after execution of the "g.tx().commit()" statement.
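For context, that setup looks roughly like this (a sketch; the listener and properties file are placeholders):

import org.apache.tinkerpop.gremlin.process.traversal.strategy.decoration.EventStrategy
import org.apache.tinkerpop.gremlin.process.traversal.step.util.event.ConsoleMutationListener

graph = JanusGraphFactory.open('conf/janusgraph-cql.properties')
listener = new ConsoleMutationListener(graph)   // stand-in for the application's own MutationListener
strategy = EventStrategy.build().addListener(listener).eventQueue(new EventStrategy.TransactionalEventQueue(graph)).create()
g = graph.traversal().withStrategies(strategy)
// with the TransactionalEventQueue, the listener only fires after g.tx().commit() succeeds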

My question is: is there a way to get events about the progress of the commit method execution itself?
I'm trying to provide feedback to my application's users about the graph events, and the commit method invocation is a kind of black box I can only call and wait for to end: can more information be retrieved about the execution itself?

Thanks in advance,
Marco


Re: JanusGraph-0.6.0: Unable to open a connection with JanusGraphFactory with CL=ONE when quorum is lost

Umesh Gade
 

One update:
Cassandra version is 4.0.0




--
Sincerely,
Umesh Gade


JanusGraph-0.6.0: Unable to open a connection with JanusGraphFactory with CL=ONE when quorum is lost

Umesh Gade
 

Hi,
      We just upgraded JanusGraph to 0.6.0 and started observing an issue with something that worked earlier.
The scenario is: we open a connection with read/write CL="ONE" using JanusGraphFactory. But when quorum is lost, this connection fails to open. Curious to know what changed around this, and what needs to be done to fix it?

Graph config passed:
storage.backend=cql
storage.port=9042
storage.cql.keyspace=test_ks
storage.cql.local-datacenter=dc1
storage.cql.read-consistency-level=ONE
storage.cql.write-consistency-level=ONE
storage.cql.executor-service.enabled=false
storage.cql.atomic-batch-mutate=false
graph.set-vertex-id=true
query.force-index=false
query.optimizer-backend-access=false

Below is the exception we got:
Opening connection to graph with test_ks@localhost:9042
org.janusgraph.core.JanusGraphException: Could not execute operation due to backend exception
        at org.janusgraph.diskstorage.util.BackendOperation.execute(BackendOperation.java:54)
        at org.janusgraph.diskstorage.util.BackendOperation.execute(BackendOperation.java:117)
        at org.janusgraph.diskstorage.configuration.backend.KCVSConfiguration.get(KCVSConfiguration.java:96)
        at org.janusgraph.diskstorage.configuration.BasicConfiguration.isFrozen(BasicConfiguration.java:105)
        at org.janusgraph.diskstorage.configuration.builder.ReadConfigurationBuilder.buildGlobalConfiguration(ReadConfigurationBuilder.java:81)
        at org.janusgraph.graphdb.configuration.builder.GraphDatabaseConfigurationBuilder.build(GraphDatabaseConfigurationBuilder.java:67)
        at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:176)
        at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:147)
        at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:127)
        at ***.TestCli.openConnection(TestCli.java:140)        
Caused by: org.janusgraph.diskstorage.TemporaryBackendException: Could not successfully complete backend operation due to repeated temporary exceptions after PT1M
        at org.janusgraph.diskstorage.util.BackendOperation.executeDirect(BackendOperation.java:98)
        at org.janusgraph.diskstorage.util.BackendOperation.execute(BackendOperation.java:52)
        ... 11 more
Caused by: org.janusgraph.diskstorage.TemporaryBackendException: Temporary failure in storage backend
        at io.vavr.API$Match$Case0.apply(API.java:5135)
        at io.vavr.API$Match.of(API.java:5092)
        at org.janusgraph.diskstorage.cql.CQLKeyColumnValueStore.lambda$static$0(CQLKeyColumnValueStore.java:120)
        at org.janusgraph.diskstorage.cql.function.slice.CQLSimpleSliceFunction.interruptibleWait(CQLSimpleSliceFunction.java:50)
        at org.janusgraph.diskstorage.cql.function.slice.CQLSimpleSliceFunction.getSlice(CQLSimpleSliceFunction.java:39)
        at org.janusgraph.diskstorage.cql.function.slice.AbstractCQLSliceFunction.getSlice(AbstractCQLSliceFunction.java:48)
        at org.janusgraph.diskstorage.cql.CQLKeyColumnValueStore.getSlice(CQLKeyColumnValueStore.java:358)
        at org.janusgraph.diskstorage.configuration.backend.KCVSConfiguration$1.call(KCVSConfiguration.java:99)
        at org.janusgraph.diskstorage.configuration.backend.KCVSConfiguration$1.call(KCVSConfiguration.java:96)
        at org.janusgraph.diskstorage.util.BackendOperation.execute(BackendOperation.java:106)
        at org.janusgraph.diskstorage.util.BackendOperation$1.call(BackendOperation.java:120)
        at org.janusgraph.diskstorage.util.BackendOperation.executeDirect(BackendOperation.java:66)
        ... 12 more
Caused by: java.util.concurrent.ExecutionException: com.datastax.oss.driver.api.core.AllNodesFailedException: All 1 node(s) tried for the query failed (showing first 1 nodes, use getAllErrors() for more): Node(endPoint=localhost/127.0.0.1:9042, hostId=779642c7-23bb-46d4-88fa-6ae08f2f9e24, hashCode=61feb06d): [com.datastax.oss.driver.api.core.servererrors.UnavailableException: Not enough replicas available for query at consistency QUORUM (2 required but only 1 alive)]
        at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
        at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
        at org.janusgraph.diskstorage.cql.function.slice.CQLSimpleSliceFunction.interruptibleWait(CQLSimpleSliceFunction.java:45)
        ... 20 more
Caused by: com.datastax.oss.driver.api.core.AllNodesFailedException: All 1 node(s) tried for the query failed (showing first 1 nodes, use getAllErrors() for more): Node(endPoint=localhost/127.0.0.1:9042, hostId=779642c7-23bb-46d4-88fa-6ae08f2f9e24, hashCode=61feb06d): [com.datastax.oss.driver.api.core.servererrors.UnavailableException: Not enough replicas available for query at consistency QUORUM (2 required but only 1 alive)]
        at com.datastax.oss.driver.api.core.AllNodesFailedException.fromErrors(AllNodesFailedException.java:55)
        at com.datastax.oss.driver.internal.core.cql.CqlRequestHandler.sendRequest(CqlRequestHandler.java:261)
        at com.datastax.oss.driver.internal.core.cql.CqlRequestHandler.access$1000(CqlRequestHandler.java:94)
        at com.datastax.oss.driver.internal.core.cql.CqlRequestHandler$NodeResponseCallback.processRetryVerdict(CqlRequestHandler.java:849)
        at com.datastax.oss.driver.internal.core.cql.CqlRequestHandler$NodeResponseCallback.processErrorResponse(CqlRequestHandler.java:828)
        at com.datastax.oss.driver.internal.core.cql.CqlRequestHandler$NodeResponseCallback.onResponse(CqlRequestHandler.java:655)
        at com.datastax.oss.driver.internal.core.channel.InFlightHandler.channelRead(InFlightHandler.java:257)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
        at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
        at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
        at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:324)
        at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:296)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
        at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
        at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
        at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:719)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:655)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:581)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
        at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
        at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
        at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
        at java.lang.Thread.run(Thread.java:748)
        Suppressed: com.datastax.oss.driver.api.core.servererrors.UnavailableException: Not enough replicas available for query at consistency QUORUM (2 required but only 1 alive)
--
Sincerely,
Umesh Gade


NullPointerException comparing PredicateCondition (in equals method)

albert.lockett@...
 

JanusGraph (version 0.5.2) can be made to throw a NullPointerException using the following traversals:

g.V().has(label, eq('User')).has(label, eq(null))

g.V().has(label, without('User')).has(label, without('Group'))

 

In QueryUtil#constraints2QNF we're building up a list of conditions from the constraints, and in addConstraint we're checking whether the list already contains the condition (by calling contains):

if (!conditions.contains(pc)) conditions.add(pc);

This calls the equals method of PredicateCondition. If the list already contains another condition with the same predicate and key, and the condition we're trying to add has a null value, it will throw a NullPointerException:


public boolean equals(Object other) {
    // ....
    PredicateCondition oth = (PredicateCondition) other;
    return key.equals(oth.key) && predicate.equals(oth.predicate) && value.equals(oth.value);
}
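A null-safe variant (my sketch, not necessarily the project's actual fix) would delegate to java.util.Objects:

return key.equals(oth.key) && predicate.equals(oth.predicate) && Objects.equals(value, oth.value);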

The stack trace:

java.lang.NullPointerException
        at org.janusgraph.graphdb.query.condition.PredicateCondition.equals(PredicateCondition.java:109)
        at java.util.ArrayList.indexOf(ArrayList.java:321)
        at java.util.ArrayList.contains(ArrayList.java:304)
        at org.janusgraph.graphdb.query.QueryUtil.addConstraint(QueryUtil.java:272)
        at org.janusgraph.graphdb.query.QueryUtil.constraints2QNF(QueryUtil.java:215)
        at org.janusgraph.graphdb.query.graph.GraphCentricQueryBuilder.constructQueryWithoutProfile(GraphCentricQueryBuilder.java:238)
        at org.janusgraph.graphdb.query.graph.GraphCentricQueryBuilder.constructQuery(GraphCentricQueryBuilder.java:225)
        at org.janusgraph.graphdb.tinkerpop.optimize.JanusGraphStep.buildGraphCentricQuery(JanusGraphStep.java:196)
        at org.janusgraph.graphdb.tinkerpop.optimize.JanusGraphStep.lambda$new$0(JanusGraphStep.java:94)
        at java.util.LinkedHashMap$LinkedEntrySet.forEach(LinkedHashMap.java:671)
        at org.janusgraph.graphdb.tinkerpop.optimize.JanusGraphStep.lambda$new$1(JanusGraphStep.java:94)
        at java.util.ArrayList.forEach(ArrayList.java:1257)
        at org.janusgraph.graphdb.tinkerpop.optimize.JanusGraphStep.lambda$new$3(JanusGraphStep.java:93)
        at org.apache.tinkerpop.gremlin.process.traversal.step.map.GraphStep.processNextStart(GraphStep.java:157)
        at org.apache.tinkerpop.gremlin.process.traversal.step.util.AbstractStep.hasNext(AbstractStep.java:144)
        at org.apache.tinkerpop.gremlin.process.traversal.util.DefaultTraversal.hasNext(DefaultTraversal.java:196)
        at org.apache.tinkerpop.gremlin.console.Console$_closure3.doCall(Console.groovy:255)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:101)
        at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:323)
        at org.codehaus.groovy.runtime.metaclass.ClosureMetaClass.invokeMethod(ClosureMetaClass.java:263)
        at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1041)
        at org.codehaus.groovy.runtime.callsite.PogoMetaClassSite.call(PogoMetaClassSite.java:37)
        at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:127)
        at org.codehaus.groovy.tools.shell.Groovysh.setLastResult(Groovysh.groovy:463)
        at sun.reflect.GeneratedMethodAccessor28.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.codehaus.groovy.runtime.callsite.PlainObjectMetaMethodSite.doInvoke(PlainObjectMetaMethodSite.java:43)
        at org.codehaus.groovy.runtime.callsite.PogoMetaMethodSite$PogoCachedMethodSiteNoUnwrapNoCoerce.invoke(PogoMetaMethodSite.java:190)
        at org.codehaus.groovy.runtime.callsite.PogoMetaMethodSite.callCurrent(PogoMetaMethodSite.java:58)
        at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callCurrent(AbstractCallSite.java:168)
        at org.codehaus.groovy.tools.shell.Groovysh.execute(Groovysh.groovy:201)
        at org.apache.tinkerpop.gremlin.console.GremlinGroovysh.super$3$execute(GremlinGroovysh.groovy)


Re: Bindings for graphs created using ConfiguredGraphFactory not working as expected

hadoopmarc@...
 

Hi Anya,

In v0.6.0 the bin/janusgraph-server.sh start script does not start Cassandra any more. Are you sure you did start Cassandra ("cassandra/bin/cassandra") before starting JanusGraph?
Also check whether you did not mix up the graph1 and graph1_config graph.graphname values.

I guess you found out that you need to run this (before running bin/janusgraph-server.sh):
export JANUSGRAPH_YAML=conf/gremlin-server/gremlin-server-configuration.yaml

Best wishes,      Marc


Re: Cleaning up old data in large graphs

hadoopmarc@...
 

Hi Mladen,

Indeed, there is still a load of open issues regarding TTL:

https://github.com/JanusGraph/janusgraph/issues?q=is%3Aissue+is%3Aopen+ttl

Your last remark about empty vertices sounds plausible, although it would be pretty bad if true. Searching for "new HashMap" on GitHub gives too many results to inspect, so please keep an eye out for further hints about where this might occur.
I did not see any open issues reporting empty vertices after ghost vertex removal.

Best wishes,    Marc


Re: Cleaning up old data in large graphs

Mladen Marović
 

Hi Marc,

thanks for the response.

  1. As described in https://docs.janusgraph.org/schema/advschema/, TTL is already supported. However, there are two issues in my case:

    a) Changing the TTL is supported, but the new TTL is only applied on inserts and updates. In other words, if I have a TTL of 12 months and change it to 18 months, it will effectively take 12 months before that change comes into effect, because all the old data will still have a TTL of 12 months. A possible workaround would be to run over all objects in the database and update them in some way to force setting the new TTL, although that seems a bit costly (see the sketch after this list).

    b) I'm not sure how exactly the TTL setting applies in JanusGraph. Is it set only on the data, or on the composite indexes as well? If it is set only on the data, then after a while the indexes should fill up with entries for non-existing elements. I can confirm this is the case for mixed indexes: during testing, data was deleted in Cassandra, but the mixed index entries in Elasticsearch were not, which means I would have to delete them manually as well. This would be OK if JanusGraph supported using multiple Elasticsearch indexes for a single index (which would be a really cool feature, btw!), but I don't think that's the case; I tried to trick JanusGraph into using an alias, but things did not work as expected.

  2. I don't think the problem in the Spark jobs is with transactions. By default, in case of an exception, Spark repeats the task, and the job eventually ends with all tasks finished successfully. Also, in my case there actually are no exceptions. I even managed to manually find the vertices that caused the issues via the Gremlin console, but their valueMap() is {}, where I would expect it to contain the 10-15 properties they usually have if they weren't deleted. Basically, JanusGraph acts as if it found a vertex (or some part of it), but during deletion nothing happens.

    If I remember correctly, I tried to analyze what was happening a while ago, and I seem to have found a place in the JanusGraph source where a dummy (empty) vertex is created when JanusGraph does not find the proper data. I guess that's what's happening when I get the {} result. Maybe the index entry wasn't cleaned up: JanusGraph thinks there should be something, finds nothing, and returns the empty vertex. When I try to delete it, again there is nothing to be deleted, so the index entry isn't cleared. I don't know if that's actually possible, but it might explain my case.
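For reference regarding (a), the TTL change itself is a one-liner in the management API (a sketch; the vertex label name is made up), and it only takes effect for data written afterwards:

mgmt = graph.openManagement()
mgmt.setTTL(mgmt.getVertexLabel('event'), java.time.Duration.ofDays(548))   // roughly 18 months
mgmt.commit()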

Best regards,

Mladen Marović


Re: Cleaning up old data in large graphs

hadoopmarc@...
 

Hi Mladen,

Just two things that come up while reading your story:
  • the Cassandra TTL feature seems promising for your use case, see e.g. https://www.geeksforgeeks.org/time-to-live-ttl-for-a-column-in-cassandra/. I guess this would require code changes in janusgraph-cassandra.
  • how is transaction control in the Spark jobs? You want transactions of reasonable size (say 10,000 vertices or edges) and you want Spark tasks to fail if the transaction commit fails. That way Spark will repeat the task and will hopefully succeed; see the sketch below this list.
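A batching pattern along those lines might look like this (a sketch: 'partition' stands for one Spark partition, and the per-item mutation is a placeholder):

g = graph.traversal()
long n = 0
partition.each { item ->
    g.V(item.vertexId).drop().iterate()      // placeholder for the per-item mutation
    if (++n % 10000 == 0) g.tx().commit()    // commit in chunks; an exception here should fail the Spark task
}
g.tx().commit()                              // commit the remainder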

Best wishes,    Marc


Bindings for graphs created using ConfiguredGraphFactory not working as expected

anya.sharma@...
 

Hello,

I have a local setup of JanusGraph 0.6.0 with Cassandra 3.11.9. I am creating a graph using the ConfiguredGraphFactory. For this, I am using the bundled properties and yaml files and creating the graph by running the following commands from the Gremlin console (also bundled with the JanusGraph installation):

gremlin> :remote connect tinkerpop.server conf/remote.yaml session
gremlin> :remote console
gremlin> map = new HashMap();
gremlin> map.put('storage.backend', 'cql');
gremlin> map.put('storage.hostname', '127.0.0.1');
gremlin> map.put('graph.graphname', 'graph1');
gremlin> map.put('storage.username', 'myDBUsername');
gremlin> map.put('storage.password', 'myDBPassword');
gremlin> ConfiguredGraphFactory.createConfiguration(new MapConfiguration(map));

Once I have created the configuration, I open the graph and try to access the graph and traversal variables bound to it, but I get the following response:

gremlin> ConfiguredGraphFactory.open('graph1')
gremlin> graph1
No such property: graph1 for class: Script7

gremlin> graph1_traversal
No such property: graph1_traversal for class: Script8
I am using the gremlin-server-configuration.yaml and janusgraph-cql-configuration.properties bundled with the JanusGraph installation package. The only changes I have made are adding the credentials and custom graph.graphname:

graph.graphname=graph1_config
storage.hostname=127.0.0.1
storage.username=myDBUsername
storage.password=myDBPassword

According to the documentation, I should be able to access the bound variables. I was able to do this in version 0.3.1 of JanusGraph. What could I be missing or doing wrong?

Thanks
Anya


Re: Duplicate vertex issue with Uniqueness constraints | JanusGraph CQL

Pawan Shriwas
 

Hi Marc, 

Adding additional data.

Checking the duplicate data, with the uniqueness constraint on the name_cons field:

gremlin> g.V().has('gId',P.within('da209078-4a2f-4db2-b489-27da028df983','ba81f5d3-a29b-4a2c-88c3-c265ce3f68a5','9804b32d-31d9-409a-a441-a38fdbf998f7')).valueMap()
==>[gId:[da209078-4a2f-4db2-b489-27da028df983],entityGId:[9e51c70d-f148-401f-8eea-53b767d9bbb6],name_cons:[CGNAT_NS2]]
==>[gId:[ba81f5d3-a29b-4a2c-88c3-c265ce3f68a5],entityGId:[7e763ebc-b2e0-4d04-baaa-4463d04ca436],name_cons:[CGNAT_NS2]]
==>[gId:[9804b32d-31d9-409a-a441-a38fdbf998f7],entityGId:[23fd7efd-3688-4b58-aab6-173d25a8dd63],name_cons:[CGNAT_NS2]]
gremlin>

Reading the data via the unique index property (with the consistency lock) returns only one record:

gremlin> g.V().has('name_cons','CGNAT_NS2').valueMap()
==>[gId:[290cc878-19e1-44f6-9f6c-62b7471e21bc],entityGId:[0b59889d-e725-46e5-9f42-d96daaeaa21d],name_cons:[CGNAT_NS2]]
gremlin>


Hope this clarifies!!!!
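For reference, the constraint under test was set up along these lines (a sketch following the JanusGraph eventual-consistency docs; the index name is made up):

mgmt = graph.openManagement()
name = mgmt.getPropertyKey('name_cons')
index = mgmt.buildIndex('byNameUnique', Vertex.class).addKey(name).unique().buildCompositeIndex()
mgmt.setConsistency(index, ConsistencyModifier.LOCK)
mgmt.commit()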





--
Thanks & Regard

PAWAN SHRIWAS


Cleaning up old data in large graphs

Mladen Marović
 

Hello,

I have a graph (JanusGraph 0.5.3 running on a CQL backend with an Elasticsearch index) that is updated in near real-time. About 50M new vertices and 100M new edges are added every month. A large part of these (around 90%) should be deleted after 1 year, and the customer may require changing this at a later date. The remaining 10% of the data has no fixed expiration period, but vertices are expected to be deleted when they have no more edges.

Currently, I have a daily Spark job that deletes vertices and their edges by checking their date field (a field denoting the date they were added to the graph). A second Spark job is used to delete vertices without edges. This sort of works, but it is definitely not perfect, for the following reasons:

  1. After running the first cleanup job for a specific date, there's always a small amount of items (vertices or edges) left. The job reports the number of deleted items, and even after running the job for several times, there's always a non-zero number of items being reported as deleted in that run. For example, in the first run it will report several million items as deleted, in the second about 5000, in the third about 4800, in the fourth about 4620 etc. This converges to some non-zero small number eventually, meaning the Spark job always sees some vertices that it repeatedly attempts to delete, but never actually does, even though no errors appear.

    I'm guessing this is caused by some consistency issues, but I could not resolve it completely. I tried to run the GhostVertexRemover vertex program, which helps and further reduces the number of remaining items, but some still persist. Also, when running the cleanup job on a smaller scale (fewer workers and less data), the job seems to work without issues, so I don't think there are any major bugs in the code itself that would cause this.

  2. Once it starts, the cleaning job is quite performance-intensive and can sometimes interfere with the input job that loads the graph data, which is something I want to avoid.

  3. During the cleanup job, cassandra delete operations produce a lot of tombstones. If the tombstone threshold is too low and exceeded on a single node, the entire graph will no longer accept any changes until a cassandra compaction is run. A large number of tombstones also degrades search performance. Graph supernodes with an especially large edge count may require several "run the cleanup job -> cleanup fails -> run compaction" cycles before everything is properly cleaned up. An alternative is to configure the tombstone threshold to be some absurdly high number to prevent failures completely and schedule daily compaction on each cassandra node after each cleanup job, which is what I'm doing currently.

I was wondering if anyone has some suggestions or best practices on how to manage graph data with a retention period (that could change over time)?

Best regards,

Mladen Marović


Re: Duplicate vertex issue with Uniqueness constraints | JanusGraph CQL

Pawan Shriwas
 

Hi Marc;

Yes, we are committing the transaction after each operation.

how do you know about "duplicate vertex creation" when "it returns only 1 record"?
The vertex is being ingested with the same data, and the graph generates a different id each time. When we query the graph with these different ids, the returned list contains the same name multiple times, but when we retrieve the data via the name parameter (which has a unique index with lock consistency), the graph returns only one record.

Hope this helps.

Thanks,
Pawan

 

On Sun, Nov 21, 2021 at 4:01 PM <hadoopmarc@...> wrote:
Hi Pawan,

Your code mirrors the example at https://docs.janusgraph.org/advanced-topics/eventual-consistency/#data-consistency for the greatest part. Are you sure the changes on graphMgmt get committed?

Also, how do you know about "duplicate vertex creation" when "it returns only 1 record"?

Best wishes,   Marc

PS. Most of the software community reserves names starting with a verb to functions and class methods. Violating this convention (e.g. PropertyKey makePropertyKey) makes your code almost unreadable to others.



--
Thanks & Regard

PAWAN SHRIWAS
