
Re: Janusgraph 0.6.0 cassandra connection issues caused by: com.datastax.oss.driver.api.core.AllNodesFailedException: Could not reach any contact point, make sure you've provided valid addresses (showing first 1 nodes, use getAllErrors() for more): Node(endPoint=/127.0.0.1:9042, hostId=null, hashCode=5960d2ce): [com.datastax.oss.driver.api.core.connection.ConnectionInitException: [JanusGraph Session|control|connecting...]

krishna.sailesh2@...
 

Hi Boxuan,

I have tried updating JanusGraph to 0.6.1; the issue is the same.
Yes, I am using JanusGraph Server. Do I need to use a different server?

JanusGraphFactory.open(conf);
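
For reference, an equivalent way to build such a configuration with multiple contact points is the builder API (the values below are placeholders):

import org.janusgraph.core.JanusGraph;
import org.janusgraph.core.JanusGraphFactory;

JanusGraph graph = JanusGraphFactory.build()
        .set("storage.backend", "cql")
        .set("storage.hostname", "cass01,cass02,cass03")     // comma-separated contact points
        .set("storage.cql.local-datacenter", "datacenter1")  // required by the 4.x Java driver
        .open();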

Thanks
Krishna Jalla


Re: Janusgraph 0.6.0 cassandra connection issues caused by: com.datastax.oss.driver.api.core.AllNodesFailedException: Could not reach any contact point, make sure you've provided valid addresses (showing first 1 nodes, use getAllErrors() for more): Node(endPoint=/127.0.0.1:9042, hostId=null, hashCode=5960d2ce): [com.datastax.oss.driver.api.core.connection.ConnectionInitException: [JanusGraph Session|control|connecting...]

Boxuan Li
 

Can you try the latest version, 0.6.1? There was a bug in 0.6.0 that occurs when you have multiple hostnames and you are using a Gremlin (JanusGraph) Server.

Best,
Boxuan


Re: Janusgraph 0.6.0 cassandra connection issues caused by: com.datastax.oss.driver.api.core.AllNodesFailedException: Could not reach any contact point, make sure you've provided valid addresses (showing first 1 nodes, use getAllErrors() for more): Node(endPoint=/127.0.0.1:9042, hostId=null, hashCode=5960d2ce): [com.datastax.oss.driver.api.core.connection.ConnectionInitException: [JanusGraph Session|control|connecting...]

krishna.sailesh2@...
 

Hi Boxuan,

Thanks for the reply. My configuration is correct; I don't know why it is showing 127.0.0.1. By the way, when I give only a single-node IP it works fine:

storage.hostname = cass01 - working fine
storage.hostname = cass01,cass02,cass03 - giving the above error

The same happens for Elasticsearch:

index.search.hostname: "esearch01" - working fine
index.search.hostname: "esearch01,esearch02esearch03" -- giving the above error

Can you please help me with connecting JanusGraph to multiple Cassandra or Elasticsearch nodes?

Thanks
Krishna Jalla


Re: Janusgraph 0.6.0 cassandra connection issues caused by: com.datastax.oss.driver.api.core.AllNodesFailedException: Could not reach any contact point, make sure you've provided valid addresses (showing first 1 nodes, use getAllErrors() for more): Node(endPoint=/127.0.0.1:9042, hostId=null, hashCode=5960d2ce): [com.datastax.oss.driver.api.core.connection.ConnectionInitException: [JanusGraph Session|control|connecting...]

Boxuan Li
 

Hi Krishna, are you sure you are using the right configuration? Your log suggests that you are using “127.0.0.1” as your hostname.

On Mar 3, 2022, at 8:09 AM, krishna.sailesh2@... wrote:

127.0.0.1


Janusgraph 0.6.0 cassandra connection issues caused by: com.datastax.oss.driver.api.core.AllNodesFailedException: Could not reach any contact point, make sure you've provided valid addresses (showing first 1 nodes, use getAllErrors() for more): Node(endPoint=/127.0.0.1:9042, hostId=null, hashCode=5960d2ce): [com.datastax.oss.driver.api.core.connection.ConnectionInitException: [JanusGraph Session|control|connecting...]

krishna.sailesh2@...
 

Hi folks,

I am trying to connect JanusGraph to Cassandra using the CQL backend,
with janusgraph-cql 0.6.0, Cassandra 3.11.6, and Cassandra Java driver 4.9.0.

Properties:

storage.backend: cql
storage.hostname: "cass01,cass02,cass03"
storage.cql.keyspace: graphs
storage.cql.local-datacenter: data
storage.cql.read-consistency-level: LOCAL_QUORUM
storage.cql.write-consistency-level: LOCAL_QUORUM
cache.db-cache: false
index.search.backend: elasticsearch
index.search.hostname: "esearch01,esearch02esearch03"
index.search.index-name: graphs
index.search.elasticsearch.client-only: true
graph.allow-upgrade: false
storage.lock.wait-time: 200

Stack trace:
Caused by: java.lang.IllegalArgumentException: Could not instantiate implementation: org.janusgraph.diskstorage.cql.CQLStoreManager
	at org.janusgraph.util.system.ConfigurationUtil.instantiate(ConfigurationUtil.java:79) ~[janusgraph-core-0.6.0.jar:?]
	at org.janusgraph.diskstorage.Backend.getImplementationClass(Backend.java:525) ~[janusgraph-core-0.6.0.jar:?]
	at org.janusgraph.diskstorage.Backend.getStorageManager(Backend.java:489) ~[janusgraph-core-0.6.0.jar:?]
	at org.janusgraph.graphdb.configuration.builder.GraphDatabaseConfigurationBuilder.build(GraphDatabaseConfigurationBuilder.java:64) ~[janusgraph-core-0.6.0.jar:?]
	at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:176) ~[janusgraph-core-0.6.0.jar:?]
	at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:147) ~[janusgraph-core-0.6.0.jar:?]
	at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:127) ~[janusgraph-core-0.6.0.jar:?]
	at com.opsramp.graphdb.core.GraphFactory.openGraph(GraphFactory.java:60) ~[graphdb-core-11.0.0-SNAPSHOT.jar:?]
	... 13 more
Caused by: java.lang.reflect.InvocationTargetException
	at jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:?]
	at jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) ~[?:?]
	at jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:?]
	at java.lang.reflect.Constructor.newInstance(Constructor.java:490) ~[?:?]
	at org.janusgraph.util.system.ConfigurationUtil.instantiate(ConfigurationUtil.java:73) ~[janusgraph-core-0.6.0.jar:?]
	at org.janusgraph.diskstorage.Backend.getImplementationClass(Backend.java:525) ~[janusgraph-core-0.6.0.jar:?]
	at org.janusgraph.diskstorage.Backend.getStorageManager(Backend.java:489) ~[janusgraph-core-0.6.0.jar:?]
	at org.janusgraph.graphdb.configuration.builder.GraphDatabaseConfigurationBuilder.build(GraphDatabaseConfigurationBuilder.java:64) ~[janusgraph-core-0.6.0.jar:?]
	at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:176) ~[janusgraph-core-0.6.0.jar:?]
	at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:147) ~[janusgraph-core-0.6.0.jar:?]
	at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:127) ~[janusgraph-core-0.6.0.jar:?]
	at com.opsramp.graphdb.core.GraphFactory.openGraph(GraphFactory.java:60) ~[graphdb-core-11.0.0-SNAPSHOT.jar:?]
	... 13 more
Caused by: com.datastax.oss.driver.api.core.AllNodesFailedException: Could not reach any contact point, make sure you've provided valid addresses (showing first 1 nodes, use getAllErrors() for more): Node(endPoint=/127.0.0.1:9042, hostId=null, hashCode=53eeb30b): [com.datastax.oss.driver.api.core.connection.ConnectionInitException: [JanusGraph Session|control|connecting...] Protocol initialization request, step 1 (OPTIONS): failed to send request (java.nio.channels.ClosedChannelException)]
	at com.datastax.oss.driver.api.core.AllNodesFailedException.copy(AllNodesFailedException.java:141) ~[java-driver-core-4.9.0.jar:?]
	at com.datastax.oss.driver.internal.core.util.concurrent.CompletableFutures.getUninterruptibly(CompletableFutures.java:149) ~[java-driver-core-4.9.0.jar:?]
	at com.datastax.oss.driver.api.core.session.SessionBuilder.build(SessionBuilder.java:697) ~[java-driver-core-4.9.0.jar:?]
	at org.janusgraph.diskstorage.cql.builder.CQLSessionBuilder.build(CQLSessionBuilder.java:95) ~[janusgraph-cql-0.6.0.jar:?]
	at org.janusgraph.diskstorage.cql.CQLStoreManager.<init>(CQLStoreManager.java:135) ~[janusgraph-cql-0.6.0.jar:?]
	at org.janusgraph.diskstorage.cql.CQLStoreManager.<init>(CQLStoreManager.java:116) ~[janusgraph-cql-0.6.0.jar:?]
	at jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:?]
	at jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) ~[?:?]
	at jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:?]
	at java.lang.reflect.Constructor.newInstance(Constructor.java:490) ~[?:?]
	at org.janusgraph.util.system.ConfigurationUtil.instantiate(ConfigurationUtil.java:73) ~[janusgraph-core-0.6.0.jar:?]
	at org.janusgraph.diskstorage.Backend.getImplementationClass(Backend.java:525) ~[janusgraph-core-0.6.0.jar:?]
	at org.janusgraph.diskstorage.Backend.getStorageManager(Backend.java:489) ~[janusgraph-core-0.6.0.jar:?]
	at org.janusgraph.graphdb.configuration.builder.GraphDatabaseConfigurationBuilder.build(GraphDatabaseConfigurationBuilder.java:64) ~[janusgraph-core-0.6.0.jar:?]
	at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:176) ~[janusgraph-core-0.6.0.jar:?]
	at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:147) ~[janusgraph-core-0.6.0.jar:?]
	at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:127) ~[janusgraph-core-0.6.0.jar:?]
	at com.opsramp.graphdb.core.GraphFactory.openGraph(GraphFactory.java:60) ~[graphdb-core-11.0.0-SNAPSHOT.jar:?]
	... 13 more
	Suppressed: com.datastax.oss.driver.api.core.connection.ConnectionInitException: [JanusGraph Session|control|connecting...] Protocol initialization request, step 1 (OPTIONS): failed to send request (java.nio.channels.ClosedChannelException)
		at com.datastax.oss.driver.internal.core.channel.ProtocolInitHandler$InitRequest.fail(ProtocolInitHandler.java:354) ~[java-driver-core-4.9.0.jar:?]
		at com.datastax.oss.driver.internal.core.channel.ChannelHandlerRequest.writeListener(ChannelHandlerRequest.java:87) ~[java-driver-core-4.9.0.jar:?]
		at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:577) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
		at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:551) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
		at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:490) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
		at io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:183) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
		at io.netty.channel.DefaultChannelPromise.addListener(DefaultChannelPromise.java:95) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
		at io.netty.channel.DefaultChannelPromise.addListener(DefaultChannelPromise.java:30) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
		at com.datastax.oss.driver.internal.core.channel.ChannelHandlerRequest.send(ChannelHandlerRequest.java:76) ~[java-driver-core-4.9.0.jar:?]
		at com.datastax.oss.driver.internal.core.channel.ProtocolInitHandler$InitRequest.send(ProtocolInitHandler.java:193) ~[java-driver-core-4.9.0.jar:?]
		at com.datastax.oss.driver.internal.core.channel.ProtocolInitHandler.onRealConnect(ProtocolInitHandler.java:124) ~[java-driver-core-4.9.0.jar:?]
		at com.datastax.oss.driver.internal.core.channel.ConnectInitHandler.lambda$connect$0(ConnectInitHandler.java:57) ~[java-driver-core-4.9.0.jar:?]
		at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:577) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
		at io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:570) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
		at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:549) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
		at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:490) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
		at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:615) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
		at io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:608) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
		at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:117) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
		at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.fulfillConnectPromise(AbstractNioChannel.java:321) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
		at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:337) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
		at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:702) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
		at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
		at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
		at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
		at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
		at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
		at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
		at java.lang.Thread.run(Thread.java:834) [?:?]
		Suppressed: io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: /127.0.0.1:9042
		Caused by: java.net.ConnectException: Connection refused
			at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[?:?]
			at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:779) ~[?:?]
			at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:330) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
			at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:334) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
			at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:702) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
			at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
			at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
			at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
			at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
			at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
			at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
			at java.lang.Thread.run(Thread.java:834) [?:?]
	Caused by: java.nio.channels.ClosedChannelException
		at io.netty.channel.AbstractChannel$AbstractUnsafe.newClosedChannelException(AbstractChannel.java:957) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
		at io.netty.channel.AbstractChannel$AbstractUnsafe.flush0(AbstractChannel.java:921) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
		at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.flush0(AbstractNioChannel.java:354) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
		at io.netty.channel.AbstractChannel$AbstractUnsafe.flush(AbstractChannel.java:897) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
		at io.netty.channel.DefaultChannelPipeline$HeadContext.flush(DefaultChannelPipeline.java:1372) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
		at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:750) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
		at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:742) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
		at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:728) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
		at io.netty.channel.ChannelDuplexHandler.flush(ChannelDuplexHandler.java:127) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
		at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:750) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
		at io.netty.channel.AbstractChannelHandlerContext.invokeWriteAndFlush(AbstractChannelHandlerContext.java:765) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
		at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:790) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
		at io.netty.channel.AbstractChannelHandlerContext.writeAndFlush(AbstractChannelHandlerContext.java:758) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
		at io.netty.channel.AbstractChannelHandlerContext.writeAndFlush(AbstractChannelHandlerContext.java:808) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
		at io.netty.channel.DefaultChannelPipeline.writeAndFlush(DefaultChannelPipeline.java:1025) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
		at io.netty.channel.AbstractChannel.writeAndFlush(AbstractChannel.java:294) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
		at com.datastax.oss.driver.internal.core.channel.ChannelHandlerRequest.send(ChannelHandlerRequest.java:75) ~[java-driver-core-4.9.0.jar:?]

Can you please help me with this?


Thanks

Krishna Sailesh


Re: How to determine how many nodes to use?

hadoopmarc@...
 

Hi Doug,

Some questions back:
  1. In terms of https://docs.janusgraph.org/operations/deployment/, what does your cluster look like?
  2. Have you located any performance bottlenecks? (See the profile() example below.)
  3. What kind of Gremlin queries are served?
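
For question 2, profiling a representative query in the Gremlin console shows where time is spent (the traversal below is only a placeholder):

g.V().has("name", "some-value").out("knows").profile()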

Best wishes,   Marc


How to determine how many nodes to use?

Doug Whitfield
 

Hi folks,

I currently have a 3-node cluster which is experiencing performance issues. We are thinking about expanding the number of nodes. What would be the steps for determining how many nodes we need?

Best Regards,
Doug Whitfield


Re: FW: Edge Index Creation Error

hadoopmarc@...
 

What JanusGraph version do you use? Recent TinkerPop versions use Order.asc instead of Order.incr.
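
For example, on recent versions the index build from your snippet becomes:

mgmt.buildEdgeIndex(gate_to, 'GateToEdges', Direction.BOTH, Order.asc, stargate_id)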

Best wishes,    Marc


FW: Edge Index Creation Error

pd.vanlill@...
 

Hi,
I am trying to create an Edge Index from the gremlin console.
I am executing the following:

mgmt = graph.openManagement()
gate_to = mgmt.getEdgeLabel('gate_to')
stargate_id = mgmt.makePropertyKey("stargate_id").dataType(Long.class).make()
mgmt.buildEdgeIndex(gate_to, 'GateToEdges', Direction.BOTH, Order.incr, stargate_id)
mgmt.commit()

And I am receiving this error:

No such property: incr for class: org.apache.tinkerpop.gremlin.process.traversal.Order
I have checked the Javadoc and this static property does exist, and the docs specify using it here: https://docs.janusgraph.org/v0.3/index-management/index-performance/ under “Vertex-centric Indexes”.


Issue #2181: Could not find type for id

Umesh Gade
 

Hi,
    There is an issue that has been reported for a long time: https://github.com/JanusGraph/janusgraph/issues/2181
We have hit this issue twice so far. It is a very rarely occurring issue, but its impact is major. I have commented more details on the link above.

Does anybody know about this issue and its cause?

--
Sincerely,
Umesh Gade


Re: JanusGraph database cache on distributed setup

Boxuan Li
 

Thanks Marc for making it clear.

@Wasantha, how did you implement your void invalidate(StaticBuffer key, List<CachableStaticBuffer> entries) method? Make sure you evict this key from your Redis cache. The default implementation in JanusGraph does not evict it immediately. Rather, it records this key in a local HashMap called expiredKeys and evicts the entry after a timeout. If you use this approach, and you don’t store expiredKeys on Redis, then your other instance could still read stale data. I personally think the usage of expiredKeys is not necessary in your case - you could simply evict the entry from Redis in the invalidate call.
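
A minimal sketch of that direct-eviction approach, assuming a Redis-backed subclass of ExpirationKCVSCache with a Jedis client (the jedis field is illustrative, not part of JanusGraph):

@Override
public void invalidate(StaticBuffer key, List<CachableStaticBuffer> entries) {
    // Evict the entry from the shared Redis cache immediately instead of
    // tracking the key in a local expiredKeys map.
    byte[] cacheKey = key.as(StaticBuffer.ARRAY_FACTORY); // serialize the store key
    jedis.del(cacheKey);
}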

If you still have a problem, probably a better way is to share your code so that we could take a look at your implementation.

Best,
Boxuan

On Feb 20, 2022, at 6:23 AM, hadoopmarc@... wrote:

If you do not use sessions, remote requests to Gremlin Server are committed automatically, see: https://tinkerpop.apache.org/docs/current/reference/#considering-transactions .

Are you sure that committing a modification is sufficient to move the change over from the transaction cache to the database cache, both in the current and in your new Redis implementation? Maybe you can test by having a remote modification request followed by a retrieval request for the same vertex from the same client, so that the database cache is filled explicitly (before the second client attempts to retrieve it).

Marc


Re: JanusGraph database cache on distributed setup

hadoopmarc@...
 

If you do not use sessions, remote requests to Gremlin Server are committed automatically, see: https://tinkerpop.apache.org/docs/current/reference/#considering-transactions .

Are you sure that committing a modification is sufficient to move the change over from the transaction cache to the database cache, both in the current and in your new Redis implementation? Maybe you can test by having a remote modification request followed by a retrieval request for the same vertex from the same client, so that the database cache is filled explicitly (before the second client attempts to retrieve it).
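
For example (reusing the vertex id from your earlier message), from the first client:

g.V(40964200).property("data", "new-value")   // auto-committed when sessionless
g.V(40964200).values("data")                  // re-read from the same client to fill the database cache

and only then, from the second client:

g.V(40964200).values("data")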

Marc


Re: JanusGraph database cache on distributed setup

Boxuan Li
 

Hi Wasantha,

I am not familiar with the transaction scope when using a remote Gremlin server, so I could be wrong, but could you try rolling back the transaction explicitly on JG instance B? Just to make sure you are not accessing the stale data cached in a local transaction.
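
For example, from the Gremlin console on instance B:

g.tx().rollback()              // discard any stale local transaction
g.V(40964200).values("data")   // then re-read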

Best,
Boxuan

On Feb 19, 2022, at 11:51 AM, washerath@... wrote:

Hi Boxuan,

I was not using a session on the Gremlin console, so I guess it does not need an explicit commit. Anyway, I have tried committing the transaction [ g.tx().commit() ] after opening a session, but the same behaviour is observed.

Thanks

Wasantha



Re: JanusGraph database cache on distributed setup

washerath@...
 

Hi Boxuan,

I was not using a session on the Gremlin console, so I guess it does not need an explicit commit. Anyway, I have tried committing the transaction [ g.tx().commit() ] after opening a session, but the same behaviour is observed.

Thanks

Wasantha


Re: JanusGraph database cache on distributed setup

Boxuan Li
 

Hi Wasantha,

In your example, it looks like you didn't commit your transaction on JG instance A. Uncommitted changes are only visible to the local transaction on the local instance. Can you try committing it first on A and then querying on B?

Best,
Boxuan


Re: JanusGraph database cache on distributed setup

washerath@...
 

Hi Boxuan,

I was able to change the ExpirationKCVSCache class to persist the cache in a Redis DB,

but I can still see some data anomalies between the two JG instances. For example, when I change a property of a vertex from one JG server [ g.V(40964200).property('data', 'some_other_value') ],

[screenshot: JG instance A]

it does not reflect on the other JG instance.

[screenshot: JG instance B]

When debugging the flow, we could identify that when we trigger a vertex property modification, it gets persisted in the Guava cache using the GuavaVertexCache add method, and when retrieving, it reads the data using the get method of the same class. This could be the reason for the above observation.

It feels like we might need to make modifications to the transaction-wise cache as well. Correct me if I am missing something here; I am happy to contribute the implementation to the community once this is done.

Thanks

Wasantha


Re: JanusGraph database cache on distributed setup

Boxuan Li
 

Hi Wasantha,


It's great to see that you have made some progress. If possible, it would be awesome if you could contribute your implementation to the community!

Yes, modifying `ExpirationKCVSCache` is enough. `GuavaSubqueryCache` and `GuavaVertexCache` are transaction-wise caches, so you don't want to make them global. `RelationQueryCache` and `SchemaCache` are graph-wise caches; you could make them global, but that is not necessary since they only store schema rather than real data - actually, I would recommend not doing so because JanusGraph already has a mechanism for evicting stale schema caches.

Best,
Boxuan


Re: JanusGraph database cache on distributed setup

washerath@...
 

Hi Boxuan,

I am evaluating the approach of rewriting ExpirationKCVSCache as suggested. There I could replace the existing Guava cache implementation with a connection to a remote Redis DB, so that Redis acts as a centralized cache connected to all JG instances.

While going through the JG source, I found that the same Guava cache implementation (cachebuilder = CacheBuilder.newBuilder()) is used in several other places, e.g. GuavaSubqueryCache, GuavaVertexCache, ...

Will it be sufficient to modify only ExpirationKCVSCache, or do we need to look at modifications in several other places as well?

Thanks

Wasantha


Re: Removed graphs still open in muti node cluster

hadoopmarc@...
 

Hi Lixu,

JanusGraph-0.6.0 had various changes to the ConfiguredGraphFactory which might have solved your issue:

https://github.com/JanusGraph/janusgraph/issues/2236
https://github.com/JanusGraph/janusgraph/blob/v0.5.3/janusgraph-core/src/main/java/org/janusgraph/core/ConfiguredGraphFactory.java
https://github.com/JanusGraph/janusgraph/blob/v0.6.1/janusgraph-core/src/main/java/org/janusgraph/core/ConfiguredGraphFactory.java

Can you recheck with version 0.6.1?

BTW, the release notes of v0.6.0 form an impressive list! Merely reading them takes minutes.

Best wishes,

Marc


Re: Preserve IDs when importing graphml

hadoopmarc@...
 

Hi Laura,

No answer but some relevant search results:

https://groups.google.com/g/gremlin-users/c/jUBuhhKuf0M/m/kiKMY0eHAwAJ
The graph.set-vertex-id property at: https://docs.janusgraph.org/configs/configuration-reference/#graph

In general, when working with JanusGraph, it is better to first transform the input GraphML and turn the id into a property.
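
For example, a minimal sketch of that transformation with TinkerPop's io() step (file names are placeholders):

import org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversalSource;
import org.apache.tinkerpop.gremlin.structure.Graph;
import org.apache.tinkerpop.gremlin.tinkergraph.structure.TinkerGraph;

Graph tmp = TinkerGraph.open();
GraphTraversalSource g = tmp.traversal();
g.io("input.graphml").read().iterate();                                     // load the original file
g.V().forEachRemaining(v -> v.property("original_id", v.id().toString())); // keep the old id as a property
g.io("with-ids.graphml").write().iterate();                                 // re-export for the JanusGraph import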

Best wishes,   Marc