Re: Error starting Cassandra when running janusgraph-full-0.6.1

Yingjie Li
 

Hello Marc, 
I am using macOS Monterey and Java 8 (1.8.0_251). I have been using JanusGraph 0.5.3 with no issues.


Thanks
Yingjie

On Sat, Mar 5, 2022 at 7:35 AM <hadoopmarc@...> wrote:
Hi Yingjie,

To see a NullPointerException from the very first start is rather disappointing, so thumbs up for your patience!

To make things reproducible, what OS + version and JVM + version do you use?
Did you have more luck with other JanusGraph versions?

I just checked with Ubuntu MATE 20.04 with OpenJDK Runtime Environment (build 1.8.0_312-8u312-b07-0ubuntu1~20.04-b07) and did not experience your issue.

Best wishes,      Marc


Re: Error starting Cassandra when running janusgraph-full-0.6.1

hadoopmarc@...
 

Hi Yingjie,

My answer above was not accurate. Actually, on my system the startup only succeeds with:

$ cd janusgraph-full-0.6.1
$ bin/janusgraph.sh start

So, it fails when you start it the way you did:
$ janusgraph-full-0.6.1/bin/janusgraph.sh start -v

Your way does work for v0.5.3 but not for v0.6.0 and v0.6.1, presumably because the newer script resolves its paths relative to the current working directory. I will report it as an issue, if not already present.

Happy graphing,

Marc


Re: Error starting Cassandra when running janusgraph-full-0.6.1

hadoopmarc@...
 

Hi Yingjie,

To see a NullPointerException from the very first start is rather disappointing, so thumbs up for your patience!

To make things reproducible, what OS + version and JVM + version do you use?
Did you have more luck with other JanusGraph versions?

I just checked with Ubuntu MATE 20.04 with OpenJDK Runtime Environment (build 1.8.0_312-8u312-b07-0ubuntu1~20.04-b07) and did not experience your issue.

Best wishes,      Marc


Re: Janusgraph 0.6.0 cassandra connection issues caused by: com.datastax.oss.driver.api.core.AllNodesFailedException: Could not reach any contact point, make sure you've provided valid addresses (showing first 1 nodes, use getAllErrors() for more): Node(endPoint=/127.0.0.1:9042, hostId=null, hashCode=5960d2ce): [com.datastax.oss.driver.api.core.connection.ConnectionInitException: [JanusGraph Session|control|connecting...]

Boxuan Li
 

Hi Krishna, just to make sure: you are using a fresh 0.6.1 installation, not reusing any old config from 0.6.0, right?


Error starting Cassandra when running janusgraph-full-0.6.1

Yingjie Li
 

Hello,

I came across the issue below when migrating from janusgraph-full-0.5.3 to janusgraph-full-0.6.1.

When I run 'janusgraph.sh start' after downloading janusgraph-full-0.6.1, I get an error starting Cassandra. Interestingly, after a fresh installation the problem sometimes appears and sometimes does not. I also tried modifying cassandra-env.sh to set -Djava.rmi.server.hostname=127.0.0.1 as below, but the problem did not go away; it still works only some of the time.
# JVM_OPTS="$JVM_OPTS -Djava.rmi.server.hostname=<public name>"
JVM_OPTS="$JVM_OPTS -Djava.rmi.server.hostname=127.0.0.1"

Here is the output of the verbose run with the default installation:  

./janusgraph-full-0.6.1/bin/janusgraph.sh start -v
Forking Cassandra...
Running `nodetool statusbinary`.CompilerOracle: dontinline org/apache/cassandra/db/Columns$Serializer.deserializeLargeSubset (Lorg/apache/cassandra/io/util/DataInputPlus;Lorg/apache/cassandra/db/Columns;I)Lorg/apache/cassandra/db/Columns;
CompilerOracle: dontinline org/apache/cassandra/db/Columns$Serializer.serializeLargeSubset (Ljava/util/Collection;ILorg/apache/cassandra/db/Columns;ILorg/apache/cassandra/io/util/DataOutputPlus;)V
CompilerOracle: dontinline org/apache/cassandra/db/Columns$Serializer.serializeLargeSubsetSize (Ljava/util/Collection;ILorg/apache/cassandra/db/Columns;I)I
CompilerOracle: dontinline org/apache/cassandra/db/commitlog/AbstractCommitLogSegmentManager.advanceAllocatingFrom (Lorg/apache/cassandra/db/commitlog/CommitLogSegment;)V
CompilerOracle: dontinline org/apache/cassandra/db/transform/BaseIterator.tryGetMoreContents ()Z
CompilerOracle: dontinline org/apache/cassandra/db/transform/StoppingTransformation.stop ()V
CompilerOracle: dontinline org/apache/cassandra/db/transform/StoppingTransformation.stopInPartition ()V
CompilerOracle: dontinline org/apache/cassandra/io/util/BufferedDataOutputStreamPlus.doFlush (I)V
CompilerOracle: dontinline org/apache/cassandra/io/util/BufferedDataOutputStreamPlus.writeExcessSlow ()V
CompilerOracle: dontinline org/apache/cassandra/io/util/BufferedDataOutputStreamPlus.writeSlow (JI)V
CompilerOracle: dontinline org/apache/cassandra/io/util/RebufferingInputStream.readPrimitiveSlowly (I)J
CompilerOracle: inline org/apache/cassandra/db/rows/UnfilteredSerializer.serializeRowBody (Lorg/apache/cassandra/db/rows/Row;ILorg/apache/cassandra/db/SerializationHeader;Lorg/apache/cassandra/io/util/DataOutputPlus;)V
CompilerOracle: inline org/apache/cassandra/io/util/Memory.checkBounds (JJ)V
CompilerOracle: inline org/apache/cassandra/io/util/SafeMemory.checkBounds (JJ)V
CompilerOracle: inline org/apache/cassandra/utils/AsymmetricOrdering.selectBoundary (Lorg/apache/cassandra/utils/AsymmetricOrdering/Op;II)I
CompilerOracle: inline org/apache/cassandra/utils/AsymmetricOrdering.strictnessOfLessThan (Lorg/apache/cassandra/utils/AsymmetricOrdering/Op;)I
CompilerOracle: inline org/apache/cassandra/utils/BloomFilter.indexes (Lorg/apache/cassandra/utils/IFilter/FilterKey;)[J
CompilerOracle: inline org/apache/cassandra/utils/BloomFilter.setIndexes (JJIJ[J)V
CompilerOracle: inline org/apache/cassandra/utils/ByteBufferUtil.compare (Ljava/nio/ByteBuffer;[B)I
CompilerOracle: inline org/apache/cassandra/utils/ByteBufferUtil.compare ([BLjava/nio/ByteBuffer;)I
CompilerOracle: inline org/apache/cassandra/utils/ByteBufferUtil.compareUnsigned (Ljava/nio/ByteBuffer;Ljava/nio/ByteBuffer;)I
CompilerOracle: inline org/apache/cassandra/utils/FastByteOperations$UnsafeOperations.compareTo (Ljava/lang/Object;JILjava/lang/Object;JI)I
CompilerOracle: inline org/apache/cassandra/utils/FastByteOperations$UnsafeOperations.compareTo (Ljava/lang/Object;JILjava/nio/ByteBuffer;)I
CompilerOracle: inline org/apache/cassandra/utils/FastByteOperations$UnsafeOperations.compareTo (Ljava/nio/ByteBuffer;Ljava/nio/ByteBuffer;)I
CompilerOracle: inline org/apache/cassandra/utils/vint/VIntCoding.encodeVInt (JI)[B
.INFO  [main] 2022-03-04 08:50:51,052 YamlConfigurationLoader.java:92 - Configuration location: file:/Users/yingjieli/Projects/git/Situational-Awareness/Janusgraph/test/janusgraph-full-0.6.1/cassandra/conf/cassandra.yaml
INFO  [main] 2022-03-04 08:50:51,287 Config.java:537 - Node configuration:[allocate_tokens_for_keyspace=null; authenticator=AllowAllAuthenticator; authorizer=AllowAllAuthorizer; auto_bootstrap=true; auto_snapshot=true; back_pressure_enabled=false; back_pressure_strategy=org.apache.cassandra.net.RateBasedBackPressure{high_ratio=0.9, factor=5, flow=FAST}; batch_size_fail_threshold_in_kb=50; batch_size_warn_threshold_in_kb=5; batchlog_replay_throttle_in_kb=1024; broadcast_address=null; broadcast_rpc_address=null; buffer_pool_use_heap_if_exhausted=true; cas_contention_timeout_in_ms=1000; cdc_enabled=false; cdc_free_space_check_interval_ms=250; cdc_raw_directory=null; cdc_total_space_in_mb=0; check_for_duplicate_rows_during_compaction=true; check_for_duplicate_rows_during_reads=true; client_encryption_options=<REDACTED>; cluster_name=JanusGraph Cassandra Cluster; column_index_cache_size_in_kb=2; column_index_size_in_kb=64; commit_failure_policy=stop; commitlog_compression=null; commitlog_directory=db/cassandra/commitlog; commitlog_max_compression_buffers_in_pool=3; commitlog_periodic_queue_size=-1; commitlog_segment_size_in_mb=32; commitlog_sync=periodic; commitlog_sync_batch_window_in_ms=NaN; commitlog_sync_period_in_ms=10000; commitlog_total_space_in_mb=null; compaction_large_partition_warning_threshold_mb=100; compaction_throughput_mb_per_sec=16; concurrent_compactors=null; concurrent_counter_writes=32; concurrent_materialized_view_writes=32; concurrent_reads=32; concurrent_replicates=null; concurrent_writes=32; counter_cache_keys_to_save=2147483647; counter_cache_save_period=7200; counter_cache_size_in_mb=null; counter_write_request_timeout_in_ms=5000; credentials_cache_max_entries=1000; credentials_update_interval_in_ms=-1; credentials_validity_in_ms=2000; cross_node_timeout=false; data_file_directories=[Ljava.lang.String;@43ee72e6; disk_access_mode=auto; disk_failure_policy=stop; disk_optimization_estimate_percentile=0.95; disk_optimization_page_cross_chance=0.1; disk_optimization_strategy=ssd; dynamic_snitch=true; dynamic_snitch_badness_threshold=0.1; dynamic_snitch_reset_interval_in_ms=600000; dynamic_snitch_update_interval_in_ms=100; enable_materialized_views=true; enable_sasi_indexes=true; enable_scripted_user_defined_functions=false; enable_user_defined_functions=false; enable_user_defined_functions_threads=true; encryption_options=<REDACTED>; endpoint_snitch=SimpleSnitch; file_cache_round_up=null; file_cache_size_in_mb=null; gc_log_threshold_in_ms=200; gc_warn_threshold_in_ms=1000; hinted_handoff_disabled_datacenters=[]; hinted_handoff_enabled=true; hinted_handoff_throttle_in_kb=1024; hints_compression=null; hints_directory=null; hints_flush_period_in_ms=10000; incremental_backups=false; index_interval=null; index_summary_capacity_in_mb=null; index_summary_resize_interval_in_minutes=60; initial_token=null; inter_dc_stream_throughput_outbound_megabits_per_sec=200; inter_dc_tcp_nodelay=false; internode_authenticator=null; internode_compression=dc; internode_recv_buff_size_in_bytes=0; internode_send_buff_size_in_bytes=0; key_cache_keys_to_save=2147483647; key_cache_save_period=14400; key_cache_size_in_mb=null; listen_address=localhost; listen_interface=null; listen_interface_prefer_ipv6=false; listen_on_broadcast_address=false; max_hint_window_in_ms=10800000; max_hints_delivery_threads=2; max_hints_file_size_in_mb=128; max_mutation_size_in_kb=null; max_streaming_retries=3; max_value_size_in_mb=256; memtable_allocation_type=heap_buffers; memtable_cleanup_threshold=null; 
memtable_flush_writers=0; memtable_heap_space_in_mb=null; memtable_offheap_space_in_mb=null; min_free_space_per_drive_in_mb=50; native_transport_flush_in_batches_legacy=true; native_transport_max_concurrent_connections=-1; native_transport_max_concurrent_connections_per_ip=-1; native_transport_max_concurrent_requests_in_bytes=-1; native_transport_max_concurrent_requests_in_bytes_per_ip=-1; native_transport_max_frame_size_in_mb=256; native_transport_max_negotiable_protocol_version=-2147483648; native_transport_max_threads=128; native_transport_port=9042; native_transport_port_ssl=null; num_tokens=256; otc_backlog_expiration_interval_ms=200; otc_coalescing_enough_coalesced_messages=8; otc_coalescing_strategy=DISABLED; otc_coalescing_window_us=200; partitioner=org.apache.cassandra.dht.Murmur3Partitioner; permissions_cache_max_entries=1000; permissions_update_interval_in_ms=-1; permissions_validity_in_ms=2000; phi_convict_threshold=8.0; prepared_statements_cache_size_mb=null; range_request_timeout_in_ms=10000; read_request_timeout_in_ms=5000; repair_session_max_tree_depth=18; request_scheduler=org.apache.cassandra.scheduler.NoScheduler; request_scheduler_id=null; request_scheduler_options=null; request_timeout_in_ms=10000; role_manager=CassandraRoleManager; roles_cache_max_entries=1000; roles_update_interval_in_ms=-1; roles_validity_in_ms=2000; row_cache_class_name=org.apache.cassandra.cache.OHCProvider; row_cache_keys_to_save=2147483647; row_cache_save_period=0; row_cache_size_in_mb=0; rpc_address=localhost; rpc_interface=null; rpc_interface_prefer_ipv6=false; rpc_keepalive=true; rpc_listen_backlog=50; rpc_max_threads=2147483647; rpc_min_threads=16; rpc_port=9160; rpc_recv_buff_size_in_bytes=null; rpc_send_buff_size_in_bytes=null; rpc_server_type=sync; saved_caches_directory=db/cassandra/saved_caches; seed_provider=org.apache.cassandra.locator.SimpleSeedProvider{seeds=127.0.0.1}; server_encryption_options=<REDACTED>; slow_query_log_timeout_in_ms=500; snapshot_before_compaction=false; snapshot_on_duplicate_row_detection=false; ssl_storage_port=7001; sstable_preemptive_open_interval_in_mb=50; start_native_transport=true; start_rpc=false; storage_port=7000; stream_throughput_outbound_megabits_per_sec=200; streaming_keep_alive_period_in_secs=300; streaming_socket_timeout_in_ms=86400000; thrift_framed_transport_size_in_mb=15; thrift_max_message_length_in_mb=16; thrift_prepared_statements_cache_size_mb=null; tombstone_failure_threshold=100000; tombstone_warn_threshold=1000; tracetype_query_ttl=86400; tracetype_repair_ttl=604800; transparent_data_encryption_options=org.apache.cassandra.config.TransparentDataEncryptionOptions@23529fee; trickle_fsync=false; trickle_fsync_interval_in_kb=10240; truncate_request_timeout_in_ms=60000; unlogged_batch_across_partitions_warn_threshold=10; user_defined_function_fail_timeout=1500; user_defined_function_warn_timeout=500; user_function_timeout_policy=die; windows_timer_interval=1; write_request_timeout_in_ms=2000]
INFO  [main] 2022-03-04 08:50:51,288 DatabaseDescriptor.java:381 - DiskAccessMode 'auto' determined to be mmap, indexAccessMode is mmap
INFO  [main] 2022-03-04 08:50:51,288 DatabaseDescriptor.java:439 - Global memtable on-heap threshold is enabled at 2008MB
INFO  [main] 2022-03-04 08:50:51,288 DatabaseDescriptor.java:443 - Global memtable off-heap threshold is enabled at 2008MB
Exception (java.lang.NullPointerException) encountered during startup: null
java.lang.NullPointerException
at java.nio.file.Files.provider(Files.java:97)
at java.nio.file.Files.getFileStore(Files.java:1461)
at org.apache.cassandra.io.util.FileUtils.getFileStore(FileUtils.java:682)
at org.apache.cassandra.config.DatabaseDescriptor.guessFileStore(DatabaseDescriptor.java:1087)
at org.apache.cassandra.config.DatabaseDescriptor.applySimpleConfig(DatabaseDescriptor.java:493)
at org.apache.cassandra.config.DatabaseDescriptor.applyAll(DatabaseDescriptor.java:324)
at org.apache.cassandra.config.DatabaseDescriptor.daemonInitialization(DatabaseDescriptor.java:153)
at org.apache.cassandra.config.DatabaseDescriptor.daemonInitialization(DatabaseDescriptor.java:137)
at org.apache.cassandra.service.CassandraDaemon.applyConfig(CassandraDaemon.java:680)
at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:622)
at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:786)
ERROR [main] 2022-03-04 08:50:51,295 CassandraDaemon.java:803 - Exception encountered during startup
java.lang.NullPointerException: null
at java.nio.file.Files.provider(Files.java:97) ~[na:1.8.0_251]
at java.nio.file.Files.getFileStore(Files.java:1461) ~[na:1.8.0_251]
at org.apache.cassandra.io.util.FileUtils.getFileStore(FileUtils.java:682) ~[apache-cassandra-3.11.10.jar:3.11.10]
at org.apache.cassandra.config.DatabaseDescriptor.guessFileStore(DatabaseDescriptor.java:1087) ~[apache-cassandra-3.11.10.jar:3.11.10]
at org.apache.cassandra.config.DatabaseDescriptor.applySimpleConfig(DatabaseDescriptor.java:493) ~[apache-cassandra-3.11.10.jar:3.11.10]
at org.apache.cassandra.config.DatabaseDescriptor.applyAll(DatabaseDescriptor.java:324) ~[apache-cassandra-3.11.10.jar:3.11.10]
at org.apache.cassandra.config.DatabaseDescriptor.daemonInitialization(DatabaseDescriptor.java:153) ~[apache-cassandra-3.11.10.jar:3.11.10]
at org.apache.cassandra.config.DatabaseDescriptor.daemonInitialization(DatabaseDescriptor.java:137) ~[apache-cassandra-3.11.10.jar:3.11.10]
at org.apache.cassandra.service.CassandraDaemon.applyConfig(CassandraDaemon.java:680) [apache-cassandra-3.11.10.jar:3.11.10]
at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:622) [apache-cassandra-3.11.10.jar:3.11.10]
at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:786) [apache-cassandra-3.11.10.jar:3.11.10]
.................... timeout exceeded (60 seconds)


Thanks,
Yingjie


Re: Janusgraph 0.6.0 cassandra connection issues caused by: com.datastax.oss.driver.api.core.AllNodesFailedException: Could not reach any contact point, make sure you've provided valid addresses (showing first 1 nodes, use getAllErrors() for more): Node(endPoint=/127.0.0.1:9042, hostId=null, hashCode=5960d2ce): [com.datastax.oss.driver.api.core.connection.ConnectionInitException: [JanusGraph Session|control|connecting...]

krishna.sailesh2@...
 

Hi Boxuan,

I have tried updating JanusGraph to 0.6.1; the issue is the same.
Yes, I am using JanusGraph Server. Should I use a different server?

JanusGraphFactory.open(conf);

Thanks
Krishna Jalla


Re: Janusgraph 0.6.0 cassandra connection issues caused by: com.datastax.oss.driver.api.core.AllNodesFailedException: Could not reach any contact point, make sure you've provided valid addresses (showing first 1 nodes, use getAllErrors() for more): Node(endPoint=/127.0.0.1:9042, hostId=null, hashCode=5960d2ce): [com.datastax.oss.driver.api.core.connection.ConnectionInitException: [JanusGraph Session|control|connecting...]

Boxuan Li
 

Can you try the latest version 0.6.1? There was a bug in 0.6.0 that occurs when you have multiple hostnames and are using a Gremlin (JanusGraph) Server.
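
For reference, multiple contact points go into a single comma-separated storage.hostname value. Below is a minimal sketch of opening the graph that way; the hostnames, keyspace, and datacenter are placeholders for your own cluster, and it assumes the 0.6.1 fix is in place:

import org.janusgraph.core.JanusGraph;
import org.janusgraph.core.JanusGraphFactory;

public class MultiHostExample {
    public static void main(String[] args) {
        // Placeholder values; take them from your own cluster.
        JanusGraph graph = JanusGraphFactory.build()
                .set("storage.backend", "cql")
                .set("storage.hostname", "cass01,cass02,cass03")
                .set("storage.cql.keyspace", "graphs")
                .set("storage.cql.local-datacenter", "data")
                .open();
        System.out.println("connected: " + graph.isOpen());
        graph.close();
    }
}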

Best,
Boxuan


Re: Janusgraph 0.6.0 cassandra connection issues caused by: com.datastax.oss.driver.api.core.AllNodesFailedException: Could not reach any contact point, make sure you've provided valid addresses (showing first 1 nodes, use getAllErrors() for more): Node(endPoint=/127.0.0.1:9042, hostId=null, hashCode=5960d2ce): [com.datastax.oss.driver.api.core.connection.ConnectionInitException: [JanusGraph Session|control|connecting...]

krishna.sailesh2@...
 

Hi Boxuan

Thanks for the reply. My configuration is correct; I don't know why it is showing 127.0.0.1. By the way, when I give only a single node IP, it works fine:

storage.hostname = cass01 - working fine
storage.hostname = cass01,cass02,cass03 - giving the above error

The same holds for Elasticsearch:

index.search.hostname: "esearch01" - working fine
index.search.hostname: "esearch01,esearch02esearch03" -- giving the above error



Can you please help with how to connect JanusGraph to multiple Cassandra or Elasticsearch nodes?

Thanks
Krishna Jalla


Re: Janusgraph 0.6.0 cassandra connection issues caused by: com.datastax.oss.driver.api.core.AllNodesFailedException: Could not reach any contact point, make sure you've provided valid addresses (showing first 1 nodes, use getAllErrors() for more): Node(endPoint=/127.0.0.1:9042, hostId=null, hashCode=5960d2ce): [com.datastax.oss.driver.api.core.connection.ConnectionInitException: [JanusGraph Session|control|connecting...]

Boxuan Li
 

Hi Krishna, are you sure you are using the right configuration? Your log suggests that you are using “127.0.0.1” as your hostname.

On Mar 3, 2022, at 8:09 AM, krishna.sailesh2@... wrote:

127.0.0.1


Janusgraph 0.6.0 cassandra connection issues caused by: com.datastax.oss.driver.api.core.AllNodesFailedException: Could not reach any contact point, make sure you've provided valid addresses (showing first 1 nodes, use getAllErrors() for more): Node(endPoint=/127.0.0.1:9042, hostId=null, hashCode=5960d2ce): [com.datastax.oss.driver.api.core.connection.ConnectionInitException: [JanusGraph Session|control|connecting...]

krishna.sailesh2@...
 

Hi Folks

I am trying to connect JanusGraph to Cassandra via the CQL driver:
janusgraph-cql (0.6.0), Cassandra (3.11.6), Cassandra Java driver 4.9.0.

properties:

"storage.backend:cql", "storage.hostname: "cass01,cass02,cass03"", "storage.cql.keyspace:graphs", "storage.cql.local-datacenter:data",
"storage.cql.read-consistency-level:LOCAL_QUORUM",
"storage.cql.write-consistency-level:LOCAL_QUORUM", "cache.db-cache:false",
"index.search.backend:elasticsearch", "index.search.hostname: "esearch01,esearch02esearch03"", "index.search.index-name:graphs",
"index.search.elasticsearch.client-only:true","graph.allow-upgrade:false","storage.lock.wait-time:200
Caused by: java.lang.IllegalArgumentException: Could not instantiate implementation: org.janusgraph.diskstorage.cql.CQLStoreManager
	at org.janusgraph.util.system.ConfigurationUtil.instantiate(ConfigurationUtil.java:79) ~[janusgraph-core-0.6.0.jar:?]
	at org.janusgraph.diskstorage.Backend.getImplementationClass(Backend.java:525) ~[janusgraph-core-0.6.0.jar:?]
	at org.janusgraph.diskstorage.Backend.getStorageManager(Backend.java:489) ~[janusgraph-core-0.6.0.jar:?]
	at org.janusgraph.graphdb.configuration.builder.GraphDatabaseConfigurationBuilder.build(GraphDatabaseConfigurationBuilder.java:64) ~[janusgraph-core-0.6.0.jar:?]
	at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:176) ~[janusgraph-core-0.6.0.jar:?]
	at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:147) ~[janusgraph-core-0.6.0.jar:?]
	at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:127) ~[janusgraph-core-0.6.0.jar:?]
	at com.opsramp.graphdb.core.GraphFactory.openGraph(GraphFactory.java:60) ~[graphdb-core-11.0.0-SNAPSHOT.jar:?]
	... 13 more
Caused by: java.lang.reflect.InvocationTargetException
	at jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:?]
	at jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) ~[?:?]
	at jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:?]
	at java.lang.reflect.Constructor.newInstance(Constructor.java:490) ~[?:?]
	at org.janusgraph.util.system.ConfigurationUtil.instantiate(ConfigurationUtil.java:73) ~[janusgraph-core-0.6.0.jar:?]
	at org.janusgraph.diskstorage.Backend.getImplementationClass(Backend.java:525) ~[janusgraph-core-0.6.0.jar:?]
	at org.janusgraph.diskstorage.Backend.getStorageManager(Backend.java:489) ~[janusgraph-core-0.6.0.jar:?]
	at org.janusgraph.graphdb.configuration.builder.GraphDatabaseConfigurationBuilder.build(GraphDatabaseConfigurationBuilder.java:64) ~[janusgraph-core-0.6.0.jar:?]
	at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:176) ~[janusgraph-core-0.6.0.jar:?]
	at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:147) ~[janusgraph-core-0.6.0.jar:?]
	at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:127) ~[janusgraph-core-0.6.0.jar:?]
	at com.opsramp.graphdb.core.GraphFactory.openGraph(GraphFactory.java:60) ~[graphdb-core-11.0.0-SNAPSHOT.jar:?]
	... 13 more
Caused by: com.datastax.oss.driver.api.core.AllNodesFailedException: Could not reach any contact point, make sure you've provided valid addresses (showing first 1 nodes, use getAllErrors() for more): Node(endPoint=/127.0.0.1:9042, hostId=null, hashCode=53eeb30b): [com.datastax.oss.driver.api.core.connection.ConnectionInitException: [JanusGraph Session|control|connecting...] Protocol initialization request, step 1 (OPTIONS): failed to send request (java.nio.channels.ClosedChannelException)]
	at com.datastax.oss.driver.api.core.AllNodesFailedException.copy(AllNodesFailedException.java:141) ~[java-driver-core-4.9.0.jar:?]
	at com.datastax.oss.driver.internal.core.util.concurrent.CompletableFutures.getUninterruptibly(CompletableFutures.java:149) ~[java-driver-core-4.9.0.jar:?]
	at com.datastax.oss.driver.api.core.session.SessionBuilder.build(SessionBuilder.java:697) ~[java-driver-core-4.9.0.jar:?]
	at org.janusgraph.diskstorage.cql.builder.CQLSessionBuilder.build(CQLSessionBuilder.java:95) ~[janusgraph-cql-0.6.0.jar:?]
	at org.janusgraph.diskstorage.cql.CQLStoreManager.<init>(CQLStoreManager.java:135) ~[janusgraph-cql-0.6.0.jar:?]
	at org.janusgraph.diskstorage.cql.CQLStoreManager.<init>(CQLStoreManager.java:116) ~[janusgraph-cql-0.6.0.jar:?]
	at jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:?]
	at jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) ~[?:?]
	at jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:?]
	at java.lang.reflect.Constructor.newInstance(Constructor.java:490) ~[?:?]
	at org.janusgraph.util.system.ConfigurationUtil.instantiate(ConfigurationUtil.java:73) ~[janusgraph-core-0.6.0.jar:?]
	at org.janusgraph.diskstorage.Backend.getImplementationClass(Backend.java:525) ~[janusgraph-core-0.6.0.jar:?]
	at org.janusgraph.diskstorage.Backend.getStorageManager(Backend.java:489) ~[janusgraph-core-0.6.0.jar:?]
	at org.janusgraph.graphdb.configuration.builder.GraphDatabaseConfigurationBuilder.build(GraphDatabaseConfigurationBuilder.java:64) ~[janusgraph-core-0.6.0.jar:?]
	at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:176) ~[janusgraph-core-0.6.0.jar:?]
	at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:147) ~[janusgraph-core-0.6.0.jar:?]
	at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:127) ~[janusgraph-core-0.6.0.jar:?]
	at com.opsramp.graphdb.core.GraphFactory.openGraph(GraphFactory.java:60) ~[graphdb-core-11.0.0-SNAPSHOT.jar:?]
	... 13 more
	Suppressed: com.datastax.oss.driver.api.core.connection.ConnectionInitException: [JanusGraph Session|control|connecting...] Protocol initialization request, step 1 (OPTIONS): failed to send request (java.nio.channels.ClosedChannelException)
		at com.datastax.oss.driver.internal.core.channel.ProtocolInitHandler$InitRequest.fail(ProtocolInitHandler.java:354) ~[java-driver-core-4.9.0.jar:?]
		at com.datastax.oss.driver.internal.core.channel.ChannelHandlerRequest.writeListener(ChannelHandlerRequest.java:87) ~[java-driver-core-4.9.0.jar:?]
		at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:577) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
		at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:551) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
		at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:490) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
		at io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:183) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
		at io.netty.channel.DefaultChannelPromise.addListener(DefaultChannelPromise.java:95) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
		at io.netty.channel.DefaultChannelPromise.addListener(DefaultChannelPromise.java:30) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
		at com.datastax.oss.driver.internal.core.channel.ChannelHandlerRequest.send(ChannelHandlerRequest.java:76) ~[java-driver-core-4.9.0.jar:?]
		at com.datastax.oss.driver.internal.core.channel.ProtocolInitHandler$InitRequest.send(ProtocolInitHandler.java:193) ~[java-driver-core-4.9.0.jar:?]
		at com.datastax.oss.driver.internal.core.channel.ProtocolInitHandler.onRealConnect(ProtocolInitHandler.java:124) ~[java-driver-core-4.9.0.jar:?]
		at com.datastax.oss.driver.internal.core.channel.ConnectInitHandler.lambda$connect$0(ConnectInitHandler.java:57) ~[java-driver-core-4.9.0.jar:?]
		at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:577) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
		at io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:570) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
		at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:549) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
		at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:490) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
		at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:615) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
		at io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:608) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
		at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:117) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
		at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.fulfillConnectPromise(AbstractNioChannel.java:321) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
		at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:337) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
		at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:702) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
		at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
		at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
		at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
		at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
		at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
		at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
		at java.lang.Thread.run(Thread.java:834) [?:?]
		Suppressed: io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: /127.0.0.1:9042
		Caused by: java.net.ConnectException: Connection refused
			at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[?:?]
			at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:779) ~[?:?]
			at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:330) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
			at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:334) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
			at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:702) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
			at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
			at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
			at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
			at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
			at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
			at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
			at java.lang.Thread.run(Thread.java:834) [?:?]
	Caused by: java.nio.channels.ClosedChannelException
		at io.netty.channel.AbstractChannel$AbstractUnsafe.newClosedChannelException(AbstractChannel.java:957) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
		at io.netty.channel.AbstractChannel$AbstractUnsafe.flush0(AbstractChannel.java:921) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
		at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.flush0(AbstractNioChannel.java:354) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
		at io.netty.channel.AbstractChannel$AbstractUnsafe.flush(AbstractChannel.java:897) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
		at io.netty.channel.DefaultChannelPipeline$HeadContext.flush(DefaultChannelPipeline.java:1372) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
		at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:750) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
		at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:742) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
		at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:728) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
		at io.netty.channel.ChannelDuplexHandler.flush(ChannelDuplexHandler.java:127) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
		at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:750) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
		at io.netty.channel.AbstractChannelHandlerContext.invokeWriteAndFlush(AbstractChannelHandlerContext.java:765) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
		at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:790) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
		at io.netty.channel.AbstractChannelHandlerContext.writeAndFlush(AbstractChannelHandlerContext.java:758) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
		at io.netty.channel.AbstractChannelHandlerContext.writeAndFlush(AbstractChannelHandlerContext.java:808) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
		at io.netty.channel.DefaultChannelPipeline.writeAndFlush(DefaultChannelPipeline.java:1025) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
		at io.netty.channel.AbstractChannel.writeAndFlush(AbstractChannel.java:294) ~[netty-all-4.1.51.Final.jar:4.1.51.Final]
		at com.datastax.oss.driver.internal.core.channel.ChannelHandlerRequest.send(ChannelHandlerRequest.java:75) ~[java-driver-core-4.9.0.jar:?]

 

Can you please help me with this?

Thanks
Krishna Sailesh


Re: How to determine how many nodes to use?

hadoopmarc@...
 

Hi Doug,

Some questions back:
  1. In terms of https://docs.janusgraph.org/operations/deployment/ , what does your cluster look like?
  2. Have you located any performance bottlenecks?
  3. What kind of gremlin queries are served?

Best wishes,   Marc


How to determine how many nodes to use?

Doug Whitfield
 

Hi folks,

I currently have a 3-node cluster which is experiencing performance issues. We are thinking about expanding the number of nodes. What would be the steps for determining how many nodes we need?

Best Regards,
Doug Whitfield


Re: FW: Edge Index Creation Error

hadoopmarc@...
 

What JanusGraph version do you use? Recent TinkerPop versions use Order.asc instead of Order.incr.
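
A minimal sketch of the same index build against a recent TinkerPop (fragment; run in JShell, the console, or inside your own class), assuming the 'gate_to' label exists and the 'stargate_id' key has already been created — otherwise make it first:

import org.apache.tinkerpop.gremlin.process.traversal.Order;
import org.apache.tinkerpop.gremlin.structure.Direction;
import org.janusgraph.core.EdgeLabel;
import org.janusgraph.core.PropertyKey;
import org.janusgraph.core.schema.JanusGraphManagement;

// 'graph' is an already-opened JanusGraph instance.
JanusGraphManagement mgmt = graph.openManagement();
EdgeLabel gateTo = mgmt.getEdgeLabel("gate_to");
PropertyKey stargateId = mgmt.getPropertyKey("stargate_id");
mgmt.buildEdgeIndex(gateTo, "GateToEdges", Direction.BOTH, Order.asc, stargateId);
mgmt.commit();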

Best wishes,    Marc


FW: Edge Index Creation Error

pd.vanlill@...
 

Hi,

I am trying to create an edge index from the Gremlin Console. I am executing the following:

mgmt = graph.openManagement()
gate_to = mgmt.getEdgeLabel('gate_to')
stargate_id = mgmt.makePropertyKey("stargate_id").dataType(Long.class).make()
mgmt.buildEdgeIndex(gate_to, 'GateToEdges', Direction.BOTH, Order.incr, stargate_id)
mgmt.commit()

And I am receiving this error:

No such property: incr for class: org.apache.tinkerpop.gremlin.process.traversal.Order

I have checked the JavaDoc and this static property does exist, and the docs specify using it here: https://docs.janusgraph.org/v0.3/index-management/index-performance/ under "Vertex-centric Indexes".


Issue #2181: Could not find type for id

Umesh Gade
 

Hi,
    There is an issue that has been reported for a long time: https://github.com/JanusGraph/janusgraph/issues/2181
We have hit this issue twice so far. It occurs very rarely, but its impact is major. I have added more details in a comment at the link above.

Does anybody know about this issue and its cause?

--
Sincerely,
Umesh Gade


Re: JanusGraph database cache on distributed setup

Boxuan Li
 

Thanks Marc for making it clear.

@Wasantha, how did you implement your void invalidate(StaticBuffer key, List<CachableStaticBuffer> entries) method? Make sure you evict this key from your Redis cache. The default implementation in JanusGraph does not evict it immediately. Rather, it records this key in a local HashMap called expiredKeys and evicts the entry after a timeout. If you use this approach, and you don’t store expiredKeys on Redis, then your other instance could still read stale data. I personally think the usage of expiredKeys is not necessary in your case - you could simply evict the entry from Redis in the invalidate call.
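
For illustration, a sketch of that immediate-eviction approach inside a Redis-backed KCVSCache subclass; 'redis' and encode() are assumed helpers from your own implementation, not JanusGraph APIs:

@Override
public void invalidate(StaticBuffer key, List<CachableStaticBuffer> entries) {
    // Evict from the shared Redis cache right away, so no other JanusGraph
    // instance can read stale data; with a shared cache the local
    // expiredKeys bookkeeping of the default implementation is unnecessary.
    redis.del(encode(key));
}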

If you still have a problem, probably a better way is to share your code so that we could take a look at your implementation.

Best,
Boxuan

On Feb 20, 2022, at 6:23 AM, hadoopmarc@... wrote:

If you do not use sessions, remote requests to Gremlin Server are committed automatically, see: https://tinkerpop.apache.org/docs/current/reference/#considering-transactions .

Are you sure that committing a modification is sufficient to move the change from the transaction cache to the database cache, both in the current and in your new Redis implementation? Maybe you can test with a remote modification followed by a retrieval request for the same vertex from the same client, so that the database cache is filled explicitly (before the second client attempts to retrieve it).

Marc


Re: JanusGraph database cache on distributed setup

hadoopmarc@...
 

If you do not use sessions, remote requests to Gremlin Server are committed automatically, see: https://tinkerpop.apache.org/docs/current/reference/#considering-transactions .

Are you sure that committing a modification is sufficient to move the change from the transaction cache to the database cache, both in the current and in your new Redis implementation? Maybe you can test with a remote modification followed by a retrieval request for the same vertex from the same client, so that the database cache is filled explicitly (before the second client attempts to retrieve it).

Marc


Re: JanusGraph database cache on distributed setup

Boxuan Li
 

Hi Wasantha,

I am not familiar with the transaction scope when using a remote Gremlin server, so I could be wrong, but could you try rolling back the transaction explicitly on JG instance B? Just to make sure you are not accessing the stale data cached in a local transaction.
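
For example, in a sessioned console on instance B, before re-reading the vertex (sketch; 'g' is the traversal source bound to the graph):

g.tx().rollback()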

Best,
Boxuan

On Feb 19, 2022, at 11:51 AM, washerath@... wrote:

Hi Boxuan,

I was not using a session on the Gremlin Console, so I guess there is no need to commit explicitly. Anyway, I have tried committing the transaction [ g.tx().commit() ] after opening a session, but the same behaviour is observed.

Thanks

Wasantha



Re: JanusGraph database cache on distributed setup

washerath@...
 

Hi Boxuan,

I was not using a session on the Gremlin Console, so I guess there is no need to commit explicitly. Anyway, I have tried committing the transaction [ g.tx().commit() ] after opening a session, but the same behaviour is observed.

Thanks

Wasantha
