
Re: JanusGraph database cache on distributed setup

Boxuan Li
 

Hi Wasantha,

It's great to hear that you have solved the previous problem.

Regarding contributing to the community, I would suggest you create a GitHub issue first, describing the problem and your approach there. This is not required but recommended. Then, you could create a pull request linking to that issue (note that you would also be asked to sign an individual CLA or corporate CLA once you create your first pull request in JanusGraph).

I am not 100% sure I understand your question, but I guess you are asking what exactly is stored in that cache. Basically, that cache stores the raw data fetched from the storage backend. It does not have to be a vertex: it could be vertex properties or edges. It might also contain deserialized data (see the getCache() method in Entry.java). Note that in your case, since your cache is not local, it might be a better idea to store only the raw data and not the deserialized data, to reduce network overhead. To achieve that, you could override the Entry::setCache method and let it do nothing. If you are interested in learning more about the "raw data", I wrote a blog post, "Data layout in JanusGraph", that you might find interesting.
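[Editorial illustration] The no-op setCache idea can be pictured with a minimal sketch. The RawOnlyEntry class below is a hypothetical, simplified stand-in for JanusGraph's Entry (the real interface has many more members); it only shows how an empty setCache keeps the deserialized form out of the entry, so only raw bytes would travel to a remote cache.

```java
import java.util.Arrays;

// Hypothetical stand-in for JanusGraph's Entry, for illustration only:
// setCache is a no-op, so the entry never retains the deserialized object.
class RawOnlyEntry {
    private final byte[] rawData; // raw value fetched from the storage backend
    private Object cached;        // would normally hold the deserialized form

    RawOnlyEntry(byte[] rawData) {
        this.rawData = Arrays.copyOf(rawData, rawData.length);
    }

    void setCache(Object deserialized) {
        // intentionally empty: do not retain the deserialized data
    }

    Object getCache() {
        return cached; // always null here; callers re-deserialize rawData instead
    }

    byte[] getRaw() {
        return Arrays.copyOf(rawData, rawData.length);
    }
}
```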

Hope this helps.
Boxuan


Gremlin giving stale response

Aman <amandeep.srivastava1996@...>
 

Hi,

I've built a service around JanusGraph to ingest/retrieve data. When I ingest a new edge, it is not reflected on the gremlin endpoint of JG. However, the ingested vertex count is updated correctly.

Here's what I'm doing:
1. Use API to create 2 new vertices
2. Use API to create a new edge
3. Hit gremlin API to get vertex count -> shows 2 correctly
4. Hit gremlin API to get edge count -> shows 0

When I restart the gremlin server, I'm able to see the correct edge count via the gremlin API (i.e. 1 in the above example). My hunch is that the gremlin API is using a stale transaction somewhere, hence returning incorrect data.

I tried setting 2 values of g in empty-sample.groovy:
1. globals << [g: graph.traversal()] -> This seems to give the correct number of vertices and edges always, so no issues with this one.
2. globals << [g: graph.buildTransaction().readOnly().start().traversal()] -> This seems to be having the above mentioned inconsistency issue.

I want the gremlin API to be read-only so that all ingestion happens via my custom-built APIs (I validate a few things before persisting data). I had the following questions:

1. Why does the second value of g cause inconsistent results only for edges? If it's continuing on the same transaction, shouldn't staleness disrupt vertex count results too?
2. How can I set the value of g such that traversals are always read-only and every request starts a new transaction, ending it after the response has been returned to the client?

I would appreciate any input from the group.

Regards,
Aman


Re: JanusGraph database cache on distributed setup

washerath@...
 

Hi Boxuan,

We were able to overcome the sync issue across multiple JG instances after modifying void invalidate(StaticBuffer key, List<CachableStaticBuffer> entries) as suggested. We had a few performance issues to resolve, which took considerable effort.

I am happy to contribute the implementation to the community since it's almost done. Please guide me on that.

Another question about the cache:

private final Cache<KeySliceQuery,EntryList> cache;

As per the above initialization, the cache maps a specific query to the result of that query. It does not cache all the traversed vertices. Is my understanding correct?
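[Editorial illustration] That behaviour can be modelled with a toy sketch (plain strings stand in for KeySliceQuery and EntryList, which this sketch does not reproduce): the map is keyed by the query itself, so two different slice queries that touch the same vertex occupy two separate entries.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model of a query-level cache: the key is the slice query itself,
// not a vertex id, so each distinct query is cached independently.
class QueryResultCache {
    private final Map<String, List<String>> cache = new HashMap<>();

    void put(String query, List<String> result) {
        cache.put(query, result);
    }

    List<String> get(String query) {
        return cache.get(query); // null on a cache miss
    }
}
```

For example, one query for a vertex's properties and another for its edges would be cached under two separate keys, even though both concern the same vertex.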

Thanks
Wasantha


Connective predicate with id throws an IllegalArgumentException (Invalid condition)

toom@...
 

Hello,
 
If a query contains a connective predicate with string ids, it fails with an IllegalArgumentException:
 
gremlin> g.V().or(has(T.id, "1933504"), has(T.id, "2265280"), has(T.id, "2027592")) 
java.lang.IllegalArgumentException: Invalid condition: [2265280, 2027592, 1933504]                               
        at com.google.common.base.Preconditions.checkArgument(Preconditions.java:217)                  
        at org.janusgraph.graphdb.query.graph.GraphCentricQueryBuilder.has(GraphCentricQueryBuilder.java:148)
        at org.janusgraph.graphdb.query.graph.GraphCentricQueryBuilder.has(GraphCentricQueryBuilder.java:67)
        at org.janusgraph.graphdb.tinkerpop.optimize.step.JanusGraphStep.addConstraint(JanusGraphStep.java:168)
        at org.janusgraph.graphdb.tinkerpop.optimize.step.JanusGraphStep.buildGlobalGraphCentricQuery(JanusGraphStep.java:156)
        at org.janusgraph.graphdb.tinkerpop.optimize.step.JanusGraphStep.buildGlobalGraphCentricQuery(JanusGraphStep.java:131)
 
The condition is invalid because the predicate value is not a List [1] but a HashSet. The value is changed to a HashSet by HasContainer because it detects id strings [2].
 
The same query with numerical ids works.
 
I think the value of a connective predicate could be a Collection. The isValidCondition check could test against this superclass.
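[Editorial illustration] The suggested relaxation can be sketched as follows; this illustrates the shape of the check only, not the actual JanusGraph patch:

```java
import java.util.Collection;
import java.util.List;

class ConditionValidity {
    // Roughly the current behaviour: only a List is accepted, so the
    // HashSet produced by HasContainer for string ids is rejected.
    static boolean listOnly(Object value) {
        return value instanceof List;
    }

    // Suggested check: accept any Collection, which covers both the List
    // used for numeric ids and the HashSet used for string ids.
    static boolean anyCollection(Object value) {
        return value instanceof Collection;
    }
}
```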
 
Regards,
 
Toom.
 
[1] https://github.com/JanusGraph/janusgraph/blob/v0.6.1/janusgraph-core/src/main/java/org/janusgraph/graphdb/predicate/ConnectiveJanusPredicate.java#L45-L47
[2] https://github.com/apache/tinkerpop/blob/3.5.1/gremlin-core/src/main/java/org/apache/tinkerpop/gremlin/process/traversal/step/util/HasContainer.java#L69-L72


Re: Janusgraph 0.6.0 cassandra connection issues caused by: com.datastax.oss.driver.api.core.AllNodesFailedException: Could not reach any contact point, make sure you've provided valid addresses (showing first 1 nodes, use getAllErrors() for more): Node(endPoint=/127.0.0.1:9042, hostId=null, hashCode=5960d2ce): [com.datastax.oss.driver.api.core.connection.ConnectionInitException: [JanusGraph Session|control|connecting...]

Boxuan Li
 

Hi Krishna,

Sorry for the late reply. Can you try one thing:

storage.hostname = cass01,cass01

See if this config works or not. If this works, then likely the problem is not with JanusGraph. Otherwise, there might be a bug, and it would be helpful if you could provide your complete configuration and steps.

You might also want to try the last stable version, 0.5.3, and see if you have the same issue.

Best,
Boxuan

On Mar 7, 2022, at 5:50 AM, krishna.sailesh2@... wrote:

Hi hadoopmarc

It's a typo in the post; I typed it that way just so people could see that I am passing it as a string.
In the actual code, I use a properties file, read those properties into a Configuration2 object, and pass that to JanusGraph.

Thanks
Krishna Jalla


Re: JanusGraph Best Practice to Store the Data

Boxuan Li
 

Hi,

There are a few factors you might want to consider:

1. An increase in your transaction-level cache and database-level cache memory usage.
2. Cassandra does not handle large column values well. 100-500 KB is far below the hard limit, but some say that this scale can also lead to performance issues (disclaimer: I've never tried it myself).
3. Serialization and deserialization cost. To reduce storage and network overhead, JanusGraph encodes and compresses your string values (see StringSerializer). That said, I believe this overhead should (usually) still be much smaller than an additional network call (if you store docValue somewhere else).
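[Editorial illustration] The compression trade-off can be shown with the JDK's Deflater; JanusGraph's StringSerializer uses its own encoding, so this only demonstrates the general point that a repetitive JSON document shrinks substantially, at some CPU cost on each write and read.

```java
import java.util.zip.Deflater;

class CompressionDemo {
    // Compress a payload with DEFLATE and return the compressed size in bytes.
    static int compressedSize(byte[] input) {
        Deflater deflater = new Deflater();
        deflater.setInput(input);
        deflater.finish();
        byte[] out = new byte[input.length + 64]; // large enough even for incompressible input
        int n = deflater.deflate(out);
        deflater.end();
        return n;
    }
}
```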

The best option depends on your use case and your testing, of course.

Best,
Boxuan


On Mar 9, 2022, at 8:22 AM, kaintharinder@... wrote:

[Edited Message Follows]

Hi Team,

We are running a JanusGraph + Cassandra combination, storing data through Gremlin commands from the Java API.
We are thinking of saving the full JSON document into the graph alongside the relationships.

The Gremlin query looks like this:
g.addV('Segment').property("docId", docId).property("docValue", docValue).property("docSize", docSize)

The "docValue" value will be large, in the range of 100-500 KB; it is a JSON document.
We wanted to understand whether it is good practice to save full documents in the graph, or whether we should only store references.


JanusGraph Best Practice to Store the Data

kaintharinder@...
 
Edited

Hi Team,

We are running a JanusGraph + Cassandra combination, storing data through Gremlin commands from the Java API.
We are thinking of saving the full JSON document into the graph alongside the relationships.

The Gremlin query looks like this:
g.addV('Segment').property("docId", docId).property("docValue", docValue).property("docSize", docSize)

The "docValue" value will be large, in the range of 100-500 KB; it is a JSON document.
We wanted to understand whether it is good practice to save full documents in the graph, or whether we should only store references.


Re: Janusgraph 0.6.0 cassandra connection issues caused by: com.datastax.oss.driver.api.core.AllNodesFailedException: Could not reach any contact point, make sure you've provided valid addresses (showing first 1 nodes, use getAllErrors() for more): Node(endPoint=/127.0.0.1:9042, hostId=null, hashCode=5960d2ce): [com.datastax.oss.driver.api.core.connection.ConnectionInitException: [JanusGraph Session|control|connecting...]

krishna.sailesh2@...
 

Hi hadoopmarc

It's a typo in the post; I typed it that way just so people could see that I am passing it as a string.
In the actual code, I use a properties file, read those properties into a Configuration2 object, and pass that to JanusGraph.

Thanks
Krishna Jalla


Re: Janusgraph 0.6.0 cassandra connection issues caused by: com.datastax.oss.driver.api.core.AllNodesFailedException: Could not reach any contact point, make sure you've provided valid addresses (showing first 1 nodes, use getAllErrors() for more): Node(endPoint=/127.0.0.1:9042, hostId=null, hashCode=5960d2ce): [com.datastax.oss.driver.api.core.connection.ConnectionInitException: [JanusGraph Session|control|connecting...]

hadoopmarc@...
 

Hi Krishna,

The properties you cite are not quoted in the right way:
"storage.hostname: "cass01,cass02,cass03""
Are these typos in the post or in your actual code?

Best wishes,   Marc


Re: Janusgraph 0.6.0 cassandra connection issues caused by: com.datastax.oss.driver.api.core.AllNodesFailedException: Could not reach any contact point, make sure you've provided valid addresses (showing first 1 nodes, use getAllErrors() for more): Node(endPoint=/127.0.0.1:9042, hostId=null, hashCode=5960d2ce): [com.datastax.oss.driver.api.core.connection.ConnectionInitException: [JanusGraph Session|control|connecting...]

krishna.sailesh2@...
 

Hi Boxuan,

I am using the same configuration mentioned above in the properties file, loading it into a Configuration2 object in Java, and then opening JanusGraph.

Thanks
Krishna Jalla


No writes: Janusgraph bigtable

51kumarakhil@...
 

Using ConfiguredGraphFactory, we create the graph by sending the query below via the Gremlin client:
 
 
let client = new gremlin.driver.Client('ws://XX.XX.XX.XX:8182/gremlin');
client.submit('ConfiguredGraphFactory.create("g_day5_01"); 0');
 
We observed the logs below in the JanusGraph Server (0.5.3) console:
 
202164 [gremlin-server-exec-4] INFO  org.janusgraph.diskstorage.Backend  - Initiated backend operations thread pool of size 16
202251 [gremlin-server-exec-4] WARN  com.google.cloud.bigtable.hbase2_x.BigtableAdmin  - Table g_day5_01 was disabled in memory only.
204248 [gremlin-server-exec-4] WARN  com.google.cloud.bigtable.hbase2_x.BigtableAdmin  - Table g_day5_01 was enabled in memory only.
204308 [gremlin-server-exec-4] WARN  com.google.cloud.bigtable.hbase2_x.BigtableAdmin  - Table g_day5_01 was disabled in memory only.
206152 [gremlin-server-exec-4] WARN  com.google.cloud.bigtable.hbase2_x.BigtableAdmin  - Table g_day5_01 was enabled in memory only.
206189 [gremlin-server-exec-4] WARN  com.google.cloud.bigtable.hbase2_x.BigtableAdmin  - Table g_day5_01 was disabled in memory only.
208040 [gremlin-server-exec-4] WARN  com.google.cloud.bigtable.hbase2_x.BigtableAdmin  - Table g_day5_01 was enabled in memory only.
208077 [gremlin-server-exec-4] WARN  com.google.cloud.bigtable.hbase2_x.BigtableAdmin  - Table g_day5_01 was disabled in memory only.
209972 [gremlin-server-exec-4] WARN  com.google.cloud.bigtable.hbase2_x.BigtableAdmin  - Table g_day5_01 was enabled in memory only.
210007 [gremlin-server-exec-4] WARN  com.google.cloud.bigtable.hbase2_x.BigtableAdmin  - Table g_day5_01 was disabled in memory only.
212007 [gremlin-server-exec-4] WARN  com.google.cloud.bigtable.hbase2_x.BigtableAdmin  - Table g_day5_01 was enabled in memory only.
212047 [gremlin-server-exec-4] WARN  com.google.cloud.bigtable.hbase2_x.BigtableAdmin  - Table g_day5_01 was disabled in memory only.
213878 [gremlin-server-exec-4] WARN  com.google.cloud.bigtable.hbase2_x.BigtableAdmin  - Table g_day5_01 was enabled in memory only.
213982 [gremlin-server-exec-4] WARN  com.google.cloud.bigtable.hbase2_x.BigtableAdmin  - Table g_day5_01 was disabled in memory only.
215822 [gremlin-server-exec-4] WARN  com.google.cloud.bigtable.hbase2_x.BigtableAdmin  - Table g_day5_01 was enabled in memory only.
215974 [gremlin-server-exec-4] WARN  com.google.cloud.bigtable.hbase2_x.BigtableAdmin  - Table g_day5_01 was disabled in memory only.
218262 [gremlin-server-exec-4] WARN  com.google.cloud.bigtable.hbase2_x.BigtableAdmin  - Table g_day5_01 was enabled in memory only.
 
After multiple enabled/disabled log lines, we can also see the graph being created in BigTable with the name "g_day5_01".
 
 
After this, we run queries to insert data, again using the Gremlin client:
 
 
for (let i of data) {
  client.submit(`g_day5_01_traversal.addV('${i.label}').property('${i.key1}', '${i.value1}')`);
}
 
 
After looping through all the data, we observed that there is no data in the graph. We also checked the gremlin-server logs and, yes, the queries are reaching the server.

Config File:
storage.backend=hbase
storage.hbase.ext.google.bigtable.instance.id=XXXXXXXXXXX
storage.hbase.ext.google.bigtable.project.id=XXXXXXXXXXXXX
storage.hbase.ext.hbase.client.connection.impl=com.google.cloud.bigtable.hbase2_x.BigtableConnection
graph.timestamps=MICRO
storage.lock.wait-time=100


Re: Error starting Cassandra when running janusgraph-full-0.6.1

Yingjie Li
 

Hello Marc,

YES, You are right. 

./bin/janusgraph.sh start gives no error, and ./janusgraph-full-0.6.1/bin/janusgraph.sh start does.

Yingjie


On Sat, Mar 5, 2022 at 7:56 AM <hadoopmarc@...> wrote:
Hi Yingjie,

My answer above was not accurate. Actually, on my system the startup only succeeds with:

$ cd janusgraph-full-0.6.1
$ bin/janusgraph.sh start

So, it fails when you start it the way you did:
$ janusgraph-full-0.6.1/bin/janusgraph.sh start -v

Your way does work for v0.5.3 but not for v0.6.0 and v0.6.1. I will report it as an issue, if not already present.

Happy graphing,

Marc


Re: Error starting Cassandra when running janusgraph-full-0.6.1

Yingjie Li
 

Hello Marc, 
I am using Mac OS Monterey and Java 8 (1.8.0_251). I have been using Janusgraph 0.5.3 with no issues.


Thanks
Yingjie

On Sat, Mar 5, 2022 at 7:35 AM <hadoopmarc@...> wrote:
Hi Yingjie

To see a NullPointerException from the very first start is rather disappointing, so thumbs up for your patience!

To make things reproducible, which OS + version and JVM + version do you use?
Did you have more luck with other JanusGraph versions?

I just checked with Ubuntu MATE 20.04 with OpenJDK Runtime Environment (build 1.8.0_312-8u312-b07-0ubuntu1~20.04-b07) and did not experience your issue.

Best wishes,      Marc


Re: Error starting Cassandra when running janusgraph-full-0.6.1

hadoopmarc@...
 

Hi Yingjie,

My answer above was not accurate. Actually, on my system the startup only succeeds with:

$ cd janusgraph-full-0.6.1
$ bin/janusgraph.sh start

So, it fails when you start it the way you did:
$ janusgraph-full-0.6.1/bin/janusgraph.sh start -v

Your way does work for v0.5.3 but not for v0.6.0 and v0.6.1. I will report it as an issue, if not already present.

Happy graphing,

Marc


Re: Error starting Cassandra when running janusgraph-full-0.6.1

hadoopmarc@...
 

Hi Yingjie

To see a NullPointerException from the very first start is rather disappointing, so thumbs up for your patience!

To make things reproducible, which OS + version and JVM + version do you use?
Did you have more luck with other JanusGraph versions?

I just checked with Ubuntu MATE 20.04 with OpenJDK Runtime Environment (build 1.8.0_312-8u312-b07-0ubuntu1~20.04-b07) and did not experience your issue.

Best wishes,      Marc


Re: Janusgraph 0.6.0 cassandra connection issues caused by: com.datastax.oss.driver.api.core.AllNodesFailedException: Could not reach any contact point, make sure you've provided valid addresses (showing first 1 nodes, use getAllErrors() for more): Node(endPoint=/127.0.0.1:9042, hostId=null, hashCode=5960d2ce): [com.datastax.oss.driver.api.core.connection.ConnectionInitException: [JanusGraph Session|control|connecting...]

Boxuan Li
 

Hi Krishna, just want to make sure you are using a fresh 0.6.1 installation, not using any of the old config coming from 0.6.0, right? 


Error starting Cassandra when running janusgraph-full-0.6.1

Yingjie Li
 

Hello,

I came across the issue below when migrating from janusgraph-full-0.5.3 to janusgraph-full-0.6.1.

When I run 'janusgraph.sh start' after downloading janusgraph-full-0.6.1, I get an error starting Cassandra. Interestingly, after a fresh installation the problem sometimes appears and sometimes does not. I also tried modifying cassandra-env.sh to set -Djava.rmi.server.hostname=127.0.0.1 as below, but the problem did not go away; it still works only intermittently.
# JVM_OPTS="$JVM_OPTS -Djava.rmi.server.hostname=<public name>"
JVM_OPTS="$JVM_OPTS -Djava.rmi.server.hostname=127.0.0.1"

Here is the output of the verbose run with the default installation:  

./janusgraph-full-0.6.1/bin/janusgraph.sh start -v
Forking Cassandra...
Running `nodetool statusbinary`.CompilerOracle: dontinline org/apache/cassandra/db/Columns$Serializer.deserializeLargeSubset (Lorg/apache/cassandra/io/util/DataInputPlus;Lorg/apache/cassandra/db/Columns;I)Lorg/apache/cassandra/db/Columns;
CompilerOracle: dontinline org/apache/cassandra/db/Columns$Serializer.serializeLargeSubset (Ljava/util/Collection;ILorg/apache/cassandra/db/Columns;ILorg/apache/cassandra/io/util/DataOutputPlus;)V
CompilerOracle: dontinline org/apache/cassandra/db/Columns$Serializer.serializeLargeSubsetSize (Ljava/util/Collection;ILorg/apache/cassandra/db/Columns;I)I
CompilerOracle: dontinline org/apache/cassandra/db/commitlog/AbstractCommitLogSegmentManager.advanceAllocatingFrom (Lorg/apache/cassandra/db/commitlog/CommitLogSegment;)V
CompilerOracle: dontinline org/apache/cassandra/db/transform/BaseIterator.tryGetMoreContents ()Z
CompilerOracle: dontinline org/apache/cassandra/db/transform/StoppingTransformation.stop ()V
CompilerOracle: dontinline org/apache/cassandra/db/transform/StoppingTransformation.stopInPartition ()V
CompilerOracle: dontinline org/apache/cassandra/io/util/BufferedDataOutputStreamPlus.doFlush (I)V
CompilerOracle: dontinline org/apache/cassandra/io/util/BufferedDataOutputStreamPlus.writeExcessSlow ()V
CompilerOracle: dontinline org/apache/cassandra/io/util/BufferedDataOutputStreamPlus.writeSlow (JI)V
CompilerOracle: dontinline org/apache/cassandra/io/util/RebufferingInputStream.readPrimitiveSlowly (I)J
CompilerOracle: inline org/apache/cassandra/db/rows/UnfilteredSerializer.serializeRowBody (Lorg/apache/cassandra/db/rows/Row;ILorg/apache/cassandra/db/SerializationHeader;Lorg/apache/cassandra/io/util/DataOutputPlus;)V
CompilerOracle: inline org/apache/cassandra/io/util/Memory.checkBounds (JJ)V
CompilerOracle: inline org/apache/cassandra/io/util/SafeMemory.checkBounds (JJ)V
CompilerOracle: inline org/apache/cassandra/utils/AsymmetricOrdering.selectBoundary (Lorg/apache/cassandra/utils/AsymmetricOrdering/Op;II)I
CompilerOracle: inline org/apache/cassandra/utils/AsymmetricOrdering.strictnessOfLessThan (Lorg/apache/cassandra/utils/AsymmetricOrdering/Op;)I
CompilerOracle: inline org/apache/cassandra/utils/BloomFilter.indexes (Lorg/apache/cassandra/utils/IFilter/FilterKey;)[J
CompilerOracle: inline org/apache/cassandra/utils/BloomFilter.setIndexes (JJIJ[J)V
CompilerOracle: inline org/apache/cassandra/utils/ByteBufferUtil.compare (Ljava/nio/ByteBuffer;[B)I
CompilerOracle: inline org/apache/cassandra/utils/ByteBufferUtil.compare ([BLjava/nio/ByteBuffer;)I
CompilerOracle: inline org/apache/cassandra/utils/ByteBufferUtil.compareUnsigned (Ljava/nio/ByteBuffer;Ljava/nio/ByteBuffer;)I
CompilerOracle: inline org/apache/cassandra/utils/FastByteOperations$UnsafeOperations.compareTo (Ljava/lang/Object;JILjava/lang/Object;JI)I
CompilerOracle: inline org/apache/cassandra/utils/FastByteOperations$UnsafeOperations.compareTo (Ljava/lang/Object;JILjava/nio/ByteBuffer;)I
CompilerOracle: inline org/apache/cassandra/utils/FastByteOperations$UnsafeOperations.compareTo (Ljava/nio/ByteBuffer;Ljava/nio/ByteBuffer;)I
CompilerOracle: inline org/apache/cassandra/utils/vint/VIntCoding.encodeVInt (JI)[B
.INFO  [main] 2022-03-04 08:50:51,052 YamlConfigurationLoader.java:92 - Configuration location: file:/Users/yingjieli/Projects/git/Situational-Awareness/Janusgraph/test/janusgraph-full-0.6.1/cassandra/conf/cassandra.yaml
INFO  [main] 2022-03-04 08:50:51,287 Config.java:537 - Node configuration:[allocate_tokens_for_keyspace=null; authenticator=AllowAllAuthenticator; authorizer=AllowAllAuthorizer; auto_bootstrap=true; auto_snapshot=true; back_pressure_enabled=false; back_pressure_strategy=org.apache.cassandra.net.RateBasedBackPressure{high_ratio=0.9, factor=5, flow=FAST}; batch_size_fail_threshold_in_kb=50; batch_size_warn_threshold_in_kb=5; batchlog_replay_throttle_in_kb=1024; broadcast_address=null; broadcast_rpc_address=null; buffer_pool_use_heap_if_exhausted=true; cas_contention_timeout_in_ms=1000; cdc_enabled=false; cdc_free_space_check_interval_ms=250; cdc_raw_directory=null; cdc_total_space_in_mb=0; check_for_duplicate_rows_during_compaction=true; check_for_duplicate_rows_during_reads=true; client_encryption_options=<REDACTED>; cluster_name=JanusGraph Cassandra Cluster; column_index_cache_size_in_kb=2; column_index_size_in_kb=64; commit_failure_policy=stop; commitlog_compression=null; commitlog_directory=db/cassandra/commitlog; commitlog_max_compression_buffers_in_pool=3; commitlog_periodic_queue_size=-1; commitlog_segment_size_in_mb=32; commitlog_sync=periodic; commitlog_sync_batch_window_in_ms=NaN; commitlog_sync_period_in_ms=10000; commitlog_total_space_in_mb=null; compaction_large_partition_warning_threshold_mb=100; compaction_throughput_mb_per_sec=16; concurrent_compactors=null; concurrent_counter_writes=32; concurrent_materialized_view_writes=32; concurrent_reads=32; concurrent_replicates=null; concurrent_writes=32; counter_cache_keys_to_save=2147483647; counter_cache_save_period=7200; counter_cache_size_in_mb=null; counter_write_request_timeout_in_ms=5000; credentials_cache_max_entries=1000; credentials_update_interval_in_ms=-1; credentials_validity_in_ms=2000; cross_node_timeout=false; data_file_directories=[Ljava.lang.String;@43ee72e6; disk_access_mode=auto; disk_failure_policy=stop; disk_optimization_estimate_percentile=0.95; disk_optimization_page_cross_chance=0.1; 
disk_optimization_strategy=ssd; dynamic_snitch=true; dynamic_snitch_badness_threshold=0.1; dynamic_snitch_reset_interval_in_ms=600000; dynamic_snitch_update_interval_in_ms=100; enable_materialized_views=true; enable_sasi_indexes=true; enable_scripted_user_defined_functions=false; enable_user_defined_functions=false; enable_user_defined_functions_threads=true; encryption_options=<REDACTED>; endpoint_snitch=SimpleSnitch; file_cache_round_up=null; file_cache_size_in_mb=null; gc_log_threshold_in_ms=200; gc_warn_threshold_in_ms=1000; hinted_handoff_disabled_datacenters=[]; hinted_handoff_enabled=true; hinted_handoff_throttle_in_kb=1024; hints_compression=null; hints_directory=null; hints_flush_period_in_ms=10000; incremental_backups=false; index_interval=null; index_summary_capacity_in_mb=null; index_summary_resize_interval_in_minutes=60; initial_token=null; inter_dc_stream_throughput_outbound_megabits_per_sec=200; inter_dc_tcp_nodelay=false; internode_authenticator=null; internode_compression=dc; internode_recv_buff_size_in_bytes=0; internode_send_buff_size_in_bytes=0; key_cache_keys_to_save=2147483647; key_cache_save_period=14400; key_cache_size_in_mb=null; listen_address=localhost; listen_interface=null; listen_interface_prefer_ipv6=false; listen_on_broadcast_address=false; max_hint_window_in_ms=10800000; max_hints_delivery_threads=2; max_hints_file_size_in_mb=128; max_mutation_size_in_kb=null; max_streaming_retries=3; max_value_size_in_mb=256; memtable_allocation_type=heap_buffers; memtable_cleanup_threshold=null; memtable_flush_writers=0; memtable_heap_space_in_mb=null; memtable_offheap_space_in_mb=null; min_free_space_per_drive_in_mb=50; native_transport_flush_in_batches_legacy=true; native_transport_max_concurrent_connections=-1; native_transport_max_concurrent_connections_per_ip=-1; native_transport_max_concurrent_requests_in_bytes=-1; native_transport_max_concurrent_requests_in_bytes_per_ip=-1; native_transport_max_frame_size_in_mb=256; 
native_transport_max_negotiable_protocol_version=-2147483648; native_transport_max_threads=128; native_transport_port=9042; native_transport_port_ssl=null; num_tokens=256; otc_backlog_expiration_interval_ms=200; otc_coalescing_enough_coalesced_messages=8; otc_coalescing_strategy=DISABLED; otc_coalescing_window_us=200; partitioner=org.apache.cassandra.dht.Murmur3Partitioner; permissions_cache_max_entries=1000; permissions_update_interval_in_ms=-1; permissions_validity_in_ms=2000; phi_convict_threshold=8.0; prepared_statements_cache_size_mb=null; range_request_timeout_in_ms=10000; read_request_timeout_in_ms=5000; repair_session_max_tree_depth=18; request_scheduler=org.apache.cassandra.scheduler.NoScheduler; request_scheduler_id=null; request_scheduler_options=null; request_timeout_in_ms=10000; role_manager=CassandraRoleManager; roles_cache_max_entries=1000; roles_update_interval_in_ms=-1; roles_validity_in_ms=2000; row_cache_class_name=org.apache.cassandra.cache.OHCProvider; row_cache_keys_to_save=2147483647; row_cache_save_period=0; row_cache_size_in_mb=0; rpc_address=localhost; rpc_interface=null; rpc_interface_prefer_ipv6=false; rpc_keepalive=true; rpc_listen_backlog=50; rpc_max_threads=2147483647; rpc_min_threads=16; rpc_port=9160; rpc_recv_buff_size_in_bytes=null; rpc_send_buff_size_in_bytes=null; rpc_server_type=sync; saved_caches_directory=db/cassandra/saved_caches; seed_provider=org.apache.cassandra.locator.SimpleSeedProvider{seeds=127.0.0.1}; server_encryption_options=<REDACTED>; slow_query_log_timeout_in_ms=500; snapshot_before_compaction=false; snapshot_on_duplicate_row_detection=false; ssl_storage_port=7001; sstable_preemptive_open_interval_in_mb=50; start_native_transport=true; start_rpc=false; storage_port=7000; stream_throughput_outbound_megabits_per_sec=200; streaming_keep_alive_period_in_secs=300; streaming_socket_timeout_in_ms=86400000; thrift_framed_transport_size_in_mb=15; thrift_max_message_length_in_mb=16; 
thrift_prepared_statements_cache_size_mb=null; tombstone_failure_threshold=100000; tombstone_warn_threshold=1000; tracetype_query_ttl=86400; tracetype_repair_ttl=604800; transparent_data_encryption_options=org.apache.cassandra.config.TransparentDataEncryptionOptions@23529fee; trickle_fsync=false; trickle_fsync_interval_in_kb=10240; truncate_request_timeout_in_ms=60000; unlogged_batch_across_partitions_warn_threshold=10; user_defined_function_fail_timeout=1500; user_defined_function_warn_timeout=500; user_function_timeout_policy=die; windows_timer_interval=1; write_request_timeout_in_ms=2000]
INFO  [main] 2022-03-04 08:50:51,288 DatabaseDescriptor.java:381 - DiskAccessMode 'auto' determined to be mmap, indexAccessMode is mmap
INFO  [main] 2022-03-04 08:50:51,288 DatabaseDescriptor.java:439 - Global memtable on-heap threshold is enabled at 2008MB
INFO  [main] 2022-03-04 08:50:51,288 DatabaseDescriptor.java:443 - Global memtable off-heap threshold is enabled at 2008MB
Exception (java.lang.NullPointerException) encountered during startup: null
java.lang.NullPointerException
at java.nio.file.Files.provider(Files.java:97)
at java.nio.file.Files.getFileStore(Files.java:1461)
at org.apache.cassandra.io.util.FileUtils.getFileStore(FileUtils.java:682)
at org.apache.cassandra.config.DatabaseDescriptor.guessFileStore(DatabaseDescriptor.java:1087)
at org.apache.cassandra.config.DatabaseDescriptor.applySimpleConfig(DatabaseDescriptor.java:493)
at org.apache.cassandra.config.DatabaseDescriptor.applyAll(DatabaseDescriptor.java:324)
at org.apache.cassandra.config.DatabaseDescriptor.daemonInitialization(DatabaseDescriptor.java:153)
at org.apache.cassandra.config.DatabaseDescriptor.daemonInitialization(DatabaseDescriptor.java:137)
at org.apache.cassandra.service.CassandraDaemon.applyConfig(CassandraDaemon.java:680)
at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:622)
at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:786)
ERROR [main] 2022-03-04 08:50:51,295 CassandraDaemon.java:803 - Exception encountered during startup
java.lang.NullPointerException: null
at java.nio.file.Files.provider(Files.java:97) ~[na:1.8.0_251]
at java.nio.file.Files.getFileStore(Files.java:1461) ~[na:1.8.0_251]
at org.apache.cassandra.io.util.FileUtils.getFileStore(FileUtils.java:682) ~[apache-cassandra-3.11.10.jar:3.11.10]
at org.apache.cassandra.config.DatabaseDescriptor.guessFileStore(DatabaseDescriptor.java:1087) ~[apache-cassandra-3.11.10.jar:3.11.10]
at org.apache.cassandra.config.DatabaseDescriptor.applySimpleConfig(DatabaseDescriptor.java:493) ~[apache-cassandra-3.11.10.jar:3.11.10]
at org.apache.cassandra.config.DatabaseDescriptor.applyAll(DatabaseDescriptor.java:324) ~[apache-cassandra-3.11.10.jar:3.11.10]
at org.apache.cassandra.config.DatabaseDescriptor.daemonInitialization(DatabaseDescriptor.java:153) ~[apache-cassandra-3.11.10.jar:3.11.10]
at org.apache.cassandra.config.DatabaseDescriptor.daemonInitialization(DatabaseDescriptor.java:137) ~[apache-cassandra-3.11.10.jar:3.11.10]
at org.apache.cassandra.service.CassandraDaemon.applyConfig(CassandraDaemon.java:680) [apache-cassandra-3.11.10.jar:3.11.10]
at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:622) [apache-cassandra-3.11.10.jar:3.11.10]
at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:786) [apache-cassandra-3.11.10.jar:3.11.10]
.................... timeout exceeded (60 seconds)


Thanks,
Yingjie


Re: Janusgraph 0.6.0 cassandra connection issues caused by: com.datastax.oss.driver.api.core.AllNodesFailedException: Could not reach any contact point, make sure you've provided valid addresses (showing first 1 nodes, use getAllErrors() for more): Node(endPoint=/127.0.0.1:9042, hostId=null, hashCode=5960d2ce): [com.datastax.oss.driver.api.core.connection.ConnectionInitException: [JanusGraph Session|control|connecting...]

krishna.sailesh2@...
 

HI Boxuan,

I have tried updating JanusGraph to 0.6.1; the issue is the same.
Yes, I am using JanusGraph Server. Should I use a different server?

JanusGraphFactory.open(conf);

Thanks
Krishna Jalla


Re: Janusgraph 0.6.0 cassandra connection issues caused by: com.datastax.oss.driver.api.core.AllNodesFailedException: Could not reach any contact point, make sure you've provided valid addresses (showing first 1 nodes, use getAllErrors() for more): Node(endPoint=/127.0.0.1:9042, hostId=null, hashCode=5960d2ce): [com.datastax.oss.driver.api.core.connection.ConnectionInitException: [JanusGraph Session|control|connecting...]

Boxuan Li
 

Can you try the latest version 0.6.1? There was a bug in 0.6.0 which occurs when you have multiple hostnames and you are using a gremlin (JanusGraph) server.

Best,
Boxuan
