[Performance Issue] Large partitions formed on janusgraph_ids table leading to read perf issues (throughput reduces to 1/3rd of original)
I learned from your other thread that you use many Spark executors that each have their own JanusGraph instance. I remember that I used a similar scheme many years ago with janusgraph-0.1.1. At that time I simply stored the JanusGraph ids at load time in a partitioned file on HDFS, so that I could later use them for analytics queries with Spark.
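For illustration, a minimal sketch of what that scheme looked like. The paths, the properties file and the externalKey property are just placeholders for the example, default long vertex ids are assumed, and each partition does a single commit to keep it short:

import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.RowFactory;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructField;
import org.apache.spark.sql.types.StructType;
import org.janusgraph.core.JanusGraph;
import org.janusgraph.core.JanusGraphFactory;
import org.janusgraph.core.JanusGraphVertex;

public class LoadAndStoreIds {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder().appName("load-and-store-ids").getOrCreate();

        // One external key per line; hypothetical input location.
        JavaRDD<String> keys = spark.read().textFile("hdfs:///input/vertex-keys").javaRDD();

        // Every partition opens its own JanusGraph instance, loads its vertices
        // and emits (externalKey, janusgraphId) rows.
        JavaRDD<Row> idRows = keys.mapPartitions((Iterator<String> it) -> {
            JanusGraph graph = JanusGraphFactory.open("janusgraph-cql.properties"); // placeholder config file
            List<Row> out = new ArrayList<>();
            while (it.hasNext()) {
                String key = it.next();
                JanusGraphVertex v = graph.addVertex("externalKey", key);
                out.add(RowFactory.create(key, (Long) v.id())); // default long vertex ids assumed
            }
            graph.tx().commit();
            graph.close();
            return out.iterator();
        });

        StructType schema = DataTypes.createStructType(new StructField[]{
                DataTypes.createStructField("externalKey", DataTypes.StringType, false),
                DataTypes.createStructField("janusgraphId", DataTypes.LongType, false)
        });

        // Store the id mapping on HDFS so later Spark analytics jobs can read it back.
        spark.createDataFrame(idRows, schema)
                .write()
                .parquet("hdfs:///analytics/janusgraph-vertex-ids"); // placeholder output path

        spark.stop();
    }
}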
Could you elaborate on what you call the janusgraph_ids table and how you distribute vertex ids to your Spark executors? According to the JanusGraph data model there is no separate id table, but vertex ids are encoded in the row key.
Best wishes, Marc
hadoopmarc@...
Please be aware that this question was also posted as:
https://lists.lfaidata.foundation/g/janusgraph-dev/topic/performance_issue_large/80821002?p=,,,20,0,0,0::recentpostdate%2Fsticky,,,20,2,0,80821002
with an accompanying thread:
https://lists.lfaidata.foundation/g/janusgraph-dev/topic/performance_optimization/80653320?p=,,,20,0,0,0::recentpostdate%2Fsticky,,,20,2,0,80653320
Hi all
We are using JanusGraph at Zeotap at humongous scale (~70B vertices and 50B edges), backed by Scylla.
Right now I am facing an issue with the janusgraph_ids table: large partitions are created in the Scylla DB, and this is leading to severe read performance issues. The queries hitting the janusgraph_ids table are range queries, and with large partitions the reads become very slow.
I would like to know if anyone else has observed a similar issue, whether there is a set of configurations that should be checked, or anything else you would suggest.
In the Scylla Grafana dashboard, this issue shows up as a high number of foreground read tasks.
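For reference, I am assuming the relevant knobs are the ids.* settings, so here is a minimal sketch of how they are set when opening the graph. The key names are as far as I recall from the JanusGraph configuration reference and the values are placeholders, not our production config, so please verify against your version:

import org.janusgraph.core.JanusGraph;
import org.janusgraph.core.JanusGraphFactory;

public class OpenGraph {
    public static void main(String[] args) {
        JanusGraph graph = JanusGraphFactory.build()
                .set("storage.backend", "cql")
                .set("storage.hostname", "scylla-host")                      // placeholder host
                // Larger blocks mean fewer allocation round trips against the id store.
                .set("ids.block-size", 1000000)                              // placeholder value
                // Documented to reduce contention between instances reserving id blocks.
                .set("ids.authority.conflict-avoidance-mode", "GLOBAL_AUTO")
                .open();
        graph.close();
    }
}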

Thanks
Saurabh