Re: CQL scaling limit?
madams@...
Hi Marc,

We're running on Kubernetes, and there are no CPU limits on the indexers. [The original message included dashboard screenshots, read left to right, top to bottom.]

The grey areas represent the moments when overall performance stopped scaling linearly with the number of indexers. We're not maxing out the CPUs yet, so it looks like we can still push the cluster. Unfortunately I don't have the IO wait time per indexer, but the node-exporter metric on IO wait time coincides with the grey areas in the graphs.

Since you mentioned ID block allocation, I checked the logs for warning messages, and they are indeed ID-allocation warnings; I looked for other warnings but didn't find any. I tried increasing the ID block size to 10,000,000 but didn't see any improvement. That said, from my understanding of ID allocation, it is the perfect suspect. I'll rerun these tests on a completely fresh graph with ids.block-size=10000000 to double-check. If that does not work, I'll try upgrading to the master version and rerunning the test.

Any tip on how to log which part is slowing down insertion? I was thinking of using org.janusgraph.util.stats.MetricManager to time the execution of parts of the org.janusgraph.graphdb.database.StandardJanusGraph.commit() method.

Thanks a lot,
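A lightweight way to get that timing, before wiring MetricManager into the source: wrap the suspected phases of commit() with a nanosecond stopwatch and aggregate per phase. This is a stdlib-only sketch; the phase names and the commented-out JanusGraph calls are hypothetical placeholders, not the actual commit() internals.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Minimal phase timer: accumulate wall-clock nanoseconds per named phase.
public class PhaseTimer {
    private static final Map<String, LongAdder> TOTALS = new ConcurrentHashMap<>();

    // Run 'body' and charge its wall time to 'phase'.
    public static void time(String phase, Runnable body) {
        long start = System.nanoTime();
        try {
            body.run();
        } finally {
            TOTALS.computeIfAbsent(phase, k -> new LongAdder())
                  .add(System.nanoTime() - start);
        }
    }

    public static long totalNanos(String phase) {
        LongAdder total = TOTALS.get(phase);
        return total == null ? 0L : total.sum();
    }

    public static void main(String[] args) {
        // Inside StandardJanusGraph.commit() one would wrap the suspected
        // phases; the names and commented calls below are hypothetical.
        time("id-allocation", () -> { /* id block acquisition ... */ });
        time("persist", () -> { /* backend mutations ... */ });
        System.out.println("id-allocation ns: " + totalNanos("id-allocation"));
        System.out.println("persist ns: " + totalNanos("persist"));
    }
}
```

If the ID allocator is the bottleneck, the "id-allocation" total should dominate as indexers are added.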
Re: CQL scaling limit?
hadoopmarc@...
Hi Marc,
Just to be sure: the indexers themselves are not limited in the number of CPUs they can grab? Do the 60 indexers run on the same machine, or in independent cloud containers? If the indexers are not CPU limited, it would be interesting to log where the time is spent: their own Java code, waiting for transactions to complete, or waiting for the ID manager to return ID blocks? Best wishes, Marc
Re: CQL scaling limit?
madams@...
Hi Boxuan, I can definitely try the 0.6.0 pre-release or the master version, that's a good idea. Thanks!
Re: CQL scaling limit?
Boxuan Li
Hi,
I didn't check your metrics, but my first impression was that this might be related to the internal thread pool. Can you try out the 0.6.0 pre-release version or the master version? Remember to set `storage.cql.executor-service.enabled` to false.

Before 0.6.0, an internal thread pool with a hard-coded 15 threads was used to process all CQL queries. https://github.com/JanusGraph/janusgraph/pull/2700 made the thread pool configurable, and https://github.com/JanusGraph/janusgraph/pull/2741 further made this thread pool optional.

EDIT: Just realized your problem is about the horizontal scaling of JanusGraph instances, so this internal thread pool is likely not related - but still worth trying.

Hope this helps. Best, Boxuan
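As a configuration sketch of the suggestion above (option names per the PRs referenced; the pool-size key should be checked against your exact 0.6.0 build before use):

```properties
# JanusGraph 0.6.0+: bypass the internal CQL executor service entirely
storage.cql.executor-service.enabled=false

# or keep the pool but size it explicitly
# (it was a hard-coded 15 threads before 0.6.0)
storage.cql.executor-service.core-pool-size=50
```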
CQL scaling limit?
madams@...
Hi all, We've been trying to scale JanusGraph horizontally to keep up with the data throughput, but we're hitting a scaling limit.

Each indexer counts the number of records it processes. We start with one indexer, and every 5-10 minutes we double the number of indexers and look at how the overall performance increases. [The message included dashboard panels, read top down, left to right.]

Our JanusGraph configuration looks like:

storage.backend=cql

Each transaction inserts ~10 records, and each indexer runs 10 transactions in parallel. We tried several different things, but without success.

The most successful test so far was to switch the CQL write consistency level to ANY and the read consistency level to ONE.

We don't use LOCK consistency, and since we never update vertices or edges, FORK consistency doesn't look useful in our case. It really looks like something somewhere becomes a bottleneck when we start scaling. I checked out the JanusGraph GitHub repo locally and went through it to understand what set of operations JanusGraph performs to insert vertices/edges and how this binds to transactions, but I'm struggling to find that information. So, any ideas/recommendations? Cheers,
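The per-indexer write pattern described above (10 parallel transactions, ~10 records each) can be sketched as follows. The JanusGraph transaction calls are replaced by a placeholder so only the concurrency pattern is shown; every name here is hypothetical, not the actual indexer code.

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

// Sketch of one indexer: a fixed pool of 10 workers, each committing
// batches of ~10 records per transaction.
public class IndexerSketch {
    static final int PARALLEL_TX = 10;
    static final AtomicLong processed = new AtomicLong();

    // Placeholder for: tx = graph.newTransaction(); addVertex() per record; tx.commit()
    static void commitBatch(List<String> records) {
        processed.addAndGet(records.size());
    }

    // Submit every batch to the worker pool and wait for completion.
    public static long run(List<List<String>> batches) {
        processed.set(0);
        ExecutorService pool = Executors.newFixedThreadPool(PARALLEL_TX);
        for (List<String> batch : batches) {
            pool.submit(() -> commitBatch(batch));
        }
        pool.shutdown();
        try {
            pool.awaitTermination(1, TimeUnit.MINUTES);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return processed.get();
    }

    public static void main(String[] args) {
        System.out.println(run(List.of(List.of("a", "b"), List.of("c"))));
    }
}
```

With this shape, each indexer holds at most 10 transactions open at once, so cluster-wide transaction concurrency grows linearly with the indexer count - which is why a shared resource such as the ID block allocator becomes the natural suspect when throughput stops scaling.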
Re: graphml properties of properties
Laura Morales <lauretas@...>
FWIW, I've tried exporting the example graph to JSON (GraphSON) and the metaproperty *is* preserved; however, when I import the same graph from the JSON file, the metaproperty is not created.
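The round trip Laura describes can be reproduced in the Gremlin Console roughly like this (a sketch: the file path is made up, and the final comment records her observation above rather than a guaranteed result):

```groovy
gremlin> graph = TinkerFactory.createModern()
gremlin> g = graph.traversal()
gremlin> g.V(1).properties('name').property('metatest', 'hi')
gremlin> graph.io(IoCore.graphson()).writeGraph('data/metatest.json')
gremlin> graph2 = TinkerGraph.open()
gremlin> graph2.io(IoCore.graphson()).readGraph('data/metatest.json')
gremlin> graph2.traversal().V(1).properties('name').elementMap()
// per the observation above, 'metatest' is absent after the import
```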
Sent: Thursday, August 26, 2021 at 6:36 AM
Re: graphml properties of properties
Laura Morales <lauretas@...>
Thank you for this example.
After running this, I can see that the property "metatest" has been ignored and is missing completely from the GraphML output. Another issue I have with GraphML is that it apparently cannot represent all the key types supported by Janus. For example, it does not define any attribute for "date" and "time", and it does not allow specifying "int32" or "int64"; it only defines basic primitives such as string, int, and double. What serialization format should I use to best match Janus? One that allows metaproperties and also all the various types (date, int32, char, etc.). I also need it to be human readable, because I edit my graph file manually and then load it into Janus. GraphML is not that bad, and I can use it... it's just too limited, given that it does not support the features mentioned above. Is there any better alternative? Or should I roll my own?

Sent: Wednesday, August 25, 2021 at 5:05 PM
From: hadoopmarc@...
To: janusgraph-users@...
Subject: Re: [janusgraph-users] graphml properties of properties

Hi Laura,

No. As the TinkerPop docs say: "graphML is a lossy format". You can try for yourself with:

gremlin> graph = TinkerFactory.createModern()
==>tinkergraph[vertices:6 edges:6]
gremlin> g = graph.traversal()
==>graphtraversalsource[tinkergraph[vertices:6 edges:6], standard]
gremlin> g.V(1).properties('name').elementMap()
==>[id:0,key:name,value:marko]
gremlin> g.V(1).properties('name').property('metatest', 'hi')
==>vp[name->marko]
gremlin> g.V(1).properties('name').elementMap()
==>[id:0,key:name,value:marko,metatest:hi]
gremlin> g.addV('person').property('name', 'turing')
==>v[13]
gremlin> g.io('data/metatest.xml').write().iterate()

Best wishes, Marc
Re: graphml properties of properties
hadoopmarc@...
Hi Laura,
No. As the TinkerPop docs say: "graphML is a lossy format". You can try for yourself with:

gremlin> graph = TinkerFactory.createModern()

Best wishes, Marc
Too low Performance when running PageRank and WCC on Graph500
shepherdkingqsp@...
Hi there,
Recently I have been trying to measure JanusGraph performance. When running benchmarks against JanusGraph 0.5.3, I saw very low performance for PageRank and WCC.

The code I used, for reference: https://github.com/gaolk/graph-database-benchmark/tree/master/benchmark/janusgraph
Data: Graph500
The environment:
JanusGraph version: 0.5.3 (the full release zip from the JanusGraph GitHub)
JanusGraph config: the default conf/janusgraph-cql.properties

To be more specific, I ran k-hop with it and got reasonable results. But when I ran WCC and PageRank, both hit the 3-hour timeout. Could somebody help me find the reason for the low performance? Regards, Shipeng
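Whole-graph algorithms like PageRank and WCC over a Graph500 dataset generally need OLAP execution rather than OLTP traversals, or they will time out exactly as described. A console sketch of the OLAP route (the properties file is the stock Hadoop-graph example and will need adapting to this cluster):

```groovy
graph = GraphFactory.open('conf/hadoop-graph/read-cql.properties')
g = graph.traversal().withComputer(SparkGraphComputer)

// PageRank via the VertexProgram-backed step
g.V().pageRank().with(PageRank.propertyName, 'pageRank').values('pageRank')

// Connected components (WCC)
g.V().connectedComponent().with(ConnectedComponent.propertyName, 'component').values('component')
```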
Re: Fail to load complete edge data of Graph500 to Janusgraph 0.5.3 with Cassandra CQl as storage backends
shepherdkingqsp@...
HI Marc,
I have tried it, and finally I got the complete set of Graph500 vertices and edges loaded. But there is still a weird thing: I found the same exception reported in the log. Could you please explain this? With the exception reported, was the data still loaded completely? Regards, Shipeng
Re: Not able to enable Write-ahead logs using tx.log-tx for existing JanusGraph setup
Radhika Kundam
Hi Boxuan,
Thank you for the response. I tried force-closing all management instances except the current one before setting "tx.log-tx". Management is getting updated with the latest value; that was not the issue, even without closing the other instances. The issue is that graph.getConfiguration().hasLogTransactions() does not get refreshed by a JanusGraphManagement property update.

My understanding is that logTransactions is set only in GraphDatabaseConfiguration:preLoadConfiguration, which is called only when we open a graph instance via JanusGraphFactory.open(config). I don't see any setter method that updates logTransactions when we update the JanusGraphManagement property. Because of this, after updating JanusGraphManagement, restarting the cluster invokes GraphDatabaseConfiguration:preLoadConfiguration, and only then is the logTransactions value updated.

Please correct me if I am missing anything, and it would be really helpful to know of any alternative approach that updates logTransactions when we update the management property, without restarting the cluster. Thanks, Radhika
Re: org.janusgraph.diskstorage.PermanentBackendException: Read 1 locks with our rid but mismatched timestamps
Ronnie
Hi Marc,

Thanks for the suggestions. I narrowed down the issue to JanusGraph's support for Azul Prime JDK 8, which generates nanosecond precision for the Instant.now() API, as compared to the millisecond precision of OpenJDK 8. The issue was resolved by applying the patch for https://github.com/JanusGraph/janusgraph/issues/1979. Thanks! Ronnie
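The root cause is easy to demonstrate with the JDK alone: a nanosecond-resolution clock produces Instants that a millisecond-resolution peer can never reproduce, so lock timestamps written by one JVM fail to match on another. A small sketch of the normalization the patch effectively performs (my paraphrase, not the actual patch code):

```java
import java.time.Instant;
import java.time.temporal.ChronoUnit;

public class TimePrecision {
    // Truncate an Instant to millisecond precision, matching what an
    // OpenJDK 8 system clock would have produced.
    static Instant toMillis(Instant t) {
        return t.truncatedTo(ChronoUnit.MILLIS);
    }

    public static void main(String[] args) {
        // A nanosecond-resolution clock (as on Azul Prime JDK 8) can emit this:
        Instant nano = Instant.ofEpochSecond(1, 123_456_789);
        System.out.println(nano);           // 1970-01-01T00:00:01.123456789Z
        System.out.println(toMillis(nano)); // 1970-01-01T00:00:01.123Z
    }
}
```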
Re: Fail to load complete edge data of Graph500 to Janusgraph 0.5.3 with Cassandra CQl as storage backends
shepherdkingqsp@...
On Tue, Aug 24, 2021 at 06:20 AM, <hadoopmarc@...> wrote:
Got it. I will try it soon. Thanks, Marc! Shipeng
Re: Fail to load complete edge data of Graph500 to Janusgraph 0.5.3 with Cassandra CQl as storage backends
hadoopmarc@...
Hi Shipeng Qi,
The system that you use might be too small for the number of threads in the loading code. You can try to decrease the number of threads from 8 to 4 with:

private static ExecutorService pool = Executors.newFixedThreadPool(4);

Best wishes, Marc
Re: Not able to enable Write-ahead logs using tx.log-tx for existing JanusGraph setup
Boxuan Li
I suspect this is due to stale management instances. Check out https://developer.ibm.com/articles/janusgraph-tips-and-tricks-pt-2/#troubleshooting-indexes and see if it helps.
Fail to load complete edge data of Graph500 to Janusgraph 0.5.3 with Cassandra CQl as storage backends
shepherdkingqsp@...
Hi there,
I am new to JanusGraph, and I have run into problems loading data into JanusGraph with Cassandra CQL as the storage backend. When I tried to load Graph500 into JanusGraph, planning to run a benchmark on it, I found that the edges loaded were incomplete: 67107183 edges were loaded while 67108864 were expected. (The vertices loaded were complete.)

The code and config I used are below. The code is from a benchmark by TigerGraph:
- load vertices: https://github.com/gaolk/graph-database-benchmark/blob/master/benchmark/janusgraph/multiThreadVertexImporter.java
- load edges: https://github.com/gaolk/graph-database-benchmark/blob/master/benchmark/janusgraph/multiThreadEdgeImporter.java
The config is conf/janusgraph-cql.properties from the JanusGraph 0.5.3 full distribution (https://github.com/JanusGraph/janusgraph/releases/download/v0.5.3/janusgraph-full-0.5.3.zip)

I got these exceptions while loading:
Exception 1:
Exception 2:

I searched on Google but found little that helped. Could somebody help? Best Regards, Shipeng Qi
Re: Not able to enable Write-ahead logs using tx.log-tx for existing JanusGraph setup
Radhika Kundam
Hi Boxuan,

For an existing JanusGraph setup, I am updating the tx.log-tx configuration by setting the management system property, as described in https://docs.janusgraph.org/basics/configuration/#global-configuration

And I could see the configuration updated properly in JanusGraphManagement:

managementSystem.get("tx.log-tx"); => prints false

But this change is not reflected in logTransactions from GraphDatabaseConfiguration:preLoadConfiguration:

graph.getConfiguration().hasLogTransactions() => prints false

Transaction recovery using StandardTransactionLogProcessor checks graph.getConfiguration().hasLogTransactions(), which does not pick up the latest 'tx.log-tx' value set through the ManagementSystem. To reflect the change, I had to restart the cluster twice. Also, since it is a GLOBAL property, I am not allowed to override it using graph.configuration(); the only available option is to update it through the ManagementSystem, which does not update logTransactions. I would really appreciate your help on this. Thanks, Radhika
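For reference, the global-config update flow from the docs, sketched in the Gremlin Console. The forceCloseInstance step assumes stale instances are what block the update, and the instance id is a placeholder:

```groovy
mgmt = graph.openManagement()
mgmt.getOpenInstances()                  // list instances; "(current)" marks this one
mgmt.forceCloseInstance('<stale-id>')    // repeat for each stale instance
mgmt.commit()

mgmt = graph.openManagement()
mgmt.set('tx.log-tx', true)              // GLOBAL option
mgmt.commit()
// restart the JanusGraph instance(s) so preLoadConfiguration picks up the value
```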
Re: Wait the mixed index backend
toom@...
Thank you, this is exactly what I was looking for.
Re: Wait the mixed index backend
Boxuan Li
Hi Toom,
Do you want to ALWAYS make sure the vertex is indexed? If so, and if you happen to use Elasticsearch, you can set index.[X].elasticsearch.bulk-refresh=wait_for. See https://www.elastic.co/guide/en/elasticsearch/reference/master/docs-refresh.html. Best, Boxuan
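As a config sketch (replace `search` with your actual index backend name; the trade-off is that every bulk write then blocks until Elasticsearch has made the documents visible to search):

```properties
# wait for an Elasticsearch refresh before acknowledging bulk index writes
index.search.elasticsearch.bulk-refresh=wait_for
```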
Wait the mixed index backend
toom@...
Hello,
Vertices that have just been created are not immediately available in the mixed index (documented here [1]).

I'm looking for a way to make sure a vertex is indexed, by waiting on the mixed index backend. I think the easiest way would be to request the vertex id using a direct index query:

graph.indexQuery("myIndex", "v.id:" + vertex.id())

But I didn't find a way to do that. Do you think this feature could be added? Maybe I can make a PR.
Regards,
Toom.
[1] https://github.com/JanusGraph/janusgraph/blob/v0.5.3/docs/index-backend/direct-index-query.md#mixed-index-availability-delay
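Until something like that exists, one workaround is to poll the mixed index for a property value that was just written. A console-style sketch; the index name, field, value, and 200 ms interval are all placeholders, this assumes the field is part of the mixed index, and the stream-based method name should be checked against your JanusGraph version:

```groovy
def waitForIndex(graph, String indexName, String field, String value) {
    // keep querying the mixed index until the just-written value shows up
    while (graph.indexQuery(indexName, "v.${field}:(${value})").vertexStream().count() == 0) {
        Thread.sleep(200)
    }
}

waitForIndex(graph, 'myIndex', 'name', 'turing')
```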