{"id":{"@type":"g:Int32","@value":1},"label":"person","outE":{"created":[{"id":{"@type":"g:Int32","@value":9},"inV":{"@type":"g:Int32","@value":3},"properties":{"weight":{"@type":"g:Double","@value":0.4}}}],"knows":[{"id":{"@type":"g:Int32","@value":7},"inV":{"@type":"g:Int32","@value":2},"properties":{"weight":{"@type":"g:Double","@value":0.5}}},{"id":{"@type":"g:Int32","@value":8},"inV":{"@type":"g:Int32","@value":4},"properties":{"weight":{"@type":"g:Double","@value":1.0}}}]},"properties":{"name":[{"id":{"@type":"g:Int64","@value":0},"value":"marko"}],"age":[{"id":{"@type":"g:Int64","@value":1},"value":{"@type":"g:Int32","@value":29}}]}}
{"id":{"@type":"g:Int32","@value":2},"label":"person","inE":{"knows":[{"id":{"@type":"g:Int32","@value":7},"outV":{"@type":"g:Int32","@value":1},"properties":{"weight":{"@type":"g:Double","@value":0.5}}}]},"properties":{"name":[{"id":{"@type":"g:Int64","@value":2},"value":"vadas"}],"age":[{"id":{"@type":"g:Int64","@value":3},"value":{"@type":"g:Int32","@value":27}}]}}
{"id":{"@type":"g:Int32","@value":3},"label":"software","inE":{"created":[{"id":{"@type":"g:Int32","@value":9},"outV":{"@type":"g:Int32","@value":1},"properties":{"weight":{"@type":"g:Double","@value":0.4}}},{"id":{"@type":"g:Int32","@value":11},"outV":{"@type":"g:Int32","@value":4},"properties":{"weight":{"@type":"g:Double","@value":0.4}}},{"id":{"@type":"g:Int32","@value":12},"outV":{"@type":"g:Int32","@value":6},"properties":{"weight":{"@type":"g:Double","@value":0.2}}}]},"properties":{"name":[{"id":{"@type":"g:Int64","@value":4},"value":"lop"}],"lang":[{"id":{"@type":"g:Int64","@value":5},"value":"java"}]}}
{"id":{"@type":"g:Int32","@value":4},"label":"person","inE":{"knows":[{"id":{"@type":"g:Int32","@value":8},"outV":{"@type":"g:Int32","@value":1},"properties":{"weight":{"@type":"g:Double","@value":1.0}}}]},"outE":{"created":[{"id":{"@type":"g:Int32","@value":10},"inV":{"@type":"g:Int32","@value":5},"properties":{"weight":{"@type":"g:Double","@value":1.0}}},{"id":{"@type":"g:Int32","@value":11},"inV":{"@type":"g:Int32","@value":3},"properties":{"weight":{"@type":"g:Double","@value":0.4}}}]},"properties":{"name":[{"id":{"@type":"g:Int64","@value":6},"value":"josh"}],"age":[{"id":{"@type":"g:Int64","@value":7},"value":{"@type":"g:Int32","@value":32}}]}}
{"id":{"@type":"g:Int32","@value":5},"label":"software","inE":{"created":[{"id":{"@type":"g:Int32","@value":10},"outV":{"@type":"g:Int32","@value":4},"properties":{"weight":{"@type":"g:Double","@value":1.0}}}]},"properties":{"name":[{"id":{"@type":"g:Int64","@value":8},"value":"ripple"}],"lang":[{"id":{"@type":"g:Int64","@value":9},"value":"java"}]}}
{"id":{"@type":"g:Int32","@value":6},"label":"person","outE":{"created":[{"id":{"@type":"g:Int32","@value":12},"inV":{"@type":"g:Int32","@value":3},"properties":{"weight":{"@type":"g:Double","@value":0.2}}}]},"properties":{"name":[{"id":{"@type":"g:Int64","@value":10},"value":"peter"}],"age":[{"id":{"@type":"g:Int64","@value":11},"value":{"@type":"g:Int32","@value":35}}]}}
I don't understand two things; can anyone help me understand them?
- Why do I need both an "outE" *and* an "inE" definition for the same edge? Why can't I just define one or the other? If I define both, the edge is created when importing the file; if I only use "outE", the edge is not created.
- Why is everything given an id, including edges and properties (for example "properties":{"name":[{"id":{"@type":"g:Int64","@value":0},"value":"marko"}]})? Removing all the "id" fields except the vertex ids seems to work fine.
- Do you use a CompositeIndex or a MixedIndex?
- Is it certain that the two transactions do not overlap in time (as "next" suggests)?
- Do the two transactions occur in the same JanusGraph instance?
- Is HBase configured as a single host or as a cluster?
Marc
It still does not work for me.
// commit every transaction that is still open on this graph instance
graph.getOpenTransactions().forEach { tx -> tx.commit() }
Hi Marc,
I tried rerunning the scaling test on a fresh graph with ids.block-size=10000000; unfortunately, I haven't seen any performance gain.
I also tried ids.block-size=10000000 together with ids.authority.conflict-avoidance-mode=GLOBAL_AUTO, but there was no performance gain there either.
I used GLOBAL_AUTO as it was the easiest to test. I ran the test twice to make sure the result was not just due to unlucky random tag assignment. I didn't do the math, but I guess I would have to be very unlucky to get a very bad random tag allocation twice!
I tried something else which turned out to be very successful:
Instead of inserting all the properties into the graph, I tried inserting only the ones needed to feed the composite indexes and vertex-centric indexes. These indexes are used to execute the "get element or create it" logic efficiently. This test scaled quite nicely up to 64 indexers (instead of 4 before)!
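For reference, the per-vertex "get or create" boils down to a single traversal like this (a console-style sketch; 'device' and 'serial' are made-up label/key names standing in for whatever the composite index covers):
// look the vertex up via the composite index; create it only if it is absent
v = g.V().has('device', 'serial', serialValue).fold().
      coalesce(unfold(), addV('device').property('serial', serialValue)).next()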
Out of all the tests I tried so far, the two most successful ones were:
- decreasing the CQL consistency level (from QUORUM to ANY/ONE; see the exact settings below)
- decreasing the number of properties
What's interesting about these two cases is that they didn't significantly increase the performance of a single indexer; rather, they increased the horizontal scalability we could achieve.
My best guess for why this is the case: they reduced the amount of work the ScyllaDB coordinators had to do, by:
- decreasing the amount of coordination needed to get a majority answer (QUORUM)
- decreasing the size in bytes of the CQL unlogged batches; some of our properties can be quite big (> 1 KB)
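For completeness, the consistency change itself is just two lines in our properties file (assuming the standard storage.cql option names):
storage.cql.read-consistency-level=ONE
storage.cql.write-consistency-level=ANY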
I would happily continue digging into this, but unfortunately other priorities have turned up, so we're setting the testing aside for the moment.
I thought I would post my complete findings/guesses anyway, in case they are useful to someone.
Thank you so much for your help!
Cheers,
Marc
In my case, I am using index ind1 to fetch vertices from the graph. Upon fetching, I add some properties (e.g., one such property is p1) to the vertices and commit the transaction.
In the next transaction, I fetch the vertices using index ind2, where one of the index keys is the property (p1) added in the previous transaction. I get the vertices and remove them. The vertices are reported to be removed successfully, but sometimes they are still present, with only the property (p1) added in the previous transaction, although the other properties/edges have been removed. This happens very intermittently.
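Roughly, the two transactions look like this (a sketch; 'k1'/'v1' stand for a key/value covered by ind1, and 'x' is the value written to p1):
// transaction 1: fetch via ind1, add p1, commit
g.V().has('k1', 'v1').property('p1', 'x').iterate()
g.tx().commit()
// transaction 2: fetch via ind2 (p1 is one of its keys), remove the vertices, commit
g.V().has('p1', 'x').drop().iterate()
g.tx().commit()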
It would be really helpful if someone has an idea about this and can explain it to me.
I enabled the write-ahead log to support index recovery after secondary persistence failures. I am trying to set a TTL for the write-ahead log through JanusGraphManagement by setting "log.tx.ttl".
I tried the use case below:
1. Set the write-ahead log TTL to 10 min.
2. Created a few failed entries by bringing Solr (the index client) down.
3. Waited for more than the TTL (even waited for 1 hr) and brought Solr up.
The expected behavior is that failed entries should not be recovered, as the write-ahead log should be gone by then.
The actual behavior is that failed entries are recovered successfully.
I triaged this and was able to see that it updates "root.log.ttl" properly while creating the instance of KCVLogManager for the tx log.
Please let me know if any additional configuration is required or if my understanding of the expected behavior is not correct.
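For reference, this is roughly how I set the TTL (a sketch of the management call; I am assuming a java.time.Duration value is accepted for this option):
mgmt = graph.openManagement()
mgmt.set('log.tx.ttl', java.time.Duration.ofMinutes(10))
mgmt.commit()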
Thank you,
Radhika
It would be good if "logTransactions" could be refreshed when tx.log-tx is updated, by providing a setter method, so that the graph does not have to be reopened.
As per my understanding, reopening the graph is required only for this tx.log-tx management setting (and not for any other management settings), because this property needs to be reflected in logTransactions.
I think this is not an issue with the memory spec; maybe it is an issue with the configuration. (You can see that the result of Khop is reasonable.)
Best regards,
Shipeng
You can only add a single level of metaproperties. One can understand this from the java docs.
In a TinkerGraph a regular property is a TinkerVertexProperty, which has a property() method to add metaproperties:
gremlin> g.V(1).properties('name').next().getClass()
==>class org.apache.tinkerpop.gremlin.tinkergraph.structure.TinkerVertexProperty
In a TinkerGraph a metaproperty is a TinkerProperty, which has no property() method:
gremlin> g.V(1).properties('name').properties('metatest').next().getClass()
==>class org.apache.tinkerpop.gremlin.tinkergraph.structure.TinkerProperty
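For completeness, such a metaproperty can be added with a property() step on the vertex property, along these lines:
// add a 'metatest' metaproperty on the 'name' vertex property of vertex 1
g.V(1).properties('name').property('metatest', 'hi').iterate()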
Best wishes, Marc
Using TinkerGraph, I exported a graph to GraphSON in the way shown above. I reloaded it as follows:
gremlin> graph = TinkerGraph.open();
==>tinkergraph[vertices:0 edges:0]
gremlin> g = graph.traversal()
==>graphtraversalsource[tinkergraph[vertices:0 edges:0], standard]
gremlin> g.io('data/metatest.json').read().iterate()
gremlin> g.V().elementMap()
==>[id:1,label:person,name:marko,age:29]
==>[id:2,label:person,name:vadas,age:27]
==>[id:3,label:software,name:lop,lang:java]
==>[id:4,label:person,name:josh,age:32]
==>[id:5,label:software,name:ripple,lang:java]
==>[id:6,label:person,name:peter,age:35]
==>[id:13,label:person,name:turing]
gremlin> g.V(1).properties('name').elementMap()
==>[id:0,key:name,value:marko,metatest:hi]
So, the metaproperty added is read back from GraphSON. Do you mean to say that you cannot do the same with JanusGraph? I did not check myself.
Best wishes, Marc
Did you use their machine specs: 32 vCPU and 244 GB memory? The graph is pretty big for in-memory use during OLAP:
marc@antecmarc:~$ curl http://service.tigergraph.com/download/benchmark/dataset/graph500-22/graph500-22_unique_node | wc -l
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 17.6M 100 17.6M 0 0 3221k 0 0:00:05 0:00:05 --:--:-- 4166k
2396019
marc@antecmarc:~$ curl http://service.tigergraph.com/download/benchmark/dataset/graph500-22/graph500-22 | wc -l
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 989M 100 989M 0 0 5427k 0 0:03:06 0:03:06 --:--:-- 6123k
67108864
Best wishes, Marc
If you know how to handle MetricManager, that sounds fine. I was thinking in more basic terms: adding some log statements to your indexer Java code.
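For example, something as simple as this around the commit in your insert loop would already tell a lot (a sketch; I am only guessing at how your batching looks):
def tx = graph.newTransaction()
// ... get-or-create the vertices and insert the edges for one batch of records ...
def t0 = System.nanoTime()
tx.commit()
println "commit took ${(System.nanoTime() - t0) / 1_000_000} ms"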
Regarding the id block allocation, some features seem to have been added that are still largely undocumented; see:
https://github.com/JanusGraph/janusgraph/blob/83c93fe717453ec31086ca1a208217a747ebd1a8/janusgraph-core/src/main/java/org/janusgraph/diskstorage/idmanagement/ConflictAvoidanceMode.java
https://docs.janusgraph.org/basics/janusgraph-cfg/#idsauthority
Notice that the default value for ids.authority.conflict-avoidance-mode is NONE. Given the rigor you show in your attempts, the other values seem worth a try too!
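For example (these are the two options already discussed in this thread; GLOBAL_AUTO is one of the values listed in ConflictAvoidanceMode.java):
ids.block-size=10000000
ids.authority.conflict-avoidance-mode=GLOBAL_AUTO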
Best wishes, Marc
Hi Marc,
We're running on Kubernetes, and there are no CPU limitations on the indexers.
Thanks for pointing this out; I actually hadn't checked the overall resources of the cluster... It's a good sanity check:
From left to right, top to bottom:
- The total CPU usage per node in the cluster; there are about 20 nodes, and other non-JanusGraph applications are running, which is why it's not 0 when the tests are not running
- The processed records per Indexer; there are two tests here in one panel:
- The first group is the scaling test with the CQL QUORUM consistency
- The second group is the scaling test with write consistency ANY and read consistency ONE
- The IO Waiting time per node in the cluster (unfortunately I don't have this metric per indexer)
- ID block allocation warning log events (the exact log message looks like "Temporary storage exception while acquiring id block - retrying in PT2.4S: {}")
The grey areas represent the moment when the overall performance stopped scaling linearly with the number of indexers.
We're not maxing out the CPUs yet, so it looks like we can still push the cluster. Unfortunately I don't have the IO waiting time per indexer, but the node-exporter metric on IO waiting time fits with the grey areas in the graphs.
Since you mentioned the ID block allocation, I checked the logs for warning messages, and they are indeed id allocation warnings; I looked for other warning messages but didn't find any.
I tried increasing the id block size to 10,000,000 but didn't see any improvement. That said, from my understanding of the ID allocation, it is the perfect suspect. I'll rerun these tests on a completely fresh graph with ids.block-size=10000000 to double-check.
If that does not work, I'll try upgrading to the master version and rerun the test. Any tip on how to log which part is slowing down the insertion? I was thinking of using org.janusgraph.util.stats.MetricManager to time the execution of parts of the org.janusgraph.graphdb.database.StandardJanusGraph.commit() method.
Thanks a lot,
Cheers,
Marc
Just to be sure: the indexers themselves are not limited in the number of CPUs they can grab? Do the 64 indexers run on the same machine, or in independent cloud containers?
If the indexers are not CPU-limited, it would be interesting to log where the time is spent: in their own Java code, waiting for transactions to complete, or waiting for the id manager to return id blocks?
Best wishes, Marc
I didn't check your metrics but my first impression was this might be related to the internal thread pool. Can you try out the 0.6.0 pre-release version or master version? Remember to set `storage.cql.executor-service.enabled` to false. Before 0.6.0, an internal thread pool was used to process all CQL queries, which had a hard-coded 15 threads. https://github.com/JanusGraph/janusgraph/pull/2700 made the thread pool configurable, and https://github.com/JanusGraph/janusgraph/pull/2741 further made this thread pool optional.
EDIT: Just realized your problem was related to the horizontal scaling of JanusGraph instances. Then this internal thread pool thing is likely not related - but still worth trying.
Hope this helps.
Best,
Boxuan
Hi all,
We've been trying to scale JanusGraph horizontally to keep up with the data throughput, but we're hitting some scaling limit.
We've been trying different things to pinpoint the bottleneck but we're struggling to find it... Some help would be most welcome :)
Our current setup:
- 6 ScyllaDB Instances
- 6 Elasticsearch Instances
- Our "indexers" running Janusgraph as a lib, they can be scaled up and down
- They read data from our sources and write it to Janusgraph
- Each Indexer runs on his own jvm
Each indexer counts the number of records it processes. We start with one indexer, and every 5~10 minutes we double the number of indexers and look at how the overall performance increases.
From top to bottom, left to right, these panels represent:
- Total Processed Records: the overall performance of the system
- Average Processed Records: the average per indexer; ideally this should be a flat curve
- Number of Running Indexers: we scaled at 1, 2, 4, 8, 16, 32, 64
- Processed Records Per Indexer
- CPU Usage: the CPU usage per indexer
- Heap: heap usage per indexer. The red line is the max memory the heap can take; we left a generous margin
As you can see, past 4 indexers the performance per indexer decreases until no additional throughput can be gained. At first we thought this might simply be due to resource limitations, but ScyllaDB and Elasticsearch are not really struggling. The ScyllaDB load and read/write latencies looked good during this test:
Both ScyllaDB and Elasticsearch are running on NVMe drives, with 10 GB+ of RAM. ScyllaDB is also deployed with CPU pinning. If we run a cassandra-stress test, we can really max out ScyllaDB.
Our JanusGraph configuration looks like:
storage.backend=cql
storage.hostname=scylla
storage.cql.replication-factor=3
index.search.backend=elasticsearch
index.search.hostname=elasticsearch
index.search.elasticsearch.transport-scheme=http
index.search.elasticsearch.create.ext.number_of_shards=4
index.search.elasticsearch.create.ext.number_of_replicas=2
graph.replace-instance-if-exists=true
schema.default=none
tx.log-tx=true
tx.max-commit-time=170000
ids.block-size=100000
ids.authority.wait-time=1000
Each input record contains ~20 vertices and ~20 edges. The workflow of the indexer (sketched below) is:
- For each vertex, check if it exists in the graph using a composite index. Create it if it does not.
- Insert edges using the vertex ids returned by step 1
Each transaction inserts ~ 10 records. Each indexer runs 10 transactions in parallel.
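In code, one record is handled roughly like this (a console-style sketch with made-up label/key names; the real schema is our own):
tx = graph.newTransaction()
g = tx.traversal()
// step 1: get-or-create each of the ~20 vertices through its composite index
v1 = g.V().has('device', 'serial', s1).fold().
       coalesce(unfold(), addV('device').property('serial', s1)).next()
v2 = g.V().has('device', 'serial', s2).fold().
       coalesce(unfold(), addV('device').property('serial', s2)).next()
// step 2: insert the edges between the resolved vertices
g.addE('connects').from(v1).to(v2).iterate()
tx.commit()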
We tried different things but without success:
- Increasing/decreasing the transaction size
- Increasing/decreasing storage.cql.batch-statement-size
- Enabling/disabling batch loading
- Increasing the ids.block-size to 10 000 000
The most successful test so far was to switch the CQL write consistency level to ANY and the read consistency level to ONE:
This time the indexers scaled nicely up to 16, and the overall performance still increased when scaling to 32 indexers. Once we reached 64 indexers, though, the performance dropped dramatically. ScyllaDB had a little more load there, but it still doesn't look like it's struggling:
We don't use LOCK consistency. We never update vertices or edges, so FORK consistency doesn't look useful in our case.
It really looks like something somewhere is a bottleneck when we start scaling.
I checked out the JanusGraph GitHub repo locally and went through it to try to understand what set of operations JanusGraph performs to insert vertices/edges and how this ties into transactions, but I'm struggling a little to find that information.
So, any ideas/recommendations?
Cheers,
Marc