
Re: graphml properties of properties

hadoopmarc@...
 

Hi Laura,

Using TinkerGraph I exported a graph to graphSON in the way shown above. I reloaded it as follows:
gremlin> graph = TinkerGraph.open();
==>tinkergraph[vertices:0 edges:0]
gremlin> g = graph.traversal()
==>graphtraversalsource[tinkergraph[vertices:0 edges:0], standard]
gremlin> g.io('data/metatest.json').read().iterate()
gremlin> g.V().elementMap()
==>[id:1,label:person,name:marko,age:29]
==>[id:2,label:person,name:vadas,age:27]
==>[id:3,label:software,name:lop,lang:java]
==>[id:4,label:person,name:josh,age:32]
==>[id:5,label:software,name:ripple,lang:java]
==>[id:6,label:person,name:peter,age:35]
==>[id:13,label:person,name:turing]
gremlin> g.V(1).properties('name').elementMap()
==>[id:0,key:name,value:marko,metatest:hi]
gremlin>
So, the metaproperty that was added is read back from graphSON. Do you mean to say that you cannot do the same with JanusGraph? I did not check this myself.
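For reference, the export step referred to above was presumably the graphSON counterpart of the earlier GraphML write (a file with a .json extension selects the GraphSON writer):

```groovy
// Sketch, assuming the modern graph with the 'metatest' metaproperty
// from the earlier example is still loaded in the console.
gremlin> g.io('data/metatest.json').write().iterate()
```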

Best wishes,    Marc


Re: Too low Performance when running PageRank and WCC on Graph500

hadoopmarc@...
 

Hi Shipeng,

Did you use their machine specs (32 vCPU and 244 GB memory)? The graph is pretty big for in-memory use during OLAP:
marc@antecmarc:~$ curl http://service.tigergraph.com/download/benchmark/dataset/graph500-22/graph500-22_unique_node | wc -l
2396019        (17.6 MB download)
marc@antecmarc:~$ curl http://service.tigergraph.com/download/benchmark/dataset/graph500-22/graph500-22 | wc -l
67108864       (989 MB download)
Best wishes,    Marc


Re: CQL scaling limit?

hadoopmarc@...
 

Just one more thing to rule out: did you set cpu.request and cpu.limit of the indexer containers to the same value? You want the pods to be really independent for this test.
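On Kubernetes, setting equal request and limit looks roughly like this (a sketch; names and values are placeholders, not from the original thread):

```yaml
# Hypothetical indexer container spec. Equal cpu request and limit give
# the pod the Guaranteed QoS class, so its CPU share is predictable and
# the indexers are really independent for the scaling test.
resources:
  requests:
    cpu: "4"
    memory: 8Gi
  limits:
    cpu: "4"
    memory: 8Gi
```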


multilevel properties depth

Laura Morales <lauretas@...>
 

How many "levels" of multilevel properties does JanusGraph support? What I mean is: can I only add properties on other properties, or can I add an arbitrary number of levels, i.e. properties about properties about properties, and so on?


Re: CQL scaling limit?

hadoopmarc@...
 

Hi Marc,
If you know how to handle MetricManager, that sounds fine. I was thinking in more basic terms: adding some log statements to your indexer Java code.

Regarding the id block allocation, some features seem to have been added, which are still largely undocumented, see:
https://github.com/JanusGraph/janusgraph/blob/83c93fe717453ec31086ca1a208217a747ebd1a8/janusgraph-core/src/main/java/org/janusgraph/diskstorage/idmanagement/ConflictAvoidanceMode.java
https://docs.janusgraph.org/basics/janusgraph-cfg/#idsauthority
Notice that the default value for ids.authority.conflict-avoidance-mode is NONE. Given the rigor you show in your attempts, trying other values seems worthwhile too!

Best wishes,    Marc


Re: CQL scaling limit?

madams@...
 

Hi Marc,

We're running on Kubernetes, and there is no CPU limitation on the indexers.
Thanks for pointing this out; I actually hadn't checked the overall resources of the cluster... It's a good sanity check:

From left to right, top to bottom:

  • The total CPU usage per node on the cluster. There are about 20 nodes, and other non-JanusGraph applications run on them, which is why usage is not 0 when the tests are not running
  • The processed records per Indexer; there are two tests in this panel:
    1. The first group is the scaling test with the CQL QUORUM consistency
    2. The second group is the scaling test with write consistency ANY and read consistency ONE
  • The IO waiting time per node in the cluster (unfortunately I don't have this metric per indexer)
  • ID block allocation warning log events (the exact log message looks like "Temporary storage exception while acquiring id block - retrying in PT2.4S: {}")

The grey areas represent the moments when the overall performance stopped scaling linearly with the number of indexers.

We're not maxing out the CPUs yet, so it looks like we can still push the cluster. I don't have the IO waiting time per indexer unfortunately, but the node-exporter IO waiting time metric coincides with the grey areas in the graphs.

As you mentioned the ID block allocation, I checked the logs for warning messages, and they are indeed id allocation warnings; I looked for other warning messages but didn't find any.

I tried increasing the id block size to 10,000,000 but didn't see any improvement - that said, from my understanding of the ID allocation, it is the perfect suspect. I'll rerun these tests on a completely fresh graph with ids.block-size=10000000 to double-check.

If that does not work, I'll try upgrading to the master version and rerunning the test. Any tip on how to log which part is slowing down the insertion? I was thinking of using org.janusgraph.util.stats.MetricManager to time the execution of parts of the org.janusgraph.graphdb.database.StandardJanusGraph.commit() method.
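To illustrate the kind of timing wrapper I had in mind, here is a minimal stdlib sketch (in practice org.janusgraph.util.stats.MetricManager, i.e. Dropwizard metrics, would replace this; all names here are mine):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.LongAdder;

// Minimal stand-in for a metrics timer: accumulates total elapsed time
// and call counts per named code section, safe for concurrent indexers.
public class SectionTimer {
    private static final ConcurrentMap<String, LongAdder> TOTALS = new ConcurrentHashMap<>();
    private static final ConcurrentMap<String, LongAdder> COUNTS = new ConcurrentHashMap<>();

    public static <T> T time(String section, Callable<T> body) throws Exception {
        long start = System.nanoTime();
        try {
            return body.call();
        } finally {
            TOTALS.computeIfAbsent(section, k -> new LongAdder()).add(System.nanoTime() - start);
            COUNTS.computeIfAbsent(section, k -> new LongAdder()).increment();
        }
    }

    public static long nanos(String section) {
        LongAdder t = TOTALS.get(section);
        return t == null ? 0L : t.sum();
    }

    public static long calls(String section) {
        LongAdder c = COUNTS.get(section);
        return c == null ? 0L : c.sum();
    }

    public static void main(String[] args) throws Exception {
        // e.g. wrap the phases of commit() separately and compare totals
        int result = time("commit.total", () -> 21 + 21);
        System.out.println(result);                        // 42
        System.out.println(calls("commit.total"));         // 1
        System.out.println(nanos("commit.total") >= 0);    // true
    }
}
```

Dumping nanos()/calls() per section at the end of a run would show whether the time goes to our own code, transaction commits, or id-block waits.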


Thanks a lot,
Cheers,
Marc


Re: CQL scaling limit?

hadoopmarc@...
 

Hi Marc,

Just to be sure: are the indexers themselves not limited in the number of CPUs they can grab? Do the 60 indexers run on the same machine, or in independent cloud containers?

If the indexers are not CPU limited, it would be interesting to log where the time is spent: their own Java code, waiting for transactions to complete, or waiting for the id manager to return id blocks?

Best wishes,   Marc


Re: CQL scaling limit?

madams@...
 

Hi Boxuan,

I can definitely try the 0.6.0 pre-release or the master version, that's a good idea.
I'll come back with the results,

Thanks!
Cheers,
Marc


Re: CQL scaling limit?

Boxuan Li
 
Edited

Hi,

I didn't check your metrics, but my first impression was that this might be related to the internal thread pool. Can you try out the 0.6.0 pre-release version or the master version? Remember to set `storage.cql.executor-service.enabled` to false. Before 0.6.0, an internal thread pool with a hard-coded 15 threads was used to process all CQL queries. https://github.com/JanusGraph/janusgraph/pull/2700 made the thread pool configurable, and https://github.com/JanusGraph/janusgraph/pull/2741 further made it optional.
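The relevant settings would look roughly like this (a sketch; verify the option names against the config reference of the version you run):

```properties
# 0.6.0+: bypass the internal CQL executor service entirely,
# letting the CQL driver handle its own concurrency
storage.cql.executor-service.enabled=false

# alternatively keep the executor service but enlarge the pool,
# instead of the pre-0.6.0 hard-coded 15 threads
# storage.cql.executor-service.core-pool-size=32
```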

EDIT: Just realized your problem was related to the horizontal scaling of JanusGraph instances. Then this internal thread pool thing is likely not related - but still worth trying.

Hope this helps.

Best,
Boxuan


CQL scaling limit?

madams@...
 

Hi all,

We've been trying to scale JanusGraph horizontally to keep up with the data throughput, but we're hitting some scaling limit.
We've tried different things to pinpoint the bottleneck, but we're struggling to find it... Some help would be most welcome :)

Our current setup:

  • 6 ScyllaDB Instances
  • 6 Elasticsearch Instances
  • Our "indexers" running Janusgraph as a lib, they can be scaled up and down
    • They read data from our sources and write it to Janusgraph
    • Each Indexer runs in its own JVM
Our scaling test looks like this:



Each indexer counts the number of records it processes. We start with one indexer, and every 5-10 minutes we double the number of indexers and look at how the overall performance increases.
From top down, left to right, these panels represent:

  • Total Processed Records: The overall performance of the system
  • Average Processed Records: Average per Indexer, ideally this should be a flat curve
  • Number of Running Indexers: We scaled at 1, 2, 4, 8, 16, 32, 64
  • Processed Records Per Indexer
  • Cpu Usage: The cpu usage per Indexer
  • Heap: Heap usage per indexer. The red line is the max heap size; we left a generous margin


As you can see, past 4 indexers the performance per indexer decreases until no additional throughput can be gained. At first we thought this might simply be due to resource limitations, but ScyllaDB and Elasticsearch are not really struggling. The ScyllaDB load and read/write latencies looked good during this test:



Both ScyllaDB and Elasticsearch run on NVMe drives with 10 GB+ of RAM. ScyllaDB is also deployed with CPU pinning. If we run a cassandra-stress test, we can really max out ScyllaDB.

Our janusgraph configuration looks like:

storage.backend=cql
storage.hostname=scylla
storage.cql.replication-factor=3
index.search.backend=elasticsearch
index.search.hostname=elasticsearch
index.search.elasticsearch.transport-scheme=http
index.search.elasticsearch.create.ext.number_of_shards=4
index.search.elasticsearch.create.ext.number_of_replicas=2
graph.replace-instance-if-exists=true
schema.default=none
tx.log-tx=true
tx.max-commit-time=170000
ids.block-size=100000
ids.authority.wait-time=1000


Each input record contains ~20 vertices and ~20 edges. The workflow of the indexer is:

  1. For each vertex, check if it exists in the graph using a composite index. Create it if it does not.
  2. Insert the edges using the vertex IDs returned by step 1

Each transaction inserts ~ 10 records. Each indexer runs 10 transactions in parallel.
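Step 1's get-or-create can be expressed with the usual fold()/coalesce() pattern (a sketch; the 'device' label and 'serial' key are made-up names standing in for our schema, with 'serial' backed by the composite index):

```groovy
// Hypothetical get-or-create for one vertex inside a transaction.
v = g.V().has('device', 'serial', serial).
      fold().
      coalesce(unfold(),
               addV('device').property('serial', serial)).
      next()
```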

We tried different things but without success:

  • Increasing/decreasing the transaction size
  • Increasing/decreasing storage.cql.batch-statement-size
  • Enabling/disabling batch loading
  • Increasing the ids.block-size to 10 000 000

The most successful test so far was to switch the cql write consistency level to ANY and the read consistency level to ONE:



This time the setup scaled nicely up to 16 indexers, and overall performance still increased when scaling to 32 indexers. Once we reached 64 indexers, though, performance dropped dramatically. At that point ScyllaDB had a little more load, but it still doesn't look like it's struggling:

We don't use LOCK consistency. We never update vertices or edges, so FORK consistency doesn't look useful in our case.

It really looks like something somewhere is a bottleneck when we start scaling.

I checked out the JanusGraph GitHub repo locally and went through it to try to understand which set of operations JanusGraph performs to insert vertices/edges and how this binds to transactions, but I'm struggling to find that information.

So, any idea/recommendations?

Cheers,
Marc


Re: graphml properties of properties

Laura Morales <lauretas@...>
 

FWIW I've tried exporting the graph in the example to JSON (GraphSON) and the metaproperty *is* preserved, however when I import the same graph from the json file the metaproperty is not created.



Re: graphml properties of properties

Laura Morales <lauretas@...>
 

Thank you for this example.
After running this, I can see that the property "metatest" has been ignored and is missing completely from the GraphML output. Another issue that I have with GraphML is that it cannot apparently represent all the key types that are supported by Janus. For example it does not define any attribute for "date" and "time", and it does not allow to specify "int32" or "int64"; it only defines basic primitives such as string, int, double.

What serialization format should I use to best match Janus? One that allows metaproperties and also all the various types (date, int32, char, etc.). I also need it to be human readable because I'm editing my graph file manually, and then I load this file into Janus. GraphML is not that bad, I can use it... it's just too limited given that it does not support the features mentioned above. Is there any better alternative? Or should I roll my own?





Re: graphml properties of properties

hadoopmarc@...
 

Hi Laura,

No. As the TinkerPop docs say: "graphML is a lossy format".

You can try for yourself with:
gremlin> graph = TinkerFactory.createModern()
==>tinkergraph[vertices:6 edges:6]
gremlin> g = graph.traversal()
==>graphtraversalsource[tinkergraph[vertices:6 edges:6], standard]
gremlin> g.V(1).properties('name').elementMap()
==>[id:0,key:name,value:marko]
gremlin> g.V(1).properties('name').property('metatest', 'hi')
==>vp[name->marko]
gremlin> g.V(1).properties('name').elementMap()
==>[id:0,key:name,value:marko,metatest:hi]
gremlin> g.addV('person').property('name', 'turing')
==>v[13]
gremlin> g.io('data/metatest.xml').write().iterate()
gremlin>
Best wishes,    Marc


Too low Performance when running PageRank and WCC on Graph500

shepherdkingqsp@...
 

Hi there,

Recently I have been trying to measure JanusGraph performance. When running benchmarks with JanusGraph 0.5.3, I observed very low performance for PageRank and WCC.

The code I used that you can refer to:
https://github.com/gaolk/graph-database-benchmark/tree/master/benchmark/janusgraph

Data:
Graph500

The environment:
Janusgraph Version: 0.5.3 (downloaded the full release zip from the janusgraph GitHub)

The config of Janusgraph (default conf/janusgraph-cql.properties)
gremlin.graph=org.janusgraph.core.JanusGraphFactory
storage.backend=cql
storage.batch-loading=true
storage.hostname=127.0.0.1
storage.cql.keyspace=janusgraph
cache.db-cache = true
cache.db-cache-clean-wait = 20
cache.db-cache-time = 180000
cache.db-cache-size = 0.5
To be more specific, I ran K-hop queries with it and got reasonable results.

K-Hop   Latency
1-Hop   23.42
2-Hop   16628.49
3-Hop   1872747.62 (2/10 runs hit the 2h timeout)
4-Hop   889146.03 (8/10 runs hit the 2h timeout)
5-Hop   10/10 runs hit the 2h timeout
6-Hop   10/10 runs hit the 2h timeout


But when I ran WCC and PageRank, both hit a 3-hour timeout.

Could somebody help me find the reason for the low performance?


Regards,
Shipeng


Re: Fail to load complete edge data of Graph500 to Janusgraph 0.5.3 with Cassandra CQl as storage backends

shepherdkingqsp@...
 

Hi Marc,

I have tried it, and I finally got the complete Graph500 vertices and edges loaded.

But there is still a weird thing: I found the same exception reported in the log.

Could you please explain this? How was the data still loaded completely even though the exception was reported?

Regards,
Shipeng


Re: Not able to enable Write-ahead logs using tx.log-tx for existing JanusGraph setup

Radhika Kundam
 

Hi Boxuan,

Thank you for the response. I tried force-closing all management instances except the current one before setting "tx.log-tx". Management is getting updated with the latest value; that was not the issue even without closing the other instances. The issue is that graph.getConfiguration().hasLogTransactions() is not refreshed by the JanusGraphManagement property update. My understanding is that logTransactions is only updated through GraphDatabaseConfiguration:preLoadConfiguration, which is only called when we open a graph instance with JanusGraphFactory.open(config). I don't see any setter method that updates logTransactions when we update the JanusGraphManagement property. Because of this, after updating JanusGraphManagement, restarting the cluster invokes GraphDatabaseConfiguration:preLoadConfiguration, and only then is the logTransactions value updated.

Please correct me if I am missing anything. It would be really helpful to know of any alternative approach to update logTransactions when we update the management property, without restarting the cluster.
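For reference, this is roughly how the value is being updated (a sketch of the management API calls described above):

```groovy
// Sketch: updating a global config option via the management API.
mgmt = graph.openManagement()
mgmt.set('tx.log-tx', true)
mgmt.commit()
// hasLogTransactions() on already-open instances still returns the old
// value: it is only re-read by GraphDatabaseConfiguration during
// JanusGraphFactory.open(config), hence the restart requirement.
```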

Thanks,
Radhika


Re: org.janusgraph.diskstorage.PermanentBackendException: Read 1 locks with our rid but mismatched timestamps

Ronnie
 

Hi Marc,
Thanks for the suggestions.

Narrowed down the issue to JanusGraph's support for the Azul Prime JDK 8, whose Instant.now() returns nanosecond precision, compared to the millisecond precision of OpenJDK 8. The issue was resolved by applying the patch for https://github.com/JanusGraph/janusgraph/issues/1979.
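The precision difference is easy to observe: on a JVM whose clock ticks finer than a millisecond, Instant.now() carries sub-millisecond digits that survive a round trip while millisecond-truncated timestamps do not. A quick check (names are mine):

```java
import java.time.Instant;
import java.time.temporal.ChronoUnit;

// Prints whether this JVM's Instant.now() carries sub-millisecond
// precision. OpenJDK 8's clock ticks in milliseconds; Azul Prime JDK 8
// (and Java 9+) can tick in micro/nanoseconds, which is what produced
// the mismatched lock timestamps.
public class ClockPrecision {
    public static void main(String[] args) {
        Instant now = Instant.now();
        Instant millis = now.truncatedTo(ChronoUnit.MILLIS);
        System.out.println("now            = " + now);
        System.out.println("sub-ms digits? = " + !now.equals(millis));
    }
}
```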

Thanks!
Ronnie


Re: Fail to load complete edge data of Graph500 to Janusgraph 0.5.3 with Cassandra CQl as storage backends

shepherdkingqsp@...
 

On Tue, Aug 24, 2021 at 06:20 AM, <hadoopmarc@...> wrote:
Got it. I will try it soon. 

Thanks, Marc!

Shipeng


Re: Fail to load complete edge data of Graph500 to Janusgraph 0.5.3 with Cassandra CQl as storage backends

hadoopmarc@...
 

Hi Shipeng Qi,

The system that you use might be too small for the number of threads in the loading code. You can try to decrease the number of threads from 8 to 4 with:

private static ExecutorService pool = Executors.newFixedThreadPool(4);

Best wishes,    Marc


Re: Not able to enable Write-ahead logs using tx.log-tx for existing JanusGraph setup

Boxuan Li
 

I suspect this is due to stale management instances. Check out https://developer.ibm.com/articles/janusgraph-tips-and-tricks-pt-2/#troubleshooting-indexes and see if it helps.
