Re: Query Optimisation
Hi Vinayak,
Your last remark explains it well: it seems that in JanusGraph a union of multiple clauses can take much longer than the sum of the individual clauses. There are still two things that we
By hadoopmarc@... · #5890

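A minimal sketch of the comparison discussed in #5890, with hypothetical property values: time each clause on its own, then the same clauses wrapped in a union().

// each clause on its own (hypothetical property values)
g.V().has('property1', 'vertex1').outE().has('property1', 'edge1').inV().has('property1', 'vertex2').count()
g.V().has('property1', 'vertex2').outE().has('property1', 'edge2').inV().has('property1', 'vertex3').count()

// the same clauses combined into one union, as in the original query
g.inject(1).union(
    V().has('property1', 'vertex1').outE().has('property1', 'edge1').inV().has('property1', 'vertex2'),
    V().has('property1', 'vertex2').outE().has('property1', 'edge2').inV().has('property1', 'vertex3')
).count()
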
Re: Query Optimisation
Hi Marc,
That works as expected. The union also works as expected, as in Query 1, but when I add a limit to all edges the performance degrades.
Thanks
By Vinayak Bali · #5889

Re: Query Optimisation
Hi Vinayak,
What happens with a single clause, so without the union:
g.V().has('property1', 'vertex3').outE().has('property1', 'edge3').inV().has('property1',
By hadoopmarc@... · #5888

Re: Query Optimisation
Hi Marc,
Yes, all the indexes are available and no warning is thrown while executing the query. I tried debugging using the profile() step; 99% of the time is taken by the union query.
Thanks &
By Vinayak Bali · #5887

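The profile() step mentioned in #5887 can simply be appended to the traversal to get per-step timings; a minimal sketch with hypothetical property values:

// profile() replaces the normal result with per-step metrics
g.V().has('property1', 'vertex1').
  union(outE().has('property1', 'edge1').inV(),
        outE().has('property1', 'edge2').inV()).
  profile()
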
Re: Query Optimisation
Hi Vinayak,
To be sure: we are dealing here with a large graph, so all V().has('property1', 'vertex...') steps do hit the index (no index log warnings)? For one, it would be interesting to see the
By hadoopmarc@... · #5886

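For the index question in #5886: a V().has('property1', ...) step only hits an index if one (typically a composite index) exists on that key. A minimal sketch of defining such an index through the management API; the index name is hypothetical, and a graph that already contains data would additionally need a reindex:

mgmt = graph.openManagement()
mgmt.buildIndex('byProperty1Composite', Vertex.class).
     addKey(mgmt.getPropertyKey('property1')).
     buildCompositeIndex()
mgmt.commit()
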
Re: Query Optimisation
Hi Marc,
Tried the approach you suggested. There is some improvement: earlier it took 2 min, now it takes 1 min 50 sec. Is there any other way to optimize this further, maybe to milliseconds or seconds?
Thank
By Vinayak Bali · #5885

Re: Query Optimisation
Hi Vinayak,
My answer already contains a concrete suggestion. Replace all union subclauses starting with outE with the alternate form that has a local(................limit(1)) construct, as
By hadoopmarc@... · #5884

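A minimal sketch of the kind of rewrite suggested in #5884, with hypothetical property values; wrapping the edge expansion in local() makes the limit apply per starting vertex instead of to the whole stream:

// original union subclause: the limit is applied globally across all starting vertices
outE().has('property1', 'edge1').limit(100).inV().has('property1', 'vertex2')

// alternate form: at most one matching edge is taken per starting vertex
local(outE().has('property1', 'edge1').limit(1)).inV().has('property1', 'vertex2')
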
Re: Query Optimisation
Hi Marc,
Thank you for your reply. I understand the queries are big, so there is a problem viewing them.
Actually I am not interested in either v1 or v2. I want to apply a limit on the edges, and don't
By Vinayak Bali · #5883

Re: Query Optimisation
Hi Vinayak,
Can you please try and format your code in a consistent way to ease the reading (even if the editor in this forum is not really helpful in this)? After manual reformatting, Query 1 and the
By hadoopmarc@... · #5882

Re: Not able to run queries using spark graph computer from java
Hi Sai,
The blog you mentioned is a bit outdated and is for spark-1.x. To get an idea of what changes are needed to get OLAP running with spark-2.x, you can take a look
By hadoopmarc@... · #5881

Query Optimisation
Hi All,
g.inject(1).union(V().has('property1', 'vertex1').as('v1').union(outE().has('property1', 'edge1').as('e').inV().has('property1', 'vertex1'),outE().has('property1',
By Vinayak Bali · #5880

Re: Not able to run queries using spark graph computer from java
Hi Marc,
I got this when querying using OLTP:
gremlin> g.V(1469152598528)
==>v[1469152598528]
gremlin> g.V(1469152598528).elementMap()
==>[id:1469152598528,label:vertex]
I am also trying to run spark
By Sai Supraj R · #5879

Re: Not able to run queries using spark graph computer from java
Hi Sai,
What happens in createTraversal()?
What do you get with g.V(1469152598528).elementMap() if you open the graph for OLTP queries?
Best wishes, Marc
By hadoopmarc@... · #5878

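A minimal sketch of the OLTP check asked for in #5878, assuming the Gremlin Console, a CQL backend, and a hypothetical properties file name:

// open the graph directly for OLTP, not through HadoopGraph
graph = JanusGraphFactory.open('conf/janusgraph-cql.properties')
g = graph.traversal()
g.V(1469152598528).elementMap()
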
Re: Support for DB cache for Multi Node Janus Server Setup
Hi Pasan,
The multiple JanusGraph nodes share the same storage backend, so the common caching is done in the storage backend.
Best wishes, Marc
By hadoopmarc@... · #5877

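For reference with #5877, the JanusGraph database cache is configured per server instance in its properties file; a minimal sketch of the relevant settings (values are hypothetical):

# per-JVM database cache: each JanusGraph server keeps its own copy
cache.db-cache = true
cache.db-cache-time = 180000
cache.db-cache-size = 0.25

Caching that is shared between the JanusGraph servers happens in the storage backend itself, for example Cassandra's key and row caches.
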
Support for DB cache for Multi Node Janus Server Setup
Hi All,
I would like to understand the possibility of horizontally scaling JanusGraph servers while keeping the cache enabled. Based on the JanusGraph documentation -
By pasansumanathilake@... · #5876

Re: Not able to run queries using spark graph computer from java
Hi Marc,
Sorry, my bad, I posted the wrong code.
I used Graph graph = GraphFactory.open("read-cql.properties");
and I got the above error.
Thanks
Sai
By Sai Supraj R · #5875

Re: Not able to run queries using spark graph computer from java
Hi Sai,
The calling code you present is not complete.
The first line should read (because HadoopGraph does not derive from JanusGraph):
Graph graph = GraphFactory.open("read-cql.properties");
Best
By hadoopmarc@... · #5874

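A minimal sketch of the calling code meant in #5874, written for the Gremlin Console (the equivalent Java is analogous) and assuming the spark-gremlin plugin is active:

// a HadoopGraph is opened with GraphFactory, not JanusGraphFactory
graph = GraphFactory.open('read-cql.properties')
g = graph.traversal().withComputer(SparkGraphComputer)
g.V().count()
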
Re: olap connection with spark standalone cluster
Hi Sai,
This exception is not really related to this thread.
JanusGraph with SparkGraphComputer can only be used with the TinkerPop HadoopGraph. Therefore, the example in the JanusGraph ref docs has a
By hadoopmarc@... · #5873

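For orientation with #5873, the ref docs example revolves around a properties file that configures a TinkerPop HadoopGraph reading from the JanusGraph keyspace; a rough sketch, with class names taken from the 0.5.x docs, so they may differ per version:

gremlin.graph=org.apache.tinkerpop.gremlin.hadoop.structure.HadoopGraph
gremlin.hadoop.graphReader=org.janusgraph.hadoop.formats.cql.CqlInputFormat
gremlin.hadoop.graphWriter=org.apache.hadoop.mapreduce.lib.output.NullOutputFormat
janusgraphmr.ioformat.conf.storage.backend=cql
janusgraphmr.ioformat.conf.storage.hostname=127.0.0.1
spark.master=local[4]
spark.serializer=org.apache.spark.serializer.KryoSerializer
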
Not able to run queries using spark graph computer from java
Hi,
I am getting the following error when running queries using spark graph computer from java.
Caused by: java.lang.RuntimeException: java.lang.IllegalArgumentException: Edge with id already exists:
By Sai Supraj R · #5872

Re: olap connection with spark standalone cluster
Hi, I tried the above solution but it is still throwing an error:
java.lang.Throwable: Hook creation trace
at org.janusgraph.graphdb.database.StandardJanusGraph.<init>(StandardJanusGraph.java:185)
By Sai Supraj R · #5871