MapReduce reindexing with authentication
Hi Boxuan, Using existing mechanisms for configuring mapreduce would be nicer, indeed. Upon reading this hadoop command, I see a GENERIC_OPTIONS env variable read by the mapreduce client, that can hav
By hadoopmarc@... · #5906
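For context, the MapReduce reindex that #5906 discusses is typically launched from the Gremlin Console as sketched below; the properties file and the index name 'byName' are placeholders, and the Hadoop/MapReduce client reads its cluster and authentication settings (e.g. core-site.xml, yarn-site.xml) from the classpath.
    // Minimal sketch of a MapReduce reindex; adjust the properties file and index name.
    graph = JanusGraphFactory.open('conf/janusgraph-hbase.properties')
    mgmt = graph.openManagement()
    mr = new MapReduceIndexManagement(graph)
    mr.updateIndex(mgmt.getGraphIndex('byName'), SchemaAction.REINDEX).get()
    mgmt.commit()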
ID block allocation exception while creating edge
Hi Anjani, It has been a while since I did this myself. I interpret ids.num-partitions as a stock of reserved id blocks that can be delegated to a JanusGraph instance. It does not have a large value to not was
By hadoopmarc@... · #5900
ID block allocation exception while creating edge
What is the number of parallel tasks? (for setting ids.num-partitions) You have the ids.authority.wait-time still on its default value of 300 ms, so that seems worthwhile experimenting with. Best wish
By hadoopmarc@... · #5898
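The knobs mentioned in #5898 and #5900 are ordinary JanusGraph configuration options; a minimal sketch of setting them when opening a new graph is shown below (backend, hostname and values are placeholders to be tuned to the number of parallel tasks; on an existing graph, GLOBAL_OFFLINE options such as ids.block-size have to be changed through the management system instead).
    // Values below are placeholders; tune ids.num-partitions to the number of parallel tasks
    // and consider raising ids.block-size and ids.authority.wait-time (default 300 ms) for bulk loading.
    conf = JanusGraphFactory.build()
    conf.set('storage.backend', 'cql')
    conf.set('storage.hostname', '127.0.0.1')
    conf.set('ids.block-size', 1000000)
    conf.set('ids.num-partitions', 10)
    conf.set('ids.authority.wait-time', 1000)
    graph = conf.open()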
ID block allocation exception while creating edge
Hi Anjani, Please show the properties file you use to open janusgraph. I assume you also saw the other recommendations in https://docs.janusgraph.org/advanced-topics/bulk-loading/#optimizing-id-alloca
By hadoopmarc@... · #5896
Query Optimisation
Hi Vinayak, Actually, query 4 was easier to rework. It could read somewhat like: g.V().has('property1', 'vertex1').as('v1').outE().has('property1', 'edge1').limit(100).as('e').inV().has('property1', '
By hadoopmarc@... · #5894
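The reworked traversal in #5894 is cut off above; purely to illustrate the shape it describes, a completed version might look like the sketch below, where the final vertex filter and the select() step are assumptions rather than the actual query from the thread.
    // Illustrative completion only; 'vertex2' and the select() step are guesses.
    g.V().has('property1', 'vertex1').as('v1').
      outE().has('property1', 'edge1').limit(100).as('e').
      inV().has('property1', 'vertex2').as('v2').
      select('v1', 'e', 'v2').toList()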
Query Optimisation
Hi Vinayak, If you take the trouble to demonstrate this behavior with a reproducible, generated graph, you can report it as an issue on GitHub. For now, you can only look for workarounds: - combine the fo
By hadoopmarc@... · #5892
Query Optimisation
Hi Vinayak, Your last remark explains it well: it seems that in JanusGraph a union of multiple clauses can take much longer than the sum of the individual clauses. There are still two things that we h
By hadoopmarc@... · #5890
Query Optimisation
Hi Vinayak, What happens with a single clause, so without the union: g.V().has('property1', 'vertex3').outE().has('property1', 'edge3').inV().has('property1', 'vertex2').limit(100).path().toList() Bes
By hadoopmarc@... · #5888
Query Optimisation
Hi Vinayak, To be sure: we are dealing here with a large graph, so all V().has('property1', 'vertex...') steps do hit the index (no index log warnings)? For one, it would be interesting to see the out
By hadoopmarc@... · #5886
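A quick way to check the point raised in #5886, namely whether the initial has() step is served by an index, is to profile the first step (sketch; assumes the graph is already open with traversal source g).
    // An index-backed start shows up as a JanusGraphStep using the index;
    // a full scan would also log the "Query requires iterating over all vertices" warning.
    g.V().has('property1', 'vertex1').profile().toList()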
Query Optimisation
Hi Vinayak, My answer already contains a concrete suggestion. Replace all union subclauses starting with outE with the alternate form that has a local(................limit(1)) construct, as indicated.
By hadoopmarc@... · #5884
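A sketch of the construct #5884 refers to is shown below; the property values are the placeholders used in this thread and the exact subclauses are assumptions.
    // Wrapping each union subclause in local(...limit(1)) makes the limit apply
    // per start vertex instead of to the union result as a whole.
    g.V().has('property1', 'vertex1').
      union(
        local(outE().has('property1', 'edge1').limit(1)),
        local(outE().has('property1', 'edge2').limit(1))
      ).toList()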
Query Optimisation
Hi Vinayak, Can you please try and format your code in a consistent way to ease the reading (even if the editor in this forum is not really helpful in this)? After manual reformatting, Query 1 and the
By hadoopmarc@... · #5882
Not able to run queries using spark graph computer from java
Hi Sai, The blog you mentioned is a bit outdated and is for spark-1.x. To get an idea of what changes are needed to get OLAP running with spark-2.x, you can take a look at: https://tinkerpop.apache.or
By hadoopmarc@... · #5881
Not able to run queries using spark graph computer from java
Hi Sai, What happens in createTraversal()? What do you get with g.V(1469152598528).elementMap() if you open the graph for OLTP queries? Best wishes, Marc
By hadoopmarc@... · #5878
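The OLTP check suggested in #5878 amounts to something like the sketch below; the properties file is a placeholder for whatever is normally used to open the graph directly, without SparkGraphComputer.
    // Open the graph for plain OLTP access and look the vertex up by id.
    graph = JanusGraphFactory.open('conf/janusgraph-cql.properties')
    g = graph.traversal()
    g.V(1469152598528L).elementMap()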
Support for DB cache for Multi Node Janus Server Setup
Hi Pasan, The multiple janusgraph nodes share the same storage backend, so the common caching is done in the storage backend. Best wishes, Marc
By hadoopmarc@... · #5877
Not able to run queries using spark graph computer from java
Hi Sai, The calling code you present is not complete. The first line should read (because HadoopGraph does not derive from JanusGraph): Graph graph = GraphFactory.open("read-cql.properties"); Best wis
By hadoopmarc@... · #5874
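Put together, the calling code #5874 describes follows the ref-docs OLAP example, roughly as sketched below; the properties path and the count() query are placeholders.
    // A HadoopGraph is opened via GraphFactory, not JanusGraphFactory.
    graph = GraphFactory.open('conf/hadoop-graph/read-cql.properties')
    g = graph.traversal().withComputer(SparkGraphComputer)
    g.V().count()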
olap connection with spark standalone cluster
Hi Sai, This exception is not really related to this thread. JanusGraph with SparkGraphComputer can only be used with the TinkerPop HadoopGraph. Therefore, the example in the JanusGraph ref docs has a
By hadoopmarc@... · #5873
Any advice on performance concern of JanusGraph with Cassandra & Elasticsearch?
Many organizations use JanusGraph on this scale. Insertion of data is slow, so you need massive parallel operations to do an entire bulk load overnight. Most people use tools like Apache Spark for this
By hadoopmarc@... · #5870
Backup & Restore of JanusGraph Data with Mixed Index Backend (Elasticsearch)
In theory (not tried in practice) the following should be possible: make a snapshot of the ScyllaDB keyspace; after the ScyllaDB snapshot is written, make a snapshot of the corresponding ES mixed indices re
By hadoopmarc@... · #5863
Configured graph factory not working after making changes to gremlin-server.yaml
Hi Sai, In your last post a line with ConfiguredGraphFactory.createConfiguration(new MapConfiguration(map)); is missing. A complete working transcript that works out of the box from the janusgraph-ful
By hadoopmarc@... · #5860
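For reference, the missing step in #5860 normally sits in a sequence like the sketch below, following the ConfiguredGraphFactory documentation; the map contents and the graph name are placeholders.
    // Run against a server that exposes ConfiguredGraphFactory, e.g. from a remote console session.
    map = ['storage.backend'  : 'cql',
           'storage.hostname' : '127.0.0.1',
           'graph.graphname'  : 'graph1']
    ConfiguredGraphFactory.createConfiguration(new MapConfiguration(map))
    graph1 = ConfiguredGraphFactory.open('graph1')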
Configured graph factory not working after making changes to gremlin-server.yaml
Hi Sai, I suspect this is related to your setting: #do not auto generate graph vertex id graph.set-vertex-id=true Can you try without? Best wishes, Marc
By hadoopmarc@... · #5854