ID block allocation exception while creating edge
Hi Anjani, One thing that does not feel good is that you create and commit a transaction for every row of your dataframe. Although I do not see how this would interfere with ID allocation, best practi…
By hadoopmarc@... · #5931

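The advice above (avoid one commit per dataframe row) boils down to batching commits. A minimal, language-agnostic sketch of that pattern, where `tx_commit` is a hypothetical stand-in for the graph's actual transaction commit call:

```python
def load_rows(rows, tx_commit, batch_size=1000):
    """Commit once per batch of rows instead of once per row.

    tx_commit is a stand-in for the real commit call on the graph
    transaction; the batch size is an illustrative value.
    """
    pending = 0
    for row in rows:
        # ... create the vertex/edge for this row here ...
        pending += 1
        if pending >= batch_size:
            tx_commit()   # one commit per full batch
            pending = 0
    if pending:
        tx_commit()       # commit the final partial batch
```

With 2,500 rows and a batch size of 1,000 this issues three commits instead of 2,500.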
ID block allocation exception while creating edge
Hi Anjani, It is still most likely that the modified value of "ids.block-size" somehow does not come through. So, are you sure that all JanusGraph instances are closed before using the new value ("ids…
By hadoopmarc@... · #5927

MapReduce reindexing with authentication
Hi Boxuan, Yes, you are right, I mixed things up by wrongly interpreting GENERIC_OPTIONS as an env variable. I did some additional experiments, though, bringing in new information. 1. It is possible t…
By hadoopmarc@... · #5924

Making janus graph client to not use QUORUM
Hi Anjani, To see what exactly happens with local configurations, I did the following: from the binary janusgraph distribution I started janusgraph with "bin/janusgraph.sh start" (this implicitly uses…
By hadoopmarc@... · #5923

Backend data model deserialization
Hi Elliot, There should be some old Titan resources that describe how the data model is binary coded into the row keys and row values. Of course, it is also implicit from the JanusGraph source code. I…
By hadoopmarc@... · #5920

Query Optimisation
Hi Vinayak, Please study the as(), select(), project() and cap() steps from the TinkerPop ref docs. The arguments of project() do not reference the keys of side effects but rather introduce new keys f…
By hadoopmarc@... · #5916

Storing and reading connected component RDD through OutputFormatRDD & InputFormatRDD
Hi Anjani, The following section of the TinkerPop ref docs gives an example of how to reuse the output RDD of one job in a follow-up gremlin OLAP job. https://tinkerpop.apache.org/docs/3.4.10/referenc…
By hadoopmarc@... · #5910

MapReduce reindexing with authentication
Hi Boxuan, Yes, I did not finish my argument. What I tried to suggest: if the hadoop CLI command checks the GENERIC_OPTIONS env variable, then maybe also the mapreduce java client called by JanusGraph…
By hadoopmarc@... · #5909

MapReduce reindexing with authentication
Hi Boxuan, Using existing mechanisms for configuring mapreduce would be nicer, indeed. Upon reading this hadoop command, I see a GENERIC_OPTIONS env variable read by the mapreduce client, that can hav…
By hadoopmarc@... · #5906

ID block allocation exception while creating edge
Hi Anjani, It is a while ago I did this myself. I interpret ids.num-partitions as a stock of reserved id blocks that can be delegated to a janusgraph instance. It does not have a large value to not was…
By hadoopmarc@... · #5900

ID block allocation exception while creating edge
What is the number of parallel tasks? (for setting ids.num-partitions) You have the ids.authority.wait-time still on its default value of 300 ms, so that seems worthwhile experimenting with. Best wish…
By hadoopmarc@... · #5898

ID block allocation exception while creating edge
Hi Anjani, Please show the properties file you use to open janusgraph. I assume you also saw the other recommendations in https://docs.janusgraph.org/advanced-topics/bulk-loading/#optimizing-id-alloca…
By hadoopmarc@... · #5896

Query Optimisation
Hi Vinayak, Actually, query 4 was easier to rework. It could read somewhat like: g.V().has('property1', 'vertex1').as('v1').outE().has('property1', 'edge1').limit(100).as('e').inV().has('property1', '…
By hadoopmarc@... · #5894

Query Optimisation
Hi Vinayak, If you would bother to demonstrate this behavior with a reproducible, generated graph, you can report it as an issue on github. For now, you can only look for workarounds: - combine the fo…
By hadoopmarc@... · #5892

Query Optimisation
Hi Vinayak, Your last remark explains it well: it seems that in JanusGraph a union of multiple clauses can take much longer than the sum of the individual clauses. There are still two things that we h…
By hadoopmarc@... · #5890

Query Optimisation
Hi Vinayak, What happens with a single clause, so without the union: g.V().has('property1', 'vertex3').outE().has('property1', 'edge3').inV().has('property1', 'vertex2').limit(100).path().toList() Bes…
By hadoopmarc@... · #5888

Query Optimisation
Hi Vinayak, To be sure: we are dealing here with a large graph, so all V().has('property1', 'vertex...') steps do hit the index (no index log warnings)? For one, it would be interesting to see the out…
By hadoopmarc@... · #5886

Query Optimisation
Hi Vinayak, My answer already contains a concrete suggestion. Replace all union subclauses starting with outE with the alternate form that has a local(................limit(1)) construct, as indicated.
By hadoopmarc@... · #5884

Query Optimisation
Hi Vinayak, Can you please try and format your code in a consistent way to ease the reading (even if the editor in this forum is not really helpful in this)? After manual reformatting, Query 1 and the…
By hadoopmarc@... · #5882

Not able to run queries using spark graph computer from java
Hi Sai, The blog you mentioned is a bit outdated and is for spark-1.x. To get an idea of what changes are needed to get OLAP running with spark-2.x, you can take a look at: https://tinkerpop.apache.or…
By hadoopmarc@... · #5881