Re: Data Loading Script Optimization
Hi Marc,
To avoid confusion, I am including a new transaction at line 39 as well as at line 121.
Line 39: GraphTraversalSource g = graph.newTransaction().traversal();
Line 121: g = ctx.g =
By Vinayak Bali · #6075

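For readers without the attached script, a minimal Groovy sketch of the per-batch threaded-transaction pattern discussed in #6074/#6075; the properties file name and the batch contents are placeholders, only the newTransaction().traversal() call is taken from the messages:

    import org.janusgraph.core.JanusGraphFactory

    graph = JanusGraphFactory.open('conf/janusgraph-cql.properties')  // placeholder config
    tx = graph.newTransaction()     // what line 39 becomes: a dedicated threaded transaction
    g  = tx.traversal()             // traversal source bound to that transaction
    // ... addV()/addE() calls for one batch of vertices or edges ...
    tx.commit()                     // persist the batch and release the transaction
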
Re: Data Loading Script Optimization
Hi Vinayak,
Yes, it should be possible to improve on the 3% CPU usage.
The newTransaction() should be added to line 39 (GraphTraversalSource g = graph.traversal();) as the global g from line 121 is
By hadoopmarc@... · #6074

Re: Data Loading Script Optimization
Hi Marc,
The storage backend used is Cassandra.
Yes, the storage backend, JanusGraph and the load scripts are on the same server.
storage.batch-loading=true is specified.
CPU usage is very low, not more than 3%
By Vinayak Bali · #6073

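For context, a hedged example of a bulk-loading properties file for a Cassandra (CQL) backend; the hostname and ids.block-size values are illustrative, only storage.batch-loading=true comes from the message above:

    storage.backend=cql
    storage.hostname=127.0.0.1
    # relax consistency checks while bulk loading
    storage.batch-loading=true
    # larger id blocks reduce id-allocation overhead when inserting millions of elements
    ids.block-size=1000000
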
Re: Data Loading Script Optimization
Hi Vinayak,
What storage backend do you use? Do I understand right that the storage backend and the load script all run on the same server? If so, are all available CPU resources actively used during
By hadoopmarc@... · #6072

Re: Not able to enable Write-ahead logs using tx.log-tx for existing JanusGraph setup
No. It should work well for your existing JanusGraph setup too. Note that it is a GLOBAL option so it must be changed for the entire cluster. See
By Boxuan Li · #6071

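Because tx.log-tx is a GLOBAL option, here is a minimal sketch (assuming a Gremlin console session with the graph already open) of changing it cluster-wide through the management system rather than a per-instance properties file:

    mgmt = graph.openManagement()
    mgmt.set('tx.log-tx', true)   // GLOBAL option: the change applies to the whole cluster
    mgmt.commit()
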
Not able to enable Write-ahead logs using tx.log-tx for existing JanusGraph setup
Hi All,
I am trying to enable write-ahead logs by using the config property tx.log-tx to handle transaction recovery.
It's working fine for a new JanusGraph setup but not working for a JanusGraph setup
By Radhika Kundam · #6070

Re: config skip-schema-check=true is not honored for HBase
Hi Jigar,
Yes, I think it is an issue. I did not fully dive into it; in particular, I did not check whether any tests exist for the "disable schema check" configuration option. So, go ahead and create
By hadoopmarc@... · #6069

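For reference, a hedged sketch of the HBase properties involved in this thread; the hostname and table values are placeholders:

    storage.backend=hbase
    storage.hostname=<zookeeper-quorum>
    storage.hbase.table=<namespace>:<table>
    # skip the existence check (and creation) of the table and column families
    storage.hbase.skip-schema-check=true
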
Data Loading Script Optimization
Hi All,
I have attached a groovy script that I use to load data into JanusGraph.
The script takes 4 mins to load 1.5 million nodes and 13 mins to load approx 3 million edges. The server on which
By Vinayak Bali · #6068

Re: config skip-schema-check=true is not honored for HBase
Hi Marc,
Here is the full stack trace:
https://gist.github.com/jigs1993/5cc1682a919cfb5e8290bf4636f1c766
A possible fix is here: https://github.com/jigs1993/janusgraph/pull/1/files
Let me know if you
By jigar patel <jigar.9408266552@...> · #6067

Re: config skip-schema-check=true is not honored for HBase
Hi Jigar,
Can you provide the properties file you used for opening the graph, as well as the complete stacktrace for the exception listed above?
Best wishes, Marc
By hadoopmarc@... · #6066

config skip-schema-check=true is not honored for HBase
org.apache.hadoop.hbase.security.AccessDeniedException: org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient permissions (user=<user>, scope=<namespace>:<table>,
By jigar patel <jigar.9408266552@...> · #6065

Re: Property keys unique per label
Hi Laura,
Thanks for explaining in more detail. Another example is a "color" property. Different data sources could use different types of color objects. As long as you do not want to query for paints
By hadoopmarc@... · #6064

Re: Property keys unique per label
Janus describes itself like this:
a scalable graph database optimized for storing and querying graphs containing hundreds of billions of vertices and edges distributed across a multi-machine
By Laura Morales · #6063

Re: Property keys unique per label
Hi Laura,
Indeed, unique property key names are a limitation. But to be honest: if two properties have a different data-value type I would say these are different properties, so why give them the same
By hadoopmarc@... · #6062

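A minimal schema sketch of the point above, with made-up key names: the second definition fails because property key names are global to the graph, so keys with different data types need different names (e.g. prefixed per label):

    mgmt = graph.openManagement()
    mgmt.makePropertyKey('value').dataType(String.class).make()
    // mgmt.makePropertyKey('value').dataType(Integer.class).make()    // would fail: key 'value' already exists
    mgmt.makePropertyKey('sensor_value').dataType(Integer.class).make() // workaround: namespace the key name
    mgmt.commit()
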
Re: How to create users and roles
Hi Jonathan,
User authorization for Gremlin Server was introduced in TinkerPop 3.5.0, see https://tinkerpop.apache.org/docs/current/reference/#authorization
JanusGraph will use TinkerPop 3.5.x in its
By hadoopmarc@... · #6061

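As a rough illustration (see the linked TinkerPop reference for the authoritative settings), enabling an authorizer in gremlin-server.yaml looks roughly like this; the allow-list file name is a placeholder:

    authorization: {
      authorizer: org.apache.tinkerpop.gremlin.server.authz.AllowListAuthorizer,
      config: {
        authorizationAllowList: conf/allow-list.yaml}}
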
How to create users and roles
Dear all,
I have not found in the documentation the process to create and manage users and roles in order to control data access.
At this page https://docs.janusgraph.org/basics/server/ we can see
By jonathan.mercier.fr@... · #6060

Property keys unique per label
The documentation says "Property key names must be unique in the graph". Does it mean that it's not possible to have property keys that are unique *per label*? In other words, can I have two distinct
By Laura Morales · #6059

Re: janusgraph and deeplearning
Hi Jonathan,
One thing is not yet clear to me: does your graph fit into a single node (regarding memory and GPU) or do you plan to use distributed PyTorch? Either way, I guess it would be most
By hadoopmarc@... · #6058

Re: How to split graph in multiple graphml files and load them separately
Hi Laura,
Without checking this in the code, it only seems logical that the graph id is ignored, because you have to supply the io readers with an existing Graph instance. Apparently it was chosen to
By hadoopmarc@... · #6057

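A minimal sketch of the pattern described above, reading several GraphML files into one already-open graph; the properties file and GraphML file names are placeholders:

    import org.apache.tinkerpop.gremlin.structure.io.IoCore
    import org.janusgraph.core.JanusGraphFactory

    graph = JanusGraphFactory.open('conf/janusgraph-cql.properties')
    graph.io(IoCore.graphml()).readGraph('part-1.graphml')   // the graph id inside the file is ignored;
    graph.io(IoCore.graphml()).readGraph('part-2.graphml')   // elements are read into this open graph
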
Re: Performance Improvement
Laura, that is helpful; I will go through it and try to implement it.
Also, if there are any configurations that can be tuned for better performance, please share them.
By Vinayak Bali · #6056
