|
Re: Getting illegal access error due to unsupported Google Guava 21?
Hello Robert,
I have never used the JanusGraph server; as such I am not sure if we can write complex programs and save them for later use and integration - my code currently sits in between two code
|
By
arunab...@...
·
#553
·
|
|
Re: Hardware Calculation
The short answer is no. The number of instances depends on the workload you will be running on JanusGraph. You need to design your workload and test it on your JanusGraph instance with the desired usage
|
By
ted...@...
·
#552
·
|
|
Re: Getting illegal access error due to unsupported Google Guava 21?
Using the JanusGraph (Gremlin) Server will help decouple your dependencies.
See also http://docs.janusgraph.org/latest/server.html
Robert Dale
|
By
Robert Dale <rob...@...>
·
#557
·
|
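A minimal sketch of Robert's suggestion in #557, for the Gremlin Console: connect to a running JanusGraph (Gremlin) Server instead of embedding JanusGraph, so only gremlin-driver and your own Guava end up on the client classpath. The host, port, and the remote traversal source name 'g' are assumptions about the server configuration.
    cluster = Cluster.build('localhost').port(8182).create()
    g = EmptyGraph.instance().traversal().withRemote(DriverRemoteConnection.using(cluster, 'g'))
    g.V().limit(5).valueMap()   // evaluated on the server; only serialized results cross the wire
    cluster.close()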
|
Getting illegal access error due to unsupported Google Guava 21?
My project is using Google Guava 21 (cannot change it) and is giving the following error:
java.lang.IllegalAccessError: tried to access method
|
By
arunab...@...
·
#551
·
|
|
Re: [BLOG] Configuring JanusGraph for spark-yarn
Thank you Marc.
I did not set spark.executor.instances, but I do have spark.cores.max set to 64, and within YARN it is configured to allow as much RAM/cores as possible for our 5-server cluster.
|
By
Joe Obernberger <joseph.o...@...>
·
#550
·
|
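For reference, the Spark settings Joe mentions in #550 live in the hadoop-graph properties file passed to GraphFactory.open(). The values below are purely illustrative, not a recommendation:
    spark.master=yarn-client
    # Joe reports NOT setting spark.executor.instances; shown here only for completeness
    spark.executor.instances=5
    spark.cores.max=64
    spark.executor.memory=4g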
|
Hardware Calculation
Hi everyone,
Is there a hardware calculator available for JanusGraph to work out how many instances of JanusGraph and Cassandra we would require? Any help will be much appreciated.
--
Thanks &
|
By
Amyth Arora <aroras....@...>
·
#558
·
|
|
Re: [BLOG] Configuring JanusGraph for spark-yarn
Hi Joe,
Another thing to try (only tested on TinkerPop, not on JanusGraph): create the traversal source as follows:
g = graph.traversal().withComputer(new
|
By
HadoopMarc <bi...@...>
·
#549
·
|
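Marc's snippet in #549 is cut off above; a plausible completion is sketched below. Only the withComputer() call itself comes from his message, so the properties file, the SparkGraphComputer choice and the worker count are assumptions:
    graph = GraphFactory.open('conf/hadoop-graph/read-hbase.properties')   // hypothetical config file
    g = graph.traversal().withComputer(Computer.compute(SparkGraphComputer).workers(10))
    g.V().count()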
|
Creating a mixed index on an existing graph
I'm trying to add a mixed index with one key to an existing graph. After building the index, I do a commit on the JanusGraphManagement, and then awaitGraphIndexStatus(graph, mykey).call(). The
|
By
Peter Schwarz <kkup...@...>
·
#547
·
|
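For context on #547, the usual sequence for adding a mixed index to an existing graph looks roughly like the sketch below. Note that awaitGraphIndexStatus() takes the index name rather than the property key; 'myKey', 'byMyKeyMixed' and the 'search' backing index name are placeholders:
    mgmt = graph.openManagement()
    key = mgmt.getPropertyKey('myKey')
    mgmt.buildIndex('byMyKeyMixed', Vertex.class).addKey(key).buildMixedIndex('search')
    mgmt.commit()
    ManagementSystem.awaitGraphIndexStatus(graph, 'byMyKeyMixed').status(SchemaStatus.REGISTERED).call()
    // reindex existing data so the index can become ENABLED
    mgmt = graph.openManagement()
    mgmt.updateIndex(mgmt.getGraphIndex('byMyKeyMixed'), SchemaAction.REINDEX).get()
    mgmt.commit()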
|
Re: [BLOG] Configuring JanusGraph for spark-yarn
Marc - thank you. I've updated the classpath and removed nearly all of the CDH jars; had to keep chimera and some of the HBase libs in there. Apart from those and all the jars in lib.zip,
|
By
Joe Obernberger <joseph.o...@...>
·
#548
·
|
|
Re: [BLOG] Configuring JanusGraph for spark-yarn
Hi Gari and Joe,
Glad to see you testing the recipes for MapR and Cloudera respectively! I am sure that you realized by now that getting this to work is like walking through a minefield. If you
|
By
HadoopMarc <bi...@...>
·
#545
·
|
|
Re: [BLOG] Configuring JanusGraph for spark-yarn
Hi Marc - I did try splitting regions. What happens when I run the SparkGraphComputer job is that it seems to hit one region server hard, then moves on to the next; it appears to run serially.
|
By
Joe Obernberger <joseph.o...@...>
·
#546
·
|
|
Mizo + JanusGraph
In working to get the SparkGraphComputer to work with JanusGraph, I came across this project for Titan.
https://github.com/imri/mizo
I can get it to compile for JanusGraph, but not
|
By
Joe Obernberger <joseph.o...@...>
·
#544
·
|
|
Re: Cache expiration time
Thanks!
By
Ohad Pinchevsky <ohad.pi...@...>
·
#543
·
|
|
How can we bulk load the edges while we have the vertices in our JanusGraph DB?
Assume we have the vertices in the DB and we have the edge information in GraphSON/XML/TXT. How can we import the edges into JanusGraph?
|
By
hu junjie <hjj...@...>
·
#542
·
|
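One workable answer to #542, if the edge data can be exported to a simple delimited text file: loop over the file in a Gremlin script and stitch each edge onto the existing vertices. A minimal sketch, assuming a tab-separated file of (outId, edgeLabel, inId) and an indexed 'myId' property for the vertex lookups; for real bulk loads you would commit in batches:
    graph = JanusGraphFactory.open('conf/janusgraph-cassandra.properties')   // hypothetical config
    g = graph.traversal()
    new File('edges.txt').eachLine { line ->
        def parts = line.split('\t')                    // [outId, edgeLabel, inId]
        def outV = g.V().has('myId', parts[0]).next()
        def inV  = g.V().has('myId', parts[2]).next()
        outV.addEdge(parts[1], inV)
    }
    graph.tx().commit()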
|
Re: Index on a vertex label from Java
Not the answer I was hoping for, but thanks!
|
By
Peter Schwarz <kkup...@...>
·
#541
·
|
|
Vertex ID data type
Currently the JanusGraph vertex ID is a 64-bit long; is it possible to also support UUID as the vertex ID?
|
By
cda...@...
·
#536
·
|
|
Re: [BLOG] Configuring JanusGraph for spark-yarn
Could you let us know a little more about your configuration? What is your storage backend for JanusGraph (HBase/Cassandra)? I actually do not see an error in your log, but at the very
|
By
Joe Obernberger <joseph.o...@...>
·
#540
·
|
|
Re: [BLOG] Configuring JanusGraph for spark-yarn
Hi Marc,
Requesting your help: I am running JanusGraph with MapR-DB as the backend, and I have successfully been able to create the Graph of the Gods example on M7 as the backend
But when I am trying to execute the following
|
By
Gariee <garim...@...>
·
#535
·
|
|
Issue when trying to use Spark Graph Computer
Hi,
I am trying to execute the following, where the cluster is MapR and Spark runs on YARN:
graph=GraphFactory.open('conf/hadoop-graph/hadoop-load.properties')
g =
|
By
Gariee <garim...@...>
·
#534
·
|
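The usual way to finish the snippet in #534 and submit an OLAP traversal is shown below; the count() query is just a placeholder, not Gariee's actual job:
    graph = GraphFactory.open('conf/hadoop-graph/hadoop-load.properties')
    g = graph.traversal().withComputer(SparkGraphComputer)
    g.V().count()   // runs as a Spark job on YARN rather than inside the console JVM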
|
Re: Creating a gremlin pipeline from an arraylist
You probably should benchmark it, but I'd think that the injection would be faster since you already have the edges resolved. I think using the graph step g.E(a) would ultimately re-lookup the edges
|
By
Jason Plurad <plu...@...>
·
#533
·
|
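A quick illustration of the two options Jason compares in #533, assuming edgeList is the ArrayList of already-resolved Edge objects and 'name' is a stand-in property:
    // 1. inject: keeps working with the Edge objects already in hand
    g.inject(edgeList).unfold().inV().values('name')
    // 2. graph step: re-looks each edge up in the backend by id
    g.E(edgeList.collect { it.id() } as Object[]).inV().values('name')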