Re: Calling a SparkGraphComputer from within Spark


HadoopMarc <m.c.d...@...>
 

Hi Rob,

Yes, you are diving right in, I see. JanusGraph data is available as a HadoopGraph using the HBaseInputFormat and CassandraInputFormat classes. You can find examples on the old Titan forum, e.g.

https://groups.google.com/forum/#!searchin/aureliusgraphs/read-cassandra%7Csort:relevance/aureliusgraphs/CJnT05-m_cQ/z6JcjKUxCgAJ
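For reference, a minimal sketch of such a setup. This is a Hadoop-Gremlin properties file wired to the HBaseInputFormat; the file name, table name, and host are placeholders to adapt to your deployment, and exact property keys may vary between JanusGraph versions:

```properties
# read-hbase.properties -- expose JanusGraph (HBase backend) as a HadoopGraph
gremlin.graph=org.apache.tinkerpop.gremlin.hadoop.structure.HadoopGraph
gremlin.hadoop.graphReader=org.janusgraph.hadoop.formats.hbase.HBaseInputFormat
gremlin.hadoop.graphWriter=org.apache.hadoop.mapreduce.lib.output.NullOutputFormat

# where the JanusGraph data lives (placeholder values)
janusgraphmr.ioformat.conf.storage.backend=hbase
janusgraphmr.ioformat.conf.storage.hostname=localhost
janusgraphmr.ioformat.conf.storage.hbase.table=janusgraph

# Spark settings for SparkGraphComputer
spark.master=local[*]
spark.serializer=org.apache.spark.serializer.KryoSerializer
```

A graph opened from such a file with GraphFactory.open(...) is a HadoopGraph, which SparkGraphComputer can consume.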

Cheers,
Marc

On Sunday, 19 March 2017 at 11:05:14 UTC+1, Rob Keevil wrote:

Thanks (again!) for the quick response.  I've got Gremlin to use the existing SparkContext successfully.

I'm trying to test this, but I'm also finding it difficult to trigger the traversal from Scala. I would have thought the Scala equivalent of the current Groovy example (gremlin> g = graph.traversal().withComputer(SparkGraphComputer)) would be:

val g = graph.traversal().withComputer(classOf[SparkGraphComputer])
println(g.V().count().next())

However, JanusGraphBlueprintsGraph performs an explicit check that only the FulgoraGraphComputer can be used this way (line 129).

withComputer also has a method signature taking a Computer (i.e. an object, not a Class), but instantiating a SparkGraphComputer needs a HadoopGraph, not a JanusGraph.
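Given that constraint, a sketch of a workaround in Scala: instead of calling withComputer on the JanusGraph instance, open a HadoopGraph from a Hadoop-Gremlin properties file configured with the HBaseInputFormat that Marc mentions, and traverse that. The file name here is hypothetical, and the JanusGraph/TinkerPop/Spark jars must be on the classpath:

```scala
// Sketch: run an OLAP traversal over JanusGraph data via a HadoopGraph.
// Assumes "read-hbase.properties" defines gremlin.graph=HadoopGraph with
// JanusGraph's HBaseInputFormat as the graphReader (placeholder file name).
import org.apache.tinkerpop.gremlin.structure.util.GraphFactory
import org.apache.tinkerpop.gremlin.spark.process.computer.SparkGraphComputer

object OlapCount {
  def main(args: Array[String]): Unit = {
    // GraphFactory returns a HadoopGraph here, which SparkGraphComputer accepts
    val graph = GraphFactory.open("read-hbase.properties")
    val g = graph.traversal().withComputer(classOf[SparkGraphComputer])
    println(g.V().count().next())
    graph.close()
  }
}
```

This sidesteps the FulgoraGraphComputer check entirely, because the traversal source is built on the HadoopGraph rather than on JanusGraphBlueprintsGraph.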

Once I get this up and running, I'll submit a full code example for others to use in the future.

Thanks,
Rob


