Re: Calling a SparkGraphComputer from within Spark

"Jun(Terry) Yang" <terr...@...>

Hi Rob,

I went through the TinkerPop code; only PageRankMapReduce, ClusterCountMapReduce, and ClusterPopulationMapReduce have a memoryKey function.
(I found the note "We still recommend users call persist on the resulting RDD if they plan to reuse it." in the Spark docs.)
I'm not sure if this is by design.
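(For context, the memoryKey on those MapReduce jobs is what names the persisted result. A rough sketch of how it gets wired up in the console, assuming the spark-gremlin plugin is loaded and a properties file — filename here is just an example — with gremlin.hadoop.graphWriter=PersistedOutputRDD:)

```groovy
// Sketch only: assumes spark-gremlin plugin and a properties file
// configured with gremlin.hadoop.graphWriter=PersistedOutputRDD.
graph = GraphFactory.open('conf/hadoop-graph/')
result = graph.compute(SparkGraphComputer).
    program(PeerPressureVertexProgram.build().create(graph)).
    mapReduce(ClusterCountMapReduce.build().memoryKey('clusterCount').create()).
    submit().get()
// The Integer lands in the computation's memory under that key,
// which is also the name of the persisted RDD (output/clusterCount).
result.memory().get('clusterCount')
```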

After running the PeerPressureVertexProgram sample, I saw 2 persisted RDDs, and the result of the sample was 2 (an Integer):
==>output/clusterCount [Memory Deserialized 1x Replicated]
==>output/~g [Memory Deserialized 1x Replicated]
gremlin> spark.head('output', 'clusterCount', PersistedInputRDD)
Then I tried to read these RDDs with gremlin.hadoop.graphReader=PersistedInputRDD.class.getCanonicalName():
a) I failed to read "output/clusterCount", with the exception: java.lang.ClassCastException: java.lang.Integer cannot be cast to 
    The Integer value is read in this case, but the graph structure can't accept it, so I guess some Spark program may access this persisted RDD instead.
b) I succeeded with "output/~g":
gremlin> graph2 ='conf/hadoop-graph/')
gremlin> graph2.configuration().setProperty('gremlin.hadoop.graphReader', PersistedInputRDD.class.getCanonicalName())
gremlin> graph2.configuration().setProperty('gremlin.hadoop.inputLocation', 'output/~g')
gremlin> g2 = graph2.traversal().withComputer(SparkGraphComputer)
gremlin> g2.V().valueMap()
==>[gremlin.peerPressureVertexProgram.cluster:[1], name:[josh], age:[32]]
==>[gremlin.peerPressureVertexProgram.cluster:[1], name:[marko], age:[29]]
==>[gremlin.peerPressureVertexProgram.cluster:[6], name:[peter], age:[35]]
==>[gremlin.peerPressureVertexProgram.cluster:[1], name:[lop], lang:[java]]
==>[gremlin.peerPressureVertexProgram.cluster:[1], name:[ripple], lang:[java]]
==>[gremlin.peerPressureVertexProgram.cluster:[1], name:[vadas], age:[27]]
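(For anyone following along, the same read can be set up in the properties file itself rather than via setProperty — a sketch, with the filename hypothetical:)

```properties
# — sketch of the equivalent configuration
gremlin.hadoop.graphReader=org.apache.tinkerpop.gremlin.spark.structure.io.PersistedInputRDD
gremlin.hadoop.inputLocation=output/~g
# keep the Spark context alive so persisted RDDs survive between jobs
gremlin.spark.persistContext=true
```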

Hope this helps you!


On Monday, March 20, 2017 at 2:49:41 AM UTC+8, Rob Keevil wrote:
This is the last battle before I think this is all done: I need to extract the output without collecting results to the driver and exploding the memory there.

Gremlin has a page on how to retrieve the result as a persisted RDD. However, their calculation uses a vertex program, which can name the output using memoryKey('clusterCount'). A regular traversal doesn't seem to have this option, and Spark's logs show that it removes the RDD after the traversal. Do you know of any way to access this RDD?

(I've set the required property).
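(For reference, the persisted-output side of the setup looks roughly like this — the property names come from spark-gremlin, and the output location is just an example:)

```properties
gremlin.hadoop.graphWriter=org.apache.tinkerpop.gremlin.spark.structure.io.PersistedOutputRDD
gremlin.hadoop.outputLocation=output
# without this, the Spark context (and its persisted RDDs) is torn down after the job
gremlin.spark.persistContext=true
```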
