Re: Calling a SparkGraphComputer from within Spark
HadoopMarc <m.c.d...@...>
Hi Rob,

Documentation is not abundant on this, I agree. So I read through the spark-gremlin source code and saw that SparkGraphComputer can reuse an existing SparkContext when the gremlin.spark.persistContext property is set (not gremlin.spark.persistStorageLevel, which only sets the storage level for persisted RDDs). If you set this property and obtain your SparkContext through one of gremlin-spark's static Spark.create() methods, I would expect your own jobs and the SparkGraphComputer jobs to run within the same SparkContext. Something like the sketch below.
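To illustrate, here is an untested sketch (assuming TinkerPop 3.2.x on the classpath; the Gryo input/output locations and the PageRank program are just placeholders for your own data and OLAP job):

import org.apache.commons.configuration.BaseConfiguration;
import org.apache.commons.configuration.Configuration;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.tinkerpop.gremlin.hadoop.structure.HadoopGraph;
import org.apache.tinkerpop.gremlin.process.computer.ComputerResult;
import org.apache.tinkerpop.gremlin.process.computer.ranking.pagerank.PageRankVertexProgram;
import org.apache.tinkerpop.gremlin.spark.process.computer.SparkGraphComputer;
import org.apache.tinkerpop.gremlin.spark.structure.Spark;
import org.apache.tinkerpop.gremlin.structure.Graph;
import org.apache.tinkerpop.gremlin.structure.util.GraphFactory;

public class SharedSparkContext {
    public static void main(String[] args) throws Exception {
        // Create the SparkContext through gremlin-spark's Spark helper so
        // that SparkGraphComputer can pick up the same context later.
        SparkConf sparkConf = new SparkConf()
                .setAppName("shared-context-demo")
                .setMaster("local[4]")
                // Gryo serializer, as recommended for TinkerPop OLAP jobs
                .set("spark.serializer",
                     "org.apache.tinkerpop.gremlin.spark.structure.io.gryo.GryoSerializer");
        Spark.create(sparkConf);
        JavaSparkContext jsc = new JavaSparkContext(Spark.getContext());

        // ... run your own Spark jobs with jsc here ...

        // HadoopGraph configuration; persistContext keeps the SparkContext
        // alive after the SparkGraphComputer job finishes.
        Configuration graphConf = new BaseConfiguration();
        graphConf.setProperty("gremlin.graph", HadoopGraph.class.getName());
        graphConf.setProperty("gremlin.spark.persistContext", true);
        graphConf.setProperty("gremlin.hadoop.graphReader",
                "org.apache.tinkerpop.gremlin.hadoop.structure.io.gryo.GryoInputFormat");
        graphConf.setProperty("gremlin.hadoop.graphWriter",
                "org.apache.tinkerpop.gremlin.hadoop.structure.io.gryo.GryoOutputFormat");
        graphConf.setProperty("gremlin.hadoop.inputLocation", "data/tinkerpop-modern.kryo");
        graphConf.setProperty("gremlin.hadoop.outputLocation", "output");

        // Run an OLAP job (PageRank as a stand-in) on the shared context.
        Graph graph = GraphFactory.open(graphConf);
        ComputerResult result = graph.compute(SparkGraphComputer.class)
                .program(PageRankVertexProgram.build().create(graph))
                .submit().get();
        System.out.println("OLAP runtime (ms): " + result.memory().getRuntime());

        // The same SparkContext is still usable for further Spark jobs here.
        jsc.stop();
    }
}

As far as I can tell from the source, SparkGraphComputer obtains its context via SparkContext.getOrCreate(), so it finds the one registered by Spark.create(), and persistContext then prevents it from being stopped when the job completes.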
HTH, Marc

On Friday, March 17, 2017 at 12:50:31 PM UTC+1, Rob Keevil wrote: