Re: olap connection with spark standalone cluster


Abhay Pandit <abha...@...>

Hi Lilly,

SparkGraphComputer does not support running Gremlin queries directly from a Java program.
You can try something like the following:

import org.apache.tinkerpop.gremlin.process.computer.ComputerResult;
import org.apache.tinkerpop.gremlin.process.computer.GraphComputer;
import org.apache.tinkerpop.gremlin.process.computer.traversal.TraversalVertexProgram;
import org.apache.tinkerpop.gremlin.spark.process.computer.SparkGraphComputer;

// Submit the traversal as a TraversalVertexProgram and read the result
// back from the graph computer's memory.
String query = "g.V().count()";
ComputerResult result = graph.compute(SparkGraphComputer.class)
        .result(GraphComputer.ResultGraph.NEW)
        .persist(GraphComputer.Persist.EDGES)
        .program(TraversalVertexProgram.build()
                .traversal(graph.traversal().withComputer(SparkGraphComputer.class),
                        "gremlin-groovy",
                        query)
                .create(graph))
        .submit()
        .get();
System.out.println(result.memory().get("gremlin.traversalVertexProgram.haltedTraversers"));
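
The memory value printed above holds the halted traversers; for g.V().count() that set contains a single traverser carrying the Long count. A minimal sketch of reading it out (TraverserSet and the HALTED_TRAVERSERS constant are standard TinkerPop types, but treat the exact usage as an illustration):

import org.apache.tinkerpop.gremlin.process.traversal.traverser.util.TraverserSet;

// HALTED_TRAVERSERS == "gremlin.traversalVertexProgram.haltedTraversers"
TraverserSet<Long> halted = result.memory().get(TraversalVertexProgram.HALTED_TRAVERSERS);
halted.forEach(t -> System.out.println("count = " + t.get()));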


Join my Facebook group: https://www.facebook.com/groups/Janusgraph/

Thanks,
Abhay

On Tue, 15 Oct 2019 at 19:25, <marc.d...@...> wrote:
Hi Lilly,

This error says that there are somehow two versions of the TinkerPop jars in your project. If you use Maven, you can check this with the dependency plugin.
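For example, to list the TinkerPop artifacts on your classpath and where they come from (the -Dincludes filter shown here is just an illustration):

mvn dependency:tree -Dincludes=org.apache.tinkerpop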

If other problems appear, also make sure the Spark cluster itself is healthy by running one of the examples from the Spark distribution with spark-submit, e.g. the one below.
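For instance, the SparkPi example that ships with the distribution (the examples jar name depends on your Spark/Scala version, so treat the exact path as an assumption):

$SPARK_HOME/bin/spark-submit \
    --master spark://127.0.0.1:7077 \
    --class org.apache.spark.examples.SparkPi \
    $SPARK_HOME/examples/jars/spark-examples_2.11-2.4.0.jar 100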

HTH,    Marc

On Tuesday, 15 October 2019 at 09:38:08 UTC+2, Lilly wrote:
Hi everyone,

I downloaded a fresh Spark binary release (spark-2.4.0-hadoop2.7) and set the master to spark://127.0.0.1:7077. I then started all services via $SPARK_HOME/sbin/start-all.sh.
I checked that Spark works with the provided example programs.

I am also using the janusgraph-0.4.0-hadoop2 binary distribution.

Now I configured the read-cassandra-3.properties as follows:
gremlin.graph=org.apache.tinkerpop.gremlin.hadoop.structure.HadoopGraph
gremlin.hadoop.graphReader=org.janusgraph.hadoop.formats.cassandra.Cassandra3InputFormat
gremlin.hadoop.graphWriter=org.apache.tinkerpop.gremlin.hadoop.structure.io.gryo.GryoOutputFormat
gremlin.hadoop.jarsInDistributedCache=true
gremlin.hadoop.inputLocation=none
gremlin.hadoop.outputLocation=output
gremlin.spark.persistContext=true
janusgraphmr.ioformat.conf.storage.backend=cassandra
janusgraphmr.ioformat.conf.storage.hostname=127.0.0.1
janusgraphmr.ioformat.conf.storage.port=9160
janusgraphmr.ioformat.conf.storage.cassandra.keyspace=janusgraph
cassandra.input.partitioner.class=org.apache.cassandra.dht.Murmur3Partitioner
spark.master=spark://127.0.0.1:7077
spark.executor.memory=8g
spark.executor.extraClassPath=/home/janusgraph-0.4.0-hadoop2/lib/*
spark.serializer=org.apache.spark.serializer.KryoSerializer
spark.kryo.registrator=org.apache.tinkerpop.gremlin.spark.structure.io.gryo.GryoRegistrator

where the JanusGraph libraries are stored in /home/janusgraph-0.4.0-hadoop2/lib.

In my Java application I now tried:

Graph graph = GraphFactory.open("...");
GraphTraversalSource g = graph.traversal().withComputer(SparkGraphComputer.class);

and then g.V().count().next(). I get the error message:
ERROR org.apache.spark.scheduler.TaskSetManager - Task 3 in stage 0.0 failed 4 times; aborting job
Exception in thread "main" java.lang.IllegalStateException: org.apache.spark.SparkException: Job aborted due to stage failure: Task 3 in stage 0.0 failed 4 times, most recent failure: Lost task 3.3 in stage 0.0 (TID 15, 192.168.178.32, executor 0): java.io.InvalidClassException: org.apache.tinkerpop.gremlin.process.traversal.util.DefaultTraversal; local class incompatible: stream classdesc serialVersionUID = -3191185630641472442, local class serialVersionUID = 6523257080464450267

Any ideas as to what might be the problem?
Thanks!
Lilly

