Read partitioned vertex (ID=8202), but partitioned vertex filtering is disabled
spirit...@...
I am computing graph statistics with hadoop-gremlin and SparkGraphComputer, but I run into the problem described below.
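For reference, this is roughly how I launch the job from the Gremlin Console; the properties file path here is just a placeholder for the config shown below:

// minimal launch sketch, assuming the config below is saved as read-hbase.properties
graph = GraphFactory.open('conf/hadoop-graph/read-hbase.properties')
g = graph.traversal().withComputer(SparkGraphComputer)
g.V().count()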
My config file is:
gremlin.graph=org.apache.tinkerpop.gremlin.hadoop.structure.HadoopGraph
gremlin.hadoop.graphInputFormat=org.janusgraph.hadoop.formats.hbase.HBaseInputFormat
#gremlin.hadoop.graphOutputFormat=org.apache.hadoop.mapreduce.lib.output.NullOutputFormat
gremlin.hadoop.graphOutputFormat=org.apache.tinkerpop.gremlin.hadoop.structure.io.gryo.GryoOutputFormat
gremlin.hadoop.jarsInDistributedCache=true
gremlin.hadoop.inputLocation=none
gremlin.hadoop.outputLocation=output

##### HBase configuration #####
janusgraphmr.ioformat.conf.storage.backend=hbase
janusgraphmr.ioformat.conf.storage.hostname=dl-rc-optd-ambari-slave-v-test-1.host.dataengine.com,dl-rc-optd-ambari-slave-v-test-2.host.dataengine.com,dl-rc-optd-ambari-slave-v-test-3.host.dataengine.com
janusgraphmr.ioformat.conf.storage.port=2181
janusgraphmr.ioformat.conf.storage.hbase.table=bluesharp_hanxi1

##### GiraphGraphComputer configuration #####
giraph.minWorkers=1
giraph.maxWorkers=1
giraph.SplitMasterWorker=false
giraph.useOutOfCoreGraph=true
giraph.useOutOfCoreMessages=true
mapred.map.child.java.opts=-Xmx1024m
mapred.reduce.child.java.opts=-Xmx1024m
giraph.numInputThreads=4
giraph.numComputeThreads=4
giraph.maxMessagesInMemory=100000

##### SparkGraphComputer configuration #####
spark.master=local[4]
spark.executor.memory=1g
spark.serializer=org.apache.spark.serializer.KryoSerializer
#spark.kryo.registrator=org.janusgraph.hadoop.serialize.JanusGraphKryoRegistrator
When I run the job, it fails with the following error:
19:20:58 ERROR org.apache.spark.scheduler.TaskSetManager - Task 0 in stage 0.0 failed 1 times; aborting job
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost): java.lang.IllegalStateException: Read partitioned vertex (ID=8202), but partitioned vertex filtering is disabled.
    at com.google.common.base.Preconditions.checkState(Preconditions.java:197)
    at org.janusgraph.hadoop.formats.util.JanusGraphVertexDeserializer.readHadoopVertex(JanusGraphVertexDeserializer.java:84)
    at org.janusgraph.hadoop.formats.util.GiraphRecordReader.nextKeyValue(GiraphRecordReader.java:60)
    at org.apache.spark.rdd.NewHadoopRDD$$anon$1.hasNext(NewHadoopRDD.scala:168)
    at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
    at scala.collection.convert.Wrappers$IteratorWrapper.hasNext(Wrappers.scala:29)
    at org.apache.tinkerpop.gremlin.util.iterator.IteratorUtils$4.advance(IteratorUtils.java:298)
    at org.apache.tinkerpop.gremlin.util.iterator.IteratorUtils$4.hasNext(IteratorUtils.java:269)
    at scala.collection.convert.Wrappers$JIteratorWrapper.hasNext(Wrappers.scala:41)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
    at scala.collection.Iterator$$anon$14.hasNext(Iterator.scala:388)
    at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
    at scala.collection.Iterator$class.foreach(Iterator.scala:727)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
    at scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:144)
    at scala.collection.AbstractIterator.foldLeft(Iterator.scala:1157)
    at scala.collection.TraversableOnce$class.fold(TraversableOnce.scala:199)
    at scala.collection.AbstractIterator.fold(Iterator.scala:1157)
    at org.apache.spark.rdd.RDD$$anonfun$fold$1$$anonfun$19.apply(RDD.scala:1086)
    at org.apache.spark.rdd.RDD$$anonfun$fold$1$$anonfun$19.apply(RDD.scala:1086)
    at org.apache.spark.SparkContext$$anonfun$36.apply(SparkContext.scala:1951)
    at org.apache.spark.SparkContext$$anonfun$36.apply(SparkContext.scala:1951)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
    at org.apache.spark.scheduler.Task.run(Task.scala:89)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:748)
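From the message it sounds like partitioned vertex filtering just needs to be switched on for the input format. If I read the janusgraph-hadoop options right, that would be a property along these lines, but I could not find it documented anywhere, so this is only a guess:

# guessed property name, not verified against the docs
janusgraphmr.ioformat.filter-partitioned-vertices=true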
Is my config file wrong? I couldn't find a config template. Can anybody help me? Thank you very much.