Requesting your help: I am running JanusGraph with MapR-DB (M7) as the backend, and I have successfully created the Graph of the Gods example on it. But the following fails on a MapR cluster with Spark on YARN (plugin activated: tinkerpop.tinkergraph):
gremlin> graph = GraphFactory.open('conf/hadoop-graph/hadoop-load.properties')
==>hadoopgraph[gryoinputformat->nulloutputformat]
gremlin> g = graph.traversal(computer(SparkGraphComputer))
==>graphtraversalsource[hadoopgraph[gryoinputformat->nulloutputformat], sparkgraphcomputer]
gremlin> g.V().count()
hadoop-load.properties (I tried all the different combinations, commented out below; each time it is the same error):
#
# SparkGraphComputer Configuration
#
spark.master=yarn-client
spark.yarn.queue=cmp
mapred.job.queue.name=cmp
#spark.driver.allowMultipleContexts=true
#spark.executor.memory=4g
#spark.ui.port=20000
spark.serializer=org.apache.spark.serializer.KryoSerializer
spark.yarn.appMasterEnv.CLASSPATH=$CLASSPATH:/opt/mapr/hadoop/hadoop-2.7.0/etc/hadoop:/opt/mapr/hadoop/hadoop-2.7.0/share/hadoop/common/lib/*:/opt/mapr/hadoop/hadoop-2.7.0/share/hadoop/common/*:/opt/mapr/hadoop/hadoop-2.7.0/share/hadoop/hdfs:/opt/mapr/hadoop/hadoop-2.7.0/share/hadoop/hdfs/lib/*:/opt/mapr/hadoop/hadoop-2.7.0/share/hadoop/hdfs/*:/opt/mapr/hadoop/hadoop-2.7.0/share/hadoop/yarn/lib/*:/opt/mapr/hadoop/hadoop-2.7.0/share/hadoop/yarn/*:/opt/mapr/hadoop/hadoop-2.7.0/share/hadoop/mapreduce/lib/*:/opt/mapr/hadoop/hadoop-2.7.0/share/hadoop/mapreduce/*:/contrib/capacity-scheduler/*.jar:/opt/mapr/lib/kvstore*.jar:/opt/mapr/lib/libprotodefs*.jar:/opt/mapr/lib/baseutils*.jar:/opt/mapr/lib/maprutil*.jar:/opt/mapr/lib/json-20080701.jar:/opt/mapr/lib/flexjson-2.1.jar
#spark.executor.instances=10
#spark.executor.cores=2
#spark.executor.CoarseGrainedExecutorBackend.cores=2
#spark.executor.CoarseGrainedExecutorBackend.driver=FIXME
#spark.executor.CoarseGrainedExecutorBackend.stopping=false
#spark.streaming.stopGracefullyOnShutdown=true
#spark.yarn.driver.memoryOverhead=4g
#spark.yarn.executor.memoryOverhead=1024
#spark.yarn.am.extraJavaOptions=-Dhdp.version=2.3.0.0-2557
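For reference, the Hadoop-Gremlin section of the same file looks roughly like the sketch below. The console output above (gryoinputformat->nulloutputformat) implies the Gryo input format and null output format; the input/output locations here are placeholders, and the exact paths in my file differ:

```properties
# Hadoop-Gremlin graph configuration (sketch; locations are placeholders)
gremlin.graph=org.apache.tinkerpop.gremlin.hadoop.structure.HadoopGraph
gremlin.hadoop.graphInputFormat=org.apache.tinkerpop.gremlin.hadoop.structure.io.gryo.GryoInputFormat
gremlin.hadoop.graphOutputFormat=org.apache.hadoop.mapreduce.lib.output.NullOutputFormat
gremlin.hadoop.inputLocation=data/graph.kryo
gremlin.hadoop.outputLocation=output
gremlin.hadoop.jarsInDistributedCache=true
```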
--------------------------------------------------------------------------------------------------------------
YARN log
--------------------------------------------------------------------------------------------------------------
When the last command is executed, the driver abruptly shuts down in its YARN container and takes the Spark context down with it, with the following output in the YARN logs:
Container: container_e27_1501284102300_47651_01_000008 on abcd.com_8039
LogType:stderr
Log Upload Time:Mon Jul 31 14:08:42 -0700 2017
LogLength:2441
Log Contents:
17/07/31 14:08:05 INFO executor.CoarseGrainedExecutorBackend: Registered signal handlers for [TERM, HUP, INT]
17/07/31 14:08:05 INFO spark.SecurityManager: Changing view acls to: cmphs
17/07/31 14:08:05 INFO spark.SecurityManager: Changing modify acls to: cmphs
17/07/31 14:08:05 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(cmphs); users with modify permissions: Set(cmphs)
17/07/31 14:08:06 INFO spark.SecurityManager: Changing view acls to: cmphs
17/07/31 14:08:06 INFO spark.SecurityManager: Changing modify acls to: cmphs
17/07/31 14:08:06 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(cmphs); users with modify permissions: Set(cmphs)
17/07/31 14:08:06 INFO slf4j.Slf4jLogger: Slf4jLogger started
17/07/31 14:08:06 INFO Remoting: Starting remoting
17/07/31 14:08:06 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkExecutorActorSystem@...:36376]
17/07/31 14:08:06 INFO util.Utils: Successfully started service 'sparkExecutorActorSystem' on port 36376.
17/07/31 14:08:06 INFO storage.DiskBlockManager: Created local directory at /tmp/hadoop-mapr/nm-local-dir/usercache/cmphs/appcache/application_1501284102300_47651/blockmgr-244e0062-016e-4402-85c9-69f2ab9ef9d2
17/07/31 14:08:06 INFO storage.MemoryStore: MemoryStore started with capacity 2.7 GB
17/07/31 14:08:06 INFO executor.CoarseGrainedExecutorBackend: Connecting to driver: spark://CoarseGrainedScheduler@...:43768
17/07/31 14:08:06 INFO executor.CoarseGrainedExecutorBackend: Successfully registered with driver
17/07/31 14:08:06 INFO executor.Executor: Starting executor ID 7 on host abcd.com
17/07/31 14:08:06 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 44208.
17/07/31 14:08:06 INFO netty.NettyBlockTransferService: Server created on 44208
17/07/31 14:08:06 INFO storage.BlockManagerMaster: Trying to register BlockManager
17/07/31 14:08:06 INFO storage.BlockManagerMaster: Registered BlockManager
17/07/31 14:08:41 INFO executor.CoarseGrainedExecutorBackend: Driver commanded a shutdown
17/07/31 14:08:41 INFO storage.MemoryStore: MemoryStore cleared
17/07/31 14:08:41 INFO storage.BlockManager: BlockManager stopped
17/07/31 14:08:41 INFO util.ShutdownHookManager: Shutdown hook called
End of LogType:stderr

LogType:stdout
Log Upload Time:Mon Jul 31 14:08:42 -0700 2017
LogLength:0
Log Contents:
End of LogType:stdout
I am stuck with this error; any help is much appreciated.