Re: [BLOG] Configuring JanusGraph for spark-yarn
Could this be a networking issue? Maybe a firewall is enabled, or SELinux is preventing a connection?
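A quick way to test basic TCP reachability from the driver host to a worker is sketched below; the hostname and port are placeholders, so substitute the addresses that appear in your own logs:

```shell
# check_port: attempt a TCP connect using bash's /dev/tcp pseudo-device.
# Returns 0 if the port accepts connections, non-zero otherwise.
check_port() {
  local host=$1 port=$2
  timeout 3 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null
}

# Hypothetical worker host and NodeManager port; replace with your own.
if check_port worker-node.example.com 8042; then
  echo "reachable"
else
  echo "unreachable: check firewall rules and SELinux (getenforce)"
fi
```

If the port is unreachable from the driver but reachable locally on the worker, that points at a firewall or SELinux policy rather than at Spark itself.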
I've been able to get this to work, but running a simple count, g.V().count(), on anything but a very small graph takes a very long time (hours). Are there any cache settings, or other resources, that could be tuned to improve performance?
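In case it matters, I am running the count from the Gremlin console roughly like this (the properties file name is the one from the blog post; my setup may differ):

    gremlin> graph = GraphFactory.open('conf/hadoop-graph/read-hbase.properties')
    gremlin> g = graph.traversal().withComputer(SparkGraphComputer)
    gremlin> g.V().count()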
The YARN container logs are filled with debug lines about 'Created dirty vertex map with initial size 32', 'Created vertex cache with max size 20000', and 'Generated HBase Filter ColumnRangeFilter'. Can any of these things be adjusted in the properties file? Thank you!
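For reference, the cache-related options I have found in the JanusGraph configuration reference that seem to match those log lines are below; the values are guesses for experimentation, not recommendations:

```properties
# Transaction-level vertex cache; its default of 20000 matches the
# 'vertex cache with max size 20000' log line.
cache.tx-cache-size=50000
# Initial size of the transaction-level dirty vertex map
# ('dirty vertex map with initial size 32').
cache.tx-dirty-size=1024
# Database-level cache, shared across transactions.
cache.db-cache=true
cache.db-cache-size=0.25
```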
-Joe
Hi, thanks for your post. I followed the steps in the post, but I ran into a problem.
15:58:49,110 INFO SecurityManager:58 - Changing view acls to: rc
15:58:49,110 INFO SecurityManager:58 - Changing modify acls to: rc
15:58:49,110 INFO SecurityManager:58 - SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(rc); users with modify permissions: Set(rc)
15:58:49,111 INFO Client:58 - Submitting application 25 to ResourceManager
15:58:49,320 INFO YarnClientImpl:274 - Submitted application application_1500608983535_0025
15:58:49,321 INFO SchedulerExtensionServices:58 - Starting Yarn extension services with app application_1500608983535_0025 and attemptId None
15:58:50,325 INFO Client:58 - Application report for application_1500608983535_0025 (state: ACCEPTED)
15:58:50,326 INFO Client:58 -
    client token: N/A
    diagnostics: N/A
    ApplicationMaster host: N/A
    ApplicationMaster RPC port: -1
    queue: default
    start time: 1500883129115
    final status: UNDEFINED
    tracking URL: http://dl-rc-optd-ambari-master-v-test-2.host.dataengine.com:8088/proxy/application_1500608983535_0025/
    user: rc
15:58:51,330 INFO Client:58 - Application report for application_1500608983535_0025 (state: ACCEPTED)
15:58:52,333 INFO Client:58 - Application report for application_1500608983535_0025 (state: ACCEPTED)
15:58:53,335 INFO Client:58 - Application report for application_1500608983535_0025 (state: ACCEPTED)
15:58:54,337 INFO Client:58 - Application report for application_1500608983535_0025 (state: ACCEPTED)
15:58:55,340 INFO Client:58 - Application report for application_1500608983535_0025 (state: ACCEPTED)
15:58:56,343 INFO Client:58 - Application report for application_1500608983535_0025 (state: ACCEPTED)
15:58:56,802 INFO YarnSchedulerBackend$YarnSchedulerEndpoint:58 - ApplicationMaster registered as NettyRpcEndpointRef(null)
15:58:56,822 INFO YarnClientSchedulerBackend:58 - Add WebUI Filter. org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter, Map(PROXY_HOSTS -> dl-rc-optd-ambari-master-v-test-1.host.dataengine.com,dl-rc-optd-ambari-master-v-test-2.host.dataengine.com, PROXY_URI_BASES -> http://dl-rc-optd-ambari-master-v-test-1.host.dataengine.com:8088/proxy/application_1500608983535_0025,http://dl-rc-optd-ambari-master-v-test-2.host.dataengine.com:8088/proxy/application_1500608983535_0025), /proxy/application_1500608983535_0025
15:58:56,824 INFO JettyUtils:58 - Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
15:58:57,346 INFO Client:58 - Application report for application_1500608983535_0025 (state: RUNNING)
15:58:57,347 INFO Client:58 -
    client token: N/A
    diagnostics: N/A
    ApplicationMaster host: 10.200.48.154
    ApplicationMaster RPC port: 0
    queue: default
    start time: 1500883129115
    final status: UNDEFINED
    tracking URL: http://dl-rc-optd-ambari-master-v-test-2.host.dataengine.com:8088/proxy/application_1500608983535_0025/
    user: rc
15:58:57,348 INFO YarnClientSchedulerBackend:58 - Application application_1500608983535_0025 has started running.
15:58:57,358 INFO Utils:58 - Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 47514.
15:58:57,358 INFO NettyBlockTransferService:58 - Server created on 47514
15:58:57,360 INFO BlockManagerMaster:58 - Trying to register BlockManager
15:58:57,363 INFO BlockManagerMasterEndpoint:58 - Registering block manager 10.200.48.112:47514 with 2.4 GB RAM, BlockManagerId(driver, 10.200.48.112, 47514)
15:58:57,366 INFO BlockManagerMaster:58 - Registered BlockManager
15:58:57,585 INFO EventLoggingListener:58 - Logging events to hdfs:///spark-history/application_1500608983535_0025
15:59:07,177 WARN YarnSchedulerBackend$YarnSchedulerEndpoint:70 - Container marked as failed: container_e170_1500608983535_0025_01_000002 on host: dl-rc-optd-ambari-slave-v-test-1.host.dataengine.com. Exit status: 1. Diagnostics: Exception from container-launch.
Container id: container_e170_1500608983535_0025_01_000002
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:576)
    at org.apache.hadoop.util.Shell.run(Shell.java:487)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:753)
    at org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.launchContainer(LinuxContainerExecutor.java:371)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:303)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Shell output: main : command provided 1
main : run as user is rc
main : requested yarn user is rc
Container exited with a non-zero exit code 1
Display stack trace? [yN]
15:59:57,702 WARN TransportChannelHandler:79 - Exception in connection from 10.200.48.155/10.200.48.155:50921
java.io.IOException: Connection reset by peer
    at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
    at sun.nio.ch.IOUtil.read(IOUtil.java:192)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
    at io.netty.buffer.PooledUnsafeDirectByteBuf.setBytes(PooledUnsafeDirectByteBuf.java:313)
    at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:881)
    at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:242)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:119)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
    at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
    at java.lang.Thread.run(Thread.java:748)
15:59:57,704 ERROR TransportResponseHandler:132 - Still have 1 requests outstanding when connection from 10.200.48.155/10.200.48.155:50921 is closed
15:59:57,706 WARN NettyRpcEndpointRef:91 - Error sending message [message = RequestExecutors(0,0,Map())] in 1 attempts
java.io.IOException: Connection reset by peer
    at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
    at sun.nio.ch.IOUtil.read(IOUtil.java:192)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
    at io.netty.buffer.PooledUnsafeDirectByteBuf.setBytes(PooledUnsafeDirectByteBuf.java:313)
    at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:881)
    at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:242)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:119)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
    at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
    at java.lang.Thread.run(Thread.java:748)

I am confused by this. Could you please help me?
On Thursday, July 6, 2017 at 4:15:37 PM UTC+8, HadoopMarc wrote:
Readers wanting to run OLAP queries on a real spark-yarn cluster might want to check my recent post:
http://yaaics.blogspot.nl/2017/07/configuring-janusgraph-for-spark-yarn.html
Regards, Marc
You received this message because you are subscribed to the Google Groups "JanusGraph users list" group.