
Re: Not able to connect when 1 of 3 nodes is down in the Cassandra cluster

Jason Plurad <plu...@...>
 

This is more of a Cassandra question than a JanusGraph/Titan one. If you have two nodes in DC1 and the read/write consistency settings are LOCAL_QUORUM, you can't reach a local quorum in DC1 when one node is down: with a replication factor of 2, a local quorum is 2 replicas, so both DC1 nodes must be up.

You could try either LOCAL_ONE or QUORUM.

http://docs.datastax.com/en/cassandra/2.1/cassandra/dml/dml_config_consistency_c.html
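
For example, using the same builder from your snippet with only the consistency values changed (a minimal sketch; every key below is already in your configuration):

TitanGraph graph = TitanFactory.build().
        set("storage.backend", "cassandra").
        set("storage.hostname", "IP2").
        set("storage.cassandra.keyspace", "my_ks").
        set("storage.cassandra.read-consistency-level", "LOCAL_ONE").
        set("storage.cassandra.write-consistency-level", "LOCAL_ONE").
        open();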


On Sunday, July 23, 2017 at 9:14:12 AM UTC-4, Bharat Dighe wrote:
I am using Titan 1.0 and planning to move to Janus very soon.

I have the following keyspace:

CREATE KEYSPACE my_ks WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': '2', 'DC2': '1'}  AND durable_writes = true;

The current status of the cluster nodes is as follows; one of the nodes in DC1 is down.

|/ State=Normal/Leaving/Joining/Moving
--  Address  Load     Tokens  Owns  Host ID                               Rack
DN  IP1      2.8 MB   256     ?     2a5abdad-af65-48e7-a74c-d40f1f759460  rac2
UN  IP2      4.33 MB  256     ?     4897d661-24d3-4d30-b07a-00a8103635f6  rac1
Datacenter: Sunnyside_DC
========================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address  Load     Tokens  Owns  Host ID                               Rack
UN  IP3      5.24 MB  256     ?     f830e2a9-6eea-4617-88dd-d63e44beb115  rac1


Titan is able to connect to the node in DC2 but fails to connect to the UP node in DC1.

TitanGraph graph = TitanFactory.build().
        set("storage.backend", "cassandra").
        set("storage.hostname", "IP2").
        set("storage.port", 9160).
        set("storage.cassandra.keyspace", "my_ks").
        set("storage.read-only", false).
        set("query.force-index", false).
        set("storage.cassandra.astyanax.connection-pool-type", "ROUND_ROBIN").
        set("storage.cassandra.astyanax.node-discovery-type", "NONE").
        set("storage.cassandra.read-consistency-level", "LOCAL_QUORUM").
        set("storage.cassandra.write-consistency-level", "LOCAL_QUORUM").
        set("storage.cassandra.atomic-batch-mutate", false).
        open();

It gives the following exception:

22:11:19,655 ERROR CountingConnectionPoolMonitor:94 - com.netflix.astyanax.connectionpool.exceptions.TokenRangeOfflineException: TokenRangeOfflineException: [host=IP2:9160, latency=100(100), attempts=1]UnavailableException()
com.netflix.astyanax.connectionpool.exceptions.TokenRangeOfflineException: TokenRangeOfflineException: [host=IP2:9160, latency=100(100), attempts=1]UnavailableException()
 at com.netflix.astyanax.thrift.ThriftConverter.ToConnectionPoolException(ThriftConverter.java:165)
 at com.netflix.astyanax.thrift.AbstractOperationImpl.execute(AbstractOperationImpl.java:65)
 at com.netflix.astyanax.thrift.AbstractOperationImpl.execute(AbstractOperationImpl.java:28)
 at com.netflix.astyanax.thrift.ThriftSyncConnectionFactoryImpl$ThriftConnection.execute(ThriftSyncConnectionFactoryImpl.java:153)
 at com.netflix.astyanax.connectionpool.impl.AbstractExecuteWithFailoverImpl.tryOperation(AbstractExecuteWithFailoverImpl.java:119)
 at com.netflix.astyanax.connectionpool.impl.AbstractHostPartitionConnectionPool.executeWithFailover(AbstractHostPartitionConnectionPool.java:352)
 at com.netflix.astyanax.thrift.ThriftColumnFamilyQueryImpl$4.execute(ThriftColumnFamilyQueryImpl.java:538)
 at com.thinkaurelius.titan.diskstorage.cassandra.astyanax.AstyanaxKeyColumnValueStore.getNamesSlice(AstyanaxKeyColumnValueStore.java:112)
 at com.thinkaurelius.titan.diskstorage.cassandra.astyanax.AstyanaxKeyColumnValueStore.getNamesSlice(AstyanaxKeyColumnValueStore.java:78)
 at com.thinkaurelius.titan.diskstorage.cassandra.astyanax.AstyanaxKeyColumnValueStore.getSlice(AstyanaxKeyColumnValueStore.java:67)
 at com.thinkaurelius.titan.diskstorage.configuration.backend.KCVSConfiguration$1.call(KCVSConfiguration.java:91)
 at com.thinkaurelius.titan.diskstorage.configuration.backend.KCVSConfiguration$1.call(KCVSConfiguration.java:1)
 at com.thinkaurelius.titan.diskstorage.util.BackendOperation.execute(BackendOperation.java:133)
 at com.thinkaurelius.titan.diskstorage.util.BackendOperation$1.call(BackendOperation.java:147)
 at com.thinkaurelius.titan.diskstorage.util.BackendOperation.executeDirect(BackendOperation.java:56)
 at com.thinkaurelius.titan.diskstorage.util.BackendOperation.execute(BackendOperation.java:42)
 at com.thinkaurelius.titan.diskstorage.util.BackendOperation.execute(BackendOperation.java:144)
 at com.thinkaurelius.titan.diskstorage.configuration.backend.KCVSConfiguration.get(KCVSConfiguration.java:88)
 at com.thinkaurelius.titan.diskstorage.configuration.BasicConfiguration.isFrozen(BasicConfiguration.java:93)
 at com.thinkaurelius.titan.graphdb.configuration.GraphDatabaseConfiguration.<init>(GraphDatabaseConfiguration.java:1338)
 at com.thinkaurelius.titan.core.TitanFactory.open(TitanFactory.java:94)
 at com.thinkaurelius.titan.core.TitanFactory.open(TitanFactory.java:84)
 at com.thinkaurelius.titan.core.TitanFactory$Builder.open(TitanFactory.java:139)
 at TestGraph.main(TestGraph.java:20)
Caused by: UnavailableException()
 at org.apache.cassandra.thrift.Cassandra$multiget_slice_result$multiget_slice_resultStandardScheme.read(Cassandra.java:14687)
 at org.apache.cassandra.thrift.Cassandra$multiget_slice_result$multiget_slice_resultStandardScheme.read(Cassandra.java:14633)
 at org.apache.cassandra.thrift.Cassandra$multiget_slice_result.read(Cassandra.java:14559)
 at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
 at org.apache.cassandra.thrift.Cassandra$Client.recv_multiget_slice(Cassandra.java:741)
 at org.apache.cassandra.thrift.Cassandra$Client.multiget_slice(Cassandra.java:725)
 at com.netflix.astyanax.thrift.ThriftColumnFamilyQueryImpl$4$1.internalExecute(ThriftColumnFamilyQueryImpl.java:544)
 at com.netflix.astyanax.thrift.ThriftColumnFamilyQueryImpl$4$1.internalExecute(ThriftColumnFamilyQueryImpl.java:541)
 at com.netflix.astyanax.thrift.AbstractOperationImpl.execute(AbstractOperationImpl.java:60)
 ... 22 more


Please help me to resolve this.

Thanks
Bharat


Re: Make HttpChannelizer enabled while using external cassandra in Titan(1.0.0)

Jason Plurad <plu...@...>
 

Hi Manoj,

There are directions for JanusGraph in the documentation here: http://docs.janusgraph.org/latest/server.html#_janusgraph_server_as_a_rest_style_endpoint

If you're using the default gremlin-server.yaml, which uses the janusgraph-cassandra-es.properties, you just need to change the channelizer:

channelizer: org.apache.tinkerpop.gremlin.server.channel.HttpChannelizer
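
After restarting the server, you can check it with a plain HTTP request (a quick sketch, assuming the server listens on the default port 8182):

curl -X POST -d '{"gremlin": "g.V().count()"}' http://localhost:8182

The response comes back as a JSON result set.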

-- Jason


On Monday, July 24, 2017 at 9:16:47 AM UTC-4, manoj nainwal wrote:
Hi all,

Could you please tell me how I can enable the HttpChannelizer while using external Cassandra?

Thank you,
Manoj


Re: [BLOG] Configuring JanusGraph for spark-yarn

spirit...@...
 

Hi, thanks for your post. I followed it, but I ran into a problem:
15:58:49,110  INFO SecurityManager:58 - Changing view acls to: rc
15:58:49,110  INFO SecurityManager:58 - Changing modify acls to: rc
15:58:49,110  INFO SecurityManager:58 - SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(rc); users with modify permissions: Set(rc)
15:58:49,111  INFO Client:58 - Submitting application 25 to ResourceManager
15:58:49,320  INFO YarnClientImpl:274 - Submitted application application_1500608983535_0025
15:58:49,321  INFO SchedulerExtensionServices:58 - Starting Yarn extension services with app application_1500608983535_0025 and attemptId None
15:58:50,325  INFO Client:58 - Application report for application_1500608983535_0025 (state: ACCEPTED)
15:58:50,326  INFO Client:58 -
client token: N/A
diagnostics: N/A
ApplicationMaster host: N/A
ApplicationMaster RPC port: -1
queue: default
start time: 1500883129115
final status: UNDEFINED
tracking URL: http://dl-rc-optd-ambari-master-v-test-2.host.dataengine.com:8088/proxy/application_1500608983535_0025/
user: rc
15:58:51,330  INFO Client:58 - Application report for application_1500608983535_0025 (state: ACCEPTED)
15:58:52,333  INFO Client:58 - Application report for application_1500608983535_0025 (state: ACCEPTED)
15:58:53,335  INFO Client:58 - Application report for application_1500608983535_0025 (state: ACCEPTED)
15:58:54,337  INFO Client:58 - Application report for application_1500608983535_0025 (state: ACCEPTED)
15:58:55,340  INFO Client:58 - Application report for application_1500608983535_0025 (state: ACCEPTED)
15:58:56,343  INFO Client:58 - Application report for application_1500608983535_0025 (state: ACCEPTED)
15:58:56,802  INFO YarnSchedulerBackend$YarnSchedulerEndpoint:58 - ApplicationMaster registered as NettyRpcEndpointRef(null)
15:58:56,822  INFO YarnClientSchedulerBackend:58 - Add WebUI Filter. org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter, Map(PROXY_HOSTS -> dl-rc-optd-ambari-master-v-test-1.host.dataengine.com,dl-rc-optd-ambari-master-v-test-2.host.dataengine.com, PROXY_URI_BASES -> http://dl-rc-optd-ambari-master-v-test-1.host.dataengine.com:8088/proxy/application_1500608983535_0025,http://dl-rc-optd-ambari-master-v-test-2.host.dataengine.com:8088/proxy/application_1500608983535_0025), /proxy/application_1500608983535_0025
15:58:56,824  INFO JettyUtils:58 - Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
15:58:57,346  INFO Client:58 - Application report for application_1500608983535_0025 (state: RUNNING)
15:58:57,347  INFO Client:58 -
client token: N/A
diagnostics: N/A
ApplicationMaster host: 10.200.48.154
ApplicationMaster RPC port: 0
queue: default
start time: 1500883129115
final status: UNDEFINED
tracking URL: http://dl-rc-optd-ambari-master-v-test-2.host.dataengine.com:8088/proxy/application_1500608983535_0025/
user: rc
15:58:57,348  INFO YarnClientSchedulerBackend:58 - Application application_1500608983535_0025 has started running.
15:58:57,358  INFO Utils:58 - Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 47514.
15:58:57,358  INFO NettyBlockTransferService:58 - Server created on 47514
15:58:57,360  INFO BlockManagerMaster:58 - Trying to register BlockManager
15:58:57,363  INFO BlockManagerMasterEndpoint:58 - Registering block manager 10.200.48.112:47514 with 2.4 GB RAM, BlockManagerId(driver, 10.200.48.112, 47514)
15:58:57,366  INFO BlockManagerMaster:58 - Registered BlockManager
15:58:57,585  INFO EventLoggingListener:58 - Logging events to hdfs:///spark-history/application_1500608983535_0025
15:59:07,177  WARN YarnSchedulerBackend$YarnSchedulerEndpoint:70 - Container marked as failed: container_e170_1500608983535_0025_01_000002 on host: dl-rc-optd-ambari-slave-v-test-1.host.dataengine.com. Exit status: 1. Diagnostics: Exception from container-launch.
Container id: container_e170_1500608983535_0025_01_000002
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:576)
at org.apache.hadoop.util.Shell.run(Shell.java:487)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:753)
at org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.launchContainer(LinuxContainerExecutor.java:371)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:303)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

Shell output: main : command provided 1
main : run as user is rc
main : requested yarn user is rc


Container exited with a non-zero exit code 1
Display stack trace? [yN]
15:59:57,702  WARN TransportChannelHandler:79 - Exception in connection from 10.200.48.155/10.200.48.155:50921
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:192)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
at io.netty.buffer.PooledUnsafeDirectByteBuf.setBytes(PooledUnsafeDirectByteBuf.java:313)
at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:881)
at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:242)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:119)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
at java.lang.Thread.run(Thread.java:748)
15:59:57,704 ERROR TransportResponseHandler:132 - Still have 1 requests outstanding when connection from 10.200.48.155/10.200.48.155:50921 is closed
15:59:57,706  WARN NettyRpcEndpointRef:91 - Error sending message [message = RequestExecutors(0,0,Map())] in 1 attempts
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:192)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
at io.netty.buffer.PooledUnsafeDirectByteBuf.setBytes(PooledUnsafeDirectByteBuf.java:313)
at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:881)
at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:242)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:119)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
at java.lang.Thread.run(Thread.java:748)

I am confused about that. Could you please help me?



On Thursday, July 6, 2017 at 4:15:37 PM UTC+8, HadoopMarc wrote:


Readers wanting to run OLAP queries on a real spark-yarn cluster might want to check my recent post:

http://yaaics.blogspot.nl/2017/07/configuring-janusgraph-for-spark-yarn.html

Regards,  Marc


Make HttpChannelizer enabled while using external cassandra in Titan(1.0.0)

manoj92...@...
 

Hi all,

Could you please tell me how I can enable the HttpChannelizer while using external Cassandra?

Thank you,
Manoj


Re: how can i remove the index

李平 <lipin...@...>
 

BaseConfiguration baseConfiguration = new BaseConfiguration();
baseConfiguration.setProperty("storage.backend", "hbase");
baseConfiguration.setProperty("storage.hostname", "192.168.1.108");
baseConfiguration.setProperty("gremlin.graph", "org.janusgraph.core.JanusGraphFactory");
JanusGraph janusGraph = JanusGraphFactory.open(baseConfiguration);
JanusGraphManagement janusGraphManagement = janusGraph.openManagement();
JanusGraphIndex phoneIndex = janusGraphManagement.getGraphIndex("phoneIndex");
PropertyKey phone = janusGraphManagement.getPropertyKey("phone");
SchemaStatus indexStatus = phoneIndex.getIndexStatus(phone);
System.out.println(indexStatus);
if (indexStatus == INSTALLED) {
    ManagementSystem.awaitGraphIndexStatus(janusGraph, "phoneIndex").call();
    janusGraphManagement.updateIndex(phoneIndex, SchemaAction.REGISTER_INDEX).get();
}
janusGraphManagement.commit();
janusGraph.tx().commit();


It cannot transition to the REGISTERED status; the exception is a timeout:
10:54:19.148 [main] DEBUG org.janusgraph.graphdb.database.management.GraphIndexStatusWatcher - Key phone has status INSTALLED
10:54:19.148 [main] INFO org.janusgraph.graphdb.database.management.GraphIndexStatusWatcher - Some key(s) on index phoneIndex do not currently have status REGISTERED: phone=INSTALLED
10:54:19.148 [main] INFO org.janusgraph.graphdb.database.management.GraphIndexStatusWatcher - Timed out (PT1M) while waiting for index phoneIndex to converge on status REGISTERED


On Saturday, July 22, 2017 at 12:15:20 AM UTC+8, David Pitera wrote:

Your propertyKey `phone` on your index `phoneIndex` is in the `INSTALLED` state, so you need to `REGISTER` it and wait for it to become registered before you can attempt to `ENABLE` it. After it is enabled, you can disable and then remove it.

Some code that should help, because it goes through the whole installed/registered/enabled phase, is:




On Thu, Jul 20, 2017 at 11:43 PM, 李平 <li...@...> wrote:
public class GraphTest {

public static void main(String[] args) throws Exception {

BaseConfiguration baseConfiguration = new BaseConfiguration();
baseConfiguration.setProperty("storage.backend", "hbase");
baseConfiguration.setProperty("storage.hostname", "192.168.1.108");
baseConfiguration.setProperty("gremlin.graph", "org.janusgraph.core.JanusGraphFactory");
JanusGraph janusGraph = JanusGraphFactory.open(baseConfiguration);
GraphTraversalSource g = janusGraph.traversal();
g.tx().rollback();
JanusGraphManagement janusGraphManagement = janusGraph.openManagement();
JanusGraphIndex phoneIndex = janusGraphManagement.getGraphIndex("phoneIndex");
janusGraphManagement.updateIndex(phoneIndex, SchemaAction.DISABLE_INDEX).get();
janusGraphManagement.commit();
g.tx().commit();
ManagementSystem.awaitGraphIndexStatus(janusGraph,"phoneIndex")
.status(SchemaStatus.DISABLED)
.timeout(10, ChronoUnit.MINUTES)
.call();
janusGraphManagement.updateIndex(phoneIndex, SchemaAction.REMOVE_INDEX).get();
janusGraphManagement.commit();
janusGraph.tx().commit();
System.out.println("---------------------remove index sucess...");

the log is :Some key(s) on index phoneIndex do not currently have status DISABLED: phone=INSTALLED




Not able to connect when 1 of 3 nodes is down in the Cassandra cluster

Bharat Dighe <bdi...@...>
 

I am using Titan 1.0 and planning to move to Janus very soon.

I have the following keyspace:

CREATE KEYSPACE my_ks WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': '2', 'DC2': '1'}  AND durable_writes = true;

The current status of the cluster nodes is as follows; one of the nodes in DC1 is down.

|/ State=Normal/Leaving/Joining/Moving
--  Address  Load     Tokens  Owns  Host ID                               Rack
DN  IP1      2.8 MB   256     ?     2a5abdad-af65-48e7-a74c-d40f1f759460  rac2
UN  IP2      4.33 MB  256     ?     4897d661-24d3-4d30-b07a-00a8103635f6  rac1
Datacenter: Sunnyside_DC
========================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address  Load     Tokens  Owns  Host ID                               Rack
UN  IP3      5.24 MB  256     ?     f830e2a9-6eea-4617-88dd-d63e44beb115  rac1


Titan is able to connect to the node in DC2 but fails to connect to the UP node in DC1.

TitanGraph graph = TitanFactory.build().
        set("storage.backend", "cassandra").
        set("storage.hostname", "IP2").
        set("storage.port", 9160).
        set("storage.cassandra.keyspace", "my_ks").
        set("storage.read-only", false).
        set("query.force-index", false).
        set("storage.cassandra.astyanax.connection-pool-type", "ROUND_ROBIN").
        set("storage.cassandra.astyanax.node-discovery-type", "NONE").
        set("storage.cassandra.read-consistency-level", "LOCAL_QUORUM").
        set("storage.cassandra.write-consistency-level", "LOCAL_QUORUM").
        set("storage.cassandra.atomic-batch-mutate", false).
        open();

It gives the following exception:

22:11:19,655 ERROR CountingConnectionPoolMonitor:94 - com.netflix.astyanax.connectionpool.exceptions.TokenRangeOfflineException: TokenRangeOfflineException: [host=IP2:9160, latency=100(100), attempts=1]UnavailableException()
com.netflix.astyanax.connectionpool.exceptions.TokenRangeOfflineException: TokenRangeOfflineException: [host=IP2:9160, latency=100(100), attempts=1]UnavailableException()
 at com.netflix.astyanax.thrift.ThriftConverter.ToConnectionPoolException(ThriftConverter.java:165)
 at com.netflix.astyanax.thrift.AbstractOperationImpl.execute(AbstractOperationImpl.java:65)
 at com.netflix.astyanax.thrift.AbstractOperationImpl.execute(AbstractOperationImpl.java:28)
 at com.netflix.astyanax.thrift.ThriftSyncConnectionFactoryImpl$ThriftConnection.execute(ThriftSyncConnectionFactoryImpl.java:153)
 at com.netflix.astyanax.connectionpool.impl.AbstractExecuteWithFailoverImpl.tryOperation(AbstractExecuteWithFailoverImpl.java:119)
 at com.netflix.astyanax.connectionpool.impl.AbstractHostPartitionConnectionPool.executeWithFailover(AbstractHostPartitionConnectionPool.java:352)
 at com.netflix.astyanax.thrift.ThriftColumnFamilyQueryImpl$4.execute(ThriftColumnFamilyQueryImpl.java:538)
 at com.thinkaurelius.titan.diskstorage.cassandra.astyanax.AstyanaxKeyColumnValueStore.getNamesSlice(AstyanaxKeyColumnValueStore.java:112)
 at com.thinkaurelius.titan.diskstorage.cassandra.astyanax.AstyanaxKeyColumnValueStore.getNamesSlice(AstyanaxKeyColumnValueStore.java:78)
 at com.thinkaurelius.titan.diskstorage.cassandra.astyanax.AstyanaxKeyColumnValueStore.getSlice(AstyanaxKeyColumnValueStore.java:67)
 at com.thinkaurelius.titan.diskstorage.configuration.backend.KCVSConfiguration$1.call(KCVSConfiguration.java:91)
 at com.thinkaurelius.titan.diskstorage.configuration.backend.KCVSConfiguration$1.call(KCVSConfiguration.java:1)
 at com.thinkaurelius.titan.diskstorage.util.BackendOperation.execute(BackendOperation.java:133)
 at com.thinkaurelius.titan.diskstorage.util.BackendOperation$1.call(BackendOperation.java:147)
 at com.thinkaurelius.titan.diskstorage.util.BackendOperation.executeDirect(BackendOperation.java:56)
 at com.thinkaurelius.titan.diskstorage.util.BackendOperation.execute(BackendOperation.java:42)
 at com.thinkaurelius.titan.diskstorage.util.BackendOperation.execute(BackendOperation.java:144)
 at com.thinkaurelius.titan.diskstorage.configuration.backend.KCVSConfiguration.get(KCVSConfiguration.java:88)
 at com.thinkaurelius.titan.diskstorage.configuration.BasicConfiguration.isFrozen(BasicConfiguration.java:93)
 at com.thinkaurelius.titan.graphdb.configuration.GraphDatabaseConfiguration.<init>(GraphDatabaseConfiguration.java:1338)
 at com.thinkaurelius.titan.core.TitanFactory.open(TitanFactory.java:94)
 at com.thinkaurelius.titan.core.TitanFactory.open(TitanFactory.java:84)
 at com.thinkaurelius.titan.core.TitanFactory$Builder.open(TitanFactory.java:139)
 at TestGraph.main(TestGraph.java:20)
Caused by: UnavailableException()
 at org.apache.cassandra.thrift.Cassandra$multiget_slice_result$multiget_slice_resultStandardScheme.read(Cassandra.java:14687)
 at org.apache.cassandra.thrift.Cassandra$multiget_slice_result$multiget_slice_resultStandardScheme.read(Cassandra.java:14633)
 at org.apache.cassandra.thrift.Cassandra$multiget_slice_result.read(Cassandra.java:14559)
 at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
 at org.apache.cassandra.thrift.Cassandra$Client.recv_multiget_slice(Cassandra.java:741)
 at org.apache.cassandra.thrift.Cassandra$Client.multiget_slice(Cassandra.java:725)
 at com.netflix.astyanax.thrift.ThriftColumnFamilyQueryImpl$4$1.internalExecute(ThriftColumnFamilyQueryImpl.java:544)
 at com.netflix.astyanax.thrift.ThriftColumnFamilyQueryImpl$4$1.internalExecute(ThriftColumnFamilyQueryImpl.java:541)
 at com.netflix.astyanax.thrift.AbstractOperationImpl.execute(AbstractOperationImpl.java:60)
 ... 22 more


Please help me to resolve this.

Thanks
Bharat


Re: Merging two graphs.

Jason Plurad <plu...@...>
 

You could use some of the methods from TinkerPop's ElementHelper to help copy over properties.
http://tinkerpop.apache.org/javadocs/3.2.3/full/org/apache/tinkerpop/gremlin/structure/util/ElementHelper.html

The element ids will be different between your 2 JanusGraph instances, so as you've noted, you'll either have to track the ids or have sufficient indexes configured to do lookups.
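
A minimal sketch of that approach, assuming both graphs are open in the same JVM (the existence check for vertices already present in the target is left as a comment, since only your application knows how to do that lookup; note ElementHelper.propertyValueMap only captures single-valued properties, so multi-properties would need VertexProperty iteration):

import java.util.HashMap;
import java.util.Map;

import org.apache.tinkerpop.gremlin.structure.Edge;
import org.apache.tinkerpop.gremlin.structure.Graph;
import org.apache.tinkerpop.gremlin.structure.T;
import org.apache.tinkerpop.gremlin.structure.Vertex;
import org.apache.tinkerpop.gremlin.structure.util.ElementHelper;

public class GraphMerger {
    public static void merge(Graph source, Graph target) {
        // Source vertex id -> copied target vertex, so edges can be wired up in pass 2.
        Map<Object, Vertex> idMap = new HashMap<>();

        // Pass 1: copy vertices and their properties.
        source.vertices().forEachRemaining(v -> {
            // If v is known to exist in the target already, look it up
            // (e.g. via an indexed key) and put that vertex in idMap instead.
            Vertex copy = target.addVertex(T.label, v.label());
            ElementHelper.propertyValueMap(v).forEach((k, val) -> copy.property(k, val));
            idMap.put(v.id(), copy);
        });

        // Pass 2: copy edges between the mapped vertices.
        source.edges().forEachRemaining(e -> {
            Vertex out = idMap.get(e.outVertex().id());
            Vertex in = idMap.get(e.inVertex().id());
            Edge copy = out.addEdge(e.label(), in);
            ElementHelper.propertyValueMap(e).forEach((k, val) -> copy.property(k, val));
        });

        target.tx().commit();
    }
}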


On Wednesday, July 19, 2017 at 10:37:28 AM UTC-4, Gwiz wrote:
Hello,

I have an in-memory Janusgraph instance and I would like to merge that with my Cassandra/ES based graph. What is the best (efficient) way to do this?

When I merge, I don't need to insert the vertices that are already in my Cassandra/ES based graph. I know which vertices in the in-memory graph exists in the Cassandra/ES based graph. 

I don't see any Java API to take a Vertex (or list of Vertices) from one Janusgraph instance and add them to another. All I see is the following:
  • addVertex(String)
  • addVertex(Object... objects)
My original plan is to insert the new vertices from in-memory graph into my Cassandra/ES based graph, get the IDs back and use them to insert Edges.

I appreciate any help.

Thanks.


Re: when release 0.2.0?

Jason Plurad <plu...@...>
 

I think it's getting pretty close. I'd guess July is a stretch, but August should be possible.

Several great things are already in place: TinkerPop 3.2.5 support, Cassandra CQL support, ES 5.4.2 support, Solr 6.6.0 support.
* There are a few more PRs in the queue that need to get merged, including this one on OLAP compatibility with Cassandra 3.0+.
* There is also some discussion going on regarding the Cassandra source code tree organization, which also needs to be completed.
* A recently identified migration issue from Titan must be fixed.

I could be missing others. Here's how you and anybody else in the community can help:
* Help triage the issues. Try to reproduce them. Add comments. If there's something critical that's needed for 0.2, let it be known.
* Help review code in the pull requests.
* Test the master branch in your test environments. It already has cassandra-cql and ES 5.x support in place, so you can help us make sure it works especially for your use cases.


On Thursday, July 20, 2017 at 10:27:59 PM UTC-4, Ranger Tsao wrote:
I want to use JanusGraph in production, but I need two features: cassandra-cql and ES 5.x.


Re: how can i remove the index

Chin Huang <chinhu...@...>
 

We were in the same situation. The problem is that the index cannot be manually moved out of the INSTALLED state. We had to recreate the index and make sure there were no open transactions during mgmt.commit().

Please also check this thread, which describes a similar issue with an index stuck in the INSTALLED state.

https://mail.google.com/mail/u/0/#search/installed/15c7ee77373b6ea8

On Fri, Jul 21, 2017 at 9:15 AM, David Pitera <piter...@...> wrote:
Your propertyKey `phone` on your index `phoneIndex` is in the `INSTALLED` state, so you need to `REGISTER` it and wait for it to become registered before you can attempt to `ENABLE` it. After it is enabled, you can disable and then remove it.

Some code that should help, because it goes through the whole installed/registered/enabled phase, is:




On Thu, Jul 20, 2017 at 11:43 PM, 李平 <lipin...@...> wrote:
public class GraphTest {

public static void main(String[] args) throws Exception {

BaseConfiguration baseConfiguration = new BaseConfiguration();
baseConfiguration.setProperty("storage.backend", "hbase");
baseConfiguration.setProperty("storage.hostname", "192.168.1.108");
baseConfiguration.setProperty("gremlin.graph", "org.janusgraph.core.JanusGraphFactory");
JanusGraph janusGraph = JanusGraphFactory.open(baseConfiguration);
GraphTraversalSource g = janusGraph.traversal();
g.tx().rollback();
JanusGraphManagement janusGraphManagement = janusGraph.openManagement();
JanusGraphIndex phoneIndex = janusGraphManagement.getGraphIndex("phoneIndex");
janusGraphManagement.updateIndex(phoneIndex, SchemaAction.DISABLE_INDEX).get();
janusGraphManagement.commit();
g.tx().commit();
ManagementSystem.awaitGraphIndexStatus(janusGraph,"phoneIndex")
.status(SchemaStatus.DISABLED)
.timeout(10, ChronoUnit.MINUTES)
.call();
janusGraphManagement.updateIndex(phoneIndex, SchemaAction.REMOVE_INDEX).get();
janusGraphManagement.commit();
janusGraph.tx().commit();
System.out.println("---------------------remove index sucess...");

the log is :Some key(s) on index phoneIndex do not currently have status DISABLED: phone=INSTALLED




Re: how can i remove the index

David Pitera <piter...@...>
 

Your propertyKey `phone` on your index `phoneIndex` is in the `INSTALLED` state, so you need to `REGISTER` it and wait for it to become registered before you can attempt to `ENABLE` it. After it is enabled, you can disable and then remove it.

Some code that should help, because it goes through the whole installed/registered/enabled phase, is:
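
A minimal sketch of that sequence, using only the management calls already shown in this thread:

JanusGraphManagement mgmt = janusGraph.openManagement();
JanusGraphIndex index = mgmt.getGraphIndex("phoneIndex");

// 1. Move the index from INSTALLED to REGISTERED.
mgmt.updateIndex(index, SchemaAction.REGISTER_INDEX).get();
mgmt.commit();
ManagementSystem.awaitGraphIndexStatus(janusGraph, "phoneIndex")
        .status(SchemaStatus.REGISTERED)
        .timeout(10, ChronoUnit.MINUTES)
        .call();

// 2. Enable the index. Re-fetch it from a fresh management transaction,
//    because the old handle is stale after commit().
mgmt = janusGraph.openManagement();
mgmt.updateIndex(mgmt.getGraphIndex("phoneIndex"), SchemaAction.ENABLE_INDEX).get();
mgmt.commit();
ManagementSystem.awaitGraphIndexStatus(janusGraph, "phoneIndex")
        .status(SchemaStatus.ENABLED)
        .call();

// 3. Once ENABLED, the DISABLE_INDEX / REMOVE_INDEX sequence from your
//    original code should succeed.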




On Thu, Jul 20, 2017 at 11:43 PM, 李平 <lipin...@...> wrote:
public class GraphTest {

public static void main(String[] args) throws Exception {

BaseConfiguration baseConfiguration = new BaseConfiguration();
baseConfiguration.setProperty("storage.backend", "hbase");
baseConfiguration.setProperty("storage.hostname", "192.168.1.108");
baseConfiguration.setProperty("gremlin.graph", "org.janusgraph.core.JanusGraphFactory");
JanusGraph janusGraph = JanusGraphFactory.open(baseConfiguration);
GraphTraversalSource g = janusGraph.traversal();
g.tx().rollback();
JanusGraphManagement janusGraphManagement = janusGraph.openManagement();
JanusGraphIndex phoneIndex = janusGraphManagement.getGraphIndex("phoneIndex");
janusGraphManagement.updateIndex(phoneIndex, SchemaAction.DISABLE_INDEX).get();
janusGraphManagement.commit();
g.tx().commit();
ManagementSystem.awaitGraphIndexStatus(janusGraph,"phoneIndex")
.status(SchemaStatus.DISABLED)
.timeout(10, ChronoUnit.MINUTES)
.call();
janusGraphManagement.updateIndex(phoneIndex, SchemaAction.REMOVE_INDEX).get();
janusGraphManagement.commit();
janusGraph.tx().commit();
System.out.println("---------------------remove index sucess...");

the log is :Some key(s) on index phoneIndex do not currently have status DISABLED: phone=INSTALLED




how can i remove the index

李平 <lipin...@...>
 

public class GraphTest {

public static void main(String[] args) throws Exception {

BaseConfiguration baseConfiguration = new BaseConfiguration();
baseConfiguration.setProperty("storage.backend", "hbase");
baseConfiguration.setProperty("storage.hostname", "192.168.1.108");
baseConfiguration.setProperty("gremlin.graph", "org.janusgraph.core.JanusGraphFactory");
JanusGraph janusGraph = JanusGraphFactory.open(baseConfiguration);
GraphTraversalSource g = janusGraph.traversal();
g.tx().rollback();
JanusGraphManagement janusGraphManagement = janusGraph.openManagement();
JanusGraphIndex phoneIndex = janusGraphManagement.getGraphIndex("phoneIndex");
janusGraphManagement.updateIndex(phoneIndex, SchemaAction.DISABLE_INDEX).get();
janusGraphManagement.commit();
g.tx().commit();
ManagementSystem.awaitGraphIndexStatus(janusGraph,"phoneIndex")
.status(SchemaStatus.DISABLED)
.timeout(10, ChronoUnit.MINUTES)
.call();
janusGraphManagement.updateIndex(phoneIndex, SchemaAction.REMOVE_INDEX).get();
janusGraphManagement.commit();
janusGraph.tx().commit();
System.out.println("---------------------remove index sucess...");

the log is :Some key(s) on index phoneIndex do not currently have status DISABLED: phone=INSTALLED



when release 0.2.0?

Ranger Tsao <cao....@...>
 

I want to use JanusGraph in production, but I need two features: cassandra-cql and ES 5.x.


Re: Geoshape property in remote gremlin query, GraphSON

Robert Dale <rob...@...>
 

Using cluster/client will work with the v1 serializer.

gremlin-server.yaml:

serializers:
- { className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerGremlinV1d0, config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry] }}


conf/remote.yaml:
hosts: [localhost]
port: 8182
serializer: { className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerGremlinV1d0, config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry] } }


connect:
gremlin> cluster = Cluster.open('conf/remote.yaml')
==>localhost/127.0.0.1:8182
gremlin> client = cluster.connect();
==>org.apache.tinkerpop.gremlin.driver.Client$ClusteredClient@434514d8
gremlin> client.submit("g.addV().property('geo', Geoshape.point(0.0,1.1))")
==>result{object={id=4152, label=vertex, type=vertex, properties={geo=[{id=sn-37c-sl, value=POINT (1.1 0)}]}} class=java.util.HashMap}
gremlin> client.submit("g.V().values('geo')").all().get().get(0).getObject()
==>POINT (1.1 0)
gremlin> client.submit("g.V().values('geo')").all().get().get(0).getObject().getClass()
==>class org.janusgraph.core.attribute.Geoshape



You won't be able to use withRemote() with the V1 serializer.



On Wednesday, July 19, 2017 at 3:16:37 PM UTC-4, Robert Dale wrote:
It seems Geoshape GraphSON support is hardcoded to v1, although I couldn't get it to work with that either. If you have to use GraphSON instead of Gryo, then you could check out master, apply this patch, and rebuild. I created an issue to support multiple versions of serializers: https://github.com/JanusGraph/janusgraph/issues/420

diff --git a/janusgraph-core/src/main/java/org/janusgraph/graphdb/tinkerpop/io/graphson/JanusGraphSONModule.java b/janusgraph-core/src/main/java/org/janusgraph/graphdb/tinkerpop/io/graphson/JanusGraphSONModule.java
index 6ef907b..8168309 100644
--- a/janusgraph-core/src/main/java/org/janusgraph/graphdb/tinkerpop/io/graphson/JanusGraphSONModule.java
+++ b/janusgraph-core/src/main/java/org/janusgraph/graphdb/tinkerpop/io/graphson/JanusGraphSONModule.java
@@ -50,10 +50,10 @@ public class JanusGraphSONModule extends TinkerPopJacksonModule {
     private JanusGraphSONModule() {
         super("janusgraph");
         addSerializer(RelationIdentifier.class, new RelationIdentifierSerializer());
-        addSerializer(Geoshape.class, new Geoshape.GeoshapeGsonSerializerV1d0());
+        addSerializer(Geoshape.class, new Geoshape.GeoshapeGsonSerializerV2d0());
 
         addDeserializer(RelationIdentifier.class, new RelationIdentifierDeserializer());
-        addDeserializer(Geoshape.class, new Geoshape.GeoshapeGsonDeserializerV1d0());
+        addDeserializer(Geoshape.class, new Geoshape.GeoshapeGsonDeserializerV2d0());
     }
 
     private static final JanusGraphSONModule INSTANCE = new JanusGraphSONModule();



On Tuesday, July 18, 2017 at 5:47:50 PM UTC-4, Conrad Rosenbrock wrote:
I am trying to assign a value to a property with the native Geoshape type. I have it serialized into JSON as follows (where g is aliased to the traversal on gremlin server):

{"@value": {"type": "point", "coordinates": [{"@value": 1.1, "@type": "g:Double"}, {"@value": 2.2, "@type": "g:Double"}]}, "@type": "g:Geoshape"}

In the gremlin console, I can easily type 

Geoshape.point(1.1, 2.2)

and it works perfectly. I am sure that it is something quite simple. Here is the error:

Request [PooledUnsafeDirectByteBuf(ridx: 653, widx: 653, cap: 687)] could not be deserialized by org.apache.tinkerpop.gremlin.driver.ser.AbstractGraphSONMessageSerializerV2d0.

For reference, I do have the following serializer in the gremlin server config:

{ className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerGremlinV2d0, config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry] }}

which should direct gremlin server to the relevant deserializer in Janus.

Thanks!


Re: How Can I make a statistics,i.e:how many vertexes or edges?

spirit...@...
 

I ran into a new problem when I configured Gremlin and Spark-on-YARN according to the post. Am I missing dependencies, or are there version conflicts?

10:47:47,199  INFO KryoShimServiceLoader:117 - Set KryoShimService provider to org.apache.tinkerpop.gremlin.hadoop.structure.io.HadoopPoolShimService@4b31a708 (class org.apache.tinkerpop.gremlin.hadoop.structure.io.HadoopPoolShimService) because its priority value (0) is the highest available
10:47:47,199  INFO KryoShimServiceLoader:123 - Configuring KryoShimService provider org.apache.tinkerpop.gremlin.hadoop.structure.io.HadoopPoolShimService@4b31a708 with user-provided configuration
10:47:51,447  INFO SparkContext:58 - Running Spark version 1.6.1
10:47:51,495  INFO SecurityManager:58 - Changing view acls to: rc
10:47:51,496  INFO SecurityManager:58 - Changing modify acls to: rc
10:47:51,496  INFO SecurityManager:58 - SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(rc); users with modify permissions: Set(rc)
10:47:51,855  INFO Utils:58 - Successfully started service 'sparkDriver' on port 41967.
10:47:52,450  INFO Slf4jLogger:80 - Slf4jLogger started
10:47:52,504  INFO Remoting:74 - Starting remoting
10:47:52,666  INFO Remoting:74 - Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@...:50605]
10:47:52,673  INFO Utils:58 - Successfully started service 'sparkDriverActorSystem' on port 50605.
10:47:53,428  INFO SparkEnv:58 - Registering MapOutputTracker
10:47:53,448  INFO SparkEnv:58 - Registering BlockManagerMaster
10:47:53,460  INFO DiskBlockManager:58 - Created local directory at /tmp/blockmgr-94bbe487-7cf4-4cf5-bcc2-fc538487f31a
10:47:53,473  INFO MemoryStore:58 - MemoryStore started with capacity 2.4 GB
10:47:53,591  INFO SparkEnv:58 - Registering OutputCommitCoordinator
10:47:53,755  INFO Server:272 - jetty-8.y.z-SNAPSHOT
10:47:53,809  INFO AbstractConnector:338 - Started SelectChannelConnector@0.0.0.0:4040
10:47:53,810  INFO Utils:58 - Successfully started service 'SparkUI' on port 4040.
10:47:53,813  INFO SparkUI:58 - Started SparkUI at http://10.200.48.112:4040
spark.yarn.driver.memoryOverhead is set but does not apply in client mode.
10:47:54,996  INFO TimelineClientImpl:296 - Timeline service address: http://dl-rc-optd-ambari-master-v-test-1.host.dataengine.com:8188/ws/v1/timeline/
10:47:55,307  INFO ConfiguredRMFailoverProxyProvider:100 - Failing over to rm2
10:47:55,333  INFO Client:58 - Requesting a new application from cluster with 8 NodeManagers
10:47:55,351  INFO Client:58 - Verifying our application has not requested more than the maximum memory capability of the cluster (10240 MB per container)
10:47:55,351  INFO Client:58 - Will allocate AM container, with 896 MB memory including 384 MB overhead
10:47:55,352  INFO Client:58 - Setting up container launch context for our AM
10:47:55,355  INFO Client:58 - Setting up the launch environment for our AM container
10:47:55,367  INFO Client:58 - Preparing resources for our AM container
10:47:56,298  INFO Client:58 - Uploading resource file:/rc/lib/spark_lib/spark-assembly-1.6.1.2.4.2.0-258-hadoop2.7.1.2.4.2.0-258.jar -> hdfs://chorustest/user/rc/.sparkStaging/application_1499824261147_0015/spark-assembly-1.6.1.2.4.2.0-258-hadoop2.7.1.2.4.2.0-258.jar
10:47:59,369  INFO Client:58 - Uploading resource file:/tmp/spark-ea70c397-fad0-44bc-ae1f-7248ed3f3003/__spark_conf__1134932846047586070.zip -> hdfs://chorustest/user/rc/.sparkStaging/application_1499824261147_0015/__spark_conf__1134932846047586070.zip
10:47:59,442  WARN Client:70 -
hdp.version is not found,
Please set HDP_VERSION=xxx in spark-env.sh,
or set -Dhdp.version=xxx in spark.{driver|yarn.am}.extraJavaOptions
or set SPARK_JAVA_OPTS="-Dhdp.verion=xxx" in spark-env.sh
If you're running Spark under HDP.

10:47:59,456  INFO SecurityManager:58 - Changing view acls to: rc
10:47:59,456  INFO SecurityManager:58 - Changing modify acls to: rc
10:47:59,456  INFO SecurityManager:58 - SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(rc); users with modify permissions: Set(rc)
10:47:59,463  INFO Client:58 - Submitting application 15 to ResourceManager
10:47:59,694  INFO YarnClientImpl:273 - Submitted application application_1499824261147_0015
java.lang.NoSuchMethodError: org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.bindToYarn(Lorg/apache/hadoop/yarn/api/records/ApplicationId;Lscala/Option;)V


On Monday, July 17, 2017 at 5:53:32 PM UTC+8, spi...@... wrote:

My graph has about 100 million vertices and 200 million edges. But if I use the following code, it is too slow.
GraphTraversal<Vertex, Long> countV = traversal.V().count();
while (countV.hasNext()){
System.out.println("countV:" + countV.next());
}

GraphTraversal<Edge, Long> countE = traversal.E().count();
while (countE.hasNext()){
System.out.println("countE:" + countE.next());
}
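
For a graph of that size, an OLAP count with a GraphComputer is the usual route (this is what HadoopMarc's spark-yarn post, referenced elsewhere in this digest, sets up). A minimal sketch, assuming a Hadoop graph properties file like the one from that post (the path below is a placeholder):

Graph graph = GraphFactory.open("conf/hadoop-graph/read-hbase.properties");
GraphTraversalSource g = graph.traversal().withComputer(SparkGraphComputer.class);
long vertexCount = g.V().count().next();  // runs as a Spark job instead of a single-threaded scan
long edgeCount = g.E().count().next();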

I want to compute the count of vertices or edges directly through HBase. The following code is:
SnapshotCounter.HBaseGetter entryGetter = new SnapshotCounter.HBaseGetter();
EntryList entryList = StaticArrayEntryList.ofBytes(
        result.getMap().get(Bytes.toBytes("e")).entrySet(),
        entryGetter);
StandardTitanTx tx = (StandardTitanTx) graph.newTransaction();
System.out.println("Entry list size: " + entryList.size());
int cnt = 0;
// IDInspector inspector = graph.getIDInspector();
for (Entry entry : entryList) {
    RelationCache relation = graph.getEdgeSerializer().readRelation(entry, false, tx);
    // Direction direction = graph.getEdgeSerializer().parseDirection(entry);
    // System.out.println("Direction is:" + direction.name());
    // System.out.println("relation is:" + relation);
    // System.out.println("numProperties: " + relation.numProperties());
    // Iterator<LongObjectCursor<Object>> longObjectCursorIterator = relation.propertyIterator();
    // LongObjectCursor<Object> next = longObjectCursorIterator.next();
    // System.out.println("key is:" + next.key);
    // System.out.println("value is:" + next.value);
    // System.out.println("next.toString is:" + next.toString());

    RelationType type = tx.getExistingRelationType(relation.typeId);

    Iterator<Edge> edgeIterator1 = type.edges(Direction.BOTH);
    while (edgeIterator1.hasNext()) {
        Edge next11 = edgeIterator1.next();
        System.out.println("relType is :" + next11.property("relType"));
    }

    // if (type.isEdgeLabel() && !tx.getIdInspector().isEdgeLabelId(relation.relationId)) {
    // if (type.isEdgeLabel() && !graph.getIDManager().isEdgeLabelId(relation.relationId) &&
    //         !tx.getIdInspector().isRelationTypeId(type.longId())) {
    if (type.isEdgeLabel()) {
        cnt++;
        System.out.print("isSystemRelationTypeId: ");
        System.out.println(graph.getIDManager().isSystemRelationTypeId(relation.typeId));

        System.out.print("isEdgeLabelId: ");
        System.out.println(graph.getIDManager().isEdgeLabelId(relation.typeId));

        System.out.print("type isEdgeLabel: ");
        System.out.println(type.isEdgeLabel());

        System.out.print("relationId isSystemRelationTypeId: ");
        System.out.println(graph.getIDManager().isSystemRelationTypeId(relation.relationId));
        System.out.println(entry.getValue().toString());
    }
}
System.out.println("Edge count: " + cnt);

I made a test with a small graph: two vertices and two edges.

But the edge count I get is one, while I expect two. Is there any problem? Please help... this problem bugs me. Thanks~~~


Re: How Can I make a statistics,i.e:how many vertexes or edges?

spirit...@...
 

I configured Gremlin and Spark-on-YARN according to the post, but I ran into this problem:
10:47:47,199  INFO KryoShimServiceLoader:117 - Set KryoShimService provider to org.apache.tinkerpop.gremlin.hadoop.structure.io.HadoopPoolShimService@4b31a708 (class org.apache.tinkerpop.gremlin.hadoop.structure.io.HadoopPoolShimService) because its priority value (0) is the highest available
10:47:47,199  INFO KryoShimServiceLoader:123 - Configuring KryoShimService provider org.apache.tinkerpop.gremlin.hadoop.structure.io.HadoopPoolShimService@4b31a708 with user-provided configuration
10:47:51,447  INFO SparkContext:58 - Running Spark version 1.6.1
10:47:51,495  INFO SecurityManager:58 - Changing view acls to: rc
10:47:51,496  INFO SecurityManager:58 - Changing modify acls to: rc
10:47:51,496  INFO SecurityManager:58 - SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(rc); users with modify permissions: Set(rc)
10:47:51,855  INFO Utils:58 - Successfully started service 'sparkDriver' on port 41967.
10:47:52,450  INFO Slf4jLogger:80 - Slf4jLogger started
10:47:52,504  INFO Remoting:74 - Starting remoting
10:47:52,666  INFO Remoting:74 - Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@...:50605]
10:47:52,673  INFO Utils:58 - Successfully started service 'sparkDriverActorSystem' on port 50605.
10:47:53,428  INFO SparkEnv:58 - Registering MapOutputTracker
10:47:53,448  INFO SparkEnv:58 - Registering BlockManagerMaster
10:47:53,460  INFO DiskBlockManager:58 - Created local directory at /tmp/blockmgr-94bbe487-7cf4-4cf5-bcc2-fc538487f31a
10:47:53,473  INFO MemoryStore:58 - MemoryStore started with capacity 2.4 GB
10:47:53,591  INFO SparkEnv:58 - Registering OutputCommitCoordinator
10:47:53,755  INFO Server:272 - jetty-8.y.z-SNAPSHOT
10:47:53,809  INFO AbstractConnector:338 - Started SelectChannelConnector@0.0.0.0:4040
10:47:53,810  INFO Utils:58 - Successfully started service 'SparkUI' on port 4040.
10:47:53,813  INFO SparkUI:58 - Started SparkUI at http://10.200.48.112:4040
spark.yarn.driver.memoryOverhead is set but does not apply in client mode.
10:47:54,996  INFO TimelineClientImpl:296 - Timeline service address: http://dl-rc-optd-ambari-master-v-test-1.host.dataengine.com:8188/ws/v1/timeline/
10:47:55,307  INFO ConfiguredRMFailoverProxyProvider:100 - Failing over to rm2
10:47:55,333  INFO Client:58 - Requesting a new application from cluster with 8 NodeManagers
10:47:55,351  INFO Client:58 - Verifying our application has not requested more than the maximum memory capability of the cluster (10240 MB per container)
10:47:55,351  INFO Client:58 - Will allocate AM container, with 896 MB memory including 384 MB overhead
10:47:55,352  INFO Client:58 - Setting up container launch context for our AM
10:47:55,355  INFO Client:58 - Setting up the launch environment for our AM container
10:47:55,367  INFO Client:58 - Preparing resources for our AM container
10:47:56,298  INFO Client:58 - Uploading resource file:/rc/lib/spark_lib/spark-assembly-1.6.1.2.4.2.0-258-hadoop2.7.1.2.4.2.0-258.jar -> hdfs://chorustest/user/rc/.sparkStaging/application_1499824261147_0015/spark-assembly-1.6.1.2.4.2.0-258-hadoop2.7.1.2.4.2.0-258.jar
10:47:59,369  INFO Client:58 - Uploading resource file:/tmp/spark-ea70c397-fad0-44bc-ae1f-7248ed3f3003/__spark_conf__1134932846047586070.zip -> hdfs://chorustest/user/rc/.sparkStaging/application_1499824261147_0015/__spark_conf__1134932846047586070.zip
10:47:59,442  WARN Client:70 -
hdp.version is not found,
Please set HDP_VERSION=xxx in spark-env.sh,
or set -Dhdp.version=xxx in spark.{driver|yarn.am}.extraJavaOptions
or set SPARK_JAVA_OPTS="-Dhdp.verion=xxx" in spark-env.sh
If you're running Spark under HDP.

10:47:59,456  INFO SecurityManager:58 - Changing view acls to: rc
10:47:59,456  INFO SecurityManager:58 - Changing modify acls to: rc
10:47:59,456  INFO SecurityManager:58 - SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(rc); users with modify permissions: Set(rc)
10:47:59,463  INFO Client:58 - Submitting application 15 to ResourceManager
10:47:59,694  INFO YarnClientImpl:273 - Submitted application application_1499824261147_0015
java.lang.NoSuchMethodError: org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.bindToYarn(Lorg/apache/hadoop/yarn/api/records/ApplicationId;Lscala/Option;)V
Am I missing dependencies, or are there version conflicts?


On Tuesday, July 18, 2017 at 10:13:53 PM UTC+8, Jason Plurad wrote:

If you scroll back just a few days in the message history of this group, you'll find a link to this nice blog post: "Configuring JanusGraph for spark-yarn" https://groups.google.com/d/msg/janusgraph-users/9e82gcUTB4M/evKFnB3cAgAJ

HadoopMarc covers doing an OLAP vertex count with JanusGraph + HBase, which sounds like what you're trying to do, and it has an example properties file.
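
A minimal sketch of such a properties file, assuming JanusGraph with the HBase input format (the key names follow that post and may differ between versions; the hostname and table name below are placeholders):

gremlin.graph=org.apache.tinkerpop.gremlin.hadoop.structure.HadoopGraph
gremlin.hadoop.graphReader=org.janusgraph.hadoop.formats.hbase.HBaseInputFormat
gremlin.hadoop.graphWriter=org.apache.tinkerpop.gremlin.hadoop.structure.io.gryo.GryoOutputFormat
janusgraphmr.ioformat.conf.storage.backend=hbase
janusgraphmr.ioformat.conf.storage.hostname=your-zookeeper-host
janusgraphmr.ioformat.conf.storage.hbase.table=janusgraph
spark.master=yarn-client
spark.serializer=org.apache.spark.serializer.KryoSerializer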

I can't really tell what you're trying to do in that code snippet. It would be best if you would share the code publicly on GitHub or BitBucket or something similar so if somebody wanted to try it out, it would be easy to do.

> anybody online?  please help me~

JanusGraph is run by volunteers contributing to the open source project. Immediate responses may not happen. Using the mailing list and searching its archive is your best bet for learning from the community because there are several hundred folks that are subscribed to this list.


On Monday, July 17, 2017 at 11:35:47 PM UTC-4, spirit888hill wrote:
anybody online?  please help me~

On Monday, July 17, 2017 at 5:53:32 PM UTC+8, spi...@... wrote:
My graph has about 100 million vertices and 200 million edges. But if I use the following code, it is too slow.
GraphTraversal<Vertex, Long> countV = traversal.V().count();
while (countV.hasNext()){
System.out.println("countV:" + countV.next());
}

GraphTraversal<Edge, Long> countE = traversal.E().count();
while (countE.hasNext()){
System.out.println("countE:" + countE.next());
}

I want to compute the count of vertices or edges directly through HBase. The following code is:
SnapshotCounter.HBaseGetter entryGetter = new SnapshotCounter.HBaseGetter();
EntryList entryList = StaticArrayEntryList.ofBytes(
        result.getMap().get(Bytes.toBytes("e")).entrySet(),
        entryGetter);
StandardTitanTx tx = (StandardTitanTx) graph.newTransaction();
System.out.println("Entry list size: " + entryList.size());
int cnt = 0;
// IDInspector inspector = graph.getIDInspector();
for (Entry entry : entryList) {
    RelationCache relation = graph.getEdgeSerializer().readRelation(entry, false, tx);
    // Direction direction = graph.getEdgeSerializer().parseDirection(entry);
    // System.out.println("Direction is:" + direction.name());
    // System.out.println("relation is:" + relation);
    // System.out.println("numProperties: " + relation.numProperties());
    // Iterator<LongObjectCursor<Object>> longObjectCursorIterator = relation.propertyIterator();
    // LongObjectCursor<Object> next = longObjectCursorIterator.next();
    // System.out.println("key is:" + next.key);
    // System.out.println("value is:" + next.value);
    // System.out.println("next.toString is:" + next.toString());

    RelationType type = tx.getExistingRelationType(relation.typeId);

    Iterator<Edge> edgeIterator1 = type.edges(Direction.BOTH);
    while (edgeIterator1.hasNext()) {
        Edge next11 = edgeIterator1.next();
        System.out.println("relType is :" + next11.property("relType"));
    }

    // if (type.isEdgeLabel() && !tx.getIdInspector().isEdgeLabelId(relation.relationId)) {
    // if (type.isEdgeLabel() && !graph.getIDManager().isEdgeLabelId(relation.relationId) &&
    //         !tx.getIdInspector().isRelationTypeId(type.longId())) {
    if (type.isEdgeLabel()) {
        cnt++;
        System.out.print("isSystemRelationTypeId: ");
        System.out.println(graph.getIDManager().isSystemRelationTypeId(relation.typeId));

        System.out.print("isEdgeLabelId: ");
        System.out.println(graph.getIDManager().isEdgeLabelId(relation.typeId));

        System.out.print("type isEdgeLabel: ");
        System.out.println(type.isEdgeLabel());

        System.out.print("relationId isSystemRelationTypeId: ");
        System.out.println(graph.getIDManager().isSystemRelationTypeId(relation.relationId));
        System.out.println(entry.getValue().toString());
    }
}
System.out.println("Edge count: " + cnt);

I made a test with a small graph: two vertices and two edges.

But the edge count I get is one, while I expect two. Is there any problem? Please help... this problem bugs me. Thanks~~~


Re: Geoshape property in remote gremlin query, GraphSON

Robert Dale <rob...@...>
 

It seems Geoshape GraphSON support is hardcoded to v1, although I couldn't get it to work with that either. If you have to use GraphSON instead of Gryo, then you could check out master, apply this patch, and rebuild. I created an issue to support multiple versions of serializers: https://github.com/JanusGraph/janusgraph/issues/420

diff --git a/janusgraph-core/src/main/java/org/janusgraph/graphdb/tinkerpop/io/graphson/JanusGraphSONModule.java b/janusgraph-core/src/main/java/org/janusgraph/graphdb/tinkerpop/io/graphson/JanusGraphSONModule.java
index 6ef907b..8168309 100644
--- a/janusgraph-core/src/main/java/org/janusgraph/graphdb/tinkerpop/io/graphson/JanusGraphSONModule.java
+++ b/janusgraph-core/src/main/java/org/janusgraph/graphdb/tinkerpop/io/graphson/JanusGraphSONModule.java
@@ -50,10 +50,10 @@ public class JanusGraphSONModule extends TinkerPopJacksonModule {
     private JanusGraphSONModule() {
         super("janusgraph");
         addSerializer(RelationIdentifier.class, new RelationIdentifierSerializer());
-        addSerializer(Geoshape.class, new Geoshape.GeoshapeGsonSerializerV1d0());
+        addSerializer(Geoshape.class, new Geoshape.GeoshapeGsonSerializerV2d0());
 
         addDeserializer(RelationIdentifier.class, new RelationIdentifierDeserializer());
-        addDeserializer(Geoshape.class, new Geoshape.GeoshapeGsonDeserializerV1d0());
+        addDeserializer(Geoshape.class, new Geoshape.GeoshapeGsonDeserializerV2d0());
     }
 
     private static final JanusGraphSONModule INSTANCE = new JanusGraphSONModule();



On Tuesday, July 18, 2017 at 5:47:50 PM UTC-4, Conrad Rosenbrock wrote:
I am trying to assign a value to a property with the native Geoshape type. I have it serialized into JSON as follows (where g is aliased to the traversal on gremlin server):

{"@value": {"type": "point", "coordinates": [{"@value": 1.1, "@type": "g:Double"}, {"@value": 2.2, "@type": "g:Double"}]}, "@type": "g:Geoshape"}

In the gremlin console, I can easily type 

Geoshape.point(1.1, 2.2)

and it works perfectly. I am sure that it is something quite simple. Here is the error:

Request [PooledUnsafeDirectByteBuf(ridx: 653, widx: 653, cap: 687)] could not be deserialized by org.apache.tinkerpop.gremlin.driver.ser.AbstractGraphSONMessageSerializerV2d0.

For reference, I do have the following serializer in the gremlin server config:

{ className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerGremlinV2d0, config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry] }}

which should direct gremlin server to the relevant deserializer in Janus.

Thanks!


Merging two graphs.

Gwiz <feed...@...>
 

Hello,

I have an in-memory Janusgraph instance and I would like to merge that with my Cassandra/ES based graph. What is the best (efficient) way to do this?

When I merge, I don't need to insert the vertices that are already in my Cassandra/ES based graph. I know which vertices in the in-memory graph exists in the Cassandra/ES based graph. 

I don't see any Java API to take a Vertex (or list of Vertices) from one Janusgraph instance and add them to another. All I see is the following:
  • addVertex(String)
  • addVertex(Object... objects)
My original plan is to insert the new vertices from the in-memory graph into my Cassandra/ES-based graph, get the IDs back, and use them to insert the edges, roughly as sketched below.
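To make that plan concrete, here is a minimal sketch using the plain TinkerPop structure API. isKnownVertex and findExistingVertex are hypothetical stand-ins for my application-specific lookup; the rest is stock TinkerPop:

import java.util.HashMap;
import java.util.Map;
import org.apache.tinkerpop.gremlin.structure.Edge;
import org.apache.tinkerpop.gremlin.structure.Graph;
import org.apache.tinkerpop.gremlin.structure.T;
import org.apache.tinkerpop.gremlin.structure.Vertex;

public static void merge(Graph inMemory, Graph target) {
    // Maps in-memory vertex ids to their counterparts in the target graph
    Map<Object, Vertex> idMap = new HashMap<>();

    inMemory.vertices().forEachRemaining(v -> {
        if (isKnownVertex(v)) {                                // hypothetical: already in target
            idMap.put(v.id(), findExistingVertex(target, v));  // hypothetical lookup
        } else {
            Vertex copy = target.addVertex(T.label, v.label());
            v.properties().forEachRemaining(p -> copy.property(p.key(), p.value()));
            idMap.put(v.id(), copy);
        }
    });

    // Recreate every in-memory edge between the mapped target vertices
    inMemory.edges().forEachRemaining(e -> {
        Vertex out = idMap.get(e.outVertex().id());
        Vertex in = idMap.get(e.inVertex().id());
        Edge copy = out.addEdge(e.label(), in);
        e.properties().forEachRemaining(p -> copy.property(p.key(), p.value()));
    });

    target.tx().commit();
}

At any real size I would presumably need to commit in batches rather than once at the end.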

I appreciate any help.

Thanks.


Re: How Can I make a statistics,i.e:how many vertexes or edges?

spirit...@...
 

Oooops, I'm so sorry. HadoopMarc is a person.:)

On Tuesday, July 18, 2017 at 10:13:53 PM UTC+8, Jason Plurad wrote:

If you scroll back just a few days in the message history of this group, you'll find a link to this nice blog post: "Configuring JanusGraph for spark-yarn" https://groups.google.com/d/msg/janusgraph-users/9e82gcUTB4M/evKFnB3cAgAJ

HadoopMarc covers doing an OLAP vertex count with JanusGraph + HBase, which sounds like what you're trying to do, and his post includes an example properties file.
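For reference, once such a properties file is in place, the count itself is short. Here is a minimal Java sketch (the properties file path is hypothetical; point it at your own Hadoop/HBase config):

import org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversalSource;
import org.apache.tinkerpop.gremlin.spark.process.computer.SparkGraphComputer;
import org.apache.tinkerpop.gremlin.structure.Graph;
import org.apache.tinkerpop.gremlin.structure.util.GraphFactory;

// Open a HadoopGraph over the HBase-backed JanusGraph data
Graph graph = GraphFactory.open("conf/hadoop-graph/read-hbase.properties"); // hypothetical path
// Run traversals through Spark instead of a single-threaded local scan
GraphTraversalSource g = graph.traversal().withComputer(SparkGraphComputer.class);
System.out.println("vertices: " + g.V().count().next());
System.out.println("edges: " + g.E().count().next());

The two count traversals then execute as Spark jobs over the HBase table rather than pulling every element through one JanusGraph instance.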

I can't really tell what you're trying to do in that code snippet. It would be best to share the code publicly on GitHub, Bitbucket, or something similar, so that anybody who wants to try it out can do so easily.

> anybody online?  please help me~

JanusGraph is run by volunteers contributing to the open source project. Immediate responses may not happen. Using the mailing list and searching its archive is your best bet for learning from the community because there are several hundred folks that are subscribed to this list.


On Monday, July 17, 2017 at 11:35:47 PM UTC-4, spirit888hill wrote:
anybody online?  please help me~

On Monday, July 17, 2017 at 5:53:32 PM UTC+8, the original poster wrote:
My graph has about 100 million vertices and 200 million edges. But if I use the following code, it is too slow.
GraphTraversal<Vertex, Long> countV = traversal.V().count();
while (countV.hasNext()) {
    System.out.println("countV:" + countV.next());
}

GraphTraversal<Edge, Long> countE = traversal.E().count();
while (countE.hasNext()) {
    System.out.println("countE:" + countE.next());
}

I want to compute the count of vertices or edges directly through HBase. The following code is:
SnapshotCounter.HBaseGetter entryGetter = new SnapshotCounter.HBaseGetter();
// "e" is presumably the edgestore column family of one HBase row (one vertex)
EntryList entryList = StaticArrayEntryList.ofBytes(
        result.getMap().get(Bytes.toBytes("e")).entrySet(),
        entryGetter);
StandardTitanTx tx = (StandardTitanTx) graph.newTransaction();
System.out.println("Entry list size: " + entryList.size());
int cnt = 0;
// IDInspector inspector = graph.getIDInspector();
for (Entry entry : entryList) {
    RelationCache relation = graph.getEdgeSerializer().readRelation(entry, false, tx);
    // Direction direction = graph.getEdgeSerializer().parseDirection(entry);
    // System.out.println("Direction is:" + direction.name());
    // System.out.println("relation is:" + relation);
    // System.out.println("numProperties: " + relation.numProperties());
    // Iterator<LongObjectCursor<Object>> longObjectCursorIterator = relation.propertyIterator();
    // LongObjectCursor<Object> next = longObjectCursorIterator.next();
    // System.out.println("key is:" + next.key);
    // System.out.println("value is:" + next.value);
    // System.out.println("next.toString is:" + next.toString());

    RelationType type = tx.getExistingRelationType(relation.typeId);

    Iterator<Edge> edgeIterator1 = type.edges(Direction.BOTH);
    while (edgeIterator1.hasNext()) {
        Edge next11 = edgeIterator1.next();
        System.out.println("relType is :" + next11.property("relType"));
    }

    // if (type.isEdgeLabel() && !tx.getIdInspector().isEdgeLabelId(relation.relationId)) {
    // if (type.isEdgeLabel() && !graph.getIDManager().isEdgeLabelId(relation.relationId) &&
    //         !tx.getIdInspector().isRelationTypeId(type.longId())) {
    if (type.isEdgeLabel()) {
        cnt++;
        System.out.print("isSystemRelationTypeId: ");
        System.out.println(graph.getIDManager().isSystemRelationTypeId(relation.typeId));
        System.out.print("isEdgeLabelId: ");
        System.out.println(graph.getIDManager().isEdgeLabelId(relation.typeId));
        System.out.print("type isEdgeLabel: ");
        System.out.println(type.isEdgeLabel());
        System.out.print("relationId isSystemRelationTypeId: ");
        System.out.println(graph.getIDManager().isSystemRelationTypeId(relation.relationId));
        System.out.println(entry.getValue().toString());
    }
}
System.out.println("Edge count: " + cnt);

I made a test with a small graph: two vertices and two edges.

But the edge count I get is one, when I expect two. Is there any problem? Please help, this problem bugs me. Thanks!


Re: How Can I make a statistics,i.e:how many vertexes or edges?

spirit...@...
 

Thanks Jason for your reply.
I'll show my code on GitHub later.
And I want to know: what is HadoopMarc? Do you mean Hadoop MapReduce?

On Tuesday, July 18, 2017 at 10:13:53 PM UTC+8, Jason Plurad wrote:

If you scroll back just a few days in the message history of this group, you'll find a link to this nice blog post: "Configuring JanusGraph for spark-yarn" https://groups.google.com/d/msg/janusgraph-users/9e82gcUTB4M/evKFnB3cAgAJ

HadoopMarc covers doing an OLAP vertex count with JanusGraph + HBase, which sounds like what you're trying to do, and his post includes an example properties file.

I can't really tell what you're trying to do in that code snippet. It would be best to share the code publicly on GitHub, Bitbucket, or something similar, so that anybody who wants to try it out can do so easily.

> anybody online?  please help me~

JanusGraph is run by volunteers contributing to the open source project. Immediate responses may not happen. Using the mailing list and searching its archive is your best bet for learning from the community because there are several hundred folks that are subscribed to this list.


On Monday, July 17, 2017 at 11:35:47 PM UTC-4, spirit888hill wrote:
anybody online?  please help me~

On Monday, July 17, 2017 at 5:53:32 PM UTC+8, the original poster wrote:
My graph has about 100 million vertices and 200 million edges. But if I use the following code, it is too slow.
GraphTraversal<Vertex, Long> countV = traversal.V().count();
while (countV.hasNext()) {
    System.out.println("countV:" + countV.next());
}

GraphTraversal<Edge, Long> countE = traversal.E().count();
while (countE.hasNext()) {
    System.out.println("countE:" + countE.next());
}

I want to compute the count of vertices or edges directly through HBase. The following code is:
SnapshotCounter.HBaseGetter entryGetter = new SnapshotCounter.HBaseGetter();
// "e" is presumably the edgestore column family of one HBase row (one vertex)
EntryList entryList = StaticArrayEntryList.ofBytes(
        result.getMap().get(Bytes.toBytes("e")).entrySet(),
        entryGetter);
StandardTitanTx tx = (StandardTitanTx) graph.newTransaction();
System.out.println("Entry list size: " + entryList.size());
int cnt = 0;
// IDInspector inspector = graph.getIDInspector();
for (Entry entry : entryList) {
    RelationCache relation = graph.getEdgeSerializer().readRelation(entry, false, tx);
    // Direction direction = graph.getEdgeSerializer().parseDirection(entry);
    // System.out.println("Direction is:" + direction.name());
    // System.out.println("relation is:" + relation);
    // System.out.println("numProperties: " + relation.numProperties());
    // Iterator<LongObjectCursor<Object>> longObjectCursorIterator = relation.propertyIterator();
    // LongObjectCursor<Object> next = longObjectCursorIterator.next();
    // System.out.println("key is:" + next.key);
    // System.out.println("value is:" + next.value);
    // System.out.println("next.toString is:" + next.toString());

    RelationType type = tx.getExistingRelationType(relation.typeId);

    Iterator<Edge> edgeIterator1 = type.edges(Direction.BOTH);
    while (edgeIterator1.hasNext()) {
        Edge next11 = edgeIterator1.next();
        System.out.println("relType is :" + next11.property("relType"));
    }

    // if (type.isEdgeLabel() && !tx.getIdInspector().isEdgeLabelId(relation.relationId)) {
    // if (type.isEdgeLabel() && !graph.getIDManager().isEdgeLabelId(relation.relationId) &&
    //         !tx.getIdInspector().isRelationTypeId(type.longId())) {
    if (type.isEdgeLabel()) {
        cnt++;
        System.out.print("isSystemRelationTypeId: ");
        System.out.println(graph.getIDManager().isSystemRelationTypeId(relation.typeId));
        System.out.print("isEdgeLabelId: ");
        System.out.println(graph.getIDManager().isEdgeLabelId(relation.typeId));
        System.out.print("type isEdgeLabel: ");
        System.out.println(type.isEdgeLabel());
        System.out.print("relationId isSystemRelationTypeId: ");
        System.out.println(graph.getIDManager().isSystemRelationTypeId(relation.relationId));
        System.out.println(entry.getValue().toString());
    }
}
System.out.println("Edge count: " + cnt);

I made a test with a small graph: two vertices and two edges.

But the edge count I get is one, when I expect two. Is there any problem? Please help, this problem bugs me. Thanks!


Geoshape property in remote gremlin query, GraphSON

rosen...@...
 

I am trying to assign a value to a property with the native Geoshape type. I have it serialized into JSON as follows (where g is aliased to the traversal on gremlin server):

{"@value": {"type": "point", "coordinates": [{"@value": 1.1, "@type": "g:Double"}, {"@value": 2.2, "@type": "g:Double"}]}, "@type": "g:Geoshape"}

In the gremlin console, I can easily type 

Geoshape.point(1.1, 2.2)

and it works perfectly. I am sure that it is something quite simple. Here is the error:

Request [PooledUnsafeDirectByteBuf(ridx: 653, widx: 653, cap: 687)] could not be deserialized by org.apache.tinkerpop.gremlin.driver.ser.AbstractGraphSONMessageSerializerV2d0.

For reference, I do have the following serializer in the gremlin server config:

{ className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerGremlinV2d0, config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry] }}

which should direct gremlin server to the relevant deserializer in Janus.

Thanks!
