Re: HBase ScannerTimeoutException

HadoopMarc <m.c.d...@...>
 

Hi Joseph,

If you want to process all vertices (a map operation), you need an OLAP query (this currently only works for read-only tasks):
http://docs.janusgraph.org/latest/hadoop-tp3.html
http://tinkerpop.apache.org/docs/3.2.3/reference/#sparkgraphcomputer
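
For illustration, a minimal Gremlin console sketch of such an OLAP traversal (the properties file name is an assumption; it must point at a HadoopGraph configured with the HBase input format, as described in the hadoop-tp3 page above):

// 'conf/hadoop-graph/read-hbase.properties' is illustrative; it must configure HadoopGraph with the HBase input format
graph = GraphFactory.open('conf/hadoop-graph/read-hbase.properties')
g = graph.traversal().withComputer(SparkGraphComputer)
// the traversal now runs as a read-only Spark OLAP job over all vertices
g.V().count()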

If you want to filter the total set of vertices, you need an index on one or more properties of your vertices:
http://docs.janusgraph.org/latest/indexes.html
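
For example (a minimal sketch; the property key 'name' and index name 'byName' are illustrative), a composite index lets JanusGraph answer an equality filter without a full scan:

mgmt = graph.openManagement()
name = mgmt.makePropertyKey('name').dataType(String.class).make()
mgmt.buildIndex('byName', Vertex.class).addKey(name).buildCompositeIndex()
mgmt.commit()
// once the index is ENABLED, this is an index lookup instead of a graph scan
g.V().has('name', 'someValue')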

What do you want to accomplish apart from looping over the vertices in your graph?

HTH,   Marc

On Thursday, May 11, 2017 at 16:18:50 UTC+2, Joseph Obernberger wrote:

Hi All - I'm using a loop to do a task on all vertices in a fairly large graph (million+ nodes), and the operation that I'm doing takes some time. I'm getting a org.apache.hadoop.hbase.client.ScannerTimeoutException: 20091230ms passed since the last invocation, timeout is currently set to 60000.

Is there a better way to loop through all vertices besides something like:
-------------------
JanusGraphTransaction vertTrans = graph.newTransaction();
Iterator<Vertex> vertices = vertTrans.vertices();

while (vertices.hasNext()) {
    Vertex v = vertices.next();
    // do stuff with v
}
------------------

?

Thank you!

-Joe


Re: Adding dependencies to target 'install'

Jason Plurad <plu...@...>
 

I think you're looking for janusgraph-core/pom.xml. All of these dependencies end up in the distribution zip.


On Wednesday, May 10, 2017 at 6:31:18 PM UTC-7, Timothy Findlay wrote:
Hi Folks,

Forgive me for the simple question, but I've got:
# mvn clean install -DskipTests

This all works fine, but I want to add a new JAR to the classpath; specifically, I'd like to use univocity-parsers.

I tried adding:
<dependency>
	<groupId>com.univocity</groupId>
	<artifactId>univocity-parsers</artifactId>
	<version>2.4.1</version>
	<type>jar</type>
</dependency>

to the main pom, but the JAR doesn't get added to the lib folder, and thus it's not available at runtime. Sure, I could just copy the built JAR in there, but I'd like a semi-automated build if I can.

Where is the right place to add the dependency so that Maven includes the JAR in the 'install' target?


HBase ScannerTimeoutException

Joe Obernberger <joseph.o...@...>
 

Hi All - I'm using a loop to do a task on all vertices in a fairly large graph (million+ nodes), and the operation that I'm doing takes some time. I'm getting a org.apache.hadoop.hbase.client.ScannerTimeoutException: 20091230ms passed since the last invocation, timeout is currently set to 60000.

Is there a better way to loop through all vertices besides something like:
-------------------
JanusGraphTransaction vertTrans = graph.newTransaction();
Iterator<Vertex> vertices = vertTrans.vertices();

while (vertices.hasNext()) {
    Vertex v = vertices.next();
    // do stuff with v
}
------------------

?
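
One possible workaround, sketched below in Gremlin console Groovy (a sketch only, not something tested against this setup): do a fast pass over the scan to collect vertex ids first, then do the slow work on each vertex in its own short transaction, so the HBase scanner is never left idle:

// sketch only; assumes the vertex ids fit in memory (a million Longs is small)
scanTx = graph.newTransaction()
ids = []
scanTx.vertices().each { ids << it.id() }   // fast pass over the scanner
scanTx.rollback()

ids.each { id ->
    workTx = graph.newTransaction()
    v = workTx.vertices(id).next()
    // do the slow stuff with v here
    workTx.commit()
}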

Thank you!

-Joe


Adding dependencies to target 'install'

tfin...@...
 

Hi Folks,

Forgive me for the simple question, but I've got:
# git clone https://github.com/JanusGraph/janusgraph.git
# mvn clean install -DskipTests

This all works fine, but I want to add a new JAR to the classpath; specifically, I'd like to use univocity-parsers.

I tried adding:
<dependency>
	<groupId>com.univocity</groupId>
	<artifactId>univocity-parsers</artifactId>
	<version>2.4.1</version>
	<type>jar</type>
</dependency>

to the main pom, but the JAR doesn't get added to the lib folder, and thus it's not available at runtime. Sure, I could just copy the built JAR in there, but I'd like a semi-automated build if I can.

Where is the right place to add the dependency so that Maven includes the JAR in the 'install' target?


Re: Hadoop and MixedIndexes

Marcelo Freitas <marcel...@...>
 

Answering my own question, BulkLoaderVertexProgram uses a nested instance of a graph in each worker to do the trick.
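
For anyone who finds this later, a rough Gremlin console sketch of that pattern (file names are illustrative); writeGraph points at the JanusGraph/Cassandra/Elasticsearch properties that each worker opens, so mixed indexes get updated through the normal JanusGraph write path:

// readGraph is a HadoopGraph over the input data; both property files are illustrative
readGraph = GraphFactory.open('conf/hadoop-graph/hadoop-load.properties')
blvp = BulkLoaderVertexProgram.build().
        writeGraph('conf/janusgraph-cassandra-es.properties').
        create(readGraph)
readGraph.compute(SparkGraphComputer).program(blvp).submit().get()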


On Tuesday, May 9, 2017 at 5:36:57 PM UTC+9, Marcelo Freitas wrote:
OK, I get it now: the support GremlinHadoop will have for Cassandra is as a reader.

So, what is the recommended way to persist this data back to the cassandra/elasticsearch cluster?

On Tuesday, May 9, 2017 at 3:40:54 PM UTC+9, Marcelo Freitas wrote:
Hello there,

This might be a stupid question, but when running tasks in Hadoop, do we have access to the index backend?

If not, what happens when we run something like the PageRankVertexProgram and the indexes we update are actually MixedIndexes? Does only Cassandra get updated? Does the update on the Elasticsearch side get triggered automatically somehow?

If yes, do you have any pointers to docs on how to set this up?

Regards.


Re: Hadoop and MixedIndexes

Marcelo Freitas <marcel...@...>
 

OK, I get it now: the support GremlinHadoop will have for Cassandra is as a reader.

So, what is the recommended way to persist this data back to the cassandra/elasticsearch cluster?


On Tuesday, May 9, 2017 at 3:40:54 PM UTC+9, Marcelo Freitas wrote:
Hello there,

This might be a stupid question, but when running tasks in Hadoop, do we have access to the index backend?

If not, what happens when we run something like the PageRankVertexProgram and the indexes we update are actually MixedIndexes? Does only Cassandra get updated? Does the update on the Elasticsearch side get triggered automatically somehow?

If yes, do you have any pointers to docs on how to set this up?

Regards.


Re: Production Readiness

Marcelo Freitas <marcel...@...>
 

Collin,

We are using JanusGraph in production where I work (for about one or two weeks now; before that we used Titan), and I know other companies that still use Titan in their production environments. So far, rock solid: not a single crash or weird bug found in production, and we do have a considerable number of users.

Regards


On Wednesday, May 3, 2017 at 12:20:09 AM UTC+9, Jason Plurad wrote:
Hi Collin,

I'm not aware of what "level of testing and rigor that went into the 1.0 release of Titan". The version is just a number, and I wouldn't get too hung up on it if you've tested it in your own environment and you're satisfied with its capabilities. I know of several projects that put pre-1.0 versions of Titan into production.

As an open source project, I think that JanusGraph's goal is to encourage everybody in the community to get involved testing releases while they are being developed. Nobody knows your environment and your needs more than you do! There are directions in the source code repository on how to run the test cases. Release candidates are announced and voted on via the dev mailing list, and anybody in the community is welcome to provide feedback. And feedback is exactly what JanusGraph needs from the community to continue evolving.

That being said, the first release of JanusGraph is largely the same code as Titan 1.0. There was work done renaming classes and packages, plus several dependency version uplifts, namely TinkerPop, HBase, and BerkeleyDB. If you're looking to migrate an existing Titan 1.0 deployment, you should be aware of #228, which already has a fix from Alex. I'd look for a fix release on the 0.1 branch to arrive sooner than the 0.2 release (or whatever the next release is numbered).

-- Jason

On Wednesday, April 26, 2017 at 7:24:14 PM UTC-4, Collin Scangarella wrote:
Hey all,

We're pretty excited about Janus over here and are considering switching over in the near future (I'm actually trying out OLAP with Hadoop 2 as we speak). I'm normally quite hesitant to put very early releases into production (I never would have done that with the 0.1.0 of Titan), but since Janus is a fork of Titan, I have a feeling that it'll be much more stable than most software at this point in the lifecycle. I want to reach out to the community to see if Janus's 0.1.0 release underwent the level of testing and rigor that went into the 1.0 release of Titan. Does anyone have any insight into this?

Thanks,
Collin


Hadoop and MixedIndexes

Marcelo Freitas <marcel...@...>
 

Hello there,

This might be a stupid question, but when running tasks in Hadoop, do we have access to the index backend?

If not, what happens when we run something like the PageRankVertexProgram and the indexes we update are actually MixedIndexes? Does only Cassandra get updated? Does the update on the Elasticsearch side get triggered automatically somehow?

If yes, do you have any pointers to docs on how to set this up?

Regards.


Re: Scala 2.10 vs. 2.11 build of Janus?

HadoopMarc <m.c.d...@...>
 

Hi Raj,

Do you need JanusGraph's OLAP part? If not, you will probably succeed in your Scala 2.11 project if you simply exclude TinkerPop's spark-gremlin from your janusgraph dependency. Note that I did not check the entire dependency tree. However, further support for my suggestion comes from gremlin-scala, which uses either Scala 2.11 or 2.12:
https://github.com/mpollmeier/gremlin-scala/blob/v3.2.3.4/build.sbt
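
In case it helps, a hypothetical Gradle (Groovy DSL) sketch of such an exclusion; the coordinates are illustrative, and the same idea applies to an sbt exclude() or a Maven <exclusions> block:

dependencies {
    // exclude TinkerPop's spark-gremlin (and its Scala 2.10 transitive deps) from whichever janusgraph module you use
    compile('org.janusgraph:janusgraph-hadoop:0.1.0') {
        exclude group: 'org.apache.tinkerpop', module: 'spark-gremlin'
    }
}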

Cheers,    Marc

On Tuesday, April 4, 2017 at 14:30:15 UTC+2, raj...@... wrote:


We are using Scala 2.11.8 in our project, but the JanusGraph build seems to use Scala 2.10.(5/6?) libraries.

We are wondering if there is a near-future release (perhaps through Maven) built with Scala 2.11.

If not, we wonder whether you have already tried that and ran into issues.

We are facing compatibility issues using libraries specific to Scala 2.11, such as logging, when we try to use an older (several years old, in fact) Scala 2.10.6.

Thanks!
Raj


Re: Scala 2.10 vs. 2.11 build of Janus?

Jamie Lawson <jamier...@...>
 

You say it would take some work. Do you have a ballpark estimate my customer can use for planning? Presumably, when TinkerPop 3.3 snapshots get to the "candidate" stage, you start tooling for them. After that, is it two months before you have a release? Four months? Six months? A ballpark number here would be really valuable for planning purposes.


On Tuesday, April 4, 2017 at 7:07:35 AM UTC-7, Jason Plurad wrote:
The Scala dependency is defined via Apache TinkerPop. JanusGraph is currently aligned with the TinkerPop 3.2.x branch, which uses Scala 2.10.5 (and Spark 1.6.1). Scala 2.11 (and Spark 2.0.2) is coming with TinkerPop 3.3, but there isn't a schedule in place for that yet.

I'm not aware if anybody has attempted to build JanusGraph against TinkerPop master branch, but I imagine there would be some work involved for any breaking changes.

On Tuesday, April 4, 2017 at 8:30:15 AM UTC-4, rajdeep.singh wrote:

We are using Scala 2.11.8 in our project, but the JanusGraph build seems to use Scala 2.10.(5/6?) libraries.

We are wondering if there is a near-future release (perhaps through Maven) built with Scala 2.11.

If not, we wonder whether you have already tried that and ran into issues.

We are facing compatibility issues using libraries specific to Scala 2.11, such as logging, when we try to use an older (several years old, in fact) Scala 2.10.6.

Thanks!
Raj


Re: org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available

Jason Plurad <plu...@...>
 

Sounds like ES is not started or cannot be reached. Are you using `bin/janusgraph.sh start`, or do you have a standalone ES deployed?
I'd suggest double-checking the IP and port that ES is listening on, and verifying that they match what you have in the properties file.
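
For reference, the relevant lines in the properties file look roughly like this (values are illustrative and must match where ES actually listens):

index.search.backend=elasticsearch
index.search.hostname=127.0.0.1
index.search.elasticsearch.client-only=true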


On Friday, May 5, 2017 at 10:54:07 AM UTC-4, SY wrote:
I want to load GraphOfTheGodsFactory using Cassandra & ES.

It gives me this exception when I try to do:
gremlin> graph = JanusGraphFactory.open('conf/janusgraph-cassandra-es.properties')

java.lang.IllegalArgumentException: Could not instantiate implementation: org.janusgraph.diskstorage.es.ElasticSearchIndex
        at org.janusgraph.util.system.ConfigurationUtil.instantiate(ConfigurationUtil.java:69)
        at org.janusgraph.diskstorage.Backend.getImplementationClass(Backend.java:480)
        at org.janusgraph.diskstorage.Backend.getIndexes(Backend.java:467)
        at org.janusgraph.diskstorage.Backend.<init>(Backend.java:154)
        at org.janusgraph.graphdb.configuration.GraphDatabaseConfiguration.getBackend(GraphDatabaseConfiguration.java:1840)
        at org.janusgraph.graphdb.database.StandardJanusGraph.<init>(StandardJanusGraph.java:134)
        at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:107)
        at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:75)
        at org.janusgraph.core.JanusGraphFactory$open.call(Unknown Source)
        at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48)
        at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113)
        at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:125)
        at groovysh_evaluate.run(groovysh_evaluate:3)
        at org.codehaus.groovy.vmplugin.v7.IndyInterface.selectMethod(IndyInterface.java:232)
        at org.codehaus.groovy.tools.shell.Interpreter.evaluate(Interpreter.groovy:70)
        at org.codehaus.groovy.tools.shell.Groovysh.execute(Groovysh.groovy:190)
        at org.apache.tinkerpop.gremlin.console.GremlinGroovysh.super$3$execute(GremlinGroovysh.groovy)
        at sun.reflect.GeneratedMethodAccessor14.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:93)
        at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:325)
        at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1215)
        at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.invokeMethodOnSuperN(ScriptBytecodeAdapter.java:132)
        at org.apache.tinkerpop.gremlin.console.GremlinGroovysh.execute(GremlinGroovysh.groovy:72)
        at org.codehaus.groovy.vmplugin.v7.IndyInterface.selectMethod(IndyInterface.java:232)
        at org.codehaus.groovy.tools.shell.Shell.leftShift(Shell.groovy:122)
        at org.codehaus.groovy.vmplugin.v7.IndyInterface.selectMethod(IndyInterface.java:232)
        at org.codehaus.groovy.tools.shell.ShellRunner.work(ShellRunner.groovy:95)
        at org.codehaus.groovy.tools.shell.InteractiveShellRunner.super$2$work(InteractiveShellRunner.groovy)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:93)
        at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:325)
        at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1215)
        at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.invokeMethodOnSuperN(ScriptBytecodeAdapter.java:132)
        at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.invokeMethodOnSuper0(ScriptBytecodeAdapter.java:152)
        at org.codehaus.groovy.tools.shell.InteractiveShellRunner.work(InteractiveShellRunner.groovy:124)
        at org.codehaus.groovy.vmplugin.v7.IndyInterface.selectMethod(IndyInterface.java:232)
        at org.codehaus.groovy.tools.shell.ShellRunner.run(ShellRunner.groovy:59)
        at org.codehaus.groovy.tools.shell.InteractiveShellRunner.super$2$run(InteractiveShellRunner.groovy)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:93)
        at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:325)
        at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1215)
        at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.invokeMethodOnSuperN(ScriptBytecodeAdapter.java:132)
        at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.invokeMethodOnSuper0(ScriptBytecodeAdapter.java:152)
        at org.codehaus.groovy.tools.shell.InteractiveShellRunner.run(InteractiveShellRunner.groovy:83)
        at org.codehaus.groovy.vmplugin.v7.IndyInterface.selectMethod(IndyInterface.java:232)
        at org.apache.tinkerpop.gremlin.console.Console.<init>(Console.groovy:152)
        at org.codehaus.groovy.vmplugin.v7.IndyInterface.selectMethod(IndyInterface.java:232)
        at org.apache.tinkerpop.gremlin.console.Console.main(Console.groovy:455)
Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.janusgraph.util.system.ConfigurationUtil.instantiate(ConfigurationUtil.java:58)
        ... 56 more
Caused by: org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available: []
        at org.elasticsearch.client.transport.TransportClientNodesService.ensureNodesAreAvailable(TransportClientNodesService.java:279)
        at org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:198)
        at org.elasticsearch.client.transport.support.InternalTransportClusterAdminClient.execute(InternalTransportClusterAdminClient.java:86)
        at org.elasticsearch.client.support.AbstractClusterAdminClient.health(AbstractClusterAdminClient.java:127)
        at org.elasticsearch.action.admin.cluster.health.ClusterHealthRequestBuilder.doExecute(ClusterHealthRequestBuilder.java:92)
        at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:91)
        at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:65)
        at org.janusgraph.diskstorage.es.ElasticSearchIndex.<init>(ElasticSearchIndex.java:215)
        ... 61 more


Any thoughts?


org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available

SY <sahithiy...@...>
 

I want to load GraphOfTheGodsFactory using Cassandra & ES.

It gives me this exception when I try to do:
gremlin> graph = JanusGraphFactory.open('conf/janusgraph-cassandra-es.properties')

java.lang.IllegalArgumentException: Could not instantiate implementation: org.janusgraph.diskstorage.es.ElasticSearchIndex
        at org.janusgraph.util.system.ConfigurationUtil.instantiate(ConfigurationUtil.java:69)
        at org.janusgraph.diskstorage.Backend.getImplementationClass(Backend.java:480)
        at org.janusgraph.diskstorage.Backend.getIndexes(Backend.java:467)
        at org.janusgraph.diskstorage.Backend.<init>(Backend.java:154)
        at org.janusgraph.graphdb.configuration.GraphDatabaseConfiguration.getBackend(GraphDatabaseConfiguration.java:1840)
        at org.janusgraph.graphdb.database.StandardJanusGraph.<init>(StandardJanusGraph.java:134)
        at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:107)
        at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:75)
        at org.janusgraph.core.JanusGraphFactory$open.call(Unknown Source)
        at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48)
        at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113)
        at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:125)
        at groovysh_evaluate.run(groovysh_evaluate:3)
        at org.codehaus.groovy.vmplugin.v7.IndyInterface.selectMethod(IndyInterface.java:232)
        at org.codehaus.groovy.tools.shell.Interpreter.evaluate(Interpreter.groovy:70)
        at org.codehaus.groovy.tools.shell.Groovysh.execute(Groovysh.groovy:190)
        at org.apache.tinkerpop.gremlin.console.GremlinGroovysh.super$3$execute(GremlinGroovysh.groovy)
        at sun.reflect.GeneratedMethodAccessor14.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:93)
        at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:325)
        at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1215)
        at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.invokeMethodOnSuperN(ScriptBytecodeAdapter.java:132)
        at org.apache.tinkerpop.gremlin.console.GremlinGroovysh.execute(GremlinGroovysh.groovy:72)
        at org.codehaus.groovy.vmplugin.v7.IndyInterface.selectMethod(IndyInterface.java:232)
        at org.codehaus.groovy.tools.shell.Shell.leftShift(Shell.groovy:122)
        at org.codehaus.groovy.vmplugin.v7.IndyInterface.selectMethod(IndyInterface.java:232)
        at org.codehaus.groovy.tools.shell.ShellRunner.work(ShellRunner.groovy:95)
        at org.codehaus.groovy.tools.shell.InteractiveShellRunner.super$2$work(InteractiveShellRunner.groovy)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:93)
        at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:325)
        at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1215)
        at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.invokeMethodOnSuperN(ScriptBytecodeAdapter.java:132)
        at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.invokeMethodOnSuper0(ScriptBytecodeAdapter.java:152)
        at org.codehaus.groovy.tools.shell.InteractiveShellRunner.work(InteractiveShellRunner.groovy:124)
        at org.codehaus.groovy.vmplugin.v7.IndyInterface.selectMethod(IndyInterface.java:232)
        at org.codehaus.groovy.tools.shell.ShellRunner.run(ShellRunner.groovy:59)
        at org.codehaus.groovy.tools.shell.InteractiveShellRunner.super$2$run(InteractiveShellRunner.groovy)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:93)
        at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:325)
        at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1215)
        at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.invokeMethodOnSuperN(ScriptBytecodeAdapter.java:132)
        at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.invokeMethodOnSuper0(ScriptBytecodeAdapter.java:152)
        at org.codehaus.groovy.tools.shell.InteractiveShellRunner.run(InteractiveShellRunner.groovy:83)
        at org.codehaus.groovy.vmplugin.v7.IndyInterface.selectMethod(IndyInterface.java:232)
        at org.apache.tinkerpop.gremlin.console.Console.<init>(Console.groovy:152)
        at org.codehaus.groovy.vmplugin.v7.IndyInterface.selectMethod(IndyInterface.java:232)
        at org.apache.tinkerpop.gremlin.console.Console.main(Console.groovy:455)
Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.janusgraph.util.system.ConfigurationUtil.instantiate(ConfigurationUtil.java:58)
        ... 56 more
Caused by: org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available: []
        at org.elasticsearch.client.transport.TransportClientNodesService.ensureNodesAreAvailable(TransportClientNodesService.java:279)
        at org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:198)
        at org.elasticsearch.client.transport.support.InternalTransportClusterAdminClient.execute(InternalTransportClusterAdminClient.java:86)
        at org.elasticsearch.client.support.AbstractClusterAdminClient.health(AbstractClusterAdminClient.java:127)
        at org.elasticsearch.action.admin.cluster.health.ClusterHealthRequestBuilder.doExecute(ClusterHealthRequestBuilder.java:92)
        at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:91)
        at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:65)
        at org.janusgraph.diskstorage.es.ElasticSearchIndex.<init>(ElasticSearchIndex.java:215)
        ... 61 more


Any thoughts?


Re: index could no longer be used

jme...@...
 

Sorry! I found it's my own fault.
The property is a String type, and it is in a mixed index.

While this went wrong:
gremlin> g.E().has('contents','hdfs').values()
Could not find a suitable index to answer graph query and graph scans are disabled: [( contents = hdfs)]: EDGE

this one is OK:
gremlin> g.E().has('contents',textContains('hdfs')).count()
==>11

So, when a String property has a mixed index, the query should use textContains.
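
For reference, a hedged sketch of the distinction: a String key in a mixed index gets the TEXT mapping by default (tokenized, so textContains applies), while an explicit STRING mapping supports exact equality instead. The index name below is illustrative and assumes the backing index is called 'search':

// Mapping is org.janusgraph.core.schema.Mapping
mgmt = graph.openManagement()
contents = mgmt.getPropertyKey('contents')
// a separate mixed index with an exact-match (STRING) mapping on the same key
mgmt.buildIndex('messageExact', Edge.class).
     addKey(contents, Mapping.STRING.asParameter()).
     buildMixedIndex('search')
mgmt.commit()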


On Tuesday, May 2, 2017 at 11:46:13 PM UTC+8, Jason Plurad wrote:
FYI, the Titan mailing list is here.
What does your index definition look like?
Are you using JanusGraph, or is this a problem you can recreate on JanusGraph?

On Tuesday, May 2, 2017 at 11:38:37 AM UTC-4, Jhon Mernio wrote:
Hello:

I'm using Titan with HBase and Elasticsearch. All was OK at first, but yesterday, when I was searching as usual, it warned
Could not find a suitable index to answer graph query and graph scans are disabled: [( contents = hdfs)]: EDGE
like below:

gremlin> index=mgmt.getGraphIndex('message')
==>message
gremlin> index.getIndexStatus(mgmt.getPropertyKey('contents'))
==>ENABLED
gremlin> g.E().has('contents','hdfs').values()
Could not find a suitable index to answer graph query and graph scans are disabled: [( contents = hdfs)]: EDGE

The index is a mixed index using the Elasticsearch backend.
This is very strange, because the index does in fact exist, and before today all queries like this were OK.
And when data is inserted into the database, I can see the number of indexed documents grow via the Elasticsearch count API.

And the following query is OK:

gremlin> graph.indexQuery("message","e.contents: hdfs").edges()

Anyone have any idea about this problem?

Thanks!


Someone should do a talk on JanusGraph at the London Opensource Graph Technologies meetup

Haikal Pribadi <hai...@...>
 

Hi everyone,

The Opensource Graph Technologies meetup community in London has been growing nicely over the past few months! We (GRAKN.AI) have been organising it, and we've had cool talks from people at Apache Giraph, OrientDB, and Neo4j.

We've held two meetups, both with more than 100 signups (and about 50 attendees), so it seems the community continues to grow. We're planning the third meetup and wondering whether anyone from the JanusGraph community would like to give a talk at our London meetup on a project they're working on with JanusGraph.

Thanks, everyone! Would love to have JanusGraph's presence at the meetup! :)


Re: Who is using JanusGraph in production?

Misha Brukman <mbru...@...>
 

On Fri, Apr 7, 2017 at 4:03 AM, Jimmy <xuliuc...@...> wrote:
Lovely and promising project! I want to know if anyone is using JanusGraph in production at present?Thanks!



Re: index could no longer be used

Jason Plurad <plu...@...>
 

FYI, the Titan mailing list is here.
What does your index definition look like?
Are you using JanusGraph, or is this a problem you can recreate on JanusGraph?


On Tuesday, May 2, 2017 at 11:38:37 AM UTC-4, Jhon Mernio wrote:
Hello:

I'm using Titan with HBase and Elasticsearch. All was OK at first, but yesterday, when I was searching as usual, it warned
Could not find a suitable index to answer graph query and graph scans are disabled: [( contents = hdfs)]: EDGE
like below:

gremlin> index=mgmt.getGraphIndex('message')
==>message
gremlin> index.getIndexStatus(mgmt.getPropertyKey('contents'))
==>ENABLED
gremlin> g.E().has('contents','hdfs').values()
Could not find a suitable index to answer graph query and graph scans are disabled: [( contents = hdfs)]: EDGE

The index is a mixed index using the Elasticsearch backend.
This is very strange, because the index does in fact exist, and before today all queries like this were OK.
And when data is inserted into the database, I can see the number of indexed documents grow via the Elasticsearch count API.

And the following query is OK:

gremlin> graph.indexQuery("message","e.contents: hdfs").edges()

Anyone have any idea about this problem?

Thanks!


Re: possible id conflicts when migrating from Titan to Janus

Jason Plurad <plu...@...>
 

This issue has been reported https://github.com/JanusGraph/janusgraph/issues/228
Please follow the discussion there.


On Tuesday, May 2, 2017 at 11:39:42 AM UTC-4, chris lu wrote:
We have a Titan instance running on Cassandra.

When migrating from Titan to Janus, a new table "titan.janusgraph_ids" is generated. Does this table respect the existing "titan.titan_ids" in terms of id generation? Will it cause an existing entity's id to be reused for a new entity?

Chris


Re: TimedOutException

Jason Plurad <plu...@...>
 

cqlsh communicates using the CQL interface, but JanusGraph uses Thrift. I'd suggest double-checking that the RPC settings (rpc_address, rpc_port) in cassandra.yaml match what you have in cassandra-es.properties. If you're using rpc_interface rather than rpc_address in cassandra.yaml, verify the listening address and port with ss.
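
For reference, the Thrift-related lines in cassandra-es.properties look roughly like this (values are illustrative; they must line up with rpc_address/rpc_port in cassandra.yaml, and 9160 is the Thrift default):

storage.backend=cassandrathrift
storage.hostname=127.0.0.1
storage.port=9160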

-- Jason


On Friday, April 28, 2017 at 11:23:30 AM UTC-4, Gwiz wrote:
When I try to open a graph, I constantly get TimedOutExceptions. My Cassandra cluster is fine, and I was able to use cqlsh with no issues. Are there any Thrift settings that need to be adjusted?

graph = JanusGraphFactory.open('cassandra-es.properties')

Caused by: TimedOutException()
        at org.apache.cassandra.thrift.Cassandra$multiget_slice_result$multiget_slice_resultStandardScheme.read(Cassandra.java:14696)
        at org.apache.cassandra.thrift.Cassandra$multiget_slice_result$multiget_slice_resultStandardScheme.read(Cassandra.java:14633)
        at org.apache.cassandra.thrift.Cassandra$multiget_slice_result.read(Cassandra.java:14559)
        at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
        at org.apache.cassandra.thrift.Cassandra$Client.recv_multiget_slice(Cassandra.java:741)
        at org.apache.cassandra.thrift.Cassandra$Client.multiget_slice(Cassandra.java:725)
        at org.janusgraph.diskstorage.cassandra.thrift.CassandraThriftKeyColumnValueStore.getNamesSlice(CassandraThriftKeyColumnValueStore.java:143)
        ... 44 more


If I try a few times, it occasionally succeeds at random.

Thanks


Re: Production Readiness

Jason Plurad <plu...@...>
 

Hi Collin,

I'm not aware of what "level of testing and rigor that went into the 1.0 release of Titan". The version is just a number, and I wouldn't get too hung up on it if you've tested it in your own environment and you're satisfied with its capabilities. I know of several projects that put pre-1.0 versions of Titan into production.

As an open source project, I think that JanusGraph's goal is to encourage everybody in the community to get involved testing releases while they are being developed. Nobody knows your environment and your needs more than you do! There are directions in the source code repository on how to run the test cases. Release candidates are announced and voted on via the dev mailing list, and anybody in the community is welcome to provide feedback. And feedback is exactly what JanusGraph needs from the community to continue evolving.

That being said, the first release of JanusGraph is largely the same code as Titan 1.0. There was work done renaming classes and packages, plus several dependency version uplifts, namely TinkerPop, HBase, and BerkeleyDB. If you're looking to migrate an existing Titan 1.0 deployment, you should be aware of #228, which already has a fix from Alex. I'd look for a fix release on the 0.1 branch to arrive sooner than the 0.2 release (or whatever the next release is numbered).

-- Jason


On Wednesday, April 26, 2017 at 7:24:14 PM UTC-4, Collin Scangarella wrote:
Hey all,

We're pretty excited about Janus over here and are considering switching over in the near future (I'm actually trying out OLAP with Hadoop 2 as we speak). I'm normally quite hesitant to put very early releases into production (I never would have done that with the 0.1.0 of Titan), but since Janus is a fork of Titan, I have a feeling that it'll be much more stable than most software at this point in the lifecycle. I want to reach out to the community to see if Janus's 0.1.0 release underwent the level of testing and rigor that went into the 1.0 release of Titan. Does anyone have any insight into this?

Thanks,
Collin


possible id conflicts when migrating from Titan to Janus

chr...@...
 

We have a Titan instance running on Cassandra.

When migrating from Titan to Janus, a new table "titan.janusgraph_ids" is generated. Does this table respect the existing "titan.titan_ids" in terms of id generation? Will it cause an existing entity's id to be reused for a new entity?

Chris
