Failed to load complete edge data of Graph500 into JanusGraph 0.5.3 with Cassandra CQL as storage backend

shepherdkingqsp@...
 

Hi there,

I am new to JanusGraph. I am having problems loading data into JanusGraph with Cassandra CQL as the storage backend.

I tried to load Graph500 into JanusGraph, planning to run a benchmark on it, and found that the edges were not loaded completely: 67107183 edges were loaded while 67108864 were expected. (All vertices were loaded.)

The code and config I used are posted below.

The code I used is from a benchmark by TigerGraph:
- load vertex: https://github.com/gaolk/graph-database-benchmark/blob/master/benchmark/janusgraph/multiThreadVertexImporter.java
- load edge: https://github.com/gaolk/graph-database-benchmark/blob/master/benchmark/janusgraph/multiThreadEdgeImporter.java

The config I used is conf/janusgraph-cql.properties from the JanusGraph 0.5.3 full distribution (https://github.com/JanusGraph/janusgraph/releases/download/v0.5.3/janusgraph-full-0.5.3.zip):
cache.db-cache-clean-wait = 20
cache.db-cache-size = 0.5
cache.db-cache-time = 180000
cache.db-cache = true
gremlin.graph=org.janusgraph.core.JanusGraphFactory
storage.backend=cql
storage.batch-loading=true
storage.cql.keyspace=janusgraph 
storage.hostname=127.0.0.1
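
For reference, this is roughly the programmatic equivalent of the configuration above, i.e. how the loader could open the graph (a minimal sketch only; it simply mirrors the properties file and is not taken from the benchmark code itself):

import org.janusgraph.core.JanusGraph;
import org.janusgraph.core.JanusGraphFactory;

public class OpenGraph {
    // Mirrors conf/janusgraph-cql.properties shown above.
    public static JanusGraph open() {
        return JanusGraphFactory.build()
                .set("storage.backend", "cql")
                .set("storage.hostname", "127.0.0.1")
                .set("storage.cql.keyspace", "janusgraph")
                .set("storage.batch-loading", true)
                .set("cache.db-cache", true)
                .set("cache.db-cache-size", 0.5)
                .set("cache.db-cache-time", 180000L)
                .set("cache.db-cache-clean-wait", 20)
                .open();
    }
}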
I got the following exceptions while loading the data.
Exception 1:
Caused by: java.util.concurrent.ExecutionException: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /127.0.0.1:9042 (com.datastax.driver.core.exceptions.OperationTimedOutException: [/127.0.0.1:9042] Timed out waiting for server response))
        at com.google.common.util.concurrent.AbstractFuture$Sync.getValue(AbstractFuture.java:299)
        at com.google.common.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:286)
        at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:116)
        at io.vavr.control.Try.of(Try.java:62)
        at io.vavr.concurrent.FutureImpl.lambda$run$2(FutureImpl.java:199)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
Exception 2:
Caused by: org.janusgraph.diskstorage.TemporaryBackendException: Could not successfully complete backend operation due to repeated temporary exceptions after PT10S
        at org.janusgraph.diskstorage.util.BackendOperation.executeDirect(BackendOperation.java:100)
        at org.janusgraph.diskstorage.util.BackendOperation.execute(BackendOperation.java:54)
        at org.janusgraph.diskstorage.BackendTransaction.executeRead(BackendTransaction.java:469)
        at org.janusgraph.diskstorage.BackendTransaction.indexQuery(BackendTransaction.java:395)
        at org.janusgraph.graphdb.query.graph.MultiKeySliceQuery.execute(MultiKeySliceQuery.java:52)
        at org.janusgraph.graphdb.database.IndexSerializer.query(IndexSerializer.java:515)
        at org.janusgraph.graphdb.util.SubqueryIterator.<init>(SubqueryIterator.java:66)
        ... 20 more
Caused by: org.janusgraph.diskstorage.TemporaryBackendException: Temporary failure in storage backend
        at io.vavr.API$Match$Case0.apply(API.java:3174)
        at io.vavr.API$Match.of(API.java:3137)
        at org.janusgraph.diskstorage.cql.CQLKeyColumnValueStore.lambda$static$0(CQLKeyColumnValueStore.java:123)
        at io.vavr.control.Try.getOrElseThrow(Try.java:671)
        at org.janusgraph.diskstorage.cql.CQLKeyColumnValueStore.getSlice(CQLKeyColumnValueStore.java:290)
        at org.janusgraph.diskstorage.keycolumnvalue.KCVSProxy.getSlice(KCVSProxy.java:76)
        at org.janusgraph.diskstorage.keycolumnvalue.cache.ExpirationKCVSCache.lambda$getSlice$1(ExpirationKCVSCache.java:91)
        at com.google.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4742)
        at com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3527)
        at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2319)

I have searched Google for a solution but found little that helped. Could somebody help?


Best Regards,
Shipeng Qi


Re: Not able to enable Write-ahead logs using tx.log-tx for existing JanusGraph setup

Radhika Kundam
 

Hi Boxuan,

For an existing JanusGraph setup, I am updating the tx.log-tx configuration by setting the management system property as described in https://docs.janusgraph.org/basics/configuration/#global-configuration, and I can see the configuration updated properly in JanusGraphManagement:

managementSystem.get("tx.log-tx"); => prints false
managementSystem.set("tx.log-tx", true);
managementSystem.commit();
managementSystem.get("tx.log-tx"); => prints true

But this change is not reflected in logTransactions in GraphDatabaseConfiguration:preLoadConfiguration:

graph.getConfiguration().hasLogTransactions() => prints false

During transaction recovery, StandardTransactionLogProcessor checks graph.getConfiguration().hasLogTransactions(), which does not pick up the latest 'tx.log-tx' value updated in the ManagementSystem.
To get the change reflected, I had to restart the cluster twice.
Also, since it is a GLOBAL property, I am not allowed to override it using graph.configuration(); the only available option is to update it through the ManagementSystem, which does not update logTransactions.
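
For reference, this is roughly how I check it after the restarts (a minimal sketch of my check; the properties path is a placeholder, and the cast to StandardJanusGraph is just how I get at getConfiguration()):

import org.janusgraph.core.JanusGraph;
import org.janusgraph.core.JanusGraphFactory;
import org.janusgraph.core.schema.JanusGraphManagement;
import org.janusgraph.graphdb.database.StandardJanusGraph;

public class LogTxCheck {
    public static void main(String[] args) {
        JanusGraph graph = JanusGraphFactory.open("conf/janusgraph-cql.properties");

        // Global configuration as seen by the management system: shows the update.
        JanusGraphManagement mgmt = graph.openManagement();
        System.out.println("tx.log-tx = " + mgmt.get("tx.log-tx"));   // true after the update
        mgmt.rollback();

        // Configuration loaded when the graph was opened: still reports false
        // until the cluster has been restarted.
        StandardJanusGraph sg = (StandardJanusGraph) graph;
        System.out.println("hasLogTransactions = " + sg.getConfiguration().hasLogTransactions());

        graph.close();
    }
}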

I would really appreciate your help on this.

Thanks,
Radhika


Re: Wait for the mixed index backend

toom@...
 

Thank you, this is exactly what I was looking for.


Re: Wait for the mixed index backend

Boxuan Li
 

Hi Toom,

Do you want to ALWAYS make sure the vertex is indexed? If so, and if you happen to use Elasticsearch, you can set
index.[X].elasticsearch.bulk-refresh=wait_for
See https://www.elastic.co/guide/en/elasticsearch/reference/master/docs-refresh.html for details.
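
For example, assuming your mixed index backend is registered under the name "search" (replace it with your own index name, i.e. the [X] part), the setting could be applied like this when opening the graph (a minimal sketch, not a complete configuration):

import org.janusgraph.core.JanusGraph;
import org.janusgraph.core.JanusGraphFactory;

public class WaitForRefresh {
    public static void main(String[] args) {
        JanusGraph graph = JanusGraphFactory.build()
                .set("storage.backend", "cql")
                .set("storage.hostname", "127.0.0.1")
                .set("index.search.backend", "elasticsearch")
                .set("index.search.hostname", "127.0.0.1")
                // Ask Elasticsearch to wait for a refresh before acknowledging bulk
                // writes, so committed data should be visible to index queries right away.
                .set("index.search.elasticsearch.bulk-refresh", "wait_for")
                .open();
        graph.close();
    }
}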

Best,
Boxuan


Wait for the mixed index backend

toom@...
 

Hello,
 
A vertex that has just been created is not immediately available in the mixed index (documented here [1]).
I'm looking for a way to make sure the vertex is indexed, by waiting for the mixed index backend. I think the easiest way is to request the vertex id using a direct index query:
  graph.indexQuery("myIndex", "v.id:" + vertex.id())

But I didn't find a way to do that. Do you think this feature could be added? Maybe I can make a PR.
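
To make it more concrete, a polling loop along these lines is what I have in mind (only a sketch of the idea, and it assumes the vertex id were actually queryable through the direct index query, which is exactly the part I could not get working):

import org.janusgraph.core.JanusGraph;

public class IndexWait {
    // Poll the direct index query until the freshly committed vertex becomes
    // visible in the mixed index, or give up after a timeout.
    static boolean waitForIndex(JanusGraph graph, String indexName, Object vertexId,
                                long timeoutMillis) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            boolean found = graph.indexQuery(indexName, "v.id:" + vertexId)
                    .vertices().iterator().hasNext();
            if (found) {
                return true;
            }
            Thread.sleep(200); // small back-off between polls
        }
        return false;
    }
}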
 
Regards,
 
Toom.
 
[1] https://github.com/JanusGraph/janusgraph/blob/v0.5.3/docs/index-backend/direct-index-query.md#mixed-index-availability-delay


graphml properties of properties

Laura Morales <lauretas@...>
 

Janus supports "properties of properties", i.e. properties defined on other properties. How is this represented in graphml? Should I use nested elements like this

<node>
<data key="foo">
bar
<data key="xxx">yyy</data>
</data>
</node>

or should I use attributes like this?

<node>
<data key="foo" xxx="yyy">bar</data>
</node>


Re: Release vs full release?

hadoopmarc@...
 

janusgraph-full includes Gremlin Server


Release vs full release?

Laura Morales <lauretas@...>
 

What is the difference between janusgraph-<version>.zip and janusgraph-full-<version>.zip?


Re: graphml <edge> inside <node>

hadoopmarc@...
 

Hi Laura,

Your assumption about the ordering of nodes and edges is correct; see the warning at https://tinkerpop.apache.org/docs/current/reference/#graphml

For your use case it seems that you can simply edit the nodes and edges out of order and, every now and then, save an ordered version of the graph using the sort function of your editor. If you do a lot of editing, it is probably more convenient to write a custom CSV import script for networkx and then save the result as graphml.

Best wishes,    Marc


Re: org.janusgraph.diskstorage.PermanentBackendException: Read 1 locks with our rid but mismatched timestamps

hadoopmarc@...
 

Hi Ronnie
No idea what is going on here, but just being pragmatic:

The bin/janusgraph.sh script starts Cassandra, Elasticsearch and Gremlin Server using the conf/janusgraph-cql-es.properties graph configuration. As you can check in the janusgraph distribution files, this config does not specify the graph.timestamps property (so it defaults to MICRO); see the sketch after the list below for a way to read back the configured value.

So some things to try:
  • does your Cassandra cluster have custom settings for the timestamp type? Then you hit an existing janusgraph issue.
  • does your system have a locale different from english-international, and does changing the locale resolve the issue? Then you discovered a janusgraph issue!
  • does removing the graph.timestamps=MICRO line help? Note that you can only try this on a new graph, because this setting is FIXED.
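
To read back which timestamp provider an existing graph was created with, something along these lines should work (just a sketch, not tested against your setup; adjust the properties path):

import org.janusgraph.core.JanusGraph;
import org.janusgraph.core.JanusGraphFactory;
import org.janusgraph.core.schema.JanusGraphManagement;

public class TimestampCheck {
    public static void main(String[] args) {
        JanusGraph graph = JanusGraphFactory.open("conf/janusgraph-cql-es.properties");
        JanusGraphManagement mgmt = graph.openManagement();
        // graph.timestamps is a FIXED global option, so this should report the
        // value the graph was created with (MICRO unless set explicitly).
        System.out.println("graph.timestamps = " + mgmt.get("graph.timestamps"));
        mgmt.rollback();
        graph.close();
    }
}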
Best wishes,    Marc


graphml <edge> inside <node>

Laura Morales <lauretas@...>
 

Sorry, I'm asking on this list because I can't find this information anywhere else.
Is it possible to have an <edge> element nested inside a <node> element with graphml? And will Janus be able to read it correctly? Basically, instead of

<node id="1"></node>
<node id="2"></node>
<edge source="1" target="2"></edge>

I would like to use

<node id="1">
<edge target="2"></edge>
</node>
<node id="2"></node>

This would be very useful to me because I have a file (in graphml format) that I'm editing manually, and nesting edges will help me keep all the information "bundled" within a <node>; it would also reduce the verbosity by a lot. Unfortunately I have a feeling that this is not defined by the graphml spec, but I wonder if Janus can parse it?


Re: Need info regarding transaction recovery completion

Radhika Kundam
 

Thank you Boxuan for the response. I will create a feature request.

Regards,
Radhika


Re: Need info regarding transaction recovery completion

Boxuan Li
 

Hi Radhika,

Unfortunately, there is no such API. If you are willing to dive into the JanusGraph source code, you can modify the StandardTransactionLogProcessor::fixSecondaryFailure method and build JanusGraph yourself.

You are also welcome to create a feature request on GitHub issues. Probably we should allow users to register a callback method when recovery is done.

Best,
Boxuan



Need info regarding transaction recovery completion

Radhika Kundam
 

Hi Team,

I am using JanusGraphFactory.startTransactionRecovery to recover secondary failure entries.
We need to perform some other action once recovery is completed, but I couldn't find any API to check the status of the recovery.
Is there any way to know when the recovery of all the failure entries has completed?
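
For context, this is roughly how I start the recovery (a minimal sketch of my usage; the properties path and the start time are placeholders):

import java.time.Instant;
import java.time.temporal.ChronoUnit;

import org.janusgraph.core.JanusGraph;
import org.janusgraph.core.JanusGraphFactory;

public class RecoveryRunner {
    public static void main(String[] args) {
        JanusGraph graph = JanusGraphFactory.open("conf/janusgraph-cql.properties");

        // Replay the transaction write-ahead log from a chosen start time.
        // This runs in the background, and I have found no API that tells me
        // when all failure entries have been processed.
        JanusGraphFactory.startTransactionRecovery(
                graph, Instant.now().minus(1, ChronoUnit.HOURS));

        // ...this is where we would like to trigger our follow-up action,
        // once the recovery has completed...
    }
}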

I would appreciate your help.

Thanks,
Radhika


org.janusgraph.diskstorage.PermanentBackendException: Read 1 locks with our rid but mismatched timestamps

Ronnie
 

Hi,

Environment
- JanusGraph 0.5.3 on JDK 1.8
- Backend: Cassandra 3.11.3 running on JDK 1.8

Warning and error during first-time server startup:
2021-08-17T22:26:20,861 - WARN  [main:o.j.d.l.c.ConsistentKeyLocker@510] - Skipping outdated lock on KeyColumn [k=0x 16-165-160-103-105- 30- 71-114- 97-112-104- 95- 78- 97-109-101- 95- 73-110-100-101-248, c=0x  0] with our rid ( 48- 97- 55- 50- 97- 97- 57- 57- 49- 56- 57- 56- 57- 45-115-104- 97-114-101-100-106- 97-110-117-115-103-114- 97-112-104- 48- 49- 45-112- 50- 55- 45-101-110-103- 45-105-110- 48- 51- 45-113-117- 97-108-121-115- 45- 99-111-109- 49) but mismatched timestamp (actual ts 2021-08-17T22:26:20.755981Z, expected ts 2021-08-17T22:26:20.755981926Z)
2021-08-17T22:26:20,863 - ERROR [main:o.j.g.d.StandardJanusGraph@724] - Could not commit transaction [1] due to storage exception in system-commit
Caused by: org.janusgraph.diskstorage.PermanentBackendException: Read 1 locks with our rid  48- 97- 55- 50- 97- 97- 57- 57- 49- 56- 57- 56- 57- 45-115-104- 97-114-101-100-106- 97-110-117-115-103-114- 97-112-104- 48- 49- 45-112- 50- 55- 45-101-110-103- 45-105-110- 48- 51- 45-113-117- 97-108-121-115- 45- 99-111-109- 49 but mismatched timestamps; no lock column contained our timestamp (2021-08-17T22:26:20.755981926Z)
at org.janusgraph.diskstorage.locking.consistentkey.ConsistentKeyLocker.checkSeniority(ConsistentKeyLocker.java:542)
at org.janusgraph.diskstorage.locking.consistentkey.ConsistentKeyLocker.checkSingleLock(ConsistentKeyLocker.java:468)
at org.janusgraph.diskstorage.locking.consistentkey.ConsistentKeyLocker.checkSingleLock(ConsistentKeyLocker.java:118)
at org.janusgraph.diskstorage.locking.AbstractLocker.checkLocks(AbstractLocker.java:351)
... 27 more
2021-08-17T22:26:20,864 - ERROR [main:o.a.t.g.s.u.ServerGremlinExecutor@87] - Could not invoke constructor on class org.janusgraph.graphdb.management.JanusGraphManager (defined by the 'graphManager' setting) with one argument of class Settings
Graph configuration:
gremlin.graph=org.janusgraph.core.ConfiguredGraphFactory
graph.graphname=ConfigurationManagementGraph
graph.timestamps=MICRO
storage.backend=cql
storage.hostname=10.114.171.91,10.114.171.92,10.114.171.93
storage.cql.keyspace= sharedjanusgraph
storage.read-time=50000
cache.db-cache = true
cache.db-cache-clean-wait = 20
cache.db-cache-time = 180000
cache.db-cache-size = 0.4
tx.log-tx=true
tx.max-commit-time=15000
metrics.enabled=False
metrics.jmx.enabled=False
cluster.max-partitions=32
Note: I explicitly set graph.timestamps=MICRO; when setting this to NANO, as suggested here: https://stackoverflow.com/questions/58916854/janusgraph-janusgraphexception-could-not-commit-transaction-due-to-exception-dur, I get the following error:
java.lang.IllegalArgumentException: Timestamp overflow detected: 2021-08-17T23:20:11.611614212Z
at org.janusgraph.diskstorage.log.kcvs.KCVSLog.getTimeSlice(KCVSLog.java:330)
at org.janusgraph.diskstorage.log.kcvs.KCVSLog.add(KCVSLog.java:418)
at org.janusgraph.diskstorage.log.kcvs.KCVSLog.add(KCVSLog.java:394)
at org.janusgraph.diskstorage.log.kcvs.KCVSLog.add(KCVSLog.java:377)
at org.janusgraph.graphdb.database.StandardJanusGraph.commit(StandardJanusGraph.java:690)
at org.janusgraph.graphdb.transaction.StandardJanusGraphTx.commit(StandardJanusGraphTx.java:1438)
... 14 more
Any pointers on why this time resolution mismatch is happening?

Thanks,
Ronnie


Re: Janusgraph 0.6.0

Oleksandr Porunov
 

Thanks. I opened the PR here: https://github.com/JanusGraph/janusgraph/pull/2760


Re: Janusgraph 0.6.0

toom@...
 

I confirm that adding lucene-backward-codecs-8.9.0.jar to the lib folder solves my problem.

Toom.


Re: Janusgraph 0.6.0

Oleksandr Porunov
 

Hi, we upgraded Lucene to version 8.9.0. Do you think the problem would be resolved if we included `lucene-backward-codecs.jar` in the classpath?


Janusgraph 0.6.0

toom@...
 

Hi,

I'm testing the new pre-release of JanusGraph (from https://github.com/JanusGraph/janusgraph/releases/download/v0.6.0/janusgraph-0.6.0.zip) with a Berkeley/Lucene database created with JG 0.5.3, and all index calls fail (full stack trace below).
Is there a migration process, maybe a reindex? The error message contains "Could not load codec 'Lucene70'. Did you forget to add lucene-backward-codecs.jar?". Do you plan to include this jar in the JanusGraph distribution?

Best regards,

Toom.

-- 
org.janusgraph.core.JanusGraphException: Could not call index
        at org.janusgraph.graphdb.util.SubqueryIterator.<init>(SubqueryIterator.java:67)
        at org.janusgraph.graphdb.transaction.StandardJanusGraphTx$3.execute(StandardJanusGraphTx.java:1430)
        at org.janusgraph.graphdb.transaction.StandardJanusGraphTx$3.execute(StandardJanusGraphTx.java:1322)
        at org.janusgraph.graphdb.query.QueryProcessor$LimitAdjustingIterator.getNewIterator(QueryProcessor.java:206)
        at org.janusgraph.graphdb.query.LimitAdjustingIterator.hasNext(LimitAdjustingIterator.java:69)
        at org.janusgraph.graphdb.query.ResultSetIterator.nextInternal(ResultSetIterator.java:55)
        at org.janusgraph.graphdb.query.ResultSetIterator.<init>(ResultSetIterator.java:45)
        at org.janusgraph.graphdb.query.QueryProcessor.iterator(QueryProcessor.java:68)
        at org.janusgraph.graphdb.query.graph.GraphCentricQueryBuilder.lambda$iterables$1(GraphCentricQueryBuilder.java:239)
        at org.janusgraph.graphdb.tinkerpop.optimize.step.JanusGraphStep.lambda$executeGraphCentricQuery$2(JanusGraphStep.java:202)
        at org.janusgraph.graphdb.util.ProfiledIterator.<init>(ProfiledIterator.java:36)
        at org.janusgraph.graphdb.tinkerpop.optimize.step.JanusGraphStep.executeGraphCentricQuery(JanusGraphStep.java:202)
        at org.janusgraph.graphdb.tinkerpop.optimize.step.JanusGraphStep.lambda$null$0(JanusGraphStep.java:105)
        at java.lang.Iterable.forEach(Iterable.java:75)
        at org.janusgraph.graphdb.tinkerpop.optimize.step.JanusGraphStep.lambda$new$1(JanusGraphStep.java:105)
        at org.apache.tinkerpop.gremlin.process.traversal.step.map.GraphStep.processNextStart(GraphStep.java:157)
        at org.apache.tinkerpop.gremlin.process.traversal.step.util.AbstractStep.hasNext(AbstractStep.java:150)
        at org.apache.tinkerpop.gremlin.process.traversal.util.DefaultTraversal.hasNext(DefaultTraversal.java:222)
        at java_util_Iterator$hasNext.call(Unknown Source)
        at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:47)
        at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:115)
        at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:119)
        at org.apache.tinkerpop.gremlin.console.Console$_closure3.doCall(Console.groovy:257)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:101)
        at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:323)
        at org.codehaus.groovy.runtime.metaclass.ClosureMetaClass.invokeMethod(ClosureMetaClass.java:263)
        at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1041)
        at org.codehaus.groovy.runtime.callsite.PogoMetaClassSite.call(PogoMetaClassSite.java:37)
        at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:47)
        at org.codehaus.groovy.runtime.callsite.PogoMetaClassSite.call(PogoMetaClassSite.java:52)
        at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:127)
        at org.codehaus.groovy.tools.shell.Groovysh.setLastResult(Groovysh.groovy:463)
        at sun.reflect.GeneratedMethodAccessor29.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.codehaus.groovy.runtime.callsite.PlainObjectMetaMethodSite.doInvoke(PlainObjectMetaMethodSite.java:43)
        at org.codehaus.groovy.runtime.callsite.PogoMetaMethodSite$PogoCachedMethodSiteNoUnwrapNoCoerce.invoke(PogoMetaMethodSite.java:190)
        at org.codehaus.groovy.runtime.callsite.PogoMetaMethodSite.callCurrent(PogoMetaMethodSite.java:58)
        at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCallCurrent(CallSiteArray.java:51)
        at org.codehaus.groovy.runtime.callsite.PogoMetaMethodSite.callCurrent(PogoMetaMethodSite.java:63)
        at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callCurrent(AbstractCallSite.java:168)
        at org.codehaus.groovy.tools.shell.Groovysh.execute(Groovysh.groovy:201)
        at org.apache.tinkerpop.gremlin.console.GremlinGroovysh.super$3$execute(GremlinGroovysh.groovy)
        at sun.reflect.GeneratedMethodAccessor24.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:101)
        at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:323)
        at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1217)
        at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.invokeMethodOnSuperN(ScriptBytecodeAdapter.java:144)
        at org.apache.tinkerpop.gremlin.console.GremlinGroovysh.execute(GremlinGroovysh.groovy:83)
        at org.codehaus.groovy.tools.shell.Shell.leftShift(Shell.groovy:120)
        at org.codehaus.groovy.tools.shell.Shell$leftShift$2.call(Unknown Source)
        at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:47)
        at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:115)
        at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:127)
        at org.codehaus.groovy.tools.shell.ShellRunner.work(ShellRunner.groovy:93)
        at org.codehaus.groovy.tools.shell.InteractiveShellRunner.super$2$work(InteractiveShellRunner.groovy)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:101)
        at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:323)
        at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1217)
        at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.invokeMethodOnSuperN(ScriptBytecodeAdapter.java:144)
        at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.invokeMethodOnSuper0(ScriptBytecodeAdapter.java:164)
        at org.codehaus.groovy.tools.shell.InteractiveShellRunner.work(InteractiveShellRunner.groovy:138)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.codehaus.groovy.runtime.callsite.PlainObjectMetaMethodSite.doInvoke(PlainObjectMetaMethodSite.java:43)
        at org.codehaus.groovy.runtime.callsite.PogoMetaMethodSite$PogoCachedMethodSiteNoUnwrapNoCoerce.invoke(PogoMetaMethodSite.java:190)
        at org.codehaus.groovy.runtime.callsite.PogoMetaMethodSite.callCurrent(PogoMetaMethodSite.java:58)
        at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCallCurrent(CallSiteArray.java:51)
        at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callCurrent(AbstractCallSite.java:156)
        at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callCurrent(AbstractCallSite.java:160)
        at org.codehaus.groovy.tools.shell.ShellRunner.run(ShellRunner.groovy:57)
        at org.codehaus.groovy.tools.shell.InteractiveShellRunner.super$2$run(InteractiveShellRunner.groovy)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:101)
        at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:323)
        at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1217)
        at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.invokeMethodOnSuperN(ScriptBytecodeAdapter.java:144)
        at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.invokeMethodOnSuper0(ScriptBytecodeAdapter.java:164)
        at org.codehaus.groovy.tools.shell.InteractiveShellRunner.run(InteractiveShellRunner.groovy:97)
        at java_lang_Runnable$run.call(Unknown Source)
        at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:47)
        at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:115)
        at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:119)
        at org.apache.tinkerpop.gremlin.console.Console.<init>(Console.groovy:170)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.codehaus.groovy.reflection.CachedConstructor.invoke(CachedConstructor.java:80)
        at org.codehaus.groovy.runtime.callsite.ConstructorSite$ConstructorSiteNoUnwrapNoCoerce.callConstructor(ConstructorSite.java:105)
        at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCallConstructor(CallSiteArray.java:59)
        at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callConstructor(AbstractCallSite.java:237)
        at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callConstructor(AbstractCallSite.java:265)
        at org.apache.tinkerpop.gremlin.console.Console.main(Console.groovy:524)
Caused by: org.janusgraph.core.JanusGraphException: Could not execute operation due to backend exception
        at org.janusgraph.diskstorage.util.BackendOperation.execute(BackendOperation.java:54)
        at org.janusgraph.diskstorage.BackendTransaction.executeRead(BackendTransaction.java:488)
        at org.janusgraph.diskstorage.BackendTransaction.indexQuery(BackendTransaction.java:416)
        at org.janusgraph.graphdb.database.IndexSerializer.query(IndexSerializer.java:596)
        at org.janusgraph.graphdb.util.SubqueryIterator.<init>(SubqueryIterator.java:65)
        ... 108 more
Caused by: org.janusgraph.diskstorage.PermanentBackendException: Permanent exception while executing backend operation IndexQuery
        at org.janusgraph.diskstorage.util.BackendOperation.executeDirect(BackendOperation.java:79)
        at org.janusgraph.diskstorage.util.BackendOperation.execute(BackendOperation.java:52)
        ... 112 more
Caused by: java.lang.IllegalArgumentException: Could not load codec 'Lucene70'.  Did you forget to add lucene-backward-codecs.jar?
        at org.apache.lucene.index.SegmentInfos.readCodec(SegmentInfos.java:449)
        at org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:356)
        at org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:291)
        at org.apache.lucene.index.StandardDirectoryReader$1.doBody(StandardDirectoryReader.java:64)
        at org.apache.lucene.index.StandardDirectoryReader$1.doBody(StandardDirectoryReader.java:61)
        at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:720)
        at org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:84)
        at org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:64)
        at org.janusgraph.diskstorage.lucene.LuceneIndex$Transaction.getSearcher(LuceneIndex.java:1105)
        at org.janusgraph.diskstorage.lucene.LuceneIndex$Transaction.access$000(LuceneIndex.java:1090)
        at org.janusgraph.diskstorage.lucene.LuceneIndex.query(LuceneIndex.java:596)
        at org.janusgraph.diskstorage.indexing.IndexTransaction.queryStream(IndexTransaction.java:110)
        at org.janusgraph.diskstorage.BackendTransaction$6.call(BackendTransaction.java:419)
        at org.janusgraph.diskstorage.BackendTransaction$6.call(BackendTransaction.java:416)
        at org.janusgraph.diskstorage.util.BackendOperation.executeDirect(BackendOperation.java:66)
        ... 113 more
        Suppressed: org.apache.lucene.index.CorruptIndexException: checksum passed (47229baa). possibly transient resource issue, or a Lucene or JVM bug (resource=BufferedChecksumIndexInput(MMapIndexInput(path=".../segments_w")))
                at org.apache.lucene.codecs.CodecUtil.checkFooter(CodecUtil.java:466)
                at org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:434)
                ... 126 more
Caused by: java.lang.IllegalArgumentException: An SPI class of type org.apache.lucene.codecs.Codec with name 'Lucene70' does not exist.  You need to add the corresponding JAR file supporting this SPI to your classpath.  The current classpath supports the following names: [Lucene87]
        at org.apache.lucene.util.NamedSPILoader.lookup(NamedSPILoader.java:116)
        at org.apache.lucene.codecs.Codec.forName(Codec.java:116)
        at org.apache.lucene.index.SegmentInfos.readCodec(SegmentInfos.java:445)
        ... 127 more
 


Re: What are the implications of using Object.class property type?

hadoopmarc@...
 

Hi Laura,

One code example says more than 1000 words:

gremlin> graph = TinkerFactory.createModern()
==>tinkergraph[vertices:6 edges:6]
gremlin> g=graph.traversal()
==>graphtraversalsource[tinkergraph[vertices:6 edges:6], standard]
gremlin> g.addV().property("lang", 45)
==>v[13]
gremlin> g.V().elementMap()
==>[id:1,label:person,name:marko,age:29]
==>[id:2,label:person,name:vadas,age:27]
==>[id:3,label:software,name:lop,lang:java]
==>[id:4,label:person,name:josh,age:32]
==>[id:5,label:software,name:ripple,lang:java]
==>[id:6,label:person,name:peter,age:35]
==>[id:13,label:vertex,lang:45]
gremlin> g.V().values("lang")
==>java
==>java
==>45
gremlin> g.V().values("lang").group().by(map{it->it.get().getClass()}).by(count())
==>[class java.lang.String:2,class java.lang.Integer:1]
gremlin>

So, this query shows you all the data types that occur for a specific property in the graph.
Strictly speaking, Gremlin OLAP queries are queries using withComputer(). I tend to use the term a bit more loosely, to include analytical queries that require a full table scan.
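
For completeness, the "strict" OLAP form just means building the traversal source with withComputer(), e.g. (a small sketch against the same TinkerPop modern graph):

import org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversalSource;
import org.apache.tinkerpop.gremlin.tinkergraph.structure.TinkerFactory;
import org.apache.tinkerpop.gremlin.tinkergraph.structure.TinkerGraph;

public class OlapCount {
    public static void main(String[] args) {
        TinkerGraph graph = TinkerFactory.createModern();
        // Backed by a GraphComputer, so the query runs as a scan over the whole graph.
        GraphTraversalSource g = graph.traversal().withComputer();
        System.out.println(g.V().count().next());
    }
}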

Best wishes,

Marc
