
Re: Query requires iterating over all vertices

Boxuan Li
 

Try rewriting your query as:

`g.V().has("country", P.neq(null)).values("country").dedup().order().limit(10)`

FYI starting from the upcoming 0.6.0 release, has("country") will leverage the mixed index. See https://github.com/JanusGraph/janusgraph/pull/2175 for more details.
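For reference, a mixed index on "country" can be created, and on a graph that already contains data it must also be reindexed; a minimal sketch, assuming the index name "ctryidx" from this thread and the "search" backend name from the .properties file:

```groovy
// Sketch only: build a mixed index on an existing 'country' key.
mgmt = graph.openManagement()
country = mgmt.getPropertyKey('country')
mgmt.buildIndex('ctryidx', Vertex.class).addKey(country).buildMixedIndex('search')
mgmt.commit()

// Wait until the index is registered, then reindex pre-existing vertices.
ManagementSystem.awaitGraphIndexStatus(graph, 'ctryidx').call()
mgmt = graph.openManagement()
mgmt.updateIndex(mgmt.getGraphIndex('ctryidx'), SchemaAction.REINDEX).get()
mgmt.commit()
```

Without the REINDEX step, vertices loaded before the index was created are not covered by it.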


Re: Query requires iterating over all vertices

Laura Morales <lauretas@...>
 

Hi, it means your query is not leveraging your indexes. Can you provide the query and also the output of `mgmt.printIndexes()` (or how you created the indices)?
My ".properties" file:
gremlin.graph=org.janusgraph.core.JanusGraphFactory
storage.backend=berkeleyje
storage.directory=/graph/db
index.search.backend=lucene
index.search.directory=/graph/index

Then I open the "gremlin.sh" console and start the graph like this:
graph = JanusGraphFactory.open("/graph/database.properties")
g = graph.traversal()

I've imported the data from https://raw.githubusercontent.com/krlawrence/graph/master/sample-data/air-routes.graphml like this:
graph.io(graphml()).readGraph("airroutes.graphml")
graph.tx().commit()

I've tried to create a Composite Index following the documentation at https://docs.janusgraph.org/index-management/index-performance/#composite-index and this works.

Then I've tried to create a Mixed Index following the documentation just below that section, but it doesn't work. I always get the same warning.
I've tried to create the index on the property "country", just by copy-pasting the example in the documentation and replacing the property name.

The query is (typed into the "gremlin.sh" console):
g.V().has("country").values("country").dedup().order().limit(10)

The output of "mgmt.printIndexes()" is:
------------------------------------------------------------------------------------------------
Vertex Index Name | Type | Unique | Backing | Key: Status |
---------------------------------------------------------------------------------------------------
ctryidx | Mixed | false | search | country: ENABLED |
---------------------------------------------------------------------------------------------------
Edge Index (VCI) Name | Type | Unique | Backing | Key: Status |
---------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------
Relation Index | Type | Direction | Sort Key | Order | Status |
---------------------------------------------------------------------------------------------------
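One way to see whether the index is actually picked up is the profile() step; a sketch, assuming the String key has a text mapping so that textContains() applies:

```groovy
// Sketch: profile() shows which backend query serves each step.
// Before 0.6.0, a bare has('country') existence check cannot use a mixed
// index, but a concrete predicate such as textContains() can.
g.V().has('country', textContains('united')).profile()
```

If the output shows a full scan instead of an index-backed step, the traversal is not hitting the index.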


Re: Query requires iterating over all vertices

Boxuan Li
 

Hi, it means your query is not leveraging your indexes. Can you provide the query and also the output of `mgmt.printIndexes()` (or how you created the indices)?

On Jul 22, 2021, at 11:03 PM, Laura Morales <lauretas@...> wrote:

I was able to create a "Composite Index", but I can't seem to create a "Mixed Index" with berkeleyje and lucene.
What I've done:

- I've modified the ".properties" file
index.search.backend=lucene
index.search.directory=/data/searchindex
- I've created the index by following the instructions at https://docs.janusgraph.org/index-management/index-performance/#mixed-index I've tried with both one and two properties in the index

When I make the query, I always get the same warning "WARN org.janusgraph.graphdb.transaction.StandardJanusGraphTx - Query requires iterating over all vertices [()]. For better performance, use indexes"
I don't understand where the problem is. When I create the index following the documentation, everything seems to work fine, I don't get any errors.

How can I debug this?





Query requires iterating over all vertices

Laura Morales <lauretas@...>
 

I was able to create a "Composite Index", but I can't seem to create a "Mixed Index" with berkeleyje and lucene.
What I've done:

- I've modified the ".properties" file
index.search.backend=lucene
index.search.directory=/data/searchindex
- I've created the index by following the instructions at https://docs.janusgraph.org/index-management/index-performance/#mixed-index I've tried with both one and two properties in the index

When I make the query, I always get the same warning "WARN org.janusgraph.graphdb.transaction.StandardJanusGraphTx - Query requires iterating over all vertices [()]. For better performance, use indexes"
I don't understand where the problem is. When I create the index following the documentation, everything seems to work fine, I don't get any errors.

How can I debug this?


Tinkerpop 3.4.1 with Hadoop3

anjanisingh22@...
 

Hi All,
I am trying to run SparkGraphComputer using TinkerPop 3.4.1 on Spark 2.3 with Hadoop 3, but I am running into issues and am not able to connect to Spark.
As per http://tinkerpop.apache.org/docs/3.4.1/upgrade/#_hadoop_gremlin , "Hadoop1 is no longer supported. Hadoop2 is now the only supported Hadoop version in TinkerPop." Does that mean TinkerPop is not supported with Hadoop3?

Thanks in advance.

Regards,
Anjani


Re: Could not execute operation due to backend exception

Laura Morales <lauretas@...>
 

Thanks a lot for the help, I could not figure out what the problem was!
Is it normal for the requirement to be this high? 5 GB of free space for an empty database? And can it be changed somehow with some setting?
 
 
 

Sent: Wednesday, July 21, 2021 at 10:48 AM
From: hadoopmarc@...
To: janusgraph-users@...
Subject: Re: [janusgraph-users] Could not execute operation due to backend exception
Hi Laura,

Down the stacktrace you can note:
Caused by: com.sleepycat.je.DiskLimitException: (JE 18.3.12) Disk usage is not within je.maxDisk or je.freeDisk limits and write operations are prohibited: maxDiskLimit=0 freeDiskLimit=5,368,709,120 adjustedMaxDiskLimit=0 maxDiskOverage=0 freeDiskShortage=4,704,661,504 diskFreeSpace=664,047,616 availableLogSize=-4,704,661,504 totalLogSize=103,399 activeLogSize=103,399 reservedLogSize=0 protectedLogSize=0 protectedLogSizeMap={}
After some googling you can conclude that your JanusGraph disk is short of 4.7 GB of space for running JanusGraph with BerkeleyJE. Maybe check the logs folder.

No problem you asked; stacktraces can be daunting when very long and it was good to include the entire stacktrace!

Best wishes,    Marc


Re: Could not execute operation due to backend exception

hadoopmarc@...
 

Hi Laura,

Down the stacktrace you can note:
Caused by: com.sleepycat.je.DiskLimitException: (JE 18.3.12) Disk usage is not within je.maxDisk or je.freeDisk limits and write operations are prohibited: maxDiskLimit=0 freeDiskLimit=5,368,709,120 adjustedMaxDiskLimit=0 maxDiskOverage=0 freeDiskShortage=4,704,661,504 diskFreeSpace=664,047,616 availableLogSize=-4,704,661,504 totalLogSize=103,399 activeLogSize=103,399 reservedLogSize=0 protectedLogSize=0 protectedLogSizeMap={}
After some googling you can conclude that your JanusGraph disk is short of 4.7 GB of space for running JanusGraph with BerkeleyJE. Maybe check the logs folder.
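If lowering the safety margin is acceptable, Berkeley DB JE reads a je.properties file from its environment home directory (the configured storage.directory); a hedged sketch, where the 1 GB value is only an example:

```properties
# Sketch: lower JE's free-disk reserve from the 5 GB default (value in bytes).
je.freeDisk=1073741824
```

Freeing disk space is of course the safer fix; shrinking the reserve only trades safety for room.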

No problem you asked; stacktraces can be daunting when very long and it was good to include the entire stacktrace!

Best wishes,    Marc


Re: What are the implications of using Object.class property type?

hadoopmarc@...
 

Hi Laura,

A similar question was posed recently:
https://lists.lfaidata.foundation/g/janusgraph-users/message/5986

So,
1. Only for the CompositeIndex
2. In your specific example, you could use the Java Integer class ( https://docs.oracle.com/javase/8/docs/api/java/lang/Integer.html ), because its constructors accept either an int or a String, and it has equals() implemented.
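To make the contrast concrete, a sketch of the two ways of declaring a key (the key names here are invented for illustration):

```groovy
mgmt = graph.openManagement()
mgmt.makePropertyKey('code').dataType(Integer.class).make()  // values validated as Integer
mgmt.makePropertyKey('misc').dataType(Object.class).make()   // any supported value type accepted
mgmt.commit()
// With Object.class, has('misc', '42') and has('misc', 42) refer to different
// stored values, since equality is evaluated on the stored type.
```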

Best wishes,    Marc


Could not execute operation due to backend exception

Laura Morales <lauretas@...>
 

What am I doing wrong? I cannot get Janus to start.

$ java -version
openjdk version "1.8.0_252"
OpenJDK Runtime Environment (build 1.8.0_252-8u252-b09-1~deb9u1-b09)
OpenJDK 64-Bit Server VM (build 25.252-b09, mixed mode)

$ cd janusgraph-0.5.3
$ ./bin/gremlin.sh
gremlin> graph = JanusGraphFactory.open('conf/janusgraph-berkeleyje.properties')
Could not execute operation due to backend exception
Type ':help' or ':h' for help.
Display stack trace? [yN]y
org.janusgraph.core.JanusGraphException: Could not execute operation due to backend exception
at org.janusgraph.diskstorage.util.BackendOperation.execute(BackendOperation.java:56)
at org.janusgraph.diskstorage.util.BackendOperation.execute(BackendOperation.java:158)
at org.janusgraph.diskstorage.configuration.backend.KCVSConfiguration.set(KCVSConfiguration.java:146)
at org.janusgraph.diskstorage.configuration.backend.KCVSConfiguration.set(KCVSConfiguration.java:123)
at org.janusgraph.diskstorage.configuration.ModifiableConfiguration.set(ModifiableConfiguration.java:43)
at org.janusgraph.diskstorage.configuration.builder.ReadConfigurationBuilder.setupJanusGraphVersion(ReadConfigurationBuilder.java:130)
at org.janusgraph.diskstorage.configuration.builder.ReadConfigurationBuilder.buildGlobalConfiguration(ReadConfigurationBuilder.java:74)
at org.janusgraph.graphdb.configuration.builder.GraphDatabaseConfigurationBuilder.build(GraphDatabaseConfigurationBuilder.java:53)
at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:161)
at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:132)
at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:79)
at org.janusgraph.core.JanusGraphFactory$open.call(Unknown Source)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:47)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:115)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:127)
at groovysh_evaluate.run(groovysh_evaluate:3)
at groovysh_evaluate$run.call(Unknown Source)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:47)
at groovysh_evaluate$run.call(Unknown Source)
at org.codehaus.groovy.tools.shell.Interpreter.evaluate(Interpreter.groovy:81)
at org.codehaus.groovy.tools.shell.Evaluator$evaluate.call(Unknown Source)
at org.codehaus.groovy.tools.shell.Groovysh.execute(Groovysh.groovy:201)
at org.apache.tinkerpop.gremlin.console.GremlinGroovysh.super$3$execute(GremlinGroovysh.groovy)
at sun.reflect.GeneratedMethodAccessor22.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:101)
at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:323)
at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1217)
at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.invokeMethodOnSuperN(ScriptBytecodeAdapter.java:144)
at org.apache.tinkerpop.gremlin.console.GremlinGroovysh.execute(GremlinGroovysh.groovy:83)
at org.codehaus.groovy.tools.shell.Shell.leftShift(Shell.groovy:120)
at org.codehaus.groovy.tools.shell.Shell$leftShift$1.call(Unknown Source)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:47)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:115)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:127)
at org.codehaus.groovy.tools.shell.ShellRunner.work(ShellRunner.groovy:93)
at org.codehaus.groovy.tools.shell.InteractiveShellRunner.super$2$work(InteractiveShellRunner.groovy)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:101)
at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:323)
at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1217)
at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.invokeMethodOnSuperN(ScriptBytecodeAdapter.java:144)
at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.invokeMethodOnSuper0(ScriptBytecodeAdapter.java:164)
at org.codehaus.groovy.tools.shell.InteractiveShellRunner.work(InteractiveShellRunner.groovy:138)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.codehaus.groovy.runtime.callsite.PlainObjectMetaMethodSite.doInvoke(PlainObjectMetaMethodSite.java:43)
at org.codehaus.groovy.runtime.callsite.PogoMetaMethodSite$PogoCachedMethodSiteNoUnwrapNoCoerce.invoke(PogoMetaMethodSite.java:190)
at org.codehaus.groovy.runtime.callsite.PogoMetaMethodSite.callCurrent(PogoMetaMethodSite.java:58)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCallCurrent(CallSiteArray.java:51)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callCurrent(AbstractCallSite.java:156)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callCurrent(AbstractCallSite.java:160)
at org.codehaus.groovy.tools.shell.ShellRunner.run(ShellRunner.groovy:57)
at org.codehaus.groovy.tools.shell.InteractiveShellRunner.super$2$run(InteractiveShellRunner.groovy)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:101)
at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:323)
at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1217)
at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.invokeMethodOnSuperN(ScriptBytecodeAdapter.java:144)
at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.invokeMethodOnSuper0(ScriptBytecodeAdapter.java:164)
at org.codehaus.groovy.tools.shell.InteractiveShellRunner.run(InteractiveShellRunner.groovy:97)
at org.codehaus.groovy.vmplugin.v7.IndyInterface.selectMethod(IndyInterface.java:234)
at org.apache.tinkerpop.gremlin.console.Console.<init>(Console.groovy:168)
at org.codehaus.groovy.vmplugin.v7.IndyInterface.selectMethod(IndyInterface.java:234)
at org.apache.tinkerpop.gremlin.console.Console.main(Console.groovy:502)
Suppressed: org.janusgraph.core.JanusGraphException: Could not close configuration store
at org.janusgraph.diskstorage.configuration.backend.KCVSConfiguration.close(KCVSConfiguration.java:228)
at org.janusgraph.diskstorage.configuration.builder.ReadConfigurationBuilder.buildGlobalConfiguration(ReadConfigurationBuilder.java:94)
... 67 more
Caused by: org.janusgraph.diskstorage.PermanentBackendException: Could not close BerkeleyJE database
at org.janusgraph.diskstorage.berkeleyje.BerkeleyJEStoreManager.close(BerkeleyJEStoreManager.java:249)
at org.janusgraph.diskstorage.keycolumnvalue.keyvalue.OrderedKeyValueStoreManagerAdapter.close(OrderedKeyValueStoreManagerAdapter.java:73)
at org.janusgraph.diskstorage.configuration.backend.builder.KCVSConfigurationBuilder$1.close(KCVSConfigurationBuilder.java:46)
at org.janusgraph.diskstorage.configuration.backend.KCVSConfiguration.close(KCVSConfiguration.java:225)
... 68 more
Caused by: com.sleepycat.je.DiskLimitException: (JE 18.3.12) Disk usage is not within je.maxDisk or je.freeDisk limits and write operations are prohibited: maxDiskLimit=0 freeDiskLimit=5,368,709,120 adjustedMaxDiskLimit=0 maxDiskOverage=0 freeDiskShortage=4,704,661,504 diskFreeSpace=664,047,616 availableLogSize=-4,704,661,504 totalLogSize=103,399 activeLogSize=103,399 reservedLogSize=0 protectedLogSize=0 protectedLogSizeMap={}
at com.sleepycat.je.dbi.EnvironmentImpl.checkDiskLimitViolation(EnvironmentImpl.java:2733)
at com.sleepycat.je.recovery.Checkpointer.doCheckpoint(Checkpointer.java:724)
at com.sleepycat.je.dbi.EnvironmentImpl.invokeCheckpoint(EnvironmentImpl.java:2321)
at com.sleepycat.je.dbi.EnvironmentImpl.doClose(EnvironmentImpl.java:1972)
at com.sleepycat.je.dbi.DbEnvPool.closeEnvironment(DbEnvPool.java:342)
at com.sleepycat.je.dbi.EnvironmentImpl.close(EnvironmentImpl.java:1866)
at com.sleepycat.je.Environment.close(Environment.java:444)
at org.janusgraph.diskstorage.berkeleyje.BerkeleyJEStoreManager.close(BerkeleyJEStoreManager.java:247)
... 71 more
Caused by: org.janusgraph.diskstorage.PermanentBackendException: Permanent failure in storage backend
at org.janusgraph.diskstorage.berkeleyje.BerkeleyJETx.commit(BerkeleyJETx.java:112)
at org.janusgraph.diskstorage.util.BackendOperation.execute(BackendOperation.java:153)
at org.janusgraph.diskstorage.util.BackendOperation$1.call(BackendOperation.java:161)
at org.janusgraph.diskstorage.util.BackendOperation.executeDirect(BackendOperation.java:68)
at org.janusgraph.diskstorage.util.BackendOperation.execute(BackendOperation.java:54)
... 73 more
Caused by: com.sleepycat.je.DiskLimitException: (JE 18.3.12) Transaction 54 must be aborted, caused by: com.sleepycat.je.DiskLimitException: (JE 18.3.12) Disk usage is not within je.maxDisk or je.freeDisk limits and write operations are prohibited: maxDiskLimit=0 freeDiskLimit=5,368,709,120 adjustedMaxDiskLimit=0 maxDiskOverage=0 freeDiskShortage=4,704,661,504 diskFreeSpace=664,047,616 availableLogSize=-4,704,661,504 totalLogSize=103,399 activeLogSize=103,399 reservedLogSize=0 protectedLogSize=0 protectedLogSizeMap={}
at com.sleepycat.je.DiskLimitException.wrapSelf(DiskLimitException.java:61)
at com.sleepycat.je.txn.Txn.checkState(Txn.java:1951)
at com.sleepycat.je.txn.Txn.commit(Txn.java:725)
at com.sleepycat.je.txn.Txn.commit(Txn.java:631)
at com.sleepycat.je.Transaction.commit(Transaction.java:337)
at org.janusgraph.diskstorage.berkeleyje.BerkeleyJETx.commit(BerkeleyJETx.java:109)
... 77 more
Caused by: com.sleepycat.je.DiskLimitException: (JE 18.3.12) Disk usage is not within je.maxDisk or je.freeDisk limits and write operations are prohibited: maxDiskLimit=0 freeDiskLimit=5,368,709,120 adjustedMaxDiskLimit=0 maxDiskOverage=0 freeDiskShortage=4,704,661,504 diskFreeSpace=664,047,616 availableLogSize=-4,704,661,504 totalLogSize=103,399 activeLogSize=103,399 reservedLogSize=0 protectedLogSize=0 protectedLogSizeMap={}
at com.sleepycat.je.Cursor.checkUpdatesAllowed(Cursor.java:5407)
at com.sleepycat.je.Cursor.checkUpdatesAllowed(Cursor.java:5384)
at com.sleepycat.je.Cursor.putInternal(Cursor.java:2439)
at com.sleepycat.je.Cursor.putInternal(Cursor.java:841)
at com.sleepycat.je.Database.put(Database.java:1635)
at org.janusgraph.diskstorage.berkeleyje.BerkeleyJEKeyValueStore.insert(BerkeleyJEKeyValueStore.java:229)
at org.janusgraph.diskstorage.berkeleyje.BerkeleyJEKeyValueStore.insert(BerkeleyJEKeyValueStore.java:213)
at org.janusgraph.diskstorage.keycolumnvalue.keyvalue.OrderedKeyValueStoreAdapter.mutate(OrderedKeyValueStoreAdapter.java:99)
at org.janusgraph.diskstorage.configuration.backend.KCVSConfiguration$2.call(KCVSConfiguration.java:151)
at org.janusgraph.diskstorage.configuration.backend.KCVSConfiguration$2.call(KCVSConfiguration.java:146)
at org.janusgraph.diskstorage.util.BackendOperation.execute(BackendOperation.java:147)
... 76 more


What are the implications of using Object.class property type?

Laura Morales <lauretas@...>
 

What are the practical implications of using Object.class as a property type, instead of the other native types (eg. String.class, Integer.class, etc.)?
IIUC Object.class means that a property can take any type as its value. But then my questions are:
1. can Object.class be indexed?
2. if a property can take multiple value types, how does querying work if I query for a string but one of the stored values is actually an integer?


Re: Bulk loading

Laura Morales <lauretas@...>
 

If I setup an empty graph with persistent storage, for example berkeley (thus not an in-memory graph), can I load a graphml/graphson file and have it all added to the graph?
 
 
 

Sent: Tuesday, July 20, 2021 at 2:52 PM
From: hadoopmarc@...
To: janusgraph-users@...
Subject: Re: [janusgraph-users] Bulk loading
Hi Laura,

JanusGraph support for loading data does not go further than the Traversal API (used in https://tinkerpop.apache.org/docs/current/reference/#addvertex-step ) and the JanusGraphManagement API (used in https://docs.janusgraph.org/basics/schema/ ).

A better resource than the reference docs to get you started is:
http://www.kelvinlawrence.net/book/PracticalGremlin.html

The bulk loading tips will be useful for graphs with millions of vertices and edges.

Best wishes,     Marc


Re: JanusGraph combined with Belief Propagation

hadoopmarc@...
 

Hi,

You can first try to write a custom VertexProgram for belief propagation with Apache TinkerPop. A custom VertexProgram supports the massive message passing needed for belief propagation.
If it works on TinkerPop you can use the same VertexProgram on JanusGraph (if a single TinkerPop machine does not suffice to hold your graph), but you will have the additional complexity of getting JanusGraph to work with Apache Spark.
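As a starting point, the built-in PageRankVertexProgram shows the shape such a program takes; a sketch of running it on the TinkerPop toy graph in the Gremlin console (a belief-propagation program would implement the same VertexProgram interface and plug in the same way):

```groovy
graph = TinkerFactory.createModern()
// Run a VertexProgram via GraphComputer; a custom program is submitted identically.
result = graph.compute().program(PageRankVertexProgram.build().create(graph)).submit().get()
result.graph().traversal().V().valueMap('name', PageRankVertexProgram.PAGE_RANK)
```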

A Google search for "belief propagation VertexProgram TinkerPop" does not give any relevant results.

Best wishes,    Marc


Re: Bulk loading

hadoopmarc@...
 

Hi Laura,

JanusGraph support for loading data does not go further than the Traversal API (used in https://tinkerpop.apache.org/docs/current/reference/#addvertex-step ) and the JanusGraphManagement API (used in https://docs.janusgraph.org/basics/schema/ ).

A better resource than the reference docs to get you started is:
http://www.kelvinlawrence.net/book/PracticalGremlin.html

The bulk loading tips will be useful for graphs with millions of vertices and edges.
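As a minimal illustration of the Traversal API approach, CSV rows can be turned into vertices directly from the Gremlin console; a sketch only, with the file name and columns invented:

```groovy
// Sketch: load vertices from a hypothetical 'airports.csv' with header 'code,country'.
new File('airports.csv').withReader { reader ->
    reader.readLine()  // skip the header row
    reader.splitEachLine(',') { cols ->
        g.addV('airport').property('code', cols[0]).property('country', cols[1]).iterate()
    }
}
graph.tx().commit()
```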

Best wishes,     Marc


JanusGraph combined with Belief Propagation

ganeshanvinothkumar@...
 

I'm moving from AWS Neptune architecture to JanusGraph in CentOS.

Has anyone tried implementing Belief Propagation with JanusGraph?


Re: Janus multiple subgraphs

Boxuan Li
 

Hi Laura, unfortunately, you would have to handle this in your application layer. For example, you can use different labels, properties, indexes for different subgraphs.
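For example, a "subgraph" property (the name is invented here) can act like a schema qualifier; a sketch:

```groovy
// Sketch: tag vertices with their subgraph and filter on it per query.
g.addV('person').property('subgraph', 'hr').property('name', 'alice').iterate()
g.V().has('subgraph', 'hr').has('name', 'alice')  // restrict to one subgraph
g.V().has('name', 'alice')                        // query the whole graph
```

Indexing the tagging property alongside the others keeps such filtered queries from scanning everything.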


Bulk loading

Laura Morales <lauretas@...>
 

I've read the "Bulk Loading" chapter of the documentation several times but I still don't understand how to create a graph. Everything that I can find online is some Java or Groovy code.
Given:
1. a graph schema (say, in JSON)
2. a bunch of data (say, in CSV)
does Janus have any tool to load this stuff into a database, or to create a new one? Without using any Java/Groovy programming, or 3rd party tools? Or am I expected to write my own Groovy scripts for parsing the CSV and creating the graph?


Re: Very slow performance when opening a new session

hadoopmarc@...
 

Hi Roy,

I can confirm your observation using the standard 'bin/janusgraph.sh start' from the full janusgraph distribution.
I just used the Gremlin console with:

:remote connect tinkerpop.server conf/remote.yaml session
:remote console
a = 3

Although in hindsight there is no logical reason for it, I checked whether the delay was due to class loading in the Gremlin console, using:
export JAVA_OPTIONS='-verbose:class'

I can also confirm that the delay does not happen with a non-sessioned connection.
I can also confirm that the delay occurs for the gremlin server and gremlin console of the Apache TinkerPop distribution (version 3.4.8).

I guess the initial delay is due to the additional overhead of sessions as described in:
https://tinkerpop.apache.org/docs/current/reference/#sessions
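For comparison, the non-sessioned connection that avoids the initial delay is the same connect command without the trailing "session" keyword:

```text
:remote connect tinkerpop.server conf/remote.yaml
:remote console
g.V().limit(1).id()
```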

Best wishes,     Marc


Janus multiple subgraphs

Laura Morales <lauretas@...>
 

My understanding is that a Janus server can host multiple graphs, but they are isolated and cannot be queried together.
I'd like to know if/how it's possible to split one single graph into multiple subgraphs such that:

- I can query only one subgraph, or the entire graph
- vertex/edge properties (and their indexes) are local to a subgraph
- subgraphs can have links to one another, so I should be able to query multiple subgraphs in the same query

I think what I'm trying to achieve is something akin to Postgres' database schemas. In Postgres, databases are independent but they can be sub-divided into multiple schemas. Each schema has its own table, constraints, indexes, but I can query multiple schemas at once by using their fully qualified name "schema.table".



Very slow performance when opening a new session

Roy Reznik <reznik.roy@...>
 

I'm seeing very slow performance when opening a new session in JanusGraph.
The message I'm sending is this:
{"requestId":"02a58ee3-e4d3-11eb-bd29-04d4c4eaf347","op":"eval","processor":"session","args":{"bindings":{},"evaluationTimeout":120000,"gremlin":"g.V().limit(1).id()","language":"gremlin-groovy","rebindings":{},"session":"50052633-079b-4500-bc29-a3eacb1f0dba"}}

Basically, the inner query doesn't really matter. When I use the session processor with a new session id that has never been used, it takes ~1.2s for JanusGraph to respond.
Subsequent queries with the same session id are much quicker.
Why is the overhead of starting a new session so large? Can it be reduced somehow by configuration?

Thanks,
Roy.
