Re: JanusGraph database cache on distributed setup
Boxuan Li
Hi Wasantha,
I am not familiar with the transaction scope when using a remote Gremlin server, so I could be wrong, but could you try rolling back the transaction explicitly on JG instance B? Just to make sure you are not accessing the stale data cached in a local transaction. Best, Boxuan
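A minimal sketch of this rollback-before-read suggestion, assuming instance B embeds JanusGraph directly (the vertex id and property name are placeholders):

import org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversalSource;
import org.janusgraph.core.JanusGraph;

public class RollbackBeforeRead {
    static Object freshRead(JanusGraph graphB, Object vertexId) {
        // Close the current transaction so the next read does not see
        // stale data cached in the local transaction scope.
        graphB.tx().rollback();
        GraphTraversalSource g = graphB.traversal();
        return g.V(vertexId).values("someProperty").next(); // placeholder property name
    }
}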
|
|
Re: JanusGraph database cache on distributed setup
washerath@...
Hi Boxuan, Wasantha
|
|
Re: JanusGraph database cache on distributed setup
Boxuan Li
Hi Wasantha,
In your example, it looks like you didn't commit your transaction on JG instance A. Uncommitted changes are only visible to the local transaction on the local instance. Can you try committing it first on A and then query on B? Best, Boxuan
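A minimal sketch of this commit-then-read sequence, assuming both instances embed JanusGraph directly (property names and values are placeholders):

import org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversalSource;
import org.janusgraph.core.JanusGraph;

public class CommitThenQuery {
    static void writeOnA(JanusGraph graphA) {
        GraphTraversalSource g = graphA.traversal();
        g.V().has("name", "marko").property("age", 30).iterate();
        graphA.tx().commit(); // until this commit, the change is only visible inside A's transaction
    }

    static Object readOnB(JanusGraph graphB) {
        graphB.tx().rollback(); // start from a fresh transaction on B
        GraphTraversalSource g = graphB.traversal();
        return g.V().has("name", "marko").values("age").next();
    }
}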
|
|
Re: JanusGraph database cache on distributed setup
washerath@...
Hi Boxuan, Wasantha
|
|
Re: JanusGraph database cache on distributed setup
Boxuan Li
Hi Wasantha,
|
|
Re: JanusGraph database cache on distributed setup
washerath@...
Hi Boxuan, Wasantha
|
|
Re: Removed graphs still open in muti node cluster
hadoopmarc@...
Hi Lixu,
JanusGraph-0.6.0 had various changes to the ConfiguredGraphFactory which might have solved your issue:
https://github.com/JanusGraph/janusgraph/issues/2236
https://github.com/JanusGraph/janusgraph/blob/v0.5.3/janusgraph-core/src/main/java/org/janusgraph/core/ConfiguredGraphFactory.java
https://github.com/JanusGraph/janusgraph/blob/v0.6.1/janusgraph-core/src/main/java/org/janusgraph/core/ConfiguredGraphFactory.java
Can you recheck with version 0.6.1?
BTW, the release notes of v0.6.0 form an impressive list! Merely reading it takes minutes.
Best wishes, Marc
|
|
Re: Preserve IDs when importing graphml
hadoopmarc@...
Hi Laura,
No answer but some relevant search results:
https://groups.google.com/g/gremlin-users/c/jUBuhhKuf0M/m/kiKMY0eHAwAJ
The graph.set-vertex-id property at: https://docs.janusgraph.org/configs/configuration-reference/#graph
In general, when working with JanusGraph, it is better to first transform the input graphml and make the id into a property.
Best wishes, Marc
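A minimal sketch of the make-the-id-into-a-property approach, assuming the graphml is first loaded into an in-memory TinkerGraph and then copied into JanusGraph (the originalId property name and file paths are illustrative; edges are omitted for brevity):

import org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversalSource;
import org.apache.tinkerpop.gremlin.structure.Vertex;
import org.apache.tinkerpop.gremlin.tinkergraph.structure.TinkerGraph;
import org.janusgraph.core.JanusGraph;
import org.janusgraph.core.JanusGraphFactory;

public class ImportWithOriginalIds {
    public static void main(String[] args) {
        // Load the graphml into a throwaway in-memory graph first.
        TinkerGraph source = TinkerGraph.open();
        source.traversal().io("file.graphml").read().iterate();

        JanusGraph graph = JanusGraphFactory.open("conf/janusgraph.properties");
        GraphTraversalSource g = graph.traversal();
        source.traversal().V().forEachRemaining(v -> {
            // Keep the original graphml id as a regular, indexable property.
            Vertex copy = g.addV(v.label()).property("originalId", v.id()).next();
            v.properties().forEachRemaining(p -> copy.property(p.key(), p.value()));
        });
        graph.tx().commit(); // edges would be copied similarly, looked up by originalId
    }
}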
|
|
Preserve IDs when importing graphml
Laura Morales <lauretas@...>
I think I've read once that it's possible to preserve the IDs when importing graphml data. Unfortunately, I cannot remember where I read that. All my IDs are integers.
How do I do that?
|
|
Re: Importing a schema
hadoopmarc@...
Hi Laura,
JanusGraph only allows you to configure a custom SchemaMaker via the schema.default property. Googling for SchemaMaker turns up some (unmaintained?) projects that could help: https://github.com/graph-lab/janusgraph-schema-manager Best wishes, Marc
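A minimal sketch of pointing schema.default at the built-in "none" maker so that all schema elements must be created explicitly (the storage backend here is only an example):

import org.janusgraph.core.JanusGraph;
import org.janusgraph.core.JanusGraphFactory;

public class DisableAutoSchema {
    public static void main(String[] args) {
        JanusGraph graph = JanusGraphFactory.build()
                .set("storage.backend", "inmemory")   // example backend
                .set("schema.default", "none")        // reject property keys/labels not defined up front
                .open();
        graph.close();
    }
}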
|
|
Importing a schema
Laura Morales <lauretas@...>
Is there a way to import a schema, instead of creating it with a script?
For example importing this file:

<?xml version='1.0' ?>
<graphml xmlns='http://graphml.graphdrawing.org/xmlns'>
    <key id='type' for='node' attr.name='type' attr.type='string'></key>
    <key id='code' for='node' attr.name='code' attr.type='string'></key>
    <key id='icao' for='node' attr.name='icao' attr.type='string'></key>
    <key id='desc' for='node' attr.name='desc' attr.type='string'></key>
    <key id='region' for='node' attr.name='region' attr.type='string'></key>
    <key id='runways' for='node' attr.name='runways' attr.type='int'></key>
    <key id='longest' for='node' attr.name='longest' attr.type='int'></key>
    <key id='elev' for='node' attr.name='elev' attr.type='int'></key>
    <key id='country' for='node' attr.name='country' attr.type='string'></key>
    <key id='city' for='node' attr.name='city' attr.type='string'></key>
    <key id='lat' for='node' attr.name='lat' attr.type='double'></key>
    <key id='lon' for='node' attr.name='lon' attr.type='double'></key>
    <key id='dist' for='edge' attr.name='dist' attr.type='int'></key>
    <key id='labelV' for='node' attr.name='labelV' attr.type='string'></key>
    <key id='labelE' for='edge' attr.name='labelE' attr.type='string'></key>
</graphml>

like this:

graph.io(graphml()).readGraph("file.graphml")

is there a way to make Janus create the schema from the XML file? This would be very convenient because it means I don't have to write groovy scripts for creating the schema.
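For comparison, a minimal sketch of creating a few of these keys explicitly with the JanusGraph management API (Java; only a subset of the keys is shown, and the properties file path is a placeholder):

import org.janusgraph.core.JanusGraph;
import org.janusgraph.core.JanusGraphFactory;
import org.janusgraph.core.schema.JanusGraphManagement;

public class SchemaFromGraphmlKeys {
    public static void main(String[] args) {
        JanusGraph graph = JanusGraphFactory.open("conf/janusgraph.properties");
        JanusGraphManagement mgmt = graph.openManagement();
        // Mirrors <key attr.name='code' attr.type='string'> and friends from the graphml header.
        mgmt.makePropertyKey("code").dataType(String.class).make();
        mgmt.makePropertyKey("runways").dataType(Integer.class).make();
        mgmt.makePropertyKey("lat").dataType(Double.class).make();
        mgmt.makePropertyKey("dist").dataType(Integer.class).make();
        mgmt.commit();
        graph.close();
    }
}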
|
|
Re: JanusGraph database cache on distributed setup
Boxuan Li
Hi Wasantha,
A centralized cache is a good idea in many use cases. What you could do is maintain a centralized cache yourself. This, however, requires some changes to your application code (e.g. your app might need to do a lookup in the cache and then query JanusGraph). A more advanced approach is to rewrite ExpirationKCVSCache (https://javadoc.io/doc/org.janusgraph/janusgraph-core/latest/org/janusgraph/diskstorage/keycolumnvalue/cache/ExpirationKCVSCache.html) yourself and let it store entries in a centralized cache rather than the local one. Then, the db.cache feature should still work, except that the cache is synced across JanusGraph instances. Best, Boxuan
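An illustration of the application-level variant; the CentralizedCache interface below is hypothetical, standing in for Redis/Memcached or similar, while the ExpirationKCVSCache rewrite would live inside JanusGraph itself:

import org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversalSource;

public class CachedLookup {
    /** Hypothetical stand-in for a shared cache such as Redis or Memcached. */
    interface CentralizedCache {
        Object get(String key);
        void put(String key, Object value, long ttlMillis);
    }

    static Object displayName(GraphTraversalSource g, CentralizedCache cache, Object vertexId) {
        String key = "displayName:" + vertexId;
        Object cached = cache.get(key);
        if (cached != null) {
            return cached;                                              // served from the shared cache
        }
        Object value = g.V(vertexId).values("displayName").next();      // fall back to JanusGraph
        cache.put(key, value, 60_000);                                  // TTL is arbitrary for the sketch
        return value;
    }
}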
|
|
Re: JanusGraph database cache on distributed setup
washerath@...
Actually, the concern is with the db.cache feature.
Once we enable db.cache, whatever modification is made to a particular vertex is only visible to that JG instance until the cache expires. So if we have multiple JG instances, modifications made from one instance are not reflected on the others immediately. If we could have a centralized cache that syncs across all JG instances, this could be avoided. Thanks, Wasantha
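For context, the settings in question, shown here set programmatically (the values are arbitrary examples):

import org.janusgraph.core.JanusGraph;
import org.janusgraph.core.JanusGraphFactory;

public class DbCacheSettings {
    public static void main(String[] args) {
        JanusGraph graph = JanusGraphFactory.build()
                .set("storage.backend", "cql")          // example backend
                .set("storage.hostname", "127.0.0.1")
                .set("cache.db-cache", true)            // the database-level cache being discussed
                .set("cache.db-cache-time", 180000)     // ms before a cached entry expires on this instance
                .set("cache.db-cache-size", 0.25)       // fraction of heap used for the cache
                .open();
        graph.close();
    }
}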
|
|
can we dynamically create multiple graphs with customized schema files
Yingjie Li
Hello,
Currently we run gremlin.sh with a customized schema file in groovy (containing the backend config, graph name, and cache size, as well as property keys and vertex/edge indexes) to initialize a graph. It seems that we have to restart the server to make the graph accessible. Is there a way to dynamically create multiple graphs with customized schema files without restarting the server?
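A minimal sketch of the ConfiguredGraphFactory route, assuming the server is already set up with a ConfigurationManagementGraph and a stored template configuration (the graph name is illustrative); it creates graphs at runtime without a restart:

import org.janusgraph.core.ConfiguredGraphFactory;
import org.janusgraph.core.JanusGraph;

public class DynamicGraphs {
    public static void main(String[] args) {
        // Assumes ConfiguredGraphFactory.createTemplateConfiguration(...) was already
        // called on the server, so new graphs inherit the backend/cache settings.
        JanusGraph airports = ConfiguredGraphFactory.create("airports");
        // Schema (property keys, indexes) can then be defined via airports.openManagement(),
        // with no server restart required.
        ConfiguredGraphFactory.getGraphNames().forEach(System.out::println);
        airports.close();
    }
}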
|
|
Removed graphs still open in muti node cluster
lixu
Hi, I'm using JanusGraphManager and JanusGraphWsAndHttpChannelizer to manage dynamic graph operations.
When dropping a graph in a multi-node cluster, the removed graph is only closed on that specific node; it remains open on the other nodes.
Version: 0.5.3
Storage Backend: Hbase
Mixed Index Backend: elasticsearch
Expected Behavior: all nodes in the cluster should close the removed graphs
Current Behavior: only the node executing the drop script closes the removed graphs
Related code:
JanusGraphManager:
private class GremlinExecutorGraphBinder implements Runnable {
    final JanusGraphManager graphManager;
    final GremlinExecutor gremlinExecutor;

    public GremlinExecutorGraphBinder(JanusGraphManager graphManager, GremlinExecutor gremlinExecutor) {
        this.graphManager = graphManager;
        this.gremlinExecutor = gremlinExecutor;
    }

    @Override
    public void run() {
        ConfiguredGraphFactory.getGraphNames().forEach(it -> {
            try {
                final Graph graph = ConfiguredGraphFactory.open(it);
                updateTraversalSource(it, graph, this.gremlinExecutor, this.graphManager);
            } catch (Exception e) {
                // cannot open graph, do nothing
                log.error(String.format("Failed to open graph %s with the following error:\n %s.\n" +
                        "Thus, it and its traversal will not be bound on this server.", it, e.toString()));
            }
        });
    }
}
In the above code, the run() method gets all of the graph names from HBase, so it can find all of the added graphs,
but graphs removed from other nodes are still open on the current node.
|
|
Re: Forcing Janusgraph to use indices when performing traversal with Union step
Boxuan Li
The approach you proposed should, in theory, work as well (or as badly) as your original single query. I would be surprised if this approach works better than your original single query, but if so, please let me know.
The optimal approach depends on your data. Since each of your indexes only covers a single property, there are two general strategies:
1. Leverage one index to return results for one condition, and do in-memory filtering for the other condition. Your original query does this. It is useful in some scenarios (e.g. one index returns many more results than the other).
2. Leverage both indexes and do an in-memory intersection to return results that satisfy both conditions. This might be what you want, and it is useful in some scenarios (e.g. both indices return roughly the same number of results and there is not much overlap between the two result sets). If this is what you want, try rewriting your query as g.V().has("indexed-prop1", "value1").or(__.has("indexed-prop2", "value2"), __.has("indexed-prop3", "value3")). Essentially, change the union step into an "or" step.
Hope this helps! Best, Boxuan
|
|
Re: Forcing Janusgraph to use indices when performing traversal with Union step
brad@...
Thank you for your reply.
We can look into migrating to version 0.6 (or later). There is an index for each of indexed-prop1, indexed-prop2, and indexed-prop3, but there are no multi-column indices (e.g., on indexed-prop1 and indexed-prop2). In our application, the columns used for a search are specified by the user, and we have no way of knowing in advance which combination of columns they might ask for. The only thing we can do now is ensure that every column used in a 'has' step is indexed.

One approach that I thought might be an improvement is to perform the first traversal (i.e., g.V().has('indexed-prop1', 'value1')) separately, collect the set of vertices that satisfy this query, then perform an additional query for each 'has' step that is currently in the 'union' step. So, for example, we run g.V().has('indexed-prop1', 'value1') initially, and it returns 3 vertices (V1, V2, and V3). We then run a traversal for indexed-prop2: g.V(V1, V2, V3).has('indexed-prop2', 'value2'), which returns a subset of the vertices returned by the first traversal. Then we run another traversal, this time g.V(V1, V2, V3).has('indexed-prop3', 'value3'), again returning a subset of the vertices. Finally, we figure out (programmatically, without executing another traversal) the union of the vertices returned by the last 2 traversals.

This is an inelegant, brute-force technique which would probably work, but I would rather do this in a single traversal, and I haven't been able to figure out how. Can you recommend an approach for doing this kind of action in a single traversal? Thanks, Brad
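For concreteness, the brute-force variant described above, sketched in Java (property names follow the earlier examples; the final union is computed client-side):

import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;
import org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversalSource;
import org.apache.tinkerpop.gremlin.structure.Vertex;

public class ClientSideUnion {
    static Set<Vertex> search(GraphTraversalSource g) {
        // 1. Indexed lookup for the first condition.
        List<Vertex> base = g.V().has("indexed-prop1", "value1").toList();

        // 2. Filter that candidate set by each remaining condition.
        List<Vertex> byProp2 = g.V(base.toArray()).has("indexed-prop2", "value2").toList();
        List<Vertex> byProp3 = g.V(base.toArray()).has("indexed-prop3", "value3").toList();

        // 3. Union the two result sets client-side, without another traversal.
        Set<Vertex> result = new LinkedHashSet<>(byProp2);
        result.addAll(byProp3);
        return result;
    }
}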
|
|
Re: Forcing Janusgraph to use indices when performing traversal with Union step
Boxuan Li
Hi Brad,
I can see that your traversal is using the index index_ten1_apm_0 from the following snippet:
Then, JanusGraph uses in-memory filtering to check whether the results returned by index_ten1_apm_0 satisfy your predicates has('indexed-prop2', 'value2') or has('indexed-prop3', 'value3'). Do you have an index that contains both the fields "indexed-prop1" and "indexed-prop2", and an index that contains both "indexed-prop1" and "indexed-prop3"? If so, try replacing the "union" step with an "or" step. If not, you could try creating those indexes; otherwise there is no optimization that JanusGraph can do. By the way, it looks like you are using a JanusGraph version < 0.6, which is out of maintenance. The same query should run moderately faster in the latest version due to a couple of optimizations. Best, Boxuan
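For reference, a minimal sketch of creating such an index with the management API (the index name is illustrative, the "search" backend name must match the configured indexing backend, and reindexing of existing data is omitted):

import org.apache.tinkerpop.gremlin.structure.Vertex;
import org.janusgraph.core.JanusGraph;
import org.janusgraph.core.JanusGraphFactory;
import org.janusgraph.core.PropertyKey;
import org.janusgraph.core.schema.JanusGraphManagement;

public class CombinedMixedIndex {
    public static void main(String[] args) {
        JanusGraph graph = JanusGraphFactory.open("conf/janusgraph.properties");
        JanusGraphManagement mgmt = graph.openManagement();
        PropertyKey prop1 = mgmt.getPropertyKey("indexed-prop1");
        PropertyKey prop2 = mgmt.getPropertyKey("indexed-prop2");
        // Mixed index covering both fields, so both predicates can be answered by the index backend.
        mgmt.buildIndex("byProp1AndProp2", Vertex.class)
                .addKey(prop1)
                .addKey(prop2)
                .buildMixedIndex("search");
        mgmt.commit();
        graph.close();
    }
}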
|
|
Re: Forcing Janusgraph to use indices when performing traversal with Union step
brad@...
TRAVERSAL:
[GraphStep(vertex,[]), HasStep([~label.eq(ten1.apm.version), ten1.apm.idx.type_id.eq(i_javaServiceInstance)]), UnionStep([[HasStep([ten1.apm.idx.display_name.eq(JavaServiceInstance3)]), EndStep], [HasStep([ten1.apm.idx.str4.eq(998)]), EndStep]]), RangeGlobalStep(0,2147483647), HasStep([ten1.apm.idx.discovery_ts.lte(1643313164000), ten1.apm.idx.last_seen_ts.gt(0)]), OrderGlobalStep([[value(ten1.apm.idx.last_seen_ts), desc]]), RangeGlobalStep(0,1000), GroupStep(value(uid),[FoldStep, OrderLocalStep([[value(ten1.apm.idx.last_seen_ts), desc]]), UnfoldStep, RangeGlobalStep(0,1)]), LambdaFlatMapStep(lambda), RangeGlobalStep(0,2500), PropertyMapStep([uid, ten1.apm.idx.display_name, ten1.apm.idx.last_seen_ts, ten1.apm.idx.discovery_ts, typ],value)]
Traversal Metrics
Step Count Traversers Time (ms) % Dur
=============================================================================================================
JanusGraphStep([],[~label.eq(ten1.apm.version),... 3 3 14.381 54.44
\\_condition=(~label = ten1.apm.version AND ten1.apm.idx.type_id = i_javaServiceInstance)
\\_orders=[]
\\_isFitted=true
\\_isOrdered=true
\\_query=[(ten1.apm.idx.type_id = i_javaServiceInstance)](4000):index_ten1_apm_0
\\_index=index_ten1_apm_0
\\_index_impl=search
optimization 0.011
optimization 0.381
backend-query 3 21.484
\\_query=index_ten1_apm_0:[(ten1.apm.idx.type_id = i_javaServiceInstance)](4000):index_ten1_apm_0
\\_limit=4000
UnionStep([[HasStep([ten1.apm.idx.display_name.... 2 2 10.005 37.87
HasStep([ten1.apm.idx.display_name.eq(JavaSer... 1 1 0.113
EndStep 1 1 0.076
HasStep([ten1.apm.idx.str4.eq(998)]) 1 1 0.208
EndStep 1 1 0.169
RangeGlobalStep(0,2147483647) 2 2 0.177 0.67
HasStep([ten1.apm.idx.discovery_ts.lte(16433131... 2 2 0.616 2.33
OrderGlobalStep([[value(ten1.apm.idx.last_seen_... 2 2 0.281 1.06
RangeGlobalStep(0,1000) 2 2 0.115 0.44
GroupStep(value(uid),[FoldStep, OrderLocalStep(... 1 1 0.323 1.22
LambdaFlatMapStep(lambda) 2 2 0.082 0.31
RangeGlobalStep(0,2500) 2 2 0.062 0.23
PropertyMapStep([uid, ten1.apm.idx.display_name... 2 2 0.373 1.41
>TOTAL - - 26.417 -
|
|
Re: dynamic graphics, limits and global index
Matthew Nguyen <nguyenm9@...>
Thanks Marc. Currently the triplestore/LPG work is on hold, awaiting streaming incident-edge queries, before I play some more. Hoping we will see a day when LPG/3store harmonize.
-----Original Message-----
From: hadoopmarc@...
To: janusgraph-users@...
Sent: Sun, Feb 6, 2022 1:56 pm
Subject: Re: [janusgraph-users] dynamic graphics, limits and global index

Hi Matt,
Adding to what I stated above about independent composite indices for separate graphs on the same storage backend: the issue turns out to be more nuanced for mixed indices on an indexing backend, see the recent question: https://lists.lfaidata.foundation/g/janusgraph-users/topic/88879391 I thought it useful to add it to this thread too. Marc
PS Good to hear that JanusGraph can possibly support your use case!
|
|