
Re: Lucene index inconsistency

inverseintegral42@...
 

Yes that's correct. Initially, I sent a mail to the mailing list but it took 2 weeks to show up, so in the meantime I made an account and directly created the topic.
I have reported the topic to the moderators so that they can delete this one if possible :)


Re: Lucene index inconsistency

hadoopmarc@...
 

I assume you reposted this inadvertently? I saw you created issue https://github.com/JanusGraph/janusgraph/issues/3146 to document this inconvenience.
Thanks,

Marc


Lucene index inconsistency

Inverse Integral <inverseintegral42@...>
 

Hey everyone,

I'm currently running janusgraph-inmemory with janusgraph-lucene, both version 0.6.2, and I'm experiencing strange behaviour when creating an index that contains an underscore in its name.
Whenever I run the following code:
PropertiesConfiguration conf = ConfigurationUtil.loadPropertiesConfig("conf/test.properties");
JanusGraph graph = JanusGraphFactory.open(conf);
String name = "some_name";

graph.tx().rollback();
JanusGraphManagement management = graph.openManagement();

management.makePropertyKey("name").dataType(String.class).make();
management.commit();

management = graph.openManagement();

management.buildIndex(name, Vertex.class)
.addKey(management.getPropertyKey("name"))
.buildMixedIndex("search");
management.commit();

ManagementSystem.awaitGraphIndexStatus(graph, name)
.status(SchemaStatus.REGISTERED)
.call();

management = graph.openManagement();
management.updateIndex(management.getGraphIndex(name), SchemaAction.REINDEX).get();
management.commit();

management = graph.openManagement();
System.out.println(management.printIndexes());

graph.traversal().addV().property("name", "Test").next();
graph.tx().commit();

I get the following output:

INFO  o.j.g.d.management.ManagementSystem - Index update job successful for [some_name]
------------------------------------------------------------------------------------------------
Graph Index (Vertex)           | Type        | Unique    | Backing        | Key:           Status |
---------------------------------------------------------------------------------------------------
some_name                      | Mixed       | false     | search         | name:         ENABLED |
---------------------------------------------------------------------------------------------------
Graph Index (Edge)             | Type        | Unique    | Backing        | Key:           Status |
---------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------
Relation Index (VCI)           | Type        | Direction | Sort Key       | Order    |     Status |
---------------------------------------------------------------------------------------------------

ERROR o.j.g.database.StandardJanusGraph - Error while committing index mutations for transaction [5] on index: search
org.janusgraph.core.JanusGraphException: Could not execute operation due to backend exception
at org.janusgraph.diskstorage.util.BackendOperation.execute(BackendOperation.java:54)
at org.janusgraph.diskstorage.indexing.IndexTransaction.flushInternal(IndexTransaction.java:158)
at org.janusgraph.diskstorage.indexing.IndexTransaction.commit(IndexTransaction.java:139)
at org.janusgraph.diskstorage.BackendTransaction.commitIndexes(BackendTransaction.java:143)
at org.janusgraph.graphdb.database.StandardJanusGraph.commit(StandardJanusGraph.java:804)
at org.janusgraph.graphdb.transaction.StandardJanusGraphTx.commit(StandardJanusGraphTx.java:1525)
at org.janusgraph.graphdb.tinkerpop.JanusGraphBlueprintsGraph$GraphTransaction.doCommit(JanusGraphBlueprintsGraph.java:322)
at org.apache.tinkerpop.gremlin.structure.util.AbstractTransaction.commit(AbstractTransaction.java:104)
at org.janusgraph.graphdb.tinkerpop.JanusGraphBlueprintsGraph$GraphTransaction.commit(JanusGraphBlueprintsGraph.java:300)
at Scratch.main(scratch.java:48)
Caused by: org.janusgraph.diskstorage.PermanentBackendException: Permanent exception while executing backend operation IndexMutation
at org.janusgraph.diskstorage.util.BackendOperation.executeDirect(BackendOperation.java:79)
at org.janusgraph.diskstorage.util.BackendOperation.execute(BackendOperation.java:52)
... 9 common frames omitted
Caused by: java.lang.IllegalArgumentException: Invalid store name: some_name
at com.google.common.base.Preconditions.checkArgument(Preconditions.java:217)
at org.janusgraph.diskstorage.lucene.LuceneIndex.getStoreDirectory(LuceneIndex.java:194)
at org.janusgraph.diskstorage.lucene.LuceneIndex.getWriter(LuceneIndex.java:216)
at org.janusgraph.diskstorage.lucene.LuceneIndex.mutateStores(LuceneIndex.java:289)
at org.janusgraph.diskstorage.lucene.LuceneIndex.mutate(LuceneIndex.java:275)
at org.janusgraph.diskstorage.indexing.IndexTransaction$1.call(IndexTransaction.java:161)
at org.janusgraph.diskstorage.indexing.IndexTransaction$1.call(IndexTransaction.java:158)
at org.janusgraph.diskstorage.util.BackendOperation.executeDirect(BackendOperation.java:66)
... 10 common frames omitted

Wouldn't it be better to disallow the creation of the index in the first place? After creation, all queries seem to fail with the same PermanentBackendException.
Also, I don't quite understand why underscores are not allowed in index names; maybe this is a limitation of Lucene.


Incorrect result when lucene index is present

inverseintegral42@...
 

When I run the following code with janusgraph-inmemory and janusgraph-lucene, both version 0.6.2:
PropertiesConfiguration conf = ConfigurationUtil.loadPropertiesConfig("conf/test.properties");
JanusGraph graph = JanusGraphFactory.open(conf);

GraphTraversalSource g = graph.traversal();
JanusGraphManagement m = graph.openManagement();

VertexLabel l = m.makeVertexLabel("L").make();
PropertyKey p = m.makePropertyKey("p").dataType(Short.class).make();
PropertyKey q = m.makePropertyKey("q").dataType(UUID.class).make();
m.buildIndex("someName", Vertex.class).addKey(p).addKey(q).indexOnly(l).buildMixedIndex("search");
m.commit();

g.addV("L").property("p", (short) 1).next();
g.tx().commit();

System.out.println(g.V().hasLabel("L").has("q").count().next());
System.out.println(g.V().hasLabel("L").has("q", not(eq(UUID.randomUUID()))).count().next());

I get the output
0
1

But I would expect the output to be
0
0

since there is no vertex with label L and property q. When I remove the index the result is correct.
I assume that this is because the UUID type is handled incorrectly.
Note that this only happens if the index is on both keys (p and q).
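The expectation itself can be pinned down without JanusGraph. Below is a minimal sketch of the intended has() semantics (the class and method names are invented for illustration, and vertices are modelled as plain property maps): a vertex that lacks property q should match neither has("q") nor has("q", neq(...)).

```java
import java.util.List;
import java.util.Map;
import java.util.UUID;

public class HasNeqSemanticsDemo {
    // Counts vertices matching has("q", neq(other)): the property must
    // exist AND differ from the given value.
    static long countHasQNeq(List<Map<String, Object>> vertices, UUID other) {
        return vertices.stream()
                .filter(v -> v.containsKey("q"))        // has("q", ...) requires q to be present
                .filter(v -> !v.get("q").equals(other)) // neq only compares existing values
                .count();
    }

    public static void main(String[] args) {
        // The single vertex from the example: label L, p = 1, no q at all.
        List<Map<String, Object>> vertices = List.of(Map.of("p", (short) 1));
        System.out.println(countHasQNeq(vertices, UUID.randomUUID())); // expected: 0
    }
}
```

Under these semantics both traversals should return 0, which matches the behaviour observed without the index.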

I'm using the following configuration:

gremlin.graph=org.janusgraph.core.JanusGraphFactory
storage.backend=inmemory
index.search.backend=lucene
index.search.directory=data/searchindex
schema.default=none

 


Re: Lucene index long overflow

hadoopmarc@...
 

OK, good catch, it is a bug. I had not recognized Long.MAX_VALUE in your query. The LuceneIndex class converts the neq into a range query and adds 1 to the requested value without any overflow check. The writer must have thought nobody would notice. It works for:
g.V().has("age", P.neq(9223372036854775806L))

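For what it's worth, the failing +1 can be reproduced with the JDK alone. This sketch only mirrors the bound computation described above (the helper name is invented, not the real JanusGraph API); it shows why Long.MAX_VALUE - 1 works while Long.MAX_VALUE throws:

```java
public class NeqOverflowDemo {
    // Mirrors the exclusive upper bound a neq-to-range rewrite would compute;
    // Math.addExact throws ArithmeticException("long overflow") at Long.MAX_VALUE.
    static long upperBoundExclusive(long value) {
        return Math.addExact(value, 1);
    }

    public static void main(String[] args) {
        System.out.println(upperBoundExclusive(9223372036854775806L)); // fine
        try {
            upperBoundExclusive(9223372036854775807L);
        } catch (ArithmeticException e) {
            System.out.println(e.getMessage()); // "long overflow", as in the reported stack trace
        }
    }
}
```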
If you want, you can report it as an issue.

Best wishes,   Marc


Re: Lucene index long overflow

inverseintegral42@...
 

Your example indeed seems to work fine. If you change the traversal as follows:

g.V().has("age", P.neq(9223372036854775807L))

it should throw an exception. The neq predicate seems to be the problem here.


Re: Lucene index long overflow

hadoopmarc@...
 

I am not sure what the trouble is in your approach. Using the variable names key and label might be problematic (it is in the Gremlin Console). Rather than building out your example, I chose to rework the GraphOfTheGodsFactory with a Long property index for use in the console. See below; this provides you with a working playground (tested on JanusGraph 0.5.3).

graph = JanusGraphFactory.open('conf/janusgraph-berkeleyje-lucene.properties')


management = graph.openManagement();
name = management.makePropertyKey("name").dataType(String.class).make();
nameIndex = management.buildIndex("name", Vertex.class).addKey(name).unique().buildCompositeIndex();
management.setConsistency(nameIndex, ConsistencyModifier.LOCK);
age = management.makePropertyKey("age").dataType(Long.class).make();
management.buildIndex("vertices", Vertex.class).addKey(age).buildMixedIndex("search");

time = management.makePropertyKey("time").dataType(Integer.class).make();
reason = management.makePropertyKey("reason").dataType(String.class).make();
place = management.makePropertyKey("place").dataType(Geoshape.class).make();
management.buildIndex("edges", Edge.class).addKey(reason).addKey(place).buildMixedIndex("search");

management.makeEdgeLabel("father").multiplicity(Multiplicity.MANY2ONE).make();
management.makeEdgeLabel("mother").multiplicity(Multiplicity.MANY2ONE).make();
battled = management.makeEdgeLabel("battled").signature(time).make();
management.buildEdgeIndex(battled, "battlesByTime", Direction.BOTH, Order.desc, time);
management.makeEdgeLabel("lives").signature(reason).make();
management.makeEdgeLabel("pet").make();
management.makeEdgeLabel("brother").make();

management.makeVertexLabel("titan").make();
management.makeVertexLabel("location").make();
management.makeVertexLabel("god").make();
management.makeVertexLabel("demigod").make();
management.makeVertexLabel("human").make();
management.makeVertexLabel("monster").make();

management.commit();

tx = graph.newTransaction();
// vertices

saturn = tx.addVertex(T.label, "titan", "name", "saturn", "age", 10000);
sky = tx.addVertex(T.label, "location", "name", "sky");
sea = tx.addVertex(T.label, "location", "name", "sea");
jupiter = tx.addVertex(T.label, "god", "name", "jupiter", "age", 5000);
neptune = tx.addVertex(T.label, "god", "name", "neptune", "age", 4500);
hercules = tx.addVertex(T.label, "demigod", "name", "hercules", "age", 30);
alcmene = tx.addVertex(T.label, "human", "name", "alcmene", "age", 45);
pluto = tx.addVertex(T.label, "god", "name", "pluto", "age", 4000);
nemean = tx.addVertex(T.label, "monster", "name", "nemean");
hydra = tx.addVertex(T.label, "monster", "name", "hydra");
cerberus = tx.addVertex(T.label, "monster", "name", "cerberus");
tartarus = tx.addVertex(T.label, "location", "name", "tartarus");

// edges

jupiter.addEdge("father", saturn);
jupiter.addEdge("lives", sky, "reason", "loves fresh breezes");
jupiter.addEdge("brother", neptune);
jupiter.addEdge("brother", pluto);

neptune.addEdge("lives", sea).property("reason", "loves waves");
neptune.addEdge("brother", jupiter);
neptune.addEdge("brother", pluto);

hercules.addEdge("father", jupiter);
hercules.addEdge("mother", alcmene);
hercules.addEdge("battled", nemean, "time", 1, "place", Geoshape.point(38.1f, 23.7f));
hercules.addEdge("battled", hydra, "time", 2, "place", Geoshape.point(37.7f, 23.9f));
hercules.addEdge("battled", cerberus, "time", 12, "place", Geoshape.point(39f, 22f));

pluto.addEdge("brother", jupiter);
pluto.addEdge("brother", neptune);
pluto.addEdge("lives", tartarus, "reason", "no fear of death");
pluto.addEdge("pet", cerberus);

cerberus.addEdge("lives", tartarus);

// commit the transaction to disk
tx.commit();

g = graph.traversal()
g.V().has("age", P.lt(9223372036854775807L))

gremlin> g.V().has("age", P.lt(9223372036854775807L))
==>v[4264]
==>v[12536]
==>v[8344]
==>v[8440]
==>v[8272]
==>v[4344]


Re: Lucene index long overflow

inverseintegral42@...
 

A similar problem occurs when using the following traversal:

graph.traversal()
.V()
.hasLabel("test")
.has("prop", P.neq(-9223372036854775808L))
.count()
.next();
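Presumably this is the mirror image of the Long.MAX_VALUE case: rewriting neq(Long.MIN_VALUE) needs an exclusive bound of value - 1, which underflows. A JDK-only sketch (the helper name is invented, not the real JanusGraph code):

```java
public class NeqUnderflowDemo {
    // Mirrors the exclusive lower bound of a neq-to-range rewrite;
    // underflows for Long.MIN_VALUE just as value + 1 overflows for MAX_VALUE.
    static long lowerBoundExclusive(long value) {
        return Math.subtractExact(value, 1);
    }

    public static void main(String[] args) {
        try {
            lowerBoundExclusive(-9223372036854775808L); // Long.MIN_VALUE
        } catch (ArithmeticException e) {
            System.out.println(e.getMessage()); // "long overflow"
        }
    }
}
```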


Re: Unsatisfied Link Error - Jansi

Joe Obernberger
 

Turns out that I needed to set java.io.tmpdir to somewhere other than /tmp (assuming that's the default). Not sure why exactly, but by setting JAVA_OPTIONS with:
-Djava.io.tmpdir=/some/place/else
Then gremlin loads OK.
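Spelled out (the path is just a placeholder, and the explanation is a guess rather than something verified here): Jansi extracts a native library into java.io.tmpdir, so if /tmp is mounted noexec the JVM cannot load the library from there.

```shell
# Placeholder path; pick any writable directory on a filesystem without noexec.
export JAVA_OPTIONS="-Djava.io.tmpdir=/some/place/else"
bin/gremlin.sh
```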

-Joe

On 7/20/2022 3:51 PM, Joe Obernberger via lists.lfaidata.foundation wrote:
Getting this error when trying to run Gremlin on an AWS EC2 instance - exactly like:

https://issues.apache.org/jira/browse/TINKERPOP-2584?page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel&focusedCommentId=17367398#comment-17367398

I'm using openjdk 11.0.13, but it looks like it's some sort of amazon version as it comes up as 1:11.0.13.0.8-1.amzn2.0.3 when doing a yum list installed.
Is there any workaround?
Thanks.

-Joe

--
This email has been checked for viruses by AVG.
https://www.avg.com


Unsatisfied Link Error - Jansi

Joe Obernberger
 

Getting this error when trying to run Gremlin on an AWS EC2 instance - exactly like:

https://issues.apache.org/jira/browse/TINKERPOP-2584?page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel&focusedCommentId=17367398#comment-17367398

I'm using openjdk 11.0.13, but it looks like it's some sort of amazon version as it comes up as 1:11.0.13.0.8-1.amzn2.0.3 when doing a yum list installed.
Is there any workaround?
Thanks.

-Joe




Lucene index long overflow

inverseintegral42@...
 

When I run the following code with janusgraph-inmemory and janusgraph-lucene, both version 0.6.2:

PropertiesConfiguration conf = ConfigurationUtil.loadPropertiesConfig("conf/test.properties");
JanusGraph graph = JanusGraphFactory.open(conf);

JanusGraphManagement m = graph.openManagement();
PropertyKey key = m.makePropertyKey("prop").dataType(Long.class).make();
VertexLabel label = m.makeVertexLabel("test").make();
m.buildIndex("propIndex", Vertex.class).addKey(key).indexOnly(label).buildMixedIndex("search");
m.commit();

graph.traversal()
.V()
.hasLabel("test")
.has("prop", P.neq(9223372036854775807L))
.count()
.next();


An unexpected exception is thrown:

Exception in thread "main" org.janusgraph.core.JanusGraphException: Could not execute operation due to backend exception
    at org.janusgraph.diskstorage.util.BackendOperation.execute(BackendOperation.java:54)
    at org.janusgraph.diskstorage.BackendTransaction.executeRead(BackendTransaction.java:488)
    at org.janusgraph.diskstorage.BackendTransaction.indexQueryCount(BackendTransaction.java:431)
    at org.janusgraph.graphdb.database.IndexSerializer.queryCount(IndexSerializer.java:603)
    at org.janusgraph.graphdb.query.graph.MixedIndexCountQueryBuilder.executeTotals(MixedIndexCountQueryBuilder.java:61)
    at org.janusgraph.graphdb.tinkerpop.optimize.step.JanusGraphMixedIndexCountStep.processNextStart(JanusGraphMixedIndexCountStep.java:75)
    at org.apache.tinkerpop.gremlin.process.traversal.step.util.AbstractStep.next(AbstractStep.java:135)
    at org.apache.tinkerpop.gremlin.process.traversal.step.util.AbstractStep.next(AbstractStep.java:40)
    at org.apache.tinkerpop.gremlin.process.traversal.util.DefaultTraversal.next(DefaultTraversal.java:240)
    at Scratch.main(scratch.java:35)
Caused by: org.janusgraph.diskstorage.PermanentBackendException: Permanent exception while executing backend operation indexQueryCount
    at org.janusgraph.diskstorage.util.BackendOperation.executeDirect(BackendOperation.java:79)
    at org.janusgraph.diskstorage.util.BackendOperation.execute(BackendOperation.java:52)
    ... 9 more
Caused by: java.lang.ArithmeticException: long overflow
    at java.base/java.lang.Math.addExact(Math.java:848)
    at org.janusgraph.diskstorage.lucene.LuceneIndex.numericQuery(LuceneIndex.java:634)
    at org.janusgraph.diskstorage.lucene.LuceneIndex.convertQuery(LuceneIndex.java:766)
    at org.janusgraph.diskstorage.lucene.LuceneIndex.convertQuery(LuceneIndex.java:864)
    at org.janusgraph.diskstorage.lucene.LuceneIndex.queryCount(LuceneIndex.java:927)
    at org.janusgraph.diskstorage.indexing.IndexTransaction.queryCount(IndexTransaction.java:114)
    at org.janusgraph.diskstorage.BackendTransaction$7.call(BackendTransaction.java:434)
    at org.janusgraph.diskstorage.BackendTransaction$7.call(BackendTransaction.java:431)
    at org.janusgraph.diskstorage.util.BackendOperation.executeDirect(BackendOperation.java:66)
    ... 10 more

I would expect this query to run without any problem since the constant does not overflow.


Re: solr janusgraph indexing with http basic authentication

Real Life Adventure
 

Thanks Marc for the Quick Update.


Re: solr janusgraph indexing with http basic authentication

hadoopmarc@...
 

Unfortunately, JanusGraph does not support basic authentication on its SOLR client yet, see https://github.com/JanusGraph/janusgraph/issues/1056

Before finding this JanusGraph issue, I came across the following piece of SOLR documentation which might be worth a try:

https://solr.apache.org/guide/8_1/basic-authentication-plugin.html#global-jvm-basic-auth-credentials
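If you try that route, the linked page amounts to passing two system properties to every JVM that talks to SOLR. An untested sketch (substitute your own credentials):

```shell
# Preemptive basic-auth for all SolrJ clients in this JVM (per the SOLR docs above).
export JAVA_OPTIONS="$JAVA_OPTIONS \
  -Dsolr.httpclient.builder.factory=org.apache.solr.client.solrj.impl.PreemptiveBasicAuthClientBuilderFactory \
  -Dbasicauth=username:password"
```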

Finally, note that the discussion in the JanusGraph issue also includes a workaround for applying SSL on the SOLR connection, which is a natural further step for protecting your SOLR data.

Best wishes,     Marc


solr janusgraph indexing with http basic authentication

Real Life Adventure
 

Hello Everyone,
I am trying to index dynamic graphs with a Solr index, but I am facing the error below.

#############################################
gremlin> sg.V().has('state', textContains('runn'))
Error from server at http://*.*.*.*:***/solr: Expected mime type application/octet-stream but got text/html. <html>
<head>
<meta http-equiv="Content-Type" content="text/html;charset=utf-8"/>
<title>Error 401 require authentication</title>
</head>
<body><h2>HTTP ERROR 401 require authentication</h2>
<table>
<tr><th>URI:</th><td>/solr/state/select</td></tr>
<tr><th>STATUS:</th><td>401</td></tr>
<tr><th>MESSAGE:</th><td>require authentication</td></tr>
<tr><th>SERVLET:</th><td>default</td></tr>
</table>
 
</body>
</html>
#############################################
mgmt.set('index.search.backend','solr')
mgmt.set('index.search.solr.mode','http')
mgmt.set('index.search.solr.http-urls','http://xxx:xx/solr')
mgmt.set('index.search.solr.username','***')
mgmt.set('index.search.solr.password','***')
I set the above configuration for the Solr index.
Any help would be appreciated.

Thanks in advance,
Real life Adventure.


Re: Lucene index inconsistency

hadoopmarc@...
 

I agree. If you start out with an empty graph, the Lucene index is not created until you add the first vertex. It is not easy to find out why Lucene would only allow alphanumeric characters in index names (an implicit assumption in the JanusGraph Preconditions check) and whether this still holds for the currently used version of Lucene.
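For anyone who wants to dig in: the effect of that precondition can be reproduced with the standard library alone. This is only a sketch of what the check appears to do (the real code uses Guava's Preconditions inside LuceneIndex.getStoreDirectory and may accept a slightly different character class):

```java
public class StoreNameCheck {
    // Rejects any store name that is not purely alphanumeric, reproducing
    // the "Invalid store name: some_name" failure from the stack trace.
    static void checkStoreName(String store) {
        boolean alphanumeric = !store.isEmpty()
                && store.chars().allMatch(Character::isLetterOrDigit);
        if (!alphanumeric) {
            throw new IllegalArgumentException("Invalid store name: " + store);
        }
    }

    public static void main(String[] args) {
        checkStoreName("someName"); // fine: purely alphanumeric
        try {
            checkStoreName("some_name");
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage()); // Invalid store name: some_name
        }
    }
}
```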

If you want, you can create an issue for it or even provide a PR.

Best wishes,    Marc


On Fri, Jul 15, 2022 at 12:43 PM, @inverseintegral wrote:
Invalid store name


Lucene index inconsistency

inverseintegral42@...
 

Hey everyone,

I'm currently running janusgraph-inmemory with janusgraph-lucene, both version 0.6.2, and I'm experiencing strange behaviour when creating an index that contains an underscore in its name.
Whenever I run the following code:
PropertiesConfiguration conf = ConfigurationUtil.loadPropertiesConfig("conf/test.properties");
JanusGraph graph = JanusGraphFactory.open(conf);
String name = "some_name";

graph.tx().rollback();
JanusGraphManagement management = graph.openManagement();

management.makePropertyKey("name").dataType(String.class).make();
management.commit();

management = graph.openManagement();

management.buildIndex(name, Vertex.class)
.addKey(management.getPropertyKey("name"))
.buildMixedIndex("search");
management.commit();

ManagementSystem.awaitGraphIndexStatus(graph, name)
.status(SchemaStatus.REGISTERED)
.call();

management = graph.openManagement();
management.updateIndex(management.getGraphIndex(name), SchemaAction.REINDEX).get();
management.commit();

management = graph.openManagement();
System.out.println(management.printIndexes());

graph.traversal().addV().property("name", "Test").next();
graph.tx().commit();
I get the following output:

INFO  o.j.g.d.management.ManagementSystem - Index update job successful for [some_name]
------------------------------------------------------------------------------------------------
Graph Index (Vertex)           | Type        | Unique    | Backing        | Key:           Status |
---------------------------------------------------------------------------------------------------
some_name                      | Mixed       | false     | search         | name:         ENABLED |
---------------------------------------------------------------------------------------------------
Graph Index (Edge)             | Type        | Unique    | Backing        | Key:           Status |
---------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------
Relation Index (VCI)           | Type        | Direction | Sort Key       | Order    |     Status |
---------------------------------------------------------------------------------------------------

ERROR o.j.g.database.StandardJanusGraph - Error while committing index mutations for transaction [5] on index: search
org.janusgraph.core.JanusGraphException: Could not execute operation due to backend exception
at org.janusgraph.diskstorage.util.BackendOperation.execute(BackendOperation.java:54)
at org.janusgraph.diskstorage.indexing.IndexTransaction.flushInternal(IndexTransaction.java:158)
at org.janusgraph.diskstorage.indexing.IndexTransaction.commit(IndexTransaction.java:139)
at org.janusgraph.diskstorage.BackendTransaction.commitIndexes(BackendTransaction.java:143)
at org.janusgraph.graphdb.database.StandardJanusGraph.commit(StandardJanusGraph.java:804)
at org.janusgraph.graphdb.transaction.StandardJanusGraphTx.commit(StandardJanusGraphTx.java:1525)
at org.janusgraph.graphdb.tinkerpop.JanusGraphBlueprintsGraph$GraphTransaction.doCommit(JanusGraphBlueprintsGraph.java:322)
at org.apache.tinkerpop.gremlin.structure.util.AbstractTransaction.commit(AbstractTransaction.java:104)
at org.janusgraph.graphdb.tinkerpop.JanusGraphBlueprintsGraph$GraphTransaction.commit(JanusGraphBlueprintsGraph.java:300)
at Scratch.main(scratch.java:48)
Caused by: org.janusgraph.diskstorage.PermanentBackendException: Permanent exception while executing backend operation IndexMutation
at org.janusgraph.diskstorage.util.BackendOperation.executeDirect(BackendOperation.java:79)
at org.janusgraph.diskstorage.util.BackendOperation.execute(BackendOperation.java:52)
... 9 common frames omitted
Caused by: java.lang.IllegalArgumentException: Invalid store name: some_name
at com.google.common.base.Preconditions.checkArgument(Preconditions.java:217)
at org.janusgraph.diskstorage.lucene.LuceneIndex.getStoreDirectory(LuceneIndex.java:194)
at org.janusgraph.diskstorage.lucene.LuceneIndex.getWriter(LuceneIndex.java:216)
at org.janusgraph.diskstorage.lucene.LuceneIndex.mutateStores(LuceneIndex.java:289)
at org.janusgraph.diskstorage.lucene.LuceneIndex.mutate(LuceneIndex.java:275)
at org.janusgraph.diskstorage.indexing.IndexTransaction$1.call(IndexTransaction.java:161)
at org.janusgraph.diskstorage.indexing.IndexTransaction$1.call(IndexTransaction.java:158)
at org.janusgraph.diskstorage.util.BackendOperation.executeDirect(BackendOperation.java:66)
... 10 common frames omitted

Wouldn't it be better to disallow the creation of the index in the first place? After creation, all queries seem to fail with the same PermanentBackendException.
Also, I don't quite understand why underscores are not allowed in index names; maybe this is a limitation of Lucene.
 



Re: Write vertex program output to HDFS using SparkGraphComputer

hadoopmarc@...
 

Are you sure the JanusGraph hadoop-2.x clients cannot be used with your hadoop-3.x cluster? See: https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/Compatibility.html#Policy

Marc


Re: Write vertex program output to HDFS using SparkGraphComputer

anjanisingh22@...
 

Thanks for the response, Marc. Yes, I am using JanusGraph as input to SparkGraphComputer. We have Hadoop 3, which is not supported by TinkerPop 3.5 but, per the documentation, is supported by TinkerPop 3.6, so I am trying to run it with 3.6.

I will validate hadoop-gremlin via console.

Thanks,
Anjani


Re: Write vertex program output to HDFS using SparkGraphComputer

hadoopmarc@...
 

Hi Anjani,

Does hdfs work from the Gremlin Console? See the example at:
https://tinkerpop.apache.org/docs/current/reference/#_installing_hadoop_gremlin
https://tinkerpop.apache.org/docs/current/reference/#sparkgraphcomputer

In particular, can you reproduce the "Using CloneVertexProgram" example from the last link?

What is the input to SparkGraphComputer in your use case? I guess it is JanusGraph, being on the JanusGraph user list right now. However, JanusGraph ships with TinkerPop 3.5, while you mention TinkerPop 3.6 above.

Best wishes,     Marc



Re: Nodes with lots of edges

Matthew Nguyen <nguyenm9@...>
 

Saw the same thing a while back. Boxuan opened an issue for it: https://github.com/JanusGraph/janusgraph/issues/2966


-----Original Message-----
From: Joe Obernberger <joseph.obernberger@...>
To: janusgraph-users@...
Sent: Mon, Jul 11, 2022 7:32 am
Subject: Re: [janusgraph-users] Nodes with lots of edges

Hi Marc - yes, it takes minutes to do queries on nodes with lots of edges.  Like:
:> g.V().has("somevar","someVal").outE().has("indexedField","value")
I believe this is because of the large partition size. I would love to use vertex cutting, but there seems to be a problem with it:
-----
Every time I built a small graph and exported it to GraphML for viewing in Gephi, I would have node IDs that only existed in the edges list.
I printed the node ID in my code everywhere it was used and would never see it in the output, but the GraphML had it in the edges list, and those 'zombie nodes' did exist in the graph, as confirmed by Gremlin queries. This was happening because I was using:
VertexLabel sourceLabel = mgmt.makeVertexLabel("source").partition().make();
Once I removed partition(), the 'zombie' node IDs disappeared. I wanted to use partitioning there since those particular vertices can have a lot of edges, potentially billions.
-----
Is there a bug with vertex cutting?  Thank you!
-Joe
On 7/8/2022 1:44 AM, hadoopmarc@... wrote:
Hi Joe,

You do not describe whether breaking this rule of thumb causes real performance issues in your case. Anyway, JanusGraph allows you to partition the stored edges of a node, see:
https://docs.janusgraph.org/advanced-topics/partitioning/#vertex-cut

Marc


