|
Re: Required Capacity Error - JanusGraph on Cassandra
Hi Joe,
I have no detailed knowledge of the JanusGraph backend code myself, but here is a question for clarification (so that others get more hints about the cause of the issue):
Is it possible that the
|
By
hadoopmarc@...
·
#6633
·
Edited
|
|
Re: Composite Indexing not working as expected for property on vertex in janusgraph 0.6.1
gremlin> g.V().has("newid","xyz").valueMap(true).tryNext().isPresent()
==>false
gremlin> g.V().has("newid",unfold().is("hash data")).valueMap(true).tryNext().isPresent()
07:01:35 WARN
|
By
Nikita Pande
·
#6632
·
Edited
|
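A quick way to narrow down the behaviour shown above is to inspect the index from the management API: if the composite index does not exist, does not cover the newid key, or has not reached the ENABLED state, lookups silently fall back to a full scan. Below is a minimal Java sketch; the index name "byNewid" and the configuration file path are assumptions, not taken from the thread.

import org.janusgraph.core.JanusGraph;
import org.janusgraph.core.JanusGraphFactory;
import org.janusgraph.core.PropertyKey;
import org.janusgraph.core.schema.JanusGraphIndex;
import org.janusgraph.core.schema.JanusGraphManagement;

public class CheckNewidIndex {
    public static void main(String[] args) {
        JanusGraph graph = JanusGraphFactory.open("conf/janusgraph-cql.properties"); // example path
        JanusGraphManagement mgmt = graph.openManagement();
        try {
            JanusGraphIndex index = mgmt.getGraphIndex("byNewid"); // assumed index name
            if (index == null) {
                System.out.println("No graph index named 'byNewid' exists");
            } else {
                // Every key must report ENABLED before the index is used for lookups.
                for (PropertyKey key : index.getFieldKeys()) {
                    System.out.println(key.name() + " -> " + index.getIndexStatus(key));
                }
            }
        } finally {
            mgmt.rollback(); // read-only inspection, nothing to commit
            graph.close();
        }
    }
}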
|
Composite Indexing not working as expected for property on vertex in janusgraph 0.6.1
g.V().has("newid", "xyz").count().profile()
04:31:35 WARN org.janusgraph.graphdb.transaction.StandardJanusGraphTx - Query requires iterating over all vertices [(newid xyz)].
Even though the newid
|
By
Nikita Pande
·
#6631
·
|
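The warning quoted above ("Query requires iterating over all vertices") means that no usable index backs the has("newid", ...) condition. Below is a minimal Java sketch of how such a composite index could be defined and activated; the index name "byNewid" and the configuration file path are assumptions, and the final REINDEX step is only needed for data written before the index existed.

import org.apache.tinkerpop.gremlin.structure.Vertex;
import org.janusgraph.core.JanusGraph;
import org.janusgraph.core.JanusGraphFactory;
import org.janusgraph.core.PropertyKey;
import org.janusgraph.core.schema.JanusGraphManagement;
import org.janusgraph.core.schema.SchemaAction;
import org.janusgraph.core.schema.SchemaStatus;
import org.janusgraph.graphdb.database.management.ManagementSystem;

public class BuildNewidIndex {
    public static void main(String[] args) throws Exception {
        JanusGraph graph = JanusGraphFactory.open("conf/janusgraph-cql.properties"); // example path

        // 1. Define a composite index on the 'newid' property key.
        JanusGraphManagement mgmt = graph.openManagement();
        PropertyKey newid = mgmt.containsPropertyKey("newid")
                ? mgmt.getPropertyKey("newid")
                : mgmt.makePropertyKey("newid").dataType(String.class).make();
        mgmt.buildIndex("byNewid", Vertex.class).addKey(newid).buildCompositeIndex();
        mgmt.commit();

        // 2. Wait until the index definition has propagated to all instances.
        ManagementSystem.awaitGraphIndexStatus(graph, "byNewid")
                .status(SchemaStatus.REGISTERED, SchemaStatus.ENABLED)
                .call();

        // 3. Vertices written before the index existed are only picked up after a reindex,
        //    which also moves the index to ENABLED.
        mgmt = graph.openManagement();
        mgmt.updateIndex(mgmt.getGraphIndex("byNewid"), SchemaAction.REINDEX).get();
        mgmt.commit();

        graph.close();
    }
}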
|
Required Capacity Error - JanusGraph on Cassandra
Hi all - I'm getting the following error when executing the following query:
List<Object> correlationIDsListSource = traversal.V().has("source", source).outE("correlation").has("type",
|
By
Joe Obernberger
·
#6630
·
|
|
Re: BigTable - large rows (more than 256MB)
> I've seen cases in the past, where queries relying on a mixed index fail while the index backend still hasn't caught up to the storage backend.
Yes that could happen. You can use
|
By
Boxuan Li
·
#6629
·
|
|
Re: BigTable - large rows (more than 256MB)
That's a great post. This is exactly the use-case we have, with a type property.
Regarding the usage of mixed indexes -
- I'm less concerned with property updates in this case (as opposed to
|
By
schwartz@...
·
#6628
·
|
|
Re: BigTable - large rows (more than 256MB)
Hi Assaf,
I see. That makes sense and unfortunately, I don't have a perfect solution. I would suggest you use a mixed index instead.
Regarding the data model, you can take a look at a blog I wrote
|
By
Boxuan Li
·
#6627
·
|
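To illustrate the mixed-index suggestion above: a mixed index hands the lookup over to the configured index backend (Elasticsearch, Solr or Lucene) instead of storing every matching vertex id under a single row in the storage backend, which is what makes it a better fit for low-cardinality keys. The Java sketch below is only an example under assumptions: the backing-index name "search", the property key "type" and the configuration file path are not taken from the thread.

import org.apache.tinkerpop.gremlin.structure.Vertex;
import org.janusgraph.core.JanusGraph;
import org.janusgraph.core.JanusGraphFactory;
import org.janusgraph.core.PropertyKey;
import org.janusgraph.core.schema.JanusGraphManagement;

public class BuildTypeMixedIndex {
    public static void main(String[] args) {
        JanusGraph graph = JanusGraphFactory.open("conf/janusgraph-cql-es.properties"); // example path

        JanusGraphManagement mgmt = graph.openManagement();
        PropertyKey type = mgmt.containsPropertyKey("type")
                ? mgmt.getPropertyKey("type")
                : mgmt.makePropertyKey("type").dataType(String.class).make();
        // "search" must match an index.<name>.backend entry in the graph configuration;
        // lookups are then served by that backend rather than by one giant storage row.
        mgmt.buildIndex("typeMixed", Vertex.class).addKey(type).buildMixedIndex("search");
        mgmt.commit();

        graph.close();
    }
}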
|
Re: BigTable - large rows (more than 256MB)
Hi Boxuan - thanks for the quick response!
I get a feeling that 2) might be the issue. Since JanusGraph has never allowed us to index labels, we ended up having a "shadow property" which is set as a
|
By
schwartz@...
·
#6626
·
|
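To make the "shadow property" pattern above concrete: since labels themselves cannot be indexed, a property carrying the same value as the label is written at creation time, and that property is what gets indexed and queried. A small Java sketch of the idea against an in-memory graph; the property name "typeShadow" and the label "device" are hypothetical.

import org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversalSource;
import org.janusgraph.core.JanusGraph;
import org.janusgraph.core.JanusGraphFactory;

public class ShadowPropertyExample {
    public static void main(String[] args) {
        JanusGraph graph = JanusGraphFactory.open("inmemory"); // in-memory backend, for illustration only
        GraphTraversalSource g = graph.traversal();

        // Write the label value into an indexable property alongside the label itself.
        g.addV("device").property("typeShadow", "device").property("name", "router-1").iterate();

        // Queries then filter on the indexed shadow property instead of hasLabel("device").
        long devices = g.V().has("typeShadow", "device").count().next();
        System.out.println("devices: " + devices);

        graph.close();
    }
}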
|
Re: BigTable - large rows (more than 256MB)
Hi Assaf,
Having too many vertices with the same label shouldn't be a problem. The two most likely causes are:
1) You have a super node that has too many edges.
2) You have a composite index with
|
By
Boxuan Li
·
#6625
·
|
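Both causes above end up as one oversized row in the storage backend: a super node keeps all of its edges in the row of that vertex, and a composite index keeps all vertex ids for a given value in the row of that value, which degenerates when the key has only a few distinct values. A hedged Java sketch for checking both; the vertex id, the property key "type" and the configuration file path are placeholders.

import org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversalSource;
import org.janusgraph.core.JanusGraph;
import org.janusgraph.core.JanusGraphFactory;

public class LargeRowChecks {
    public static void main(String[] args) {
        JanusGraph graph = JanusGraphFactory.open("conf/janusgraph-cql.properties"); // example path
        GraphTraversalSource g = graph.traversal();

        // Cause 1): a super node keeps all of its edges in one row; a huge degree here is a red flag.
        Object suspectId = 4128L; // placeholder vertex id
        long degree = g.V(suspectId).bothE().count().next();
        System.out.println("edge count of suspected super node: " + degree);

        // Cause 2): a composite index on a low-cardinality key collapses into a few huge rows.
        // Note: this is a full scan and can be expensive on a large graph.
        long distinctValues = g.V().values("type").dedup().count().next();
        System.out.println("distinct values of 'type': " + distinctValues);

        graph.close();
    }
}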
|
BigTable - large rows (more than 256MB)
Hi!
We are using JG on top of Bigtable. While trying to understand some slow queries, I found the following in the Bigtable query visualizer: "Large rows — Found 1 single key storing more than
|
By
schwartz@...
·
#6624
·
|
|
Re: Janusgraph-full-0.6.1: how to fix "WARNING: Critical severity vulnerabilities were found with Log4j!"
Hello Marc,
Actually, my previous testing was incomplete. After removing those two log4j-related jar files from the lib directory, I can start Elasticsearch, Cassandra and JanusGraph Server
|
By
Yingjie Li
·
#6623
·
|
|
Re: Connecting to server from java: can't lock berkeleyje
Apparently it replaces the full path to the properties file with a relative path (conf/remote-graph.properties), which isn't found, of course.
So, I've copied them and now I can connect.
Thanks.
|
By
queshaw
·
#6622
·
|
|
Re: Connecting to server from java: can't lock berkeleyje
The stack trace is not very helpful, unfortunately, but that is not your doing. The original example does not use an "ats" variable but instantiates g with a one-liner, but it is hard to see why this could
|
By
hadoopmarc@...
·
#6621
·
|
|
Re: Connecting to server from java: can't lock berkeleyje
D'oh... So, after that, I get a NullPointerException:
AnonymousTraversalSource<GraphTraversalSource> ats = traversal(); // not null
if (new File(props).exists()) // path to
|
By
queshaw
·
#6620
·
|
|
Re: Janusgraph-full-0.6.1: how to fix "WARNING: Critical severity vulnerabilities were found with Log4j!"
Hello Marc,
Yes. Now it works!
Thanks
Yingjie
|
By
Yingjie Li
·
#6619
·
|
|
Re: Connecting to server from java: can't lock berkeleyje
You can take a look at:
https://docs.janusgraph.org/interactions/connecting/java/
Apparently, you used JanusGraphFactory, which opens an embedded JanusGraph instance in the client. If you want to
|
By
hadoopmarc@...
·
#6618
·
|
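To complement the answer above, here is a minimal Java sketch of connecting to a running JanusGraph Server through the Gremlin driver instead of opening an embedded graph with JanusGraphFactory; host, port and the remote traversal source name "g" are the stock defaults from the distribution and may differ in your setup.

import org.apache.tinkerpop.gremlin.driver.Cluster;
import org.apache.tinkerpop.gremlin.driver.remote.DriverRemoteConnection;
import org.apache.tinkerpop.gremlin.process.traversal.AnonymousTraversalSource;
import org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversalSource;

public class RemoteConnectExample {
    public static void main(String[] args) throws Exception {
        // Defaults from the JanusGraph distribution; JanusGraph-specific types and predicates
        // additionally need the serializer support that janusgraph-driver provides.
        Cluster cluster = Cluster.build("localhost").port(8182).create();
        GraphTraversalSource g = AnonymousTraversalSource.traversal()
                .withRemote(DriverRemoteConnection.using(cluster, "g"));

        System.out.println("vertex count: " + g.V().count().next());

        g.close();       // closes the remote connection
        cluster.close(); // closes the driver cluster
    }
}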
|
Re: Connecting to server from java: can't lock berkeleyje
I should probably add that I intend to use import static org.janusgraph.core.attribute.Text.* for textContains etc.
|
By
queshaw
·
#6617
·
|
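For context on the static import mentioned above: Text.textContains is a JanusGraph predicate that matches whole tokens and requires a mixed index with a text mapping on the queried key. A small usage sketch, shown against an embedded graph for brevity; the property names "description" and "name" and the configuration file path are assumptions.

import static org.janusgraph.core.attribute.Text.textContains;

import java.util.List;
import org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversalSource;
import org.janusgraph.core.JanusGraph;
import org.janusgraph.core.JanusGraphFactory;

public class TextContainsExample {
    public static void main(String[] args) {
        JanusGraph graph = JanusGraphFactory.open("conf/janusgraph-cql-es.properties"); // example path
        GraphTraversalSource g = graph.traversal();

        // textContains matches whole tokens and needs a mixed index with a text mapping on 'description'.
        List<Object> names = g.V()
                .has("description", textContains("graph"))
                .values("name")
                .toList();
        names.forEach(System.out::println);

        graph.close();
    }
}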
|
Connecting to server from java: can't lock berkeleyje
I'm trying to connect to JanusGraph Server from Java. If I follow the instructions in the documentation, using this in Gradle:
implementation 'org.janusgraph:janusgraph-driver:0.6.2'
|
By
queshaw
·
#6616
·
|
|
Re: Janusgraph-full-0.6.1: how to fix "WARNING: Critical severity vulnerabilities were found with Log4j!"
Hi Yingjie,
My suggestion was incomplete. In addition to removing the log4j-1.2.17.jar file from the lib folder, you have to remove the slf4j-log4j12-1.7.30.jar file as well. Otherwise, JanusGraph
|
By
hadoopmarc@...
·
#6615
·
|
|
Re: Janusgraph-full-0.6.1: how to fix "WARNING: Critical severity vulnerabilities were found with Log4j!"
On Sat, Aug 20, 2022 at 09:28 AM, <hadoopmarc@...> wrote:
The JanusGraph and TinkerPop code only explicitly depends on slf4j, so you can choose the logging implementation you want. You can simply
|
By
Yingjie Li
·
#6614
·
|