|
Re: Error when running JanusGraph with YARN and CQL
Answering my own question. I was able to fix the above error and successfully run the count job after explicitly adding /Users/my_comp/Downloads/janusgraph-0.5.2/lib/* to
|
By
Varun Ganesh <operatio...@...>
·
#5394
·
|
|
Re: Error when running JanusGraph with YARN and CQL
An update on this, I tried setting the env var below:
export HADOOP_GREMLIN_LIBS=$GREMLIN_HOME/lib
After doing this I was able to successfully run the tinkerpop-modern.kryo example from the Recipes
|
By
Varun Ganesh <operatio...@...>
·
#5393
·
|
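The fix above can be sketched as a short shell snippet. The GREMLIN_HOME path here is a hypothetical example, not the poster's actual installation:

```shell
#!/bin/sh
# Hypothetical installation path; adjust to your Gremlin Console location.
GREMLIN_HOME="${GREMLIN_HOME:-/opt/apache-tinkerpop-gremlin-console-3.4.6}"
export GREMLIN_HOME

# hadoop-gremlin ships the jars listed in HADOOP_GREMLIN_LIBS to the cluster
# workers, so missing-class errors on YARN often trace back to this variable.
HADOOP_GREMLIN_LIBS="$GREMLIN_HOME/lib"
export HADOOP_GREMLIN_LIBS

echo "$HADOOP_GREMLIN_LIBS"
```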
|
Re: How to open the same graph multiple times and not get the same object?
Thanks for sharing! I personally only use MapReduce and not sure if there is an existing solution for Spark.
> if there is any danger in opening multiple separate graph instances and using them to
|
By
BO XUAN LI <libo...@...>
·
#5398
·
|
|
Re: OLAP, Hadoop, Spark and Cassandra
A slight correction and clarification of my previous post: the total number of partitions/splits is exactly equal to total_number_of_tokens + 1. In a 3-node Cassandra cluster where each node has 256
|
By
Mladen Marović <mladen...@...>
·
#5392
·
|
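The arithmetic behind that count, worked through for the 3-node example above (256 is Cassandra's default num_tokens per node):

```shell
#!/bin/sh
# total splits = total_number_of_tokens + 1, per the correction above
nodes=3
tokens_per_node=256            # Cassandra default num_tokens
total_tokens=$((nodes * tokens_per_node))
splits=$((total_tokens + 1))
echo "$splits"                 # prints 769
```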
|
Error when running JanusGraph with YARN and CQL
Hello,
I am trying to run SparkGraphComputer on a JanusGraph backed by Cassandra and ElasticSearch. I have previously verified that I am able to run SparkGraphComputer on a local Spark standalone
|
By
Varun Ganesh <operatio...@...>
·
#5391
·
|
|
Centric Indexes failing to support all conditions for better performance.
JanusGraph documentation: https://docs.janusgraph.org/index-management/index-performance/
describes usage of a Vertex Centric Index [edge=battled +
|
By
chrism <cmil...@...>
·
#5390
·
|
|
Re: How to open the same graph multiple times and not get the same object?
Hello Boxuan,
I need to support reindexing very large graphs. To my knowledge, the only feasible way that's supported is via the `MapReduceIndexManagement` class. This is not ideal for me as I'd like
|
By
Mladen Marović <mladen...@...>
·
#5389
·
|
|
Re: Running OLAP on HBase with SparkGraphComputer fails with Error Container killed by YARN for exceeding memory limits
Hi Marc,
The parameter hbase.mapreduce.tableinput.mappers.per.region can be effective. I set it to 40, and there are now 40 tasks processing every region. But here comes a new problem: data skew.
|
By
Roy Yu <7604...@...>
·
#5387
·
|
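For reference, the setting mentioned above would typically be passed through from the graph's Hadoop properties file; the `janusgraphmr.ioformat.conf.` prefix shown here is an assumption based on how janusgraph-hbase forwards HBase options, so verify it against your JanusGraph version:

```properties
# Hypothetical read-hbase.properties fragment; the prefix is an assumption.
janusgraphmr.ioformat.conf.hbase.mapreduce.tableinput.mappers.per.region=40
```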
|
Re: How to open the same graph multiple times and not get the same object?
Hi Mladen,
Agree with Marc, that's something you could try. If possible, could you share the reason why you have to open the same graph multiple times with different graph objects? If there is no
|
By
Boxuan Li <libo...@...>
·
#5386
·
|
|
SimplePath query is slower in 6 node vs 3 node Cassandra cluster
Hello,
I am currently using JanusGraph version 0.5.2. I have a graph with about 18 million vertices and 25 million edges.
I have two versions of this graph, one backed by a 3 node Cassandra cluster
|
By
Varun Ganesh <operatio...@...>
·
#5388
·
|
|
Re: Configuring Transaction Log feature
Hi Sandeep,
I think I have already added the line below to indicate that it should pull the details from now onwards in the processor. Is it not working?
"setStartTimeNow()"
Has anyone else faced the same
|
By
Pawan Shriwas <shriwa...@...>
·
#5385
·
|
|
Re: How to run groovy script in background?
You could end your script with:
System.exit(0)
HTH, Marc
On Wednesday, December 9, 2020 at 04:16:43 UTC+1, Phate wrote:
|
By
HadoopMarc <bi...@...>
·
#5384
·
|
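A minimal sketch of the background-run pattern discussed in this thread, under two assumptions: the script path is hypothetical, and the Groovy script ends with System.exit(0) as suggested above. The key detail is redirecting stdin, so the console job is not stopped waiting on the terminal:

```shell
#!/bin/sh
# Generic helper: run a command detached from the terminal.
# For the real case this would be invoked as:
#   run_detached bin/gremlin.sh -e /path/to/script.groovy
run_detached() {
  # stdin from /dev/null prevents the job from stopping on a terminal read;
  # nohup keeps it alive after the shell exits.
  nohup "$@" < /dev/null > job.log 2>&1 &
  wait $!    # wait here only so the demo below is deterministic
}

# Demo with a stand-in command instead of gremlin.sh:
run_detached echo "done"
cat job.log
```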
|
Re: How to open the same graph multiple times and not get the same object?
Hi Mladen,
The constructor of StandardJanusGraph seems worth a
|
By
HadoopMarc <bi...@...>
·
#5383
·
|
|
How to run groovy script in background?
Hi all, is it possible to run gremlin.sh in the background?
I tried using the `-e` argument to run a Groovy script, but it always changes to stopped status; it only finishes when brought back to the foreground.
```
[bin]$
|
By
Phate <phat...@...>
·
#5382
·
|
|
How to open the same graph multiple times and not get the same object?
Hello,
I'm writing a Java program that, for various implementation details, needs to open the same graph multiple times. Currently I'm using JanusGraphFactory.open(...), but this always looks up the
|
By
Mladen Marović <mladen...@...>
·
#5381
·
|
|
Re: Running OLAP on HBase with SparkGraphComputer fails with Error Container killed by YARN for exceeding memory limits
Hi Marc,
Thank you for your advice, I will try it and tell you the result. Your advice is the trace of light in the dark I so desired.
|
By
Roy Yu <7604...@...>
·
#5380
·
|
|
Re: How to improve traversal query performance
Hi Marc,
Profile outputs I tried.
1. g.V().has('serial',within('XXXXXX','YYYYYY')).inE('assembled').outV()
----------------------------------------------------------------------
gremlin>
|
By
Manabu Kotani <smallcany...@...>
·
#5379
·
|
|
Re: How to improve traversal query performance
Hi Marc,
Sorry for the delayed reply.
The Gremlin steps for my graph are below.
(Sorry, I don't know how to attach a file.)
1. schema groovy
|
By
Manabu Kotani <smallcany...@...>
·
#5378
·
|
|
Re: Running OLAP on HBase with SparkGraphComputer fails with Error Container killed by YARN for exceeding memory limits
Hi Roy,
As I mentioned, I did not keep up with possibly new janusgraph-hbase features. From the HBase source, I see that HBase now has a "hbase.mapreduce.tableinput.mappers.per.region" config
|
By
HadoopMarc <bi...@...>
·
#5377
·
|
|
Re: Running OLAP on HBase with SparkGraphComputer fails with Error Container killed by YARN for exceeding memory limits
You seem to run on cloud infra that reduces your requested 40 GB to 33 GB (see https://databricks.com/session_na20/running-apache-spark-on-kubernetes-best-practices-and-pitfalls). Fact of life.
|
By
Roy Yu <7604...@...>
·
#5376
·
|