|
How to run groovy script in background?
Hi all, is it possible to run gremlin.sh in the background?
I tried using the `-e` argument to run a Groovy script, but the process always changes to stopped status; it only finishes after I bring it back to the foreground.
```
[bin]$
```
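A likely cause of the "stopped" status is SIGTTIN: a background job that tries to read from the terminal gets suspended, and the Gremlin Console still reads stdin even with `-e`. A minimal sketch of the workaround, assuming an illustrative script name `myscript.groovy`:

```shell
# Real invocation (sketch): redirect stdin so the console never reads the
# terminal, and use nohup so the job survives logout:
#   nohup bin/gremlin.sh -e myscript.groovy </dev/null >gremlin.log 2>&1 &
# The same pattern, demonstrated with a stand-in command that reads stdin:
nohup sh -c 'cat >/dev/null; echo finished' </dev/null >out.log 2>&1 &
wait            # wait for the background job to complete
cat out.log     # prints "finished" instead of the job stopping on terminal input
```

With stdin redirected from /dev/null, the read returns EOF immediately and the job runs to completion in the background.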
|
By
Phate <phat...@...>
·
#5382
·
|
|
How to open the same graph multiple times and not get the same object?
Hello,
I'm writing a Java program that, for various implementation details, needs to open the same graph multiple times. Currently I'm using JanusGraphFactory.open(...), but this always looks up the
|
By
Mladen Marović <mladen...@...>
·
#5381
·
|
|
Re: Running OLAP on HBase with SparkGraphComputer fails with Error Container killed by YARN for exceeding memory limits
Hi Marc,
Thank you for your advice, I will try it and let you know the result. Your advice is the ray of light in the dark that I so needed.
|
By
Roy Yu <7604...@...>
·
#5380
·
|
|
Re: How to improve traversal query performance
Hi Marc,
Profile outputs I tried.
1. g.V().has('serial',within('XXXXXX','YYYYYY')).inE('assembled').outV()
----------------------------------------------------------------------
gremlin>
|
By
Manabu Kotani <smallcany...@...>
·
#5379
·
|
|
Re: How to improve traversal query performance
Hi Marc,
Sorry for the delayed reply.
The Gremlin steps for my graph are below.
(Sorry, I don't know how to attach a file.)
1. schema groovy
|
By
Manabu Kotani <smallcany...@...>
·
#5378
·
|
|
Re: Running OLAP on HBase with SparkGraphComputer fails with Error Container killed by YARN for exceeding memory limits
Hi Roy,
As I mentioned, I did not keep up with possibly new janusgraph-hbase features. From the HBase source, I see that HBase now has a "hbase.mapreduce.tableinput.mappers.per.region" config
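If that config is available in the HBase version in use, it would go into the job's input properties; the value below is an illustrative assumption, not a recommendation:

```
# Ask HBase's TableInputFormat for multiple mapper splits per region
hbase.mapreduce.tableinput.mappers.per.region=4
```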
|
By
HadoopMarc <bi...@...>
·
#5377
·
|
|
Re: Running OLAP on HBase with SparkGraphComputer fails with Error Container killed by YARN for exceeding memory limits
you seem to run on cloud infra that reduces your requested 40 Gb to 33 Gb (see https://databricks.com/session_na20/running-apache-spark-on-kubernetes-best-practices-and-pitfalls). Fact of life.
|
By
Roy Yu <7604...@...>
·
#5376
·
|
|
Re: Running OLAP on HBase with SparkGraphComputer fails with Error Container killed by YARN for exceeding memory limits
Hi Marc
Thanks for your immediate response.
I've tried setting spark.yarn.executor.memoryOverhead=10G and re-running the task, and it still failed. From the Spark task UI, I saw 80% of the processing time is
|
By
Roy Yu <7604...@...>
·
#5375
·
|
|
ERROR: Could not commit transaction due to exception during persistence
```
Caused by: java.lang.IllegalArgumentException: Multiple entries with same key: Completeness_metric.Status=org.janusgraph.diskstorage.indexing.StandardKeyInformation@6bf65470 and
```
|
By
Gaurav Sehgal <gaurav.s...@...>
·
#5374
·
|
|
Re: Running OLAP on HBase with SparkGraphComputer fails with Error Container killed by YARN for exceeding memory limits
Hi Roy,
There seem to be three things bothering you here:
you did not specify spark.yarn.executor.memoryOverhead, as the exception message says. Easily solved.
you seem to run on cloud infra that
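The first point can be addressed with explicit Spark-on-YARN memory settings in the job configuration; `spark.yarn.executor.memoryOverhead` is the key named in the exception, and the values below are illustrative assumptions only:

```
# Leave headroom for off-heap use (shuffle buffers, netty, etc.)
spark.executor.memory=24g
spark.yarn.executor.memoryOverhead=8g
```

The executor heap plus the overhead must stay under the YARN container limit, otherwise YARN kills the container as in the error message above.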
|
By
HadoopMarc <bi...@...>
·
#5373
·
|
|
Running OLAP on HBase with SparkGraphComputer fails with Error Container killed by YARN for exceeding memory limits
Error message:
ExecutorLostFailure (executor 1 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 33.1 GB of 33 GB physical memory used. Consider
|
By
Roy Yu <7604...@...>
·
#5372
·
|
|
Re: Running OLAP on HBase with SparkGraphComputer fails on shuffle/Pregel message pass
I have the same problem. Did you ever solve it?
|
By
Roy Yu <7604...@...>
·
#5371
·
|
|
Re: Not sure if vertex centric index is being used
I have the same question. Can anyone shed some light on this issue?
There is (as usual) a certain way of constructing a Gremlin query so that the local index is used,
but there is no way to know whether it is
|
By
chrism <cmil...@...>
·
#5370
·
|
|
Re: Janusgraph Hadoop Spark standalone cluster - Janusgraph job always creates constant number 513 of Spark tasks
Hi Varun,
Not a solution, but someone in the thread below explained the 257 magic number for OLAP on a Cassandra cluster:
https://groups.google.com/g/janusgraph-users/c/IdrRyIefihY
Marc
Op vrijdag 4
|
By
HadoopMarc <bi...@...>
·
#5369
·
|
|
Re: Janusgraph Hadoop Spark standalone cluster - Janusgraph job always creates constant number 513 of Spark tasks
Hi,
I am facing this same issue. I am using SparkGraphComputer to read from Janusgraph backed by cassandra. `g.V().count()` takes about 3 minutes to load just two rows that I have in the graph.
I see
|
By
Varun Ganesh <operatio...@...>
·
#5368
·
|
|
Re: Configuring Transaction Log feature
Pawan,
Can you check for the following in your logs: "Loaded unidentified ReadMarker start time..."?
It seems your ReadMarker is starting from 1970, so it tries to read all changes since then.
Regards,
Sandeep
|
By
Sandeep Mishra <sandy...@...>
·
#5366
·
|
|
Re: Use index for sorting
The problem of using a custom value for null is that we need to choose a value for each data type, and hope that nobody will try to use this particular value. I suppose it is feasible for data types like
|
By
toom <to...@...>
·
#5365
·
|
|
Re: Use index for sorting
No, null support is an optional feature for graph providers. JanusGraph does not allow null values and I don’t think they will be supported (in the near future).
Apart from the solution suggested by Marc,
|
By
"Li, Boxuan" <libo...@...>
·
#5367
·
|
|
Re: Use index for sorting
Hi Marc,
Thank you for your response.
If I understand correctly, with TinkerPop 3.5 I will be able to sort on a property with missing values. That is good news.
Do you know if JanusGraph 0.6.0 will be
|
By
toom <to...@...>
·
#5364
·
|
|
Re: Use index for sorting
Hi Toom,
No solution, but the exception that you mention comes from TinkerPop:
https://github.com/apache/tinkerpop/blob/b4928e1262174a68c3dc1f3234d4340e266f8d98/docs/src/upgrade/release-3.5.x.asciidoc
|
By
HadoopMarc <bi...@...>
·
#5363
·
|