Re: Running OLAP on HBase with SparkGraphComputer fails with Error Container killed by YARN for exceeding memory limits
Hi Marc,
Thanks for your immediate response.
I've tried to set spark.yarn.executor.memoryOverhead=10G and re-run the task, and it still failed. From the Spark task UI, I saw 80% of processing time is
By Roy Yu <7604...@...> · #5375

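For anyone hitting the same limit: the overhead is a plain Spark property and can go in the Hadoop-Gremlin properties file passed to SparkGraphComputer. A minimal sketch (property names are standard Spark ones; the memory values are illustrative, not a recommendation):

```
# Hadoop-Gremlin properties for SparkGraphComputer (values illustrative)
spark.master=yarn
spark.executor.memory=24g
# Off-heap headroom YARN grants the container beyond the executor heap.
# In older Spark versions this property is interpreted as plain MiB,
# so 10240 is a safer spelling than 10G.
spark.yarn.executor.memoryOverhead=10240
```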
ERROR: Could not commit transaction due to exception during persistence
```
Caused by: java.lang.IllegalArgumentException: Multiple entries with same key: Completeness_metric.Status=org.janusgraph.diskstorage.indexing.StandardKeyInformation@6bf65470 and
```
By Gaurav Sehgal <gaurav.s...@...> · #5374

Re: Running OLAP on HBase with SparkGraphComputer fails with Error Container killed by YARN for exceeding memory limits
Hi Roy,
There seem to be three things bothering you here:
you did not specify spark.yarn.executor.memoryOverhead, as the exception message says. Easily solved.
you seem to run on cloud infra that
By HadoopMarc <bi...@...> · #5373

Running OLAP on HBase with SparkGraphComputer fails with Error Container killed by YARN for exceeding memory limits
Error message:
ExecutorLostFailure (executor 1 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 33.1 GB of 33 GB physical memory used. Consider
By Roy Yu <7604...@...> · #5372

Re: Running OLAP on HBase with SparkGraphComputer fails on shuffle/Pregel message pass
I have the same problem. Have you ever solved it?
By Roy Yu <7604...@...> · #5371

Re: Not sure if vertex centric index is being used
I have the same question. Anyone able to shed some light on this issue?
There is (as usual) a certain way of constructing a Gremlin query to get the local index used,
but there is no way to know it is
By chrism <cmil...@...> · #5370

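For reference on building and checking a vertex-centric index, a Gremlin Console sketch along the lines of the JanusGraph docs (the `battled`/`time` label and key names are illustrative); `profile()` output is currently the closest thing to confirmation, showing whether the edge query was answered with a sliced, index-backed backend call rather than a full scan of the vertex's edges:

```
// Gremlin Console sketch; 'battled' / 'time' are illustrative names
mgmt = graph.openManagement()
time = mgmt.getPropertyKey('time')
battled = mgmt.getEdgeLabel('battled')
mgmt.buildEdgeIndex(battled, 'battlesByTime', Direction.BOTH, Order.desc, time)
mgmt.commit()

// profile() hints at index use: look for the edge step being answered
// with a single sliced backend query instead of a full edge scan
g.V(hercules).outE('battled').has('time', gt(100)).profile()
```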
Re: Janusgraph Hadoop Spark standalone cluster - Janusgraph job always creates constant number 513 of Spark tasks
Hi Varun,
Not a solution, but someone in the thread below explained the 257 magic number for OLAP on a Cassandra cluster:
https://groups.google.com/g/janusgraph-users/c/IdrRyIefihY
Marc
On Friday, 4
By HadoopMarc <bi...@...> · #5369

Re: Janusgraph Hadoop Spark standalone cluster - Janusgraph job always creates constant number 513 of Spark tasks
Hi,
I am facing this same issue. I am using SparkGraphComputer to read from Janusgraph backed by cassandra. `g.V().count()` takes about 3 minutes to load just two rows that I have in the graph.
I see
By Varun Ganesh <operatio...@...> · #5368

Re: Configuring Transaction Log feature
Pawan,
Can you check for the following in your logs: "Loaded unidentified ReadMarker start time..."
It seems your ReadMarker is starting from 1970, so it tries to read all changes since then.
Regards,
Sandeep
By Sandeep Mishra <sandy...@...> · #5366

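For what it's worth, an epoch (1970) read marker usually means the log processor was (re)built without an explicit start time. A console sketch based on the transaction-log API in the JanusGraph docs ("addedPerson" is an illustrative log identifier, not from this thread):

```
// Sketch only: open the log framework and anchor the read marker at 'now',
// so a restarted processor does not replay the whole log from 1970
logProcessor = JanusGraphFactory.openTransactionLog(graph)
logProcessor.addLogProcessor("addedPerson").
    setProcessorIdentifier("addedPersonCounter").
    setStartTimeNow().
    addProcessor({ tx, txId, changeState ->
        // react to changeState here
    }).
    build()
```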
Re: Use index for sorting
The problem of using a custom value for null is that we need to choose a value for each data type, and hope that nobody will try to use this particular value. I suppose it is feasible for data types like
By toom <to...@...> · #5365

Re: Use index for sorting
No, null support is an optional feature for graph providers. JanusGraph does not allow null value and I don’t think it will be supported (in near future).
Apart from the solution suggested by Marc,
By "Li, Boxuan" <libo...@...> · #5367

Re: Use index for sorting
Hi Marc,
Thank you for your response.
If I understand correctly, with TinkerPop 3.5 I will be able to sort on a property with missing values. That is good news.
Do you know if JanusGraph 0.6.0 will be
By toom <to...@...> · #5364

Re: Use index for sorting
Hi Toom,
No solution, but the exception that you mention comes from TinkerPop:
https://github.com/apache/tinkerpop/blob/b4928e1262174a68c3dc1f3234d4340e266f8d98/docs/src/upgrade/release-3.5.x.asciidoc
By HadoopMarc <bi...@...> · #5363

Re: Run JanusGraph/Cassandra/ElasticSearch in production in a VM with 8GB/RAM
Hi, it does not seem impossible, but you should really test it and ask yourself these questions:
do you really need Gremlin to query your data (instead of SQL)?
do you really need Elasticsearch for
By HadoopMarc <bi...@...> · #5362

Run JanusGraph/Cassandra/ElasticSearch in production in a VM with 8GB/RAM
Hi, is it possible to run JanusGraph/Cassandra/ElasticSearch in production in a VM with 8 GB RAM?
The system has more reads than writes; we create 10 vertices per hour and the system has an average of 20
By "p...@pwill.com.br" <pw...@...> · #5361

Use index for sorting
Hello,
I'm using JanusGraph with Cassandra (0.5.2) and ElasticSearch.
I try to optimize my queries and use the mixed indexes as much as possible, in particular for sorting, but I have some
By toom <to...@...> · #5360

Re: throw NullPointerException when query with hasLabel() script
It is reproducible in my environment, and this problem did not appear when the JanusGraph server started.
It has always been there since I first ran into it, continuing to now.
I have another
By 阳生丙 <ouyang....@...> · #5359

Re: OLAP, Hadoop, Spark and Cassandra
Hi Mladen,
Interesting read! Spark is not very sensitive to the number of tasks. I believe that for OLAP on HadoopGraph the optimum is for partitions of 256 Mb or so. Larger is difficult to hold in
By HadoopMarc <bi...@...> · #5358

Re: SimplePath query is slower in 6 node vs 3 node Cassandra cluster
Hi Boxuan,
Thank you for getting back to me. Please find my responses below:
> Did you check the hardware differences?
Yes I can confirm that the two clusters are identical except for the number of
By Varun Ganesh <operatio...@...> · #5357

Re: OLAP, Hadoop, Spark and Cassandra
I know I'm quite late to the party, but for future reference - the number of input partitions in Spark depends on the partitioning of the source. In case of cassandra, partitioning is determined by
By Mladen Marović <mladen...@...> · #5356
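If the 257/513 task counts from the earlier threads line up with your cluster: with the default vnode setting each Cassandra node owns 256 token ranges, and the input format appears to create roughly one split per token range, so 2 nodes × 256 vnodes would come out near the observed ~513 tasks. This is a guess consistent with the linked thread, not a confirmed formula; the relevant knob sits in each node's `cassandra.yaml`:

```
# cassandra.yaml (per node); default shown — each vnode is one token range,
# and one Spark input split is created per token range
num_tokens: 256
```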