Managed memory leak detected in Spark Olap query
Lilly <lfie...@...>
Hi everyone, I am using Janusgraph version 5.2 with cassandra backend. When using graph = GraphFactory.open(Resources.getResource("read-cql.properties").getFile()); g = inputGraph.traversal().withComputer(SparkGraphComputer.class); g.V().connectedComponent().iterate(); where "read-cql.properties" is the provided configuration file of janusgraph. I get the following Logger output: WARN org.apache.spark.executor.Executor - Managed memory leak detected This also happens in all sort of other Olap queries. Is there something I can do about it? This does not necessarily sound so good. Thanks for any suggestions! Lilly |
HadoopMarc <bi...@...>
Hi Lilly,

I did some googling first and saw that this warning has a long history in the Apache Spark backlog and still shows up in recent versions: https://issues.apache.org/jira/browse/SPARK-30443

So I suppose the action lies with Apache Spark, and in the meantime I hope you do not run into memory problems. I agree that it "does not necessarily sound so good", but just see it as an explicit bug among the invisible bugs :-).

Best wishes, Marc

On Monday, September 7, 2020 at 17:22:03 UTC+2, Lilly wrote:
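[If the warning were ever accompanied by actual memory pressure, the usual Spark-side knobs can be raised in the same properties file passed to SparkGraphComputer. A sketch, using standard Spark configuration properties; the values are illustrative examples, not tuned recommendations:]

```properties
# Illustrative Spark memory settings for the SparkGraphComputer properties
# file; these values are examples only and should be tuned per cluster.
spark.executor.memory=4g
# Fraction of heap used for Spark execution and storage (Spark default: 0.6)
spark.memory.fraction=0.6
```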
Lilly <lfie...@...>
Hi Marc,

Thanks for your answer! Well, then I hope it is not that big an issue and will try to ignore the warning for now.

Lilly

On Tuesday, September 8, 2020 at 08:27:45 UTC+2, HadoopMarc wrote: