Re: JanusGraph transaction was closed all of a sudden


"nar...@gmail.com" <naresh...@...>
 

Thanks Marc,
         I think I got the answer from you: it might be because of too many transactions, or an indexing backend that cannot keep up with the ingestion. But I have a few questions on this.

I am using JanusGraph client 0.3.2 with HBase (8 region servers) and Elasticsearch.

1) What is the suggested number of transactions per JanusGraph instance? And should I be able to replicate the issue by creating too many transactions, or is there a better way to replicate and test it?
2) An indexing backend that cannot keep up with the ingestion: any idea in which case this will happen? Please suggest the best way to replicate and test it.


And thanks for the suggestions on Spark's RDD.mapPartitions() function.
Yes, we have enough partitions, with at most 500 vertices each.
We are not using RDD.mapPartitions() exactly, but foreachPartition(): the vertices are created in a Spark action/operation, i.e. stream.foreachRDD -> foreachPartition(... creating vertices here ...). Please suggest if this is not the right way.

Thanks,
Naresh

On Friday, September 25, 2020 at 11:39:45 PM UTC+8 HadoopMarc wrote:
Hi Naresh,

It is the responsibility of the application to commit transactions. One application example is gremlin-server, which can do that for you, but this may not be the most convenient for bulk loading.

If you use Spark, a nice way is to use the RDD.mapPartitions() function. If you have partitions of the size of a single transaction (1,000-10,000 vertices), you can catch any exceptions, roll back the transaction on failure and commit on success. Spark will automatically retry a failed partition, and by using mapPartitions() you are sure that there is exactly one successful run for any partition.
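The commit-per-partition pattern described above can be sketched as follows. This is a hedged, self-contained illustration, not the actual JanusGraph or Spark API: addVertex, commit and rollback are stand-in callbacks for vertex creation, txn.commit() and txn.rollback() on a real JanusGraphTransaction inside the function you pass to mapPartitions() or foreachPartition().

```java
import java.util.Iterator;
import java.util.function.Consumer;

// Sketch of the commit-per-partition pattern: one transaction covers one
// Spark partition, committed only when every element loads successfully,
// rolled back on any failure. The callbacks stand in for the JanusGraph API.
public class PartitionLoader {

    // Returns true when the partition was committed, false when rolled back.
    // (A real implementation would rethrow the exception after rollback so
    // that Spark retries the whole partition.)
    public static boolean loadPartition(Iterator<String> elements,
                                        Consumer<String> addVertex,
                                        Runnable commit,
                                        Runnable rollback) {
        try {
            while (elements.hasNext()) {
                addVertex.accept(elements.next()); // create vertices/edges here
            }
            commit.run();   // the whole partition becomes one transaction
            return true;
        } catch (RuntimeException e) {
            rollback.run(); // discard the partial writes of this attempt
            return false;
        }
    }
}
```

Because commit happens exactly once per successful pass over a partition, a Spark retry of a failed partition never leaves half-written vertices behind.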

Reasons for occasional failure may be transactions that are too large, or an indexing backend that cannot keep up with the ingestion. ID block exhaustion generates its own exceptions.

HTH,    Marc

Op vrijdag 25 september 2020 om 14:52:34 UTC+2 schreef nar...@...:

Hi,
I am using Spark for parallel processing, with a mix of batch loading (at the transaction level) and normal transactions.

Case 1: in some cases I am using batch loading at the transaction level
txn = janusGraph.buildTransaction().enableBatchLoading().start();
// ... create vertices and edges ...
txn.commit();

Case 2: with a normal transaction
txn = janusGraph.newTransaction();
// ... create vertices and edges ...
txn.commit();

I got the exception below in the middle of processing; the transaction did not commit, and hence it failed to create the vertices.

java.lang.IllegalStateException: Cannot access element because its enclosing transaction is closed and unbound
    at org.janusgraph.graphdb.transaction.StandardJanusGraphTx.getNextTx(StandardJanusGraphTx.java:305)
    at org.janusgraph.graphdb.vertices.AbstractVertex.tx(AbstractVertex.java:60)
    at org.janusgraph.graphdb.vertices.AbstractVertex.property(AbstractVertex.java:152)
    at org.janusgraph.core.JanusGraphVertex.property(JanusGraphVertex.java:72)
    at org.janusgraph.core.JanusGraphVertex.property(JanusGraphVertex.java:33)


It happens very rarely, and I am not sure in which case it happens.

Can you please suggest: is there any case where JanusGraph can commit/close a transaction automatically?
We are explicitly opening, committing and closing transactions, so there is no other place where we could close/commit in the middle of processing.

Thanks,
Naresh
