Re: Options for Bulk Read/Bulk Export


subbu165@...
 

So currently we have JanusGraph with FDB as the storage back-end and Elasticsearch for indexing.
 
First we get the vertex IDs from the Elasticsearch index back-end, and then we do the following:
JanusGraph graph = JanusGraphFactory.open(janusConfig);
// graph.vertices(...) returns an Iterator<Vertex>; we pull the single match for the ID
Vertex vertex = graph.vertices(vertexId).next();
 
All of the above, including fetching the vertex IDs from Elasticsearch, happens within the Spark context, using a Spark RDD for partitioning and parallelisation. If we take Spark out of the equation, what is the best way to do a bulk export?
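
For reference, this is roughly the kind of Spark-free export I was imagining, assuming we already have the vertex IDs from Elasticsearch. The class name, chunk size, thread count and the export() callback below are all placeholders rather than code from our actual setup, so please correct me if this is the wrong direction:

import org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversalSource;
import org.apache.tinkerpop.gremlin.structure.Vertex;
import org.janusgraph.core.JanusGraph;

import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class BulkExport {

    // Split the Elasticsearch-provided vertex IDs into chunks and export each
    // chunk on a worker thread with its own traversal / transaction scope.
    public static void exportAll(JanusGraph graph, List<Object> vertexIds, int chunkSize)
            throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(8); // tune to the backend's capacity

        for (int start = 0; start < vertexIds.size(); start += chunkSize) {
            List<Object> chunk = vertexIds.subList(start, Math.min(start + chunkSize, vertexIds.size()));
            pool.submit(() -> {
                GraphTraversalSource g = graph.traversal();
                g.V(chunk.toArray()).toStream().forEach(BulkExport::export);
                g.tx().rollback(); // read-only: discard the transaction cache
            });
        }

        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.HOURS);
    }

    private static void export(Vertex v) {
        // Placeholder: write the vertex (and whatever properties we need) to the sink.
        System.out.println(v.id());
    }
}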
Also @oleksandr, you stated that "Otherwise, additional calls might be executed to your backend which could be not as efficient." How should we make these additional calls to get the subsequent records? Let's say I'm exporting 10M records and our cache/memory can't hold that much, so I first retrieve records 1 to 1M, then 1M to 2M, then 2M to 3M, and so on. How can this kind of iteration be done in JanusGraph? Please throw some light on this.
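
On the batching question, the pattern I had in mind looks roughly like the sketch below, using indexQuery against the mixed index with offset/limit paging. "mixedIndexName" and "v.someProperty" are just placeholders for our real index and query string, and I realise very deep offsets can be expensive (or capped) on the Elasticsearch side, so paging by a sorted, indexed property range may well be the better approach; corrections welcome:

import org.janusgraph.core.JanusGraph;
import org.janusgraph.core.JanusGraphIndexQuery;
import org.janusgraph.core.JanusGraphVertex;

public class PagedExport {

    // Pull results from the mixed index one page at a time instead of in a single huge read.
    public static void exportInPages(JanusGraph graph, int pageSize) {
        int offset = 0;
        while (true) {
            Iterable<JanusGraphIndexQuery.Result<JanusGraphVertex>> page =
                    graph.indexQuery("mixedIndexName", "v.someProperty:*")
                         .offset(offset)
                         .limit(pageSize)
                         .vertices();

            int count = 0;
            for (JanusGraphIndexQuery.Result<JanusGraphVertex> result : page) {
                export(result.getElement());
                count++;
            }

            graph.tx().rollback();        // read-only: drop the transaction-level cache between pages
            if (count < pageSize) break;  // fewer results than a full page -> we are done
            offset += pageSize;
        }
    }

    private static void export(JanusGraphVertex v) {
        // Placeholder: write the vertex to the export sink.
        System.out.println(v.id());
    }
}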
