Anyone with experience adding a new storage backend for JanusGraph? [Help needed w.r.t. Snowflake]


Debasish Kanhar <d.k...@...>
 

I know that JanusGraph needs a column-family-style NoSQL database as its storage backend, which is why we have Scylla, Cassandra, HBase, etc. Snowflake isn't a column-family database, but it has a column data type (VARIANT) that can store any sort of semi-structured data. So we could store complete JSON-oriented column-family data there after massaging / pre-processing the data. Is that a practical thought? Is it practical enough to implement?
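
To illustrate what I mean, here is a rough sketch of the table layout I have in mind. All names here are made up, and it assumes the Snowflake JDBC driver is on the classpath:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class SnowflakeLayoutSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:snowflake://<account>.snowflakecomputing.com", "<user>", "<password>");
             Statement stmt = conn.createStatement()) {
            // Option A: flat (key, column, value) triples, closest to the
            // column-family model JanusGraph expects.
            stmt.execute("CREATE TABLE IF NOT EXISTS edgestore ("
                    + " store_key BINARY, store_col BINARY, store_val BINARY)");
            // Option B: one row per key, with all of its columns massaged
            // into a single JSON document in a VARIANT column.
            stmt.execute("CREATE TABLE IF NOT EXISTS edgestore_json ("
                    + " store_key BINARY, columns VARIANT)");
        }
    }
}
```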

If it is practical enough to implement, what needs to be done? I'm going through the source code, and I'm basing my ideas on my understanding of the janusgraph-cassandra and janusgraph-berkeleyje modules. Please correct me if I'm wrong in my understanding.

  1. We need a StoreManager class like HBaseStoreManager, AbstractCassandraStoreManager, or BerkeleyJEStoreManager, which extends either DistributedStoreManager or LocalStoreManager and implements the KeyColumnValueStoreManager interface, right? These classes need to build a features object, which more or less describes the storage backend's capabilities and configuration. They also need a beginTransaction method, which creates the actual connection to the corresponding storage backend. Is that correct? (I've put a rough sketch of this as the first code block after this list.)
  2. We will need corresponding transaction classes that create the transaction against the backend, like *CassandraTransaction* or *BerkeleyJETx*. The transaction class needs to extend the `AbstractStoreTransaction` class. Though I can see and understand the transaction being created in BerkeleyJETx, I don't see something similar in CassandraTransaction. Am I missing something in my understanding here? (See the second sketch after this list.)
  3. We need a KeyColumnValueStore class for the backend, like *AstyanaxKeyColumnValueStore* or *BerkeleyJEKeyValueStore*. These need to implement `KeyColumnValueStore` (BerkeleyJE actually implements the simpler key-value variant, which JanusGraph adapts). This class takes care of massaging the data into key-column-value format so that it can be inserted into the corresponding table inside the storage backend (third sketch after this list).
    1. So the questions on my mind are: what will the structure of those classes be?
    2. Are there methods that always need to be present? I see getSlice() being used across all the classes. How do they work?
    3. Do they just convert incoming Gremlin queries into the key-column-value structure?
    4. Are there any other classes I'm missing, or are these three the only ones that need to be modified to create a new storage backend?
    5. Also, if these three are the only classes needed, and let's say we succeed in using Snowflake as a storage backend, how does the read/query side of JanusGraph get solved? Are changes needed on that end as well, or is JanusGraph abstracted enough that it can simply start reading from the new source?
  4. Also, I thought there would be some classes that read incoming Gremlin queries, do some pre-processing into certain (tabular) data structures, and then push them through some connection into the respective backend. This is where we are stuck: is there a way to visualize those objects after the pre-processing, store them as-is in Snowflake, and reuse them to fulfill Gremlin queries?
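
To make my mental model concrete, here is a minimal sketch of the manager as I imagine it. Everything Snowflake-specific (class names, the port default, the feature flags) is invented; only the SPI types under org.janusgraph.diskstorage are real:

```java
import org.janusgraph.diskstorage.BackendException;
import org.janusgraph.diskstorage.BaseTransactionConfig;
import org.janusgraph.diskstorage.common.DistributedStoreManager;
import org.janusgraph.diskstorage.configuration.Configuration;
import org.janusgraph.diskstorage.keycolumnvalue.KeyColumnValueStore;
import org.janusgraph.diskstorage.keycolumnvalue.KeyColumnValueStoreManager;
import org.janusgraph.diskstorage.keycolumnvalue.StandardStoreFeatures;
import org.janusgraph.diskstorage.keycolumnvalue.StoreFeatures;
import org.janusgraph.diskstorage.keycolumnvalue.StoreTransaction;

// Hypothetical manager; method bodies that don't matter here are omitted.
public class SnowflakeStoreManager extends DistributedStoreManager
        implements KeyColumnValueStoreManager {

    private final StoreFeatures features;

    public SnowflakeStoreManager(Configuration config) {
        super(config, 443); // the default port is a placeholder
        // The features object advertises what the backend can do so that
        // JanusGraph can adapt; the exact flag set below is illustrative.
        features = new StandardStoreFeatures.Builder()
                .transactional(true)
                .batchMutation(true)
                .unorderedScan(true)
                .persists(true)
                .build();
    }

    @Override
    public StoreFeatures getFeatures() {
        return features;
    }

    @Override
    public StoreTransaction beginTransaction(BaseTransactionConfig config) throws BackendException {
        // Open (or borrow from a pool) a JDBC connection to Snowflake and
        // wrap it in the transaction handle from the second sketch.
        return new SnowflakeTransaction(config, null /* jdbc connection elided */);
    }

    @Override
    public KeyColumnValueStore openDatabase(String name) throws BackendException {
        // One Snowflake table per JanusGraph store (edgestore, graphindex, ...).
        return new SnowflakeKeyColumnValueStore(name);
    }

    // mutateMany(), close(), clearStorage(), exists(), getName(),
    // getLocalKeyPartition(), getDeployment() omitted for brevity.
}
```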
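
The transaction handle it returns could then be modeled on BerkeleyJETx, assuming we drive Snowflake over its JDBC driver:

```java
import java.sql.Connection;
import java.sql.SQLException;

import org.janusgraph.diskstorage.BackendException;
import org.janusgraph.diskstorage.BaseTransactionConfig;
import org.janusgraph.diskstorage.PermanentBackendException;
import org.janusgraph.diskstorage.common.AbstractStoreTransaction;

// Hypothetical transaction handle wrapping a JDBC connection.
public class SnowflakeTransaction extends AbstractStoreTransaction {

    private final Connection connection;

    public SnowflakeTransaction(BaseTransactionConfig config, Connection connection) {
        super(config);
        this.connection = connection;
    }

    @Override
    public synchronized void commit() throws BackendException {
        try {
            connection.commit();
            connection.close();
        } catch (SQLException e) {
            throw new PermanentBackendException("Could not commit Snowflake transaction", e);
        }
    }

    @Override
    public synchronized void rollback() throws BackendException {
        try {
            connection.rollback();
            connection.close();
        } catch (SQLException e) {
            throw new PermanentBackendException("Could not roll back Snowflake transaction", e);
        }
    }
}
```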
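
Finally, the store itself, with getSlice() as the core read primitive. My current understanding (happy to be corrected) is that Gremlin queries are *not* translated here: JanusGraph compiles them into key/slice queries higher up the stack, and the store only has to answer those:

```java
import java.util.List;

import org.janusgraph.diskstorage.BackendException;
import org.janusgraph.diskstorage.Entry;
import org.janusgraph.diskstorage.EntryList;
import org.janusgraph.diskstorage.StaticBuffer;
import org.janusgraph.diskstorage.keycolumnvalue.KeyColumnValueStore;
import org.janusgraph.diskstorage.keycolumnvalue.KeySliceQuery;
import org.janusgraph.diskstorage.keycolumnvalue.StoreTransaction;

// Hypothetical store: one instance per backing Snowflake table.
public class SnowflakeKeyColumnValueStore implements KeyColumnValueStore {

    private final String tableName;

    public SnowflakeKeyColumnValueStore(String tableName) {
        this.tableName = tableName;
    }

    @Override
    public EntryList getSlice(KeySliceQuery query, StoreTransaction txh) throws BackendException {
        StaticBuffer key = query.getKey();
        StaticBuffer sliceStart = query.getSliceStart();
        StaticBuffer sliceEnd = query.getSliceEnd();
        // Read path (pseudo-SQL):
        //   SELECT store_col, store_val FROM <tableName>
        //   WHERE store_key = :key AND store_col >= :sliceStart AND store_col < :sliceEnd
        //   ORDER BY store_col LIMIT :limit
        // then wrap each row in an Entry and return them as an EntryList.
        throw new UnsupportedOperationException("sketch only");
    }

    @Override
    public void mutate(StaticBuffer key, List<Entry> additions, List<StaticBuffer> deletions,
                       StoreTransaction txh) throws BackendException {
        // Write path: within txh's connection, DELETE the (key, col) pairs in
        // `deletions`, then upsert the (key, col, val) triples in `additions`.
        throw new UnsupportedOperationException("sketch only");
    }

    @Override
    public String getName() {
        return tableName;
    }

    // getSlice(multi-key), acquireLock(), getKeys(), close() omitted.
}
```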

I know we can store arbitrary objects in Snowflake; I'm just looking at the changes needed at the JanusGraph level to achieve this.

Any help will be really appreciated.

Thanks in Advance.
