Re: Anyone with experience of adding new Storage backend for JanusGraph ? [Help needed w.r.t SnowFlake]

Dmitry Kovalev <dk.g...@...>

A few more things which you might find helpful:

1) To help you verify if your implementation is correct, you can utilise the standard test suite which is used to test the correctness of all backends - more or less all you need to do is extend KeyColumnValueStoreTest and override the setup methods to provide an instance of your store implementation. See e.g. CQLStoreTest

2) If you plan to open-source your work anyway, it may help to make your work in progress public now - e.g. put it on GitHub or GitLab. This way you can refer to specific code, and you may even get someone to help out. Of course this is not always possible, depending on your company's policies, but if it is, it may be beneficial.

On Tue, 3 Dec 2019 at 00:26, Dmitry Kovalev <dk.g...@...> wrote:
Hi Debashish,

in terms of wrapping one's head around what the getSlice() method does - conceptually it is not hard to understand, if you peruse the link I referred you to in my original reply:

The relevant part of it is really short so I'll just copy it here (with added emphasis in bold):

Bigtable Data Model

Under the Bigtable data model each table is a collection of rows. Each row is uniquely identified by a key. Each row is comprised of an arbitrary (large, but limited) number of cells. A cell is composed of a column and value. A cell is uniquely identified by a column within a given row. Rows in the Bigtable model are called "wide rows" because they support a large number of cells and the columns of those cells don’t have to be defined up front as is required in relational databases.

JanusGraph has an additional requirement for the Bigtable data model: The cells must be sorted by their columns and a subset of the cells specified by a column range must be efficiently retrievable (e.g. by using index structures, skip lists, or binary search).


Basically, the getSlice method is the formal representation of the above requirement in bold: based on the order defined over the "column key" space, it should return all "columns" whose keys lie between the start and end values given in the SliceQuery... that is, >= start and < end (start-inclusive, end-exclusive)... Please refer to the javadoc for more detail.
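To make the contract concrete, here is a toy sketch in plain Java, with Strings standing in for the byte-buffer keys and values a real backend uses, and a TreeMap standing in for the store:

```java
import java.util.NavigableMap;
import java.util.SortedMap;
import java.util.TreeMap;

public class SliceSketch {
    public static void main(String[] args) {
        // One "row": cells kept sorted by column key (Strings here stand in
        // for the byte buffers a real backend uses).
        NavigableMap<String, String> row = new TreeMap<>();
        row.put("a", "1");
        row.put("b", "2");
        row.put("c", "3");
        row.put("d", "4");

        // getSlice(start="b", end="d"): start-inclusive, end-exclusive.
        SortedMap<String, String> slice = row.subMap("b", "d");
        System.out.println(slice); // {b=2, c=3}
    }
}
```

A backend is free to implement this with whatever index structure it likes, as long as this range retrieval is efficient.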

However, answering the question of how to implement it efficiently in your backend is pretty much the crux of your potential contribution.

If the underlying DB's data model more or less "natively" supports the above (as e.g. in the case of Cassandra, BDB etc), then it becomes relatively easy.

If the underlying data model is different, then it gets us back to the question which has been asked a couple of times in this thread - i.e. whether it is actually feasible and/or desirable to try to implement it.

For example, in order to implement it in a "classical" RDBMS, you would have to find one which supports ordering and indexing of byte columns/blobs, and then you would probably encounter scalability issues if you chose to model the whole key-column-value store as one table with row key, column key and data... It might still be possible to address these issues and implement it reasonably efficiently, but it is unclear what the point would be - you would effectively have to circumvent the "relational/SQL" abstraction layer, which is the whole point of an RDBMS, to get back to lower-level implementation details.

Unfortunately I know nothing about Snowflake and its data model, and I don't have the time to learn about it in sufficient detail any time soon, so I cannot really advise you either on feasibility or on any implementation details.

Hope this helps,


On Sun, 1 Dec 2019 at 09:04, Debasish Kanhar <d.k...@...> wrote:
Hello any developers following this thread:

As suggested by Dmitry, the CQL adapter uses prepared statements, and that is appropriate for me in the sense that I'll be using SQL statements (SnowSQL) for Snowflake querying via a DAO. Thus the CQL adapter and the Snowflake adapter I'm building would be similar, and it makes sense to use the former as a reference.

As mentioned before, I'm currently blocked on the getSlice method. I know that the method is used while querying the data, but I'm unable to get my head around how it works internally. A blind implementation might work, but it won't give me an understanding of how it works. If anyone can help me understand it, a similar implementation for Snowflake becomes much easier.

As mentioned before, I'm basing my understanding on the CQL adapter. If we look at CQLKeyColumnValueStore, its getSlice method makes use of the this.getSlice prepared statement to fulfil the query. this.getSlice is as follows:

this.getSlice = this.session.prepare(select()
        .from(this.storeManager.getKeyspaceName(), this.tableName)
        .where(eq(KEY_COLUMN_NAME, bindMarker(KEY_BINDING)))
        .and(gte(COLUMN_COLUMN_NAME, bindMarker(SLICE_START_BINDING)))
        .and(lt(COLUMN_COLUMN_NAME, bindMarker(SLICE_END_BINDING)))
        .limit(bindMarker(LIMIT_BINDING)));

This prepared statement is then used in the public EntryList getSlice(...) method to execute the query. The relevant contents of that method are (abbreviated):

final Future<EntryList> result = Future.fromJavaFuture(
        this.session.executeAsync(this.getSlice.bind()
                .setBytes(KEY_BINDING, query.getKey().asByteBuffer())
                .setBytes(SLICE_START_BINDING, query.getSliceStart().asByteBuffer())
                .setBytes(SLICE_END_BINDING, query.getSliceEnd().asByteBuffer())
                .setInt(LIMIT_BINDING, query.getLimit())))
        .map(resultSet -> fromResultSet(resultSet, this.getter));

Is the following understanding correct? Anyone with JanusGraph and Cassandra expertise, please help.

My understanding is that binding the parameters effectively turns the base query into:

.where(eq(KEY_COLUMN_NAME, query.getKey().asByteBuffer()))
.and(gte(COLUMN_COLUMN_NAME, query.getSliceStart().asByteBuffer()))
.and(lt(COLUMN_COLUMN_NAME, query.getSliceEnd().asByteBuffer()))

Is the above interpolation correct?

So, if we were to model this in an RDBMS (Snowflake, for example; though Snowflake isn't an RDBMS, it is similar in terms of storage and query engine) with 3 columns (key, column1, value) of string data types (varchar holding binary data), would something like the following query be correct?

SELECT .... FROM keyspace WHERE
key = query.getKey().asByteBuffer() and
column1 >= query.getSliceStart().asByteBuffer() and
column1 < query.getSliceEnd().asByteBuffer()
limit query.getLimit()

Does this sort of query sound similar to what we are trying to achieve? If I can understand the actual meaning of the prepared statements here, I can also base my understanding of the rest of the methods required for doing mutations in the underlying backend on it.
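To make my interpretation concrete, here is the same predicate expressed over a hypothetical in-memory table in plain Java (Strings standing in for the binary data):

```java
import java.util.List;
import java.util.stream.Collectors;

public class SqlSlicePredicateSketch {
    record Row(String key, String column1, String value) {}

    public static void main(String[] args) {
        // Rows of the hypothetical (key, column1, value) table.
        List<Row> table = List.of(
                new Row("v1", "a", "1"),
                new Row("v1", "b", "2"),
                new Row("v1", "c", "3"),
                new Row("v2", "a", "4"));

        // WHERE key = 'v1' AND column1 >= 'b' AND column1 < 'z' LIMIT 10
        // Note: a real implementation would also need ORDER BY column1,
        // since getSlice must return entries sorted by column key.
        List<Row> slice = table.stream()
                .filter(r -> r.key().equals("v1"))
                .filter(r -> r.column1().compareTo("b") >= 0)
                .filter(r -> r.column1().compareTo("z") < 0)
                .limit(10)
                .collect(Collectors.toList());
        slice.forEach(r -> System.out.println(r.column1() + "=" + r.value()));
        // b=2
        // c=3
    }
}
```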

Any help is really appreciated, as we are getting tighter and tighter on the deadline for the feasibility PoC of Snowflake as a backend for JanusGraph.

Thanks in advance

On Thursday, 28 November 2019 21:05:09 UTC+5:30, Debasish Kanhar wrote:
Hi Evgeniy,

Thanks for the question. We plan to open-source it once implemented, but we are still a long way from implementation. We will be really grateful to anyone in the community who can help in any way to achieve this :-)

On Thursday, 28 November 2019 16:16:27 UTC+5:30, Evgeniy Ignatiev wrote:


Is this backend open-source/will be open-sourced?

Best regards,
Evgeniy Ignatiev.

On 11/28/2019 1:40 PM, Debasish Kanhar wrote:
Hi Ryan.

Well, that's a very valid question. The current implementations of backends like Scylla, as you mentioned, are really highly performant. There is no specific problem in mind, but of late I have been dealing with a lot of clients who are migrating their whole system to Snowflake, including the whole data storage and analytics components. Snowflake is a hot, upcoming data storage and warehousing system.

Those clients are really reluctant to add another storage component to their application. The reasons vary: high costs, the added complexity of their architecture, or duplication of data across storages. But at the same time these clients also want to incorporate graph databases and graph analytics into their applications. This integration is targeted at those customers/clients who have migrated, or are migrating, to Snowflake and want a graph-based component as well. For now, it is simply not possible for them to have JanusGraph on top of their Snowflake data storage.

Hope I was able to explain it clearly :-)

On Wednesday, 27 November 2019 20:40:52 UTC+5:30, Ryan Stauffer wrote:

This sounds like an interesting project, but I do have a question about your choice of Snowflake.  If I missed your response to this in the email chain, I apologize, but what problems with the existing high-performance backends (Scylla, for instance) are you trying to solve with Snowflake?  The answer to that would probably inform your specific implementation over Snowflake.


On Wed, Nov 27, 2019 at 3:18 AM Debasish Kanhar <d...@...> wrote:
Hi Dmitry,

Sorry about the late response. I was working on this project part-time until last week, when we moved into full-time development for this PoC. Thanks to your pointers and Jason's, we have been able to start the development work, and we have some groundwork to build on :-)

So, we are modelling Snowflake (which is like a SQL file store) as a key-value store by creating two columns, namely "Key" and "Value", in each table. We are going to define the data type as binary (or stringified binary) so that arbitrary data can be stored. (I believe the types are StaticBuffer key and StaticBuffer value. Is that correct?)

Since we are modelling Snowflake as a key-value store, it makes sense to have a SnowFlakeManager class implement OrderedKeyValueStoreManager, as BerkeleyJE does. Is that a correct understanding?

As for updates, we have almost finished development of the SnowFlakeManager class. The required methods, such as beginTransaction and openDatabase, are implemented. One function not yet done is mutateMany, but it will be, as it in turn calls the KeyValueStore insert() method.

Also, a lot of the basic functions in KeyValueStore are done, like insert (insert a binary key-value pair), get (get by binary key) and delete (delete a row by binary key). We are stuck at the function getSlice(). What does it do?

We are wondering how getSlice operates. I know that the function is used when querying JanusGraph with gremlin queries (read operations). We see that a SliceQuery is generated, which is then executed against the backend to get results.
Now, my question here is that the slice query is used while querying for the properties of vertices (edges/properties), by slicing the relations of a vertex based on filters/conditions. The following steps are followed in the getSlice function (BerkeleyJEKeyValueStore for berkeleydb & ColumnValueStore for inmemory):
  1. Find the row for the passed key (returns a binary value for the binary key).
  2. Fetch the slice boundaries, i.e. slice start and end, from the passed query.
  3. Apply the slice boundaries to the results of step 1, i.e. fetch the matching entries by applying the slicing conditions.
My question is about the last step. Since the data in my DB is just "binary key - binary value", how can we apply additional constraints (slice conditions) in the query? There is no additional metadata to slice on, as I just have 2 columns in my table.

Hope my explanation was clear enough. I primarily want to know how the last step would work in the data model I described above (having 2 columns, one for the key and the other for the value, each of a stringified binary data type). And is the chosen data model good enough?
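One idea we are considering: since our store is ordered, we could pack (row key, column key) into a single store key, so that a column slice becomes a contiguous key-range scan. I believe JanusGraph's OrderedKeyValueStoreAdapter does something along these lines. A toy sketch, with Strings and a '|' separator standing in for real byte packing:

```java
import java.util.NavigableMap;
import java.util.TreeMap;

public class PackedKeySketch {
    public static void main(String[] args) {
        // With only (key, value) columns, a column slice can still be served
        // if the store is ordered: pack (row key, column key) into a single
        // store key. The '|' separator here is purely illustrative; a real
        // adapter packs raw bytes.
        NavigableMap<String, String> store = new TreeMap<>();
        store.put("v1|colA", "1");
        store.put("v1|colB", "2");
        store.put("v1|colC", "3");
        store.put("v2|colA", "4");

        // Slice of row v1 from colB (inclusive) to colC (exclusive) is a
        // contiguous key-range scan:
        System.out.println(store.subMap("v1|colB", "v1|colC")); // {v1|colB=2}
    }
}
```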

Thanks in advance. And I promise this time my replies will be quicker :-)

On Friday, 25 October 2019 03:17:24 UTC+5:30, Dmitry Kovalev wrote:
Hi Debashish,

here are my 2 cents:

First of all, you need to be clear with yourself as to why exactly you want to build a new backend. E.g. do you find that the existing ones are sub-optimal for certain use cases, or are they too hard to set up, or do you just want to provide a backend for a cool new database in the hope that it will increase adoption, or something else? In other words, do you have a clear idea of what this new backend is going to provide which the existing ones do not, e.g. advanced scalability or performance, ease of setup, or just an option for people with existing Snowflake infra to put it to a new use?

Second, you are almost correct, in that basically all you need to implement are three interfaces:
- KeyColumnValueStoreManager, which allows opening multiple instances of named KeyColumnValueStores and provides a certain level of transactional context between different stores it has opened
- KeyColumnValueStore - which represents an ordered collection of "rows" accessible by keys, where each row is a
- KeyValueStore - basically an ordered collection of key-value pairs, which can be thought of as the individual "columns" of that row, together with their respective values

Both row and column keys, and the data values are generic byte data.

Have a look at this piece of documentation:    

Possibly the simplest way to understand the "minimum contract" required by Janusgraph from a backend is to look at the inmemory backend. You will see that:  
- KeyColumnValueStoreManager is conceptually a Map of store name -> KeyColumnValueStore,
- each KeyColumnValueStore is conceptually a NavigableMap of "rows", or KeyValueStores (i.e. a "table"),
- each KeyValueStore is conceptually an ordered collection of key -> value pairs ("columns").
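These three conceptual levels can be sketched as nested maps in plain Java (Strings standing in for the byte data):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.NavigableMap;
import java.util.TreeMap;

public class KcvsConceptSketch {
    public static void main(String[] args) {
        // KeyColumnValueStoreManager ~ a map of store name -> store.
        Map<String, NavigableMap<String, NavigableMap<String, String>>> manager = new HashMap<>();

        // KeyColumnValueStore ~ an ordered map of row key -> row (a "table").
        manager.put("edgestore", new TreeMap<>());

        // KeyValueStore ~ one row: an ordered map of column key -> value.
        manager.get("edgestore")
               .computeIfAbsent("vertex-1", k -> new TreeMap<>())
               .put("property-a", "some-value");

        System.out.println(manager); // {edgestore={vertex-1={property-a=some-value}}}
    }
}
```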

In the most basic case, once you implement these three relatively simple interfaces, JanusGraph can take care of all the translation of graph operations, such as adding vertices and edges, and of gremlin queries, into a series of read-write operations over a collection of KCV stores. When you open a new graph, JanusGraph asks the KeyColumnValueStoreManager implementation to create a number of specially named KeyColumnValueStores, which it uses to store vertices, edges and various indices. It also creates a number of "utility" stores which it uses internally for locking, id management etc.

Crucially, whatever stores JanusGraph creates in your backend implementation, and whatever it uses them for, you only need to make sure that you implement those basic interfaces, which allow storing arbitrary byte data and accessing it by arbitrary byte keys.

So for your first "naive" implementation, you most probably shouldn't worry too much about translation of graph model to KCVS model and back - this is what Janusgraph itself is mostly about anyway. Just use StoreFeatures to tell Janusgraph that your backend supports only most basic operations, and concentrate on thinking how to best implement the KCVS interfaces with your underlying database/storage system.

Of course, after that, as you start thinking of supporting better levels of consistency/transaction management across multiple stores, about performance, better utilising native indexing/query mechanisms, separate indexing backends, support for distributed backend model etc etc - you will find that there is more to it, and this is where you can gain further insights from the documentation, existing backend sources and asking more specific questions.

See for example this piece of documentation:

Hope this helps,

On Thu, 24 Oct 2019 at 21:27, Debasish Kanhar <d...@...> wrote:
I know that JanusGraph needs a column-family-type NoSQL database as a storage backend, and hence we have Scylla, Cassandra, HBase etc. Snowflake isn't a column-family database, but it has a column data type which can store any sort of data. So we could store complete JSON-oriented column-family data there after massaging/pre-processing the data. Is that a practical thought? Is it practical enough to implement?

If it is practical enough to implement, what needs to be done? I'm going through the source code, and I'm basing my ideas on my understanding of the janusgraph-cassandra and janusgraph-berkeleyje projects. Please correct me if I'm wrong in my understanding.

  1. We need to have a StoreManager class, like HBaseStoreManager, AbstractCassandraStoreManager or BerkeleyJEStoreManager, which extends either DistributedStoreManager or LocalStoreManager and implements the KeyColumnValueStoreManager interface, right? These classes need to build a features object, which is more or less the storage connection configuration. They need to have a beginTransaction method which creates the actual connection to the corresponding storage backend. Is that correct?
  2. We need corresponding Transaction classes which create the transaction for the corresponding backend, like CassandraTransaction or BerkeleyJETx. The transaction class needs to extend the AbstractStoreTransaction class. Though I can see and understand the transaction being created in BerkeleyJETx, I don't see something similar for CassandraTransaction. Am I missing something in my understanding here?
  3. We need a KeyColumnValueStore class for the backend, like AstyanaxKeyColumnValueStore or BerkeleyJEKeyValueStore. These need to implement KeyColumnValueStore. This class takes care of massaging the data into key-column-value format so that it can be inserted into the corresponding table inside the storage backend.
    1. Questions that come to mind: what will be the structure of those classes?
    2. Are there some methods which always need to be present? I see getSlice() being used across all such classes. How do they work?
    3. Do they just convert incoming gremlin queries into a key-column-value structure?
    4. Are there any other classes I'm missing, or are these 3 the only ones needed to create a new storage backend?
    5. Also, if these 3 are the only classes needed, and let's say we succeed in using Snowflake as a storage backend, how does the read/query aspect of JanusGraph get solved? Are any changes needed on that end, or is JanusGraph so abstracted that it can start picking up from the new source?
  4. And I thought there would be some classes which read in "gremlin queries", do certain "pre-processing into certain (tabular) data structures", and then push them through some connection into the respective backend. This is where we need help: is there a way to visualise those objects after "pre-processing", store them as-is in Snowflake, and reuse them to fulfil gremlin queries?

I know we can store arbitrary objects in Snowflake; I'm just looking at the changes needed at the JanusGraph level to achieve this.

Any help will be really appreciated.

Thanks in Advance.
You received this message because you are subscribed to the Google Groups "JanusGraph developers" group.
To unsubscribe from this group and stop receiving emails from it, send an email to jan...@....
To view this discussion on the web visit
