Debasish Kanhar <d.k...@...>
Hello to any developers following this thread:

As suggested by Dmitry, the CQL adapter uses prepared statements, and that is appropriate for me in the sense that I'll be using SQL statements (SnowSQL) to query SnowFlake through a DAO. The CQL adapter and the SnowFlake adapter I'm building would therefore be similar, so it makes sense to work from the CQL one as a reference.

As mentioned before, I'm currently blocked on the method getSlice. I know that the method is used while querying data, but I'm unable to get my head around how it works internally. A blind implementation might work, but it wouldn't give me an understanding of how it works under the hood. If anyone can help me understand it, a similar implementation for SnowFlake becomes much easier.

As mentioned before, I'm basing my understanding on the CQL adapter. If we look at CQLKeyColumnValueStore, the getSlice method makes use of the this.getSlice prepared statement to fulfil the query. The this.getSlice statement is built as follows (abridged):
    this.getSlice = this.session.prepare(select()
            .column(COLUMN_COLUMN_NAME)
            .column(VALUE_COLUMN_NAME)
            .from(this.storeManager.getKeyspaceName(), this.tableName)
            .where(eq(KEY_COLUMN_NAME, bindMarker(KEY_BINDING)))
            .and(gte(COLUMN_COLUMN_NAME, bindMarker(SLICE_START_BINDING)))
            .and(lt(COLUMN_COLUMN_NAME, bindMarker(SLICE_END_BINDING)))
            .limit(bindMarker(LIMIT_BINDING)));
The this.getSlice statement is used in the public EntryList getSlice(KeySliceQuery query, StoreTransaction txh) method, which binds the query's values into the prepared statement and executes it asynchronously. The body of getSlice does roughly the following (abridged):

    final Future<EntryList> result = Future.fromJavaFuture(
            this.executorService,
            this.session.executeAsync(this.getSlice.bind()
                    .setBytes(KEY_BINDING, query.getKey().asByteBuffer())
                    .setBytes(SLICE_START_BINDING, query.getSliceStart().asByteBuffer())
                    .setBytes(SLICE_END_BINDING, query.getSliceEnd().asByteBuffer())
                    .setInt(LIMIT_BINDING, query.getLimit())))
            .map(resultSet -> fromResultSet(resultSet, this.getter));
Is the following understanding correct? Anyone with JanusGraph and Cassandra expertise, please help.

My reading is that the base query is filled in from the following bindings: the row key from query.getKey(), the slice start and end from query.getSliceStart() and query.getSliceEnd(), and the limit from query.getLimit(). Is that interpolation correct?
So, if we were to model this in an RDBMS (SnowFlake, for example; though SnowFlake isn't strictly an RDBMS, it is similar in terms of storage and query engine) with three columns (key, value, column1), each a string/varchar holding binary data, would something like the following query be correct?
SELECT .... FROM keyspace WHERE
key = query.getKey().asByteBuffer() and
column1 >= query.getSliceStart().asByteBuffer() and
column1 < query.getSliceEnd().asByteBuffer()
Does this sort of query sound similar to what getSlice is meant to achieve? If I can understand the actual meaning of the prepared statements here, I can also base my understanding of the rest of the methods required for doing mutations in the underlying backend on it.
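To make the above concrete, here is a minimal, hypothetical JDBC-style sketch of how such a slice query could look; the table name GRAPH_STORE and all identifiers are my own illustration, not anything from the JanusGraph or SnowFlake codebases:

    import java.nio.ByteBuffer;
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class SliceQuerySketch {
        // Hypothetical slice lookup over GRAPH_STORE(key BINARY, column1 BINARY, value BINARY):
        // equality on the row key, half-open range on column1, mirroring the CQL version above.
        public static ResultSet getSlice(Connection conn, ByteBuffer key,
                                         ByteBuffer sliceStart, ByteBuffer sliceEnd,
                                         int limit) throws SQLException {
            PreparedStatement ps = conn.prepareStatement(
                    "SELECT column1, value FROM GRAPH_STORE"
                    + " WHERE key = ? AND column1 >= ? AND column1 < ?"
                    + " ORDER BY column1 LIMIT ?");
            ps.setBytes(1, toArray(key));        // query.getKey().asByteBuffer()
            ps.setBytes(2, toArray(sliceStart)); // query.getSliceStart().asByteBuffer()
            ps.setBytes(3, toArray(sliceEnd));   // query.getSliceEnd().asByteBuffer()
            ps.setInt(4, limit);                 // query.getLimit()
            return ps.executeQuery();
        }

        private static byte[] toArray(ByteBuffer buf) {
            byte[] bytes = new byte[buf.remaining()];
            buf.duplicate().get(bytes);
            return bytes;
        }
    }

The important property is the half-open range on column1: start inclusive, end exclusive, exactly as in the gte/lt pair of the CQL prepared statement.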
Any help is really appreciated, as we are getting tight on the deadline for the feasibility PoC of SnowFlake as a backend for JanusGraph.

Thanks in advance
On Thursday, 28 November 2019 21:05:09 UTC+5:30, Debasish Kanhar wrote:
Thanks for the question. We plan to open source it once implemented, but we are still a long way from implementation. We will be really grateful to anyone in the community who can help in any way to achieve this :-)
On Thursday, 28 November 2019 16:16:27 UTC+5:30, Evgeniy Ignatiev wrote:
Is this backend open-source/will be open-sourced?
On 11/28/2019 1:40 PM, Debasish Kanhar wrote:
Well, that's a very valid question you asked. The current implementations of backends like Scylla, as you mentioned, are really highly performant. There is no specific problem in mind, but of late I have been dealing with a lot of clients who are migrating their whole system to SnowFlake, including the whole data storage and analytics components as well. SnowFlake is a hot upcoming data storage and warehousing platform.

Those clients are really reluctant to add another storage component to their application. The reasons can be many: high costs, added complexity in their architecture, or duplication of data across storages. But at the same time these clients also want to incorporate graph databases and graph analytics into their applications. This integration is targeted at that set of customers/clients who have migrated, or are migrating, to SnowFlake and want to have a graph-based component as well. For now, it's simply not possible for them to have JanusGraph work with their SnowFlake data.

Hope I was able to explain it clearly :-)
On Wednesday, 27 November 2019 20:40:52 UTC+5:30, Ryan Stauffer wrote:
This sounds like an interesting project, but I do have a question about your choice of Snowflake. If I missed your response to this in the email chain, I apologize, but what problems with the existing high-performance backends (Scylla, for instance) are you trying to solve with Snowflake? The answer to that would probably inform your specific implementation.
On Wed, Nov 27, 2019 at 3:18 AM Debasish Kanhar wrote:
Sorry about the late response. I was working on this project part time until last week, when we moved into full-time development for this PoC. Thanks to your pointers and Jason's, we have been able to start with the development work and we have some groundwork to start from :-)

So, we are modelling SnowFlake (which is like a SQL file store) as a key-value store by creating two columns, namely "Key" and "Value", in each table. We are going to define the data type as binary (or stringified binary) so that arbitrary data can be dumped. (I believe the types are StaticBuffer key and StaticBuffer value. Is that correct?)

Since we are modelling SnowFlake as a key-value store, it makes sense to have a SnowFlakeManager class implement OrderedKeyValueStore, like for BerkeleyJE. Is that a correct understanding?
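For reference, a rough skeleton of how this splits across JanusGraph's key-value interfaces (class and method bodies here are my own sketch; in the BerkeleyJE backend the manager implements OrderedKeyValueStoreManager while each store implements OrderedKeyValueStore):

    import org.janusgraph.diskstorage.BackendException;
    import org.janusgraph.diskstorage.keycolumnvalue.keyvalue.OrderedKeyValueStore;
    import org.janusgraph.diskstorage.keycolumnvalue.keyvalue.OrderedKeyValueStoreManager;

    // Hypothetical: one two-column (Key, Value) SnowFlake table per named store.
    public class SnowFlakeManager implements OrderedKeyValueStoreManager {

        @Override
        public OrderedKeyValueStore openDatabase(String name) throws BackendException {
            // e.g. CREATE TABLE IF NOT EXISTS <name> (key BINARY, value BINARY)
            return new SnowFlakeKeyValueStore(name, this); // hypothetical store class
        }

        // beginTransaction, mutateMany, getFeatures, close etc. omitted for brevity
    }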
As for updates: we have almost finished development of the SnowFlakeManager class. The required methods are implemented, like beginTransaction and openDatabase; one particular function, mutateMany, is not done yet, but it will be, as it in turn calls KeyValueStore.insert().

A lot of the basic functions in KeyValueStore are also done, like insert (insert a binary key-value), get (get by binary key), and delete (delete a row using a binary key). We are stuck at the function getSlice(). What does it do?
Now, my question here is this: a slice query is used while querying for the properties of vertices (edges/properties), by fetching the relations of a vertex and slicing them based on filters/conditions. The following steps are followed in the getSlice function (BerkeleyJEKeyValueStore in berkeleyje and ColumnValueStore in inmemory):

- Find the row for the passed key (returns a binary value against the binary key).
- Fetch the slice boundaries, i.e. slice start and end, from the passed query.
- Apply the slice boundaries to what the first step returns, i.e. keep only the results (pt 1) that satisfy the slicing conditions in the query (see the sketch below).
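In other words, a minimal sketch of those steps, assuming the store is conceptually an ordered map (TreeMap and String keys are mine, standing in for the store and its binary StaticBuffer keys):

    import java.util.NavigableMap;
    import java.util.TreeMap;

    public class GetSliceSketch {
        // A slice is a range scan over ordered keys: start inclusive, end exclusive.
        static NavigableMap<String, String> getSlice(NavigableMap<String, String> store,
                                                     String sliceStart, String sliceEnd) {
            return store.subMap(sliceStart, true, sliceEnd, false);
        }

        public static void main(String[] args) {
            NavigableMap<String, String> store = new TreeMap<>();
            store.put("a", "1"); store.put("b", "2");
            store.put("c", "3"); store.put("d", "4");
            // Prints {b=2, c=3}: every entry with sliceStart <= key < sliceEnd
            System.out.println(getSlice(store, "b", "d"));
        }
    }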
My question is related to the last step. Since my data in the DB is just "binary key - binary value", how can we apply additional constraints (slice conditions) in the query? There is no additional metadata to apply a slice on, as I just have two columns in my table.

Hope my explanation was clear enough to understand. I primarily want to know how the last step would work in the data model I described above (having two columns, one for Key and the other for Value, each of a stringified-binary data type). And is the selected data model good enough?

Thanks in advance. And I promise this time my replies will be quicker :-)
On Friday, 25 October 2019 03:17:24 UTC+5:30, Dmitry wrote:

Here are my 2 cents:
First of all, you need to be clear with yourself as to why exactly you want to build a new backend. E.g. do you find that the existing ones are sub-optimal for certain use cases, or are they too hard to set up, or do you just want to provide a backend to a cool new database in the hope that it will increase adoption, or something else? In other words, do you have a clear idea of what this new backend is going to provide which the existing ones do not, e.g. advanced scalability or performance, or ease of setup, or just an option for people with existing Snowflake infra to put it to a new use?
Second, you are almost correct, in that basically all you need to implement are three interfaces:

- KeyColumnValueStoreManager, which allows opening multiple instances of named KeyColumnValueStores and provides a certain level of transactional context between the different stores it has opened
- KeyColumnValueStore, which represents an ordered collection of "rows" accessible by keys, where each row is in turn a
- KeyValueStore - basically an ordered collection of key-value pairs, which can be thought of as the individual "columns" of that row and their respective values

Both row and column keys, and the data values, are generic byte data.
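Abridged signatures of the first two interfaces (from org.janusgraph.diskstorage.keycolumnvalue in janusgraph-core; simplified here, so treat this as a sketch of the shape rather than the exact source):

    public interface KeyColumnValueStoreManager {
        // open (or create) a named store; JanusGraph calls this for
        // "edgestore", "graphindex" and its other internal stores
        KeyColumnValueStore openDatabase(String name) throws BackendException;
        StoreTransaction beginTransaction(BaseTransactionConfig config) throws BackendException;
        // batched writes across several stores within one transactional context
        void mutateMany(Map<String, Map<StaticBuffer, KCVMutation>> mutations,
                        StoreTransaction txh) throws BackendException;
        StoreFeatures getFeatures();
    }

    public interface KeyColumnValueStore {
        // one row key plus a [sliceStart, sliceEnd) column range
        EntryList getSlice(KeySliceQuery query, StoreTransaction txh) throws BackendException;
        // add and/or delete columns of a single row
        void mutate(StaticBuffer key, List<Entry> additions,
                    List<StaticBuffer> deletions, StoreTransaction txh) throws BackendException;
    }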
Possibly the simplest way to understand the "minimum contract" required by JanusGraph from a backend is to look at the inmemory backend. You will see that:

- KeyColumnValueStoreManager is conceptually a Map of store name -> KeyColumnValueStore
- each KeyColumnValueStore is conceptually a NavigableMap of "rows" or KeyValueStores (i.e. key -> KeyValueStore)
- each KeyValueStore is conceptually an ordered collection of key -> value pairs
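Written out as plain Java types, that conceptual picture is just nested ordered maps (my own illustration; String stands in for JanusGraph's StaticBuffer purely to keep the sketch short):

    import java.util.Map;
    import java.util.NavigableMap;
    import java.util.TreeMap;
    import java.util.concurrent.ConcurrentHashMap;

    public class ConceptualBackend {
        // store name -> (row key -> (column -> value));
        // the inner maps are ordered, which is what makes slice (range) queries cheap
        final Map<String, NavigableMap<String, NavigableMap<String, String>>> stores =
                new ConcurrentHashMap<>();

        NavigableMap<String, NavigableMap<String, String>> openDatabase(String name) {
            return stores.computeIfAbsent(name, n -> new TreeMap<>());
        }
    }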
In the most basic case, once you implement these three relatively simple interfaces, JanusGraph can take care of all the translation of graph operations, such as adding vertices and edges, and of gremlin queries, into a series of read-write operations over a collection of KCV stores. When you open a new graph, JanusGraph asks the KeyColumnValueStoreManager implementation to create a number of specially named KeyColumnValueStores, which it uses to store vertices, edges, and various indices. It also creates a number of "utility" stores which it uses internally for locking, id management etc.
Crucially, whatever stores JanusGraph creates in your backend implementation, and whatever it is using them for, you only need to make sure that you implement those basic interfaces which allow storing arbitrary byte data and accessing it by arbitrary byte keys.

So for your first "naive" implementation, you most probably shouldn't worry too much about the translation of the graph model to the KCVS model and back - this is what JanusGraph itself is mostly about anyway. Just use StoreFeatures to tell JanusGraph that your backend supports only the most basic operations, and concentrate on thinking about how to best implement the KCVS interfaces with your underlying storage.
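For example, a minimal getFeatures() could look roughly like this (a sketch based on the StandardStoreFeatures builder in janusgraph-core; verify the builder methods against the version you build for):

    import org.janusgraph.diskstorage.keycolumnvalue.StandardStoreFeatures;
    import org.janusgraph.diskstorage.keycolumnvalue.StoreFeatures;

    // Inside the (hypothetical) SnowFlake store manager:
    public StoreFeatures getFeatures() {
        // Advertise only the bare minimum; JanusGraph then avoids relying on
        // anything the backend has not promised (native locking, batching, ...)
        return new StandardStoreFeatures.Builder()
                .orderedScan(true)   // keys are kept ordered, so range scans work
                .keyOrdered(true)
                .transactional(false)
                .distributed(false)
                .batchMutation(false)
                .build();
    }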
Of course, after that, as you start thinking about supporting better levels of consistency/transaction management across multiple stores, about performance, about better utilising native indexing/query mechanisms, separate indexing backends, support for a distributed backend model etc. - you will find that there is more to it, and this is where you can gain further insights from the documentation, the existing backend sources, and asking more specific questions.
Hope this helps,
On Thu, 24 Oct 2019 at 21:27, Debasish Kanhar <d...@...> wrote:
I know that JanusGraph needs a NoSQL database as its storage backend, and hence that is why we have Scylla, Cassandra, HBase etc. SnowFlake isn't a column-family database, but it has a column data type which can store any sort of data. So we can store complete JSON-oriented column-family data there after massaging / pre-processing the data. Is that a practical thought? Is it practical enough to implement?
If it is practical enough to implement, what needs to be done? I'm going through the source code, and I'm basing my ideas on my understanding of the existing backends. Please correct me if I'm wrong in my understanding:

1. We need to have a StoreManager class like HBaseStoreManager, AbstractCassandraStoreManager or BerkeleyJEStoreManager, which extends DistributedStoreManager or LocalStoreManager, right? These classes need to hold what is more or less the storage connection configuration, and they need to have a method which creates the actual connection to the corresponding storage backend. Is that correct?
2. We will need to have corresponding Transaction classes which create the transaction to the corresponding backend, like *CassandraTransaction* or *BerkeleyJETx*. The transaction class needs to extend the AbstractStoreTransaction class. Though I can see and understand the transaction classes for Cassandra and BerkeleyJE, I don't see something similar for HBase. So am I missing something in my understanding?
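For what it's worth, such a transaction class can be almost trivial. A hypothetical sketch (SnowFlakeTx is my own name, modelled on how BerkeleyJETx extends AbstractStoreTransaction):

    import org.janusgraph.diskstorage.BackendException;
    import org.janusgraph.diskstorage.BaseTransactionConfig;
    import org.janusgraph.diskstorage.common.AbstractStoreTransaction;

    // Hypothetical: wraps whatever native transaction handle the backend offers.
    public class SnowFlakeTx extends AbstractStoreTransaction {

        public SnowFlakeTx(BaseTransactionConfig config) {
            super(config);
        }

        @Override
        public void commit() throws BackendException {
            // commit the underlying SQL transaction here
        }

        @Override
        public void rollback() throws BackendException {
            // roll back the underlying SQL transaction here
        }
    }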
3. We need to have KeyColumnValueStore / KeyValueStore classes for the backend, like *AstyanaxKeyColumnValueStore* or *BerkeleyJEKeyValueStore* etc. They need to implement the corresponding store interface. These classes take care of massaging the data into a form in which it can then be inserted into the corresponding table inside the storage backend. The questions on my mind are: what will the structure of those classes be? What needs to be present always, as I see it used across all such classes? Also, how do they work? How do they convert incoming gremlin queries into the KeyColumnValue structure? Are there any other classes I'm missing out on, or are these 3 the only ones that need to be implemented to create a new storage backend?
And if these 3 are the only classes needed, and let's say we succeed in using SnowFlake as a storage backend, how does the read/query aspect of JanusGraph get solved? Are there any changes needed on that end as well, or is JanusGraph abstracted enough that it can start picking up from the new backend directly?

I thought there would be some classes which read in the "gremlin queries", do certain "pre-processing into certain data structures (tabular)", and then push the result through some connection into the respective backends. This is where we need help: is there a way to visualize those objects after "pre-processing" and then store those objects as-is in SnowFlake and reuse them to fulfill gremlin queries? I know we can store arbitrary objects in SnowFlake; I'm just looking at the changes needed at the JanusGraph level to achieve this.

Any help will be really appreciated.