Re: BigTable - large rows (more than 256MB)


Boxuan Li
 

> I've seen cases in the past, where queries relying on a mixed index fail while the index backend still hasn't caught up to the storage backend.

Yes, that can happen. You can use https://docs.janusgraph.org/operations/recovery/#transaction-failure to tackle this problem, but it also means you have an additional long-running process to maintain.
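For reference, that recovery process depends on JanusGraph's write-ahead transaction log being enabled. A minimal configuration sketch (option names as documented at the link above; the values are illustrative, not recommendations):

```properties
# janusgraph.properties — enable the write-ahead transaction log so that
# index mutations from failed transactions can be replayed later
tx.log-tx = true
# how long (ms) recovery waits before treating a transaction's commit as failed
tx.max-commit-time = 10000
```

The long-running process I mentioned is then started from code via `JanusGraphFactory.startTransactionRecovery(graph, startTime)`.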

> How about cleanups / ttls / etc. ?

Not sure if I understand correctly. In your business model, are some vertices less important, such that they can be deleted? If frequent cleanups / TTLs mean the total number of vertices drops significantly, then yeah, that's going to help.
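If expiring data by age fits your model, JanusGraph can handle it natively rather than via a cleanup job. A hedged sketch of setting a vertex TTL through the management API, assuming a storage backend with TTL support (Bigtable/HBase and Cassandra qualify) and an illustrative label name:

```java
import java.time.Duration;
import org.janusgraph.core.VertexLabel;
import org.janusgraph.core.schema.JanusGraphManagement;

// assumes an already-open JanusGraph instance `graph`; "event" is a made-up label
JanusGraphManagement mgmt = graph.openManagement();
// vertex TTL requires the label to be static (vertices are never modified)
VertexLabel event = mgmt.makeVertexLabel("event").setStatic().make();
mgmt.setTTL(event, Duration.ofDays(7));  // vertices expire ~7 days after creation
mgmt.commit();
```

Note the `setStatic()` restriction: JanusGraph only allows TTLs on vertex labels whose vertices are immutable after creation.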

> what will be the upper bound for the number of vertices

My empirical upper bound is a few million vertices sharing the same "type" value (indexed by a composite index).
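To make the connection to the large-row subject of this thread explicit: a composite index stores all vertices with the same indexed value under a single index entry in the storage backend, so a heavily skewed value is what pushes a Bigtable row toward the 256MB limit. A sketch of the kind of index I mean (property and index names are made up for illustration):

```java
import org.apache.tinkerpop.gremlin.structure.Vertex;
import org.janusgraph.core.PropertyKey;
import org.janusgraph.core.schema.JanusGraphManagement;

// assumes an already-open JanusGraph instance `graph`
JanusGraphManagement mgmt = graph.openManagement();
PropertyKey type = mgmt.makePropertyKey("type").dataType(String.class).make();
// exact-match lookups like g.V().has("type", "user") go through this index;
// every vertex with the same "type" value lands in the same index row
mgmt.buildIndex("byType", Vertex.class).addKey(type).buildCompositeIndex();
mgmt.commit();
```

So the practical limit is per indexed value, not per graph: millions of distinct values are fine, millions of vertices under one value are not.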
