Hello,
I have a graph (JanusGraph 0.5.3 running on a CQL backend with an Elasticsearch index) that is updated in near real-time. About 50M new vertices and 100M new edges are added every month. A large part of these (around 90%) should be deleted after 1 year, and the customer may want to change this retention period at a later date. The remaining 10% of the data has no fixed expiration period, but those vertices are expected to be deleted once they have no more edges.
Currently, I have a daily Spark job that deletes vertices and their edges by checking their date field (a field denoting the date they were added to the graph). A second Spark job is used to delete vertices without edges (a simplified sketch of both is included at the end of this message). This sort of works, but is definitely not perfect, for the following reasons:
After running the first cleanup job for a specific date, a small number of items (vertices or edges) is always left over. The job reports the number of deleted items, and even after running it several times for the same date, it always reports a non-zero count. For example, the first run will report several million deleted items, the second about 5000, the third about 4800, the fourth about 4620, and so on. This eventually converges to some small non-zero number, meaning the Spark job keeps seeing some vertices that it repeatedly attempts to delete but never actually does, even though no errors appear.
I'm guessing this is caused by some consistency issues, but I could not resolve it completely. Running the GhostVertexRemover vertex program helps and further reduces the number of remaining items, but some still persist. Also, when running the cleanup job on a smaller scale (fewer workers and less data), it seems to work without issues, so I don't think there are any major bugs in the code itself that would cause this.
Once it starts, the cleanup job is quite performance-intensive and can sometimes interfere with the input job that loads the graph data, which is something I want to avoid.
During the cleanup job, Cassandra delete operations produce a lot of tombstones. If the tombstone threshold is too low and gets exceeded on a single node, the entire graph stops accepting changes until a Cassandra compaction is run. A large number of tombstones also degrades search performance. Graph supernodes with an especially large edge count may require several "run the cleanup job -> cleanup fails -> run compaction" cycles before everything is properly cleaned up. An alternative is to configure the tombstone threshold to some absurdly high number to prevent failures completely and schedule a daily compaction on each Cassandra node after each cleanup job, which is what I'm doing currently.
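For reference, the settings involved are the tombstone thresholds in cassandra.yaml (defaults shown below), and the compaction step is just a scheduled nodetool call on each node. The keyspace name is whatever the graph was configured with; 'janusgraph' below is only an example:

    # cassandra.yaml -- raising tombstone_failure_threshold only hides the failures,
    # the tombstones themselves remain until a compaction runs
    tombstone_warn_threshold: 1000
    tombstone_failure_threshold: 100000

    # run on each node after the daily cleanup job
    nodetool compact janusgraph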
I was wondering if anyone has some suggestions or best practices on how to manage graph data with a retention period (that could change over time)?
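For reference, the deletion logic of the two jobs boils down to something like the following gremlin sketch. It is simplified: the 'date' property is assumed to be an ISO-formatted string, the 12-month cutoff is just an example, and the real jobs distribute this work over Spark tasks instead of running a single traversal:

    // first job (simplified): drop everything older than the retention period
    cutoff = java.time.LocalDate.now().minusMonths(12).toString()   // e.g. "2020-06-01"
    g.V().has('date', lt(cutoff)).drop().iterate()
    g.tx().commit()

    // second job (simplified): drop vertices that no longer have any edges
    g.V().not(bothE()).drop().iterate()
    g.tx().commit()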
Best regards,
Mladen Marović
|
|
Hi Mladen, Just two things that come up while reading your story:
- the Cassandra TTL feature seems promising for your use case, see e.g. https://www.geeksforgeeks.org/time-to-live-ttl-for-a-column-in-cassandra/ (I guess this would require code changes in janusgraph-cassandra).
- how is transaction control handled in the Spark jobs? You want transactions of reasonable size (say 10,000 vertices or edges) and you want Spark tasks to fail if the transaction commit fails. That way Spark will repeat the task and will hopefully succeed. See the sketch below.
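A rough sketch of what I mean, in gremlin/groovy terms (the 'vertexIds' collection stands for whatever ids a single Spark task is responsible for; the names are made up):

    def batchSize = 10000
    vertexIds.collate(batchSize).each { batch ->
        batch.each { id -> g.V(id).drop().iterate() }
        g.tx().commit()   // do not swallow exceptions here: a failed commit should fail the Spark task
    }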
Best wishes, Marc
|
|
Hi Marc,
thanks for the response.
As described in https://docs.janusgraph.org/schema/advschema/, TTL is already supported. However, there are two issues in my case:
a) Changing the TTL is supported (I mean the schema-level TTL, see the sketch after point b) below), but the new TTL is only applied on inserts and updates. In other words, if I have a TTL of 12 months and I change it to 18 months, it will effectively take 12 months before that change fully comes into effect, because all the old data will still have a TTL of 12 months. A possible workaround would be to go over all objects in the database and update them in some way to force setting the new TTL, although that seems rather costly.
b) I'm not sure how the TTL setting applies exactly in JanusGraph. Is it set only on the data, or on the composite indexes as well? If it's set only on the data, then after a while the indexes should fill up with entries pointing to non-existent data. I can confirm this is the case for mixed indexes - during testing, data was deleted in Cassandra, but the mixed index entries in Elasticsearch were not, which means I would have to delete them manually as well. This would be OK if JanusGraph supported using multiple Elasticsearch indexes for a single mixed index (which would be a really cool feature, by the way!), but I don't think that's the case - I tried to trick JanusGraph into using an alias, but things did not work as expected.
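For completeness, the TTL I'm referring to in a) is the schema-level one from the advschema page, roughly like this (the 'event' label is just an example, and vertex TTL requires a static vertex label):

    mgmt = graph.openManagement()
    event = mgmt.getVertexLabel('event')                  // must have been created with setStatic()
    mgmt.setTTL(event, java.time.Duration.ofDays(365))    // only applies to data written after this change
    mgmt.commit()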
I don't think the problem in the Spark jobs is with transactions. By default, in case of an exception, Spark repeats the task, and the job only ends once all tasks have finished successfully. Also, in my case there actually are no exceptions. I even managed to manually find the vertices that cause the issues via the gremlin console, but their valueMap() is {}, where I would expect it to contain the 10-15 properties they usually have if they hadn't been deleted. Basically, JanusGraph acts as if it found a vertex (or some part of it), but during deletion nothing happens.
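To illustrate, this is roughly what such a leftover vertex looks like in the console (the property name and value are placeholders):

    g.V().has('date', '2020-06-01').valueMap()
    // ==> {}                       (I would expect the usual 10-15 properties here)
    g.V().has('date', '2020-06-01').drop().iterate(); g.tx().commit()
    // finishes without any error, yet the same empty vertex is returned again afterwards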
If I remember correctly, when I tried to analyze this a while ago, I found a place in the JanusGraph source where a dummy (empty) vertex is created when JanusGraph does not find the proper data. I guess that's what's happening when I get the {} result. Maybe the index entry wasn't cleaned up, JanusGraph thinks there should be something, finds nothing, and so returns an empty vertex. When I try to delete it, there is again nothing to be deleted, so the index entry isn't cleared. I don't know if that's actually possible, but it might explain my case.
Best regards,
Mladen Marović
|
|
Hi Mladen, Indeed, there is still a load of open issues regarding TTL: https://github.com/JanusGraph/janusgraph/issues?q=is%3Aissue+is%3Aopen+ttl
Your last remark about empty vertices sounds plausible, although it would be pretty bad if true. Searching for "new HashMap" on GitHub gives too many results to inspect, so please keep an eye open for more hints on where this could occur. I did not see any open issues that report empty vertices after ghost vertex removal.
Best wishes, Marc
|
|