Hey there, folks. First, thanks for your help with the previous bug we uncovered.
I'm evaluating JanusGraph performance on BigTable and seeing very slow writes, even for a single vertex: starting a new transaction, writing one vertex, and committing takes at least 5-6 seconds.
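To narrow down where those seconds go, I've been timing each phase (open, write, commit) separately. A minimal sketch of the timing harness I'm using, with the JanusGraph calls shown only as comments since they need a live graph instance (`StepTimer` is just my own helper name, not a JanusGraph class):

```java
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

// Hypothetical helper: run one step, print how long it took, return its result.
final class StepTimer {
    static <T> T timed(String label, Supplier<T> step) {
        long start = System.nanoTime();
        T result = step.get();
        long elapsedMs = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start);
        System.out.printf("%s took %d ms%n", label, elapsedMs);
        return result;
    }
}

public class Main {
    public static void main(String[] args) {
        // Against a real graph these lambdas would wrap the actual calls, e.g.:
        //   JanusGraphTransaction tx = timed("newTransaction", graph::newTransaction);
        //   timed("addVertex", () -> tx.addVertex("someLabel"));
        //   timed("commit",    () -> { tx.commit(); return null; });
        Object r = StepTimer.timed("placeholder step", () -> 42);
        System.out.println(r);
    }
}
```

In my runs, almost all of the 5-6 seconds lands in the commit phase.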
BigTable metrics indicate that the backend never takes more than 100 ms to perform a write. It's hard to imagine any amount of overhead on the BigTable side bringing that up to 5-6 seconds, and the basic BigTable stats inside our application also look reasonable.
Here is the current configuration:
"storage.backend": "hbase"
"metrics.enabled": true
"cache.db-cache": false
"query.batch": true
"storage.page-size": 1000
"storage.hbase.ext.hbase.client.connection.impl": "com.google.cloud.bigtable.hbase2_x.BigtableConnection"
"storage.hbase.ext.google.bigtable.grpc.retry.deadlineexceeded.enable": true
"storage.hbase.ext.google.bigtable.grpc.channel.count": 50
"storage.lock.retries": 5
"storage.lock.wait-time": 50.millis
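Since metrics.enabled is already true, I'm considering wiring up a metrics reporter so JanusGraph's per-operation timers are actually visible somewhere. Something along these lines (option names recalled from the JanusGraph config reference, so please double-check them for 0.5.3):

"metrics.csv.interval": 1000
"metrics.csv.directory": "/tmp/janusgraph-metrics"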
This is running in a GCP container that is rather beefy and not doing anything else, and is located in the same region as the BigTable cluster. Other traffic to/from the container seems fine.
I'm currently using hbase-shaded-client 2.1.5, since that's what aligns with JanusGraph 0.5.3, which we are currently on. I experimented with versions up to 2.4.8 and saw no difference. I'm also using bigtable-hbase-2.x-shaded 1.25.1, the latest stable release.
I'm at a loss for how to progress further with my diagnosis, since all the evidence indicates that the latency originates in JanusGraph's own operation. How can I better find and eliminate the source of this latency?
Thanks!