
How to improve traversal query performance

Manabu Kotani <smallcany...@...>
 

Hi All,

I'm testing traversal query performance.
My query (see below) takes about 1.8 sec.

Is there a way to improve performance (make it faster than 1.8 sec)?
I would like it to take less than 500 ms.

1.Environment:
JanusGraph (0.5.2) + Cassandra (3.11.0) on Docker Desktop (Windows)

2.Schema:
------------------------------------------------------------------------------------------------
Vertex Label Name              | Partitioned | Static                                             |
---------------------------------------------------------------------------------------------------
item                           | false       | false                                              |
---------------------------------------------------------------------------------------------------
Edge Label Name                | Directed    | Unidirected | Multiplicity                         |
---------------------------------------------------------------------------------------------------
assembled                      | true        | false       | MULTI                                |
---------------------------------------------------------------------------------------------------
Property Key Name              | Cardinality | Data Type                                          |
---------------------------------------------------------------------------------------------------
serial                         | SINGLE      | class java.lang.String                             |
work_date                      | SINGLE      | class java.util.Date                               |
---------------------------------------------------------------------------------------------------
Vertex Index Name              | Type        | Unique    | Backing        | Key:           Status |
---------------------------------------------------------------------------------------------------
bySerial                       | Composite   | false     | internalindex  | serial:       ENABLED |
---------------------------------------------------------------------------------------------------
Edge Index (VCI) Name          | Type        | Unique    | Backing        | Key:           Status |
---------------------------------------------------------------------------------------------------
byWorkDate                     | Composite   | false     | internalindex  | work_date:    ENABLED |
---------------------------------------------------------------------------------------------------
Relation Index                 | Type        | Direction | Sort Key       | Order    |     Status |
---------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------
-----------------------------------------------------------------------------------------------------------  
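
For anyone reproducing this setup, the schema above can be defined roughly as follows with the JanusGraph management API. This is only a sketch: the variable names are illustrative, the graph is assumed to be bound to the variable graph, and byWorkDate is assumed to be a composite graph index over edges, which is what the printout suggests.

mgmt      = graph.openManagement()
item      = mgmt.makeVertexLabel('item').make()
assembled = mgmt.makeEdgeLabel('assembled').multiplicity(MULTI).make()
serial    = mgmt.makePropertyKey('serial').dataType(String.class).cardinality(Cardinality.SINGLE).make()
workDate  = mgmt.makePropertyKey('work_date').dataType(Date.class).cardinality(Cardinality.SINGLE).make()
// composite index backing g.V().has('serial', ...)
mgmt.buildIndex('bySerial', Vertex.class).addKey(serial).buildCompositeIndex()
// composite index over edges on work_date
mgmt.buildIndex('byWorkDate', Edge.class).addKey(workDate).buildCompositeIndex()
mgmt.commit()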

3.Query:
g.V().has('serial',within('XXXXXX','YYYYYY',.... <- 100 search keys)).as('a')
.repeat(inE('assembled').as('b').outV().as('c').simplePath())
.emit()
.select('a').values('serial').as('parent')
.select('b').values('work_date').as('work_date')
.select('c').values('serial').as('child')
.select('parent','child','work_date')
.order().by('parent').by('child').by('work_date')
----------------------------------------------------------------------------------------------------------- 
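
For reference, the same projection can also be expressed with a single project() step instead of the select()/as() chain. The following is only an untested sketch of an equivalent formulation, not a traversal I have profiled:

g.V().has('serial',within('XXXXXX','YYYYYY',.... <- 100 search keys)).as('a')
 .repeat(inE('assembled').as('b').outV().as('c').simplePath())
 .emit()
 .project('parent','child','work_date')
   .by(select('a').values('serial'))
   .by(select('c').values('serial'))
   .by(select('b').values('work_date'))
 .order().by('parent').by('child').by('work_date')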
 
4.Query Profile:
==>Traversal Metrics
Step                                                               Count  Traversers       Time (ms)    % Dur
=============================================================================================================
JanusGraphStep([],[serial.within([XXXXXX...                   100         100         159.582     8.89
    \_condition=((serial = XXXXXX OR serial = YYYYYY OR .... <- 100 search keys))
    \_orders=[]
    \_isFitted=true
    \_isOrdered=true
    \_query=multiKSQ[100]@2000
    \_index=bySerial
  optimization                                                                                 0.018
  optimization                                                                                 6.744
  backend-query                                                      100                    1074.225
    \_query=bySerial:multiKSQ[100]@2000
    \_limit=2000
RepeatStep([JanusGraphVertexStep(IN,[assembled]...                 20669       20669         857.001    47.74
  JanusGraphVertexStep(IN,[assembled],edge)@[b]                    20669       20669         633.529
    \_condition=type[assembled]
    \_orders=[]
    \_isFitted=true
    \_isOrdered=true
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    \_multi=true
    \_vertices=204
    optimization                                                                               0.477
    backend-query                                                    228                       2.076
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.150
    backend-query                                                      0                      43.366
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.093
    backend-query                                                    229                       1.978
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.107
    backend-query                                                      0                      32.738
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.111
    backend-query                                                    229                       1.577
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.107
    backend-query                                                      0                      17.827
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.085
    backend-query                                                    229                       1.517
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.108
    backend-query                                                      0                       5.729
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.071
    backend-query                                                    228                       1.993
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.083
    backend-query                                                      0                       3.335
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.150
    backend-query                                                    229                       1.890
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.137
    backend-query                                                      0                      32.593
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.110
    backend-query                                                    229                       2.253
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.069
    backend-query                                                    230                       1.624
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.070
    backend-query                                                      0                      12.797
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.116
    backend-query                                                    229                       1.579
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.090
    backend-query                                                      0                       5.764
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.107
    backend-query                                                    229                       1.651
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.134
    backend-query                                                      0                      22.327
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.074
    backend-query                                                    229                       1.756
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.075
    backend-query                                                      0                      11.145
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.069
    backend-query                                                    229                       1.947
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.086
    backend-query                                                      0                       3.727
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.100
    backend-query                                                    116                       1.492
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.085
    backend-query                                                      0                      27.159
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.132
    backend-query                                                    229                       1.524
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.100
    backend-query                                                      0                       7.173
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.075
    backend-query                                                    230                       1.880
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.114
    backend-query                                                      0                       3.696
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.085
    backend-query                                                    228                       1.645
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.143
    backend-query                                                      0                       2.924
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.105
    backend-query                                                    229                       2.010
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.316
    backend-query                                                      0                       3.806
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.095
    backend-query                                                    230                       1.854
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.185
    backend-query                                                    229                       1.936
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.099
    backend-query                                                      0                       2.135
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.070
    backend-query                                                    231                       1.479
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.067
    backend-query                                                      0                       5.907
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.069
    backend-query                                                      1                       1.129
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.109
    backend-query                                                      0                       1.069
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.082
    backend-query                                                    231                       1.245
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.072
    backend-query                                                      0                       1.175
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.064
    backend-query                                                    229                       1.308
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.078
    backend-query                                                      0                       7.058
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.145
    backend-query                                                    231                       1.655
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.115
    backend-query                                                      0                       3.946
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.067
    backend-query                                                    117                       1.231
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.063
    backend-query                                                      0                      11.856
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.065
    backend-query                                                    230                       1.606
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.072
    backend-query                                                      0                       6.973
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.070
    backend-query                                                    229                       1.445
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.088
    backend-query                                                    230                       1.836
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.098
    backend-query                                                      0                       2.552
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.088
    backend-query                                                    116                       1.450
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.060
    backend-query                                                      0                       4.072
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.062
    backend-query                                                    229                       1.421
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.058
    backend-query                                                      0                       2.342
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.058
    backend-query                                                    229                       0.999
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.070
    backend-query                                                      0                       1.847
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.063
    backend-query                                                    229                       1.171
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.064
    backend-query                                                      0                       0.999
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.051
    backend-query                                                    228                       0.991
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.070
    backend-query                                                      0                       2.107
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.215
    backend-query                                                    116                       1.678
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.069
    backend-query                                                    229                       1.578
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.081
    backend-query                                                      0                       3.649
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.096
    backend-query                                                    229                       1.619
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.066
    backend-query                                                    228                       1.549
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.070
    backend-query                                                    116                       1.610
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.154
    backend-query                                                    228                       1.746
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.092
    backend-query                                                      0                       2.958
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.093
    backend-query                                                    232                       1.698
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.143
    backend-query                                                    229                       1.719
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.081
    backend-query                                                      0                       2.809
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.065
    backend-query                                                    229                       1.410
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.082
    backend-query                                                    229                       1.458
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.109
    backend-query                                                    228                       1.651
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.066
    backend-query                                                    228                       1.417
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.111
    backend-query                                                    117                       1.536
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.188
    backend-query                                                      0                       1.660
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.132
    backend-query                                                    229                       2.361
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.110
    backend-query                                                      0                       2.384
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.140
    backend-query                                                    229                       1.680
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.067
    backend-query                                                    230                       1.342
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.070
    backend-query                                                      0                       3.129
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.118
    backend-query                                                    231                       1.397
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.169
    backend-query                                                      0                       5.665
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.114
    backend-query                                                    116                       1.780
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.128
    backend-query                                                      0                       2.316
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.108
    backend-query                                                    229                       1.521
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.083
    backend-query                                                    231                       1.508
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.074
    backend-query                                                      0                       2.327
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.092
    backend-query                                                    116                       1.509
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.285
    backend-query                                                      0                       2.007
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.079
    backend-query                                                    116                       1.245
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.134
    backend-query                                                    230                       1.521
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.092
    backend-query                                                      1                       1.278
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.064
    backend-query                                                      0                       1.104
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.076
    backend-query                                                    231                       1.287
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.079
    backend-query                                                    229                       1.768
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.098
    backend-query                                                      0                       2.570
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.110
    backend-query                                                    116                       1.489
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.061
    backend-query                                                      0                       1.756
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.055
    backend-query                                                    229                       1.133
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.060
    backend-query                                                    116                       1.241
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.056
    backend-query                                                      0                       2.435
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.056
    backend-query                                                    228                       1.099
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.061
    backend-query                                                      0                       1.017
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.080
    backend-query                                                    229                       1.217
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.065
    backend-query                                                    230                       1.448
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.065
    backend-query                                                    229                       1.546
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.079
    backend-query                                                    230                       1.955
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.165
    backend-query                                                      0                       3.284
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.102
    backend-query                                                    229                       1.936
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.081
    backend-query                                                      0                       4.640
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.072
    backend-query                                                    229                       1.384
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.062
    backend-query                                                      0                       2.224
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.088
    backend-query                                                    116                       1.419
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.069
    backend-query                                                      0                       2.289
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.070
    backend-query                                                    231                       1.474
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.071
    backend-query                                                    229                       1.646
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.072
    backend-query                                                      0                       1.408
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.068
    backend-query                                                    230                       1.974
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.090
    backend-query                                                    229                       1.923
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.151
    backend-query                                                    230                       2.211
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.074
    backend-query                                                    230                       1.234
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.059
    backend-query                                                      0                       1.695
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.125
    backend-query                                                    230                       1.199
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.064
    backend-query                                                      0                       1.089
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.057
    backend-query                                                    116                       1.807
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.085
    backend-query                                                      0                       1.299
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.074
    backend-query                                                    228                       1.397
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.081
    backend-query                                                    228                       1.776
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.079
    backend-query                                                      0                       1.980
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.101
    backend-query                                                    229                       1.571
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.070
    backend-query                                                    231                       1.483
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.064
    backend-query                                                      0                       2.260
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.060
    backend-query                                                    230                       1.471
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.070
    backend-query                                                    232                       1.305
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.060
    backend-query                                                    229                       1.246
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.063
    backend-query                                                    229                       1.093
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.053
    backend-query                                                    229                       1.420
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.062
    backend-query                                                    226                       1.596
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.145
    backend-query                                                      0                       2.730
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.059
    backend-query                                                    229                       1.550
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.076
    backend-query                                                    231                       1.622
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.058
    backend-query                                                    117                       1.224
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.108
    backend-query                                                      0                       2.025
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.062
    backend-query                                                    230                       1.251
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.058
    backend-query                                                    230                       1.223
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.068
    backend-query                                                    116                       1.224
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.051
    backend-query                                                      0                       0.937
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.045
    backend-query                                                    116                       1.597
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.058
    backend-query                                                    228                       1.595
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.063
    backend-query                                                      0                       3.238
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.058
    backend-query                                                    229                       1.573
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.078
    backend-query                                                    231                       1.894
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.092
    backend-query                                                    230                       1.717
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
    optimization                                                                               0.061
    backend-query                                                    231                       1.302
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@812bd43d
  EdgeVertexStep(OUT)@[c]                                          20669       20669          39.223
  PathFilterStep(simple)                                           20669       20669          44.905
  JanusGraphMultiQueryStep(RepeatEndStep)                          20669       20669          65.528
  RepeatEndStep                                                    20669       20669          39.443
SelectOneStep(last,a)                                              20669       20669          44.574     2.48
JanusGraphPropertiesStep([serial],value)@[parent]                  20669       20669          92.515     5.15
    \_condition=type[serial]
    \_orders=[]
    \_isFitted=true
    \_isOrdered=true
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@811c505d
    \_multi=true
    \_vertices=100
  optimization                                                                                 0.090
  backend-query                                                      100                      12.807
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@811c505d
SelectOneStep(last,b)                                              20669       20669          41.753     2.33
JanusGraphPropertiesStep([work_date],value)@[wo...                 20669       20669          98.648     5.50
SelectOneStep(last,c)                                              20669       20669          41.674     2.32
JanusGraphPropertiesStep([serial],value)@[child]                   20669       20669         246.094    13.71
    \_condition=type[serial]
    \_orders=[]
    \_isFitted=true
    \_isOrdered=true
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@811c505d
    \_multi=true
    \_vertices=1392
  optimization                                                                                 0.060
  backend-query                                                     1392                     136.281
    \_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@811c505d
SelectStep(last,[parent, child, work_date])                        20669       20669          49.139     2.74
OrderGlobalStep([[value(parent), asc], [value(c...                 20669       20669         164.034     9.14
                                            >TOTAL                     -           -        1795.018        -
-----------------------------------------------------------------------------------------------------------

Sorry for my poor English.
Thanks,
Manabu


Re: Ability to read adjacent vertex properties in vertex program

HadoopMarc <bi...@...>
 

Hi Anjani,

In a VertexProgram you cannot read properties from neighbouring vertices: they are exposed as ComputerAdjacentVertex objects, which carry no properties. You have to pass data between vertices explicitly, using the messenger object in the execute method of the VertexProgram:
https://github.com/apache/tinkerpop/blob/3.4.8/gremlin-core/src/main/java/org/apache/tinkerpop/gremlin/process/computer/VertexProgram.java
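
As an illustration of that pattern, here is a minimal, hypothetical sketch (the class name and the 'serial'/'neighbourValues' property names are made up): each vertex broadcasts its own value in the first iteration and stores the copies received from its neighbours in the second.

import org.apache.tinkerpop.gremlin.process.computer.*
import org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.__
import org.apache.tinkerpop.gremlin.structure.Vertex

class CopyNeighbourPropertyProgram implements VertexProgram<String> {

    // messages travel over all incident edges, i.e. to every adjacent vertex
    private static final MessageScope.Local SCOPE = MessageScope.Local.of({ __.bothE() })

    void setup(Memory memory) { }

    void execute(Vertex vertex, Messenger<String> messenger, Memory memory) {
        if (memory.isInitialIteration()) {
            // iteration 0: send this vertex's own value to its neighbours
            messenger.sendMessage(SCOPE, (String) vertex.value('serial'))
        } else {
            // iteration 1: collect the neighbours' values that arrived as messages
            def received = []
            messenger.receiveMessages().forEachRemaining { received << it }
            vertex.property('neighbourValues', received)
        }
    }

    boolean terminate(Memory memory) { memory.getIteration() >= 1 }   // exactly two iterations

    Set<MessageScope> getMessageScopes(Memory memory) { [SCOPE] as Set }

    Set<VertexComputeKey> getVertexComputeKeys() {
        // properties written during OLAP have to be declared up front
        [VertexComputeKey.of('neighbourValues', false)] as Set
    }

    GraphComputer.ResultGraph getPreferredResultGraph() { GraphComputer.ResultGraph.NEW }

    GraphComputer.Persist getPreferredPersist() { GraphComputer.Persist.VERTEX_PROPERTIES }

    VertexProgram<String> clone() { this }   // stateless, safe to share
}

// usage on a single machine, e.g. in the Gremlin console:
// result = graph.compute().program(new CopyNeighbourPropertyProgram()).submit().get()
// result.graph().traversal().V().valueMap('serial', 'neighbourValues')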

Best wishes,  Marc

On Tuesday, November 17, 2020 at 15:42:02 UTC+1, anj...@... wrote:

Hi All,

In our graph, related nodes are connected to a master node. So, as part of a vertex program, I was thinking of getting all master nodes and then, from each master node, all of its connected nodes.
The final output should look like the example below, and I should be able to read properties from the derived list of vertices.

 [MasterNode] -> list of connected nodes, e.g.:
  V(masternode) -> [ V(a), V(b), V(c) ]

I noticed that vertex.inVertex() returns a ComputerAdjacentVertex, which does not have properties; all properties are dropped.

Is there any way to read adjacent vertex properties in a vertex program? Note that I am not using the adjacent vertex properties for any traversal logic; I just want to read them as part of the final result.

Thanks,
Anjani


Ability to read adjacent vertex properties in vertex program

"anj...@gmail.com" <anjani...@...>
 

Hi All,

In our graph, related nodes are connected to a master node. So, as part of a vertex program, I was thinking of getting all master nodes and then, from each master node, all of its connected nodes.
The final output should look like the example below, and I should be able to read properties from the derived list of vertices.

 [MasterNode] -> list of connected nodes, e.g.:
  V(masternode) -> [ V(a), V(b), V(c) ]

I noticed that vertex.inVertex() returns a ComputerAdjacentVertex, which does not have properties; all properties are dropped.

Is there any way to read adjacent vertex properties in a vertex program? Note that I am not using the adjacent vertex properties for any traversal logic; I just want to read them as part of the final result.

Thanks,
Anjani


Re: .bat and .sh Execution Fail When Run From a Path That Contains Spaces

Bassem Naguib <bass...@...>
 

Thanks, Marc! I just posted the GitHub issue.

On Sunday, November 15, 2020 at 1:18:11 AM UTC-7 HadoopMarc wrote:
Hi,

I can reproduce this on Linux. So please create the issue on GitHub with the text above.

Thanks,    Marc

On Sunday, November 15, 2020 at 08:36:45 UTC+1, ba...@... wrote:

I was about to open an issue on GitHub, but I saw a message asking me to post here first.

The .bat and .sh files under the "janusgraph-0.5.2/bin" folder give errors when you run them from a path that contains spaces, for example "C:\Program Files\janusgraph-0.5.2".

Here is the Linux shell output

bassem@Bassem-Laptop:/mnt/c/Program Files/janusgraph-0.5.2/bin$ ./gremlin.sh
./gremlin.sh: line 39: cd: too many arguments
bassem@Bassem-Laptop:/mnt/c/Program Files/janusgraph-0.5.2/bin$ ./gremlin-server.sh
./gremlin-server.sh: line 32: cd: too many arguments
./gremlin-server.sh: line 43: cd: too many arguments
./gremlin-server.sh: line 48: cd: too many arguments
./gremlin-server.sh: line 53: cd: too many arguments
find: ‘/mnt/c/Program’: No such file or directory
find: ‘Files/janusgraph-0.5.2/bin’: No such file or directory
find: ‘/mnt/c/Program’: No such file or directory
find: ‘Files/janusgraph-0.5.2/bin’: No such file or directory
find: ‘/mnt/c/Program’: No such file or directory
find: ‘Files/janusgraph-0.5.2/bin’: No such file or directory
find: ‘/mnt/c/Program’: No such file or directory
find: ‘Files/janusgraph-0.5.2/bin’: No such file or directory
./gremlin-server.sh: line 80: cd: too many arguments
WARNING: Tried /mnt/c/Program Files/janusgraph-0.5.2/bin/gremlin-server.yaml and /mnt/c/Program Files/janusgraph-0.5.2/bin//mnt/c/Program Files/janusgraph-0.5.2/bin/gremlin-server.yaml. Neither were readable.
Error opening zip file or JAR manifest missing : /mnt/c/Program
Error occurred during initialization of VM
agent library failed to init: instrument

And here is the Windows Command Prompt output

C:\Program Files\janusgraph-0.5.2\bin>gremlin.bat
Error: Could not find or load main class Files\janusgraph-0.5.2\ext

C:\Program Files\janusgraph-0.5.2\bin>gremlin-server.bat
Error: Could not find or load main class Files\janusgraph-0.5.2\logs


Re: .bat and .sh Execution Fail When Run From a Path That Contains Spaces

HadoopMarc <bi...@...>
 

Hi,

I can reproduce this on Linux. So please create the issue on GitHub with the text above.

Thanks,    Marc

On Sunday, November 15, 2020 at 08:36:45 UTC+1, ba...@... wrote:


I was about to open an issue on GitHub, but I saw a message asking me to post here first.

The .bat and .sh files under the "janusgraph-0.5.2/bin" folder give errors when you run them from a path that contains spaces, for example "C:\Program Files\janusgraph-0.5.2".

Here is the Linux shell output

bassem@Bassem-Laptop:/mnt/c/Program Files/janusgraph-0.5.2/bin$ ./gremlin.sh
./gremlin.sh: line 39: cd: too many arguments
bassem@Bassem-Laptop:/mnt/c/Program Files/janusgraph-0.5.2/bin$ ./gremlin-server.sh
./gremlin-server.sh: line 32: cd: too many arguments
./gremlin-server.sh: line 43: cd: too many arguments
./gremlin-server.sh: line 48: cd: too many arguments
./gremlin-server.sh: line 53: cd: too many arguments
find: ‘/mnt/c/Program’: No such file or directory
find: ‘Files/janusgraph-0.5.2/bin’: No such file or directory
find: ‘/mnt/c/Program’: No such file or directory
find: ‘Files/janusgraph-0.5.2/bin’: No such file or directory
find: ‘/mnt/c/Program’: No such file or directory
find: ‘Files/janusgraph-0.5.2/bin’: No such file or directory
find: ‘/mnt/c/Program’: No such file or directory
find: ‘Files/janusgraph-0.5.2/bin’: No such file or directory
./gremlin-server.sh: line 80: cd: too many arguments
WARNING: Tried /mnt/c/Program Files/janusgraph-0.5.2/bin/gremlin-server.yaml and /mnt/c/Program Files/janusgraph-0.5.2/bin//mnt/c/Program Files/janusgraph-0.5.2/bin/gremlin-server.yaml. Neither were readable.
Error opening zip file or JAR manifest missing : /mnt/c/Program
Error occurred during initialization of VM
agent library failed to init: instrument

And here is the Windows Command Prompt output

C:\Program Files\janusgraph-0.5.2\bin>gremlin.bat
Error: Could not find or load main class Files\janusgraph-0.5.2\ext

C:\Program Files\janusgraph-0.5.2\bin>gremlin-server.bat
Error: Could not find or load main class Files\janusgraph-0.5.2\logs


.bat and .sh Execution Fail When Run From a Path That Contains Spaces

Bassem Naguib <bass...@...>
 


I was about to open an issue on GitHub. But I saw a message that asks me to post here first.

The .bat and .sh files under the "janusgraph-0.5.2/bin" folder give errors when you try to run them from a path that contains spaces. For example "C:\Program Files\janusgraph-0.5.2"

Here is the Linux shell output

bassem@Bassem-Laptop:/mnt/c/Program Files/janusgraph-0.5.2/bin$ ./gremlin.sh
./gremlin.sh: line 39: cd: too many arguments
bassem@Bassem-Laptop:/mnt/c/Program Files/janusgraph-0.5.2/bin$ ./gremlin-server.sh
./gremlin-server.sh: line 32: cd: too many arguments
./gremlin-server.sh: line 43: cd: too many arguments
./gremlin-server.sh: line 48: cd: too many arguments
./gremlin-server.sh: line 53: cd: too many arguments
find: ‘/mnt/c/Program’: No such file or directory
find: ‘Files/janusgraph-0.5.2/bin’: No such file or directory
find: ‘/mnt/c/Program’: No such file or directory
find: ‘Files/janusgraph-0.5.2/bin’: No such file or directory
find: ‘/mnt/c/Program’: No such file or directory
find: ‘Files/janusgraph-0.5.2/bin’: No such file or directory
find: ‘/mnt/c/Program’: No such file or directory
find: ‘Files/janusgraph-0.5.2/bin’: No such file or directory
./gremlin-server.sh: line 80: cd: too many arguments
WARNING: Tried /mnt/c/Program Files/janusgraph-0.5.2/bin/gremlin-server.yaml and /mnt/c/Program Files/janusgraph-0.5.2/bin//mnt/c/Program Files/janusgraph-0.5.2/bin/gremlin-server.yaml. Neither were readable.
Error opening zip file or JAR manifest missing : /mnt/c/Program
Error occurred during initialization of VM
agent library failed to init: instrument

And here is the Windows Command Prompt output

C:\Program Files\janusgraph-0.5.2\bin>gremlin.bat
Error: Could not find or load main class Files\janusgraph-0.5.2\ext

C:\Program Files\janusgraph-0.5.2\bin>gremlin-server.bat
Error: Could not find or load main class Files\janusgraph-0.5.2\logs


Re: Recommendation for Storage Backend

Bassem Naguib <bass...@...>
 

Thank you so much Peter for the detailed answer! I think we need to  switch between ScyllaDB  and Cassandra to understand the differences better. It is good that JanusGraph enables you to switch between storage backends hopefully without a lot of trouble.


On Friday, October 30, 2020 at 4:55:55 PM UTC-6 p...@... wrote:
Comments inline.


On Thu, Oct 29, 2020, 9:06 AM Bassem Naguib <ba...@...> wrote:

Hello,

I am looking into using JanusGraph for a new multi-tenant SaaS application. And I wanted to ask the community for help on choosing a suitable storage backend for my use case.

Everyone's going to have a bias, and I want to be transparent. I work for ScyllaDB, so of course I think they are the best. I will do my best, however, to give you reasons which I hope prove sufficiently helpful.

We need to have as much data isolation as possible between tenants. In our previous projects with similar requirements, we used a relational DBMS with a database-per-tenant isolation strategy. So I assume, in the JanusGraph world, it will be graph-per-tenant?

That sounds appropriate. You might segregate user data in Scylla, which underlies JanusGraph, via separate tables or even completely separate keyspaces.

Also we know that a single tenant's graph will never grow too big to fit on one server. But we may need to divide the tenant graphs between multiple graph DB servers.

Yes. Scylla would automatically shard data across nodes. Whatever persistent database you choose to reside under JanusGraph, make sure it automatically shards data across nodes.

We do not care very much about dividing the DB server resources equally between tenants. So the "Noisy Neighbor" problem is not a concern.

No, but there are problems with having neighbors at all: Heartbleed-like attacks such as Zombieload. We wrote this up as a piece for people who wish to think about ways to protect against attacks from neighbors, noisy or otherwise.


Finally, we are looking for minimum read and write latency. And as close to ACID transactions as we can get.

...this is a fundamental tug of war, because ACID deliberately gives up some speed in order to provide consistency.

ScyllaDB is not written with a full ACID guarantee. Like Cassandra, it leans toward the AP side of the CAP theorem rather than CP. But we do have LWT, and our LWT implementation is inherently more efficient than the design decisions Cassandra made (fewer round trips, for instance). But adding *any* sort of linearizability, or any strict consistency level (like CL=ALL), is going to increase your latencies and/or lower your throughput, unless (or even if) you scale in terms of, say, capacity and concurrency. Those have their own prices, limits and tradeoffs, too. That's just the nature of it. It is vital for you to really think about which matters more: strict ACID consistency or performance / availability.

You can read more about our LWT implementation here.


Other vendors can definitely claim more strict adherence to ACID. The question would be to clarify for your use case which are the higher priorities, and what your SLAs for each might be.

• Latency
• Throughput
• Availability
• Consistency (even at the sacrifice of the above three)
• "Correctness" in terms of ACID compliance

When it comes down to it, our users feel the top three win out over the bottom two. But this is definitely a use-case specific judgment call for you.

A couple of real-world use cases to consider:



Sincerely,

-Peter Corless.

I would love to hear you guys' thoughts about suitable storage backend(s) for this use case.

Thanks in advance!



Re: Where does JanusGraph Stores Indexes in ElasticSearch

"alex...@gmail.com" <alexand...@...>
 

Hi,

ElasticSearch is used for mixed indexes only. If you don't use mixed indexes, you don't need ElasticSearch.
All other indexes are stored in the storage backend you are using (Cassandra, HBase, etc.).
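
For illustration, a minimal management-API sketch (the graph handle, index names and property key below are hypothetical, not taken from this thread):

mgmt = graph.openManagement()
name = mgmt.makePropertyKey('name').dataType(String.class).make()
// Composite index: answered from the storage backend (Cassandra, HBase, ...)
mgmt.buildIndex('byNameComposite', Vertex.class).addKey(name).buildCompositeIndex()
// Mixed index: requires an index backend, here the one configured under the name 'search'
mgmt.buildIndex('byNameMixed', Vertex.class).addKey(name).buildMixedIndex('search')
mgmt.commit()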

Best regards,
Oleksandr

On Thursday, October 29, 2020 at 11:06:34 AM UTC+2 HadoopMarc wrote:
Hi Krishna,

That is right. The mixed index is for non-equality matches.

Marc

Op donderdag 29 oktober 2020 om 08:55:01 UTC+1 schreef kri...@...:
HI Marc

Thanks for your response. Is ElasticSearch only used for mixed indexes? If we only build composite indexes in our use case, can we omit the ElasticSearch config properties when building the graph through the graph factory?


Thanks & Regards
Krishna Jalla

On Thu, Oct 29, 2020 at 12:40 PM HadoopMarc <b...@...> wrote:
Hi Krishna,

The documentation is not conclusive about this, but I strongly suspect that the vertex-centric indices do not use the indexing backend. That means that they use the tables in the storage backend, like for the composite indices.
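
As a hedged illustration (the edge label, property key and index name below are made up, not Krishna's schema), a vertex-centric index is defined on an edge label and lives in the same storage-backend tables:

mgmt = graph.openManagement()
follows = mgmt.makeEdgeLabel('follows').make()
since = mgmt.makePropertyKey('since').dataType(Date.class).make()
// Relation (vertex-centric) index: a sorted adjacency list in the storage backend, no ElasticSearch involved
mgmt.buildEdgeIndex(follows, 'followsBySince', Direction.BOTH, since)
mgmt.commit()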

Best wishes,    Marc

Op donderdag 29 oktober 2020 om 07:43:41 UTC+1 schreef kri...@...:
Hi Folks,


Currently I am using JanusGraph with the Cassandra (CQL) backend along with ElasticSearch for indexing. We have created a few JanusGraph vertex-centric indexes through JanusGraph. JanusGraph queries are working fine, but when I try to look up the indexes in ElasticSearch with the query below, I cannot find the indexes created for JanusGraph.


How can we find the JanusGraph index data in ElasticSearch?
Where is the index data stored in ElasticSearch?


Thanks&Regards
Krishna Jalla



Re: The count query based on the vertex traversal edges is too slow!!!

"alex...@gmail.com" <alexand...@...>
 

Hi,

The `count` step doesn't use the mixed index right now. There is a WIP PR which will allow the mixed index to be used for the count step: https://github.com/JanusGraph/janusgraph/pull/2200
Right now, the best you can do is use a direct indexQuery count. See this comment on how to use indexQuery to speed up the count: https://github.com/JanusGraph/janusgraph/issues/926#issuecomment-401442381

range(low, high) suffers from the deep pagination problem: it always scans from 0 up to `high`, but you can change your logic to work around deep pagination. Here is a comment where I discuss the workaround: https://github.com/JanusGraph/janusgraph/issues/986#issuecomment-451601715
You can read more about the deep pagination problem here: https://www.elastic.co/guide/en/elasticsearch/guide/current/pagination.html
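
As a hedged sketch of that direct index-query count (the mixed index name 'byInstanceId' and the field 'instanceId' are assumptions, not taken from the question):

// counts hits directly in the index backend instead of iterating vertices
graph.indexQuery('byInstanceId', 'v.instanceId:12').vertexTotals()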

Best regards,
Oleksandr

On Tuesday, October 27, 2020 at 1:53:03 PM UTC+2 HadoopMarc wrote:
Hi

You have a lot of perseverance and you are welcome with that! This is an open source community, so use this perseverance to find a solution for us all.

Some resources on janusgraph performance:

Best wishes,    Marc

Op dinsdag 27 oktober 2020 om 10:57:04 UTC+1 schreef wan...@...:
Is there no other way to speed up paging and querying the last few pages?
I also have a requirement where many edges may be loaded; at present, the loading speed is not ideal.

On Tuesday, October 27, 2020 at 3:17:05 PM UTC+8, HadoopMarc wrote:
Hi,

As explained earlier, the range(30000, 30010) causes a table scan starting at result 0. There is no way to circumvent this using range.

As to the many targetIds during development, you can do:

targetIds = g.V().hasLabel('InstanceMetric').has('type', neq('network')).has('vlabel', 'InstanceMetric').id().limit(10).toList()

HTH,     Marc

Op dinsdag 27 oktober 2020 om 04:42:05 UTC+1 schreef wan...@...:
I did a test in the terminal environment and it can indeed speed up the query. But in real development, I can't use this method, because it will load a lot of vertex ids and add extra network overhead.

In addition, is there any way to speed up the query of the last few pages of data in the paging query?

On Monday, October 26, 2020 at 6:27:36 PM UTC+8, HadoopMarc wrote:
Hi,

You are right: when the index is not used in the outV() step, JanusGraph resorts to a full table scan until it has enough results, 10 in the first case and 30,010 in the second. Can you also try my other suggestion to first get the targetIds and use these in your main query? My hope is that the inE() step is sufficiently fast. The edges returned from inE() already contain the vertex ids that can be matched locally against targetIds.

Marc

Op maandag 26 oktober 2020 om 08:21:32 UTC+1 schreef wan...@...:

Hi,
Please help me look at the following question
On Monday, October 26, 2020 at 2:49:43 PM UTC+8, HadoopMarc wrote:
Hi,

The first line in the code suggestion in my previous post should have been (added id() step):

targetIds = g.V().hasLabel('InstanceMetric').has('type', neq('network')).has('vlabel', 'InstanceMetric').id().toList()

Best wishes,    Marc

Op zondag 25 oktober 2020 om 16:57:58 UTC+1 schreef HadoopMarc:
Hi,

Apparently, the query planner is not able to use the index for the outV() step. Can you see what happens if we split the query like this (not tested):

targetIds = g.V().hasLabel('InstanceMetric').has('type', neq('network')).has('vlabel', 'InstanceMetric').toList()

g.V().hasLabel('InstanceMetric').has('type', neq('network')).has('vlabel', 'InstanceMetric')
    .inE('Cause').has('status', -1).has('isManual', false)
        .has('promote', within(-1,0,2,3)).has('vlabel', 'Cause')
        .where(outV().has(id, within(targetIds)))

Note that you can use the where() step instead of the as/select construct, just for readability.

HTH,   Marc

Op zaterdag 24 oktober 2020 om 15:19:12 UTC+2 schreef wan...@...:
The total number of edges is 15000.

Only 10000 edges in total meet the query condition.

On Saturday, October 24, 2020 at 9:18:48 PM UTC+8, wd w wrote:
[attachment: profile.png]

On Tuesday, October 20, 2020 at 10:38:46 PM UTC+8, HadoopMarc wrote:
Can you show the profiling of the query using the profile() step?

Best wishes,    Marc

Op dinsdag 20 oktober 2020 om 14:22:59 UTC+2 schreef wan...@...:

g.V().hasLabel("Instance").has("instanceId", P.within("12", "34")).bothE('Cause').has('enabled', true).as('e').bothV().has('instanceId', P.within('64', '123')).select('e').count();

The above count query executes very slowly; what can be done to speed it up?

I have created a compositeIndex and a mixedIndex for instanceId and enabled.

How should I convert this query to a direct index query?


Re: leaking transactions in ConsistentKeyLocker

BO XUAN LI <libo...@...>
 

Hi Madhan,

I am not familiar with this part thus cannot say much on this, but do you have any example (or even better, unit test) that can reproduce the issue?

Regards,
Boxuan

On Nov 8, 2020, at 9:37 AM, 'Madhan Neethiraj' via JanusGraph users <janusgra...@...> wrote:

Transactions created in ConsistentKeyLocker.overrideTimestamp() are neither committed nor rolled back by the callers. This results in leaked transactions. This issue (of leaked transactions) was seen while implementing a custom StoreManager. Addressing the issue required updating ConsistentKeyLocker to commit the created transactions.

The fix is straightforward, and I have patches for master and v0.5.1. Can someone please review and either confirm the issue or suggest an alternative (to avoid the leaked transactions)?

Relevant code from ConsistentKeyLocker.java is given below.

Thanks,
Madhan

  private WriteResult tryWriteLockOnce(StaticBuffer key, StaticBuffer del, StoreTransaction txh) {
    ...
    try {
      final StoreTransaction newTx = overrideTimestamp(txh, writeTimer.getStartTime());
      store.mutate(key, Collections.singletonList(newLockEntry),
          null == del ? KeyColumnValueStore.NO_DELETIONS : Collections.singletonList(del), newTx);
      } catch (BackendException e) {
        ...
  }

  private WriteResult tryDeleteLockOnce(StaticBuffer key, StaticBuffer col, StoreTransaction txh) {
    ...
    try {
      final StoreTransaction newTx = overrideTimestamp(txh, delTimer.getStartTime());
      store.mutate(key, ImmutableList.of(), Collections.singletonList(col), newTx);
    } catch (BackendException e) {
      ...
  }

  protected void deleteSingleLock(KeyColumn kc, ConsistentKeyLockStatus ls, StoreTransaction tx) {
    ...
    try {
      StoreTransaction newTx = overrideTimestamp(tx, times.getTime());
      store.mutate(serializer.toLockKey(kc.getKey(), kc.getColumn()), ImmutableList.of(), deletions, newTx);
      return;
    } catch (TemporaryBackendException e) {
      ...
  }

  private StoreTransaction overrideTimestamp(final StoreTransaction tx, final Instant commitTime) throws BackendException {
    StandardBaseTransactionConfig newCfg = new StandardBaseTransactionConfig.Builder(tx.getConfiguration()).commitTime(commitTime).build();
    return manager.beginTransaction(newCfg);
  }




Re: [Bug?] Updating and then removing the property in the same transaction leaves the property with an older value

Boxuan Li <libo...@...>
 

Created a PR to fix this bug: https://github.com/JanusGraph/janusgraph/pull/2244
Appreciate any review!

On Saturday, February 22, 2020 at 2:00:09 PM UTC+8 alex...@... wrote:
I have opened the issue on GitHub regarding this bug: https://github.com/JanusGraph/janusgraph/issues/1981

Pavel, will you be able to check that issue?

On Friday, February 21, 2020 at 12:29:43 AM UTC-8, Pavel Ershov wrote:
This bug was introduced by an optimization: when a property is changed, no read query is triggered, the data is just overwritten blindly, see https://github.com/JanusGraph/janusgraph/blob/100a2ee21351c24eedc0bd9533a49870ab16002a/janusgraph-core/src/main/java/org/janusgraph/graphdb/transaction/StandardJanusGraphTx.java#L796-L808

To fix that, we need to read the old property from the db in order to mark it as deleted.

Or rewrite the property explicitly:

graph = JanusGraphFactory.open("inmemory")
g = graph.traversal()
v = graph.addVertex();
vid = v.id();
v.property("name", "name1");
v.property("p1", "a");
graph.tx().commit();
v = g.V().has("name", "name1").next();
v.property("p1").remove() // remove old property explicitly
v.property("p1", "y");
v.property("p1").remove()
g.V(vid).properties();
graph.tx().commit();
g.V(vid).properties();



On Thursday, February 20, 2020 at 20:14:12 UTC+3, Oleksandr Porunov wrote:
I think this is a bug. You may raise an issue on GitHub about that.
Basically, when we update a property and then remove it in the same transaction, JanusGraph thinks that the property is new and removes it only from the added relations inside the current transaction (i.e. it removes the updated version of the property without marking it for deletion when the transaction closes). I think we should mark the property as a modified property to resolve the issue.

As a temporary workaround, you may call "remove" 2 times for modified properties. In that case, the property will be marked for deletion and will be removed during the commit operation.
I.e. call the following 4 operations to remove 2 properties:
v.property("p1").remove()
v.property("p1").remove()
v.property("p2").remove()
v.property("p2").remove()

After the issue is fixed, you will need only 1 "remove" call per each modified property. Please open the issue on GitHub about that.

Thank you,
Oleksandr

On Tuesday, February 18, 2020 at 3:41:54 PM UTC-8, Bharat Dighe wrote:
Is it a known bug? Any workaround?

Added a vertex
gremlin> v=graph.addVertex();
==>v[204804096]
gremlin> v.property("name", "name1");
==>vp[name->name1]
gremlin> v.property("p1", "v");
==>vp[p1->v]
gremlin> v.property("p2", "v");
==>vp[p2->v]
gremlin> graph.tx().commit();
==>null
gremlin> g.V(204804096).properties();
==>vp[name->name1]
==>vp[p1->v]
==>vp[p2->v]
gremlin> graph.tx().commit();
==>null

Updated properties p1 and p2 to value x
gremlin> v=g.V().has("name", "name1").next();
==>v[204804096]
gremlin> v.property("p1", "x");
==>vp[p1->x]
gremlin> v.property("p2", "x")
==>vp[p2->x]
gremlin> graph.tx().commit();
==>null
gremlin> g.V(204804096).properties();
==>vp[name->name1]
==>vp[p1->x]
==>vp[p2->x]

Updated properties p1 and p2 to value y, and in the same transaction the properties are removed

gremlin> v=g.V().has("name", "name1").next();
==>v[204804096]
gremlin> v.property("p1", "y");
==>vp[p1->y]
gremlin> v.property("p2", "y");
==>vp[p2->y]
gremlin> v.property("p1").remove()
==>null
gremlin> v.property("p2").remove()
==>null
gremlin> g.V(204804096).properties();
==>vp[name->name1]
==>vp[p1->x]
==>vp[p2->x]
gremlin> graph.tx().commit();
==>null
gremlin> g.V(204804280).properties();
==>vp[name->v1]
==>vp[p1->x]
==>vp[p2->x]

Properties p1 and p2 are not removed. Their values are set to previous value of "x".





leaking transactions in ConsistentKeyLocker

Madhan Neethiraj <mneet...@...>
 

Transactions created in ConsistentKeyLocker.overrideTimestamp() are neither committed nor rolled back by the callers. This results in leaked transactions. This issue (of leaked transactions) was seen while implementing a custom StoreManager. Addressing the issue required updating ConsistentKeyLocker to commit the created transactions.

The fix is straightforward, and I have patches for master and v0.5.1. Can someone please review and either confirm the issue or suggest an alternative (to avoid the leaked transactions)?

Relevant code from ConsistentKeyLocker.java is given below.

Thanks,
Madhan

  private WriteResult tryWriteLockOnce(StaticBuffer key, StaticBuffer del, StoreTransaction txh) {
    ...
    try {
      final StoreTransaction newTx = overrideTimestamp(txh, writeTimer.getStartTime());
      store.mutate(key, Collections.singletonList(newLockEntry),
          null == del ? KeyColumnValueStore.NO_DELETIONS : Collections.singletonList(del), newTx);
      } catch (BackendException e) {
        ...
  }

  private WriteResult tryDeleteLockOnce(StaticBuffer key, StaticBuffer col, StoreTransaction txh) {
    ...
    try {
      final StoreTransaction newTx = overrideTimestamp(txh, delTimer.getStartTime());
      store.mutate(key, ImmutableList.of(), Collections.singletonList(col), newTx);
    } catch (BackendException e) {
      ...
  }

  protected void deleteSingleLock(KeyColumn kc, ConsistentKeyLockStatus ls, StoreTransaction tx) {
    ...
    try {
      StoreTransaction newTx = overrideTimestamp(tx, times.getTime());
      store.mutate(serializer.toLockKey(kc.getKey(), kc.getColumn()), ImmutableList.of(), deletions, newTx);
      return;
    } catch (TemporaryBackendException e) {
      ...
  }

  private StoreTransaction overrideTimestamp(final StoreTransaction tx, final Instant commitTime) throws BackendException {
    StandardBaseTransactionConfig newCfg = new StandardBaseTransactionConfig.Builder(tx.getConfiguration()).commitTime(commitTime).build();
    return manager.beginTransaction(newCfg);
  }


Re: Confusion on the warning "Lock write succeeded but took too long"

HadoopMarc <bi...@...>
 

Hi,

This is more about English than about Java.

I think the author means that if you only get an answer after 10 seconds telling you that you hold the lock for 1 second starting 5 seconds ago, that lock is of no use and you need a new one.

HTH,    Marc

Op zaterdag 31 oktober 2020 om 02:25:01 UTC+1 schreef c...@...:

Environment: JG-0.5.2 with HBase and Elasticsearch

When running a Java program to delete vertices and edges, many warnings "Lock write succeeded but took too long" show up.

Wanting to know what happens, I checked the code and got confused:

@Override
protected ConsistentKeyLockStatus writeSingleLock(KeyColumn lockID, StoreTransaction txh) throws Throwable {

    final StaticBuffer lockKey = serializer.toLockKey(lockID.getKey(), lockID.getColumn());
    StaticBuffer oldLockCol = null;

    for (int i = 0; i < lockRetryCount; i++) {
        WriteResult wr = tryWriteLockOnce(lockKey, oldLockCol, txh);
        if (wr.isSuccessful() && wr.getDuration().compareTo(lockWait) <= 0) {
            final Instant writeInstant = wr.getWriteTimestamp();
            final Instant expireInstant = writeInstant.plus(lockExpire);
            return new ConsistentKeyLockStatus(writeInstant, expireInstant);
        }
        oldLockCol = wr.getLockCol();
        handleMutationFailure(lockID, lockKey, wr, txh);
    }
    tryDeleteLockOnce(lockKey, oldLockCol, txh);
    // TODO log exception or successful too-slow write here
    throw new TemporaryBackendException("Lock write retry count exceeded");
}


In the code shown above, when wr.getDuration() is greater than lockWait, the warning "Lock write succeeded but took too long" is printed, and the for loop continues until the retry limit is reached.

And my confusion is: why do we continue the for loop when the write was successful? (Even though it took too long, it's still a successful operation, right?)

Any help would be greatly appreciated.


Re: Transactional operation in janus-graph through gremlin queries

"anj...@gmail.com" <anjani...@...>
 

Thanks Marc, for you help and time.

Regards,
Anjani

On Tuesday, 3 November 2020 at 21:45:33 UTC+5:30 HadoopMarc wrote:
Hi Anjani,

No, existing schema elements cannot be modified (apart from renames), see:

https://docs.janusgraph.org/basics/schema/#changing-schema-elements

Best wishes,     Marc

Op dinsdag 3 november 2020 om 11:08:00 UTC+1 schreef anj...@...:
Thanks Marc, I missed step 4. Thank you very much for pointing it out.

One more question: our graph is already running on prod and the properties are defined, but consistency is not set on them.
If I add a consistency modifier for existing properties, will it be picked up?

Thanks,
Anjani

On Monday, 2 November 2020 at 21:41:18 UTC+5:30 HadoopMarc wrote:
Hi Anjani,

See step 4 in the ref docs link I sent earlier: the locks are not released until the entire transaction is committed or rolled back.

Marc

Op maandag 2 november 2020 om 13:21:57 UTC+1 schreef anj...@...:
Hi Marc,

Thanks for your detailed response. My understanding is node is locked automatically during operation and get released after it, does not wait for commit.

 Suppose i need to update 3 nodes. I can write like as below. In this way if there is any exception for any of the node, will not commit and hence can control it. 
try {
    g.V(4104).property("NodeUpdatedDate", new Date()).next();
    g.V(4288).property("NodeUpdatedDate", new Date()).next();
    g.V(4188).property("NodeUpdatedDate", new Date()).next();
    g.tx().commit();
} catch (Exception e) {
//Recover, retry
}
  
With this 1st node V(4104) is locked by thread when update is happening, but it get released when update for other nodes V(4288), V(4188) happening, which mean other thread can update V(4104) before transaction is committed, which might result in data inconsistency.

I was thinking in some way acquire lock on all nodes before doing any operation on them some thing like :
g.V(4288).lock(),g.V(4104).lock(), g.V(4188).lock()
After locking explicitly, perform operations and unlock as part of commit.

Thanks,
Anjani

On Saturday, 31 October 2020 at 17:01:05 UTC+5:30 HadoopMarc wrote:
Hi Anjani,

Do you mean that there are still (extremely rare) failure situations possible despite the use of locking and the use of JanusGraph transactions? I am not sure if I can think of one and it would depend on ill-timed failures in the backend (e.g. power failure). One thing to worry about and that you could properly test, is whether all mutations in the JanusGraph transaction are sent to the backend in a single network request (otherwise JanusGraph could have persisted two of the five nodes and then fail). There are various configuration properties that might influence this:

query.batch
storage.cql.atomic-batch-mutate
storage.cql.batch-statement-size

Also see the comments for the tx.log-tx property.
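
For reference, a hedged sketch of how these keys could be supplied when opening the graph (the values are placeholders, not recommendations):

graph = JanusGraphFactory.build().
    set('storage.backend', 'cql').
    set('storage.hostname', '127.0.0.1').
    set('query.batch', true).
    set('storage.cql.atomic-batch-mutate', true).
    set('storage.cql.batch-statement-size', 20).
    open()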

HTH,    Marc

Op vrijdag 30 oktober 2020 om 15:58:39 UTC+1 schreef anj...@...:
Hi Marc,

Thanks for your response. Earlier I had a look at the page you shared, and my understanding from it is that we can define consistency at the property level; if the same property is modified by two different threads, a consistency check from the back-end happens and the transaction can succeed or throw a locking exception. But this is applicable to a property of a single node.

In my case I want to add/update a property on multiple nodes based on some condition. For example, based on some rules we see that some nodes are related and we want to group them; for that we want to add/update one property on multiple nodes, say 5 nodes. In that case we want to lock all 5 nodes, update them and then release the locks.
- If the update to any of the nodes fails, then we should roll back the updates to the other nodes as well.
- While the updates to the 5 nodes are going on, no other threads should modify that property.

Thanks,
Anjani

 

On Friday, 30 October 2020 at 19:26:10 UTC+5:30 HadoopMarc wrote:
Hi Anjani,

I am not sure if I understand your question and if your question already took the following into account:


What aspect of transactions do you miss? You can choose between tx.commit() for successful insertion and tx.rollback() in case of exceptions.

Please clarify!

Marc

Op vrijdag 30 oktober 2020 om 08:15:36 UTC+1 schreef anj...@...:
Hi All,

We are using Janus 0.5.2 with Cassandra and Elasticsearch.
Currently, for adding or updating a node, we are using Gremlin queries in Java.

We have a use case where we need to update multiple nodes for a given piece of metadata. We want to make sure the updates to multiple nodes are transactional, and that while the updates are happening, no other thread can update them.

Through Gremlin queries, do we have the option to:
 - achieve transactional updates?
 - lock/unlock nodes for updates?

Appreciate your thoughts/inputs.

Thanks,
Anjani


Re: Transactional operation in janus-graph through gremlin queries

HadoopMarc <bi...@...>
 

Hi Anjani,

No, existing schema elements cannot be modified (apart from renames), see:

https://docs.janusgraph.org/basics/schema/#changing-schema-elements
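
For example (a hedged sketch; 'place' and 'location' are placeholder names), a rename is the one change that is allowed:

mgmt = graph.openManagement()
place = mgmt.getPropertyKey('place')
mgmt.changeName(place, 'location')   // renaming is fine; changing data type, cardinality etc. is not
mgmt.commit()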

Best wishes,     Marc

Op dinsdag 3 november 2020 om 11:08:00 UTC+1 schreef anj...@...:

Thanks Marc, I missed step 4. Thank you very much for pointing it out.

One more question: our graph is already running on prod and the properties are defined, but consistency is not set on them.
If I add a consistency modifier for existing properties, will it be picked up?

Thanks,
Anjani

On Monday, 2 November 2020 at 21:41:18 UTC+5:30 HadoopMarc wrote:
Hi Anjani,

See step 4 in the ref docs link I sent earlier: the locks are not released until the entire transaction is committed or rolled back.

Marc

Op maandag 2 november 2020 om 13:21:57 UTC+1 schreef anj...@...:
Hi Marc,

Thanks for your detailed response. My understanding is node is locked automatically during operation and get released after it, does not wait for commit.

 Suppose i need to update 3 nodes. I can write like as below. In this way if there is any exception for any of the node, will not commit and hence can control it. 
try {
    g.V(4104).property("NodeUpdatedDate", new Date()).next();
    g.V(4288).property("NodeUpdatedDate", new Date()).next();
    g.V(4188).property("NodeUpdatedDate", new Date()).next();
    g.tx().commit();
} catch (Exception e) {
//Recover, retry
}
  
With this 1st node V(4104) is locked by thread when update is happening, but it get released when update for other nodes V(4288), V(4188) happening, which mean other thread can update V(4104) before transaction is committed, which might result in data inconsistency.

I was thinking in some way acquire lock on all nodes before doing any operation on them some thing like :
g.V(4288).lock(),g.V(4104).lock(), g.V(4188).lock()
After locking explicitly, perform operations and unlock as part of commit.

Thanks,
Anjani

On Saturday, 31 October 2020 at 17:01:05 UTC+5:30 HadoopMarc wrote:
Hi Anjani,

Do you mean that there are still (extremely rare) failure situations possible despite the use of locking and the use of JanusGraph transactions? I am not sure if I can think of one and it would depend on ill-timed failures in the backend (e.g. power failure). One thing to worry about and that you could properly test, is whether all mutations in the JanusGraph transaction are sent to the backend in a single network request (otherwise JanusGraph could have persisted two of the five nodes and then fail). There are various configuration properties that might influence this:

query.batch
storage.cql.atomic-batch-mutate
storage.cql.batch-statement-size

Also see the comments for the tx.log-tx property.

HTH,    Marc

Op vrijdag 30 oktober 2020 om 15:58:39 UTC+1 schreef anj...@...:
Hi Marc,

Thanks for your response. Earlier i had look on the page you shared and from that my understanding is we can define consistency at property level and if same property is modified by two different threads then  consistency check from back-end happens and transaction can success or can throw locking exception. But this is applicable to a property of a singe node.

In my case i want to add/update property on  multiple nodes based on some condition.  For example based on some rules we see some nodes are related and we want to group them, for that want to add/update one property on multiple nodes, say want to add/update property on 5 nodes. In that case want to local all 5 nodes, update them and then release locks. 
- If update to any of the node fails then we should roll back updates to other nodes also.
- When update to 5 nodes are going on, no other threads should modify that property.

Thanks,
Anjani

 

On Friday, 30 October 2020 at 19:26:10 UTC+5:30 HadoopMarc wrote:
Hi Anjani,

I am not sure if I understand your question and if your question already took the following into account:


What aspect of transactions do you miss? You can choose between tx.commit() for successful insertion and tx.rollback() in case of exceptions.

Please clarify!

Marc

Op vrijdag 30 oktober 2020 om 08:15:36 UTC+1 schreef anj...@...:
Hi All,

We are using Janus 0.5.2 with Cassandra and Elastic-search. 
Currently for adding or updating a node we are using gremlin queries in java.  

We have a use case where we need to update multiple-nodes for a given metadata. We want to make sure updates to multiple nodes are transactional and when updates are happening, no other thread should update them.

Through gremlin queries do we have option to: 
 - achieve transaction updates.
 - locking/unlocking of nodes for updates?

Appreciate your thoughts/inputs.

Thanks,
Anjani


Re: Transactional operation in janus-graph through gremlin queries

"anj...@gmail.com" <anjani...@...>
 

Thanks Marc, I missed step 4. Thank you very much for pointing it out.

One more question: our graph is already running on prod and the properties are defined, but consistency is not set on them.
If I add a consistency modifier for existing properties, will it be picked up?

Thanks,
Anjani

On Monday, 2 November 2020 at 21:41:18 UTC+5:30 HadoopMarc wrote:
Hi Anjani,

See step 4 in the ref docs link I sent earlier: the locks are not released until the entire transaction is committed or rolled back.

Marc

Op maandag 2 november 2020 om 13:21:57 UTC+1 schreef anj...@...:
Hi Marc,

Thanks for your detailed response. My understanding is node is locked automatically during operation and get released after it, does not wait for commit.

 Suppose i need to update 3 nodes. I can write like as below. In this way if there is any exception for any of the node, will not commit and hence can control it. 
try {
    g.V(4104).property("NodeUpdatedDate", new Date()).next();
    g.V(4288).property("NodeUpdatedDate", new Date()).next();
    g.V(4188).property("NodeUpdatedDate", new Date()).next();
    g.tx().commit();
} catch (Exception e) {
//Recover, retry
}
  
With this 1st node V(4104) is locked by thread when update is happening, but it get released when update for other nodes V(4288), V(4188) happening, which mean other thread can update V(4104) before transaction is committed, which might result in data inconsistency.

I was thinking in some way acquire lock on all nodes before doing any operation on them some thing like :
g.V(4288).lock(),g.V(4104).lock(), g.V(4188).lock()
After locking explicitly, perform operations and unlock as part of commit.

Thanks,
Anjani

On Saturday, 31 October 2020 at 17:01:05 UTC+5:30 HadoopMarc wrote:
Hi Anjani,

Do you mean that there are still (extremely rare) failure situations possible despite the use of locking and the use of JanusGraph transactions? I am not sure if I can think of one and it would depend on ill-timed failures in the backend (e.g. power failure). One thing to worry about and that you could properly test, is whether all mutations in the JanusGraph transaction are sent to the backend in a single network request (otherwise JanusGraph could have persisted two of the five nodes and then fail). There are various configuration properties that might influence this:

query.batch
storage.cql.atomic-batch-mutate
storage.cql.batch-statement-size

Also see the comments for the tx.log-tx property.

HTH,    Marc

Op vrijdag 30 oktober 2020 om 15:58:39 UTC+1 schreef anj...@...:
Hi Marc,

Thanks for your response. Earlier i had look on the page you shared and from that my understanding is we can define consistency at property level and if same property is modified by two different threads then  consistency check from back-end happens and transaction can success or can throw locking exception. But this is applicable to a property of a singe node.

In my case i want to add/update property on  multiple nodes based on some condition.  For example based on some rules we see some nodes are related and we want to group them, for that want to add/update one property on multiple nodes, say want to add/update property on 5 nodes. In that case want to local all 5 nodes, update them and then release locks. 
- If update to any of the node fails then we should roll back updates to other nodes also.
- When update to 5 nodes are going on, no other threads should modify that property.

Thanks,
Anjani

 

On Friday, 30 October 2020 at 19:26:10 UTC+5:30 HadoopMarc wrote:
Hi Anjani,

I am not sure if I understand your question and if your question already took the following into account:


What aspect of transactions do you miss? You can choose between tx.commit() for successful insertion and tx.rollback() in case of exceptions.

Please clarify!

Marc

Op vrijdag 30 oktober 2020 om 08:15:36 UTC+1 schreef anj...@...:
Hi All,

We are using Janus 0.5.2 with Cassandra and Elastic-search. 
Currently for adding or updating a node we are using gremlin queries in java.  

We have a use case where we need to update multiple-nodes for a given metadata. We want to make sure updates to multiple nodes are transactional and when updates are happening, no other thread should update them.

Through gremlin queries do we have option to: 
 - achieve transaction updates.
 - locking/unlocking of nodes for updates?

Appreciate your thoughts/inputs.

Thanks,
Anjani


20/11/03 04:32:32 WARN consistentkey.ConsistentKeyLocker: Lock write succeeded but took too long: duration PT0.107S exceeded limit PT0.1S

"priy...@gmail.com" <priyanka...@...>
 

Hi 

I am using Janus with HBase as the storage backend. I am getting the following exception while committing a transaction:

20/11/03 04:32:30 WARN consistentkey.ConsistentKeyLocker: Lock write succeeded but took too long: duration PT0.106S exceeded limit PT0.1S
20/11/03 04:32:30 WARN consistentkey.ConsistentKeyLocker: Lock write succeeded but took too long: duration PT0.113S exceeded limit PT0.1S
20/11/03 04:32:30 WARN consistentkey.ConsistentKeyLocker: Lock write succeeded but took too long: duration PT0.139S exceeded limit PT0.1S
20/11/03 04:32:31 WARN consistentkey.ConsistentKeyLocker: Lock write succeeded but took too long: duration PT0.164S exceeded limit PT0.1S
20/11/03 04:32:31 WARN consistentkey.ConsistentKeyLocker: Lock write succeeded but took too long: duration PT0.178S exceeded limit PT0.1S
20/11/03 04:32:31 WARN consistentkey.ConsistentKeyLocker: Lock write succeeded but took too long: duration PT0.138S exceeded limit PT0.1S
20/11/03 04:32:31 WARN consistentkey.ConsistentKeyLocker: Lock write succeeded but took too long: duration PT0.138S exceeded limit PT0.1S
20/11/03 04:32:31 WARN consistentkey.ConsistentKeyLocker: Lock write succeeded but took too long: duration PT0.147S exceeded limit PT0.1S
20/11/03 04:32:31 WARN consistentkey.ConsistentKeyLocker: Lock write succeeded but took too long: duration PT0.194S exceeded limit PT0.1S
20/11/03 04:32:31 WARN consistentkey.ConsistentKeyLocker: Lock write succeeded but took too long: duration PT0.111S exceeded limit PT0.1S
20/11/03 04:32:31 WARN consistentkey.ConsistentKeyLocker: Lock write succeeded but took too long: duration PT0.134S exceeded limit PT0.1S
20/11/03 04:32:32 WARN consistentkey.ConsistentKeyLocker: Lock write succeeded but took too long: duration PT0.127S exceeded limit PT0.1S
20/11/03 04:32:32 WARN consistentkey.ConsistentKeyLocker: Lock write succeeded but took too long: duration PT0.127S exceeded limit PT0.1S
20/11/03 04:32:32 WARN consistentkey.ConsistentKeyLocker: Lock write succeeded but took too long: duration PT0.214S exceeded limit PT0.1S
20/11/03 04:32:32 WARN consistentkey.ConsistentKeyLocker: Lock write succeeded but took too long: duration PT0.201S exceeded limit PT0.1S
20/11/03 04:32:32 WARN consistentkey.ConsistentKeyLocker: Lock write succeeded but took too long: duration PT0.129S exceeded limit PT0.1S
20/11/03 04:32:32 WARN consistentkey.ConsistentKeyLocker: Lock write succeeded but took too long: duration PT0.107S exceeded limit PT0.1S
20/11/03 04:32:32 ERROR database.StandardJanusGraph: Could not commit transaction [1] due to exception
org.janusgraph.diskstorage.locking.TemporaryLockingException: Temporary locking failure
        at org.janusgraph.diskstorage.locking.AbstractLocker.writeLock(AbstractLocker.java:309)
        at org.janusgraph.diskstorage.locking.consistentkey.ExpectedValueCheckingStore.acquireLock(ExpectedValueCheckingStore.java:103)
        at org.janusgraph.diskstorage.keycolumnvalue.KCVSProxy.acquireLock(KCVSProxy.java:52)
        at org.janusgraph.diskstorage.BackendTransaction.acquireEdgeLock(BackendTransaction.java:237)
        at org.janusgraph.graphdb.database.StandardJanusGraph.prepareCommit(StandardJanusGraph.java:532)
        at org.janusgraph.graphdb.database.StandardJanusGraph.commit(StandardJanusGraph.java:721)
        at org.janusgraph.graphdb.transaction.StandardJanusGraphTx.commit(StandardJanusGraphTx.java:1374)


I have tried increasing the lock wait time and expiry time, but it still gives the same exception:

storage.lock.expiry-time = 600000
storage.lock.wait-time = 50000
What could be the reason? I have not tried increasing the number of retries here.
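
For reference, a hedged sketch of the related lock settings side by side in the properties file, assuming the usual key name storage.lock.retries for the retry count (values are illustrative only, not recommendations):

storage.lock.wait-time = 500
storage.lock.expiry-time = 600000
storage.lock.retries = 10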


Re: Serializing a JanusGraph subgraph in Gremlin-Java

Fred Eisele <fredric...@...>
 

I am getting the same errors you got, but the fix is not working for me.
```yaml
hosts: [localhost]
port: 8182
serializer:
   className: org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV3d0
   config:
     ioRegistries:
       - org.apache.tinkerpop.gremlin.tinkergraph.structure.TinkerIoRegistryV3d0
       - org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry
```
```text
Exception in thread "main" java.util.concurrent.CompletionException:
org.apache.tinkerpop.gremlin.driver.exception.ResponseException:
Error during serialization:
Class is not registered: org.apache.tinkerpop.gremlin.tinkergraph.structure.TinkerGraph
Note: To register this class use:
kryo.register(org.apache.tinkerpop.gremlin.tinkergraph.structure.TinkerGraph.class);
    at java.base/java.util.concurrent.CompletableFuture.reportJoin(CompletableFuture.java:412)
```
Meanwhile the groovy-gremlin-client works fine and it makes no mention of the TinkerIoRegistryV3d0.

On Friday, April 6, 2018 at 8:30:07 PM UTC-5 ri...@... wrote:
I was able to get it to work by including both JanusGraph and TinkerPop IoRegistries on both server and client

Tinkerpop 3.2.6 on (Janus 0.2.0) gremlin server and client

gremlin-server.yaml

...
serializers:
  - { className: org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV1d0, config: { ioRegistries: [org.apache.tinkerpop.gremlin.tinkergraph.structure.TinkerIoRegistry,org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry] }}
...

remote-objects.yaml

hosts: [127.0.0.1]
port: 8182
serializer: {
    className: org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV1d0,
    config: {
        ioRegistries: [org.apache.tinkerpop.gremlin.tinkergraph.structure.TinkerIoRegistry,org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry]
    }   
}

If I include only TinkerIoRegistry, it complains about not being able to handle certain JanusGraph classes. If I just include JanusGraphIoRegistry, it complains about not being able to handle certain TinkerPop classes.

Thanks for your help, Stephen




On Thursday, April 5, 2018 at 5:01:12 PM UTC-5, John Ripley wrote:
I am connecting to a remote JanusGraph 0.2.0 instance from Java. My client is using TinkerPop 3.2.6. I can do all the standard stuff, return vertices, edges, etc.

When I try to build a 2-generation subgraph starting from a known seed

   Object o = g.V().has("id", 1).repeat(__.bothE().subgraph("subGraph").outV()).times(2).cap("subGraph").next();


I get the following exception:


org.apache.tinkerpop.gremlin.driver.exception.ResponseException: Error during serialization: Class is not registered: org.apache.tinkerpop.gremlin.tinkergraph.structure.TinkerGraph

Note: To register this class use: kryo.register(org.apache.tinkerpop.gremlin.tinkergraph.structure.TinkerGraph.class);

at org.apache.tinkerpop.gremlin.driver.Handler$GremlinResponseHandler.channelRead0(Handler.java:246) ~[gremlin-driver-3.3.1.jar:3.3.1]

        ...


This same line works fine in the gremlin console.  

gremlin> sg = g.V().has('id', 1).repeat(bothE().subgraph('subGraph').bothV()).times(3).cap('subGraph').next()

==>tinkergraph[vertices:4 edges:4]

I am using the out of the box JanusGraph 0.2.0 gremlin-server.yaml and remote-objects.yaml

Re: Transactional operation in janus-graph through gremlin queries

HadoopMarc <bi...@...>
 

Hi Anjani,

See step 4 in the ref docs link I sent earlier: the locks are not released until the entire transaction is committed or rolled back.
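
As a hedged sketch of the mechanism step 4 refers to (the property key name just follows the example above; the snippet is illustrative, not from the thread): locking is enabled per schema element with a consistency modifier, and the resulting locks are held until commit or rollback:

mgmt = graph.openManagement()
updated = mgmt.getPropertyKey('NodeUpdatedDate')
mgmt.setConsistency(updated, ConsistencyModifier.LOCK)   // concurrent writers now conflict at commit time
mgmt.commit()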

Marc

Op maandag 2 november 2020 om 13:21:57 UTC+1 schreef anj...@...:

Hi Marc,

Thanks for your detailed response. My understanding is node is locked automatically during operation and get released after it, does not wait for commit.

 Suppose i need to update 3 nodes. I can write like as below. In this way if there is any exception for any of the node, will not commit and hence can control it. 
try {
    g.V(4104).property("NodeUpdatedDate", new Date()).next();
    g.V(4288).property("NodeUpdatedDate", new Date()).next();
    g.V(4188).property("NodeUpdatedDate", new Date()).next();
    g.tx().commit();
} catch (Exception e) {
//Recover, retry
}
  
With this 1st node V(4104) is locked by thread when update is happening, but it get released when update for other nodes V(4288), V(4188) happening, which mean other thread can update V(4104) before transaction is committed, which might result in data inconsistency.

I was thinking in some way acquire lock on all nodes before doing any operation on them some thing like :
g.V(4288).lock(),g.V(4104).lock(), g.V(4188).lock()
After locking explicitly, perform operations and unlock as part of commit.

Thanks,
Anjani

On Saturday, 31 October 2020 at 17:01:05 UTC+5:30 HadoopMarc wrote:
Hi Anjani,

Do you mean that there are still (extremely rare) failure situations possible despite the use of locking and the use of JanusGraph transactions? I am not sure if I can think of one and it would depend on ill-timed failures in the backend (e.g. power failure). One thing to worry about and that you could properly test, is whether all mutations in the JanusGraph transaction are sent to the backend in a single network request (otherwise JanusGraph could have persisted two of the five nodes and then fail). There are various configuration properties that might influence this:

query.batch
storage.cql.atomic-batch-mutate
storage.cql.batch-statement-size

Also see the comments for the tx.log-tx property.

HTH,    Marc

Op vrijdag 30 oktober 2020 om 15:58:39 UTC+1 schreef anj...@...:
Hi Marc,

Thanks for your response. Earlier i had look on the page you shared and from that my understanding is we can define consistency at property level and if same property is modified by two different threads then  consistency check from back-end happens and transaction can success or can throw locking exception. But this is applicable to a property of a singe node.

In my case i want to add/update property on  multiple nodes based on some condition.  For example based on some rules we see some nodes are related and we want to group them, for that want to add/update one property on multiple nodes, say want to add/update property on 5 nodes. In that case want to local all 5 nodes, update them and then release locks. 
- If update to any of the node fails then we should roll back updates to other nodes also.
- When update to 5 nodes are going on, no other threads should modify that property.

Thanks,
Anjani

 

On Friday, 30 October 2020 at 19:26:10 UTC+5:30 HadoopMarc wrote:
Hi Anjani,

I am not sure if I understand your question and if your question already took the following into account:


What aspect of transactions do you miss? You can choose between tx.commit() for successful insertion and tx.rollback() in case of exceptions.

Please clarify!

Marc

Op vrijdag 30 oktober 2020 om 08:15:36 UTC+1 schreef anj...@...:
Hi All,

We are using Janus 0.5.2 with Cassandra and Elastic-search. 
Currently for adding or updating a node we are using gremlin queries in java.  

We have a use case where we need to update multiple-nodes for a given metadata. We want to make sure updates to multiple nodes are transactional and when updates are happening, no other thread should update them.

Through gremlin queries do we have option to: 
 - achieve transaction updates.
 - locking/unlocking of nodes for updates?

Appreciate your thoughts/inputs.

Thanks,
Anjani


Re: Transactional operation in janus-graph through gremlin queries

"anj...@gmail.com" <anjani...@...>
 

Hi Marc,

Thanks for your detailed response. My understanding is that a node is locked automatically during the operation and the lock is released right after it, without waiting for the commit.

Suppose I need to update 3 nodes. I can write it as below. This way, if there is an exception for any of the nodes, the transaction will not be committed and I can handle it.
try {
    g.V(4104).property("NodeUpdatedDate", new Date()).next();
    g.V(4288).property("NodeUpdatedDate", new Date()).next();
    g.V(4188).property("NodeUpdatedDate", new Date()).next();
    g.tx().commit();
} catch (Exception e) {
    // Recover, retry
}

With this, the 1st node V(4104) is locked by the thread while its update is happening, but the lock is released while the updates for the other nodes V(4288) and V(4188) happen, which means another thread can update V(4104) before the transaction is committed, which might result in data inconsistency.

I was thinking of some way to acquire locks on all nodes before doing any operation on them, something like:
g.V(4288).lock(), g.V(4104).lock(), g.V(4188).lock()
and, after locking explicitly, perform the operations and unlock as part of the commit.

Thanks,
Anjani

On Saturday, 31 October 2020 at 17:01:05 UTC+5:30 HadoopMarc wrote:
Hi Anjani,

Do you mean that there are still (extremely rare) failure situations possible despite the use of locking and the use of JanusGraph transactions? I am not sure if I can think of one and it would depend on ill-timed failures in the backend (e.g. power failure). One thing to worry about and that you could properly test, is whether all mutations in the JanusGraph transaction are sent to the backend in a single network request (otherwise JanusGraph could have persisted two of the five nodes and then fail). There are various configuration properties that might influence this:

query.batch
storage.cql.atomic-batch-mutate
storage.cql.batch-statement-size

Also see the comments for the tx.log-tx property.

HTH,    Marc

Op vrijdag 30 oktober 2020 om 15:58:39 UTC+1 schreef anj...@...:
Hi Marc,

Thanks for your response. Earlier i had look on the page you shared and from that my understanding is we can define consistency at property level and if same property is modified by two different threads then  consistency check from back-end happens and transaction can success or can throw locking exception. But this is applicable to a property of a singe node.

In my case i want to add/update property on  multiple nodes based on some condition.  For example based on some rules we see some nodes are related and we want to group them, for that want to add/update one property on multiple nodes, say want to add/update property on 5 nodes. In that case want to local all 5 nodes, update them and then release locks. 
- If update to any of the node fails then we should roll back updates to other nodes also.
- When update to 5 nodes are going on, no other threads should modify that property.

Thanks,
Anjani

 

On Friday, 30 October 2020 at 19:26:10 UTC+5:30 HadoopMarc wrote:
Hi Anjani,

I am not sure if I understand your question and if your question already took the following into account:


What aspect of transactions do you miss? You can choose between tx.commit() for successful insertion and tx.rollback() in case of exceptions.

Please clarify!

Marc

Op vrijdag 30 oktober 2020 om 08:15:36 UTC+1 schreef anj...@...:
Hi All,

We are using Janus 0.5.2 with Cassandra and Elastic-search. 
Currently for adding or updating a node we are using gremlin queries in java.  

We have a use case where we need to update multiple-nodes for a given metadata. We want to make sure updates to multiple nodes are transactional and when updates are happening, no other thread should update them.

Through gremlin queries do we have option to: 
 - achieve transaction updates.
 - locking/unlocking of nodes for updates?

Appreciate your thoughts/inputs.

Thanks,
Anjani
