PropertyPlacementStrategy configuration
Manish Baid <mmb...@...>
Hi,

I am trying to set up key-based partitioning using "PropertyPlacementStrategy". As per the test class, I am required to specify the partition key; every vertex in the graph has a partitioning key, i.e. communityId. However, I am unable to set this as a global configuration:

    mgmt.set('ids.partition-key', 'communityId')

    Error: Unknown configuration element in namespace [root.ids]: partition-key

Regards
HadoopMarc <bi...@...>
Hi,

I browsed a bit through the source code, and the ConfigOption you need is defined here:

https://github.com/JanusGraph/janusgraph/blob/9ba0b2d850066a32551f9da4af0780f643288376/janusgraph-core/src/main/java/org/janusgraph/graphdb/database/idassigner/placement/PropertyPlacementStrategy.java

I guess the ids.partition-key ConfigOption is not available until you open the graph with the option ids.placement=PropertyPlacementStrategy.

HTH, Marc

On Sunday, 20 September 2020 at 02:02:49 UTC+2, m...@... wrote:
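[Editor's sketch] Marc's guess follows the general JanusGraph rule that a placement strategy's own options only become resolvable once the strategy class itself is registered. For a graph that has not been initialized yet, the equivalent settings can be sketched in the properties file; the storage backend, hostname, and cluster.max-partitions value below are assumptions, not from the thread, and version-specific behavior may differ:

```properties
# Hypothetical JanusGraph properties file for a new graph.
storage.backend=cql
storage.hostname=127.0.0.1

# Register the placement strategy; its own option,
# ids.partition-key, is only known once this class is loaded.
ids.placement=org.janusgraph.graphdb.database.idassigner.placement.PropertyPlacementStrategy
ids.partition-key=communityId

# Assumption: an explicit partition count for id assignment.
cluster.max-partitions=8
```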
Manish Baid <mmb...@...>
Thanks Marc. That works.

An issue with this approach is that all parameters related to property placement cannot be set together; one has to close the graph and then apply the change. Which is OK, as long as it is documented:

    mgmt = graph.openManagement()
    mgmt.set('ids.flush', false)
    mgmt.set('ids.placement', 'org.janusgraph.graphdb.database.idassigner.placement.PropertyPlacementStrategy')
    mgmt.commit()
    graph.close()
    graph.open(...)
    mgmt = graph.openManagement()
    mgmt.set('ids.partition-key', 'communityId')
    mgmt.commit()

How can I validate that the data is actually partitioned by the partitioning key? In my tests (50k vertices with 5 distinct partition-key values, so 5 partitions expected), the Cassandra table stats show:

    Table: janusgraph_ids
    SSTable count: 0
    Space used (live): 0
    Space used (total): 0
    Space used by snapshots (total): 0
    Off heap memory used (total): 2268
    SSTable Compression Ratio: -1.0
    Number of partitions (estimate): 12

The expected and actual counts do not match.

Thanks

On Sunday, 20 September 2020 at 01:50:21 UTC-7, HadoopMarc wrote:
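[Editor's sketch] For context on the expected count: key-based placement means vertices that share the same value of the configured key land in the same id partition, so 5 distinct communityId values should touch at most 5 distinct partitions. The toy Python model below illustrates only that idea; the hash-and-modulo scheme is illustrative and not JanusGraph's exact algorithm, and `assign_partition`/`partitions_used` are hypothetical helpers, not JanusGraph APIs:

```python
# Toy model of key-based placement: vertices sharing a partition-key
# value land in the same partition. Illustrative only; this is not
# JanusGraph's actual id-assignment code.
def assign_partition(partition_key_value, max_partitions):
    # Same key value -> same partition, deterministically.
    return hash(partition_key_value) % max_partitions

def partitions_used(vertices, key="communityId", max_partitions=32):
    """Return the set of distinct partitions touched by the vertices."""
    return {assign_partition(v[key], max_partitions) for v in vertices}

# 50k vertices spread over 5 distinct community ids, as in the test.
vertices = [{"communityId": f"c{i % 5}"} for i in range(50_000)]
used = partitions_used(vertices)
assert 1 <= len(used) <= 5  # at most one partition per distinct key value
```

Note also that Cassandra's "Number of partitions (estimate)" counts Cassandra storage partitions of that one table, which is a different notion than JanusGraph id partitions, so the two figures are not directly comparable.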