Commit progress listener
marcorizzi82@...
Hi all,
I've been successfully using an EventStrategy listener in my application with both the DefaultEventQueue and the TransactionalEventQueue: they let the listener be invoked either while the events occur or after they have been committed, i.e. respectively before and after the "g.tx().commit()" statement executes. My question is: is there a way to get events about the progress of the commit method execution itself? I'm trying to give my application's users feedback about graph events, and the commit method invocation is a kind of black box that I can only call and wait for: can more information be retrieved about the execution itself?
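Right now the closest I can get is wrapping the commit call myself; a minimal sketch, where notifyUsers(...) is a hypothetical stand-in for the application's real feedback mechanism:

import org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversalSource;

public class CommitFeedback {
    // Hypothetical UI hook -- substitute the application's real feedback channel.
    static void notifyUsers(String message) {
        System.out.println(message);
    }

    static void commitWithFeedback(GraphTraversalSource g) {
        notifyUsers("commit started");
        long start = System.currentTimeMillis();
        g.tx().commit();  // opaque: no progress events are emitted while this runs
        notifyUsers("commit finished in " + (System.currentTimeMillis() - start) + " ms");
    }
}

Thanks in advance, Marco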
|
|
Re: JanusGraph-0.6.0: Unable to open connection JanusGraphFactory with CL=ONE when quorum lost
Umesh Gade
One update: Cassandra version is 4.0.0
On Thu, Dec 2, 2021 at 2:02 PM Umesh Gade via lists.lfaidata.foundation <er.umeshgade=gmail.com@...> wrote:
--
Sincerely, Umesh Gade
|
|
JanusGraph-0.6.0: Unable to open connection JanusGraphFactory with CL=ONE when quorum lost
Umesh Gade
Hi, We just upgraded JanusGraph to 0.6.0 and started observing an issue in a scenario that worked earlier. The scenario is: we open a connection with read/write CL="ONE" using JanusGraphFactory, but when quorum is lost, this connection fails to open. Curious to know what changed around this and what needs to be done to fix it. Graph config passed:
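A sketch of such a configuration, assuming the standard CQL option names (keyspace and host as in the exception below; not the exact file we use):

# Sketch -- illustrative, standard CQL option names for read/write CL=ONE.
storage.backend=cql
storage.hostname=127.0.0.1
storage.cql.keyspace=test_ks
storage.cql.read-consistency-level=ONE
storage.cql.write-consistency-level=ONE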
Below is exception which we got: Opening connection to graph with test_ks@localhost:9042 -- org.janusgraph.core.JanusGraphException: Could not execute operation due to backend exception at org.janusgraph.diskstorage.util.BackendOperation.execute(BackendOperation.java:54) at org.janusgraph.diskstorage.util.BackendOperation.execute(BackendOperation.java:117) at org.janusgraph.diskstorage.configuration.backend.KCVSConfiguration.get(KCVSConfiguration.java:96) at org.janusgraph.diskstorage.configuration.BasicConfiguration.isFrozen(BasicConfiguration.java:105) at org.janusgraph.diskstorage.configuration.builder.ReadConfigurationBuilder.buildGlobalConfiguration(ReadConfigurationBuilder.java:81) at org.janusgraph.graphdb.configuration.builder.GraphDatabaseConfigurationBuilder.build(GraphDatabaseConfigurationBuilder.java:67) at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:176) at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:147) at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:127) at ***.TestCli.openConnection(TestCli.java:140) Caused by: org.janusgraph.diskstorage.TemporaryBackendException: Could not successfully complete backend operation due to repeated temporary exceptions after PT1M at org.janusgraph.diskstorage.util.BackendOperation.executeDirect(BackendOperation.java:98) at org.janusgraph.diskstorage.util.BackendOperation.execute(BackendOperation.java:52) ... 11 more Caused by: org.janusgraph.diskstorage.TemporaryBackendException: Temporary failure in storage backend at io.vavr.API$Match$Case0.apply(API.java:5135) at io.vavr.API$Match.of(API.java:5092) at org.janusgraph.diskstorage.cql.CQLKeyColumnValueStore.lambda$static$0(CQLKeyColumnValueStore.java:120) at org.janusgraph.diskstorage.cql.function.slice.CQLSimpleSliceFunction.interruptibleWait(CQLSimpleSliceFunction.java:50) at org.janusgraph.diskstorage.cql.function.slice.CQLSimpleSliceFunction.getSlice(CQLSimpleSliceFunction.java:39) at org.janusgraph.diskstorage.cql.function.slice.AbstractCQLSliceFunction.getSlice(AbstractCQLSliceFunction.java:48) at org.janusgraph.diskstorage.cql.CQLKeyColumnValueStore.getSlice(CQLKeyColumnValueStore.java:358) at org.janusgraph.diskstorage.configuration.backend.KCVSConfiguration$1.call(KCVSConfiguration.java:99) at org.janusgraph.diskstorage.configuration.backend.KCVSConfiguration$1.call(KCVSConfiguration.java:96) at org.janusgraph.diskstorage.util.BackendOperation.execute(BackendOperation.java:106) at org.janusgraph.diskstorage.util.BackendOperation$1.call(BackendOperation.java:120) at org.janusgraph.diskstorage.util.BackendOperation.executeDirect(BackendOperation.java:66) ... 12 more Caused by: java.util.concurrent.ExecutionException: com.datastax.oss.driver.api.core.AllNodesFailedException: All 1 node(s) tried for the query failed (showing first 1 nodes, use getAllErrors() for more): Node(endPoint=localhost/127.0.0.1:9042, hostId=779642c7-23bb-46d4-88fa-6ae08f2f9e24, hashCode=61feb06d): [com.datastax.oss.driver.api.core.servererrors.UnavailableException: Not enough replicas available for query at consistency QUORUM (2 required but only 1 alive)] at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357) at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908) at org.janusgraph.diskstorage.cql.function.slice.CQLSimpleSliceFunction.interruptibleWait(CQLSimpleSliceFunction.java:45) ... 
20 more Caused by: com.datastax.oss.driver.api.core.AllNodesFailedException: All 1 node(s) tried for the query failed (showing first 1 nodes, use getAllErrors() for more): Node(endPoint=localhost/127.0.0.1:9042, hostId=779642c7-23bb-46d4-88fa-6ae08f2f9e24, hashCode=61feb06d): [com.datastax.oss.driver.api.core.servererrors.UnavailableException: Not enough replicas available for query at consistency QUORUM (2 required but only 1 alive)] at com.datastax.oss.driver.api.core.AllNodesFailedException.fromErrors(AllNodesFailedException.java:55) at com.datastax.oss.driver.internal.core.cql.CqlRequestHandler.sendRequest(CqlRequestHandler.java:261) at com.datastax.oss.driver.internal.core.cql.CqlRequestHandler.access$1000(CqlRequestHandler.java:94) at com.datastax.oss.driver.internal.core.cql.CqlRequestHandler$NodeResponseCallback.processRetryVerdict(CqlRequestHandler.java:849) at com.datastax.oss.driver.internal.core.cql.CqlRequestHandler$NodeResponseCallback.processErrorResponse(CqlRequestHandler.java:828) at com.datastax.oss.driver.internal.core.cql.CqlRequestHandler$NodeResponseCallback.onResponse(CqlRequestHandler.java:655) at com.datastax.oss.driver.internal.core.channel.InFlightHandler.channelRead(InFlightHandler.java:257) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:324) at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:296) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:719) at 
io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:655) at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:581) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986) at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:748) Suppressed: com.datastax.oss.driver.api.core.servererrors.UnavailableException: Not enough replicas available for query at consistency QUORUM (2 required but only 1 alive) Sincerely, Umesh Gade
|
|
NullPointerException comparing PredicateCondition (in equals method)
albert.lockett@...
JanusGraph (version 0.5.2) can be made to throw a NullPointerException using the following traversals:
g.V().has(label, eq('User')).has(label, eq(null))
g.V().has(label, without('User')).has(label, without('Group'))
In QueryUtil#constraints2QNF we're building up a list of conditions from the constraints, and in addConstraint we're checking whether the list already contains the condition (by calling contains): if (!conditions.contains(pc)) conditions.add(pc); This calls the equals method in PredicateCondition. If the list already contains another condition with the same predicate and key, and the condition we're trying to add has a null value, equals throws a NullPointerException.
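A null-safe comparison would avoid this; a sketch only, with field names assumed from the description above rather than copied from the JanusGraph source:

import java.util.Objects;

// Sketch of a null-safe PredicateCondition#equals; predicate, key and value
// are assumed field names.
@Override
public boolean equals(Object other) {
    if (this == other) return true;
    if (other == null || getClass() != other.getClass()) return false;
    PredicateCondition<?, ?> oth = (PredicateCondition<?, ?>) other;
    return predicate.equals(oth.predicate)
            && key.equals(oth.key)
            && Objects.equals(value, oth.value);  // tolerates a null value on either side
}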
The stack trace: java.lang.NullPointerException at org.janusgraph.graphdb.query.condition.PredicateCondition.equals(PredicateCondition.java:109) at java.util.ArrayList.indexOf(ArrayList.java:321) at java.util.ArrayList.contains(ArrayList.java:304) at org.janusgraph.graphdb.query.QueryUtil.addConstraint(QueryUtil.java:272) at org.janusgraph.graphdb.query.QueryUtil.constraints2QNF(QueryUtil.java:215) at org.janusgraph.graphdb.query.graph.GraphCentricQueryBuilder.constructQueryWithoutProfile(GraphCentricQueryBuilder.java:238) at org.janusgraph.graphdb.query.graph.GraphCentricQueryBuilder.constructQuery(GraphCentricQueryBuilder.java:225) at org.janusgraph.graphdb.tinkerpop.optimize.JanusGraphStep.buildGraphCentricQuery(JanusGraphStep.java:196) at org.janusgraph.graphdb.tinkerpop.optimize.JanusGraphStep.lambda$new$0(JanusGraphStep.java:94) at java.util.LinkedHashMap$LinkedEntrySet.forEach(LinkedHashMap.java:671) at org.janusgraph.graphdb.tinkerpop.optimize.JanusGraphStep.lambda$new$1(JanusGraphStep.java:94) at java.util.ArrayList.forEach(ArrayList.java:1257) at org.janusgraph.graphdb.tinkerpop.optimize.JanusGraphStep.lambda$new$3(JanusGraphStep.java:93) at org.apache.tinkerpop.gremlin.process.traversal.step.map.GraphStep.processNextStart(GraphStep.java:157) at org.apache.tinkerpop.gremlin.process.traversal.step.util.AbstractStep.hasNext(AbstractStep.java:144) at org.apache.tinkerpop.gremlin.process.traversal.util.DefaultTraversal.hasNext(DefaultTraversal.java:196) at org.apache.tinkerpop.gremlin.console.Console$_closure3.doCall(Console.groovy:255) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:101) at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:323) at org.codehaus.groovy.runtime.metaclass.ClosureMetaClass.invokeMethod(ClosureMetaClass.java:263) at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1041) at org.codehaus.groovy.runtime.callsite.PogoMetaClassSite.call(PogoMetaClassSite.java:37) at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:127) at org.codehaus.groovy.tools.shell.Groovysh.setLastResult(Groovysh.groovy:463) at sun.reflect.GeneratedMethodAccessor28.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.codehaus.groovy.runtime.callsite.PlainObjectMetaMethodSite.doInvoke(PlainObjectMetaMethodSite.java:43) at org.codehaus.groovy.runtime.callsite.PogoMetaMethodSite$PogoCachedMethodSiteNoUnwrapNoCoerce.invoke(PogoMetaMethodSite.java:190) at org.codehaus.groovy.runtime.callsite.PogoMetaMethodSite.callCurrent(PogoMetaMethodSite.java:58) at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callCurrent(AbstractCallSite.java:168) at org.codehaus.groovy.tools.shell.Groovysh.execute(Groovysh.groovy:201) at org.apache.tinkerpop.gremlin.console.GremlinGroovysh.super$3$execute(GremlinGroovysh.groovy)
|
|
Re: Bindings for graphs created using ConfiguredGraphFactory not working as expected
hadoopmarc@...
Hi Anya,
In v0.6.0 the bin/janusgraph-server.sh start script does not start Cassandra any more. Are you sure you started Cassandra ("cassandra/bin/cassandra") before starting JanusGraph? Also check whether you mixed up the graph1 and graph1_config graph.graphname values. I guess you found out that, before running bin/janusgraph-server.sh, you need to do:
export JANUSGRAPH_YAML=conf/gremlin-server/gremlin-server-configuration.yaml
Best wishes, Marc
|
|
Re: Cleaning up old data in large graphs
hadoopmarc@...
Hi Mladen,
Indeed, there is still a load of open issues regarding TTL: https://github.com/JanusGraph/janusgraph/issues?q=is%3Aissue+is%3Aopen+ttl Your last remark about empty vertices sounds plausible, although it would be pretty bad if true. Searching for "new HashMap" on GitHub gives too many results to inspect, so please keep an eye out for more hints about where it might occur. I did not see any open issues reporting empty vertices after ghost vertex removal.
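For reference, a fixed retention period is usually handled with storage-level TTL declared on the schema; a minimal sketch, assuming an illustrative static vertex label "event" on a TTL-capable backend such as Cassandra (note a changed TTL only applies to data written afterwards):

import java.time.Duration;
import org.janusgraph.core.VertexLabel;
import org.janusgraph.core.schema.JanusGraphManagement;

// Sketch: vertex TTL requires a static vertex label.
JanusGraphManagement mgmt = graph.openManagement();
VertexLabel event = mgmt.makeVertexLabel("event").setStatic().make();
mgmt.setTTL(event, Duration.ofDays(365));  // cells expire one year after write
mgmt.commit();

Best wishes, Marc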
|
|
Re: Cleaning up old data in large graphs
Hi Marc, thanks for the response.
Best regards, Mladen Marović
|
|
Re: Cleaning up old data in large graphs
hadoopmarc@...
Hi Mladen,
Just two things that come up while reading your story:
Best wishes, Marc
|
|
Bindings for graphs created using ConfiguredGraphFactory not working as expected
Hello,
I have a local setup of JanusGraph 0.6.0 with Cassandra 3.11.9. I am creating a graph using the ConfiguredGraphFactory. For this, I am using the bundled properties and yaml files and creating the graph by running the following commands from the Gremlin Console (also bundled with the JanusGraph installation):
gremlin> :remote connect tinkerpop.server conf/remote.yaml session
gremlin> :remote console
gremlin> map = new HashMap<String, Object>();
gremlin> map.put('storage.backend', 'cql');
gremlin> map.put('storage.hostname', '127.0.0.1');
gremlin> map.put('graph.graphname', 'graph1');
gremlin> map.put('storage.username', 'myDBUsername');
gremlin> map.put('storage.password', 'myDBPassword');
gremlin> ConfiguredGraphFactory.createConfiguration(new MapConfiguration(map));
Once I have created the configuration, I try to access the graph and the traversal variables bound to it, but I get the following response:
gremlin> ConfiguredGraphFactory.open('graph1')
gremlin> graph1
No such property: graph1 for class: Script7
gremlin> graph1_traversal
No such property: graph1_traversal for class: Script8
For reference, the properties file I use contains:
graph.graphname=graph1_config
storage.hostname=127.0.0.1
storage.username=myDBUsername
storage.password=myDBPassword
According to the documentation, I should be able to access the bound variables. I was able to do this in version 0.3.1 of JanusGraph. What could I be missing or doing wrong? Thanks, Anya
|
|
Re: Duplicate vertex issue with Uniqueness constraints | Janusgraph CQL
Pawan Shriwas
Hi Marc,
Checking duplicate data with uniqueness constraints on the name_cons field:
gremlin> g.V().has('gId',P.within('da209078-4a2f-4db2-b489-27da028df983','ba81f5d3-a29b-4a2c-88c3-c265ce3f68a5','9804b32d-31d9-409a-a441-a38fdbf998f7')).valueMap()
==>[gId:[da209078-4a2f-4db2-b489-27da028df983],entityGId:[9e51c70d-f148-401f-8eea-53b767d9bbb6],name_cons:[CGNAT_NS2]]
==>[gId:[ba81f5d3-a29b-4a2c-88c3-c265ce3f68a5],entityGId:[7e763ebc-b2e0-4d04-baaa-4463d04ca436],name_cons:[CGNAT_NS2]]
==>[gId:[9804b32d-31d9-409a-a441-a38fdbf998f7],entityGId:[23fd7efd-3688-4b58-aab6-173d25a8dd63],name_cons:[CGNAT_NS2]]
Reading the data via the unique index property (with consistency lock) returns only one record:
gremlin> g.V().has('name_cons','CGNAT_NS2').valueMap()
==>[gId:[290cc878-19e1-44f6-9f6c-62b7471e21bc],entityGId:[0b59889d-e725-46e5-9f42-d96daaeaa21d],name_cons:[CGNAT_NS2]]
Hope this clarifies!
On Mon, Nov 22, 2021 at 12:39 PM Pawan Shriwas via lists.lfaidata.foundation <shriwas.pawan=gmail.com@...> wrote:
--
Thanks & Regard PAWAN SHRIWAS
|
|
Cleaning up old data in large graphs
Mladen Marović
Hello, I have a graph (JanusGraph 0.5.3 running on a CQL backend with an Elasticsearch index) that is updated in near real-time. About 50M new vertices and 100M new edges are added every month. A large part of these (around 90%) should be deleted after 1 year, and the customer may require this period to change at a later date. The remaining 10% of the data has no fixed expiration period, but vertices are expected to be deleted when they have no more edges. Currently, I have a daily Spark job that deletes vertices and their edges by checking their
I was wondering if anyone has suggestions or best practices on how to manage graph data with a retention period (one that could change over time)? Best regards, Mladen Marović
|
|
Re: Duplicate vertex issue with Uniqueness constraints | Janusgraph CQL
Pawan Shriwas
Hi Marc,
Yes, we are committing the transaction after each operation.
Regarding how we know about "duplicate vertex creation" when "it returns only 1 record": the vertex is being ingested with the same data and the graph generates a different id each time. When we query the graph with these different ids, the returned list contains the same name multiple times, but when we retrieve the data by the name parameter (which has a unique index with lock consistency) the graph returns only one record. Hope this helps. Thanks, Pawan
On Sun, Nov 21, 2021 at 4:01 PM <hadoopmarc@...> wrote:
Thanks & Regard PAWAN SHRIWAS
|
|
Re: Duplicate vertex issue with Uniqueness constraints | Janusgraph CQL
hadoopmarc@...
Hi Pawan,
Your code mirrors the example at https://docs.janusgraph.org/advanced-topics/eventual-consistency/#data-consistency for the most part. Are you sure the changes on graphMgmt get committed? Also, how do you know about "duplicate vertex creation" when "it returns only 1 record"?
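For comparison, the documented pattern is a unique composite index whose consistency is set to LOCK; a sketch, with an illustrative index name:

import org.apache.tinkerpop.gremlin.structure.Vertex;
import org.janusgraph.core.PropertyKey;
import org.janusgraph.core.schema.ConsistencyModifier;
import org.janusgraph.core.schema.JanusGraphIndex;
import org.janusgraph.core.schema.JanusGraphManagement;

// Sketch: without ConsistencyModifier.LOCK, concurrent transactions can
// both insert the same name_cons value despite the unique index.
JanusGraphManagement mgmt = graph.openManagement();
PropertyKey name = mgmt.makePropertyKey("name_cons").dataType(String.class).make();
JanusGraphIndex index = mgmt.buildIndex("byNameConsUnique", Vertex.class)
        .addKey(name).unique().buildCompositeIndex();
mgmt.setConsistency(index, ConsistencyModifier.LOCK);
mgmt.commit();

Best wishes, Marc
PS. Most of the software community reserves names starting with a verb for functions and class methods. Violating this convention (e.g. PropertyKey makePropertyKey) makes your code almost unreadable to others.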
|
|
Re: jvm.options broken
hadoopmarc@...
Hi Matthias,
Thanks for taking the trouble to report this. It took a while, but your report did not go unnoticed: https://github.com/JanusGraph/janusgraph/issues/2857 Best wishes, Marc
|
|
Duplicate vertex issue with Uniqueness constraints | Janusgraph CQL
Pawan Shriwas
Hi everyone, I am facing a duplicate vertex creation issue even though a unique index is present on the property, and when I retrieve the data via that index it returns only 1 record. Please see the information below.
Storage backend - Cassandra CQL
JanusGraph version - 0.5.2
Index - Composite
Uniqueness - True
Consistency - yes
Index Status - ENABLED
Below is the code snippet - Index Status : Thanks, Pawan
|
|
Re: Diagnosing slow write speeds to BigTable
AC
I have a follow-up question in addition to my reply above: is there any guide to understanding the JanusGraph metrics that are available? I have written a basic metrics integration, but I'm finding it quite hard to interpret the metrics being produced.
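In case it helps to cross-check my integration, JanusGraph's built-in reporters can dump the raw Dropwizard metrics to files; a sketch of that config (values illustrative, not what we actually run):

# Sketch -- illustrative values.
metrics.enabled=true
metrics.csv.interval=60000
metrics.csv.directory=/tmp/janusgraph-metrics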
On Tue, Nov 16, 2021 at 12:35 PM AC via lists.lfaidata.foundation <acrane=twitter.com@...> wrote:
|
|
Re: Diagnosing slow write speeds to BigTable
AC
Hey again Boxuan, thanks for your help in this thread! 2) That is a good idea, I will try making some writes to BigTable outside of JanusGraph in this container. However, considering that the BigTable client stats and BigTable server stats both report low latencies from within the JanusGraph application, this is looking like a JanusGraph-related issue. I will report back with results today.
On Tue, Nov 16, 2021 at 11:48 AM Boxuan Li <liboxuan@...> wrote:
|
|
Re: Diagnosing slow write speeds to BigTable
Boxuan Li
I am not an expert on this and I've never used BigTable or GCP before, but here are my two cents:
1) Did you test the read speed? Is it also very slow compared to writing?
2) Did you try using an HBase/Bigtable client (in the same GCP container as your JanusGraph instance) to write to your BigTable cluster? If it's also very slow then the problem might be with your network or other setup.
Best, Boxuan
|
|
Diagnosing slow write speeds to BigTable
AC
Hey there, folks. Firstly I want to say thanks for your help with the previous bug we uncovered. I'm evaluating JanusGraph performance on BigTable and observing very slow write speeds, even when writing a single vertex and committing a transaction. Starting a new transaction, writing a single vertex, and committing the transaction takes at minimum 5-6 seconds. BigTable metrics indicate that the backend never takes more than 100ms (max) to perform a write. It's hard to imagine that any amount of overhead on the BigTable side would bring this up to 5-6 seconds. The basic BigTable stats inside our application also look reasonable. Here is the current configuration:
"storage.backend": "hbase"
"metrics.enabled": true
"cache.db-cache": false
"query.batch": true
"storage.page-size": 1000
"storage.hbase.ext.hbase.client.connection.impl": "com.google.cloud.bigtable.hbase2_x.BigtableConnection"
"storage.hbase.ext.google.bigtable.grpc.retry.deadlineexceeded.enable": true
"storage.hbase.ext.google.bigtable.grpc.channel.count": 50
"storage.lock.retries": 5
"storage.lock.wait-time": 50.millis
This is running in a GCP container that is rather beefy, is not doing anything else, and is located in the same region as the BigTable cluster. Other traffic to/from the container seems fine. I'm currently using hbase-shaded-client rev 2.1.5, since that's aligned with JanusGraph 0.5.3, which we are currently using. I experimented with up to 2.4.8 and saw no difference. I'm also using bigtable-hbase-2.x-shaded 1.25.1, the latest stable revision. I'm at a loss how to progress further with my diagnosis, as all evidence indicates that the latency originates in JanusGraph's operation. How can I better find and eliminate the source of this latency?
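For concreteness, the measurement can be broken down per phase; a sketch (assumes an open JanusGraph instance named graph):

import org.janusgraph.core.JanusGraphTransaction;

// Sketch: time transaction start, the single-vertex write, and the commit
// separately to see which phase accounts for the 5-6 seconds.
long t0 = System.nanoTime();
JanusGraphTransaction tx = graph.newTransaction();
long t1 = System.nanoTime();
tx.addVertex("testVertex");  // illustrative label
long t2 = System.nanoTime();
tx.commit();
long t3 = System.nanoTime();
System.out.printf("begin=%dms write=%dms commit=%dms%n",
        (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000, (t3 - t2) / 1_000_000);

Thanks!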
|
|
Re: How to change GLOBAL_OFFLINE configuration when graph can't be instantiated
toom@...
Hi Marc,
Your solution works if the configuration hasn't been changed yet. If you change the index backend and set a wrong hostname, you cannot access your data anymore:
mgmt = graph.openManagement()
mgmt.set("index.search.backend", "elasticsearch")
mgmt.set("index.search.hostname", "non-existant.hostname")
mgmt.commit()
After that, the database cannot be opened. Regards, Toom.
|
|