
Re: high-scale-lib dependency

sergeymetallic@...
 

Hm, in JanusGraph version 0.6.0 a different library is used, https://github.com/datastax/java-driver , so is there any point in keeping the dependency on Apache Cassandra?


Re: high-scale-lib dependency

Clement de Groc
 

Hey! Just wanted to report that we had a similar issue with high-scale-lib.
Replacing high-scale-lib with JCTools sounds like a good option, but I'm not sure it will work for all modules: if I'm not mistaken, Cassandra relies on `high-scale-lib` too.
Another solution could be to exclude all classes under `java/util` from JanusGraph uber-jars.
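For illustration, that second option might look like this in a Maven Shade plugin configuration (a sketch, not the actual JanusGraph build; the filter simply drops everything under `java/util` from the shaded jar, regardless of which dependency contributed it):

```xml
<!-- Illustration only: exclude java.util.* classes from the uber-jar. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <configuration>
    <filters>
      <filter>
        <artifact>*:*</artifact>
        <excludes>
          <exclude>java/util/**</exclude>
        </excludes>
      </filter>
    </filters>
  </configuration>
</plugin>
```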


high-scale-lib dependency

sergeymetallic@...
 
Edited

There is a Java library dependency in JanusGraph, "high-scale-lib", which is old and unsupported (the last update was 8 years ago, see https://github.com/boundary/high-scale-lib). When JanusGraph is included as a library in another project, it causes IDE issues when loaded into Eclipse or VS Code with Java 11, because it contains packages like "java.*".

As a solution, I would suggest migrating to its actively developed successor project, https://github.com/JCTools/JCTools , which does not have such issues.


Re: High HBase backend 'configuration' row contention

hadoopmarc@...
 

Hi Tendai,

Just one thing came to my mind: did you apply the JanusGraphFactory inside a singleton object so that all tasks from all cores in a spark executor use the same JanusGraph instance? If not, this is an easy change to lower overhead due to connection setups.
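Marc's suggestion could be sketched like this (a hypothetical illustration; `GraphHolder` and its `Supplier`-based factory are not JanusGraph classes; in practice the factory would wrap the actual `JanusGraphFactory.open(...)` call):

```java
import java.util.function.Supplier;

// Hypothetical sketch (not JanusGraph API): a lazily-initialized, per-JVM
// singleton so that every Spark task in an executor reuses one JanusGraph
// instance instead of calling JanusGraphFactory.open() per task.
final class GraphHolder<T> {
    private final Supplier<T> factory; // e.g. () -> JanusGraphFactory.open(props)
    private volatile T instance;

    GraphHolder(Supplier<T> factory) {
        this.factory = factory;
    }

    T get() {
        T result = instance;
        if (result == null) {
            synchronized (this) {          // double-checked locking
                result = instance;
                if (result == null) {
                    instance = result = factory.get(); // opened once per JVM
                }
            }
        }
        return result;
    }
}
```

The holder would live in a static field of the job class, so all tasks scheduled on the same executor JVM share one graph instance and connection pool.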

Best wishes,   Marc


Re: High HBase backend 'configuration' row contention

Tendai Munetsi
 

Hi Marc,

Thanks for responding. The configuration row in question, which JanusGraph creates when the HBase table is first initialized, shows slow read performance due to simultaneous access by the Spark executors (400+). Each executor creates an embedded JanusGraph instance, and we found that the instance reads the config row every time JanusGraphFactory's open() method is called (numerous times per executor). This leads to the executors accessing the row at the same time, which causes it to respond slowly. The rest of the graph data rows do NOT show this latency while reading/writing the graph. I hope that clarifies the issue.

Regards,
Tendai


Re: Bindings for graphs created using ConfiguredGraphFactory not working as expected

anya.sharma@...
 

Hello Marc,

Thank you for your reply. In response to your suggestions and the questions you posed:
  1. Are you sure you did start Cassandra ("cassandra/bin/cassandra") before starting JanusGraph? - Yes, the Cassandra server is running. I am not using the Cassandra bundled with JanusGraph, but my own separate installation, which was running at the time I faced this issue.
  2. Also check whether you did not mix up the graph1 and graph1_config graph.graphname values. - I double-checked, and the graph name values I am using are correct.
  3. I guess you found out to do (before running bin/janusgraph-server.sh): export JANUSGRAPH_YAML=conf/gremlin-server/gremlin-server-configuration.yaml - I am running the server with the command gremlin-server.bat conf/gremlin-server/gremlin-server-configuration.yaml instead of storing the yaml file in an environment variable.

Apart from the above, I also ran the same commands as mentioned in my question on a 0.3.1 JanusGraph server and it worked:

Creating the graph: [screenshot omitted]

Accessing the graph through the implicit variables: [screenshot omitted]

The same steps give the issue posed in the question when run using JanusGraph 0.6.0: [screenshot omitted]
Thanks
Anya


JanusGraph and Apache Log4J CVE-2021-44228 (aka "log4shell")

Florian Hockmann
 

A critical security vulnerability was reported in Apache Log4j 2 on December 9, 2021: CVE-2021-44228 [1]. JanusGraph itself is not directly affected by this vulnerability as it still uses log4j 1.2 and the CVE only affects log4j2.

 

However, the two index backends Elasticsearch and Apache Solr are affected by the CVE. Both projects have already published reports on how they are affected and how the vulnerability can be mitigated:

Apache HBase, Apache TinkerPop, Apache Spark, and Apache Hadoop are likewise still on Log4j 1 and are therefore also not affected.

 

We recommend that all JanusGraph users who use JanusGraph together with Elasticsearch or Apache Solr follow the recommendations in the reports linked above. This mostly comes down to updating the backends once a release with a fix is available and, until then, setting the JVM option -Dlog4j2.formatMsgNoLookups=true as a mitigation.
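As an illustration for Elasticsearch (assuming a stock tarball installation; adapt the mechanism to your deployment), the flag can be passed via the JVM options that Elasticsearch's startup scripts read:

```shell
# Mitigation only; upgrade Elasticsearch once a fixed release is available.
# ES_JAVA_OPTS is picked up by Elasticsearch's startup scripts.
export ES_JAVA_OPTS="$ES_JAVA_OPTS -Dlog4j2.formatMsgNoLookups=true"
```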

 

We also plan to shortly release patch versions for 0.5 and 0.6 that adopt these recommendations by default for the Elasticsearch version distributed as part of the pre-packaged ("full") distribution of JanusGraph, for users who start JanusGraph via the `bin/janusgraph.sh` script, which also starts Elasticsearch.

 

Given that Log4j 1 has already reached end of life (EOL) and is also affected by other CVEs, we plan to move away from that version in a future release of JanusGraph.

 

Best regards,

The JanusGraph Technical Steering Committee

 


Re: Drop graph not working

hadoopmarc@...
 

Thanks for your report. Can you please provide the following details:
  1. Which version of JanusGraph do you use?
  2. Are you sure you did not set the storage.drop-on-clear property to false?
  3. Do you have permissions to drop the table manually with hbase shell?
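For point 3, a manual check could look like the following (run inside `hbase shell`; the table name is an assumption, as JanusGraph's HBase table name is configurable via `storage.hbase.table` and defaults to `janusgraph`):

```shell
# Inside `hbase shell`: HBase requires disabling a table before dropping it.
disable 'janusgraph'
drop 'janusgraph'
```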


Re: High HBase backend 'configuration' row contention

hadoopmarc@...
 

Hi Tendai,

I do not understand the concept of row contention. Isn't this config row just the row that is retrieved most often on the region servers that contain it, and aren't other rows on those servers served equally slowly?

HBase tends to compact tables into a limited number of large regions (typically 20 GB). So, if you have an HDFS replication factor of 3 and your graph has a size of just two regions, at best 6 region servers of your HBase cluster can serve your 500 Spark executors.

So, maybe this gives you some hint on what is happening. Or maybe you have more details on how you came to the conclusion that there is such a thing as row contention?

Best wishes,    Marc


High HBase backend 'configuration' row contention

Tendai Munetsi
 

Hi,

 

We are running embedded Janusgraph (0.5.3) with an HBase backend (2.1.6) in our Spark jobs. Each Spark executor creates an instance of Janusgraph. At times there can be over 500 executors running simultaneously. Under those conditions, we observe heavy row contention for the ‘configuration’ row that Janusgraph creates as part of the initialization of the HBase table. Is there any recommendation on how to prevent/reduce this HBase row contention? As the row is only created during HBase initialization and is never updated subsequently, can the data held by the configuration row be moved out of HBase and into a static file?

 

Thanks,

Tendai





Re: Problem adding edges with the same label

Miroslav Smiljanic
 

The same problem does not exist when using InMemory Storage Backend.


On Tue, Dec 7, 2021 at 1:51 PM Miroslav Smiljanic <miroslav@...> wrote:
Hi All,

I have the setup using Cassandra backend (Azure Cosmos DB fully managed).

Seems it is not possible to add two edges with the same label.

gremlin> g.addV()
==>v[4288]
gremlin> g.addV()
==>v[4200]
gremlin> g.addV()
==>v[4344]
gremlin> g.addE('child').from(__.V(4288)).to(__.V(4200))
==>e[t4-3b4-t1-38o][4288-child->4200]
gremlin> g.addE('child').from(__.V(4200)).to(__.V(4344))
java.lang.NullPointerException
Type ':help' or ':h' for help.
Display stack trace? [yN]


This is the error in server log

388895 [gremlin-server-exec-7] WARN  org.apache.tinkerpop.gremlin.server.op.AbstractEvalOpProcessor  - Exception processing a script on request [RequestMessage{, requestId=a04edabb-920e-4526-ab3a-1d1d22a8ed82, op='eval', processor='', args={gremlin=g.addE('child').from(__.V(4200)).to(__.V(4344)), userAgent=Gremlin Console/1.21.0, batchSize=64}}].
java.lang.NullPointerException
at org.janusgraph.graphdb.transaction.StandardJanusGraphTx.getUniquenessLock(StandardJanusGraphTx.java:720)
at org.janusgraph.graphdb.transaction.StandardJanusGraphTx.addEdge(StandardJanusGraphTx.java:784)
at org.janusgraph.graphdb.transaction.StandardJanusGraphTx.addEdge(StandardJanusGraphTx.java:774)
at org.janusgraph.graphdb.vertices.AbstractVertex.addEdge(AbstractVertex.java:188)
at org.janusgraph.graphdb.vertices.AbstractVertex.addEdge(AbstractVertex.java:45)
at org.apache.tinkerpop.gremlin.process.traversal.step.map.AddEdgeStartStep.processNextStart(AddEdgeStartStep.java:137)
at org.apache.tinkerpop.gremlin.process.traversal.step.util.AbstractStep.hasNext(AbstractStep.java:150)
at org.apache.tinkerpop.gremlin.process.traversal.util.DefaultTraversal.hasNext(DefaultTraversal.java:222)
at org.apache.tinkerpop.gremlin.server.op.AbstractOpProcessor.handleIterator(AbstractOpProcessor.java:97)
at org.apache.tinkerpop.gremlin.server.op.AbstractEvalOpProcessor.lambda$evalOpInternal$5(AbstractEvalOpProcessor.java:263)
at org.apache.tinkerpop.gremlin.groovy.engine.GremlinExecutor.lambda$eval$0(GremlinExecutor.java:283)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)

Does anyone know what could be the problem?

Regards,
Miroslav 


Problem adding edges with the same label

Miroslav Smiljanic
 

Hi All,

I have the setup using Cassandra backend (Azure Cosmos DB fully managed).

Seems it is not possible to add two edges with the same label.

gremlin> g.addV()
==>v[4288]
gremlin> g.addV()
==>v[4200]
gremlin> g.addV()
==>v[4344]
gremlin> g.addE('child').from(__.V(4288)).to(__.V(4200))
==>e[t4-3b4-t1-38o][4288-child->4200]
gremlin> g.addE('child').from(__.V(4200)).to(__.V(4344))
java.lang.NullPointerException
Type ':help' or ':h' for help.
Display stack trace? [yN]


This is the error in server log

388895 [gremlin-server-exec-7] WARN  org.apache.tinkerpop.gremlin.server.op.AbstractEvalOpProcessor  - Exception processing a script on request [RequestMessage{, requestId=a04edabb-920e-4526-ab3a-1d1d22a8ed82, op='eval', processor='', args={gremlin=g.addE('child').from(__.V(4200)).to(__.V(4344)), userAgent=Gremlin Console/1.21.0, batchSize=64}}].
java.lang.NullPointerException
at org.janusgraph.graphdb.transaction.StandardJanusGraphTx.getUniquenessLock(StandardJanusGraphTx.java:720)
at org.janusgraph.graphdb.transaction.StandardJanusGraphTx.addEdge(StandardJanusGraphTx.java:784)
at org.janusgraph.graphdb.transaction.StandardJanusGraphTx.addEdge(StandardJanusGraphTx.java:774)
at org.janusgraph.graphdb.vertices.AbstractVertex.addEdge(AbstractVertex.java:188)
at org.janusgraph.graphdb.vertices.AbstractVertex.addEdge(AbstractVertex.java:45)
at org.apache.tinkerpop.gremlin.process.traversal.step.map.AddEdgeStartStep.processNextStart(AddEdgeStartStep.java:137)
at org.apache.tinkerpop.gremlin.process.traversal.step.util.AbstractStep.hasNext(AbstractStep.java:150)
at org.apache.tinkerpop.gremlin.process.traversal.util.DefaultTraversal.hasNext(DefaultTraversal.java:222)
at org.apache.tinkerpop.gremlin.server.op.AbstractOpProcessor.handleIterator(AbstractOpProcessor.java:97)
at org.apache.tinkerpop.gremlin.server.op.AbstractEvalOpProcessor.lambda$evalOpInternal$5(AbstractEvalOpProcessor.java:263)
at org.apache.tinkerpop.gremlin.groovy.engine.GremlinExecutor.lambda$eval$0(GremlinExecutor.java:283)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)

Does anyone know what could be the problem?

Regards,
Miroslav 


Drop graph not working

51kumarakhil@...
 

Hi all. 
I'm creating dynamic graphs using ConfiguredGraphFactory with a BigTable backend. I'm able to create them, but when I try to delete them, it seems they only get deleted from the JanusGraph server and not from BigTable.
When I try to access a dropped graph, it says the graph doesn't exist, but I can still see it in BigTable. When I try to delete the graph manually from BigTable, the server throws an error like "graph not found".
I used the command below to delete the graph:
> ConfiguredGraphFactory.drop('graph_name')

:_(


Re: Commit progress listener

marcorizzi82@...
 

Thanks a lot Marc for the answer and the link to the gremlin-users group.

I was wondering if there's something implementation-specific in JanusGraph that could help provide such progress information during a commit, even if it falls outside the Gremlin/TinkerPop specifications.

Thanks again,
Marco


Re: NullPointerException comparing PredicateCondition (in equals method)

hadoopmarc@...
 

Hi Albert,

I tried with the GraphOfTheGods with janusgraph-0.5.3:
gremlin> g.V().has(label, without('god')).has(label, without('location')).values('name')
14:17:41 WARN  org.janusgraph.graphdb.transaction.StandardJanusGraphTx  - Query requires iterating over all vertices [(~label <> null AND ~label <> god AND ~label <> location)]. For better performance, use indexes
==>nemean
==>hercules
==>saturn
==>cerberus
==>alcmene
==>hydra

gremlin> g.V().has(label, eq('god')).has(label, eq(null))
14:22:31 WARN  org.janusgraph.graphdb.transaction.StandardJanusGraphTx  - Query requires iterating over all vertices [(~label = god AND ~label = null)]. For better performance, use indexes
gremlin>
So, I guess you did not mean to say that these traversals should fail.

If it has anything to do with the way you use the PredicateCondition class, can you give a code example where you just instantiate two PredicateConditions and have an equals() call fail?

Best wishes,    Marc


Re: JanusGraph-0.6.0: Unable to open connection JanusGraphFactory with CL=ONE when quorum lost

Umesh Gade
 

Hi Marc,
Thanks for the reply. Yes, we are also doing thorough testing to check for any compatibility issues, and so far we haven't found any except this one. The strange thing is that the issue posted here is NOT present with JanusGraph 0.5.3 + Cassandra 4.0.
This issue has something to do with "storage.cql.only-use-local-consistency-for-system-operations". We could solve it by setting that option to TRUE for a 2-node cluster, but the problem remains with a 3+ node cluster.
Is there any new touchpoint for this flag during the JanusGraphFactory.open(...) call that was added in janusgraph-0.6.0?
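For reference, the workaround mentioned above is set like any other graph property (shown here as a properties-file line; whether it is sufficient for 3+ node clusters is exactly the open question):

```properties
# Use local (non-quorum) consistency for JanusGraph's internal system
# operations, so JanusGraphFactory.open() can read the stored configuration
# even when quorum is lost.
storage.cql.only-use-local-consistency-for-system-operations=true
```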

On Sat, 4 Dec 2021, 17:14 , <hadoopmarc@...> wrote:
Hi Umesh,

I assume that checking the compatibility matrix at https://docs.janusgraph.org/changelog/ made you post the additional comment about cassandra 4.0.0 :-)

Indeed, support of cassandra 4.0 is still an open issue.

Best wishes,    Marc


Re: JanusGraph-0.6.0: Unable to open connection JanusGraphFactory with CL=ONE when quorum lost

hadoopmarc@...
 

Hi Umesh,

I assume that checking the compatibility matrix at https://docs.janusgraph.org/changelog/ made you post the additional comment about cassandra 4.0.0 :-)

Indeed, support of cassandra 4.0 is still an open issue.

Best wishes,    Marc


Re: Commit progress listener

hadoopmarc@...
 

Hi Marco,

Interesting idea! From what I read about the EventStrategy, this is a feature stemming from TinkerPop and questions about it could best be addressed there.

Note however that gremlin OLTP traversals are executed mainly depth first, so any new events introduced would be rather be at the level of branching in the graph than at the level of traversal steps. Possibly, execution based events would only be useful after barrier steps, which break the depth-first pattern and therefore, passing such a barrier would give a sense of progress in the execution of a traversal.

Best wishes,    Marc


Commit progress listener

marcorizzi82@...
 

Hi all,
I've successfully using in my application an EventStrategy's listener with both the DefaultEventQueue and the TransactionalEventQueue: they gave a way for the listener to be invoked while the events occur or after they have been committed, respectively before and after the "g.tx().commit()" statement execution.

My question is: is there a way to have events about the progressing of the commit method execution itself?
I'm trying to provide a feedback to my application's users about the graph events and the commit method invocation is kind of black box I can just call and wait for the execution to end: can more information be retrieved about the execution itself?

Thanks in advance,
Marco


Re: JanusGraph-0.6.0: Unable to open connection JanusGraphFactory with CL=ONE when quorum lost

Umesh Gade
 

One update:
Cassandra version is 4.0.0

On Thu, Dec 2, 2021 at 2:02 PM Umesh Gade via lists.lfaidata.foundation <er.umeshgade=gmail.com@...> wrote:
Hi,
      We just upgraded JanusGraph to 0.6.0 and started observing an issue with something that worked earlier.
The scenario: we open a connection with read/write CL="ONE" using JanusGraphFactory. But when quorum is lost, this connection fails to open. Curious to know what changed around this and what needs to be done to fix it?

Graph config passed:
storage.backend=cql
storage.port=9042
storage.cql.keyspace=test_ks
storage.cql.local-datacenter=dc1
storage.cql.read-consistency-level=ONE
storage.cql.write-consistency-level=ONE
storage.cql.executor-service.enabled=false
storage.cql.atomic-batch-mutate=false
graph.set-vertex-id=true
query.force-index=false
query.optimizer-backend-access=false

Below is exception which we got:
Opening connection to graph with test_ks@localhost:9042
org.janusgraph.core.JanusGraphException: Could not execute operation due to backend exception
        at org.janusgraph.diskstorage.util.BackendOperation.execute(BackendOperation.java:54)
        at org.janusgraph.diskstorage.util.BackendOperation.execute(BackendOperation.java:117)
        at org.janusgraph.diskstorage.configuration.backend.KCVSConfiguration.get(KCVSConfiguration.java:96)
        at org.janusgraph.diskstorage.configuration.BasicConfiguration.isFrozen(BasicConfiguration.java:105)
        at org.janusgraph.diskstorage.configuration.builder.ReadConfigurationBuilder.buildGlobalConfiguration(ReadConfigurationBuilder.java:81)
        at org.janusgraph.graphdb.configuration.builder.GraphDatabaseConfigurationBuilder.build(GraphDatabaseConfigurationBuilder.java:67)
        at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:176)
        at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:147)
        at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:127)
        at ***.TestCli.openConnection(TestCli.java:140)        
Caused by: org.janusgraph.diskstorage.TemporaryBackendException: Could not successfully complete backend operation due to repeated temporary exceptions after PT1M
        at org.janusgraph.diskstorage.util.BackendOperation.executeDirect(BackendOperation.java:98)
        at org.janusgraph.diskstorage.util.BackendOperation.execute(BackendOperation.java:52)
        ... 11 more
Caused by: org.janusgraph.diskstorage.TemporaryBackendException: Temporary failure in storage backend
        at io.vavr.API$Match$Case0.apply(API.java:5135)
        at io.vavr.API$Match.of(API.java:5092)
        at org.janusgraph.diskstorage.cql.CQLKeyColumnValueStore.lambda$static$0(CQLKeyColumnValueStore.java:120)
        at org.janusgraph.diskstorage.cql.function.slice.CQLSimpleSliceFunction.interruptibleWait(CQLSimpleSliceFunction.java:50)
        at org.janusgraph.diskstorage.cql.function.slice.CQLSimpleSliceFunction.getSlice(CQLSimpleSliceFunction.java:39)
        at org.janusgraph.diskstorage.cql.function.slice.AbstractCQLSliceFunction.getSlice(AbstractCQLSliceFunction.java:48)
        at org.janusgraph.diskstorage.cql.CQLKeyColumnValueStore.getSlice(CQLKeyColumnValueStore.java:358)
        at org.janusgraph.diskstorage.configuration.backend.KCVSConfiguration$1.call(KCVSConfiguration.java:99)
        at org.janusgraph.diskstorage.configuration.backend.KCVSConfiguration$1.call(KCVSConfiguration.java:96)
        at org.janusgraph.diskstorage.util.BackendOperation.execute(BackendOperation.java:106)
        at org.janusgraph.diskstorage.util.BackendOperation$1.call(BackendOperation.java:120)
        at org.janusgraph.diskstorage.util.BackendOperation.executeDirect(BackendOperation.java:66)
        ... 12 more
Caused by: java.util.concurrent.ExecutionException: com.datastax.oss.driver.api.core.AllNodesFailedException: All 1 node(s) tried for the query failed (showing first 1 nodes, use getAllErrors() for more): Node(endPoint=localhost/127.0.0.1:9042, hostId=779642c7-23bb-46d4-88fa-6ae08f2f9e24, hashCode=61feb06d): [com.datastax.oss.driver.api.core.servererrors.UnavailableException: Not enough replicas available for query at consistency QUORUM (2 required but only 1 alive)]
        at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
        at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
        at org.janusgraph.diskstorage.cql.function.slice.CQLSimpleSliceFunction.interruptibleWait(CQLSimpleSliceFunction.java:45)
        ... 20 more
Caused by: com.datastax.oss.driver.api.core.AllNodesFailedException: All 1 node(s) tried for the query failed (showing first 1 nodes, use getAllErrors() for more): Node(endPoint=localhost/127.0.0.1:9042, hostId=779642c7-23bb-46d4-88fa-6ae08f2f9e24, hashCode=61feb06d): [com.datastax.oss.driver.api.core.servererrors.UnavailableException: Not enough replicas available for query at consistency QUORUM (2 required but only 1 alive)]
        at com.datastax.oss.driver.api.core.AllNodesFailedException.fromErrors(AllNodesFailedException.java:55)
        at com.datastax.oss.driver.internal.core.cql.CqlRequestHandler.sendRequest(CqlRequestHandler.java:261)
        at com.datastax.oss.driver.internal.core.cql.CqlRequestHandler.access$1000(CqlRequestHandler.java:94)
        at com.datastax.oss.driver.internal.core.cql.CqlRequestHandler$NodeResponseCallback.processRetryVerdict(CqlRequestHandler.java:849)
        at com.datastax.oss.driver.internal.core.cql.CqlRequestHandler$NodeResponseCallback.processErrorResponse(CqlRequestHandler.java:828)
        at com.datastax.oss.driver.internal.core.cql.CqlRequestHandler$NodeResponseCallback.onResponse(CqlRequestHandler.java:655)
        at com.datastax.oss.driver.internal.core.channel.InFlightHandler.channelRead(InFlightHandler.java:257)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
        at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
        at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
        at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:324)
        at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:296)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
        at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
        at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
        at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:719)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:655)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:581)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
        at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
        at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
        at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
        at java.lang.Thread.run(Thread.java:748)
        Suppressed: com.datastax.oss.driver.api.core.servererrors.UnavailableException: Not enough replicas available for query at consistency QUORUM (2 required but only 1 alive)
--
Sincerely,
Umesh Gade



--
Sincerely,
Umesh Gade
