
Re: [BLOG] Configuring JanusGraph for spark-yarn

HadoopMarc <bi...@...>
 

Hi ... and others, I have been offline for a few weeks enjoying a holiday and will start looking into your questions and make the suggested corrections. Thanks for following the recipes and helping others with them.

..., did you run the recipe on the same HDP sandbox and same Tinkerpop version? I remember (from 4 weeks ago) that copying the zookeeper.znode.parent property from the hbase configs to the janusgraph configs was essential to get janusgraph's HBaseInputFormat working (that is: read graph data for the spark tasks).
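For reference, the extra line in the janusgraph properties looks something like this (/hbase-unsecure is just the usual HDP sandbox default; copy whatever value your hbase-site.xml actually has):

janusgraphmr.ioformat.conf.storage.hbase.ext.zookeeper.znode.parent=/hbase-unsecure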

Cheers,    Marc

On Monday, July 24, 2017 at 10:12:13 AM UTC+2, spi...@... wrote:

Hi, thanks for your post. I followed it, but I ran into a problem:
15:58:49,110  INFO SecurityManager:58 - Changing view acls to: rc
15:58:49,110  INFO SecurityManager:58 - Changing modify acls to: rc
15:58:49,110  INFO SecurityManager:58 - SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(rc); users with modify permissions: Set(rc)
15:58:49,111  INFO Client:58 - Submitting application 25 to ResourceManager
15:58:49,320  INFO YarnClientImpl:274 - Submitted application application_1500608983535_0025
15:58:49,321  INFO SchedulerExtensionServices:58 - Starting Yarn extension services with app application_1500608983535_0025 and attemptId None
15:58:50,325  INFO Client:58 - Application report for application_1500608983535_0025 (state: ACCEPTED)
15:58:50,326  INFO Client:58 -
client token: N/A
diagnostics: N/A
ApplicationMaster host: N/A
ApplicationMaster RPC port: -1
queue: default
start time: 1500883129115
final status: UNDEFINED
user: rc
15:58:51,330  INFO Client:58 - Application report for application_1500608983535_0025 (state: ACCEPTED)
15:58:52,333  INFO Client:58 - Application report for application_1500608983535_0025 (state: ACCEPTED)
15:58:53,335  INFO Client:58 - Application report for application_1500608983535_0025 (state: ACCEPTED)
15:58:54,337  INFO Client:58 - Application report for application_1500608983535_0025 (state: ACCEPTED)
15:58:55,340  INFO Client:58 - Application report for application_1500608983535_0025 (state: ACCEPTED)
15:58:56,343  INFO Client:58 - Application report for application_1500608983535_0025 (state: ACCEPTED)
15:58:56,802  INFO YarnSchedulerBackend$YarnSchedulerEndpoint:58 - ApplicationMaster registered as NettyRpcEndpointRef(null)
15:58:56,822  INFO YarnClientSchedulerBackend:58 - Add WebUI Filter. org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter, Map(PROXY_HOSTS -> dl-rc-optd-ambari-master-v-test-1.host.dataengine.com,dl-rc-optd-ambari-master-v-test-2.host.dataengine.com, PROXY_URI_BASES -> http://dl-rc-optd-ambari-master-v-test-1.host.dataengine.com:8088/proxy/application_1500608983535_0025,http://dl-rc-optd-ambari-master-v-test-2.host.dataengine.com:8088/proxy/application_1500608983535_0025), /proxy/application_1500608983535_0025
15:58:56,824  INFO JettyUtils:58 - Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
15:58:57,346  INFO Client:58 - Application report for application_1500608983535_0025 (state: RUNNING)
15:58:57,347  INFO Client:58 -
client token: N/A
diagnostics: N/A
ApplicationMaster host: 10.200.48.154
ApplicationMaster RPC port: 0
queue: default
start time: 1500883129115
final status: UNDEFINED
user: rc
15:58:57,348  INFO YarnClientSchedulerBackend:58 - Application application_1500608983535_0025 has started running.
15:58:57,358  INFO Utils:58 - Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 47514.
15:58:57,358  INFO NettyBlockTransferService:58 - Server created on 47514
15:58:57,360  INFO BlockManagerMaster:58 - Trying to register BlockManager
15:58:57,363  INFO BlockManagerMasterEndpoint:58 - Registering block manager 10.200.48.112:47514 with 2.4 GB RAM, BlockManagerId(driver, 10.200.48.112, 47514)
15:58:57,366  INFO BlockManagerMaster:58 - Registered BlockManager
15:58:57,585  INFO EventLoggingListener:58 - Logging events to hdfs:///spark-history/application_1500608983535_0025
15:59:07,177  WARN YarnSchedulerBackend$YarnSchedulerEndpoint:70 - Container marked as failed: container_e170_1500608983535_0025_01_000002 on host: dl-rc-optd-ambari-slave-v-test-1.host.dataengine.com. Exit status: 1. Diagnostics: Exception from container-launch.
Container id: container_e170_1500608983535_0025_01_000002
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:576)
at org.apache.hadoop.util.Shell.run(Shell.java:487)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:753)
at org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.launchContainer(LinuxContainerExecutor.java:371)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:303)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

Shell output: main : command provided 1
main : run as user is rc
main : requested yarn user is rc


Container exited with a non-zero exit code 1
Display stack trace? [yN]
15:59:57,702  WARN TransportChannelHandler:79 - Exception in connection from 10.200.48.155/10.200.48.155:50921
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:192)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
at io.netty.buffer.PooledUnsafeDirectByteBuf.setBytes(PooledUnsafeDirectByteBuf.java:313)
at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:881)
at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:242)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:119)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
at java.lang.Thread.run(Thread.java:748)
15:59:57,704 ERROR TransportResponseHandler:132 - Still have 1 requests outstanding when connection from 10.200.48.155/10.200.48.155:50921 is closed
15:59:57,706  WARN NettyRpcEndpointRef:91 - Error sending message [message = RequestExecutors(0,0,Map())] in 1 attempts
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:192)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
at io.netty.buffer.PooledUnsafeDirectByteBuf.setBytes(PooledUnsafeDirectByteBuf.java:313)
at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:881)
at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:242)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:119)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
at java.lang.Thread.run(Thread.java:748)

I am confused about that. Could you please help me?



On Thursday, July 6, 2017 at 4:15:37 PM UTC+8, HadoopMarc wrote:

Readers wanting to run OLAP queries on a real spark-yarn cluster might want to check my recent post:

http://yaaics.blogspot.nl/2017/07/configuring-janusgraph-for-spark-yarn.html

Regards,  Marc


Re: Best practice setup for Go driver development & identifying the websocket serialization format

Ray Scott <raya...@...>
 

Do you have a reference for that setting of the response format? The driver documentation doesn't mention it, only that you can specify the format of the request. There is an example response in JSON in the TinkerPop docs, but it's nothing like what I receive as a response. JanusGraph seems to have removed the driver development documentation from their release.

http://tinkerpop.apache.org/docs/3.2.5/dev/provider/#_graph_driver_provider_requirements


Re: Best practice setup for Go driver development & identifying the websocket serialization format

loh...@...
 

The server serializes the response in whichever format has been requested by the client. I'd imagine what you're seeing is the unmarshalled version of the returned JSON that your Go websockets library created.
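For what it's worth, the envelope is TinkerPop's RequestMessage/ResponseMessage structure from the provider docs; over a websocket the request is normally sent as a binary frame prefixed with the mime type. A rough sketch of an application/json exchange (field values are illustrative):

request:  {"requestId": "<uuid>", "op": "eval", "processor": "",
           "args": {"gremlin": "g.V().limit(1)", "language": "gremlin-groovy"}}
response: {"requestId": "<uuid>", "status": {"code": 200, "message": "", "attributes": {}},
           "result": {"data": [ GraphSON-encoded elements ], "meta": {}}}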


On Friday, August 4, 2017 at 3:40:44 PM UTC-4, Ray Scott wrote:
I want to develop a driver in Go that connects to Gremlin Server using a websocket, runs a parameterized Groovy script and parses the response. At this stage all I need to do is perform basic queries and modify the graph. I've read through the documentation on driver development, and looked through some source code for existing drivers. 

Connecting and sending the data is the easy part. What I cannot find anywhere is an explanation of what I can expect to receive back in terms of the serialised format. I'm actually using JanusGraph straight out of the box. I've looked at the yaml config and read some posts on the serializers listed therein. I've read a little about GraphSON, GraphML, and Kryo, and all I'm really looking for is a way to set up the server so that it returns a response in some sort of officially spec'd format that I can work with in Go. The only other thing I need to do is be able to use the console as normal.

As an example, if I send this query...

graph = JanusGraphFactory.open('cassandra')
g = graph.traversal()
g.V().has('name', 'hercules').next()


I receive this...

[[map[id:8376 label:demigod type:vertex properties:map[name:[map[id:2s7-6go-sl value:hercules]] age:[map[id:36f-6go-35x value:30]]]]]]


What format is that? How do other driver developers handle this? Do I need to change the settings of the serializers in the yaml config? Do I use a writer in the Groovy script to serialize the result into a format of my choice? I don't want to perform any unnecessary serialization. 

Thanks.


Best practice setup for Go driver development & identifying the websocket serialization format

Ray Scott <raya...@...>
 

I want to develop a driver in Go that connects to Gremlin Server using a websocket, runs a parameterized Groovy script and parses the response. At this stage all I need to do is perform basic queries and modify the graph. I've read through the documentation on driver development, and looked through some source code for existing drivers. 

Connecting and sending the data is the easy part. What I cannot find anywhere is an explanation of what I can expect to receive back in terms of the serialised format. I'm actually using JanusGraph straight out of the box. I've looked at the yaml config and read some posts on the serializers listed therein. I've read a little about GraphSON, GraphML, and Kryo, and all I'm really looking for is a way to set up the server so that it returns a response in some sort of officially spec'd format that I can work with in Go. The only other thing I need to do is be able to use the console as normal.

As an example, if I send this query...

graph = JanusGraphFactory.open('cassandra')
g = graph.traversal()
g.V().has('name', 'hercules').next()


I receive this...

[[map[id:8376 label:demigod type:vertex properties:map[name:[map[id:2s7-6go-sl value:hercules]] age:[map[id:36f-6go-35x value:30]]]]]]


What format is that? How do other driver developers handle this? Do I need to change the settings of the serializers in the yaml config? Do I use a writer in the Groovy script to serialize the result into a format of my choice? I don't want to perform any unnecessary serialization. 

Thanks.


Potential Fix for Indexes stuck in `INSTALLED` state

David Pitera <piter...@...>
 

Hey guys, I know there have been a bunch of questions lately about indexes getting stuck in the `installed` state, and I recently discovered some more interesting potential causes for the problem; please see #5 in my StackOverflow answer here: https://stackoverflow.com/questions/40585417/titan-db-ignoring-index/40591478#40591478

TL;DR: you might have phantom JanusGraph nodes that are unable to acknowledge index existence, and thus the index will never move to `REGISTERED`. You may also have issues with backfilling of the queue, but I would definitely expect the former first.
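As a sketch of the cleanup in the JanusGraph console (the instance id below is made up; use whatever getOpenInstances() shows for your cluster):

mgmt = graph.openManagement()
mgmt.getOpenInstances()                            // phantom instances show up here
mgmt.forceCloseInstance('0123456789abcdef-host1')  // close each phantom, never the current one
mgmt.commit()
// then wait for the index to reach REGISTERED:
ManagementSystem.awaitGraphIndexStatus(graph, 'indexName').status(SchemaStatus.REGISTERED).call()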

Good luck!


Re: how is janusgraph data stored in Cassandra

Jerry He <jerr...@...>
 

The edges and properties are serialized, encoded, and optionally compressed in the backend table. A raw scan of the backend table will not easily show what they are.
The things you may be able to see in clear text, for example, are the configuration settings stored in the backend store.

Having said that, I wonder if it would be feasible or useful to provide a tool to look at or examine the raw data in the backend table.

Thanks.

On Friday, August 4, 2017 at 6:53:45 AM UTC-7, Suny wrote:
Thanks,

Under the JanusGraph keyspace in Cassandra I see some tables with information stored as blobs. Is there a way to find the row (containing data from JanusGraph) in any table?

On Thursday, August 3, 2017 at 5:15:56 PM UTC-4, Kelvin Lawrence wrote:
JanusGraph uses an adjacency list model. Each vertex, its properties, and its adjacent edges are stored as a row in Cassandra.

You might find this part of the documentation of use.


HTH
Kelvin

On Thursday, August 3, 2017 at 3:51:58 PM UTC-5, Suny wrote:
Can someone explain how JanusGraph data is stored in Cassandra? Are there any specific tables in Cassandra that I can look at for data from JanusGraph?


Re: how is janusgraph data stored in Cassandra

Suny <sahithiy...@...>
 

Thanks,

Under the JanusGraph keyspace in Cassandra I see some tables with information stored as blobs. Is there a way to find the row (containing data from JanusGraph) in any table?


On Thursday, August 3, 2017 at 5:15:56 PM UTC-4, Kelvin Lawrence wrote:
JanusGraph uses an adjacency list model. Each vertex, its properties, and its adjacent edges are stored as a row in Cassandra.

You might find this part of the documentation of use.


HTH
Kelvin

On Thursday, August 3, 2017 at 3:51:58 PM UTC-5, Suny wrote:
Can someone explain how JanusGraph data is stored in Cassandra? Are there any specific tables in Cassandra that I can look at for data from JanusGraph?


about janusgraph use spark(yarn-client) compute

liuzhip...@...
 

1. Configure the conf/hadoop-graph/read-cassandra.properties file as follows:
---------------------------------------------------------------------------------------------------
gremlin.graph=org.apache.tinkerpop.gremlin.hadoop.structure.HadoopGraph
gremlin.hadoop.graphInputFormat=org.janusgraph.hadoop.formats.hbase.HBaseInputFormat
gremlin.hadoop.graphOutputFormat=org.apache.tinkerpop.gremlin.hadoop.structure.io.gryo.GryoOutputFormat

gremlin.hadoop.jarsInDistributedCache=true
gremlin.hadoop.inputLocation=none
gremlin.hadoop.outputLocation=output

janusgraphmr.ioformat.conf.storage.backend=hbase
janusgraphmr.ioformat.conf.storage.hostname=testhadoop001.ppdapi.com,testhadoop002.ppdapi.com,testhadoop003.ppdapi.com
janusgraphmr.ioformat.conf.storage.keyspace=janusgraph

spark.master=yarn-client
spark.serializer=org.apache.spark.serializer.KryoSerializer
spark.yarn.services=org.apache.spark.deploy.yarn.history.YarnHistoryService
spark.yarn.historyServer.address=http://testhadoop002.ppdapi.com:18088
spark.history.provider=org.apache.spark.deploy.yarn.history.YarnHistoryProvider
spark.history.ui.port=18088
gremlin.spark.persistContext=true

2. Start gremlin.sh; the problem appears here:
[console screenshot not preserved]
Viewing the Spark task in the web UI, it seems a jar is missing, even though I already put guava-16.0.1.jar into the lib directory:
[web UI screenshot not preserved]
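A classpath knob from the spark-yarn recipe that might be relevant (untested here; the path is a placeholder for wherever the janusgraph lib directory lives on the worker nodes):

spark.executor.extraClassPath=/opt/janusgraph/lib/*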
DynamoDB autoscaling for JanusGraph

sanjana....@...
 


Does anyone know how to set up autoscaling, preferably as a setting applied when JanusGraph is initialized?

Ram


Re: how is janusgraph data stored in Cassandra

Kelvin Lawrence <kelvin....@...>
 

JanusGraph uses an adjacency list model. Each vertex, its properties, and its adjacent edges are stored as a row in Cassandra.

You might find this part of the documentation of use.

http://docs.janusgraph.org/latest/data-model.html
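Conceptually each row looks something like this (a rough sketch, not the exact byte-level encoding):

row key: <vertex id>
columns: [prop: name=hercules] [prop: age=30] [edge: battled -> <other vertex id> {time: 1}] ...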

HTH
Kelvin


On Thursday, August 3, 2017 at 3:51:58 PM UTC-5, Suny wrote:
Can someone explain how JanusGraph data is stored in Cassandra? Are there any specific tables in Cassandra that I can look at for data from JanusGraph?


how is janusgraph data stored in Cassandra

Suny <sahithiy...@...>
 

Can someone explain how JanusGraph data is stored in Cassandra? Are there any specific tables in Cassandra that I can look at for data from JanusGraph?


Do We Need Specialized Graph Databases? Benchmarking Real-Time Social Networking Applications

rcanz...@...
 

Has anyone seen this article out of the University of Waterloo, which concludes that TinkerPop 3 is not ready for prime time?

Do We Need Specialized Graph Databases? Benchmarking Real-Time Social Networking Applications
Anil Pacaci, Alice Zhou, Jimmy Lin, and M. Tamer Özsu
DOI: 10.1145/3078447.3078459
https://event.cwi.nl/grades/2017/12-Apaci.pdf

I'd be interested to know what other folks think of this testing setup and its set of conclusions.



Re: hi how can i use janusGraph api to connect gremlin-server

Robert Dale <rob...@...>
 

There are various ways to connect to Gremlin Server; which to use depends on the server configuration and the host language.
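For example, with the TinkerPop Java driver it looks roughly like this (a sketch; host, port, and query are placeholders):

import org.apache.tinkerpop.gremlin.driver.Client
import org.apache.tinkerpop.gremlin.driver.Cluster

cluster = Cluster.build('localhost').port(8182).create()
client = cluster.connect()
results = client.submit("g.V().has('name', 'hercules')").all().get()  // blocks for the result list
cluster.close()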

Robert Dale

On Wed, Aug 2, 2017 at 11:51 PM, 李平 <lipin...@...> wrote:
I want to use the JanusGraph API to connect to my Gremlin Server.
Another question: how do I create a unique vertex only if it does not already exist, and otherwise return the existing vertex?



Re: Janusgraph with ES as index backend

Kelvin Lawrence <kelvin....@...>
 

Yes - sorry if that was not clear.

Kelvin


On Thursday, August 3, 2017 at 10:28:11 AM UTC-5, Suny wrote:
Thanks for your response. By "if you tell Janus about the indexed properties using the management API" you mean creating property keys and an index using the management API, right?

On Wednesday, August 2, 2017 at 6:50:25 PM UTC-4, Kelvin Lawrence wrote:
If you tell Janus about the indexed properties using the management API it will use them automatically when you run Gremlin queries. You only need to use indexQuery for cases where you want to read from the index directly for other reasons.

HTH

Kelvin

On Wednesday, August 2, 2017 at 12:46:37 PM UTC-5, Suny wrote:
Hi,

I am using JanusGraph with ES as the index backend. I created a mixed index on some vertex attributes; this mixed index is stored in ES.

Now if I query for those vertices based on the index, will JanusGraph use ES internally to perform the search? Or do I need to use IndexQuery to search ES directly?


Re: Janusgraph with ES as index backend

Suny <sahithiy...@...>
 

Thanks for your response. By "if you tell Janus about the indexed properties using the management API" you mean creating property keys and an index using the management API, right?


On Wednesday, August 2, 2017 at 6:50:25 PM UTC-4, Kelvin Lawrence wrote:
If you tell Janus about the indexed properties using the management API it will use them automatically when you run Gremlin queries. You only need to use indexQuery for cases where you want to read from the index directly for other reasons.

HTH

Kelvin

On Wednesday, August 2, 2017 at 12:46:37 PM UTC-5, Suny wrote:
Hi,

I am using JanusGraph with ES as the index backend. I created a mixed index on some vertex attributes; this mixed index is stored in ES.

Now if I query for those vertices based on the index, will JanusGraph use ES internally to perform the search? Or do I need to use IndexQuery to search ES directly?


Re: janus cassandra limitations

mirosla...@...
 

OK, so I got it a bit wrong in my initial assumption.

1.
"vertexindex" stores values for all properties for all vertices.
In my case key=0x00 is 'false' and this value is stored in 90% of my vertices.

So in theory you could still have as many vertices as the Titan schema allows, but you could not store the same value for any property more than 2^30 times.

2.
"edgestore" contains information about all vertices, with references to all property values, and all edges per vertex.
This means one vertex could in theory have a maximum of 2^30 edges.

3.
Request to janusgraph designers: 


On Thursday, August 3, 2017 at 12:58:29 AM UTC+2, Kelvin Lawrence wrote:

Hi Mirosław,

JanusGraph uses an adjacency list model for storing vertices and edges. A vertex, its properties, and all of its adjacent edges are stored in a single Cassandra row.

The JanusGraph documentation goes into these issues in some detail.

You are using a very old version of Titan BTW. It would be worth upgrading if you can.

Cheers,
Kelvin

On Wednesday, August 2, 2017 at 10:36:39 AM UTC-5, Mirosław Głusiuk wrote:

Hi all,


From what I know, Janus is a fork of Titan, which means that if it does not have a different storage implementation it could have the same problems with larger data volumes.


"janusgraph/titan can store up to a quintillion edges (2^60) and half as many vertices. "

"The maximum number of cells (rows x columns) in a single partition is 2 billion."


2 billion is about 2^31.

In the Cassandra schema we always have 2 columns per table, so you could store about 2^30 values per key.

So if I'm not mistaken, "half as many vertices" does not hold for the Cassandra storage backend?


I'm using Titan 0.4.4, and after reaching 50M+ vertices I noticed that Cassandra started to complain about "Compacting large partition titan/vertexindex:00".
As I understand it, the partition for key 0x00 is already too big and is starting to cause performance problems during compaction.
I also noticed that it contains one value per created vertex (8+8 = 16 bytes), so it is already bigger than 500 MB, which exceeds the Cassandra recommendation.
http://docs.datastax.com/en/landing_page/doc/landing_page/planning/planningPartitionSize.html


So my question is: what is the real JanusGraph/Titan limit for the Cassandra backend that will not "kill" Cassandra?

By the way, I also noticed that some keys in the "edgestore" table for "supernodes" are bigger than 1 GB in my current graph.


Could anyone explain how JanusGraph stores data in Cassandra, and how to configure it to prevent storing huge rows?
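(For reference, I spotted the oversized partitions with nodetool; a sketch, and the exact subcommands vary by Cassandra version:)

nodetool cfstats titan
nodetool cfhistograms titan vertexindex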


hi how can i use janusGraph api to connect gremlin-server

李平 <lipin...@...>
 

I want to use the JanusGraph API to connect to my Gremlin Server.
Another question: how do I create a unique vertex only if it does not already exist, and otherwise return the existing vertex?
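(The pattern usually suggested for this kind of get-or-create is fold/coalesce/unfold; a sketch, assuming a unique 'name' property:)

g.V().has('name', 'hercules').fold().
  coalesce(unfold(),
           addV().property('name', 'hercules'))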


Re: janus cassandra limitations

Kelvin Lawrence <kelvin....@...>
 


Hi Mirosław,

JanusGraph uses an adjacency list model for storing vertices and edges. A vertex, its properties, and all of its adjacent edges are stored in a single Cassandra row.

The JanusGraph documentation goes into these issues in some detail.
http://docs.janusgraph.org/latest/index.html

You are using a very old version of Titan BTW. It would be worth upgrading if you can.

Cheers,
Kelvin

On Wednesday, August 2, 2017 at 10:36:39 AM UTC-5, Mirosław Głusiuk wrote:

Hi all,


From what I know, Janus is a fork of Titan, which means that if it does not have a different storage implementation it could have the same problems with larger data volumes.


"janusgraph/titan can store up to a quintillion edges (2^60) and half as many vertices. "

"The maximum number of cells (rows x columns) in a single partition is 2 billion."


2 billion is about 2^31.

In the Cassandra schema we always have 2 columns per table, so you could store about 2^30 values per key.

So if I'm not mistaken, "half as many vertices" does not hold for the Cassandra storage backend?


I'm using Titan 0.4.4, and after reaching 50M+ vertices I noticed that Cassandra started to complain about "Compacting large partition titan/vertexindex:00".
As I understand it, the partition for key 0x00 is already too big and is starting to cause performance problems during compaction.
I also noticed that it contains one value per created vertex (8+8 = 16 bytes), so it is already bigger than 500 MB, which exceeds the Cassandra recommendation.
http://docs.datastax.com/en/landing_page/doc/landing_page/planning/planningPartitionSize.html


So my question is: what is the real JanusGraph/Titan limit for the Cassandra backend that will not "kill" Cassandra?

By the way, I also noticed that some keys in the "edgestore" table for "supernodes" are bigger than 1 GB in my current graph.


Could anyone explain how JanusGraph stores data in Cassandra, and how to configure it to prevent storing huge rows?


Re: Janusgraph with ES as index backend

Kelvin Lawrence <kelvin....@...>
 

If you tell Janus about the indexed properties using the management API it will use them automatically when you run Gremlin queries. You only need to use indexQuery for cases where you want to read from the index directly for other reasons.
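For example, a minimal mixed index definition via the management API (a sketch; it assumes your index backend is named 'search', as in the sample configuration files):

mgmt = graph.openManagement()
name = mgmt.makePropertyKey('name').dataType(String.class).make()
mgmt.buildIndex('vertexByName', Vertex.class).addKey(name).buildMixedIndex('search')
mgmt.commit()
// Gremlin queries such as g.V().has('name', 'hercules') will now be answered from the index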

HTH

Kelvin


On Wednesday, August 2, 2017 at 12:46:37 PM UTC-5, Suny wrote:
Hi,

I am using JanusGraph with ES as the index backend. I created a mixed index on some vertex attributes; this mixed index is stored in ES.

Now if I query for those vertices based on the index, will JanusGraph use ES internally to perform the search? Or do I need to use IndexQuery to search ES directly?


Re: I'm starting a new startup big project, should I use Janus as main database to store all my data?

Kelvin Lawrence <kelvin....@...>
 

Hi there,

I don't think it would be appropriate to make definitive recommendations as to whether or not to use Janus in production for your needs. The best way to decide is to install it and run some tests. What I do know is that a number of people on this list have indicated they either are already building or plan to build solutions that include JanusGraph.

As to your other questions, here are some answers.

JanusGraph supports the Gremlin query and traversal language, which lets you add, delete, and update nodes and edges in a graph.

Janus supports numerous backend stores, including Cassandra, HBase, and Berkeley DB, and it can also run purely in memory, which is good for testing. The graph data is persisted to the backend store.
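For example, spinning up a purely in-memory graph for a quick test looks like this (a minimal sketch):

graph = JanusGraphFactory.build().set('storage.backend', 'inmemory').open()
g = graph.traversal()
g.addV('person').property('name', 'hercules')   // inserts go straight to the in-memory store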

Deciding which backend store to use will depend on many factors. You will want to consider things like the number of users, and whether you care more about consistency or availability, when making that choice.

I would encourage you to install Janus and run some tests and see what works best for your needs. I'm sure people on this list can help if you encounter issues as you experiment.

HTH

Kelvin


On Wednesday, August 2, 2017 at 8:56:53 AM UTC-5, Augusto Will wrote:
I'm thinking about learning Janus for use in my new big project, but I can't understand some things.

Janus can be used like any database and supports "insert", "update", and "delete" operations, so Janus will write data into Cassandra or another database to store it, right?

Janus stores the nodes, edges, attributes, etc. by writing them into the database, right?

Is this data loaded into memory by Janus, or is it read from Cassandra all the time?

Does the data Janus reads have to be loaded into Janus on every query, or will it do selects in the database to retrieve only the data I need?

Is the data retrieved from the database only what I need, or will Janus read all records in the database all the time?

Should I use Janus in my project in production, or should I wait until it becomes production ready?

I'm developing some kind of social network that needs to store friendships, posts, comments, and user blocks, and do some Elasticsearch queries too. In this case, what database backend should I use?


Thank you.
