Re: Release vs full release?
Hi Laura,
janusgraph-<version>.zip and janusgraph-full-<version>.zip are the same except that `janusgraph-full-<version>.zip` also bundles Cassandra and Elasticsearch, which can be started quickly with the convenience tool `bin/janusgraph.sh`. That makes it a convenient way to try out JanusGraph with mixed indexes without having to configure and run your own Elasticsearch and Cassandra. That said, in most cases I would recommend running your own installation of the storage and mixed index backends. Both distributions include JanusGraph Server (Gremlin Server). Thus, in production you should prefer janusgraph-<version>.zip with your own backend installations over janusgraph-full-<version>.zip. Best regards, Oleksandr
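For reference, with the plain distribution you point JanusGraph at the backends you operate yourself via a properties file. A minimal Gremlin Console sketch, assuming the sample `conf/janusgraph-cql-es.properties` file shipped in the distribution (adjust the file name and the hostnames inside it to your own setup):
```groovy
// Open JanusGraph against externally managed Cassandra and Elasticsearch backends.
// The properties file below is the sample shipped with the distribution; your own
// configuration may live elsewhere.
graph = JanusGraphFactory.open('conf/janusgraph-cql-es.properties')
g = graph.traversal()
```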
|
|
Re: Too low Performance when running PageRank and WCC on Graph500
Hi Shipeng,
I didn't check the graph you referred to, but JanusGraph 0.5.3 has some hard limits with the Cassandra backend. I would recommend trying version 0.6.0. You might also want to add some configuration related to your throughput, something like:
```
storage.cql.read-consistency-level: ONE
query.batch: true
query.smart-limit: false
# query.fast-property: false or true depending on queries
ids.block-size: 1000000
storage.batch-loading: true
storage.cql.local-max-connections-per-host: 5
storage.cql.max-requests-per-connection: 1024
storage.cql.executor-service.enabled: false
storage.parallel-backend-executor-service.core-pool-size: 100
```
Best regards, Oleksandr
|
|
Re: [ANNOUNCE] JanusGraph 0.6.0 Release
schwartz@...
Nice!! Gonna give this a spin in a few days
|
|
Re: [ANNOUNCE] JanusGraph 0.6.0 Release
It was quick. The JanusGraph 0.6.0 Docker image is now available on Docker Hub: https://hub.docker.com/layers/janusgraph/janusgraph/0.6.0/images/sha256-39b9059c23488f20d17f4b35f65c468112114b77d14f1b7d103fa56f66a65aab
|
|
Re: [ANNOUNCE] JanusGraph 0.6.0 Release
Jan Jansen is working on upgrading the docker image to 0.6.0 in this PR: https://github.com/JanusGraph/janusgraph-docker/pull/91
I believe the Docker image should be available soon.
|
|
Re: [ANNOUNCE] JanusGraph 0.6.0 Release
schwartz@...
This is great! Was looking forward to this.
Any ETA for the docker image? Thanks a lot, Assaf
|
|
[ANNOUNCE] JanusGraph 0.6.0 Release
The JanusGraph Technical Steering Committee is excited to announce the release of JanusGraph 0.6.0.
JanusGraph is an Apache TinkerPop enabled property graph database with support for a variety of storage and indexing backends. Thank you to all of the contributors. Notable new features in this release include:
The release artifacts can be found at this location: https://github.com/JanusGraph/janusgraph/releases/tag/v0.6.0
A full binary distribution is provided for user convenience: https://github.com/JanusGraph/janusgraph/releases/download/v0.6.0/janusgraph-full-0.6.0.zip
A truncated binary distribution is provided: https://github.com/JanusGraph/janusgraph/releases/download/v0.6.0/janusgraph-0.6.0.zip
The online docs can be found here: https://docs.janusgraph.org
To view the resolved issues and commits, check the milestone here: https://github.com/JanusGraph/janusgraph/milestone/17?closed=1
Thank you very much,
Oleksandr Porunov
|
|
Re: Removing a vertex is not removing recently added properties in different transaction
hadoopmarc@...
Hi Priyanka,
The case you describe sounds suspect and might be a JanusGraph issue. Your last remark ("If i add some delay b/w two operations then vertices are getting removed correctly.") gives an important clue as to what is going on. A few additional questions:
Best wishes, Marc
|
|
Re: Confused about GraphSON edges definition
Laura Morales <lauretas@...>
"People do not want to put effort into explaining GraphSON because that is not the way to go." May I ask why it is not the way to go, and what is the way instead? I thought my problem was fairly easy: have a graph in a file, load the file. But GraphML is lossy, and GraphSON is not the way to go. What is left other than having to write my own Groovy scripts and use the TinkerPop API?
|
|
Re: Confused about GraphSON edges definition
hadoopmarc@...
Hi Laura,
https://tinkerpop.apache.org/javadocs/current/full/org/apache/tinkerpop/gremlin/structure/io/graphson/GraphSONReader.html
People do not want to put effort into explaining GraphSON because that is not the way to go. As said above, you can just use a TinkerGraph with addV(), addEdge() and property() and export the graph to GraphSON. Best wishes, Marc
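A minimal Gremlin Console sketch of that approach (the vertex labels, property names and output path are only illustrative, not taken from Laura's data):
```groovy
// Build a tiny graph in an in-memory TinkerGraph and export it as GraphSON.
graph = TinkerGraph.open()
g = graph.traversal()

marko = g.addV('person').property('name', 'marko').next()
lop   = g.addV('software').property('name', 'lop').next()
g.addE('created').from(marko).to(lop).property('weight', 0.4).iterate()

// Export the whole graph to a GraphSON file.
g.io('/tmp/example-graph.json').with(IO.writer, IO.graphson).write().iterate()
```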
|
|
Looking for deeper understanding of the systemlog table.
jason.mccarthy@...
Hi all,
I'm hoping someone can help me understand something better. I'm curious about the size of the systemlog table for a number of our graphs. On our backend data store this is the only table which reports having large cells. On some nodes there are only a few of them, but on other nodes they number in the hundreds (the large cells, that is). I have a few basic questions: a) what is stored in this table? b) what kind of maintenance can I safely perform on it from the backend, if any? c) what might cause these large cells to show up in this table (and what could be done to avoid it)? Thanks, Jason
|
|
Re: Confused about GraphSON edges definition
Laura Morales <lauretas@...>
Hi,
I've asked my question over there (here's the thread: https://groups.google.com/g/gremlin-users/c/_H3UZyfdvtE) and the suggested solution seems to be to use readVertices() instead of read() or readGraph(). But I'm very confused and I'd really appreciate it if you could help me make sense of it. I haven't used Gremlin, Groovy, or Janus before, so I'm basically relying on the Janus documentation, but I cannot find any examples for this. How can I load a GraphSON file using readVertices()?
Sent: Thursday, September 02, 2021 at 8:07 AM
From: hadoopmarc@...
To: janusgraph-users@...
Subject: Re: [janusgraph-users] Confused about GraphSON edges definition
Hi Laura, If you want to know, you had better ask on the TinkerPop users list. Note that GraphSON is not designed as a human-readable or standardized interchange format, but rather as an interchange format between TinkerPop-compatible processes. If you want to create or modify a GraphSON file, it is easier to instantiate a TinkerGraph and use the TinkerPop API. Best wishes, Marc
|
|
Re: Removing a vertex is not removing recently added properties in different transaction
Priyanka Jindal
Please find my answers inline:
- I am using a composite index.
- They do not overlap in time.
- Yes, they do.
- It's a cluster. If I add some delay between the two operations, then the vertices are removed correctly.
|
|
Re: Confused about GraphSON edges definition
hadoopmarc@...
Hi Laura,
If you want to know, you had better ask on the TinkerPop users list. Note that GraphSON is not designed as a human-readable or standardized interchange format, but rather as an interchange format between TinkerPop-compatible processes. If you want to create or modify a GraphSON file, it is easier to instantiate a TinkerGraph and use the TinkerPop API. Best wishes, Marc
|
|
Re: CQL scaling limit?
hadoopmarc@...
Nice work!
|
|
Confused about GraphSON edges definition
Laura Morales <lauretas@...>
I'm looking at this example from TinkerPop https://tinkerpop.apache.org/docs/current/dev/io/#graphson
{"id":{"@type":"g:Int32","@value":1},"label":"person","outE":{"created":[{"id":{"@type":"g:Int32","@value":9},"inV":{"@type":"g:Int32","@value":3},"properties":{"weight":{"@type":"g:Double","@value":0.4}}}],"knows":[{"id":{"@type":"g:Int32","@value":7},"inV":{"@type":"g:Int32","@value":2},"properties":{"weight":{"@type":"g:Double","@value":0.5}}},{"id":{"@type":"g:Int32","@value":8},"inV":{"@type":"g:Int32","@value":4},"properties":{"weight":{"@type":"g:Double","@value":1.0}}}]},"properties":{"name":[{"id":{"@type":"g:Int64","@value":0},"value":"marko"}],"age":[{"id":{"@type":"g:Int64","@value":1},"value":{"@type":"g:Int32","@value":29}}]}} {"id":{"@type":"g:Int32","@value":2},"label":"person","inE":{"knows":[{"id":{"@type":"g:Int32","@value":7},"outV":{"@type":"g:Int32","@value":1},"properties":{"weight":{"@type":"g:Double","@value":0.5}}}]},"properties":{"name":[{"id":{"@type":"g:Int64","@value":2},"value":"vadas"}],"age":[{"id":{"@type":"g:Int64","@value":3},"value":{"@type":"g:Int32","@value":27}}]}} {"id":{"@type":"g:Int32","@value":3},"label":"software","inE":{"created":[{"id":{"@type":"g:Int32","@value":9},"outV":{"@type":"g:Int32","@value":1},"properties":{"weight":{"@type":"g:Double","@value":0.4}}},{"id":{"@type":"g:Int32","@value":11},"outV":{"@type":"g:Int32","@value":4},"properties":{"weight":{"@type":"g:Double","@value":0.4}}},{"id":{"@type":"g:Int32","@value":12},"outV":{"@type":"g:Int32","@value":6},"properties":{"weight":{"@type":"g:Double","@value":0.2}}}]},"properties":{"name":[{"id":{"@type":"g:Int64","@value":4},"value":"lop"}],"lang":[{"id":{"@type":"g:Int64","@value":5},"value":"java"}]}} {"id":{"@type":"g:Int32","@value":4},"label":"person","inE":{"knows":[{"id":{"@type":"g:Int32","@value":8},"outV":{"@type":"g:Int32","@value":1},"properties":{"weight":{"@type":"g:Double","@value":1.0}}}]},"outE":{"created":[{"id":{"@type":"g:Int32","@value":10},"inV":{"@type":"g:Int32","@value":5},"properties":{"weight":{"@type":"g:Double","@value":1.0}}},{"id":{"@type":"g:Int32","@value":11},"inV":{"@type":"g:Int32","@value":3},"properties":{"weight":{"@type":"g:Double","@value":0.4}}}]},"properties":{"name":[{"id":{"@type":"g:Int64","@value":6},"value":"josh"}],"age":[{"id":{"@type":"g:Int64","@value":7},"value":{"@type":"g:Int32","@value":32}}]}} {"id":{"@type":"g:Int32","@value":5},"label":"software","inE":{"created":[{"id":{"@type":"g:Int32","@value":10},"outV":{"@type":"g:Int32","@value":4},"properties":{"weight":{"@type":"g:Double","@value":1.0}}}]},"properties":{"name":[{"id":{"@type":"g:Int64","@value":8},"value":"ripple"}],"lang":[{"id":{"@type":"g:Int64","@value":9},"value":"java"}]}} {"id":{"@type":"g:Int32","@value":6},"label":"person","outE":{"created":[{"id":{"@type":"g:Int32","@value":12},"inV":{"@type":"g:Int32","@value":3},"properties":{"weight":{"@type":"g:Double","@value":0.2}}}]},"properties":{"name":[{"id":{"@type":"g:Int64","@value":10},"value":"peter"}],"age":[{"id":{"@type":"g:Int64","@value":11},"value":{"@type":"g:Int32","@value":35}}]}} I don't understand two things, can anyone help me understand them? - why do I need an "outE" *and* "inE" definition for the same edge? Why can't I just define one or the other? If I define both, the edge is created when importing the file, otherwise if I only use "outE" the edge is not created - why is everything given an id? Including edges and properties (for example "properties":{"name":[{"id":{"@type":"g:Int64","@value":0},"value":"marko"}). Removing all the "id" except for nodes IDs seems to work fine
|
|
Re: Removing a vertex is not removing recently added properties in different transaction
hadoopmarc@...
The behavior you describe sounds like what one experiences when transactions run in parallel. So let us investigate a bit further:
Marc
|
|
Re: Index stuck on INSTALLED (single instance of JanusGraph)
fredrick.eisele@...
It still does not work for me.
|
|
Re: CQL scaling limit?
madams@...
Hi Marc, I tried rerunning the scaling test on a fresh graph with ids.block-size=10000000, but unfortunately I haven't seen any performance gain. I also tried ids.block-size=10000000 together with ids.authority.conflict-avoidance-mode=GLOBAL_AUTO, but there too there was no performance gain.
I tried something else which turned out to be very successful: instead of inserting all the properties into the graph, I only inserted the ones necessary to feed the composite indexes and vertex-centric indexes. The indexes are used to execute the "get element or create it" logic efficiently (a sketch of that pattern follows). This test scaled quite nicely up to 64 indexers (instead of 4 before)!
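For context, the "get element or create it" logic referred to here is commonly written as a fold/coalesce/unfold traversal backed by a composite index. A minimal sketch, with an assumed 'person' label and 'userId' property key (neither taken from the actual schema):
```groovy
// "Get or create" a vertex keyed by an indexed property.
// Assumes a composite index on the 'person' property 'userId'; names are illustrative only.
uid = 'user-42'
v = g.V().has('person', 'userId', uid).
      fold().
      coalesce(unfold(),
               addV('person').property('userId', uid)).
      next()
```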
My best guess for why this is the case: the change reduced the amount of work the ScyllaDB coordinators had to do by:
I would happily continue digging into this, but unfortunately other priorities have turned up, so we're putting the testing aside for the moment. I thought I would post my complete findings/guesses anyway in case they are useful to someone.
Thank you so much for your help!
|
|
Removing a vertex is not removing recently added properties in different transaction
Priyanka Jindal
I am using the JanusGraph client with HBase as the storage backend.
In my case, I am using index ind1 to fetch vertices from the graph. Upon fetching, I am adding some properties (e.g. one such property is p1) to the vertices and committing the transaction. In the next transaction, I am fetching the vertices using index ind2, where one key in the index is the property (p1) added in the previous transaction. I get the vertices and remove them. The vertices are reported to be removed successfully. But sometimes they are still present with only the property (p1) added in the previous transaction, although the other properties/edges have been removed. This happens very intermittently. It would be really helpful if someone has an idea about this and can explain it to me.
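A rough Gremlin sketch of the two-transaction flow being described, just to make the sequence concrete (the index keys, property names and values are placeholders, not the actual schema):
```groovy
// Transaction 1: fetch vertices via the first indexed key, add property p1, commit.
g.V().has('ind1Key', 'someValue').property('p1', 'v1').iterate()
g.tx().commit()

// Transaction 2: fetch the same vertices via the second index (which includes p1) and remove them.
g.V().has('p1', 'v1').drop().iterate()
g.tx().commit()
```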
|
|