Exception: IllegalStateException: The vertex or type is not associated with this transaction
aa5186 <arunab...@...>
Hi all, I am trying to add a new vertex and associate a new edge from it to an existing vertex. It doesn't allow me to add the edge and throws the exception: java.lang.IllegalStateException: The vertex or type is not associated with this transaction.
```
List<Vertex> nodes = g.V().hasLabel("A").toList();
Vertex newNode = tx.addVertex(T.label, "B");
```
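For context, a minimal sketch of the scenario being described, assuming both objects come from the same JanusGraph instance; the edge label, lookup, and file path are illustrative rather than taken from the original post:
```
import org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversalSource;
import org.apache.tinkerpop.gremlin.structure.T;
import org.apache.tinkerpop.gremlin.structure.Vertex;
import org.janusgraph.core.JanusGraph;
import org.janusgraph.core.JanusGraphFactory;
import org.janusgraph.core.JanusGraphTransaction;

JanusGraph graph = JanusGraphFactory.open("conf/janusgraph-cassandra.properties");

// The existing vertex is looked up through the graph's own traversal source...
GraphTraversalSource g = graph.traversal();
Vertex existing = g.V().hasLabel("A").next();

// ...while the new vertex is created on a separately opened transaction.
JanusGraphTransaction tx = graph.newTransaction();
Vertex newNode = tx.addVertex(T.label, "B");

// The two vertices now belong to different transactions, which is the situation
// the IllegalStateException message complains about; doing the lookup inside the
// same transaction as the addVertex call avoids the mismatch.
newNode.addEdge("linksTo", existing);
```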
|
|
How to count vertices or edges in JanusGraph?
spirit...@...
|
|
How can I gather statistics, i.e. how many vertices and edges the graph contains?
spirit...@...
My graph has about 100 million vertices and 200 million edges, but counting them with code like `while (countV.hasNext()) { ... }` is too slow. I want to compute the vertex or edge count directly through HBase. The following code is:
I made a test with a small graph of two vertices and two edges, but the edge count I get back is one when I expect two. Is there any problem? Please help... this problem bugs me. Thanks!
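For reference, a minimal sketch of the two counting approaches being contrasted in this thread; the graph instance, properties file path, and variable names are assumptions rather than details from the original post:
```
import java.util.Iterator;
import org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversalSource;
import org.apache.tinkerpop.gremlin.structure.Vertex;
import org.janusgraph.core.JanusGraph;
import org.janusgraph.core.JanusGraphFactory;

JanusGraph graph = JanusGraphFactory.open("conf/janusgraph-hbase.properties");

// Slow: pulls every vertex back to the client just to count it.
Iterator<Vertex> countV = graph.vertices();
long slowCount = 0;
while (countV.hasNext()) {
    countV.next();
    slowCount++;
}

// The traversal form of the same count; on a graph this size it is still a full
// scan unless it is executed with a GraphComputer (OLAP).
GraphTraversalSource g = graph.traversal();
long vertexCount = g.V().count().next();
long edgeCount = g.E().count().next();
```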
|
|
Re: Bulk loading CPU busy
Edoardo N <e.n...@...>
Hi! Sorry for the question, but I'm new to JanusGraph. How can I import a CSV file into JanusGraph?
On Tuesday, May 23, 2017 at 9:41:15 AM UTC+2, CGen wrote:
|
|
Re: [BLOG] Configuring JanusGraph for spark-yarn
HadoopMarc <bi...@...>
Hi John, Your assumption about different types of graph object for OLTP and OLAP is right (at least for JanusGraph; TinkerGraph supports both). I remember examples from the Gremlin users list, though, where OLTP and OLAP were mixed in the same traversal. It is no problem to have a graph1 and a graph2 graph object simultaneously; this is also what you do in gremlin-server when you want to serve multiple graphs. Cheers, Marc On Saturday, July 15, 2017 at 12:24:25 AM UTC+2, John Helmsen wrote:
|
|
Re: [BLOG] Configuring JanusGraph for spark-yarn
John Helmsen <john....@...>
HadoopMarc, It seems that we have two graph classes that need to be created. The first is a StandardJanusGraph object that uses a standard graph computer. This is able to perform OLTP data pushes and, I assume, standard OLTP queries. It, however, does not interface with Spark, so SparkGraphComputer cannot be used as the graph computer for its traversal object. The second object is a HadoopGraph object that can have SparkGraphComputer activated for its associated traversal source object. This can perform the appropriate map-reduce OLAP calculations, but wouldn't be good for putting information into the HBase database. Is this accurate, or can we create a GraphTraversalSource that can perform both the OLTP data inserts and utilize SparkGraphComputer? If not, could we create both objects simultaneously? Would there be conflicts between the two if there were two simultaneous traversals?
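For concreteness, a minimal sketch of the two-graph-object setup discussed in this thread, assuming a JanusGraph instance for the OLTP work and a HadoopGraph configured for SparkGraphComputer; the .properties file names and the sample data are illustrative only:
```
import org.apache.tinkerpop.gremlin.hadoop.structure.HadoopGraph;
import org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversalSource;
import org.apache.tinkerpop.gremlin.spark.process.computer.SparkGraphComputer;
import org.apache.tinkerpop.gremlin.structure.util.GraphFactory;
import org.janusgraph.core.JanusGraph;
import org.janusgraph.core.JanusGraphFactory;

// OLTP graph: used for inserts and ordinary traversals against the backend.
JanusGraph oltpGraph = JanusGraphFactory.open("conf/janusgraph-hbase.properties");
GraphTraversalSource g1 = oltpGraph.traversal();
g1.addV("person").property("name", "alice").iterate();
g1.tx().commit();

// OLAP graph: a HadoopGraph over the same data, traversed with SparkGraphComputer.
HadoopGraph olapGraph = (HadoopGraph) GraphFactory.open("conf/hadoop-graph/read-hbase.properties");
GraphTraversalSource g2 = olapGraph.traversal().withComputer(SparkGraphComputer.class);
long count = g2.V().count().next();
```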
On Thursday, July 6, 2017 at 4:15:37 AM UTC-4, HadoopMarc wrote:
|
|
Re: JanusGraph support for Cassandra 3.x
Ted Wilmes <twi...@...>
Interesting. If you have a record of what your issues were, could you create a ticket? I haven't seen issues with Cassandra 3.10 and the Astyanax adapter, but maybe your setup or usage patterns were different. I'm assuming you enabled Thrift on your Cassandra cluster? Thanks, Ted
On Thursday, July 13, 2017 at 7:14:37 PM UTC-5, Vladyslav Kosulin wrote:
|
|
Re: janusgraph solr cassandra GraphOfTheGodsFactory
sju...@...
It looks like that is the case. I'm not a regular Solr user, so maybe others can chime in here if they know otherwise. The docs reference an "index.search.solr.configset" configuration property that would allow configset/core reuse, but it looks like that's only used in SolrCloud configurations, not HTTP.
On Wednesday, July 12, 2017 at 2:52:04 PM UTC-5, mahendiran chandrasekar wrote:
|
|
Re: JanusGraph support for Cassandra 3.x
Vladyslav Kosulin <vkos...@...>
Astyanax with Cassandra 3? I tried it in another project, and that combination did not work.
On Friday, July 7, 2017 at 3:09:23 PM UTC-4, Ted Wilmes wrote:
|
|
Re: janusgraph solr cassandra GraphOfTheGodsFactory
mahi...@...
I am still running into the same kind of trouble with Solr when I try to build a search index. I am following http://docs.janusgraph.org/0.1.0-SNAPSHOT/solr.html. Does section 24.1.2.3 mean I need to create a Solr core for every index I create? For example, if I create an index on a property key of a vertex called "foo", do I need to create a core for that index?
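For context, a minimal sketch of the kind of mixed index being asked about, assuming an already-open JanusGraph instance `graph` and a Solr index backend registered under the name "search" in the graph's .properties file; the property and index names are illustrative. With the plain HTTP Solr mode discussed above, the index name is what would correspond to a Solr core:
```
import org.apache.tinkerpop.gremlin.structure.Vertex;
import org.janusgraph.core.PropertyKey;
import org.janusgraph.core.schema.JanusGraphManagement;

JanusGraphManagement mgmt = graph.openManagement();
PropertyKey foo = mgmt.makePropertyKey("foo").dataType(String.class).make();

// "fooIndex" is the index name the Solr backend keys on; "search" must match
// the index.search.* prefix used in the graph's .properties file.
mgmt.buildIndex("fooIndex", Vertex.class).addKey(foo).buildMixedIndex("search");
mgmt.commit();
```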
On Saturday, 8 July 2017 05:41:17 UTC-7, s...@... wrote:
|
|
Re: janusgraph solr cassandra GraphOfTheGodsFactory
mahi...@...
Thanks for the reply, that worked for GraphOfTheGods
On Friday, 7 July 2017 20:11:04 UTC-7, mahendiran chandrasekar wrote:
|
|
Re: Possibility of index out of sync with graph
Jason Plurad <plu...@...>
Hi Ralf, It is still an issue. I've reproduced it and opened up https://github.com/JanusGraph/janusgraph/issues/410 -- Jason
On Wednesday, July 12, 2017 at 9:11:40 AM UTC-4, Ralf Steppacher wrote:
|
|
Re: Possibility of index out of sync with graph
Ralf Steppacher <ralf....@...>
Hi Austin, Ralf
On Tuesday, June 20, 2017 at 8:41:31 PM UTC+2, Austin Sharp wrote:
|
|
Re: New user-JanusGraph consistency with Cassandra
David Pitera <piter...@...>
Hey Stephen, The first source of information on this can be found directly in the JanusGraph documentation http://docs.janusgraph.org/latest/tx.html:
```
JanusGraph transactions are not necessarily ACID. They can be so configured on BerkeleyDB, but they are not generally so on Cassandra or HBase, where the underlying storage system does not provide serializable isolation or multi-row atomic writes and the cost of simulating those properties would be substantial.
```
With that in mind, I think if you configure your Cassandra cluster so that reads/writes are set to QUORUM/ALL and all transactions lock on writes, then I _believe_ you would get what you are looking for in terms of pure "consistency", of course at the expense of horribly low write throughput. I can see a viable use case for such a workflow, i.e. tons of reads, barely any writes, but the reads need to be consistent when the data does change, even if that is not often. However, I would be careful about the isolation of your transactions; Cassandra transactions are not isolated across row mutations. So while I believe committed and rolled-back transactions will result in consistent reads, I am not convinced that you will not see something like stale reads due to those isolation limitations.
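For illustration, a minimal sketch of the settings referred to above, assuming the Cassandra Thrift adapter; the hostname and property-key names are placeholders:
```
import org.janusgraph.core.JanusGraph;
import org.janusgraph.core.JanusGraphFactory;
import org.janusgraph.core.PropertyKey;
import org.janusgraph.core.schema.ConsistencyModifier;
import org.janusgraph.core.schema.JanusGraphManagement;

JanusGraph graph = JanusGraphFactory.build()
        .set("storage.backend", "cassandrathrift")
        .set("storage.hostname", "127.0.0.1")
        // Cassandra read/write consistency levels (the QUORUM/ALL pairing discussed above).
        .set("storage.cassandra.read-consistency-level", "QUORUM")
        .set("storage.cassandra.write-consistency-level", "ALL")
        .open();

// Make writes to a key acquire a lock, trading write throughput for consistency.
JanusGraphManagement mgmt = graph.openManagement();
PropertyKey name = mgmt.makePropertyKey("name").dataType(String.class).make();
mgmt.setConsistency(name, ConsistencyModifier.LOCK);
mgmt.commit();
```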
On Tuesday, July 11, 2017 at 7:28:58 AM UTC-4, Stephen Horton wrote:
|
|
Re: professional support for JanusGraph
David Pitera <piter...@...>
Hey Peter; Compose is one company that provides a fully hosted and managed JanusGraph deployment option: https://www.compose.com/databases/janusgraph.
On Friday, June 23, 2017 at 11:30:12 AM UTC-4, Peter wrote:
|
|
Re: Handling backend connection issues on Gremlin-Server start up
David Pitera <piter...@...>
> Is this normal behavior, to have Gremlin-Server continue even though the selected backend cannot be contacted on start up?

I think so, yes, because the way a backend gets connected is that we iterate through each graph inside the server YAML's graphs {} object and try to instantiate it. That graph-reference instantiation is what opens the connections to the appropriate backend, as defined in each graph's .properties file. You could have multiple graphs defined, each talking to a different backend; if one fails, it seems reasonable to still be able to work with the others.

> Is there a way for me to send a command to the Gremlin-Server so it will attempt a reconnection when I know Cassandra is up?

If the connection wasn't configured properly at server start, then there is no graph reference instantiated on the JVM, and thus there is nothing to act on to make that graph reconnect to the backend. The notion of "being connected to the backend" is really nothing more than a successful graph instantiation. Should Cassandra go down after this has successfully happened, the individual gremlin script submissions you send would fail, but they would start to succeed again automatically when the backend came back up.

To fix your issue you could 1) restart the server, 2) use the `JanusGraphFactory.open()` method to dynamically instantiate the graph, or 3) dynamically instantiate the graph once this PR https://github.com/JanusGraph/janusgraph/pull/392 has been merged (of course you'll need to read the documentation in further detail to see how to do so). I would do (1) until you can do (3), as (2) will not store the graph reference in a GraphManager, and thus your graph's gremlin script executions will not be auto-managed (i.e. committed or rolled back) for you.
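For reference, the graphs {} object referred to above looks roughly like this in the Gremlin Server YAML; the graph names and .properties paths are placeholders. Each entry is instantiated at server start, and an entry whose backend cannot be reached simply ends up with no graph reference on the JVM:
```
# Excerpt from a gremlin-server.yaml (names and paths are illustrative)
graphs: {
  graph1: conf/janusgraph-cassandra-1.properties,
  graph2: conf/janusgraph-cassandra-2.properties
}
```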
On Monday, July 3, 2017 at 12:43:24 PM UTC-4, Carlos wrote:
|
|
Re: Error during serialization: Cannot get namespace of root
David Pitera <piter...@...>
Hey Ramesh, I have seen this issue before, but only when the query you issue actually returns a Graph reference itself, and I've only seen it in the context of an HTTP query. This leads me to believe there is an issue with the way the HTTP serializer serializes a graph reference. It shouldn't be a show-stopper for you, as everything else should work as expected, and if you really don't like seeing that message, you can modify your gremlin scripts from something like `graph` to `graph; 0;` or `graph.vertices().size()` (anything that doesn't actually return a graph reference). If you feel so inspired, please open an issue in the repo. I have a feeling, though I'm not sure, that the fix might be at the TinkerPop level.
On Thursday, July 6, 2017 at 11:04:08 AM UTC-4, Ramesh Nagaraju wrote:
|
|
Re: keyspaces in JanusGraph
David Pitera <piter...@...>
Hey Peter; as Ted said, you can define which keyspace you connect to using the appropriate configuration property when defining your graph's .properties file. Also be aware of this PR https://github.com/JanusGraph/janusgraph/pull/392 which will allow you to create new graphs dynamically (i.e. post server start) and store/manage configurations for these graphs through the ConfigurationManagementGraph API, where the configurations for your graphs are persisted rather than stored on disk in .properties files. The PR also introduces the notion of a `Template_Configuration`, which should be useful in your case. If you are _always_ connecting to one Cassandra cluster, then you can define that appropriate configuration (sans the keyspace parameter), and then call `ConfiguredGraphFactory.create("<graphName>")` to dynamically create and connect to a new graph whose keyspace would be equivalent to "<graphName>". For more specifics about how to use the `ConfigurationManagementGraph` and `ConfiguredGraphFactory`, you can look at the documentation I've written for it in that link.
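For illustration, a minimal sketch of the keyspace setting being referred to, assuming the Cassandra Thrift adapter; the keyspace name and hostname are placeholders, and the same properties can equally live in the graph's .properties file:
```
import org.janusgraph.core.JanusGraph;
import org.janusgraph.core.JanusGraphFactory;

// Each graph picks its own keyspace; if unset, JanusGraph uses a default keyspace.
JanusGraph graph = JanusGraphFactory.build()
        .set("storage.backend", "cassandrathrift")
        .set("storage.hostname", "127.0.0.1")
        .set("storage.cassandra.keyspace", "peters_graph")
        .open();
```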
On Wednesday, June 28, 2017 at 11:52:04 AM UTC-4, Peter wrote:
|
|
New user-JanusGraph consistency with Cassandra
stephenr...@...
Hello, I am evaluating JanusGraph for a project. If it is backed by Cassandra and set to QUORUM or ALL consistency in the Cassandra config, and if I use transactions with consistency locking for all of my writes, what consistency issues do I need to consider and be aware of when running a cluster of 20-30 Cassandra instances? Thank you! Stephen
|
|
Re: Schema management tools
Robert Dale <rob...@...>
On Thursday, July 6, 2017 at 11:36:45 AM UTC-4, Jason Plurad wrote:
|
|