Re: Backend data model deserialization

Boxuan Li
 

Hi Elliot,

I am not aware of existing utilities for deserialization, but as Marc has suggested, you might want to see if there are old Titan resources regarding it, as the data model hasn't changed since the Titan -> JanusGraph migration.

If you want to resort to the source code, you could check out EdgeSerializer and IndexSerializer. Here is a simple code snippet demonstrating how to deserialize an edge:

final Entry colVal = StaticArrayEntry.of(
    StaticArrayBuffer.of(Bytes.fromHexString("0x70a0802140803800")),  // column bytes, retrieved from the Cassandra cqlsh console
    StaticArrayBuffer.of(Bytes.fromHexString("0x0180a076616c75e5"))); // value bytes, retrieved from the Cassandra cqlsh console
final StandardSerializer serializer = new StandardSerializer();
final EdgeSerializer edgeSerializer = new EdgeSerializer(serializer);
// tx is an open transaction, cast to (StandardJanusGraphTx)
RelationCache edgeCache = edgeSerializer.readRelation(colVal, false, (StandardJanusGraphTx) tx); // this is the deserialized edge

Bytes.fromHexString is a utility method provided by the DataStax Cassandra driver. You might use any other library/code to convert the hex string to bytes.
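
For example, a tiny hand-rolled converter works just as well (a hypothetical helper, not part of JanusGraph; StaticArrayBuffer.of(byte[]) accepts the result directly):

static byte[] fromHex(String hex) {
    final String s = hex.startsWith("0x") ? hex.substring(2) : hex;
    final byte[] out = new byte[s.length() / 2];
    for (int i = 0; i < out.length; i++) {
        out[i] = (byte) Integer.parseInt(s.substring(2 * i, 2 * i + 2), 16);
    }
    return out;
}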

As you can see, there is no single easy-to-use API to deserialize raw data. If you end up creating one, I think it would be helpful if you could contribute it back to the community.

Best regards,
Boxuan


On May 20, 2021, at 8:07 PM, hadoopmarc@... wrote:

Hi Elliot,

There should be some old Titan resources that describe how the data model is binary-encoded into the row keys and row values. Of course, it is also implicit from the JanusGraph source code.

If you look back at this week's OLAP presentations (https://lists.lfaidata.foundation/g/janusgraph-users/topic/janusgraph_meetup_4/82939376) you will see that one of the presenters did exactly what you propose: they exported rows from ScyllaDB and converted them to Gryo format for import into a TinkerPop HadoopGraph. You might want to contact them to coordinate a possible contribution to the JanusGraph project.

Best wishes,     Marc


Re: Backend data model deserialization

hadoopmarc@...
 

Hi Elliot,

There should be some old Titan resources that describe how the data model is binary-encoded into the row keys and row values. Of course, it is also implicit from the JanusGraph source code.

If you look back at this week's OLAP presentations (https://lists.lfaidata.foundation/g/janusgraph-users/topic/janusgraph_meetup_4/82939376) you will see that one of the presenters did exactly what you propose: they exported rows from ScyllaDB and converted them to Gryo format for import into a TinkerPop HadoopGraph. You might want to contact them to coordinate a possible contribution to the JanusGraph project.

Best wishes,     Marc


Re: MapReduce reindexing with authentication

Boxuan Li
 

Hi Marc,

Thanks for your explanation. Just to avoid confusion, GENERIC_OPTIONS itself is not an env variable, but a set of configuration options (https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/CommandsManual.html#Generic_Options). These options have nothing to do with environment variables.

If I understand you correctly, you are saying that maybe the ToolRunner interface is not required to submit files. I didn't try, but I think you are right, because what it does under the hood is simply:

if (line.hasOption("files")) {
    conf.set("tmpfiles",
        this.validateFiles(line.getOptionValue("files"), conf),
        "from -files command line option");
}
which will later be picked up by the Hadoop client. So, theoretically, ToolRunner is not needed and one can set the Hadoop config themselves. This, however, does not seem to be documented officially anywhere, and it is not guaranteed that the string literal "tmpfiles" will not change in future versions.
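
For illustration, a minimal driver run through ToolRunner might look like this (ReindexDriver is a hypothetical name; the point is that ToolRunner applies GenericOptionsParser, which is what turns a -files argument into the "tmpfiles" entry, before run() is called):

import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class ReindexDriver extends Configured implements Tool {
    @Override
    public int run(String[] args) throws Exception {
        // getConf() now contains "tmpfiles" if -files was passed on the command line;
        // a patched MapReduceIndexManagement could accept this Configuration directly.
        System.out.println(getConf().get("tmpfiles"));
        return 0;
    }

    public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new ReindexDriver(), args));
    }
}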

Note that even if one wants to set "tmpfiles" themselves for a MapReduce reindex, they still need to modify the JanusGraph source code, because currently the hadoopConf object is created within the MapReduceIndexManagement class and users have no control over it.

Best regards,
Boxuan

On May 15, 2021, at 4:53 PM, hadoopmarc@... wrote:

Hi Boxuan,

Yes, I did not finish my argument. What I tried to suggest: if the hadoop CLI command checks the GENERIC_OPTIONS env variable, then maybe the mapreduce java client called by JanusGraph also checks it.

The (old) blog below suggests, however, that this behavior is not present by default but requires the JanusGraph code to run Hadoop's ToolRunner. So, just see if this is any better than what you had in mind to implement.
https://hadoopi.wordpress.com/2013/06/05/hadoop-implementing-the-tool-interface-for-mapreduce-driver/

Best wishes,    Marc


Backend data model deserialization

Elliot Block <eblock@...>
 

Hello,

Is there any supported way (e.g. a class/API) for deserializing raw data model rows, i.e. to get from raw Bigtable bytes to Vertex/edge list objects (in Java)?

https://docs.janusgraph.org/advanced-topics/data-model/

We're on the Cloud Bigtable storage backend, which has excellent support for bulk exporting Bigtable rows (e.g. to Parquet in GCS), but we're unclear how to deserialize the raw Bigtable row/cell bytes back into usable Vertex objects. If we were to build support for something like this, would it be a candidate for contribution back into the project? Or are we misunderstanding the intended API/usage path?

Any thoughts greatly appreciated.  Thank you!

- Elliot


JanusGraph Meetup #4 Recording

Ted Wilmes
 

Hello,
Thanks to all who attended the meetup yesterday. If you weren't able to make it, you can find the recording at: https://www.experoinc.com/online-seminar/janusgraph-community-meetup.

Thanks to our presenters, Marc, Saurabh, and Bruno; we had a really good set of material presented.

Thanks,
Ted


Re: Query Optimisation

hadoopmarc@...
 

Hi Vinayak,

Please study the as(), select(), project() and cap() steps from the TinkerPop ref docs. The arguments of project() do not reference the keys of side effects but rather introduce new keys for its output. The query I provided above was tested in the TinkerPop modern graph, so I repeat my suggestion to try that one first and show in what way it fails to provide sensible output.
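
For instance, in the TinkerPop modern graph, project() introduces the new output keys 'dev' and 'software', while select() refers back to the labels created with as() (a minimal illustration):

g.V().has('name', 'marko').as('a').out('created').as('b').
  project('dev', 'software').
    by(select('a').by('name')).
    by(select('b').by('name'))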

Best wishes,    Marc


Re: Query Optimisation

Vinayak Bali
 

Hi Marc, 

I am using the following query now.

g2.inject(1).union(
V().has('property1', 'V1').aggregate('v1').outE().has('property1', 'E1').limit(100).aggregate('e').inV().has('property2', 'V2').aggregate('v2')
).project('v1','e','v2').by(valueMap().by(unfold()))

But this only returns the elements of V2; no V1 and E1 attributes are returned. Could you please check?

Thanks & Regards,
Vinayak


On Mon, May 10, 2021 at 8:13 PM <hadoopmarc@...> wrote:
Hi Vinayak,

Actually, query 4 was easier to rework. It could read somewhat like:
g.V().has('property1', 'vertex1').as('v1').outE().has('property1', 'edge1').limit(100).as('e').inV().has('property1', 'vertex1').as('v2').
    select('v1','e','v2').by(valueMap().by(unfold())).aggregate('x').fold().
  V().has('property1', 'vertex1').as('v1').outE().has('property1', 'edge2').limit(100).as('e').inV().has('property1', 'vertex2').as('v2').
    select('v1','e','v2').by(valueMap().by(unfold())).aggregate('x').fold().
  V().has('property1', 'vertex3').as('v1').outE().has('property1', 'edge3').limit(100).as('e').inV().has('property1', 'vertex2').as('v2').
    select('v1','e','v2').by(valueMap().by(unfold())).aggregate('x').fold().
  V().has('property1', 'vertex3').as('v1').outE().has('property1', 'Component_Of').limit(100).as('e').inV().has('property1', 'vertex1').as('v2').
    select('v1','e','v2').by(valueMap().by(unfold())).aggregate('x').fold().
  cap('x')

Best wishes,    Marc


Making janus graph client to not use QUORUM

anjanisingh22@...
 

Hi All,

I am trying to create nodes in a graph, and while reading the created node id I am getting the exception below:

org.janusgraph.diskstorage.TemporaryBackendException: Temporary failure in storage backend
    at io.vavr.API$Match$Case0.apply(API.java:3174)
    at io.vavr.API$Match.of(API.java:3137)
    at org.janusgraph.diskstorage.cql.CQLKeyColumnValueStore.lambda$static$0(CQLKeyColumnValueStore.java:125)
    at io.vavr.control.Try.getOrElseThrow(Try.java:671)
    at org.janusgraph.diskstorage.cql.CQLKeyColumnValueStore.getSlice(CQLKeyColumnValueStore.java:292)
    at org.janusgraph.diskstorage.configuration.backend.KCVSConfiguration$3.call(KCVSConfiguration.java:177)
    at org.janusgraph.diskstorage.configuration.backend.KCVSConfiguration$3.call(KCVSConfiguration.java:174)
    at org.janusgraph.diskstorage.util.BackendOperation.execute(BackendOperation.java:147)
    at org.janusgraph.diskstorage.util.BackendOperation$1.call(BackendOperation.java:161)
    at org.janusgraph.diskstorage.util.BackendOperation.executeDirect(BackendOperation.java:68)
    ... 26 more
Caused by: com.datastax.driver.core.exceptions.ReadTimeoutException: Cassandra timeout during read query at consistency QUORUM (3 responses were required but only 2 replica responded)


I am trying to update the Janus client to use LOCAL_ONE instead of QUORUM. I am setting the property below, but it's not working.

 

JanusGraphFactory.Builder configProps = JanusGraphFactory.build();
configProps.set("storage.cql.read-consistency-level", "LOCAL_ONE");

 

In the CQLTransaction class I see it is set by reading the value from the custom options:

public CQLTransaction(final BaseTransactionConfig config) {
    super(config);
    this.readConsistencyLevel = ConsistencyLevel.valueOf(getConfiguration().getCustomOption(READ_CONSISTENCY));
    this.writeConsistencyLevel = ConsistencyLevel.valueOf(getConfiguration().getCustomOption(WRITE_CONSISTENCY));
}


I did try setting the value by using the customOption() method of TransactionBuilder, but no luck.

 

GraphTraversalSource g = janusGraph.buildTransaction()
    .customOption("storage.cql.read-consistency-level", "LOCAL_ONE")
    .start().traversal();

 


Could you please help me to fix it?


Thanks & Regards,
Anjani

 


Re: [janusgraph-dev] [Meetup] JanusGraph Meetup May 18 covering JG OLAP approaches

cmilowka
 

Thank you Ted, I am also interested in watching this video on OLAP later, it is like 00:30 in Melbourne, not too easy to get up in the morning...
Regards to presenters, Christopher.


Re: [janusgraph-dev] [Meetup] JanusGraph Meetup May 18 covering JG OLAP approaches

Ted Wilmes
 

Hi Boxuan,
Yes, definitely. I'll post this under presentations on janusgraph.org. Also, I hadn't posted meetup 3 on there yet and finally tracked the link down, so that will also be up there shortly.

Thanks,
Ted


On Sun, May 16, 2021 at 10:22 AM Boxuan Li <liboxuan@...> wrote:
Hi Ted,

Thanks for organizing this! Do you have plans to record and release the video after the meetup? 10:30 ET is a bit late for some regions in APAC, so it would be great if there were a video recording.

Cheers,
Boxuan

On May 14, 2021, at 11:37 PM, Ted Wilmes <twilmes@...> wrote:

Hello,
We will be hosting a community meetup next week on Tuesday, May 18th at 9:30 central/10:30 eastern. We have a great set of speakers who will be discussing all things JanusGraph OLAP:

* Hadoop Marc who has helped many of us on the mailing list and in JG issues
* Saurabh Verma, principal engineer at Zeotap
* Bruno Berriso, engineer at Expero

If you're interested in signing up, here's the link: https://www.experoinc.com/get/janusgraph-user-group.

Thanks,
Ted


Re: [janusgraph-dev] [Meetup] JanusGraph Meetup May 18 covering JG OLAP approaches

Boxuan Li
 

Hi Ted,

Thanks for organizing this! Do you have plans to record and release the video after the meetup? 10:30 ET is a bit late for some regions in APAC, so it would be great if there were a video recording.

Cheers,
Boxuan

On May 14, 2021, at 11:37 PM, Ted Wilmes <twilmes@...> wrote:

Hello,
We will be hosting a community meetup next week on Tuesday, May 18th at 9:30 central/10:30 eastern. We have a great set of speakers who will be discussing all things JanusGraph OLAP:

* Hadoop Marc who has helped many of us on the mailing list and in JG issues
* Saurabh Verma, principal engineer at Zeotap
* Bruno Berriso, engineer at Expero

If you're interested in signing up, here's the link: https://www.experoinc.com/get/janusgraph-user-group.

Thanks,
Ted


Re: Storing and reading connected component RDD through OutputFormatRDD & InputFormatRDD

hadoopmarc@...
 

Hi Anjani,

The following section of the TinkerPop ref docs gives an example of how to reuse the output RDD of one job in a follow-up Gremlin OLAP job.
https://tinkerpop.apache.org/docs/3.4.10/reference/#interacting-with-spark
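
In short, that section keeps the graph RDD persisted in the Spark context and reads it back in the next job. The configuration looks roughly like this (an untested sketch based on that docs page; myConnectedComponents is a placeholder name):

gremlin.hadoop.graphWriter=org.apache.tinkerpop.gremlin.spark.structure.io.PersistedOutputRDD
gremlin.spark.persistContext=true
gremlin.hadoop.outputLocation=myConnectedComponents

# and in the follow-up job:
gremlin.hadoop.graphReader=org.apache.tinkerpop.gremlin.spark.structure.io.PersistedInputRDD
gremlin.hadoop.inputLocation=myConnectedComponents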

Best wishes,   Marc


Re: MapReduce reindexing with authentication

hadoopmarc@...
 

Hi Boxuan,

Yes, I did not finish my argument. What I tried to suggest: if the hadoop CLI command checks the GENERIC_OPTIONS env variable, then maybe the mapreduce java client called by JanusGraph also checks it.

The (old) blog below suggests, however, that this behavior is not present by default but requires the JanusGraph code to run Hadoop's ToolRunner. So, just see if this is any better than what you had in mind to implement.
https://hadoopi.wordpress.com/2013/06/05/hadoop-implementing-the-tool-interface-for-mapreduce-driver/

Best wishes,    Marc


[Meetup] JanusGraph Meetup May 18 covering JG OLAP approaches

Ted Wilmes
 

Hello,
We will be hosting a community meetup next week on Tuesday, May 18th at 9:30 central/10:30 eastern. We have a great set of speakers who will be discussing all things JanusGraph OLAP:

* Hadoop Marc who has helped many of us on the mailing list and in JG issues
* Saurabh Verma, principal engineer at Zeotap
* Bruno Berriso, engineer at Expero

If you're interested in signing up, here's the link: https://www.experoinc.com/get/janusgraph-user-group.

Thanks,
Ted


Re: MapReduce reindexing with authentication

Boxuan Li
 

Hi Marc, you are right, we are indeed using this -files option :)

On May 14, 2021, at 8:06 PM, hadoopmarc@... wrote:

Hi Boxuan,

Using existing mechanisms for configuring mapreduce would be nicer, indeed.

Upon reading this hadoop command, I see a GENERIC_OPTIONS env variable read by the mapreduce client, which can have a -files option. Maybe it is possible to include a jaas file that points to the (already installed?) keytab file on the workers?

Best wishes,     Marc


Re: MapReduce reindexing with authentication

hadoopmarc@...
 

Hi Boxuan,

Using existing mechanisms for configuring mapreduce would be nicer, indeed.

Upon reading this hadoop command, I see a GENERIC_OPTIONS env variable read by the mapreduce client, which can have a -files option. Maybe it is possible to include a jaas file that points to the (already installed?) keytab file on the workers?

Best wishes,     Marc


MapReduce reindexing with authentication

Boxuan Li
 

We have been using a yarn cluster to run MapReduce reindexing (Cassandra + Elasticsearch) for a long time. Recently, we introduced Kerberos-based authentication to the Elasticsearch cluster, meaning that worker nodes need to authenticate via a keytab file.

We managed to achieve this by using a hadoop command to include the keytab file when submitting the MapReduce job. Hadoop automatically copies this file and distributes it to the working directory of all worker nodes. This works well for us, except that we have to make changes to the MapReduceIndexManagement class so that it accepts an org.apache.hadoop.conf.Configuration object (which is created by org.apache.hadoop.util.ToolRunner) rather than instantiating one by itself. We are happy to submit a PR for this, but I would like to hear if there is any better way of handling this.
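
For reference, such a submission looks roughly like this (jar, class, and file names are placeholders):

hadoop jar reindex-job.jar ReindexDriver -files /path/to/service.keytab <job args>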

Cheers,
Boxuan


Multiple entries with same key on mixed index

toom@...
 

Hello,

I encountered the error described in issue #1916 (https://github.com/JanusGraph/janusgraph/issues/1916) on a mixed index (Lucene). When I list the property keys of the index, they are all duplicated.

I haven't identified the root cause and I don't know how to reproduce it.

I would like to find a solution to repair the inconsistency without losing my data.

The exposed API of JanusGraphManagement doesn't seem to be helpful: a mixed index can't be removed, and an IllegalArgumentException is raised when I try to retrieve the index status. Removing the index in Lucene or removing the index backend from the configuration doesn't help either.

So I've tried to find a solution using the internal API. Is it safe to delete the Lucene data files and remove the schema vertex related to the mixed index?

((ManagementSystem) mgmt).getWrappedTx()
  .getSchemaVertex(JanusGraphSchemaCategory.GRAPHINDEX.getSchemaName("indexName"))
  .remove();

Is there a reason not to permit removing a mixed index?

Best wishes

Toom.


Re: Support for DB cache for Multi Node Janus Server Setup

pasansumanathilake@...
 

On Thu, May 6, 2021 at 11:36 PM, <hadoopmarc@...> wrote:
Hi Marc,

Thanks for the reply. Yeah, it's true that a multi-node setup uses the same storage backend. However, what I am referring to here is the JanusGraph database cache - https://docs.janusgraph.org/basics/cache/#database-level-caching - which uses JanusGraph heap memory.
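
For concreteness, these are the options I mean (illustrative values only, based on that docs page):

JanusGraphFactory.Builder b = JanusGraphFactory.build();
b.set("cache.db-cache", true);        // enable database-level caching on this JVM's heap
b.set("cache.db-cache-time", 180000); // ms before a cached entry expires
b.set("cache.db-cache-size", 0.25);   // < 1.0 means a fraction of the heap
JanusGraph graph = b.open();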


Storing and reading connected component RDD through OutputFormatRDD & InputFormatRDD

anjanisingh22@...
 

Hi All,

I am using the connected component vertex program to find all the connected nodes in the graph and then using the resulting RDD for further processing. I want to store that RDD at some output location so that I can re-use it and don't have to re-run the connected component vertex program, which is time-consuming.

I see the TinkerPop library has OutputFormatRDD to save data. I tried:

outputFormatRDD.writeGraphRDD(graphComputerConfiguration, uniqueRDD); // throws a ClassCastException, because the connected component vertex program's output RDD value is a list, which cannot be cast to VertexWritable

 

outputFormatRDD.writeMemoryRDD(graphComputerConfiguration, "memoryKey", uniqueRDD); // saves the RDD by creating a folder named after the memory key at the output location


I am not able to read the RDD back through InputFormatRDD.readMemoryRDD(), as it looks for data files in the layout of the SequenceFileInputFormat class.

Am I missing anything? Please let me know if you have tried these methods. I want to check whether we can use the out-of-the-box methods before writing our own.

Thanks,
Anjani



