
Re: Logging vertex program

Nikita Pande
 

Hi Marc,


Exactly, I was surprised to see the logs in stderr.

Thanks,
Nikita

On Wed, 1 Jun, 2022, 9:04 pm , <hadoopmarc@...> wrote:
Hi Nikita,

Do you use the Spark web UI? In the Executors tab you can follow the stderr link and see any logged or printed output. No idea why they use stderr.

Best wishes,    Marc


Re: Logging vertex program

hadoopmarc@...
 

Hi Nikita,

Do you use the Spark web UI? In the Executors tab you can follow the stderr link and see any logged or printed output. No idea why they use stderr.

Best wishes,    Marc
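
For illustration, a minimal sketch of logging from inside a VertexProgram's execute() method (the class and logger names are assumptions, not code from this thread); both lines end up in the executor stderr file that the Spark web UI links to:

import org.apache.tinkerpop.gremlin.process.computer.Memory;
import org.apache.tinkerpop.gremlin.process.computer.Messenger;
import org.apache.tinkerpop.gremlin.structure.Vertex;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Fragment of a hypothetical VertexProgram; only execute() is shown,
// the remaining methods of the VertexProgram interface are omitted.
public class LoggingVertexProgramSketch {

    private static final Logger LOG = LoggerFactory.getLogger(LoggingVertexProgramSketch.class);

    // Called once per vertex per iteration when run on SparkGraphComputer.
    public void execute(final Vertex vertex, final Messenger<Long> messenger, final Memory memory) {
        // Both of these appear in the stderr file of the Spark executor that
        // processed this vertex (Executors tab -> stderr link in the Spark UI).
        LOG.info("Processing vertex {}", vertex.id());
        System.err.println("Processing vertex " + vertex.id());
    }
}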


Re: upgrade gremlin version

hadoopmarc@...
 

Hi Senthilkumar,

As far as I remember, the Gremlin.version() output of 1.2.1 in the Gremlin Console of the JanusGraph distribution is caused by a bug somewhere. You can look in the lib folder and see that janusgraph-0.6.1 actually uses gremlin-3.5.1. Gremlin 3.6.x will become available in a later JanusGraph release. It is not easy or advisable to try to upgrade the Gremlin version yourself.

Best wishes,   Marc


upgrade gremlin version

senthilkmrr@...
 

The latest JanusGraph version is running on Gremlin version 1.2, but the latest Gremlin version is 3.6. How do I upgrade to the latest Gremlin version?
--
Senthilkumar


Logging vertex program

Nikita Pande
 

Hi team,

I am trying to add logs in a vertex program, but I can't find them in the Spark executor or JanusGraph Server logs.
How can I get the vertex program logs?

Thanks and Regards,
Nikita


Re: Janusgraph cluster set-up

hadoopmarc@...
 
Edited

Hi Senthilkumar,

Sounds like the "getting started deployment scenario", see https://docs.janusgraph.org/operations/deployment/#getting-started-scenario

The JanusGraph servers only need to have the exact same config and will cooperate automagically. Setting up the ScyllaDB cluster is described elsewhere, as is setting up load balancing/virtual IP.

Best wishes,    Marc
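
To make the "exact same config" concrete, here is a minimal sketch of the shared CQL configuration (the hostnames and keyspace are placeholders, not taken from this thread); every JanusGraph Server in the getting-started scenario points its janusgraph.properties at the same storage backend, which in embedded Java form looks like:

import org.janusgraph.core.JanusGraph;
import org.janusgraph.core.JanusGraphFactory;

public class SharedClusterConfigSketch {
    public static void main(String[] args) throws Exception {
        // The same storage settings on every JanusGraph Server instance;
        // ScyllaDB is addressed through the CQL (Cassandra-compatible) backend.
        JanusGraph graph = JanusGraphFactory.build()
                .set("storage.backend", "cql")
                .set("storage.hostname", "scylla1,scylla2,scylla3") // placeholder contact points
                .set("storage.cql.keyspace", "janusgraph")
                .open();
        graph.close();
    }
}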


Janusgraph cluster set-up

senthilkmrr@...
 

Please let me know how to create a multi-node JanusGraph cluster set-up. I am using Cassandra/ScyllaDB for data storage.
--
Senthilkumar


New committer: Clement de Groc

Boxuan Li
 

On behalf of the JanusGraph Technical Steering Committee (TSC), I'm pleased to welcome a new committer to the project!

Clement de Groc has been a solid contributor. He has already contributed many performance improvements and bug fixes, and has provided many PR reviews.

Congratulations, Clement!


Re: HBase read after write not working with janusgraph-0.6.1 but was working with janusgraph-0.6.0

hadoopmarc@...
 

Hi Nikita,

JanusGraph can be run as a server or in embedded mode. gremlin-server.yaml is for configuring JanusGraph Server. With JanusGraphFactory in the Gremlin Console you instantiate embedded JanusGraph. In the latter case you can query JanusGraph without a remote connection to JanusGraph Server.

See: https://docs.janusgraph.org/operations/deployment/

Marc
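
A minimal sketch contrasting the two modes (the hostnames, the properties path and the traversal source name "g" are assumptions):

import org.apache.tinkerpop.gremlin.driver.remote.DriverRemoteConnection;
import org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversalSource;
import org.janusgraph.core.JanusGraph;
import org.janusgraph.core.JanusGraphFactory;

import static org.apache.tinkerpop.gremlin.process.traversal.AnonymousTraversalSource.traversal;

public class EmbeddedVsRemoteSketch {
    public static void main(String[] args) throws Exception {
        // Embedded: this JVM opens the storage backend directly; no JanusGraph Server is involved.
        JanusGraph graph = JanusGraphFactory.open("/etc/opt/janusgraph/janusgraph.properties");
        GraphTraversalSource gEmbedded = graph.traversal();

        // Remote: queries are sent to a JanusGraph Server that was configured via
        // gremlin-server.yaml and serves the graph under the traversal source name "g".
        GraphTraversalSource gRemote = traversal()
                .withRemote(DriverRemoteConnection.using("localhost", 8182, "g"));

        gEmbedded.close();
        gRemote.close();
        graph.close();
    }
}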


Re: HBase read after write not working with janusgraph-0.6.1 but was working with janusgraph-0.6.0

Nikita Pande
 
Edited

Hi @hadoopmarc,

Is there a difference between configuring the JanusGraph Server conf, i.e. gremlin-server.yaml:

graphs: {
  graph: /etc/opt/janusgraph/janusgraph.properties
}

vs configuring it from the console, e.g. graph2 = GraphFactory.open("/etc/opt/janusgraph/janusgraph.properties")?


Re: HBase read after write not working with janusgraph-0.6.1 but was working with janusgraph-0.6.0

hadoopmarc@...
 

Hi Nikita,

Possibly, your previous setup included JanusGraph Server and you had a remote connection from the Gremlin Console to JanusGraph Server. In that case, successful gremlin requests are committed automatically, see: https://tinkerpop.apache.org/docs/current/reference/#considering-transactions

Marc
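
A minimal sketch of the difference for the embedded case (the properties path is an assumption): with embedded JanusGraph the write only becomes visible to later sessions after an explicit commit, whereas a remote connection to JanusGraph Server commits each successful request automatically.

import org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversalSource;
import org.janusgraph.core.JanusGraph;
import org.janusgraph.core.JanusGraphFactory;

public class EmbeddedCommitSketch {
    public static void main(String[] args) throws Exception {
        JanusGraph graph = JanusGraphFactory.open("/etc/opt/janusgraph/janusgraph.properties");
        GraphTraversalSource g2 = graph.traversal();

        // The write stays in the thread-bound transaction until it is committed;
        // without the commit it is lost when the console/JVM is closed.
        g2.addV().iterate();
        g2.tx().commit();

        // A read in a fresh transaction now sees the committed vertex.
        System.out.println(g2.V().count().next());

        graph.close();
    }
}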


Re: HBase read after write not working with janusgraph-0.6.1 but was working with janusgraph-0.6.0

Nikita Pande
 
Edited

Hi @hadoopmarc, thanks for the reply.

Did this change after the upgrade from v0.6.0 to v0.6.1? I couldn't find the difference... It was working with v0.6.0 without tx().commit().

Now I am running the following steps:
g2.tx().begin()
g2.addV()
g2.tx().commit()
g2.V().count()

It seems to be fine now


Re: HBase read after write not working with janusgraph-0.6.1 but was working with janusgraph-0.6.0

hadoopmarc@...
 

Hi Nikita,

Sorry for asking the obvious, but are you sure you committed the addV() transaction with a g2.tx().commit() before closing the Gremlin Console?

Best wishes,    Marc


HBase read after write not working with janusgraph-0.6.1 but was working with janusgraph-0.6.0

Nikita Pande
 

So basically g.addV() is adding vertices to the HBase server, but retrieval after a restart of the Gremlin Console is not working.
Step 1: In the Gremlin Console, g2.addV() followed by g2.V().count() returns 124.

Step 2: Restart the Gremlin Console and run g2.V().count():

gremlin> g2.V().count()

14:04:54 WARN  org.janusgraph.graphdb.transaction.StandardJanusGraphTx  - Query requires iterating over all vertices [()]. For better performance, use indexes

==>123

This is still the old value.

However, the same works with janusgraph-0.6.0.


Re: Bulk Loading with Spark

Joe Obernberger
 

Thank you Marc - something isn't right with my code - debugging.  Right now the graph is 4,339,690 vertices and 15,707,179 edges, but that took days to build, and is probably 5% of the data.
Querying the graph is fast.

-Joe

On 5/22/2022 7:53 AM, hadoopmarc@... wrote:
Hi Joe,

What is slow? Can you please check the Expero blog series and compare to their reference numbers (per parallel spark task):

https://www.experoinc.com/post/janusgraph-nuts-and-bolts-part-1-write-performance

Best wishes,

Marc






Re: Bulk Loading with Spark

hadoopmarc@...
 

Hi Joe,

What is slow? Can you please check the Expero blog series and compare to their reference numbers (per parallel spark task):

https://www.experoinc.com/post/janusgraph-nuts-and-bolts-part-1-write-performance

Best wishes,

Marc


Re: Bulk Loading with Spark

Joe Obernberger
 

Should have added - I'm connecting with:

JanusGraph graph = JanusGraphFactory.build()
        .set("storage.backend", "cql")
        .set("storage.hostname", "charon:9042, chaos:9042")
        .set("storage.cql.keyspace", "graph")
        .set("storage.cql.cluster-name", "JoeCluster")
        .set("storage.cql.only-use-local-consistency-for-system-operations", "true")
        .set("storage.cql.batch-statement-size", 256)
        .set("storage.cql.local-max-connections-per-host", 8)
        .set("storage.cql.read-consistency-level", "ONE")
        .set("storage.batch-loading", true)
        .set("schema.default", "none")
        .set("ids.block-size", 100000)
        .set("storage.buffer-size", 16384)
        .open();


-Joe

On 5/20/2022 5:28 PM, Joe Obernberger via lists.lfaidata.foundation wrote:
Hi All - I'm trying to use Spark to do a bulk load, but it's very slow.
The Cassandra cluster I'm connecting to is a bare-metal, 15-node cluster.

I'm using Java code to do the loading using:
GraphTraversalSource.addV and Vertex.addEdge in a loop.

Is there a better way?

Thank you!

-Joe



Bulk Loading with Spark

Joe Obernberger
 

Hi All - I'm trying to use Spark to do a bulk load, but it's very slow.
The Cassandra cluster I'm connecting to is a bare-metal, 15-node cluster.

I'm using Java code to do the loading using:
GraphTraversalSource.addV and Vertex.addEdge in a loop.

Is there a better way?

Thank you!

-Joe
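
For context, a minimal sketch of the loading pattern described above (the labels, property names and commit interval are illustrative assumptions, not the actual code):

import org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversalSource;
import org.apache.tinkerpop.gremlin.structure.Vertex;
import org.janusgraph.core.JanusGraph;
import org.janusgraph.core.JanusGraphFactory;

public class BulkLoadLoopSketch {
    public static void main(String[] args) throws Exception {
        JanusGraph graph = JanusGraphFactory.open("/etc/opt/janusgraph/janusgraph.properties");
        GraphTraversalSource g = graph.traversal();

        for (int i = 0; i < 10_000; i++) {
            // GraphTraversalSource.addV(...) to create vertices ...
            Vertex a = g.addV("item").property("idx", i).next();
            Vertex b = g.addV("item").property("idx", i).next();
            // ... and Vertex.addEdge(...) to link them.
            a.addEdge("linked", b);

            // Committing in batches keeps transactions bounded instead of
            // accumulating the whole load in one transaction.
            if (i % 1_000 == 0) {
                g.tx().commit();
            }
        }
        g.tx().commit();
        graph.close();
    }
}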




Re: NoSuchMethodError

hadoopmarc@...
 

Thanks for reporting back! Unfortunately, Java still has big problems using multiple versions of the same library, so your current approach is not a guarantee of success for future version updates (and even your current project may suffer from side effects you have not encountered yet):

https://www.theserverside.com/tip/Problems-with-Java-modules-still-plague-developers

A good point of reference is running JanusGraph OLAP queries in the Gremlin Console using spark master = local[*], because an extensive test suite ran successfully against that setup for each release. But practical projects, like yours, have requirements beyond those served by the Gremlin Console.

Best wishes,     Marc


Re: NoSuchMethodError

Joe Obernberger
 

Oh - my apologies.  I'm using JanusGraph 0.6 and Cassandra 4.0.4.

What I eventually did was get the source from GitHub and compile it with Cassandra 4. I'm using Spark 3.2.1.

-Joe

On 5/19/2022 1:50 AM, hadoopmarc@... wrote:
Hi Joe,

Your issue description is not complete. To start:
  • what version of JanusGraph do you use?
  • what spark master do you use?

The easiest way to find the guava version of JanusGraph is to download the zip distribution and check the lib folder.

Best wishes,    Marc




