Re: Getting org.janusgraph.graphdb.database.idassigner.IDPoolExhaustedException consistently
Hi,
There does not seem to be much that helps in finding a root cause (no similar questions or issues in history). The most helpful thing I found is the following
By hadoopmarc@... · #5934

Getting org.janusgraph.graphdb.database.idassigner.IDPoolExhaustedException consistently
Hi
I am getting the below exception while ingesting data to an existing graph
Job aborted due to stage failure: Task 349 in stage 2.0 failed 10 times, most recent failure: Lost task 349.9 in stage
By sauverma · #5933

Re: Backend data model deserialization
Awesome thank you all for the great info and recent presentations! We are prototyping bulk export + deserialize from Cloud Bigtable over approx. the next week and will try to report back if we can
By Elliot Block <eblock@...> · #5932

Re: ID block allocation exception while creating edge
Hi Anjani,
One thing that does not feel good is that you create and commit a transaction for every row of your dataframe. Although I do not see how this would interfere with ID allocation, best
By hadoopmarc@... · #5931

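One way to address Marc's concern above (a transaction created and committed per dataframe row) is to reuse a single transaction across a whole partition and commit in batches. A hedged Gremlin-Groovy sketch, assuming an open JanusGraph instance `graph` is available on the executor; `rows`, `batchSize`, and the label/property names are illustrative, not from the thread:

```groovy
// Sketch: one commit per batch instead of one per row
def batchSize = 10000
def count = 0
def tx = graph.newTransaction()
rows.each { row ->
    def v = tx.addVertex('label1')          // placeholder vertex label
    v.property('property1', row.property1)  // placeholder property
    if (++count % batchSize == 0) {
        tx.commit()                         // commit the batch, then start fresh
        tx = graph.newTransaction()
    }
}
tx.commit()                                 // flush the final partial batch
```
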
Re: Making janus graph client to not use QUORUM
Thanks Marc, i will try that option.
By anjanisingh22@... · #5930

Re: ID block allocation exception while creating edge
Sharing details on how I am creating nodes/edges, to make sure nothing is wrong there that could be resulting in the ID allocation failures.
I am creating one static JanusGraph instance object on each Spark
By anjanisingh22@... · #5929

Re: ID block allocation exception while creating edge
Thanks for the response, Marc. Yes, I also think the changes are not getting picked up for some reason, but I am not able to figure out why.
ids.block-size is updated in the config file of all Janus nodes and after
By anjanisingh22@... · #5928

Re: ID block allocation exception while creating edge
Hi Anjani,
It is still most likely that the modified value of "ids.block-size" somehow does not come through. So, are you sure that
all JanusGraph instances are closed before using the new value
By hadoopmarc@... · #5927

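Marc's question about closed instances matters because `ids.block-size` is a GLOBAL_OFFLINE option stored in the backend, so editing it only in each node's properties file can silently fail to take effect. A minimal Gremlin Console sketch of the management-API route, assuming all other JanusGraph instances have been closed first (the value 1000000 is only an example):

```groovy
mgmt = graph.openManagement()
mgmt.set('ids.block-size', 1000000)  // GLOBAL_OFFLINE: requires all other instances closed
mgmt.commit()
// close and reopen the graph so the new value is picked up
```
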
Re: ID block allocation exception while creating edge
Hi Marc,
I tried setting ids.num-partitions = number of executors through code, not directly in the Janus global config files, but no luck. Added the below properties but it didn't
By anjanisingh22@... · #5926

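For reference, the ID-allocation options discussed in this thread normally live in the graph's properties file rather than being set through code. A sketch of the relevant fragment; the file name and values are illustrative, and changes to these options only take effect once existing JanusGraph instances are closed:

```
# conf/janusgraph-cql.properties (illustrative)
ids.block-size=1000000
ids.num-partitions=10
ids.renew-timeout=3600000
```
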
Re: MapReduce reindexing with authentication
Hi Marc,
That is an interesting solution. I was not aware of the mapreduce.application.classpath property. It is not well documented, but from what I understand, this option is used primarily to
By Boxuan Li · #5925

Re: MapReduce reindexing with authentication
Hi Boxuan,
Yes, you are right, I mixed things up by wrongly interpreting GENERIC_OPTIONS as an env variable. I did some additional experiments, though, bringing in new information.
1. It is possible
By hadoopmarc@... · #5924

Re: Making janus graph client to not use QUORUM
Hi Anjani,
To see what exactly happens with local configurations, I did the following:
from the binary janusgraph distribution I started janusgraph with "bin/janusgraph.sh start" (this implicitly
By hadoopmarc@... · #5923

Re: Backend data model deserialization
Hi Elliot,
At Zeotap we've taken the same route to enable OLAP consumers via Apache Spark. We presented it in the recent JanusGraph meet-up at
By sauverma · #5922

Re: Backend data model deserialization
Hi Elliot,
I am not aware of existing utilities for deserialization, but as Marc has suggested, you might want to see if there are old Titan resources regarding it, since the data model hasn’t been
By Boxuan Li · #5921

Re: Backend data model deserialization
Hi Elliot,
There should be some old Titan resources that describe how the data model is binary coded into the row keys and row values. Of course, it is also implicit from the JanusGraph source
By hadoopmarc@... · #5920

Re: MapReduce reindexing with authentication
Hi Marc,
Thanks for your explanation. Just to avoid confusion, GENERIC_OPTIONS itself is not an env variable, but a set of configuration options
By Boxuan Li · #5919

Backend data model deserialization
Hello,
Is there any supported way (e.g. a class/API) for deserializing raw data model rows, i.e. to get from raw Bigtable bytes to Vertex/edge list objects (in
By Elliot Block <eblock@...> · #5918

JanusGraph Meetup #4 Recording
Hello,
Thanks to all who attended the meetup yesterday. If you weren't able to make it, you can find the recording at: https://www.experoinc.com/online-seminar/janusgraph-community-meetup.
Thanks to
By Ted Wilmes · #5917

Re: Query Optimisation
Hi Vinayak,
Please study the as(), select(), project() and cap() steps from the TinkerPop ref docs. The arguments of project() do not reference the keys of side effects but rather introduce new keys
By hadoopmarc@... · #5916

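Marc's distinction can be illustrated briefly: select() refers back to labels introduced earlier with as() (or side-effect keys from aggregate()), whereas project() introduces brand-new keys that are filled in by its by() modulators. A sketch reusing the placeholder property names from this thread:

```groovy
// select() reads labels created earlier in the traversal
g.V().has('property1', 'V1').as('v1').
  outE().has('property1', 'E1').as('e').
  inV().as('v2').
  select('v1', 'e', 'v2')

// project() introduces new keys of its own
g.V().has('property1', 'V1').
  project('vertex', 'degree').
    by(valueMap()).
    by(outE().count())
```
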
Re: Query Optimisation
Hi Marc,
I am using the following query now.
g2.inject(1).union(
V().has('property1', 'V1').aggregate('v1').outE().has('property1', 'E1').limit(100).aggregate('e').inV().has('property2',
By Vinayak Bali · #5915