Distributed weights (builder) #547
Comments
The new stuff focuses on the adjacency dataframe as its core data structure & keeps methods quite lean to make exactly this possible... or at least we hope! We'd still need a testbench to profile whether we need to... For a distributed constructor, I think the existing...
Nope.
Do you know if it is planned? (or even feasible?)
Planned would be a bit too strong a word. We have discussed that and concluded that it may be feasible, but it may also be better to guide people to use two steps: first an sindex query over partitions, and then an sindex query over geometries within the subset of matching partitions. That is what the current implementation of distributed...
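For illustration, a minimal sketch of that two-step pattern with today's APIs might look like the snippet below. It assumes a `dask_geopandas.GeoDataFrame` whose spatial partitions have already been computed (e.g. with `calculate_spatial_partitions()`); the function name and the one-partition-at-a-time materialisation are purely illustrative, not a proposal for how a distributed sindex would actually be implemented.

```python
import pandas as pd

def two_step_query(ddf, query_geom, predicate="intersects"):
    """Illustrative two-step query: partitions first, then geometries.

    ``ddf`` is assumed to be a dask_geopandas.GeoDataFrame with
    ``spatial_partitions`` already computed.
    """
    # Step 1: sindex query over the partition extents to find candidate partitions.
    candidates = ddf.spatial_partitions.sindex.query(query_geom, predicate=predicate)
    # Step 2: sindex query over the geometries within those partitions only.
    hits = []
    for i in candidates:
        part = ddf.get_partition(int(i)).compute()  # materialise one partition
        matches = part.sindex.query(query_geom, predicate=predicate)
        hits.append(part.iloc[matches])
    # Assumes at least one partition matched; a real implementation would
    # handle the empty case and keep the result lazy.
    return pd.concat(hits)
```

Whether something like this should live behind a single call or stay as a documented two-step recipe is exactly the trade-off discussed above.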
Maybe we can see what’s required for the constructors and retrofit it either with a makeshift sindex or feeding it upstream. I’m not sure how it’d interplay with the overlapping computation across chunks…
As the new generation of weights is coming to fruition (#534), I wanted to drop an issue to collate a proposal and some ideas @martinfleis and I have been fleshing out over the past couple of years. If anything, at least this can be a space to discuss whether it'd make sense to have this on the roadmap at all and, if so, how it can be squared up with all the ongoing parallel plans on weights.
Idea: support (at least) part of the weights functionality on parallelisable and distributed data structures
Why: building weights for larger-than-memory datasets would lay the foundation for spatial analytics (i.e., PySAL functionality) at planetary scale. My own take is that in the last ten years, we've gotten away without this because RAM and CPUs have grown faster than the data we were using. I think this is changing because we're able to access datasets significantly bigger (even just at national scale) on which we'd like to run pysal.
How: our proposal (and this is very much up for debate too!) is to build functionality that stores weights as adjacency matrices held in `dask.dataframe` objects, which don't need to live in memory in full. To build them, we could rely on `dask_geopandas.GeoDataFrame` objects. This approach has a couple of advantages: the build can be parallelised across chunks, and the resulting weights never need to fit in memory all at once.

If the above seems reasonable, probably the main technical blocker is defining spatial relationships across different chunks (for geometries within the same chunk it's straightforward), and then possibly merging the results at the end. I know Martin had some ideas here, and there is precedent we built out of necessity, and in very ad-hoc ways, for the Urban Grammar here (note this all predates the 0.1 release of `dask-geopandas`, so we didn't have a `dask_geopandas.GeoDataFrame` to build upon; some of that might be redundant, or it might be possible to notably streamline it).
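To make the proposed shape a bit more concrete, below is a rough, hypothetical sketch of the within-chunk half of such a builder: an adjacency table of (focal, neighbor, weight) rows computed per partition and returned as a lazy `dask.dataframe`. The function names, the `intersects` predicate, and the integer-id assumption in the `meta` are all illustrative, and it assumes geopandas >= 0.12, where `sindex.query` accepts an array of geometries. Crucially, the cross-chunk relationships that are the real blocker are not handled here.

```python
import pandas as pd

def _chunk_adjacency(gdf):
    # Pairs of geometries *within a single chunk* that satisfy the predicate.
    # sindex.query with an array of geometries returns (input idx, tree idx).
    left, right = gdf.sindex.query(gdf.geometry, predicate="intersects")
    adj = pd.DataFrame(
        {
            "focal": gdf.index.to_numpy()[left],
            "neighbor": gdf.index.to_numpy()[right],
            "weight": 1.0,
        }
    )
    return adj[adj["focal"] != adj["neighbor"]]  # drop self-pairs

def within_chunk_adjacency(ddf):
    # ddf: a dask_geopandas.GeoDataFrame.
    # Result: a lazy dask.dataframe of adjacency triplets, never fully in memory.
    meta = pd.DataFrame(
        {
            "focal": pd.Series(dtype="int64"),  # assumes integer ids
            "neighbor": pd.Series(dtype="int64"),
            "weight": pd.Series(dtype="float64"),
        }
    )
    return ddf.map_partitions(_chunk_adjacency, meta=meta)
```

Cross-chunk pairs would presumably come from a second pass over neighbouring/overlapping partitions (which is where the Urban Grammar precedent and the sindex discussion above come in), with the two pieces concatenated into the final adjacency dataframe.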