
Investigate replication of ECB stress test methodology 2022 #126

Open
joemoorhouse opened this issue Apr 26, 2023 · 8 comments
Labels
enhancement New feature or request

Comments

@joemoorhouse
Collaborator

joemoorhouse commented Apr 26, 2023

The aim is to identify the underlying sources of the hazard indicators in sufficient detail that these can be on-boarded, so as to replicate the calculation end-to-end.

The first step is to identify, in as much detail as possible, the sources used (e.g. Climada) and then investigate how these can be used efficiently within OSC.

joemoorhouse added the enhancement (New feature or request) label Apr 26, 2023
@joemoorhouse
Collaborator Author

Hi @sandoema, FYI

@mvarfima

Hi Joe,

can you please add jtotoafs to this issue?

Many thanks

@jtoroafs

Sorry, it's jtoroafs, Juan

@mvarfima

mvarfima commented May 24, 2023

Dottori__flood____template.pdf
ECB_hazard_data___template.pdf
Flood_exercise___template.pdf
flood_example_spain_ISIMIP2b_Power.pdf
flood_example_spain_ISIMIP2a_Power.pdf
flood_example_spain_GOV_Power.pdf
flood_example_spain_JRC_Power.pdf

• Find attached Matt Sandow's comments about Climada progress from Thursday's OSC meeting: “MV: Finished Spanish flood scenarios using CMIP data sets applied to JRC damage functions. Next step is to test Spanish government flood data with 1 m (1 sqm) resolution. This effectively replicates the ECB stress test for Spain at 1 sqm level. Problems with large file sizes. JM gives some recommendations on file optimisation through chunking/ZARR format on S3 store. Action with MV to take this to the Wed technical meeting. Use same approach as WRI data treatment.” Feel free to watch it live here.

• The Climada examples for JRC, ISIMIP2a and ISIMIP2b are finished and the results can be compared for the Spanish power plants. With respect to the Spanish government flood data, we have four different datasets (floodable zones, population impact, economic impact and flood intensity). We are interested in the last one, flood intensity (Spanish flood intensity maps). This data is 1 m resolution, which is incredibly good, but poses a problem for computing. Joseph offered to help me onboard this data to the OSC S3 store in Zarr format to speed up the calculations (Wed 24/05); a sketch of that approach is included after this list. In relation to this, we are interested in knowing which data catalogue OSC wants and how to define it (the data commons team could help with this). We assume official government data is mandatory, and other data sources such as JRC and ISIMIP could be treated as secondary. We will add documents explaining in detail the methodology for each hazard dataset, the vulnerability function used, …

• Climada uses Huizinga damage functions for the residential asset type. The industrial asset type should be added in collaboration with the Climada developers. This is also an opportunity to add all the asset types and regions available in Huizinga. Moreover, the granularity of the damage functions should be improved using disaster loss data.

• The exposure dataset used (the Global Power Plant Database) could be improved with precise pricing of the power plants, instead of using MW/year as a proxy for value. We are working on a Python class to value power plants that will translate physical damage into economic damages.

• Marcin has made progress on flood theory, examples of use, and the ECB methodology; find the documents attached. These are preliminary documents and we will still have to work on them.
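A minimal sketch of the chunking/Zarr-on-S3 approach mentioned above, using xarray, dask, zarr and s3fs. The bucket URL, variable name and chunk sizes are placeholders, not the actual OSC configuration:

```python
import numpy as np
import s3fs
import xarray as xr

# Hypothetical bucket/prefix; the real OSC S3 location will differ.
store_url = "s3://osc-example-bucket/hazard/flood_spain_gov.zarr"

# Example flood-depth array (return_period x y x x); in practice this would be
# filled from the Spanish government rasters rather than zeros.
depth = xr.DataArray(
    np.zeros((1, 2000, 2000), dtype="float32"),
    dims=("return_period", "y", "x"),
    name="flood_depth",
)

# Chunk the dataset (requires dask) so each object written to S3 stays small.
ds = depth.to_dataset().chunk({"return_period": 1, "y": 1000, "x": 1000})

fs = s3fs.S3FileSystem(anon=False)
mapper = fs.get_mapper(store_url)
ds.to_zarr(mapper, mode="w", consolidated=True)
```

Chunks of roughly 1000 × 1000 float32 pixels are about 4 MB each before compression, so downstream reads can fetch only the tiles they need instead of the whole raster.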

@mvarfima

The Wednesday 24/05/2023 meeting was very helpful. Joseph (@joemoorhouse) and I discussed how to onboard the Spanish government data to the OSC S3 store. The data is 1 m resolution and, for the 10-year return period, the file size is 1,000 GB (1 TB). Given that the resolution is extremely high, we can downsample it to 100 m.
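A minimal sketch of that 1 m → 100 m downsampling, assuming the source tile is a GeoTIFF readable by rasterio; the file names are placeholders:

```python
import rasterio
from rasterio.enums import Resampling

factor = 100  # 1 m -> 100 m

with rasterio.open("flood_rp10_1m.tif") as src:  # placeholder input tile
    # Read the band resampled to 1/100 of the original width and height,
    # averaging the 1 m pixels that fall inside each 100 m output pixel.
    data = src.read(
        1,
        out_shape=(src.height // factor, src.width // factor),
        resampling=Resampling.average,
    )
    # Scale the transform so each output pixel covers 100 m x 100 m.
    transform = src.transform * src.transform.scale(
        src.width / data.shape[-1], src.height / data.shape[-2]
    )
    profile = src.profile.copy()
    profile.update(height=data.shape[-2], width=data.shape[-1], transform=transform)

with rasterio.open("flood_rp10_100m.tif", "w", **profile) as dst:
    dst.write(data, 1)
```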

On the other hand, the data is provided by the Spanish government split into tiles, and some pre-computing must be done to upload it as a single raster file to S3. First of all, the raster shape and affine transformation for the S3 raster file must be determined. Secondly, every Spanish government tile must be read in 100×100 windows and inserted into the new raster file (see the sketch after the corner coordinates below).

From the map provided by the Spanish government we can determine the shape and affine transformation of the raster file.

The coordinate system used is EPSG:25830.

upper-left corner: (-110000, 4914036)
upper-right corner: (1095561, 4914036)
lower-left corner: (-110000, 3900000)
lower-right corner: (1095561, 3900000)

The range is approximate and can be narrowed.

width: 1095561 + 110000 = 1205561 m (i.e. 1205561 pixels at 1 m resolution)
height: 4914036 - 3900000 = 1014036 m (i.e. 1014036 pixels at 1 m resolution)

Affine transformation translation vector: (-110000, 3900000), i.e. the lower-left corner.

The Canary Islands are to be treated as a separate raster file.
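A minimal sketch of assembling the single raster from the bounds above, assuming 1 m resolution and EPSG:25830. Note it anchors the affine transform at the upper-left corner, which is rasterio's convention for north-up rasters; the tile file list is a placeholder:

```python
import rasterio
from rasterio.transform import from_origin
from rasterio.windows import Window

# Bounds from the Spanish government map (EPSG:25830, metres).
left, right = -110000, 1095561
bottom, top = 3900000, 4914036
resolution = 1.0  # metres per pixel

width = int(right - left)   # 1205561
height = int(top - bottom)  # 1014036

# Affine transform anchored at the upper-left corner.
transform = from_origin(left, top, resolution, resolution)

profile = {
    "driver": "GTiff",
    "dtype": "float32",
    "count": 1,
    "width": width,
    "height": height,
    "crs": "EPSG:25830",
    "transform": transform,
    "tiled": True,
    "compress": "deflate",
    "nodata": -9999.0,
    "BIGTIFF": "YES",  # the mosaic is far larger than 4 GB
}

with rasterio.open("spain_flood_rp10_mosaic.tif", "w", **profile) as dst:
    for tile_path in ["tile_0001.tif"]:  # placeholder list of government tiles
        with rasterio.open(tile_path) as src:
            data = src.read(1)
            # Locate the tile inside the mosaic from its upper-left coordinate.
            col, row = ~transform * (src.bounds.left, src.bounds.top)
            window = Window(int(col), int(row), src.width, src.height)
            dst.write(data, 1, window=window)
```

The same windowed-write pattern would apply when targeting a Zarr store instead of a GeoTIFF; only the destination changes.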

Finally, GitHub issues do not allow uploading Jupyter notebooks, so Joseph will create a new folder named 'assessments' under the src folder of the OSC hazard repo.

@jtoroafs

jtoroafs commented May 29, 2023 via email

@joemoorhouse
Collaborator Author

Hi @mvarfima,
I'd suggest you put the notebook in a new folder 'onboarding' under notebooks, intended for all analyses around on-boarding a new dataset:
i.e. create 'onboarding' under this folder
https://github.com/os-climate/hazard/tree/main/notebooks
Assuming you have forked hazard and cloned your fork locally, please just add your notebook there (creating the new folder locally); then you can commit, push to your fork, and open a pull request when you have something you want to share on the main repo (which can be right away).

@mvarfima

mvarfima commented Jun 1, 2023

Great, I'll do it that way.

By the way, we can estimate the exact raster shape from Spain's lat-lon bounds. I will create a detailed Jupyter notebook for the onboarding.
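A minimal sketch of that estimate, reprojecting approximate mainland-Spain lon/lat bounds to EPSG:25830; the bound values and the 100 m target resolution are assumptions, not official figures:

```python
from pyproj import Transformer

# Approximate mainland-Spain bounding box in lon/lat (assumed values).
lon_min, lon_max = -9.4, 3.4
lat_min, lat_max = 35.9, 43.9
resolution = 100.0  # metres per pixel after downsampling

# Transform the corner points from WGS84 to EPSG:25830 (metres).
to_utm = Transformer.from_crs("EPSG:4326", "EPSG:25830", always_xy=True)
x_min, y_min = to_utm.transform(lon_min, lat_min)
x_max, y_max = to_utm.transform(lon_max, lat_max)

width = int((x_max - x_min) / resolution)
height = int((y_max - y_min) / resolution)
print(f"estimated raster shape: {height} rows x {width} cols")
```

Transforming only two corners is a rough estimate, since the projected bounding box is not exactly the box of the projected corners; the notebook can refine this against the government tiles themselves.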
