Investigate replication of ECB stress test methodology 2022 #126
Comments
Hi @sandoema, FYI |
Hi Joe, can you please add jtotoafs to this issue? Many thanks |
Sorry, it's jtoroafs. Juan |
Dottori__flood____template.pdf

Find attached Matt Sandow's comments about Climada progress from the Thursday OSC meeting: “MV: Finished Spanish flood scenarios using CMIP data sets applied to JRC damage functions. Next step is to test Spanish government flood data with 1 sqm resolution. This effectively replicates the ECB stress test for Spain at the 1 sqm level. Problems with large file sizes. JM gives some recommendations on file optimisation through chunking/Zarr format on S3 store. Action with MV to take this to the Wed technical meeting. Use same approach as WRI data treatment.” Feel free to watch it live here.

• Climada examples for JRC, ISIMIP 2a and 2b are finished, and the results can be compared for the Spanish power plants.
• With respect to the Spanish government flood data, we have four different datasets (floodable zones, population impact, economic impact and flood intensity). We are interested in the last one, flood intensity (Spanish flood intensity maps). This data is 1 m resolution, which is incredibly good but poses a problem for computing. Joseph offered to help me onboard this data to the OSC S3 store in Zarr format to speed up the calculations (Wed 24/05); a sketch of that approach follows this comment. In relation to this, we are interested in knowing which data catalog OSC is interested in and how to define it (the data commons team could help with this). We assume official government data is mandatory and other data sources such as JRC and ISIMIP can be treated as secondary. We will add documents explaining in detail the methodology for each hazard dataset, the vulnerability functions used, etc.
• Climada uses Huizinga damage functions for the residential asset type. The industrial asset type should be added in collaboration with the Climada developers. This is also an opportunity to add all the other asset types and regions available in Huizinga. Moreover, the granularity of the damage functions should be improved using disaster loss data.
• The exposure dataset used (Global Power Plant Database) could be improved with precise pricing of the power plants, instead of using MW/year as a proxy for value. We are working on a Python class to value power plants that will translate physical damage into economic losses.
• Marcin has made progress on flood theory, examples of use and the ECB methodology. Find them attached. These are preliminary documents and we will still have to work on them. |
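A minimal sketch of the chunked-Zarr-on-S3 approach recommended above; the bucket path, array shape and chunk size are illustrative assumptions, not an agreed layout:

```python
# Sketch: store a large flood-intensity grid as a chunked Zarr array on S3,
# so that windows can be written (and later read) without loading ~1 TB at once.
# The bucket path, shape and chunk size below are placeholders.
import numpy as np
import s3fs
import zarr

fs = s3fs.S3FileSystem()  # credentials taken from the environment
store = s3fs.S3Map(root="os-climate-bucket/hazard/flood_es_rp10.zarr", s3=fs)

z = zarr.open(
    store,
    mode="w",
    shape=(10140, 12056),   # illustrative ~100 m grid for mainland Spain
    chunks=(1000, 1000),    # each chunk becomes one object on S3
    dtype="f4",
    fill_value=float("nan"),
)

# Each pre-processed window is written independently; Zarr only touches the
# chunks that overlap the slice, so memory use stays bounded.
z[0:1000, 0:1000] = np.zeros((1000, 1000), dtype="f4")  # stand-in for real data
```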
Wednesday's 24/05/2023 meeting was very helpful. Joseph (@joemoorhouse) and I discussed how to onboard the Spanish government data to the OSC S3 store. The data is 1 m resolution, and for the 10-year return period the file size is 1000 GB (1 TB). Given that the resolution is extremely high, we can downsample it to 100 m.

On the other hand, the data is provided by the Spanish government chunked into tiles, so some pre-computing must be done to combine it into a single raster file on S3. First, the raster shape and affine transformation of the S3 raster file must be derived. Second, every Spanish government file must be read in 100x100 windows and inserted into the new raster file. From the map provided by the Spanish government (https://sig.mapama.gob.es/snczi/) we can work out the shape and affine transformation of the raster file.

Coordinate system used: EPSG 25830 (https://epsg.org/crs_25830/ETRS89-UTM-zone-30N.html)
upper-left corner: (-110000, 4914036)
upper-right corner: (1095561, 4914036)
lower-left corner: (-110000, 3900000)
lower-right corner: (1095561, 3900000)
The extent is approximate and can be narrowed.
width: 1095561 + 110000 = 1205561
height: 4914036 - 3900000 = 1014036
affine transformation: translation vector = (-110000, 3900000), i.e. the lower-left corner.
The Canary Islands are to be treated as a separate raster file. A sketch of the grid derivation and windowed insert follows this comment.

Finally, GitHub issues do not allow uploading Jupyter notebooks, so Joseph will create a new folder named assessments in the src folder of the OSC hazard repo (https://github.com/os-climate/hazard). |
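A sketch of how the grid derivation and windowed insert could look with rasterio; the file names and the 100 m target resolution are assumptions for illustration:

```python
# Sketch: derive the target grid from the corner coordinates quoted above and
# insert one Spanish-government tile into it, resampling 1 m -> 100 m on read.
# "gov_tile.tif" and "spain_flood_rp10.tif" are placeholder file names.
import rasterio
from rasterio.enums import Resampling
from rasterio.transform import from_origin
from rasterio.windows import from_bounds

RES = 100.0  # target resolution in metres (downsampled from the 1 m source)
LEFT, BOTTOM, RIGHT, TOP = -110000, 3900000, 1095561, 4914036
width = round((RIGHT - LEFT) / RES)   # (1095561 + 110000) / 100 ≈ 12056 columns
height = round((TOP - BOTTOM) / RES)  # (4914036 - 3900000) / 100 ≈ 10140 rows

# rasterio anchors the affine transform at the upper-left corner of the grid.
transform = from_origin(LEFT, TOP, RES, RES)

profile = dict(
    driver="GTiff", width=width, height=height, count=1,
    dtype="float32", crs="EPSG:25830", transform=transform, nodata=-9999.0,
)
with rasterio.open("spain_flood_rp10.tif", "w", **profile) as dst:
    with rasterio.open("gov_tile.tif") as src:  # one of the government tiles
        # Locate the tile within the target grid, then average-resample while reading.
        win = from_bounds(*src.bounds, transform=transform).round_offsets().round_lengths()
        data = src.read(1, out_shape=(int(win.height), int(win.width)),
                        resampling=Resampling.average)
        dst.write(data, 1, window=win)
```

In practice this would loop over all government tiles, writing each into its own window of the target raster (or of the Zarr array sketched earlier).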
Where should we leave all this information?
Juan
|
Hi @mvarfima, |
Great, I'll do it that way. By the way, we can guess the exact raster shape using Spain's lat-lon bounds (see the sketch below). I will create a detailed Jupyter notebook for the onboarding. |
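A short sketch of pinning down the raster shape from lat-lon bounds; the bound values here are rough placeholders, not verified figures:

```python
# Sketch: project approximate lat-lon bounds for mainland Spain into EPSG:25830
# and derive the raster shape at 100 m resolution. Note that projecting only the
# two corners of a lat-lon box approximates the true projected extent, since the
# box edges curve under the projection.
from pyproj import Transformer

to_utm = Transformer.from_crs("EPSG:4326", "EPSG:25830", always_xy=True)
left, bottom = to_utm.transform(-9.5, 35.9)  # ~SW corner (placeholder)
right, top = to_utm.transform(4.6, 43.9)     # ~NE corner (placeholder)

res = 100.0
width = round((right - left) / res)
height = round((top - bottom) / res)
print(width, height)
```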
The aim is to identify the underlying sources of the hazard indicators in sufficient detail that they can be onboarded, so as to replicate the calculation end-to-end.
The first step is to identify, in as much detail as possible, the sources used (e.g. Climada) and then investigate how these can be used efficiently within OSC.