This repository contains the code and documentation for the Trustworthy and Ethical Assurance (TEA) platform—an application for building trustworthy and ethical assurance cases, developed by researchers at the Alan Turing Institute and University of York.
The UK's Responsible Technology Adoption Unit (Department for Science, Innovation and Technology) is also a project partner.
To use the TEA platform, please go to https://assuranceplatform.azurewebsites.net/.
Warning
The TEA platform is made available as a research preview and should not be used for any business-critical tasks. Breaking changes should be expected.
To view the documentation site, please go to https://alan-turing-institute.github.io/AssurancePlatform.
Trustworthy and ethical assurance is a methodology and procedure for developing a structured argument, which provides reviewable (and contestable) assurance that a set of claims about a normative goal of a data-driven technology is warranted, given the available evidence.
The following elements are central to this methodology and procedure:
- The SAFE-D Principles: a set of five operationalisable principles—Sustainability, Accountability, Fairness, Explainability, Data Stewardship—that have been carefully designed and refined to address real-world challenges associated with the design, development, and deployment of data-driven technologies.
- Assurance Cases: the documented argument that communicates the basis for how and why a goal has been achieved.
- Argument Patterns: starting templates for building assurance cases. They identify the types of claims (or sets of reasons) that need to be established to justify the associated top-level normative goal.
The Trustworthy and Ethical Assurance platform brings these elements together in a usable and accessible manner, and helps project teams to provide trustworthy and justifiable assurance about the processes they undertook when designing, developing, and deploying their technology or system.
The Trustworthy and Ethical Assurance application can be run locally or deployed on your own server or a cloud-based service (e.g. Azure). To get started, please visit the backend and frontend installation instructions on our documentation site.
The following resources provide additional information about the Trustworthy and Ethical Assurance framework and methodology:
- Burr, C., Arana, S., Gould Van Praag, C., Habli, I., Kaas, M., Katell, M., Laher, S., Leslie, D., Niederer, S., Ozturk, B., Polo, N., Porter, Z., Ryan, P., Sharan, M., Solis Lemus, J. A., Strocchi, M., & Westerling, K. (2024). Trustworthy and Ethical Assurance of Digital Health and Healthcare. https://doi.org/10.5281/zenodo.10532573
- Porter, Z., Habli, I., McDermid, J. et al. A principles-based ethics assurance argument pattern for AI and autonomous systems. AI Ethics 4, 593–616 (2024). https://doi.org/10.1007/s43681-023-00297-2
- Burr, C., & Powell, R. (2022). Trustworthy Assurance of Digital Mental Healthcare. The Alan Turing Institute. https://doi.org/10.5281/zenodo.7107200
- Burr, C., & Leslie, D. (2022). Ethical assurance: A practical approach to the responsible design, development, and deployment of data-driven technologies. AI and Ethics. https://doi.org/10.1007/s43681-022-00178-0
From March 2024 until September 2024, the project was funded by UKRI's BRAID programme as part of a scoping research award for the Trustworthy and Ethical Assurance of Digital Twins project.
Between April 2023 and December 2023, this project received funding from the Assuring Autonomy International Programme, a partnership between Lloyd’s Register Foundation and the University of York, which was awarded to Dr Christopher Burr.
Between July 2021 and June 2022, this project received funding from UKRI's Trustworthy Autonomous Systems Hub, which was awarded to Dr Christopher Burr (Grant number: TAS_PP_00040).