
Defense in depth for your research (project)

Defense in depth is a risk-mitigation approach that relies on multiple, overlapping, and complementary layers of safeguards, so that if one control fails, others remain effective.

Key assumption: any single mitigation measure can fail.
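The value of layering can be sketched numerically: if each safeguard can fail independently with some probability, the chance that *every* layer fails shrinks multiplicatively. The function below is an illustrative sketch under an independence assumption that rarely holds exactly in practice, so treat the result as an optimistic bound rather than a real risk estimate.

```python
def residual_risk(failure_probs):
    """Probability that ALL layers fail, assuming the layers
    fail independently of one another (an idealization)."""
    risk = 1.0
    for p in failure_probs:
        risk *= p
    return risk

# Three overlapping controls that each fail 10% of the time
# leave a combined residual risk of roughly 0.001 (0.1%).
print(residual_risk([0.1, 0.1, 0.1]))
```

Correlated failures (e.g. one vendor outage disabling several controls at once) break the independence assumption, which is exactly why defense in depth favors *complementary* layers, not copies of the same control.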

Dimensions of defense in research (projects) for an AI use case:

Using AI in research (projects) introduces the following risks:

Practical risk management

Suppose there is an AI use case in a research project.

Risk management framework adapted to the trinity of risks in research (projects)

Let’s adapt the Risk management framework (https://owaspai.org/goto/riskanalysis/) to the trinity of risks:
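One concrete way to apply such a framework is to keep a small risk register that groups risks by the trinity used in this document (people, technology & infrastructure, processes) and ranks them by likelihood times impact. The sketch below is a hypothetical illustration: the field names, scoring scale, and example entries are assumptions, not part of the OWASP framework or any official guideline.

```python
from dataclasses import dataclass, field

@dataclass
class Risk:
    description: str
    category: str            # "people" | "technology" | "processes"
    likelihood: int          # 1 (rare) .. 5 (almost certain) -- assumed scale
    impact: int              # 1 (minor) .. 5 (severe) -- assumed scale
    mitigations: list = field(default_factory=list)

    @property
    def score(self) -> int:
        # Simple likelihood-times-impact score, a common heuristic.
        return self.likelihood * self.impact

# Hypothetical example entries for an AI use case in a research project.
register = [
    Risk("Over-reliance on unvalidated AI output", "people", 4, 3,
         ["AI literacy training", "mandatory human review"]),
    Risk("Sensitive data sent to an external AI service", "technology", 3, 5,
         ["self-hosted models", "data classification policy"]),
    Risk("Unclear accountability for AI-assisted results", "processes", 3, 4,
         ["disclosure rules", "ethics board sign-off"]),
]

# Rank risks so the highest-scoring ones are mitigated first.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{risk.score:2d}  [{risk.category}] {risk.description}")
```

Keeping the register per category makes it easy to check that each of the three dimensions below actually has mitigations attached, rather than all effort going into the technical layer.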

People (Researchers, Teams, and Partners)

General mitigation measures involving people and the human factor

Technology & Infrastructure: AI Systems

General mitigation measures involving technology and infrastructure

Processes: Ethics, Integrity, and Governance

General mitigation measures for the three processes in research

Mapping the trinity of risks to existing guidelines and recommendations on AI use in research

| Trinity of risks | EU Guidelines on the responsible use of generative AI in research | Helmholtz “Recommendations for the use of artificial intelligence” |
| --- | --- | --- |
| Ethical risks | • Privacy, confidentiality, and IP rights | • Bias and prejudices due to training data<br>• Privacy and confidentiality |
| Integrity risks | • Responsibility for scientific output<br>• Transparent AI use<br>• Continuous AI literacy<br>• Sensitive activities impacting others | • (Scientific) information integrity |
| Governance risks | • National, EU & international legislation | • AI-related regulation<br>• Copyright and IP rights<br>• Privacy and confidentiality |