AI systems and AI models increasingly support research projects at every stage of the research lifecycle. They are used for literature review, research design, idea generation, data collection and analysis, content generation, coding, writing, archiving, sharing, and dissemination. In this workbook, we addressed the safe and secure use of AI in research projects through the prism of the trinity of good research: research ethics, research integrity, and research governance.
In the Introduction, we began with key definitions of AI systems and AI models, AI safety and AI security, and risk management. We highlighted that, before using any AI system, researchers should consider the safety and security of all its components: hardware, software, data, models, and networks.
In Part 1 (Theory), we showed that freedom of research is exercised within ethical, integrity, and governance constraints, and that AI both amplifies existing risks and introduces new ones. AI safety addresses risks that arise unintentionally, whereas AI security addresses risks created deliberately. These AI-related risks intersect with research ethics, research integrity, and research governance, each of which addresses different responsibilities. Research ethics concerns responsibility towards participants, society, and the environment. Research integrity focuses on responsibility towards scientific truth and, consequently, the behaviour of researchers. Research governance addresses responsibility for legal and regulatory compliance. We also reviewed existing AI risk management approaches and introduced a general risk management framework.
In Part 2 (Practice), we adapted this risk management framework to the trinity of risks in research: ethical, integrity-related, and governance-related. By identifying risks, evaluating their likelihood and impact, and selecting appropriate treatment strategies (mitigation, transfer, avoidance, or acceptance), researchers can make informed decisions about AI use. We then discussed AI use cases across the entire research lifecycle and demonstrated that AI-related risks occur at every stage of research. Because any single safeguard can fail, defence in depth across people, technology, and processes is essential for mitigation. We also provided general mitigation strategies covering all three dimensions.
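To make this concrete, the minimal sketch below shows one way a project-level risk register entry might be represented in code. The `Risk` class, the 1-to-5 likelihood and impact scales, and the simple likelihood-times-impact score are illustrative assumptions rather than part of the workbook's framework; only the four treatment options mirror those named above.

```python
from dataclasses import dataclass
from enum import Enum


class Treatment(Enum):
    """Treatment strategies named in the risk management framework above."""
    MITIGATE = "mitigate"
    TRANSFER = "transfer"
    AVOID = "avoid"
    ACCEPT = "accept"


@dataclass
class Risk:
    """Hypothetical entry in a project-level AI risk register."""
    description: str
    dimension: str        # "ethics", "integrity", or "governance"
    likelihood: int       # assumed scale: 1 (rare) to 5 (almost certain)
    impact: int           # assumed scale: 1 (negligible) to 5 (severe)
    treatment: Treatment = Treatment.MITIGATE

    @property
    def score(self) -> int:
        # Simple likelihood x impact score; real frameworks may weight these differently.
        return self.likelihood * self.impact


# Example: an integrity-related risk from undisclosed AI-generated text.
risk = Risk(
    description="AI-generated passages included without disclosure",
    dimension="integrity",
    likelihood=3,
    impact=4,
    treatment=Treatment.MITIGATE,
)
print(risk.score)  # 12 -> prioritise this risk for treatment
```

Keeping such entries in a shared register makes the evaluation step explicit and gives the group a record of which treatment was chosen and why.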
This leads to a key practical message: research groups and research projects need their own AI policies and checklists. Institutional rules are necessary but insufficient, as they cannot fully address domain-specific and project-specific contexts. Group-level AI policies function as living agreements that clarify responsibilities, prevent undocumented “shadow AI”, support transparency and accountability, and protect researchers. To support this, we provide checklists for people, technology, and processes.
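As a purely illustrative sketch, assuming a group wants to keep its policy checklist in a machine-readable form alongside its project materials, the items below are hypothetical examples of how entries along the people, technology, and processes dimensions might be organised; they do not reproduce the workbook's own checklists.

```python
# Hypothetical group-level AI policy checklist, organised along the
# people / technology / processes dimensions discussed above.
checklist = {
    "people": [
        "Group members have completed basic AI safety and security training",
        "Responsibility for approving new AI tools is assigned",
    ],
    "technology": [
        "Approved AI systems and their permitted data categories are listed",
        "Access to AI tools handling sensitive data is restricted and logged",
    ],
    "processes": [
        "AI use is documented and disclosed in project outputs",
        "The policy is reviewed at agreed project milestones",
    ],
}

# Print the checklist with unticked boxes for a group discussion.
for dimension, items in checklist.items():
    print(dimension.upper())
    for item in items:
        print(f"  [ ] {item}")
```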
Safe, secure, and responsible use of AI in research is a continuous process. Start discussing these topics within your research group and with project partners. Raise awareness, identify and mitigate risks, and revisit decisions as projects evolve. Responsible AI use is a team effort. Research ethics, research integrity, and research governance are not box-ticking exercises. They are quality instruments that enable responsible and excellent research in the age of AI.