Why research groups and projects need their own AI policies and checklists¶
Institutional AI policies and checklists are necessary but not sufficient. Research groups and projects may need context-specific rules that reflect their data, methods, risks, and responsibilities.
Reasons:
AI risks are discipline-specific and use-case-specific
In international research projects, even the regulations for research ethics, research integrity, and research governance may differ across partner countries
Purpose of AI Policies & Checklists¶
AI policies and checklists help research groups and research projects:
prevent ethical, integrity, and governance breaches
clarify responsibilities
ensure transparency and reproducibility
support safe and secure AI use
What is an AI policy for a research group or project?¶
An AI policy is:
A living document
A risk management tool and risk register
A shared agreement within the group/project
A bridge between ethics, integrity, and governance
Scope, responsibilities, and minimal compliance¶
Which AI systems are allowed?
For which tasks?
Who is responsible for:
AI selection
risk management
documentation
incident reporting
Minimal compliance: GDPR, copyright law, the EU AI Act, export control, cybersecurity, etc.
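The scope and responsibility questions above can also be kept machine-readable, so the policy can be checked programmatically instead of only read. A minimal Python sketch, assuming entirely hypothetical tool names, task names, and role assignments (nothing here is a standard schema):

```python
from dataclasses import dataclass, field

# Hypothetical, minimal machine-readable scope record for a group AI policy.
# Field names and example values are illustrative only.
@dataclass
class AIPolicyScope:
    allowed_systems: list = field(default_factory=list)   # approved AI tools
    allowed_tasks: list = field(default_factory=list)     # permitted uses
    responsibilities: dict = field(default_factory=dict)  # duty -> role/person

scope = AIPolicyScope(
    allowed_systems=["local-llm", "institution-cloud-llm"],
    allowed_tasks=["literature search", "code assistance"],
    responsibilities={
        "AI selection": "PI",
        "risk management": "data steward",
        "documentation": "each researcher",
        "incident reporting": "project manager",
    },
)

def is_in_scope(system: str, task: str, scope: AIPolicyScope) -> bool:
    """Return True only if both the system and the task are covered by the policy."""
    return system in scope.allowed_systems and task in scope.allowed_tasks
```

A structure like this makes it easy to answer "which AI systems, for which tasks, and who is responsible?" in one place, and to update the answers as the policy evolves.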
People-Checklist¶
Researchers & Team Members
Basic AI literacy for all team members
Awareness of ethical risks (harms to participants, society, and the environment)
Understanding of research integrity rules for AI use
Clear rules for disclosure of AI use
No “shadow AI” or undocumented tool usage
Collaboration & Culture
AI use discussed openly in the team
Agreement on what counts as acceptable AI assistance
Clear expectations for students, student assistants (HiWis), and PhD candidates
Special care for interdisciplinary & international projects
Technology-Checklist¶
AI Systems & Tools
Approved AI tools list (local, cloud, or hybrid)
Trust portals & compliance documentation checked (cloud and hybrid)
Data residency & logging behavior understood
No personal or sensitive data in public AI systems
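The "approved tools" and "no personal data in public systems" items can be partially automated with a pre-use check. A rough Python sketch, assuming a hypothetical approved-tools list and two crude pattern heuristics; this is not a real PII detector, and production use would need proper data classification and institutional tooling:

```python
import re

# Hypothetical approved-tools registry (tool name -> deployment type).
APPROVED_TOOLS = {"local-llm": "local", "institution-cloud-llm": "cloud"}

# Very rough heuristics for obviously personal data; illustrative only.
PII_PATTERNS = [
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),   # email address
    re.compile(r"\+?\d[\d\s/-]{7,}\d"),       # phone-like number
]

def check_before_use(tool: str, text: str) -> list:
    """Return a list of policy problems; an empty list means no issue found."""
    problems = []
    if tool not in APPROVED_TOOLS:
        problems.append(f"tool '{tool}' is not on the approved list")
    if any(p.search(text) for p in PII_PATTERNS):
        problems.append("input appears to contain personal data")
    return problems
```

Even a check this simple makes the policy actionable at the moment of use, rather than only in hindsight.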
Security Controls
Least-privilege access to data, code, networks, and credentials
No unrestricted agentic AI access to local files
Versioning of models, prompts, and outputs
Monitoring for data leakage and misuse
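The versioning item above can be sketched in a few lines: record each AI interaction with the exact model version, content hashes, and a timestamp, so outputs can later be traced and checked for integrity. A minimal Python sketch, assuming a hypothetical append-only JSONL log file:

```python
import datetime
import hashlib
import json

def log_ai_interaction(model: str, prompt: str, output: str,
                       path: str = "ai_log.jsonl") -> dict:
    """Append one AI interaction record to a JSONL log and return the record."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model": model,  # include the exact version or tag, not just the family name
        "prompt_sha256": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode("utf-8")).hexdigest(),
        "prompt": prompt,
        "output": output,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

The content hashes allow later integrity checks (did this output really come from that prompt?) without having to trust the free-text fields alone.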
Open-Weight Models
Risk assessment before downloading model weights
No public release without governance review
Vulnerability reporting plan in place
Processes-Checklist¶
Ethics (Is it acceptable? Is it responsible towards participants, society, and the environment?)
Could AI use distort interpretation or fairness?
Could vulnerable groups be affected indirectly?
Are societal or environmental impacts considered?
Integrity (Is it good science? Is it responsible towards the truth?)
AI use documented in methods sections
Original sources preserved and cited
Human judgment remains central
Reproducibility ensured despite AI variability
Governance (Is it compliant?)
DPIA completed if required
Licenses and terms respected
Ethics committee informed if risk profile changes
Archiving and sharing rules defined
Last but not least¶
Rewrite your group's or project's AI policy as "10 simple rules for using AI" for your field.