
Research integrity: Who defines that?

Main research integrity documents for using AI in research

General documents that do not specifically mention "AI":

Research misconduct in the European Code of Conduct

According to ALLEA (2023):

Research misconduct is traditionally defined as fabrication, falsification, or plagiarism in proposing, performing, or reviewing research, or in reporting research results.

DFG and research misconduct: Scope and Definitions

The following scope and definitions can be found in DFG, “Rules of Procedure for Dealing with Scientific Misconduct”, https://www.dfg.de/resource/blob/339200/dfg-80-01-v0524-en.pdf

"§1 (2): These Rules of Procedure apply if the respondent is one of the following with regard to the allegation:

  1. A grant applicant to the DFG

  2. A grant recipient funded by the DFG,

  3. Individuals with a high level of scientific responsibility in connection with funding proposals submitted by higher education institutions or non-university research institutions,

  4. individuals reviewing a proposal for the DFG or

  5. A member of a DFG committee or a committee supported by the DFG in administering funding instruments who participates in advisory, review, evaluation or decision-making procedures."

"§2: Scientific Misconduct (1) ¹An individual pursuant to § 1 (2) nos. 1-3 commits scientific misconduct if they do any of the following in particular, either intentionally or with gross negligence:

  1. make misrepresentations (§ 3),

  2. appropriate others’ research achievements without justification (§ 4),

  3. interfere with others’ research (§ 5),

  4. participate in the scientific misconduct of others by way of co-authorship (§ 6) or

  5. neglect their supervisory duties (§ 7). ²Anyone who intentionally participates in the misconduct of others is also guilty of scientific misconduct (§ 8).

(2) A person pursuant to § 1 (2) nos. 4 and 5 commits scientific misconduct if they do any of the following, either intentionally or with gross negligence:

  1. breach confidentiality (§ 9),

  2. fail to disclose circumstances that give rise to the appearance of conflict of interest (§ 10) or

  3. inadmissibly give unfair preferential treatment to others (§ 11)."

Spectrum of questionable research practices

Although questionable research practices are usually defined as less serious than scientific misconduct, Kolstoe (2023) proposed defining a spectrum of questionable practices:

“QRPs with AI don’t just repeat traditional integrity risks — they amplify them” (Shigapov, 2025).

A lack of AI literacy may unintentionally lead to the whole spectrum of questionable research practices. This need not be your own AI literacy; it may be that of a co-author, project partner, or student assistant.

Tool: Retraction Watch Database, Blog, etc.

Tool: Academ-AI

Tool: Research Integrity Risk Index

Living guidelines on the responsible use of generative AI in research for researchers

  1. Remain ultimately responsible for scientific output.

  2. Use generative AI transparently.

  3. Pay particular attention to issues related to privacy, confidentiality and intellectual property rights when sharing sensitive or protected information with AI tools.

  4. Respect applicable national, EU and international legislation.

  5. Continuously learn how to use generative AI tools properly to maximise their benefits, including by undertaking training.

  6. Refrain from using generative AI tools substantially in sensitive activities that could impact other researchers or organisations (for example, peer review, evaluation of research proposals, etc.).

Source: https://research-and-innovation.ec.europa.eu/document/download/2b6cf7e5-36ac-41cb-aab5-0d32050143dc_en

Living guidelines on the responsible use of generative AI in research for research organisations

The guidelines for research organisations include: “3. Reference or integrate these generative AI guidelines into their general research guidelines for good research practices and ethics. Using these guidelines as a basis for discussion, research organisations openly consult their research staff and stakeholders on the use of generative AI and related policies. Research organisations apply these guidelines whenever possible. If needed, they could be complemented with specific additional recommendations and/or exceptions that should be published for transparency.”

German FAQ on AI and research integrity

Tool: Artificial Intelligence Disclosure (AID)

Tool: AI Attribution Toolkit

Tool: GAIDeT Declaration Generator

Tool: Decision tree for responsible application of AI

References
  1. Deutsche Forschungsgemeinschaft. (2025). Guidelines for Safeguarding Good Research Practice. Code of Conduct. 10.5281/ZENODO.14281892
  2. ALLEA. (2023). The European Code of Conduct for Research Integrity – Revised Edition 2023. ALLEA. 10.26356/ECOC
  3. Kolstoe, S. (2023). Defining the Spectrum of Questionable Research Practices (QRPs). UK Research Integrity Office. 10.37672/ukrio.2023.02.qrps
  4. Shigapov, R. (2025). Questionable Practices in the use of AI. 10.5281/ZENODO.17349510
  5. Glynn, A. (2024). Academ-AI: documenting the undisclosed use of generative artificial intelligence in academic publishing. arXiv. 10.48550/ARXIV.2411.15218
  6. Meho, L. I. (2025). Gaming the metrics: bibliometric anomalies in global university rankings and the research integrity risk index (RI2). Scientometrics, 130(11), 6683–6726. 10.1007/s11192-025-05480-2
  7. Frisch, K. (2025). FAQ Künstliche Intelligenz und gute wissenschaftliche Praxis - Version 2. 10.5281/ZENODO.17349995
  8. Suchikova, Y., Tsybuliak, N., Teixeira da Silva, J. A., & Nazarovets, S. (2025). GAIDeT (Generative AI Delegation Taxonomy): A taxonomy for humans to delegate tasks to generative artificial intelligence in scientific research and publishing. Accountability in Research, 1–27. 10.1080/08989621.2025.2544331