Generative AI Policy
The Journal of Practice Theory recognises the growing use of generative artificial intelligence (“AI”) tools in academic research and writing. The journal supports the thoughtful and responsible use of such technologies, provided that their use is transparent, ethically sound, and consistent with the standards of scholarly practice.
Authors, reviewers, and editors remain fully responsible for the content they produce, regardless of any AI tools used in its preparation.
Principles of Use
AI tools may be used to support the research and writing process. However, contributions submitted to the journal must reflect the author’s own scholarly judgement, interpretation, and argument.
Authors are responsible for reviewing and validating any AI-assisted material prior to submission, and for ensuring that their contributions are:
- accurate and verifiable,
- appropriately referenced,
- and internally consistent.
Transparency and Disclosure
The use of AI tools must be clearly disclosed in the manuscript.
Disclosure should be made in the following locations, as appropriate:
- Acknowledgements: where AI has assisted with writing, editing, translation, or formatting;
- Methods: where AI has contributed to research design, data collection, analysis, or literature review;
- Figure captions: where AI has been used to generate or modify visual material.
Authors should:
- name the AI tool used,
- describe how and to what extent it was used.
Transparency is essential to maintaining trust in the research and publication process.
Human Responsibility
Authors retain full responsibility for their submissions. This includes:
- verifying the accuracy of all claims, citations, and references;
- ensuring that AI-generated content does not introduce errors, biases, or misrepresentations;
- ensuring that the final manuscript reflects their own voice, argument, and scholarly contribution.
Ethical and Responsible Use
AI must be used in a manner consistent with data protection, confidentiality, and relevant legal and ethical standards.
Authors should be aware of the risks of inputting material into AI systems, particularly in relation to privacy, data protection, and the handling of unpublished or sensitive content, and should exercise appropriate judgement in how such tools are used.
Authors should also:
- critically assess AI-generated outputs for bias, stereotyping, or misinformation;
- avoid using AI to imitate the distinctive voice or style of identifiable scholars.
Rights and Intellectual Property
Authors must ensure that any AI tools used do not compromise:
- their own rights to their work,
- the rights of the journal to publish the material,
- or the rights of any third parties.
Authors should review the terms and conditions of AI tools to ensure that content is not subject to unauthorised reuse, ownership claims, or use for training by the provider beyond what is necessary to deliver the service.
Peer Review and Confidentiality
The confidentiality of the peer review process must be strictly maintained.
Reviewers and editors must not upload manuscripts under review (in whole or in part) to AI tools.
AI may be used in limited ways that do not involve sharing or disclosing the content of the manuscript under review. For example, reviewers may use AI tools to support the organisation, clarity, or presentation of their feedback, provided that the intellectual content of the review remains their own and that no confidential material is disclosed.
Where AI is used in drafting a review, this should be disclosed to the editor, including:
- the tool used,
- its role in the review process,
- and the reviewer’s role in directing and verifying the content.
Evolving Practice
This policy reflects current best practice (March 2026) and will evolve as AI technologies and scholarly norms develop. The journal remains committed to supporting innovation in research and publishing while maintaining high standards of integrity, transparency, and intellectual responsibility.