Policy on the Use of Artificial Intelligence
Scope of Application
This policy regulates the use of artificial intelligence (AI) technologies, especially generative AI, at all stages of the editorial process: writing, peer review, and editing. Its purpose is to ensure transparency, integrity, accountability, and the publication of reliable and ethical scientific content.
For Authors
Disclosure of AI Use
- If generative AI tools (e.g., ChatGPT, Copilot, DALL·E) are employed for writing, translation, editing, data analysis, or figure generation, their use must be explicitly declared in the manuscript, either in the Methods section or Acknowledgments.
- The disclosure must include the name of the tool, version, purpose, and level of intervention. Recommended wording (Elsevier format):
“During the preparation of this article, the author used [TOOL NAME] for [PURPOSE]. After using this tool, the author reviewed and edited the content and takes full responsibility for it.”
AI Cannot Be Listed as an Author
- AI tools cannot be credited as authors or co-authors, since they do not meet the requirements of responsibility, critical review, and final approval established for human authorship by COPE.
Full Human Responsibility
- Authors remain fully responsible for the accuracy, originality, and ethical soundness of the content, including sections generated or modified by AI.
Use of AI in Images and Figures
- The use of AI tools to create, alter, or manipulate images (e.g., removing, moving, or inserting elements) is not permitted. Only basic adjustments such as brightness or contrast are allowed, provided they do not alter the information conveyed.
- The only exception is when AI is explicitly part of the research design or methodology. In such cases, it must be described in detail (tool, model, version, provider), and, if requested, the original images prior to AI intervention must be provided.
Avoiding Fabricated or Invented Data
- AI must not be used to fabricate data, results, references, or conclusions. AI-generated content must be carefully reviewed to prevent errors, biases, or falsifications.
For Reviewers
Confidentiality First
- Manuscripts under review are confidential documents. They may not be uploaded to public AI tools, as doing so would violate confidentiality and copyright.
AI Only as a Stylistic Aid, Not for Evaluation
- While AI may be used to improve writing style or clarity, reviewers must apply human critical judgment to evaluate the scientific quality of the manuscript.
Declaration of AI Use
- If AI tools are employed (e.g., to improve the style of the review report), the reviewer must inform the editor and clearly indicate which tool was used and for what purpose.
For Editors and Editorial Staff
Confidentiality and Prohibition of Public AI Use
- Manuscripts and editorial communications must not be uploaded to public AI tools due to confidentiality concerns.
AI as Technical Support Under Human Supervision
- AI tools may be used for administrative tasks (formatting, plagiarism detection, style checking), but always under strict human supervision.
Editorial Decisions Without AI
- Content evaluation, acceptance/revision decisions, and communications with authors must be based on human critical analysis. AI must not influence substantive editorial decisions.
Monitoring and Sanctions
- Non-compliance with this policy (e.g., inappropriate AI use, lack of disclosure) may result in manuscript rejection, retraction, or reporting in accordance with the publisher’s ethical policies.
Continuous Updates
This policy will be reviewed periodically to adapt to technological advances and best practices in editorial ethics.