Editorial Policy on the Use of Artificial Intelligence
The scientific journal Peruvian Journal of Management (PJM), committed to the highest standards of academic integrity, editorial ethics, and scientific transparency, formally adopts international guidelines on the responsible use of Artificial Intelligence (AI) in the scientific publishing process.
In this framework, PJM:
- Acknowledges and applies the reference editorial policies established by Elsevier, Springer Nature, and Wiley.
- Adopts the recommendations of the Committee on Publication Ethics (COPE) as an ethical framework for addressing the use and potential misuse of AI in scientific manuscripts.
- Establishes a clear, structured, and updatable policy to guide authors, reviewers, and editors in the transparent, ethical, and legal use of AI-based technologies.
PJM is committed to:
- Updating its policies as international regulations on AI evolve,
- Educating its editorial team and reviewers on the ethical use of these technologies,
- Ensuring that any technological implementation enhances quality, transparency, and trust in the editorial process.
The advancement and availability of AI tools, particularly those based on generative language models, have transformed various aspects of academic work. In response, Peruvian Journal of Management outlines the following editorial guidelines based on international publishing policies and COPE’s recommendations.
These guidelines aim to:
- Ensure scientific integrity and editorial transparency in the use of AI throughout the publication cycle,
- Promote ethical, responsible, and supervised use of AI tools by authors, reviewers, and editors,
- Prevent risks related to undeclared content generation, breaches of confidentiality, and the misuse of technologies that carry known technical, legal, and ethical limitations.
The following sections explain how AI should be managed in key areas of the editorial process: manuscript writing, image handling, peer review, editorial evaluation, internal tools, and ethical management of suspicious cases.
1. Use of AI by Authors
Permitted use (under human supervision):
- Elsevier allows the use of generative AI tools solely for improving the language and readability of the manuscript, always under human review.
- Springer Nature permits AI use for style, grammar, or tone correction without requiring a declaration. However, if AI is used to generate content, it must be disclosed in the Methods section.
- Wiley supports the use of AI as a support tool to generate ideas, synthesize information, or refine content, under the author’s direct responsibility.
Author obligations:
- Transparently declare the use of AI, when applicable, in the manuscript (especially if new content was generated).
- Supervise and critically review AI-generated content to avoid errors, omissions, or bias.
- Take full responsibility for the submitted content, regardless of the technological tools used.
Restrictions:
- No AI tool may be listed as an author or co-author, as it cannot assume ethical, legal, or scientific responsibility.
- Generative AI may not be used to create or alter images, except in clearly justified and declared methodological exceptions.
- Failing to declare significant AI use in content generation constitutes a lack of transparency.
2. Use of AI by Reviewers
Restrictions:
- Elsevier, Springer Nature, and COPE agree that reviewers must not use generative AI tools to analyze manuscripts, draft review reports, or edit evaluation texts.
- Uploading a manuscript, partially or entirely, to an AI platform constitutes a breach of the confidentiality of the peer review process and may infringe copyright or data protection laws.
Reviewer responsibilities:
- Prepare the review report independently and fully, exercising critical judgment and assuming complete responsibility for its content.
- Assess the declared use of AI by the author as part of the evaluation of quality, originality, and transparency.
- Refrain from making assumptions or engaging in discriminatory behavior based on the author's country, discipline, or background.
In case of doubt:
- If there is reasonable suspicion of inappropriate AI use by the author, the concern should be communicated to the editor respectfully and without conclusive judgment in the absence of evidence.
3. Use of AI by Editors
Restrictions:
- Generative AI must not be used to analyze manuscripts, draft decision letters, issue editorial rulings, or process editorial communications.
- All editorial content and correspondence related to manuscripts are confidential and must not be entered into generative AI platforms.
Permitted technical use:
- Non-generative AI tools may be used exclusively for technical and automated functions, such as:
  - Plagiarism detection
  - Reviewer suggestion
  - Identification of structural deficiencies or formal gaps in manuscripts
Conditions for implementation:
- The tools used must be ethically vetted, ensure the protection of participants' identities, and include mechanisms to prevent bias, aligned with Responsible AI principles (such as those established by RELX).
- Editorial decision-making must be conducted exclusively by humans.
Editor responsibilities:
- Oversee compliance with AI use policies by authors and reviewers.
- Contact authors directly in case of reasonable concerns regarding AI use and document the communication transparently and respectfully.
- Act decisively when there is clear evidence of misuse, making editorial decisions aligned with COPE’s ethical principles.
General Considerations
- Editorial decisions must be based on objective evidence, avoiding unfounded assumptions and any form of discrimination based on geography, discipline, or experience.
- AI use in the editorial process must align with principles of scientific integrity, confidentiality, transparency, human oversight, and accountability.
- COPE recommends that in cases of reasonable suspicion, direct dialogue with authors should be prioritized over immediate punitive actions, and that rejections should not be based solely on unverified suspicions.