Every technology that changes the rules tends to follow the same arc: suspicion, rejection, tolerance, and eventual integration. Academia, rightly cautious about embracing new technology too quickly, repeats this cycle with each new wave. In the 1970s and 1980s, the debate was over banning calculators from the classroom, for fear that students would lose the ability to do mental arithmetic. Today, calculators are allowed even in the examination room, because they enable the teaching of mathematical thinking and problem-solving. We traded some mental arithmetic for abstraction, modelling, and analysis.1
Artificial intelligence (AI) is improving rapidly and becoming ubiquitous. Journal editors warn against its abuse; universities issue sensible guidelines; AI detectors sprout up, as do “humanisers” designed to slip under the detectors’ radar. We will soon reach the point where it is impractical to prove whether AI supported a manuscript. Is this inherently detrimental to the quality of publications? The answer depends on how and why we use AI.
A recent experience of mine illustrates the point. Within the Spanish Fertility Society Benign Pathology Special Interest Group (SIG), I was involved in a meta-analysis investigating the association between chronic endometritis and endometriosis. The methodology is exacting. On this occasion, I directed the workflow with AI assistance, retaining my role in design, critical oversight, and verification. A task that ordinarily consumes days of our most precious resource, time, was completed in four hours without sacrificing rigour. AI did not “invent” the question, replace clinical judgment, or make methodological decisions for me; rather, it sped up time-consuming tasks, helped me synthesise disparate data, and produced stronger drafts for critique, which aligns with evidence that AI can accelerate aspects of systematic review.2 What is wrong with that? Nothing, if the outcome is clearer, more reproducible, and more useful for patients.
Independent, “clinically meaningful” research questions are not generated by AI, at least for now. Conceptual originality, ethical design, and accountability remain human. If and when Artificial General Intelligence, a general-purpose system with human-level or greater competence across domains, able to learn, reason, plan, transfer knowledge across tasks, and act autonomously, arrives, we will need to revisit these boundaries. That future debate should not paralyse today’s progress.
In the meantime, academia’s stance should shift from “ban or detect” to an affirmative “govern and leverage with safeguards”. We can be guided by a set of basic principles for responsible adoption:3
• Transparency: Disclose AI use (which tools, when, and with what controls).
• Authorship and Responsibility: Humans are solely responsible; AI is not a co-author.
• Data Integrity: No synthetic data unless declared as such; no fabrication or alteration of data within images or figures; maintain control over images and figures.
• Traceability: Record document versions, prompts, methodological choices, and substantial changes, allowing for reproducibility.
• Privacy and Security: Protect sensitive information; maintain strong de-identification.
• Training: Teach authors, reviewers, and editors what they can and cannot do with AI.
• Critical Assessment: All AI outputs should be tested against methodological and clinical benchmarks; AI is a helper, not a judge.
• Red Lines: Plagiarism, fabricated references, and unverifiable hallucinations are prohibited; apply appropriate sanctions.
Our goal as surgeons and medical scientists is to promote quality care and improve patient outcomes based on the best available evidence. If these principles of transparency, traceability, data integrity, verification, and privacy are respected, then the primary question is not whether AI “participated” but whether the resulting knowledge is valid, useful, and applicable to improving practice. The authors retain intellectual authorship and clinical judgment; AI is the instrument we use to refine and fine-tune. Priorities should centre on aligning decisions with high-quality evidence, with critical appraisal of bias and benefit–harm, rather than on ritual scrutiny of the tool used to reach the result.
Some academic societies are already making progress in this direction. The European Society for Gynaecological Endoscopy, one of the surgical societies at the forefront of minimally invasive gynaecologic surgery, has created a SIG on AI. The American Association of Gynecologic Laparoscopists has formed an AI Task Force. The goals of these societies include education, project development, and ethical and medico-legal discussion of institutional and professional use. This, I think, is the right route: not rejection, but acceptance with discernment, adjustment, and improvement.
What about the near future? Early prototypes of more autonomous surgical robots are emerging.4 They remain imperfect and must still operate under strict human supervision, but they exist. At first, most patients will likely trust and prefer their surgeon, but subsequent generations, having grown up with this technology, will see nothing unusual in it. Adoption is inevitable, and our responsibility is to arrive prepared, with standards, audits, and a culture of safety.
AI is not a shortcut to thinking less, just as calculators were not a shortcut to understanding less mathematics. It is a tool that allows us to devote more human intellect to what matters, such as spending more time with our patients or improving our surgical skills. If our shared goal is to improve practice and deliver the best evidence-based care, the question is not whether we allow AI but how we incorporate it so that it raises quality, saves time, and expands equity, whilst conceding nothing on ethics, rigour, and accountability. Let’s adapt before we fall behind.


