Conference participants on their way to the next session (© Photothek)
What does it take to uphold research excellence? What do librarians think about AI in the scholarly environment? How can we combat inequalities in academic publishing? This year’s APE conference tackled these questions and more.
Every year, the APE Conference (Academic Publishing in Europe) brings together a wide range of stakeholders in scholarly communications and scientific publishing. Held, as usual, at the historic ESMT in Berlin, Germany’s leading business school, this year’s APE was all about academic integrity and the maintenance of impeccable standards in scholarly publishing as these come under threat from a new era of paper mills, fake ‘research’ and the consequent erosion of public trust.
It was a conference of eloquently expressed ideas, presented by stakeholders across the industry who showed passionate commitment to upholding the legitimate pursuit and dissemination of knowledge.

Keynote: An Independent Publishing Industry
The keynote speaker was Caroline Sutton, the CEO of the International Association of Scientific, Technical and Medical Publishers (STM). Immediately picking up on the main theme of the conference, she said that trust in scientists is currently high, but that misinformation was nevertheless identified as the second-highest global risk for 2025. “We now live in a world where people increasingly don’t know whom to trust. It is for the publishing community to come together and save the library in the face of those trying to burn it down” – i.e., the propagators of fake news and fake research. Caroline said that continuing checks and balances are essential.

After the keynote, the panel session, sometimes accompanied by formal presentations, sometimes not, was the main vehicle used for discussion at the conference. This proved extremely effective at an event devoted not only to formulating ideas but also to capturing them within a multi-stakeholder context.
Shaping the Future of Science: A Multi-Stakeholder Dialogue
“Shaping the Future of Science” was a thought-provoking session in which different stakeholders in the industry discussed whether the concept of academic excellence endures, which kinds of reward system align with integrity, and how peer review can be made to work in a world where science is expected to combine rigour with excellence.
Christopher Smith, of AHRC / UKRI and Science Europe, said that a major recent change was that there is much greater awareness of how science is embedded in the economy. Consequently, the idea takes precedence over the individual, achievement over process. This represents an ideological shift that separates the researcher from the research.
Lorela Mehmeti, an early career researcher at the University of Bologna, said that young researchers perceive excellence to mean shifting to more transparency – i.e., changing the conditions within which research is carried out. Research may lose its power if the influence of individual cultures is ignored.

Pawel Rowinski, of ALLEA, said that the concept of research excellence must embrace academic freedom, especially in politically sensitive contexts. Research must be conducted within a living framework that connects quality with values.
Max Voegler, of Elsevier, who moderated the session, agreed. He said that there was now a need to think about how different scholarly communities could intentionally come together. There followed an intricate discussion on the future of peer review. One conclusion the panel reached was that increased interdisciplinarity makes it necessary for reviews to be addressed to different groups without interfering with excellence. There is an important role for learned societies here – they could start updating the processes of peer review at a national level in their own countries.
Safeguarding Research: The Role of Academic Publishers in an Era of Political and Public Pressure
The panel was moderated by Lou Peck, The International Bunch. The main speaker was Ilyas Saliba, Research Fellow at the University of Erfurt, Chair of International Relations at the Faculty of Economics, Law and Social Sciences, and the panellists were Sarah Phibbs, Director of Equity & Inclusion, STM and Louise Schouten, VP of Product Development at De Gruyter Brill.

Ilyas said it was essential to measure academic freedom because it is the central pillar of an open and democratic society. Louise said that a publisher can only do so much to safeguard the integrity of research – including making sure that editorial boards are aware of the pitfalls of fake work and asking editors and authors to comply with COPE guidelines. Sarah said that the publishing industry is making progress in these vital areas but that the problem continues to grow and “we don’t know what we don’t know.”
Changing Library Priorities in an AI World
The session “Changing Library Priorities in an AI World” was moderated by Dominique de Roo, Chief Strategy Officer at De Gruyter Brill. The panellists were University Librarians from three different continents: Beau Case, Inaugural Dean of Libraries at the University of Central Florida; Iman Magdy Khamis, Director of Northwestern University Library in Qatar; and Professor Jianbin Jin, Director of Tsinghua University Library, China, and Professor at Tsinghua University’s School of Journalism and Communication.

Dominique presented some findings from a global study recently commissioned by De Gruyter Brill from Gold Leaf to discover how AI is being viewed and used at academic libraries and institutions across the world. She said that the session would be about the contrast between publisher enthusiasm for innovation versus librarian cautiousness: how can innovation be driven while safeguarding trust?
She pointed out that although 90% of the librarians who responded to the study said that use of AI was permitted at their institutions, legitimacy is not the same as trust. The study found that significant numbers of librarians are either resistant or actively hostile to AI. She went on to name the most-used AI applications globally – the top three are Copilot, ChatGPT and Turnitin – and to illustrate the gap between use of the free and paid versions. She concluded by saying that librarians are concerned about the effects of AI on data privacy and surveillance; academic integrity and cheating; environmental impact; vendor over-hype; and loss of human judgement. If AI is to be embraced within the scholarly environment, these concerns must be addressed.
Professor Jianbin Jin said that librarians have a responsibility to introduce new applications to scholars and students and to integrate multiple approaches to research. AI can be used to search the knowledge base for hard-to-process topics. It can also make a huge difference to labour-intensive activities such as cataloguing. AI enables more efficient use of the library, can suggest change programmes to the academic community, and can be used to improve or enhance digital literacy. China, as a country, is embracing AI in daily activities; some of the AI tools being produced by publishers can be part of this process.
Iman Magdy Khamis said that Dominique’s presentation showed how confused librarians are about AI – “we feel we have to do something, but how can you trust something or talk of its ethical use when you don’t know what’s happening inside it?” Copilot does make librarians feel a bit safer, because its agreement stipulates that individual libraries’ data will not be used to ‘train’ its LLM tool. Iman also observed that ‘privacy’ holds different meanings for communities in the North and the global South. Nevertheless, a myriad of ‘good’ uses has been found for AI by librarians across the world – for example, creating subject headings, systematic indexing, cataloguing and document processing, and using heat maps to show where the majority of students are studying. Robotics can also be used to help autistic students and for feature recognition – AI is a bigger thing than just an application.
Beau Case declared at the outset that he is a “fan of AI”. He said that he was not content with his library being or being perceived as a study hall. “If we are not moving the needle of research and teaching, then we are failing.” Last summer his university was entrusted with the development of extra-curricular AI activities, and it was a game-changer. The day-long event focused on AI in the workplace and asked the question, what happens to students when they leave the university? The event showcased AI technology and held sessions on ethics. Beau would like to ask the global community, “What are you developing in the classroom?” He added that all libraries have collections of distinction. UCF specialises in aerospace and hospitality tourism. He now wants to build his own AI application to enable libraries to publish and specialise in highly technical areas. He invited any librarians who were interested to contact him.

The panel went on to discuss how the librarian’s role is changed by AI, particularly in the context of information science and its specialisms and policies and the maintenance of research integrity. The panellists felt that although there was a lot of talk about ethics and AI, little international initiative has been taken in this area.
There was a suggestion from the audience that libraries could collectively take ownership of and develop scholarly ownership of AI – otherwise they would “simply be ceding the ground to” the major AI players. Beau Case agreed with this proposition but said that it would be extremely difficult to gain consensus.
Professor Jin, picking up on Dominique’s opening comment about the tensions AI forges between publishers and librarians, concluded the debate by saying that the relationship is changing and needs to be modified – and that AI is a catalyst as well as a disruptor; but the modifications can be incremental, not radical. Harmonies can be achieved.
If you are interested in learning more about the study commissioned by De Gruyter Brill, you can watch a recording of this webinar, which took place on 29th January 2026.
From Access to Inclusion: Tackling Systemic Barriers in Global Scholarly Publishing
This panel was moderated by Colleen Campbell of the Max Planck Digital Library. Panellists were Ncoza Dlova, Dean, School of Clinical Medicine, University of KwaZulu-Natal; Matthew Giampoala, Vice President Publications at the American Geophysical Union; and Glenn Truran, Executive Director, SANLiC.

Although the debate was based on the premise that OA is not a business model but a prerequisite of equitable publishing, it was immediately acknowledged that one of the most severe ‘unintended consequences’ of OA is to make publishing unaffordable for scholars working in certain communities, institutions and disciplines. Extensive illustrations of this unjust inequality were put forward. The conclusions reached were that greater transparency about how APC prices are arrived at is needed and, above all, that a central international fund should be created for those who otherwise have no means of paying.
Research Integrity in a Post-Truth Era – New Challenges, New Tools
Day 2 of the APE Conference began with “Research Integrity in a Post-Truth Era – New Challenges, New Tools”. It was moderated by Sami Benchekroun of Morressier / Molecular Connections Group. The panellists were Anna Abalkina, Freie Universität Berlin; Miriam Maus, IOP Publishing and Sonja Ochsenfeld-Repp, Deutsche Forschungsgemeinschaft (DFG). The session immediately picked up on the previous day’s central theme of maintaining research integrity in a world threatened by “bad science” and fake research.
Anna Abalkina said that much of the progress made by the major publishers in 2025 in fact took the form of “massive retractions”. Publishers were gaining a greater understanding of what it means to publish “clean literature” and now know much more about the signs of fraud in specific disciplines – e.g., cancer research.
Sonja Ochsenfeld-Repp, a lawyer responsible for maintaining sustainability in research practices, welcomed the new dialogue between funders and publishers.
Miriam Maus said she was responsible for content at IOP and maintaining its integrity was her “number one” task. Paper mills are now becoming more sophisticated and citation manipulation is quite hard to catch. The challenge is to identify the problem early in the process. As Caroline Sutton said on the first day of the conference, trust in information is at an all-time low and it is the publisher’s responsibility to foster trust in science(s).

Sami asked whether, from a systems perspective, publishers were doing “more of the same”, or something different. Miriam said that IOP has changed how it deals with submissions. “There is much more realisation that trust in individuals is not a protection and we need to trust in processes instead.” However, she did not yet envisage radical change in the outputs of publishing: “Demand from the research community to publish traditional articles is still there.”
Sonja said that peer review is one of the main trust signs for funders. In 1998 a white paper was published on safeguarding good practice in publishing and its tenets are still relevant. Funders are trying to change the perception that quantity – a mass of publications – trumps the quality that can be achieved by publishing less, when the reverse is true.
Sami asked Anna who was the villain in all of this. Anna said it was her mission to disrupt the business model of the paper mills, but that so far this was not working. Only a tiny number of published articles are retracted, and then often only after a very long time. There are structural problems to address, particularly the conflict between profit and research integrity. “We are all the villains.”
The panel concluded that metrics need to be more sophisticated and reflective; that publishers should retract more articles; and that there is a significant role for institutions to play, particularly by engaging in raw data archiving and pre-submission screening.
Uncharted Intelligence: AI, Policy and the Global Reinvention of Scholarly Publishing
This session was moderated by Jude Perera, Associate Director, Communication & Engagement, at Wiley. The panellists were Richard Gallagher, President & Editor-in-Chief, Annual Reviews; Lukas Pollmann, Lead Customer Engineer for publishers at Google Cloud, Germany; and Sahar Vahdati, Professor of AI for Science, Leibniz University of Hannover.
Jude asked Richard how editors can tell when a journal contains AI-generated content. Richard said that a good article is not just a coherent organisation of facts. AI can scan huge bodies of literature in different styles, connect ideas across disciplines and make studies of massive, complex data sets, but it is not human, so its outputs do not capture the human dimension of research. It helps by leaving the researcher free to apply the element of human understanding. “Science is a very creative, very human process. We want to capture the incredible ability of AI to bring material together and add the human dimension.”

Sahar said that the AI models that we see at the moment can only perform about 3% of the synthesis and analysis that humans do – humans operate at 85% – 90% levels. “We need an ecosystem that is supported by AI, but so far these models have been created from what humans have created. In future we need to control by using AI not to make decisions, but to support decisions.”
Lukas said that AI can be a positive force for good – it can enable greater use of resources than would otherwise be possible and has the potential for uncovering nuggets that researchers otherwise may or may not stumble across. But it can be the beginning of a slippery slope: “What happens if AI actually takes over our thought processes?”
Sahar concluded the debate by saying that collectively we need to work on creating machine readable information that can be judged by humans – to think of a new model and come together in a new way.
Keeping the Humanities Visible in Scholarly Publishing
The session was moderated by Ove Kähler, CEO of Mohr Siebeck. The panellists were Barbara Budrich, Founder and MD, Verlag Barbara Budrich; Professor Dr Thed van Leeuwen, Professor of Monitoring Open Science Policies and Practices, Leiden University; and Christoph Markschies, President, Berlin-Brandenburg Academy of Sciences and Humanities.
Professor van Leeuwen began by saying that peer review is always the main driver for the assessment of research. Different disciplines demand different approaches to research: for historians, working in the field is not common; conversely, epidemiologists can only work in the field. STM knowledge reaches its peak citation rate three to four years after publication and then declines; research in AHSS, on the contrary, can suddenly achieve spikes in citation decades afterwards.

Christoph Markschies said that publishers “should not jump too early to assume that the TikTok generation are not interested in monographs”. For an author, the difference between the serious publisher and others derives from the personal context. “All my publishers are interested not just in the next book, but in the publication process that benefits the discipline – what we need is an Oxford handbook of handbooks.” In detecting AI-generated content, it is necessary to train not only technique but also the ability to judge – to be competent to ask, “is this a valuable manuscript with human-produced judgement?”
Barbara Budrich said that she did not recognise the distinction usually made between STEM and non-STEM subjects. All are sciences! One of the things that publishers bring to every discipline is good English – “English is our tool”. It is the publisher’s role to remove disciplinary jargon from publications. Publishers also try to influence critical thinking – and in publishing, trial without innovation won’t work. She also emphasised the professionalism that publishers bring to publishing, with the many years of experience they have behind them. Libraries wishing to turn publisher may think it looks easy, but “you don’t see what we do.” Publishers curate, they discuss, they know the disciplines in which they operate in a way that no scholar does – if librarians want to take this on, they can, but they must train for it.
Startups to watch – Innovators of tomorrow
The afternoon of the second day kicked off with a Dragons’ Den-style assessment of three new solutions being developed to assist scholarly publishers. The audience-elected winner was Pure.Science, a start-up application designed to transform scholarly workflows.

This was followed by a presentation of the findings of a pre-conference workshop that sought to identify the gaps between current and desired AI usage in publishing.
Science Publishing in Crisis: Challenges and Solutions
Ed Pentz, Executive Director of Crossref, moderated this session. Panellists were Steven Inchcoombe, President of Research, Springer Nature, UK; Dan Larhammar, Member & former President of the Royal Swedish Academy of Sciences; and Professor Bernhard Sabel of the University of Magdeburg & Sciii gGmbH, Berlin.

Dan Larhammar and Professor Sabel presented the Stockholm Declarations on the maintenance of integrity and quality control in the dissemination of scholarly research. They said that funders should take a more active role – “they have to realise that they are wasting money on fake science.”
Steven Inchcoombe responded from the publishers’ perspective, saying that publishers will “have to foot the bill” to make scholarly publishing more secure if they wish to retain the trust on which their organisations depend.
The conference concluded with heartfelt thanks from Ingo Rother, MD of the Berlin Institute for Scholarly Publishing. Ingo’s second APE Conference was well-received, combining informality and light-heartedness with seriousness of purpose and a choice of topics that appealed to a wide range of participants in the scholarly publishing community.
This blog post was written by Linda Bennett and first published on the De Gruyter Brill “Conversations” blog.
