The Conversation
Sorin M.S. Krammer, Professor of Strategy and International Business, University of Southampton

The AI scientist: now academic papers can be fully automated, what does this mean for the future of research?


Until recently, AI’s role in research felt like having a useful assistant. It could summarise a paper, clean up a dataset or draft an abstract. Researchers were still in charge of the thinking.

That changed in late 2025 when cutting-edge “frontier” AI models became capable of reasoning and planning reliably by themselves. A key feature of these models is “tool calling” – the ability to interact with external tools in order to act on the world, not just describe it.

This marks the rise of agentic AI: systems that do not just respond to instructions but can independently plan, execute and iterate. In science as in other fields, chatbots have become coworkers that can autonomously complete real work, end to end.

An example of this is Tokyo-based Sakana AI’s The AI Scientist. Unveiled in mid-2025 and now in its second iteration, the Japanese tech company bills this as “the first comprehensive system for fully automatic scientific discovery”.

The AI Scientist scans existing literature, generates hypotheses, writes and executes code, analyses results and produces a full research paper – largely without human involvement. It reasons, fails and revises, just as a junior scientist would.

The proof? An AI Scientist academic paper describing “a pipeline for automating the entire scientific process end to end” was accepted by the International Conference on Learning Representations and published in the scientific journal Nature in March 2026, following peer review.

This represents something genuinely new: an autonomous AI system passing a milder version of the Turing test by demonstrating scientific quality, if not (yet) machine intelligence.

The AI Scientist’s peer-reviewed paper explained. Video: Matthew Berman.

Other significant achievements include Singapore-based startup Analemma carrying out a live demonstration of its Fully Automated Research System (Fars) in February. It produced 166 complete machine-learning research papers in roughly 417 hours for around US$1,100 (£810). That’s one academic paper every 2.5 hours at a cost that would sustain a research assistant for a couple of weeks.

And Google Cloud AI Research recently unveiled PaperOrchestra, which takes a researcher’s raw experimental logs and rough notes and converts them into a submission-ready manuscript, with figures and verified citations. In blind evaluations by 11 AI researchers, it easily outperformed existing autonomous systems in this area.

Having spent two decades researching disruptive technological innovations, I believe a significant threshold has been crossed. While there is a way to go before AI systems match the very best human-produced work, the era of fully automated research has arrived.

Implications for academia

The arrival of autonomous research systems lands on an academic system under severe strain in many countries. Over the last decade, the number of papers submitted to academic journals has grown much faster than the pool of qualified peer reviewers, leading to suggestions that the science publication system is being “overwhelmed”.

If systems like Fars can produce thousands of papers per year, the publication infrastructure of science faces a volume it was never designed to handle. Some academic reviews have already been identified as using AI-generated content. As submission numbers continue to rise, this may alter the role of a published academic paper as a definitive signal of the quality and skills of human researchers.

An optimistic take is that AI may shift academia away from its strong reliance on quantity-based metrics, in favour of how influential or innovative publications are. This is a reform critics of the current system have long called for.

Less optimistically, as AI research scales up, an academic system designed for coherent, methodologically defensible contributions may inflate the proportion of incremental, rather than radically novel, scientific contributions. Both the quality and originality of research could suffer as a result.

Science has always needed its heretics to advance. Italian astronomer Galileo, the “father of modern science”, was forced to recant his defence of heliocentrism before the Catholic Church’s Inquisition. Hungarian physician Ignaz Semmelweis died in a psychiatric institution having failed to convince his colleagues that handwashing could save lives.

Yet historically, the ability of scientific institutions to encourage radical approaches has also been a mainstay of how science has progressed. To sustain this, AI systems will need to be trained to maximise novelty and transformation, rather than plausibility and incremental progress.

AI’s impact on creative industries

The transformative effects of this new breed of AI extend well beyond scientific research. A striking example is The Epstein Files. This fully AI-generated podcast reached number one on the UK Apple Podcasts and Spotify charts in early 2026, drawing 700,000 downloads in its first week.

Music is further along and more conflicted. By mid-2025, the fully AI-generated band The Velvet Sundown had amassed over a million monthly Spotify listeners. In 2026, the platform was forced to introduce artist-protection features after AI tracks began displacing human music on popular playlists, while Deezer, facing roughly 50,000 AI-generated uploads daily, began excluding them from curated lists.

Ownership remains the elephant in the room. US courts have ruled that AI-generated works cannot be copyrighted, since human authorship remains a legal requirement. AI can produce at industrial scale, but no one can legally own the output.

This matters far beyond intellectual property law. In creative industries, it threatens the royalty streams, licensing deals and catalogue valuations on which artists, labels and publishers have built their entire business models for generations.

In science, meanwhile, it is destabilising the entire incentive architecture, which rests on the foundational assumption that knowledge is both generated and owned by humans. When that assumption dissolves, so does much of the institutional logic that has governed how we produce, reward and trust expertise.

The question, across all these fields, is no longer whether AI can produce the work. Rather, it is whether sufficient thought has gone into what we will gain and lose when it does.


Sorin M.S. Krammer does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

This article was originally published on The Conversation. Read the original article.
