Google’s artificial intelligence arm DeepMind has been holding back the release of its world-renowned research, as it seeks to retain a competitive edge in the race to dominate the burgeoning AI industry.
The group, led by Nobel Prize winner Sir Demis Hassabis, has introduced a tougher vetting process and more bureaucracy that have made it harder to publish studies about its work on AI, according to seven current and former research scientists at Google DeepMind.
Three former researchers said that the group was most reluctant to share papers that reveal innovations that could be exploited by competitors, or that cast Google’s own Gemini AI model in a negative light compared with rivals.
The changes represent a significant shift for DeepMind, which has long prided itself on its reputation for releasing groundbreaking papers and its status as a home for the best scientists building AI.
Landmark breakthroughs by Google researchers, such as the 2017 “transformers” paper that provided the architecture behind large language models, played a central role in creating today’s boom in generative AI.
Since then, DeepMind has become a central part of its parent company’s drive to cash in on the cutting-edge technology, as investors expressed concern the Big Tech giant had ceded its early lead to the likes of ChatGPT maker OpenAI.
“I cannot imagine us putting out the transformer papers for general use now,” said one current researcher.
Among the changes in the company’s publication policies are a six-month embargo before “strategic” papers related to generative AI are released. Researchers also often need to convince several staff members of the merits of publication, said two people with knowledge of the matter.
A person close to DeepMind said the changes were to benefit researchers who had become frustrated spending time on work that would not be approved for strategic or competitive reasons. They added that the company still publishes hundreds of papers each year and is among the largest contributors to major AI conferences.
Concern that Google was falling behind in the AI race contributed to the merger of the London-based DeepMind and California-based Brain AI units in 2023. Since then, Google has been faster to roll out a wide array of AI-infused products.
“The company has shifted to one that cares more about product and less about getting research results out for the general public good,” said one former DeepMind research scientist. “It’s not what I signed up for.”
DeepMind said it has “always been committed to advancing AI research and we are instituting updates to our policies that preserve the ability for our teams to publish and contribute to the broader research ecosystem”.
While the company had a publication review process in place before DeepMind’s merger with Brain, the system has become more bureaucratic, according to those with knowledge of the changes.
Former staffers suggested the new processes have stifled the release of commercially sensitive research to avoid the leaking of potential innovations. One said that publishing papers on generative AI was “almost impossible”.
In one incident, DeepMind stopped the publication of research that showed Google’s Gemini language model to be less capable or less safe than rivals, especially OpenAI’s GPT-4, according to one current employee.
However, the employee added that it had also blocked a paper that revealed vulnerabilities in OpenAI’s ChatGPT, over concerns the release would seem like a hostile tit-for-tat.
A person close to DeepMind said it does not block papers that discuss security vulnerabilities, adding it routinely publishes such work under a “responsible disclosure policy,” in which researchers must give companies the chance to fix any flaws before making them public.
But the clampdown has unsettled some staffers at an organisation where success has long been measured through appearing in top-tier scientific journals. People with knowledge of the matter said the new review processes had contributed to some departures.
“If you can’t publish, it’s a career killer if you’re a researcher,” said a former researcher.
Some ex-staff added that projects focused on improving Google’s Gemini suite of AI-infused products were increasingly prioritised in the internal battle for access to data sets and computing power.
In the past few years, Google has produced a range of AI-powered products that have impressed the markets. These range from improved AI-generated summaries that appear above search results to an “Astra” AI agent that can answer real-time queries across video, audio and text.
The company’s share price has increased by as much as a third over the past year, though those gains have been pared back in recent weeks as concern over US tariffs hit tech stocks.
In recent years, Hassabis has balanced the desire of Google’s leaders to commercialise its breakthroughs with his life’s mission of trying to build artificial general intelligence (AGI): AI systems with abilities that can match or surpass those of humans.
“Anything that gets in the way of that he will remove,” said one current employee. “He tells people this is a company not a university campus; if you want to work at a place like that, then leave.”
Additional reporting by George Hammond