DeepMind CEO Demis Hassabis Says AI Misuse, Not Job Loss, Is the Bigger Threat
Demis Hassabis, CEO of DeepMind and a Nobel Prize laureate, says the most serious threat posed by artificial intelligence is not widespread job loss, but the potential for the technology to be misused by bad actors.
Speaking at the SXSW festival in London, Hassabis emphasized the need for tighter restrictions on access to powerful AI systems, warning that the world is moving too slowly to regulate tools capable of destabilizing entire economies and societies.
“Both of those risks are important,” Hassabis told CNN’s Anna Stewart during the interview. “But a bad actor could repurpose those same technologies for a harmful end… and so one big thing is how do we restrict access to these powerful systems to bad actors, but enable good actors to do many, many amazing things with it?”
His comments come amid growing anxiety over AI’s disruption to the labor market. Last week, Anthropic CEO Dario Amodei warned that artificial intelligence could wipe out half of all entry-level white-collar jobs. Meta CEO Mark Zuckerberg recently said AI would write half of his company’s code by 2026.
But Hassabis, who heads Google’s flagship AI research lab, downplayed fears of a “jobpocalypse.” He acknowledged that AI will change the nature of work and push society to adapt, but said the real challenge lies in how governments, institutions, and companies distribute the productivity gains AI will generate.
“There’s going to be a huge amount of change,” he said. “Usually what happens is new, even better jobs arrive to take the place of some of the jobs that get replaced. We’ll see if that happens this time.”
But he said the bigger danger is letting these systems develop and proliferate without adequate safeguards. Citing recent examples—like hackers impersonating U.S. officials using AI-generated voice messages, or the rise in deepfake pornography—Hassabis said the absence of a global framework for AI safety is increasingly alarming.
A 2023 State Department report warned that AI could pose “catastrophic” risks to national security, while the FBI recently issued an advisory after detecting AI-generated audio being used to impersonate American officials.
The concerns are not hypothetical. Just last month, President Donald Trump signed the Take It Down Act, which aims to curb the spread of non-consensual explicit deepfakes by making it illegal to share such content online. At the same time, Google quietly removed language from its AI ethics page in February, including a clause pledging not to use AI for weapons and surveillance—fueling more concerns about the erosion of internal guardrails.
A Call for International AI Governance
To prevent misuse, Hassabis is calling for an international agreement on how AI should be used and governed—a kind of global accord similar to nuclear or climate pacts. But geopolitical tensions, particularly between the U.S. and China, have so far stalled progress on any unified regulatory vision.
“Obviously, it’s looking difficult at present day with the geopolitics as it is,” Hassabis admitted. “But… as AI becomes more sophisticated, I think it’ll become more clear to the world that that needs to happen.”
The DeepMind chief envisions a future in which people will rely heavily on AI “agents”—autonomous tools capable of executing complex tasks in real time. Google is already working to integrate such capabilities into its search engine and has experimented with AI-powered smart glasses that function like always-on digital assistants.
“We sometimes call it a universal AI assistant that will go around with you everywhere,” Hassabis said. “Help you in your everyday life, do mundane admin tasks for you, but also enrich your life by recommending you amazing things… maybe even friends to meet.”
Between Hype and Reality
Despite the powerful promise of AI and its rapid adoption across sectors, Hassabis noted that the technology still suffers from serious limitations, including bias and hallucination. These flaws have led to real-world failures, such as when The Chicago Sun-Times and The Philadelphia Inquirer published AI-generated summer reading lists that included books that didn’t exist.
While technologists like Hassabis remain optimistic that AI will be a net benefit to society, his comments underscore a growing split among industry leaders: some warn of AI’s economic shockwaves, others of its geopolitical risks. Hassabis seems to believe both are real, but that only one, misuse by bad actors, could spiral out of control if left unchecked.