DeepMind CEO Demands Immediate AI Threat Research as Delhi Summit Nears Consensus

Image: Sir Demis Hassabis of Google DeepMind spoke to the BBC at the AI Impact Summit in Delhi (Getty Images)
Image: Sir Demis won the Nobel Prize in Chemistry in 2024 (BBC)

Delhi AI Impact Summit Convenes Global Leaders
The AI Impact Summit opened in Delhi on 20 February 2026, gathering tech CEOs, policymakers, and researchers from more than 100 nations [1]. Delegates include the heads of OpenAI and DeepMind, along with senior officials from the United States, United Kingdom, and India [1]. The three‑day event aims to shape a coordinated approach to artificial‑intelligence safety and deployment [1].

DeepMind Chief Urges Immediate Threat Research
DeepMind co-founder and CEO Demis Hassabis told reporters that “research on AI threats needs to be done urgently” and called for “smart regulation” and robust guardrails to prevent misuse and loss of control [1]. He warned that without rapid study, powerful systems could outpace existing safety measures [1]. Hassabis emphasized that the window for effective intervention is narrowing as AI capabilities accelerate [1].

Tech CEOs and Politicians Advocate Coordinated Regulation
OpenAI chief Sam Altman echoed that sense of urgency, calling for “urgent regulation” to manage emerging risks [1]. Indian Prime Minister Narendra Modi stressed that nations must cooperate to capture AI’s benefits while mitigating its dangers [1]. UK Deputy Prime Minister David Lammy highlighted political responsibility, urging legislators to work “hand in hand” with industry to protect public safety [1].

U.S. Delegation Opposes Centralized AI Governance
White House technology adviser Michael Kratsios rejected any form of global AI governance, arguing that “AI adoption cannot lead to a brighter future if it is subject to bureaucracies and centralised control” [1]. He reiterated the administration’s stance of fully rejecting international regulatory frameworks for AI [1]. This position contrasts sharply with calls from other leaders for coordinated oversight [1].

China Nears Parity in AI Race, Hassabis Warns
Hassabis noted that the United States and the West are only “slightly” ahead of China in AI development [1]. He warned that China could close the gap within months, heightening the competitive urgency for robust safety research [1]. The comment underscores geopolitical tensions surrounding AI supremacy [1].

Summit Targets Joint AI Governance Statement
Organizers expect the summit to conclude on Friday with a shared declaration from participating countries on handling AI risks [1]. The statement will aim to balance innovation with safety, reflecting input from tech firms and governments alike [1]. Over 100 nations are slated to endorse the final document, marking a potential milestone in global AI policy [1].

Timeline

Nov 2022 – OpenAI launches ChatGPT to the public, sparking widespread consumer interaction with conversational AI and setting the stage for later concerns about mental‑health impacts and the soaring market valuation of AI firms [2].

Dec 13, 2025 – Dan Houser releases his debut novel A Better Paradise, depicting a rogue AI “NigelDave” that hijacks games to manipulate users’ thoughts, and he warns “people should not let devices dictate their thoughts,” while also announcing plans for a sequel and a video‑game adaptation [2].

Dec 20, 2025 – FBI Director Kash Patel announces an internal AI project for national‑security investigations, notes a September deal that integrates Elon Musk’s Grok chatbot via xAI into federal work, and says First Lady Melania Trump will lead the administration’s AI initiative; he also flags Deputy Director Dan Bongino’s upcoming January departure, leaving project leadership uncertain [5].

Dec 30, 2025 – The Trump administration rolls out an AI action plan that reduces regulations, leverages Nvidia and AMD chips as trade‑policy tools, and issues executive orders limiting state AI rules; high‑profile mental‑health lawsuits over ChatGPT prompt new parental controls, tech giants pour tens of billions into data‑center infrastructure, and Amazon cuts 14,000 corporate jobs as AI drives productivity shifts [3].

Jan 8, 2026 – Tech ethicist Tristan Harris argues the AI transition proceeds without public consent, while Elon Musk and OpenAI CEO Sam Altman push toward superintelligent systems, prompting a polarized debate in which lawmakers warn of massive job loss and researchers cite risks to democracy and mental health [4].

Feb 20, 2026 – At the AI Impact Summit in Delhi, DeepMind CEO Demis Hassabis urges “urgent” AI‑threat research and warns China could close the AI gap “in months,” while U.S. adviser Michael Kratsios rejects global AI governance, Sam Altman calls for “urgent regulation,” Prime Minister Narendra Modi stresses international cooperation, and UK Deputy PM David Lammy demands political responsibility; delegates from over 100 countries prepare to issue a joint AI‑governance statement by the end of the summit [1].
