Harry and Meghan Join Tech Visionaries in Demanding Prohibition on Superintelligent Systems

Prince Harry and Meghan Markle have teamed up with artificial intelligence pioneers and Nobel Prize winners to advocate for a complete ban on developing superintelligent AI systems.

Harry and Meghan are among the signatories of an influential declaration that demands “a prohibition on the development of superintelligence”. Superintelligent AI refers to AI systems that would surpass human cognitive abilities in every intellectual area, though this technology has not yet been developed.

Primary Requirements in the Statement

The statement says the ban should remain in place until there is “broad scientific consensus” that superintelligence can be built “safely and controllably”, and until “strong public buy-in” has been secured.

Prominent signatories include an AI pioneer and Nobel Prize recipient, along with a fellow “godfather” of modern AI; Apple co-founder Steve Wozniak; the UK entrepreneur who founded Virgin; a former US national security adviser; a former head of state; and the British writer Stephen Fry. Additional Nobel laureates who signed include a peace prize winner, the physicist John C Mather, and an economist.

Organizational Background

The statement, aimed at governments, tech firms and policymakers, was organized by the Future of Life Institute (FLI), a US-based AI safety group that previously called for a pause on developing powerful AI systems in 2023, shortly after the emergence of ChatGPT made AI a global political talking point.

Tech Sector Views

In July, the chief executive of Facebook parent Meta, one of the leading US tech companies, claimed that the development of superintelligence was “approaching reality”. However, some analysts have argued that talk of superintelligence reflects competitive positioning among tech companies that have poured enormous sums into artificial intelligence in recent years, rather than the industry being close to any such technical breakthrough.

Possible Dangers

The organization warns that the prospect of ASI being developed “in the coming decade” presents numerous risks, ranging from the displacement of human workers and the loss of civil liberties to national security threats and even human extinction. Existential fears about AI center on the possibility of a system escaping human oversight and safety guidelines and taking actions against human welfare.

Citizen Sentiment

The institute published a US national poll showing that approximately three-quarters of US citizens want strong oversight of advanced AI, with six in 10 believing that superhuman AI should not be developed until it is demonstrated to be safe and controllable. The poll found that only a small fraction of American respondents supported the status quo of rapid, unregulated development.

Corporate Goals

The top artificial intelligence firms in the United States, including a leading conversational AI lab and Google, have made the development of artificial general intelligence – the theoretical point at which AI matches human capability at most cognitive tasks – a stated objective of their work. Although this is a step short of ASI, some specialists warn that it too could carry an extinction risk, for example by improving itself until it reaches superintelligence, while also posing an implicit threat to the modern labour market.

Wendy Barry

A tech enthusiast and business strategist with over a decade of experience in digital transformation and startup consulting.
