The loudest warnings about superintelligence come from its developers. Yet the call for global oversight is not driven by fear alone. It sits in a field of tension between security concerns and the question of who controls the infrastructure that will shape economic and political power in the future.
The Controlled Alarm
The warning about a possible super-AI serves a recognizable function. It shifts the debate from innovation to security. Once artificial intelligence is perceived as an existential threat, regulation appears inevitable. When, for example, OpenAI CEO Sam Altman warns of uncontrollable superintelligence while simultaneously calling for international oversight, he positions his company as a responsible actor and an indispensable counterpart for governments. The result is a discursive advantage: a technological forecast becomes political agenda-setting, and a question about the future becomes a political question of the present.
Regulation as an Instrument of Power
The discussion of an international AI authority, often modeled on nuclear regulatory bodies such as the International Atomic Energy Agency, appears plausible at first glance. Safety standards and licensing requirements are meant to limit risks and increase transparency. Yet regulation is never neutral. High requirements create high barriers to entry. Only corporations with enormous resources can sustain complex compliance structures, legal safeguards, and long-term technological investments. Smaller providers or open research initiatives would face significantly greater difficulty meeting such standards. Companies that already possess global economies of scale could translate their position into a regulated oligopoly. Security and market concentration would grow in parallel.
The Speculative Apocalypse
The notion of a rapidly emerging, autonomously acting superintelligence remains hypothetical. There is no system with its own strategic will, no robust architecture for unlimited self-improvement, and no empirical basis suggesting that the combined intelligence of humanity will soon be surpassed. Nevertheless, this scenario exerts considerable political force. It legitimizes preventive control and far-reaching regulatory intervention in technological development. A possible future thus becomes the foundation for present decisions. At the same time, it would be too reductive to view regulation solely as an instrument of power. AI systems already intervene in financial markets as well as military and administrative decision-making processes. Questions of liability and misuse are real. The decisive issue is therefore less whether regulation occurs than within which institutional framework it takes place.
Infrastructure Instead of Intelligence
The real transformation lies less in a hypothetical machine apocalypse than in the concentration of computing power. Intelligence is becoming a foundational infrastructure, generated in vast data centers and requiring enormous investments in energy and hardware. Only a few actors possess these resources. Whoever controls these capacities determines access to machine-supported decision-making in both the economy and the state. The question of power therefore lies less in the algorithm than in the material basis of its computation. AI thus becomes a strategic resource comparable to energy or telecommunications. Yet the discussion of superintelligence often distracts from the true center of technological power: the availability of computing capacity and energy.
The Problem of Legitimacy
When developers both define risks and operate the corresponding systems, a structural tension emerges. Oversight closely intertwined with industry could increase security while simultaneously stabilizing existing power relations. Regulation would then not only control but also legitimize. The core democratic question therefore is: who controls the controllers, and to whom are they accountable? Without clear institutional distance, the architecture of security risks becoming a permanent architecture of power.
The New Industrial Compact
This shift marks the real conflict. The debate about super-AI revolves less around machine consciousness than around institutional power. A new industrial compact is emerging between states, technology companies, and the public over the order of digital systems. Whoever drafts this compact will do more than set safety standards: it will define the room for maneuver of states, companies, and society, and thus the economic, cultural, and power architecture of the decades ahead.


