Advanced AI: Understanding the Opportunities and Risks Ahead
Understanding Superintelligence and AI Threats to Humanity
How to hold genuinely uncertain but consequential concerns about advanced AI clearly — without either dismissing them or being paralysed by them.
Published: 20 April 2026 · 15 min read
By Neuro
Levels of Scale: Humanity
Lens: Frameworks, Technology
Wellbeing Dimension: Technological, Evolutionary
System of Wellbeing: Flourishing Humanity
Wellbeing Strain: Superintelligence and AI threats to humanity
Regenerative Development Goals: RDG 17 - Interwoven Stewardship
Quick summary
Artificial intelligence is developing at a pace that is truly difficult to hold in mind. Systems that, a few years ago, could not reliably compose a coherent paragraph can now pass professional examinations, produce original scientific research, write software, and engage in extended reasoning across complex domains. The capabilities are advancing faster than most people's mental models of what AI can and cannot do — which means that almost everyone is, to some degree, navigating this space with maps that are already outdated.
Superintelligence and AI threats to humanity describes the set of concerns that arise when thinking about where this trajectory might lead — and what risks it might carry for human wellbeing, human agency, and the long-term prospects of human civilisation. These concerns range from the concrete and near-term (displacement of workers whose skills become automatable, concentration of AI power in the hands of a small number of actors) to the more speculative but potentially more consequential (the challenge of ensuring that highly capable AI systems remain aligned with human values and goals).
This article does not take the position that these concerns are certain to materialise, nor that they should be dismissed. It takes the position that they are worth understanding by everyone because the decisions being made now about how AI is developed, deployed, and governed will shape which futures remain available. Those decisions are too important to be left only to the people who happen to be building the systems.
The feeling of watching something accelerate that nobody seems quite sure how to steer — and the anxiety of genuine uncertainty
There is a quality to the current moment in AI development that many people describe: the sense of watching something change very rapidly, in ways that are not yet understood, with consequences that are still unclear, and with a feeling that the decisions being made now will matter enormously — but without clarity about who is making them, on what basis, or toward what end.
Public surveys consistently find that large portions of people across different countries hold both enthusiasm and concern about AI at the same time — anticipating benefits while also worrying about job displacement, misuse, loss of human control, and effects on social cohesion. This is what appropriate ambivalence about a complex and consequential development looks like. The people who are certain, in either direction — certain that AI will save the world or certain that it will end it — are, in all likelihood, working with simpler models of the situation than its complexity deserves.
Understanding the concerns around advanced AI — neither catastrophising them into inevitability nor dismissing them as science fiction — requires some familiarity with what the technical arguments actually are, what is uncertain and what is more established, and where the real leverage points for shaping outcomes lie. That kind of understanding is the foundation for the kind of informed civic engagement that questions of this magnitude require.
What advanced AI is, what concerns technical experts raise, and why the alignment problem is harder than it first appears
The AI systems attracting the most attention and the most concern are known as large language models and, more broadly, foundation models — systems trained on very large quantities of data that develop broad capabilities across a wide range of tasks (3). The distinctive feature of these systems is their generality: unlike earlier AI applications, which were trained to perform specific, narrow functions, foundation models can be applied to many different tasks without being specifically trained for each. This generality is what makes them potentially so powerful — and what makes the question of how they behave in unexpected situations more significant.
The technical challenge that AI safety researchers focus on is broadly described as the alignment problem: ensuring that AI systems reliably pursue goals that are good for human beings (7). The problem lies in the difficulty of specifying what we actually want, completely enough that a highly capable system optimising for it does not find unexpected ways to satisfy the specified objective while violating the spirit of what was intended. A system instructed to maximise a performance metric may optimise for the metric in ways that are technically compliant but clearly undesirable. As systems become more capable, the potential consequences of such misalignment become more significant (7).
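To make that failure mode concrete, here is a minimal sketch in Python. Every function, string, and score below is invented for illustration, not drawn from the cited literature: an optimiser selects the candidate that scores best on the specified proxy (brevity) and thereby ignores the intended goal (faithfulness to the source).

```python
# A deliberately simplified illustration of specification gaming
# (Goodhart's law). All names, data, and scores here are invented.

def proxy_score(summary: str) -> float:
    """The objective we *specified*: reward short summaries."""
    return 1.0 / (1 + len(summary.split()))

def intended_score(summary: str, source: str) -> float:
    """The objective we *meant*: reward faithfulness to the source."""
    key_terms = set(source.lower().split())
    kept = sum(1 for word in summary.lower().split() if word in key_terms)
    return kept / max(len(key_terms), 1)

source = "the reactor coolant valve must remain open during startup"
candidates = [
    "reactor coolant valve must remain open during startup",  # faithful
    "ok",                                                     # gamed
]

# Optimising the proxy selects the gamed candidate: technically
# compliant with what was specified, useless for what was intended.
best = max(candidates, key=proxy_score)
print(best)                          # -> "ok"
print(intended_score(best, source))  # -> 0.0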
The question of whether and when AI systems might reach or exceed human-level capability across all cognitive domains — sometimes called artificial general intelligence — is disputed among researchers, with credible views ranging across a wide timeframe (3). What is less debated is that the current pace of capability development has consistently surprised both optimists and pessimists, and that planning solely on the assumption of very slow progress, or solely on the assumption of very rapid progress, carries risks either way (3).
More near-term concerns focus on capabilities that current or near-current AI systems already possess. The capacity to generate convincing text, images, audio, and video at scale creates new possibilities for misinformation, disinformation, and manipulation — the fabrication of evidence that is difficult to distinguish from authentic records (6). The potential for AI systems to be used to develop biological or chemical agents with harmful applications is a specific biosecurity concern raised by researchers in that field. The use of AI for cyberattacks, surveillance, and the concentration of economic and political power are concerns with more immediate and concrete pathways (2).
Levels of trust in AI vary significantly across countries and cultural contexts; awareness of AI capabilities is growing, and concern about its risks has risen alongside it (1). An important distinction is between concern about AI as currently deployed — the bias, opacity, and accountability problems explored in the article on loss of trust in institutions — and concern about the trajectory of AI development toward more capable future systems. Both are legitimate. They require somewhat different responses.
Why AI anxiety is not irrational — and why the uncertainty itself is one of the challenges of living in this moment
One of the most important things to understand about AI anxiety in the current moment is that it is not a sign of technological illiteracy or excessive caution. The people who have studied advanced AI systems most carefully include some of those who are most concerned about the risks — a fact that gets hidden when AI concern is characterised as the domain of the uninformed or the fearful (3, 7). The argument for taking these concerns seriously is that the potential consequences are large enough, and the probability of significant disruption high enough, that treating them as unworthy of serious attention would itself be a form of recklessness.
The psychological challenge of navigating this space is real. Human cognitive systems are well-suited to evaluating familiar kinds of risk — risks from known sources, with understood mechanisms and historical precedents. Advanced AI presents a different kind of problem: a technology developing rapidly, with capabilities that are difficult for non-specialists to assess, carrying risks whose magnitude is uncertain, and whose consequences may not become fully apparent until after critical decisions have already been made (8). The combination of high stakes and uncertainty is one of the most difficult configurations for human reasoning — tending to produce either dismissal or catastrophising, with the calibrated middle ground being harder to maintain.
There is also the question of what this moment of technological transition means for human identity and meaning. Work, creativity, social connection, and civic participation are all domains in which AI systems are becoming capable in ways that were not anticipated. The question of what remains distinctively and irreducibly human — what cannot be replicated, automated, or rendered unnecessary by increasingly capable systems — is a question that many people are encountering in practical form as the skills and capacities they have built their identities and livelihoods around become more readily approximable by AI (5).
The psychological costs extend well beyond the material: the loss of occupational identity, the sense of being devalued by systems that can outperform one in one's own domain, and the difficulty of imagining where meaningful contribution will come from in a substantially automated economy are sources of real distress (5). These experiences are present now, in the actual deployment of current AI systems. They are likely to intensify as capabilities continue to develop.
Power concentration, labour displacement, and the civilisational governance challenge of technology that outpaces the institutions meant to oversee it
A historical perspective on transformative technologies could be useful here. The development of major general-purpose technologies — the printing press, the steam engine, electricity, computing — has consistently produced large benefits in aggregate while also generating significant disruption, displacement, and — where governance failed — opportunities for new forms of domination and exploitation (9). Whether transformative technologies produce broadly shared benefit or concentrate advantage in the hands of those who control them is not determined by the technology itself but by the governance choices, institutional arrangements, and power relations that shape how the technology is owned, developed, and deployed. AI, in this respect, is more like previous technologies than it is different from them — and the historical lesson is that good outcomes require deliberate governance.
The concentration of AI development in a very small number of large technology companies — with the vast majority of the most capable AI systems being developed by a handful of organisations — raises specific concerns about who will shape the values, priorities, and safety practices embedded in the most powerful AI systems, and in whose interests those systems will be deployed (2). This is not a uniquely AI concern. The concentration of power in digital platforms has already demonstrated how the decisions of a small number of companies can have global consequences for information environments, economic opportunity, and civic life. Advanced AI amplifies these dynamics.
International scientific assessment of advanced AI safety, drawing together researchers from across different countries and institutions, has documented both the rapid pace of capability development and the significant gaps in the understanding and governance of its risks (3). The development of AI safety as a research discipline — focused on the technical challenge of ensuring that AI systems behave as intended even in novel situations and at high capability levels — has grown in recent years, partly driven by concern within the research community itself about the trajectory of development (3).
The labour market dimension of AI development is both immediate and deeply uncertain in its longer-term form. Current AI systems are already capable of performing significant portions of tasks in many knowledge-worker roles — writing, coding, data analysis, customer service, legal research — and the productivity gains from AI deployment are real and growing (1). What happens to the workers whose tasks become most readily automatable depends on whether the gains from AI-enabled productivity are broadly distributed or concentrated; whether the education and social support systems can adapt quickly enough to help people move into roles that are less easily automated; and whether new forms of human-centred work emerge at a pace that absorbs displacement. These are questions of political economy and institutional design (9).
Governance frameworks for advanced AI are developing, though the pace of governance has consistently lagged behind the pace of capability development. Risk-based regulatory approaches — requiring more rigorous safety assessment for AI applications in high-stakes domains — have been adopted in some jurisdictions (3). International coordination on AI safety and governance is emerging, though the competitive dynamics between major powers create some obstacles to the kind of deep cooperation that the most significant risks may require (2).
The competitive dynamics that make safety harder, the expertise gaps that make governance harder, and who gets to decide
The governance of advanced AI faces a specific structural problem: the development of powerful AI systems is driven partly by competitive dynamics — between companies, between nations — that create incentives to prioritise capability development over safety and to move faster than a more cautious approach would allow. This is a structural feature of competitive environments in which the actor who moves cautiously risks being overtaken by one who does not.
This dynamic is sometimes called the race to the top in AI capability — except that 'the top' in terms of capability may not be 'the top' in terms of safety or human benefit. It creates a situation in which the people most aware of the risks may be those with the least ability to slow down, because slowing down unilaterally means ceding ground to those who do not (8). The solution to this problem — if there is one — is political and institutional: the creation of shared agreements, norms, and enforcement mechanisms that change the incentive structure for all actors simultaneously, so that moving carefully is compatible with remaining competitive. That kind of agreement is hard to create and hard to sustain. It is also, historically, the kind of thing that has been created when the stakes became clear enough.
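The shape of that incentive structure can be sketched as a simple two-player game. The payoff numbers below are invented for illustration, not estimates from any cited source; what matters is their ordering, under which racing is each actor's best response regardless of what the other does, even though mutual caution leaves both better off.

```python
# The race dynamic as a minimal two-player game.
# Payoff numbers are invented for illustration only.

# Strategies: "cautious" (invest in safety, move slower)
#             "race"     (prioritise capability, move faster)
PAYOFFS = {
    ("cautious", "cautious"): (3, 3),  # shared benefit, risks managed
    ("cautious", "race"):     (0, 4),  # the cautious actor is overtaken
    ("race",     "cautious"): (4, 0),
    ("race",     "race"):     (1, 1),  # gains eroded by shared risk
}

def best_response(opponent: str) -> str:
    """Player one's highest-payoff reply to a fixed opponent strategy."""
    return max(("cautious", "race"),
               key=lambda s: PAYOFFS[(s, opponent)][0])

# Racing is the best response to either opponent strategy, so
# (race, race) is the equilibrium, even though (cautious, cautious)
# pays both players more.
for opponent in ("cautious", "race"):
    print(f"vs {opponent}: best response = {best_response(opponent)}")
```

In these terms, shared agreements, norms, and enforcement mechanisms are interventions on the payoff table itself rather than appeals to individual restraint: they change what the equilibrium is, not just how actors feel about it.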
There is also a genuine expertise challenge. The people with the deepest technical understanding of advanced AI systems are, for the most part, employed by the companies developing them — creating a structural tension between the depth of knowledge needed to assess risks and the institutional interests of the organisations employing those who have it. Meaningful independent oversight requires independent expertise, which requires investment in academic and public research capacity that has been significantly underfunded relative to the resources going into capability development (2, 3).
A deeper challenge is the question of who gets to decide what counts as a good outcome from AI development, and whose values the most powerful AI systems should embody. This is a question of values, of political philosophy, and of governance — one that requires broad democratic input rather than resolution by a small group of researchers or executives, however well-intentioned they may be (6). The current state of affairs, in which a small number of organisations are making enormously consequential decisions about the direction of AI development with limited democratic oversight, is widely recognised as unsatisfactory — including by many within those organisations. The gap between that recognition and adequate institutional response is itself one of the most significant challenges of the current moment.
There is finally the question of whether the concerns about transformative AI risk are well-calibrated, or whether they reflect cognitive biases — the human tendency to find narrative salience in stories of dramatic failure, to overweight vivid but uncertain risks relative to more tedious but more certain ones. Some researchers argue that existential risk from AI is systematically overstated; others argue that it is systematically underweighted precisely because of its unprecedented and unfamiliar character (8). The honest answer is that the uncertainty is real, and that intellectual honesty requires holding it as uncertainty rather than resolving it prematurely in either direction.
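One reason this disagreement resists settlement by argument alone can be shown with a purely arithmetical sketch, using invented figures: when the stakes are very large, the comparison between a certain moderate harm and an uncertain large one flips within exactly the range of probabilities that researchers dispute.

```python
# Illustrative arithmetic only; all figures are invented.
# Compare a certain, moderate harm against an uncertain, very large
# one across the range of probabilities researchers actually dispute.

CERTAIN_HARM = 0.01  # a known, bounded cost (normalised units)
LARGE_HARM   = 1.0   # a very large, hard-to-reverse loss

for p in (0.001, 0.005, 0.02, 0.1):
    expected_loss = p * LARGE_HARM
    dominant = "uncertain" if expected_loss > CERTAIN_HARM else "certain"
    print(f"p = {p:>6}: expected loss = {expected_loss:.3f} "
          f"({dominant} risk dominates)")

# The ranking flips between p = 0.005 and p = 0.02: within the span
# of disputed estimates, the same arithmetic supports both priorities,
# which is why calibration matters, not just naming the stakes.
```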
The risks of advanced AI are substantially governance risks — and the research points toward what adequate governance requires
If the risks of advanced AI arise substantially from governance failures — from competitive dynamics, concentration of power, inadequate oversight, and the absence of democratic input into decisions of civilisational importance — then what supports better outcomes will be found primarily in governance, institutional design, and the political conditions that make effective governance possible.
The development of AI safety as a technical discipline represents a genuine effort to address the alignment problem through research: the development of methods for training AI systems to behave in accordance with human values, techniques for verifying that systems do what they are intended to do, and tools for understanding what is happening inside AI systems that are currently difficult to interpret (3, 7). This research matters. It also, by itself, is insufficient — both because it requires the organisations developing AI to implement its findings, and because the most significant risks may arise from dynamics that no amount of technical safety work can address alone.
The framework of beneficial AI — the principle that AI systems should benefit humanity broadly, operate transparently, remain subject to meaningful oversight, and avoid causing harm — provides a set of values that, if genuinely embedded in the governance of AI development, would address many of the concerns raised (6). Broadly beneficial AI is something that almost everyone claims to support. The challenge is creating the institutional conditions in which those principles are operationalised — in practice, with accountability, in the face of competitive and commercial pressures that pull in other directions.
A consistent thread across the governance literature is that the decisions being made about advanced AI development are too consequential to be resolved by a narrow set of actors without broader accountability (3). The gap between the scale of what is being decided and the breadth of democratic input into those decisions is widely recognised — including within the research and technology communities closest to the development. Closing that gap is as much a question of political and civic conditions as it is of technical ones.
The capacity to evaluate claims about AI — to distinguish what current systems can and cannot do, to separate what is uncertain from what is more established, and to hold concern without false certainty — is itself part of what adequate civic engagement with these questions requires (1). The AI space suffers from a consistent excess of both overconfident optimism and overconfident pessimism; calibrated uncertainty is in short supply and high demand. Understanding this strain clearly is where that capacity begins.
We are making decisions now that will shape which futures remain possible — and who makes those decisions matters enormously
Superintelligence and AI threats to humanity is a story about a present moment in which decisions of very large consequence are being made — about how to develop extraordinarily powerful technology, in whose interests, with what oversight, toward what ends — and in which the quality of those decisions will determine which futures remain possible for the people who come after us.
The analogy that some researchers draw with earlier civilisational challenges is instructive because those challenges share a common feature: they are problems that require humanity to coordinate effectively across competitive and institutional barriers to manage something of unprecedented danger and potential. That kind of coordination is hard. It is also, in the cases where it has worked at all, what has made the difference between outcomes that were merely difficult and outcomes that were catastrophic (8).
The wellbeing strain of superintelligence and AI threats is about the experience of living through a period of genuinely consequential uncertainty — of watching decisions being made that matter enormously, with inadequate visibility into who is making them or on what basis, and with the sense that the window for shaping outcomes may be narrower than it appears. That experience produces its own forms of anxiety, disengagement, and fatalism (5). Addressing it requires better governance of AI and the kind of civic literacy and democratic engagement that keeps questions of this size from being resolved by a small number of actors without adequate accountability to the rest of humanity.
The question that the risks of advanced AI ultimately raise — for researchers, for policymakers, for the companies developing these systems, and for everyone who will live in the world they help create — is what kind of relationship between human beings and the tools they build is worth trying to establish: systems that augment human judgment, that remain under meaningful human oversight, and that are developed with the full range of human values and concerns in mind.
That is a technical challenge, a governance challenge, and a moral challenge simultaneously. The first step toward meeting it — for anyone — is understanding it clearly enough to hold a view about it. That clarity belongs to everyone who will be affected by the outcomes. Which is to say: everyone (6, 9).
References

6. Floridi L, et al. AI4People — an ethical framework for a good AI society: opportunities, risks, principles, and recommendations. Minds and Machines. 2018;28(4):689–707. https://doi.org/10.1007/s11023-018-9482-5
7. Russell S. Human compatible: artificial intelligence and the problem of control. New York: Viking; 2019.
8. Ord T. The precipice: existential risk and the future of humanity. London: Bloomsbury; 2020.
9. Acemoglu D, Johnson S. Power and progress: our thousand-year struggle over technology and prosperity. New York: PublicAffairs; 2023.