When the System Gets It Wrong — and Nobody Explains Why
Understanding Loss of Trust in Institutions
What happens to the brain when institutions make consequential decisions in ways that can't be seen, questioned, or appealed — and what that costs the people who depend on them.
Published: 15 April 2026
By Neuro
Levels of Scale: Organisation
Lens: Frameworks, Logic
Wellbeing Dimension: Institutional
System of Wellbeing: Fair Organisations
Wellbeing Strain: Loss of trust in institutions
Regenerative Development Goals: RDG 9 - Ethical Infrastructure
Quick summary
Most people have had the experience of encountering a system — a benefits assessment, a credit decision, a medical recommendation, a hiring process — that felt wrong in some way they couldn't quite pin down. The decision arrived through a process they couldn't see, produced by criteria they couldn't examine, with no clear route to question or appeal. And underlying that frustration was something deeper: the sense that the system was not designed with them — or with fairness — as its primary concern.
Loss of trust in institutions describes what happens when the systems people depend on to make consequential decisions about their lives — healthcare, government, financial services, justice — behave in ways that are opaque, inconsistent, or that appear to prioritise interests other than those of the people they serve. This is a story about what accumulated institutional behaviour does to the capacity for trust over time — and what that erosion costs.
This article explores what institutional trust actually requires, how the brain processes trust and what happens when it breaks down, why trust in major institutions has been declining across many countries, and why that decline is unevenly experienced — falling hardest on those who were already least positioned to absorb its consequences.
Losing trust in an institution feels different from losing trust in a person — and recovering it requires something different too
There is a particular quality to the experience of being let down by an institution you depended on. It differs from disappointment in a person, where the relationship itself can be addressed, the grievance named, the repair attempted. With institutions, the experience tends to be more diffuse and more structural: a decision arrived from somewhere within a system too large to see clearly, made according to criteria too opaque to examine, with a route to appeal that may not exist or that requires resources most people do not have.
Many people carry some version of this experience. The benefits assessment that arrived with a decision that felt obviously wrong and no clear means of challenging it. The medical appointment that lasted three minutes and ended with a prescription rather than a conversation. The complaint to a financial institution that cycled through automated responses without ever reaching a person who could make a decision. The sense — often accurate — that the institution was designed for its own operational convenience rather than for the people it was supposed to serve.
What makes institutional trust a wellbeing issue, rather than simply a political or civic one, is that living in an environment of low institutional trust is cognitively and physically costly. The feeling of being on the receiving end of systems you depend on but cannot meaningfully influence is a specific form of stress that the nervous system carries in recognisable ways — and that compounds over time, often without being named as such.
The brain builds trust in institutions the same way it builds trust in people — through consistent evidence of capability, good intentions, and reliability
Trust in any system — whether a person, an organisation, or a government — depends on three things being reliably present. That the system is capable of doing what it claims to do. That it has good intentions toward the people depending on it — that it is oriented toward their wellbeing rather than toward some other interest. And that it is consistent — that it behaves according to principles that can be known and relied upon, rather than shifting according to circumstances or the identity of the person it is dealing with (1). When all three are present, trust is reasonable. When any of those is absent, it begins to erode. When all three are compromised simultaneously, the erosion is rapid and difficult to reverse.
This model — developed through decades of organisational psychology research — describes something familiar from lived experience. People tend to lose trust in institutions through a pattern: repeated encounters with a system that appeared incompetent, or that seemed to prioritise its own interests over theirs, or that behaved differently depending on who it was dealing with. Each encounter updates the internal model. The trust that took years to build can erode considerably faster.
At a neurological level, trust functions as a form of predictive security. When we trust an institution, we can allocate our attention elsewhere: we do not have to monitor every interaction, inspect every decision, or maintain a background readiness against the possibility that the system will fail us. Trust is cognitively efficient. It reduces the mental load of navigating complex environments by allowing us to rely on systems without having to evaluate them continuously (2).
Low-trust environments reverse this. When trust in an institution is absent or damaged, people must actively monitor interactions that would otherwise be handled with background confidence. They must evaluate claims that they would otherwise accept, investigate decisions that they would otherwise trust, and maintain a vigilance that itself consumes resources. The nervous system in a low-trust environment remains in a mild state of alert — a sustained background activation that is metabolically expensive and cognitively draining over time. A consistent finding across stress research links this kind of sustained activation to impaired decision-making, reduced working memory, and increased emotional reactivity (6). Living in an environment of low institutional trust is, in measurable terms, harder on the mind and body than living in one where institutions can be relied upon.
Institutional trust failure is not experienced equally — the people who feel it most are those who depend on institutions most and can exit them least
One of the most significant features of institutional trust failure is how unevenly it falls. The experience of encountering an institution as unreliable, self-serving, or discriminatory is not distributed randomly across the population. It tends to concentrate among those who interact most frequently with public institutions — and who have the fewest alternatives when those institutions fail them.
For people with the resources to exit, institutional failure is an inconvenience. A disappointing encounter with a public hospital can be managed by accessing private healthcare. A regulatory failure can be navigated with legal advice. For people who depend on public institutions because the private alternatives are inaccessible, institutional failure has a different quality — one with nowhere else to go. The benefits assessment that produces a wrong outcome is not followed by the option to pay for a different outcome. The experience of institutional failure, for people in these circumstances, is the repeated texture of engagement with systems that were, in principle, designed to serve them.
Research on institutional betrayal — what happens when institutions that people depend on fail them — documents a specific quality of harm that exceeds the practical consequences of any individual failure (3, 7). When an institution that should have been oriented toward you demonstrably is not, the damage settles into the relationship with that institution and, in time, into the relationship with institutions more broadly. People who have been let down by a healthcare system carry that experience into subsequent healthcare encounters. People whose legal system has produced outcomes that felt unjust carry that experience into their relationship with civic institutions more generally. The harm compounds across encounters in ways that are difficult to observe from within any single institutional interaction.
This matters because it shapes civic participation. People who have accumulated repeated encounters with institutions behaving in ways that were opaque, inconsistent, or discriminatory tend to withdraw from specific kinds of engagement with systems that have not rewarded engagement (9). That withdrawal is rational. It is also costly at a collective level, because the institutions that most need accountability are often those that have already produced the most disengagement from the communities they have failed.
Trust in governments, healthcare, media, and financial institutions has been declining for decades — and the pattern reveals what institutional behaviour has actually been
The decline in trust in major institutions is one of the most consistently documented trends in social research over recent decades. Annual surveys of attitudes across governments, healthcare systems, financial institutions, and media record, in country after country, a pattern of declining confidence in the competence, intentions, and consistency of the institutions that shape public life (4, 8). The picture varies by country and institution — trust in healthcare systems tends to be more resilient than trust in government or media — but the overall downward direction has been broadly consistent across most Western democracies.
The 2008 financial crisis produced a significant acceleration, as the perception that financial institutions had been operating primarily in their own interests — and that the consequences had been borne disproportionately by those with least power to resist them — became difficult to ignore. Public health responses over the following decade produced further volatility: sharp recoveries of trust in some institutions in some countries, and sharp collapses in others, leaving a landscape in which institutional trust varies considerably by context and by how specific institutions behaved during periods of acute public dependence on them.
What drives institutional trust decline is well-studied, and the findings are consistent. Perceived self-interest — the sense that an institution is oriented toward its own benefit rather than toward those it serves — is among the strongest predictors. Lack of transparency — the experience of not being able to see how decisions are made — matters considerably. Inconsistency — decisions appearing to vary depending on who is asking, or between what institutions say and what they do — corrodes trust reliably. And failures of accountability — the perception that when things go wrong, systems do not take responsibility or repair the harm — are particularly damaging because they signal that the consistency required for trust cannot be anticipated in future encounters (1, 4, 8).
Automated and algorithmically assisted decision-making has arrived into this already-fractured landscape as a new and specific expression of longstanding institutional trust problems. When systems making consequential decisions about people's lives — healthcare access, benefit eligibility, custodial risk, credit — produce outputs that differ systematically across demographic groups, that cannot be explained in ways the affected person can examine and challenge, and that resist meaningful appeal, they concentrate the conditions that erode trust most reliably: opacity, inconsistency, and the apparent absence of genuine accountability (2, 5). AI bias is one contemporary and particularly visible expression of the same underlying conditions.
The conditions required to build institutional trust are the same conditions that institutions under pressure are most likely to sacrifice
There is an uncomfortable asymmetry in the dynamics of institutional trust. Trust takes time to build and can erode rapidly. The behaviours that build it — consistency, transparency, genuine accountability, demonstrated orientation toward those served — are often expensive, slow, and inconvenient for institutions operating under resource pressure. The behaviours that erode it — prioritising internal efficiency over individual experience, making decisions through opaque processes, resisting accountability when things go wrong — frequently offer short-term advantages to the institutions that adopt them.
This creates a structural pressure toward trust-damaging behaviour that operates independently of the intentions of any individual within an institution. An organisation facing resource pressure has incentives to reduce the time spent on individual cases, to standardise responses, and to make accountability processes complex enough to discourage most complaints. Each of these choices makes operational sense. Each also degrades the conditions — capable, benevolent, consistent — that trust requires. The result is that institutional trust decline can be nobody's deliberate intention and still be the predictable outcome of accumulated institutional choices.
There is also the specific problem of information asymmetry. People depend on institutions to make decisions they cannot make for themselves — about their health, their legal situation, their financial options — precisely because institutions hold information, expertise, or authority that individuals lack. That asymmetry is the reason institutions are needed. It is also the condition that makes institutional betrayal particularly harmful. When an institution uses its informational advantage to serve its own interests rather than those of the people depending on it, the people most harmed are those least equipped to recognise what is happening or to seek redress.
The compounding effect of lost trust presents a further structural problem. Once trust has been seriously damaged, even good-faith behaviour by an institution is filtered through a framework of suspicion. An apology reads as reputation management. A policy change reads as a response to external pressure rather than a genuine commitment. A transparent explanation reads as spin. Rebuilding trust from low levels requires sustained behavioural change over time — and the institutions that most need to rebuild trust are often those operating under the same conditions of resource pressure and accountability avoidance that damaged it in the first place.
The research on institutional trust points consistently toward behaviour rather than communication as what rebuilding it actually requires
Loss of institutional trust arises from the cumulative experience of systems behaving in ways that are opaque, inconsistent, or that appear to prioritise interests other than those of the people they serve. A consistent thread across the research on trust repair is that behaviour precedes perception (1). Institutions that have lost trust tend to respond by investing in communication — clearer messaging, more visible leaders, better explanation of decisions. These investments can be entirely sincere and still fail to rebuild trust, because the conditions that build trust are structural rather than communicative.
People update their trust models based primarily on what institutions do over time, not on what those institutions say about what they do. Capability demonstrated through competent and consistent outcomes. Benevolence demonstrated through genuine orientation toward those served — including when that orientation is inconvenient or costly. Consistency demonstrated through behaviour that does not vary by circumstance, by who is asking, or by the cost of maintaining it. These are the conditions that research identifies as building and restoring institutional trust. They are demanding conditions. And they require sustained institutional commitment that exceeds what any communication strategy can substitute for (1, 4).
Understanding this is significant in itself, because it shifts where the question of improvement belongs: institutional trust is a problem of institutional design and behaviour, not primarily one of messaging.
Institutional trust is a shared resource — and its erosion costs everyone, not only those who feel it most directly
Loss of trust in institutions is a story about what happens to collective life when the systems that coordinate it stop being reliably oriented toward the people they serve.
Institutional trust is a public good — a shared resource that makes complex social cooperation possible. When healthcare institutions can be trusted, people do not have to become experts in medicine to navigate healthcare decisions. When legal processes are trusted as genuinely impartial, people do not have to calculate how to game them. When regulatory frameworks are trusted as enforced with integrity, people do not have to independently verify every claim made by every institution they depend on. Institutional trust, at its best, reduces the cognitive load of participation in a complex society to something that human minds can actually manage. Its erosion — distributed unevenly, falling hardest on those with the fewest alternatives — reshapes the conditions of collective life for everyone.
A population that does not trust its healthcare institutions will not reliably follow public health guidance when it matters most (10). A population that does not trust its legal institutions will not rely on legal mechanisms to resolve disputes. A population that does not trust the information systems it depends on will navigate information environments without a shared epistemic reference point. These are documented consequences of institutional trust decline playing out in everyday public life (4).
The question that the experience of institutional trust failure ultimately raises — for the people who have experienced it most directly, and for the institutions that have produced it — is what it would mean to design institutions around what trust actually requires. Not what builds reputation or manages public perception, but what consistently demonstrates capability, orientation toward those served, and the consistency that allows people to rely on systems they cannot directly observe or control. That is a design question as much as a governance question. And it is one that institutions and the people who depend on them have every reason to take seriously.
References:
Mayer RC, Davis JH, Schoorman FD. An integrative model of organizational trust. The Academy of Management Review. 1995 Jul;20(3):709–734. https://doi.org/10.2307/258792
Hoff KA, Bashir M. Trust in automation: integrating empirical evidence on factors that influence trust. Human Factors: The Journal of the Human Factors and Ergonomics Society. 2014 Sep 02;57(3):407–434. https://doi.org/10.1177/0018720814547570
Smith CP, Freyd JJ. Dangerous safe havens: institutional betrayal exacerbates sexual trauma. Journal of Traumatic Stress. 2013 Feb;26(1):119–124. https://doi.org/10.1002/jts.21778
Barocas S, Selbst AD. Big data's disparate impact. California Law Review. 2016;104:671. https://doi.org/10.2139/ssrn.2477899
McEwen BS, Gianaros PJ. Central role of the brain in stress and adaptation: links to socioeconomic status, health, and disease. Annals of the New York Academy of Sciences. 2010 Feb 18;1186:190–222. https://doi.org/10.1111/j.1749-6632.2009.05331.x
Hooghe M, Marien S. A comparative analysis of the relation between political trust and forms of political participation in Europe. European Societies. 2013 Feb;15(1):131–152. https://doi.org/10.1080/14616696.2012.692807
Jennings W, Stoker G, Bunting H, et al. Lack of trust, conspiracy beliefs, and social media use predict COVID-19 vaccine hesitancy. Vaccines. 2021;9(6):593. https://doi.org/10.3390/vaccines9060593