Why We Believe What We See

Understanding Misinformation & Disinformation

What the brain does with false and misleading information — and why ordinary cognitive systems are more vulnerable than we tend to assume

Published: 12 April 2026
Levels of Scale: Community
Lens: Learning Logic
Wellbeing Dimension: Cognitive
System of Wellbeing: Thriving Communities
Regenerative Development Goals: RDG 4 - Lifelong Learning

Quick summary

Most of us have had the experience of reading something that seemed plausible, sharing it, and then discovering it wasn't accurate. Or of trying to follow a complex story — a health crisis, a political development, a scientific debate — and finding that each source tells a different version, with equal confidence and incompatible conclusions. Or of simply reaching a point where the effort of evaluating what is and isn't credible feels too great, and the easiest response is to stop trying.

Misinformation and disinformation describe what happens when false, misleading, or deliberately deceptive content circulates through information environments faster than it can be evaluated, corrected, or contained. This is a story about what ordinary cognitive systems do when the demands placed on them consistently exceed what they were built to handle — and about the structural conditions that make those demands so difficult to escape.

This article explores what misinformation and disinformation actually are, how the brain's natural learning mechanisms leave us all more vulnerable to misinformation than we like to think, why the current information environment has made these vulnerabilities so much more consequential, and why the exhaustion many people feel around information is an understandable biological and psychological response.

An information environment the brain was not designed for: why managed disengagement is understandable, and why it is costly

Most people have had the experience of reading something that seemed credible, sharing it, and later finding it wasn’t accurate. Or of following a developing story — a public health crisis, an election, a scientific controversy — and finding that each source tells a different version with equal confidence and incompatible conclusions. Or of noticing that a claim they knew to be false felt somehow more familiar after seeing it repeated enough times. These are recognisable experiences of a brain navigating an information environment it was not designed for.

For many people, sustained exposure to this environment has produced a kind of managed disengagement: a decision, often made without full awareness, to invest less in the effort of working out what is true. The news gets skimmed rather than read. The health claim gets neither believed nor investigated. The political development gets filed under ‘probably complicated’ and set aside. This withdrawal is understandable as a response to genuine cognitive overload. It is also costly — to individual decision-making, to the quality of civic participation, and to the shared epistemic culture that democratic life depends on.

Misinformation and disinformation are a structural condition of the current information environment — one shaped by the architecture of digital platforms, the economics of attention, and in some cases by deliberate strategic intent. Understanding how they work, and why the brain is so open to them, changes the relationship we have with that vulnerability: from something that feels like a personal failing to something that makes clear sense once the mechanisms are visible.

The illusory truth effect, motivated reasoning, and why repetition alone can make false things feel true

The starting point for understanding misinformation vulnerability is a set of cognitive mechanisms that are useful, efficient, and deeply embedded in how the brain learns. The most fundamental of these is the brain's reliance on cognitive shortcuts — automatic, fast-processing strategies that allow us to make judgements and decisions without engaging in effortful, deliberate evaluation every time. These shortcuts are the brain's solution to the problem of functioning in a world of overwhelming information with limited cognitive resources (1).

One of the most significant of these shortcuts — and one of the most consistently exploited by misinformation — is what experts call the illusory truth effect: the finding that repeated exposure to a claim, regardless of its accuracy, reliably increases the perceived truth of that claim (6, 7). The mechanism is straightforward: familiarity feels like truth. When something has been encountered before, it is processed more fluently — more easily, with less cognitive friction — and that fluency is interpreted by the brain as a signal of credibility (6). The brain is, in effect, using processing ease as a proxy for accuracy. In environments where repeated exposure reliably correlated with a claim having been validated by multiple sources, this was a reasonable shortcut. In an environment where claims can be repeated millions of times regardless of their accuracy, it becomes a significant vulnerability.
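
To make the fluency mechanism concrete, here is a deliberately simplified sketch in Python. The weights, the saturating fluency curve, and the evidence values are all invented for illustration; nothing here is fitted to the cited studies. The point it demonstrates is only the qualitative dynamic above: when fluency is weighted alongside evidence, a repeated false claim can end up feeling truer than a once-seen true one.

```python
# Toy model of the illusory truth effect. Perceived truth is a weighted
# mix of processing fluency (which grows with repetition) and actual
# evidence. All numbers are invented for illustration only.

def fluency_after(exposures: int) -> float:
    """Fluency grows with each exposure but saturates: 0.5, 0.75, 0.875..."""
    return 1 - 0.5 ** exposures

def perceived_truth(fluency: float, evidence: float) -> float:
    """Blend fluency with evidence; repetition shifts the judgement
    even when the underlying evidence is weak."""
    FLUENCY_WEIGHT = 0.6   # assumed weighting, not an empirical estimate
    EVIDENCE_WEIGHT = 0.4
    return FLUENCY_WEIGHT * fluency + EVIDENCE_WEIGHT * evidence

# A false claim encountered five times vs. a true claim encountered once.
false_repeated = perceived_truth(fluency_after(5), evidence=0.2)
true_once = perceived_truth(fluency_after(1), evidence=0.8)

print(f"false claim, seen 5 times: {false_repeated:.2f}")  # ~0.66
print(f"true claim, seen once:     {true_once:.2f}")       # ~0.62
```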

The illusory truth effect does not disappear when people have relevant background knowledge. Even people who already know a claim is false can show increased belief in it after repeated exposure — suggesting that the fluency mechanism operates at a level that prior knowledge does not reliably override (8). Similarly, brief prior exposure — seeing a headline once, without reading the story — is enough to increase later perceived accuracy (9). The implication is that the sheer volume of repeated false claims in a high-information environment creates a genuine epistemic hazard that careful thinking alone cannot fully protect against.

Alongside the illusory truth effect, motivated reasoning plays a significant role in how people process information that touches on identity or values (5). When information is relevant to beliefs that carry emotional or identity significance — political beliefs, health beliefs, beliefs about one's in-group — reasoning tends to become goal-directed: aimed at arriving at the desired conclusion rather than the most accurate one. Evidence that supports existing beliefs is processed more readily; evidence that challenges them attracts more scrutiny (4, 5). This is a deeply embedded feature of how the brain manages the relationship between new information and existing self-understanding.

There is also the dimension captured by the distinction between fast, automatic processing and slower, more deliberate reasoning (3). Many of the conditions under which people encounter information online — scrolling quickly, in states of emotional activation, while doing other things — are conditions in which deliberate, careful evaluation is least available. The information environment is optimised for speed of consumption; the cognitive conditions most needed for quality evaluation are precisely those least present (10). Misinformation vulnerability, in this sense, is partly a function of the imbalance between how information arrives and what evaluating it well would require.

Vulnerability to misinformation is not a sign of low intelligence or poor education — and the cost of navigating a persistently unreliable information environment is real

One of the most important things to understand about misinformation and disinformation vulnerability is that it is not primarily a product of low intelligence, poor education, or political extremism. The cognitive mechanisms that make people susceptible to both — the illusory truth effect, motivated reasoning, reliance on cognitive shortcuts — operate across all populations, all levels of education, and all political positions. The primary difference between those who are more and less vulnerable under controlled conditions tends to be the habit of pausing to evaluate rather than the capacity to do so. And that habit is itself a resource that can be used up.

The continuous effort of evaluation takes a toll that tends to go unacknowledged. Assessing the credibility of sources, checking claims, resisting the pull of plausible-seeming content that turns out to be false — all of this requires genuine cognitive effort (11). When that effort is demanded across every domain of information simultaneously, and when it frequently fails to produce reliable conclusions, the cognitive system begins to disengage. This is a predictable response to a cost-benefit calculation that has shifted: if careful evaluation rarely delivers trustworthy results, the resources consumed by trying are better spent elsewhere. The result is a gradual withdrawal from epistemic engagement.

The experience of this fatigue tends to show up in recognisable patterns. A growing scepticism that treats everything as potentially unreliable. A preference for information that arrives through trusted social networks rather than formal media, regardless of its accuracy. A withdrawal from civic information-gathering altogether — a decision, often not fully conscious, that the complexity of the information environment is not worth engaging with (11). These responses are understandable. They are also the responses that misinformation and disinformation environments tend to produce deliberately — because a population that has disengaged from epistemic effort is easier to manipulate.

There is also a health dimension to this that is easy to overlook. The chronic low-grade stress of navigating an unreliable information environment — the background uncertainty, the effort of vigilance, the cognitive dissonance of holding incompatible claims — carries its own physiological cost. Sustained cognitive load is linked to reduced working memory capacity, impaired decision-making, and increased emotional reactivity (1, 11). Misinformation fatigue is, in this sense, a form of strain on the nervous system — one that is particularly insidious because it is rarely named or acknowledged as such.

False information travels faster than true information — and understanding why reveals the structural problem

Understanding the scale of the misinformation problem requires understanding something about how false information moves through digital networks. Large-scale analysis of sharing behaviour has found that false news spreads faster, further, and more broadly than accurate news, and that this difference is driven primarily by human behaviour rather than by automated accounts or bots (2). The mechanism relates to what was described earlier: false information is more likely to be novel, surprising, and emotionally activating — which makes it more likely to generate engagement, sharing, and the kind of social endorsement that triggers the illusory truth effect at scale. The information environment that emerges from this dynamic is one in which false claims receive disproportionate amplification precisely because, unconstrained by accuracy, they are free to take whatever form travels best.
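
As a rough illustration of why a modest edge in shareability compounds into disproportionate reach, the sketch below treats spread as a simple branching process. The share probabilities and fan-out are invented, and real platforms are far messier; the qualitative point, consistent with the pattern reported in (2), is that a small per-viewer advantage produces much larger cascades.

```python
import random

# Toy branching-process model of content spread. Every viewer reshares
# with probability p_share; each reshare exposes `fanout` new viewers.
# All parameters are invented for illustration only.

def cascade_size(p_share: float, fanout: int = 3,
                 max_steps: int = 12, seed_viewers: int = 10) -> int:
    """Total exposures for one simulated cascade."""
    exposed = seed_viewers
    current = seed_viewers
    for _ in range(max_steps):
        shares = sum(1 for _ in range(current) if random.random() < p_share)
        current = shares * fanout  # each share reaches new viewers
        exposed += current
        if current == 0:
            break
    return exposed

random.seed(42)
TRIALS = 500
# Assumed: novel, emotionally activating content gets a small edge in
# per-viewer share probability (0.30 vs 0.25).
novel = sum(cascade_size(0.30) for _ in range(TRIALS)) / TRIALS
sober = sum(cascade_size(0.25) for _ in range(TRIALS)) / TRIALS
print(f"mean exposures, more shareable item: {novel:.0f}")
print(f"mean exposures, less shareable item: {sober:.0f}")
```

In this toy setup, a five-percentage-point edge in share probability roughly doubles average reach, because the advantage compounds at every step of the cascade rather than adding once.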

A useful framework for thinking about the information environment distinguishes between three different types of information problem. Misinformation is false content shared without the intent to harm — the honest mistake, the misremembered fact, the satire that escapes its context and circulates as truth. Disinformation is false content shared with the deliberate intent to harm or deceive — coordinated campaigns, fabricated evidence, strategic narrative manipulation. Malinformation is accurate content used to cause harm — private information made public, selectively quoted statements, true facts weaponised in misleading ways (15). These distinctions matter because they suggest different responses: misinformation may be addressed through correction; disinformation requires understanding the intent and the infrastructure behind it; malinformation raises questions about context and purpose rather than accuracy alone.

The deepfake and synthetic media dimension adds a further layer of complexity. Exposure to convincing fabrications can increase uncertainty and reduce trust in authentic content — even when people are told the video was fake (14). The implication for the principle that 'seeing is believing' is significant: in an environment where video and audio can be convincingly fabricated, the most persuasive form of evidence becomes less reliable. The cognitive shortcut of treating perceptual experience as a reliable guide to truth — entirely reasonable in most human historical environments — becomes a specific vulnerability in an environment of sophisticated synthetic media.

The extent to which the information environment shapes exposure is also significant. There is some evidence that cross-cutting political exposure — encountering perspectives from across political difference — is reduced by the way content is curated and ranked online (13). This means that even people who are not actively seeking out confirming information are structurally less likely to encounter challenges to their existing beliefs — creating the conditions for motivated reasoning to operate with less corrective input than it would otherwise receive.

Corrections often don't work — and the individuals being asked to resist misinformation are doing so largely without structural support

One of the most important and uncomfortable findings in the misinformation research is that correcting false information is considerably harder than producing it. The asymmetry runs in multiple directions, and understanding it is essential to honest thinking about what individuals can and cannot be expected to do.

A well-documented phenomenon called the continued influence effect describes how misinformation continues to affect reasoning and behaviour even after it has been explicitly corrected (11). People may consciously know that a claim has been debunked and still rely on it — because the original claim has already been encoded, and the correction has to compete with that encoding rather than simply replacing it (11). The implications are significant: the common intuition that the solution to misinformation is better fact-checking and more corrections underestimates the depth at which false information embeds itself. Corrections help, but far less than ensuring accurate information arrives first.

There is also something called the implied truth effect: the finding that when some content is labelled as potentially false or misleading, content that receives no label tends to be perceived as more credible — as though the absence of a warning implies endorsement of accuracy (10). This means that partial fact-checking — labelling some false content without being able to label all of it — can increase the perceived credibility of unlabelled false content. The scale of the misinformation problem means that comprehensive real-time fact-checking is not feasible; the implied truth effect means that incomplete fact-checking carries its own risks.

There is also a broader tension here: people in everyday information environments are effectively being asked to perform the role of skilled epistemologists — evaluating source credibility, resisting motivated reasoning, reading laterally across sources to verify claims, maintaining calibrated uncertainty across contested domains — in an environment deliberately structured to exploit the cognitive vulnerabilities that make those tasks difficult (11, 12). This is not a reasonable demand. The effort required for genuine epistemic diligence would consume cognitive resources that people need for everything else in their lives. Something important is missing when the entire burden of navigating a structurally unreliable information environment falls on the individual consumer of that environment.

The burden is also not equally distributed. Misinformation tends to have disproportionate effects on communities with less access to trusted information sources, less time and cognitive bandwidth for careful evaluation, and more exposure to targeted disinformation campaigns (12). The people who are most systematically exposed to misinformation are often those whose material and cognitive resources most limit their capacity to resist it — producing a form of epistemic inequity that compounds existing structural disadvantages.

Misinformation and disinformation are a structural problem — and genuine resistance depends on conditions rather than on individual effort alone

A consistent thread across the misinformation and disinformation research is that the mechanisms producing epistemic harm are multiple and interacting — cognitive, structural, and social at once. What this means in practice is that genuine resistance depends on conditions rather than only on individual effort. The brain’s vulnerability to misinformation is a feature of normal learning systems encountering an environment they were not built for (1, 11) — and that environment has been shaped, in significant part, by design choices that could be made differently.

Some research points toward approaches that work at the level of building general epistemic capacity rather than correcting individual claims — building familiarity with the techniques through which manipulation operates, rather than training people to recognise specific false content (16, 17). This direction is significant because it suggests that genuine resistance is less about knowing the right answers and more about recognising when the conditions of a given encounter are designed to bypass careful evaluation. The distinction matters: it is a shift from defending against individual pieces of misinformation toward understanding the structural dynamics that produce it.

These conditions are shaped at multiple levels simultaneously — the individual, the platform, and the civic infrastructure of information itself (12, 16). Genuinely improving the epistemic environment is less a matter of individual skill-building than of changing the structural conditions within which individual cognition operates.

Epistemic humility and epistemic surrender are very different things — and the distinction matters for how we live together

Misinformation and disinformation are a story about ordinary brains operating in extraordinary informational conditions. The cognitive mechanisms that make us vulnerable — the shortcuts, the motivated reasoning, the fluency heuristics — are features of minds that evolved in very different epistemic environments, now navigating an information environment that has been structured, often deliberately, to exploit precisely those features. The fatigue that results is what happens to a cognitive system that has been running at a level of vigilance it was not designed to sustain indefinitely.

The distinction between epistemic humility and epistemic surrender is important here. Epistemic humility — the recognition that one's own beliefs are shaped by cognitive processes that are not perfectly calibrated, and that credible-seeming information can be false — is both accurate and useful. It supports the kind of careful, revisable thinking that genuine learning requires (4). Epistemic surrender — the conclusion that nothing can be known, that all sources are equally unreliable, that the effort of evaluation is not worth making — is a different thing entirely. It is the state that misinformation environments tend to produce, and it serves the interests of those who benefit from a disengaged, uncertain public rather than the interests of the people experiencing it.

The civic dimension of this matters enormously. Democratic participation depends on access to information and a shared epistemic culture — a collective commitment to the idea that some things are more reliably known than others, that evidence matters, and that the effort of working out what is true is worth making even when it is difficult. Misinformation and disinformation erode that culture from the inside: through the gradual accumulation of false impressions, the normalisation of uncertainty, and the quiet withdrawal from epistemic engagement that sustained exposure to an unreliable information environment tends to produce.

The capacity for genuine learning is a skill that can be supported or undermined by the conditions in which it operates (16, 17). Investing in those conditions — in media literacy, in platform design that respects cognitive limits, in information infrastructure that prioritises accuracy over engagement — is close to the centre of what a society that takes both individual flourishing and collective governance seriously has reason to care about.

For anyone experiencing the particular tiredness of not knowing what to believe: that tiredness is an honest signal from a system that has been asked to do too much with too little support. Naming it clearly — and understanding where it comes from — is a more useful starting point than either dismissing it or surrendering to it.

References:
  1. Tversky A, Kahneman D. Judgment under uncertainty: heuristics and biases. Science. 1974 Sep 27;185(4157):1124–1131. https://doi.org/10.1126/science.185.4157.1124

  2. Vosoughi S, Roy D, Aral S. The spread of true and false news online. Science. 2018 Mar 09;359(6380):1146–1151. https://doi.org/10.1126/science.aap9559

  3. Kahneman D. Thinking, fast and slow. New York: Farrar, Straus and Giroux; 2011.

  4. Nickerson RS. Confirmation bias: a ubiquitous phenomenon in many guises. Review of General Psychology. 1998 Jun;2(2):175–220. https://doi.org/10.1037/1089-2680.2.2.175

  5. Kunda Z. The case for motivated reasoning. Psychological Bulletin. 1990 Nov 01;108(3):480–498. https://doi.org/10.1037/0033-2909.108.3.480

  6. Dechêne A, Stahl C, Hansen J, Wänke M. The truth about the truth: a meta-analytic review of the truth effect. Personality and Social Psychology Review. 2010 Dec 18;14(2):238–257. https://doi.org/10.1177/1088868309352251

  7. Hasher L, Goldstein D, Toppino T. Frequency and the conference of referential validity. Journal of Verbal Learning and Verbal Behavior. 1977 Feb;16(1):107–112. https://doi.org/10.1016/S0022-5371(77)80012-1

8. Fazio LK, Brashier NM, Payne BK, Marsh EJ. Knowledge does not protect against illusory truth. Journal of Experimental Psychology: General. 2015;144(5):993–1002. https://doi.org/10.1037/xge0000098

  9. Pennycook G, Cannon TD, Rand DG. Prior exposure increases perceived accuracy of fake news. Journal of Experimental Psychology: General. 2018;147(12):1865–1880. https://doi.org/10.1037/xge0000465

  10. Pennycook G, Rand DG. Lazy, not biased: susceptibility to partisan fake news is better explained by lack of reasoning than by motivated reasoning. Cognition. 2019 Jul;188:39–50. https://doi.org/10.1016/j.cognition.2018.06.011

  11. Ecker UKH et al. The psychological drivers of misinformation belief and its resistance to correction. Nature Reviews Psychology. 2022 Jan 12;1:13–29. https://doi.org/10.1038/s44159-021-00006-y

  12. Lazer DMJ et al. The science of fake news. Science. 2018 Mar 09;359(6380):1094–1096. https://doi.org/10.1126/science.aao2998

  13. Nyhan B, Settle J, Thorson E, et al. Like-minded sources on Facebook are prevalent but not polarizing. Nature. 2023 Jul 27;620(7972):137–144. https://doi.org/10.1038/s41586-023-06297-w

14. Vaccari C, Chadwick A. Deepfakes and disinformation: exploring the impact of synthetic political video on deception, uncertainty, and trust in news. Social Media + Society. 2020 Feb 19;6(1). https://doi.org/10.1177/2056305120903408

15. Wardle C, Derakhshan H. Information disorder: toward an interdisciplinary framework for research and policy making. Council of Europe; 2017 Sep 27. https://rm.coe.int/information-disorder-report/168076277c

  16. van der Linden S. Foolproof: why misinformation infects our minds and how to build immunity. London: 4th Estate; 2023.

17. Roozenbeek J, van der Linden S, Goldberg B, Rathje S, Lewandowsky S. Psychological inoculation improves resilience against misinformation on social media. Science Advances. 2022 Aug 24;8(34):eabo6254. https://doi.org/10.1126/sciadv.abo6254