The gates before the evidence
Imagine you can't trust your government, your news, or your internet connection. Every app you use to reach the outside world is officially blocked. The VPN you rely on drops without warning.
When a crisis hits — and one did during this research project in June 2025 — you're flooded with rumors and have no fast way to tell which ones could get you or your family hurt. The people trying to help you sort fact from fiction look, at first glance, a lot like the people trying to manipulate you.
That's daily life for young Iranians online. And it turns out, how they decide what to trust has implications far beyond Iran.
Over nearly a year of field research with Iranian Gen Z, including surveys of over 2,000 respondents, in-depth interviews, AI-facilitated interviews at scale, and a rapid-response case study conducted during the June 2025 Iran–Israel conflict, we found that the standard fact-checking theory of change breaks early in this environment.
The assumption that publishing a correction and distributing it widely will lead audiences to process evidence, update beliefs, and share the result does not hold when the audience's first decision is not "Is this true?" but "Do I stay?" and "Is this person safe and relevant enough to listen to?"
Those judgments happen fast, under constraint: VPN friction, unstable connectivity, uncertainty about who's acting in good faith, and a platform where journalism doesn't compete with other journalism — it competes with creator formats built for pace and personality.
This research was conducted by Gazzetta and ASL19 as part of a project to strengthen how Factnameh, ASL19's Persian-language information manipulation research initiative, reaches young audiences inside Iran. It was made possible through support from the IFCN Global Fact Check Fund.
What we found
The core finding is that credibility forms through a sequence of gates, and most fact-checking interventions fail by addressing the later gates (method, sources, verdict) without clearing the earlier ones: attention, relational safety, identity fit.
We call this the Journey to Trust framework, and it models what we observed across the full dataset:
- Earn attention. The viewer decides whether to stay before any evidence is processed. Clear openings that name the claim, the stakes, and the proof direction outperform slow context-building or authority-first introductions.
- Establish relational safety. Viewers rapidly infer whether the messenger feels culturally fluent, fair-minded, and non-threatening. In low-trust environments, tone is interpreted as evidence. If the entry reads as patronizing, propagandistic, or "not for us," persuadable viewers leave before the proof appears.
- Deliver visible proof early. Once a short listening window is granted, the content must quickly move from "trust me" to "see this" (screenshots, dates, side-by-sides, original clips) so the conclusion feels anchored in observable artifacts, not personality.
- Reduce verification friction. Even convinced viewers may not act if checking is costly. 29% of Gen Z respondents who didn't verify a claim said they simply didn't know how or where. Another 25% said they didn't have time. Non-verification is often a friction story, not an ideology story.
- Prompt realistic actions. Most downstream behavior is low-effort and private, especially in environments where visible correction carries social or safety risk.
We formalize the operational principle behind this as messenger-first, evidence-fast: open with a human, culturally fluent entry that clears the relational threshold, then present visible receipts early enough (within the first 20–30 seconds) to convert relational trust into epistemic trust. The purpose is to redesign the format so that evidence has a chance to land.
Gen Z is not one audience
The research also complicates the idea of Gen Z as a single group. We identified five distinct archetypes based on how viewers grant credibility and when they choose to verify.
The largest segment (51%) is creator-first: they grant trust through parasocial closeness, "vibes," and community belonging, often in entertainment contexts where misinformation can arrive incidentally. This is also the lowest-verification group (34% self-reported verification rate) and the primary risk surface for misinformation.
About 35% of respondents fall into patriot segments, where national belonging is highly activated but distinct from government alignment — it is typically framed as loyalty to "people/culture/country," not the state. These segments verify at high rates (76–84%) and can become amplifiers for correction when framing is pro-people and government-neutral. But content that feels "against Iran" triggers immediate distrust, even when the evidence is strong.
The remaining 14% are evidence-first: they want documentation and method. They function as validators in their networks — the person others consult — and respond well to transparent "show your work" formats.
The same fact-check will not land equally across these groups. The evidence stays constant; what changes is the hook, host tone, pacing, and call to action.
What crisis changes
One contribution of this research is that these mechanisms were observed under crisis conditions, not inferred from "normal times" alone. During the June 2025 Iran–Israel conflict, when uncertainty was high and connectivity degraded, the information journey compressed.
YouTube was rarely a first destination for updates — only 4% of crisis survey respondents used it for conflict information. Telegram, Instagram, and broadcast media dominated. Under filtering and throttled connections, "best source" was replaced by "reachable source." Verification still happened, but it was often routed through whatever channels remained accessible, which means "I checked" did not necessarily mean "I checked well."
The rumors that gained traction clustered around proximate, high-stakes threats: claims that one's city was being attacked, that a nuclear strike was imminent, that the leader had been killed. Salience tracked with perceived immediacy, not technical plausibility. Anger was the strongest emotional driver of verification behavior; skepticism, counterintuitively, reduced the likelihood of checking.
The practical implication is that crisis response cannot be designed as YouTube-only, even when YouTube is the flagship channel. Crisis moments — when misinformation risk spikes — push people toward lighter, faster channels. YouTube's role shifts to explainer and archive: a place people may consult later to make sense of what happened, if they can access it at all.
What transfers beyond Iran
The study focused on a filtered environment. But the core finding — that credibility is sequential, and that most interventions address the wrong stage — is not Iran-specific. On any video-first platform, the viewer's first decision is "stay or leave," not "true or false."
What transfers directly: the sequential trust model as a diagnostic tool, the messenger-first principle as a format discipline, the idea that proof should be a portable artifact (a screenshot, a timestamp, a side-by-side) that can travel across platforms, and friction-aware verification design that treats the "how to check" step like a product funnel.
What doesn't transfer without adaptation: the specific identity cues that signal safety or threat, the platform routing map (which channels people use first under what conditions), and the risk landscape that shapes what actions viewers are willing to take publicly.
You'll be asked to subscribe to our free newsletter, Field Notes, if you haven't already.
If you sign up, over the next few weeks you'll also get a playbook for YouTube creators on how to operationalize these findings.
You can also listen to this AI-generated audio summary of the report below. It covers the main findings in about 20 minutes.
Feedback or questions: hello@gazzetta.xyz.