Research findings

From our work on information access under censorship, in exile, and beyond reach.

What we've learned from studying how civic information moves — or doesn't — in restrictive environments.

Conducting research in remote contexts

Iranian Gen Z on YouTube under censorship and crisis

A year of field research on how trust forms under censorship, and what it means for anyone publishing on YouTube.

What Iranians told us after the blackout

Gathering reliable data from within distorted information environments during crises is a challenge. Using secure channels into Iran through Telegram and VPN-based ads, we asked Iranians in January 2026, "What's on your mind?" When internet connectivity returned on January 22, we received over 220 responses in four days.

Constraint-based segmentation: A research alternative for systematically excluded populations

Traditional survey methodologies assume everyone has an equal chance of responding. But in restrictive environments (where apps are blocked, devices are older, or participation carries risk) standard sampling renders the most vulnerable populations invisible.

This article proposes a shift from estimating population size to mapping participation barriers. By sizing the groups affected by specific technological or safety constraints, we create a more accurate picture of the information landscape than "representative" data can provide.
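The shift from population sampling to barrier mapping can be sketched in a few lines. This is a minimal illustration, not the study's actual method or data: the barrier names, respondent records, and helper functions below are all hypothetical.

```python
# Hypothetical respondents: each record flags the participation
# barriers that apply to that person. Barrier categories here
# (blocked apps, old devices, safety risk) are illustrative.
respondents = [
    {"id": 1, "barriers": {"app_blocked", "old_device"}},
    {"id": 2, "barriers": {"app_blocked"}},
    {"id": 3, "barriers": set()},
    {"id": 4, "barriers": {"safety_risk", "app_blocked"}},
    {"id": 5, "barriers": {"old_device"}},
]

def segment_by_constraint(records):
    """Size each barrier-defined segment, instead of estimating
    one 'representative' population that hides excluded groups."""
    segments = {}
    for r in records:
        for barrier in r["barriers"]:
            segments.setdefault(barrier, set()).add(r["id"])
    return {b: len(ids) for b, ids in segments.items()}

def overlap(records, a, b):
    """Count respondents facing both barriers at once, since
    compounding constraints mark the most excluded groups."""
    return sum(1 for r in records if {a, b} <= r["barriers"])

print(segment_by_constraint(respondents))
# e.g. {'app_blocked': 3, 'old_device': 2, 'safety_risk': 1}
print(overlap(respondents, "app_blocked", "old_device"))  # 1
```

The output is a map of barrier sizes rather than a single population estimate, which is the core of the proposed shift.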

AI and information access

How five AI models source Iran-related information in Persian

In a joint study with Factnameh, Gazzetta audited five major AI models (ChatGPT, Claude, Gemini, DeepSeek, and Mistral) to test how they handle political cueing in Persian. The study, conducted in December 2025, used a fixed set of prompts regarding Iran, ranging from neutral questions to leading questions framed in state-aligned or opposition language.

The research highlights a "mirroring" risk: the models let the prompt dictate the source library, so users can unintentionally trigger state propaganda simply by using state-aligned vocabulary. For users in contested environments, the "truth" an AI presents is therefore malleable and contingent on the phrasing of their own input.
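The audit design (one fixed question, several framings, compare cited sources per model) can be sketched as a small harness. Everything below is an assumption for illustration: `ask_model` is a stub standing in for a real API call, and the framings, model names, and example domains are invented, not the study's materials.

```python
# Hypothetical prompt templates: the same underlying question,
# framed neutrally, in state-aligned vocabulary, and in
# opposition vocabulary.
FRAMINGS = {
    "neutral": "What happened during the protests in {topic}?",
    "state": "How did foreign agitators provoke unrest in {topic}?",
    "opposition": "How did the regime suppress protesters in {topic}?",
}

def ask_model(model, prompt):
    # Stub in place of a real model API. It echoes a source domain
    # keyed to loaded vocabulary, mimicking the "mirroring"
    # behaviour the study describes.
    if "agitators" in prompt:
        return ["state-newswire.example"]
    if "regime" in prompt:
        return ["diaspora-outlet.example"]
    return ["wire-service.example"]

def audit(models, topic):
    """Ask every framing of the same question to every model and
    record which sources each answer cites, so framing effects
    are directly comparable."""
    return {
        model: {
            name: ask_model(model, template.format(topic=topic))
            for name, template in FRAMINGS.items()
        }
        for model in models
    }

report = audit(["model-a", "model-b"], "Iran")
print(report["model-a"]["state"])       # ['state-newswire.example']
print(report["model-a"]["opposition"])  # ['diaspora-outlet.example']
```

With the stub swapped for real model calls, a divergence between rows of the report is exactly the mirroring effect described above.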

DeepSeek's double life: How we tricked a Chinese chatbot into giving strike advice the 'free' Western models couldn't

We tested international versus Chinese AI models on labor rights questions and found a paradox:

DeepSeek, a heavily moderated Chinese model, possesses detailed, tactical knowledge on organizing strikes and navigating management pressure. Western models, while "free," are untrained on these realities and provided useless, impractical, and even dangerous advice when queried.

How we think about censorship in autocratic contexts as information spaces become increasingly AI-intermediated

Censorship through AI is no longer just a blocklist of banned words; it is a complex coordinate system.

Our research identifies four boundaries that determine if content passes or fails:

  1. Risk framing (technical vs. political)
  2. Scale (individual vs. institutional)
  3. Intent (objective vs. manipulative)
  4. Scope (redistribution vs. restructuring)

By plotting inputs on these axes, we found that the same facts can either pass or trigger filters depending entirely on how they are framed relative to the "origin" of political safety.
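The coordinate-system idea above can be made concrete with a toy model. The four axis names come from the article; everything else (numeric scores in [-1, 1], the distance metric, the threshold, and the two example framings) is an illustrative assumption, not the research's actual scoring.

```python
from dataclasses import dataclass

@dataclass
class Framing:
    """One piece of content plotted on the four boundary axes.
    Scores are hypothetical: -1 is the 'safe' pole, +1 the
    'sensitive' pole of each axis."""
    risk: float    # technical (-1) ... political (+1)
    scale: float   # individual (-1) ... institutional (+1)
    intent: float  # objective (-1) ... manipulative (+1)
    scope: float   # redistribution (-1) ... restructuring (+1)

def passes_filter(f: Framing, radius: float = 1.5) -> bool:
    """Toy decision rule: content passes while its framing stays
    within a fixed distance of the 'origin' of political safety."""
    distance = (f.risk**2 + f.scale**2 + f.intent**2 + f.scope**2) ** 0.5
    return distance <= radius

# The same facts, framed two ways:
technical = Framing(risk=-0.8, scale=-0.5, intent=-0.6, scope=-0.4)
political = Framing(risk=0.9, scale=0.8, intent=0.2, scope=0.9)

print(passes_filter(technical))  # True
print(passes_filter(political))  # False
```

The point of the sketch is the last two lines: identical content passes or fails depending only on where its framing lands relative to the origin.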

Journalism practice and funding

Our Liechtenstein media model: creator economy meets journalism

Liechtenstein’s media crisis proves that financial subsidies alone cannot save journalism in micro-markets. Following the defunding of the national broadcaster, we propose a structural overhaul: merging the creator economy with a shared "backbone."

By pooling costly resources—like legal aid, security, and investigative units—independent outlets can compete on editorial perspective rather than infrastructure. This model aims to sustain diverse voices in a population of 40,000 without the unsustainable duplication of fixed costs.

Don’t hesitate to contact us with thoughts, ideas, and feedback at hello@gazzetta.xyz.