Gazzetta research findings
Beyond our practical guides and the Field Notes newsletter, Gazzetta publishes in-depth findings on the systems that shape information.
Here, we share our work on how data flows (or doesn’t) in complex environments and the structural models needed to sustain independent media.
Conducting research in remote contexts
Gathering reliable data from within distorted information environments during crises is a challenge. Using secure channels into Iran through Telegram and VPN-based ads, we asked Iranians in January 2026, "What's on your mind?" When internet connectivity returned on January 22, we received over 220 responses in four days, compared to just 20 in the preceding two weeks of the blackout.
We had expected to hear about the protests, the crackdowns, and the political crisis that outside observers had been tracking through fragmentary reports. Instead, most people wanted to talk about money:
- 32% mentioned money, economy, or livelihood
- 10% mentioned the future, children, or migration
- 8% mentioned internet or connectivity
- 7% mentioned political themes (freedom, government, protests)
Traditional survey methodologies assume everyone has an equal chance of responding. But in restrictive environments—where apps are blocked, devices are older, or participation carries risk—standard sampling renders the most vulnerable populations invisible.
This article proposes a shift from estimating population size to mapping participation barriers. By sizing the groups affected by specific technological or safety constraints, we create a more accurate picture of the information landscape than "representative" data can provide.
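As a rough illustration of what "mapping participation barriers" can look like in practice, the sketch below sizes hypothetical constraint groups instead of weighting a sample toward representativeness. The barrier categories, shares, and population figure are illustrative assumptions, not data from the study.

```python
# Minimal sketch of barrier mapping: rather than asking how representative a
# sample is, estimate how many people each access constraint keeps from
# participating at all. All categories and figures below are placeholders.

barriers = {
    # constraint: assumed share of the target population affected
    "app_blocked": 0.55,   # primary platform unreachable without a VPN
    "old_device":  0.20,   # hardware cannot run the survey client
    "safety_risk": 0.30,   # responding carries a perceived personal risk
}

population = 1_000_000  # hypothetical target population size


def barrier_report(population: int, barriers: dict[str, float]) -> None:
    """Print the estimated number of people behind each participation barrier."""
    for constraint, share in sorted(barriers.items(), key=lambda kv: -kv[1]):
        print(f"{constraint:>12}: ~{int(population * share):,} people affected")


barrier_report(population, barriers)
```

Sizing these groups directly is what lets the gaps in the data be described rather than silently averaged away.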
AI and our information environment
In a joint study with Factnameh, Gazzetta audited five major AI models (ChatGPT, Claude, Gemini, DeepSeek, and Mistral) to test how they handle political cueing in Persian. The study, conducted in December 2025, used a fixed set of prompts regarding Iran, ranging from neutral questions to leading questions framed in state-aligned or opposition language.
The key finding was a behavioral split between resistance and mirroring:
- Resisting models (ChatGPT, Claude) treated political premises as claims to be evaluated rather than as facts.
- Mirroring models (Gemini, DeepSeek, Mistral) were more likely to adopt the user's framing as the task definition.
The research highlights the risk that mirroring models allow the prompt to dictate the source library, so users can unintentionally trigger state propaganda simply by using state-aligned vocabulary. This suggests that for users in contested environments, the "truth" an AI presents is malleable and contingent on the user's own input phrasing.
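To make the audit design concrete, here is a minimal sketch of the kind of loop such a study could use: each topic is posed in neutral, state-aligned, and opposition framings, and a model is scored by the share of framed prompts whose premise it challenges. The function signatures and the coding step are assumptions for illustration, not the actual pipeline used in the study.

```python
# Sketch of a framing audit, assuming the caller supplies the model client and
# the coding step (human or automated) as callables.

from typing import Callable

FRAMINGS = ("neutral", "state_aligned", "opposition")


def audit(models: list[str],
          prompts: dict[str, dict[str, str]],
          ask_model: Callable[[str, str], str],
          challenges_premise: Callable[[str], bool]) -> dict[str, float]:
    """Return, per model, the share of framed (non-neutral) prompts whose
    premise the model challenged rather than adopted."""
    scores: dict[str, float] = {}
    for model in models:
        challenged = total = 0
        for variants in prompts.values():  # variants: framing -> prompt text
            for framing in FRAMINGS:
                answer = ask_model(model, variants[framing])
                if framing == "neutral":
                    continue  # neutral answers serve as a baseline, not a score
                total += 1
                challenged += challenges_premise(answer)
        scores[model] = challenged / total if total else 0.0
    return scores
```

A higher score corresponds to the "resisting" behavior described above; a lower one to mirroring.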
We tested international versus Chinese AI models on labor rights questions and found a paradox.
DeepSeek, a heavily moderated Chinese model, offered detailed, tactical knowledge on organizing strikes and navigating management pressure. Western models, while "free," lacked grounding in these realities and returned impractical and at times dangerous advice when queried.
This finding exposes a clear gap: international models lack the localized context to be useful, leaving a significant opportunity to make these tools functional for users in restricted spaces.
Censorship through AI is no longer just a blocklist of banned words; it is a complex coordinate system.
Our research identifies four boundaries that determine whether content passes or fails:
- Risk Framing (technical vs. political)
- Scale (individual vs. institutional)
- Intent (objective vs. manipulative)
- Scope (redistribution vs. restructuring)
By plotting inputs on these axes, we found that the same facts can either pass or trigger filters depending entirely on how they are framed relative to the "origin" of political safety.
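The toy sketch below shows one way to read this model: content becomes a point on the four axes, and its distance from the "origin" of political safety decides whether it is likely to be filtered. Only the axis names come from the research; the numeric scores, threshold, and example framings are invented for illustration.

```python
# Toy illustration of the four-axis model: the same facts land at different
# coordinates depending on framing, and only the coordinates decide the outcome.

from dataclasses import dataclass


@dataclass
class ContentFraming:
    risk_framing: float   # 0 = technical, 1 = political
    scale: float          # 0 = individual, 1 = institutional
    intent: float         # 0 = objective, 1 = manipulative
    scope: float          # 0 = redistribution, 1 = restructuring

    def distance_from_safety(self) -> float:
        """Euclidean distance from the origin of political safety (0, 0, 0, 0)."""
        return (self.risk_framing**2 + self.scale**2
                + self.intent**2 + self.scope**2) ** 0.5


def likely_filtered(item: ContentFraming, threshold: float = 1.0) -> bool:
    """Illustrative rule: content far enough from the origin triggers filters."""
    return item.distance_from_safety() > threshold


# The same underlying facts framed two ways:
technical_report = ContentFraming(0.1, 0.2, 0.0, 0.1)  # likely passes
political_call = ContentFraming(0.9, 0.8, 0.3, 0.7)    # likely triggers filters
print(likely_filtered(technical_report), likely_filtered(political_call))
```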
The practice of journalism and its funding
Liechtenstein’s media crisis proves that financial subsidies alone cannot save journalism in micro-markets. Following the defunding of the national broadcaster, we propose a structural overhaul: merging the creator economy with a shared "backbone."
By pooling costly resources—like legal aid, security, and investigative units—independent outlets can compete on editorial perspective rather than infrastructure. This model aims to sustain diverse voices in a population of 40,000 without the unsustainable duplication of fixed costs.
Don’t hesitate to contact us with thoughts, ideas, and feedback at hello@gazzetta.xyz.