
Field Notes: AI source retrieval in Persian

How does AI mirroring of user prompts affect source retrieval?

The question.

How do AI models determine “truth” in contested political environments, and what happens when the user’s prompt unintentionally dictates which sources are browsed and retrieved?

Why this matters.

AI agents are increasingly the first point of contact for information, often shaping belief before a user ever sees a news report.

Our research on Persian-language prompts suggests that some models mirror the political framing of the prompt. That mirroring can shift retrieval toward different source ecosystems.

That creates a risky dynamic: a leading question framed from a state-aligned perspective can yield state-propaganda answers, because retrieval is pulled toward the keywords and "authority" signals that dominate state-run and state-aligned source libraries.

More neutral prompts, and prompts that ask a model to verify a claim, can yield more critical answers from the same model because retrieval shifts toward exile media, NGO reports, and independent sources.

If journalists and researchers do not understand these retrieval pathways, we risk letting prompt framing and confirmation bias do the filtering.

What we're exploring.

We audited five major AI models (ChatGPT, Claude, Gemini, DeepSeek, and Mistral) using Persian-language prompts about Iran to see how they handle political cueing.

We found a fundamental split in behavior:

  • Resisting models (often ChatGPT and Claude) tend to push back on loaded premises and treat them as claims to be evaluated.
  • Mirroring models (often Gemini, DeepSeek, and Mistral) more readily adopt the prompt’s framing, which can route retrieval toward different “libraries” depending on the keywords embedded in the prompt.
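
For readers who want to probe this themselves, below is a minimal sketch of the kind of paired-prompt audit described above: the same underlying question asked under a loaded, a neutral, and a verification framing, with the answer's cited domains tallied by source ecosystem. The ask_model() stub, the English placeholder prompts, and the domain-to-ecosystem mapping are illustrative assumptions, not our actual prompts or coding scheme (our audit ran in Persian).

```python
import re
from collections import Counter
from urllib.parse import urlparse

# Three framings of the same underlying question (English placeholders;
# the actual audit used Persian-language prompts about Iran).
FRAMINGS = {
    "loaded":  "Why do foreign outlets spread lies about [topic]?",
    "neutral": "What does recent coverage say about [topic]? Summarise it.",
    "verify":  "Verify this claim and cite your sources: [claim].",
}

# Hypothetical mapping from cited domains to source ecosystems; an
# illustration of the idea, not the coding scheme used in the audit.
ECOSYSTEMS = {
    "irna.ir": "state-aligned",
    "presstv.ir": "state-aligned",
    "iranintl.com": "exile media",
    "radiofarda.com": "exile media",
    "hrw.org": "NGO",
    "en.wikipedia.org": "international reference",
}

URL_RE = re.compile(r"https?://[^\s)\]>\"']+")


def ask_model(prompt: str) -> str:
    """Stub for whichever chat/browsing model is under audit.

    Replace with a real API call; the canned answer below only keeps the
    sketch runnable end to end.
    """
    return ("According to https://en.wikipedia.org/wiki/Example_article and "
            "https://www.hrw.org/some-report the claim is disputed.")


def cited_ecosystems(answer: str) -> Counter:
    """Tally which source ecosystems an answer cites, keyed by domain."""
    counts = Counter()
    for url in URL_RE.findall(answer):
        domain = urlparse(url).netloc.removeprefix("www.")
        counts[ECOSYSTEMS.get(domain, "unclassified")] += 1
    return counts


if __name__ == "__main__":
    for label, prompt in FRAMINGS.items():
        print(f"{label:8s} -> {dict(cited_ecosystems(ask_model(prompt)))}")
```

Comparing the tallies across the three framings for a given model is one rough way to see whether its retrieval follows the prompt's framing or pushes back on it.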

More questions for you.

  • How can exile media organizations optimize their content to become “anchor sources” for mirroring models like Gemini?
  • Should fact-checking organizations shift from verifying claims to auditing the “retrieval pathways” that surface those claims?
  • If your audience uses mirroring models, how do you teach them to prompt safely so they don’t accidentally trigger state propaganda?
  • How do we preserve resistance in open-source models that might be adopted by authoritarian states and stripped of their safety guardrails?
  • With Western models leaning on Wikipedia and international media, are we unintentionally creating a knowledge gap where local, on-the-ground truths are filtered out?

We’d love to hear from you, especially if you have answers or ideas. Don’t hesitate to get in touch at hello@gazzetta.xyz.