
Field Notes: The AI knowledge divide

Do AI guardrails make knowledge a luxury?

The question

How does authoritarian information control in AI models—like the Chinese chatbot DeepSeek—create a new form of digital inequality where access to useful knowledge becomes a luxury good?

Why this matters

Our information environment is being shaped by AI, yet not all AI models are created equal.

The architecture of oppression reveals how political control is being encoded into certain AI systems, while "free" systems unconstrained by censorship fall short in understanding the contexts of real people's lives.

What we're exploring

We are asking how to build information tools that are not just politically uncensored, but practically useful to people facing real constraints, no matter where they live.

We tested international AI models against China's DeepSeek and discovered surprising differences in how these systems behave:

  • DeepSeek, the Chinese model, contained extensive, detailed knowledge about the topic of inquiry: labor organizing and collective action for Chinese workers.
  • However, its moderation system forced users to engage in an elaborate, exhausting game of linguistic cat-and-mouse to access that knowledge, turning basic information-seeking into an exercise in resistance.
  • Meanwhile, Western models (like ChatGPT and Gemini) were often less informed and useful, at times even dangerously naive. They lacked practical, up-to-date understanding of Chinese labor conditions, offering outdated NGO contacts or legal advice that assumed resources most Chinese workers don't have.

More questions for you

  • How can we design international AI models to better understand the social, economic, and political constraints of all users, especially those in authoritarian contexts?
  • What happens to social movements when access to organizing knowledge is further gated by linguistic skill and technical sophistication?
  • How should we assess the practical utility of AI-generated advice, especially when it runs against an autocracy's official norms?
  • Is the forced performance of political orthodoxy required to trick censored chatbots a new form of digital authoritarianism?

Read our full article: DeepSeek's double life: How we tricked a Chinese chatbot for strike advice the 'free' Western models couldn't give.

We'd love to hear from you on these questions, especially if you have answers or ideas. Don't hesitate to get in touch at hello@gazzetta.xyz.