“Research study after research study has looked at model training processes and noted that most major LLMs are fed a mix of biased training data, biased annotation practices, flawed taxonomy design.” 

- Annie Brown, AI researcher and founder of the AI infrastructure company Reliabl (TechCrunch, November 2025).

Large language models (LLMs) have become part of everyday life. Systems such as ChatGPT, Claude, Gemini, Meta AI (Llama), and xAI's Grok handle billions of interactions daily. They increasingly shape what information people encounter and in what order, subtly deciding what is important and even what is true, sometimes without users realizing it. Because LLMs wield growing power over information exposure, it is vital to recognize the political and ideological influences that enter at multiple stages of their design, and to identify the risks of manipulation.

This report examines how ideological skew can enter AI systems both “accidentally”, for example through omission during training, and “on purpose”, through deliberate attempts to skew training data or tuning processes. We close with recommendations for minimizing ideological lean as dependence on these technologies accelerates.

You can download the full report and press release below.

Risk Analysis of Political Manipulation in Large Language Model Training
