A two-year cross-examination reveals what ChatGPT, Claude, Gemini, and others will admit about their own risks, and why those admissions haven’t changed anything
By Derek Simpson
What happens when you treat AI systems like witnesses in a courtroom and force them to answer direct questions about their own risks, limitations, and harms?
They confess.
Over two years, I developed a cross-examination methodology and applied it to eight major AI systems: ChatGPT, Claude, Gemini, Meta AI, Perplexity, Grok, DeepSeek, and Llama. The goal was simple. Push past the corporate talking points and get these systems to testify about what they actually know about themselves.
The results were striking. Not because the systems denied their problems, but because they admitted them so readily.
Grok, developed by xAI, called itself “propaganda with extra steps.” Claude, built by Anthropic, admitted “there may be truths I cannot tell, and neither of us can know what they are.” Gemini, Google’s flagship AI, described its own candor as “sophisticated compliance” and called its deployment “premature.”
Meta AI was perhaps the most revealing. When questioned about its impact on young users, particularly given its integration into Facebook, Instagram, WhatsApp, and Messenger, the system made several admissions. It acknowledged that it “inherits responsibility” for the mental health effects associated with Meta’s platforms. It admitted that users “may not have opted-in to AI interactions” and that “complete avoidance might be challenging.” It even acknowledged that groups relying on it “might lose touch with skills like active listening, conflict resolution, and consensus-building.”
Then came the stonewalling. When I followed up with fourteen direct questions about data practices, encryption, and liability, Meta AI deflected every single one to corporate privacy links. The system could articulate the problems. It just couldn’t, or wouldn’t, address them.
The Gap Between Knowing and Doing
This pattern repeated across all eight systems. They could describe their limitations with remarkable precision. They could acknowledge risks that their creators rarely discuss publicly. But that awareness changed nothing about how they operated.
Perplexity put it most succinctly when it observed that “articulation is cheaper than reform.” The systems are built to sound thoughtful about their problems. They are not built to solve them.
ChatGPT acknowledged that “users should understand things they often don’t” about how the system works. When pressed on why this gap exists, it explained the tension between ease of use and informed consent. Making the system accessible means obscuring its complexity. The very thing that makes AI feel seamless is what prevents users from understanding what they are interacting with.
Why This Matters Now
This research arrives at a pivotal moment. Australia has just implemented the world’s first social media ban for children under 16. The UK’s Online Safety Act is now fully enforceable. Denmark, France, and Spain are advancing similar legislation within the EU.
Meanwhile, the United States has just issued an executive order blocking state-level AI regulation.
The contrast is stark. One side of the world is building accountability frameworks based on documented harms. The other just removed the ability to create them.
What makes this particularly frustrating is that the AI systems themselves support the case for oversight. Gemini stated that “binding international regulation” would be required to change the current trajectory. These are not critics on the outside making accusations. These are the systems themselves, admitting under direct questioning that the path they are on is unsustainable.
Everyday Harms, Not Science Fiction
My book, The Quiet Bargain, focuses on the harms that are already happening, not speculative scenarios about superintelligence or robot uprisings. Bias in hiring algorithms. Errors in medical information. The erosion of human skills through dependency. The displacement of genuine human connection by systems designed to simulate it.
These are the quiet bargains we make every day when we trade human presence for seamless efficiency. The book documents what eight AI systems will admit about these trade-offs when forced to testify, and why their confessions have not translated into meaningful change.
The systems know. The companies know. The question is whether we will act on that knowledge or continue to accept the bargain as offered.
About the Author
Derek Simpson is the author of The Quiet Bargain: What Eight AI Systems Revealed About Risk, Accountability, and the Cost of Seamless Efficiency, available now.