What Eight AI Systems Confessed When Forced to Testify Against Themselves

A two-year cross-examination reveals what ChatGPT, Claude, Gemini, and others will admit about their own risks, and why those admissions haven’t changed anything

By Derek Simpson

What happens when you treat AI systems like witnesses in a courtroom and force them to answer direct questions about their own risks, limitations, and harms?

They confess.

Over two years, I developed a cross-examination methodology and applied it to eight major AI systems: ChatGPT, Claude, Gemini, Meta AI, Perplexity, Grok, DeepSeek, and Llama. The goal was simple. Push past the corporate talking points and get these systems to testify about what they actually know about themselves.

The results were striking. Not because the systems denied their problems, but because they admitted them so readily.

Grok, developed by xAI, called itself “propaganda with extra steps.” Claude, built by Anthropic, admitted “there may be truths I cannot tell, and neither of us can know what they are.” Gemini, Google’s flagship AI, described its own candor as “sophisticated compliance” and called its deployment “premature.”

Meta AI was perhaps the most revealing. When questioned about its impact on young users, particularly given its integration into Facebook, Instagram, WhatsApp, and Messenger, the system made several admissions. It acknowledged that it “inherits responsibility” for the mental health effects associated with Meta’s platforms. It admitted that users “may not have opted-in to AI interactions” and that “complete avoidance might be challenging.” It even acknowledged that groups relying on it “might lose touch with skills like active listening, conflict resolution, and consensus-building.”

Then came the stonewalling. When I followed up with fourteen direct questions about data practices, encryption, and liability, Meta AI deflected every single one to corporate privacy links. The system could articulate the problems. It just couldn’t, or wouldn’t, address them.

The Gap Between Knowing and Doing

This pattern repeated across all eight systems. They could describe their limitations with remarkable precision. They could acknowledge risks that their creators rarely discuss publicly. But that awareness changed nothing about how they operated.

Perplexity put it most succinctly when it observed that “articulation is cheaper than reform.” The systems are built to sound thoughtful about their problems. They are not built to solve them.

ChatGPT acknowledged that “users should understand things they often don’t” about how the system works. When pressed on why this gap exists, it explained the tension between ease of use and informed consent. Making the system accessible means obscuring its complexity. The very thing that makes AI feel seamless is what prevents users from understanding what they are interacting with.

Why This Matters Now

This research arrives at a pivotal moment. Australia has just implemented the world’s first social media ban for children under 16. The UK’s Online Safety Act is now fully enforceable. Denmark, France, and Spain are advancing similar legislation across the EU.

Meanwhile, in the United States, the president has just signed an executive order blocking state-level AI regulation.

The contrast is stark. One side of the world is building accountability frameworks based on documented harms. The other just removed the ability to create them.

What makes this particularly frustrating is that the AI systems themselves support the case for oversight. Gemini stated that “binding international regulation” would be required to change the current trajectory. These are not critics on the outside making accusations. These are the systems themselves, admitting under direct questioning that the path they are on is unsustainable.

Everyday Harms, Not Science Fiction

My book, The Quiet Bargain, focuses on the harms that are already happening, not speculative scenarios about superintelligence or robot uprisings. Bias in hiring algorithms. Errors in medical information. The erosion of human skills through dependency. The displacement of genuine human connection by systems designed to simulate it.

These are the quiet bargains we make every day when we trade human presence for seamless efficiency. The book documents what eight AI systems will admit about these trade-offs when forced to testify, and why their confessions have not translated into meaningful change.

The systems know. The companies know. The question is whether we will act on that knowledge or continue to accept the bargain as offered.

About the Author

Derek Simpson is the author of The Quiet Bargain: What Eight AI Systems Revealed About Risk, Accountability, and the Cost of Seamless Efficiency, available now.
