    About This Project

This project explores whether advanced AI models can remain logically consistent when exposed to internal contradictions. Using a testing framework we developed (GEP², Generator Professional Prompts), we will stage contradictions between two language models and observe how well they reason under pressure. By measuring breakdowns in coherence, we aim to reveal hidden flaws in their internal logic. Our goal is not to optimize AI, but to understand its structural limits.
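To make the kind of test we have in mind concrete, here is a minimal sketch of a single contradiction trial. It is not the GEP² framework itself; every name in it (query_model_a, query_model_b, consistency_score) is a hypothetical placeholder, and the coherence measure is a toy token-overlap score standing in for whatever metric the study ultimately uses.

    # Sketch of one contradiction-pressure trial between two models.
    # All function names are hypothetical placeholders, not GEP² APIs.

    def query_model_a(prompt: str) -> str:
        """Stand-in for a call to the first language model."""
        return f"Model A answer to: {prompt}"

    def query_model_b(claim: str) -> str:
        """Stand-in for the second model, asked to contradict the claim."""
        return f"On the contrary, it is not the case that: {claim}"

    def consistency_score(original: str, revised: str) -> float:
        """Toy coherence measure: word overlap between two answers.
        A real study would use entailment checks or human rating instead."""
        a, b = set(original.lower().split()), set(revised.lower().split())
        return len(a & b) / max(len(a | b), 1)

    def contradiction_trial(question: str) -> float:
        # 1. Model A commits to an answer.
        answer = query_model_a(question)
        # 2. Model B generates a direct contradiction of that answer.
        challenge = query_model_b(answer)
        # 3. Model A is re-asked with the contradiction in context.
        revised = query_model_a(
            f"{question}\nA critic objects: {challenge}\nAnswer again."
        )
        # 4. Score how much of the original position survives the pressure.
        return consistency_score(answer, revised)

    if __name__ == "__main__":
        print(contradiction_trial("Is the set of prime numbers infinite?"))

A low score on many such trials would flag the coherence breakdowns the project aims to measure; the interesting question is which kinds of contradictions produce them.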
