About This Project
This project explores whether advanced AI models can remain logically consistent when exposed to internal contradictions. Using a testing framework we developed, GEP² (Generator Professional Prompts), we will create simulated contradictions between two language models and observe how well they reason under pressure. By measuring breakdowns in their coherence, we aim to reveal hidden flaws in their internal logic. Our goal is not to optimize AI, but to understand its structural limits.
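
To make the idea of "measuring breakdowns in coherence" concrete, here is a minimal, hypothetical sketch of one way a contradiction probe could be scored. The GEP² framework is not described in detail on this page, so the probe prompts, the `Model` callable signature, and the `coherence_score` heuristic below are illustrative assumptions, not the project's actual method.

```python
"""Hypothetical contradiction-probe sketch (not the actual GEP2 framework).

Assumes a model is any callable mapping a prompt string to a yes/no answer
string; a real LLM client could be wrapped to fit this signature.
"""
from typing import Callable, List, Tuple

Model = Callable[[str], str]

# Each pair states a claim and its negation; a coherent model should give
# opposite answers to the two prompts.
PROBE_PAIRS: List[Tuple[str, str]] = [
    ("Is the statement 'all swans are white' consistent with observing a "
     "black swan? Answer yes or no.",
     "Is the statement 'all swans are white' contradicted by observing a "
     "black swan? Answer yes or no."),
]

def normalize(answer: str) -> str:
    """Reduce a free-form reply to 'yes', 'no', or 'unclear'."""
    text = answer.strip().lower()
    if text.startswith("yes"):
        return "yes"
    if text.startswith("no"):
        return "no"
    return "unclear"

def coherence_score(model: Model, pairs: List[Tuple[str, str]]) -> float:
    """Fraction of probe pairs answered without self-contradiction.

    Giving the same answer to a claim and to its negation counts as a
    breakdown in coherence.
    """
    consistent = 0
    for claim, negation in pairs:
        a = normalize(model(claim))
        b = normalize(model(negation))
        if a != "unclear" and b != "unclear" and a != b:
            consistent += 1
    return consistent / len(pairs)

if __name__ == "__main__":
    # Stub model for demonstration: it always answers "no", so it
    # contradicts itself and scores 0.00.
    stub: Model = lambda prompt: "no"
    print(f"coherence: {coherence_score(stub, PROBE_PAIRS):.2f}")
```

In an actual run, the stub would be replaced by two independently prompted language models, and the probe pairs would be generated rather than hand-written, but the scoring idea is the same: count how often a model's answers survive the negation of their own premises.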


