Dailyphoton – Interview
In an era where large language models (LLMs) are making leaps and bounds every month, and tech billionaires are declaring that Artificial General Intelligence (AGI) is just years away, a powerful but obscure current of thought is emerging, asking a thorny question: Are we chasing a ghost? A well-crafted but speculative paper, “AGI is Mathematically Impossible,” has become a focal point for this debate. We sat down with a critical analyst who has been closely following this conversation to delve into the layers of one of the most important questions of the 21st century.
Full text of the paper by Max M. Schlereth discussed in this interview:
AGI is Mathematically Impossible
https://philarchive.org/archive/SCHAIM-14
Q: Sir, the article opens with a bold statement: AGI is impossible. This argument is based on a concept the author calls the “Infinite Choice Barrier.” Before going into technical details, could you explain this concept through the real-world examples the author gives?
A: Certainly. The author cleverly uses three examples from three different fields to illustrate a single structural problem.
First, there is “The Conjugal Paradox.” Imagine a person asking their partner a seemingly simple question: “Hey, be honest: have I gained weight?” An AI, no matter how sophisticated, would freeze when faced with this question. It would try to calculate the optimal answer. It would analyze the history of the relationship, the tone of the question, the micro-expressions on the face, the cultural context, and countless other factors. But the problem is that each new factor entering the analysis opens up dozens of new possibilities. An “honest” answer could be hurtful. A “diplomatic” answer could be perceived as insincere. The author argues that the decision space here is not just “large”; it is irreducibly infinite. Humans deal with this every day not by calculation but by intuition and empathy, things an algorithm cannot fully simulate.
The second example is “Newtonian Thinking.” Imagine an AI that has been trained perfectly on the entire corpus of Newtonian physics. Then we feed it data from the Michelson-Morley experiment, which shows that the speed of light is constant. What will the AI do? It will try to explain this data within the Newtonian framework. It will generate thousands, even millions of increasingly complex sub-theories: “Maybe there is a special kind of aether wind?”, “Maybe there are micromechanical interactions?”. It will never, on its own, come up with Einstein’s idea: “Maybe time itself is relative.” The reason is that the concept of “relative time” does not exist in its vocabulary and rules. The AI is locked into its cognitive framework. It can optimize within the framework, but it cannot “jump” outside it.
Finally, there is the “Moment of Madness” business example, a story about rescuing a failing television station. Any financial analysis based on media industry data would say “No purchase.” But the real solution lies in a completely different realm: combining the purchase of the television station with a cheap real estate deal nearby, and then making the television station itself a tenant in that building. An AI would never find this solution, because it has no reason to search the “real estate” space for a “media” problem. This connection is random to the algorithm and cannot be systematically searched.
All three examples converge on the same conclusion: truly “general” problems often require the ability to go beyond formal logic and preconceived notions, something that algorithmic systems, by their very nature, cannot do.
Q: These examples are intuitive, but they seem more philosophical than mathematical. What theoretical basis does this argument have to claim “impossible,” rather than just “very difficult”?
A: This is where the fundamental theorems of computer science come in. The argument traces back to Alan Turing’s Halting Problem.
To understand this, we need to understand the nature of the Halting Problem. In 1936, Turing posed a seemingly reasonable question: can we write an “oracle” program, call it Halts, which can examine any other program P and any input I, and decide with certainty whether P(I) will eventually finish and stop, or will run forever?
Turing rigorously proved that such an “oracle” cannot exist. His proof, a proof by contradiction, was elegant. He assumed that Halts existed, and then built a “rebel” program called Paradox. Paradox takes the source code of a program P as input and asks Halts whether P would halt when given itself as input, that is, P(P). Paradox then does exactly the opposite of what the oracle predicts: if Halts says “halts,” it runs forever; if Halts says “doesn’t halt,” it stops immediately.
The final trap is when we run Paradox with itself as input (Paradox(Paradox)). At this point, Paradox will ask Halts about itself, and no matter what Halts answers, it will be proven wrong. This logical contradiction forces us to conclude that our initial assumption – that Halts exists – is wrong.
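For readers who prefer code, here is a minimal Python sketch of Turing’s construction. It is purely illustrative: halts is the assumed oracle that the proof shows cannot exist, and the function names are ours, not the paper’s.

def halts(program, program_input):
    # Assumed oracle: returns True iff program(program_input) eventually stops.
    # Turing's argument shows that no such function can actually be written.
    raise NotImplementedError

def paradox(program):
    # Do the exact opposite of whatever the oracle predicts about program(program).
    if halts(program, program):
        while True:          # oracle says "halts" -> loop forever
            pass
    else:
        return               # oracle says "doesn't halt" -> stop immediately

# Feeding Paradox to itself forces the contradiction: paradox(paradox)
# halts exactly when halts(paradox, paradox) says it does not.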
The argument of the “impossibility” camp is: if an AI cannot solve such a fundamental problem about the behavior of algorithms themselves, how can it achieve “general” intelligence?
Q: But the Halting Problem seems too abstract. Another example often brought up is the Collatz Conjecture. How is it related, and why is it considered a barrier?
A: The Collatz Conjecture is a perfect example of how the Halting Problem reaches from the theoretical world into everyday mathematics. The conjecture, proposed in 1937, has a very simple rule:
Start with any positive integer n. If it is even, divide it by 2. If it is odd, multiply it by 3 and add 1. Repeat the process until n reaches 1.
The conjecture is that no matter what number you start with, the sequence will always end at 1.
Its appeal lies in the fact that the rules are simple but the behavior is extremely chaotic and unpredictable. Mathematicians have used supercomputers to verify that the conjecture holds for every starting value up to roughly 2^68 (about 3 × 10^20). Hundreds of papers have been written, and many of the world’s top mathematicians have attempted it, but so far no one has been able to prove that it holds for all cases. We still do not know whether some huge number might fall into a different cycle or escape off to infinity.
The following short Python function implements the Collatz iteration (it returns only if the sequence starting at n reaches 1):
def collatz(n):
    # n is a positive integer
    while n > 1:
        if n % 2 == 0:       # n is even
            n = n // 2
        else:                # n is odd
            n = 3 * n + 1
    return n                 # the loop exits only once n reaches 1
Now, let’s connect it to AGI. If an AGI Halts “oracle” existed, it should be able to settle the Collatz Conjecture with ease. We would just write the program collatz(n) and ask Halts whether collatz(n) halts; if it answered “YES” for every n, the conjecture would be proven. The fact that such a seemingly simple problem remains beyond the reach of the entire mathematical mind of mankind suggests that predicting the “halting” behavior of programs in general is a task of unbounded difficulty. The “impossible” camp uses this as evidence that there are logical pitfalls that even the simplest algorithms can fall into, and that AGI is no exception.
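As a concrete but purely hypothetical illustration, reusing the collatz function above and the impossible halts oracle from the earlier sketch, settling the conjecture would reduce to one query per number:

def collatz_reaches_one(n):
    # True iff collatz(n) terminates, i.e. the sequence starting at n reaches 1.
    # Relies on the halts oracle, which Turing proved cannot exist in general.
    return halts(collatz, n)

# If this returned True for every positive integer n, the Collatz
# Conjecture would be settled.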
Q: I understand the theoretical hurdles, but I want to get back to a more practical argument. Does AGI really need to solve these problems? An AGI would only need human-level capabilities, and humans cannot solve these problems either.
A: This is the strongest and most logical counterargument. Setting a superhuman standard for AGI and then concluding that it must fail is a straw man fallacy.
When faced with this argument, the “impossible” camp is forced to clarify its position. The problem, they will say, is not whether AGI can solve famous open problems. The problem is that the structure of these impossible problems is hidden inside everyday problems. They will argue that a social conversation, a creative business decision, or the understanding of a work of art all contain similar “undecidable” elements. Such tasks require leaps of intuition, empathy, and tolerance for contradiction, qualities the “impossible” camp believes are non-algorithmic.
Ultimately, the debate is not “Is AGI smarter than Einstein?” It is: “Is intelligence, in the broadest human sense, a computable phenomenon?”
We will continue the dialogue, delving into the fragile bridge between the world of pure logic and messy physical reality. This is where the arguments become most contemporary and fascinating.
Q: We’ve discussed the barriers to computation. But all of these, from Turing Machines to the Halting Problem, exist in an idealized mathematical space with assumptions like infinite paper tape and infinite time. A true AGI would have to operate in our physical world, a world bound by the laws of thermodynamics and finite energy. Does it make sense to impose paradoxes from an ideal world on a physical entity?
A: This is an incredibly profound question. By bringing physics into play, we are challenging the most basic definitions. A theoretically “infinite” loop, in a system with finite energy, will eventually come to a halt. This would seem to invalidate Turing’s paradox.
Consider the Paradox program again. When Halts predicts that Paradox will stop (“YES”), Paradox enters an infinite loop. But does an “infinite” loop actually exist? In our universe, the second law of thermodynamics states that entropy always increases. Every computational system consumes energy and generates heat. A computer running an “infinite” loop would eventually run out of energy, or its components would wear out and fail. Physically, therefore, it will always stop. If so, Halts’ “YES” prediction turns out to be correct, and the paradox seems to disappear!
Conversely, when Halts predicts that Paradox will not halt (“NO”), Paradox attempts to execute its halt instruction. But that process, from querying Halts and receiving the result to executing the final instruction, takes time and energy. What if the system runs out of energy just before the halt instruction is executed? The program has now stopped because of resource exhaustion, not because its logic ran to completion. Judged by its internal logic, it never “halted,” so Halts’ “NO” prediction is once again vindicated.
By introducing physical constraints, you blur the clear line between “stopping” and “not stopping” that Turing’s paradox relies on. This opens up a fascinating field called Hypercomputation, where researchers explore models of computation that can overcome the Turing limit by exploiting hypothetical physical phenomena.
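One way to see the flavor of this in code: once a finite resource budget is imposed, the undecidable question “does it halt?” degenerates into “does it halt within B steps?”, which any simulator can answer. A rough, purely illustrative sketch, with a step counter standing in for energy and helper names of our own:

def halts_within(program, argument, budget):
    # Decidable, resource-bounded version of the halting question: simulate
    # program(argument) for at most `budget` steps. The program must be
    # written as a generator that yields once per step.
    steps = 0
    for _ in program(argument):
        steps += 1
        if steps >= budget:
            return False     # ran out of "energy" before finishing
    return True              # finished within the budget

def collatz_steps(n):
    # Generator form of the Collatz iteration: yields after each step.
    while n > 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        yield n

print(halts_within(collatz_steps, 27, budget=200))   # True: 27 reaches 1 in 111 steps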
Q: So if these theoretical paradoxes can be “neutralized” by the laws of physics, does that mean the barrier to AGI has been removed? The paper seems to have anticipated this and presents a “second proof” based on Information Theory and Entropy. Is this proof stronger?
A: This is a very clever move by the author. Realizing that Turing-based arguments can be challenged, he proposes a new, more sophisticated barrier, which he calls “IOpenER” (Information Opens, Entropy Rises).
Claude Shannon’s classical information theory says that, on average, receiving more information reduces a system’s uncertainty (its entropy). But “IOpenER” proposes the opposite hypothesis: in some complex problems, receiving more information does not clarify the problem but instead opens up many new interpretations, making the system more ambiguous and chaotic. Entropy, instead of decreasing, begins to rise uncontrollably.
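For reference, Shannon’s measure of uncertainty is H(X) = -Σ p(x) log2 p(x). A tiny Python sketch (our own illustration, not taken from the paper) shows the classical picture that “IOpenER” pushes against: learning an outcome collapses entropy to zero.

import math

def shannon_entropy(probabilities):
    # H(X) = sum over x of -p(x) * log2 p(x), measured in bits.
    return sum(-p * math.log2(p) for p in probabilities if p > 0)

print(shannon_entropy([0.5, 0.5]))   # a fair coin: 1.0 bit of uncertainty
print(shannon_entropy([1.0]))        # outcome fully known: 0.0 bits

# IOpenER's claim is the reverse: for certain open-ended problems, the space
# of plausible interpretations grows as information arrives, so uncertainty
# rises instead of falling. That remains a hypothesis, not a theorem.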
According to this theory, AI is not “stuck” in an endless loop, but rather “drowned” in an ever-expanding ocean of possibilities. This behavior can manifest itself in the form of circular, contradictory answers, or “hallucinations.”
Q: This sounds very interesting and seems to explain a lot of the strange behavior of current AI models. But there is an important question: Is “IOpenER” a proven theorem like the Halting Problem?
A: You have correctly pointed out the core weakness of the “second proof.” No, “IOpenER” is not a proven theorem. It is a scientific hypothesis proposed by the author himself.
Here is the key difference: the undecidability of the Halting Problem is a proven mathematical theorem. “IOpenER” is a speculative idea, albeit a clever one. It has great explanatory power, but it is still just a hypothesis waiting to be tested. It is possible that the phenomena “IOpenER” describes are merely symptoms of current AI architectures. So while the paper’s first layer of argument rests on a rock-solid mathematical foundation, the second layer, though fascinating, is built on much less solid ground.
Q: So what should we conclude after all this? Is AGI a possible dream or a mathematical illusion?
A: This dialogue shows that there is no simple answer. It depends entirely on your definitions and assumptions.
If you believe that intelligence, intuition, and creativity are complex forms of computation that we don’t yet fully understand, and that hypotheses like “IOpenER” merely describe temporary technical problems, then AGI is still a viable goal.
But if you believe that logical barriers like the Halting Problem are absolute, and that they will always recur in new forms—whether as theoretical infinite loops or real entropy explosions—then AGI as currently defined will forever remain an illusion.
Ultimately, the race to AGI is not just a technical one. It is also a profound philosophical quest about our own nature: is the human mind the ultimate computing machine, or is it something else entirely, beyond the reach of logic and algorithms? The answer, perhaps, will shape the future of both humanity and machines.