@VictorTaelin
Last active November 12, 2024 09:50

Scaling HVM towards an Optimal Theorem Prover

Why does an Optimal Theorem Prover imply AGI?

Theorem proving is the ability to solve mathematical problems. A computer program capable of doing this competently would immediately unlock the automation of every intellectual task a human can perform, because every such problem can be reduced to solving abstract equations. From the discovery of new physics to recursive self-improvement and unfathomable technological progress: everything we associate with "AGI" would follow from an Optimal Theorem Prover.
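
To make that claim concrete, here is a minimal Haskell sketch (ours, not from the gist) of the Curry-Howard view behind it: a proposition is a type, a proof is a program of that type, and "solving the problem" means finding a term that inhabits the type.

```haskell
-- Minimal Curry-Howard sketch (illustrative, not the author's code):
-- the type states a theorem; a program of that type is its proof, so
-- theorem proving becomes the search for an inhabitant of a given type.

-- Theorem: A ∧ B → B ∧ A. The type is the proposition; the body is the proof.
andComm :: (a, b) -> (b, a)
andComm (x, y) = (y, x)

-- Theorem: A → (A → B) → B (modus ponens), again stated as a type.
modusPonens :: a -> (a -> b) -> b
modusPonens x f = f x

main :: IO ()
main = print (andComm (1 :: Int, "two"), modusPonens (3 :: Int) (+ 1))
```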

Why have we failed to build an Optimal Theorem Prover?

We know efficient theorem proving is physically possible: humans do it with minimal energy. Per the Church-Turing thesis, unless human brains are magical, the same capability can be reproduced by an algorithm. Given that no known problem in type theory requires neural networks for an optimal solution, such an algorithm is likely purely functional. Yet, despite decades of research, all current solutions, including Z3, Vampire, and SPASS, still underperform humans, because they hit an exponential combinatorial wall. This strongly suggests that the optimal solution may require techniques unavailable in traditional paradigms.
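
To make that combinatorial wall concrete, the hypothetical Haskell sketch below (ours, not from the gist) enumerates candidate λ-terms by size: the number of candidates grows exponentially with every extra node, which is why blind enumeration stops scaling after a handful of steps.

```haskell
-- Illustrative only: naive enumeration of proof candidates hits an
-- exponential wall long before the terms get interesting.

-- Untyped λ-terms in de Bruijn form.
data Term = Var Int | Lam Term | App Term Term deriving Show

-- All terms built from exactly n constructors, with v variables in scope.
termsOfSize :: Int -> Int -> [Term]
termsOfSize v n
  | n <= 0    = []
  | n == 1    = [Var i | i <- [0 .. v - 1]]
  | otherwise =
      [Lam b | b <- termsOfSize (v + 1) (n - 1)] ++
      [ App f x
      | k <- [1 .. n - 2]
      , f <- termsOfSize v k
      , x <- termsOfSize v (n - 1 - k) ]

-- Print how many candidates exist at each size: the count grows
-- exponentially with every additional constructor.
main :: IO ()
main = mapM_ report [1 .. 10]
  where
    report n = putStrLn (show n ++ " -> " ++ show (length (termsOfSize 1 n)))
```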

How does HVM help address this problem?

HVM (Higher-order Virtual Machine) is a massively parallel runtime with a unique feature: it is an optimal λ-calculus evaluator. Through interaction nets and superposition nodes, it can evaluate certain programs exponentially faster than any other known system, i.e., billions of times faster for some inputs. Early experiments already show HVM outperforming alternatives by 300x when searching for programs that fit specific patterns, and that gap will only widen as we find clever ways to exploit optimal evaluation. This asymptotic advantage could be the missing piece needed to make automated theorem proving viable.
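
The kind of search those experiments perform can be pictured with a toy Haskell sketch (ours; real HVM code would use superposition nodes directly): superpose many candidate programs, run them all against a specification, and keep the ones that fit. The list monad below redoes the work for every candidate, which is exactly the redundancy HVM's optimal evaluation is meant to share away.

```haskell
-- Illustrative program search: enumerate ("superpose") candidate
-- expressions, then filter the ones matching an input/output spec.
-- In HVM, superposition nodes let these alternatives share their
-- subcomputations; the plain list monad here recomputes them.

import Control.Monad (guard)

-- A tiny expression language over one input variable.
data Expr = X | Lit Int | Add Expr Expr | Mul Expr Expr deriving Show

eval :: Int -> Expr -> Int
eval x X         = x
eval _ (Lit n)   = n
eval x (Add a b) = eval x a + eval x b
eval x (Mul a b) = eval x a * eval x b

-- All expressions up to a given depth, as a flat list of alternatives.
superpose :: Int -> [Expr]
superpose 0 = X : map Lit [0 .. 3]
superpose d = superpose 0 ++
  [op a b | op <- [Add, Mul], a <- superpose (d - 1), b <- superpose (d - 1)]

-- Keep the candidates that satisfy every (input, output) example.
synth :: [(Int, Int)] -> [Expr]
synth spec = do
  e <- superpose 2
  guard (all (\(i, o) -> eval i e == o) spec)
  pure e

main :: IO ()
main = print (take 3 (synth [(1, 3), (2, 5)]))  -- programs behaving like 2*x + 1
```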

Why Mac Minis?

The bitter lesson of AI is that scale trumps clever algorithms. This was proven true with Large Language Models, where massive compute led to emergent capabilities. Yet this lesson has not been applied to logical AI systems. By leveraging Apple's M4 chips, which offer superior performance per watt for HVM workloads, we can build a cost-effective supercomputer dedicated to exploring logical AI algorithms at unprecedented scale.

Our Plan

We're building a cluster of 512 Mac Minis to run massively parallel theorem-proving experiments. Using HVM's optimal evaluation and the M4's computing power, we'll explore type-driven proof synthesis and the evolution of Elementary Affine Logic terms. The goal is to demonstrate that, with sufficient scale and the right computational model, we can make meaningful progress towards an Optimal Theorem Prover - and, consequently, towards AGI.
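
As a hedged, toy-scale illustration of what "type-driven proof synthesis" means (the term language and names below are ours, not the project's), the following Haskell sketch searches for a λ-term inhabiting a goal type in minimal implicational logic:

```haskell
-- Toy type-driven synthesis: given a goal type, search for a term that
-- inhabits it. Illustrative only; not a complete or terminating prover.

data Ty = TVar String | Ty :-> Ty deriving (Eq, Show)
infixr 5 :->

data Tm = Var String | Lam String Tm | App Tm Tm deriving Show

-- Goal-directed search: to prove A -> B, assume A and prove B; to prove
-- an atom, apply some hypothesis whose conclusion matches it.
prove :: [(String, Ty)] -> Ty -> [Tm]
prove ctx (a :-> b) =
  let x = "x" ++ show (length ctx)
  in [Lam x body | body <- prove ((x, a) : ctx) b]
prove ctx goal@(TVar _) =
  [ foldl App (Var x) args
  | (x, ty) <- ctx
  , args <- applyHyp ctx ty goal ]

-- Argument lists that make a hypothesis of type `ty` conclude `goal`.
applyHyp :: [(String, Ty)] -> Ty -> Ty -> [[Tm]]
applyHyp _ ty goal | ty == goal = [[]]
applyHyp ctx (a :-> b) goal =
  [arg : rest | rest <- applyHyp ctx b goal, arg <- prove ctx a]
applyHyp _ _ _ = []

main :: IO ()
main = do
  -- Goal: (A -> B) -> (B -> C) -> A -> C, i.e., function composition.
  let goal = (TVar "A" :-> TVar "B") :-> (TVar "B" :-> TVar "C")
               :-> TVar "A" :-> TVar "C"
  print (head (prove [] goal))
```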
