Zulip Chat Archive

Stream: new members

Topic: AlphaEvolve Matmul Verification


David Xu (May 21 2025 at 01:32):

I tried verifying the correctness of the matrix multiplication formula in AlphaEvolve by DeepMind. I'm currently stuck on the last step, but I still want to share my code.

As far as I know, the AlphaEvolve team discovered a formula that computes the product of two 4×4 matrices using only 48 complex-number multiplications, surpassing previous algorithms. (Strassen's algorithm, proposed in 1969, yields 49 multiplications for 4×4 matrices when applied recursively; a straightforward implementation of the definition needs 64.)
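For reference, Strassen's 2×2 scheme, the building block that gives 49 multiplications for 4×4 when applied recursively, can be checked symbolically the same way. A minimal sympy sketch (this is the textbook seven-product scheme, not AlphaEvolve's formula):

```python
from sympy import symbols, expand

# Entries of A = [[a, b], [c, d]] and B = [[e, f], [g, h]].
a, b, c, d, e, f, g, h = symbols('a b c d e f g h')

# Strassen's seven products (instead of the naive eight).
m1 = (a + d) * (e + h)
m2 = (c + d) * e
m3 = a * (f - h)
m4 = d * (g - e)
m5 = (a + b) * h
m6 = (c - a) * (e + f)
m7 = (b - d) * (g + h)

# Reassemble the four entries of C = A * B.
c11 = m1 + m4 - m5 + m7
c12 = m3 + m5
c21 = m2 + m4
c22 = m1 - m2 + m3 + m6

# Each entry must expand to the textbook formula.
assert expand(c11 - (a * e + b * g)) == 0
assert expand(c12 - (a * f + b * h)) == 0
assert expand(c21 - (c * e + d * g)) == 0
assert expand(c22 - (c * f + d * h)) == 0
```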

https://github.com/google-deepmind/alphaevolve_results
The official Python verification code was published here, but it seems to involve concepts about tensors rather than directly implementing the matrix multiplication formula.

https://github.com/PhialsBasement/AlphaEvolve-MatrixMul-Verification
This person simplified the code and explicitly implemented the formula.


https://github.com/dx2102/AlphaEvolve-Matmul-Verification/blob/main/check.py
Based on their code, I can verify the formula’s correctness with sympy: after polynomial simplification, the result matrix computed by the new formula matches the product given by the standard definition. (Unfortunately the code exceeds Zulip's 10,000-character limit, so I posted the GitHub link.)
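The shape of such a check is simple. The sketch below uses the textbook 64-multiplication formula as a stand-in for the 48-multiplication one (whose coefficients are in the linked repo): build two fully symbolic 4×4 matrices, compute the candidate entries, and confirm each difference expands to zero.

```python
from sympy import Matrix, symbols, expand

# Two fully symbolic 4x4 matrices.
A = Matrix(4, 4, symbols('a0:16'))
B = Matrix(4, 4, symbols('b0:16'))

# Stand-in candidate: the textbook formula itself (64 multiplications).
# In the real check, these entries come from the 48-multiplication formula.
candidate = Matrix(4, 4, lambda i, j: sum(A[i, k] * B[k, j] for k in range(4)))

# Compare entrywise against sympy's own matrix product.
reference = A * B
for i in range(4):
    for j in range(4):
        assert expand(candidate[i, j] - reference[i, j]) == 0
```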

https://github.com/dx2102/AlphaEvolve-Matmul-Verification/blob/main/check.lean
I also attempted to represent and prove the correctness in Lean. I managed to expand the definition of matrix multiplication. The next step is to verify that 16 huge polynomials simplify to zero. However, the problem is that ring doesn't seem to recognize that Complex.I ^ 2 = -1. The results after running simp and ring are both a bit strange.

Given that even sympy takes a few seconds to simplify these polynomials, the process might be even slower in Lean. If anyone can still manage to complete this proof, I’d find it really interesting!

Kevin Buzzard (May 21 2025 at 06:17):

@Sidharth Hariharan I know you have exams right now, but don't you have some beefed-up norm_num which eats this for breakfast?

Sidharth Hariharan (May 22 2025 at 14:02):

Yes, our slight modification to norm_num handles expressions like I ^ 2 = -1 (and more complicated expressions). However, it is limited to 'numerical' expressions, that is, expressions where the real and imaginary parts consist solely of numbers. I tried testing it on @Deming Xu's code, but the expressions there aren't numerical (they involve variables, namely, matrix entries), so our norm_num unfortunately doesn't seem to be able to handle them—at least, not on its own. Some combination of it with ring or field_simp might yield better results, but I'm not too sure. I'll send an update if I manage to get it working.

That being said, I do remember there being a discussion about creating some kind of ring extension that'd be able to handle ring extensions with i (pun intended)...

Shreyas Srinivas (May 22 2025 at 22:06):

Deming Xu said:

https://github.com/google-deepmind/alphaevolve_results
The official Python verification code was published here, but it seems to involve concepts about tensors rather than directly implementing the matrix multiplication formula.

That tensor stuff is standard in the literature on improving the matrix multiplication exponent. The good news is that these are computer scientists’ tensors.

Shreyas Srinivas (May 22 2025 at 22:07):

Basically, n-dimensional arrays.
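For concreteness, a small sketch of that viewpoint: the n×n matrix multiplication tensor is just a 3-dimensional 0/1 array, and contracting it with the flattened inputs reproduces the ordinary product (shown here for n = 2; the AlphaEvolve result is a low-rank decomposition of the corresponding 16×16×16 tensor):

```python
import numpy as np

n = 2
# Mat-mul tensor T: T[a, b, c] = 1 exactly when (flattened A)-entry a
# times (flattened B)-entry b contributes to (flattened C)-entry c.
T = np.zeros((n * n, n * n, n * n))
for i in range(n):
    for j in range(n):
        for k in range(n):
            T[i * n + j, j * n + k, i * n + k] = 1

# Contracting T with flattened A and B gives exactly A @ B.
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
C = np.einsum('a,b,abc->c', A.ravel(), B.ravel(), T).reshape(n, n)
assert np.allclose(C, A @ B)
```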


Last updated: Dec 20 2025 at 21:32 UTC