Zulip Chat Archive

Stream: mathlib4

Topic: Why PiTensorProduct.congr is marked noncomputable?


Paradoxy (Nov 03 2025 at 14:04):

Lean accepts it without noncomputable. The same question can be asked about PiTensorProduct.mapMultilinear. This is despite the fact that TensorProduct.congr is not marked noncomputable.

Aaron Liu (Nov 03 2025 at 14:05):

I think the whole file has suppress_compilation on it

Aaron Liu (Nov 03 2025 at 14:05):

it's to speed up the file

Kenny Lau (Nov 03 2025 at 14:09):

it was; recently some files related to tensor products had that line removed

Paradoxy (Nov 03 2025 at 14:09):

Aaron Liu said:

it's to speed up the file

But what if, given some basis and some equivalence that depends on PiTensorProduct.Equiv, one wants to actually compute something? Say, representing the end result as a matrix.

Kenny Lau (Nov 03 2025 at 14:10):

i don't think it's currently possible to "compute" anything with a basis;

and this has been a recurring theme: people object to the noncomputable tag on the grounds that they want to "compute" something, and I think this is misguided. right now the approach in mathlib is to "compute" via simp lemmas, not just the low-level tool #eval. we should get people to accept the idea that #eval is not the only way you can compute something.

using tactics instead of #eval scales better, and it actually fits better with what humans mean when they say "compute". when people say "compute" (1+√3)^2, they mean simplify it to a commonly accepted format, not whatever #eval does (represent it as a Cauchy sequence of rational numbers, for example)
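to make that concrete, here is a sketch (not from the thread) of "computing" (1 + √3)^2 by rewriting, assuming Mathlib's `Real.sq_sqrt : 0 ≤ a → Real.sqrt a ^ 2 = a` and the `linear_combination` tactic:

```lean
import Mathlib

-- "compute" (1 + √3)^2 by rewriting with a known lemma,
-- not by evaluating anything with #eval
example : (1 + Real.sqrt 3) ^ 2 = 4 + 2 * Real.sqrt 3 := by
  have h : Real.sqrt 3 ^ 2 = 3 := Real.sq_sqrt (by norm_num)
  linear_combination h
```

`linear_combination h` closes the goal because, writing s for √3, the difference (1 + s)^2 − (4 + 2s) equals s^2 − 3, which is exactly what h says is zero.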

Aaron Liu (Nov 03 2025 at 14:12):

the way stuff is constructed in mathlib is not necessarily the best way to compute things

Paradoxy (Nov 03 2025 at 14:16):

Kenny Lau said:

i don't think it's currently possible to "compute" anything with a basis;

and this has been a recurring theme: people object to the noncomputable tag on the grounds that they want to "compute" something, and I think this is misguided. right now the approach in mathlib is to "compute" via simp lemmas, not just the low-level tool #eval. we should get people to accept the idea that #eval is not the only way you can compute something.

Can you please elaborate more? A while ago I wrote an interpreter for the BrainF language in Lean. I proved that the halting problem for this language is undecidable. On the other hand, with #eval I am able to actually use my interpreter: I can give it BrainF code and compute stuff with it. How is it possible to "compute" in this sense with tactics?

Paradoxy (Nov 03 2025 at 14:18):

Aaron Liu said:

the way stuff is constructed in mathlib is not necessarily the best way to compute things

Well, this is true, but in this particular case I do not see the point of marking that equivalence as noncomputable when Lean is happy to accept it the way it is. I mean, if it is noncomputable then so should TensorProduct.congr be, no?

Aaron Liu (Nov 03 2025 at 14:18):

like I said, it makes the file slower

Aaron Liu (Nov 03 2025 at 14:20):

I guess if at some point the slowdown of the file becomes less of a problem then the suppress_compilation would be removed

Kenny Lau (Nov 03 2025 at 14:20):

Paradoxy said:

I can give BrainF code and compute stuff with it. How is it possible to "compute" in this sense with tactics?

import Mathlib

def Real.fib : ℕ → ℝ
  | 0 => 13
  | 1 => 3
  | n+2 => fib n + fib (n + 1)

example : Real.fib 10 = sorry := by simp [Real.fib]; norm_num

you write the reductional rules, and have your tactic automatically apply them to the given goal (most of the time simp would be sufficient for this)

Kenny Lau (Nov 03 2025 at 14:20):

norm_num filled in the gap that simp couldn't handle, which is to say: seeing 197 + 304 and turning it into 501 (without depending on #eval, to oversimplify things)
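that step in isolation, as a minimal sketch (not from the thread):

```lean
import Mathlib

-- norm_num certifies numeric facts by producing a proof,
-- without compiling or running anything with #eval
example : (197 : ℝ) + 304 = 501 := by norm_num
```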

Kenny Lau (Nov 03 2025 at 14:25):

there's no way you can actually "compute" cos(π/2) if you followed the #eval definition for it; or at least, you would never be able to reduce it to 0. that's not how CASs compute things
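for comparison, Mathlib already "computes" this by rewriting; a sketch, assuming the current lemma name `Real.cos_pi_div_two`:

```lean
import Mathlib

-- the reduction to 0 is a rewrite rule, not an evaluation
example : Real.cos (Real.pi / 2) = 0 := Real.cos_pi_div_two
```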

Paradoxy (Nov 03 2025 at 14:27):

Kenny Lau said:

Paradoxy said:

I can give BrainF code and compute stuff with it. How is it possible to "compute" in this sense with tactics?

import Mathlib

def Real.fib : ℕ → ℝ
  | 0 => 13
  | 1 => 3
  | n+2 => fib n + fib (n + 1)

example : Real.fib 10 = sorry := by simp [Real.fib]; norm_num

you write the reductional rules, and have your tactic automatically apply them to the given goal (most of the time simp would be sufficient for this)

Can you please tell me how one would write a tactic for this scenario?

Given Bubble sort program in BrainF

>>,[>>,]<<[[<<]>>>>[<<[>+<<+>-]>>[>+<<<<[->]>[<]>>-]<<<[[-]>>[>+<-]>>[<<<+>>>-]]>>[[<+>-]>>]<]<<[>>+<<-]<<]>>>>[.>>]

and an interpreter interpret, currently I can do
#eval interpret bubble_sort "2shpic1"
and it gives me the "12chips" that I expect.

Note that the random string "2shpic1" is completely arbitrary.

Aaron Liu (Nov 03 2025 at 14:28):

I don't think BF is a good analogy for linear algebra

Kenny Lau (Nov 03 2025 at 14:29):

@Paradoxy
you would do the same thing as the example above,

example : interpret bubble_sort "2shpic1" = sorry := by simp [interpret, bubble_sort]

Kenny Lau (Nov 03 2025 at 14:29):

the advantage of controlling which reduction lemmas you have access to is that you can make optimisations on this front! you don't have to use the code that you wrote in the definition!

Kenny Lau (Nov 03 2025 at 14:30):

for example you can say that [>+<-] does a particular thing (turn a,b into 0,b-a), and you don't have to wait for the program to finish

Kenny Lau (Nov 03 2025 at 14:31):

are you modelling your cells with ZMod 256 or Nat or Int?

Paradoxy (Nov 03 2025 at 14:32):

Kenny Lau said:

are you modelling your cells with ZMod 256 or Nat or Int?

It used to be Char, but now it is with Nats.

Kenny Lau (Nov 03 2025 at 14:33):

right, then it would get a bit trickier because of truncating subtraction, but I hope you get the general idea

Paradoxy (Nov 03 2025 at 14:33):

Kenny Lau said:

right, then it would get a bit trickier because of truncating subtraction, but I hope you get the general idea

Yes, thank you very much.

Kenny Lau (Nov 03 2025 at 14:33):

oh wait no there is no subtraction, it turns a,b into 0,a+b
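as a sketch of that spec-lemma idea (a toy two-cell model, not Paradoxy's actual interpreter; all names here are made up):

```lean
import Mathlib

-- toy model of the `[>+<-]` loop acting on two cells (a, b):
-- repeat (decrement left, increment right) until the left cell is 0
def addLoop : ℕ → ℕ → ℕ × ℕ
  | 0, b => (0, b)
  | a + 1, b => addLoop a (b + 1)

-- the spec lemma: the loop turns (a, b) into (0, a + b);
-- once proved, it can be used as a single rewrite instead of
-- simulating every iteration of the loop
theorem addLoop_eq (a b : ℕ) : addLoop a b = (0, a + b) := by
  induction a generalizing b with
  | zero => simp [addLoop]
  | succ n ih => simp only [addLoop, ih]; congr 1 <;> omega
```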

Eric Wieser (Nov 03 2025 at 21:27):

Aaron Liu said:

like I said, it makes the file slower

Is this still true? Certainly it used to be, but a lot of these issues were fixed with the new compiler.

Aaron Liu (Nov 03 2025 at 21:32):

if it doesn't make the file slower then I see no reason not to remove the suppress_compilation

Aaron Liu (Nov 03 2025 at 21:32):

I just assumed it was, since no one had removed it yet

Eric Wieser (Nov 03 2025 at 21:33):

I removed some of those, but not exhaustively

Eric Wieser (Nov 03 2025 at 21:34):

Some did indeed still have performance implications, perhaps worth searching for an open PR


Last updated: Dec 20 2025 at 21:32 UTC