Zulip Chat Archive
Stream: general
Topic: new (mac) laptop for Lean
Kevin Buzzard (Jan 07 2025 at 16:12):
I am getting greedy because I want to compile FLT faster, so I'm thinking of buying the fastest laptop available for compiling Lean code. Talking to several people this seems to mean the fastest mac. @Oliver Nash I think you bought a mac relatively recently and were super-impressed by how quickly it could compile mathlib. But @Alex J. Best you told me that you also had a recent mac and Lean was performing very poorly on it for reasons you couldn't fathom. Does anyone else have experience with new macs and Lean? Is there anything I should avoid? My plan is to get work to pay so price is not an issue, I just want to maximise speed.
Oliver Nash (Jan 07 2025 at 16:16):
I bought an M3 Max in late 2022 and it was able to compile Mathlib from scratch in about 20 minutes a year or so ago (I haven't tested it recently). I love it and find it extremely fast and a huge improvement to my quality of life etc.
Oliver Nash (Jan 07 2025 at 16:17):
However unless you happen to like / be used to Apple's design (which I do and am) then a Mac is probably not the right call: I think you can get better compute for your buck elsewhere (albeit you'll have to live a cold dark life outside of the beautiful Apple ecosystem).
Oliver Nash (Jan 07 2025 at 16:20):
Rereading your message, I see "compute for your buck" not really relevant. However I believe a stronger statement is true: even though these Macs are very fast, there exist faster (and as it happens cheaper) laptops. I'm afraid I don't know what brand though.
Kevin Buzzard (Jan 07 2025 at 16:22):
The reason I suggested a mac is that when I finally got to meet @Markus Himmel recently he suggested that "the fastest laptop is pretty much always the fastest mac"
Johan Commelin (Jan 07 2025 at 16:24):
Are you sure you want a laptop?
Johan Commelin (Jan 07 2025 at 16:24):
Or is a fast desktop machine also fine? And then you SSH into it.
Kevin Buzzard (Jan 07 2025 at 16:32):
I just spent two hours in a coffee shop with no internet hacking on FLT and then 30 minutes on the London tube with no internet so maybe a laptop is better for me? Although I guess I already have a fast desktop and should probably talk to you offline about how to set this up
Johan Commelin (Jan 07 2025 at 16:34):
The short answer is: VScode SSH extension
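For readers who haven't set this up before, a minimal sketch of that workflow (the host alias, hostname and username below are placeholders, not anything from this thread):
# ~/.ssh/config on the laptop (host and user are placeholders)
Host flt-desktop
    HostName desktop.example.ac.uk
    User kevin
# install the extension once, then connect and open a folder remotely:
$ code --install-extension ms-vscode-remote.remote-ssh
$ code --remote ssh-remote+flt-desktop ~/FLT
All elaboration and builds then run on the desktop; the laptop only renders the editor.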
Sebastian Ullrich (Jan 07 2025 at 16:37):
Note that AMD just announced the 9950X3D which is the successor of the CPU compiling Mathlib in 17min in the speedcenter, and also a mobile CPU variant that may be able to get close to that
Arthur Paulino (Jan 07 2025 at 16:48):
Let me second the desktop machine suggestion + remote access.
You can leave your heavier, more expensive and energy consuming machine at home with proper cooling conditions and walk around with a lightweight (and cheaper) machine to serve as a "graphical terminal", if you consider solutions like the VS Code SSH extension that Johan suggested.
This setup is more flexible, as it allows you to - practically speaking - have access to speedy compilation from arbitrary machines.
Arthur Paulino (Jan 07 2025 at 16:52):
In the same spirit, I'll throw out this idea in case it applies to your use case: it might be worthwhile for the FLT project to fund and acquire a machine (or maybe two machines?) so members can have access to it
Mauricio Collares (Jan 07 2025 at 16:59):
You can also rent a server for a month to see if the desktop + remote access strategy suits you. For example, Oracle Cloud gives 3,000 free CPU hours per month, and they have some pretty cool 80-core Ampere Altras (of course, 80 cores * 720 hours in a month is more than 3,000, so if you don't want to pay anything you can either use fewer cores or not leave the machine running all the time). You'd need to set it up, though, which might be annoying.
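To spell out the arithmetic behind that caveat: 80 cores × 720 hours is 57,600 core-hours per month, far above the 3,000 free ones, so the free allowance covers roughly 4 cores running continuously, or the full 80-core machine for about 37 hours a month.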
Mauricio Collares (Jan 07 2025 at 16:59):
But if price is not an issue, can't you get work to pay for a nice server and a nice laptop? :) Mac Minis are quite cheap, so the combined cost might not be too bad.
Mauricio Collares (Jan 07 2025 at 17:01):
Does anyone know how long it takes for the cheapest M4 Mac Mini (600 dollars, 700 euros) to compile mathlib?
Kevin Buzzard (Jan 07 2025 at 17:23):
I think that a desktop is not a good fit for my situation. I already have a fast desktop. I am specifically interested in suggestions for a laptop which will compile mathlib in under 20 minutes.
Mauricio Collares (Jan 07 2025 at 17:26):
Sorry, I now noticed you mentioned this already! I should have read the whole thread first.
Jireh Loreaux (Jan 07 2025 at 17:28):
Even if you obtain a fast laptop, you should still set up remote access via ssh to your desktop. That way, whenever you have internet (which you could probably have in a coffee shop via a wifi hotspot using your mobile?), you can have two full-blown FLT installations running simultaneously. The data transferred over ssh is minimal.
Jireh Loreaux (Jan 07 2025 at 17:29):
(and, by the way, it's super easy using Tailscale and the VS Code remote ssh extension)
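Roughly, that combination looks like the following (machine and user names are placeholders; on macOS the Tailscale client is usually driven from the menu-bar app rather than the CLI):
# on both machines, after installing Tailscale:
$ sudo tailscale up
# check that the desktop shows up in the tailnet and note its name/IP:
$ tailscale status
# then ssh (or point the VS Code Remote-SSH extension) at it from anywhere:
$ ssh kevin@flt-desktop
No port forwarding or public IP on the desktop is needed, which is most of why it feels "super easy".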
Mauricio Collares (Jan 07 2025 at 17:31):
I should say I fully agree with Kevin, working offline is typically a much better experience. Fewer distractions, fewer random failures, less latency, and so on.
Jireh Loreaux (Jan 07 2025 at 17:35):
I'm not arguing against buying a fast laptop, but a fast desktop (in addition to the laptop) on the university network is the best of both worlds, especially because it means my download speeds for the cache are ridiculously fast.
Andrew Yang (Jan 07 2025 at 18:30):
I just want to add a datapoint that I often work offline and I got a new MacBook this summer for Lean and I’m quite satisfied with it.
Kevin Buzzard (Jan 07 2025 at 18:46):
Yes I already have a fast desktop and a good latency, this is why I am specifically asking about a fast laptop. I think we all agree that having a fast desktop is a great idea. I also agree that hiding away in a coffeeshop away from all the people knocking on my office door and emailing me is a great idea.
Kevin Buzzard (Jan 07 2025 at 18:48):
Andrew Yang said:
I just want to add a datapoint that I often work offline and I got a new MacBook this summer for Lean and I'm quite satisfied with it.
So is the answer "just buy a new MacBook"? What I was hoping to hear was comments on possible mistakes (e.g. "avoid this specific chip/model") and possible tips (e.g. "make sure you get the one with extra foobars because this really makes a difference")
Patrick Massot (Jan 07 2025 at 18:52):
All this discussion will make me happier when I wait for Lean to build. I’ll be able to think: every second I wait is some money I’m not giving to Apple.
Geoffrey Irving (Jan 07 2025 at 18:54):
My recommendation is that yes, you should buy a Mac laptop, and specifically, if you want the fastest one, go with the M4 Pro. The M4 Max has more GPU cores, but these are irrelevant for normal Lean use.
Alex J. Best (Jan 07 2025 at 18:56):
I think the issue with mine that I mentioned to you Kevin is likely insufficient ram, it is only 8gb despite being a relatively new machine. Lean is basically unusable locally for me. Other than that I'm very happy with it.
Christian Merten (Jan 07 2025 at 19:11):
Remember to buy one with enough disk space. My poor 512 GB laptop's disk is half eaten up by various mathlib caches.
Andrew Yang (Jan 07 2025 at 19:20):
Kevin Buzzard said:
So is the answer "just buy a new MacBook"? What I was hoping to hear was comments on possible mistakes (e.g. "avoid this specific chip/model") and possible tips (e.g. "make sure you get the one with extra foobars because this really makes a difference")
This is my only one so I can’t tell you what not to buy, but mine (M3 Max with 64GB ram and 2TB storage) is more than enough.
Bryan Gin-ge Chen (Jan 07 2025 at 19:25):
A datapoint: I bought a new 14-in M4 Macbook Pro with 16 GB RAM in November and just tried building mathlib4 from scratch (lake clean; lake build) for the first time now and it took about 57 (!) minutes. It was much longer than I expected, but on the other hand I was doing various non-lean stuff in the background, including streaming videos and there wasn't any noticeable lag (and the laptop never got uncomfortably warm either). If I had to guess, if you'll be wanting to compile mathlib4 a lot, you'll probably want more RAM?
Using lean4 in VS Code has been quite comfortable for me (but I'm not a heavy user and mostly rely on the cache).
Julian Berman (Jan 07 2025 at 19:31):
Interesting. That's slightly surprising, I think it takes around that long for me to compile even with 8GB RAM on my M3 MacMini. (About to try again, but yeah last time I measured it was around an hour.)
Julian Berman (Jan 07 2025 at 19:33):
But yeah Kevin if you're willing to sell your soul to the closed source demons and money is close to no object, a 14" MBP with M4 Max and upgrades to the 16 core chip, 64GB RAM and nano texture display is probably what I'd buy. Looks like £4K buys you entry to the club.
Rob Lewis (Jan 07 2025 at 19:46):
Christian Merten said:
Remember to buy one with enough disk space. My poor 512 GB laptop's disk is half eaten up by various mathlib caches.
Agreed on this. After many years of not really thinking about disk space, I now have to janitor my various elan toolchains, cache storage, projects with unpacked caches, etc.
Alok Singh (Jan 07 2025 at 19:58):
I’m very satisfied with my “old” M2 Max. The 96gb ram especially is handy. But I’d get at least 2tb of storage since I usually only have 30gb free and I hate babysitting it. I highly recommend a MacBook, it’s really nice to have desktop level power without bothering with ssh.
Thomas Browning (Jan 07 2025 at 20:46):
Rob Lewis said:
Christian Merten said:
Remember to buy one with enough disk space. My poor 512 GB laptop's disk is half eaten up by various mathlib caches.
Agreed on this. After many years of not really thinking about disk space, I now have to janitor my various elan toolchains, cache storage, projects with unpacked caches, etc.
Likewise. Is it possible to get elan to automatically clean up old caches (which could always be redownloaded if needed again)?
Mauricio Collares (Jan 07 2025 at 20:50):
Elan has an elan toolchain gc command to view unused toolchains (and you can pass --delete to it)
Mauricio Collares (Jan 07 2025 at 20:56):
Oh wait, it's not in a stable version yet, I think! I have it because nixpkgs builds elan from git. s/has/will have/
Edit: It's now available in elan 4.0.0-rc1
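For reference, a sketch of the cleanup flow (assuming an elan new enough to have the gc subcommand, per the above):
# list installed toolchains
$ elan toolchain list
# show toolchains not referenced by any known project
$ elan toolchain gc
# actually delete them; they can always be re-downloaded later
$ elan toolchain gc --delete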
Christian Merten (Jan 07 2025 at 21:02):
Thomas Browning said:
Is it possible to get elan to automatically clean up old caches (which could always be redownloaded if needed again)?
The reason why I have multiple copies of the mathlib cache is precisely because I don't want to spend so much time waiting for lake exe cache get to finish when switching between branches or projects (not to mention the often necessary rm -rf .lakes in between)
Mauricio Collares (Jan 07 2025 at 21:04):
If you want to go the other way, you can just use git worktree a lot
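For anyone who hasn't used it, a minimal worktree sketch (branch and path names are made up):
# one clone, several checked-out branches sharing a single .git object store
$ cd mathlib4
$ git worktree add -b my-feature ../mathlib4-my-feature origin/master
$ git worktree list
# remove it when done
$ git worktree remove ../mathlib4-my-feature
Note that, as pointed out just below, each worktree still gets its own .lake build directory and unpacked cache, so this saves on git objects but not on build artifacts.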
Christian Merten (Jan 07 2025 at 21:11):
I already do this, but this does not deduplicate the ~5 GB mathlib cache.
Oliver Nash (Jan 07 2025 at 21:12):
Bryan Gin-ge Chen said:
I bought a new 14-in M4 Macbook Pro with 16 GB RAM in November and just tried building mathlib4 from scratch (lake clean; lake build) for the first time now and it took about 57 (!) minutes. It was much longer than I expected, but on the other hand I was doing various non-lean stuff in the background, including streaming videos and there wasn't any noticeable lag (and the laptop never got uncomfortably warm either).
This surprised me sufficiently that I ran my own experiment (M3 Max, 96GB RAM). I measured 27 minutes wall clock to build everything (on recent master 01e663839357f0acbf26e478a6e76f6f29e98ce4). I did not have any other applications running and just left the laptop alone. After running lake clean:
➜ mathlib4 git:(master) time lake build Mathlib
Build completed successfully.
lake build Mathlib 18213.79s user 3256.97s system 1310% cpu 27:18.53 total
Winston Yin (尹維晨) (Jan 07 2025 at 22:05):
@Kevin Buzzard if you’d like I can do a test compile today on my MacBook Pro 16” M3 Max 64 GB memory and report back the time.
Bryan Gin-ge Chen (Jan 07 2025 at 22:08):
I just re-ran a build on commit 01e663839 with time lake build Mathlib after lake clean, this time after closing all other apps and leaving my laptop alone and it finished in 41 minutes (M4 Pro, 16GB RAM):
lake build Mathlib 20415.68s user 3410.83s system 954% cpu 41:36.61 total
Julian Berman (Jan 07 2025 at 22:45):
time caffeinate -i lake build
4268.45 real 23849.58 user 4578.14 sys
(so yeah it's about 70 minutes here at this point on this mighty lil' 8GB machine).
Julian Berman (Jan 07 2025 at 22:47):
Mathlib's build process is a bit "interesting" -- (and I know we're offtopic, and that there are lots of threads here where people have looked at this in more systematic and precise ways than my anecdotal comment is about to) but basically, from my recollection, things proceed pretty quickly and then, around the last 200-300 files in the build, stuff slows to a crawl, perhaps on files in Mathlib that are really memory-bound and which then end up swapping a lot on my 8GB machine (and presumably less on yours with your fancy normal amounts of RAM)
Henrik Böving (Jan 07 2025 at 23:04):
Note that as the transitive import set of a mathlib file grows bigger, its startup time also keeps getting longer. For example, doc-gen running on the later files of mathlib to generate documentation spends 60% of its runtime per file setting up the imports and only 40% actually doc-genning
Jon Eugster (Jan 07 2025 at 23:38):
Bryan Gin-ge Chen said:
[...] with time lake build Mathlib [...] this time after closing all other apps and leaving my laptop alone [...]
on my new macbook pro with M4 Pro + 48GB memory it took 26min on a fresh clone (toolchain already downloaded before starting) :)
lake build 17092.07s user 3486.77s system 1317% cpu 26:01.57 total
Yaël Dillies (Jan 08 2025 at 00:13):
Kevin Buzzard said:
hiding away in a coffeeshop away from all the people knocking on my office door and emailing me is a great idea.
So this is where you have been :light_bulb:
Kevin Buzzard (Jan 08 2025 at 00:39):
Yeah I installed you in my office to take messages
Arthur Paulino (Jan 08 2025 at 00:46):
About mathlib cache, I will suggest something I suggested a while ago, hoping someone might pick up the task.
cache could run a counting routine that warns the user whenever more than X% of the cached files are orphaned. This routine could be triggered after every call to the get command. The warning could
- just suggest calling cache clean (which only gets rid of orphan files), or
- prompt the user asking whether cache clean should run, as a y/n question.
Sorry for the tangent!
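In the meantime, a rough manual approximation of that idea (the cache path is an assumption; it is typically ~/.cache/mathlib on Linux and may differ on other systems):
# see how big the shared cache has grown
$ du -sh ~/.cache/mathlib   # typical Linux location; may differ on your system
# drop cached files that are orphaned with respect to the current project
$ lake exe cache clean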
Winston Yin (尹維晨) (Jan 08 2025 at 09:13):
Winston Yin (尹維晨) said:
Kevin Buzzard if you’d like I can do a test compile today on my MacBook Pro 16” M3 Max 64 GB memory and report back the time.
Fresh mathlib build under realistic conditions (with dozens of Chrome tabs, VSCode, 6 different instant messaging apps, Mail, Slack open):
lake build 17694.83s user 3313.07s system 1471% cpu 23:47.48 total
Winston Yin (尹維晨) (Jan 08 2025 at 09:16):
Please consider a laptop with a good battery life, as all the cores (16 on mine) are saturated during build, with power consumption sometimes exceeding 60W for the duration of the build.
Chris Birkbeck (Jan 08 2025 at 12:40):
Oliver Nash said:
I bought an M3 Max in late 2022 and it was able to compile Mathlib from scratch in about 20 minutes a year or so ago (I haven't tested it recently). I love it and find it extremely fast and a huge improvement to my quality of life etc.
I have basically the same computer and I also find it's really good and can compile mathlib in about 20 mins. I can't say I love the Mac software, but the hardware seems good for Lean.
Matthew Ballard (Jan 08 2025 at 13:05):
Here is a glimpse into your future (14", M1 Max, 64 GB, with tons of applications running, and jacked into the grid)
lake build Mathlib 20890.21s user 3474.57s system 815% cpu 49:46.18 total
Other thoughts:
- I have never thought "Jeez, I have too much memory" or "Jeez, I have too much storage". 64 GB was the maximum configurable then (Jan '22) and I feel good about that choice. I would max out both as much as your budget allows if you plan on riding the machine until the wheels fall off.
- I have thought "I wish I had more storage". I have 1 TB and it is 80-90% full, preventing me from doing things like migrating away from Dropbox, but it is still generally usable.
- As mentioned above, the battery life while building Mathlib is now pretty meager, about an hour or two at the most.
- 14'' vs 16'' has its trade-offs. 16'' has better thermals and a larger battery (and I believe overall longer life). 14'' is more portable.
Matthew Ballard (Jan 08 2025 at 13:13):
Given all of that, if I had to do it over again, I would buy some insane desktop box, slap a disposable OS on it, and ssh in from my iPad. Right now, I almost exclusively ssh into my Mac while it rests at home.
Dagur Asgeirsson (Jan 08 2025 at 19:33):
Matthew Ballard said:
Given all of that, if I had to do it over again, I would buy some insane desktop box, slap a disposable OS on it, and ssh in from my iPad. Right now, I almost exclusively ssh into my Mac while it rests at home.
How do you ssh from your iPad? Do you use vscode in the browser?
Matthew Ballard (Jan 08 2025 at 19:43):
Long ago, before it became a subscription, I purchased https://blink.sh. I use neovim in that terminal emulator, but they have since bundled in a copy of VS Code.
At least https://tailscale.com is still free for very mild use.
Winston Yin (尹維晨) (Jan 08 2025 at 22:19):
I made the switch from 14” to 16” and don’t regret the decrease in portability. The overall longer battery life and zero CPU throttling during the 23 min build time are some justifications for the price jump. If you go for the most powerful M[n] Max chip, you are more likely to utilise its full power on the 16”. I’ve seen reports that the 14” with M3 Max throttles, but also that the thermals may have improved with the 14” M4 Max model. I might add that even with the fan at full speed, my laptop does not sound like it’s trying to fly to the moon.
Shanghe Chen (Jan 10 2025 at 05:28):
The Mac laptops are awesome for Lean, and in case anyone is interested in a non-Mac or relatively low-budget solution, I am currently using an MSI Alpha 17 C7VG with a Ryzen 9 7945HX and 64GB RAM. With it I am able to compile mathlib in about 20-30 mins:
$ time lake build
Build completed successfully.
lake build 30218.43s user 1692.51s system 2529% cpu 21:01.66 total
Kevin Buzzard (Jan 10 2025 at 06:35):
What OS do you have?
Shanghe Chen (Jan 10 2025 at 07:30):
It's on Arch Linux, and the relatively low budget is a trade-off for it being heavier, less portable, and more complicated to set up. Similar laptops with the same 7945HX CPU and RAM should be similar at compiling mathlib, I think
Mauricio Collares (Jan 10 2025 at 09:28):
Sebastian Ullrich said:
Note that AMD just announced the 9950X3D which is the successor of the CPU compiling Mathlib in 17min in the speedcenter, and also a mobile CPU variant that may be able to get close to that
For example, https://rog.asus.com/laptops/rog-strix/rog-strix-g16-2025-g614/ has the next generation of @Shanghe Chen's laptop's processor. I don't think it's for sale yet.
Kevin Buzzard (Mar 19 2025 at 12:47):
OK, my new mac has arrived and I was going to run time lake build on mathlib4, but now I've realised that this also times things like cloning aesop, building batteries etc; so I don't properly understand what the benchmark actually is.
Julian Berman (Mar 19 2025 at 12:50):
In a more serious benchmark this may matter, but at least for my part above I simply ignored it, since the cloning was done in a few seconds, which is negligible compared to the full timing. I'm curious to see what your results are; my new machine, which also arrived yesterday, is mid-building, so I'll have timing numbers in a few minutes.
Shreyas Srinivas (Mar 19 2025 at 13:17):
I think as an end user, you should time it all.
Shreyas Srinivas (Mar 19 2025 at 13:18):
Because that is basically what people will experience, which is just as important as the benchmark itself
Ruben Van de Velde (Mar 19 2025 at 13:26):
Well yes, but at least the cloning part depends on network speed more than anything. Ideally lake would just print a summary of which steps took how long at the end
Kevin Buzzard (Mar 19 2025 at 13:27):
from start to finish it was
lake build 17133.38s user 4390.55s system 1173% cpu 30:34.13 total
Matthew Ballard (Mar 19 2025 at 13:28):
How does your old machine compare?
Julian Berman (Mar 19 2025 at 14:40):
Here on my M2 mac mini (8GB) I get:
4335.98 real 25614.95 user 4021.91 sys
and on my new M4 MBA (24GB) I get:
3723.75 real 30203.72 user 3976.60 sys
which is a smaller speedup than I honestly expected, but watching btm as it built, Lean was at least pegging all the CPUs, so at least I'm CPU-bound for the build. Memory use was pretty low throughout, around 12GB.
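For what it's worth, taking the two real times above, 4335.98 s / 3723.75 s ≈ 1.16, i.e. roughly a 16% wall-clock improvement, even though the user time actually went up, which could simply mean more cores busy at once (or a slightly different Mathlib revision being built).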
Shreyas Srinivas (Mar 19 2025 at 15:41):
Doesn’t Mac mini have better cooling than an MBA?
Julian Berman (Mar 19 2025 at 15:47):
It has fans whereas the MBA is fully passive IIRC -- but what are you suggesting? macOS throttling CPUs because of temp? I wouldn't bet on this having happened as my previous M2 MBA had similar timings to the mac mini. But given I still haven't sent the M2 MBA back as a trade in I can do a direct comparison with that as well just to triple check.
Shreyas Srinivas (Mar 19 2025 at 16:00):
Yes, I am suggesting throttling
Shreyas Srinivas (Mar 19 2025 at 16:00):
I don't think the MBA was designed keeping heavy CPU workloads in mind. So I would definitely expect some throttling.
Peter Nelson (Mar 19 2025 at 16:14):
On an M1 ultra mac studio, where presumably heat/throttling isn't a concern:
23454.05s user 4924.01s system 1610% cpu 29:22.02 total
Shreyas Srinivas (Mar 19 2025 at 16:23):
so an m4 Macbook pro outperformed an M1 ultra mac studio?
Jon Eugster (Mar 19 2025 at 21:35):
Ruben Van de Velde said:
Well yes, but at least the cloning part depends on network speed more than anything. Ideally lake would just print a summary of which steps took how long at the end
I think one could first call lake clean in a freshly cloned mathlib. This seems to download all dependencies without building anything.
Ruben Van de Velde (Mar 19 2025 at 21:55):
Huh, really?
Jon Eugster (Mar 19 2025 at 22:15):
not sure it's an intended/stable feature, but it does seem to work that way... Probably because lake clones all dependencies on the first call, regardless of what that call is and lake clean is just one call that doesn't do anything relevant in this case
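Putting the thread's methodology in one place, the from-scratch benchmark most people here seem to be timing looks roughly like this (the lake clean behaviour on a fresh clone is per the observation above, not a documented guarantee):
$ git clone https://github.com/leanprover-community/mathlib4.git
$ cd mathlib4
$ lake clean        # on a fresh clone this also appears to fetch dependency sources
$ time lake build   # the wall-clock ("real") time is the number being compared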
Peter Nelson (Mar 20 2025 at 00:06):
Shreyas Srinivas said:
so an m4 Macbook pro outperformed an M1 ultra mac studio?
Yeah, I was a little surprised too. But I don’t understand what is happening with parallelism here. Certainly not all cores were being used.
Bolton Bailey (Jul 05 2025 at 16:27):
I recently got a new mac laptop so in the interest of collecting more data I will be building mathlib from scratch on both the new and the old one.
Here is my old one's specs
Hardware Overview:
Model Name: MacBook Pro
Model Identifier: Mac14,6
Model Number: Z1760005WLL/A
Chip: Apple M2 Max
Total Number of Cores: 12 (8 performance and 4 efficiency)
Memory: 96 GB
System Firmware Version: 10151.121.1
OS Loader Version: 10151.121.1
Activation Lock Status: Disabled
And the timing:
~/Desktop/mathlibcontribution/mathlib4 master
> time lake build
Build completed successfully.
lake build 24081.24s user 3626.12s system 1096% cpu 42:07.25 total
~/Desktop/mathlibcontribution/mathlib4 master
> git rev-parse HEAD
ede29a38dfb2b9841d4898ff4525e9504667a057
Bolton Bailey (Jul 06 2025 at 15:35):
And the new specs:
Hardware Overview:
Model Name: MacBook Pro
Model Identifier: Mac16,8
Model Number: Z1FE001BALL/A
Chip: Apple M4 Pro
Total Number of Cores: 12 (8 performance and 4 efficiency)
Memory: 24 GB
System Firmware Version: 11881.121.1
OS Loader Version: 11881.121.1
Activation Lock Status: Disabled
and the timing
~/Desktop/mathlibcontribution/mathlib4 @ede29a38
> time lake build
Build completed successfully.
lake build 19918.69s user 2673.53s system 1099% cpu 34:14.07 total
Quang Dao (Jul 06 2025 at 15:44):
Bolton Bailey said:
I recently got a new mac laptop so in the interest of collecting more data I will be building mathlib from scratch on both the new and the old one.
Here is my old one's specs
Hardware Overview:
Model Name: MacBook Pro
Model Identifier: Mac14,6
Model Number: Z1760005WLL/A
Chip: Apple M2 Max
Total Number of Cores: 12 (8 performance and 4 efficiency)
Memory: 96 GB
System Firmware Version: 10151.121.1
OS Loader Version: 10151.121.1
Activation Lock Status: Disabled
And the timing:
~/Desktop/mathlibcontribution/mathlib4 master
> time lake build
Build completed successfully.
lake build 24081.24s user 3626.12s system 1096% cpu 42:07.25 total
~/Desktop/mathlibcontribution/mathlib4 master
> git rev-parse HEAD
ede29a38dfb2b9841d4898ff4525e9504667a057
Am I reading this right - 96GB RAM?
Bolton Bailey (Jul 06 2025 at 19:37):
Quang Dao said:
Am I reading this right - 96GB RAM?
Yes, no idea why I got so much but that's indeed what my old laptop had.
Yongshun Ye (Jul 29 2025 at 14:19):
For those interested, here are the results (time lake build on latest master) on Ubuntu 24.04.2 LTS with an AMD Ryzen 9950X and 48GB of RAM.
First time including fetching the dependency repos:
real 17m15.375s
user 436m35.777s
sys 23m39.813s
Second time after lake clean without fetching the dependency repos:
real 16m14.758s
user 437m53.617s
sys 23m57.999s
In System Monitor I see all CPU cores are utilized most of the time:
Screenshot from 2025-07-29 21-34-28.png
Alfredo Moreira-Rosa (Jul 29 2025 at 15:51):
It takes me like 4 hours :cry:
Sebastian Ullrich (Jul 30 2025 at 07:39):
Yongshun Shreck Ye said:
For those interested, here are the results (time lake build on latest master) on Ubuntu 24.04.2 LTS with an AMD Ryzen 9950X and 48GB of RAM.
That's an interesting result because at https://speed.lean-lang.org/mathlib4 we get <22min builds from a 7950X3D. So the 3D cache seems to more than equalize the generational gap.
Yongshun Ye (Jul 30 2025 at 11:24):
@Sebastian Ullrich As Gemini explained to me, I think the number to look at for the actual time spent is the real one, which is 17m/16m; sys is the total CPU time spent in kernel mode, summed across all cores. Somehow the output format of time is quite different on Ubuntu from macOS.
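To make that concrete, here is the same output annotated (numbers copied from the earlier message):
real    17m15.375s   # wall-clock time: how long you actually waited
user    436m35.777s  # CPU time in user mode, summed across all cores
sys     23m39.813s   # CPU time in kernel mode, summed across all cores
On a 16-core/32-thread 9950X, user time being roughly 25× the real time is consistent with most of the hardware threads being busy for most of the build.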
Sebastian Ullrich (Jul 30 2025 at 11:32):
Oops, I did focus on sys accidentally as it was formatted differently! That's good news about newer CPUs then
Mauricio Collares (Sep 03 2025 at 22:05):
Sebastian Ullrich said:
Note that AMD just announced the 9950X3D which is the successor of the CPU compiling Mathlib in 17min in the speedcenter, and also a mobile CPU variant that may be able to get close to that
Mathlib (commit 67006b84b492ebbc58d8d50dd3646048c8adc244) build times on a 9950X3D, after fetching dependency sources and running lake clean:
$ time lake build
Build completed successfully (7179 jobs).
real 14m48,755s
user 378m57,131s
sys 24m30,177s
Luc Duponcheel (Sep 04 2025 at 16:03):
Hi, folks,
just "out of curiosity" I also tried this time lake build after a previous "fetching dependency sources" build and a lake clean
Build completed successfully (7190 jobs).
real 30m8.348s
user 636m5.638s
sys 43m40.425s
Agreed, 2x slower, but not too bad either ... (certainly not 4 hours).
htop showed that all 24 "threads" were very busy (close to 100%)
I did this on a "humble" 2000 Euro, 13 inch Asus ProArt laptop that I merely bought because of its weight, OLED screen (I am an "old guy", and I can use it without wearing glasses), and GPU.
I have a question:
the 24 "threads" that htop showed me, are they also GPU related or only CPU related? If they are alse GPU related, then that could explain the "reasonable" result.
Thx
... not that it matters much, I am just curious ...
Luc
Henrik Böving (Sep 04 2025 at 16:17):
Lean does not interact with the GPU at all. It doesn't really have a workload that would be sensible for GPU compute.
Luc Duponcheel (Sep 04 2025 at 16:22):
Thx Henrik, btw, is that not a "missed opportunity" now that GPUs are so powerful (and assuming that their treatment of parallelism is ok)?
Thales Fragoso (Sep 04 2025 at 16:25):
Luc Duponcheel said:
Thx Henrik, btw, is that not a "missed opportunity" now that GPUs are so powerful (and assuming that their treatment of parallelism is ok)?
GPUs are only good for a very specific kind of computation. I haven't seen them used for compilers.
Weiyi Wang (Sep 04 2025 at 16:25):
GPUs are good at performing the same task on a large set of data. Type checking sounds like the exact opposite of that
Thales Fragoso (Sep 04 2025 at 16:26):
Has anyone been playing with the Snapdragon Elite chips? Clang compilation benchmarks seem to place them close to an M3 Pro. And with the Thinkpad T14s getting some decent Linux support, they start to get interesting.
Henrik Böving (Sep 04 2025 at 16:27):
No, GPU compute is only effective for very particular numerical workloads. Lean's job as a program is mostly to go through trees, build new subtrees and make decisions based on looking at nodes in those trees. All of these are a very bad fit for GPUs.
The compute model of a GPU is in essence that you get hundreds or even thousands of (compared to a CPU) quite slow cores that mostly want to execute the same numerical program on different pieces of data in the memory of the GPU. If your computation is shaped like that, a modern GPU can often outrun a modern CPU; if it is not, the GPU will perform much worse than the CPU.
Filippo A. E. Nuccio (Sep 21 2025 at 16:58):
Seeing this info about possible incompatibilities between GMP and Zen 5 CPUs, I wonder if anything bad has been experienced using Lean on these devices, since GMP is now integrated.
Henrik Böving (Sep 21 2025 at 17:21):
Almost no computations using GMP ever happen when you are using Lean. As long as your natural numbers stay below 2^63 everything is done using normal 64 bit integer operations. I would expect that nobody using Lean will ever experience something like this because of GMP, unless they are purposely working with large natural numbers.
Last updated: Dec 20 2025 at 21:32 UTC