Zulip Chat Archive
Stream: general
Topic: Computational Neuroscience in Lean?
Britt Anderson (Sep 11 2024 at 13:56):
Are there any efforts to use Lean as a tool for computational cognitive (neuro)science? I have looked at the Lean for Scientists and Engineers course, and much of the promise suggested there would apply to theoretical neuroscience as well. We have many computer programs that are offered as models of cognition, none of which, to my knowledge, has ever been proved to actually implement what the cognitive scientists behind them intend. I would be interested in being pointed to even very small projects that have tried this. As an example, we know that Hopfield networks converge. This was proved in the traditional way decades ago, but to my knowledge even tutorial versions of the network have never been coded in a language where the implementation is formally verified. I don't really have any doubts that popular implementations of Hopfield networks are in fact correct; I just use Hopfield's convergence proof as an example of the type of work I hope people can point me towards. Thank you (and for reference, here is Hopfield's original brief paper with the proof: https://doi.org/10.1073/pnas.79.8.2554).
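To make the target concrete, here is a rough, untested sketch of how Hopfield's energy argument might be stated in Lean 4 with a recent Mathlib. Everything below is hypothetical (my own names, not from any existing formalization), and the proof is left as `sorry`:

```lean
import Mathlib

noncomputable section

/-- A Hopfield network on `n` units: symmetric weights with zero diagonal. -/
structure Hopfield (n : ℕ) where
  w : Fin n → Fin n → ℝ
  symm : ∀ i j, w i j = w j i
  diag : ∀ i, w i i = 0

/-- Hopfield's energy function `E(s) = -(1/2) · ∑ᵢ ∑ⱼ wᵢⱼ sᵢ sⱼ`
(±1 spin convention, zero thresholds, for brevity). -/
def energy {n : ℕ} (net : Hopfield n) (s : Fin n → ℝ) : ℝ :=
  -(1 / 2 : ℝ) * (∑ i, ∑ j, net.w i j * s i * s j)

/-- Asynchronous update of unit `k`: set it to the sign of its local
field `∑ⱼ w k j * s j`, leaving every other unit unchanged. -/
def update {n : ℕ} (net : Hopfield n) (s : Fin n → ℝ) (k : Fin n) :
    Fin n → ℝ := fun i =>
  if i = k then
    (if 0 ≤ (∑ j, net.w k j * s j) then 1 else -1)
  else s i

/-- The content of Hopfield's 1982 convergence argument: for ±1 states,
an asynchronous update never increases the energy. Proof omitted. -/
theorem energy_nonincreasing {n : ℕ} (net : Hopfield n) (s : Fin n → ℝ)
    (hs : ∀ i, s i = 1 ∨ s i = -1) (k : Fin n) :
    energy net (update net s k) ≤ energy net s := by
  sorry

end
```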
Tyler Josephson ⚛️ (Sep 13 2024 at 00:07):
Cool idea!
FYI, @Tomas Skrivan is working on automatic differentiation in Lean and has made progress toward implementing an MLP
Britt Anderson (Sep 13 2024 at 15:00):
Thanks for the encouragement. I took a quick look at SciLean, but will look more closely soon for examples I can use for inspiration.
Frederick Pu (Dec 25 2024 at 00:40):
I wonder if these sorts of formalisms share anything in common with verifying information-theoretic properties of neural networks, such as how information is stored differently in transformers versus recurrent neural networks. For example: an RNN can in theory encode information from arbitrarily far back in time, provided it simply ignores all new inputs, whereas a transformer can only have knowledge of events within its context window.
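A rough sketch of how that contrast might be made precise in plain Lean 4 (hypothetical definitions of my own, not from any library):

```lean
/-- An RNN folds its entire input history into a running state. -/
def rnnState {S X : Type} (step : S → X → S) (s₀ : S) : List X → S :=
  List.foldl step s₀

/-- One reading of "context window `k`": the output depends only on
the last `k` inputs of the sequence. -/
def WindowLimited {X Y : Type} (k : ℕ) (f : List X → Y) : Prop :=
  ∀ xs ys : List X, xs.reverse.take k = ys.reverse.take k → f xs = f ys

/-- The degenerate RNN that ignores every new input carries its initial
state forward forever, i.e. it "remembers" arbitrarily old information. -/
theorem constRNN_remembers {S X : Type} (s₀ : S) (xs : List X) :
    rnnState (fun s _ => s) s₀ xs = s₀ := by
  induction xs with
  | nil => rfl
  | cons _ xs ih => exact ih
```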
Britt Anderson (Dec 27 2024 at 11:52):
For me the allure of Lean 4 is the hope that a question like yours could be made sufficiently precise to admit a definitive answer. By giving the RNN and the transformer formal specifications, you could show that they are or are not isomorphic to something of interest; in your case, I guess, that would be information-theoretic constructs. I have a long way to go with my Lean knowledge before I can explore this intuition further, but hope springs eternal. If you are around Southern Ontario (as your profile coordinates suggest), maybe we could meet up to chat some time? I am at the University of Waterloo.
Tyler Josephson ⚛️ (Dec 28 2024 at 03:40):
I’d love to see a neural network implementation alongside a proof of some version of the universal function approximation theorem.
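As a statement-only sketch in Lean 4 with Mathlib (hypothetical names of my own, an intentionally incomplete hypothesis list, proof left as `sorry`):

```lean
import Mathlib

noncomputable section

/-- A one-hidden-layer network with `m` hidden units and activation `σ`:
`x ↦ ∑ᵢ a i * σ (w i * x + b i)`. -/
def mlp (σ : ℝ → ℝ) (m : ℕ) (a w b : Fin m → ℝ) (x : ℝ) : ℝ :=
  ∑ i, a i * σ (w i * x + b i)

/-- Universal approximation on `[0, 1]`, statement only. As written the
hypotheses are too weak: `σ` must in addition be, e.g., sigmoidal
(Cybenko 1989) or non-polynomial (Leshno et al. 1993). -/
theorem universal_approx (σ : ℝ → ℝ) (hσ : Continuous σ)
    (f : ℝ → ℝ) (hf : ContinuousOn f (Set.Icc 0 1))
    (ε : ℝ) (hε : 0 < ε) :
    ∃ (m : ℕ) (a w b : Fin m → ℝ),
      ∀ x ∈ Set.Icc (0 : ℝ) 1, |mlp σ m a w b x - f x| < ε := by
  sorry

end
```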
Last updated: May 02 2025 at 03:31 UTC