Zulip Chat Archive

Stream: new members

Topic: Neural Network Interface?


Caleb Kisby (Oct 25 2022 at 06:13):

Hi all,

I'm an AI PhD student, new to Lean (I have some experience with Agda, and I've played around a bit with the Lean tutorials). So far I'm really enjoying Lean, thinking of sticking with it long-term. (Quotient and topological structures in Lean are especially nice! I've had a lot of trouble dealing with those in Agda in the past.)

I've read a lot of discussion on here about using neural networks to assist theorem proving in Lean. But I'm working the other way around: Use formal verification (say in Lean) to check that a neural network has nice properties.

My question is: Is there an already implemented / canonical way to interface neural networks with Lean? Ideally I'd like to be able to have a neural network written in (say) C++ with Tensorflow, and then do the verification via Lean's FFI. But I'm aware that Lean's FFI only handles primitive types, so this probably wouldn't work. Alternatively, I could implement a neural network library in Lean, but I'd really rather not reinvent the wheel.
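
For concreteness, the FFI route I have in mind would be something like the following (just a sketch: `nn_predict` is a made-up C symbol that would wrap the TensorFlow model, and only a primitive `Float` crosses the boundary):

```lean
-- Hypothetical FFI binding: `nn_predict` would be implemented in C
-- (wrapping the TensorFlow model). Only primitive types such as
-- Float cross the boundary without extra marshalling on the C side.
@[extern "nn_predict"]
opaque nnPredict (input : Float) : Float
```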

Thanks for your time!
Caleb

Paul Lezeau (Oct 25 2022 at 06:39):

Hi Caleb! I'm not aware of any work having been done in that direction, but that's definitely something that would be very cool (I'd personally be very interested in that)

Paul Lezeau (Oct 25 2022 at 06:41):

In terms of the mathematics needed, Lean definitely has everything you would need to define fully connected neural networks
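
For instance, here's a rough, untested sketch of what a fully connected network might look like in Lean 4, using plain `Array Float`s rather than anything from mathlib (all names here are made up for illustration):

```lean
-- A fully connected layer: one weight row and one bias per output neuron.
structure Layer where
  weights : Array (Array Float)
  biases  : Array Float

def dot (xs ys : Array Float) : Float :=
  (xs.zip ys).foldl (fun acc (x, y) => acc + x * y) 0.0

def relu (x : Float) : Float :=
  if x < 0.0 then 0.0 else x

/-- Apply one layer: an affine map followed by a pointwise ReLU. -/
def Layer.apply (l : Layer) (input : Array Float) : Array Float :=
  (l.weights.zip l.biases).map fun (row, b) => relu (dot row input + b)

/-- A network is just a list of layers, applied in order. -/
abbrev Network := List Layer

def Network.run (net : Network) (input : Array Float) : Array Float :=
  net.foldl (fun x l => l.apply x) input
```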

Bolton Bailey (Oct 25 2022 at 06:42):

Welcome @Caleb Kisby . There are a few projects which use NNs in conjunction with Lean. Off the top of my head, we have #lean-gptf , which is a stream you might want to subscribe to. There is also Azerbayev and Ayers' Lean Chat, see this blog post.

Bolton Bailey (Oct 25 2022 at 06:43):

You might also be interested in the #Machine Learning for Theorem Proving stream

Paul Lezeau (Oct 25 2022 at 06:45):

@Caleb Kisby what sort of mathematical properties of neural networks would you be studying?

Caleb Kisby (Oct 25 2022 at 06:47):

@Paul Lezeau Well, sure, I could implement it myself. I suppose I'm hoping there's a nice library or FFI way to avoid me writing it from scratch (plus, this would make it easier for people to plug their own net into my system!).

Thanks for the welcome and suggestions @Bolton Bailey! I'll check out the Lean Chat github to see how they interface the net.

Paul Lezeau (Oct 25 2022 at 06:50):

@Caleb Kisby That's fair enough! But I'm sure that's something you could get other people interested in working on with you, were you to end up doing it ;)

Caleb Kisby (Oct 25 2022 at 06:57):

@Paul Lezeau Whoops, I missed your very first comment (I'm new to Zulip)! I didn't mean to be rude :-)

At the moment I have a humble modal logic with modalities T ('typically', interpreted as the neural network's activation pattern on an input) and [P+] (interpreted as a Hebbian update operator on the net). The idea is that axioms stated in this language express properties about what the net finds typical after unsupervised (Hebbian) learning, e.g.
[P+]TQ -> T[P+]Q
says (roughly) "if after learning P, the neural network predicts Q, then the neural network would have expected Q in the first place"
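
In Lean I'd expect the syntax of the language to be a small inductive type, roughly like this (the constructor names `typ` and `hebb` are just my placeholders for T and [P+]):

```lean
-- Sketch of the logic's syntax: `typ` stands for T, `hebb` for [P+].
inductive Formula where
  | atom : String → Formula              -- propositional atoms
  | neg  : Formula → Formula
  | and  : Formula → Formula → Formula
  | typ  : Formula → Formula             -- T φ: "typically φ"
  | hebb : Formula → Formula → Formula   -- [P+] φ: φ after Hebbian update on P
```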

Caleb Kisby (Oct 25 2022 at 06:59):

The language is basic, and I'm still figuring out what things it can express. But I do have soundness results, and I'm starting by implementing these (hopefully in Lean). Plus, since it's a modal logic it's at least decidable.
(Shameless plug for my paper: https://journals.flvc.org/FLAIRS/article/download/130735/133901)
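
Stating soundness itself should be short once the pieces exist; roughly (with `Provable` and `Valid` as placeholders for the proof system and the neural-network semantics):

```lean
-- Placeholders for the proof system and the intended semantics.
def Provable : Formula → Prop := fun _ => sorry
def Valid : Formula → Prop := fun _ => sorry

-- Soundness: every provable formula is valid.
theorem soundness (φ : Formula) : Provable φ → Valid φ := by
  sorry
```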

Caleb Kisby (Oct 25 2022 at 07:07):

So technically I'd like to prove in Lean properties of the logic, but the logic is defined using functions from the neural network. I might just bite the bullet and whip up a quick ANN in Lean... :sweat_smile:
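
Roughly, the dependence looks like this: the semantics of `Formula` is parameterized by the net. Here `Net`, `propagate`, `hebbUpdate`, and `val` are stand-ins for whatever the ANN implementation ends up providing, and `Set` comes from mathlib:

```lean
import Mathlib.Data.Set.Basic

variable {Net Neuron : Type}
variable (propagate : Net → Set Neuron → Set Neuron)  -- forward propagation
variable (hebbUpdate : Net → Set Neuron → Net)        -- Hebbian update on a set
variable (val : Net → String → Set Neuron)            -- valuation of atoms

-- Interpret a formula as a set of neurons, relative to a network.
def eval (net : Net) : Formula → Set Neuron
  | .atom a   => val net a
  | .neg φ    => (eval net φ)ᶜ
  | .and φ ψ  => eval net φ ∩ eval net ψ
  | .typ φ    => propagate net (eval net φ)
  | .hebb p φ => eval (hebbUpdate net (eval net p)) φ
```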

Paul Lezeau (Oct 25 2022 at 07:12):

Caleb Kisby said:

Paul Lezeau Whoops, I missed your very first comment (I'm new to Zulip)! I didn't mean to be rude :-)

At the moment I have a humble modal logic with modalities T ('typically', interpreted as the neural network's activation pattern on an input) and [P+] (interpreted as a Hebbian update operator on the net). The idea is that axioms stated in this language express properties about what the net finds typical after unsupervised (Hebbian) learning, e.g.
[P+]TQ -> T[P+]Q
says (roughly) "if after learning P, the neural network predicts Q, then the neural network would have expected Q in the first place"

No worries at all ;)

Paul Lezeau (Oct 25 2022 at 07:13):

I had no idea this kind of stuff was being researched in AI, that's pretty cool!

Paul Lezeau (Oct 25 2022 at 07:15):

Caleb Kisby said:

So technically I'd like to prove in Lean properties of the logic, but the logic is defined using functions from the neural network. I might just bite the bullet and whip up a quick ANN in Lean... :sweat_smile:

I see! Well, that's definitely something I'm interested in, so feel free to ping me if you ever want someone to have a look at your code ;)

Johan Commelin (Oct 25 2022 at 07:16):

@Caleb Kisby You might be interested in https://github.com/dselsam/certigrad

Caleb Kisby (Oct 25 2022 at 07:17):

Right!? At the moment, there are only a handful of people bridging connectionism and logic in this specific way -- I'm sort of resuscitating some forgotten work from the 90's, since it's helpful to the neuro-symbolic crowd. Thanks for the support, I'll keep you updated!

Caleb Kisby (Oct 25 2022 at 07:23):

Johan Commelin said:

Caleb Kisby You might be interested in https://github.com/dselsam/certigrad

Whoah, nice! And they have Lean code for training a neural network! This might be exactly what I was hoping for :smile_cat:

Tomas Skrivan (Oct 25 2022 at 07:37):

In the recent thread, @Martin (egLanghaus) expressed interest in a somewhat similar thing, i.e. reasoning about neural networks.

Martin (egLanghaus) (Oct 25 2022 at 08:10):

Currently reading the ONNX and ONNX-ML syntax and semantics here.

The goal is to read actual architectures into Lean and prove input/output relations, with an eye toward XAI (explainability), and to check whether (as has been claimed) DNNs are in fact so complex as to make interactive proof assistants useless for verifying them.
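
For a flavour of what an input/output relation could look like over the toy `Network` sketched earlier in this thread (the statement and names are made up, and the proof is left as `sorry`):

```lean
-- Toy input/output relation: with ReLU layers, every coordinate of a
-- (nonempty) network's output is nonnegative. Statement only.
theorem run_nonneg (net : Network) (x : Array Float) (hne : net ≠ []) :
    ∀ j : Nat, 0.0 ≤ (net.run x)[j]! := by
  sorry
```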


Last updated: Dec 20 2023 at 11:08 UTC