Zulip Chat Archive

Stream: general

Topic: Mechanization Roundtable

Floris van Doorn (Oct 05 2019 at 18:33):

Mechanization of math roundtable discussion starting now: https://www.youtube.com/watch?v=KNWPmhc2sh8 (with Tom Hales)

Koundinya Vajjha (Oct 05 2019 at 18:35):

And also Michael Harris!

Kevin Buzzard (Oct 05 2019 at 19:26):

Only Michael Harris could quote William Burroughs in a discussion about the mechanization of mathematics :D

Kevin Buzzard (Oct 05 2019 at 19:40):

He is now suggesting that if we tell computers what the Langlands Programme is (something which I fully intend to do!) then the computers might tell us that they are not interested in it :-/

Jason Rute (Oct 06 2019 at 13:29):

This was an interesting conversation. I feel that the discussion tended to focus on AI-based automation of mathematics, which I don't think anyone in that room knows much about. I think they are right that for computers to do mathematics well, they need to incorporate human social patterns of reasoning—individual interest, understanding (whatever that means), communication of ideas, question asking, etc. However, I would counter that these are exactly the sorts of things that those who focus on artificial general intelligence are also interested in solving. For examples of preliminary research: DeepMind's StarCraft AI works by training diverse populations of agents with individual preferences for various styles of play. OpenAI just released a study using cooperation and competition among a large population of agents playing hide-and-seek to learn advanced strategies. Facebook AI is experimenting with natural language as a way for agents to communicate and share high-level goals and strategies. I think I agree with Szegedy and others that mathematics provides a powerful testbed for developing these necessary parts of artificial general intelligence.

Jason Rute (Oct 06 2019 at 13:30):

I also found Stephanie Dick's comments to be very thoughtful. She argued that as we mechanize mathematics, it will change the way we do mathematics in unexpected ways. She also cautioned against uncritical enthusiasm for applying machine learning to every problem. While I have a somewhat G.H. Hardy-esque view that focusing on pure mathematics is safer than focusing on most other things, I also have to accept that modern AI tools are very general purpose, and any modern AI tool which can be used well for mathematics can probably be used just as well for some nefarious purpose.

Reid Barton (Oct 06 2019 at 18:26):

For someone worried about superintelligence scenarios, I don't think a focus on mathematics would appear very safe

Tim Daly (Oct 06 2019 at 18:30):

I've spent a fair portion of my life doing things that were, at the time, considered AI (note that anything we figure out how to do is no longer considered AI :-) ), including robotics, computer algebra, Parallel Distributed Processing (aka neural nets), expert systems, and robot-computer cooperative tasks.

There appear (to me) to be two domains of AI. One involves bicycles and the other involves science. Riding a bicycle involves low-level pattern recognition: a task that is hard to explain and is "learned by doing". I used this kind of reasoning to teach a robot to recognize how much torque is needed to distinguish a misthreaded bolt from a properly threaded bolt.

Science involves creating mental models from which one can draw testable conclusions. It is easy to explain and can be "learned by teaching".

Searle's point is that there is an epistemic realm (knowledge, aka science) and an ontological realm (existence, aka the bicycle).

It is my considered opinion that deep learning addresses only one of the two domains of AI: the bicycle, or pattern, domain.

In the context of computational mathematics (proof theory and computer algebra), both aspects are present. So while deep learning may be useful for recognizing patterns in established proofs (the bicycle-style "learn by doing"), I don't think it will prove useful for the science ("learn by teaching") aspect.

Reid Barton (Oct 06 2019 at 18:52):

I would agree with "will not be sufficient" but not "will not prove useful"

Last updated: May 15 2021 at 23:13 UTC