Zulip Chat Archive

Stream: Program verification

Topic: Turing machines


Mario Carneiro (May 24 2020 at 11:45):

This is an advertisement for #2792, which constructs a (TM2 model) Turing machine for partial recursive functions, thus proving that every partrec function is TM-computable. (I haven't done anything about the other direction, proving that Turing machines compute partial recursive functions, but I don't think it will be nearly as difficult.)

I'm curious to see where people would like the computability library to go in the future. My eye has been on defining computational complexity classes for a while; I haven't done much with oracle Turing machines, but if there are interesting things to say there then it might be worthwhile. The TM construction in #2792 is careful to be polynomial time, using binary rather than unary representations of numbers, although there is probably around O(n^2) simulation overhead between the most basic TM model and the simulated partial recursive function.
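For orientation, the shape of the theorem being advertised is roughly the following. This is a sketch only: `TM`, `TM.eval`, and `tm_computable` are hypothetical stand-ins, not the actual names in mathlib's computability/ directory.

```lean
import data.pfun

-- hypothetical stand-ins for the TM2 model and its evaluation semantics
constant TM : Type
constant TM.eval : TM → (ℕ →. ℕ)

-- "f is computable by some Turing machine"
def tm_computable (f : ℕ →. ℕ) : Prop := ∃ M : TM, TM.eval M = f

-- the direction #2792 proves (statement only):
-- theorem nat.partrec.to_tm {f : ℕ →. ℕ} (h : nat.partrec f) : tm_computable f
```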

Reid Barton (May 24 2020 at 12:07):

I'd really like to have something like pcodable and ptime for implementing polynomial-time reductions, for cryptography applications.

Mario Carneiro (May 24 2020 at 12:09):

I think pcodable is pretty trivial, since it is only a constraint on the encode/decode composition, which is the identity on most infinite sets of interest
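One hypothetical way to write that constraint down, assuming some polynomial-time predicate `in_ptime` on ℕ → ℕ (which mathlib does not yet have):

```lean
constant in_ptime : (ℕ → ℕ) → Prop  -- hypothetical "runs in polynomial time"

-- A `pcodable` sketch: the only extra data over a plain encoding is that the
-- decode-then-re-encode composite on codes is polynomial time. On types where
-- that composite is literally the identity, this is immediate.
class pcodable (α : Type) :=
(encode : α → ℕ)
(decode : ℕ → option α)
(encodek : ∀ a, decode (encode a) = some a)
(poly : in_ptime (λ n, ((decode n).map encode).get_or_else n))
```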

Reid Barton (May 24 2020 at 12:09):

I guess a benchmark here would be to pick some problem that isn't 3-SAT and reduce 3-SAT to it. Of course it would be good to have Cook's theorem too, but it's not actually as relevant for my purposes.

Mario Carneiro (May 24 2020 at 12:10):

is ptime the complexity class P?

Reid Barton (May 24 2020 at 12:10):

Can we already write down that constraint?

Reid Barton (May 24 2020 at 12:10):

Yeah

Reid Barton (May 24 2020 at 12:11):

By the way, I was wondering whether it would ever make sense, rather than working with a fixed primcodable/pcodable encoding, to instead work with an equivalence class or quotient under "can be converted between by a primrec/polynomial-time function"

Mario Carneiro (May 24 2020 at 12:12):

We have Turing machines, so we can write down P, but without the TM -> partrec construction we couldn't do too much with the notion. Now we should be able to prove that lots of interesting functions are in P
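A sketch of what "writing down P" could look like, given a step-indexed evaluator for some TM model (all names here are hypothetical; mathlib's TM models provide a comparable step function):

```lean
import data.polynomial

constant TM : Type
-- hypothetical: run M for k steps on the input, `some` output if it has halted
constant TM.steps : TM → ℕ → list bool → option (list bool)

-- f ∈ P: some machine computes f within a clock polynomial in the input length
def in_P (f : list bool → list bool) : Prop :=
∃ (M : TM) (p : polynomial ℕ),
  ∀ x, TM.steps M (p.eval x.length) x = some (f x)
```

This is where the "clocks inside proofs" issue mentioned below shows up: every membership proof has to exhibit an explicit polynomial bound.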

Reid Barton (May 24 2020 at 12:12):

Right, that was my next question.

Mario Carneiro (May 24 2020 at 12:14):

That still requires some more work beyond what's currently done, since all the reductions proven thus far are polynomial time but not proven to be so (as I am still not sure how best to express the clocks inside proofs)

Reid Barton (May 24 2020 at 12:14):

e.g. I see we have primrec.nat_div_mod -- how hard is it to make a P version?

Reid Barton (May 24 2020 at 12:15):

I'm guessing that one would be pretty much the hardest of what there is already for primrec

Mario Carneiro (May 24 2020 at 12:15):

once the groundwork is laid it should not be much worse than primrec stuff, because there is a way to express P compositionally

Mario Carneiro (May 24 2020 at 12:16):

I'm not sure I understand your encoding quotient suggestion

Mario Carneiro (May 24 2020 at 12:17):

The base encoding is there as a meter stick for all other encodings

Mario Carneiro (May 24 2020 at 12:19):

Given two types A and B you can't say a priori whether a function A -> B is computable, but if there are canonical maps A <-> nat and B <-> nat, then f : A -> B maps to f' : nat -> nat, and this has an unambiguous meaning

Reid Barton (May 24 2020 at 12:20):

Right, but you don't actually need specific maps A <-> nat and B <-> nat--you can identify two encodings A <-> nat if both composites nat -> nat are computable/primrec/polytime, and the same for B, and still have a notion of computable/primrec/polytime for maps A -> B

Reid Barton (May 24 2020 at 12:21):

Or alternatively, work with a set of encodings A <-> nat, maybe one which contains all the encodings which are computable/...-ly equivalent to any of its members
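The identification in the two messages above could be sketched like this, with `nice` standing in for "primrec" or "polynomial-time" (everything here is hypothetical, not mathlib API):

```lean
constant nice : (ℕ → ℕ) → Prop  -- stand-in for primrec / polynomial-time

structure encoding (α : Type) :=
(encode : α → ℕ)
(decode : ℕ → option α)
(encodek : ∀ a, decode (encode a) = some a)

-- two encodings are identified when each can be translated into the other
-- by a `nice` function on codes
def encoding.equiv {α : Type} (e₁ e₂ : encoding α) : Prop :=
(∃ t, nice t ∧ ∀ a, t (e₁.encode a) = e₂.encode a) ∧
(∃ t, nice t ∧ ∀ a, t (e₂.encode a) = e₁.encode a)
```

Provided `nice` is closed under composition, "f : A -> B is nice" is then well-defined on equivalence classes of encodings, which is the sense in which the quotient makes the irrelevant choices disappear.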

Reid Barton (May 24 2020 at 12:22):

The advantage is supposed to be that one makes a bunch of irrelevant choices when picking an encoding for something like list (nat × bool), but everyone would agree on the equivalence class of the resulting encoding

Reid Barton (May 24 2020 at 12:22):

Or rather, for polytime purposes, nat has two natural structures, the b-ary one for b >= 2 and the unary one

Reid Barton (May 24 2020 at 12:23):

It's sort of like defining a manifold by an atlas, rather than by a fixed embedding in some R^N

Mario Carneiro (May 24 2020 at 12:24):

In the context of lean though, with typeclass inference, it's not clear to me that this is really any different from the canonical bijection approach

Mario Carneiro (May 24 2020 at 12:25):

because you still want a "canonical equivalence class"

Reid Barton (May 24 2020 at 12:25):

Yeah, I'm not sure whether it would have other than conceptual value

Mario Carneiro (May 24 2020 at 12:25):

and it's not like this helps to make the class a subsingleton

Reid Barton (May 24 2020 at 12:26):

You'd still put an equivalence class on each type via the type class system; it would just be actually canonical, in the sense that two people would usually pick the same one, rather than merely chosen, in the sense that it's the one that instance search generates

Mario Carneiro (May 24 2020 at 12:27):

I think we can instead invest in making it easy to prove a function is computable relative to any encoding one wishes to use

Reid Barton (May 24 2020 at 12:27):

right, and I think it's probably easy already actually

Mario Carneiro (May 24 2020 at 12:28):

by proving that all reasonable encodings are P-equivalent to whatever the canonical one is

Mario Carneiro (May 24 2020 at 12:28):

I think some encodings might need to change though; the encoding of lists is something like double exponential right now


Last updated: May 08 2021 at 21:09 UTC