Zulip Chat Archive
Stream: general
Topic: Is `to_partrec.code` actually efficient?
Praneeth Kolichala (Mar 24 2022 at 07:24):
Is the basis given in to_partrec.code "efficient" -- i.e. does it simulate polynomial-time Turing machines in polynomial time? I see that the reduction is good the other way -- TMs can simulate a step of to_partrec.code in polytime. Presumably, based on the preliminary work in https://github.com/leanprover-community/mathlib/pull/11046, the other way holds as well. However, when actually trying to write some code using to_partrec.code, I'm finding that this is actually quite difficult.
First of all, it is clear that if we treat the numbers as having their size in binary rather than unary, then it does not encode all polytime algorithms. For example, doubling a number $N$ represented in a single cell (i.e. [N]) requires time $\Omega(N)$, since each step allows you to increment the cell at most once. In a traditional representation, the size of the input is $O(\log N)$, however. This can be fixed if we treat the size of each natural number as if it were encoded in unary (even though this isn't how the actual reduction to Turing machines works). Note that currently, the branch at https://github.com/leanprover-community/mathlib/pull/11046/files has a time_bound that doesn't consider the size of the nats at all, only the length of the list, but I understand it's a work in progress still.
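Concretely: producing $2N$ by unit increments takes at least $N$ steps, while the binary input size is $n = \lceil \log_2(N+1) \rceil$, so
$$T \ \ge\ N \ \ge\ 2^{n-1},$$
which is exponential in the binary size even though it is linear in the unary size.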
However, I'm not sure even some simple functions can be written in poly-time: for example, consider finding the init of a list of numbers (all but the last element). While this is obviously possible, because it is shown in that file that all computable functions have a code, I'm not sure it is possible in polynomial time when each nat is bounded, say by $\mathrm{poly}(n)$, where $n$ is the length of the list. In this case, we have essentially no storage capabilities, so any time we call fix on a function, if it is able to "modify" the end, it has already forgotten the beginning.
I have some work that copies a lot of to_partrec.code but uses list bool and different basis functions. I'm wondering if we are eventually planning on replacing tm_to_partrec to better support complexity theory.
Mario Carneiro (Mar 24 2022 at 12:39):
I think you are right that to_partrec.code is not efficient. It was primarily intended to be a reduction of partrec to operations on lists, which are easier to encode in a Turing machine, but the reduction comes from a representation of partial recursive functions that is itself not efficient (since everything is based on unary recursion), and the list support in to_partrec.code is very barebones, and only really supports lists of fixed length in a useful way.
Mario Carneiro (Mar 24 2022 at 12:44):
The connection to partial recursive functions is only intended to preserve computability. For defining P, I think the basis should be more explicitly related to Turing machines, and a variation on to_partrec.code that uses list bool sounds like a fine idea for getting something that is structurally close to partial recursive functions while still being polytime.
Bolton Bailey (Mar 24 2022 at 13:48):
Here was the justification I gave in my head as to why I thought to_partrec.code should be able to simulate a TM with polynomial slowdown. If I encode a TM tape as a list of nats, then I can write a code to take in a list and return the list after one step of the TM. Presumably, that code runs in time linear in the distance of the TM head from the start of the list. Then I can use code.fix to repeat this until the Turing machine halts (I instruct the machine head to walk back to the left when it does this so I can detect it). Since this distance is only at most as long as the number of steps, the total number of steps is at most quadratic in the TM runtime. Perhaps my presumption about being able to write this in linear time is wrong though.
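In symbols: if the TM halts in $T$ steps, its head never gets more than $T$ cells from the start, so each simulated step costs $O(T)$ and the whole simulation costs
$$\sum_{t=1}^{T} O(t) = O(T^2).$$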
Bolton Bailey (Mar 24 2022 at 13:57):
Indeed, in #11046 I haven't been considering the size of the numbers, only the length of the list. I was envisioning encoding data structures in lists where all the numbers were bounded in size, and then immediately terminating in failure if a number too large was detected. I guess the question is, if I'm doing that, does it make sense for the base type of the list to be nats at all; perhaps it should just be an arbitrary type, or a fintype, or bools as Mario suggests, and we should introduce yet another code type which evaluates functions on lists of those.
Mario Carneiro (Mar 24 2022 at 14:36):
If I encode a TM tape as a list of nats, then I can write a code to take in a list and return the list after one step of the TM.
This seems to be the crux of the issue. Suppose that we encode the TM tape using nats of bounded size. (This is a reasonable assumption, since we can't afford to access this data efficiently otherwise.) If we don't use fix, then I believe we can prove that every function we can encode either does not inspect drop n v and passes it along to the output, for some fixed n, or discards this tail entirely. Thus fix's control condition cannot be a function of more than a fixed number of places in the input, which means that it's either a finite state automaton producing output in the inaccessible tail, or it consumes input from the tail and has nowhere to put output. So anything like init isn't going to be expressible.
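One way to state the claimed dichotomy in Lean (only a sketch of the conjecture in this message: fix_free and tail_dichotomy are made-up names, and nothing here is proved):

import computability.tm_to_partrec
open turing.to_partrec

-- Made-up predicate: codes that never use the `fix` combinator.
def fix_free : code → Prop
| (code.cons f g) := fix_free f ∧ fix_free g
| (code.comp f g) := fix_free f ∧ fix_free g
| (code.case f g) := fix_free f ∧ fix_free g
| (code.fix f)    := false
| _               := true

-- The claimed dichotomy: for some fixed `n`, the code either passes the
-- tail `v.drop n` through to the output untouched, or discards it entirely.
def tail_dichotomy (c : code) : Prop :=
∃ n : ℕ,
  (∃ g : list ℕ → list ℕ, ∀ v, c.eval v = pure (g (v.take n) ++ v.drop n)) ∨
  (∃ g : list ℕ → list ℕ, ∀ v, c.eval v = pure (g (v.take n)))

-- Conjecture (unproved): ∀ c, fix_free c → tail_dichotomy c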
Bolton Bailey (Mar 24 2022 at 14:52):
Hmm, is it that the initial segment of the tape must go "on the stack" as it were?
Bolton Bailey (Mar 24 2022 at 15:00):
What is needed is a function that (see the sketch after this list):
- If the initial (four elements, say) segment of the list contains the TM head, computes the new initial segment of the list from a lookup table.
- If the initial segment of the list encodes a tape element, recursively calls itself on the tail, then appends the head back onto the result and returns that.
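In ordinary Lean this is unproblematic structural recursion; the sketch below uses made-up helpers (cell, contains_head, apply_table) standing in for the tape alphabet and the transition lookup table:

-- `cell`, `contains_head` and `apply_table` are hypothetical stand-ins for
-- the tape alphabet and the TM's transition table.
def tm_step {cell : Type} (contains_head : list cell → bool)
  (apply_table : list cell → list cell) : list cell → list cell
| (a :: b :: c :: d :: tl) :=
    if contains_head [a, b, c, d]
    then apply_table [a, b, c, d] ++ tl     -- head in the window: rewrite it
    else a :: tm_step (b :: c :: d :: tl)   -- otherwise recurse on the tail
| l := l

The else branch is exactly what code.fix cannot express: each recursive call has to remember the head a so it can be re-attached afterwards, which is a stack.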
Mario Carneiro (Mar 24 2022 at 15:13):
the issue is that there isn't a "stack" to put things on in the first place. You need to multiplex the input and output on the same tape, and this requires list operations that are apparently not expressible.
Bolton Bailey (Mar 27 2022 at 21:06):
Seems like what might have to be done to salvage this is to make another code type with a more expressive recursion operation. I'm not sure we want that, though, so I might take this opportunity to abandon this approach and either go the TM2 route that I've now started in #12993, or the lambda calculus route that was suggested in the complexity theory thread.
Praneeth Kolichala (Mar 29 2022 at 07:17):
@Bolton Bailey I ported some of your code from your PR to here: https://github.com/prakol16/lean_complexity_theory_polytime_defs but now it uses bitstring = list bool instead of list nat. This does mean we will have to redo the compilation to TMs (but I think that is inevitable).
I think ideally, this code would actually replace tm_to_partrec, since so much of the code is duplicated with only slight differences. There is also the issue that we have lots of different kinds of encodings (encodable, primcodable, encoding, fun_encoding, boolcodable) which we might want to somehow unify.
Mario Carneiro (Mar 29 2022 at 09:04):
I don't think there is a need for boolcodable to be a separate type from primcodable; you can encode to nats with a binary translation to list bool
Mario Carneiro (Mar 29 2022 at 09:10):
In particular, if primcodable and primrec need to change then that will go against the literature definition
Mario Carneiro (Mar 29 2022 at 09:11):
keeping in mind that primcodable and primrec are not making any attempt to characterize polytime computability
Mario Carneiro (Mar 29 2022 at 09:11):
but polytime computable functions should be primitive recursive
Praneeth Kolichala (Mar 29 2022 at 16:31):
@Mario Carneiro The only issue is that there are already definitions for many items being primcodable, but they are not consistent with polytime encodings. For example, I believe that the encoding of a list takes exponential time in the length of the list. Another thing we could do is rename boolcodable to fastcodable (note that the encodings should be "good" in general: not just polytime, but ideally also logspace to retrieve a particular item in a list, etc.) but still make it N -> N instead of list bool -> list bool. Note that this is slightly harder because most of these encodings are more natural as list bool than as nats, but as you say, there is an equivalence between them.
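(Concretely: mathlib defines nat.mkpair a b as if a < b then b * b + a else a * a + a + b, so $|\mathrm{mkpair}\ a\ b| \approx 2\max(|a|,|b|)$ in binary, and since [a, b, c, ...] is encoded by nesting pairs, the size at least doubles with each element, giving size, and hence decoding time, $2^{\Omega(n)}$ for an $n$-element list.)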
I think if we want good unification with the existing primrec stuff, what we should do is have a separate class (fastcodable?, which replaces boolcodable) which doesn't use the default encodable encoding (because this typically won't work if we want something with low overhead). Then, we have primcodable extend fastcodable (or maybe be a mixin?), and use fastcodable's pairing function, for consistency with the code that will be polynomial time. I assume that primrec does not rely on any arithmetic properties of the pairing function, only that it encodes pairs somehow and can decode them, right?
Mario Carneiro (Mar 29 2022 at 22:18):
I believe the pairing function is polytime when interpreted on list bool, not quite linear time since it involves arithmetic. The only encoding that isn't fastcodable is the one for lists
Mario Carneiro (Mar 29 2022 at 22:22):
I think fastcodable should extend primcodable and require that the function in question is polytime computable (which means some of the slow encodings have to be replaced). Logspace is a lot harder and needs a computational model other than TMs to be useful, so I would stay away from it for the present.
Mario Carneiro (Mar 29 2022 at 22:26):
I believe that the only place that directly interacts with the list encoding is in the proofs that list.cons is primrec and that list.foldr is primrec; everything else is proven from those
Praneeth Kolichala (Mar 30 2022 at 00:28):
While it's true that the pairing function nat.mkpair is polytime, it's very convenient to have the list representation be built off of the pairing function, so that [a, b, c, ...] is represented as (a, (b, (c, ...))) (and mkpair doesn't work for this). The main issue is that I want to avoid having duplicate instances of [primcodable (list α)], because that might make things confusing. So if we want to keep the polytime and primcodable encodings consistent, do we want to break (a) primcodable using nat.mkpair for pairs, or (b) [a, b, c, ...] being encoded as (a, (b, (c, ...)))? To me, (a) seems like the easier sacrifice, but I don't know how much effort it would take to convert this.
My original idea for fastcodable was to actually not put any formal requirements on it at all, just have them be reasonably good encodings. Then, we add prop mix-ins like [primcodable α], which is true of a type α with [fastcodable α] when encode(decode(x)) is primitive recursive. Similarly, [polycodable α] when encode(decode(x)) is poly-time. Both proposals (mix-ins, extending) are similar, I think.
Mario Carneiro (Mar 30 2022 at 00:49):
@Praneeth Kolichala
While it's true that the pairing function nat.mkpair is polytime, it's very convenient to have the list representation be built off of the pairing function so that [a, b, c, ...] is represented as (a, (b, (c, ...))), (and mkpair doesn't work for this).

The relevant property you need for an efficient pairing function is that pairing $a$ and $b$ results in a number of size $O(|a| + |b|)$. mkpair only satisfies this in the case where the sizes of $a$ and $b$ are approximately equal; it has unacceptable waste when one is much larger than the other, which leads to bad behavior in the nested case. Your implementation achieves this, although it is somewhat heavy on the left side (it is $2|a| + |b|$). Not sure how much it is worth optimizing this though.
Mario Carneiro (Mar 30 2022 at 00:50):
The main issue is I want to avoid having duplicate instances of [primcodable (list α)], because that might make things confusing.
I think we're agreed on that. If we do change out the encoding (and I think there is good reason to do so), it should replace the original one and primrec will have to be modified to prove that the constructors and recursors are also primrec in the new encoding.
Mario Carneiro (Mar 30 2022 at 00:53):
My original idea for fastcodable was to actually not put any formal requirements at all on it, just have them be reasonably good encodings.
It's fine to just have reasonably good encodings (indeed, that was my aim with primcodable as well), but you will eventually need actual runtime bounds on the constructors and destructors, like polytime. The only aspect of this that is formally part of the primcodable structure is that encode ∘ decode needs to be primrec; you can't express anything else about the two functions generically, since they essentially become primrec by definition.
Mario Carneiro (Mar 30 2022 at 00:55):
if you aren't putting any constraints at all on fastcodable, then that's just the encodable typeclass.
Praneeth Kolichala (Mar 30 2022 at 01:00):
Mario Carneiro said:
if you aren't putting any constraints at all on fastcodable, then that's just the encodable typeclass.
Yeah, what I was thinking was something like this (untested):

class polycodable (α : Type*) [fastcodable α] : Prop :=
(encode_decode : polytime (λ n, encode (decode α n)))

which would be used like:

theorem polytime_comp {α β γ : Type*} [fastcodable α] [fastcodable β] [fastcodable γ]
  [polycodable α] [polycodable β] [polycodable γ]
  (f : α → β) (g : β → γ) (hf : polytime f) (hg : polytime g) : polytime (g ∘ f)
The only reason being that if polycodable extends primcodable, then there are redundantly two requirements (encode_decode being primitive recursive and also polynomial time). Of course this can be fixed with a lemma showing that you only need one. Otherwise they are pretty similar I think.
Then, the only differences between fastcodable and encodable would be how the encodings are implemented.
Mario Carneiro (Mar 30 2022 at 01:02):
The only reason being that if polycodable extends primcodable, then there are redundantly two requirements (encode_decode being primitive recursive and also polynomial time). Of course this can be fixed with a lemma showing that you only need one. Otherwise they are pretty similar I think.
That's fine, there are plenty of such redundant requirements in the algebraic hierarchy. You can use smart constructors to work around it.
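For instance (a hypothetical sketch: it assumes polycodable extends primcodable with a poly field, plus a lemma polytime.to_primrec saying polytime functions are primrec; none of these names exist yet):

-- Smart constructor: supply only the polytime bound; the redundant primrec
-- field is derived from it by the assumed lemma.
def polycodable.mk' (α : Type*) [e : encodable α]
  (h : polytime (λ n, encode (decode α n))) : polycodable α :=
{ prim := h.to_primrec,  -- assumed lemma: polytime → primrec
  poly := h,
  ..e }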
Mario Carneiro (Mar 30 2022 at 01:02):
I don't think these need to be mixins since they form a strict inclusion hierarchy
Mario Carneiro (Mar 30 2022 at 01:03):
Then, the only differences between fastcodable and encodable would be how the encodings are implemented.
That's exactly what I want to avoid, this would yield multiple implementations of encodable
Praneeth Kolichala (Mar 30 2022 at 01:03):
Mario Carneiro said:
I think we're agreed on that. If we do change out the encoding (and I think there is good reason to do so), it should replace the original one and primrec will have to be modified to prove that the constructors and recursors are also primrec in the new encoding.
So should we replace just the list encoding, or both the list and tuple encodings (in order to preserve the property that [a, b, c, ...] = (a, (b, (c, ...))))?
Mario Carneiro (Mar 30 2022 at 01:06):
It is possible to replace just the list encoding with something more balanced, but I don't think that fixes the root cause. If you use a tuple encoding that adapts to the sizes of numbers then you can use it in lists and also in trees or other kinds of tupling (which is important for supporting general inductive types), and the list encoding will not have to change
Mario Carneiro (Mar 30 2022 at 01:08):
In fact, I think your tuple encoding isn't good enough, because (a, ()) has size 2|a| + O(1), so pairing with () n times, as in (((a, ()), ()), ...), has size 2^n |a|
Mario Carneiro (Mar 30 2022 at 01:08):
so I think we do need the more efficient encoding of the length after all
Praneeth Kolichala (Mar 30 2022 at 01:09):
Yeah it's not good enough for trees, I was planning on encoding trees as lists of tuples with the tuples holding the data as well as "pointers" to the children
Mario Carneiro (Mar 30 2022 at 01:09):
that's not compositional enough, you would have to analyze the whole tree at once to do that
Mario Carneiro (Mar 30 2022 at 01:11):
there is an easy tweak: you can encode log |a| in unary and then |a| in binary, and then a and b
Mario Carneiro (Mar 30 2022 at 01:11):
this gets size 2 log |a| + |a| + |b|
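On list bool, one untested way to realize this (pair' is a made-up name; nat.bits is the little-endian binary representation):

-- A unary header giving the length of the binary length field, a stop bit,
-- then |a| in binary, then a and b; about 2 log |a| + |a| + |b| bits total.
def pair' (a b : list bool) : list bool :=
let len : list bool := nat.bits a.length in    -- |a| in binary
list.repeat tt len.length ++ [ff] ++ len ++ a ++ b

To decode, read 1-bits until the stop bit to learn how long the length field is, read the length field to learn |a|, then split off a; the remainder is b.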
Praneeth Kolichala (Mar 30 2022 at 01:13):
Yes, this is the natural next step, but the more complicated the pairing function becomes, the harder it is to implement on a TM (currently, my code uses the pairing function as a black box, so it would have to be compiled manually to a TM)
Mario Carneiro (Mar 30 2022 at 01:15):
I don't see why this function would get involved in tm_to_partrec?
Praneeth Kolichala (Mar 30 2022 at 01:15):
Mario Carneiro said:
that's not compositional enough, you would have to analyze the whole tree at once to do that
What do you mean by this? It's still a poly-time construction, and the space usage is good (constant-factor overhead).
Mario Carneiro (Mar 30 2022 at 01:16):
you are referencing pointers so you need to have a context to translate the tree, it can't just be a function tree -> nat
Mario Carneiro (Mar 30 2022 at 01:16):
and it gets even harder to do things like that if inductive types are nested in each other
Praneeth Kolichala (Mar 30 2022 at 01:17):
Mario Carneiro said:
I don't see why this function would get involved in tm_to_partrec?

Eventually, we'd want to show something like "code is polytime" --> "the TM computes it in polytime," right? (e.g. at least for reductions which are easier from TMs, like Cook-Levin, although this might not be that bad from compositional basis functions either, I'm not sure)
Mario Carneiro (Mar 30 2022 at 01:17):
so to the extent possible if we can translate trees to natural numbers without any contextual information it would be preferable
Mario Carneiro (Mar 30 2022 at 01:17):
a simple pair : nat -> nat -> nat function applied everywhere achieves that
Mario Carneiro (Mar 30 2022 at 01:18):
what does "code is polytime" mean there?
Mario Carneiro (Mar 30 2022 at 01:19):
do you have a cost model on code to link to?
Praneeth Kolichala (Mar 30 2022 at 01:19):
Mario Carneiro said:
you are referencing pointers so you need to have a context to translate the tree, it can't just be a function tree -> nat

What do you mean, context? The "pointers" were metaphorical; I just mean indices in the list. So for example, the tree ((A, B), C) would translate to [(root, 1, 2), (child 1, 3, 4), (C, none, none), (A, none, none), (B, none, none)]. The format is [(data of node, index of left child, index of right child)].
Mario Carneiro (Mar 30 2022 at 01:20):
That's what I was talking about. You can't interpret (root, 1, 2) without the "context" [(root, 1, 2), (child 1, 3, 4), (C, none, none), (A, none, none), (B, none, none)]
Mario Carneiro (Mar 30 2022 at 01:20):
I want an encoding that does not require a context
Mario Carneiro (Mar 30 2022 at 01:21):
which means that f(((A, B), C)) is a function only of f((A, B)) and f(C)
Mario Carneiro (Mar 30 2022 at 01:23):
As inspiration, I always consider the case of the representations I have been using here: lists of characters where you put ( and ) and , between elements
Mario Carneiro (Mar 30 2022 at 01:23):
this is a great encoding, it does not suffer from any blowup problems and it's fully compositional
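For instance (a sketch with a made-up four-symbol alphabet, using mathlib's tree from data.tree):

import data.tree

-- Made-up alphabet: parentheses, comma, and data bits.
inductive symb | lp | rp | comma | bit (b : bool)

-- Fully compositional: the code of a node is built only from the codes of
-- its children; no surrounding context is needed.
def encode_tree : tree (list bool) → list symb
| tree.nil          := []
| (tree.node a l r) :=
    [symb.lp] ++ a.map symb.bit ++ [symb.comma]
      ++ encode_tree l ++ [symb.comma] ++ encode_tree r ++ [symb.rp]

Each node contributes only |a| + O(1) symbols, so there is no blowup, and f (tree.node a l r) depends only on f l and f r.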
Praneeth Kolichala (Mar 30 2022 at 01:26):
Mario Carneiro said:
what does "code is polytime" mean there?

It means the function represented by the code, as a function from N to N, runs in polynomial time. The cost model is still a WIP, but there is some preliminary code here: https://github.com/prakol16/lean_complexity_theory_polytime_defs/blob/main/src/time_bound.lean. Note that the cost for pair should add something (e.g. the length of c₁.eval v) which is not currently there, in order to get nice properties, and the cost of an oracle could be changed to the length of the output, or possibly a custom cost function for oracles.
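For instance, the adjusted clause might read (my rendering of this suggestion, writing cons for the pairing constructor):
$$\mathrm{time}(\mathrm{cons}\ c_1\ c_2,\ v) = \mathrm{time}(c_1, v) + \mathrm{time}(c_2, v) + |c_1.\mathrm{eval}\ v| + 1,$$
so that building an output of size n is charged at least n steps.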
Mario Carneiro (Mar 30 2022 at 01:29):
Note that the cost for pair should add something (e.g. the length of c₁.eval v) which is not currently there to get nice properties
I'm not sure about that. I would hope that it is a lemma that you can't produce a value of size n without taking time n
Mario Carneiro (Mar 30 2022 at 01:30):
the pairing function should be primitive in code, so references to the cost of a nat pairing function seem inappropriate
Praneeth Kolichala (Mar 30 2022 at 01:30):
Mario Carneiro said:
Note that the cost for pair should add something (e.g. the length of c₁.eval v) which is not currently there to get nice properties

I'm not sure about that. I would hope that it is a lemma that you can't produce a value of size n without taking time n
Exactly, I meant that the function should be modified to get that nice property.
Praneeth Kolichala (Mar 30 2022 at 01:31):
Mario Carneiro said:
the pairing function should be primitive in code, so references to the cost of a nat pairing function seem inappropriate
The function itself is primitive, but the size of the output compared to the two inputs matters and needs to be accounted for.
Mario Carneiro (Mar 30 2022 at 01:32):
the size of the output should be the sum of the two within a log factor
Mario Carneiro (Mar 30 2022 at 01:32):
so you will get a log overhead which seems fine
Mario Carneiro (Mar 30 2022 at 01:33):
TMs already have much worse overhead just shuffling around the tape
Mario Carneiro (Mar 30 2022 at 01:34):
so you shouldn't aim for better than polytime
Last updated: Dec 20 2023 at 11:08 UTC