Zulip Chat Archive
Stream: general
Topic: data structures
Jason Rute (Oct 25 2019 at 23:20):
After the discussion in the topic "type classes for data structures", I decided to try coding up some priority queues from Okasaki's Purely Functional Data Structures. There isn't much there, but I thought I'd share it. The README has some discussion on what I found. https://github.com/jasonrute/lean_data_structures
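For reference, one of the simplest priority queues in Okasaki's book is the leftist heap. The following is a minimal sketch in Lean 3 syntax, not taken from the linked repository (all names here are made up); `merge` is marked `meta` to sidestep the termination proof a real library version would carry, since the recursion alternates between its two arguments.

```lean
-- Leftist heap sketch in the style of Okasaki (hypothetical names).
-- Each node caches its rank, the length of its rightmost spine;
-- `mk_node` keeps the child of larger rank on the left.
inductive lheap (α : Type)
| empty {} : lheap
| node     : ℕ → α → lheap → lheap → lheap

namespace lheap
variables {α : Type} [decidable_linear_order α]

def rank : lheap α → ℕ
| empty          := 0
| (node r _ _ _) := r

def mk_node (x : α) (a b : lheap α) : lheap α :=
if rank b ≤ rank a then node (rank b + 1) x a b
                   else node (rank a + 1) x b a

-- `meta` avoids a well-founded recursion argument: each call recurses on
-- the right child of whichever root is smaller.
meta def merge : lheap α → lheap α → lheap α
| empty h := h
| h empty := h
| (node r₁ x a₁ b₁) (node r₂ y a₂ b₂) :=
  if x ≤ y then mk_node x a₁ (merge b₁ (node r₂ y a₂ b₂))
           else mk_node y a₂ (merge (node r₁ x a₁ b₁) b₂)

meta def insert (x : α) (h : lheap α) : lheap α :=
merge (node 1 x empty empty) h

def find_min : lheap α → option α
| empty          := none
| (node _ x _ _) := some x

end lheap
```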
Scott Morrison (Oct 25 2019 at 23:40):
Thanks @Jason! Keeley and I are likely to want to use some priority queues in meta land soon, so it would be lovely to have some of this in mathlib, perhaps even with an API so we're not committed to a particular implementation.
Simon Hudon (Oct 26 2019 at 01:43):
@Scott Morrison What do you mean by an API? Do you mean using type classes for polymorphism?
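(For concreteness, an "API via type classes" could mean something like the following hypothetical sketch; none of these names exist in mathlib. Callers would take a `[priority_queue α Q]` argument instead of fixing a concrete queue type.)

```lean
-- Hypothetical sketch of a priority-queue interface as a type class:
-- `Q` is the implementation type, `α` the element type.
class priority_queue (α : Type) (Q : Type) :=
(empty      : Q)
(insert     : α → Q → Q)
(find_min   : Q → option α)
(delete_min : Q → Q)
```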
Scott Morrison (Oct 26 2019 at 01:43):
Yes.
Simon Hudon (Oct 26 2019 at 01:46):
That's not a very convenient approach. You end up with overly long type classes which are not all that useful. A better approach is just to use the same name for functions and lemmas when reimplementing the data structure.
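(A toy illustration of that convention, with made-up names: each implementation exposes the same function names in its own namespace, so client code commits only to a namespace prefix, and lemmas about the two implementations can likewise share their names.)

```lean
-- Two toy "priority queues" over plain lists exposing identical names.
namespace sorted_list_pq
  def empty {α : Type} : list α := []

  def insert {α : Type} [decidable_linear_order α] (x : α) : list α → list α
  | []       := [x]
  | (y :: l) := if x ≤ y then x :: y :: l else y :: insert l

  def find_min {α : Type} : list α → option α
  | []       := none
  | (x :: _) := some x
end sorted_list_pq

namespace unsorted_list_pq
  def empty {α : Type} : list α := []

  def insert {α : Type} (x : α) (l : list α) : list α := x :: l

  def find_min {α : Type} [decidable_linear_order α] : list α → option α
  | []       := none
  | (x :: l) := some (list.foldl min x l)
end unsorted_list_pq
```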
Scott Morrison (Oct 26 2019 at 02:51):
Okay!
Simon Hudon (Oct 26 2019 at 02:59):
By the way, with the splay tree data structure, we're cooking up an extension to lazy evaluation that uses dependent types and quotient types to allow functions reading the data structure to mutate it without returning a new copy. That should help make functional data structures, especially the ones with amortized complexity, significantly faster.
Jason Rute (Oct 26 2019 at 05:01):
@Simon Hudon That is interesting. Do you know if it is similar to the various lazy heap ideas in Okasaki's book (or thesis)? That book (which I have just skimmed) has a lot to say about laziness, and about where it shines and where it slows things down in practice in functional data structures.
Jason Rute (Oct 26 2019 at 05:04):
For example, it says this about one of its lazy heap implementations:
Hint to Practitioners: Although it now deals gracefully with persistence, this implementation of pairing heaps is relatively slow in practice because of overheads associated with lazy evaluation. It shines, however, under heavily persistent usage, where we reap maximum benefit from memoization. It is also competitive in lazy languages, where all data structures pay the overheads of lazy evaluation regardless of whether they actually gain any benefit.
Mario Carneiro (Oct 26 2019 at 05:05):
I once planned to formalize Okasaki's book, but I hit a roadblock with laziness, because Lean provides no way to implement memoization.
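(To spell out the gap, a sketch with made-up names: in Lean 3 a suspension can only be modelled as a function out of `unit`, so every `force` re-runs the computation. Okasaki's amortized bounds assume that forcing the same suspension a second time is O(1) because its value was memoized the first time.)

```lean
-- A suspension as a bare function: nothing is cached.
def susp (α : Type) : Type := unit → α

def susp.delay {α : Type} (f : unit → α) : susp α := f

def susp.force {α : Type} (s : susp α) : α := s ()

def expensive : susp ℕ := susp.delay (λ _, (list.range 100000).length)

#eval susp.force expensive  -- pays the full cost
#eval susp.force expensive  -- pays the full cost again; nothing was memoized
```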
Simon Hudon (Oct 26 2019 at 05:30):
The idea is to add a mechanism expressive enough to allow the implementation of laziness and memoization. Because we have dependent types, we can go further than memoization. We can, for instance, rebalance trees or shuffle items around just like a Haskell program would evaluate and cache the value of a thunk.
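(Purely as a conceptual illustration of why quotients are relevant here, and not Simon's actual design, with all names made up: if suspensions are quotiented by "computes the same value", then replacing a delayed computation by its result stays in the same equivalence class, so a runtime would be free to perform that replacement in place without any Lean term being able to observe it.)

```lean
-- A raw suspension is either still delayed or already evaluated.
inductive raw_susp (α : Type)
| delayed : (unit → α) → raw_susp
| value   : α → raw_susp

def raw_susp.run {α : Type} : raw_susp α → α
| (raw_susp.delayed f) := f ()
| (raw_susp.value a)   := a

-- Identify each delayed computation with its result ...
inductive memo_rel (α : Type) : raw_susp α → raw_susp α → Prop
| memo : ∀ f : unit → α, memo_rel (raw_susp.delayed f) (raw_susp.value (f ()))

-- ... and take the quotient: caching becomes invisible to clients.
def susp' (α : Type) : Type := quot (memo_rel α)

-- `force` is well defined precisely because replacing `delayed f` by
-- `value (f ())` cannot change the answer, which is the sense in which a
-- runtime could perform that replacement destructively.
def susp'.force {α : Type} : susp' α → α :=
quot.lift raw_susp.run (λ a b h, by cases h; refl)
```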
Last updated: Dec 20 2023 at 11:08 UTC