## Stream: lean4

### Topic: Assertion failure due to LEAN_MAX_SMALL_OBJECT_SIZE

#### Simon Winwood (Jan 29 2021 at 23:53):

This is more of an unexpected behavior than a bug: we have a large structure, and when it is created we get a runtime assertion failure. Rebuilding Lean with the `-DSMALL_ALLOCATOR=OFF` option works around this.

```
Assertion failed: (sz1 <= LEAN_MAX_SMALL_OBJECT_SIZE), function lean_alloc_ctor_memory, file include/lean/lean.h, line 334.
```
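For context, a minimal sketch of the kind of definition that could hit this limit (the structure name and field count here are illustrative, not the actual code from the report): every `UInt64` field is stored unboxed in the constructor object, so enough of them will push the object past the small allocator's size limit.

```lean
-- Hypothetical example: 72 unboxed UInt64 fields need 576 bytes of scalar
-- storage in a single constructor object, which is above the 512-byte
-- small-object limit mentioned in the assertion.
structure BigScalars where
  a0 a1 a2 a3 a4 a5 a6 a7 : UInt64
  b0 b1 b2 b3 b4 b5 b6 b7 : UInt64
  c0 c1 c2 c3 c4 c5 c6 c7 : UInt64
  d0 d1 d2 d3 d4 d5 d6 d7 : UInt64
  e0 e1 e2 e3 e4 e5 e6 e7 : UInt64
  f0 f1 f2 f3 f4 f5 f6 f7 : UInt64
  g0 g1 g2 g3 g4 g5 g6 g7 : UInt64
  h0 h1 h2 h3 h4 h5 h6 h7 : UInt64
  i0 i1 i2 i3 i4 i5 i6 i7 : UInt64
```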


#### Leonardo de Moura (Jan 30 2021 at 00:01):

The current runtime assumes "constructor objects" occupy less than 512 bytes. We will remove this restriction: we know statically whether the assumption holds, and we can fall back to the regular allocator when it does not.

#### Leonardo de Moura (Jan 30 2021 at 00:02):

So, the fix will not affect the performance of the generated code when the assumption is true.

#### Simon Winwood (Jan 30 2021 at 00:06):

Yeah, I assumed it was something like that

#### Leonardo de Moura (Jan 30 2021 at 02:48):

BTW, the constructor objects in the runtime still have the following limits:

• 256 "pointer" fields
• 1024 bytes for scalar fields

In principle, we could add a new kind of object ("fat constructors") to the runtime. This new kind of object would have a bigger header, and would not have the limitations above. We can decide statically whether we need to use regular or fat constructors. So, there is no performance penalty for people using regular constructors only.
That being said, we do not plan to support "fat constructors" anytime soon. For this kind of constructor to be useful, we would have to improve the performance of many subsystems. For example, the kernel-generated recursors are quite expensive for an inductive datatype containing a constructor with more than 200 fields.

#### Simon Winwood (Feb 01 2021 at 22:21):

Thanks!

For those larger objects, you (or we) could always just nest them, no?

We had a related problem with the seL4 proofs when we first started trying to prove things about the C code --- the record of local variables was huge, and caused all sorts of (verification) performance problems, in particular with the lemmas relating update/update and access/update. (I think) we ended up modelling records as trees rather than lists, which made things a lot faster. Lean probably doesn't have these issues, as records are fundamental, although I don't know how you plan on handling structure simplification.
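The nesting idea can be sketched as follows (all names here are made up for illustration): instead of one flat structure with dozens of unboxed scalar fields, the fields are grouped into sub-structures, so each individual constructor object stays well under the runtime's size limits.

```lean
-- Each sub-structure's constructor holds only 8 unboxed UInt64 fields
-- (64 bytes of scalar storage), comfortably within the limits.
structure RegsLo where
  r0 r1 r2 r3 r4 r5 r6 r7 : UInt64

structure RegsHi where
  r8 r9 r10 r11 r12 r13 r14 r15 : UInt64

-- The outer structure stores the groups as two pointer fields rather
-- than inlining all 16 scalars into a single constructor object.
structure MachineState where
  lo : RegsLo
  hi : RegsHi
```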

Last updated: May 07 2021 at 13:21 UTC