Zulip Chat Archive

Stream: lean4

Topic: LTest: A fixture-based test framework for Lean


Alexander Fasching (May 06 2023 at 02:20):

Hello everyone,

I've experimented with several iterations of a test framework similar to pytest and rstest.
The current iteration is LTest, a fixture-based test framework for Lean 4. It is meant to aid testing of IO-focused programs and libraries whose dependencies require significant setup and teardown effort.

It's available here: https://github.com/alexf91/LTest/

It's all quite experimental, and it's my first deeper journey into Lean and Lean metaprogramming beyond a few tutorials and manuals, but I think it's usable. At this point, it supports fixtures and testcases, both implemented as macros.

A fixture has a state of type σ, a value of type α, and three fields:

  • default : σ, the default value of the state
  • setup : ... → StateT σ IO α, which returns the value passed to the fixtures and testcases that require it
  • teardown : StateT σ IO Unit, for cleanup

Only the setup function is required, unless σ has no Inhabited instance; in that case default has to be set as well.
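
Conceptually, a fixture with its dependencies already applied can be pictured as a structure like the following. This is only a sketch of the idea (the name and representation are made up, not LTest's internals):

-- illustrative only; LTest's actual representation may differ
structure FixtureSketch (σ α : Type) where
  default  : σ
  setup    : StateT σ IO α
  teardown : StateT σ IO Unit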

A fixture for the value 0 and without a state looks like this:

fixture Zero Unit Nat where
  default  := ()
  setup    := do return 0
  teardown := return
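
To illustrate why the state and teardown are useful together, here is a sketch of a fixture that creates a temporary file and removes it again. The path and behavior are made up, and I'm assuming the macro accepts arbitrary do-blocks in setup and teardown:

fixture TempFile System.FilePath System.FilePath where
  default := System.FilePath.mk ""
  setup := do
    -- create the file and remember its path in the fixture state
    let path := System.FilePath.mk "/tmp/ltest-example.txt"
    IO.FS.writeFile path ""
    set path
    return path
  teardown := do
    -- remove the file recorded during setup
    let path ← get
    IO.FS.removeFile path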

Fixtures can also depend on other fixtures by specifying them as a requirement.

fixture One Unit Nat requires (n : Zero) where
  setup := do return n + 1

Finally, we can define a testcase.
A testcase always has the type IO Unit, but can also depend on fixtures.
Just like for fixtures, requirements are optional.

testcase testNothing := do return

testcase testOne requires (n : One) := do
  assertEqual n 1
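
Assuming several requirements can be listed as binders (I'm extrapolating from the syntax above; the scopes note below suggests a fixture can appear in more than one requirements field), a testcase can also combine fixtures:

testcase testZeroAndOne requires (z : Zero) (o : One) := do
  assertEqual (z + o) 1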

Eventually, we want to run the testcases. They are compiled into a program whose main function
is created with #LTestMain. Testcases are registered via an extension when they are declared.

#LTestMain
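
The file containing #LTestMain can then be wired up as a Lake executable, for example like this (a sketch; the package, target, and module names are assumptions, not taken from the repo):

import Lake
open Lake DSL

package mypkg

-- assumes the testcases and #LTestMain live in Tests.lean
@[default_target]
lean_exe tests where
  root := `Tests

With that, the suite can be built and run with lake exe tests.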

Teardowns are executed in reverse order of the corresponding setup calls.
During execution, stdout and stderr are captured for setups, teardowns, and the testcase itself;
the captured output is reported when a test suite is run and something fails. There are a few examples in the examples directory.

The framework is still missing a lot of things, and some of them (parameters...) might require a redesign:

  • Parametrized fixtures and testcases
  • Running testcases in interactive mode or during compilation
  • Reporting where an assertion failed
  • Scopes: Currently everything is scoped as locally as possible (every fixture instance has its own state).
    pytest, in contrast, would call the setup function of a fixture shared by several dependencies only once per testcase,
    not every time it appears in a requirements field (basically a topological sort with some constraints for scopes).
    This can be mitigated with IO.Ref to some extent, as sketched below.
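
For example, a shared IO.Ref gives fixtures a way to observe common state across instances, even though each instance has its own σ. A sketch, with all names made up:

-- a global counter, incremented once per setup call
initialize setupCalls : IO.Ref Nat ← IO.mkRef 0

fixture Counted Unit Nat where
  setup := do
    let n ← setupCalls.get
    setupCalls.set (n + 1)
    return n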

The implementation itself is another story, especially the macros, but it's all a work in progress.
I'm open to suggestions. This is not used anywhere yet, so everything can break.


Last updated: Dec 20 2023 at 11:08 UTC