Flean: Floating Point Numbers in Lean
I have been attempting to formalize a theory of floating point numbers in the programming language Lean. Lean is an interactive theorem prover, the newest hotness in that space, it seems. There are still plenty of developments to follow and things to figure out, both personally and in the community as a whole.
Expect this document to change as I make progress or eventually stall out.
Why floating point numbers?
I recently got into an argument with someone about whether you can trust floating point libraries. In a really sensitive application, like aerospace, floating point quirks can cause subtle errors or performance problems. I figured Lean might be able to provide guarantees of correctness that would satisfy the sweatiest of try-hards.
So far, I have learned that very small numbers and very large numbers have drastically different properties, and it's a little bit of a miracle that any computation is accurate.
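As a quick, made-up illustration of that asymmetry (my example, not one from the project): with Lean's built-in `Float`, squaring a tiny number silently underflows to zero, while squaring a huge one overflows to infinity.

```lean
-- Underflow: the exact value 1e-600 is far below the smallest subnormal double,
-- so the product collapses to 0.
#eval (1.0e-300 : Float) * 1.0e-300
-- Overflow: the exact value 1e600 exceeds the largest finite double (~1.8e308),
-- so the product becomes inf.
#eval (1.0e300 : Float) * 1.0e300
```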
Has anyone done this before?
- Yes, in Coq there's Flocq, which shaped my thinking but strays a bit from IEEE and from computability.
- In Mathlib (Lean), there's a very incomplete `FP` which I quickly abandoned in favor of a more direct theory using `Int.log` (see the sketch after this list).
- There's also an interval arithmetic library in Lean, `interval`, which is far more sophisticated, but maybe there's room for a second. It has a few differences from mine:
  - Uses nonstandard precision: 64 exponent bits and 64 mantissa bits, using `Int64`.
  - It's trying to be fast, optimizing even in reductions (via a custom `Nat.log2` implementation).
  - Supports only rounding up and down, and uses two's complement on the mantissa to represent negative numbers.
  - Focused primarily on interval arithmetic and making serious progress on tighter bounds for approximating transcendental functions.
  - Differs from the ordinary IEEE 754 standard: extra bits in the mantissa support both the subnormal and normal numbers, and 0 is unique.
  - Their `rounds` function does not output a number like mine. They instead deal with it like a set: `rounds` with the `up` flag gives you the set of representations above that number. It's less clear how to compute with it.
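To make the `Int.log` approach concrete, here is a minimal sketch of the idea (an illustration, not Flean's actual definitions; `exponentOf` and `mantissaOf` are made-up names): for a positive rational `q`, `Int.log 2 q` is the binary exponent `e` with `2^e ≤ q < 2^(e+1)`, and dividing that power out leaves a mantissa in `[1, 2)`.

```lean
import Mathlib

-- Illustration only: split a positive rational into a binary exponent and a
-- mantissa in [1, 2), using Mathlib's `Int.log`.
def exponentOf (q : ℚ) : ℤ := Int.log 2 q

def mantissaOf (q : ℚ) : ℚ := q / (2 : ℚ) ^ exponentOf q

-- For q = 3/4 we have 2⁻¹ ≤ 3/4 < 2⁰, so the exponent is -1 and the mantissa 3/2.
#eval exponentOf (3/4)
#eval mantissaOf (3/4)
```

Mathlib already provides the bracketing facts about `Int.log` (that `2 ^ Int.log 2 q ≤ q < 2 ^ (Int.log 2 q + 1)` for positive `q`), which is exactly the handle a rounding function needs.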
Goals and Non-Goals
Personally: Learn Lean in a serious but not hard way. I've already learned a lot.
In general, the idea is to (1) model the floating point numbers, (2) characterize their properties via a convenient set of rules, and (3) apply that logic to prove things about Lean's `Float` by declaring an interface axiomatically.
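To make step (3) a little more concrete, here is a purely illustrative sketch (`toRat`, the axiom name, and the error bound are placeholders, not declarations Flean actually makes): an exact embedding of finite floats into ℚ, plus axioms bounding how far each `Float` operation strays from the exact rational result.

```lean
import Mathlib

-- Hypothetical sketch of an axiomatic interface for Lean's `Float`
-- (not Flean's actual declarations).

-- An exact embedding of finite, non-NaN floats into the rationals.
opaque toRat : Float → ℚ

-- Example axiom: `Float` addition agrees with exact rational addition up to a
-- relative error of 2⁻⁵³ (ignoring overflow, underflow, and NaN in this sketch).
axiom add_rel_error (x y : Float) :
    |toRat (x + y) - (toRat x + toRat y)| ≤ (2 : ℚ) ^ (-53 : ℤ) * |toRat x + toRat y|
```

The hope is that, with an interface like this plus the rule set from step (2), useful bounds on `Float` computations can be proved without bit-level reasoning.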
Goals
- Be capable of matching the operations of ordinary IEEE 754 floating point numbers as exactly as possible
- Extensible to different precisions (see the sketch after this list)
- Theoretically capable of supporting lots of bounds on Lean's `Float`
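For 'extensible to different precisions', one concrete reading (hypothetical names, not Flean's actual API) is a small configuration structure that the rest of the theory is stated against, with the IEEE formats as particular instances.

```lean
-- Hypothetical precision configuration (illustrative; not Flean's actual API).
structure FPConfig where
  prec : Nat  -- mantissa bits, including the implicit leading bit
  emin : Int  -- smallest normal exponent
  emax : Int  -- largest normal exponent

-- IEEE 754 binary64 and binary32 as two instances of the same configuration.
def binary64 : FPConfig := ⟨53, -1022, 1023⟩
def binary32 : FPConfig := ⟨24, -126, 127⟩
```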
Non-Goals
- Performance
- Fixed-point arithmetic
- Interval arithmetic
- Using SMT solvers to bitblast our way to happiness (which is almost certainly the most practical approach)
What have I learned so far?
coming soon