Zulip Chat Archive

Stream: general

Topic: Trust "down to the metal"


Tim Daly (Jan 03 2020 at 07:54):

While building a trusted computer algebra system, the SANE version of Axiom, I've been looking at questions of trust at all levels.

Andrej Bauer has been working on a smaller kernel (a nucleus) that separates the trust from the logic. The rules of the logic can be specified as needed but are checked by the nucleus code.

I've been studying Field Programmable Gate Arrays (FPGAs), which let you create your own hardware in a C-like hardware description language (Verilog). They allow you to check the chip you build all the way down to the transistor states. You can create things as complex as a whole CPU or as simple as a trusted nucleus (YouTube: Building a CPU on an FPGA). ACL2 has a history of verifying hardware logic.

It appears that, assuming I can understand Bauer's Andromeda system, it would be possible and not all that hard to implement a trusted kernel on an FPGA with the size and form factor of a USB stick.

Trust "down to the metal"

Johan Commelin (Jan 03 2020 at 08:26):

@Tim Daly You should also read Mario's Metamath Zero paper. One of his goals/applications is trust at all levels.

Johan Commelin (Jan 03 2020 at 08:27):

You can see here https://github.com/digama0/mm0/tree/master/examples that he has been formalising x86 and such

Tim Daly (Jan 03 2020 at 09:41):

Yes, I've seen that repo.

The problem with the x86 is the instruction semantics. I wrote a program that generates the x86 instruction semantics (as conditional-concurrent assignments) which we used for reverse-engineering malware (http://daly.axiom-developer.org/TimothyDaly_files/publications/sei/intel/intel.pdf). The x86 is a complex beast. Writing the semantics took 3 years.
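To give the flavor of what I mean by conditional-concurrent assignments (the instruction, the state slice, and the names below are purely illustrative, not taken from that report): every location written by an instruction gets an expression evaluated in the pre-state, and all the assignments commit simultaneously.

```ocaml
(* Illustrative sketch only: a tiny slice of machine state and one
   instruction modeled as concurrent assignments. Real x86 semantics
   is vastly larger. *)

type location = EAX | EBX | ZF | CF

type expr =
  | Loc of location            (* read a location in the pre-state *)
  | Const of int32
  | Add of expr * expr
  | EqZero of expr             (* 1 if the operand is zero, else 0 *)

(* One concurrent assignment: all right-hand sides are evaluated in the
   old state; guards handle conditional instructions like CMOVcc. *)
type assignment = { guard : expr option; target : location; value : expr }

(* ADD EAX, EBX as a set of concurrent assignments: result and flags
   are all computed from the pre-state and committed together. *)
let add_eax_ebx : assignment list =
  let sum = Add (Loc EAX, Loc EBX) in
  [ { guard = None; target = EAX; value = sum };
    { guard = None; target = ZF;  value = EqZero sum };
    { guard = None; target = CF;  value = Const 0l } (* real carry logic omitted *) ]
```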

On the other hand, an FPGA is small, has gate-level timing, and is an ideal platform for nucleus- and kernel-level semantics.
You can do all the dancing you want at any level and then "run it through the hardware". Assuming your implementation provides independent proof checking, it would not matter whether you used Coq, Lean, or something else.

Tim Daly (Jan 03 2020 at 09:42):

In addition, Bauer's approach lets you choose your logic.

Andrew Ashworth (Jan 03 2020 at 10:16):

Heh. You can simulate all the way down to transistor states in an FPGA. What actually happens is unknowable. Once you instrument your FPGA, your bug might disappear. (My time doing occasional FPGA development may have made me a bit cynical...)

Mario Carneiro (Jan 03 2020 at 13:21):

Running a theorem prover kernel on an FPGA is not out of the question. You need a relatively large read-only memory and a small read-write memory (probably on the order of 1 MB of ROM and 16 KB of RAM for something the size of set.mm), and you stream the whole proof.
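Schematically, the streaming model looks something like this toy sketch (made-up names, nothing like real verifier code): a small bounded stack lives in RAM, already-checked statements sit in ROM, and the proof is consumed in a single pass.

```ocaml
(* Toy sketch of a streaming checker: bounded stack in RAM, statement
   database in ROM, proof consumed in one pass. All names hypothetical. *)

type term = Var of int | App of string * term list

type rom = { theorem_stmt : int -> term }   (* read-only statement database *)

type instr =
  | PushHyp of term        (* push a hypothesis onto the stack *)
  | ApplyThm of int        (* rewrite the stack top using ROM theorem i *)

exception Check_failed of string

let max_stack = 4096       (* stand-in for the small RAM budget *)

let check (rom : rom) (goal : term) (proof : instr Seq.t) : unit =
  let stack = ref [] and depth = ref 0 in
  let push t =
    if !depth >= max_stack then raise (Check_failed "out of RAM");
    incr depth; stack := t :: !stack
  in
  Seq.iter
    (function
      | PushHyp t -> push t
      | ApplyThm i ->
          (* a real kernel would unify the ROM statement against the
             stack; this only shows the data flow from ROM to the stack *)
          (match !stack with
           | _ :: rest -> stack := rom.theorem_stmt i :: rest
           | [] -> raise (Check_failed "empty stack")))
    proof;
  if !stack <> [ goal ] then raise (Check_failed "proof does not prove the goal")
```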

Mario Carneiro (Jan 03 2020 at 13:22):

Writing the logic gates is not fun, though. You really need to abstract away from that to get a real theorem prover running

Tim Daly (Jan 03 2020 at 15:06):

Intel bought Altera (the second-largest FPGA company) a couple of years ago. Intel now ships CPUs with a built-in FPGA. You can buy one now (but you have to be one of the FAANGs; they only sell them to the big server companies). One of these days you'll be able to side-load your FPGA processor and side-load your proof from your proof-carrying code. Leo probably has access to one at Microsoft.

As for "a large read-only memory", the Altera has a lot of ROM that allows you to install whole CPUs. They also have dual MIPS CPUs available on the die itself if you want to run MPS code. The rules for a logic like CIC would be a trivial in comparison. And since the hardware is naturally fully parallel you can compute all branches of your proof tree at once.

It would be much easier to write proven hardware semantics for Bauer's Andromeda on an FPGA than it would be to prove anything about an x86 instruction stream. Using a layered code architecture like Milawa would allow the user to decide which, and how many, layers to side-load into the FPGA, each layer proven by the layers below it. With not many layers you end up with a fully proven Lisp engine.
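Very roughly, the layering works like the toy sketch below (my own illustration, not Milawa's actual structure): each new layer ships a more capable checker together with a justification proof that the layer below can check, so trust only ever rests on the bottom layer.

```ocaml
(* Toy sketch of Milawa-style layering; structure and names hypothetical. *)

type proof = string                          (* opaque placeholder *)

(* A trusted layer: whatever checker we currently believe in. *)
type layer = {
  name  : string;
  check : proof -> bool;
}

(* A candidate next layer: its new checker plus a proof, stated for the
   layer below, that the new checker only accepts what the old one would. *)
type candidate = {
  next_check    : proof -> bool;
  justification : proof;
}

(* Accept the new layer only if the currently trusted layer accepts
   the justification proof. *)
let bootstrap (current : layer) (c : candidate) : layer option =
  if current.check c.justification
  then Some { name = current.name ^ "+1"; check = c.next_check }
  else None
```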

It will all be available, likely within 2020, once Intel decides to ship the hardware beyond the FAANGs to the rest of the world.

Tim Daly (Jan 03 2020 at 15:18):

So Lean's logic layered on an FPGA that implements the trusted kernel would run really fast and, if the kernel is in ROM, would be rather trustworthy.

Mario Carneiro (Jan 03 2020 at 16:09):

I think x86 is a distraction here. It does not make a big difference what the instruction set is

Tim Daly (Jan 03 2020 at 16:40):

The point I set out to make had nothing to do with the x86. The basic idea was to embed something like Bauer's Andromeda in hardware. The Andromeda nucleus is designed to accept the rules of a logic like CIC and perform the trusted verification. Thus many logics could be used, depending on what you're trying to prove and what logic is convenient (assuming soundness/completeness), all of which use the same trusted nucleus.
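In LCF-style terms, my reading of that design is roughly the sketch below (names are illustrative, not Andromeda's actual interface): the nucleus owns an abstract theorem type, so any rule set layered on top can only manufacture theorems by going through the nucleus.

```ocaml
(* LCF-style sketch of "one trusted nucleus, many logics". Names are
   illustrative and not Andromeda's actual API. *)

module type NUCLEUS = sig
  type term = Var of string | Imp of term * term
  type thm                                   (* abstract: only the nucleus builds these *)
  val assume : term -> thm                   (* {A} |- A *)
  val intro  : term -> thm -> thm            (* discharge A:  G \ {A} |- A -> B *)
  val elim   : thm -> thm -> thm             (* modus ponens *)
  val concl  : thm -> term
end

module Nucleus : NUCLEUS = struct
  type term = Var of string | Imp of term * term
  type thm = { hyps : term list; concl : term }   (* hidden representation *)
  let assume a = { hyps = [ a ]; concl = a }
  let intro a th =
    { hyps = List.filter (fun h -> h <> a) th.hyps; concl = Imp (a, th.concl) }
  let elim ab a =
    match ab.concl with
    | Imp (x, y) when x = a.concl -> { hyps = ab.hyps @ a.hyps; concl = y }
    | _ -> failwith "elim: not a matching implication"
  let concl th = th.concl
end

(* A "logic layer" built outside the nucleus: it can only manufacture
   theorems via the trusted interface, e.g. |- A -> A. *)
let identity (a : Nucleus.term) : Nucleus.thm =
  Nucleus.(intro a (assume a))

let example : Nucleus.thm = identity (Nucleus.Var "A")
```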

Tim Daly (Jan 03 2020 at 16:44):

The problem I'm trying to solve is proving mathematical algorithms. Such proofs can use different logics

Tim Daly (Jan 03 2020 at 16:45):

Lean is useful for the group theory theorems since Axiom is designed around group theory.

Johan Commelin (Feb 25 2020 at 12:57):

Tim Daly said:

Lean is useful for the group theory theorems since Axiom is designed around group theory.

(emphasis mine)

Here is another statement that I find very funny. It's a typical example of a statement that I don't understand, but that does rub me the wrong way. Lean is useful. Lean is useful for lots of things. Group theory is one example. But Axiom (at the moment) has nothing to do with why Lean is useful for group theory. So offering Axiom as an argument for why Lean is useful seems like nonsense.

Johan Commelin (Feb 25 2020 at 13:00):

I don't want to enter into a flamewar. I don't want to offend people. I don't want bitter discussions. But I do sense that some people are getting quite unhappy about certain statements you make.


Last updated: Dec 20 2023 at 11:08 UTC