Zulip Chat Archive

Stream: metaprogramming / tactics

Topic: confidence sorry implementation help


David Liang (Dec 04 2024 at 15:20):

is it possible to implement a confidence-associated sorry via metaprogramming? I would like to attach a confidence level to sorry. A confidence_sorry would propagate its confidence to the enclosing proof, and the confidences would then be combined through boolean logic: an AND would yield the minimum confidence of all terms, an OR would yield a weighted sum of each term's activation frequency times its confidence, etc. The top-level prover would then search for the proof path with the highest confidence, and the path should be trackable through each chosen axiom or lemma proven by confidence_sorry.

Notification Bot (Dec 04 2024 at 15:42):

A message was moved here from #lean4 > is that possible to implement sorry via meta programming? by Yury G. Kudryashov.

Yury G. Kudryashov (Dec 04 2024 at 15:42):

Please don't post duplicate messages to different channels.

David Liang (Dec 04 2024 at 15:44):

is the idea even possible? Or is anything close to that?

Mario Carneiro (Dec 04 2024 at 15:46):

sorry is just syntax which produces a call to sorryAx. You can define your own sorryConfidenceAx (confidence : Rat) (or use another type for the confidence level), have tactics that produce it, and a command that calculates the minimum-confidence sorry in a term, along the same lines as #print axioms
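Mario's suggestion could be sketched roughly as below (hypothetical names `sorryConfidenceAx` and `csorry`, not existing Lean APIs; assumes `Rat` is available, e.g. from Batteries or Mathlib):

```lean
-- Hypothetical sketch: an axiom carrying a confidence value, plus a
-- term-level macro that applies it.
axiom sorryConfidenceAx (confidence : Rat) {α : Sort u} : α

-- `csorry c` closes any goal, recording confidence `c` in the term.
macro "csorry " c:term : term => `(sorryConfidenceAx $c)

example : 1 + 1 = 2 := csorry (8/10)
```

A companion command in the spirit of #print axioms would then walk a proof term, collect the confidence arguments of `sorryConfidenceAx` applications, and report the minimum.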

David Liang (Dec 04 2024 at 16:09):

from what I read, sorryAx looks like it invokes an internal function lean_sorry. The lean_sorry implementation looks weird: it panics directly. I might be misunderstanding the runtime; why can it panic without disrupting the whole tactic? Leaving that aside, I still have little clue how to implement the sorryConfidenceAx you mentioned. If I'm not mistaken, it should also be something like:
@[extern "lean_sorry", never_extract]
axiom sorryConfidenceAx (α : Sort u) (synthetic := false) : {confidence : Rat} → α
The Sort u is a bit hard to understand; the universe level description in the metaprogramming book is not easy to follow. And how do I make the confidence interact with boolean logic in the ways mentioned above? Does it interact well with existing tactics?

Kyle Miller (Dec 04 2024 at 17:33):

Make sure you're not misinterpreting the function of implemented_by. That's to give the runtime behavior of sorryAx in compiled code, and it makes sense for the runtime to panic if sorryAx is ever executed, since sorryAx stands for a missing value.

Kyle Miller (Dec 04 2024 at 17:35):

It's probably better to take advantage of sorry and make a definition:

def sorryConfidenceAx (α : Sort u) (confidence : Rat) : α := (sorry : Rat → α) confidence

(I wrote it this way so that sorries with different confidences are different. If you just write sorry on its own on the RHS then sorryConfidenceAx Nat 0 = sorryConfidenceAx Nat 1 is true.)
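Kyle's caveat can be checked directly. The contrasting definition below (hypothetical name `badSorryConfidence`; assumes `Rat` from Batteries or Mathlib) puts a bare `sorry` on the RHS, so the confidence argument is discarded:

```lean
-- A bare `sorry` on the RHS: the definition body is one `sorryAx` term
-- that ignores the confidence argument entirely.
def badSorryConfidence (α : Sort u) (confidence : Rat) : α := sorry

-- Both sides unfold to the same `sorryAx` application, so `rfl` closes this.
example : badSorryConfidence Nat 0 = badSorryConfidence Nat 1 := rfl
```

With Kyle's version, the confidence is an argument of the underlying `sorry`, so no such proof goes through by `rfl`.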

David Liang (Dec 05 2024 at 00:38):

i think metaprogramming is at least required to make the boolean logic work with the confidence level.
term1 AND term2
the confidence is determined by the minimum confidence of term1 and term2.
term1 OR term2
the confidence is determined by the router ratio and the corresponding confidences: ratio_term1 * term1_confidence + ratio_term2 * term2_confidence estimates the overall confidence. Possibly an MVar is required, from my limited Lean 4 understanding.
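As plain functions on confidence values (hypothetical names `andConf`/`orConf`; assumes a Mathlib environment for `ℚ` and its `min`), the two rules might be sketched as:

```lean
import Mathlib

-- AND: the weakest link determines the overall confidence.
def andConf (c₁ c₂ : ℚ) : ℚ := min c₁ c₂

-- OR: weight each branch's confidence by how often that branch is taken,
-- e.g. orConf (1/2) 5 (3/10) 7 for a 50%/30% routing split.
def orConf (w₁ c₁ w₂ c₂ : ℚ) : ℚ := w₁ * c₁ + w₂ * c₂
```

The metaprogramming part would be wiring such combinators into whatever scans the proof term, not in the arithmetic itself.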

Yury G. Kudryashov (Dec 05 2024 at 03:51):

I think that first you need to figure out what's your goal.

David Liang (Dec 05 2024 at 03:53):

the goal is to define a confidence-associated sorry that is capable of supporting some boolean logical tactics?

Yury G. Kudryashov (Dec 05 2024 at 03:55):

You can have "sorry with confidence" like

axiom sorryWithConfidence (r : Rat) (p : Prop) : p

then use

exact sorryWithConfidence 0.5 _

(or a macro that unfolds to it) to close goals, and have a metaprogram that works like #print axioms but finds the minimal confidence level used in sorryWithConfidence calls.
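A minimal sketch of such a macro (hypothetical `sorry'` tactic name; assumes `Rat` from Batteries or Mathlib):

```lean
axiom sorryWithConfidence (r : Rat) (p : Prop) : p

-- `sorry' r` closes the current goal, recording confidence `r` in the proof.
macro "sorry' " r:term : tactic => `(tactic| exact sorryWithConfidence $r _)

example : 1 + 1 = 2 := by sorry' (1/2)
```

The `_` lets the elaborator fill in the current goal for the explicit `Prop` argument.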

Yury G. Kudryashov (Dec 05 2024 at 03:56):

Or you can introduce a type of propositions that are true at a confidence level.

Yury G. Kudryashov (Dec 05 2024 at 03:57):

In other words, use something like

def ConfidenceProp := UpperSet Rat

def TrueWithConfidence (p : ConfidenceProp) (r : Rat) : Prop := r ∈ p.1
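Continuing this suggestion, conjunction and disjunction could be defined set-wise. This is a hypothetical sketch (illustrative names `ConfidenceProp.and`/`ConfidenceProp.or`; assumes a full Mathlib import for `UpperSet` and `IsUpperSet.inter`/`IsUpperSet.union`):

```lean
import Mathlib

-- Yury's definitions, restated: the upper set is the set of confidence
-- levels at which the proposition holds.
def ConfidenceProp := UpperSet ℚ

def TrueWithConfidence (p : ConfidenceProp) (r : ℚ) : Prop := r ∈ p.1

-- AND: both must hold, so intersect the confidence sets (minimum behavior).
def ConfidenceProp.and (p q : ConfidenceProp) : ConfidenceProp :=
  (⟨p.1 ∩ q.1, p.2.inter q.2⟩ : UpperSet ℚ)

-- OR: either suffices, so take the union (maximum behavior).
def ConfidenceProp.or (p q : ConfidenceProp) : ConfidenceProp :=
  (⟨p.1 ∪ q.1, p.2.union q.2⟩ : UpperSet ℚ)

-- Sanity check: `and` holds at level r exactly when both components do.
theorem trueWithConfidence_and (p q : ConfidenceProp) (r : ℚ) :
    TrueWithConfidence (p.and q) r ↔ TrueWithConfidence p r ∧ TrueWithConfidence q r :=
  Iff.rfl
```

This gives "p ∧ q true at confidence min r s" and "p ∨ q true at confidence max r s" for free from the upper-set structure.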

David Liang (Dec 05 2024 at 03:58):

how about the boolean logic part? I would like to have a way of doing term statistics to yield the confidence for a given proving program.

Yury G. Kudryashov (Dec 05 2024 at 03:59):

Read the paragraph about "#print axioms" above.

David Liang (Dec 05 2024 at 04:00):

i have an idea of integrating math provers into a neural system to solve general problems with confidence, and to improve term confidence as much as possible.

Yury G. Kudryashov (Dec 05 2024 at 04:01):

If you want to have implications like "if p is true with confidence r and q is true with confidence s, then p ∨ q is true with confidence max r s", then you need a proper theory of propositions that are true at a given confidence level, not a "sorry with confidence".

David Liang (Dec 05 2024 at 04:02):

i proposed a way of calculating such confidence in the discussion above. Is that the theory you refer to?

Yury G. Kudryashov (Dec 05 2024 at 04:02):

With "sorry with confidence", all you can do is find the minimal r among all calls to sorryWithConfidence r in the proof.

Yury G. Kudryashov (Dec 05 2024 at 04:04):

BTW, what's your background in math and in Lean?

David Liang (Dec 05 2024 at 04:11):

i'm quite new to Lean. My major interest is to utilize Lean's proving tactics in a progressively improving neural system. Each term is a trainable neural program; with more data, the trainable program improves over time and the confidence feedback becomes more accurate.

David Liang (Dec 05 2024 at 04:12):

and incorporate some LLM features to decouple the neural program into smaller programs.

Yury G. Kudryashov (Dec 05 2024 at 05:16):

If you want to use Lean, then you should first (a) learn Lean; (b) write Lean code that you want to make work.

Yury G. Kudryashov (Dec 05 2024 at 05:16):

Then it will be possible to help you.

Yury G. Kudryashov (Dec 05 2024 at 05:19):

E.g., it is possible to make this work:

theorem myThm : MyGoal := by
  apply myAuxThm -- generates 2 subgoals
  sorry' 5
  sorry' 7

#print_confidence myThm -- prints 5

Yury G. Kudryashov (Dec 05 2024 at 05:23):

But this doesn't help with

theorem myThm : GoalLeft ∨ GoalRight := by
  have h1 : GoalLeft := sorry' 5
  have h2 : GoalRight := sorry' 7
  my_magic

#print_confidence myThm -- prints 7

because you need to choose whether you use left or right to prove. Of course, if you generate proofs with LM, you can try both, so it will be

theorem myThm : GoalLeft ∨ GoalRight := by
  left
  sorry' 5

theorem myThm' : GoalLeft ∨ GoalRight := by
  right
  sorry' 7

#print_confidence myThm -- 5
#print_confidence myThm' -- 7, prefer this proof
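A rough, hypothetical sketch of what `#print_confidence` could look like (assumes a Batteries environment for `Rat`; it only pretty-prints the confidence arguments found in the directly elaborated proof term, and does not follow auxiliary lemmas the way #print axioms does, nor compute the numeric minimum):

```lean
import Lean
import Batteries

open Lean Elab Command

axiom sorryWithConfidence (r : Rat) (p : Prop) : p

-- Collect the first argument of every `sorryWithConfidence` application
-- occurring anywhere inside an expression.
partial def collectConfidences : Expr → Array Expr
  | e@(.app ..) =>
    let inner := e.getAppArgs.foldl (fun acc a => acc ++ collectConfidences a)
      (collectConfidences e.getAppFn)
    if e.isAppOf ``sorryWithConfidence && e.getAppNumArgs ≥ 1 then
      inner.push e.getAppArgs[0]!
    else
      inner
  | .lam _ t b _ | .forallE _ t b _ => collectConfidences t ++ collectConfidences b
  | .letE _ t v b _ =>
    collectConfidences t ++ collectConfidences v ++ collectConfidences b
  | .mdata _ b | .proj _ _ b => collectConfidences b
  | _ => #[]

elab "#print_confidence " id:ident : command => do
  let name ← resolveGlobalConstNoOverload id
  let some info := (← getEnv).find? name
    | throwError "unknown constant '{name}'"
  let some val := info.value?
    | throwError "'{name}' has no value to inspect"
  logInfo m!"confidences used in {name}: {collectConfidences val}"

theorem demo : True ∧ True :=
  ⟨sorryWithConfidence (1/2) True, sorryWithConfidence (7/10) True⟩

#print_confidence demo
```

On `demo` this reports the two confidence expressions; mapping those expressions back to rational values and taking the minimum is the remaining metaprogramming work.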

David Liang (Dec 05 2024 at 05:27):

it depends on the applicability. E.g. if h1's applicability is 50% and h2's is 30%, the desired output would be something like 50% * 5 + 30% * 7, just a linear assumption. Could the problem be solved via FFI? The interfacing program would log the applicability via the routing network's probability output, and conf_sorry would also call the external function to fetch this info.

Yury G. Kudryashov (Dec 05 2024 at 05:29):

Please, write an example code snippet with example expected output.

Yury G. Kudryashov (Dec 05 2024 at 05:29):

In other words, what would be a test file for the new feature?


Last updated: May 02 2025 at 03:31 UTC