
Talk:Proof: Difference between revisions

:::::: (I fixed my phrasing there.) 1. You are talking about smaller limits here, and 2. you are introducing new terminology (e.g. "DList") without bothering to define it, and making less general statements. In other words, these are meaningless objections -- the point is that there is a limit. This means, in the context of this task, that there will be even numbers which cannot be added associatively because they cannot be added at all. This leaves us with two options: either (a) the type system is incomplete and does not represent this failure mode, or (b) the type system does represent this failure mode. Traditionally, static type systems use (a), and run-time typing is needed for (b). One common failure mode (e.g. two's complement, fixed width) violates the Peano axioms for approximately half of the representable combinations of values. Another common failure mode (inductive representation, or variable width) also poses an additional constraint: the time needed to complete the operation increases as the values increase in size. This second failure mode can be made so burdensome that it disguises the first failure mode, but that's not always a good thing. My problem here is that this task currently seems to be asking for a type system with no failure modes. Of course we know that any actual implementation of the Peano axioms must eventually fail to implement some cases, but -- because of earlier objections here -- I am afraid that if I present an implementation that has either of these failure modes, people will object to it. So I am asking that this task requirement be made explicit. --[[User:Rdm|Rdm]] 10:08, 14 May 2012 (UTC)
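:::::: The first failure mode described above (fixed-width, wrap-around arithmetic violating the Peano axioms) can be illustrated with a minimal Haskell sketch. The choice of `Word8` as a stand-in for a fixed-width "natural number" type is illustrative only, not taken from the discussion:

```haskell
import Data.Word (Word8)

-- With a fixed-width representation, addition wraps around modulo 2^8,
-- so zero becomes the successor of 255 -- violating the Peano axiom
-- that 0 is not the successor of any natural number.
main :: IO ()
main = do
  print ((255 :: Word8) + 1)   -- wraps to 0
  print ((200 :: Word8) + 100) -- 300 is not representable; prints 44
```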
 
::::::: "the point is that there is a limit" -- of course (this was my minor comment on a minor comment about a minor point), but a type always has its cardinality (i.e. the cardinality of the corresponding set; for example, aleph-null for natural numbers or finite strings). This is unbeatable; it does not matter how many terms can be constructed physically. "This means, in the context of this task" -- this means nothing in the context of this task. Again: just as you can prove `even + even = even` (for ''all'' natural numbers) using a finite amount of paper and a finite number of steps, a computer (proof system) can do this using a finite amount of memory in a finite time, using the same axioms and derivation rules! In the Agda version there are no numbers which are directly tested for the statements of the task (well, only the first natural number is verified directly); the computation is done in a general form over terms and types. Imagine the following: you have formulas, you have a finite set of axioms and inference rules, and there is a way to apply these rules to formulas to obtain new formulas, starting with the axioms. So, using only terms of finite size and applying a finite number of rules, you can deduce the truth of the `even + even = even` statement in a suitable metatheory. Proof systems do just that; they do not directly test the statements against some finite set of test data. The proof itself is compile-time type checking (in the case of languages with dependent types). Not all of mathematics, but some formal mathematical systems, can definitely be implemented: their symbolic data (the "math" itself) can be implemented, the reasoning in such systems can be implemented, and so on. We can trust proofs that hold in such systems, just as we trust "ordinary" proofs. — [[User:Ht|Ht]] 12:56, 14 May 2012 (UTC)
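::::::: The idea that a single finite proof term, checked at compile time, covers all natural numbers can be sketched in Lean (a hypothetical transliteration of the idea, not the actual Agda version referenced above):

```lean
inductive Even : Nat → Prop where
  | zero : Even 0
  | step : ∀ {n}, Even n → Even (n + 2)

-- One finite proof term; type-checking it establishes the statement
-- for all (countably infinitely many) pairs of even numbers, with no
-- per-value testing.
theorem even_add {m n : Nat} (hm : Even m) (hn : Even n) : Even (m + n) := by
  induction hn with
  | zero => exact hm
  | step _ ih => exact Even.step ih
```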
 
:::::::: I think it's fair to say that the failure of the implementation to satisfy the properties which were proved for the type means nothing to you. If you ever care to become aware of counterexamples, I suggest you spend some time visiting the [http://catless.ncl.ac.uk/Risks risks forum]. Nevertheless, I shall endeavor to create an implementation which fails in this fashion. It's a shame that you feel this concept is too complicated for you to explain so that it could be included in the task description. But I shall propose this change myself. --[[User:Rdm|Rdm]] 12:17, 14 May 2012 (UTC)
 
::: My current inclination is to believe that you cannot get [http://www.cs.nott.ac.uk/~txa/publ/ydtm.pdf the point] of dependently-typed programming languages / proof systems. — [[User:Ht|Ht]] 23:00, 13 May 2012 (UTC)
 
: Your last addition:
 
:: Note also that it's expected that the implementation of the type will fail to satisfy the Peano axioms for some values. It's only required that the implementation succeed for values which do not exceed some non-trivial finite limit. The proof is not meant to apply to the implementation for values which exceed that limit.
 
: The "implementation of the type" is the implementation of Martin-Löf type theory (or its extension); [http://intuitionistic.files.wordpress.com/2010/07/martin-lof-tt.pdf the paper] has a model of natural numbers similar to the system of Peano axioms. — [[User:Ht|Ht]] 12:56, 14 May 2012 (UTC)
 
This will be my final attempt to clarify this for you (please grab a book or article on type theory if you want to know how mathematicians and computer scientists define a type): the only implication of the fact that an actual implementation of a given logic/type theory runs on a machine with a finite amount of memory is that the set of results the compiler can give is extended from "I was able to verify the proof as correct" and "I was able to verify the proof as incorrect" with a third: "I am unable to verify the proof as either correct or incorrect because I ran out of memory". The third option must not occur here. The compiler should say "I was able to verify the proof as correct", assuring us that the property `even + even = even` holds for all of the countably infinite set of pairs of natural numbers.