To be even more clear about what we're talking about, we need to distinguish syntax from semantics. If we're talking about "rules of inference" or "proofs", we are usually thinking syntactically. That is, we're thinking about symbols on a page and rules for manipulating those collections of symbols into other collections of symbols, or rules about what constitutes a "correct" arrangement of the symbols.
More informal renditions will be sentences in a natural language that follow "rules of reason", but the idea is still that the form of the argument is what makes it valid. Semantics, on the other hand, interprets those symbols as mathematical objects, and then we say a formula is "true" when its interpretation is "true", e.g. when it is interpreted as a constantly "true" Boolean function.
For 1, one very unsatisfying answer is that it is often taken as given, i.e. as an axiom. A slightly more satisfying answer is the following: the idea of everything being "either true or false" is often considered intuitively obvious, though that isn't quite what the rule means. We have rules for building proofs from other proofs, and that's all there is to this perspective.
For 2, if you use the "truth table" semantics of classical propositional logic, then you simply calculate: in these semantics, "proof by contradiction" is simply "true", and you can easily show this. To question this requires questioning the semantics. Why only two truth values? Why not three, or an infinite number of them? This leads to multi-valued logics. Alternatively, we could keep the truth values the same, but interpret formulas as something other than Boolean functions.
For example, we could say they are Boolean functions but we only allow monotonic ones, or we could say that they are total Boolean relations. Making these changes requires adapting the notion of "true".
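Before changing anything, the baseline "truth table" calculation claimed above can be sketched as follows (a minimal sketch; the function names are just for illustration):

```python
def implies(p, q):
    # Classical material implication as a Boolean function.
    return (not p) or q

def proof_by_contradiction(a):
    # "If assuming not-A leads to absurdity, then A":
    # ((not A -> False) -> A).
    return implies(implies(not a, False), a)

def double_negation(a):
    # Double negation elimination: (not not A) -> A.
    return implies(not (not a), a)

# A formula is "true" in these semantics when it is the constantly
# true Boolean function, i.e. true under every assignment.
assert all(proof_by_contradiction(a) for a in (True, False))
assert all(double_negation(a) for a in (True, False))
```

Questioning the result then means questioning `implies` itself, i.e. the semantics, rather than the calculation.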
Changing the semantics affects which rules and axioms are sound. A rule or axiom is sound with respect to a given semantics if its interpretation is "true" in that semantics.
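As a concrete instance of that sensitivity, here is a sketch assuming Gödel's three-valued logic (truth values 0, 1/2, 1, where "true" means constantly 1); in it, double negation elimination, and hence proof by contradiction, is no longer sound:

```python
from fractions import Fraction

ZERO, HALF, ONE = Fraction(0), Fraction(1, 2), Fraction(1)
VALUES = (ZERO, HALF, ONE)

def imp(a, b):
    # Goedel (Heyting) implication: full truth when a <= b, else b.
    return ONE if a <= b else b

def neg(a):
    # Negation defined as implication into falsity.
    return imp(a, ZERO)

# In two-valued semantics, (not not A) -> A is constantly true,
# but here it only reaches 1/2 at the middle truth value:
assert neg(neg(HALF)) == ONE
assert imp(neg(neg(HALF)), HALF) == HALF
assert min(imp(neg(neg(v)), v) for v in VALUES) == HALF
```

So the very same rule is sound with respect to one semantics and unsound with respect to another; nothing about the syntax changed.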
To summarize: if you're working with respect to "truth table" semantics, then "proof by contradiction" is simply "true", that is, it is interpreted as a constantly "true" Boolean function, and this can easily be calculated. In this case, all of your "logical assumptions" are built into the notion of "truth table" semantics. With respect to semantics alone, "proof" is irrelevant.
Proof is a syntactic concept. Your discussion about "assuming the premise is false" is slightly mangled proof-theoretic talk. You can have meta-logical assumptions that some formula is "true", but this happens outside of the logic. Ultimately, the coin of the mathematical realm is the more syntactic notion of proof, and semantics just pushes proof into the meta-logic.
If you reach a contradiction with something you know is true, then the only possible problem can be in your initial assumption that X is false. Therefore, X must be true. Unfortunately, this proof technique can really cause problems for amateurs. Typically, what happens is that the proof starts off quite reasonably, and then gets lost in a maze of complexity.
Somewhere in the confusion, a mistake is made, which leads to a contradiction. Then it looks like the proof is done, but unfortunately the contradiction has nothing to do with the initial assumption, and comes solely from the mistake in the middle.
One big component of this is figuring out which uses of proof by contradiction are avoidable and which ones are essential; in the latter case, the theorem statement has to be weakened for the constructive version. But this is all about proof by contradiction in the narrow sense. You can still derive all the results you want; you simply have to be more explicit about whether you have constructive evidence or not.
What is the justification for not believing that inferring A from a double negation of A is a valid inference? I took formal systems in university some years ago and remember something about different axiom systems, and how one school of thought rejected one particular rule of inference; was this the one?
I am trying to think of a world where this rule is not valid, and not succeeding. Consider the Collatz conjecture: constructively, it is not "true" for me, because I don't have a proof of it. It's also not "false": to be "false" I'd have to have tangible evidence that the Collatz conjecture is false, i.e. a counterexample. To be clear, there is now an element of time involved. Or, as others have stated, constructive logic reflects the "communicative nature" of proof. What I should have said a second ago is "The Collatz conjecture is not true for me right now, because I personally don't happen to have evidence for its validity yet".
If you had a valid proof, it'd be true for you, and once you communicated that proof to me, it'd be true for me. If you live in a world where things must be true or false, e.g. under classical logic, this element of time disappears.
One example is to define truth as provability. Then if A isn't refutable, it doesn't mean that A is provable, so double negation elimination fails. Do they really? So let's prove that; it just follows from the axioms. (Recall that a contradiction is a formula together with its negation.) I'm not sure "try not to use [proof by contradiction] unless you really need it" is the optimal conclusion.
In "Meta Math"[1], Gregory Chaitin explores the question: "What makes a mathematical concept desirable?" The best theorems seem obvious in retrospect, because they are reduced to relationships between existing concepts. I would recommend trying all angles of attack when searching for a proof.
If it starts to take too long or you hit a dead end, try a different angle. Even if you discover a proof, try your other opening moves, look for a more elegant solution. I believe Wolfram Alpha converges on this solution by performing some sort of weighted, breadth-first search on the permutations of an equation.
I highly recommend "Meta Math"[1] if you are interested in theories about entropy and algorithms that generate proofs. Oh, that's only a guideline for presenting your proofs.
Not for coming up with them in the first place. Mathematicians do exactly what you are suggesting already. The first proof one finds is usually quite ugly, and then comes the simplification. Writers call it editing; programmers, refactoring.
The question is: what other types can there be? All difficult conjectures should be proved by reductio ad absurdum arguments. For if the proof is long and complicated enough, you are bound to make a mistake somewhere, and hence a contradiction will inevitably appear, and so the truth of the original conjecture is established. QED.
I don't like the conclusion that students should avoid proving by contradiction unless they really need it. I almost always start with contradiction as my initial method of choice when taking exams because you don't always have time to completely rewrite your proofs.
Contradiction helps with this, since it naturally encompasses proof by contrapositive also. And of course, if you find any other absurdities along the way sooner, you're done quicker. This is nice because a proof by contrapositive is more or less equivalent to a direct proof. So simply by always starting with proof by contradiction, you also cover the case where the theorem is simple enough for you to find out exactly why it's true (a direct proof hidden in the contrapositive).
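The equivalence leaned on here, that a proof by contrapositive of "A implies B" is essentially a direct proof of "not B implies not A", is itself a quick truth-table check (a minimal sketch; `implies` is just the usual Boolean function):

```python
def implies(p, q):
    # Classical material implication.
    return (not p) or q

# "A -> B" and its contrapositive "not B -> not A" agree
# under every assignment of truth values.
for a in (True, False):
    for b in (True, False):
        assert implies(a, b) == implies(not b, not a)
```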
Where is that conclusion drawn? This is a blog post for research and armchair mathematicians, not students trying to sit an exam. Oh, OK. I interpreted that as advice to students writing up their proofs, not trying to find them. First, let's assume that proof by contradiction isn't necessary... Pretty unrelated, but proof by contradiction has been really useful for me in real life. It gives you a way to look at logical facts and remove many axioms, because it relies on empirical evidence and a set of assumptions that you are trying to invalidate.