Am I the only one who’s noticed an amusing (or perhaps alarming) tendency of proofs by contradiction to sneak into places they don’t really belong?

Let me illustrate with an absurd example.

```
Prop 0. It is cold outside.

Proof. Assume for contradiction that it is warm outside.
Then it must not be snowing right now.
But it is snowing right now.
This is a contradiction, so our assumption was false.
So it is cold outside.
```

See the problem? Like most cases of sneaky PBCs, it’s not immediately obvious why this proof is unnecessarily long. But what happened? We assumed Prop 0 was false, then proved it was true, and so we’d found a contradiction! Well, I suppose we did, but we could have just gone ahead and proved it:

```
Proof. It is snowing right now.
Therefore, it is cold outside.
```

Here, the exact same logic proves the proposition in a much more straightforward way. To make the issue perfectly transparent, let me rewrite the above proofs as more formal arguments:

```
Axioms: B.       (It is snowing.)
        B => A.  (Snowing implies cold outside.)

Prop 1: A.

Proof #1. Assume not-A.
not-A => not-B (contrapositive of B => A).
But we know B. Contradiction.
Therefore, A.

Proof #2. We know B.
B => A.
Therefore, A.
```

Looks ridiculously obvious here, but I’d bet that, once you start looking for these unnecessary PBCs, you’ll see them all over the place. The general pattern is always a variation on the following: To show A, you’ll see someone assume not-A, then prove not-not-A, and then declare a contradiction. But of course, proving not-not-A will turn out, on inspection, to be trivially equivalent to having just proven A in the first place, without using contradiction.
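The pattern is concrete enough to check in a proof assistant. Here is a sketch in Lean 4 (the proposition and hypothesis names are my own, purely illustrative): the direct route is a single application of the hypotheses, while the roundabout route has to invoke classical reasoning just to unwrap the double negation it created.

```lean
-- A sketch in Lean 4 of the two proofs above. The names A, B, hB, hBA
-- are my own choices for this illustration.
variable (A B : Prop)

-- Proof #2: the direct route. Apply B => A to B.
example (hB : B) (hBA : B → A) : A :=
  hBA hB

-- Proof #1: the roundabout route. Assume not-A, derive A anyway,
-- and call that a contradiction. Note the detour forces us to reach
-- for Classical.byContradiction while buying nothing.
example (hB : B) (hBA : B → A) : A :=
  Classical.byContradiction (fun hnA => hnA (hBA hB))
```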

It can be a bit more complex or subtle than this, but in general it involves at some level a straightforward proof of the proposition wrapped in a contradiction. Let me give a real example.

Example 1: Cantor’s Diagonal Proof.

```
Prop 2: There's no bijection between the integers
and the reals in [0,1).

Proof: Assume for contradiction that there is; call it f.
Let r_i = f(i).
Express each real r_i as a binary string 0.b_i1 b_i2 b_i3 ....
Let c_ij = 1 - b_ij. Then the real number s = 0.c_11 c_22 c_33 ....
is not mapped to by any integer, since for each integer i,
c_ii ~= the ith bit of f(i), so s ~= f(i).
So there is no such bijection.
```

Might be better expressed:

```
Proof. Let f be any injection from the integers to [0,1).
Let r_i = f(i).
Express each real r_i as a binary string 0.b_i1 b_i2 b_i3 ....
Let c_ij = 1 - b_ij. Then the real number s = 0.c_11 c_22 c_33 ....
is not mapped to by any integer, since for each integer i,
c_ii ~= the ith bit of f(i), so s ~= f(i).
So f is not a bijection.
```

The structural issue is the same. In the first, we assume f is a bijection, then show that f is not actually a bijection, and cry “Contradiction!” In the second, we let f be any injection and show (with the exact same argument) that f cannot be a bijection. In my opinion, this is cleaner, neater, and more to the point.
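To see the diagonal construction itself in action, here is a small Python sketch (the function name and the sample data are mine, purely illustrative): given the first n outputs of any candidate f, encoded as n-bit binary prefixes, it builds a string that disagrees with the ith output in the ith bit, exactly the flip c_ii = 1 - b_ii from the proof (with 0-based indices here).

```python
def diagonal(prefixes):
    """Given a list where prefixes[i] holds the first len(prefixes)
    binary digits of f(i), return a digit string that differs from
    prefixes[i] at position i -- the diagonal flip c_ii = 1 - b_ii."""
    n = len(prefixes)
    return "".join("1" if prefixes[i][i] == "0" else "0" for i in range(n))

# The first 4 bits of four hypothetical values f(0)..f(3).
rows = ["0101", "1111", "0000", "1010"]
s = diagonal(rows)
print(s)  # "1011": each bit flips the corresponding diagonal bit
# s differs from every row at the diagonal position, so no row equals s.
assert all(s[i] != rows[i][i] for i in range(len(rows)))
```

Note that nothing in the construction uses the assumption that f is a bijection; it works for any list of outputs, which is exactly why the contradiction wrapper is unnecessary.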

In a more formal (and general) setting, see if you can convince yourself that Cantor’s proof on Wolfram MathWorld contains an unnecessary PBC.

As usual, it is easier to identify phenomena like this “in the wild” than to come up with simple, illustrative examples for a blog post. So at this point, my claim — that PBC is often overused, resulting in needlessly complicated or obfuscated proofs — is left to the reader to confirm or refute.

(Am I guaranteed to be right?)