Symbolic Logic
In college, I double majored in Philosophy and Mathematics. A required class for both of those majors was Symbolic Logic, often referred to as Propositional Logic. The course teaches you to break arguments down into an algebraic notation and follow a set of principles to come to logically sound conclusions.
I have found that as a software engineer I frequently use the principles this course taught me in my programming. I may not look at some code and specifically say out loud, “That’s a modus ponens” or “That’s a hypothetical syllogism”, but I know they’re there.
My hope with this post isn’t to radically change your ability to code, but rather give you some vocabulary for discussing logic in your programs and more.
Foundational knowledge
There are a few things you need to know before reading the rest of this post. I’m going to express all of these logical principles with pseudo-code as best as I can. This accomplishes two things (at least). First, it makes it easier for me to type (rather than using the formal symbols more commonly found in textbooks on the subject). Second, it is my hope that using a code-like syntax will help you relate the concepts to your coding work more easily, and therefore make them easier to learn.
In my pseudo-code, I will be using the traditional letters used in symbolic logic, p and q, as placeholders for propositions. You can think of them as variables in our logical equations. To the best of my knowledge, p was chosen as an abbreviation of “proposition”, and then letters are chosen in alphabetical order. Just like real variable names, you can choose to replace them with something more meaningful if that helps you.
Here is a small legend of my pseudo-code:
p or q represent propositions, or elements of an argument
=> is used for “entails”. It can also be read as “If p, then q”
--- is used to indicate the end of the original set of propositions. What follows are conclusions derived from those propositions.
! is used for “negation”
& is used for “and”
| is used for “or”
=== is used for “material equivalence”
Certain conclusions can only be derived by applying other logical principles. When a principle is applied, an abbreviation of the principle’s name, as well as the line numbers of the propositions required to apply it, will be noted next to the conclusion.
Rules of Inference
The first set of logical rules we are going to discuss can be categorized as “rules of inference”. That is, from the information at hand, we can logically deduce a set of conclusions. This set of logical principles may be useful in debugging programs, as they will teach you to arrive at sound conclusions given the information you have. Let’s get started.
Modus Ponens
Modus ponens is the most foundational rule of inference we can learn. I would tell you what the Latin means, but honestly, I think it would only add confusion rather than clarity, so I’ll leave you to read the wiki on it on your own. The rule reads as follows:
If p, then q. p is true, therefore q.
And in our symbolic logic format, it looks like:
1. p => q
2. p
---
3. q 1,2 M.P.
If one proposition entails another and the first proposition is true, we can conclude that the second proposition is true. We use this all of the time in programming:
if (data) {
  doSomethingWith(data)
}
This principle is also used frequently when debugging code. If you do not get the result you would expect from a condition, then the condition must not have been true.
The opposite, “if I have the result I expect, then I must have a truthful condition”, cannot be logically deduced and is a common fallacy known as “affirming the consequent”. The reason this fails is that there may be other propositions that entail q other than p. Be careful not to commit this fallacy when you program. Getting the right result doesn’t necessarily guarantee it got there by the path you expected. Always verify.
Modus Tollens
Our next rule of inference is modus tollens, whose Latin meaning might also cause more confusion than clarity, wiki here. It is closely related to modus ponens. It follows:
If p, then q. Not q, therefore not p.
And in symbolic form, reads as:
1. p => q
2. !q
---
3. !p 1,2 M.T.
I have always found modus tollens fascinating. Heuristically, we would gravitate towards saying, “Not p entails not q”, but this isn’t quite true. Why? Because other elements of the universe can cause q. We only know with certainty that the truthful condition of p entails the consequent of q. Thus, we also know with certainty that the absence of q ensures the absence of p.
In programming, if we know that we have written a condition correctly, and we do not get the desired result, then we can deduce we have not met our condition. Worse than not meeting the requirements of our condition is knowing we have not satisfied our condition and, yet, our desired result still occurs. That, my friends, is a bug.
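A minimal sketch of that debugging reasoning, using a made-up feature flag: if the conditional is written correctly and the banner never renders, modus tollens tells us the flag must not have been enabled.

// p => q: if the feature flag is enabled, the banner is rendered.
// !q: the banner is not rendered.
// Therefore !p: the flag must not be enabled for this user.
function renderBanner(flags) {
  if (flags.showHolidayBanner) {
    return '<div class="banner">Happy holidays!</div>'
  }
  return null
}
// If renderBanner(flags) returns null, we can conclude flags.showHolidayBanner was falsy.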
Conjunction and Simplification
No more Latin principles. I promise. Our next rules can be considered partners, and therefore will be taught together.
Conjunction is the combining of two propositions: p, q, therefore p & q.
1. p
2. q
---
3. p & q 1,2 Con.
Simplification is taking a conjunction and reducing it to one of its propositions: p & q, therefore p.
1. p & q
---
2. p 1 Simp.
In programming, we create a number of conjunctions and simplifications all the time. A conjunction could be creating a condition based on two different values, and a simplification could be the destructuring of a key/value from an object.
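For instance, here is a small sketch with made-up names: the condition is a conjunction of two propositions, and the destructuring is a simplification that keeps only the proposition we care about.

const order = { isPaid: true, inStock: true }

// Conjunction: both propositions must hold for the order to ship.
const canShip = order.isPaid && order.inStock

// Simplification: from the object holding both values, we keep just the one we need.
const { isPaid } = order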
Hypothetical Syllogism
This logical principle is sometimes known as “double modus ponens”; you’ll soon see why.
p entails q and q entails r. Therefore, p entails r.
1. p => q
2. q => r
---
3. p => r 1,2 H.S.
For me, hypothetical syllogism just makes sense, which has always made it a bit difficult to explain to others. Essentially, it’s a chain of modus ponens with the middle link removed.
It is closely related to the transitive property. For example, in math, if you had x > y and y > z, then you can infer that x > z.
In programming, correct usage of hypothetical syllogism may show you that you’re able to reduce a few steps in a particular program, recognizing that one antecedent, p, entails a further consequent, r.
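As a sketch with invented helpers: if a valid form entails an enabled submit button, and an enabled submit button entails that the request gets sent, we can collapse the chain and reason directly from valid form to request sent.

// p => q: a valid form enables the submit button.
// q => r: an enabled submit button means the request gets sent.
// Hypothetical syllogism collapses the chain: a valid form means the request gets sent.
const isFormValid = form => form.email.includes('@') && form.name.length > 0

function submitIfValid(form, sendRequest) {
  if (isFormValid(form)) {
    sendRequest(form) // p => r, without checking the button state in between
  }
}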
Absorption
This one is probably the first oddball we’re going to encounter and it might not be terribly useful for programming, though there might be times it is useful in debugging. Absorption goes as follows:
If p, then q. Therefore, if p, then p & q.
1. p => q
---
2. p => p & q 1 Abs.
The premise of this principle builds upon a conjunction. If a condition leads to a result, then it is true that having that condition means having both the condition and the result. Another way to think about this is, the condition doesn’t disappear because it has led to a result. It is still present, and therefore, can be combined with the result to make new inferences.
Disjunctive Syllogism
This one might seem odd at first, so I’ll give you an insight right away. The key to understanding disjunctive syllogism is understanding that when | is used, it guarantees that one of the premises is true. The rule goes as follows:
p or q. Not p. Therefore, q.
If we have a scenario where we have one premise or another, and we know for certain that we do not have one of them, we can deduce that we have the other. In our symbolic format, it looks like this:
1. p | q
2. !p
---
3. q 1,2 D.S.
This one might be a bit harder to use directly while programming. The best scenario I can think of is if you have a condition that’s being met based on an ||, and you know for certain that one of the operands is false, then you know the other is true.
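Here is a tiny sketch of that scenario, with made-up names: the outer block ran, so the disjunction is true; if we also know the first operand is false, the second must be true.

const user = { isAdmin: false, isOwner: true }

// p | q: this block only runs if at least one operand is true.
if (user.isAdmin || user.isOwner) {
  if (!user.isAdmin) {
    // !p held, so by disjunctive syllogism q (isOwner) must be true.
    console.log('access granted via ownership')
  }
}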
Constructive Dilemma
This rule of inference is essentially combining modus ponens with disjunctive syllogism. The rule goes:
p entails q, and r entails s. p | r. Therefore, q | s.
We have two entailment clauses: this antecedent leads to that consequent. If we have a premise that says we have one or the other antecedent, in this case p | r, then it follows that we have one or the other consequent, q | s. In symbolic form, it looks like this:
1. (p => q) & (r => s)
2. p | r
---
3. q | s 1,2 C.D.
A constructive dilemma essentially occurs whenever we have multiple conditional statements in some code. If we have multiple if statements, we can think of those as entailment clauses. Because we know the set of antecedents, we also know the set of consequents that come from those if statements.
function passwordStrength(str) {
  if (!str) return
  if (meetsTheseConditions(str)) return 'weak'
  if (meetsTheseOtherConditions(str)) return 'strong'
}
Here we have three if statements and the set of their consequents contains weak, strong, or undefined. If we extrapolate this, we can see we have multiple constructive dilemmas.
1. noString => undefined
2. weakString => 'weak'
3. strongString => 'strong'
---
4. (noString => undefined) & (weakString => 'weak') 1,2 Conj.
5. (noString => undefined) & (strongString => 'strong') 1,3 Conj.
6. (weakString => 'weak') & (strongString => 'strong') 2,3 Conj.
7. undefined | 'weak' 4 C.D.
8. undefined | 'strong' 5 C.D.
9. 'weak' | 'strong' 6 C.D.
Addition
p, therefore p | q.
This one’s odd, but useful for meeting the requirements for other logical principles. Essentially, if you have a proposition, then you have that proposition or any other proposition in the universe.
1. p
---
2. p | q 1 Add.
I’ll come up with a rudimentary symbolic logic problem to show you what I mean. Let’s combine addition with constructive dilemma. I’ll give a set of premises and a conclusion we should be able to reach.
1. (p => q) & (r => s)
2. p
How do we prove the conclusion q | s? We can do the following:
3. p | r 2 Add.
4. q | s 1,3 C.D.
Rules of Replacement
The next set of principles we will learn are known as “rules of replacement”. These rules will teach us logically sound ways of swapping one set of premises for another. These rules can often be applied when trying to improve the legibility or even the performance of some code.
Double Negation
p === !!p
We are all not unfamiliar with this concept, right? Or would it be better to ask, “We are all familiar with this concept, right?”
The most common time you’ll see this in coding is if you come across the “double bang” !!. This is often used in JavaScript to coerce a value into a boolean. It is the equivalent of calling Boolean(value). I prefer to use Boolean(). It’s easier to read.
Commutation
p | q === q | p as well as p & q === q & p
The rule of commutation means that, for certain logical operations, the order of the arguments does not matter. | and & are commutative operations. In mathematics, + and * are commutative.
This rule can actually be useful in optimizing our program’s performance. In JavaScript, when using a logical operator such as && or ||, we can improve performance by making the left side of the operation the case that is most likely to “short circuit” the operation.
Short circuiting a logical operation occurs when the left side of the infix operator guarantees the result. If the left side of a conjunction is false, as in false && ..., then the right side is irrelevant and not evaluated. Similarly, if the left side of a disjunction is true, as in true || ..., then the right side is irrelevant and not evaluated. We can use this to our advantage. Let’s make an example.
Let’s say I want to determine if I should wear my rain jacket. I only wear my rain jacket when it’s both cold and raining.
const requiresRainJacket = weather => isCold(weather) && isRaining(weather)
Now, consider the order of functions here. As written, we’ll only get to the isRaining check if it’s cold. Is this the best order? I live in a pretty cool climate; isCold could probably evaluate to true for 7-8 months of the year. Two-thirds of the time, our function won’t short circuit. Arguably, the isRaining check is more important, and more likely to short circuit, than the isCold check. So we can gain a slight improvement by flipping the conditions.
const requiresRainJacket = weather => isRaining(weather) && isCold(weather)
Tautology
p === p | p as well as p === p & p
This one is quite possibly my favorite. A tautology is when we say that some thing is that thing. Like, the weekend is the end of the week. It’s the kind of thing that when you were a kid you might have responded with a loud and emphatic, “Duhhh!”.
In symbolic logic, we can use tautologies to achieve propositions that enable the inference of other propositions.
Association
(p | (q | r)) === ((p | q) | r) and (p & (q & r)) === ((p & q) & r)
This rule of replacement may have you thinking we’re writing a LISP with all the parentheses. This is similar to commutation, in that the rule of association says that, for certain logical operations, the grouping of operations does not matter. Again, this is similar to how in mathematics, if all the operations of an equation are + or *, it doesn’t matter which one you do first.
It’s not often that this will come up in programming. In fact, it’s more likely to come up when something is not associative. In other words, sometimes you’ll find a bug when your parentheses group conditions in the wrong way. It’s quite likely that if the operations are associative, your formatter may remove the groupings altogether.
Transposition
p => q === !q => !p
Transposition is a replacement that we can see derives from modus tollens. In programming, sometimes it is easier to read or write that the negation of one thing results in the negation of the other. Another way to think about this is “absence”. Sometimes it can be easier to say, “The absence of this condition results in the absence of this other condition”. Hopefully, that’s some food for thought.
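Here is a small sketch of that “absence” reading, with invented helper names. The guard clause below literally states “the absence of a complete profile results in the absence of a sent email”, which is the transposed form of “a sent email entails a complete profile”.

const isComplete = profile => Boolean(profile.email && profile.name)

// Original entailment (p => q): if the welcome email was sent,
// the profile must have been complete.
// Transposed (!q => !p): if the profile is not complete,
// the email does not get sent.
function sendWelcomeEmail(profile, send) {
  if (!isComplete(profile)) return // absence of the condition, absence of the result
  send(profile.email)
}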
Material Implication
p => q === !p | q
This rule of replacement is the logical conclusion of a modus ponens. If we have an entailment, where a condition leads to a result, then we can conclude that, at any given time, we either don’t have a truthful condition or we have the result.
In programming, we can recognize that either a condition isn’t met or its result exists.
// There is no universe where the condition is true
// && the result is missing. At any given moment, either
// the condition is false or the result exists.
if (condition) {
  return result
}
Exportation
(p & q) => r === p => (q => r)
When I first started writing this rule of replacement, I thought, “There’s no way this is useful”, but then I realized that exportation happens all the time. The rule works like this: If a consequent is the result of a conjunction, then we can simplify that conjunction, and deduce that one condition leads to the entailment of the rest.
A simpler way to understand this is probably just to see it. Have you ever written code like this?
if (thisCondition) {
  if (thisOtherCondition) {
    // ... some result
  }
}
If you have, then you’ve written the right side of an exportation. Can you see it? The logical replacement is:
if (thisCondition && thisOtherCondition) {
  // ... some result
}
Now, I bet you’ve done that refactor a lot. Combine this with “short circuiting”, and you may have vastly improved some nasty nested if-else code. Might I suggest you read my post “When elses Make Your Code Worse” some time.
Material Equivalence
(p === q) === ((p => q) & (q => p)) and (p === q) === ((p & q) | (!p & !q))
This one is fairly complex looking, but breaking it down makes it more digestible.
Two propositions are equivalent if it can be demonstrated that they entail one another. Two propositions are also equivalent if it can be demonstrated that their conjunction is true or the conjunction of their negations is true.
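As a small illustrative sketch (the predicates are made up), two checks are materially equivalent when they are true together and false together for every input:

const isEmpty = list => list.length === 0
const hasNoItems = list => !list.some(() => true)

// Each proposition entails the other, so their material equivalence always holds.
console.log(isEmpty([]) === hasNoItems([])) // true
console.log(isEmpty([1, 2]) === hasNoItems([1, 2])) // true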
Distribution
(p & (q | r)) === ((p & q) | (p & r)) and (p | (q & r)) === ((p | q) & (p | r))
The rule of distribution is the same for math as it is for symbolic logic. You might recall from an algebra class having an equation that looked like 3(2x + 4y) = 42. You might be required to distribute the 3 to make it 6x + 12y = 42. Or the opposite, factoring out the 3 if you had been given the latter equation. It works the same way for logic.
You can see this in complicated conditional code sometimes:
if (conditionA) {
  if (conditionB || conditionC) {
    // ... some result
  }
}
That’s the same as:
if ((conditionA && conditionB) || (conditionA && conditionC)) {
  // ... some result
}
De Morgan’s Theorems
!(p & q) === (!p | !q) and !(p | q) === (!p & !q)
Saving the best and most useful for last. You should probably memorize this one.
The negation of the conjunction of two premises is equivalent to the disjunction of the negation of each premise. Also, the negation of the disjunction of two premises is equivalent to the conjunction of the negation of each premise.
Simple, right?
Let’s use a code example to make it simpler. Let’s say we have an item and we want to determine if it’s notForSale:
const notForSale = !(inStock(item) && hasPrice(item))
This might be easier to read and understand as:
const notForSale = !inStock(item) || !hasPrice(item)
Same thing goes for ||s. Maybe we want to determine if an event is coming soon:
const isUpcomingSoon = !(inThePast(event) || isBeyondThreshold(event))
// is the same as
const isUpcomingSoon = !inThePast(event) && !isBeyondThreshold(event)
Knowing De Morgan’s Theorems can help you refactor tricky conditionals into more legible code for you and your team.
Conclusion
And that’s it. Everything there is to learn about symbolic logic. Truthfully, this is just scratching the surface. I hope that there’s something in this post that inspires you and maybe helps you in a refactor or two.