2.3

Good Logic

Logic, Fallacies, Argumentation



Logic is a systematic way of making inferences, which can be useful for determining truth. It helps you to (a) identify relationships between things and (b) determine whether a conclusion should be trusted based upon what you know about those relationships. A simple example is "If students are asking questions, then they are engaged in their learning. The students are asking questions. Therefore, the students are engaged in their learning." By stringing statements together, we can make arguments, which are structured approaches to convincing others to accept a particular conclusion.

Good logic, then, might simply be said to be argumentation that follows the rules of logic by making arguments that are logically valid and sound.

Yet, there also seems to be a moral dimension of logic that should dictate not just how we use logic, but why and when ... and notably when not to rely on logic. That is, it seems that like any tool logic has both responsible and irresponsible uses.

Lest this sound too blasphemous, let's begin by exploring the Structure of an Argument, "If, Then" Statements, and Fallacies or Mistaken Logic. From there, we will discuss how logic leads to three different Types of Certainty and what this means for using logic in a responsible way.

Structure of an Argument

A logical argument is merely an attempt to persuasively prove something using logical reasoning. All arguments have at least two things: one or more premises and a conclusion. Premises are the laws, theories, facts, instances, or contexts that we assume to be true, and conclusions are what we are trying to prove by bringing up these premises in relation to one another.

When laying out an argument using formal logical notation, we list all of our premises first (one on each line) and then identify the conclusion with a therefore symbol, which is shaped like three dots placed at the vertices of a triangle, as follows: ∴

This is an example of a formal argument:

I care about all students.

Rita is a student.

∴ Therefore, I care about Rita.

In this case, "I care about all students" and "Rita is a student" are the premises, while "I care about Rita" is the conclusion.

Though most arguments are not structured in this way in typical writing and language, you can normally identify conclusions by listening for keywords that imply logical relationships, such as "therefore," "so," and "then."

"If, Then" Statements

In order for logic to work, we need statements that connect ideas and facts together. These generally take the form of "if, then" statements. I even just used one in this sentence by arguing that "if logic is to work, then we need 'if, then' statements."

"If, then" (or conditional) statements are everywhere, and we use them all the time. "If I don't get any sleep, then I won't be able to go to work in the morning." "If you don't study, then you will fail." "If you eat, then you won't be hungry." "If you treat people kindly, then they will treat you kindly." And so forth.

These types of statements allow us to structure two things in relationship to one another in a way that implies particular causal inferences. In logic, the simplest way of showing this relationship is by stating "if A, then B" or as follows:

$$ A \rightarrow B $$

By making a statement like this, we are making two causal claims. First, we are stating that A is a sufficient cause of B, meaning that A is all that it takes to cause B. It does not matter what else is happening in the universe or what other factors might be at play, if we just have A, then we will definitely have B. This does not necessarily mean that B originated from A or that B comes after A but just that if I know that A exists or is true, then I know that B also must exist or be true.

The second claim we are making is that B is a necessary cause of A, meaning that the presence of B is essential to A's existence. Again, this does not mean that B comes before A or is more fundamental or basic but just that if I know that B does not exist or is not true, then A must not exist or not be true either.
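These two readings of "if A, then B" can be checked mechanically. Below is a minimal Python sketch (the function name `implies` is our own, not standard notation) that models the conditional as "(not A) or B" and verifies both the sufficiency and necessity readings over all truth assignments:

```python
# Model the conditional A -> B as a truth function: (not A) or B.

def implies(a: bool, b: bool) -> bool:
    """Material conditional: 'if A, then B'."""
    return (not a) or b

# Sufficiency: in every row where the conditional holds and A is true, B is true.
sufficient = all(b for a in (True, False) for b in (True, False)
                 if implies(a, b) and a)

# Necessity: in every row where the conditional holds and B is false, A is false.
necessary = all(not a for a in (True, False) for b in (True, False)
                if implies(a, b) and not b)

print(sufficient, necessary)  # True True
```

Enumerating all four truth assignments is feasible here because there are only two variables; the same brute-force idea scales to any small number of statements.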

An example of this would be a statement like "if you are a person, then you have value," which we could form into logical notation as follows:

$$ Person \rightarrow Value $$

This statement implies two things. First, from a sufficiency perspective, it means that being a person is sufficient for having value. It does not matter what else a person is (teacher, student, ballerina, tax collector, octogenarian, newborn, Democrat, Republican, Armenian, Ethiopian, social justice warrior, transgender, cisgender, despot, paedophile, murderer, white supremacist, ad infinitum); this "if, then" statement means that the person has value by virtue of the simple fact that they are human.

Did that list make you at all uncomfortable? If so, it is likely because we often make strong "if, then" statements without necessarily realizing the full causal sufficiency that our statements logically imply. Do you really believe that all people have value? If not, then you might rephrase the statement to something like "if you are a person of type X" or you might alternatively use the logical power of the statement to push yourself to recognize that all people do actually have value (even if you initially bristle at the idea in some cases). This is perhaps the greatest power of logic; it allows you to take relationships between two things (such as people and value) and apply that relationship to a variety of situations (e.g., how do I react to a murderer on death row?).

The second implication of this statement involves causal necessity, and though it is a bit more abstract in this case, it means that if something does not have value, then it cannot be a person. A simpler example of this would be the statement that "if something is a human, then it is a mammal." We have historically categorized our species biologically as mammals, because, like other mammals, we are warm-blooded vertebrates who give birth to live young. Yet, with the current advent of AI and robots, we are quickly finding ourselves in situations where we talk to machines like humans and treat them like we would other humans (sometimes having difficulty even judging if a conversant is a human or a machine).

As such machines progress, will there ever be a time when we consider them to be human? Based on this "if, then" statement, being a mammal is necessary to being human, so the logical necessity of the statement is that a machine could only be considered human if it became a mammal as well (i.e., one cannot be a human and not a mammal). As pointed out previously, in this situation we can either use the "if, then" statement as the means for determining the proper definition of being human, which every AI would fail, or alternatively consider whether our original "if, then" statement is really true (or whether there might be a time when being human is no longer defined in terms of being a mammal). At any rate, if we accept this "if, then" statement, then it allows us, with strong certainty, to look at anything in the universe and say definitively that it is not human by merely determining whether or not it is a mammal, which is a very powerful claim to be able to make.

As these two examples hopefully illustrate, "if, then" statements are very powerful, because they can allow us to make strong assertions about the world based on very little knowledge (e.g., I don't know who you, the reader, are, but I know that you have value based upon no evidence other than the fact that you are human and I accept the argument above). Wielding this power to both construct and dismantle arguments, though, requires us to understand both how to apply logic according to its basic rules and how to recognize the limits of logic.

In the beginning we talked about the rules for forming an argument. Here we want to clarify some of those rules. We will first review two rules (Modus Ponens and Modus Tollens) and then two common mistakes that break those rules (Denying the Antecedent and Affirming the Consequent).

Modus Ponens or Affirming the Antecedent

The first rule of logic with "if, then" statements is that given a relationship between two things (A and B, as in an "if, then" statement), if A is a sufficient cause of B, and if A is true, then B must be true also.

$$ \displaylines{A \rightarrow B \\A \\\therefore B} $$

This logical notation should be read as follows:

If A, then B.

A (exists or is true).

Therefore, B (exists or must be true).

An example of the modus ponens in action that undergirds all of assessment in education would be the following:

If a student performs well on a test, then they must have learned the material. Juanita performed well on the test, therefore she learned the material.

In this case, because we cannot open up a person's brain and figure out whether they learned something or not, we ask them to perform a task, such as fill in some answers on a multiple-choice test. If they perform well, then we conclude that they must have learned. Alternatively, this could be written as follows:

$$ \displaylines{Perform \rightarrow Learn \\ Perform \\ \therefore Learn} $$

Modus ponens is simple but powerful, because it allows us to take a universal or theoretical statement (like the relationship between performance and learning) and to make arguments about particular instances or cases where the premises are met.
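As a sketch of why modus ponens is valid, we can brute-force every truth assignment and confirm that the conclusion holds whenever all premises hold. The helper names below (`implies`, `is_valid`) are our own illustrations, not standard notation:

```python
from itertools import product

def implies(a, b):
    """Material conditional: 'if A, then B'."""
    return (not a) or b

def is_valid(premises, conclusion, n_vars=2):
    """An argument form is valid if the conclusion is true in every
    truth assignment where all of the premises are true."""
    for values in product([True, False], repeat=n_vars):
        if all(p(*values) for p in premises) and not conclusion(*values):
            return False  # a counterexample row exists
    return True

# Modus ponens: A -> B; A; therefore B.
premises = [lambda a, b: implies(a, b), lambda a, b: a]
conclusion = lambda a, b: b
print(is_valid(premises, conclusion))  # True
```

The same `is_valid` check can be pointed at any two-variable argument form, which makes it a handy way to test the fallacious forms discussed later in the chapter.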

Modus Tollens or Denying the Consequent

Building off of this, the second rule of logic with "if, then" statements holds that given a relationship between two things (A and B), if B is a necessary cause of A, and if B is not true, then A must not be true either.

$$ \displaylines{A \rightarrow B \\\sim B \\\therefore \sim A} $$

This logical notation should be read as follows:

If A, then B.

Not B (does not exist or is not true).

Therefore, Not A (does not exist or must not be true).

To logically reframe our case from above, we could say the following:

If a student performs well on a test, then they must have learned the material. Isabella did not learn the material, therefore she will not perform well on the test.

Alternatively, this could be rewritten as follows:

$$ \displaylines{Perform \rightarrow Learn \\ \sim Learn \\ \therefore \sim Perform} $$

The modus tollens works, because if A truly is sufficient to always prove B, then there can never be a situation when B is not true while A is true. This implies a very strong causal relationship between the two things, and one of the benefits of using a modus tollens is to ask ourselves whether we really meant to suggest as strong a relationship as we did.
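Another way to see why this rule works is that "if A, then B" is logically equivalent to its contrapositive, "if not B, then not A." A minimal sketch confirming that equivalence by checking all four truth assignments (the function name `implies` is our own):

```python
# A -> B is equivalent to ~B -> ~A, which is exactly the modus tollens move.

def implies(a, b):
    """Material conditional: 'if A, then B'."""
    return (not a) or b

equivalent = all(
    implies(a, b) == implies(not b, not a)
    for a in (True, False)
    for b in (True, False)
)
print(equivalent)  # True
```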

In the example above, many people might say that performance is an indicator of learning, but do they realize that this is the same as saying that if a student does not learn then they will not perform well? Perhaps, perhaps not. However, if we do not agree with the conclusion of the modus tollens or encounter an instance when a student did not learn the material but still performed well, then it calls into question the validity of our initial claim, because if the modus tollens is not valid, then the original argument is not valid either.

Fallacies or Mistaken Logic

Some of the logical difficulties that arise in arguments come about as people misuse modus ponens and tollens or when they confuse necessary and sufficient causes. These are called formal logical fallacies, and the two most basic forms they take are "affirming the consequent" and "denying the antecedent." In addition, there is another class of informal logical fallacies that originates in a variety of reasoning errors unassociated with the actual form of the argument. We will explore each of these in turn.

Affirming the Consequent

Building off of the modus ponens above, someone might conclude that if A leads to B, then the existence or truth of B must prove A, as follows:

$$ \displaylines{A \rightarrow B \\B \\\therefore A} $$

This is logically invalid, though: just because A is sufficient to prove B, it does not follow that B is sufficient to prove A. In the case of performance and learning, this would be like saying the following:

$$ \displaylines{Perform \rightarrow Learn \\ Learn \\ \therefore Perform} $$

This misapplication of logic assumes that learning alone determines performance, when, in fact, many other factors might influence performance as well, such as the quality of the test, what you ate for breakfast, your grasp of the testing language, and so forth. In other words, though learning might be essential to performing well on a test, it is not sufficient.
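The invalidity can be demonstrated by exhibiting a counterexample row: a truth assignment in which both premises are true but the conclusion is false. A sketch using the Perform/Learn reading (variable names are ours):

```python
# Affirming the consequent: Perform -> Learn; Learn; therefore Perform.
# Search for a row where the premises hold but the conclusion fails.

def implies(a, b):
    """Material conditional: 'if A, then B'."""
    return (not a) or b

counterexamples = [
    (perform, learn)
    for perform in (True, False)
    for learn in (True, False)
    if implies(perform, learn) and learn and not perform
]
print(counterexamples)  # [(False, True)] -- learned, yet did not perform
```

The single row found corresponds exactly to the case described above: a student who learned but, for other reasons, did not perform well.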

To use another example, let us assume that the following statement is true: "Providing the expertise and resources for a quality education is quite expensive." In a nutshell, this is arguing that expenses are necessary for a quality education and that whenever you find a quality educational experience, there were expenses associated with making it happen. This may or may not be true, but let's assume that it is true for now and rewrite the argument as follows:

$$ \displaylines{Quality \rightarrow Expensive \\ Quality \\ \therefore Expensive} $$

Let us say that someone sees this argument, believes it, and then concludes that all expensive educational solutions must be high quality, as follows:

$$ \displaylines{Expensive \\ \therefore Quality} $$

This is invalid, though, because though the original argument acknowledges a relationship between expense and quality, it does not show that merely spending money increases quality (as the latter argument suggests).

To avoid this fallacy in thinking, we should remember that just because something is necessary for causing something else, it does not mean that it is sufficient. Thus, though you might need more teachers to help improve a school, providing more teachers alone will not make this happen. Similarly, though you might need access to particular technologies to improve learning, providing the technology alone will not make learning happen. And so forth.

Denying the Antecedent

The second common fallacy that arises in logical argumentation comes from assuming that a conclusion is untrue just because a premise is untrue or that if A leads to B, then the lack of A must prove the lack of B, as follows:

$$ \displaylines{A \rightarrow B \\ \sim A \\ \therefore \sim B} $$

To use the performance and learning example above, this would be like saying that if a student did not perform, then they did not learn, as follows:

$$ \displaylines{Perform \rightarrow Learn \\ \sim Perform \\ \therefore \sim Learn} $$

The problem with this is that a student might have learned the material but nonetheless failed a test (for reasons mentioned above). This fallacious argument does not logically follow from the original argument, because though learning might be necessary for performance, a lack of performance does not prove a lack of learning, because other factors might be to blame for the poor performance.
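Denying the antecedent fails for the same structural reason, which we can make concrete with a few invented student records. The names and data below are illustrative assumptions only, chosen to be consistent with "Perform → Learn":

```python
# Hypothetical records: did the student perform well, and did they learn?
students = {
    "Ana":  {"performed": True,  "learned": True},
    "Ben":  {"performed": False, "learned": True},   # learned, but the test went badly
    "Cara": {"performed": False, "learned": False},
}

# The fallacious inference "~Perform, therefore ~Learn" is contradicted by
# any student who did not perform well yet did learn.
counterexamples = [name for name, s in students.items()
                   if not s["performed"] and s["learned"]]
print(counterexamples)  # ['Ben']
```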

To use another example, we might believe that if achievement gaps exist along racial lines, then this is evidence for systemic racism as follows:

$$ \displaylines{Gaps \rightarrow Racism} $$

If we look at data from a given scenario, though, and find that no gaps exist, does this mean that systemic racism also does not exist, as follows?

$$ \displaylines{\sim Gaps \\ \therefore \sim Racism} $$

Perhaps, but perhaps not. Perhaps systemic racism evidences itself in ways other than achievement gaps, or perhaps the lack of a gap merely shows that a racially marginalized group is working harder to make up for differences in how they are treated within racially stratified systems (as many marginalized groups will argue). In any case, this version of the argument does not follow from the original argument, because it assumes that achievement gaps are the only indicators of systemic racism.

To avoid this fallacy in thinking, we should remember that just because something is a sufficient cause or a sure evidence for something else, it does not mean that it is the only cause or the only evidence for the other.

Informal Fallacies

In addition to these formal fallacies, there are many informal fallacies that reveal errors in logical reasoning. These errors take many forms and might attempt to appeal to a listener's emotions, to muddle the issue, or to sidetrack the argument. Some common informal fallacies include ad hominem, tu quoque, slippery slope, begging the question, post hoc, strawman, burden of proof, fallacy fallacy, and many others. We will briefly discuss a few here, but for more information on these and other informal fallacies, please refer to the Your Logical Fallacy Is website.

To better understand each of these fallacies, we will also provide an example using two fictitious characters in a parent-teacher conference: Graciella, who is a parent dissatisfied with her child's schooling experience, and Inez, who is a very defensive teacher. In all scenarios, Inez will provide the negative example of using the fallacy.

Ad Hominem

The ad hominem fallacy, which literally means "to the man," consists of attacking your opponent's character rather than the argument they are making.

Graciella: You marked my daughter's test wrong. She got the answer right, but you said it was wrong.

Inez: I'm sorry, but I don't believe you even completed high school. Are you really qualified to question my grading methods?

In this situation, Inez is bypassing Graciella's argument and instead attacking her credibility. Even if what Inez is saying is true, it does not change the fact that Graciella might have a sound argument.

Tu Quoque

The tu quoque fallacy, commonly called whataboutism or appeal to hypocrisy, consists of accusing your opponent's side of doing the same thing that they are accusing your side of.

Graciella: You did not provide my daughter with any help on this assignment.

Inez: You never help your daughter with any of the homework that I send home with her, so you don't have any right to accuse me of that.

In this situation, though what Inez is saying might be true and Graciella might be guilty of the same behavior she is accusing Inez of, this is irrelevant to the point that Inez is a teacher who should be providing help to students on their assignments.

Slippery Slope

The slippery slope fallacy consists of stringing together multiple causal statements that typically lead to a catastrophic conclusion, in an attempt to show that the initial step should never have been taken.

Graciella: Why did you tell my daughter she couldn't wear her hat in class?

Inez: Because wearing a hat shows disrespect for the school. If she disrespects the school, then she disrespects society. If she disrespects society, then she will commit crimes. Do you really want your daughter to end up in prison someday?

In this situation, Inez starts with a plausible premise and moves from one conclusion to another without fully substantiating each step (e.g., is disrespecting the school really the same as disrespecting society?). This allows her to end with a scary conclusion that is very far removed from the original premise.

Begging the Question

The begging the question fallacy assumes the conclusion within a premise, restating the claim in a circular way so that it is essentially treated as true by definition.

Graciella: My daughter said that her science textbook said that Pluto is a planet. Why are you teaching her things that aren't true?

Inez: Pluto is a planet, because the science textbook says it's a planet.

In this situation, it has already been established that the book said that Pluto is a planet and rather than arguing that this is true from some other source of evidence, Inez is merely restating the original premise.

Post Hoc

The post hoc (ergo propter hoc) fallacy literally means "after this (therefore because of this)" and consists of claiming that because something happened after something else, it must have resulted from it.

Graciella: My daughter failed this test even though she studied hard for it all weekend.

Inez: I saw her gossiping with some other girls before class while other students were studying, so she must not have studied that hard.

In this situation, Inez connected two true observations together (that the daughter failed and was seen gossiping before class) to make a problematic causal inference (gossiping instead of studying for that brief period caused her to fail).

Strawman

The strawman fallacy consists of making a simplistic caricature of the opposing argument, which is easier to argue against.

Graciella: I work three jobs to make ends meet. When I get home from my third job each night, my daughter has made dinner, washed the dishes, and put her siblings to bed. She's exhausted, I'm exhausted, and I just can't find the time to help her with all the homework you're giving to her. And even if I can find the time, most of the stuff is beyond me.

Inez: I think that if the two of you really cared about her future, then you could figure it out. In my mind, it all comes down to grit.

In this scenario, Inez is taking the complex situation that Graciella has explained to her (which involved time constraints, sleeping requirements, and limited expertise) and has oversimplified it to just be about grit or caring. Whenever someone says something like "it all comes down to X," you can generally assume that they are putting forth a strawman fallacy.

Burden of Proof

The burden of proof fallacy consists of claiming that your opponent must provide proof of their claims before you should be expected to provide proof of your own.

Inez: Your daughter seems to not be reading as well as she should.

Graciella: Why do you think that? She reads just fine when we read together.

Inez: Do you have any evidence that she's actually on track for her grade level?

It is obviously appropriate for a parent and teacher to consider whether a student is on track with her reading, but in this scenario, Inez assumes that the daughter is not reading well (without providing any concrete evidence) and then discounts Graciella's counter-argument by suggesting that Graciella is the one who must prove that the daughter is on track (rather than the teacher providing evidence that she might not be).

Fallacy Fallacy

The fallacy fallacy consists of claiming that your opponent's conclusion is untrue, because they used a fallacious or weak argument to argue for it.

Graciella: I don't understand how my daughter can keep failing her math tests. She's never had trouble in math before. She is very good at math! When we go to the store, she helps me keep track of how much I'm spending so that we don't go over the budget.

Inez: This is algebra. The type of math that we're doing is much more complex than simple addition. So, your argument is invalid, and this proves my point that she just must not be good at complex math.

In this scenario, Graciella did not make the strongest argument in favor of her daughter's algebraic abilities, but Inez should not conclude from this (alone) that her daughter is bad at math. Doing so ignores the fact that the daughter's math ability and Graciella's logical reasoning are completely disconnected from one another.

Three Types of Certainty

There are at least three different types of certainty that we can attain through logical argumentation: deductive, probabilistic, and inductive. Deductive certainty is most common in mathematics, moral reasoning, theology, and formal logic and essentially operates on an all-or-nothing basis: either we can be completely sure or we can't be sure at all. Probabilistic certainty is common in the social sciences and establishes certainty based on statistical formulas. And inductive certainty is common in observational sciences and everyday argumentation, wherein we use evidence to draw subjectively reasonable conclusions about the world. Each will now be discussed in turn.

Deductive

From the perspective of deductive logic, an argument is logically convincing only if it is both (1) valid and (2) sound. That is, arguments are neither true nor false but are rather only convincing (to be trusted) or unconvincing (not to be trusted).

To be deductively valid, an argument's conclusions must necessarily always follow from the premises. For instance, we might want to reveal that systemic racism exists in the U.S. educational system by showing that achievement gaps exist along racial lines. We might do this by making an argument that if Asian Americans experience an achievement gap when compared to white Americans, then systemic racism must exist as follows:

$$ \displaylines{Gap \rightarrow Racism \\ Gap \\ \therefore Racism} $$

In this case, we would have to show that every single time that a gap has existed between groups along racial lines, then this has been the result of systemic racism. If we cannot show this (or if there are compelling counter-examples to this claim such as gaps that have existed along socio-economic lines alone), then the argument will struggle to be convincing in terms of validity.

To make deductive arguments more valid, then, we often have to add new premises, which will help more carefully show the relationship between our premises and conclusion. In this case, we might add other premises that help to prevent counter-examples from emerging (e.g., "and there are no differences in socio-economic status" or "and there are no differences in levels of parental formal schooling"). In so doing, we can make the argument more convincing by showing that the relationship between achievement gaps and racism cannot be explained away by reference to other factors.

Learning Check

Consider the following argument:

If a student learns the periodic table of elements, they will pass the test. Lucy learned the periodic table. Therefore, Lucy will pass the test.

Is this argument valid?

  1. Yes
  2. No

Consider the following argument:

If the No Child Left Behind Act (NCLB) worked, then all students would be succeeding today. All students are not succeeding today. Therefore, NCLB did not work.

Is this argument valid?

  1. Yes
  2. No

Furthermore, to be sound, an argument must be valid and all of its premises must be true. For this argument to be sound, we would have to show that race-based gaps exist between all minority student groups and their white counterparts, including between Asian and white Americans. However, the argument will lose its convincing power if we analyze our data and discover that no gaps exist (or that Asian Americans out-perform their white counterparts). If the premise (i.e., that a gap exists) is not true, then it prevents us from convincingly arriving at the conclusion (i.e., that systemic racism exists). This weakens the argument. Notably, it does not disprove that systemic racism exists; the lack of a gap merely shows that this particular argument is not convincing for proving it in terms of soundness.

Because we expect deductive arguments to be both sound and valid, it can be difficult to make bulletproof arguments, because our opponents have two ways of challenging the argument. They can either (a) show that our conclusion does not always follow from the premises we have provided (i.e., not valid) or (b) show that just one of our premises is false (i.e., not sound).

Let us take the modus ponens example from above to illustrate and assume that Juanita is using this argument to convince her teacher that she learned what she was supposed to learn in class.

The easiest way to test the argument for soundness would be to simply check whether Juanita performed well on the test. If she did not perform well, then this does not necessarily mean that she did not learn, but it rather shows that the argument is just not a convincing way to show that she learned. Likely, Juanita would have determined whether the argument was sound or not before she made it and would only have attempted to make the argument if she was sure that she performed well on the test.

Even with this evidence, though, is it possible that Juanita still did not learn the material? Or in logical terms is the following possible:

$$ Perform\ \&\ \sim Learn $$

If someone can prove this, then they can show that the original relationship between learning and performance (that if a person performs, then they must have learned) is not always to be trusted. They might do this in a number of ways, such as checking to see whether another student who merely guessed "C" on every question was able to perform on par with Juanita. If so, then this would effectively destroy the argument, because they would have shown that the conclusion does not always follow from the premises in every case or that performance is not always a valid indicator of learning (e.g., in cases of cheating or poor test design).
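To make the guessing scenario concrete, we can estimate how often "Perform & ~Learn" might arise from pure chance. The numbers below (a 20-question, four-option multiple-choice test with a 70% passing bar) are illustrative assumptions, not figures from the text:

```python
# Probability that a student who guesses randomly on every question
# still passes, using the binomial distribution.
from math import comb

n, p, passing = 20, 0.25, 14  # 20 questions, 1-in-4 guess rate, 14/20 = 70%

p_pass_by_guessing = sum(
    comb(n, k) * p**k * (1 - p)**(n - k)   # P(exactly k correct)
    for k in range(passing, n + 1)          # summed over passing scores
)
print(f"{p_pass_by_guessing:.6f}")
```

On a test of this length pure guessing almost never passes, but on a short quiz (say, 4 questions) the same calculation yields a much larger chance, which is one reason test length matters for treating performance as evidence of learning.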

Probabilistic

Probabilistic certainty relies upon statistically-established relationships to help us determine to what degree we should trust a given conclusion. For instance, let us assume that a study was conducted that showed that students who used a particular program to prepare for a test did better on the test than students who did not. Such statistical tests typically have a significance score to account for random variance in the data (e.g., due to individual differences between students) and other reported measures to help us know how we can reasonably understand the results.

Within the social sciences, the p-value is used in many tests to establish the likelihood that observed differences were due to chance. Where we set this certainty threshold is quite arbitrary, but in education research it is generally set at p < .05, meaning that education researchers need to show that there is less than a 5% (or 1-in-20) chance that results at least as extreme as those observed would occur through random effects alone. Informally, researchers often gloss this as being 95% certain that the observed difference is not due to chance.
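As an illustrative sketch of where such a p-value can come from, here is a simple permutation test on invented scores for students who did and did not use a prep program (all numbers are made up for illustration):

```python
# Permutation test: how often does random group assignment produce a
# difference in mean scores at least as large as the one observed?
import random

random.seed(0)  # for reproducibility
used_program = [82, 88, 75, 91, 84, 79]
no_program   = [70, 74, 81, 68, 77, 72]

observed = sum(used_program)/len(used_program) - sum(no_program)/len(no_program)

pooled = used_program + no_program
count, trials = 0, 10_000
for _ in range(trials):
    random.shuffle(pooled)                      # random regrouping
    diff = sum(pooled[:6])/6 - sum(pooled[6:])/6
    if diff >= observed:
        count += 1

p_value = count / trials
print(p_value)  # well below .05 for these scores
```

Because the p-value here falls under the .05 threshold, a researcher working with these (invented) data would report the difference as statistically significant.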

Quite different from deductive certainty, which relies on definitions and logical implications based on the meanings of words and ideas themselves, probabilistic certainty relies upon empirically observing and measuring relationships between phenomena and using statistics to determine how certainly we can predict future events based upon past events.

Inductive

Both deductive and probabilistic certainty tend to be quite untenable in a person's day-to-day life, because we are rarely analyzing situations within closed, neatly-defined systems (as required for deductive certainty) and also rarely have the ability to statistically analyze relationships between premises and conclusions (as required for probabilistic certainty).

To illustrate, when you walk into a dark room and turn on the light switch, you expect the light to turn on. Why? Not because we can confirm that all the circuitry, conditions, and natural laws associated with such a behavior are understood and have not been violated (as with deductive certainty) and not because we have tested the light switch 100 times and have found it to turn on the light at least 95 of those times (as with probabilistic certainty).

Rather, we simply know (a) that turning on the light is what the switch is designed for and (b) that our past experiences with such switches show that they generally work. Such certainty is termed inductive, because it uses finite facts or experiences to draw conclusions in a compelling (though not infallible) way.

Is it possible that the light will not turn on? Absolutely (and the logician could skeptically point this out), but given the limited evidence it still makes sense to assume that the conclusion is probably true.

Thus, with inductive reasoning, the goal is to provide sufficient evidence to make a compelling case, even though the likelihood of the conclusion is not absolute (deductive) or quantitatively calculable (probabilistic).

An example from education might be Graciella (from the examples above) noting that her daughter has failed multiple tests in a row and feels like she cannot keep up, and concluding that her daughter will likely fail the next test as well. The daughter's performance on the fourth test is not ensured by definition (deductive) or mathematically (probabilistic) but is rather an educated guess based on the premises (previous failures and current attitude). Is it possible that the conclusion is wrong? Sure, but it nonetheless seems like a compelling argument that should not be ignored, and Graciella should be concerned about her daughter's next test performance as a result.

Responsible Logic

Finally, I will close this chapter by mentioning a few points about the responsible use of logic. Logic serves a very important purpose of unmasking assumptions, enabling skepticism, and highlighting irrationality, yet it does so in specific ways that should be understood and used responsibly.

Unmasking Assumptions

One of the great benefits of logic is that it can help us to unmask hidden assumptions that we might otherwise take for granted or that a sophist might be attempting to mask. By laying out premises and conclusions in structured ways and interrogating how one leads to another, we are able to understand more clearly where argumentative differences lie.

As an example, let us use the historic abortion debate in the U.S.

On one side of the debate, those who self-identify as "pro-choice" argue that a woman should have the right to choose what is to be done to her body and that because abortion is a procedure done on women's bodies, then women should have the right to choose that procedure or not.

On the other side of the debate, those who self-identify as "pro-life" argue that every living person has a right to life and that because living babies are people, then they should not be killed in the womb.

Each side self-identifies based upon their claimed central tenet. Pro-choice advocates argue that choice is preeminent and that those who oppose them are not respectful of women's choices over their bodies (implicitly labeling them as "anti-choice"). Pro-life advocates argue that a baby's life is preeminent and that those who oppose them are not respectful of a baby's right to life (implicitly labeling them as "anti-life"). Yet, neither of these characterizations is actually true. Pro-choice advocates may actually care deeply about the right to life of a baby, and pro-life advocates may actually care deeply about the individual rights of women.

So, the source of the conflict is not located in either of the premises (a) that women should have the right to choose what happens to their bodies or (b) that living babies have a right to life, because both pro-choice and pro-life advocates actually believe both of these things. Rather, the argument on each side rests upon an unstated premise: that unborn fetuses are, or are not, living babies. Changing this single premise changes the conclusion of the argument.

This premise is the heart of the issue, and yet, when we see political pundits or advocates arguing about the issue, they spend very little (if any) time on the crucial premise and rather spend time angrily shouting about premises that both sides agree on (i.e., women should have a right to choose, babies should have a right to life). Much of this happens simply because it is easier to win an argument by making your opponent appear to disagree with a basic truth that we all agree on (i.e., freedom and life) than it is to have a reasonable conversation about an unsubstantiated belief (i.e., what life is and when it begins).

Logic, then, can be helpful for showing that one person might truly believe in women's rights and also oppose abortion or that another person might truly believe in a baby's right to life and support abortion, which shows that the arguments we are generally having are not substantive or dealing with the actual source of the disagreement. By unmasking such contentious assumptions, we can (hopefully) spend our energies solving problems rather than creating caricatured arguments that fail to reflect the complexities of reality.

Healthy Skepticism

Logic is also helpful for encouraging us to approach arguments with a certain level of skepticism, asking whether all the premises are clear, whether they are true (sound), and whether the conclusion necessarily follows from the premises (valid). This can help keep us from believing in conclusions for which there is insufficient evidence and is necessary for approaching problems in life in a critical manner. Such a mindset I consider to be healthy skepticism, because it leads us to interrogate arguments to determine the level of certainty we can place in the conclusion based on the provided evidence.

However, skepticism can easily become unhealthy if we place our certainty thresholds too high (e.g., expecting all arguments to provide deductive certainty). In the social sciences, for example, scientists must balance between two types of errors. Type I errors are those in which the scientist has not been sufficiently skeptical and has rejected a null hypothesis that should have been retained. Type II errors, on the other hand, are those in which the scientist has been too skeptical and has failed to reject a false null hypothesis by setting a standard for verification that is too high. Typical social science errs more on the side of preventing Type I errors than Type II errors (e.g., the 95% vs. 5% split of the standard p-value), which seems reasonably appropriate. But both errors are problematic, and the unhealthy skeptic can easily fall into the trap of demanding too much evidentiary certainty before believing a conclusion and thereby commit a Type II error.
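The trade-off between the two error types can be illustrated with a short Python sketch. Using the same hypothetical binomial setup (20 students, a true improvement rate of 0.7 assumed for illustration), it shows that tightening the rejection threshold from .05 to .001 shrinks the Type I error rate but balloons the Type II error rate; all names and numbers here are illustrative assumptions, not from the chapter.

```python
from math import comb

def tail_prob(k_min, n, p):
    """P(X >= k_min) for X ~ Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_min, n + 1))

def critical_value(n, alpha, p_null=0.5):
    """Smallest success count whose tail probability under the null
    hypothesis falls at or below alpha (the rejection cutoff)."""
    for k in range(n + 1):
        if tail_prob(k, n, p_null) <= alpha:
            return k
    return n + 1

n, true_rate = 20, 0.7  # hypothetical: the effect is actually real
for alpha in (0.05, 0.001):
    k = critical_value(n, alpha)
    type1 = tail_prob(k, n, 0.5)            # rejecting although the null is true
    type2 = 1 - tail_prob(k, n, true_rate)  # retaining although the effect is real
    print(f"alpha={alpha}: reject at >= {k} successes, "
          f"Type I rate = {type1:.4f}, Type II rate = {type2:.4f}")
```

With these numbers, the stricter .001 threshold drives the Type I rate down by two orders of magnitude but pushes the Type II rate above 90%: the overly skeptical researcher almost never detects a genuinely real effect.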

To illustrate, a famous example is provided by the renowned atheist Bertrand Russell (a stance Richard Dawkins later echoed) as follows:

When asked what he would do if after he died he found himself standing in God's presence and God asked him why he didn't believe in him, Bertrand Russell quipped that he would shake his finger at God and shout "Not enough evidence, God! Not enough evidence!"

Though humorous and almost bravely skeptical, there seems to be a type of hubristic skepticism (or cynicism) at play in this type of scenario that denies truth to its face if it is not presented in the quixotic trappings that the seeker demands. Such levels of skepticism can easily lead one to deny very practical causal relationships in argumentation, such as that flicking a light switch actually caused a light to turn on (as in Humean skepticism), even though the skeptic actually uses light switches on a daily basis (implying some practical belief in the relationship). In the example above, denying the existence of God to his face would be like denying the existence of any other person that you saw face to face, such as a friend or mother. Such behavior does not reveal enlightened scientific reasoning but rather absurd and insincere resistance to reasonable truths that you already accept (in this scenario, would Russell respond to God, if he did not acknowledge his existence?). To be clear, here I am not making an argument for the existence of God but am merely responding to this famous scenario which holds as its premise that the skeptic is standing in God's presence and is talking to him.

Similarly, a true empiricist, for example, might faithfully quip that "nothing should be believed unless it is based in sufficient, externally verifiable, objective evidence." The problem with such a belief is that the maxim itself is not based in empirical evidence (where in the world was this maxim originally observed?), and if it was observed, then why should it be believed if the maxim was not already believed (cf., Quine's critique of logical positivism and the analytic-synthetic distinction)? To say that logical fallacies should be avoided because logical fallacies are bad is itself a fallacy, and it would be quite difficult to construct a logical argument showing that logic should be believed (and even if we did, such an argument would be circular).

All of this is to say that healthy skepticism is a valuable logical asset to critical thinkers, but such skepticism should serve a reasonable purpose. Rather than being used as a blunt hammer to skeptically deny everything, logic should be used as a precise scalpel to dissect arguments, to understand underlying evidence and relationships, and to make reasonable certainty demands. In other words, logic is a tool to be used, not a master to be served, and if logic or critical reasoning ever prevents us from reasonably doing good or moving forward in valuable ways, then we should (like the social scientist) accept a slim chance of succumbing to a Type I error rather than fall prey to the ever-present Type II error of cynicism.

Human Irrationality and Ethics

And finally, logic exists as a field of study, and we are talking about it now, because it is not natural to humans. Humans are naturally irrational, and logic serves as a foil to this. Many of the informal fallacies mentioned above are successful because they are irrational. For instance, they appeal to our prejudicial nature, such as the ad hominem, or they appeal to fear, such as the slippery slope. Logic can serve as a way to overcome these irrational aspects of our natural selves and overcome prejudice, fear, and various other vices through reason. This is a clear benefit of logic, and it can be very useful for helping us to become better people by clearly considering our motivations, assumptions, and outlooks on the world.

Yet, is all irrationality bad? Should it all be overcome?

Some would say so and suggest that the logical life and the good life are synonymous. Yet, can't a person irrationally do good? And aren't there plenty of good actions that are irrational?

I irrationally love and care for my children, and though I could probably come up with some kind of logical argument for why I should, does that love and care need to be rationalized to be valid? Or do we seek to rationalize such things only when we fail to do them well (e.g., in the case of a neglectful father)? And if we cannot rationalize the need to be a good parent, then does that absolve us of the need to be one?

There are plenty of things we have not proven rationally to ourselves that we believe implicitly and claim others should believe as well (e.g., parents should care for their children, all people are created equal, justice is a virtuous goal, everyone deserves a chance, first do no harm), but if we really applied skeptical reasoning and attempted to prove such things, we would find the task much more difficult than it may initially appear.

I bring this up only to point out that logic is often used as a weapon to critique the irrationality in others while ignoring the irrationality persistent in all of us, and it seems that the difference between leading a good life and a lesser life has more to do with the actual things we do than whether we are doing those things for rational reasons. The serial killer can be quite rational, and the philanthropist can be quite irrational.

This is in no way intended to be a treatise on ethics, but I merely bring this up to point out that logic itself does not seem to be the doorway to a good and ethical life and that we should be careful not to operate on the assumption that all irrationality is necessarily bad and all rationality is necessarily good.

As suggested previously, reason, logic, and skepticism are tools to be used toward an end. That end can be noble or nefarious; the fact that we use logic in no way suggests that we are acting rightly.

Royce Kimmons

Brigham Young University

Royce Kimmons is an Associate Professor of Instructional Psychology and Technology at Brigham Young University where he seeks to end the effects of socioeconomic divides on educational opportunities through open education and transformative technology use. He is the founder of EdTechBooks.org, open.byu.edu, and many other sites focused on providing free, high-quality learning resources to all. More information about his work may be found at http://roycekimmons.com, and you may also dialogue with him on Twitter @roycekimmons.

This content is provided to you freely by EdTech Books.

Access it online or download it at https://edtechbooks.org/rapidwriting/good_logic.