What makes a strong inductive argument?
Consider two inductive arguments with the same conclusion. In the first, a few untrained observers spend a single day in a forest, see no hummingbirds, and conclude that there are probably no hummingbirds there. In the second, trained observers watch the same forest carefully for many weeks and see no hummingbirds.
Therefore, probably there are no hummingbirds in this forest. The conclusion is still only probable, though the probability that it is true is much higher, because the second argument rests on better evidence: more evidence, more observations, and observations made by trained observers. Take-away point: note the important message of these two examples. Both arguments are inductive and both have uncertain conclusions, but the second inductive argument is clearly stronger than the first.

It has more evidence: many weeks of observation by trained observers versus only one day by untrained observers. The second argument also has a sweeping generalization as its conclusion, but that conclusion has a higher probability and involves less risk. We want less risk, yet we must use induction every day, so we should learn how to reduce the risk attached to our inductive conclusions. All inductive arguments involve a generalization in the move to the conclusion and thus involve risk and probability.

True or false?

- All deductive arguments provide conclusive evidence for their conclusions.
- All inductive arguments provide the best available evidence for their conclusions.
- Some inductive arguments are valid.
- All inductive arguments involve risk and have uncertain conclusions, but some are stronger than others.
- We can have a sharp cutoff between strong and weak inductive arguments, just as we can have a sharp cutoff between valid and invalid deductive arguments.

For instance, we noted one inductive possibility (a probability only) for the first man, x: if only three hats are used (one on x, one on y, and one on z), then we know x has a higher probability of having on a red hat if he sees two white hats, one on y and one on z.

We can be smart using inductive reasoning even though we do not have certainty for our inference, and we can also be dumb and settle for a lower probability. If x were allowed to choose based on probability, he would be smart to choose red and not very smart to choose white. Choosing red is wiser because the probability is higher: there are two red hats left but only one white. Any inductive logic that treats such arguments should address two challenges.
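A minimal sketch of that calculation in Python, assuming (as the passage suggests) that once y and z are seen wearing white hats, the equally likely candidates for x's own hat are two red hats and one white hat:

```python
# Hypothetical reconstruction of the hat example: x sees white hats on y
# and z, and (by assumption here) the remaining equally likely candidates
# for x's own hat are two red hats and one white hat.
candidates = ["red", "red", "white"]

p_red = candidates.count("red") / len(candidates)
p_white = candidates.count("white") / len(candidates)

print(f"P(x's hat is red)   = {p_red:.2f}")    # 0.67
print(f"P(x's hat is white) = {p_white:.2f}")  # 0.33
```

On these assumptions, betting on red is twice as likely to be right as betting on white, which is exactly why choosing red is the smarter inductive inference.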

In particular, it should tell us how to determine the appropriate degree p to which such premises inductively support the conclusion, for a given margin of error q. That is, it should be provable as a metatheorem that if a conclusion expressing the approximate proportion for an attribute in a population is true, then it is very likely that sufficiently numerous random samples of the population will provide true premises for good inductive arguments that confer degrees of support p approaching 1 for that true conclusion—where, on pain of triviality, these sufficiently numerous samples are only a tiny fraction of a large population.
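The convergence claim can be given an illustrative feel with a small simulation (this is only an illustration of sample-frequency convergence, not the Bayesian metatheorem itself; the population and proportion are assumed for the sketch):

```python
import random

# A large population in which 30% of members have attribute Q.
# Larger random samples tend to yield observed frequencies near the
# true proportion -- the fact that convergence results rely on.
random.seed(0)
TRUE_PROPORTION = 0.3
POPULATION = [1] * 30_000 + [0] * 70_000  # 1 = has attribute Q

for n in (10, 100, 1_000, 10_000):
    sample = random.sample(POPULATION, n)
    freq = sum(sample) / n
    print(f"sample size {n:>6}: observed frequency {freq:.3f} "
          f"(true proportion {TRUE_PROPORTION})")
```

Note that even the largest sample here is a tiny fraction of the population, echoing the "on pain of triviality" proviso above.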

The supplement Enumerative Inductions: Bayesian Estimation and Convergence shows precisely how a Bayesian account of enumerative induction may meet these two challenges. Enumerative induction is, however, rather limited in scope. This form of induction applies only to the support of claims expressible as simple universal conditionals, i.e., claims of the form "All As are Bs". But many important empirical hypotheses are not reducible to this simple form, and the evidence for these hypotheses is not composed of an enumeration of such instances.

Consider, for example, the Newtonian Theory of Mechanics:

- All objects remain at rest or in uniform motion unless acted upon by some external force.
- An object's acceleration is proportional to the net force acting upon it and inversely proportional to its mass.
- If an object exerts a force on another object, the second object exerts an equal amount of force on the first object, but in the opposite direction to the force exerted by the first object.

The evidence for and against this theory is not obtained by examining a randomly selected subset of objects and the forces acting upon them. Rather, the theory is tested by calculating what it says or implies about observable phenomena in a wide variety of specific situations. This approach to testing hypotheses and theories is ubiquitous, and should be captured by an adequate inductive logic. More generally, for a wide range of cases where inductive reasoning is important, enumerative induction is inadequate.

Rather, the kind of evidential reasoning that judges the likely truth of hypotheses on the basis of what they say or imply about the evidence is more appropriate. Consider the kinds of inferences jury members are supposed to make, based on the evidence presented at a murder trial. The inference to probable guilt or innocence is based on a patchwork of evidence of various kinds.

It almost never involves consideration of a randomly selected sequence of past situations in which people like the accused committed similar murders. Or consider how a doctor diagnoses her patient on the basis of his symptoms. Although the frequency of occurrence of various diseases when similar symptoms have been present may play a role, this is clearly not the whole story. Diagnosticians commonly employ a form of hypothesis evaluation, asking how well each candidate diagnosis would account for the observed symptoms.

Thus, a fully adequate account of inductive logic should explicate the logic of hypothesis evaluation, through which a hypothesis or theory may be tested on the basis of what it says or "predicts" about observable phenomena. In Section 3 we will see how a kind of probabilistic inductive logic called "Bayesian Inference" or "Bayesian Confirmation Theory" captures such reasoning. The full logical structure of such arguments will be spelled out in that section.
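To preview the shape of that logic, here is a toy sketch (all names and numbers are assumed for illustration, not taken from the text): Bayes' theorem applied to a simple diagnostic problem, where a hypothesis's posterior probability depends on its prior and on how strongly it predicts the evidence.

```python
# Toy Bayesian hypothesis evaluation with illustrative numbers:
# two rival hypotheses about a patient, updated on one symptom.
priors = {"flu": 0.10, "cold": 0.90}       # P(h): prior plausibility
likelihoods = {"flu": 0.90, "cold": 0.20}  # P(evidence | h)

# Total probability of the evidence, then Bayes' theorem for each h.
p_evidence = sum(priors[h] * likelihoods[h] for h in priors)
posteriors = {h: priors[h] * likelihoods[h] / p_evidence for h in priors}

for h, p in posteriors.items():
    print(f"P({h} | evidence) = {p:.3f}")  # flu: 0.333, cold: 0.667
```

The hypothesis that better predicts the evidence gains support relative to its rival, which is the comparative evaluation described above.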

Perhaps the oldest and best understood way of representing partial belief, uncertain inference, and inductive support is in terms of probability and the equivalent notion of odds. Mathematicians have studied probability for over 350 years, but the concept is certainly much older.
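The two representations are interchangeable; in standard notation,

$$\text{odds}(A) = \frac{P(A)}{1 - P(A)}, \qquad P(A) = \frac{\text{odds}(A)}{1 + \text{odds}(A)},$$

so, for example, a probability of 0.8 corresponds to odds of 4 to 1.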

In recent times a number of other, related representations of partial belief and uncertain inference have emerged. Some of these approaches have found useful application in computer-based artificial intelligence systems that perform inductive inferences in expert domains such as medical diagnosis. Nevertheless, probabilistic representations have predominated in such application domains. So, in this article we will focus exclusively on probabilistic representations of inductive support.

A brief comparative description of some of the most prominent alternative representations of uncertainty and support-strength can be found in the supplement Some Prominent Approaches to the Representation of Uncertain Inference. The mathematical study of probability originated with Blaise Pascal and Pierre de Fermat in the mid-17th century. From that time through the early 19th century, as the mathematical theory continued to develop, probability theory was primarily applied to the assessment of risk in games of chance and to drawing simple statistical inferences about characteristics of large populations.

In the early 19th century Pierre-Simon de Laplace made further theoretical advances and showed how to apply probabilistic reasoning to a much wider range of scientific and practical problems. Since that time probability has become an indispensable tool in the sciences, business, and many other areas of modern life. Throughout the development of probability theory various researchers appear to have thought of it as a kind of logic.

John Venn followed two decades later with an alternative, empirical frequentist account of probability in The Logic of Chance. Not long after that the whole discipline of logic was transformed by new developments in deductive logic. In the late 19th and early 20th centuries Frege, followed by Russell and Whitehead, showed how deductive logic may be represented in the kind of rigorous formal system we now call quantified predicate logic.

For the first time logicians had a fully formal deductive logic powerful enough to represent all valid deductive arguments that arise in mathematics and the sciences. In this logic the validity of deductive arguments depends only on the logical structure of the sentences involved. This development in deductive logic spurred some logicians to attempt to apply a similar approach to inductive reasoning.

The idea was to extend the deductive entailment relation to a notion of probabilistic entailment for cases where premises provide less than conclusive support for conclusions.

Attempts to develop such a logic vary somewhat with regard to the ways in which they attempt to emulate the paradigm of formal deductive logic. Some inductive logicians have tried to follow the deductive paradigm by attempting to specify inductive support probabilities solely in terms of the syntactic structures of premise and conclusion sentences.

In deductive logic the syntactic structure of the sentences involved completely determines whether premises logically entail a conclusion. So these inductive logicians have attempted to follow suit. In such a system each sentence confers a syntactically specified degree of support on each of the other sentences of the language. Thus, the inductive probabilities in such a system are logical in the sense that they depend on syntactic structure alone.

This kind of conception was articulated to some extent by John Maynard Keynes in his Treatise on Probability (1921). Rudolf Carnap pursued the idea with greater rigor in his Logical Foundations of Probability (1950) and in several subsequent works. Such approaches might well be called Bayesian logicist inductive logics.

Other prominent Bayesian logicist attempts to develop a probabilistic inductive logic include the works of Jeffreys, Jaynes, and Rosenkrantz. It is now widely held that the core idea of this syntactic approach to Bayesian logicism is fatally flawed—that syntactic logical structure cannot be the sole determiner of the degree to which premises inductively support conclusions.

A crucial facet of the problem faced by syntactic Bayesian logicism involves how the logic is supposed to apply in scientific contexts where the conclusion sentence is some scientific hypothesis or theory, and the premises are evidence claims.

The difficulty is that in any probabilistic logic that satisfies the usual axioms for probabilities, the inductive support for a hypothesis must depend in part on its prior probability. This prior probability arguably represents how plausible the hypothesis is taken to be on the basis of considerations other than the observational and experimental evidence.

A syntactic Bayesian logicist must tell us how to assign values to these pre-evidential prior probabilities of hypotheses in a way that relies only on the syntactic logical structure of the hypothesis, perhaps based on some measure of syntactic simplicity. There are severe problems with getting this idea to work.

Various kinds of examples seem to show that such an approach must assign intuitively quite unreasonable prior probabilities to hypotheses in specific cases (see the footnote cited near the end of Section 3). Furthermore, for this idea to apply to the evidential support of real scientific theories, scientists would have to formalize theories in a way that makes their relevant syntactic structures apparent, and then evaluate theories solely on that syntactic basis together with their syntactic relationships to evidence statements.

Are we to evaluate alternative theories of gravitation, and alternative quantum theories, this way? This seems an extremely dubious approach to the evaluation of real scientific hypotheses and theories. Thus, it seems that logical structure alone may not suffice for the inductive evaluation of scientific hypotheses. At about the time that the syntactic Bayesian logicist idea was developing, an alternative conception of probabilistic inductive reasoning was also emerging.

This approach is now generally referred to as the Bayesian subjectivist or personalist approach to inductive reasoning. It was originally developed as part of a larger normative theory of belief and action known as Bayesian decision theory.

Bayesian subjectivists provide a logic of decision that captures this idea, and they attempt to justify this logic by showing that in principle it leads to optimal decisions about which of various risky alternatives should be pursued. On the Bayesian subjectivist or personalist account of inductive probability, inductive probability functions represent the subjective or personal belief-strengths of ideally rational agents, the kind of belief strengths that figure into rational decision making.
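As a hedged illustration of that decision logic (the acts, states, and numbers below are assumed for the sketch), the standard rule is to choose the act with the highest expected utility, weighting each outcome's utility by the agent's degree of belief that the relevant state obtains:

```python
# Expected-utility comparison: each act maps possible states to
# utilities; beliefs assign the agent's probability to each state.
beliefs = {"rain": 0.3, "no_rain": 0.7}
acts = {
    "take umbrella":  {"rain": 5,   "no_rain": 3},
    "leave umbrella": {"rain": -10, "no_rain": 4},
}

def expected_utility(outcomes: dict, beliefs: dict) -> float:
    """Belief-weighted average of an act's utilities."""
    return sum(beliefs[state] * u for state, u in outcomes.items())

for act, outcomes in acts.items():
    print(f"EU({act}) = {expected_utility(outcomes, beliefs):.2f}")
# EU(take umbrella)  = 3.60
# EU(leave umbrella) = -0.20  -> taking the umbrella is the rational act
```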

See the section on subjective probability in the entry on interpretations of the probability calculus, in this Encyclopedia. Elements of a logicist conception of inductive logic live on today as part of the general approach called Bayesian inductive logic. In this article the probabilistic inductive logic we will examine is a Bayesian inductive logic in this broader sense.

This logic will not presuppose the subjectivist Bayesian theory of belief and decision, and will avoid the objectionable features of the syntactic version of Bayesian logicism. We will see that there are good reasons to distinguish inductive probabilities from degree-of-belief probabilities and from purely syntactic logical probabilities.

So, the probabilistic logic articulated in this article will be presented in a way that depends on neither of these conceptions of what the probability functions are. However, this version of the logic will be general enough that it may be fitted to a Bayesian subjectivist or Bayesian syntactic-logicist program, if one desires to do that. All logics derive from the meanings of terms in sentences. What we now recognize as formal deductive logic rests on the meanings (i.e., the truth-functional properties) of the logical terms.

These logical terms include not, and, or, if-then, and the quantifiers all and some. The logical validity of deductive arguments depends neither on the meanings of the name and predicate and relation terms, nor on the truth-values of sentences containing them.

It merely supposes that these non-logical terms are meaningful, and that sentences containing them have truth-values. Deductive logic then tells us that the logical structures of some sentences—i. This is the notion of logical inconsistency. The notion of logical entailment is inter-definable with it. A collection of premise sentences logically entails a conclusion sentence just when the negation of the conclusion is logically inconsistent with those premises.
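In standard notation the inter-definition just described reads:

$$B_1, \dots, B_n \models A \quad \text{if and only if} \quad \{B_1, \dots, B_n, \neg A\} \text{ is logically inconsistent.}$$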

An inductive logic must, it seems, deviate from the paradigm provided by deductive logic in several significant ways. For one thing, logical entailment is an absolute, all-or-nothing relationship between sentences, whereas inductive support comes in degrees-of-strength.

For another, although the notion of inductive support is analogous to the deductive notion of logical entailment, and is arguably an extension of it, there seems to be no inductive-logic extension of the notion of logical inconsistency—at least none that is inter-definable with inductive support in the way that logical inconsistency is inter-definable with logical entailment. Another notable difference is that when B logically entails A, adding a premise C cannot undermine the logical entailment—i.e., B together with C still logically entails A.

This property of logical entailment is called monotonicity. But inductive support is nonmonotonic: even when B supports A to a high degree, adding a further premise C may drastically lower the degree to which B and C together support A. In a formal treatment of probabilistic inductive logic, inductive support is represented by conditional probability functions defined on sentences of a formal language L.

These conditional probability functions are constrained by certain rules or axioms that are sensitive to the meanings of the logical terms (i.e., not, and, or, etc.). The axioms apply without regard for what the other terms of the language may mean.

Although each support function satisfies these same axioms, the further issue of which among them provides an appropriate measure of inductive support is not settled by the axioms alone.

That may depend on additional factors, such as the meanings of the non-logical terms (i.e., the names and predicate expressions) of the language. A good way to specify the axioms of the logic of inductive support functions is as follows; a reconstruction of the axioms in standard notation is sketched below, after some preliminary remarks. These axioms are apparently weaker than the usual axioms for conditional probabilities. For instance, the usual axioms assume that conditional probability values are restricted to real numbers between 0 and 1. The following axioms do not assume this, but only that support functions assign some real numbers as values for support strengths.

However, it turns out that the following axioms suffice to derive all the usual axioms for conditional probabilities, including the usual restriction to values between 0 and 1. We draw on these weaker axioms only to forestall some concerns about whether the support function axioms may assume too much, or may be overly restrictive.
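The informal glosses in the paragraphs below pin the first five axioms down fairly closely; rendered in standard notation they run roughly as follows (a hedged reconstruction, writing $P[A \mid B]$ for the degree to which premise B supports conclusion A):

$$
\begin{aligned}
&\text{1. (Non-triviality)}\quad P[A \mid B] \neq P[C \mid D] \ \text{for at least one choice of sentences;}\\
&\text{2. (Entailment)}\quad \text{if } B \models A \text{, then } P[A \mid B] = 1;\\
&\text{3. (Equivalence)}\quad \text{if } B \text{ and } C \text{ are logically equivalent, then } P[A \mid B] = P[A \mid C];\\
&\text{4. (Additivity)}\quad \text{if } C \models \neg(A \land B) \text{, then } P[A \lor B \mid C] = P[A \mid C] + P[B \mid C],\\
&\qquad\qquad \text{unless } P[D \mid C] = 1 \text{ for every sentence } D;\\
&\text{5. (Product rule)}\quad P[A \land B \mid C] = P[A \mid B \land C] \times P[B \mid C].
\end{aligned}
$$

(Axioms 6 and 7, the two supplementary axioms, are discussed later in this section.)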

This axiomatization takes conditional probability as basic, as seems appropriate for evidential support functions. Notice that conditional probability functions apply only to pairs of sentences, a conclusion sentence and a premise sentence. So, in probabilistic inductive logic we represent finite collections of premises by conjoining them into a single sentence: rather than conditioning on the premises B1, ..., Bn listed separately, we write $P[A \mid B_1 \land B_2 \land \cdots \land B_n]$. The above axioms are quite weak.

For instance, they do not say that logically equivalent sentences are supported by all other sentences to the same degree; rather, that result is derivable from these axioms (see result 6 below). Nor do these axioms say that logically equivalent sentences support all other sentences to the same degree; rather, that result is also derivable (see result 8 below). Indeed, from these axioms all of the usual theorems of probability theory may be derived. The following results are particularly useful in probabilistic logic.

Their derivations from these axioms are provided in note 2. Let us now briefly consider each axiom to see how plausible it is as a constraint on a quantitative measure of inductive support, and how it extends the notion of deductive entailment. It turns out that all support values must lie between 0 and 1, but this follows from the axioms, rather than being assumed by them. The scaling of inductive support via the real numbers is surely a reasonable way to go.

Axiom 1 is a non-triviality requirement. It says that the support values cannot be the same for all sentence pairs. This axiom merely rules out the trivial support function that assigns the same amount of support to each sentence by every sentence.

One might replace this axiom with a stronger rule; but such an alternative turns out to be derivable from axiom 1 together with the other axioms. Axiom 2 asserts that when B logically entails A, the support of A by B is as strong as support can possibly be. This comports with the idea that an inductive support function is a generalization of the deductive entailment relation, where the premises of deductive entailments provide the strongest possible support for their conclusions.

Axiom 3 is an especially weak axiom. But taken together with the other axioms, it suffices to entail that logically equivalent sentences support all sentences to precisely the same degree.

Axiom 4 says that inductive support adds up in a plausible way. When C logically entails the incompatibility of A and B, i.e., that A and B cannot both be true, the support C gives to their disjunction is the sum of the supports it gives to each separately. The only exception is in those cases where C acts like a logical contradiction and supports all sentences to the maximum possible degree (in deductive logic a logical contradiction logically entails every sentence). Axiom 5 can be understood by thinking of support in terms of proportions among the possible states of affairs in which the premise is true. Read this way, axiom 5 then says the following. Suppose B is true in proportion q of all the states of affairs where C is true, and suppose A is true in fraction r of those states where B and C are true together.

Then in what proportion of all the states where C is true should A and B be true together? Plainly, in fraction r of the proportion q, i.e., in proportion q × r, and this is just what axiom 5 requires: P[A∧B | C] = P[A | B∧C] × P[B | C] = r × q. The degree to which a sentence B supports a sentence A may well depend on what these sentences mean.

In particular it will usually depend on the meanings we associate with the non-logical terms (those terms other than the logical terms not, and, or, etc.). For example, given that "b is a bachelor" analytically entails "b is unmarried", we should want the former to support the latter to the maximal degree. However, evidential support functions should not presuppose meaning assignments in the sense of so-called secondary intensions, e.g., the kind of assignment on which "water" means H2O, a meaning competent speakers may fail to know.

Thus, the meanings of terms we associate with a support function should only be their primary intensions, not their secondary intensions. In the context of inductive logic it makes good sense to supplement the above axioms with two additional axioms. The first of them, axiom 6, in effect requires that non-contingent (e.g., analytic) truths be supported to the maximal degree by every premise. The idea behind axiom 6 is that inductive logic is about evidential support for contingent claims.

Nothing can count as empirical evidence for or against non-contingent truths. In particular, analytic truths should be maximally supported by all premises C. One important respect in which inductive logic should follow the deductive paradigm is that the logic should not presuppose the truth of contingent statements. If a statement C is contingent, then some other statements should be able to count as evidence against C; a logic on which some contingent statement received maximal support from every premise would, in effect, presuppose that statement's truth. This is no way for an inductive logic to behave.

The whole idea of inductive logic is to provide a measure of the extent to which premise statements indicate the likely truth-values of contingent conclusion statements. Probability assignments that treat contingent claims as certain would make the inductive logic enthymematic by hiding significant premises in inductive support relationships. It would be analogous to permitting deductive arguments to count as valid in cases where the explicitly stated premises are insufficient to logically entail the conclusion, but where the validity of the argument is permitted to depend on additional unstated premises.

This is not how a rigorous approach to deductive logic should work, and it should not be a common practice in a rigorous approach to inductive logic. Nevertheless, it is common practice for probabilistic logicians to sweep provisionally accepted contingent claims under the rug by assigning them probability 1 regardless of the fact that no explicit evidence for them is provided. Although this convention is useful, such probability functions should be considered mere abbreviations for proper, logically explicit, non-enthymematic, inductive support relations.

Some Bayesian logicists have proposed that an inductive logic might be made to depend solely on the logical form of sentences, as is the case for deductive logic. The idea is, effectively, to supplement axioms 1—7 with additional axioms that depend only on the logical structures of sentences, and to introduce enough such axioms to reduce the number of possible support functions to a single uniquely best support function.

It is now widely agreed that this project cannot be carried out in a plausible way. Perhaps support functions should obey some rules in addition to axioms 1–7. But it is doubtful that any plausible collection of additional rules can suffice to determine a single, uniquely qualified support function. Later, in Section 3, we will briefly return to this issue, after we develop a more detailed account of how inductive probabilities capture the relationship between hypotheses and evidence.

Axioms 1–7 for conditional probability functions merely place formal constraints on what may properly count as a degree-of-support function. By analogy, the issue of which of the possible truth-value assignments to a language represents the actual truth or falsehood of its sentences depends on more than the formal rules of deductive logic: it depends on the meanings of the non-logical terms and on the state of the actual world.

Similarly, the degree to which some sentences actually support others in a fully meaningful language must rely on something more than the mere satisfaction of the axioms for support functions.

It must, at least, rely on what the sentences of the language mean, and perhaps on much more besides. But, what more? Perhaps a better understanding of what inductive probability is may provide some help by filling out our conception of what inductive support is about.

On a logicist reading, a support function reflects a measure on possible states of affairs, but there will not generally be a single privileged way to define such a measure. This idea needs more fleshing out, of course. The next section will provide some indication of how that might go. Subjectivist Bayesians offer an alternative reading of the support functions, on which a support function represents the belief-strengths of an ideally rational agent: the degree to which the agent believes the conclusion, given the premises. Subjectivist Bayesians usually tie such belief strengths to how much money or how many units of utility the agent would be willing to bet on A turning out to be true. Roughly, the idea is this.

These relationships between belief-strengths and the desirability of outcomes (e.g., the money won or lost on bets) lie at the heart of the subjectivist account. Subjectivist Bayesians usually take inductive probability to just be this notion of probabilistic belief-strength.

Undoubtedly real agents do believe some claims more strongly than others. And, arguably, the belief strengths of real agents can be measured on a probabilistic scale between 0 and 1, at least approximately.

In any case, some account of what support functions are supposed to represent is clearly needed. The belief function account and the logicist account in terms of measures on possible states of affairs are two attempts to provide this account. But let us put this interpretative issue aside for now. One may be able to get a better handle on what inductive support functions really are after one sees how the inductive logic that draws on them is supposed to work.

One of the most important applications of an inductive logic is its treatment of the evidential evaluation of scientific hypotheses. The logic should capture the structure of evidential support for all sorts of scientific hypotheses, ranging from simple diagnostic claims to complex scientific theories. This section will show how evidential support functions, a.k.a. Bayesian confirmation functions, represent the evidential evaluation of scientific hypotheses and theories.

This logic is essentially comparative. The evaluation of a hypothesis depends on how strongly evidence supports it over alternative hypotheses. The collection of alternatives may be very simple, e.g., a hypothesis together with its negation. Whenever two variants of a hypothesis or theory differ in empirical import, they count as distinct hypotheses.

This should not be confused with the converse positivistic assertion that theories with the same empirical content are really the same theory. The collection of competing hypotheses or theories to be evaluated by the logic may be finite in number, or may be countably infinite. No realistic language contains more than a countable number of expressions; so it suffices for a logic to apply to a countably infinite number of sentences. From a purely logical perspective the collection of competing alternatives may consist of every rival hypothesis or theory about a given subject matter that can be expressed within a given language.

In practice, alternative hypotheses or theories will often be constructed and evidentially evaluated over a long period of time. The logic of evidential support works in much the same way regardless of whether all alternative hypotheses are considered together, or only a few alternative hypotheses are available at a time. Evidence for scientific hypotheses consists of the results of specific experiments or observations. The logical connection between scientific hypotheses and the evidence often requires the mediation of background information and auxiliary hypotheses.

Typically, each of the alternative hypotheses under consideration draws on the same background and auxiliaries to connect logically to the evidential events. This method of theory evaluation is called the hypothetico-deductive approach to evidential support. Duhem and Quine are generally credited with alerting inductive logicians to the importance of auxiliary hypotheses in connecting scientific hypotheses and theories to empirical evidence.

See the entry on Pierre Duhem. They point out that scientific hypotheses often make little contact with evidence claims on their own. Rather, in most cases scientific hypotheses make testable predictions only relative to background information and auxiliary hypotheses that tie them to the evidence.
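Schematically, with h the hypothesis under test, b background information, a the auxiliary hypotheses, and e an evidential claim, the hypothetico-deductive pattern is:

$$(h \land b \land a) \models e,$$

so when e fails to obtain, what is disconfirmed is the whole conjunction, not h alone; this is precisely Duhem's and Quine's point about auxiliaries.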

Some specific examples of such auxiliary hypotheses will be provided in the next subsection. If an athlete performed well on a day when they wore their socks inside out, they may conclude that the inside-out socks brought them luck. If future successes happen when they again wear their socks inside out, the belief may strengthen. Should that not be the case, they may update their belief and recognize that it is incorrect. Similarly, a turkey that is fed every morning may infer that it will always be fed; only when Thanksgiving rolls around does that assumption prove incorrect.

The issue with overusing inductive reasoning is that cognitive shortcuts and biases can warp the conclusions we draw. Our world is not always as predictable as inductive reasoning suggests, and we may selectively draw upon past experiences to confirm a belief.

Someone who reasons inductively that they have bad luck may recall only unlucky experiences to support that hypothesis and ignore instances of good luck. In inductive arguments, focus on the inference. When a conclusion relies upon an inference and contains new information not found in the premises, the reasoning is inductive.

For example, if premises were established that the defendant slurred his words, stumbled as he walked, and smelled of alcohol, you might reasonably infer the conclusion that the defendant was drunk.

This is inductive reasoning. In an inductive argument the conclusion is, at best, probable; it is not always true when the premises are true. The probability of the conclusion depends on the strength of the inference from the premises. Thus, when dealing with inductive reasoning, pay special attention to the inductive leap, or inference, by which the conclusion follows from the premises.

On a daily basis we draw inferences such as how a person will probably act, what the weather will probably be like, and how a meal will probably taste, and these are typical inductive inferences. Induction can be studied by asking young children simple questions involving cartoon pictures, or by giving adults a variety of complex verbal arguments and asking them to make probability judgments.

For example, much of the study of induction has been concerned with category-based induction, such as inferring that your next-door neighbor sleeps on the basis that your neighbor is a human animal, even if you have never seen your neighbor sleeping. Deduction begins with a broad truth (the major premise), such as the statement that all men are mortal. This is followed by the minor premise, a more specific statement, such as that Socrates is a man.

A conclusion follows: Socrates is mortal. If the major premise is true and the minor premise is true, the conclusion cannot be false. Deductive reasoning is black and white; a conclusion is either true or false and cannot be partly true or partly false.

We decide whether a deductive argument is valid by assessing the strength of the link between the premises and the conclusion. If all men are mortal and Socrates is a man, there is no way he can fail to be mortal, for example. There is no situation in which the premises are true and the conclusion false, so the conclusion must be true.
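That "no counterexample" idea can be made concrete with a tiny model check in Python (a sketch: one individual, two predicates, exhaustive enumeration of the possible cases):

```python
from itertools import product

# Enumerate every truth-value combination for one individual's properties.
# Deductive validity: in every case where the premises hold, so does the
# conclusion.
for is_man, is_mortal in product([True, False], repeat=2):
    premise1 = (not is_man) or is_mortal  # "all men are mortal", applied here
    premise2 = is_man                     # "Socrates is a man"
    if premise1 and premise2:
        assert is_mortal                  # the conclusion cannot be false
print("No countermodel found: the argument is valid.")
```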

In science, deduction is used to reach conclusions believed to be true. A hypothesis is formed; then evidence is collected to support it. If observations support its truth, the hypothesis is confirmed.

Science also involves inductive reasoning when broad conclusions are drawn from specific observations; data leads to conclusions. If the data shows a tangible pattern, it will support a hypothesis. For example, having seen ten white swans, we could use inductive reasoning to conclude that all swans are white. This hypothesis is easier to disprove than to prove; its conclusion is not necessarily true, but it is reasonable given the existing evidence, and it stands so long as researchers cannot find a situation in which it fails.
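Connecting this example back to the probabilistic treatment of induction earlier in the article: one classical way to quantify how strongly uniform observations support the next case is Laplace's rule of succession (used here as an illustrative model, not as the article's own method):

```python
from fractions import Fraction

def rule_of_succession(successes: int, trials: int) -> Fraction:
    """Laplace's rule of succession: the estimated probability that the
    next observation is a success, after `successes` out of `trials`."""
    return Fraction(successes + 1, trials + 2)

# Ten swans observed, all white: probability the next swan is white.
print(rule_of_succession(10, 10))  # 11/12 -- strong support, not certainty
```

The estimate approaches, but never reaches, 1, which matches the point that inductive conclusions remain merely probable.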

By combining both types of reasoning, science moves closer to the truth. In general, the more outlandish a claim is, the stronger the evidence supporting it must be. We should be wary of deductive reasoning that appears to make sense without pointing to a truth: all dogs have four paws; my pet has four paws; therefore, my pet is a dog. Plato (c. 428–348 BC) believed that all things are divided into the visible and the intelligible. Intelligible things can be known through deduction (with observation being of secondary importance to reasoning) and constitute true knowledge.

Aristotle took an inductive approach, emphasizing the need for observations to support knowledge. He believed that we can reason only from discernible phenomena; from there, we use logic to infer causes. Debate about reasoning remained much the same until the time of Isaac Newton, who in his Principia outlined four rules for reasoning in the scientific method. In 1843, philosopher John Stuart Mill published A System of Logic, which further refined our understanding of reasoning. Mill believed that science should be based on a search for regularities among events.

If a regularity is consistent, it can be considered a law. Mill described five methods for identifying causes by noting regularities: the methods of agreement, difference, the joint method of agreement and difference, residues, and concomitant variations. These methods are still used today.


