Use of Scientific Method in Geography


In this article we will discuss: 1. Meaning of Scientific Method 2. Key Elements of Scientific Method 3. Routes to Scientific Explanation: Induction and Deduction 4. Relevance of Scientific Method in Geography.

Meaning of Scientific Method:

The term ‘scientific method’ denotes the logical structure of the process by which the search for trustworthy knowledge advances. The primary task of scientific method is to explain empirical phenomena. There is no need to argue that geography ought to be a science.

Geography simply is a science by virtue of the fact that it is a truth-seeking discipline whose raw materials consist of empirical observations. There is no suggestion that geography should undergo any sort of epistemological restructuring.

Hay identifies four categories or groups of geographers with regard to the appropriateness or inappropriateness of scientific method in geography. The first group consists mainly of physical geographers who believe that their discipline is a field of natural science and, therefore, do not doubt that scientific method is appropriate.

In the second group are those human geographers who see scientific methods as being appropriate to their discipline as a social science, although they may also recognise that such an application poses certain problems not encountered in classical natural sciences like physics and chemistry.

The third group consists of those who believe that the subject matter of geography makes scientific or quasi-scientific methodology inappropriate. Most recently, a fourth group has emerged which seeks to apply Marxist methods in geography and believes that such methods are scientific, though not in the mould of the classical natural sciences.

In order to understand these differing views, it is necessary to identify the key elements of scientific thinking and practice, to outline some of the philosophical problems involved in scientific method not always evident to scientists themselves, as well as to examine some additional issues which arise when scientific method is applied to geography and also to similar disciplines.

Key Elements of Scientific Method:

The scientific method is often characterised by four elements – theory (and fact), law, logic and reduction – which necessarily sustain scientific thinking and practice. However, there is a fifth element, the ‘hypothesis’, which also sustains and provides the required input to scientific practice and process in geography and in related sciences.

1. Theory and Fact:

Basic to modern science is an intricate relation between theory and fact. Popular opinion generally conceives of these as direct opposites. Theory is equated with speculation, and thus a theory remains speculation until it is proved. When this proof is made, theory becomes fact. Facts are thought to be definite, certain, beyond question, and their meaning to be self-evident. Theory (speculation) is supposed to be the realm of philosophers.

Scientific theory, therefore, is thought to be merely the summation of facts which have been accumulated within a given discipline and/or subject.

It is, indeed, a fact that:

(1) Theory and fact are not diametrically opposed, but inextricably intertwined;

(2) Theory is not speculation; and

(3) Scientists are very much concerned with both theory and fact.

A fact is regarded as an empirically verifiable observation. The random gathering of facts, however, could never have produced modern science. Theory refers to the relationships between facts, or to the ordering of them in some meaningful way.

It also refers to an organised and coherent body of assumptions and arguments. It may be directed to the explanation of a unique phenomenon (Wegener’s theory of continental drift had only one world to consider) or a whole class of phenomena (the theory of air masses).

Theories may be used to account for different phenomena. Without theory, science could yield no prediction. Without prediction there would be no control over the material world.

It can, therefore, be said that the facts of science are the product of observations that are not random but meaningful, i.e. theoretically relevant. Thus, facts and theory are interrelated/ intertwined in many complex ways. The development of science can be considered as a constant interplay between theory and fact.

Theory is a tool of science in five ways:

(1) It defines the major orientation of a science, by defining the kinds of data which are to be abstracted;

(2) It offers a conceptual scheme by which the relevant phenomena are systematised, classified and interrelated;

(3) It summarises facts into (a) empirical generalisations and (b) systems of generalisations;

(4) It predicts facts; and

(5) It points to gaps in our knowledge.

However, facts are also productive of theory in five ways:

(1) They help to initiate theories;

(2) They lead to the reformulation of existing theory;

(3) They cause the rejection of theories which do not fit the facts;

(4) They change the focus and orientation of theory; and

(5) They clarify and redefine theory.

Scientific Theory:

‘A scientific theory may be considered as a set of sentences expressed in terms of a specific vocabulary.

The vocabulary may contain primitive terms which cannot be defined and ‘derived’ terms which may be formed from the primitive terms. The sentences may similarly be divided into primitive sentences— axiomatic statements—and derivative sentences— theorems… terms such as ‘point’, ‘line’, ‘plane’ form the primitive terms collected together in an initial set of axiomatic statements…. In addition to the primitive terms and the axiomatic statements, scientific theories also possess certain rules which govern the formation of the derivative sentences – But a theory is useful in empirical science only if it is given some interpretation with reference to empirical phenomena….The text of the theory provides a translation from the completely abstract theoretical language to the language of empirical observation…. The text of a theory not only identifies the empirical subject-matter which the theory refers to. It also identifies the domain of the theory… (that) may be regarded as the section or sections of reality which the theory adequately covers’.

Harvey (1969, 90-91) identifies the following advantages of a formal statement of a theory:

1. ‘The formal statement of a theory requires the elimination of inexactness and, as a consequence of this, ensures complete certainty as to the logical validity of the conclusion. The empirical success of a theory relies entirely upon the success of the text in linking the abstract symbols of the theory to real world events’.

2. ‘The elaboration of formal theory, provided the basic postulates are good ones, can help suggest new ideas, prove unsuspected conclusions, and indicate new empirical laws.’

3. ‘The formal statement of a theory requires turning a spatial or temporal sequence into a completely non-spatial and non-temporal set of relations. Even in theories which explicitly include time or space as variables the treatment is abstracted….It is ‘therefore’ characteristic of formal theory to state all propositions—whether primitive or derived— as if they were universal propositions. Again, it is the text which has to perform the difficult task of linking these universal propositions to empirical events which have a location in space and time.’

It appears from the above points that the key problem in the application of formal theory in the empirical sciences is the provision of an adequate text.

The text associated with a formal theory necessarily performs two essential tasks:

(i) It identifies an abstract symbol with a particular class of real-world phenomena, and

(ii) It may place the abstract symbols within a particular context which may include specific mention of location in space and time.

The text associated with a formal theory should not only link abstract symbols with abstract concepts. It should also specify how abstract concepts may be reduced to factual statements. This, therefore, raises the whole problem of the nature of concept-formation in the sciences and the operational problem of giving idealisation and theoretical concepts some adequate definition.

There are innumerable idealisations to be found in both the natural and social sciences. Indeed, explanation would be impracticable without such idealisations.

Some idealisations can be theoretically defined, and they are characteristic of the natural sciences where physics, in particular, has achieved a high degree of unification in its theoretical structure. However, many idealisations in both the natural and social sciences cannot be referred to any well-established theoretical structure, either because the idealisation is itself inappropriate, or because the requisite general theory is yet to be established. Idealisations in social sciences are fundamentally different from the theoretical concepts of natural sciences.

The reasons are that the text is deficient and/or weak and that the domain is not clearly defined. The failure to achieve significant explanatory power in the social sciences is the result of the paucity in such disciplines of the requisite general theory.

In the natural sciences the universality of theoretical statements means that they may be transferred to any situation in space and time, since the statements appear to be universal in fact. In the social sciences this is not so, and therefore one of the important functions of the text of a theory is to identify the domain of objects and events to which such theories can be applied—this domain may simply be defined by a set of spatial and temporal coordinates. A theory without a text and a well-defined domain is useless for prediction.

This greater success of the physical sciences, relative to the social sciences, in providing a text for their theoretical structures accounts for their greater predictive success.

A scientific theory, however, needs to be tested or assessed not only for its own internal consistency, but also for its consistency with the world as observed.

The Role of Fact:

Theory and fact are in constant interaction. Developments in one may lead to developments in the other. Theory, implicit or explicit, is basic to knowledge and even perception. Theory is not merely a passive element. It plays an active role in the uncovering of facts. Similarly, fact has an equally significant part to play in the development of theory. Science actually depends upon a continuous stimulation of fact by theory and of theory by fact.

a. Fact initiates theory:

Many of the human-interest stories in the history of science describe how a striking fact, sometimes stumbled upon, led to important new theories. This is what the public thinks of as a ‘discovery’. Many of the stories take on added drama in the retelling, but they express a fundamental fact in the growth of science: an apparently simple observation may lead to significant theory.

Merton (1949) has called this kind of observation ‘the unanticipated, anomalous and strategic datum’. Attempting to account for the anomalous datum has repeatedly led to interesting developments of theory.

Almost every ‘discoverer’ was preceded by others who saw his discovery first and thought no further about it. This was the case with most of those who encountered the unanticipated, anomalous and strategic datum but did not use it for the development of theory. The fact initiates theory only if the scientist/researcher is alert to the possible interplay between the two.

b. Facts lead to the rejection and reformulation of existing theory:

Facts do not completely determine theory, since many possible theories can be developed to take account of a specific set of observations. Nevertheless, facts are the more stubborn of the two. Any theory must adjust to the facts and is rejected or reformulated if they cannot be fitted into its structure.

Since research is a continuing activity, rejection and reformulation are likely to be going on simultaneously. Observations are gradually accumulated which seem to cast doubt upon existing theory. While new tests are being planned, new formulations of theory are developed which might fit these new facts.

The relation between fact and theory may be expressed in syllogistic terms. A theory predicts that certain facts will be observable: ‘If condition X exists, then Y is observable; if Y is not observable, then condition X does not obtain.’

Thus, if condition X does exist and Y is not observable, the original proposition is denied. Such a syllogistic pattern of logic, however, does not guarantee that the original theory is correct when the predicted facts are observed; confirmation merely establishes that certain other theoretical propositions are not correct.
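The pattern just described can be stated compactly (the symbols X and Y follow the text; the schema below is a standard rendering, not a quotation):

\[
(X \rightarrow Y),\ \neg Y \ \vdash\ \neg X \qquad \text{(valid: denying the consequent)}
\]
\[
(X \rightarrow Y),\ Y \ \nvdash\ X \qquad \text{(observing } Y \text{ does not establish } X\text{)}
\]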

c. Facts redefine and clarify theory:

New facts that fit the theory will always redefine it, for they state in detail what the theory states in very general terms. Facts clarify the theory, throwing further light upon its concepts. Finally, they may actually present new theoretical problems; that is, the redefinition may be far more specific than the theory.

An example is the general hypothesis that when individuals from a rural population, particularly the tribal, enter the urban environment they experience a considerable amount of personal disorganisation. This process has been studied in most detail for immigrant groups and for children of such immigrants. It is normally held that many changes in habit pattern will occur in this adjustment process.

One of these is a decline in fertility. As a consequence of these notions, we could predict that when the rural, particularly tribal, people settle in the urban areas and large cities their birth rate will drop. Actually, the net reproduction rate of urban tribal people is much lower than that of the rural tribal people and the fact is, therefore, in accordance with the theoretical prediction.

The theory, however, is a general expectation, while the demographic facts are specific. The theory does not state how much the difference will be. In actuality, the fertility of urban tribal people is even lower than that of the non-tribal urban people. We are thus left with a redefinition of the theory towards greater specificity, and the older theory simply does not account for these new facts.

The facts do not reject the older theory – they are simply more complex and definite than the prediction of the original theory, and they call for further research. Indeed, it is one of the major experiences of research that actually testing any existing theory is likely to redefine it. The concepts that have been accepted as simple and obvious turn out to be elusive, vague, and ill-defined when we fit them to the facts.

It is not that the facts do not fit. It is rather that they are much richer, more precise and definite than the concept or theory. Further, many such redefinitions and clarifications may in turn lead to the discovery of new hypotheses. For so long as our theories use general terms and make rough predictions, it is difficult to disprove them.

However, facts become a stimulus to the redefinition and clarification of theory even when they are in conformity with it. This process leads, in turn, to the reformulation of theory and the discovery of new facts.

The growth of science is seen in new facts and new theory. Facts take their ultimate meaning from the theories which summarise them, classify them, predict them, point them out, and define them. However, theory may direct the scientific process; facts in turn play a significant role in the development of theory. New and anomalous facts may initiate new theories.

New observations lead to the rejection and reformulation of existing theory, or may demand that older theories be redefined. Concepts which appeared definite in meaning are clarified by the specific facts relating to them. The geographer must accept the responsibilities of the scientist, who must see fact in theory and theory in fact.

2. Law:

The second key element in scientific thinking is ‘law’. ‘Any fully developed scientific theory contains, embedded within it, certain statements about unvarying relationships. These laws may be evident at the level of everyday experience or only at the level of scientific investigation, for example, by controlled experiment or microscopic investigation. As with theories there is a predisposition among scientists to seek laws which cover broad categories of phenomena…There is also a preference within science for deterministic laws – that ‘wherever’ A and B are present C ‘will’ result. But it is recognized that some laws have a probabilistic form even if they represent a transient stage in the development of the discipline and will give way to deterministic laws as the discipline develops’.

The credit for establishing the relevance of ‘law’ in geography goes to Schaefer, who argued that geographers should seek to make law-like statements. ‘A science’, according to him, ‘is characterized by its explanations, and explanations require laws…. To explain the phenomena one has described means always to recognize them as instances of laws…. In geography… the major regularities which are described refer to spatial patterns…. Hence geography has to be conceived as the science concerned with the formulation of the laws governing the spatial distribution of certain features on the surface of the Earth’.

Geographical procedures would then not differ from those employed in the other sciences, both natural and social: observation would lead to a hypothesis (about the interrelationship between two spatial patterns, for example), and this would be tested against a large number of cases to provide the material for a law if it were thereby verified.

A law should be unrestricted in its application over space and time. It is thus a ‘universal statement’ of unrestricted range. This suggests at least one important criterion for distinguishing a law.

‘The universality criterion requires that laws should not make specific or tacit reference to proper names. Consider the proposition that towns of similar size and function are found at similar distances apart. The term ‘town’ can be defined only with reference to human social organization and it carries with it … an implicit reference to the proper name ‘Earth’. Within such a context the statement may be true, but the universality criterion has undoubtedly been offended. To get round this difficulty, we may attempt to define ‘town’ in terms of a set of properties which we claim are possessed by towns and only towns. In an infinite universe, however, there may well be some phenomenon which possesses all the properties listed without being a town. Again, we are not justified in regarding the statement as being a proper law’.

On a strict interpretation, however, laws are found only in physics and, to an extent, in chemistry, since only these are truly universal in nature. Such an interpretation makes the development of laws in biology, zoology, geology, physical geography, etc. redundant, except in so far as such disciplines can reduce their statements to those of physics. The social sciences and human geography are even more seriously affected.

Harvey (1969) argues against interpreting universality in such a strict manner, as does Smart (1959). To quote Harvey (1969), ‘There are two ways in which we may justify some relaxation of it. With purely empirical propositions it may prove useful to draw a distinction between philosophical and methodological universality. Philosophical universality involves the belief that universally true statements can be made. Such a belief may be supported by reference to some set of metaphysical propositions … or else it depends upon showing that a statement is in fact universally true. The latter course is essentially an inductive step and, therefore, a degree of uncertainty is involved. A proposition can never be shown empirically to be universally true. This applies as much to the strict laws of physics as it does to the ‘mere generalizations’ of biology and economics. Philosophical universality implies methodological universality, but the reverse relationship does not hold. We may regard statements as if they were universally true without necessarily believing that they are or even assuming that they will ultimately be shown to be so…..In such a case it becomes a matter of deciding whether it is useful and reasonable to regard a statement as if it were universally true, and hence, law-like.’

A substantial part of Braithwaite’s analysis of scientific explanation is concerned with establishing how laws are related to a surrounding structure of theory. It is impossible to determine whether a statement is or is not a law simply by referring to the truth or falsity of the generalisation it contains.

A major criterion in determining whether a statement is or is not a law is the relationship of that statement to the system of statements that constitutes a theory. If this criterion is accepted, then the ideas are required to be adjusted regarding the verification procedures necessary to transform a scientific hypothesis into a scientific law.

A generalisation may be established as true or false simply by direct reference to empirical subject matter. The truth of an empirical law has to be established by this method too, but in addition it requires support from other empirical laws, from theoretical laws (which cannot be given any direct test), and also from the lower-level empirical laws that it helps to predict.

A key concept in this respect is that laws must be proven through objective procedures and not accepted simply because they seem plausible. As Bunge (1962) puts it, ‘The plausibility or intuitive reality of a theory is not a valid basis for judging a theory. A valid law must predict certain patterns in the world, so that having developed an idea about those patterns, the researcher must formulate them into a testable hypothesis—"a proposition whose truth or falsity is capable of being asserted". An experiment is then designed to test the hypothesis, data are collected, and the validity of the prediction evaluated…. One successful test will not turn it into a law; replication on other data sets will be needed, since a law is supposed to be universal’.

After sufficient (undefined) successful tests, therefore, a hypothesis may be accorded law-like status and is fed into a body of theory which comprises a series of related laws. There are two types of statements within a full theory, ‘the axioms or givens’, which are statements taken to be true, such as laws; and the deductions, or ‘theorems’ from those initial conditions, which are derived consequences from agreed facts—the next round of hypotheses.
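As a minimal illustration of this replication requirement, the following sketch (in Python) accords a hypothesis ‘law-like’ status only after it has survived testing against several independent data sets. The data, the tolerance and the illustrative hypothesis are assumptions for exposition only, not drawn from the text.

```python
# Hedged sketch: a hypothesis is accorded "law-like" status only after it
# survives testing on several independent data sets. The data, tolerance
# and illustrative hypothesis below are assumptions for exposition only.

from typing import Callable, Sequence, Tuple

Observation = Tuple[float, float]  # (A, B) pairs of measured quantities

def supports(hypothesis: Callable[[float, float], bool],
             data: Sequence[Observation],
             tolerance: float = 0.05) -> bool:
    """A data set supports the hypothesis if at most a small fraction
    of its observations contradict it."""
    failures = sum(1 for a, b in data if not hypothesis(a, b))
    return failures / len(data) <= tolerance

def law_like(hypothesis: Callable[[float, float], bool],
             data_sets: Sequence[Sequence[Observation]],
             required_replications: int = 3) -> bool:
    """One successful test is not enough; only repeated, independent
    confirmations elevate the hypothesis toward law-like status."""
    confirmations = sum(supports(hypothesis, d) for d in data_sets)
    return confirmations >= required_replications

# Illustrative hypothesis: B is roughly proportional to A.
roughly_proportional = lambda a, b: abs(b - 2.0 * a) <= 0.5 * a

data_sets = [
    [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)],
    [(1.5, 3.2), (2.5, 5.1), (4.0, 7.8)],
    [(1.0, 1.8), (3.0, 5.7), (5.0, 10.4)],
]
print(law_like(roughly_proportional, data_sets))  # True after three replications
```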

However, Jones (1956) pointed out the impossibility of discovering universal laws about human behaviour and indicated the existence of two types of law in physics – the ‘determinate’ laws of classical physics, which apply at the macroscopic scale; and the ‘probabilistic’ quantum laws, which refer to the behaviour of individual particles.

Golledge and Amedeo (1968) attempted to indicate that science recognises several types of law, and also that the veracity of a law-like statement can never be finally proven, since it cannot be tested against all instances, at all times and in all places.

They indicated four types of law which have relevance for the human geographer:

(1) Cross-sectional laws, which describe functional relationships, but show no causal connection, although they may suggest one;

(2) Equilibrium laws which state what will be observed if certain criteria are met;

(3) Dynamic laws, which incorporate notions of change, with the alteration in one variable being followed by (perhaps causing) an alteration in another. Dynamic laws may be historical, showing that B was preceded by A and followed by C, or developmental, in which B would be followed by C, D, E, etc.; and

(4) ‘Statistical laws’ which are probability statements of B happening given that A exists. All laws of the other three categories may be either deterministic or statistical with the latter almost certainly the case with phenomena studied by geographers.
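A ‘statistical law’ of type (4) can be read as a conditional probability estimated from case records. The following Python sketch is purely illustrative; the events and the figures are assumptions, not taken from the text.

```python
# Hedged sketch: a "statistical law" read as a conditional probability
# P(B | A) estimated from case records. Events and figures are illustrative
# assumptions only, not drawn from the text.

def conditional_probability(cases, antecedent, consequent):
    """Estimate P(consequent | antecedent) from a list of case records."""
    with_a = [c for c in cases if antecedent(c)]
    if not with_a:
        return None  # the law asserts nothing where A never obtains
    with_a_and_b = [c for c in with_a if consequent(c)]
    return len(with_a_and_b) / len(with_a)

# Illustrative records: does a settlement lie on a navigable river (A),
# and did it grow into a market town (B)?
cases = [
    {"on_river": True, "market_town": True},
    {"on_river": True, "market_town": False},
    {"on_river": True, "market_town": True},
    {"on_river": False, "market_town": False},
]
p = conditional_probability(cases,
                            antecedent=lambda c: c["on_river"],
                            consequent=lambda c: c["market_town"])
print(p)  # about 0.67: a probabilistic, not a deterministic, statement
```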

However, according to Sack (1972), space, time and matter cannot be separated analytically in an empirical science which is concerned to provide explanation. He attempted to show that geometry is not an acceptable language for such a science, i.e. geography.

Nevertheless, geography is closely allied with geometry in its emphasis on the spatial aspects of events (the instances of law), but geometry alone is insufficient as a basis for explanation and prediction since no processes are involved in the derivation of geometries.

Bunge (1973), however, responded to this statement, claiming that spatial prediction was quite possible with reference to the geometry alone, as instanced by central place theory and Thunian analysis.

Sack (1973) responded by saying that the static laws espoused by Bunge are only special cases of dynamic laws having antecedent and consequent conditions, and that although the laws of geometry are unequivocally static, purely spatial, non-deducible from dynamic laws, and explain and predict physical geometric properties of events, they do not answer the questions about the geometric properties of events that geographers raise and they do not make statements about process.

Geography, according to Sack, is concerned to explain events and it requires substantive laws – such laws may contain geometric terms, such as ‘the frictions of crossing a certain substance’, but these terms of themselves are insufficient to provide explanations.

He identified two types of laws relevant to geographical work:

(1) Congruent substance laws which are independent of location – statements of ‘if A then B’ are universals which require no spatial referent;

(2) Overlapping substance laws which involve spatial terms – ‘if A then B’ in such cases contains some specific reference to location.

Both types are relevant and necessary in providing answers to geographical questions, so a case may be made for a necessary ‘spatialness’ in the substance laws of human geography.
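A hedged sketch of the distinction, in Python: the first rule contains no spatial referent, while the second writes a locational condition into its antecedent. The predicates, the rainfall threshold and the tropical-belt restriction are invented purely for illustration.

```python
# Hedged sketch of the two kinds of substance law identified above.
# The predicates, the rainfall threshold and the tropical-belt restriction
# are invented purely for illustration.

def congruent_law(annual_rainfall_mm: float) -> bool:
    """'If A then B' with no spatial referent: wherever annual rainfall
    exceeds 2000 mm, dense forest cover is expected."""
    return annual_rainfall_mm > 2000

def overlapping_law(annual_rainfall_mm: float, latitude: float) -> bool:
    """'If A then B' whose antecedent contains a spatial term: the same
    expectation, but restricted to a stated latitudinal belt."""
    within_tropics = -23.5 <= latitude <= 23.5  # explicit locational condition
    return within_tropics and annual_rainfall_mm > 2000

print(congruent_law(2500))          # True, irrespective of location
print(overlapping_law(2500, 45.0))  # False: the locational condition fails
```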

Thus, positivist-led geography makes wide application of laws for the successful and fruitful analysis of geographical phenomena and spatial patterns. The concept of law has much wider significance in such a geography, which is conceived of as a science with a law-seeking episteme because it postulates a three-fold hierarchy of scientific statements: from factual statements or systematised descriptions, through a middle tier of ‘empirical generalisations or laws’, to general or theoretical laws.

3. Logic:

But laws are not the only type of connecting statement used in scientific theory; indeed, a theory which consists entirely of laws based on experience and experiment is viewed with disfavour. It is seen as more satisfactory if most of the laws and other links in the theory can be shown to be logically derived from a much smaller number of fundamental assumptions and laws. Scientists have tended to use mathematics (algebra and geometry) as the language for expressing and developing this logic, but other abstract languages are also used (for example, chemical equations and bonding diagrams).

Logical validation is one of the most commonly used methods of validation and certainly one of the most difficult to apply. It refers to either theoretical or ‘common-sense’ analysis which concludes simply that, the items being what they are, the nature of the continuum cannot be other than it is stated to be. Logical validation, or ‘face validity’ as it is sometimes called, is always used because it springs automatically from the careful definition of the continuum and the selection of the items.

4. Reductionism:

Reductionism is the fourth key element in much scientific thought: the idea that the laws and theories of a discipline can be re-expressed as special cases of the outworking of the laws of a more fundamental discipline. Reductionism is usually taken to apply to any doctrine that seeks to explain a higher-order phenomenon in terms of a lower-order phenomenon.

Such a doctrine can be held in various forms, and applied in many different areas of intellectual endeavour. One form of reductionism is the notion that the laws of all other sciences can in principle be reduced to, or expressed in terms of, the laws of microphysics; another is the thesis that all mental faculties can be expressed as events in or states of the brain.

Reductionism is defined more formally as the redefinition of concepts or statements in terms which are more elementary or basic. In many cases, the knowledge thus gained cannot be expressed in ordinary verbal language, and two other methods must be added. The first is the use of words with specialised meanings, and the second is the language of pure symbols associated with mathematics and symbolic logic.

A geographical explanation may be said to be reductionist if it attempts to account for a range of phenomena in terms of a single determining factor. In human geography, the most common form of reductionism is probably that which asserts that all terms which refer to groups or collectivities can, in principle, be expressed as descriptions of the behaviour of individual actors.

This view has come to be known as methodological individualism. Some Marxist theories are said to be reductionist because they attempt to explain the diversity of social behaviour by reference simply to the economy.

‘Studies in physical geography have, in general, no option but to be carried out using the methods of contemporary science, whether these are reductionist or attempts at holism, like systems analysis. We seek an understanding of landslides or glacial motion in terms of mechanical principles, and apply chemistry in the study of weathering processes and soil formation. The behavior of larger, more complex systems is routinely analysed with elaborate computer models, as in drainage basin hydrology or weather forecasting. But, at this point we ought to ask whether at the next level of complexity, that of human societies together with their environment, both living and non-living, reductionist approaches are appropriate or applicable, and with what degree of success…. Should geography try to be like physics? Should it be possible to express everything in laws … which are 100 percent applicable in the sense of temporal prediction (… the laws of physics do not exhibit 100 percent probability but they are not far short for practical purpose)….The techniques of the mathematician and statistician have been paramount, for example, in the elaboration of models of spatial interaction, the use of ideas from catastrophe theory and Q-analysis, and the widespread employment of bivariate and multivariate linear models for the analysis of data’.

To quote Haggett and Chorley (1967), ‘… those subjects which have modelled their forms on mathematics or physics … have climbed considerably more rapidly than those which have attempted to build internal or idiographic structure’.

There have been remarkable developments and advances in the development and adaptation of modelling and analytical techniques, but whether empirical studies which have used these have provided any more generalisations and accurate predictions than in non-reductionist approaches is arguable. The great emphasis on space as the central element in geography makes the discipline more like physics.

People unsympathetic to quantification, for its association with ‘hard data’, have often criticised this view for its apparent lack of humanity; its cold objectivity (if it should exist) does not appeal to all. However, reductionism of a different variety may also not be accepted: that which views all human patterns in terms of a single-factor explanation, such as the class struggle at the heart of classical Marxist theory, appears too simple to explain the great variety of society-environment relationships observed on the face of the Earth.

It is rather difficult to comment on the success of reductionist methods in geography, given that geography has yet to achieve great success in terms of laws. Some regularities have been pointed out, such as the rank-size rule of cities and the spatial patterning of towns as service centres; however, the bulky literature concerning them contains evidence of numerous exceptions and assertions of the culture-bound nature of the findings.

‘Geography appears to have carried forward a more just world, no more and no less than any other division of learning with which it shares a reluctance to be committed to single-element solutions, especially those of an ideological character’.

However, on reduction, Harvey (1969, 94-95) points out, ‘… The problem of finding adequate empirical definition of theoretical concepts can be solved by the provision of an adequate general theory. The development of powerful basic axiomatic statements will make possible precise definition of the idealizations on which current theory rests.

The procedure may lead to the reduction of the large number of idealizations and concepts in social science to special cases of a few more basic axiomatic statements…. Many of the concepts and idealizations used in the natural sciences may be ultimately defined by reference to the basic concepts of physics.

The unification of disparate theoretical structures into one system of statements involves the reduction of disparate idealizations to special cases of a few basic postulates. This phenomenon of reduction may also be found in the social sciences, and the development of general theory in the social sciences may well depend on such reduction.

The postulates of economics may be reducible to a particular subset of postulates in psychology…. The degree to which reduction can take place, however, is a controversial issue, and even if it is conceded that total reduction is ultimately possible, this is far from being practicable at the present time. On the other hand, it cannot be denied that there is considerable benefit to be had from the integration of diverse concepts and statements into some more general theoretical framework…. The development of general theory in the social sciences—and the reduction of some concepts which this implies— may enable more precise definition of certain idealizations and hence facilitate the statement of an appropriate text for some of the theories developed in the social sciences.’

5. Hypothesis:

Theory, law, logic and reduction—these four elements are the key parts of scientific thinking, but there is however a fifth element—the research hypothesis—which provides a link to the area of scientific practice. In a well-developed natural science, a research hypothesis predicts the outcome of an experiment or observation if the theory is correct. In this way, a theory or its extensions can be tested in contexts other than those for which it was originally devised.

The formulation of a deduction constitutes a hypothesis; if verified, it becomes part of a future theoretical construction. In practice, a theory is an elaborate hypothesis which deals with more types of facts than does the simple hypothesis.

A theory states a logical relationship between facts. From this theory, other propositions can be deduced that should be true if the first holds. These deduced propositions are hypotheses. Hypotheses do not necessarily have to be true, however.

The truth of many of the hypotheses that researchers formulate is most often unknown. Hypotheses, therefore, are tentative statements about things that the researcher wishes to support or refute. A hypothesis is a provisional statement that guides empirical work in several scientific epistemologies.

A hypothesis, therefore, is a structured speculation that must be tested empirically. If it proves to be valid, then a positive addition is made to the stock of theory; knowledge has been increased. If it proves invalid, knowledge has also been increased, albeit in a negative sense.

Routes to Scientific Explanation: Induction and Deduction:

There are two alternative routes to explanation, which are followed in establishing a scientific law, according to Harvey (1969). The first is that of ‘induction’, proceeding from numerous particular instances to universal statements; the second is that of ‘deduction’, proceeding from some a priori universal premises to statements about particular sets of events.

i. Route (1):

It is also known as the Baconian route, or the ‘Inductive Route’ to scientific explanation. According to Harvey (1969), ‘Sense-perception data provide us with the lowest level information for fashioning scientific understanding. This information, when transformed into some language, forms a mass of poorly ordered statements which we sometimes refer to as ‘factual’.

It is partly ordered by the use of words and symbols to describe it. Then, by the process of definition, measurement, and classification, we may place such partially ordered facts into groups and categories and therefore impose some degree of seemingly rational order upon the data.

In the early stages of scientific development, such ordering and classification of data may be the main activity of science, and the classification so developed may have a weak explanatory function…..The status of empirical laws established by such a route is a matter of some controversy.

It should be noted that each step along this route so far involves inductive inference. Thus laws established by this route are alone sometimes called inductive laws. Some maintain that inductive laws cannot be accorded the status of scientific law.’

This route to scientific explanation, however, does not describe how the scientist should proceed, but it does describe one of the ways in which a scientist might describe his/her action so as to meet with the approval of other scientists.

This route involves a dangerous form of generalising from the particular case, as the acceptance of the interpretations depends too much on the charisma of the scholar involved. Churchman (1961) observed that ‘facts, measurements and theories are methodologically the same’. Applying an a priori classification system to a set of data may thus be regarded as an activity similar in kind to postulating an a priori theory.

ii. Route (2):

‘The second route whereby we may justify scientific conclusions clearly recognises the a priori nature of much scientific knowledge. It firmly rests upon intuitive speculation regarding the nature of the reality we seek to know…. This involves some kind of intuitive picturing of how that reality is structured. Such a priori pictures … later identify as a priori models. With the aid of such pictures we may postulate a theory. That theory should have a logical structure which ensures consistency and a set of statements which connect the abstract notions contained in the theory to sense-perception data. The theory will enable us to deduce sets of hypotheses which, when given an empirical interpretation, may be tested against sense- perception data. The more hypotheses we can check in this fashion, the more confident we may feel in the validity of the theory provided, of course, that the tests prove positive.’

‘In the process of elaborating or seeking to test a theory, we may resort to another kind of model— an a posteriori model—which expresses the notions contained in the theory in a different form, say, in mathematical notation. In some circumstances, model building may here amount to developing an experimental design procedure, and a primary function of this procedure is to lay down the rules whereby we may define, classify, and measure the variables which are relevant for testing the theory. By using such experimental designs we may amass evidence to confirm the hypotheses contained in the theory’.

In a nutshell, this route begins with an observer perceiving patterns in the world; he/she then formulates experiments, or some other kind of test, to prove the veracity of the explanations which he/she has produced for those patterns. Only when his/her ideas have been tested successfully against data other than those from which they were derived can a generalisation be produced.

Scientific knowledge, obtained via the second route, is ‘a kind of controlled speculation. The control really amounts to ensuring that statements are logically consistent and insisting that at least some of the statements may be successfully related to sense-perception data’. It is such a procedure that an increasing number of human geographers sought to apply during the 1950s and 1960s.

The method, known as positivism, was developed by a group of philosophers working in Vienna during the 1920s and 1930s. It is based on a conception of an objective world in which there is order waiting to be discovered. Because that order—the spatial patterns of variation and covariation in the case of geography—exists independently, it cannot be contaminated by the observer.

A neutral observer, on the basis of his observations or his reading of the research of others, will derive a hypothesis (a speculative law) about some aspect of reality and then test that hypothesis; verification of the hypothesis translates the speculative law into an accepted one.

Deduction occurs when facts are gathered to confirm or disprove hypothesised relationships among variables that have been deduced from propositions. Whether there were facts that precipitated the propositions does not really matter. What matters is that research is essentially a hypothesis-testing venture in which the hypotheses rest on logically (if not factually) deduced relational statements.

Burgess and Akers (1966) have attempted to show how particular hypotheses can be generated from more general and inclusive assertions. Deduction is a type of reasoning in which the conclusion follows necessarily from the given premises, but it does not increase content, although one may require intellectual abilities of a higher order in order to trace all the steps that lead from the premises of a deductive argument to its conclusions.

Most writers on scientific explanation have argued that the appropriate logic is that of deduction. Thus, the view that scientific explanation must always be rendered in the form of logical deduction has had wide acceptance. Braithwaite (1960) has also argued for the systematic organisation of scientific knowledge as a ‘hypothetico-deductive’ system.

He pointed out: ‘A scientific system consists of a set of hypotheses which form a deductive system; that is, which is arranged in such a way that from some of the hypotheses as premises all the other hypotheses logically follow.

The propositions in a deductive system may be considered as being arranged in an order of levels, the hypotheses at the highest level being those which occur as premises in the system, those at the lowest level being those which occur as conclusions of the system, and those at intermediate levels being those which occur as conclusions of deductions from higher-level hypotheses and which serve as premises for deductions to lower-level hypotheses.’

With regard to the advantage of deduction, Harvey (1969) suggests that ‘… if the premises are true then the conclusions are necessarily true. If … we have a certain degree of confidence in a set of premises we may possess the same level of confidence with respect to any logically deduced consequence. This property has led to the use of deduction wherever possible. Theories are thus invariably stated as deductive systems of statements… The application of such theories to the actual explanation of events is rendered as logical deduction.’

The form of explanation which Hempel calls ‘deductive-nomological’ (covering-law explanation) consists of:

i. One or more laws of nature, and

ii. A list of specific initial conditions or circumstances which, when taken together, show that an event must necessarily have occurred or that describe the setting of the event to be explained.

From these premises, the occurrence of the event in question can be inferred by a strictly deductive chain of reasoning. In this form of explanation, prediction and explanation are symmetrical and deduction ensures the logical certainty of the conclusion.
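Schematically, the covering-law (deductive-nomological) form can be written as follows; the notation of laws, initial conditions and explanandum is the customary one, not a quotation from Hempel:

\[
\frac{L_1, L_2, \ldots, L_k \qquad C_1, C_2, \ldots, C_n}{E}
\]

where \(L_1, \ldots, L_k\) are laws of nature, \(C_1, \ldots, C_n\) are statements of the initial conditions, and \(E\) describes the event to be explained; \(E\) follows from the premises by strict deduction.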

It is assumed that the final outcome of this process will be the discovery and verification of a set of natural laws from which all events can be rigorously deduced. When that point is reached, science will have achieved its ultimate goals.

Induction involves moving from particular instances of relations among variables to the formulation of hypotheses and from these to the development of propositions. Many scientists have claimed, or appeared to have claimed, that laws and theories were derived from the observation of repeated regularities. This method is often referred to as induction. In one or another of its forms, induction is the way most social scientists go about the business of expanding knowledge.

Theodorson and Theodorson (1969) have attempted to distinguish between two basic types of induction—enumerative and analytic. Enumerative induction is the most common form of induction used in social science research today.

Most often enumerative induction involves generalisation from samples with varying degrees of representativeness. Usually, but not invariably, these generalisations are derived through the application of statistical procedures to the data.

Accompanying these studies are usually statements pertaining to the ‘probability’ of generalisations to larger and more inclusive populations based on findings from the samples. Analytic induction is a procedure whereby there is a case-by-case analysis of specific features to determine which conditions are always present prior to the occurrence of certain types of conduct.
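Enumerative induction of this kind can be sketched as the generalisation of a sample proportion, with the attached probability statement made explicit. The figures and the normal-approximation interval in the following Python sketch are illustrative assumptions only.

```python
# Hedged sketch of enumerative induction: generalising a sample proportion
# to a wider population, with the attached probability statement made
# explicit. The sample figures and the 95% normal approximation are
# illustrative assumptions only.

import math

def proportion_with_interval(successes: int, n: int, z: float = 1.96):
    """Return the sample proportion and an approximate 95% confidence
    interval based on the normal approximation."""
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)
    return p, (max(0.0, p - z * se), min(1.0, p + z * se))

# e.g. 132 of 200 sampled villages were found to hold a weekly market
p, (low, high) = proportion_with_interval(132, 200)
print(f"estimate {p:.2f}, roughly between {low:.2f} and {high:.2f}")
# The generalisation to all villages remains probabilistic, never certain.
```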

Induction is important to our concern with scientific method because of the role it plays in the formulation of empirical generalisations.

These generalisations may conveniently be divided into:

i. Summative, and

ii. Extended.

A summative generalisation describes a property which has been confirmed by the actual observation of all the relevant cases, for example, the statement, ‘All the men in this room are old’. Since the men in question have actually been examined, the generalisation does not go beyond the evidence and therefore does not involve inductive reasoning.

However, the statement, ‘All swans are white’ is an example of an extended empirical generalisation, as it goes beyond the evidence on which it is based.

We may have examined thousands upon thousands of swans and found them all to be white, yet we cannot guarantee the truth of the general statement because it remains possible that a non-white swan may someday be found (as in fact did happen with the discovery of black swans in Australia).

In fact, we cannot guarantee the truth of an empirical generalisation unless, as in the summative case, we can point to all the singular instances upon which it is based. ‘There is no logical justification for extending belief in the premises to belief in the conclusions. The failure of logicians and philosophers to find (or agree upon) such logical justification has led many to reject its use entirely in the presentation of scientific knowledge’.
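The contrast can be made concrete in a short Python sketch (the ages and the swan records are invented for illustration): a summative generalisation is exhausted by the cases actually examined, while an extended generalisation ranges over an open class and can only ever be refuted, never conclusively verified.

```python
# Hedged sketch of the distinction drawn above; the ages and the swan
# records are invented for illustration.

# Summative generalisation: every relevant case has actually been examined,
# so the claim can be verified exhaustively and conclusively.
ages_of_men_in_this_room = [72, 68, 81, 75]
summative_claim_true = all(age >= 60 for age in ages_of_men_in_this_room)

# Extended generalisation: "All swans are white" ranges over an open class.
# No finite record can verify it, but a single counter-instance refutes it.
observed_swans = ["white"] * 10_000 + ["black"]   # the Australian discovery
extended_claim_unrefuted = all(colour == "white" for colour in observed_swans)

print(summative_claim_true)      # True, and the claim is exhausted by the cases
print(extended_claim_unrefuted)  # False: one black swan is enough to refute it
```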

Let us consider the role that inductive reasoning plays in the fallibilist conception of the quest for knowledge. The philosophy of fallibilism tends to reject the traditional positivist view that science begins with the collection of observational data and proceeds inductively to the establishment of general laws. The starting point of the fallibilist conception lies in the realisation that there is an asymmetrical relationship between the logical status of a general law and that of its negation.

It is increasingly believed by the fallibilist that a general law is never conclusively verifiable but always conclusively falsifiable. Falsification is possible but verification is not. To be scientific is to recognise explicitly that knowledge is approached by means of a process of conjectures and refutations.

Induction is involved in the development of conjectures (theories, hypotheses), but not in the search for refutation. Induction is also involved in theory-building by virtue of the fact that theories incorporate (often implicitly) laws or law-like statements that extend beyond the evidence upon which they are based. For the fallibilist, however, law-like statements and the theories in which they become embedded are not the hard core of science; they are never anything more than heuristic speculations.

The distinguishing feature of the scientific attitude is the willingness—indeed, the desire—to confront these speculations with pertinent empirical observations, thereby exposing them to the possibility of refutations. A refutation, when it occurs, is a logically conclusive result in which inductive reasoning plays no part.

For the fallibilist, therefore, a scientific enquiry has two distinct phases—’an imaginative, hypothetical, theory-proposing phase’ and ‘a critical, objective, experimental phase’. Inductive inferences may be involved in the former, but they are strictly banned from the latter.

In the traditional positivist view of science, general laws established by inductive reasoning are not regarded as speculations but as ‘verified scientific knowledge’. For the positivist, induction is the very foundation of science. For the fallibilist, induction leads only to conjectures, and science does not begin in earnest until these conjectures are confronted by the threat of falsification.

Although the distinction between deduction and induction serves the important purpose of identifying opposite ways to go about theory- building, most investigators find that their scientific work entails a certain amount of both. Induction probably permits more of an opportunity to see theory in a dynamic state of emergence rather than as already given, as is the case with deduction, but the issue is open to some debate.

Because there is neither a rigorously developed comprehensive theory from which to deduce particular relationships for testing, nor a sufficient accumulation of data to allow for systematic theory development through induction, even personal preferences for one or the other approach must remain flexible and adaptable.

‘The methodological rejection of induction can only apply to certain aspects of the formulation of scientific knowledge. Science attempts to organize the propositions within a deductive frame of inference…. The deductive form of scientific theories must be regarded as the end-product of scientific knowledge, rather than as the mould into which all scientific thought is cast from the very initiation of an investigation.

But even assuming that a deductive theoretical structure has been successfully evolved, induction still plays an important function at certain stages in the articulation and verification of such a theoretical structure…. However, it is misleading to regard deduction and induction as mutually exclusive forms of inference.

Although it is generally agreed that scientific knowledge should be organised as a hypothetico-deductive system and that the law contained in that system can best be applied by a deductive explanatory procedure, there are many occasions when inductive steps may be used within these deductive frameworks.

However, the method of induction was criticised on two grounds. First, it was evident that in many cases the observations were themselves made with pre-conceptions (right or wrong) as to what constituted characteristics worthy of observation and recording.

Second, the exact form of the law-like statement derived was seldom free from theoretical presuppositions and a priori definitions. It seemed that pure induction seldom occurred. It was recognised that the logical justification of induction itself relied upon the inductive method.

These conclusions are important not only for an inductivist theory of scientific method, but for any other empiricist school of thought which believes that facts should and can be allowed to speak for themselves. As mentioned earlier, Popper (1959) introduced falsification as a concept to replace verification, and under this new concept all theories were deemed to be provisional—coherent systems of not yet falsified hypotheses about the nature of phenomena.

The falsificationist position had two other implications:

i. It required that all scientific statements should have the logical possibility of being proved false: in this case, the claim that ‘All swans are white’ is incapable of being proved conclusively true (the next swan observed might be black), but could be proved false (by the observation of one truly black swan);

ii. It changed the form of empirical enquiries: no longer was it necessary to find evidence in support of a hypothesis; it became part of the scientific enterprise to find evidence which disproved it. However, when a complex and well-tested theory predicts an effect that fails to happen, the failure may be due to a fundamental falsity of the theory or to some low-level error of logic in deriving the research hypothesis. The falsification of the hypothesis requires a search for the false step, not the immediate abandonment of the theory as a whole.
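The point in (ii) can be put schematically. If the research hypothesis H is derived from the theory T together with auxiliary assumptions and test conditions A, then a failed prediction shows only that something in the conjunction is at fault (the notation is a standard rendering, not a quotation):

\[
(T \wedge A) \rightarrow H, \quad \neg H \ \vdash\ \neg (T \wedge A) \;=\; \neg T \vee \neg A
\]

Hence the search for the false step: the fault may lie in T itself or in the auxiliary assumptions A used to derive H.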

The normal scientific methods have been subjected to critical analysis and criticism, especially on the issues of reductionism and objectivity. Reductionism has been defined earlier as the idea that the laws and theories of one discipline can, and should, be reformulated as special cases of the outworking of a more fundamental discipline, often linked to a change of scale of enquiry. But such a reduction appears to be fallacious.

With regard to the fallacy in the objectivity of scientific knowledge, it is sometimes asserted and assumed that whereas the humanities have important areas which are matters of personal subjective value judgement, science is independent of such personal judgement, an independence assumed by its use of abstract logic and natural measurements.

But this view is difficult to maintain when it is recognised that the type of questions investigated, the theory used and the observations conducted all depend on the research paradigm adopted, and the paradigm in turn reflects the value system of the society and its philosophical and scientific presuppositions.

If objectivity does exist, it is not the objectivity of the individual scientist, but a relative objectivity of the knowledge itself because it has been tested and corrected by many individuals working in different contexts and dimensions.

Relevance of Scientific Method in Geography:

Despite criticisms of the application of scientific method to geography as a way of obtaining useful and reliable knowledge, scientific methods hold relevance in geographical scholarship, research and training, in both physical and human geography, for three reasons.

1. Although scientism is a mistaken and dangerous ideology, the scientific method does have the ability to provide coherent and testable theories about the nature of geographical phenomena.

2. The scientific method remains appealing because it is in many respects a codified and logically corrected extension of thought structures developed in everyday life, including the willingness to correct theories or hypotheses in the light of experience.

3. Partly as a consequence of these two points, knowledge of a scientific type is required by society for its purpose of managing social and natural systems (and if geography fails to provide such knowledge, some other discipline will develop to fill the gap).

However, scientific geography cannot remain untouched by critical evaluations and criticisms, and the elements which were to be retained had to be modified. As a result, many geographical theories were ‘derivative’ in the sense that they attempted to specify the geographical application of theories which were established in cognate disciplines (in the physical and social sciences). The ability or inability of derivative theories to provide the basis for geographical explanations largely depended on the test of their overall value as research programmes.

But, in addition to such derivative theories, there emerged a need for specifically geographical theories or laws which were essentially laws of composition, specifying the way in which these derivative laws appeared to have interacted to produce the multi- faceted phenomena that geographers sought to understand. The level at which these laws of composition operated, however, appeared to be much greater than the scale at which the derivative laws and theories operated.

It is possible to identify a number of examples of derivative theory already used in geography. Economic concepts have frequently been used as the foundation for geographic theory. Economics has, perhaps, been the most successful of the social sciences in developing formal theory (even if the empirical status of the theory is open to doubt). The central-place theory has frequently been described as the one relatively well-developed branch of theoretical economic geography.

Central-place theory provides just one example out of many to demonstrate how geographical theory may be derived from the basic postulates of economics. The existence of such postulates was undoubtedly an important necessary condition for the emergence of a theoretical human geography.

Many of the postulates and theorems of economics have been absorbed into geographical theory, in particular the whole of location theory, which has been especially concerned with the development of the theoretical-deductive method in geography. Human geographers have long recognised that geographic patterns are the end-product of a large number of individual decisions made at different times and often for very different reasons, and that some psychological notions were needed to explain those patterns; as a result, psychological and sociological postulates were also introduced into the construction of geographical theory.

Similarly, the geographical studies of weathering had applied the chemistry of ions and cations to the specific chemical composition of parent rocks under stated conditions of temperature and humidity. The examples suggest that although some geographical problems required ‘spatial laws’, not all derived theory in geography would be concerned with spatial relationships.

It seemed rather less easy to identify laws of composition in current geographical work, but some recent applications of choice theory to the choice of destination and mode of travel brought together economic and non-economic variables in a choice calculus that seemed more convincing than explanation in economic terms alone.
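To make such a choice calculus concrete, the sketch below combines an economic variable (cost), a time variable and a non-economic variable (comfort) in a simple multinomial logit model of mode choice. It is a minimal illustrative sketch only: the alternatives, attribute values and taste weights are invented, and no particular published study is being reproduced.

```python
import numpy as np

# Hypothetical choice calculus for travel-mode choice (multinomial logit).
# Economic (cost), time and non-economic (comfort) variables enter one model.
modes = ["car", "bus", "rail"]
attributes = np.array([
    # cost, time (minutes), comfort (0-1)
    [6.0, 25.0, 0.8],   # car
    [2.5, 40.0, 0.4],   # bus
    [4.0, 30.0, 0.6],   # rail
])
weights = np.array([-0.3, -0.05, 2.0])   # invented taste parameters

utilities = attributes @ weights
probabilities = np.exp(utilities) / np.exp(utilities).sum()
for mode, p in zip(modes, probabilities):
    print(f"{mode}: predicted choice probability {p:.2f}")
```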

The use of derivative and geographical theories and laws necessarily implied an openness to reductionism—a willingness to accept reduction, but not an assumption that all geographical problems could and must be solved in reductionist terms.

It also implied an open approach to the ‘types of logic’ adopted. However, many of the cognate disciplines (physics, chemistry, economics) themselves appeared to depend on ‘abstract logic’, and the fact that such abstract languages had proved extremely powerful made it likely that they would also be used in geography.

Alan Wilson (1974) attempted to show how an abstract mathematical language was capable of integrating quite different parts of urban and regional systems. But the openness to abstract logic (like mathematics) must not be allowed to exclude from geographic theory those variables and concepts (e.g. the quality of landscape) which are not readily capable of such representation. Geography, however, should remain open to the introduction of new languages that might prove more flexible.
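Wilson’s work was centred on entropy-maximising spatial interaction models. The sketch below shows, with invented zones, costs and parameters, the kind of abstract formulation involved: a production-constrained model that allocates flows from origin zones to destinations. It is an illustration of that family of models, not a reconstruction of Wilson’s own formulation.

```python
import numpy as np

# Production-constrained spatial interaction model with hypothetical values.
origins = np.array([1000.0, 600.0])               # trips produced by origin zones
attractiveness = np.array([200.0, 500.0, 300.0])  # size/attraction of destinations
cost = np.array([                                 # travel cost between zones
    [2.0, 5.0, 8.0],
    [6.0, 3.0, 4.0],
])
beta = 0.5                                        # distance-decay parameter (invented)

deterrence = np.exp(-beta * cost)
A = 1.0 / (deterrence * attractiveness).sum(axis=1)   # balancing factors
flows = A[:, None] * origins[:, None] * attractiveness[None, :] * deterrence

print(np.round(flows, 1))     # predicted flows from each origin to each destination
print(flows.sum(axis=1))      # row totals reproduce the origin productions
```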

The advocates of the application of scientific method to geography feel that, at the level of practice, geography would need to retain most of its elements so as to prove the ‘scientific status’ of the discipline.

The research hypothesis, the hypothesis test and prediction as a testing device would be required. But the idea that any one set of observations conclusively proved or definitively falsified a theory could not be retained. It was the accumulation of such results (positive or negative) that led to the advance or decline of rival research programmes.
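As a minimal sketch of what such a hypothesis test looks like in practice, the example below tests a hypothesised relationship between elevation and annual rainfall for a handful of invented weather stations. The point made above still stands: one such result supports or weakens the hypothesis for this sample; it neither conclusively proves nor definitively falsifies the underlying theory.

```python
import numpy as np
from scipy import stats

# Hypothetical data: does mean annual rainfall increase with station elevation?
elevation_m = np.array([50, 120, 200, 310, 450, 600, 720, 880])
rainfall_mm = np.array([640, 700, 690, 780, 820, 900, 950, 1010])

r, p_value = stats.pearsonr(elevation_m, rainfall_mm)
print(f"r = {r:.2f}, p = {p_value:.4f}")
# A small p-value lends support to the research hypothesis for this sample only.
```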

However, it seems inescapable that scientific geography, in spite of being subjected to criticism, will require the continued retention, development and refinement of measuring devices, guided by the emergent geographical theories and by the cognate disciplines from which they have been derived, together with openness as to which levels and forms of measurement are most appropriate for a given study. There will also be a continuing need for statistical analysis if the application of scientific method to geographical study, research and training is to be carried forward.

In spite of a strong assertion of the relevance of scientific method in geography, the admirers and/or adherents of the method also accept the possibility of the relevance of humanistic or phenomenological approaches in geographical explanation that may yield new insights into the nature of geographical phenomena. Many geographers strongly believe in the desirability of methodological heterodoxy, i.e. allowing the co-existence of radically different approaches within the discipline.

Nevertheless, without scientific method in the subject, ‘geography would cease to offer a convincing interpretation of the Earth’s surface and the activities of individuals upon it’. There is no doubt that application of scientific method to geography has given a nomothetic basis to the discipline with a scientific status and saved it from the crisis of its identity that it suffered during the transition period.

Geographical Application of Scientific Method: Some Problems:

Post-War geography in the mid-twentieth century witnessed continuing debate with regard to the application of scientific method in geography. There were two mutually exclusive arguments on this issue. One side argued that scientific method should be introduced into both physical and human geography.

On the other hand, some geographers claimed that the discipline was in some sense an exceptional discipline which might be excused (if not completely excluded) from the constraints of scientific method. The debate, however, had its origin in the nineteenth century and, over the decades, it hardened to a great extent.

Despite the counter-arguments and dichotomies, the period from 1960 experienced a vigorous expansion of geographical research using quasi-scientific methods, with emphasis on the law-seeking approaches and model-based paradigms.

The philosophical and methodological base for this was carried forward by many young geographers of the Anglo-American heritage and tradition. A number of textbooks in both human and physical geography emphasised the need for theory, laws, hypotheses, measurement and statistical testing.

But the enthusiastic practitioners and protagonists of this approach were often unaware of the problems inherent in the scientific approach, and could not identify the additional problems posed by its geographical use. Most of these problems stemmed from the twin facts that ‘geography as a whole deals with multi-variable open systems and that human geography deals with knowing subjects’.

1. Geographers were, over many years, concerned with the notion of ‘uniqueness’, because geographical phenomena on the surface of the Earth are unique and distinguishable, as well as complex in character and causation. The conclusion drawn is that geography deals with unique events, and that generalisation in the form of laws and theories is therefore doomed to failure.

It is the idiographic attitude that implies a concern with the uniqueness of individual phenomena or events whereas the nomothetic approach implies a desire to subsume individual cases under laws or law-like statements of very general, if not universal, applicability.

This position certainly provided a powerful argument against inductive methods in geography, yet many geographers who stated the uniqueness case nevertheless argued for an inductive approach. It was less clear that uniqueness was a valid objection to a theoretically based, hypothesis-testing approach, because a collection of unique cases might nevertheless confirm or reject a hypothesised relationship. Uniqueness was only an obstacle if it could be shown that causal relationships were themselves unique to each instance and changed inconsistently from place to place and from time to time.

It was, however, argued that uniqueness with respect to some trivial property (location of some geographical phenomenon may be a trivial property) or some peripheral relationship was insufficient reason to invalidate scientific method.

2. A second consequence of geographical systems being large open systems is the difficulty of carrying out experimental tests. The sheer size of a geographical system (the atmosphere, the river basin, a city) makes the laboratory experiment impossible. Scaling down the system may alter its properties in unknown ways.

Even if the system is reproduced in the laboratory, there is no assurance that all the variables relevant in reality have been included in the laboratory version. An alternative in scientific terms is the ‘field experiment’, but it is difficult to ensure that the only variables allowed to vary are those being investigated, and certain experiments in human geography would be politically or morally unacceptable.

So in field experiments, and certainly in field data collection, much of the control of extraneous variables is achieved by purely statistical means which in theory allow the isolation of a two-variable relationship when ‘all other variables are held constant’. Yet even such methods can only ‘hold constant’ recorded variables. There is no way of controlling for the possible effects of unrecognised and unrecorded additional variables.
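The sketch below illustrates this kind of statistical control on invented data: a multiple regression estimates the relationship between house price and distance from the city centre while ‘holding constant’ one recorded covariate (floor area). The variables and coefficients are hypothetical, and the caveat above still applies: only variables that have actually been recorded can be held constant in this way.

```python
import numpy as np

# Hypothetical data: price (thousands), distance from centre (km), floor area (m2).
rng = np.random.default_rng(0)
n = 200
distance_km = rng.uniform(0, 20, n)
floor_area = rng.uniform(50, 150, n)
price = 250 - 4.0 * distance_km + 1.2 * floor_area + rng.normal(0, 10, n)

# Multiple regression isolates the distance effect with floor area held constant.
X = np.column_stack([np.ones(n), distance_km, floor_area])
coeffs, *_ = np.linalg.lstsq(X, price, rcond=None)
print(f"distance effect, floor area held constant: {coeffs[1]:.2f} per km")
# Unrecognised, unrecorded variables remain uncontrolled.
```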

3. A third consequence of the multi-variable nature of geographic systems concerns the use of theory from other disciplines. This may be applied and synthetic (the attempt to bring theory from other disciplines to bear on a geographic problem) or reductionist (the attempt to interpret geographical relationships as special cases of more general theory in other disciplines).

Such an attempt to borrow is especially difficult if more than one other discipline is involved, each with a scale of analysis, a conceptual framework and definitions which may not be compatible with each other or with the geographical terms of reference.

A common geographical solution to this is to adopt one discipline as the source of a central theoretical framework (an ‘economic approach’ to urban geography, the ‘physics’ of slope development) and to use theory from other disciplines as modifying the central theory.

A problem remains, however, that the best current theory in geography requires some knowledge of many natural and social sciences. It is this, one suspects, that makes most geographers reluctant, if not unable, to pursue a wide slice of the discipline at the highest level.

4. Another problem that arises in applying scientific method in geography is the interference of the observer with the phenomenon observed. This problem is encountered in laboratory sciences, but it is usually possible to design the experiment so as to minimise the effect. It also occurs in physical geography.

In human geography, the same problem occurs in two more acute forms:

(a) If the presence of an observer is known to the actors (either in a role of observer or as a simple stranger), it may lead to a short-run change in behaviour, conscious or unconscious; the results of observation will thus be untypical of normal behaviour, and

(b) The interaction between observer and observed (at the time of observation or later, by publication of research findings) may produce long-run changes which would not otherwise have occurred. If such changes are towards the research hypothesis, ‘a false hypothesis may be spuriously confirmed; if counter to the hypothesis, a correct hypothesis may be mistakenly rejected’.


The Learner's Guide to Geospatial Analysis

Geography Department Penn State

Geospatial Reasoning

The three well-known reasoning processes trace the development of analytic beliefs along different paths. Inductive reasoning reveals “that something is probably true,” while deductive reasoning demonstrates “that something is necessarily true.” It is generally accepted within the intelligence community that both are limited: inductive reasoning leads to multiple, equally likely solutions, and deductive reasoning is subject to deception. Therefore, a third aid to judgment, abductive reasoning, showing “that something is plausibly true,” is used to offset the limitations of the others. While analysts who employ all three guides to sound judgment stand to be the most persuasive, fallacious reasoning or mischaracterization of rules, cases, or results in any of the three can affect reasoning using the others.

  • Inductive reasoning , moving from the specific case to the general rule, suggests many possible outcomes, or the range of what might happen in the future. However, inductive reasoning lacks a means to distinguish among outcomes. An analyst has no way of knowing whether a solution is correct.
  • Deductive reasoning , on the other hand, moves from the general to the specific. Deductive reasoning becomes essential for predictions. Based on past perceptions, certain facts indicate specific outcomes. If, for example, troops are deployed to the border, communications are increased, and leadership is in defensive bunkers, then war is imminent. However, if leadership remains in the public eye, then these preparations indicate that an exercise is imminent.
  • Abductive reasoning reveals plausible outcomes. Abductive reasoning is the process of generating the best explanation for a set of observations. When actions defy accurate interpretation through existing paradigms, abductive reasoning generates novel means of explanation. In the case of predictions, an abductive process presents an “assessment of probabilities.” Although abduction provides no guarantee that the analyst has chosen the correct hypothesis, the probative force of the accompanying argument indicates that the most likely hypothesis is known and that actionable intelligence is being developed.
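As a minimal sketch of abduction treated as an “assessment of probabilities,” the example below scores two competing explanations of the troop-deployment observation used above. The hypotheses, prior probabilities and likelihoods are invented for illustration only; they are not drawn from any real assessment.

```python
# Abduction as an assessment of probabilities: pick the most plausible explanation
# of the observation "troops are deployed to the border".  All figures are invented.
priors = {"military exercise": 0.6, "preparation for war": 0.4}
likelihoods = {"military exercise": 0.3, "preparation for war": 0.8}  # P(observation | hypothesis)

unnormalised = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnormalised.values())
posteriors = {h: v / total for h, v in unnormalised.items()}

best = max(posteriors, key=posteriors.get)
print(posteriors)
print(f"most plausible explanation: {best}")
```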

It is not too far of a stretch to say that people who are drawn to the discipline of geospatial intelligence have minds accustomed to assembling information into three-dimensional mental schemas. We construct schemas in our mind, rotate them, and view them from many angles. Furthermore, the experienced geospatial professional imagines spatial schemas influenced in the fourth dimension, time. We mentally replay time series of the schema. So easy is the geospatial professional’s ability to assemble multidimensional models that the expert does it with incomplete data. We mentally fill in gaps, making an intuitive leap toward a working schema with barely enough data to perceive even the most rudimentary spatial patterns. This is a sophisticated form of geospatial reasoning. Expertise increases with experience because as we come across additional schemas, our mind continuously expands to accommodate them. This might be called spatial awareness. Being a visual-spatial learner, instead of feeling daunted by the abundance and complexity of data, we find pleasure in recognizing the patterns. Are we crazy? No, this is what is called a visual-spatial mind. Some also call these people right brain thinkers.

The concept of right brain and left brain thinking developed from the research of psychobiologist Roger W. Sperry. Sperry discovered that the human brain has two different ways of thinking. The right brain is visual and processes information in an intuitive and simultaneous way, looking first at the whole picture then the details. The left brain is verbal and processes information in an analytical and sequential way, looking first at the pieces then putting them together to get the whole. Some individuals are more whole-brained and equally adept at both modes.

The qualities of the visual-spatial person are well documented but not well known. Visual-spatial thinkers are individuals who think in pictures rather than in words. They have a different brain organization than sequential thinkers. They are whole-part thinkers who think in terms of the big picture first before they examine the details. They are non-sequential, which means that they do not think and learn in a step-by-step manner. They arrive at correct solutions without taking steps. They may have difficulty with easy tasks, but show a unique ability with difficult, complex tasks. They are systems thinkers who can orchestrate large amounts of information from different domains, but they often miss the details.

Sarah Andrews likens some contrasting thought processes to a cog railway. Data must be in a set sequence in order to process it through a workflow. In order to answer a given question, the thinker needs information fed to him in order. He will apply a standardized method towards arriving at a pragmatic answer, check his results, and move on to the next question. In order to move comfortably through this routine, he requires that a rigid set of rules be in place. This is compared with the geospatial analyst who grabs information in whatever order, and instead of crunching down a straight-line, formulaic route toward an answer, makes an intuitive, mental leap toward the simultaneous perception of a group of possible answers. The answers may overlap, but none are perfect. In response to this ambiguity, the geospatial analyst develops a risk assessment, chooses the best working answer from this group, and proceeds to improve the estimate by gathering further data. Unlike the engineer, whose formulaic approach requires that the unquestioned authority of the formula exist in order to proceed, the geospatial intelligence professional questions all authority, be it in the form of a human or acquired data.


Inductive Reasoning | Types, Examples, Explanation

Published on January 12, 2022 by Pritha Bhandari. Revised on June 22, 2023.

Inductive reasoning is a method of drawing conclusions by going from the specific to the general. It’s usually contrasted with deductive reasoning , where you go from general information to specific conclusions.

Inductive reasoning is also called inductive logic or bottom-up reasoning.

Note: Inductive reasoning is often confused with deductive reasoning. However, in deductive reasoning, you make inferences by going from general premises to specific conclusions.


Inductive reasoning is a logical approach to making inferences, or conclusions. People often use inductive reasoning informally in everyday situations.

Inductive Reasoning

You may have come across inductive logic examples that come in a set of three statements. These start with one specific observation, add a general pattern, and end with a conclusion.

Examples: Inductive reasoning

Example 1
  • Specific observation: Nala is an orange cat and she purrs loudly.
  • Pattern recognition: Every orange cat I’ve met purrs loudly.
  • General conclusion: All orange cats purr loudly.

Example 2
  • Specific observation: Baby Jack said his first word at the age of 12 months.
  • Pattern recognition: All babies say their first word at the age of 12 months.
  • General conclusion: All babies say their first word at the age of 12 months.


In inductive research, you start by making observations or gathering data. Then , you take a broad view of your data and search for patterns. Finally, you make general conclusions that you might incorporate into theories.

You distribute a survey to pet owners. You ask about the type of animal they have and any behavioral changes they’ve noticed in their pets since they started working from home. These data make up your observations.

To analyze your data, you create a procedure to categorize the survey responses so you can pick up on repeated themes. You notice a pattern : most pets became more needy and clingy or agitated and aggressive.

Inductive reasoning is commonly linked to qualitative research , but both quantitative and qualitative research use a mix of different types of reasoning.

There are many different types of inductive reasoning that people use formally or informally, so we’ll cover just a few in this article: inductive generalization, statistical generalization, causal reasoning, sign reasoning, and analogical reasoning.

Inductive reasoning generalizations can vary from weak to strong, depending on the number and quality of observations and arguments used.

Inductive generalizations use observations about a sample to come to a conclusion about the population it came from.

Inductive generalizations are also called induction by enumeration.

  • The flamingos here are all pink.
  • All flamingos I’ve ever seen are pink.
  • All flamingos must be pink.

Inductive generalizations are evaluated using several criteria:

  • Large sample: Your sample should be large for a solid set of observations.
  • Random sampling: Probability sampling methods let you generalize your findings.
  • Variety: Your observations should be externally valid .
  • Counterevidence: Any observations that refute yours falsify your generalization.


Statistical generalizations use specific numbers to make statements about populations, while non-statistical generalizations aren’t as specific.

These generalizations are a subtype of inductive generalizations, and they’re also called statistical syllogisms.

Here’s an example of a statistical generalization contrasted with a non-statistical generalization.

Example: Statistical vs. non-statistical generalization

Statistical generalization
  • Specific observation: 73% of students from a sample in a local university prefer hybrid learning environments.
  • Inductive generalization: 73% of all students in the university prefer hybrid learning environments.

Non-statistical generalization
  • Specific observation: Most students from a sample in a local university prefer hybrid learning environments.
  • Inductive generalization: Most students in the university prefer hybrid learning environments.
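A minimal sketch of how the uncertainty behind such a statistical generalization can be quantified, assuming a hypothetical sample of 200 students (no sample size is given in the example above):

```python
import math

# Normal-approximation 95% confidence interval for a sample proportion of 0.73,
# assuming an invented sample size of 200 students.
p_hat, n, z = 0.73, 200, 1.96
se = math.sqrt(p_hat * (1 - p_hat) / n)
low, high = p_hat - z * se, p_hat + z * se
print(f"95% CI for the population proportion: {low:.2f} to {high:.2f}")
```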

Causal reasoning means making cause-and-effect links between different things.

A causal reasoning statement often follows a standard setup:

  • You start with a premise about a correlation (two events that co-occur).
  • You put forward the specific direction of causality or refute any other direction.
  • You conclude with a causal statement about the relationship between two things.
Example: Causal reasoning

  • All of my white clothes turn pink when I put a red cloth in the washing machine with them.
  • My white clothes don’t turn pink when I wash them on their own.
  • Putting colorful clothes with light colors causes the colors to run and stain the light-colored clothes.

Good causal inferences meet a couple of criteria:

  • Direction: The direction of causality should be clear and unambiguous based on your observations.
  • Strength: There’s ideally a strong relationship between the cause and the effect.

Sign reasoning involves making correlational connections between different things.

Using inductive reasoning, you infer a purely correlational relationship where nothing causes the other thing to occur. Instead, one event may act as a “sign” that another event will occur or is currently occurring.

  • Every time Punxsutawney Phil casts a shadow on Groundhog Day, winter lasts six more weeks.
  • Punxsutawney Phil doesn’t cause winter to be extended six more weeks.
  • His shadow is a sign that we’ll have six more weeks of wintery weather.

It’s best to be careful when making correlational links between variables . Build your argument on strong evidence, and eliminate any confounding variables , or you may be on shaky ground.

Analogical reasoning means drawing conclusions about something based on its similarities to another thing. You first link two things together and then conclude that some attribute of one thing must also hold true for the other thing.

Analogical reasoning can be literal (closely similar) or figurative (abstract), but you’ll have a much stronger case when you use a literal comparison.

Analogical reasoning is also called comparison reasoning.

  • Humans and laboratory rats are extremely similar biologically, sharing over 90% of their DNA.
  • Lab rats show promising results when treated with a new drug for managing Parkinson’s disease.
  • Therefore, humans will also show promising results when treated with the drug.

Inductive reasoning is a bottom-up approach, while deductive reasoning is top-down.

In deductive reasoning, you make inferences by going from general premises to specific conclusions. You start with a theory, and you might develop a hypothesis that you test empirically. You collect data from many observations and use a statistical test to come to a conclusion about your hypothesis.

Inductive research is usually exploratory in nature, because your generalizations help you develop theories. In contrast, deductive research is generally confirmatory.

Sometimes, both inductive and deductive approaches are combined within a single research study.

Inductive reasoning approach

You begin by using qualitative methods to explore the research topic, taking an inductive reasoning approach. You collect observations by interviewing workers on the subject and analyze the data to spot any patterns. Then, you develop a theory to test in a follow-up study.

Deductive reasoning approach



Bhandari, P. (2023, June 22). Inductive Reasoning | Types, Examples, Explanation. Scribbr. Retrieved August 26, 2024, from https://www.scribbr.com/methodology/inductive-reasoning/


Geographical Association


Decision making, problem solving and mysteries


‘Education should enable students to make sense of the world for themselves, to be critical of information, to enable them to participate in decision making and to promote their own social and intellectual development so that they can get more out of life and contribute more to society.’

Roberts, 2013, p 22

Topics on this page:

What are decision making and problem solving activities? | Why are these activities important for geographical learning? | Managing decision making activities | Decision making in GCSE examinations | Making animals activity | Layered decision making | Mysteries | How to use mysteries | Reading

What are decision making and problem solving activities?

Decision-making and problem solving in geography are activities where an issue or question is identified and investigated. Geography teachers use these activities to provide real world  contexts  for students to apply and develop their knowledge and skills and develop geographical thinking.

Teachers put students in situations where they are required to evaluate alternatives and reach decisions. This helps students to engage with geographical issues and to gain a critical understanding of the kinds of evidence and skills used for decision making. 

Problem solving takes this further into implementing actions and evaluating the consequences. Mysteries are another form of problem solving that originated in the ‘Thinking through geography’ project.

Why are these activities important for geographical learning?

Decision making and problem solving activities seek to challenge students, so they are put in a position where they have to think hard. While it is important that the context and scenario are not too complex for them to comprehend, it is valuable to give them tasks which are just beyond their present capabilities so they have to struggle a bit.

The Russian psychologist, Vygotsky, described this with the concept of the ‘zone of proximal development’, i.e. the gap between what a student can do on their own and what they can do if supported by more able peers or adults.

Teachers should select appropriate decision making activities that aim to move students through the zone so they can work independently and move forward from what they can currently do with some support (see  Learning theories and geography ).

A significant part of the learning in decision-making and problem solving activities is in the analysis and reflection that students engage in as they weigh up the information provided to reach a conclusion. 

These activities are best tackled collaboratively so that several views and opinions have to be considered in making the decision and each person must explain their ideas clearly to their peers to make their case and justify their argument.

Another way in which these activities contribute to good geographical learning is that by drawing in and synthesising elements from across the subject they broaden and deepen students’ understanding. They often involve looking at a problem or an issue holistically, therefore replicating the ways geographers think in real world situations. 

A careful choice of problem and context can mean that students must consider a wide variety of different geographies to reach a decision and must make links across the subject to do so.

  • Look at  A New Stadium for Rotherham United – A Siting Exercise case study . These materials from the GA’s  Living Geography  project provides an example of a decision making activity.
  • Rose, C. (2008) ‘Are year 13s too old to think?’,  Teaching Geography , Autumn.
  • Thomas, S. and McGahan, H. (1997)  ‘ Geography it makes you think’,  Teaching Geography,  July. – an example of  Decision making for Guiseppe Cosanostro . Statements have to be categorised into those that are background information and those that are triggers for his decision on whether to migrate.

Managing decision making activities

Selecting the right geographical question or problem is important. Students need to have sufficient prior knowledge to tackle the problem and make decisions and the teacher must provide the necessary contextual material. Some examples are provided later on this page.

To make the geographical learning worthwhile, students need time to fully explore the problem, discuss ideas and struggle with challenging issues in the process of decision making. They need to have the opportunity to make sense of the information provided and think hard about the geography. 

These activities can extend over one or more lessons. A good decision making activity requires an investment in careful planning so it is important that sufficient lesson time is allocated to allow the activity to achieve its goals.

Consider carefully how you set up groups of students for a decision making activity. The students must work collaboratively and support each other if the strategy is to be successful (see Collaborative learning in geography ).

If you are using this type of activity with a class for the first time, you will need to consider what scaffolding to provide to help them tackle the process and manage the analysis of information and data on which to base their decision. This could be in the form of written guidance, or through class discussion and questions (see Scaffolding geographical learning ).

Model the process to help them understand that there are no certainties or ‘one right answer’ and how to justify what they decide. Show them the type of outcome you expect, such as reports written by other students, or provide a writing frame. Teachers need to monitor the activity to provide support or further information as necessary, but should not be too hasty to intervene and solve the problem for them!

Decision making activities are supposed to be challenging and you should give students the opportunity to show you what they can achieve. Do not hold back or set your expectations too low. What they can do successfully is highly dependent on their motivation and your thoughtful support.

If they have good self-esteem and find the topic of interest to them, they will attempt most challenges you present to them. You need to establish the right classroom climate and build a relationship of trust where students feel supported, valued and their efforts are praised.

Develop an activity for yourself, including preparing the resources (GIS offers excellent opportunities for developing decision making activities.) Then plan and teach the sequence of lessons.

Some ideas:

  • Use the  Water crisis in Las Vegas  resources for a decision-making activity with a focus on sustainability, water conflicts, extreme environments or human–environment interactions.
  • Some exam specifications have decision making papers you could develop further.
  • Where will I live? ; refer to the ‘students as citizens’ section and apply these ideas to a similar decision making exercise in your local area.
  • Use some of the ideas from ‘Making animals’ below.

Decision making in GCSE examinations

The current GCSE examination from AQA includes a paper on geographical applications with a section on issue evaluation. This contributes a critical thinking and problem-solving element to the assessment. 

A resource booklet is available in advance, including e.g. maps, graphs, diagrams, photographs, quotes from different interest groups etc. Students are expected to interpret, analyse and evaluate the information and issue(s) in the pre-release resources, make an appraisal of the advantages and disadvantages, and evaluate the alternatives.

  • Refer to  Issue evaluation for all abilities  ( AQA) Rebecca Blackshaw.

Making animals activity

This ‘thinking activity’ is concerned with planning and decision making allowing for particular constraints. One of the early versions was to design an animal, hence the title, but the activity does not need to include animals! The basic premise of the original activity was that students have to design an animal that would be adapted to live in a particular environment.

The constraints they had to think about are environmental factors and how animals can adapt to particular conditions. The idea can be applied to other situations where there are specific parameters. Nichols and Kinninment (2001) give examples that include topics such as natural regions, migration and a shanty town. This is a very flexible activity! (See Making Animals.)

The three important characteristics of the generic strategy are:

  • A context to work within
  • Features to choose – to design something in that context
  • Constraints on their choice – such as the number of features or the amount they can spend.
  • Refer to the example of ‘Backpacking in Italy’ in Leat and McGrane (2000) which includes the resources used. You could design a similar task for another topic/location.

Some hints on managing ‘Backpacking in Italy’

  • The task needs a good introduction or ‘framing’ to establish the relevance and purpose of the decisions students are being asked to make.
  • Ensure you focus on the  place  aspects of the context – where is the geography?
  • The activity works best in pairs so they work cooperatively on the decisions.
  • Eavesdrop on their thinking so you can use this in the debriefing.
  • Expect a range of responses: some students may struggle with the interrelationship of human and physical geography factors.
  • Warn students that you will expect them to justify their reasons for what they pack.
  • In the debrief push the students for these justifications, and you may have to play the devil’s advocate to get them to argue out their justification.
  • Leat, D and McGrane, J. (2000) ‘Diagnostic and formative assessment of students’ thinking’,  Teaching Geography,  January.
  • Nichols, A. and Kinninment, D. (2001)  More Thinking through Geography , London: Chris Kington Publishing.

Layered decision making

This was originally a thinking geography activity and it introduces more complexity into decision making activities so they are more realistic and challenging. Students are provided with the information and make decisions based on this. 

Then further information is introduced that changes the scenario so the decision must be reconsidered. This approach is useful for situations where there are complex or conflicting issues that need to be resolved, so students can be fed the information one stage at a time.

As layered decision making is more complex, some students may struggle and need more support. In particular, as more information is introduced there is a risk of cognitive overload and students may have difficulty remembering all the factors involved, so ways of keeping this information easily to hand are important.

Students will need guidance in how to record information efficiently to help them make decisions. Debriefing needs to take place as you go along so that there is discussion on the first decisions before more complexity is introduced.

  • Avanessian, A. (2008) ‘Layered decision making: coastal protection along the Holderness coast’,  Teaching Geography , Spring.
  • Biddulph, M., Lambert, D. and Balderstone, D. (2021),  Learning to Teach Geography in the Secondary School: A Companion to School Experience , 4th edition, Abingdon: Routledge p 81.
  • Enser, M. (2019) Making Every Geography Lesson Count, Crown House Publishing. Chapter 3, Section 6 has an example of decision making.
  • Nichols, A. and Kinninment, D. (2001)  More Thinking through Geography , London: Chris Kington Publishing. (Examples: moving house; the consequences of dam construction; a new stadium).

Mysteries

Mysteries are a form of problem solving, developed by David Leat’s Thinking through geography project in the 1990s, which was directly concerned with developing cognitive abilities through geography teaching. Students are provided with a range of ‘clues’ in order to explore possible explanations for a ‘mystery’ in which they have to solve a central question.

Effective mysteries often start by linking two seemingly unconnected elements and this approach helps to introduce a holistic dimension to the geography topic. Mysteries are challenging activities that provide an opportunity for students to try out new information against the understanding they already have; this is important for building schemas.

Students are given 16–30 pieces of information on individual cards and have to work collaboratively in small groups to solve the question. The problem solving in mysteries usually focuses on ‘cause and effect’ or classification. 

Students need to sort relevant information from irrelevant; interpret information; make links between disparate pieces of information; and speculate and form hypotheses which they go on to check, refine and explain. The cards enable the statements to be moved about, so students can process and change their ideas.

Mysteries encourage students to deal with ambiguity. They must recognise there is no one right answer. They must determine whether the information on each card is relevant or not. The mystery should be very like real life! 

Ultimately the students should write in detail about the central question and should have some thoughtful geographical explanations. It is important to consolidate learning in this follow up activity and students should be encouraged to tell the story of the mystery and not just to repeat what was on the cards.

How to use mysteries

To create a new mystery from scratch requires a good deal of research and planning and it is best to use an example that has already been developed in the first instance. Check the chosen mystery includes the key concepts and ideas that you want to cover in the unit you are teaching.

Identify the necessary prior understanding that students will need to be able to understand the statements on the cards and the vocabulary. Plan to do some pre-teaching if necessary. 

Provide a good introduction to set the scene and provide the stimulus so that the students want to solve the mystery – they need to be motivated and persistent to puzzle it out or the strategy will not work. Stress the key question for the mystery at the start, and keep coming back to it.

Successful learning from a mystery depends on collaborative working. Students can have strongly held views and there can be dissent in the groups to cope with. Select the groups carefully with this in mind.

You should allow sufficient time for them to work through the problem. Advise students to sort the statements and discard the ones they do not think are relevant, but to keep checking on the discarded ones as they work. Watch out for any groups that are overwhelmed and start to go off task. 

Provide support but do not give them too much ‘help’ and resist the temptation to solve the question for them. When you intervene, aim to trigger their thinking to consider different strategies rather than give them the answers.

Mysteries are an excellent tool for diagnostic and formative assessment. As groups work, observe how students handle the information, listen to their discussions and explanations and read their final product.

Debriefing is an important part of the activity (see  Debriefing in geography ). Here you will analyse how they approached the tasks and what they found out. It is a good idea to start with feedback from a group with a reasonable, but challengeable, explanation and invite others to comment. Try to keep the ‘answer’ open for as long as you can so you can get discussion and debate to unpick the statements in detail.

When you have discussed the outcomes, move on to discuss how they approached the task. Did their ideas change during the task? How did their group operate? How did they resolve disagreements? (See Metacognition.)

  • Atherton, R. (2009) ‘Living with natural processes – physical geography and the human impact on the environment’, in Mitchell, D (ed)  Living Geography: Exciting futures for teachers and students . London: Chris Kington Publishing – this chapter contains detailed information and resources for a mystery about flooding, ‘Why is Mrs Wilson having to replace her precious gnome collection?’.
  • Balderstone, D. (ed) (2006)  Secondary Geography Handbook . Sheffield: Geographical Association, p 324 –  ‘What happened to the Singh family and why? (Bangladesh flooding)’ . This discusses how mysteries were tailored very successfully to use with students with SEN and shows examples of students’ work.
  • Gillman, R. and Gillman, S. (2016) ‘Using mysteries to develop place knowledge’,  Teaching Geography,  Spring. – a mystery which focuses on the Ebola crisis and includes on-line materials.
  • Leat, D. (1998)  Thinking through geography . London: Chris Kington Publishing (Examples: industrial change in South Wales (this factory has closed); Who is to blame for the Sharpe Point Flats? The lost livestock of Loxley Farm (Y12).
  • Leat, D. and Nichols, A. (1999)  Theory into Practice: Mysteries make you think . Sheffield: Geographical Association.
  • Lyon, J. (2009) ‘Life, death and disease – applied geographical thinking and disease’ in Mitchell, D (ed)  Living Geography: Exciting futures for teachers and students . London: Chris Kington Publishing – this chapter contains detailed information and resources for a mystery about disease, ‘Why did Eric Marshall catch measles in 1997?’.
  • Rawding, C. (2015) ‘Marie Antoinette and Heathrow Airport: holistic geographies’,  Teaching Geography,  Spring. – a mystery involving two different volcanic eruptions makes connections between physical and human processes to teach holistic geographies.
  • Ward, R. (2004) ‘Mind friendly learning in geography’,  Teaching Geography , October.
  • Wright, E. (2004) ‘Why did Mrs Windsor vote yes to the Euro?’,  Teaching Geography , October.



Problem-solving with a geographic approach

As we confront the greatest issues of our time, one factor is crucial—geography.

What is the geographic approach?

Our most serious challenges—such as climate change, sustainability, social inequity, and global public health—are inherently spatial. To solve such complex problems, we must first understand their geography.

The geographic approach is a way of thinking and problem-solving that integrates and organizes all relevant information in the crucial context of location. Leaders use this approach to reveal patterns and trends; model scenarios and solutions; and ultimately, make sound, strategic decisions.


Monitoring the earth’s health

The European Environment Agency tracks air quality and pollution levels to better inform policy decisions across the continent.

A geographic approach provides clarity

Geography is a way of pulling all key information about an issue together, expanding the questions we can ask about a place or a problem and the creative solutions we can bring to bear. 

Science based and data driven

A geographic approach relies on science and data to understand problems and reveal solutions.

Holistic and inclusive

A geographic approach considers how all factors are interconnected, uniting data types by what they have in common—location.

Collaborative

Maps are a powerful foundation for communication and action—a way to create shared understanding, explore alternatives, and find solutions.


Making the most out of complex data

Mapping all kinds of data about a system such as the Port of Rotterdam offers a full perspective, revealing opportunities to operate more efficiently.

Mapping transforms data into understanding

With so much data having a location component, a geographic approach provides a logical foundation for organizing, analyzing, and applying it. When we visualize and analyze data on a map, hidden relationships and insights emerge.

Geography delivers a dynamic narrative

Maps tell stories about places—what's happening there now, what has happened, and what will happen next.

Maps are an accessible analytic platform

Maps help us grasp concepts and tap into a visual storytelling language we intuitively understand.

High-resolution imagery comes to life

When viewed on a map, imagery transforms from static snapshots to compelling stories that enhance understanding.


Visualizing how to improve mobility

This 3D map of San Francisco demonstrates how walkability and transportation access (shown in pink) improve with planned transit service expansions.

Cutting-edge technology magnifies the power of geography

Geography is being revitalized by a world of sensors and connectivity and made more powerful by modern geographic information system (GIS) software. With today's sophisticated digital maps, we can apply our best data science and analysis to convert raw data into location intelligence—insights that empower real-time understanding and transform decision-making.


Managing real-time operations

This live dashboard view of buses and traffic incidents in New York City combines historical and real-time data to avoid delays and keep people safe.

Global challenges require a geographic approach

Sustainability, infrastructure, climate impacts.

Leaders use a geographic approach to guide the most successful sustainability projects and actualize resilience.

A map of Southern California with areas marked in red to show the results of a green infrastructure analysis

A geographic approach to planning, prioritization, and operations helps leaders understand how infrastructure projects relate to surrounding environments.

A detailed 3D vector model of Cincinnati, Ohio shows buildings and individual contoured trees to help inform planning for 5G networks.

Leaders who need to understand climate change impacts rely on a geographic approach to build actionable climate change solutions.

A map of the Pacific Northwest shows air quality in colors ranging from green to dark red, poor air quality being a result of wildfires across the region

Location matters more than ever

Geographic knowledge creates essential context. We can't manage our world without it—whether it's global supply chain issues, equitable internet access in the US, or energy consumption for a multinational company. As we work together to address today’s challenges, a geographic approach, powered by GIS, will help map the common ground we need to inspire effective action.

Applying a geographic approach across all sectors

Businesses use a geographic approach to streamline operations, develop strategy, and achieve sustainable prosperity.

Governments use a geographic approach to build resilient, equitable infrastructure and improve disaster preparedness.

Nonprofit organizations use a geographic approach to maximize their effectiveness and make the most of limited resources.

Find out how a geographic approach can elevate your organization's work.


Inductive vs Deductive Reasoning | Difference & Examples

Published on 4 May 2022 by Raimo Streefkerk. Revised on 10 October 2022.

The main difference between inductive and deductive reasoning is that inductive reasoning aims at developing a theory while deductive reasoning aims at testing an existing theory .

Inductive reasoning moves from specific observations to broad generalisations , and deductive reasoning the other way around.

Both approaches are used in various types of research , and it’s not uncommon to combine them in one large study.



When there is little to no existing literature on a topic, it is common to perform inductive research because there is no theory to test. The inductive approach consists of three stages:

1. Observation
  • A low-cost airline flight is delayed
  • Dogs A and B have fleas
  • Elephants depend on water to exist

2. Observing a pattern
  • Another 20 flights from low-cost airlines are delayed
  • All observed dogs have fleas
  • All observed animals depend on water to exist

3. Developing a theory or general conclusion
  • Low-cost airlines always have delays
  • All dogs have fleas
  • All biological life depends on water to exist

Limitations of an inductive approach

A conclusion drawn on the basis of an inductive method can never be proven, but it can be invalidated.

Example You observe 1,000 flights from low-cost airlines. All of them experience a delay, which is in line with your theory. However, you can never prove that flight 1,001 will also be delayed. Still, the larger your dataset, the more reliable the conclusion.


When conducting deductive research , you always start with a theory (the result of inductive research). Reasoning deductively means testing these theories. If there is no theory yet, you cannot conduct deductive research.

The deductive research approach consists of four stages, illustrated below with the same three running examples (a minimal code sketch follows the list):

  • Stage 1 – Start with an existing theory and formulate a hypothesis: If passengers fly with a low-cost airline, then they will always experience delays; All pet dogs in my apartment building have fleas; All land mammals depend on water to exist.
  • Stage 2 – Collect data to test the hypothesis: Collect flight data of low-cost airlines; Test all dogs in the building for fleas; Study all land mammal species to see if they depend on water.
  • Stage 3 – Analyse the results: 5 out of 100 flights of low-cost airlines are not delayed; 10 out of 20 dogs didn’t have fleas; All land mammal species depend on water.
  • Stage 4 – Reject or support the hypothesis: 5 out of 100 flights of low-cost airlines are not delayed = reject hypothesis; 10 out of 20 dogs didn’t have fleas = reject hypothesis; All land mammal species depend on water = support hypothesis.
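A matching sketch of the deductive route, again with invented flight data, shows how a single counterexample is enough to reject the hypothesis.

```python
# A minimal sketch of the deductive stages, with hypothetical flight data.

# Stages 1-2: start from the theory and state a testable hypothesis.
hypothesis = "Every flight on a low-cost airline is delayed"

# Stage 3: collect data (here, a made-up sample of 100 flights, 5 on time).
flights = [{"delayed": True}] * 95 + [{"delayed": False}] * 5

# Stage 4: analyse -- one counterexample is enough to reject the hypothesis.
counterexamples = [f for f in flights if not f["delayed"]]
if counterexamples:
    print(f"Reject hypothesis: {len(counterexamples)} flights were not delayed")
else:
    print("Data are consistent with the hypothesis (still not proven)")
```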

Limitations of a deductive approach

The conclusions of deductive reasoning are only guaranteed to be true if all the premises established in the earlier inductive study are true and the terms are clear.

  • All dogs have fleas (premise)
  • Benno is a dog (premise)
  • Benno has fleas (conclusion)

Combining inductive and deductive research

Many scientists conducting a larger research project begin with an inductive study (developing a theory). The inductive study is followed up with deductive research to confirm or invalidate the conclusion.

In the examples above, the conclusion (theory) of the inductive study is also used as a starting point for the deductive study.

Frequently asked questions about inductive vs deductive reasoning

Inductive reasoning is a bottom-up approach, while deductive reasoning is top-down.

Inductive reasoning takes you from the specific to the general, while in deductive reasoning, you make inferences by going from general premises to specific conclusions.

Inductive reasoning is a method of drawing conclusions by going from the specific to the general. It’s usually contrasted with deductive reasoning, where you proceed from general information to specific conclusions.

Inductive reasoning is also called inductive logic or bottom-up reasoning.

Deductive reasoning is a logical approach where you progress from general ideas to specific conclusions. It’s often contrasted with inductive reasoning , where you start with specific observations and form general conclusions.

Deductive reasoning is also called deductive logic.


Guide To Inductive & Deductive Reasoning

Induction vs. Deduction

October 15, 2008, by The Critical Thinking Co. Staff

Induction and deduction are pervasive elements in critical thinking. They are also somewhat misunderstood terms. Arguments based on experience or observation are best expressed inductively, while arguments based on laws or rules are best expressed deductively. Most arguments are mainly inductive. In fact, inductive reasoning usually comes much more naturally to us than deductive reasoning.

Inductive reasoning moves from specific details and observations (typically of nature) to the more general underlying principles or process that explains them (e.g., Newton's Law of Gravity). It is open-ended and exploratory, especially at the beginning. The premises of an inductive argument are believed to support the conclusion, but do not ensure it. Thus, the conclusion of an induction is regarded as a hypothesis. In the inductive method, also called the scientific method, observation of nature is the authority.

In contrast, deductive reasoning typically moves from general truths to specific conclusions. It opens with an expansive explanation (statements known or believed to be true) and continues with predictions for specific observations supporting it. Deductive reasoning is narrow in nature and is concerned with testing or confirming a hypothesis. It is dependent on its premises: a false premise can lead to a false result, and inconclusive premises will yield an inconclusive conclusion. Deductive reasoning leads to a confirmation (or not) of our original theories, and it guarantees the correctness of a conclusion only when the premises are true and the logic is valid. Logic is the authority in the deductive method.

If you can strengthen your argument or hypothesis by adding another piece of information, you are using inductive reasoning. If you cannot improve your argument by adding more evidence, you are employing deductive reasoning.


Inductive Vs Deductive Research


Inductive and deductive research are two contrasting approaches used in research to develop and test theories.

Inductive Research

  • Definition: Inductive research starts with specific observations or real examples of events, trends, or social processes. From these observations, researchers identify patterns and develop broader generalizations or theories.
  • Observation: Begin with detailed observations of the world.
  • Pattern Recognition: Identify patterns or regularities in these observations.
  • Theory Formation: Develop theories or hypotheses based on the identified patterns.
  • Conclusion: Make generalizations that can be applied to broader contexts.
  • Example: A researcher observes that students who study in groups tend to perform better on exams. From this pattern, they might develop a theory that group study is more effective than studying alone.

Deductive Research

  • Definition: Deductive research starts with a theory or hypothesis and then designs a research strategy to test this hypothesis. It moves from the general to the specific.
  • Theory: Begin with an existing theory or hypothesis.
  • Hypothesis Development: Formulate a hypothesis based on the theory.
  • Data Collection: Collect data to test the hypothesis.
  • Analysis: Analyze the data to determine whether it supports or refutes the hypothesis.
  • Conclusion: Draw conclusions that confirm or challenge the initial theory.
  • Example: A researcher starts with the hypothesis that “students who study for more than 3 hours a day perform better on exams.” They then collect data to see if this hypothesis holds true (a minimal data sketch follows this list).
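As a rough sketch of that example, the data collection and analysis steps might look like the following; the student records are entirely hypothetical, and a real study would add a proper significance test before supporting or rejecting the hypothesis.

```python
# Hypothetical sketch: do students who study more than 3 hours a day
# score higher on average? All records below are invented.
from statistics import mean

students = [
    {"hours": 4.0, "score": 82}, {"hours": 3.5, "score": 78},
    {"hours": 2.0, "score": 74}, {"hours": 1.5, "score": 69},
    {"hours": 5.0, "score": 88}, {"hours": 2.5, "score": 80},
]

high = [s["score"] for s in students if s["hours"] > 3]
low = [s["score"] for s in students if s["hours"] <= 3]

# Compare group means; a real analysis would also test statistical significance.
print(f"mean score >3h: {mean(high):.1f}, mean score <=3h: {mean(low):.1f}")
```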

Key Differences

  • Inductive : Moves from specific observations to broader generalizations (bottom-up approach).
  • Deductive : Moves from a general theory to specific observations or experiments (top-down approach).
  • Inductive : Theories are developed based on observed patterns.
  • Deductive : Theories are tested through empirical observation.
  • Inductive : Useful in exploring new phenomena or generating new theories.
  • Deductive : Effective for testing existing theories or hypotheses.

Both inductive and deductive research approaches are crucial in the development and testing of theories. The choice between them depends on the research goal: inductive for exploring and generating new theories, and deductive for testing existing ones.



DEDUCTION & INDUCTION

“The grand aim of all science is to cover the greatest number of empirical facts by logical deduction from the smallest number of hypotheses or axioms.”

― Albert Einstein

What is deductive and inductive logic?


Deductive logic is referred to as top-down logic: drawing conclusions through the elimination or examination of the disaggregated elements of a situation. Think about the simple example of the profit of a company, which equals revenue minus costs. Let’s say a company’s profit is declining, yet its revenues are increasing. By deduction, its costs must be increasing faster than its revenues, which is what is shrinking its profit even as revenues grow.
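That deduction can be written out as a tiny worked example; the revenue and cost figures below are invented purely to illustrate the arithmetic.

```python
# Toy illustration of the deduction: profit = revenue - costs, so if profit
# falls while revenue rises, costs must be rising faster than revenue.
last_year = {"revenue": 100.0, "costs": 80.0}   # profit = 20
this_year = {"revenue": 110.0, "costs": 95.0}   # profit = 15

def profit(p):
    return p["revenue"] - p["costs"]

assert profit(this_year) < profit(last_year)          # profit is declining
assert this_year["revenue"] > last_year["revenue"]    # revenue is increasing

revenue_growth = this_year["revenue"] - last_year["revenue"]   # +10
cost_growth = this_year["costs"] - last_year["costs"]          # +15
print(cost_growth > revenue_growth)  # True: the only way both facts can hold
```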

The process of deductive logic is the typical problem solving process for management consulting projects. Once a team creates a hypothesis tree, then the team typically focuses on discovering and analyzing facts to prove or disprove the hypotheses of the tree. And, through proving or disproving hypotheses, the team creates conclusions and recommendations. Deductive logic is used when there is a discrete set of hypotheses or options, such as when trying to find the root cause of a process issue or trying to optimize a discrete system.

On the other hand, inductive logic is the inverse of deductive logic, taking observations or facts and creating hypotheses or theories from them. Inductive logic is known as bottom-up logic, which starts with selective observations and facts that lead to generalizing and inducing potential hypotheses or theories.

A Barrel of Bad Apples

Imagine there is a barrel of 100 apples and 5 apples are picked from the barrel, and they are all rotten. Using inductive logic, the fact that the first 5 apples are rotten can be generalized into a hypothesis that all the apples are rotten. The key with inductive logic is that it doesn’t determine factual conclusions, only hypotheses. If all 100 apples were examined, that would be deductive logic. And, if all 100 were rotten, then it could be concluded as fact that all the apples in the barrel are indeed rotten. Picking just 5 rotten apples, though, can only create a hypothesis that all 100 are rotten. Inductive logic should be used when there is an open-ended set of options or potential hypotheses, such as trying to figure out the best marketing campaign to drive sales or potential innovations for a product. In these cases, selective facts and observations may point to promising solutions, but those solutions can be confirmed as fact only after they are tested.
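A short sketch of the barrel example follows; the apple states are hard-coded so the sample is guaranteed to come up rotten, which is exactly the assumption being illustrated.

```python
import random

# Sketch of the barrel example: a sample only supports a hypothesis;
# checking every apple turns the hypothesis into a factual conclusion.
random.seed(0)
barrel = ["rotten"] * 100   # assume, for illustration, every apple is rotten

sample = random.sample(barrel, 5)
if all(apple == "rotten" for apple in sample):
    print("Inductive step: hypothesis -- all 100 apples may be rotten")

if all(apple == "rotten" for apple in barrel):
    print("Deductive step: every apple examined -- now a factual conclusion")
```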


Deriving conclusions from inductive logic alone is a large and somewhat invisible issue in strategic thinking and problem solving. I often run across situations where someone observes something and then makes a conclusion about the root cause of a discrete problem.

An Example Using Deduction & Induction in Root Cause Analysis

Let’s go through a simple example to understand this issue better. Let’s say a company has a quality issue where customers are receiving a broken product. And, a product manager states, “The issue must be the shipping department. I’ve seen people in the warehouse drop products and then package them up and ship them.” The product manager uses inductive logic to try to conclude that the quality issue is caused by the shipping department mishandling the product. Yet, this inductive logic only creates a hypothesis that the shipping department is to blame.

Deductive logic, not inductive logic, must be used to factually determine the root cause of the quality issue. With deductive logic, we first need to create MECE (Mutually Exclusive, Collectively Exhaustive) hypotheses of what is driving the quality issue. By creating a hypothesis tree, we can break the quality issue into four main hypotheses: poor design, the wrong materials, bad manufacturing, or mishandling of the product by the shipping department.

[Figure: inductive versus deductive logic applied to the root cause example]

Then the path of deductive logic would lead one to prove or disprove the main hypotheses. To prove or disprove whether it was mishandled by the shipping department an audit could be conducted, which could include inspecting the product before shipping and inspecting the shipping & handling processes . Let’s say the shipping & handling audit showed no issues but did find that 40% of the product had a faulty part, let’s call this part B. Then, we could conduct an audit of the manufacturing and assembly processes. Let’s say during assembly process A, part B broke 40% of the time, even though the manufacturer was consistently following the assembly process instructions. Then, a supplier audit on part B could be conducted to ensure part B is authentic, high quality, and designed to specifications. Let’s say the supplier audit came back with no issues. And, then the product design could be evaluated, and let’s say it was found part B wasn’t properly designed for the assembly process and broke 40% of the time in assembly. By deductive logic, we can conclude that the quality issue was due to the poor design of Part B. Above is a visual representation of the example.
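One way to picture that deductive work is as a hypothesis tree whose branches are filled in by the audit findings. The sketch below simply restates the example; the branch names and results are taken from the walkthrough above and are hypothetical.

```python
# A sketch of the MECE hypothesis tree for the quality issue, with each
# branch annotated by the (hypothetical) audit finding from the example.
hypothesis_tree = {
    "Customers receive a broken product": {
        "Mishandling in shipping": "disproved: shipping & handling audit found no issues",
        "Bad manufacturing": "disproved: assembly followed the instructions",
        "Wrong materials": "disproved: supplier audit of part B found no issues",
        "Poor design": "proved: part B breaks ~40% of the time in assembly",
    }
}

for problem, branches in hypothesis_tree.items():
    print(problem)
    for hypothesis, finding in branches.items():
        print(f"  - {hypothesis}: {finding}")
```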

Why is inductive vs. deductive logic important?

Both inductive and deductive logic are fundamental in problem solving. Though, inductive logic is often used when deductive logic is appropriate. This is a subtle issue that most people don’t ever think about, but the consequences are often significant since false conclusions often come from inductive logic. One of the main reasons companies use top strategy consulting firms is their strong deductive problem solving methodologies. Deductive problem solving is comprehensive and derives factual conclusions. Most people or teams tasked with solving a problem don’t start with a problem statement, then build a hypothesis tree, and then spend weeks or months proving and disproving the different branches of the hypothesis tree, but top strategy consulting firms do. If somebody wants to figure out the true root causes of a problem, they will use deductive problem solving.

Inductive logic is also critical to strategy when it comes to connecting the dots in creating great options and solutions to a problem. Inductive logic is necessary when the context of a situation is understood, and creative and innovative options and solutions are needed. Elegant inductive logic was the driver for the simplicity of the iPhone, many of the innovations in the Tesla, and the most creative solutions to challenging situations.

EXERCISES TO IMPROVE YOUR DEDUCTION & INDUCTION

One of the core strengths of strategic leaders is the high-quality logic they apply to problems and situations. Regarding inductive and deductive logic, most of the time people use inductive logic. They take a few thoughts or facts and create hypotheses. Typically, what most people need to build up is their deductive logic. That is why we focus on it so much in this problem solving module.

Exercise 1 – Build Your Logic Awareness

Can you tell when people or even yourself are using deductive vs. inductive logic? Can you determine which logic is needed in which situation? If not, in meetings, when people are recommending a course of action, or are breaking down an argument, see if you can determine if they are using inductive or deductive logic, or no logic at all (gut feelings and emotion). And, then figure out your logic and when best to use deductive vs. inductive arguments.

Exercise 2 – Use Deduction When You Should Use Deduction

When you have a significant problem or opportunity you need to solve or build a strategy for, start with a deductive problem solving process. Use the tools in this module: define the problem statement, disaggregate the problem, build a hypothesis tree, and prove or disprove hypotheses through facts and analysis. Then switch to inductive logic when creating creative potential solutions and synthesizing those solutions.



Module 7: Thinking, Reasoning, and Problem-Solving

This module is about how a solid working knowledge of psychological principles can help you to think more effectively, so you can succeed in school and life. You might be inclined to believe that—because you have been thinking for as long as you can remember, because you are able to figure out the solution to many problems, because you feel capable of using logic to argue a point, because you can evaluate whether the things you read and hear make sense—you do not need any special training in thinking. But this, of course, is one of the key barriers to helping people think better. If you do not believe that there is anything wrong, why try to fix it?

The human brain is indeed a remarkable thinking machine, capable of amazing, complex, creative, logical thoughts. Why, then, are we telling you that you need to learn how to think? Mainly because one major lesson from cognitive psychology is that these capabilities of the human brain are relatively infrequently realized. Many psychologists believe that people are essentially “cognitive misers.” It is not that we are lazy, but that we have a tendency to expend the least amount of mental effort necessary. Although you may not realize it, it actually takes a great deal of energy to think. Careful, deliberative reasoning and critical thinking are very difficult. Because we seem to be successful without going to the trouble of using these skills well, it feels unnecessary to develop them. As you shall see, however, there are many pitfalls in the cognitive processes described in this module. When people do not devote extra effort to learning and improving reasoning, problem solving, and critical thinking skills, they make many errors.

As is true for memory, if you develop the cognitive skills presented in this module, you will be more successful in school. It is important that you realize, however, that these skills will help you far beyond school, even more so than a good memory will. Although it is somewhat useful to have a good memory, ten years from now no potential employer will care how many questions you got right on multiple choice exams during college. All of them will, however, recognize whether you are a logical, analytical, critical thinker. With these thinking skills, you will be an effective, persuasive communicator and an excellent problem solver.

The module begins by describing different kinds of thought and knowledge, especially conceptual knowledge and critical thinking. An understanding of these differences will be valuable as you progress through school and encounter different assignments that require you to tap into different kinds of knowledge. The second section covers deductive and inductive reasoning, which are processes we use to construct and evaluate strong arguments. They are essential skills to have whenever you are trying to persuade someone (including yourself) of some point, or to respond to someone’s efforts to persuade you. The module ends with a section about problem solving. A solid understanding of the key processes involved in problem solving will help you to handle many daily challenges.

7.1. Different kinds of thought

7.2. Reasoning and Judgment

7.3. Problem Solving

READING WITH PURPOSE

Remember and understand.

By reading and studying Module 7, you should be able to remember and describe:

  • Concepts and inferences (7.1)
  • Procedural knowledge (7.1)
  • Metacognition (7.1)
  • Characteristics of critical thinking:  skepticism; identify biases, distortions, omissions, and assumptions; reasoning and problem solving skills  (7.1)
  • Reasoning:  deductive reasoning, deductively valid argument, inductive reasoning, inductively strong argument, availability heuristic, representativeness heuristic  (7.2)
  • Fixation:  functional fixedness, mental set  (7.3)
  • Algorithms, heuristics, and the role of confirmation bias (7.3)
  • Effective problem solving sequence (7.3)

By reading and thinking about how the concepts in Module 7 apply to real life, you should be able to:

  • Identify which type of knowledge a piece of information is (7.1)
  • Recognize examples of deductive and inductive reasoning (7.2)
  • Recognize judgments that have probably been influenced by the availability heuristic (7.2)
  • Recognize examples of problem solving heuristics and algorithms (7.3)

Analyze, Evaluate, and Create

By reading and thinking about Module 7, participating in classroom activities, and completing out-of-class assignments, you should be able to:

  • Use the principles of critical thinking to evaluate information (7.1)
  • Explain whether examples of reasoning arguments are deductively valid or inductively strong (7.2)
  • Outline how you could try to solve a problem from your life using the effective problem solving sequence (7.3)

7.1. Different kinds of thought and knowledge

  • Take a few minutes to write down everything that you know about dogs.
  • Do you believe that:
  • Psychic ability exists?
  • Hypnosis is an altered state of consciousness?
  • Magnet therapy is effective for relieving pain?
  • Aerobic exercise is an effective treatment for depression?
  • UFO’s from outer space have visited earth?

On what do you base your belief or disbelief for the questions above?

Of course, we all know what is meant by the words  think  and  knowledge . You probably also realize that they are not unitary concepts; there are different kinds of thought and knowledge. In this section, let us look at some of these differences. If you are familiar with these different kinds of thought and pay attention to them in your classes, it will help you to focus on the right goals, learn more effectively, and succeed in school. Different assignments and requirements in school call on you to use different kinds of knowledge or thought, so it will be very helpful for you to learn to recognize them (Anderson, et al. 2001).

Factual and conceptual knowledge

Module 5 introduced the idea of declarative memory, which is composed of facts and episodes. If you have ever played a trivia game or watched Jeopardy on TV, you realize that the human brain is able to hold an extraordinary number of facts. Likewise, you realize that each of us has an enormous store of episodes, essentially facts about events that happened in our own lives. It may be difficult to keep that in mind when we are struggling to retrieve one of those facts while taking an exam, however. Part of the problem is that, in contradiction to the advice from Module 5, many students continue to try to memorize course material as a series of unrelated facts (picture a history student simply trying to memorize history as a set of unrelated dates without any coherent story tying them together). Facts in the real world are not random and unorganized, however. It is the way that they are organized that constitutes a second key kind of knowledge, conceptual.

Concepts are nothing more than our mental representations of categories of things in the world. For example, think about dogs. When you do this, you might remember specific facts about dogs, such as they have fur and they bark. You may also recall dogs that you have encountered and picture them in your mind. All of this information (and more) makes up your concept of dog. You can have concepts of simple categories (e.g., triangle), complex categories (e.g., small dogs that sleep all day, eat out of the garbage, and bark at leaves), kinds of people (e.g., psychology professors), events (e.g., birthday parties), and abstract ideas (e.g., justice). Gregory Murphy (2002) refers to concepts as the “glue that holds our mental life together” (p. 1). Very simply, summarizing the world by using concepts is one of the most important cognitive tasks that we do. Our conceptual knowledge  is  our knowledge about the world. Individual concepts are related to each other to form a rich interconnected network of knowledge. For example, think about how the following concepts might be related to each other: dog, pet, play, Frisbee, chew toy, shoe. Or, of more obvious use to you now, how these concepts are related: working memory, long-term memory, declarative memory, procedural memory, and rehearsal? Because our minds have a natural tendency to organize information conceptually, when students try to remember course material as isolated facts, they are working against their strengths.

One last important point about concepts is that they allow you to instantly know a great deal of information about something. For example, if someone hands you a small red object and says, “here is an apple,” they do not have to tell you, “it is something you can eat.” You already know that you can eat it because it is true by virtue of the fact that the object is an apple; this is called drawing an  inference , assuming that something is true on the basis of your previous knowledge (for example, of category membership or of how the world works) or logical reasoning.

Procedural knowledge

Physical skills, such as tying your shoes, doing a cartwheel, and driving a car (or doing all three at the same time, but don’t try this at home) are certainly a kind of knowledge. They are procedural knowledge, the same idea as procedural memory that you saw in Module 5. Mental skills, such as reading, debating, and planning a psychology experiment, are procedural knowledge, as well. In short, procedural knowledge is the knowledge how to do something (Cohen & Eichenbaum, 1993).

Metacognitive knowledge

Floyd used to think that he had a great memory. Now, he has a better memory. Why? Because he finally realized that his memory was not as great as he once thought it was. Because Floyd eventually learned that he often forgets where he put things, he finally developed the habit of putting things in the same place. (Unfortunately, he did not learn this lesson before losing at least 5 watches and a wedding ring.) Because he finally realized that he often forgets to do things, he finally started using the To Do list app on his phone. And so on. Floyd’s insights about the real limitations of his memory have allowed him to remember things that he used to forget.

All of us have knowledge about the way our own minds work. You may know that you have a good memory for people’s names and a poor memory for math formulas. Someone else might realize that they have difficulty remembering to do things, like stopping at the store on the way home. Others still know that they tend to overlook details. This knowledge about our own thinking is actually quite important; it is called metacognitive knowledge, or metacognition. Like other kinds of thinking skills, it is subject to error. For example, in unpublished research, one of the authors surveyed about 120 General Psychology students on the first day of the term. Among other questions, the students were asked to predict their grade in the class and report their current Grade Point Average. Two-thirds of the students predicted that their grade in the course would be higher than their GPA. (The reality is that at our college, students tend to earn lower grades in psychology than their overall GPA.) Another example: Students routinely report that they thought they had done well on an exam, only to discover, to their dismay, that they were wrong (more on that important problem in a moment). Both errors reveal a breakdown in metacognition.

The Dunning-Kruger Effect

In general, most college students probably do not study enough. For example, using data from the National Survey of Student Engagement, Fosnacht, McCormack, and Lerma (2018) reported that first-year students at 4-year colleges in the U.S. averaged less than 14 hours per week preparing for classes. The typical suggestion is that you should spend two hours outside of class for every hour in class, or 24 – 30 hours per week for a full-time student. Clearly, students in general are nowhere near that recommended mark. Many observers, including some faculty, believe that this shortfall is a result of students being too busy or lazy. Now, it may be true that many students are too busy, with work and family obligations, for example. Others are not particularly motivated in school, and therefore might correctly be labeled lazy. A third possible explanation, however, is that some students might not think they need to spend this much time. And this is a matter of metacognition. Consider the scenario that we mentioned above, students thinking they had done well on an exam only to discover that they did not. Justin Kruger and David Dunning examined scenarios very much like this in 1999. Kruger and Dunning gave research participants tests measuring humor, logic, and grammar. Then, they asked the participants to assess their own abilities and test performance in these areas. They found that participants in general tended to overestimate their abilities, already a problem with metacognition. Importantly, the participants who scored the lowest overestimated their abilities the most. Specifically, students who scored in the bottom quarter (averaging in the 12th percentile) thought they had scored in the 62nd percentile. This has become known as the Dunning-Kruger effect. Many individual faculty members have replicated these results with their own students on their course exams, including the authors of this book. Think about it. Some students who just took an exam and performed poorly believe that they did well before seeing their score. It seems very likely that these are the very same students who stopped studying the night before because they thought they were “done.” Quite simply, it is not just that they did not know the material. They did not know that they did not know the material. That is poor metacognition.

In order to develop good metacognitive skills, you should continually monitor your thinking and seek frequent feedback on the accuracy of your thinking (Medina, Castleberry, & Persky 2017). For example, in classes get in the habit of predicting your exam grades. As soon as possible after taking an exam, try to find out which questions you missed and try to figure out why. If you do this soon enough, you may be able to recall the way it felt when you originally answered the question. Did you feel confident that you had answered the question correctly? Then you have just discovered an opportunity to improve your metacognition. Be on the lookout for that feeling and respond with caution.

concept :  a mental representation of a category of things in the world

Dunning-Kruger effect : individuals who are less competent tend to overestimate their abilities more than individuals who are more competent do

inference : an assumption about the truth of something that is not stated. Inferences come from our prior knowledge and experience, and from logical reasoning

metacognition :  knowledge about one’s own cognitive processes; thinking about your thinking

Critical thinking

One particular kind of knowledge or thinking skill that is related to metacognition is  critical thinking (Chew, 2020). You may have noticed that critical thinking is an objective in many college courses, and thus it could be a legitimate topic to cover in nearly any college course. It is particularly appropriate in psychology, however. As the science of (behavior and) mental processes, psychology is obviously well suited to be the discipline through which you should be introduced to this important way of thinking.

More importantly, there is a particular need to use critical thinking in psychology. We are all, in a way, experts in human behavior and mental processes, having engaged in them literally since birth. Thus, perhaps more than in any other class, students typically approach psychology with very clear ideas and opinions about its subject matter. That is, students already “know” a lot about psychology. The problem is, “it ain’t so much the things we don’t know that get us into trouble. It’s the things we know that just ain’t so” (Ward, quoted in Gilovich 1991). Indeed, many of students’ preconceptions about psychology are just plain wrong. Randolph Smith (2002) wrote a book about critical thinking in psychology called  Challenging Your Preconceptions,  highlighting this fact. On the other hand, many of students’ preconceptions about psychology are just plain right! But wait, how do you know which of your preconceptions are right and which are wrong? And when you come across a research finding or theory in this class that contradicts your preconceptions, what will you do? Will you stick to your original idea, discounting the information from the class? Will you immediately change your mind? Critical thinking can help us sort through this confusing mess.

But what is critical thinking? The goal of critical thinking is simple to state (but extraordinarily difficult to achieve): it is to be right, to draw the correct conclusions, to believe in things that are true and to disbelieve things that are false. We will provide two definitions of critical thinking (or, if you like, one large definition with two distinct parts). First, a more conceptual one: Critical thinking is thinking like a scientist in your everyday life (Schmaltz, Jansen, & Wenckowski, 2017).  Our second definition is more operational; it is simply a list of skills that are essential to be a critical thinker. Critical thinking entails solid reasoning and problem solving skills; skepticism; and an ability to identify biases, distortions, omissions, and assumptions. Excellent deductive and inductive reasoning, and problem solving skills contribute to critical thinking. So, you can consider the subject matter of sections 7.2 and 7.3 to be part of critical thinking. Because we will be devoting considerable time to these concepts in the rest of the module, let us begin with a discussion about the other aspects of critical thinking.

Let’s address that first part of the definition. Scientists form hypotheses, or predictions about some possible future observations. Then, they collect data, or information (think of this as making those future observations). They do their best to make unbiased observations using reliable techniques that have been verified by others. Then, and only then, they draw a conclusion about what those observations mean. Oh, and do not forget the most important part. “Conclusion” is probably not the most appropriate word because this conclusion is only tentative. A scientist is always prepared that someone else might come along and produce new observations that would require a new conclusion be drawn. Wow! If you like to be right, you could do a lot worse than using a process like this.

A Critical Thinker’s Toolkit 

Now for the second part of the definition. Good critical thinkers (and scientists) rely on a variety of tools to evaluate information. Perhaps the most recognizable tool for critical thinking is  skepticism (and this term provides the clearest link to the thinking like a scientist definition, as you are about to see). Some people intend it as an insult when they call someone a skeptic. But if someone calls you a skeptic, if they are using the term correctly, you should consider it a great compliment. Simply put, skepticism is a way of thinking in which you refrain from drawing a conclusion or changing your mind until good evidence has been provided. People from Missouri should recognize this principle, as Missouri is known as the Show-Me State. As a skeptic, you are not inclined to believe something just because someone said so, because someone else believes it, or because it sounds reasonable. You must be persuaded by high quality evidence.

Of course, if that evidence is produced, you have a responsibility as a skeptic to change your belief. Failure to change a belief in the face of good evidence is not skepticism; skepticism has open mindedness at its core. M. Neil Browne and Stuart Keeley (2018) use the term weak sense critical thinking to describe critical thinking behaviors that are used only to strengthen a prior belief. Strong sense critical thinking, on the other hand, has as its goal reaching the best conclusion. Sometimes that means strengthening your prior belief, but sometimes it means changing your belief to accommodate the better evidence.

Many times, a failure to think critically or weak sense critical thinking is related to a  bias , an inclination, tendency, leaning, or prejudice. Everybody has biases, but many people are unaware of them. Awareness of your own biases gives you the opportunity to control or counteract them. Unfortunately, however, many people are happy to let their biases creep into their attempts to persuade others; indeed, it is a key part of their persuasive strategy. To see how these biases influence messages, just look at the different descriptions and explanations of the same events given by people of different ages or income brackets, or conservative versus liberal commentators, or by commentators from different parts of the world. Of course, to be successful, these people who are consciously using their biases must disguise them. Even undisguised biases can be difficult to identify, so disguised ones can be nearly impossible.

Here are some common sources of biases:

  • Personal values and beliefs.  Some people believe that human beings are basically driven to seek power and that they are typically in competition with one another over scarce resources. These beliefs are similar to the world-view that political scientists call “realism.” Other people believe that human beings prefer to cooperate and that, given the chance, they will do so. These beliefs are similar to the world-view known as “idealism.” For many people, these deeply held beliefs can influence, or bias, their interpretations of such wide ranging situations as the behavior of nations and their leaders or the behavior of the driver in the car ahead of you. For example, if your worldview is that people are typically in competition and someone cuts you off on the highway, you may assume that the driver did it purposely to get ahead of you. Other types of beliefs about the way the world is or the way the world should be, for example, political beliefs, can similarly become a significant source of bias.
  • Racism, sexism, ageism and other forms of prejudice and bigotry.  These are, sadly, a common source of bias in many people. They are essentially a special kind of “belief about the way the world is.” These beliefs—for example, that women do not make effective leaders—lead people to ignore contradictory evidence (examples of effective women leaders, or research that disputes the belief) and to interpret ambiguous evidence in a way consistent with the belief.
  • Self-interest.  When particular people benefit from things turning out a certain way, they can sometimes be very susceptible to letting that interest bias them. For example, a company that will earn a profit if they sell their product may have a bias in the way that they give information about their product. A union that will benefit if its members get a generous contract might have a bias in the way it presents information about salaries at competing organizations. (Note that our inclusion of examples describing both companies and unions is an explicit attempt to control for our own personal biases). Home buyers are often dismayed to discover that they purchased their dream house from someone whose self-interest led them to lie about flooding problems in the basement or back yard. This principle, the biasing power of self-interest, is likely what led to the famous phrase  Caveat Emptor  (let the buyer beware) .  

Knowing that these types of biases exist will help you evaluate evidence more critically. Do not forget, though, that people are not always keen to let you discover the sources of biases in their arguments. For example, companies or political organizations can sometimes disguise their support of a research study by contracting with a university professor, who comes complete with a seemingly unbiased institutional affiliation, to conduct the study.

People’s biases, conscious or unconscious, can lead them to make omissions, distortions, and assumptions that undermine our ability to correctly evaluate evidence. It is essential that you look for these elements. Always ask, what is missing, what is not as it appears, and what is being assumed here? For example, consider this (fictional) chart from an ad reporting customer satisfaction at 4 local health clubs.

[Chart: reported customer satisfaction at four local health clubs, shown without a labeled scale]

Clearly, from the results of the chart, one would be tempted to give Club C a try, as customer satisfaction is much higher than for the other 3 clubs.

There are so many distortions and omissions in this chart, however, that it is actually quite meaningless. First, how was satisfaction measured? Do the bars represent responses to a survey? If so, how were the questions asked? Most importantly, where is the missing scale for the chart? Although the differences look quite large, are they really?

Well, here is the same chart, with a different scale, this time labeled:

[Chart: the same customer satisfaction data, re-plotted with a labeled scale]

Club C is not so impressive any more, is it? In fact, all of the health clubs have customer satisfaction ratings (whatever that means) between 85% and 88%. In the first chart, the entire scale of the graph included only the percentages between 83 and 89. This “judicious” choice of scale—some would call it a distortion—and omission of that scale from the chart make the tiny differences among the clubs seem important, however.
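If you would like to see the effect for yourself, the sketch below (using matplotlib, with made-up satisfaction percentages in the 85-88% range described above) plots the same data on a truncated scale and on a full 0-100% scale.

```python
# Sketch: the same (invented) satisfaction data on two different y-axis scales.
import matplotlib.pyplot as plt

clubs = ["Club A", "Club B", "Club C", "Club D"]
satisfaction = [85, 86, 88, 85]   # hypothetical percentages

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))

ax1.bar(clubs, satisfaction)
ax1.set_ylim(83, 89)              # truncated scale: Club C looks far ahead
ax1.set_title("Truncated scale (83-89%)")

ax2.bar(clubs, satisfaction)
ax2.set_ylim(0, 100)              # full scale: the differences nearly vanish
ax2.set_title("Full scale (0-100%)")

plt.tight_layout()
plt.show()
```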

Also, in order to be a critical thinker, you need to learn to pay attention to the assumptions that underlie a message. Let us briefly illustrate the role of assumptions by touching on some people’s beliefs about the criminal justice system in the US. Some believe that a major problem with our judicial system is that many criminals go free because of legal technicalities. Others believe that a major problem is that many innocent people are convicted of crimes. The simple fact is, both types of errors occur. A person’s conclusion about which flaw in our judicial system is the greater tragedy is based on an assumption about which of these is the more serious error (letting the guilty go free or convicting the innocent). This type of assumption is called a value assumption (Browne and Keeley, 2018). It reflects the differences in values that people develop, differences that may lead us to disregard valid evidence that does not fit in with our particular values.

Oh, by the way, some students probably noticed this, but the seven tips for evaluating information that we shared in Module 1 are related to this. Actually, they are part of this section. The tips are, to a very large degree, a set of ideas you can use to help you identify biases, distortions, omissions, and assumptions. If you do not remember this section, we strongly recommend you take a few minutes to review it.

skepticism :  a way of thinking in which you refrain from drawing a conclusion or changing your mind until good evidence has been provided

bias : an inclination, tendency, leaning, or prejudice

  • Which of your beliefs (or disbeliefs) from the Activate exercise for this section were derived from a process of critical thinking? If some of your beliefs were not based on critical thinking, are you willing to reassess these beliefs? If the answer is no, why do you think that is? If the answer is yes, what concrete steps will you take?

7.2 Reasoning and Judgment

  • What percentage of kidnappings are committed by strangers?
  • Which area of the house is riskiest: kitchen, bathroom, or stairs?
  • What is the most common cancer in the US?
  • What percentage of workplace homicides are committed by co-workers?

An essential set of procedural thinking skills is reasoning, the ability to generate and evaluate solid conclusions from a set of statements or evidence. You should note that these conclusions (when they are generated instead of being evaluated) are one key type of inference that we described in Section 7.1. There are two main types of reasoning, deductive and inductive.

Deductive reasoning

Suppose your teacher tells you that if you get an A on the final exam in a course, you will get an A for the whole course. Then, you get an A on the final exam. What will your final course grade be? Most people can see instantly that you can conclude with certainty that you will get an A for the course. This is a type of reasoning called deductive reasoning, which is defined as reasoning in which a conclusion is guaranteed to be true as long as the statements leading to it are true. The three statements can be listed as an argument, with two beginning statements and a conclusion:

Statement 1: If you get an A on the final exam, you will get an A for the course

Statement 2: You get an A on the final exam

Conclusion: You will get an A for the course

This particular arrangement, in which true beginning statements lead to a guaranteed true conclusion, is known as a deductively valid argument. Although deductive reasoning is often the subject of abstract, brain-teasing, puzzle-like word problems, it is actually an extremely important type of everyday reasoning. It is just hard to recognize sometimes. For example, imagine that you are looking for your car keys and you realize that they are either in the kitchen drawer or in your book bag. After looking in the kitchen drawer, you instantly know that they must be in your book bag. That conclusion results from a simple deductive reasoning argument. In addition, solid deductive reasoning skills are necessary for you to succeed in the sciences, philosophy, math, computer programming, and any endeavor involving the use of logic to persuade others to your point of view or to evaluate others’ arguments.

Cognitive psychologists, and before them philosophers, have been quite interested in deductive reasoning, not so much for its practical applications, but for the insights it can offer them about the ways that human beings think. One of the early ideas to emerge from the examination of deductive reasoning is that people learn (or develop) mental versions of rules that allow them to solve these types of reasoning problems (Braine, 1978; Braine, Reiser, & Rumain, 1984). The best way to see this point of view is to realize that there are different possible rules, and some of them are very simple. For example, consider this rule of logic:

p or q

not p

therefore q

Logical rules are often presented abstractly, as letters, in order to imply that they can be used in very many specific situations. Here is a concrete version of the same rule:

I’ll either have pizza or a hamburger for dinner tonight (p or q)

I won’t have pizza (not p)

Therefore, I’ll have a hamburger (therefore q)

This kind of reasoning seems so natural, so easy, that it is quite plausible that we would use a version of this rule in our daily lives. At least, it seems more plausible than some of the alternative possibilities—for example, that we need to have experience with the specific situation (pizza or hamburger, in this case) in order to solve this type of problem easily. So perhaps there is a form of natural logic (Rips, 1990) that contains very simple versions of logical rules. When we are faced with a reasoning problem that maps onto one of these rules, we use the rule.

But be very careful; things are not always as easy as they seem. Even these simple rules are not so simple. For example, consider the following rule. Many people fail to realize that this rule is just as valid as the pizza or hamburger rule above.

if p, then q

not q

therefore, not p

Concrete version:

If I eat dinner, then I will have dessert

I did not have dessert

Therefore, I did not eat dinner

The simple fact is, it can be very difficult for people to apply rules of deductive logic correctly; as a result, they make many errors when trying to do so. Is this a deductively valid argument or not?

Students who like school study a lot

Students who study a lot get good grades

Jane does not like school

Therefore, Jane does not get good grades

Many people are surprised to discover that this is not a logically valid argument; the conclusion is not guaranteed to be true from the beginning statements. Although the first statement says that students who like school study a lot, it does NOT say that students who do not like school do not study a lot. In other words, it may very well be possible to study a lot without liking school. Even people who sometimes get problems like this right might not be using the rules of deductive reasoning. Instead, they might just be making judgments based on examples they know, in this case, remembering instances of people who get good grades despite not liking school.

Making deductive reasoning even more difficult is the fact that there are two important properties that an argument may have. One, it can be valid or invalid (meaning that the conclusion does or does not follow logically from the statements leading up to it). Two, an argument (or more correctly, its conclusion) can be true or false. Here is an example of an argument that is logically valid, but has a false conclusion (at least we think it is false).

Either you are eleven feet tall or the Grand Canyon was created by a spaceship crashing into the earth.

You are not eleven feet tall

Therefore the Grand Canyon was created by a spaceship crashing into the earth

This argument has the exact same form as the pizza or hamburger argument above, making it deductively valid. The conclusion is so false, however, that it is absurd (of course, the reason the conclusion is false is that the first statement is false). When people are judging arguments, they tend not to observe the difference between deductive validity and the empirical truth of statements or conclusions. If the elements of an argument happen to be true, people are likely to judge the argument logically valid; if the elements are false, they will very likely judge it invalid (Markovits & Bouffard-Bouchard, 1992; Moshman & Franks, 1986). Thus, it seems a stretch to say that people are using these logical rules to judge the validity of arguments. Many psychologists believe that most people actually have very limited deductive reasoning skills (Johnson-Laird, 1999). They argue that when faced with a problem for which deductive logic is required, people resort to some simpler technique, such as matching terms that appear in the statements and the conclusion (Evans, 1982). This might not seem like a problem, but what if reasoners believe that the elements are true and they happen to be wrong? They would believe that they are using a form of reasoning that guarantees a correct conclusion and yet still be wrong.

deductive reasoning :  a type of reasoning in which the conclusion is guaranteed to be true any time the statements leading up to it are true

argument :  a set of statements in which the beginning statements lead to a conclusion

deductively valid argument :  an argument for which true beginning statements guarantee that the conclusion is true

Inductive reasoning and judgment

Every day, you make many judgments about the likelihood of one thing or another. Whether you realize it or not, you are practicing  inductive reasoning   on a daily basis. In inductive reasoning arguments, a conclusion is likely whenever the statements preceding it are true. The first thing to notice about inductive reasoning is that, by definition, you can never be sure about your conclusion; you can only estimate how likely the conclusion is. Inductive reasoning may lead you to focus on Memory Encoding and Recoding when you study for the exam, but it is possible the instructor will ask more questions about Memory Retrieval instead. Unlike deductive reasoning, the conclusions you reach through inductive reasoning are only probable, not certain. That is why scientists consider inductive reasoning weaker than deductive reasoning. But imagine how hard it would be for us to function if we could not act unless we were certain about the outcome.

Inductive reasoning can be represented as logical arguments consisting of statements and a conclusion, just as deductive reasoning can be. In an inductive argument, you are given some statements and a conclusion (or you are given some statements and must draw a conclusion). An argument is  inductively strong   if the conclusion would be very probable whenever the statements are true. So, for example, here is an inductively strong argument:

  • Statement #1: The forecaster on Channel 2 said it is going to rain today.
  • Statement #2: The forecaster on Channel 5 said it is going to rain today.
  • Statement #3: It is very cloudy and humid.
  • Statement #4: You just heard thunder.
  • Conclusion (or judgment): It is going to rain today.

Think of the statements as evidence, on the basis of which you will draw a conclusion. So, based on the evidence presented in the four statements, it is very likely that it will rain today. Will it definitely rain today? Certainly not. We can all think of times that the weather forecaster was wrong.

A true story: Some years ago, a psychology student was watching a baseball playoff game between the St. Louis Cardinals and the Los Angeles Dodgers. A graphic on the screen had just informed the audience that the Cardinal at bat, (Hall of Fame shortstop) Ozzie Smith, a switch hitter batting left-handed for this plate appearance, had never, in nearly 3000 career at-bats, hit a home run left-handed. The student, who had just learned about inductive reasoning in his psychology class, turned to his companion (a Cardinals fan) and smugly said, “It is an inductively strong argument that Ozzie Smith will not hit a home run.” He turned back to face the television just in time to watch the ball sail over the right field fence for a home run. Although the student felt foolish at the time, he was not wrong. It was an inductively strong argument; 3000 at-bats is an awful lot of evidence suggesting that the Wizard of Oz (as he was known) would not be hitting one out of the park (think of each at-bat without a home run as a statement in an inductive argument). Sadly (for the die-hard Cubs fan and Cardinals-hating student), despite the strength of the argument, the conclusion was wrong.

Given the possibility that we might draw an incorrect conclusion even with an inductively strong argument, we really want to be sure that we do, in fact, make inductively strong arguments. If we judge something probable, it had better be probable. If we judge something nearly impossible, it had better not happen. Think of inductive reasoning, then, as making reasonably accurate judgments of the probability of some conclusion given a set of evidence.

We base many decisions in our lives on inductive reasoning. For example:

Statement #1: Psychology is not my best subject

Statement #2: My psychology instructor has a reputation for giving difficult exams

Statement #3: My first psychology exam was much harder than I expected

Judgment: The next exam will probably be very difficult.

Decision: I will study tonight instead of watching Netflix.

Some other examples of judgments that people commonly make in a school context include judgments of the likelihood that:

  • A particular class will be interesting/useful/difficult
  • You will be able to finish writing a paper by next week if you go out tonight
  • Your laptop’s battery will last through the next trip to the library
  • You will not miss anything important if you skip class tomorrow
  • Your instructor will not notice if you skip class tomorrow
  • You will be able to find a book that you will need for a paper
  • There will be an essay question about Memory Encoding on the next exam

Tversky and Kahneman (1983) recognized that there are two general ways that we might make these judgments; they termed them extensional (i.e., following the laws of probability) and intuitive (i.e., using shortcuts or heuristics, see below). We will use a similar distinction between Type 1 and Type 2 thinking, as described by Keith Stanovich and his colleagues (Evans and Stanovich, 2013; Stanovich and West, 2000). Type 1 thinking is fast, automatic, effortless, and emotional. In fact, it is hardly fair to call it reasoning at all, as judgments just seem to pop into one’s head. Type 2 thinking, on the other hand, is slow, effortful, and logical. So obviously, it is more likely to lead to a correct judgment, or an optimal decision. The problem is, we tend to over-rely on Type 1. Now, we are not saying that Type 2 is the right way to go for every decision or judgment we make. It seems a bit much, for example, to engage in a step-by-step logical reasoning procedure to decide whether we will have chicken or fish for dinner tonight.

Many bad decisions in some very important contexts, however, can be traced back to poor judgments of the likelihood of certain risks or outcomes that result from the use of Type 1 when a more logical reasoning process would have been more appropriate. For example:

Statement #1: It is late at night.

Statement #2: Albert has been drinking beer for the past five hours at a party.

Statement #3: Albert is not exactly sure where he is or how far away home is.

Judgment: Albert will have no difficulty walking home.

Decision: He walks home alone.

As you can see in this example, the three statements backing up the judgment do not really support it. In other words, this argument is not inductively strong because it is based on judgments that ignore the laws of probability. What are the chances that someone facing these conditions will be able to walk home alone easily? And one need not be drunk to make poor decisions based on judgments that just pop into our heads.

The truth is that many of our probability judgments do not come very close to what the laws of probability say they should be. Think about it. In order for us to reason in accordance with these laws, we would need to know the laws of probability, which would allow us to calculate the relationship between particular pieces of evidence and the probability of some outcome (i.e., how much likelihood should change given a piece of evidence), and we would have to do these heavy math calculations in our heads. After all, that is what Type 2 requires. Needless to say, even if we were motivated, we often do not even know how to apply Type 2 reasoning in many cases.
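
To give a feel for what one of those extensional calculations involves, here is a minimal Python sketch of updating the probability of rain with Bayes’ rule. Every number in it is an assumption we invented for illustration; only the general idea of combining evidence comes from the weather example above.

prior_rain = 0.30               # assumed base rate of rain on any given day
p_forecast_given_rain = 0.90    # assumed chance of a "rain" forecast when it will rain
p_forecast_given_dry = 0.20     # assumed chance of a "rain" forecast on a dry day

# Total probability of hearing a "rain" forecast
p_forecast = (p_forecast_given_rain * prior_rain
              + p_forecast_given_dry * (1 - prior_rain))

# Bayes' rule: P(rain | forecast) = P(forecast | rain) * P(rain) / P(forecast)
posterior_rain = p_forecast_given_rain * prior_rain / p_forecast

print(round(prior_rain, 2))      # 0.30 before hearing the forecast
print(round(posterior_rain, 2))  # about 0.66 after one "rain" forecast

Each additional piece of evidence (a second forecast, the clouds, the thunder) would be folded in the same way. That is exactly the kind of bookkeeping that Type 2 thinking demands and Type 1 thinking skips.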

So what do we do when we don’t have the knowledge, skills, or time required to make the correct mathematical judgment? Do we hold off and wait until we can get better evidence? Do we read up on probability and fire up our calculator app so we can compute the correct probability? Of course not. We rely on Type 1 thinking. We “wing it.” That is, we come up with a likelihood estimate using some means at our disposal. Psychologists use the term heuristic to describe the type of “winging it” we are talking about. A  heuristic   is a shortcut strategy that we use to make some judgment or solve some problem (see Section 7.3). Heuristics are easy and quick; think of them as the basic procedures that are characteristic of Type 1. They can absolutely lead to reasonably good judgments and decisions in some situations (like choosing between chicken and fish for dinner). They are, however, far from foolproof. There are, in fact, quite a lot of situations in which heuristics can lead us to make incorrect judgments, and in many cases the decisions based on those judgments can have serious consequences.

Let us return to the activity that begins this section. You were asked to judge the likelihood (or frequency) of certain events and risks. You were free to come up with your own evidence (or statements) to make these judgments. This is where a heuristic crops up. As a judgment shortcut, we tend to generate specific examples of those very events to help us decide their likelihood or frequency. For example, if we are asked to judge how common, frequent, or likely a particular type of cancer is, many of our statements would be examples of specific cancer cases:

Statement #1: Andy Kaufman (comedian) had lung cancer.

Statement #2: Colin Powell (US Secretary of State) had prostate cancer.

Statement #3: Bob Marley (musician) had skin and brain cancer

Statement #4: Sandra Day O’Connor (Supreme Court Justice) had breast cancer.

Statement #5: Fred Rogers (children’s entertainer) had stomach cancer.

Statement #6: Robin Roberts (news anchor) had breast cancer.

Statement #7: Bette Davis (actress) had breast cancer.

Judgment: Breast cancer is the most common type.

Your own experience or memory may also tell you that breast cancer is the most common type. But it is not (although it is common). Actually, skin cancer is the most common type in the US. We make the same types of misjudgments all the time because we do not generate the examples or evidence according to their actual frequencies or probabilities. Instead, we have a tendency (or bias) to search for the examples in memory; if they are easy to retrieve, we assume that they are common. To rephrase this in the language of the heuristic, events seem more likely to the extent that they are available to memory. This bias has been termed the  availability heuristic   (Tversky and Kahneman, 1974).
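
To see how judging frequency by ease of retrieval can go wrong, here is a small simulation we wrote purely for illustration. It assumes a common but forgettable event and a rare but vivid one (both hypothetical), stores events in “memory” in proportion to how memorable they are, and then judges frequency by counting what can be retrieved.

import random

random.seed(1)

TRUE_RATE = 0.05                                      # assumed true frequency of the rare, vivid event
MEMORABILITY = {"common": 0.05, "rare_vivid": 0.90}   # assumed odds that each kind of event gets stored

memory = []
for _ in range(10_000):
    event = "rare_vivid" if random.random() < TRUE_RATE else "common"
    if random.random() < MEMORABILITY[event]:         # vivid events are far more likely to be stored
        memory.append(event)

share_recalled = memory.count("rare_vivid") / len(memory)
print(TRUE_RATE)                  # the rare event actually happens 5% of the time
print(round(share_recalled, 2))   # but it makes up roughly half of what memory returns

An availability-based judgment reads the second number as if it were the first, which is exactly the misperception described above.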

The fact that we use the availability heuristic does not automatically mean that our judgment is wrong. The reason we use heuristics in the first place is that they work fairly well in many cases (and, of course that they are easy to use). So, the easiest examples to think of sometimes are the most common ones. Is it more likely that a member of the U.S. Senate is a man or a woman? Most people have a much easier time generating examples of male senators. And as it turns out, the U.S. Senate has many more men than women (74 to 26 in 2020). In this case, then, the availability heuristic would lead you to make the correct judgment; it is far more likely that a senator would be a man.

In many other cases, however, the availability heuristic will lead us astray. This is because events can be memorable for many reasons other than their frequency. Section 5.2, Encoding Meaning, suggested that one good way to encode the meaning of some information is to form a mental image of it. Thus, information that has been pictured mentally will be more available to memory. Indeed, an event that is vivid and easily pictured will trick many people into supposing that type of event is more common than it actually is. Repetition of information will also make it more memorable. So, if the same event is described to you in a magazine, on the evening news, on a podcast that you listen to, and in your Facebook feed, it will be very available to memory. Again, the availability heuristic will cause you to misperceive the frequency of these types of events.

Most interestingly, information that is unusual is more memorable. Suppose we give you the following list of words to remember: box, flower, letter, platypus, oven, boat, newspaper, purse, drum, car. Very likely, the easiest word to remember would be platypus, the unusual one. The same thing occurs with memories of events. An event may be available to memory because it is unusual, yet the availability heuristic leads us to judge that the event is common. Did you catch that? In these cases, the availability heuristic makes us think the exact opposite of the true frequency. We end up thinking something is common because it is unusual (and therefore memorable). Yikes.

The misapplication of the availability heuristic sometimes has unfortunate results. For example, if you went to K-12 school in the US over the past 10 years, it is extremely likely that you have participated in lockdown and active shooter drills. Of course, everyone is trying to prevent the tragedy of another school shooting. And believe us, we are not trying to minimize how terrible the tragedy is. But the truth of the matter is, school shootings are extremely rare. Because the federal government does not keep a database of school shootings, the Washington Post has maintained its own running tally. Between 1999 and January 2020 (the date of the most recent school shooting with a death in the US as of the time this paragraph was written), the Post reported a total of 254 people died in school shootings in the US. Not 254 per year, 254 total. That is an average of 12 per year. Of course, that is 254 people who should not have died (particularly because many were children), but in a country with approximately 60,000,000 students and teachers, this is a very small risk.
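
Using only the figures just quoted, the arithmetic behind that “very small risk” looks like this:

# 254 deaths between 1999 and January 2020 (about 21 years), spread over
# roughly 60,000,000 students and teachers.
deaths_per_year = 254 / 21
annual_risk = deaths_per_year / 60_000_000

print(round(deaths_per_year, 1))   # about 12 deaths per year
print(annual_risk)                 # about 0.0000002, or roughly 1 in 5,000,000 per year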

But many students and teachers are terrified that they will be victims of school shootings because of the availability heuristic. It is so easy to think of examples (they are very available to memory) that people believe the event is very common. It is not. And there is a downside to this. We happen to believe that there is an enormous gun violence problem in the United States. According to the Centers for Disease Control and Prevention, there were 39,773 firearm deaths in the US in 2017. Fifteen of those deaths were in school shootings, according to the Post; roughly 60% of the 39,773 were suicides. When people pay attention to the school shooting risk (low), they often fail to notice the much larger risk.

And examples like this are by no means unique. The authors of this book have been teaching psychology since the 1990’s. We have been able to make the exact same arguments about the misapplication of the availability heuristic and keep them current by simply swapping in the “fear of the day.” In the 1990’s it was children being kidnapped by strangers (it was known as “stranger danger”) despite the facts that kidnappings accounted for only 2% of the violent crimes committed against children, and only 24% of kidnappings were committed by strangers (US Department of Justice, 2007). This fear overlapped with the fear of terrorism that gripped the country after the 2001 terrorist attacks on the World Trade Center and US Pentagon and still plagues the population of the US somewhat in 2020. After a well-publicized, sensational act of violence, people are extremely likely to increase their estimates of the chances that they, too, will be victims of terror. Think about the reality, however. In October of 2001, a terrorist mailed anthrax spores to members of the US government and a number of media companies. A total of five people died as a result of this attack. The nation was nearly paralyzed by the fear of dying from the attack; in reality the probability of an individual person dying was 0.00000002.

The availability heuristic can lead you to make incorrect judgments in a school setting as well. For example, suppose you are trying to decide if you should take a class from a particular math professor. You might try to make a judgment of how good a teacher she is by recalling instances of friends and acquaintances making comments about her teaching skill. You may have some examples that suggest that she is a poor teacher very available to memory, so on the basis of the availability heuristic you judge her a poor teacher and decide to take the class from someone else. What if, however, the instances you recalled were all from the same person, and this person happens to be a very colorful storyteller? The subsequent ease of remembering the instances might not indicate that the professor is a poor teacher after all.

Although the availability heuristic is obviously important, it is not the only judgment heuristic we use. Amos Tversky and Daniel Kahneman examined the role of heuristics in inductive reasoning in a long series of studies. Kahneman received a Nobel Prize in Economics for this research in 2002, and Tversky would have certainly received one as well if he had not died of melanoma at age 59 in 1996 (Nobel Prizes are not awarded posthumously). Kahneman and Tversky demonstrated repeatedly that people do not reason in ways that are consistent with the laws of probability. They identified several heuristic strategies that people use instead to make judgments about likelihood. The importance of this work for economics (and the reason that Kahneman was awarded the Nobel Prize) is that earlier economic theories had assumed that people do make judgments rationally, that is, in agreement with the laws of probability.

Another common heuristic that people use for making judgments is the  representativeness heuristic (Kahneman & Tversky 1973). Suppose we describe a person to you. He is quiet and shy, has an unassuming personality, and likes to work with numbers. Is this person more likely to be an accountant or an attorney? If you said accountant, you were probably using the representativeness heuristic. Our imaginary person is judged likely to be an accountant because he resembles, or is representative of the concept of, an accountant. When research participants are asked to make judgments such as these, the only thing that seems to matter is the representativeness of the description. For example, if told that the person described is in a room that contains 70 attorneys and 30 accountants, participants will still assume that he is an accountant.
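
For comparison, here is the extensional calculation that the representativeness heuristic skips. The 70/30 base rate comes from the example above; the likelihoods (how well the description fits each profession) are assumptions we made up, deliberately generous to the “accountant” reading.

# Base rates from the example: a room of 70 attorneys and 30 accountants.
p_accountant = 0.30
p_attorney = 0.70

# Assumed likelihoods: suppose the quiet, numbers-loving description is
# three times as likely to fit an accountant as an attorney.
p_desc_given_accountant = 0.60
p_desc_given_attorney = 0.20

p_desc = (p_desc_given_accountant * p_accountant
          + p_desc_given_attorney * p_attorney)

p_accountant_given_desc = p_desc_given_accountant * p_accountant / p_desc
print(round(p_accountant_given_desc, 2))   # about 0.56

Even when the description fits an accountant three times better, the base rate drags the probability down to barely better than a coin flip, a long way from the near-certainty that representativeness suggests.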

inductive reasoning :  a type of reasoning in which we make judgments about likelihood from sets of evidence

inductively strong argument :  an inductive argument in which the beginning statements lead to a conclusion that is probably true

heuristic :  a shortcut strategy that we use to make judgments and solve problems. Although they are easy to use, they do not guarantee correct judgments and solutions

availability heuristic :  judging the frequency or likelihood of some event type according to how easily examples of the event can be called to mind (i.e., how available they are to memory)

representativeness heuristic:   judging the likelihood that something is a member of a category on the basis of how much it resembles a typical category member (i.e., how representative it is of the category)

Type 1 thinking : fast, automatic, and emotional thinking.

Type 2 thinking : slow, effortful, and logical thinking.

  • What percentage of workplace homicides are co-worker violence?

Many people get these questions wrong. The answers are 10%; stairs; skin; 6%. How close were your answers? Explain how the availability heuristic might have led you to make the incorrect judgments.

  • Can you think of some other judgments that you have made (or beliefs that you have) that might have been influenced by the availability heuristic?

7.3 Problem Solving

  • Please take a few minutes to list a number of problems that you are facing right now.
  • Now write about a problem that you recently solved.
  • What is your definition of a problem?

Mary has a problem. Her daughter, ordinarily quite eager to please, appears to delight in being the last person to do anything. Whether getting ready for school, going to piano lessons or karate class, or even going out with her friends, she seems unwilling or unable to get ready on time. Other people have different kinds of problems. For example, many students work at jobs, have numerous family commitments, and are facing a course schedule full of difficult exams, assignments, papers, and speeches. How can they find enough time to devote to their studies and still fulfill their other obligations? Speaking of students and their problems: Show that a ball thrown vertically upward with initial velocity v0 takes twice as much time to return as to reach the highest point (from Spiegel, 1981).

These are three very different situations, but we have called them all problems. What makes them all the same, despite the differences? A psychologist might define a  problem   as a situation with an initial state, a goal state, and a set of possible intermediate states. Somewhat more meaningfully, we might consider a problem a situation in which you are here, in one state (e.g., daughter is always late), you want to be there, in another state (e.g., daughter is not always late), and there is no obvious way to get from here to there. Defined this way, each of the three situations we outlined can now be seen as an example of the same general concept, a problem. At this point, you might begin to wonder what is not a problem, given such a general definition. It seems that nearly every non-routine task we engage in could qualify as a problem. As long as you realize that problems are not necessarily bad (it can be quite fun and satisfying to rise to the challenge and solve a problem), this may be a useful way to think about it.

Can we identify a set of problem-solving skills that would apply to these very different kinds of situations? That task, in a nutshell, is a major goal of this section. Let us try to begin to make sense of the wide variety of ways that problems can be solved with an important observation: the process of solving problems can be divided into two key parts. First, people have to notice, comprehend, and represent the problem properly in their minds (called  problem representation ). Second, they have to apply some kind of solution strategy to the problem. Psychologists have studied both of these key parts of the process in detail.

When you first think about the problem-solving process, you might guess that most of our difficulties would occur because we are failing in the second step, the application of strategies. Although this can be a significant difficulty much of the time, the more important source of difficulty is probably problem representation. In short, we often fail to solve a problem because we are looking at it, or thinking about it, the wrong way.

problem :  a situation in which we are in an initial state, have a desired goal state, and there is a number of possible intermediate states (i.e., there is no obvious way to get from the initial to the goal state)

problem representation :  noticing, comprehending and forming a mental conception of a problem

Defining and Mentally Representing Problems in Order to Solve Them

So, the main obstacle to solving a problem is that we do not clearly understand exactly what the problem is. Recall the problem with Mary’s daughter always being late. One way to represent, or to think about, this problem is that she is being defiant. She refuses to get ready in time. This type of representation or definition suggests a particular type of solution. Another way to think about the problem, however, is to consider the possibility that she is simply being sidetracked by interesting diversions. This different conception of what the problem is (i.e., different representation) suggests a very different solution strategy. For example, if Mary defines the problem as defiance, she may be tempted to solve the problem using some kind of coercive tactics, that is, to assert her authority as her mother and force her to listen. On the other hand, if Mary defines the problem as distraction, she may try to solve it by simply removing the distracting objects.

As you might guess, when a problem is represented one way, the solution may seem very difficult, or even impossible. Seen another way, the solution might be very easy. For example, consider the following problem (from Nasar, 1998):

Two bicyclists start 20 miles apart and head toward each other, each going at a steady rate of 10 miles per hour. At the same time, a fly that travels at a steady 15 miles per hour starts from the front wheel of the southbound bicycle and flies to the front wheel of the northbound one, then turns around and flies to the front wheel of the southbound one again, and continues in this manner until he is crushed between the two front wheels. Question: what total distance did the fly cover?

Please take a few minutes to try to solve this problem.

Most people represent this problem as a question about a fly because, well, that is how the question is asked. The solution, using this representation, is to figure out how far the fly travels on the first leg of its journey, then add this total to how far it travels on the second leg of its journey (when it turns around and returns to the first bicycle), then continue to add the smaller distance from each leg of the journey until you converge on the correct answer. You would have to be quite skilled at math to solve this problem, and you would probably need some time and pencil and paper to do it.

If you consider a different representation, however, you can solve this problem in your head. Instead of thinking about it as a question about a fly, think about it as a question about the bicycles. They are 20 miles apart, and each is traveling 10 miles per hour. How long will it take for the bicycles to reach each other? Right, one hour. The fly is traveling 15 miles per hour; therefore, it will travel a total of 15 miles back and forth in the hour before the bicycles meet. Represented one way (as a problem about a fly), the problem is quite difficult. Represented another way (as a problem about two bicycles), it is easy. Changing your representation of a problem is sometimes the best—sometimes the only—way to solve it.
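
Both representations can be checked with a few lines of arithmetic. The Python sketch below is our own addition: it computes the answer the easy way and then, for comparison, grinds out the fly’s back-and-forth legs one at a time until the bicycles meet.

# Numbers from the problem: bicycles 20 miles apart at 10 mph each, fly at 15 mph.

# Bicycle representation: the gap closes at 20 mph, so the riders meet in 1 hour,
# and the fly covers 15 miles in that hour.
print(15 * (20 / (10 + 10)))    # 15.0

# Fly representation: sum the legs of the fly's zigzag journey.
gap = 20.0       # current distance between the bicycles
total = 0.0      # distance the fly has covered so far
while gap > 1e-12:
    leg_time = gap / (15 + 10)  # fly and the oncoming bicycle close their gap at 25 mph
    total += 15 * leg_time      # distance flown on this leg
    gap -= 20 * leg_time        # both bicycles keep riding during the leg
print(round(total, 6))          # 15.0 again, after about twenty legs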

Unfortunately, however, changing a problem’s representation is not the easiest thing in the world to do. Often, problem solvers get stuck looking at a problem one way. This is called  fixation . Most people who represent the preceding problem as a problem about a fly probably do not pause to reconsider, and consequently change, their representation. A parent who thinks her daughter is being defiant is unlikely to consider the possibility that her behavior is far less purposeful.

Problem-solving fixation was examined by a group of German psychologists called Gestalt psychologists during the 1930’s and 1940’s. Karl Duncker, for example, discovered an important type of failure to take a different perspective called  functional fixedness . Imagine being a participant in one of his experiments. You are asked to figure out how to mount two candles on a door and are given an assortment of odds and ends, including a small empty cardboard box and some thumbtacks. Perhaps you have already figured out a solution: tack the box to the door so it forms a platform, then put the candles on top of the box. Most people are able to arrive at this solution. Imagine a slight variation of the procedure, however. What if, instead of being empty, the box had matches in it? Most people given this version of the problem do not arrive at the solution given above. Why? Because it seems to people that when the box contains matches, it already has a function; it is a matchbox. People are unlikely to consider a new function for an object that already has a function. This is functional fixedness.

Mental set is a type of fixation in which the problem solver gets stuck using the same solution strategy that has been successful in the past, even though the solution may no longer be useful. It is commonly seen when students do math problems for homework. Often, several problems in a row require the reapplication of the same solution strategy. Then, without warning, the next problem in the set requires a new strategy. Many students attempt to apply the formerly successful strategy on the new problem and therefore cannot come up with a correct answer.

The thing to remember is that you cannot solve a problem unless you correctly identify what it is to begin with (initial state) and what you want the end result to be (goal state). That may mean looking at the problem from a different angle and representing it in a new way. The correct representation does not guarantee a successful solution, but it certainly puts you on the right track.

A bit more optimistically, the Gestalt psychologists discovered what may be considered the opposite of fixation, namely  insight . Sometimes the solution to a problem just seems to pop into your head. Wolfgang Kohler examined insight by posing many different problems to chimpanzees, principally problems pertaining to their acquisition of out-of-reach food. In one version, a banana was placed outside of a chimpanzee’s cage and a short stick inside the cage. The stick was too short to retrieve the banana, but was long enough to retrieve a longer stick also located outside of the cage. This second stick was long enough to retrieve the banana. After trying, and failing, to reach the banana with the shorter stick, the chimpanzee would try a couple of random-seeming attempts, react with some apparent frustration or anger, then suddenly rush to the longer stick, the correct solution fully realized at this point. This sudden appearance of the solution, observed many times with many different problems, was termed insight by Kohler.

Lest you think it pertains to chimpanzees only, Karl Duncker demonstrated in the 1930s that children also solve problems through insight. More importantly, you have probably experienced insight yourself. Think back to a time when you were trying to solve a difficult problem. After struggling for a while, you gave up. Hours later, the solution just popped into your head, perhaps when you were taking a walk, eating dinner, or lying in bed.

fixation :  when a problem solver gets stuck looking at a problem a particular way and cannot change his or her representation of it (or his or her intended solution strategy)

functional fixedness :  a specific type of fixation in which a problem solver cannot think of a new use for an object that already has a function

mental set :  a specific type of fixation in which a problem solver gets stuck using the same solution strategy that has been successful in the past

insight :  a sudden realization of a solution to a problem

Solving Problems by Trial and Error

Correctly identifying the problem and your goal for a solution is a good start, but recall the psychologist’s definition of a problem: it includes a set of possible intermediate states. Viewed this way, a problem can be solved satisfactorily only if one can find a path through some of these intermediate states to the goal. Imagine a fairly routine problem, finding a new route to school when your ordinary route is blocked (by road construction, for example). At each intersection, you may turn left, turn right, or go straight. A satisfactory solution to the problem (of getting to school) is a sequence of selections at each intersection that allows you to wind up at school.

If you had all the time in the world to get to school, you might try choosing intermediate states randomly. At one corner you turn left, the next you go straight, then you go left again, then right, then right, then straight. Unfortunately, trial and error will not necessarily get you where you want to go, and even if it does, it is not the fastest way to get there. For example, when a friend of ours was in college, he got lost on the way to a concert and attempted to find the venue by choosing streets to turn onto randomly (this was long before the use of GPS). Amazingly enough, the strategy worked, although he did end up missing two out of the three bands who played that night.

Trial and error is not all bad, however. B.F. Skinner, a prominent behaviorist psychologist, suggested that people often behave randomly in order to see what effect the behavior has on the environment and what subsequent effect this environmental change has on them. This seems particularly true for the very young person. Picture a child filling a household’s fish tank with toilet paper, for example. To a child trying to develop a repertoire of creative problem-solving strategies, an odd and random behavior might be just the ticket. Eventually, the exasperated parent hopes, the child will discover that many of these random behaviors do not successfully solve problems; in fact, in many cases they create problems. Thus, one would expect a decrease in this random behavior as a child matures. You should realize, however, that the opposite extreme is equally counterproductive. If children become too rigid, never trying anything unexpected and new, their problem-solving skills can become too limited.

Effective problem solving seems to call for a happy medium: a balance between using well-founded old strategies and breaking new ground. The individual who recognizes a situation in which an old problem-solving strategy would work best, and who can also recognize a situation in which a new, untested strategy is necessary, is halfway to success.

Solving Problems with Algorithms and Heuristics

For many problems there is a possible strategy available that will guarantee a correct solution. For example, think about math problems. Math lessons often consist of step-by-step procedures that can be used to solve the problems. If you apply the strategy without error, you are guaranteed to arrive at the correct solution to the problem. This approach is called using an  algorithm , a term that denotes the step-by-step procedure that guarantees a correct solution. Because algorithms are sometimes available and come with a guarantee, you might think that most people use them frequently. Unfortunately, however, they do not. As the experience of many students who have struggled through math classes can attest, algorithms can be extremely difficult to use, even when the problem solver knows which algorithm is supposed to work in solving the problem. In problems outside of math class, we often do not even know if an algorithm is available. It is probably fair to say, then, that algorithms are rarely used when people try to solve problems.

Because algorithms are so difficult to use, people often pass up the opportunity to guarantee a correct solution in favor of a strategy that is much easier to use and yields a reasonable chance of coming up with a correct solution. These strategies are called  problem solving heuristics . Similar to what you saw in section 6.2 with reasoning heuristics, a problem solving heuristic is a shortcut strategy that people use when trying to solve problems. It usually works pretty well, but does not guarantee a correct solution to the problem. For example, one problem solving heuristic might be “always move toward the goal” (so when trying to get to school when your regular route is blocked, you would always turn in the direction you think the school is). A heuristic that people might use when doing math homework is “use the same solution strategy that you just used for the previous problem.”

By the way, we hope these last two paragraphs feel familiar to you. They seem to parallel a distinction that you recently learned. Indeed, algorithms and problem-solving heuristics are another example of the distinction between Type 1 thinking and Type 2 thinking.
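
To make the contrast concrete, here is a toy sketch, entirely our own and with a made-up street grid, of the blocked-route example. Breadth-first search plays the role of the algorithm: it is guaranteed to find a shortest route if one exists. The “always move toward the goal” heuristic is quick, but on this particular grid it bounces off a wall and never arrives.

from collections import deque

# A 4x4 street grid: S is home, G is the school, # marks blocked streets.
GRID = ["S.#.",
        ".##.",
        "....",
        ".#.G"]
ROWS, COLS = len(GRID), len(GRID[0])
START, GOAL = (0, 0), (3, 3)

def neighbors(cell):
    r, c = cell
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < ROWS and 0 <= nc < COLS and GRID[nr][nc] != "#":
            yield (nr, nc)

def bfs(start, goal):
    # Algorithm: breadth-first search, guaranteed to find a shortest route if one exists.
    frontier, seen = deque([(start, 0)]), {start}
    while frontier:
        cell, dist = frontier.popleft()
        if cell == goal:
            return dist
        for nxt in neighbors(cell):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, dist + 1))
    return None

def greedy(start, goal, max_steps=20):
    # Heuristic: at every corner, step to the open neighbor closest to the goal.
    cell, steps = start, 0
    while cell != goal and steps < max_steps:
        options = list(neighbors(cell))
        if not options:
            return None
        cell = min(options, key=lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1]))
        steps += 1
    return steps if cell == goal else None

print(bfs(START, GOAL))     # 6    -- the algorithm finds a six-block route
print(greedy(START, GOAL))  # None -- the heuristic gets stuck and gives up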

Although it is probably not worth describing a large number of specific heuristics, two observations about heuristics are worth mentioning. First, heuristics can be very general or they can be very specific, pertaining to a particular type of problem only. For example, “always move toward the goal” is a general strategy that you can apply to countless problem situations. On the other hand, “when you are lost without a functioning GPS, pick the most expensive car you can see and follow it” is specific to the problem of being lost. Second, not all heuristics are equally useful. One heuristic that many students know is “when in doubt, choose c for a question on a multiple-choice exam.” This is a dreadful strategy because many instructors intentionally randomize the order of answer choices. Another test-taking heuristic, somewhat more useful, is “look for the answer to one question somewhere else on the exam.”

You really should pay attention to the application of heuristics to test taking. Imagine that while reviewing your answers for a multiple-choice exam before turning it in, you come across a question for which you originally thought the answer was c. Upon reflection, you now think that the answer might be b. Should you change the answer to b, or should you stick with your first impression? Most people will apply the heuristic strategy to “stick with your first impression.” What they do not realize, of course, is that this is a very poor strategy (Lilienfeld et al., 2009). Most of the errors on exams come on questions that were answered wrong originally and were not changed (so they remain wrong). There are many fewer errors where we change a correct answer to an incorrect answer. And, of course, sometimes we change an incorrect answer to a correct answer. In fact, research has shown that it is more common to change a wrong answer to a right answer than vice versa (Bruno, 2001).

The belief in this poor test-taking strategy (stick with your first impression) is based on the  confirmation bias   (Nickerson, 1998; Wason, 1960). You first saw the confirmation bias in Module 1, but because it is so important, we will repeat the information here. People have a bias, or tendency, to notice information that confirms what they already believe. Somebody at one time told you to stick with your first impression, so when you look at the results of an exam you have taken, you will tend to notice the cases that are consistent with that belief. That is, you will notice the cases in which you originally had an answer correct and changed it to the wrong answer. You tend not to notice the other two important (and more common) cases, changing an answer from wrong to right, and leaving a wrong answer unchanged.

Because heuristics by definition do not guarantee a correct solution to a problem, mistakes are bound to occur when we employ them. A poor choice of a specific heuristic will lead to an even higher likelihood of making an error.

algorithm :  a step-by-step procedure that guarantees a correct solution to a problem

problem solving heuristic :  a shortcut strategy that we use to solve problems. Although they are easy to use, they do not guarantee correct judgments and solutions

confirmation bias :  people’s tendency to notice information that confirms what they already believe

An Effective Problem-Solving Sequence

You may be left with a big question: If algorithms are hard to use and heuristics often don’t work, how am I supposed to solve problems? Robert Sternberg (1996), as part of his theory of what makes people successfully intelligent (Module 8), described a problem-solving sequence that has been shown to work rather well:

  • Identify the existence of a problem.  In school, problem identification is often easy; problems that you encounter in math classes, for example, are conveniently labeled as problems for you. Outside of school, however, realizing that you have a problem is a key difficulty that you must get past in order to begin solving it. You must be very sensitive to the symptoms that indicate a problem.
  • Define the problem.  Suppose you realize that you have been having many headaches recently. Very likely, you would identify this as a problem. If you define the problem as “headaches,” the solution would probably be to take aspirin or ibuprofen or some other anti-inflammatory medication. If the headaches keep returning, however, you have not really solved the problem—likely because you have mistaken a symptom for the problem itself. Instead, you must find the root cause of the headaches. Stress might be the real problem. For you to successfully solve many problems it may be necessary for you to overcome your fixations and represent the problems differently. One specific strategy that you might find useful is to try to define the problem from someone else’s perspective. How would your parents, spouse, significant other, doctor, etc. define the problem? Somewhere in these different perspectives may lurk the key definition that will allow you to find an easier and permanent solution.
  • Formulate strategy.  Now it is time to begin planning exactly how the problem will be solved. Is there an algorithm or heuristic available for you to use? Remember, heuristics by their very nature offer no guarantee, so occasionally they will fail to solve the problem. One point to keep in mind is that you should look for long-range solutions, which are more likely to address the root cause of a problem than short-range solutions.
  • Represent and organize information.  Similar to the way that the problem itself can be defined, or represented in multiple ways, information within the problem is open to different interpretations. Suppose you are studying for a big exam. You have chapters from a textbook and from a supplemental reader, along with lecture notes that all need to be studied. How should you (represent and) organize these materials? Should you separate them by type of material (text versus reader versus lecture notes), or should you separate them by topic? To solve problems effectively, you must learn to find the most useful representation and organization of information.
  • Allocate resources.  This is perhaps the simplest principle of the problem solving sequence, but it is extremely difficult for many people. First, you must decide whether time, money, skills, effort, goodwill, or some other resource would help to solve the problem. Then, you must make the hard choice of deciding which resources to use, realizing that you cannot devote maximum resources to every problem. Very often, the solution to a problem is simply to change how resources are allocated (for example, spending more time studying in order to improve grades).
  • Monitor and evaluate solutions.  Pay attention to the solution strategy while you are applying it. If it is not working, you may be able to select another strategy. Another fact you should realize about problem solving is that it never does end. Solving one problem frequently brings up new ones. Good monitoring and evaluation of your problem solutions can help you to anticipate and get a jump on solving the inevitable new problems that will arise.

Please note that this is  an  effective problem-solving sequence, not  the  effective problem-solving sequence. Just as you can become fixated and end up representing the problem incorrectly or trying an inefficient solution, you can become stuck applying the problem-solving sequence in an inflexible way. Clearly there are problem situations that can be solved without using these skills in this order.

Additionally, many real-world problems may require that you go back and redefine a problem several times as the situation changes (Sternberg et al. 2000). For example, consider the problem with Mary’s daughter one last time. At first, Mary did represent the problem as one of defiance. When her early strategy of pleading and threatening punishment was unsuccessful, Mary began to observe her daughter more carefully. She noticed that, indeed, her daughter’s attention would be drawn away by an irresistible book or other distraction. Armed with this re-representation of the problem, she began a new solution strategy: she reminded her daughter every few minutes to stay on task and told her that if she was ready before it was time to leave, she could return to the book or other distraction at that time. Fortunately, this strategy was successful, so Mary did not have to go back and redefine the problem again.

Pick one or two of the problems that you listed when you first started studying this section and try to work out the steps of Sternberg’s problem solving sequence for each one.

concept :  a mental representation of a category of things in the world

inference :  an assumption about the truth of something that is not stated. Inferences come from our prior knowledge and experience, and from logical reasoning

metacognition :  knowledge about one’s own cognitive processes; thinking about your thinking

Dunning-Kruger effect :  individuals who are less competent tend to overestimate their abilities more than individuals who are more competent do

critical thinking :  thinking like a scientist in your everyday life for the purpose of drawing correct conclusions. It entails skepticism; an ability to identify biases, distortions, omissions, and assumptions; and excellent deductive and inductive reasoning, and problem solving skills

skepticism :  a way of thinking in which you refrain from drawing a conclusion or changing your mind until good evidence has been provided

bias :  an inclination, tendency, leaning, or prejudice

Introduction to Psychology Copyright © 2020 by Ken Gray; Elizabeth Arnott-Hill; and Or'Shaundra Benson is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.

The Problem of Induction

We generally think that the observations we make are able to justify some expectations or predictions about observations we have not yet made, as well as general claims that go beyond the observed. For example, the observation that bread of a certain appearance has thus far been nourishing seems to justify the expectation that the next similar piece of bread I eat will also be nourishing, as well as the claim that bread of this sort is generally nourishing. Such inferences from the observed to the unobserved, or to general laws, are known as “inductive inferences”.

The original source of what has become known as the “problem of induction” is in Book 1, part iii, section 6 of A Treatise of Human Nature by David Hume, published in 1739 (Hume 1739). In 1748, Hume gave a shorter version of the argument in Section iv of An enquiry concerning human understanding (Hume 1748). Throughout this article we will give references to the Treatise as “T”, and the Enquiry as “E”.

Hume asks on what grounds we come to our beliefs about the unobserved on the basis of inductive inferences. He presents an argument in the form of a dilemma which appears to rule out the possibility of any reasoning from the premises to the conclusion of an inductive inference. There are, he says, two possible types of arguments, “demonstrative” and “probable”, but neither will serve. A demonstrative argument produces the wrong kind of conclusion, and a probable argument would be circular. Therefore, for Hume, the problem remains of how to explain why we form any conclusions that go beyond the past instances of which we have had experience (T. 1.3.6.10). Hume stresses that he is not disputing that we do draw such inferences. The challenge, as he sees it, is to understand the “foundation” of the inference—the “logic” or “process of argument” that it is based upon (E. 4.2.21). The problem of meeting this challenge, while evading Hume’s argument against the possibility of doing so, has become known as “the problem of induction”.

Hume’s argument is one of the most famous in philosophy. A number of philosophers have attempted solutions to the problem, but a significant number have embraced his conclusion that it is insoluble. There is also a wide spectrum of opinion on the significance of the problem. Some have argued that Hume’s argument does not establish any far-reaching skeptical conclusion, either because it was never intended to, or because the argument is in some way misformulated. Yet many have regarded it as one of the most profound philosophical challenges imaginable since it seems to call into question the justification of one of the most fundamental ways in which we form knowledge. Bertrand Russell, for example, expressed the view that if Hume’s problem cannot be solved, “there is no intellectual difference between sanity and insanity” (Russell 1946: 699).

In this article, we will first examine Hume’s own argument, provide a reconstruction of it, and then survey different responses to the problem which it poses.

1. Hume’s Problem

Hume introduces the problem of induction as part of an analysis of the notions of cause and effect. Hume worked with a picture, widespread in the early modern period, in which the mind was populated with mental entities called “ideas”. Hume thought that ultimately all our ideas could be traced back to the “impressions” of sense experience. In the simplest case, an idea enters the mind by being “copied” from the corresponding impression (T. 1.1.1.7/4). More complex ideas are then created by the combination of simple ideas (E. 2.5/19). Hume took there to be a number of relations between ideas, including the relation of causation (E. 3.2). (For more on Hume’s philosophy in general, see Morris & Brown 2014).

For Hume, the relation of causation is the only relation by means of which “we can go beyond the evidence of our memory and senses” (E. 4.1.4, T. 1.3.2.3/74). Suppose we have an object present to our senses: say gunpowder. We may then infer to an effect of that object: say, the explosion. The causal relation links our past and present experience to our expectations about the future (E. 4.1.4/26).

Hume argues that we cannot make a causal inference by purely a priori means (E. 4.1.7). Rather, he claims, it is based on experience, and specifically experience of constant conjunction. We infer that the gunpowder will explode on the basis of past experience of an association between gunpowder and explosions.

Hume wants to know more about the basis for this kind of inference. If such an inference is made by a “chain of reasoning” (E. 4.2.16), he says, he would like to know what that reasoning is. In general, he claims that the inferences depend on a transition of the form:

I have found that such an object has always been attended with such an effect, and I foresee, that other objects, which are, in appearance, similar, will be attended with similar effects . (E. 4.2.16)

In the Treatise , Hume says that

if Reason determin’d us, it would proceed upon that principle that instances, of which we have had no experience, must resemble those, of which we have had experience, and that the course of nature continues always uniformly the same . (T. 1.3.6.4)

For convenience, we will refer to this claim of similarity or resemblance between observed and unobserved regularities as the “Uniformity Principle (UP)”. Sometimes it is also called the “Resemblance Principle”, or the “Principle of Uniformity of Nature”.

Hume then presents his famous argument to the conclusion that there can be no reasoning behind this principle. The argument takes the form of a dilemma. Hume makes a distinction between relations of ideas and matters of fact. Relations of ideas include geometric, algebraic and arithmetic propositions, “and, in short, every affirmation, which is either intuitively or demonstratively certain”. “Matters of fact”, on the other hand, are empirical propositions which can readily be conceived to be other than they are. Hume says that

All reasonings may be divided into two kinds, namely, demonstrative reasoning, or that concerning relations of ideas, and moral reasoning, or that concerning matter of fact and existence. (E. 4.2.18)

Hume considers the possibility of each of these types of reasoning in turn, and in each case argues that it is impossible for it to supply an argument for the Uniformity Principle.

First, Hume argues that the reasoning cannot be demonstrative, because demonstrative reasoning only establishes conclusions which cannot be conceived to be false. And, he says,

it implies no contradiction that the course of nature may change, and that an object seemingly like those which we have experienced, may be attended with different or contrary effects. (E. 4.2.18)

It is possible, he says, to clearly and distinctly conceive of a situation where the unobserved case does not follow the regularity so far observed (E. 4.2.18, T. 1.3.6.5/89).

Second, Hume argues that the reasoning also cannot be “such as regard matter of fact and real existence”. He also calls this “probable” reasoning. All such reasoning, he claims, “proceed upon the supposition, that the future will be conformable to the past”, in other words on the Uniformity Principle (E. 4.2.19).

Therefore, if the chain of reasoning is based on an argument of this kind it will again be relying on this supposition, “and taking that for granted, which is the very point in question”. (E. 4.2.19, see also T. 1.3.6.7/90). The second type of reasoning then fails to provide a chain of reasoning which is not circular.

In the Treatise version, Hume concludes

Thus, not only our reason fails us in the discovery of the ultimate connexion of causes and effects, but even after experience has inform’d us of their constant conjunction, ’tis impossible for us to satisfy ourselves by our reason, why we shou’d extend that experience beyond those particular instances, which have fallen under our observation. (T. 1.3.6.11/91–2)

The conclusion then is that our tendency to project past regularities into the future is not underpinned by reason. The problem of induction is to find a way to avoid this conclusion, despite Hume’s argument.

After presenting the problem, Hume does present his own “solution” to the doubts he has raised (E. 5, T. 1.3.7–16). This consists of an explanation of what the inductive inferences are driven by, if not reason. In the Treatise Hume raises the problem of induction in an explicitly contrastive way. He asks whether the transition involved in the inference is produced

by means of the understanding or imagination; whether we are determin’d by reason to make the transition, or by a certain association and relation of perceptions? (T. 1.3.6.4)

And he goes on to summarize the conclusion by saying

When the mind, therefore, passes from the idea or impression of one object to the idea or belief of another, it is not determin’d by reason, but by certain principles, which associate together the ideas of these objects, and unite them in the imagination. (T. 1.3.6.12)

Thus, it is the imagination which is taken to be responsible for underpinning the inductive inference, rather than reason.

In the Enquiry, Hume suggests that the step taken by the mind,

which is not supported by any argument, or process of the understanding … must be induced by some other principle of equal weight and authority. (E. 5.1.2)

That principle is “custom” or “habit”. The idea is that if one has seen similar objects or events constantly conjoined, then the mind is inclined to expect a similar regularity to hold in the future. The tendency or “propensity” to draw such inferences, is the effect of custom:

… having found, in many instances, that any two kinds of objects, flame and heat, snow and cold, have always been conjoined together; if flame or snow be presented anew to the senses, the mind is carried by custom to expect heat or cold, and to believe, that such a quality does exist and will discover itself upon a nearer approach. This belief is the necessary result of placing the mind in such circumstances. It is an operation of the soul, when we are so situated, as unavoidable as to feel the passion of love, when we receive benefits; or hatred, when we meet with injuries. All these operations are a species of natural instincts, which no reasoning or process of the thought and understanding is able, either to produce, or to prevent. (E. 5.1.8)

Hume argues that the fact that these inferences do follow the course of nature is a kind of “pre-established harmony” (E. 5.2.21). It is a kind of natural instinct, which may in fact be more effective in making us successful in the world, than if we relied on reason to make these inferences.

Hume’s argument has been presented and formulated in many different versions. There is also an ongoing lively discussion over the historical interpretation of what Hume himself intended by the argument. It is therefore difficult to provide an unequivocal and uncontroversial reconstruction of Hume’s argument. Nonetheless, for the purposes of organizing the different responses to Hume’s problem that will be discussed in this article, the following reconstruction will serve as a useful starting point.

Hume’s argument concerns specific inductive inferences such as:

All observed instances of A have been B .

The next instance of A will be B .

Let us call this “inference I ”. Inferences which fall under this type of schema are now often referred to as cases of “simple enumerative induction”.

Hume’s own example is:

All observed instances of bread (of a particular appearance) have been nourishing.

The next instance of bread (of that appearance) will be nourishing.

Hume’s argument then proceeds by way of a numbered reconstruction, with premises labeled P1–P8 and subconclusions and conclusions labeled C1–C5, followed by a statement of its consequences. [The full reconstruction and its list of consequences are not reproduced here; the labels P1–P8 and C1–C5 are referred to throughout the discussion below.]

There have been different interpretations of what Hume means by “demonstrative” and “probable” arguments. Sometimes “demonstrative” is equated with “deductive”, and probable with “inductive” (e.g., Salmon 1966). Then the first horn of Hume’s dilemma would eliminate the possibility of a deductive argument, and the second would eliminate the possibility of an inductive argument. However, under this interpretation, premise P3 would not hold, because it is possible for the conclusion of a deductive argument to be a non-necessary proposition. Premise P3 could be modified to say that a demonstrative (deductive) argument establishes a conclusion that cannot be false if the premises are true. But then it becomes possible that the supposition that the future resembles the past, which is not a necessary proposition, could be established by a deductive argument from some premises, though not from a priori premises (in contradiction to conclusion C1 ).

Another common reading is to equate “demonstrative” with “deductively valid with a priori premises”, and “probable” with “having an empirical premise” (e.g., Okasha 2001). This may be closer to the mark, if one thinks, as Hume seems to have done, that premises which can be known a priori cannot be false, and hence are necessary. If the inference is deductively valid, then the conclusion of the inference from a priori premises must also be necessary. What the first horn of the dilemma then rules out is the possibility of a deductively valid argument with a priori premises, and the second horn rules out any argument (deductive or non-deductive), which relies on an empirical premise.

However, recent commentators have argued that in the historical context that Hume was situated in, the distinction he draws between demonstrative and probable arguments has little to do with whether or not the argument has a deductive form (Owen 1999; Garrett 2002). In addition, the class of inferences that establish conclusions whose negation is a contradiction may include not just deductively valid inferences from a priori premises, but any inferences that can be drawn using a priori reasoning (that is, reasoning where the transition from premises to the conclusion makes no appeal to what we learn from observations). It looks as though Hume does intend the argument of the first horn to rule out any a priori reasoning, since he says that a change in the course of nature cannot be ruled out “by any demonstrative argument or abstract reasoning a priori ” (E. 5.2.18). On this understanding, a priori arguments would be ruled out by the first horn of Hume’s dilemma, and empirical arguments by the second horn. This is the interpretation that I will adopt for the purposes of this article.

In Hume’s argument, the UP plays a central role. As we will see in section 4.2 , various authors have been doubtful about this principle. Versions of Hume’s argument have also been formulated which do not make reference to the UP. Rather they directly address the question of what arguments can be given in support of the transition from the premises to the conclusion of the specific inductive inference I . What arguments could lead us, for example, to infer that the next piece of bread will nourish from the observations of nourishing bread made so far? For the first horn of the argument, Hume’s argument can be directly applied. A demonstrative argument establishes a conclusion whose negation is a contradiction. The negation of the conclusion of the inductive inference is not a contradiction. It is not a contradiction that the next piece of bread is not nourishing. Therefore, there is no demonstrative argument for the conclusion of the inductive inference. In the second horn of the argument, the problem Hume raises is a circularity. Even if Hume is wrong that all inductive inferences depend on the UP, there may still be a circularity problem, but as we shall see in section 4.1 , the exact nature of the circularity needs to be carefully considered. But the main point at present is that the Humean argument is often formulated without invoking the UP.

Since Hume’s argument is a dilemma, there are two main ways to resist it. The first is to tackle the first horn and to argue that there is after all a demonstrative argument (here taken to mean an argument based on a priori reasoning) that can justify the inductive inference. The second is to tackle the second horn and to argue that there is after all a probable (or empirical) argument that can justify the inductive inference. We discuss the different variants of these two approaches in sections 3 and 4.

There are also those who dispute the consequences of the dilemma. For example, some scholars have denied that Hume should be read as invoking a premise such as premise P8 at all. The reason, they claim, is that he was not aiming for an explicitly normative conclusion about justification such as C5. Hume certainly is seeking a “chain of reasoning” from the premises of the inductive inference to the conclusion, and he thinks that an argument for the UP is necessary to complete the chain. However, one could think that there is no further premise regarding justification, and so the conclusion of his argument is simply C4: there is no chain of reasoning from the premises to the conclusion of an inductive inference. Hume could then be, as Don Garrett and David Owen have argued, advancing a “thesis in cognitive psychology”, rather than making a normative claim about justification (Owen 1999; Garrett 2002). The thesis is about the nature of the cognitive process underlying the inference. According to Garrett, the main upshot of Hume’s argument is that there can be no reasoning process that establishes the UP. For Owen, the message is that the inference is not drawn through a chain of ideas connected by mediating links, as would be characteristic of the faculty of reason.

There are also interpreters who have argued that Hume is merely trying to exclude a specific kind of justification of induction, based on a conception of reason predominant among rationalists of his time, rather than a justification in general (Beauchamp & Rosenberg 1981; Baier 2009). In particular, it has been claimed that it is “an attempt to refute the rationalist belief that at least some inductive arguments are demonstrative” (Beauchamp & Rosenberg 1981: xviii). Under this interpretation, premise P8 should be modified to read something like:

  • If there is no chain of reasoning based on demonstrative arguments from the premises to the conclusion of inference I , then inference I is not justified.

Such interpretations do however struggle with the fact that Hume’s argument is explicitly a two-pronged attack, which concerns not just demonstrative arguments, but also probable arguments.

The question of how expansive a normative conclusion to attribute to Hume is a complex one. It depends in part on the interpretation of Hume’s own solution to his problem. As we saw in section 1, Hume attributes the basis of inductive inference to principles of the imagination in the Treatise, and in the Enquiry to “custom” or “habit”, conceived as a kind of natural instinct. The question is then whether this alternative provides any kind of justification for the inference, even if not one based on reason. On the face of it, it looks as though Hume is suggesting that inductive inferences proceed on an entirely arational basis. He clearly does not think that they fail to produce good outcomes. In fact, Hume even suggests that this operation of the mind may be less “liable to error and mistake” than if it were entrusted to “the fallacious deductions of our reason, which is slow in its operations” (E. 5.2.22). It is also not clear that he sees the workings of the imagination as completely devoid of rationality. For one thing, Hume talks about the imagination as governed by principles. Later in the Treatise, he even gives “rules” and “logic” for characterizing what should count as a good causal inference (T. 1.3.15). He also clearly sees it as possible to distinguish between better and worse forms of such “reasoning”, as he continues to call it. Thus, there may be grounds to argue that Hume was not trying to argue that inductive inferences have no rational foundation whatsoever, but merely that they do not have the specific type of rational foundation which is rooted in the faculty of Reason.

All this indicates that there is room for debate over the intended scope of Hume’s own conclusion. And thus there is also room for debate over exactly what form a premise (such as premise P8 ) that connects the rest of his argument to a normative conclusion should take. No matter who is right about this however, the fact remains that Hume has throughout history been predominantly read as presenting an argument for inductive skepticism.

There are a number of approaches which effectively, if not explicitly, take issue with premise P8 and argue that providing a chain of reasoning from the premises to the conclusion is not a necessary condition for justification of an inductive inference. According to this type of approach, one may admit that Hume has shown that inductive inferences are not justified in the sense that we have reasons to think their conclusions true, but still think that weaker kinds of justification of induction are possible ( section 5 ). Finally, there are some philosophers who do accept the skeptical conclusion C5 and attempt to accommodate it. For example, there have been attempts to argue that inductive inference is not as central to scientific inquiry as is often thought ( section 6 ).

3. Tackling the First Horn of Hume’s Dilemma

The first horn of Hume’s argument, as formulated above, is aimed at establishing that there is no demonstrative argument for the UP. There are several ways people have attempted to show that the first horn does not definitively preclude a demonstrative or a priori argument for inductive inferences. One possible escape route from the first horn is to deny premise P3 , which amounts to admitting the possibility of synthetic a priori propositions ( section 3.1 ). Another possibility is to attempt to provide an a priori argument that the conclusion of the inference is probable, though not certain. The first horn of Hume’s dilemma implies that there cannot be a demonstrative argument to the conclusion of an inductive inference because it is possible to conceive of the negation of the conclusion. For instance, it is quite possible to imagine that the next piece of bread I eat will poison me rather than nourish me. However, this does not rule out the possibility of a demonstrative argument that establishes only that the bread is highly likely to nourish, not that it definitely will. One might then also challenge premise P8 , by saying that it is not necessary for justification of an inductive inference to have a chain of reasoning from its premises to its conclusion. Rather it would suffice if we had an argument from the premises to the claim that the conclusion is probable or likely. Then an a priori justification of the inductive inference would have been provided. There have been attempts to provide a priori justifications for inductive inference based on Inference to the Best Explanation ( section 3.2 ). There are also attempts to find an a priori solution based on probabilistic formulations of inductive inference, though many now think that a purely a priori argument cannot be found because there are empirical assumptions involved (sections 3.3 - 3.5 ).

As we have seen in section 1 , Hume takes demonstrative arguments to have conclusions which are “relations of ideas”, whereas “probable” or “moral” arguments have conclusions which are “matters of fact”. Hume’s distinction between “relations of ideas” and “matters of fact” anticipates the distinction drawn by Kant between “analytic” and “synthetic” propositions (Kant 1781). A classic example of an analytic proposition is “Bachelors are unmarried men”, and a synthetic proposition is “My bike tyre is flat”. For Hume, demonstrative arguments, which are based on a priori reasoning, can establish only relations of ideas, or analytic propositions. The association between a prioricity and analyticity underpins premise P3 , which states that a demonstrative argument establishes a conclusion whose negation is a contradiction.

One possible response to Hume’s problem is to deny premise P3 , by allowing the possibility that a priori reasoning could give rise to synthetic propositions. Kant famously argued in response to Hume that such synthetic a priori knowledge is possible (Kant 1781, 1783). He does this by a kind of reversal of the empiricist programme espoused by Hume. Whereas Hume tried to understand how the concept of a causal or necessary connection could be based on experience, Kant argued instead that experience only comes about through the concepts or “categories” of the understanding. On his view, one can gain a priori knowledge of these concepts, including the concept of causation, by a transcendental argument concerning the necessary preconditions of experience. A more detailed account of Kant’s response to Hume can be found in de Pierris and Friedman 2013.

The “Nomological-Explanatory” solution, which has been put forward by Armstrong, BonJour and Foster (Armstrong 1983; BonJour 1998; Foster 2004), appeals to the principle of Inference to the Best Explanation (IBE). According to IBE, we should infer that the hypothesis which provides the best explanation of the evidence is probably true. Proponents of the Nomological-Explanatory approach take Inference to the Best Explanation to be a mode of inference which is distinct from the type of “extrapolative” inductive inference that Hume was trying to justify. They also regard it as a type of inference which, although non-deductive, is justified a priori. For example, Armstrong says “To infer to the best explanation is part of what it is to be rational. If that is not rational, what is?” (Armstrong 1983: 59).

The a priori justification is taken to proceed in two steps. First, it is argued that we should recognize that certain observed regularities require an explanation in terms of some underlying law. For example, if a coin persistently lands heads on repeated tosses, then it becomes increasingly implausible that this occurred just because of “chance”. Rather, we should infer to the better explanation that the coin has a certain bias. Saying that the coin lands heads not only for the observed cases, but also for the unobserved cases, does not provide an explanation of the observed regularity. Thus, mere Humean constant conjunction is not sufficient. What is needed for an explanation is a “non-Humean, metaphysically robust conception of objective regularity” (BonJour 1998), which is thought of as involving actual natural necessity (Armstrong 1983; Foster 2004).

Once it has been established that there must be some metaphysically robust explanation of the observed regularity, the second step is to argue that out of all possible metaphysically robust explanations, the “straight” inductive explanation is the best one, where the straight explanation extrapolates the observed frequency to the wider population. For example, given that a coin has some objective chance of landing heads, the best explanation of the fact that \(m/n\) heads have been so far observed, is that the objective chance of the coin landing heads is \(m/n\). And this objective chance determines what happens not only in observed cases but also in unobserved cases.
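As a rough illustration of why the “straight” extrapolation is privileged here, the following Python sketch (my own gloss, not the authors’ argument; the toss counts are invented) checks numerically that, among candidate objective chances, the observed frequency \(m/n\) is the value that makes the observed outcomes most probable:

```python
from math import comb

def likelihood(theta, m, n):
    """Probability of observing m heads in n tosses if the objective chance of heads is theta."""
    return comb(n, m) * theta**m * (1 - theta)**(n - m)

m, n = 7, 10  # invented data: 7 heads observed in 10 tosses
candidates = [i / 100 for i in range(1, 100)]  # candidate values for the objective chance
best = max(candidates, key=lambda theta: likelihood(theta, m, n))
print(best, m / n)  # the best candidate coincides (up to grid resolution) with the sample frequency 0.7
```

This is only a numerical observation about likelihoods; whether it shows the straight extrapolation to be the best explanation is precisely what the objections below dispute.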

The Nomological-Explanatory solution relies on taking IBE as a rational, a priori form of inference which is distinct from inductive inferences like inference I . However, one might alternatively view inductive inferences as a special case of IBE (Harman 1968), or take IBE to be merely an alternative way of characterizing inductive inference (Henderson 2014). If either of these views is right, IBE does not have the necessary independence from inductive inference to provide a non-circular justification of it.

One may also object to the Nomological-Explanatory approach on the grounds that regularities do not necessarily require an explanation in terms of necessary connections or robust metaphysical laws. The viability of the approach also depends on the tenability of a non-Humean conception of laws. There have been several serious attempts to develop such an account (Armstrong 1983; Tooley 1977; Dretske 1977), but also much criticism (see J. Carroll 2016).

Another critical objection is that the Nomological-Explanatory solution simply begs the question, even if it is taken to be legitimate to make use of IBE in the justification of induction. In the first step of the argument we infer to a law or regularity which extends beyond the spatio-temporal region in which observations have been thus far made, in order to predict what will happen in the future. But why could a law that only applies to the observed spatio-temporal region not be an equally good explanation? The main reply seems to be that we can see a priori that laws with temporal or spatial restrictions would be less good explanations. Foster argues that the reason is that this would introduce more mysteries:

For it seems to me that a law whose scope is restricted to some particular period is more mysterious, inherently more puzzling, than one which is temporally universal. (Foster 2004)

Another way in which one can try to construct an a priori argument that the premises of an inductive inference make its conclusion probable, is to make use of the formalism of probability theory itself. At the time Hume wrote, probabilities were used to analyze games of chance. And in general, they were used to address the problem of what we would expect to see, given that a certain cause was known to be operative. This is the so-called problem of “direct inference”. However, the problem of induction concerns the “inverse” problem of determining the cause or general hypothesis, given particular observations.

One of the first and most important methods for tackling the “inverse” problem using probabilities was developed by Thomas Bayes. Bayes’s essay containing the main results was published after his death in 1764 (Bayes 1764). However, it is possible that the work was done significantly earlier and was in fact written in direct response to the publication of Hume’s Enquiry in 1748 (see Zabell 1989: 290–93, for discussion of what is known about the history).

We will illustrate the Bayesian method using the problem of drawing balls from an urn. Suppose that we have an urn which contains white and black balls in an unknown proportion. We draw a sample of balls from the urn by removing a ball, noting its color, and then putting it back before drawing again.

Consider first the problem of direct inference. Given the proportion of white balls in the urn, what is the probability of various outcomes for a sample of observations of a given size? Suppose the proportion of white balls in the urn is \(\theta = 0.6\). The probability of drawing one white ball in a sample of one is then \(p(W; \theta = 0.6) = 0.6\). We can also compute the probability for other outcomes, such as drawing two white balls in a sample of two, using the rules of the probability calculus (see section 1 of Hájek 2011). Generally, the probability that \(n_w\) white balls are drawn in a sample of size N is given by the binomial distribution:

\[ p(n_w; \theta) = \binom{N}{n_w}\, \theta^{n_w} (1-\theta)^{N - n_w}. \]

This is a specific example of a “sampling distribution”, \(p(E\mid H)\), which gives the probability of certain evidence E in a sample, on the assumption that a certain hypothesis H is true. Calculation of the sampling distribution can in general be done a priori , given the rules of the probability calculus.
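For concreteness, here is a minimal sketch of such a direct-inference calculation for the urn example (the sample sizes are illustrative choices of mine, not from the source):

```python
from math import comb

def sampling_prob(n_w, N, theta):
    """Binomial probability of drawing n_w white balls in N draws (with replacement),
    given that the proportion of white balls in the urn is theta."""
    return comb(N, n_w) * theta**n_w * (1 - theta)**(N - n_w)

theta = 0.6                          # assumed known proportion of white balls
print(sampling_prob(1, 1, theta))    # 0.6: one white ball in a sample of one
print(sampling_prob(2, 2, theta))    # 0.36: two white balls in a sample of two
print([round(sampling_prob(k, 5, theta), 3) for k in range(6)])  # full sampling distribution for N = 5
```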

However, the problem of induction is the inverse problem. We want to infer not what the sample will be like, given a known hypothesis, but rather a hypothesis about the general situation or population, based on the observation of a limited sample. The probabilities of the candidate hypotheses can then be used to inform predictions about further observations. In the case of the urn, for example, we want to know what the observation of a particular sample frequency of white balls, \(\frac{n_w}{N}\), tells us about \(\theta\), the proportion of white balls in the urn.

The idea of the Bayesian approach is to assign probabilities not only to the events which constitute evidence, but also to hypotheses. One starts with a “prior probability” distribution over the relevant hypotheses \(p(H)\). On learning some evidence E, the Bayesian updates the prior \(p(H)\) to the conditional probability \(p(H\mid E)\). This update rule is called the “rule of conditionalisation”. The conditional probability \(p(H\mid E)\) is known as the “posterior probability”, and is calculated using Bayes’ rule:

\[ p(H\mid E) = \frac{p(E\mid H)\, p(H)}{p(E)} = \frac{p(E\mid H)\, p(H)}{\sum_{H'} p(E\mid H')\, p(H')}. \]

Here the sampling distribution can be taken to be a conditional probability \(p(E\mid H)\), which is known as the “likelihood” of the hypothesis H on evidence E .

One can then go on to compute the predictive distribution for as yet unobserved data \(E'\), given observations E. The predictive distribution in a Bayesian approach is given by

\[ p(E'\mid E) = \sum_{H} p(E'\mid H)\, p(H\mid E), \]

where the sum becomes an integral in cases where H is a continuous variable.

For the urn example, we can compute the posterior probability \(p(\theta\mid n_w)\) using Bayes’ rule, and the likelihood given by the binomial distribution above. In order to do so, we also need to assign a prior probability distribution to the parameter \(\theta\). One natural choice, which was made early on by Bayes himself and by Laplace, is to put a uniform prior over the parameter \(\theta\). Bayes’ own rationale for this choice was that, if one works out the probability of each possible number of whites in the sample on the basis of the prior alone, before any data are observed, all those probabilities come out equal. Laplace had a different justification, based on the Principle of Indifference. This principle states that if you don’t have any reason to favor one hypothesis over another, you should assign them all equal probabilities.

With the choice of uniform prior, the posterior probability and predictive distribution can be calculated. It turns out that the probability that the next ball will be white, given that \(n_w\) of N draws were white, is given by

\[ p(\text{the next ball is white} \mid n_w) = \frac{n_w + 1}{N + 2}. \]

This is Laplace’s famous “rule of succession” (1814). Suppose that, on the basis of observing 90 white balls out of 100, we calculate by the rule of succession that the probability of the next ball being white is \(91/102 \approx 0.89\). It is quite conceivable that the next ball might be black. Even in the case where all 100 balls have been white, so that the probability of the next ball being white is \(101/102 \approx 0.99\), there is still a small probability that the next ball is not white. What the probabilistic reasoning supplies then is not an argument to the conclusion that the next ball will be a certain color, but an argument to the conclusion that certain future observations are very likely given what has been observed in the past.
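The inverse calculation can be sketched as follows (a minimal illustration of my own; it discretizes the uniform prior on a grid rather than integrating analytically):

```python
from math import comb

def predictive_white(n_w, N, grid_size=10001):
    """Probability that the next ball is white, given n_w whites in N draws,
    computed from a (discretized) uniform prior over theta and Bayes' rule."""
    thetas = [i / (grid_size - 1) for i in range(grid_size)]
    likelihoods = [comb(N, n_w) * t**n_w * (1 - t)**(N - n_w) for t in thetas]
    evidence = sum(likelihoods) / grid_size                      # denominator of Bayes' rule
    posterior = [l / grid_size / evidence for l in likelihoods]  # posterior over the grid
    return sum(p * t for p, t in zip(posterior, thetas))         # predictive probability of white

print(round(predictive_white(90, 100), 3))    # close to 91/102
print(round(predictive_white(100, 100), 3))   # close to 101/102
```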

Overall, the Bayes-Laplace argument in the urn case provides an example of how probabilistic reasoning can take us from evidence about observations in the past to a prediction for how likely certain future observations are. The question is what kind of solution, if any, this type of calculation provides to the problem of induction. At first sight, since it is just a mathematical calculation, it looks as though it does indeed provide an a priori argument from the premises of an inductive inference to the proposition that a certain conclusion is probable.

However, in order to establish this definitively, one would need to argue that all the components and assumptions of the argument are a priori and this requires further examination of at least three important issues.

First, the Bayes-Laplace argument relies on the rules of the probability calculus. What is the status of these rules? Does following them amount to a priori reasoning? The answer to this depends in part on how probability itself is interpreted. Broadly speaking, there are prominent interpretations of probability according to which the rules plausibly have a priori status and could form the basis of a demonstrative argument. These include the classical interpretation originally developed by Laplace (1814), the logical interpretation (Keynes 1921; Johnson 1921; Jeffreys 1939; Carnap 1950; Cox 1946, 1961), and the subjectivist interpretation of Ramsey (1926), Savage (1954), and de Finetti (1964). Attempts to argue for a probabilistic a priori solution to the problem of induction have been primarily associated with these interpretations.

Secondly, in the case of the urn, the Bayes-Laplace argument is based on a particular probabilistic model—the binomial model. This involves the assumption that there is a parameter describing an unknown proportion \(\theta\) of balls in the urn, and that the data amounts to independent draws from a distribution over that parameter. What is the basis of these assumptions? Do they generalize to other cases beyond the actual urn case—i.e., can we see observations in general as analogous to draws from an “Urn of Nature”? There has been a persistent worry that these types of assumptions, while reasonable when applied to the case of drawing balls from an urn, will not hold for other cases of inductive inference. Thus, the probabilistic solution to the problem of induction might be of relatively limited scope. At the least, there are some assumptions going into the choice of model here that need to be made explicit. Arguably the choice of model introduces empirical assumptions, which would mean that the probabilistic solution is not an a priori one.

Thirdly, the Bayes-Laplace argument relies on a particular choice of prior probability distribution. What is the status of this assignment, and can it be based on a priori principles? Historically, the Bayes-Laplace choice of a uniform prior, as well as the whole concept of classical probability, relied on the Principle of Indifference. This principle has been regarded by many as an a priori principle. However, it has also been subjected to much criticism on the grounds that it can give rise to inconsistent probability assignments (Bertrand 1888; Borel 1909; Keynes 1921). Such inconsistencies are produced by there being more than one way to carve up the space of alternatives, and different choices give rise to conflicting probability assignments. One attempt to rescue the Principle of Indifference has been to appeal to explanationism, and argue that the principle should be applied only to the carving of the space at “the most explanatorily basic level”, where this level is identified according to an a priori notion of explanatory priority (Huemer 2009).
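A toy numerical illustration of the carving problem (my own example, much simpler than Bertrand’s geometric paradoxes): being “indifferent” over a proportion \(\theta\) and being “indifferent” over its square assign different probabilities to the very same event.

```python
import random

random.seed(0)
n = 1_000_000

# Carving 1: indifference over the proportion theta itself (theta uniform on [0, 1]).
theta_uniform = [random.random() for _ in range(n)]

# Carving 2: indifference over theta squared (theta**2 uniform on [0, 1]).
theta_from_square = [random.random() ** 0.5 for _ in range(n)]

below_half = lambda t: t < 0.5  # the same event under both carvings: "theta is below one half"
print(sum(map(below_half, theta_uniform)) / n)      # about 0.50
print(sum(map(below_half, theta_from_square)) / n)  # about 0.25: a conflicting assignment
```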

The quest for an a priori argument for the assignment of the prior has been largely abandoned. For many, the subjectivist foundations developed by Ramsey, de Finetti and Savage provide a more satisfactory basis for understanding probability. From this point of view, it is a mistake to try to introduce any further a priori constraints on the probabilities beyond those dictated by the probability rules themselves. Rather the assignment of priors may reflect personal opinions or background knowledge, and no prior is a priori an unreasonable choice.

So far, we have considered probabilistic arguments which place probabilities over hypotheses in a hypothesis space as well as observations. There is also a tradition of attempts to determine what probability distributions we should have, given certain observations, from the starting point of a joint probability distribution over all the observable variables. One may then postulate axioms directly on this distribution over observables, and examine the consequences for the predictive distribution. Much of the development of inductive logic, including the influential programme by Carnap, proceeded in this manner (Carnap 1950, 1952).

This approach helps to clarify the role of the assumptions behind probabilistic models. One assumption that one can make about the observations is that they are “exchangeable”. This means that the joint distribution of the random variables is invariant under permutations. Informally, this means that the order of the observations does not affect the probability. For instance, in the urn case, this would mean that drawing first a white ball and then a black ball is just as probable as first drawing a black and then a white. De Finetti proved a general representation theorem that if the joint probability distribution of an infinite sequence of random variables is assumed to be exchangeable, then it can be written as a mixture of distribution functions from each of which the data behave as if they are independent random draws (de Finetti 1964). In the case of the urn example, the theorem shows that it is as if the data are independent random draws from a binomial distribution over a parameter \(\theta\), which itself has a prior probability distribution.

The assumption of exchangeability may be seen as a natural formalization of Hume’s assumption that the past resembles the future. This is intuitive because assuming exchangeability means thinking that the order of observations, both past and future, does not matter to the probability assignments.
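The order-invariance that exchangeability demands can be verified directly in a small sketch (my own; a discretized uniform prior stands in for de Finetti’s general mixing measure):

```python
def sequence_prob(sequence, grid_size=10001):
    """Probability of an ordered sequence of draws ('W' or 'B') under a mixture of
    i.i.d. draws with proportion theta, mixed by a uniform prior over theta."""
    thetas = [i / (grid_size - 1) for i in range(grid_size)]
    weight = 1.0 / grid_size  # uniform mixing weight for each grid point
    total = 0.0
    for t in thetas:
        p = 1.0
        for draw in sequence:
            p *= t if draw == "W" else (1 - t)
        total += weight * p
    return total

print(round(sequence_prob("WB"), 4), round(sequence_prob("BW"), 4))    # equal: order is irrelevant
print(round(sequence_prob("WWB"), 4), round(sequence_prob("WBW"), 4))  # equal again
```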

However, the development of the programme of inductive logic revealed that many generalizations are possible. For example, Johnson proposed to assume an axiom he called the “sufficientness postulate”. This states that outcomes can be of a number of different types, and that the conditional probability that the next outcome is of type i depends only on the number of previous trials and the number of previous outcomes of type i (Johnson 1932). Assuming the sufficientness postulate for three or more types gives rise to a general predictive distribution corresponding to Carnap’s “continuum of inductive methods” (Carnap 1952). This predictive distribution takes the form:

\[ p(\text{the next outcome is of type } i \mid n_i \text{ of the } n \text{ previous outcomes were of type } i) = \frac{n_i + k}{n + kt}, \]

where \(t\) is the number of types and k is some positive number. This reduces to Laplace’s rule of succession when \(t=2\) and \(k=1\).
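A short sketch of this generalized rule of succession (the particular counts below are invented for illustration):

```python
def carnap_predictive(n_i, n, t, k):
    """Probability that the next outcome is of type i, given that n_i of the n previous
    outcomes were of type i, with t possible types and parameter k > 0."""
    return (n_i + k) / (n + k * t)

# With t = 2 types and k = 1 this reduces to Laplace's rule of succession:
print(carnap_predictive(90, 100, t=2, k=1), 91 / 102)

# Varying k sweeps out the continuum: larger k stays closer to the 'indifferent'
# value 1/t, while smaller k follows the observed frequency more closely.
for k in (0.5, 1, 2, 5):
    print(k, round(carnap_predictive(6, 10, t=3, k=k), 3))
```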

Generalizations of the notion of exchangeability, such as “partial exchangeability” and “Markov exchangeability”, have been explored, and these may be thought of as forms of symmetry assumption (Zabell 1988; Skyrms 2012). As less restrictive axioms on the probabilities for observables are assumed, the result is that there is no longer a unique result for the probability of a prediction, but rather a whole class of possible probabilities, mapped out by a generalized rule of succession such as the above. Therefore, in this tradition, as in the Bayes-Laplace approach, we have moved away from producing an argument which produces a unique a priori probabilistic answer to Hume’s problem.

One might think then that the assignment of the prior, or the relevant corresponding postulates on the observable probability distribution, is precisely where empirical assumptions enter into inductive inferences. The probabilistic calculations are empirical arguments, rather than a priori ones. If this is correct, then the probabilistic framework has not in the end provided an a priori solution to the problem of induction, but it has rather allowed us to clarify what could be meant by Hume’s claim that inductive inferences rely on the Uniformity Principle.

Some think that although the problem of induction is not solved, there is in some sense a partial solution, which has been called a “logical solution”. Howson, for example, argues that “ Inductive reasoning is justified to the extent that it is sound, given appropriate premises ” (Howson 2000: 239, his emphasis). According to this view, there is no getting away from an empirical premise for inductive inferences, but we might still think of Bayesian conditioning as functioning like a kind of logic or “consistency constraint” which “generates predictions from the assumptions and observations together” (Romeijn 2004: 360). Once we have an empirical assumption, instantiated in the prior probability, and the observations, Bayesian conditioning tells us what the resulting predictive probability distribution should be.

The idea of a partial solution also arises in the context of the learning theory that grounds contemporary machine learning. Machine learning is a field in computer science concerned with algorithms that learn from experience. Examples are algorithms which can be trained to recognise or classify patterns in data. Learning theory concerns itself with finding mathematical theorems which guarantee the performance of algorithms which are in practical use. In this domain, there is a well-known finding that learning algorithms are only effective if they have ‘inductive bias’ — that is, if they make some a priori assumptions about the domain they are employed upon (Mitchell 1997).

The idea is also given formal expression in the so-called ‘No-Free-Lunch theorems’ (Wolpert 1992, 1996, 1997). These can be interpreted as versions of the argument in Hume’s first fork: they establish that there is no contradiction in an algorithm’s failing to perform well, since there are a priori possible situations in which it does not (Sterkenburg and Grünwald 2021: 9992). Given Hume’s premise P3, this rules out a demonstrative argument for its good performance.

Premise P3 can perhaps be challenged on the grounds that a priori justifications can also be given for contingent propositions. Even though an inductive inference can fail in some possible situations, it could still be reasonable to form an expectation of reliability if we spread our credence equally over all the possibilities and have reason to think (or at least no reason to doubt) that the cases where inductive inference is unreliable require a ‘very specific arrangement of things’ and thus form a small fraction of the total space of possibilities (White 2015). The No-Free-Lunch theorems make difficulties for this approach since they show that if we put a uniform distribution over all logically possible sequences of future events, any learning algorithm is expected to have a generalisation error of 1/2, and hence to do no better than guessing at random (Schurz 2021b).
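The averaging argument behind these results can be checked by brute force in a toy setting (my own setup, far simpler than Wolpert’s formal framework): if every logically possible continuation of the data is weighted equally, any prediction rule, inductive or otherwise, is right about half the time on average.

```python
from itertools import product

past = (1, 1, 1, 1)   # observed binary outcomes (e.g., 1 = "the bread nourished")
m = 3                 # number of future outcomes to predict

def inductive_rule(past, m):
    """Predict that the future resembles the past: repeat the majority outcome."""
    majority = 1 if 2 * sum(past) >= len(past) else 0
    return (majority,) * m

def expected_accuracy(rule, past, m):
    """Average fraction of correct future predictions, weighting every logically
    possible future sequence equally (a uniform distribution over futures)."""
    predictions = rule(past, m)
    futures = list(product((0, 1), repeat=m))
    scores = [sum(p == f for p, f in zip(predictions, future)) / m for future in futures]
    return sum(scores) / len(futures)

print(expected_accuracy(inductive_rule, past, m))            # 0.5
print(expected_accuracy(lambda past, m: (0,) * m, past, m))  # 0.5 for the contrarian rule as well
```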

The No-Free-Lunch theorems may be seen as fundamental limitations on justifying learning algorithms when these algorithms are seen as ‘purely data-driven’, that is, as mappings from possible data to conclusions. However, learning algorithms may also be conceived as functions not only of input data, but also of a particular model (Sterkenburg and Grünwald 2021). For example, the Bayesian ‘algorithm’ gives a universal recipe for taking a particular model and prior and updating on the data. A number of theorems in learning theory provide general guarantees for the performance of such recipes. For instance, there are theorems which guarantee convergence of the Bayesian algorithm (Ghosal, Ghosh and van der Vaart 2000, Ghosal, Lember and van der Vaart 2008). In each instantiation, this convergence is relative to a particular specific prior. Thus, although the considerations first raised by Hume, and later instantiated in the No-Free-Lunch theorems, preclude any universal model-independent justification for learning algorithms, they do not rule out partial justifications in the form of such general a priori ‘model-relative’ learning guarantees (Sterkenburg and Grünwald 2021).

An alternative attempt to use probabilistic reasoning to produce an a priori justification for inductive inferences is the so-called “combinatorial” solution. This was first put forward by Donald C. Williams (1947) and later developed by David Stove (1986).

Like the Bayes-Laplace argument, the solution relies heavily on the idea that straightforward a priori calculations can be done in a “direct inference” from population to sample. As we have seen, given a certain population frequency, the probability of getting different frequencies in a sample can be calculated straightforwardly based on the rules of the probability calculus. The Bayes-Laplace argument relied on inverting the probability distribution using Bayes’ rule to get from the sampling distribution to the posterior distribution. Williams instead proposes that the inverse inference may be based on a certain logical syllogism: the proportional (or statistical) syllogism.

The proportional, or statistical syllogism, is the following:

  • Of all the things that are M , \(m/n\) are P .
  • a is an M .

Therefore, a is P , with probability \(m/n\).

For example, if 90% of rabbits in a population are white and we observe a rabbit a , then the proportional syllogism says that we infer that a is white with a probability of 90%. Williams argues that the proportional syllogism is a non-deductive logical syllogism, which effectively interpolates between the syllogism for entailment

  • All M s are P .
  • a is an M .

Therefore, a is P .

And the syllogism for contradiction:

  • No M s are P .
  • a is an M .

Therefore, a is not P .

This syllogism can be combined with an observation about the behavior of increasingly large samples. From calculations of the sampling distribution, it can be shown that as the sample size increases, the probability that the sample frequency is in a range which closely approximates the population frequency also increases. In fact, Bernoulli’s law of large numbers states that the probability that the sample frequency approximates the population frequency tends to one as the sample size goes to infinity. Williams argues that such results support a “general over-all premise, common to all inductions, that samples ‘match’ their populations” (Williams 1947: 78).
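A quick simulation of the matching claim (parameters invented for illustration) shows the sense in which it holds: as samples grow, the proportion of samples whose frequency lies close to the population frequency approaches one.

```python
import random

random.seed(1)

def match_rate(pop_freq, sample_size, epsilon=0.05, trials=5_000):
    """Fraction of simulated samples whose sample frequency lies within
    epsilon of the population frequency pop_freq."""
    matches = 0
    for _ in range(trials):
        sample_freq = sum(random.random() < pop_freq for _ in range(sample_size)) / sample_size
        matches += abs(sample_freq - pop_freq) <= epsilon
    return matches / trials

for n in (10, 100, 1000):
    print(n, match_rate(0.6, n))   # the match rate climbs toward 1 as the sample size grows
```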

We can then apply the proportional syllogism to samples from a population, to get the following argument:

  • Most samples match their population.
  • S is a sample.

Therefore, S matches its population, with high probability.

This is an instance of the proportional syllogism, and it uses the general result about samples matching populations as the first major premise.

The next step is to argue that if we observe that the sample contains a proportion of \(m/n\) F s, then we can conclude that since this sample with high probability matches its population, the population, with high probability, has a population frequency that approximates the sample frequency \(m/n\). Both Williams and Stove claim that this amounts to a logical a priori solution to the problem of induction.

A number of authors have expressed the view that the Williams-Stove argument is only valid if the sample S is drawn randomly from the population of possible samples—i.e., that any sample is as likely to be drawn as any other (Brown 1987; Will 1948; Giaquinto 1987). Sometimes this is presented as an objection to the application of the proportional syllogism. The claim is that the proportional syllogism is only valid if a is drawn randomly from the population of M s. However, the response has been that there is no need to know that the sample is randomly drawn in order to apply the syllogism (Maher 1996; Campbell 2001; Campbell & Franklin 2004). Certainly if you have reason to think that your sampling procedure is more likely to draw certain individuals than others—for example, if you know that you are in a certain location where there are more of a certain type—then you should not apply the proportional syllogism. But if you have no such reasons, the defenders claim, it is quite rational to apply it. Certainly it is always possible that you draw an unrepresentative sample—meaning one of the few samples in which the sample frequency does not match the population frequency—but this is why the conclusion is only probable and not certain.

The more problematic step in the argument is the final step, which takes us from the claim that samples match their populations with high probability to the claim that having seen a particular sample frequency, the population from which the sample is drawn has frequency close to the sample frequency with high probability. The problem here is a subtle shift in what is meant by “high probability”, which has formed the basis of a common misreading of Bernoulli’s theorem. Hacking (1975: 156–59) puts the point in the following terms. Bernoulli’s theorem licenses the claim that much more often than not, a small interval around the sample frequency will include the true population frequency. In other words, it is highly probable in the sense of “usually right” to say that the sample matches its population. But this does not imply that the proposition that a small interval around the sample will contain the true population frequency is highly probable in the sense of “credible on each occasion of use”. This would mean that for any given sample, it is highly credible that the sample matches its population. It is quite compatible with the claim that it is “usually right” that the sample matches its population to say that there are some samples which do not match their populations at all. Thus one cannot conclude from Bernoulli’s theorem that for any given sample frequency, we should assign high probability to the proposition that a small interval around the sample frequency will contain the true population frequency. But this is exactly the slide that Williams makes in the final step of his argument. Maher (1996) argues in a similar fashion that the last step of the Williams-Stove argument is fallacious. In fact, if one wants to draw conclusions about the probability of the population frequency given the sample frequency, the proper way to do so is by using the Bayesian method described in the previous section. But, as we saw there, this requires the assignment of prior probabilities, and this explains why many people have thought that the combinatorial solution somehow illicitly presupposed an assumption like the principle of indifference. The Williams-Stove argument does not in fact give us an alternative way of inverting the probabilities which somehow bypasses all the issues that Bayesians have faced.
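The prior-dependence of that final, inverse step can be made vivid with a small sketch (my own illustration; the two priors are arbitrary choices): the same observed sample yields very different probabilities that the population frequency lies near the sample frequency, depending on which prior is assumed.

```python
def prob_population_near_sample(n_w, N, prior, window=0.05, grid_size=10001):
    """Posterior probability that the population frequency lies within `window` of the
    observed sample frequency n_w / N, given a prior density over that frequency."""
    thetas = [i / (grid_size - 1) for i in range(grid_size)]
    weights = [prior(t) * t**n_w * (1 - t)**(N - n_w) for t in thetas]  # prior times likelihood
    sample_freq = n_w / N
    near = sum(w for w, t in zip(weights, thetas) if abs(t - sample_freq) <= window)
    return near / sum(weights)

uniform_prior = lambda t: 1.0           # Laplace's indifferent prior
skewed_prior = lambda t: (1 - t) ** 30  # a prior heavily favouring low population frequencies

print(round(prob_population_near_sample(9, 10, uniform_prior), 3))  # substantial
print(round(prob_population_near_sample(9, 10, skewed_prior), 3))   # close to zero
```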

4. Tackling the Second Horn of Hume’s Dilemma

So far we have considered ways in which the first horn of Hume’s dilemma might be tackled. But it is of course also possible to take on the second horn instead.

One may argue that a probable argument would not, despite what Hume says, be circular in a problematic way (we consider responses of this kind in section 4.1 ). Or, one might attempt to argue that probable arguments are not circular at all ( section 4.2 ).

One way to tackle the second horn of Hume’s dilemma is to reject premise P6 , which rules out circular arguments. Some have argued that certain kinds of circular arguments would provide an acceptable justification for the inductive inference. Since the justification would then itself be an inductive one, this approach is often referred to as an “inductive justification of induction”.

First, we should examine how exactly the Humean circularity supposedly arises. Take the simple case of enumerative inductive inference that has the following pattern ( X ):

Most observed F s have been G s

Therefore: Most F s are G s.

Hume claims that such arguments presuppose the Uniformity Principle (UP). According to premises P7 and P8 , this supposition also needs to be supported by an argument in order that the inductive inference be justified. A natural idea is that we can argue for the Uniformity Principle on the grounds that “it works”. We know that it works, because past instances of arguments which relied upon it were found to be successful. This alone however is not sufficient unless we have reason to think that such arguments will also be successful in the future. That claim must itself be supported by an inductive argument ( S ):

Most arguments of form X that rely on UP have succeeded in the past.

Therefore, most arguments of form X that rely on UP succeed.

But this argument itself depends on the UP, which is the very supposition which we were trying to justify.

As we have seen in section 2 , some reject Hume’s claim that all inductive inferences presuppose the UP. However, the argument that basing the justification of the inductive inference on a probable argument would result in circularity need not rely on this claim. The circularity concern can be framed more generally. If argument S relies on something which is already presupposed in inference X , then argument S cannot be used to justify inference X . The question though is what precisely the something is.

Some authors have argued that in fact S does not rely on any premise or even presupposition that would require us to already know the conclusion of X . S is then not a “premise circular” argument. Rather, they claim, it is “rule-circular”—it relies on a rule of inference in order to reach the conclusion that that very rule is reliable. Suppose we adopt the rule R which says that when it is observed that most F s are G s, we should infer that most F s are G s. Then inference X relies on rule R . We want to show that rule R is reliable. We could appeal to the fact that R worked in the past, and so, by an inductive argument, it will also work in the future. Call this argument S *:

Most inferences following rule R have been successful

Therefore, most inferences following R are successful.

Since this argument itself uses rule R , using it to establish that R is reliable is rule-circular.

Some authors have then argued that although premise-circularity is vicious, rule-circularity is not (Van Cleve 1984; Papineau 1992). One reason for thinking rule-circularity is not vicious would be if it is not necessary to know or even justifiably believe that rule R is reliable in order to move to a justified conclusion using the rule. This is a claim made by externalists about justification (Van Cleve 1984). They say that as long as R is in fact reliable, one can form a justified belief in the conclusion of an argument relying on R , as long as one has justified belief in the premises.

If one is not persuaded by the externalist claim, one might attempt to argue that rule circularity is benign in a different fashion. For example, the requirement that a rule be shown to be reliable without any rule-circularity might appear unreasonable when the rule is of a very fundamental nature. As Lange puts it:

It might be suggested that although a circular argument is ordinarily unable to justify its conclusion, a circular argument is acceptable in the case of justifying a fundamental form of reasoning. After all, there is nowhere more basic to turn, so all that we can reasonably demand of a fundamental form of reasoning is that it endorse itself. (Lange 2011: 56)

Proponents of this point of view point out that even deductive inference cannot be justified deductively. Consider Lewis Carroll’s dialogue between Achilles and the Tortoise (Carroll 1895). Achilles is arguing with a Tortoise who refuses to perform modus ponens. The Tortoise accepts the premise that p , and the premise that p implies q , but he will not accept q . How can Achilles convince him? He manages to persuade him to accept another premise, namely “if p and p implies q , then q ”. But the Tortoise is still not prepared to infer to q . Achilles goes on adding more premises of the same kind, but to no avail. It appears then that modus ponens cannot be justified to someone who is not already prepared to use that rule.

It might seem odd if premise circularity were vicious, and rule circularity were not, given that there appears to be an easy interchange between rules and premises. After all, a rule can always, as in the Lewis Carroll story, be added as a premise to the argument. But what the Carroll story also appears to indicate is that there is indeed a fundamental difference between being prepared to accept a premise stating a rule (the Tortoise is happy to do this), and being prepared to use that rule (this is what the Tortoise refuses to do).

Suppose that we grant that an inductive argument such as S (or S *) can support an inductive inference X without vicious circularity. Still, a possible objection is that the argument simply does not provide a full justification of X . After all, less sane inference rules such as counterinduction can support themselves in a similar fashion. The counterinductive rule is CI:

Most observed A s are B s.

Therefore, it is not the case that most A s are B s.

Consider then the following argument CI*:

Most CI arguments have been unsuccessful

Therefore, it is not the case that most CI arguments are unsuccessful, i.e., many CI arguments are successful.

This argument therefore establishes the reliability of CI in a rule-circular fashion (see Salmon 1963).

Argument S can be used to support inference X , but only for someone who is already prepared to infer inductively by using S . It cannot convince a skeptic who is not prepared to rely upon that rule in the first place. One might think then that the argument is simply not achieving very much.

The response to these concerns is that, as Papineau puts it, the argument is “not supposed to do very much” (Papineau 1992: 18). The fact that a counterinductivist counterpart of the argument exists is true, but irrelevant. It is conceded that the argument cannot persuade either a counterinductivist, or a skeptic. Nonetheless, proponents of the inductive justification maintain that there is still some added value in showing that inductive inferences are reliable, even when we already accept that there is nothing problematic about them. The inductive justification of induction provides a kind of important consistency check on our existing beliefs.

It is possible to go even further in an attempt to dismantle the Humean circularity. Maybe inductive inferences do not even have a rule in common. What if every inductive inference is essentially unique? This can be seen as rejecting Hume’s premise P5 . Okasha, for example, argues that Hume’s circularity problem can be evaded if there are “no rules” behind induction (Okasha 2005a,b). Norton puts forward the similar idea that all inductive inferences are material, and have nothing formal in common (Norton 2003, 2010, 2021).

Proponents of such views have attacked Hume’s claim that there is a UP on which all inductive inferences are based. There have long been complaints about the vagueness of the Uniformity Principle (Salmon 1953). The future only resembles the past in some respects, but not others. Suppose that on all my birthdays so far, I have been under 40 years old. This does not give me a reason to expect that I will be under 40 years old on my next birthday. There seems then to be a major lacuna in Hume’s account. He might have explained or described how we draw an inductive inference, on the assumption that it is one we can draw. But he leaves untouched the question of how we distinguish between cases where we extrapolate a regularity legitimately, regarding it as a law, and cases where we do not.

Nelson Goodman is often seen as having made this point in a particularly vivid form with his “new riddle of induction” (Goodman 1955: 59–83). Suppose we define a predicate “grue” in the following way. An object is “grue” when it is green if observed before time t and blue otherwise. Goodman considers a thought experiment in which we observe a bunch of green emeralds before time t . We could describe our results by saying all the observed emeralds are green. Using a simple enumerative inductive schema, we could infer from the result that all observed emeralds are green, that all emeralds are green. But equally, we could describe the same results by saying that all observed emeralds are grue. Then using the same schema, we could infer from the result that all observed emeralds are grue, that all emeralds are grue. In the first case, we expect an emerald observed after time t to be green, whereas in the second, we expect it to be blue. Thus the two predictions are incompatible. Goodman claims that what Hume omitted to do was to give any explanation for why we project predicates like “green”, but not predicates like “grue”. This is the “new riddle”, which is often taken to be a further problem of induction that Hume did not address.
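A minimal sketch of the riddle (my own construction, with an arbitrary cutoff time standing in for Goodman’s t): the same observations make both generalizations true of everything observed so far, yet the two licence incompatible predictions.

```python
T = 100  # arbitrary stand-in for Goodman's cutoff time t

def is_grue(colour, observation_time):
    """Grue: green if observed before time T, and blue otherwise."""
    return colour == "green" if observation_time < T else colour == "blue"

# Emeralds observed so far (all before T) are green ...
observed = [("green", time) for time in range(0, 90, 10)]
print(all(colour == "green" for colour, time in observed))      # True: supports "all emeralds are green"
print(all(is_grue(colour, time) for colour, time in observed))  # True: equally supports "all emeralds are grue"

# But for an emerald examined after T the generalizations disagree:
# "all emeralds are green" predicts it is green, while "all emeralds are grue"
# predicts it is blue, since only a blue emerald counts as grue after T.
print(is_grue("green", 120), is_grue("blue", 120))              # False True
```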

One moral that could be taken from Goodman is that there is not one general Uniformity Principle that all probable arguments rely upon (Sober 1988; Norton 2003; Okasha 2001, 2005a,b; Jackson 2019). Rather, each inductive inference relies on some more specific empirical presupposition. A particular inductive inference depends on some specific way in which the future resembles the past. It can then be justified by another inductive inference which depends on some quite different empirical claim. This will in turn need to be justified, by yet another inductive inference. The nature of Hume’s problem in the second horn is thus transformed. There is no circularity. Rather there is a regress of inductive justifications, each relying on their own empirical presuppositions (Sober 1988; Norton 2003; Okasha 2001, 2005a,b).

One way to put this point is to say that Hume’s argument rests on a quantifier shift fallacy (Sober 1988; Okasha 2005a). Hume says that there exists a general presupposition for all inductive inferences, whereas he should have said that for each inductive inference, there is some presupposition. Different inductive inferences then rest on different empirical presuppositions, and the problem of circularity is evaded.
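
Schematically (with \(I\) ranging over inductive inferences, \(P\) over empirical presuppositions, and \(\mathit{Presup}(I, P)\) a shorthand, introduced here only for illustration, for “inference \(I\) presupposes \(P\)”), Hume is read as asserting the left-hand claim when only the right-hand claim is warranted:

\[
\exists P\, \forall I\; \mathit{Presup}(I, P) \qquad\text{versus}\qquad \forall I\, \exists P\; \mathit{Presup}(I, P).
\]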

What, then, is the consequence of supposing that Hume’s problem should indeed have been posed as a regress, rather than a circularity? Here different opinions are possible. On the one hand, one might think that a regress still leads to a skeptical conclusion (Schurz and Thorn 2020). So although the exact form in which Hume stated his problem was not correct, the conclusion is not substantially different (Sober 1988). Another possibility is that the transformation mitigates or even removes the skeptical problem. For example, Norton argues that the upshot is a dissolution of the problem of induction, since the regress of justifications benignly terminates (Norton 2003). And Okasha more mildly suggests that even if the regress is infinite, “Perhaps infinite regresses are less bad than vicious circles after all” (Okasha 2005b: 253).

Such a dissolution of Hume’s circularity does not depend only on arguing that the UP should be replaced by empirical presuppositions which are specific to each inductive inference. It is also necessary to establish that inductive inferences share no common rules; otherwise there will still be at least some rule-circularity. Okasha suggests that the Bayesian model of belief-updating is an illustration of how induction can be characterized in a rule-free way, but this is problematic, since in this model all inductive inferences still share the common rule of Bayesian conditionalisation. Norton’s material theory of induction postulates a rule-free characterization of induction, but it is not clear whether it really can avoid any role for general rules (Achinstein 2010; Kelly 2010; Worrall 2010).

5. Alternative Conceptions of Justification

Hume is usually read as delivering a negative verdict on the possibility of justifying inference I, via a premise such as P8, though as we have seen in section 2, some have questioned whether Hume is best interpreted as drawing a conclusion about the justification of inference I at all. In this section we examine approaches which question in different ways whether premise P8 really does give a valid necessary condition for justification of inference I, and which propose various alternative conceptions of justification.

One approach has been to turn to general reflection on what is even needed for justification of an inference in the first place. For example, Wittgenstein raised doubts over whether it is even meaningful to ask for the grounds for inductive inferences.

If anyone said that information about the past could not convince him that something would happen in the future, I should not understand him. One might ask him: what do you expect to be told, then? What sort of information do you call a ground for such a belief? … If these are not grounds, then what are grounds?—If you say these are not grounds, then you must surely be able to state what must be the case for us to have the right to say that there are grounds for our assumption…. (Wittgenstein 1953: 481)

One might not, for instance, think that there even needs to be a chain of reasoning in which each step or presupposition is supported by an argument. Wittgenstein took it that there are some principles so fundamental that they do not require support from any further argument. They are the “hinges” on which enquiry turns.

Out of Wittgenstein’s ideas has developed a general notion of “entitlement”, which is a kind of rational warrant to hold certain propositions which does not come with the same requirements as “justification”. Entitlement provides epistemic rights to hold a proposition, without responsibilities to base the belief in it on an argument. Crispin Wright (2004) has argued that there are certain principles, including the Uniformity Principle, that we are entitled in this sense to hold.

Some philosophers have set themselves the task of determining a set or sets of postulates which form a plausible basis for inductive inferences. Bertrand Russell, for example, argued that five postulates lay at the root of inductive reasoning (Russell 1948). Arthur Burks, on the other hand, proposed that the set of postulates is not unique, but there may be multiple sets of postulates corresponding to different inductive methods (Burks 1953, 1955).

The main objection to all these views is that they do not really solve the problem of induction in a way that adequately secures the pillars on which inductive inference stands. As Salmon puts it, “admission of unjustified and unjustifiable postulates to deal with the problem is tantamount to making scientific method a matter of faith” (Salmon 1966: 48).

Rather than allowing undefended empirical postulates to give normative support to an inductive inference, one could instead argue for a completely different conception of what is involved in justification. Like Wittgenstein, later ordinary language philosophers, notably P.F. Strawson, also questioned what exactly it means to ask for a justification of inductive inferences (Strawson 1952). This has become known as the “Ordinary language dissolution” of the problem of induction.

Strawson points out that it could be meaningful to ask for a deductive justification of inductive inferences. But it is not clear that this is helpful since this is effectively “a demand that induction shall be shown to be really a kind of deduction” (Strawson 1952: 230). Rather, Strawson says, when we ask about whether a particular inductive inference is justified, we are typically judging whether it conforms to our usual inductive standards. Suppose, he says, someone has formed the belief by inductive inference that All f ’s are g . Strawson says that if that person is asked for their grounds or reasons for holding that belief,

I think it would be felt to be a satisfactory answer if he replied: “Well, in all my wide and varied experience I’ve come across innumerable cases of f and never a case of f which wasn’t a case of g ”. In saying this, he is clearly claiming to have inductive support, inductive evidence, of a certain kind, for his belief. (Strawson 1952)

That is just because inductive support, as it is usually understood, simply consists of having observed many positive instances in a wide variety of conditions.

In effect, this approach denies that producing a chain of reasoning is a necessary condition for justification. Rather, an inductive inference is justified if it conforms to the usual standards of inductive justification. But, is there more to it? Might we not ask what reason we have to rely on those inductive standards?

It surely makes sense to ask whether a particular inductive inference is justified. But the answer to that is fairly straightforward. Sometimes people have enough evidence for their conclusions and sometimes they do not. Does it also make sense to ask whether inductive procedures in general are justified? Strawson draws an analogy with the question of whether a particular act is legal. We may answer such a question, he says, by referring to the law of the land.

But it makes no sense to inquire in general whether the law of the land, the legal system as a whole, is or is not legal. For to what legal standards are we appealing? (Strawson 1952: 257)

According to Strawson,

It is an analytic proposition that it is reasonable to have a degree of belief in a statement which is proportional to the strength of the evidence in its favour; and it is an analytic proposition, though not a proposition of mathematics, that, other things being equal, the evidence for a generalisation is strong in proportion as the number of favourable instances, and the variety of circumstances in which they have been found, is great. So to ask whether it is reasonable to place reliance on inductive procedures is like asking whether it is reasonable to proportion the degree of one’s convictions to the strength of the evidence. Doing this is what “being reasonable” means in such a context. (Strawson 1952: 256–57)

Thus, according to this point of view, there is no further question to ask about whether it is reasonable to rely on inductive inferences.

The ordinary language philosophers do not explicitly argue against Hume’s premise P8 . But effectively what they are doing is offering a whole different story about what it would mean to be justified in believing the conclusion of inductive inferences. What is needed is just conformity to inductive standards, and there is no real meaning to asking for any further justification for those.

The main objection to this view is that conformity to the usual standards is insufficient to provide the needed justification. What we need to know is whether belief in the conclusion of an inductive inference is “epistemically reasonable or justified in the sense that …there is reason to think that it is likely to be true” (BonJour 1998: 198). The problem Hume has raised is whether, despite the fact that inductive inferences have tended to produce true conclusions in the past, we have reason to think the conclusion of an inductive inference we now make is likely to be true. Arguably, establishing that an inductive inference is rational in the sense that it follows inductive standards is not sufficient to establish that its conclusion is likely to be true. In fact Strawson allows that there is a question about whether “induction will continue to be successful”, which is distinct from the question of whether induction is rational. This question he does take to hinge on a “contingent, factual matter” (Strawson 1952: 262). But if it is this question that concerned Hume, it is no answer to establish that induction is rational, unless that claim is understood to involve or imply that an inductive inference carried out according to rational standards is likely to have a true conclusion.

Another solution based on an alternative criterion for justification is the “pragmatic” approach initiated by Reichenbach (1938 [2006]). Reichenbach did think Hume’s argument unassailable, but nonetheless he attempted to provide a weaker kind of justification for induction. In order to emphasize the difference from the kind of justification Hume sought, some have given it a different term and refer to Reichenbach’s solution as a “vindication”, rather than a justification of induction (Feigl 1950; Salmon 1963).

Reichenbach argued that it was not necessary for the justification of inductive inference to show that its conclusion is true. Rather “the proof of the truth of the conclusion is only a sufficient condition for the justification of induction, not a necessary condition” (Reichenbach 2006: 348). If it could be shown, he says, that inductive inference is a necessary condition of success, then even if we do not know that it will succeed, we still have some reason to follow it. Reichenbach makes a comparison to the situation where a man is suffering from a disease, and the physician says “I do not know whether an operation will save the man, but if there is any remedy, it is an operation” (Reichenbach 1938 [2006: 349]). This provides some kind of justification for operating on the man, even if one does not know that the operation will succeed.

In order to get a full account, of course, we need to say more about what is meant for a method to have “success”, or to “work”. Reichenbach thought that this should be defined in relation to the aim of induction. This aim, he thought, is “ to find series of events whose frequency of occurrence converges towards a limit ” (1938 [2006: 350]).

Reichenbach applied his strategy to a general form of “statistical induction” in which we observe the relative frequency \(f_n\) of a particular event in n observations and then form expectations about the frequency that will arise when more observations are made. The “inductive principle” then states that if a frequency of \(m/n\) has been observed after a certain number of instances, then for any prolongation of the series of observations the frequency will continue to fall within a small interval of \(m/n\). Hume’s examples are special cases of this principle, where the observed frequency is 1. For example, in Hume’s bread case, suppose bread was observed to nourish n times out of n (i.e. an observed frequency of 100%); then according to the principle of induction, we expect that as we observe more instances, the frequency of nourishing ones will continue to be within a very small interval of 100%. Following this inductive principle is also sometimes referred to as following the “straight rule”. The problem then is to justify the use of this rule.
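
Put schematically (a paraphrase of the rule as just described, not Reichenbach’s own notation): if the observed relative frequency after n observations is \(f_n = m/n\), the straight rule directs us to posit that the limiting frequency, if it exists, lies within a small interval around that value,

\[
\lim_{n \to \infty} f_n \;\in\; \left[\frac{m}{n} - \delta,\; \frac{m}{n} + \delta\right]
\]

for some small \(\delta\).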

Reichenbach argued that even if Hume is right to think that we cannot be justified in thinking for any particular application of the rule that the conclusion is likely to be true, for the purposes of practical action we do not need to establish this. We can instead regard the inductive rule as resulting in a “posit”, or statement that we deal with as if it is true. We posit a certain frequency f on the basis of our evidence, and this is like making a wager or bet that the frequency is in fact f . One strategy for positing frequencies is to follow the rule of induction.

Reichenbach proposes that we can show that the rule of induction meets his weaker justification condition. This does not require showing that following the inductive principle will always work. It is possible that the world is so disorderly that we cannot construct series with any limits. In that case, neither the inductive principle, nor any other method will succeed. But, he argues, if there is a limit, by following the inductive principle we will eventually find it. There is some element of a series of observations, beyond which the principle of induction will lead to the true value of the limit. Although the inductive rule may give quite wrong results early in the sequence, as it follows chance fluctuations in the sample frequency, it is guaranteed to eventually approximate the limiting frequency, if such a limit exists. Therefore, the rule of induction is justified as an instrument of positing because it is a method of which we know that if it is possible to achieve the aim of inductive inference we shall do so by means of this method (Reichenbach 1949: 475).
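
The convergence claim can be made concrete with a toy simulation. The sketch below assumes, purely for illustration, an i.i.d. Bernoulli process with success probability 0.8, which is one kind of series for which a limiting frequency exists; whatever the straight rule posits early on, its posits home in on that limit:

```python
import random

def straight_rule_posit(observations):
    """Posit that the limiting frequency equals the relative frequency observed so far."""
    return sum(observations) / len(observations)

# Assumption for this sketch: an i.i.d. Bernoulli process with success probability 0.8,
# one kind of series for which a limiting frequency exists.
random.seed(0)
true_limit = 0.8
observations = []
for n in (10, 100, 1000, 10000):
    while len(observations) < n:
        observations.append(1 if random.random() < true_limit else 0)
    posit = straight_rule_posit(observations)
    print(f"after n = {n:>5} observations, posit = {posit:.3f} (limiting frequency = {true_limit})")
```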

One might question whether Reichenbach has achieved his goal of showing that following the inductive rule is a necessary condition of success. In order to show that, one would also need to establish that no other methods can achieve the aim. But, as Reichenbach himself recognises, many other rules of inference besides the straight rule may also converge on the limit (Salmon 1966: 53). In fact, any method which converges asymptotically to the straight rule does so. An easily specified class of such rules is those which add to the inductive rule a function \(c_n\), where the \(c_n\) converge to zero with increasing n.
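
Schematically, such a rule posits \(f_n + c_n\) in place of \(f_n\), where \(c_n \to 0\) as \(n \to \infty\). Any such rule agrees with the straight rule in the limit, and so converges to the limiting frequency whenever one exists, while its posits at any finite stage may differ from the straight rule’s by an arbitrary amount.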

Reichenbach makes two suggestions aimed at avoiding this problem. On the one hand, he claims, since we have no real way to pick between methods, we might as well just use the inductive rule since it is “easier to handle, owing to its descriptive simplicity”. He also claims that the method which embodies the “smallest risk” is following the inductive rule (Reichenbach 1938 [2006: 355–356]).

There is also the concern that there could be a completely different kind of rule which converges on the limit. We can consider, for example, the possibility of a soothsayer or psychic who is able to predict future events reliably. Here Reichenbach argues that induction is still necessary in such a case, because it has to be used to check whether the other method works. It is only by using induction, Reichenbach says, that we could recognise the reliability of the alternative method, by examining its track record.

In assessing this argument, it is helpful to distinguish between levels at which the principle of induction can be applied. Following Skyrms (2000), we may distinguish between level 1, where candidate methods are applied to ordinary events or individuals, and level 2, where they are applied not to individuals or events, but to the arguments on level 1. Let us refer to “object-induction” when the inductive principle is applied at level 1, and “meta-induction” when it is applied at level 2. Reichenbach’s response does not rule out the possibility that another method might do better than object-induction at level 1. It only shows that the success of that other method may be recognised by a meta-induction at level 2 (Skyrms 2000). Nonetheless, Reichenbach’s thought was later picked up and developed into the suggestion that a meta-inductivist who applies induction not only at the object level to observations, but also to the success of others’ methods, might by those means be able to do as well predictively as the alternative method (Schurz 2008; see section 5.5 for more discussion of meta-induction).

Reichenbach’s justification is generally taken to be a pragmatic one, since though it does not supply knowledge of a future event, it supplies a sufficient reason for action (Reichenbach 1949: 481). One might question whether a pragmatic argument can really deliver an all-purpose, general justification for following the inductive rule. Surely a pragmatic solution should be sensitive to differences in pay-offs that depend on the circumstances. For example, Reichenbach offers the following analogue to his pragmatic justification:

We may compare our situation to that of a man who wants to fish in an unexplored part of the sea. There is no one to tell him whether or not there are fish in this place. Shall he cast his net? Well, if he wants to fish in that place, I should advise him to cast the net, to take the chance at least. It is preferable to try even in uncertainty than not to try and be certain of getting nothing. (Reichenbach 1938 [2006: 362–363])

As Lange points out, the argument here “presumes that there is no cost to trying”. In such a situation, “the fisherman has everything to gain and nothing to lose by casting his net” (Lange 2011: 77). But if there is some significant cost to making the attempt, it may not be so clear that the most rational course of action is to cast the net. Similarly, whether it would make sense to adopt the policy of making no predictions, rather than the policy of following the inductive rule, may depend on what the practical penalties are for being wrong. A pragmatic solution may not be capable of offering a rationale for following the inductive rule that applies in all circumstances.

Another question is whether Reichenbach has specified the aim of induction too narrowly. Finding series of events whose frequency of occurrence converges to a limit ties the vindication to the long-run, while allowing essentially no constraint on what can be posited in the short-run. Yet it is in the short run that inductive practice actually occurs and where it really needs justification (BonJour 1998: 194; Salmon 1966: 53).

Formal learning theory can be regarded as a kind of extension of the Reichenbachian programme. It does not offer justifications for inductive inferences in the sense of giving reasons why they should be taken as likely to provide a true conclusion. Rather, it offers a “means-ends” epistemology: it provides reasons for following particular methods based on their optimality in achieving certain desirable epistemic ends, even if there is no guarantee that at any given stage of inquiry the results they produce are at all close to the truth (Schulte 1999).

Formal learning theory is particularly concerned with showing that methods are “logically reliable” in the sense that they arrive at the truth given any sequence of data consistent with our background knowledge (Kelly 1996). However, it goes further than this. As we have just seen, one of the problems for Reichenbach was that there are too many rules which converge in the limit to the true frequency. Which one should we then choose in the short-run? Formal learning theory broadens Reichenbach’s general strategy by considering what happens if we have other epistemic goals besides long-run convergence to the truth. In particular, formal learning theorists have considered the goal of getting to the truth as efficiently, or quickly, as possible, as well as the goal of minimising the number of mind-changes, or retractions along the way. It has then been argued that the usual inductive method, which is characterised by a preference for simpler hypotheses (Occam’s razor), can be justified since it is the unique method which meets the standards for getting to the truth in the long run as efficiently as possible, with a minimum number of retractions (Kelly 2007).

Steel (2010) has proposed that the Principle of Induction (understood as a rule which makes inductive generalisations along the lines of the Straight Rule) can be given a means-ends justification by showing that following it is both necessary and sufficient for logical reliability. The proof is an a priori mathematical one, thus it allegedly avoids the circularity of Hume’s second horn. However, Steel also does not see the approach as an attempt to grasp Hume’s first horn, since the proof is only relative to a certain choice of epistemic ends.

As with other results in formal learning theory, this solution is also only valid relative to a given hypothesis space and conception of possible sequences of data. For this reason, some have seen it as not addressing Hume’s problem of giving grounds for a particular inductive inference (Howson 2011). An alternative attitude is that it does solve a significant part of Hume’s problem (Steel 2010). There is a similar dispute over formal learning theory’s treatment of Goodman’s riddle (Chart 2000, Schulte 2017).

Another approach to pursuing a broadly Reichenbachian programme is Gerhard Schurz’s strategy based on meta-induction (Schurz 2008, 2017, 2019). Schurz draws a distinction between applying inductive methods at the level of events—so-called “object-level” induction (OI), and applying inductive methods at the level of competing prediction methods—so-called “meta-induction” (MI). Whereas object-level inductive methods make predictions based on the events which have been observed to occur, meta-inductive methods make predictions based on aggregating the predictions of different available prediction methods according to their success rates. Here, the success rate of a method is defined according to some precise way of scoring success in making predictions.

The starting point of the meta-inductive approach is that the aim of inductive inference is not just, as Reichenbach had it, finding long-run limiting frequencies, but also predicting successfully in both the long and short run. Even if Hume has precluded showing that the inductive method is reliable in achieving successful prediction, perhaps it can still be shown that it is “predictively optimal”. A method is “predictively optimal” if it succeeds best in making successful predictions out of all competing methods, no matter what data is received. Schurz brings to bear results from the regret-based learning framework in machine learning that show that there is a meta-inductive strategy that is predictively optimal among all predictive methods that are accessible to an epistemic agent (Cesa-Bianchi and Lugosi 2006, Schurz 2008, 2017, 2019). This meta-inductive strategy, which Schurz calls “wMI”, predicts a weighted average of the predictions of the accessible methods, where the weights are “attractivities”, which measure the difference between the method’s own success rate and the success rate of wMI.
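
To make the shape of such a strategy concrete, here is a minimal sketch of a weighted-average meta-inductive prediction step. It assumes predictions and outcomes in \([0, 1]\), a simple “one minus absolute error” scoring rule, and attractivities taken as the positive part of the success-rate difference; the function names and these particular choices are illustrative simplifications, not Schurz’s own definitions.

```python
def wmi_prediction(preds, success_rates, wmi_success):
    """Weighted-average meta-inductive prediction (illustrative sketch).

    preds: current predictions of the accessible methods (numbers in [0, 1]).
    success_rates: each method's success rate so far.
    wmi_success: the meta-inductive strategy's own success rate so far.
    Weights ("attractivities") are the positive part of the difference between a
    method's success rate and the meta-inductivist's success rate.
    """
    attractivities = [max(s - wmi_success, 0.0) for s in success_rates]
    total = sum(attractivities)
    if total == 0.0:  # no method currently outperforms the meta-inductivist: fall back to a plain average
        return sum(preds) / len(preds)
    return sum(a * p for a, p in zip(attractivities, preds)) / total


def score(prediction, outcome):
    """Success score for one round: 1 minus the absolute error (an assumed scoring rule)."""
    return 1.0 - abs(prediction - outcome)


# Illustrative round with made-up numbers: three accessible methods predict tomorrow's event.
preds = [0.9, 0.5, 0.1]            # the methods' current predictions
success_rates = [0.8, 0.6, 0.3]    # their success rates so far
wmi_success = 0.7                  # the meta-inductivist's own success rate so far
print(wmi_prediction(preds, success_rates, wmi_success))  # only the first method gets any weight here
```

Roughly speaking, giving weight only to methods that currently outperform the meta-inductivist is what lets the strategy’s success rate track that of the best accessible method in the long run.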

The main result is that the wMI strategy is long-run optimal in the sense that it converges to the maximum success rate of the accessible prediction methods. Worst-case bounds for short-run performance can also be derived. The optimality result forms the basis for an a priori means-ends justification for the use of wMI. Namely, the thought is, it is reasonable to use wMI, since it achieves the best success rates possible in the long run out of the given methods.

Schurz also claims that this a priori justification of wMI, together with the contingent fact that inductive methods have so far been much more successful than non-inductive methods, gives rise to an a posteriori non-circular justification of induction. Since wMI will achieve in the long run the maximal success rate of the available prediction methods, it is reasonable to use it. But as a matter of fact, object-inductive prediction methods have been more successful than non-inductive methods so far. Therefore Schurz says “it is meta-inductively justified to favor object-inductivistic strategies in the future” (Schurz 2019: 85). This justification, he claims, is not circular because meta-induction has an a priori independent justification. The idea is that since it is a priori justified to use wMI, it is also a priori justified to use the maximally successful method at the object level. Since it turns out that the maximally successful method is object-induction, we have a non-circular a posteriori argument that it is reasonable to use object-induction.

Schurz’s original theorems on the optimality of wMI apply to the case where there are finitely many predictive methods. One point of discussion is whether this amounts to an important limitation on its claims to provide a full solution of the problem of induction. The question then is whether it is necessary that the optimality results be extended to an infinite, or perhaps an expanding pool of strategies (Eckhardt 2010, Sterkenburg 2019, Schurz 2021a).

Another important issue concerns what it means for object-induction to be “meta-inductively justified”. The meta-inductive strategy wMI and object-induction are clearly different strategies. They could result in different predictions tomorrow, if OI stopped working and another method started to do better. In that case, wMI would begin to favour the other method, and wMI would start to come apart from OI. The optimality results provide a reason to follow wMI. How exactly does object-induction inherit that justification? At most, it seems that we get a justification for following OI on the next time-step, on the grounds that OI’s prediction approximately coincides with that of wMI (Sterkenburg 2020; Sterkenburg forthcoming). However, this requires a stronger empirical postulate than simply the observation that OI has been more successful than non-inductive methods. It also requires something like the claim that “as a matter of empirical fact, the strategy OI has been so much more successful than its competitors, that the meta-inductivist attributes it such a large share of the total weight that its prediction (approximately) coincides with OI’s prediction” (Sterkenburg 2020: 538). Furthermore, even if we allow that the empirical evidence does back up such a strong claim, the issue remains that the meta-inductive justification is in support of following the strategy of meta-induction, not in support of the strategy of following OI (Sterkenburg 2020: sec. 3.3.2).

So far we have considered the various ways in which we might attempt to solve the problem of induction by resisting one or other premise of Hume’s argument. Some philosophers have however seen his argument as unassailable, and have thus accepted that it does lead to inductive skepticism, the conclusion that inductive inferences cannot be justified. The challenge then is to find a way of living with such a radical-seeming conclusion. We appear to rely on inductive inference ubiquitously in daily life, and it is also generally thought that it is at the very foundation of the scientific method. Can we go on with all this, whilst still seriously thinking none of it is justified by any rational argument?

One option here is to argue, as does Nicholas Maxwell, that the problem of induction is posed in an overly restrictive context. Maxwell argues that the problem does not arise if we adopt a different conception of science than the ‘standard empiricist’ one, which he denotes ‘aim-oriented empiricism’ (Maxwell 2017).

Another option here is to think that the significance of the problem of induction is somehow restricted to a skeptical context. Hume himself seems to have thought along these lines. For instance he says:

Nature will always maintain her rights, and prevail in the end over any abstract reasoning whatsoever. Though we should conclude, for instance, as in the foregoing section, that, in all reasonings from experience, there is a step taken by the mind, which is not supported by any argument or process of the understanding; there is no danger, that these reasonings, on which almost all knowledge depends, will ever be affected by such a discovery. (E. 5.1.2)

Hume’s purpose is clearly not to argue that we should not make inductive inferences in everyday life, and indeed his whole method and system of describing the mind in naturalistic terms depends on inductive inferences through and through. The problem of induction then must be seen as a problem that arises only at the level of philosophical reflection.

Another way to mitigate the force of inductive skepticism is to restrict its scope. Karl Popper, for instance, regarded the problem of induction as insurmountable, but he argued that science is not in fact based on inductive inferences at all (Popper 1935 [1959]). Rather he presented a deductivist view of science, according to which it proceeds by making bold conjectures, and then attempting to falsify those conjectures. In the simplest version of this account, when a hypothesis makes a prediction which is found to be false in an experiment, the hypothesis is rejected as falsified. The logic of this procedure is fully deductive. The hypothesis entails the prediction, and the falsity of the prediction refutes the hypothesis by modus tollens. Thus, Popper claimed that science was not based on the extrapolative inferences considered by Hume. The consequence then is that it is not so important, at least for science, if those inferences would lack a rational foundation.

Popper’s account appears to be incomplete in an important way. There are always many hypotheses which have not yet been refuted by the evidence, and these may contradict one another. According to the strictly deductive framework, since none are yet falsified, they are all on an equal footing. Yet, scientists will typically want to say that one is better supported by the evidence than the others. We seem to need more than just deductive reasoning to support practical decision-making (Salmon 1981). Popper did indeed appeal to a notion of one hypothesis being better or worse “corroborated” by the evidence. But arguably, this took him away from a strictly deductive view of science. It appears doubtful then that pure deductivism can give an adequate account of scientific method.

Bibliography

  • Achinstein, Peter, 2010, “The War on Induction: Whewell Takes on Newton and Mill (Norton Takes on Everyone)”, Philosophy of Science , 77(5): 728–739.
  • Armstrong, David M., 1983, What is a Law of Nature? , Cambridge: Cambridge University Press.
  • Baier, Annette C., 2009, A Progress of Sentiments , Harvard: Harvard University Press.
  • Bayes, Thomas, 1764, “An Essay Towards Solving a Problem in the Doctrine of Chances”, Philosophical Transactions of the Royal Society of London , 53: 370–418.
  • Beauchamp, Tom L, and Alexander Rosenberg, 1981, Hume and the Problem of Causation , Oxford: Oxford University Press.
  • Bertrand, Joseph Louis Francois, 1888, Calcul des probabilites , Paris: Gauthier-Villars.
  • BonJour, Laurence, 1998, In Defense of Pure Reason: A Rationalist Account of A Priori Justification , Cambridge: Cambridge University Press.
  • Borel, Emile, 1909, Elements de la theorie des probabilites , Paris: Herman et Fils.
  • Brown, M.B., 1987, “Review of The Rationality of Induction , D.C. Stove [1986]”, History and Philosophy of Logic , 8(1): 116–120.
  • Burks, Arthur W., 1953, “The Presupposition Theory of Induction”, Philosophy of Science , 20(3): 177–197.
  • –––, 1955, “On the Presuppositions of Induction”, Review of Metaphysics , 8(4): 574–611.
  • Campbell, Scott, 2001, “Fixing a Hole in the Ground of Induction”, Australasian Journal of Philosophy , 79(4): 553–563.
  • Campbell, Scott, and James Franklin, 2004, “Randomness and the Justification of Induction”, Synthese , 138(1): 79–99.
  • Carnap, Rudolph, 1950, Logical Foundations of Probability , Chicago: University of Chicago Press.
  • –––, 1952, The Continuum of Inductive Methods , Chicago: University of Chicago Press.
  • Carroll, John W., 2016, “Laws of Nature”, Stanford Encyclopedia of Philosophy (Fall 2016 Edition), Edward N. Zalta (ed.), URL = < https://plato.stanford.edu/archives/fall2016/entries/laws-of-nature/ >.
  • Carroll, Lewis, 1895, “What the Tortoise said to Achilles”, Mind , 4(14): 278–280.
  • Cesa-Bianchi, Nicolo, and Gabor Lugosi, 2006, Prediction, Learning, and Games , Cambridge: Cambridge University Press.
  • Chart, David, 2000, “Schulte and Goodman’s Riddle”, British Journal for the Philosophy of Science, 51(1): 147–149.
  • Cleve, James van, 1984, “Reliability, Justification, and the Problem of Induction”, Midwest Studies In Philosophy : 555–567.
  • Cox, R. T., 1946, “Probability, frequency and reasonable expectation”, American Journal of Physics , 14: 1–10.
  • –––, 1961, The Algebra of Probable Inference , Baltimore, MD: Johns Hopkins University Press.
  • de Finetti, Bruno, 1964, “Foresight: its logical laws, its subjective sources”, in H.E. Kyburg (ed.), Studies in subjective probability , New York: Wiley, pp. 93–158.
  • de Pierris, Graciela and Michael Friedman, 2013, “Kant and Hume on Causality”, The Stanford Encyclopedia of Philosophy (Winter 2013 Edition), Edward N. Zalta (ed.), URL = < https://plato.stanford.edu/archives/win2013/entries/kant-hume-causality/ >.
  • Dretske, Fred I., 1977, “Laws of Nature”, Philosophy of Science , 44(2): 248–68.
  • Eckhardt, Arnold, 2010, “Can the Best-Alternative-Justification Solve Hume’s Problem? (On the limits of a promising approach)”, Philosophy of Science , 77(4): 584–593.
  • Feigl, Herbert, 1950, “De Principiis non disputandum”, in Max Black (ed.), Philosophical Analysis , Ithaca, NY: Cornell University Press, pp. 119–56.
  • Foster, John, 2004, The Divine Lawmaker: Lectures on Induction, Laws of Nature and the Existence of God , Oxford: Clarendon Press.
  • Garrett, Don, 2002, Cognition and Commitment in Hume’s Philosophy , Oxford: Oxford University Press.
  • Ghosal, S., J. K. Ghosh, and A.W. van der Vaart, 2000, “Convergence rates of posterior distributions”, The Annals of Statistics , 28: 500–531.
  • Ghosal, S., J. Lember, and A. W. van der Vaart, 2008, “Non-parametric Bayesian model selection and averaging”, Electronic Journal of Statistics, 2: 63–89.
  • Giaquinto, Marcus, 1987, “Review of The Rationality of Induction , D.C. Stove [1986]”, Philosophy of Science , 54(4): 612–615.
  • Goodman, Nelson, 1955, Fact, Fiction and Forecast , Cambridge, MA: Harvard University Press.
  • Hacking, Ian, 1975, The Emergence of Probability: a Philosophical Study of Early Ideas About Probability, Induction and Statistical Inference , Cambridge: Cambridge University Press.
  • Hájek, Alan, 2011, “Interpretations of Probability”, The Stanford Encyclopedia of Philosophy (Winter 2012 Edition), Edward N. Zalta (ed.), URL = < https://plato.stanford.edu/archives/win2012/entries/probability-interpret/ >.
  • Harman, Gilbert, 1968, “Enumerative Induction as Inference to the Best Explanation”, Journal of Philosophy , 65(18): 529–533.
  • Henderson, Leah, 2014, “Bayesianism and Inference to the Best Explanation”, The British Journal for the Philosophy of Science , 65(4): 687–715.
  • Howson, Colin, 2000, Hume’s Problem: Induction and the Justification of Belief , Oxford: Oxford University Press.
  • –––, 2011, “No Answer to Hume”, International Studies in the Philosophy of Science , 25(3): 279–284.
  • Huemer, Michael, 2009, “Explanationist Aid for the Theory of Inductive Logic”, The British Journal for the Philosophy of Science , 60(2): 345–375.
  • [T] Hume, David, 1739, A Treatise of Human Nature , Oxford: Oxford University Press. (Cited by book.part.section.paragraph.)
  • [E] –––, 1748, An Enquiry Concerning Human Understanding , Oxford: Oxford University Press. (Cited by section.part.paragraph.)
  • Jackson, Alexander, 2019, “How to solve Hume’s problem of induction”, Episteme 16: 157–174.
  • Jeffreys, Harold, 1939, Theory of Probability , Oxford: Oxford University Press.
  • Johnson, William Ernest, 1921, Logic , Cambridge: Cambridge University Press.
  • –––, 1932, “Probability: the Deductive and Inductive Problems”, Mind , 49(164): 409–423.
  • Kant, Immanuel, 1781, Kritik der reinen Vernunft . Translated as Critique of Pure Reason , Paul Guyer and Allen W. Wood, A., (eds.), Cambridge: Cambridge University Press, 1998.
  • –––, 1783, Prolegomena zu einer jeden künftigen Metaphysik, die als Wissenschaft wird auftreten können . Translated as Prologomena to Any Future Metaphysics , James W. Ellington (trans.), Indianapolis: Hackett publishing, 2002.
  • Kelly, Kevin T., 1996, The Logic of Reliable Inquiry , Oxford: Oxford University Press.
  • –––, 2007, “A new solution to the puzzle of simplicity”, Philosophy of Science , 74: 561–573.
  • Kelly, Thomas, 2010, “Hume, Norton and induction without rules”, Philosophy of Science, 77: 754–764.
  • Keynes, John Maynard, 1921, A Treatise on Probability , London: Macmillan.
  • Lange, Marc, 2011, “Hume and the Problem of induction”, in Dov Gabbay, Stephan Hartmann and John Woods (eds.), Inductive Logic , ( Handbook of the History of Logic , Volume 10), Amsterdam: Elsevier, pp. 43–92.
  • Laplace, Pierre-Simon, 1814, Essai philosophique sur les probabilités , Paris. Translated in 1902 from the sixth French edition as A Philosophical Essay on Probabilities , by Frederick Wilson Truscott and Frederick Lincoln Emory, New York: John Wiley and Sons. Retranslated in 1995 from the fifth French edition (1825) as Philosophical Essay on Probabilities , by Andrew I. Dale, 1995, New York: Springer-Verlag.
  • Maher, Patrick, 1996, “The Hole in the Ground of Induction”, Australasian Journal of Philosophy , 74(3): 423–432.
  • Maxwell, Nicholas, 2017, Understanding Scientific Progress: Aim-Oriented Empiricism , St. Paul: Paragon House.
  • Mitchell, Tom, 1997, Machine Learning : McGraw-Hill.
  • Morris, William E., and Charlotte R. Brown, 2014 [2017], “David Hume”, The Stanford Encyclopedia of Philosophy (Spring 2017 Edition), Edward N. Zalta (ed.), URL = < https://plato.stanford.edu/archives/spr2017/entries/hume/ >.
  • Norton, John D., 2003, “A Material Theory of Induction”, Philosophy of Science , 70(4): 647–670.
  • –––, 2010, “There are no universal rules for induction”, Philosophy of Science , 77: 765–777.
  • –––, 2021, The Material Theory of Induction : BSPS Open/University of Calgary Press.
  • Okasha, Samir, 2001, “What did Hume Really Show about Induction?”, The Philosophical Quarterly , 51(204): 307–327.
  • –––, 2005a, “Bayesianism and the Traditional Problem of Induction”, Croatian Journal of Philosophy , 5(14): 181–194.
  • –––, 2005b, “Does Hume’s Argument against Induction Rest on a Quantifier-Shift Fallacy?”, Proceedings of the Aristotelian Society , 105: 237–255.
  • Owen, David, 1999, Hume’s Reason , Oxford: Oxford University Press.
  • Papineau, David, 1992, “Reliabilism, Induction and Scepticism”, The Philosophical Quarterly , 42(166): 1–20.
  • Popper, Karl, 1935 [1959], Logik der Forschung , Wien: J. Springer. Translated by Popper as The Logic of Scientific Discovery , London: Hutchinson, 1959.
  • Ramsey, Frank P., 1926, “Truth and Probability”, in R.B. Braithwaite (ed.), The Foundations of Mathematics and Other Logical Essays , London: Routledge and Kegan-Paul Ltd., pp. 156–98.
  • Reichenbach, Hans, 1949, The Theory of Probability , Berkeley: University of California Press.
  • –––, 1938 [2006], Experience and Prediction: An Analysis of the Foundations and the Structure of Knowledge , Chicago: University of Chicago Press. Page numbers from the 2006 edition, Indiana: University of Notre Dame Press.
  • Romeijn, Jan-Willem, 2004, “Hypotheses and Inductive Predictions”, Synthese , 141(3): 333–364.
  • Russell, Bertrand, 1946, A History of Western Philosophy , London: George Allen and Unwin Ltd.
  • –––, 1948, Human Knowledge: Its Scope and Limits , New York: Simon and Schuster.
  • Salmon, Wesley C., 1953, “The Uniformity of Nature”, Philosophy and Phenomenological Research , 14(1): 39–48.
  • –––, 1963, “On Vindicating Induction”, Philosophy of Science , 30(3): 252–261.
  • –––, 1966, The Foundations of Scientific Inference , Pittsburgh: University of Pittsburgh Press.
  • –––, 1981, “Rational Prediction”, British Journal for the Philosophy of Science , 32(2): 115–125.
  • Savage, Leonard J, 1954, The Foundations of Statistics , New York: Dover Publications.
  • Schulte, Oliver, 1999, “Means-Ends Epistemology”, British Journal for the Philosophy of Science , 50(1): 1–31.
  • –––, 2000, “What to believe and what to take seriously: a reply to David Chart concerning the riddle of induction”, British Journal for the Philosophy of Science, 51: 151–153.
  • –––, 2017 [2018], “Formal Learning Theory”, The Stanford Encyclopedia of Philosophy (Spring 2018 Edition), Edward N. Zalta (ed.), URL = < https://plato.stanford.edu/archives/spr2018/entries/learning-formal/ >.
  • Schurz, Gerhard, 2008, “The Meta-inductivist’s Winning Strategy in the Prediction Game: A New Approach to Hume’s Problem”, Philosophy of Science , 75(3): 278–305.
  • –––, 2017, “Optimality Justifications: New Foundations for Foundation-Oriented Epistemology”, Synthese , 73:1–23.
  • –––, 2019, Hume’s Problem Solved: the Optimality of Meta-induction , Cambridge, MA: MIT Press.
  • –––, 2021a, “Meta-induction over unboundedly many prediction methods: a reply to Arnold and Sterkenburg”, Philosophy of Science , 88: 320–340.
  • –––, 2021b, “The No Free Lunch Theorem: bad news for (White’s account of) the problem of induction”, Episteme , 18: 31–45.
  • Schurz, Gerhard, and Paul Thorn, 2020, “The material theory of object-induction and the universal optimality of meta-induction: two complementary accounts”, Studies in History and Philosophy of Science A , 82: 99–93.
  • Skyrms, Brian, 2000, Choice and Chance: An Introduction to Inductive Logic , Wadsworth.
  • –––, 2012, From Zeno to Arbitrage: Essays on Quantity, Coherence and Induction , Oxford: Oxford University Press.
  • Sober, Elliott, 1988, Reconstructing the Past: Parsimony, Evolution and Inference , Cambridge MA: MIT Press.
  • Steel, Daniel, 2010, “What If the Principle of Induction Is Normative? Formal Learning Theory and Hume’s Problem”, International Studies in the Philosophy of Science , 24(2): 171–185.
  • Sterkenburg, Tom, 2019, “The meta-inductive justification of induction: the pool of strategies”, Philosophy of Science , 86: 981–992.
  • –––, 2020, “The meta-inductive justification of induction”, Episteme , 17: 519–541.
  • –––, forthcoming, “Explaining the success of induction”, British Journal for the Philosophy of Science , https://doi.org/10.1086/717068.
  • Sterkenburg, Tom and Peter Grünwald, 2021, “The no-free-lunch theorems of supervised learning”, Synthese , 199: 9979–10015.
  • Stove, David C., 1986, The Rationality of Induction , Oxford: Clarendon Press.
  • Strawson, Peter Frederick, 1952, Introduction to Logical Theory , London: Methuen.
  • Tooley, Michael, 1977, “The Nature of Laws”, Canadian Journal of Philosophy , 7(4): 667–698.
  • White, Roger, 2015, “The problem of the problem of induction”, Episteme , 12: 275–290.
  • Will, Frederick L., 1948, “Donald Williams’ Theory of Induction”, Philosophical Review , 57(3): 231–247.
  • Williams, Donald C., 1947, The Ground of Induction , Harvard: Harvard University Press.
  • Wittgenstein, Ludwig, 1953, Philosophical Investigations , New Jersey: Prentice Hall.
  • Wolpert, D. H., 1992, “On the connection between in-sample testing and generalization error”, Complex Systems , 6: 47–94.
  • –––, 1996, “The lack of a priori distinctions between learning algorithms”, Neural Computation , 8: 1341–1390.
  • –––, 1997, “No free lunch theorems for optimization”, IEEE Transactions on Evolutionary Computation , 1: 67–82.
  • Worrall, John, 2010, “For Universal Rules, Against Induction”, Philosophy of Science , 77(5): 740–53.
  • Wright, Crispin, 2004, “Wittgensteinian Certainties”, in Denis McManus (ed.), Wittgenstein and Scepticism , London: Routledge, pp. 22–55.
  • Zabell, Sandy L., 1988, “Symmetry and Its Discontents”, in Brian Skyrms (ed.), Causation, Chance and Credence , Dordrecht: Springer Netherlands, pp. 155–190.
  • –––, 1989, “The Rule of Succession”, Erkenntnis , 31(2–3): 283–321.



Marcus Coetzee

Inductive and deductive reasoning can help us to solve complex strategic and social problems.

Article on #Strategy .

By Marcus Coetzee, 18 June 2021.

1. Introduction

Strategy emerges from how we think about the complex problems facing our organizations. These problems might relate to our environment, the challenges faced by our beneficiaries or something inside our organization. To become better at developing strategies, we must learn how to think more clearly and avoid cognitive biases.

My ability to think strategically has benefited immensely from understanding the differences between inductive and deductive reasoning, and when and how to apply each. Inductive reasoning involves ‘bottom up thinking’ – constructing theories from details. In contrast, deductive reasoning involves ‘top down thinking’ – starting with a theory and deducing the details that must be true if the theory is valid.

We all have our preferences for one of these types of reasoning when solving complex problems that affect organizations and communities. Nevertheless, it is beneficial to master both types of reasoning so that we can use them when the need arises.

This article summarizes what I have learned so far while diving into this topic. It is a detailed and technical article that will interest people who want to enhance how they use reasoning to solve problems.

2. Terminology

Here is some of the terminology I use in this article:

‘Theories’ include beliefs, principles, generalizations, rules, patterns, conjectures and conclusions that describe a part of the world that is greater than what was observed. These theories are used to explain or predict that which was not observed or not yet observed.

‘Observations’ include experiences, cases and instances.

‘Hypotheses’ are clear statements that are the building blocks for theories. For example, it was raining this morning when I left my apartment. I hypothesized that drops of water would fall on me when I went outdoors. This hypothesis is a core component of our theory of rain.

‘Scientific’ describes reasoning, inductive or deductive, that conforms to the standards prescribed by the philosophy of science, which explores the nature of scientific theories and methodologies. For example, the Principle of Falsification requires that a scientific theory be capable of being disproved, and that it specify how this might be done.

3. Inductive reasoning

In this section I will introduce inductive reasoning and provide several examples. I will explain how inductive reasoning is intrinsically constrained by the need to make generalizations. I will also explain when and when not to use inductive reasoning. 

This section closes with a detailed example of how I used inductive reasoning to infer an informal theory that homelessness has increased in South Africa as a result of the Covid-19 pandemic and is unlikely to be alleviated any time soon.

3.1 What is inductive reasoning?

Inductive reasoning is commonly referred to as ‘bottom up’ thinking. It involves using details to infer theories that cover more than what was observed – i.e. creating generalizations based upon a set of observations. The statement of probable truth that we reach through inductive reasoning is sometimes called a ‘conjecture’.

The flowchart below illustrates the process of inductive reasoning.

[Flowchart: the process of inductive reasoning]

We use inductive reasoning in our lives every day to make sense of the world. Many of the theories we formulate are not scientific or academic, but rather personal.

People are more likely to consider the theories that they develop through inductive reasoning to be true if those theories are associated with intense emotions, and if repeated observations of different types fit the theory. For example, someone will be more inclined to believe that their community is unsafe if they are a victim of crime, if they know other people who have had similar experiences, and if they hear stories about their dangerous community on the radio.

In contrast, when inductive reasoning is used formally in statistics and quantitative research, the strength of the resulting theory depends primarily on the sample design and research methodology. Suppose the researchers have a sample frame (a list with the details of the population being studied) and are able to draw a probability sample (one in which every member of the population has a positive, known chance of being included). This would enable them to quantify how confident they can be that their findings generalize to the people, things and events that were not observed but fall within the ambit of the theory, for example by attaching a margin of error and confidence level to their estimates.
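
As a minimal sketch of what such a quantification can look like, assuming a simple random sample and the usual normal approximation to the binomial (the survey numbers below are invented for illustration):

```python
import math

def proportion_confidence_interval(successes, sample_size, z=1.96):
    """95% confidence interval for a population proportion (normal approximation)."""
    p_hat = successes / sample_size
    margin = z * math.sqrt(p_hat * (1 - p_hat) / sample_size)
    return p_hat - margin, p_hat + margin

# Invented example: 620 of 1,000 randomly sampled respondents report food shortages.
low, high = proportion_confidence_interval(620, 1000)
print(f"estimate = {620/1000:.1%}, 95% CI roughly {low:.1%} to {high:.1%}")
```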

3.2. Three examples of inductive reasoning

The best way to understand inductive reasoning is to see examples of how it is being used. Here are three that were on my mind when I wrote this article.

The first example relates to the National Income Dynamics Study – Coronavirus Rapid Mobile Survey (NIDS-CRAM). Enumerators phoned a nationally representative sample of South Africans during ‘hard lockdown’ to understand their social and economic circumstances. This yielded many insights into how South Africans were struggling with the symptoms of poverty, such as food shortages and poor access to social services. This is an example of inductive reasoning because the detailed results of the interviews were used to create a broader theory about the socio-economic circumstances of all South Africans.

The second example relates to the stories of government corruption and ‘state capture’ that have filled the South African news cycle for several years. Investigative journalists and the Zondo Commission of Inquiry into Allegations of State Capture have uncovered many instances of large-scale corruption. Many South Africans, including myself, have inferred a theory about the nature and incidence of corruption in government and state-owned enterprises. So when I heard of a massive tender (approx. USD 15 billion) being awarded on short notice for the supply of electricity, I predicted that corruption was most likely involved in the tender process. Time is revealing the truth of the matter. This is inductive reasoning because I used several observations about corruption to notice patterns and develop a personal theory about government corruption, from which I make informal predictions.

The third example relates to a project I’m currently working on in East Africa . I am part of a team that is working on a study of non-tariff barriers in the East African Community. We are gathering official statistics on trade in the region, as well as information from traders, transporters, clearing agents and border officials. There are several data gathering methodologies involved. We will primarily use inductive reasoning to assimilate this data and infer a theory about the negative impact of these trade barriers on the region and how best to mitigate them. This is inductive reasoning because we use a multitude of observations to develop a theory about how non-tariff barriers are affecting all trade in the region.

3.3. Inductive theories vary in the probability that they will apply to things not yet observed

The Problem of Induction was described by the philosopher David Hume in the 18th century. He explained why a generalization from a set of observations can never be proven true – at most it can be described as highly probable. This is the inherent risk we all take when making generalizations about a broader group or set of phenomena. However, this should not belittle the value of inductive reasoning, since our mental models rely on this process. We should simply accept that the ‘map is not the territory’.

When conducting scientific research, it may be possible to specify the probability that the theory is true for observations that were not used to build the theory (i.e. for other people or future events that are not yet observed). 

When we cannot specify the probability that an inductively derived theory is true, the proponents of the theory must be transparent about the process, and about any compromises in data and methodology that were made along the way. This enables others to judge for themselves how probable they believe the theory to be.

3.4. When to use inductive reasoning

Inductive reasoning is useful when you want to develop a general theory based upon a limited set of observations because you don’t have the means to investigate or measure everything.

It is also useful when you already understand the conceptual areas that you want to explore but want to understand the likely incidence or frequency that certain things are true. For example, I spent three years working on a study to assess the likelihood that certain demographic and background factors were associated with students dropping out of South African universities.

It can also be useful when you want to investigate the strength of relationships between things and the extent to which certain variables correlate with each other.

Finally, inductive reasoning is useful when you want to make a prediction about the future based on historical trends (e.g. the unemployment rate, or the types of skills that the economy will need).
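
As a rough illustration of trend-based prediction, here is a minimal Python sketch with invented yearly figures (not official statistics) that fits a straight-line trend to past values and extrapolates one year ahead. The Problem of Induction applies here too: the fitted pattern may simply not continue.

# A minimal sketch: fit a straight-line trend to past (invented) values
# and extrapolate one step ahead. This is inductive reasoning in
# statistical form, with the usual risk that the trend will not continue.
years = [2018, 2019, 2020, 2021, 2022]
unemployment_rate = [27.1, 28.7, 32.5, 34.3, 32.9]   # hypothetical values

n = len(years)
mean_x = sum(years) / n
mean_y = sum(unemployment_rate) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(years, unemployment_rate))
         / sum((x - mean_x) ** 2 for x in years))
intercept = mean_y - slope * mean_x

forecast_year = 2023
forecast = intercept + slope * forecast_year
print(f"Trend-based forecast for {forecast_year}: {forecast:.1f}%")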

3.5. Inductive reasoning needs the right data to work effectively

Flawed and improbable theories are created when we take data from one situation and generalize it to other situations that are very different from the one where the original data was obtained. The problem here is not so much with inductive reasoning per se, but rather with its poor use. This might involve:

  • Attempting to generalize findings from one group to another with different characteristics. For example, a group of policy-makers might attempt to use a set of observations about the challenges faced by informal businesses in the Khayelitsha township in Cape Town to develop a theory about the challenges faced by all businesses in South Africa, regardless of their context or size. The resulting theory is likely to have some flaws.
  • Attempting to generalize findings to different contexts. For example, mosquito nets that have been treated with insecticide have proven effective in randomized controlled trials at reducing the incidence of malaria in Africa, with no harmful side effects. However, when these nets were given to certain fishing communities in Zambia, it was discovered that the fishermen were using them to filter fish and insects from rivers, lakes and wetlands, which damaged these ecosystems as an unintended consequence.
  • Attempting to use associations between things to assume a causal relationship. For example, we know that high levels of vitamin D are associated with reduced Covid-19 symptoms, but this does not necessarily mean that taking vitamin D supplements will achieve the same effect, since there may be other factors at play: people in poor health, or who are too sick to go outdoors, tend to have low vitamin D levels. The short simulation after this list shows how such a hidden factor can create an association on its own.
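
Here is the simulation referred to above: a minimal Python sketch in which a single hidden factor (labelled ‘general health’, a purely hypothetical confounder) drives both vitamin D levels and symptom severity, so the two variables correlate even though, in this simulated data, vitamin D has no effect at all.

# A minimal sketch with simulated data (not a real study): a confounder
# drives both variables, producing a correlation without any causal link
# from vitamin D to severity.
import random

random.seed(1)

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

vitamin_d, severity = [], []
for _ in range(5000):
    general_health = random.gauss(0, 1)                     # the confounder
    vitamin_d.append(general_health + random.gauss(0, 1))   # no causal path
    severity.append(-general_health + random.gauss(0, 1))   #   between these

print(f"Correlation between vitamin D and severity: {pearson(vitamin_d, severity):.2f}")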

The quality of the theories developed using inductive reasoning is also influenced by the quality of our mental models. For example, believers in the QAnon conspiracy have assimilated a disparate set of observations into a theory that a cabal of satanic, power-hungry pedophiles is trying to take over the United States government.

I believe we must learn to guard against theories in which inductive reasoning has been used incorrectly, since such theories can easily be used for nefarious purposes or, at the very least, mislead or misinform us.

3.6. Detailed example of inductive reasoning

While writing this article, I audited my belief that the social problem of homelessness has increased in South Africa as a result of the Covid-19 pandemic and is unlikely to be alleviated any time soon. The following flowchart shows a simplified version of how I unconsciously used inductive reasoning to infer this theory; it reads from left to right.

[Flowchart: informal inductive reasoning from local observations of homelessness to a general theory, read from left to right]

Because this theory was developed informally and largely unconsciously, I can’t specify the probability that it is true for other neighborhoods in Cape Town or for other cities in South Africa. Neither am I an expert in homelessness. Nevertheless, I will refine my personal theory as I learn more about this problem and how it has recently worsened.

4. Deductive reasoning

This section introduces deductive reasoning and provides several examples to show how it is different from inductive reasoning. I will explain when to use it and when not to use it. The section will conclude with a detailed example of how I might use deductive reasoning to develop a theory about the financial problems facing a non-profit organization.

4.1. What is deductive reasoning?

Deductive reasoning is commonly referred to as ‘top-down’ thinking. It involves adopting a theory, which was most likely developed using inductive reasoning, and then deducing details that must be true if the theory is valid.

The flowchart below illustrates the process of deductive reasoning.

[Flowchart: the process of deductive reasoning, from theory to testable hypotheses to evidence]

Deductive thinking is closely associated with an experimental approach in science and academia. It is a straightforward method for checking the validity of the theory and then refining or discarding it. 

Karl Popper’s principle of falsifiability is pertinent here: a scientific theory is one that is capable of being disproved, and it is accepted only provisionally, holding until one of its hypotheses is shown to be false.
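
As a small illustration of this logic, here is a minimal Python sketch (the hypotheses and evidence flags are purely illustrative placeholders) in which a theory is retained only while none of its deduced hypotheses has been contradicted by the evidence. Surviving the tests never proves the theory; it merely fails to falsify it.

def theory_survives(hypotheses):
    """Return True while no deduced hypothesis has been falsified."""
    for name, supported_by_evidence in hypotheses.items():
        if not supported_by_evidence:    # one failed test falsifies the theory
            print(f"Theory falsified by: {name}")
            return False
    print("Theory not falsified: provisionally retained, never proven")
    return True

# Purely illustrative evidence flags, not real findings.
hypotheses = {
    "private providers enter the palliative care market": True,
    "medical aid schemes begin paying for palliative care": False,
}
theory_survives(hypotheses)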

4.2. Three examples of deductive reasoning

Here are three examples of deductive reasoning that I have encountered in my work.

The first example relates to the Theory of Change, which is central to the practice of non-profit organizations and social enterprises. It starts with a theory about the end-state that must be achieved (i.e. the vision) and, broadly, how this can happen (i.e. the mission). Deductive reasoning is then used to work backwards from the vision and map the key activities, outputs and outcomes that will achieve this end-state. A Theory of Change uses deductive reasoning because it starts with a theory of what can be achieved and deduces the hypotheses that must be true for it to be valid.

The second example relates to the strategic work that I have done with the association of hospices in South Africa. During this time, we developed a theory, using inductive reasoning, about how the private sector would start to compete with traditional hospices and how we should respond. We then used deductive reasoning to deduce that private commercial hospices will seek to dominate the profitable market segments as soon as medical aid schemes pay properly for palliative care. This would present a threat to hospices, since patients with medical aid cross-subsidize the services that hospices provide to poor communities. There is also the risk that hospices will consequently receive fewer bequests than before. Emerging evidence suggests that this hypothesis is true, as some businesses have recently entered this market and begun to sell their services, so our theory remains valid for now. This uses deductive reasoning because we started with the theory about competition from the private sector and unpacked the details of what must happen for it to hold.

The third example relates to randomized controlled trials (RCTs), which are based on deductive reasoning since they create testable hypotheses. Researchers then seek to falsify these hypotheses in order to test the validity of their theory that a certain type of intervention will produce a specific type of change. Examples of RCTs include:

  • testing the efficacy of Covid vaccines
  • testing whether marketing or financial training provides the greatest benefits for entrepreneurs
  • testing whether money for mobile airtime and data, and travel subsidies can help young people to find work
  • A/B testing by Instagram to test whether new features increase user engagement.

4.3. The best times to use deductive reasoning

The best time to use deductive reasoning is when there are diminishing returns to gathering more information through the inductive approach – that is, when new information adds few insights to what is already known. It is also useful when you are trying to understand the key drivers or causes of a problem or solution, as opposed to the things that are merely associated with it.

4.5. Common mistakes when using deductive reasoning

There are four common mistakes that I have noticed people make when using deductive reasoning to solve complex social problems. 

The first mistake is attempting to prove the validity of a theory by testing hypotheses that are not logically related (or are only partially related) to the theory. For example, let us assume that we were testing the market demand for a social enterprise that sells fortified food to feeding schemes and humanitarian agencies. A tempting hypothesis might be that ‘these potential customers have big annual budgets’, since this alludes to their ability to afford the food. However, this would be a poor hypothesis: a big organizational budget does not necessarily mean that they spend a lot of money on food, nor that they will want to buy the type of food that the social enterprise sells.

The second mistake is attempting to prove a theory by testing hypotheses that are not mutually exclusive (see the MECE principle), since it then becomes difficult to isolate which of the hypotheses are true. For example, let us assume that your organization runs a diversion programme to rehabilitate young offenders and is trying to understand the efficacy of its activities. It would be poor practice to compare the effectiveness of its counseling programmes on young people versus unemployed people, since these categories may overlap. Similarly, it would be unwise to hypothesize that both a diversion programme and a counseling programme are required to rehabilitate these youth, since counseling is an integral part of diversion.
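
An informal way to catch this mistake is to check candidate categories for overlap before testing them. The minimal Python sketch below, with hypothetical participant IDs and categories, flags any pair of categories that share members and any participants left uncovered, which is the essence of a MECE check.

# A minimal sketch of a MECE check on hypothetical categories:
# mutually exclusive means no participant appears in two categories;
# collectively exhaustive means every participant appears in at least one.
participants = {"p1", "p2", "p3", "p4", "p5"}

categories = {
    "young offenders": {"p1", "p2", "p3"},
    "unemployed": {"p3", "p4"},     # p3 overlaps: not mutually exclusive
}

names = list(categories)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        overlap = categories[names[i]] & categories[names[j]]
        if overlap:
            print(f"Overlap between {names[i]!r} and {names[j]!r}: {overlap}")

uncovered = participants - set().union(*categories.values())
if uncovered:
    print(f"Not collectively exhaustive; missing: {uncovered}")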

The third mistake is using hypotheses for which it is impossible to gather evidence that could prove or disprove them. For example, a small non-profit organization that runs drama workshops in communities should be cautious about hypothesizing that its workshops improve community cohesion, since such an effect is very difficult to measure.

The final mistake is trying to create an initial theory when insufficient information exists in the first place, and when inductive reasoning should therefore be used first.

4.6. Detailed example of deductive reasoning

For this example, let us assume that a large non-profit organization needs our help with a formal assessment of some pressing problems that threaten its existence.

Then let us assume that, after some initial conversations and after reviewing some documents, we used inductive reasoning to develop a theory that the organization is struggling financially and is at risk of running itself into the ground.

The following flowchart gives an example of the types of hypotheses that we might deduce from this theory.

[Flowchart: hypotheses deduced from the theory that the organization is struggling financially, including 1.1 ‘the organization is in debt’ and 1.2 ‘financial reserves are deteriorating’]

Now that we have deduced some hypotheses, we should be able to identify the type of evidence needed to determine which of these sub-hypotheses are true. For example, let’s look at the evidence and actions that we might need to prove or disprove hypothesis 1.1 (‘the organization is in debt’). We might need to do the following:

  • Review the balance sheet in the audited financial statements for the past three years and in the latest unaudited statement or management accounts.
  • Calculate the debt and current ratios over the past three financial years (a short sketch of this calculation follows the list).
  • Review the components of current liabilities and long-term liabilities.
  • Review a list of trade creditors.
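
As a small illustration of the ratio calculation mentioned in the list above, here is a minimal Python sketch using invented balance-sheet figures for three years; in practice the figures would come from the audited financial statements and management accounts.

# A minimal sketch with invented balance-sheet figures showing the two
# ratios named above, computed for three financial years.
balance_sheets = {
    2021: {"current_assets": 1_800_000, "current_liabilities": 900_000,
           "total_assets": 5_000_000, "total_liabilities": 2_100_000},
    2022: {"current_assets": 1_400_000, "current_liabilities": 1_000_000,
           "total_assets": 4_600_000, "total_liabilities": 2_400_000},
    2023: {"current_assets": 1_000_000, "current_liabilities": 1_100_000,
           "total_assets": 4_100_000, "total_liabilities": 2_700_000},
}

for year, b in balance_sheets.items():
    current_ratio = b["current_assets"] / b["current_liabilities"]
    debt_ratio = b["total_liabilities"] / b["total_assets"]
    print(f"{year}: current ratio {current_ratio:.2f}, debt ratio {debt_ratio:.2f}")
# A current ratio falling below 1 and a rising debt ratio would count as
# evidence for the hypothesis that the finances are deteriorating.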

We might discover that some of our hypotheses are valid and others are invalid. For example, Hypothesis 1.1 (‘the organization is in debt’) might currently be invalid, while Hypothesis 1.2 (‘financial reserves are deteriorating’) might be valid.

Next, we could use this feedback to refine our hypotheses and original ideas, and rewrite them as follows:

  • Hypothesis 1.1 – The organization’s assets are declining and the ratio between assets and liabilities is deteriorating overall.
  • Hypothesis 1.2 – Financial reserves are deteriorating and being used to fund the shortfall in the budget and pay creditors, and will only last 12 months at the current rate of consumption. 

Then the hypothesis tree comes together. If all the evidence supports the hypotheses, then our theory that ‘the organization is struggling financially and is at risk of running itself into the ground’ would be sound. This deductive approach would also reveal some of the causes of the problem that need to be addressed, and it would make it easier to present our findings to the board of directors.
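
For completeness, here is a minimal Python sketch of how such a hypothesis tree ‘comes together’: the theory is treated as sound only if every refined sub-hypothesis is supported. The truth values are placeholders standing in for the findings of the review described above.

# Placeholder truth values standing in for the evidence gathered above.
hypothesis_tree = {
    "The organization is struggling financially and at risk": {
        "1.1 Assets are declining and the asset/liability ratio is worsening": True,
        "1.2 Reserves are funding the budget shortfall and will last about 12 months": True,
    },
}

for theory, sub_hypotheses in hypothesis_tree.items():
    verdict = "sound" if all(sub_hypotheses.values()) else "not supported"
    print(f"{theory}: {verdict}")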

5. Conclusion

We use inductive and deductive reasoning all the time in our lives and work, both formally and informally. The strategy, policy and research that we see around us are underpinned by one of these forms of reasoning, and possibly both.

This article has explained the differences between inductive and deductive reasoning. The former assimilates observations to develop probable theories that describe the unknown or predict the future, whereas the latter tests the soundness of theories by using evidence to validate hypotheses. Both forms of reasoning are equally important; they work together to provide us with useful theories, and they have enabled the human race to be as successful as it is.

However, we should be mindful of the limitations of these two types of reasoning. When used incorrectly, they can result in improbable or unsound theories that can limit our options and distort our thinking. They can also be used nefariously to promote flawed theories for a political or geopolitical agenda.

We should also strive to use inductive and deductive reasoning more explicitly when required. I believe there is immense value in learning how to improve our reasoning – the purpose of this article. It will improve our ability to understand the complex world we live in and to make much better decisions.

6. Further reading

Here are some of the links that were the most useful in researching this topic.

  • Crafting Cases: The Definitive Guide to Issue Trees by Bruno Nogueira.
  • Deductive vs Inductive Reasoning: Make Smarter Arguments, Better Decisions, and Stronger Conclusions, posted on the FS blog.
  • The McKinsey Way by Ethan Rasiel (book).
  • The Pyramid Principle: Logic in Writing and Thinking by Barbara Minto (book).
