Validity Criteria of Knowledge

Researchers can evaluate the knowledge they produce by using different criteria of validity. Each of the epistemologies we are looking at – the positivist, the interpretativist and the constructivist – incorporates a number of validity criteria.

1. Positivist Validity Criteria

For positivism, specific criteria enable researchers to distinguish clearly between scientific and non-scientific knowledge. These criteria have evolved along with positivism, and have moved from ‘verification’ to ‘degree of confirmation’ and ‘degree of refutation’.

1.1. Verification

Early positivists applied the principle of ‘verification’: a proposition is either analytic or synthetic, and is true either by virtue of its own definition or, as the case may be, by virtue of practical experience. A synthetic proposition has meaning if, and only if, it can be verified empirically (Blaug, 1992). Verification therefore obliges researchers to establish the truth of their statements through empirical testing.

1.2. Degree of confirmation

As positivism has evolved, other criteria have supplanted verification. The term ‘degree of confirmation’ refers to the probabilistic logic proposed by Carnap. The logic of confirmation calls the certainty of truth into question. It is based on the idea that we cannot say that a proposition is universally true, but only that it is probable. We can never be sure that it is true in every case and in all circumstances. Consequently, we can only confirm it against experience, or by drawing on the results of other theories – but we will not be able to establish its truth as certain (Hempel, 1964). Carnap’s (1962) vision of science can be summed up as follows: no theory can be proved, but theories present different degrees of probability. Scientific honesty consists of only stating theories that are highly probable, or simply specifying, for each scientific theory, the factors that support it and the theory’s probability in light of these factors. A theory can only ever be probable – in fact, Carnap replaces the notion of proof with that of degree of probability. Researchers who subscribe to Carnap’s probabilistic logic are compelled to evaluate the degree of probability with which their statements are confirmed.
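Carnap’s idea can be given a rough formal shape. As a minimal sketch (the notation below is added here for illustration and is not taken from the source), the ‘degree of confirmation’ of a hypothesis h on evidence e behaves like a conditional probability:

\[
c(h, e) \;=\; P(h \mid e), \qquad 0 \le c(h, e) \le 1 .
\]

New evidence can raise or lower c(h, e), but it can never push it to certainty; a researcher working in this spirit reports how strongly the available evidence confirms a statement rather than claiming that the statement has been proved.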

1.3. Refutation

According to Popper’s principle of ‘refutation’, we can never maintain that a theory is true, but we can say it is not true – that is, that it has been refuted. The following example is a good illustration. To the question ‘Are all swans white?’ the only answer that is scientifically acceptable is ‘No’. However many white swans we have observed, we do not have the right to infer that all swans are white. Observing a single black swan is sufficient to refute this conclusion.
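In the notation of elementary logic (added here for illustration, not taken from the source), the asymmetry behind the swan example can be sketched as follows. Let H be the universal hypothesis:

\[
H :\; \forall x \,\big(\mathrm{Swan}(x) \rightarrow \mathrm{White}(x)\big).
\]

No finite list of positive instances \(\mathrm{Swan}(a_1) \wedge \mathrm{White}(a_1), \dots, \mathrm{Swan}(a_n) \wedge \mathrm{White}(a_n)\) entails H, yet a single counter-instance refutes it by modus tollens:

\[
H \wedge \mathrm{Swan}(b) \vdash \mathrm{White}(b), \qquad \text{hence} \qquad \mathrm{Swan}(b) \wedge \neg\mathrm{White}(b) \vdash \neg H .
\]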

A theory that has not been refuted is then a theory that is provisionally corroborated. The term ‘corroboration’ is important for Popper, who draws a clear distinction between it and ‘confirmation’:

By the degree of corroboration of a theory I mean a concise report evaluating the state (at a certain time t) of the critical discussion of a theory, with respect to the way it solves its problems; its degree of testability; the severity of the tests it has undergone; and the way it has stood up to these tests … The main purpose of the formulae that I proposed as definition of the degree of corroboration was to show that, in many cases, the more improbable (improbable in the sense of the calculus of probability) hypothesis is preferable.

According to this principle, a theory is scientific if it is refutable – that is, if it accepts that certain results may invalidate it. Conversely, any theory that cannot be refuted is not scientific. This includes psychoanalysis (for example, the Freudian hypothesis of the subconscious) and Marxism, along with other theories that remain valid whatever observations are made about them. Popper insists on the asymmetry between verification and invalidation. For him, there is no logic of proof, only a logic of refutation; consequently, he argues, we must construct our scientific propositions from hypotheses that can be refuted.

1.4. Logical consistency

Finally, in assessing the validity of research, positivism only recognizes as scientific those methods that respect formal logic (deductive logic). This idea is referred to as ‘logical consistency’. One test for logical consistency is to show that all of a theory’s propositions are related to one another by the rules of formal logic, or are logically deducible from the same set of premises (Lee, 1991). Positivism refuses to consider inductive logic as scientific. It argues that the only logic that enables us to reproduce reality objectively is deductive logic.

Inductive logic enables us to move from particular observations to general statements. Deductive logic, on the other hand, uses true premises and the rules of formal inference to establish the truth-value of a proposition (or its non-refutation). These two types of logic will be examined in greater depth in Chapter 3.
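Schematically (the swan wording is reused here purely as an illustration, not drawn from the source at this point), the two movements run in opposite directions. Induction generalizes from particular observations, and its conclusion is not guaranteed by its premises:

\[
\mathrm{White}(s_1),\, \mathrm{White}(s_2),\, \dots,\, \mathrm{White}(s_n) \;\leadsto\; \forall x\,\big(\mathrm{Swan}(x) \rightarrow \mathrm{White}(x)\big).
\]

Deduction, by contrast, derives a particular conclusion from general premises, and that conclusion is true whenever the premises are:

\[
\forall x\,\big(\mathrm{Swan}(x) \rightarrow \mathrm{White}(x)\big),\; \mathrm{Swan}(a) \;\vdash\; \mathrm{White}(a).
\]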

2. Interpretativist Validity Criteria

Interpretativists and constructivists both question the primacy of deductive logic, and the specific and universal character of the validity criteria proposed by positivists. For interpretativists, validity criteria are criteria of trustworthiness. Lincoln and Guba (1985) identify these as credibility, transferability, dependability and confirmability.

Credibility

How can one establish confidence in the ‘truth’ of the findings of a particular inquiry for the subjects with which and the context in which the inquiry was carried out? When we consider the assumption of multiple constructed realities, there is no ultimate benchmark to which one can turn for justification – whether in principle or by a technical adjustment via the falsification principle. Reality is now a multiple set of mental constructions … To demonstrate ‘truth value’ we must show that the reconstructions that have been arrived at via the inquiry are credible to the constructors of the original multiple realities … The implementation of the credibility criterion becomes a twofold task: first, to carry out the inquiry in such a way that the probability that the findings will be found to be credible is enhanced, and, second, to demonstrate the credibility of the findings by having them approved by the constructors of the multiple realities being studied.

(Lincoln and Guba, 1985: 295-6)

Transferability

How can one determine the extent to which the findings of a particular inquiry have applicability in other contexts or with other subjects? Interpretativists make the assumption that at best only working hypotheses may be abstracted, the transferability of which is an empirical matter, depending on the degree of similarity between sending and receiving contexts. Transferability inferences cannot be made by an investigator who knows only the sending context.

(Lincoln and Guba, 1985: 297)

Dependability

How can one determine whether the findings of an inquiry would be repeated if the inquiry were replicated with the same (or similar) subjects in the same (or similar) context? In the conventional paradigm, for this criterion there must be something tangible and unchanging ‘out there’ that can serve as a benchmark if the idea of replication is to make sense. An interpretativist sees reliability as part of a larger set of factors that are associated with the observed changes. Dependability takes into account both factors of instability and factors of phenomenal or design-induced change.

(Lincoln and Guba, 1985: 299)

Confirmability

How can we establish the degree to which the findings of an inquiry are determined by the subjects and conditions of the inquiry and not by the biases, motivations, interests, or perspectives of the inquirer? An interpretativist prefers a qualitative definition of this criterion. This definition removes the emphasis from the investigator (it is no longer his or her objectivity that is at stake) and places it where, as it seems to the investigator, it ought more logically to be: on the data themselves. The issue is no longer the investigator’s characteristics but the characteristics of the data: are they or are they not confirmable?

(Lincoln and Guba, 1985: 300)

3. Constructivist Validity Criteria

Constructivists question the classic criteria proposed by positivists. They contest the verification-refutation alternative, saying verification is illusory and refutation inadequate. It is illusory, they say, to devise a scientific process using verification criteria when one’s vision of the world is based on phenomenological and intentionalist hypotheses. It is inadequate to devise a scientific process using refutability criteria when one defends the constructed and transforming nature of research projects in disciplines such as organizational science.

In constructivism, criteria for validating knowledge are still very much a topic of debate. However, while constructivist epistemology refuses to acknowledge any single validity criterion, certain authors propose sources for validating knowledge. We will present two of them here: the adequation (or suitability) criterion proposed by Glasersfeld (1984), and the ‘teachability’ criterion defended by Le Moigne (1995).

Adequation Glasersfeld (1984), who is considered a radical constructivist, holds that knowledge is valid when it fits a given situation. He illustrates this principle using the metaphor of a key. A key fits if it opens the lock it is supposed to open. Here, suitability refers to a capacity: that of the key and not of the lock. Thanks to professional burglars, we are only too well aware that many keys cut very differently from our own may nonetheless open our door!

Teachability The criteria relating to teachable knowledge can be expressed in terms of reproducibility, intelligibility, and constructibility. In Le Moigne’s (1995) view, it is no longer enough for model-makers to demonstrate knowledge. They have to show that this knowledge is both constructible and reproducible, and therefore intelligible. It is important for model-makers to be scrupulous about explaining their aims when constructing teachable knowledge.

The validity criteria applied by constructivists do not impose a single method of constructing knowledge, but can accept and defend a multiplicity of methods. Constructivists do not see deductive reasoning as the only valid method of reasoning; they also accept other methods, such as analogy and metaphor.

Source: Thietart, Raymond-Alain et al. (2001), Doing Management Research: A Comprehensive Guide, SAGE Publications Ltd, 1st edition.
