
Inductive Logic

A Thematic Compilation by Avi Sion


22. Logical Activities

 

1.    Logical Attitudes

Logic is usually presented for study as a static description and prescription of forms of propositions and arguments, so that we forget that it is essentially an activity, a psychic act. Even the three Laws of Thought have to be looked at in this perspective, to be fully understood. To each one of them, there corresponds a certain mental attitude, policy or process…

To the Law of Identity, corresponds the attitude of acknowledgement of fact, i.e. of whatever happens to be fact in the given context. Here, the term ‘fact’ is meant broadly to include the fact of appearance, the fact of reality or illusion, or even the fact of ignorance or uncertainty. To it also corresponds attention to eventual conflicts (contradictions, incompatibilities, paradoxes, tensions) and gaps (questions, mysteries) and, by extension, to other forms of oppositional relations.

To the Law of Non-contradiction, corresponds the policy of rejection of contradictions. Contradictions occur in our knowledge through errors of processing of some kind (e.g. over-generalization, uncontrolled adduction, unsuccessful guessing), which is ultimately due to the gradual presentation of information to the human observer and to his limited, inductive cognitive means. The Law is an insight that such occurrence, once clearly realized, is to be regarded not as a confirmation that contradiction can occur in reality, but as a signal that a mere illusion is taking place that must be rejected.

To the Law of the Excluded Middle, corresponds the process of searching for gaps or conflicts in knowledge and pursuing their resolution. This is the most dynamic cognitive activity, an important engine in the development of knowledge. And when a contradiction or even an uncertainty arises, it is this impulse of the human thinking apparatus that acts to ask and answer the implicit questions, so as to maintain a healthy harmony in one’s knowledge.

Thus, the exercise of logic depends very much on the human will, to adopt an attitude of factualism and resolve to check for consistency, look for further information and issues, and correct any errors found. The psychological result of such positive practices, coupled with opportunity and creativity, is increasing knowledge and clarity. The contraries of the above are avoidance or evasion of fact, acceptance of contradictions, and stupidity and laziness. The overall result of such illogical practices is ignorance and confusion.

Whereas ‘consciousness’ refers to the essentially static manifestation of a Subject-Object relation, ‘thought’ is an activity with an aim (knowledge and decision-making). The responsibility of the thinker for his thought processes exists not only at the fundamental level of the three Laws, but at every level of detail, in every cognitive act. Reasoning is never mechanical. To see what goes on around us, we must turn our heads and focus our eyes. To form a concept or formulate a proposition or construct an argument or make an experiment or test a hypothesis, we have to make an effort. The more attentive and careful our cognitive efforts, the more successful they are likely to be.

 

2.    Principles of Adduction[1]

The concepts and processes of adduction are fundamental tools of human cognition, which only started becoming clear in recent centuries thanks to philosophers like Francis Bacon or Karl Popper. Even so, many people are still today not aware of this important branch of logic. Logic is the art and science of discourse. Like all logical principles, those of adduction are firstly idealized descriptions of ordinary thinking, and thereafter prescriptions for scientific thought.

Anything we believe or wonder about or disbelieve may be considered a theory. Everything thinkable has some initial credibility at first glance, but we are for this very reason required to further evaluate it, otherwise contradictories would be equally true! Adduction is the science of such evaluation: it tells us how we do and should add further credibility to a theory or its negation. To adduce evidence is to add logical weight to an idea.

A theory T is said to predict something P, if T implies P (but does not imply nonP). A theory T may predict the negation of something, i.e. nonP; we might then say that T disclaims P; in such case, T implies nonP (but does not imply P). A theory T may not-predict P, or not-predict nonP, which are the same situation by our definition (i.e. where T does not imply P and does not imply nonP); we might then say that T is neutral to P (and to nonP).[2]
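Schematically (writing ‘⇒’ for ‘implies’), these three relations may be set out as follows:

    \[
    \begin{aligned}
    T \text{ predicts } P \;&:\quad (T \Rightarrow P) \ \text{and}\ \neg(T \Rightarrow \neg P)\\
    T \text{ disclaims } P \;&:\quad (T \Rightarrow \neg P) \ \text{and}\ \neg(T \Rightarrow P)\\
    T \text{ is neutral to } P \;&:\quad \neg(T \Rightarrow P) \ \text{and}\ \neg(T \Rightarrow \neg P)
    \end{aligned}
    \]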

A theory T always has at least one alternative, nonT, at least to start with[3]. Normally, we do not have only one theory T and its negation nonT to consider, but many theories T1, T2, T3, etc. If any of these alternatives are compatible, they are improperly formulated. Properly formulated alternatives are not merely distinct but incompatible[4]. Let us henceforth suppose we are dealing with such contraries or contradictories, so that the alternatives in the disjunction ‘T1 or T2 or T3 or...’ are mutually exclusive[5].

Theories depend for their truth on internal consistency and on consistency with all other knowledge, both theoretical and empirical. Here, we are concerned in particular with estimating the truth, or falsehood, of theories with reference to their predictions or lack of them.

By correct (or true) prediction we mean that T predicts P and P indeed occurs, or that T disclaims P and nonP indeed occurs.

By incorrect (or false) prediction is meant that T predicts P whereas nonP is found to occur, or that T disclaims P whereas P is found to occur.

Ultimately, occurrences like P or nonP on which we base our judgments have to be mere phenomena – things which appear in our experience, simply as they appear[6].

If a theory seems true at first sight, it is presumably because its alternative(s) was or were quickly eliminated for some reason – for example, due to inconsistency, or because of obviously untenable predictions. If no alternative was even considered, then the first theory – and its alternative(s) – must be subjected to consistency checks and empirical tests. By the latter term we refer to observation (which may be preceded by experiment) of concrete events (and eventually some of their abstract aspects), to settle issues raised by conflicting theories.

It is conceivable that only one theory concerning some issue be at all thinkable; but this situation must not be confused with that of having only succeeded in constructing one theory thus far. For it also happens that we have no theory for the issue at hand (at present and perhaps forever), and we do not conclude from this that there is no explanation (we maintain that there is one, in principle). It must likewise be kept in mind that having two or more theories for something does not ensure that we have all the possible explanations. We may later (or never) find some additional alternative(s), which may indeed turn out to be more or the most credible.

Alternative theories may have some predictions in common; indeed, they necessarily do (if only in implying existence, consciousness and similar generalities). More significant are the differences between alternative theories: that one predicts what another disclaims, or that one predicts or disclaims what another is neutral to; because it is with reference to such differences, and empirical tests to resolve issues, that we can confirm, undermine, select, reject or establish theories.[7]

If a theory correctly predicts something, which at least one alternative theory was neutral to, then the first theory is somewhat confirmed, i.e. it effectively gains some probability of being true (lost by some less successful alternative theory). If a theory is neutral to something that an alternative theory correctly predicted, then the first theory is somewhat undermined, i.e. it effectively loses some probability of being true (gained by a more successful alternative theory). If all alternative theories equally predict an event or all are equally neutral to it, then each of the theories may be said to be unaffected by the occurrence.

Thus, confirmation is more than correct prediction and undermining more than neutrality. By our definitions, these terms are only applicable when alternative theories behave differently, i.e. when at least one makes a correct prediction and at least one is neutral to the occurrence concerned. If all alternatives behave uniformly in that respect, they are unaffected by the occurrence, i.e. their probability ratings are unchanged. Thus, confirmation (strengthening) and undermining (weakening) are relative, depending on comparisons and contrasts between theories.[8]

Furthermore, we may refer to degrees of probability, (a) according to which and how many theories are confirmed or undermined with regard to a given occurrence, and (b) according to the number of occurrences that affect our set of theories. If we count one ‘point’ per such occurrence, then (a) in each event the theory or theories confirmed share the point, i.e. participate in the increased probability, while that or those undermined get nothing; and (b) over many instances, we sum the shares obtained by each of the theories and thus determine their comparative weights (thus far in the research process). The theory with the most accumulated such points is the most probable, and therefore the one to be selected.[9]
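As a rough illustration only (a minimal sketch in Python, with hypothetical names and toy data, not a definitive model), the point-counting just described may be pictured as follows: for each telling occurrence, the theories confirmed share one point, the theories undermined get nothing, and the running totals give the comparative weights.

    # Minimal sketch of the point-counting scheme described above (illustrative only).
    # Each theory takes a stance on each phenomenon: "predict", "disclaim" or
    # "neutral"; an occurrence records whether the phenomenon was in fact observed.

    def stance_is_correct(stance, observed):
        """Correct prediction: the theory predicted P and P occurred, or
        disclaimed P and nonP occurred."""
        return (stance == "predict" and observed) or (stance == "disclaim" and not observed)

    def score_theories(predictions, occurrences):
        scores = {name: 0.0 for name in predictions}
        for phenomenon, observed in occurrences:
            stances = {name: preds.get(phenomenon, "neutral")
                       for name, preds in predictions.items()}
            correct = [n for n, s in stances.items() if stance_is_correct(s, observed)]
            neutral = [n for n, s in stances.items() if s == "neutral"]
            # The occurrence affects the ratings only when at least one theory is
            # correct and at least one is neutral (confirmation vs. undermining);
            # otherwise all the theories are unaffected by it.
            if correct and neutral:
                share = 1.0 / len(correct)   # confirmed theories share the point
                for name in correct:
                    scores[name] += share    # undermined (neutral) theories get nothing
        return scores

    # Hypothetical example: T1 and T2 both predict P1; only T1 predicts P2.
    predictions = {"T1": {"P1": "predict", "P2": "predict"},
                   "T2": {"P1": "predict"}}
    occurrences = [("P1", True), ("P2", True)]
    print(score_theories(predictions, occurrences))   # {'T1': 1.0, 'T2': 0.0}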

Note that it may happen that two alternative theories T and nonT, or a set of theories T1, T2, T3..., are in equilibrium, because each theory is variously confirmed by some events and undermined by others, and in the end their accumulated points happen to be equal. This is a commonplace impasse, especially because in practice we rarely do, or even can, assign and compute probability ratings as accurately as the ideal model suggested above. We often end up relying on judgment calls, which people make with varying success. But of course, such decisions are only required when we have to take immediate action; if we are under no pressure, we do not have to take a stand one way or the other.

If any prediction of a theory is incorrect, then the theory is rejected, i.e. to be abandoned and hopefully replaced, by another theory or a modified version of the same (which is, strictly speaking, another theory), as successful in its predictions as the previous yet without the same fault. The expression ‘trial and error’ refers to this process. Rejection is effective disproof, or as near to it as we can get empirically. It follows that if T incorrectly predicts P, then nonT is effectively proved[10]. So long as a theory seemingly makes no incorrect predictions, it is tolerated by the empirical evidence as a whole. A tolerated theory is simply not-rejected thus far, and would therefore be variously confirmed, undermined, unaffected.

A theory is finally established only if it was the only theory with a true prediction while all alternative theories made the very opposite prediction. In short, the established theory had an exclusive implication of the events concerned. Clearly, if nonT is rejected, then T is our only remaining choice; similarly, if all alternatives T2, T3... are rejected, then the leftover T1 is established[11]. We may then talk of inductive proof or vindication. Such proof remains convincing only insofar as we presume that our list of alternative theories is complete and their respective relations to their predictions correct, as well as that the test was indeed fully empirical and did not conceal certain untested theoretical assumptions. Proof is deductive only if the theory’s contradictory is self-contradictory, i.e. if the theory is self-evident.
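As a companion to the sketch above (again purely illustrative, with the same hypothetical conventions), rejection and establishment can be pictured thus: any incorrect prediction eliminates a theory, and if exactly one listed alternative survives all the tests it is thereby inductively established, always relative to the assumption that the list of alternatives was complete.

    # Illustrative sketch of rejection and establishment (not a definitive model).
    # Reuses the stance conventions above: "predict", "disclaim", "neutral".

    def is_rejected(theory_predictions, occurrences):
        """A theory is rejected as soon as one of its predictions proves incorrect."""
        for phenomenon, observed in occurrences:
            stance = theory_predictions.get(phenomenon, "neutral")
            if stance == "predict" and not observed:   # predicted P, nonP occurred
                return True
            if stance == "disclaim" and observed:      # disclaimed P, P occurred
                return True
        return False

    def established_theory(predictions, occurrences):
        """Return the sole surviving theory, if any; otherwise None.
        'Established' only relative to the listed alternatives."""
        survivors = [name for name, preds in predictions.items()
                     if not is_rejected(preds, occurrences)]
        return survivors[0] if len(survivors) == 1 else None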

Once a theory is selected on the basis of probabilities or established because it is the last to withstand all tests, it retains this favored status until, if ever, the situation changes, i.e. as new evidence appears or is found, or new predictions are made, or new theories are constructed.

It is important to note that, since new theories may enter the discussion late in the day, events which thus far had no effect on the relative probabilities of alternative theories, or on a lone standing theory, may, with the arrival on the scene of the additional player(s), become significant data. For that reason, in the case of selection, even though correct predictions or neutralities may not previously have resulted in further confirmations or undermining, they may suddenly be of revived interest[12]. Likewise, in the case of establishment, we have to continue keeping track of the theory’s correct predictions or neutralities, for they may affect our judgments at a later stage.

Certain apparent deviations from the above principles must be mentioned and clarified:

Note that well-established (consistent and comparatively often-confirmed) large theories are sometimes treated as ‘proofs’ for narrower hypotheses. They are thus regarded as equivalent to empirical evidence in their force. This gives the appearance that ‘reason’ is on a par with experience with respect to evidence – but it is a false impression.

More specifically: say that (a) I guessed or ‘intuited’ the measure of so and so to be x, and (b) I calculated the same to be x. Both (a) and (b) are ‘theories’, which can in fact be wrong; yet (a), being an isolated theory (or offhand guess), is considered confirmed or rejected by (b), because the latter, being broader in scope (e.g. a mathematical theorem), would require much more, and more complex, work to be put in doubt.

The more complicated the consequences of rejecting an established hypothesis, the more careful we are about doing such a thing, preferring to put the pressure on weaker elements of our knowledge first.

Note also here the following epistemological fallacy: we often project an image, and then use this imagined event as an empirical datum, in support of larger hypotheses. In other words, speculations are layered: some are accepted as primary, and then used to ‘justify’ more removed, secondary speculations. By being so used repeatedly, the primary speculations are gradually given an appearance of solidity they do not deserve.

The term ‘fact’ is often misused or misunderstood. We must distinguish between theory-generated, relative fact and theory-supporting, absolute fact.

'Facts' may be implied by one's theory, in the sense of being predicted with the expectation that they will be found true, in which event the theory concerned would be buttressed. Such 'facts' are not yet established, or still have a low probability rating. We may call such a 'fact' a supposed fact. It is, properly speaking, an item within one's theory, one claimed to be distinguished by being empirically testable, one that at first glance is no less tentative than the theory that implied it.

In contrast, established fact refers to propositions that are already a source of credibility for the theory in question, being independently established. The logical relation of implication (theory to fact) is the same, but the role played by the alleged fact is different. Here, a relatively empirical/tested proposition actually adds credibility to a proposed theory.

 

3.    Generalization is Justifiable

The law of generalization is a special case of adductive logic, one much misunderstood and maligned.

In generalization, we pass from a particular proposition (such as: some X are Y) to a general one (all X are Y). The terms involved in such case are already accepted, either because we have observed some instances (i.e. things that are X and things that are Y) or because in some preceding inferences or hypotheses these terms became part of our context. These terms already overlap to at least a partial extent, again either thanks to an observation (that some things are both X and Y) or by other means. The generalization proper only concerns the last lap, viz. on the basis that some X are Y, accepting that all X are Y. There is no deductive certainty in this process; but it is inductively legitimate.

The general proposition is strictly speaking merely a hypothesis, like any other. It is not forever fixed; we can change our minds and, on the basis of new data (observed or inferred), come to the alternate conclusion that ‘some X are not Y’ – this would simply be particularization. Like any hypothesis, a generalization is subject to the checks and balances provided by the principles of adduction. The only thing that distinguishes this special case from others is that it deals with already granted terms in an already granted particular proposition, whereas adduction more broadly can be used to invent new terms, or to invent particular as well as general propositions. To criticize generalization by giving the impression that it is prejudicial and inflexible is to misrepresent it. We may generalize, provided we remain open-minded enough to particularize should our enlarged database require such correction.

Some criticize generalization because it allows us to make statements about unobserved instances. To understand the legitimacy of generalization, one should see that in moving from ‘some X are Y’ to ‘all X are Y’ one remains within the same polarity of relation (i.e. ‘are’, in this case); whereas if one made the opposite assumption, viz. that some of the remaining, unobserved instances of X are not (or might not be) Y, one would be introducing a much newer, less justified relation. So far, we have only encountered Xs that are Y; what justification do we have for supposing that there might be Xs that are not Y? The latter is more presumptive than assuming a continued uniformity of behavior.

Note this argument well. When we generalize from ‘some X are Y’ to ‘all X are Y’, we only change the quantity involved. Whereas if, given that some X are Y, we supposed that some other X are also Y and some are not Y, we would change both the quantity and the polarity, for we would not only be speculating about the existence of Xs that are not Y, but also saying something about all X (those known to be Y, those speculated to also be Y, and those speculated to be not Y). Thus, a preference in principle for particularization over generalization would be a more speculative posture.
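The contrast may be displayed schematically, in one possible modern rendering (with ‘Xx’ for ‘x is X’ and ‘Yx’ for ‘x is Y’): generalization keeps the polarity and changes only the quantity, whereas the rival supposition introduces a new, opposite polarity as well.

    \[
    \exists x\,(Xx \wedge Yx) \;\longrightarrow\; \forall x\,(Xx \rightarrow Yx)
    \qquad \text{(generalization: quantity changed, polarity kept)}
    \]
    \[
    \exists x\,(Xx \wedge Yx) \;\longrightarrow\; \exists x\,(Xx \wedge Yx)\ \wedge\ \exists x\,(Xx \wedge \neg Yx)
    \qquad \text{(rival supposition: a new, opposite polarity added)}
    \]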

Whence, generalization is to be recommended – until and unless we find reason to particularize. Of course, the degree of certainty of such process is proportional to how diligently we have searched for exceptions and not found any.

To those who might retort that an agnostic or problematic position about the unobserved cases would be preferable, we may reply as follows. To say that is to suggest that “man is unable to know generalities.” But such a statement would be self-contradictory, since it is itself a claim to generality. How do these critics claim to have acquired knowledge of this very generality? Do they claim special privileges or powers for themselves? It logically follows that they implicitly admit that man (or some humans, themselves at least) can know some generalities, if only this one (that ‘man can know some generalities’). Only this position is self-consistent, note well! If we admit some generality possible (in this case, generality known by the logic of paradoxes), then we can more readily in principle admit more of it (namely, by generalization), provided high standards of logic are maintained.

Moreover, if we admit that quantitative generalization is justifiable, we must admit in principle that modal generalization is so too, because they are exactly the same process used in slightly different contexts. Quantitative generalization is what we have just seen, the move from ‘some X are Y’ to ‘all X are Y’, i.e. from some instances of the subject X (having the predicate Y) to all instances of it. Modal generalization is the move from ‘(some or all) X are in some circumstances Y’ to ‘(some or all) X are in all circumstances Y’, i.e. from some circumstances in which the XY conjunction appears (potentiality) to all eventual surrounding circumstances (natural necessity). It is no different a process, save that the focus of attention is the frequency of circumstances instead of instances. We cannot argue against natural necessity, as David Hume tried, without arguing against generality. Such a skeptical position is in either case self-defeating, being itself a claim to general and necessary knowledge!
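The parallel may be displayed schematically, using ‘◇’ and ‘□’ here merely as shorthand for ‘in some circumstances’ and ‘in all circumstances’ respectively (the quantifier over instances being held fixed):

    \[
    \text{quantitative:}\quad \exists x\,(Xx \wedge Yx) \;\longrightarrow\; \forall x\,(Xx \rightarrow Yx)
    \qquad
    \text{modal:}\quad \Diamond(X \text{ is } Y) \;\longrightarrow\; \Box(X \text{ is } Y)
    \]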

Note that the arguments proposed above in favor of the law of generalization are consistent with that law, but not to be viewed as an application of it. They are logical insights, proceeding from the forms taken by human thought. That is to say, while we induce the fact that conceptual knowledge consists of propositional forms with various characteristics (subject, copula, predicate; polarity, quantity, modality; categorical, conditional), the analysis of the implications on reasoning of such forms is a more deductive logical act.

Thus, generalization in all its forms, properly conceived and practiced, i.e. including particularization where appropriate, is fully justified as an inductive tool. It is one instrument in the arsenal of human cognition, a very widely used and essential one. Its validity in principle is undeniable, as our above arguments show.

 

4.    Syllogism Adds to Knowledge

People generally associate logic with deduction, due perhaps to the historic weight of Aristotelian logic. But closer scrutiny shows that human discourse is largely inductive, with deduction as but one tool among others in the toolbox, albeit an essential one. This is evident even in the case of Aristotelian syllogism.

A classic criticism of syllogistic logic (by J. S. Mill and others) is that it is essentially circular argument, which adds nothing to knowledge, since (in the first figure) the conclusion is already presumed in the major premise. For example:

 

All men are mortal (major premise)

Caius is a man (minor premise)

therefore, Caius is mortal (conclusion)
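In modern predicate-logic notation, the same argument may be rendered:

    \[
    \forall x\,(\mathit{Man}(x) \rightarrow \mathit{Mortal}(x)),\quad \mathit{Man}(\mathit{Caius}) \;\vdash\; \mathit{Mortal}(\mathit{Caius})
    \]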

 

But this criticism paints a misleading picture of the role of the argument, due to the erroneous belief that universal propositions are based on “complete enumeration” of cases[13]. Let us consider each of the three propositions in it.

Now, our major premise, being a universal proposition, may be either:

(a) axiomatic, in the sense of a self-evident proposition (one whose contradictory is self-contradictory, i.e. paradoxical), or

(b) inductive, in the way of a generalization from particular observations or a hypothesis selected by adduction, or

(c) deductive, in the sense of inferred by eduction or syllogism from one of the preceding.

If our major premise is (a), it is obviously not inferred from the minor premise or the conclusion. If (b), it is at best probable, and that probability could only be incrementally improved by the minor premise or conclusion. And if it is (c), its reliability depends on the probability of the premises in the preceding argument, which will reclassify it as (a) or (b).

Our minor premise, being a singular (or particular) proposition, may be either:

(a) purely empirical, in the sense of evident by mere observation (such propositions have to underlie knowledge), or

(b) inductive, i.e. involving not only observations but a more or less conscious complex of judgments that include some generalization and adduction, or

(c) deductive, being inferred by eduction or syllogism from one of the preceding.

If our minor premise is (a), it is obviously not inferred from any other proposition. If (b), it is at best probable, and that probability could only be incrementally improved by the conclusion. And if it is (c), its reliability depends on the probability of the premises in the preceding argument, which will reclassify it as (a) or (b).

It follows from this analysis that the putative conclusion was derived from the premises and was not used in constructing them. In case (a), the conclusion is as certain as the premises. In case (b), the putative conclusion may be viewed as a prediction derived from the inductions involved in the premises. The conclusion is in neither case the basis of either premise, contrary to the said critics. The premises were known temporally before the conclusion was known.

The deductive aspect of the argument is that, granting the premises, the conclusion would follow. But the inductive aspect is that the conclusion is no more probable than the premises. Since the premises are inductive, the conclusion is so too, even though their relationship is deductive. The purpose of the argument is not to repeat information in the premises, but to verify that the premises are not too broad. The conclusion will be tested empirically; if it is confirmed, it will strengthen the premises and broaden their empirical basis; if it is rejected, it will cause rejection of one or both of the premises.

In our example, conveniently, Caius couldn’t be proved to be mortal, although apparently human, till he was dead. While he was alive, therefore, the generalization in the major premise couldn’t be based on Caius’ mortality. Rather, we could assume Caius mortal (with some probability – a high one in this instance) due to the credibility of the premises. When, finally, Caius died and was seen to die, he joined the ranks of people adductively confirming the major premise. He passed from the status of reasoned case to that of empirical case.

Thus, the said modern criticism of syllogism (and by extension, other forms of “deductive” argument) is not justified. Syllogism is a deductive procedure all right, but it is usually used in the service of inductive activities. Without our ability to establish deductive relations between propositions, our inductive capabilities would be much reduced. All pursuit of knowledge is induction; deduction is one link in the chain of the inductive process.

It should be noted that in addition to the above-mentioned processes involved in syllogism, we have to take into account yet deeper processes that are tacitly assumed in such argumentation. For instance, terms imply classification, which implies comparison, which mostly includes a problematic reliance on memory (insofar as past and present cases are compared), as well as perceptual and conceptual powers, and which ontologically raises the issue of universals. Or again, prediction often refers to future cases, and this raises philosophical questions, like the nature of time.

The approach adopted above may be categorized as more epistemological than purely logical. It was not sufficiently stressed in my Future Logic.

 

5.    Concept Formation

Many philosophers give the impression that a concept is formed simply by pronouncing a clear definition and then considering what referents it applies to. This belief gives rise to misleading doctrines, like Kant’s idea that definitions are arbitrary and tautologous. For this reason, it is important to understand more fully how concepts arise in practice[14]. There are in fact two ways concepts are formed:

Deductive concepts. Some concepts indeed start with reference to a selected attribute found to occur in some things (or invented, by mental conjunction of separately experienced attributes). The attribute defines the concept once and for all, after which we look around and verify what things it applies to (if any, in the case of inventions) and what things lack it. Such concepts might be labeled ‘deductive’, in that their definition is fixed. Of course, insofar as such concepts depend on experiential input (observation of an attribute, or of the attributes imagined conjoined), they are not purely deductive.

Note in passing the distinction between deductive concepts based on some observed attribute(s), and those based on an imagined conjunction of observed attributes. The former necessarily have some real referents, whereas the latter may or may not have referents. The imagined definition may turn out by observation or experiment to have been a good prediction; or nothing may ever be found that matches what it projects. Such fictions may of course have from the start been intended for fun, without expectation of concretization; but sometimes we do seriously look for corresponding entities (e.g. an elementary particle).

Inductive concepts. But there are other sorts of concepts, which develop more gradually and by insight. We observe a group of things that seem to have something in common, we know not immediately quite what. We first label the group of things with a distinct name, thus conventionally binding them together for further consideration. This name has certain referents, more or less recognizable by insight, but not yet a definition! Secondly, we look for the common attribute(s) that may be used as definition, so as to bind the referents together in our minds in a factual (not conventional, but natural) way. The latter is a trial and error, inductive process.

We begin it by more closely observing the specimens under consideration, in a bid to discern some of their attributes. One of these attributes, or a set of them, may then stand out as common to all the specimens, and be proposed as the group’s definition. Later, this assumption may be found false, when a previously unnoticed specimen is taken into consideration, which intuitively fits into the group, but does not have the attribute(s) required to fit into the postulated definition. This may go on and on for quite a while, until we manage to pinpoint the precise attribute or cluster of attributes that can fulfill the role of definition.
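As a rough illustrative sketch (in Python, with hypothetical names and toy data), this trial-and-error search can be pictured as follows: propose as definition the attributes common to the specimens examined so far, then revise the proposal whenever a new specimen that intuitively belongs to the group lacks one of the postulated attributes.

    # Toy sketch of the inductive search for a definition (illustrative only).
    # Each specimen is represented as the set of attributes so far discerned in it.

    def propose_definition(specimens):
        """Propose as 'definition' the attributes common to all specimens seen so far."""
        common = set(specimens[0])
        for attributes in specimens[1:]:
            common &= set(attributes)
        return common

    group = [{"has_wings", "flies", "lays_eggs"},
             {"has_wings", "flies", "sings"}]
    definition = propose_definition(group)        # {'has_wings', 'flies'}

    # A newly noticed specimen that intuitively fits the group but lacks one of
    # the postulated defining attributes forces a revision of the definition:
    new_specimen = {"has_wings", "lays_eggs"}
    if not definition <= new_specimen:
        group.append(new_specimen)
        definition = propose_definition(group)    # revised: {'has_wings'}
    print(definition)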

I would say that the majority of concepts are inductive, rather than deductive. That is, they do not begin with a clear and fixed definition, but start with a vague notion and gradually tend towards a clearer concept. It is important for philosophers and logicians to remember this fact.

 

6.    Empty Classes

The concept of empty or null classes is very much a logical positivist construct. According to that school, you have but to ‘define’ a class, and you can leave to later determination the issue as to whether it has referents or is ‘null’. The conceptual vector is divorced from the empirical vector.

What happens in practice is that an imaginary entity (or a complex of experience, logical insight and imagination) is classified without due notice of its imaginary aspect(s). A budding concept is prematurely packaged, one could say, or inadequately labeled. Had we paid a little more attention or made a few extra efforts of verification, we would have quickly noted the inadequacies or difficulties in the concept. We would not have ‘defined’ the concept so easily and clumsily in the first place, and thus not found it to be a ‘null class’.

One ought not, or as little as possible, build up one’s knowledge by the postulation of fanciful classes, to be later found ‘empty’ of referents. One should rather seek to examine one’s concepts carefully from the start. Though of course in practice the task is rather to re-examine seemingly cut-and-dried concepts.

I am not saying that we do not have null classes in our cognitive processes. Quite the contrary, we have throughout history produced classes of imaginary entities later recognized as non-existent. Take ‘Pegasus’ – I presume some of the people who imagined this entity believed it existed, or perhaps children do for a while. They had an image of a horse with wings, but eventually found it to be a myth.

However, as a myth, it survives, as a receptacle for thousands of symbolizations or playful associations, which perhaps have a function in the life of the mind. It is thus very difficult to call ‘Pegasus’ a null-class. Strictly speaking, it is one, since there were never ‘flying horses’. But in another sense, as the recipient of every occasion on which the word Pegasus is used, or the image of a flying horse is mentally referred to, it is not an empty class. It is full of incidental ‘entities’, which are not flying horses but have to do with the names or images of the flying horse – events of consciousness which are rather grouped by a common symbol.

Mythical concepts in this sense are discussed by Michel Foucault in his Order of Things.

We can further buttress the non-emptiness of imaginary concepts by reminding ourselves that today’s imaginations may tomorrow turn out to have been realistic. Or, getting more philosophical, we can still today imagine a scenario for ourselves, consistent with all experience and logical checks, in which ‘Pegasus’ has a place as a ‘real’ entity, or a concept with real referents. Perhaps one day, as a result of genetic manipulations.

Another example interesting to note is that of a born-blind person who, supposedly lacking even imaginary experience of sights, talks of shape or color. Such words are, for that person, purely null-classes, since they are based not on any idea, inner any more than outer, as to what they are intended to refer to, but on mere hearsay and mimicry. Here again, some surgical operation might conceivably give that person sight, at which time the words would acquire meaning.

But of course, there are many concepts in our minds, at all times, which are bound to be out of phase with the world around since we are cognitively limited anyway. It follows that the distinction here suggested, between direct reference and indirect (symbolic – verbal or pictorial) reference, must be viewed as having gradations, with seemingly direct or seemingly indirect in-betweens.

Furthermore, we can give the cognitive advice that one should avoid conceptualization practices that unnecessarily multiply null-classes (a sort of corollary of Ockham’s Razor). Before ‘defining’ some new class, do a little research and reflection; it is a more efficient approach in the long run.

One should also endeavor to distinguish between ‘realistic’ concepts and ‘imaginary’ concepts, whenever possible, so that though the latter be null classes strictly speaking, their mentally subsisting elements, the indirect references, may be registered in a fitting manner. Of course, realistic concepts may later be found imaginary and vice-versa; we must remain supple in such categorizations.

Imaginary concepts are distinguished as complexes involving not only perception and conception, but also creativity. The precise role of the latter faculty must be kept in mind. We must estimate the varying part played by projection in each concept over time. This, of course, is nothing new to logic, but a restatement for this particular context of something well known in general.

 

Drawn from Phenomenology (2003), Chapter 7:3,1-2,4,6.

 
 

[1]             This essay was written back in 1990, soon after I completed Future Logic, so that I could not include its clarifications in that book. All the other topics in this chapter were developed later, in 1997.

[2]             A theory that implies both P and nonP is inconsistent and therefore false. If that result seems inappropriate, then the claim that T implies P or that T implies nonP or both must be reviewed.

[3]             This alternative is incompatible with it, i.e. they cannot both be true.

[4]             For example, 'it is white' and 'it is black' are too vague to be incompatible. We might not realize this immediately, till we remember that some things are both black and white, i.e. partly the one and partly the other. Then we would say more precisely 'it is white and not black' or 'it is wholly black', to facilitate subsequent testing. Of course, our knowledge that some things are both black and white is the product of previous experience; in formulating our theses accordingly, we merely short cut settled issues.

[5]             The disjunction 'T or nonT' may be viewed as a special case of this. But also, 'T1 or T2 or T3 or...' may always be recast as 'T1 or nonT1', where nonT1 is equivalent to 'T2 or T3 or...'.

[6]             Such bare events impinge on our mind all the time. A skillful knower is one who has trained himself or herself to distinguish primary phenomena from later constructs involving them. Sometimes such distinction is only possible ex post facto, after discovery of erroneous consequences of past failures in this art.

[7]             A prediction is only significant, useful to deciding between theories, if it is, as well as consistent, testable empirically; otherwise, it is just hot air, mere assertion, a cover or embellishment for speculations. The process of testing cannot rest content at some convenient stage, but must perpetually put ideas in question, to ensure ever greater credibility.

[8]             Note that correct prediction by a theory does not imply proof of the theory (since 'T predicts P' does not imply 'nonT predicts nonP'), nor even exclude correct prediction by the contradictory theory (since 'nonT predicts P' is compatible). It 'confirms' the theory only if the contradictory theory may be 'undermined' (i.e. if 'nonT is neutral to P'), otherwise both the theory and its contradictory are unaffected.

[9]             The domain of probability rating may be further complicated by reference to different degrees of implication, instead of just to strict implication. T may 'probably imply' P, for instance, and this formal possibility gives rise to further nuances in the computation of probabilities of theories.

[10]           Note that if both T and nonT predict P, then P is bound to occur; i.e. if the implications are logically incontrovertible, then P is necessary. If we nonetheless find nonP to occur and thus our predictions false, we are faced with a paradox. To resolve it, we must verify our observation of nonP and our implications of P by both T and nonT. Inevitably, either the observation or one or both implications (or the assumptions that led us to them) will be found erroneous, by the law of non-contradiction.

[11]           At least temporarily; we may later find reason to eliminate T1, which would mean that our list of theories was not complete and a further alternative Tn must be formulated.

[12]           Thus, correct prediction, though not identical with confirmation, is 'potential' confirmation, etc.

[13]           In a way, Aristotle brought this criticism upon himself, since he first apparently suggested that universal propositions are based on complete enumeration. But of course, in practice we almost never (except in very artificial situations where we ourselves conventionally define a group as complete) encounter completely enumerable groups. Our concepts are normally open-ended, with a potentially “infinite” population that we can never even in theory hope to come across (since some of it may be in the past or future, or in some other solar system or galaxy)!

[14]           See also my Future Logic, chapter 4.4, and other comments on this topic scattered in my works. The present comments were written in 2002, so as to clarify the next section, about empty classes. The ultimate null class is, of course, ‘non-existence’!
