### Introduction and Overview

While writing mathematical equalities, we assume that if A = B, then B = A. But this principle doesn’t hold in logic when we employ two concepts, one of which is more general than the other. For example, “a cat is a mammal” doesn’t imply that “a mammal is a cat”, because dogs are also mammals. Logic tries to solve this problem using set theory, in which there is a set called “mammal” and “cat” is a member of this set. This, however, constitutes knowledge *representation* and not knowledge *acquisition*. For instance, why is “bird” excluded from the set “mammal”? Obviously, to exclude it, we must see a “mammal” *in* the cat and dog, and *not* in the bird. If we are able to see the mammal in some objects but not in others, it follows that “mammal” must exist in the dog and cat and not in the bird. Otherwise, our claim that a cat is a mammal would be false, and we could not create the set “mammal” with selective membership.

Then again, to say that we can see “mammal” in the cat and dog, we must already possess the *idea* “mammal” before we use that idea to determine if a cat and dog are mammalian, and before we create a set called “mammal”. Now, “mammal” means three things–(a) the *a priori* idea in the mind, (b) the thing embedded in a cat and dog, and (c) the set of cats and dogs. This article explores the implications of this fact for understanding mind and matter.

### The Problem of Logical Inversions

Mathematical inversions are commonly employed in equations. For instance, if X = Y + Z, then Y + Z = X. We need only two numbers to find the third, provided their relationship given by the above equation is true. The essential premise underlying this use is that X, Y, and Z are all entities of the same *type*. For instance, if X is $10 and Y is $3, then, by the above definition, Z must be $7. Let’s call this the **quantitative** use of addition because all terms in this equation are of the same *type*.

Sometimes, however, addition involves different types of entities. For instance, if Y is 5 *oranges*, and Z is 8 *apples*, then X must be 13 *fruits*. We cannot simply add apples and oranges unless we recognize that they are *instances* of a type called “fruit”. Additions are permitted only on the same *type* of entities, not on different types.

Therefore, if we are adding different types of entities, we must first find a *more abstract entity* of which the entities being added are instances or sub-types. For example, to add apples and oranges, we must view them as sub-classes of fruits. Even to perform ordinary mathematical operations, therefore, we need to keep a conceptual hierarchy in mind; without that hierarchy, we could just add the two quantities and produce a new number, although that number would not represent a new *type* of object. For instance, we cannot add planets and galaxies, or tables and shirts, unless we identify them as instances of a general class called “things” or “objects”.
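The apples-and-oranges point can be sketched in code. This is a minimal illustration, not a formal claim; the class names (`Fruit`, `Apple`, `Orange`) are hypothetical, chosen only to show that the sum “13” is really a count of instances of the more abstract type:

```python
# A small sketch (hypothetical classes) of why addition needs a common type:
# apples and oranges can only be summed once both count as the type "fruit".
class Fruit:
    pass

class Apple(Fruit):
    pass

class Orange(Fruit):
    pass

# 8 apples and 5 oranges, as in the example above:
basket = [Apple() for _ in range(8)] + [Orange() for _ in range(5)]

# Adding them is really counting instances of the abstract type Fruit:
total_fruits = sum(1 for item in basket if isinstance(item, Fruit))
```

The `isinstance` check is doing the conceptual work: without the common superclass, there is no type under which the two counts can be combined.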

And when we have constructed this conceptual hierarchy, the logical inversions cannot be applied. For instance, if “mammal” is a more general class than “cat”, then we can say that “cat is a mammal”, but we cannot say that “mammal is a cat”.

The logical system of reasoning that employs different kinds of entities can be called **qualitative**, as opposed to *quantitative*. Inversions work for quantities but not for qualities.

### Quantitative Logic in Machines

Quantitative thinking is used in all computers. This type of thinking can perform knowledge *representation* but cannot perform knowledge *acquisition*. For example, we can denote a set by an array, and a computer can reason with the array and the elements in that array, provided the programmer puts them there. To do that, we have to define a criterion for deciding membership, which can be called a “property”.

Let’s try to perform knowledge acquisition with a computer, where we have to define properties, and see where it takes us. Let’s suppose the array is “mammal”, and its members are cat and dog. To add a new member, we must have a *definition* of mammal. Since we are talking about knowledge representation, we can assume that we can break “mammal” into some properties, which collectively constitute another array. Now, we have two arrays–(a) the set of cat and dog, and (b) the set of attributes. But what is an attribute? For example, if the mammal attributes are (a) there is a female, (b) the female has breasts, (c) the female has children, (d) there is something called “feeding”, and (e) the children of the female feed on the female’s breasts, then we can say that the set of attributes has 5 members.
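As a rough sketch (all names hypothetical), the two arrays might look like this in code. Note that membership is simply whatever the programmer already stored, which is exactly the representation-versus-acquisition gap described above:

```python
# A minimal sketch (hypothetical names): "mammal" as two arrays --
# a set of members, and a set of defining attributes.
mammal_members = {"cat", "dog"}

mammal_attributes = {
    "there is a female",
    "the female has breasts",
    "the female has children",
    "there is something called feeding",
    "the children feed on the female's breasts",
}

def is_member(thing):
    # The machine can only look up what the programmer already put here:
    # this is knowledge representation, not knowledge acquisition.
    return thing in mammal_members
```

The attribute set has 5 members, matching the definition above, but nothing in this structure tells the machine how to decide whether a *new* thing satisfies those attributes.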

Now, we have to define what we mean by ‘female’, ‘breast’, ‘children’, ‘feeding’, etc. Obviously, these would also be arrays. So, the attribute set that defines ‘mammal’ turns out to be an array of attributes, each of which must be an array of attributes, each of which must be an array of attributes. This means that to perform knowledge *acquisition*, we must first perform knowledge *representation*, which requires constructing this infinite hierarchy of arrays. Since all these arrays have to be defined *before* the machine can reason, we can call this *a priori* knowledge that the programmer must provide. Let’s suppose that we can construct such a hierarchy.

And now we give a computer this hierarchy and ask: Is this new thing a mammal? To produce any answer, the computer must run through this infinite hierarchy, which requires infinite time and space. If a cat is a mammal, then all the attributes in the prior definition must match, which means that the cat must also be an infinite hierarchical array. If the hierarchy is infinite, then each cat is infinite, and the computer cannot do anything unless we feed in these infinite hierarchical arrays. In short, *after* we provide infinite *a priori* knowledge, the computer still takes infinite time and space to make a simple decision. Programming a computer makes sense if the computer can answer questions after I give it my knowledge. Programming makes no sense if I spend infinite time programming the computer and the computer takes infinite time to produce an answer. Why would I not just decide myself, instead of asking the computer?
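The regress can be illustrated with a toy checker (all definitions and names here are hypothetical). The recursion halts only because some terms are left undefined and supplied directly as brute facts, or because an arbitrary depth cutoff stands in for the infinite time a complete hierarchy would demand:

```python
# A toy illustration (hypothetical definitions) of the regress:
# every attribute is itself defined by further attributes, so a naive
# definitional check bottoms out only where we stop defining terms.
definitions = {
    "mammal": ["female", "breast", "feeding"],
    "female": ["organism", "sex"],
    "breast": ["organ", "milk"],
    # In principle, every term used above needs its own entry, without end.
}

def check(term, facts, depth=0, max_depth=50):
    """Verify `term` by expanding its definition. The cutoff stands in
    for the infinite time a complete hierarchy would demand."""
    if depth >= max_depth:
        return None  # undecided: the budget ran out before reaching ground
    parts = definitions.get(term)
    if parts is None:
        # An undefined term: fall back to a brute fact supplied a priori.
        return term in facts
    results = [check(p, facts, depth + 1, max_depth) for p in parts]
    if any(r is False for r in results):
        return False
    if all(r is True for r in results):
        return True
    return None

# The checker decides only because "organism", "sex", etc. were left
# undefined and supplied directly as facts:
result = check("mammal", facts={"organism", "sex", "organ", "milk", "feeding"})
```

If every term were given a definition, as a complete *a priori* hierarchy requires, the expansion would never reach a brute fact, and the checker would never return an answer.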

This is where we can see that the human mind is clearly better than the computer, *even in terms of speed*. This superiority is based on the fact that we do not just acquire new knowledge but also reason quickly with it, even though the conceptual hierarchy is infinite. How do we do that? The answer is that we avoid traversing entire branches of the hierarchy by examining just the topmost (most abstract) element of the hierarchy. We don’t need to define a mammal in terms of its attributes; we just know what it is as a concept. And we can check immediately if that concept is found in the cat, because there is a sense in which that attribute is present as a single attribute, rather than as an infinite cascading hierarchy of attributes. This in turn requires the two additional meanings of “mammal”—something *a priori* in our minds, and something preexisting in the cat. Now a “mammal” is not just a set of cats and dogs; it is also an idea in our mind, and a property in the cat, and this property is a singular thing, not an infinite cascading hierarchy of concepts that needs to be traversed.

A logician may insist that their system of reasoning is equivalent to mental reasoning, but that is not true. It just looks equivalent on paper because we designate a property called “mammal” on paper, and we understand what it means, but that is because we are already using our minds with the two other definitions of “mammal”—an *a priori* concept in our minds and something immanent in the cat.

This problem becomes evident if we try to automate this system of reasoning by building a machine. Then we realize that we need infinite space to store the knowledge, infinite effort to program the machine, and infinite time to get an answer. In simple terms, the illusion of equivalence between logic and conceptual reasoning is created because symbols are used to denote conceptual properties, but the meaning of the symbol is in our heads alone. If we try to put that meaning on paper, it needs infinite space, time, and effort.

### The Genesis of Mathematical Paradoxes

The conclusion is that every word has three kinds of meanings. There is a “transcendent” meaning in our minds, there is an “immanent” property in the object, and there is a collection of objects. Without the transcendent meaning, we cannot identify whether something has the immanent property, and without identifying that immanent property, we cannot construct a set. Hence, without the transcendent and immanent meanings of a word, we can never produce a set, and logic would be worthless.

The next problem is that when a word has three different meanings, which of the meanings actually applies in which case? For instance, when we are speaking about a “mammal”, are we referring to the transcendent idea in our mind, the immanent property in the object, or the collection of objects? To answer this question, we need to introduce context in logic, because the same word has three different meanings.

This is even more problematic because logical truth is supposed to be *universal truth* rather than *contextual truth*. If we cannot introduce contextuality into logic, then when we refer to one meaning of “mammal”, the reasoning system can infer another meaning, and the result would be a paradox. Such paradoxes exist in number theory and set theory, and hence in all of science. At the heart of each paradox lies the problem of contextuality being converted into universality, given that there are three kinds of meanings. *Gödel’s Mistake* discusses these paradoxes, how they emerge, and how they entail a role for meaning in mathematics.

Here I will illustrate this problem through the easily accessible *Barber’s Paradox* described by Bertrand Russell. The paradox is the following innocuous statement: “A barber shaves all those who don’t shave themselves.” To create a paradox, we must use the term “barber” in two different ways—(1) to denote a class of people who shave others, and (2) as an individual who belongs to this class. The class is obviously more general than the individual who belongs to it, quite like “mammal” is more general than “cat”, which belongs to the class. In the case of “cat is a mammal”, we have two distinct words to denote two types of entities—one more general than the other—and therefore the reason for the paradox becomes evident upon inversion. However, when the *same word* alternately denotes a class and an individual, the paradox is harder to decode. One such word is “number”. Sometimes, “number” means an idea; sometimes it means a property immanent in a thing; and sometimes it means a set of things. This leads to the incompleteness of any theory of numbers. But the problem of the Barber’s Paradox and the incompleteness of number theory are not fundamentally different.

The Barber’s Paradox relies on meaning confusion. It asks: Does the barber shave himself? Supposing that the barber does not shave himself, then by the definition that a barber must shave all those who don’t shave themselves, he must shave himself. If, however, he shaves himself, then he should not have shaved himself because he is a barber. Clearly, the problem here is that Mr. Barber is not always a barber. There are two distinct entities—a class and an individual—the former is more general and the latter is particular. The logic that says Mr. Barber is a barber cannot be inverted to claim that a barber is Mr. Barber. In other words, A is B, but B is not A. When this paradox is presented, a logician might say: But we can solve this by creating sets; a set is not equal to its member, but a member is a part of the set. And because people made such arguments, Russell constructed a set of all sets that are not members of themselves to show how we can recreate the paradox. Although we can claim to solve the problem with two concepts, the problem exists even with a single concept.
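The contradiction can be made mechanical with a tiny check (an illustration, not Russell’s own formulation). Treating “the barber shaves himself” as a Boolean, the rule is satisfiable by neither truth value:

```python
# An illustrative consistency check of the barber rule:
# the barber shaves x exactly when x does not shave himself.
def rule_holds(barber_shaves_himself):
    # Applied to the barber himself, the rule demands:
    # shaves(barber, barber) == not shaves(barber, barber)
    return barber_shaves_himself == (not barber_shaves_himself)

# Neither truth value satisfies the rule, so the definition is contradictory:
consistent = [v for v in (True, False) if rule_holds(v)]
```

The empty result mirrors the prose argument: whichever answer we assume, the rule forces the opposite, because the same word is doing double duty as a class and as an individual.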

Thus, there is an illusion of consistency in predicate logic, because there is a separation of an object and its predicate. By this separation, we say that “mammal” is a predicate of “cat”, using “mammal” in the immanent sense, not in the set-theoretic sense where “mammal” is also a set that contains all cats. We also symbolize “mammal” by the letter M rather than trying to define what it means. Again, out of the three meanings—transcendent, immanent, and set-theoretic—we use just one (the immanent) and thereby escape the paradoxes.

If, however, we use all three meanings, which we must if we want to understand conceptual reasoning, then there is no escape from paradoxes. Such paradoxes become evident in set theory because, in forming a set of mammals, there is an immanent property in each thing that makes it a mammal, and there is a set, and these two can’t be reconciled in a single system. Similarly, the paradox appears in number theory because a number can be a transcendent idea and an immanent property, and the two can’t be reconciled. Therefore, people say that set theory and number theory are paradoxical but predicate logic is free of paradoxes. That is true, and yet it is because we don’t see that when “mammal” is a predicate of “cat”, then “cat” is also a predicate of “mammal”. Such a thing becomes evident in a sufficiently strong system.

### Thinking Machines Need a New Science

To make a machine that thinks, the machine must carry the conceptual hierarchy. It must be able to detect which entity is more abstract, and which entity is an instance of that abstract idea and carries that idea within itself (i.e., that idea is immanent in that thing). If this machine is given two symbols—one of which represents an idea more abstract than the other—it must detect which one is more abstract. In other words, the machine must be able to detect the *relationship* between symbols, simply by measuring the positions of the symbols. This is impossible in current science because a machine cannot detect the meaning of a symbol simply from its physical address in a computer, and therefore current science can never create a thinking machine. A symbol’s address in current computers doesn’t indicate a meaning, and without that, the symbol is just a label for a value (a quantity) that cannot be tied to a meaning (a quality).

By implication, we also cannot understand human thinking, which avoids logical paradoxes because it contextually understands which of the three meanings is employed in each case. Many scientists, and most non-scientists, don’t understand this problem, which arises from the nature of the meanings of concepts. They don’t realize that thinking is not a question of finding a material configuration but involves a problem in logic, which can be avoided only when material objects have meanings, and these meanings can be derived from the measurement of their positions. We cannot, therefore, keep meanings in a separate “world” while this world is material. Rather, we must collapse the difference between matter and meaning, and all objects should be seen as symbols of meaning. The position state of the symbol must itself denote meaning. In other words, we would no longer be concerned with the position state, but only with meanings.

### What is Semantic Science?

A semantic science requires a fundamental revision to the notion of space and time. The revision is that all locations in space and all events in time are not the same, because the objects at these locations are themselves meanings—which can be more or less abstract. Space and time in this view become an inverted *tree structure* rather than a *box structure*. In this tree structure, there is a hierarchy of concepts.

Reasoning in this space is walking up and down the inverted tree (i.e., from root to leaf and leaf to root), whereby we establish the relationship between the more and the less abstract concepts, although the more abstract concept precedes the less abstract concept. Perception involves walking up the inverted tree, where we try to infer the more abstract concepts from the less abstract concepts. The problem in modern science is that it begins from a fundamental presupposition, namely that the world is *uniform* and all objects are of the same type. That supposition results in the use of *quantitative* reasoning, in which conceptual variety is forbidden by definition.
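The tree-walk described here can be sketched as follows; the `parent` table is a hypothetical fragment, not a proposed ontology. Note that `is_a` holds in one direction only, so the inversion problem discussed at the outset disappears by construction:

```python
# A minimal sketch of the inverted concept tree (hypothetical fragment):
# each node's parent is the more abstract concept it instantiates.
parent = {
    "cat": "mammal",
    "dog": "mammal",
    "mammal": "animal",
    "animal": "thing",
}

def ancestors(concept):
    """Walk up the inverted tree, from less abstract to more abstract."""
    chain = []
    while concept in parent:
        concept = parent[concept]
        chain.append(concept)
    return chain

def is_a(concept, abstract):
    # "cat is a mammal" holds, but the inversion "mammal is a cat" does not:
    # the relation follows the tree's direction and cannot be reversed.
    return abstract in ancestors(concept)
```

Walking up the tree models perception (inferring the abstract from the concrete); walking down models instantiation. The relation is asymmetric precisely because the tree is directed.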

The implications of semantics for mathematics, physics, logic, computing theory, and the nature of mind are many, but they all stem from a simple problem of meaning, which needs neither sophisticated theories nor complicated philosophy. The problem is logic itself when this logic is used in conjunction with concepts. And even the problem of logic is nothing other than three kinds of meanings.

Ashish Dalela, "How Meanings Change the Use of Logic," in *Shabda Journal*, May 1, 2016, https://journal.shabda.co/2016/05/01/how-meanings-change-the-use-of-logic/.