
>Searle calls this derived intentionality which is different from intrinsic intentionality

But what makes derived intentionality not abstract? What definition of abstract are you using that excludes derived intentionality while including intrinsic intentionality?

But let's look more closely at the differences between derived and intrinsic intentionality. Derived intentionality is some relation that picks out a target in a specified context. E.g. a binary bit in my phone picks out heads/tails or day/night depending on the context set up by the programmer. Essentially, the laws of physics are exploited to create a system where some symbol, in the right context, stands in a certain relation with the intended entities. We can boil this process down to a ball rolling down a hill: which of two tracks it follows picks out one of the two objects at the bottom.
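
To make that concrete, here is a minimal sketch (a toy of my own, not anything from Searle): the very same stored bit refers to heads/tails in one program and day/night in another, purely because of the interpretive context the programmer wires around it.

    # A single stored bit; its "reference" is fixed entirely by the
    # surrounding context the programmer chose, not by the bit itself.
    bit = 1

    def read_as_coin(b):
        # Context 1: the programmer decided 1 means "heads".
        return "heads" if b else "tails"

    def read_as_daylight(b):
        # Context 2: the same value is taken to mean "day".
        return "day" if b else "night"

    print(read_as_coin(bit))      # heads
    print(read_as_daylight(bit))  # day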

How does intrinsic intentionality fare? Presumably the idea is that such a system picks out the intended object without any external context needed to establish the reference. But is such a system categorically different from the derived sort? It doesn't seem so. The brain relies on the laws of physics to establish the context that allows signals to propagate along specific circuits. The brain also stands in specific relations to external objects such that the necessary causal chains can be established for concepts to be extracted from experience. Without this experience there would be no reference and no intentionality. So intrinsic intentionality of this sort has an essential dependence on an externally specified context.

But what about sensory concepts and internal states? Surely my experience of pain intrinsically references damaging bodily states, as seen by my unlearned but competent behavior in the presence of pain, e.g. avoidance behaviors. But this reference didn't form in a vacuum. We embody a billion years of computation, in the form of evolution, which crafted specific organizing principles in our bodies and brains that entail competent behavior in response to sensory stimuli. If there is a distinction between intrinsic and derived intentionality, it is not categorical. It is simply due to the right computational processes having created the right organizing principles to allow for it.




An essential feature of abstract things is that they do not exist independently and in their own right. For example, this chair or that man (whose name is John) are concrete objects. However, the concepts "chair" and "man" are abstract. They do not exist in themselves as such. The same can be said for something like "brown", an attribute that, let's say, is instantiated by both the chair and by John in some way, but which cannot exist by itself as such. So we can say that "chair", "man" and "brown" all exist "in" these concrete things (or more precisely, determine these things to be those things or to be those ways). Apart from the things that instantiate them, these forms also exist somewhere else, namely in the intellect. But they exist in our intellects without being instantiated there; otherwise, we would literally have to have a chair or a man or something brown in our intellects the moment we thought of these things. So you have a problem: you have a kind of substratum in which these forms can exist without being those things. That does not sound like matter, because when those forms exist in matter, they always exist as concrete instantiations of those things.

W.r.t. derived intentionality, the relation that obtains here between a signifier and the signified is in the mind of the observer. When you read "banana", you know what I mean because the concept, in all its intrinsic intentionality and semantic content, already exists in your intellect and you have learned that that string of symbols is meant to refer to that concept. I could, however, take a non-English speaker and mischievously teach them that "banana" refers to what you and I would use the term "apple" to mean. No intrinsic relation exists between the signifier and the concept. However, there is an intrinsic relation that obtains between concepts and their instantiations. The concept "banana" is what it means to be a banana. So the derived intention involves two relations: one between the signifier and the concept (which is a matter of arbitrary convention) and another between the concept and the signified, which necessarily obtains between the two. Derived intentionality is parasitic on intrinsic intentionality. The former requires the latter.

So when we say that computers do not possess concepts (i.e., abstract things), only derived intentionality, we mean that computers are, for all intents and purposes, syntactic machines composed of symbols and symbol-manipulation rules (I would go further and say that what that describes are really abstract computing models like Turing machines, whereas physical computers are merely used to simulate these abstract machines).

Now, my whole point earlier was that if we presuppose a materialist metaphysical account of matter, we will be unable to account for intrinsic intentionality. This is a well-known problem. And if we cannot account for intrinsic intentionality, then we certainly cannot make sense of derived intentionality.


Your description of abstract things sounds like a dressed-up version of something fairly mundane. (This isn't to say that your description is deficient, but rather that the concept is ultimately fairly mundane.) So I gather three essential features of abstracta: (1) they do not exist independently, (2) they exist in the intellect, and (3) they exist in the things that instantiate them.

Given this definition, there is a universe of potential abstracta, owing to the many possible ways to categorize objects and their dynamics. Abstracta are essentially "objects of categorization" that relate different objects by their similarity along a particular set of dimensions. Chairs belong to the category "chair" due to sharing some particular set of features, for example. The abstract object (concept) here is chair, which is instantiated by every instance of chair; the relation between abstract and particular is two-way. Minds are relevant because they are the kinds of things that identify such categorizations of objects along a set of criteria; thus abstracta "exist in the intellect".

You know where else these abstracta exist? In unsupervised machine learning algorithms. An algorithm that automatically categorizes images based on whatever relevant features it discovers has the power of categorization, which presumably is the characteristic property of abstracta. Thus abstracta also exist within the computer system running the ML algorithm. But these abstracta seem to satisfy your criteria for intrinsic intentionality (if we don't beg the question against computer systems). The relation between the ML system and its abstracta is independent of any human fixing the reference. Yes, the algorithm was created by a person, but that person did not specify which relations are formed and does not fix the reference between the concepts discovered by the algorithm and the things in the world. This is analogous to evolution creating within us the capacity to independently discover abstract concepts.
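
As a concrete (and deliberately simple) illustration of a program discovering categories no one fixed in advance, here is a sketch using k-means clustering; the toy data and the library choice are mine and incidental to the argument.

    import numpy as np
    from sklearn.cluster import KMeans

    # Toy "images": unlabeled feature vectors for two kinds of objects.
    # No human tells the algorithm what the categories are or what they pick out.
    rng = np.random.default_rng(0)
    kind_a = rng.normal(loc=0.0, scale=0.5, size=(50, 4))
    kind_b = rng.normal(loc=3.0, scale=0.5, size=(50, 4))
    data = np.vstack([kind_a, kind_b])

    # The algorithm partitions the data by whatever similarity structure it finds.
    model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)

    # A new, unseen item gets filed under one of the discovered categories.
    new_item = rng.normal(loc=3.0, scale=0.5, size=(1, 4))
    print(model.predict(new_item))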

(Just to preempt a reference to Searle's Chinese room argument, I believe his argument is fatally flawed: https://news.ycombinator.com/item?id=23182928)


You're trying to reduce abstraction to statistical pattern classification, but that doesn't work because statistical measures are inherently bounded in generality, indeterminate and ambiguous, while my concept of, say, triangularity is universal, determinate and exact.

Say I give you an image of a black isosceles triangle. Nothing in that image will tell you how to group its features. There is no single interpretation, no single way to classify the image. You might design your algorithm to prefer certain ways of grouping them, but that follows from the designer's prior understanding of what he's looking at and how he wants his algorithm to classify things. If your model has been trained using only black isosceles triangles and red rhombuses, it is possible that it would classify a red right triangle as a rhombus or as an entirely different thing, and there would be no reason in principle to say that the classification was objectively wrong apart from the objective measure of triangularity itself. But that's precisely what the algorithm/model lacks in the first place and cannot attain in the second.
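
To spell out the kind of failure I have in mind, here is a toy sketch (features and numbers invented purely for illustration): a nearest-centroid classifier trained only on black isosceles triangles and red rhombuses files a red right triangle under "rhombus", because nothing in the statistics privileges shape over color.

    import numpy as np

    # Invented feature encoding: [redness, number_of_sides, angle_irregularity]
    black_isosceles = np.array([[0.0, 3, 0.10], [0.1, 3, 0.20], [0.0, 3, 0.15]])
    red_rhombuses   = np.array([[1.0, 4, 0.30], [0.9, 4, 0.35], [1.0, 4, 0.25]])

    centroid_triangle = black_isosceles.mean(axis=0)
    centroid_rhombus  = red_rhombuses.mean(axis=0)

    # A red right triangle: a triangle by shape, but colored like the rhombuses.
    red_right_triangle = np.array([1.0, 3, 0.9])

    d_tri = np.linalg.norm(red_right_triangle - centroid_triangle)
    d_rho = np.linalg.norm(red_right_triangle - centroid_rhombus)
    print("classified as:", "triangle" if d_tri < d_rho else "rhombus")  # rhombus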

Furthermore, just because your ML algorithm has grouped something successfully by your measure of correctness doesn't mean it's grasped essentially what it means to be a member of that class. The grouping is always incidental no matter how much refinement goes into it.

Now, you might be tempted to say that human brains and minds are no different because evolution has done to human brains what human brains do to computer algorithms and models. But that is tantamount not only to denying the existence of abstract concepts in computers, but also their existence in human minds. You've effectively banished abstracta from existence which is exactly what materialism is forced to do.

(With physical computers, things actually get worse because computers aren't objectively computing anything. There is no fact of the matter beyond the physical processes that go on in a particular computer. Computation in physical artifacts is observer relative. I can choose to interpret what a physical computer does through the lens of computation, but there is nothing in the computer itself that is objectively computation. Kripke's plus/quus paradox demonstrates this nicely.)
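
For readers unfamiliar with Kripke's example, the point can be put in code (my own toy rendering): on every input actually checked, "plus" and "quus" behave identically, so the system's physical history alone cannot settle which function it is computing.

    def plus(x, y):
        return x + y

    def quus(x, y):
        # Kripke's deviant function: agrees with addition on small inputs,
        # but returns 5 once either argument reaches 57.
        return x + y if x < 57 and y < 57 else 5

    # Every case tested so far agrees, so behavior alone leaves it
    # indeterminate which rule is being followed.
    assert all(plus(x, y) == quus(x, y)
               for x in range(57) for y in range(57))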

P.S. An article you might find interesting in this vein, also from Feser: https://drive.google.com/file/d/0B4SjM0oabZazckZnWlE1Q3FtdGs...


>You're trying to reduce abstraction to statistical pattern classification, but that doesn't work because statistical measures are inherently bounded in generality, indeterminate and ambiguous

A substrate with a statistical description can still have determinate behavior. The brain, for example, is made up of neurons that have a statistical description, but it makes determinate decisions and, presumably, can grasp concepts exactly. Thresholding functions, for example, are one mechanism that can transform a statistical process into a determinate outcome.
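
A minimal sketch of that last point (a toy model of my own, not a claim about real neurons): each input is noisy, but a fixed threshold over the sum yields an all-or-nothing, determinate output.

    import random

    def noisy_unit(p=0.7):
        # Each "neuron" fires stochastically: a statistical description.
        return 1 if random.random() < p else 0

    def thresholded_decision(n_inputs=1000, threshold=500):
        # Summing many noisy inputs and applying a threshold yields a
        # determinate, reliably reproducible binary outcome.
        total = sum(noisy_unit() for _ in range(n_inputs))
        return 1 if total >= threshold else 0

    print(thresholded_decision())  # 1, with overwhelming probability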

>doesn't mean it's grasped essentially what it means to be a member of that class.

I don't know what this means aside from the ability to correctly identify members of that class. But there's no reason to think an ML algorithm cannot do this.

Regarding Feser and Searle, there is a lot to say. I think they are demonstrably wrong about whether computation is observer relative and whether computation is indeterminate[1]. Regarding computation being observer relative, it's helpful to get clear on what computation is. Then it easily follows that a computation is an objective fact about a process.

A computer is, at its most fundamental, an information-processing device. This means that the input state has mutual information with something in the world, the computer undergoes some physical process that transforms the input to some output, and this output has further mutual information with something in the world. The input information is transformed by the computer into different information; thus a computation is revelatory: it has the power to tell you something you didn't know previously. This is why a computer can tell me the inverse of a matrix, while my wall cannot, for example. My wall is inherently non-revelatory no matter how I look at it. This definition is at odds with Searle's definition of a computer as a symbol-processing device, but mine more accurately captures what people mean when they use the terms "computer" and "compute".

This understanding of a computer is important because the concept of mutual information is mind-independent. There is a fact of the matter as to whether one system has mutual information with another system. Thus, a computer, which is fundamentally a device for meaningfully transforming mutual information, is mind-independent.
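
To underline that mutual information is an observer-independent quantity, here is a small sketch (the joint distribution is invented for illustration) that computes I(X;Y) directly from the definition; anyone who runs it gets the same number.

    import numpy as np

    # Toy joint distribution p(x, y) over two binary variables,
    # e.g. a sensor state X and the worldly state Y it tracks.
    p_xy = np.array([[0.4, 0.1],
                     [0.1, 0.4]])

    p_x = p_xy.sum(axis=1)
    p_y = p_xy.sum(axis=0)

    # I(X;Y) = sum over x, y of p(x,y) * log2(p(x,y) / (p(x) * p(y)))
    mi = sum(p_xy[i, j] * np.log2(p_xy[i, j] / (p_x[i] * p_y[j]))
             for i in range(2) for j in range(2) if p_xy[i, j] > 0)
    print(mi)  # about 0.28 bits, a fact about the two systems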

[1]https://www.reddit.com/r/askphilosophy/comments/bviafb/what_...



