I approached people in the Phil Dept where I work, and they expressed only polite interest (i.e., disinterest). So from my (very limited) survey, there may be folks willing to write entries, but, as far as I can tell, there is not broad interest.
The way I understand philosophy is that it is a very wide subject with lots of subtopics. It wouldn’t surprise me if polite disinterest were the most likely outcome of pairing any random philosopher with any random topic.
Philosophy is very broad, so philosophers specialize, especially analytic philosophers, who have carved the subject up into hundreds of subfields. Most aren't going to be interested in whatever random topic you bring up.
Highly recommend trying to grasp the argument, because in a sense it connects complexity to ‘time to compute’. If it takes a room of robots following simple instructions a billion years to act as an AI that reads Chinese, are those robots a collective intelligence? Can it be scaled up to consciousness?
> because in a sense it connects complexity to ‘time to compute’.
No, it doesn't. It basically just rejects by definition the claim that a programmed response system which exhibits the behavior of an actor that understands material actually understands that material (the illustration is useful support if you accept its definitions, or an illustration that its definitions are wrong if you don't), and from that rejects the idea that brains (which produce minds, which have actual understanding) are equivalent to programmed response systems.
> No, it doesn’t. It basically just rejects by definition...
It's been a little while since I've read the paper, but I don't think it's right to say that it rejects this conclusion by definition. Instead, it presents a situation where the reader's intuitions are intended to rebel at the thought that the system in question understands the symbols that it is manipulating. But yes, otherwise I agree with you that the argument then goes that, if this case has caused you to be skeptical about whether a symbol manipulation system could be conscious, you should be skeptical about whether other ones would be as well.
There are, of course, some important limitations/objections. Two that come to mind are that 1) you might not think that the reader's intuitions about a situation have any metaphysical significance--they might just be wrong and 2) the Chinese Room example might just suggest that a particular kind of symbol manipulation system may not be conscious without supporting the much stronger claim that they can't be conscious.
The fact that one can raise objections, though, doesn't mean that it isn't an important contribution to the philosophy of mind. If nothing else, I think Searle was right to point out that the example raises interesting questions about the usefulness of the Turing Test since, by hypothesis, the Chinese Room could pass it.
> If nothing else, I think Searle was right to point out that the example raises interesting questions about the usefulness of the Turing Test since, by hypothesis, the Chinese Room could pass it.
The other part of the thought experiment, apart from imagining the room, is to imagine how complex the Turing Test might be.
When people imagine what a Turing Test conversation would look like, they frequently underestimate the conversation. I find Dennett's example of an imaginary Turing Test from Consciousness Explained to be a good counterexample:
Judge: Did you hear about the Irishman who found a magic lamp? When he rubbed it a genie appeared and granted him three wishes. “I’ll have a pint of Guinness!” the Irishman replied and immediately it appeared. The Irishman eagerly set to sipping and then gulping, but the level of Guinness in the glass was always magically restored. After a while the genie became impatient. “Well, what about your second wish?” he asked. Replied the Irishman between gulps, “Oh well, I guess I’ll have another one of these.”
CHINESE ROOM: Very funny. No, I hadn’t heard it– but you know I find ethnic jokes in bad taste. I laughed in spite of myself, but really, I think you should find other topics for us to discuss.
J: Fair enough but I told you the joke because I want you to explain it to me.
CR: Boring! You should never explain jokes.
J: Nevertheless, this is my test question. Can you explain to me how and why the joke “works”?
CR: If you insist. You see, it depends on the assumption that the magically refilling glass will go on refilling forever, so the Irishman has all the stout he can ever drink. So he hardly has a reason for wanting a duplicate but he is so stupid (that’s the part I object to) or so besotted by the alcohol that he doesn’t recognize this, and so, unthinkingly endorsing his delight with his first wish come true, he asks for seconds. These background assumptions aren’t true, of course, but just part of the ambient lore of joke-telling, in which we suspend our disbelief in magic and so forth. By the way we could imagine a somewhat labored continuation in which the Irishman turned out to be “right” in his second wish after all, perhaps he’s planning to throw a big party and one glass won’t refill fast enough to satisfy all his thirsty guests (and it’s no use saving it up in advance– we all know how stale stout loses its taste). We tend not to think of such complications which is part of the explanation of why jokes work. Is that enough?
Dennett: “The fact is that any program that could actually hold up its end in the conversation depicted would have to be an extraordinarily supple, sophisticated, and multilayered system, brimming with “world knowledge” and meta-knowledge and meta-meta-knowledge about its own responses, the likely responses of its interlocutor, and much, much more…. Maybe the billions of actions of all those highly structured parts produce genuine understanding in the system after all.”
> If it takes a room of robots following simple instructions a billion years to act as an AI that reads Chinese, are those robots a collective intelligence? Can it be scaled up to consciousness?
No, per the article you linked to. The Chinese room argues AGAINST the existence of consciousness/intelligence arising from following symbolic instructions.
Here is a quote I like by John Searle:
> Computational models of consciousness are not sufficient by themselves for consciousness. The computational model for consciousness stands to consciousness in the same way the computational model of anything stands to the domain being modelled. Nobody supposes that the computational model of rainstorms in London will leave us all wet. But they make the mistake of supposing that the computational model of consciousness is somehow conscious. It is the same mistake in both cases.
Searle's argument misses the mark. A simulated rainstorm won't leave us wet, but a simulation of a disordered state is (or contains) a disordered state. If consciousness is a relational property of a system, then a simulation of such a relational property will instantiate that relational property without qualification. The Chinese room argument can't address this conception of consciousness.
The argument is against this view of consciousness.
More intuitively: our words are things which do things in the world. When we acquire them we do so by being embedded in the world. When we use them we use them to change the world. (Largely).
If a child in a cupboard reading a textbook out-loud could fool a chemist that the child understood chemistry, it doesn't mean they do. In the end, you need to turn on that bunsen burner.
The Chinese room needs to be taken super-literally: does the person just doing a "hash-table lookup" on "the right answers to pre-posed questions" actually understand?
To me it's incredibly obvious that they do not. This does not establish that cognition, e.g., does not involve computational hashtable-like processes. I think it does establish that proponents of a naive computationalism have an incredibly big challenge.
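Spelled out, the super-literal reading is something like the following sketch (the strings and the "rule book" are made-up placeholders, not a real Chinese-speaking program):

```python
# A super-literal "Chinese room": the operator just looks up the reply.
# RULE_BOOK is a hypothetical stand-in for Searle's book of instructions.
RULE_BOOK = {
    "今天天气怎么样？": "今天是晴天。",   # "What's the weather today?" -> "It's sunny today."
    "你喜欢我穿的衣服吗？": "喜欢。",     # "Do you like what I'm wearing?" -> "Yes."
}

def operator(symbols: str) -> str:
    """The person in the room: match the incoming squiggles, copy out the listed reply."""
    return RULE_BOOK.get(symbols, "对不起，我不明白。")  # fallback: "Sorry, I don't understand."
```

Nothing in that procedure requires the operator to know what any of the strings mean, which is the intuition being leaned on.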
The challenge is to specify the environment computationally, since that is the only means by which the computer itself can be sufficiently embedded and actually understand anything.
The issue here is that if "computer" isn't just a trivial property, it has to include, e.g., measurability, determinism, etc. -- properties that reality lacks. Reality isn't a computer.
Starting with the Chinese room one /can/ supplement it sufficiently until we reach Searle's conclusion.
> does the person just doing a "hash-table lookup"
But this is Searle's sleight-of-hand. The question isn't whether the person in the Chinese room understands Chinese, but whether the system as a whole understands Chinese. The man is analogous to the CPU in a computer, but the CPU is embedded in a larger system and it is the entire system that performs operations of the computer. You can't focus on one component to the exclusion of the rest of the system and expect to derive valid conclusions about the system.
>The challenge is to specify the environment computationally
I agree, but this presents no in principle challenge to a computational system reaching such a detailed description of the environment.
>The issue here is that if "computer" isn't just a trivial property, it has to include, e.g., measurability, determinism, etc. -- properties that reality lacks. Reality isn't a computer.
I don't follow. The issues of measurability and determinism in QM (assuming that's what you are referring to) aren't necessarily problems at the scale of physical systems. Computational systems can certainly make distinctions to within some margin of error, which is sufficient as a substrate for gathering information about the external environment.
No, there's no sleight of hand. It seems you just agree that the man doesn't understand Chinese.
> at the scale of physical systems.
Alas, they are. Even classical mechanics isn't deterministic. Most systems are chaotic and require infinite precision in measurement to be predicted deterministically. Since QM precludes this, most chaotic systems literally can only be predicted over short horizons.
E.g., there is a moon in our solar system, I believe, with a chaotic orbit. At maximum possible measurement fidelity we can predict it about 20 years out.
Consider that all the particles of our body are just these little chaotic moons, and their aggregate chaos dramatically diminishes this 20-year horizon.
There is, quite literally, not enough information present in this moment to predict the next moment. Most systems are sufficiently chaotic in a sufficiently large number of parameters to preclude this. This reaches all the way up to classical scales.
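As a toy illustration of how a prediction horizon like that emerges, here is the standard chaotic logistic map with two initial conditions that differ by something like a measurement limit (the map, the 1e-15 perturbation, and the 0.1 threshold are all just illustrative choices):

```python
# Two trajectories of the chaotic logistic map, differing only by a tiny
# "measurement error" in the initial condition, diverge after a few dozen steps.
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

x, y = 0.4, 0.4 + 1e-15          # identical up to a measurement limit
for step in range(60):
    x, y = logistic(x), logistic(y)
    if abs(x - y) > 0.1:         # prediction is now useless
        print(f"trajectories disagree after {step + 1} steps")
        break
```

The error roughly doubles each step, so however small the initial uncertainty, it eventually swamps the prediction; better measurement only buys a linearly longer horizon.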
Everyone agrees that the man doesn't understand Chinese. The sleight of hand is that Searle compares a full computer to a component of the Chinese Room, when he should be comparing a computer to the room as a whole. The man in this thought experiment is analogous to an internal component of a computer, not the whole thing.
Put another way, Searle separates a "computer" from the instructions it is running, but there is no such separation. It's like separating a brain from the electrical brain activity. It's true that a brain without electrical activity cannot understand Chinese just like a computer without a program cannot, but that's not a very interesting observation.
A computer is just an implementation of a function (in the mathematical sense), i.e., a finite set of pairs (IN, OUT) where IN, OUT are in {0,1}^N. I.e., it's an implementation of a map {010101, ...} -> {010101010, 01010...}.
If we all agree that the function the man performs can be set to be whatever you wish (i.e., any {input sentences} -> {output sentences}) and we agree that the man doesn't understand Chinese... then the question arises: what is missing?
I don't think the "systems reply" actually deals with this point -- it rather just "assumes it does" by "assuming some system" to be specified.
If using a language isn't equivalent to a finite computable function -- what is it?
The systems reply needs to include that the reason a reply is given is that the system is caused to give the reply by the relevant causal history / environment / experiences. Ie., that the system says "it's sunny" because it has been acquainted with the sun, and it observes that the day is sunny.
This shows the "systems reply" prima facie fails: it is no reply at all to "presume that the system can be so-implemented so as to make the systems reply correct". You actually need to show it can be so-implemented. No one has done this.
There are lots of reasons to suppose it cannot be done, not least that most things aren't computable (i.e., via non-determinism, chaos, and the like). Given the environment is chaotic, it is a profoundly open question whether computers can be built to "respond to the right causes", and computational systems may be incapable of doing this.
If they cannot, then Searle is right. That man, and whatever he may be a part of, will never understand Chinese. It is insufficient to "look up the answers"; "proper causation" is required.
>I don't think the "systems reply" actually deals with this point
To be clear, the point of the systems reply isn't to demonstrate how a computational system can understand language, it is to point out a loophole such that computationalism avoids the main thrust of the Chinese room.
>If using a language isn't equivalent to a finite computable function -- what is it?
In the Chinese room, the man isn't an embodiment of the computable function (algorithm). The man is simply the computational engine of the algorithm. The embodied algorithm includes the state of the "tape", the unbounded stack of paper as "scratch space" used by the man in carrying out the instructions of the algorithm. So it remains an open question whether the finite computable function that implements the language function must "use language" in the relevant sense.
What reason is there to think that it does use language in the relevant sense? For one, a semantic view of the dynamics of the algorithm as it processes the symbols has predictive power for the output of the system. Such predictive power must be explained. If we assume that the room can in principle respond meaningfully and substantively to any meaningful Chinese input, we can go as far as to say the room is computing over the semantic space of the Chinese language embedded in some environment context encoded in its dictionary. This is because, in the limit of substantive Chinese output for all input, there is no way to duplicate the output without duplicating the structure, in this case semantic structure. The algorithm, in a very straightforward sense, is using language to determine appropriate responses.
What is missing is the man's ability to memorize the function and carry it out without the book. If he could, and the function truly produced a normal response to any Chinese phrase in existence, then he would speak Chinese.
Edit: He would speak Chinese, but he wouldn't understand it. What is missing in order to understand Chinese is an additional program that translates Chinese to English or pictures or some abstract structure independent of language. Humans have this, but the function in this thought experiment is a black box, so we don't know if it uses an intermediate representation. Thanks to colinmhayes for pointing this out.
Could he give a reasoned answer to "what is the weather"? That depends on whether we include external stimuli as part of the function input. If not, then neither a human nor a computer could give a sensible answer beyond "I don't know".
I see now that your issue is really with the function - could a function ever exist that gives responses based on history/environment/experience. My understanding is that such a function is the premise of the thought experiment; it's a hypothetical we accept in order to have this discussion. Searle claims that even if such a function exists, the computer still doesn't truly understand. But if that's what we're asking, my answer would be yes, as long as history/environment/experience are inputs as well. Of course a computer locked away in a room only receiving questions as input can never give a real answer to "what is the weather", just like a human in that situation couldn't. But if we expand our Chinese Room test to include that type of question and also allow the room or computer to take its environment as input, then it can give answers caused by its environment.
> You actually need to show it can be so-implemented. No one has done this.
I mean, fair enough. It's fine to say "I won't believe it until I see it", but that pretty much ends the discussion. If we want to talk about whether something not yet achieved is possible, then we need to be willing to imagine things that don't exist yet.
> What is missing is the man's ability to memorize the function and carry it out without the book. If he could, and the function truly produced a normal response to any Chinese phrase in existence, then he would speak Chinese.
No, the function would.
My CPU doesn't speak HTML. My web browser does. Does running my web browser mean the CPU speaks HTML, even if the whole browser is loaded into cache? I don't think it does; the CPU is still running machine code.
If I memorise a Turing machine, does that mean I understand the computation it's performing? No; I'd have to pick it apart and work out what each bit does, then put that meaning back together, in order to try to work out how it works.
Memorising the function would enable the man to teach himself Chinese (whichever language “Chinese” is), just like memorising a good mathematics textbook would allow most people to teach themselves the concepts therein. But memorisation isn't understanding.
> If he could, and the function truly produced a normal response to any Chinese phrase in existence, then he would speak Chinese
I don't necessarily disagree, but it's not so simple. Just because the man speaks Chinese doesn't mean he understands it. He could have figured out the proper output for every possible input without knowing what any of it means. If the next person asks in English "what did you talk about with the last person?" what would he answer, assuming he also speaks English? Really the question comes down to whether the computer is able to write the book by observing, I guess, but even then you could conceive of a different book with instructions on how to write the translation book.
One of you said a few comments up: "then the question arises: what is missing?"
And I just wanna pour us all a cup of tea and sit with that.
It reminds me of the song "anything you can do, I can do better, I can do anything better than you - No you can't! Yes I can!.."
The issue is: tell me what it can't do. Explain to me what the Chinese room is not understanding. Whatever model you use to translate the missing functions can then be "encoded".
I'd like to think my CPU has a brain and it goes off dreaming with the idle threads - but I'm also quite convinced that every single thread that runs machine code is a non-conscious direct representation of the mental model I encoded into its functions.
The point is that the human in the Chinese room is just the hardware, and no one really thinks that any computer hardware on its own understands Chinese. It would obviously be the entire computational system that understands Chinese. The only reason this can even appear to be confusing is that we often casually use "computer" to refer both to an entire computational system (your "implementation of a function") as well as a physical piece of hardware (like the box on the floor with a Dell sticker on it).
Worth noting that computers don't actually compute functions. They are used to simulate computation, but they're not computing per se. Any claim that they do is an anthropomorphic projection onto an object that lacks that capacity.
Kripke's quaddition vs. addition distinction is also a good point in this regard.
This is probably too late a response to ever be read, but with regard to the latter point, Kripke did not have to invent the 'quus' operation, as there are actual examples to be found, for example in modular versus conventional arithmetic, real-valued versus complex-valued functions, etc. These cases do not create any unavoidable problems; we simply disambiguate where necessary. It is a non-issue unless your philosophical intuition depends on the semantics of words being something more fundamental than a convention among the users of a language.
As for your first claim, it appears to be begging the question, predefining, without justification, 'real' computation as being something that only humans can do, while machines can only simulate so doing.
> whether the system as a whole understands Chinese.
"The system" isn't the kind of thing that understands. If I give n people a letter each and a number indicating the position the letter should occupy on some grid, it does not follow that the group knows the sentence because groups are not knowing subjects. If they arrange the letters on the grid, then each individual person in the group now knows the sentence when they read it.
Only individual knowing agents know and understand. Abusing the definition of "knowing" or "understanding" doesn't help clarify anything. All it does is lead to equivocation.
Indeed - there are, in fact, several things wrong with it.
Firstly, it is a circular argument in which an unargued-for intuition - '"The system" isn't the kind of thing that understands' - is used in trying to justify that intuition.
Secondly, no-one is suggesting, and Strong AI certainly does not imply, that the utterly simple 'system' here is capable of understanding anything. The fact that it does not proves nothing.
Thirdly, it is irrelevant (a variant of the homunculus fallacy) to see any significance in what the actors understand and when they do so, as they are being employed to act as unthinking automata. Replace each of them with a simple mechanism for placing a letter in a predetermined spot, and you have an equivalent system, also mindless, that is equally useless for demonstrating anything about the feasibility of Strong AI.
> does the person just doing a "hash-table lookup" on "the right answers to pre-posed questions" actually understand?
The person doesn't, but the (person + hash-table) system does. This is not weird - presumably whoever wrote the hash-table did understand Chinese. The whole point of using a room for this thought experiment is that we're no longer talking about a person, we're talking about a "machine" consisting of a person, a hash-table, and some kind of input/output system.
The hypothetical computer is the same. The CPU does not understand Chinese, but the whole computer as a system does, because part of that system is actual knowledge that came from someone that understands Chinese. When people ask if a computer is intelligent, they are not asking about one particular component, they're asking about the computer as a whole. Just like when you talk about a human's knowledge and abilities, you don't ask about particular sectors of their brain, you ask about the person as a whole.
GPT-3 isn't there yet, but yes, a robot with the book from the Chinese Room experiment would form a system that understands Chinese.
I agree that the person is frivolous, that's why it's strange that Searle asks whether the person in his thought experiment understands Chinese. That's irrelevant, what matters is whether the room as a whole does, and clearly it does.
I'm not sure. Say the person has translation tables for English and Chinese. If the second tester asked in English "what did you talk about with the last person?" would the man be able to answer? Clearly the man + hash table speaks Chinese, but I don't think that's the same as understanding it.
That depends on the premise of this new thought experiment. Are the program-books allowed to include side effects of writing down records of interactions, and take those records as input? If not, it's not a fair comparison with a computer. If so, then yes I think the room could be able to answer.
Why did I have to add something, if the room already understood Chinese before? Because we're essentially adding a second program now and trying to share understanding between them. The room did understand Chinese, but that understanding will not be accessible to a new component unless we design with that in mind. An analogy would be asking a person "which muscles did you use to digest that food?" Clearly some part of the human-system knows this, because it activated the right muscles and successfully digested the food, but the part of the system that hears and responds to English doesn't have access to that knowledge. We would need to redesign the system and programs in order to share understanding between these different parts of the system.
I realize we were using the term "hash-table" in this thread and what I'm talking about wouldn't be possible with just tables, but such a simple program is not a requirement for the thought experiment. The idea is we just have some black box function that takes inputs and produces outputs and we don't know how it works.
I agree that one should take Searle's argument literally, and I doubt there are many people on either side who suppose that the room's operator understands either the questions being posed to the room or the answers it delivers. The dispute is whether there is anything significant in this assumption - and, more specifically, whether Searle is right in supposing that if Hard AI is possible, then it would follow that the room's operator must understand the questions, their answers, and why they are appropriate answers.
You may recognize that, by distinguishing between the operator and the room, I am prefiguring the so-called "systems reply", which is that the system as a whole is demonstrating whatever level of understanding is manifest here, and there is no assumption, in so supposing, that this understanding is manifest in any single component, even if one of those components has the ability to understand some things on its own. By deliberate construction, Searle is using the room's operator to mechanistically implement an algorithm, a task that does not require knowledge of the language in which the questions are posed (in fact, he is using the language barrier to prevent the operator answering the questions himself.) Searle has not presented an argument for the operator needing to do anything more, and it is obvious that people can be trained to perform tasks they do not have any understanding of beyond the specific actions involved.
Searle's attempt to refute this reply seems to show that he cannot even conceive it fully, as that response is to modify the scenario such that the operator memorizes the rule book, as if where it is stored makes any difference.
Searle's stance here seems to me to be a version of the homunculus fallacy, supposing that our minds are implemented by (as opposed to being) a conscious entity.
I think it is significant that the room, even as a system, does not use language.
A hashtable lookup cannot be what it is to "act in the world with words".
When I say, "do you like what I'm wearing?" -- regardless of what has been written in any book, at any time, I do not want that reply.
I may want those words. But the reply I want is to use your judgement of taste and experiences of the world to tell me your opinion. The words don't answer the question if the reason they are used is incorrect.
A hashtable lookup can, basically, never be a reason to use words.
And hence the Chinese room reveals much much more than simply a circular point. And the systems reply fails to say much much less than it needs to.
The systems reply needs to explain "by what system" the system's use of language counts as a use. "By what system" words are born of understanding, not "mere retrieval".
To do this, you need to specify the whole world, experience, etc. as "computational systems". And thus the systems reply simply fails against this example. It isn't sufficient to say "maybe"; you have to say, "and this is how!".
And worse, the world is clearly not a computer -- as any non-trivial definition (e.g., universal Turing machine) cannot simulate properties the world has (non-determinism, chaos, etc.).
> And worse, the world is clearly not a computer -- as any non-trivial definition (e.g., universal Turing machine) cannot simulate properties the world has (non-determinism, chaos, etc.).
This is precisely wrong. With the possible exception of consciousness, in our current understanding of physics, the world is a computer. I believe some of your intuition fails because you have wrongly assumed this point to be obvious.
You're also mixing up the Chinese room experiment with the larger problem of an interactive AI. The Chinese room system is restricted, by definition, to only interact with the world through language. Over time, as it interacts with a person, it may decide that it likes them or not, and its responses may start changing based on this - that is allowed to be part of the algorithm that the human/machine inside the room is applying.
For a silly example, the algorithm may say that 'people who use the same word three times in a row are extremely nice', and use this state of 'extremely nice' to modify its future responses once the same word has been recorded three times in a row. This is no different, fundamentally, to me liking people whose face is symmetric, or the same gender as myself - it's not a choice I made, it's just an arbitrary rule of my (genetic) algorithm.
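A minimal sketch of what such a stateful rule might look like inside the room's program (the rule, the replies, and the class name are all hypothetical, just mirroring the silly example above):

```python
# Hypothetical fragment of the room's algorithm: it carries state about the
# speaker and lets that state modify future replies, per the "three times" rule.
class Room:
    def __init__(self):
        self.last_word = None
        self.run_length = 0
        self.likes_speaker = False

    def reply(self, message: str) -> str:
        for word in message.split():
            self.run_length = self.run_length + 1 if word == self.last_word else 1
            self.last_word = word
            if self.run_length >= 3:       # same word three times in a row
                self.likes_speaker = True  # arbitrary rule, like preferring symmetric faces
        return "How kind of you to say so!" if self.likes_speaker else "I see."
```

The point is only that "liking someone" here is just bookkeeping inside the algorithm, not anything the operator carrying it out needs to feel.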
Quantum non-determinism isn't computable. A universal Turing machine is just an implementation of a function from the naturals to the naturals (i.e., expressible as binary -> binary).
QM systems cannot be described by such functions, hence the world isn't a computer. E.g., any quantum randomness isn't a function, as it has no input state and a non-deterministic output state.
Worse, clearly at least the input and output states of a computer should be measurable -- but most systems aren't. Even including classical ones.
I.e., in chaotic systems it isn't possible, due to QM, to measure x precisely enough that y = f(x, t) yields a single y for large t. f here cannot be a function, as y is necessarily a distribution of states.
In arbitrary chaotic systems y spans the space of all possible states for arbitrary t, and thus we have zero ability to predict anything beyond relatively narrow time horizons.
If you wish to claim reality is a computer: (1) state the properties of a computer; (2) show that having these properties is non-trivial (i.e., it is a substantive claim that something is a computer); and (3) show that these properties are at least consistent with our best theories of physics.
An ideal Turing Machine can perfectly well predict the state of a quantum system: it can solve the Schrödinger equation, and predict the time evolution of the system for each possible solution. It can then predict all possible states of the world after N steps, and even the probability of each state. N can be arbitrarily large. If you believe in the Copenhagen interpretation of QM, you then need to choose one of these states randomly according to the probability the TM predicted. If you believe in the MWI, you don't even need to do that: by this point, the TM has already done exactly what the universe does.
This same process also extends to chaotic systems: a Turing machine can take all possible measurement values and compute the state of the system after N steps.
The mistake you are making is in the word 'predict'. Of course you can't predict the result of a non-deterministic process. But a computer doesn't predict: it computes, that is, it follows mechanical steps to transform an input value. And non-deterministic computations are nothing special, we do them every day in real computers (rand() is a non-deterministic computation). The most famous class of algorithms in complexity theory is even called 'Nondeterministic Polynomial Time', NP for short (of P=NP? fame).
I would first note that you can keep arguing with me all you want, but it is well known and commonly practiced that a Turing machine such as a digital computer can simulate a Quantum Computer or Quantum System to arbitrary precision, given sufficiently large but finite time. There are even cloud services offering such simulation capabilities, such as Amazon Braket [0]. The simulations take exponential time to run a linear time quantum algorithm, but they will produce the same result as a QC would (after enough sampling of the QC results).
> Hilbert space is infinite-dimensional and real-valued.
Infinite-dimensional Hilbert state spaces are a useful mathematical model, but they are not physical. Physical systems have finite elements and thus finite state spaces. We could argue that space itself is infinitely divisible, so that we need infinite-dimensional state spaces to represent the infinite possible positions of a particle, but this is not a physical concern, as it is impossible to differentiate in finite time two states that only differ in an infinitesimal position change - so, we are free to choose some minimal unit of length (say, one over Graham's number of the Planck length) and get a finite-dimensional state space. Crucially, for any amount of time and for any sensitivity of instruments, we can always choose some such unit and ensure that our results will not be distinguishable with those measurement instruments within that amount of time.
> QM says that there is no `rand(seed)`, rather there is only `rand()` that is what "no hidden variables" means.
QM says no such thing, though many interpretations do. Still, that is exactly why I chose rand(), not rand(seed) in my example, so I'm not sure why you're bringing this up. rand(seed) is a deterministic computation, rand() is a non-deterministic computation.
> If you want to do `rand(s) forall s`, that "forall" requires real numbers -- and then you're out-of-luck on computability.
This is not what I was proposing. I was proposing a Turing machine + perfectly random rand() as the simulation of a random world.
Even this is not necessary if you believe in the MWI, where the whole universe actually evolves perfectly linearly and deterministically from a fixed initial state, with no randomness of any kind. This interpretation is perfectly compatible with all observations of QM, if we also add the postulate that observers can only observe one state at a time (the one they are entangled with).
Even without going there, the construction I proposed earlier -- where we essentially select enough real numbers to satisfy any possible measurement device, so that our computable set is indistinguishable in practice from the infinite uncomputable set we started with -- still applies.
In general, there is no (known) way to introduce infinity into empirical science - it is a priori impossible to distinguish, in finite time, between an arbitrarily large but finite quantity, and a truly infinite quantity.
A quantum computer is just a computer. Yes, a Turing machine can simulate a QC. A QC doesn't use any special quantum properties in its computation. The "quantum" part should really be read as being more about its storage system than its computation, i.e., it's mostly about how its input states are prepared.
That's neither here nor there though.
A myriad of other issues exist. The uncertainty principle precludes, in any world, infinitely precise measurement. Thus chaotic systems, at least over a large time horizon, aren't deterministic.
A chaotic system is one in which "insignificant digits" in the input determine "significant digits" in the output. The longer the time horizon, the deeper into the low-order digits of the inputs you need precision. This gets beyond the uncertainty principle.
You can run as many turing machines as you like (necessarily, uncountably many to be consistent with QM). But you cannot escape the problem.
The best theories of physics are extremely far away from "effective computation". They are phrased in a way actively hostile to it. And you have to completely revise physics to make it even half-plausible. It's a project which is barely stated, and a long way from finished.
It seems, indeed, a little doomed to failure. Properties that physics uses day-in-day-out possess the trivial infinities of geometry, and the trivial randomness of uncountable ensembles, etc. -- to claim that physics states that reality is a computer is bizarre.
A QC absolutely uses quantum properties in its probabilistic computation (the complex-valued probability amplitudes of its qubits and their entanglement). A QC can also perfectly represent the state of a quantum system (until you want to measure its output, of course).
The laws of physics are known to be computable. Quantum Mechanics is one of the easiest to prove this for, since it is based on simple linear transforms! (Thermodynamics and GR are more complex, but still computable).
I have no idea why you believe differently.
A chaotic system requires arbitrary precision in the input measurements to be able to predict arbitrarily precisely its state after arbitrarily many steps. But, crucially, it does not require infinite precision in the input unless you want to predict its state with infinite precision (non-goal) or after infinite time (also a non-goal).
The uncertainty principle is also irrelevant. The uncertainty principle has to do with measurement of classical properties. Conversely, the wave function of a system is precise, not fuzzy. It's only in measuring properties of the wave function that we reach uncertainty or probability at all. So, a computer can precisely compute the simple linear evolution of the wave function, according to the Schrodinger equation, which is what the universe itself is doing. The result of this computation is a complex value. You can then apply the Born rule to this result to get the probability of your system being in a particular state after following the simulated evolution, just as you would for a real quantum mechanical system.
We do this kind of quantum simulation every day with great success, with a countable number of classical computers (typically 1), though it is only tractable for very very simple systems. The results don't typically differ in any perceivable way from the actual experiment.
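For a sense of scale, here is roughly what the smallest possible such simulation looks like: one qubit evolved under a toy Hamiltonian, then the Born rule applied to the result (the Hamiltonian, time value, and initial state are arbitrary illustrative choices):

```python
# Evolve a single qubit's wave function deterministically, then apply the Born rule.
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)   # toy Hamiltonian: Pauli-X, hbar = 1
I = np.eye(2, dtype=complex)
psi0 = np.array([1, 0], dtype=complex)          # start in |0>

t = 0.7
U = np.cos(t) * I - 1j * np.sin(t) * X          # exp(-i X t), since X squared is the identity
psi_t = U @ psi0                                # linear, deterministic Schrodinger evolution

probs = np.abs(psi_t) ** 2                      # Born rule: P(|0>), P(|1>)
print(probs, probs.sum())                       # probabilities sum to 1
```

Real quantum-chemistry or QC simulations are the same idea with exponentially larger state vectors, which is where the tractability limit mentioned above comes from.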
Depends on rand() implementation. The one in C is indeed deterministic. Others use hardware entropy as the seed, and are not predictable.
Either way, extending the TM model with an idealized rand() that is purely random doesn't significantly alter the model, and it makes it capable of non-deterministic computations.
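Concretely, the two kinds of rand() being discussed look like this in practice (standard library calls only; the idealized, perfectly random source is of course an abstraction):

```python
import random, secrets

# Deterministic: a seeded PRNG. Same seed, same sequence, on every run.
rng = random.Random(42)
print([rng.randint(0, 9) for _ in range(5)])      # reproducible forever

# Non-deterministic (in practice): driven by OS/hardware entropy.
print([secrets.randbelow(10) for _ in range(5)])  # differs from run to run
```

The second call is what "TM + rand()" gestures at: the machine itself stays mechanical, and the non-determinism comes entirely from the external entropy source.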
In your first post in this thread you wrote "the Chinese room needs to be taken super-literally" - a sentiment I agreed with - but since then (starting immediately afterwards, in fact) you seem to think that Searle is restricting the capabilities of the room to those of a hash table. I am pretty sure that a close reading of his paper will fail to reveal any such restriction, and, in fact, if it did, that would count against his argument. Similarly, nothing you say, here or elsewhere among these replies, about the limitations of hash tables, supports Searle's argument (or more general arguments against Hard AI.)
Nor is Searle arguing that the Chinese room, if it could be implemented, must use language (beyond the capabilities that he does require of it, which is answering questions from an ill-defined domain.) And as far as I can tell, he does not require it to, as you somewhat vaguely put it, "act in the world with words" beyond this.
The systems reply does not need to do anything more than show that Searle is making some unargued-for assumptions about what Hard AI would imply - and Searle's response to it so completely misses the point of that reply that he is actually helping make it clear! (For more details, see the second paragraph of my previous post; as far as I can tell, no argument has been raised against the points I made there.)
The limitations of Turing machines are similarly beside the point. A computer, such as those involved in the composition and delivery of this reply, is not just a Turing machine (not even a universal one), even though it is capable of implementing one (up to the physical limits of memory). In particular, a computer (as opposed to a Turing machine) can incorporate environmental inputs, including physically-derived entropy, and can use these in simulating physical processes.
You are evidently fond of the phrase "[Reality/the world] is not a computer"; that may be so, but rather more definitively, I think, one can say that the mind is an information processor.
I understand that it seems inconceivable to you that a mind could be produced by a suitably-programmed computer of any power, and I am aware that I can only offer hand-wavy arguments for that proposition. I think you are mistaken, however, to read Searle's Chinese Room argument as being about, let alone supporting, the broader intuitions you have expressed in your reply to me and to others. Searle, who is, after all, a philosopher of considerable stature, is well aware that you need something more than just one's intuitions to make a respectable argument; that alone would not pass peer review.
... I’m finding it very difficult to phrase this argument about actually understanding in a way that wouldn’t also entail that chairs made of atoms (and—overwhelmingly—empty space) don’t actually exist, they are mere simulacra of real chairs (which don’t exist anywhere). Which might be a consistent position, but I’d instead say that it is simply a largely useless interpretation of the word actually and as good philosophers we should search for a better one. (See also Anderson’s “More is different”.)
You "actually understand" if in saying words, w, about a situation, s, the reason, r, you say those words counts as as an understanding of s.
So if you're asked, "what is the weather today?" and a system replies, "it's sunny" -- it only understands if the reason it replied is that its saying "sunny" came about because it understands sunny situations, and it therefore has a justified reason on this occasion to say that this occasion is sunny.
If I ask a NN trained on a trillion documents, "do you like what I'm wearing?" it cannot answer. It can say the words, "yes I do!" but it cannot have a reason to say those words. It's just a "weighted average retrieval across a compressed dataset of a trillion documents".
The question isn't "on average, what -- historically -- would a generic person reply to the question: do you like what I'm wearing?"
The question is whether the system likes what I'm wearing. Its replies are only "actual replies" if it can see what I'm wearing, has experiences/preferences for taste, has some aesthetic judgement, has a disposition to like/dislike -- etc.
No ML system on the planet, in this sense, has any understanding whatsoever. Interpolation over historical data is just a means of compressing history into parameters, i.e., it's a compressed lookup table. That isn't ever a /reason/ -- exactly as, if a person simply used such a table, they wouldn't mean what they said.
Given that your squishy brainmeats also consist of neural networks that have been trained directly by years of experience and indirectly by billions of years of evolution, I can write off your response as just a weighted average retrieval across a compressed dataset of trillions of your experiences.
If, on the occasion I say, "I like you" my saying it is caused by my liking you -- then you can describe this process however you wish.
Since my liking you is caused by my immediate environment, it isn't reducible to a weighted average of my history.
Another way of putting it: the historical positions of all the molecules in some water aren't sufficient to determine its present state. Its state depends on its container (i.e., the pressure and temperature of its environment). And there are a very, very large number of states of water, many still being discovered.
In this sense my state in any moment is a point in an infinite space of states -- not determined by my history. But also extremely complexly by my container -- my social, etc. environment. The world hitting my senses is doing more to me than the air on the water. It induces in me a state which cannot be "averaged" from my history.
Thus, no, we are not weighted averages of our histories. We are profoundly chaotic and organic organisms whose growth in our environments enables us to respond to our environments by entering a near infinite number of states. These states aren't in our history; they are how our biophysical structure -- via history -- responds to the near infinite depth of the here-and-now.
We are more like water than a computer. A computer is a deterministic machine which is a deterministic function of its deterministic inputs. Water is a chaotic system whose state "isn't up to it". Water's state is /in/ its container, and water itself is a non-deterministic chaotic soup.
The chaos of water is the least of what one nanometer of a cell has; a cell is a trillion times that adaptive and responsive. And we are a trillion of those.
We are a cascade of chaotic state changes provoked by an infinitely rich environment acting on our bodies shaped by a long history of organic growth.
We don't have "neural networks", we have cells. That some form "networks" has nothing to do with what we are. A complete misdirection.
> Since my liking you is caused by my immediate environment, it isn't reducible to a weighted average of my history.
It is not clear to me that this cannot be the case of a weighted average very heavily weighted to the immediate past.
> The historical positions of all the molecules in some water aren't sufficient to determine its present state. Its state depends on its container (i.e., the pressure and temperature of its environment).
AFAIK, given that you could determine the momenta of the molecules from a history of their positions, this would be sufficient to determine its state (maybe you need their angular momenta independently?) The relevant information about the container has been impressed on the motion of the molecules.
Similarly, we can suppose that the history of your environment becomes manifest in your mental states (and a predisposition towards certain state transitions) - though, on account of the complexity of that environment, in a compressed form.
> We are more like water than a computer. A computer is a deterministic machine which is a deterministic function of its deterministic inputs. Water is a chaotic system whose state "isn't up to it". Water's state is /in/ its container, and water itself is a non-deterministic chaotic soup.
And yet we can usefully model fluid dynamics on a computer, even though the mathematical representation of the problem is analytically intractable. This line of argument does not appear to be leading in the direction you think it does.
> We don't have "neural networks", we have cells. That some form "networks" has nothing to do with what we are. A complete misdirection.
I am generally distrustful of these ontological arguments - quite often, it seems, things that were once thought of as being completely different turned out to be similar in some relevant way.
Ok, but this is all irrelevant. The trained history of evolution is the Chinese book. The environmental interaction is the symbol being passed into the room.
Without your trained history, you would not be able to say "it is sunny" when the sun hits your skin. We know this for sure, as an elephant or a baby are unable to say it, while a Frenchman would say different words.
Me saying "I like you" is not caused by me liking you. It is caused by me deciding, based on my experience and my perception of the current situation, to utter these words. I may be lying, I may be exaggerating, I may be misdirecting. Human speech is extremely rarey caused directly by interaction with the (external) environment (exceptions would be onomatopoeia exclaimed when in extreme pain, or when surprised, which tend to bypass most computation in the brain).
Okay, but how do you test whether any speaker (human or otherwise) understands something without asking them to explain it? If you have some other test (like perhaps some analysis of what the human brain is doing when it is understanding something), great! But even then, why can't the computer do the same thing that the human brain is doing which constitutes understanding? Or if the only test is to ask the speaker to explain something to you, well then, the Chinese room can do that too! Of course you end up with an infinite regress where you can ask the speaker to explain the previous explanation forever, but that's true for human speakers as well.
The question is whether what causes us to understand things is a computational process.
See my comment to the other reply, and others in this thread about chaos/non-determinism, to see why I doubt it.
I.e., I don't think our organic growth and adaptation to our environment, in being profoundly chaotic (and, via QM, therefore non-deterministic), is likely to be describable as a computable function.
Non-determinism is a red herring. It's trivial to model and produce a non-deterministic computer given a deterministic one and some source of non-determinism (it's in fact impossible to produce a truly deterministic computer in real life).
Conversely, you can interpret QM to mean that the universe evolves perfectly deterministically, except that no particular observer can predict in which branch they will find themselves.
Also, non-determinism has little to do with computability. All known rules of physics are computable, even if you include the Born rule for non-determinism. A quantum computer is computable by a Turing machine (in exponential time, as far as we know).
All laws of physics aren't computable. None of them are, they are all defined over the reals.
You've a bigger issue with talking about the computability of those laws.
I am assuming that somehow the reals can be ditched (which, incidentally, seems unlikely). And in that reality, we still have a problem: the role the reals play is to make reality infinitely precisely measurable. Without this, chaotic systems become non-deterministic over relatively short time horizons.
Nearly all actual systems are chaotic in many of their properties. It isn't a matter of just adding an RNG to a CPU. A trivial object has 10^30 atoms; in organic stuff, much of that is behaving chaotically on environmental boundaries. The energy to simulate this, assuming 10^30 RNGs (etc.), is vastly greater than that of the target system, and the simulation "cannot outrun" that system.
Not only this but I think simulation of such things quickly surpasses physical bounds on physically possible computers, not least as the RNGs have an energy cost.
The real numbers used in physics are all computable (pi [0], e, sqrt(2) etc are all computable numbers). There are non-computable real numbers, but so far those have never been necessary for describing physical systems (e.g. Chaitin's constant).
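To make "computable" concrete: a real number is computable if there is a finite procedure that outputs it to any requested precision. A minimal sketch, using sqrt(2) from the line above and Python's standard decimal module:

```python
from decimal import Decimal, getcontext

def sqrt2(digits: int) -> Decimal:
    """sqrt(2) to any requested number of digits -- which is all 'computable' demands."""
    getcontext().prec = digits + 5   # a little headroom for rounding
    return Decimal(2).sqrt()

print(sqrt2(50))
```

Chaitin's constant, by contrast, admits no such procedure, which is what makes it uncomputable.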
When talking about computability in the absolute, theoretical sense, the size of the universe is irrelevant - all we ask is if there exists some arbitrarily large, but finite, Turing machine that performs that computation and then halts after an arbitrarily large but finite amount of time.
If we're talking about practical computability, then my base assertion that the physical world is itself a computer also implies that this computer is exactly fast enough to simulate physics. Since we know that the laws of physics are computable in the theoretical sense above, and since we know that the universe is accurately simulating itself, we have proven my assertion.
Your other points about chaos and non-determinism I have addressed in another thread.
> The Chinese room needs to be taken super-literally: does the person just doing a "hash-table lookup" on "the right answers to pre-posed questions" actually understand?
> To me it's incredibly obvious that they do not.
That is Searle’s approach, but I think the point where Searle and others taking it fall down is the inability to do this: explain any testable manner in which we can confirm that those who you would describe as “really understanding” a language (or anything else) are actually doing something materially different from what you assert is obviously not understanding.
Until someone can do this, it is clear that the Chinese Room is nothing more than a conclusion that mechanistic computation is insufficient for understanding resting solely on the assumption that...mechanistic computation is insufficient for understanding.
> The Chinese room needs to be taken super-literally: does the person just doing a "hash-table lookup" on "the right answers to pre-posed questions" actually understand?
That’s not how it’s phrased. The Chinese Room needs to pass the Turing Test with arbitrary input of Chinese characters. So a convincing Chinese ELIZA, basically.
> The issue here is that if "computer" isnt just a trivial property it has to include, eg., measurability, determinism, etc. Properties that reality lacks. Reality isnt a computer.
There's a confusion in the language I'm seeing in the comments, between "understand" as in knowing a language, and "understand" as in possessing a useful mental model of a system.
I don't see why a machine can't be said to "know" a language; after all, it can translate languages. I can translate from French to English (I'm a native English speaker); but there are sentences in English that I can't "understand", either because I don't really know what all the words mean, or because the concept expressed by the sentence is beyond me.
Not really. The point is that as an argument against computationalism about the mind, Searle's Chinese room can't rule out computationalism if mind is merely a relational property of a system.
The Chinese room was originally an argument against the consciousness of automatons, but then EPR was an argument against (fundamental) entanglement, Bell more or less expected his inequality to hold, Michelson–Morley was interpreted as having found the luminiferous aether, and the Poisson (aka Arago) spot was a reductio ad absurdum of the wave theory of light. All of these things aren’t obsolete, they are still useful pieces of insight, but the state of the art regarding how they should be conceptualized has moved on from what their originators thought.
Which is not an argument for any particular interpretation of the Chinese room, it’s only to say that the bare fact that the original author thought of it in a particular way doesn’t mean we should do so.
The Chinese Room is useful as an idea or model, even if you don't agree with the original interpretation. I am fond of the view that the room is potentially conscious, with human or machine worker. The worker should expect to understand the conversation they are mechanically carrying out no more than your individual brain cells ought to expect to understand.
It seems to me that the argument is kind of silly. The "Chinese Room" has failed to account for the actual processes of the system. The 'spoken response' is only a small part of the 'program' of a mind; the rest is the internal processes that constitute consciousness that can infer, compare, reinforce, and associate. Consciousness is the whole of the system, not a functional IO.
Yeah, this is related to Stephen Wolfram's principle of computational equivalence and his re-take on "the weather has a mind of its own". Turing machines occur everywhere in nature and emerge from the simplest rules, Wolfram demonstrates, so maybe consciousness is everywhere too. It's basically a question of where to put the scope of computing processes. Searle refutes it by saying it's nonsensical to contend that a concrete wall could be conscious. I don't know, who knows.
It doesn't seem particularly nonsensical to contend a concrete wall has some form of consciousness. Concrete walls don't spring up out of nowhere, they're built by humans who have a need to separate things (either for good or for bad). A single concrete wall might not seem like much, but in the greater scope of things, the rising and falling of all concrete walls seems to represent some sort of plane of humanity and its consciousness. Even think of the extremely complicated emotions evoked by different types of walls: i.e. the Berlin Wall, the U.S.-Mexico Wall, the Great Wall, etc. I know I'm probably reading way too much into this specific example, but I think this could generally apply to a lot of things.
Why stop at a computer, why not extend the argument to a full simulation of a human brain? What if I sat in my room for 3 million years simulating 3 seconds of a human body including the brain? Assuming that my simulation is faithful, who's to say that my simulation doesn't have some form of consciousness or subjective experience?
You can go even deeper: given that your simulation follows a set of known rules from an initial state, what does actually performing those steps even matter?
Is the consciousness given existence because you perform calculations on paper, or does it exist already within the fabric of reality and you're merely exposing it?
I'm not sure what you mean. Just listing the steps of an algorithm is quite different than executing the steps of an algorithm. As an obvious example, with an algorithm for summing two numbers together, the difference between listing the steps and executing them is that when you execute them you are provided with the sum.
I don't see any sense in which these aren't different, unless perhaps you're introducing some separate cosmological "Boltzmann brain"-style argument that all possible finite results of all halting algorithms will arise or have arisen due to random fluctuations. But in that case you've got a whole new set of assumptions to address.
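To make the summing example above concrete, here's a minimal C sketch (my own illustration, nothing from the paper): the function body is just a listed description of the steps, and no sum exists anywhere until something actually executes it.

```c
#include <stdio.h>

/* "Listing the steps": this definition is only a static description of
   how to add two numbers. Compiling it produces no sum of anything. */
static int sum(int a, int b) {
    return a + b;
}

int main(void) {
    int (*listed)(int, int) = sum; /* a reference to the steps; still no result */
    int result = listed(2, 3);     /* "executing the steps": now the sum 5 exists */
    printf("%d\n", result);
    return 0;
}
```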
The idea of a fundamental difference between the result of "performed" and "un-performed" computation always leads me to imagining the null universe, where nothing ever gets computed but all math "exists" as we'd expect. It raises the question: why is the universe non-null? Allowing the mere existence of the possibility of computation to answer that question is very comforting. But yes, it requires approximately as much faith as any monotheistic religion.
Does it really? When one considers that we take seriously the many worlds interpretation of quantum mechanics and nigh-uncountable Calabi-Yau manifold topologies of superstring theory? There is some suggestion that our experiential reality is but one infinitesimal slice of existence even among the empirically minded.
I don't think we have any real observations that rule out the Matrix theory (e.g. existence of a "creator"), so I think it does. I'm not sure how parsimony plays into this; I do think MUH[0] is the most parsimonious TOE.
This is why I find it so strange that philosophers seem, by and large, to accept "Cogito ergo sum" ("I think, therefore I am") as a valid proof of one's own existence. What if the appearance of thought is not actually thought?
Did philosophers not watch Blade Runner? Sure, Descartes didn't - but contemporaries sure as shit do!
Not to downplay Descartes (his philosophy, specifically the process he undertakes, is worth a study), but something that the pop summaries of his writing really gloss over is that his entire premise hinges on something he takes as assumed true that need not be, and if it isn't, the rest goes straight off the rails.
Paraphrasing from memory: he pins "cogito ergo sum" on the assumption that objective knowledge is possible. That assumption is built atop imagining an alternative: an evil demon has full control over his senses and feeds him whatever it wants to (essentially, the 'Matrix hypothesis'). His solution to this concern? "A loving God wouldn't create a universe that banal and meaningless."
It's only a proof if you accept the postulate that objective reality is correlated with experience.
But how do you even distinguish appearance of thought from actual thought, objectively? Of course you can always argue that actual thought is something that is subjectively experienced, but that leads to solipsism, doesn't it?
Solipsism is what you get from accepting the Cogito "I know I exist for sure, and only my own existence can be verified by the cogito"
To answer your question, I claim that you can't. Radical skepticism ("we live in the universe of the evil demon"), which is what I advocate for, means that you can't even claim that you exist. See the works of Max Stirner for further elaboration of the implications of this for Philosophy.
Philosophical confusion often ensues when we use words outside their context. In this case, words like "intelligence", "consciousness", and "understanding" are meaningless, because they have no widely understood meanings in the context of non-human systems.
The Chinese room argument is effectively a counterargument to the Turing test. Turing defined intelligence in terms of the external behavior of the system: according to him, a system is intelligent if its behavior is indistinguishable from that of a human in a task that requires intelligence.
Searle claimed that intelligence is a property of the physical system itself, not of its external behavior. In order to determine whether a system understands something, we have to observe its internal behavior. A state machine could in principle pass the Turing test due to an extensive set of rules, but Searle would not call it intelligent, because it is fundamentally a simple mechanism.
The original Chinese room argument went further. Searle claimed that the same principle that rules out the intelligence of a state machine also rules out the intelligence of any programmed computer. I think that can be attributed to taking models of computation too literally. If we define computation as taking a fixed program and a single input and producing a single output, it feels mechanical rather than intelligent. A system like that can simulate another system that interacts with the environment and updates its internal state, but like Searle himself claimed, a simulation is not the same as the simulated system.
The point about him doing the manual execution of the computer is a stupid parlor trick. Individual proteins in axons have no special awareness of "English" encoded in the firing. RNA expression has no special awareness of "English". It seems like appeal to bias rather than an actual argument.
I find far too many philosophical constructs insufficiently deconstructed.
Plus everyone has a different personal interpretation of language and meaning shaded by different exposures and interpretations of sensory input. Sufficient common experience is likely what gives words some semblance of common meaning between people.
I would argue what makes AI truly hard to do is that a computer can't have the experience of birth, child rearing, etc. It's too foreign. But at some point language veers into a coded representation of physical laws and reality, and there AI can converge with human experience.
IMHO the Chinese room thought experiment is essentially a tautology which hinges on the validity of the key assumptions it implies, namely, that such a room is possible.
So, Searle supposes that he is in a closed room and has a book with encoded instructions that supposedly allow him to communicate in Chinese; and he also supposes that being able to convince the Chinese-speaker that they're speaking to a human (i.e. passing a Turing test) is sufficient to demonstrate "understanding" - for whatever definition of "understanding" Searle wants to use.
If we discount the second assumption (which might well be warranted - IMHO humans could be tricked in a Turing test by something that makes zero pretense at understanding anything), then there's no paradox; the experiment is simply too weak to say anything about "understanding" whatsoever. But since Searle considers the experiment valid, let's assume that it holds: if someone/something can convince the Chinese speaker of "proper understanding", then that someone/something really does have "proper understanding".
So it comes down to whether blindly following the instructions in a book can achieve proper understanding. If yes, then Searle must concede the argument. And if not, then Searle's room simply doesn't work - apparently such a book is impossible to make or have, the Chinese room can't exist, and the impossible experiment can't have any results.
In essence, it's a tautology - no matter how ridiculous you consider encoding "proper understanding" in a book of instructions, if you start with an assumption that you do have such a book, then you obviously have assumed a world where the combination of instructions+mindless executor can have "proper understanding"; and if you start with an assumption that a machine like that definitely can not be thinking "properly" (as Searle does) and Strong AI is impossible, then you have thrown away the key thing required for the thought experiment - the possibility for such a Strong-AI-enabling book to exist.
If you make two assumptions and find out that they are incompatible - like Searle does here - then you can deduce that at least one of them is wrong, but it does not say anything at all about which one of them is wrong, so I don't see much value in this thought experiment.
> To all of the questions that the person asks, it makes appropriate responses, such that any Chinese speaker would be convinced that they are talking to another Chinese-speaking human being.
This is a fallacy, regardless of the language. Some comms between conscious entities must always contain metaphoric language, including signaling through body language. A shrug is a metaphor for uncertainty. This is why the premise of the Chinese Room is true (we can’t build a mind in a box) but also why conscious machines will necessarily have to have bodies.
Anything that is conscious and can hold a (real) dialog with a human will have emotional output tied to the body and its collective experience.
Can't this argument be pretty easily countered with quantum mechanics and randomness induced by it? I'm no physicist but if you have a quantum random source of bits the computer and the person would not reach the same answer. It could then be argued consciousness arises from the quantum nature of the universe.
By the same token, if you have a quantum random source of bits, two identical computers running the same computation but using that source as part of the input will not always reach the same answer. Does that prove that they are running different computations? Does your CPU become conscious when a cosmic ray interacts with its RAM (a rare but far from unheard-of occurrence of quantum noise affecting all computers on Earth, and much more common for computers outside the atmosphere)?
I wanted to incorporate this question into an Arduino lesson plan based around key breaking: basically demonstrating that 10,000 combinations might seem like a lot when you're doing it by hand, but a computer can brute-force it in no time. So how big does a number have to be before it starts taking serious time for a computer to count to it?
That's when I learned that compilers will look at your for loops, just predict the result of incrementing a number a million times, and skip to the end condition. It was quite a shock to find out that the program I write is not the program that runs. I think I added something simple to the loop, like flipping a bit on a digital output, to force the loop to actually run.
The best way to achieve this without changing the loop itself would be to declare the variable `volatile`. That way you tell the compiler that you actually care about the variable being read from / written to memory every time it is read or written.
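Roughly like this (a plain-C sketch of the idea rather than the original Arduino code, which I obviously don't have): without `volatile`, gcc at -O2 is allowed to collapse the loop into its final value; with it, every increment has to go through memory.

```c
#include <stdio.h>

/* With a plain `unsigned long counter`, an optimizing compiler may replace
   the whole loop with `counter = 1000000UL`. Declaring it volatile forces
   each increment to be performed as a real load and store. */
volatile unsigned long counter = 0;

int main(void) {
    for (unsigned long i = 0; i < 1000000UL; i++) {
        counter++;
    }
    printf("%lu\n", counter); /* prints 1000000 either way, but only the
                                 volatile version actually counts to it */
    return 0;
}
```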
Actually yes, I think that's what we went with. Instead of counting from 0 to MAX_INTEGER we generated that many random numbers, and brute-forced the integer "keys" just fine.
This is only redeeming to me really because generally it's the philosophers who are presumptuous enough to tell other fields what they should do or think. Nice to see the tables turned on them!
> generally it's the philosophers who are presumptuous enough to tell other fields what they should do or think
I'm sorry, but in my experience this is not at all a one-way street. It is very common for me to hear engineers and other STEM types complain about modern art or about the supposedly obscurantist, sophistic style of various disciplines of the liberal arts. However, these complaints rarely come from engineers who take an active interest in the fields they criticize.
Admittedly I'm mainly speaking about general people I work with or whom I (sadly often) encounter on the internet. But there are some eminent names who are just as guilty. In addition to Kaku, as bakuninsbart mentioned, I could also bring up, off the top of my head, Stephen "philosophy is dead" Hawking and Richard "Shakespeare would have been better if he were educated" Dawkins.
I think I could come up with many more examples if I started looking. On the other hand, we fortunately also have people like Murray Gell-Mann, who has gotten us to quote from the most experimental book in all of English-language literature every time we talk about elementary subatomic particles.
> supposedly obscurantist, sophistic style of various disciplines of the liberal arts.
Would you say there is absolutely no kernel of truth to this? Check out, say, the abstract to this paper [1]. Is there nothing obscurantist about it? If you acknowledge that it's obscurantist to some degree, would you say that it's rare and I just cherry-picked a bad one?
I'm a STEM person, and I have trouble understanding why some people find this stuff to be just reasonable academic work with nothing dysfunctional, pedantic or sophistic about the writing style.
It just seems so extremely obvious to me, that it makes me wonder if the people into this stuff simply have nervous systems that are wired a bit differently, and I'm falling prey to the typical mind fallacy. It's hard to believe that if I studied this stuff deeply enough and with an open mind, it would no longer seem obscure.
While one could argue that the abstract you linked is not well written, I do not think it’s obscurantist by any means. It’s a dissertation, so it’s not surprising they are using the jargon of the field and citing important works.
All academic literature is specialist literature. If you aren’t trained in the field you likely won’t understand it. It’s totally reasonable to me that a STEM person would have no idea what this abstract is saying just as a Humanities person probably couldn’t make heads or tails of the abstract of a dissertation on category theory or on a particular branch of computer science.
I find it funny that STEM folks always go after humanities academics for being obtuse when it's just a matter of the pot calling the kettle black—dense STEM research and theory uses language that'd be considered equally obtuse to the untrained reader.
To me, it goes beyond being hard to read, and I take it as obscurantist in the strictest sense of someone going out of their way to be hard to understand.
I have a theory that most STEM people simply don't think like most humanities people, literally at a neurological level. (I edited my post to add some thoughts around that, possibly after you replied.)
STEM work rarely comes off that way to me. The only time it looks to me like the person is going out of their way to be obtuse and technical, is some higher math stuff (which is a known thing and acknowledged even by some mathematicians). This includes the stuff from entirely different parts of STEM that I don't understand at all.
That’s fair. I would agree there is a certain “big words == more intellectual == smarter” or “more difficult == smarter” fallacy that arises somewhat frequently in contemporary humanities papers.
I think part of it might originate from the fact that the abstractions used for talking about things in the humanities aren’t fixed as well as they are in science. Take the abstract in question for example—the writer uses the palimpsest as a sort of visual analogue and abstraction to try to describe interactions and relationships between texts/narratives—while it’s not an absurd metaphor, it’s difficult to grok, because there is no real standardized metaphor for describing this set of relationships. You could argue the object of study isn’t as well defined as it is in the sciences, where we have fairly standardized abstractions like “waveform” etc. that make it a lot easier to talk about things clearly.
I think that there is a thing called literary nonfiction which does use these higher level abstractions.
It might be that they are less fixed, as you say. That is a feature rather than a bug. It would take so much more text to describe everything literally than to use literary devices, which seem to be things that humans are really good at grasping. They are everywhere in film and TV but not everyone has experience naming them and referring to them in text at a meta level.
Imagine writing a symbolic AI in Go or C. There is a reason why people use Lisps and functional languages for very dense abstractions. They just do a lot of work, which some folks choose to deride as magic.
> It's hard to believe that if I studied this stuff deeply enough and with an open mind, it would no longer seem obscure.
Not sure you could satisfy the "open minded" part of that if you are coming from this close minded starting place!
Whenever I come across something I don't understand, I will try to figure it out. If it seems utterly weird, that's even more motivation to figure it out! I really can't imagine this mode of thought where you read something, do not immediately grasp it, and then feel that somehow it's wrong/bad/obscurantist. Like, how do you learn anything at all?
Side note, but it's funny people throw around "sophistic" in contexts like this. In Plato's time, sophists were precisely the ones who appealed to common intuitions, for money. It was Socrates who came along and said philosophy began with wonder, and demonstrated this by intentionally confusing people in order to break them out of deeply embedded modes of thought.
> Whenever I come across something I don't understand, I will try to figure it out. If it seems utterly weird, that's even more motivation to figure it out! I really can't imagine this mode of thought where you read something, do not immediately grasp it, and then feel that somehow it's wrong/bad/obscurantist. Like, how do you learn anything at all?
It's a matter of the tone, and also the fact that when I do read carefully and figure out what they're saying, I often think "wow, I could have put that in much simpler terms with no loss of information."
I don't feel this way about literally any other topic besides modern literary criticism (and stuff in that family like modern continental philosophy). Even analytic philosophy looking at similar topics doesn't normally feel willfully opaque in the same way, and I'm happy to dig in and learn the more difficult aspects of what they're saying.
Do you think this is due to being prejudiced about this exact area and nothing else?
It's intensely obscurantist. No, it's not rare; some people think that using language that's hard to understand makes you clever. I think true cleverness is explaining concepts that are hard to understand, using language that's easy to understand.
Why not respond to my long comment in this thread, where I defend this abstract? Your sentence ("It's intensely obscurantist") is forceful-sounding, but it has no argument.
I also cannot say whether this paper is deliberately obscure, because I am not a literature professor and I don't know who the audience is. But, I tried to give some arguments for why the abstract looks reasonable. Are you a literature professor?
No, but I can read most scientific and philosophical texts in English, and get the gist. And I'm good enough at reading English to be able to recognise bafflegab when I see it.
If that prose is written in a code that is only meant to be comprehensible to sociologists and postmodernists, I don't see the sense in publishing it (as in, making it public).
>If that prose is written in a code that is only meant to be comprehensible to sociologists and postmodernists, I don't see the sense in publishing it (as in, making it public).
I mean, it was a doctoral dissertation, written exclusively for an audience of Literature PhDs[1]. Other times, these papers are published in specialized journals. Without publishing, how else would they disseminate their research?
And even if it's not for me, I'm always happy for open access to research. So I'm happy whenever postmodern thinkers make their work available to the public, in the same way I am when thinkers of abstract mathematics do. I personally will probably not be looking in either work, though :-).
[1] Cardoza-Kane, Karen M, "Trauma's palimpsests: The narrative cycles of Louise Erdrich and Richard Rodriguez" (2005). Doctoral Dissertations Available from Proquest. AAI3193887.
I'll try to answer. I absolutely don't think this look cherry-picked. In fact I think it looks like a reasonable humanities abstract! Note that I am far from a specialist in literature, but I do love to read.
First of all, although it uses some precise vocabulary, this abstract does not seem to have the particular blandness of much academic writing (I definitely agree that poor writing is easy to find in all branches of academia). One little proxy to look at is the overuse of nominalization (i.e. using a noun form where a verb form could work). And not all nominalization is bad. The word "nominalization" is actually self-describing.
For example, perhaps we could "clean" this passage (my quote is a fragment of a participle phrase at the end of a sentence)
>... , their thematic and formal interconnections enacting both the repetitions of trauma and the necessary revisions of historiography, identity, and recovery.
into a new independent clause:
> They formally and thematically interconnect, repeating and necessarily revising how one presents history, self-identifies, and recovers.
But this "fix" might blur the original meaning in critical ways. For example, who now is this "one" being spoken of? That seemed to me to be the best option, over the worse pronouns "you" and "we" (which would be speaking for someone else). The original phrases avoid this entity identification, focusing instead on the general action.
I also removed "enact", but what if these texts really do "enact" a revision? This "creation upon creation" of an action seems in line with the concept of a palimpsest, which is a repurposed book (I'll say more about the palimpsest after I dissect my awful revision). Then, I completely butchered the last concepts. To present history is only one aspect of the general practice of historiography. Revising an identity is not the same as self-identifying (and, again, who is this self?). "To recover" matches the action of recovery more closely, but using the verb would destroy the sentence's parallelism. Hopefully, this example demonstrates the great difficulty of using precise language with heavy, complex concepts.
On to the subject itself, using the word "palimpsest" doesn't seem obscure. Perhaps the author's central thesis is that the works that she studies have repurposed old texts or old memories of trauma again and again. This seems like a suitable metaphor. Or, maybe palimpsest has a special technical meaning in her body of scholarship.
And the abstract does carefully lay out the scholarly tradition that the paper follows, going chapter by chapter. Readers familiar with the works she mentions will probably be happy for this guided summary. Readers not familiar, like me for almost all the names, may not even be part of the intended audience.
Lastly it seems like she uses two different authors to explore her own scholarly interests in trauma, gender, sexuality, and self. These seem like great things to study, and very complex indeed.
Even if these themes were not on the minds of the authors who are the subject of this paper, these authors nevertheless do live in the world, and their work necessarily incorporates fragments of inherited thought (like how you and I speak English, whose development we had nothing to do with). Maybe this paper finds some unexpected connections.
> that it makes me wonder if the people into this stuff simply have nervous systems that are wired a bit differently
I don't know anything about nervous systems.
But in these kinds of works, there are no exact answers. You cannot run `gcc gender-paper.c` and find out if it compiles. Instead, these ideas have to be written about and discussed against a wider body of thought. And there's probably some element of judgement and metaphor required to think this way. And the ideas in these works do in fact slowly disseminate and affect society.
That's my spiel. I typed it up because I see these kinds of comments often, and I wanted to put a good response on record.
----
I also don't believe there is zero obscurantism in the humanities, by the way. Shitty research happens everywhere. But, every time the humanities gets slammed as being particularly soft, it seems like the reader forgets all the articles on the front page about falsified scientific results, unreproducible research, political machinations to get tenure, outright grift, etc.
That's not my experience at all, rather I see a lot of philosophers giving good pointers where their field borders on others, and then being shot down or ignored. I'd actually be interested in an example of this, as I don't think I've encountered it so far, although I'm admittedly not that engaged either.
By far the worst offenders of presumptuousness must be old physicists though, followed by old computer scientists. Thinking of Michio Kaku talk about philosophy still makes me physically cringe.
My experience is that the feedback provided by philosophers sounds good to the philosophers, but is actually useless to practitioners. And in technical subjects, like math, computer science and physics, it is important to develop a really good BS filter.
> Few were sufficiently correct that people have forgotten who discovered what they discovered.
This is laughable. Most of our culture was once philosophy, and I sure don't know whose. If you define philosophy as “that which has not escaped the realm of theoretical philosophy”, then sure. But we use practical philosophy every day!
All fields were once "philosophy", that's a naming artifact not an insight. What's being claimed is something like: when "natural philosophy" and "philosophy" diverged into distinct fields, "natural philosophy" got the good stuff.
“Natural philosophy” has contributed basically nothing to the structure of our society. To our ideas of fairness, and ethics, and how to co-operate with untrustworthy strangers. Its impacts, like medicine, have, but that's like saying trees are responsible for how our society works.
So much of this, we just take for granted. Philosophy is either obvious or wrong.
Is there any evidence that the-thing-we-call-philosophy-today has actually contributed positively to those things? I've seen it claimed that professional ethicists act less ethically in their daily lives than the average member of the public, and most ethics training classes seem at best useless.
There is no evidence. The thing we (non-philosophers) call philosophy today is like the thing we call pure mathematics today: it's the philosophy that hasn't been useful, and therefore hasn't been moved into another field like epistemology or philosophy-of-science or probability theory.
I think there’s an inherent tension between what philosophers are trying to do and what practitioners in specific fields are trying to do that leads to the bad blood.
If you’re a philosopher, you’re generally trying to look at things from a general enough perspective that you might argue for a radical shift in the way we do things and call out the assumptions that are otherwise taken as axioms of the field—the water we swim around in.
If you’re a practitioner, you’re focused on swimming. Core axioms and prevailing theory are your presuppositions—you’re just focused on getting things done within the confines of the prevailing theory/framework, perhaps nudging it every so often in particular directions based on new discoveries—it’s quite rare that a working scientist sparks an actual theoretic revision and becomes the next Einstein (see Thomas Kuhn).
So yes, philosophers are totally useless when it comes to getting practical stuff done because that’s generally not the space they are attempting to help with or illuminate.
The problem is that philosophers believe that they know things from first principles which, in fact, have been falsified by science. Therefore the thing that they are saying is wrong.
> legions of lesser philosophers have fallen into the same trap.
I don't know, this case is somewhat infamous, but it really speaks more to the Nobel committee's own biases than the field of philosophy. Bergson was prolific and influential, but also unpopular within many philosophical circles outside of the Francosphere, and also inside it by the time the post-structuralists came along.
Anyway, wait until you find out what the legions of lesser scientists have been up to since 1921...
Goodness yes. When I was in physics, there were a lot of these philosophers who had some kind of real problem with (special) relativity, like it had swerved and driven over their dog on purpose. I'm not talking about the spackling of idiots who were under the impression that Michelson-Morley was performed precisely once, but folks who just found it antithetical to their personal outlooks, aside from the space opera types whose dreams are cruelly crushed underfoot by c (no exotic matter has yet called Lazarus forth from his tomb). Normally, one could dispel them with some holy water and a copy of Dr. Will's Was Einstein Right? but some had a tenacity to them which had long transformed from a virtue to a vice.
It's funny because of the number of physicists who loudly proclaim philosophy is either dead or at least useless (while also extolling the virtues of their own philosophy of science, with bonus points for mangling Popper).
Also physicist. My beef is more with philosophy-of-science folks in particular. They think that we, physicists, really need them and their insights about how to do physics properly. For example they really like this meme:
reddit.com/r/badphilosophy/comments/gjz24v
Which is totally dishonest. (I tried to counter that with this https://imgur.com/a/zwDhfxJ , but that's not memey enough.)
It's not a coincidence that the left side (more positive to philosophy) and Born (whose quotes are more neutral) are also continental European, while the three critical examples (can add in Krauss, Hawking, and many others) come from Anglo-Americans. There's too often a pettiness in the latter (to an extent, also in your post) in which what look like fairly limited and contemporaneous gripes about academics in another field is given the trappings of some historically unbound and universal insight.
There's no inherent tension between science and philosophy; even Mermin's "shut up and calculate" is a philosophical position (instrumentalism), to say nothing of the philosophical, scientific and mathematical contributions of e.g. Poincaré, Helmholtz, Mach, Duhem, Peirce, Leibniz, Von Neumann, Jaynes etc. Even modern Anglo-American physicists contribute to and draw a lot from philosophy, e.g. Wheeler and Paul Anderson.
There are schools of philosophy (e.g. philosophical materialism, not very well known outside Spanish-speaking countries https://www.fgbueno.es/ing/gbm.htm) that not only don't do that, they consider philosophy a second-degree kind of knowledge: first-degree knowledge comes from the sciences and techniques, and philosophy works with the concepts and ideas that come from them.
Philosophy itself, in fact, was born when people took the concepts used in geometry and started using them to build systems of ideas (quite the opposite of the widespread view of philosophy as the "origin" of science)
> Philosophy itself, in fact, was born when people took the concepts used in geometry and started using them to build systems of ideas
What? Philosophy and geometry both date to prehistory and we find them co-evolving in the west through many separate ideological movements. It's ridiculous to talk about which one came first. Moreover, when people talk of "concepts used in geometry" as a distinct thing they usually mean Euclidean or at oldest Pythagorean - earlier than that and geometry is considerably more numerical and difficult to distinguish from any other math or astronomy. But philosophy significantly predates that kind of geometric concept - "Know thyself" is hardly a mathematical directive!
I am beginning to be of the sentiment that human brains are usually functionally approximated by pushdown automata, while our emotional life is better approximated by hypercomputation. The commonly assumed view that minds are Turing machines no longer fits me.
The argument against consciousness for survival machines seems strong. Finding an ecological niche for consciousness requires more creativity than survival ever would... and so an evolution of consciousness may have happened in program space instead of in hot and wet mechanistic nature.
Some of my favorite insights have come from realizing how concepts overlap disciplines. I remember the first time I recognized such an overlap: it led to a realization that the rules by which the universe operates crop up in every aspect of the experience of existing as part of it. Since then, so many insights about so many things have come from similar understandings, such as how thermodynamics applies to informational organization; even new ways to think about problems come from such insights, and they can change your view of the world.
I find that computational complexity provides a reasonable way to define capitalism so as to distinguish it from other economic models. Only under capitalism is it possible for wealth to grow exponentially (which eventually exceeds any polynomial growth in time), since you can make wealth by virtue of already having wealth. It also helps explain the origins of wealth inequality: when growth is determined solely by current value, it is not bounded in the way that one's labor is.
Some people assume that capitalism is simply supply/demand or the presence of markets, but I think that ignores the fact that markets have seemingly existed for far longer than capitalism.
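To put a toy number on the compounding point above (made-up figures, just to show the shape of the curves): capital compounding at a fixed rate versus a fixed wage added each year. The wage line wins for a long time, but the compounding line eventually overtakes it and then runs away.

```c
#include <stdio.h>

/* Toy comparison with made-up numbers: wealth compounding at 5% per year
   versus a flat yearly wage. Compounding growth is proportional to what
   you already have; wage growth is bounded by the labor added each year. */
int main(void) {
    double wealth = 1000.0;    /* assumed starting capital */
    double wages = 0.0;
    const double rate = 0.05;  /* assumed yearly return */
    const double wage = 1000.0;

    for (int year = 1; year <= 100; year++) {
        wealth *= 1.0 + rate;  /* grows with current wealth */
        wages += wage;         /* grows by a fixed amount per year */
        if (year % 25 == 0)
            printf("year %3d: capital %10.0f   wages %10.0f\n",
                   year, wealth, wages);
    }
    return 0;
}
```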
Why Philosophers Should Care About Computational Complexity (2011) [pdf] - https://news.ycombinator.com/item?id=17573142 - July 2018 (22 comments)
Why Philosophers Should Care About Computational Complexity (2011) [pdf] - https://news.ycombinator.com/item?id=11913825 - June 2016 (54 comments)
Why Philosophers Should Care About Computational Complexity [pdf] - https://news.ycombinator.com/item?id=9061744 - Feb 2015 (43 comments)
Why Philosophers Should Care About Computational Complexity - https://news.ycombinator.com/item?id=2897277 - Aug 2011 (10 comments)
Why Philosophers Should Care About Computational Complexity - https://news.ycombinator.com/item?id=2861825 - Aug 2011 (36 comments)