
Why does our not fully (or even close to fully) understanding how the brain works mean conscious vs subconscious is unscientific?

Is all mental health research unscientific based on the same logic?

My understanding is that it's conclusive that some of our brain's processes are ones we feel we are controlling (conscious thinking), while others happen in the background without us realising. The fact that we don't understand everything that makes that the case doesn't stop it from being the case.




You're making a common mistake, that of equating "scientific" with "good". Science is a particular method for learning about the world, not a marker of moral worth. I don't want to get too far into the epistemological weeds here, but there are plenty of other, perfectly valid ways to understand the world, many of which are used and studied in academia. The easiest examples to point to are philosophy and mathematics. Mathematicians are not scientists, and neither are philosophers.


I may well be misunderstanding a definition of science, but no, I wasn't confusing it with "good".

I don't know what the best definition of science is, but here's the first one Google suggests:

> "the intellectual and practical activity encompassing the systematic study of the structure and behaviour of the physical and natural world through observation and experiment."

Can you explain what excludes the studying of human behaviour or mental health from how you define "science"?


This is what I meant by "getting into the epistemological weeds" heh. I'll requote your definition with emphasis.

> the intellectual and practical activity encompassing the systematic study of the structure and behaviour of the *physical and natural world* through *observation* and *experiment*.

None of the three italicized aspects applies to how the mind is studied. The mind is not a physical object. It cannot be observed or experimented with in a systematic way.

You can obtain three samples of three different kinds of steel and perform a destructive test on all three of them, then when you're done you can obtain more. This is materials science and you can get paid a lot of money to do it.

There is "neuroscience" where you do the same thing, with neurons. Neurons are not minds. You'll never learn how a mind works by studying neurons, any more than you can learn how a building works by studying steel. If you're looking to learn how a building works, steel is just one of the hundreds of topics you need to understand.

There is no "building science" and there is no "mind science", because buildings and minds are not physical, natural things. They're unnatural things, conjured up by human invention and imagination. Physically speaking, i.e. according to the best observation techniques the field of neuroscience can come up with, a newborn, a brain trauma victim, and a college student are all identical as your parent notes.

This deficiency in understanding is what the commenter you were responding to was pointing at: people thinking they can understand the mind by doing fMRIs. Treating any kind of rigorous exploration and study as "science" leads to this kind of mistake.

Another example, computer science isn't science. It's math. Computers are deterministic, there's no point in running experiments. Computers are not natural, nor are they physical. Silicon is, and you can study that by running experiments and observing the results.

To get epistemic about it: knowledge comprises justified true beliefs. Science is one type of justification; there are other kinds. If you use other forms of justification, then you don't get scientific knowledge out of it. Overloading "science" makes people believe that all knowledge that comes out of academia is scientific, i.e. made through repeated experimentation and observation, such that we can rely on it not to fail on us.


Oh come now.

> The mind is not a physical object. It cannot be observed or experimented with in a systematic way.

So clearly false if you read the article, or have ever looked at an optical illusion.

> ... because buildings and minds are not physical, natural things. They're unnatural things, conjured up by human invention and imagination

Then likewise Ecology, Geology, Biology and all other sciences that cover systems aren’t science? What is left? Surely not physics because that could be quantum OR relativity. Surely not QM because that could be particles OR waves.

> Another example, computer science isn't science. It's math. Computers are deterministic, there's no point in running experiments

Ok now I know you are trolling.


>> The mind is not a physical object. It cannot be observed or experimented with in a systematic way.

> So clearly false if you read the article, or have ever looked at an optical illusion.

"Mind" vs. brain. For example, you will never observe the concept "triangularity" or the number 8 or the color "red" in a brain scan. You will see brain activity correlates between what is most likely someone perceiving an instance of triangularity (e.g., a concrete triangle drawn on the blackboard) or, say, the symbol "8" that's been drawn with a red marker, but those are not concepts, which are general. You cannot imagine "triangularity", only particular triangles. But all physical triangles are concrete and particular. You can't draw or construct an abstract triangle because any such drawing or construction will be a particular triangle that excludes all others (an infinite number of them, actually). Properties like the color red can be thought apart from any particular red thing, but you won't find "red" by itself rolling down the sidewalk.

So you have the following syllogism:

Every physical thing is particular. No abstract thing is particular. Therefore, no abstract thing is physical.

So if we can know things abstractly, then it must follow that such things exist in a nonphysical way in our minds. But there is no physical thing in which abstract things can inhere as abstract things because physical things are always concrete and particular.

Things get worse for reductionists. Current reductive views of physical reality exile properties like color as we commonly understand them to the realm of consciousness or mind or whatever. I.e., color as we commonly think of it is reduced to the experience of the reflectance properties of matter, that is, to a property of the mind, because as we have assumed, it is not a property of matter. But if color is not a property of matter, and the mind is material, then color cannot be a property of the mind. Therefore, the mind must be immaterial.

This latter view of reality is essentially Cartesian, where the universe is divided into impoverished res extensa and the metaphysical rug of res cogitans under which we can sweep all of those unseemly phenomena that reductive accounts of reality cannot cope with. Of course, you might be able to get away with that as long as you're a Cartesian dualist, but materialism, the metaphysical bastard child of Cartesian dualism, takes Cartesian dualism, jettisons res cogitans and attempts to reduce all of those unseemly phenomena attributed to res cogitans to phenomena proper to res extensa. Of course, materialism is utterly incapable of dealing with this problem by definition.

Stubborn, dyed-in-the-wool materialists like the Churchlands or Dennett, instead of rethinking their presuppositions, have resorted to the pathetic tactic of denying the very thing they were supposed to explain. Can't explain color or abstract concepts? Then they must not exist!


>Every physical thing is particular. No abstract thing is particular. Therefore, no abstract thing is physical.

But abstract things can supervene on the physical. Information, for example, is abstract, but it supervenes on some physical stuff. Granted, information is not identical to any particular instantiation, but the abstract pattern can be manifested by a particular physical instantiation. You're welcome to call information immaterial if you like, but it presents no metaphysical difficulties for physicalism.

>Stubborn, dyed-in-the-wool materialists like the Churchlands or Dennett, instead of rethinking their presuppositions, have resorted to the pathetic tactic

Why are non-materialists so fucking angry? Incivility doesn't help your cause. If your arguments were good they would stand on their own without embellishment.


> But abstract things can supervene on the physical. Information, for example, is abstract, but it supervenes on some physical stuff. Granted, information is not identical to any particular instantiation, but the abstract pattern can be manifested by a particular physical instantiation. You're welcome to call information immaterial if you like, but it presents no metaphysical difficulties for physicalism.

Example? The word "information" is often used in a magical way. Patterns need not be immaterial, and I never argued for that, but I really don't know what you mean by "information". (FWIW, "supervene" is another one of those terms.)

> Why are non-materialists so fucking angry? Incivility doesn't help your cause. If your arguments were good they would stand on their own without embellishment.

Pot to kettle? Look, there's a history here that maybe you're not privy to. Eliminativists and other materialists have consistently refused to address these fundamental problems while simultaneously ridiculing and dismissing anyone who doesn't agree with them. So you'll have to forgive me for being "uncivil". After a while, it's hard not to conclude that we're dealing with willful ignorance or intellectual dishonesty.


Information is a state or configuration of one system that tells you something about another system. The pixels on your screen contain information about the state of my brain because the particular pattern of lights communicates the thoughts in my head. Information is abstract because it is independent of the medium: the pixels on the screen, pressure waves in the air, marks on paper, etc. can all be used to carry the same information.

Supervene means something is constituted by the configuration of some substance. Or the more common definition: A supervenes on B if there can be no change in A without a corresponding change in B.

>Eliminativists and other materialists have consistently refused to address these fundamental problems

I admit that people have reason to be frustrated with certain materialists, Dennett chief among them. I have my share of frustrations with him as well. But there's this trend I see with non-materialists (online and professionals) showing active disdain for materialism/physicalism that is entirely unhelpful. Ultimately we're all just trying to solve one of the hardest problems of all. Genuine efforts to move the conversation forward should be welcomed. Intractable disagreement just points towards the need for better arguments.


Okay, so intentionality is essential to information. Let's take your example of the pixels on your screen.

There is nothing intrinsic to those pixels or that arrangement of pixels that points to the state of your brain. That doesn't mean there isn't a causal history the effect of which is those physical pixel states. It is, however, entirely a matter of convention how those pixels are arranged by the designers and how they must be interpreted in conformity with the intended convention. You must bring with you the hermeneutic baggage, so to speak, that allows you to interpret those pixels in the manner intended by the designers. Those same pixels will signify something else within a different context, and it is the observer that needs to have the contextual information to be able to interpret them in conformity with the designer's intentions. Furthermore, the designers of the program could have chosen to cause different pixels to light up to convey the same information. They could have instead caused those pixels to resemble, in aggregate, English-language sentences that, when interpreted, describe the state of your brain. But there is nothing about those pixels qua pixels that can tell you anything about your brain state. The meaning of each pixel is just that it is a pixel in a particular state, and the meaning of the aggregate of pixels is that they are an aggregate of pixels, each in a particular state. You can call that supervenience in that the meaning of the aggregate follows from the meanings of the individual constituting pixels, but none of that changes the fact that the pixel states as such, whether individually or in aggregate, do not intrinsically mean your brain state.

This is analogous to written text. A human actor with some meaning in mind causes blobs of ink to be arranged in some way on paper in accordance with some convention. Those blobs of ink are just blobs of ink no matter how many there are or how they're arranged. The reader, which is to say the interpreter, must bring with him a mental dictionary of conventions (a grammar) that relates symbols and arrangements of symbols to meanings to be able to reconstruct the meaning intended by the author. The meaning (or information) is in no way in the text, even if the text influences what meaning the interpreter attaches to it.

As Feser notes[0], Searle calls this derived intentionality which is different from intrinsic intentionality (thoughts are one example of the latter). So I do not agree that anything abstract is happening in your panel of flashing lightbulbs.

[0] https://edwardfeser.blogspot.com/2010/08/fodors-trinity.html


>Searle calls this derived intentionality which is different from intrinsic intentionality

But what makes derived intentionality not abstract? What definition of abstract are you using that excludes derived intentionality while including intrinsic intentionality?

But let's look more closely at the differences between derived and intrinsic intentionality. Derived intentionality is some relation that picks out a target in a specified context. E.g. a binary bit picks out heads/tails or day/night in my phone depending on the context set up by the programmer. Essentially the laws of physics are exploited to create a system where some symbol in the right context stands in a certain relation with the intended entities. We can boil this process down to a ball rolling down a hill: taking one track vs another picks out one of two objects at the bottom of the hill.
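
To make that concrete, here's a minimal sketch in Python (the contexts and names are purely illustrative, not anyone's actual design): the same bit refers to different things only relative to a mapping the programmer chooses.

    # The same physical state (a bit) "picks out" different referents only
    # relative to a context chosen by the programmer; nothing in the bit
    # itself fixes the reference. Names and mappings are illustrative.
    CONTEXTS = {
        "coin": {0: "tails", 1: "heads"},
        "sky":  {0: "night", 1: "day"},
    }

    def interpret(bit, context):
        return CONTEXTS[context][bit]

    print(interpret(1, "coin"))  # heads
    print(interpret(1, "sky"))   # day -- same bit, different referent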

How does intrinsic intentionality fare? Presumably the idea is that such a system picks out the intended object without any external context needed to establish the reference. But is such a system categorically different than the derived sort? It doesn't seem so. The brain relies on the laws of physics to establish the context that allows signals to propagate along specific circuits. The brain also stands in specific relation to external objects such that the necessary causal chains can be established for concepts to be extracted from experience. Without this experience there would be no reference and no intentionality. So intrinsic intentionality of this sort has an essential dependence on an externally specified context.

But what about sensory concepts and internal states? Surely my experience of pain intrinsically references damaging bodily states, as seen in my unlearned but competent behavior in the presence of pain, e.g. avoidance behaviors. But this reference didn't form in a vacuum. We embody a billion years of computation, in the form of evolution, that has crafted specific organizing principles in our bodies and brains that entail competent behavior in response to sensory stimuli. If there is a distinction between intrinsic and derived intentionality, it is not categorical. It is simply due to the right computational processes having created the right organizing principles to allow for it.


An essential feature of abstract things is that they do not exist independently and in their own right. For example, this chair or that man (whose name is John) are concrete objects. However, the concepts "chair" and "man" are abstract. They do not exist in themselves as such. The same can be said for something like "brown", an attribute that, let's say, is instantiated by both the chair and by John in some way, but which cannot exist by itself as such. So we can say that "chair", "man" and "brown" all exist "in" these concrete things (or more precisely, determine these things to be those things or in those ways). However, apart from those things that instantiate them, these forms also exist somewhere else, namely, the intellect. Yet they exist in our intellects without being instantiated there. Otherwise, we would literally have to have a chair or a man or something brown in our intellects the moment we thought these things. So you have a problem. You have a kind of substratum in which these forms can exist without being those things. That does not sound like matter, because when those forms exist in matter, they always exist as concrete instantiations of those things.

W.r.t. derived intentionality, the relation that obtains here between a signifier and the signified is in the mind of the observer. When you read "banana", you know what I mean because the concept, in all its intrinsic intentionality and semantic content, already exists in your intellect and you have learned that that string of symbols is meant to refer to that concept. I could, however, take a non-English speaker and mischievously teach them that "banana" refers to what you and I would use the term "apple" to mean. No intrinsic relation exists between the signifier and the concept. However, there is an intrinsic relation that obtains between concepts and their instantiations. The concept "banana" is what it means to be a banana. So derived intentionality involves two relations, namely, one between the signifier and the concept (which is a matter of arbitrary convention) and another relation between the concept and the signified, which necessarily obtains between the two. Derived intentionality is parasitic on intrinsic intentionality. The former requires the latter.

So when we say that computers do not possess concepts (i.e., abstract things), only derived intentionality, we mean that computers are, for all intents and purposes, syntactic machines composed of symbols and symbol manipulation rules (I would go further and say that what that describes are really abstract computing models like Turing machines, whereas physical computers are merely used to simulate these abstract machines).

Now, my whole point earlier was that if we presuppose a materialist metaphysical account of matter, we will be unable to account for intrinsic intentionality. This is a well known problem. And if we cannot account for intrinsic intentionality, then we certainly cannot make sense of derived intentionality.


Your description of abstract things sounds like a dressed-up version of something fairly mundane. (This isn't to say that your description is deficient, but rather that the concept is ultimately fairly mundane.) So I gathered three essential features of intrinsic intentionality: (1) does not exist independently, (2) exists in the intellect, (3) exists in the things that instantiate them.

Given this definition, there is a universe of potential abstracta, due to the many possible ways to categorize objects and their dynamics. Abstracta are essentially "objects of categorization" that relate different objects by their similarity along a particular set of dimensions. Chairs belong to the category "chair" due to sharing some particular set of features, for example. The abstract object (concept) here is chair, which is instantiated by every instance of chair; the relation between abstract and particular is two-way. Minds are relevant because they are the kinds of things that identify such categorizations of objects along a set of criteria, thus abstracta "exist in the intellect".

You know where else these abstracta exist? In unsupervised machine learning algorithms. An algorithm that automatically categorizes images based on whatever relevant features it discovers has the power of categorization, which presumably is the characteristic property of abstracta. Thus the abstracta also exist within the computer system running the ML algorithm. But these abstracta seem to satisfy your criteria for intrinsic intentionality (if we don't beg the question against computer systems). The relation between the ML system and the abstracta is independent of a human fixing the reference. Yes, the algorithm was created by a person, but he did not specify what relations are formed and does not fix reference between the concepts discovered by the algorithm and the things in the world. This is analogous to evolution creating within us the capacity to independently discover abstract concepts.
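
A rough sketch of the kind of unsupervised categorization described here - k-means is just one illustrative choice of algorithm, and the data is synthetic, so this is a sketch rather than anyone's actual system:

    # k-means groups unlabeled points into clusters that no human names or
    # fixes in advance; the "categories" fall out of the data. Illustrative
    # synthetic data: two unlabeled blobs of 2-D points.
    import numpy as np

    rng = np.random.default_rng(0)
    data = np.vstack([rng.normal(0.0, 0.5, (50, 2)),
                      rng.normal(5.0, 0.5, (50, 2))])

    def kmeans(points, k=2, iters=20):
        centers = points[rng.choice(len(points), k, replace=False)]
        for _ in range(iters):
            # Assign each point to its nearest center.
            dists = np.linalg.norm(points[:, None] - centers[None], axis=2)
            labels = dists.argmin(axis=1)
            # Move each center to the mean of its assigned points
            # (keep the old center if a cluster happens to be empty).
            centers = np.array([points[labels == i].mean(axis=0)
                                if np.any(labels == i) else centers[i]
                                for i in range(k)])
        return labels, centers

    labels, centers = kmeans(data)
    print(centers)  # two discovered "category" centers, chosen by the data alone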

(Just to preempt a reference to Searle's Chinese room argument, I believe his argument is fatally flawed: https://news.ycombinator.com/item?id=23182928)


You're trying to reduce abstraction to statistical pattern classification, but that doesn't work because statistical measures are inherently bounded in generality, indeterminate and ambiguous, while my concept of, say, triangularity is universal, determinate and exact.

Say I give you an image of a black isosceles triangle. Nothing in that image will tell you how to group those features. There is no single interpretation, no single way to classify the image. You might design your algorithm to prefer certain ways of grouping them, but that follows from the designer's prior understanding of what he's looking at and how he wants his algorithm to classify things. If your model has been trained using only black isosceles triangles and red rhombuses, it is possible that it would classify a red right triangle as a rhombus or as an entirely different thing, and there would be no reason in principle to say that the classification was objectively wrong apart from the objective measure of triangularity itself. But that's precisely what the algorithm/model lacks in the first place and cannot attain in the second.

Furthermore, just because your ML algorithm has grouped something successfully by your measure of correctness doesn't mean it's grasped essentially what it means to be a member of that class. The grouping is always incidental no matter how much refinement goes into it.

Now, you might be tempted to say that human brains and minds are no different because evolution has done to human brains what human brains do to computer algorithms and models. But that is tantamount not only to denying the existence of abstract concepts in computers, but also their existence in human minds. You've effectively banished abstracta from existence which is exactly what materialism is forced to do.

(With physical computers, things actually get worse because computers aren't objectively computing anything. There is no fact of the matter beyond the physical processes that go on in a particular computer. Computation in physical artifacts is observer relative. I can choose to interpret what a physical computer does through the lens of computation, but there is nothing in the computer itself that is objectively computation. Kripke's plus/quus paradox demonstrates this nicely.)

P.S. An article you might find interesting in this vein, also from Feser: https://drive.google.com/file/d/0B4SjM0oabZazckZnWlE1Q3FtdGs...


>You're trying to reduce abstraction to statistical pattern classification, but that doesn't work because statistical measures are inherently bounded in generality, indeterminate and ambiguous

A substrate with a statistical description can still have determinate behavior. The brain, for example, is made up of neurons that have a statistical description. But it makes determinate decisions, and presumably can grasp concepts exactly. Thresholding functions, for example, are a mechanism that can transform a statistical process into a determinate outcome.
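
A toy sketch of that thresholding point (the numbers and function names are purely illustrative): individually the inputs are noisy and statistical, yet the aggregate decision is determinate and repeatable.

    # A threshold turns a statistical substrate into a determinate outcome:
    # each "unit" fires probabilistically, but the aggregate decision is
    # reliably the same run after run. Numbers are illustrative assumptions.
    import random

    def noisy_votes(signal_strength, n=1000):
        # Count how many probabilistic units fire for a given signal strength.
        return sum(random.random() < signal_strength for _ in range(n))

    def decide(signal_strength, threshold=500):
        # The decision is binary and stable despite the noise underneath.
        return noisy_votes(signal_strength) >= threshold

    print(decide(0.9))  # True, reliably
    print(decide(0.1))  # False, reliably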

>doesn't mean it's grasped essentially what it means to be a member of that class.

I don't know what this means aside from the ability to correctly identify members of that class. But there's no reason to think an ML algorithm cannot do this.

Regarding Feser and Searle, there is a lot to say. I think they are demonstrably wrong about computation being observer relative and whether computation is indeterminate[1]. Regarding computations being observer relative, it's helpful to get clear on what computation is. Then it easily follows that a computation is an objective fact of a process.

A computer is, at its most fundamental, an information processing device. This means that the input state has mutual information with something in the world, the computer undergoes some physical process that transforms the input to some output, and this output has further mutual information with something in the world. The input information is transformed by the computer into some different information, thus a computation is revelatory: it has the power to tell you something you didn't know previously. This is why a computer can tell me the inverse of a matrix, while my wall cannot, for example. My wall is inherently non-revelatory no matter how I look at it. This definition is at odds with Searle's definition of a computer as a symbol processing device, but my definition more accurately captures what people mean when they use the terms "computer" and "compute".

This understanding of a computer is important because the concept of mutual information is mind-independent. There is a fact of the matter whether one system has mutual information with another system. Thus, a computer, which is fundamentally a device for meaningfully transforming mutual information, is mind-independent.
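
To make the mutual-information point concrete, here is a small sketch (the joint distribution below is an illustrative assumption, a noiseless channel where the output copies the input): given p(x, y), I(X;Y) is a fixed number that no observer's interpretation enters into.

    # I(X;Y) computed straight from a joint distribution p(x, y); the result
    # is a fact about the distribution, not about any observer's reading of
    # the states. The noiseless-channel distribution below is illustrative.
    from math import log2

    joint = {(0, 0): 0.5, (1, 1): 0.5}  # Y simply copies X

    def mutual_information(joint):
        px, py = {}, {}
        for (x, y), p in joint.items():
            px[x] = px.get(x, 0.0) + p
            py[y] = py.get(y, 0.0) + p
        return sum(p * log2(p / (px[x] * py[y]))
                   for (x, y), p in joint.items())

    print(mutual_information(joint))  # 1.0 bit: the output fully tracks the input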

[1] https://www.reddit.com/r/askphilosophy/comments/bviafb/what_...


Sorry, most of your argument here is going over my head. My mind gets that glazed-over feeling as soon as I hear the word "epistemology".

It seems like you are using the word “mind” as a specific technical word within the field of philosophy rather than the general usage of the word so I’ll just defer to you for any philosophical context of the conversation. I’m not interested in that.

What got my goat was the implication that studying complex systems like the mind and the brain (or any other complex system) isn't a "science" because it somehow isn't pure enough/reduced enough.


> What got my goat was the implication that studying complex systems like the mind and the brain (or any other complex system) isn't a "science" because it somehow isn't pure enough/reduced enough.

That wasn't the implication, but perhaps more needs to be said. The first distinction I made was between brain and "mind" (here understood as "intellect", that is, the faculty responsible for conceptualization, or abstraction from particulars encountered in the senses) by appealing to a classic Aristotelian argument for the immateriality of the intellect. So you can study the physical brain using various empirical methods and through the lenses of various empirical sciences, sure, but if the intellect is immaterial, then obviously you can't subject it to any direct empirical experimentation. That doesn't mean it cannot become the object of a science (i.e., psychology as classically understood), nor does it mean you can't make observations about human beings to gather supporting evidence of some kind to draw certain conclusions. An immaterial intellect just isn't something you can look at under a microscope or at which you can fire subatomic particles.


> That wasn't the implication... appealing to a classic Aristotelian argument for the immateriality of the intellect

Now what you are saying makes more sense to me. I must not have read far enough up the comment chain to get the full context.

I guess I view the mind/intellect as a "state" of chemicals/electrical impulses/influences/etc. that you could in theory take a snapshot of, and which is therefore material no matter how abstract the thought pattern. Trying to separate the brain from the intellect is a false dichotomy from my perspective. I'm not sure if you are actually arguing for the Aristotelian perspective (as was my initial assumption) or if you are simply explaining a viewpoint.

I might note that I’m fascinated by super-determinism[0] with non-local variables right now. I can see that the repeatability of recording a given state of mind makes less sense since current QM theory could not guarantee that you could ever fully capture a given state of anything.

[0] https://en.m.wikipedia.org/wiki/Superdeterminism


I am indeed arguing for the Aristotelian position. A lot of mind/brain talk is thrown around without any deep appreciation of the metaphysical presumptions being made, much less the metaphysical consequences of those presumptions.

> no matter how abstract the thought pattern

This needs to be explained. What exactly is a "thought pattern" and what does it mean for it to be abstract? As I've noted, matter is always concrete whereas abstract things aren't really things in that they cannot exist in their own right. You and I have the concept of "triangularity" in our intellects, and "triangularity" means exactly that and is therefore intelligible as "triangularity", but that concept is not reducible to any particular triangle. However, only particular triangles exist in the physical world. You would need to show how "triangularity" could exist as a concrete physical thing without also being a particular triangle. Then you'd have to show how concrete triangles instantiate this concrete "triangularity".

That's the Aristotelian angle. However, we can also approach this issue from the materialist angle. Take for instance the color red as we commonly understand it. Now, since Galileo and Descartes, matter has been construed as essentially colorless. Instead, matter has reflectance properties and color is construed as an artifact of the mind that in some unexplained way results from those reflectance properties, but is completely distinct from those reflectance properties. This is an essentially Cartesian view of the world wherein the universe is ultimately divided into two basic kinds of distinct things, namely, mental substance and extended substance, i.e., (a particular understanding of) mind and (a particular understanding of) matter, respectively. There are serious problems with Cartesian metaphysics, but for now, it's enough to observe that we moderns more or less hold to that view of matter. Now materialism also holds that view of matter. However, it denies the existence of mental substance leaving you with a broadly Cartesian view of matter. The trouble is that it is now impossible to explain things like the color red as mental phenomena as construed by Cartesians. This is known as the problem of qualia.

There are three directions you can take to try to preserve this view of matter while accounting for qualia. One is to retreat to Cartesian dualism. Another is to dabble with panpsychism (which is arguably just crypto-dualism). A third is to deny the existence of the very thing you were supposed to explain (eliminativism). Each of these has serious problems. However, a better option is to reconsider the metaphysics of matter. Aristotelian metaphysics does not suffer from these issues.


> "Mind" vs. brain. For example, you will never observe the concept "triangularity" or the number 8 or the color "red" in a brain scan.

Nor will you see Shape::TRIANGLE or the number 8 or Color::RED if you put a CPU under a microscope, but computer programs are capable of reasoning about all of them. What's your point?


Shape::TRIANGLE, 8 or Color::RED aren't concepts. They're merely symbols. In principle, you will find them encoded in some way under a microscope (as some arrangement of physical states, though what that arrangement looks like will depend on the particular physical medium). Your computer program can process that physical state to mimic some aspects of reasoning, but that's entirely a matter of how the program is designed to operate with these physical arrangements. You cannot analyze or derive anything from the symbols qua symbols themselves because there's no meaning there to analyze or from which to derive things.


How would you define a “concept” then? How do concepts themselves intrinsically have meaning that symbols don’t?


In this context, these symbols are conventional signs taken to refer to meanings that are not themselves. Concepts are apprehensions of form, or we might say "meanings", the what-it-is-to-be of the given thing. When I write "123", that series of characters obviously is not itself the number 123. A human interpreter in possession of a mental dictionary can read that series of squiggles to arrive mentally at the concept of the number 123. But all that exists in computers are representations, not the things they are meant to represent, and their meanings are entirely in the mind of the human observer who assigns conventional meanings to those representations.


How is that different from our brains? All that exists in us may very well be our neuronal representations of a concept. Alternatively, the meaning of a representation in a computer is interpreted and acted upon by other elements within the computer.


As far as the mind has physical effects, it can be studied scientifically. When I study a building by studying its steel frame, I am studying the building inasmuch as the steel frame is a component of the building.


Depends on what you expect as an outcome of the study.

https://en.m.wikipedia.org/wiki/Hard_problem_of_consciousnes...


You can apply science to human behavior and mental health. The problem is that when you try to integrate that work into theories of consciousness, you start to blur the lines between science and philosophy.


I agree that it can be approached from non-scientific angles (I'd hazard a guess that any subject that can be considered scientific can also be approached unscientifically - sometimes for good reasons, sometimes not), but I think(?) you're agreeing with me in my response to this comment:

> The second you say the words "conscious mind", you leave the realm of science.


> Is all mental health research unscientific based on the same logic?

Mental health research is focused on clinical outcomes so it's not the same as basic research on consciousness. All of these fMRI studies have been useless for the latter because they're blunt tools - perfect for studying tumor blood flow or correlating which parts of the brain are kinda-sorta active during some extreme scenario (stuck in an fMRI "for science") - but useless for actually probing the nature of the mind.

They tell us that there is "something" there that can be loosely called the "conscious" and "subconscious" mind in academically trained company, but it's still an attempt to shoehorn late-19th-century ideas onto cutting-edge neuroscience - it comes down to selling grant proposals using familiar colloquialisms. We could very well find out tomorrow that there are higher-level networks within the brain that make the separation meaningless, and you'd still see grant writers overuse the terms any chance they get.

> My understanding is that it's conclusive that some of our brain's processes are ones we feel we are controlling (conscious thinking), while others happen in the background without us realising. The fact that we don't understand everything that makes that the case doesn't stop it from being the case.

These papers are a lot worse than "we don't understand everything." They're oftentimes actively harmful to our understanding.

The brain is so complex that fMRI studies are the scientific research equivalent of a full-body CT scan. The older you are, the more likely you are to find something that looks like a tumor. By the time one is in their 50s, full-body scans are more likely to result in serious complications from unnecessary biopsies than to find a malignant growth. Likewise, the more complex the brain, the more room you have to find spurious correlations in fMRIs in just about any scenario where the subject is conscious - or even dead, if the researcher is really bad at statistics - which most are.

This makes them the perfect tool for scientists under pressure to publish or perish, but not for studying the brain.



