Hacker News
Molyneux's problem (wikipedia.org)
122 points by ph0rque on April 7, 2018 | 68 comments



At Phaenomenta in Flensburg there's an interesting exhibit that breaks the link between tactile sensation and visual perception:

An oval is placed flat on a table, so that you can touch it and feel its shape.

Above it, and parallel to the table, is a lens that distorts the oval so that it looks perfectly circular.

When you trace the shape with your finger and keep your eyes closed, it feels like an oval.

When you trace the shape by looking through the lens, it both looks and feels like you're tracing a circle. The feeling persisted when I used my fingers to grip it.

Only by cupping it with my whole hand did it start feeling like an oval again, despite still looking circular. Pretty cool and weird!


This experience seems the same as most described in this very interesting and recent article about virtual embodiment [1]. Seems like we didn't need VR helmets to realize it after all.

[1] https://www.newyorker.com/magazine/2018/04/02/are-we-already...


A few years ago, I was lucky to work with the brilliant Japanese researchers at the Cyber Interface Lab/Hirose-Tanikawa lab (The University of Tokyo).

Amongst their projects (involving AR, VR and multi-modal devices), some students explored various ways to play with our perceptions and define more sharply the interactions and priorities of our senses. Incredible experiments involving simple contraptions as well as more advanced VR, and a lot of scientific creativity (see [1], particularly Multimodal interfaces)

Amongst those projects, Yuki Ban's "Magic Pot" was very similar to what you described, and I remember being almost upset at the realization that the sense we tend to trust the most (vision) can also drive us in the wrong direction the fastest. (see [2] and [3] for more info)

[1] http://www.cyber.t.u-tokyo.ac.jp/projects/

[2] https://dl.acm.org/citation.cfm?id=2343470

[3] http://www.drunk-boarder.com/works/magicpot/


In drumming there is this idea of separating your hands: with training you can play uncoordinated rhythms simultaneously, but without training this is very difficult. Humans train their whole lives to integrate signals, but this is the opposite. I wonder if in VR we will separate the senses.


I went in expecting an article about how the most enthusiastic and visionary game designers will tend to enthusiastically state their hopes, dreams and visions about the unfinished game they are working on, but will ultimately end up being ridiculed or being called liars when the final game fails to live up to the lofty ideas that they expressed during development. The actual article was interesting too though.


> but will ultimately end up being ridiculed or being called liars

It's well deserved if that's the case. At least if it keeps happening over and over again for the same person.


"Plant a seed and watch it grow." is perhaps the most memorable broken promise.

I always thought Peter would have continued to do great things had he been born 10 years later. The man's imagination certainly outpaced technology. A tragedy, indeed.


Well Molyneux is taken, I guess we could call it the Murray effect.


To be honest, when Peter's horses stampeded with him, the audience happily went along with even the most unreasonable idea. People like him were and are rare, and the games delivered under the Bullfrog name were so legendary (even though he wasn't involved at all in half of them) that it was almost impossible to live up to the avalanche of a reputation that had developed around him.

It's a sad thing that the people looking for something to hate (aka the web) happened to him. I liked him as a visionary, and even if only half the vision survived, the games were always excellent.


The hate for Molyneux is not just due to his over-promising. It is due to him continually lying to his supporters and taking their money to build and launch a F2P mobile game that still receives updates, even though they haven't bothered to post an update to their Steam community in 3 years.

He is still misleadingly selling Godus under "Early Access" on Steam even though he clearly has no plans to continue development and there hasn't been an update in over 3 years.


Molyneux was/is such a massive bullshit artist, that he would even make promises in front of crowds that the game devs had never even heard before. He just let his imagination and mouth run wild.


Maybe the line between hoax and avant-garde, never-tried-before work isn't as clear-cut as many want it to be.

If you venture into uncharted territory, trying things not done before, you are exploring a very branchy maze, and sometimes it's only sheer luck that keeps you from wandering down a promising-looking dead end.

To an outsider, or to somebody years later looking down from the knowledgeable mountain of history, such simple decisions seem "strange" and wasteful. But down in the trenches of the day, they were the best move available.

Kickstarter money was always play money for those investing, and once it has been used up, what should those developing eat?

Love and air from the fans? He runs off with small sums while at least delivering, while others venture off with full studio budgets and deliver next to nothing.


Murray deserves more respect than Molyneux. Where Molyneux was off to the next shiny idea after underwhelming his fans once again, Murray is still working hard on getting NMS to where they promised it would be at launch. The Atlas Rises patch brought a lot of much-needed content and I'm really looking forward to their latest announced update.

Peter's problem was that he didn't just release the game without all the promised features, he wouldn't even hang around to fix the bugs.


Same


I think this will be pretty obvious to anyone who has meddled with machine learning. Suppose you have a working model of the world (based on your sense of touch) and you now suddenly get more data (sight):

  {255,254,254,254,255,255,254,253,254,255,254,255,255,255,248,247,255,255,255,254,255,249,228,221,207,204,210,216,247,255,255,236,146,169,184,166,127,88,231,255,255,241,154,155,155,106,96,105,236,255,255,243,156,159,160,113,105,100,238,255,255,247,170,157,164,104,55,103,243,255,254,255,244,214,179,120,182,237,255,254,255,255,255,255,246,244,255,255,255,255,255,255,254,254,255,255,253,254,255,255}
Do you see the cube? I'd think not.
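To make the point concrete, here's a minimal Python sketch (assuming, as the comment implies, that the 100 values above are a 10x10 grayscale image read row by row) showing how the same data becomes legible once a spatial layout is imposed:

```python
# The flat list from the comment above: 100 grayscale values.
values = [
    255,254,254,254,255,255,254,253,254,255,
    254,255,255,255,248,247,255,255,255,254,
    255,249,228,221,207,204,210,216,247,255,
    255,236,146,169,184,166,127,88,231,255,
    255,241,154,155,155,106,96,105,236,255,
    255,243,156,159,160,113,105,100,238,255,
    255,247,170,157,164,104,55,103,243,255,
    254,255,244,214,179,120,182,237,255,254,
    255,255,255,255,246,244,255,255,255,255,
    255,255,254,254,255,255,253,254,255,255,
]

# Impose a 2D layout: 10 rows of 10 pixels each.
rows = [values[i:i + 10] for i in range(0, len(values), 10)]

# Threshold into ASCII art: dark pixels become '#', light ones '.'.
for row in rows:
    print("".join("#" if v < 200 else "." for v in row))
```

Only after the reshape does the dark blob pop out; the flat vector carries exactly the same information, but without the layout assumption there is nothing to "see".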


This. Newly-sighted people have no idea whatsoever what to make of the torrent of confusing sensory data they're suddenly receiving. Their brains simply haven't grown the really complicated pattern-recognition machinery required to reason about perspective projection, lighting and shading, shadows, colors and textures, and everything else people sighted from birth take for granted.


Highly context-dependent: most of the early-detection neurons are exactly what you describe: edge detectors, color-similarity detectors, etc. They develop right from birth, and would never develop at all if the child could never see (they have a very rigid structure, so random development is out of the equation).

So you're clearly touching on the definition of what it is to "have vision" and what it is to "obtain vision", as it's not binary anymore: there are many levels of "vision", most of which overlap as well.


Indeed. I had a light-bulb moment about this when thinking about how a 2D image is unrolled into a one-dimensional vector before processing by an artificial neural network. Before training, the NN doesn't even have a concept of certain pixels being related because of proximity. It needs to learn to see the 2D structure in the data from scratch, starting with edge detection.

And this also means that the order in which the pixels are presented to the NN doesn't matter in the first place. You could pick a completely random (but fixed) permutation and the NN would still learn to "see" in that mess.
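As a quick sanity check of that claim, here's a sketch (using NumPy; the 8x8 image is a made-up toy) showing that a fixed permutation of the unrolled pixels is invertible, so no information is lost and a fully connected net has just as much to work with either way:

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(8, 8))  # toy 8x8 "image"

flat = image.reshape(-1)                   # unroll to a 1-D vector
perm = rng.permutation(flat.size)          # one fixed, random pixel order

scrambled = flat[perm]                     # the "mess" the NN would see

# Invert the permutation: scrambled[inverse] recovers the original vector,
# so the scrambling destroys no information, only human-readable layout.
inverse = np.empty_like(perm)
inverse[perm] = np.arange(perm.size)
assert np.array_equal(scrambled[inverse], flat)
```

Since a fully connected layer assigns an independent weight to every input position anyway, relabeling the positions just relabels the weights.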


A NN can have the concept of a 2D image built in; a conv layer, for example: http://cs231n.github.io/convolutional-networks/


Thanks, I don't think CNNs were mentioned in Andrew Ng's machine learning course. That's about the only background I have in machine learning. The NN examples of image recognition in that course did start from scratch without any architectural features designed for images.


You're solving the wrong problem. That's not about being able to see. You need vision working well enough to see that there are two objects before you can even ask the question. It's a problem of distinguishing them, and it's not unreasonable to reason that out based on the sharpness of the transitions.


More curious would be to show people a set of cube/sphere datasets and ask them to categorize them.


Spoiler alert - after “curing” blindness in a few patients this was demonstrated to be false.

There was a more recent article which I saw in the aggs recently, but Google offers up this one from 2011:

https://mobile.nytimes.com/2011/04/26/health/research/26blin...


It's a question. How can it be false?

And also Locke (who was being asked the question) was right in his answer.


Locke was "right" in his answer by dumb luck. Fundamentally, this question isn't about philosophy at all; it's an empirical matter of science. Without a good understanding of vision and neurology, and the ability to run good experiments based on informed hypotheses, there's no way to answer it.

So, sure, Locke guessed right based on a flawed premise (seriously: the basic science was 200 years in the future!). That doesn't say much interesting about the subject.


Locke and Molyneux had a hypothesis: "Not. For, though he has obtained the experience of how a globe, how a cube affects his touch, yet he has not yet obtained the experience, that what affects his touch so or so, must affect his sight so or so; or that a protuberant angle in the cube, that pressed his hand unequally, shall appear to his eye as it does in the cube."

They didn't have an experiment that could test this hypothesis. When we were able to perform an experiment, their hypothesis turned out to be correct.

This isn't "dumb luck" any more than Einstein's theory of general relativity being confirmed by the precession of Mercury was.

And it doesn't have anything to do with neurology - it's about the structure of the mind and perception, rather than the anatomy of the brain, eye, and hands (software, rather than hardware).


That hypothesis was wrong though. It's just word salad; they didn't have a theory for how one "obtains" "experience", or for how that "what affects" can be transferred or not between senses.

And it's fine as philosophy, because you're allowed to do that. But it's not science, shouldn't be called science, and you shouldn't be using a scientific result based on proper hypothesis testing and two centuries of theory work to argue about who was the "better philosopher".

Not. Science. That's my only point.


I may agree with you that it is not science. However I disagree with the reasons you use to back this up.

They did have a theory for how one obtains experience, a basic one (the senses collect info about the outside world and... that's it, experience gained!)

It really sounds like you're claiming philosophy is not a science, which doesn't really make sense. Philosophy is certainly a unique kind of science, one where hypotheses must be made about nebulous, difficult-to-define concepts, and lots of things are difficult or impossible to test. However, that doesn't automatically make it unscientific. Just because theories are untestable today doesn't mean science cannot be done with them, because they aren't truly untestable; they just aren't entirely testable. You may not be able to prove or disprove a given theory, but you can test a group of theories and narrow it down: "okay, we can't tell which theory is true, but it absolutely IS one of these theories, not any others." And quite possibly, as other science advances, previously untestable aspects may become testable and the list can be shortened.

To claim that philosophy isn't science is to claim it cannot make any objective progress, which is something I definitely disagree with.


The hypothesis was that the correspondence of visual perception with tactile perception is learned, not innate.

That's a perfectly valid scientific hypothesis, and turns out to be testable and correct.

Science, from Google: the intellectual and practical activity encompassing the systematic study of the structure and behavior of the physical and natural world through observation and experiment.

They hardly need a theory for how you gain those experiences - you gain those experiences by seeing objects and by handling them, obviously.

Granted, they didn't understand Maxwell's equations or the photoelectric effect or have some complete model of the light-retina-nerves-brain-mind system, but they didn't need that in order to make scientific hypotheses about whether we learn to correlate sensory stimuli with the world around us or whether it is just automatic.

We know more now (but hopefully not as much as people 200 years from now will know), but that doesn't mean they weren't doing science: the intellectual and practical activity encompassing the systematic study of the structure and behavior of the physical and natural world through observation and experiment.


"To which the acute and judicious proposer answers, "Not. For, though he has obtained the experience of how a globe, how a cube affects his touch, yet he has not yet obtained the experience, that what affects his touch so or so, must affect his sight so or so; or that a protuberant angle in the cube, that pressed his hand unequally, shall appear to his eye as it does in the cube."—I agree with this thinking gentleman, whom I am proud to call my friend, in his answer to this problem; and am of opinion that the blind man, at first sight, would not be able with certainty to say which was the globe, which the cube, whilst he only saw them; though he could unerringly name them by his touch, and certainly distinguish them by the difference of their figures felt. This I have set down, and leave with my reader, as an occasion for him to consider how much he may be beholden to experience, improvement, and acquired notions, where he thinks he had not the least use of, or help from them. And the rather, because this observing gentleman further adds, that "having, upon the occasion of my book, proposed this to divers very ingenious men, he hardly ever met with one that at first gave the answer to it which he thinks true, till by hearing his reasons they were convinced."

Dumb luck doesn't seem to be the right phrase. Their reasoning seems solid, given their assumptions about the basics of how the human being works.


It was philosophy back then; it's science now. A large part of philosophy is about asking questions that don't yet have scientific answers.


Sure. But saying "Locke was right" in a context where we're discussing an experiment run two centuries later certainly seems to be implying some kind of comparable discourse, no?

I mean: "Einstein was right" in the context of the Michelson-Morley experiment means that he came up with a predictive theory that was later confirmed by practical scientists. Locke didn't. By scientific standards, he just guessed without any grounding. That distinction is really important!

Edit to expand my point: the bottom line is that our current understanding of neurology has no bearing on who was "right" in a 200-year-old discussion, even though they are "technically" about the same subject. It just doesn't matter. If you want philosophy to be a separate domain from science (and we do, for obvious reasons), you need to keep it separate and not use science to argue about who was the better philosopher.


The thing is, philosophy in this case was using the knowledge available and applying reasoning to draw conclusions.

While it was entirely possible that Locke's prediction would prove to be incorrect, it was not 'dumb luck'; it was an 'educated guess'. 'Dumb luck' is pretty much exactly the opposite of what occurred here.


Actually, Locke here just agrees without providing further justification. It would have been more interesting if he had provided a counter-argument instead. For instance: "while it's true that this man hasn't learned how what he felt looks, he can to some extent apply logic; he knows that the sphere feels the same from every direction of touch, and he can observe that one of the objects looks the same from every direction; so he can conclude that this object is the sphere."

But I guess someone actually gaining sight just wants to be done ASAP with those silly experiments and go to the theater ;-)


Good point, and this 'logical' way of determining that an object is a cube is something I wondered about as well.

It should be possible, if the subject is given enough time and approaches the task from a rigorous angle, to deduce that the object must be a cube, through a path like...

eyes -> visual processing in the brain -> logical facts about the input -> mental model of the object in question -> categorizing the mental model -> determining what the object is.

However, I think the fundamental issue is that in sight, touch, etc., the conscious/logical part of the brain is typically not involved in identifying objects, probably because it is much too slow. There is dedicated hardware/software that does this, as far as I am aware.

There are people with a condition where they cannot name or identify objects from touch but can from sight. Even more interesting, some people can identify things from touch with their right hand, but not with their left.

https://en.wikipedia.org/wiki/Astereognosis


> It should be possible, if the subject is given enough time and approaches the task from a rigorous angle, to deduce that the object must be a cube, through a path like...

Rigorous logical reasoning requires some axioms, and someone who has just gained his or her sight has no reasonable axioms to apply (or, rather, no way to decide, even on the basis of experience, which axioms are reasonable to impose). Suppose that I felt an object and discovered that it felt the same from all angles. Could I conclude that it tasted or smelled the same from all angles? I don't think that is a reasonable conclusion. Why then should I conclude, with no prior evidence to guide me, that it would look the same from all angles?


> Could I conclude that it tasted or smelled the same from all angles?

You can't 100% assume it.

But if there's no reason to think it's a trick question, and one object does smell and taste evenly, while the other object has multiple distinct smells/tastes...

You can get a pretty good idea.


> Fundamentally, this question isn't about philosophy at all; it's an empirical matter of science

Empiricism is a philosophy.


One way to read the underlying question is, "Are there necessary connections between sensory perceptions?" (Could a tactile sensation, in and of itself, somehow "imply" a visual one?) That doesn't strike me as an empirical or scientific question.


Sure it does. Perceptions are actual things that physically happen in the brain and can be measured, by MRIs or surveys.


That is to say, experimentally the answer to the question is no.


This reminds me of the bouba/kiki effect, in which there is perceptual cross-talk between auditory and visual schemata across all humans. It would be interesting to see if a blind person who became able to see would still label the shapes in the same way.



Wow, that was fascinating. For anyone else wanting to read the article, I would recommend looking at this non-labelled image first, and choosing which shape you would call Bouba and which you would call Kiki:

https://upload.wikimedia.org/wikipedia/commons/e/e7/Booba-Ki...

If you read the Wikipedia article first, you'll bias your own result on the test / experiment.

Also an interesting footnote at the end of that article:

"Individuals who have autism do not show as strong a preference. Individuals without autism agree with the standard result 88% of the time, while individuals with autism agree only 56% of the time."


This is interesting and really counterintuitive for me after seeing the results.

I would assume (and still can't shake the feeling) that a blind person would clearly know that a cube/square has corners and a sphere/circle does not. It's hard to understand how that isn't obvious when seeing for the first time. Even if it wasn't immediately understood, one should be able to trace, with fingers or eyes or whatever else, the outline of a cube vs sphere and observe that only one has corners. The act of tracing should match the tactile sensation of moving your fingers around similar objects.


I think it is a mistake to think that what we "see" comes from our eyes rather than from a large, trained visual network in the brain. Consider, for instance, the behavior of the eye's blind spot, or the fact that we perceive objects in one plane of vision despite having two eyes. These effects can't come from just raw data.

Someone who hasn't learned/trained that processing is probably going to see something much stranger. Do you think you would be able to differentiate between a sphere and a cube by looking at hexdumps of bitmap files?


Good point. If you haven't already, check out this talk by Donald Hoffman: https://www.ted.com/talks/donald_hoffman_do_we_see_reality_a...


I think it really depends on whether the neurons responsible for handling vision processing have any natural spatial processing ability. It's not obvious to me whether they would or not. My guess is that allowing some time to look around the objects (without being told which is which and without touching them), that they would be able to create spatial relationships for vision processing (sensorimotor loop) and be able to overcome this issue. Once there is some spatial reasoning for the two different types of input, I believe they would begin to build correlations between them (only for very simple shapes).

Perhaps an experiment could be made that would answer this by giving a person a completely new sense and asking the same identification problem. I think the skin could be interesting to use as it allows 2D input with the possibility of utilizing existing spatial mapping, yet isn't used (in most cases) for 2D depth feedback. You could attach some device that heats/cools, pokes, pinches or something else to excite nerves based on some depth information. Of course they would be blindfolded and unable to touch the objects.

You could make two different types of experiment, one that has spatial mapping to the skin and another that is randomly linked, seeing how long (if at all) it takes for somebody to figure out from the inputs which shape they are observing.

My guess is that they could quickly get an intuition for each shape simply by looking at the rate of change of inputs when they move about the object.


It seems as though the last paragraph, where the problem is actually resolved, should be a lot more prominent.


This can be directly observed in the case of Shirl Jennings[0], upon whose life the movie At First Sight[1] is based.

[0] https://en.wikipedia.org/wiki/Shirl_Jennings

[1] http://www.imdb.com/title/tt0132512/


Reminded me of the McGurk effect (https://www.youtube.com/watch?v=G-lN8vWm3m0 https://en.wikipedia.org/wiki/McGurk_effect). I'm constantly surprised by how our brains work.


Part of the problem with this is that, even assuming the person regained their sight, they would not initially be able to make sense of the information. Seeing happens as much in the brain as in the eyes. I love how they spoke and wrote back then. It's comprehensible, but it makes clear how differently people expressed themselves.


If someone suddenly "switched on" a sonar detector that had been dormant in your body for all your life, would you be able to immediately recognise shapes and structures with it, and relate this new sense to your existing ones? My guess is not.


When reading the title I thought this was named after Peter Molyneux, the creator of Fable, and that the problem was over-sharing/promising game features.


I thought it was about Stefan Molyneux.


I hoped it was not.


I think Molyneux's problem is an excellent example of why Kantian thinking (as he is apologetically understood in the 21st century) is so important, even today.

If you follow a modern Kantian line, one of the most important functions of the brain is to provide a common spatial and temporal framework in which it is possible to represent things (a basis for representation). The brain then does the immensely awful work of creating objects in that spatial and temporal basis out of the mess that it gets from optical, auditory, haptic, olfactory systems, as well as information it gets from very strong beliefs we have about the world (think Piaget - object permanence, objects can't be co-located, folk-physics, etc.) We situate a self in that world, and we populate our surroundings with family members, cardboard tubes, music, clouds, walls - every mundane thing. We even build systems of knowledge using symbols in that world, which is rather remarkable (our folk-physics is not the physics we use for engineering, obviously).

Consider the problem of remote-piloting a drone. We are capable of shifting our frame of reference from our eyeballs to our droneballs and back again pretty quickly. Your brain is a complex thing - think of how quickly you are able to translate musculoskeletal movements of your fingers into the correct inputs for the drone. Or say you're wearing headphones and getting auditory input from the drone: how does your brain realize that the sound of someone saying your name (muffled through the headphones) is coming from your friend standing next to you? There is a sort of representational richness in solving these problems that is easy to overlook.

I think we've all had the jarring experience of being woken from a dream by a human voice, or a sound in our home, that had one meaning in the dream and another upon waking. Same auditory input, very different processing, but all part of an attempt to project a world that makes sense of your inputs.

Experience is a function of our brain working constantly to unify sensation into an accurate simulation of what is going on around us. The brain does so in a basis language, and it's easy to have the mistaken belief that the basis language of space and time is something that "is just there". It is "just there", in the sense that our simulation works pretty darn well for creating a coherent world that we can navigate, but the brain doesn't get it for free - it assumes that basis.

I happen to think that the common basis for locating things in space and time is a large part of what makes language possible (how else could we all agree that "there's a cat on the mat", or not?), but that's just a silly personal hunch and not a well-considered position.

It was a very common mistake in academia ten years ago to think that humans learn the structure of the world from data. We do, but we require a very strong grammar of representation in order to even begin having what we call experiences that it is possible to organize, or learn from (parallel to language learning).


> if a man born blind can feel the differences between shapes such as spheres and cubes, could he, if given the ability to see, distinguish those objects by sight alone, in reference to the tactile schemata he already possessed?

If a sighted person can see an object and distinguish it by touch alone, the opposite must be true.


I teach whitewater kayaking, and part of what I teach is the kayaking roll, which involves being underwater with your eyes closed and executing a series of body and paddle movements. No matter how many times someone has seen a video demonstration of the technique underwater, they still are completely spatially disoriented the first few times they try the move. That's because it's completely new to them, and they've never actually had to map the series of physical sensations they experience onto movement without visual cues. I would bet the same is true of blind people: we recognize squares or circles because being born sighted, we have mapped our visual perception onto touch and onto language from an extremely early age. If you never went through that, there's no way you could recognize a square or a circle, because visual information doesn't fit anywhere into your understanding of how you observe space. We don't give kids physical toys just for their entertainment, they're also for building the link between visual, tactile, and language awareness.


I agree, I find the entire question to be difficult to define. If you magically bestowed sight upon a blind person with a sphere and a cube in front of them, they wouldn't even recognize them as separate objects. They would be bedazzled by a wave of sensory input they could make no sense of. Then over time, they would start mapping it to their understanding of the world and at some point they would be able to recognize the objects. When and how exactly that happens is interesting, but not really a deep philosophical question in my estimation.


>No matter how many times someone has seen a video demonstration of the technique underwater, they still are completely spatially disoriented the first few times they try the move.

How well does practice "on land" or "in the air" map to the same movement under water, with water resistance?


Not especially well. It's kind of helpful to understand conceptually what's going on, but ultimately the physical sense on land is completely different, and the student will still think almost entirely in visual terms unless they're already accustomed to taking coaching and have uncommonly good body awareness. Most of the time it's faster to just start the student off underwater and begin the process of learning that mapping between movement and touch without the visual part.


It turns out that this is true for sighted people only because sighted people have grown up both touching and seeing objects. Someone who was born blind has not yet learnt the skill of comparing the two sensations. (The experiment which demonstrates this is explained in the Wikipedia article.)


You're assuming a symmetry between sight and touch that is unlikely to hold completely.

There is an interesting, if largely anecdotal, paper from the 60s. There may be more recent work. http://www.richardgregory.org/papers/recovery_blind/1-introd...


Did nobody read the article? There was an experiment done in 2003 that showed that no, newly-sighted individuals cannot distinguish objects based on existing tactile knowledge.


There is a TON of detailed and interesting information in the article I linked to. It is substantive, in contrast to the yes/no study described on the Wiki page.

Yet you felt the need to create a throwaway account in order to complain about any and all discussion in the comments which doesn't hinge on the original link...


Obviously not. Did you read to the end of the article?


>the opposite must be true.

why?



