IBM: Mind reading is less than five years away. For real. (cnet.com)
124 points by azazo on Dec 20, 2011 | 90 comments



I've recently completed a master's thesis on EEG-based mind reading, and I think I have a fairly good grasp of the state of the art in this field. I also have a copy of Kurzweil's The Singularity is Near by my bed, and I'm usually strongly optimistic about technology. But if IBM are talking about EEG-based technology here, I would have to bet that they are flat out wrong on this one. I'll explain why.

Something like moving a cursor around by thinking about it, or thinking about making a call and having it happen, requires a hell of a lot of bits of information to be produced by the brain-computer interface. With the current state of the art we can distinguish between something like 2-6 classes of thoughts sort of reliably, and even then it's typically about thinking of particular movements, not "call mom".

Importantly, what most people look for in the signal (the "feature", in machine learning terms) are changes in signal variance. And there are methods to detect these changes that are in some sense mathematically optimal (which is to say they can still be improved a little bit, but there won't be any revolutionary new discoveries). There may be other features to look for, but we won't be getting much better at detecting changes in signal variance.
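For readers unfamiliar with what a variance-based feature looks like, here is a minimal sketch of the standard log-variance pipeline used with CSP-style spatial filters; the channel counts, filters, and data below are illustrative stand-ins, not taken from any particular system.

    import numpy as np

    def log_variance_features(epoch, spatial_filters):
        # Project a bandpass-filtered EEG epoch (n_channels x n_samples)
        # through spatial filters (e.g. learned by CSP), then take the
        # normalized log-variance of each projected signal -- the standard
        # feature for motor-imagery classification.
        projected = spatial_filters @ epoch          # (n_filters, n_samples)
        var = projected.var(axis=1)
        return np.log(var / var.sum())

    # Toy usage: random data standing in for a 2-second, 22-channel epoch at 250 Hz.
    rng = np.random.default_rng(0)
    epoch = rng.standard_normal((22, 500))
    filters = rng.standard_normal((6, 22))           # would come from CSP training
    print(log_variance_features(epoch, filters))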

Some methods can report results like 94% accuracy on a binary classification problem. Such a result may seem "close to perfect", but it is averaged over several subjects, and likely varies between, for example, 100% and 70%. For the people with 70% accuracy, the distinguishing features of their signals are hidden for various reasons. And this is for getting one bit of information out of the device. It seems like such a device would need to work for everyone to be commercially successful.
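To put rough numbers on the "one bit of information" point: Wolpaw's information-transfer-rate formula, a standard BCI metric, gives the usable bits per decision for a classifier at a given accuracy. The accuracies below are the ones mentioned above; everything else is just arithmetic.

    from math import log2

    def bits_per_decision(n_classes, accuracy):
        # Wolpaw et al.'s information transfer rate, in bits per selection.
        p = accuracy
        if p >= 1.0:
            return log2(n_classes)
        return (log2(n_classes) + p * log2(p)
                + (1 - p) * log2((1 - p) / (n_classes - 1)))

    for acc in (0.70, 0.94, 1.00):
        print(f"{acc:.0%} binary accuracy -> {bits_per_decision(2, acc):.2f} bits")
    # 70% accuracy yields only ~0.12 bits per decision.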

In computer vision we have our own brains as proof that the problems can be solved. For EEG-based brain-computer interfaces, such proofs don't exist. There are certain things you probably can't detect from an EEG signal, meaning the distinguishing information probably isn't there at all. I'm easily willing to bet IBM money that who I would like to call cannot be inferred from the electrical activity on my scalp. (Seriously IBM, let's go on longbets.org and do this.)


Thanks for the interesting detailed information about EEG resolution, which I can attest is in accordance with what I have read about neuroelectrical interaction in other contexts.

But what is implausible to me about thinking "phone Mom" and having my computer do it for me is that this scenario envisions an unusually high degree of usability that no consumer-facing software writers have ever achieved. Right now, on a BRAND NEW computer system using mostly application programs recommended by Hacker News readers (for example, I am using Chrome to browse the Web), I can't count on my computer doing what I want even if I have my hands on the keyboard or a hand on my mouse. User-interface design appears to be HARD--or at least, it is rarely done right--so I am very doubtful that in five years or even twenty-five years I'll be able to use a computer that really does what I think.


Hate to be that guy, but Siri is getting pretty close to this. "Phone Mom" will work with an iPhone 4S.


Given the above comment, I would be even less optimistic about a computer being able to tell the difference between thinking "I should call mom at some point", "What a nice phone call I had with mom yesterday", and "call mom right now".

This would take the "butt-dialing" phenomenon to disturbing new levels.


> I'm easily willing to bet IBM money that who I would like to call can not be inferred from the electrical activity on my scalp. (Seriously IBM, let's go on longbets.org and do this.)

Be more precise. If you can distinguish two classes, you can convey anything through EEG, however slowly (think 'bits'). If the signal is noisy, use error-correcting codes and you can get relatively high accuracy.
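For what "use error-correcting codes" could look like over such a slow binary channel, here is a hedged sketch of the simplest possible scheme, a repetition code with majority voting. The 70% per-bit accuracy comes from the figures upthread; the repetition factor is an illustrative choice.

    from collections import Counter
    from math import comb

    def encode(bits, n=5):
        # Repeat every intended bit n times.
        return [b for b in bits for _ in range(n)]

    def decode(noisy_bits, n=5):
        # Majority vote over each block of n received bits.
        return [Counter(noisy_bits[i:i + n]).most_common(1)[0][0]
                for i in range(0, len(noisy_bits), n)]

    # Effective accuracy of majority-of-5 voting at 70% per-bit accuracy:
    p = 0.7
    majority = sum(comb(5, k) * p**k * (1 - p)**(5 - k) for k in range(3, 6))
    print(f"{majority:.0%}")   # ~84%, at the cost of a 5x slower (already slow) channel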

Pedantry aside, I agree in sentiment. Even with invasive techniques we can only roughly decode movement.


Sure. Let's say a five-second EEG segment, recorded from up to 200 electrodes. I bet that by 2017 we cannot accurately detect who a person would like to call out of a phone book of 50 people. Specifically, in a setting where each person is equally likely to be called (equal prior probability), I bet the detection accuracy will not exceed 4% on average with a phone book of 50 people. I'm talking about the BCI understanding a "call mom" thought, not detecting it through some other means like a movement (though I don't expect that to work by then either).


My point was more about precision when making a bet, but yeah, like I said, I agree in sentiment. That being said, two-bit encoding is exactly what Stephen Hawking uses with his thumb, so it's not inconceivable to use it on a neural level as a last resort, however impractical. And in some cases it is used as a last resort: paralyzed individuals often use their tongue to manipulate a cursor, but this can cause all sorts of problems, like abnormally large tongues due to muscle growth.

And I work in a lab that does BMI work, and we couldn't do the "call mom" command in the sense lars means, even though we use more spatially precise recordings (chronically implanted multi-electrode arrays). So I'm with him on that. OTOH, we can do some cool things, like controlling a computer cursor or TV remote with motor commands like "left-up" etc. Subjects reported that after a while they would cease to "translate" thoughts from movement commands into "BMI" commands like "change channel". It stands to reason they might be able to do the phone-book thing in that case.

Of course, few people find it worthwhile to get chronically implanted electrodes placed in their motor cortex, soooo.


Why would "call mom" be more difficult than "Left left up up"? I'm guessing left, right, up, down, would map to certain EEG patterns. Why would it be more difficult to map call and mom to their patterns?

It seems to me that if you can map patterns for four directions, you should also be able to map patterns for 50 different phone book entries and several verbs.


I'm not being clear: I don't think that thinking the words "left left up up" would be detectable through EEG.

When I say detecting movement, I mean things like imagining moving a hand, a foot or a tongue. These movements use distinct areas of the brain so you can distinguish between them by looking at where on the scalp the change occurred. This is done in ways that are known to be close to perfect.

However, you probably couldn't use scalp location if you wanted to distinguish "call mom" from "call John", as they would presumably activate the same area of the brain. There are of course other things one could look at, and I obviously can't prove that it can't be done. But at the same time I have never seen any kind of positive result for an EEG classification task at this level of detail.


Well, all you would really need are some easily detectable signals. So if "move hand", "move foot", "move eyes", etc. are all easily detectable, then you have the basis for an interface. After that it's just a matter of building the interface around those limitations. It wouldn't be mind reading (not even close), but it seems like you should be able to get a reasonably good UI going.


I doubt it. Firstly, you will have to provide a way to filter out real movements from intended ones. A sensor on a few muscles may help, but sticking them on your skin every morning would not help towards the goal of a "reasonably good UI".

Secondly, I am not sure one can learn to almost unconsciously think about certain movements of body parts. Chances are this will keep requiring too much of one's attention.

Thirdly, I think the temporal resolution will be awful. Even if you can learn to think about, say, 3 movements simultaneously, I doubt you will get this above a byte per second of bandwidth. Written text is around a bit per character, so that would likely be way below slow speech.
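As a rough sense of scale for that comparison, using commonly cited ballpark figures (roughly 1-1.5 bits of entropy per character of English, around 150 words per minute for ordinary speech) rather than measurements:

    entropy_per_char = 1.3                    # bits/char, Shannon-style estimate
    chars_per_second = 150 / 60 * 5           # ~150 words/min, ~5 chars/word
    print(f"speech: ~{chars_per_second * entropy_per_char:.0f} bits/s")   # ~16
    print("hypothetical BCI: ~8 bits/s (a byte per second)")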

Most of this is opinion/guessing, so feel free to correct things.


Rather than thinking "call mom," could you think "imagine moving your arm in that direction and pushing something"? Instead of literal mind-reading, have essentially a touchscreen without physical touch?


Would it (now, or in the reasonably realistic future) be able to detect me imagining my fingers typing "call mom" on a keyboard?


It definitely isn't possible now, and I wouldn't expect it in five years either. If you look at [1], you can see the areas of the motor cortex. With today's methods we can do an acceptable job of separating, for example, hand from foot movement. These methods look at the spatial domain, and do so in a way that is near perfect. And as you can see, there is a certain distance between the areas on the scalp, while the fingers are all in the same area.

So you couldn't distinguish individual fingers with today's technology. If it were ever to be done, I'd expect it would be done with the same algorithms we use today, but with much denser electrodes. If I were to bet, I'd bet that this is physically impossible, but I'm not as confident as I am in saying we won't be able to detect who I want to call.

[1] http://en.wikipedia.org/wiki/File:Human_motor_cortex_topogra...


HA. Error-correcting codes? You realize that means the person thinking in bits would have to do that themselves. A 7-bit parity code would be harder to do than just dialing the damn thing. Even then, all it'd be able to do is tell you that you messed up. Something that actually makes corrections on the fly (Hamming)? No way. I couldn't do it, and I know how Hamming codes work. I really doubt a consumer could do it or would want to.


A credit card number already has a check digit. If you could make people memorize a few more digits than the telephone number that they'd like to call, you have your error correction.

(Of course, this is highly domain-specific and not very convenient. But it does seem possible.)
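For reference, the check digit in question is computed with the Luhn algorithm, which detects (but does not correct) any single-digit error and most adjacent transpositions; a minimal sketch:

    def luhn_valid(number: str) -> bool:
        # Double every second digit from the right, subtract 9 if it exceeds 9,
        # and check that the total is a multiple of 10.
        total = 0
        for i, ch in enumerate(reversed(number)):
            d = int(ch)
            if i % 2 == 1:
                d *= 2
                if d > 9:
                    d -= 9
            total += d
        return total % 10 == 0

    print(luhn_valid("79927398713"))   # True -- a standard Luhn test number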


"who I would like to call can not be inferred from the electrical activity on my scalp"

If I read the blog post correctly, the claim is not passive mind reading. The claim is that the user has some training to issue the sorts of thought commands that can be reliably picked up. If I think "call mom", it doesn't do anything. But if I think "Up Up Down Down Left Right Left Right", maybe it can interpret that correctly as my preassigned shorthand for "call mom".
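A sketch of how little "understanding" the shorthand approach would demand of the software: the BCI only needs to classify a handful of imagined movements, and a lookup table maps rehearsed sequences to actions. The sequences and commands below are made up.

    # Map preassigned sequences of recognized "movement thoughts" to actions.
    SHORTHANDS = {
        ("up", "up", "down", "down", "left", "right", "left", "right"): "call mom",
        ("left", "left", "up"): "open calendar",
    }

    def interpret(sequence):
        return SHORTHANDS.get(tuple(sequence))   # None if no shorthand matches

    print(interpret(["up", "up", "down", "down", "left", "right", "left", "right"]))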


They're a little fuzzy, but the exact quote from the video is: "IBM scientists are researching how to link your devices, such as a computer or a smart phone. So you just need to think about calling someone, and it happens." From the context it is implied that this will happen in 5 years. In their in-depth blog post about the topic, they say: "...I could wonder what the traffic will be like on the way home and this information would pop up in front of me."

So I think they are actually describing a "call mom" scenario, and I strongly doubt that the information to detect that is at all present in an EEG signal.


I bet it's harder not to compulsively think "Up Up Down Down Left Right Left Right" than it is to dial a phone.


Maybe if it recognized just thinking about that sequence, but I think having to deliberately focus on each direction (kind of like pressing a series of buttons in your mind, one after the other) would be too hard to do accidentally like that.


Even with very invasive techniques (chronic implants), reading motion signals, for example, is very problematic. One of the first devices to do that is the http://en.wikipedia.org/wiki/BrainGate . It requires substantial training and retraining because of brain plasticity. Still, these are major breakthroughs, and hopefully we will be able to find a reliable and usable signal, but I agree, EEG is too bulky for that purpose.


James May (of BBC's Top Gear) trying to control a wheelchair with his mind:

http://www.youtube.com/watch?v=Uyrd0uOuyms

Even giving directions via thought is tricky.


The "No Passwords" prediction is overlooking a big stumbling block: biometric data is not that secret and cannot be changed once intercepted. You might as well just walk up to an ATM, and speak your social security number. So the ATM is secure, but it's just another trusted client with all its associated problems.

The only thing biometric data is really good for is keeping track of people when they don't want to be tracked or want to hide their identity. For example, it would be a useful means of tracking and identifying people in a prison or at a border checkpoint.


Can someone change this to link to the actual IBM blog entry [1] instead of the CNET fluff piece?

[1] http://asmarterplanet.com/blog/2011/12/the-next-5-in-5-our-f...


Linkbaity headline, there.

"Mind reading" already exists kindof sortof maybe good enough to cnet to write an article about.

This is at the top of my Christmas list: http://emotiv.com/

In fact, here is a comparison of consumer Brain Computer Interfaces: http://en.wikipedia.org/wiki/Comparison_of_consumer_brain%E2...


As a current PhD student working in this area, I caution you about getting too excited about the Emotiv EPOC. We've got one in the lab we've started to work with as a potential low-cost EEG system. The out-of-the-box software is kinda hokey, so you may end up with an expensive novelty you use once or twice.

On the technical side, it does seem to be the best current option for consumer EEG, though most of these devices are actually strongly influenced by, if not heavily reliant on, facial muscle movements.


Agreed. Although there are many videos online of people using the device successfully, the people who tried it in our lab found it/themselves very difficult to train. And it was not due to lack of trying...we very much wanted to control things with our minds.


A friend of mine bought the developer pre-release version a year ago (it has 2 extra electrodes). It is very hard to train; he got some success, but basically it is a novelty and can't really be used as a practical device.

Also there was a rather restrictive licence on the out-of-the-box software, which is not very productive for a hardware company.


Have you worked with the open source python library for it?


No, I haven't, though I'm interested in which library you are referring to. We've been developing our own Python wrapper interface to their API, though this is to share a common interface with the other EEG DAQ (e.g. g.tec) Python wrappers we've been developing.


https://github.com/daeken/Emokit/blob/master/Announcement.md

(I've heard a few people make similar complaints to yours, which is really really saddening to me. Regardless I'm still going to buy one and see what I can do with it :))


One of my best friends just finished his phd in brain computer interfaces and "mind reading" totally already exists (for certain very specific definitions of "mind reading").


Most likely not the same definitions that come to someone's mind when they read it in a sensational headline such as this one.


Agreed, definitely a much stricter definition.


Definitely a sensationalist headline, but still a very interesting bit of tech to watch evolve.

The first thing I ever saw along these lines: recreating a cat's vision from brain sensors (1999):

http://news.bbc.co.uk/2/hi/science/nature/471786.stm


lars's comment (http://news.ycombinator.com/item?id=3371968) is right on target. I recently finished my PhD in biomedical engineering, and the hot field that everyone wants to go into is what we're calling BMI - Brain-Machine Interfaces. The trick is, there are very few types of signals that can be reliably determined from these brain-signal reading devices.

Broadly speaking, there are two kinds of tasks that can be easily accomplished: anything involving moving limbs, or simple, low-degree-of-freedom tasks (like moving a computer cursor). After months and months of training, a person can learn to manipulate numerous degrees of freedom with pretty good reliability (i.e., move a robotic arm AND control the mechanical pincer at the end), but this type of work doesn't generalize to other types of thought. We're nowhere near being able to extract sentences or words, or to determine what complex scene is being viewed, simply from brain activity patterns.
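As an illustration of what "low degree of freedom" decoding amounts to, here is a toy linear decoder mapping a vector of neural features (firing rates or band powers) to 2-D cursor velocity. Real BMI decoders (Kalman filters, population vectors) are more involved; the shapes and synthetic data here are made up for the sketch.

    import numpy as np

    rng = np.random.default_rng(0)
    n_features, n_samples = 64, 2000

    true_weights = rng.standard_normal((n_features, 2))        # unknown mapping
    features = rng.standard_normal((n_samples, n_features))    # recorded features
    velocity = features @ true_weights + 0.5 * rng.standard_normal((n_samples, 2))

    # Calibrate the decoder by least squares on the first block of trials,
    # then decode held-out samples into cursor velocities.
    weights, *_ = np.linalg.lstsq(features[:1500], velocity[:1500], rcond=None)
    decoded = features[1500:] @ weights
    print(np.corrcoef(decoded[:, 0], velocity[1500:, 0])[0, 1])  # high correlation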


Previous "five in five" predictions from IBM can be found here: http://www.ibm.com/smarterplanet/us/en/ibm_predictions_for_f...


So their 5 year prediction from 2006 fell completely on its face.


Including: "Our mobile phones will start to read our minds"


I would say, rather, that the capability may be 5 years away. Whether consumers want it - I'm skeptical. I knew someone who, for reasons I won't go into, had a computer that they had to control with their eyes (basically a webcam that tracks the eyes and moves the cursor, then clicks when you wink). It made me realize that further integration of computing control with a human's anatomy/biology can create more problems, because there is a lack of a filtering mechanism. When you type on a computer you choose what your computer does by making deliberate actions, rather than your computer monitoring you and interpreting your actions. The problem with the latter is that there are many things you do that do not involve your computer... pick up the phone, throw a ball for your dog, talk to a coworker, etc. When your computer is monitoring you for input, it never knows when an action is for it and when it is not. So in the case of eye-controlled computers, the experience is very problematic when you have to look somewhere else for any reason.

Now, taking it a step further, I can't even imagine how out of control a computer would be if it were driven by someone's mind. Our minds randomly fire off thoughts non-stop - it's actually incredibly hard to concentrate on one deliberate thing for a long time (if you've ever tried meditation you realize this very quickly). How a computer could separate actions meant for it from the randomness of the brain seems incredibly difficult, in that there really isn't a definitive line there at all.


You seem to be describing a problem that needs to be solved, rather than a situation that means there is anything wrong with the technology. I used to have speech control turned on on my computer - you can set it to listen to everything, which is basically the situation you describe, with all its problems. Alternatively, you can have it listen for a keyword, or only listen when you press a key. I imagine similar solutions will present themselves for brain-computer interfaces.
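A sketch of that gating idea applied to a thought-driven interface: everything is discarded unless an explicit "attention" signal was detected recently, the BCI analogue of a wake word or push-to-talk key. The class, token names, and timeout below are invented for illustration.

    class GatedInterface:
        def __init__(self, attention_token="attention", timeout=3.0):
            self.attention_token = attention_token
            self.timeout = timeout        # seconds the gate stays open
            self.open_until = 0.0

        def feed(self, token, now):
            # Detected "attention" thoughts open the gate; anything else is
            # passed through only while the gate is open.
            if token == self.attention_token:
                self.open_until = now + self.timeout
                return None
            return token if now <= self.open_until else None

    gate = GatedInterface()
    print(gate.feed("call mom", 0.0))   # None -- gate closed, stray thought ignored
    gate.feed("attention", 1.0)
    print(gate.feed("call mom", 2.0))   # "call mom" -- gate open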


I think there is some truth to that, but I'd say one difference between speech recognition and brain recognition is that speech is a voluntary action you control, while your thoughts have a largely involuntary component to them. Involuntary meaning, when someone says "an idea just popped into my head", the idea seemingly was not a deliberately triggered action. Imagine that while you were "mind typing" an email, the thought "god I hate my boss" suddenly popped into your head. If the computer's filtering mechanism were poor, it might, assuming it was being helpful, shoot off an email to your boss saying "god I hate you". I guess what I'm saying is that filtering which of your own thoughts should be interpreted by your computer seems like an incredibly difficult proposition.


Yes, but I think we can also distinguish, in our own minds, the difference between a thought that is fleeting or passing and a thought that we want to take action on. Similarly, a successful brain-computer interface should be able to make that distinction.


When talking about EEG-based "mind reading", there are three primary methods currently under study (when looking at locked-in patients at least):

1) P300 - This refers to a predictable change in the EEG signal that happens around 300 milliseconds after something you were expecting happens. For example, if I am looking for a particular letter to flash amongst a grid of letters all randomly flashing, a P300 will be triggered when the letter I want flashes.

2) SSVEP - This stands for steady-state visually evoked potential. This approach uses EEG signals recorded over the visual cortex, which responds to constantly flickering stimuli. Given a few seconds, power at the flicker frequency of the attended stimulus increases in the EEG, which can then be detected and used to make a decision.

3) SMR - This stands for sensorimotor rhythms, and is an approach that looks for changes in EEG activity over the motor cortex. Successful approaches have been able to identify when you imagine clenching your left or right fists, or pushing down on your foot. Unlike the other two, this does not require external stimuli.

SMR is the most like what we consider mind reading, as the user initiates the signal, while the other two infer what a person is looking at. It is limited to only 2-3 degrees of freedom at the moment, however, and is the hardest signal to work with. It is susceptible to external factors such as the current environment and mental state, and not everyone seems to be able to generate the needed signals. SSVEP, while lacking the wow factor of SMR, is much easier to work with and is a much more stable signal.
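As an illustration of why SSVEP is considered the easiest of the three to work with, here is a minimal frequency-power decoder of the kind described in (2): look at spectral power at each candidate flicker frequency in an occipital channel and pick the strongest. The frequencies, sample rate, and synthetic data are illustrative.

    import numpy as np

    def detect_ssvep(signal, fs, stimulus_freqs):
        # Power spectrum of a single occipital EEG channel; return the
        # stimulus frequency whose spectral bin has the most power.
        spectrum = np.abs(np.fft.rfft(signal)) ** 2
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
        powers = [spectrum[np.argmin(np.abs(freqs - f))] for f in stimulus_freqs]
        return stimulus_freqs[int(np.argmax(powers))]

    # Toy example: 4 seconds of a noisy 12 Hz oscillation sampled at 250 Hz.
    fs = 250
    t = np.arange(0, 4, 1.0 / fs)
    eeg = np.sin(2 * np.pi * 12 * t) + np.random.randn(t.size)
    print(detect_ssvep(eeg, fs, [8.0, 10.0, 12.0, 15.0]))   # -> 12.0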

Disclosure: I work in this area. Here's a flashy NSF video highlighting our lab: http://www.nsf.gov/news/special_reports/science_nation/brain...


I am setting an alert in my calendar 5 years from now with the text of this article and the author's email address.


Does anyone ever feel that neuroscience is getting more and more Lovecraftian and challenging basic assumptions of what it means to be human? It sometimes feels like we're at a point in history where all the basic tenets of existence are being torn down by science and replaced with... nothing. Am I the only one who gets existential crises from this kind of stuff? :p

It doesn't help, of course, that I'm currently reading this book: http://www.amazon.com/Conspiracy-Against-Human-Race-Contriva...

The luddite in me wishes that science will never be able to fully pick apart the human psyche. Here's to having an inscrutable ghost in the machine to keep us from being mere deterministic flesh-bots...


I wonder too...

There have been other times in history when scientists had the idea that science was almost complete, that there were just a few things left to sort out and we'd understand it all (such as around 1900 with mathematics).

We may think we are very near and then discover something new and then find out a lot of new questions around the mind and consciousness. I don't think we're quite there yet.

However I'm sure "shallow AI" (and maybe "shallow mindreading") will become more and more important in the near future. Which is what IBM is focusing on.

BTW: Thanks for the pointer. That book looks very interesting.


> BTW: Thanks for the pointer. That book looks very interesting.

I'm actually not so happy about posting that link. For me, it states things that I had already mostly figured out on my own previously. For others, it might zap a lot of Sanity Points.

I would say Ligotti is a unique writer, in that he's deeply immersed in existentialist philosophy, neuroscience, cognitive psychology, AND Lovecraftian horror. It makes for a very... disturbing cocktail.

Of course, everyone on HN is seemingly a Nietzschean overman who can take these kinds of things in stride. Me, not so much :p


I'm not sure who is right, but note that transhumanists have a very different view on such developments.

Also, if the "basic tenets of existence" (whatever you consider those to be) can be torn down by looking at them critically, shouldn't they be? (Perhaps.)


Also, if the "basic tenets of existence" (whatever you consider those to be) can be torn down by looking at them critically, shouldn't they be? (Perhaps.)

Should we look at Cthulhu just because we can? :p


I think this problem will become apparent when we are able to create real human minds and put them in a real interactive environment that simulates our world. Who is the god?


A little fanciful, I think. The stuff about generating your own energy through captured kinetic energy is silly. My house has a 20 kW feed - that's about 27 horsepower. On my bike I produce a tiny fraction of a horsepower. It's many orders of magnitude off.


Indeed - it's just weird that they're focusing on capturing low-grade kinetic energy right when photovoltaic solar is at the tipping point of beating grid power, with under a 5-year payback period. Expect (in sunny climates) to see a switch to solar panels on the level that we saw from CRTs to flat panels a few years back.

But the really interesting thing in energy in the next 5 years is going to be price drops in storage. We could conceivably see the first lithium-air batteries for cars, which will finally get costs down to within striking distance of petrol. I also suspect we'll start seeing widespread grid storage installations - possibly using sodium-sulphur batteries to absorb and redistribute inputs from distributed renewable sources. The 2-way grid is the next big story in energy.


You definitely produce more than a tiny fraction of a horsepower :) I have a watt meter on my bicycle, and I can easily average 250 watts for an hour's ride. Well, "easily" meaning sweating profusely and worked at the end of it, but still, that's a third of a horsepower for an hour. I can put out 1 horsepower for a few seconds, and I am not a great cyclist...
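Quick arithmetic behind both figures in this subthread (1 horsepower is about 746 W):

    print(f"20 kW feed  = {20_000 / 746:.1f} hp")   # ~26.8
    print(f"250 W ride  = {250 / 746:.2f} hp")      # ~0.34, about a third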


Ok, YOU produce more than a tiny fraction. I'm 52 and out of shape. I can ride 50 miles if you give me all day, but no way am I putting out more than 1/10 of a horsepower. And NO WAY am I putting a generator on my bike while I'm riding it!


Yeah, but isn't the premise of the article that you would combine energy from multiple sources - like, I don't know, 10-30 per house? Water flowing in pipes, generated heat, gathered kinetic energy?

It would be interesting to see some calculations how much could be gained that way.


I think you'd surprise yourself :) You could easily peak at 250 W without much issue, and could sustain 125 W without too much of a problem.


Plus, much of the 'waste heat' produced in my house (computer, bike, cooking) is released into my house, not lost... the only real loss is the heat that goes down the drain with the water.


In a sense, speech is mind-reading: you can have in your mind what the writer had in theirs.

This isn't just sophistry; it shows there are two problems: 1. transmitting information into and out of a mind; 2. transforming the information into a form that can be understood by another - a common language, if you will.

This has analogues in relational databases, where the internal physical storage representation is transformed into a logical representation of relations, from which yet other relations may be transformed; and in integrating heterogeneous web services, where the particular XML or JSON format is the common language and the classes of the programs at the ends are the representation within each mind.

There's no reason to think that the internal representation within each of our minds is terribly similar. It will have some common characteristics, but will likely differ as much as different human languages - or as much as other parts of ourselves, such as our fingerprints. Otherwise, everyone would communicate with that, instead of inventing common languages.


>Otherwise, everyone would communicate with that, instead of inventing common languages.

How would we communicate with it? By directly linking our brains together? I don't see why it would have a direct translation into sounds.


You're right, that particular sentence is unnecessary to my argument and weakens it.


I'm guessing that when mind reading comes, it will be more of a machine learning exercise based on analysis of speech, vocal inflections, visible features, and previous actions than a portable EEG machine with wires on the scalp.

See Poe's detective Auguste Dupin, in, for example, "Murders in the Rue Morgue."


I think it says something about this "prediction" that most of the text on the IBM page about it (http://asmarterplanet.com/blog/2011/12/the-next-5-in-5-mind-...) is:

Vote for this as the coolest IBM 5 in 5 prediction by clicking the “Like” button below.

Join the Twitter conversation at #IBM5in5


"Neurofeedback" already exists it's just still under the radar (it's like teaching yourself to roll your tongue). I've been trying to pull some demos together to demonstrate that the web browser is the place this will take off: http://vimeo.com/32059038 (sorry I haven't pushed more of this extra-rough demo code yet). Consider using something like the wireless PendantEEG if you're going to be doing your own development OR be prepared to pay excessive licensing fees required from a few of the vendors mentioned here. If you are interested in helping develop this stuff mentioned in that video (and don't mind springing for some reasonbly cheap hardware) please ping me. I'd also like to plan a MindHead hackathon/mini-conference this spring in Boston (my personal interests are improving attention and relaxation, peak perfomance, and BCI).


Going down the list of 5, for each one I was thinking to myself, "Yeah right"; then, going through the explanations, I was thinking, "Oh, well, if that is what you mean by that, sure, why not".


Slightly off-topic, but I've always thought that the first wave of HCI to hit the market and gain traction would be the integration of affective-sensing tech products and APIs into popular areas like music, social networks, and health care. That would bring down costs, increase investment in the HCI/BCI space, speed up adoption rates, and lead to much faster improvement of HCI technologies.


I don't see this happening, or being very accurate if it does. I don't know about you guys, but my mind thinks about something new every few seconds, and one tiny piece of a thought will turn into a whole new thought. It's all very random, and for a computer to be able to understand and filter that seems a little too sci-fi.


Probably depends on your definition of "mind reading", but it sounds like it warrants a longbet.


I was under the impression that we were very close to being able to move sensors with our minds.

http://www.ted.com/talks/tan_le_a_headset_that_reads_your_br...


IBM constantly seems to put out press releases about technology it hasn't yet developed to production quality. Said technology always vanishes without a trace (as far as I can recall). I'm not holding my breath on this one.


Thanks for the awesome example of putting one of Paul Graham's essays into action.

http://www.paulgraham.com/submarine.html


"ATM machine" in an IBM video? I'm slightly disappointed.


I prefer to say "automatic ATM machine" myself so nobody misunderstands me.


Would you rather they call it an AT machine? I guess it could be "@machine" in that case. Point being -- if you can't choose which to leave out, then you have only half the redundancy -- which is acceptable, if not optimal.


What's wrong with "ATM"?


It does not work without PIN number :(


The great thing about bold predictions is nobody ever remembers them if you're wrong, but you look like a genius if you're right.


"you can control the cursor on a computer screen just by thinking about where you want to move it."

Imagine writing code by thinking only?


Well, I guess you could link brain patterns to thoughts, but how are you going to read them without a 5-ton MRI machine?


Who's to say an MRI machine has to weigh 5 tons 5 years from now? I imagine people once said, "Sure, it's great to store your recipes in a computer, but who wants to do that on a room-sized computer?"


Gauss and Maxwell say.


They do and they don't. Yes, it is true that MRI machines weigh 5 tons because of the magnetic fields generated and the amount of hardware required to constrain those fields.

But you could imagine an MRI machine that works with lower strength fields and better sensor technology. Imagining one is a far cry from being able to build one though and I don't see this happening in the near future (if at all).

I think this prediction (like most of IBM's predictions about the future) is strong on marketing and very weak on science.


How soon will it reach the quantum level of "you can't measure without changing"?


Massively unsettling coming from the company that helped the Nazis streamline their attempts at genocide.


George Orwell's vision from the book 1984 is becoming true.


Seeing as at least 2 of the 5 are, to be blunt, crap, why are we even discussing this? This is as realistic as the fusion "too cheap to meter" stories they ran in the '50s, FFS.


The only one who can tell me something is N years away is someone who just stepped out of a time machine. I see no time machine, I pipe to dev null.


You cannot pipe to dev null, since it's a device, not a process.

So, to summarize in shell-like syntax, you can redirect your output to /dev/null, as in

$ me > /dev/null

but you can't sensibly use a pipe like so:

$ me | /dev/null

This has been a message from your friendly neighborhood Unix fundamentalism/literalism chapter.


    $ sudo cp /bin/cat /dev/null
    $ echo test | /dev/null
    test
(this doesn't actually work)


But you had no problem with the time machine. Only on HN.



