IA or AI? (vanemden.wordpress.com)
103 points by rudenoise on Oct 12, 2015 | 35 comments



The IA advance which is most obvious to me (yet somehow not yet a reality) is the nomenclator.

In Rome, a nomenclator was a slave who remembered people's names for you, and as they approached would whisper to you that this is Gaius Tullius Castor, his wife is Flaminia, his eldest boy is Marcus, and he owns beanfields.

A Google Glass camera on your eyeglasses and a speaker in your ear, hooked up to Facebook's face recognition and social web, can tell you a quick precis of who you see across the room before they get to you. Add a touch sensor in your pocket or on a ring for unobtrusive control, and a mic to pick up your annotations or commands, and you've got a product that should be a major hit by the second generation.
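
Concretely, the control loop such a device runs is short. Here's a rough Python sketch of the idea; the recognizer, the "social graph", and the earpiece are hard-coded stand-ins I made up, not Glass or Facebook APIs:

    # Hypothetical sketch of the nomenclator loop; every external service here
    # is a stand-in so the end-to-end flow is visible.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Precis:
        name: str
        spouse: str
        notes: str

    # Stand-in social graph, keyed by whatever ID a real recognizer would return.
    SOCIAL_GRAPH = {
        "face-001": Precis("Gaius Tullius Castor", "Flaminia",
                           "eldest boy Marcus; owns beanfields"),
    }

    def recognize(frame: bytes) -> Optional[str]:
        """Pretend face recognizer: maps a camera frame to a face ID."""
        return "face-001" if frame else None

    def whisper(text: str) -> None:
        """Stand-in for text-to-speech into the earpiece."""
        print("(whisper)", text)

    def nomenclator(frame: bytes) -> None:
        face_id = recognize(frame)
        precis = SOCIAL_GRAPH.get(face_id) if face_id else None
        if precis:
            whisper(f"{precis.name}; wife {precis.spouse}; {precis.notes}")

    nomenclator(b"raw camera frame")

The hard parts are obviously the recognizer and the privacy questions, not this glue.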


Google famously banned face recognition on Glass after I and some other developers made demo apps and APIs for using it: https://www.youtube.com/watch?v=E1aeMJY1AO0

Even now that Glass has been canceled for all but commercial use, the guidelines (https://developers.google.com/glass/policies?hl=en) still say you aren't allowed to use face recognition: "Don't use the camera or microphone to cross-reference and immediately present personal information identifying anyone other than the user, including use cases such as facial recognition and voice print. Glassware that do this will not be approved at this time."

Amusingly, my nursing notes demo was me trying to be politically correct. People were more interested in things like cross referencing most wanted lists and sexual offender lists.


That ban made me uninterested in Glass. Face recognition would have been such a help to elderly folks. Add object recognition and GPS and you could have had an assistant to help the elderly through their day.


I can't see why. The tech just needs to be designed to handle malicious input a bit better and isolate certain aspects on a per-user basis. That people could pollute the recognition system's data set should've been assumed. Especially if marketing to the Internet crowd. ;)


The nomenclator is an interesting example. As far as Glass goes, I agree with use cases like that. protomyth's had a lot of potential, too. The effect my brain injury had on memory, long-term and short-term, means I'd probably benefit. Quite an advance from the "cyborgs" wearing primitive stuff that just let them take notes and look ridiculous at the same time.


I highly recommend Things That Make Us Smart ("Defending Human Attributes in the Age of the Machine"): http://www.amazon.com/Things-That-Make-Smart-Attributes/dp/0...

Great read for anyone interested in IA or AI.


"[...] good notation is worth a whopping increment in IQ points. Except that the really good ones allow one to have thoughts that are impossible without."

I posit this post tangentially explains the nagging feeling that many parents[1] experience when their children struggle with mathematics. The benefits of basic language literacy are clear, but analogies like the one above push toward the conclusion that an inability to attain mathematical fluency excludes the next generation from any of the implied augmented-intelligence benefits.

The extrapolated message would be that mathematically disinclined adults will then be completely unable to comprehend certain important thoughts in [insert arcane, highly-specialized technical field].

Regarding the question posed by the title and last sentence in the blog post, I'm not sure why the thrust is framed as an XOR, and not as an AND. It's not like we can't focus on both IA and AI at the same time.

[1] Anecdata warning: I am a parent. I have this nagging feeling.


> It's not like we can't focus on both IA and AI at the same time.

I think the OP wants to say that AI is a means, whereas IA is an end. The real goal of AI is IA.

Also, I think that aiming for IA will provide small benefits in the short term but a lot of benefit in the long term, given how slowly IA innovations tend to emerge, as was the case with GUIs (as explained in the article).

Whereas AI doesn't provide benefits in the short term at all if it isn't applied to IA. From my understanding as an AI enthusiast, what happens is that instead of applying the discoveries of narrow AI, researchers jump straight to AGI, which doesn't help spread IA innovations and lowers the chance of new discoveries emerging, including ones not necessarily related to the field of IA or AI.

That's why I think aiming for IA provides more overall benefit.


> an inability to attain mathematical fluency excludes the next generation from any implied augmented intelligence benefits.

Well, only in some ways. I don't have to understand how a refrigerator works in order to use it. Improvements in quality of life produced by use of augmented intelligence ought to be accessible even to those without it.


The problem only happens when no one bothers to learn how something works. Look at all of those big-iron systems out there that few people know how to program; there is a reason COBOL and Fortran programmers still make good money.

Oh, and refrigeration is simple: it is just an application of the ideal gas law PV = nRT, plus a pump. Refrigerant is compressed, then cooled through a heat sink, then pumped into the refrigerator and allowed to expand, where it absorbs thermal energy; then it is pumped back out and the cycle repeats.
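
In symbols (sticking with that idealized picture; a real refrigerant also exploits the latent heat of its phase change, which this leaves out):

    PV = nRT, \qquad \mathrm{COP}_{\mathrm{Carnot}} = \frac{Q_c}{W} = \frac{T_c}{T_h - T_c}

With the inside at roughly 275 K and the kitchen at 300 K, that bound works out to 275/25 = 11 units of heat moved per unit of pump work; real machines manage a few.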


I'm not sure it's a problem. It's encapsulation. Systems should be designed so that there is a difference between the knowledge required to operate the device and the knowledge required to design/service it.


I believe the false dichotomy was used to attract clicks and to provide context for the relevance of IA. The article hardly talks about AI at all, except that it was the AI agenda from which IA was born.


It seems obvious to me that IA is where the tremendous benefits to society occur. Imagine a world where everyone has the equivalent of a genius IQ today. A lot of problems suddenly disappear.

AI, on the other hand, while very useful, doesn't change people. And frankly, most problems we have are because people lack understanding. I don't know about you, but I don't actually want to replace mankind with something else, I just want us all better.

Of course, what "better" is--is highly debatable, so that definitely gives pause as well.


I would rethink the statement that AI doesn't change people. In my opinion it does, indirectly. AFAIK it's a well-documented fact that Google has changed the way our brains work. Would that have happened if Google had no AI? I'm asking; I don't really know.

How about computer games? I daresay that the experiences they create can be beneficial to intelligence. Interaction with AI in games happens to be interesting to take a look at.

Except that I think the process of creating AI is itself an IA experience. If you want to make AI, you wonder what makes you intelligent. As you observe your results, you understand your own intelligence better. Better understanding, better intelligence.

All in all, I don't get why AI would exclude IA (or the opposite). Still, I'm grateful for the idea of IA being shared.


Fair enough. I think that we need both. It's just that I believe we benefit more from the IA than from the AI. The AI is the tool that gets us there. We need AI, but it's not enough.


It's not even speculation: your point has tons of evidence backing it. I remember from when I did AI that "automatic programming" was something we wanted. My research led me to an MIT project called the Programmer's Apprentice. The idea, since automatic programming wasn't working, was a more limited A.I. that became an extension of the programmer's mind to automate some work, analyse other work, optimize even more, and so on. Not sure where that went, but a Java programmer's explanation of NetBeans was deja vu. ;) A programmer with a text editor vs one with modern tools (I.A. + A.I.) is a game-changing difference that did effectively increase the intelligence of the work.

Likewise, people not so smart with numbers had calculators. People needing conversions have Google and Frick. There are financial and accounting packages that can convert lots of numerical assessments into simpler forms to aid the user's understanding. ERP and BPM, done right, let a person ignore inconsequential details to focus on high-level aspects of business operation. Wikipedia for summaries + Google for details and verification lets one amass expertise in a new domain rapidly.

And so on and so forth. Intelligence Augmentation and Artificial Intelligence both have proven value. Both are used today. So, we can keep both. :)


> It seems obvious to me that IA is where the tremendous benefits to society occur. Imagine a world where everyone has the equivalent of a genius IQ today. A lot of problems suddenly disappear.

Except that most people with a `genius IQ' don't end up as high achievers, or even as happier persons.


I think this is because they spend most of their lives dealing with unnecessary conflict and isolation since most people don't understand them. Being a genius doesn't remove the need for social acceptance built into human beings.

As such, many with a genius IQ pretend not to have one in order to be accepted. So their talents are wasted.

However, if everyone was augmented there would be no need to pretend to be "normal" just to be accepted. So much more could then be accomplished.

In fact, I think today's "normal" would be tomorrow's "intellectual disability."

There are many successful geniuses as well. And those are often wildly successful. Again, if everyone was that way it would be acceptable, rather than odd, to care about real achievements.


> A lot of problems suddenly disappear.

Not to be negative but citation needed.

Also (I guess we'll cross "isolation" off the list): https://en.wikipedia.org/wiki/Intellectual_giftedness#Social...


An interesting point. I think your citation adds to the point though.

In observing my "normal" peers, honestly, they do a lot of very strange things just to be considered normal.

I mean, it's pretty expensive just to keep up with current trend of sunglasses size or sock length, just to be seen as normal.

Not to mention that you have to hold your hands a certain way and talk incoherently.

There is a lot of "normalizing" behavior that becomes unnecessary when everyone has the capacity to see how inane and impractical such behavior really is.


A lot of smart men have been passionate about the proportion of columns, or the proportion of numbers, or even the aesthetics of curly braces. So why is it inane to care about the proportion of clothing, or the angle of the hand? I think people play to their strengths. Also, the trend has to change as the world changes, because aesthetics is about the whole. So it's not because it's ever-changing that it's necessarily arbitrary. The more one can afford not to care about it, the more impractical it is, but I wouldn't say it is impractical to society. It is architecture for the person.


>Isolation is one of the main challenges faced by gifted individuals, especially those with no social network of gifted peers. In order to gain popularity, gifted children will often try to hide their abilities to win social approval.

That seems to suggest the cause of the problem is a lack of high IQ individuals. If the majority had a high IQ due to IA, nobody would feel isolated by their high IQ (although then the low IQ minority might feel isolated).


EWD387 [0] (which doesn't have NN0 or NN1 pseudonyms either) seems to be pretty clear about what the "anti-intellectualism" comment was about.

>The undisguised appeal to anti-intellectualism and anti-individualism was frightening. He was talking about his "augmented knowledge workshop" and I was constantly reminded of Manny Lehman's vigorous complaint about the American educational system that is extremely "knowledge oriented", failing to do justice to the fact that one of the main objects of education is the insight that makes quite a lot of knowledge superfluous.

Wish the author went into more detail on why now may be different than during Kay/Engelbart's time.

[0] https://www.cs.utexas.edu/users/EWD/transcriptions/EWD03xx/E...


There is this talk, "Tim van Gelder on Douglas Engelbart, Intelligence Amplification and Argument Mapping":

https://www.youtube.com/watch?v=P77FvUy-NGA


The EWD by Dijkstra now actually mentions Engelbart by name: https://www.cs.utexas.edu/users/EWD/ewd03xx/EWD387.PDF.


Props to Dijkstra for choosing the right topics to have strong opinions about, which is the hardest part in making an intellectual contribution -- far harder than being right or wrong about a given topic. The writeup is like code that's correct but for a sign error.


John Markoff's new book "Machines of Loving Grace" is a great one about this AI vs IA topic. http://www.amazon.com/Machines-Loving-Grace-Common-Between/d...


> The point I am making here is that Engelbart and Kay were unrealistic in expecting that their technologies would give quick results in the way of Tools for Thought. They had no appreciation for the vast and rich culture that produced the tools for thought enabled by the traditional technologies of writing and printing. They did not realize that a similar culture needs to arise around a new technology with augmentation potential.

I am guilty of deifying Engelbart and Kay, and castigating "our society" for failing them. After my honeymoon period with the "tool of thought" people, I've calmed down.

Here's my radical belief: portability is for people who can't write their own programs. (copped from a Torvalds witticism)

Consider writing and literacy: If you really grow up in a literate culture, you can start with a blank page and end with a bespoke document that suits your needs. If you don't grow up in that, you have to modify others' documents. This limits you. Hallmark cards are for people who can't write poetically (no judgment intended).

So too for programming. Today we rely on hundreds of millions of lines of other people's code that we can't even realistically modify. But I think the future resembles Forth: in less than a hundred lines of code, you write something that suits your needs[0]. You can't do this yet because computers suck.

I'm talking loosely and at a high-level.

[0] I think Forth is a powerful vision for the future: no operating system, no types, no compatibility, no syntax. An executable English language.


Great piece. I got a chance to work with Douglas Engelbart several years ago and wrote up some responses in reply to Maarten's IA or AI post: http://codinginparadise.org/ebooks/html/blog/ia_vs__ai.html


That reinforces my belief that the influx of new programming languages will continue for some more years, and it will only get better.


Nothing is more high-tech than culture; it's everything, even if we tend to work over the seemingly faceless Internet these days. It's people all the way down.


The idea of "notation as intelligence augmentation" is the reason (or one of them) that Haskell programmers are so enthusiastic about things like functors and monads; type theory is its own branch of mathematics that could be appended to the list of things like calculus and vector analysis [1], and might bring in the same kind of new levels of thought and abstraction.

[1] Disclaimer: I am not a mathematician.
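
To make that concrete without assuming any Haskell, here is a throwaway Python sketch of a Maybe-style bind (the names and the example data are made up, not from any library):

    from typing import Callable, Optional, TypeVar

    A = TypeVar("A")
    B = TypeVar("B")

    def bind(x: Optional[A], f: Callable[[A], Optional[B]]) -> Optional[B]:
        """Maybe-style bind: short-circuit on None instead of nesting ifs."""
        return None if x is None else f(x)

    user = {"profile": {"address": {"city": "Utrecht"}}}

    # Without the combinator: the failure handling dominates the code.
    city = None
    profile = user.get("profile")
    if profile is not None:
        address = profile.get("address")
        if address is not None:
            city = address.get("city")

    # With it: the same chain reads as a single thought.
    city2 = bind(bind(user.get("profile"), lambda p: p.get("address")),
                 lambda a: a.get("city"))

    assert city == city2 == "Utrecht"

The point isn't the helper itself; it's that once the failure handling has a name, a whole class of chained computations becomes a one-liner you can actually hold in your head.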


When considering intelligence amplification, the book that comes to mind is Psychohistorical Crisis, by Donald Kingsbury. Computer-to-brain interfaces may go a long way in the next few thousand years.

https://en.wikipedia.org/wiki/Psychohistorical_Crisis


Also Vernor Vinge's Rainbows End, which has both IA and AI in a fairly plausible near-future scenario:

https://en.wikipedia.org/wiki/Rainbows_End


oblig. https://xkcd.com/903/

We already have amplified memory (see also: books, mnemonics), and Google amplifies retrieval.

But what is "intelligence", that we might amplify it? For me, limited short-term working memory is an obstacle (EWD's "limited size of skull"). As complexity is added, earlier parts drop out.

There is the "technology" of hierarchical decomposition and the psychological instinct of chunking, but every problem has irreducible complexity... if this is greater than my working memory, I cannot grasp it.

Artificially enhanced working memory may help here, but I suspect the limit is due not so much to short-term memory itself as to its having associations throughout all of long-term memory. That is, it's less a cache limit than a bandwidth limit, interconnecting with the entire mind. We aren't von Neumann-architected.

PS: there's an argument that we might not be able to grasp intelligence itself, if its and its components' irreducible complexity is greater than any person's working memory - even if we formalize a correct model, we mightn't grasp it ourselves. Thus, IA may be essential for AI. Or, AI is essential for AI.



