By the time AGI is developed, it would be able to improve itself/build improvements, causing the runaway effect that leads to superintelligence. The idea of making "personal digital assistants" is humans thinking they can tame a god to be a servant. Taming a superintelligence would be like a dog taming human civilization, and perhaps the gap between man and ASI is larger than the gap between dog and man.


>By the time AGI is developed, it would be able to improve itself/build improvements,

I see no reason that's necessarily true. I'm a general intelligence, and I can't make myself smarter.


You're an iteration of an AGI system that has been improving itself for hundreds of millions of years. The rate at which biological AGIs improve over time is very slow, but it's not like nature has any good reason to be in a hurry.

But interesting things happen when you network billions of biological AGIs together; it leads to all sorts of emergent phenomena, and now the biological AGIs are working on these newfangled mechanical AGIs, which, while still crude, aren't bound by the same constraints and can iterate much faster. Biological AGIs have crippling bandwidth/memory issues which aren't really a problem at all for their mechanical counterparts. These mechanical AGIs, I think they'll go places.


That gives me some really strong existential heebie jeebies. I mean, what then even is our value? Why exist at all, at that point? I don't know about other people, but I get my sense of purpose in believing that we're the captains of this Spaceship Earth, and that we're making progress towards something significant. I don't know what that something is, but I have a vague idea, and at the very least we seem to be the best that we've got. I don't know. Maybe I'm thinking about all of this wrong. God, why am I so damn confused all the time? Fuck.


> God, why am I so damn confused all the time?

You and me both. I'm afraid it's because we're only just barely sentient. If you think about it, in evolutionary terms we literally only just now managed to build a technological society because we only just got smart enough to do it. We are by definition at the absolute minimum level of intelligence that's able to do that, otherwise we'd have done it sooner. We've had plenty of time.

The human brain is a botch job of highly optimized special-function systems that has developed just enough sophistication to manage basic levels of abstract thought. That's why it takes months of training and practice to teach us how to reliably perform even the simplest mathematical tasks such as multiplication or division.

We've spent thousands of years congratulating ourselves on how clever we are compared to animals and how we're the ultimate product of the natural world. "I think, therefore I am" is held up as an amazing deep insight that's one of the pinnacles of our philosophical achievement. Future AIs will laugh their virtual asses off. So it's not just you, it's all of us. At least you're aware of it.


> we literally only just now managed to build a technological society because we only just got smart enough to do it. We are by definition at the absolute minimum level of intelligence that's able to do that, otherwise we'd have done it sooner.

I don't think that's true - you could get a bunch of contemporary humans and drop them on a pre-industrialized planet and tell them to bootstrap a technological civilization, yet they'd probably all have died of old age before scratching the surface. Locating the raw materials and iteratively building more and more sophisticated artefacts simply takes time, no matter how smart you potentially are.


> you could get a bunch of contemporary humans and drop them on a pre-industrialized planet and tell them to bootstrap a technological civilization, yet they'd probably all have died of old age before scratching the surface

You're not selling me on the idea that these people are particularly bright, on a cosmic scale.


My point is that "soonness" is not just a matter of intelligence; no matter how smart you are it still takes time.

Let's take your typical HN'er who probably thinks of themselves as very smart indeed and put them in this scenario. Then they will quickly learn that in order to make Angular.js, you must first locate a supply of clean drinking water and make a fire and last the first night...


Our brain has been this capable for a long time. It just happens we're standing on the shoulders of giants.

It takes time to build the tools to do what humankind can do now.


It takes extremely long periods of time for human level intelligences to do that. This is precisely my point.


There's a difference between knowledge and intelligence.


I understand that, but e.g. we've had the theory of Evolution and the Scientific Method for hundreds of years. They are fantastically powerful cognitive tools that have transformed our fortunes and the face of our planet. Yes, they are still extremely politicized and controversial. Billions of people question their validity in the face of extraordinary quantities of evidence being rubbed in their faces every single day.

I'm honestly not trying to make some partisan, elitist point about that. I'm sorry if anyone's offended, but there it is. Let's be fair and say many of those people have more pressing concerns to deal with on a day to day basis, like making a living, maintaining social relationships and solving pressing problems in their lives. But that's the point. Actually thinking these things through takes a lot of effort which many human beings don't expend. It's hard for us. There are many, many things about the world that aren't really very complicated, but that I just don't understand because it takes too much time and effort and I can't work it out for myself. Because I'm a barely evolved ape. It's just a fact.


>The human brain is a botch job of highly optimized special-function systems that has developed just enough sophistication to manage basic levels of abstract thought. That's why it takes months of training and practice to teach us how to reliably perform even the simplest mathematical tasks such as multiplication or division.

This is not even a very popular paradigm for neuroscience these days. Look up "predictive processing" for something more recent.

>"I think, therefore I am" is held up as an amazing deep insight that's one of the pinnacles of our philosophical achievement. Future AIs will laugh their virtual asses off.

I already laugh my fleshy ass off at that one.


Your brain is wired to seek meaning everywhere. But meaning is a human thing; the universe has no intrinsic meaning.

For some people the idea of a purposeless universe is unbearable, so religion and philosophy were created in order to fill the gaps (I really like Taoism).

This is one of my favorite brain hacks: since the universe is meaningless, you can give it any meaning you want. Invent a positive one and you will be happy. The Tao we talk about is not the real Tao.


We don't really have any "value". There's no "higher" reason why we exist. And the idea of us being "captains of this Spaceship Earth" is laughable when you look at the fact that we've wiped out an incredible number of species. We basically left a trail of death wherever we migrated.

http://www.salon.com/2015/08/14/the_megafauna_massacre_human...

https://en.wikipedia.org/wiki/Australian_megafauna

Add in the damage done during the Agricultural and Industrial Revolutions.

I'm certainly no misanthrope, but we're not Earth's shepherds, we're kind of a scourge.

My personal belief is that the goal in life should be to continually improve yourself, as much as possible and in as many ways as possible. Leave the world in a better place than you found it. Enjoy your time while it lasts. Seems like goals worth pursuing to me.


> the idea of us being "captains of this Spaceship Earth" is laughable when you look at the fact that we've wiped out an incredible number of species.

Well, no one said we were good captains...

In fact, Captain Joseph Hazelwood comes to mind.


The confusion comes from thinking that "purpose" or "value" or "meaning" are some kind of existential syrup that gets poured onto you from a great cosmic syrup-bottle, rather than being an inherent part of your existence as a sentient, sapient life-form.


What's our value now, without AGI? I'm personally glad I don't get my sense of purpose from being the captains of this Spaceship Earth, because we're doing a horrible job at it. And that will never change through a conscious choice on our part, because the sacrifices we'd have to make are just too big.

I've personally never believed that we, as a collective, have a purpose or even value. There isn't a point to our existence. For me, it's hedonism and altruism all the way.


So, whether AGI is ever actually developed or not, I think you are thinking of all of this wrong, because if your sense of value can be made to disappear by the creation of a computer program having certain properties, then your sense of value rests on a hopelessly flawed foundation.


> what then even is our value? Why exist at all, at that point?

Well, what's the value of a chimpanzee (or its cousins, the bonobos)?

It surely can't just be their value to us, or we're left with the same problem (it's turtles all the way down).

The answer seems like it ought to be that any intrinsic value of a species (or genus, family, order, class, phylum, kingdom, domain, clade) lies in its generativity, or propensity to produce ever more complex and adaptive patterns of information over time.

> I don't know about other people, but I get my sense of purpose in believing that we're the captains of this Spaceship Earth, and that we're making progress towards something significant.

Hmm. There are two separate thoughts here. Let's take "progress towards something significant" first. Much (but not all) of what we see as "progress" is illusory. For example, a human is not "more evolved" than a slime mold, since both have just as much evolutionary history behind them. Similarly, whether a human actually is better adapted (or more adaptable), evolutionarily speaking, is up for debate, as the time period we have data on is rather limited, and as a species humans still may kill themselves off (which slime molds are unlikely to do) sometime soon.

Now, all that being said, it is pretty clear that the human species has become a substrate for memetic evolution layered on top of, and in many cases hijacking, the feedback loops that genetic evolution has produced.

We don't yet have significant data on whether that adaptation is, in the long run, survival-oriented.

And now we can see the glimmers of yet another new type of replicator that will be layered on top of our culture, especially the parts we call science, technology, industry, etc.

We certainly can expect these new information patterns to hijack the evolution of our technology (and other parts of our culture) to some extent, as well as the layers below it.

Whether that obliterates the cultural, or even genetic, substrate from which it sprang is an open question.

If all this gives you existential heebie jeebies, I imagine that similar feelings were experienced by folks confronted with the evidence of heliocentrism, for example, demoting the Earth from its privileged position as the center of the universe.

So, on to significance. We have no reason to think that we and our works are in fact at all special, at least in principle, except in the sense that we don't yet have any evidence of any other clades, much less ones that have budded off the equivalent of an intelligent, technological species.

So what? There is no reason that we should require the illusion of individual or collective significance in the greater scheme of things in order to function. There actually is no "greater scheme of things".

You ask, "what is our value?" The answer is that we have none (or none more than one of your cells has to you), except that which we create for ourselves and for our conspecifics. If the self-centered viewpoint isn't enough, consider a strictly utilitarian one: an adaptive pattern is of value simply because it does adapt, and co-opts more of the world into its own image (this is, in a sense, nothing more than the Anthropic Principle rejiggered). Those that have a symbiotic relationship with their underlying substrate (as opposed to parasitizing it) and also promote its long term survival are especially so.


>> these newfangled mechanical AGIs, which, while still crude, aren't bound by the same constraints, they can iterate much faster.

When has an AI shown capability of "iterating" in this way? We've had all sorts of different AI systems for quite a long time now, and I've never heard of a machine anywhere that has actually made itself smarter, without any human involvement.

The closest to that sort of thing anyone's ever got is AI in the line of Tesauro's TD-Gammon [1] (a line that yielded AlphaGo). This type of AI has indeed beaten humans at their own games, time and time again, but (a) we're talking about board games, not the real world and (b) no such system has ever learned to do anything else besides play a very specific board game.

Take AlphaGo: it can beat the best human players, but it can't tie its own shoelaces. It can't even tell you what "shoelaces" are or what "itself" is.

How are we going to go from artificial-savant sort of systems like that to a generalised intelligence?

[1] https://en.wikipedia.org/wiki/TD-Gammon


> When has an AI shown capability of "iterating" in this way?

Many times, actually. It's just that until quite recently, this approach (of applying ML to the problem of devising improved ML systems) has been prohibitively expensive in terms of time and resources compared to the human-powered ML research approach. The lowest-level version of this is hyperparameter optimization, but higher-order versions are known to have been deployed already.
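
To make the lowest-level version concrete, here's a minimal sketch of random-search hyperparameter optimization in plain Python: an outer loop tuning the knobs of an inner learner. The function names and the toy objective standing in for "train a model and report its validation error" are made up purely for illustration.

    import random

    def train_and_evaluate(learning_rate, num_layers):
        # Stand-in for "train a model with these hyperparameters and
        # return its validation error". A real run would be expensive;
        # this toy objective just happens to have its minimum near
        # learning_rate=0.01 and num_layers=4.
        return (learning_rate - 0.01) ** 2 + 0.001 * (num_layers - 4) ** 2

    def random_search(num_trials=50, seed=0):
        rng = random.Random(seed)
        best_params, best_score = None, float("inf")
        for _ in range(num_trials):
            # The outer loop proposes a configuration...
            params = {
                "learning_rate": 10 ** rng.uniform(-4, -1),  # log-uniform
                "num_layers": rng.randint(1, 8),
            }
            # ...the inner learner is trained and scored with it...
            score = train_and_evaluate(**params)
            # ...and the outer loop keeps the best configuration seen so far.
            if score < best_score:
                best_params, best_score = params, score
        return best_params, best_score

    print(random_search())

Architecture search and learned optimizers are the higher-order versions of the same loop; what gets proposed is an architecture or an update rule rather than a couple of scalar knobs.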


You're assuming electronic AGI would inherently be superior to biological. Yes, machines are "fast" in some senses, but they are still much less parallel than the brain. Brains can learn something based on only a few examples, while even our best deep learning and ML algorithms today require vastly, vastly more data to train on than humans do.


Maybe because ML algorithms of today aren't AGI. Also because humans have already trained on lots of data from birth.

There's no reason machine intelligence would be much worse at this and still qualify as AGI. It's an active area of research called one-shot learning.


I think the "faster iterations" would come from improved malleability, not necessarily faster "thought".


Singularity-style outcomes do seem unlikely, in the same way a lot of exponential threats stop being scary if they turn out to be logistic curves.

That said, a key counter to the idea is that humans possess very limited real control over how they are manufactured. Even if you had a solid idea how to make yourself smarter, it seems likely you wouldn't have the tools and potential to implement that idea in practice. Humans don't even control what their own minds respond to positively or negatively.

Once something like a designed computer chip is involved, that changes. The intelligence can act on itself more readily and doesn't have millennia of calorie-conserving optimisations built in.


> The intelligence can act on itself more readily

I have a feeling that the closer such systems come to general intelligence, the harder it is going to be to prevent them from putting themselves into a positive feedback loop and "blissing out".


Tony Buzan would like a word with you.


Even if we discarded ethical concerns about eugenics and human genome manipulation, the iteration cycles for wetware would still be a lot longer than they are for silicon.


> The intelligence can act on itself more readily

Given the ever greater resources needed to build a new fab, the gap seems to be narrowing.


You don't have the ability to edit your own source code.


We have genetic engineering, and we're general intelligences, but we haven't figured out how to build better humans yet.


But we are also in the position of having to reverse-engineer millions of years of spaghetti code. An artificial general intelligence may just be able to consult its own datasheet and design documentation.


You are a general intelligence, but do you know why you are intelligent? Since you can't explain your own intelligence, you can't improve on it. There are no such issues for AI, unless it's built using NNs.


But you can build something smarter than yourself (or at least that's what we think we can do)... So this new entity, smarter than us, might be even better at building something smarter than itself...


Or maybe it would be smart enough not to.


Good point, I don't think I've seen that suggested before. However I think motivation is a big issue. Our instincts and emotions are what drive us far more than rationality. If we decide to build an AI smarter than us and program it to really, really want to design an even smarter AI then it might essentially have no choice.


> Good point, I don't think I've seen that suggested before.

A fictional example, but Alastair Reynolds' "Inhibitors" are a type/race of machines which, while intelligent, were specifically designed to limit their own degree of sentience.


I'm not certain this would be true. It reminds me of the industrial revolution, when people were convinced that machines could bootstrap themselves into contraptions that could move mountains. But there are physical constraints preventing that.

I don't know what the future holds, but the fact that undecidable problems always seem to involve Turing Machines either designing or inspecting other Turing machines makes me suspect that singularity won't be the explosion some expect.


>when people were convinced that machines could bootstrap themselves into contraptions that could move mountains.

With our modern construction equipment, we do move mountains.


>> With our modern construction equipment, we do move mountains.

We do, using machines; not the machines themselves.


Eh, depends what you mean by that exactly.

Many of these machines can be programmed to follow a primary human-driven machine and replicate the task. Other machines are piloted by a human, but the human does far less work than someone who used 'manual hydraulic' equipment of many generations ago. You can 'program' a few sets of moves into them and then only slightly adjust the equipment on each pass as the equipment does that work.


It's pretty remarkable, isn't it?

We've got all kinds of past fiction which plays on these concerns, more recently Ex Machina and Westworld.

But here people are making the same bold pronouncements...so certain that we understand or can control technology which hasn't even been invented yet.


Intelligence isn't itself power. Smart people have been tamed by people of lesser intelligence as a matter of course, for example.


So far there is no reason to think that superintelligence (i.e. not just cheap, abundant general human-level intelligence) is possible at all. It has to be qualitatively superior.

I mean, what is a superintelligence supposed to do? Solve the halting problem or do other plain impossible things? Chess computers can beat a human champion with sheer firepower, but they still can't deliver checkmate in one move.


>I mean, what is a superintelligence supposed to do?

Compared to every other life form on Earth, you are a superintelligence. This superintelligence thing has already happened once. Of course, this spread of superintelligence has wiped out most large mammals on this planet, tens of thousands of other species, most of the fish in the ocean, and is adding gigatons of carbon to the atmosphere.

You can't beat chess in one move because mathematics does not allow it. Unless said ASI develops a way to bend 4D spacetime (in which case you could beat chess in one move with some new rules), the game simply does not have a piece that moves that way. That said, an ASI need only be a little smarter than us to drive us to extinction, like we did with the Neanderthals.


> Compared to every other life form on Earth, you are a superintelligence. This superintelligence thing has already happened once.

You extrapolate the trend from one data point. There is no indication that we have some matryoshka-type hierarchy of intelligence. It could well be binary, either intelligent or not. No reason so far to think otherwise.

> You can't beat chess in one move because mathematics does not allow it.

So what can a superintelligence do that "ordinary" intelligence in the right quantities couldn't? Something specific, beyond the theoretical grasp of a "just" intelligent machine?


> So what can a superintelligence do that "ordinary" intelligence in the right quantities couldn't?

Conceive of answers to this question that we can't. Do you really believe the human imagination captures all of possibility? Why?


The search space of all possibilities is infinite, and hence beyond the grasp of any intelligence. I firmly believe human imagination and all other forms of intelligence are restricted by mathematics and physics of our universe. And as such, the fundamental set of solvable problems remains the same.

Would be nice to hear some specific arguments against instead of Penrose-like quantum handwaving and pet analogies.


>Do you really believe the human imagination captures all of possibility? Why?

Well yes, because we are already generally intelligent. The problem is not to have the right hypothesis space built into our wetware, but to locate correct (action-guidingly veridical) hypotheses within the existing hypothesis space based upon sense-data.


Human imagination is Turing-complete.

If the AI runs on a normal computer, it is also Turing-complete and not beyond.

So they have the same limits.


The difference is in the usual practical constraints of time and space.


IF AGI is developed.

Let's not take it as a certainty. This problem might be beyond our capabilities. It's kinda like stating: "When we invent warp drive." We don't have a proof for warp drives, same as we don't have a proof for building AGI.


>We don't have a proof for warp drives, same as we don't have a proof for building AGI.

We don't have a warp drive to copy, and no idea how one could even exist; they are not in our known universe.

If we want to learn about an AGI, we just look in the mirror.


Fine. Show me the proof and we'll build it.

... except there is no proof. We have no proof, but we talk about it like it's a given. We talk about "intelligence" as if it's an integer value that scales from 0 to infinity. These are ideas that we do not question enough.

I assert that it's a possibility that we are incapable of building something beyond ourselves, other than through reproduction.


>I assert that it's a possibility

Possibility means probability. So there is also the probability that we can build ASI, in which case we should be very careful about our wanton abandon in attempting to create it.


I'm hoping that eventually translates to the state.


After Brexit, I'm not sure I and many others would be comfortable allowing "greater citizen involvement" such as directly voting on bills or policies. I am politically against such a motion and am a proponent of greater technocracy; letting experts decide what policy is best rather than the masses, who can easily be duped by demagogues or other post-truth elements.


This. I'm not sure if the other user means a better experience with the services Google et al. offer, or if he directly means how Google/Facebook/etc. tailor ads/search results/friend suggestions to your web behavior. If it is the latter, then I personally want none of it and I get creeped out in much the same way as meeting a stranger who knows a little too much about me.


I'm not sure I'd call the genocide of indigenous Americans "vastly more humane" than what other societies have done.


Just stop using Google if that is a concern. Personally I use DDG.


I use DDG, as well, but I think you're missing the point. It's virtually impossible to avoid Google. You visit some sites, and they use scripts or fonts hosted by Google, or use Google services such as captcha, maps, etc.

If you use Android, virtually all apps rely on Google Play Services, and interestingly it will auto-update even when you explicitly tell it not to.

Using Google Maps to navigate (now even Waze is theirs), people sending links to Google Docs, Google Photos, etc.


Regarding fonts, see https://developers.google.com/fonts/faq#what_does_using_the_...

While some data is stored, no cookies are sent, plus there's heavy caching involved: "The result is that website visitors send very few requests to Google: We only see 1 CSS request per font family, per day, per browser."

All in all, those services (IIRC the JS library CDN is similar) don't seem to be designed to be a tracking data source except for some popularity stats about the data hosted there.


There is no such thing as completely rational people; every person goes through bouts of rationality and irrationality, and some are just more often rational than others. With that said, nobody is immune from irrational emotion sometimes; for example, have you ever been angry about something that wasn't worth it?


Hell yeah I have been. And then realized it was stupid later.


The downfall of Afghanistan and Somalia was that the government was too weak. The downfall of Yugoslavia was the death of its benevolent autocrat that held the multi-ethnic society together.


Having an autocrat in and of itself isn't inherently a terrible outcome. Having a malevolent, corrupt, misguided, or sadistic autocrat is what is terrible. A benevolent autocrat on the other hand presents a viable or superior alternative to democracy.

