Eliezer Yudkowsky on the Singularity, Bayesian Brains and Closet Goblins (scientificamerican.com)
82 points by gpresot on March 10, 2016 | 75 comments



I am heavily biased, as I was active on Overcoming Bias (ironic, I know) and then later the Less Wrong forums, but every interaction I have had over the years with Yudkowsky or his followers has been insufferable.

Beyond just the pathos of that, I take major issue with the lack of technical depth at MIRI and among the rest of these "futurist" prognosticating semi-doomsayers, both with respect to the current state of the art in AI and with respect to proposals for actual roadmaps to AGI.

Their "Step 1: Create AGI; Step 2: ??; Step 3: Doomsday" reasoning chain has led to what is, in my mind, the silly concept that AGI can be built to be "safe" for humanity. This leads to false hope and to things like the AI safety pledge, aka "Research Priorities for Robust and Beneficial Artificial Intelligence" [1] - for which, admittedly, I am a signatory, largely because my research adviser signed and it was the thing to do at the time.

That said, I thought his response to how he differs from Kurzweil - more so the conclusions he comes to - was pretty spot on.

[1] http://futureoflife.org/ai-open-letter/


The reason they don't have roadmaps to AGI is because they do not want AGI to be made before the Friendly AI problem has been solved.

When you think about the vastness of mind design space, plus all the ways we've made mistakes reasoning about even simpler and stupider optimization processes like evolution, I don't think they're being too silly.

If any of you want a quick and insightful introduction, this video is very good: https://vimeo.com/26914859


>The reason they don't have roadmaps to AGI is because they do not want AGI to be made before the Friendly AI problem has been solved

Right, which is an impossibility in my opinion. There is an inherent conflict between systems with asymmetric power and capability in a resource constrained environment. Trying to get around that fundamental principle is an exercise in futility.


Could you elaborate? "Systems with asymmetric power" presumably refers to the AGI - or does it? Maybe you are referring to the AI box or the utility function design or the "nanny AI" which is meant to contain the AGI? I don't know what "capability in a resource constrained environment" refers to because that could refer to pretty much anything in our universe or any finite universe.


> before the Friendly AI problem has been solved.

That's like saying they aren't telling us how to build a time machine because the grandfather paradox hasn't been solved.

They aren't withholding some unique secrets of the universe because we have unsolved problems... they are just slightly crazy people who can turn a phrase and have pivoted that ability into an endless series of "grants" so as to avoid needing to get real jobs contributing to GDP.

The output of their work is amusing, but just because someone runs around with great conviction of purpose doesn't mean what they say is part of ground-truth reality.

Read it for enjoyment, but don't read it for belief.


> The reason they don't have roadmaps to AGI is because they do not want AGI to be made before the Friendly AI problem has been solved.

As if those are unrelated concepts. The more divorced they are from real-world AI software, the less relevant their results will be.


I used to think that way too. Now I believe they are working on problems which will only become relevant - pulling a figure out of my ass just to put a figure on it - 30 years hence - but in 30 years time, we'll be damned glad that they did, and wish that they'd had more staff and more resources! If we even notice the problem before the world ends, or is saved.


> The reason they don't have roadmaps to AGI is because they do not want AGI to be made before the Friendly AI problem has been solved

Reminds me of: https://www.youtube.com/watch?v=_ZI_aEalijE


>but every interaction I have had over the years with Yudkowsky or his followers has been insufferable.

I wish I could hear more about this. I haven't had such an experience and I am very curious about your observations. Maybe write about it?

I have seen many criticisms of lesswrong well taken by the community. For example:

http://lesswrong.com/lw/2po/selfimprovement_or_shiny_distrac...


Certainly I am not the first to describe the behavior of a non-trivial number of outspoken members of the community as "cultish." In large part my complaints stem from:

1. Dogmatic adulation of Yudkowsky as a "savior" figure.

2. An almost immune-system-like resistance to "practical" approaches to the Friendly AI problem - for example, laying out even a basic path to the structure of one, given that this is their mission.

3. In light of #2, the community's serious consideration of untestable scenarios like Roko's Basilisk.

4. A flavor of authoritarianism, racism and sexism from a former founder, Michael Anissimov.

5. Critical lack of diversity in backgrounds and approaches to ML/AI, which keeps the thought process fairly insular. For example, you never see these people speaking at or attending AI conferences, and I'm not sure I have ever seen them at the AGI conference. AFAIK they also publish only in-house, with liberal internal citation.

Those are some off the top of my head. Granted, I think they have sorted some of this stuff out, like #4, but all of it together makes for a toxic and fragile community, largely built around a singular ego.

I haven't been keeping up with them as much in the last few years so some of these things might be getting better - but when I left the community in 2013 these were my main problems.


> 1. Dogmatic adulation of Yudkowsky as a "savior" figure.

They're just copying the man himself, who sees himself that way. But before you dismiss that as cultish, be sure you fully understand the problem he's trying to solve and how instrumental he has been to solving it so far (in terms of raising money, convincing people, building a community, knocking down bad arguments, etc.).

> 2. An almost immune-system-like resistance to "practical" approaches to the Friendly AI problem.

What you have to bear in mind is that there are plenty of "practical" approaches to the Friendly AI problem that are simple, elegant - and wrong. Have some humility and note that if it were that easy, someone would have published a peer-reviewed solution already and that would be that.

> In light of #2, the community's serious consideration of untestable scenarios like Roko's Basilisk

The community rejected it (with help of a site I contribute to, RationalWiki), and probably would have rejected it faster and more decisively if Yudkowsky hadn't unsuccessfully tried to censor all discussion of it.

> A flavor of authoritarianism, racism and sexism from a former founder, Michael Anissimov.

Anissimov is indeed a straight-out fascist, in my view - but that just shows bad judgement on the part of Yudkowsky; it should not be used to tar the entire community with the same brush.

> Critical lack of diversity in backgrounds and approaches to ML/AI, which keeps the thought process fairly insular. For example, you never see these people speaking at or attending AI conferences, and I'm not sure I have ever seen them at the AGI conference. AFAIK they also publish only in-house, with liberal internal citation.

They are starting to publish in mainstream venues but most of their output has been online so far. Also, this stuff is hard, it's not like your typical "add another epicycle, cite self again" stuff of a lot of scientific research. And they are not trying to maximise the number of papers they publish, but rather trying to make progress on a specific research agenda. The former objective is not the same as the latter and could even impede the latter.


My impression is that there's no real "hard" substance behind EY, and he doesn't even try to achieve any, being satisfied with being "influential" as is.


He's stated as much over the years - noting his lack of applicable development or project management skills.

I don't think that's a hit on its own, but without someone like an Ng/LeCun/Bengio, the organization lacks a critical link with reality.


I'm not sure there's anything wrong with that - unlike one of my fellow RationalWiki editors who literally criticised effective altruists for (gasp) paying other people to do stuff for them, which is like the foundation of capitalism and the modern economy (it's also known as division of labour).

Does the CEO of a software company have to know how to code? Does it matter? Maybe the CTO should - maybe - but the CEO?


For all that I loved his HPMOR book, I can't stand to read him speaking the way he does. It feels like he tries too hard to sound smart and has built his own hill to be king of, from the top of which he can ignore the rest of the world.


I disagree that he tries too hard to sound smart. I despise the tendency to unnecessarily use complex words when simple ones will do. Police officers, for example, are rightly famous for this tendency. In Eliezer's writing and in this Q&A, the complex words are there because they are the most concise and precise expression of what he's trying to say -- simpler words would either be inaccurate or would have to go on at length.

For example:

Horgan: Is college overrated?

Yudkowsky: It'd be very surprising if college were underrated, given the social desirability bias of endorsing college.

You'd need a few paragraphs to explain what Yudkowsky means to people who don't already understand social desirability bias.


Yudkowsky has a way of thinking that he believes everyone else should adopt. He is skilled at using phrasing to frame his statements in his way of thinking.

For example, instead of saying something like, "a lot of people think college is good," he phrases it in terms of a bias. Why? Because if he can get the other person talking about the subject in terms of biases, then he has advanced his own favorite way of thinking.

But is liking college really a bias? And if so, compared to what norm? He offers no explanation; he just asserts it.


I don't think he's saying that liking college is a bias. Or rather, he is, but I think he's saying something subtly different (which implies the former): that those who denigrate college, or are "meh" about it, take a "social desirability hit" - they are likely to be seen as unintelligent manual labourers, or Philistines, or annoying self-taught outliers, or as insulting the people who have put in the hard work to go to college, or some combination. Young people grow up in such an environment, and many of them come away with a strong belief that college is a necessity, without really considering the evidence dispassionately.


- Is college overrated?

- Yes. People keep endorsing college education because it makes them seem smart.


Except that is a totally different explanation (and much less plausible).

Example: Parents are not motivated by seeming smart nor do they get credit for acting smart when they push their kids to go to college because it's the default position. Sending your kids to college is simply what "respectable people" do. This plausibly leads to a social desirability bias.

I'm pretty skeptical about the magnitude of this effect, though. I would say the popularity of higher education can be better explained by the zero-sum nature -- and hence arms race -- of competition for jobs.


I actually really liked that line because it was beautifully concise. I was talking more about the essay answers later in the interview.


I think those are Yudkowsky desperately trying to explain things without using scientific technical terms and lesswrongian jargon.

You can see he kind of drops character with the 'cyborg' question.


> Even if some of the wilder speculations are true and it's possible for our universe to spawn baby universes, that doesn't get us literal immortality. To live significantly past a googolplex years without repeating yourself, you need computing structures containing more than a googol elements, and those won't fit inside a single Hubble volume.

> And a googolplex is hardly infinity. To paraphrase Martin Gardner, Graham's Number is still relatively small because most finite numbers are very much larger. Look up the fast-growing hierarchy if you really want to have your mind blown, well, eternity is longer than that. Only weird and frankly terrifying anthropic theories would let you live long enough to gaze, perhaps knowingly and perhaps not, upon the halting of the longest-running halting Turing machine with 100 states.

This literally says "eternity is very very long".
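
For what it's worth, the arithmetic in the quote does check out. A rough sketch of where "more than a googol elements" comes from (assuming two-state elements, which is my simplification, not the interview's):

    import math

    GOOGOL = 10 ** 100                     # a googol = 10^100
    # A googolplex (10^googol) years is far too large to write out directly,
    # so work with logarithms instead.
    # To pass through 10^googol distinct states without repeating, N two-state
    # elements must satisfy 2^N >= 10^googol, i.e. N >= googol * log2(10).
    min_elements = GOOGOL * math.log2(10)  # ~3.3e100

    print(min_elements > GOOGOL)           # True: more than a googol elements needed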


Yes, this was by far the most excruciating part of the interview.


Yudkowsky writes without modesty and doesn't mind seeming arrogant. This doesn't particularly bug me; in fact I kind of like it - I think he's a fascinating guy. But a lot of people find his writing style intolerable, and he has a disproportionate number of antifans.


Enemies. Adversaries? Anti-fans? Really?


They aren't enemies in the same way a fan isn't a friend. I would say "antifan" describes a distinct thing.


...critic?


Actually, "antifan" is the right word here. "Don't care/know enough to criticize", and "don't hate enough to be an enemy", but simply think he's overrated.


Well, if you have read most of his LessWrong articles (like I have), it's hard not to enjoy the way he speaks and writes about these things. But I totally get that it feels obtuse and unnecessarily complex to someone without good exposure to his http://lesswrong.com/ writings.


I think that's a big part of the disconnect. Unless you are a devotee of EY, you generally don't "get" him. I suspect that it's because he mostly operates in the echo chamber that is the "less wrong" community.

In short, he generally does not get credit for what he knows and understands because he cannot communicate clearly with people who do not already know and understand his philosophy.

Said another way, if you can't explain things to people unless they already understand you, then you can't explain things to people.


I don't get him at all. To me he looks like a Randian dilettante folk philosopher, with a minimal track record of practical achievement.

His AI arguments seem to be of the "If you assume an AGI singularity is possible..." kind, which is suspiciously like assuming god exists, for completely arbitrary values of god, and then debating whether or not he cares enough to hate you.

Most software today sucks so badly it can't tie its own shoelaces without poking itself in the brain and falling over.

How do we know this isn't a general feature of software? Has there ever been a 100% bullet-proof non-trivial software system? Is there anything in the history of CS which implies that such a system is possible?

So suggesting that by (say) 2066 software will not only stop sucking but will be able to meta-improve itself with bug-free perfection to the point where it can eat the world seems curiously ungrounded.


You're right about the quality. But in addition, AI as we build it today is task-oriented, not personality-oriented. People are trying to build cars that drive themselves where directed, not cars that decide whether or not they feel like driving people around at all.

Now, maybe it's possible that a task-oriented AI will at some point develop a personality. But there is no evidence that such a thing is possible.

The "real life" personalities we observe in nature--ourselves--did not evolve from task-oriented machinery. It was the opposite: personality and will is observable in even simple animals; a beetle directs its own actions and tries above all else to preserve its own life. No one sets a task for it.

Where does a beetle get that self-direction? Well, what is life itself? Where did it come from? We find ourselves in very deep waters, the depths of which I think AI-fearing folks don't seem to appreciate.


I don't think it is the case that "AI-fearing folks" like myself don't appreciate what the alleged difficulties of building AGI might be. Rather, the burden of proof is on you, the doubters, to show that we need to spend no money at all on such risks, despite the magnitude of the harm that would be caused if we're right and nothing is done about the risks.


> So suggesting that by (say) 2066 software will not only stop sucking but will be able to meta-improve itself with bug-free perfection to the point where it can eat the world seems curiously ungrounded.

That's a straw man. Software doesn't need to be bug-free to take over the world, literally or figuratively. Cars replaced horses but weren't fault-free. PCs replaced typewriters but weren't bug-free. All software needs to do to take over the world is be sufficiently competent and superior to its competitors - products or (in the case he's concerned about) humans.

Also, it's true that one AI might blow itself up, or make a fatal mistake, while trying to take over the world, but it only takes one idiot misconfiguring his AI in his basement to give us a potential world-destroying calamity - the classic "paperclip-maximiser" problem. That's a very hard problem, and it's bigger than the AI control problem - it's no good knowing the secret to keeping AIs under control with probability 0.9999999999999999... if someone else unleashes a dangerous AI without knowing that secret, or deliberately ignores it out of insanity or some death-wish on humanity.

And while he doesn't focus on this, there are plenty of AI risks that we still might conceivably face even if we defeat the outright existential risks, the threats to our very existence as a species. For example, consider what would happen if a paedophile or a rapist got hold of a very powerful AI and was able to command it...


Yes, yes and more yes. I love his ideas and thoughts on many topics, but it always comes across as the person who was too busy being "brilliant" to learn how to interface well with the broader society. The sad part is that he really is brilliant.


No, he's not; he's just skilled at manipulating people's perceptions of him based on a narrow set of premises that he introduced.

Go through his non-philosophical papers and point to one thing in there that's brilliant. His minuscule technical output also seems to bear this out.

I weep for the world that considers people like Yudkowsky brilliant.


There are many definitions for "brilliant", though most of them heavily involve philosophy from my perspective.

The easiest example of his work that I'd describe as brilliant is HPMOR. It is a story that makes the Harry Potter world not just bearable but beautiful and lively. There are portions where the philosophy comes on so thick as to disrupt the story, but on the whole it is a beautiful work. Reading it is how I first discovered EY.


I haven't read HPMOR, and I haven't read Harry Potter books, but I watched the movies, and I think HP world is beautiful as is. What makes it unbearable for you?


Point taken. But there are some complex ideas that just need some amount of effort from the reader in order to "get" them. I don't know what the right balance is, but if you are being introduced to new ideas, the introduction of those ideas will also bring some vocabulary that you will (at least initially) be uncomfortable with. I guess I want to see someone communicate the ideas on Less Wrong in a simpler way without dumbing them down or ignoring the nuances.


I'm amused at the irony of users on a site devoted to technology, many of whom are probably software engineers, complaining about obscure jargon...


Is this a cult? I enjoy the way Halmos and Feynman talk about things. I've never read anything interesting from Yudkowsky...


I don't think it was a cult. LessWrong was an interesting place where interesting people gathered for a time, though LessWrong is basically on life support now.

Gwern, Hal Finney, Wei Dai, Hannu Rajaniemi, Aaron Swartz, and Scott Alexander are some of the more well-known (outside of LessWrong) former members. It really was an interesting community for a while. There were meet-ups and Yudkowsky did solicit for his nonprofit, but it never seemed culty to me. And none of the people I mentioned ever seemed wary of criticizing Yudkowsky when they thought he was wrong. See, for example, Gwern's epic reply to Yudkowsky's post on automation: http://lesswrong.com/lw/hh4/the_robots_ai_and_unemployment_a...

This doesn't seem like the type of reply an entranced follower would give their guru. The fact that this very-critical comment got twice as many upvotes as the original post is telling of the attitude of the community as a whole.


Worth noting that the community has spread out into a kind of "diaspora": some have gone into effective altruism (many retaining their AI risk beliefs, especially the ones in California); some have been hired to work on existential risks full-time; Scott Alexander now has his own (impressive) blog and some LWers still follow him there and go to meetups that he attends... Of course, it's also probably true that some have since graduated, got jobs and simply have less time to read and post stuff on the internet any more.


Depends what you mean by "cult". They are a defined group of people that display a substantial amount of in-group closeness (google "lesswrong cuddle puddle" if you have the stomach for it), possessing unusual views, organised around a central leader figure, that believes the leader's worldview is superior. Poisoned kool-aid, however, is unlikely. Make of it what you will.


I've read a lot of his work, but I don't think it needs to be as complex as it is. I am smart, well educated, and well read, and I often have to pull out a dictionary to read what he writes. Maybe I just need to read more, but I don't feel like he is trying to communicate his ideas so much as listen to himself speak.


When I read this, I actually came away with the impression that he was able to accurately and eloquently express certain ideas/beliefs/concepts that I agree with and can grasp in my own mind, but I would be unable to express these same ideas coherently to another human being.


>I'd try to do all the things smart economists have been yelling about for a while but that almost no country ever does.

Yudkowsky then mentions NGDP level targeting, consumption taxes, land value tax, negative wage taxes, Singapore's healthcare and Estonia's e-government.

Is there an economics textbook or article that explains in layman's terms what those things are and why they are good ideas?


NGDP level targeting says during economic downturns, the government should fill the gap between the downturn and the previous economic output level (creating jobs, buying things, paying people to do things, etc). Basically, don't let an economic downturn interrupt people's lives—always give them something to do and opportunities to grow.

land value tax comes from the idea that nobody should be able to "own" land; you should rent land. When there's a more profitable use for your land than you are currently exploiting, you must give the land up. The UK enjoys things like 99-year leases on land, as opposed to the US, where you "buy" land and own it until the heat death of the universe.

negative wage taxes are like basic income if you make the "negative wage limit" really high (see the toy sketch at the end of this comment). The government pays you because you don't make enough (maybe for reasons outside your control, like all the jobs you are qualified for now being done by robots).

Singapore has free computerized centralized healthcare that doesn't cost a million dollars a person-year like in the US.

Estonia lets you get something resembling a "mini passport" with no international recognition (perhaps some cross-EU identity recognition, but no residency benefits), but with legal ties to an Estonian "e-residency" so you can verify your identity online electronically (chipped smart cards verified by government records, etc).
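
To make the "negative wage taxes" item concrete, here's a toy sketch; the threshold, subsidy rate and tax rate are invented purely for illustration, not taken from any actual proposal:

    def negative_income_tax(earned, threshold=30000, subsidy_rate=0.5, tax_rate=0.25):
        """Below the threshold you receive a payment proportional to the
        shortfall; above it you pay ordinary income tax."""
        if earned < threshold:
            return earned + subsidy_rate * (threshold - earned)  # top-up from the state
        return earned - tax_rate * (earned - threshold)          # normal taxation

    print(negative_income_tax(0))      # 15000.0 -> acts like a basic income floor
    print(negative_income_tax(20000))  # 25000.0
    print(negative_income_tax(60000))  # 52500.0

Push the threshold high enough and the floor starts to look like a basic income, which is the point above.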


> NGDP level targeting says during economic downturns, the government should fill the gap

NO. It says the central bank should fill the gap, and that there's absolutely no point in the government's fiscal policy trying to fill the gap when it can be filled by creating enough money to keep NGDP on a level path.


For my (slightly outdated) sceptical take on this idea, see http://rationalwiki.org/wiki/Market_monetarism


> land value tax comes from the idea that nobody should be able to "own" land; you should rent land. When there's a more profitable use for your land than you are currently exploiting, you must give the land up. The UK enjoys things like 99-year leases on land, as opposed to the US, where you "buy" land and own it until the heat death of the universe.

That sounds like the scariest solution to the problem of land ownership. There are plenty of cases where the government decides that somebody must give their land up so that more value can be expropriated from it, using eminent domain laws. Who gets to decide when land is not extracting enough value? Politicians who are bought out by private corporations controlled by a board of directors, none of whom live anywhere near the land being expropriated? Land should be collectively owned by the people living on it and producing from it.


It would fix a lot of things - San Francisco, for example. Imagine if all those two-story, single-family homes were taxed at their land value (40-story condos! condos everywhere!), so that 3,000 people could live in the space currently occupied by 2 to 6 people.

Living in a place shouldn't grant you the right to obstruct progress forever just because you got there first. In the same vein, just because you got somewhere first also shouldn't mean you get unlimited profit potential just because... you got there first.


It would be accompanied by a democratic process among the people who currently use that land. If the community of land users decided that whatever progress somebody else proposes would not hurt them, they would democratically decide among themselves whether or not to allow it.

Nobody would vote to gentrify and therefore forcibly remove themselves from their homes.

Again, you just attached value to the very loaded term "progress". What is progress for one person might not be progress for the people who use that land for their own sustenance, whether that be a house or a farm or an enterprise.


>a democratic process among the people who currently use that land

Why only the people who currently use that land? Why don't the other 3000 people who could be living there have a say?


If you own land, you owe land value tax, regardless of what you have done with the land. If you just hold the land and do nothing, you still owe tax. No one forces you to sell the land.

This:

>When there's a more profitable use for your land than you are currently exploiting, you must give the land up

is not accurate.

> Land should be collectively owned by the people living on it and producing from it.

Why should this be the case?


>is not accurate.

From Wikipedia on eminent domain: "The property may be taken either for government use or by delegation to third parties, who will devote it to public or civic use or, in some cases, to economic development." The government, using eminent domain, literally forces you to sell the land. That is what the law is.

What part of that isn't true? They can take private land for public or "economic" development.

> Why should this be the case?

Private property is theft. Private property is a tool used to expropriate value from producers (workers) and give it to the property owners (capitalists). Take a mine: a person, or a corporation owned by a small group of people, buys the deed for the mine. They pay workers to extract the resources from the land for a wage. The wage is necessarily less than the value the workers create by mining the resources. That surplus goes into the pockets of the capitalist owners, even though they did no work. The miners are likely paid pennies, live in poor conditions, and now their home land is ruined by the ecological effects of the mining operation.

Resource extraction happens to be the easy one to describe, but the exact same principle holds in all manifestations of private property.

If you disagree, please at least read some Karl Marx first. Criticism of capitalism is well examined, and has been done for a long time.


Land value tax sounds like a proposal to return directly to feudalism.

"Using land the most profitably" (ie. extracting the most tribute) will become de facto ownership for the controlling entity through tribute, while everyone else will be tenants of the few controllers.


I think what the grandparent meant was that if you were not using the land profitably, you would want to 'sell' the land to someone who can use it more profitably and is therefore willing to pay a high price for it. If you didn't do that and just kept renting the land out, you would lose money because of the high land tax.

I think 'selling' here means "giving the other guy the privilege of having to pay lots of taxes instead of you".


I think renting land in any fashion is morally reprehensible. Though, as an anarchist communalist, I don't fit in much with the neoliberal HN crowd.

Ownership should be restricted to the community of inhabitants of the land. The processes around controlling the land would be voted on democratically, and a federation of community ownerships with local representation would deal with extra-community processes around broader land decision making.


Feudalism is a word that has been applied to such a wide variety of historical situations that it almost has no meaning, like the word "liberal". Please explain what is bad with having to pay rent to the government, given that we already pay taxes, and land value tax would reduce the need to levy other taxes, other than socially-advantageous sin taxes such as carbon taxes.


yes, it's scary indeed.



Unfortunately, the questions the interviewer asks are all rather trite. On the flip side, most of the answers are rambling and barely coherent. Not the finest presentation of the field as a whole. :(


I find Horgan very frustrating as a writer. He seems to prefer trite controversy over deep understanding.


This is Scientific American, a general-interest publication, not AGI Risks Weekly. In other words, most of the people reading this article will never have come across these ideas, let alone consider the questions "obvious". I'd be overjoyed if Scientific American were discussing transhumanism, the singularity and AGI risks on a regular basis, and I'd be happy to be proven wrong - but I don't think that's the case.


Knowing nothing about this field, reading the comments did serve to temper my enthusiasm for the article.


I only see three comments: one by a guy who complains that Yudkowsky was mean to him and has no academic credentials, one from a guy who calls Yudkowsky overconfident (in what, his belief that we should be cautious?) and naive for being a libertarian, and one from a guy who admits he also knows nothing about the field.


>I don't think that humans and machines "merging" is a likely source for the first superhuman intelligences.

I have to disagree with him a little bit. I think "merging" is already happening and will continue to accelerate. AI/human feedback loops are easy to conceptualize: the AI does its own thinking, and when it is unsure it consults an array of humans for their opinion. Repeat ad infinitum. Maybe a second array of humans proofreads AI decisions, watching for conclusions they disagree with. It might not be a Matrix plug in the back of your head, but data centers and human arrays stuck in a feedback loop (communicating bidirectionally with screens, cameras, and eye movements) probably offer a scenario that outperforms either the machines or the humans in isolation. Each augments the other's limitations.

Are machines faster by themselves, or with a human co-processor to consult at their discretion? Can a machine go "hey, I'm not great at this task yet, but humans seem to accelerate at it"?
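
As a very rough sketch, that kind of confidence-gated loop might look like this; the model and reviewer interfaces here are hypothetical, just to illustrate the deferral idea:

    CONFIDENCE_THRESHOLD = 0.9  # arbitrary cutoff for "the machine is sure enough"

    def answer(model, item, reviewers):
        """Let the model answer alone when confident; otherwise defer to a pool
        of human reviewers, take their majority view, and feed it back into the
        model so it improves on similar items next time."""
        label, confidence = model.predict(item)
        if confidence >= CONFIDENCE_THRESHOLD:
            return label                              # machine alone suffices
        votes = [reviewer.judge(item) for reviewer in reviewers]
        consensus = max(set(votes), key=votes.count)  # simple majority vote
        model.update(item, consensus)                 # human verdict closes the loop
        return consensus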


When Ray Kurzweil talks about merging with the machine, he talks about it in the literal sense (as nanobots in your body, or uploading your mind to a computer). You seem to view the merging in a broader sense, which I agree is happening.


http://abstrusegoose.com/171

http://blog.dilbert.com/post/102627914061/dilbert-pocket

(The Philip K. Dick in me thinks that if we merge, the machines would absorb us, not us absorbing them into our blood. The Borg > cyborgs.)


It's probably not a patch on what a dedicated AGI could achieve. Read Nick Bostrom's book Superintelligence for some in-depth arguments on this.


Not sure how I said accelerate and not excel. Brain glitched.



Dude that was years ago. Give it up.



