What Darwin's Theory of Evolution Reveals About Artificial Intelligence (theatlantic.com)
136 points by llambda on June 22, 2012 | 84 comments



I recently watched: http://www.youtube.com/watch?v=Uoda5BSj_6o

in which Eliezer Yudkowsky talks about challenges in making 'friendly' AI. Yudkowsky draws a lot of examples from evolution to show how a system blindly optimising for an objective function can result in all sorts of things that a human specifying the objective function wouldn't ever have expected.

The linked article at one point says: "In order to be a perfect and beautiful computing machine, it is not requisite to know what arithmetic is." After watching Yudkowsky's talk, which I found very insightful, I got the impression that it could be very difficult to make a machine that wouldn't do very unpredictable things, unless you had a very deep understanding of exactly what you were telling it to do.

The talk might be of interest.


That's definitely true, and a major problem beyond friendliness (i.e. way before any singularity). A traditional assumption in machine learning and other areas of AI is that you can factor out the problem-specification parts from the algorithmic problem-solving parts, but it's very easy to get results that are unexpected, except in the rare cases where you have a 100%-correct, obvious objective function handed to you by the structure of a problem. When a human writes down what they think the objective is, there are very often a lot of unspoken things they intended to be included: "optimize [x], but without doing anything obviously stupid that I wouldn't want you to do". Hence a lot of iteration on objective functions is needed in real-world applications, and recent-ish research focuses on alternative formulations like interactive machine learning, preference elicitation, etc.
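
To make the "iterate on the objective" loop concrete, here is a minimal sketch in Python (all the numbers, names and the update rule are invented for illustration, not any particular system): the optimizer proposes the best candidate under its current guess at the weights, a human occasionally says "actually I'd prefer this other one", and the weights are nudged toward the stated preference, perceptron-style.

    import numpy as np

    rng = np.random.default_rng(0)
    candidates = rng.uniform(size=(50, 3))   # each row: [speed, cost, safety] of a candidate design
    true_pref = np.array([1.0, -1.0, 3.0])   # what the human actually cares about (unknown to the optimizer)
    w = np.array([1.0, -1.0, 0.0])           # the written-down objective forgot about safety entirely

    def human_prefers(a, b):
        # stand-in for asking a person "which of these two do you like better?"
        return candidates[a] @ true_pref > candidates[b] @ true_pref

    for _ in range(20):
        best = int(np.argmax(candidates @ w))        # optimize the current, flawed objective
        probe = int(rng.integers(len(candidates)))   # show the human an alternative candidate
        if human_prefers(probe, best):               # "no, I'd rather have that one"
            w += 0.5 * (candidates[probe] - candidates[best])  # nudge weights toward the preference

    print(np.round(w, 2))  # after a few corrections, w tends to pick up a positive weight on safety

A real preference-elicitation system is much more careful about which pairs it shows the human, but the loop structure is the same.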


For an example of this, have a look at this essay [1] by George Dantzig, in which he tells the story of how he tried optimizing his diet using linear programming.

[1] http://dl.dropbox.com/u/5317066/1990-dantzig-dietproblem.pdf
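
If you want to play with the flavour of the problem, here's a minimal sketch with scipy (the foods and numbers below are made up, not Dantzig's actual data): the moment you leave out a constraint you silently assumed, the "optimal" diet goes absurd.

    from scipy.optimize import linprog

    foods    = ["bread", "milk", "lard", "spinach"]
    cost     = [0.30, 0.60, 0.20, 0.80]   # $ per serving (invented)
    calories = [80, 120, 900, 20]         # kcal per serving (invented)
    protein  = [3, 8, 0, 3]               # g per serving (invented)

    # linprog minimizes cost @ x subject to A_ub @ x <= b_ub, so the
    # ">= requirement" rows are written with their signs flipped.
    A_ub = [[-c for c in calories],       # at least 2000 kcal per day
            [-p for p in protein]]        # at least 50 g protein per day
    b_ub = [-2000, -50]

    res = linprog(c=cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * len(foods))
    print(dict(zip(foods, res.x.round(2))))

    # Drop the protein row and the cheapest "diet" is ~2.2 servings of pure lard a day;
    # every constraint you add back encodes an unspoken "and don't do anything obviously stupid".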


Interesting read. Thanks a lot!


> "optimize [x], but without doing anything obviously stupid that I wouldn't want you to do"

This is similar to the basic idea of Yudkowsky et al's Coherent Extrapolated Volition, the gist of which is that an FAI's metaethical function should be some aggregation of what it believes people would want it to do if they had the knowledge and intelligence(/cycles) that the AI has. (I believe there exist updates/improvements to CEV, but I don't know much about them.)


"At one point in the discussion, Sussman told Minsky that he was using a certain randomizing technique in his program because he didn't want the machine to have any preconceived notions. Minsky said, "Well, it has them, it's just that you don't know what they are." It was the most profound thing Gerry Sussman had ever heard. "


Just one comment on a minute quote from Penrose that happened to be in the article:

"To my way of thinking there is still something mysterious about evolution, with its apparent 'groping' towards some future purpose. Things at least seem to organize themselves somewhat better than they 'ought' to, just on the basis of blind-chance evolution and natural selection."

This is a common fallacy about evolution, and is explained beautifully by the anthropic principle, or in other words, the innate selection bias of our existence. We've self-selected for our own awareness of our circumstance and existence. Things are not organizing better than they "ought" to, they've just happened to organize to a sufficient point that we exist and perceive this process and say things about it like the above quote.

It is in the same way that someone who wins the lottery must think themselves exceedingly lucky that they, of all the millions of people participating, have won. They must think there is something mysterious about this, that things turned out somewhat better than they 'ought' to, just on the basis of blind chance.

Yet, what is the probability that some person, of the entire pool of people in the world, wins the lottery? One. It has necessarily happened by the nature of the lottery.

We as a species have won this lottery, by the mere nature of our sentience. We should not think it mysterious or unusual in any way. However, we are lucky in the sense that we are here; we are special in that we can perceive and understand. As long as we understand the fact that there is no "should" in evolution, this is a perfectly fine thought. It just happened, and on this planet, it produced something able to understand itself. As Carl Sagan said, "We are a way for the cosmos to know itself." Certainly there is much metaphysical and philosophical consequence to our existence, but scientifically and probabilistically speaking, it makes perfect sense.

Consequently, I believe it may be much more difficult to reach true AI than some have postulated.


It is worth noticing that Dennett, while not taking Penrose out of context per se, is using him to illustrate a much narrower point than you're making -- and your quote of him therefore does take Penrose out of context. He continues that quote by saying, "It may well be that such appearances are quite deceptive. There seems to be something about the way that the laws of physics work, which allows natural selection to be a much more effective process than it would be with just arbitrary laws."

Penrose's point is that natural selection seems to have capitalized on a specific physical feature which can do something better than any computer can -- this he finds quite remarkable, and he is expressing skepticism that, if you just leave a genetic algorithm for a while with an arbitrary weighting function, that it will develop structures which start to reach towards such logic puzzles with curiosity and form proofs of them.

That we have done this is not testament to the anthropic principle nearly as much as it is testament to the fact that, among all the animals, we seem to have most cleanly decoupled our logical intuitions and applied them to our problems.


This is a common fallacy about evolution, and is explained beautifully by the anthropic principle, or in other words, the innate selection bias of our existence.

I'm a little confused about how the anthropic principle explains this fallacy (and it is a fallacy to be sure). I guess you're referring to how natural selection (and all the other sorts of selection that occur regularly) lead to the existence of "appropriate" organisms and features.

But I think knowing some of the actual details in how evolution works at all the different levels (molecular bio, epigenetics, competition/cooperation, behavior, etc, etc, etc) serves as an even better explanation. Here's a 25 lecture course from Stanford that explains just that: http://www.youtube.com/watch?v=NNnIGh9g6fA


Absolutely, but Penrose was a brilliant man; he surely understood the biological and scientific explanations of it all. Yet, he still held some mysterious belief that there was something magical about the process. This belief comes from the self-selection bias (http://en.wikipedia.org/wiki/Self-selection_bias) (NOTE: not natural selection or evolutionary selection; there's a conflict of terms between probability theory and biology there—sorry for the confusion) I spoke of, and many people share it. From our individual perspective, or even from a human-wide perspective (which are the ones most familiar to all of us) we appear extremely special, extremely lucky to exist at all; to have been in the right place at the right time to undergo this process of evolution.

The anthropic principle simply invalidates this mystery: it is not special, it simply happened, and because it happened, we exist to observe it.

It doesn't care to explain evolution in any biological or scientific way; it only seeks to remove all doubt that it was able to happen at all.


I'm going to take a few seconds here to be a pedant and disentangle the structure of an argument from its contents.

Strictly speaking, a fallacy is a defect in the structure of an argument, i.e. in the logic that connects a premise to a conclusion. What you and the parent have identified is simply a false premise ("Things at least seem to organize themselves somewhat better than they 'ought' to"[1]). A false premise may result in the wrong conclusion, but an argument can be wrong without being fallacious.

This distinction isn't so much relevant here, but it comes in handy when you want to pinpoint sources of disagreement in a discussion. Lobbing the f-word around (even when deserved) just makes people defensive.

1. Of course the argument used to derive this premise might be fallacious, but that argument isn't given.


Thanks for making the distinction, but the word "fallacy" can also be used more loosely than the way you define it. Googling "fallacy definition" gives me:

    1. A mistaken belief, esp. one based on unsound argument.
    2. A failure in reasoning that renders an argument invalid.
We were using it in the first sense.


I disagree. You say that "Things are not organizing better than they 'ought' to, they've just happened to organize to a sufficient point that we exist and perceive this process and say things about it".

Let's assume we are but one of an infinite path of possibilities for slightly different starting conditions of a universe. When you get to this magnitude of combinations, it is no longer sufficient to compare individual combinations -- rather it is infinite sets that make the most sense to compare -- classes of possible universes in a sense. You are comparing cardinalities, not members of the different sets themselves.

And if you think about all of the possibilities, it is almost certain (borrowing the mathematical usage of that term) that our universe is unlike almost all combinations, and to a significant degree. I could be wrong here, but it is certainly something that could be proven (or disproven) by an expert mathematician.

So, when you say we won a lottery, I think it is instead more like you threw a dart at a real number line between 0 and 1 and the dart landed exactly on 0.4. It's not impossible, but the probability of that happening is still 0. (Those two statements are actually not incompatible. There's a good question on the math stack exchange about it).


What calinet6 is saying is that the reason you think "it is almost certain (borrowing the mathematical usage of that term) that our universe is unlike almost all combinations, and to a significant degree" is because this is the world that you live in. That makes it seem like some sort of special case from all the other innumerable possibilities. In reality the only special thing about it is that it is the one that happened to happen.


What I'm saying with my argument is that the odds that we ended up in a "set" of universes similar to ours is exactly 0. (Which does not make it impossible.) If the odds are 0, then it seems more reasonable that there is a better, more probable explanation that the universe is the way it is than chance.

It's not based on what I want to believe, or even something from self-selection bias. It is something that can be calculated and either proven true or false, probably with set theory of some kind.


"What I'm saying with my argument is that the odds that we ended up in a "set" of universes similar to ours is exactly 0."

I generally use the anthropic principle in the context of evolution and our existence on this planet—of all planets in the universe—versus our existence in this universe. You could certainly take it to that extreme, but it's a much deeper venture and we know so little concrete data about it.

How many universes are there? Is it an infinite set, as some suggest? We don't know. Do they differ in their stability or cosmological constants? We don't know. How many universes have there ever been? We don't know. How much time has ever existed? We don't know. Does time even exist outside the 4-dimensional space of our universe? We don't know that, either.

But we do have a very good theory about how we came about, as sentient lifeforms, on this specific planet around this specific sun in this specific universe. That's quite a lot, actually.

If you can prove anything mathematically about the probability of our universe's existence by random chance as opposed to some outside factor, then book your flight to Stockholm. My most humble hypothesis is that you will require more data to run those numbers; data that we do not yet have. You could make some guesses, but that's all they'll be.

In the end, it boils down to the age old endless question: originally it was "how were we created?"; then it was, "how was the Earth created?", then "how was the solar system created?", followed by galaxy, space, the universe, whatever's holding the universe, ad infinitum. At the ever-expanding edge of our knowledge, "God" seems as good a word as any to describe that mystery.


In your dart example, would you say that whatever particular position a dart lands on would be better described by something other than chance?


Well, the crux of my example is that the dart lands exactly on 0.4. There are infinitely many real numbers between 0 and 1, so what is the probability it would land exactly on 0.4? Zero, of course. But that still doesn't make it impossible.

I'm not denying that it could have landed there by chance, but given that the probability is 0, I am saying that it is possible there is something else we have not discovered that provides a more probable explanation than chance.

A different type of analogy is the old spontaneous-generation theory for maggots in meat. For a long time, people thought they just randomly appeared. Upon closer investigation, it turns out there were other, more deterministic processes going on.


The probability that it lands on exactly 0.4 is exactly the same as the probability of it landing on any other particular number. You place additional significance on 0.4, but that is for cultural reasons.

If I flip a coin 5 times in a row, HHHHH is just as likely as any other combination, but humans have the tendency to think that is weird or special.

Anyway...

If I understand your hangup correctly, it seems you are mistakenly thinking that all imaginable possible realities should have been equally likely (as positions on a random dart board would be). Of course this is not true; a reality where snakes developed internal organs that function as internal combustion engines which they use to power propellers is very unlikely indeed. The reason of course is that this reality did not arise by picking one randomly from a bag of all possible realities, but rather developed over time through random mutations that were selected for by natural selection.

Now, of the possible realities random mutation and natural selection could have created, this world could be said to be equally likely. There is not some sort of "future purpose", as Penrose says, that is being driven towards.
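
The difference between one-shot random sampling and cumulative selection is easy to see in code. Here is a minimal sketch along the lines of Dawkins' "weasel" toy (the mutation rate and population size are arbitrary choices): mutation plus selection reaches a 28-character target in a few hundred generations, whereas drawing uniform random 28-character strings would essentially never hit it (one chance in 27^28 per draw).

    import random

    TARGET   = "METHINKS IT IS LIKE A WEASEL"
    ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

    def score(s):
        return sum(a == b for a, b in zip(s, TARGET))

    def mutate(s, rate=0.05):
        return "".join(random.choice(ALPHABET) if random.random() < rate else c for c in s)

    parent = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
    generations = 0
    while parent != TARGET:
        offspring = [mutate(parent) for _ in range(100)]
        parent = max(offspring + [parent], key=score)  # keep the fittest; fitness never drops
        generations += 1

    print(generations, parent)  # typically a few hundred generations, nowhere near 27**28 tries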


Okay, what's the probability of the dart landing on 0.400000? About one in a million, right? It's not zero because the target is only specified to finite precision.

Our universe is also only specified to finite precision; we don't know whether physics is ultimately continuous or discrete, but regardless, we can only measure anything to finitely many digits. So whatever the probability of our universe may be, it is at least greater than zero.
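
Put as a formula (assuming, as the analogy does, a dart X uniform on [0, 1]): the exact point has probability zero, but a target specified to six decimal places has probability one in a million.

    P(X = 0.4) = \int_{0.4}^{0.4} \mathrm{d}x = 0,
    \qquad
    P\bigl(0.4 \le X < 0.4 + 10^{-6}\bigr) = \int_{0.4}^{0.4 + 10^{-6}} \mathrm{d}x = 10^{-6}.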


There is additional significance to 0.4. Maybe I didn't clarify that well enough in my original post. "0.4" represents the class of universes that support life in a comfortable way. What do I mean by comfortable? I mean that there are far, far more combinations of universe classes that would result in life that is painful (or even extremely painful). Short lifespans, constant agony as a result of different processes, etc. How do I know this? I don't -- it's just a conjecture (and everyone is just conjecturing too). I could be wrong. But it seems intuitively that there would be infinitely more permutations of a painful but life-supporting universe than the "comfortable" one we live in (I don't mean "comfortable" in the literal sense -- there's still plenty of pain in our world).


Penrose was explicitly talking about biological evolution. If we are actually in a "fine tuned universe" discussion now, then all I have to say is "Anthropic Principle". I don't think there is much more meat there to discuss otherwise. We cannot (yet?) reason about any deterministic processes that may have led the universe itself to have the constants it does.

Of course we already know about many processes that support the formation and existence of life in this already existing universe. Stellar evolution for example, is a rather healthy branch of scientific inquiry. There are not "mysterious" things at play, unless Penrose and yourself are simply intending to imply that there are still things which we do not fully understand (which is obvious, and that is a very confusing way of stating something so obvious and universally recognized).


Excellent summary, and very clear. I enjoyed reading your line of thinking. Thanks!


Okay, apparently there's some person that doesn't like what I'm saying and is downranking my posts. I checked the YC guidelines for posting and I do not appear to be violating any of them. If you're downranking my posts just because you disagree with my position and my exposition of it, well... then that's just not very nice.


Seriously? Really? C'mon... Is there a way I can see who did that?


(I have not downvoted you. I don't have the karma to downvote yet and have been replying to you so I can't anyway)

People upvote for agreement, it only makes sense that they downvote for disagreement as well. I'm fairly certain that PG has said that he is ok with that before.

You can be pretty sure that you will always be downvoted for complaining about downvotes though, even by people that may agree with you.


Yes. The only mystery is how the universe got here, and if you want to get super metaphysical, why math and logic work at all.

Evolution as a process is entirely reasonable when you look at the details of the logic.


Instead of reading this person's take on it, I would much rather read the original paper. (1)

I don't see how Darwinism is even related at all. Penrose likes to say that there is a fundamental limit, either through a quantum process or because computers can't comprehend Gödel's theorem (2).

Aaronson, in his Quantum Computing Since Democritus course, says that if our brain is a quantum computer, it isn't very good at taking advantage of it. (3)

Personally, I think we don't have a particularly good ability to understand Gödel's theorem ourselves (which one of Penrose's arguments assumes we do). Our pattern recognition is basically learned through induction, and it just keeps getting more optimized.

1 http://orium.homelinux.org/paper/turingai.pdf

2 http://en.wikipedia.org/wiki/The_Emperors_New_Mind

3 http://www.scottaaronson.com/democritus/lec10.5.html


In my opinion Penrose has been so wrong for so long about so much of artificial intelligence and cognition that it's hard to take him seriously any more.


Classic case of a visionary genius crossing the blurry line from physics to metaphysics.


I am reminded of the Nobel Disease: http://rationalwiki.org/wiki/Nobel_disease


Penrose likes to say that there is a fundamental limit through either a quantum process

This is a common misconception about Penrose for some reason. I think what he's really saying is that there is some as-of-yet-unknown physical process that is responsible for consciousness. In order for us to discover what exactly that physical process is, however, we will probably (though not necessarily) have to completely revise our understanding of quantum mechanics. For example, we might need a theory that successfully integrates QM with relativity.

Wikipedia has a decent article on hypercomputing (http://en.wikipedia.org/wiki/Hypercomputation#Hypercomputer_...) (i.e. computers that can do more than Turing Machines), which is essentially what Penrose believes is responsible for human intelligence. Many of these weird hypercomputers use non-quantum properties of the universe to achieve hypercomputation. Two common examples are creating and collapsing different threads in spacetime, or computing with arbitrary-precision real numbers.


I have studied hypercomputation, and so far it seems like there is nothing useful there yet. No feasible models have been implemented.

If there is an unknown process in reality that does this, then why couldn't a computer be designed to take advantage of it once we understand it? The burden of proof is on you to demonstrate the process, not to hand-wave it away.


Your point is very important to keep in mind. Penrose is about the most credible supporter of a non-algorithmic mind that you could find but he, like those who argue the human mind is algorithmic, does not use any sort of supernatural phenomenon to explain anything.

Any mechanism that could make the human mind non-algorithmic (none have been discovered so far) could still be harnessed by synthetic life forms. Penrose's non-algorithmic mind idea has far fewer ethical or religious implications than a lot of people seem to think. In his world thinking machines are still possible; the only trick is that Turing machines can't do it.


IMHO, though, Penrose's argument is a have-your-cake-and-eat-it kind of thing. His claimed scientific basis for non-algorithmic computation in the mind relies on science which is tantalizingly out of reach (yet somehow has a definable end-point: when our understanding of QM reaches point X). But that's the same as claiming that he knows the science of pixie dust will be well understood one day, therefore pixie dust is scientific, and pixie dust itself makes the brain problem tractable.

Personally, I don't think that there is a special mechanism (or pixie dust) that separates my brain function from that of my cat. Of course, there are some qualitative differences which I clearly don't understand. But blaming the fact that I don't understand the differences on the lack of some scientific breakthrough doesn't seem to clear the whole problem up: it's passing the buck.

IMHO, when the 'trick' to consciousness is found, people will look back and see that what seemed like pixie-dust was just a change in Order() of the system as a brain gets more complex. And there's nothing to suggest that there couldn't be another similar 'upshift' in consciousness beyond that which regulation-sized human brains can do...


I was very impressed on learning that Turing proved that a Turing machine could compute any computable number... but then I started to wonder how one could prove such a thing. It seemed you'd need a definition of "computable" and a definition of "Turing machine" and then show their equivalence. But how could you define what was "computable", and show that the definition was the right one? That's the nub of the problem, and it seemed impossible to me. Eventually I read his paper, and it turns out he thought so too:

    9. The extent of the computable numbers.
    No attempt has yet been made to show that the “computable” numbers include all
    numbers which would naturally be regarded as computable. All arguments which can
    be given are bound to be, fundamentally, appeals to intuition, and for this
    reason rather unsatisfactory mathematically.
On Computable Numbers, with an Application to the Entscheidungsproblem http://www.comlab.ox.ac.uk/activities/ieg/e-library/sources/...
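
As a concrete example of what Turing means by a "computable" number, here is a tiny Python sketch (a hypothetical helper, using exact integer arithmetic so every digit is correct): sqrt(2) is computable because one fixed procedure will produce as many of its digits as you care to ask for.

    from math import isqrt

    def sqrt2_digits(k):
        """First k decimal digits of sqrt(2), via floor(sqrt(2 * 10^(2(k-1))))."""
        return str(isqrt(2 * 10 ** (2 * (k - 1))))

    print(sqrt2_digits(10))   # '1414213562'
    print(sqrt2_digits(50))   # the same procedure just keeps going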


Most real numbers are uncomputable, but not simply because their digits go on forever without repeating (so do sqrt(2)'s, and it is computable); it's because there are only countably many programs, each a finite string, while there are uncountably many reals, so almost every real has no program that produces its digits. More interesting non-computable numbers come from the Busy Beaver problem (https://en.wikipedia.org/wiki/Busy_beaver). There doesn't seem to be a way to compute these numbers without basically trying out every possible answer.


But is the universe even modeled by real numbers or is this solely a concept of the mathematical domain?

Bekenstein (with key input from Hawking's black-hole thermodynamics) derived the Bekenstein bound, which gives an upper bound on the amount of information that can be contained within a given region. As black holes are maximal-entropy objects, one can compute the entropy for a black hole of a given size and equate that with Shannon entropy to determine the number of bits. This suggests, to some degree, that the fundamental particle is the bit.

(This also proves that a true Turing machine with unbounded memory is not physically possible.)
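
For reference, the bound being described is Bekenstein's, and the black-hole (Bekenstein-Hawking) entropy saturates it; dividing by k ln 2 converts entropy to bits:

    S \le \frac{2 \pi k R E}{\hbar c},
    \qquad
    S_{\mathrm{BH}} = \frac{k c^{3} A}{4 G \hbar},
    \qquad
    N_{\mathrm{bits}} = \frac{S}{k \ln 2}.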


This whole thing is in the "mathematical domain". The point is that you can define a number in such a way that it clearly exists, but there's no good way to calculate it.


Although it is commonly taught in undergrad classes as "stating that Turing machines and lambda calculus are equivalent" (which, to be clear, it does imply), the premise of the Church-Turing Thesis is that all computable functions can be computed with Turing machines/lambda calculus.

The Church-Turing Thesis is unproven, but these days it is almost universally believed to be true. Wikipedia has fairly decent coverage of all this stuff, if anyone is new to the topic.


With a powerful enough computing system, we could just run an artificial world and hope intelligence evolves within it.


Koji Suzuki's 'Ring' series - the novels on which the horror movies were based - has a very interesting slant on this idea; well worth reading. (Don't be misled by the movies: the second and third books in the trilogy are excellent speculative fiction!)


Given that you would also be simulating a lot of unintelligent behaviour, how would you discover whether or not the simulation has been successful? I mean how would you find which parts of your system are the intelligent parts?


Perhaps you could detect when they start to perform science. Put them on a rock, then a few hundred thousand kilometers away place another rock, this time sterile, and separate them with a vacuum. If living goop gets to the sterile rock, chances are you would be justified in calling it intelligent.

(Yes, I did just recently watch 2001. Sue me ;p)


When they try to escape confinement...? (See my comment above, and the movie 'The Thirteenth Floor')


I believe the comparison the author makes between Turing and Darwin is valid and applies, in general, to any system really.

That is, it's very hard to describe a system, top-down, as the state of the whole thing at a particular point in time, but it's more feasible to approach a description as the rate of change of all the smaller systems contained within.

In other words: it's hard to describe a computer, but it's easier to describe it as a series of computations (Turing machine); it's hard to describe "life", but it's easier to describe it as a series of processes over time (evolution).

It's a way of thinking.


The analogy drawn between evolution and artificial intelligence is interesting, yet ultimately deficient in at least one key aspect. Yes, the steps performed by a computer are unintelligent, just like mutation and natural selection within evolution. However, unlike the process of life according to evolution, computers are not purposeless or directionless. Computers were designed and imbued with a purpose (computation) by intelligent beings, in contrast to evolution’s “blind watchmaker”. Yes, unintelligent computers can perform intelligent processes (for a particular definition of intelligence). Yet it remains up to intelligent beings to judge the final output produced by computers: whether such output makes sense, is useful, or is true. Even if eventually a computer designed by another computer (or a lineage of computers) manages to pass the most brutal of Turing tests, you can’t get away from the fact that such a computer ultimately owes its direction, purpose, and intelligence to the intelligent beings who put together its oldest ancestor.

For an apt analogy between AI and evolution, a computer would need to exist and compute without any involvement from intelligent beings at any point in its existence. That computer would need to develop its direction and processes in absolute, unintelligent independence. We couldn’t develop such a computer ourselves; we’d need to discover it somewhere in the natural world (perhaps lying within a mineral deposit). However, at this point we’d be drawing an analogy between evolution and another form of itself. Circular analogies aren’t very useful.

The author writes, “in order to be a perfect and beautiful computing machine, it is not requisite to know what arithmetic is.” This is true, but knowledge of arithmetic (or a higher form of reasoning) is required in order to produce such a machine in the first place. At least that’s what we can tell based on the only computers available to us (all of which owe their existence to knowledgeable beings).


As someone who has a keen interest in reading about evolution and artificial intelligence, I'm a bit disappointed they had to bring an anti-God slant to the article.

Other than that, I certainly believe it's possible to generate something "intelligent" (for lack of a better word). I think this will require an improvement in both processing ability -- and more importantly -- an innovative new way to create a mutation algorithm. If we just take bits and let them mutate over time, we may eventually get something like AI but I think it will take more time than we have. We'll need to use our own intelligence to accelerate this process.


I don't think it's "anti-God" per se. It's just that as science advances there's more and more that we can say about the universe that doesn't require God (or philosophy) to talk about.

Unfortunately the early western Judeo-Christian line of thinking biased people heavily towards a "God of the gaps" style of thinking. In this line, God is first postulated as an explanation for unexplainable observations and then later falsified. This makes science seem "anti-God" just by its nature, since it always seems to be trying to un-gap the God parts. IMHO this philosophical rabbit trail was one of Christianity's biggest mistakes.


Yeah, I can agree with that. In fact, I would be surprised that an omnipotent being would be "hiding in a crevice, just waiting to be found" by humans. It seems too much like a story. I prefer the idea of absolute physical laws that were well-designed, and from these laws everything else came about.


When the topic is evolution, there is usually an anti-God slant.


Because you don't need a God any longer to explain things. God is a literary device that has been used throughout the ages to fill in the gaps of human knowledge. It's akin to other devices, like dream sequences, flashbacks, Chekhov's Gun, just one that caught on, mainly thanks to a couple thousand years of it being rammed down the throats (often, quite literally) of uneducated peasants.

These days, we don't need the concept of a creator because we better understand the workings of the universe.

This of course doesn't have anything to do with Faith, that's an entirely personal thing that some need/want to get through life, to each their own. Fortunately in this era we mainly have switched away from executing those that don't share the same faith. Oh, wait...


So... am I the only religious person on this website who is also very scientific? I mean, I believe in evolution, and in my opinion the two concepts have always been complementary rather than exclusive (but I won't start a debate). Just curious.


I don't think they're compatible at all. I think those who think they are simply display evidence of our ability to maintain cognitive dissonance; religion and science are not compatible. The only people who think they are, are religious people who can't let go of their faith, so they rationalize.

Science is anti-faith, religion is anti-evidence, how anyone thinks them compatible is beyond me. Faith is not a virtue in science, it's just bad thinking.


The agnostic Stephen Jay Gould thought they were not opposed: http://www.stephenjaygould.org/library/gould_noma.html The Catholic Church has the following to say about the relationship between faith and science: http://www.vatican.va/holy_father/john_paul_ii/encyclicals/d...


That he did, but his belief can be summed up with this quote:

"The net of science covers the empirical universe: what is it made of (fact) and why does it work this way (theory). The net of religion extends over questions of moral meaning and value. These two magisteria do not overlap, nor do they encompass all inquiry (consider, for starters, the magisterium of art and the meaning of beauty)."

And he's quite simply wrong: they do overlap, as science has something to say about all of those things. Religion has no claim on the realm of morality and meaning, nor does it provide any actual answers on the subject, nor does it serve as a guiding light on it.

Gould is trying to play nice because he thinks religion is valuable to many people and he doesn't want to offend. I do not believe his position is honest, merely political.

As for what the church thinks, they're going to rationalize no matter what, their opinions simply support my original statement.


No, I also consider myself both religious and science-minded. For example, I have no issue with the idea that complexity in evolution is an emergent property, as seen, for example, in Conway's Game of Life. My religious ideas on the soul are very much in line with, for example, Karl Rahner's: http://www.anselm.edu/Documents/Institute%20for%20Saint%20An...


I don't know about this website, but I know mathematicians (some of whom have made significant contributions) who are practicing christians. Not just for the social life either, but because they believe in god.


The hostility with which religion has treated science has provoked an annoyingly equal and opposite reaction. I personally have some interesting beliefs, but it's useless to wonder whether they're religious or not.

But really, there is a deeper issue: without a statement of the nature of your personal interpretation of your religion, it is not actually possible to usefully decide whether evolution and your religious beliefs are compatible.


I'd say that's rather overestimating the philosophical importance of evolution. When you look at the arguments people actually make about God's existence (http://en.wikipedia.org/wiki/Existence_of_God), maybe one or two of them actually involve where ducks come from.


Considering the Catholic Church finds no contradiction between evolution and the faith, I too would say evolution's importance to the existence/non-existence of God is overrated.

http://en.wikipedia.org/wiki/Catholic_Church_and_evolution


You're rather neglecting the past thousand years or so during which the Catholic Church would have you executed for thinking the earth wasn't flat. And let's not even mention their issues with women, witch hunts, and of course the Inquisition.

And you're somewhat misleadingly misrepresenting that Wikipedia article. It actually says that the no-conflict position is an unofficial stance of the Vatican.

It also (rather importantly neglected by your comment) says that their position is that humans are indeed special and need the existence of god. That's rather a conceited view of our species, but then this is coming from the same organization that spent much of the 13th century burning whole French villages...


Both Protestant and Catholic theology (Schillebeeckx, Rahner, Kasper, Moltmann) have largely moved past such a stop-gap god. The original poster is right: I and most practicing Christians in my country (arguably it's different in the US) don't see a problem with evolution. Within the framework of Catholicism (the largest organized, most developed Christian denomination), believing so is no problem at all; the "deposit of faith" allows this. The seeds for this were sown in the early part of the 20th century. Concerning the specialness of the human person or soul, and the need for God: this is first and foremost meant as theological language.


As to the first point: Slavery was once legal also, but that is no longer so.

As to the second point: I would assume the Pope actually knows the opinion of the Church and wrote so here: http://www.ewtn.com/library/PAPALDOC/JP961022.HTM or http://www.vatican.va/holy_father/john_paul_ii/messages/pont...

As to point 3: see my response to 1, and yes, religions for the most part believe humans are special. Being sentient tends to do that.


Okay, okay, I didn't mean to start one of these. Let's just get back to the AI discussion. (I do have a long paragraph about AI in the second part of my original post that has been neglected, you know).


I am not overstating, or even stating at all, any philosophical importance of evolution. Evolution just is; it doesn't need, nor does it care about, our human philosophies. Evolution began long before humans (obviously) and was working just fine with or without us giving it any thought whatsoever.

I think you have it backwards, religion very much depends (with glorious irony) upon evolution, and our human condition of attributing agency where there is none.

Meanwhile, with or without any of us thinking about some mythological super-being, evolution carries right along, from the cellular level all the way up to who we are.

I am not saying there is, or isn't a god (I honestly don't care, what does it matter?) - but I am saying that it's time we stopped burying our heads in the sand and ignoring science, while we hack off the limbs of anyone that doesn't share our particular god theory.


Great.


Very interesting. Neat look into how humans are just made up of layers that perform a certain task. The layers themselves don't know the overall structure or results of their actions, but their combination creates the human. The parallel between humans and computers in this regard has been widely noted, but I hadn't made the connection between the evolution of humans that led to a more complex and thinking organism and possibly using this method to foster artificial intelligence.


Could someone please explain what this article is trying to convey? I understand that the main thrust is that intelligence or comprehension can emerge/evolve as the end result of simpler interacting processes that are not themselves necessarily intelligent. This is analogous to how evolution builds complex systems even though the "algorithm" for evolution isn't so complex. So are they implying that strong AI is definitely possible? Or are they saying something beyond that?


Isn't this basically a stance that Wolfram argues with his cellular automata as basis?


It's interesting you mention that. Gerard 't Hooft came up with a way (a loophole) that the universe could still be entirely deterministic using the cellular automata model. Most physicists disagree with this view, but what makes it interesting is that 't Hooft is a Nobel laureate.

Here's his paper: http://iopscience.iop.org/1742-6596/67/1/012015/


I should like to warn people of this sentence: 'If the history of resistance to Darwinian thinking is a good measure, we can expect that long into the future, long after every triumph of human thought has been matched or surpassed by "mere machines," there will still be thinkers who insist that the human mind works in mysterious ways that no science can comprehend.'

You might think, by his preceding paragraph, that this applies to all of his "trickle-down theorists". It emphatically does not apply to Penrose or John Searle -- two of the three he names explicitly. (I'm not even sure that it applies to Descartes.)

So, let me just summarize those two, for people who aren't so familiar with their writings. Penrose is a physicist who thinks it peculiar that we can mathematically prove, for any computer, here is a true fact which that computer cannot prove. He thinks this is peculiar because if our understanding were algorithmic, then that would imply that we could eventually write down its axioms as a formal system, go through the Gödel proof line by line, and prove, "here is a true fact which my understanding cannot prove," thereby using our understanding to prove that fact, thus constructing a contradiction. His conclusion is that understanding can't be algorithmic. (Dennett of course disagrees.) Nonetheless, he believes in science. He thinks that we need to understand existing science (quantum mechanics and perhaps gravity) better to comprehend consciousness, but he certainly seems to believe that science will eventually comprehend it.

Searle is a philosopher who finds it peculiar that functionalists (like Dennett) still seem to quietly accept the Cartesian counting of substances. That is, they seem to accept that consciousness is somehow a 'thing' which is distinct from the stuff of the world, and therefore they must argue that such 'things' don't really exist. Searle wants to say, first and foremost, that we really are conscious when we wake up in the morning -- all that touchy-feely crap that you experience until you go to bed at night isn't some sort of illusion, it really is a part of the world and does affect the world, etc. Again, he views it as a mistake to think that it's somehow a 'second substance' which cannot be reconciled with 'material substance' -- they're both parts of the physical world and whether you treat them separately says more about you than it does about the world.

He also wants to note that it (consciousness) is not a computation. This is for a very simple reason: whether a box is performing a computation is observer-relative, given Turing's definition of a computer as a symbol-shuffler: an observer who doesn't interpret the symbols 'properly' doesn't see it as 'computing' anything. But whether you are conscious is presumably observer-independent; there is no perspective that I can switch to by which I might somehow obviate your consciousness and thereby render you incapable of feeling, for example, pain. Many of Searle's articles contain the attitude of: I'm going to clear away the philosophical problems for once and for all and then the scientists can do the hard part of figuring out what neuronal events correlate with consciousness and all of that stuff, so that we can understand consciousness. In that sense he certainly believes that science will eventually understand consciousness.

These two might not even agree that "a thinking thing cannot be constructed out of Turing's building blocks." If Dennett's point is "sorta-thinking" as a model for thinking, then they might agree that a program can sorta-think. Penrose is merely skeptical that sorta-thinking can climb as high as understanding, while Searle is skeptical that sorta-thinking can rise to the observer-independent nature that we consider a fundamental part of our everyday experience.


>He thinks that we need to understand existing science (quantum mechanics and perhaps gravity) better to comprehend consciousness

What, not phlogiston?

>That is, they seem to accept that consciousness is somehow a 'thing' which is distinct from the stuff of the world, and therefore they must argue that such 'things' don't really exist.

Wait, what? Searle claims that functionalists are dualists? And that therefore they must claim not to be? This makes no sense.

>This is for a very simple reason: whether a box is performing a computation is observer-relative, given Turing's definition of a computer as a symbol-shuffler: an observer who doesn't interpret the symbols 'properly' doesn't see it as 'computing' anything. But whether you are conscious is presumably observer-independent; there is no perspective that I can switch to by which I might somehow obviate your consciousness...

If I write a program that computes the first N squares, and you don't understand the symbols, the computation has nevertheless happened. The claim that Turing computation is observer-relative is equivalent to claiming that if I don't know how x86 works, then my computer won't boot. Of course it will (and likewise, of course, whatever perspective you switch to has no effect on my consciousness).


>>He thinks that we need to understand existing science (quantum mechanics and perhaps gravity) better to comprehend consciousness

> What, not phlogiston?

To be fair, more research really is needed. We don't have an underlying theory to explain the facts we know about consciousness. Yet.

We don't even know if a simulation using the known laws of physics would include the effects of consciousness, let alone be conscious.


What are the effects of consciousness, exactly? Is there a procedure to determine whether a given human possesses consciousness as you yourself do? Is there a procedure to determine whether an AI possesses consciousness as you yourself do? Is there a way to disprove the solipsist? This is the nub of the problem: there really isn't.

Consciousness does not present itself as just another thing in the world -- at all. It is the unique medium through which you experience everything you have ever experienced. It is the fact that the universe exists from a perspective. You assume that the universe also exists from other perspectives, but there is no objective basis for this belief -- in the sense that there are no phenomena solely attributable to true consciousness.


The fact that we are having this conversation is a measurable effect of consciousness.

Consciousness interacts with the world and causes bodies to do different things. This is a measurable effect. A p-zombie in an unconscious universe is less likely to talk about "qualia" than a conscious being.

Hopefully we will discover some simpler effects to measure in the future, but for now we're stuck with "people sure talk about it a lot for something that doesn't exist...".


I agree that humans discuss consciousness and qualia because these things really exist. So yes, in my philosophical opinion, this is an effect of consciousness on the world. However, first, this phenomenon is not guaranteed to occur with all conscious subjects (MOST humans do NOT discuss these things), and second, the phenomenon can also occur with unconscious subjects (You could be an unconscious program designed to win arguments on the internet). The phenomenon is neither a necessary nor a sufficient indication of Consciousness, and it can't be understood to have any scientific value for any future objective understanding of Consciousness.

My position is even stronger though: I am quite happy imagining honest p-zombies and AIs discussing consciousness, apparently as if they themselves possessed it, but without any internal contradiction. We (you and I) understand each other when we each say "consciousness" and "qualia" but it is only because these concepts have a privileged place in our intellects-- we experience them directly (what's more, they are the stuff of our very experience of anything at all). Now, a p-zombie has no predisposition toward understanding these concepts, because it has no subjective experience of its own. How do I explain these concepts to the p-zombie then, in a manner that will force it to say, "Oh, right, no, I don't have consciousness." I contend it can't be done, that any attempt at a definition can always be misunderstood to apply to objective phenomena of the human brain/mind/person apart from what we know as true consciousness, and this is how a p-zombie or AI will always understand it.


I think we agree about consciousness, but you have a stricter constraint for "having scientific value".


"Wait, what? Searle claims that functionalists are dualists? And that therefore they must claim not to be? This makes no sense."

He claims that they, as monists, are inheriting the dualist categories and arguing that one of them doesn't exist. He thinks the sane approach to reconciling the body and the mind is to reject that categorization.

So for example Searle has talked about the problem of how we can get real free will in the sense that we feel there were other options open to us, and we responsibly decided upon the one option. Modern functionalists try to create something which is functionally indistinguishable -- so, for example, a computer which 'sorta decides' which path to take based on a random number generator and a trainable probability distribution.

On this account the feeling that other options are open is more or less some sort of conscious illusion -- we feel like there are many options open to us, but if we really understood what was going on down in the plumbing, we would say "aha, I was deterministic all the way." The remaining loose end is shored up with an appeal to compatibilism: "this is perfectly consistent with free will, because 'free will' should just name my internal logic which determined my action -- and we've got that if our training model and probability distribution is complicated enough."

Searle in Liberty and Neurobiology says that this is indeed one option which he can't really impeach: it's possible that our very feeling that we have 'other options open' is somehow an utter and complete illusion generated in our conscious experience to make us feel relevant, when really there is a brute mechanical process which takes us through our everyday lives.

Notice that it has to be an illusion to Searle, because he is not willing to take the implicit step which functionalists necessarily make in looking for something "functionally indistinguishable": the functionalist ignores the touchy-feely intuitions which people have. Functionalists don't want to talk about the conscious aspects of consciousness, because they're inheriting a materialism which is historically contra Cartesian dualism. You see it right in the Dennett article: who does he try to lump Penrose and Searle with? René Descartes.

Searle's approach instead says something like, "Consciousness is a real feature of real material structures in the real world and we're not going to explain it if our theories just ignore it. Fortunately, its philosophical status is not too complicated and we can solve this problem, leaving the hard scientific puzzle to the scientists."

"If I write a program that computes the first N squares, and you don't understand the symbols, the computation has nevertheless happened. The claim that Turing computation is observer-relative is equivalent to claiming that if I don't know how x86 works, then my computer won't boot. Of course it will (and likewise, of course, whatever perspective you switch to has no effect on my consciousness)."

Well, no, those aren't equivalent. The symbols which appear on your computer screen are in fact interpreted by you as meaningful, regardless of what you think about the instruction set, and that is the reason that the thing is a computer for you. You are right, you do not need to interpret the x86 code directly to interpret the results of the computation -- nonetheless you do need to interpret the computer as a computation.

So that's a bit abstract. Just think of what would happen if I modified your OS (more precisely, I suppose, the display server) so that, right before the image gets sent to the screen, it gets encrypted under AES in CMC mode. So you boot up your computer and suddenly the screen just fills with what looks like random pixel noise, you move your mouse and suddenly the whole screen changes to other noise.

Is it still functional as a computer? You might defend the idea that "yes, even though it's totally unusable now for any computation, it's still computing all the things it used to compute." Searle wants to say that if you accept this, then trivially everything is a computer and the claim that the brain is a computer is a vacuous claim. Why? Because imagine something which you would not normally mistake for a computer: throwing a baseball around with your kid. Strictly speaking, there is an 'intrinsic computation' of this sort which the ball is doing -- it is computing parabolas, complete with an all-order perturbation-theory calculation of the effects of wind resistance, spin, pressure noise, etc.

Searle wants to say: that ball doesn't become a computer until you start using it to compute those things. And that's a perspective shift which you need to make. Perhaps to create an effective android to throw baseballs to your kid, you should model all of this in detail. Perhaps that android can then help us to understand what happens to the baseball. Searle doesn't have a problem with either of those. But if the baseball is just intrinsically a computer then it's hard to find something which isn't, from some perspective, a computer. Every particle in the visible universe -- as well as perhaps every subset of particles in the universe -- would seem to be computing something "in principle".

And if we return to the computing-the-squares puzzle -- how do you know that it computed the sequence of squares, and not, say, the odd numbers? Just like my invertible transform of your screen into random data, there is an invertible transform from the squares to the odd numbers -- especially if you store a 'sequence' as a list of differences between its elements, in which case the sequence of squares is in fact stored as the list (1 3 5 7 9 11 ...) . How do you know that you didn't compute the Fibonacci numbers, given that there is a bijection from the squares to the Fibonacci numbers?
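
(The odd-number trick is just the identity 1 + 3 + 5 + ... + (2n - 1) = n^2, so the same stored trace really can be read either way. A purely illustrative few lines of Python:)

    from itertools import accumulate, count, islice

    odds    = (2 * k - 1 for k in count(1))   # the "stored differences": 1, 3, 5, 7, ...
    squares = accumulate(odds)                # running sums of the very same data: 1, 4, 9, 16, ...

    print(list(islice(squares, 10)))          # [1, 4, 9, 16, 25, 36, 49, 64, 81, 100]
    # Whether the machine "computed the odd numbers" or "computed the squares" is a matter of
    # how an observer chooses to read the output; the physical steps are identical.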

In contrast, if I seal you in a box so that you can't communicate with the world, you are still able to feel things. As an asymmetry with computers, if I say that you really are conscious in this sense, it does not immediately imply that every particle and aggregation of particles in the universe is conscious.


Darwin's theory was the Theory of Natural Selection, not the theory of evolution. I wish people would get it right.


What's the difference?


"In order to be a perfect and beautiful computing machine, it is not requisite to know what arithmetic is."

I was just thinking yesterday that a similar thing might be true for human beings. We constantly strive for self-knowledge, what we really want, what we really need, but what if a smoothly functioning "self" is dependent on NOT knowing exactly how our minds work?

I'm just going to give up trying to understand myself as a complete waste of time. So far, it feels liberating. YMMV.


"Here is my secret. It is very simple: one sees well only with the heart. What is essential is invisible to the eyes." (Saint-Exupéry, The Little Prince)

http://www.economist.com/blogs/prospero/2012/06/quick-study-...

"Le coeur a ses raisons que la raison ne connaît pas"


We may now construct a machine to do the work of this computer.

Creation.



