
> And if it has? Well, then the statistical likelihood is that we're located somewhere in that chain of simulations within simulations. The alternative - that we're the first civilisation, in the first universe - is virtually (no pun intended) absurd.

Wrong. There are multiple alternatives: we could be the nth civilization in a real universe, the nth-to-the-nth civilization in a multiverse, and so on. Sloppy sensationalism and bad statistical reasoning in this article.




I've always felt there's something deeply fallacious -- or hopelessly wrapped up in human psychology -- about taking a representation of a virtual world that we might create and treating it as on par with the actual universe.

It's almost like saying "we're probably characters in a novel," since for every universe there are many novels. Or like that old proof that God exists because he has every desirable quality, including existence -- as if a hypothetical entity could be forced into being by sheer burden of how it is described.

When we talk and reason, we call something a universe to invoke all the general properties our universe has -- except existence, of course, because it would be silly if we could only talk about things that exist. Various things like books and simulations are physical representations of universes that don't exist. By virtue of our interpretation, they are universes nonetheless. When reading a book, we fill in assumptions and details from real life if the author gives us no reason to think otherwise. Tautologically, the world of the book is different from the real world in some limited and structured way, but not in any way that keeps it from being a world at all.

Physics seeks to explain the world exhaustively and reductionistically in terms of mechanisms, models, and mathematics -- basically, to boil it all down to the consequences that emerge from laws and equations that we can completely conceptualize. If the universe consisted just of electromagnetism, for example, then Maxwell's equations would be a "complete" description of the universe, and any simulation of them could be considered a universe. Everything about the universe, every general description that's true of it without reference to specific places, times, and things, would be reflected in the simulated universe as well, because science allows us to subsume it all into the more fundamental description given by the equations.

The problem is that a model of the universe is still a model. Humans invented the notion that a thing and a description of a thing are equivalent.


If you want to convince anyone, you have to do better than just referring to your feelings. Plato felt the world of ideas was more real than the "real" world. Cosmologist Max Tegmark argues that we have no reason to believe the universe is anything more than pure math, and that all formal systems exist in precisely the same way our universe does. Philosopher David Lewis argues that the actual world is nothing but the possible world that we happen to be in. Representationalists believe that any simulation that duplicates the causal structure of conscious beings would have conscious beings in it. (A novel does not duplicate such a causal structure, so that's one place where your argument falls flat on its face, if indeed you were trying to make an argument.) Searle, with his Chinese Room argument, is not buying it.

The bottom line is that we just don't know, and are unlikely ever to know, but if one wants to argue intelligently about it, there's a huge amount of philosophical literature on just this topic.

On the other hand, your position would seem to deny human rights to future intelligent robots, which has serious ethical ramifications should you happen to be wrong.


There is a similar argument against AI: that a computer can't "feel" or experience any of the sensations we do. If you type 'emotional_state = "happy"', does that mean your computer feels happiness? Even if you have a complex simulation of neurons, if you zoom in, all you will see is little electrons bouncing through various logic gates and ending up in some kind of pattern.

But the thing is, the exact same thing is true for your brain too! If you zoom in on your brain you will find nothing more than electrical impulses and neurons that strengthen or weaken their connection to other neurons. If the neurons were suddenly replaced with little computers that performed exactly the same, you wouldn't feel any different. You would still experience emotions and feel sensations and have no idea any parts had been swapped under the hood, unless you looked.

I don't know why but this idea is confusing to me. This XKCD http://xkcd.com/505/ is a good example. My consciousness could be a bunch of static rocks and a man placing new rows based on simple rules. I find this thought disturbing, but there isn't anything fundamentally different between that, a computer simulation, and a world of a bunch of atoms hitting each other, following simple physical laws.
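
For the curious, the rock-placing in that comic is just a one-dimensional cellular automaton, and it only takes a few lines to write one. A minimal Python sketch using Rule 110 (which is known to be Turing-complete, so in principle rows of rocks really could run any computation):

    # Each new row of "rocks" is derived from the previous row by a
    # fixed local rule: every cell looks at its left neighbor, itself,
    # and its right neighbor, and the rule's bits decide the new cell.
    RULE = 110

    def step(row):
        padded = [0] + row + [0]
        return [(RULE >> (4 * padded[i - 1] + 2 * padded[i] + padded[i + 1])) & 1
                for i in range(1, len(padded) - 1)]

    row = [0] * 40 + [1] + [0] * 40    # a single rock in the middle
    for _ in range(20):
        print("".join("#" if c else "." for c in row))
        row = step(row)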

Different beings could exist in all three universes. And each would feel emotions and sensations and all that. And each would argue about how beings just like them in the other universes "don't really exist" like they do.

So how is this different from a character in a novel or a variable in a computer set to "happy"? I honestly don't know. The character in a book is completely static. Whatever is written in the pages, it isn't changing, it isn't taking input and producing output. But the rows of rocks are static too after all. They don't take any input from the outside world either. They change over a dimension of space, each new row one foot to the right of the last. As do the characters in a book. Each paragraph moves the characters forward through their own dimension of time. Maybe the process that creates the rows of rocks, the man following simple rules, is what makes it conscious. But the book is created by a writer following a process too; does that make it conscious?

The only thing I can think of is that the character in the book is far simpler than a human. Words that say "John is happy" are similar to setting a variable on your computer equal to "happy". Happiness in a human is far more complex, involving tons of interactions between neurons. But this answer isn't satisfactory to me. Complexity isn't what makes something intelligent or conscious after all.


> The only thing I can think of is that the character in the book is far simpler than a human. Words that say "John is happy" are similar to setting a variable on your computer equal to "happy". Happiness in a human is far more complex, involving tons of interactions between neurons. But this answer isn't satisfactory to me. Complexity isn't what makes something intelligent or conscious after all.

This is the difference between a running program and a short description of the program. The short description isn't runnable, and isn't running. The reason the program itself is doing what we want it to do is not because it's more complex than the short description (though it is), but because it's actually executing the algorithm. We could imagine more and more detailed descriptions of the program, specifying more and more closely how it works, and eventually it's so detailed that you could run the program from that description. At that point, the description is still not a running program, unless and until you run it. Being conscious is a process -- a running algorithm -- not a static description.
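
A toy way to see the distinction in code (nothing deep, just an illustration):

    # A description of a program is not the program running: the lines
    # below fully specify an algorithm, but nothing is computed until
    # they are actually executed.
    description = "\n".join([
        "state = 0",
        "for step in range(5):",
        "    state += step",
        "print('final state:', state)",
    ])

    print(description)   # reading the description: nothing happens
    exec(description)    # running it: now the process actually unfolds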


Well, I'm not really claiming the book is sentient, just trying to come up with a criterion that excludes books and everything else except people, without being completely arbitrary.

Anyways, you can think of time as a dimension. Each moment of time exists, connected to the ones that came before and after it. You can represent every state that a Turing machine goes through all at once, just like those rows of rocks. What does "running" even mean in a universe where all points in time can exist at once?
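
For instance, you can flatten a machine's entire run into one static object (a toy sketch; the transition rule is arbitrary):

    # "Every state at once": run a tiny machine and keep its whole
    # history as one static list. The finished trace is as inert as
    # rows of rocks in the desert.
    def machine(state):
        return (state * 2 + 1) % 17    # hypothetical transition rule

    trace = [3]                        # initial state
    for _ in range(10):
        trace.append(machine(trace[-1]))
    print(trace)   # the complete "spacetime" of the run, laid out statically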

But regardless of that, you can still say the book is "running" when it is being written or when it is being read. The characters don't just exist in the book, but literally in your mind. Though I still wouldn't call them conscious or even intelligent.


> What does "running" even mean in a universe where all points in time can exist at once?

If they all exist at once, then in what sense are they connected? Certainly not a causal sense, which I would argue matters quite a lot.

> The characters don't just exist in the book, but literally in your mind. Though I still wouldn't call them conscious or even intelligent.

If your mind was able to simulate the actual process of consciousness in the brain of the character being read about, then I would argue that the character could be conscious. Of course, brains don't have that much additional processing power. :)


"But the thing is, the exact same thing is true for your brain too!"

That is pure speculation. Just because we can observe similarities between two phenomena doesn't mean they are the same thing. You're erroneously assuming that the universe can only work in a way that fits our theories and models.


What are you suggesting? That the brain fundamentally is not a machine, a process of some kind? Everything we know about the universe could be wrong, but this is far, far, far from pure speculation. You might as well say it's pure speculation that gravity exists.


> Complexity isn't what makes something intelligent or conscious after all.

Why not?



This shows that you can't just say "complexity makes something intelligent and conscious" by itself; you have to explain how it works, and "complexity" isn't an explanation, it's just a label. I agree.

But that's equally true of saying "complexity doesn't make something intelligent and conscious". All right, then what does? "Not complex" is a label just as much as "complex" is. Either way you haven't explained anything. That's what I was trying to get at.

Also, saying "complexity doesn't make something intelligent/conscious" is not the same as saying "something intelligent/conscious doesn't have to be complex", which is what I think the statement I originally responded to, taken in context, was really intended to mean. Do you really think intelligence and consciousness can be present in simple things like rocks? Might there not be a reason that the human brain is three pounds of complex matter, not three pounds of jello?


Yes, you are right, it is likely that anything worthy of being considered conscious would be fairly complex. (Though I believe it is possible to create an intelligence that is initially fairly simple. AIXI, for example, is the ultimate intelligence and could be created in a few dozen lines of code. It would just take nearly infinite computing power to do anything interesting, and I wouldn't really consider it "conscious". Even the processes going on in your brain, at least the very basic function that makes us "intelligent", stripping away all the uninteresting additions, could probably be specified in a very small space. A lot of neurons may seem complicated, but you only need to understand the programming of a few, then just copy them a billion times. But all of this is beside the point.)

Anyways in the original context of where I said that, I was trying to come up with a satisfactory way to distinguish consciousness from non-consciousness. Complexity obviously isn't the way since lots of complex things are not conscious. It may be that all conscious things are also complex. But then complexity isn't the reason it's conscious, it just happens to correlate with it.

Setting a variable in a computer equal to "happy" obviously doesn't make the computer experience happiness. But then what possible sequence of commands and state changes would? If you accept that there is no possible sequence that would, then you have to accept that humans cannot experience consciousness either. Because for all we know, we are running in a computer too. Even if we are not, the process our brain follows easily could. The fact that it runs on atoms bouncing into each other and not electrons flowing through logic gates makes no difference.


> AIXI, for example, is the ultimate intelligence and could be created in a few dozen lines of code.

Really? Show your work, please.

> A lot of neurons may seem complicated, but you only need to understand the programming of a few, then just copy them a billion times.

This assumes that they are all "programmed" the same. Why do you think that must be the case? Also, you're leaving out all the chemical processes that contribute to brain function.

> I was trying to come up with a satisfactory way to distinguish consciousness from non-consciousness. Complexity obviously isn't the way since lots of complex things are not conscious.

True.

> It may be that all conscious things are also complex. But then complexity isn't the reason it's conscious, it just happens to correlate with it.

The connection might well be stronger than "just happens to correlate". Even if "complexity" by itself isn't the reason it's conscious, it might well be that consciousness requires a complex substrate.

> Setting a variable in a computer equal to "happy" obviously doesn't make the computer experience happiness. But then what possible sequence of commands and state changes would?

Um, a much more complex sequence of commands and state changes?

> If you accept that there is no possible sequence that would

I don't. I just think there is no simple sequence that would.


I really do believe that consciousness doesn't really have anything to do with intelligence. Even if humans are not simple. It's conceivable you could create something like a human with very simple machinery, just massively scaled up.

If that is too complex, you could create an even simpler algorithm which produces that. Maybe by creating a genetic algorithm which "evolves" human-like intelligence after billions of generations.

In both cases the end result may be complex, but the algorithm that creates it is not. Whether it be a network of billions of interconnected neurons that arise from simple programming of a few neurons, or a complex intelligence optimized by simple random mutation and selection.

This is the idea of emergence. That complex seeming behavior can "emerge" from simple rules and a simple process.
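
A genetic algorithm really is only a few lines. A minimal sketch, where the target and fitness function are toy stand-ins (nothing like evolving intelligence, but the shape of the algorithm is the same):

    # Evolve a bit-string toward a target by selection plus mutation.
    # The algorithm is tiny even though what GAs can eventually produce
    # may be arbitrarily complex.
    import random

    TARGET = [1] * 32                  # hypothetical goal

    def fitness(genome):
        return sum(g == t for g, t in zip(genome, TARGET))

    population = [[random.randint(0, 1) for _ in range(32)] for _ in range(50)]
    for generation in range(200):
        population.sort(key=fitness, reverse=True)
        survivors = population[:10]                     # selection
        population = [[bit if random.random() > 0.01 else 1 - bit   # mutation
                       for bit in random.choice(survivors)]
                      for _ in range(50)]
    print("best fitness:", fitness(max(population, key=fitness)))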


> I really do believe that consciousness doesn't really have anything to do with intelligence.

The problem is that both of these terms are really too vague. What do you mean by "consciousness"? Some people claim insects are conscious; some claim bacteria are conscious; some even claim electrons are conscious.

Also, what do you mean by "intelligence"? By some definitions, the process of evolution by natural selection is "intelligent".

If by these terms, you mean "the properties humans have that we call consciousness and intelligence", then why do you think there is no connection between them? Does your own intelligence really have nothing to do with your consciousness? Or vice versa?

> It's conceivable you could create something like a human with very simple machinery, just massively scaled up.

Conceivable, yes. Likely, I don't think so. But that's really a matter of opinion; it depends on how much of the complexity in the parts of the human brain (neurons, chemicals, etc.) is necessary for human consciousness or intelligence, and how much you think is just an artifact of the implementation, so to speak. We don't know enough about how the brain works yet to really judge.

> the end result may be complex, but the algorithm that creates it is not.

In other words, the complexity is still there.

> That complex seeming behavior can "emerge" from simple rules and a simple process.

Wait a minute--complex seeming? Or complex?

If you're trying to argue that the complexity may not be in the algorithm itself, but only in the actual working out of the algorithm in time and space, I'll buy that.

But if you're trying to argue that it isn't "really" complex because the algorithm is simple, I don't buy that. I never said the algorithm had to be complex; I just said there has to be complexity somewhere for there to be (human-level, to avoid all the definitional issues I raised above) consciousness and intelligence. That's perfectly consistent with the complexity ultimately being built out of simple parts--just a lot of them with a lot of interactions, so the complexity is in the interactions, not in the parts themselves.

(I think it's likely that the parts themselves have significant complexity too, as I said above, but even if that's true, there will still be some lower level, possibly much lower, where there are simple parts, just a huge, huge number of them with a lot of interactions.)


That was actually a typo. I meant to type "consciousness doesn't really have anything to do with complexity." In the sense that you could probably define an intelligent being like a human in very little space.

>In other words, the complexity is still there.

Well, yes, in a way. By the definitions used in information theory, complexity is just the number of bits you need to accurately describe something in some language. In common usage, complexity means roughly the same thing: the number of concepts you need to learn to understand something. But it can also mean the number of moving parts a machine has, so to speak, which could be very large while the machine remains simple enough to understand or write down on paper. Like a computer display, which has thousands of pixels, each of which is almost exactly the same.
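
That information-theoretic sense is easy to demonstrate: compression gives a crude upper bound on how many bits you need to describe something, and an object made of near-identical parts describes down to almost nothing. A toy sketch:

    import zlib

    # A "complicated" object with a million moving parts...
    display = bytes(1_000_000)         # a million identical pixels
    # ...whose description is tiny, because the generating rule is tiny.
    print(len(display), "->", len(zlib.compress(display)))

A million "moving parts", roughly a kilobyte of description.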

I don't know if you would call a computer display complicated, at least not much more so than the individual pixels that make it up. The same is probably true of humans. Even if humans are fairly complicated, someday a connectionist approach, or maybe a rough approximation of a human brain, might succeed at creating something arguably conscious.

>If you're trying to argue that the complexity may not be in the algorithm itself, but only in the actual working out of the algorithm in time and space, I'll buy that.

This may be a much better way of describing what I just said above. So yeah.


There is definitely a difference between a description and a complete description, and, as another commenter pointed out, between a description and a simulation that embodies that description.

Consciousness is a property that we ascribe to ourselves and other entities. We don't experience it in others the way we experience it in ourselves, of course, but we infer it from the data available from interacting with them. They produce a pattern of signals that resonates deeply enough with us that we recognize them as like us in having goals, feelings, and access to the "human experience." For example, one of the most convincing things we could observe an AI do would be to make a novel observation about life, derived from current circumstances, that strikes us as deep and insightful, perhaps because it was hovering at the edge of our own awareness, or because it immediately activates mental patterns that we did not activate but are also meaningfully connected to the situation at hand for us.

It makes sense that if we were to copy the workings of the brain in sufficient detail in a machine, the machine would also be ascribed consciousness by human users. Perhaps we could say that it is an "artificial brain" in a literal sense, performing normal brain functions the way an artificial heart pumps blood.

In the case of the universe, it's at issue whether there is such a thing as a "complete description" of the universe, whether a simulation could be done, and then whether this simulation actually is a universe in some sense. Unlike brains, we don't regularly encounter other universes (actual, physical ones), ones which we don't live in and experience from the inside, but which we are forced to conclude are true manifestations that correspond in every important detail to our own nevertheless. A complete description of the universe by a future physicist (say) would start with a short list of all the types of information that go into describing a particular universe, like a list of the types of particles and variables such as position and momentum. This list describes universes in general. Then we'd have to describe a particular universe, with a long, long, long list of all the individual particles and so on. Then we have to represent this information physically in some sort of computer, for the purpose of simulation, and perform the ongoing processing involved in running the simulation.
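
To make that two-level description concrete, a toy sketch (all names hypothetical, obviously):

    from dataclasses import dataclass

    # The schema: the kinds of information that describe any universe.
    @dataclass
    class Particle:
        kind: str
        position: tuple
        momentum: tuple

    # The data: one particular universe (two entries of the long,
    # long, long list shown).
    universe = [
        Particle("electron", (0.0, 0.0, 0.0), (0.1, 0.0, 0.0)),
        Particle("photon", (1.0, 2.0, 0.5), (0.0, 0.3, 0.0)),
    ]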

Finally, we must argue that the simulation is a universe. Not just deserving of being called one, or having the properties typically associated with one, but a true instance of some class that previously had the universe we live in as its single member.

One interesting feature of this line of thought is its seemingly infinite ambition. What about smaller goals we could set for mankind -- might we ever engineer an artificial star? How about an artificial galaxy? We can try to beef up the likelihood of this ever happening by considering not just mankind but any other "intelligent beings" the universe might happen to contain, in the past, present, and future. However, we are again on the shaky ground of inventing a class from a singleton. All extraterrestrial "intelligent beings" ever conceived are just stories with small, structured deviations from the human template; beings we identify with because they, too, have intentions and goals and act in their ultimate self-interest for the purpose of self-preservation and just trying to make a go of it in this big ol' universe. Some of them, it's presumed, make detailed copies of natural things at an astronomical scale and then talk about whether the old thing and the new thing are members of the same class of thing or not.

I agree that it's irrelevant to us and our operation if our brains are made of billiard balls or quantum clouds, and the same seems true of the universe.

It is still interesting whether the universe is "computable" or not, that is, whether something meeting the current definition of a computer could simulate it, even in theory. The part I object to from the OP is the implication we could draw conclusions about the origin of the universe from the prospect of being able to simulate one in theory. I think it's silly or meaningless to say we possibly -- or probably! -- live in an "artificial" universe.


>For example, one of the most convincing things we could observe an AI do would be to make a novel observation about life, derived from current circumstances, that strikes us as deep and insightful, perhaps because it was hovering at the edge of our own awareness, or because it immediately activates mental patterns that we did not activate but are also meaningfully connected to the situation at hand for us.

I think you've come much closer than I have to a definition of consciousness, but I still don't find it satisfactory. You could have a computer mimic a human without having anything resembling an inner "human experience". For example, a giant lookup table with a preprogrammed response to any possible input. We have created extremely simple versions of this that are almost effective enough to pass the Turing test, just by gathering a large number of human responses to chatbots. Though you could say the process that creates the giant lookup table is intelligent, the table itself obviously is not.
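
The lookup table is trivial to sketch (entries hypothetical), and whatever it is doing, it is plainly retrieval rather than experience:

    # The "giant lookup table" in miniature: canned responses keyed on
    # input, with no inner experience anywhere.
    responses = {
        "how are you?": "I'm doing well, thanks for asking!",
        "are you conscious?": "I often wonder about that myself.",
    }

    def reply(message):
        # No reasoning, no state, no experience, just retrieval.
        return responses.get(message.lower(), "Tell me more about that.")

    print(reply("Are you conscious?"))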

There are AIs like AIXI which could be vastly intelligent without reasonably being considered conscious. They don't actually "think" so much as brute-force every possible solution to a given problem, or every possible explanation for a series of inputs. Given enough computing power, this would actually be effective. You could then ask it to mimic a conscious being and it could do so easily (given either a definition of consciousness or a set of observations about other conscious beings). Maybe doing that requires the intelligence to actually simulate a conscious process somewhere along the line, so I don't know if that counts. It may be able to do so through mere imitation, without having to actually run a full simulation. Providing a "deep insight" about life is nowhere near as complicated as a full human consciousness.
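
A toy version of that brute-force style of induction, for flavor. Real AIXI enumerates all programs weighted by their length and is uncomputable; this sketch searches only a tiny hypothetical class of linear rules:

    # Enumerate every hypothesis in a small class, keep the ones
    # consistent with the observations so far, and predict with the
    # simplest survivor (a crude stand-in for a length prior).
    observations = [1, 3, 7, 15, 31]   # hypothetical sensor readings

    def consistent(a, b):
        return all(observations[i + 1] == a * observations[i] + b
                   for i in range(len(observations) - 1))

    hypotheses = [(a, b) for a in range(-5, 6) for b in range(-5, 6)
                  if consistent(a, b)]
    a, b = min(hypotheses, key=lambda h: abs(h[0]) + abs(h[1]))
    print("next prediction:", a * observations[-1] + b)   # prints 63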

Maybe there is no such thing as consciousness, in the sense that there is no simple way to define it that is fully consistent with everything we want it to mean; there will always be arbitrary-seeming exceptions and gray areas. But that has deep and disturbing implications for morality. Maybe it's not relevant to our day-to-day life, but if we ever want to program a friendly artificial intelligence, we are screwed.

>I think it's silly or meaningless to say we possibly -- or probably! -- live in an "artificial" universe.

Well, assuming there are more simulated beings in all of existence, whatever that means, than non-simulated ones, it is very likely we are simulations. Though it's impossible to know what exists beyond our own universe. What is the distribution of universes? Do 50% of universes allow for infinite computing power, for example? What programming language do they use? That would determine how many bits a given universe takes to specify in that language, and therefore how often it gets simulated. How many universes even have anything close to intelligent beings?
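
For what it's worth, there is a standard way to make the bits-to-specify idea precise: a Solomonoff-style length prior, under which a universe that takes l bits to write down in the chosen language gets weight

    P(U) proportional to 2^(-l(U))

so each extra bit of description halves how often a universe gets counted (or simulated).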

An infinitely powerful computer would allow you to simulate every possible universe all at once, instantly. (You would also be able to simulate every possible branch of non-deterministic universes, continuous variables, time travel, and other things that people claim make the universe "uncomputable".) And you could instantly do it an infinite number of times. If it were possible for such a thing to exist in a universe, that universe would contain every other possible universe, including copies of itself and other universes with infinite computing power, which would contain more universes, and so on.
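
Worth noting that the "all at once" part doesn't even need infinite power in principle: a single machine can dovetail every program, interleaving their steps so each one eventually gets unboundedly many. A toy sketch of the scheduling (the "programs" here are just counters standing in for simulations):

    import itertools

    def make_program(i):
        # Stand-in for "universe simulation number i".
        def steps():
            state = 0
            while True:
                state += i
                yield state
        return steps()

    programs = {}
    for round_num in itertools.count(1):
        programs[round_num] = make_program(round_num)  # start one more program
        for prog in programs.values():
            next(prog)                                 # one step for each so far
        if round_num == 5:                             # demo cutoff
            break
    print("programs running:", len(programs))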

(A very interesting short story that explores the implications of this: http://qntm.org/responsibility)

So what is the top-level universe? Why does something exist rather than nothing? Perhaps everything that can exist does exist. Maybe every computable program has been (is being?) run, including one that specifies our universe. That's the simplest possible explanation for the existence of the universe. Though I cannot wrap my head around why something exists rather than nothing. None of this has any practical implications for our daily life, and the answer is probably unknowable, but it bothers me.

Sorry for going off on such a tangent. I hope this is comprehensible, as I know philosophical writings rarely are.


Consciousness and intelligence: You're right that proving something is "conscious" does seem difficult or impossible. Making something that is merely "intelligent," on the other hand, may be easy, depending on how you define the word. I think that's because consciousness is something we experience in ourselves, looking inwards, marveling at what it means that we are a mind living in a body, while intelligence is something we ascribe to other things in varying degrees if they exhibit certain behaviors.

It follows that for us to consider something else to be conscious, we have to have that same feeling as when we experience our own consciousness, but via identification with something else (a robot or AI). We have to be like, "Wow, consciousness," and then be like, "Oh, that's not me, that's him!" It's the same way for intelligence, but it's a much broader term because there are many kinds of intelligence. Animals exhibit a lot of intelligence, as do some computer systems.

In both cases (consciousness and intelligence), I think making systems that we recognize as more intelligent and more conscious involves understanding the brain and replicating its various functionality (since it's brains that will be making the call; it's brains that encode the distinction; the rest of the universe doesn't care). As we learn more about consciousness, we may be able to identify it as a particular kind of intelligence. For example, it may basically be the small part of our brain that directs our attention and awareness from moment to moment and decides what computational tasks to take on, while the rest of the brain is basically a large set of elaborate coprocessors specialized for different types of computation.

Simulated universes: I still think this is all meaningless metaphor. It's not easy to articulate why, but I'll try.

Some lines of thought just combine ideas in deep ways, but don't actually tell you anything new. For example, Zeno's paradox -- to get from point A to point B, you have to first go halfway, then halfway again, and so on forever -- doesn't actually teach us that nothing can ever go from point A to point B. It just demonstrates a glitch in the metaphors we use to reason about time and space. Actually, it's a glitch in the metaphor of traversing space as achieving a goal. Normally, if reaching a particular goal involves an infinite to-do list of subgoals, we wouldn't expect to ever get to the end and actually achieve the goal. However, if the goal is traversing some infinitely subdividable interval of space, we are provided with a mechanism to generate an infinite list of subgoals that are presumably all involved in achieving the larger goal. It's all just mental gymnastics and the normal tools of intuition breaking down.

To take a closer example, consider this line of thinking: "If you pick a random marriage proposal, chances are it is an imagined one rather than an actual one, because for every actual marriage proposal there are on average two or more imagined ones that never came to pass." This strange kind of thinking also mashes together diverse intuitive concepts. There's the idea that every class of thing (like "marriage proposal") has an extension set, an ensemble of instances. There's the fiction that an "event" is a discrete thing that we could identify and count if we had to, like counting the number of thoughts I've had today ("a thought" is a singular noun after all). There's the concept from probability theory of picking a member at random from a set (with the presumption that this is a well-defined act). There's the idea that an imagined X is still an X, because we still call it one. Unicorns are still unicorns even if they don't happen to exist.

The most intellectual, abstract part of the brain has a tendency to focus on what could possibly be rather than what is. We can get so wrapped up in generalities that we forget that this is the universe -- what we see around us, not what we imagine or are capable of conceiving of. Statements like, "Perhaps everything that can exist does exist" sound to me like a projection of our own models of reality out onto reality. Similarly, it's fascinating that we can conceptualize and talk about infinite computing power, but that doesn't mean there is any reality to it.


The quote is a somewhat poor rephrasing of a more solid argument from this paper:

http://www.simulation-argument.com/simulation.pdf

Anyway, your objection here isn't actually relevant as far as I can tell. It's a probabilistic argument about the number of real minds vs. the number of simulated minds, and the bulk of the "proof" is in demonstrating that, under certain assumptions, the number of simulated minds would be so high in comparison to the number of real minds that you'd almost be forced to conclude that you are simulated.
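
If I remember the paper right, the whole thing hangs on one fraction, the share of all human-type experiences that are simulated:

    f_sim = (f_P * N * H) / (f_P * N * H + H) = (f_P * N) / (f_P * N + 1)

where f_P is the fraction of human-level civilizations that reach a posthuman stage, N is the average number of ancestor-simulations such a civilization runs, and H is the average number of individuals who lived before that stage. H cancels out, and if f_P * N is huge, f_sim is driven toward 1, which is the "almost forced to conclude you are simulated" branch.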

It really doesn't have anything to do with "universes inside universes."


...and the main flaw of this still remains: we don't know the probability of a very advanced civilization running a simulation, because we can have no idea whether this would be interesting for them to do; we can't predict what their goals and motivations would be. Our great-great-...-great-grandchildren will not just be super-smart versions of us; they will have concepts and goals we cannot even imagine! A simulation of us, depressing as it may be to think of it this way, would be about as interesting to them as an ant farm is to a little boy: he gets bored of it quickly, because he is as far away, evolutionarily speaking, from ants as these super-intelligent beings would be from us. And that wouldn't make for a large number of ant farms, simply because ant farms would be boring and useless...


Did you read the paper? It takes that all into account. This is one of its three possible conclusions, as stated in the abstract.


My bad. I actually read it some time ago, after someone sent me a "summary" of the reasoning that omitted this variant, but somehow the "summary" stuck in my head rather than the actual paper. Indeed, Nick Bostrom thought everything through, including this line of thought.


The objection is totally relevant, because you cannot calculate probabilities if you don't know the number of real universes, the probability of life arising from non-life, etc. I would argue, using that same line of reasoning, that it is more likely we are the dream of a sentient entity than a simulation run by one, if his probabilistic assumptions are taken as probable.


Dreams only last a few hours, so that's not really a possibility. If you are proposing we are the "dreams" of a sentient and extremely advanced machine, then that is hardly different from saying that we are just simulated.

And I'm pretty sure the numbers you mentioned--the number of real universes, the probability of life arising from non-life--really don't come into it at all. Skimming over the paper, none of that stuff seems to come up. Again, this isn't about universes inside universes (the news article phrased the argument poorly).


I'm not talking about universes within universes. Let's take our own universe as an example, and assume that probability as we know it here also holds in whatever 'real' universe(s) may or may not be simulating themselves or some subset of themselves. If sentient beings evolve (taking Bostrom's postulate of that probability as reasonably high), and assuming they 'dream' if they are anything like us, as he does, then since we have yet to develop a technology that can simulate a universe as complex as ours, it's far more probable that sentients evolve and dream universes than evolve and simulate us. Even in our own "existence" (whatever that means), you have to agree that there have been far more dreams than games played. This follows probabilistically: not all people play games, but almost every human that has ever lived has dreamed. Therefore it is far more probable that we are a dream, or a passing thought for that matter, than an intended simulation. Hinduism wins.





