
What about AI software running on silicon chips? Soon it will reach levels of complexity vastly exceeding any human brain. To these systems we will look like bugs, or maybe even just cells - they might not even classify us as being alive, let alone intelligent.

“Soon it will reach levels of complexity vastly exceeding any human brain.”

I doubt the “soon” part. Artificial neurons are vast simplifications of real neurons, and even complex networks like GPT don’t come close to the complexity of biological networks, both in network structure and in what goes on between the neurons (e.g. chemical processes, as opposed to just the activity of the neurons themselves). Individual neurons have been shown to have the capability to both process and store information, something artificial ones don’t do. Besides, we are only scratching the surface in our understanding of biological neurons and brains. How can we say “this thing we built that is a vast simplification of this thing we barely understand will soon exceed the complexity of the thing we don’t yet understand”?
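
To make the “vast simplification” concrete, here is essentially the entire model of a neuron that networks like GPT are built from: a weighted sum plus a fixed nonlinearity. A plain NumPy sketch (the function name is just illustrative):

    import numpy as np

    # A standard artificial neuron: weighted sum of inputs plus a bias,
    # passed through a fixed nonlinearity (ReLU here). That's the whole model.
    def artificial_neuron(x, w, b):
        return np.maximum(0.0, np.dot(w, x) + b)

    x = np.array([0.5, -1.2, 0.3])   # incoming signals reduced to floats
    w = np.array([0.8, 0.1, -0.4])   # synapses reduced to scalar weights
    print(artificial_neuron(x, w, b=0.05))  # -> 0.21

No dendritic computation, no neurotransmitters, no internal state: everything a biological neuron does is compressed into a handful of multiply-adds.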


Because it’s already able to do almost everything a brain can? State-of-the-art AI models can already learn, reason, communicate, and even create - better than most humans. Using far fewer neurons, and much simpler neuronal structures.

All trends indicate we’re only a couple years away from an AI superintelligence. No additional understanding of biological brains is required to get there.


I remember similar arguments being made since the 1980s... 40 years and a lot of stall-outs later...

It could be hubris to assume we know enough yet to replicate our 'kind' of intelligence. Just as it might be hubris to assert AI doesn't have some 'kind' of experience. A minimum requirement (one that will be raised once we have a better understanding) is still a change of state in order to have an experience; we simply don't know what else is required, so that minimum threshold remains to be raised.


This argument could not be made before March 14, 2023, when the first actual AI was released (GPT-4). I remember that day very well because I had been extensively testing every GPT model before that (part of my job as an AI researcher). The entire history of human civilization can be separated into before and after that milestone.

We do not need to "replicate" human intelligence. It's enough to "simulate" it. The coming AI models will be entirely "alien" types of intelligence to us, and that's OK as long as they are useful. Most likely these AI models will be able to finally explain to us how our own brains work.


In my opinion, he is more notable for inventing the variational autoencoder.
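
(For anyone unfamiliar: a variational autoencoder learns an encoder/decoder pair by maximizing the ELBO, a lower bound on the data likelihood. A rough NumPy sketch of that objective, assuming a Gaussian encoder and a standard-normal prior; all names here are illustrative:)

    import numpy as np

    # ELBO for a VAE with Gaussian encoder q(z|x) = N(mu, exp(log_var))
    # and standard-normal prior p(z) = N(0, I).
    def elbo(x, x_recon, mu, log_var):
        recon = -np.sum((x - x_recon) ** 2)  # stands in for log p(x|z)
        # Closed-form KL divergence between q(z|x) and N(0, I).
        kl = -0.5 * np.sum(1.0 + log_var - mu ** 2 - np.exp(log_var))
        return recon - kl  # maximize this lower bound on log p(x)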

It depends on what would actually change.

would be… mostly office suite, with a little LLM inference sprinkled in.

No, it would be LLM inference with a little bit of an office suite sprinkled in.


Very true. Once you're senior enough, a couple of hours of focused effort is enough to make decent (visible) progress.

Is GPT-4 not a “big thing in AI”?

It's extremely spicy autocomplete, and it burns astonishing amounts of natural resources to reach even that lofty peak.

I have little doubt that even when we have superintelligent AI solving scientific problems far beyond human ability, it will still be dismissed as extra spicy autocomplete.

Sam’s ambitions are way beyond “salary”.

It's a quote from Upton Sinclair, from an era when you generally had to have a profitable business and employees before you had investors.

There were many ventures in the past that got investors without a currently profitable going concern: oil & mining speculation, chartering boat crews to go on exploration expeditions, and more.

Indeed, public investment was born from large projects where any profit would be many years off.

I know there's this whole joke about being pre-revenue on the Silicon Valley TV show, but getting investors in order to build a business that only later becomes profitable goes back a long time. Like a really long time.

Replace “salary” with the thing he wants.

Maybe he wants to make the world a better place?

Maybe :)

You can post an ad at your local uni ECE department and hope some grad student has enough FPGA experience to handle a “simple” project.

Totally. But I fear that my local grad students are too pricey for me!

Do you know how little they pay TAs?

Hahaha. That's actually my primary comparison. We're at $45K + $10K overhead these days...

I agree. Though it's good to have options for GPU-accelerated NumPy, especially if Google decides to discontinue JAX at some point.
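
For example, CuPy mirrors the NumPy API almost call for call, so the CPU/GPU switch is mostly an import change. A sketch assuming CuPy and a CUDA GPU are installed (jax.numpy offers a similar drop-in interface):

    import numpy as np
    import cupy as cp  # GPU-accelerated, NumPy-compatible array library

    x_cpu = np.random.rand(1024, 1024)
    x_gpu = cp.asarray(x_cpu)                        # copy the array to the GPU
    y_gpu = cp.linalg.norm(x_gpu @ x_gpu.T, axis=1)  # same call as np.linalg.norm
    y_cpu = cp.asnumpy(y_gpu)                        # copy the result back to the host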

Exactly. o1 is just a minor GPT-4 tweak before Orion.

