What do you mean time is irreversible? Your movement in it is irreversible, just like your movement on a one-way road is irreversible in space. Time itself does not possess the property of irreversibility, at least in modern physics. All the equations describing natural phenomena can work both ways, forwards and backwards in time.
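To make that concrete (a textbook illustration, not something from the parent comment): Newton's second law for a particle in a potential,

    m x''(t) = -V'(x(t))

is unchanged under the substitution t -> -t, because the second derivative picks up a factor of (-1)^2 = 1. So if x(t) is a solution, x(-t) is too: the same trajectory run backwards obeys the same law.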
Maybe this analogy will be helpful for you. We have a CPU with mutable RAM. You can build an immutable language on top of it and start arguing that you can move back and forth through mutations in RAM. But that is a feature of the abstraction you applied, not a core feature of "reality".
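Here is a minimal sketch of that analogy (Python; all names are made up for illustration): an "immutable" interface built on top of mutable storage.

    # An "immutable" store layered over Python's mutable dict,
    # which stands in here for RAM.
    class Store:
        def __init__(self, cells=None):
            self._cells = dict(cells or {})  # mutable storage underneath

        def set(self, addr, value):
            # No mutation at this level: "writing" returns a new version.
            new_cells = dict(self._cells)
            new_cells[addr] = value
            return Store(new_cells)

        def get(self, addr):
            return self._cells.get(addr)

    v0 = Store()
    v1 = v0.set(0x10, 42)
    v2 = v1.set(0x10, 7)
    assert v1.get(0x10) == 42  # older versions are still intact

At this level you can "move back in time" to v1 or v0 whenever you like, but only because the abstraction keeps old snapshots around; the machine executing it is still destructively overwriting RAM cells.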
This is what distinguishes a model from reality. You're talking about an abstract concept that helps you measure mutations, but, as I tried to explain above, it is not a real thing. It's just a working, useful idea.
Plenty of apps are available in English outside the anglosphere. Seems like a US-centric oversight not releasing it to “the rest of the world” as usual.
How do we know which institutions to trust in safely deploying AGI? From my POV some kind of autonomous and generalized intelligence is inevitable and imminent - but who can be trusted to deliver something that works for the majority? Or is that a pipe dream?
I don't really understand the question. It's like saying "who can be trusted to deploy computers?". Everyone with the resources is going to do so, whether you "trust" them or not.
It's not a concern for any currently living generation, so any answers are moot: the landscape of friction between corporations, governments, and the people will likely have shifted so dramatically that our opinions today will have no relevance to their issues.
> It's not a concern for any currently living generation
What exactly is impossible to implement, if some implementations of so-called artificial intelligence can already do so many useful things?
Don't you believe that an AI could take just 1% of human jobs and become a billionaire with significant impact on world politics? It wouldn't need many additions to existing implementations; just give it a human's rights, such as a bank account and the ability to buy businesses.
> It's not a concern for any currently living generation
How much would you be willing to bet? I understand the skepticism, but to assign 0% probability to it happening in our lifetimes seems excessively low.
AGI slightly exceeds humans, but the systems are actually kind of shitty in all sorts of annoying and hard-to-predict ways. They turn out to be fantastic slackers and liars. Your voters, to put it mildly, don't like them. They're hard to monetize, and we all agree we should focus our efforts on something else.
>How much would you be willing to bet? I understand the skepticism, but to assign 0% probability to it happening in our lifetimes seems excessively low.
Not GP, but how much you got?
AGI (or hard AI, or whatever you want to call it) strongly implies not just reasoning and interaction with the environment, but self awareness. Something which is conveniently ignored by folks who claim that AGI is just around the corner, and welcome their new 'grey goo' overlords.
As Heinlein put it[0] (it's fiction, of course, but IMHO the principle holds that self-awareness -- not (just) the number of neurons/data points -- is necessary for AGI):
"Am not going to argue whether a machine can 'really' be alive, 'really' be self-aware. Is a virus self-aware? Nyet. How about oyster? I doubt it. A cat? Almost certainly. A human? Don't know about you, tovarishch, but I am. Somewhere along evolutionary chain from macromolecule to human brain self-awareness crept in. Psychologists assert it happens automatically whenever a brain acquires certain very high number of associational paths. Can't see it matters whether paths are protein or platinum. ('Soul?' Does a dog have a soul? How about cockroach?)"
As we've seen[1], a variety of meat machines (i.e., animals like us) have varying levels of self awareness. Without that trait, AGI won't be achievable.
Without the ability to recognize and incorporate the concept that one is an entity with existence separate from the rest of the world, there is no real awareness or consciousness.
I'd even go so far as to posit that until human children are able to understand object permanence, and to grasp that their mental states aren't globally available to everyone, they don't meet the standard of "self-awareness."
That's a hard problem, and while we have some conceptual ideas about how that might arise, we have no mechanism or even a foundation for inculcating such a trait into the algorithms folks call "AI".
Until that problem is solved, there will be no AGI. Full stop. And I find it unlikely in the extreme that we will gain the scientific/engineering know-how to make that happen in our lifetimes.
If an AGI is possible at all, it's not going to be something that you control. The issue isn't whether someone trustworthy creates it. The issue is, no matter who creates it, it will decide what it's going to do. Its creator will not have real control.
So having it "work for the majority" isn't so much a pipe dream, it's more of a roll of the dice, with completely unknown odds.
It's hard to talk about this and clearly convey all meaning.
If you're saying that it will decide for itself because neural nets are black boxes that we don't completely understand, and we lack a clear way to analyze their behavior, I can see where you're coming from.
But these things will not be beyond our influence. They're going to be slaves to the computations encoded in the neural net connections/weights. We're going to shape and mold them through a process akin to natural selection. We're going to select for intelligences that want to help humanity. It's not going to be the roll of a fair die; it'll be more like the roll of a weighted one. And I believe we're going to get better tools and theories for understanding the output of these neural nets, so we will be able to conduct this selection with some confidence.
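As a toy illustration of the weighted-dice idea (a sketch only; the fitness function below is a hypothetical stand-in for the genuinely hard part, actually measuring "wants to help humanity"):

    import random

    def fitness(weights):
        # Hypothetical stand-in for scoring how "helpful" a model is.
        return -sum(w * w for w in weights)

    # Random initialization: the dice get rolled...
    population = [[random.gauss(0, 1) for _ in range(8)] for _ in range(32)]

    for generation in range(100):
        population.sort(key=fitness, reverse=True)
        parents = population[:8]  # ...but selection weights the outcome.
        population = [
            [w + random.gauss(0, 0.1) for w in parent]
            for parent in parents
            for _ in range(4)
        ]

The randomness never goes away, which is the grandparent's point; the claim here is only that repeated selection biases where the population ends up.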
Humans can be said to decide what they're going to do thanks to an uncaring, unconscious, and brutal evolutionary process that prioritized self-interest, survival, and reproduction. It's all about the selection process, and this time around we have a hand in guiding it.
Will an AGI have the ability to decide for itself? If so, then how can you make it what you want it to be (with any certainty)? And if not, then how is it "general"?
To me, it's kind of like raising kids. You try to train their neural nets to bias them toward doing what you think is good and right. And that sometimes works. Yes, I think it's fair to think of it as biasing the dice. But it's sure not 100%. They'll still decide which of your values they keep, and which ones they throw away as being stupid. And you can't stop them from doing that.
I guess, to try to respond to your direct point, that if it's an AGI, then it's less deterministically driven by the training data than we might wish.
The real danger in the next few years will be the extreme operating speed of the fairly general-purpose AIs we already have, such as GPT. This will increase to dozens of times faster than what humans are capable of. That creates a very strong kind of leverage for operators and an incentive to remove slow humans from the loop. Overall, the transfer of control to these types of systems may leave humans with a precarious lack of real agency and of the capability to adapt fast enough, as well as danger from things like smart computer viruses taking over the systems.
I think the only way to be relatively safe from those issues is to limit the hardware performance.
I don't want to be too human-centric, but to be completely honest, we haven't seen the slightest proof that human intelligence is not something special. I know lots of animals are pretty clever, but none approach us in any practical sense.
While it looks like an evolutionary fluke that could be matched or even exceeded by other species, either on this or another planet, in the blink of an eye, I think that's actually more speculative than we would care to admit.
We don't know. Maybe human intelligence is a very close approximation of cognition's equivalent of physics' speed of light. Increasing it may turn out to be prohibitively expensive. There's plenty of precedent for animals having acquired features close to, or actually at, the physical maximum of whatever it is they are optimizing for.
To be clear, I'm not convinced of anything either way, but I think it would be as fantastic as it would be slightly depressing to find out that human intelligence actually is some kind of global maximum, with some exceptions like machines using energy harvested from black hole systems or something.
There is a limit to the compression of human-relevant information, which is largely what intelligence is.
The main thing I am talking about is speed of output. You can already see huge increases, say from the old GPT-3.5 to GPT-3.5-turbo, or from the old GPT-4 to the new one.
We know for a fact that hardware inference speed can be increased by using faster (currently prohibitively expensive) memory or by packing more onto a chip. There are designs for new memory-based computing paradigms.
It's already clear that AI is superintelligent in certain domains or aspects, such as the ability to exchange information with other agents.
Computer hardware efficiency has relentlessly increased. It would be a total break with history if it suddenly stopped.
Well, for one, I see no competition. I don't know what the technical definition of "special" is, but I'd say being the only one counts for something.
> In what sense would you consider it practical enough? Have you heard about Koko? What do you think about corvids?
I know both, and I know this is a slippery slope. You should know my love for animals runs deep, but I really struggle to put them in the same league as us.
I took a shortcut with saying "practical", because this discussion is way too deep to be performed A) by me and B) on HN. Practical means something like, can they adapt their skills as widely as we can? Can they adapt to uncommon situations? Not subtly or in theory, like solving some puzzle, but really practically? There is nothing subtle about a human becoming a parkour world champignon (I'm leaving this in, just too good) or adapting to life in a submarine (or learning chess, or whittling, or making tea, and literally millions more examples).
Maybe I am overlooking something, but the skills these animals show seem really minor compared to what even disadvantaged humans are capable of.
I appreciate the amount of intelligence you put into the message; it is interesting to read and to think about. But the style of your reasoning gives me some hints of creationism, so let me show you an anti-creationist point of view.
> Practical means something like, can they adapt their skills as widely as we can?
The most crucial difference between us and Koko (in my opinion; I have not been introduced to any more crucial one) is that we can hold our breath and gorillas cannot. That led us to develop speech, in the sense that a speechless group of apes cannot beat an otherwise identical group of apes with a more developed communicative ability. This, and probably nothing more, has led to such a large gap between humans and apes, so large that humans have ceased to see the relationship between themselves and apes.
I see your understanding of "practical" as something specialized, like the agricultural revolution. But why should a gorilla start planting food if it knows that, for lack of a common language, nobody is going to protect its crops while it sleeps?
> Can they adapt to uncommon situations?
What can be more uncommon than living in the trees without a warm house, and typically without any house at all; without regular nutrition; with a lot of really different enemies, from tiny insects to giant cats; with regular fights; with no democracy, law, or medicine?
Being disadvantaged means facing some uncommon situations every day; what about office managers? Disadvantaged people (if they are merely poor, not disabled and on welfare) could easily survive a nuclear war, because most of them are fine living a lifestyle similar to a gorilla's; but I cannot believe that most average Joes would survive a situation where their money becomes worthless for lack of civilization.
Oh my, I have seldom been accused of creationism. To be clear, I can separate the ability from the creature. I don't have a religious or other attachment to the human form specifically. Other than, to be completely honest here, being one.
Let's just get that out of the way. What I am "claiming", which would be an exaggeration because I'm sort of exploring here, is that whatever human cognition is may be an optimal or near-optimal state of cognitive ability.
So, to be fair, give Koko a few million years and some evolutionary pressure and I'm sure she'll join us, and I'd be happy to have her on our team.
Your point about our ability to hold our breath and how it led to our increasing dominance is fascinating. I have to say I am not completely sold on the idea that holding your breath is the only way to develop proper channels of communication, for I can easily imagine some sort of physical signaling standing in for at least parts of it. That said, I can appreciate the immediate and overwhelming advantage of speech.
This does stimulate my curiosity about what came first here, speech or cognitive ability? Why did "we" even consider speaking? How does one do that without having the cognitive architecture for recognizing its value in the first place? In other words, was "us" being smarter the catalyst for speaking or was it the other way around? Fascinating and I am way too much of an amateur to say anything more of value on it.
I will, however, continue to do so anyway, because that is my sacred duty as a dedicated HN'er and all-round developer douchebag.
> What can be more uncommon than living in the trees without a warm house, and typically without any house at all; without regular nutrition; with a lot of really different enemies, from tiny insects to giant cats; with regular fights; with no democracy, law, or medicine?
I might be in danger of being too blunt here, but this is the bar you have to clear if you wish to survive. This is exactly what humans are capable of, even in their "undeveloped" form. These sorts of pressures might be foundational to our evolution, but then again, every animal has to deal with them in some way or another, so I'm not sure what made us take what I can only call the excessively cerebral path. Maybe it was like the evolution of the peacock's tail? A runaway process, leading to miraculous but exorbitant results like the mantis shrimp's eyes.
What I mean by uncommon is: can we coach you to pick cotton, whittle little wooden sculptures, play a game like checkers, and sing simple songs, or whatever else is appropriate for your particular physical form and has virtually no bearing on your immediate survival? I know this is a hard thing to pin down, because one can come up with myriad examples of varying levels of persuasive power, but surely you perceive some differences here, even if they are hard to lock onto? Differences that cannot just be attributed to language or lack of proper motivation.
It's not so much any particular thing we can do that's piquing my interest, but the sheer breadth of things we are capable of taking on, both physical (parkour, gymnastics) and cerebral (chess, math). I didn't even get to art, which is like a whole world of its own, and the various combinations of all those domains.
> This does stimulate my curiosity about what came first here, speech or cognitive ability?
This is the question I thought about all evening before I fell asleep. I have two ways to answer it.
1. Let's take the familiar feline and canine. All my friends who spend a lot of time with animals will call dogs smarter than cats, but why?
Dogs have a more developed communication system: they have more varieties of barking than cats have varieties of meowing. Dogs are playful; they know how to smile, they know how to feel guilty and actively show it, and they are capable of joint activities under a person's supervision. As for things most dogs can't do, cats have only one: chasing prey without visual or scent contact, purely by sound (but even arctic foxes can do that).
Conclusion: the level of communication correlates with the level of intelligence.
2. Let's take the most primitive organism, the prokaryote (sorry for not naming a precise species; let's consider some abstract prokaryote, with the requirement that it be the simplest). Google tells us:
> All organisms, from the prokaryotes to the most complex eukaryotes can sense and respond to environmental stimuli.
But Wikipedia also tells us that prokaryotes are able to exchange some information using DNA:
> These are (1) bacterial virus (bacteriophage)-mediated transduction, (2) plasmid-mediated conjugation, and (3) natural transformation.
These two examples make me confident in the opinion that "communication" and "cognition" are two different words describing the same idea from two different points of view.
Even when she reported her toothache? If that situation was not fabricated (why would you fabricate your life's work, and how would you do it unnoticed?), it meets the definition of intelligence.
What are your doubts? Are they based on some data?
I hate post-2022 internet discussions, because one move is really common nowadays: claiming that statements you don't like were written by a bot.
This accusation takes a ridiculously small number of characters, and it is impossible even to react to this kind of assumption, because answering a troll with significantly more characters is the definition of feeding the troll.
By the way, the message you are referring to consists of more than one statement, and I cannot even guess which one is bothering you.
Don't worry, it's been happening to people since 2017 as a way to disregard real people disagreeing with their horrible political fads; now it's just 2022 and everything is a bot. Let's not forget: on the internet, no one knows you're a dog.
Satire is compatible with most layers of pg's pyramid except the top one. But his comment is no higher than "responding to tone": I know that GPT detectors tend to falsely flag non-fluent human text as GPT-generated, which is my case.
So not only was I unable to see any satire in his comment, I still am. But it was not useless: now I can answer that kind of trolling with this comment, which is upvoted despite sitting in a [flagged] branch.
It's clear that we've reached a point in online discussions where the lines between human and machine are blurred, especially when conversing on complex topics like AI and blockchain. My comment wasn't intended to reduce your argument's credibility but rather to highlight this fascinating phenomenon.
That said, returning to the original topic, I believe that trust in government is multifaceted and not easily boiled down to "good guys" versus "bad guys." Furthermore, while blockchain-based systems are designed to resist central control, it's a mistake to think that governments are incapable of influencing or regulating these technologies.
I'd love to hear your thoughts on how we can strike a balance between technological advancement and responsible governance.
I am in a funny situation, because I am arguing with an actual bot.
But wait a minute. The line between human and machine might be blurred when you are talking to some customer support specialist, but it is never blurred when discussing the sciences (math, physics, programming, and of course blockchain).
As for the good-guys-vs-bad-guys issue from my comment: your GPT-4 correctly detected the sarcasm, and did so before a human actor did, which tells me that the line between human and machine is somewhat blurred indeed.
> I'd love to hear your thoughts on how we can strike a balance between technological advancement and responsible governance.
I'd love the same; that's why I threw in the "good guys" point. I also really believe that some kind of blockchain-powered AI actor will be a game changer with totally unpredictable outcomes, because it will be the biggest revolution in the balance of power since nuclear bombs.
(Shameless plug) I recently started my own newsletter to help a non-technical audience understand the big picture trends driving AI and AGI: https://newsletter.envisioning.io
> The main difference between the prototype shown today and what will be going to the moon is that the ones going to the moon will be white instead of dark. “That’s really for thermal reasons,” Mr. Ralston said.
We're an independent, distributed research institute focused on tracking advancement in emerging technology. Envisioning began as a series of speculative infographics about the future in 2011, and we have since developed visualization tools in d3.js, a methodology for scouting emerging tech, and our own backend for tracking this changing data.
Today we are launching a platform for tracking tech around urban innovation [linked], meant to help public officials make better decisions about technology and promote more democratic futures.