Hacker News

I'll tell you my vision, which is kinda long term - like 20-30 years from now. I think the future is that everyone will have their own personalized AI assistant on their phone. The internet as it is will be mostly useless because only robots will be able to wade through the generated shit ocean, and the next generation will see it the way the current one sees TV: unimportant, old, boring, low-entropy data that isn't entertaining anymore.

There will be a paradigm shift where our-customers-are-AI apps appear, and most stuff will need to have an API so that an AI can connect to and use those services effortlessly and without error - because who wouldn't want to tell their assistant to "send $5 to Jill for the pizza"? There will be money in the base AI models you can choose (subscribe to) and in what they can and cannot do for you. It will still be riddled with ads, and now it's your personal assistant who can push any agenda on you.

Operating systems will become a layer under the assistant AI.

You will still talk on/to your phone.

I guess free software will be more important than ever.

Free AI assistants will be available, and the computing power will be there to run them on your phone, but all the fights we've seen with open vs closed source, Linux vs Windows, walled gardens and whatnot will go another round, this time with free, open, public-training-data assistants vs closed-but-oh-so-less-clumsy ones.

Security problems will be plenty: how do you hide your AI assistant's custom fingerprint? What do authentication and authorization look like for AI? How much is someone else's personal assistant worth? How do you steal or defend one?




An LLM/AI companion based on your ledger, that grows and evolves with you, acting as a sort of Socratic daimon, a psychological twin, assistant, advisor, and sparring partner. Like a journal that talks back and thinks with you. It'd need to be only personally accessible and safe/private. Biometric access and local running could solve this.


This is the dream. That said - gen AI on its own won't get us to helpers/daemons.

Right now, if you want to build something like a daemon you will need multiple agents. When a task needs to be delegated, the central control system needs to be able to spin up a few processes and get them going.

You can do this right now. You can create an ensemble of personalities/roles (jr dev, sr dev, project manager) and have them plan.

They do a good job if you monitor them. You can break the plan into chunks, spin up more instances, distribute the chunks, and have those instances work on them.
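The control loop described above can be sketched in a few lines. This is a minimal sketch only: `ask_model` is a hypothetical placeholder for a real LLM API call, and the plan-parsing step is stubbed out, since the point is the controller/worker structure, not any particular model.

```python
from dataclasses import dataclass

def ask_model(role: str, prompt: str) -> str:
    """Hypothetical stand-in for a real LLM API call."""
    return f"[{role}] response to: {prompt}"

@dataclass
class Agent:
    role: str

    def run(self, task: str) -> str:
        return ask_model(self.role, task)

def plan_and_delegate(goal: str) -> list[str]:
    # Central controller: a "project manager" persona produces a plan.
    manager = Agent("project manager")
    plan = manager.run(f"break '{goal}' into steps")

    # Stand-in for parsing the plan into chunks (real output would be
    # parsed here - and, per the thread, would need verification).
    chunks = [f"step {i} of: {goal}" for i in range(3)]

    # Spin up worker instances and distribute chunks round-robin.
    workers = [Agent("jr dev"), Agent("sr dev")]
    return [workers[i % len(workers)].run(c) for i, c in enumerate(chunks)]

results = plan_and_delegate("build login page")
```

The weak link, as the rest of this comment argues, is that every intermediate output (the plan, each chunk's result) has to be verifiable, or errors snowball through the pipeline.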

Sadly this can't happen unattended, and I think these are fundamental limits of generative AI. Right now personas just go ahead and "pretend work" - they say things like "I am now going to go and come up with a project plan".

You have a dependency where output from all prompts must be actionable, or verifiable. If output is off, then it snowballs.

This is not a tooling or context window issue. This is a hard limit on what generation can achieve. It's impressive that we got this level of emergence, but when you look at the output in detail, it's flawed.

This is the verification issue, and verification (or disproving something) is the essence of science.

Maybe something can be added to it, or entirely new structures can be built - but generation on its own will not take us over this threshold.

Right now, LLMs are more like actors. They are very good actors, but you don't get your prescriptions from actors.


Exactly! Most will have a name given by their owner - a pet of sorts, but much more in function. You will need to reach a certain age to get it, the beginning of adulthood.

People will mourn them, and there will be real-world AI cemeteries.

Whitfield Diffie envisioned an internet highway where every computer is connected to every other back in 1974, and then went on to figure out how to secure that communication.[1]

I think my vision is quite tame and deducible from the current situation, compared to his.

[1] according to Simon Singh, The Code Book


The more personal value it holds, the more risk it carries. Imagine gaining access to someone's digital daimon - it would be like accessing their innermost thoughts. The upside, however, could be exponential personal growth. I've seen examples of real-life twins that basically live as a single superhuman individual, e.g. the Liberman twins. This tech could be a digital equivalent of that.


In outright totalitarian countries, government access to your agent will be mandated.

In more democratic countries, we'll see lots of suits and court cases over getting access to these agents and over assigning intent to your IRL actions based on your "thoughts" shared with the AI.


Small correction - Daniil and David Liberman are just brothers, not twins.


> You will need to reach a certain age to get it, the beginning of adulthood.

Not a chance that whatever capitalist figures out how to do this will let a measly cultural restriction keep them from pushing that age deep into childhood.


> a psychologic twin, assistant, advisor, and sparring partner

and snitch.

> It'd need to be only personally accessible and safe/private.

IMO there's very little chance that such a thing won't report or sell every conceivable shred of your existence to governments and corporations.


Anything long term is meaningless to think about, besides just the fun.

AI on its current track is going to so radically reshape society that it will be totally unrecognizable to people today. Society in its current form makes no sense when there are smarter and cheaper non-human workers available.


There is a huge number of people - hundreds of millions - employed today that could quite literally be replaced with small shell scripts. Yet they are not being replaced. There is no reason why AI will not similarly fail to bring forth the revolution.


Which people exactly?


In 20-30 years I most certainly do not want to use a phone anymore.

I grew up without the things, and boy was that fun.


Do you ever wonder if that's because of the growing up part and not the phone part? Growing up for most people tends to be pretty fun and exciting compared to adult life. And with or without phones, the 80s and 90s were a very different time when it came to kids being able to go out and play outside on their own. Move to a country where it's common for kids to play outside and you'll see that times aren't much different even though kids all have phones. In a way, it's easier because now you're just one text and location share away from meeting up with them.


Growing up in the time where Palm and Windows CE were a thing, I knew having a capable pocket sized computer was an inevitability, and couldn't wait for it. And it's great: I would fucking hate not having modern mobile banking, communications tools, maps, calendars and the ability to look anything up immediately.

The internet was more fun pre-smartphone, but that's because of the demographic change brought on by a new Eternal September, not the existence of mobile computers. We went from the internet being the domain of interesting, creative people who made things for fun to "basically everybody."


If you grew up before phones, then don't worry. You may not live that long.

But seriously, there will be some communication device or chip implant. And the "personal" assistant will just be a front end for software running "in the cloud" and controlled by a big corporation, with no privacy at all. That makes it spyware through which governments can track and control crowds.

That sounds like dystopia, but having a tracking device on almost every person was unimaginable not so long ago. Today it's the norm. Big corporations have your location in real time and sell it.


> the next generation will see it as the current see the TV - unimportant old boring low-entropy data which is not entertaining anymore.

Don't you mean high-entropy data? High entropy data would be less orderly, more compressible, and have a lower signal to noise ratio ... like TV shows compared to a textbook.


Low entropy data is what's more compressible. High entropy means unpredictable, which could mean high noise (like TV static which is incompressible) or high signal as GP intended.
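The compressibility point is easy to check empirically. A throwaway Python comparison (using zlib as the compressor) shows a predictable, repetitive stream shrinking dramatically while pseudo-random "static" barely compresses at all:

```python
import random
import zlib

# Low-entropy data: highly repetitive, so it compresses very well.
predictable = b"AB" * 5000  # 10,000 bytes

# High-entropy data: pseudo-random bytes, nearly incompressible
# (like TV static).
random.seed(0)
noisy = bytes(random.randrange(256) for _ in range(10000))  # 10,000 bytes

low = len(zlib.compress(predictable))
high = len(zlib.compress(noisy))
# The repetitive stream collapses to a handful of bytes; the noisy
# stream stays roughly its original size (or slightly larger).
```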


Thank you. I just learned about Shannon Entropy and that it grows with information content. The negative sign in the formula is due to the negative log of the normalized population probability.

https://math.stackexchange.com/questions/395121/how-entropy-...


Yes. You can also take the log of the reciprocal probability (or "luck") and then you don't need the negative sign. This value is the expected number of trials you would need for the outcome to occur once. I find this presentation a bit more intuitive. See also Boltzmann entropy S = k log W which takes this form.


Entropy is the "signal" within the noise.


Not necessarily. Pure noise has high entropy.


Meanwhile, the common folk will sit in the iCubicle homes with just AirGlasses and subscriptions for everything, including the virtual furniture and the Fortnite Metaverse as the then-facebook to glue it all together.

I'm honestly a little excited about our Snow Crash future.


Do you think that this will all function as one giant homogenizing force at the societal level? The AIs will all be trained on the same data, and so will have the same opinions, beliefs, persuasions, etc. It seems like everyone having AIs which are mostly the same will maximize exploitation, and minimize exploration, of ideas.


Is that any different than media before the internet consumed it all? Media consisted of a couple of news stations, a couple of TV stations and that was mostly it.

I'm also not sure that the recent broadening of media has been a net benefit to society. Look at the degree of polarization in recent years. At a certain point heterogeneity is no longer a societal good.


No, you're correct, media is also a homogenizing force. Regional accents have disappeared, for example. But I would argue that even the political polarization is homogenization, not evidence of heterogeneity, because the variety of the middle was eliminated, leaving the consolidated extremes.

My point is that personal llms will be an even greater force along this same line.


There are a million different beliefs, opinions etc in the corpus LLMs get trained on and they can predict all of it. It doesn't have to have the same opinions.


Sure, but the probabilities of those beliefs won't be the same as each other, and they will be the same between all users, so that doesn't address my point.


Again, that's up to whoever is training the models, and/or the user. Bing is GPT-4: the same exact pre-training, yet it sounds nothing like ChatGPT-4.

LLM probabilities are dynamic. They change based on context. If you want it to behave in a certain way then you provide the context to engineer that.

A particular belief system being most present in the training data doesn't mean an LLM will always shift probabilities toward that system. Such a prediction strategy would be awful at fulfilling its training objective. Getting it to shift to even the most niche belief is as simple as providing the context to do so.


> again that's up to who's training the models and/or the user.

True, but training is expensive; I imagine only a few actors will train the popular LLMs, exactly as we are seeing today.

> LLM probabilities are dynamic. They change based on context. If you want it to behave in a certain way then you provide the context to engineer that. ... Getting it to shift to even the most niche belief is as simple as providing the context to do so.

I thought so too, until I tried to get it to agree that 1 + 1 = 3. It would not do that, no matter how much context I provided, likely because the probabilities in the underlying data were so skewed.


For a bit there it didn’t sound too bad!


You'll be a God, but a God who has to look at ads all the time, and your angels will... also be ads.


If your interface to the internet is a locally running open source ai assistant, couldn't you just tell it to strip out ads from the information it returns? It could change the internet economy some.


Eh, if OpenAI gets their way, your local AI will be banned as being too dangerous.


If you have the right subscription level for your daemon.



