In the long run, it will 'secure' knowledge and data. Nobody will know what's in GPT's training data.
Currently you can command Kagi/Google/plain HTTP websites to return information. You can infer what should be in Google's index and track when information is deleted.
GPT is not commanded; it predicts, inaccurately. So anybody who wants to black-hole information behind the scenes, without ever leaving a clue that they did, can do so.
All failed predictions are covered by the LLM's design: without serious long-term study, you cannot infer that something has been removed deliberately. You cannot infer that a valid data entry exists and you simply failed to retrieve it, because unverifiable bs is the default failure state of LLMs.
High-level tech people will invest in this regardless of what the public values in it. Just like Elon's SpaceX and Tesla got lifted out of their pitfalls by government and VC money, so too will the AI guys.
Let me put it this way: hoard and back up every scrap of online information you care about. Hypothetically, an LLM-fuelled replacement for all of the 'free and open web' websites could limit information availability.
A metaphorical example would be leaving out Tiananmen Square. That's fine when you can just Google it, but with the old freedom of information gone, an LLM can just bs you, and you'd never have a reason to suspect it existed in the first place.
It's a Super-Injunction by default, a perfect repository for spies to dump data into, a librarian who will answer any question but only tell the truth if he likes you.
No more Snowden or Assange leaks; there's no search engine left to chase up a deleted video with.
Anyway, you get the idea. In the long run, the structuralists are licking their lips at the thought of re-establishing a hierarchy of information access. (Probably; I don't know.)