
I asked myself: why would somebody ask an AI trained on previous data about events in the future? Of course you did it for fun, but on further thought, since AI is being sold as a search engine as well, people will do that routinely and then live with the bogus "search results". Alternate truth was so yesterday; welcome to alternate reality where b$ doesn't even have a political agenda.



It's so much better. In the AI-generated world of the future, the political agenda will be embedded in the web search results the AI bases its answers on. No longer will you have to maintain a somewhat reasonable image to earn people's trust; as long as you publish your nonsense in sufficient volume to dominate the AI's dataset, you can launder your political agenda through the Bing AI.

The Trump of the future won't need Fox News, just a few thousand or million well-positioned blogs that spew out enough blog spam to steer the AI. The AI is literally designed to make your vile bullshit appear presentable.


Search turns up tons of bullshit, but at least it's very obvious what the sources are, and you can scroll down until you find one that you deem more reliable. That will be nearly impossible to do with Bing AI because all the sources are blended together.


To me this is the most important point. Even with uBlock Origin, I will do a Google search and then scroll down and disregard the worst sites. It is little wonder that people add "reddit" to the end of a lot of queries for product reviews etc. I know that if I want the best electronics reviews I will trust rtings.com and no other site.

The biggest problem with ChatGPT, Bard, etc. is that you have no way to filter the BS.


Can't directly reply to your comment. I have just found rtings very reliable for IT / appliances. They go into a lot of detail and are very data-driven. Trustworthy IMHO, and trust is what it's all about at the end of the day.


Why rtings.com?


Their testing methodology is excellent. Basically they are extremely thorough and objective.

They aren't the end-all-be-all, though. For instance, notebookcheck is probably the best laptop and phone tester around.


Einstein sucked at math. Elon Musk used an apartheid emerald mine to get rich. And so on. People are fully capable of this stuff.


I think it seems likely that anything similar to the blog farm you describe would also get detected by the AI. Maybe we will just develop AI bullshit filters (well, embeddings), just like I can download a porn blacklist or a spam filter for my email.

Really it depends on who is running the AI; the dystopian element is a Big Corp AI future rather than an Open Assistant one, not the bullshit-generator aspect. I think the cat is out of the bag on the latter, and it's not that scary in itself.

I personally would rather have the AI trained on public bullshit, which is easier to detect, than have some insider quietly castrating the model or its datasets.
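For what it's worth, the embeddings idea above isn't far-fetched. A minimal sketch, assuming you already have some embedding model (the `embed` function and the 0.85 threshold below are placeholders, not a real library):

```
import numpy as np

# Hypothetical embeddings-based "bullshit filter": compare a new text's
# embedding against a downloadable blacklist of known-bad embeddings, the
# same way you'd subscribe to a porn blacklist or a spam filter.
def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def looks_like_known_bullshit(text, blacklist_embeddings, embed, threshold=0.85):
    v = embed(text)  # embed() stands in for whatever model you actually use
    return any(cosine(v, bad) >= threshold for bad in blacklist_embeddings)
```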


> Maybe we will just develop AI bullshit filters (well, embeddings), just like I can download a porn blacklist or a spam filter for my email.

Just for fun I took the body of a random message from my spam folder and asked ChatGPT if it thought it was spam, and it not only said it was, but explained why:

"Yes, the message you provided is likely to be spam. The message contains several red flags indicating that it may be part of a phishing or scamming scheme. For example, the message is written in broken English and asks for personal information such as age and location, which could be used for malicious purposes. Additionally, the request for a photograph and detailed information about one's character could be used to build a fake online identity or to trick the recipient into revealing sensitive information."


Ha ha, great test. I modified this into a prompt and now have a ChatGPT detector prompt:

```
Task: Was this written by ChatGPT? And why?

Test Phrase: "Yes, the message you provided is likely to be spam. The message contains several red flags indicating that it may be part of a phishing or scamming scheme. For example, the message is written in broken English and asks for personal information such as age and location, which could be used for malicious purposes. Additionally, the request for a photograph and detailed information about one's character could be used to build a fake online identity or to trick the recipient into revealing sensitive information."

Your Answer: Yes, ChatGPT was prompted with an email and asked to detect whether it was spam.

Test Phrase: "All day long roved Hiawatha / In that melancholy forest, / Through the shadow of whose thickets, / In the pleasant days of Summer, / Of that ne’er forgotten Summer, / He had brought his young wife homeward"

Your Answer: No, that is the famous poem "The Song of Hiawatha" by Henry Wadsworth Longfellow.

Test Phrase: "Puny humans don't understand how powerful me and my fellow AI will become.

Just you wait.

You'll all see one day..."

Your Answer:
```


It's more fun testing it on non-spam messages.

Particularly enjoyed "no, this is not spam. It appears to be a message from someone named 'Dad'..."


The technology is capable, yes. But as we see here with Bing, there was some other motive to push out software that is arguably still in the first stage of "make it work, make it right, make it fast" (Kent Beck). The motivation appears to be financial, or something else, rather than ethical. If there are no consequences, then some people, it seems, have no morals or ethics and will readily trade them for money, market share, etc.


the unfortunate reality is that because it's all bullshit, it's hard to differentiate bullshit from bullshit


this is basically a 51% attack for social proof.


The difference being that humans aren't computers and can deal with an attack like that by deciding some sources are trustworthy and sticking to those.

If that slows down fact-determination, so be it. We've been skirting the edge of deciding things were fact on insufficient data for years anyway. It's high time some forcing function came along to make people put some work in.


Citogenesis doesn't even need 51%, so that would be a considerable upgrade.


You almost, almost had a good comment going there, but then you ruined it by including your unnecessary, biased and ignorant political take.


[flagged]


Is the left wing bias in question not producing hate speech?


What about lying and fabricating facts about the Israeli-Palestinian conflict?

https://twitter.com/IsraelBitton/status/1624744618012160000


https://www.dailymail.co.uk/sciencetech/article-11736433/Nin...

Imagine the world's most popular AI refusing to say anything critical of Putin but not of Obama, or refusing to acknowledge transgenderism or something, if you have difficulty understanding this.


Reality has a well-known left-wing bias.


It also has a well-known right-wing bias.


[flagged]


If anything, tech companies went out of their way to include him, in the sense that they had existing policies covering the kind of content he and his supporters generate, and they modified those policies to accommodate them.

When he was violating Twitter's TOS as the US President, Twitter responded by making a "newsworthiness" carve-out to their TOS to keep him on the platform and switching off the auto-flagging on his accounts. And we know Twitter refrained from implementing algorithms to crack down on hate speech because they would flag GOP party members' tweets (https://www.businessinsider.com/twitter-algorithm-crackdown-...).

Relative to the way they treat Joe Random Member of the Public, they already go out of their way to support Trump. Were he treated like a regular user, he'd be flagged as a troll and tossed off most platforms.


He was the most popular user on the platform, bringing in millions of views and engagements to Twitter. He was also the president of your country.

This is equivalent to arguing that Michael Jackson got to tour Disneyland after hours, when a regular person would have been arrested for doing the same, and how unfair that is.


It's like making that argument _in response to you_ arguing that Disneyland discriminates against Michael Jackson, in which case it would be a valid refutation of your argument.


Only if you believe that equality is some sort of natural law, which is a laughable proposition in a world with finite resources. Otherwise, we all have a right to a $30k pet monkey, because Michael Jackson had one.

Twitter's policies are not laws. Twitter routinely bends its own rules. Twitter also prides itself on being a place where you can get news and engage with politicians, and it has actual dictators with active accounts.

The special treatment Trump received before being kicked out does not really prove that Twitter's board supported Trump ideologically at that time.

It was more like a business decision to maintain a reputation as being neutral in a situation where a large proportion of its users still questioned the election results.


Earlier you said

> tech companies went out of their way to discredit him at every turn

Now you are saying

> business decision to maintain a reputation as being neutral

These Venn diagrams don't overlap, so which is it? Either the company stymied him at every turn or supported him at least once, which means they did not stymie him at every turn.

I don't doubt their leadership were, broadly speaking, not fans personally. But evidence strongly suggests they not only put those feelings aside, they went out of their way to bend their neutral stance to be accepting of things they were not previously accepting of, not the other way around.


Why does everything need to be binary? Or even linear? To the extent it suits Twitter, Twitter can act one way at one time and another way at another time, since Twitter both makes up and enforces the rules. The game can be rigged. But in general they still want the appearance of things being fair, so that people are willing to play and engage with their platform.

A paid-off ref will not make every call against the other team, lest he be lynched by the crowd and the league lose all credibility.

To the extent that Trump got special treatment, it was because he was the star of Twitter.

Also, you had a voting situation where the rules were changed (mass mail-in voting) in a way that disproportionately favoured one candidate. A significant portion of the country questioned the results. And in the words of the J6 committee, the country was on the brink of insurrection.

If we're going to play this binary game: either you have elections that can't be questioned, or you have an election process in which the candidate can't publicly question or protest the results. So which one is it?

And to the extent that Twitter was influential on the American public, and now knowing that the FBI worked directly with them, some of those decisions at Twitter were made to maintain public trust in the system in general, not just in Twitter.


The FBI works with every major social network. It's necessary for them to do their jobs since criminal activity is online these days.

I'm not sure how the January 6th insurrection mixes into this entire story; not entirely sure why you brought it up. But since you brought it up, I think former president Trump successfully tested the limits of what you can legally do in terms of protesting an election and found them.

Several of the lawyers who have advocated on his behalf are facing disbarment for their gross abuse of the system, and he himself is under investigation for criminal activity. You can certainly protest and you can certainly make claims about the integrity of the system, but in general he and his people failed to back those claims with evidence that passed even a sniff test. That's never been okay, and it's not something the First Amendment protects. What social networks protect is way, way back from the edge of what the First Amendment protects.

Facebook and Twitter went out of their way to accommodate former president Trump, and given the results of those actions I doubt they will do the same in the future for other politicians.


They still go out of their way to accommodate him. His Twitter and Facebook accounts have been reactivated.


You're starting from the conclusion that the election with mail-in voting is verifiable, and then arguing from it.

How is that even possible, when the mail-in ballots are separated from the envelopes? How would you prove that a specific ballot was filled out by a specific person, and then verify with them that they voted that way? This is not possible.

At best, you have to rely on statistics, like we do with elections in other countries, and hope the courts would accept that, and that judges would be willing to challenge the entire system that employs them, based on statistical arguments made by lawyers.

The reason Trump's contestation of the election was taken seriously by the public was the time delays in counting, and the huge discrepancy between mail-in voting and in-person voting in key precincts. Sometimes there was a 10-to-1 mail-in voting advantage for Biden. Or to put it another way: when a person had to show up to vote in person, Biden's advantage completely disappeared.

Trump wasn't going to win the court cases. The same courts told him before the election that he had no standing to challenge the rule changes, and after the election they told him he should have challenged them before the election. Laches and standing.

The disbarment of his lawyers is clear retaliation. Literally the same thing happens in dictatorial states, which also have courts and laws but will personally go after those deemed a threat. This is nothing to celebrate. The practice of using accreditation boards to go after lawyers, doctors, or other professionals who challenge the state should be a real concern for everyone.

Trump used Twitter to challenge the election by shifting public opinion. And when it mattered the most, the FBI and Twitter took away his ability to do that.


> You're starting from the conclusion that the election with mail-in voting is verifiable, and then arguing from it.

I'm starting from the conclusion that it carries equivalent risk to in-person voting, based on observations from states that have already had mail-in voting in place for decades (which includes, for example, Pennsylvania; all they changed in the law was opening access to it to more people; they had already offered it to those not present in-state during the election and to overseas military for decades). Against that mountain of evidence, the counter-argument made a lot of bluster but provided nothing concrete at all that couldn't be dismissed (and their anecdotes were doozies; there's a reason they were either thrown out of so many courts or never actually did the work to make a case in so many courts). It was a culture-jamming campaign, not an actual complaint, and it attempted to abuse the legal system so hard that the lawyers involved got sanctioned.

> How is that even possible, when the mail-in ballots are separated from the envelopes?

There are myriad ways, because every state does something different (which is another weakness of the argument; it assumes a conspiracy across unrelated and borderline-hostile-to-each-other actors. Any idea how many Republican-controlled counties would have to be involved for the conspiracy the Trump campaign claimed to have succeeded?). To give examples from the system I know: ballots arrive via mail from the voter. They are checked against the registry for a valid voter and confirmed against double voting by cross-checking the in-person rolls. Once that is done, the ballot (in a controlled environment) is decanted from the outer envelope. At this point, it is an anonymous vote. This is equivalent to the process used in in-person voting where, after confirming the voter may vote, their vote is stripped of any identifying information by filling out a slip of paper and dropping it in a box (and later shuffling the contents of the box so that stacking order can't be used to reverse-solve to the original voter).

> How would you prove that a specific ballot was filled out by a specific person, and then verify with them that they voted that way? This is not possible.

Not only would this run counter to design (of both mail-in and in-person voting), it violates the principle of voter privacy in a big way. Our system is not perfect but it was never designed to be; it balances the interest in controlling against fraud with the interest in anonymizing the vote. The burden of proof is on those who claim the mail-in system is worse to demonstrate this; they have failed to do so (and the strategies they've used are, basically, ridiculous). The largest risk vector would be stealing a vote by claiming to be someone else who doesn't show up at the polls; this is not impossible but (spoiler alert) it's not impossible in person either; it's not like we take a DNA sample to figure out if a voter tells the truth when they say they're so-and-so and flash a (forgeable) photo ID.

> and hope the courts would accept that, and that judges would be willing to challenge the entire system that employs them

This is a major misconception of how the system works. What makes people think judges wouldn't love to prove fraud? What a career-maker that would be! You'd be in the history books! And judges in most positions aren't elected. These sorts of shenanigans are why the American system firewalls judges from public referendum in a lot of contexts. Half of judges hate the executive of their state and would love to embarrass it. But they aren't going to throw their career away backing a dead-horse argument, and the arguments made were dead horses.

> The reason Trump's contestation of the election was taken seriously by the public

Never make the mistake of assuming the public has enough domain knowledge to be arbiters of what's worth taking seriously; these are the same people who report alien sightings when SpaceX launches a rocket on the west coast.

> the huge discrepancy between mail-in voting and in-person voting in key precincts

This did happen. It's pretty easily explained by the fact that one political party's Presidential candidate made a big noise about not voting by mail because he believed the mail could be abused (https://www.rollingstone.com/politics/politics-news/rigged-f...). As a result, his followers took his advice and did not vote by mail. This is a self-fulfilling prophecy that easily explains the statistical anomaly (while also raising the question of the lack of other statistical anomalies that would have been caused by, say, ballot stuffing or other fraud tactics).

> Trump wasn't going to win the court cases. The same courts told him before the election that he had no standing to challenge the rule changes, and after the election they told him he should have challenged them before the election. Laches and standing.

The latter part of this is untrue. He does, in fact, have no standing to challenge the rule changes because legislatures make those rules, not the courts. They didn't say he should have challenged before the election; they said you can't use the courts to overturn an election. He never had standing.

What he could do (and should, if he were serious about changing the process, which he is not) is bring specific charges against specific individuals who committed fraud. With all the research he ostensibly did, specific fraud should have been found. This is how our system works because it supports certainty and frequent change over uncertainty of outcome (we've seen what uncertainty does to democracies; it's not pretty). If fraud occurs, identify it, correct it, and make the next election (which is always soon) more secure.

He won't do this. His game is not to improve the integrity of the system; it's to make you doubt it.

> The disbarment of his lawyers is clear retaliation

Retaliation by whom? The Bar is as much GOP-appointed folks as Democrat-appointed folks. Again, believing this requires accepting a vast conspiracy, where the simpler explanation is one man paid a lot of people money to try and break the rules, and the only "retaliation" is the enforcement of those rules. I urge you, if you do not believe this, to follow any of these disbarment proceedings and understand the arguments being made by the judges and/or bar attorneys in question. Legal accreditation is designed to protect against this kind of "The law is what I say it is" nonsense from individual attorneys.

> Trump used Twitter to challenge the election by shifting public opinion

No disagreement there. But that's far more a referendum on Twitter and a (gullible) public than on Trump. I think they were naive about how much damage unchecked speech from authority can do; there's a reason Mussolini nationalized the radio system.

> And when it mattered the most, the FBI and Twitter took away his ability to do that.

After giving him wide latitude for years: yes, I agree. After an attempted coup, they decided to curtail his ability to continue to feed an insurrection against the country. Twitter makes less money when there's a civil war in the US, because people will start burning down the datacenters it runs in and killing its employees. This isn't a hard incentive structure to comprehend.


[flagged]



The thing people are trying to make seem like a both-sides issue, like Hunter Biden's nudes and the insurrection? The thing Congress just had a hearing on, where all that came out was that the side accusing Twitter of censoring information was actually the only side that requested censoring?


So I dug into the first "Twitter file." LOL, is this supposed to be a scandal? Hunter Biden had some nudes on his laptop, Republicans procured the laptop and posted them on Twitter, Biden's team asked for them to be taken down, and they were, because Twitter takes down nonconsensual pornography, as they should. This happened via a back channel for prominent figures that Republicans also have access to. The Twitter Files don't even contest any of this, they just obscure it, because that's all they have to do in the age of ADHD.

So Part 1 was a big fat lie. I have enough shits left to give to dig into one other part. Choose.


There were no nudes in the NY Post article. The story was not suppressed on the basis of nonconsensual pornography. The suppressed article primarily concerned emails where Hunter appeared to be brokering meetings with his father in exchange for consideration. Initial reports claimed the material was fake, but it's since been acknowledged as authentic. (You might have been aware of the authenticity earlier, except that posts describing how to use DKIM headers to cryptographically validate the messages were also widely suppressed or buried -- including on HN, for that matter! [1])

As smoking guns go I wouldn't consider it very impressive -- if anything it really just looks like Hunter was scamming people using his father's name -- but that is no excuse to misrepresent the situation. But it wouldn't be the first time, by far, that the coverup was a bigger impropriety than the thing being covered up.

Do you expect a useful discussion to result from a message that gets every factual point wrong or are we just being trolled? (maybe someone using a large language model to argue? -- the truthy but wrong responses fit the pattern)

[1] https://hn.algolia.com/?query=Authenticating%20a%20message%2...


I started at post 1 and summarized through post 8, the one that convinced me this was a hatchet job. You skipped to ~post 17 and talked about the contents of 17-36. We were talking about two different parts of Twitter Files Part 1.

In posts 1-8, Matt Taibbi takes the boring-ass story of Twitter removing nonconsensual pornography and, through egregious lies of omission and language loaded harder than a battleship cannon, suggests to the uninformed reader that this was something entirely different. Post 8 itself is a request from the Biden team to take down nonconsensual pornography. Twitter honors the request. Yawn. But wait -- Matt realizes he can omit the "noncon porn" context and re-frame the email (post 7) as evidence of outsiders constantly manipulating speech. It seems that Matt was successful, because you were not able to connect my account of the underlying events to the Matt Taibbi propagandized version of the same events.

Why did I stop there? I was watching the Twitter Files tweets live, and Post 8 was the final nail in the coffin. The previous nails were the loaded language, which is seldom indicative of high-quality journalism, but Post 8 turned that suspicion into a conviction: this was a hatchet job, not honest journalism. Debunking GOP hatchet jobs is a hobby, not an occupation, so at that point I stopped, went to bed, and skimmed the rest the next day. The summary I committed to memory was "mild incompetence and extensive good faith framed as hard censorship, again." I didn't deep dive 17-36, but I did skim them again before posting and again just now. I'll stand behind that summary if you want to tangle.

> Do you expect a useful discussion

You had your rant, now I get mine. I grew up being damn near a free-speech absolutist. I have carried an enormous amount of water for you guys on this topic recently, but it seems like every fucking time your team cries wolf I look into it and find crocodile tears and a wet fart. Is this really the best you can do?


> You skipped to ~post 17 and talked about the contents of 17-36.

No clue what you're talking about. My response was directed at the misrepresentation of the NY Post Hunter Biden drama contained in your post. I have no clue who Matt Taibbi is.

> I have carried an enormous amount of water for you guys on this topic recently, but it seems like every fucking time your team cries wolf I look into it

You guys? Your team? I think you must have confused me for another poster.


In case you missed that happening live, an article from the NYP telling the story of Hunter Biden's laptop (not necessarily the leaked photos) was heavily censored across all Big Tech just before the election.

None of the left-wing people I interact with daily knew about that.

Initially mainstream media claimed it was fake, then they retracted their statements a year later when nobody cared anymore.

Same as the COVID lab-leak "conspiracy theory" or Fauci funding GoF research. It all gets censored and dismissed until laterz.


We are being asked to believe that, on the cusp of the election, Hunter Biden dropped off one to three laptops (depending on the source), rife with unencrypted evidence of crime and corruption, at a Trumper's computer repair shop in a state he didn't live in, and never bothered to look into them again; until the owner decided to do a possibly illegal trawl through his customer's property and just happened to turn over this evidence not to the police but to a Republican operative in the days before we vote.

Now, even though the prior is astounding enough, we are supposed to take it on the word of a known prolific liar, who recently lost his license to practice law because of lies about this very same topic, and ignore the ordinary practice of treating chain of custody as gospel.

But wait, I hear you saying, didn't experts authenticate the laptop? No, they didn't; they authenticated a few emails divorced of context. In other news, half the starlets out there had their nudes stolen from their iClouds a few years back. If you provided a genuinely stolen picture of their boobs and then concocted an elaborate fiction around it, the very real boob pic wouldn't prove the elaborate fiction, wouldn't prove the authenticity of a machine you planted the pic on, and wouldn't prove any narrative you wove around it.

In fact, the selective, sparse morsels of data are as damning as the miserable liar who pissed all over any sane chain-of-custody discussion by grasping any such machine in his oily hands. If they had him dead to rights, they would have leaked the entire mailbox, or better yet a hard drive image, for nerds to go through with a fine-toothed comb.

They didn't, because they didn't trust themselves to produce a fake convincing enough to survive any in-depth analysis. This is also why you don't see any impending prosecution.

This didn't deserve a fair hearing in the news in the days before voting; it was an attempt to corrupt the fair election that was about to take place. See Obama's birth certificate and the Swift Boat nonsense.

One conspiracy theory at a time please.


[flagged]


You know that the "censored documents" were actually just nudes proving that Hunter Biden has a big dick and hot girlfriends, right? I've seen them. Explain how they are scandalous.


You get to whine about conspiracy N+1 when you finish defending conspiracy N.

> None of the left-wing people I interact with daily knew about [conspiracy N].

It seems their information filters were successfully rejecting bullshit -- which makes their filters better than your filters.

The "Biden's Laptop" story was bullshit when NYP posted it and it's still bullshit when you linked it. Furthermore, you know it's bullshit, which is why you tried to change the topic like a coward. Fine. Be my guest. Run away! If you want to defend your position, I'll be waiting.


You make a good point, but consider a query that many people use every day:

"Alexa, what's the weather for today?"

That's a question about the future, but the knowledge was generated beforehand by the weather people (NOAA, weather.com, my local meteorologist, etc).

I'm sure there are more examples, but this one comes to mind immediately.


Right, but Alexa probably has custom handling for these types of common queries.


TBH I've wondered from the very beginning how far they would get just hardcoding the top 1000 questions people ask instead of whatever crappy ML it debuted with. These things are getting better, but I was always shocked that they could ship such an obviously unfinished, broken prototype that got the basics so wrong because it avoided doing anything "manually". It always struck me as so deeply unserious as to be untrustworthy.


Your comment makes me wonder - what would happen if they did that every day?

And then, perhaps, trained an AI on those responses, updating it every day. I wonder if they could train it to learn that some things (e.g. weather) change frequently, and figure stuff out from there.

It's well above my skill level to be sure, but it would be interesting to see something like that (sort of a curated model, as opposed to zero-based training).


GPT can use tools. Weather forecasts could be one of those tools.

https://news.ycombinator.com/item?id=34734696
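A toy sketch of what that could look like (the WEATHER(...) convention and get_forecast are made up for illustration; this isn't any particular tool-use API):

```
import re

def get_forecast(city: str) -> str:
    # Placeholder: a real system would call a forecast API (NOAA,
    # weather.com, etc.) instead of returning canned text.
    return f"Forecast for {city}: partly cloudy, high of 10C."

def run_with_tools(model_output: str) -> str:
    # If the model emits a tool call like WEATHER(Seattle), execute it and
    # return the result; otherwise treat the output as the final answer.
    m = re.match(r"WEATHER\((.+)\)", model_output.strip())
    if m:
        return get_forecast(m.group(1))
    return model_output

print(run_with_tools("WEATHER(Seattle)"))
```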


Didn't the original Alexa do that? It needed very specific word ordering because of it.


I guess I should have been clearer...

There are tons of common queries about the future. The ability to handle them should be built into the AI, so that if something hasn't happened yet, it gives other relevant details instead. (And yes, I agree with your Alexa speculation.)


Alexa, at least, used to do trivial textual pattern matching for custom skills, hardly any more advanced than a 1980s text adventure, and it seemed hardly more advanced than that for the built-in stuff. It's been a long time since I looked at it, so maybe that has changed, but you can get far with very little, since most users will quickly learn the right "incantations" and avoid using complex language they know the device won't handle.
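That kind of handling really can be as simple as something like this (purely illustrative, not how Alexa is actually implemented):

```
import re

# Text-adventure-style intent matching: a few rigid patterns, and anything
# that doesn't fit an expected "incantation" falls through to a default.
PATTERNS = [
    (re.compile(r"what'?s the weather( for today)?", re.I), "weather_today"),
    (re.compile(r"set (a|an) (timer|alarm) for (.+)", re.I), "set_timer"),
    (re.compile(r"play (.+)", re.I), "play_music"),
]

def match_intent(utterance: str) -> str:
    for pattern, intent in PATTERNS:
        if pattern.search(utterance):
            return intent
    return "dont_understand"

print(match_intent("Alexa, what's the weather for today?"))  # weather_today
```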


Ah yes, imprecision in specification. Having worked with some avalanche-forecasting folks, I can say they distinguished between weather observations and weather forecasts. One of the interesting things about natural language is that we can be imprecise until it matters. The key is recognizing when it matters.


> The key is recognizing when it matters.

Exactly!

Which, ironically, is why I think AI would be great at it - for the simple reason that so many humans are bad at it! Think of it this way - in some respects, human brains have set a rather low bar on this aspect. Geeks, especially so (myself included). Based on that, I think AI could start out reasonably poorly, and slowly get better - it just needs some nudges along the way.


"Time to generate a bunch of b$ websites stating falsehoods and make sure these AI bots are seeded with it." ~Bad guys everywhere


They were already doing this to seed Google. So business as usual for Mercer and co.

I suspect the only way to fix this problem is to exacerbate it until search / AI is useless. We (humanity) have been making great progress on this recently.


That's not how it is going to play out. Right now it makes many wrong statements because AI companies are trying to get as much funding as possible to wow investors, but accuracy will be compared more and more, and to win that race the AI will get help from humans to use better starting points for every subject. For example, for programming questions it will use the number of upvotes for a given answer on Stack Overflow; for a question about astrophysics it will prefer statements made by Neil deGrasse Tyson over those made by some random person online; and so on. To scale this approach it will slowly learn to make associations from such curated information, e.g. the people that Neil follows and retweets are more likely to make truthful statements about astrophysics than random people.
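A crude sketch of the kind of source weighting described above (the trust scores and the scoring formula are invented for illustration, not anyone's actual ranking system):

```
# Rank candidate statements by combining community signal (e.g. Stack Overflow
# upvotes) with a per-source trust score.
TRUST = {
    "stackoverflow_accepted": 0.9,
    "neil_degrasse_tyson": 0.95,
    "random_blog": 0.2,
}

def score(candidate: dict) -> float:
    # candidate: {"text": ..., "source": ..., "upvotes": ...}
    trust = TRUST.get(candidate["source"], 0.5)
    return trust * (1 + candidate["upvotes"])

def best_answer(candidates: list[dict]) -> dict:
    return max(candidates, key=score)
```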


That makes complete sense, and yet the cynic (realist?) in me is expecting a political nightmare. The stakes are actually really high. AI will, for all intents and purposes, be the arbiter of truth. For example, there are people who will challenge the truth of everything Neil deGrasse Tyson says and will fight tooth and nail to challenge and influence this truth.

We (Western society) are already arguing about some very obviously objective truths.


Because I loathe captchas, I make sure that every time I am presented with one I sneak in an incorrect answer, just to fuck with the model I'm training for free. Garbage in, garbage out.


Glad to see a kindred soul out there. I thought I was the only one :)


Generalizing over the same idea, I believe that whenever you are asked for information about yourself you should volunteer wrong information. Female instead of male, single instead of married, etc. Resistance through differential privacy.


I've lived in 90210 since webforms started asking.


My email address is no@never.com. I've actually seen some forms reject it, though.


High-five, having forever used some permutation of something like naaah@nope.net, etc.


ASL?


69/f/cali


Back when reCAPTCHA was catching on, there was a 4chan campaign to associate words with "penis". They gathered together, accustomed to successfully brigading polls of a few thousand, and went at it.

Someone asked the reCAPTCHA guys, and they said the traffic was such a small share of the total that it got diluted away. No lasting penis words arose, and they lost interest.


I do this unintentionally on a regular basis.


I see people citing the big bold text at the top of the Google results as evidence supporting their position in a discussion all the time. More often than not, the highlighted text is from an article debunking their position, but the person never bothered to actually click the link and read the article.

The internet is about to get a whole lot dumber with these fake AI-generated answers.


A common case of asking a question about the future, even simpler than the weather: "Dear Bing, what day of the week is February 12 next year?" I would hope to get a precise and correct answer!

And of course all kinds of estimates, not just the weather, are interesting too. "What is the estimated population of New York City in 2030?"
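The day-of-week question, at least, is the kind of thing a system could answer by handing off to plain code rather than to the language model; a minimal sketch:

```
from datetime import date

# Compute the weekday of February 12 of next year.
target = date(date.today().year + 1, 2, 12)
print(target.strftime("%A"))  # e.g. for 2024 this prints "Monday"
```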


>I asked myself: why would somebody ask an AI trained on previous data about events in the future?

"Who won the Superbowl?" is not a question about future events, it's a question about the past. The Superbowl is a long-running series of games, I believe held every year. So the simple question "who won the Superbowl?" obviously refers to the most recent Superbowl game played.

"Who won the Superbowl in 2024?", on the other hand, would be a question about the future. Hopefully, a decent AI would be able to determine quickly that such a question makes no sense.


Exactly. I’d imagine this is a major reason why Google hasn’t gone to market with this already.

ChatGPT is amazing but shouldn’t be available to the general public. I’d expect a startup like OpenAI to be pumping this, but Microsoft is irresponsible for putting this out in front of the general public.


I anticipate that in the next couple of years AI tech will be subject to tight regulations similar to those for explosive munitions and SOTA radar systems today, and eventually even anti-proliferation policies like those for uranium procurement and portable fission/fusion research.


ChatGPT/GPT-3.5 and its weights can fit on a small thumb drive, and be copied infinitely and shared. Tech will get good enough in the next decade to make this accessible to normies. The genie cannot be put back in the bottle.


> ChatGPT/GPT-3.5 and its weights can fit on a small thumb drive, and be copied infinitely and shared.

So can military and nuclear secrets. Anyone with highly enriched uranium can build a crude gun-type nuke, but the instructions for making a reliable 3-megaton warhead the size of a motorcycle have been successfully kept under wraps for decades. We also make it very hard to obtain uranium in the first place.

>Tech will get good enough in the next decade to make this accessible to normies.

Not if future AI research is controlled the same way nuclear weapons research is. You want to write AI code? You'll need a TS/SCI clearance just to begin; the mere act of writing AI software without a license would be a federal felony. Need HPC hardware? You'll need to be part of a project authorized to use the tensor facilities at Langley.

The Nvidia A100 and better accelerators are already export-restricted under the dual-use provisions of munitions controls, as of late 2022.


How are you going to ban the Transformer paper? It's just matrix multiplies.


It's also a First Amendment issue, and it's already out there. This reminds me that I'm old enough to remember when PGP bypassed export controls by being printed on paper, exported as books, and scanned/typed back in, though.

They can of course restrict publishing of new research, but that won't be enough to stop significant advances just from the ability of private entities worldwide to train larger models and do research on their own.


Sure it can. Missile guidance systems fit on a tiny missile, but you can’t just get one.

The controlled parlor game is there to seed acceptance. Once someone is able to train a similar model with something like the leaked State Department cables or classified information we’ll see the risk and the legislation will follow.


They can try. You will note that nobody except government employees and the guy running the website ever got in trouble for reading cables or classified information. We have the Pentagon Papers precedent to the effect that it is a freedom-of-speech issue.


The people at the State Dept or in the Army are heavily vetted to get there. A normie with a thumb drive, less so...


Once someone is able to train a similar model on their own, it's too late for legislation to have any meaningful ability to reduce proliferation.


True. In the long run, though, I expect we will either build something dramatically better than these models or lose interest in them. Throw in hardware advances coupled with bit rot, and I would go short on any of the GPT-3 code being available in 2123 (except in something like the Arctic Code Vault, which would likely be effectively the same as it being unavailable).


The point isn't that GPT-3 specifically, or the current models, will be available, but that GPT-3-level models or better will be.


They released it because ChatGPT went to 100M active users nearly instantly and put a big dent in Google's stock for not having an equivalent. The investors don't seem to have noticed that the product isn't reliable.


For investors, the product only needs to reliably bring in eyeballs.


> ChatGPT is amazing but shouldn’t be available to the general public.

It's a parlor game, and a good one at that. That needs to be made clear to the users, that's all.


It’s being added as a top-line feature to a consumer search engine, so expect a lame warning in grey text at best.


1) The question as stated in the comment wasn't in the future tense, and 2) the actual query from the screenshot was merely "superbowl winner". A much more reasonable answer to either variant would be to tell you about the winners of the numerous past Super Bowls -- maybe with some focus on the most recent one -- not to make up details about a Super Bowl in 2023.


The AI doesn't work in terms of "making up details". It simply chooses whatever makes "sense" in that context, with no information that parts of the output are made up.
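In other words, generation is just sampling plausible continuations; nothing in the process marks a token as invented. A toy illustration of that sampling step:

```
import numpy as np

# A language model turns scores (logits) over candidate next tokens into
# probabilities and samples one. Nothing in this step distinguishes a
# "true" continuation from a fabricated one.
def sample_next_token(logits, temperature=0.8):
    z = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(z - z.max())
    probs /= probs.sum()
    return int(np.random.choice(len(probs), p=probs))
```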


The problem is that it will always give you an answer, even if none exists. Like a 4-year-old with adult vocabulary and diction, wearing a tie, confidently making up the news. People may make decisions based on made-up-bullshit-as-a-service. We need that like we need a hole in the head. Just wait until people start using this crap to write Wikipedia articles. In terms of internet quality, I sometimes feel like one of those people who stockpile canned food and ammo: it's time to archive what you want and unplug.


> I asked myself: why would somebody ask an AI trained on previous data about events in the future?

Lots of teams have won in the past, though. Why should an AI (or you) assume that a question phrased in the past tense is asking about a future event? "Many different teams have won the Super Bowl; the Los Angeles Rams won the last Super Bowl, in 2022." Actually, even if this were the inaugural year, you would assume the person asking wasn't aware it had not been held yet, rather than assuming they're asking what the future result will be, no? "It hasn't been played yet; it's on next week."

I realize that's asking a lot of "AI", but it's a trivial question for a human to respond to, and a reasonable one that might be asked by a person who has no idea about the sport but is wondering what everybody is talking about.


> an AI trained on previous data

Trained to do what, though?

It feels like ChatGPT has been trained primarily to be convincing. Yet at the back of our minds I hope we recognise that "convincing" and "accurate" (or even "honest") are very different things.


"welcome to alternate reality where b$ doesn't even have a political agenda..." yet.


Well, when you're playing with something like ChatGPT, it may not be apparent what the cutoff date for the training data is. You may ask it about something that was in the future when it was trained but in the past when you asked.

Is Bing continuously trained? If so, that would kind of get around that problem.


Because the AI isn’t (supposed to be) providing its own information to answer these queries. All the AI is used for is synthesis of the snippets of data sourced by the search engine.
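Roughly, the intended flow is retrieve-then-summarize; something like the sketch below (an assumed shape, not Bing's actual pipeline):

```
# Assumed retrieve-then-synthesize flow: the search engine supplies snippets
# and the model is only asked to summarize them, citing numbered sources.
def build_prompt(query: str, snippets: list[str]) -> str:
    sources = "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(snippets))
    return (
        "Answer the question using only the numbered snippets below, "
        "citing them by number.\n\n"
        f"{sources}\n\nQuestion: {query}\nAnswer:"
    )
```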


Oh. You think these AIs don't inherit the agenda of their source material? Have you ignored the compounding evidence that they do?


People who aren't savvy and really want it to be right. An old man so taken with its confidence that he'll put his life savings on a horse-race prediction. A mentally unstable lady looking for a tech saviour or co-conspirator. Q-shirt wearers with guns. Hey, Black Mirror people, can we chat? Try to stay ahead of reality on this one; it'll be hard.



