Every Day Is April Fool’s Day Now (vice.com)
55 points by zhte415 on April 1, 2023 | 68 comments



With half a moment's reflection you'll realize that photos and video have been showing untrue things since the beginning (before anyone alive was born, in fact), and people have somehow been coping.

In fact, people have always been able to lie, deceive, trick, hide, bullshit, distract and beguile, through all of human existence. The technology hasn't changed anything. (Vice seems like the source of the dumbest stuff that gets onto the front page of HN.)


> The technology hasn't changed anything.

I don't agree with this. Sure, things like VFX have existed to augment the experience of movies and other types of content over the years, but these were large-scale commercial projects that only a few could afford.

We're now reaching the point where anyone can create video content that humans cannot distinguish from reality, and that is unprecedented. Video is an extremely powerful medium that has driven entire movements. For example, BLM wouldn't have happened without the world watching a video of a cop publicly murdering George Floyd.

There's something about seeing video evidence of heinous things happening that drives people to act more than simply reading op-eds or listening to interviews, largely because it's more viscerally stimulating.

We can't know whether generative AI is changing how we think or behave until it has proliferated and fully integrated into society.


>We're now reaching the point where anyone can create video content that humans cannot distinguish from reality

>We're now reaching the point where anyone can photoshop an image with intent to deceive

>We're now reaching the point where anyone can film a scene with intent to deceive

>We're now reaching the point where anyone can stage a photograph with intent to deceive

>We're now reaching the point where anyone can mass-produce pamphlets with intent to deceive

>We're now reaching the point where anyone can cheaply replicate a signet ring to pose as a member of our House

>We're now reaching the point where anyone can write a letter impersonating the King

>We're now reaching the point where...

It's funny how little humans change. Think about how, for every single one of the above technologies, many people reacted with the same sense of doom and gloom induced by a novel technology, yet the general was no different, only the particular. Something else interesting is that many of us revel in this doom-mongering. Again, this is not something new either; reactionary doomsayers and the cults they've spawned have been coming and going for thousands of years.

I find it useful to remind myself that technologies and socio-philosophical trends come and go, yet man remains fundamentally the same. In times of uncertainty, it's comforting, and informs me how to act.


Yep, and Stable Diffusion can imitate anyone's handwritten signature, no matter how difficult it is to draw.

I used ChatGPT to write a story about a person who warns of a new knife technology that produces sharper knives than any other knife technology in history.

https://i.postimg.cc/9X4Vjyjk/knife-technology.png


The hand signature could also be taken from a picture and then drawn by a CNC engraver, without AI.

This doesn't really help you though, because for important things you need a witness for your signature.


It has always been trivially easy for people to lie to each other.

The fact that it used to be hard to lie via one specific medium, like video, and now is easy (well, soon perhaps -- the cited examples show there's still some ways to go there) is more of a return to the normal course.


No it is not. People used to be able to trust what they saw with their own two eyes. Hollywood could come up with some fake magic, but your average Joe couldn’t. Have a look at the new Unreal Engine demo. Pretty soon, anyone will be able to take a person’s social media photos and videos and place them into any scene they want.

Did you see the news article about the pope in a white coat? Heads up, it was an AI photo and it tricked millions. What do you think might happen if someone decides to flood the internet with a thousand fake George Floyd videos to incite riots because “the ends justify the means”?

This is the world we are entering.


You can still trust what you see with your own two eyes. You can't trust videos or photos of things. Computer image/video editors (e.g. Photoshop, After Effects) have made photos & videos untrustworthy for years if a skilled editor made the modifications, and before that some darkroom tricks could do a reasonable job. The new AIs make it cheaper & easier to fake photos and videos.


Good point. You can’t trust anything you see on a screen anymore. Although even your eyes can deceive you in the real world occasionally.


Yes, optical illusions do exist. They're not particularly useful for the sorts of deceptions that edited media can produce though, so they're not much of a threat.


I wasn’t really talking about optical illusions, but more something along the lines of the film Vantage Point, or just detective work in general. A lot of people can recall the same event and be telling it faithfully to what they saw and remember, but the actual truth of what happened can be something entirely different from their individual perceptions. For example, you might say you saw someone murder a man with a gun and then run off, when, had you seen it from another angle, you would have seen the shooter was acting in self-defence because he was about to get stabbed.


> What do you think might happen if someone decides to flood the internet with a thousand fake George Floyd videos to incite riots because “the ends justify the means”?

People have been flooding the internet with lies all along -- and, indeed, sometimes incite riots with those lies.

I get it: there are new ways to lie now, and some will be more susceptible because they don't pay attention to the moving target of what's possible with deepfake technology. But adding one more way to lie to the existing pile of a thousand ways doesn't change the balance, especially since the effectiveness of a lie doesn't depend on its believability, but on whether it tells a story the listener wants to believe.


Photoshopped images have been deceiving people for decades. It's just getting easier to scale production and harder to identify manipulated images.


George Orwell, in his writings about the Spanish Civil War, complained bitterly about the recent loss of any sense of shared objective truth, and about how cheap local newspapers would write up entirely fictional battles which the readers back home would take as true.


https://www.theawl.com/wp-content/uploads/2010/05/0yu1y3zKUe...

I think "fake news" has been going on a bit longer than maybe you might realize.

The USA started a war with Spain and conquered the Philippine islands over headlines like this in 1898.

https://en.wikipedia.org/wiki/USS_Maine_(1889)


If you think augmented imagery started with VFX, then you would not believe what people were able to do in a darkroom. Deepfakes can be made with an X-Acto blade.


This guy gets it. Evil people can frame other people for things they didn’t do, creating completely photorealistic videos as evidence. It doesn’t matter if the truth comes out eventually if the pitchforks have already come out to enact revenge and get their drop of blood. And on the flip side, any evidence of their wrongdoing is just “fake AI news”.

Welcome to the new world.


I think the point of the article, and it is a very valid point these days, is that it is the ease with which we turn to the pitchforks that is the problem that truly needs to be addressed.

“We need to put our focus on transforming society to be able to deal with this.”


The solution to that is not to believe anything and just go on with your life.


[flagged]


To be fair, the UFO sightings also show up on our cameras, both the ones in our pockets and the gun cameras on our fighter jets.


I mean, I personally don't like this take. People believing lies throughout history has caused a massive amount of damage ("looks like it's time to go to war again").

Let's use an analogy. How much stuff have you lit on fire in your life? You can have something very flammable burning, and as long as it's separated some distance from other flammable objects, you don't have to worry about it torching everything you know and love.

The problems tend to crop up with fire when you're ignorant or complacent (I'll put stupidity in this classification too). That 'little fire' you thought was under control suddenly isn't, and you can't take it back. You can only hope that either by a massive amount of work you can put everything out, or that the flames will find a boundary they cannot spread past.

When it comes to communications, the world doesn't have boundaries any longer. The dumb shit someone posts on Twitter can be at the top of Everest and in the middle of the Pacific Ocean less than a second later. This dumb shit can cause riots. Can cause governments to launch weapons. Can cause neighbors to kill each other. The protection we were once afforded by messages taking a long time to spread is long gone. We're well beyond tricking one person at a time, or even one nation at a time. We can lie, deceive, trick, bullshit, distract, and beguile an entire planet all at once.

Quantity is a quality in itself.


> People believing lies throughout history has caused a massive amount of damage ("looks like it's time to go to war again").

"Looks like it's time to go to war again" isn't a lie, it's an opinion. And the case I think you need to make here is that people believing what at the time were intended to be "lies" have caused more damage than people believing things that were spread in the honest belief they were true i.e. that "lies" are both easily defined and distinctly dangerous.

That's a far more difficult case.


> "Looks like it's time to go to war again" isn't a lie, it's an opinion.

Eh. The Spanish-American War was started because of a likely false assumption. And posters which literally dehumanize the opponent are a form of lie. There was a time, I've read, when some Europeans literally believed Jews had horns.

https://theisraelbible.com/do-jews-have-horns/ (Due to a mistranslation) "In Christian art of the Middle Ages, Moses is depicted wearing horns, most notably in Michelangelo’s statue created from the 16th century. This led to the persistent anti-Semitic belief that all Jews had horns and were in league with the devil."

> And the case I think you need to make here is that people believing what at the time were intended to be "lies" have caused more damage than people believing things that were spread in the honest belief they were true

I think you're nitpicking. Often enough the second person to spread a purposeful lie does so because they believe it to be true.

A ton of manipulated media that people spread as fact is going to be media made for the fun of it, not as purposeful lies.


We also have people fact checking it or expressing doubt within seconds.

> We can lie, deceive, trick, bullshit, distract, and beguile an entire planet all at once.

Whereas for the last 150 years it took a day for the news articles to be published across the Atlantic.


> The technology hasn't changed anything.

I think this is completely wrong. It's now at massive scale, and you are much more likely to be exposed to convincing lies in a world where it's harder to understand what's actually going on. It's possible to live your entire life lying and get away with it because of improvements in living standards and the society we live in.

I know quite a few people who are now living in a parallel reality as they've been exposed to specious nonsense and are indeed immersed in falsehoods. And they point to the volume of it as one of the reasons why these things must be true.


> It's now at massive scale, and you are much more likely to be exposed to convincing lies in a world where it's harder to understand what's actually going on.

This premise isn't valid. The fact that there are a lot more messages going around (and we can safely assume a lot more lies in rough proportion) has no bearing on the likelihood that one is exposed to convincing lies.

1) The more lies, the less convincing each one will be. That's how markets work. Therefore, at some saturation of lies, none will be convincing, and your problem becomes one of discovering truths.

2) Entire societies can believe in a single convincing lie.

3) You don't even have to lie to create untruths. If I were going to run a poll as a test, as a first approximation I'd ask two questions: 1) Was Saddam Hussein one of the people who plotted 9/11, and 2) Did Russia hack voting machines in 2016, changing the totals to a Trump win? They're good examples of untruths that aren't lies; they were never the stated positions of any authority, instead they were intentionally incepted through FUD and vagaries.

So: more lies, fewer suckers; fewer lies, more suckers. And you can create lies without telling them.


Imagine someone making the argument that email and messaging platforms don't change anything from when we had ink and quill scribes. Hey it's just sending a written message to someone, how could it make any mentionable difference?


> The technology hasn't changed anything.

This is factually untrue.

Technology has absolutely changed many things. The anonymity of the internet means that running scams is much easier. The global reach of the internet means that you can reach across borders and play legal games in a way that was much more difficult before. The automation of phone calls and of sending out emails in bulk means that you can try to deceive tens of thousands of people at a time, instead of just one by one. "Spam" barely existed before the internet and became much more pervasive after -- and the shape and economics of the "scam funnel" are a fundamental part of it.

Sure, "people have always been able to lie, deceive, trick, hide, bullshit, distract and beguile" -- but the means they use to do so have changed, and claiming that "the technology hasn't changed anything" is objectively false. The means matter, and more than just for the fact that changing methods means changing strategies to combat those methods.

Scale matters, and technology has massively increased the scale of many things, including the ability to deceive others.

Vice may be arguing the point extremely poorly, but the point itself is a good one, and worthy of consideration.


This is a dishonestly weak take. It's like saying people have always communicated with each other, therefore the introduction of the internet hasn't changed anything. The difference, of course, with AI is scale and access, which are completely revolutionized by the new technology.


This is a complete lack of a take. Instead of saying what it is like, say why it's wrong.

Saying that this view is wrong because of another view that you're also not going to refute is a no-op. The internet didn't change the vast majority of things significantly. It changed the things it changed.


That's a bit like saying that fentanyl is nothing new because opium addiction dates to antiquity.


If you're going to do this, you have to explain what characteristics of fentanyl are new, and why that makes the history of opium (and heroin) addiction not the most relevant experience to apply to the fentanyl crisis. That will be a very difficult thing to do. You can't just handwave at the new thing and say it completely changes the game. That's not an argument, that's a car commercial.


The difference is in intensity and quantity. Fentanyl vs. opium: breathing in a bit of dust from handling a bag of pills is potentially lethal, and mass manufacturing has put tons of it onto the market -- compared to opium, which took dedicated farming and harvesting practices, was relatively limited in supply, and was at least very expensive to overdose on. ML vs. Photoshop: results that once took experts many hours of work are now available to anybody with a momentary whim; likewise, voice-cloning scams that would have taken professional impersonators hours of contact with their bait-person have been reduced to a quick phone call.


No. Video and photos (in their original forms) do not show untrue things. They show the "first-hand fact" in a particular time and space.

What can happen is that people misinterpret these "first-hand facts" with assumptions and priors, which generates "second-hand untrue facts" based on them. If you have all the necessary information to trace back from the second-hand untrue facts, you can always discover where the assumptions and fallacies were made.

Now what we have is "first-hand untrue facts" from the source, and there is no way to trace back and find such loopholes. It's similar to not having ground truth, or having the wrong ground truth, when doing supervised learning.


This is so obtuse, that it must be trolling.

It’s like saying the printing press changed nothing! People could write manuscripts all they wanted. So what if you drove the cost down significantly? It’s people who have the ideas, not books.


Another "it's like."

> So what if you drove the cost down significantly?

I feel like you're saying this as if it's a rhetorical question that you don't have to expand upon. "What were the implications of lowering the cost of print?" is a question that takes thought to answer; you can't just breathlessly announce that it's changed everything.

I feel like everyone needs a refresher on distinctions and differences. If lightning struck everyone who spoke English and suddenly they spoke French, everything in the English speaking world would change but very little would be different.


> The technology hasn't changed anything.

Exactly.

As Exupéry pointed out in "Terre des hommes" ("Wind, Sand and Stars"), technology does not abstract or hide us from nature and its essence; quite the opposite, it allows, even thrusts, us further into that essence -- where it might not be pretty to be, whether in the middle of a mountain or desert, or in the thick of social (dis)connectivity.

Whether we can cope with consequences, is another matter..


The obvious retort here is that network effects modify just what bullshit means. And that the technological difference is qualitatively different. We've all heard that back and forth (it's all been done vs. technological modifications in scope / scale) so many times that I'm not sure it bears repeating. Do you have a take on this general topic?


One thing that "technology" has changed is how people can interact.

On the negative side, de facto censorship, such as on Reddit, has created echo chambers where people can come to think hitherto fringe ideas are normal.

On the positive side, if you look hard enough (and it is getting increasingly hard), you can find alternative POVs.


>nothing ever happens

nuclear bomb goes off in your city

>uh it's clearly a nothingburger

What's the deal with people leaning THIS HARD into normalcy bias? It comes off as straight-up obvious denial of reality at this point.


People have socialized forever, social media hasn't changed anything.

People have been bartering forever, ecommerce hasn't changed anything.

People have been talking forever, email and IRC haven't changed anything.


> The technology hasn't changed anything

It changes reach and the ability to flood the space with a lie so that the truth is the outlier, or to flood the space with many lies so nobody knows which to believe.


All this AI technology is groundbreaking, revolutionary, going to change everything, etc. except with the bad stuff, it won't change any of that.


I have a much shorter and optimistic version, which is that these models break a theory of mind that underpins a lot of bad stuff.

The basic idea is: since we have shown that we can simulate all consistent artifacts of language in an LLM, it's proof that language can (and therefore does) exist on a substrate. That substrate for us is the real, and what kids who encounter AI now will understand intuitively is that the artifacts of language and the real are necessarily separate things. It's the end of subjectivity and everything stacked on top of it, thank literal God. I think people can be forgiven for having been deceived by some clever linguistic hacks that unmoored them from reality, and now we can look at the uncanny artifacts of the logic of ideas over language, using sound models, for what they are.

Welcome to the desert of the real!


> that substrate for us is the real, and what kids who encounter AI now will understand intuitively is that the artifacts of language and the real are necessarily separate things

That sounds pretty optimistic. Wouldn't kids that grow up with AI consider it part of the real, by virtue of never experiencing a life without it? If I were to use the same logic for social media, I could say:

"kids who grow up with social media will understand intuitively that the artifacts of online popularity and the real are necessarily separate things"

which does not sound true to me.


> which does not sound true to me.

Which part? Smart kids who grow up with social media will understand from experience that the artifacts of online popularity have a direct and significant effect on their real lives, so...

> "[...] the artifacts of online popularity and the real are necessarily separate things"

...is definitely not a true statement. A "real" separate from the interference of newfangled technologies is just middle-aged reaction. We don't pretend that things said on telephones don't have an effect on our real lives, or announcements made on television won't affect us.

edit: I honestly thought we moved from "irl" to "afk", then dropped them both as we accepted that we would carry keyboards with us 24 hours a day.


This is a good analogy. I'd argue that what social media did was give kids a strategic and meta understanding of popularity, where there is very much a public and private self. AI will give kids a strategic and meta understanding of language and ideology.

The phone reference is not right, as people specifically use telephones to "pretend": committing something to writing makes it consequential and 'real', whereas saying something on the phone is he-said-she-said. The phone is explicitly the domain of pretending. Regarding TV: a minimum of 30% of people demonstrably ignore what is announced on television as well, so yes, something on television will affect us because it affects others, but that doesn't make it true.

A friend's argument was, "Just because you believe Santa Claus is a lie doesn't mean xmas isn't going to come every single year, and you'd be dumb to ignore that reality." The counter to this is, "Christmas does come every year, and when sustaining the dissonance of the crassness and lies about the season becomes too much, a few more people recognize what it exists to celebrate."

I'm saying that AI is making it untenable to believe narratives spread via the internet over what you see and physically experience, and this is a cause for optimism.


I see your point. But couldn't you use the same argument without AI? The internet already has multiple mutually-exclusive narratives, and all of them have people that take them as reality.

I would argue that kids that grow up in that context either pick a side and engage in the eternal narrative war, or become cynics and despair, thinking that there's no objective truth.

Adding more narratives, now generated by AI, doesn't (IMO) change the current playing field that much. My bet would be that kids will become even more cynical: "if a machine can make anything sound like truth, there probably isn't any objective truth"


What's unique about LLMs is that they break the idea that consciousness is a function of language and its narratives, and a whole bunch of other anxieties go with it. My optimism is that it clicks that truth does not come from internet narratives, or from any ideology or narrative for that matter, and that more people are driven to develop competence and real experiences instead of imagined ones.

Optimism is rarely rational but it's always risk taking, and that's worthwhile, imo.


I agree with the first sentence, I think we only differ in the second one, where you assign more probability to that "click" happening than I do.

But anyway, I can 100% agree with optimism being worth it. Thanks for the discussion!


I love it. The media goes from being a source of truth to purely a source of arbitrary images and narratives. It's understandable that the people who own the media would be upset by this.


Can you elaborate on this some more? Are you essentially saying the same thing as the treachery of images?

https://en.wikipedia.org/wiki/The_Treachery_of_Images


Subjectivity is a relic in the museum of hyperreality.


The internet had its Eternal September. Now we've entered Eternal April.



1000 Hours Free — Sign on Today!


A glimpse of what's to come... https://apple-ai.vercel.app


A few of the questions I touched on in this site's about page (April Fool's Day joke):

What will happen when millions of websites are auto-generated by AI and the authors aren't as well-intentioned?

How much longer will web developers like myself create these types of websites with hand-written code?

With the exponential progress in AI, where will this all lead over the next few years? Over the next decade?


Suddenly, the iconoclasm of the 8th-century Byzantines doesn't seem so far-fetched after all...


The accelerationist in me wonders if this is a positive development in the medium term. If everyone's convinced that the entire internet is sketchy, then much like state-controlled media in a dictatorship, or tabloids, no one will actually believe it.

The scale and sophistication of disinformation is getting to the point where actual verification will be extremely difficult, or down to personal connections/experience, outside of direct statements. Maybe the casual majority, whose favorite show is NCIS, will just throw up their hands and start living more locally and draw from verified sources when they do go on the internet. One can hope, but in the meantime we're in for some chaos.

There will also be plenty who can't/won't adapt, and stay stuck in various internet fever dreams for the rest of their lives as they desperately seek that next dopamine hit, and various AI algorithms tailor what they see to keep them hooked.


> If everyone's convinced that the entire internet is sketchy, then much like state-controlled media in a dictatorship, or tabloids, no one will actually believe it.

This breaks down because "everyone" does not include about 30% of people (and 30% is probably generous). Evidence for this abounds.


> "everyone" does not include about 30% of people (and 30% is probably generous).

I suspect that you think that there's only one lie, and that it's a type of person [edit: and I'd also guess that that person is deplorable.] There are any number of lies, and all of them are believed to different extents. Some of them 99% of us believe, others 1% of us believe.


To a first degree, sure, but I'm also very keenly aware that I'm in that 30% on many topics. It makes the inability for "everyone" to be suspicious of everything even more insidious, because it means we can't just have a nice civil war and kill the "right" 30% of people to clear things up.


Images and videos are going to need to be signed in order to prove their provenance.
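
For a sense of what that might look like, here's a minimal sketch of signing an image's hash (assuming Python's third-party `cryptography` package and a hypothetical photo.jpg; real provenance schemes such as C2PA additionally embed the signature and signer certificates in the file's metadata):

    # Minimal sketch: sign an image's hash so others can verify it wasn't altered.
    # Assumes the `cryptography` package; key distribution and trust are out of scope.
    import hashlib
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # The camera or publisher holds the private key; viewers are given the public key.
    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    with open("photo.jpg", "rb") as f:  # hypothetical image file
        digest = hashlib.sha256(f.read()).digest()

    signature = private_key.sign(digest)  # published alongside the image

    # Anyone holding the trusted public key can check the image against the signature;
    # verify() raises InvalidSignature if the file or signature was tampered with.
    public_key.verify(signature, digest)

The hard part isn't the cryptography; it's deciding which signing keys to trust in the first place.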


Which is great. The reaction to "AI"s polluting the Internet with their trash is going to create a lot of new kinds of communities and technologies to facilitate authenticity.


Potential upside: People might develop critical thinking skills and state-level disinformation campaigns may be drowned out by kids making fakes for the lulz.


Before, only people with resources and skills could afford to create fake content. Now everybody can. How is this not an advancement, taking into account that you can't take away the ability to create fakes from people with resources?


Because, according to lore, it doesn’t go well when Loki is the dominant god.



