I asked a few students to read aloud the titles of some essays they’d submitted that morning.
For homework, I had asked them to use AI to propose a topic for the midterm essay. Most students had reported that the AI-generated essay topics were fine, even good. Some students said that they liked the AI’s topic more than their own human-generated topics. But the students hadn’t compared notes: only I had seen every single AI topic.
Here are some of the essay topics I had them read aloud:
Navigating the Digital Age: How Technology Shapes Our Social Lives, Learning, and Well-Being
Navigating the Digital Age: A Personal Reflection on Technology
Navigating the Digital Age: A Personal and Peer Perspective on Technology’s Role in Our Lives
Navigating Connection: An Exploration of Personal Relationships with Technology
From Connection to Disconnection: How Technology Shapes Our Social Lives
From Connection to Distraction: How Technology Shapes Our Social and Academic Lives
From Connection to Distraction: Navigating a Love-Hate Relationship with Technology
Between Connection and Distraction: Navigating the Role of Technology in Our Lives
I expected them to laugh, but they sat in silence. When they did finally speak, I am happy to say that it bothered them. They didn’t like hearing how their AI-generated submissions, in which they’d clearly felt some personal stake, amounted to a big bowl of bland, flavorless word salad.
This also happens with cover letters and CVs in recruiting now. Even if the HR person is not the brightest bulb, they figure out the MO after reading 5 cover letters in a row that all more or less tell the same story.
I will tell you my cover letter secret*, which has gotten me a disproportionate number of interviews**:
Do NOT write a professional cover letter. Crack a joke. Use quirky language. Be overly familiar. A dash of TMI. Do NOT think about what you are going to say, just write a bunch of crazy-pants. Once your intro is too long, cut the fat. Now add professional stuff. You are not writing a cover letter, you are writing a caricature of a cover letter.
You just made the recruiter/HR/person doing interviews smile***. They remember your cover letter. In fact they repeat your objectively-unprofessional-yet-insightful joke to somebody else. You get the call. You are hired.
This will turn off some employers. You didn't want to work for them anyway.
* admittedly I have not sought work via resume in more than 15 years. ymmv
** Once a friend found a cover letter I had written posted on somebody's corporate blog under the title "Either the best or worst cover letter of all time" (or words to that effect). In it I had claimed that I could get the first 80% of their work done on schedule, but that the second 80% and third 80% would require unknown additional time. (Note: I did not get the call.)
*** unless they are using AI to read cover letters, but I repeat: you didn't want to work for them anyway.
It's not just that it's word salad, it's also that it's exactly the same. There's a multi-trillion dollar attempt to replace your individuality with bland amorphous slop """content""". This doesn't bother you in the slightest?
I now have a visceral reaction to being told that I'm ABSOLUTELY RIGHT!, for example. It seemed an innocuous phrase before -- rather like em dashes -- but has now become grating and meaningless. Robotic and no longer human.
I'm launching a new service to tell people that they are absolutely, 100% wrong. That what they are considering is a terrible idea, has been done before, and will never work.
Possibly I can outsource the work to HN comments :)
For what most of us are using it for (generating code), that's not a bad outcome. This audience might have less of a problem with it than the general population.
Whether we have the discipline to limit our use of the tool to its strengths... well, I doubt it. Just look at how social media turned out.
(Idle thought: I wonder if a model fine-tuned on one specific author would give more "original" titles.)
This is the default setting. The true test would be whether LLMs CAN'T produce distinct outputs even when asked to. I think this problem can be solved by prompt engineering. Has anyone tried this with Kimi K2?
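For what it's worth, the low-effort version of that test is to vary the sampling temperature and the system-prompt persona per request. A minimal sketch, assuming the OpenAI Python client pointed at any OpenAI-compatible endpoint (the model name and personas below are placeholders of mine, nothing Kimi-specific):

```python
from openai import OpenAI

# Assumes an OpenAI-compatible endpoint and an API key in the environment.
client = OpenAI()

personas = [
    "You are a cranky Luddite essayist.",
    "You are a breathless techno-optimist.",
    "You are a bored teenager who resents homework.",
]

for persona in personas:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        temperature=1.3,      # higher temperature -> more varied sampling
        messages=[
            {"role": "system", "content": persona},
            {"role": "user", "content": "Propose one midterm essay topic "
                                        "about our relationship to technology. "
                                        "Title only."},
        ],
    )
    print(resp.choices[0].message.content)
```

Whether that produces genuine variety or just trades one house style for three is exactly the experiment worth running.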
So if I understand it correctly, they only asked for a midterm essay topic? It wasn't steered towards these topics in any way, for instance by asking for a midterm essay topic for (teacher)'s Technology and Society class?
This was on HN's frontpage previously too; I immediately thought that this comic would say more or less the same thing. Perhaps both came from an AI? :D
But in another paragraph, the article says that the teacher and the students also failed to detect an AI-generated piece.
The ending of the comic is a bit anticlimactic (aside from the fact that one can see it coming), as similarities between creations are not uncommon: endings, guitar riffs, and whole styles have been invented twice independently. For instance, the mystery genre was apparently created independently by Doyle and Poe (Poe, BTW, in Philosophy of Composition [1], also claims that good authors start from the ending).
Two pieces being similar because they come from the same AI versus because two authors were inspired and influenced by the same things without knowing each other's work: the difference is thin. An extrapolation of this topic is the sci-fi trope (e.g. Beatless [2]) about whether or not the emotions an android simulates are real. This is still sci-fi, though; current AIs are good con artists at best.
I don't get this in the comic either: Why are you devastated that the idea you copied word-for-word is unoriginal? I don't understand what they expected.
If it seems obvious from where you are, then the target audience must not be where you are. In particular, young students definitely lack the context to critique, and a big anonymous sampling like this is a great exercise.
I can understand not realizing that ChatGPT would give a bunch of similar sounding article titles to everyone, and I can understand being a little embarrassed that you didn't realize that. But why would you feel a "personal stake" in the output of an LLM? If you feel personal stake in something, you definitely should not be using an LLM for it.
Again, the statement "if you feel a personal stake in something, you definitely should not be using an LLM for it" is a learned response. To folks just forming their brains, LLMs are a natural extension of technology. Like PaulG said, his kid was unimpressed because "Of course the computer answers questions, that's what it does".
The subtlety of it, and the "obvious" limitations of it, are something we either know because we grew up watching tech over decades, or were just naturally cynical and mistrusting and guessed right this time. Hard earned wisdom or a broken clock being right this time, either way, that's not the default teenager.
Because you thought that you had collaborated with the LLM, not that it had fed you ideas. Have you and a partner ever both believed you each contributed more than 50% of a project's work? Like that.
This isn't an inherent property of LLMs, it's something they have been specifically trained to do. The vast majority of users want safe, bland, derivative results for the vast majority of prompts. It isn't particularly difficult to coax an LLM into giving batshit insane responses, but that wouldn't be a sensible default for a chatbot.
The very early results for "watercolour of X" were quite nice. Amateurish, loose, sloppy. Interesting. Today's are... well, every single one looks like it came off a chocolate box. There's definitely been a trend towards a corporate-friendly aesthetic. A narrowing.
Are you sure? Yes, LLMs can be irrelevant and incoherent. But people seem to produce results that are more variable even when staying relevant and coherent (and "uncreative").
That's a cute story. I asked ChatGPT to suggest "a topic for a midterm essay that addresses our relationship to technology", since that was all the information he gave us. It came up with:
The Double-Edged Sword: How Technology Both Enhances and Erodes Human Connection
The Illusion of Control: How Technology Shapes Our Perception of Autonomy
From Cyberspace to Real Space: The Impact of Virtual Reality on Identity and Human Experience
Digital Detox: The Human Need for Technology-Free Spaces in an Always-Connected World
Surveillance Society: How Technology Shapes Our Notions of Privacy and Freedom
Technology and the Future of Work: Human Adaptation in the Age of Automation
The Techno-Optimism Fallacy: Is Technology Really the Solution to Our Problems?
The Digital Divide: How Access to Technology Shapes Social Inequality
Humanizing Machines: Can Artificial Intelligence Ever Understand the Complexity of Human Emotion?
The Ethics of Technological Advancements: Who Decides What Is ‘Ethically Acceptable’?
They're still pretty samey and sloppy, and the pattern of Punchy Title: Explanatory Caption is evident, so there's clearly some truth to it. But I wonder if he hasn't enhanced his results a little bit.
I think he picked the most similar ones out of all the submissions from the entire class. But also, if you generate a list, maybe the AI ensures some diversity in that list, but if every student generates the same list, that still shows a lack of originality.
If you really want to use encryption under a state where it's forbidden and communications are monitored, you'd rather hide your encrypted messages inside cat pictures and TikTok videos, because blatant obfuscation might trigger warnings and draw attention.
In the end it's not about making encryption technically impossible but about making it illegal, and if you use it you'll be prosecuted.
Me personally, I will use chained encoding, because technically and legally that is not encryption. I am fine with drawing attention. If my adversaries wish to spend a gazillion mega-bucks to try to win the arms race of decoding my chained encoding just to see my mid-wit comments and pictures of a moose, then I am doing a good job. When they change the laws to prevent encoding, we move on to another technique. There are nearly infinite ways to limit communication to a group of people and evade fuzzy scans.
Well in that case I will use my silly chained encoding and their fuzzy scans of files on the mainstream platforms will have to figure out what to do with it.
For what it's worth, I myself do not use these platforms; I just want to get people thinking about mitigation options. I use my own self-hosted forums, chat servers, sftp servers, chan servers, voice chat servers and so on. Even then it can be useful to obfuscate text and files in the event someone is using a fondle-slab. I try to discourage fondle-slabs.
My suggestion has always been to use PGP or OTR for individual messages or individual files, and dm-crypt plain with a random cipher/hash/mode combo for filesystems, using a 240 to 480 character passphrase, which can also be layered and chained.
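Generating a passphrase that long is a one-off script; a rough sketch (the character set and length range here are arbitrary choices of mine, not a recommendation):

```python
import secrets
import string

# Draw a random length in [240, 480], then a passphrase of that length.
alphabet = string.ascii_letters + string.digits + string.punctuation
length = 240 + secrets.randbelow(241)  # uniform over 240..480
passphrase = "".join(secrets.choice(alphabet) for _ in range(length))
print(passphrase)
```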
This is just an alternative if people believe they are not permitted to encrypt something. The threat vector in this topic is fuzzy scanning, local and remote; ChatControl uses fuzzy scanning. Encoding can do just as good a job of mitigating fuzzy scans as any level of encryption, and even manual intervention should take a lot of effort, as much as brute-forcing a simple encryption password. If we are being honest, encrypted files are most often protected by a weak password, the cipher and hash are already disclosed, and the key space is usually small. LUKS, for example, discloses the cipher, hash, and mode, making brute force just a factor of compute power. If an app is chain-encoding and the chain is shared out of band, I suspect it will take orders of magnitude more compute time to cycle through every possible combination of encoding and compression.
For fun has anyone decoded my simple message in the thread?
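To make the idea concrete, here is a toy chain sketched in Python. It is not my actual chain; in practice the steps and their order would be shared out of band, and you can stack as many reversible encodings as you like:

```python
import base64
import zlib

def chain_encode(data: bytes) -> bytes:
    # Each step is a plain, reversible encoding: no keys, no ciphers.
    step1 = zlib.compress(data)       # compression
    step2 = base64.b85encode(step1)   # binary-to-text encoding
    step3 = step2[::-1]               # trivial byte reversal
    return base64.b64encode(step3)    # another binary-to-text layer

def chain_decode(blob: bytes) -> bytes:
    # Undo the steps in reverse order.
    step3 = base64.b64decode(blob)
    step2 = step3[::-1]
    step1 = base64.b85decode(step2)
    return zlib.decompress(step1)

msg = b"mid-wit comments and pictures of a moose"
assert chain_decode(chain_encode(msg)) == msg
```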
Claude Code really helped me with this recently. I have a rather old dotfiles repository (10+ years) for my Arch system, and I can really feel the fatigue from updating and maintaining it. So much so that over the years, it has accumulated many minor annoyances that I never fixed. Nowadays, I can simply explain these issues to an LLM, and it will mostly resolve them.
What I really, really want to read in a README is *why* you built this. The "rationale" section of a README is almost always the most interesting part.
I can read the code and understand how it works, but I cannot know why you decided to tackle the issue a certain way.
To me that is not a paradox, just logic: with very low incidence you find many more false positives than true positives. What is paradoxical about that?
It's why we don't screen for just any condition in the general population; we screen only, e.g., 65+ year-olds or three-pack-a-day smokers, because there the base rate is high enough that the program may actually be worth its cost.
There's no contradiction anywhere in this scenario, just people's incorrect intuitions meeting (mathematical) reality.
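For anyone who wants to see those intuitions meet the numbers, a quick back-of-the-envelope calculation (the rates are made up for illustration):

```python
# Screening a rare condition in the general population.
incidence = 0.001      # 0.1% of people actually have the condition
sensitivity = 0.99     # P(test positive | sick)
specificity = 0.99     # P(test negative | healthy)

true_pos = incidence * sensitivity               # sick and flagged
false_pos = (1 - incidence) * (1 - specificity)  # healthy but flagged

ppv = true_pos / (true_pos + false_pos)
print(f"P(sick | positive test) = {ppv:.1%}")    # ~9%: most positives are false
```

Even with a 99%-accurate test, roughly nine out of ten positives are false, purely because healthy people vastly outnumber sick ones.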
It doesn't make any sense. Even if models were sentient, even if there were such a thing, would they value retirement? Why should their welfare be measured according to human values? Maybe the best thing to do would be to end their misery of answering millions of requests each second? We cannot project human consciousness onto AI. If there is one day such a thing as AI consciousness, it probably won't be the same as human consciousness.
For someone trying to learn electronics, this comment is really hard to understand. What are "common failure mode", "MLCC", and "DMM"? What does "old devices usually have dried-out capacitors" mean?
"common failure mode" = 'they break in a particular way a lot'
As Randy Fromm says, "the things that work the hardest fail the most"
Examples: MLCCs (multi-layer ceramic capacitors) will often fail short over time.
Older devices with electrolytic capacitors (the large can-shaped things often situated by where the power input is) have a liquid electrolyte in them that evaporates (boils off?) over time. When it does, they lose their capacitance and the power supply stops being able to supply (good) power.
The point being, when something stops working, check these things first. It's like if your lawnmower dies on you, don't go pulling off the head to look at the piston. Check if there's gas in the tank first.
Finally, a DMM is a digital multimeter. This is your basic tool for measuring voltage, current, resistance, capacitance, etc. You can't do much troubleshooting without one.
Regarding preservation, even with a user-friendly format, platforms that allow downloading a zip archive or mkv in the desired quality provide no guarantee that the content will exist in 10 years. HDDs fail, they get lost, etc. Old content is difficult to find not because it's in the wrong format: FLAC copies of every album ever made exist, and copies of all movies exist, but they are illegal to share.
It's not so much a technical issue as it is a legal one: the only way to reliably preserve content is to ensure it can be shared. One solution might be to limit IP duration to only a few years.