Sarkar argues that “AI shaming arises from a class anxiety induced in middle class knowledge workers, and is a form of boundary work to maintain class solidarity and limit mobility into knowledge work.”
I think there is at least some truth to this.
Another possible cause of AI shaming is that reading AI writing feels like a waste of time. If the author didn’t bother to write it, why should I bother to read it and provide feedback?
This latter piece is something I am struggling with.
I have spent 10+ years working on teams primarily composed of people whose first language is not English, in workplaces where English is the mandated business language. Ever since the earliest LLMs started appearing, the written communication of non-native speakers has become a lot clearer grammatically, but also a lot more florid and pretentious than they intend. This is really annoying to read, because you have to mentally decode the LLM-ness of their comments/messages/etc. back into normal English, which costs more cognitive overhead than their blunter and/or broken English used to. But, from their perspective, I also understand that it saves them cognitive effort to just type a vague notion into an LLM and ask for a block of well-formed English.
So, in some way, this fantastic tool for translation is resulting in worse communication than we had before. It's strange and I'm not sure how to solve it. I suppose we could use an LLM on the receiving end to decode the rambling mess the LLM on the sending end produced? This future sucks.
That is to be expected: you add a layer between the two brains that are communicating, and that layer only adds statistical averaging to the message.
I write letters to my gf, in English, though English is not our first language. I would never ever put an LLM between us: it would fall flat, erase who we are, mangle our cultural references, and just not be interesting to read, even if it could perhaps make us sound more native, in the style of Barack Obama or Prince Charles...
LLMs are going to make people as dumb as GPS did. Except that where reading a map was never a particularly useful skill, writing what you feel... should be.
I thought about this too. I think the solution is to send both the prompt and the output, since the output was itself selected by the human, potentially from multiple variants.
Prompt: I want to tell you X
AI: Dear sir, as per our previous discussion let's delve into the item at hand...
> why should I bother to read it and provide feedback?
I like to discuss a topic with an LLM and generate an article at the end. It is more structured and better worded but still reflects my own ideas. I only post these articles on a private blog, and I don't pass them off as my own writing. But I find the exercise useful because I use LLMs as a brainstorming and idea-debugging space.
> If the author didn’t bother to write it, why should I bother to read it
There is an argument that luxury goods are valuable because they are typically handmade: in a sense, what you are buying is not the item itself but the untold hours "wasted" creating it for your exclusive use. It is, in a sense, "renting a slave" - you have control over another human's time, and that is a power trip.
You have expressed it perfectly: "I don't care about the writing itself, I care about how much effort a human put into it"
Well, with regard to photography, that is a lazy point. It is both true and not true, and also irrelevant. Artistic photography takes a lot of effort, training, and taste; using an iPhone to take an arbitrary picture does not. I don't value photography that takes no effort, just as you don't. Ultimately, photography is recording data: light that traveled to the camera. AI writing is regurgitating someone else's data.
When you use GenAI to produce work, it is to a significant extent not your own work. If you call it your own work then you are committing fraud to some degree. You are obscuring your own contribution. This is not separate from quality, since authorship is a fundamental aspect of quality.
Agree that this helps in some ways, but it also adds complexity. You can e.g. subscribe to a billing SaaS, but then you have to link your system to the billing system somehow
I saw an interesting idea floating around a while back: Put in a screen instead of the trackpad. I can see it being useful for drawing as well as picking emojis and so on, but it couldn’t be always available like the touch bar can.
Have you considered adding a “business app” category? Thinking of things like CRMs and learning management systems. Maybe Salesforce would be the holotype?
Looks great! I entered a value for an item, and it was automatically denominated in USD. The amount I entered was in NOK. How can I change the currency? I checked the profile page, but I couldn’t find it there.
Interesting take — hadn’t thought about it that way before.
There is one crucial piece missing, though: universities’ incentives. At least in Norway where I live, universities are paid the same for many STEM, humanities and social sciences programs. This causes an oversupply of e.g. historians compared to e.g. software developers, as universities are free to ignore labor market realities.