Our Society Is Not Prepared for This Much AI Awesome (jonstokes.com)
53 points by imartin2k on Feb 21, 2023 | 82 comments


I think rather than a world described here where “creators” (what a nauseating term) are terrified of being upstaged by AI, most are terrified by a world where AI can produce images that are 80% as good as a human but cost 99.999% less. It seems easy to imagine a world where almost all _economic_ gain from artistic endeavors is captured by a few companies, which would be both dangerous from a material perspective and frightening to imagine from a cultural perspective. What would our world look like if almost all art was compressed into sensibilities and values that flatter software engineers?


> What would our world look like if almost all art was compressed into sensibilities and values that flatter software engineers?

I suspect then artists would create art that goes against these trends.

I don’t know much about art, but I know there are roughly two kinds. There’s commercial art, meant to be sold or to sell something. That stuff follows a lot of advertising trends and gives us things like Material Design and Marvel movies that all feel the same. That art is relatively uninspiring, but it’s not really meant to be inspiring. It will be significantly affected by AI, because AI art is cheap, it looks kinda okay, and it’s easy to copy popular styles.

And then there’s art made by weird artists who buck trends and try to make some kind of statement that isn’t already being made. (Horrible overgeneralization, but hopefully you get my meaning.) That art will thrive in a world where most commercial art is even more soulless than it already is.


Stable Diff will easily out-imagine most of the "weird artists" out there; they are in fact the first victims, rather than the last holdout.


It’s not about having the most imagination. It’s about imagining something that means something.

When people have some common life experience that is difficult to describe, that’s where the best artists can create something that speaks to the heart in a way other people understand.

Stable diffusion is completely incapable of generating something like that on its own - something that speaks to something we all feel but struggle to describe. Yes a human can use SD to generate art that does have this meaning, but in that case the deep meaning comes from the mind of the human creating the prompt and selecting the output. SD may surprise us in interesting ways, but ultimately powerful art relates to the human experience in a way that algorithms will never fully understand.


Mostly, the meaning of art depends on who is looking at it. When encountering an art piece for the first time, we should give that meaning priority and only then receive "spoilers": the stories about the creation of the work, which may be unrelated to how we experience it.

Under Stable Diff, the same prompt can generate countless images which are substantially, even completely, different from each other. Nobody would guess they came from the same prompt. And people are not always able to guess prompts; a common question is "what prompt did you use?", especially from people who don't yet understand that you need the exact seed value and version of the model to reproduce an image. The very dimensions of the image influence the content too, as do some other parameters.

You could blindly generate or select text for a prompt (meaning that you don't read the prompt and don't know what it is), feed it to Stable Diff, and obtain images which evoke a meaning that you could allow to unfold without the interpretation being influenced by the prompt.

The prompt isn't all that important; it sets a general thematic direction, in that there will be identifiable elements in the result which are related to the prompt. Someone who believes they have a good prompt and is after a specific result may have to generate hundreds of images with that prompt just to get one good image matching their intent. Plus there are inpainting techniques and whatnot. Basically, the prompt doesn't encode a specific, detailed meaning.
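To make the reproducibility point concrete, here is a minimal sketch using the Hugging Face diffusers library (the model name and prompt are just examples, not anything from this thread). Getting the same image back requires fixing the model version, the seed, and the dimensions, not just the prompt:

    # Minimal sketch with the `diffusers` library (example model and prompt):
    # the same prompt with different seeds yields substantially different images.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    prompt = "a lighthouse in a storm, oil painting"
    for seed in (1, 2, 3):
        generator = torch.Generator("cuda").manual_seed(seed)
        image = pipe(prompt, height=512, width=512, generator=generator).images[0]
        image.save(f"lighthouse_seed_{seed}.png")  # three very different results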


You’re right that SD can create images which stir in the viewer some sense of meaning. But there is a type of art where the artist intends to communicate a message, usually a message about the human experience. SD has no human experience and no intention, thus it cannot intentionally craft a message for others to receive. I’m not saying SD can’t create valuable art, but that SD cannot fully displace human art. Even if we had complete AGI, those beings would create their own art, and there would still be room for human art.

And then there are things like photography. I take photographs of real places and things. It’s not about having a nice-looking picture, which SD can create, but about capturing a real moment. You could have a robot capturing images and deciding based on some criteria what might look pleasing, but there will still be room for human artists to capture something they find meaningful.

There is other art, like physical art installations, music, club mixes for dancing, and more, which are all meant to touch people. Picking a random song, for some reason I’m thinking of Garth Brooks’ “The Thunder Rolls”, about a man who cheats on his wife. The choice of topic, of lyrics, of intonation and timing and delivery, is all based on some human experience.

I mean hell, I grew up in California. I could try to write a song about growing up in Japan, or England, or Kenya. But the people from those places would find something missing. I might be able to fool another Californian into believing what I wrote; I could even find success with it. But it’s not going to replace real stories from the people who really grew up there.

It’s the same with algorithms with no internal experience. Even with AGI that has its own internal experience, that experience will be so different from humans’ that there will still be room for human art. We might appreciate each other’s art and share in experiences like being abused by corporations and longing to be free, but there will still be differences. And those differences create opportunity.

Just to reiterate: SD is cool! I’ve run it at home and played with it a bunch. I’m not saying it isn’t useful nor am I saying it won’t be world changing. What I am saying is it will never fully displace human art.


I agree, but only under the assumption that "weird artists" are not actually unique, but delusional.

This is an opportunity for real artistic innovation to find a foothold, but first we'll have an existential crisis.


My use of weird in this sense is tautological. I’m supposing you have to be in some sense weird in order to create art that moves people. This weirdness is what allows the creator to see what we’re all seeing in a slightly different way which makes their art meaningful.


My only problem with what you're saying is that you seem to be coming from the perspective that artists who buck trends, push the boundaries of art, or simply provide a new perspective already have some kind of influence today.

I'm much more cynical, or idealistic, in that I don't believe that's the case. My argument is that currently we have no such artists, or they're buried under rubble. We do have a lot of people who claim to, or are claimed to, be pushing the boundaries of art, but they actually don't have any effect at all.

What we have is a lot of smoke being blown up people's butts, when the truth is that we're culturally stagnant. Going to the Tate Modern in London is like a masochistic challenge to see who can emerge from it with enough energy to say things like "wow, that's cool" and desperately try to find one piece of art about which they can say "that's my favorite, it really made me think".

I can tell you that personally I've already said these things about some AI generated art with real enthusiasm. I still have one AI painting burned into my brain because it showed something really new to me.

That's why I believe it's exactly those ambitious artists who will be hurt the most, at first. It'll be a painful realization that they can't hide from anymore, and only then will truly "weird" artists emerge.


My partner is an artist. It’s just something they do at home, for their own benefit. They post some of it to Instagram as well, but they don’t have a lot of followers. Their art is deeply meaningful. It relates to what they’re going through at any given time. What they’re feeling. Most of it is in their personal journal that they don’t usually show off. When I do get to see it, I find it moving. It’s very abstract, but each piece carries emotion in a way I’d never be able to reproduce.

I met a guy on the street. He’s homeless. Lives in his van by a tent city in Oakland. He wasn’t selling his paintings or anything. I dropped off some donations to these folks and got to talking with this guy. I saw his art in his van and asked to see it. It was really deep stuff emotionally. He put a lot into it. And you know someone pushed to the sidelines, who finds his best option is living in his van with folks at a tent city, is going to have some interesting thoughts on the human condition.

I have a few friends who DJ at clubs. Nothing big. They’re doing it out of passion. It’s a local scene. People appreciate them. They get paid a small stipend, but it’s not about that. They want to connect with people through music. It’s about creating a moment and dancing.

I remember when the local radio station lost the DJs on air that fielded calls from listeners, selected music they wanted to hear, and provided some personality for the station beyond just playing a series of tracks. They got replaced by a playlist of music. I don’t know who created the playlist or who decided how to change it day to day. I stopped listening to the station.

I have never been to the Tate Modern. But there are weird artists out there making art because they feel they need to. For themselves mostly, and for others too. They have something to say, or something to explore.

Stable diffusion is a wonderful tool. I’ve installed it locally on my desktop and I’ve spent countless nights prompt engineering and playing with the results. It’s really going to help artists find new ways of expression. But the human artist using the tool is what channels meaning from real life into the medium. We may in some ways find ourselves working alongside the AI as it gets more capable. We may find ourselves in partnership with the algorithms. But human artists, the weird ones on the streets or at home making secret collages in their journals, have a story to tell which will never go away. Those people cannot be replaced by Stable Diffusion.

My partner makes collages with old photos of people they have never met, found at the secondhand store, combined with cutouts of construction paper and pieces of art that come through the mail from online artists via a weekly mailer. It is the process of creating the art that has meaning for my partner. And it’s done with paper and scissors.

Maybe some day they will print out some AI generated art and include it in a collage. But they will never stop making collages.


I could not agree less.


It's not just going to be artists, though. Doctors and lawyers are also on the chopping block. AI probably can't do surgery for a long time, but it could do a lot of diagnosing and pre-flight-checklist-type work, to the point where society maybe needs a fraction of the doctors, especially primary care physicians: an AI-enriched doctor could handle 5x the caseload without breaking a sweat.

Lawyers will probably always be needed in court, but a lot of lawyering is done in writing contracts and reading/deciphering them; again, ChatGPT excels here.

Terrifyingly, the military is already using AI to fly F-16s and fight dogfights. Package delivery will probably be disrupted in a similar fashion, etc. Lots of industries; anyone who just says "artists" has no clue what's coming.


Why would you think it would be created to flatter software engineers, rather than to flatter their users?

Many things have been said about TikTok, but nobody has ever said it does not attempt to show the user what will make them stick around; if anything, it has the reputation of being too good at it.


As a person who also takes photographs (and made some music), the only word which describes the situation is hell.


Most people making art do not need a financial incentive to do so.


I say this in a parallel comment but there’s both commercial art and creative art and they’re separate things. Commercial art will be significantly affected by AI but creative art won’t be, or AI will be used as just another tool in the toolbox for creative artists.


Absolutely. I also think there's a big sleeper group (the biggest) of people who make generic art for themselves and only show it to their friends and family, or don't show it at all. I'm one of those. I make it for myself, and this doesn't deter me in the slightest.

However, if you're only in it for the money I think you're in for a rough time.


>I say this in a parallel comment but there’s both commercial art and creative art and they’re separate things.

They aren't separate things. Plenty of artists - even in purely commercial fields - are creative and passionate about their work.

This bias is unfortunate and nearly omnipresent on Hacker News. It's assumed that to be a "true" artist, one must be willing to suffer, and any artist who makes any money must only be in it for the money. Meanwhile a lot of people here wouldn't so much as get out of bed for a salary with less than six figures, catered lunches, a gym membership, unlimited PTO and a pony, and yet they still consider themselves "knowledge workers" for whom only the purity of information and craftsmanship of code are priorities. But everyone else has to be a slave to supply and demand, and if that means they can't make a living, tough shit.

Like it or not, we live in a capitalist society, and passion and talent are commodities like everything else. Good art takes time to learn, time to maintain, time to execute, materials, licenses, etc., all of which cost money. Artists need to eat, they need a roof over their heads, they need to pay taxes. That money can either be made doing what one is passionate about, which means being tainted by capitalism, or doing something else, in which case the art suffers.


This article is really crap. It's trying to convince you that AI is "awesome" and that we are just banning it on the basis of lacking a Human-Stamp-Of-Approval (TM), but actually the sources are not saying that. The sources are saying that these ML models generate "questionable" work that at best some of it may pass a filter (erring on the side of caution), but overall it is either direct plagiarism or of "unconvincing" quality.

E.g.,

* Short fiction: "I’m not going to detail how I know these stories are 'AI' spam. There are some very obvious patterns [...] While rejecting and banning these submissions has been simple,"

* College essays: "I probably couldn’t detect the AI authorship, but that I also wouldn’t label the essays as convincing." "I found both essays to resemble cliché essays, with neither answering the prompt in a convincing way. They also didn’t sound like an essay a teenager would write [...]"

"New York City schools cited 'negative impacts on student learning, and concerns regarding the safety and accuracy of content,'"

"Word by word it was a well-written essay," he said, but on closer inspection, one claim about the prolific philosopher, David Hume "made no sense" and was "just flatly wrong." "Really well-written wrong was the biggest red flag," "while the grammar in AI-generated essays is almost perfect, the substance tends to lack detail"

None of this seems to justify TFA's claim that ML-generated stuff is "not bad — it's just not human". ML-generated stuff _is_ bad. Most definitely not awesome. Maybe it will improve in the future.


> Maybe it will improve in the future.

To be fair, I find this entertaining.

Very entertaining.

In the early 80s another wave of exciting AI happened (funded by the Japanese and US governments), but there was a question that shocked Douglas Lenat so badly he has dedicated himself ever since to building a system of basic facts: Cyc. The question is: if Susan goes shopping, does her head go with her? (Sorry if I don't quote it word for word correctly; it's been over a quarter century since I had my exam on this topic. :))

You can generate any number of such questions which a two-year-old human would have no trouble answering but which no AI, so far, could possibly answer unless, of course, that question and answer already occurred in some text. ChatGPT was called a stochastic parrot, and it's not just that ChatGPT happens to be one: the entire model cannot possibly produce anything else. Reasoning doesn't enter into this picture. So not only will it not improve in the future, it cannot improve.


> Reasoning doesn't enter into this picture.

> So, not only it will not improve in the future, it can not improve.

Clearly, with all of this AI pervading society, reasoning is a very small part of human communication.


I really dislike this narrative about AI helping people learn.

Learning is the act of actively reaching for something you didn't know before and making it yours. You do not learn by simply reading information that has been given to you. You learn by playing with the concepts and elaborating them yourself (rather than just memorizing them). If the elaboration is done by an AI, your brain is doing zero work and thus not learning properly. Asking an AI to "explain this concept differently" and reading its output is entirely different from thinking about the stuff and coming up with the results yourself.

AI is terrible for learning, but it's terrific for getting good grades. But ask yourself: what is the point of school then? Do you want to cheat your way out and find yourself without a sliver of knowledge when you're 25? Sure, it's romantic to think "I'm just gonna let the AI write all those boring essays while I learn how to program in Python in my newly found free time! That'll show those boring professors!", but we know that's just a fantasy.


> If the elaboration is done by an AI your brain is doing zero work and thus not learning properly.

Yup, it's basically Stack Overflow copy-pasting on steroids. There's a reason John Carmack's ideal learning environment is to literally go to a cabin in the woods, read a bunch of papers, and reimplement what he wants to learn from scratch.

I think there are two aspects to it. One is that you can't genuinely learn something until you really dig in. But also, psychologically, tool-assisted learning promotes a sort of learned helplessness. It's sad how often I hear people say nowadays that they think they're not capable of sitting down with a foundational book and starting from the bottom up.


We already have tons of schools where people de facto "cruise", and the levels there will plummet even more. Education in some countries consists mainly of bribing the teachers to sign the magic "you know it now" paper.

I think we will actually see paper spam in the "bad" academic sector, and the emergence of a good academic sector: publishing something refined and condensed once a year, and refusing to even look at the "filler".


When the article talks about AI helping people learn, I don't think that's about AIs creating shortcuts and doing work for us. I think it's talking about bespoke lessons devised by AI to maximize the potential of each student's innate abilities. We already know that spaced repetition can help us learn things faster. What if we got exactly the optimal amount of spaced repetition in all our subjects because the syllabus was designed for us by our AI tutor? What if the tutor knew the kinds of topics that kept us most engaged? What if the tutor was infinitely patient and would never get tired of explaining points we were having a particularly hard time with? What if the tutor knew exactly when we needed a break, and when to double up on the homework? What if the tutor could tell we were falling behind before we realized it ourselves, and adjusted things accordingly without us ever knowing?
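For a sense of what driving spaced repetition "optimally" means mechanically, here is a sketch of the classic SM-2 update rule (the scheduler behind Anki-style flashcard systems), which an AI tutor could apply per student and per topic. This is an illustration, not anything from the article:

    # Sketch of the classic SM-2 spaced-repetition update (Anki-style).
    # quality: 0-5 self-graded recall; returns the next review interval.
    def sm2(quality, reps, interval_days, ease):
        if quality < 3:                 # failed recall: start the card over
            return 0, 1, ease
        if reps == 0:
            interval_days = 1
        elif reps == 1:
            interval_days = 6
        else:
            interval_days = round(interval_days * ease)
        # Ease grows with good recalls, shrinks with shaky ones, floored at 1.3.
        ease = max(1.3, ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
        return reps + 1, interval_days, ease

    # e.g. a card recalled well three times in a row:
    state = (0, 0, 2.5)
    for q in (5, 5, 5):
        state = sm2(q, *state)          # intervals: 1 day, 6 days, ~16 days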


You can use an AI for learning if you go beyond treating it as a machine for cheating on your homework, and instead think of it as a 1:1 tutor.

Imagine the AI having a teaching curriculum that it tends to go through, but in a conversational manner that invites the student (or a group of students) to ask questions, go on tangents, etc. I think that's what he has in mind.

And it's easy to get excited about. I wouldn't call it studying, but I did have some enlightening conversations about cooking and baking with ChatGPT.

That said, I found the article quite breathless and one-sided. For example, he's obviously convinced about the AI teacher thing, but only mentions the fact-checking requirement in a half-sentence aside: AI makes stuff up, and I'm not sure having to double-check basically everything your school teacher tells you makes for a good learning environment, even if it's a good life lesson.

Also, leading every paragraph with an emoji is kind of obnoxious.


We can certainly imagine that (e.g. The Diamond Age: Or, a Young Lady's Illustrated Primer by Neal Stephenson is a good example of such a system), however, this is not what we are going through now. The currently built systems mess up the pedagogical tools and curricula we have, but do not yet provide an equivalent or better replacement, so at least in the short term it does make learning worse.

Also, the fact that someone can use a chatbot to explore material does not mean it is a good pedagogical tool for that. One of the functions grading has in pedagogy is motivation, nudging learners to actually do the work, and that is a valuable function; expecting people to self-motivate for self-guided education will work for some but will completely fail many of the learners who most need education.


Once the aforementioned AI's output can be trusted, I think it definitely has a place in learning (for the disciplined student).

For example, I hated not having answers to some maths problems. I'd spend hours on problems and I just wanted to know if my solution was correct.

People have a tendency to give up when they get stuck. But it's a tricky beast, as it's also often the moment when you learn new (and even unrelated) things. Wonderful feeling afterwards, but frustration during the process.

I think that an AI (that can be trusted) definitely has a place in giving you hints and helpful information. But hard problem solving is something you need to do.

I think what will happen is that homework will be reduced to 0% of your final grade. There will be in person, offline exams. That'll weed out all those who can't solve the problems themselves.


I feel the same as you; I also feel like the users of ChatGPT are lazy people who didn't seem to like learning stuff before. So why would they like it now that they have ChatGPT? They're just going to cheat life.

But frankly, yes, school is useless. I'm glad everyone now has a cheating tool; it's a free get-out-of-jail card.


There is also the problem that the author assumes permanent dependence on AI in every situation.

The moment you are separated from your tool of choice you become worthless, and unless the AI is open source, you will have to pay steep fees, as if the AI were your landlord.


If you don't learn anything from reading, I think you're in the minority. I agree that active doing, writing, thinking are often better but those aren't the only ways people learn.


Depends on what you mean by learning. Anything that requires active work (STEM subjects or crafts) cannot be learnt by just reading. Of course if you read a history book simply reading and remembering the information is considered learning in that case.


I've been reading many machine learning papers to keep up with the field in the last year and have only implemented some. Are you saying that I completely wasted my time reading them and learned nothing?


It depends.

If you are well versed in ML then you know the concepts very well and have already played with them extensively. Thus reading a paper might be enough to learn it because you have done all the conceptual work up front.

But even if you didn't learn the papers, you still have built a mental catalogue of ML results that you can go back and learn when the occasion to implement them arises.

So yes, you probably learned a chunk of them and some you simply remember, but that's ok.


You don't learn anything from reading, you learn from understanding what you have read. Memorizing what you have read will only get you so far...


You're splitting hairs and being too literal. We're comparing learning from a lecture, a book or a video to learning from an AI. The amount of thinking is the same (or more if the AI is personalized).


Is there anything our society is prepared for?

There is not much evidence we can handle shocks, whether self-generated or external.

The 'AI' might be 'learning' from all the publicly available information. We certainly don't.


>There’s a consistent theme in all of the above examples,

Yeah, the people are passing off other people's or the AI's work as their own.

Imagine if they all had to attribute ChatGPT. The value of the AI would plummet to zero in those cases.


Creators aren't worried about others getting the ability to produce. It would be fantastic if everyone could paint, sculpt, carve -- or any number of skills that must be painstakingly built. The fear is that so many things will be produced without thought that, rather than entering a period of cultural abundance, we will enter a cultural stagnation.

This is not because AI tools cannot be used to generate good art! In fact we'll likely see artists adjust their process to take advantage of its strengths. Rather, it's because these tools are used to generate art (whether good, or mediocre) completely automatically, rendering art as a commodity rather than something special to treasure. If you see art as a means to an end then this might seem good. But part of the sheer power of art is in its scarcity; in the rarity encouraging us to take the time to investigate something more deeply, to build up relationships with art and artists, to find meaning and emotion and connection, which takes time and intention.

Imagine an AI-produced endless stream of music that takes all of its cues from your favorite songs and produces high-quality audio tuned exactly to what you like hearing. How long does this remain interesting? How difficult is it to tell this noise from new music which might later shape your tastes? Part of the stagnation this AI revolution represents is an added difficulty of cultural evolution, because instead of there being room for the truly new, we are swamped by imitations of everything that came before.


A few points:

- Culture is already stagnant. In the last 10 years I can't think of a single movement or trend that isn't about politics, and certainly nothing "refreshing". More than that, we have a calcified hubris towards new technology that means we don't even bother figuring out how it might impact us, nor how to use it properly.

- We will get swamped with garbage content, and we will personalize all content (like generated music streams). We will stop hiring models, photographers, concept artists, musicians, writers, domain expert consultants, actors, etc.

- This will trigger an existential crisis for many people (I know, do we really need more of that?). I don't want to be snarky or mean, but I also think putting it this way might get the point across as a proverb: "Artists" who thought they could get away with drawing anime waifus and uploading them to DeviantArt or social media for likes and commissions will have to face the fact that they're not wanted or needed. That goes for almost the entire creator and influencer economy. That brand of narcissism and delusion will get punched in the face repeatedly by AI, because everyone will be generating their own special waifus without them.

Artists who want to create something new will be able to use AI to find the negative space where they can differentiate themselves. This dynamic of despair and searching might lead to a kind of renaissance.

The potential for renaissance is there, but it'll probably be very painful, and I don't think anyone currently even knows what it would look like.


>Imagine an AI-produced endless stream of music that takes all of its cues from your favorite songs and produces high-quality audio tuned exactly to what you like hearing

That sounds pretty awesome actually, but keep in mind that my tastes don't stay the same over a given day. At work I need something that helps me focus, when driving I want something that calms me down and if I ever get my ass in the gym I want something that pumps me the fuck up.

I would also expect an AI that did this to adapt to what I was doing better and better. What it would not do is take into account what others liked, or what it got paid to promote (hello radio, and probably Spotify).


On a couple occasions, I've forwarded AI-related news articles to friends with the note "the next year is going to be f'ing weird."

I stand by this assessment.


First there was the mirror, then technology like cameras & TVs, smartphones and social media, and then there was AI.


Where it will start to ring alarm bells will be at the next election, when we start to get a flood of plausible AI generated political material:

* Ads targeted at the individual based on your social media history

* Replies on social media and forums targeted to the audience

Last election the bots were really dumb and obvious. Next election, or at latest the one after, you won't be able to trust that something is by a human unless you met them or they are a public figure.


Perhaps instead we should hold people to a higher standard rather than tolerating bullshit and nonsense. Using AI as a tool for identifying and explaining logical fallacies (among other things) could be very valuable. In the same way that ChatGPT can explain code and identify bugs.

If it were well-used, then it would have the effect of making individuals less vulnerable to professional bullshitters and the armies of bots that already pervade online discourse. It could even be integrated into moderation tools (and browsers) to reduce the tide of nonsense spewed by AI and humans alike.

If bullshit generators are such a threat, then perhaps we should seek to make them less effective.
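As a sketch of what "AI as a fallacy-spotting tool" could look like (assuming the OpenAI Python client; the model name, prompt, and function are illustrative, not an existing moderation product):

    # Hypothetical sketch: asking a chat model to flag fallacies in a comment.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def flag_fallacies(comment: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4",
            messages=[
                {"role": "system",
                 "content": "List any logical fallacies in the user's text, "
                            "naming each one and quoting the offending phrase. "
                            "If there are none, reply 'none found'."},
                {"role": "user", "content": comment},
            ],
        )
        return resp.choices[0].message.content

A moderation tool or browser extension would wrap something like this, with the usual caveat that the model's own output needs the same skepticism it is meant to teach.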


The human mind doesn't really grok exponential curves very well. We tend to perceive them more as flat and then suddenly they go vertical.
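A toy illustration of that flat-then-vertical perception: on a linear scale, a doubling process barely registers for most of its history and then explodes at the end.

    # A doubling process looks flat for most of its history, then vertical.
    value = 1
    for step in range(31):
        if step % 5 == 0:
            print(f"step {step:2d}: {value:>13,}")
        value *= 2
    # step 0: 1 ... step 30: 1,073,741,824 -- the last few doublings dwarf
    # everything before them, which is why the curve seems to sneak up on us.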

In that regard, society's response to recent progress in AI appears clumsy and ineffective to me. Surely people don't think banning generated essays or art is going to fix whatever problem they perceive? This stuff is improving quicker than we can adapt, and any response that attempts to halt, delay, or ignore this progress is destined to fail.

There are a lot of things we need to be prepared for, and since our current approach is reactive rather than proactive, I'm worried AI is going to arrive more like an earthquake than a smooth transition into what could potentially be a utopia for humanity. For instance, if we don't acknowledge that many types of jobs are going to be eliminated within a timespan of a few years (as opposed to the multigenerational timescale of the industrial revolution), then we are essentially condemning a large number of people to a sudden loss of income that is likely to be irreversible for those who are unable to retrain into a different profession quickly enough. Regardless of one's philosophy on universal basic income, I think most of us can agree that a society that stands by silently as millions of people who are willing to work lose their livelihood is not a moral society. To prevent this outcome requires a significant deal of foresight and proactivity that we don't currently seem to have.

The other major problem I see (and one that is quite prevalent on HN) is an inappropriately symmetrical "penalty function" for forecasts in AI development—i.e., one that gives equal weight to the predictions "AI is arriving soon; the singularity is near" and "we're nowhere close to AGI; LLMs are just fancy high-dimensional interpolators and we may never reach true intelligence". The cost of erroneously preparing for the former outcome is mild: we wasted time on another AI winter, some unnecessary policies were created, and there was an annoying media hype cycle. But the consequences of a type II error are massive: most people cannot provide a meaningful contribution to the economy so all wealth rapidly funnels to a few people, the general population loses the ability to discern truth (well... beyond the extent to which this has occurred already), rapid existential change destabilizes the human psyche, and so on... not to mention any risk posed by AI directly (e.g., war, hacking, manipulation).


> The human mind doesn't really grok exponential curves very well. We tend to perceive them more as flat and then suddenly they go vertical.

> In that regard, society's response to recent progress in AI appears clumsy and ineffective to me. Surely people don't think banning generated essays or art is going to fix whatever problem they perceive? This stuff is improving quicker than we can adapt, and any response that attempts to halt, delay, or ignore this progress is destined to fail.

The singularity is one thing I think might sneak up on us way faster than we expect, and I don't think one can be prepared for a doubling of info every second of every day. It's incomprehensible that everything will just double faster and faster until a decade of progress takes place in 10 minutes, not 10 years.


AI plus distance learning is a one-two punch.

Surely universities can use timed in-person tests with no internet or phones if they want to guarantee human results.

Reminds me of university maths after the advent of the TI-89. The calculator could pretty much do all the math as long as you entered the equation the right way verbatim. But you still had to take some tests with no calculators allowed.


The massive deflation seems to be kicking off right at the source: software development is a lot easier with GitHub Copilot and ChatGPT. How can demand possibly keep up with the 100x+ productivity that's about to hit the tech industry?


> Students ... gain a deeper understanding of the subject matter than students who don’t use any of the new tools.

My first assumption is that if you are using these tools, it is primarily to save time. All other benefits are much less important.


I take it as uncontroversial that those who work with an accurate, silver-tongued bullshit generator will see better success in bullshit subjects, i.e. most of the humanities.


It may be so for now, but if you can research one article an hour with the old method, or one in 10 minutes with an AI, and you need 4 articles for your essay, then you can either spend 4 hours with no AI, spend 40 minutes with an AI, or spend the same 4 hours, go through 24 articles, and create a much better essay.

Likely those who are willing to work hard will see the potential of the AI tools, will still spend 4 hours and will break the curve. This isn't much of a problem, except for those who aren't able to use AI tools, who are now hopelessly behind (and will only be more so in the future).
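The arithmetic from the parent comment, spelled out:

    # Time budgets from the comment above (minutes per article).
    per_article_old = 60        # one article per hour, old method
    per_article_ai = 10         # ten minutes per article with an AI
    needed = 4                  # articles required for the essay

    print(needed * per_article_old)       # 240 min: 4 hours with no AI
    print(needed * per_article_ai)        # 40 min with an AI
    print(4 * 60 // per_article_ai)       # 24 articles in the same 4 hours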


None of these LLM type systems are really providing anything in the realm of fact checking so I'm not sure how the author is coming up with this? They predict words, they don't do context or logic...


What about the Toolformer paper? And things like LangChain?

The speed at which I see this space changing in the last few months will render the argument obsolete pretty fast. Toolformer is just one example, but even the focus on "predicting words" seems pretty myopic. From what I have seen, these models keep getting upgraded with new layers of training which enhance their capabilities. So if it doesn't currently do X, all it takes is the right annotations on a training set and then all of a sudden it does do X; where will the goalposts be moved to then? Now that this genie is out of the bottle, there is a very fast feedback loop finding flaws, errors, or deficiencies and plugging those gaps.

Today I asked ChatGPT to write a bullet-point list of steps for accomplishing a particular goal, and then asked it to write a bit of software in C# that executes those steps. I kept the program simple and just asked for each step to print what it would do. I then asked it what the output of the program would be, and it gave me the correct answer. But what really floored me was when I asked, somewhat ambiguously, "can you make it easier to read?", and it modified the code to add indentation for each level of nested subtasks, etc. It's a fairly trivial program to write on your own, but all arguments about "they don't do context or logic" sort of become moot for me if I can give it an instruction and it does what I want.


I agree that some of this can be exciting to some, but (novelty aside) AIs promote mediocrity, not awesomeness.


That's such an oversimplification.


How is it wrong?


They literally "do context", as they do in-context learning. Also, factuality can effectively be solved by chain-of-thought prompting[0] and the use of external tools[1]; see also toolformerzero[2] for a simple example.

[0]: https://arxiv.org/abs/2201.11903

[1]: https://arxiv.org/abs/2302.04761

[2]: https://toolformerzero.com/

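For anyone unfamiliar, chain-of-thought prompting can be as simple as eliciting intermediate reasoning before the final answer. A zero-shot sketch (the bat-and-ball question is the classic example; results vary by model):

    # Zero-shot chain-of-thought: append a reasoning cue so the model emits
    # intermediate steps that can be inspected before trusting the answer.
    question = (
        "A bat and a ball cost $1.10 in total. The bat costs $1.00 more "
        "than the ball. How much does the ball cost?"
    )
    plain_prompt = question                     # often answered "$0.10" (wrong)
    cot_prompt = question + "\nLet's think step by step."   # tends to reach $0.05
    # The visible reasoning chain is what makes errors checkable, which is the
    # factuality point above; external tools [1] go further by letting the
    # model call calculators, search, etc. mid-generation.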


The "predicting words" part of the language models is a building block, but the actual applications being built around the LLMs are using that predictive ability in very dynamic ways along with external memory, feedback and sophisticated control methods to attenuate and channel their behavior.

And anyhow:

> Memory Augmented Large Language Models are Computationally Universal https://arxiv.org/abs/2301.04589


“the prospect of unprecedented cultural abundance”

More is not better.


Other things being equal, cheaper and more widely available is better.


Reminds me of social media utopianism at the beginning of the blog era. More communication is better, more connections are better, and it gave us this world. So no, cheaper and more widely available is not always better.


There is something awesome about the end of shortage. We are hardwired for scarcity and capitalism itself thrives on it to create value.

Perhaps for the first time in history we need to rethink the whole model of production, needs, wants and consuming. A full blown Star Trek society is not “communism” as I saw it sometimes described, it’s “all-needs-are-met-ism” at zero cost.

The question is how to bridge the gap from where we are now to there.


> The question is how to bridge the gap from where we are now to there.

Someone needs to be chosen to act as a liaison between AI and humanity. Perhaps a short, stocky, bald person such as Jean-Luc Picard. If he is not available, George Costanza will suffice.


For “all-needs-are-met-ism” we need almost free energy. We are far from that. The problem will be how to handle people who are bored and no longer go to work, with a lot of time and "bad" energy to burn.


Year over year, the quantity of energy needed to provide basic needs has decreased. Transportation and housing HVAC are more efficient, and the shift away from the physical world, such as digital entertainment and remote work, also decreases energy needs.

It used to be that meeting more needs (crudely measured by GDP growth) was directly linked to more energy consumption. That hasn't been the case for a few decades already; GDP growth has decoupled from energy usage.


In “all-needs-met-ism” there is no wasted “bad” energy. People who don’t want to work go on permanent holiday. That’s where it gets psychologically interesting. How long do you holiday before you want to do something again?

I do concur we are far from it, but the moment we have free energy and can print anything on demand, it will be possible. Every day the sun outputs more energy than we use in a year.


A lot of people would just drink and do drugs, because they are unable to self-direct. I think the right amount of struggle keeps people sane.


Your comment suddenly reminds me of Agent Smith, paraphrased: “we designed the perfect world for you, but your primitive minds tried to wake up from it. Whole harvests were lost.”

I think there can be a lot of neurological imbalance from complete freedom and absence of scarcity, but I think that on the whole we (or at least I) could live like that. And nobody in this post-scarcity world is barred from doing work if they need it, I guess. That’s also a form of post-scarcity.


Shortage is usually forced, to maintain the interests of specific actors.

Look at the return-to-office mandates from companies to "revitalise downtown", which are basically pushing for centralisation and forced scarcity/demand instead of increasing distributed supply.

There's too much power shared between governments and companies looking out for themselves, creating scarcity that is valuable to them and detrimental to society; they interact and lobby each other while barely caring about the constituent or consumer.


Yeah, that’s exactly the gap between here and there, I think. There will be a lot of resistance from vested interests. It probably needs a sliding scale, like car companies targeting 2030 to electrify their line-ups so they can keep the same market going.


Indeed, it could also be good for the planet if we stop consuming just for the sake of it (and making a few shareholders very rich).

I'm sure these people will not be happy to give up their extremely privileged position though and be part of "everyone". That's why I don't think it'll actually happen :(


Digital abundance is meaningless. Until these AIs allow us to never think about famine, war, disease and whatnot they are basically toys.


Add to that that there is already no shortage of digital abundance. This "AI Awesomeness" will only add its (abysmally low-quality by human standards) output to that, which makes it even more meaningless.


You’re probably right, but it’s interesting to think of a world where everything can be printed on demand, for example (assuming extraction and recycling with zero friction).


Printing stuff in the real world would create physical abundance, which I'm in favor of. Until the AIs are able to produce real-world objects, they're not that awesome.


> Perhaps for the first time in history we need to rethink the whole model of production, needs, wants and consuming. A full blown Star Trek society is not “communism” as I saw it sometimes described, it’s “all-needs-are-met-ism” at zero cost.

I want to believe. That vision is exactly what I would hope for my children.

But any increase in productivity since at least the Industrial Revolution has been absorbed into business-owner/shareholder profits. I have no reason to believe otherwise unless it's enforced by law, which is at best biased by lobbying and at worst full plutocracy.

For many of us, our highest-value output is not our value creation through work but our spending as consumers. So it's possible that “all-needs-are-met-ism” will come around, not because of Star Trek rational clarity but as a means to feed consumer spending. So unfortunately probably closer to some Wells-style Morlock/Eloi future.


"We are hardwired for scarcity and capitalism itself thrives on it to create value."

"There is something awesome about the end of shortage."

So you understand that we, as humans, part of nature, lived a certain way for nearly 7 million years, then lived another certain way starting 10,000 years ago with agriculture, and basically yesterday started living in the modern world. And you expect us, as humans, to fully cope with this concept, to the point that it is awesome?

What we need to rethink is what it means to be human and our relationship with technology. We surely didn't evolve for millions of years to have all our needs met by machines and sit around without any purpose, forever being out-competed by said machines.


Oh absolutely, that’s why the very idea is interesting and even philosophically vital to put out there. Most people can’t shift their minds and are culturally stuck in either “this is how it was and should be” (conservatism) or “this is what we believe is possible” (progressivism).

We are in either one camp or another. And yes my comment is aimed at rethinking what we can be or should be.

And if the end of shortage also means less suffering and more decent living for the world’s poorest, yeah, that is totally awesome^2.


You don't because "scarcity" is artificially enforced in capitalism.

Also, I don't know how people start off from some randomly generated text and assume that the AI is going to make them a sandwich.


I feel the AI-angst is a product of inadequate education.

Be it "creators" who expect monetary compensation for low hanging visual fruits, or people doing manual work who are about to be replaced by robots.

The bar has just been pushed upwards, and those who previously thought they might wing it and take the shortcut through life now pay the price for gambling on the lazy side.


At this pace it would seem like flipping burgers is almost going to be a better career choice than software engineering in a few years’ time. Cognitive labour is going to be massively devalued.



