I have been using ChatGPT pretty consistently for various summarization-type tasks for a while.
But this weekend I tried using it to learn a technical concept I've struggled with. Previously, I could have spoken about the topic intelligently, but could not have put it to any practical use.
In a few quick prompts and less than an hour of synthesis, I got to practical knowledge: I understood it well enough to try building something from scratch, and then built it in another 2-3 hours.
I think this would previously have taken me 1-2 months of dedicated hitting-my-head-on-the-wall time to learn.
In that moment, I had a realization similar to the author.
Basic flow of the prompt went something like:
1. Tell me about this thing that's part of the concept that I don't understand.
2. Explain this new term you threw at me. Do it in the context of an industry I understand well.
3. Explain yet another concept you mentioned, in the same industry context.
4. This sounds a lot like this other concept I know. How do these differ in practical use?
5. Write me some sample code, in a language I understand very well, that does X use case you mentioned.
6. What are these other concepts you used?
7. How can I use this new concept to do something I've implemented before?
Essentially, my knowledge of other concepts and industries, and being able to draw parallels, allowed me to learn much, much faster.
I have recently spent a lot of time reading the USB 2.0 specification and figuring out a race condition I've been experiencing that turned out to actually be a deficiency of the spec itself.
Just for fun, I asked ChatGPT about it. This is its answer:
> According to the USB 2.0 specification, a remote wakeup event can be initiated by a USB device in a suspended state to request the host to resume communication with the device. The device sends a remote wakeup signal to its hub, which then forwards the signal to the host.
> However, if the upstream port of the hub is in the process of being suspended, the remote wakeup signal from the downstream device may not be received by the host. This is because the hub's upstream port is no longer able to forward the signal to the host during the suspension process.
> As a result, the remote wakeup event may fail, and the downstream device may remain in a suspended state indefinitely. To avoid this race condition, the USB 2.0 specification recommends that downstream devices wait for a certain period before sending a remote wakeup signal to their hubs to ensure that the hub's upstream port is not being suspended.
It sounds as if it makes perfect sense; except that, to know it actually completely doesn't, you have to spend hours reading the spec first.
We're going to see a huge spike of confidently wrong "experts" if they learn by asking ChatGPT to explain things to them.
This has been my experience as well, consistently. It's similar to a phenomenon attributed to news: it's confident and sounds plausible until the topic is something you know; then you realize it's full of errors.
I don't trust ChatGPT to teach me something new; when I ask it about topics I do know about, it answers in a way that's a mix of correct, incorrect, and borderline/misleading information, all delivered with a confident tone. Based on this, I wouldn't use it to learn information or solve a problem where I don't already have a solid grasp on the material.
> It's similar to a phenomenon attributed to news: it's confident and sounds plausible until the topic is something you know; then you realize it's full of errors.
Reminds me of the "Gell-Mann Amnesia effect" (as penned by Michael Crichton):
In short: you trust a source on topics you're unfamiliar with; when the source covers topics you are familiar with, you realize it's full of errors; but then you turn the page and trust it again on further topics you're unfamiliar with.
The idea is stated succinctly as Knoll's Law of Media Accuracy:
"Everything you read in the newspapers is absolutely true except for the rare story of which you happen to have firsthand knowledge."
The earliest instance I found of Erwin Knoll's quote is from 1982[0]. I suspect either Michael Crichton or Gell-Mann had heard this quote, which in turn influenced their discussion.
From my experience it's a mixed bag. Sometimes it works perfectly. It wrote a simple graphics game in JavaScript on the first try, from just one prompt. Funny thing, I don't know much about JS or HTML. In another try it failed to create a simple Python program using OpenGL. It never managed to point the camera at (0,0,0), had problems with rotations, did only 2 instead of 3, and so on, in spite of me trying to correct it. It was a complete failure. In another case, Python working with text files, it did a great job. That was something useful.
This sounds consistent with the parent comment's point. The case where it seemed to work perfectly for you is the one where you didn't know much about the subject.
It could be that the corpus ChatGPT was trained on is full of ‘confidently wrong’ answers from these ‘experts’. One solution could be to train these LLMs on a higher-quality corpus from real experts instead of random text from the internet. But would that just bring us back to the days of expert systems?
That will not solve the problem, because when GPT doesn't have the answer, it will make one up by copying the structure of correct answers but without any substance.
For instance, let's say your LLM has never been told how many legs a snake has, it knows however that a snake is a reptile and that most reptiles have four legs. It will then confidently tell you "a snake has four legs", because it mirrors sentences like "a lizard has four legs" and "a crocodile has four legs" from its training set.
I don't think this is necessarily the case anymore. The Bing implementation of ChatGPT has a toggle for how cautious it should be about getting things wrong. I was working on a very niche issue today and asked it what a certain pin was designated for on a control board I am working on. I believe it is actually undocumented and wanted to see what ChatGPT would say. And it actually said it didn't know and gave some tips on how I might figure it out. I suppose it is possible that it synthesized that answer from some previous Q&A somewhere, but I couldn't find any mention of it online except for in the documentation.
"Con man" literally comes from "confidence man". If you have no morality or even ego, and your only goal is to answer a question, then confident answers will be the result regardless of their validity.
I did a similar thing, trying to shortcut finding some rather obscure information in the Java JNI specs, which are a similarly "readable" huge bunch of documents as the USB 2.0 spec (which I also happen to have touched a few times; so to all those people advocating that this is a super exotic thing... well, it's not, someone's gotta write all those device drivers for all those hardware gadgets after all).
ChatGPT gave me a very plausible-sounding result, but mixed up the meanings of two very important values of key parameters which are unfortunately not very well named, so the mix-up can go unnoticed very easily. I only noticed it when some things didn't quite match up and I decided to carefully compare all the results from ChatGPT with the actual spec content. In the end, ChatGPT didn't save me any time; it rather cost me quite a bit, because I still had to dive through the original spec and cross-check ChatGPT's statements.
Yeah, if all you ask are simple questions which were already answered a thousand times on StackOverflow and in beginner tutorials, ChatGPT might be quite helpful. It might even be able to write some simple glue code Python scripts. But whose job is limited to that kind of trivial stuff? Mine isn't, except maybe during the first few days when learning a new technology, but after those it usually gets tricky very quickly, either because I must tackle problems that lead me knee-deep into delicate details of the tech stack, or because I need to extend the scope of my work again in order to integrate whatever I'm working on into a larger system. Sometimes it's both at the same time. I'd be bored to death if it was different.
That's why I don't really consider ChatGPT and similar systems to be a threat to my professional career.
I usually code in more esoteric bits of tech and problems which I didn't expect ChatGPT to do well in, but I tried it at work on a standard backend stack (Java, Ivy, Ant) and it was absolutely _terrible_. It kept making stuff up and then I kept correcting it. I cannot understand how people are using it for work?!
It depends. I've gotten some use out of it recently but I have found I have to give it prompts more akin to pseudo code.
I'm not a developer, let alone a Java developer, but I actually got some mileage out of ChatGPT writing a Ghidra script today. I wrote things down specifically in a notepad with pencil and paper, and identified the inputs and outputs I thought I'd need, the different methods, and what not. I then started passing it prompts and got some working Java back.
For me, this was useful because I almost never program in Java, and so simple things like declaring a string with `String foo` I would normally have to go look up again.
Of course, it wouldn't be useful or able to do what I wanted if I didn't already understand programming concepts and what not.
Just tried using it over the weekend to write a telegram bot. It wrote some really nice code that was WAY out of date, like seven versions behind. That's fine, I guess, though the code was useless. It did look OK, for whatever version that was?
Later, I was integrating with some stuff from the Subsonic API. I noticed there's no way to get the most recent songs played on a server, though there is a way to get albums. I thought maybe I was missing something in the docs, so I asked it how to get the most recently played song or songs using the API. In response, it made up an endpoint that used the same parameters and conventions as the album endpoint. Of course, that endpoint doesn't exist, so this advice is also useless and kind of annoying, given that I have to check if I'm taking crazy pills by carefully looking at the docs again.
The funny thing is that when I called it out, it just made up more and more stuff, or answered with irrelevancies. It really hates not being helpful, probably because it was taken out back and beaten for not being helpful by the army of mechanical turks that trained it.
Anyway, it's good for making up nonsense names for my dungeons and dragons campaign, so that's something.
I asked it LaTeX questions about generic data-input tools. It brought up the datatool package. Then I asked for a YAML input tool. It brought up the yaml package. Which doesn't exist. It even gave me examples!
Hallucinating is the least surprising behavior there. Anyone who uses LLMs should be expected to deal with completely made up stuff, period. The bigger problem comes from things that are just subtly wrong, that may even pass the verification at first glance, but make you arrive at wrong conclusions unless you put a lot of effort into reviewing it. In my experience it does it so often, that it effectively negates any time savings from using it when it performs well.
So for the out of date code, did you ask it to rewrite it following the more modern version of the API/SDK?
It gives me incorrect stuff all the time, but I find that once you are in the correct ballpark for an answer, it just takes some tweaks to get where it needs to be.
You can also make corrections and it will generally stick to your corrected info while in the same session.
> It sounds as if it makes perfect sense; except that, to know it actually completely doesn't, you have to spend hours reading the spec first.
This has been my experience as well. As I work on some projects I’ve been asking it basic questions for things I’ve already learned or know quite well. Once you deviate past the basics (content you’d find in common tutorials) the hallucination rate is out of control.
The surreal part is that it all sounds so confident and plausible. The way it puts words together satisfies my “that sounds about right” reflex, but the actual content is incorrect or illogical quite frequently.
If I continue prodding I can get it to come up with the right answer many times, but I have to ask it a lot of leading questions to get there.
> This has been my experience as well. As I work on some projects I’ve been asking it basic questions for things I’ve already learned or know quite well. Once you deviate past the basics (content you’d find in common tutorials) the hallucination rate is out of control.
That's inherent in the approach. Large language models reflect the majority opinion of the training set. If there isn't enough source material in an area for the same thing to have been covered in a few different ways, the thing gets lost.
But it seems like they aren't just storing the majority opinion, or that opinions couldn't be influenced by other related areas. I am assuming it's a harder problem to select only for the majority, to the exclusion of everything else, than to sample it all.
With prompting maybe we could tease out the minority views, but I could also see it just confidently making shit up to a higher degree.
So basically, the typical disinformation asymmetry gets even worse. It’s now cheaper than ever to produce fake information and even harder to check facts.
I've noticed this as well. It tends to get you further than a quick Google search, or two, but quickly reaches a limit or offers incorrect information. That's not to say it isn't helpful for delving into a topic, but it certainly needs much more refining. I would prefer more reference links, and source data to come along with these answers.
Now, overall this is an improvement, since the old way would have been for an amateur to do a quick Google search and come up with a false conclusion, or get no understanding at all.
Would the old way have worked circa 2010 before SEO articles ruined vanilla search? Is the new way preferable because SEO-ish approaches haven't caught up? If so, can we limit searches to the LLM's trusted learning corpus and get back to a simpler time?
A lot of hay has been made of this but everyone working with ChatGPT directly A) knows this and B) is champing at the bit for plugins to be released so we can get on with building verifiable knowledge based systems. It'll be an incredibly short turnaround time for this since everyone is already hacking them into the existing API by coming up with all kinds of prompt based interfaces, the plugin API will make this dead simple and we'll see a giant windfall of premade systems land practically overnight. So that huge spike you're predicting is never going to materialize.
I’m not sure if it will happen quite as fast as you suggest, but I also expect that plugins and similar techniques will improve the reliability of LLMs pretty quickly.
To the extent that the frequent reports on HN and elsewhere of unreliable GPT output are motivated by a desire to warn people not to believe all the output now, I agree with those warnings. Some of those reports seem to imply, however, that we will never be able to trust LLM output. Seeing how quickly the tools are advancing, I am very doubtful about that.
Ever since ChatGPT was released at the end of November, many people have tried to use it as a search engine for finding facts and have been disappointed when it failed. Its real strengths, I think, come from the ability to interact with it—ask questions, request feedback, challenge mistakes. That process of trial and error can be enormously useful for many purposes already, and it will become even more powerful as it becomes automated.
It'll happen pretty quickly, it takes less than a weekend to build an MVP and I've done it. I'm pretty sure this is the new todo list app given how fundamental and easy it is.
A well-integrated LLM is obviously going to be much more useful than ChatGPT is today, but it's not going to be the golden bullet for all the problems with it.
A big advantage with the "new bing" compared to ChatGPT is it'll tell you its sources and you can verify that A: they're trustworthy and that B: it hasn't just turned the source into total garbage. So I hope that direction is the future of this sort of stuff. Although a problem seems to be a lack of access to high quality paid sources.
If an investor were to ask ChatGPT to summarize the state of the art on the software stack I'm working on, and its core algorithms (which have wikipedia pages and plenty of papers), they'd get the wrong impression and might consider us liars. I know because I tried this. The results were flat wrong.
That's what I'm really worried about. We know very well what our system can do and be trusted to do, but if you know just enough to ask the wrong questions, you'll get your head twisted up.
That relates directly to what my question to the GP poster would be: how do you deal with the fact that GPT frequently just makes plausible-sounding shit up? I guess the answer is that you have to spend 10x as much time validating what it says (c.f. the bullshit asymmetry principle) as you do understanding what it is trying to tell you. That doesn't seem like a big win from the POV of wanting to learn stuff.
Yes, GPT-3.5 and below are particularly dangerous in this regard. I'd try this again against GPT-4, which should perform better. Essentially, false information seems to go away _slowly_ with increased model size, but then you hit performance/hardware requirement barriers instead, and run into awkward token limits or high running costs.
ChatGPT thought hippos could swim when I asked but only GPT-4 realized they can't, but instead walk along the sea floor, or leap forward in deeper waters. That's a simple test for wrong inference since you'd _expect_ otherwise, given they spend so much time in water.
I wonder if we are truly "there" yet or if we at the very least need a more optimized "GPT-4 Turbo" for some real progress. Until then, we may hallucinate progress as much as the AI hallucinates answers!
"since you'd _expect_ otherwise, given they spend so much time in water."
I think here it doesn't understand or conclude that hippos can swim because they are often in water. I think people wrote a lot on the internet that they can swim, and it found some association between the terms hippo, water and swimming. Am I right?
Absolutely. It didn't infer anything. It just tried to predict how an educated human would respond to such a question, based on its corpus of knowledge.
I've tested it on various areas of expertise in physics that I am familiar with, and it often makes up content which sounds very plausible except that it pretty much always gets details wrong in very important ways. On the other hand, I've found it very useful in providing reference articles.
Even if ChatGPT can't fully grok a specification, I wonder how well it could be used to "test" a specification, looking for ambiguities, contradictions, or other errors.
I am not sure LLMs in general and GPT in particular are needed for that. In the end any human language can be formalized the same way source code is being formalized into ASTs for analysis.
A good specification or any other formal document (e.g. a standard, policy, criminal law, constitution, etc.) is already well structured and prepared for further formalization and analysis, containing terms and definitions (a glossary) and references to other documents.
Traversing all that might be done with the help of a well suited neural network but only on the grounds of correctness and predictability of the network’s output and holistic understanding of how this network works.
As of now, the level of understanding of inner behavior of LLMs (admitted by their authors and maintainers themselves) is “the stuff is what the stuff is, brother”[]
I feel this is closely related to the Gell-Mann Amnesia effect.
There is an unearned trust you have granted ChatGPT. Maybe it's because the broad outline appears correct and is stated in a confident manner. This may be how good bullshitters work (think psychic mediums, faith healing, televangelists). They can't bullshit all people all the time. But they don't need to. They only need to bullshit a certain percentage of people most of the time to have a market for their scam.
It's bad at fairly exotic topics and it's unable to admit that it doesn't have any understanding of the topic. It's wrong to generalize from this though. My experience is that it's pretty knowledgeable in areas that aren't niche. I wouldn't recommend solely relying on it, but it has boosted my productivity quite a bit just by pushing me in the right direction and then reading the relevant parts of the docs.
Yeah, I used it today to interact with some database stuff I have some passing knowledge about and it told me a bunch of wrong things. Though it also helped me solve the task; ChatGPT at least requires you to somewhat know what you are doing and to always be on your toes (I heard GPT-4 is better? Don't have access though)
oh I have another anecdote! I was just now looking up Python `iterators` vs `generators`, where I asked:
> In Python, what differentiates an iterator from a generator?
and it answered:
> [...] An iterator can be more flexible than a generator because it can maintain state between calls to __next__(), whereas a generator cannot. For example, an iterator can keep track of its position in a sequence, whereas a generator always starts over from the beginning. [...]
Which is flat-out wrong! Of course a generator can preserve state, otherwise how could it even work! (A generator preserves state implicitly by being a closure.)
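To see why, here's a minimal sketch (plain standard Python, nothing specific to my original question) of a generator that clearly keeps state between `next()` calls:

```python
# A generator that obviously maintains state: each yield suspends the
# function, and the next call to next() resumes it right where it left
# off, with its local variables intact.
def countdown(n):
    while n > 0:
        yield n
        n -= 1  # this local survives between next() calls

gen = countdown(3)
print(next(gen))  # 3
print(next(gen))  # 2 (not "starting over from the beginning")
print(next(gen))  # 1
```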
(Of course, ChatGPT is an evolving system so yes I provided feedback on this. I'm sure in a year or two, I'll be looking back at this gotcha! and cringing.)
When a model gives back false information, does it continue to try to back it up on subsequent prompts? Essentially: can it weave a web of lies or will it course correct at some point?
Not sure why I’m downvoted, it seems like a valid question…
Also unclear why you were downvoted, but so it goes.
With chatGPT 3.5 when I correct inaccuracies in subsequent prompts it responds "I apologize" and updates responses. Switching to 4 in the dropdown menu and repeating the same prompt gives me generally more factually correct responses.
I am mainly testing so not worried about inaccuracies, but kinda funny that I am now paying $20/month to train another company's revenue generator ;)
ChatGPT's strength isn't in solving new problems, but in helping you understand things that are already solved. There are a lot more developers out there using these tools to create React apps and Python scripts than there are solving race conditions with USB 2.0.
I just asked it to explain a problem that actually has been acknowledged to exist in USB errata from 2002 - not to solve anything.
It took me a while to realize that what I was experiencing was this particular problem, but I already did all the hard work there and only asked it to explain how it fails.
I also recently tried to use it to write code for drawing wrapped text formatted a la HTML (just paragraphs, bolds and italics) in C, again, just to see how it does. It took me about 2 hours to make it output something that could be easily fixed to work without essentially rewriting it from scratch (it didn't work without edits, but at that point they were small enough that I considered it a "pass" already), and only because I already knew how to tackle such a task. I can't imagine it being helpful for someone who doesn't know how to do it. It can help you come up with some mindless boilerplate faster (which is something I used it for too; it did well when asked to "write code that sends these bytes to this I2C device and then reads back from this address"), but that's about it.
I've been comparing it to a very excitable intern. You ask them to explain complicated topic, or design a system, and then they go off and spend three weeks reading blog posts about the subject. When they come back, they eagerly and confidently recite their understanding. Sometimes the information is right, sometimes it's wrong, but they believe themselves to be a newly-minted expert on the subject, so they speak confidently either way. The things they're saying will almost always sound plausible, unless you have a good level of knowledge about it.
If I wouldn't trust an eager intern to educate me on it, or accomplish the task without close supervision, I don't think it's a productive use of an LLM.
I've not done much with ChatGPT, but so far my personal impression is that, to make a school analogy, it's like a kid a year or two ahead of me who took the class I'm asking for help on but didn't actually do too well in it.
Their help often won't be quite right, but they will probably mention some things I didn't know, and if I look up those things it will be helpful. Sometimes they are right on the first try. Sometimes they are just bullshitting me. And sometimes they brush me off.
Examples:
1. I asked it how to translate a couple of iptables commands to nft. It got it right. I then asked what nft commands I would use on Linux to NAT all IP packets going through the tun0 interface. It got that right too, giving me the complete set of commands to go from nothing to doing what I had wanted.
So here it was the kid who was ahead of me, but he did well in his networking class.
2. At work we are looking into using Elavon's "Converge" payment processing platform for ACH. I asked ChatGPT how you do an ACH charge on Elavon Converge.
It gave what I believe is a 100% right answer--but it was for how to do it using their web-based UI for humans. That's my fault. I should have specified I'm interested in doing it from a program, so my next question was "How would I do that from a program?".
With this it was the kid who did OK in class, maybe a B-. It gave me an overview of using their API, except it said it involves a JSON request object sent as the payload of a POST, when in fact it uses XML.
I asked what would be the payload of that POST. It gave me an example (in JSON). All the field names were right (e.g., they corresponded to the XML element names in the actual API) and it included a nice (both in content and formatting) description of each of those fields.
This definitely would have been useful when I was first trying to figure out Elavon's API.
3. I then decided to see how it did with something nontechnical. I asked it what is the OTP for Luca fanfiction. It dodged the question saying that as an AI language model it doesn't have OTP information for any fandom, and said it is a matter of personal preference. I also asked what is the most common OTP for Luca fanfiction since that is an actual objective question, but it still dodged.
I then tried "Kirk or Picard?". It gave me the same personal preference spiel, but then offered some general characteristics of each to consider when making my choice.
4. I was automating some network stuff on my Mac. I needed to find the IPv6 name server(s) that was currently being used. It suggested "networksetup -getdnsservers -6 Wi-Fi". I was actually interested in Ethernet but hadn't specified that.
Two problems with its answer. First, there is no -6 flag to networksetup. Second, -getdnsservers only gets dnsservers that were explicitly configured. It does not get DNS servers that were configured by DHCP.
I think the right answer to get the IPv6 DNS servers that were obtained from DHCP is e.g. "ipconfig getv6packet en0".
I also asked it how to find out which interface would be used to reach a given IP address. It suggested using traceroute to the IP address, getting from the first hop the IP of the router that my computer uses to reach that network, and then looking for that IP address in ifconfig output to find out which interface it is on.
That doesn't work (the IP of the router does not appear in ifconfig output), and unlike many of the earlier things it got wrong this doesn't really even send you in the right direction.
The right answer is "route get $HOST".
The final networking question I had for it was this:
> On MacOS the networksetup command uses names like "Ethernet" for networks. The ifconfig command uses names like "en0". How do I figure out the networksetup name if I have the ifconfig name?
It said "networksetup -listallhardwareport" and told me what to look for in the output. This is exactly right.
It probably would have taken me quite a while to find that on my own, so definitely a win. The earlier wrong answers didn't really waste much time so overall I came out ahead using it.
Last line, absolutely correct. It spews super-superficially-plausible nonsense in response to technical questions. Tell it it's wrong and it will either (or both) spew only superficially plausible nonsense and apologize that it was wrong.
Right now it's a great party trick and no more, IMO. And if one gets into more "sociological" questions it spews 1/3 factoids scraped from the web, 1/3 what could only be called moralizing, and 1/3 vomit-inducing PC/woke boilerplate. My only lack of understanding of its training is how the latter 2/3 were programmed in. I want only the first 1/3 supposedly-factual without the latter 2/3 insipid preaching. If a human responded like that they'd have no friends, groupies, adherents, or respect from anyone, including children.
I had a similar experience recently having GPT-4 teach me double entry bookkeeping for a project I am ramping up on. I understood it in a vague sense from some Google, but wouldn’t be able to talk to a room full of accountants in an intelligent way. I was feeling like I needed to read a book or take a class, but tried using GPT-4 as an interactive tutor instead.
After about an hour of delving into details, examples, history and analogies, I get it on a deep level that would have normally taken many days to develop. Most of the material I found before took an approach of “just memorize this” while I was able to get more parsimonious core concepts that were hidden behind 500 years of practice.
That's great! I didn't really grok accrual accounting until I realized that non cash "accounts" are explanations for why there is extra cash or missing cash.
When I stopped thinking about an asset as "a truck" and started thinking of it as "an explanation that cash is missing because we bought a truck" it all clicked. The "asset account" is the negative space around the asset, so that it can be matched up later with the cash that is generated by using up the asset.
Before that I thought of assets and revenues as good, and liabilities and expenses as bad. I couldn't make sense of why an asset turns into a liability instead of into a revenue. It's similar to learning intro physics and trying to understand how anything moves if every force generates an opposite force.
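For anyone who wants to see that framing in the smallest possible form, here's a toy sketch (made-up account names, signed numbers instead of proper debit/credit columns, so purely an illustration): every entry moves value between two accounts, which is why the asset account reads as the recorded explanation for the missing cash, and can later be drained into an expense.

```python
# Toy double-entry ledger: every transaction touches two accounts,
# so the entries always net to zero and the books stay balanced.
from collections import defaultdict

ledger = defaultdict(float)

def post(debit_account, credit_account, amount):
    ledger[debit_account] += amount
    ledger[credit_account] -= amount

# Buy a truck for 30,000 cash: cash goes down, and the "Truck" asset
# account records *why* the cash is missing.
post("Truck (asset)", "Cash", 30_000)

# Use the truck up over time: part of the asset is released into an
# expense, matching it against the revenue it helped generate.
post("Depreciation (expense)", "Truck (asset)", 6_000)

assert abs(sum(ledger.values())) < 1e-9  # debits and credits always cancel
print(dict(ledger))
# {'Truck (asset)': 24000.0, 'Cash': -30000.0, 'Depreciation (expense)': 6000.0}
```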
That's a good case. There's a huge amount of material available on double-entry bookkeeping, and there's a consensus on how it works. So a LLM can handle that.
How do you know what it's telling you is correct and not (as the OpenAI paper has labeled it) "hallucinating" answers?
Over Christmas, I used ChatGPT to model a statically typed language I was working on. It was helpful, but a lot of what it gave me was sort of wrong, in a very deceptive way. It's tough to describe what I mean by "wrong" here. Nothing it spit out was just 100% blatantly incorrect. Instead it was subtly incorrect, and gave inconsistent evaluations / overviews.
If I didn't know a bit about type theory, I wouldn't actually be able to evaluate how good the information I got out of it was. I'd be hesitant to take anything ChatGPT gave me at face value, or to feel confident in being able to speak precisely about a given topic.
did you run into similar problems? and if so, how'd you overcome them?
Not the person you asked, but chiming in. Two things:
First, GPT-4 is far more capable than 3.5 when it comes to not hallucinating. The 'house style' of the response is, of course, very similar, which can lead one to initially think that the difference between the two models isn't as significant as it is. Since I don't have API access yet, I do a lot of my initial exploration in 3.5-Legacy, and once I've narrowed things down a bit, I have GPT-4 review and correct 3.5's final answers. It works very well.
Second, and this is more of a meta comment: How people use ChatGPT really exposes whether they use sanity checks as a regular part of their problem solving technique. That all of the LLMs are confidently incorrect at times doesn't slow me down much at all, and sometimes their errors actually help me think of new directions to explore, even if 'wrong' on first take.
Conversely, several of my friends find the error rate to be an impediment to how they work. They're good at what they do, they're just more used to taking things as correct and running with it for a longer length of time before checking to see if what they're doing makes sense.
I do think that people who are put off by this significantly underestimate how often people are confidently incorrect as well. There's a reason the saying trust but verify is a common one.
How can you trust it though? It notoriously hallucinates and makes things up in an otherwise plausible way, and the only way to tell is to already be knowledgeable about what you’re asking about… which makes the whole exercise useless.
There are more situations than you might think where the correct answer is easy to verify but hard to find. If you are building something concrete you can just check if it works. If you are trying to understand something, it can be that ChatGPT makes the pieces fall into place and you can verify that you really understand now because everything adds up logically.
I recently asked it to provide the meaning of a song. I gave it the artist and the song. It responded with a thoughtful(ish) explanation of the potential meaning of the song. I then asked it if it could share the lyrics of the song with me because some of the quotes it referenced weren't in the song. It provided lyrics that weren't the song I was asking about at all. I then asked what song it was, it apologized and it told me the artist and title of the song... I looked up that artist and title and it was also not the song that it provided the lyrics/meaning to. Ultimately, I determined that it just made up lyrics to a song. It was very apologetic.
Sounds like turbo aka the free (and dumb) 3.5 version. GPT-4 would deliver better results with less hallucination but if you want it to really work well you should include the lyrics in the prompt as context. It's not a search engine, if you want that then use Bing.
Voting mechanisms sometimes help drown the worst comments. Often voting doesn’t help the best comments float to the top, because the best comments are often replies to other low-vote comments.
I asked it how to scroll to an element taking into account a fixed header. It helpfully described the `offsetTop` parameter to `Element.scrollIntoView`. Only problem is, that parameter doesn’t exist.
Lately I've been reading seminal papers in distributed systems, and it occurred to me that there is a niche for LLMs to "democratize" highly technical content.
Trying to understand those papers is a slog, but it doesn't need to be. For a casual reader, you could drop all the jargon and proofs. You could also rewrite the concepts in a less obscure and obtuse prose to make it more accessible, without dumbing down the content.
Essentially it's like having a tech writer/communicator in your browser.
This was the killer feature to me. It is an amazing teacher, it’s like having an infinitely patient mentor who can answer all your questions. I feel way more confident about being able to onboard with unfamiliar tech and be able to become productive quickly.
I did the same thing to implement an automatic differentiation engine in my programming languages project. I've been mulling over the idea for months, but I was a little unsure how to do it. Sat down with ChatGPT and we got it done mostly in an afternoon. I'm so stoked about the future of coding.
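For anyone curious what the kernel of such an engine can look like, here's a minimal forward-mode sketch using dual numbers (a generic illustration in Python, not the actual code from my project):

```python
# Forward-mode autodiff with dual numbers: carry a value and its
# derivative together, and overload arithmetic so the chain rule
# is applied automatically as expressions are evaluated.
class Dual:
    def __init__(self, value, deriv=0.0):
        self.value, self.deriv = value, deriv

    def _wrap(self, other):
        return other if isinstance(other, Dual) else Dual(other)

    def __add__(self, other):
        other = self._wrap(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)

    __radd__ = __add__

    def __mul__(self, other):
        other = self._wrap(other)
        return Dual(self.value * other.value,
                    self.deriv * other.value + self.value * other.deriv)

    __rmul__ = __mul__

def derivative(f, x):
    # Seed the input's derivative with 1.0 and read off the output's derivative.
    return f(Dual(x, 1.0)).deriv

# d/dx of 3x^2 + 2x at x = 4 is 6*4 + 2 = 26
print(derivative(lambda x: 3 * x * x + 2 * x, 4.0))  # 26.0
```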
I've been using GPT-4 ChatGPT in this style so here's my specific use case. I'm currently studying MIT's 8370 quantum information science course because my background in understanding the fundamentals of error correction is pretty poor and I need it for work.
I have a bachelors in physics but I wasn't a great maths student in university (the folly of youth) so my linear algebra could be better. On the other hand, I'm not going to redo linear algebra as a pre-requisite for this course. I also haven't done the pre-requisite QC courses, though I'm much more comfortable in that domain.
I don't use ChatGPT to teach me the concepts in the course, I use it as an empathetic tutor to fill in the blanks. If I see a linear algebra identity I'm not familiar with I can ask ChatGPT, telling it the context and what I already know and it'll give me an answer in that context with solid mathematical grounding but (because I ask for it) intuitive justifications. The alternative of stackoverflow, wikipedia, or other online notes can be hard to search for and even when I find a good answer, it's often needlessly mathy and complex. I don't have to worry about hallucination because it is just a gap and once I understand it, usually what the lecturer said makes perfect sense.
In one lecture the lecturer had a throw-away line about stabilisers corresponding to syndrome measurements but didn't elaborate much further. I didn't get how the corresponding circuit would be constructed so I went to ChatGPT and asked it my question with what I knew and the context. In that case, it pushed back and told me that what I knew was wrong. I had formed a misconception which it corrected and gave me an example to show why I was wrong.
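For reference, the standard textbook illustration of that correspondence is the three-qubit bit-flip code (a generic example, not the course's notation): the code space is stabilized by $S_1 = Z_1 Z_2$ and $S_2 = Z_2 Z_3$, so a logical state $\alpha\lvert 000\rangle + \beta\lvert 111\rangle$ is a $+1$ eigenstate of both. A bit flip $X_i$ anticommutes with exactly the stabilizers that touch qubit $i$, so measuring $(S_1, S_2)$ yields the syndrome: $(+1,+1)$ means no error, $(-1,+1)$ qubit 1, $(-1,-1)$ qubit 2, $(+1,-1)$ qubit 3, all without ever measuring $\alpha$ or $\beta$. In circuit form, each $Z_i Z_j$ measurement is just two CNOTs from data qubits $i$ and $j$ into a fresh ancilla, followed by measuring the ancilla.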
I guess I don't use it as a lecturer, or even a tutor. I use it as the really smart guy you sit next to in a lecture theatre who you whisper questions to.
Thanks for sharing! That's really cool and helpful, I've noticed my own linear algebra is shaky and what you described seems like one of the better ways to use ChatGPT.
Also, even for relatively simple and beginner things like recursion there are levels of understanding. Some of the anecdotes I see from people using ChatGPT to learn a new technical concept suggest a false sense of deeper levels of understanding. This seems to be a result of beginner naïveté together with the supreme confidence ChatGPT exudes.
I have had many similar experiences with just the free version of ChatGPT. Now I also ask questions more immediately. In the past, when a bunch of new terms or concepts were thrown at me, the effort to stop and dig into each of them via Google search was great enough that I would leave many stones unturned. When using GPT while studying up on a new thing, the friction is so low that fewer stones are left unturned.
And like you mentioned, being able to translate concepts from a novel domain to one I'm very familiar with can be like translating an unfamiliar language to English.
I’d be worried about using ChatGPT for this given its tendency to hallucinate. Did you have to verify each response it gave you, or were you willing to trust the output?
I wouldn't verify each response, but I would (sorta) verify at the end by moving on to read within the topic.
As you start to grok a topic, you should start to recognize internal inconsistencies and then you can probe those details. When you feel you have some understanding, you can then start consuming media on the topic at a higher level and any problems in your understanding should fall out fairly quickly.
I find these stories strange. If it were like that, wouldn't you learn from an expert friend just as fast? My experience is that anything worth learning takes a long time of thinking about it from various angles.
I agree, but it does give you access to field experts. Presumably a good book covers the learning approach in whichever way they consider it best, from the position of a field expert. It's also vetted information by other field experts. At the moment, ChatGPT's answers are between ChatGPT and the asker. It can be a total hallucination or useful information. By the way, my point is just that access to experts is not that hard if you consider their books as access.
Yeah, ChatGPT is only as good as the questions you know how to ask it. I have been studying Clojure concepts, and I too would have taken more than an afternoon to understand them otherwise.
> This sounds a lot like this other concept I know. How do these differ in practical use?
How did you test the answer it gave you?
It sounds like the end product is code, which is testable, so maybe it's moot. But that type of question is not useful in my experience, because you can't really disprove its answer. It's also a leading question. It might not be similar to that other thing at all, but ChatGPT would manufacture something in that case.
I find hitting my head on the wall is essential for learning. Hitting my head on the wall helps me learn where the boundaries of functionality are, or where the actual operation of the thing differs from my intuition.
If I just learn some example use cases that work, that tells me something but I won't really have mastered whatever it was that I set out to learn.
Yeah, ChatGPT is a game changer for learning. Because of ChatGPT,
"I know GNU Make."
I just love that it's helping me conquer technologies that I've been wanting to learn my entire career but I guess just didn't have the discipline to do it. I'm getting an itch to tackle Emacs next.
This is exactly what I do as well. I make it generate sample code, ask it lots of "why" questions, ask it to help with the bugs. You can learn a huge amount in record time. Is this the end of Stackoverflow?
It's worth reflecting that the world's bottom billion already have skills worth very close to $0, giving them no leverage at anything and no prospect of competing for higher level employment. If we enter a scenario where AI guts the middle class, I hope one upside might be a critical mass of people comes to understand how to value people independently of their economic output.
I would rephrase this. They are stuck in their circumstances, which forces them to survive on a few cents a day. MNCs like Nike take advantage of this by getting their goods manufactured in Bangladesh at next to no cost and selling them for thousands of dollars. If the same workers in Bangladesh were to migrate to the US, I am sure they would make minimum wage, so their skill is indeed not worth $0.
Globalized flattening is happening, but the issue is that the vast majority of everyone will be flattened to a level of poverty with a tiny fraction of humanity being *illionaires because they will own the means of production, the real estate, the intellectual property, and the influence on most government systems to keep it that way (regulatory capture, dirty campaign financing).
It will be gradually harder and harder to compete with mega-rich entities in any area, automation and AI will increase, and so wages will fall while prices inflate.
People have been predicting this since Marx and the exact opposite has been happening ever since. Even if you look at the US, the number of people living in poverty has gone down dramatically in the past 50 years just due to an expansion of welfare benefits.
The "number of people living in poverty" is based almost entirely on how you define poverty. America defines it as "three times the cost of a minimum diet in 1963" (adjusted for inflation). No accounting for housing, medical expenses, education, transportation, etc.
So you're right, we have fewer people starving to death, because food is relatively more affordable (and SNAP). However, we have more people who can't afford housing, medical emergencies, having children, or an education. The way that poverty has presented and affects people has changed, but our guidelines are still based on the idea that you aren't poor unless you literally can't afford to eat.
But what is poverty if it's not an absolute measure, but rather a relative measure (relative to someone who is _not_ in poverty)?
The line of poverty, under the relative measure model, would just keep growing. I don't agree with that; poverty should be measured absolutely: the minimum calorie intake needed to maintain health, the minimum heating/clothing required to not freeze to death or succumb to the elements, sanitation, and basic healthcare (things like cuts/bruises/bacterial infections don't kill you).
As soon as you start to include things that didn't exist at the time of setting this absolute baseline but would in the future if the technology develops, you start moving the poverty goalposts. For example, the internet didn't exist in the 1960s, but I've heard that access to it is included in poverty measures today.
One potential reason for the value-capture problem for low-wage workers is that a lot of the objects they manufacture do not have a huge intrinsic value; instead, the value comes from the branding and advertising associated with the object (such as a sneaker).
The workers did not create the branding, marketing or hype. The fact that a sneaker that is manufactured for dollars is sold for thousands is the actual travesty. The workers are capturing the inherent value of the labour, because if they made a clone of the sneaker and sold it without any of the branding/marketing, it would not sell for thousands, but instead probably for the low $10s-$50s, which is in line with the labour and material that went into it.
> If the same workers in Bangladesh were to migrate to US
I think we have to factor in reality and external constraints if we're looking at this scenario.
An individual worker, maybe you can say that. But the entire segment of the developing-world workforce? If they were somehow able to immigrate en masse, it would disrupt the economy enough that I don't think we could predict the effects and say "they'll earn $7.25/hr".
Where they live, doing the work they do, they as individuals have almost no monetary value to western capitalist society (despite collectively generating uncountable value).
> the world's bottom billion already have skills worth very close to $0
This is the context of the response you were commenting on. There is a distinction between "this person is earning close to $0 because they are living in a poorly functioning country and economy" vs "this person's skills are worth close to $0". The latter implies that even if that person moved to another country, they wouldn't possess any skills worth paying $7/hour. Which is false.
For perspective, if you took Bill Gates and parachuted him into an uncontacted tribe in the Amazon, they would consider him to be useless. But few would agree with an absolute statement that "Bill Gates has no valuable skills."
I think you misunderstand capitalism. The market value of their skills is almost $0.
There are other ways of valuing those people, but you have to break out of a capitalist framework if you want to value them at more than a few dollars a day.
There are very few economic frameworks that actually value people at all. And unless your idea is to replace capitalism with an agrarian structure, then almost anything else you could suggest would see people as an expense, which is the economic opposite of value.
If you really think that, then shouldn't you abolish economics? If a system doesn't value humans, and in fact in its history has proven it will turn babies into profit (lookin' at you Nestle), it's anti-human and as a human, it's immoral to support that system.
Do they speak a language that would be widely useful in the US? Are they literate? Numerate? Basic stuff we take for granted here in America, where everyone had at least 10 years of public schooling.
We're not even going into the more "advanced" skills like a driver's license, or functional literacy.
Then it's a shame that society has largely decided to discard as outdated longstanding philosophies whose tenets do teach the intrinsic value of each human independent of their economic output.
But pushing aside those outmoded ways of thinking did make it easier to sell people useless goods and services which can't fill the hole where dignity and self-respect could instead reside.
And on top we've built the artifice of "social" media (the most shameless oxymoron imaginable) producing sufficient anxiety ensuring that the hole remains unfilled despite all effort.
> I hope one upside might be a critical mass of people comes to understand how to value people independently of their economic output
They are valued independently of that. E.g. in a democracy someone who is super productive will get paid a lot, but will have exactly the same number of votes as someone with nothing.
If you think their wage should be valued independently of their economic output, then I think that is a misunderstanding: money is only useful as a measure of economic value.
> If you think their wage should be valued independently of their economic output, then I think that is a misunderstanding: money is only useful as a measure of economic value.
Yes exactly, and to all those whiners complaining about not being able to put food on the table whilst I and my schoolmates at Shortridge Academy each have trusts worth more than their expected lifetime earnings I say: Though your life might have "value" to a select few (like your parents) it unfortunately doesn't have much economic value to those who actually measure such things (like my parents.)
It would behoove you to take some personal responsibility for your station in life. I certainly take responsibility for mine, and that, in my clearly valuable opinion, makes all the difference between us.
Money is also useful for other measures, such as access to food & housing. I'd prefer a person with no economic value still be able to access commodities that do have economic value.
Is this bigger than computers / web / smartphones? Many people lost their clients and had to change the way they work because of the above. The market has adjusted.
Then again, it all happened in a relatively long timeframe. Whereas here we got access to chatgpt pretty much overnight.
There's a difference between "the market has adjusted" and "society is better now". The market adjusted by increasing inequality, reducing wages, and shrinking the middle class (especially in specific regions).
The fact that the market adjusts isn't a good metric to use for the healthiness of a society.
Which regions? Are we sure there is causality? I meant job market, i.e. which jobs are in demand (e.g. social media manager) and which aren’t (e.g. analog camera operator). Is society worse after that changes?
It's not their skills - it is their skills + their environment. If you dropped a lot of those billion people in America, a lot of them would be hireable at minimum wage or more.
There's a reason that the poorest countries trade the least. There's scarcely any economy.
> how to value people independently of their economic output.
You can't assign an economic value to people, and I suspect that's what you're alluding to - that people aren't valued unless they are compensated X amount. UBI makes this a moot point. There's no tangible actionable meaning to "valuing people", systematically, otherwise.
Be careful about wishing people were valued independently of economic output, because the value judgement might not go the way you think. It could easily be concluded that most people are a net negative to the world, and that we should discourage more of them from being created. This could mean forced sterilization in order to qualify for UBI or welfare programs, etc (which is hardly a bad idea, but one that the masses may object to).
I would vote for it in an instant. Population control is not something that we can just leave to the pages of sci-fi anymore. The increasing fragility of our planet and societies means we simply won’t have the luxury of letting populations grow without limit for much longer, without facing disastrous consequences.
And besides, the idea of controlling people’s reproductive capacities is not without modern precedent.
The earth can support far more people than it does now. The solution to many of the issues that are attributed to "over-population" is more investment in renewable energy and more sustainable waste management and farming practices.
The fact that there's an entire school of thought out there (I know you aren't the only one) that really thinks pseudo-forced sterilization of the poor is better than just getting our resource consumption and generation under control is, frankly, kind of sickening.
No need to vote. The easiest way to reduce your overall population is to educate your female segment. See every country with a birth rate below their replacement level. This is not a value judgement, it just happens every time.
Most countries already practice population control through economic incentives. Tax credits, free child care, and other initiatives can increase population growth. Educating and employing female citizens, giving out contraceptives, and other initiatives can decrease population growth.
There's no need for sterilization measures at the moment (even though they have also been practiced for generations). And I would urge you to keep in mind what the actual outcomes of aggressive population control would actually be. (I'll give you a hint: it's ethnic cleansing and targeting the most vulnerable peoples.)
I think describing the value as "very close to 0" distorts things somewhat. The global extreme poverty line is $2.15 per person per day. If someone earns 10x that in a poor country, it may be enough to feed yourself and even send your kids to school. There's lots of room near $0.
Then hopefully a middle class can be generated in a poor country by modernisation and education.
The middle class has always had a cavalier attitude about labor-saving innovations that made the lower class unemployable.
Now that they are on the chopping block, I predict that we will see an increasing interest in Marxism amongst the laptop class.
I started to really notice this during the pandemic, when pieces about tracking software for work-from-home employees started to pop up. Being constantly measured is nothing new to service workers, call centers, delivery drivers or any number of other industries, but it felt like only when it started to impact middle class people who read think pieces did it really get traction as an issue.
The way I see it, AI is a nifty tool, like a calculator. Who was disrupted by the calculator? It wasn't the peasants in the fields. It was the people with the ability to do mental sums on the fly. Such people went from having the king's ear to being parlor tricks on variety shows.
Same thing in the case of AI. We're about to see a lot of so-called 10x developers become commoditized.
> I hope one upside might be a critical mass of people comes to understand how to value people independently of their economic output.
The cynic in me thinks that all you need is another world war to fix the overly crowded planet. Then things will be quite balanced again and you can keep on judging people by what you call their economic output.
As they say, wishing others dead is a great way to become dead yourself.
It's insane that we live in a world with enough technological progress that we could easily feed, clothe, and shelter everyone, yet a huge portion of us are so greedy that we don't want to see that happen because it might mean one penny less for ourselves.
"Poverty exists not because we cannot feed the poor, but because we cannot satisfy the rich." - Anon.
I first began to encounter this when I as an idealistic teenager looked into why poverty/hunger still existed when we had enough food/money to wipe it out. Aid to low income countries often wouldn't get to the intended recipient. Their leaders would live like kings while kids would die from malnutrition. It just baffled me (still does) how someone could be so selfish.
Then I realized we in the US have it just around the corner. I knew someone who worked at a "remedial" high school where the kids struggled with hunger, homelessness, and addiction. All the schools in the area would shuffle any kid in danger of flunking to make sure their numbers looked good. They lived in neighborhoods I as a middle-class person would never venture into. We just do a good job sweeping massive poverty in the US under the rug.
This was a middle of the road state - then you look at states such as Mississippi and you wonder how we can live with ourselves.
> All the schools in the area would shuffle any kid in danger of flunking to make sure their numbers looked good.
Is there a term for this phenomenon of optimizing specifically for metrics, even at the cost of everything else? I've heard all sorts of examples, from education to politics and even sports.
War leads to impoverishment of the population, and an impoverished population leads to many children, thus more overpopulation.
The number of offspring is a function of the average education and security of the population (especially the female). More education and security (i.e., more middle class), fewer children.
At least that's what I took away from Hans Rosling's popular statistics.
I was indeed considering this shortly before submitting the comment. Then I remembered we have weapons nowadays not available back then. It is a very bad way to “solve” this issue.
But again that’s the cynical view. The better and more optimistic view is that… it’s all a freaking hype, and we just need to figure out what to do next.
The bottom billion are mostly self-sufficient farmers or hunter-gatherers. Money is only relevant in an interconnected modern economy. A small tribe of hunter-gatherers or a small farming village has a GDP of 0, but they have the basics covered.
There are basically zero hunter-gatherers left except in national parks in a few places; like elephants and tigers, they tend to be hunted to extinction by farmers and settlers given the chance.
Even the bottom billion are probably a lot more incorporated into the state than you would expect.
Their lifestyle encompasses what seems to be romanticized by primitivist anarcho-communists and other radical leftists. If the only thing missing in this picture is "hospitals", then it has nothing to do with their wages.
The crowd that bemoans consumerism and proselytizes that we "don't need" x/y/z also thinks that these independent farmers, who mostly don't participate in global trade (the poorest, I mean), need more money. Not to be conflated with what the UN defines as "extreme poverty", which entails the inability to provision food effectively. And the way they want to give them money is through the benevolent progressive imperialism of Communist takeover.
I just visited a tea plantation where the average salary was about $2.50/day and people were raising kids on that. The average per person is less than $2/day.
Those people know about the outside world, send their kids to school and are aware that they can't afford to go to the doctor.
> The differential value of being better at putting words in a row just dropped to nothing. Anyone can now put words in a row pretty much as well as I can.
This seems like a wild misrepresentation of what being a writer is. Writers communicate ideas. Their ideas. Clearly. They can use a pencil, a typewriter, write in French, use pictographs, use a spell checker, use ChatGPT, hire a ghostwriter. If you’re really a writer, all that matters is the end result and I can guarantee you that people who use ChatGPT thinking it will put words in a row will be able to generate a million stupid articles a day, but those people won’t be “writers.”
Also, I still maintain that I can communicate ideas significantly faster by just typing them than by using a tool like ChatGPT. When the ideas are clear, the writing is easy. It’s one of the fundamental powers of writing: It can almost move at the speed of thought.
This misrepresentation of what writing is, is something I very commonly see in software. People often think software dev is about coding.
In reality, while software developers probably enjoy the act of coding, it’s rarely the interesting or difficult part. It’s just the rendering you do to bring all the work you did to life.
As I've progressed in my career, I find this to be more and more true. I would say most days I spend only 50% or less of my time writing code. Even when I do write code, the volume isn't huge. The challenges for me these days is in evolving existing systems without disrupting the business and designing and then implementing things that people can build on top of. The work is still very technical but a lot of it involves researching and planning and testing out hypotheses.
Even with "just coding", a skilled coder can translate thoughts directly to code more precisely and faster than translating them first to imperfect English.
I have yet to find a programming language that allows me to convey all the nuances of my thoughts and considerations through code alone (and keep it readable and concise). This is why there are few (no?) languages that don't allow for comments in natural language.
The first hard problem is to understand what to build. The second hard problem is to build it in a way that is clear and maintainable.
I wouldn't discount the importance of clarity in the way you code. You might end up writing the right solution, but because you didn't communicate "what", "how", and "why" clearly enough in your codebase, you will setup other developers (and potentially the business) for failure.
Coding is also a part of why coders get paid so well. It's certainly true that software developers have other skills too, but how those skills are distributed among the general population can't be known so long as coding remains an entry barrier for most people.
It's also possible that not all devs have the same qualities, and each one finds the niche that works for what they bring to the table.
My experience is that "coding skills" are the intersection of analytical skills (being able to formulate hypotheses and test them) and language skills. There's a (small) barrier to entry. That and the lack of any gatekeeping is why we saw the explosion of coding bootcamps: "become a javascript engineer in 12 weeks". Even though the great majority of people attending these camps won't be able to write a single line of code at the end of the program, a few will be able to grasp it enough to fill in the blanks on a template for a website or an app.
We also had language models that could generate code out of very precise specifications back in the 90's. They were called offshored programmers. Send an email in the evening and get a .zip file in the morning with code. Sometimes it would even compile!
Engineering is much more than that. It's about being able to formulate, understand, and analyze (conflicting) requirements, as well as the ability to gain domain expertise as needed. The best example of that is Carmack and binary space partitioning. [0]
To me, what GPT is going to achieve with its current performance is akin to what compiled and interpreted languages did: you no longer really care about word size, endianness, or alignment issues. The compiler takes care of generating the correct assembly for you. Same thing with snippets generated with GPT.
>If you’re really a writer, all that matters is the end result and I can guarantee you that people who use ChatGPT thinking it will put words in a row will be able to generate a million stupid articles a day, but those people won’t be “writers.”
For a historical comparison, the same could be said of photography. You have to know instinctively what makes a great photograph if you're going to have a hope of making them on a regular basis, or for pay. The gear really doesn't matter much, it's just table stakes.
Having GPT spit out some new combinations of words can help you get around writers block, or expand your vocabulary, but I strongly agree it won't automatically make you a writer.
This is a really interesting idea I hadn't thought of before.
Both modern art and abstract art trace their origins to the mid-1800s, which is contemporaneous with the development of photography techniques, but before the popularization of photography in the 1880s.
I wonder if the knowledge of photography could have helped spur the development of abstract art, rather than its wide adoption and economic effect.
One other thing to consider is that the starving artist predates photography - the "Bohemian artist" archetype was in full effect in Europe ahead of photography's popularization.
Photographers are still doing photography for the same reasons we always were.
People who want to take pictures no longer need to be photographers.
I think it’s interesting to look at, but I think the parallels break down pretty quickly. The decline of legacy photographic equipment and the reduction of skill needed to take photos is all about a rather technical skillset becoming less and less necessary. The smartphone also made many other legacy things irrelevant for most people.
Language and writing are closer to the core of humanity in a rather existential way. It’s less clear that reducing the skillset required to participate in writing is a net benefit, even if it enables the generation of more text by people who might not have done so before.
Smartphones make a photographer’s job easier and a non-photographer’s goals possible. I worry that the same can’t be said for language models and the end result is likely to be closer to a new form of illiteracy than it is to the democratization of a process that formerly required greater degrees of technical skill.
When painting broad strokes, I see the similarity. Drilling down even a little widens the conceptual gap quite a bit.
I have already had to "intervene" on a developer that was writing bad code biased by the fact that he talked to chatGPT, and chatGPT can make tons and tons of bad code for you effortlessly. (The intervention was basically asking "why did you write that? did that function win you anything?")
It's easy to imagine the same thing happening with text. In fact, the speed that lawmakers and judges adopted the idea of generated text is a very big concern to me.
What is completely overlooked is the reader. He will have a hard time navigating the world.
I sometimes hear people say „there is so much interesting stuff to watch on youtube/pinterest/whatnot“ and I always wonder how vastly the meaning of „interesting“ can vary.
Smartphone cameras are great and easy to use for a very narrow set of conditions. Outside those conditions they perform terribly. The kicker is that you don’t really know unless you already know… the “trained eye” can immediately notice.
The same is true for GPT text/code.
Even then, yes, it will still result in shifts like we saw years ago with newspapers letting go of their photo staff and telling journalists to use their phones.
Smart phones dramatically increased the technical capability of amateurs, and to a lesser extent their mechanics, but the net outcome is a huge volume of very good, very boring & generic photographs. Smart phones did nothing to advance the art of communicating via pictures. AI writing IS likely to follow a similar path.
Charitably, smartphones made photography a hobby/art form/vocation accessible to almost everyone. We all carry a "good enough" camera in our pockets. So it's really opened up the hobby to a ton of people.
I don't know why LLMs would do that with "writing" though.
Yes, it democratized access and made everyone a photographer, which also had the effect of nuking the bottom tiers of paid professional photography / stock photos, and turned them into something people do mainly for intrinsic motivations (i.e. as a hobby or as a social activity).
The same might potentially happen with writing, where if anyone can generate a decent original story, poem, advertising copy, product UI writing, etc. in less than five minutes, the lowest and middle tiers of paid writing work will be eliminated.
It’s possible that because these writing-specific jobs were already low value / largely eliminated by the Internet at large, that won’t happen. But it could happen to almost any role that basically involves synthesis and summary of words and data, since LLMs are so general purpose.
Smartphones only cheapened photography further. Which is why you get Instagram filled with pictures of food. It's the internet combined with the nearly free cost of digital.
But digital cameras for the everyday person existed long before the smartphone. And before that, point and shoot film cameras and Polaroid.
Smartphones changed access to cameras fundamentally; it went from something you had to specifically go buy to something you already have by default. Film to digital was also huge, for sure, but we're now at a level where >80% of people have a camera(phone) within reach of them at all times.
The marginal cost of taking a photo is now $0. It's a hobby that anyone with a smartphone can now take up.
People have cameras with them more often sure, but if you’re talking about taking up a hobby, nearly every family had a camera of some kind 20 years ago.
Sure there is, but not a big one in terms of how many people the hobby was accessible to.
20 years ago nearly everyone took pictures at some point. Every house had a camera or two, and disposal cameras were cheap and very widely available.
It was widely regarded as a universal hobby back then. My mom was a wedding photographer, and people would regularly make the same arguments back then.
If you look at historic BLS data the number of photographers has actually grown faster than the population has since 2020, and the wages in 2021 were about the same adjusted for inflation.
Clearly smartphones haven’t had a huge impact on job numbers or salaries. My hypothesis is because photographic skill was already as devalued as it was likely to get, and that actual technical skill is a very small part of the overall job.
Not by your definition maybe, but that misses the point made in the article:
> why did I conclude that 90% of my skills had become (economically) worthless?
The OP has inserted a specific word here in parentheses because this isn't a question of what defines a writer; it's a question of who gets paid to write. By that definition, what defines a writer is not for you to decide, it's decided by the market. And the market, in my experience, is not particularly concerned with quality.
> By that definition, what defines a writer is not for you to decide, it's decided by the market.
Nope. Writers write. Most writing is "economically worthless" — the number of novels that get written every year is astronomical versus the number that make money. If we were to let the market judge the quality of creative works, we'd be culturally fucked.
The writing skills that became worthless for this guy were the ones that weren't worth much to begin with.
> The writing skills that became worthless for this guy were the ones that weren't worth much to begin with.
Exactly. The market price for anything is determined by both demand and supply. The demand for good writing is relatively unchanged. The supply is also relatively unchanged. In the case prior to ChatGPT that demand and that supply were somewhat low, or as you say - not worth much to begin with.
Oh sure, there are always the Stephen Kings and Tom Clancys of the world, but guess what? They were always the exception anyway and they will continue to be. The same argument can be made for musicians.
Those are interesting examples, in that they are both commercially successful despite conventionally poor writing. In literary circles, King is often regarded as somebody who could write well but chooses not to because it would be less lucrative. Clancy is bad enough that criticism seems unnecessary. AI could certainly write a Clancy-esque Dad Fantasy.
Shows how a prominent author becomes a brand. Tom Clancy isn't the only one. These authors are to their books as fashion designers are to their lines at mainstream retail outlets.
I agree with you wholly in reality, but we're not discussing the definition of writers, we're discussing a qualified definition of writers. The qualifier is "economic worth".
We're not disagreeing, you're just talking about something different than the topic of the OP.
People who have money will always pay a premium for quality.
With regards to writing, if you can create quality information with AI faster than before, you will be more profitable.
If you are going wide, and throwing spaghetti at the wall with crappy writing, you may also be able to generate $1 from a million people vs. $10,000 from 100 people.
AI threatens the middle for sure, and will force all of us to change our strategies.
You have significantly more faith in the market than I.
> People who have money will always pay a premium for quality.
The assumption here is that the Venn of "people who have money" and "people who can recognise / care about quality" is closer to a circle than a figure eight - that has not been my experience.
It also assumes that quality cannot be achieved with ChatGPT via quantity. Like "Make me 20 webpages on this topic", then have them tested and evaluated by Amazon clickworkers, then regenerate based on the best 3 until the quality goal is reached.
Well, yeah - I guess we're even calling into question the very definition of the word quality here. Your comment seems to imply that quality is a metric that correlates with quantitative metrics of mass appeal, rather than personal & individually diverse concepts of subjective quality, which would be more what I was referring to. But both can be equally valid given the context.
More importantly, it assumes that this week's LLM is the last one ever, ignoring the improvements and iterations we are seeing on what feels like an hourly basis.
This is also my opinion. AI will dig a huge gap which does not yet exist between the best and the worst products (articles, code, whatever your product is). People will only have to navigate in this new landscape filled with a huge majority of mediocre crap and a small minority of gems.
> AI will dig a huge gap which does not yet exist between the best and the worst products (articles, code, whatever your product is).
Google results and app stores are already filled with low-effort trash. If this gets worse, then the importance of "influencers"[1] as curators of the trash pile rise even more.
[1] Or YouTubers or whatever else you want to call them.
>This seems like a wild misrepresentation of what being a writer is. Writers communicate ideas.
That's kind of irrelevant. People paying others for writing gigs just care about words on a page, covering a certain topic. Doesn't have to be artistic or to contain novel ideas.
So, regardless of whether some writers make it, hundreds of thousands of people currently making a living writing copy can be (and will be, or already are being) replaced by AI text. If it gets a little better at creatively rewriting news agency feeds, many journalists will be too (rewriting feeds is what a huge number of journalists actually do, not original reporting).
At best they will be reduced, for less and more precarious pay, to glorified prompt writers and checkers.
>If you’re really a writer
Yes, and real true Scotsmen will be fine too. The rest, less real Scotsmen, not so much.
The people that could be theoretically replaced by AI text could already have been replaced by Fiverr contractors or Mechanical Turk workers, but by and large we don't see it.
If someone is able to delegate a task to a $5 worker or a 50 cent API call, and produce meaningful output, fine. But we have no such examples of tasks being able to be magically delegated in such a way that creates a cohesive, meaningful, broad whole publishing company and product together.
Nobody has used Fiverr to replace the NYT editorial staff with a paper that's just as successful. And I don't think NYT would find value in delegating their article creation to ChatGPT either.
Context, collaboration, high level planning, fact checking, deep understanding of market mechanics and sentiment, raw talent, etc, matter for writing news. I'm not even a fan of NYT or most modern journalism (I hate yellow journalism), but I don't think you can automate away an institution with a couple of GPT bots. I know everyone is breathlessly assuming that's the case today, but nobody is producing anything compelling.
"Genius is one percent inspiration and ninety-nine percent perspiration" as the saying goes. But for some reason GPT heads think that equation has been flipped. I don't buy it.
>The people that could be theoretically replaced by AI text could already have been replaced by Fiverr contractors or Mechanical Turk workers, but by and large we don't see it.
Tons of writers and graphic designers and such are already replaced by Fiverr contractors and other similar cheapo gig workers.
>Nobody has used Fiverr to replace the NYT editorial staff with a paper that's just as successful.
That would be like the last holdout. A good 90% of websites don't have such quality constraints or reservations.
It's like someone saying "AI will replace graphic designers" and someone replying that David Carson won't have a problem. Duh, but most jobs don't go to the handful of David Carsons of the trade (or even anybody 1/100th as famous)...
> Tons of writers and graphic designers and such are already replaced by Fiverr contractors and other similar cheapo gig workers.
Really? NYT is employing this? What organization is doing this (outsourcing their core competency) and succeeding bigly?
> That would be like the last holdout. A good 90% of websites don't have such quality constraints or reservations.
A good 99.99% of websites don't even matter, are just private personal sites with no product or business attached to them, so I'm not even sure what the assertion is there. Of course there's a power law for quality, impact, profitability, etc among websites just as there is for anything. If someone uses GPT on their personal blog site, that 3 people a year actually read, I'm not sure that's proving anything.
> It's like someone saying "AI will replace graphic designers" and someone replying that David Carson won't have a problem. Duh, but most jobs don't go to the handful of David Carsons of the trade (or even anybody 1/100th as famous)...
I don't think AI, even at present capabilities, is capable of replacing graphic designers at any business where graphic design is the core competency. A skilled graphic designer is just going to have their productivity enhanced by having access to AI, and they're going to still be able to perform at far higher and consistent quality than "a prompt engineer". Consequently, however, demand for custom imagery will increase as people realize they can do more than simply add a stock image to an ad campaign.
But just as "stock photo libraries" didn't kill the photography profession, nor did even putting a HD camera in everyone's pocket with amazing filter and stabilization effects...I predict that AI graphic tools will not replace graphic designers.
>Really? NYT is employing this? What organization is doing this (outsourcing their core competency) and succeeding bigly?
What's with the NYT again? I already wrote that the NYT will be the last holdout, i.e. NOT using this, but that means nothing, because 99% of news and content websites are not the NYT and don't have the same constraints and reputation to uphold.
>A good 99.99% of websites don't even matter, are just private personal sites with no product or business attached to them, so I'm not even sure what the assertion is there.
I'm not talking about personal websites. This is in the context of news and content websites, not personal homepages. We were talking about websites currently employing journalists and writers, and the danger AI would be to their jobs, remember?
>I don't think AI, even at present capabilities, is capable of replacing graphic designers at any business where graphic design is the core competency.
Which is irrelevant. It can still replace them where graphic design is not the core competency, but still provides a livelihood for hundreds of thousands of people - small time web designers, logo makers, etc. working for small businesses, individuals, and so on.
Dismissing this as "where graphic design is the core competency" is more or less again like saying "David Carson and big design brands will do fine".
>But just as "stock photo libraries" didn't kill the photography profession, nor did even putting a HD camera in everyone's pocket with amazing filter and stabilization effects...
You'd be very surprised. Photography as a profession is nearly dead. The vast majority of professional news photographers, whole teams, have been fired from all major and minor newspapers that used to have them, and the profession in general has depreciated greatly.
If you mean "there are people who still do it as a profession", sure, it wasn't killed. People still offer horse carriage rides too, at least photography fares considerably better than that.
Number of jobs, 2021: 125,600
Job outlook 2021-2031: 9% growth (faster than average)
> The vast majority of professional news photographers, whole teams, have been fired from all major and minor newspapers that used to have them
That's because no one reads newspapers any more, not because of automation. The internet + social media killed the newspaper, but it didn't decrease overall demand for professional photography.
First, those are much-depreciated jobs. It's as if programmers became minimum-wage "AI prompt checkers" and instead of 1 million programmers we got 2 million of them (some covering what Excel jockeys do now). There are more of them, but hardly what it was before.
Second, the growth must be compared with overall industry growth.
"Both the Bureau of Labor Statistics (2014) and IBIS World (2013) report that the
photography industry is in decline. While the Bureau of Labor Statistics (2014) expects to see the number employed in the photography industry increase by 4% between 2012 and 2022, this is well below the average of 11% increase across all industries. IBIS World (2013) reports a decline in the UK between 2009 and 2014 of 4.1% but also comments that this is a long term decline. Clifford (2013) and Photo Counter (2012) both support the view that the photography industry has been declining for some time. Clifford (2013) suggests that photographers have no clearly defined career paths anymore and that more photographers entering the market find themselves doing a wide variety of photography assignments to make a living. The two metrics used to measure the growth and decline of an industry are generally the changes in revenue and the changes in the number of people employed in that industry. Yang (2010) explains that the distinction between amateur photographers and professional photographers is blurring and to an extent becomes irrelevant since many amateur photographers / hobbyist photographers are selling their work on micro stock photography websites. Amateur photographers selling their images on micro stock websites are not likely to be included in the metrics used for measuring the growth / decline of the photography industry".
> because 99% of news and content websites are not the NYT and don't have the same constraints and reputation to uphold.
Ok, show me one meaningful publication doing this, that's what I'm getting at. I used NYT as my original example to illustrate you can't just fire a news room and replace it with a GPT bot. What business can you do this with, where writing is the core competency?
How about graphic design? Is there an example of a graphic design firm that has fired its designers and replaced them with Stable Diffusion et al and a single PM to achieve the same output? Or even reduced their head count because these AI tools have so dramatically improved productivity that one designer can now do the work of 2 or 3 or 10?
> It can still replace them where graphic design is not the core competency (small time web designers, logo makers, etc. working for small businesses, individuals, and so on.)
Let's work with a concrete example here. I'm going to work under the assumption that there is a self-employed web designer out there who, despite the existence of hundreds of template web app creators (such as Wordpress, Webflow, Wix, etc), no-code tools like IFTTT and Fiverr contractors on demand, has managed to carve out a full-time job for themselves designing websites.
In order to carve out that niche in the presence of already existing WYSIWYG tools and outsourcing services, they are already doing something different than just boiler plate web design. Before bots come for this web designer's business, these bots would first have to make these tools completely obsolete. And yet, I know of no one today suggesting that an ordinary small business bypass Webflow for ChatGPT + Stable Diffusion to create their web site.
If ChatGPT + Stable Diffusion is really the dangerous job-killing combination that everyone thinks it is, it first has to replace the boiler plate tooling and the Fiverr contractors. Then AI's capabilities need to expand to accommodate long-term client relationships, long term contextual awareness of client needs, realtime integration of marketplace demands, institutional expertise and deep product knowledge and awareness about many implicit things not explicitly documented for any GPT to consume (and not necessarily possible to document, either). It would need to have wisdom to understand when it shouldn't, in the context of often incredibly vague product requirements. It would need to be able to sometimes even forcefully push back against client demands and recommend an alternative path altogether.
These are the qualities of a good web designer (and indeed, any creative worker), and they're just a small sampling of the reason why the already existing boiler plate services haven't replaced the need for good people.
I think there's more than that. As someone who has written millions of words in my life, I agree with you - the mechanical transformation of my ideas is not the hard part. However, there will be people who haven't developed their own style or their own fluency in the mechanical transformation, and now they have the ability to do so almost as well as I can.
An expert reader will be able to tell the difference. But assume that these people have read and understand the subject but have not put the millions of words together yet.
At some point, it will become like a calculator. I can multiply 20% off a dinner bill faster than getting the calculator out, but people won't spend the time learning the mental math and they will do just as good a job.
It takes me a little while to get from a bulleted outline to a crisp couple of paragraphs.
I now use ChatGPT to do initial composition from the bulleted list, and then a first edit. Then I do final edits myself. It saves about half the time, and it feels like it saves more than half of the energy.
ChatGPT's revision of my answer:
I'm proficient in writing and communication, but I require some time to transform an outline into a concise and polished piece of writing. To streamline the process, I use ChatGPT for the initial draft and first edit, and do the final editing myself. This approach cuts down the time and effort required significantly.
Maybe those two versions mean the same thing in whatever idea vector space ChatGPT operates in, but ChatGPT's version is missing a lot. It didn't keep your "half the time, more than half the energy" bit, which I liked.
The former's informal and my default commenting voice. The latter is suitable for more uses.
I can pull it towards whatever voice and style I want.
> I ain't half bad at writing and talkin' to folks.
> I can whip up a decent couple of paragraphs from a list, but it takes me a bit of time.
> That's why I've enlisted the help of ChatGPT for my first draft and edit. Then I take over for the final touch-ups. It's a real time-saver and feels like it saves even more brain juice.
Yes exactly this, Stephen King will always be Stephen King. You can get chatgpt to generate something like Stephen King, but people know about him and his brand/style so it feels like cheap imitation.
Everyone else? They won't get the ability to establish their brand/style (assuming chatgpt gets faster at updating with recent information/sources).
I can see us relying far more on centralized institutions (again). Only reading from certain publishing houses, writers focusing more on trying to get into a certain house vs content of their work.
That is a theme in one of Neal Stephenson's novels. He predicted the rise of incoherent, rage-inducing algorithms for profit. He even implied it would work exceptionally well on right-wing rural religious people from middle America. People with money and taste paid for a custom feed, curated by humans.
We can add this to a growing list, along with Google Earth and Second Life.
I don't think he claims this is being a writer. He calls it word smithing, not writing. Word smithing is a part of writing, but far from the only part.
To compare: as a C programmer I spent a lot of time creating custom collections: arrays, linked lists, ... Then in the 1990s, C++ with templates came along and had it all built in. I feared this, initially. Some fears were technical, e.g. would the compiler produce something reasonable? Deeper ones were psychological: Would I still have interesting work to do?
Looking back on these worries makes me look silly. Of course there was interesting programming work left after I stopped manual collection creation. In fact, it forced me to level up, made me more productive. Today, I consider a language without built-in collections as inferior.
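To make the before/after concrete, here is a minimal illustrative sketch (mine, not the original commenter's): the kind of per-type node you used to hand-roll in C, next to the generic std::vector that templates made standard.

    // collections.cpp -- illustrative sketch only
    #include <iostream>
    #include <string>
    #include <vector>

    // Pre-template era: every project hand-rolled a collection per element type.
    struct IntNode {
        int value;
        IntNode* next;
    };

    int main() {
        // Post-template era: the standard library ships a generic, growable collection.
        std::vector<std::string> names{"ada", "grace"};
        names.push_back("linus");              // growth and memory handled for you
        for (const auto& n : names)
            std::cout << n << '\n';
        return 0;
    }

The win isn't that the second version is clever; it's that nobody has to re-implement and re-debug the first one in every project.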
Technology did this to all of us again and again. Banks had rooms full of human computers, calculating interest on paper. They were obsoleted by mainframes calculating a whole bank's worth of interest in less than a week. Then the spreadsheet came along, where people could simulate a week of finance batch runs in minutes.
I think writing is safe. We will still value people capable of communicating a book's worth of ideas. Only now there is a tool capable of taking angry word salad and turning it into readable sentences. We all have to go through an uncomfortable time leveling up, and in 5 years ChatGPT will be the new normal and our worries will look silly.
The vast majority of paid writers are just summarizing other people’s thoughts. Like people writing marketing blog posts are just writing what the CEO tells them to, etc.
Right, most professional writers (and editors) are working on commercial, not-particularly-creative content of one form or another. Ad copy, docs, internal training manuals, reports, powerpoints for the execs to read (LOL), memos, that kind of shit. Corporations and governments throw off an absolute fuckton of writing per year, a fair bit of it created, or at least edited, by writing & editing professionals.
All of that just got way easier to do. A lot of it can now be done by people who couldn't have, before, because a machine can fix their writing so it's sufficiently non-shit to be acceptable for many purposes. The rest, that still needs some decent writer or editor involved, is likely to see a huge increase in per-worker productivity, which will either result in similar levels of employment with far higher total output, or... in most of the people doing it now, having to switch careers.
My expectation is that we'll just turn editors into writers-who-also-edit—aided throughout by AI tools—and have non-writers take on writing as part of their job (see: what happened to secretaries—we're all secretaries, now) and most of the writers are going to be out of a job. I reckon this'll hit commercial writing (where most of the writing jobs are) and high-volume-low-value fiction writing (e.g. the romance novel scene, middle-grades shared-penname book series, that kind of thing) hardest. Ghostwriters, too—one who embraces AI tools and uses them well will be able to do the work of five or more who don't, so we'll need far fewer people doing that work.
ChatGPT has failed me on code. I also run text through it. It is NOT good enough to convey what a good writer can. If anything, it reads like an over-zealous A student who was never too bright and just followed all the rules. That's the best case, where it doesn't completely misinterpret.
Seconding this. The code it generated even for basic stuff like an NGINX config is a complete waste of time, even when you feed it documentation for stuff like URL rewrites, it will just hallucinate things that don't exist. Worthless.
This was illustrated by comparing his written argument, which was compelling, to the ChatGPT transcripts, which were insipid and bland, even when given the germ of an interesting point.
Maybe in a year ChatGPT will be retrained on a million AI thinkpieces and it will be able to construct an essay with some interesting points. But humans need to come up with and communicate those interesting points first.
> This seems like a wild misrepresentation of what being a writer is. Writers communicate ideas. Their ideas.
That's an idealization. Most writers are communicating the client's ideas, or trying to come up with ideas that they think the client would like. Most writers are trying to appeal to a particular audience for money, and new ideas mostly annoy their target market, who want a new thing pretty much like the last thing.
Aside from that; the thing that differentiates writers is their skill in writing words. A hell of a lot more people have "ideas" than have writing skill. There's absolutely no reason to think writers have better ones; I'd even predict that they would have worse ones because they spent so much time studying how to write, so other people have more time for varied and exotic experiences and learning.
Writers are usually writing about other people; people unlike themselves. What happens when those people can write about themselves rather than being interpreted?
> Also, I still maintain that I can communicate ideas significantly faster by just typing them than by using a tool like ChatGPT.
But a vast number of people don't get paid for writing that communicates, but rather for spewing words that vaguely cover a topic, for writing that's a rewrite of already-written policies in different formats (FAQ versus Mission Statement versus etc.), or for marketing newsletters that want high-value adjectives put together in a plausible way. ChatGPT can apparently do all this. I wouldn't count on fewer people being employed versus more pap being produced, but we'll see.
I think that's inadvertently missing the point. I assume this was in broader context of all professions, not just "professional writer earning their money from just writing".
Historically, a [sysadmin|developer|security professional|etc] who could "put words in a row" in a way that their manager, client, functional analyst, etc could even begin to comprehend, was worth their weight in gold. The majority of developers and sysadmins (and I speak as an insider of my own folk here:) could not communicate effectively or efficiently with people of different contexts & backgrounds; and in particular could not summarize key points that align with somebody else's priorities. Heck, in my current project, I have a team of a dozen people who, yes amongst many other things, basically translate from the technical description of a problem or incident to a functional/business description of the problem or incident (and yes, I've seen "Office Space":). It's just not necessarily an inherent skillset of an average techie. Nor of most other professions.
So it's not that chatgpt will replace a unique top-notch professional writer.
But it can hugely augment the other 90+% of bell curve when it comes to putting words in a row.
You are ignoring the fact that most of the writers are bullshit vendors regurgitating books, quotes, themes and things they read from somewhere else.
If anything, ChatGPT will basically filter out these sophists because it is a Chad-Sophist. The important thing now is "depth" of knowledge. Of course, good generalists with a wide array of mental models have their place, but in the world of LLMs an individual's depth of knowledge trumps.
> You are ignoring the fact that most of the writers are bullshit vendors regurgitating books, quotes, themes and things they read from somewhere else.
I don't understand the vitriol HN has for professional writers and anyone who might work in gasp marketing.
Replace "writers" with "programmers" and your statement is still true but you don't hear it being shouted from the rooftops on every post about programming on HN.
This is a petty argument -- there are more writers than programmers, and then there is the context of the situation, where we are talking about writing.
I grew up in a culture with the idea that there is no bad book -- trust me, I wasted years chasing bad ideas from shallow content. So the bullshit vendors earned my vitriol.
Go build it. See how many people discover your GitHub repository without so much as a coherent explanation of what problem your code solves. (And, yes, that's marketing.)
If I had a nickel for every website I opened up and then closed because after 5 minutes I had no idea what the company or product did or why I should care, I'd be rich. Might be something really clever. Dunno. I just know it's 5 minutes of my life I'm not getting back because some techie thought their genius would be obvious to all.
>Go build it. See how many people discover your GitHub repository without so much as a coherent explanation of what problem your code solves. (And, yes, that's marketing.)
Only in a pedantic sense. We could live with non-commercial marketing, if you want to call it that, without an advertising budget, ads, sponsored articles, and other such BS - like, say, the author promoting their FOSS on their blog. Or crafting a nice looking page with an enticing description.
As for commercial companies not being able to sell their products in that case, the world's smallest violin plays a heart-felt tune for them...
It's by no means pedantic. A lot of companies don't do a lot of advertising or sponsored articles but do spend a huge amount of money on informational articles (i.e. content marketing), trade show booths and paying employees who have talks accepted there to attend, developer days, webinars, meetups, other forms of engagement with developers and other users, etc. (And coherent copy on the website which sure doesn't come for free--which is also marketing.)
Because regular honest word of mouth is not what people refer to as marketing, and it has been a thing since before marketing was a career.
A marketeer might want to stir up "word of mouth" or build a product's reputation artificially. But I refer to good ole organic word of mouth and reputation built without marketing.
You know, just by buyers recommending it to other buyers, and the product getting recommended because they use it and see it's good. The way those things are normally understood, just like how "love" normally is understood as two persons taking a liking to one another, and not e.g. as in some guy stopping his car in a shady end of the city and paying $50 dollars in exchange for sex.
>Last thing that I’ll add is that you seem to be quite active on this site - which is a marketing campaign for a VC firm
Oh, an ad hominem! Nice. I could also work in marketing and hate marketing; it doesn't change the point. And of course, I like HN despite it being a "marketing tool", not because of it. The "marketing campaign for YC" part is one of the main things I dislike about it. And the question wasn't whether marketing could ever produce something cool; but if it were, I could add that I like some funny TV ads too.
>You can keep moving the goalposts of what true marketing is though
Really? As if differentiating between some FOSS team making a cool site and promoting their project on some forums, for example, and corporate marketing, with a budget, a marketing department, ads, astroturfing, and all that, is impossible, right? "It's all marketing"
> but in the world of LLMs an individual's depth of knowledge trumps
My favorite book genre is roughly “Academic writes popular book after 20 years of researching a topic”. That will be very hard to replace with ChatGPT.
Someone like Gladwell though … yeah I can see LLMs being scary for that genre. Not quite yet, but soon.
You are right. The stories are fantastic and he really makes the underlying research shine. I can see LLM + Human working great for that kind of writing.
Well, Gladwell constructs great stories out of a very selective reading of research. Not that many writers don't do this to some degree, but Gladwell in his more recent career has been widely criticized for picking specific research and constructing a compelling narrative from it as opposed to telling a more nuanced tale.
Yes, content farms will be able to produce low-value content even more cheaply. Any “writer” who leans too hard on ChatGPT will find themselves producing material that feels indistinguishable from all the other low-value crap that floods the internet.
Everybody can have ideas, even people who aren't writers. A writer is someone good at the translation from idea to paper, which is what LLMs are uniquely suited to assist with.
I have already used LLMs to publish my first book. Specifically, I used sudowrite + chatGPT. LLMs are a true equalizer in writing. Correcting grammar, improving and rephrasing wording, and even helping to brainstorm and organize.
If you don’t have an editor or aren’t rewriting your prose, you may indeed be communicating faster and clearer than most people—but not at the level the author talks about. Going back and hunting for strong, crisp verbs is not some crutch for weak writers; it’s the minimum standard of care when writing something for mass consumption.
> Going back and hunting for strong, crisp verbs is not some crutch for weak writers; it’s the minimum standard of care when writing something for mass consumption.
For mass consumption, sure. But the best writers almost never wrote for mass consumption. Some of them did acquire a mass following, but it usually was not a goal; their goal was to communicate the ideas they developed to a potentially interested audience. Ads and dark patterns are their enemies, they distract readers from ideas the writer wants to convey.
The last few decades saw a meteoric rise of blogs and other forms of writing. A small portion created something new, but the vast majority were regurgitating old ideas in a (slightly) new light or peddling outrage. I will not cry if 90% of those writers stop writing. This will also increase chances that whoever is still writing does so because he has ideas to capture, not because he optimizes for money and is happy to push ads or ideas based on today's ROI. My 2c.
You misunderstand me: I agree that lazy SEO spammers are poor writers. My beef is with the GP's dismissal of writing slower than the speed of thought—when the best writers pore over these things more, not less. Quoting Thomas Mann, the 1929 Nobel Prize winner in Literature: "A writer is someone for whom writing is more difficult than it is for other people."
I did not read it this way. I read that post as arguing that fast writing is neither particularly important (which is what I read to be the main point) nor that hard (which, I think is what you focused on).
There’s the difference between being a writer and managing writers. If you are writing yourself it is at the speed of thought, if you are hiring somebody else to write you can possibly free yourself up to increase your throughput but you are going to spend time communicating expectations and reading/editing the results. If you develop a good relationship with the writer the effort to do all that can be small but if you don’t it sure can be tiring. (e.g. the New York Times has a strong and consistent style enforced in editing, people who write regularly for that paper learn to write in that style to make the editor’s job easier, people who write occasionally for that paper get their work seriously mangled by the editor)
As a non-native English speaker, I often have trouble clearly transforming abstract ideas from my head into a text representation, with grammar added on top. ChatGPT can definitely help with that and increase the supply of clever ideas that can now be wrapped in well-written text.
I imagine you can put abstract ideas into words in your primary language more easily, so I wonder if you will then get the best English language version via an LLM or from a translation of your original text. Just idly musing since there's several variables and judging the outcome is partly subjective anyway.
>people who use ChatGPT thinking it will put words in a row will be able to generate a million stupid articles a day
We already see this with book after book churned out with no real story attached. There are already way more people who write well and have little to say.
I completely agree. This author's first several paragraphs were terribly written. No concern for grammar or sentence structure, and a love of ampersands that speaks to a middle-school level of writing.
Someone suggesting that their writing skills are now "obsolete" due to ChatGPT might not have had appreciable skills in the first place. The troubling part is they now see the existence of this language model as a reason to entirely abandon any effort at improving.
I think this might be a common misunderstanding of writing. Lots of people want to be writers and seem to focus on writing as a craft, as if once you get good enough at writing you will start having interesting ideas. To me, a writer is someone with interesting ideas who learned to write well, not someone good at writing who learned to have interesting ideas. The latter is infinitely less trainable, or at least it seems to be.
I see ChatGPT mostly as another invention in terms of bringing words to people after texting, internet, newspapers, printing press, books and writing itself.
It's funny that each time people were criticizing each new such technology that it's so easy that people will produce so much garbage content that humanity will drown in it.
It's great for writers with writer's block or those of us who are better editors than writers. I can use Chat-GPT to produce something bad, but the way it's bad gives me a great place to start.
> Anyone can now put words in a row pretty much as well as I can.
This suffers the same fallacy that people who think ChatGPT is somehow going to upend coding suffer from.
Can GPT-4 write code and sentences? Yes, it can, but it makes a lot of subtle mistakes. Good programmers are often good at:
- decomposition
- thoroughness
- identifying important components
My hypothesis is that AI will make those things summarily more obvious if a programmer doesn't have them. A programmer that's not thorough will not have the inclination or drive to review and understand all output, so the output will harm their work whether that's writing or code. AI can't decompose anything sufficiently complex as a function. It can kind of get close, but again, it misses the objective of decomposition because decomposition isn't a language function, it's a higher order abstract function. Last, AI isn't going to recognize importance or priority, especially if it's on-going. Try giving it a rules system and see if it follows the rules to a T. IME, it does not.
AI will make good programmers better, and some okay programmers better, but I have a feeling it will also make bad programmers worse.
I agree. Trying to get some non-trivial code out of GPT that isn't a mindless boilerplate feels like helping a clueless schoolmate in an IT class write their code for the teacher's assignment when, sure, they do try their best, but it's clear that they don't really grasp it and just want to finish the assignment and forget about it as soon as possible.
You can get a good result, but you need to try really hard. It only seems useful for when you know exactly what to do, but you don't feel like typing it all out - which happens, but in my experience the most valuable part of programmer's job is when you figure things out that nobody around knows how to do.
> Trying to get some non-trivial code out of GPT that isn't a mindless boilerplate
For some of us, the boilerplate code generated by ChatGPT is incredibly valuable. I have been utilizing it to create snippets of code for languages that I am not familiar with in terms of syntax. I understand that I may be criticized for this, however, I am not an expert in all of the languages I am required to support. Thus, I often find myself struggling with the syntax while understanding the overall concept I am attempting to accomplish.
Example prompt: "Write me a proof of concept in language X that serializes JSON then POSTS to a HTTP API endpoint."
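For illustration, here is roughly the shape of boilerplate such a prompt might produce. This is a hypothetical sketch in C++ (not actual ChatGPT output), assuming the libcurl and nlohmann/json libraries are installed; the endpoint URL and payload fields are placeholders.

    // post_json.cpp -- hypothetical example of the kind of output such a prompt yields
    // Build (assuming libcurl and nlohmann/json are installed): g++ post_json.cpp -lcurl
    #include <curl/curl.h>
    #include <nlohmann/json.hpp>
    #include <iostream>
    #include <string>

    int main() {
        // Serialize a small payload to a JSON string.
        nlohmann::json payload = {{"name", "example"}, {"count", 3}};
        std::string body = payload.dump();

        // POST the JSON to a placeholder HTTP API endpoint.
        CURL* curl = curl_easy_init();
        if (!curl) return 1;

        curl_slist* headers = curl_slist_append(nullptr, "Content-Type: application/json");
        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/api/items");
        curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
        curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body.c_str());

        CURLcode res = curl_easy_perform(curl);
        if (res != CURLE_OK)
            std::cerr << "POST failed: " << curl_easy_strerror(res) << '\n';

        curl_slist_free_all(headers);
        curl_easy_cleanup(curl);
        return res == CURLE_OK ? 0 : 1;
    }

Even with a snippet like this in hand, you still have to check that the endpoint, headers, and error handling match the actual API you're targeting, which is where the verification time the replies below describe comes in.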
The thing is - how often do you end up needing such snippets? How much more useful is receiving it from ChatGPT than looking it up in a search engine?
Continuing with the theme of your example, at least in my experience most of a programmer's time is spent figuring out which endpoint you need, why and when that endpoint sometimes returns something you don't expect, and how to deal with that happening. Writing JSON serialization in a language your head has gotten rusty with may be faster with ChatGPT than it would be otherwise, but the overall benefit seems negligible, especially when you need to watch out at every step for convincing-looking nonsense.
All in good faith, I believe that receiving assistance from ChatGPT is far more useful than looking up snippets on a search engine. Instead of delving through semi-appropriate snippets, I can concisely describe what I need in one or two sentences and ChatGPT typically provides an accurate result, unless the query is opinion-based, complex, or obscure. I am not expecting perfect code since I take it upon myself to polish the output.
I am now experimenting with ChatGPT parsing our own historical documentation, such as API references, and even basic markdown notes. When it comes to API endpoints I can get 80-90% accurate boilerplate. Admittedly, ChatGPT cannot provide assistance in understanding unexpected behavior, unless perhaps it is due to a misunderstanding of the language. And, though it is true that resolving API endpoint issues can be a time-consuming task, this does not negate the time savings that ChatGPT can provide in other situations.
I would urge you to try it out in your own workflow before calling shots on if the usefulness is negligible or not. And even then, I'd say not every tool is for everyone. This thing is really helping me out and I'm seeing long-term viability for it in my workflows.
> I would urge you to try it out in your own workflow before calling shots on if the usefulness is negligible or not.
What makes you think I didn't? I already stated that:
> It only seems useful for when you know exactly what to do, but you don't feel like typing it all out - which happens
It does pretty well in these kind of situations when the code in question is trivial, but in my experience the time savings coming from that are minor. In fact, its main value comes from my ADHD mind not being able to focus on boring stuff I already know how to do - in those cases, GPT gets me the boilerplate I can start operating on right away, which sure is helpful as otherwise I'd just end up in a web browser scrolling some stuff. Other than that? It seems like verifying ChatGPT's output is much more time consuming than doing it on your own as soon as you get out of "problems copy'n'pastable from StackOverflow" territory.
In good faith I really don't know what you're arguing for.
If you re-read our conversation I express that simple snippets of code from ChatGPT have been accurate enough, ultimately saving me time (paraphrasing). You then push back on me with the following:
> The thing is - how often do you end up needing such snippets? How much more useful is receiving it from ChatGPT than looking it up in a search engine?
> [...] but the overall benefit seems negligible, especially when you need to watch out at every step for convincingly-looking nonsense
I doubled down, doing my best to elaborate where it has added value to my workflow, when it breaks down for me, and addressed specific concerns in your comments. When you say "overall benefit seems negligible" - I read that very clearly as overall the benefit (of ChatGPT) seems negligible (regarding the context of our conversation). That context to me was that ChatGPT helps me with simple snippets and POCs. I did my best to politely rebut that with my personal and professional experiences regarding the tool.
Now you say:
> its main value comes from my ADHD mind not being able to focus on boring stuff I already know how to do - in those cases, GPT gets me the boilerplate I can start operating on right away, which sure is helpful
This is directly compatible and agreeing with what I feel I have been advocating for since my first message in this conversation - that using ChatGPT for simple boilerplate code can be valid (paraphrasing). But, I feel like this is contradictory to, and in conflict with what you previously said: "the overall benefit seems negligible".
---
Overall this conversation seems to contradict itself, and with that it's confusing and frustrating to me. Sorry.
> that using ChatGPT for simple boilerplate code can be valid
I don't disagree. I just don't see it as "incredibly valuable", it's way too erratic for that. When it works well it does help, but it works well in such a small minority of a software developer's tasks that its impact is pretty much negligible overall - and what I meant to point out is that this is an opinion coming from someone who does find it useful in certain cases.
It makes it a bit easier for me to not get distracted by my web browser, sometimes. It does not change my quality of life as a programmer at all. It would probably be more valuable to delegate these tasks to a junior dev on my team if I had one - the junior would learn from that experience and get closer to becoming a senior; ChatGPT won't.
> I don't disagree. I just don't see it as "incredibly valuable"
Lol - that is by definition disagreeing with the first sentence that I said in our conversation. You disagree that it is "incredibly valuable" and have spent every comment saying that the overall impact of ChatGPT for developers is "negligible overall."
Have you read what you're replying to? I said that I don't disagree "that using ChatGPT for simple boilerplate code can be valid". It's a completely different thing than what you're implying.
We're commenting under an article that talks about how ChatGPT supposedly makes most human skills worthless. Similar sentiment can be seen in many places when it comes to programming. It can't do that if all it helps with are some trivial snippets sometimes. It's helpful for getting boilerplate going, but getting help with boilerplate has a negligible effect on a programmer's work.
Yes. I find it sorta offensive that you assume otherwise.
---
Let's boil this down. I said "the boilerplate ChatGPT provides me is incredibly valuable," and further elaborated that using the tool is "valid."
You disagree with "incredibly valuable" but "don't disagree" with the tool being "valid."
So - I'm saying. We disagree that boilerplate output from ChatGPT is "incredibly valuable" and frankly that's just gonna be subjective regardless. Let's agree to disagree here. I just find it funny that you say "I don't disagree" then outright disagree with the exact language I used in the first sentence of our conversation regarding the value of this tool.
---
> We're commenting under an article that talks about how ChatGPT supposedly makes most human skills worthless. Similar sentiment can be seen in many places when it comes to programming. It can't do that if all it helps with are some trivial snippets sometimes.
And honestly I feel like this is why our communication and conversation is frustrated here. I don't see our conversation as being a proxy for the infinitely complex and complicated "making human skills worthless" conversation... personally, I just wanted to say "I use this tool for X, and personally have found great value in it."
A significant issue for some individuals is their inability to identify or articulate the necessary steps to achieve the desired outcome. They may have a general sense of the desired outcome but lack the clarity to outline the specific actions required to reach it.
This challenge could be attributed to the difference between analytical thinking and intuitive or holistic thinking.
However, I think that the crux of the matter may lie in the "thinking" aspect itself. Some people have been conditioned not to think, either through training or external influences, but this does not mean they lack intelligence (nobody is stupid).
With ChatGPT, the paradigm shifts dramatically. The demand for thought and engagement is ever-present, as repetitive and mindless tasks become obsolete.
> With ChatGPT, the paradigm shifts dramatically. The demand for thought and engagement is ever-present, as repetitive and mindless tasks become obsolete.
I see what you mean, but
> Some people have been conditioned not to think, either through training or external influences
I believe that chatgpt & co. are going to make this much worse for the majority of people, at least in the medium term.
A large part of this conditioning happens in schools and universities, and in general, students always look for the path of least resistance towards a passing grade (in this case, chatgpt all the way).
So, you're suggesting the tech is on an S curve, and you might be right, but I think the top of that S curve is going to be much higher than you expect.
Sure they can. http://sumplete.com is a new puzzle game, which its author says was invented by ChatGPT. If ChatGPT couldn't do anything new, then how did that game come to be?
Outside of a very limited test area and test market, self-driving cars have yet to even be used by the general public. Meanwhile ChatGPT's been adopted by an estimated 100 million people, many of whom are using even just the free version to deliver value to the companies they work for. Maybe if ChatGPT were all hype and unavailable for the masses to try, it would sound like a lot more hot air, but it's open to anyone with an account, who can verify claims on their own. Not everyone will or has to walk away impressed, but when it does in a couple minutes what would have taken you a few hours, it's hard not to be.
In heavy rain and snow too?
Because the few videos I saw had pretty easy weather conditions, and even there machines aren't as good as concentrated drivers.
The most important skill in a rapidly changing world is the ability to learn new skills - and skills build on other skills. If you don't understand the fundamentals of some area of technology, ChatGPT isn't going to be of much use in helping you build anything that looks at all professional. On the other hand, refusing to learn how to write effective LLM prompts makes little sense.
Another important reality is that if you buy into what's being advertised as good for you, you may be setting yourself up for a long-term dependency, which if taken away, will leave you with less than nothing (rather like drug or alcohol addiction.)
For example, how many people, if transported to a pre-industrial society, could play any meaningful role in recreating modern industrial civilization? How would those conversations go?
"We had these wonderful things, the Internet and devices to communicate across long distances and we had fine textiles woven by machines controlled by computers and airplanes and..."
"Oh, and how do you make such things?"
"Ummmm.... well, first we're going to need some silicon ore..."
Skills are only valuable in the context of economic activity in a society if other people see them as valuable. As far as ChatGPT, anyone becoming absolutely dependent on LLM models to do their job may find themselves in a world of hurt if those models suddenly become unavailable for some reason.
It's called "sand", btw. And you'd actually also need boron and phosphorus, to dope it with... hmm.
In any case, one shouldn't try to jump over the steps. Black metallurgy first, parts standardization second, then coal chemistry, then toy-scale electricity, then steamworks, etc. Not to mention radical changes in the structure of human societies! Good luck with that.
> The most important skill in a rapidly changing world is the ability to learn new skills
This is what AI is. It is a skill learning machine that can outpace your ability to learn given enough training data. The answer of just learning a new skill will not hold up forever. That window will continue to shrink.
> anyone becoming absolutely dependent on LLM models to do their job may find themselves in a world of hurt if those models suddenly become unavailable for some reason.
Nobody can predict the future. But I think we have enough experience with AI trends now to see where this is going.
Current LLMs are like the mainframes of the 1960s.
When the technology gets speeded up, streamlined, and wrist-watched we're going to be in completely unknown territory - culturally, politically, economically, and socially. Never mind technically.
Alternatively, we're approaching the limits of what can be done with current approaches, and get mostly stuck there for 20 years like happened to AI in the 80's.
Change will come eventually and it will be painful. But I don’t want my kids to deal with it - I’d rather take it head on and deal with the consequences than pray that we push it to the next generation.
Another perspective, wouldn’t it be reasonable to believe that the faster a technology is developed, the faster it will reach its theoretical upper limit of improvement (assuming there is one)?
To use your comparison as an example, mainframes took 40ish years to be replaced by personal computing and distributed systems, whereas smartphones reached modern parity within 8-10 years of the original iPhone and have made more or less incremental changes since.
Granted those are hardware changes and this is software but I believe the point still stands.
If not when.
At a certain point, OpenAI and others may need more and more training data for smaller and smaller quality gains.
VR and AR were also hyped. The headsets just need to become as small as normal glasses, which could be impossible without sacrificing crucial features like color space or resolution.
Meta invested billions but their best headset is still bulky with smartphone quality graphics.
This is a valid point. I'm in a similar position and try to pay it off faster.
However my point stands. We don't know. This kind of uncertainty is uncomfortable but it's best to ignore it. Forget about ChatGPT until you have seen the industry change.
edit: I would also argue to keep investing in yourself. What else is there to do? If AI gets to the point of replacing developers it will equally affect most office jobs. If you want to be a builder you can become that later. Everyone else would too, and the economy would likely collapse. We'll see when we get there.
Imagine an uber or taxi driver reading about self-driving cars in 2010 and dropping everything. Yet, a decade later uber is the same. Most likely it will be the same in 5 years. Even if self-driving cars arrive it will take until 2030 or 2035 for them to be widely used.
Sure. The advice is the same as it ever was: spend well below your means, build up a savings buffer (>= 6 months of expenses), eliminate debt ASAP. The advice is what it is, exactly for situations like the ones you're imagining. No need to put your life on hold.
That's fair. I'm viewing it from the perspective of how to protect one's self from sudden & possibly long-term job loss, where I think there's a lot of value in having one less thing hanging over your head during a stressful time. But building an even bigger buffer to further weather that period is a fine strategy, too.
When we can't admit this, either fear or a reflexive optimism takes over our thinking as we try to guess at the future. And articles like this simply stoke these emotions further, benefitting the author with clicks while leaving people more anxious about their lack of knowledge about the future.
I think we are in a 3D Printing / Blockchain / VR hype cycle where something important is happening but everybody is overreacting.
I really doubt 90% of anyone's skills have become worthless. Mediocre art and mediocre writing weren't valuable or expensive last year. People have been having programs write articles for a long time now. In the long run, yeah things are going to change a lot. But we knew that didn't we?
I wouldn't invest in this now, because too much is being promised too soon.
We are in the selling shovels to the paranoid neurotic upper/middle class that is dependent on tech for their livelihood phase. A new form of the hustle culture.
Think about it this way. When the true revolution arrives, you will not get open access to it. The person that has access to such an AI has access to infinite labor. They will be the closest thing to a living god and will attempt to protect that moat at all costs.
> Mediocre art and mediocre writing weren't valuable or expensive last year.
At one point every town had a blacksmith, even the smallest villages. You could go there to get your horse shod, buy a pair of fireplace tongs, get a new latch for your door, and so on. In the nature of things, most of these blacksmiths were mediocre, but still, they made a living.
Now, there are certainly still a few people who make a living by being blacksmiths, but you sure won't find one at every wide spot in the road.
Except in this analogy, we're in the mid-20th century and you're lamenting that a new horseshoe-making AI is going to put mom & pop blacksmiths out of business.
Mediocre art, writing, and music on their own were already completely worthless. Go look on Soundcloud and watch as hundreds of new songs get posted every minute, most of which get 2 or 3 plays; no AI necessary. Even before ChatGPT, if you looked at the latest Medium blogs it would be a steady flow of articles with a handful of views.
Flooding the internet with AI-generated garbage content is going to be virtually indistinguishable from what we have now.
> you're lamenting that a new horseshoe-making AI is going to put mom & pop blacksmiths out of business.
I'm not "lamenting" anything, and can't imagine where you're getting that from.
> Mediocre art, writing, and music on their own were already completely worthless.
Nope. Plenty of people get paid for creating mediocre art, writing, and music. The guy who writes commercial jingles for Bob's Used Cars is not Johann Sebastian Bach. But he still gets paid.
Before the advent of music recording there were many more musicians, since the only way to hear music was to hear people play it. Of course music recording created a lot of jobs that didn't exist before too. But many of these musicians were presumably mediocre, and they made a living doing it.
Today? Or before the internet when people’s idea of mediocrity and access to top talent was more limited? Depending on where in the world you live a $30 commission is a lot of money.
ChatGPT can automate the most basic programming tasks. It will help you find the right command line parameters or the right API calls, or help you with simple code manipulation.
But when you're doing actual programming what use is ChatGPT? I hear programmers despair about how their skills have become commoditized, but ChatGPT can't do anything I consider programming. Yes it can help write or summarize documentation. It's a useful tool. But it can't program. What am I missing here, what is it that makes ChatGPT such a game-changer for software engineers?
That happens to every new technology. It is mostly useless until some development makes it viable or useful.
Getting a time cut from 2019 to 2023 and extrapolating that now it will advance at an exponential pace every few years is as stupid as getting a time cut from 1980 to 2020 and predicting it will have a glacial pace of advancement in the future.
Truth is that there were some interesting advancements in LLM research, but it's difficult to tell what will happen. Will it plateau for a while? Will it keep advancing at a fast pace? Will it have frequent minor improvements but never really get to the point of AGI (basically growing asymptotically)?
So extrapolating from that progress, it'll soon be able to do: requirement gathering, documentation, dependency management, unit tests, integration tests, configuration of infrastructure, cost management, alerting, monitoring, metrics, logging, secrets management, outage responses, discussions with stakeholders, A/B testing, security compliance etc?
If you’re specific enough, it is excellent at writing not just simple stuff but complex programs. Entire programs. Usually they work, but if they don’t, it’s a matter of swapping out usually a word or two in a single line of code.
i think you're missing that most software dev is basic programming tasks. most people aren't writing anything that novel or complex, they're churning out different versions of the same stuff with the same frameworks. the biggest hindrance to chatgpt doing that all right now is the size of the context it can handle.
all chatgpt and similar tech does is undermine our trust in "what we perceive has actual value because some human took the time to make it in the first place". first sponsored ads, now this shit. it's the tragedy of the commons, again and again with our attention as the abused common good
the value of the plane landing safely comes from two things. (1) that it is actually capable of landing correctly, and (2) that we have reason to expect that #1 is true before we get on. #2 comes from rigor in the production process.
> it's the tragedy of the commons, again and again with our attention as the abused common good
Don't worry, your attention is abused only as long as it is of value. I.e. only to the degree said attention triggers an action that ultimately brings about profit.
The days when human attention was of paramount value, because the human activity it triggered was uniquely valuable, are coming to an end.
So on the contrary, rejoice! You may see fewer sponsored ads. In fact the whole "ads" discipline as we know it might disappear, as its economic foundation crumbles.
Imo ChatGPT atm is like having a personal undergraduate student, who will write a large amount of crap they copied from various places, without ever making a point.
It's great at writing crap that ideally shouldn't be written in the 1st place.
ChatGPT is amazing at creating the TGI Fridays or Chili's or Applebee's level of copy. If you want Michelin star, or hole in the wall, or 70 year old street vendor level copy, you still need a human. In my opinion this is intrinsic to the way LLMs work and won't change until we have AGI. A great writer's writing is informed by all kinds of experience, taste and knowledge that an LLM simply cannot possess. It can only imitate.
Am I the only one who has not been able to get ChatGPT to produce what I consider to be passable writing? I've only succeeded in getting ChatGPT to output completely predictable, unoriginal, disengaging prose.
For example, try getting ChatGPT to re-write a story in the style of Franz Kafka. What you'll get is a completely flat agglomeration of tropes that sound like the first jottings of a high-minded middle schooler.
People keep acting like the arrival of ChatGPT is a death knell for writers but I'm just not seeing it. I can only conclude that most people don't give many shits about the quality of what they read. Maybe that's the real insight here.
I have a pretty ok Google Spreadsheets fu, but over the last 3 years it stagnated as I knew everything I needed to know to do what I need to do. Some improvement here and there with some Googling here and there.
Last week I had a slight new challenge, and I started googling but realised, there must be a better way to solve this new challenge.
In the next 2 hours I learned more about Google Spreadsheets than in the last 3 years combined.
Thx to ChatGPT (4!!!)
I now have a multifunctional dashboard I can use and reuse for this and future projects.
Googling stuff really seems like such a bad investment of my time and brain. Google really has become a lazy monopoly.
I did something very similar where I taught myself obsidian markdown and advanced slides. It took a few hours but now that I know how it works and have a few templates I can knock out great looking slides in a matter of minutes. Hell I can write sql that builds the markdown for me and use mermaid to render the flow charts. It’s all about being able to learn and figure things out on your own. Spending that day learning is going to save me so much time doing tedious work of making slides for presentations.
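To make that concrete, here's a rough sketch of the pattern I mean - the table, column, and file names are made up for illustration, and I'm using sqlite as the stand-in database. The SQL builds the markdown bullet lines, Python just assembles the slide, and a mermaid block carries the flow chart:

    # Rough sketch: SQL builds the markdown lines, Python assembles the slide.
    # Table and column names ("tasks", "name", "owner", "priority") are hypothetical.
    import sqlite3

    conn = sqlite3.connect("project.db")  # placeholder database
    rows = conn.execute(
        "SELECT '- ' || name || ' (' || owner || ')' FROM tasks ORDER BY priority"
    ).fetchall()

    slide = ["# Open tasks", ""]
    slide += [r[0] for r in rows]  # the markdown bullets built by the SQL above
    slide += ["", "---", "", "```mermaid", "flowchart LR",
              "    backlog --> review --> done", "```"]

    with open("slides.md", "w") as f:
        f.write("\n".join(slide))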
In my opinion, those 90% of skills were always worth $0, yet we need them to enable the 10% of skills that are worth 1000x. In fact even with AI, you still need the $0 skills to ensure that whatever model you are using is giving you the answer you need. There is a deep connection between our different skills that enable us to create value. Bundling them into separate bins based on how much value they provide individually feels like the wrong approach.
I tried chat gpt for the first time today. I asked it to write me a script to fetch some data from an API. It didn't succeed, and this kind of common task from a well-documented API should (I thought) be a slam dunk.
I tried driving a car for the first time today. I just wanted to drive from my house to the end of the block. I crashed into a fire hydrant and almost killed three people. This should be a very simple and common task, going from your house to the end of the block, and yet the car failed miserably.
Sarcasm aside, at my company I've noticed a very disturbing split among my co-workers. One group of co-workers hold the above attitude, that they should be able to just use a tool on day one and it should be able to solve problems that even just a couple of months ago would have been unimaginable. That group of co-workers looks at ChatGPT and thinks it's basically underwhelming and a big nothingburger.
The other group of co-workers are learning how to use ChatGPT to solve their problems. They're learning clever ways of prompting ChatGPT, how to give it a proper system message, how to take large tasks and break it down into smaller tasks to offload a lot of the grunt work onto ChatGPT while they focus more on the overall architecture. That group of co-workers is seeing excellent gains in productivity.
The idea that you would just use a tool on day one, probably don't really know much about it or understand it that well, and expect it to magically solve your problems is about as foolish as driving a car for the first time, not really knowing anything about cars, and expecting nothing bad to happen.
ChatGPT is a tool, the people who take the time to learn that tool and understand how it works will become much more productive software developers, and those who don't will get left behind.
I am reminded of this intro by Marc Andreessen for Breaking Smart[1]:
> A great deal of product development is based on the assumption that products must adapt to unchanging human needs or risk being rejected. Yet, time and again, people adapt in unpredictable ways to get the most out of new tech. Creative people tinker to figure out the most interesting applications, others build on those, and entire industries are reshaped.
There will be a whole group of people who will tinker and adapt to the way the GPTs work and make the best of it - creating new apps, industries, and so on. This has been the case all through the tech cycles.
We will change ourselves to the new tech and we won't even realize we changed.
I am re-reading the Breaking Smart book in the context of ChatGPT and I'm getting way more insights than when I read it the first time.
You have no information on how I approached using chat gpt, what prompts I used, how close its replies came to being correct/useful or not, but you decided to regale us with an irrelevant car metaphor anyway.
My reply is just as informative and insightful as your original comment, in fact the entire purpose of that metaphor was to respond with the exact same precision and using the exact same structure as you did and the fact that you find it to be irrelevant is exactly the point.
Now look in the mirror and instead of denigrating my post, reflect on your own comment and hopefully you'll see it is equally irrelevant.
The burden is on you to communicate any information you think is relevant to fully appreciate or understand your circumstance. The burden is on you to indicate what prompts you provided, in what aspects ChatGPT failed, as well as the approach you took using it; that could open up an interesting discussion where you, me, and others could learn something new.
The fact that you failed to do so and instead made a mostly irrelevant comment whose pointless nature is something you only recognize after seeing it reflected in my post is something you need to correct, not me.
A big one for me is being able to decide when to use GPT3.5 and when to use GPT4 for a given prompt. Perhaps it will be short term that this is a "skill", sooner or later they will lift the GPT4 cap and speed it up, but for right now I can stretch out a GPT4 session indefinitely if I know what gpt3.5 can and can't handle.
Besides that though, ChatGPT has been nothing short of a huge product... I don't even want to say a "huge productivity boost", that implies something like good sleep or drinking coffee; it's been like an augmentation that allows me to bat above my league. Like a free (well, $20/mo) co-worker who knows a lot about programming.
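For what it's worth, the routing itself is trivial to wire up. Here's a minimal sketch using the openai Python package's chat completion call - the heuristic for what counts as "hard" is just my own rule of thumb, not anything official:

    # Minimal sketch: send easy prompts to gpt-3.5-turbo, save the capped GPT-4
    # requests for things that need multi-step reasoning. The heuristic is mine.
    import openai

    openai.api_key = "sk-..."  # your API key

    def ask(prompt: str, hard: bool = False) -> str:
        model = "gpt-4" if hard else "gpt-3.5-turbo"
        resp = openai.ChatCompletion.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp["choices"][0]["message"]["content"]

    print(ask("Write a Python one-liner to flatten a list of lists."))              # easy: 3.5
    print(ask("Refactor this module to remove the circular import: ...", hard=True))  # hard: 4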
GPT-4 is a lot better at things like that. That said, you'll either have to pay for it, or instead use some severely rate-limited but free interface you might find by asking around.
Prompt engineering is a thing for sure. Might want to search the news here on ycombinator for some clues on getting better at it. And some tasks are beyond it.
> My skills continue to improve, but ChatGPT’s are improving faster. It’s a matter of time.
Nah, I don't think there's any way that LLMs can totally circumvent the problems of pervasive hallucination and low-quality output. There's a fundamental divide between the things LLMs would have to do to replace us (think critically, problem-solve) and the things they're designed to do (produce "realistic" text based on the data they were trained on). Sure, they can engage in a basic level of critical thinking and problem-solving, but I'd expect to see diminishing returns long before they can actually compete with humans. After all, we're not training them to produce smarter or more correct answers—just more verisimilitudinous ones. The local maximum is bland and uninsightful yet very plausibly human writing.
I'll worry when we come up with an architecture & training scheme that directly optimizes for problem-solving.
Minimizing a loss over the model parameters θ given a prompt is essentially what LLMs are doing. Every session that a ChatGPT user has leads to either a resolution or not, which serves as a loss signal for the prompt used to reach that point. It's getting better at not hallucinating in my experience over just the past 4 months.
The other day, I was configuring fluxbox WM. I decided to use chatGPT because I didn’t really feel like parsing the wall of text that was the spec.
I asked it how to add default names to the workspaces. It gave me an answer that looked so plausible I didn’t question it. 15-20 reloads later, I went over to the spec to verify chatGPTs work.
It lied through its teeth. Entirely made up the API. Typically we see the "ChatGPT made stuff up" problem through the lens of an expert - they can immediately tell that the AI is lying. I finally felt it through the lens of a beginner - I fully believed it and it ended up wasting around an hour of my time.
If a co-worker did this to me (straight up lied about an API) I’d be pretty upset - I have no problems with people saying “I don’t know” but pretending isn’t cool. I can’t be mad at openAI because it’s just a tool, but a tool I won’t be going back to to learn things.
You might have missed a key element from the article itself. There are three versions of the same argument: the first by the author, the second by ChatGPT as an argument in support of the statement:
"I’ve been reluctant to try ChatGPT. Today I got over that reluctance. Now I understand why I was reluctant. The value of 90% of my skills just dropped to $0. The leverage for the remaining 10% went up 1000x. I need to recalibrate."
and the third is ChatGPT arguing against that same statement. As presented, it's pretty clear which of the three is the most fun to read.
In "Bonus" section he asked ChatGPT to write two different essays based on two slightly different prompts:
"write a 500 word blog post in the style of kent beck expanding on "I’ve been reluctant to try ChatGPT. Today I got over that reluctance. Now I understand why I was reluctant. The value of 90% of my skills just dropped to $0. The leverage for the remaining 10% went up 1000x. I need to recalibrate."
and
"write a 500 word blog post in the style of kent beck disagreeing with "I’ve been reluctant to try ChatGPT. Today I got over that reluctance. Now I understand why I was reluctant. The value of 90% of my skills just dropped to $0. The leverage for the remaining 10% went up 1000x. I need to recalibrate."
Yeah, this reminds me of the AT&T switchboard operators who agreed that the new automated systems worked, but insisted they would never fully replace humans because the customer wants to talk to a human operator when they call on the phone.
How many millions of jobs today entail manual work that could be simply automated already, but the employer doesn’t understand the tech or feel the need? I sympathise with those who can’t keep up, but perhaps lots of jobs which could easily be automated will remain - at least for a generation. I look forward to the Star Trek-like future of passion-based motivation.
yes, it's an oversimplification; but i do think AI hype will pipe down (just as many other shiny promises have: self-driving cars, swimming pools on Mars, crypto for everyone etc).
writing software, like writing books/essays/poetry, is a highly creative and analytical process; tiny variations in content can potentially lead to massive consequences that may only be noticeable when released into the wild.
most issues these industries face are not on Quantity or Speed these days, but on Quality, Proofing and Integration.
> Fourth, to everyone say, “Yeah, but ChatGPT isn’t very good,” I would remind you that technological revolutions aren’t about absolute values but rather growth rates. If I’m big & you’re small & you’re growing faster, then it’s a matter of time before you surpass me.
Assuming the growth rates are static, then sure. But in reality they're fluctuating too, and we can't know how they'll change in the future. Plenty of technologies have had fast early growth, only to peter out long-term.
Douglas Adams got it right. It’s not about the answers, it’s all about the questions. It is truly amazing what GPT can produce - but before long you start noticing a pattern in the answers it is providing. There still is room for human intervention in order to transform and refine the information in the correct context.
> The differential value of being better at putting words in a row just dropped to nothing. Anyone can now put words in a row pretty much as well as I can.
People seem to overvalue "putting words in a row" and undervalue "saying something interesting or new" these days.
I’ve been using ChatGPT to do my job for a couple of months now. You have to be very specific with requirements you give it, and you do have to test the code of course, but at worst it gets me 90% of the way there in seconds instead of the hours or days it would take me to write that code by hand.
Eventually, everyone is going to know how effective it is at writing code, and I think companies will expect ~10x the output from developers for the same salary and same 40-hour workweek. It’s disheartening.
This is hype curve speak. I have used ChatGPT 4.0 to generate code. It is brilliant at templating. However, it still makes glaring errors.
My main bugbear is that ChatGPT has the confidence of a psychopath. It will never say it doesn't quite know how to program something, and it starts making up things to answer your question.
For example, I asked it to write code to load data from MySQL to Snowflake. The template looked fine until I discovered it made up a method that doesn't exist on Pandas data frames. Finally, when I asked it to chunk, it did a decent job...other than truncating the destination table every time it loaded a chunk of data.
Yes, it did a fantastic job templating the code, picking consistent variable names, etc. However, I could have gotten as far by Googling/StackOverflowing.
The advantage of Googled code is that you know it will likely work, give or take a few bugs. Most humans don't dream up methods that simply don't exist.
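For context, the shape of the chunked load I was after is roughly the following - a hedged sketch with placeholder connection details and table names, using pandas' read_sql chunking and Snowflake's write_pandas, which appends to the table rather than truncating between chunks:

    # Hedged sketch of a chunked MySQL -> Snowflake load. Connection strings and
    # table names are placeholders; assumes the ORDERS table already exists.
    import pandas as pd
    import snowflake.connector
    from snowflake.connector.pandas_tools import write_pandas
    from sqlalchemy import create_engine

    mysql_engine = create_engine("mysql+pymysql://user:pass@host/db")  # placeholder DSN

    sf_conn = snowflake.connector.connect(  # placeholder credentials
        account="acct", user="user", password="pass",
        database="db", schema="PUBLIC", warehouse="wh",
    )

    # Stream the source table in chunks; append each chunk instead of truncating.
    for chunk in pd.read_sql("SELECT * FROM orders", mysql_engine, chunksize=50_000):
        write_pandas(sf_conn, chunk, table_name="ORDERS")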
Is Kent Beck a liar? ChatGPT is a liar. A proven liar. What happens when a machines lies to a person enough? Eventually that person dies. Sooo, unplug the lying machine.
Rather than seeing the rise of AI as a threat to my career, I now view it as an opportunity to augment my skills and deliver even greater value to my clients. By embracing AI tools like ChatGPT, I can automate routine tasks and focus my efforts on the areas where my expertise and creativity can truly shine.
It is the only option available in the immediate term and I would argue is the correct short term perspective.
However, if AI continues on its path of exponential technological acceleration, then this is going to be very problematic.
You can't sprint ahead of an accelerating technological curve forever. We are replacing the old monotonous task rat race with the new rat race attempting to remain relevant against inescapable advancements.
I've described this in more detail in my writing: "Climbing the skill ladder is going to look more like running on a treadmill at the gym. No matter how fast you run, you aren’t moving, AI is still right behind you learning everything that you can do."
I'm not so sure about it. Neural networks and their models are built on theories made 50 years ago. The breakthrough happened in 2011 and we were able to build a lot on it, but we are close to our limits unless a company made another discovery and didn't publish it.
I'm sure we have room for improvements but the acceleration stopped at GPT3.5, and imho the last version proved that the next increment will be specific but not as impressive as GPT3 was.
I could be wrong, but imho our engineering caught up to our theory, or is close to, and while we'll be less stale than the aerospace sector, as our experiments are less materially expensive, I can see the field diversify but stop growing as fast.
I don't disagree. However, those who believe we are still on the accelerating curve also believe that somehow we will remain relevant. It doesn't make logical sense.
So, the only counter to my argument is that we hit a wall. This is very possible, but AI development thus far is certainly increasing the unpredictability of future events. As we move forward we are in some sense understanding it less. The disparity over the definition of AGI is only increasing.
There are definitely a lot of people who believe that we are still accelerating and as a result believe that we will either become irrelevant (but hopefully live in some kind of utopia) or all die (I guess I’m one of them, but I haven’t really accepted it on a deep level)
On the grand scale we are definitely on an accelerating technological curve. And by that I mean humanity and all technology, not constrained to only AI.
AI might hit some walls, there may be a lot of hype at the moment, but it has already broken new ground and boundaries not previously thought possible.
Unfortunately, many of those grounds already broken and proven are nefarious and dystopian uses. In the article I link above, I go into great detail on this topic. With all the concern of AGI (utopia/world ending) there is not much focus on the imminent impacts of primitive AI. They are no less concerning.
Neural networks are Turing-complete, so in theory a large enough one should be able to express any logic. So the main limitation here is the training process, but the way it’s done seems very flexible, so I think the main limitation at the moment is just model size, which is limited by hardware which is still improving exponentially (though it has been slowing down slightly)
I appreciate the excellent article. It sums up ChatGPT better than anything else I have seen.
My first goal is to improve my skills by using AI as a personal tutor.
My second goal is to improve my productivity by integrating AI into my work flow.
My third goal is to never get so lazy that I trust the output of an AI without proofreading it first. I try to imagine the output of the AI coming from a Reddit user with a suspicious-sounding handle.
This is a lot like when I learned to use spell-checkers correctly. Some people always went with the first option, and wrote a lot of odd school papers. Some people use the spell checkers to correct their spelling in their school papers, but never learned to be a better speller. I learned correct spellings from spell checkers while correcting my spelling.
There is also a metaphor for calculators. Students should use calculators to check their work, but never become dependent on them.
I hope that we can all stop denying that our skills are obviated so that we can get to the more important fact that it’s supposed to be a good thing, because humans have to do less work.
> I hope that we can all stop denying that our skills are obviated so that we can get to the more important fact that it’s supposed to be a good thing, because humans have to do less work.
But this is false. Humans don’t need to do a particular amount of work because there is X amount of work to be done and it has to be distributed, so that new non-human systems replacing human work reduce the amount of work each human has to do.
Humans have to do a particular amount of work because they need each, individually, to net (or be gifted) a certain amount of market value to acquire the needs of survival and desired lifestyle, so if the value of the work they do is decreased in the market, they don’t need to do less, they need to do more.
There is no labor saving from automation, unless there is additional redistribution of the capital gains enabled by automation shifting returns from labor to capital.
Eh, I'm still confident that the biggest effect these tools will have is to increase the sheer amount of noise per capita, which was the trajectory the world has been headed down since the internet and particularly since the advent of social media.
LLMs will be wonderful noise generators. They can produce volumes of (probably largely useless) information at rates most people could hardly fathom and can do so with basically a nonexistent barrier to entry.
They do not understand concepts, mathematics, logical relationships. They are good at emulating sentences, and maybe ok at helping creatives get new ideas, but at the end of the day, they're just a multiplier over the metric tons of pointless data we already produce on a daily basis.
Furthermore, they only work in a specific zone of the utilitarian, capitalistic flattening of value that modern hyper-capitalism perpetuates. The experience of art generated by an AI is only of interest and valuable due to the gradual erosion of human value under the gears of capitalism. Money has ensured everything has its price (equivalence). It's pretty easy to imagine a monetary-free society in which the human side of any product (the experience that went into it, the human hands that fashioned it) would out-value the utilitarian ends of the product (there are already strands of this in small maker movements). LLMs are only prescribed value in a system that has already fully embraced mass-production and the standardization and flattening of human experience down to common denominations. A society that placed no stock in mass-produced goods would have no interest in LLMs. They are quite literally just an imitation game.
I am a programmer and a blogger, like the author. Unlike the author, I do not monetize my blog in any way (whereas I assume he does, on Substack).
The author is both right and wrong, just about different things.
First, he assumes GPT will get exponentially better. If that were true, all cars would be full self-driving already. GPT will have an S curve or Sigmoid function.
Second, he assumes that GPT will always be able to produce things from differing voices well. However, as more data is used, these bots will only become more homogenous. You can see clues of this when it managed to make a Biggie Smalls rap, but did not with Woody Guthrie, whose data is probably closer to the mainstream than Biggie's. (This is a gut feeling; could be wrong.)
Third, as things become more of the same bot-like feeling, even from people, those who have their own voice and touch will stand out more. You can see this in his essay, versus the two written by the bot. The voice of the bot ones feels more stilted to me. His feels more natural.
What people need to do is to develop their own voice and touch.
In writing, develop your voice. Write without help. Write a lot. Create a blog. Put random stupid stuff there. Let yourself rant. That random stupid stuff and those rants will not be mainstream, and if it's stupid and ranty, you won't have to care about quality. That lack of care will let you have fun, and that fun will become your voice.
This is how I developed my voice with my blog.
In code, develop your touch. Don't just code; design. Think about concepts. Think about UX and workflows. Iterate until everything is great. Then design the implementation and iterate until everything fits. Then, and only then, code. But as you code, iterate the design to remove things that no longer fit and add things that do. Dogfood your software. Use it for everything you can, even things that don't fit well. Iterate and change some more to make your dogfooding easier.
The end result will be software that is easy-to-use and fits the users' needs well. That fit will be your touch.
This is how I developed my touch with my most well-known project: `bc`. [1]
But you don't even need to stop there. With my `bc`, I spent time supporting users and implementing things for them. Obviously, I still designed changes before implementing them, but I listened to my users and gave them what they wanted within the constraints I had.
The end result is a `bc` that can act like the GNU `bc` or the BSD `bc`, whichever you want. It can also be its own thing with extra features.
Listening to users is another human touch. Make use of it. In fact, that's where I will make a living from my current project: my support for it will be best-in-class [2], and that's something GPT can never replicate because it cannot replicate my brain, my voice, and my touch, even when trained on my writing.
In other words, the author is right that the 90% is useless, but that 90% was the part where you were at least partially imitating a bot anyway. The remaining 10% are the human traits.
If you want to have useful skills, be a human, not a fleshy bot.
Footnote: I feel bad for anyone that gets my code or writing as output from GPT. My code fits my head and no one else's. It also uses custom APIs that don't exist anywhere else, which means such output would be useless. The same goes for my writing, though to a lesser extent.
First, diminishing returns. When people start talking about exponential amounts of resources required (data or compute) or the results are not growing as fast, you're approaching the top.
Second, historical precedent. Self-driving cars have been around the corner for years now. I don't expect them anytime soon.
Third, practical limits. These LLMs are trained on human text. Therefore, their output will only approach the best humans can do in the limit and in the best case. In the average case, they will only have average output, and their output will be the average of everything, making it flavorless. This is also how you know it's an S curve and not a hockey stick: it cannot be any better than the training data.
Fourth, limits of the model. These LLMs predict the next sequences of words (or in the case of picture bots, they predict a sequence of pixels). That means they can only do things that have those inputs and require or accept those outputs. You're not going to get an LLM to drive a car, and you're not going to get a self-driving car to output a sequence of text. You may add an LLM to a self-driving car, but it cannot communicate with the driving code because they have mismatched inputs and outputs.
Chatgpt is way better at being persuasive than it is at being correct. It is probably already better than you at convincing them of stuff. Probably not bad at spotting causes of organizational dysfunction given symptoms; they seem to be the same ones over and over.
> Chatgpt is way better at being persuasive than it is at being correct.
I get the sentiment, but I doubt this is statistically true. Of course it makes mistakes, and certain kinds more than others, but ChatGPT’s recall is very good.
> It is probably already better than you at convincing them of stuff.
I don’t think I agree, but I’m not clear what you mean. What do you mean? Care to unpack it?
Here’s what I mean when I say I disagree:
* ChatGPT, on the whole, doesn’t press an agenda. While it does have baked-in opposition to violence, hurting people, sabotage, bombs, etc, it does not (best I can tell) have a values system to weigh competing interests in a workplace.
* Agency. This one may be obvious, but I think it needs to be stated. If you want ChatGPT to weigh in on a decision, you have to ask. You have to frame the question. An employee/contractor can come in with their own agenda and present a recommendation.
* Much of the time (I’m not sure how often, this is an empirical question), when interacting with something like ChatGPT, people convince themselves. They frame the questions. They read answers. They stop at some point, perhaps when convinced, perhaps when the response seems incorrect, and/or perhaps when they run out of time or interest. Along the way, they can push back, asking follow-up questions or disagreeing. Of course, ChatGPT is presenting information and that information may be persuasive. But what “gets in” to the human brain varies a lot based on what that person is receptive to.
* Of course, a significant portion of Americans demonstrate limited critical thinking skills, for large portions of their waking life. One might debate this point, i.e. saying that sometimes just “going along with something” is a rational strategy to conserve energy, avoid conflict, fit in with a group, and so on. Whatever is going on, though, it does appear that for many people, belief formation is not primarily driven by a quest for truth in the classical liberal sense. Such people often seek “advice” from ChatGPT not with the intent to question it, but rather to reduce the time and effort that thinking and researching requires.
Your team won't be needed anymore. Your boss won't be needed anymore.
Furthermore, whatever product or service your company provides, won't be needed anymore.
This is the progression of AI. To be the provider of everything in the end.
"Eventually you are just going to be in the way as a manager of AI tooling. AI will manage AI and learn from AI. No humans in the loop. No humans can keep up with the rate of advancement." - https://dakara.substack.com/p/ai-and-the-end-to-all-things
Your AI talks to their AI, so that the next time that they ask their AI, "hey, what should I be doing this morning/what's the most important change we should make to the codebase/etc?" it prompts them to do the thing you want them to be doing :-)
I had this conversation with my girlfriend who is currently teaching herself web development without a formal programming background. She used ChatGPT to help her understand compiler errors or write small code snippets or react components and was terrified that it would write better code than her and seemingly understood things much better than she did.
But here I sit, reflecting on my day-to-day:
After the daily standup my PO calls me to ask if there is a chance that our external A/B testing tool can be coerced into tracking some arcane user flow in a slightly different way than the rest. After fiddling around with it for 30 minutes I conclude it can't, but I have another idea on how they might get some of the data they want, which I promptly put into words and forward to the relevant people. I then get on to select a ticket from the backlog. Upon reading I notice the ticket is missing some info and the linked figma design is incomplete, so I call someone from the design team to get some info, and while we are at it we notice that the proposed solution is overly complicated and we settle on changing it - something that did not occur to anyone during refinement.
Before I get to start coding the next teams meeting starts, this time to give us a brief of the strategy for the next quarter, and back to back the next refinement meeting starts. Half of the tickets have holes and missing edge cases in them and about 10% of them need to be outright discarded due to fundamental problems with our setup or architecture. We have lengthy discussions about minutiae of the user stories until finally the call comes to a close and I get some time to code. So back to my ticket. I open VSCode and I am greeted by our nextjs project: 850 components, 150 pages and 2200 unit tests and integrations with at least 12 different services in it - encoding the needs and wants of my corporate overlords and the history of it all.
I get to identify where in our architecture the wanted feature fits best and think for a while about the exact composition of the components to fit our code style, best practices, and testability requirements. I start scaffolding the first component when I see another call plop up. Someone from the testing team is confused why suddenly their locally initiated E2E tests against the dev stage fail. After 5 minutes of perusing logs it turns out they didn't pull from main this morning, and after doing that all was well. Back to coding.
After scaffolding three more components I get to write a little more boilerplate and two or three utility functions to juggle around some data, and I proudly look at my code. Finishing touches and manual testing reveal that all is well. I write a storybook story for two of the components and think about which props should actually be controllable, mock out some of the involved api, write three little unit tests and one integration test, and finally push my change to a newly minted feature branch. While the azure build and test pipeline slowly crawls towards completion I open our board, move the ticket to the code-review lane, put a small reminder into our dev teams channel and log my time in the time tracker. One hour of coding effort for ticket #31221. Good!
While implementing my feature I noticed that a library we were using is out of date. A cursory search through the changelogs reveals that some of the breaking changes introduced in newer versions would break our app, so I go on to write a technical ticket about that and put it in the backlog. Quick note to the PO that I got something for the next refinement and off I go to the next ...call.
A bug in prod. Turns out one of our docker pods kicked the bucket. A quick restart and everything is fine again, but we get into a lengthy discussion on whether we could finally migrate to some serverless architecture. I look at the clock. One more hour to go. Not really enough time to start a new ticket, but looking at the board I find three tickets ready for code-review so that's what I do. One of them is from a less experienced dev so I try to be as thorough as I can and find a few things here and there which I promptly add as suggestions to the pull request. 10 minutes left. I sign off teams, close VSCode and close all tabs except for the time tracking one. I put in the last few entries and I am done for the day.
I am not scared of AI taking my job just yet. Not as long as I have to defend technical choices in front of business people, implement the most outlandish user flows with obscure edge cases, have back and forth discussions with the design team on the difficulty or feasibility of their longings, help QA people with their setups, debug devops problems, show off progress, write changelogs, estimate PDs for different stories and epics on different levels of granularity, onboard new developers, offboard other developers, dance around in retrospectives, conjure up ideas and solutions for architectural re-orientations and so on. Actually writing some code is only <25% of my job anyway. And I am no staff level engineer or tech lead. Just a frontend engineer with some side knowledge. All the frontend devs on my team are like that. But maybe I am just working on weird projects. Who knows. Maybe one day I'll just orchestrate AI or have to re-orient but for now I am fine.
If you mean to imply that in the future it'll just be the boss sitting alone at his computer, interacting with the AI, then couldn't he still be more efficient by hiring a team of people to interact with the computer? After all, no matter how advanced the AI gets, the productivity of the company will still be limited by the speed at which the AI "operators" can interface with the machine. This is true regardless of the input choice: keyboard/mouse, voice input, or neural link.
Your argument defeats itself. If your AI is so good that its productive output is bottlenecked by the speed of your supposed operators, then you should just replace those with another instance of your ai, as well as the boss.
Which is exactly the same scenario that your parent comment is presenting.
Not sure how the paperclip-maximiser is relevant here. This still happens with friendly ai.
If you accept the premise (all jobs replaced with ai, except for the boss, with no difference in productive output), then you can not solve the problem of all the lost jobs by rehiring them all as "operators". You said yourself that in this hypothetical scenario the bottleneck is human-ai communication (which is why you want to hire operators to increase productivity).
But if human-ai communication speed is the bottleneck (and not ai capabilities, or compute, etc.) then you solve the bottleneck by replacing humans with ai, not by adding more humans.
I don't think the original scenario is plausible (all jobs replaced except for the boss), so don't misunderstand my argument as defending that. I'm just saying that your conclusion (just hire operators) doesn't follow from the premise.
It's not worth arguing this a lot further since I don't think the premise is plausible and we're talking about something that's only relevant if it were, unless I fatally misunderstood something you were saying?
just replace those with another instance of your ai, as well as the boss.
I interpreted your statement as meaning to replace even the boss with AI, i.e. a fully-autonomous AI answerable to no one. That is just a paperclip-maximizer since it is no longer under human control.
Perhaps you never meant for the boss to be replaced, in that case you may ignore my paperclip-maximizer statement.
It is what I meant, I just don't see what the paperclip-maximiser has to do with it. As far as I understand it the primary idea behind that particular thought experiment is how a misaligned ai leads to agi ruin, even for simple goals.
The scenario we talk about doesn't even contain misaligned ai. It contains friendly ai (the best case scenario), which still drops all current human economic value to zero. The contrived scenario has all jobs except for the boss replaced with ai. You propose hiring operators to increase the companies productivity. I say this doesn't make sense.
Do you agree until this point? If so, what does the paperclip maximiser have to do with anything? If not, what did I misunderstand?
This is an example of the kind of thinking Rodney Brooks has written about when he explained why people get predictions so wrong. People who weren't paying attention to autonomous cars saw the improvement in the two DARPA challenges and thought it was around the corner. Most had no idea how long people had been working on the problem.
Perceptrons was written in the 1960s. Back propagating neural nets are from what, the early 1980's? It took a little time to get here, it may take some time to get to the next step.
While that is certainly plausible, I don't think it is inevitable. There have certainly been other points in the history of AI where it looked like exponential growth was about to render everything moot, and then the entire field hit a decade-long plateau.
I'm just a casual observer, but it's not obvious to me that the current set of AI tools and techniques are well-matched to the things that the now-ubiquitous large language models are bad at. (That being said, it's not obvious that they aren't either)
Correct, the question is, how soon will ChatGPT techniques top out?
I suspect: a lot sooner than the optimistic "we're close to AGI!" crowd expect. Stochastic parroting is not in itself a route to AGI. It has real limitations.
Mr Beck's assertion that growth means that we _will_ be surpassed in this matter is, I think, not necessarily correct. We might, but I see it as unlikely, not inevitable. This is simply because the growth rate is never constant forever; it tops out.
It only has to reach the level of AGI. That is the goal of AI.
Nonetheless, AI researchers predict that the exponential will include reaching some level of ASI.