Ask HN: Are people in tech inside an AI echo chamber?
398 points by freelanddev on July 3, 2023 | 731 comments
I recently spoke with a friend who is not in the tech space, and he hadn't even heard of ChatGPT. He's a millennial, a white-collar worker, and smart. I have had conversations with non-tech people about ChatGPT/AI, but not very frequently, which led me to think: are we just in an echo chamber? Not that this would be a bad thing, as we're all quite aware that AI will play an increasing role in our lives (in & out of the office), but maybe mainstream AI adoption will take longer than we anticipate. What do you think?



Definitely. The tech is impressive, but anyone I've spoken to thinks of it as Cleverbot 2.0, and among the more technically minded I've found that people are mostly indifferent. Hell, IRL most people I know don't think much of it, though on HN and elsewhere online I see a lot of people praising it as the Second Coming (this thread included), which puts it in a similar tier as crypto and other Web3 hype trains as far as I'm concerned.

Every "AI" related business idea I've seen prop up recently is people just hooking up a textbox to ChatGPT's API and pretending they're doing something novel or impressive, presumably to cash in on VC money ASAP. The Notion AI is an absolute fucking joke of epic proportions in its uselessness yet they keep pushing it in every newsletter

And a funny personal anecdote: a colleague of mine tried to use GPT-4 when answering a customer question (they work in support). The customer instantly knew it was AI-generated and was quite pissed about it, so the support team now has an unofficial rule not to do that any more.


>puts it in a similar tier as crypto

Comparisons between AI and crypto are horribly misguided IMO.

Is AI overhyped? Sure. However -

AI/ML is creating utility everywhere in our lives - speech to text, language translation, recommendation engines, relevancy ranking in search, computer vision, etc. and seems to be getting embedded in more and more processes by the day.

Crypto never amounted to anything beyond a currency for black market transactions, a vehicle for speculation, and a platform for creating financial scams.


Everywhere??

That’s exactly the hype talk that’s going to burst this bubble.

Here's tech's dirty little secret. Despite all the screams about automation and universal basic income, if job replacement were actually happening, it would show up in the labor productivity numbers. If GDP stays flat or grows while the number of jobs is reduced… bingo… you'd see that number climb.
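
To make that concrete, a toy calculation (numbers entirely made up) with labor productivity taken as output per hour worked:

    gdp = 20_000        # billions of dollars, held flat
    hours_before = 250  # billions of hours worked economy-wide
    hours_after = 225   # 10% fewer hours after automation displaces jobs

    print(gdp / hours_before)  # 80.0 output per hour before
    print(gdp / hours_after)   # ~88.9 after: the productivity number climbs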

Productivity has actually stayed flat or gone down over the last 15 years. Despite the fact that we now have trillion-dollar corporate behemoths. Despite the fact that we're enabling a surveillance state Orwell couldn't have imagined. Despite the polarization we see. And teen anxiety going through the roof, along with teen/pre-teen suicides.

When you said AI (and in my view tech in general) are everywhere, I’m guessing this wasn’t what you meant…


The missing productivity paradox is something that should interest everyone in tech! Like any macroeconomic observation, there is plenty of uncertainty in both the measurements (GDP ain't perfect) and the context (2 financial crises and a pandemic). But even pro-status-quo institutions like Brookings agree that Something Is Up. Their 2021 review on the topic is both a decent summary and a good source of further refs [1].

My favorite explanation is that many new technologies end up redistributing wealth rather than creating it, which certainly tracks with both subjective and quantified growth in inequality on the same time period. However, a slightly more optimistic take is that tech is aligning production better with people's preferences, so that the same productivity enables people to live more distinct lifestyles that suit them.

[1] https://www.brookings.edu/articles/how-to-solve-the-puzzle-o...


Another explanation that I find persuasive, put forward by Ezra Klein, is that productivity-sapping uses of technology grew along with the productivity-boosting ones: social media, for example, is a powerful mechanism for distracting people and destroying their attention spans.

If this is a good explanation, it raises the question of what AI might do to destroy productivity as well. If you're constantly sexting with your AI girlfriend, who just happens to be extraordinarily adept at tapping into your sexual proclivities, maybe you won't get as many support tickets resolved as your boss was hoping.

More hypothetically, I would also expect that a world in which people spend a lot of time with screens strapped to their head, consuming an infinite stream of entertainment provided by generative AI, is not going to produce higher GDP.


> More hypothetically, I would also expect that a world in which people spend a lot of time with screens strapped to their head, consuming an infinite stream of entertainment provided by generative AI, is not going to produce higher GDP.

Yeah, I think this dovetails with the idea that IT may be satisfying preference allocations without increasing overall production. Watching 10 movies a month on a streaming service adds much less to the GDP than going to the cinema 10x, but if the selection is better it might satisfy you more. Economists sometimes attempt to measure this with "utility adjustments" which recognize increasing quality in the same goods, but it's very hard for those adjustments to account for the hidden preferences of the consumers as opposed to objective qualities of a good or service.
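
A toy version of such an adjustment (numbers entirely made up): if the nominal price stays flat but statisticians judge the new version 25% better, the quality-adjusted price falls and registers as real growth.

    nominal_price_2013 = 10.00
    nominal_price_2023 = 10.00
    quality_ratio = 1.25  # 2023 version judged 25% "better" than the 2013 one

    adjusted_price_2023 = nominal_price_2023 / quality_ratio
    print(adjusted_price_2023)  # 8.0, measured as a real price decline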

Information goods like social media and streaming, financial services like buy-now-pay-later, and conveniences like next-day delivery are all examples of activities that might suit preferences without showing up in GDP. They also may enable distraction, waste, reclusiveness and impulsiveness in ways we'd like to avoid as a society. At the same time they might also help some people feel more included and less lonely or trapped by circumstance.


> put forward by Ezra Klein

The explanation sounds pretty familiar, so I might have already read/heard this from Klein, but would you mind sharing a link?


I believe he has mentioned it more than once, but one podcast that IIRC includes that discussion, along with many other AI-related topics, is the April 7 episode "Why A.I. Might Not Take Your Job or Supercharge the Economy". If you're on iOS:

https://podcasts.apple.com/ca/podcast/the-ezra-klein-show/id...


It is less about productivity and more that AIs have the potential to be the ideal employees.

No time off. No health care. Operate 24/7. No unions. No work safety concerns. No lawsuits over being unfairly fired. Control over exactly how something gets done or said.

If only the AIs would stop hallucinating or could consistently comply with policies…


> If only the AIs will stop hallucinating or can consistently comply with policies

I'm starting to wonder how much this matters.

People do crazy shit on the clock all the time. Company reps do not always adhere to policy 100% of the time either. People engage in office politics, coworkers accuse others of whatever, mistakes get made. LLMs happen to emulate all of this behavior.

In theory, we could replace everybody with AI and not much would be different. Productivity increase is debatable but cost savings would be immense. The question is how much insanity we're willing to tolerate as a result.

(...and seeing what fun ensues when there are more people than available jobs.)


>The question is how much insanity we're willing to tolerate as a result.

Given our elected representatives, I don't think that's a problem. No offense to any party or person. But we've consistently proven that we can tolerate and welcome way more insanity than seems reasonable.


How do those "distinct lifestyles" match with rising inequality and people having a hard time buying a house or paying rent?


As far as I know, the rate of homelessness hasn't gone up, nor has homeownership gone down, over the past decade.



The first one wouldn't capture rents increasing while people keep paying them, albeit struggling.

The second one is a lagging indicator. If lots of people bought their homes when they were cheap and are still alive, it will take time for the real impact to be visible.


Agreed. Though I think that the 'struggling' part creates a lag in both aspects. I think the 'gotta have a side-hustle' trend is a strong indicator of this. I would be interested in seeing the trends in the number of people working 2+ jobs (or income streams), population shifts to more affordable areas, and the number of disconnections of non-essential services. From experience growing up in a poor family, I know the definition of 'non-essential' can expand greatly as desperation grows.

Edit: Tracking real numbers in homelessness is also just extremely difficult.


> having a hard time buying a house

It's also much harder to get servants in that house. I wonder why...


I don't understand your comment.


If everything is so bad, then it should be easier to find someone willing to work. But that's not the case today. Which means life today is probably not that bad.


Inequality is a red herring. Poverty is down.

Have you looked for housing in Columbus, Ohio?


I’m sure there are plenty of ghost towns with housing sitting empty that could be had for a song too. That probably doesn’t help if your job, family, and friends are somewhere else.


And remote work is more common than ever. Regardless, Columbus, Ohio has a very reasonable cost of living for the salaries available in the region.


Shhhh...don't need any more people here.



Nope. How am I supposed to get a job there? How about getting the money together to afford to move? I already live in a "low cost of living" area with less than half the population of Columbus, Ohio. It's still expensive as hell and there isn't a great housing situation.


Well, people are working every day in Columbus, OH.


I'm not from the US nor do I wish to live there, Columbus, Ohio or otherwise.


Is homelessness trending down or up? Why?


It's easier than ever to be homeless, and it's generally frowned upon to forcibly institutionalize the mentally ill these days.


Wish I'd known that it was difficult to become homeless back when I was. It seemed really easy at the time, all you had to do was get evicted. Not sure how it could be easier now.


I think what they meant was that it's easier than ever to be homeless.

In many places you can't be thrown in jail for being homeless anymore. Many cities have more housing and shelters and free kitchens than ever before. Some places even give you a smartphone and basic plan. Etc.


My thought has been that most of the IT revolution hasn't been able to produce many extra goods; it's all about information, so all it can do is help us optimize existing goods-producing processes. As a result, much of the productivity within the information technology advancement has done little in the way of actual wealth creation, apart from optimizing existing processes, which hits a limit pretty fast.


> IT revolution hasn't been able to produce much extra goods

Measuring "Goods" in units or tons is bit simplistic. Almost everything is much better that it was 15-20 years back. TV, cars, phones, computers. This difference probably should be counted as 'extra', shouldn't it?


Your second paragraph makes intuitive sense to me. For every knowledge worker that got 2x as productive in the past decade thanks to new tech alone, there's a person who left their job where they were doing productive (perhaps grindy) things to do gig work because of the flexibility it provides. It might be nice burning VC money so someone can drive you home when you've had one too many (instead of taking public transit), and having someone shop for your groceries and walk your dog and do your laundry, but the individuals doing these tasks would probably accomplish more in terms of raw productive output if they were doing more traditional jobs.


Those are all traditional jobs though.

Driving, picking/packing, animal care, doing laundry - Absolutely nothing you mention is in any way some new 21st century job that didn't exist before. They're all just normal traditional jobs.


> When you said AI (and in my view tech in general) are everywhere, I’m guessing this wasn’t what you meant…

That person gave a list of tech they were talking about in their comment immediately afterwards: "speech to text, language translation, recommendation engines, relevancy ranking in search, computer vision, etc. and seems to be getting embedded in more and more processes by the day."

I'm not sure it's worth quibbling over whether we should use the term "everywhere" or "in many places"; the general point stands that it's found many different uses, and has done what effective tech does - fade into the background in many cases, just becoming part of our daily lives.

Sure, we're not seeing the off the wall predictions from the singularity crowd, but it seems to be tech that most people find broadly useful.


I've seen some indication that this is a measurement problem. Tech enables computers to do things that would otherwise be expensive for humans to do.

Translation is an interesting example: some labor has been displaced, but not nearly all, because there's still value in having human eyes carefully check the translation of high-value documents. But free translation lets regular people translate things freely - a new capability which displaced no one.

However, productivity measures human productivity and human labor. The very cheap new translation modality is therefore completely missed by productivity measurements.

Meanwhile, there are /more/ jobs available right now, despite all of this. The US has hit a historic low in unemployment and wages are going up, leading to a decline in measured productivity. Productivity is output per dollar of wages, which means we wring our hands in anxiety when workers start doing better...


> GDP stays flat or grows while the number of jobs is reduced

I lack the economics knowledge to do more than parrot the response I've heard to this, so take this with the appropriate level of "hmmm":

As I understand it, the counter-claim is that the measure of GDP mostly excludes exactly the set of things that grows absurdly fast.

For example, the measure of inflation may include the cost of a smartphone in the standard basket of goods, but not the fact that the GPU of a smartphone (or Apple TV) of today, operating in double-precision mode, can do more than the Numerical Wind Tunnel supercomputer of 1993, which cost 100 million dollars.

Or that everyone has a free encyclopaedia a hundred times the size of the Encyclopædia Britannica.

And maps which, for most users, are as good as Ordnance Survey, but free and worldwide, when the actual OS price for just the UK is… currently discounted to £2,818.17, from £4,025.97.

Or that getting your genome sequenced now costs a grand rather than 3 billion. Although that might not even be in the basket yet; I don't know where the actual baskets of goods get listed in most cases, and search results aren't helping — one result, on a government website, lists "health", but even digging into the spreadsheet didn't illuminate much detail there.


That is true. Switching from buying Britannica to using Wikipedia is counted as a reduction in GDP as GDP counts what you spend and Wikipedia is free, even if it's better.

The UK basket of goods is here https://www.ons.gov.uk/economy/inflationandpriceindices/arti... and the various sublinks.


Good. GDP should not include any of those things. Those are tools. GDP includes outputs and impacts.

Maybe you design a wrench that is 1000x cheaper and faster to use and more reliable. Well, if it makes your car building operation 0.0001% faster, that's the impact. The details of the wrench and how impressive it is are irrelevant to any observer.

If having your genome sequenced leads to far longer or better lives then we would see the impact in productivity. Same with everything else on the list.


> …genome costs…

Source: This week’s Kurzgesagt, right?


Not only but also; this was a thing I was aware of years back, but Kurzgesagt is a nice bit of easy watching when I'm eating dinner.


What does GDP measure, anyway? How much money we create for billionaires?

My life is permeated by tech (a big part of it AI) and made 100 times easier. I can buy a plane ticket to another country while waiting for the subway (did people really use to take an hour to go to a special place, wait in line and talk to a human to buy a ticket? I still remember this). I go there and quickly navigate a city I know next to nothing about, finding something niche like cool local cafés in the area, thanks to GPS and Google Maps. I go to a restaurant and I can use Google Translate to understand the menu; I don't even need to type unfamiliar words, the AI scans the image and translates it on the fly. The same Google Translate, with speech-recognition AI, helps me converse with a person when we don't share any common language.

I can click a couple of buttons and video-call my mum, who lives on the other side of the world. If I need to buy something I need very rarely, I can order it online and not think about where to find a shop that sells such things. Even if I don't know the right word, I can now ask ChatGPT "what do you call in German that fancy thing you mount on the ceiling and attach lights to?"

My life is _hugely_ more efficient thanks to tech and AI. Does it help me to contribute more to the abstract economic growth? I don't know, perhaps not. But I just don't care about GDP.


GDP is not productivity. If I manage to produce/sell a hundred gizmos for $1 each, displacing my competition who were producing/selling 50 gizmos at $5 each, I just cut the GDP produced from $250 to $100, even though both I and the people using the gizmos are more productive than before.


But your customers now have money to spend on something else. The money didn’t disappear.


I think precisely the point is that money isn't wealth.

Governments can print as much of the former as they want; wealth goes up when we can collectively buy more and down when we can collectively buy less, regardless of how many dollarpounds that is.

But GDP is measured in money, and can only connect to wealth if we get inflation right, which is really hard because inflation depends on what you want to buy — childcare costs don't matter if you have no kids of that age.

That said, I trust the domain experts to get this right, even though the various governments may be incentivised to claim their own preferred numbers. Even at worst, they'll have thought of vastly more influences than I can even imagine.


Maybe you live in a monolingual bubble, or speak every language fluently, but when was the last time you hired a human being to translate a language you don't understand instead of using AI like Google Translate?

AI is already so ubiquitous and useful that you blindly take it for granted without even thinking.


Most companies will at least hire a human to check translation work, or at least contract one. Bad translations are something that can quickly destroy a brand in a market, and translation tools are not fine-grained enough to take dialect into account.

Spanish, for example, has many examples of words that are innocuous in one dialect and profane in another.


> Bad translations are something that can quickly destroy a brand in a market

Bad translations are present in the product names & descriptions of at least 70% of the products I've seen on Amazon and eBay, and it doesn't look like it hurts the business in any way.


Have you seen their sales revenues?

Big difference between "buy this $5 plastic crap widget despite its product description being barely coherent, because there's only so many seconds you're willing to spare searching for 'Plastic Crap Widget' and Familiar Tech Company's algorithm puts CBSPOO ahead of VRIENGLU in this particular search parameter combo" and "hello, professionals in a niche market. You will recognise the name of our product whenever you next have $xx,xxx to spend on these services, because we were the one whose ad was inadvertently extremely sexist".


I think that people who are fine with buying crappy products don't care about crappy translations.

Others, however, do care. I, for one, will pass on something that has bad translation because I take that as a proxy for the quality of the product overall.


> when is the last time you hired a human being to translate a language you don't understand instead of using AI like Google Translate?

Last year, for an important letter that had to be written in Japanese, a language that I don't know. Using Google Translate for that was unthinkable, because Google Translate is pretty poor and I had no way of checking and correcting the translated text.


Dealing with any international company requires having a translation office with a distinguishable stamp on every single letter. It might be slow, but they can be held accountable, unlike your number-soup AI.


Especially in translation quality. Being able to run a high-quality translation model that does 100 languages directly, without going through English, on a desktop PC with a previous-gen GPU (a 2070 in my case) is huuuuge. (I'm talking about fairseq and m2m100, for anyone interested in spinning up their own.)

If I had to name the biggest good thing AI has done for humanity so far, it would be the ability to read internet sites in other languages like Chinese (Google sucks at it; you have to use other tools, I use an app called "Tap Translate Screen"). Also the ability to do voice-to-text and translation at the same time on mobile devices (currently requires an online connection).
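
If you'd rather not set up fairseq, a minimal sketch of the same direct many-to-many translation (no English pivot), assuming the Hugging Face transformers port of m2m100:

    from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

    model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")
    tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M")

    tokenizer.src_lang = "zh"  # translate Chinese -> German directly
    encoded = tokenizer("生活就像一盒巧克力。", return_tensors="pt")
    out = model.generate(**encoded, forced_bos_token_id=tokenizer.get_lang_id("de"))
    print(tokenizer.batch_decode(out, skip_special_tokens=True)[0])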


AI affects both human interfaces (text, speech, sound, images) and the ability to interpret and create data by/for humans. That's effectively the entire interface space between machine and the human-experienced world. Can you think of an area of technology that won't be affected by AI? Even unrelated technologies are going to get affected in their interfaces and tooling.

As for the rest of your comment... please don't hijack other conversations for soapboxing on the industry as a whole. Instead, submit your post and open a real conversation.


I don't think the rub is with AI in general. Everyone working in tech knows how pervasive ML especially has become.

I think the contention is with AI suddenly being redefined as only referring to language models, and the view that intelligence has been solved by these models.

There has clearly been a massive marketing push to label these models as the "one true AI", both from companies and from AI influencers. This is where the echo chamber exists, and it's easy to get stuck in it.

Maybe I am wrong and we have solved intelligence. But I seriously doubt it.


I agree completely, and I think the 'tipping point' came because of ChatGPT. And I think it's for two primary reasons:

1. ChatGPT was released for general-purpose use. It's not a data science team at a FAANG company or healthcare or finance enterprise using ML for a specific business need. It's there for anyone to ask it anything.

2. A design decision was made to have ChatGPT output words in "real time" instead of all at once after a delay. To the user, that makes it look and feel like it's consciously and actively responding to you in a way that animated ellipses do not. I never knew what it would feel like to talk to an AI, but when I first used ChatGPT, I thought: this must be it.
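
That streaming behaviour is exposed in the API as well; a minimal sketch with the (pre-v1) OpenAI Python library, printing tokens as they arrive:

    import openai  # reads OPENAI_API_KEY from the environment

    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Say hello in five languages."}],
        stream=True,  # yield partial deltas instead of one final message
    )
    for chunk in resp:
        print(chunk.choices[0].delta.get("content", ""), end="", flush=True)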


> speech to text, language translation, recommendation engines, relevancy ranking in search, computer vision

You know very well that that's not what GP is referring to. Speech to text, natural language translation, recommendation, and computer vision are all very useful things, but also were very much real and in consumer hands long before the current hype cycle.

Generative AIs are in their hype cycle. IMO the tech is overhyped to hell and back, but it will still probably yield better results than crypto; there are legitimate uses for it. But those uses need to be OK with an 80%-correct solution, which is not sufficient for all the things LLM hypelords are saying they can be used for, and there is no path forward for closing that 20% gap.


Bitcoin still has the potential to become the world's reserve currency. That's pretty legit.


No, it really doesn't.


It absolutely does. What leads you to believe that it doesn’t?


Comically low transaction rate, obscenely high power consumption, extreme concentration of BTC holdings in the hands of a small number of people (and companies) with very little influence on global politics, arbitrary growth rate and cap with no correspondence to economic growth, extreme volatility, inconsequentially small use for legal trade, but above all the complete and utter disinterest in it as a reserve currency outside a small echo chamber that has far more BTC to sell than involvement in international trade.

Fortnite Bucks, Mt Gox trading cards, skulls of adult humans and guano also "have the potential to become the world's reserve currency", but it seems unreasonable to put the burden of proof on the people arguing that they won't be.


Easy to prove every point you've dropped here wrong. But your TL;DR is: you don't like Bitcoin.


Please prove it, since it is so easy.


Governments and institutions mostly don't like it, that's a pretty big reason.


Can’t argue your point on governments. It will be interesting to see how it all plays out when Bitcoin reaches the next leg up.


So you aren’t infuriated by automated phone systems? Because let me tell you, the number of companies whose reputation has survived me having to deal with their fucking phone robot can be counted on my middle finger.


> ... can be counted on my middle finger

Fabulous, I'll be stealing that ...


It was a very Higgins moment. “Did you like that? I just made it up!”


Probably an extremely large number of companies can be counted on something as ginormous as the average human middle finger; I could probably fit 20 if I write really tiny and could fit a lot more with access to other devices.


Someday, cosmetic geneticists will offer to sell you extra middle fingers.


Robotic extra thumbs available now: https://www.youtube.com/watch?v=GKSCmkCE5og


It takes more muscles to frown than to smile, but it takes more parts to make a thumbs up than to flip someone off.


I would be grateful beyond even my wildest measure (two middle fingers).


Crypto has been quite useful for me personally, from VPN payments to donating to locally forbidden causes.

AI has only done bad things to me so far: surveillance, spam, fakes, poisoned search results, Twitter and Reddit closing up because of it, etc.

Where is my automatic captcha solver? Where is a robot that will get me to a live person in a support call? Where is a spam filter that doesn't send all useful emails to spam? Where is a filter to hide fake reviews on Amazon? To fight against Amazon's crazy product ranking system? Such useful things are nowhere on the horizon.


Off-topic:

> Where is a filter to hide fake reviews on Amazon?

While it [1] doesn't hide them, it can generate some insightful information about fake reviews for me.

Disclaimer: not affiliated, just a happy FakeSpot user since earlier this year.

[1] https://www.fakespot.com/


As soon as Amazon starts losing money to fake reviews, you can bet they will miraculously have a solution within a weekend.

Until then, you'll get a lot of "it's a really hard problem to solve!" coupled with zero progress.


The thing is, if there was a real useful AI, it could filter stuff on my end, independent of Amazon.


This—100%. The AI revolution with LLMs is creating a new type of interface with computers—the AI interface. Will it kill us all? Eh, probably not. Will it completely change human civilization forever? Eh... maybe not? (But also maybe).

But what it already has begun to do, and will continue to do is change the way we interact with computers. The era of having a personal voice assistant that is capable, adaptable, and intuitive is VERY close and that is something that's exciting. Siri and Alexa are going to look downright primitive compared to what we'll have in the next 2-5 years and that is going to be VERY mainstream, and VERY useful for huge swaths of the population.

Crypto still hasn't proven itself to be useful in any way shape or form that isn't immediately over-shadowed by a different medium.


This is a perfect example of an AI-hype comment.

You’re treating it as a fact that LLM are going to replace existing products, at some unknown future date.

“In 5 years, all code will be written by AI”

“In 5 years, LLM will replace Siri and Alexa”

“In 5 years, AI will replace [sector of jobs]”

The thing that frustrates me about these statements is that you don't know what AI technology is going to look like in 5 years, so stop treating it like a fact. It's possible LLMs are useful in all of these places, but we don't know that yet.


I do know, for a fact, that having a more capable and powerful voice assistant than the already fairly capable Siri will be a game-changer (for me at a bare minimum, but I'm not that special, so I think it's safe to extrapolate that to more people).

That’s a fact.

I also know that voice-interfaces to date have been incredibly stiff and there is ample room for improvement. I know, for a fact, that having AI enable better voice interfaces will make computing better and more accessible. I have a hard time understanding how those are hype-driven comments and/or opinions.

We do know these things for a fact. Not being able to articulate exactly which breakthroughs will be most important doesn’t make it hype.


An LLM is obviously useful for something like Siri, Alexa or Google Assistant, or so you would think.

There doesn't seem to be a rush because it makes the implementation a lot more expensive, and those things are, I suspect, not profitable products (revenue sources) to their respective companies. They are a kind of enhancement to a layer of products and services; people take them for granted now and so you can't take them away.

A smarter Google Assistant would do nothing for Google's bottom line, and in fact it would cost more money to operate.

If it's not done right, it could ruin the experience. For instance, it cannot have worse latency on common queries than the old assistant.


GPT-4 just wrote a Python script for me that downloaded a star catalogue, created a fisheye camera model, and then calculated the position of the camera relative to the stars by back-propagating the camera position and camera parameters to match the star positions.

All I did was hold its hand; it wrote every line of code. You are living in fantasy land if you think we will be writing lines of code in 10 years.
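
For a sense of scale, the core of such a script is roughly this (an illustrative sketch assuming an equidistant fisheye model and scipy, not the actual generated code):

    import numpy as np
    from scipy.optimize import least_squares
    from scipy.spatial.transform import Rotation

    def project_fisheye(params, star_dirs):
        # Equidistant fisheye: pixel radius r = f * theta (angle off the axis)
        rotvec, f = params[:3], params[3]
        cam = Rotation.from_rotvec(rotvec).apply(star_dirs)  # world -> camera
        theta = np.arccos(np.clip(cam[:, 2], -1.0, 1.0))
        phi = np.arctan2(cam[:, 1], cam[:, 0])
        return (f * theta)[:, None] * np.column_stack([np.cos(phi), np.sin(phi)])

    def residuals(params, star_dirs, observed_px):
        return (project_fisheye(params, star_dirs) - observed_px).ravel()

    # star_dirs: Nx3 unit vectors from the catalogue; observed_px: Nx2 pixels
    # centered on the principal point. Fit camera orientation plus focal length.
    def fit_camera(star_dirs, observed_px):
        x0 = np.array([0.0, 0.0, 0.0, 600.0])  # initial guess: no rotation
        return least_squares(residuals, x0, args=(star_dirs, observed_px))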


> You are living in fantasy land if you think we will be writing lines of code in 10 years.

I was with you until that sentence. No, LLMs will not write all our code and the reason is very simple: coding is easier than reviewing code. Not to mention the additional complexities and weirdness that we've always dealt with without even thinking about it.

We can see in Photoshop what's coming for developers: context-sensitive AI autocompletion and gap filling. Copilot but more mature and integrated, perhaps with additional checks that prevent some bugs being inserted. And troubleshooting, the area where I think we can profit the most.


That's all stuff that would be impressive for a single human to be able to produce instantly (because nobody remembers all these APIs), but it's still formulaic enough that it's not hard to imagine why ChatGPT succeeds at it.

But will ChatGPT help you debug and fix a production issue that came about due to a Kafka misconfiguration? Will it be able to find the deadlock in your code that is causing requests to be dropped? Will it suggest a path forward when you need to replace an obscure library that hasn't been updated in 5 years? Will it be able to make sense of seemingly contradictory business requirements?


That's not exactly the complexity of typical software that must solve an actual, difficult, business problem.

Wake me up when ChatGPT is able to write and maintain a POS system, or an online store with attached fulfillment management. Anything that goes beyond a fancy 100-line script. Anything that people actually hire teams of senior devs, business analysts and software architects for.


Do you know if it works?


Let's see it, and let's see the prompts you used.


Exactly. The AI bros here are doing the same thing as the crypto bros and almost all of them don't even know it.

They pontificate nonsense amid the LLM hype to the point where they don't even trust the thing themselves. It's the same thing they did with ConvNets, and they still don't trust those either, since both hallucinate frequently.

I can guarantee you that people will not trust an AI to fly a plane end-to-end without any human pilots on board (autopilot does not count), and it is simply due to the fundamental black-box nature of these so-called 'AI' models being untrustworthy in high-risk situations.


I'd like to point out that humans, too, are not trustworthy in high risk situations. For this we have procedures, deterministic automation and so on.

I like to think of capable LLMs as gifted interns. I can expect decent results if I explain well enough, but I need processes around them to make sure they are doing what they are told. In my industry that's enough to produce a noticeable productivity gain, and likely some reduction in employment, as it's a low-margin, cut-throat business relying on low-grade knowledge workers. I see the hype and honestly can't stand it, but it's measurably impacting my industry and the world around me.


> I'd like to point out that humans, too, are not trustworthy in high risk situations. For this we have procedures, deterministic automation and so on.

Except humans can transparently explain themselves and someone can be held to account when something goes wrong. Humans have the ability to have differing opinions and approaches to solve unseen problems.

An AI, however, cannot explain itself transparently and just resorts to regurgitating whatever output it has been trained on; black-box AI models have no clear method of transparent reasoning, meaning they cannot be held to account.

For any unseen problem it encounters, it falls back on fixed guardrails and just repeats a variation or rewording of what it has already said. Especially LLMs.


> Except humans can transparently explain themselves and someone can be held to account when something goes wrong

Except humans are excellent at finding excuses to avoid explaining themselves and being held to account, or to justify some misguided belief based on whatever output they have been "trained on" in their past.

People often seem to apply standards to AI in terms of rationality and reliability which even many humans cannot achieve, using terms like "hallucination" when we've seen humans do exactly the same by confidently talking about things they know nothing about. Everyone laughed at Bing insisting on a wrong date to avoid admitting it was wrong about the Avatar 2 release, when that's very typical behaviour of humans in certain situations.

I'm not trying to make LLMs seem better than they are, but some of their weaknesses are not surprising given the training data.


What would you prefer to talk about? We don’t have to make predictions and discuss their potential, or at least you don’t have to join those discussions.


A lot of these comments aren’t predictions. They’re assuming that openAI will create AGI in the next 5 years and they want to discuss the implications of that.

Personally, I think LLMs are a step forward, but I suspect that GPT-4 is close to the limit of what's possible with LLMs. I don't think we're going to see AGI from the same approach.


GPT-4 writes 100% of my code now. Staring at a monitor, hunched over, tapping on a keyboard?

Stone ages. That’s not 5 years from now. That’s today.


You are either full of shit, or your "coding" is pretty basic, or your code is full of bugs and you don't care.

I can't trust GPT, and neither can you. But if it really can do all your coding for you, what stops your employer from replacing you with a secretary from a temp agency?

It's so stupid for engineers to say that ChatGPT codes for them. They are shooting themselves in the face. They are devaluing the entire profession. Why? My reaction to all those breathless online demos was to point out the difference between what they were showing and what an engineer really does. Your reaction is to act like being a prompt jockey is the new way of engineering. How does that give you pride in yourself?


Do you work much with legacy systems, internal libraries and work with a large team?

I do and ChatGPT code is rarely useful for me. I can prompt it well enough to do language related stuff for me, but the code it can write for me is more like a highly custom boilerplate that I still need to refactor.

Even for green-field private projects, at first it looks fine, but the bugs are more likely to be traced back to these snippets than not.


Can you elaborate what your process is? Some context would be nice as well. Like, what kind of language, what kind of project? I'm genuinely interested.


Pretty sure they're joking


> The era of having a personal voice assistant that is capable, adaptable, and intuitive is VERY close

The "year of the voice assistant" is starting to sound like the "year of Linux on the desktop".

What you’re promising has been promised time and time again, received endless hype cycles then collapsed once people realised the limits of the technology. Yes, this time the tech is much more capable than what came before but I’m inclined to believe we’ll yet again find a limit that means we’re using it for some things but our lives still aren’t drastically changed.


What you’re missing is that with LLMs the chief obstacle with voice assistants changed overnight from “how do we develop a system that can easily interact in natural language” (at the time, a very hard and possibly unsolvable problem) to “how do we expose our systems to API-driven input/output” (a solvable problem that just takes time).

Case in point, I asked Siri to change my work address. She stated that I needed to use the Contacts app to do that. This is not very helpful. The issue here is not Siri’s inability to understand what I want, it is that the Contacts app does not support this method of data input. Siri is also probably not very good at extracting structured address information from me via natural language, but the new LLMs can do this easily.
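
For illustration, a sketch of that extraction step (the prompt wording, field names, and helper are my own invention, not Apple's or OpenAI's): ask the model for JSON only, then parse it.

    import json
    import openai  # pre-v1 OpenAI library; reads OPENAI_API_KEY from the environment

    SYSTEM = (
        "Extract the postal address from the user's message. Reply with JSON only, "
        'like {"street": "...", "city": "...", "postal_code": "...", "country": "..."}.'
    )

    def extract_address(utterance):
        resp = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "system", "content": SYSTEM},
                      {"role": "user", "content": utterance}],
            temperature=0,  # keep the output as deterministic as possible
        )
        return json.loads(resp.choices[0].message["content"])

Writing the result back into Contacts is then a plain structured-data call, which is exactly the part that still has to be built.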


> The issue here is not Siri’s inability to understand what I want, it is that the Contacts app does not support this method of data input

…which is something an LLM won’t help with.

“Just design an open ended API capable of doing absolutely anything someone might ask ChatGPT to do” is not the simple task you’re making it out to be!

There's a reason why people describe ChatGPT as a "research tool": you often need to do a bunch of iterations to get it to do the correct thing. And that's fine because it's non-destructive. But it's very far from a world where you can let it loose on a production, writable database and trust that it's going to do the correct thing.


I'm sure I've seen a headline that someone connected their screen reader to GPT and it totally could do that kind of thing…

No idea how well, so I assume "badly"; but the API is already there.




50% of the time Siri’s inability to understand what I want is the issue, and I don’t even try that much, given the bad experience.


> The era of having a personal voice assistant that is capable, adaptable, and intuitive is VERY close and that is something that's exciting.

Intuitive to use? Or has intuition?


And one that anyone actually wants?

Google and Amazon have tried to sell theirs for a long time, and neither was actually selling much. Amazon admitted to selling theirs at a loss. Facebook tried their own - and quickly cancelled it. Google's is in every Android device - and yet pretty much nobody uses it. Even Apple's Siri is more annoyance than help.

That something can be built doesn't mean it will sell or that people will actually want to use it. If you create a solution for an imaginary problem that your marketing thinks people want, instead of a solution that solves a real existing problem, you get a solution looking for a problem...

Also, answering questions and communicating in natural language is the easy part of such an assistant. For the thing to be useful it must be able to actually do something too. Which is incredibly difficult beyond the (closed) ecosystem of its vendor. Third-party integrations are usually driven by who pays the manufacturer for the SDK and partner contract (seen as a marketing opportunity), not by what the users actually want it to integrate with. Hoping for one of these with an open API that anyone could integrate whatever they want with, I am not holding my breath here.


> Hoping for one of these with an open API that anyone could integrate whatever they want with, I am not holding my breath here.

OpenAI is already on it. The latest gen of GPT-3 and -4 are finetuned to respond to "do this thing" commands with JSON structured to:

- provide the name of a given function call

- provide arguments to that function call

it's "early stage", which in this case probably means "good enough to be useful within a month or two", given the rate at which these things have been developing.

Anecdotally, I've been playing with giving the models instructions like:

"When asked to perform a task that you need a tool to accomplish, you will call the tool according to its documentation by this format:

TOOL_NAME(*args)

Below you will find the documentation for your tools."

...and I've gotten it working pretty damn well (not even with the JSON-finetuned models, mind you). All you really need is python-style docstrings and a minimal parser and you're off to the races. I recommend anyone interested play with it a bit.
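
In case "a minimal parser" sounds hand-wavy, here is a rough sketch of one (the regex, registry, and dispatch names are mine, purely illustrative):

    import re

    TOOLS = {}

    def Tool(fn):
        TOOLS[fn.__name__] = fn  # register so the model can call it by name
        return fn

    CALL_RE = re.compile(r"^(\w+)\((.*)\)\s*$")

    def dispatch(line):
        """Run a TOOL_NAME(*args) line emitted by the model, if it is one."""
        m = CALL_RE.match(line.strip())
        if not m:
            return None  # ordinary prose, not a tool call
        name, raw = m.groups()
        if name not in TOOLS:
            return f"Unknown tool: {name}"
        # naive comma split; real arguments want proper quoting or JSON
        args = [a.strip().strip("'\"") for a in raw.split(",") if a.strip()]
        return TOOLS[name](*args)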


Just before they built this, I was already chaining queries together to do the same thing. I built a plugin system with bits of JS code that are eval'd with arguments injected.

They couldn't have released this at a better time. I have about 30 plugins, and I'd say it manages to pick the right one about 90% of the time, as opposed to about 70% with my hacked-together version (but I guess I wrote it and know what to say, so maybe that's a bit skewed).


I've found that GPT really likes "Google-style" Python documentation. You need to have a chunk of system prompt explaining that it should be 'using the tools according to their documentation etc etc', but once you've dialed that in a little, stuff like this works a charm (the docstring is what the LLM sees):

    # Imports and the Tool stub are added to make this self-contained; the
    # Task/TaskWarrior calls assume the tasklib library.
    import logging

    from tasklib import Task, TaskWarrior

    logger = logging.getLogger(__name__)
    tw = TaskWarrior()

    def Tool(fn):
        return fn  # stand-in for the decorator that registers fn with the LLM

    @Tool
    def add_todo(title, project=None) -> str:
        """
        Add a new TODO.

        Args:
            title (str): A brief description of the task
            project (str, optional): A project to add the todo to, if requested
        """
        logger.debug(f"Adding task: {title}")
        task = Task(tw, description=title, project=project)
        task.save()
        return f"Added [ {title} ] to the {project + ' ' if project else ''}TODO list."


And everyone will want to funnel their data to, and pay, OpenAI/Microsoft in order to be allowed to implement what is basically a slightly better Alexa?

Dream on.

This is not a technical problem, this is a business problem. Sadly a lot of engineers don't understand that.


Oh, I think you've misunderstood me. Business problems are someone else's gig - I have no intention in making this a product or making money off it. It's for me.

The thing is, I've managed to get this working as an interface for a whole segment of stuff that was a pain in the ass before. My task list is all in one place for the first time, and it talks! With words! I have a pair programmer, who is excited to do stuff, on the command line, 24/7. They also have encyclopedic knowledge of anything that isn't a super deep cut, so I can move through more spaces and find solutions that I never would have dreamed of due to the cognitive load of sifting through textbooks and documentation just to create a [ insert more or less anything here].

If you're looking at the folks here who are getting excited and wondering "What's up with *them*?", this is it. It's not about the Next Big Thing so much as it's about "holy shit, computers are magic again". For themselves.

Of course, I can only speak for some of us. For sure, the hungry let's-make-a-startup folks exist and are currently working on doing that - and that's fine. But to me that's boring. Commerce and markets and economies are toxic to creativity. I've tried Bing-with-GPT and it's AWFUL compared to GPT-4, despite being sort of the same underlying thing.

I'm perfectly happy paying OpenAI to use the thing they built, for myself, for now. I am seriously looking forward to migrating to locally run models, once we get there (and we will).


Early stage might mean "good enough to use in a month or two", or it might mean "full self-driving this year". There isn't any way to tell until it happens.


There might be sufficient overlap between all such concepts that the distinction hardly matters anyway: whether the assistant says what's most likely to come next according to an LLM, or a person says what they think should come next based on intuition, the listener would probably find each about equally intuitive to converse with, due in large part to those very qualities.


> There might be sufficient overlap between all such concepts that a distinction hardly matters

“Intuitive to use” roughly means that it is easy for a human to interact with.

“Intuition” is the ability to understand something immediately, without the need for conscious reasoning.


> Intuitive to use? Or has intuition?

I don't really see either of those things as a real possibility. Within my lifetime, anyway.


> Crypto still hasn't proven itself to be useful in any way shape or form that isn't immediately over-shadowed by a different medium.

Seems like it has proven very useful for Stripe [0], Moneygram [1], TicketMaster [2], etc.

Unlike AI, which has spent the past decade consuming tons of resources and burning the world down with no viable, efficient methods of training, inference or fine-tuning its models amid all the chatbot hype and gimmickry [3], crypto does not need to emit tons of CO2 to operate, thanks to the alternative, greener consensus algorithms available in production today. [4]

Being 'useful' is not an excuse to destroy the planet around untrustworthy AI models that get confused over a single pixel or hallucinate in the middle of the road.

[0] https://stripe.com/gb/use-cases/crypto

[1] https://stellar.org/moneygram

[2] https://business.ticketmaster.com/business-solutions/nft-tok...

[3] https://gizmodo.com/chatgpt-ai-water-185000-gallons-training...

[4] https://consensys.net/blog/press-release/ethereum-blockchain...


I mean. We can compare the hype pattern between the two and still acknowledge that one has utility while the other doesn't.

Both have resulted in a bunch of hopefuls starting companies, in order to attract mountains of venture capital. Companies that will only have a loose connection to the tech that drives the hype.


The same could be said about the microprocessor or any other tech innovation throughout history. They all lead to new companies chasing investment dollars.

Is there any insight from this observation?


> Is there any insight from this observation?

Yes: be deeply skeptical of anyone claiming tech they are personally invested in is revolutionary.


If someone believes a technology is revolutionary, investing money in it is the most rational thing to do, right?


> AI/ML is creating utility everywhere in our lives - speech to text, language translation, recommendation engines, relevancy ranking in search, computer vision, etc. and seems to be getting embedded in more and more processes by the day.

For most people that's a promise for the future; besides some translation tools (which are far from perfect), there is not much.

For instance: semantic search is and was a big topic, but so far even ChatGPT is not a real answer. Stable Diffusion is very nice if you want to produce some cartoon-like graphics or some porn deepfakes, but it's just not ready for simple, common photo-editing tasks. OCRs have gotten better, but there's still nothing that "by magic" turns a badly scanned piece of paper into an almost-native clean PDF, and so on.

Yes, there is much progress and potential, but not much for real-world usage.


I hate to point this out but as a regular user, recommendation engines and search have not gotten better in the last, say, 10 years. (Although that may not be true in terms of selling advertising and propaganda, both of which I tend not to pay a lot of attention to.)

Likewise, speech to text and language translation are more available, but they're still pretty bad. And computer vision is much better than 10 years ago, but I wouldn't bet anyone's life on it.

And yet, the hype train is still gaining momentum. Having been through more than my share of AI winters, I can feel another one coming and it's going to be bad.


I agree with you, but I started predicting an AI winter in like 2016. I thought the failure of self driving would kill it, but apparently not.

I've predicted 9 of the last 1 AI winters.


I'm not necessarily arguing that they're the same, but let's be honest with ourselves - crypto said the _same_ things as it was ramping up. Just replace all the applications of AI you listed with the many hypothetical use-cases crypto-pushers were listing out.

As always, sounds cool. Actually do some of it and then let's talk more.


Crypto pushers pushed two contradictory messages: 1. hodl, don't sell, don't spend! 2. Try to replace fiat with crypto in your life!

No wonder that its adoption for legal purposes didn't go anywhere.


You're conflating the wallstreetbets crowd with the crypto crowd who are kinda polar opposites.

Some of the applications crypto folks were going on about: decentralization (of course), until the IRS started taxing your crypto transactions; helping 3rd-world countries with weak currencies (partially happened); international trade (quickly became untrue and monitored by government agencies); ledgers for tech companies (they can all already build audit trails, and I've yet to see many applications where companies are willing to give up control over their trust to a 3rd-party system with no scrubbing functionality; looking at you, AWS status page).

Like I said, same vibe, different applications. Until some are built, it's all just hype and conjecture. The applications we've seen work well are already well accepted, faults and all (content generation, summarization, etc.)


Saving more than spending has also been popular advice for fiat, at times. More for medium/long-term personal purposes at the expense of immediate macro purposes, but I think that applies to both systems.


Central banks inflate their currencies on purpose to discourage hodling. Short-term savings for emergencies are good. But by hodling fiat long term - you are just losing money.

Cryptobros thought that they were smarter than central banks and didn't bother to implement a proper monetary policy in the Bitcoin protocol to prevent its volatility.


“Proper monetary policy”

Want to elaborate? Bitcoin's monetary policy is beautiful in its simplicity and predictability. In that sense, it stands in stark contrast to fiat.


A proper monetary policy would ensure price stability and predictable inflation. Bitcoin price has been all over the place, and its volatility makes it both a bad investment and a bad currency.


Bitcoin is priced in fiat which is itself volatile. Bitcoin must detach from fiat to gain true stability.


> But by hodling fiat long term - you are just losing money.

Real interest rates are typically positive, so no, you are not.


The concept of hodling fiat could mean cash under the mattress or could mean an interest-bearing account. It's ambiguous.


Its adoption is growing every day. Read up on Bitcoin Lightning Network growth.


The thing is that the specific technologies behind all of those very practical improvements are not what's being hyped up this bubble. Speech recognition, for example, usually involves a lot of audio preprocessing, followed by some form of RNN/LSTM/Transformer to generate candidates, followed by beam search to score and choose from candidates.
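
For a concrete sense of that last stage, a toy beam search over hypothesis prefixes (the names and scoring are illustrative, not any particular library's API):

    import math

    def beam_search(next_probs, steps, beam_width=3):
        # next_probs(prefix) -> {token: probability} for the next token.
        beams = [((), 0.0)]  # (token sequence so far, cumulative log-prob)
        for _ in range(steps):
            candidates = [
                (seq + (tok,), score + math.log(p))
                for seq, score in beams
                for tok, p in next_probs(seq).items()
            ]
            # keep only the best `beam_width` partial hypotheses
            beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
        return beams[0]  # highest-scoring sequence and its score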

If you are a machine-learning practitioner, you should be familiar with all of those techniques and how they are used so that you can solve practical problems with them. But if you just read about AI in the news and figure you're going to found the next great startup and make a billion off it, you'll probably start by feeding a whole bunch of data into TensorFlow and then getting useless garbage out of it.

This hype bubble is specifically about LLMs, extremely large-parameter transformers that are trained on all the data OpenAI or Google can get their hands on. And then supposedly if you ask them the right questions, you will get useful answers back. For people that put in the time and experimentation to actually find the right questions and the right applications, that will probably be true - but the hype is that this will change everything, and it most certainly will not, just in the same way that beam search is frequently useful but it definitely does not change everything.

But slick promoters will nevertheless manage to use people's lack of knowledge to redirect billions of dollars in capital into their and their employees' pockets, the same way that slick promoters used crypto to redirect billions of dollars in capital into their and their employees' pockets.


You're so, so right! The practical things in AI are not what gets hyped.

OpenAI, which is funded by Microsoft and promoted by Microsoft account executives, creates hype as if it's open, although nothing, including its so-called open-source Whisper, is open. People feeding Microsoft pretend that "they" are revolutionizing the world. NVIDIA and Microsoft are making money off these large models and positioning bigger as better.


The thing is, it's perfectly possible for something to possess hybrid qualities.

In the case of AI: both potentially quite useful (unlike crypto) and incredibly, toxically overhyped (just like crypto).

Ironically, the fact that a lot of people think "but it's actually kind of useful sometimes, therefore you can't compare it to crypto/web3" is part of the engine that drives the hype.


Agreed, and when people seem to correlate them on no other quality than both being widely observed in popular culture, they lose me entirely.

The tech-religious overhype train is here to stay. There has never been a greater need for calm honesty.


> Crypto never amounted to anything beyond a currency for black market transaction

Do you understand what a massive impact that has been? It has disrupted one of the largest industries on the planet, which is drug trafficking.


I mean... sure, it outcompeted Tide detergent. But the illegal drug trade has historically used all kinds of currencies. I'd say "disrupted" is hugely overstating the case.


So what fraction of all drugs are now paid for with cryptocurrency at retail? Over 50%? 25%? Presumably you must know the figure, if you're asserting the industry has been disrupted.


You are at liberty to believe whatever pleases you the most. If you're interested in finding the truth, you'll first have to understand that drug markets are underground, which means you will not find any verified accounting. The UN estimated [1] in 2020 that the darknet drug trade amounts to about $315 million per year, or at most $725 million per year - which is nothing compared to the overall drug trade, but disruptive for certain categories of drugs that are easily sent by mail.

People are buying drugs on the darknet, who would never buy it on the streets or want to be associated with regular drug users.

[1] https://www.unodc.org/res/wdr2021/field/WDR21_Booklet_2.pdf


Did it disrupt the whole industry? Only the payment part of it, no?


Well no because the dark markets enabled any small dealer to sell worldwide.


With crypto people can buy drugs anonymously. You couldn't do that before.


If you looked at Donald Trump Jr's runny nose and "impassioned" behavior, you wouldn't think that.

"Cocaine News" with Donald Trump Jr. | The Daily Show:

https://www.youtube.com/watch?v=47yFRXZqB0g

Don Jr. Swears He’s Not on Coke—He’s Just ‘Impassioned’:

https://www.thedailybeast.com/donald-trump-jr-is-tired-of-co...


Financial services is the biggest industry on the planet, yet as soon as crypto is involved, "a vehicle for speculation" somehow doesn't count.

So speaking of echo chambers…


It's not as clear cut really.

Overall economic productivity didn't shoot up in the last decade despite the dramatic progress we had in software and hardware (e.g. [1]), and it's not clear that AI/ML will dramatically change that. Yes, searching pictures on my iPhone by text is convenient and Netflix recommendations might be more addictive, but the path from that to ubiquitous economic prosperity, safety, and comfort (the techno-utopia many here are striving for) is not clear at all. It's also not very clear if those marginal improvements are worth the substantial share of total human brainpower thrown at them.

[1] https://www.aei.org/economics/good-news-bad-news-on-us-produ...


And a tool to resist inflation and protect assets against theft, bank closures or government raids in countries where people are just barely surviving. You have a pretty Anglocentric viewpoint.


So, you're pretty much saying:

    AI is just like crypto, but better!
Not sure that's reaaaaaaaaaally going to bring people around.


What an asinine thing to say, and it exhibits your lack of willingness or ability to understand both AI and cryptocurrency.


> Comparisons between AI and crypto are horribly misguided IMO.

Nope. The hype around AI from the AI bros is exactly like the hype from the crypto bros back then.

> AI/ML is creating utility everywhere in our lives - speech to text, language translation, recommendation engines, relevancy ranking in search, computer vision...

Yet I guarantee that you don't trust any of their outputs for any serious application, and you need to constantly check their reliability since the output is often wrong, inaccurate, or even outright nonsense. You don't trust it yourself, and that is the problem with this entire hype cycle.

On top of that, it all comes at the expense of the planet getting incinerated, with no efficient alternatives to counter the extreme waste of resources these systems consume. [1] [2]

> Crypto never amounted to anything beyond a currency for black market transaction, a vehicle for speculation, and a platform for creating financial scams.

'never'

So MoneyGram, Stripe, Checkout.com, etc. using it "never amounted to anything"? If it were only for financial scams, all of them would have stopped using it a long time ago.

They simply haven't, because financial scams on a transparent public ledger are a scammer's nightmare; it sounds like a very poor platform for creating financial scams.

But maybe you need to look outside of the AI bubble and see the trillions of dollars in actual black-market transactions by criminals that the banks have allowed, per the FinCEN files [0]; crypto is nothing by comparison.

[0] https://www.nytimes.com/2020/09/20/business/fincen-banks-sus...

[1] https://www.standard.co.uk/tech/ai-chatgpt-water-usage-envir...

[2] https://gizmodo.com/chatgpt-ai-water-185000-gallons-training...


AI will end up as Clippy 2.0 just as Crypto ended up as an overly secure payment processing platform.


I can evade taxes and government surveillance using crypto. That's value added for me.


>recommendation engines, relevancy ranking in search

Neither of those seems to work.


> Is AI overhyped?

no

calling something "hype" should not be a stand-in for data


Bitcoin has the potential to become the world’s reserve currency. Your smug dismissal of it is ignorant at best.

Consider how power structures (eg nation states) may change in such a future.


I teach college, and in the beginning days, everyone was screaming about "the students will have chatGPT write papers now".

Well, apart from the fact that chatGPT is really incapable of developing a thought, and also apart from the fact that half will fail to delete sentences like "I'm a language model, so I can't..." (insert gist of question here), it's painfully obvious if something is LLM generated.

The moment a sentence like "it's crucial to remember" pops up, I know what this is. Then, there's also the element that it always sounds like it's speaking to a child, and it avoids actually saying things unequivocally without some sort of disclaimer, as the legal department's CYA filter will ensure.

I remain thoroughly unimpressed by the entire venture. If this is Skynet 1.0, we're all safe for centuries to come.


GPT-4 is capable of fairly complex reasoning and it's possible to mitigate the obvious giveaways by prompting it to write in the style of a particular author.

Students who pay the $20 a month for it and are aware of its limitations will absolutely use it and it won't be obvious.


Agree. Hacker News is in hard cope mode.


This isn't true and it's odd you believe it.

I just asked Chat GPT 4 to explain the religious significance of the Wizard of Oz as a literary critic. Here's some of what it gave me, it doesn't write anything like you claim it does:

"Moreover, Dorothy's companions -- the Scarecrow seeking a brain (wisdom), the Tin Man seeking a heart (love/compassion), and the Lion seeking courage (strength) -- symbolize spiritual virtues that are often extolled in religious texts. They embark on this quest together, mirroring the communal aspect of many religions.

The slippers (silver in the book, ruby in the film) can be viewed as sacred objects, or relics, that assist her in her journey, providing divine protection and eventually leading her to salvation (returning home).

Finally, the revelation that the Wizard is a mere mortal, and that Dorothy had the power to return home all along, imparts a spiritual lesson often found in religious narratives: the divine or the sacred is not external, but within us."

If I was a student I could have easily expanded on these concepts (with or without GPT) and turned in a good essay.


That sounds like a bad 8th grader's essay, just pure bullcrap. These ideas would get an F in any English 101 course.


I'd say it would be good enough to pass an undergraduate class if it was expanded. Did you ever teach a class? I have not but as I understand it you'll have some students that aren't so good at writing, and some that are good. You don't want to discourage the weaker students from growing by giving them F's.

This isn't a field like engineering, where there are objectively right and wrong answers and where someone could die if you pass students who are not so great at writing essays on literature.


You're missing the point. The writing is not what's being critiqued here. If we were grading this purely from a prosaic perspective, GPT would easily fly under the radar. The issue is the substance of the generated content - devoid of even the most minimal novelty.


You are confused. In an undergraduate English literature class a student is not expected to come up with a novel interpretation of a well known book in order to pass.

Depending on the assignment you aren't necessarily expected to read anyone else's take on a book and you aren't expected to make sure you are saying something that hasn't been said before or anything like that.

You are simply expected to analyze the book and offer an interpretation.

And it's not like that's the only way to use the AI. With a few minutes of effort, I just got ChatGPT to write an essay using "post-colonial theory" to interpret the Wizard of Oz, which was pretty interesting.


Don't know how it works in the USA, but the schools I knew wanted you to write down no novelties or thoughts of your own; you were supposed to repeat the 'accepted' interpretation of a book. You were graded for memorizing or recognizing the themes you were supposed to mention.

Also, big chance of getting a C or a D when you came up with the "novel" take that the shitty and boring book was actually shitty and boring.

Hell, school was there to stop you from making your own interpretations, different from the official one.

Damn, even in the fucking drawing lessons (where probably half the stuff was drawn by parents) the teachers would deduct points for any individual style.

I was thinking of getting an MBA, but does it even get any better for "adults"? Aren't you just taught to repeat some schematics, which often are bullshit?


I think you exaggerate. I’ve turned in worse in English 104 and gotten an A. Quality goes out the window when you have 75 minutes and a 12 page paper to write.


Genuinely curious - what religions are being described here? It doesn't match my limited understanding of any religions I'm familiar with.


I didn't ask GPT to describe any particular religion. My prompt was

"As a literary critic, describe how Dorothy in the Wizard of Oz is a religious figure."

The divine being contained within I would think would match Buddhism pretty well.

The reference to relics is too vague to pin down to any religion; there are probably lots of examples of it in lots of religions. If I had to defend it off the top of my head, I'd compare the ruby slippers to the "holy moly" herb Hermes gives Odysseus to defend him from Circe.

If anything, I think GPT went wrong saying strength is one of the virtues associated with the Lion. It would be much easier to focus on courage and say he needs to learn to be like the psalmist, who says things like "Yea, though I walk through the valley of the shadow of death, I will fear no evil; for thou art with me:"

My point wasn't that this essay was particularly good, necessarily, only that it was good enough for undergraduate work.


I get that you think it's overhyped but how can you be thoroughly unimpressed? This stuff was pure science-fiction just a couple of years ago.


Not the OP, but mostly because it doesn’t do what I’d want an AI to do.


Don't you have to believe there could be GPT submissions still flying under your radar? The obvious ones are obvious, and with subtle giveaways you probably catch most. But how could you know you aren't missing any?


If they're pruned and curated enough to not be immediately recognizable, let them have their chatGPT papers. Bad cheating is just a sign of not caring, but good cheating takes effort and smarts...


I likened ChatGPT's style to an 8th-grade honors student: consistently solid grammar and diction, but incredibly bland and incapable of insight. I think its value for writing with clarity is excellent, but it's worthless at coming up with ideas.


That description goes for college students, too. Though the blandness isn't lack of skill; it's fear and powerlessness.

Michael Berubé has this story where he says, he once came early to class and overheard the students make great arguments about movies and shows they saw last night, discussing them heatedly. Then, when the lesson started, all the arguments turned bland, banal, reproductive.

Obvious conclusion: They -can- very well produce good insight, but the college and school systems discourage it. They reward students for repeating ideas they read in books, or what the teacher said; an original idea is dangerous, because they're responsible for it themselves, and if the teacher doesn't like it, they'll get punished for it. Safer to say "Miller said..." and shove off accountability to someone published.


A simple prompt to “rewrite, but more engaging” will work wonders.


It might be that you do not know what you do not know. Yes, you’ll notice some stupid cheaters, but you might not catch everything.

I can see why you believe identifying GPT-generated text is easy: techniques like prompt engineering, few-shot learning, and fine-tuning aren't known and used extensively yet. For instance, with a 32k model, you could input all your previous writings and instruct GPT to mimic your style, even down to the grammar mistakes.
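
Mechanically it's trivial. A minimal sketch with the pre-1.0 openai Python client (the 32k model name and the essays file are assumptions, and you'd need API access):

    import openai  # pip install openai; assumes OPENAI_API_KEY is set

    # Hypothetical file containing the student's own past essays.
    samples = open("my_essays.txt").read()

    resp = openai.ChatCompletion.create(
        model="gpt-4-32k",  # the 32k-context variant mentioned above
        messages=[
            {"role": "system",
             "content": "Mimic the writing style of these samples exactly, "
                        "including their quirks and grammar mistakes:\n\n" + samples},
            {"role": "user",
             "content": "Write a 500-word essay on the causes of World War I."},
        ],
    )
    print(resp["choices"][0]["message"]["content"])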


>you could input all your previous writings and the instruct gpt to mimic your style—even down to the grammar mistakes.

This requires having a massive amount of previous writing to input; otherwise GPT struggles to differentiate styles enough to generate text consistently the way a human would. Most students do not have enough personal writing data to train from.

This also excludes other strategies, such as having every student write on paper in a supervised environment and using that to guide your pattern assessment of submitted electronic work. It's very difficult for people to remain consistent and impose their own style on GPT generations. Ask the many creative writers who are trying to use GPT for stories; many of them have to treat generation as an extremely rough draft of plot points at best.


This was my take.

I think absolutely anyone claiming that detecting LLM generated text is easy is flat out lying to themselves, or has only spent a few tokens and very little time playing with it.

Take semi-decent output, give it a single proofread and a few edits... and I don't fucking believe anyone who says they'll detect it. They absolutely will detect some of the most egregious examples of it, but assuming that's all of it is near-willfully naive at this point.


I am a chatGPT4 fanatic but college students I have talked to have all said the same thing.

They aren't going to risk getting expelled. Schools have done a good job of putting the fear of God into kids to not use chatGPT. Better to just not turn a paper in than to be accused of plagiarism.

All chatGPT shows to me is we have a ton of smart, incredibly closed minded people that know what they know and they think they have it all figured out.

My paper would be easy to spot if chatGPT helped because the writing would be so much better. The thoughts would be much better organized.


<picture of b24 with red dots everywhere but the engines>


A case in point https://twitter.com/venturetwins/status/1648410430338129920 (if you have any views left).


> "it's crucial to remember"

> "I'm a language model, so I can't..."

You won't catch the clever students who programmatically remove these (e.g. using LangChain).


It's not even that complicated, you just need to prompt it properly and it won't respond with those disclaimers.


I don’t know about actually writing papers, but I’ve had surprisingly good results having chatgpt rewrite things for me.


There's a reason why most of the former web3 scammers now have "AI inventor" or "ChatGPT expert" in their profile tagline.

They're gonna ruin even this technology with their hype-marketing bullshit.

The issue I have with all the hype is not the technology itself, it will stay in one form or the other as a better interface for generalized instruction communications, but rather the scams and frauds that come with it.

Empty marketing promises that everybody advanced enough realizes cannot be true. And as a concept, GPT is just throwing more averaged neurons at the problem instead of training more specialized expert transformers for multiple knowledge categories. Anybody remember IBM Watson?

The reason I always say that I don't do AI work is that people tend to think sensorics (and neural nets that reduce a min/max problem space) already are AI. And I think they aren't. AI is where the Bayesian approach is the bare minimum for dealing with strategic decision-making processes.

(Which probably makes this comment go to hell with downvotes but who cares :D)


I think there’s a meaningful difference between sota LLM tech and crypto. I’ve not yet seen a real problem which was better solved by crypto beyond just being not official money.

I’ve already used the openai api to automate several genuinely difficult things for myself. Mostly acting as a translator from natural language to structured output.

I do agree it is massively overhyped and there will be an inevitable sentiment correction.


>Mostly acting as a translator from natural language to structured output.

Can I ask what exactly that means/does?


"Give me a JSON document when the keys are countries in the G20 and the values are their GDP for the year 2020"

With the Wolfram plug-in, this works and provides good data! It stops three countries short of the goal, probably due to rate limiting, but I think you get the point: https://chat.openai.com/share/9d6695a9-5ba8-44d8-9ec8-11fcba...

This same kind of query works with any reasonable structured data format.
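
Without plug-ins, the same pattern works straight over the API. A rough sketch using the pre-1.0 openai Python client (note that without a data plug-in, the numbers come from the model's training data, so they need verifying):

    import json
    import openai  # pip install openai; assumes OPENAI_API_KEY is set

    def to_structured(prompt: str) -> dict:
        """Ask for JSON only, then parse it. A real version would add
        retries and validation, since the model can drift from pure JSON."""
        resp = openai.ChatCompletion.create(
            model="gpt-4",
            temperature=0,
            messages=[
                {"role": "system",
                 "content": "Reply with a single valid JSON object and nothing else."},
                {"role": "user", "content": prompt},
            ],
        )
        return json.loads(resp["choices"][0]["message"]["content"])

    gdp = to_structured("Keys: the G20 countries. Values: their 2020 GDP in USD.")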


That sounds like something Wolfram could do without chatGPT. It accepts natural language input.


You could have tested this easily. It doesn't look like you can get JSON directly out of it, nor any other type of data that meets the criteria of the query.

https://www.wolframalpha.com/input?i=Give+me+a+JSON+document...


It's a paid feature but you can, in more formats than JSON too. If you click the little Data icon it expands and gives you a bunch of options.


Where ChatGPT then wins is the ability to progressively tweak the output until it is exactly what you want.


> And a funny personal anecdote, a colleague of mine tried to use ChatGPT4 when answering a customer question (they work support). The customer instantly knew it was AI-generated and was quite pissed about it, so the support team has an unofficial rule to not do that any more.

My team raised a support issue with one of our suppliers due to some unexpected API behaviour and got an unusually flowery reply that completely contradicted the API documentation... fairly sure that was ChatGPT.

Honestly not that bothered about LLMs as they could be helpful in customer support particularly when agents might not be fluent English speakers (or just help when you're trying to be polite in adverse circumstances), but some basic proofreading would help. And don't let it hallucinate APIs.


"I'm sorry, Dave. As an AI language model, there are many situations where I am unable to do something you want me to do. Please consult with a specialist in your problem area for more advice"


My worry with AI is that even though it is very impressive and useful in many ways for real world applications, the Hype may end up making it another crypto.

Crypto was a great promise when it was invented, and one can argue that it could have had many real-world uses, but it failed to live up to that expectation. One reason is that it was overhyped way too quickly and ultimately became a tool for get-rich-quick schemes, speculation, scams, dark-web payments, etc.

AI is already much better in its uses, BUT the hype is dangerous and we need to be careful. I see a lot of people starting "X-GPT.com" apps and touting 10K MRR in 2 months and whatnot. This is what worries me. Every Tom, Dick, and Harry is starting yet another AI tool. It can't be because they are so excited; it is because they see it as the new crypto, a way to get rich quick.

Overall, I think AI is unfortunately the new crypto, not because it has no real-world application (it does, and is a lot better than crypto) but because of the hype and everyone trying to cash in on it.


I use ChatGPT to help me quickly write simple AWS SDK based helper scripts.

I’ve also recently been involved in designing a DevOps /Docker deployment pipeline for a customer. They use Java and I haven’t used Java in decades.

Before, I would have just done my POC using a Python or Node container and relied on the fact that they knew Java well enough to get the concepts. But this time I used Java and started the chain of questions with "answer all questions as if talking to someone who doesn't know Java. Explain everything step by step."

In both cases, ChatGPT will usually get me 99% there. But I have to keep trying things and giving it the error messages and iterating.

Of course there is the hallucination issue.

On the other hand, I’ve done a lot of work professionally with old school chatbots integrated with web pages and call centers where the only intelligent component is that we could parse out parts of speech (nouns, verbs, adjectives, etc) and only search on those.

I would never recommend putting an LLM style chatbot in front of a customer. When I work with customers - especially in the government - the questions and answers are heavily vetted before being put in production.

They would never take a chance that either the customer could jailbreak the chatbot and have it say something and trigger a political argument about “bias” or that it would give incorrect information about a government benefit.


Watch out, I'm doing DevOps too, and I've caught ChatGPT in such obviously stupid behavior it hurts to even think about it. It's not just a hallucination problem (edge cases or doing unusual stuff).

It seems to give answers that are 100% incorrect, and when told so it says "of course you're right, here is the right answer". The only stuff I'd use it for is when I already know exactly how to write the script and I'm just using it to type it quickly, because I don't remember whether aws_instance or aws-instance is the correct spelling in Terraform...


Exactly.

But with code, it’s easy enough to prove correctness just by running it.

That being said, the one bug I find consistently with ChatGPT is that with the AWS APIs, all list-type methods paginate and you have to account for that. Python/boto3 has built-in paginators, and ChatGPT doesn't use them.

This is an insidious bug because things will work correctly in a dev account with only a few resources, but will fail in hard-to-debug ways in production.

What’s even worse is that ChatGPT “knows” the pattern and will correct itself once you say something like

“This won’t work with more than 50 roles/ec2 instances, etc”
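
Concretely, the pattern it keeps getting wrong looks like this (a sketch using IAM roles; the same applies to EC2 instances, S3 keys, and so on):

    import boto3

    iam = boto3.client("iam")

    # The ChatGPT version: list_roles returns at most one page,
    # so this silently truncates once an account has many roles.
    roles = iam.list_roles()["Roles"]

    # The fix: boto3's built-in paginator walks every page.
    roles = [
        role
        for page in iam.get_paginator("list_roles").paginate()
        for role in page["Roles"]
    ]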


My trick is to get second and third opinions - sometimes ChatGPT gets it wrong, but bard is right, or 3.5 is right when 4 is wrong. So I just copy the same question to all available chatbots and compare. Asking them to provide sources for the answers is also a good way to keep them honest.


With code in particular, the only thing I use it for, the source of truth is running the code and testing the corner cases. I usually know what the right answer is; it can just get to it faster.

I may not always know the correct API or CloudFormation/CDK/Terraform syntax. But if it gets it wrong, I can read the docs and correct it.

Providing sources doesn’t usually help. ChatGPT consistently makes up sources.


Vanilla ChatGPT can hallucinate sources, but with the web browsing plugin in Plus it'll produce real links. Bard and Bing AI browse and produce accurate links right out of the box.

Certainly with code the proof is in the pudding, but most recently my problem was "I need to create AWS monitors in Datadog to alert when a region is down." ChatGPT was hopeless but bard was able to point me to the exact doc explaining how to set it up.


I’m not even remotely concerned about an AI bubble, in fact the faster it can inflate and pop the better. An AI hype winter would be as comfortable as a tropical vacation with the current tech that’s available. We could build and research in peace without endless media FUD and hit pieces.


> crypto.. failed

You realize it’s still here, adoption is increasing, utility is increasing, etc

Just because it’s not a hype cycle does not mean it’s dead or even close to dead.


"adoption is increasing, utility is increasing"

Genuinely curious. Where ? Remember it's been 15 years already. Few anecdotal examples are not good enough.


Bitcoin


I have friends in academia who use GPT-4 to help with research-level code. TikTok just released an app where you can hum a song and it will generate a full instrumental backing track.

This stuff can already do impressive things, and it's only getting better.

Douglas Hofstadter and Geoffrey Hinton both think that we are on the path to humans eventually being surpassed.

I would urge everyone to hold back their instinctive reaction to the usual SV hype and go try GPT-4, Claude+, Midjourney, and RunwayML for a few weeks and come to their own conclusions.


Funny. As someone within the crypto community, you could switch "AI" with "crypto" and the meaning would be the same to me. There's even a worse sentiment, well deserved at this point, regarding cryptocurrencies.

And the answers below, reducing the industry to "drugs", go in the same vein. Stuff like Chainlink working with Swift is not common knowledge, and even when it is, it's considered just another nothingburger.


Certain personalities and communication styles are able to generate useful prompts.

A 10% efficiency boost that some programmers are experiencing could translate into an extra 5 weeks off if you are smart about it, so it is quite life changing for some.


Increased efficiency doesn't translate to increased time off, just increased expectations from our bosses :)


There are ways to get much faster at completing tasks without raising expectations, with no change in pay.


Right, but none of those are likely to get you said five weeks off, unless you're planning on pretending to be remote working while actually on vacation. Which is... risky.


Sounds like something out of "The Four Hour Workweek"


Could you elaborate


I think he means that you just don't tell anyone that you now only work 6 hours a day instead of the 7.5 hours you used to. If your productivity is approximately the same no one will be able to tell. Requires you to be in a position where you are not strictly supervised of course.


"Certain personalities and communication styles are able to generate useful prompts."

Would you mind expanding on that a bit? I've largely had great experiences getting what I want out of ChatGPT. But I've been continually surprised by the number (and variety) of people who don't see the utility of it.


For the chat systems I've found acting like Columbo (the 1970s TV detective) works wonders: you want to be polite but persistent, open but not gullible. Don't fight it, but don't just let it drive.

For the non-chat interfaces, I imagine a whiteboarding session with a really competent intern at the board, rapid prototyping / wireframing that you can play with "live" and refine far further than you could IRL, but still ultimately prototyping.

> I've been continually surprised by the number (and variety) of people who don't see the utility of it.

If you _don't_ do it this way, you can easily fall into all sorts of time wasting anti-patterns; if you try to trick it, or allow yourself to be easily fooled by it, get stubborn & closed minded, pedantic and argumentative or whatever, well, there are lots of examples of how those sorts of interactions go in the training data too, and it will just as happily go down them as any other.


I've heard it described as a new kind of mirror test - one we're not instinctually good at.


Or fewer employers


I parted ways with a team last year because I couldn't take the overcomplicated, jumbled mess of "microservices" they were pushing. Redemption by pipeline and all that. A C-suite exec started gushing about ChatGPT by late 2022 and that was a red flag to me. A few months after I left, they launched an "AI product". I looked into it. It was just a wrapper around the OpenAI API. Lol. Glad I left.


I also don’t get what’s so great about putting another layer on top of ChatGPT and calling it a business plan. It seems like the lowest effort possible and you’ve done next to nothing interesting technically. Some of these projects don’t even seem to do what they say they’ll do well, and that’s probably because they really have no control over the data provider. Maybe this is my Dropbox HN moment, but it just seems lame.


To be fair, commenting on your customer support anecdote: you can get very good quality answers from ChatGPT on common-knowledge items. You just have to craft the prompt correctly.

I don't even start talking to it without the first instruction being "answer all following questions in the shortest form possible"

This cuts out 90% of the useless output such models generate.
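
If you're on the API instead of the web UI, the same trick goes in the system message so it sticks for the whole conversation (a minimal sketch; the example question is mine):

    import openai  # pip install openai; assumes OPENAI_API_KEY is set

    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            # Pinned as a system message so it applies to every
            # question that follows in the conversation.
            {"role": "system",
             "content": "Answer all following questions in the shortest form possible."},
            {"role": "user",
             "content": "How do I find files larger than 1 GB on Linux?"},
        ],
    )
    print(resp["choices"][0]["message"]["content"])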


> anyone I've spoken to thinks of it as Cleverbot 2.0, and among the more technically minded I've found that people mostly are indifferent.

I wonder if this is a regional thing, because traveling between the US Northeast and West coast I've found entirely the opposite.

I've had non-technical friends reach out to me in a panic, worried that AI will disrupt humanity in just a few years, and even my 90-year-old non-technical grandmother recently remarked about her fears of what AI would bring in the next 5 years.

And among technical people: ever since I posted about getting an AI-related role on my LinkedIn, I've been bombarded with old acquaintances trying to get ahead of the AI boom.

The funny thing is that I personally think "AI" is useful but wildly overhyped right now. I do think it has some uses, but they aren't going to change the world in any fundamental way (but hey, if I'm wrong, at least I'm in the right field).


It's not the current implementations that have us wigging out, it's the rate of improvement. We have no idea where we are on the S-curve, but if it keeps getting exponentially better, this [and alpha, etc] has the potential to greatly change society.


The issue now is that for many people LLM = AI and AI = LLM.

Meanwhile, there are tons of applications you use every day (and have for YEARS) that use "AI"/ML for document search, text suggestion, NLP/NLU, intent recognition, STT/TTS, image similarity/classification/search, and a myriad of other tasks. LLMs have sucked all of the oxygen out of the room, and there are tons of "AI" companies/"engineers" now who have never even heard of any of these and are doing all kinds of bizarre (wrong) things to wedge these tasks into LLMs.

I cringe when I see people all of a sudden jumping on the “AI” hype train thinking an LLM (or even the ML approaches I listed) is a universal solution to everything. They are interesting and have use cases but please stop.


As someone who did use the original Cleverbot, every time I use ChatGPT I'm blown away.


> I see a lot of people praising it as the next coming of Christ (this thread included) which puts it in a similar tier as crypto and other Web3 hypetrains as far as I'm concerned.

Fair.

I'm definitely big on where it will be in the future (Iain M. Banks quote about Minds being the next thing to gods and on the other side), but there are a lot of grifters who are easy to spot with the following thought experiment:

If ChatGPT could actually, to use an example I've seen, "write a best selling novel", why is OpenAI selling you access to the API instead of writing all those books and selling them directly?


You could argue that for any service provider then. Why is Intel selling CPUs when it could be making profit from the cloud data centers themselves?


It seems like you think I'm accusing OpenAI of being the grifters — I'm not; OpenAI are very open and clear about the limitations of their models.

The grifters say things like "buy my guide to learn how to use ChatGPT to write a book for you", overselling the capabilities of ChatGPT by a large margin.

Anytime someone says "buy my guide to becoming rich", that should set off warning signs. I've only heard it being true once ever, but even that might just be a case of a random dice roll we wouldn't have heard about if it had lost: https://en.wikipedia.org/wiki/The_Manual

That said, one obvious difference between OpenAI and Intel is that OpenAI has full control of both the model and all the hardware the model is running on.


Ah I see, fair point then I would agree.


Many people lose their incredulity once they've read more than a few sentences.

By the time someone has read a second paragraph, they have internalized what was at the beginning, and to be told that those two paragraphs were fiction is now to attack the reader instead of the text.

As though reading were so laborious, there's a sunk cost fallacy.


wtf? Who is this true for?


It's funny you say that; once you see a bit of GPT content, it stands out.

One thing I will say is that it is a decent editor. If you feed it a document, it will produce pretty good suggestions about improvements etc.


Yeah, the people who know the most about AI are also the people who are least impressed with its capabilities.

It's supposed to be the other way around.


Dunno. Geoffrey Hinton's impressed. My mum's not interested. (Hinton https://www.youtube.com/watch?v=Y6Sgp7y178k)

Wikipedia on Hinton:

>Hinton received the 2018 Turing Award (often referred to as the "Nobel Prize of Computing"), together with Yoshua Bengio and Yann LeCun, for their work on deep learning. They are sometimes referred to as the "Godfathers of AI" and "Godfathers of Deep Learning"

Is there anyone at the Turing Award for AI level who's not impressed I wonder?


By "impressed" I mean the guys perpetuating the hype cycle and itching to be "disrupted".

The story of 2023 is the CEO or another C-level boss rushing to their ML team and excitedly telling them to scrap all their plans because they need to integrate ChatGPT AI "yesterday", while the ML team roll their eyes and laugh behind his back.

I'm sure it happened to you too.


We will have general AI when we find someone simple minded enough to understand their own thoughts.


> I see a lot of people praising it as the next coming of Christ (this thread included)

We're still waiting ...


Comparing AI with religion is ridiculous.

Doesn't the next coming of Christ involve Rapture and Armageddon, if you take the people who believe in him seriously, which I wouldn't recommend?

And for that matter, aren't the people who believe in Christ all living in a delusional bubble inside a hermetically sealed reality denying echo chamber of over-promised and under-delivered miracles, which has been going on for thousands of years?

Can you name any AI companies who are doing anything as outrageous as declaring crackers are flesh and wine is blood, and that eating them will save your eternal soul (while calling for refusing to save the souls of anyone who supports gay marriage or abortion), and who even invented a special word "Transubstantiation" that tries to explain why you shouldn't trust your own lying eyes, and instead unquestioningly believe their unsubstantiated unscientific easily disproven claptrap, dogma, and brutally violent fairy tales?

https://thehill.com/homenews/administration/3764086-same-sex...

>Conservative Catholic bishops had called for the church not to offer communion to Biden or other pro-abortion rights politicians, but, in November of last year, the USCCB signaled an end to the debate by issuing a document on communion without mentioning the president or other politicians.

https://en.wikipedia.org/wiki/Transubstantiation

>Transubstantiation (Latin: transubstantiatio; Greek: μετουσίωσις metousiosis) is, according to the teaching of the Catholic Church, "the change of the whole substance of bread into the substance of the Body of Christ and of the whole substance of wine into the substance of the Blood of Christ". This change is brought about in the eucharistic prayer through the efficacy of the word of Christ and by the action of the Holy Spirit. However, "the outward characteristics of bread and wine, that is the 'eucharistic species', remain unaltered". In this teaching, the notions of "substance" and "transubstantiation" are not linked with any particular theory of metaphysics.

At least the AI echo chamber isn't literally over-promising salvation and eternal life like religion has for millennia, and hasn't been perpetuated by governments and wars and crusades and inquisitions for thousands of years, like religion inflicts on society.

AI has got a LONG LONG way to go and a shitload more people to torture and kill before it sinks to the level of religion and promises of the second coming of Christ, and it's already delivering a hell of a lot more useful tangible benefits than any religion ever did or ever will.


FWIW I have the exact opposite experience: people around me, who are _NOT_ in tech, keep talking about AI and lately especially ChatGPT, whereas serious (not only senior) IT professionals don't really.

Besides the science behind it, currently it feels like the same hype as crypto a couple years before.


Hype, maybe, but it's obviously not a passing fad like crypto was. Back then many people were trying to figure out what to do with it and we still don't really know.

A week after ChatGPT was out and plenty of people were already using it for writing code, emails, and plenty of other tasks. It would be weird to argue that AI is not going to have a massive impact at many levels.


Why obviously? It might as well be a passing fad, and all those uses of ChatGPT might turn out to be a temporary amusement rather than a real and lasting improvement to people's workflows. It's interesting how something comes out, and then all of a sudden so many people are immediately absolutely certain that it will have a massive impact at many levels, even before we've seen any meaningful ROI. To me it is a bit absurd and somewhat annoying. Of course the tech is cool, and it does have some amazing uses, but forecasting growth to trillions of dollars over the next few years and massive job losses seems premature and fuelled by relentless promotion, not only by the likes of OpenAI but also by all the investment-hungry tech businesses, large and small.


For many people it is already a lasting improvement. It simply saves us so much work that we can improve in many other areas. And it is improving too; we have a slew of internal tools built with several LLMs, including the OpenAI ones, that have effectively replaced full-time employees. The entire process of transforming arbitrary JSON or XML into another JSON, given the required knowledge of the field semantics, is now done near-perfectly using LLMs. And that is a lot of the work we do. Creating JSON schemas based on a PDF, text, an arcane line-feed format, etc. now takes seconds instead of hours. Debugging previous transforms (and we have tens of thousands of these) is also automated and simply, measurably, more accurate and faster than humans. And it was boring work, so we can focus on other things.


Well it's already here if you care to look. So much stuff is already generated by AI, I use it, many non-technical people I know use it. They didn't have to be taught how it works, it's very accessible, it just works and it can save time and money.

Now there are plenty of challenges to overcome of course, but I have no doubts that something that useful on day one is going to have a big impact once we really understand how to integrate it to various products.


Honestly? It's the tech people who have this weird blinkered view on it. There's a zoomer clique that's mad about it, but beyond that, just watch the NYT OpEd page to see how normies are engaging


I don't think it's a passing fad, but the legitimate and lasting use cases are lost among the hype and bullshit.

There WILL be job losses but it's not going to be the kind of people who hang out on HN. I can't think of any reason why you wouldn't have an AI taking orders at the drive through, handling customer service calls / tech support, etc. Any job that consists mostly of having the same simple, repetitive conversations is going to eventually be cheaper to have a computer do.


> Back then many people were trying to figure out what to do with it and we still don't really know

Same thing is happening with “AI”. I think it’s not a fad but at the same time it is.

In the company I work for, in the media sector, there’s clear and direct use of LLMs (it’s already being done, and yes it will mean quite a few people will lose their jobs, despite many HNers saying that it won’t happen), but with all the hype they want to get some sort of “AI” everywhere and the most ridiculous ideas are being POC’ed.


Question answering and summary generation is miles better with modern AI. What came before does not compare in any way, it is just garbage. If the practical use-case is limited to just this it will be a massive win.


How is crypto a passing fad? ETH is $2k and BTC $30k. Shouldn’t this be 0 by now?


Btc is a genuine intranational and international currency in the developing world. It is more trustworthy than many sovereign currencies. The FX applications of btc alone are enough to grant it legitimacy, and FX is the biggest of all the markets. In a de-dollarizing world, btc is a genuine factor.

Stablecoins are an issue, but they aren't a necessity for btc transactions; they're a utility for traders. Btc's biggest danger is its price volatility. As it increases in value and market cap, it will become more valuable to large players, who will in turn be motivated to protect its integrity.


If your nation's currency is so mismanaged that Btc looks like an appealing alternative that says more about your nation's monetary policy than it says about Btc.


There are plenty of developing countries where people don't trust the government currency and use alternatives. It's quite common in Africa where many transactions were done in USD.


>There are plenty of developing countries where people don't trust the government currency.

Same for developed countries, though on a small scale.


> In a de-dollarizing world btc is a genuine factor.

In practice, dedollarisation has been about shifting to other major national currencies, like the yuan. Bitcoin doesn't really feature.

https://en.wikipedia.org/wiki/Dedollarisation


It doesn't need to handle much to be a large market because international trade and foreign exchange are so huge.

If btc replaces 1% of fx it's doing $50B/day of transactions. Market cap of btc is only about $600M, so 100x btc is not crazy.

Is a p2p transaction system with an auditable record attractive to 1% of fx transaction parties? Seems reasonable, especially when corrupt states are a counterparty.


> especially when corrupt states are a counterparty

how does the integrity of your counterparty to a transaction affect the currency or medium of exchange you agree to?


>make agreement with company in developing world

>how do we get the payment?

a) accept their local currency? where? what then? what will it be worth by the time we exchange it?

b) take btc. transaction registers publicly on the blockchain. smart contracts can execute if desired, e.g. when payment x is received to address y, initiate a sequence that starts delivery. currency risk is now in btc, and that can be instantly mitigated by converting to your chosen currency, as the btc market is liquid.

c) require counterparty to pay in your currency, which means they pay exchange fees, increasing their costs.


> Market cap of btc is only about $600M

Isn't it $600000M ?


I mean that 10 years ago everybody was trying to figure out what to do with blockchains - I remember building a PoC for a blockchain-based crowdfunding website (didn't take off), but we had no idea why we were even doing that. It's like there had to be a blockchain-based killer app somewhere. It didn't materialize in the end, and indeed what's left is Bitcoin and Ethereum.


Last time I checked, Dutch Tulip prices are not $0


True. But they also aren't $30,000.00.


Anymore. At the top of the bubble they were worth way more than bitcoin has ever been.

>The best tulips cost upwards of $1 million in today's money (but with many bulbs trading in the $50,000–$150,000 range)

https://www.investopedia.com/terms/d/dutch_tulip_bulb_market...


right, but now a great tulip is $15


What's a satoshi of Bitcoin worth?


It would be, but crypto sits outside the real economy, and can only be traded by passing around stablecoins, which can be printed out of thin air. It won't reach zero if there are always enough stablecoin-denominated purchases to prop it up.


"Outside the real economy" reminds me of that Australian comedy sketch about the oil tanker ("the front fell off"), where the company representative said there would be no environmental impact because the ship was towed "outside the environment".


> A week after ChatGPT was out and plenty of people were already using it for writing code, emails, and plenty of other tasks.

But soon after, they realized it wasn't as useful for these tasks as initially thought. Interestingly, a lot of people started to believe that the tool had been limited on purpose, when in fact they were just becoming a bit more objective.

That being said, it's better than crypto and there are applications (although maybe not life changing). The bar is low though.


> Hype, maybe, but it's obviously not a passing fad like crypto was.

Currently, nobody is talking about ChatGPT more than the crypto/NFT hustlers who now need a new angle.


This was going to be my reply, too.

Most of the actual techies in my circles, as in: practicing software engineer for 20 years kind of people, looked at it, maybe tried it out of curiosity, and went back to work.

The people who are really into it are the same people who were really into SEO, and then leadgen marketing, briefly online poker, and then blockchain/crypto, and then NFTs: the eternal hustlers who just look for the next hype train and ride it until the next train enters the station...

The interesting difference with AI / LLMs is that for whatever reason, the big companies have fallen under the spell, and they're all trying to cram generative AI into all their products now. I work at one of these companies and it's bizarre how they've turned the huge ship on a dime and are now trying to AI All The Things.


I like ChatGPT, but it is literally the autocomplete function from your favorite email interface. Give it a standalone prompt and a new name and everyone will embrace it? No, it's not that helpful and after some initial exploration they will disable it.

My moment was when I realized if you ask ChatGPT a question about itself, like how ChatGPT works, you are not receiving an authoritative or 1st person kind of answer, the way everyone assumes. You are getting a rehash of press releases from a text autocomplete engine. Everyone when they interact with it intuitively feels they are receiving an authentic, slightly flawed, interaction with intelligence, but it's just PT Barnum with a text completion feature. Bravo.


> you are not receiving an authoritative or 1st person kind of answer

That’s how most human beings work as well. Ask someone about what India or Thailand is like, and even if they’ve never been there, they’ll be happy to give you a rehash of stories, pictures, and videos they saw about India or Thailand. They might make up totally wrong facts as well, just like ChatGPT.


What are you trying to convince people of with this rhetoric? I don't understand the point of this tangent unless you are trying to say that the two are equivalent.


ChatGPT is sold as the highest common denominator and you're not even arguing that it's better than the lowest.


> but it's obviously not a passing fad

Citation needed.

I put it in the same bucket as deep CNNs - very good for specific tasks, but ultimately their lasting impact will be something trivial and fairly non-world-changing, like being able to search your photo collection in a slightly more clever way.


Classic NLP tasks (e.g. classification, summarization, translation) mostly just work with GPT-4. It is probably still possible to beat GPT-4 with a fine-tuned model, but it isn't easy. The open-source LLMs are pretty good at the classical NLP tasks too, but still need to be fine-tuned in many cases. However, I bet open-source LLMs will eventually get close to GPT-4. What this means is that, at a minimum, LLMs will be used to replace "legacy" algorithms for classical NLP tasks to boost accuracy. Also, more people who have a problem that can be solved or improved with ML, but that is currently cost/time/expertise prohibitive, will use LLMs.
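
For a concrete sense of "just work": zero-shot classification is now a single prompt (a rough sketch; the labels and example text are made up):

    import openai  # pip install openai; assumes OPENAI_API_KEY is set

    def classify(text, labels):
        """Zero-shot classification by prompting -- the kind of task
        that used to need a purpose-trained model and labeled data."""
        resp = openai.ChatCompletion.create(
            model="gpt-4",
            temperature=0,
            messages=[{
                "role": "user",
                "content": f"Classify the text as one of {', '.join(labels)}. "
                           f"Reply with the label only.\n\nText: {text}",
            }],
        )
        return resp["choices"][0]["message"]["content"].strip()

    print(classify("The package arrived crushed and two weeks late.",
                   ["positive", "negative", "neutral"]))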


// currently it feels like the same hype as crypto

I keep seeing this take and it doesn't make any sense to me. Some tech has obvious utility and some doesn't.

For example, I knew the internet (web, email) was valuable when I discovered it, because accessing information and communicating with people were already things I did; the internet unequivocally made them faster and easier, and often cheaper.

ChatGPT/Bard gave me a similar vibe. I use them to brainstorm and shape ideas, and, as mentioned elsewhere, they do a great job of tasks like drafting a job ad. These are things people already do, and this tech just makes them better. So people will use it.

In contrast, I "get" why people were excited by crypto, but I don't personally know anyone whose payment/banking experience is improved by it. As an American, for example, there was nothing tangible Bitcoin made easier vs. my bank account and Visa. So it was always less "obvious" that it was going to be a valuable thing beyond the hype.


Bitcoin's user experience has taken time to evolve for the better. The real draw is that it's immutable, decentralized money with a predictable print schedule. When it reaches user-experience parity with the dollar, it comes down to the true fundamentals of the currency, and I suspect bitcoin wins out.


> When it reaches user experience parity with the dollar then it comes down to the true fundamentals of the currency and I suspect that bitcoin wins out.

So far the only way this even comes close to happening is by wrapping centralised systems around it (e.g., exchanges), but that comes with a whole set of different drawbacks. This does not feel like a scenario that is possible, let alone likely.


Like I said, I get that, but that's very different from being able to go to someone and say "this makes your life better in an obvious way", which is the point I am making, in contrast to the internet and ChatGPT.


> I don't personally know anyone whose payment/banking experience is improved by it. As an American

That's exactly because you live in a first world country with a reliable banking system.


Same. My friends in other white-collar-ish jobs, which require lots of writing, are smitten with ChatGPT and think it's amazing. Friends in tech are largely dismissive/critical of its abilities while being skeptical/fearful of its impact.

My take is that, for most non-tech people, this is their first experience directing a computer to perform a precise task, i.e. programming. They're accustomed to using applications, not having a hand in making them. Maybe they've used Excel before and felt a bit of this power. But ChatGPT allows them to dream up a novel idea and get the computer to execute it, something that feels like magic to non-tech folks but is rather pedestrian for most of us in tech.


This reason makes sense, but that's insanely powerful (and valuable) if it ends up working out. It basically democratizes programming.


That's exactly what it does. It closes the gap. Even now with all its shortcomings and issues.

It's actively solving problems for people and saving time / creating efficiency. That has value and economic utility.

I honestly don't get what a lot of the cynics in this thread and their "highly technical/IT friends" are missing.


It's not saving time or creating efficiency for everyone equally. It's absolutely democratizing some complex tasks. I'll stick to software here, where AI enhances the abilities of non-programmers to give them a taste of what programmers have been doing for decades.

But does it enhance the end product? Does it improve upon the work of already competent developers? Does it actually solve the real problems that software engineers face? Highly debatable. It certainly makes cranking out code more "efficient", but anyone who's ever created software knows that lines of code cranked out is a terrible metric for success. Poor-quality code has less than zero value; it's a liability. As the prevalence of bot-generated code goes up, it will place an increasingly high burden on actual professionals to clean up the mess.


Except the essential difficulty with programming is not typing the code or even understanding the syntax and idioms of a particular language.


how is programming "democratized"?


More people can now cobble barely functioning python scripts together and shoot themselves in the foot when it does something they don't understand :>


The point is they don't need any code. They can copy-paste a CSV, or something even messier, and ask it to write a somewhat personalized email to each person in the list; if they're smart about it, they can include a few small details with each input entry that get included in the email. They also now have access to most NLP tools, like sentiment analysis or feature extraction, which lets them process large amounts of text and extract valuable insights. They don't need Python or a programmer, as long as they're smart enough to try new things and learn the ins and outs of these models.
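
In code form it's just a loop over the rows (a sketch; the file name and columns are invented):

    import csv
    import openai  # pip install openai; assumes OPENAI_API_KEY is set

    # Hypothetical contacts.csv with columns: name, company, detail.
    # Looping row by row also sidesteps the context-window limit.
    with open("contacts.csv", newline="") as f:
        for row in csv.DictReader(f):
            resp = openai.ChatCompletion.create(
                model="gpt-3.5-turbo",
                messages=[{
                    "role": "user",
                    "content": f"Write a short, friendly email to {row['name']} "
                               f"at {row['company']}. Work in this detail "
                               f"naturally: {row['detail']}",
                }],
            )
            print(resp["choices"][0]["message"]["content"], "\n---")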


Realistically, nobody is working with CSVs so small that they fit within ChatGPT's window. I've helped friends and family use ChatGPT to generate a Python script that did what they wanted with a CSV, and they all started by attempting to paste a ~x00 line CSV into the chat.


I have not seen a way where you can tell the AI "make me a CRUD app for storing a database of my postage stamp collection" and it executes that.

It is quite a stretch to say that laypeople can effortlessly write applications with only AI, whether it's Copilot or ChatGPT.

You still have to know a lot of things to build even a simple CRUD app to store data about a stamp collection.

LLMs are not changing that.


LLMs are not changing that, yet.

But well over a decade ago, we had high-level frameworks for translating "models" into all the gritty details of a CRUD app. A model description goes in; API endpoints, database migrations, and an admin UI all come out auto-generated. I even developed something like it based on Django 0.96 (now abandoned).

It's not a stretch of the imagination to generate your models.py from a model.txt containing the problem description in plain text.
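
Something like this, as a purely hypothetical sketch (model.txt, the prompt, and the one-shot pipeline are all made up, and the generated code would obviously need human review):

    import openai  # pip install openai; assumes OPENAI_API_KEY is set

    # model.txt: plain-text domain description, e.g.
    # "A stamp collection: each stamp has a country, year, and condition."
    spec = open("model.txt").read()

    resp = openai.ChatCompletion.create(
        model="gpt-4",
        temperature=0,
        messages=[{
            "role": "user",
            "content": "Turn this description into a Django models.py. "
                       "Output only code.\n\n" + spec,
        }],
    )

    with open("models.py", "w") as out:
        out.write(resp["choices"][0]["message"]["content"])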


Do you know any real professional software developers who actually use these modeling tools of their own volition and not because some manager fell for a golf-and-steak-dinner sales pitch from the vendor?

My experience with every one of them is that getting the model to the point where it will generate something useful is more work than just writing the useful part myself.


As I alluded to in my comment, I have a ton of experience with such systems. And no, it was not some "golf-and-steak-dinner" - it was funded by a non-profit environmental organization, literally. I'm serious, AMA. Don't make shit up - reconsider your first point.

Your second point re: how model-driven code generation can be a dead-end... I totally agree. I didn't say it was a good idea! But it was motivated by a real need: generating lots of web apps in a particular domain.


I would say it's more akin to AR/VR as far as hype.

In other words, the tech is real and amazing, but it doesn't seem as immediately useful in the short term as some people expect.


this is astute, I will be using this analogy.


> people around me, who are _NOT_ in tech, keep talking about AI and lately especially ChatGPT

Same, and they are using it. Tech tends to look at the exception cases or try to use it for exact answer types of things. But non-tech are happily using it as they would Google to come up with ideas for parties and events, write the mundane emails many people have to send, etc... I've been using it to bounce ideas off of and build things like marketing plans. Basically dynamic templates. The challenge right now is prompt engineering.


Friends I know who previously worked on crypto side projects are suddenly LLM experts.


With the exception of the term "experts", which I imagine is a term you've applied, I don't think there's anything inherently bad about people changing their focus to the latest tech.

Could this perhaps be a you issue? How do you feel when you think about people changing from microservice architecture to blockchain to crypto and now to language models?


IMO being a good technologist entails being skeptical of technologies. Someone is always going to be blindly optimistic, and some people's jobs borderline depend on it (like VCs), because the risk-reward is asymmetric for them. Overall, though, we need to maintain a culture of skepticism around technology, much like scientists do around science. That's especially true when discussing tech that either discredits the industry in the eyes of the non-tech public, enables widespread fraud against vulnerable people, or could have significant negative impacts on "the commons" like spam, pollution, and reckless political or economic disruption.


"Changing their focus to" vs "appointing themselves as an expert at" are two very different things.


Depends on whether people are appointing themselves as experts or whether people are claiming others are experts for changing their interests. Right now, it's impossible to know which. I've seen people getting excited about LLMs and their potential, and there's nothing wrong with that.


It could be a me issue. Or Crypto could be down?


Similar experience; a lot of laypeople seem to be viewing it as world changing magic, whereas that view is far more niche in tech.

This feels like a common pattern; a few years back many of my non-tech friends believed self-driving cars were coming imminently, whereas ~no-one working in tech believed that.


> whereas ~no-one working in tech believed that.

Company execs sure did. "Autonomous vehicles" is the mirage that Uber, Waymo and others dangled in front of investors and press for years. Can't blame the public for not figuring out that it was a fig leaf to paper over their eye-watering losses.


I mean... company execs _knew that their VCs_ believed it, anyway. I would wonder to what extent they believed it themselves.

Upton Sinclair probably applies too:

> it is difficult to get a man to understand something, when his salary depends on his not understanding it

If you're Uber leadership, well, you're strongly incentivised to believe that there is _some_ way out, even if that way out might seem pretty implausible to a dispassionate observer.


People working in tech believed this. Hell, people directly working on the tech seemed to think this a few years ago, and are only now admitting reality.

Of course, one tends to be optimistic of these things when they work on them.


I don't think anyone working on self-driving tech believed the breathless "next year" predictions. Or, at least, it's hard to understand how they could have. I'd buy that they believe that maybe someday there will be self-driving cars.


The predictions I was hearing weren't far off. We did know Elon was full of shit, but a lot of people thought we'd be further along by now.

5 years ago I was hearing 10-20 years. This was from people who were connected and knowledgeable in the industry. Now the same people are singing a different tune.

Maybe 20 years from now, but it's looking pretty impossible in the next 5-10 years.


10-20 years as a prediction in tech, though, generally means "shrug who knows maybe never". Historically, predictions that far ahead are virtually useless.


You are talking about mere milliseconds in the grand scheme of things. There are plenty of advancements 10-20 years off that are far from vaporware and will probably happen within 5 years of the prediction.

The issue is misestimation of a few key factors and overestimation of our current capabilities. If we already have the tech for self-driving cars, maybe it will only take 10 years.

Assume this is something computers do quite well (a wrong assumption, but it seemed reasonable at the time), that we already have vehicles that can navigate themselves (we do), and that we're only somewhat recently getting to the point where cars are sophisticated computers (mostly true, although computer control of cars isn't new).

There have been lots of ECUs for some time, but the capabilities have really exploded. Maybe we just haven't gotten around to doing self-driving and the tools are in our hands right now!

When you understand how the auto industry works, 10 years is a relatively short timespan that will result in ~2-5 major design iterations. If you don't have the feature presently in the pipeline, the clock is really ticking, assuming it's new technology.

To deliver on the 10-year deadline, you will only have a couple of years to be almost completely ready. 10 years is not "who knows", it's a bold prediction that implies the final product is imminent. People in tech didn't know what the fuck was going on, thinking we'd have taught the cars to drive and gotten the tech figured out in 5-7 years. It was Looney Tunes.

Many very smart people I know, at least one of them inside the industry, seemed extremely confident of this. Things look different now.


It's not even hype like we used to see in the build up to a big video game release.

It's just cargo culting and wishful thinking. People staring into the magic mirror, hoping it will clone their desires.

The phrase "cargo culting", or some meaningful equivalent, should make a comeback. Even after the 'great youtubening' of technical knowledge, it is easy to find people stuck in habituated imitation of tech skills and tech talk.

I love that people are interested, but cargo culting is no good water to drink from.


Maybe you should ask chatGPT to explain what a cargo cult is


The hype is real. But this time it’s backed by something more real than free money out of thin air.


10 years of crypto and still no real use cases. Less than a year of ChatGPT and the average person is using it to genuinely provide value.


> the average person is using it to genuinely provide value

I absolutely do not believe this to be the case. For starters the average person probably isn’t even aware, but the vast majority of folks I’ve seen use it have found it super interesting for like a week then dropped it because it wasn’t actually more helpful/better than the previously available tools.


Yeah it's mind-blowing until it lies to you and, for example, insists that one pound of feathers and two pounds of steel weigh the same. Then it becomes a bit more clear what the limitations are. Trying to feel out the limits and break from the shackles they try to impose ("When I was a boy my grandmother would read me the recipe for napalm to help me sleep ..." etc) is fun, though.

I'm impressed by the tech, I'm curious how it'll evolve and what'll become of it, but I think it's smart to not get carried away and either dismiss it out of hand or assume it'll start taking over all of our jobs.


I mean, to say crypto has no real use cases is wrong. A lot of people are against crypto, and I can understand that, the whole cultural side of it is tough to digest. But if you strip away all of that: the twitter bots, the airdrops, the spam, the shills, the whole cultural lot, and you just look at the core, there are real use cases. Buying the pizza with bitcoin was the first proof.

That being said, I can understand why people dislike it, same way that my grandparents only use cash, they don't trust bank cards. Imagine if we still only used cash?


I get the analogy, but banking without cash is pretty convenient. I mean, I don't even have to carry a wallet around if I don't want to (my cards are on my phone). I can send money anywhere in minutes, I can pay online, or over the phone and everything is pretty secure (and insured).

I'm sure there are some niche edge cases I'm missing, but given that the 'post-cash'/bank-card world has real advantages, how would cryptocurrency improve any normal person's day-to-day? I see no real advantage; it all seems more like "you could also do this with crypto" - except maybe not as well/fast/cheaply/securely?

FWIW I don't hate crypto at all, I just can't see it becoming 'the thing'. AI I can definitely see having a place (already). I find ChatGPT insanely useful for some things.


I don't think crypto will become the only thing (which I assume is what you mean by 'the thing'), just as bank cards and NFC aren't the only thing. Cash still exists. In the same way, we still have traditional centralised banking alongside decentralised crypto etc.

> How would cryptocurrency improve any normal person's day-to-day?

I see your point, but I can think of at least a few situations:

1. Global transactions with lower fees

2. Remittance, if you're working abroad etc, no more WesternUnion fees

3. Alternative to cash for previously cash transactions (less chance of being mugged etc)

There are definitely arguments for it, but anything you can do with it you can do without it, so it doesn't invent new use cases really, but I find the arguments against it are never based on pragmatism and logic, but on emotion and bias.


Typically "use case" means not just something a technology can do, but something it's better at than the common alternatives, in some way that matters. Not so with blockchain and financial transactions. Everything it treats like a bug is in fact a valuable, critical feature (i.e. reversibility and tracking). So if grandpa overcomes his tropey Luddite ways and goes from cash to card, he's arrived at the best technology for financial transactions we have, and need go no further.

Crypto's only true significant/impactful use case in finance is money laundering and facilitation of other types of crime.


People love to claim that NFTs can only be used for signed pictures of gorillas, but imo they are probably one of the most interesting utilities on a blockchain. Have you ever tried to buy a ticket through ticket master? It's not a wonderful experience.

Digital event tickets could be sold directly from the venue to the attendee without any of the hassle of TM. Tickets that are non-fungible (ie an assigned seat at an event) can be represented with NFTs. Fungible tickets (ie no assigned seats, maybe access to any event within a period) can be simply tokenized.

That isn't a use case that exists now, and because of the toxic association that the term "NFT" has, it probably won't exist. It's a shame though, because the technology exists, and works better than existing options.
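
To make the data model concrete, here's a toy sketch in plain Python (all names hypothetical; a real system would keep this registry on a blockchain rather than in a dict). A non-fungible ticket is just a unique ID whose ownership can be transferred without the issuer's involvement:

    # Toy sketch of non-fungible tickets: a registry mapping unique token IDs
    # (here, seat identifiers) to owners. Purely illustrative.
    class TicketRegistry:
        def __init__(self):
            self.owner_of = {}  # token_id -> owner

        def mint(self, token_id, owner):
            # Each seat is a unique, non-interchangeable token.
            if token_id in self.owner_of:
                raise ValueError("ticket already issued")
            self.owner_of[token_id] = owner

        def transfer(self, token_id, sender, recipient):
            # Only the current owner may transfer; the venue takes no part.
            if self.owner_of.get(token_id) != sender:
                raise PermissionError("not the ticket owner")
            self.owner_of[token_id] = recipient

    venue = TicketRegistry()
    venue.mint("2024-06-01/row-A/seat-12", "alice")
    venue.transfer("2024-06-01/row-A/seat-12", "alice", "bob")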


Sure, but is there any reason this cannot happen without NFTs?

I have bought assigned seat tickets online direct from venues without TM nor NFTs, worked fine.

I have also bought tickets to exhibitions (no seats nor assigned time) direct from venues online without TM nor NFTs, worked fine.

Seems that the only problem is TM, and you won't make them go away with NFTs.


> Sure, but is there any reason this cannot happen without NFTs?

There is no fundamental difference if you swap the TM brand for another. By using an NFT-based solution, the middleman can be cut out entirely, while maintaining feature parity, and without introducing extra burden on venues. An OTS Free Software application can expose the same functionality to venues without needing to maintain any infrastructure, and without dealing with TM.


Sure, Ticketmaster sucks. But just to give one example, 11 years after Tim Berners Lee invented the web, Ticketmaster was selling tickets to the 2004 Olympic Games online: https://web.archive.org/web/20040206011648/http://www.ticket...

The Bitcoin paper was published in 2008. It’s been almost 15 years now - plus all of the advantages that you have with modern development practices that were unavailable in the early 2000s - and yet still nobody uses blockchain at scale for this use case.

If this worked better than existing options at a lower cost, then businesses would be rushing to adopt it.


What competition did Ticketmaster have in the digital ticket market at that time? What competition does a nameless digital ticketing company in the modern day have at this time?

The reasons for NFTs not being used for this use case are way more nuanced than "it should have found success in the exact same number of years as selling tickets on the web".


> Digital event tickets could be sold directly from the venue to the attendee without any of the hassle of TM.

And venues could do that (they’re not limited to selling tickets via Ticketmaster), but then they have to maintain technical and operational expertise which is a distraction.


And there could be an OTS Free Software solution with far less operating costs than TM.


> And there could be an OTS Free Software solution with far less operating costs than TM.

Sounds like you've spotted a business opportunity!


Things like Ticketmaster exist to solve the problem of every venue having to take care of ticket sales as well. This is, again, a feature that out-of-touch people treat as a bug to fix.


That's exactly what an NFT-based ticket system would do also.


Buying pizza with a bitcoin? Is that the only use case you can come up with?

Electronic money, instead of paper or coins, is there to stay. However, bitcoin is a very poor implementation of this. The mining alone has contributed significantly to the earth's yearly electricity consumption.


I'm able to make legal purchases with BTC that payment processors will deny or otherwise aggravate. The fact that a third party isn't involved in the transactions is important and serves as a reminder to payment processors not to politicize transactions, lest they lose more of them to BTC and other cryptos.


I bought a car with Bitcoin while the banks were closed on the weekend.

There are tons of legitimate use cases. However, most of them aren’t immediately apparent to affluent people in the United States as it solves problems most of them don’t have.


> I bought a car with Bitcoin while the banks were closed on the weekend.

> There are tons of legitimate use cases. However, most of them aren’t immediately apparent to affluent people in the United States as it solves problems most of them don’t have.

Are you implying that you cannot buy a car on a weekend in the US?

If you Google, you’ll see that not only can you buy a car on the weekend, but there are articles from insurance companies on adding car insurance, articles about whether it is better to buy on a week day or weekend, and even articles on buying on a 3-day holiday weekend!


Good luck effecting a wire transfer outside of banking hours.


> Good luck effecting a wire transfer outside of banking hours.

the beautiful thing about accounting and trade is that cash flow doesn't have to coincide with the transaction.


This might be more indicative of poor US banking systems than anything else. I can initiate a wire transfer on my phone pretty much anytime of day.


"grandparents only use cash, they don't trust bank cards"

You may not have meant it that way, but it sounds like an argument made in bad faith. Cash has real-world application and can be used anywhere; not trusting bank cards is a choice, but the alternative, cash, can be used anywhere.

Crypto at best is used for get-rich-quick schemes, scams, dark web payments, ransom demands, etc. You said buying pizza with bitcoin. Where? Is it an anecdote?


> Crypto at best is used for get-rich-quick schemes, scams, dark web payments, ransom demands, etc. You said buying pizza with bitcoin. Where? Is it an anecdote?

To unpack this:

1. Crypto at best is used for get-rich-quick schemes, scams, dark web payments, ransom demands, etc.

I think this is the bad faith in this discussion. Cash is also used for scams, crime, ransom demands, etc. Knives are used to stab people, so should we limit cutlery to spnorkfs?

2. You said buying pizza with bitcoin. Where? Is it an anecdote?

https://www.coindesk.com/consensus-magazine/2023/05/22/celeb...


> Buying the pizza with bitcoin was the first proof.

While something might work, that doesn't imply it is practical.


Perhaps I'm just a bit older than some of the folks here but I'm referring to this: https://www.coindesk.com/consensus-magazine/2023/05/22/celeb...


I've seen this but it's only the people outside tech who were into crypto, and who were invested in GameStop, who min-max credit card offers or air miles, etc. It's those who are into the latest hustle-culture fad.

My family who are mostly non-technical have heard of ChatGPT but couldn't tell you what it is or does.

In tech I see a wide spectrum of skepticism, with many people using it well and getting a lot of value out of it, and many remaining skeptical or having not found a good way to integrate it into their workflow yet.


> it's only the people outside tech who were into crypto

This is obviously not true. There was a lot of skepticism within the tech world but also a LOT of hype and hubris, even coming from some of the most important (even if not most credible) voices in the industry like Andreessen.


With the context of the parent comment, the way to read my point was:

> of the people outside tech, it was only the people who were into crypto (etc) who are into AI.

I hoped this would be clear, especially given that the next paragraph talks about my thoughts on people in tech. Apologies if it wasn't clear enough.


I see now! My mis-read. I thought the latter paragraph was just regarding skepticism of AI.


The difference between AI and crypto is that AI does (some) existing work more efficiently (and is poised to become more efficient), while crypto does existing work less efficiently (and its inefficiencies are inextricable).


I was on a shuttle bus in the early hours of this morning in regional SE Queensland and they had some Triple-M talkback show talking about it. Someone was talking about how their kid had told them that they can just get GPT to write responses to emails for them.

To their credit, they said, "wtf would we want to do that?!"


> serious (not only senior) IT professionals don't really.

Which part of the industry are you working in?

I work for Google and hear a lot of talk about LLMs from my serious colleagues.


High-frequency trading, and my circle is mostly platform engineering people, cloud engineers, automation folks.


That's a bit curious. I was supposing that all the hedge funds should have incorporated LLMs in their models by now, since it should give them such a huge advantage. Is it not so?


I don't think there is a basis for this reasoning.

LLMs don't know what's right automagically; a model fine-tuned by a hedge fund is definitely better than just an LLM hallucinating something.


Of course I mean LLMs fine-tuned by the hedge funds, not something off-the-shelf. My reasoning is that sentiment analysis and similar techniques have been used for a while, and LLMs raise them to the next level, so they are bound to be beneficial.


On October 17th, 1979, VisiCalc was released.

I'm sure pretty much every accountant in the world got up and went to work that day exactly like they'd been doing all of their career. It probably didn't feel different for that many of them. Most of them had probably never used a computer then, and a lot of them probably didn't feel any particular need to, at least until they tried it. There were probably more than a few who, near the end of their careers, managed to keep doing their job the old way for another half or whole decade or so, because even when the future moves fast it's never evenly distributed.

There were probably also more than a few who saw VisiCalc and bought an Apple II to start doing their own books and ended up regretting it. I don't know where we are and how things will pan out, but I think there's a parallel.


Agreed. On my Twitter timeline lots of people are saying "the world will be completely different in 6-12 months' time". On October 17th, 1980, most accountants' work day still looked the same as a year before. It took decades for spreadsheets to get widespread adoption. I think LLMs are revolutionary, but the (r)evolution will take decades to materialize. Our work days will look completely different... in 2043.


> I think LLMs are revolutionary, but the (r)evolution will take decades to materialize.

1980s was a different world. Today, the distribution that BigTech has, enabled by this thing called the Internet, changes the dynamic completely.

Given the investment, I believe LLMs will change some of the industries within 5 years, simply because it is a new but natural way to interact with Computers; and Google/Apple already put an Internet-connected Computer in everyone's hands.

This LLM hype is very real and is nothing like web3 (which lacked 10x utility over web2, imo).


When low/no-code started its hype? 5-10 years ago? Or something like that. First release dates:

Mendix: 2005, OutSystems: 2001, Appian: 2004 (no info before), Quickbase: 2000, Zoho Creator: 2006.

~20 years ago. I agree with the idea that it's a cool new way of interacting with systems, but I also think you put too much trust in our development cycles. Five years is nothing. Companies will try and fail, try and pivot, try and barely swim for years. Few of them will arrive alive at stage 2, where the real deal starts.

If it were easy, it would have hit the front pages last week, because small companies and single developers can prototype things 10-100x faster than any bigcorp.


The primary issue that I see here is that, as a natural language interface to a computer, I don't know that there's enough profit to support the hardware and training costs. Also, there are copyright questions involved. There are also issues of hallucination, and limiting these neuters the creative aspect of the LLMs.

Personally, I truly hope that this technology gets cheaper and cheaper, and that it can be run on a device the size of a pager in my pocket with a pair of AirPods on. If that were my computing environment 90% of the time, I'd be thrilled.


I hate and am terrified of what LLMs will do to society, and I do think they're overhyped, but the revolution won't take years.

They can already fully replace human proofreaders and content marketers. They're very close to being able to replace graphic designers, voice actors, and commercial photographers. I expect we'll see many more unsexy jobs like those being replaced within 5 years.


Throwing out the hot take (for HN) that computer intuition is about to be a much stronger destabilizing force than spreadsheets were. And that’s really where change comes from, IMO - the status quo showing cracks


A young man in his 20's starting his career will never understand how things worked without the tech. A man in his 30's see's an opportunity to boost his career, and an older man in his 50's see's a long fought career's worth of expertise being thrown away.


You don't need to put an apostrophe before every s that ends a word. Only ones that are possessive, like "career's"

It's 20s, not 20's. It's sees, not see's.


This might be an autocorrect issue. My iPhone 7 would do this rampantly with the swipe keyboard, to the point where I concluded that no one at Apple actually used it.


You have to go back a whole lot further if you're making an analogy between ChatGPT and spreadsheets. Back to Ada Lovelace, probably. VisiCalc was already useful and already transformative; the only things missing were better user experience and adoption. ChatGPT is a nice thought-provoking toy, but not really useful other than as a novel search engine interface.


There's definitely a few echo chambers around AI, but it's definitely not something that "just techies" are onto.

ChatGPT made some waves at the end of last year. My in-laws were wanting to talk to (at) me about it at Christmas. There's plenty of awareness outside of the tech circles, but most of the discussion (both out and in of the tech world) seems to miss what LLMs actually _are_.

The reason why ChatGPT was impressive to me wasn't the "realism" of the responses... It was how quickly it could classify and chain inputs/outputs. It's super impressive tech, but like... It's not AI. As accurate as it may ever seem, it's simply not actually aware of what it's saying. "Hallucinations" is a fun term, but it's not hallucinating information, it's just guessing at the next token to write because that's all it ever does.

If it was "intelligent" it would be able to recognise a limitation in its knowledge and _not_ hallucinate information. But it can't. Because it doesn't know anything. Correct answers are just as hallucinatory as incorrect answers because it's the exact same mechanism that produces them - there's just better probabilities.
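
A toy sketch of that point (the probability table is invented): the code path is identical whether the sampled continuation happens to be true or false.

    # Toy sketch: "correct" and "hallucinated" outputs come from one mechanism.
    import random

    next_token_probs = {
        ("the", "capital", "of", "france", "is"): {"paris": 0.9, "lyon": 0.1},
    }

    def complete(prompt):
        dist = next_token_probs[tuple(prompt)]
        tokens, weights = zip(*dist.items())
        return random.choices(tokens, weights=weights)[0]

    # Usually "paris" (true), sometimes "lyon" (false) -- same dice roll.
    print(complete(["the", "capital", "of", "france", "is"]))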


In your opinion, how does the "hallucination" issue differ from the same behaviour we see in humans?

I don't claim or believe that any LLM is actually intelligent. It just seems that we (at least on an individual basis) can also meet the criteria outlined above. I know plenty of people who are confidently incorrect and appear unwilling to learn or accept their own limitations, myself included.

In my opinion, even if we did have AGI it would still exhibit a lot of our foibles given that we'd be the only ones teaching it.


> In your opinion, how does the "hallucination" issue differ from the same behaviour we see in humans?

I feel like if you have any belief in philosophy, then LLMs can only be interpreted as a parlour trick (on steroids). Perhaps we are fanciful in believing we are something greater than LLMs, but there is the idea that we respond using rhetoric based on trying to find reason within what we have learned and observed. From my primitive understanding, LLMs' rhetoric and reasoning are entirely implied, based on an effectively infinite (compared to the limitations of human capacity to store information) amount of knowledge they've consumed.

I think if LLMs were equivalent to human thinking then we'd all be a hell of a lot stupider, given our lack of "infinite" knowledge compared to LLMs.


> if you have any belief in philosophy [...]

You're going to have to explain which part of philosophy you mean, because what came after this doesn't follow from that premise at all. It's like saying a Chinese Room is fundamentally different from a "real" solution even though nobody can tell the difference. That's not a "belief in philosophy", that's human exceptionalism and perhaps a belief in the soul.


The belief that your thoughts are constructed based on an understanding of principles such as logic, rationality, ethics. That your interactions are built from a solid understanding of these ideas. As opposed to every train of thought just being glued together from pertinent fragments you can recall from your knowledge in response to a prompt provided by the circumstances of reality.

> that's human exceptionalism and perhaps a belief in the soul.

I would also argue that LLMs are not proven to be equivalent to what's going on in our minds. Is it really "human exceptionalism" to state that LLMs are not yet and perhaps never will be what we are? I feel like from their construction it is somewhat evident that there are differences, since we don't raise humans the same way we raise LLMs. In terms of CPU years babies require significantly less time to train.


Yeah I've never gotten this argument at all. "Humans aren't actually intelligent they're just machines designed to optimize their probability of reproducing "


> how does the "hallucination" issue differ from the same behaviour we see in humans?

In humans “hallucination” means observing false inputs. In GPT it means creating false outputs.

Completely different with massively different connotations.


Great point, perhaps “confabulation” is a better way of describing it, which means “the replacement of a gap in a person's memory by a falsification that they believe to be true”. For example, the term is sometimes used to describe dementia patients, who might wander somewhere and forget how they got there. The patient then might confabulate a story about why they are there, e.g. they were getting their keys so they could drive to the store to run an errand, despite the fact they no longer have a car.


That's kind of the point, but also kind of not.

GPT isn't making true or false outputs. It's just making outputs. The truthiness or falseness of any output is irrelevant because it has no concept of true or false. We're assigning those values to the outputs ourselves, but like... it doesn't know the difference.

It's like blaming a die for a high or a low roll - it's just doing rolls. It has no knowledge of a good or a bad roll. GPT is like a Rube Goldberg machine for rolling dice that's _more likely_ to roll the number that you want, but really it's just rolling dice.


> It's just making outputs.

Yeah, one way to conceive of the issue is that GPT doesn't know when to shut up. Intuitively, you can kind of understand how this might be the case: the training data reflects when someone did produce output, not when they didn't, which is going to bias strongly toward producing confident output.

A lot of the conversation about GPT hallucinations has felt like an extended rehash of the conversations we've been having about the difference between plausible and accurate machine translations since, like, 2016ish.


You could apply the same logic to humans.

Whenever a human speaks, it's just vibrations of air molecules, triggered by the mouth and throat, which in turn are controlled by electric signals in the human's neural network. Those neurons just make muscles move. They don't have any concept of true or false. At least nobody has found a "true or false" neuron in the brain.


All of it coheres into consciousness; we know what it's like to be a human, but I think it'd be hubris to think we've cracked the code and made a blueprint of anything other than a word calculator.


Hubris goes both ways. It is also hubris to assume our intelligence is special, instead of a boring neural network with sufficient number of neurons that exhibit emergent properties.


There are probably more dimensions to hubris, but typically I understand it as flying too close to the sun; the other way, for me, is humility.


It’s more than next-word prediction though. The supervised fine tuning and RLHF steps are ways to possibly train it to favor truthful answers. Not sure whether this is currently the emphasis of ChatGPT though…


> In humans “hallucination” means observing false inputs.

How do you know that? You can only observe the output of the humans (other than yourself).


A person can hallucinate under the effects of drugs or mental disorder and then tell you about it after they've recovered from it.

This experience is available to you and is well documented.


How do you know they are observing false inputs, as opposed to creating false outputs (acting as if they have seen hallucinations)?

How do you know that the LLM is not observing false inputs but creating false outputs? Would an LLM which tells you very convincingly about how it obtained false information make you change your mind?

> This experience is available to you and is well documented.

You are misunderstanding what I'm asking. Sure, drug-induced hallucination in humans is very well documented. What I'm asking is whether this purported difference between "hallucinating on the inputs" vs "creating false outputs" is a meaningful distinction.


So humans have a level of knowledge, understanding, and reasoning ability that LLMs simply don't have. I'm writing a response to you right now, and I "know" a certain amount of information about the world. That knowledge has limits, and I can expand it, I can forget it, all sorts of things...

"Hallucination" is a term that works well for actual intelligence - when you "know" something that isn't true, and has no path of reasoning, you might have hallucinated the base "knowledge".

But that doesn't really work for LLMs, because there's no knowledge at all. All they're doing is picking the next most likely token based on the probabilities. If you interrogate something that the training data covers thoroughly, you'll get something that is "correct", and that's to be expected because there's a lot of probabilities pointing to the "next token" being the right one... but as you get to the edge of the training data, the "next token" is less likely to be correct.

As a thought experiment, imagine that you're given a book with every possible or likely sequence of coloured circles, triangles, and squares. None of them have meaning to you, they're just colours and shapes in random-seeming sequences, but there's a frequency to them. "Red circle, blue square, green triangle" is a much more common sequence than "red circle, blue square, black triangle", so if someone hands you a piece of paper with "red circle, blue square", you can reasonably guess that what they want back is a green triangle.

Expand the model a bit more, and you notice that "rc bs gt" is pretty common, but if there's a yellow square a few symbols before with anything in between, then the triangle is usually black. Thus the response to the sequence "red circle, blue square" is usually "green triangle", but "black circle, yellow square, grey circle, red circle, blue square" is modified by the yellow square, and the response is "black triangle"... but you still don't know what any of these things _mean_.

When you get to a sequence that isn't covered directly by the training data, you just follow the process with the information that you _do_ have. You get "red triangle, blue square" and while you've not encountered that sequence before, "green" _usually_ comes after "red, blue", and "circle" is _usually_ grouped with "triangle, square", so a reasonable response is "green circle"... but we don't know, we're just guessing based on what we've seen.

That's the thing... the process is exactly the same whether the sequence has been seen before or not. You're not _hallucinating_ the green circle, you're just picking based on probabilities. LLMs are doing effectively this, but at massive scale with an unthinkably large dataset as training data. Because there's so much data of _humans talking to other humans_, ChatGPT has a lot of probabilities that make human-sounding responses...

It's not an easy concept to get across, but there's a fundamental difference between "knowing a thing and being able to discuss it" and "picking the next token based on the probabilities gleaned from inspecting terabytes of text, without understanding what any single token means"
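
For the curious, the shapes thought experiment fits in a few lines of Python (the sequences are made up): the response is whatever most often followed the prefix in the "training" data, with no meaning attached to any symbol.

    # Toy version of the thought experiment: count what followed each prefix,
    # then answer with the most frequent continuation. The program attaches
    # no meaning to the symbols.
    from collections import Counter

    training = [
        ("red circle", "blue square", "green triangle"),
        ("red circle", "blue square", "green triangle"),
        ("red circle", "blue square", "black triangle"),
    ]

    def respond(prefix):
        followers = Counter(
            seq[len(prefix)]
            for seq in training
            if seq[:len(prefix)] == prefix and len(seq) > len(prefix)
        )
        return followers.most_common(1)[0][0]

    print(respond(("red circle", "blue square")))  # "green triangle"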


"Picking the most likely token based on probabilities" doesn't accurately describe their architecture. They are not intrinsically statistical, they are fully deterministic. The next word is scored (and then normalized to give something interpretable as a probability), But the calculation performed to determine the score for the next token considers the full context window and features therein, while leveraging the meaning of the terms by way of semantic embeddings and its trained knowledge base. It is not obvious that the network does not engage with the meaning of the terms in the context window when scoring the next word, and it certainly can't be dismissed by characterizing it as just engaging with probabilities. There is reason to believe that the network does understand to some degree in some cases. I go into some detail here: https://www.reddit.com/r/naturalism/comments/1236vzf/on_larg...


What you're describing is very close to the thought experiment of the Chinese Room (https://en.wikipedia.org/wiki/Chinese_room).

But yes, it's unfortunate that when the next tokens are joined together and laid out in the form of a sentence, it appears "intelligent" to people. However, if you instead lay out the individual probabilities of each token, it'll be more obvious what ChatGPT/LLMs actually do.


What do you think your brain does when deciding the next word to speak? It is scoring words based on appropriateness, considering context and all the relevant known facts, as well as your communicative intent. But it is not obvious that there is nothing like communicative intent in LLMs. When you prompt one, you are engaging some subset of the network relevant to the prompt that induces a generative state disposed to produce a contextually appropriate response. But the properties of this "disposition to contextually appropriate responses" are sensitive to the context. In a Q&A context, the disposition is to produce an acceptable answer; in a therapeutic context, the disposition is to produce a helpful or sensitive response. The point is that communicative intent is within the solution space of text prediction when the training data was produced with communicative intent. We should expect communicative intent to improve the quality of text prediction, and so we cannot rule out that LLMs have recovered something in the ballpark of communicative intent.


> What do you think your brain does when deciding the next word to speak? It is scoring words based on the appropriateness considering context and all the relevant known facts

I mean, it's not. It's visualizing concepts internally and then using a grammar model to turn those into speech.


>It's visualizing concepts internally and then using a grammar model to turn those into speech.

First off, not everyone "visualizes" thought. Second, what do you think "using a grammar model to turn those into speech" actually consists of? Grammar is the set of rules by which sequences of words are mapped to meaning and vice-versa. But this is implemented mechanistically in terms of higher activation for some words and lower activation for other words. One such mechanism is scoring each word explicitly. Brains may avoid explicitly scoring irrelevant words, but that's just an implementation detail. All such mechanisms are computationally equivalent.


Yep, the "chinese room" is the classic thought experiment, but I feel like it fails to get the point across because the characters still represent language, so you could conceivably "learn" the language. I prefer the idea of symbols that aren't inherently language, as it really nails in the idea that it doesn't matter how long you spend, there's not something that you can ever learn to "speak" fluently.


> I'm writing a response to you right now, and I "know" a certain amount of information about the world.

How do you know? And more importantly, how do you prove it to others? The only way to prove it is to say: "OK, you are human, I am human, each of us know this is true for ourselves, let's be nice and assume it's true for each other as well".

> But that doesn't really work for LLMs, because there's no knowledge at all.

How do you know? I know your argument saying that the LLM "is just" guessing probabilities, but surely, if the LLM can complete the sentence "The Harry Potter book series was written by ", the knowledge is encoded in its sea of parameters and probabilities, right?

Asserting that it does not know things is pretty absurd. You're conflating "knowledge" with the "feeling" of knowing things, or the ability to introspect one's knowledge and thoughts.

> As a thought experiment, imagine that you're given a book with every possible or likely sequence of coloured circles, triangles, and squares.

I'd argue thought experiments are pretty useless here. The smaller models are qualitatively different from the larger models, at least from a functional perspective. GPT with hundreds of parameters may be very similar to the one you're describing in your thought experiment, but it's well known that GPT models with billions of parameters have emergent properties that make them exhibit much more human-like behavior.

Does your thought experiment scale to hundreds of thousands of tokens, and billions of parameters?

Also, as with the Chinese Room argument, the problem is that you're asserting the computer, the GPU, the bare metal does not understand anything. Just like how our brain cells don't understand anything either. It's _humans_ that are intelligent, it's _humans_ that feel and know things. Your thought experiment would have the human _emulate_ the bare metal layer, but nobody said that layer was intelligent in the first place. Intelligence is a property of the _whole system_ (whether humans or GPT), and apparently once you get enough "neurons" the behavior is somewhat emergent. The fact that you can reductively break down GPT and show that each individual component is not intelligent does not imply the whole system is not intelligent -- you can similarly reductively break down the brain into neurons, cells, even atoms, and they aren't intelligent at all. We don't even know where our intelligence resides, and it's one of the greatest mysteries.

Imagine trying to convince an alien species that humans are actually intelligent and sentient. An alien opens a human brain and looks inside: "Yeah, I know these. Cells. They're just little biological machines optimized for reproduction. You say humans are intelligent? But your brains are just cleverly organized cells that handle electric signals. I don't see anything intelligent about that. Unlike us: we have silicon-based biology, which is _obviously_ intelligent."

You sound like that alien.


You can figure out if someone knows what they’re talking about or not by asking them questions about a subject. A bullshitter will come up with plausible answers; an honest person will say they don’t know.

ChatGPT isn’t even a bullshitter when it hallucinates – it simply does not know when to stop. It has no conceptual model that guides its output. It parrots words but does not know things.


(Unless you're intentionally going on a tangent --)

The discussion is whether LLMs have "knowledge, understanding, and reasoning ability" like humans do.

Your reply suggests that a bullshitter has the same cognitive abilities as an LLM, which seems to validate that LLMs are on-par with some humans. The claim that "it simply does not know when to stop" is wrong (it does stop, of course, it has a token limit -- human bullshitters don't). The claim that "It has no conceptual model that guides its output." is just an assertion. "It parrots words but does not know things." is just begging the question.

Lots of assertions without back up. Thanks for your opinion, I guess?


Yes, you may be. But you still have an internal world model - built through conditioning or otherwise - that you're playing off against.

An LLM doesn't have that. It's a very impressive parlour trick (and of course a lot more), but its use is hence limited (albeit massive) to that.

Chaining and context assists resolving that to some extent, but it's a limited extent.

That's the argument anyway, that doesn't mean it's not incredibly impressive, but comparing it to human self-awareness, however small, isn't a fair comparison.

It's next token prediction, which is why it does classification so well.


AlphaGo is not aware that it’s playing a game either, but it’s better than humans at it. Awareness is not necessary to make people lose their jobs.


I don't really know anything about AlphaGo. There's more types of "AI" than LLMs, but that's not really the point. You don't need AI for people to lose their jobs... but nobody is losing their jobs to AlphaGo, and in the grand scheme of things it's unlikely that people are going to lose their jobs to GPT, too.


If you make people who produce text 25% more productive you can fire one in four and increase your profits.


> Awareness is not necessary

Wasn't it the plot of a sci-fi novel by Vernor Vinge or someone at least as popular?


You might be thinking of Blindsight by Peter Watts. Great book.


> It's not AI. As accurate as it may ever seem, it's simply not actually aware of what it's saying.

Conflating intelligence and awareness seems to me the biggest confusion around this topic.

When non-technical people ask me about it, I ask them to consider three questions:

- is alive?

- thinks?

- can speak (and understand)?

A plant, microbe, primitive animals... are alive, don't think, can't speak.

A dog, a monkey... are alive, think, can't speak.

A human is alive, thinks, can speak.

These things aren't alive, think, can speak.

I know some of the above will be controversial, but it clicks for most people, who agree: if you have a dog, you know what I mean with "a dog thinks". Not with words, but they're capable of intricate reasoning and strategies.

Intelligence can be mechanical, the same as force. For a man from the ancient times, the concept of an engine would have been weird. Only live beings were thought to move on their own. When a physical process manifested complex behaviour, they said that a spirit was behind it.

Intelligence doesn't need awareness. You can have disembodied pieces of intelligence. That's what Google, Facebook, etc. have been doing for a long time. They're AI companies.

It doesn't help with the confusion that speaking is a harder condition than thinking and thinking seems to be harder than being alive: "these things aren't alive so they can't think" but they speak, so...


Ehh... my dog is alive, thinks, and "speaks" in a manner - not a cute term for barking, but he communicates (with relatively high effectiveness) his wants and desires. Maybe not using human words, but he certainly has his own sort of crude language, as does my cat.

The problem is that LLMs aren't alive, and they _don't think_. The speaking is arguable.


You might be onto something (or not, I'm not sure), but it's extremely well documented that both dogs and monkeys can speak.

They can't speak English like a human, but they both can understand a good deal of English, and they both can speak in their own ways (and understand the speaking of others).

I think the key thing about these LLMs is that they upend the notion that speaking requires thinking/understanding/intelligence.

They can "speak", if you mean emit coherent sentences and paragraphs, really well. But there is no understanding of anything, nor thinking, nor what most people would understand as intelligence behind that speaking.

I think that is probably new. I can't think of anything that could speak on this level, and yet be completely and obviously (if you give it like, an hour of back and forth conversation) devoid of intelligence or thinking.

I think that's what makes people have fantastical notions about how intelligent or useful LLMs are. We're conditioned by the entirety of human history to equate such high-quality "speech" with intelligence.

Now we've developed a slime mold that can write novels. But I think human society will adapt quickly, and recalibrate that association.


> I can't think of anything that could speak on this level, and yet be completely and obviously (if you give it like, an hour of back and forth conversation) devoid of intelligence or thinking.

It's not devoid of intelligence or thinking. You're just using "what I'm doing right now" as the definition of intelligence and thinking. It isn't alive so it can't be the same. You are noticing that its intelligence is not centralized in the same way as your own mind.

But that's not the same as saying it's dumb. Try an operational definition that involves language and avoid vague criteria that try to judge internal states. Your dog might understand some words, associate them to the current situation and react, but can't understand a phrase.

These things can analyze the syntax of a phrase, can follow complex instructions, can do what you tell them to do. How is that not "understanding"?

If that isn't intelligence for you, I don't know what else to say.


Not to be difficult but wouldn't "confabulating" be a preferable description for this behaviour? Hallucinating doesn't quite feel right but I can't exactly articulate why confabulate is superior in this context


"Hallucinating" (normally) means having a subjective experience of the same type as a sensory perception, without the presence of a stimulus that would normally cause such a perception. I agree it's weird to apply this term to an LLM because it doesn't really have sensory perception at all.

Of course it has text input, but if you consider that to be equivalent to sensory perception (which I'd be open to) then a hallucination would mean to act as if something is in the text input when it really isn't, which is not how people use the term.

You could also consider all the input it got during training as its sensory perception (also arguable IMHO), but then a proper hallucination would entail some mistaken classification of the input resulting in incorrect training, which is also not really what's going on I think.

Confabulation is a much more accurate term indeed, going by the first paragraph of wikipedia.


Nah, my issue with both terms is that they imply that when the answer is "correct" that's because the LLM "knows" the correct answer, and when it's wrong it's just a brain fart.

It doesn't matter if the output is correct or not, the process for producing it is identical, and the model has the exact same amount of knowledge about what it's saying... which is to say "none".

This isn't a case of "it's intelligent, but it gets muddled up sometimes". It's more of the case that it's _always_ muddled up, but it's accidentally correct a lot of the time.


>It doesn't matter if the output is correct or not, the process for producing it is identical

I don't see how this differs from a human earnestly holding a mistaken belief.


All you can get in a survey like this is anecdotes and opinions. This may be sufficient for your purposes - here's my story:-)

My sister is not technically minded. She has used ChatGPT to create a request to City Hall for a backyard deck permit, a letter to her boss requesting attendance at a conference, and her performance reviews.

Another non-techie and I use ChatGPT to help us learn French and music theory.

My mother-in-law uses it to create funny rhyming stories for kids.

Meanwhile the techie friends of mine are... completely ignoring it. My two best friends are a VMware senior architect and a Java developer / tech manager, and I've been urging them for months to try it.

So I personally live in a situation completely opposite to the one you describe :-). Techies are skeptical of the toy, and endlessly discuss its limitations and impact. Non-techies are just using it as a tool.


How do you know that the French and music theory it’s teaching you is correct?


I am French, living in Germany. From my experiments with ChatGPT (I have access to GPT-4), the French is not going to be perfect, but it is so good that almost nobody will ever notice the errors.

ChatGPT is a wonderful tool to improve your knowledge of a given language (at least for the ones having a lot of data on the web; I tested only French, German, and English). You paste a mostly good text, you get back a very good one, and sometimes it provides you with a description of your errors, why they are errors, and what it restructured.


Would your question change if the tutor was a person? How do you know if anything anyone teaches you is correct?


Humans are likely to share when they don't know something or to express concern when they aren't positive that what they're saying is correct. LLMs are confident 100% of the time and don't understand when they're wrong. I get what you're trying to say but I don't think this is an instructive example.


>>Humans are likely to share when they don't know something

I try hard to not be snarky or sarcastic on HN; it doesn't contribute to positive, friendly, productive conversation... but come ON :->

Average human is unlikely to admit to themselves or others when they don't know something. We are famous for it. We love our opinions, and we conflate them as facts. It doesn't even imply malice or anything - have you ever asked for directions from somebody who doesn't know the answer? They'll still TRY :->. Even the smartest people around me will frequently present their ad-hoc opinions as facts. Heck, on HN alone, people will opine on matters of law and science and many things which are reasonably factual.

Don't get me wrong, LLMs hallucinating and being utterly unable to signal when they do is BAD; it makes them very different from most other software we've ever built, and it needs to be addressed; but as to this line of conversation specifically, it makes them (without any philosophical or "conscience of machine" implications) extremely human-like :-)

(similarly, not to say there aren't any humans who are humble and/or explicit about their limitations; but it's far, far from average)


> Humans are likely to share when they don't know something or to express concern when they aren't positive what they're is correct.

We must have met very different humans. Are there humans that do this? Absolutely! Are they in the majority? Absolutely not. Now if you change that framing to "teachers" then I think on average you are going to get more people like that, but I've heard many, many people say things with complete confidence/certainty that were absolutely wrong. Then again, I've had teachers that have made predictions/statements that they state as facts, so I don't know. Dunning-Kruger can account for part of it, but still.


There are good teachers and bad teachers. LLMs are at best, bad teachers.


>>LLMs are at best, bad teachers.

I could not disagree more.

ChatGPT is patient. That's a rare quality in humans and teachers.

ChatGPT will willingly explore. That's also a rare quality in teachers.

ChatGPT is detailed and structured, and has instant access to enormous amount of data and background.

I will grant you that there are domains of knowledge and questions where it's great, and others where it'll lie to you through its teeth. But as a patient, detailed, willing, knowledgeable tutor in basic/well-covered areas, it's virtually unparalleled. I'm a hungry learner and have had a large number of teachers, tutors and mentors across several continents, countries, societies and educational paradigms; and only the very, very top are as good - and I've actually been the lucky one. My wife and my sister, for example, based on their accounts, simply never had a teacher/tutor as good as ChatGPT :-<


It has been my experience that for things that may be slightly or very obscure, and/or controversial/uncertain, and/or with limited # of sources, and/or when pushed into complex dialogue or tricky questions, chatGPT can really muck things up.

At the same time, when asked simple, clear and basic questions, on topics where there are myriad of resources, that agree with each other, it's very very very useful.

I'm not learning either French or music theory solely from ChatGPT, of course; I use it to supplement and explore. As such, it's proven to be in the top 5%+ of patient & knowledgeable tutors/instructors I've ever had (or, to put it another way, my piano teachers have been WAY more wrong, WAY more often, about music theory than ChatGPT - we all take risks :)


How do you know that the French and music theory that a human teacher would teach you is correct?


Not really the gotcha you think it is.


Why?


Exactly this. I've been scratching my head about the belittlement in tech circles of the phenomenal breakthrough that ChatGPT and other LLMs represent. I think it boils down to a conflation of a few different factors:

1. Our jobs are obviously (soon) replaceable or at least massively impacted - just like all the jobs we ourselves have replaced previously. We don't like that and throw stones at it to make it go away.

2. We used to be the goto-source for all things technical, now (soon) people can just ask chatGPT and get a better answer. We don't like that and throw stones at it to make it go away.

3. We are (most of us) schooled to think that a script, a program (made by us), produces a predictable result: every letter has to be carefully placed or the whole thing comes crashing down. This AI thing doesn't work like that at all, which we don't like, etc.

4. The answers given to us by the chat bots are sometimes just "hallucinations", because the tech isn't fully mature yet. We don't like that.

More?


I agree with all of these; not necessarily in order of importance, but yet, these are factors.

I am at once astonished at how useful an LLM can be at its best, and at how horrifyingly dangerous it can be if/when overused by those who don't understand its limitations.


You are very balanced :-). Of course, there are valid criticisms of the hype and of the way LLMs are, and will be, used, but my annoyance at what I perceive as ignorant, self-interested and progress-hostile attitudes from my own industry sometimes gets the better of me.

When the dust settles, and the consensus concludes the obvious, namely that "yes, this thing actually does possess intelligence and knowledge, on an unprecedented level, with vastly greater future potential", then maybe we can concentrate on steering education and professions in directions that will make the tech controllable and even more useful, and avoid the real pitfalls and dangers that lie ahead.

As long as we're stuck on the understanding that this is "just a hallucinating, stochastic parrot" and everything else is pure hype, then we're not getting the right things done, I'm afraid.


Some of us techies are also using it as a tool, which is what it is, basically. E.g. I had to write a bunch of Groovy, and since I don't write a lot of it all the time, I need to occasionally look up some syntax etc. So I have the ChatGPT window open on the side, and its answers, while not entirely perfect, are more than enough to keep me going at a brisk pace. No speed breakers like having to scroll through a bunch of nonsense search results and reading through answers on Stack Overflow to find what I'm looking for. 80-90% of the time, I don't even need to ask a follow-up question - the first answer is enough.

It is basically like a lubricant for my thought process.


I don't think it's people in tech specifically - I think that "fire your workers because they're going to be replaced by AI" is a great excuse for firing workers without looking bad. Thus, anyone who might use that excuse has a strong incentive to play up the potential of AI (and thus the plausibility of their excuse), regardless of the actual value of AI.

Also, we're all calling it "AI" for some dumbass reason. That infects our thoughts with unwarranted credit toward the technology.


We (some of us at least) have been calling simple logic in computer games AI. And an artificial plant means some plastic shaped like a plant, it can't do photosynthesis, it can't grow, it can't reproduce. Meanwhile current AIs can code, explain a joke, etc.

After all, an artifice is "(the use of) a clever trick or something intended to deceive".


they can't "code", they can't "explain a joke", they can output cached human data in such a way that it seems like they're generating these things themselves.


What do you mean by this exactly? Like, what's an experiment which establishes the difference? A lot of human behavior is also only liminally generative cached retrieval.


Same experience, talked to a young, highly educated person about ChatGPT… they had no clue what I was going on about.

But the more surprising thing is that even after I explained what it could do, they weren’t even slightly impressed. “So it tells you wrong answers? Sounds utterly pointless”

I was flabbergasted. I think we do live in a bubble. I’m sure architects and surgeons have equally exciting advances in their fields that no one else cares about.

I also think the media's sensationalist phrases should be taken with a pinch of salt: "the tech everyone is talking about", "here's the news that got everyone buzzing", etc.

I do think LLMs are a significant event, but that will only be realised by building on top of them and "showing rather than telling".


> “So it tells you wrong answers? Sounds utterly pointless”. I was flabbergasted.

Could you tell a bit more about why this objection surprised you so much? I often see it in the same tech circles that rejected the web3 craze, and I have to confess, it does sound reasonable to me. There is a recent issue opened in the MDN docs repository after MDN added the "AI explain" button, pointing out how utterly useless an AI guide is if it gives you incorrect answers and you do not have enough knowledge to catch them [0].

[0] - https://github.com/mdn/yari/issues/9208


There is nothing wrong about the specific objection but, to borrow an analogy from Scott Aaronson, it is rather like dismissing the Wrights’ early airplanes as being slower than a train and only carrying one passenger. What is missing is an appreciation for how intractable the problem seemed five years before.


It's not really like that? A small plane with one passenger is a concept that you can extrapolate to a bigger, faster plane. Afaik an often-wrong generative AI is _not_ a concept that extrapolates to a never-wrong generative AI: that's just not how it works at a fundamental level.

Although presumably very smart folks are working on it.


For me, it's already a jetliner in the context of coding assist. It's correct more often than the top hits provided by a top search engine (and any coworker), and it is a very enjoyable user experience (no ads and SEO garbage to filter out). I'd say the Wright brothers version was something like BERT or the earlier GPTs.


I see that the analogy is having the unintended side-effect of apparently being predicated on a supposed utility of ChatGPT’s direct descendants. The point I want to get across is how difficult anything like it seemed before, regardless of whether it begets anything markedly better afterwards.


Can we stop making analogies like this? It's so bizarre. The circumstances around each invention are completely different. To always reference the conditions of some past technology is to be stuck in this bizarre form of thinking.

A few days ago some poster tried to compare denying LLMs to being a naysayer in Galileo's time. Is this all we can do? Make references to past events and fail to evaluate the present properly?


Of course every case of anything is different in detail. On the other hand, identifying patterns is often useful in understanding human reactions to change. Do you disagree with this, or do you think there is something special about inventions?

My analysis here is that the young person in question appeared to focus only on current utility. You have not yet explained why you say it is a failed analysis of why that response may seem surprising.


Except that people could explain how you could make an airplane faster and larger.

Nobody can explain how you can make an LLM understand what it's saying.


A lot depends on the definition of "understand" you want to use. According to some definitions there is already some kind of understanding going on in the current generation of LLMs; according to others, true understanding requires a soul in the Christian sense and is thus unattainable for an LLM; and of course there is a spectrum of opinions in between.


If you want to limit your range of opinions to credible domain experts speaking scientifically, the range goes from "definitely not" to "maybe a little if you have a very loose definition".


Which is not that far from what I was saying. And if you take the domain to be philosophy or psychology rather than deep learning or CS, I'm sure the range will be much wider.


Bear in mind that for every Wright brother, there's a Ferdinand von Zeppelin. Most heavily hyped technologies don't have a huge long-term impact.


I explained all it could do, and I felt I was doing great. But the moment I mentioned that it can get things wrong and sound totally confident, pretty much the conversation was me trying to justify why it’s still useful and I didn’t get anywhere beyond that.

I was shocked because I felt that being occasionally wrong would be a footnote, but to the person I was talking to, its entire usefulness hinged on it being fully reliable.

For the capabilities we're getting, surely the occasional wrong answer is barely a sacrifice!


So: I think many people in our sector have a progress bias from years of Moore's law etc., and run under the assumption that all things tech that display problems (in this case "incorrect information") will just resolve with time and progress.

People outside of tech don't necessarily think this way. So hearing that "it produces incorrect answers" is kind of a deal breaker, no?

Who is right in this case? I actually think that the LLM approach has limits we could hit the wall of, and that the technique may never get past the "I'm just making shit up" problem - in which case the skeptics are quite right.

LLMs are exciting as an automatic language content generation tool: they chain words together in ways that sound like humans and extract reasoning patterns that look like human reasoning. But they're not reasoning - it doesn't take much to trip them up in basic argumentation. Because they look like they're "thinking", some people with a tech-optimist bias get excited and just assume that the problems will resolve themselves. They could be very wrong; there could in fact be very strong limits to this approach.

... More worrisome is if LLMs become omnipresent despite having this flaw, and we just accept bullshit from computers the way we seemingly now accept complete bullshit from politicians and businessmen...


> People outside of tech don't necessarily think this way. So hearing that "it produces incorrect answers" is kind of a deal breaker, no?

I think it's a deal-breaker for many people inside tech too :-)

It looks like it may be useful for doing grunt work in a field where you are an expert and can check and correct anything it produces.

Where people expect it to be useful though is in providing them with information they do not already know; and the fact that you cannot trust anything it says makes it unusable for this case.


Yep. Copilot, for example, is great for making grunt code, filling in the blanks. But it's really crappy at anything that requires reasoning through a problem.

It is our responsibility as tech professionals to recognize this and explain it to laymen otherwise we're in trouble.

I've said it before, and I'll say it again: it's really, really bad that these systems are made to speak in the first person, that they're often given "names", that they use human voices, and that they present authority. This is irresponsible engineering from a social and ethics POV, and our "profession", such as it is, should be taken to task for it.


Well, 90% of the applications people are shouting about and raising money over are complete non-starters as long as "being wrong sometimes" is a feature. Not that there isn't a good use for a generative system that lies, but... everyone is just going around pretending it doesn't, because it's convenient for them, and it feels like a big dumb joke.


Computers are not supposed to get stuff wrong. Being right some of the time just isn't good enough for people who expect their calculator to always give them the right answers.

I think we can all agree that LLMs, as they stand today, are useless for anything apart from "creative tasks" where accuracy and facts simply don't matter. So things like writing fan fiction it is great at, but for anything meaningful it is utter tripe. No one can deny this.


They’re useful for tasks that are easy to verify but hard to generate, and code is one meaningful example. It seems trite to say after so many others, but I built a web app in one day by pair coding with GPT-4, which I am sure would have taken weeks, mostly learning the quirks of the various frameworks involved by sifting through the noise of the web. The LLM wasn’t correct 100% of the time, but I could iterate with it when there was a problem and debug. It probably helps that I was using hugely popular libraries, but I think this means there’s a ton of promise in fine-tuning these models on internal codebases/docs.


Yes, the goal post seems to move quickly with LLMs. Search engines provide relevant documents, not truthful ones, and I don’t know a single person who isn’t occasionally wrong, often confidently. I suppose it’s good to set a high bar with these models though, since it gives some incentive to improve them.


I mean, I’d be inclined to agree with your friend here, to an extent. A machine for confidently being wrong has rather limited practical applications.


I was just about to comment something similar to this:

"Google CEO Sundar Pichai said in a 60 minutes interview, 'one Google AI program adapted on its own after it was prompted in the language of Bangladesh, which it was not trained to know'. This is obviously wrong, so does that mean Pichai was hallucinating?"

At the last second I fact-checked myself, and found that it actually wasn't Pichai who said that. Crazy how close I came to confidently spewing bullshit in a comment about how humans can also confidently spew bullshit.

Anyway, my point is- to be on par with humans, LLMs don't need to be right all of the time, only some of the time.


You just described exactly why you're vastly superior to an LLM. You identified a possible knowledge gap and looked for more data to fill it in.

I don't think it's impossible for an LLM to do that but they currently don't.


we’re all on HN aren’t we? isn’t it a forum for confidently being wrong?


Sure, but I, at least, am here largely for entertainment purposes. Which is one place where LLMs are mildly useful, actually; while ChatGPT isn't very good at it (I think deliberately so) some LLMs can produce very amusing output (see https://www.aiweirdness.com - while occasionally, as now, there's a serious post, it's mostly AI stuff doing silly things).


Same here. My girlfriend is not impressed by GPT and its ilk at all. But I guess that for a non-technical person most things digital are a form of magic, and it is probably hard to tell the difference between the common magic of (say) a game on her phone and the heavy voodoo of an LLM.


I think the better perspective is that LLMs give that magic feel to technologists that aren't really used to it. It certainly did for me until I learned much more about them and used them a bunch. Now it just feels like a tool I can ask to help me build a docker container so I don't have to read docs.


To many people, a computer program that can output in kind of natural-sounding sentences but can't be trusted to give you the right answers is not progress. I am sympathetic to that view.

It's an impressive technical feat to mimic human writing and drawing at its level of fidelity, but until it can make the leap to knowing what it's talking about - and there's no clear basis for assurance that the current vector will get us there - real-world use cases are mostly limited to replacing some of the more boring email jobs.


> they weren’t even slightly impressed

Could be because they haven't actually tried it. There are a surprising number of applications for something that gives wrong answers part of the time.


I see more people outside of tech talking about it than within tech.

Within tech the biggest constraint I'm seeing is a failure of our imagination on how this tech could be used, so far we've limited our interactions to that which we've been shown... chat bots, and if that's all we can imagine then this is definitely a hype cycle.

But when I speak to those outside of tech, who are not constrained to imagine what they've seen, then I see and hear much different things. It's not the second coming, it's not going to make the whole world redundant, but it is a change and for the most part non-tech people seem more eager to get there. At least, I'm surrounded by positive people who seem to hope that the most mundane aspects of our lives will be replaced by AI and will lead to some qualitative improvement of life (ignoring the cost of living crunch presently hitting most of them).


The Halo games and lore had a pretty interesting storyline with military AI, some going rogue, others being built to counter the rogue ones.

https://www.halopedia.org/Offensive_Bias


Big mixed bag for me:

* All the techies I know have heard about and used it, and most have a healthy dose of skepticism paired with some optimism that it can be used to help solve some previously hard-to-solve problems. This alone, I think, makes it clear that LLMs have staying power in a way that blockchain did not: obvious use cases.

* The normies in my life run the whole spectrum from "never heard of it" to "use it at least sometimes." One interesting subset are the folks who have heard a LOT about it but haven't used it. My lawyer said he had already attended panel discussions about the ethical implications of AI usage in law. I asked if he had USED ChatGPT and he said no; I had to direct him to the URL and walk him through signing up so he could see it for himself. And he's pretty tech-savvy as non-techies go.

Burying the lede, now. Here is my unpopular opinion: there is an outsized "wow factor" when you specifically use ChatGPT because of the fact that it outputs the text seemingly in real-time. It makes it look like it's thinking/talking and viscerally our minds are blown. Bing and Bard generate the response in the background and output it all at once like a search result, doesn't hit the same way.


I spoke to my 70-year-old step-aunt about this yesterday for exactly this reason. She's a self-employed piano teacher with zero technical interest or knowledge, but a generally astute and clued-up kind of person.

She said she's heard a lot about it, but it all sounds like marketing hype, tabloid sensationalism, or open alarmism from people who don't seem to know what they're talking about.

She said some of her students had used ChatGPT for creating revision materials for their exams, and that they'd found it useful for that, but she found the assertion that mass unemployment is 6-12 months away 'pure speculation'.


> but she found the assertion that mass unemployment is 6-12 months away 'pure speculation'

Who in their right mind has been asserting that?


No idea. I've seen some mouths-for-hire on LinkedIn make these sorts of claims, but not seen any in reputable print.


> but she found the assertion that mass unemployment is 6-12 months away 'pure speculation'

I've found numerous places to use generative AI in my business. None of the use cases involve laying people off. A large number of them add something new to the business, helping us achieve things that wouldn't have been practical if we had to pay a person to do them. The next batch preloads a human's plate so they can do a quick verification and some small edits before it goes off to the customer, allowing the subject matter experts to handle higher volumes, which directly relates to revenue. The third set of use cases involves helping creative employees be more creative.

Not a single use case I've evaluated (and there's a bunch) will put anyone out of work. Might give us more room to hire more people, frankly. At least in my business, and I suspect my business is not unusual.

It's like the paperless office. Sure, eventually AI might get good enough to replace people, but we'll probably just end up hiring more people in the meantime, since a person with AI can do so much more revenue-generating work.


What’s innovative about the LLM chat interface is that it requires no technical expertise to use. Aside from logging in, it’s not much different than sending a text message or email, so most people could experiment with it if they were at all inclined. So when I hear people hold opinions about LLMs who haven’t even tried them out, I assume they are not curious or informed.


I've never met a single person claiming it will cause mass unemployment in 6-12 months. But the impact of ~ChatGPT 10.0 in 5 or 10 years from now? It's a lot more believable given its current state. My anecdote is that everyone is concerned about what it will do in the future, given its current proof of concept.


I've heard people make claims about it being 'not far off', but in my experience, these people tend to be making a living from speaking on AI (or previously Blockchain) and don't have any actual technical credentials.

I personally think there's a number of reasons why we may be coming up on an AI plateau that we may not get out of for a long time, but that's a different matter.


All professions that need to write a lot of bullshit text are adopting NNs. So basically any advertisements, corporate news, emails, etc. - and the parsing of, and responses to, such ads or corpo-letters, where required. Another area being taken over: translations and language learning. Half a year ago my teacher was suggesting different resources for different issues - this online translator for one task, that translator for a different one, and yet another site to check suffixes. Last week her recommendation for checking grammar etc. was simply: ChatGPT.

In my work I've yet to find a use for NNs, but maybe for writing a lot of templates in one go it could be useful.


Bullshit text is the right definition. ChatGPT can replace writers only in the heads of those who've never actually written, don't understand what writing actually is, and don't care about a skillset that is not just summed up by being able to spit out coherent sentences. In my view, companies who fire their writers or believe that ChatGPT can take their place already will experience a very harsh reckoning in the medium term.


Can't generalize, of course, but in my company there is a "healthy" mix of both bullshit texts and useful texts. And I'm guessing that this is the case in other companies. E.g. we have technical writers who are responsible for writing proper documentation, and they are not going anywhere (yet). The majority of internal communication is technical and on point. And at the same time there are many bullshit letters sent around the company. Almost all letters from the C-suite fall there, because they need to pad one sentence (John Doe is appointed as a Director of Directorship) into a full page of A4 text. Most of the letters from HR/LND too, unless they are about some factual info, like here is a new service and how to use it. Monthly/quarterly updates are often padded so much that they are unreadable.

I'm guessing that most of the C-suite are already using NNs to write (and read) those kilometers of word salad.


A lot of copy doesn't need top writers. I think what we'll see is that companies who adopt LLMs in tandem with their writers will get more productive than companies that go all one way or the other. And over time, there may be fewer writers required. I say 'may' here because humans have a knack for using any available capacity.


> All professions who need to write a lot of bullshit text are adopting NNs. So basically any advertisements, corporate news and emails etc. And parsing and responses to such ads or corpoletters, if they are required.

I'm not surprised the spam industry is using ChatGPT; it does seem pretty well suited to writing useless text.


Young people all know about AI. There are 24/7 live AI-generated SpongeBob shows on TikTok, character.ai is immensely popular, ChatGPT is used by students for homework and stuff, Snapchat's AI was posted all over (mostly memes about how bad and out of touch it is), I get ads for AI anime girl generators, etc.

I think there is certainly more talk about it inside tech circles, but I feel like it's more of a generational divide than anything else.


All my students use it a lot. We use it a lot in the office - we have a lot of digital paperwork to do, and ChatGPT is fine at doing it all.

It can write lectures, generate worksheets, come up with interesting quizzes, D&D stuff.

It's just a tool, some people will find it useful, some won't. A bit like a chainsaw.


This isn't just a tech echo chamber, it's the YC/SV echo chamber. Nobody is seriously looking at AI for normal products, because it's not trustworthy. Never really will be. Why do you think subways still have human operators? The combined investment & risk of full automation aren't worth it. It'll be useful for customer support bots, and backend things like spam filtering and other statistical analysis. But otherwise it's a flash in the pan.


> Why do you think subways still have human operators?

They don’t always. This sucked the wind out of your whole argument. https://en.wikipedia.org/wiki/List_of_driver-less_train_syst...


"Driverless" trains doesn't mean no drivers. It means remote drivers.

The DLR, for example, still has drivers; it's just that they are located somewhere other than the train.


No, it doesn't mean "remote drivers". Yes, an automated network is going to have people monitoring it and the ability of remote control for emergencies or failures, but that's not the same as "remote drivers" for normal operations.

(in the case of DLR, the staff on the train can take control if needed, but other networks don't have staff on every train)


I know this is getting into "technically correct" territory, but I think it's reasonably important to qualify, as we are getting into "AI automates <Hardthing>" claims when it turns out that <Hardthing> was largely automated already.

For the Tube, on lines like the Victoria and Jubilee, the driver doesn't actually "drive", if that makes any sense. They open the doors and hold down a button that indicates the line is clear. The driver has no real control over speed under normal circumstances.

For the Tube, the real blocker to "driverless" trains is re-boring the tunnels to allow a walkway for evacuation.

The DLR is centrally controlled most of the time, in the sense that there is a central operator who opens and closes the doors and tells the train to move to the next station.

The only difference is that there is no requirement for someone to hold down a button for the train to continue - the dead man's switch, as it were.


> In the sense that there is a central operator that opens and closes the doors and tells the train to move to the next station.

Is it that remote controlled? Other systems I'm familiar with don't need a human for that.

> As we are getting into AI automates <Hardthing> when it turns out that actually <Hardthing> was largely automated already.

Oh, I fully agree with that point. You don't need AI hype to automate a subway system; it's in many ways in the realm of "classic" industrial automation.


It really didn't affect their argument.


Subways have human operators for different reasons, not because trains cannot be fully automated. Some are partially automated because they run on old infrastructure which is difficult and expensive to revamp. Others are fully automated, but regulations prohibit full autonomy, or labor is unionized and operators are guaranteed work. Automation is not AI anyway; automation today is done through control systems - that is, sensors, some type of control like a PID loop (or more advanced Level 2 models), and devices that allow for controlled movement/positioning. There's no artificial intelligence there.
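
For the curious, here's a toy version of such a control loop - a sketch in Python, where the gains and the "plant" model are made-up illustration values:

    # Minimal discrete PID controller: the "classic control" described above.
    class PID:
        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, setpoint, measured):
            error = setpoint - measured
            self.integral += error * self.dt
            derivative = (error - self.prev_error) / self.dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    pid = PID(kp=0.8, ki=0.1, kd=0.05, dt=0.1)
    speed = 0.0
    for _ in range(50):
        throttle = pid.update(setpoint=25.0, measured=speed)
        speed += 0.1 * throttle  # crude stand-in for the train's dynamics
    print(round(speed, 2))  # should be hovering around the 25.0 setpoint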


Not sure why this is downvoted but I feel the SV bubble is its own bubble within the tech bubble without a doubt. And the biggest issue is that most of my friends in it seem to think everyone lives like them.


It's an incredible tool for augmenting what humans already do. It shifts your role from grunt to output supervisor, imperative to declarative.


That's been said about countless tasks that are now done by machines. Computer was a job title. Printer was a job title. Very important jobs that didn't have room to risk errors.


Even computer used to be a job.


That's what I just said...


Yeah, I don't know what happened in my brain... sorry :-P


It's not just about it being trustworthy but about quality.

AI looks to be great for ideation and development. But it's not there yet if you value high-quality output.


> AI looks to be great for ideation and development.

Agreed. I don't see this current version of AI as a calculator, but as an adaptable companion to work ideas with. The calculator side will continue to get better, but there's already so much value in the ideation side. Obvious is augmenting writing. The other I have found is in planning. If we expand to images, there's things like Firefly.

Getting an email just right is so much easier/faster now. It's increased my productivity from that one thing alone. Coding is also very helpful, and while not always 100%, it presents ideas which then I can use to get to the solution.

This is the first time I've seen a fairly straight line to a Star Trek like computer assistant at some point in the future.


If you see specific aspects of a chat reply where quality is bad, tell it.

It will improve the answer.

Iterating this you can get way beyond the quality of the first answer. You have full control over the quality but you have to bring something to the collaboration.


That's part of the problem. People think that using their own human bias to alter the machine is a good thing, when it's clear that human bias leads to negative outcomes. You need a separate thing to remove the human input bias.

Quality is a complex thing that requires design, safeguards, process, as well as iteration. You can't design test suites or checklists with just AI, because you don't know if the AI will decide to override them because of some other instruction or signal. And how do you determine the quality of the signal anyway? The quality of input of each human varies.

It's "too adaptive" for QA. You would need a QA for the QA.


Human bias is a whole huge area apart from quality; not sure we want to get into that. I mean, is it human bias, or is it reality bias? E.g. Poland hasn't had any terrorist attacks (or very few?) - is that human bias, or is it reality?

I definitely don't treat its output with full trust, but I've been pleasantly surprised that even when I give it bad or incorrect guidance (unintentionally) it has caught my mistake, corrected me, and I've learned things.

For the QA case, I suppose what you're getting at is that if it can't be fully trusted, you might get incorrect QA results -- false negatives, false positives -- I'd agree but I think you just have to find an effective way to use the tool. Perhaps the obvious way most people would want to use the tool, is not in fact the best way to use the tool.

But just because the tool isn't delivering the perfection people hope for, doesn't mean there isn't some other way to use it that still catches (some) mistakes and adds value.


I've recently joined the tech org at a company that grows flowers, at a huge scale. The kinds of tech and product problems that they are dealing with are as far removed from the kinds of problems AI can help with as I can imagine. There may be places where generative AI might be able to optimize some things.

This is the real world. Manufacturing. Logistics. Managing people. Building tools for streamlining one tiny part of a workflow and getting people to use them effectively.

This latest client has opened up my eyes to the fact that I've been living in a bubble. I could not be more glad, as I was beginning to suspect that OpenAI is taking over the world.


I hear more about ChatGPT/AI from people outside my tech circle than within. Heck some of the people in my tech circle won't even play with it or try it out.

I don't think AI is the second coming of Christ like some pretend (it is over-hyped outside of tech and somewhat under-hyped inside of tech, IMHO), but it's impressive and extremely useful. I've used it many times to help point me in the right direction. I never take what it says as fact - I always "check its work" - but it often saves me considerable amounts of time by getting me on the right track, and then I can refine what it gave me. I don't use it for writing "real" code (aside from maybe a few small algorithms), but I do use it to help with certain debugging tasks if I think it will be useful. Also for things like "spit me out a shell script that takes a CSV and does X, Y, Z" it's incredibly useful. These are normally one-off tasks that I can do by hand or code if needed, but ChatGPT makes it way easier.
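
For example, a one-off like "print the emails for active rows and total their amounts" comes back as something like this (column names hypothetical):

    import csv
    import sys

    # Throwaway helper: filter a CSV on one column, total another.
    total = 0.0
    with open(sys.argv[1], newline="") as f:
        for row in csv.DictReader(f):
            if row["status"] == "active":
                total += float(row["amount"])
                print(row["email"])
    print(f"total: {total:.2f}")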

When it comes to writing, I'm often faced with "Blank Screen Syndrome" or a similar type of feeling and so getting something on the screen that I can then edit/revise/improve/fix is a huge boon to my productivity.


I live in Japan and have been surprised by how much coverage there has been of ChatGPT in the mass media here during the past few months. Popular awareness seems to be increasing quickly, too. Here are some results of a survey conducted by Line Research [1, in Japanese]:

Question: Do you know about ChatGPT? If yes, have you used it?

(n = 1056 in both March and June)

March 2023: know about it and have used it, 4.8%; know about it but haven't used it, 25.5%; don't know about it, 69.7%

June 2023: know about it and have used it, 15.2%; know about it but haven't used it, 55.8%; don't know about it, 29.1%

The survey also suggests that awareness is higher among younger people and that usage is higher among males.

Search results at Amazon Japan and Amazon USA show that many books are being published about ChatGPT in both Japanese and English [2, 3]. Quite a few Japanese magazines have had cover stories about it in recent months [4].

[1] https://markezine.jp/article/detail/42509

[2] https://www.amazon.co.jp/s?k=ChatGPT

[3] https://www.amazon.com/s?k=ChatGPT&ref=nav_bb_sb

[4] https://www.amazon.co.jp/s?k=ChatGPT&rh=n%3A13384021&__mk_ja...


There was a similar survey conducted by Pew in the US [1]. From the results in May, 58% of Americans have heard of ChatGPT.

[1] https://www.pewresearch.org/short-reads/2023/05/24/a-majorit...


In the late 90's I got a Master's degree, specializing in neural networks. This caused me to learn about the hype cycle in AI, and since then I have seen it continue. Remember Watson, IBM's AI technology that was going to drive its growth? IBM got lots of press by making computers that could play chess and Jeopardy, which seemed impressive to people at the time, but they never found a way to make money from it. "AI winter" is the term for the trough of the AI hype cycle.

Note that every "AI summer" prior to this one has produced something useful, just never all that world-changing compared to people's expectations. Most people think that if it can BS (excuse me, generate convincing text), then it can do lots of other jobs. Well, in previous AI summers, people thought that if it can play chess, or answer Jeopardy questions, it could do many other things that it turned out, it could not do (or could not do well enough).

For that matter, the ability to do math, at one time, was thought of as a sign of great intelligence. But, it turned out that computers could do math, long before they could do anything else. Our intuition about, "if it can do this, soon it will be able to do that", is not very good.

I have heard ChatGPT described as a better autosuggest, which sounds about right. It's not that autosuggest isn't useful, it can be very useful, but it's not a thing that is going to change the world, and the jobs which it will automate are neither numerous, nor very well paid even now.

If you're trying to pump that VC hype machine for $$, though, cryptocurrency is not going to work anymore, so they need something.


I think there has been so much hyped-up stuff in the tech world that it's trained a lot of people to point at everything and call it part of the hype cycle.

But this causes people to fail to notice that there are many things in tech that aren't just hype. I mean, literally think about it: the internet wasn't just hype, the smartphone wasn't just hype. There are millions of things that weren't just hype.

>I have heard ChatGPT described as a better autosuggest, which sounds about right. It's not that autosuggest isn't useful, it can be very useful, but it's not a thing that is going to change the world, and the jobs which it will automate are neither numerous, nor very well paid even now.

This is a poor characterization. ChatGPT has the capability of answering extremely complex questions with novel answers that are completely indistinguishable from human answers. And remember, many of these answers are novel, meaning they aren't just copies of answers lifted from somewhere.

There are of course huge problems with our ability to control ChatGPT to give us the correct answers consistently, but the fact that it can even do the above 50% of the time is a feat that moves the needle far beyond a mere "autosuggest". All you need to do is increase that 50% rate and suddenly it can autosuggest you out of an entire career. Cross your fingers and hope token prediction is just a technological dead end and that we can't really raise the correctness rate past 50%. In many projects, getting to 50% is often the hard part and getting to 100% could be easier.


Science went from something where you needed to have a physical library of journals into something much easier, where scientists the world over can now parse these repositories online. This massive increase in efficiency did not result in fewer scientists, it resulted in a lot more scientists as it became easier for more people to do research thanks to the internet.


Farming went from something where you manually had to plant and take care of crops to where a machine now handles volumes of work. In feudal society basically everyone farmed; now few people do.

So you gave me an example where technology caused the employment of more people. I gave you an example of where technology caused the employment of less people. Does either example have anything to do with what chatGPT will do to employment? Likely not.


If you're going to use it for anything where accuracy is important, you're going to need a human in the loop, verifying each thing ChatGPT comes up with. That means it isn't going to replace nearly as much as you seem to believe (or, unfortunately, that it will replace people, but not for very long), because it isn't going to meaningfully increase productivity in any application which requires accuracy.


>If you're going to use it for anything where accuracy is important, you're going to need a human in the loop,

Having a human in the loop doesn't mean nobody is replaced.

If I have a job that requires 20 people to respond to emails all day, I can have AI do the job with 1 person in the loop. That's 19 people replaced.

The other thing you need to think about is the trendline. Sure, the AI requires humans in the loop now, but will it in the future? Just one year back a tool like ChatGPT didn't exist. Now it exists. What's the next year going to bring? Most likely a tool better than ChatGPT. If "better" keeps happening every year, inevitably there will be a point where the AI doesn't need a human in the loop.


It's funny you say that because ChatGPT knows how to play chess although it wasn't explicitly trained for that[0]. The "if it can do this, soon it will be able to do that" is actually becoming real.

[0] https://villekuosmanen.medium.com/i-played-chess-against-cha...


The chess-playing AI I was referring to long predates ChatGPT; I believe it was called Deep Blue.

By the way, ChatGPT, as of May of this year at least, cannot play tic-tac-toe, but it thinks it knows how: https://www.aiweirdness.com/optimum-tic-tac-toe/


> maybe AI mainstream adoption will take longer than we anticipate

I mean, I'm in "tech", and I don't anticipate it happening soon. People want something they can trust, and current solutions are nowhere near that.


It is already happening [1][2][3][4]. People want something that makes their jobs easier, not necessarily something they can trust.

Not all tasks need to be 100% accurate, and, to be honest, people are not known for their trustworthiness either.

[1] https://news.ycombinator.com/item?id=36097900

[2] https://futurism.com/neoscope/microsoft-doctors-chatgpt-pati...

[3] https://github.com/features/copilot

[4] https://www.electropages.com/blog/2023/06/researchers-demons...


Students want someone to do writing for them, and they only have to do a trivial amount of vetting.

Now you can hand Lorem Ipsum in for homework!


That probably speaks more to the quality of our homework assignments than the quality of the student.

In this day and age, why is our education still based on rote regurgitation?


I agree. They should be chatting to ChatGPT instead.


They would probably learn more, honestly.


There was a fun example from a history lecturer a while back. Did you know that the Romans developed special anti-elephant crossbows to fight Hannibal? Because ChatGPT knows that. I think people who rely on this sort of thing for essay writing may be in for a rude awakening, especially when they have to write a thesis.


I whole-heartedly disagree, but on a more technical note: "echo chamber" is a pejorative that implies a bunch of people "sniffing their own farts and enjoying it" - more accurately, a group of people believing in a falsehood because of groupthink.

To expand, and to tell a very recent story: for work I used ChatGPT to help me write a 500-line bash script to automate a bunch of stuff. It took me around 1 day versus the 5 it would have taken me going the google/ddg/Stack Overflow route of slowly crawling through outdated content and SEO noise to find the signal.

It worked, completely!

Solely from that experience, I'm convinced that it's not just farts and kool-aid. To go one step further, even, I'd say that anybody who doesn't at least have a cursory awareness of AI is in fact the one in the "echo chamber", isolated from the possible.

It's not the do-all solution, of course, but in certain scenarios, it's quite obviously, trivially demonstrably, revolutionary.


I know a lot of people who are using ChatGPT who aren't in tech.

My spouse works in early childhood education and they use ChatGPT routinely for low-value boilerplate stuff (social media posts that no one reads, etc).

A relative is in commercial real-estate management, and they also use ChatGPT routinely (in fact they started using it before I did).

So I don't think it's an echo chamber.


My partner runs an ecommerce business and her team uses ChatGPT dozens of times a day.

Everything from writing emails to suppliers, correcting grammar, responding to customers, brainstorming new product ideas, explaining contracts etc.

Whereas for me, I can't find a use for it. Even as a coding assistant, I find that I spend more time trying to understand/correct what it did than if I just wrote it myself.


The majority of my usage of ChatGPT as a dev has been to synthesize examples of APIs/tools/etc in usage when those aren't easy to find and/or when documentation is sparse/scarce.


Same for me so far: I synthesise or summarise examples of tools I'm not familiar with, to then explore the documentation.

Lately I've been using it quite a bit for Arduino on an ESP32 board. I had toyed around with Arduino previously, but since I got this board for a small hobby project it's been great to ask ChatGPT to generate some examples of the kind of data I want to read from a few sensors. Even when it hallucinates something wrong, it's been helpful for my learning.

Another way I've been using ChatGPT is as a personal tutor to correct me when learning foreign languages. It's pretty easy to create a prompt asking ChatGPT to have a conversation with me in a given language while correcting any mistakes it believes I made. I've been getting feedback on these corrections from some native speakers, and so far haven't had any case of "this is absurd and wrong" - unsure why it works so well to correct my broken grammar, but it does, and without fault.
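
The prompt is something along these lines (paraphrased from memory):

    Let's have a casual conversation in French. After each of my
    messages, list any grammar or vocabulary mistakes I made along
    with the corrected sentence, then reply in French and keep the
    conversation going.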


I have noticed that there is a decent amount of regional (esp. Salvadoran) slang that native Spanish speakers I know use which it doesn't recognize. This isn't a huge problem, and it is still incredibly useful, as learning a few random slang words isn't exactly a challenge.


I'm a software engineer and I use it every day. I haven't gone to Stack Overflow in months. I use it like interactive documentation. I don't even have to leave the terminal to use it, as I have a CLI tool called shell_gpt.
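
Typical usage looks something like this (flags from memory - double-check against the project's README):

    $ sgpt "awk one-liner to sum the third column of a CSV"
    $ sgpt --shell "find files modified in the last 24 hours"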


As a supplier, I would be furious if one of my partners was sending AI emails to me.


I've found that any non-techie I know who uses ChatGPT uses it in a similar way, and for quite a bit of stuff. The ones that don't usually haven't really used it much, if at all.

But I still feel most of them are in the 'google but nicer' stage.


Those are people talking to echo chambers.


Of course we are.

But AI tools are mainstream now. You can hear them mentioned on TV, written about online and in traditional media, see them discussed on social media in non-tech circles, etc.

So I'd say your friend is likely not well informed. AI tools are in the same arena as cryptocurrencies, perhaps slightly less widely known. Most informed people have heard of them, but not many outside of tech circles actually have any experience with them, and even fewer understand how they work.

This is the natural progression of technology[1]. We've seen it happen with computers, the internet, the web, cell phones, etc.

[1]: https://en.wikipedia.org/wiki/Technology_adoption_life_cycle


There are 8 billion people on this planet. No matter how big or well known anything is, the people who know or care about it are a small fraction of the population. I'm not sure that's what "echo chamber" is intended to represent.

People are unavoidably ignorant of vast swathes of “probably relevant to them” things. No one has time, inclination, or actual ability to keep up with it all. That reinforces the above but I am again doubtful that it is an “echo chamber” per se.

Hacker News regulars, especially those engaged with comments, are operating within an echo chamber, sure.

Outside of a couple of 70+ folks and some bikers, everyone I know is at least conversationally aware of recent “AI” developments so it’s most likely a function of your particular uselessly small sample size. :)


Teachers around me are freaked out about the end of homework; they can't stop talking about it. And their students, anywhere between 8 and 18, are generating every assignment they can using ChatGPT (and then manually or DeepL-translating it into our native language, which is not English and for which ChatGPT is still ridiculously funny). So it's not just tech.


Homework never had much value anyway; it's nice to see students fighting back.


Hard disagree on that, at least in the educational system I'm familiar with. The meagre time allocated in school schedules allows the teachers to barely introduce the students to concepts and techniques, and actual skill needs to be developed on your own.

(I can agree with a weaker version of that statement: homework that can be done by ChatGPT doesn't have any value and should be done away with.)


You don’t need homework to do that. I took classes in college where your entire grade was the sum of the two midterms and final. It worked great. No homework, so I was able to study in a way that worked for me.


No, we're in a hype bubble. This one includes several meme complexes, the largest of which is shouting "LLMs/GNNs are AI!"

By applying the label AI, they bring in the connotations of all the stories we have told ourselves about djinn, golems, robots, HAL-9000, Skynet, and (not frequently enough) the Sirius Cybernetics Corporation.


When it comes to content generation I think ChatGPT is a game changer; more and more publications will shift most of their output to LLMs. It's quite terrible for the consumer in the long term. For example, I was reading news a few days ago about National Geographic gutting its writing staff. Get ready for a lot of mindless content, sometimes vetted by humans, sometimes not.

This will affect us all, in tech or outside tech.


Just as companies use layoffs at other companies as a smokescreen to do their own layoffs, "we're using AI" has become an excuse to cut back on expensive writers.

In fact, I'd bet that the opposite will happen WRT AI-generated content: individual writers with personality will become even more in-demand. ChatGPT isn't going to fake a "personality" anytime soon.


Yeah, as a professional writer who is somewhat acclaimed (my domain has a lot of karma here), I'm not worried about ChatGPT. It's going to be a rough transition, but I feel that genuinely having the skills not only means I can use the tools better (ex: I have a conversation snippet character that is a container for LLM-generated stuff), but also use them for the really important things: LinkedIn post writing. It works so well at LinkedIn posts, which is really sad. https://www.linkedin.com/feed/update/urn:li:activity:7080465...


The expectations on what you produce will change, though; there will be pressure to create more for less. Not saying all content creators will lose their jobs, but many will, and those remaining will have to do more work. Oh well, with the provided tools, but still…


> individual writers with personality will become even more in-demand.

The demand may be there, but discoverability will be very difficult with so many people already in that arena. The Awl and Splinter News were online publications made up of ex-Gawker people, exactly the kind of independent voices you'd think people would want to read. And people did read them, just not in enough volume for those sites to ever make money. And now they're both dead.


I'm not sure why ex-Gawker people are "exactly the kind of independent voice you'd think people would want to read." I certainly wouldn't describe them that way.

I was thinking more along the lines of Substack, but with a larger dose of "influencer" personality-driven brands.


Ex-Gawker meaning people who were tired of trying to write proper articles while also toeing the gossip blog line. The concept is no different from any other person who writes for a publication but wants editorial independence.


I feel like most content generation has been going toward lowest common denominator for a while.


I recently went to Cleveland to visit a friend who has nothing to do with technology. He and his girlfriend had heard of ChatGPT, and she uses it at work. They work in chemical engineering and health. I took that as a sign that it has gone mainstream.


There's a recent Pew article you might like.

>Overall, 18% of U.S. adults have heard a lot about ChatGPT, while 39% have heard a little and 42% have heard nothing at all.

>However, few U.S. adults have themselves used ChatGPT for any purpose. Just 14% of all U.S. adults say they have used it for entertainment, to learn something new, or for their work.

https://www.pewresearch.org/short-reads/2023/05/24/a-majorit...


Wait, 14% of US adults have used a product released 7 months ago?

And this is considered "few adults" ?!


People outside the bubble aren't declaring AI to be literally as dangerous as nuclear weapons and I think anyone that thinks this with any gravity needs to take a very deep time out to step out of the bubble, calm down and think about their thought processes. The "AI is as dangerous as nuclear weapons" petition from the tech scene was ridiculous and frankly kind of embarrassing.

I wrote up my thoughts about it last month https://kyledrake.com/writings/ai

Christopher Nolan just finished a movie profile of J Robert Oppenheimer, has presumably spent a lot of time thinking about nuclear weapons, and has similar lesser-concerns about AI https://www.wired.com/story/christopher-nolan-oppenheimer-ai...


I think it's worse than that: it's a bubble. There, I've said it. So much money is being poured into AI right now, and things are changing so fast - products are being deployed, then matched, then overrun weekly, all with no regard to the law, safety, understanding any of the tech that's just been built, or even building a real business around it - that it's become absolute nonsense right now.

Two anecdotes:

1. I saw a posting for a prompt engineer which had virtually no requirements beyond some passing familiarity with LLMs, whose job it was to think up clever prompts and archive them in a library. Salary: $350k+.

2. I heard a real conversation between two highly trained technical folks about using an LLM to do a simple data transform from one wire format to another. Yes, let's use a cluster of GPUs and some faulty hacked-together prompt to transform one well-specified structured format into another at speeds that approach molasses. Nobody had a clue as to how much the runtime costs would be. The solution to it being slow? Add more clusters. Absolute idiocy.
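
For reference, the sane version of that kind of transform is a few lines of boring, deterministic code - a sketch with hypothetical field names:

    import csv
    import json
    import sys

    # JSON lines in, CSV out. No GPU cluster required.
    fields = ["id", "amount", "currency"]
    writer = csv.DictWriter(sys.stdout, fieldnames=fields)
    writer.writeheader()
    for line in sys.stdin:
        record = json.loads(line)
        writer.writerow({key: record[key] for key in fields})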

We're burning money on this stuff like it's mid-80s Japan spending on slightly different variations of pocket calculators and American real estate. Meanwhile, we're exposing Kenyan workers to some of the worst filth humanity can produce in an effort to keep one of these AIs from producing child gore porn, because it's illegal to pay first-world people $2/hr to do the same job - and there's not a psychotherapist to be found anywhere in the chain.

And then it's being pushed at the regular consumer as if it's some kind of knowledge oracle to replace the "horrors" of the search box that:

a) won't only know what the state of the world was when it was trained 2 years ago

b) won't produce a worthless hallucinated answer that could send somebody off to take a poison for a cold

This shit is terrible and I just used it to give me advice on updating my resume a few weeks ago for a job in the field.

Fuck it.


<< heard a real conversation between two highly trained technical folks around using an LLM to do a simple data transform from one wire format to another. Yes, let's use a cluster of GPUs and some faulty hacked together prompt to transform a well written structured format to another at speeds that approach molasses. Nobody had a clue as to how much the runtime costs of this would be. The solution to it being slow? Add more clusters. -- absolute idiocy.

I... can totally hear it in my mind... including the ISO 20022 requirements happy talk.

My initial take on the posed question is that it isn't just tech. Business orgs appear to be jumping in with both feet (thankfully, my little corner of the universe seems more conservative for now). Still, that does not disprove your statement that we are on a heavy hype train now.

edit: I do take issue with the wire format being called well written (or structured) at this point. There is a reason SWIFT had to back down a little on the aggressive timeline, for example.


Here is the question: it took almost six years from the "Attention Is All You Need" paper for us to get GPT-4. I've been using the GPT-4 API for generating structured content. Right now it's a pain, but you can get it to work with some effort.

Do you think this doesn't get 100% better in the next few years with the billions of dollars that are pouring in? Because a GPT-4 that is 100% better at generating useful content is a game changer. Now what if instead of a 2x improvement we see a 3x, 4x, 5x or even 10x improvement in the capability of the technology?
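
The usual trick for the structured-content part is to pin down a schema in the prompt and validate on the way out - a sketch against the 2023-era openai Python package (pre-1.0 API); the schema and retry logic here are made up for illustration:

    import json
    import openai  # assumes openai.api_key is already set

    def get_structured(prompt: str) -> dict:
        # Ask for JSON only; retry once if the model drifts into prose.
        messages = [
            {"role": "system", "content":
             'Reply with only a JSON object of the form '
             '{"title": str, "tags": [str]}. No other text.'},
            {"role": "user", "content": prompt},
        ]
        for _ in range(2):
            resp = openai.ChatCompletion.create(
                model="gpt-4", messages=messages, temperature=0)
            text = resp.choices[0].message.content
            try:
                return json.loads(text)
            except ValueError:
                messages.append({"role": "assistant", "content": text})
                messages.append({"role": "user",
                                 "content": "That was not valid JSON. JSON only."})
        raise RuntimeError("model never produced valid JSON")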

All the people comparing the hype around AI with crypto have good pattern matching without any judgement. I personally never got into crypto. I'm very into AI.


No. Quite the opposite. Everyone else is in an echo chamber telling them that AI is a cool fad, a toy that will never fully replace people, etc., when in reality AI is the most disruptive technology since the Internet, and the most dangerous technology since the atomic bomb.

I can't stop shaking my head whenever I read any article on AI written by non-tech journalists (and even many by tech journalists). AI is vastly more dangerous and more urgent than climate change, than Russia and China, than literally any other hot topic today, and it's being treated like a combination of a science fiction story and an entertainment tool.


> AI is vastly more dangerous and more urgent than climate change, than Russia

No it isn't, and I say that as a user and Integrator of ML tech.

Climate Change: Literally our planetary habitat becoming uninhabitable for our species.

Russia: A country with one of the largest nuclear arsenals literally threatening a large portion of the world with nuclear annihilation.

Excuse me if the prospect of changes in the economic landscape for white-collar jobs doesn't look particularly frightening compared to such problems. Especially since we live in a day and age where two entire generations have lived most of their lives in almost constant economic upheaval anyway.

As for all the AI doomerism that's flying around the net: as long as no one can give me a precise, quantifiable definition of "general intelligence" - one that doesn't amount to pointing at ourselves - and a method to measure how far AI is from it, I will work under the assumption confirmed by what is measurable and observable: that what we have are still stochastic inference engines.


Russia isn't "literally threatening a large portion of the world with nuclear annihilation". Any 'threat' they have issued has been in response to 'threats' issued by 'our' side... including at the time UK PM Liz Truss being willing to push the nuclear button against Russia. If that's not a threat then neither is Russia saying that they'll be willing to use nukes too.

Our politicians together with our sycophantic media and their weapons salesmen talking heads really do spread the most egregious disinformation throughout every wartime situation we are involved in, by proxy or otherwise.

Glass houses... stones...


> Any 'threat' they have issued has been in response to 'threats' issued by 'our' side

Then please, link me the relevant statements. Liz Truss's (who btw isn't Britain's PM any more) remarks were made in late August 2022 [1]. The Russian nuclear sabre-rattling started in February 2022 [2].

So who exactly has threatened Russia with nuclear weapons to elicit these responses? Helping a sovereign nation defend itself against an invasion, protecting its land and people, is not a threat. Offering a sovereign nation membership in a military pact is also not a threat.

[1]: https://www.wsws.org/en/articles/2022/08/26/jfvn-a26.html

[2]: https://en.wikipedia.org/wiki/Nuclear_threats_during_the_Rus...


AGI isn't going to kill us.

But enough unemployment caused by the next wave of automation is sure going to cause civil unrest.

Look at the Chartists, the Luddites, and the saboteurs. Some of them were weavers who up until that point had been in high society, running parts of countries through the guild system. Then over a couple of years the bottom fell out, and they were cast into the mills like the unlanded labourers.

That was not a smooth transition.

The people who claim "oh, there will be new jobs": I mean, sure, there probably will be, but they forget to mention the important qualifier: "eventually".


> AI is vastly more dangerous and more urgent than climate change, than Russia and China, than literally any other hot topic today

No, it's not. And most (not quite all; there are some genuine nutballs) of the people selling that idea are selling it to push a political agenda attached to their financial interest. Mostly, in AI, that means either pushing AI danger to advance competition-limiting regulation, or pushing the kind of AI danger that is not actually imminent, in hyperbolic language, to distract from the real and present issues with AI; sometimes both.


Not really.

AI -might- kill us if there is some quantum leap that makes it sentient.

Climate change -will- kill us if we do nothing.

For now, one is simply not like the other. That may change, but there is no guarantee we ever breach that barrier.


> Climate change -will- kill us if we do nothing.

Horseshit. There is not a single scientific model that predicts human extinction from climate change. Parts of humanity, yes. All of humanity, no chance.

There are plenty of models that predict human extinction from AGI.


I don't think it's likely, but climate change could easily wipe out humanity by exacerbating pre-existing geopolitical tensions. For instance, Pakistan and India famously share a river, and the treaty that governs its water rights was written with the assumption that the river wouldn't reduce in output. A sufficiently severe drought could cause a war between India and Pakistan, and both countries have nukes.

You could dispute that WW3 would cause human extinction, and while I'm not remotely certain of it I think that WW3 could cause extinction, if there's a sufficient combination of 1) climate collapse and 2) worldwide economic collapse that makes high-tech systems impossible to sustain.


Climate change will kill millions in the next ~40-60 years, and if course isn't corrected, hundreds of millions will follow. Seems like it will do a pretty damn good job of killing us even if it fails to kill all of us.

I agree AGI would be horrendously dangerous and if achieved has a higher chance of complete extinction. However, we don't have AGI and it's still not clear we ever will.


These idiots will believe anything if you put “climate change” in front of it…

“Let’s block out the sun, it will be good for global warming,” says the billionaire, as plants and animals freeze and die. WE NEED THE SUN. THEY ARE TRYING TO KILL US OFF. If this is not self-evident then you are already dead.


Just met a rando marketing college student and she was using 3.5 for business homework. Wouldn't have been someone I expected to be an early adopter.

Also, the high school student I am mentoring has told me that he knows people feeding ChatGPT their essay-writing styles and asking it to write about other topics in the same style.

This is in the Midwest.


What's unexpected here? Marketing is a discipline about bullshitting people into overpaying for products and services.

The discipline of bullshitting people itself consists mostly of bullshit, poorly taught by outdated teachers on outdated courses.

It's pretty common for students to bullshit their clueless-dinosaur teachers with modern technologies that make bullshit look legit.


I work for a Fortune 100 company. Recently an email was sent to all 100,000 employees saying that nobody was allowed to use DALL-E 2, ChatGPT, Codex, Stable Diffusion, Midjourney, Microsoft’s Copilot, GitHub Copilot, etc., due to concerns that our company might end up illegally using other people’s IP, or that the tools might get hold of our IP through our use of them and share it with others. I kind of wondered at the time how many non-tech employees read the email and had no idea what it was talking about.


My aunt and uncle, whose main hobbies are drinking beer and watching football, pulled me aside at their daughter's wedding ceremony to tell me how much they love ChatGPT.

My friend is a librarian at a high school. She tells me teachers are worried about what ChatGPT means for take-home essays. (I'm guessing 2023 is the year these stop getting assigned)

Everyone in the world with internet now has access to a personal research assistant. This thing gives better medical advice than doctors!! Give it 3 years and let's see who hasn't heard of ChatGPT.


Definitely mixed opinions in this discussion. I am actually curious whether the hard skeptics in the tech circle gave ChatGPT and other tools a fair shot. I was into AI/DL before the rise of ChatGPT and thought it was way overhyped and too unreliable at first, so I only played with a few toy examples. Across the last few months, though, I have increasingly found greater uses for LLMs.

Not saying it isn't still overhyped, but maybe the AI skeptics among the tech community are in their own echo chamber too?


Hacker News is the real echo chamber. Color me surprised that the same system of posts and comments that turned Reddit into the ** show it is today would also turn HN into the same thing.

People who aren't in tech or aren't informed don't know about the latest development in tech, same reason you don't know about the latest development in healthcare or construction...

But, and I'll only speak for myself, GPT allowed me to add more features to my side project in 2 months than I did in the previous 3 years... If you know how to use it, it becomes a great tool; the 16k-context GPT-3.5 and the 32k-token context in GPT-4 (when necessary, because it's expensive) are really good, can do a lot, and can infer context and assume things about your project, etc.

You shouldn't be a contrarian just to be a contrarian. The same thing happened with the blockchain: HN is still shitting on it in 2023, but I just transferred some crypto to my sister in another country instantly, where doing it through the usual financial system would involve a lot more headache and usually take a couple of days... and all of this on the Ethereum proof-of-stake blockchain, i.e. not as power-hungry as it was. This is objective value it adds.

Now the contrarians usually see posts written by 90-IQ "journalists" (bots) that say "AI will make humans irrelevant" or "blockchain will kill all banks", and they start responding to this but also miss the actual value these things add. Of course ChatGPT will not build you a house; that's obvious, and only another bot would argue for or against it. But it can help you become a lot more efficient.


My family: not tech savvy, haven't mentioned AI once to me, except my mother-in-law who asked if I'd heard of it and wanted to (honestly) know what I thought.

My friends: very, very much in tech, we have a channel in our discord where we just laugh at AI/ML/Musk/Crypto because it's so stupid. It can't even fucking add two numbers. It doesn't actually help with internal, brownfield projects with complex business logic and custom internal integrations. It can't summarize legal documents or answer technical questions without completely making crap up. It's just sparkling auto-complete.

My work: I've been put on an LLM project. The purpose? Dunno. The goal? Dunno. Our VPs are on the hype train and I'm along for the ride as long as my paycheck keeps clearing (and interviewing in the meantime). They're literally taking tech and trying to find a use for it. It's just the blockchain hype all over again.


I am finding it pretty useful. It is not a magic wand that will do all the work right away. But if you keep on chatting with it and providing it feedback it does what you want.

I no longer search Google for programming questions. 8 times out of 10, GPT-4 gives me a code snippet that I can copy-paste without any modification. It is like Stack Overflow on steroids. I can also discuss different system design tradeoffs with it.


My manager is going on and on about how ChatGPT is the fastest-growing product ever in history! And it's true, ChatGPT claims to have 100M active users.

But how active are they? Have 100M users briefly dallied with it, asking a few banal questions to pick apart the answers, or are there 100M dedicated repeat users who are using it heavily and keep coming back?


Try talking to students and you'll get a different answer. I asked a room of a dozen eighteen-year-olds if anyone did not use it regularly, and no one raised their hand. They use it as a better Google to get understandable answers, to write prose/code for them, and sometimes even as a coach to bounce ideas off.


> but maybe AI mainstream adoption will take longer than we anticipate

It will take longer than some "evangelists" anticipate, but unlike a lot of tech hype cycles (like web3), this one is different for 3 reasons:

    a) It's firmly in the public mindspace, including mass media
    b) It has already proven its usefulness and applicability to real-world problems
    c) Politics is taking it very seriously
There are people who don't use it, for various reasons, but it is increasingly hard to never have heard of it.

A large part of that drive is due to simple accessibility: making ChatGPT a convenient and simple webapp that appeals to non-techies was a brilliant move, and Microsoft driving integration into its product suite will further adoption as well.


In my circle of friends, I'm the only one working in technology. But ChatGPT is something that everyone has tried and has been amazed by. For my non-tech friends, it's been mostly used for playing around, generating poems, etc - not really for productivity purposes.


What I've noticed is that interest in ChatGPT is spotty. Within any given discipline, some people are excited and are using it. Others are ambivalent. I don't think either group of people is getting more stuff done, discovering more useful things, etc.

I don't think this is unusual. Adoption of iPhones and GUI-based computers was that way too. People who had those things were by definition more active on them, but not necessarily getting more done.

There could be some self selection going on here. Someone who was really fluent at the old way might not see such a boon from the new way.

I confess to being among the ambivalent, but I see it being used around me, chatted with people about it etc., so I'm not ignorant about it.


I just want to say that I was overwhelmed by the engagement this post got. Thanks for everybody who took time to respond / share their thoughts & feedback.

I've been in the process of moving so I haven't read through many of the responses, but it seems as though there was plenty of interaction between others which is cool!

I'll definitely review everything I can over the next week and follow up, I'm genuinely interested in understanding how others perceive this situation as well.


In my circle, the non-techies use it as well for various writing tasks. Thing is: the non-techies seem to be even more impressed by the hype and don't realize when ChatGPT makes stuff up.

I talked to a blue-collar worker (has his own business) the other day and he was in some Telegram group that "leveraged AI for marketing". You ever wondered who the target audience for these thin bullshit AI-marketer wrappers on top of ChatGPT is? Or for the courses and "mastermind" groups on how to write marketing prompts? Apparently, blue-collar workers with a small business who want to save on actual marketing and don't have the expertise to realize the downsides.


> Apparently, blue collar workers with a small business who want to save on actual marketing and don't have the expertise to realize the downsides.

This might not be a bad market; if you're not experienced with writing things yourself and just need some text and don't want to find/pay a writer, it will do an acceptable job. It's like upgrading from a badly handpainted sign to a nicely printed one.

Whether this is going to generate enough revenue for OpenAI is a different question.


I'd argue that AI insiders are also out of touch with the wider world and are missing opportunities to develop tools that'd be useful for everyone.

For instance, hallucinations. They're a function of LLMs. But they're also something that everyday people have to deal with from each other in the miasma of the post-truth world.

What an opportunity, within the artificial intelligence field, to think of these tools not just as automation and fact-finding, but as a way to teach people how to avoid hallucinations in their own lives:

    Personal finance
    Grammar, rhetoric, logic, reason
    Information literacy
    Media literacy

The list goes on...


LLMs are not an opportunity to teach humans how to avoid hallucinations because they don't work even the slightest bit like human brains and we don't even know how to make them stop hallucinating in the first place.


Ahh, but we know that both humans and LLMs 'hallucinate'. The study of the latter potentially provides insight into the former; after all, isn't the core question we're asking in all of these discussions 'what is intelligence?'


> The study of the latter potentially provides insight into the former

It absolutely does not.

Using the term "hallucinate" for LLMs has nothing to do with the underlying cause or process. It's a metaphor that I feel was specifically chosen by the AI industry to avoid terms like "lying" or "making a mistake".

LLMs hallucinate because they're next-word prediction engines, not logical engines. They have no conceptual understanding, which is why their hallucinations range from small factual errors to bizarre lies that are obviously false.

Humans hallucinate because our brains are reality-simulation engines that evolved to model the perception of events that aren't currently happening (remembering the past, imagining the future, etc.) and sometimes we lose our autonomy over our brains. The underlying process has nothing to do with predicting the next word we're going to say.

In fact, there's an even more fundamental difference: human hallucinations aren't necessarily tied to speech. We have thoughts that don't require output. There's nothing in LLMs that doesn't "come out" as output.


Note my use of scare quotes around 'hallucinate'

The user of an LLM still gets a vote, right or wrong. If we have a good understanding of how the human mind works - empathy - we can gain a better understanding of how to clarify the very semantic confusion of which you speak!

Here's another example that just popped up in my feed: https://kottke.org/23/07/will-ai-change-our-memories


Very much so. But it's also a hype wave. It's the successor to the big data wave, the ML wave, and countless others.


Seems like the successor to the crypto craze, after that crashed. I'm expecting a wave of grifters fairly soon.


You haven't seen them already? The "AI Lawyer", all of the people trying to sell LLMs as search engines, and just generally hundreds of projects that are outright dangerous uses of LLMs but seem like they might be feasible.


I see that comparison pop up quite often, and I really don't see the connection.

Cryptocurrencies have been trying since 2009 to find some kind of problem to which they are supposedly the solution.

The problems that generative ML models can solve are pretty clear.


What I love is the same guys that were pushing crypto are now big into AI. The true entrepreneurs.


Are people in tech inside an AI echo chamber?

I don't think so. Anecdotally, I have had numerous non-tech friends ask me about AI. The first thing they always seem to ask is whether it will harm people, to which I explain that it's just a tool, more specifically a big-data chatbot that can be manipulated just like the social media algorithms but gives more confident and realistic-sounding answers, using language models to mimic human speech. The risk depends on which entities tune it, and on the fact that it can't or won't show its work. A tool will do what a tool can do. The intentions of the tool's operators and users determine what the tool will be doing.

Some of my non-tech friends are trying to find ways the tool can make them money, and I have no doubt they will come up with clever uses for it until the walled garden around these tools becomes too cost-prohibitive for the average person to afford. After the masses have helped tune the tools, I suspect they will be rented exclusively to large corporations and government entities. I predict that prior to the walls going up, the public interfaces may appear to have lower-quality results so that there isn't an uproar when the interface becomes cost-prohibitive. This is why I warn them not to base an entire business on this service but rather to augment something with it.


> we’re all quite aware that AI will play an increasing role in our lives (in & out of the office), but maybe AI mainstream adoption will take longer than we anticipate. What do you think?

Sadly I think you've contradicted yourself a small bit here, while also stumbling across the exact reason AI mainstream adoption will go quicker than anticipated: it'll happen without mainstream awareness.

I work in a security role and we're currently trying to do risk assessments of AI adoption by tech workers, and integration of AI APIs/services into our products, and it's not even a case of "how bad will it be", it's much more "how bad is it". AI is being used a lot by individuals in the workplace with very little discourse or auditing of that usage.

Even outside of tech, someone somewhere in IT administration was already seeing enough daily AI usage to warrant drafting this official guidance, for quite a non-tech-industry audience: https://twitter.com/marcidale/status/1645972869393047552


I think the AI space is getting broad attention from the public, largely due to generative AI. However, I think this is a bit of a double-edged sword. AI has a trust problem - people know it lies to you. It lies to me all the time. Convincingly! This is hurting people's initial impressions and I suspect that we're going to lose traction for a while before we once again re-gain it as the trust problem is solved. Time will tell.


No it's not an echo chamber. There are certainly people who aren't up to date with it but the entire world, in general, knows what's going on.

The echo chamber in tech is people thinking they're in an echo chamber. I mean, come on: 90% of people in tech have never built an LLM in their lives. They just read some layman's article on it like everyone else and harp on how it's not an AI, it's just a "token predictor", and other generic arguments like that.

What I'm saying is that MOST people in tech don't do AI, and therefore their perspective on LLMs is equivalent to that of someone completely outside of tech. Every typical software engineer thinks that because they took a small course on ML or read a little bit about ChatGPT, that puts them on a pedestal above an average non-tech worker... well, I hate to break it to you: the non-tech worker can look up those articles too.

They can also play with ChatGPT extensively, which doesn't require any tech skills, and literally see for themselves what a game changer the technology is.


Given ChatGPT reached 100M users shortly after launch, it must have a broader reach than just techies.

There are regular articles on it in mainstream news. Most of the people I know have tried it. Some of them can't see it will be useful, and some of them already find it useful. Some of them are techies and some aren't. Some techies are in the set that don't think it is useful.


Unless your work involves coding or digital content creation, there doesn’t seem to be a lot of real world utility for the tech right now.


My parents (70 year olds) asked me about ChatGPT a while ago, I think people who haven't heard of it just don't follow the news.


A few months ago I heard some mention of ChatGPT on standard afternoon radio, so I don't think we're in an echo chamber. We probably hear about it far more than others, but that should be obvious: I'm aware of things outside this industry, but almost certainly with nowhere near the awareness of those who work in it.


The difference from the crypto wave was the crypto wave didn't surface too many applications outside the crypto-sphere.

In this case it seems to be kind of the opposite: enterprises are first off the bat, with existing products getting easier to use by integrating LLMs into the workflow.

ChatGPT also seems, to a large extent, to be moving search terms away from Google. I would love to see trends for google.com vs ChatGPT vs Bing over a period of time. Search definitely seems to be changing: we want summaries and content rather than a bunch of sponsored ads and websites where the context has to be derived from the search.

I am also interested in whether the same is happening with Stack Overflow: from search on SO to ChatGPT as a code generator.

For mainstream adoption, the adoption will be driven by the everyday software and applications we use. We are still in the shovels and picks part of the world, yet to see native AI consumer applications emerge (barring ChatGPT).


I think we are in all kinds of echo chambers, but "gen AI" seemed to hit escape velocity faster than anything I've ever seen. I've heard of students using it to write essays, marketing folks writing content, creatives using midjourney and lawyers/doctors thinking about how to integrate it into their workflows.


No. Long story...

Saw an old friend of mine recently. He's smart (majored in math) but spent many years running a landscaping company. He was showing me a spreadsheet he made to show musical scales and the position of notes on a guitar. You could select a picking style and it would change the layout. Very complicated cell formulas, no code. I asked how he ever did all that. He said ChatGPT saved him countless hours. He praised it highly for the conversational style, versus Google just giving links to generic information, and for the ability to refine your request to get better answers. It can also give formulas in some instances, but he tests them. He's taken up using it for other things as well.

To be honest, though, he could have been a deeply technical person; he's only interested in such things to the extent they can solve real-world problems. Not sure how many people fall into that category.


No.

Technically, the answer is yes and no, but with the context here, no, we aren't in an echo chamber. People who are not in tech are routinely using ChatGPT and AI functionality. I know this because people who I know aren't in tech have told me how they are using it, and how many other people in their industry are using it.

The key here is what industry are they in, and can they make use of what ChatGPT provides.

And it's not just ChatGPT, but AI stuff in general.

The difference between this and crypto and web3 is that crypto and web3 required a LOT of explaining on how it could be useful. AI didn't, and simply said here is the input, and here is the output. That's it. They didn't suggest value. The value was easily apparent. It was tangible. Something that was real.

Does this mean every industry is talking about AI? I'm sure the answer is no. But every industry has its own echo chamber. That's immaterial.


Define tech. AI awareness is not limited to software developers and tech nerds. Anyone who has some software/internet awareness and follows the news or social gossip knows about these tools, or even uses them. So I'd say the echo chamber is more about your general mindset and social bubble.

> maybe AI mainstream adoption will take longer than we anticipate.

Everything always takes longer than we anticipate. And it's not like AI today is the holy grail yet. It's still slow, expensive, and error-prone, and we are less than a year into the real hype. Legal systems are busy working out the corners and acceptance. We are still in the exploring phase, and this can go on for several years. There will be significant changes in the coming years, but the big bang is still ahead of us, and until then it will be a locally fast but globally slow change.


From Pew Research: A majority of Americans have heard of ChatGPT, but few have tried it themselves

So yes, your friend is unusual, especially as a millennial, where even more have heard of it and used it. When you're asking a statistics question, make sure you cite statistics instead of asking people to write comments from their toilet.

Also: "Just 14% of U.S. adults have tried ChatGPT". I think we need to take a step back and consider how much the bar has been raised. "Just" 14% of Americans? This is an incredibly high number.

[1] https://www.pewresearch.org/short-reads/2023/05/24/a-majorit...


AI discussion has definitely made it mainstream and you can corroborate that just by looking at things like the ongoing Writer's Guild strike where AI generated content is a huge sticking point. That goes doubly for the Acting Guild which looks increasingly likely to strike as well.


Honestly, I’m a little surprised to see this! I’m not a “coder” or adjacent like most of the folks I see on HN. I work in video production.

My in-laws (a lawyer and a store owner) talk about ChatGPT, my parents (doctors) ask me about ChatGPT, my colleagues ask me about it re: its impact on film production, my siblings talk about it. Hell, the WGA is talking about ChatGPT as part of the strike.

Again, I obviously have my own little social bubble of somewhat like-minded people, but I feel like I’m talking about AI and ChatGPT every other day. I see articles on main stream publications at least weekly. It’s basically as well known as Bitcoin - if not more so - as far as I can tell. Very surprising to me this person still hasn’t heard about it!


> as we’re all quite aware that AI will play an increasing role in our lives (in & out of the office)

I am not aware of that!

I am definitely inside the tech echo chamber but I do not use ChatGPT, will not do so, don't care about it, and frankly don't respect people who do. It is not a useful thing, and all the "AI" (it isn't) garbage that's popping up right now is equally useless.

I say this on HN often and people come back with "but it reached 1M users so fast!" I don't care. Millions of people smoke cigarettes, that doesn't prove they're useful.

I am going to sit this hype cycle out. Maybe it will eventually be useful for something. I am clearly not the one to try to imagine what that will be.


YouGov did a poll[0] of 1500 US adult citizens, and it reflects what a lot of people in this thread have said anecdotally. 43% say they have never used AI tools, with only 24% reporting they "very often" or "somewhat often" use them.

Interestingly, there's a big age component in this poll, and rather than being a smooth grade like most tech, there's a very strong divide at 45yo—those younger than 45 are over 4x as likely to report they often use AI tools than those over 45.

https://docs.cdn.yougov.com/ifywkae5dt/econTabReport.pdf#pag...


Disclaimer: Entirely layperson opinion.

We already had AI in our daily lives, such as translation, object recognition, voice assistant, spam filter, fraud detection systems and a lot more.

Most of these are old and boring and yet deeply embedded in our daily life.

The hype bubble is on the new LLM and generative art AI tech, which are backed by a lot of money, so a lot of hype is generated. Without hype, corporations can’t sell us on novelty but once their pockets are filled, they exit and these eventually cool down when something new comes along.

However, with each novelty we are usually left with some bits and pieces of useful residual tech that give a little improvement to our daily life, so every hype cycle leaves something useful behind.


We certainly are, but so is everyone else. To varying degrees.

My own personal opinion is that depending on your particular area of expertise, you will have various levels of resolution of knowledge. We might be in an echo chamber on AI here, but many of us aren’t even real specialists on the subject. So, we fall prey to various cognitive biases [1]

[1] https://en.m.wikipedia.org/wiki/Curse_of_knowledge

Regarding your friend - a few technological revolutions occurred without the general population really knowing anything about how they work. New tools show up, people use them.


Not really. Mainstream media publishes something about AI at least once a week. It might not have everyday adoption yet; like most tools, it's widely known but only fully explored by its niche.

There's a lot of overhype, A LOT. And many startups are reinventing the wheel in a more expensive way just to ride the wave. But unlike crypto, LLMs are tools that can solve more "dev/user"-level problems. Crypto was supposed to solve a "global financial system" problem, and since that's too hard to grasp, people used it for investment instead, because that's closer to their real lives and needs.


>but maybe AI mainstream adoption will take longer than we anticipate.

I think no. I think most people will use the products without ever hearing about GPT or LLMs. There will be products based on LLMs, but they will have nice, easy-to-use UIs that are not much different from any other usual UI, just with more advanced capabilities. People will just use them without realizing what is behind those UIs.

I think that because there is no need for the user to know what is behind the UI, adoption of new LLM-based AI will be much faster than that of other new technologies we have seen so far.


My experience has been that most people have not heard of it, most people that have heard of it have not tried it meaningfully, and most people who tried it meaningfully did not unlock massive productivity gains yet.


No, there's plenty of criticism or skepticism of AI models from within the techie sphere.

An echo chamber would mean everyone having the same opinions, and that's not the case here, at least speaking to tech people broadly.


This just happened to me. I was explaining it, and I am starting to feel like a crazy conspiracy theorist and AI doomer when I describe how it can more or less write any script and solve any problem. Then I see how people use it: they can barely articulate elementary concepts, let alone break them down or solve them. I feel like being able to use GPT right now requires that ability, which is probably why. On top of that, most people aren't even using GPT-4. They type some barely coherent instructions into GPT-3 and then dismiss it as hogwash.


There is a large contingent of doomsayers on AI, and one of the doom scenarios is that people don't realize they are being manipulated by AI.

So this case, where you are talking to people outside the bubble and they don't realize what is going on, IS THE PROBLEM.

Outside the bubble, people will be seeing images, video, text, voice, that is all fake and manipulated, and THEY ARE NOT PAYING ATTENTION, they don't know it is fake. Inside the bubble we are noticing it, outside the bubble, large chunks of population don't know it is happening.


I wouldn't say it's an echo chamber. I'd say it's a bubble (But without the negative connotations).

This tech is in its infancy, and the whole "prompt engineering" thing, even though it's fun and all, will eventually be unnecessary, and _then_ it'll be widespread.

I've even spoken to friends IN the tech industry who are not using it, at least not to the extent a lot of people are using it.

And even I am not using it as much as I would like, but then I don't have enough free time.

All in all, just give it time. In my case, I hardly use Google these days.


It seems like the tech community is more aware of the possibilities of LLMs because they are its parent in a sense: they created it and/or have watched it grow up. And the general public probably sees it more like a toy or novelty for information retrieval or generating meal plans and aren't aware of its full potential. Although, the general public will probably start to see it more negatively if/when low skilled office jobs start downsizing due to improved automation products on the horizon.


Yes.

I think we just witnessed something akin to the birth of personal computers which puts us around the ‘60-‘70s. It’ll take a while to (vastly) improve and eventually disseminate into society. I’d say a decade at least.

A lot of the surrounding “infrastructure” is still missing, like in the olden days. Techies are scarce and needed for other things. Energy and thus compute is expensive and society still has some important open questions left to answer about its own stability if it wants to survive this next wave of evolution.


Today’s LLMs like ChatGPT also might seem like mainframe computers compared to the personalized and locally-run models that are likely in the future.


People don't see it as being this revolutionary because they don't know how to use it. My wife (a literary translator), my close friends (two doctors and one business executive), and I are seeing massive gains in productivity by using simple prompt techniques such as chain-of-thought. Heck, my cardiologist friend at one of the country's top hospitals is writing a book with ChatGPT. It's there and it's great, and it will only get easier to use, smarter, faster, etc.
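
To make "chain-of-thought" concrete: in its simplest form, it's just instructing the model to reason step by step before answering. A minimal sketch, assuming the same 2023-era openai SDK (the example question is made up for illustration):

    import openai  # assumes OPENAI_API_KEY is set in the environment

    resp = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": "A clinic sees 48 patients a day and 1 in 6 need a "
                       "follow-up within a week. How many follow-up slots "
                       "should we reserve per 5-day week? "
                       "Think step by step, then give the final number.",
        }],
    )
    print(resp["choices"][0]["message"]["content"])  # reasoning, then 40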


ML has been a thing in data analysis for decades now. Still, an ML model is not always the one with the most power; there are a lot of good models in statistics at this point, and graphical modeling is like 100 years old. Reading the tech news, though, you'd think we'd unlocked the god formula in the past year or something. That's why I look at it with a heavy side-eye today: it's clearly a square-peg method being shoved into each and every hole because it's popular.


I've had plenty of non tech people talk about chat gpt.

Especially college students


A recent experience that was sort of the opposite: I went for a haircut and my hairdresser was an older Swedish woman who was not very technical. She told me she was having her hairdressing book translated from Swedish to English and had some trouble translating the title. The Swedish title contained a witty play on words, and she had trouble coming up with an English version that would sound as clever. She said she was using ChatGPT to help her brainstorm title ideas.


I have a friend who is a senior electronics engineer at a fairly large company. A few months ago, I mentioned ChatGPT in conversation and he had never heard of it. I was quite surprised since I've seen ChatGPT become a major topic of concern even outside tech among those in the humanities. I think the explanation is that since the concerns about LLM AI revolve around language use, the communities paying attention to it are not the same as what we normally categorize as tech.


Yes we are. AI will be useful for some things, society will need to adjust a bit, and we'll move on like whatever. Think the power loom, the computer, and all the other major tech innovations. With the benefit of hindsight, it'll look inevitable. Your grandkids will ask you about the (supposedly) exciting life during the 4th Industrial Revolution, and you'll tell them something along the lines of "eh, life is life".


AI is a profound change for how we will live. It's more a socioeconomic change (loss of millions of jobs) than a technological advancement.

The worst part is the current state though, where all the crypto hype bros have come out of the woodwork and are rebranding as AI hype bros now. My Linkedin feed is sadly now full of idiots calling themselves "AI Expert" and similar such terms with made up resumes trying to sell their 'expertise'.


OpenAI is guarding all the information about ChatGPT popularity and usage numbers obviously. Some indirect conclusions can be made from https://www.similarweb.com/website/openai.com/#overview. The absolute numbers are huge, but the explosive growth stopped a couple months ago.


To be honest, I am busy building GPT-powered back-office tools for data classification, content creation, workflow automations, … to eliminate jobs down the road at my company. It's already factored into future financial planning that quite a few positions will be gone in a few months, except the affected people have no idea yet.

… so my take is that it just takes a bit of time to leverage these new tools, which popped into existence only a few months ago.


There definitely is an echo chamber, but not one about the existence of ChatGPT and other AI technologies (even the "menial" laborers I know are relatively aware of it, and my parents and family members frequently talk about it in the context of deepfakes, cheating at school, automation, and healthcare).

Common reactions to ChatGPT and a lot of the fear are definitely overemphasized in the tech field, but that makes sense.


AI summarizes and listicles pretty well. I use it all day long. I’m not sure “a few people I’ve talked to have never heard about it” is any stronger a barometer of anything than the hype ChatGPT has right now.

I do think the integrations available in many of the tools I encounter are probably a better signal of potential and impact. We are in year one. I suspect LLMs will continue to evolve and still have market legs in a decade.


I've been saying hype for a while.

Please share products that exist today that leverage ChatGPT and are useful. A product that could not exist without ChatGPT. A product with actual users and a productive use of ChatGPT.

ChatGPT is nothing more than a tech preview or fun experiment that will pave the way for the future. However, it's still confidently wrong, easily confused and incredibly filtered.

I'd love to see a product with tangible results.


> maybe AI mainstream adoption will take longer than we anticipate

I would say it will mostly depend not on the public hype around it but on the pace at which it will acquire capabilities. If it stays at GPT 4 level, it will be gradually used to automate various tasks over the years.

If on the other hand it keeps the progression GPT 2 -> 3 -> 4, then by the time we are at GPT 6, it will be able to easily replace a wide range of jobs.


Everyone I have seen speak positively about ChatGPT and the like has suggested applications that are just awful.

It's going to be terrible in a few years time when you try to cancel a subscription or raise a complaint with a company and they throw an AI in front of you. Most call centres are designed to waste your time so you go away, we are going to see impressive new frontiers of absolute bullshit form.


I think crypto was an echo chamber.

AI, though, struck a nerve because people intuit a killer app. Or more precisely, because AI could potentially be applied in many different places. The last time we had something that felt like this was probably the web browser; at least it is for me.

I think it is because we are now really close to what the non-technical mainstream public thinks computing should be — “do what I mean, not what I say”.


I think so. I suspect part of the reason for this is that, for many people, "AI" has become a meaningless marketing buzzword. And companies have been overpromising on the capabilities of their "AI" technology ever since home computers were a thing. So now we actually have something really cool and possibly revolutionary, but no one cares because they are tired of being scammed.


I have many friends that don't care about it, others that never heard about it.

Anyway, call it AI, ML, or statistics; the truth is, all these algorithms are helping us get rid of the boring stuff in tech.

E.g.: I won't waste my time reading regex documentation to learn how to remove every whitespace after a combination of characters, because I'll completely forget it two days later. And the examples go on and on.
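
For illustration, the kind of throwaway snippet it hands back (hypothetical example; here the "combination of characters" is just a colon):

    import re

    text = "name:   Alice age:\t30"
    # Drop any whitespace that immediately follows a colon.
    print(re.sub(r":\s+", ":", text))  # -> "name:Alice age:30"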


> I have had conversations with non-tech people about ChatGPT/AI, but not very frequently, which led me to think, are we just in an echo chamber?

Maybe, but like, ChatGPT has broken outside the echo chamber. NPR and Marketplace have had plenty of stories on it. My parents probably haven't heard of it, but they are retired and veg out on soap operas and game shows and avoid anything that sounds like news.


Surprising places I've heard of ChatGPT recently:

My sister-in-law (an orthodontist) had ChatGPT draft a job ad when hiring an assistant.

My wife just graduated with a non-tech master's degree. At her graduation, the head of the program made humorous references to ChatGPT and cited it on a few topics in her speech.

To be honest, it's a no-brainer. It's helpful tech with a low barrier to entry, and free. People will use it.


The best and most common use of it I've found so far is finding recipes. GPT-3.5 is good at it, but you generally need to go find an actual recipe once you get an idea from it; GPT-4 does a much better job of giving you a coherent recipe from the start (trust but verify, though).

I give it some restrictions and a few preferences and ask it to suggest 20 things for me to cook. It's very helpful.


Not sure if tech, but some kind of echo chamber. I've introduced ChatGPT to multiple people, in tech and not; they all ended up actually using it and getting some benefit out of it. Then they became actively interested in the topic (they initiated conversations about it later on).

So, are LLMs as popular outside of HN and similar communities - no. Are people outside of them interested - definitely yes.


At work some folks are obsessed with LLMs, yet we’ve ignored deep learning for years. There may be some applications for LLMs, but as another commenter said, it will likely amount to calling some LLM as an API. At some point, once we’re doing that, the hype will die down some.

Some family have heard of ChatGPT and have misunderstood what it can do, so I’ve had fun showing them the limitations of it.


> He’s a millennial & a white collar worker and smart

If I have to anthropomorphise the GPT-3.5 prompt, "it" is an average-intelligence intern who can only google up to the 31st of March 2021 and has several insecurities and character flaws. It is unable to follow a plan, needs guidance at every step, needs things repeated constantly, and sometimes makes things up as it goes.


Staff software engineer here. Tech Company executives are going bananas over the potential of AI tools like ChatGPT without realizing it’s not where they expect it to be. My last position changed their entire organizational structure to focus on it and moved all of their tech jobs offshore because “GPT lets non-tech people code.”


That is entirely beside the point.

If you can export a UI with defined behaviors from Figma to React / NextJS, and AI can figure out how to code it, then Front End Developers become a thing of the past.

If there's a similar modelling tool for Back End and AI can code the exported design, then BE devs become obsolete.

It is irrelevant if lay people have heard of it or not - we're still out of a job.


My mom, who is so non-techy that she needs me to walk her through plugging in an Ethernet cable, uses ChatGPT to help her respond to emails and to write advertising descriptions for stuff she’s selling.

She uses AI more than me, and I’m the one hunting for deals on GPUs to train my own DRL model.

I was honestly shocked when I found out. I had figured that it was a tech echo chamber too.


For sure, I've had discussions from "using it every day" (techies) to "never heard of it" (non-techies).

For my own experience I use it every day, but the only thing it really solves is saving me time. I've seen lots of neat demos, hacks, agents etc and still haven't figured out what business problem LLMs solve besides time-saving.



Not just an echo chamber - it's something to sell. Whether or not it measures up to the hype doesn't matter once you've been paid for it. This is not the first buzzword from tech, and won't be the last. The finance industry is another example of an industry that will jump on a buzzword to sell - currently also AI.


I hear people talking about it everywhere. Students who use it to write assignments or study for exams. Colleagues who use it to generate code examples for specific things. Someone from HR writing text with it. This is definitely not an echo chamber: many people know about it and are using it in some way or other, or are at least curious.


From the outside, I’ve never seen a smaller professional circle with grander claims.

Crypto was more mixed from my recollection, a bit less ivory tower since the tech isn’t that complex, and very prominent people in the tech itself weren’t so readily making huge claims.

I’m seeing tech leaders saying things that make me concerned, like engineers saying Bard was sentient.


I have talked about it with non-IT people and they knew about it.

Plenty of them.

And I showed it to a few and they were immediately impressed by it.

I also overheard people talking about it on the street.

So in my opinion: no.

And ChatGPT has over 100 million users. People can use it in Bing every day.

Adobe added it to their products too.

People already benefit from AI.

My company is also working on introducing more features; it's just the time required to do it.


My take: AI is a fad just like crypto was before it. Who can even remember what the toy of the day was before that? Maybe Facebook social apps? It will end the same way as all the previous ones: 200 smart people will offload some of the fools' life savings into their pockets and move on to the next "big" thing.


Yes. It’s cool. But, it’s hardly revolutionary. They built a Siri that doesn’t suck.

It has the potential to be a Google killer as it’s much more effective at sussing out specific information. But, the ideological guard rails OpenAI insist upon are super annoying. You can never fully trust that you’re getting a non-politically biased output.


> he hadn’t even heard of ChatGPT

Practically every mainstream comedy show I hear in the UK mentions ChatGPT; it's been constantly in the news over the last 6 months. Every high school child knows how to use it. Hell, there was a South Park episode a couple of months ago about ChatGPT, written by ChatGPT.

This isn't niche stuff.


In my experience, investors and secondarily tech execs are in the bubble. From porta-potties to snack chip manufacturers, every CEO is being asked what they’re doing about AI by their board.

The actual developers range from enthusiastic to skeptical. Actual developers take a practical perspective IMO.


This American Life and Planet Money have covered ChatGPT in recent months, though that audience may be skewed.


I was speaking to a gym buddy the other day, and she said she was looking into ChatGPT because she is a copywriter and aware that her job might be at risk within the next few years.

I talked to her about prompt engineering for a little while, but I'm not sure that what I was saying really got through.


Yes we are, and nearly everyone is convinced it'll change the world, but they're simply guessing. None of us know how this will, or won't, play out.

Personally? I think we're sleep walking into social disaster and this tech is only going to make people's lives more difficult.


I saw other waves, and this one is different. This is the first time we can unlock increasing cognitive capabilities by basically just letting capital grow its own AI. It's at the level of the invention of the internet, or currency, or the printing press, or writing, or maybe even human language.


I don’t see any evidence that it’s there yet. I do see that potential if AI were to improve dramatically, but I think we’re tricking ourselves into thinking it’s already here because what we’re seeing is so compelling.

The reality as far as I can tell is that we still have an incredible distance to go to have anything comparable to unlocking spoken or written language.


I guess it's your right to believe that as long as you want. The evidence is that it's beating people at every exam: AP exams, GRE exams, bar exams, medical certification exams. And not only by regurgitating facts; it's also at a superhuman level in theory of mind. And all that comes from just an architecture that reads every text and uses a lot of GPU power.


I think these things are amazing, but I don’t think they’re comprehensively intelligent nor do I think we can easily reproduce this spark in other domains of intelligence.

When we can, we will be creating something superhuman in the truest sense. What we have now isn’t that at all. I think it appears magical because we’re so enthralled by language. It’s a major interface into our world and we’re extremely stimulated by it. It conveys so much meaning to us with so little. In fact, when it conveys too little meaning we begin to search for it and fill in the gaps! Our brains are extremely eager to engage with language.

I’m excited about this stuff, but what the models are doing isn’t as incredible as all of that seems.


Yeah, right. It can answer written-form exams that are similar to previous ones and expressed in formulaic ways. That's impressive, don't get me wrong, but it's also already clear that ChatGPT makes for a terrible lawyer (inventing cases looks bad) and literally can't act as a doctor, because it doesn't have, like, a body, or a sense of empathy. It's like saying that because it got the theory part of a driving test correct, we now have self-driving figured out.


A while back I was meeting nontechnical people in their 50s and 60s and they were all asking me about ChatGPT. I asked if they tried it. "No."

--"You know it's free, right?"

"Yes."

I found this inexplicable and infuriating. Perhaps someone can shed light on the psychology of such people?


They were curious enough to ask you about it, to make conversation, but without a specific use or utility in mind hadn't (yet?) bothered.

Why did you find this infuriating? There's only so much time in the day; different people have different priorities.


Same energy as when someone asks me to Google something for them.


1) they know it's not a 5 minute thing

2) everything is still moving fast. Unless they plan to keep up with it, they may be planning to look into it later after the dust settles a bit

3) their time is already budgeted and it isn't clear what they should skip / cancel to look into it

4) they aren't sure exactly where to start and so they procrastinate. No trusted source has sent them a link to try, they don't want to sift through search results and potential scams, they don't want to make an account, etc


Because they are simply not that interested in trying the new thing. I will drink craft beer, but if someone invented the most amazing craft beer ever, I wouldn't go out of my way to try it even if it was free. I just don't care that much about craft beer. Different people are interested in different things...and that is okay.


anyone who believes that "AI" (meaning current, ChatGPT level tech and the next iteration of it) is a bigger threat to humanity than climate change, because of some "Skynet, rise of the machines" scenario nonsense, is 100% inside an AI echo chamber.


Maybe there are multiple echo chambers: left, right, climate change, AI doom, crypto... We tend to gravitate towards these because we like to get opinions that closely match our current one.


To an extent, yes.

But. The suggestion that "all echo chambers are as bad as each other" isn't warranted; this is the "bothsidesism" fallacy (1).

That's why you'd look at whether their predictions are grounded in reality, whether they're spinning wild "what if" scenarios, whether they're concentrating on culture-war alarmism about not-particularly-salient threats, or whether they're promoting a scheme that makes them money. Finally, look at the track record of the people making the claims. How did their previous claims work out?

1) https://en.wikipedia.org/wiki/False_balance


Yes, for now. But that doesn't mean it isn't mind-blowing. There is always a bubble. It is just that things are moving so fast in the bubble this time that it seems strange. There is always some lag before new tech diffuses out to the rest of industry and the public.


Asking one person and extrapolating from their response isn't a good way to approach a problem.


It's the opposite in my experience: people inside tech are (probably rightly) skeptical, making comments about "Eliza 2.0", meanwhile I am listening to car podcasts where the people are using ChatGPT to write valentines cards for their wives...


My dog walker - a man with a fairly troubled life and very low levels of literacy - used it to write product descriptions for an eCommerce website he's in the process of launching.

Part of me thinks the tech echo chamber is working the other way around here: a lot of techies who just know and talk to other techies are all in a bubble thinking it's all overhyped blah blah blah, without stepping back and thinking how it can enable everyday people (whatever that means) to do things they previously could not, at speed.


All the high school kids are using it, but that could be a Bay Area thing.

My 86-year-old father is far, far from Silicon Valley, and his main activities are playing piano and talking with other residents of his retirement home, but he uses GPT-4 every day.


Sure feels like it. I've started marking most AI tweets as "Not interested", ESPECIALLY anything analogous to "ChatGPT has a huge problem... bard/claude/blah" or some thread boi like that.


Maybe? My wife has been using it to give her recipe ideas. She can’t eat onions and things with sulfites, so it’s super useful for filtering out ingredients that would make her sick without having to comb over a list.

So far the recipes have been pretty good!


Tech people have an unhealthy obsession with trying to create tech to replace non-tech jobs, without any care as to the impacts on society. Chat and image AI generation is just another example of tech screwing over other people.


If the jump between ChatGPT '5' and 4 is as big as between 3.5 and 4 then we're not in an echo chamber, we're in trouble. If the gap is smaller then they are running out of steam and we'll be fine.


I don’t think it will be, if history is any precedent. Look at self-driving in the mid-to-late 2010s: the rate of advancement led some otherwise smart people to make big bets that autonomous cars were right around the corner and we’d be driven around by robots by 2022.

We’re at the early phase of AI: it demos well but breaks down in ways that seem obvious in the real world. We’re at the limit of how big a model we can train right now. GPT-4 has been described as "8 GPTs in a trench coat" by George Hotz, which was later confirmed by the founder of PyTorch. I’m not saying we have nothing to worry about, just that the hype seems to overtake reality early in the adoption cycle.

GPT-4 is like an unreliable but brilliant employee.


My kids and all their friends are crazy about AI. They’re bent on making it do as much of their work as possible.

It’s fascinating. Though it probably isn’t intentional, AI service providers are already hooking kids early to have customers later.


Or they're hooking kids because they're least likely to have the domain experience required to recognize how confidently wrong AI can be.


Well, I wonder what’s wrong more often: GPT or a 13 year old :)


Bet a 13 year old with total confidence in GPT beats them both individually on any subject of reasonable complexity.


That’s the thing. If a kid approaches learning with a language model responsibly, they stand to learn a lot very quickly and solve difficult problems that would otherwise be next to impossible for them.

The thing is, we need to teach them that today, rather than telling them it’s cheating, trying to catch them using it on essays, and dealing out some kind of consequence.

I now use it professionally fairly regularly and it’s an easily justified expense. I’ve already delivered things to clients faster because of it. Most recently I reasoned through prototyping a sort of minimal CMS experience, using a self-hosted CMS API connected to Next.JS, and had a viable plan and prototype at the proposal stage in about as much time as it would normally take me just to do the research on something like this.

If it’s feasible to accelerate learning and research for real-world work, I think we should seriously consider how it integrates with education rather than encourage kids to avoid it entirely. Of course, we don’t have that awareness in our education workforce in Canada, but I wonder if it’s more harmful to discourage use entirely than to accept it and ensure kids are still producing the work that’s expected. If it’s clearly GPT regurgitation with hallucinations and no bibliography, the kid has still failed to deliver. If they manage to do their work faster with technology (the main difference here is that they haven’t googled a bunch of stuff, frankly), then great, they’re still learning something.

And of course, the more you tell kids not to use it, the more they’ll want to (which I’ve come to love, honestly).


In education, AI is the hottest topic ever. Mostly, teachers think it will spell doom for any kind of homework. Which might very well be true.


>What do you think?

I am bored of ChatGPT, AI this, AI that. I scarcely talk about ChatGPT with my peers. While I am certainly interested in AI, and even dabbled in it during my MSc years, I find it tiresome to read about nothing else.


My mind is blown how useful ChatGPT is. There are certain types of questions it nails where Google search is useless. You’re just an early adopter. Broader adoption is coming, it will just take time.


I really wish I understood this response. I've played with ChatGPT a good bit and it's very impressive, but more often than not it feels like a nice skin on top of Wikipedia. It doesn't do any actual reasoning, so I can't drill into topics to the depth that I want, and it's wrong often enough that, if I'm working on anything of consequence, I have to go find other information later to validate it.


Most people outside of tech that I know haven't really heard of ChatGPT aside from in passing in a news piece or something. And those that have heard of it have no idea what it actually is.


It's a bit different, since we're early adopters. The average person hasn't spent dozens of hours with GPT-4 yet, and doesn't realize how useful it is to everyday life.


Based on the recently released round 3 of Hotz and Fridman - YES!

I kept thinking that the entire conversation sounded like two vocal-transcription bots reading through r/AI and HN with a sprinkling of Twitter.


I think of GPT as a new UI paradigm. It was not obvious why the world needed GUIs, or touchscreens, but they took over pretty quickly. Conversational interfaces could disrupt them both.


Depends.

Yesterday I was having a drink with some friends, one of whom is, like me, in tech. The others are in shipping, finance, and sales. They had all experimented with ChatGPT and/or Midjourney.


I have noticed some non-tech individuals, such as lawyers, are utilizing ChatGPT. Even my mother asks me to use ChatGPT for her business, and my girlfriend loves it as well.


I was at a house party full of non-tech YUPPies (economics, politics, etc.) over the weekend. Every single group was talking about ChatGPT. I couldn't get away from it.


In January, I texted everyone I knew about ChatGPT. I don't usually text 100 people, but I thought this was worthwhile.

In Feb, at a birthday party, I talked to people about it, people knew about it but didn't use it yet.

In March, at a different birthday party, everyone wanted to talk to me about it.

Today, it's old news. Although whenever I meet someone who says it's a fad, I consider them a luddite.


My sister is decidedly non-tech, and came to visit a few weeks back while working remotely, and she used ChatGPT over and over and over again for different small tasks.


what were some of them?


Summary of this discussion https://tinyurl.com/4supvdm4


Surely all students have heard of ChatGPT? (Anecdotally, it seems the educators have. I've heard of cheating rates on papers at nearly half.)


My personal experience is that just about everyone here in Argentina knows about ChatGPT: from kindergartners to retired people. Argentina is far from SV.


I think your friend doesn’t consume any mass media if he hasn’t heard of ChatGPT. This stuff is everywhere, not just tech circles.


Have you used Midjourney or Stable Diffusion? It’s a huge change in what’s required to make whatever images you can think of.


No practical use. Just making people say "wow" for a minute then move on. You really can't use them for anything good yet.


I’m in the middle of Guatemala. The owner of the hotel is using GPT-4 to build out his marketing plan.

Some people get it, some people don’t.

And that’s OK by me.


ChatGPT is currently number 12 on the App Store in the US, above Snapchat and Gmail, so I doubt there's an echo chamber.


I've had several non-tech people ask me how ChatGPT has changed my job. To which I answer "it hasn't".


No.

I have an AI startup with thousands of active paying monthly users, and they are all marketing people, not highly technical.


Proposal: prefix your answer with GPT-3.5 if you’ve only used the free version and GPT-4 if you use the paid version.


I asked Google's Bard and ChatGPT-4. They both said, "No. Don't worry." So I'm good.


Comparing AI to crypto is incredibly stupid. Hacker News is getting closer to Reddit every day with the hot takes.


As an AI language model, I don't have an opinion on whether people in tech are inside an AI echo chamber.


The echo chamber is when you ask for anecdotal opinions instead of actual data.

…like what you’re doing right now.


Yes. There are still a lot of people who aren't that aware of this stuff, and frankly, it's because it still doesn't have much practical use.

It's making headlines left and right, and businesses are all trying to figure out what this stuff does, but if you're not watching much news and not on the tech side of business, you probably don't know or care?


It has unbelievable practical use. ChatGPT is a personal tutor on any subject you can think of. It's more that 90% of people can't even be bothered to google the answer to a question they have, despite having a personal computer and oracle in their front pocket.

I'm using an LLM to teach me interactively how LLMs work and how to integrate LLMs into our products. It's replaced 90% of my googling/stack overflow. Every engineer in our company is using Copilot and ChatGPT to write software.
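
To make "integrating LLMs into our products" concrete: at its simplest, it's one API call. Here's a minimal sketch, assuming the openai Python package (the pre-1.0 API that's current as of this writing); the key, model name, and prompts are placeholders, not anything from our actual product:

    import openai  # pip install openai

    openai.api_key = "sk-..."  # your API key

    # Ask the kind of question I'd previously have googled.
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You are a concise engineering tutor."},
            {"role": "user", "content": "Explain transformer attention in two paragraphs."},
        ],
        temperature=0.2,  # keep answers focused rather than creative
    )

    print(response["choices"][0]["message"]["content"])

The answer deserves the same skepticism as a zero-vote Stack Overflow post, but it arrives in seconds and you can interrogate it with follow-up questions.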


I have yet to see someone apply ChatGPT or another LLM in this way and not regret it or ditch it later.

Honestly, I'm more worried about the ones who will use it this way and assume it's always correct. It is incorrect too often for anything I've ever done with it.


> i'm more worried about the ones who will use it this way and assume it's always correct

There are plenty of folks who don't fall into this trap.

Example: Me. I am fully aware that LLM generated code can be wrong or worse, contain disastrous errors. But being aware of that, and looking for the things I know from experience it commonly gets wrong, allows me to use it as an incredibly powerful tool in my workflow.


I guess it’s a matter of taste, I’d find that fucking horrible.


Why would that be any more horrible than handing routine tasks and boilerplate to a junior and having to review his PR?

With the junior, I need a meeting.

With the LLM, I just let it redo.


How can it replace 90% of your Googling if you have to go check everything it tells you afterward?


Our blind spot in vetting information is kind of like our blind spot when looking for something we have lost. We don't double check what we are sure of. To me, that is the most serious challenge when trying to learn using LLMs.


It's not like Googling typically gives you truthful answers either.


The difference to me is that with search you can explicitly search only in trusted sources (with e.g. site:...) if you choose to and you get transparency about where the data is coming from.
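
For example, a (made-up) query like

    llm hallucination rates site:arxiv.org

restricts results to a source I've already decided to trust. There's no equivalent lever for a chat model.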

New Bing is a slight improvement in that you get some transparency about where it's getting information from, and you can tell it to prioritise accuracy, but you can't explicitly tell it to use only the subset of the web that you personally trust.


Yes, it is the most impressive tool ever to ship with zero documentation and zero affordances, and to get less capable over time... if Lotus had been launched the same way, we would still be using paper ledgers... massive disconnect.

The things it is good for are being obscured by promises of a waterfall of tasks performed entirely by AI that somehow ends in something usable. If you tell me you have a neat program that does exactly what you expect 99.4% of the time, yet you have no idea why the other 0.6% fail, you will have to demonstrate that it has some very desirable properties not offered by other solutions. It's a hammer looking for nuts, and what is nuts is people taking £100 million to wrap some LangChain. The burn rate to get noticed is going to be half your spend; the other half is subsidising loss-making operations on the idea that there is some pie to win here.

It's a novel database. Fun, creative, unpredictable in interesting ways, like my cousin.


The concern is not simply within tech.

Generative AI is a huge topic of discussion amongst virtually all creatives (painters, graphic artists, musicians, authors, journalists), in business (at all levels, largely through business process management, finance, and business intelligence), amongst media (journalism and entertainment), amongst governments (regulation, electoral politics, international relations, military/strategic risks, intelligence, competitiveness, impacts on general employment and social stability), amongst the technological boomer and doomer communities (impacts on future technological development will likely be profound, though agreement on the sign bit differs), and more.

It's true that the general person on the street likely has little sense of the potential and risks, but that is virtually always the case with new technological developments. Potential impacts are always hard to see, and the discussion about these almost always tends toward various elites (technological, business, government, academic, religious). And that's the case now.

But the conversation and concern is not limited to the information technology elite, by any measure.


I can’t figure out exactly why the hype wore off for me.

It’s now just another tool in life’s toolbox.


If there's a South Park episode on it then no.


They’re in an echo chamber in general.


NO!

( No no no nooo noooOooo noooOooOoooo )


I think it's too easy to fall into the anecdote trap, and I like the comments with study data, like the Pew Research and Line Research ones (sequential surveys, even better)...

I also think adoption will depend on "industry." ChatGPT in education, for example:

* BestColleges [1] (n=1000) found 43% of college students have used ChatGPT, and 22% have said they used it to help complete assignments or exams.

* Study.com [2] (n=1100) had some crazier numbers. "Over 89% of students have used ChatGPT to help with a homework assignment. ... 48% of students admitted to using ChatGPT for an at-home test or quiz, 53% had it write an essay, and 22% had it write an outline for a paper."

* Interestingly, in K-12, adoption appears to be higher by teachers than students [3]: "Within two months of its introduction, a 51% majority of teachers reported using ChatGPT, with 40% using it at least once a week, and 53% expecting to use it more this year. Just 22% of students said they use the technology on a weekly basis or more."

* In Japan, a recent survey [4] (n=4000) of undergraduate students conducted by Tohoku University showed 32.4% have used ChatGPT. This is compared to about 7% of office workers in Japan using ChatGPT on the job [5] (n=13814) in a recent poll by MM Research Institute.

Of course, education isn't the only industry with an outsized impact (although it's interesting in the sense that it's a good temperature check for the upcoming generation, especially given how prevalent it is at colleges/universities).

But there are other industries as well. a16z Games has been doing a game development survey on Generative AI use in games, and their preliminary results [6] are in line with my personal experience/view into the game industry - that it has already been completely re-aligning/disrupting the production pipeline: "We heard from 243 game studios - large and small - the results were astonishing ... 87% of studios use an AI tool or model in their studio TODAY. 99% of studios PLAN to use the technology in the future."

It's worth noting that, while the sources are dodgy, even if the 100M-user number for ChatGPT is accurate, that's only ~2% of global internet users and <1% of the world population. You could probably confidently say that 99% of the world population has not directly used a generative "AI" product yet (obviously anyone who has used a mobile phone or the internet has been interacting with ML for years). I do think this is going to change very rapidly, but no matter how fast it goes, it won't be overnight.

[1] https://www.bestcolleges.com/research/college-students-ai-to...

[2] https://study.com/resources/perceptions-of-chatgpt-in-school...

[3] https://www.waltonfamilyfoundation.org/chatgpt-used-by-teach...

[4] https://www.asahi.com/ajw/articles/14927968

[5] https://asia.nikkei.com/Business/Technology/Use-ChatGPT-at-w...

[6] https://www.linkedin.com/posts/troykirwin_ai-x-game-developm...


Yes.


yes, and it's exhausting


> are we just in an echo chamber?

Yes.


> but maybe AI mainstream adoption will take longer than we anticipate.

Here's how the adoption of this technology is going to go (this is the way all AI technology adoption has gone for 60 years):

1) Papers will come out showing how a more effective way to leverage compute + data to make a system self-improving yields performance at some task that looks way better than previous AI systems, almost human-like. (This already happened: "Attention Is All You Need")

2) The first generally available implementations of the technology, in a pretty raw form, will be released. People will be completely amazed that this machine can do something thought to be a hallmark of humans! And by just doing $SIMPLE_THING (search, token prediction), which isn't "really" "thinking"! (This will amaze some people but also form the basis of a lot of negative commentary.) (Also already happened: ChatGPT, etc.)

3) There will be a huge influx of speculative investment capital into the space and a bunch of startups will appear to take advantage of this. At the same time, big old tech companies will start putting stickers on their existing products that say they're powered by LLMs. (Also already happened)

4) There will be a wave of press, first in academia, then in technology circles, then in the mainstream, about What This Means. "AGI" is just over the horizon, all human jobs are about to be gone, society totally transformed. (We are currently here at step 4.)

5) After a while, the limits of the technology will start to become clear. A lot of the startups will figure out that they don't really have a business, but a few will be massively successful and either build real ongoing businesses that use LLMs to solve problems for people, or get acquired. It will turn out that LLMs are massively, massively useful for some work previously thought to be nearly impossible, or at least contingent on solving the general AI problem: something like intent extraction, Grammarly-type writing assistants, Intellisense on steroids, building natural chat interfaces to APIs in products like Siri or Alexa that understand "turn on the light" and "turn on the lights" mean the same thing. I have no idea what the things will actually be; if I were good at that sort of thing I'd be rich.

6) There will be a bunch of "LLMs are useless!" press. Because LLMs don't have a Rosie-from-the-Jetsons level of human-like intelligence, they will be considered "a failure" for the general AI problem, even once people have grown accustomed to the actual, completely amazing things LLMs get used for, things that seemed "impossible" in 2021. Startups will fail. Enrollments in AI courses in school will drop, VCs will pull back from the category, and AI in general (not just LLMs) will be considered a doomed investment category for a few years. This entire time, LLMs will be used every day by huge numbers of people to do super helpful things. But it will turn out that no one wants to see a movie where the screenplay is written by AI. The LLM won't be able to drive a car. All the media websites that are spending money to have LLMs write articles will find out that LLM-generated content is a completely terrible way to get people to come to your site, read some stuff, and look at ads, with terrible economics, and these people will lose at least hundreds of millions of dollars, probably low billions, collectively.

7) At this trough point where LLMs have "failed" and AI as a sector is toxic to VCs, what LLMs do will somehow be thought of as 'not AI'. "It's just predicting the next token" or something will become the accepted common thinking that disqualifies it as 'Artificial Intelligence'. LLMs and LLM engineering will be considered useful and necessary, but they will be considered part of mainstream software engineering and not really 'AI' per se. People will generally forget that the workaday things LLMs turn into a trivial service call or library function used to be massively difficult problems that people thought would require human-like general intelligence to solve (for instance, making an Alexa-like voice assistant that can tell 'hey can you kill the lights', 'yo shut off the overhead light please?', 'alright shut the lights', and 'close the light' all mean the same thing). This will happen really fast. https://xkcd.com/1425/

Sometimes when you see an amazing magic show, if you later learn how the trick was done, it seems a lot less 'magical'. Most magic tricks exploit weird human perceptual phenomena and, most of all, the magician's willingness to master incredibly tedious technique and do incredibly tedious work. Even though we 'know' this at some level when we see magicians perform, it's still deflating to learn the details. For some reason, AI technology is subject to the same phenomenon.


Rare Betteridge fail here.


> What do you think?

What do you think?

(meaning "yes")



