AI has gone bust before and will go bust again. It happens so often with AI that there is a standard metaphor for it: AI winter.
The current technology of AI is not going to produce profits to justify the gargantuan levels of investment. Generative AIs excel at creative tasks, which is undoubtedly amazing, but creative industries have already had their margins beaten down by the hordes of humans who want to work in them. You’re not going to create hundreds of billions of dollars of revenue by replacing stock photography and blog post authors.
In objective contexts, generative AIs struggle with reliability, which makes it hard for folks to build them into the critical systems necessary to generate huge revenue. There is a reason Google did not deploy generative AI into search until OpenAI basically forced them to. And now that Google has rolled it out, does anyone think it is driving huge new revenue for them? No.
The reality is that much—maybe even most—of the current levels of investment are predicated on the idea that someone among the current crop of AI companies will create a “general intelligence” which will be intelligent and reliable enough to transform “hard” industries like energy, manufacturing, transportation, health care, etc.
Let’s recognize that investment thesis for what it is: a speculative bet. No one even agrees on how to define AGI, let alone understands the concept rigorously enough to calculate how to get there from ChatGPT.
This was the view a year ago. The take is already outdated.
There are already jobs being lost.
You don't need to make "billions"; you just need a model cheap enough to undercut the human being replaced.
So if I can take an off-the-shelf model, and it costs me $20/week to replace a human who costs $1000/week, then there you go. It is already disruptive.
People forget that there was a shit ton of investment in the early internet that also didn't pay off, but nobody would say that the internet didn't have some winners.
I'm already having to deal with AI when making reservations at Delta. Where do you think the people who used to take phone calls went? That is one company, one role.
The point is, those investments did turn out; look at the internet today. Would anybody go back to the '80s and argue it really wasn't worth the investment? Amazon? Don't bother. Cisco? Forget about it.
Sure, but it seems like the anti-AI sentiment is that it is all completely worthless. The point is, there will be a lot of failures, but not some complete industry collapse where we enter another AI winter and everyone should stop investing. There will be some winners.
There was investment in the internet pre-'90s that built the infrastructure that allowed Amazon to exist. There were winners and losers. Maybe there were losers in the '80s and Amazon was a winner in the '90s.
The fact that there were losers doesn't mean there should not have been any investment, which seems to be the argument against AI: that it costs a lot and isn't producing anything. When we can clearly see that the rush to invest is exactly about not being left behind and ending up among the winners.
I hate to say "did you read your link" but i have no option since you are spamming that link everywhere.
Earlier this year, US software company Salesforce fired 700 workers – equivalent to approximately 1% of its global workforce. This is in addition to similar cuts that saw the company reduce its personnel by 10% last year. Similarly to Google, Salesforce hasn’t announced that these job losses are directly linked to AI.
Google CEO Sundar Pichai hasn’t explicitly announced these jobs will be replaced with AI technology outright
In 2020, MSN sacked dozens of journalists responsible for writing news stories displayed on the company’s homepage and has since been using AI software to create the content.
Turnitin laid off 15 people.
Some article about MSN from 2020, or a no-name company Turnitin laying off 15 people, convinced you about AI taking jobs? Yeah, you are definitely living on a different planet (maybe a planet in the metaverse running on cryptocurrency).
These people are using AI as cover for their businesses that need to lay people off.
Layoffs = BAD
Layoffs due to AI = Good
per markets and investors, so that's what these CEOs are pretending.
You aren't wrong that some companies are using AI talk to put a positive spin on layoffs. I'm sure some companies are doing that.
1. Is the technology moving so fast that we don't have 'reputable scientific studies analyzing years of historic data that would satisfy every internet know-it-all'? Probably. Does that mean nothing is happening? Probably not.
2. Will people compare this latest technology change to the buggy-whip argument, thus saying 'ah-ha, got you, AI will actually increase jobs'? Yes, because technology is always wonderful and easy for society to seamlessly adapt to.
3. Is HN filled with programmers saying AI won't replace programmers, thus there is no problem, while ignoring the much larger workforce of low-to-mid-level drones doing rote tasks? Definitely.
4. Speaking in absolutes. Everyone is arguing about replacing jobs: "AI can't replace my job." But it is really just fractional. Let's say you have 5 marketing drones writing boring marketing material. AI allows them to do more; you still need humans to use the tools and to edit, but now you only need 3. AI can't do the 'entire job', but it did enough to eliminate two positions. This is what is already happening. Do you think this stopped with the ESPN case? Companies didn't stop because they got caught, they just got better at it.
> More than one-third (37%) of business leaders say AI replaced workers in 2023
I just read that one on CNBC. This can mean whatever, because "AI" can mean whatever they want, and "business leaders" will get fired for saying "we don't have an AI strategy". Markets are hostile to any "business leader" who isn't BS-ing that they are replacing jobs with AI.
I can't argue against that. There is definitely a lot of noise right now. A lot of spin. A lot of people arguing all sides.
Hence also, no good sources of information. At least nothing trusted.
But I think anybody that has used the latest AI tools can see the writing on the wall about job consolidation. How many 'new' jobs will also be created? Who knows. It's kind of word of mouth now.
This is definitely going to be as disruptive as blue-collar jobs going overseas.
Investors are throwing unfathomable amounts of money at AI right now, and frontier models are very expensive to build. There’s also the issue of these AI-as-a-Service companies requiring an ever increasing amount of GPUs and electricity, and because frontier models keep getting bigger, the problem keeps getting worse.
There is a serious risk to these companies. A frontier model doesn’t sit at the frontier for very long. Given a year or two, open source models can catch up for less than 1% of the money spent to build a frontier model. The moat doesn’t last long, so you need to keep burning tens of billions to stay ahead.
But as open source models continue to improve, we run the risk of reaching a moment that looks like the current smartphone market. People aren’t upgrading their phones as often anymore because a four-year-old smartphone is still pretty great. There’s no need to buy every year. Well, at some point an open source model will be close enough to a frontier model that many users will just stick with it. They’ll prefer it because there’s no monthly fee to run it on their own hardware, because there are no rate limits, and because they prefer the privacy and flexibility.
Right now you need a big cluster of GPUs to run top tier models. But with every passing year we get closer to good-enough open source models running on gaming GPUs.
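For a sense of what that future looks like in practice, here is a minimal sketch of running a quantized open-weights model on a single consumer GPU, assuming the llama-cpp-python bindings and a GGUF checkpoint you have already downloaded (the file name is a placeholder):

    from llama_cpp import Llama

    # Load a 4-bit quantized open-weights model from disk and offload
    # every layer to the local (gaming-class) GPU.
    llm = Llama(
        model_path="some-open-model-q4_k_m.gguf",  # hypothetical local checkpoint
        n_gpu_layers=-1,   # -1 = put all layers on the GPU
        n_ctx=8192,        # context window to allocate
    )

    out = llm("Summarize the following bug report: ...", max_tokens=256)
    print(out["choices"][0]["text"])

No subscription, no rate limit, and nothing leaves the machine, which is exactly the trade-off the hosted frontier models will have to compete against.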
And when that happens, would you want to be one of the investors shoveling tens of billions of dollars into building frontier models with declining subscription numbers where you’ve got no moat?
I don’t think the economics of this are going to pan out in the long run. Everyone is still hyped about ChatGPT, so investors are happy to throw money at it. But I think we’re in a very big AI bubble right now. The economics only make sense if someone achieves AGI within the next couple of years. If they don’t, this whole thing is going to decline sharply as open source models provide a better free alternative.
Yeah, but to what extent is it all just marketing and promotional-type copy, plus maybe "content" that explains or summarizes things?
That's definitely responsible for billions of dollars in salaries. But it's also a bit of an arms race. Like, the standards will just go up for everything evenly, right?
The people who think an LLM or other type of model can do a job are the people who don't know anything about that job.
Managers who don't know what developers do think you can replace a developer with an LLM. Can an LLM shit out some code? Sure, but that's hardly what (good) developers do.
Magazine publishers who don't know what editors do think an LLM can replace an editor. Can an LLM make a bunch of statements about the quality of a piece of writing? Sure, but they may have no basis in reality and will require a real human editor to review them. Or your publication can succumb to being LLM generated slop.
Bad coders who don't know what good coders do see that an LLM can do what they've been doing and think developers will be replaced but they don't actually realize what it means to be a developer so they don't see all the things the LLM isn't doing.
Tech bros think a model will be able to revolutionize materials development but when actual materials scientists look at the output it turns out it's mostly garbage. And crucially it took actual materials scientists spending a whole lot of time to figure that out. [0]
Most of what these models do is waste actual experts' time by forcing them to wade through huge quantities of plausible looking but completely incorrect output.
The main thing I see (and it is possible I’m biased by some of these same factors) is that it does seem to make some moderately helpful tools? Like coding assistants, to knock out boilerplate.
Maybe if it could make a developer twice as effective, it could halve the developer-to-project ratio. Jevons paradox, and we get twice as many projects, great. But the management requirements would be different, right? If teams are half as large, I wouldn’t expect management to just, like, go away. But the tree might be able to lose some middle “summarize and pass up” levels, right?
Basically it helps with really generic stuff, but writing generic stuff isn't my job. I have access to Windows Copilot Pro (basically GPT-4 Turbo or 4o depending on the time of day) as a tester, and GitHub Copilot as a user. I can say without hesitation that ChatGPT is slightly better at writing comments, but both are bad at doing anything harder than writing HTML forms, at least from scratch.
They _are_ a great rubber duck though. Today I had a concurrency issue and asked Windows Copilot for solutions. It was wrong, but it gave me an idea, basically saving me at least 45 minutes. GitHub Copilot is a great autocomplete, saving me some time too, but I don't think it can make me twice as effective as a coder. 20%? Maybe 40% if I take into account the fact that it generates really good test cases (that I still have to read)?
But coding is like 25% of my job, database/object design and software architecture are like 30%, network security another 25%, and the rest is meetings/coordination. So all in all, I don't think you can halve teams because you give them good genAI.
Every code generator I’ve used makes me think developers’ jobs are plenty safe at the moment from an AI takeover. You can’t seem to spend your way to a solution of producing good data. Nvidia is certainly enjoying watching people try, though.
What’s sad is that the tech is actually impressive and fascinating, but it’s being forced to look more useful than it is by greedy investors. What else is new; water is wet and all that.
Suppose AI can do anyone's job. Then, after massive layoffs, no one would receive a paycheck. How would that produce a burgeoning economy? Or indeed, any economy.
If we don’t need any services, we don’t need any people. And if we don’t need any people, we don’t need any services. Technically, “let’s all die” has always been a solution that balances the equations of the economy; actually it is the simplest one: 0=0. But, we’ve muddled along somehow.
It can if you have someone knowledgeable driving it. Otherwise it gives out wrong or outdated information enough that it would quickly cause bugs in production code.
And that’s the thing: it can do narrow scopes of a job well, but ask it to do the entire task and it will often mess up somewhere, just enough to cause a snowball effect of failures by the time it’s “done”.
If your job is copy-pasting Stack Overflow responses, sure. If you work on something specific, need to talk to clients, need to brainstorm ideas, it's still very meh at best.
ChatGPT has been here for, what, 24 months? The unemployment rate is the same as ever, maybe a bit on the lower side if anything. If it were the miracle they promised, we would see it everywhere: GDP, unemployment rate, productivity, etc.
Some oddly defensive reactions here. Wall St losing faith doesn't mean the tech is bad, just that they're not happy with the returns they're seeing. Plus, AI seems particularly susceptible to these extreme hype cycles. If it's legit, then it will pan out eventually. If not, then it will lay the foundation for something bigger.
They're not odd. These reactions pop up to all criticism of the current tech hype bubble. Happened with metaverse, NFTs, blockchain, etc.
They know that the emperor has no clothes. That the moment people in power start asking "So... where's all the profit on this investment?" it's game over.
AI is supported by massive subsidies and R&D from the big tech giants. Wall Street demanding that Microsoft stop wasting tens of billions on investment in a product that does not generate a profit for Microsoft will kill AI as it exists right now. OpenAI will raise their prices significantly, killing most-to-all of the startups relying on them.
This is especially dire because there are copyright and regulatory changes going on. An AI winter now means there's a substantial risk of the next "AI summer" being crippled by actually having to abide by copyright and being subject to regulations.
> OpenAI will raise their prices significantly, killing most to all of the startups relying on them.
Have you heard the good news? The price per million tokens has fallen from $36 to $0.25 in the past 18 months.
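Back of the envelope, using a made-up monthly volume just to show the magnitude of that drop:

    tokens_per_month = 50_000_000                       # hypothetical startup usage
    old_cost = tokens_per_month * 36.00 / 1_000_000     # price 18 months ago
    new_cost = tokens_per_month * 0.25 / 1_000_000      # price today
    print(old_cost, new_cost)                           # 1800.0 vs 12.5 dollars per month

A roughly 144x reduction in unit cost is the opposite of what you would expect from a vendor about to squeeze the startups built on top of it.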
> actually having to abide by copyright
Copyright is being eroded, first by the internet and its copying capabilities, then by social networks and sharing, and now by generative AI. You can't make a profit owning expression in this distributed and interactive system; copyright was invented in the age of passive consumption of content. When you do a simple search you can find many alternatives to choose from. Any new work has to compete against decades of accumulation.
On top of that, AI is not really infringing, or at least it's a bad infringement system: it's much slower and more imprecise than just copying, if infringement is what someone seeks. It's a guided remixing tool; there's a human shaping its output. How can a model 1000x smaller than its training set really be infringing? It has no space for all the exact details.
OpenAI accumulates trillions of tokens in chat logs, eliciting experience from its users, it is crawling the tacit knowledge reservoir of its user base. That means it collects better information than web scrapers, information that would be normally lost. They can just sit there and people will bring data, guidance and feedback to the model. Do you see the business value in collecting practical experience and repackaging it?
Same for me. I wear glasses and I have tinnitus, which means that I can download all the movies I want. As long as I do not fully enjoy them of course.
Honest question: did the crypto hype cycles “lay the foundation for something bigger”?
I suppose there is an argument to be made that it pushed Nvidia to ramp up GPU production and lean into the compute market rather than gaming. Maybe gave the world some extra experience with hosting large GPU farms that are needed for AI training.
I don’t think I’m sold on either of those, but I’d be curious to see others discuss it.
A.I. won't be a bust, but your investment in something simply because it "uses AI" might not be a smart choice.
I can't blame people for attaching "uses AI" to their pitch to get funding. But I will blame people giving them money for not being able to tell the difference between something revolutionary and something that never needed AI in the first place.
This has been a huge frustration with AI investments, as I'm invested in the industry and "AI ETFs" just invest in anything with AI on the website. Investors have absolutely no idea how AI works or what it will disrupt, and I genuinely think 99% of all investor sentiment towards AI is nonsense.
The company I work for tries to add AI/LLMs to everything; instead of trying to improve/fix the underlying problem, they now just add the magic AI and everything is “perfect”.
As an ML engineer and AI developer, I don’t see the real value at all, not to mention the added cost of using LLMs.
Going down the tangent of people working in the industry...
I unwittingly fell into low-level coding for DL software stacks about 7 years ago.
At first I was merely uninterested in the topic, compared to my teammates.
Now I think there's a serious possibility that LLMs and other new DL capabilities will be a net negative for society. I'm actively trying to get other work.
I know that if I don't do the work others gladly will, but the status quo sears my conscience.
AI progress has been dramatic, but the revenue doesn't even come close to the infra spend. Also, companies like Meta have started to commoditize the models. Consumers couldn't give a shit about Copilot-enabled PCs.
The actual revenue is in the future and requires more R&D. Many barriers to robotics and autonomy have started to fall. Drug development could greatly benefit from some of these advances.
AI spending is more about fear (of falling behind) than greed.
For example, just this morning I was working together with a coworker and he used ChatGPT to ask for some examples. Unfortunately they didn't work; after reading the actual documentation it turned out that the structure had to be slightly different.
This is the impression I get from everything that has been generated with "AI": on the face of it, it looks great, but as soon as you start going into detail it's not right or just plainly wrong. The generated code had six fingers. They might be able to improve this eventually, but I don't think it will ever be fixed because of the nature of LLMs.
Any new tech gets hyped to the moon because of theoretical use cases. The tech then comes up against genuine limitations and reality sets in. Work is done to get around the limitations, and useful products get built. However if you compare what got built to the theoretical products at the height of the hype cycle, you will find them wanting.
AI (as in GenAI) will be no different. I would not call that a bust.
Depends on where you live and what access to mainstream banking you have. Nowadays about 95% of the money I spend goes through crypto, and honestly, I don't want to go back.
Well, the number of people "in tech" worldwide is a hundred million or so. OpenAI reportedly has 11 million subscribers, and presumably a lot of them are developers or similar. But I also know a bunch of people in advertising who use ChatGPT all the time.
Fascinating! It's the other way around for me. Most of the tech people I talk about it with dismiss it as a stochastic parrot and very much don't trust it, and it's the non-tech people that rave about their new friend that knows and will explain everything.
Stochastic Parrot - a name that aged like milk. How can it be a parrot when it is doing few-shot and zero-shot tasks? Humans also make mistakes, and we're also using leaky abstractions to operate.
We're "parroting" things we don't really understand. When we take a pill, we don't know what's inside. When we go to the doctor, we don't study medicine first. When we write an app, we don't know the minute details of the operating system or frameworks we use. As software developers we know the limits of abstraction and that no abstraction works everywhere.
Our society is functional, not based on genuine understanding. Can't be any other way. There is no central understanding, not in the brain and not in society. Remember the parable of the Elephant and the blind men? That's us.
Well, there's one difference right there. When I sell my software I provide a warranty and will refund if the software is not substantially functional as described in the documentation and marketing.
A couple of consulting jobs have also required that I carry insurance to cover possible restitution due to errors and omissions - because yes, I do make mistakes.
As I understand it, the stochastic parrot providers require indemnification before you can use their products, so they are clearly held to a lower standard than humans.
Yeah. Every time Nvidia or OpenAI come up and someone asks "where's their moat?", I giggle, because that just makes me more convinced that it's humans who are the stochastic parrots, rather than that LLMs don't have something resembling intelligence.
Lots of people are using generative AI for roleplay - don't know if this is a significant contributor to OpenAI's revenue, but it's a (the?) major non-corporate use.
Do you mean things like character.ai? It seems they were successful, but not wildly so, given that most of the researchers (incl. the founder) just moved to Google, and character.ai plans to continue using off-the-shelf models. If the business had been growing very fast, I'd have thought that the founders would have stayed on for a larger payday.
Really? Everyone outside of tech that I've talked to uses it. I have a new mail carrier, and I was chatting with her and she mentioned that she used ChatGPT to complete her job application for the carrier position.
Business expenses in the real world aren't always "only spend on things that directly increase profit." It's harder to quantify a productivity increase.
The ROI of generative AI can be debated, but at the least the companies are locked into year+ enterprise contracts which is actualized revenue for OpenAI. Those contracts may or may not get renewed.
We likely overestimate AI's short-term impact, and there might even be a financial bubble about to pop. But I also think we underestimate the long-term impact. We're building absolutely amazing capabilities faster than many would have thought possible only a few years ago - I especially think applications to science and engineering will be huge and transformative.
Some context that sometimes gets lost: this sort of research is, to put it crudely, content marketing for an investment bank.
So when the article says "the firm began hosting private bull-and-bear debates" we can be sure that each attendee was met shortly thereafter by their friendly Goldman salesperson to talk about how to trade whichever way they wanted.
Organizations (Salesforce, Adobe, etc.) that have lots of proprietary data (chats, documents, clickstream, etc.) will benefit from applying AI tools to their products to automate work. It's almost certainly folly for those companies to build novel foundation models themselves.
There's also the possibility that this is a great and useful new technology, but it becomes so cheap and democratized that there will be no big winner (aside from Nvidia's stock price in the short to medium term).
There may well be a difference between the value for society, which will probably be a lot, and the return for investors, which may not be good. At the moment I'm enjoying using the AI but not paying anything. It may go a bit like the airline industry, where overinvestment in planes tends to lead to price wars and the investors losing overall.
IMO, if AI pops, for the top 3 companies by market cap:
Apple might not notice
MS will be in trouble
Nvidia will take a hit but as the premier designer of high-throughput compute devices and owner of the ecosystem, they’ll be well positioned to take advantage of the next thing.
I pay OpenAI and Anthropic directly for API access that is hooked into a plugin on my IDE. I also use Chatbox, an app that lets me chat with them, using the API directly.
No idea if they are making money on my use of their services, they haven't made that information public.
If the price was higher I would still pay it. Given that they keep offering lower prices on newer models, I am inclined to say that they are making money on the incremental use of the models (which is basically the cost of renting a GPU). No idea if they are making enough to recoup the cost of research.
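For context, the "incremental use" being paid for is literally metered tokens. A rough sketch of the kind of call an IDE plugin or chat app makes, using the OpenAI Python SDK (the model name and prompt are placeholders, not a claim about what any particular plugin does):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; whichever model you actually pay for
        messages=[{"role": "user", "content": "Explain what this stack trace means: ..."}],
    )

    print(resp.choices[0].message.content)
    print(resp.usage.total_tokens)  # the metered bill is based on this token count

Whether that per-token price also covers the research and training spend is exactly the part they don't disclose.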
When progress decouples from investment is when I'd say there is a true bubble. But so far progress has continued to soar.
Nobody thinks GPT4-o1 or Sonnet 3.5 is going to change the world. People are looking at the past 2-3 years though and extrapolating forward, and right now they don't see any evidence that progress will slow down. Quite the opposite actually.
As for where that value will manifest itself the most in the stock market? That's probably where this guy is getting hung up.
The risk is that the combined market caps of AI firms now exceed 1 trillion dollars. This market cap is likely supported by several billion dollars in revenue, but margins are debatable outside of Nvidia due to the capex/opex of inference, training, and data acquisition.
These valuations only make sense if the future free cash flow of these AI firms reaches the ~100 billion dollar mark within the next 3-5 years. It's somewhat unclear what the path there is if one takes the position that
a) 1 billion people are not going to spend 100/year on ChatGPT like subscriptions.
b) White collar productivity is not going to increase by 80%.
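To see why (a) maps onto that ~100 billion dollar figure, the back-of-the-envelope arithmetic is just:

    subscribers = 1_000_000_000          # the "1 billion people" in (a)
    price_per_year = 100                 # the "$100/year" subscription in (a)
    print(subscribers * price_per_year)  # 100,000,000,000 -> roughly $100B per year

And that would be revenue, not free cash flow, so the actual bar is even higher than the subscription math suggests.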
> Nobody thinks GPT4-o1 or Sonnet 3.5 is going to change the world.
I do. AI progress could magically hit a brick wall right now and never advance any further, and o1 would still change the world more drastically than you can imagine. But AI progress will also not magically hit a brick wall.
I suppose for that reason o1 specifically will not change the world, because it will be superseded too soon to do so. But it would if it weren't.
Agreed. o1 is a step in the right direction. There are half a dozen improvements that could be made along the same lines without introduction of any new technology.
Uh, a lot of people did, and continue to, think AI is going to change the world - including Chat GPT.
Hell, read any of the comments from the last year or two on any HN thread about it. Plenty of people claiming it’s the new god, it changes everything, replaces all knowledge workers, etc.
It is freaking amazing that this works almost perfectly. Seriously, it's mind-blowing. The problem is, when you keep in mind how the technology works, you realize that the "almost" can never be removed. That's fine for some use cases but not for others. I understand that human translators make mistakes, but they have a conception of truth and correctness, and that matters.
We have some people that read Mandarin and double check the output once in a while. If it didn't work well the story would quickly become incoherent and make no logical sense because chapters are translated on their own.
The common failure mode is names and genders, for some reason it likes to swap names and genders of characters.
My point with regards to 2 is that it would have to maintain consistency across translation runs. Entire novels don't fit in the context so it can't make up a logically consistent novel across prompts.
When I do the translations, I actually don't even include previous chapters in the context.
So the last novel I completed is a long one, but not unheard of; I think it had 6 million characters. Now I don't know how many tokens that would be, but I doubt most models can support that large a context.
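For a rough sense of scale, you can estimate the token count with OpenAI's tiktoken tokenizer; this is just a sketch, and the characters-per-token ratio varies by tokenizer and by text:

    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")  # tokenizer used by recent OpenAI models
    sample = "以后你再使用我照片请使用这两张任何一张都可以"  # short Chinese sample
    tokens_per_char = len(enc.encode(sample)) / len(sample)
    print(tokens_per_char * 6_000_000)  # rough token count for a 6M-character novel

Whatever the exact ratio, 6 million characters lands in the millions of tokens, far beyond the 128k context of GPT-4o or the 200k context of Claude Sonnet, so translating chapter by chapter without shared context is pretty much forced.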
And really, consistent editing, and consistent choices about what gets translated versus what is left Romanized, are rather important with many Chinese novels.
You can get something you can figure out, but I doubt you get something you really enjoy.
I don't think you should be getting downvoted for this, because you're pretty much correct. People's sense of what's normal has shifted so much in the last two years that LLMs now seem quaint, but the impact they've already had on things like the quality of machine translation is huge.
If you go to plain ChatGPT, a system not specifically designed to translate languages, and tell it to translate "以后你再使用我照片请使用这两张任何一张都可以这是我们结婚的照片" to English, you get a better result than any machine translation from just a few years ago. For example, it gets from context that "这是" has to be translated to a plural phrase in English. Even right now, Google Translate still gets this wrong.
I'm worried that a lot of the impact these technologies have will eventually turn out to be overwhelmingly bad. Google Photos is already partially broken by the amount of shitty AI images it returns. But the fact that they do have a huge impact can't be denied.
I don't know what exactly qualifies something as "changing the world", but if LLMs don't qualify, then not a lot of things do.
1) Most people have no need for translation software.
2) Before LLMs we already had decent free translation in the form of Google Translate, using pre-transformer NN models.
Personally I still use Google Translate as my go-to for translation, rather than Sonnet 3.5. Maybe Google now uses an LLM under the hood, but I haven't noticed any increase in quality in the last few years.
You can test it by comparing human translations with LLM translations. The results are pretty close. Like I said in another comment, the common failure mode with Mandarin is around names and genders.
> I'm in a few communities that like to read novels from China / Korea. Claude Sonnet is able to translate Mandarin to English almost perfectly.
What novels are you reading?
This is fascinating to me, because the world is quickly becoming a place where we have to choose which information from the unlimited information stream to consume. It feels like unlimited opportunity cost. I, for one, don't think I'll ever have enough time to watch every Academy Award nominated film (let alone all of the winners). And that's just one type of information.
You're going after some obscure (?) stuff. What brought about the interest?
Xianxia, I expect. Distinctly Chinese fantasy webnovels, set around cultivators seeking immortality, that go for 6000 chapters and take the main character from being the weakest guy in the weakest part of a world to being a god-like being who pinches galaxies between their fingertips.
As for why do people read it? Well.. there's lots of it, it's free and it's inherently progression fantasy most of the time which can often be addictive.
One must simply be careful not to read forbidden scriptures... and develop the Dao of Brainrot; it's sadly an ever-present danger.
I like Xianxia because there is actual power progression, compared to many of the new mangas where characters go to max power in about half a chapter...
Also it is often quite different fantasy, and sometimes the world building can be truly imaginative and different, whereas a lot of others are rather too formulaic.
Yeah, I'm one of those readers who adore world building; I can honestly take cardboard cutout characters so long as the world building is great. Coupled with good progression, honestly, I could read for a month solid. I have read for a month solid; it was glorious!
The novels I like the most right now are "Mysteries of the Immortal Puppet Master" and "Eternal tale" which are both just fun Chinese fantasy novels.
> What brought about the interest?
They are very unique coming from the perspective of an American who has mostly read books published by Western authors. There are all these unique fantasy tropes based on Chinese history that are like a parallel branch to Tolkien-based fantasy. Also, you can clearly see that they have completely different value systems, and ironically you can tell they are comparatively less censored.
A bit of a tangent, but regarding the translation, can you compare it to the work of a human translator? I often find translated works unsatisfying. While the fault may well be with me, I thought The Three-Body Problem was a pretty poor piece of fiction (yes, I know, HN loves it, mea culpa etc.), but I wonder if I dislike the original work or the rendering in English.
I thought the translation of the first of the trilogy was stilted and flat, I could appreciate and enjoy the underlying story but the prose felt like a mechanical translation. The latter two books though I thought read much more naturally.
There are now equal measures of skeptics and alarmists to match the sales and hype men.
So, given that there is yet another article about 'skeptics', do they really have some new take on this versus all of the 'skeptics' from a month ago, or a year ago?
Thanks, “head of stock research”. Definitely a real job that’s not at all made up. I’m sure he did a lot of chin stroking and graph glancing to come to this technical decision! Someone should let all the scientists know that they’re wrong, and intuitive algorithms are “not needed”…
EDIT: I'm 'posting too fast', so I'll break the rules instead and post my response here:
Thanks for the polite reply, despite the disagreement! No offense intended to any finance folks on here, but it's a murderously harmful industry built on lies. Being an expert in evaluating businesses in general is an absurd job, IMO; the whole system is a mix of gambling and coercion, and the veneer they put on it of "making the right plays" and "reading the market" is a tiny, tiny percentage of what they actually make money from, if it even works at all.
In other words: what would you study to become an expert in the concept of predicting the future of all human activity? Here's the report discussed in this article: https://www.goldmansachs.com/images/migrated/insights/pages/... AFAICT, his entire argument boils down to this:
> In our experience, even basic summarization tasks often yield illegible and nonsensical results. This is not a matter of just some tweaks being required here and there; despite its expensive price tag, the technology is nowhere near where it needs to be in order to be useful for even such basic tasks.
This analysis needs citations of academic discussion on the specifics of the new technologies and how exactly they will fit into existing ones, not "we tried it around the office and couldn't get it to work". Certainly the bosses at Xerox failed to see any use for PCs in their own lives; after all, it's quicker to call someone on the phone, and cheaper to just write a letter!
Just like desktop PCs were a clunky presentiment of the countless applications of miniaturized computers (phones, smart appliances, automotive features, and microcontrollers in general), today's chatbots are a clunky presentiment of countless applications of intuitive algorithms (adaptive UX, actually helpful smart speakers, assistive technologies for the disabled, and self-improving systems in general).
For the second point, here's his quote:
> Overbuilding things the world doesn’t have use for, or is not ready for, tends to end badly.
This quote is either misguided or a completely empty tautology, depending on how much leeway you give to "overbuilding". The world definitely has a use for intuitive algorithms, but it definitely doesn't have a use for too many intuitive algorithms; that's what "too many" means!
TBF the "not needed" phrasing was a complete fabrication by NYT, so that's my bad.
> Thanks, “head of stock research”. Definitely a real job that’s not at all made up. I’m sure he did a lot of chin stroking and graph glancing to come to this technical decision! Someone should let all the scientists know that they’re wrong, and intuitive algorithms are “not needed”…
Why would you think "head of stock research" is a made-up job? It is obvious that investors need to research the stocks of the companies they invest in to figure out if they're good investments.
> Someone should let all the scientists know that they’re wrong, and intuitive algorithms are “not needed”…
It's your problem if you confuse "people trying to hype their AI products" with "scientists" and "not providing the hyped value" with "not needed."
I hate to admit it, but we're basically in the same boat as him: white collar workers who sit in air conditioned buildings (or at home) and rely on power and internet all the time to be useful. Plus, how many tech workers are actually doing groundbreaking scientific work? Probably less than 1%.
Eh. Many programming jobs are connected to the actual creation of real-world value. As in, I can point to actual real-life people from every programming job I've had who will say: "Yes, my life was improved by the use of your programs." The fact that I do it using a computer in a climate-controlled office isn't really relevant.
Stock traders are so abstracted from real human value that it is not a stretch to say that their jobs as a whole do not add value to the world. If we had half as many people working on financializing the world, capital allocation would not suffer. At least that's my belief.
Fckn bean counters, this just shows what's wrong with the current economic system.
They've thrown money at AI, and, honestly, it isn't quite ready for it.
But at the same time, it's an amazing advance, that's ignited the field and opened whole new vistas.
Obviously, current AI is missing some very significant pieces of the puzzle to advance, but the current state is truly a giant step from just 10 years ago.
What was the 'profit' in the first moon landings, an iconic historical achievement, for example?
The 'no profit in it' naysayers just show that the capitalist model is not actually the thing responsible for the amazing innovation of the last 100+ years, and is wholly unsuitable for the next.