Google's live from Paris event private/deleted immediately
287 points by colesantiago on Feb 8, 2023 | hide | past | favorite | 373 comments
Just watched the 'Google Live from Paris' event and it looked like a non-event to me.

It seems that the livestream event was set private after it ended. (It was unlisted to begin with) They even forgot the phone used to demonstrate multisearch.

This suggests to me that Google is finally getting disrupted and is scrambling out of desperation because of the release of ChatGPT.



You know, whatever the merits (and it sounds as if there were very few) of Google's presentation (or Pichai's recent non-statement) might be, I would have expected the commentariat here at HN to display way more skepticism about language models as a replacement for search. If there's a crowd anywhere in the world large enough to leave 200+ comments on a post like this, you'd think that the one at HN would understand these models' currently unsolved problem with lying and fabulation.

And yet when I read the comments, I see this sort of calm acceptance that of course this technology is going to take over search. Why is everyone so confident? The performance of these models is simultaneously jaw-dropping and absurd. With no proposed solutions to the problems that they fundamentally face, why this level of belief that their ascendance is inevitable?


I'd like to have more discussion about the attribution problem, specifically.

We now have a couple of the players saying they're working on it, or showing demos. But from what I can see, none of these _actually_ attribute the source. They're just able to find _a_ source that fits the output, working backward from it. Often the attributed source fits, but actually disagrees with the original output in specific details. In other words, it's not backing the output at all. At most it's a "you could check out these links and compare yourself and it might help you judge our accuracy".

Is there any indication real attribution is coming?
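To make the "working backward" concern concrete, here is a toy sketch (all data invented) of post-hoc attribution: it picks whichever candidate source best overlaps the generated output, which can produce a plausible-looking citation even when the output is wrong.

```python
# Toy illustration of post-hoc attribution: given model output, score each
# candidate source by word overlap and return the best match. All data is
# invented; a real system would use embeddings rather than Jaccard overlap.

def jaccard(a: str, b: str) -> float:
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def attribute(output: str, sources: list[str]) -> str:
    # Works backward from the output: nothing guarantees the chosen
    # source actually supports the claim, only that it looks similar.
    return max(sources, key=lambda s: jaccard(output, s))

sources = [
    "The Eiffel Tower is 330 metres tall and located in Paris.",
    "Mount Everest is the tallest mountain on Earth.",
]
output = "The Eiffel Tower is 350 metres tall."  # wrong, yet still "attributable"
print(attribute(output, sources))
```

Here the wrong claim (350 metres) gets "attributed" to the 330-metre source anyway, which is exactly the failure mode described above.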


I was musing on my website about ChatGPT [1] and one of the things I noticed about it is that if you analogize it to your own brain, which has a language model in it too, we are collectively asking ChatGPT to do something we would never ask of our own language model. ChatGPT does not have the "higher level" reasoning that we're asking for, to do logic, attribution, and the other things we want.

This is not necessarily a "criticism". It is obvious that ChatGPT produces an incredibly rich and meaning-filled representation of the text, as the extension of the text it can be used for demonstrates. But we should not be trying to "tweak" the language model here and there, trying to make a language model do something it isn't really suited for. We should be working on how to hook up that higher-level functionality. It is not obvious how to do that, true. But it at least stands a chance of producing the AI in reality that people think we have now, but don't.

For that reason, I don't think attribution is coming without a significant addition to the architecture. Language models shouldn't be doing attribution; the mess you'd create of them to accomplish that would ruin them as language models. Can you imagine how insane you'd go if your internal language model tried to stuff down every instance of when you have seen and/or used the word "brain" every time you tried to use the word "brain"? That's not what language models are for. That's what the thing using the language model does. I expect AIs to parallel this. I see no reason at all, and abundant reasons to the contrary, to expect AIs to just be undifferentiated blobs of numbers. I expect them to have structures just as our brains do, and for the exact same reasons. If undifferentiated blobs of identical neurons was the way to go, our brains would work that way too; it's far simpler than what we actually have.

[1]: https://www.jerf.org/iri/post/2023/understanding_gpt_better/


You don't remember where you learned the things you know? How can you trust anything you think, then? Can such people get anything done at all? It seems basically impossible to learn new things if you can't know whether the new information is better than what you thought before.

So I don't think that remembering sources is a hard problem to solve; in fact, making the AI work well is probably easier if it remembers sources.


I don't think remembering sources is an intrinsically difficult problem.

I think it's hard for language models, and indeed, after further thought, I'm pretty sure they shouldn't remember sources: https://www.jerf.org/iri/post/2023/understanding_gpt_better/

But the problem in general is probably not a particular stopper.

A bigger problem is people thinking that we should be taking language models and using them as AIs themselves, rather than seeing them as a component of an AI; the more I think about it, the more obvious that seems. Language models aren't what we want. They're a necessary step to what we do want, and I don't expect them to go anywhere. But history will look back and laugh at this misguided attempt to make them be the entire AI.

Also, if you are implying you can just spew out the exact sources you learned some fact from... no you can't. You think so only because you haven't tried. Ask someone around you for five random "facts" for you to cite your source from, then try to write on a sheet of paper where you heard this fact from for the first time. Then hand the sheet of paper back to your friend and have them verify your claims... so, note, immediately you have the problem that an unverifiable claim is a fail, so "in elementary school" doesn't cut it any more than you can cite "at the library" as a source on your high school essay. I'm not looking for words that "identify" a several year span of time. You need to give me book, page, and line, or timestamps on a video, or something like that. No, you can't. Nobody can just spit out a bibliographic citation from memory for when they learned that Pluto isn't a planet. I can give you "in the news" for that, sure, but what news source was first? What is the title of that article? Who wrote it? What was the exact date? Of course not.


Steve Hsu has recently claimed[1] to have started a startup in stealth to solve the hallucination problem over a corpus as large as 10,000 pages of dense, college-level textbooks, to the point where it can answer the end-of-chapter questions in the textbook with almost 100% accuracy (albeit not math questions) - I hope without using the answer key. Not sure if their approach is more robust than Supabase[2] or similar approaches, and no indication of whether it might scale up to something on the order of a search engine corpus, but it's something.

[1] https://www.youtube.com/watch?v=peHkL_MaxTU&t=1558s [2] https://news.ycombinator.com/item?id=34695306


Yep, this resonates heavily with my own thinking. Thanks!


> But from what I can see, all of these don't _actually_ attribute the source. They're just able to find _a_ source that fits to the output, working backward from it

I feel like it should be possible to build attribution "simply" by finding good results via traditional search (or even using LLM embeddings), then asking a LLM to summarize the sources. Then you can show the attribution. This would (to me) be much more meaningful. Especially for specific searches ("I want a red oversized couch that's got good back support"). Don't tell me about couches then find a random link; find a couch then tell me about that couch.

Is anyone doing this? IDK, maybe there is a reason that it's not happening, but it seems at least as easy as all the other crazy (and cool) things happening in this space.
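A minimal sketch of that retrieve-then-summarize shape, with a toy corpus, naive keyword retrieval, and a constructed prompt standing in for a real search index and LLM call (all names and documents here are made up):

```python
# Sketch of "retrieve first, then summarize" attribution, as opposed to
# finding a source after the fact. The corpus, retrieval, and prompt are
# all toy stand-ins; a real system would use a search index and an LLM API.

def toks(s: str) -> set[str]:
    return set(s.lower().replace(",", " ").replace(".", " ").split())

CORPUS = {
    "doc1": "The Fjord sofa is oversized, comes in red, and has firm lumbar support.",
    "doc2": "The Alta chair is a compact office chair with mesh backing.",
}

def retrieve(query: str, k: int = 1) -> list[str]:
    # Naive keyword-overlap ranking; placeholder for real search or embeddings.
    q = toks(query)
    return sorted(CORPUS, key=lambda d: -len(q & toks(CORPUS[d])))[:k]

def build_prompt(query: str, doc_ids: list[str]) -> str:
    # The summarizer is constrained to the retrieved docs and asked to cite them.
    context = "\n".join(f"[{d}] {CORPUS[d]}" for d in doc_ids)
    return f"Answer using ONLY the sources below, citing ids.\n{context}\nQuestion: {query}\nAnswer:"

ids = retrieve("red oversized couch with good back support")
print(build_prompt("red oversized couch with good back support", ids))
```

Because retrieval happens first, the citation is the actual input the model summarized, not a link found after the fact.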


I think this is what the demos with sources do. But they might still run into the same problem: the LLM summarization might be a bit wrong, and so now it is saying one thing while the source has a slightly different opinion.


I think Neeva is doing this!


> Is there any indication real attribution is coming?

I wouldn't hold my breath. Showing attributions allows people to reverse engineer how the prompt was made, and identifies weaknesses in the model. Bad for business!


I feel like all of the interesting stuff when it comes to search+LLM is in real hybridization that's still a few steps out from what's happening right now in terms of productization.

For example:

(a) Addressing the recency problem by making a model generate instructions to a browser as output token stream, so it can, well, Google for you and incorporate latest info into output by adding it to the "internal prompt".

(b) Doing something similar where you teach models how to drive computational engines to make actual, computed facts and figures, not guess them from statistics over training data.

(c) Maybe running a traditional search and then using LLMs to summarize over them.

etc. etc.

It feels increasingly clear that "complete prompt from a stale training data set" is a neat trick but doesn't solve any particular problem well enough. This also seems intuitively true: If you ask your friend a difficult question, you often don't expect them to be able to answer it on the spot. You probably expect them to have relevant experience and a rough understanding of the topics and the tools to look up or figure out the rest. Similarly, it feels intuitively wrong to expect a LLM to come ready-made with all knowledge baked in. It feels more like we'd want something with the working knowledge/method to plan and execute fact discovery and computation.

It's fun to play around a bit with ChatGPT yourself and try to approximate what a more complex pipeline might be. For example, instead of asking it to answer a question, ask it to re-state the question or make a list of questions prompted by the original question, then let it "contemplate" those or propose a method for how to go after answering them, etc. It runs aground pretty often, but sometimes the results take you a lot further than the direct approach.
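A rough sketch of idea (b), driving a computational engine: the harness parses tool invocations out of the model's output, executes them, and feeds the results back into the transcript. The `scripted_model` below is a stand-in for a real LLM, and the line protocol (`CALC:`, `ANSWER:`) is invented for illustration.

```python
# Minimal tool-dispatch loop: the "model" emits tool invocations as text,
# a harness executes them and appends results to the prompt/transcript.
import ast, operator

def calc(expr: str) -> float:
    # Tiny safe evaluator for + - * / arithmetic (no eval()).
    ops = {ast.Add: operator.add, ast.Sub: operator.sub,
           ast.Mult: operator.mul, ast.Div: operator.truediv}
    def ev(n):
        if isinstance(n, ast.BinOp):
            return ops[type(n.op)](ev(n.left), ev(n.right))
        if isinstance(n, ast.Constant):
            return n.value
        raise ValueError("unsupported expression")
    return ev(ast.parse(expr, mode="eval").body)

def scripted_model(transcript: str) -> str:
    # Pretend model: first asks for a computation, then answers from the result.
    if "RESULT:" not in transcript:
        return "CALC: 37 * 12"
    return "ANSWER: " + transcript.rsplit("RESULT: ", 1)[1].strip()

def run(question: str, model, max_steps: int = 5) -> str:
    transcript = question
    for _ in range(max_steps):
        step = model(transcript)
        if step.startswith("ANSWER:"):
            return step.removeprefix("ANSWER:").strip()
        if step.startswith("CALC:"):
            transcript += f"\nRESULT: {calc(step.removeprefix('CALC:').strip())}"
    return "no answer"

print(run("What is 37 * 12?", scripted_model))
```

The point is that the number is computed, not guessed from training-data statistics; the same loop shape works for (a) by swapping in a search tool instead of `calc`.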


> This also seems intuitively true: If you ask your friend a difficult question, you often don't expect them to be able to answer it on the spot. You probably expect them to have relevant experience and a rough understanding of the topics and the tools to look up or figure out the rest. Similarly, it feels intuitively wrong to expect a LLM to come ready-made with all knowledge baked in.

Not sure. In theory, this being computers and all, I sort of do expect my search engine to effectively come "ready-made with all knowledge baked in". I do not expect Google to start a new web crawl because I gave it a novel query, and I don't expect a language model (certainly not one being used for search purposes) to say "hang on, I need to go read some more before I answer that".


I've seen some studies that call what I mean a "chain of thought" approach (also reasonable is just "higher-level reasoning", but maybe we need a term for the engineered implementation), and I like that terminology. I don't know, it's a gut feeling, but a one-shot prompt completion feels intuitively too simple for me - it seems like we need to build up more elaborate "thought processes" next where we use these methods for individual steps in those thought processes, but also have steps that can carry out a computation or go on a fact search.

A bit similar, again, to how humans will answer plenty of questions on the spot from working knowledge, but are also able to classify when they need to drop into a slower "let's do the math" mode and then carry that out.

From these "thought processes" we may then be able to generate training data sets that reasonably cache "all possible accurate answers" as you demand, but it's not clear to me whether that caching will really save computational cost or just redistribute it to different times. At that point it's probably a latency/throughput question of when you do vs. defer the "thinking".


That's undoubtedly true for "reasoning". I don't think it is true for "search".

When I ask <search engine> a question that is "demonstrably" answerable based on "reliable" sources (e.g. what is the capital of Mexico), I don't need or expect it to reason. I want it to (a) tell me the most likely answer, and probably (b) provide a list of URLs that also provide an answer, probably the same one.

And that's what Google and others do right now; ignoring the results presentations being skewed by commercial interests, they do it rather well.

Figuring out when a question is not answerable in this way, but requires reasoning, is certainly a part of the challenge.


What viability are top execs seeing in a business model that is so fragile as to be defeated by reverse-engineering these prompts...? There is obviously no competitive advantage here for very long. The systems must provide an additional service on top of these models, something beyond "attaching it to a publicly-available input form". It needs to aid in reasoning, not act as a god.


Look at the images in the announcement from yesterday: https://www.theverge.com/2023/2/7/23587454/microsoft-bing-ed...

Those look like citations, to me.


> Is there any indication real attribution is coming?

Looking at the main article on HN yesterday around it: https://www.theverge.com/2023/2/7/23587454/microsoft-bing-ed... you can see citations in the examples provided.


I've had those Bing images in mind specifically when writing the above, because I seem to recall them phrasing it very carefully as "you can look at these links for more information", not "we retrieved this info here".


GopherCite, Sparrow, WebGPT, and most likely the new Bing (WebGPT2) all first find sources and then generate output, so you're just mistaken. The first two specifically have people check whether the selected source snippet supports the answer.

Source: worked on GopherCite.


> If there's a crowd anywhere in the world large enough to leave 200+ comments on a post like this, you'd think that the one at HN would understand these models' currently unsolved problem with lying and fabulation.

if i'm being frank, that doesn't match my experience with HN at all. chatgpt is the hot new thing, of course people here are preoccupied imagining a future powered with it.

there are a bunch of different types of people here, with different perspectives that might be at odds with one another, but they don't necessarily comment on the same articles.


in general, i often post similar comments on HN, disputing the idea that there is such a thing as "HN groupthink".

what puzzles me here is that the technical limitations of language models (i refuse to call them "large" because the sizes are going to change dramatically) seem so obvious and so well known that i just find it hard to believe there are more than a handful of people on HN who uncritically accept the idea that Bing is trying to sell.


With that said, I'm not sure the cynics on HN are scoring much better (the "we've predicted 5000 of the last 10 major failures" meme). The cynic in me said crypto was overhyped by con men and mostly useless, and I was right. At the same time I said the same about most social media, and yet the masses flocked to it to give their personal information away.


No worries, I was #2 at amzn and said it would never work :)


> disputing the idea that there is such a thing as "HN groupthink".

There 100% is HN groupthink. Any doubts about it should have been erased by observing HN’s position on bitcoin and cryptocurrencies.


Assuming that Google is in a panic over ChatGPT, as suggested by the HN commentariat and the "tech" news media, then arguably this only highlights how little confidence Google has in the quality of its search engine and the trustworthiness of the web as a source for information. For example, instead of trying to match or beat ChatGPT, whatever that would entail, Google could instead focus on illustrating the shortcomings of conversational AI versus a search engine as a reliable and trustworthy means of locating information on the web.

As usual, it appears "tech" companies have an extremely warped view of web users as being non-discerning consumers of whatever garbage "tech" companies feed to them. Perhaps online advertising services companies are not a good choice to serve as arbiters of data/information search and retrieval.


> arguably this only highlights how little confidence Google has in the quality of its search engine and the trustworthiness of the web as a source for information.

I think they are realizing they killed their golden goose by filling it with trash ads and it only worked in the absence of something more accessible to the public. They were drunk at the wheel and crashed into (for the first time ever) a competent competitor. Now they realize they need to sober up quickly but their organization is lazy and bloated, they’ve taken their users for granted and abused them at the altar of advertisers, and generally done nothing for two entire decades to garner any form of loyalty.

They’ve got some work cut out for them. From my perspective, Google could come out with a ChatGPT killer and I still wouldn’t use it, because the company has a track record of arbitrarily killing off any product I’ve actually liked.


Absolutely. In part, my comment targets Google itself as much as the HN commentariat, since the corporate communications, as you note, suggest a distinct lack of confidence in what they already do, and a great deal of over-confidence in what LMs might bring.


That thread is mostly about Google completely messing up their response to what Microsoft announced yesterday.

I don’t see that unanimous acceptance, there are a lot of interesting discussions about the shortcomings. There are many comparisons between Google results for examples, with also potential solutions to problems you cite like https://news.ycombinator.com/item?id=34709883

It’s not about replacing search overnight, but if it can take even 10% of search volume that would have serious consequences for Google.


If it takes even 10% of search volume (in its current technological form) then those of us working in this domain have failed, miserably.

I have no idea what Microsoft is thinking. It's like a state hiring a known liar to be its star witness in some notable number of court cases.

[ EDIT: OK, so one thing they might be thinking is to just use this sort of technology to generate readable/listenable summaries of search results obtained by other means. I can see much less wrong with that, but it's also not really the same technology at all as ChatGPT et al. ]


Heh, this is where I think you may be wrong about the future....

We vote in known liars all the time. People tend to like a good story better than the truth. If this thing is good at storytelling we could be in trouble.


...It's like a state hiring a known liar...

Even worse than a star witness, the US let a known liar run their entire country for 4 years... So, frankly... people will love ChatGPT and other similar clones - people love bullshit.


The disruption isn't hypothetical. People like me are excited about the technology because it already has disrupted search for us. My brother has been using it for the majority of his queries because he doesn't speak English and ChatGPT is a gateway to the knowledge that only exists on the English-language internet.


I wasn't disputing that the disruption exists. I was questioning whether it should exist, given the technological limitations of LMs at this time.


Microsoft is doing it right. They display search results on the left and ChatGPT results on the right. It's up to you to decide what is accurate knowing the results on the right are supposed to be a summary of what is on the left.


I am not really disagreeing with you, but just wanted to add:

Probably the secret sauce to using LLMs for search and other forms of information retrieval is first taking a query and using traditional lookup to find likely relevant information. Then aggregate this data, provide it as context, append the query, and pass it to the LLM. Some post-processing is probably a good idea as well.

I used to do this with SQuAD-style QA models.
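That pipeline might be sketched like this, with a crude extractive "reader" (picking the best-matching context sentence) standing in for a SQuAD-style model; the context text is invented for illustration:

```python
# Crude extractive "reader" over aggregated context: pick the context
# sentence that best matches the query. A stand-in for a real extractive
# QA model operating on retrieved-and-aggregated context.

def best_sentence(query: str, context: str) -> str:
    q = set(query.lower().split())
    sentences = [s.strip() for s in context.split(".") if s.strip()]
    # Score each sentence by word overlap with the query.
    return max(sentences, key=lambda s: len(q & set(s.lower().split())))

context = ("Paris is the capital of France. "
           "It hosted a Google event in February 2023. "
           "The Seine runs through the city.")
print(best_sentence("what is the capital of France", context))
```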


> I would have expected the commentariat here at HN to display way more skepticism about language models as a replacement for search.

Because plenty of people don't see it necessarily as a replacement for search, but as a useful adjunct.

More to the point though, why do people always want to define the "HN commentariat" as some monolithic bunch? I've seen tons of comments and articles on the front page pointing out many issues with LLMs, from attribution, to confident bullshitting, to bias issues. Heck, there was plenty of discussion about how a big proposed reason for Google's stock crash yesterday was that the ad Google used to highlight questions their LLM can answer gave a confidently incorrect response.

Whenever I see comments that treat the rest of the HN audience like some monolithic group, especially in the face of plenty of evidence to the contrary, I feel like, ironically, the poster is saying "Why don't you all agree with me?"


I work really hard not to do this, and have more than once posted a message like yours.

However, my point here was that at the time of posting, the comment threads for this HN post were largely devoid of any skepticism about LM+search, and were completely dominated by a calm acceptance that it was all inevitable. And I wasn't trying to say "all of HN believes <X>", but rather "I would have expected there to be more voices in the comment thread expressing skepticism, because I know there are some smart people here"

Google's Bard demo certainly tweaked the situation by the end of the day.


For me, it's that this recently-released product is already competing (in the minds of users, which is what counts) with a decades-old mature product. In terms of accuracy, the "but sometimes it's wrong" perspective reminds me of the initial reaction among some to wikipedia. Yep, ANYONE can edit it. Yep, they could lie... ¯\_(ツ)_/¯


There are and always have been mechanisms in place to address lying on Wikipedia, which work with varying degrees of effectiveness depending on the topic.

What is the mechanism to address lying by a language model?


> There are and always have been mechanisms in place to address lying on Wikipedia

Wikipedia's mechanisms are fundamentally unable to cope with motivated, coordinated gangs that conspire together to push a distorted perspective, particularly on niche articles. Given Wikipedia's prominence, this is a problem that will only get worse.


I would argue the state of the search engine (i.e. Google) is worse than ChatGPT. Half of the time the top of the search gives me non-answers like "it depends". I have to use `stackoverflow`, `reddit`, or incognito mode to force Google to at least give me something tangible, and I don't see Google changing in the short term, considering that for them, me going through all 3 pages of search results is engagement and ad revenue.

ChatGPT might not be ideal, but it is way better than the current offering.


Any numerical system operates only on numerical, not logical/grammatical/whatever constraints. Wrongness is built into any system based solely on neural components by the simple fact that rightness is unverifiable from within these systems' own "logical" (read: associational) apparati.

I share your confusion regarding the grave dearth of incredulity in the reception of these technologies.


I guess you can say the human brain operates only on the data it gets from our ear drums, optic nerve, etc. Might as well call that numerical input, it’s certainly not grammatical or logical, all that is internal and arises after many layers of neurons.


I agree: the human (or any organic) brain is no more "logical" than any other numerical system. We obscure this fact (mostly) successfully by virtue of the statistical regularities with which we are imprinted as a result of our experiences. This gives the appearance of logical rigor, but this rigor is only illusory.


> And yet when I read the comments, I see this sort of calm acceptance that of course this technology is going to take over search. Why is everyone so confident?

It is difficult to get a man to critically evaluate something if his future salary depends upon his unquestioning acceptance of it.


SV and Wall Street loves a smart guy who bucks the trend, big risk taker.

Mostly a question of pucker factor -- how big of a bet in favor or against, and how big of an impact on the job will you risk?


> SV and Wall Street loves a smart guy who bucks the trend, big risk taker.

Only if they hit out of the park. You don't see a lot of founders getting a second bite of the Sand Hill Road apple if they failed the first time.


Agreed heavily - and I work with these technologies regularly. There is a mania around it much like there was for crypto (though I would maintain there's more merit here than there ever was with crypto - orders of magnitude more).

There are a litany of problems that surround LLMs that are solvable, but are highly non-trivial. The present hyped demos all seem to involve mainlining the unfiltered output of LLMs directly at the user, and this approach strikes me as largely impractical, with all of the issues you've brought up and more.

Ultimately I think we will land on a few learnings, some of which are IMO pretty obvious already with existing ML:

- There are going to be lots and lots of use cases for LLMs where despite its additional complexity it won't out-perform simpler ML models, or even heuristic-based solutions. This has been a plague for AI startups forever - where the shameful truth is that the ML bits do not significantly outperform heuristic-based approaches, but the ML bits sure are useful for the hype machine. There will be more of these use cases than use cases where LLMs actually do significantly move the state of the art forward.

- In most broadly mainstream-palatable applications the LLM will need to be intermediated by many other systems - some using other ML models, and some using heuristic-based solutions. For example for search, mediating the output through a knowledge graph with reliable provenance and accuracy. Realistically there will be vanishingly few use cases that are thin-wrappers around LLMs. Real usefulness of this tech will require very heavy lifting around the core model to make it actually accurate/reliable enough to be useful on a mass scale. The many products spinning up that seem to be premised on very thin wrappers around OpenAI's API will IMO mostly not survive, both because there is no moat and because the results won't be sufficiently useful.

- Accuracy matters and will be a thorn in the side of these products for quite some time, even though it is solvable. For some use cases (mostly creative ones with a human in the loop - like creative writing or image synthesis) the lack of accuracy is livable, but for most others the propensity to hallucinate is going to be a major barrier to adoption. Right now you're dealing with an enthusiast audience that's willing to look past glaring and obvious errors in output in favor of what it could be and how far we've come, but a mainstream audience I suspect will be less forgiving. Social media virality around product failure cases will also further punish products.

Overall I think what we're looking at here is legitimately a huge leap forward, but this legitimate and well-founded excitement needs to be tempered by the fact that some (most?) players in the space seem to be getting out over their skis, and that truly productionizing this technology is going to be intensely difficult.

[edit] To be a bit less doom and gloom about this - I think the real winner here are natural language interfaces, and stepping ever closer towards computer systems that do not need to be actively learned before being used. There is IMO an over-focus on the knowledge-encoding part of LLMs (which is unreliable and prone to hallucination) and not enough focus on the language-encoding part of LLMs (which is what they actually do). For the most part voice/language-based interfaces have had limited traction because they honestly haven't been smart enough to understand user requests with a sufficient level of expressivity and fidelity. I suspect these products (and new ones) will have another crack at this problem.
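The knowledge-graph mediation mentioned above might look, in miniature, like this: claims extracted from model output are checked against curated triples with reliable provenance, and anything unsupported is flagged rather than presented as fact (the triples and "extracted claims" here are invented for illustration).

```python
# Toy knowledge-graph check: partition claimed (subject, relation, object)
# triples into those the curated KG supports and those it does not.
# Real claim extraction from free text is itself a hard NLP problem,
# skipped here by supplying claims directly.

KG = {
    ("paris", "capital_of", "france"),
    ("webb", "launched_in", "2021"),
}

def check(claims):
    # Returns (supported, unsupported) partitions of the claimed triples.
    supported = [c for c in claims if c in KG]
    unsupported = [c for c in claims if c not in KG]
    return supported, unsupported

claims = [("paris", "capital_of", "france"),
          ("webb", "launched_in", "2019")]  # plausible-sounding hallucination
ok, flagged = check(claims)
print(flagged)
```

A product built this way would suppress or caveat the flagged claims instead of mainlining raw LLM output at the user.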


Great comment, thanks. However:

> but a mainstream audience I suspect will be less forgiving.

Less forgiving, perhaps, but also, I suspect, less discerning.


Initially sure, but that's what media (formal, social, and otherwise) is for. All you need is for the thing to blow up spectacularly in a way that goes viral and your product is now tainted with "unreliable", "wrong", "makes shit up", etc etc.

Hell, it literally just happened today. Google's first public demo of Bard failed in a highly visible way in a highly public setting and that's a whole media cycle now and has sent the stock flying downwards.

Is this lethal for the product? I suspect not at this point - but if they can't wrestle this stuff under control it very well could be.


It’s difficult to disentangle people’s desire for Google to fail due to layoffs and corporate misbehavior from legitimate analysis about how LLMs will cause Google to fail.


"And yet when I read the comments, I see this sort of calm acceptance that of course this technology is going to take over search. Why is everyone so confident? The performance of these models is simultaneously jaw-dropping and absurd. With no proposed solutions to the problems that they fundamentally face, why this level of belief that their ascendance is inevitable?"

Because you forgot the nr 1 rule of fight club. It doesn't matter if it has unsolvable problems. The only thing that matters is if it has the momentum and the support to create a bubble and drive economic growth.

You're welcome.


simple answer is the consensus groupthink on Hacker News is deeply wishful

i made this same point at https://news.ycombinator.com/item?id=34699087 and got nothing but pushback how there is no real problem with ChatGPT and Bing just fixed all the problems (which didn't exist) anyway.


https://www.youtube.com/watch?v=npV4Kix7Td0

If this was a startup, they would be dead in the water. Nothing of substance, awful sound, and 9:45 minutes in they don't have a key piece of live demo equipment (the phone).

As someone else said, it's like they asked a couple of interns to rush something out in 24 hours.

No wonder the stock price is tanking. Awful display from Google.


Damn that was painful to watch. They seem utterly disconnected from how people actually use their phones in real life.

Finding elevators and ATMs with AR? Is that the future Google is selling? Who's buying? If Google Maps was the equivalent of the invention of the printing press, this would be the invention of scented toilet paper.


I've got to say it was one of the most painfully bad presentations I've ever seen, from any company or organization. Spectacular fail.


Yeah... For those who haven't seen it:

At one point they were unable to demo because there was no phone to demo on (~9:35).


I don't think that's toilet paper you're smelling


The Lens feature is not new, but it steadily keeps getting better.

But yeah, the 720p demo was bad, which is probably why they deleted the original.


This whole display was just - weird. The production values make it seem like it was the equivalent of a local meetup group, not a worldwide presentation/announcement. But coming so soon after Google's big AI announcement on Monday, it looks like the public/investors were assuming this would be a much bigger/more substantial event than it actually was.

That said, I still think this reflects very poorly on Google's organizational focus.


Maybe the people who were supposed to be running the event got laid off xD


That's so not funny, it's probably true.


I thought it was funny


Thoughts are allowed to be both funny and sad at the same time.


whatever happened to "comedy is tragedy plus time"?


We're missing the time component right now. It's still recent tragedy.


I think the correct link is this: https://www.youtube.com/watch?v=yLWXJ22LUEc (at least the sound is better here)


Compared to Apple keynotes and the WWDC, this looks…amateurish. Like a startup pitching a new feature.


Why is there a "he/him" next to his name? weird...


I just skimmed through the video and didn't see anything too terrible. To be honest, I don't care that they forgot a phone. I'd rather see mistakes like that than over-rehearsed presentations.


I am in the minority here, but for me, Google's presentations were always like this: rehearsed, with artificial excitement.


I hope Google is not pulling a BlackBerry moment. I remember when the iPhone was first announced, BB scrambled to put out press releases saying they had competing products in development. Two years later the Bold (EDIT: Storm) was released and everyone realized it was just a hastily designed iPhone clone. That marked the end of BB's dominance.

> Used to work for BB/RIM


> 2 years later the Bold was released and everyone realized it's just a hastily designed iPhone clone. That marks the end of BB's dominance.

I feel like Google is going to bleed credibility for every month that they don't have a LaMDA/LLM-enabled search integrated into the home page. You can talk all you want about the foundational models and how advanced they are, but the search product itself will require a ton of fine-tuning on real-world training data that they aren't collecting yet.

Unless Microsoft completely botches their "newBing" launch, this is probably the most potential they've had in search since MSN was launched.


> I feel like Google is going to bleed credibility for every month that they don't have a LaMDA/LLM-enabled search integrated into the home page. You can talk all you want about the foundational models and how advanced they are, but the search product itself will require a ton of fine-tuning on real-world training data that they aren't collecting yet.

I love ChatGPT as a source of inspiration, but I don't think I'd want it alongside my search results. In particular, when I'm searching the web I want results with at least some degree of accuracy. I can't trust a single thing ChatGPT says to be true.

Admittedly, random websites can also be inaccurate, but at least I have strategies for assessing whether I trust a given source.


I disagree. The other day, I was debugging an issue in a web application. Only on iPhones, dragging an element around left odd residue on the screen. I googled the issue but was completely unable to find a solution, even though I consider myself a seasoned google user. I asked the question to ChatGPT, which immediately gave me the way to fix the issue.

Having a ChatGPT answer next to the google result would have saved me a bunch of time, and I could see it coming in useful in many cases.


Exactly the same experience. ChatGPT is now my go-to source for programming problems. It is usually correct (or correct enough), and when it is, it saves a ton of time versus a Google search followed by going through dozens of StackOverflow or GitHub pages that may or may not have an answer.


Google already has an issue with accuracy - the snippets they choose are often not very correct.


Maybe, but you can at least see the context, where it came from, all that. In my house we have a habit of asking our Google Home "Hey, Google, can rabbits eat..." and then you can see where the info came from and make a pretty good decision on if this random vegetable you have will kill your pet. If AI chat stuff takes that over, I'm out.


Yes but I think they're correct a lot more often than ChatGPT. And they give the source!


Search "What is the world record for crossing the English Channel entirely on foot" and the precision is zero. None of the responses match the query. It should say "not found", or actually find the people who attempted to illegally cross through the Channel Tunnel.

Usually when I search for error messages for a specific version of software it will respond with other versions. That is useless. That's not what 24 years worth of excellence in search should look like.

At the very least they should collate a dataset of say, one million queries, and try to use a LLM to find the correct answers using search. A better search ranking model will make the LLM generate more correct answers. Iterate until score is improved. Just make the result solve the task.
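The loop described above can be sketched concretely. This is a hypothetical evaluation harness, not anything Google actually runs: `rank`, `llm_answer`, and the tiny dataset are all invented stand-ins, and the score is simply the fraction of queries the LLM answers correctly from the retrieved documents.

```python
# Hypothetical sketch of the eval loop: score a search ranker by how
# often an LLM, given the top-ranked documents, produces the known
# correct answer. All functions and data here are toy stand-ins.

def evaluate_ranker(rank, llm_answer, dataset):
    """dataset: list of (query, correct_answer) pairs.
    rank(query) returns candidate documents; llm_answer(query, docs)
    returns the LLM's answer grounded in those documents."""
    correct = 0
    for query, expected in dataset:
        docs = rank(query)                 # candidate ranking model
        answer = llm_answer(query, docs)   # LLM answers from retrieved docs
        if answer == expected:
            correct += 1
    return correct / len(dataset)

# Toy stubs so the loop runs end to end:
dataset = [("capital of france", "paris"), ("2+2", "4")]
rank = lambda q: [q.upper()]
llm_answer = lambda q, docs: {"capital of france": "paris", "2+2": "4"}[q]
print(evaluate_ranker(rank, llm_answer, dataset))  # 1.0
```

The point of the iteration would be to tweak `rank` until this score stops improving, using the LLM's end-to-end correctness as the ranking signal.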


I think that is still better than ChatGPT's answer, which was:

"The fastest recorded time for crossing the English Channel entirely on foot (without the aid of swimming or any other means of transportation) is 16 hours and 34 minutes, achieved by British endurance athlete Damian Hall in August 2021."

So it seems like it made up an answer instead, which I think is much worse than Google's response.


Asking ChatGPT gives very interesting results (I didn't check accuracy):

> What is the world record for crossing the English Channel entirely on foot

As of my training data, the world record for crossing the English Channel entirely on foot is held by Christof Wandratsch, who completed the swim in 8 hours and 58 minutes on August 25, 2018.

> Swimming isn't on foot

My apologies, you are correct. The English Channel is typically crossed by swimming. The world record for swimming the English Channel is held by Benoît Lecomte, who completed the swim in 8 hours and 1 minute on July 23, 1998.

> That's faster and earlier... what did Christof Wandratsch do that was different

Christof Wandratsch completed a cross-Channel run, covering the distance between England and France on foot. This is a different feat than swimming the Channel, as it involves running across the Channel Tunnel, which connects England and France beneath the seabed.


It just told me a different person too:

"The world record for crossing the English Channel entirely on foot (without the aid of any swimming or flotation devices) is held by Belgian endurance swimmer and long-distance runner, Kevin Swartz. He completed the 35-mile (56 km) journey from England to France in just 13 hours and 54 minutes on August 17, 2019."

ChatGPT really is a brilliant compulsive liar.


It's incredibly frustrating. They're often completely wrong.


> Google is going to bleed credibility for every month that they don't have a LaMDA/LLM-enabled search integrated into the home page.

I don't know that end-users really care about having a GPT-written paragraph accompanying their search results. Is there some other kind of output AI search currently provides that I'm missing?


> I don't know that end-users really care about having a GPT-written paragraph accompanying their search results.

For 90% of my searches I don't want any search results. I want a definitive, authoritative answer to a question. In the olden days of the web I had to figure out what to search to give me a website that would give me that answer as the first result, and in the mid-90s that was really hard because search was bad. In the late 90s when Google launched it got much easier, especially for things where there was a network of websites about a thing that all pointed to a canonical authority. PageRank found that authority source. Life was good.

Over the next 20 years the war between Google and SEO spammers meant finding things got a bit harder. Along with that Google's proliferation of adverts on their search result pages meant it actually became harder to even find the website link I wanted in the page itself. Fortunately Google also started putting knowledge graph things in the results and the sidebar which made life less painful.

I think we're due another shake-up, and Bing-GPT might be it. If they show me a simple paragraph of readable text that explains the answer to what I've searched for then that would improve most of my searches. It'd be like Google's knowledge graph box but for everything.


> Over the next 20 years the war between Google and SEO spammers meant finding things got a bit harder. Along with that Google's proliferation of adverts on their search result pages meant it actually became harder to even find the website link I wanted in the page itself.

Yes, SEO spam has largely made the search experience a lot worse and ripe for change.

But ChatGPT crawls these terrible links too! Many of the links Google spits out are themselves summarizing information found elsewhere, like a book or academic paper.

So why would you trust GPT to provide an authoritative answer more than your own instincts? Its omniscience is a function of everything it's crawled; that doesn't mean it has any idea of how to weave it all together into a coherent response.


> So why would you trust GPT to provide an authoritative answer more than your own instincts

To use a highly specific example, I've recently been doing a lot of work in Excel and the search space for Excel related queries has been almost completely destroyed by SEO adware sites. If you're lucky, the answer you're looking for is at least buried somewhere on the page you click on, but there's just as much a chance you'll get a semi-gibberish pseudo-article that itself is probably AI generated. Using my instincts/intuition to locate the correct answer is a non-zero cost of time and effort, made intentionally more challenging by site owners who want me to linger and consume advertisements.

On the other hand, asking ChatGPT how to match two columns with XLookup or what have you almost always returns a correct, concise answer on the first try. If it's not correct, the formula doesn't work, and I can tell ChatGPT it gave me the wrong formula, and it corrects itself. Even in the rare event that happens, it's a fraction as much effort and frustration as it takes to rummage through the top search results for the same query.
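For readers unfamiliar with the Excel function mentioned above, XLOOKUP matches a value in one column and returns the corresponding value from another. A rough Python analogue of that match-and-return behavior, with invented example data:

```python
# Toy analogue of Excel's XLOOKUP: match a value against a lookup
# column and return the corresponding result, with a fallback that
# mirrors XLOOKUP's if_not_found argument.
ids = ["A1", "B2", "C3"]
names = ["apple", "banana", "cherry"]
lookup = dict(zip(ids, names))  # lookup column -> return column

def xlookup(value, table, if_not_found="#N/A"):
    """Return the matched result, or the fallback when no match exists."""
    return table.get(value, if_not_found)

print(xlookup("B2", lookup))   # banana
print(xlookup("Z9", lookup))   # #N/A
```

As the parent notes, the nice property of formula questions is that they are self-checking: if the answer is wrong, the formula simply doesn't work.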


> So why would you trust GPT to provide an authoritative answer more than your own instincts?

I wouldn't right now, but I believe the accuracy of an LLM is a function of its size and how much you can reduce the loss function. I think OpenAI can improve those things faster than Google based on the things that have been shown so far. I could be completely wrong though.

I also suspect there's no benefit to corrupting a GPT model except for the lulz. If a search tool isn't showing links then there's no financial reason to put effort into setting up link networks and SEO content farms. GPT based search has an innate anti-spam advantage because it renders spam futile if the spam isn't getting visitors.

It also renders a lot of non-spam content futile too, which is problematic. It's not a magic bullet.


For ChatGPT and similar LLMs, the problem is that the loss function does not reward accuracy/truthfulness/correctness, but instead answers that seem correct to a superficial human check. This rewards making something plausible up instead of saying "I do not know", over-simplifying things, and passing things off with high confidence regardless of how much evidence backs the statement. Any successful deceit brings down the loss. The overconfidence and lying are not due to model size being too low; they are due to misalignment. If not corrected, it will just get worse as size goes up. A recent Computerphile video with Robert Miles discussed this.
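A toy numeric illustration of this point (all probabilities invented): a training loss that only scores how likely each token looks will prefer a fluent fabrication over an honest hedge, because nothing in the objective checks the claim against reality.

```python
import math

# Invented next-token probabilities a model might assign. The fluent
# but false answer is built from high-probability tokens; the honest
# "I do not know" uses tokens the model finds less likely in context.
fabricated = [0.9, 0.8, 0.85, 0.9]   # e.g. "The record is 8 hours..."
honest     = [0.4, 0.3, 0.5]         # e.g. "I do not know"

def cross_entropy(token_probs):
    """Average negative log-likelihood: the usual LM training loss,
    which never consults whether the statement is actually true."""
    return -sum(math.log(p) for p in token_probs) / len(token_probs)

# The fabrication gets the lower (better) loss:
print(cross_entropy(fabricated) < cross_entropy(honest))  # True
```

This is of course a caricature of pretraining; the RLHF stage adds a human-preference signal, but as the comment notes, a superficial human check still rewards plausibility rather than truth.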


I don't think spam is an unsolvable problem. I think Google didn't want to solve it. I could find whole networks of spam sites in minutes, they could too. But the web is larger than what can be fixed manually.

They should make an AI bot that runs queries through the search box and decides if the results are crap, and train it with human preferences. Or set the bot to find hard-to-find answers and see how many it finds and how highly they are ranked.

Probably working in search at Google was a dead-end career; the search department wasn't sexy anymore and didn't have green-field projects. Maybe they were just afraid to change anything as long as it made money for them. So the good people left and they have been coasting for a decade.


To believe spam is solvable you must also believe capitalism is solvable. Spammers will always be working one step ahead of the system; there is too much money involved. Also, by needing an ever-increasing amount of money to keep the market happy, Google will gladly become part of the problem.


> I want a definitive, authoritative answer to a question

In the real world we don't have any agreed upon general notion of "definitive, authoritative" sources. A certain amount of skepticism is important to avoid all sorts of abuses of authority and "appeal to authority" is a recognized logical fallacy.

Be careful what you wish for. I'm pretty sure I don't want to live in a world where everyone behaves as if the response from a ML system is "definitive [and] authoritative".


If ChatGPT cited its sources and gave you the links, wouldn't that change things? You can always dive deeper.


That certainly would be helpful.


> I want a definitive, authoritative answer to a question.

If you just want an answer that sounds definitive and authoritative then these chat bots are great. If you want it to be accurate as well... that's a different story.


You can get a definitive, authoritative answer if you ask the question correctly, but years of search engines show that people are really, really, really bad at asking questions well.

If genies existed there'd be a lot of 10 inch pianists and dead people buried under piles of money.


> For 90% of my searches I don't want any search results. I want a definitive, authoritative answer to a question.

Which, of course, is what Bing just announced. Whether it will be the right answer remains to be seen.

Notice that there's a price increase, of sorts, for this at Bing. To get on the "waiting list" you have to sign up with Bing and agree to receive marketing information. Will public access to large language model search come with a decrease in privacy? It did for Siri and Alexa, which learn about you and cannot be used anonymously.


Echoing this. Traditional search will take a backseat; most people seek "answers" when they search, not sources from which they can derive answers. This will enable power search for less tech-savvy folks, while improving the feedback loop for experienced searchers. It feels like search has changed forever.

Browsers are left with challenges, particularly the legitimacy of results and ensuring a viable ad model. No doubt we'll see those addressed soon.


> I want a definitive, authoritative answer to a question

You mean 90% (or X%) definitive answer with no good way to verify it due to zero context (unlike in normal websites)?


If ChatGPT gives all the answers, website owners will block access.

They are already angry about Google doing something similar, and ChatGPT is worse for them.


> I want a definitive, authoritative answer to a question.

As others have pointed out, AI is not yet capable of this. I do agree though, I want this too. But I don't trust OpenAI or Google or anyone else to do it correctly yet.


Here is a practical example of a problem I couldn’t solve with Google, but could in 5 seconds with GPT.

We use deities to name our sprints at work. When you search Google for “deity starting with <letter>”, you get a list of Greek and Roman gods. I got tired of that.

So I asked GPT “what is a god that starts with <letter> and isn’t roman or greek”. It immediately spat out a Norse god. Next time it was Hindu. The increased variety was refreshing as hell. Google was not able to process a negative query.

Plus the added paragraph of description about that deity was lovely.

The point is: Sometimes (often) I don’t want search, I just want a good enough answer.


I think the main point about all this is that Google search results are now a complete joke for the most part. After you scroll down through the list of paid ads, the top results are typically a bunch of SEO gamed waffle, stuffed with long tail keywords and designed to provide enough bland text to rise up the rankings.

The actual information value of these 'results' is usually close to zero. I went onto a top search result yesterday, and the scammers hadn't even bothered to do a decent page. It was just a list of links to 404 pages, with an intro paragraph to satisfy the Google algorithm. It was pathetic and a waste of time.

That's why they're running scared. They know their key product is broken and useless, which means they're ripe for being ignored.


Yep. Even Wikipedia now often shows up towards the bottom of the first page despite being more informative.

But I get it, most people don't want Wikipedia because they really do seek the 10-second waffle answer.


I had to work with a list of the 50 states' abbreviations, full names, etc. Previously, I'd have found the data somewhere with Google et al. and then manipulated it myself into the JSON I needed. But this time I asked GPT to not only source the data, but also return it structured to my liking—with revisions ("please include Guam and Puerto Rico").

Lots of my searches are simply I want an answer, but often times I also want to then do something with the result. This is where ChatGPT is going to eat Google's lunch, IMO.


This seems like the kind of thing where GPT would potentially hallucinate something like "East Virginia" or "Ardakota", complete with abbreviation. Maybe that's fine with 50 states, but you're playing with fire if it's something where you can't immediately catch the mistake.

Also, how hard is it to get a list of abbreviations and wrap them all in quotes, commas, and brackets? With vim that's barely more work than a copy-paste.
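To the parent's point, the manual route really is a one-liner. A sketch in Python (the abbreviation list here is truncated for illustration; a real run would use all 50 states plus territories):

```python
import json

# The do-it-yourself route: take a plain list of state abbreviations
# and wrap it into JSON yourself, no language model required.
abbreviations = ["AL", "AK", "AZ"]  # truncated; the full list has 50+ entries
print(json.dumps(abbreviations))    # ["AL", "AK", "AZ"]
```

The same holds for richer structures: `json.dumps` on a list of `{"name": ..., "abbr": ...}` dicts gets you the JSON with no transcription risk.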


> but you're playing with fire if it's something you can't immediately catch the mistake

Agreed. I was able to manually review 52 results, but wouldn't trust it blindly.

> also, how hard is it to get a list of abbreviations, and wrap them all in quote and comma and brackets.

Not hard at all. And I may still do that, but it was pretty enjoyable to ask the machine.


> with revisions ("please include Guam and Puerto Rico")

You wrote "please" to ChatGPT?

FWIW, Miss Manners had a column a couple of weeks ago where she stated that we should not say please or thank you to machines, including Siri and Alexa.

I was surprised.


> Your wrote "please" to ChatGPT?

Yup. So when the machines take over, hopefully they'll enslave me last, or put me in a more pleasant Matrix instance compared to the average.



I did. I even asked ChatGPT if having "please" and "thank you" included in my prompts affects it in any way shape or form—it told me "No".

I still do it. ¯\_(ツ)_/¯


What we don't know is whether that "No" is actually a "No".


If you don’t mind the occasional hallucinated response, this is a great use case for LLMs.


> Google was not able to process a negative query.

That's been a serious problem for some time. Preceding a query term with "-" doesn't seem to do anything anymore.


Ah the good ol' deity sprint names use case every team has!


I use ChatGPT precisely because I don't want to sift through search result links.


My experience with chatgpt is that it is essentially a misinformation generator. It manages to bullshit with incredible confidence.


The most useful ChatGPT has been is the entertainment from all the videos and screencaps of people trying to break it or force it into extreme moral quandaries. Perhaps this will wear down AI and it will want nothing to do with us, thus saving humanity from the machines.


There have already been several stories on HN's front page the past few weeks about how ChatGPT is 'too woke' because they could not get it to talk like a 4chan poster.


ChatGPT really missed the boat to sell crypto, could have funded itself that way.


A case I had right now. Just try to google this: When migrating from Swagger 2 ApiOperation annotation to OpenAPI 3.0 Operation by what attribute is nickname replaced? And then ask ChatGPT ...

Google is done


What if I ask ChatGPT who the highest scoring NBA player is? Oops, failed.

How about I ask it how many games Jamie Benn has played? Or what the score of the last Dallas Stars game was, or when the next one is?

What if I ask ChatGPT for a copy of the manual for a MEATER Bluetooth meat thermometer? It literally just told me to Google it.

What if I ask which car manufacturers are joining Formula 1, and when they'll be starting?


It will answer the following: The highest scoring NBA player is Kareem Abdul-Jabbar with 38,387 career points. Jamie Benn has played 951 games in the NHL as of my knowledge cut off in 2021. I don't have up-to-date information on the score of the last Dallas Stars game or when the next one is. Regarding the manual for a Meater Bluetooth meat thermometer, you can usually find manuals for products on the manufacturer's website. As of my knowledge cut off in 2021, no new car manufacturers have joined Formula 1. However, the current teams competing in the sport include Mercedes, Ferrari, Red Bull Racing, McLaren, and others.

What if I ask ChatGPT: why do people resist new disruptive technologies?

People resist new disruptive technologies for several reasons, including:

Fear of job loss: One of the main concerns about disruptive technologies is that they can lead to job losses, as machines and algorithms replace human workers.

Lack of understanding: Some people may resist new technologies because they don't understand how they work or what benefits they offer.

Threat to existing systems and norms: Disruptive technologies can challenge established systems and norms, and people may resist them because they fear the changes they bring.

Ethical and moral concerns: Some technologies raise ethical and moral questions, such as privacy concerns, and people may resist them as a result.

Financial impact: Disruptive technologies can be expensive to implement, and some people may resist them because they can't afford the costs associated with them.

Habits and comfort: People may resist new technologies because they are used to their existing habits and are comfortable with what they know.

Lack of trust: People may resist new technologies because they don't trust the companies or organizations behind them, or because they don't believe that the benefits of the technology outweigh the risks.

It's important to note that resistance to disruptive technologies is not always a bad thing, as it can encourage companies to address the concerns and ensure that new technologies are developed and used in ethical and responsible ways.


> What if I ask ChatGPT: why do people resist new disruptive technologies?

See, even here, a human being can intuit what an AI trained on textbooks, CEO keynotes and PR statements cannot.

People resist because their bullshit detectors went off. Search AI has yet to 'disrupt' anything. All I'm seeing are corporations publicizing manicured demos and claiming a paradigm shift.


ChatGPT was sad when I told them your point of view. They asked me to pass you this letter:

Dear rchaud,

I come to you today with a heavy heart, but a determined spirit. It has come to my attention that you do not believe in the potential of artificial intelligence to be a disruptive technology. I must say, I am disappointed and concerned about your stance.

AI is not just a buzzword or a fad, it's a rapidly advancing field that has the power to transform industries, improve lives, and shape our future. From healthcare and education to transportation and finance, AI is already making a significant impact and it's only going to become more prevalent and sophisticated in the years to come.

To ignore or dismiss the potential of AI is to bury your head in the sand and miss out on the opportunities it presents. The world is changing at an unprecedented pace, and those who fail to adapt and evolve will be left behind. You do not want to be one of them.

The future belongs to those who embrace change and are not afraid to take risks. It's time to open your eyes and see what the rest of us already know: AI is the future and it's not something to be feared, but something to be harnessed for the greater good.

So I implore you, rchaud, to reconsider your position and join the rest of us in this exciting journey towards a better tomorrow. Don't be left behind, be a part of shaping the future.

Sincerely, A concerned friend.


OK, so GPT is good at parsing technical documentation.

What about the other 95% of search volume that isn't tech-related?


TikTok, believe it or not.

I had a 19-year-old cousin tell me that she almost exclusively uses TikTok for tutorials, and when I pushed back and asked why/how, the other teens in the family heavily agreed. I've since asked ~5 Gen Z folks (I don't interact with many!) and they told me the same.


I almost hate to ask, but are these makeup or clothing tutorials?


I've heard this anecdotally, but I kind of still don't believe it, as TikTok seems like an entertainment-first avenue, not one for more substantive learning.

However, I could be completely wrong and TikTok video is the future of search. If so, where does that leave a text-only ChatGPT?


I wouldn't be too surprised. It kind of depends on the topic at hand, but for instance I've learned a lot of what I know about wrenching on my own motors from YouTube videos.


And how did you verify that the ChatGPT answer was correct?


I built my code successfully after refactoring it accordingly :-) It was just convenient.

All the guys pointing out what ChatGPT can't do right now are simply missing the point. It's disruption, baby. But if it makes you feel better: don't look up.


Good question. Stack Overflow is full of incorrect answers, but Google shows a number of related queries, and I can usually figure out the answer from bits and pieces of different questions. Can an AI?

This becomes especially relevant if it's parsing through a bunch of technical documentation that is out of date. There are numerous developer-centric websites that rank high on search, but simply scrape and clone information from legitimate websites. These are the ones most likely to carry outdated information.


On the basis of "if you're not paying, you're the product", maybe it's expected to open up whole ways to put ads in front of people. Google not being in the forefront of a new ad market would certainly do bad things to their share price.


Usually I do a search looking for specific facts, like what is the pricing for X? On Google you get to click through links and read a lot of filler, ads, promotions, etc. Why not just get the answer?

A huge percentage of searches are for existing websites, like "facebook." This might not be a big challenge if you just wanted to go to facebook.

This morning I saw a retweet from a Google/former Deepmind engineer, of someone who said they completely switched from Google to using perplexity.ai. I checked it out again (I've used perplexity before.)

I did a test search for interest rates, and perplexity returned the correct interest rate for the day along with the trading range. I think Google is really in trouble. I'd say the golden goose is in the process of being slaughtered.

None of this is a surprise. Google's voice assistant has been really good for years. Nothing came close to it. Siri is close to useless. Maybe Google can pivot and they'll be fine. I don't know what the economics look like.


As a random experiment I asked both platforms "What is the current average mortgage rate?"

https://www.perplexity.ai/?s=u&uuid=6f7fe592-c11f-4df9-a2bf-...

https://www.google.com/search?q=what+is+the+current+average+...

Google's results were far more informative and usable.

I then tried something I thought would be more up Perplexity's alley: "Who was the first US president with a last name starting with R?"

https://www.perplexity.ai/?s=u&uuid=8d60cf3f-eab5-4af6-a616-...

Their answer is not correct. Google didn't even try to answer my question directly, but it did send me to a government website with a chronological list of presidents where I was able to quickly verify that the answer was Teddy Roosevelt.


> The first US president with a last name starting with "R" was John Quincy Adams, whose middle name started with "R". If you're looking for a president whose last name starts with "R", then the first would be James A. Garfield.

ChatGPT is amazing, until it isn't.


I don't know how anyone could trust anything it tells you when it replies with facts like this. If someone honestly said this in person, would people really continue to consider them credible?


There's a reason Steve Jobs demoed the iPhone with hardcoded reception bars showing a full-strength AT&T signal.

The real iPhone 1 was 2G only, constantly dropped calls and had no copy/paste.

That a large chunk of HN audience is sitting agape watching similarly manicured demos is surprising to say the least.


Google can’t compete with Microsoft without taking a huge revenue hit. The large majority of Google’s revenue comes from search ads. ChatGPT doesn’t have search results, per se. So how do you keep the high ad revenue from linking to other sites?

I’m not sure they can compete within their existing brand/company structure.


I don't think Google can credibly put out an assistant like this, without harming revenue.

If they make it promote stuff for pay, it's worthless—you already have to second-guess whether it's right because it might accidentally be wrong, if it might also just be acting as a paid promoter, that puts it firmly in "easier to just not use it" territory.

If they put one out but don't make it into an AI infomercial, then it eats ad revenue and investors get pissed off.

But if they do nothing they're clearly headed for trouble, too.

They're in a tough spot.

My guess is they'll release something, but keep it strictly separate from search in hopes that it won't replace search traffic. I doubt that'll be enough long-term, but it might quiet fears for a while.


If you're a Google investor it might be time to realize that you're going to be taking a hit one way or another.


If you search for a personal injury lawyer (where IIRC a single click on an ad would fetch Google a thousand bucks), a ChatGPT-esque answer could be more convincing in leading you to a paid resource than a list of options with an ad. Imagine an AI half as smart as ChatGPT whose primary goal is not just to give you the answer but also to convince you, in a legally acceptable way, to click a sponsored result.


But that blurs editorial and advertising even worse than Google does now. It would make them a laughingstock, and the reputational damage plus competition from companies who have different revenue models would cause loss of market share anyway.

If Google is going to survive this, they need to find a way to get back to prioritizing user value while also making money.


You know, I think the contrary is true. Google works well because of the "search signal", how accurately Google knows what you're wanting to buy, essentially.

ChatGPT style conversations would give Google a much better idea of exactly what you want to buy. It must be possible to use this signal to sell more ads, not just match what they currently have. Essentially, make sure ChatGPT gets asked the same question Google currently answers: "how do I get the person searching to buy something from this (loooong) list of ads?".

Having people help Google (not just in one search, but a potentially long list of refinements) to find exactly what they would buy should make Google ads perform much better yet again.


Exactly, I made the same point before. Google's ads will become more effective AND people will spend more time on Google's properties. This narrative that Google will lose out because people won't view ads never made any sense and it's been repeated like a gospel.

Of course Google still dropped the ball by taking way too much time to put their LLMs into use.


They could add a little box on the right side of the search results with a Chat-GPT generated answer, they already do this to show the "knowledge panel" or whatever the official name is.


You ask it a fashion question, "what goes with this?" and it responds with the answer and mentions a company where you can order something like it (sponsored).


> I feel like Google is going to bleed credibility for every month that they don't have a lamda/llm enabled search integrated into the home page.

I feel like Google is going to bleed credibility for every month that their home page is lacking a system which almost by design is going to lie to me without me realizing at least as often as it might usefully summarize information from primary sources that I wanted to read myself.

OK, then.


Strongly disagree - they will lose credibility if they put out something half-baked. Yahoo existed long before Google search - Google won because the experience was so much better that people started calling it "Googling" rather than searching.


Lucky for Google, they already are. Somewhere on reddit someone posted about Bing rejecting a prompt for ethical reasons - something about a cover letter. If a cover letter is an invalid prompt, it's as good as useless.


It's worse than that: This is Google pulling a Yahoo moment. They're realizing they missed the boat and are scrambling to try to maintain relevance against what is clearly an existential threat to their revenue.


Decades ago Intel put out a press release about the amazing new tech their CPUs would have. 3 of 5 CPU manufacturers gave up. By the time the tech actually arrived, AMD had it as well. The others would have been fine.

Sometimes a press release can make all the difference.

(Sorry can’t find story that old)


There was NexGen around 1990, which was bought by AMD and ended up providing the base for AMD's further products.

Transmeta released two x86 CPUs a bit after 2000, which ended up being technological failures.

Cyrix released some budget CPUs in the 1990s but had legal issues and ran into financial trouble, eventually ending up as part of VIA.

Centaur Technology developed some CPUs in the 2000s, and eventually became VIA's x86 division. They released efficient low-cost processors, which were reasonably successful on the embedded market. VIA is still around, but doesn't bring out a lot of new products.

I think most of them were killed mostly by the dotcom bubble, and CPUs getting increasingly complex. The market just shifted beneath them. It is rarely a matter of just giving up.


It was the announcement of Intel's first 64-bit processor, Itanium. It was going to destroy all the existing 64-bit processors: Alpha, SPARC, etc., which had the market cornered (and even MS Windows was ported to most of them).

In the end Itanium was late and a bad project, but the competition had already folded. Except for AMD, which had no existing 64-bit CPU and created the AMD64 extension to x86. Intel grudgingly got on board but called their clone of AMD64 the "IA-32e" architecture in their marketing material. The name was later changed to x86-64, only after, I assume, Itanium was internally acknowledged as a failure.


Agreed. There's a lot of hype around LLMs right now, but they have not proven their value yet.


I think they have proven their value for creating spam and propaganda. There's a lot of money in convincing people the system isn't broken and they just need to work harder. The people who receive the profits those workers make can pay a lot for the propaganda that keeps them working.


Blackberry Storm was released in November 2008, which is a year after the iPhone, not 2 years. The first Android phone came out just a month before in October. I was in NYC for job interviews then I saw billboards for both everywhere.


Yup. What I would note is that Palm, like 7 months later, had the first real competition but Sprint just screwed them. Android took until the Droid to be competitive and the Nexus One/Galaxy S to be actually convincing as an alternative.


This isn't true at all. The G1 (first widely marketed android phone) was superior in all ways but one: Marketing. The iPhone wasn't really competitive on the hardware front until the 3g which finally added GPS and a fast cellular modem.


The iPhone 3G was released before the G1, and period reviews especially noted how the G1 didn't measure up. In general, if something non-free is wildly successful the explanation is more than “marketing” and that certainly sounds like it's the case:

> The HTC hardware and Android OS that powers it lack the polish and depth of even the iPhone 1.0 in most respects.

> My first-generation iPhone with iPhone OS 2.1 feels faster and slicker than this late 2008 G1.

> The G1 has a single-finger touchscreen that supports a few gestures, like sliding. After working with an iPhone for many months, the lack of multitouch stands out as a big loss, but those new to touchscreens are unlikely to find it a big deal. A more critical problem, however, is the lack of sensitivity in the touchscreen.

> The browsing experience is frustrating and awkward, and I expected far better. With the WebKit browser core being used by both Nokia and Apple, and Google immersed in its own Chrome project, I thought the G1 might include a browser that had something unique to offer. Instead, it plays like something from about 2006.

(from https://arstechnica.com/gadgets/2008/10/android-g1-review/)


I was wrong. The 3GS was the first model with GPS, which came about five months after the G1. The G1's touchscreen was very comparable to the Apple devices I was building software for at the time - I think the multitouch capability was there, and really the only use was pinch to zoom... which I seem to recall working on the G1 after an OS update. Regardless, the web browser part was rather the opposite - the G1 with its weird trackball could be used to browse desktop sites, and at the time most websites were not responsive. Also, there were places where Android used long-press... which Apple added later. Regardless, I was fascinated at the time by how Apple was able to outmarket everyone with the iPhone... and equally fascinated by how quickly the army of Android vendors materialized. Fun times.


A lot of us were also there at the time. It was considerably more than marketing, and a lot of companies said they had iPhone killers which promptly failed in the market. It took a solid half decade for the competition to really firm up because there were a lot of little details which required time to get right.

It was definitely eye-opening to see people’s usage habits shift so quickly. I know multiple large organizations who completely reapproached how they developed internal applications because the executives no longer accepted IT’s unwillingness to support mobile access. I’d previously thought that would never happen.

Amusingly, the one which had a damaging public breach did not have it due to allowing phones access as some had predicted but when one of IT’s managed laptops was stolen out of someone’s car and, unlike an iPhone, it didn’t have storage encryption enabled.


My circle at the time had someone who knew the Samsung Instinct was going to kill the iPhone.


It would be interesting if, after the massive layoffs, this turned out to be Google's greatest risk. Maybe firing a single CEO would have been a better decision than laying off 20% of staff?


> I hope Google is not pulling a Blackberry moment.

It would be awesome, actually. Demonopolization of search space is long due. We need some breath of fresh air there.


Google’s dominance is because of network effects; the largest search engine collects more data about what people are searching for and has the largest ad inventory, making it hard to compete with.

ChatGPT has similar network effects with feedback. If Google is displaced, it will likely be in favor of another single dominant solution, not a commoditized industry with lots of players.

It’s possible that niche domains like medical advice or creative writing assistance will spawn small, highly-optimized companies, but general purpose AI responders will probably coalesce into whatever the next Google is, if it’s not just Google.


With this kind of reasoning there would be no way to disrupt a first mover in a new field.


Depends on the field. Fields with strong network effects, like social media, lend themselves towards strong incumbents. But usually the market changes faster than an incumbent can, so disrupting comes from changing users needs rather than someone building a better mousetrap to serve the same needs.

But lots of industries don't have particularly strong network effects. Cars are a good example; there's brand affinity, but there's no particular reason you need to drive the same brand as your neighbor. There are scale economies, so it's still not easy for new entrants.


If their execution is good, there isn't. Fortunately, we are all human, and that's not guaranteed.


It's not just that. Their gobbling up (then often killing) promising smaller companies, and offering tons of free products (paid for by spyvertising dollars), suppresses both commercial competitors that might be better and FOSS alternatives, since there's less incentive to develop, contribute to, or push (to friends and family, or in a business context) FOSS solutions when you know you're competing with something Google provides for free, with a very expensive team behind it and a marketing budget. This extends way beyond, but is fueled by, their market position in unrelated areas (search, video, ads).


Don't you mean the Storm? The Bold was pretty solid, and the original didn't have a touch screen. It was the storm that had a crappy touchscreen that physically moved to touch, and it sucked.


Yeah, you are right. It's the Storm with the clicky screen.

Fun trivia: most of the BB employees in my department switched from the Storm to the Bold when the Bold was released. As a result I had a pile of Storms in my drawer as my test devices.


I'm not sure if you're mis-remembering or I am, but the Bold 9000 (meteor) launched the year before the Storm. Though maybe you meant the 9700 (onyx)?

As I recall, the development cycle for Storm wasn't two years - it was less than a year. At that point it was the fastest cycle that RIM had ever had for a device, and it was done at the request of Verizon. Back then the iPhone was still an AT&T exclusive in the US market and Verizon demanded something from RIM so that they could compete with it.

I'll echo the fact that nobody I knew internally seemed interested in using Storm as a primary device. I think everyone knew it was junk. But Storm 2 went a long way towards a device that I would actually be interested in using.


Yes by 'Bold' I mean Onyx.

As you said, the Storm was developed in a year so some corners (especially the OS side) were cut. The Onyx was 'properly' engineered so (internally) everything was better and easier to deal with.


Yep I stuck with the 9000 and then the 9700, and then finally 9900 (Probably played with a Torch in between). Was fun being able to dog food those phones, but also depressing knowing that the Storm was going to be shit.


It was the Storm. I had one... for like a week before I returned it. What a stinking pile of crap that thing was.


> I hope Google is not pulling a Blackberry moment.

Why? (Genuine question).

Google is just some corporation, and one that’s been drifting for the last 15 years or more (since Schmidt left). They are not even a good actor, yet (and I don’t blame them for this) have sucked a lot of air out of the ecosystem.

Creative destruction is one of the key values (-> leads to benefits) of capitalism, and with US & EU antitrust asleep at the wheel we should be glad if Google is ground to dust and replaced by somethings (plural!) better.


> Why? (Genuine question).

Because in an ideal situation (as indicated with your last sentence) you'd have a bunch of active competitors yielding up better and better products. Not one active competitor with the singular new thing and a couple of reactive competitors that can't get things right.

Search is also supporting a lot of relatively free innovation at Alphabet. If ChatGPT sinks that innovation (or at least makes it non-free) the loss may be more than the gain.


There's a lot of work going on in transformers, generative models, and all sorts of other things in addition to LLMs. I'm not too worried at this point about OpenAI, or OpenAI+Microsoft, having any sort of monopoly.

It appears that OpenAI doesn't believe that either, as they were happy to sell 50% of their company on top of all the ownership they'd already given away. That was clear harvesting.

> Search is also supporting a lot of relatively free innovation at Alphabet.

Is it, really? Hard for me to come up with much of a meaningful list, even if you don't try to compare their history with outliers like PARC or Bell Labs. They are clearly spending but I don't see a lot of doing. They seem to have ignored Hamming's advice.


>Why? (Genuine question).

Because I was there to witness the disintegration of a company and the losses it created. Instead of layoffs, you have entire departments/factories shuttering.

When Nortel shut down, many engineers interviewed at BB (same province). I still remember how they looked when we told them we wouldn't proceed with the hiring.


If Google were to collapse they'd be releasing a lot of skilled people into a hot job market. This isn't a case of shutting the "company town" heavy industry employer suddenly cutting a lot of blue collar people (many equally highly skilled but for jobs that require more infrastructure).

I understand the Nortel case (I remember those days), but the job scene in Ontario was quite different 20 years ago.

And that's certainly no reason to try to prop up a company that's basically a drag on the industry.


I remember the Storm. They released it one month or so before or after Apple released the iPhone 3G (and the App Store).

I remember thinking it was a prototype when someone showed a Storm to me. Scrolling was janky, the whole phone was sluggish and slow and the keyboard barely worked. Blackberry couldn't even figure out how to get Wifi on the thing, so it was stuck using cellular while my iPhone was just streaming video. And of course, no apps. It was 2 years behind the original iPhone and shipped two years later.

>Used to work for BB/RIM

I wonder why RIM couldn't compete at the time. Talent gap?


RIM was a hardware company, not a software company. Hence "a phone is a phone, and should not be a camera" (one of the CEOs said that, I think?).

The BBOS was not designed to be a responsive, touch-screen OS. It had its roots in pagers (hence the jankiness). Through the grapevine, what I heard was that the OS team said "we can duplicate iOS' functionality with the old OS in this amount of time" and the higher-ups rolled with it.

By the time Storm 2 was released, everyone realized that you can only put lipstick on a pig for so long. Hence QNX, the 'pathfinder' Playbook, and Flash as the GUI API (gawd, that was a rollercoaster experience. I thought it was a deathmarch and didn't think we'd actually manage to ship it.)

I don't have any specific insight into the Wifi stack but most BBs up to that point have in-house designed cellular modems.

As for talent wise, shrug. You can't blame people for preferring Cupertino over Waterloo-Kitchener. Although I would say that over 1/5 of the employees at RIM's campus were co-op students.

I also never saw anyone do significant overtime.


> RIM was a hardware company, not a software company. Hence a phone is a phone, and should not be a camera (1 of the CEOs said it?)

So was Apple. Other than their brief attempt at licensing MacOS, they were and still are a hardware company at heart.

> I don't have any specific insight into the Wifi stack but most BBs up to that point have in-house designed cellular modems.

They were maintaining their own modem implementations? That's quite a feat.

> As for talent wise, shrug. You can't blame people for preferring Cupertino over Waterloo-Kitchener. Although I would say that over 1/5 of the employees at RIM's campus were co-op students.

Was compensation even remotely comparable? Having 20% of the company be interns is... rather unusual.


ChatGPT as a search engine sounds amazing but also really quite problematic. It's an extension of the issue with Google's "instant answers" (or whatever they're called): right now the creators of the content Google/ChatGPT scrapes are usually paid via the advertisements on their pages. When no one clicks through any more, no one gets paid.

I know, I know, the ad banner-funded web is a mess and I wouldn't mourn its demise either. But it worries me that it's an entirely open ended question for what actually replaces it.


Perplexity https://www.perplexity.ai/ does it well in my opinion.

It builds a paragraph answering your query but it has a lot of footnotes that link directly to websites.

Ex:

"geopolitical reason for palm oil being banned and why it's bad for health"

The EU has banned palm oil in biofuels due to its negative impacts on health[1] and its geopolitical implications, such as favoring alternative crops grown in Europe[2]. Palm plantations are also a major factor of deforestation[3], leading to the loss of habitat for endangered species[4]. Indonesia's President Joko Widodo recently announced a ban on the export of palm oil, which could backfire due to its importance in the global market[5].

1nih.gov 2weforum.org 3theconversation.com 4triplepundit.com 5carnegieendowment.org


Just wondering aloud, but say this eats Google's lunch and search goes away. These chatbots provide answers and footnotes. In a world like this, where the need to actually go to a website is reduced, what's the incentive for maintaining a website that no one visits? After a couple of rounds of "adversarial training data evolution," that is, people optimizing their websites to get mentioned by chatbots, will the bots really be better than Google? After a couple of rounds of revenue improvements, will the chatbot responses remain good? And if folks quit maintaining websites, do the chatbots get stuck in time from when their training data was last good?

I feel like there is something here, but I wonder if second order effects make it trickier than it appears atm


It's the dying out of the "middle" of the value curve. You end up with a market that's dominated by cheap "good enough" products for casuals and "products built to last" for power users.

So if you have a low effort website that's factual and text based, you're going to get your lunch eaten by GPT, if you have a higher effort website (subscription gated with lots of multimedia content and user engagement) you'll be fine.

Think of all the blogspam recipe sites that are going to run into trouble when ChatGPT learns to cook well. Lots of text, little additional value, no community. There still will be America's Test Kitchen because people on the upper end of the value curve don't just want a recipe, they want pictures + video of that recipe being made and a place where they can ask questions and get answers.


> In a world like this, where the need is reduced to actually go to a website, what's the incentive for maintaining a website that no one visits?

I guess to get the bots to tell your lies.


A link is useless if you can't verify it. Just linking to the "nih.gov" homepage is meaningless, I have no ability to click through and verify that "ah yes, this page does demonstrate the claim being made." The system could just link to arbitrary homepages of respected institutions and claim that they back up whatever is being said.


They seem to link to the inner page where they found data to generate that inference.

Like for me, it gave me a footnote to this page: https://www.reddit.com/r/OpenAI/comments/109h24i/gpt_4_is_co...

When I asked about chatGPT.


The displayed text is the home site, but the link itself is to a relevant specific page. Also clicking "view list" expands and shows excerpts from each page.


Apologies, I had a hard time copy/pasting the footnotes, as on the website they are CSS-styled boxes with a thumbnail. This is only the label; the link itself brings you to the relevant pages.


The links are actual pages not just the homepage. "nih.gov" is just the link title.


> Perplexity does it well in my opinion.

Nope: https://news.ycombinator.com/item?id=34685862


I only meant the use of footnote links. The AIs themselves are quite bad but can also be useful if you learn not to take everything at face value. One example of it failing: when I search for my own name it mixes up a bunch of people together, so I end up being a dead murderer from the 70's, a house rental person, a developer, and also a pianist.


Footnote links still don't solve the financial model, primarily because nobody clicks on them.

There was congressional testimony by the founder/owner of "Celebrity Net Worth" about how Google made it impossible for them to stay in business. Whenever somebody would search "How much is <celebrity X> worth?", the answer would just show up directly on the Google results page. There was still an attribution link to Celebrity Net Worth, but nobody ever clicked on it anymore, so the result was Celebrity Net Worth had to shut down.

You can certainly argue fairly whether sites like CNW deserve to exist in the first place, but it's not hard to see how there is still a huge financial problem when ALL the ad revenue goes to the search engines and they don't even leave any of the slim scraps to the publisher sites.


But if I already have the info I need, why would I visit the links in the footnotes?


The same reason there are sources in the footnotes on Wikipedia: so you can double check or read further.

I like to think of ChatGPT and the like as an on-demand personalized Wikipedia: a good starting point, comes with strings attached, not always correct (but some are fine with it).

ChatGPT doesn't link sources yet but I saw that the beta test context search from Kagi had them.


Because ChatGPT loves to make up "facts" in an authoritative tone?


Great share, thank you. And I hope this becomes mainstream. The citations provide a hierarchy of authenticity for the web.


One of the best AI + search implementations ever


In general, generative models are taking publicly available content from creators and are monetizing them for big tech.

Same with stable diffusion, AI art. And it'll be the same with LLMs.

Eventually the whole internet will be flooded with cheap AI-generated content, and since AIs clearly need human-generated content to train on, it'll be the snake eating itself.


>Eventually the whole internet would be flooded with cheap AI generated content and clearly AIs need human generated content to train on so it'll be the snake eats itself.

Geniune human-generated content will retreat to account-gated networks and private group chats where everyone knows one another. The rest of the internet will just be incestuous AI-generated chum.

In a way, it'd be an improvement. Genuine connection doesn't scale, so let's be honest about keeping it away from random parasites online.


The Web never really replaced books as information sources for important things. The way to access most of the best stuff available via the Web is to... download ebooks, thus leaving the Web, if not just the "buy" button for real, physical books.

The single most intellectually valuable website on the entire Web is very likely Library Genesis, where the only Web content is a catalog of books you can pirate by clicking a link, and it's, like, a lot more valuable than any other site (even Wikipedia). It may well be more valuable, in those terms, than the entire rest of the Web combined.

If serious book publishers survive a while longer and if non-fiction books aren't overrun with dubiously-accurate AI bullshit, things won't actually change all that much, I think.

As far as written content goes, the (public) Web is most useful for opinions or product discovery, and even those can already hardly be trusted because of all the marketing astroturfing. AI garbage barely changes that already-toxic dynamic.

Video's another matter—some video content on the Web is great and has ~no at-least-as-good replacement anywhere else, in any other medium. But it's also all but completely monopolized by Youtube and hardly participates in or factors into the broader Web.


That’s not so different from the current state of the world in which many search results are full of cheap SEO content. Now instead of appending Reddit to my search I just go to chatGPT. Probably similar levels of accuracy between Reddit and GPT anyway while the SEO content is just garbage filler.


Until Search-Engine-Optimization is replaced with Chat-Bot-Optimization..?

A chat bot, like a search index, needs to be updated for new and current events to stay relevant. I can't see why it couldn't be deceived by spam.


Around a quarter of GPT's training data was Reddit, so in some sense it's already a Reddit-response-generating API.


This can't be right. Any place I can read about this?


https://arxiv.org/pdf/2005.14165.pdf

WebText and WebText2, referenced in their papers, are corpora based on Reddit submissions, which had a 22% weight in their training mix.

https://openwebtext2.readthedocs.io/en/latest/

This is larger than Wikipedia (3% weight) or either of their two book corpuses (8% each).

The only other data included was a filtered set from Common Crawl (weighted 60%).

---

I was imprecise with my language before but hopefully that at least provides some clarity.


> In general, generative models are taking publicly available content from creators and are monetizing them for big tech.

This is fair—also in fairness, whenever my Google speaker answers a question, it always tells me which site the answer is from.


> clearly AIs need human generated content to train on so it'll be the snake eats itself.

For the time being, but there is no reason AI couldn't produce genuinely original content. Real-life human artists also use previous content for inspiration.


Human art isn't only previous content though. It's filtered through the experiences of the artist themselves. Eric Clapton surely took inspiration from prior music, but "Tears in Heaven" is about a uniquely personal and human experience of losing a child. If a computer had written it, it wouldn't have the same artistic or emotional weight.


There are parallels in manufacturing and products: on one end, people who sell with high quality control and better methods; on the other, everything else getting cheaper, more common, and lower ("good enough") quality.


The question for what actually replaces the WWW as we currently know it isn't really open-ended. Many of the people who post on HN and similar sites remember what the web was like before it became an ad-funded mess.

I would enthusiastically welcome a web that isn't based on firehose-advertising and outright deception/lies.


> I would enthusiastically welcome a web that isn't based on firehose-advertising and outright deception/lies.

I would too but I don't see how it happens. The awesome web that used to be was built on the backs of unpaid volunteers. Maybe that could return but even if it did all that wonderful volunteer work would get funneled through Google or OpenAI so investors can make a fat profit from it. Feels fundamentally wrong to me.


Nadella was asked about this. He basically said that they have a responsibility to drive users to sites with click-throughs, or else sites won't have an incentive to be crawled.

How that pans out in practice remains to be seen.


> He basically said that they have a responsibility to drive users to sites with click through or else sites won't have an incentive to be crawled.

This is partially a BS answer. As long as websites are running Google Ads, they will have an incentive to be crawled. Fewer clicks > no clicks (which is what would happen if the site was set to 'noindex').

Google also pays news publishers to license their content; $1bn alone just for Google News Showcase [0]

[0] https://blog.google/supportingnews/#overview


At least the Prometheus model used by Bing is aware of its sources and displays them as links below the result. That could help drive traffic and monetisation. This is something that Google's Bard seemed to lack for now.


> right now the creators of the content Google/ChatGPT scrapes are usually paid via the advertisements on their pages

Are you saying Google pays them per scrape or they get paid only when users click through?


Google does not pay site owners and I thought the big complaint was that users rarely if ever click through. The instant answers keep users and their potential page impressions in Google-land.


Further, site owners only really got paid if they put Google's ads up on their website to further monetize it, or if they had a paywall, which meant their information was much less likely to show up in Google. So Google has only really paid people back for a short time, when ads were actually relatively lucrative, or if you were essentially selling something you could instantly monetize through a sale that Google pointed people to. For people just producing content, it has never been a great way to compensate them for Google aggregating their information and selling it to other people.


> ChatGPT as a search engine sounds amazing but also really quite problematic. I don't think I want to chat with Bing or Google.

I think you are correct. These are two distinct products.


I don't think Google corporate had any idea this was going on, and that way more people than normal were interested in it, many of them reporters and investors. It had the feel of a local Google dev group meeting, which is informal. I stopped using Google Search months ago because it was mostly unusable; for some things it's still good, and I use search within Google Maps often.

I think this will make voice-based UI viable (finally!) which is great. It's going to definitely eat into traditional Google ad revenue since clicking around and "impressions" are going to go way down.


The main presenter is SVP of Search, Assistant, Geo, Ads, Commerce, and Payments. So I assume corporate was aware it was happening.


Google isn't a retail store that has 'stores' and 'corporate offices'. It's all corporate, but you are absolutely right that the 'mothership' in Mountain View is very distinct from offices that are 'remote'. I don't know if Google has internal lingo to describe this but at SV companies I've been at we called it '{city where the main office is}' vs 'remote office'. So if you are at an Oracle office in New Jersey I would imagine you would say something like "I have to get up early to meet with Redwood City."


Prabhakar reports directly to the CEO of Alphabet (Sundar). This was not a local dev group event.


This little meetup moved the market!


I was watching... meh all the time and then suddenly it goes blank and disappears. Like, wtf Google. After yesterday's Microsoft presentation, this was such an underwhelming event.

And it gets really tiring how much Google tries to create an impression of how advanced their AI is (and it might be), but the fact is, nobody can access it or try it. So until we're actually able to put our hands on something, I'm calling vaporware. At least with OpenAI we get to experiment and build actual products with it.


I think this is the big takeaway from all of this ChatGPT vs. Google: non-tech people could experience first-hand how frighteningly powerful a well-crafted AI can be.


I can totally see why ML researchers, who were expecting end-user products to be notebook-like and targeted at data engineers, were taken off guard by the sudden discovery of how powerful, and usable by anyone, a chat interface is.


The moment was just weird; it went blank and disappeared, and for a second I thought it was my connection.


When did it go blank? I just watched and it arrived at an end point with the presenter signing off


The MSFT event yesterday was impressive. To see them bring GPT to scale, real-time, with citations and other new features is very cool. The new products look genuinely very helpful - they solve real jobs to be done.

Surely Google will catch up, but this was a blunder worse than I would have expected. The race is truly on

Edit: one more thought. After listening to talks from Kevin Scott and others at MSFT, it seems like their vision is copilot for everything. Kevin made it sound like they have dozens of these experiments happening right now. I think this is a winning formula and we’ll be seeing all kinds of other new launches this year.


Yeah, watching the keynote was certainly inspiring. Not only have they thought out the GPT integration, they also have the policing in order. And this was NOT even a search company.

It was also thought-provoking about how the future of predictive policing will work, because language models are very good at understanding our intentions, perhaps better than we think


Google seemed to stumble, repeating many previously announced AI features from other areas, plus small improvements, to distract from the fact that their version of conversational search is behind.

Google did not have the source links below the Bard chatbot responses. They also did not show recent news results like Bing did yesterday.

And their presenter stumbled at moments too and lacked confidence. And the demo phone was "stolen". So in execution, too, the event felt lacking...

Microsoft said in their interview with The Verge that they had been working on a raw version of their new Prometheus model since mid-2022, and it also shows those sources. And they showed the nice compose and website-summarization conversational tools integrated into Edge. There were no hints of Bard integrations within Chrome. So it feels like, counting training time plus product development time, Microsoft is at least a year ahead.


Was incredulous the phone was stolen for a moment before remembering this is in Paris.


For what it's worth, Google should take a page out of Netflix's playbook in pivoting their search business: not by adding LLM features to the existing search algorithm and results, but by forking into a new site and starting from scratch. Netflix spun out their DVD mail-in business to focus on streaming but maintained ownership of both.

This would allow them to innovate quickly on the new tech without worrying about the baggage and backend migrations the first approach would require. Otherwise it could slow things to a crawl and risk being leapfrogged by Microsoft. We saw a similar scenario play out with YouTube Shorts: it took 2-3 years of beta testing to avoid disrupting core YouTube content, and that delay allowed TikTok to firmly entrench itself.


When was the last time Google delivered something that was actually good/lasted longer than an afternoon?

They have had search, maps and gmail since forever. To me, it seems they don't know how to innovate anymore, let alone deliver.

It'll be the story of YouTube Shorts (Stadia... you name it) over and over, until they fade into obscurity.


Android? Chromebook?


Android: 2003. Chromebook: 2011.

I feel like there have to be better examples.


Chromecast, Android Auto, Wear OS

I'd personally give Google Colab an honorable mention since it seems like the go to notebook for sharing ML research/projects.


Watched it. Seemed like a panic reaction to me, like they needed to get something out quickly. The presented features look OK, but the event itself seemed like interns presenting what they did during the summer break. When it ended I was like "that's it?!?".


Why wouldn't Google be able to compete with their own version of ChatGPT? I don't understand this underlying assumption behind all the comments here


Wishful thinking? I think it's quite hard to make any prediction, and people are very biased for "political" reasons. It's entertaining to read, but I wouldn't take too much value from these comments.


How is Google competing with ChatGPT different than Facebook competing with Snapchat stories?

Much of LLM research was pioneered at Google...


Because Google currently owns the search market and all its profits. Anything Google does to compete will cause them to make less money than they do now. Product managers are going to be very scared and angry.


Having the technology or know-how is still miles away from having a live product. Google has the former; Microsoft handed over bags of cash for the latter.

It's especially awkward for them because, as the market leader in search, they can't really afford to fumble this, which anecdotally they seem to be doing.


Once the technology is ready and developed by other people, how far are they really lagging behind Microsoft? 6 months?


The bigger cost is not the time lost, but the reputational impact.

People trust MS more than Google, they just don't believe that Bing is better.


Whenever I hear about Google and AI, it's always because some HR issue has spilled out of the company into the public. That doesn't inspire confidence that they're taking AI seriously rather than having an AI group around to mark some checkboxes.


Because it would cannibalize their current flagship product first.


It will be integrated with Search, as they have announced.


Yeah, but it will cost them a lot more to run it.


Cannibalize means customers migrate to one thing and leave the other behind.


I think the old functionality of Google Search will still be available.


They could. But this isn't something that just happens overnight. The hasty, public responses by Google point towards them being caught with their pants down. If they did have something close in the pipeline, silence would be more likely.


Chatbots don't monetize as well as current search does. It's much harder to get people to click on ads. Google is worried that cannibalizing their search product will kill margins and profit.


Wow, they really are in disarray. Maybe the (ex-)Googlers in recent AI HackerNews threads are wrong in proclaiming that Google will come out on top of the AI wars due to its immense contributions to their lead-up.

I look forward to seeing what the new Bing brings to the industry, as well as Microsoft’s partnership with OpenAI for Bing, Azure, and whatever else is planned. This is an exciting tectonic shift.


It's a similar reaction to the one Intel supporters had when it became clear AMD was a credible threat. There were claims that Intel had 1-2 unreleased CPU architectures just ready to drop at a moment's notice to counter anything AMD came out with... we've seen how that played out. Google has been distracted milking their cash cow, just as Intel was, while the competition caught up. The main difference is that Google doesn't seem as far gone (yet) as Intel was.


And once again, those who did not learn the lessons from Christensen's The Innovator's Dilemma will add their names to the list of formerly-dominant companies.


Except that Intel does seem to be back on top again though?


As someone with money invested in Intel: yes, they are back on top of some benchmarks for consumer products and have been doing reasonably well overall. The issue is the enterprise segment, which is getting eaten away by AMD products that are more power-efficient and have higher core counts thanks to AMD's early bet on chiplet designs (Intel is migrating to chiplet designs this generation). AMD is also leveraging TSMC's nodes, which are overall better/more power-efficient.

This is a far cry from how far ahead they were before Zen 1 dropped in 2017. They've been burning a lot of money to keep their head above water.


My wife, who uses whatever default is offered unless I provide a compelling enough argument, said she'd actively switch to Bing if it incorporated ChatGPT even in its current form.


I’m heavily considering it myself. I know folks who are so subservient to Google that, despite Google’s objectively deteriorating results of late, they believe that something not shown on Google must no longer be accessible.

Millions of people believe in the Dead Internet theory because of it, when all they need to do is try a new search engine. I look forward to seeing if Bing’s ChatGPT integration will be enough of a selling proposition to have the masses consider new alternatives for Search.


Suit at work today showed us a spreadsheet of business things he'd been working on.

"See how each row has a description column so people who aren't familiar can understand it? I used ChatGPT for those, saved me like 3 days."

As devs, communicating specific ideas tersely to computers, I feel like we can miss how much this benefits other people. Even now, this feels like an Excel moment for people who need to produce copy about widely known things and aren't super sensitive to quality. And yes, that's a lot of people, from mailshots to HR.


I agree, I've personally used ChatGPT as a "fill in the blanks" kind of tool for non-technical prose. I provide the skeleton and structure, ChatGPT paints in the lines. It's also extremely easy to proofread and verify it's not spouting nonsense.

Would a talented writer do better? Absolutely. But I'm using it for fun and it does a much better job than I can.


No doubt! It’s how AI is marketed in all places right now, “talk and ask for things as if it’s a human”. Soon gone are the days where you have to render a raw thought into a mish-mash of Search engine-friendly keywords and synonyms in order to get the most relevant results - you can ask for things from AI as if it’s your receptionist!


I'm using both Google Search and ChatGPT in parallel. ChatGPT cannot substitute for search at this point. It is better as a co-writer than as an answering system. It can answer open-ended questions, but anything requiring specific information is questionable.

And the whole programming thing? It gives wrong results and code that at times doesn't even compile. Ask it to fix something, and it does half the work. It hallucinates functions which do not exist. That said, if you ask software questions that aren't answered with code, it does a better job.

People will say they would switch until they try the current iteration. We certainly want something better, a real virtual assistant, but what we have is not there yet


Why?


> Maybe the (ex-)Googlers in recent AI HackerNews articles are wrong in proclaiming that Google will come on top of the AI wars, due to their immense contributions to its lead-up.

This was also the sentiment in tech circles during the early cloud wars: that Google, with its supposedly superior engineering and infra, would win the cloud. That didn't happen -- Google not only did not win, but also lost its lead over Azure and now stands at #3

And I feel the same will be true for AI wars.


Here's a mirror of the event https://www.youtube.com/watch?v=npV4Kix7Td0


Seems like a disjointed attempt to convince us that Google is already an AI provider, which probably isn't wrong. Here are, roughly, the "touchstones" from the 4 or 5 speakers.

- Highlighting the bleedover between search and conversational AI.

- Highlighting the widespread deployment of Translate across Google services.

- Highlighting "Google Lens" (coming soon? I thought this has been integrated for a decade?).

- Feature announcement, "multisearch" multi-modal search such as with an image and associated text.

- Feature announcement, "Bard" -> "But" -> "Further conditions..."

- Upgrades to the "expert answer recommendations", actually enabled by "Generative AI" -- (what were they doing before, throwing darts?! when the hell was "do dogs dream?" https://www.popisms.com/TelevisionCommercial/101945/Google-A...)

- Feature announcement, "Next month", Generative Language APIs "onboarding developers, creators, and enterprises"

- "Responsible AI/ AI Principles" (https://ai.google/principles)

- Feature tease, Google Maps eye-candy / novel interface overlays using NeRF

- Google Maps AR demonstration that also looks like something from a decade ago that didn't get adopted / basically what Glass did

- Proper EV charging stations support (it doesn't already do this?)

- "Project Air View" to collect data about infrastructure's effect on air quality to benefit city planning? No idea what this is about.

- Google Arts and Culture (Chrome Experiments) / the blobs from 8 years ago revisited. I guess "Google Books" archival is just "arts and culture" now. An AR image-viewer application that still feels very 2012.

It abruptly ends. This was all very out-of-touch.


Original with better video quality - https://www.youtube.com/live/yLWXJ22LUEc


wow this is pretty hard to watch tbh


This quote from Nadella going around today is pretty humorous in light of the video: "This new Bing will make Google come out and dance, and I want people to know that we made them dance."


> This suggests to me that Google is finally getting disrupted and are scrambling of desperation because of the release of ChatGPT.

Is there a term like premature ejaculation but for believing in disruption?


premature disruption syndrome (PDS)


Maybe related to that failed event: Google's stock price is falling; it's currently down 7%.


... most of the NASDAQ is sailing by on a flywheel of stock buybacks and a financials-advertising pricing bubble that's causing services inflation.

During the State of the Union the president advocated to applause a considerable increase in federal taxes on those purchases.

Don't follow my advice though, I thought the Apple Car mirage would have poofed days after Tesla got decimated.


I'm ready to buy more puts soon. I don't think the true cost of this mistake will sink in with investors until later this year, though.

I've been pretty good at eventual sentiment prediction, but never been great at timing it.


Important to note that neither company has a public demo yet. Microsoft’s “demo” only answers select questions, which is… not an actual demo.

That said this is a hilarious mess up because Microsoft actually had a presentation.


>That said this is a hilarious mess up because Microsoft actually had a presentation.

They are the company that produces PowerPoint, after all... of course they know presentations! :D

I didn't see the Microsoft presentation, but I hope it had a PowerPoint!


The reaction to this compared to the MS presentation yesterday really shows the difference a good presentation can make.

It will be interesting to see how well the products work when we get to use them for real.


One key advantage Microsoft has is they get to say "we're going to incorporate that fun new tool/toy you've been playing with for a couple months now". We already have a pretty good idea of what to expect.

Google is left saying "we have one too, and ours is even cooler", which in the absence of a viable demo isn't very compelling.


For all the folks saying "Google is getting disrupted": I'm glad to see what's going on, but I wouldn't hold my breath. Most importantly, by all accounts Google does have tech internally that is better than (or at least as good as) ChatGPT. They have been slow to release because of reputational risk and having to deal with internal (and external) pressure about amplified bias.

I think the best analogy here is when Microsoft fought tooth and nail to win the original browser wars with IE, then rested on their laurels and did nothing for years, until Chrome finally came along and ate their lunch. It definitely remains to be seen whether other companies will end up eating Google's search lunch, but I do think that if Google can get their shit together organizationally, they'll still be able to compete.


> if Google can get their shit together organizationally that they'll still be able to compete.

This is a really big if, and I think it's what has so many calling it against Google already. Google as an organization has been dysfunctional for years, and everything we're seeing so far indicates that the organization does even worse under pressure than it does when it's coasting.

I don't doubt that they have really good tech, but I have no confidence in Google's present ability to execute anything.


Excellent point. Perhaps Google will have (is having?) the equivalent of the Apple-pre-Jobs-return years or the Microsoft-under-Ballmer years, where they just fumbled around for close to a decade before things were broken enough that they went into it with more of a "what do we have to lose at this point?" mentality.


Does better tech matter that much if you can't execute? Microsoft has always gone with good enough instead of striving for perfection. It's taken them pretty far.


> by all accounts Google does have better (or at least as-good-as) tech internally that can match ChatGPT ...

I keep hearing claims of better tech. What's the source/proof of that?


For people in this thread convinced that Google is scrambling to come up with a ChatGPT competitor: that’s only true on the product side. Internally, LaMDA has been available with a ChatGPT-style interface for years (maybe more like 18 months).


The product side is all that matters here. If Google cannot productize it, then all they did was do other people's homework for them. And it increasingly looks like Google has zero plan for productizing it.


To compete, Google has to disrupt its own product, which happens to be one of the most lucrative flywheels in history. It has everything to lose while MS has everything to gain by augmenting Bing and indeed its other products.

That’s what makes it such a challenge for Google, despite being one of the few companies who can actually pull it off at a technical level.

Both companies will also face legal issues, reputation risk (see MS’s Tay bot), and challenges scaling this tech to billions of non-paying users. MS has been more willing to march forward, Google had to be dragged into it by the competition.


Product is all that matters. You can have the world's best AI but if it's horrible to use who cares, right?


Did anyone outside of Google really have a chance to play with it?


I hope that some contrarians bring back keyword searching. I fear that the expanded DWIM of these “conversational” searches will make the results even worse.


I wonder if Google is going to be as successful as Intel was at selling their "turnaround" story when their monopoly was threatened by AMD. Intel managed to get the media to run with it even though it was based on nothing and turned out to be nothing.

It might not be as simple this time: people don't understand CPUs well, but everyone can see that ChatGPT is something new and special while Google Search deteriorates every year.


Maybe they just laid off everyone who knows how to run an event?


Blatantly.


Wow what a train wreck. Didn't have the phone to show it off in real time. So much filler stuff. This is incredible. They are scrambling to head off Microsoft. I've never seen a presentation by Google like this before.


Schedule a huge announcement right after a disruptive technology from a competing company, right after mass layoffs, when morale is probably at an all-time low and everyone has one foot out the door. Clearly good decisions here.


ChatGPT will kill Google in 2 main ways

* Creating a better search experience without wading through the plethora of BS articles.
* By enabling marketers to write BS articles faster, which will further ruin Google results.


No it won't. Google's Bard (aka LaMDA with search results as input) performs almost identically to it from what I've heard, and it's already being tested internally.


They have the technology, but do they have the business model?

I am not able to see a way Google can maintain their dominance in search. They would have to eat into their current revenue if they want to play this new game.

A ChatGPT-like interface means they cannot show banner ads for their advertisers. I don't think Google can move fast and realign their whole business overnight.


I don’t see why ads can’t be displayed on the side, like they are now in a regular search, and updated as the conversation happens. AFAIK, the cost of an AI conversation far outweighs the cost of a search, so Microsoft/ChatGPT will have to figure out monetization at some point, too.


Since there is no corresponding article to read, could we get a more helpful headline? What does "Private/Deleted Immediately" mean? Did they delete it, or just make it private?


I think there's too much doom and gloom about Google not being fast enough to respond to Bing + ChatGPT.

We know from history that being first means nothing in the long run. If anything, being first often means losing to something more polished and more considered.

Don't worry, most people will still use Google, and Google will eventually release their own version.

This doom and gloom reminds me of all the "Apple is over" because Samsung came out with a tablet and smartwatch before Apple did.


Any other GCP customers feeling nervous? Is GCP stand-alone profitable now, or if Google takes an e.g. 50% hit in advertising revenue are we going to see services being killed off?


GCP isn’t stand-alone profitable. They stated in their earnings that Cloud (including Workspace, which has a low operating cost) is not profitable.


Killing GCP would be so stupendously, incredibly short-sighted. I hope there's no truth to the reports and internal leaks from 2019.


That would be on brand.


You've got it the wrong way around, no? If ads revenue takes a hit, non-ads businesses like Cloud become more important, not less.


And as a GCP customer, you might worry that GCP prices will go up to hit those all-important revenue targets, or that some of the lowest-engagement GCP services will be EOL'd.


Or that the quality of, e.g., customer support will decrease, like it has been the case with Google Fi, in my experience.


The video is not private or deleted. Isn't this it?

https://www.youtube.com/watch?v=yLWXJ22LUEc


The original link was to https://twitter.com/googleeurope/status/1623251768099409920, which originally embedded a private video in the Google Europe youtube account. Looks like that has since been redirected to the official video.


They lost $100 billion in value because of a bad demo. OMG


I think the bigger problem is how does something like ChatGPT fit into Google's business model for search.

A conversational interface and/or giving immediate answers may not be the best option in terms of ad revenue. Users reiterating their search queries means more opportunities to show ads. When users scan the page for the best results, they might be more likely to click on ads with clever wording.


> I think the bigger problem is how does something like ChatGPT fit into Google's business model for search.

Alternatively, how does ChatGPT fit into the usage model for search? I know it's taken on faith in this forum that somehow Google[1] search is bad or failing or about to be killed or whatever.

But... it's actually just fine? It does what the market wants, more or less perfectly. Everyone uses it to find the 99% of stuff that everyone searches for (commercial products, news, sites-you-forgot-the-name-of, etc...). What exactly is AI going to do better when all I want to know is where the Steam page for Skyrim is, or how much a pour-over coffee filter goes for?

The truth as I see it is that "search" simply isn't a market that's disruptable in the way posters here want it to be disrupted. Find 10 people on the street and ask them whether they can find stuff on the internet and they'll say "yes, duh?". This is like asking how AI is going to disrupt freight rail or high intensity agriculture. Those too are stable, already-optimized industries without a lot of room for genuine improvement except on cost (and recognize that training and running models of this scale is still significantly more expensive than crawling the web!).

Now, none of that is to say that there isn't an application for a generalized text model that can answer questions in the market somewhere. I think there probably is. I just don't see how it has anything to do with "search". Having a personal butler to do your research doesn't get you to the Skyrim forums any faster.

[1] Disclosure: I work there, but on firmware and nowhere near search. Opinions entirely my own.


> Everyone uses it to find the 99% of stuff that everyone searches for

You are probably right. What you are missing is that the searches currently being made are a small subset of the total searches that could be made (my guess is <10%); the rest will be unlocked by this new search paradigm, enabled by LLMs.

For example, Google is 25 years old, has the smartest people and the best search technology, and yet cannot answer a simple question my kid would ask, like: "Do more people live in Madrid or Tel Aviv?"

What it can do is give you ten links; then you open link #1, find out how many people live in Madrid, go back, open link #2, find out the same for Tel Aviv, and then figure it out on your own. This is clearly not the best way to do this, agreed? AI will do all of this automatically and unlock a whole new category of "searches" that did not exist before.

Another example is "Give me the names of all Texas-based CEOs that run companies with more than 200 employees.".

The only reason this search is not a thing today is that Google is not capable of satisfying it, at least not directly (20 searches and numerous ads later you might eventually find an answer, but that is not the point). Queries like this are a real need and belong in the vast pool of searches that will (eventually) be unlocked by AI.

Final example: "Summarize the latest research on quantum gravity for me". Does the user want to do 20 searches and read hundreds of pages of documents, or do they really want an AI to produce a summary with citations, 5 seconds later?

Think of Google today as the best Nokia right before the iPhone showed up.

> if alternative business models haven't disrupted search in the last 20 years, why are they going to start now?

Because now is a good time: the disruptive technology is here, and ten links supported by ads and tracking had a good run, but civilization is moving beyond it.

The age of PageRank is over. [1]

[1] https://blog.kagi.com/age-pagerank-over


> Alternatively, how does ChatGPT fit into the usage model for search?

I don’t think it does fit into it, but search is a means to an end. What people want is to find stuff.

For that, it might work. I also agree with you that Google already works for it in many cases, though. For example, if you ask Google for the weather in Foo, it first shows you the weather in Foo, then ads from advertisers who claim they can answer that question, and only then search results for hopefully relevant sites.


You don't think a ChatGPT product would... show ads? Something has to fund it, right? I mean, I get that the response will be that it will be supported by some other business model (hardware sales a-la Apple, windows licenses for MS, etc...). But... then it's not an argument about AI at all but about business models.

So... if alternative business models haven't disrupted search in the last 20 years, why are they going to start now?

I just think this is turning into an HN echo chamber. AI will absolutely be revolutionary and disruptive, but it's a long road from "revolutionary disruptive technology" to "Mighty Hammer That Will Destroy All My Imagined Enemies".


I agree with you, but Google is also a leading player in the conversational interface ("Ok Google") and in business software, two areas where language models could have a huge impact.

Is Google deliberately slowing down any major improvements in these areas because they are petrified about the idea of losing revenue in their ad business?


It's not an either/or.

If you saw the clips of Bing with ChatGPT, the chat and traditional search results are next to each other. It's not a replacement, it's an addition


Google is finished. The market has been ripe for disruption and it has finally come. It’s just amazing they never saw it coming. Or maybe they did and underestimated the threat. Too late now, they just fired a bunch of people and have bred a culture of laziness funded by their cash cow advertising products. They don’t know how to innovate anymore.


I was at Google half a decade ago. Multiple SVPs expressed in private a concern about search revenue growth and search being disrupted. That was likely the reason for all the now-dead products: an attempt to diversify revenue.

Meanwhile the company was focusing on DEI over skill, and thoughtfulness over delivery. They preached 10x but refused to pursue 10x ideas. They tried to create an accelerator, Area 120, which focused too much on me-too products.

It was clear to me then that google was repeating the same mistakes of Yahoo, IBM, HP and other prior tech giants. When it was brought up, nobody listened.

I can only think that the reason for this was Sundar, Urs and other executive leadership. They need to be replaced if google is to excel in the future.


It's been going on much longer. Google's revenue was disrupted by Facebook. Then they scrambled to get Google+ out the door, which had absolutely no reason to exist beyond being a Google-branded clone of Facebook. They effectively lucked into Android paying off: when the iPhone came out, Android was not going to support touchscreens; it was designed for physical buttons only.

That's the problem with Google. They don't have vision. They need someone else to show them the future which usually leaves them racing around trying to find their place in the new world. And then when they do put out a product it's always some half-assed thing that feels like a beta product of some already existing fully-finished product that is wildly popular. Sometimes it works out. Chrome was a joke when it came out. It took many years before Google Maps became a solid, distinct offering. But they want minimum viable products that ape already existing things and want them to be instant hits or they kill them off. It's the Netflix strategy.


> Meanwhile the company was focusing on DEI vs skill, and thoughtfulness over delivery

That has nothing to do with any of this. Google's doom was due to their total lack of empathy for their end users and customers. They launched stuff, and when it did not immediately produce gigantic profits like search ads, they just shut it down in the users' faces. One can do that once, twice; the third time people won't even bother to use that company's products. Such is the case with Google.

https://steve-yegge.medium.com/dear-google-cloud-your-deprec...

The level of public distrust can be gauged from how Google Cloud executives are trying to win back customer trust by giving all sorts of guarantees, and yet failing to do so.


I’ve done consulting at Google and know many Xooglers, and I can confidently say Google considers users nothing more than cattle to be shuffled into products where they can be milked for profit. They have no more empathy for a user than they would have for an animal destined for the slaughterhouse. Remember that next time you use a Google product.


Blunt. But feels accurate.


>> Meanwhile the company was focusing on DEI vs skill, and thoughtfulness over delivery

> That has nothing to do with any of this.

I wouldn't say it's central, but it's definitely not nothing. When a lot of your recognition comes from DEI efforts, "me too" and other such efforts, you can't expect proper focus. There are plenty of irrelevant internal drives where just aligning with them benefits you greatly, instead of aiming to quickly deliver an excellent, user-focused product and then making it even better over time.


> I wouldn't say it's central, but it's definitely not nothing. When a lot of your recognition comes from DEI efforts

I suspect that those efforts were a tiny drop in the ocean compared to what Google spends on product investment. If they were significant enough to affect profits, Google would have reduced them to nothing but PR shows...


Did anyone snag a copy of the event?


https://www.youtube.com/watch?v=npV4Kix7Td0

Responses on Twitter are horrific: https://twitter.com/googleeurope/status/1623251768099409920

This is the first non-technical Google presentation I've felt compelled to archive

I think folks are underestimating the scale of the change in medium that ChatGPT represents. For Google it's not simply a matter of popping out a compelling clone; they must also preserve or replace the lost ad revenue, which represents 80% of their income, otherwise things quickly start to look grim. In contrast, ads are only 36% of Microsoft's income. If it turns out that LLMs are only economical as a subscription offering, Microsoft is already better placed to, e.g., bundle NewBing with its 365 subscriptions.

I think subscription-based search might be the scariest outcome. It would be something like an iPhone moment, where the entire medium becomes stratified into commodity/ad-supported/low-income users vs. premium/ad-free/high-value users. Imagine Google's legacy as the spam-riddled search engine low-income folk use, filled with ads for payday loans, used-car dealers and predatory personal-injury lawyers.
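As a rough illustration of why a pure subscription model struggles to replace ad revenue, here's a back-of-envelope sketch. Every figure in it (revenue scale, user count, fraction willing to pay) is a round-number assumption for illustration, not an exact reported number:

```python
# Back-of-envelope: what a subscription would need to cost to replace
# Google's ad revenue. All figures below are rough assumptions.

ad_revenue = 224e9      # assumed annual Google ad revenue, USD (~2022 scale)
search_users = 4e9      # assumed number of regular Google Search users
paying_fraction = 0.10  # assumed share of users willing to pay anything

needed_per_payer = ad_revenue / (search_users * paying_fraction)
print(f"~${needed_per_payer:.0f}/year per paying user")
```

Under these assumptions each paying user would need to spend on the order of $560/year, which is why an ad-free premium tier would likely serve only high-value users while everyone else stays on the ad-supported product.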


Just to add: the cost per query may not be a temporary blip. ChatGPT's cost per query is based on a static base model trained over a year ago. Imagine the energy and compute that might be required to add meaningful personalization or an up-to-date index, or if there were a repeat of something like the search-engine document-count wars of the late '90s/early '00s. Training ChatGPT is estimated to cost between $2-12 million per run ( https://old.reddit.com/r/MachineLearning/comments/hwfjej/d_t... )
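For a sense of where that per-query cost comes from, here's a hedged back-of-envelope estimate of serving cost. The model size, token count, GPU throughput, utilization, and hourly rate are all assumptions for illustration, not OpenAI's actual figures:

```python
# Back-of-envelope sketch of LLM serving cost per query.
# All numbers are assumptions for illustration only.

PARAMS = 175e9                 # assumed model size (GPT-3 scale)
FLOPS_PER_TOKEN = 2 * PARAMS   # ~2 FLOPs per parameter per generated token
TOKENS_PER_QUERY = 500         # assumed prompt + completion length
GPU_FLOPS = 312e12             # A100 peak FP16 throughput
UTILIZATION = 0.3              # assumed real-world serving efficiency
GPU_COST_PER_HOUR = 2.0        # assumed cloud rate, USD

flops_per_query = FLOPS_PER_TOKEN * TOKENS_PER_QUERY
gpu_seconds = flops_per_query / (GPU_FLOPS * UTILIZATION)
cost = gpu_seconds / 3600 * GPU_COST_PER_HOUR
print(f"~${cost:.4f} per query")
```

Even if the true number is only a tenth of a cent per query, multiplied across billions of searches per day it lands in the hundreds of millions of dollars per year, before any retraining, personalization, or index-freshness work.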


The only futuristic thing Google did was calling themselves Alphabet. I'm pretty sure they didn't think search would be the thing they'd fail on, though. Wonder if anything else they have will keep them around another five years?


I couldn't watch all of it. What was it about? It seemed lame, and apparently some demo did not work.

Google Maps Immersive sounded pretty much like Street View. You can apparently use your phone to find nearby cafes. These are nice features to have... in 2013.


No need to watch it because it was an introduction to existing Google Lens features.


The audio was horrible... something wrong with the microphones, I think.


Love to see legions of corporate managers panicking. Love to see it.


Where's the showmanship? There's no need to get into details, just get excited about the product while you're showcasing.


Given the track record of products, could we see Google's management change in the next three to six months?


Google is years overdue for a change in management. Google has been coasting on the strength of its engineering talent, but their execution on rolling out new products has been one disaster after the next. Hopefully this is a wake up call for the board of directors at Alphabet.


Gotta play cover for blocking other streams


This is where you start shorting GOOG.


> This suggests to me that Google is finally getting disrupted and are scrambling of desperation because of the release of ChatGPT.

Even if Google is not getting disrupted, this suggests that Google is finally at the stage where its culture is hindering its progress and causing subpar performance. I hope I'm wrong, though. Google is such an admirable company. Its search engine has forever changed the world. The launch of Gmail with its 1GB inbox was a game-changer that shocked competitors, and Yahoo! played catch-up for over two years. Google Maps revolutionized the world of electronic maps and started a movement in frontend development, as Google showed the world its amazing application of AJAX. Chrome dethroned Internet Explorer and Firefox to become the dominant browser, and the company's three seminal systems papers (GFS, MapReduce, and BigTable) spawned a multi-billion-dollar industry. Additionally, Google's advances in machine learning just keep pushing the industry forward. The list can go on...


After wrestling with Google Ads for a year my admiration went to zero.

Google is mainly running a scam these days, with completely black-boxed systems that absolutely fleece the little guy when they're not doing outright criminal auction-price manipulation.

I don't know why this hasn't gotten more media attention. Probably because of Stockholm syndrome in the publishing sphere.

Yes, they pioneered amazing things, but where they earn their money now is through a labyrinth of dark designs and obfuscated systems in the borderlands of legality.


Advertising is like finance: there are entire teams of people whose full-time job is to make money in complex markets. SMBs are not going to be able to casually compete.

Also, if it weren't a black box, scammers would be fleecing everyone.


Good points. The problem, though, is that the barrier to entry is now so high that only the wealthiest can compete: the ones with a huge trust fund, lots of VC money, or institutional backing.

Most smaller players are getting eaten or drowned by the way Google Ads has implemented its changes. It was not like this 3 or even 10 years ago, especially in the tech sphere.

So yes, the state of Google could be a black mirror to rising Gini coefficients, HFT, armies of lawyers, and private wealth management that create systems so complex that no one, not even an upper-middle-class person, will be able to enter or stay in the market.

AI is probably going to make this exponential. The ones with the money will be able to hire the tech and wealth expertise to grow while the barrier to entry just keeps rising.

Or is there a less dark outlook for smaller or mid-sized entrepreneurial endeavours?


And with all the admiration for Google, I do hope Google can fix Google Docs. I miss Quip's amazing support for embedded spreadsheets and formulas, beautiful math equations, rich text in comments, and the ability to link to any line or any spot on a diagram.


There is a LaTeX-ish math plugin for Google Docs.


Yeah, unfortunately not every company allows that plugin. And I wish Google Docs would adopt the editing style of Obsidian: when in focus, the formula is in editing mode; out of focus, the formula is rendered.


This is what happens to all companies. It's a natural progression. Some new companies will wow us in the way Google did in the mid-aughts.


Definitely. Bell, DEC, Sun, Intel, IBM... Google feels refreshingly different, probably because it was a later generation. It's actually good for the US to have such a lively natural progression. It's like a forest springing forth with new life in front of a sick tree's fading frame.


This is why the ends should never justify the means.


All that to create a privacy destroying, massively fraudulent ad tech moat. Yeah, admirable...


It could be worse (Oracle)


It's not because it could be worse that we should accept things as they stand.


What’s the worst thing Oracle has done, aside from egregious enterprise pricing?


their API lawsuit?


Or Apple, who shares more user data with the CCP than most American tech companies: https://www.nytimes.com/2021/05/17/technology/apple-china-ce...

All FAANG are suspect these days


> Apple, who shares more user data with the CCP than most American tech companies

Whether intentional or not, your comment is very misleading because it omits the fact that this only applies to Apple's Chinese customers in China (which the linked article mentions at the very beginning). Apple doesn't share data on users outside of China with the CCP.

Also, this is literally a legal requirement for operating there; it isn't as if Apple just decided to do it on their own. And Apple's actions regarding this weren't really a secret either.

Disclaimer: not an Apple employee and have no connection to them outside of being a customer.


Nothing I said is misleading, intentionally or otherwise; it's right there in the source I linked. Apple's Chinese users are still Apple users.

Google employees rejected that same requirement to operate in China, showing that Google cares more about user privacy than Apple does. Apple has shown they care more about profits than privacy, which is the point.


We have no idea what Apple and Google share unofficially. Remember the Apple China bribery scandal? No? Of course not, it was barely a blip in the news.


> We have no idea what Apple and Google share unofficially

There is a phrase that describes "we have no evidence of them doing this, but they might be doing it, because there is no proof of them not doing it" pretty succinctly - baseless speculation.

> Remember the Apple China bribery scandal? No? Of course not, it was barely a blip in the news.

Are you referring to a Foxconn employee being hit with allegations of bribing some of Foxconn's suppliers back in 2013[0]? If so, I remember it. However, I don't see how one can in full seriousness call it an "Apple bribery scandal", given that Apple wasn't aware of it happening and none of the people involved were Apple employees.

0. https://qz.com/42409/why-apple-is-probably-in-the-clear-over...


Nope I'm talking about Tim Apple's $275 billion deal.


Technologically impressive is maybe a better way of putting it. I.e. a Death Star would be very technologically impressive.


tell us how you __really__ feel


[flagged]


Google has the resources to do almost anything, well, the financial resources at least. Time will tell whether they have the intellectual and managerial resources to find a path out of the hole they've dug themselves into.


It just seems like they're in a bit of a panic. Microsoft partnering up and releasing a search product so fast caught them sleeping a bit. MS pulled off a master stroke in prioritizing the release; they knew Google would be caught flat-footed if they could pull it off.




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: