
> Also on a total sidenote, if you told me 5 years ago that Bing would be a serious threat to Google I would have a laughed.

If you had told me 3 months ago that Bing would be a serious threat to Google, I still would have laughed. That's how much impact the ChatGPT integration had for Bing -- overnight.



Google has spent the last 10 years making Google worse. They achieved this in large part by making the whole Internet worse [0], but a search engine with results of the quality of Google 10 years ago would be a serious competitor.

[0] For example, Google used to have a fairly strictly enforced rule that indexable content had to actually be visible to an unauthenticated user. The current crop of sites that have apparently useful content in snippets but that hide it when loaded would have been penalized, possibly severely.


They really have a flywheel of internet destruction. The fact that they own the entire display advertising business + search and ping pong people from a 50% paid search listing to a CPM arbitrage SEO website and back is just gross.


I've been using DDG and Brave for a few years now, and I went back to Google yesterday because it's the default for Chrome on my phone. I was startled at the difference in quality, especially with Brave vs Google. Brave typically prefers long-form writing, and the quality of the articles is typically a lot higher than what I found using Google.


Brave Search is like Google from 2005/6.


this comment got me to switch from google, thanks


It’s almost as if there is a downside to treating your users like shit. Google is such a weird company.


It’s an advertising company. Their destiny is to reach parity with Clear Channel Communications in terms of ethics and quality.


They really feel like they've reached a 'Microsoft of the late 90s' phase.


I'm afraid we're not there yet.


While I have many reasons to think Google made the Internet worse (AMP, censoring search, forcing localized search results, privacy, etc.), I don't think the hidden content is their fault but rather the publishers'.

Publishers blamed Google for declining revenue since they had to make their content openly accessible and therefore free in order to be visible to users on search. The EU tried to make Facebook and Google pay publishers to account for this. I think allowing paywalled content was a compromise to prevent this legislation from passing.

That being said, I agree with the publishers, especially since, hypocritically, Google and Facebook strictly disallow scraping of their own services and litigate against those who do.

Google could easily fix this by putting a symbol or label on paywall-related content so you know not to click it.


Huh, do you have a source for the EU issue?


After 2016, Google went on a crusade to save the universe by stamping out all misinformation and seems to have highly deranked all forums and blogs in favor of mainstream sites. This has made Google unusable for any political or controversial subject matter. This has also made their LLM efforts too cautious as they can't handle the political controversy of an LLM and can't verify that it will never return anything that offends anyone.

It does seem like Larry and Sergey are finally back trying to fix the excessively politically sensitive and overly cautious culture. Larry, having disappeared to Fiji for several years, must have been pretty bored or annoyed with running it.


It’s not just Google.

Yesterday night I was trying to recall the name of a particular SCOTUS case that I had (semi-incorrectly) thought was connected to the 14th Amendment.

Bombed out on the SERPs two or three times, so I started looking for a very old article by Thomas Sowell that randomly introduced me to the case a few years back. I knew it was something regarding the unintended consequences of the Civil Rights Act.

Neither Google nor DDG gave me ANY useful results for 3-5 variations on “scholarly critique of the Civil Rights Act”. I eventually remembered the Sowell connection and even adding his name in quotes only got me to a page deboonking the article I was searching for!

I’m very far from being a right winger or whatever so I’ve never really experienced this sort of thing before, but my god is the “no no, you don’t REALLY want to search for THAT” silent censorship out of control. Millions to billions of people use these search engines daily and assume they’re mildly biased but otherwise shining portals to the sum total of human knowledge.

In hindsight, I suspect I would have immediately found the SCOTUS case with my initial search on 2010 era Google. Very much seems like my starter queries were triggering the Bad Think Detected algorithm.


Yandex seems to have a bunch of links to what you're looking for. Sadly, Yandex is the only decent search engine I've found for these sorts of topics.

https://yandex.com/search/touch/?text=scholarly+critique+of+...


> deboonking

*debunking


Original spelling is intended, a purposeful and humorous misspelling introduced by a meme.


Aha, thanks.


People are acting like Bing has created some big market success and Samsung is running to them because the tech is better. But the alternative explanation is simple: vendors are always looking for reasons to (threaten to) put mediocre non-Google search as default on their products, so they can extract more money from search engine providers. Samsung sees the current hype around Bing/Google/AI as a convenient negotiating point, since the media will portray this as “Samsung switches to awesome AI search” rather than “Samsung forces its users to use crummy Bing.”


The users who pay attention to search quality aren't impacted by the default search option. They will just go back to Google or whatever alternative they wish.

The users who the default engine really locks in are the ones who just mostly click ads and have no idea they are ads. When they use Google they are mostly clicking ads anyways, so the search result quality will appear about the same with Bing. Maybe even better if there are fewer ads.

For Microsoft, getting more users on board means higher ad volume which brings more advertisers who are going to spend the time to manage Bing ad campaigns. You can bet they've done the calculations for how much money they can spend at what price. Ultimately that leads to higher monetization and then Microsoft can pay other companies (like Apple) to switch the default engine.

Google is a multi-trillion dollar attention tax that just sucks money from the global economy. They've been wildly mismanaged since Eric Schmidt's CEO tenure ended. It's been a long time coming, but the timer is running out of sand for Google fast. The revenue may take a long time to peak and decline but when they start missing their quarterly earnings it will be a bloodbath for Google's employees.


I think the problem is Google is also pretty crummy. AI is in a golden age without poisoned data right now. But wait until Bing gets popular and blackhat SEO types start poisoning ChatGPT. It will stop being useful, and I suspect in a way that will be unfixable, since these language models are so hard to wrangle.


There's another side to that coin: at the moment nobody takes any issue with OpenAI doing filtering and curation in deciding what is part of their training data set, aside from perhaps the anti-bias crowd. "AI neutrality" is not a topic. Yet.


I've already seen that several times with image generation. The most recent was an article commenting on how the American smile was polluting generated photos. People can't decide what they want. Do they want licensed, curated commercial photos in the database, or do they want search engine style neutrality? You really can't have both.


> People can't decide what they want.

You mean, so that you can satisfy everybody at the same time?

No, sorry, the world doesn't work this way.


Musk was on the air this week talking about how the current AI is biased on the left as he promotes his new AI company


Let's laugh and watch him make the "truth social" of AI, hopefully setting fire to another few billion in the process.


People laughed at paypal, tesla and spacex. Now he owns twitter.

Being the richest gives someone a lot of power to make others look foolish


It's always nice to hear Mr. Musk has not given up his cannabis habit.


…or maybe he has!


> without poisoned data right now

It already seems poisoned to me. It's popularity over correctness, because LLMs don't know semantics.


Your tone sounds like you're contradicting a point, but your actual comment is completely in line with the idea that Google has been caught by surprise that Bing is good enough (as measured by market sentiment) to be a credible threat.


I was not at all surprised, to be honest. ChatGPT took over about 90-95% of what I would previously turn to Google for. Since Microsoft had a hand in OpenAI, it was just a matter of time...


How do you manage the hallucination problems, or do you not seem to be having them?

I’m blown away by the competence of the language model but its willingness to make up facts makes me leary.


95% of what I search for, I can independently confirm once I have it in hand. For the other 5%, I avoid ChatGPT.


Reminds me of Knoll’s law of media accuracy:

“everything you read in the newspapers is absolutely true, except for the rare story of which you happen to have firsthand knowledge”.

Humans are pretty good bullshitters too!


Newspapers print their errata though. Does ChatGPT ever admit to making a mistake?


All the time, but only when prompted. You have to have a conversation with it and provide more detail which exposes the flaws in its previous answers, then it will happily apologize for its mistakes. (For me, this usually looks like me pasting an error message that its code caused.)

I really hope they find a way to have it apply context from future conversations such that when it learns the error of its ways it emails you a retraction, but that's probably a ways out because humans can't be trusted to not weaponize such a feature into sending spam.


But it doesn't learn from its error, and that's the whole problem. It only responds to 'accusations' from the user in the most common way, which is apology-like.

The weight of phrases like "you are wrong" is in fact so strong that it fools ChatGPT into apologizing for its 'mistakes' even in scenarios where its text was obviously correct, like when you tell it 2+2 doesn't equal 4.


Well yeah, it's an imperfect tool, and you have to treat it as such. Probably there's a lot to be discovered about how to use it most effectively. I just don't find that it's more problematic than the other tools in my box.

Sure, grep has never flat out lied to me the way chatGPT does, but it's a statistical model, not a co-worker, so I don't feel betrayed, I just feel... cautioned. It keeps you on your toes, which isn't such a bad state to be in.


It can pick up on inconsistencies, especially when pointed out, and can say it was wrong, and try to reconcile the information.


Errata are extremely rare. Gross misrepresentations and errors are not, unfortunately.


Bing generating snippets of text from websites isn't going to generate hallucinations like you think it is.


It totally would, if Bing doesn't return relevant results.

I've asked BingGPT about myself and it gave me three answers. One was more or less on-point (it found my linkedin profile), and the other two were hallucinations. What happened was Bing found two unrelated pages and GPT has tried and failed to make sense of them.

Either that, or I am a prince whose name means "goose" in Polish.


Problematically, they're much better bullshitters than ChatGPT. And if you used Google to find them, they're probably either selling you something, or you had to navigate a minefield of people who are in order to find them.


It’s great too when you don’t know exactly what to search for, especially for acronyms.


> its willingness to make up facts makes me leary

I see you haven't met humans


We can downvote human comments and proposed solutions (on Stack Overflow, HN, etc...), and also I don't expect colleagues to lie to me when I ask them about a feature or how to do xyz in a language or library or framework.

Bing, IIRC, has a way to provide feedback, not sure how useful it is for today's users and if it will be able to solve hallucinations one day.


I try to always give Bing+ChatGPT chat or search results a thumbs up or a thumbs down. I am using the service for free, so it seems fair for me to take a moment to provide feedback.


When google sends me to a website, I can at least judge the credibility of a website.

When ChatGPT tells me something, I have no idea if it's paraphrasing information gathered from Encyclopedia Britannica, or from a hollow-earther forum.


> When ChatGPT tells me something, I have no idea if it's paraphrasing information gathered from Encyclopedia Britannica, or from a hollow-earther forum.

Or it's something it just hallucinated out of thin air.


Which is why you use one of the AI search engines that makes it cite its sources.

phind.com has been incredibly good for me.


This is a real question, so I apologize if it comes off as sophistry:

Is the work of judging the accuracy of a summary not just the work of comprehending the non-summarized field?

For example, a summary could be completely correct and cite its facts exhaustively. Say you're asking about available operating systems: it tells you a bunch of true info about Windows and OSX, but doesn't mention the existence of Linux. Without familiarity with the territory, wouldn't verifying the factuality of each reference still leave you with an incomplete picture?

At a slightly more practical level, do you actually save any time if you've gotta fully verify the sources? I assume you're doing more than just making sure the link doesn't 404, as citing a link that doesn't say what it is made out to be isn't exactly a new problem, but at that point we're mighty close to the traditional experience of running through a SERP.

Finally, even if you're reading all the links in detail, isn't that still a situation prone to automation bias? There are a lot of examples of cases where humans are supposed to check machine output, but if it's usually good enough, the checkers fall into a pattern of trusting it too much and skipping work. Maybe I'm just lazy, but I think I'd get less gung-ho about verifying sources over time and eventually do myself a mischief.

I'm asking because I've been underwhelmed by my own attempts at using LMs for search tasks, so maybe I'm doing it wrong.


The average human is going to give me the wrong answer to a question I ask him.

But I'm generally not interested in asking an average human. I'm interested in asking someone who knows their butt from a hole in the ground in whichever topic I'm asking them about.


Humans are actually quite reliable. Wikipedia is that trust manifested. Also a human liar knows they are lying, AI doesn't know it's saying something wrong.


Humans can also give wrong information without realizing it.


That's why we call it hallucinating rather than lying. Confusing the two is conceptually unhygienic.


What I've found is that until you see it really hallucinate like mad on a subject you know well you don't realize how crazy it can be.

Especially when I talk to it about fiction and ask questions about - for example - a specific story and you see it invent whole quotes and characters and so on...it is a masterful bullshitter.


Citations! I never trust Bing Chat's answer. The links usually quickly tell you if the answer is hallucinated. Basically: treat it as a search engine, not an answer engine. Follow the links like you would on any other search engine. Those links will still be more relevant.


It happily made up citations for me. In a follow-up, I asked it not to, and to please use only real papers. It apologized, said it would not do it again, then in the same reply made up another non-existent but plausible citation.

Checking the links is a good practice.

I feel like we just created an interesting novel problem in the world. Looking forward to seeing how this plays out.


Are you talking about Bing Chat, which cites actual web pages it used to make the summary, or ChatGPT, which is a very different beast and relies on built-in knowledge rather than searches?


Good call. I was using ChatGPT.


Sounds like you should be doing the research yourself but are relying on an untrusted source to feed you answers? I don't think we're there yet...


On the contrary, I was doing a calibration, asking about something to which I know the answers very well. To see if it was trustworthy.


Phind gives you citations and even lets you ignore certain sources.

https://www.phind.com/


Not any use to me (not a developer), but it's cool there are niche search engines for stuff like this.


Ignore the tagline, it's a general purpose search engine with some features for code.


Yeah that seems an unnecessary tagline - it works great for everything in my experience


> leary

Leary is a rare variant spelling of leery.

I mention this, because you seem to care about correctness.


That was a problem for ChatGPT3. Not so much for ChatGPT4. I also switched to ChatGPT4 for most of my searches. I only use Google now as a shortcut for navigating to a specific website.


GPT4 got a lot better at avoiding hallucinations, in my anecdotal experiences. But it ain't free yet.


Hallucination problem is easily solved by using it as a code/config template or starter, and actually vetting its output. It's still a huge time-saver, even with the vetting time involved.

Cold War strategy. Trust but Verify.


Let the car drive itself, but do all the work of driving anyway so you can take the wheel when it screws up.


Is this really a problem?

What could be more 2020s than a post-truth search engine?


Leery


Can you give examples of the average pre-ChatGPT Google queries you were doing, that ChatGPT can fully handle?

Personally — and having not tried ChatGPT for this — I don't think ChatGPT would do well with resolving the kinds of queries I consider Google "good at."

To me, the place where Google wins over Bing, DDG, etc. is when I know there must exist some page that uniquely talks about some extremely niche overlap of concepts; but I don't know any specific "natural key" keywords to refer to that overlapped set of concepts, and instead only have a "cloud of highly-correlated keywords for the individual concepts involved" to throw into the search box.

For example, if I'm trying to conjure from the aether a discussion people are having about an issue I'm facing with some buggy behavior in an API — where that buggy behavior doesn't spit out any distinctive error message to use in the search.

I could see ChatGPT being good at a limited version of this problem, where I could give it e.g. several definitions of a word (= correlated keywords), and it could tell me the word that fits those definitions.

But the full version of the problem — pointing you at (or regurgitating) the one unique conversation that most highly correlates with your keyword cloud — essentially implies an Internet-scale "language model": one where there are unique vertices for every unique URL. Which, if you think about it, is what a traditional search engine's index is: a very dumb, but very large correlational language model, where that "dumbness" is a valuable constraint meaning that queries are able to be run map-reduced across many nodes.
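That "very dumb, but very large correlational language model" view of a search index can be made concrete with a toy inverted index scored by keyword overlap. The documents, query, and scoring here are purely illustrative:

```python
from collections import defaultdict

# Toy corpus: map document IDs to their text.
docs = {
    "d1": "buggy api returns empty response intermittently",
    "d2": "how to bake sourdough bread",
    "d3": "api bug empty body when response is chunked",
}

# Inverted index: each term points at the set of documents containing it.
index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.split():
        index[term].add(doc_id)

def search(query):
    # Score each document by how many query terms it contains:
    # a crude "correlation" between the keyword cloud and the page.
    scores = defaultdict(int)
    for term in query.split():
        for doc_id in index.get(term, ()):
            scores[doc_id] += 1
    return sorted(scores, key=scores.get, reverse=True)

print(search("api empty response bug"))  # d3 (4 hits) ranks above d1 (3 hits)
```

The per-term lookup is what makes this trivially shardable across many nodes, which is the "valuable constraint" the comment alludes to.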

Is there something I'm missing here, that makes Bing+ChatGPT better at these types of queries than Google is?

Or do the advantages ChatGPT brings to the table have nothing to do with making search engines better at the things they're already "the best tool for the job" for, and instead lie in solving problems that could be solved any number of other ways (e.g. querying a search assistant such as Siri/Alexa, or pulling up a relevant encyclopedia article or textbook and just reading it), such that a search engine wouldn't necessarily be the first tool you'd reach for?


I had an example recently: I wanted to learn more about how certificate-based WiFi authentication worked. In the past, I would have used Google to find some resources on it, probably find that the relevant standard is called 802.1x, used Google to find the relevant Wikipedia article, skimmed that, etc. But instead of doing that, I just asked ChatGPT the specific questions I needed the answer to.

When you're asking generally about a pretty basic topic which you just happen to not be very familiar with, ChatGPT is not too dissimilar from having an expert in the field you can chat with and ask questions to. I find it to be a very effective way of querying the huge database of information that is its training dataset.

Surprisingly, the one thing it's really terrible at but which I would've expected it to be okay at, is writing config files. I sometimes ask it how to write, say, a systemd service file which does a particular thing, and it usually shows me something which looks roughly sensible but doesn't actually do what I wanted. Its nature of fancy autocomplete with no understanding really shines through in those cases. Its biggest downfall is that it has no way to recognize when it doesn't have an answer and is making stuff up.
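For comparison, a correct minimal unit file for that kind of task is short. This is an illustrative sketch only; the service name, binary path, and user are made up:

```ini
# /etc/systemd/system/myjob.service  (hypothetical name and path)
[Unit]
Description=Example one-shot job
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
ExecStart=/usr/local/bin/myjob --once
User=myjob

[Install]
WantedBy=multi-user.target
```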


Doesn't that leave you wondering if the answer it gave you for certificate based auth is accurate? How do you verify?


> In the past, I would have used Google to find some resources on it, probably find that the relevant standard is called 802.1x, used Google to find the relevant Wikipedia article, skimmed that, etc.

And apparently you didn't quite notice the particular inefficiency there...

> But instead of doing that, I just asked ChatGPT the specific questions I needed the answer to.

Yeah, sure, if you want to vet hallucinations in stead of just getting the facts.

Just go directly to Wikipedia instead.


But that's the thing. I can't ask Wikipedia targeted questions. I have to read through the whole article or try to skim to the right points, if the article even covers the exact question I have.


A) Wikipedia articles have chapters with headings.

B) Your browser (most probably) has a search function.


I’m not the person you’re replying to, but I had been asking this question for a while but now I’m a convert. Here are some of my most recent uses:

How to create a multi-line string in a bash script. I needed to encode a human-readable JSON string in a curl call and didn’t know how. Google gave me crummy tutorial sites, GPT gave me instructions, and when provided with the target, did all the formatting too.
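For what it's worth, the usual answer to that bash question is a quoted heredoc. A minimal sketch, with a placeholder endpoint in the commented-out curl call:

```shell
#!/usr/bin/env bash
# Build a multi-line JSON body with a quoted heredoc.
# Quoting the delimiter ('EOF') prevents variable expansion inside the body.
body=$(cat <<'EOF'
{
  "title": "hello",
  "tags": ["a", "b"]
}
EOF
)

echo "$body"
# To POST it (the URL below is a placeholder, not a real endpoint):
# curl -X POST -H 'Content-Type: application/json' -d "$body" https://example.com/api
```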

I found it also understands git well. I use git at work, but I almost never use anything beyond push/pull/commit so crazy rebases or merges and stuff I still have to search for instructions to remember them. Now GPT can just explain to me the steps for my particular case. When I googled things, I’d search for keywords and stuff based on my knowledge and piece together the steps myself.

On a counter example, I recently had an intern who botched their config on VsCode and didn’t know what settings to fix. I found it was easier to google search how to reset things than use GPT. Ymmv.


There are several engineering tasks that I've just found explained better by ChatGPT than scouring Google for out of date documentation or abandoned forum posts. For example a while back I needed to encode some AAC audio frames into the ADTS format. In the work I've been doing recently, this isn't a hard task given you have the spec. The problem was I couldn't find the spec on Google - arguably it's not well supported either.

No problem for ChatGPT, however, which was not only able to write the code, but write it in Rust, the target language I was going to use. Now I've just found it easier to ask ChatGPT first, then go to Google.


Over the weekend I wanted a recipe for a dish I wanted to make and the first recipe I found required an important ingredient I didn't have. I thought I'd give ChatGPT a shot and asked it for the same recipe but not including that ingredient to see if it could come up with an alternative formulation.

I'm sure that recipe exists somewhere on the internet but ChatGPT gave something to me in a very succinct format with none of the usual bullshit you deal with when looking through search results. ChatGPT also thankfully did not include the usual recipe backstory.


Almost anything related to programming.

Geographical information about a region.

What is the name of a song I have in my mind.

Virtually anything else. I'm studying architecture and read about associations of feelings that cardinal points transmit in a house (north, south, east, west). Like, east is associated with youth because of sunrise. At first it wasn't obvious why, so I asked ChatGPT and it explained everything brilliantly to me.

It takes me an order of magnitude less time to educate myself on ChatGPT comparing to Google.


I was a skeptic, but it's very useful and not hallucinatory for small and specific coding questions.

For example, today I asked ChatGPT how to write a class method in Ruby and to explain the class << self idiom. Super simple stuff, but it gives accurate answers and it's way more convenient than Google.
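For reference, a minimal sketch of the idiom in question; the class name here is just illustrative:

```ruby
class Config
  # Style 1: define a class method directly with `def self.name`.
  def self.version
    "1.0"
  end

  # Style 2: `class << self` opens the singleton class, so every
  # method defined inside it becomes a class method.
  class << self
    def default
      { verbose: false }
    end
  end
end

# Both are called on the class itself:
#   Config.version  # => "1.0"
#   Config.default  # => {:verbose=>false}
```

The `class << self` form is mostly useful when defining several class methods at once, or when you want `attr_accessor` and friends at the class level.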

For this class of simple queries there's a lot of overhead to do a Google search and then try to filter out bullshit and padded results vs a super simple prompt to ChatGPT.


90-95% of your searches don't require information more recent than September 2021?


Bing actually uses search results as part of the LLM context window.

It solves a lot of the hallucination and current-event problems you have in ChatGPT.


That sounds about right for me. When I'm searching the web for information, only rarely does that information have to be newer than that.

That said, I'm not a fan of using these tools for search. For me, anyway, they don't even come close to doing what I want when I'm searching the web.


In my experience, Bing even hallucinates the sources. I use ChatGPT and Bing side by side instead of the usual Google, then resort to Google when those both fail.

I find ChatGPT 3.5's answers better than Bing's. Also, I've had Bing end the conversation with me on more than one occasion without my saying anything offensive.


No. In few cases where there is time sensitivity, it's not an issue.

I'm using it to help me with a library integration, for example. I noticed it was recommending deprecated methods. So I copy/pasted the latest source code, asked it to update itself, and voilà.

It's super smart and learns literally in a second. Just drop recent information at it and ask what you need.


How are you dealing with it just manufacturing answers it does not readily have? Or is that pretty much the same as SEO spam and easy to filter out?


GPT4 is much better about this. So far, I haven't seen it hallucinate. GPT3 hallucinates terribly, but not that often, and it's fairly predictable about what kinds of questions it's more inclined to hallucinate on.


I'm very curious what will happen when seo spammers come after chatgpt.


I'm sure they'll use ChatGPT to come up with solutions.

Spammers will, too, and since OpenAI has access to what they're asking and can easily flag their questions, they can feed misleading guidance to spammers.


Maybe I'm using a different Bing because I access it through Duck Duck Go, but it doesn't seem better than Google. I often have to add a !g to technical searches because DDG doesn't return the right results. Google has them in the very first links. I'll try to use Bing directly.


Raw ddg or google are awful for technical searches. But at least having ddg as default lets you have !whatever for specific topics.


Did ChatGPT even make Bing a threat?

Bing has only 10m+ downloads on Android. Not even a top 200 app.

Samsung switching to Bing will hurt Samsung more than Google.


“Only has 10 million downloads” wow


>If you had told me 3 months ago that Bing would be a serious threat to Google, I still would have laughed.

Every day I enter a few difficult queries on both Google and Bing to see if Bing gives me something better. I'm still laughing that people think Bing is a serious threat.


Are you comparing Google to the "new A.I. powered Bing, only in Microsoft edge" or to the normal Bing search engine?


The Bing search engine not the chatbot. Is there some v2 Bing search engine? A quick Bing search didn't reveal anything about it.


The Bing chatbot searches the web for you, it doesn't just spit out answers like ChatGPT. There's no clear distinction between the Chatbot and a search engine.

I just asked it about something in today's news and it answered it and provided links to 6 news articles on that topic.


It's unfortunate that it requires logging in to Edge with your Microsoft account because I'm just not going to do that. I don't sign in to browsers.


Maybe my queries are not difficult enough. Do you have an example?


Ok here's one: "why did capote and vidal hate each other"


Answer from the Bing Chat feature (no edits):

Truman Capote and Gore Vidal were two American writers who had a long-standing feud with each other. The feud began when Capote wrote an article for Esquire magazine in which he claimed that Vidal had been thrown out of the White House after making a pass at a member of President Kennedy's family². Vidal took Capote to court for libel, where the two traded insults². After the pair settled out of court, their feud continued – even outliving Capote².

I hope that helps!

Source: Conversation with Bing, 4/17/2023(1) The A-Z of Gore Vidal | Gore Vidal | The Guardian. https://www.theguardian.com/books/2012/aug/01/the-a-z-gore-v... Accessed 4/17/2023. (2) Why Did Gore Vidal and William Buckley Hate Each Other? - Daily Beast. https://www.thedailybeast.com/why-did-gore-vidal-and-william... Accessed 4/17/2023. (3) Feud sensation! Why Vargas Llosa thumped Márquez. https://www.theguardian.com/books/booksblog/2007/mar/13/feud... Accessed 4/17/2023. (4) A life in feuds: how Gore Vidal gripped a nation - The Guardian. https://www.theguardian.com/books/2015/aug/14/gore-vidal-gri... Accessed 4/17/2023. (5) ‘Just a couple of fags’: Truman Capote, Gore Vidal, and celebrity feud. https://www.tandfonline.com/doi/full/10.1080/19392397.2015.1... Accessed 4/17/2023. (6) Truman Capote’s unhappy ending | PBS NewsHour. https://www.pbs.org/newshour/health/truman-capotes-unhappy-e... Accessed 4/17/2023.


Not a very satisfying answer because it doesn't answer why Capote would slander Vidal in the first place. Indeed the feud existed before the Esquire slander took place so it's incorrect/hallucinatory for Bing Chat to say "The feud began when...".

Oh, and references (2) and (3) seem to be hallucinated and are unrelated to the question and response despite being cited inline. The other ref links are valuable but well then it's just a search engine with more noise.


I’ve used Bing with GPT and it’s way worse than ChatGPT, idk what you mean. I’d just as well use Google tbh. It wasn’t impressive.


The thing is: Google is threatened for the first time in its decades of history. It might not be better, yet, but it definitely is a real and existential threat to Google.


Google is threatened by MS and OA, OA is threatened by Stable Diffusion and MiniGPT-4. We are wondering if there will still be developer work in 10 years. Everyone is threatened.


If anything we will have more work because we will have more people capable of churning it out at a predictable rate.


Eh. Remember when Google was threatened by DuckDuckGo and other privacy search engines? Google is always being "threatened" by something or the other.


I just can't get over the ugly visual design and haphazard UX on bing.com.

For instance: That Microsoft Rewards counter? Ugh.


I think ChatGPT itself, with access to search / Wolfram / apps etc., is a more serious competitor than Bing.


Well that and a 10B investment right?



