Hacker News
False information on the internet is hiding the truth about onions (marketplace.org)
339 points by howsilly on March 16, 2017 | 207 comments



I recently created a web page that provides - in my biased opinion - unique, relevant information on a niche topic that I'm interested in. It's been impossible for me to crack the top search results on this topic. Instead, the top 20+ results are all news articles repeating the same thing.

I never had reason to complain about Google's algorithm until I became a content creator. Now, I wonder what other great web pages I'm missing out on.


Instead, the top 20+ results are all news articles repeating the same thing.

I encounter this several times a week, it seems (google, ddg, doesn't seem to matter). Usually when trying to find the actual source of something, like a scientific study, the top hits are all news sites, often with articles which look like copies of each other. I'm assuming they're ranked by number of visits and the general popularity of the site, but it makes it really hard to drill down to the source of something. To the point I usually give up. I am going to try the '-news' thing mentioned by another commenter though, hope that helps. Or are there any other tips?


Perfect example of this is trying to google for mailing list posts.

All sorts of maximum ad-cram crappy sites (including ones that try to pretend to be a forum, with mailing list posts as "users") that maximize SEO gaming techniques dominate the results for 20+ pages.

http://marc.info, which is the best archive available atm - plain, simple, no-bullshit layout, linked everywhere anytime somebody wants to reference a mailing list archive - is nowhere to be found.


IMO it's because Gmane used to be highly ranked, but then it was shut down, so all those links are now gone, and the "void" has been filled with "crappy sites" like you said.

Speaking of which, does anyone know what's going on with Gmane? There haven't been any updates since the ownership change.


How does that compare to http://markmail.org?


I am sure they all have Google's ads on them, too. Only slightly suspicious.


I can assure you adsense has no connection to crawl or rankings.


How much connection does it have to making the removal of adsense spam pages from Google's index a priority? I have a feeling it lowers that priority significantly, and if those pages were hurting Google's bottom line, it would be priority #1 for the company to remove them.


Making money is certainly a motivation for doing it.

A non-existent website won't get a search placement.


I really hate how google will show every possible copy of a newswire article before it shows local sources, etc. I've had slightly better luck with DuckDuckGo on that front, but I always feel like there is a happy medium between Google's ridiculously "context-aware"* algorithm, and DuckDuckGo's "context-agnostic"* algorithm.

What's really annoying is that if you use Google Trends, it will let you know the context of what you are looking for, and you can clarify. I really hate it when I'm trying to look for something on Google, and I know it's just using the wrong interpretation of a word.

Easy example: I do a fair amount of scripting in VBA. I often want to look up documentation for things in VBA. If I use VBA in my search term, I will get a bunch of 15 year old pages on Excel tips and tricks. If I search VBA MSDN, I will get a bunch of pages on VB.net or VB6. I specifically have to use VBA MSDN OFFICE ACCESS in order to get the correct results. DuckDuckGo does not have this problem, but I use date filtering a lot, where DDG falls apart.

Search engines aside, it's ridiculous that Microsoft allowed this situation to happen in the first place.

*I'm not sure if either of those terms is remotely accurate, but they seem to articulate what I'm trying to say.


Every couple of years they seem to wipe out all these kinds of things, so you don't get them, then companies come up with new ways of writing their pages so that they all show up again.

Remember when we had all those spammy answer sites that dominated Google searches for almost any technical question?


There are still a few of those that pop up. If I am looking for an obscure error message, it might be listed in a single stackoverflow post, but then that question and answer will be repeated on 3-5 copy-cat sites, each having copied the question and answer.


I'll never forget www.expertsexchange.com. Because that isn't a job for an amateur.


Um, NSFW?


It was the original domain for experts-exchange.com. Changed for obvious reasons.

https://en.wikipedia.org/wiki/Experts-Exchange


A possible solution for your particular problem is to use the site operator, e.g.

    vba sort array site:msdn.microsoft.com


99% of VBA is used in Excel. The others are just an aberration as far as the indexers are concerned.


I understand, and to be fair, half the reason this frustrates me is because I wish I could be using literally any other language. Regardless, I'm often looking up classes that a) don't exist in Excel, and b) don't exist in the version of VBA used in the linked page. It's extremely confusing to click on a link and not find a single reference to the class you are trying to look up.

Furthermore, VBA <> VB. I wish I could just tell Google to stop showing me answers for VB6. It would be fantastic if I could use VB. My problems would basically solve themselves, but I'm not using VB, I'm using VBA.


Have you tried adding '-VB6' to your search queries? It will filter out those results.

https://bynd.com/news-ideas/google-advanced-search-comprehen...


That'll also filter out e.g. "This is only used in VBA, it's not used in VB6".


Maybe:

> VBA AROUND(20) VB6 | vba -vb6

would work?


Google provides a Chrome extension to block sites from Google Search results:

https://chrome.google.com/webstore/detail/personal-blocklist...

3rd party addon for Firefox:

https://addons.mozilla.org/En-us/firefox/addon/hide-unwanted...

Another one for DDG in Firefox:

https://addons.mozilla.org/en-US/firefox/addon/ddg-hide-unwa...


I never knew about this (and always wanted it). Thank you!

I wish this was a feature for Google when logged in (although this is a better way I guess).


It was a feature introduced in 2011 and discontinued in 2013: https://googlesystem.blogspot.com/2013/03/google-discontinue...


Hmm, of course I missed it when it was around. Sigh.


"The Google Way"™


Google search doesn't have much of an opinion about which sources to prefer other than "what users indicate answers their question". The vast majority of people _are_ looking for news coverage of scientific studies instead of the paper itself (people are generally unaware of how abysmally, unforgivably sloppy and dishonest much of science journalism is).

If you're looking for an actual paper, why not just use Google Scholar?


They do. That's actually what Panda was all about:

"We actually came up with a classifier to say, okay, IRS or Wikipedia or New York Times over this side, and the low quality sites over this side." -- Matt Cutts


How does that contradict what I said in the slightest? Here's the quote when it's not pulled out of context to distort its meaning[1]:

----

> Singhal: And based on that, we basically formed some definition of what could be considered low quality. In addition, we launched the Chrome Site Blocker [allowing users to specify sites they wanted blocked from their search results] earlier , and we didn’t use that data in this change. However, we compared and it was 84 percent overlap [between sites downloaded by the Chrome blocker and downgraded by the update]. So that said that we were in the right direction.

> Wired.com: But how do you implement that algorithmically?

> Cutts: I think you look for signals that recreate that same intuition, that same experience that you have as an engineer and that users have. Whenever we look at the most blocked sites, it did match our intuition and experience, but the key is, you also have your experience of the sorts of sites that are going to be adding value for users versus not adding value for users. And we actually came up with a classifier to say, okay, IRS or Wikipedia or New York Times is over on this side, and the low-quality sites are over on this side. And you can really see mathematical reasons …

----

Cutts was using IRS/Wikipedia/NYT as stand-ins for "sources people trust", not "sources of high quality by some magical objective standard that Google decides contra their users". Ultimately, their classifier is validated against what users blocked as low quality sites, since there's no grounding for quality that doesn't come from users.

Hell, the quote you're pulling doesn't even make sense in the context of the exact example we're talking about! A lot of people would find a big newspaper like the NYT at the pinnacle of trustworthiness for factual content, more so than something like NBER or PLOS (where you can find the actual papers that NYT occasionally reports on, often not very well).

[1] https://www.wired.com/2011/03/the-panda-that-hates-farms/


"Cutts was using IRS/Wikipedia/NYT as stand-ins for "sources people trust", not "sources of high quality by some magical objective standard that Google decides contra their users"."

It ends up being the same thing. Brand value being an overwhelming factor is precisely what the OP and everybody in this thread was complaining about in their SERPs.


I sympathize with this a lot.

At least for scientific articles, it usually helps if you're willing to spend enough time to familiarize yourself with the technical jargon. As a somewhat dated example, "Higgs particle evidence" will turn up lots of crappy news articles, but "CMS Higgs abundances" will immediately return high-quality, relevant results.

Of course, you have to figure out the right lingo in the first place, but usually you can mine arXiv/PubMed/etc for this kind of thing. Wikipædia usually has links to some kind of primary source.


If you're using Google to find scientific sources I would recommend trying Google Scholar.

Not a perfect solution by any means, but it should get rid of news articles.


If you have access to an academic library (community college will often suffice), you can directly access scholarly and periodical indices. These will save hours as compared to a general Web search, even on Google Scholar.

Sci-Hub will turn up most of the results, LibGen and BookZZ many of the print sources.


I also have this problem when looking for new information. For a given topic you'll keep finding articles from 2014, and not, e.g., a new update that's been released recently.


Or the reverse: I'm interested in searching for an item while excluding current results. In some search engines (DDG, Google Mobile) you can only specify "last day / week / month / year", not exclude recent intervals.

Google Web, Books, and Scholar, desktop versions, are better in this regard, as specific intervals can be specified. The problem here is that for the Web, there's a great deal of newer content masquerading as older. The problem appears in print as well, but not quite to the same extent.


If you add this to the search URL, you can choose some preset times to search (week/month/year).

  ?tbs=li:1
http://jwebnet.net/advancedgooglesearch.html

https://stenevang.wordpress.com/2013/02/22/google-advanced-p...
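For scripted use, time-window restrictions can also be appended to a search URL via the same tbs parameter covered in the guides linked above. A minimal sketch, assuming the documented qdr: values (qdr:w = past week, qdr:m = past month, qdr:y = past year); the function name is just for illustration:

```python
# Build a Google search URL restricted to a time window via the tbs parameter.
from urllib.parse import urlencode

def timeboxed_search_url(query: str, window: str = "qdr:m") -> str:
    """Return a search URL limited to the given tbs time window."""
    return "https://www.google.com/search?" + urlencode({"q": query, "tbs": window})

print(timeboxed_search_url("caramelize onions", "qdr:w"))
```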


Thanks for the info, I didn't know about some of those options.


For scientific studies, try Google Scholar: scholar.google.com


We need some sort of "Adblock for Google search results".


Or a paid search engine incentivized to provide quality results rather than maximize ad network profits


Nice idea. But there's no sign that such a market exists. To put it into perspective, Google makes a nickel of ad revenue for every search. Would you pay that much?


I might pay a subscription. I agree that the market is not likely to be big. Also, data network effects and difficulty with price discrimination can be big obstacles.

EDIT: I think I misunderstood your question. I would probably pay that much for a quality ad-free service. But of course it's a niche product.


Related anecdote.

I used to write for an amateur online magazine with product reviews and analysis of a certain industry. It wasn't big, but the content was good - not just in my opinion, I was told that many times by various readers. In ~2005 we routinely were #1 to #5 in searches related to the products we covered. Today I often can't find new articles from that website even when I use the exact title as the query. The website is minimalist, fast and has no ads. The articles are extensive and well-conceived. And yet they are buried several pages deep under piles of single-paragraph stub "reviews" on more popular resources.

I assume the same thing happened/happens to a lot, maybe most small-scale content creators these days. Clickbait rules and content doesn't matter nearly as much as the number of incoming links. You can no longer rely on search engines to get you traffic and need to constantly self-promote on other websites (e.g. social networks).


Google has made it incredibly hard for content to rank on its own if your site isn't seen as "authoritative" (read: has tons of backlinks and is at least 6+ months old). For most competitive terms, you won't rank at all unless you have a Domain Authority (DA) of 30 or more.

It makes content creators miserable. Some of us just want to write good stuff and get traffic through Google, not build backlinks and be an SEO.


My pet theory is that Google got a bit desperate in the battle against web spam, and decided to use a somewhat crude hammer. Perhaps overweighting the various factors that control ranking to favor either content hosted on "big names", or not distant from "big names" in the link graph.


My pet theory is (in no particular order):

* lack of https

* lack of http/2.0

* lack of CDN utilization (aka lack of speed)

* lack of Google-backed advertising

* lack of continuous updates

* lack of inbound links

* content not minified

* content not zipped

All contribute to a decreasing Google Page Rank (TM) over time.


Most of those are orthogonal to the quality and relevance of the content.


Slow sites and stale ones are negative signals of quality.


Slow can certainly be a negative signal, I agree (though if the content is mostly just HTML and CSS without much media then speed only means so much).

But depending on the topic, "stale" may not mean much about quality at all. I've had a FAQ site about Tolkien's books online for at least 15 years now, and as it turns out he hasn't been publishing much lately (being dead and all). So apart from minor updates every few years when his son or other researchers publish new information from his drafts and notes, the site has been largely static for a decade or more. It's very good at its intended purpose, and it's hard to imagine what I could do to provide a steady stream of new content to indicate "freshness" without fundamentally changing the nature of the site.

I imagine that the same is true of a lot of sites out there, on a lot of topics that are far removed from the "breaking news" world.


Of course, you can have all those things and still be slow. Medium.com pages, for example, are pretty bloated for a blog. But perhaps Google is fine with an inaccurate proxy for "fast."


Yes, it used to be content truly was king. That mantra is long dead now, though.


In fact it can be argued that content was never king:

http://firstmonday.org/article/view/833/742 (from 2001)


Do you think they are favoring pages that have adsense on them?


I'm almost sure they're not. In all my years working for the ads section of Google I've never once heard the tiniest hint of anything like that, but I have repeatedly slammed my head into the firewall separating Search from everything else.

Google seems quite determined not to risk being seen as biased in that way.


Good to know.


> and has no ads.


Increasingly, folks are gaming Google's search.

Take this book:

https://www.amazon.com/Never-Split-Difference-Negotiating-De...

If I search for '"Never Split The Difference" review', I want to find, well, people's reviews. Note that the book has several ratings on Amazon - it is a popular book.

Yet I found only perhaps 1 "honest" review in the first 2 pages of Google's results. Everything else I find reads like a promotion for the book.

Looking at Fakespot, there is some evidence of light tampering with Amazon's reviews on the book.

The reason I Googled it? I've read a few chapters and am appalled at the book. It essentially is trying to boost its popularity by trashing what is taught in well respected negotiation programs at top universities. But while repeatedly trashing that education throughout the book, he continually advocates strategies that are also taught by the same programs he is trashing.

Given that he continually bashes the most famous book on the topic (Getting To Yes), I wanted to see if anyone has done an honest comparison between the two - pointing out the author's somewhat dishonest stance. And I can't find it in the early Google hits. I see it only in the 1 or 2 star reviews on Amazon.


It doesn't get better. It is a truly awful book, I think I only finished it because I was in a train-wreck fascination mode with the author's insecurity.


Glad I'm not the only one thinking so.

To be honest, that's similar to my motivation for finishing it. I think this is the first book I decided not to stop reading just for the sole purpose of writing a lengthy rebuttal review.

I actually will not say that his techniques are bad/wrong/poor. But it's really shitty to keep trashing Ivy League MBA programs and then advocate the techniques taught by those programs. To date I've read 3 books from The Harvard Negotiation Project and another from, I think, Duke. For each chapter in this book, I want to highlight his techniques and then specify exactly where the same advice appears in the books he criticizes.

I think it actually would have been a great book if he wrote with more integrity. His book is easy to read and practice. If his techniques work, then he has a legitimate advantage over the other books, which are much more complex. It's a pity he made a potentially great book into a lousy one.


It would be an interesting experiment if there were a search engine that ran on a heuristic of novelty rather than conformity. If you want to search for the history of pineapples, you don't need 100 sites with the same info. The more varied info you get, the broader a picture you can paint.

The downside is in the assumption that every difference is correct, but it's as dangerous as the assumption that only what is repeated is correct. Would be an interesting alternative at least.


  If you want to search for the history of pineapples,
  you don't need 100 sites with the same info.
"Pineapples grow from a plant in the tropics."

"Pineapples are grown in equatorial countries."

---

"Pineapples, when ripe, have a very sweet taste."

"Ripe pineapples have a high concentration of sugar."

---

"The etymology of the word for pineapple is 'excellent fruit' in the Tupi language, although the English translation is an aberration, with a stem of 'pine' due to its resemblance to a pine cone."

"Most European languages name the fruit using the root word 'nanas', from the original native language where it was grown. Many languages have adapted the root word, although a few have a large degree of uncertainty, such as the origin of the reference to 'apple' in the English word."

---

It would need a fairly decent AI to recognise the commonality of information in these examples.


I think people often underestimate the power of much simpler methods. You'd probably get substantial improvements over the baseline just by penalizing sites for having the same words too many times. I forget the exact number, but I think it's something like 50% of the words in your average article appear only once. So if two articles have even 5% similarity among the words they only use once, they are probably quite similar.

AI of course can get you much farther still, but you definitely don't need something that powerful for a convincing upgrade from baseline.
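The rare-word overlap idea above can be sketched in a few lines. A minimal illustration, not a real ranking signal - the regex, threshold, and function names are all illustrative guesses: words used exactly once in an article are distinctive, so a meaningful overlap between two articles' once-used vocabularies suggests near-duplication.

```python
# Sketch of the "once-used word overlap" duplicate heuristic described above.
import re

def hapaxes(text: str) -> set[str]:
    """Return the set of words that appear exactly once in the text."""
    counts: dict[str, int] = {}
    for word in re.findall(r"[a-z']+", text.lower()):
        counts[word] = counts.get(word, 0) + 1
    return {w for w, n in counts.items() if n == 1}

def likely_duplicates(a: str, b: str, threshold: float = 0.05) -> bool:
    """Flag two articles as near-duplicates if enough once-used words overlap."""
    ha, hb = hapaxes(a), hapaxes(b)
    if not ha or not hb:
        return False
    overlap = len(ha & hb) / min(len(ha), len(hb))
    return overlap >= threshold
```

Two rewrites of the same newswire story would share most of their once-used words, while genuinely independent articles rarely would.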


Google is way way past that point. It worked that way 10 years ago and was exploited, so they added many more signals, generally far subtler than that.


It's even more odd considering diversity in page ranking and novelty in recommendation are well studied (although not solved) problems in information retrieval, so it's not like there isn't a solution. And it's not like Google doesn't have the human or computer resources to tackle it.


The blackhat SEO would at least be interesting if not terrifying.


I would assume that to counter the junk/spam/wrong information that would develop (even unintentionally) about a topic you would have to combine this with a "trust" ranking like PageRank.

Basically, only more mainstream sites have the right to present different views which leads you back to square one: Repeating information.


If you market it right, I think you could have a wildly successful "counterculture" search engine.


You could also use current search algorithms but add a penalty for redundancy. Or perhaps group similar results. I've always liked the idea of column-based web results. I suppose some of what Google does has this flavor.


Even better, add settings the user can tweak for which heuristic setup you want. Put it behind enough layers of dashboards that it doesn't terrify the average user.


The heuristics you're looking for are accuracy and relevance.

The problem is that PageRank is no longer delivering on these.


Certainly it's not _as_ dangerous.


"Just write great content! Don't worry about the algorithm. Gaming it is bad, being destroyed by it means you're bad, the results on the top are good!

Please buy ads." - Google


For those that might not know: you can remove keywords and sites from a search. Most search engines can do this, but the syntax might be slightly different.

For Google it's a minus sign. (sometimes it's the word NOT)

E.g. "onions -news" (without the quotes) will search for the word onions and not return any sites that have the word news in them. This can have false positives, so you'd need to play around with the words.

False positives because non news sites may have the word news and news sites might not have that word.


I was wondering how long it would take for someone to bring up the search modifiers. Here's the best/most complete list of them I've found, with examples of how to use them.

https://bynd.com/news-ideas/google-advanced-search-comprehen...


I search genealogy-type stuff.

How about distance between words? (Smith married to Ellen - search for those words close to each other.)

How about sounds-similar? Like burger.

What about saving complicated searches?


That's very true, and useful information, but it's the kind of thing only a sophisticated searcher does. Very occasionally I still use extended search syntax.

On the other hand, back in the 90s it was a practical necessity to use extended search syntax with services like AltaVista to find anything, because search engines were basically terrible. Google, on the other hand, has expended and continues to expend considerable effort to ensure such syntax is unnecessary 99.9%+ of the time.

Consequently very few people know it any more. Combine this with the fact that in order to use the extended search syntax effectively you often need at least some understanding of what you're searching for (e.g., at least the ability to discern bad vs. good information), and you can quickly see why the average Joe is going to end up seeing and quite possibly believing the false information.

(I, for example, not being any kind of foodie, only know that it takes much longer than 10 minutes to caramelise onions specifically because I saw a friend who is a foodie do it a few years ago. He specifically made the point that it takes ages, otherwise you lose the sweet flavour. If that hadn't happened I'd believe it takes 10 minutes because Google said so.)


I tried using this just the other day to find a faintly-remembered old news article about a topic currently in the news. I spent maybe 15 minutes getting nowhere, but noticed that googling for something hypothetically like "tower -trump" would pop up old news articles about the Eiffel Tower on news sites with sidebars linking to current news about President Trump.

The minus operator either no longer works at all, or it just penalizes results without eliminating them.


I had the opposite experience. I created a ridiculous cooking blog to entertain family and friends. It became the #1 result for the phrase, "how to sort lentils." And it was brought to my attention yesterday in another HN thread (https://news.ycombinator.com/item?id=13875755), that Google is now presenting it as the One True Answer for sorting lentils. My site is absurd, so hopefully would-be lentil sorters get a good chuckle and move on. But this Google feature seems broken if it's promoting bad information.


I had a read. Porkulent is fantastic.

I'll have to send links to my Facebook foodie friends and see how long they take to get it.


Thanks! One of these days, I'll hopefully get around to posting more.


I had the opposite experience. I wrote up a page for me and my kids to use, and then ignored it. A year later, I checked my stats, saw it was getting traffic, because somehow it was the #1 google result for a few phrases. So I improved it, monetized it a bit (not much), and as I have mentioned a few times before, we get a few dinners each month paid for by it.

Of course, I've never been mentioned in the news. Maybe the trick to being in a niche is keeping the niche quiet.


Google thinks it's a great idea to direct people to go to the sites that other people have gone to, and not recursive at all.


I'm not sure Google knows a ton about the quality and uniqueness of the actual content, you'll need other sites to start referring to yours.


It's a chicken and egg problem. To get a lot of incoming links you need to be popular. To become popular (via search engines, at least) you need a lot of incoming links. This heavily penalizes smaller websites and rewards incumbent players on the web who have dedicated marketing teams that can afford to spend several hours a day on social network promotions and viral campaigns.


It's a niche topic, so there aren't any other sites that discuss my topic (except news articles).

If there are other sites, I can't find them! :)


While writing a French blog about Python, it took us 3 years and 500 articles to start showing up in the top results. And we cover metaclasses, packaging, generators, etc. All the stuff you can't find elsewhere.

Being new in town is suspicious to Google. Rising honestly to the top takes a lot of time and effort, unless you have incredible promotion abilities and/or deep SEO knowledge.


I took a look at your blog from another post of yours (fascinating stuff, based on what I can tell from Google Translate, at least). Any chance you were secretly penalized because of some of the more risqué content on your site?


I could be wrong, but it seems to be the contrary. 80% of our traffic is generated by the alternative content. Maybe it lowers the ranking of the Python content. Hard to say.

But now we are actually quite famous in the French Python community. For a blog on such a niche topic in a small country, we get around 6000 v/d. The hard part was really starting: we (my co-author and I) have no promotion skill whatsoever.

We actually started to gain traction after 3 major French bloggers linked to us completely out of the blue (sebsauvage, korben and lehollandaisvolant). It seems linking from trusted sources is still the most important part of the game for Google.

Eventually I want to create a new site, in English this time, to translate all the stuff and make it SFW so it can benefit more people. Unfortunately this won't be enough to test your theory, since as soon as you create content in English, you multiply your traffic by 10000.

Anyway, first I'll need to save 5/6 months of budget before doing so, cause as a freelancer I can't really take holidays. Maybe I should do a crowdfunding.


Off topic... but thank you for your great blog. :)


:) It's always nice to support, even after all these years. It's a hard job, and we don't get paid to do it so we need to hear comments like that.


This might help: https://millionshort.com/

It's helped me find unique and interesting sites in the past.


I did an experiment once. I searched for some random stuff on Google and DuckDuckGo. Of the 10 topics, 7 of them showed relevant results on DuckDuckGo, while Google showed only news articles.


I run a niche web app that has about 3000 unique visitors a week. People generally agree it's the best resource for its purpose, and there was an article in one of the bigger local newspapers about it too. There are no ads or other shadiness on the site. It used to rank between 3 and 5 for the couple of obvious search terms, but since a site overhaul the ranking has tanked. I contacted Google via GWT but got no response (though in the past they've been helpful a few times).


The trick then is just to find the keyword or phrase they all share, "-" it out, and you're back in business. Searching has always taken some work on the searcher's part, and Google is selling a product, not offering a charitable service.

It might be time to consider the need for a way to search all of this content that isn't bound to people trying to monetize the experience. DDG seems like an option in that regard.


This is why they will eventually be disrupted unless they change their search algorithm drastically.


Highly frustrating that sites like Wikihow which are scant on useful info rank so high.


Which search engines (if any) do give your site a reasonable rank?

I miss the old unhomogenized web.


What's the site? I'd be interested to take a look to see how the SEO is.


You should put Trump in it and it will go to the top.


This is about way more than onions. We are always in the danger of decline and regression to the dark ages. The world of rumors, beliefs, marketing and magical thinking is always there, biding its time.


It seems like Trump is a sign that what you fear is already happening. Look at the media his supporters consume. Listen to AM talk radio if you want to know how much the world is still controlled by "rumors, beliefs, marketing and magical thinking."


This is one of my greatest fears.


As one of my favorite examples of the very phenomenon you fear, just wait for the usual crowd of historical revisionists who will be along shortly to explain that the Dark Ages were really a time of widespread enlightened progress.


There is a well-worded, keyword-dense quotation at the beginning of the article in question [1] that the rest of the article proves wrong, as the latter [2] article analyses. But how is a poor machine supposed to understand this?

Ah, those semantic web utopian visions, with humans producing content gently semantically marked for machines to "see" all those negatively ironic peculiar ambiguities and relations.

Now, that problematic "quotation" in [1] is in fact:

    <div class="text-2 text parbase section">
        <p style="margin-left: 40px;">"[…]"</p>
    </div>
i.e. not even marked as a quotation (e.g. `<q cite="[…]">[…]</q>`). Sigh.

[1] http://www.slate.com/articles/life/scocca/2012/05/how_to_coo...

[2] http://gizmodo.com/googles-algorithm-is-lying-to-you-about-o...


Why the hell should we help the comprehension of a giant corporation's machine that will use our content to sell ads on its results? Can we ask ourselves this question too, before thinking about how WE should bend to someone else's machine?


Because it helps the comprehension of all the machines, belonging to giant corporations or otherwise. And if they understand the site, they can use that information to help us, the users, by presenting relevant search results about caramelising onions and the semantic web and so forth. Yes, it will help Google sell advertising, but the benefits to the rest of us far outweigh that minor inconvenience to some people's idealism and politics.


This is why the semantic web lets you clearly mark up wrong information being cited, which, in the words of Wikipedia, "is primarily used to mark text that is mistaken".

https://www.w3schools.com/tags/tag_strike.asp

Haha, just kidding. Just reminding you that there's nothing semantic about HTML :) The strikethrough is kind of deprecated, but there's no good semantic replacement, because the semantic web is actually a complete joke.

Source on my Wikipedia quote: https://en.wikipedia.org/wiki/Strikethrough


Sounds like a good place for a microformat [1] that lets search engines know that they shouldn't take a sentence as true e.g. <div class="actually-wrong">takes 5 minutes</div>

[1] http://microformats.org/wiki/Main_Page


Ooh, awesome - then we can persuade all those sites we keep hearing about to mark up their fake news using these microformats, and the world's information problems will be solved!


<s> and <del> are still in the standard


The problem, IMO, is introduced when Google tries to provide definitive answers for questions that don't have definitive answers. To be sure, there are lots of questions that have definitive answers that would correctly satisfy 99% of searchers (what is the speed of sound, what is the height of the Empire State Building, etc.).

Then there are questions that require explanation and don't have a binary correct/incorrect answer. Even with the question serving as the example for this article: "How long does it take to caramelize onions?" Well sure, the author cited in this article adamantly claims it takes much more than ten minutes. But does it? Maybe there are equally or more qualified people who say that under normal circumstances and a certain heat level it doesn't take more than ten minutes to caramelize an onion. Who is right? I don't think Google can parse the available information to provide a "correct" response, and shouldn't try to.

So to me a better solution would be to categorize queries based on whether they can be answered definitively and, if they cannot, don't attempt to.


The speed of sound is probably more in the latter category as it is dependent on altitude and temperature, and even then only after assuming that someone really meant the speed of sound in air. If I search for "speed of sound" Google's info box will give me an answer labeled as "speed of sound at sea level" which should probably instead say "speed of sound in the atmosphere at sea level at 20 degrees celsius". It's an ill-defined question when asked so vaguely, and if you're going to pretend to give a definite answer by making a bunch of assumptions, they should be clearly presented.
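To illustrate how much those assumptions matter, here's a quick sketch (mine, not from the comment) using the standard dry-air approximation v ≈ 331.3 · sqrt(1 + T/273.15) m/s:

```python
import math

def speed_of_sound_mps(temp_c: float) -> float:
    """Approximate speed of sound in dry air at temperature temp_c
    (degrees Celsius), via the common ideal-gas approximation."""
    return 331.3 * math.sqrt(1 + temp_c / 273.15)

# The usual "at sea level" figure implicitly assumes about 20 C:
print(round(speed_of_sound_mps(20)))   # ~343 m/s
print(round(speed_of_sound_mps(0)))    # ~331 m/s at freezing
print(round(speed_of_sound_mps(-55)))  # ~296 m/s near cruise altitude
```

An answer box that quietly picks one of these and labels it "the speed of sound" is making all of those assumptions on the searcher's behalf.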


I disagree. While there are some searchers who are curious about the subtleties of the differences between the speed of sound through various mediums and at different altitude levels, most people searching for that basic question are satisfied to know that the speed of sound through air is about 770 mph, which is why I said that it would satisfy about 99% of searchers. People who are interested in digging in further can comb through the rest of the results, or further specify that they want to know the speed of sound through water or at 35,000 feet above sea level. Just as some people who search for "What is the height of the Empire State Building?" may want to know the height of the highest occupied floor, or of just the structure without the mast, most just want to know the distance in feet from the base to the top.

By trying to satisfy every searcher for these types of queries, you will be making searching more cumbersome for the vast majority of people, which is why I think that providing a single, easy-to-view answer that is likely to satisfy the person who entered the search is a calculated risk that's worth taking.

It's for the answers that wouldn't satisfy most searchers that I don't think a definitive answer should be provided, as it is likely to be either wrong, or just one of many possible answers that the searcher is looking for.

It reminds me of a picture I once saw of MS Word with every single toolbar enabled. The menus occupy half the screen. Even though there is some power user somewhere who really needs that menu option, the reality is that virtually nobody else does. So while that user has to go through the pain of figuring out how to enable that toolbar, his pain makes the product easier to use for 99.99999% of users.


If the difference in the speed of sound due to altitude and temperature matters to you, you should probably research a bit further than Google's info box. I just see it as about as useful as answering trivia-quiz questions.


I sort of agree. On the one hand, an approximate answer is fine for this use-case (especially when you can look down 2 inches and see the full context in the Wikipedia result without clicking through), but perhaps it would be worthwhile on answers like that to say "Approximately" at the front just to be clear.


It's even worse because the type of person who is Googling "What's the speed of sound" very likely doesn't understand that there are all these variables that affect it in the first place.


Because bad information that people passionately believe has a way of rising to the top.

Given that information is a sort of currency (which is exchanged for your attention while an advertisement is displayed in close proximity), perhaps we should have expected that something like Gresham's Law - that 'bad money drives out good' when competing currencies exist - would hold true on the web too.

Popularity is only a good proxy for quality when the population is rational. Given the abundant evidence of even smart people often suffering from large cognitive biases, weak educational standards, and an economic system where consumption and waste are more profitable to supply than more sustainable solutions, it's not surprising that search quality could decline even as raw performance improves.

Google and the existing web will be disrupted by the advent of contextual search, which retrieves the graph of a specific information frame.

https://en.wikipedia.org/wiki/Gresham's_law


Heh, I just typed "how long does it take to caramelize onions?" into Google and got this answer:

about 5 minutes

With a link to his article. So, the news that this was fixed is premature.

p.s. he is absolutely right that caramelizing onions takes 35-45 minutes, most people have no idea what a caramelized onion really looks like.


> Ryssdal: Now, we should say, that in the days, what, week or so since this post was published, Google has in fact changed the search results, right? So, you've changed two things: You've changed the New York Times and you've changed Google.

This is still a problem with this specific example today.[0]

[0] - http://imgur.com/a/2R5k5


Interestingly, it gives the wrong times for the "caramelise" spelling, but the correct time for "caramelize."


It seems it's more nuanced. Try the following searches:

  how long does it take to caramelize onions
  how long does it take to caramelise onions (note the typo)
  how long to caramelize an onion (not a typo but still wrong)
  how long to caramelize onions


Replacing the 'z' with 's' is not a typo, though.


Several years ago, a coworker pointed out that an article spelled "organization" with an "s" instead of a "z" and wondered why such an obvious mistake would pass an editor.

I pointed out that that's just the British spelling. Everyone in my group was amazed that I would know such a thing. It was an eye-opening moment... for me.


There's a very old and very well established anecdote about this. It's the difference between the words "color" and "colour".

- You show an American the word "colour" and they go: aha, a typo!

- You show a Brit the word "color" and they go aha, an American!


... an American. Or maybe a Roman citizen (Latin: color). Or a Pole: (Polish: kolor). Italian? (colore) Spanish? (color).

Most languages with a cognate of the Latin word color do not have this gratuitous Old French "ou" in their orthography. Good riddance.


Yes, Australian here. I run into this problem with programming languages all the time, because of things like "colour" vs "color" or "centre" vs "center". "Maximise", "minimise", etc. are good examples as well.

My coworkers and I call it the 'American API' problem.


Places I've worked - in Australia and NZ - have adopted American spelling throughout the code in an effort to standardise with tools, APIs, etc.

That is, "Use American spelling wherever possible in the code" is part of the coding standard.


It isn't exactly "the" British spelling -- it's actually "a" British spelling. That is, the "z" form is not only correct British usage, it's actually quite a bit older [0].

[0] http://blog.oxforddictionaries.com/2011/03/ize-or-ise/


My mom has a British/Indian accent but I've never even been there. There are a lot of less common words that I didn't even realize I said differently until I was too old to feel like intentionally changing them: "rather" with an 'ah', "calcium" with a 'sh', occasionally unconsciously spelling things "theatre" or "centre" or "harbour" (I guess a good chunk of the books I read growing up were from British publishers? I don't really know). I don't know why, but I always manage to be surprised by how shocked people's reactions are to these pronunciations and spellings, especially because I've spent my whole life in California surrounded by immigrants (I'd say the percentage of friends I've had with parents born in America is in the very low double digits).


Did Google update that search result already? It shows the proper time for me.

http://i.imgur.com/XOT65j4.png


Welcome to the disinformation superhighway.


For great onion techniques see "Ruhlman's Twenty: 20 Techniques, 100 Recipes, A Cook's Manifesto" by Michael Ruhlman. Before this book I didn't know there was a difference between sweating them and caramelizing them.


A decade ago I took a cooking class that an amazing chef offered to a bunch of us regulars.

One of the things he showed us was how to properly chop and sweat onions to sweeten them, including how to use smell to tell when they were done. Somebody said "Oh, we're going to caramelize them." and the chef replied, quite matter-of-factly, "Oh, no, we'll be here all night if we caramelize them."

And that's when I learned there was a difference.


He was not kidding. Full and careful caramelization for french onion soup, say, takes 3 hours or more (and you have to cut them with the grain or it won't work nearly as well).

If you ever wondered why it's really hard to find great french onion soup (assuming you've had it once!) - that's why.


If you go down the rabbit hole and follow the links within his Slate article, it takes you to a great recipe for onion soup that I have also used, https://www.smith.edu/diningservices/recipes/onion-soup.php, which covers how to do the onions right.


Any version of this that starts off with a food processor is showing you shortcuts, not how to do the onions right.

Short version of "right": onions sliced by hand in small wedges with the grain (i.e. lengthwise) - otherwise you cut through too many cell boundaries and let the water out too fast. Then it's a lower heat (no scorching!) for hours, not minutes, stirring every 15 min or so.

So it's obvious why people want shortcuts, but all of them give you inferior results.


> otherwise you cut through too many cell boundaries and let the water out too fast

That really doesn't make any sense. If you slice across the grain, you might slice through more vascular tissue than if you sliced lengthwise, but onions aren't highly vascularized. Furthermore, the cells are linked with plasmodesmata in all directions. No matter how you slice them, water is going to leak out at about the same rate. I would be a little suspicious that slicing with the grain would do anything. It sounds like one of those cooking rules of thumb that don't have any basis in fact. ...on the other hand, maybe Cooks Illustrated or someone else has done a controlled experiment and there really is a difference.


As I understand it, it's not the vascularity, but because the cells are anisotropic and arranged a bit radially.

Best thing I could find in a short search is Serious Eats (near the bottom): http://www.seriouseats.com/2016/11/knife-skills-how-to-slice...

I'm pretty sure Cooks Illustrated has done this experiment, but finding their stuff online is difficult.


I suppose you could count using a mandoline as "by hand", but either way, I think it's easier than a knife by far, especially if you're making more than a small amount of soup.


A mandoline doesn't help unfortunately. If you want the best result, you have to cut radially along the grain (like orange slices). It's a technique that requires a knife, I'm afraid - though it's certainly harder.

It's sort of analogous to the difference between properly mincing garlic and putting it through a press. For some things it won't matter much, for some you prefer a different technique. But they are not interchangeable, they are different things.


i'm new here and have no idea how long comments are allowed to be. there's a lot to say, but i'll try to be succinct. (and probably fail.)

1) the quickest way to make bad AI is to train it on bad data. and the internet is the king of bad data. if the New York Times is putting out bad recipes and if other humans are regularly buying into fake news articles, blaming the google algorithm seems like shooting the messenger for echoing the low standards of society.

2) many (though not all) of these questions are injecting presuppositions. when you ask google "how long does it take to caramelize onions," you're implying that such a thing can be quantified. you'll get answers which try to tell you how long it takes because semantically that's what fits. but in reality, a lot of things are done when they look done. or done when they smell done. or done when they're thick enough to form peaks.

so i really think this is a human problem. as much as we want to error-proof all our software, in the world of general AI, we will need people to interact responsibly.


1) It really depends on your AI and its goal. If the goal of the AI was to summarize human opinions, then the current approach is fine. What isn't working is that it's doing one thing (here is the answer of some people on the internet) and presenting it as something else (here is the answer). Plus, there are ways to put some bounds of confidence on an algorithmic prediction, and I don't see why they are not displayed along with the answer that was generated.

2) It relates to point 1, but the real solution would be that Google admits "I don't know/it depends" when the confidence is under some threshold.


> i'm new here and have no idea how long comments are allowed to be. there's a lot to say, but i'll try to be succinct. (and probably fail.)

FWIW you're fine, this wasn't a particularly long comment at all.


> Because bad information that people passionately believe has a way of rising to the top.

Great single line summation of the whole problem.


So, it isn't very different from the Big Lie idea[0]: if you repeat a lie long enough it becomes truth.

It works for humans, it works for algorithms.

[0] https://en.wikipedia.org/wiki/Big_lie


The onions thing really bothers me too.

I mean, it takes 45+ minutes to caramelize onions. You can fry them for 10-20 minutes or so and get a pretty tasty result, but they aren't caramelized onions.

I have no idea why people say otherwise!

I watch a lot of cooking shows and the contestants (supposed to be professional chefs) will have 30 minutes to make a dish and they'll say "I will start out by caramelizing the onions." No you won't! You'd need more than your 30 minute cooking time to caramelize onions!

Onions can be caramelized in a slow cooker, as mentioned. They aren't as good as doing it on the stove properly but are good enough for the labor you save.

/rant

I do proper caramelized onions only about once a year because they are so tedious.


A trick I learned from Serious Eats is that you can use a pressure cooker to caramelize onions in about half an hour. The normal stage of onion cooking, where liquid is still coming out of the onion, limits the cooking temperature to about the boiling point of water; by using a pressure cooker, the boiling point of water is raised to the point where browning happens even before the water is gone. See: http://www.seriouseats.com/2016/01/the-food-lab-pressure-coo...


I've done this and the resulting product tastes great in dips/soups, but for me (and it seems many commenters) the onions turn to complete mush (almost like a jam texture). Were you able to get your onions to maintain some texture?


Wow, I've been meaning to try a pressure cooker lately (in theory it sounds good), but this post might have put me over the edge.

Pressure cooker, here I come.


It's because most people don't take 45+ minutes, so they don't know the difference. In 10-20 minutes you just have soft brown onions... but once you truly caramelize them, you know it's totally different.


But hasn't anyone eaten caramelized onions in a restaurant before? Or even seen a picture?

I would assume professional chefs (who are the contestants on these cooking shows I watch) would know better.

I swear, it feels like an onion conspiracy.


In my experience, even most restaurants don't go really hard on caramelization. It's usually a quarter to half of what they could do, probably still because it takes so long.


I wouldn't have a problem with restaurants (or anyone) serving sweated onions, as they are actually yummy... Just not selling them as caramelized.

Call them what they are...

(To be honest, for many applications, I would slightly prefer sweated onions over caramelized onions.... I'd just like to know what I'm getting...)

... Unless there was an onion conspiracy...


Pro tip, you can throw a tiny bit of baking soda into the onions to make them caramelize faster.


That helps improve the rate of the Maillard reaction, but not the pyrolysis of sugars.


Rant maybe, but it's true. I've done them too in batches, usually enough for two large pans at once; anything less and it's a lot of waiting around doing nothing. And if you're inexperienced, pushing it on the heat will just burn them.


So am I the only person who thought that this would cover Tor vulnerabilities?


Not at all. I must admit I found the actual topic pleasantly humorous after expecting to read about Tor exploits/bugs.


Yes, me too.

But seriously, that is a worrisome bug in all single-answer search solutions. Search automation doesn't seem to handle sarcasm and irony very well. Maybe website coding needs an "ignore this block when parsing page meaning" property.


Onions have layers and so does the truth apparently. Determining actual truth from a lot of vaguely related information is quite difficult. I've always found that there are things you can enter a search string for but you can't describe it sufficiently to find the actual result, just things that appear related somehow.


And I'd like a step-by-step recipe on how to caramelize onions, which ingredients to use, etc... Like "the final recipe".


There are several others posted here, but I feel that all of them do something which will fundamentally disrupt the flavor (e.g. a crockpot will make them soggy and not quite the deep mahogany brown that you want; adding water will do the same). Being a lover of French onion soup, here's a method that's worked well for me over the past four years or so of experimenting:

1) Choose a pan that has high heat retention. A cast iron skillet would work well for this purpose, although you may run into some concerns when it comes time to deglaze. Generally one deglazes with something like wine, but people recommend keeping highly acidic stuff away from cast iron out of fear of damaging its seasoning.

2) Throw the onions into the pot/pan. Add a pinch of salt and a pinch of sugar to the onions. This will help them caramelize more quickly.

3) Turn the heat up to high and cover them for about two minutes. When you open the lid, you should notice a large amount of steam escaping. This is called "sweating" the onions. Onions need to be dry in order to brown properly--the sweating method helps them dry out more quickly.

4) Turning the heat down to medium-high, stir occasionally. You'll stir more frequently towards the end, pushing up from once every two minutes to once a minute, to once every thirty seconds, to constantly near the end. They're done when they're a deep mahogany brown.

That's it, really. As the original author said, there aren't any shortcuts to proper caramelized onions. The salt+sugar will help ever so slightly, the sweating will help ever so slightly. But you really shouldn't expect to caramelize them in less than a half hour.

EDIT: Down-votes for posting a recipe on HN? That's gotta be a new one on me.


Caramelizing onions in a cast iron pan!? Please, no one ever do that unless you want to destroy your pan AND your onions.


You might have missed the part where I said it would be a good option, if not for the deglazing problem. Cast Iron is what usually comes to mind when we think heat retention, however, which is why I mentioned it.


The obvious choice to keep you both happy is an enameled cast-iron Dutch oven.


:')


How does that hurt the pan?


Doesn't. You can do almost nothing to hurt a cast-iron pan itself.

It can destroy your seasoning, though, and require you to "fix" it by thoroughly cleaning and re-seasoning it. It's not that difficult, but generally you want to retain your previous seasoning as it's much easier and it gets better with time.

People also think soap and water "damages" cast iron, and that's not true either.

http://www.seriouseats.com/2014/11/the-truth-about-cast-iron...

Anyway, prolonged rusting or breakage are the only ways to "destroy" cast iron.


Deglazing is pretty hard on the seasoning of the pan (basically polymerized fats from cooking), which means your stuff will probably stick next time you cook with it.

I actually use exactly the method above for caramelizing onions, and I use either an enamelled cast iron pan or a tri-ply stainless/aluminum pan. Both work well, but I think the results seem a little better with the cast iron. That may just be because I can go straight into making onion soup in the same pot, so every bit of the sticky onion stuff makes it into the soup.


The cookbook Zahav doesn't mince words:

- 1/4 cup olive oil

- 6 onions, thinly sliced

- Warm the oil over low heat in a large skillet or pot. Add a pinch of salt.

- Cook over low heat, stirring occasionally, until onions are almost spreadably soft. About 3 hours.

No shortcuts, just take the time. Caramelized onions freeze and keep very well, so just stick them in some small tupperware in the freezer and defrost and use as needed. Much easier than trying to rush it 'on-demand' for a particular recipe.

https://books.google.com/books?id=nGcGCgAAQBAJ&pg=PA287&lpg=...


It's really easy; it just takes a long time.

1. Add 1 tbsp each olive oil and butter to a pan and melt butter over medium heat. No special pans needed. A frying pan, a stock pot, a wok...I've had success in all of em.

2. Add medium-chopped onion to heated oil/butter. Gently stir for two minutes. Add a pinch of sugar and pinch of salt (aka 1 tsp each).

3. Turn heat down to low heat. Stir every few minutes for the first half hour, and then almost constantly for the next twenty to thirty minutes.

That's it.


Scocca provides a good method at the end of the article:

Scocca: I throw them in the Crock-Pot overnight.

Ryssdal: Oh, that's so smart.

Scocca: They become a little juicy, and you might need to dry them out in the pan a bit before you actually use them, but it's a lot faster.


I did this the first time (based on a post at Metafilter derived from the original article mentioned in this piece - https://www.metafilter.com/115702/Oniongate) and that was a good approach, because it took me out of the process, which meant I didn't touch the onions or fiddle with the temperature or anything, and 8 hours later I finally got the difference.

Now I cook them in a pan while doing something else. Takes an hour or so but it beats running the crockpot all day.


that's cool. will have to try that some time.

it reminds me of the low-labor recipe for making a roux. you can do it in an oven: http://www.saveur.com/article/kitchen/how-to-make-roux-in-an...




The baking soda is the key to enhancing the Maillard reaction and decreasing the time it takes to caramelise. It causes the onion to lose a bit of consistency (go softer) if you use too much, but once you get the right ratio (or if it doesn't matter, for example in a soup, chili, or dip) you can speed things up a bit and keep them intact.


It works, but does funny things to the flavor.


It's not noticeable if you use the right ratio.


Seriouseats did another article later about how to use a pressure cooker: http://www.seriouseats.com/2016/01/the-food-lab-pressure-coo...

It's not as fast, but it's still only half an hour and the flavor is deeper and richer.


My anecdotal experience is that internet recipes are always wrong about cook time. Usually they skew short. I will take this as personal vindication.


i usually find that internet recipes on aggregators are only good for theme and approach. like, what spices to use for a dish, and whether to braise or saute. the recipe tends to suck in proportion to how specific you read it to be.


DISCLAIMER: I'm an amateur when it comes to the philosophy and science of search engines.

I think a core problem is that what we determine to be "fact" [1] changes over time (as it rightly should). As such, no deterministic function can adequately represent fact, without having a bias.

Something that is "fact" today can be invalidated tomorrow through many methods, scientific and otherwise. Of course, that does not change the importance of "fact" over time (i.e. at one time it was "fact" but it may no longer be).

I don't know how Google search algorithms inject randomness into search, but I believe that without enough randomness (and subsequent feedback) and testing, any such algorithm would be deficient. It's almost like you have to A/B test the "facts", in a way. And take into account time. And give each and every fact due consideration.

[1] https://en.wikipedia.org/wiki/Fact


It's interesting that PageRank was originally envisioned as a way to rank academic papers. Now it's failing on caramelizing onions, which doesn't seem like a subject anybody profits by spreading misinformation about. What does this say about how we use citation metrics in academia?


And he ends the article with a falsehood! (Unintentionally I presume)

> I throw them in the Crock-Pot overnight. They become a little juicy...but it's a lot faster

That is NOT true. Prep time might be shorter, but cooking "overnight" is not any faster than 45 minutes on the stovetop.


I think his point is that "drying them out in a pan" before using them is a lot faster than caramelising them from scratch. It's true--it takes maybe 2 minutes to get the crockpot-onions to the same consistency as regular caramelised ones.

Also, while the prep is the same, crockpot onions require literally no work during the cooking. On the stovetop, you do have to watch and stir them, which requires a little bit of attention. Since the crockpot version takes so long, the prep is also pretty far away from the rest of the cooking, which makes it "faster" if you have other stuff to do an hour before dinner.

Edit: If you haven't tried this, give it a go--it works well and you can put them on everything for a few days! Yum


What I've been noting in Web searches (Google and other providers) is that they're becoming less useful.

I'm not sure if this is a result of poorer relevance, gaming, my own changing interests, or increased access to more authoritiative sources. I'm suspecting all of the above.

As two Stanford University researchers noted in the 1990s, online epistemic services are highly prone to skewing information to favour the interests of advertisers and business partners, rather than users. They proposed an approach which would avoid this problem; results suggest they've fallen short.

There's a tremendous industry that's emerged to game and skew search engine results. It's proved quite effective in its ends.

I've been shifting my explorations from programming and technical subjects to economics, history, and philosophy. Where the former are well-represented online and come from recent sources, the better sources for the latter are often at best nearline, and authoritative sources range from decades to centuries old, occasionally millennia. (Fortunately, the Data Explosion works to advantage, as the search space for older works is far smaller.)

I've also been rediscovering libraries and library cataloguing systems, including periodicals and scholarly indices (though far too few of these are available online). Google's Scholar is useful, as is Books, though both remain highly deficient. Samizdat and underground sources such as Sci-Hub, LibGen, and BookZZ provide ready access to material. I can turn up a reference, and in a few minutes, be reading it, from the original, unmediated by others' interpretations or editorialising. That is a refreshing change from online sources which skew to magazine articles, blogs, social media posts, and tweets.

Given all this, I'm finding DDG to be a more useful search engine in that from it, using bang syntax, I can hit specific sources (often from much better search dialogs than their own native facilities). From Firefox, bang searches work within the navbar. From Chrome/Android, "Home" => search. This is faster than going to or typing in individual domains.

The ability to permanently blacklist domains from search results, shifting more of the search filtering to client side, would be useful.


I searched for "How long does it take to caramelize onions?", and Google returned an infobox with "35 to 40 minutes" highlighted (from this article) along with three other results in the top ten explaining how most articles about caramelising onions are wrong, from 2012 and 2014. So this particular problem seems not to exist? Although, it could have been fixed in the last week, I admit...


I think the confusion might come from the fact that many recipes refer to the process of onions starting to brown as "beginning to caramelize", simply meaning that they start to turn brown and sweeten. This does in fact happen in five to ten minutes. "Caramelized onions" might be an entirely different thing from "onions which have begun to caramelize".


After thinking about this, I think you're probably right, along with a gradual decline in standards for caramelization.

I've made caramelized onions multiple times, watching over them for long periods of time. At some point several years ago I became confused by the issue referred to in the article, and thought that I was mistaken about what caramelizing onions was. I've come to realize that I was not.

However, caramelizing onions is a gradual process, and so I could see how what one calls "caramelized onions" could be seen as a matter of degree. With candy making, as a parallel, there are degrees of hardness; with roux there's different degrees of browning, etc. etc. etc.

You're probably right, because at some point early on something very very roughly resembling "caramelized onions" is achieved, which is basically the same as "fried onions." People assumed because the onions resembled what they were familiar with in fried onions, they must be the same, even though they're not.


If you try to search Google for technical or product information, you can often end up with bad answers because it typically prioritizes older, longer-lived results. So instead of getting a result that's relevant to the here and now, you get one that was true for whatever version of a thing existed 5 or 10 years ago.

Even worse, google still hasn't banned pinterest from results.


> bad information that people passionately believe has a way of rising to the top

That's the money quote for me. Certainly relevant to contemporary politics... but I bet most of us can also think of cases where this happens in the workplace, trendy frameworks, etc.


I know this isn't really the point of the story, but if you add a pinch of baking soda to onions you can turn them into something that's pretty similar to caramelised in a few minutes. Add a little sugar to help them along.


Sadly, if you ask how to caramelise onions, Google still says ten minutes. Ask it how to caramelize onions, and it'll correctly say 45. I guess those of us outside America will have to figure it out ourselves.


One could even go as far as saying that "Google is fake news."

Now, I wouldn't say that myself, but Google brought this upon themselves. I've criticized them in the past about their "top answer" solution, as well about the AMP-powered carousel with (corporate-owned) "media partners".

They're setting up a system where there's a high chance that the top answer is indeed false, but Google acts as if it isn't - and that's the real issue here. Google envisions a future where its AI assistant will soon only give you that "top answer" as the "right answer".

I think it's wrong of Google to do that, at least until our AIs become smarter than humans, and can actually discern the truth way better than humans can from "reading" thousands of related articles/papers.


The page seems slow/dead, so here's a mirror: http://archive.is/0Jc7G


Is nobody else disturbed by the fact that nowhere is anyone specifying what kind of onions to use? Vidalia, Texas, Walla Walla? Bermuda? Cévennes??


I just assume 'onion' means the bog-standard (and comparatively cheap) brown onion unless a more specific term is used.


Interesting, I've never even heard of a brown onion. I thought the cheap standard onions were white, and I always buy yellow.

I'm even a gardener! (But I just buy whatever sets they have locally)


Brown onions are also known as yellow onions; depends on your locale. I guess the locale of the recipe's author matters as well (although it usually doesn't matter too much if you use a white or brown onion).


This article would get even more traffic if it were titled: "7 falsehoods programmers believe about onions"


The article and the comments are great, but where is that recipe? (seriously)


lol I thought this article was about .onion


The grand irony is that this guy is absolutely wrong about caramelizing onions. It simply does not have to take 45 minutes to caramelize onions. Sure, you can do it in that amount of time, but he's wrong. His "crusade" is based on incorrect information.

One of the best tricks I've learned for quickly caramelizing onions is to not use any oil in the pan; the oil makes them more likely to burn. No oil; slightly over medium heat; a thick pan; barely any stirring; slices evenly cut; that's all you need.


The author is absolutely right. You're browning your onions, not caramelizing them.


> this guy is absolutely wrong

> but he's not wrong

I think you either contradicted yourself there or it isn't clear from the way you've written it that you're talking about different things.


gah, I typed it in a hurry; that "not" was NOT supposed to be there.



