Nvidia Announces A100 80GB GPU for AI (nvidia.com)
183 points by alphadevx on Nov 18, 2020 | 154 comments



While these are amazing performance numbers, I seriously wonder if all this expensive AI magic will be cost effective in the end.

Their lead example is recommendation systems, but I can't say that I have received many good suggestions recently.

Spotify and Deezer both suggest the chart hits, regardless of how often I dislike that kind of music.

Amazon keeps recommending me tampons (I'm a guy) ever since I shared a coworking office with a female colleague in 2015.

For all the data that they collect and all the AI that they pay for, these companies get very little revenue to show for it.


The meme that never dies.

There are a billion users, each of whom spends $x on the platform. The recommendation system does not need to be 100% accurate (it would be very hard to get to, say, a 10% non-accidental/non-fraud click-through rate). It needs to be slightly accurate. The difference between a 0.5% CTR and a 0.6% CTR or conversion rate is probably a 20% increase in revenue, and much more in profit (assuming fixed costs).
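The arithmetic behind that claim can be sketched quickly; the impressions-per-user and revenue-per-click figures below are assumptions for illustration, only the CTRs come from the comment:

```python
# Back-of-the-envelope: revenue impact of a small CTR improvement.
# impressions_per_user and revenue_per_click are hypothetical figures.
users = 1_000_000_000
impressions_per_user = 100
revenue_per_click = 0.50  # dollars

def revenue(ctr):
    return users * impressions_per_user * ctr * revenue_per_click

base, improved = revenue(0.005), revenue(0.006)
uplift = (improved - base) / base  # relative revenue increase
print(f"{uplift:.0%}")             # 20%, matching the comment's estimate
```

Note the uplift is independent of the assumed traffic numbers: moving from 0.5% to 0.6% is a 20% relative gain regardless of scale.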


This is a perfect example of why end-users shouldn't buy into the idea that all of this invasive tracking is getting them better/more relevant advertising. That Google and Facebook were able to peddle this lie for so long and so effectively is the biggest scam of modern times.


Well, people always have examples of fail cases. My friend at Amazon was dealing with the problem of black socks being tagged as DSLR cameras. It would have been too soon to close the curtains on ML just because that's obviously wrong.

Same with bogus results in Google search. It would be a mistake to fixate on a fail case at the expense of seeing what it gets right.

One thing that can be said about Amazon is how data-driven it is. Even an obvious "improvement" to a system would require analysis to back it up as an improvement. For example, it might seem obvious to filter out lower quality user-created answers in the product FAQ, but answers with poor grammar might actually boost sales because shoppers trust the answer more.

Also, the deeper we descend into ML/AI and black boxes, the more we deal with effects at a distance. There's no single place to write if (user.sex == M) then weigh('tampons', -1); it was a constellation of factors that cascaded into a man seeing tampons, like that time he purchased something related for his girlfriend. The next rung up is the business of mind-reading.


I still hate how I'll buy something that is clearly an expensive one time purchase from amazon (such as a lawnmower or an audio receiver) and the best its dumb algorithm can come up with is, "Hey, wanna buy another lawnmower?"


Actually this recommendation makes a lot of sense. You generally have 30 days to return your purchase, no questions asked. The algorithm knows that you're now quite likely to buy another lawnmower/audio receiver. There's even a chance you'll spend more the second time around (after returning your previous purchase).


Since Amazon knows whether you're submitting a return, they could just filter to that scenario and then provide such recommendations.


There's a good chance you purchase the 2nd one before returning the first.

Or you sell the 1st elsewhere like Craigslist.

Either way, the numbers show those ads (remarketing ads in industry speak) are insanely effective.


They might be weighting by E(rev|ad_shown) in which case you'd only need a very small amount of repeat users to make it worthwhile to show it to everyone.

Not that that justifies the practice.
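A minimal sketch of what weighting by E(rev | ad_shown) could look like; the conversion rates and prices are invented for illustration, not anything a real ad platform publishes:

```python
# Expected revenue of showing an ad: E[rev | ad shown] = P(convert) * revenue.
# Hypothetical numbers: a remarketing ad for a $300 lawnmower vs. a generic ad.
def expected_revenue(p_convert, revenue):
    return p_convert * revenue

remarketing = expected_revenue(0.002, 300.0)  # 0.2% of viewers buy a 2nd mower
generic     = expected_revenue(0.010, 30.0)   # 1% buy a $30 generic item

# Even a tiny repeat-purchase rate can win on expected value,
# which is why everyone gets shown the second lawnmower.
print(remarketing, generic)
```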


My anecdote is that Facebook got really good at targeting ads at me. Once I actually purchased 4 things from FB ads in a short time, I realized I had to quit. Going on 2 years now FB free!


Do you regret those purchases?


Not original poster. Also got ads for things I would definitely like. 100% of them were cheap crap that actually did not work at all. Stopped using Facebook because I got tired of the deceptive ads, lack of accountability, and invasive profiling

And, yes, I do think Facebook should take responsibility for the content of their ads and products sold through them. If the New York Times only sold ads for scams, and all their articles were lies, we would no longer trust the New York Times

Facebook is like Vegas strip junk ads. It may look nice, but you’re going to have a bad time


There are whole Facebook groups now dedicated to posting screenshots of, and laughing at, the ridiculous advertisements and products sold by wish.com.


The funny thing about this is that those groups, and the people in them, look at the Wish ads and engage with them, thus making Wish's ads cheaper and making the company more likely to serve them bizarre ads.


The government should legislate that all tracking is off by default. If it is such a good deal we could opt into it.

Facebook and Google would have to innovate instead of rent seek.


Lie & scam - big words.

Ads/marketing budgets create employment that would otherwise not be there.

Ads provide monetization for apps you use on your phone. Without monetization you wouldn't have the proliferation of the ads, and not everyone is willing to pay directly.

It creates jobs, supports businesses, helps otherwise unsustainable content. It helps the small businesses grow their market. It supports economy.

You are also sort of missing the point. We are trying to make ads relevant for everyone, but even if an ad is relevant, you don't click on everything you see. That's just how people work: you don't buy the first car you see on the street, you do research, you spend days on it. Think about that, and you'll see why a 1% CTR is actually not as terrible as you made it sound. If you clicked on everything you saw, you wouldn't be able to do _anything_.

Disclaimer: current FB and ex-Google employee, working on ads for 8 years. And no, my pay doesn't depend on me saying this; I could work _anywhere_ I wanted, literally.


You know what else would monetize apps on our phone?

Actual money.

In the absence of ads, we'd have people paying money to honest app makers for the utility of the app. And along with that, the accountability, honesty, and incentive alignment that comes from a straightforward exchange. Instead, we have app makers selling users, their data, and their attention to the various companies/intermediaries/entities involved -- without ever telling the user how the bread is buttered.

Yeah, when I'm buying a toaster, I do research. But that has nothing to do with modern ad-tech and its various tentacles whatsoever. Actually, nowadays, I find this whole ecosystem actively prevents me from being proactive about my consumer choice, because every single thing I see when I google "best toaster 2020" is some type of eldritch symbiosis of Fb Goog Az (choose any; none are good), a 1 cent-paid-per-letter 3rd world content farmer, instant click bidding based on my browser, age, gender, location, prior 2 week consumption pattern, political view.

It's really tough to compete with free, and I understand that there are significant barriers to paying directly for content/utility/etc. I know, the system we have set up is ambiguous and complicated and subtle. But maybe those ads/marketing budget dollars could be used to actually like, I don't know: improve people's lives? fix these systems? reverse course? Convince people to consume less so that we don't collapse the biosphere?

It's so tiring to see the most intelligent people alive say, well, this is quite clearly a massive problem that might literally destabilize the order of our entire society on one hand, but on the other, people just have to know how bad they want the Newest Garbage On Sale This Upcoming Black Friday...

Yes, it is a lie and a scam. One of the most epic of recent years.


Feel free to pay money; I applaud that. For the content you benefit from, if you are willing to pay, that's amazing. I pay for content too (I have YT Premium, Netflix, BluTV, etc.). But not everyone wants to pay for content. You certainly wouldn't pay for a game you play for two weeks and get bored of. How do you think a mobile game would get the word of mouth it does without initial monetization?

How do you think you'd get to know your neighborhood burger joint without some form of advertising (either word of mouth or real paid advertising)? How would you know about a burger joint somewhere else?

Short of a micropayment solution that pays out proportional to the value you get (Flattr? maybe), ads are the only viable option. But that only solves the content creator's point of view. It doesn't help the advertiser - the business that needs your money to survive.


The burger joint did absolutely fine before ads and the internet. That's kind of my point. Nobody is inventing a micropayment solution because doing ads pays. (Like, maybe we can put our efforts towards updating payments to the 21st century instead of just outsourcing everything to VISA and paying their 30c tithe...? Maybe we can have some kind of publicly funded digital cash thing?)

What I'm getting at is that some things are better for giant tech companies and corporations and worse for regular people and some things are better for regular people but worse for giant corporations and tech companies.

I'm not even saying that I hate that this is the case, I understand our reality. The lack of creativity, the lack of imagination on this from anyone at all, but especially our best and brightest -- that is the worst part about all of this.

The absolute depth of monoculture on these issues is "oh for sure, it's messed up, but like, fixing it is too hard because of how messed up it is. Better double down before this whole thing implodes!"


A shared-prosperity micropayments solution is entirely possible. I am testing one version of this in regards to phone spam.[1] Your bigger picture is what I am addressing with my micropayments-as-a-service platform.[2] We are relatively new and still creating our path, but I just wanted you to know that there are people working on micropayments solutions as an alternative to the traditional ad model.

[1] https://myrobocash.com

[2] https://fyncom.com


> if you are willing to pay, that's amazing.

Here's the difference: if I am paying for a thing, I have a choice. I am never presented with the choice over whether I want to be tracked online, or between apps. Usually tracking is invisible and so completely obfuscated that even if you want to know who is tracking you and what is getting tracked, you can't.

> How do you think you'd get to know your neighborhood burger joint without some form of advertising

It's in my neighborhood, I see it when I drive by, friends recommend it. Sometimes I do a web search. I don't think I've ever found a restaurant (grocery store, pub, etc etc) due to an advertisement. About the closest I get is when the local paper runs their people's choice awards for local businesses. (and I know, the local paper gets revenue from advertising)


> If I am paying for a thing, I have a choice.

The only choice you have is to stop paying for the thing. In reality it's more likely that you'd end up paying with money and with the data that's being collected. Businesses always want to grow revenue so at some point collecting data again or serving you ads in a paid product constitutes low hanging fruit.

Look at Samsung and the ads they force on you after you paid thousands on their TVs, look at Amazon who crams some ads in movies and shows you already pay for with Prime, look at Google who still collects info on you even if you pay for YouTube Premium.

This isn't about paying with money or your data. You may get something for your money at first, until you don't anymore.


> Samsung and the ads they force on you after you paid thousands on their TVs

TV manufacturers were forced to add ad revenue because they reduced their purchase prices to levels that were less than sustainable, so there is a trade-off being made there.

https://www.in2013dollars.com/Televisions/price-inflation


That's an explanation, not a justification, and it just goes to show that "paying for a product" does not guarantee anything anymore. Today the assumption is that a "free" service is paid for with ads and personal data, with the implication that you could pay money instead. I just posit that you'll end up paying money as well.


Browsers (starting with Apple) and regulations (CCPA, etc.) will give you a choice.

The area is ripe for disruption. But sadly, that's the world we have to live in. I am pretty sure Google would have preferred it if you paid for the services you get (I suggest you sign up for Google One if you use Gmail/Drive/etc.). But until that's ubiquitous, ads are what we have.

re: neighborhood burger joint - web search implies someone is providing this to you for free, or through ads, or you pay.


> You certainly wouldn't pay for a game you play for two weeks and get bored.

I certainly would and have. I highly suspect that's the majority (or at least a significant percentage) of money made in the gaming industry.


You are taking "you" as the literal you. Not everyone is the literal you. IAP revenue is probably on par with or more than ads, but ad revenue is on top of that. There's a difference.


There is this common meme that people buy tons of games during steam sales and never play them.


I think the OP was talking about mobile, whereas the replies are talking about console/pc games.

Very, very different markets.


I mean, I don't think it's really fundamentally different, right? Among other games, I played Alto's Adventure, bought the Plus version, and got bored in a week.


>In the absence of ads, we'd have people paying money to honest app makers for the utility of the app.

I don't think so; the counterexample is gaming. We would see free-to-download products with upsells catered to whales. I'm not sure if that would be a better product, but I don't think that business model is honest either.


> You certainly wouldn't pay for a game you play for two weeks and get bored. How do you think a mobile game on a device would get the word of mouth they do without initial monetization etc?

[Replying to parent because I can't reply directly to poster]

It looks like you have no idea that there are games worth paying for. Not on mobile phones though. And definitely not free to play games. They are designed to take your money, not entertain you.


I do know there are games worth paying 60 bucks for, and games worth a few bucks. I normally don't play games, but the thought of having to make payments every single time is too much for me personally.


I'm sorry, but whenever I'm looking for a new app to do something, it's consistently either the "hobbyist-made, free because ideology" apps, or the "made by $bigcorp, costs $5 but no ads" apps that actually are useful and good.

That, and the seemingly utter stupidity of ad engines. Like, yes I bought that new power tool last week. Stop showing me the ad for the tool from the same store I already bought it from.

I'm pretty much convinced that apart from the people working in the ad business, nobody actually profits off ads. Certainly it's close to impossible to prove that ads are effective, and people who sell ads to companies are good at cherry picking and suggestive correlations. Then in the end everyone tries to buy the same eyeballs by paying the same ad companies, and it all averages out to nothing except you spent a bunch of money.


It's fairly easy to prove. Businesses are not dumb. Why would they throw their money on advertising otherwise?


Come now, this is only an enhanced form of prisoner's dilemma, where the prison guards do a side hustle helping prisoners rat each other out in exchange for a "very small" commission. Everyone spending money on advertising is a strong Nash equilibrium, but it is not the optimal solution.

If everyone spends money on ads, a single actor choosing to not spend money on ads would (probably) see a loss. And in this case, you have a centralized actor (ad companies) spending a lot of resources telling everyone this fact.

But if everyone decided to not spend money on ads, everyone (except ad companies of course) would see a win.
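One way to make the prisoner's dilemma framing concrete is a toy payoff matrix; all profit numbers are invented for illustration:

```python
# Hypothetical profits (arbitrary units) for two competing firms,
# indexed by (firm_a_advertises, firm_b_advertises).
payoff = {
    (False, False): (10, 10),  # nobody pays for ads: best joint outcome
    (True,  False): (12,  4),  # the advertiser takes share from the other
    (False, True):  (4,  12),
    (True,  True):  (7,   7),  # both pay the ad company; shares unchanged
}

# Whatever B does, A earns more by advertising (12 > 10 and 7 > 4), so
# (advertise, advertise) is the Nash equilibrium -- yet not advertising
# at all would pay both firms more, which is the comment's point.
assert payoff[(True, False)][0] > payoff[(False, False)][0]
assert payoff[(True, True)][0] > payoff[(False, True)][0]
assert sum(payoff[(False, False)]) > sum(payoff[(True, True)])
```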


I think you are digging yourself too deep into antagonistic thinking.

>Come now, this is only an enhanced form of prisoner's dilemma, where the prison guards do a side hustle helping prisoners rat each other out in exchange for a "very small" commission. Everyone spending money on advertising is a strong Nash equilibrium, but it is not the optimal solution.

Imagine you are making your own craft beer in your basement and you are sitting on two crates of beer. You want to sell your beer but you don't tell anyone that you are selling beer because that would be advertising and according to you advertising is not an optimal solution. Even if we assume you are the only company on the planet and have no competitors, your business is still in trouble and about to make losses and close down.

>If everyone spends money on ads, a single actor choosing to not spend money on ads would (probably) see a loss. And in this case, you have a centralized actor (ad companies) spending a lot of resources telling everyone this fact.

Now we assume that you tell someone that you are selling beer which is basically what advertising is. People know that you sell beer now. They can now make a decision to buy your beer. If people like beer that's what they are going to do. If they hate beer they can still decide to not buy your beer.

Yeah, if you did not advertise then you would indeed see losses. Simply because nobody is aware of your products, meaning they are unable to buy your products. But again, no second party is involved, so you don't need a centralized actor to decide to do advertising.

>But if everyone decided to not spend money on ads, everyone (except ad companies of course) would see a win.

Since we are the only company around we are "everyone" and if we decide to not spend money on ads then we would see losses.

So now that I have proven that your hypothesis is not correct we can actually talk about the ad market in general.

Advertising is providing value for companies but the amount of value advertising can provide is not based on how many resources you spend on advertising, rather it is dependent on the size of the market. A big market with lots of consumers can make more money off of more advertising but there is a certain point beyond which you end up spending more on advertising than the market needs. On a planet with 1000 consumers and two companies the best case would be if both companies put out 500 ads. So clearly the optimum amount of advertising is not 0. However, a big company can put out 1000 ads and thereby displace its competitor's ads. It's best to have some ads but not too many.
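The saturation argument above can be sketched as a toy model; the market size, the proportional split, and all figures are assumptions for illustration:

```python
# Toy model: ad value is capped by market size. A market of 1000 consumers;
# total conversions can't exceed the market, and each company converts in
# proportion to its share of the ads shown.
def sales(my_ads, rival_ads, market=1000):
    total = my_ads + rival_ads
    if total == 0:
        return 0.0
    reached = min(total, market)     # ads beyond the market are wasted
    return reached * my_ads / total  # conversions split by ad share

print(sales(500, 500))   # 500.0: market fully covered, split evenly
print(sales(1000, 500))  # ~666.7: extra ads only displace the rival's share
```

This reproduces the comment's claim: past full coverage, more ads don't create new consumers, they only crowd out the competitor.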


Advertising per se isn't bad. It worked pretty well for the print press, and I even liked to specifically browse the pages with ads in certain magazines. They were focused: if you were a farmer and bought magazines on the topic, the ads there were relevant.

The problem discussed today is the invasive and often clueless nature of online ads. They just spam you and nothing else. Adtech companies gather mountains of data and at the end of the day what do they have to show for it? "Oh, we see you bought a gaming machine. How about you buy these other five ones?". lol.

Seriously. This is sloppy work.

My conclusion is that they don't utilize the private data (that they gather in some very legally questionable ways) and thus they won't lose anything if they're robbed of all the personal data. And the world will be better for it.


> Businesses are not dumb.

Over all businesses over long periods of time, maybe. But a business can definitely be dumb. And a smart business can definitely do a dumb thing.


"Some can be" and "all are" are different things. There are a lot of businesses that don't keep track of their finances, just like there are many people who don't. But assuming that's the de facto situation and every business is losing money on ads is cynical thinking.


>"I bought that new power tool last week. Stop showing me the ad for the tool from the same store I already bought it from."

Yes. I hate this too, but think of it as a bug, not the actual intention. We would love to know when you wouldn't buy a product as much as we would love to know when you would. But we don't always, and we end up having to make approximations. We don't always know what you actually bought; we just know you bought something. Advertisers don't even always tell us the value of the stuff you bought, something that'd have benefited them to share from an ROI point of view.


That "bug" has existed for as long as adtech has.

Clearly, nobody is interested in fixing it then?

"It's a developing area" is a pretty old excuse by now, and it's worn out. Modern adtech has existed for 25+ years. Do something. Most of the people I've spoken to hate ads.

Not fixing ancient bugs isn't helping the situation.


No, this is within the framework: the buyer (say, Amazon) wants to target you since you recently bought something. They could say they don't want to target you since you already bought it, but it's partly up to them. FB/Google/etc. could downrank it and reduce its frequency, but it will keep happening one way or another due to the variety of sellers in the market.


> Ads/marketing budgets create employment that would otherwise not be there.

Whether something creates jobs or not doesn't make it a net good. Would you suggest Ransomware is good? Because it also creates jobs. As do Nigerian email scams and Ponzi schemes. "Creating Jobs" doesn't mean something is good for society.

> You are also sort of missing the point. We are trying to make ads relevant for everyone, but even if it's relevant, you don't click on everything you see.

Apparently you are missing the point here. Whether you see it or not, Facebook and google's marketplaces by nature serve advertisers, not me. Advertisers don't give a damn what advertising is "most relevant" to me. What they care about is which demographics are most profitable to their brands.

> We are trying to make ads relevant for everyone

So long as Google and Facebook put advertisers in control of who sees their advertising, any assertion that the system is designed to serve us is bullshit. People don't see advertising that is relevant; they see advertisements from the people who pay Google/Facebook to put their message in front of them. Those two things are not equivalent and never will be.


> Advertisers don't give a damn what advertising is "most relevant" to me.

They certainly do, indirectly: it means they get better ROI on their investment.

But think of it this way. You are getting a service, you are paying either directly or indirectly through ads.

Facebook wouldn't be a thing if it were a paid product from the get-go. It'd probably lose a major part of its user base if it became a paid product overnight.

Like I said, I would love it if I could pay for a product I am benefiting from, but it's not always possible.


> So long as Google and Facebook put the advertisers in control of who sees their advertising, any assertion that the system is designed to serve us is bullshit

But they don't put advertisers in control of who sees their ads.

If you look at audience sizes on FB, and then run some direct response ads on that audience, you'll notice that you only ever reach maybe 10% of that audience.

This is because what FB/Goog are good at is figuring out which ads are likely to get someone to click and/or convert, and show only those ads.

The dirty secret is that those people might have converted anyway.

One can measure this with an attribution model, but the trouble is that the two biggest players Google and Facebook have very little incentive to co-operate, so all attribution models are extremely biased.

tl;dr the advertisers set boundaries on who should see the ad, but they don't control who the ads get served to.
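The "might have converted anyway" problem above is usually measured with a randomized holdout. A minimal sketch, with entirely hypothetical conversion rates:

```python
import random

random.seed(0)

# Randomized holdout for ad incrementality: compare conversion among people
# shown the ad vs. a random group the ad was withheld from.
BASELINE_RATE = 0.030  # hypothetical conversion rate with no ad
TRUE_LIFT     = 0.005  # hypothetical incremental effect of the ad

def conversion_rate(p, n=200_000):
    return sum(random.random() < p for _ in range(n)) / n

treated = conversion_rate(BASELINE_RATE + TRUE_LIFT)  # saw the ad
holdout = conversion_rate(BASELINE_RATE)              # randomly withheld

# Naive last-click attribution credits the ad with ALL treated conversions;
# the holdout shows most of them would have happened anyway.
print(f"naive credit: {treated:.4f}, measured lift: {treated - holdout:.4f}")
```

The measured lift is a fraction of the naive number, which is exactly why platforms grading their own attribution have an incentive problem.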


You haven’t addressed the only point in the comment you’re replying to.


I did. "tracking is getting them better/ more relevant advertising" -> it makes it more relevant, yes. But your individual experience over 100 impressions doesn't matter. Us showing you ads for something you already bought is a bug, not an issue fundamental to advertising.


The problem is that it’s called a recommendation, as if your close friend, who knows you, told Amazon, hey I think this person might also want to buy XYZ. Or a jazz connoisseur says you might like this album if you liked that one.

But it’s not, it’s a tractable similarity based solution to the question of, which of millions of ads to show, in order to increase CTR.

The biggest difference is motive: a good recommendation is made in good faith for the benefit of the recipient while ads are businesses trying to turn a profit. Maybe that’s why this meme never dies.


Perfectly said.

If I had an acquaintance who's a recovering drug addict, I would never ever recommend that he try a new drug, because that is obviously against his best interests. But for an ethical void such as a company blindly pursuing profits, bombarding him with free samples of highly addictive drugs would be an effective strategy for increasing short-term profits.


I didn't try to say that these systems don't work at all. I meant to say that I don't believe they are cost effective. Let's say you can get that 0.1% CTR improvement by purchasing a typical AI recommendation cluster for $100M plus $3M annually. There are only a few companies for which that 0.1% CTR gain will offset the costs and increase profits.
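A rough sketch of that break-even argument; the $100M/$3M figures are the commenter's hypotheticals, while the amortization period, traffic volumes, and revenue-per-click are my added assumptions:

```python
# Break-even for a hypothetical AI recommendation cluster:
# $100M up front + $3M/year, amortized over an assumed 5 years.
annual_cost = 100e6 / 5 + 3e6  # $23M/year

def extra_revenue(impressions_per_year, revenue_per_click, ctr_gain=0.001):
    # Extra revenue from a 0.1% absolute CTR improvement.
    return impressions_per_year * ctr_gain * revenue_per_click

# Assumed mid-size retailer: 10B impressions/year at $0.50/click -> $5M/year.
print(extra_revenue(10e9, 0.50) < annual_cost)   # doesn't pay off
# Assumed hyperscale platform: 500B impressions/year -> $250M/year.
print(extra_revenue(500e9, 0.50) > annual_cost)  # easily pays off
```

Under these assumptions the cluster only makes sense at very large scale, which is the comment's point.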


> It needs to be slightly accurate.

People aren't saying recommendations need to be good, just that they need to be better than they are right now. To justify the cost of investment in AI, AI hardware, and AI developers, an AI recommendation system needs to be more accurate than the next best recommendation system. The real point here is that no matter how clever you think your AI system is, the end user still thinks it's worse than a very simple system based on "people who bought X also bought Y".
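That simple baseline fits in a few lines. The basket data, item names, and plain co-occurrence counting below are illustrative assumptions, not anyone's production system:

```python
from collections import Counter
from itertools import combinations

# "People who bought X also bought Y": count co-purchases across baskets
# and recommend the most frequent partners of an item. Toy data.
baskets = [
    {"lawnmower", "work gloves"},
    {"lawnmower", "trimmer line"},
    {"lawnmower", "work gloves", "trimmer line"},
    {"toaster", "bread knife"},
]

co = Counter()
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        co[(a, b)] += 1
        co[(b, a)] += 1

all_items = {item for basket in baskets for item in basket}

def also_bought(item, k=2):
    pairs = [(co[(item, other)], other)
             for other in all_items - {item} if co[(item, other)]]
    return [other for _, other in sorted(pairs, reverse=True)[:k]]

print(also_bought("lawnmower"))  # gloves and trimmer line, not more mowers
```

Notably, even this baseline recommends complements rather than a second lawnmower, which is what the thread keeps complaining about.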


Thanks for the reminder that we have to think in probability and scale.

Good example!


Thank you, this should be added to the Hacker News rules or something. I can't remember the last time there was a thread related to personalized recommendations where someone didn't complain about irrelevant results.


We already suffer the ads, why do we also have to internalize and normalize the business rationale?


>For all the data that they collect and all the AI that they pay for, these companies get very little revenue to show for it.

Not really, they make a shit ton of money from their recommendations and perform extensive A/B testing to keep track of how much money it makes them. Anecdotes aren't data and large tech companies don't spend money for no reason (especially when they can A/B test the impact trivially). Remember that at their scales even a 0.01% increase in revenue is worth $10+ million per year so they don't need to be perfect to make a shit ton of money. There's a reason ML engineers get paid $1+ million and it's not corporate stupidity.


I doubt it’s A/B testing, more likely some multi-armed bandit algorithm.


You typically need both.

You'll have some kind of MAB in production, but in order to measure individual incremental impacts, you still need a proper experiment.
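A minimal epsilon-greedy sketch of that bandit-in-production idea; the arm names and CTRs are invented, and the "true" rates are hidden from the algorithm:

```python
import random

random.seed(1)

# Epsilon-greedy multi-armed bandit: mostly serve the variant with the best
# observed CTR, explore a random variant occasionally.
TRUE_CTR = {"A": 0.02, "B": 0.06, "C": 0.03}  # hypothetical, unknown to the MAB

counts = {arm: 0 for arm in TRUE_CTR}
clicks = {arm: 0 for arm in TRUE_CTR}

def observed_ctr(arm):
    return clicks[arm] / counts[arm] if counts[arm] else 0.0

def choose(eps=0.1):
    if random.random() < eps:
        return random.choice(list(TRUE_CTR))  # explore
    return max(TRUE_CTR, key=observed_ctr)    # exploit best arm so far

for _ in range(50_000):
    arm = choose()
    counts[arm] += 1
    clicks[arm] += random.random() < TRUE_CTR[arm]

# Traffic concentrates on the best arm instead of a fixed 50/50 split --
# which is also why a separate holdout experiment is still needed to
# measure any single change's incremental impact.
print(max(counts, key=counts.get))
```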


> Amazon keeps recommending me tampons (I'm a guy)

I bought one toilet seat on Amazon and now it thinks I'm a toilet seat collector. No I'm not going to buy another one any time soon, no matter how many color and size/shape/design variations it presents to me.


This is exactly the issue with all recommendation systems. It's silly when I look at Amazon and see 15 different cat water trays just because I got one for my cat a month ago - but when it comes to social media the effect is really poisonous. YouTube is a good example: if I listen to something about Roman history it's all Rome all the time - a much better recommendation would be to suggest Greek / Persian history - rather than pigeon-holing me after 1 or 2 videos.

I really think I would watch more videos if the recommendations -challenged- what I enjoyed, rather than coddling me, but I have to assume that someone has run that experiment and ad revenue went down.


I occasionally watch firearms videos, usually Forgotten Weapons, and the recommendations it now gives me are ... interesting. The ads seem to think that I’m either a day trader, or I’m desperate to escape from Fake News.


I have that issue as well with the recommendation algorithm on YouTube. It's similar to the challenge of balancing exploration vs exploitation in reinforcement learning. I think that most people want more exploration/discovery so they don't get bored. More exploration also leads to more learning. It's a win-win for everyone.


The YT recommendation algorithm is probably the worst among all the major platforms. It will pigeonhole you into the same 10 videos, which you will never escape. I listen to music on YouTube relatively frequently, but it always goes back to the same 10 songs it's decided I should listen to, no matter what genres I play.


The system learned people buy one seat to try followed by more if they like it. It doesn’t know how many bathrooms you have, but it’ll learn after your second purchase or lack thereof.


Same for me with a part I ordered to fix my AC/furnace, I better not need another one soon!


Or you might be going into the furnace repair business and about to buy a bunch more. Or you might have bought the wrong model.

Both of those are low probability, but they're probably higher than the probability that you are interested in a product chosen uniformly at random.


My Spotify recommendations are amazing.

I have been an Apple Music user, but I am subscribing to Spotify just for its recommendations.


If I had to guess, this is partially because Spotify has actual humans with ears who are categorizing specific artists into genres, not because of any great advancement in AI. If you like Portishead, you might like (similar thing).

Just within electronic music, look at how many genres and sub-genres have been categorized by someone simply as a hobby project. Now updated for the web 2.0 era.

https://music.ishkur.com/


> Spotify has actual humans with ears

Yeah the actual humans with ears are me and you. They use play-time as the basis for recommendation as well as content-similarity, and put effort into de-biasing this data for, e.g., position bias. This is the same for YouTube etc.

Spotify is one of the best examples of a modern large-scale recommender system, for me.



Was about to say the same. Have been eagerly awaiting an updated "Discover Weekly" playlist every Monday for years and it keeps getting better.


Scrolled through the comments exactly for this. In addition to the discover weekly, I'll often make a new playlist of 5-10 similar songs I like to get another in the same vein, and it works very well for my decidedly not-chart-topping music preferences. Maybe there are some genres I don't listen to enough that it struggles with?


Spotify has a couple of weird fetishes. If you throw a Eurovision song or a song in your local language into a playlist with unrelated music, it will exclusively recommend those genres. Even when they are outnumbered 20 to 1 in the playlist in question.


This is so foreign to me that I have to ask this at the risk of appearing very rude, but from my perspective this question makes total sense.

Do you have a memory problem? My weekly have always been 30-70% the same recordings of songs I have already listened to at least ten times. And it's always been this way.


Yo that’s indeed very rude.

Why would you assume I use discover weekly to discover music?

I just use the Radio feature, which plays endless amount of songs based on the song you started with.


I'm sorry but I couldn't come up with a nicer way to ask an objective question about it.

The radio feature is global to all playback situations so I assumed that couldn't be it.


My discover weekly is maybe 50% songs that I've heard before, though maybe not on Spotify; I think they're from the iTunes library I imported.


I've been using the weekly for many years now and that's not the case for me.


I've long wondered why recommendations (especially shopping) are so bad.

You just bought a light bulb? How about a dozen others?

But I've realized that accuracy is not really a problem for them, because a false recommendation at most slightly annoys the user and gets glossed over. Among 1000 wrong recommendations, if one works, it's a win (for the company).
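To put that in rough numbers, here's a sketch with invented figures (none of these are real platform data):

```python
# Back-of-envelope: why a tiny hit rate across huge volume still pays.
# Every number here is made up for illustration.
impressions = 1_000_000_000  # recommendations shown per day
hit_rate = 0.001             # 1 in 1000 actually leads to a sale
margin = 2.00                # profit per converted sale, in dollars

daily_profit = impressions * hit_rate * margin
print(f"${daily_profit:,.0f} per day")  # $2,000,000 per day

# And a modest model improvement scales linearly:
lift = (0.0012 - 0.001) / 0.001
print(f"{lift:.0%} more profit from a 0.02pp accuracy gain")
```

So even recommendations that annoy 999 out of 1000 users can clear their compute costs.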


>You just bought a light bulb? How about a dozen others?

Ironically, this suggestion is probably driven by real-world behavior. Whether your bulb failed due to age (and you'll probably need replacements for your other similarly-aged bulbs soon), or an electrical problem caused the burnout (and you might need another replacement soon), or you just want to stock up, having your exact bulb model recommended saves you a little time on having to check it yourself. :P

I think a lot of "wrong" recommendations are actually right for other users, which seems to be just what you're saying -- they annoy one user but result in more sales for others. From the company's perspective, these recommendation systems are working as expected.

On the bright side, that also means it's in their best interest to improve accuracy (and drive more sales for more users, instead of just annoying them). Hopefully new tools like Nvidia's announcement result in fewer annoying ads.


No matter what I buy, no matter how niche and one-off the purchase is, Amazon never fails to recommend a hundred similar (usually knockoff) products.

Instead of focusing on what I just bought, they need to focus on predicting what I'll want to buy next. Just because I bought a coffee maker doesn't mean I'm turning into a coffee maker collector.


For anecdote's sake, I:

- frequently click through on Facebook ads because they're often SaaS products I'd be interested in (and have found a bunch of cool tools I use now)

- think my Spotify suggestions are spot-on. I used to use Google Music, which had comparable suggestion quality at the time, but I feel like Spotify has gotten significantly better at suggestions over the past year-ish.

- think YouTube is the shining example of controllable recommendation systems. Looking at my front page right now, I'm interested in all 8 of the videos above the fold, and almost every video below it -- probably a direct result from actively guiding/curating what videos get recommended to me. My "to watch" queue is hundreds of videos long since I almost always add more and more videos until I get the time to sit down and watch a chunk, which usually turns into a positive feedback loop of more good videos getting recommended.

All three of the above recommendation systems make me enjoy the related product more than I would without them, and probably also directly lead to more revenue for the company (sales on FB, keeping my Spotify sub, and seeing more YT ads).

On the other end of the spectrum, posts on FB and Quora are two examples where recommendation systems seem to make products significantly worse, so I guess it's hit or miss depending on whether what you want out of each product aligns with how the recommendation systems are set up.


> think YouTube is the shining example of controllable recommendation systems.

YouTube is the worst. I constantly have to open videos (and think about it!) in a private tab so that my recommendations don't get messed up. And then of course on the phone you don't have that luxury, so you have to go clean history manually. And now they've added two interstitials before you can actually view a video in private.

Shining example indeed.


I've noticed that, lately, YT recommends mostly based on my last week/days of video history with them. Which is ok-ish, but certainly some content that I haven't been following lately gets relegated to oblivion.


The YouTube recommendations have been on point, and I'm a huge fan as well. I have found new channels I really enjoy, but I randomly get some extremely left/right-wing US political content, which I just hide (I'm Australian).


I could imagine that's a bias.

That we just remember the examples when the recommendation was bad, but not when it was good, because we didn't make the connection.

I've been asking myself that for quite some time.

Do you use a tracking blocker? That could also be a reason why you get bad recommendations.


I wouldn't consider that the lead example. More like ML algorithms completely destroyed all benchmarks for computer vision algorithms and changed the direction of research completely.


Sounds like those folks should buy A100 80GB and up their game!

Seriously though - when Amazon is showing me all these recommendations, they are charging someone to show them to me. That means I am not the customer of those recommendations. They make money when I browse, and I wonder if they make more money doing that than when I actually purchase. Meanwhile, the people selling pay to advertise with Amazon, and they pay (via a %) to sell as well.


> Spotify and Deezer both suggest the chart hits, regardless of how often I dislike that kind of music.

"It is difficult to get a man to understand something, when his salary depends on his not understanding it."

-- Upton Sinclair

In other words, the reason might be that there is an incentive to suggest chart hits to you. The abilities of the recommendation system probably don't have much to do with it.


The true question is whether throwing electricity at your algorithms is cheaper than, for example in the Spotify case, hiring a couple hundred curators to hand pick music for a wide range of segments.


> Spotify and Deezer both suggest the chart hits, regardless of how often I dislike that kind of music.

Of course they do. Their real customers, record labels, pay handsomely for that service.


I'm not sure about that. The famous Spotify discover weekly playlist suggested a chart hit to me only once since its creation years ago, and I liked it.

The Release Radar playlist may have more chart hits, but that one is not super smart. If you listen to a popular artist you will get all their new songs and shitty remixes forever, including their chart hits.


I mean, I work in robotics. And while algorithmic improvements are certainly needed, I think we could manage to use way more processing power too. Think of the Tesla self-driving system - a beefier processing system is a big reason it's even possible. And on their backend? Gobs of video data crunched in what must be a huge dataset. I think there's plenty of room to throw more power at the problem and gain traction. At least in robotics.


I imagine that's their lead example because recommender systems are familiar, easy to understand in terms of functionality, and they get non-technical business types excited about spending money on new AI toys like the A100. There are far more interesting use cases for hardware like this, but AI researchers don't need to be told about them.


> Spotify and Deezer both suggest the chart hits, regardless of how often I dislike that kind of music.

That is just part of the game and it is called "machine learning". They get to know you better. ;)

Annoying but only part of the game/training.


I agree with you. The “gold” rush toward ML does not expand the total wallet size outside of a few niche markets.


My Spotify recommendations are great. Do you listen to music on it often?


nVidia is on its last legs, honestly. If this is the best they can come up with, forget it. What is the real world application/use case for this GPU? I doubt it would be used for autonomous driving - you could buy some cheap used Tesla P100s and toss them on a rack for a better price point IMO.


Wow... uh, science? For one. Smaller outfits and research groups that want to rapidly iterate through modeling and simulation but don't have a datacenter. The A100 is about 11 times more performant than a P100 in HPC benchmarks. You can get a DGX Station A100 with 4 A100s in it, run it on mains power, and have the equivalent of 44 P100s under your desk (for these applications).


[flagged]


The words "man" and "woman" can refer both to assumed gender and biological gender. Since the context involves menstruation, assumed gender makes no sense in the sentence.


[flagged]


Outside the leftist clown world, only women menstruate and cisgender is normal.


[flagged]


Here's some perspective: NH is one of the most politically correct online forums in the US. The US itself is one of the most politically correct and tolerant countries in the world. If you think NH is an "incredibly unsafe place" for people who are different (in any respect), try to imagine what would happen to such people in some other cultures (e.g. some Muslim countries, rural Africa/India, etc). I bet an American white male community wouldn't seem so bad.

There was zero reason to get offended by the parent comment. If someone did, it's strictly their own personal issue. They should learn how to deal with it and adjust their expectations.


What is NH?


News Hacker?


Preach inclusivity and respect, and in the same breath take a cheap out-of-the-blue jab at fucking white males. Truly amazing what it’s come to.


I'm quite interested in what you suppose the word "man" means? I'm serious.


I am more wondering why AMD hasn't massively invested in porting common ML frameworks to OpenCL. Nvidia has outrageous margins on their datacenter GPUs. They've even banned the use of lower-margin gamer-oriented GPUs in datacenters [0]. Given that tensor arithmetic is essentially an easily abstractable commodity, I just don't understand why AMD doesn't offer a drop-in replacement.

Most users won't care what hardware their PyTorch model runs on in the cloud. All that matters for them is dollars per training epoch (or cents per inference). This could be a steal for an alternate hardware vendor.

[0] https://web.archive.org/web/20201109023551/https://www.digit...


Because AMD's OpenCL tooling sucks. Set -O (optimization) flag on your OpenCL on AMDPro drivers, and malformed code comes out. AMDPro OpenCL 2.0 doesn't support debugging outside of printf statements (and lol at reading through 1024-SIMD threads worth of printf statements every time you wanna figure something out).

Compiler bugs aplenty: the compiler can enter infinite loops just trying to compile OpenCL 2.0 code, taking down your program. If you ever come across such a bug, you're in guess-and-check mode to figure out exactly which construct borked the OpenCL compiler.

Oh, and the OpenCL compiler is in the device driver. As soon as your customers update to Radeon 19.x.x.whatever, then you have a new OpenCL compiler with new bugs and/or regressions. The entire concept of tying the COMPILER to the device driver is insane. Or you get support tickets along the lines of "I get an infinite loop on Radeon 18.x.x.y drivers", and now you have to have if(deviceDriver == blah) scattered across your code to avoid those situations.

In practice, you end up staying on OpenCL 1.2 which is stable and has fewer bugs... and has functional debugger and profiler. But now you're missing roughly 8-years worth of features that's been added to GPUs over the last decade.

----------

ROCm OpenCL is decent, but that's ROCm. At that point, you might as well be using HIP, since HIP is just a way easier programming language to use.

Ultimately, I think if you're serious about AMD GPU coding, you should move onto ROCm. Either ROCm/OpenCL, or ROCm/HIP.

ROCm is statically compiled: the compiler is Clang/LLVM and completely compiled on your own workstation. If you distribute the executable, it works. Optimization flags work, there's a GDB interface to debug code. Like, you have a reasonable development environment.

So long as your card supports ROCm (admittedly: not many cards are supported, but... AMDPro OpenCL tooling is pretty poor).


Right. I think the question though is why isn't somebody fixing this situation? There's money sitting on the table for them when they figure it out and get their act together.


> I think the question though is why isn't somebody fixing this situation?

They are. It's called "Use ROCm". TensorFlow support, PyTorch support, etc. etc.

Yeah, it's limited to Linux, it's limited to a few cards. But within those restrictions, ROCm does work.


As far as I know, AMD doesn't have an incentive to improve this limited offering because they don't have chips with a good enough cost-to-compute ratio to get people to buy them even if they did get ROCm/HIP/etc. working.


Frontier and El Capitan will be the first exascale systems on the planet (now that Project Aurora has slipped schedule).

Both with AMD MI100s providing the bulk of their compute. The Frontier team seems to have been given MI100 development boards, because AMD is talking about how they have already ported some code over to the MI100 and tested it.


You're on HN so you're probably aware of the costs and difficulty involved in staffing an organization large enough to tackle these issues in an effective time frame.

Nvidia has quite a head start. You're not just talking about some simple driver support either. You're talking about runtime compilation/JIT (to target various flavors of HW), tooling support, library optimizations, API stability and maintenance... AMD can catch up, but unless they come up with a new approach it's going to take a long time and a lot of smart people to do so.


> AMD can catch up, but unless they come up with a new approach it's going to take a long time and a lot of smart people to do so.

I think they will. AMD has the challenger mindset. They rose from the ashes and now actually compete with Intel and they can tackle NVIDIA as well.


ROCm-built executables aren't targeted at an IR, which means pain for the user with every switch to a newer GPU architecture: you have to distribute binaries again.

Also, no Windows support whatsoever...


AMD is trying to do it but they were late to the party to start with. They maintain compatible versions of PyTorch and Tensorflow, among others. They support only Linux.

https://www.amd.com/en/graphics/servers-solutions-rocm-ml

https://rocmdocs.amd.com/en/latest/


Linux-only support is a bit of a letdown for me. My clients insist on using Windows, so I'm left without a choice there: it's Nvidia or Nvidia.


It might be that you could get those up and running in WSL 2, once the GPU passthrough functionality reaches general availability (currently it's only in insider builds)[1] - but don't quote me on that, as it's just a guess!

[1] https://docs.microsoft.com/en-us/windows/win32/direct3d12/gp...


Okay, gonna take a look at that once it's available.


Rather than OpenCL, they have invested in HIP, which can target AMD or Nvidia cards.


Can HIP run on AMD consumer Radeon cards? I'm trying to find the best option to write GPGPU code that runs on other people's machines with hardware I have no control over. I thought OpenCL would become the best way to write code that could run on all PCs and mobile phones, but from my research the GPGPU landscape looks more fragmented for each year.


Not officially supported... and no ROCm/HIP at all on Windows.


OpenCL is supported via ROCm on the new consumer GPUs now, so if the stars align HIP support might come too. The lack of announcements doesn't inspire much confidence though, especially compared to NVIDIA's always-on PR machine.


Is HIP continually evolving à la CUDA? For instance, every new version of CUDA seems to jam in new features to make things launch faster, use less memory, be easier to use, etc.


It doesn't work on their new GPUs and many others. It's Linux-only, etc.


They have invested, and it works pretty well. But CUDA has such a huge lead and is the default. Most users don't care about the hardware, but they also don't care enough about the cost. And the really price-sensitive users are running their own clusters under their desks using gaming GPUs.

A more detailed explanation (ROCm section): https://timdettmers.com/2020/09/07/which-gpu-for-deep-learni...


According to my research and others' comments, OpenCL is a mess, so filled with boilerplate that it is just an interface to each company's own approach. CUDA actually encapsulates GPU computation.

AMD has HIP, which is closer to CUDA, but HIP seems less developed.

But I believe the basic problem is that AMD doesn't make sufficiently high-end GPUs to compete with Nvidia in ML.

Personally, I don't care what happens in the cloud, just what I can buy. I would note Nvidia does have competition in the cloud from Google's TPUs, and I assume any large cloud vendor is going to negotiate with Nvidia. While I'd love AMD to be cost-effective for ML somewhere, it seems they aren't, because that's not what they're targeting.


As someone who worked deeply with OpenCL at one point, it's because it went off the rails and became a useless hodge-podge. They recently announced that V3 is basically a reversion to 1.0, and that is the first good decision they've made in a while.


It occurs to me that if anyone is going to release an ARM based CPU that is competitive with Apple, it's Nvidia. A Microsoft/ Nvidia partnership could create some pretty impressive Surface laptops, and if Nvidia were allowed to sell those CPUs to OEMs for other 2 in 1s or laptops, Microsoft might just get some traction on their ARM efforts.


Does processing power translate to actual power? Will most stock market gains in the end go to the people with the fastest computers? Will wars be won by the groups with the most processing speed? Was Cyberpunk (slightly) wrong and it's just about having more memory and more instructions per millisecond than the rest? Are sophisticated circuits the new oil?


Well, I can answer the stock market question, and the answer is a resounding no. Speed at this point is a commodity without a moat, and brains are much more important than computational speed. If you're solely making money because you are fast... well, you probably went out of business or you will soon. Anyone can spend money to be fast (speed is a pure function of money spent).

Great ideas are much harder to come by. And while you can try to buy your way to the best ideas, it doesn't seem to work all that well.


I was reading the latest earnings call transcript of this HFT firm Virtu Financial the other day, and it does appear to be that way.

>So there is a back and forth here and all of the other players that we compete with, they are also economic animals. They don't have any magic elixir or magic algorithm that we don't have, right. We all kind of are doing the same thing and we're all providing great service and value for the marketplace. So I know this a little handy way to give an answer but there is an ebbs and flows around competition. The business continues to be very profitable for us on a net basis. On a gross basis it's incredibly profitable but as I said in my remarks, we have paid, put that in quotes, not in our financials we've provided back to our retail customers about $950 million of price improvement this year.

https://seekingalpha.com/article/4386120-virtu-financial-inc...

Top quantitative firms like RenTec don't rely critically on speed or compute power, but rather on high quality data, which they've also said publicly.


Also Virtu is a dying market participant. They were flying high but then lost out. Probably why they're talking about providing price improvements instead of raking in cash.


Isn't cyberpunk partly about companies having the most powerful tools to control (predict) people?


Processing power is just a proxy for money+energy which equals power.


>"The new A100 with HBM2e technology doubles the A100 40GB GPU's high-bandwidth memory to 80GB and delivers over 2 terabytes per second of memory bandwidth."
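For scale, a quick back-of-envelope sketch (treating "over 2 TB/s" as a flat 2000 GB/s):

```python
# Time for one full pass over the A100 80GB's memory at peak bandwidth.
capacity_gb = 80
bandwidth_gb_per_s = 2000  # "over 2 terabytes per second"

seconds = capacity_gb / bandwidth_gb_per_s
print(f"{seconds * 1000:.0f} ms per full sweep of memory")  # 40 ms per full sweep of memory
```

In other words, a kernel that touches every byte once tops out at roughly 25 sweeps per second, no matter how fast the compute units are.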


The workstation they announced in the press release [0] sounds and looks incredible. I'm sure it costs over $100k, but I wish that kind of case design and cooling system could be available at a more reasonable price point. I wonder how long it will take for a computer with the performance specs of the DGX Station A100 to be available for under $3,000. Will that take 5 years? 10 years? Based on the historical trends of the last decade, those estimates strike me as pretty optimistic.

[0] https://www.nvidia.com/en-us/data-center/dgx-station-a100/


Why do they still produce GPUs rather than specialized ASICs for neural networks, like Google does with their Tensor Processing Units?


Because bandwidth-optimized computers do more than just matrix-multiply all day long.

Tensor Processing Units are too specialized: they can't traverse a linked list, they can't traverse trees. They're good at one thing and one thing only: matrix multiplication.

GPUs are still bandwidth-optimized and are good at matrix multiplication (but not as good as tensor units). But GPUs can traverse trees and other data structures. Ex: BVH trees for raytracing, or linked lists... or whatever else you need. It's a general computer, a weird... terrible-latency computer with HUGE bandwidth... but that's still useful in many compute applications.

--------------

Matrix multiplication is the cornerstone of many scientific problems. But you still need software to manipulate the data into the correct "form", so that the matrix multiplication units can then process the data.

It's in this "preprocessing" or "postprocessing" phase where GPUs do best. You can implement bitonic sort for highly parallel sorting / searching. You can perform GPU-accelerated join networks for SQL. Etc. etc.

And even then, Nvidia's A100 has incredibly good matrix multiplication units. So you're really not losing much anyway.
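For a concrete feel for why sorting maps onto GPUs, here is a plain-Python sketch of the bitonic network mentioned above. It runs sequentially here, but every (k, j) pass applies the exact same compare-exchange step to all elements, which is what GPU lanes want (a real version would be a CUDA/HIP kernel, one thread per element):

```python
def bitonic_sort(a):
    """Iterative bitonic sorting network (length must be a power of two)."""
    a = list(a)
    n = len(a)
    assert n & (n - 1) == 0, "network assumes power-of-two length"
    k = 2
    while k <= n:              # size of the bitonic runs being merged
        j = k // 2
        while j >= 1:          # compare-exchange distance within a run
            for i in range(n):  # on a GPU, this loop is one thread per i
                partner = i ^ j
                if partner > i:
                    # sort direction alternates with each block of size k
                    if ((i & k) == 0 and a[i] > a[partner]) or \
                       ((i & k) != 0 and a[i] < a[partner]):
                        a[i], a[partner] = a[partner], a[i]
            j //= 2
        k *= 2
    return a
```

The control flow depends only on indices, never on the data, so all lanes stay in lockstep: that data-independence is the whole trick.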


The focus is on Tensor Cores[0], which are pretty NN specific. https://www.nvidia.com/en-us/data-center/tensor-cores/


GPUs do lots of heavy lifting outside of ML, like simulations or crypto. Nvidia is even working on using GPUs to implement pieces of 5G(https://www.ericsson.com/en/press-releases/2019/10/ericsson-...).


Practically, the GPU is still way more generalized than the TPU. For that reason, optimizing an algorithm on a TPU is usually much trickier than on GPUs. There are many more nuances because of the way the TPU is designed.

On the other hand, Google has good reason to hold back TPUs from the general public, and instead only offer them on Cloud. That also contributes to their limited use.


> On the other hand, Google has good reason to hold back TPUs from general public, and instead only offer them on Cloud.

What is the reason? Just competitive advantage for running Google level AI/ML workloads?


Many reasons:

1. Technically, the TPU is less generalized; making it publicly appealing requires too much engineering effort, which has a very high risk of not paying off. Case in comparison: how AMD has not been able to capitalize on GPUs in deep learning despite a more uniform architecture.

2. TPUs do have some advantages that Google wants to keep in its own possession. For example, Google would not want FB to easily copy TPUs. FB would indeed very likely benefit from TPUs, as they run a similar business.


Doesn't Google sell TPUs in the Coral AI line of devboards? (https://coral.ai/products/dev-board)


That’s inference only.


Legit question: why are these still called "GPUs"? Shouldn't they rightly be called "AIPUs", or "IPUs"?


They can be used for anything related to numerical programming. They are GPGPUs: https://en.wikipedia.org/wiki/General-purpose_computing_on_g...

There are processors that are designed only for DNN inference, but the A100 is not one of them.


It is amusing to see this so soon after a post on how workstations were dead : https://news.ycombinator.com/item?id=24977652

US$200K for startling performance this time.


Workstations are dead. With this kind of CapEx, nobody’s going to be buying N machines with these in them for their entire team, with one card per seat, going mostly unused. They’re going to build a farm of them, or they’re going to rent time on a cluster of them. Either way, that’d make them “server cards”, not “workstation cards.”


Definitely not dead in certain areas - video, 3D, audio, CAD, certain research. And there is more of that work being done than ever.


Any word on how expensive this board will be?


For only the low low cost of everything in your bank account!


Jokes on them, there's nothing there!



