codelord's comments

Having worked in the game industry in the past, it's amusing to see people talk about the greed of game developers. You have no idea! You have no idea what an effort it takes to ship a game. An album is the work of one or a few individuals for a relatively short period of time with very little cost. Because of that you can have services like Spotify that allow access to pretty much all the music ever created for a $10 fee. The math just doesn't work with video games. Video games take an army of developers and years of work to make. Game development is one of the hardest and worst-paying professions in tech. Most people in the industry are there not for the money but because of their passion for the profession. Most games fail to pay for their production costs despite all the effort that goes into them. Companies that have had a few mega successes have to make enough money from their popular titles to pay for all the other titles that fail to pay for themselves. Please don't complain about video game prices!


When people say "greed of game developers" they're clearly not talking about rank-and-file devs; they're talking about development companies.

> An album is the work of one or a few individuals for a relatively short period of time with very little cost

Kind of ironic that you're saying that.


Majority of "development companies" are also shit poor and go bankrupt in after 1 or 2 projects even if they're moderately successful. Basically it's just hard to make profit in this industry and well over 90% of games never recouperate development costs.

Gamedev is just hard and there are very few exceptions like Epic or Rockstar that even get an option to become "greedy".


> are also shit poor and go bankrupt after 1 or 2 projects, even if they're moderately successful

Fair point, but those are clearly not the ones being pointed out as greedy by most people.

...except for maybe scrappy mobile companies that churn out shitty microtransaction games looking for whales, but those are greedy indeed, and I doubt they have the sympathy of you or the OP.


I'm personally not making mobile games or ones with microtransactions. Yet you can basically choose: either there will be microtransaction games for mobile or there will be none. This is not because of developers' greed, but because it's the only way to monetize this audience. People voted with their wallet.

Microtransactions in PC games are there for the same reason - because you can't just go and sell your game for $25 if all of the competitors with similar production quality in the last 5 years released for $15. Gamers simply wouldn't buy it, and nobody cares that, with inflation, $15 back then and $15 now are very different money.

Yet you can put microtransactions in the same $15 game and the same people will pay for them. And you'll reach the desired profit per copy sold. If everyone refused to pay for microtransactions and instead spent more money buying games without them, there wouldn't be any microtransactions by now.


> People voted with their wallet.

Not really. Most of these games are preying on weaknesses. These people are not voting with their wallets. They're being duped into giving away their mental health, and their wallet is taken away when they're not looking because they're high on dopamine induced by images and sounds.


That's about as sound an argument as saying people vote to go to casinos with their wallet.

While technically true, it omits a pretty damn significant detail behind the appeal.


Mostly this is not even about development companies, but about the distribution platforms, which are even further removed from the game developer. It is the distribution platforms that make policy and generally dictate the conditions under which a game is "sold".

The design of, and blame for, microtransactions/gambling lies more with the game developer, but even here we keep hearing stories of how such design is pushed by the publisher (who acts as an investor) rather than the game developers.

The discussion is not about developers with passion for the profession.


Microtransactions exist because there are people who will happily pay for them but won't spend similar amounts on high-quality pay-to-play single-player games. It's just a market with supply and demand.

As for distribution platforms, neither investors, publishers, nor game developers have any leverage against Valve, Microsoft, or Sony. They just do whatever they want. So you're totally right here. These kinds of monopolists can only be regulated by large political bodies like the US or the EU.


> Microtransactions exist because there are people who will happily pay for them but won't spend similar amounts on high-quality pay-to-play single-player games.

I can't speak for everyone, but I think this may be because with microtransactions you pay for additions to some product already known to be good (you tested it and you like it enough to buy some more), while with single-player games you typically have to pay upfront in the hope it will be good. So risk aversion sets in.


This is like saying that drugs only exist because there are people who will happily pay for them, but not spend similar amounts on high quality coffee (although that's debatable!).


To be fair, games often include an album's worth (or more) of music...


Why not both? Big tech is greedy. It is also a difficult domain. There are people and companies with passion. There are also big companies trying to milk every penny out of their customers, not caring about the product or the customers.

Take mobile games, for example. I am not sure how much passion goes into the majority of products in that space.


Counterpoint: Rockstar.

They take a long time to make games, but the quality is always very good.

At least enough for people to want to buy them over and over. Their games are not cheap but aren't more expensive than other triple-A titles, yet they make a ton of money and Take-Two Interactive's stock is doing great.

I'd say it's more that the gaming market is extremely competitive: either you're very good at what you do, or you have loads of money for marketing campaigns, but if you have neither it's barely profitable. Increasing the price of your games in that case won't solve the issue; people would just buy even fewer of your company's games.


> Most games fail to pay for their production costs despite all the effort that goes into them.

Yet for decades before the forced-online/microtransaction ecosystem, tens (hundreds?) of thousands of games were made, sold for a single price, and the industry spun on.

Nobody is complaining about the price, the complaint is about the indentured nature of modern game sales and the ephemeral state of the online elements that most players don't want or care about.


Doom and Doom 2... shareware, try it before you buy it. And moddability and open source later... id Software was great, shame they are gone now...


can respect your perspective about pricing, but do not forget how we "buy" things now. in reality, it is just a virtual lease of unspecified duration.

when physical ownership was possible, you could tend to have games that you can use for perpetuity. nowadays, you can lose access to what you buy for arbitrary reasons (see ubisoft example - https://news.ycombinator.com/item?id=40020961).

it then all boils down to similar grounds as mentioned for other media. all the dark patterns, forcing online connections for singleplayer experiences, and intrusive drm really downplay the labour of love argument for me. if it is all for passion for most game devs, they should not have monetary expectations of their audience, who dedicate thousands of hours to playing their works.

it should be well understood that streaming is not ownership, and it has been an unsustainable business model. but so is owning anything digitally by paying up front. at least the arrangement is more apparent for the former. i personally work on things that can be and are pirated, and having been on both sides, i would not demonize either one.


This is from a gamer's PoV, not someone in the industry:

AAA games are too expensive. Pricing for a retail product is not solely based on the cost to produce it; you have to price in what the market will bear, and I think the games industry just isn't doing that. £60 or £70 for a base game (that often has microtransactions in it, or is often kinda incomplete with major plot still to be delivered via DLC, usually with £80 or £90 'premium' editions) is a lot of money for what are already stretched budgets. Most gamers I know wait for sales with significant discounts (50% or more) before they even consider buying AAA titles.

If you can't make a game affordable, then maybe the big AAA industry is making the wrong games, oversaturating the marketplace, or quality is suffering. Starfield's a good example - years of work to produce a game that is resoundingly 'meh'. You can't expect customers to shell out £60+ for 'meh', no matter how many people or resources were used to create it.

There are a lot of smaller 'indie' developers, without hundreds of staff, that are making games the market engages with and seems to love. Anecdotally I know my friend group generally prefers these titles to AAA. They frequently fill a significant percentage of the Steam Top Selling lists, and these lists are sorted by revenue, not by number of sales. Their prices are more affordable (£10, £15, £20 or £30 are common price points) compared to AAA titles that are trying to cling to that £60 price point.


> services like Spotify [...] allow access to pretty much all the music ever created for a $10 fee. The math just doesn't work with video games.

Are you not describing Xbox Game Pass?


Did you read the article? The problem is not the price, the problem is you can't buy and own stuff no matter what you pay.


"If writing down your ideas always makes them more precise and more complete, then no one who hasn't written about a topic has fully formed ideas about it."

Apparently, even writing it down didn't help the author with this flawed deduction.


To be sure, the quoted text in the parent comment is itself the linked essay’s quotation of Paul Graham.

Whether logically rigorous or not, that excerpt seems to be the essay’s author’s way of rhetorically opening his reflections on the idea that writing verbally crystallizes thought.

As a reader, I do not believe that the author is making a claim that the quoted Paul Graham statement, reduced to symbolic logic, is in all respects valid or sound.


How is it flawed logically? Seems perfectly correct to me. Although I'd agree it's a bit over-literal. As if the emotional workings of the human mind can be precisely reasoned about (i.e. precisely enough to say "always").

Regardless, I've experienced this effect a lot when writing design docs. Iteration and objective criticism on a tangible thing (a doc) is an extremely effective way to see the problem from all sides.


Taking the statement completely out of context, it states: if A implies B, then not A implies not B. This is a logical flaw.

The correct statement from a logical point of view is: if A implies B, then not B implies not A.

In this case, even if writing down your ideas makes them more precise, there might be other methods that make your ideas more precise. Again this is just the logical point of view, out of context.


> Taking the statement completely out of context, it states: if A implies B, then not A implies not B. This is a logical flaw.

The statement in TFA is not that though. Instead, it is "if A implies B, then not A implies not C."

  A: writing about thoughts
  B: thoughts become more complete
  C: thoughts are most complete
If "A implies B" is true, then it also doesn't matter if other methods also make your ideas more complete, because "A implies B" means that writing would make them even more complete, therefore "not C."


You're perfectly right. It is indeed perfectly logical then. It could be reformulated like this: if f(A) > f(not A) then f(not A) is not maximal.

f: a function indicating how complete the thoughts are.

A: writing about thoughts.
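
Spelled out in that notation (the symbols below are mine, following the reformulation above, not pg's or the article's):

  % c(x): how complete idea x is;   w(x): idea x after writing about it
  Premise:     \forall x.\; c(w(x)) > c(x)
  Definition:  FullyFormed(x) \iff \forall y.\; c(y) \le c(x)   % y ranges over versions of x
  Conclusion:  c(w(x)) > c(x) \;\Rightarrow\; \neg FullyFormed(x)

So the premise only has to compare an unwritten idea with its written version; it never needs to claim that writing is the only way to improve an idea.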


What books can I read to reason like this?

EDIT: shortened sentence


This might sound strange but a book on real analysis or topology that walks through proofs could be one.


+1, pg is using a pretty typical argument you see in analysis/topology.

If you want to get to real analysis/topology the typical sequence is

1. Logic and Set theory (recommendation: How to Prove It, Velleman)

2. Linear Algebra (don't have a good recommendation)

3a. Real analysis (recommendation: PMA, Rudin)

3b. Topology (recommendation: Topology, Munkres)

I'm not sure I'd recommend learning math. It's an extremely expensive skill -- though pretty valuable in the software industry. People who go learn math are generally just drawn to it; you can't stop them even if you wanted to.

But be aware, (1) you'll have no one to talk about math with. And (2) you'll be joining a club of all the outcasts in society, including the Unabomber.


Disclaimer: I'm not OP and I haven't read the full post yet.

But the quote above says "If..." and then makes a statement that isn't true, and then draws a conclusion based on that false premise. I can tell you it isn't true because I can recall countless times in the last few months alone where writing down my ideas has resulted in a muddier thought, ideas lost while writing them down, or confusion and missing parts; it does not "always make them more precise and more complete". So the rest of the statement is just silly.

Sure, sometimes writing down ideas helps clear things up. Most times even. But always?! Definitely not.


The deduction is flawed because the success of one method (thinking with writing) does not necessarily disprove the success of other methods (such as thinking without writing).


You're objecting to the premise, not the conclusion*. The deduction is valid for the premise (the part in the 'if'). Well, assuming you accept that an idea that can be "more complete" isn't "fully formed", but I'd say that's definitional.

* Although it's not really right to use this kind of language here (premise, conclusion, deduction). It's a casual statement, so I suppose people can somewhat reasonably argue about it, but the assertion is tautological ('if something is incomplete, it isn't fully formed').


The keyword is "always". IF writing about something always improves it, that implies it cannot ever reach full potential without writing about it.


Or with writing about it. But there's an implicit "if you haven't already written about it". We might wonder what other implicit preconditions there are.

Similarly, if walking North always brings you closer to the North Pole, then you can never reach the North Pole without walking North, or at all. But look out for oceans.


There's no logical flaw here. An idea can't be fully formed, if it could be more precise and more complete.


Sure, and even ideas that have been written about can be more precise and complete, perhaps by writing more about them, for example, so no one has fully formed ideas by this logic.


And that’s probably true. I doubt anyone has ever expressed an idea that couldn’t be amended, clarified, or expanded upon in some way.


Depends on the idea. To me the whole article was too generic and handwavy, without giving specific examples of which kinds of ideas are actually fully formed and which are not.

What is the definition of an idea that is fully formed and sufficiently complex?


But also, if more writing can always make the idea “more complete,” then no one at all (even the people who write) has any “completely complete” ideas.


> "If writing down your ideas always makes them more precise and more complete, then no one who hasn't written about a topic has fully formed ideas about it."

> Apparently, even writing it down didn't help the author with this flawed deduction.

I think that it can be rescued, at some expense of awkwardness, by grouping not as one would expect ("(fully formed) ideas"), but in a slightly non-standard way:

> "If writing down your ideas always makes them more precise and more complete, then no one who hasn't written about a topic has fully (formed ideas about it)."

That is, if you haven't written about the topic, then you haven't understood it as precisely and completely as you could. While this is obviously exaggeration, I think that it's (1) logically consistent, (2) possibly what pg meant, and (3) a useful slogan, even if intentionally over-stated.


<div class="commtext c00">Yes this is some terrible logic, but the idea is true.

Writing about something fixes (most) wrong thoughts, and since you are wrong in 99% of cases you can safely say that you are wrong unless you have written about it.</div>


The deduction is logically valid; it's of the form "if <false statement> then <other false statement that would follow if the first were true>".

This is, of course, even worse than a logical error.


"If allspice makes food taste better, than no one who doesn't use allspice can cook well."


The analogy is probably something more like:

Salt is necessary to bring out the flavor in pretty much all food. So no one who doesn't use salt has made a good meal.

Because salt is much more irreplaceable than allspice in cooking, just like writing is difficult to replace in honing ideas.


> Apparently, even writing it down didn't help the author with this flawed deduction.

... or this writing improved an even more flawed original.


Or it gave the author unwarranted confidence in a flawed argument on the basis of a flawed assumption.


[flagged]


Only lately?


I thought this was using AI so I was gonna dismiss it. But then I saw the No AI sign and immediately signed up. Seriously though, why is "No AI" a "feature" worth mentioning up top?


Adobe had some clause where they could train AI on your creative work, effectively building a model that can ultimately plagiarize your work. No AI is a nice appeal in this context. That, and it being simple, fully offline, and not at the whims of execs trying to bump their share price with AI features that put the user second.


I wish people stopped equating AI with Adobe's content policy.


In this context that's a very reasonable assumption.


Yes, but 1) it's unnecessary conjecture when the facts already make them look bad, and 2) it's not the limit of what they could actually do - their TOS says (IIRC) "as long as it's for the purpose of improving our software, we can use anything you make". So they could use people's drawn art directly, for splash screens, or even (speculation) offer it as template/stock material for anyone who pays them for it.

It's not just AI.


Doesn't your behavior prove the point of showing "No AI"?


I think it was a joke


If you are on TikTok and you don't think the CCP has all the personal data TikTok has collected from you, I have a bridge to sell you.


I think this is a pretty commonly held belief at this point in time, and I'd be somewhat surprised if you could change even a single American TikTok user's behavior by convincing them of it, since, anecdotally, people seem to lump it in with their general feeling that tech and advertising companies already harvest and share vast amounts of personal information, and conclude that protecting that information is a lost cause, and they may as well use the fun app.


Unlike Facebook which would never collect any data?


Why is that a problem?


Well speaking for myself I'd generally like to keep my data away from nefarious communist regimes.


here to express my support for you before the neocommunists come piling in on you


Are there seriously neo-communists on YC? I can understand reddit, but here?


Yes. It's paradoxical, but apparent if you engage in any kind of political discourse over here.

Nerds have always had some of the worst political stances, not because they're dumb but because they bend in the face of the slightest pressure of losing social capital - in the current climate, this usually means submitting to the most psychotic leftist interpretation that is physically proximate to them.

Explains the politics of places like SF quite well.


How to start Google? You can't. That holds for 99.999999% of the people. The remaining 0.000001% aren't wasting their time reading Paul Graham's essays.


You're overestimating the importance of talent and underestimating the importance of luck.

Not saying Larry Page and Sergey Brin aren't talented - clearly they were/are very talented people. But they were also extraordinarily lucky. To take another example, if Gary Kildall had been a slightly more ruthless businessman and IBM had had a little more foresight, Bill Gates would not be a billionaire.


Agreed. BG actually comes across as less smart than several people I know; he does not have profound insights, and keeps repeating trite stuff.


This essay is for kids. Many successful founders likely listened to inspiring talks like this when they were kids, in places like Stanford or the prestigious high schools they came from.

The people who read here can read it for clarity, and for the chance to show this article to a smart nephew to inspire them. Like I just did.


We live in a world where proven maniacs (e.g. Putin) have access to an arsenal of nuclear weapons that can essentially make the earth uninhabitable for all humans. That's a very real possibility (with no ifs and buts and maybes) that exists now and we have learned to live with it. Yet somehow the hypothetical scenario of a human exterminator super-intelligent AI is getting all the coverage.


Making the Earth uninhabitable to all humans is bad news for Putin, a human who lives on Earth. He derives his power from hundreds of millions of people in Russia working, and his lifestyle from hundreds of millions of people around the world buying Russian exports so an attack on everyone is also going to harm him.

AI wouldn't be human, it wouldn't necessarily object to a world uninhabitable to humans, or be hurt by it. Even if it would be hurt by it, AI doesn't have a billion years of evolved survival instinct to preserve itself. A super intelligence has many ways to cause mass destruction that we can imagine but cannot yet do, whereas nuclear weapons are pretty much all-or-nothing explosions only. Nuclear assault could leave some remote places still inhabited; AI could make certain not to.


Innumerable times, quite reasonable political leaders have started wars that resulted in their own deaths, and sometimes in the destruction of civilization as they knew it (most recently World War I, which effectively put an end to the European monarchies).

This suggests that giving nukes to reasonable political leaders presents a high and concrete risk. Giving nukes to political leaders with crazy ideas presents an even greater risk.

Climate change seems to be another high and concrete risk.

AI escaping and taking over the world can also be considered a risk, but it's far more remote. It is similar to the risk of aliens attacking earth after detecting its TV and radio transmissions. Some people might argue that we should slow down the work on AI. Others may demand we also ban all TVs and radios on earth. Both proposals seem to be a bit of overkill given how remote the associated risks are. Especially given that we have far more pressing risks to address.

The concerns about AI risks are like a drunk driver going 90 mph on a country road and suddenly deciding to stop saying "goddamn" just in case Jesus returns and punishes those who invoked God's name in vain in violation of the Third Commandment.


"AI escaping and taking over the world" is a phrasing which inverts the situation to make it sound much safer; a special effects explosion has to get everything right to be safe. If anything isn't right, people could get hurt by shrapnel, by pressure, by heat, by smoke, by nearby structures being weakened and collapsing, by that causing breakage of steam pipes or other secondary effects, by it triggering a chain of other fires or explosions, by nearby people reacting e.g. swerving the car they are driving, etc. There are few safe outcomes and many dangerous ones. If a baby elephant has to jump over you while you are lying on the ground without crushing you, it has to be graceful and precise in a way that baby elephants aren't. Maybe it will see you as a wobbly unsafe landing place and try to avoid you for that reason if you're lucky. If there was such a thing as a 'baby monster truck' well it would behave more like a monster truck with a brick on the accelerator. And your defense is to pooh pooh the idea that a vehicle "would escape and try to kill you" handwaving away all the times runaway vehicles kill people without any intent to do so.

CUDA (2007), capable GPGPUs (circa 2012), Attention Is All You Need paper (2017), GPT 2 (2019), ChatGPT (2022), OpenAI valued at ~$28Bn (April 2023), OpenAI valued at $80Bn (Feb 2024).

Computers use fewer bits to store a list of countries than humans use braincells to do the same, and they have an easier time of it. Computers do arithmetic much faster than humans. Computers use fewer logic gates to do arithmetic than humans use braincells to do arithmetic. I don't think we can take it for granted that we need enough computing power to simulate 86 billion neurons in realtime before computers can show any glimmer of intelligence.

> "It is similar to the risk of aliens attacking earth after detecting its TV and radio transmissions."

We have high confidence that there's no quicker way to get here than the speed of light. We know that space travel is vastly complex and expensive, so the kinds of reasons humans went to war in the past (land, resources) do not apply to aliens - any species capable of making interplanetary warships can synthesise water, mine asteroids, build Dyson swarms, cheaper and quicker than coming here to take them from Earth. Even if they did want to destroy us, it wouldn't be Independence Day and Will Smith dogfighting; the aliens could piledrive Earth with one spaceship moving at interstellar speeds and bam, an extinction-level event we'd never see coming or have a chance to react to. When the meteorite crash extinguished the dinosaurs, the impact was like a megaton nuke every six kilometers and led to hours of sustained inferno as all the displaced rock rained back to Earth; being on the other side of the Earth didn't protect dinosaurs from being cooked. [1]

A thing with power which is also untrained, untamed, clumsy, unaware, fundamentally alien without even the shared mammal / living creature history, doesn't have to choose to attack us it can potentially end our rather fragile lives with its initial thrashing about.

> "given how remote the associated risks are. The concerns about AI risks are like a drunk driver going 90 mph on a country road and suddenly deciding to stop saying "goddamn" just in case Jesus returns and punishes those who invoked God's name in vain in violation of the Third Commandment."

GPT1 (2018), GPT2 (2019), GPT3 (2020), GPT4 (2023), LLaMA (Feb 2023), TogetherAI released LLaMA training set (April 2023), LLaMA-2 (July 2023), llama.cpp (July 2023) now with >1500 releases since then, Mistral AI (April 2023), Anthropic Claude 3 (March 2024), Bard, LaMDA, Bing chat, BloombergAI, Bart, Gemma, Gemini, Grok, Sora, Falcon, https://llmmodels.org/

I don't think a present-day LLM is going to be the superintelligent AI, but in the last ~5-10 years we've poured billions of dollars, tens of thousands of the world's smartest information processing people, the resources of the world's biggest companies, the greed and investment resources of the world's VCs, the open collaborative spread of the internet, and added the heat and chaos of hype and FOMO and nationalist competition and dangled results (like SoRA) that machines have never been able to do before. If this doesn't ignite it, maybe it cannot be ignited. But if it can be ignited, we're trying hard. Could it be as far away as 2200? 2100? 2050? Could it be as near as 2040? 2035? 2030? 2026?

To just handwave this away as "the risks are remote" isn't convincing. The risk is on our doorstep, Aladdin's cave is open, the lamp is found and people are shoving pipecleaners down the spout, cupping their hands over it and calling "halloooooo, is anyone in theeeereeee? Genieeeeee?". The lockpickers are at Pandora's box, and Pandora is stepping away and looking nervous. Louis Slotin is showing off holding the two halves of the AI Demon Core apart with a screwdriver, and we're the bystanders saying to each other "this can't be dangerous, there are no aliens in the room, lol". The future is here, it's just not evenly distributed yet - well, if the AI wakes up [2] we'll have a sixteenth of a second to understand and respond before light has distributed its influence both ways around the Earth and met on the other side. Our main hope is that there is no Genie and that Pandora's box is past its use-by date and the contents are dust. Because we sure aren't building a suitable containment chamber, cautiously scanning it, and standing well back.

[1] https://www.smithsonianmag.com/science-nature/what-happened-...

[2] https://scifi.stackexchange.com/a/28116 - Dial F for Frankenstein by Arthur C. Clarke ~1965


OK, so how does AI get its hands on nuclear weapons?


If your thinking is along the lines “if jodrellblank can’t convince me that a superintelligence is plausibly dangerous, off the top of their head, before my attention span wanders, then AI can’t be dangerous” that’s a really weak plan for protecting humanity.

Say it pops up on US or Kremlin computers and blackmails or bribes or threatens someone into launching the nukes. Say it places orders with a lab to build some robots which then become the AI's body, and it builds nukes itself. Say it places orders for custom biological things using stolen money which turn out to be extremely virulent, and then it doesn't need nukes. Say it finds a way to worsen global climate change and uses it to make earth uninhabitable on a short timescale, and doesn't need nukes.


Social engineering, for a start. Everyone with a secret will be a target.

Half the world will follow stupid leaders. An AI would run rings around your average voter. I'd be surprised if it couldn't run rings around basically every voter, given time.

We're also creating various kinds of robots, so the thing could just plug itself into the network it wants.

That's 5 minutes. An AI would be smarter than me and have a lot more time every second to think up better ways, and A/B test them on millions of people.

All it needs is one guy thinking "Ah, what's the worst that could happen? It promised me a million dollars."


Some contractor creates an internal webhook for testing the silo door that later gets accidentally hooked up to the full launch sequence and - during a billion dollar DoD mainframe upgrade project (to finally get away from COBOL) - gets exposed to the internet. MAD doctrine does the rest.

That is how the Anthropocene ends and the age of the machines begins. So say we all.


This is the kind of nonsense that happens at startups, not government agencies with the capability to destroy cities.


It sounds like you may not have read the "broken arrow" incidents Wikipedia page:

https://en.m.wikipedia.org/wiki/List_of_military_nuclear_acc...

There are a number of hair-raising stories tucked away in there.

Paired with knowledge of things like Stuxnet (https://en.m.wikipedia.org/wiki/Stuxnet), the broken arrow list fills me with worry about what might happen with nuclear weapons systems in our modern, hyper-connected world.



if trump can figure it out, an AI probably could


You don't even need to be Putin. If you cook up the right chemicals, or breed the right virus, you can kill many thousands of people, maybe millions.


Right? Humans are more than capable of severely harming civilization at any point, but for some reason this AI narrative is so much more compelling than the boring old issues we're used to. I still don't get the reasoning - we have pretty good text and image generators, yes - but the article jumps ship to a whole different universe, one where "AI" is an actual entity that has needs and that outpaces humans in all possible facets. Yet, despite the wide gap between what we have and what they're talking about, this level of AI is treated as if it's not just feasible, but basically already here.

The article constantly refers to a strange niche sub-community of pro-AI people, and extrapolates it to say that almost anyone who backs new technology is a hyper-capitalist libertarian who just wants to see the world burn for the sake of money. I feel that the opposite ideology is also far from pure - with this immense reaction to generative AI, it almost feels like big companies are capitalizing on fear to promote regulations that shut out anyone who's not a big company that publishes fancy charts about "risks of catastrophe".


IMO Unreal Engine is the best deal available and fits >90% of use cases for game developers. Unless you are building something for the web or low-powered mobile VR, I wouldn't even consider anything else. For PC and console games, UE5 provides an incredible amount of tools and flexibility. It's also great for building 2D/3D mobile games. People who say the complexity of Unreal has stopped them from using it have gotten it wrong. UE5 provides you with a lot of tools; you don't have to use them all. But if you are thinking of building something more complex than a hello-world example, you'd realize that the additional tools UE5 provides greatly save you time.

If you are a total beginner you can use Blueprints to write the game logic and use the existing out-of-the-box tools. If you are a more experienced programmer you can use C++ to build custom components/plugins to get more customization.

I remember a time when game engines were these precious secret tools that you had to pay millions of dollars to get a license for. Now you can get the full source of UE5 on GitHub for free. And you pay something like 5% after 1 million dollars of revenue. This is just a no-brainer, folks. IMO 5% is totally deserved and justified. In fact it's a bargain and you save money by paying Epic 5% compared to anything else out there. Use UE5 unless you have a really, really, really good reason not to.


> IMO Unreal Engine is the best deal available and fits >90% of use cases for game developers.

The only place Unity really shines (both IME and in industry) is mobile games. Unreal IIRC doesn't really have even close to comparable support for mobile platforms. Mobile games by revenue make up more than 10% of the games industry, so I would say the ">90% of use cases" thing is just untrue.

> And you pay something like 5% after 1 million dollars of revenue.

> IMO 5% is totally deserved and justified.

Again, for mobile games, that's 5% after 1m in revenue, which also includes Apple and Google's 30% cut. So again, no, it's really not a good deal _at all_.

In retrospect this comment feels like some form of advertisement for UE5 more than actual discussion.


> Mobile games by revenue make up more than 10% of the games industry

I've said this already in another recent thread but mobile games and pc/console games are two entirely separate markets, with different potential customer pools.

Conflating them together makes about as much sense as conflating console/pc games with accounting software.


> I've said this already in another recent thread but Mobile games and PC/Console games are two entirely separate markets, with different potential customer pools.

Yes, agreed. But again, as I've already said, the reason to point out mobile games specifically is because that's where much of Unity's success has come from. There's really no point in discussing Unity's monetization efforts without also discussing the mobile games industry. PC and Console games just aren't written in Unity at the frequency or scale that it would matter.

> Conflating them together makes about as much sense as conflating console/pc games with accounting software.

Yes, which is why I pointed out that the above post combines them when they ought not to be.


Sorry, you managed to trigger my pet peeve faster than I managed to properly read the whole conversation :)


>Again, for mobile games, that's 5% after 1m in revenue, which also includes Apple and Google's 30% cut. So again, no, it's really not a good deal _at all_.

Not necessarily, let's do the math:

For 1m downloads, what UE costs depends on how much money you make. But regardless of how much money you made, Unity's new plan on Enterprise (in the worst case, because they have not specified whether the charges start after 1m installs or apply to the first million as well) will cost you $46,500, on top of Enterprise's per-seat pricing. In this case, the cutoff point where Epic costs more is if you made more than $930k in revenue. But since Epic waives the first million, this actually means you need to make $1.93m in revenue before it cancels out.

And if we go further along (where Unity's Enterprise prices start to stabilize at $0.01 per install), if you hit 5m downloads you are charged a total of $106k; the breakpoint here for UE is if you made $3.1m in revenue (again, waiving the first million).
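
A rough sketch of that breakeven arithmetic, taking the Unity fee totals quoted above as given inputs (the exact Unity tiering may differ, so treat this as illustrative rather than an official price sheet):

  # Rough breakeven sketch; the Unity fee totals ($46,500 at 1m installs,
  # $106k at 5m installs) are taken from this comment as given inputs.
  def unreal_royalty(revenue):
      # Epic: 5% of gross revenue beyond the first $1m.
      return 0.05 * max(0.0, revenue - 1_000_000)

  def breakeven_revenue(unity_fee_total):
      # Revenue at which Epic's royalty equals a given Unity fee total.
      return unity_fee_total / 0.05 + 1_000_000

  for installs, unity_fee in [(1_000_000, 46_500), (5_000_000, 106_000)]:
      print(f"{installs:>9,} installs: Unity ~${unity_fee:,}; "
            f"UE costs more once revenue exceeds ~${breakeven_revenue(unity_fee):,.0f}")
  # -> roughly $1,930,000 and $3,120,000, matching the $1.93m / $3.1m figures above.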

----

By the looks of things, for a mid-revenue game UE looks better, but at higher revenues Unity starts to win out. In particular, mobile games tend to rely on whales, which can push the attach rate MUCH higher than $2/user (you may have 99 users paying nothing, but a whale dropping $1000 brings the arithmetic mean to $10/user), so Unity will win out. Funnily enough, the less ethical f2p games may still prefer Unity over Unreal.

For games that rely on ads or subscriptions, though? Absolutely fucked over. Drastically. You simply cannot make an ethical mobile app with Unity anymore, as past a certain threshold every user who visits and leaves within 10 minutes is costing you money. If you had a bad launch with lots of users but barely any revenue, you can legitimately end up in the red for using Unity, as it would be better to shut down your app and relaunch under a different name than to try and recoup the costs with the current app.

----

I won't ramble on too much longer, but I do want to add one more tidbit to keep in mind. Game Pass and Apple Arcade are also factors, and Unity said they would charge the distributors for this. In the worst case, this can mean that Microsoft/Apple can remove your existing games from these services and disallow Unity games from being hosted. So if you want to one day utilize these kinds of subscription services, you may not even have such a choice to begin with.


30% for distribution is fine, 5% for more than half the cost of development is not?

Fortnite is also a top selling mobile game developed with UE5. You can make great mobile games with UE5 now. Mobile hardware right now is comparable to last gen consoles.


> 30% for distribution is fine, 5% for more than half the cost of development is not?

Quote me where I said the 30% tax was "fine" please.

> Fortnite is also a top selling mobile game developed with UE5.

Yep! So we've gotten to the exception that proves the rule. Aside from Fortnite, which is written by Epic, Unreal Engine hasn't had even close to the success or adoption on mobile platforms as it has elsewhere.

> You can make great mobile games with UE5 now.

You can make great mobile games in javascript. This isn't really about "can".


>Unreal Engine hasn't had even close to the success or adoption on mobile platforms as it has elsewhere.

Not sure if that's a fair comparison. UE existed for 15 years before smartphone applications existed. Unity's first public release was Mac-only and focused a lot on making web and iOS apps. No surprise that Epic's decade-long dynasty didn't carry over to mobile when they never put a strong emphasis on it to begin with.

IIRC Unreal Engine has 15% mobile market share, so it's not an unviable option, especially in times when even mobile games are starting to join the open-world action frenzy.


> 5% for more than half the cost of development is not?

The same can be said about Unity asking you to pay $0.05 per install (which is massively cheaper than Unreal for any non F2P game)


It isn't though, in the context of an f2p mobile game. Under one model, only paying customers cost you. In the other, you're paying for every drive-by (re)download.


Yeah, certainly. This feels like such a weird model. Many developers will end up paying much less than 1%, maybe even closer to 0.1%, while for others it could be much higher than 5%.

And the worst thing is that they'll apply it retroactively, so your only choice will be to take down the game or pay whatever Unity asks.


> Again, for mobile games, that's 5% after 1m in revenue, which also includes Apple and Google's 30% cut. So again, no, it's really not a good deal _at all_.

5% of 70%? Or 30% + 5%?

Regardless, price your game in a way that makes you money.


> IMO 5% is totally deserved and justified. In fact it's a bargain and you save money by paying Epic 5% compared to anything else out there.

How? That's significantly more expensive than Unity if you make more than $2 per user or so.


IMO AI is more underrated than overhyped. The scale of value that AI can bring may be larger than what the internet brought. But product design and engineering haven't caught up with the science yet. I think we are looking at AI too narrowly. LLMs are cool, but investors should look beyond that. ChatGPT wrappers aren't the next big thing. The fact that LLMs and image generation models work as well as they do now should give investors a signal that the science of AI is approaching a tipping point where it's finally good enough to be incorporated into products. I see potential, in 10 years' time, for a new FAANG: 5-trillion-dollar companies with heavy reliance on AI that bring automation to various aspects of our lives.


I agree. LLMs have a ton of unrealised applications in business. Imagine training one on your company wiki and chat history.

Barely any companies have done that yet because of legal and security concerns and because it isn't easy to do yet, but that will change.

It's not going to be long before someone makes an end-to-end speech-to-speech model: a single model that incorporates speech recognition, an LLM, and speech synthesis. In fact I'm really surprised it hasn't happened already because it's such an obvious thing to try. That's going to blow people's minds.


Yes, that's probably true. The article draws a parallel between the current AI hype and crypto, but there's a huge difference. Crypto didn't bring any benefit to anyone and didn't do anything one couldn't already do before, with orders of magnitude better efficiency and security.

The current situation is more like the early dot-com boom of the 2000s; Webvan and pets.com or Altavista were ill-executed but they weren't stupid ideas. It was then that Amazon and Google were founded.


As a Comcast customer who's paying 5x what I was paying in Europe for half the bandwidth, I want this to succeed. However, let's not celebrate a win before actually delivering the service to the customers. There's more to building a business than seed funding it. The pessimist in me would say that if there were a viable path to providing high-speed, low-cost internet in the US, surely companies with a lot on the line like Netflix, Google, Amazon, etc. would make it happen. If you think it's all Comcast profit margins, you can always go and buy Comcast stock to get your share of that profit. They are doing well, but not spectacularly well.


As someone who did a PhD and published and read many papers (in ML), I believe the vast majority of papers in ML are misleading. If every paper that claims better performance over the state of the art were in fact true, we would have solved AI by now. You see all sorts of problems when you dig deeper into the technical details of peer-reviewed publications (even in top-tier conferences), including misleading baselines, statistical insignificance of improvements, overfitting to test data, and in some cases just flat-out fabrication of results.

I hope this is not as bad in medicine and health related research. But just thinking that some paper can be used against you in court to claim billions of dollars in damages makes me uneasy. Peer-reviewed paper != science. Peer review is a crude filter on research that can both accept bad work and reject good work. There must be a higher bar for something that can be used in a court of law. At the very least, some sort of scientific consensus should be required.

It's easy to dismiss this because screw J&J. But I think we are all paying for these lawsuits through our insurances and taxes and higher drug prices.

Not saying these lawsuits don't have merits, but I think there must be a higher bar for what is presented as evidence.


When you become a world expert in a very narrow area of a field - like when you do a PhD - I found that you discover that around 50% of papers are either pointless, misleading, or wrong.

Let me be clear - the largest proportion of those are the pointless ones (the reader already knew what's in the paper).

Some of this is because research is hard; a lot of it is because of the immense pressure to publish and the strong bias for positive messages ("this approach is better", "we discovered X") to be published, rather than stuff that says "we tried this but it didn't work", etc.

> I hope this is not as bad in medicine and health related research.

Same pressure, same human factors.

There is of course an additional factor in cases like this - if your research hints at a link between talc and cancer - is it ethical not to publish while you wait another 5 years for a longer study?



"Safe and effective"


We need a Journal of Mediocre Results, or a Journal of Tried It and It Didn't Work.

As with social media and society in general, only the most inflated headlines get attention, causing a gold rush of hyped-up results. Nothing can be published without hype and hyperbole, which then sets the benchmark for the next round of hype.


> We need a Journal of Mediocre Results, or a Journal of Tried It and It Didn't Work.

Publication in this journal would be used as evidence against lawsuits.

Cigarette companies paid for studies designed to produce no conclusive evidence.

I don't see this as a route to progress. What if all you did wrong was follow the wrong process? The bacteria grow better at 10°C than at 30°C. This journal would be full of results from fools and charlatans. The inclusion criteria would have to be much more complex for it to be useful.


Former chemist here. I disagree - you try many, many reactions that fail. It would be good to just see what was tried, so you can 1) change the conditions and try again, or 2) avoid an approach altogether. I joke in many of the forums here about making The Journal of Failed Chemistry. But I am very serious about saving time so 10 different PhD candidates don't waste the same time I did.


Better yet, your documented "failure" may be an unknown path to "success" in someone else's research context.

This would basically be the same as caching the results of a brute-force attack; except instead of trying to break the entropy of encryption, we are trying to unravel the entropy of chemistry and physics.


Bad actors will act bad in any environment in which they're able


There are already a number of these: searching for “journal of negative results” turns up several


You say it as if that is something that should be accepted.


Is their desire to change it not evidence that they don't accept it? What they want is a real solution rather than to simply complain about the problem while doing nothing (which is closer to acceptance).


> I hope this is not as bad in medicine and health related research.

I have friends and family working in medicine and health related research, and I don't think it is bad. In fact I'm a bit amazed at how good medical academia is at science and, as a comparison, how bad computer science (CSc) is at science.

With very few exceptions, CSc papers don't have any actual science: they don't have a hypothesis and a method to test it. Instead, most papers in CSc can be summarized as "I did this new thing and I find it cool"; the peers simply don't expect you to do actual science. In comparison, in medicine papers you are expected to follow the scientific method to a T: they have a hypothesis and test it (often relying heavily on statistics). Yes, some medicine papers are bad (with bad methodology, bad sampling, or bad statistics like p-hacking), but even the bad ones try (and fail) to follow the scientific method (or at least pretend to try, in the case of malicious papers), because the standard is that high.


It is as bad in medicine, if not worse. Recently a very influential paper about Alzheimer's was pulled [1] because of almost everything you said, but for biology instead: data fabrication, overestimated numbers, etc.

There's also a case related to cloning (which includes stem cell research) and massive fraud in South Korea 20 or so years ago. [2]

It's a complicated matter: research is a beacon of light on top of a mountain of failed experiments. And not even the government will fund a mountain of failures if the shining discovery at the end can't bring returns to society (or shareholders) greater in value than what you took.

And the pharmaceutical industry never settled for small results. And, unlike ML, it's much harder to test things as an outsider (you can hate the hype, but OpenAI, Hugging Face, and all these "open" language models make it much easier to learn, test, tweak, and improve things without a master's degree in data engineering).

It's a grimy area to research even as a pastime. There are a lot of recorded practices, both in research and in production, that make you question how low a person can go for money.

1: https://www.science.org/content/article/potential-fabricatio...

2: https://youtu.be/ett_8wLJ87U


> It's easy to dismiss this because screw J&J. But I think we are all paying for these lawsuits through our insurances and taxes and higher drug prices.

1) You think these lawsuits have a bigger effect on insurance premiums than an increase in the incidence of ovarian cancer?

2) This is a choice we make as a society, and has little to do with lawsuits. J&J is not the victim of how we finance health care, it is the beneficiary.


1) Did you run the numbers?


There is a saying (at least in Spanish): "You are the owner of your silence and a slave to your words".

Maybe you need to moderate your claims before publishing something that could cause severe damage to someone. (That is why most news websites use hedging words when publishing something: reported/may/thought/potentially/etc.)


I don't know if you looked into pursuing an academic career and running the post-doc circuit (maybe that doesn't happen in ML), but academia is ultimately a business and academics are overwhelmingly pressured to publish and increase their stats.


How do you measure consensus? There are many researchers who claim there's a consensus on whatever they happen to believe, but when counter-examples are pointed out they start No True Scotsmanning ("no real expert believes...").

Health-related research is in a much worse state than ML, unfortunately. ML suffers from metrics gaming and overfitting, but there's probably not much outright fraud? Common estimates are that half of all medical papers make false claims. This blog post of mine from a few years ago has some quotes from editors of well-known medical journals in which they express disbelief in their own published research base, and some examples of blatant fraud [0]. And this interview with Marc Andreessen indicates why VCs don't invest much in biotech startups [1]:

I had a conversation with the long-time head of one of the big federal funding agencies for healthcare research who is also a very accomplished entrepreneur, and I said, “do you really think it’s true that 50-70% of biomedical research is fake?” This is a guy who has spent his life in this world. And he said “oh no, that’s not true at all. It’s 90%.” [Richard laughs]. I was like “holy shit,” I was flabbergasted that it could be 90%.

[...]

I said “good God, why does the other 90% continue to get funded if you know this?” And he said, “well, there are all these universities and professors who have tenure, there are all these journals, there are all these systems and people have been promised lifetime employment.” Anyway, a longwinded way of saying that we have pretty serious structural and incentive problems in the research complex.

In the same interview Andreessen references a study [2] that implies most medical/drug research stopped working overnight in the year 2000, the reason being that in that year the US govt started requiring drug trials to pre-register their hypotheses and then evaluate their results relative to those hypotheses, i.e. they tried to prevent p-hacking.

We identified all large NHLBI supported RCTs between 1970 and 2012 evaluating drugs or dietary supplements for the treatment or prevention of cardiovascular disease [...] 17 of 30 studies (57%) published prior to 2000 showed a significant benefit of intervention on the primary outcome in comparison to only 2 among the 25 (8%) trials published after 2000

[0] https://blog.plan99.net/fake-science-part-i-7e9764571422

[1] https://www.richardhanania.com/p/flying-x-wings-into-the-dea...

[2] https://journals.plos.org/plosone/article?id=10.1371/journal...


> well, there are all these universities and professors who have tenure, there are all these journals, there are all these systems and people have been promised lifetime employment.”

Odd that tenure is singled out as the core of the problem. I'd say the undermining of tenure is a large part of the problem.

Research is hard - very hard - you are being asked to discover stuff nobody else has, to push the boundaries of knowledge. I would argue that the idea that one failed research project puts you out on your ear, because of a gap in publishing, is a core part of the problem.

In terms of medical doctors doing research - I'd argue that's an especially dangerous mix - medical doctors need to have an element of self-belief just to be able to do their job (life and death decisions). Instilling it is part of the training.

That self-belief, plus a lack (in most cases) of any in-depth scientific research training, is a dangerous mix.

Medical training != science training.

In this specific case I have no idea about the rights and wrongs.


Research might be hard, harder than whatever else the rest of the world is doing, but the tenure system is not giving us the best researchers. It rewards those who are in the right place at the right time - it's mostly the luck of the current spectacle - and then it sets those "accomplished" people up for failure as they eventually see their luck running out. They are very motivated to use less and less ethical tools for years to keep producing "results".


Eh?

Why does having job security motivate you to make stuff up - incidentally, the only way you might actually lose your job?


not the job security, but the same thing that is required to get into that job with the job security


Weird how the continuous weakening of the tenure system over the past 20 years or more hasn’t upped the research quality then


Research quality went up in fields where the community (and/or the funding orgs) started demanding things like pre-registration, data availability, and so on.


This also describes the ultra-wealthy quite well, but few are in favor of a wealth tax.


That doesn't make those few wrong. (Also, the number of supporters seems to be growing. Piketty's book - Capital in the Twenty-First Century - made big waves, and it argues for a wealth tax, for example.)


But because “some day it’ll be me and I won’t want to pay that”


Having a similar background (PhD in math, working in deep learning), and so knowing nothing of the specifics of medicine, I think that, if anything, the situation there is probably _worse_. Results are less transparent, and you can't just read a paper and know if the results were made to "look nicer" if you don't have the raw numbers from the experiments (which you almost never have). At least in ML you can typically get an idea by reading the paper attentively, and if you're determined enough, often you even have the source code to check.

I guess that's why in medicine (and some other sciences) meta-reviews are so important: they read dozens or hundreds of papers on the same topic, compare methodologies, and try to deduce more realistic confidence intervals to limit statistical noise. That's how we now know that smoking definitely leads to a higher probability of cancer, and most likely that's the case with red meat (and even more likely with smoked or otherwise preserved meat). The risk increases are small enough that a single study can never have the statistical power to prove them. Otherwise, when the famously wrong paper linking vaccines and autism came out, all the families of poor autistic children could have sued a lot (it was later proved that that one paper was fabricated with malice, but that doesn't change anything; if it had been a random error the effect would have been the same).

This is why I share your concern about the billions of dollars in lawsuits due to this one paper. I have no idea who is abstractly "right" - it might be that there was asbestos, or maybe not, and it might be that these talc products increased cancer likelihood, or maybe not. But health, and particularly cancer, is tricky: we all develop multiple cancers, sooner or later, unless we die before then. Trying to pin a specific cancer on a source is like finding the proverbial last straw that broke the camel's back - except that in this case it's not the sum of the straws that kills the camel, but rather one random straw out of its whole load.

That said, I don't want to use this argument to wave away company responsibilities - of course companies that knowingly neglected safety measures, or endangered patient safety, should be punished with large fines (and possibly forced to shut down, in particularly extreme cases). As a European, I'm just uncomfortable with the idea of settling this in spectacular "patient vs. company" trials, where everything is dictated by the ability of lawyers and the will of judges, with results rather randomly ranging from nothing to literally making you rich (and again, your particular cancer might or might not be caused by that... and you can't really know, in any way). With the small additional caveat that you must have survived it in the first place for the company to actually be significantly punished.


> If every paper that claims better performance over the state of the art were in fact true, we would have solved AI by now.

"Better" is not a single axis. By your logic, all I need to do is jump higher and higher, and sooner or later I will be able to fly!

You aren't alone in this line of thinking. This fallacy has been written all over "AI" since we started calling it "AI"! That was the biggest mistake of all: by calling any arbitrary project "an Artificial Intelligence", we declare it an instance of the end goal! Now it just needs to be "better", never "different".

And that's what it means for a company to fail: it doesn't just need to be better, it needs to be different. If we don't allow failure to happen, we can never explore "different".

