
Unity was never profitable and was always dependent on VC money.

For sure it's a great product, but the truth is that it was subsidized by VC capital and is not a sustainable business.


It's slightly better than operating cashflow neutral. It's growing at a very healthy rate. It's definitely sustainable.

The debate is over whether it's worth $5 billion or $50 billion. It's not as easy as one might think to figure out which.


> slightly better than operating cashflow neutral

Only because they are issuing massive amounts of stock (IIRC ~15% per year) to pay their employees and not doing any buybacks. How sustainable is that?

> It's growing at a very healthy rate

The problem is that it’s not. YoY growth they reported yesterday was -2% and they are also guiding for negative growth next year.


Sheesh, I haven't watched closely for a couple qtrs and only saw the headline. You are right.


Can you explain the "issuing stock to pay people"? 15% per year sounds like quite a bit to me... is this the norm in Silicon Valley? I presume this is in the 10-K?


A good portion of compensation is in stock. There is nothing wrong with it per se. It shows up on the income statement as an expense - all good. But then when people quote "operating cashflow" the company has added back that expense because it wasn't in cash. So the quoted operating cashflow (which in theory should be "real") becomes more fake from an economic viewpoint.

It's not bad, you just have to be aware of it. It's in the 10-K - right there in the income statement (the opex lines will all have some % which is equity) and the cashflow statement (the operating cashflow calc explicitly has a line for it).
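
To make the add-back concrete, here's a minimal sketch with made-up numbers (illustrative only, not Unity's actual figures):

    # Hypothetical income statement, in $M (made-up numbers)
    revenue   = 1000
    cash_opex = 900   # expenses actually paid in cash
    sbc       = 150   # stock-based compensation: a real expense, but non-cash

    net_income = revenue - cash_opex - sbc   # -50: a loss on the income statement
    operating_cash_flow = net_income + sbc   # +100: SBC added back as a non-cash item

    print(net_income, operating_cash_flow)   # -50 100

So a company can report positive operating cashflow while shareholders effectively pay that expense through dilution.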


It certainly could be a smaller, profitable company, but that's not how the world works, apparently.


That is actually an interesting question. We track how many accidents automated cars cause, but I'm not sure whether anyone tracks how many accidents they prevent.


That metric is tracked indirectly, by the lower number of accidents that AVs are involved in per distance travelled compared to human-driven vehicles.


> by the lower number of accidents that AVs are involved in per distance travelled compared to human-driven vehicles

Just a note that this number is hard to calculate accurately with an acceptable degree of certainty.

Anyone claiming that AVs are involved in fewer accidents per distance traveled than human drivers is either extrapolating from incomplete data or baking some unreliable assumptions into their statement.

Welch Labs has a good introductory video to this topic: https://youtu.be/yaYER2M8dcs?si=XEB4aWlYf6gnnTqM


Tesla just announced 500 million miles driven by FSD [1]. Per the video, were it fully autonomous they could have a 95% CI on "safer than human" at only 275 million miles [2], but obviously having human supervision ought to remove many of the worst incidents from the dataset. Does anyone know if they publish disengagement data?

[1] https://digitalassets.tesla.com/tesla-contents/image/upload/...

[2] https://youtu.be/yaYER2M8dcs?t=477


This just shows how statistics can mislead. I own a Tesla with FSD and it's extremely unsafe for city driving. Just to quantify, I'd say at its absolute best, about 1 in 8 left turns result in a dangerous error that requires me to retake control of the car. There is no way it even comes close to approaching the safety of a human driver.


I only spent 3/4 of my post adding caveats, geez. Thanks for the first hand intuition, though.


The caveats are missing the point that FSD is very obviously less safe than a human driver, unless you constrain the data to long stretches of interstate road during the day, with nice weather, clearly marked road lines, and minimal construction. Even then, my "intuition" tells me human drivers are probably still safer, but under typical driving conditions they very obviously are (at least with Tesla FSD, I don't know about Waymo).


The reason why I spent 3/4 of my post on caveats was because I didn't want people to read my post as claiming that FSD was safe, and instead focus on my real point that the unthinkable numbers from the video aren't actually unthinkable anymore because Tesla has a massive fleet. You're right, though, I could have spent 5/6 of my post on caveats instead. I apologize for my indiscretion.


> my real point that the unthinkable numbers from the video aren't actually unthinkable anymore because Tesla has a massive fleet

Yes, I'm addressing that point directly, specifically the fact that this "unthinkable number" is misleading regardless of the number's magnitude.


FSD's imperfections and supervision do not invalidate their fleet's size and its consequent ability to collect training data and statistics (eventually, deaths per mile statistics). The low fleet size assumption in the presentation is simply toast.

If I had claimed that the 500 million number indicated a certain level of deaths-per-mile safety, that would be invalid -- but I spent 3/4 of my post emphasizing that it did not, even though you keep pretending otherwise.


You could start by comparing highway driving, where I think Tesla actually is quite good.


Tesla's mileage numbers are meaningless because the human has to take over frequently. They claim credit for miles driven, but don't disclose disconnects and near misses.

California companies with real self driving have to count their disconnects and report all accidents, however minor, to DMV. You can read the disconnect reports online.


Do you trust claims and data from Tesla?


Do you think they lied about miles driven in the investor presentation?

Nah, that would be illegal. Their statement leaves plenty of room for dirty laundry though. I'm sure they won't disclose disengagement data unless forced, but they have plenty of legal battles that might force them to disclose. That's why I'm asking around. I'd love to rummage through. Or, better, to read an article from someone else who spent the time.


> Nah, that would be illegal.

Musk has violated many rules regarding investors.


Note that it would need to drive those 275 million miles without incident to be safer than a human.

Which for Tesla's FSD is obviously not the case.

https://www.motortrend.com/news/tesla-fsd-autopilot-crashes-...


Your video and my response were talking about fatal crashes. Humans don't go 100 million miles between crashes.

Has FSD had a fatality? Autopilot (the lane-follower) has had a few, but I don't think I've heard about one on FSD, and if their presentations on occupancy networks are to be believed there is a pretty big distinction between the two.


Isn't "FSD" the thing they're no longer allowed to call self driving because it keeps killing cyclists? Google suggests lots of Tesla+cyclist+dead but with Tesla claiming it's all fine and not their fault, which isn't immediately persuasive.


> Google suggests lots of Tesla+cyclist+dead but with Tesla claiming it's all fine and not their fault, which isn't immediately persuasive.

With human drivers -- are we blaming Tesla for those too?

You do you, but I'm here to learn about FSD. It looks like there was a public incident where FSD lunged at a cyclist. See, that's what I'm interested in, and that's why I asked if anyone knew about disengagement stats.


It appears that the clever trick is to have the automated system make choices that would be commercially unfortunate - such as killing the cyclist - but to hand control back to the human driver just before the event occurs. Thus Tesla are not at fault. I feel ok with blaming Tesla for that, yeah.


Is that real? I've heard it widely repeated but the NHTSA definitions very strongly suggest that this loophole doesn't actually exist:

https://static.nhtsa.gov/odi/ffdd/sgo-2021-01/SGO-2021-01_Da...

> The Reporting Entity’s report of the highest-level driving automation system engaged at any time during the period 30 seconds immediately prior to the commencement of the crash through the conclusion of the crash. Possible values: ADAS, ADS, “Unknown, see Narrative.”


"It appears" according to what?

Stuff people made up is a bad reason to blame a company.


From here[1]:

> The new data set stems from a federal order last summer requiring automakers to report crashes involving driver assistance to assess whether the technology presented safety risks. Tesla‘s vehicles have been found to shut off the advanced driver-assistance system, Autopilot, around one second before impact, according to the regulators.

[1] https://www.washingtonpost.com/technology/2022/06/15/tesla-a...


You also need to cite them using that as a way to attempt to avoid fault.

Especially because the first sentence you quoted strongly suggests they do get counted.


Yeah, their very faux self driving package.


Can someone summarize the video? That was my first thought as well: crash data for humans is clearly underreported. For example, police don't always write reports, or the drivers agree to keep it off the books.


The probability of a human driver causing a fatality on any given mile driven is 0.00000109% (1.09 fatalities occur per 100 million miles driven).

Applying some basic statistics: to show, at a 95% confidence level, that a self-driving system causes fewer fatalities than a human driver, you would have to drive 275 million autonomous miles flawlessly.

This would require a fleet of 100 vehicles to drive continuously for 12.56 years.
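
A quick back-of-the-envelope check of those two numbers (the zero-fatality requirement and the human rate are from above; the ~25 mph round-the-clock average speed is my own assumption to make the fleet math work out):

    import math

    # Human baseline: 1.09 fatalities per 100 million miles
    p = 1.09e-8  # per-mile fatality rate

    # With zero fatalities observed, showing "safer than human" at 95% confidence
    # requires P(0 fatalities at the human rate) < 0.05, i.e. exp(-p * n) < 0.05
    miles_needed = math.log(1 / 0.05) / p  # ~275 million miles

    # Fleet time, assuming a 100-car fleet averaging ~25 mph around the clock
    years = miles_needed / (100 * 25 * 24 * 365)

    print(f"{miles_needed / 1e6:.0f}M miles, {years:.1f} years")  # 275M miles, 12.5 years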

And in practice self-driving vehicles don't drive perfectly. The best estimate of the number of miles actually needed to validate their safety is around 5 billion autonomously driven miles, and that's assuming they really are safer than a human driver.

Then you get into the comparison itself. In practice AVs don't drive on all the same roads, at the same times, as human drivers. A disproportionate number of accidents happen at night, in adverse weather, and on roads that AVs don't typically drive on.

Then you have to ask if comparing AVs to all drivers and vehicles is a valid comparison. We know, for instance, that vehicles with automated braking and lane assist are involved in fewer accidents.

Then of course, if minimizing accidents is really what you care about, there's something easy we could do right now: just mandate that all vehicles have a breathalyzer ignition interlock. We do this for some people who have been convicted of DUI, but doing it for everyone would eliminate a third of fatalities.


> Then of course, if minimizing accidents is really what you care about, there's something easy we could do right now: just mandate that all vehicles have a breathalyzer ignition interlock. We do this for some people who have been convicted of DUI, but doing it for everyone would eliminate a third of fatalities.

In a similar vein, if we put geo-informed speed governors in cars that physically prevented you from exceeding, say, 20% of the speed limit, fatalities would also likely plummet.

But people haaaaate that idea.


I'm fine with it notifying the driver that it thinks you might be speeding, but I don't like the idea of actually limiting the speed of the car. I've used several cars that had a lot of cases where they didn't track the speed right: driving near but not in a construction zone, driving in an express lane with a faster posted speed than the main highway, school zones. I've seen cars routinely get these things wrong. A few months ago I was on an 85 MPH highway and Google Maps suddenly thought I was on the 45 MPH feeder. Add 20%, and that's a 54 MPH max speed. So what, my car would have quickly enforced the 30 MPH drop and slammed on the brakes to get into compliance?

I'd greatly prefer just automatic enforcement of speeding laws rather than doing things to try and prevent people from speeding.


Honestly I would think something like transponders on every freeway would work better than GPS. Regardless, I think everyone in the thread could think of 10 technological ways of making this work. I think the biggest barriers are political, not logistical, and definitely not engineering.


So we spend a ton of money putting in transponders and readers in cars which still have various failure modes, or we just put cameras on the highways and intersections and say "car with tag 123123 went from gate 1 to gate 2 in x minutes, those are y miles apart, average speed had to be > speed limit, issue ticket to 123123".

The toll roads could trivially automatically enforce speed limits. They already precisely know when each car goes through each gantry, they know the distance between each gantry, so they know everyone's average speed.
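
As a sketch of how trivial the gantry math is (all names and numbers below are made up for illustration):

    # Hypothetical gantry pair: 10 miles apart, 65 mph limit
    GANTRY_MILES = 10.0
    SPEED_LIMIT_MPH = 65.0

    def exceeded_average_speed(t1_hours: float, t2_hours: float) -> bool:
        """True if the average speed between the two gantry timestamps beats the limit."""
        avg_mph = GANTRY_MILES / (t2_hours - t1_hours)
        return avg_mph > SPEED_LIMIT_MPH

    # Tag 123123 passes gantry 1 at t=0 and gantry 2 eight minutes later: 75 mph average
    print(exceeded_average_speed(0.0, 8 / 60))  # True -> average speed was over the limit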


Mostly because I think it would glitch and get the speeds wrong.

If it was 100% accurate and you couldn’t get a speeding ticket while it was active, I’d be all for it.


Yeah, because you need to be able to use your vehicle to escape pursuers and also as a ramming weapon. I assume police would get an exception from this rule, but they don't actually have more of a legal right to use their vehicle as a weapon than anyone else; they are just less likely to have their judgement that the situation was an emergency questioned by the DA. It's probably also a Second Amendment violation, but the Supreme Court might be too originalist (and not textualist enough) to buy that argument, as cars did not exist in the decades surrounding the founding.


> prevented you from exceeding, say, 20% of the speed limit

I initially read this as "20% of the speed of light", and thought you were being sarcastic.


Did you mean exceeding the speed limit by 20%?

Because what you actually said is true too, and hints at why "it would be safer" is not a good enough reason to implement something.


I doubt it. Both of these methods are very intrusive.


I sustained traumatic injuries a few months ago when a driver on a suspended learner's permit hit me. The lazy cop issued no tickets for the multiple traffic violations. He couldn't be bothered to show up for the trial and the lazy prosecutor who only notified me of the trial three days in advance went with a bare minimum wrist slap for the suspension. It's as if it officially never happened.


That's awful. The amount of egregious vehicular violence the US has tolerated is disgusting. Waymo seems like the best bet to making experiences like yours a thing of the past.


And I'm not even sure how reliable the "miles driven" metric is. I mean, I'm sure you can estimate it somehow, but what's the margin of error there?


Odometers are pretty well regulated and insurance companies will often have a good record of the readings over long periods. I'm not sure how the org doing the data collection does it precisely, but pretty accurate data is out there.


It would be interesting if some kind of active/inductive measurement could be made.

As a human driver, I'm keenly aware of the "close calls" I've had where I've narrowly avoided a probable collision through good decisions and/or reaction times. I'm probably less aware of the times when I've just gotten lucky.

No doubt self-driving companies have internally available data on this stuff. Can't wait til superior performance to human drivers is a major selling point!


Although things are changing, the overwhelming majority of AV miles are generally-safer miles on protected highways. And yet their statistics are typically compared against vehicle miles on all road types.

Further, most AVs explicitly disengage when they encounter situations beyond their understanding. So again, we’re selecting AV statistics based on generally favorable conditions, but we don’t track human-operated miles the same way.

It’s not really a fair comparison.


> most AVs explicitly disengage when they encounter situations beyond their understanding

What do you mean by disengage? Cruise’s AVs don’t have a driver to take over.


I’m lumping in things like Tesla FSD.

But also, there are explicitly times and areas where Cruises don’t operate because they aren’t yet able to handle them. And times where they do just pull over and wait for human intervention when they don’t know what else to do. Both of which are safe and reasonable, but which also select for a safer set of miles actually traveled.


Human drivers "disengage" also, though we don't think of it that way. I have aging friends who refuse to drive at night. I'll stay home in bad weather. When I was younger, I'd sometimes ask a passenger to back out of a tricky parking space for me.


Yes, but we’re still comparing a highly selected set of AV miles vs the massive variety of human-driven miles.


It's not too useful to lump Tesla, Cruise, and Waymo together here. Tesla is years behind Cruise, and Cruise is years behind Waymo, in terms of driving capability. Waymo doesn't even drive on highways, so we don't know how safe it would be there (probably very safe).


>Further, most AVs explicitly disengage when they encounter situations beyond their understanding

the bigger problem here is that the machine may not realize it isn't fully understanding the situation; I think that's the more common case: the computer's situational model doesn't match reality, but the machine has enough confidence in its perception to proceed, producing dangerous effects.


Not sure that's fair - the number of accidents in human-driven vehicles varies significantly by the human driving them. Do you compare with the teen who just obtained their license and is more interested in their phone than in driving properly, or the 65-year-old doctor who's been driving for work and other purposes every day for the past 40 years? According to https://injuryfacts.nsc.org/motor-vehicle/overview/age-of-dr... it's at least a 6x difference.


You compare what matters — mean accidents per mile driven. And like any actuary, you can compare distributions with finer-granularity data (location, tickets, ages, occupations, collision severity, etc). None of this is new or intractable. We can have objective standards with confident judgements of what’s safe for our roads.

As an aside, public acceptance of driverless cars is another story. A story filled with warped perspectives due to media outlets stoking the audience emotions which keep outlets in business — outrage, fear, and drama. For every story covering an extreme accident involving human drivers, there might be 100 stories covering a moderate driverless accident. No matter how superhuman autonomous cars perform, they’ll have a massive uphill battle for a positive image.


I think you compare it to the average as that's who you are encountering when on the road. You have no way of selecting the other drivers to bias toward the doctor.


Teens who’ve just gotten their license are often quite good drivers. It’s after that when they drive poorly. (This also applies to adults, to a lesser extent.)


I still want that metric restricted to geographical area. Using the average across the entire country often seems outright malicious.


Well, obviously the number of autonomous vehicles involved in accidents is going to be lower, but that's because barely any of them exist compared to the vast majority of people driving their cars. If you had statistics on proportions though, that might be a different story.


You missed "per distance travelled" - that partially normalises the results. You still have to adjust for situation and road type (which Tesla conveniently doesn't do in their advertised numbers) for a better result.


I’ve had several low-light situations where my Tesla identified a pedestrian and applied the brakes before I did (typically dark clothing on a dark street).


This is low-hanging fruit; I had this feature in my 2015 Volvo.


Sure, it’s just more evidence that these automated systems improve overall safety vs. an unassisted driver. (Although I worry a bit about automation-induced inattention negating these benefits)


There are also issues like phantom braking which Teslas are prone to (or were, I'm not sure if that's better these days). That's part of a whole class of problems which the AVs suffer from which humans don't. I think the main problem is that those problems are really unpredictable to human drivers, whereas good defensive drivers will take into account what another lousy human driver might do based on lots of experience.


On the other hand, if you’re following the car in front of you too closely to react to phantom braking, the accident is on you. One of the things I appreciate most about using autopilot/FSD everywhere is that the car maintains a safe following distance basically 100% of the time, even when someone cuts me off. Implemented (and used) consistently, this sort of adaptive cruise control by itself should solve a bunch of accident-causing hazards and other traffic issues.

I haven’t had a phantom braking issue in a long time either; I’m not sure if this is because of the FSD package or if the underlying autopilot system has improved.


Ditto, FWIW. The car may not always behave like a human does in those circumstances[1], but at this point it's objectively much better at the attention and recognition side of the task.

[1] It tends to be more conservative than human drivers, and in weird ways. If a pedestrian seems at all to be moving in the direction of traffic, even if they're on a sidewalk and just meandering a bit, the car will try to evade (not violently, but it will brake a bit and pull to the outside of the lane).


Yeah, if anything, my Tesla’s issues seem to stem from being overly concerned about accidents rather than being recklessly dangerous.


The same applies to regular drivers. We do not track accidents they prevent.


Easy, humans prevent nearly 100% of the accidents. A car without a driver invariably crashes in a few seconds.

The question is how many accidents (if any) self driving prevents with respect to average human drivers.


Self-driving cars are used in a very controlled environment.

They will not function in high grass, I guess from my experience with different parktronics.

The best "off road" demonstration of self-driving and/or AI assistance I've found is this: https://media.jaguarlandrover.com/news/2016/07/jaguar-land-r...

Note they avoid going into grass. A human can deduce a trail from the grass profile; can AI? I don't think so.

Will you count an inability to reach a lakeside as an accident? I guess not.


We have not trained AIs to deduce trails from grass.

AIs need training before they can do things.*

*They just might be learning to do things without being trained based on the emergency behavior I see in LLMs.


My point was not about discerning a track from grass, but about driving in actual grass. Current self-driving tech uses sensors that are useless in high grass.

As for "emergency behavior" (emergent behavior, I guess) - we do not know how LLMs are trained. Thus, what you consider "emergency behavior" could very well be a part of the training.


This is such an odd take that I don’t know if it’s trolling. Both can be measured against similar metrics (an AV model against the avg AV/avg human, or vice versa).


Or even the ones they cause, for the most part. It's notoriously difficult to get clean data about non-fatal crashes.


This Cruise debacle clearly shows it is notoriously difficult to get clean data about crashes caused by or related to self-driving vehicles.


The companies do track that data internally. It would be nice if the DMV mandated its release to the public. But of course this type of data is a counterfactual, so it's much more subjective.


They should let an accident happen occasionally, and then show how it had the data to avoid it if humans had just trusted them.


Maybe of interest. There has been research by SwissRe on this topic. https://jdsemrau.substack.com/p/paper-review-comparative-saf...


That's behind a paywall. Do you have a link to the SwissRe paper by chance? (Which might also be behind a paywall, of course ; )



Waymo also talks about its work with SwissRe here: https://www.youtube.com/watch?v=9-Qu6HNZu8g


Waymo thinks about this a lot and has posted a good video about it here: https://www.youtube.com/watch?v=9-Qu6HNZu8g


To think that 1 year ago they were aggressively spamming me for an engineering position there.

How do you go from "hyper growth mode" (according to the recruiter email) to closed operations within 13 months?


Easy - high burn rate and no revenue.


“Hyper growth mode” = hiring as fast as possible. Startups gotta borrow and spend their way to success, and fast, ya know. Can’t keep the VC’s waiting on their returns.


Lose 90% of your revenue in 1 quarter.


Not everyone knows how to run a company successfully, despite hyper growth fervor. Just look at FTX lol.


It's an interesting system, but the size of the bounty rewards, especially when thinking in terms of $/hour, makes me wonder about the quality of the work.


I had some exploratory conversations with them last year.

The product was nice, but their business plan made zero sense to me, so I decided to not continue the process.


Technically you were right, the game is Epic ;)


> we cannot ship your game while it contains these AI-generated assets, unless you can affirmatively confirm that you own the rights to all of the IP used in the data set that trained the AI to create the assets in your game.

Yep, you are absolutely right.


This is exactly how this stuff should work. Greedily scraping whatever content you want hurts people who made that content and ultimately hurts AI developers who still need more high quality data.


I'm still not convinced by this argument. All human-generated art is also a result of the artist's experiences, and is strongly influenced by other art they have consumed. So why should GenAI not be allowed to blend the work of other artists?

If you want to argue that we should fundamentally treat machine- and human-generated works differently, that's fine -- but it's a different argument from "looking at a bunch of art and then synthesising ideas is bad," because that's exactly what many (most?) human artists do.


There's a pretty big fucking difference between the organic experience of a human being and a massive VC funded hellsystem that can process 400 million exact copies of images and generate thousands per day.

I honestly can't believe people are still making this dishonest, bad faith argument. It's obviously problematic if you think about it for more than 3 minutes.


If you want to talk about bad faith arguments, calling the AI a "hellsystem" is showing your bias just a bit.


I believe the expectation is that there is a difference between new creative work by humans and the output of tools. Tools are not 'artistically influenced' by their inputs.

Also, a human can take a work, modify it, and create a derivative work. They do not have copyright to the original material, and the degree of derivation is a winding blurry line through the court system to determine if they fully own the new work.

I suspect these will dominate the arguments in the first court cases around generative AI art - that the artist (operator) is the one who has to justify that they provided enough creativity in the process to create an independent work.


> you can affirmatively confirm that you own the rights to all of the IP used in the data set that trained the AI

I think Valve is overreaching with this policy.

It should be that you need to prove the art used in the game itself does not breach any copyright. The tool used to create said art has no bearing on the final art with regards to copyright.

Otherwise, would Valve also have required that the developer produce evidence of their Photoshop license/subscription (if they used Photoshop in the course of making their game)? Do they need to check that the version of Windows being used to make the game is a licensed one?


> The tool used to create said art has no bearing on the final art with regards to copyright.

This is still an open question with regards to AI art generation tools. Do you have recent legal precedent to cite that I don't know about, or are you just making things up?


> This is still an open question with regards to AI art generation tools.

While it is an unanswered question with no legal precedent, I am a believer that what isn't currently illegal is, and should be, considered legal until harm has been shown and thus requires a legal ruling.


Valve may be anticipating that it will become illegal, in which case they would have to pull games from the store and issue refunds. They may be waiting to see which way the wind blows on this before they get their revenue stream mixed up with it.


It would be so nice if they reflected that in their submission policy, if that is the case.

But that requires Valve to be transparent with developers, and we know how good they are at that.


Do you have a legal decision that AI art forms no derivative of a copyrighted work, vs a derivative of ALL the copyrighted work in its training set?

There is no new law; there is ambiguity due to the lack of current case law. There are certainly people who think generative art based on scraping the internet is infringing under current legal standards. We won't know until people go to court and judges make decisions.

Valve's insistence in the face of their own liability for selling derivative works seems a sound business decision here, at least when selling in markets where it is an open legal issue.


I tried to go to a dumb phone in the past, but I quickly realized that modern life is way too dependent on a smartphone.

Between using your phone's GPS to drive around, ordering food, unlocking doors, and checking kids in and out of daycare, it would be very inconvenient, though not impossible, to not have a smartphone.


When I was using a dumbphone I just took photos of Google Maps on my computer before I headed out. This once resulted in me going in totally the wrong direction for hours - not that I'm complaining, since it was an interesting adventure.


It might be a good idea to try the dumb phone to gradually reduce smartphone dependency. After your habits have changed, you can re-evaluate your app usage and only reinstall the essential ones.

Dumb phones are cheap enough to pull this off.


I don't mind them selling something that is free for $500.

The problem is when they sue people for using public domain images, claiming that they own the picture.


It gets sticky when you sell your work product based on a public domain image. Maybe the original image that is available in the public domain is dirty, scratched, faded, or has any of the other problems that happen with old images. If they paid to have it restored and are selling the restored image, then that restored image is not part of the public domain. So there is some room to spin this as a possible misunderstanding of what they are doing.

However, with all of the other stories about the original photographer getting served notices of infringement for having their image on their own site or socials, and similar kinds of unchecked automation, then yeah, it's hard to give any benefit of the doubt.


I don't think that this is accurate, in general. If it is true, it certainly varies by jurisdiction. Here's an interesting read: https://jcms-journal.com/articles/10.5334/jcms.1021217


Restorations are not eligible for copyright protection.


Then something else can be done to it that does grant them copyright on the work product - similar to modifying the mouse to extend/grant new copyrights.


Link two such lawsuits.


How many users do you have right now?

Last year I created a prototype of something very similar; the key difference was that the idea was to automatically pre-screen recruiters instead of asking for payment.


We stopped counting after a bit, but we regularly get an influx when new features drop (like the virtual business card) and when we post someplace.

It's profitable enough to cover hosting, but honestly we built the features we wanted out of a product we wanted, and it went from there.

