Tesla hires Andrej Karpathy (techcrunch.com)
538 points by janober on June 21, 2017 | 316 comments



What this really reflects is that Tesla has painted itself into a corner. They've shipped vehicles with a weak sensor suite that's claimed to be sufficient to support self-driving, leaving the software for later. Tesla, unlike everybody else who's serious, doesn't have a LIDAR.

Now, it's "later", their software demos are about where Google was in 2010, and Tesla has a big problem. This is a really hard problem to do with cameras alone. Deep learning is useful, but it's not magic, and it's not strong AI. No wonder their head of automatic driving quit. Karpathy may bail in a few months, once he realizes he's joined a death march.

If anything, Tesla should have learned by now that you shouldn't need to recognize objects to avoid them. The Mobileye system works that way, being very focused on identifying moving cars, pedestrians, and bicycles. It's led to at least four high-speed crashes with stationary objects it didn't identify as obstacles. This is pathetic. We had avoidance of big stationary objects working in the DARPA Grand Challenge back in 2005.

With a good LIDAR, you get a point cloud. This tells you where there's something. Maybe you can identify some of the "somethings", but if there's an unidentified object out there, you know it's there. The planner can plot a course that stays on the road surface and doesn't hit anything. Object recognition is mostly for identifying other road users and trying to predict their behavior.
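
Here's a minimal sketch of why that property matters, in Python with numpy (the geometry, thresholds, and function name are all mine, purely illustrative): with a point cloud you can reject a planned corridor without ever classifying what is in it.

    import numpy as np

    def corridor_is_clear(points, half_width=1.5, max_range=60.0,
                          ground_tolerance=0.2):
        # points: Nx3 array of (x, y, z) LIDAR returns in the vehicle
        # frame; x forward, y left, z up, road surface near z = 0.
        ahead = (points[:, 0] > 0.0) & (points[:, 0] < max_range)
        in_corridor = np.abs(points[:, 1]) < half_width
        # Anything sticking up above the road surface blocks the path,
        # whether or not a classifier can put a name on it.
        above_ground = points[:, 2] > ground_tolerance
        return not np.any(ahead & in_corridor & above_ground)

The value is the negative guarantee: an unidentified return in the corridor still forces a replan, which is exactly what recognition-first pipelines give up.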

Compare Chris Urmson's talk and videos at SXSW 2016 [1] with Tesla's demo videos from last month.[2] Notice how aware the Google/Waymo vehicle is of what other road users are doing, and how it has a comprehensive overview of the situation. See Urmson show how it handled encountering unusual situations such as someone in a powered wheelchair chasing a duck with a broom. Note Urmson's detailed analysis of how a Google car scraped the side of a bus at 2MPH while maneuvering around sandbags placed in the parking lane.

Now watch Tesla's sped-up video, slowed down to normal speed. (1/4 speed is about right for viewing.) Tesla wouldn't even detect small sandbags; they don't even see traffic cones. Note how few roadside objects they mark. If it's outside the lines, they just don't care. There's not enough info to take evasive action in an emergency. Or even avoid a pothole.

Prediction: 2020 will be the year the big players have self-driving. It will use LIDAR, cameras, and radars. Continental will have a good low-cost LIDAR using the technology from Advanced Scientific Concepts.

Tesla will try to ship a self-driving system before that while trying to avoid financial responsibility for crashes. People will die because of this.

[1] https://www.youtube.com/watch?v=Uj-rK8V-rik [2] https://player.vimeo.com/video/192179727


> Prediction: 2020 will be the year the big players have self-driving.

I think that depends on what you mean by self-driving. My prediction is that in 2020 we will have slightly better driver assistance systems. Maybe a lane assist system which won't kill me if I don't babysit it on a road with ongoing construction and multiple lane markings, or if corners get too tight. Maybe some limited self-driving, e.g. only available in dedicated areas like the highway/autobahn.

Keep in mind that 2020 is 3 years away, which is less than the development cycle of a car. Or in other words: if something were to be released in 2020, you would already see it driving around on the roads for first tests.

My personal prediction is that we will see reliable and advanced self-driving technology in mass production cars maybe in 2 full car generations from now - which is 2030.


I am curious about what percentage of accidents are caused by the driver falling asleep, and if a lot of lives could be saved by a driver assistance system that self-drives with even the crudest of sensors until the driver is successfully woken up, which would probably be several seconds later.


I don't know why you're getting downvoted. I think watching highway death numbers is an excellent way to judge the success of self-driving cars.


At a minimum, self-stopping should be a basic requirement: any vehicle that senses a lack of input from the driver, or input that conflicts with what its sensors perceive, should be able to safely turn on a hazard alarm for all other surrounding vehicles and safely stop itself on the nearest shoulder.

If all sensors indicate that there is no emergency obstacle requiring sudden swerving or braking or what not, the car should be able to safely decelerate and stay in a lane, etc... but this might be too complex a problem. (I was thinking of a driver having a seizure or some other episode - but if a driver is under duress this may be a bad thing.)


2020 is the year major manufacturers say they will have self-driving cars on the market.

- Volvo [1]
- Chrysler [2]
- Ford [3] (2021)
- GM [4] (2018, first live test fleet)
- Toyota [5] (although they just started with self-driving)

[1] https://www.digitaltrends.com/cars/volvo-predicts-fully-autn... [2] https://www.usatoday.com/story/tech/news/2017/04/25/google-s... [3] http://money.cnn.com/2016/08/16/technology/ford-self-driving... [4] http://fortune.com/2017/02/17/gm-lyft-chevy-bolt-fleet/ [5] https://www.wsj.com/articles/toyota-aims-to-make-self-drivin...


> Maybe a lane assist system which won't kill me if I don't babysit it on a road with ongoing construction and multiple lane markings, or if corners get too tight.

Or if someone else makes a mistake.

That's another scenario that can have very unpredictable and immediate consequences ranging from 'nothing' to 'two car accident with fatalities'. Even in relatively placid (when it comes to driving) NL I see this kind of situation at least once per year.

Then there are blow-outs and other instant changes of the situation. I do believe that especially in those cases it should not take long before computers are better than humans because of their superior reaction speed.


Yup, these are the reasons I'm not interested in those "hands-free but still assisted" autodriving deals. I'd love cruise control assist and lane assist in my car, but that's about all I want until I can be 100% sure cars can handle the situations by themselves.


I believe driver assist technology will get more and more advanced to the point where the car is essentially driving and you are giving suggested feedback. In other words -- the bad situations will be handled for you. Following distance might be enforced and come with a warning, unless you really want to get dangerously close -- the car will "help".

Your viewpoint is how self-driving cars will come to be accepted into the mainstream. It will essentially sneak up on the average driver.

I would be extremely nervous to let a fully autonomous vehicle drive me given the current state of the art.


Either the human or the computer has to drive. Throwing control back to the human on an automation failure and expecting a correct response from the human takes too long. This has been well-studied. It takes at least 3 seconds for the driver to take control at all, and 15-40 seconds before human control has stabilized. I linked to some studies in an earlier post.

The aviation people hit this problem in the 1990s, as autopilots got more comprehensive. Watch "Children of the magenta", a talk by American Airlines' chief pilot on this.[1]

[1] https://www.youtube.com/watch?v=pN41LvuSz10


Right. The driver would be in control in the scenario I'm describing, just like they are with lane assist, cruise control, or -- closest to stealth self-driving -- automatic braking and swerving. I guess it would be better described as: the driver would always have the illusion of control.

The automated parts would kick in when necessary and they would be increasingly intrusive in their warnings and modification of driving until the human is only needed for identifying source and destination.


You set your destination for: gym

Are you sure? Your calendar says that Bobby needs to be picked up from practice.

Why don't I go ahead and set the destination for you?


>> I think that depends on what you mean by self-driving.

It means a car that doesn't have a steering wheel, brakes, or an accelerator. That kind of car is quite far off.


That's at least 4-5 automotive cycles away, IMO (so 25-30 years away).

And that's for the high end. The low end cars are probably 50-60 years away, IMO.


I expect to see little 25MPH self-driving cars running around senior communities quite soon. At 25MPH and below, sensor range required is low, data quality is good, and most problems can be dealt with by slamming on the brakes. That seemed to be the target market for Google's little bubble car. The problem there is price, not capability.


The notion that the low end cars will take decades longer is very, very ridiculous.


Ummm... how many cheap cars have: adaptive cruise control or lane assist? Both technologies have been available for expensive cars for what now? 15 years?

Really cheap cars don't even have automatic gearboxes (or they are bought without them 90% of the time, outside of the US).

By "cheap car" I mean something cheaper than $20000 and "really cheap" would be below $12-15000.


Ok, with "really cheap" you're talking about a species of cars that I'm unfamiliar with.

Here in the US, the sensors and fly-by-wire controls that Lexus used a few years ago have largely trickled down to the Corolla and the like as standard features. The difference between making autonomy work on low-end and high-end cars will be purely a software problem. That's not the kind of additional work that takes thirty years.


The difference is not purely a software problem.

It's a price problem since you need to install a bazillion sensors, motors and other thingies which are basically used only for one purpose.

And regarding really cheap: you're on Hacker News, ergo less likely to ever encounter those. But it's usually cheap models from cheaper brands such as Dacia, Hyundai, Kia, Tata, and several of the Chinese brands. I doubt most people you know actually own one :)


> It's a price problem since you need to install a bazillion sensors, motors and other thingies which are basically used only for one purpose.

That's an overstatement. You need to install sensors, many of which you'd install anyway for the modern lane-keeping and crash avoidance features. The sensors are not ruinously expensive and economy of scale incentivizes an automaker that makes both cheap and expensive cars to get these features out to the low end very quickly. As Animats pointed out, you really want to order this stuff by the million, not the thousand.

I guess I wasn't arguing with the crux of what you're saying - having a feature is generally going to be more expensive than not having it. The overstatement was very strong, especially in terms of timeline for reducing the costs of this functionality.


Here in the US I don't even think you can buy a "really cheap" car; my current vehicle is a used Jeep TJ and it set me back $18k (great condition though - still, I overpaid a bit, but I consider it the price for getting the options I wanted on the vehicle).

While I'm sure there are probably new (or relatively new) vehicles out there sub-$15k, they probably aren't much to write home about. Certainly nothing I want (that's just my personal wants - for instance, if it ain't 4wd and/or it can't be lifted, I don't want it).

As far as the transmission is concerned, here in the US most vehicles can't be bought without an automatic. Manual transmissions are becoming the exception; most car models don't even have them as an option. The few that do (mostly trucks and sports cars) have seen declining sales of the option (and I wouldn't be surprised if it actually costs more to get it!).


I imagine that if you waited for the right manufacturer's incentives and so on, you could get a Toyota Yaris here for $13k, new, with financing. It's a very fine car for someone who can comfortably fit in it.


Well, prototypes have already driven millions of miles like that.

(Whether they also had a steering wheel is of course irrelevant; what the parent means is that they can drive without anybody needing to use the steering wheel.)


All of the systems I've heard about need human intervention every couple of miles.


As far as I know, no car manufacturer ships cars equipped with LIDAR. Nor do they seem to have a camera setup as extensive as Tesla's. So I fail to see how Tesla has painted itself into a corner. The worst case is that they don't reach full autonomy with the current hardware setup. It certainly would be a big marketing blunder if they don't, but if necessary they can add LIDAR to production, if they choose to.


> As far as I know, no car manufacturer ships cars equipped with LIDAR.

I can confirm that, at least in a sense, this is false. There are plenty of series cars with LIDARs - not the scanning kind you are thinking of, but a simpler kind of LIDAR tech[1]. I know that is not what you were talking about, but I thought it worth pointing out other, existing, alternative approaches.

[1]http://www.conti-online.com/www/industrial_sensors_de_en/the...


I think Tesla not waiting for LIDARs to get cheaper is the right decision. They can always integrate LIDAR into new models once it gets cheaper. Right now, no production car ships with a scanning LIDAR that I know of.

Waymo cars are really expensive because of that, and they can't scale because Velodyne can't make LIDARs that quickly.

Also consider the fact that LIDARs literally emit beams of light and measure the reflections. If you have every car with a LIDAR, you get interference, and it's not the gold standard of measurement anymore.

Tesla conquering the problem with algorithms is the right approach. Remember, our brains use algorithms and two cameras to drive too. So it's technically possible.

Nvidia keeps pumping out faster GPUs, cameras keep getting better, and Tesla's getting more and more data. They just need better algos while they wait for cheaper sensors that scale.

That's a very wise business move while everyone else waits for a magic bullet.


> If you have every car with a LIDAR, you get interference, and it's not the gold standard of measurement anymore.

I've had people working on these things tell me that interference is not an issue. Take this with a grain of salt, but I would love to see some technical analysis of the possible impact of interference - and how large it is relative to the uncertainty of positioning and spatial estimation - before dismissing LIDAR as susceptible to it.

I've found this: http://sci-hub.io/10.1109/ivs.2015.7225724 - from what I'm reading in the paper, interference exists, but it is not critical, in the sense that it can be worked around.

And if that is so, and if you believe LIDAR is going to dominate the field, it's even more important to work with LIDAR early on so you have the know-how to fix the issues that might arise.


Shipping hardware that isn't needed is not how automotive engineering works today.

Most automotive projects look like this: you want to release a new car model in year X (which is typically around now() + 3-5 years). Then you start a development project exactly for that car, which involves creating roadmaps for the car, creating the architecture, sourcing the components, packaging everything together, and testing everything. Most components (including infotainment systems, driver assistance systems, etc.) are contracted to sub-suppliers, which develop them especially for that car model (or maybe a range of models from one OEM). At the end of the development cycle you have exactly one car which (hopefully) has everything that was planned for that model and which will get sold. In parallel the development cycle for the next model begins, where there might be only minimal reuse from the last one. E.g. it might be decided that one critical driver assistance component is sourced from another supplier, now works completely differently, and also requires changes to the remaining components.

So if you do not intend to upgrade something or reuse it, it just doesn't make sense to include additional hardware for it. We will see the required hardware in cars which also will make use of their functionality.

For Tesla it will be quite interesting to see whether they really deliver major autonomous functionality on that hardware, or whether we see a new-generation Model S (with overhauled hardware) first. I'm personally pretty sure that we will see new model generations before the software reaches a "fully autonomous" level.


>where there might be only minimal reuse from the last one

Not from what I've seen at some manufacturers. Lots of software and parts are reused across multiple models.


They may have painted themselves into a corner by saying the current hardware is sufficient for "full self-driving capabilities." The worst case is they're putting half-baked solutions on the road.


Yeah. Theoretically the claim is true (at least wrt. the sensors, not sure about computing power onboard). Tesla's hardware is already better than human hardware for this task.

The trick is, they'd have to advance the state of the art in software quite far, to derive "full self-driving capabilities" from this hardware.


The Chevrolet Bolt self-driving test fleet is headed this way too. The difference is that Tesla has been talking loudly so that people won't see the shortcomings other companies are actively compensating for.

So what if they can add it to future production? Their talk promises, or at least implies, that they have all they need. Yet a cursory review of random YouTube videos will show you how limited their system still is.

This may be another "war" where they lead the charge but falter in securing the win. At times you can play the car marketing game the way the technology market does, but in the area of safety there is no compromise. Instead of acting like a tech company pushing a new tablet, they should have acted like SpaceX.


I think Nissan has bet that in a few years, LIDAR will be inexpensive: https://www.youtube.com/watch?v=cfRqNAhAe6c


They're probably right, since it's been reported that Google's Waymo has achieved just that - press reports from January say $8,000 per sensor which is positively affordable when compared to $75,000 for a competing sensor.

https://arstechnica.com/cars/2017/01/googles-waymo-invests-i...


I think most people think LIDAR is going to eventually be inexpensive -- even Tesla has a few test cars with LIDAR. However, it's a fair weather sensor, so you'll have to do a lot more for level 5.


I wonder: if I want to put my money on this bet, how should I proceed?


Invest in the trucking industry?


> no car manufacturer ships cars equipped with LIDAR

For the love of god, please do not comment in public if you do not understand the subject. Take a look at the Chrysler + Waymo minivans, Volvo + Uber SUVs, and Mercedes self-driving test cars. They all have LIDARs.


Sorry, but you did not get the point OP was making. None of those you mentioned are production cars like the Tesla; they are test mules. OP's point is correct: he/she is not talking about modded research cars.


But no other company is selling the promise of an autopilot car. For example, Audi is expected to launch the renewed Audi A8 this year with Level 3 autonomous driving, but it's called "Traffic Jam Pilot". Level 2 was called "Traffic Jam Assist".

https://media.audiusa.com/models/piloted-driving


None of which are production vehicles being shipped from manufacturers. They are test vehicles, nothing more.


For the love of god, stop the hyperbole, and try to understand the comments you are answering better.

See how condescending this is?


For the love of god, stop the hyperbole, and try to understand the comments you are answering better.

I think that's good advice, very often applicable, even on HN.

Sometimes, s/even/especially/.


Sorry, I kinda tuned out when you mentioned lidar like it's some requirement for autonomous driving.

Lidar is AMAZING for giving press demos on sunny days. For the real world with rain, snow, leaves, plastic bags, etc? Useless.

The future is radar + cameras + a LOT of software blood, sweat and tears.


That's like saying our eyes are useless because we can't see in the dark.


Our eyes are not LIDAR, though, and we can drive pretty well.

In fact, the reason we have crashes is NOT our eyes' lack of distance detection through laser return timing; having two eyes is enough for depth perception. We have crashes because of attention deficits instead.

At this point, there is no reason to believe that a machine can't match and outperform a human on a driving task given the same inputs. Sure, human eyes have 5 million cone cells and 1080p feeds only have 2 million pixels, but 4K has over 8 million, and more importantly, that level of precision is unnecessary for regular driving.

And Tesla doesn’t even bet just on the visible spectrum; it also relies on radar.


The trick is in our wetware. What the brain does with visual input is not just trivial object recognition. It relies on a complex internal model of the world to both augment the object recognition and to sometimes discard the visual data as invalid.

So sure, theoretically cameras would be enough. But we're not yet there with software, we can't use the camera input well enough. So if you can side-step the need for not-yet-invented ML methods by simply adding a LIDAR to a sensor suite, then it's an obvious way to go.

Compare with powered flight: we didn't get very far by trying to copy the way birds do it. The trick is in the super-light materials birds are made of, and the energy efficiency of their organisms. We only succeeded at powered flight when we brute-forced it by strapping a gasoline engine onto a bunch of wooden planks.


> It relies on a complex internal model of the world to both augment the object recognition and to sometimes discard the visual data as invalid.

That in particular is what makes the hiring fascinating. This problem is Andrej Karpathy's expertise[0]. His CNN/RNN designs have achieved solid results, in particular showcasing the ability to identify elements of a source image, and the relationships between different parts of the image.

The speed at which those techniques improve is also stunning. I didn’t expect CNNs to solve Go and image captioning so fast, but here we are!

I think the principles are already there; a few tweaks and a careful design is all it takes to beat the average driver.

[0]: http://cs.stanford.edu/people/karpathy/main.pdf


I think LIDAR will have a place, if they can get the per-sensor cost down to something reasonable (under $500 per sensor, for 3D 180-degree coverage with at least 32 vertical beams - or the equivalent).

But I think first we'll see cars utilizing tech as described in this paper:

https://arxiv.org/abs/1604.07316

...and variations of it to handle other modeling and vision tasks.
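
For the curious, the model in that paper is a small convolutional network that regresses a steering command directly from a front-camera image. Here's a PyTorch sketch, with layer sizes recalled from the paper - treat the exact numbers as approximate, not a faithful reproduction:

    import torch.nn as nn

    class PilotNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 24, 5, stride=2), nn.ReLU(),
                nn.Conv2d(24, 36, 5, stride=2), nn.ReLU(),
                nn.Conv2d(36, 48, 5, stride=2), nn.ReLU(),
                nn.Conv2d(48, 64, 3), nn.ReLU(),
                nn.Conv2d(64, 64, 3), nn.ReLU(),
            )
            self.head = nn.Sequential(
                nn.Flatten(),
                nn.Linear(64 * 1 * 18, 100), nn.ReLU(),  # for a 66x200 input
                nn.Linear(100, 50), nn.ReLU(),
                nn.Linear(50, 10), nn.ReLU(),
                nn.Linear(10, 1),  # single output: steering command
            )

        def forward(self, x):
            return self.head(self.features(x))

Trained on (image, steering angle) pairs logged from human driving, a network like this learns lane keeping without any hand-labeled lane markings.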

Self-driving vehicle systems are amazingly complex; it won't ultimately be any single system or sensor, or piece of software or algorithm, that solves the problem - it's going to be a complex mesh of all of them working in concert.

And even then, there will be mistakes, injuries, and deaths unfortunately.


Image captioning is not solved (yet), even if there was a lot of progress made in recent years.


Correct, eyes in fact are useless if you can't see in the dark.

Now imagine walking on top of a skyscraper in pitch darkness. Yes, your eyes work in light, but in this case you will likely fall to your death.


You can't really drive in the dark, can you? What if it gets dark for 1 second on a cliffside turn?


Actually, in the dark you often drive on a leap of faith about the state of the road, i.e. with very little visibility into what can come from the side of the road (no light) or after a turn. We shouldn't. But we do.


Yes, but it will cause you to slow down. Or at least, it should. As soon as you have less than your stopping distance of space in front of the car you are going too fast.
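
For concreteness, that rule of thumb is reaction distance plus braking distance. A quick Python check (the reaction time and deceleration figures are my own illustrative assumptions):

    def stopping_distance(v_ms, reaction_s=1.5, decel_ms2=7.0):
        # Distance covered while reacting, plus braking distance v^2 / (2a).
        return v_ms * reaction_s + v_ms ** 2 / (2.0 * decel_ms2)

    # At 100 km/h (~27.8 m/s): 27.8 * 1.5 + 27.8**2 / 14 ~= 97 m of clear
    # road needed. If your headlights show only ~60 m, anything much above
    # ~70 km/h is outdriving what you can see.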


Again we probably should but we don't always.

In fact even in daylight you drive a lot as a leap of faith. When the traffic light is green for you at a crossing and you see a car arriving on the side, you assume that you have the priority, that the car will stop and you go ahead without adjusting your speed to the coming car. This is a leap of faith in the fact that all other cars will follow the rules.


There's always the option to enable the headlights in such a case. LEDs switch on way faster than incandescent bulbs. And if they're needed at all - a camera has way more flexibility in brightness input range than a human eye.

A car like a Tesla also has high-quality maps and GPS sensors - these alone are way better than what you get in your smartphone, and are enough to keep the car from going over the cliff.


It's pretty obvious that camera-based self-driving software would take advantage of the headlights on a car, just like a human does, so it never has to drive in full darkness.


You missed the analogy:

Darkness is to eyesight as inclement weather/obstacles is to LIDAR


Our eyes also come with our brains packed full of fancy algorithms to extract value from minimal information.

I am not sure what software does with noise from a LIDAR sensor, but I have seen data from other noisy sensors and it is often useless.


What's the point of that analogy? Darkness can be fixed with lights on the vehicle itself. Inclement weather is entirely outside the control of the vehicle.


LIDAR doesn't work in inclement weather, that's the point.


Actually, you will need more than camera input - maybe even sound input.

In darkness it helps if you can hear the vehicles coming close.


> maybe even sound input

Bingo. When we drive a vehicle, we use so much more than just our eyes to sense the environment, and hearing plays a very large part.

I believe that it is something that warrants research for self-driving vehicle usage; I don't know if anyone has done such research, but I haven't seen any papers on it yet. If not, it seems like an underappreciated sensor aspect that could potentially greatly augment self-driving vehicle capabilities, and would be a very simple and cheap sensor to add to a vehicle as well.

EDIT: Found this recent article...

https://www.technologyreview.com/s/604272/a-sense-of-hearing...

While it seems to be focused mainly on diagnosing issues with vehicles before they become larger problems, there are hints about it being used for self-driving tasks as well.


I think you're implying that deaf drivers are fundamentally less safe than hearing drivers.

Studies have been highly mixed: http://www.lifeprint.com/asl101/topics/deaf-drivers.htm

I would be comfortable saying that the advantages of a deaf, many-eyed, always-alert self-driving system would far outweigh those of a hearing, two-eyed, and sometimes-alert human driver.


Not really. Self driving cars should be able to drive in all of those conditions (and one condition might even quickly turn into another, e.g. sunny into rain).

If a technology only helps with some of the cases (e.g. fair weather) and does not work for the others, then there are two cases:

(a) A single replacement technology will be found that works in 100% of cases.

or:

(b) The technology will only be used in the cases where it works well, and the other cases will be handled by some alternative technology likewise suited only to them.

In the case of (a), Lidar is indeed useless (or at best, only used as a supplementary technology in favourable conditions).

And I fail to see how (b) can be the case -- that is, how there can be another technology that will solve the rain/snow/night driving problem, but which cannot also outperform/replace Lidar for fair weather driving.


> then there are two cases:

Isn't it interesting that we have five senses, when we could just have one that works in 100% of the cases? A third option is a system based on multi-sensory inputs. Several inputs that are just marginal on their own can provide good performance when combined.
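
A toy illustration of that point (standard inverse-variance weighting; the sensor numbers are invented): fusing independent noisy estimates yields a combined variance smaller than any single sensor's.

    import numpy as np

    def fuse(estimates, variances):
        # Inverse-variance weighted average of independent measurements.
        w = 1.0 / np.asarray(variances, dtype=float)
        fused = np.sum(w * np.asarray(estimates, dtype=float)) / np.sum(w)
        return fused, 1.0 / np.sum(w)  # fused estimate, fused variance

    # Camera, radar, and LIDAR each measure range (m) to the same object:
    print(fuse([41.0, 39.5, 40.2], [4.0, 1.0, 2.0]))
    # -> (~39.9, ~0.57); the fused variance beats the best sensor's 1.0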


> The future is radar + cameras + a LOT of software

... all of which Waymo's solution also has, in addition to LIDAR.


I will take your self driving car seriously if you can drive it in all conditions in India.

Until then it's just an attempt to make something that breaks at the next unanticipated exception.


Human beings can not drive in all conditions in India.

Just to get our car out of the garage, I had to plead and negotiate with N vegetable vendors with makeshift stores on the road.

Also: bicycles, motorbikes, rickshaws (in human-pulled, CNG, and electric varieties!), and pedestrians mixed into traffic everywhere.


Yep, I have a feeling that for driving in India/Iran/Pakistan/Bangladesh you're going to need a strong AI to negotiate with the street vendors. In Iran I even had a particularly... enthusiastic flower seller actually maneuver himself to make it even harder to drive away.

Not to take away from anyone's work in this area, but I have no idea how long it'll take to go from "works in America" to "works in India". In many countries the safest option (to evade disaster) can occasionally be "floor it and break the speed limit" to get away from x dangerous thing. I'm not sure if that's something that Google is willing to write into an AI.


This will require something like Rick's (from Rick and Morty) car/spaceship.


>>I had to plead and negotiate with N vegetable vendors with makeshift stores on the road.

This is a very practical test case for a car on a road. Not just in India but anywhere in the world.

Instead of N vegetable vendors you could have N traffic cops. How do you manage the human interaction part in the self driving car?


Well, most non-Indian drivers also can't make it in "all conditions in India" so that's a moot point.


Made me laugh, but it's a very true statement. When I was driving whilst holidaying in India I realised that the main rule of the road was "largest vehicle wins"! This actually made for quite an easy-to-understand system, with few questions about who had the right of way.

bike < car < van < truck

which makes sense because if you're the one who's going to come off worse in an impact then you really want to give way - especially if you have a massive painted tipper truck hurtling towards you!

It will be very interesting to see how self driving systems can cope with these local unwritten bylaws.


It will most definitely be interesting, but it's silly to say that self-driving cars can't be taken seriously if they don't master these conditions (yet). Hell, I couldn't master those conditions myself, nor do I need to, because I live in the inner city of a Western European metropolis, so what I need my self-driving car to do varies massively from what people in other regions of the globe may need it to do - but that doesn't make it any less useful for me.


>This actually made for quite an easy-to-understand system, with few questions about who had the right of way. bike < car < van < truck

That doesn't make much sense, because the main (and most common) question would still be between vehicles of the same class: car vs car, and this doesn't solve it.


>>It will be very interesting to see how self driving systems can cope with these local unwritten bylaws.

This is why self driving AI will require Hard AI.

India is a perfect test bed for these people to test their algorithms. And for heaven's sake, why would you test it in some place like the US? Cars in the US are pretty much trains on roads anyway.


Because that's where the (easy) money is


I get it. They would want to release their cars in US or Europe as their primary markets.

But even in those cases it makes sense to test in India. Why? Sooner or later you will have some situation in the US which resembles daily traffic conditions in India. Imagine a law-and-order situation where people are running around without regard to traffic laws. Or some other situation where traffic is being rerouted the wrong way - in the US, say, traffic exceptionally being routed through the left lane (it being a right-lane-drive country), etc.

For all these situations you will very soon need a test environment that provides you with all situations to test.


>For all these situations you will very soon need a test environment that provides you with all situations to test.

Which won't be India.

Because you obviously don't want to be testing unprecedented conditions with live subjects in actual traffic.

If anything, that will only happen after tons of simulations of such conditions in fake environments.


I wouldn't drive in India. And I'm human.


You are only human. It's superhumans who can drive there.


I imagine a self-driving car will drive there similarly to a human: slowly, because of the extreme risk. Humans probably tend to take more risks than they should, so a self-driving car in India might seem to drive differently than a human would. But the algorithm stays the same: drive at a speed matching the obstacle risk, and avoid hitting things. Perhaps the parking aspect would be the most different.


IMO, India has the simplest driving algorithm: Keep going where you're going and don't hit anything. Also, honk if you think you might be in someone's blindspot.


An autonomous driving feature that you can use on most days but not all of them is still very useful and would have a good market.

It's not going to rain or snow today, and if it would, then I can take the wheel myself.


Isn't Tesla partnered with Nvidia to solve the computation side of things?

I thought I remembered Nvidia presenting some additional stuff about it in their Tech demo recently.


I know Karpathy wasn't involved in the "end-to-end" research paper Nvidia published - but I wouldn't be surprised if he were involved somehow, and if anyone can push the CNN tech in that paper further, it just might be Karpathy.


When Musk said they were doing the first decent electric sports car, everybody became an expert and said it wouldn't work. Then he decided to do SolarCity and SpaceX. And the crowd again said "naaaa".

Now the guy is tackling another hard problem and everybody knows better.


My thoughts exactly. Elon Musk is not known for vaporware or BSing. If he says something is doable, there's a good reason for it. Musk has a combination of vision, an understanding of physics, and execution ability unmatched by any other person (in my opinion). To say that he has painted himself into a corner is highly misinformed.


>Elon Musk is not known for vaporware or BSing...

Hyperloop though..

I mean, I think bullshit can be sold to even 'techies' aka HN crowd, if it is wrapped just the right way...


That is neither vaporware nor BSing because Musk never suggested he was going to build the Hyperloop. He explicitly said the opposite, that it was an idea he thought was neat and he hoped somebody else would work on it.


Huh? Musk gave away the idea, never said he was himself going to do it.

But in spite of that, technically, Musk has already built a subscale Hyperloop nearly a mile long at the SpaceX campus in Hawthorne including an electric sled used for student competitions: (epilepsy warning) https://www.youtube.com/watch?v=moeI8DxQR5Q


He doesn't have any educational background in AI or its underpinnings. Running OpenAI doesn't instantly make him an expert. The above comment is from an expert with experience in the self-driving space.


And people from NASA and the CNES said SpaceX was a non-starter.

Can we just let the man try without being annoying the whole way ?

Or do something that helps ?

Or do something at all to understand that it's hard enough and that you really don't need a thousand voices enumerating all the reasons you might fail ?


I think it helps to give constructive criticism.

Musk is adamant that lidar isn't necessary. Many disagree and are voicing their opinions on that.

I also think his view that strong AI is around the corner is detrimental to the industry. He is overhyping things, potentially creating expectations - and investments based on them - that reality won't support.

So, when I spend time pointing out he is not an expert in the ai space, it is to soften his outlandish predictions in the space, and help bring a more realistic perspective on who is talking and who knows what they're talking about.

I think Musk will contribute something to the self driving and AI space, just not in the way he claims.


This is not formulated as constructive criticism. This is formulated as prophecy: "It will fail because...", "It can't work because...". Or formulated in the form of "I know better".

Constructive criticism would weigh pros and cons, explain a POV without condemnation, and not make grandiose prognostications.


Odd that you would put quotes around things you made up and neither I nor the first comment in this comment-chain said.

Animats gave plenty of detailed constructive criticism, as did I. Neither comment could be simplified as you allege without leaving out important context.


His educational background also didn't include rocket science, and yet SpaceX has succeeded.

If you'd like to put money where your mouth is, I'd happily do a wager with you. Tesla will be the first to have a fully autonomous vehicle - care to bet against that?


> Tesla will be the first to have a fully autonomous vehicle - care to bet against that?

Yes, I would. Google is way ahead in the tech. Never mind that they don't have a product. If we're talking strictly about the tech, Tesla is and always has been far behind.


$1,000 wager?


Eh, if I knew you in person I would, but this is a loosely defined term and will likely occur iteratively over the years. Also I'd only take the bet while Musk is still in charge and while he is anti-lidar.

Feel free to message me if your side of the bet comes true. But, I really doubt it will. If Waymo forms a partnership with any major manufacturer they'll pretty much have it.


I made the same type of wagers with Thunderf00t with regards to SpaceX. No takers!


The difference between Tesla and Google is that Tesla actually has to ship these cars to customers right now. Wouldn't LIDAR double the cost of a Tesla right now?


Tesla didn't have to ship a car with unusable self-driving hardware. They'll probably have to eat the cost of a retrofit package on some vehicles to make that work. Like the Roadster transmission problem, where they had to replace all the early drivetrains with the two-speed transmission.

Nobody has built automotive LIDAR units in volume yet. That's why they're so expensive. It's not an inherently expensive technology once someone is ready to order a million units. It does take custom silicon. Tesla, at 25K units per quarter, may not be big enough to start that market.

Continental, which is a very large auto parts maker in Germany, has demo units of their flash LIDAR. They plan to ship in quantity in 2020. Custom ASICs have to be designed and fabbed to get the price down.[1]

[1] http://www.jobs.net/jobs/continental-corporation-careers/en-...


Isn't it possible that, by the time LIDAR is technically and economically ready for general deployment, current Tesla models will have enough mileage that Tesla avoids retrofitting completely?


How could Tesla avoid retrofitting if they are unable to solve full self driving with cameras alone? Their new vehicles would have lidar, and old customers who were promised FSD in their models would seem to be legally entitled to it, given that Tesla is currently selling that feature as a product, even though it isn't functional.


Classic engineering ethics problem. Management says they have to ship "right now", but you know that if you do, 1 in 1000 customers will die. If you wait a couple of years, that'll go down to 1 in 1,000,000, but your company might go bankrupt.

Engineers at Takata and in GM's ignition key department made one choice, Waymo seems to be making the other.


Waymo has the benefit of not going bankrupt if they wait another couple years..


Oh? http://www.cbsnews.com/news/google-alphabet-moonshot-project...

The whole reason the car project got spun out into Waymo was to fast-track commercialization. They do not in fact have an infinite amount of money.


Cost should not be an excuse at the expense of safety. Plain and simple.


If that was true, nobody would ever ship anything to do with safety for less than a million dollars. We make trade-offs between cost and safety all the time. Doctors walk that line day by day, it's a big part of their job. In particular, car safety regulations walk a very fine line between safety and cost. No car regulations require every car to have all and every one of the best and most advanced safety features. If they did, no cars could be sold for less than hundreds of thousands of dollars and they'd all look and perform like blocky vans with great huge crumple zones.


What's unsafe about radar cruise plus lane keep? People act like Tesla is shipping cars that fling themselves into pedestrians at every opportunity. Somehow we all manage to absolve the auto maker when someone with cruise control set on their 1998 Mazda rear ends someone on the freeway. Let's judge Tesla for autonomous safety when they produce an autonomous car.


What's unsafe about radar cruise plus lane keep?

It takes longer for a driver to react to a problem in that mode than to react without it.[1][2] There have been full-motion car simulator and test track studies on this. Even with test subjects who are expecting an obstacle to appear, it takes about 3 seconds to react and start to take over vehicle control from lane keeping automation. Full recovery into manual, where control quality is comparable to normal driving, takes 15 to 40 seconds.

There are now many studies on this, but too many of them are paywalled.

[1] http://www.sciencedirect.com/science/article/pii/S1369847814... [2] http://www.iea.cc/congress/2015/252.pdf


I am not sure why this tech is acceptable, at all.

People fall asleep even while actively driving the car. How can they be expected to maintain vigilance with something like this?

But I guess Tesla is content with ending their responsibility at "informing the user that they should be vigilant at all times, even when the car is driving itself", without considering how feasible that is.

Another funny thing about it is that, earlier, with regular cars, you only had to watch for errors from the other drivers on the road. Now you have to watch other cars and also the mistakes made by your own car's AI...

What could go wrong?


I know it's popular to bash on Tesla's current challenges with their Autopilot software, but I think it's a bit unfair to expect them to be back in front of the pack just yet. They had their big breakup with MobilEye in, what, September last year? Nine months is pretty quick to go from total reliance on a vendor package to a reasonably functional fully in-house system.


Can you explain the 'back' part of "expect them to be back in front of the pack"?

As far as I know they've never been at the front of the pack.


With AP1 they were the only company that had anything approaching level 2/3 in a publicly available production vehicle. Google might have been ahead of them but it's hard to tell since all we ever saw were carefully staged demos.


Was it? Weren't things like Distronic+ w/ Steering Assist (and Audi's and BMW's equivalents) almost the same thing?


Similar, but from what I've read, Tesla did it better. I think this was the comparison I was thinking of in particular:

http://www.caranddriver.com/features/semi-autonomous-cars-co...


I chalked some of the differences up to those systems being more defensive than AP.

For example, I'd be surprised if you took one of those competing systems to "fail road" and it started to veer the way Tesla's system does instead of disengaging.


Actually, they are and have always been in front of the pack.


Tesla had been working on a Mobileye replacement for some time before they parted ways.


>Tesla will try to ship a self-driving system before [...] People will die because of this.

On the other hand pushing the envelope on self driving technology using cheap sensors will probably help reduce the world's 1m annual auto deaths earlier than otherwise. Thousands of people will not die because of this.


They really only hired a well-known, cool guy for a post with a lot of exposure. Your comment feels like pretty intense overanalysis to me.


Their previous guy quit. That indicates a problem.


Human drivers are also not equipped with a LIDAR. We rely on stereo vision combined with a couple of low-tech instruments (rearview mirror, left and right side mirrors, and looking over the shoulder, to achieve an approx. 250-300 degree field of view) to navigate the road in a vehicle. If you extrapolate the current state of AI and treat 10-20 in-car cameras + radar as the equivalent of what a human brings to the table, then I fail to see why Tesla has painted itself into a corner.


Agreed. Raquel Urtasun's research is the cutting edge in this area: https://www.cs.toronto.edu/~urtasun/. And she was recently hired/retained by Uber to lead their robot car efforts in Canada. Here's a recent video of her research from the National Academy of Sciences: https://www.youtube.com/watch?v=sW4M7-xcseI


true, but bird wings flap and airplane wings don't flap.


Do submarines swim?

Doesn't matter; only the results matter. Planes work and work well - they crush birds in every performance metric, and sometimes literally. Can a self-driving car be made safe without LIDAR? I suspect so, but I am not certain - and I am no expert.


Have there been advances in using LIDAR in rain/fog/snow? For all the autonomous car demos this seems like too large of a use case to gloss over...


Yes, but they haven't made it down to the automotive level yet.

Most automotive LIDARs just report the time of the first return, but it's possible to do more processing. Airborne LIDAR surveys often record "first and last"; the first return is the canopy of trees or plants; the last is from ground level.

It's also possible to use range gating in fog, smoke, and dust conditions.[1][2] Returns from outside the range gate are ignored. You can move through depth ranges in slices until something interesting shows up. This seems to be in use for military purposes, but hasn't reached the civilian market yet.

Range gated LIDAR imagers have been around for at least 15 years. By now, it should be possible to obtain a full list of returns for each pixel for several frames in succession, crunch on that, and automatically filter out noise such as rain, snow, and dust. It's a lot of data per frame, but not more than GPUs already handle. Some recent work in China seems to be working to make range-gated imaging more automatic in bad conditions.[3]

[1] http://www.sensorsinc.com/applications/military/laser-range-... [2] http://www.obzerv.com/en/videos [3] http://proceedings.spiedigitallibrary.org/proceeding.aspx?ar...
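
A toy sketch of the filtering idea (my own formulation, not any vendor's pipeline): keep only returns whose computed range falls inside the gate, then take a median across frames, since rain and dust hits are uncorrelated frame to frame while real obstacles are not.

    import numpy as np

    C = 3e8  # speed of light, m/s

    def gated_range(return_times_s, gate_m=(20.0, 60.0)):
        # Convert round-trip times to ranges and drop returns outside the
        # gate; near-field rain/dust splatter and far clutter never get in.
        ranges = np.asarray(return_times_s, dtype=float) * C / 2.0
        inside = ranges[(ranges >= gate_m[0]) & (ranges <= gate_m[1])]
        return float(inside.min()) if inside.size else None

    def denoise(per_frame_ranges):
        # Median over successive frames knocks out sparse weather hits.
        valid = [r for r in per_frame_ranges if r is not None]
        return float(np.median(valid)) if valid else None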


I find it funny that people can seemingly flippantly state that Tesla (or any of these cars) has "weak sensor systems", as if it were such a trivial problem.

"Well, there's your problem right there, let's just slap on some strong sensors and you should be good to go!"

You know what has a weak sensor system? Any car without any sensors.


Yea this seems like the key question. I don't know if Level 5 in two years is feasible, but if it's Level 5 or bust, then LIDAR won't fly (AIUI; would certainly welcome corrections).

On one hand, not pulling in potential safety improvements because they only work in good weather seems wrong, but on the other hand...that might be what needs to happen from a cost/marketing/legal perspective.


> Tesla will try to ship a self-driving system before that while trying to avoid financial responsibility for crashes. People will die because of this.

This is a pretty strong statement. Would you sign up for a slightly more specific version of your claim?

"I believe the Tesla self-driving system that ships by the end of 2020 will be statistically less safe than unassisted human drivers."


Full autonomy conflicts with Tesla's business model: selling cars. I think Tesla's real goal is to build the easiest-to-drive mass-production car. Mass production means the car always needs a person to babysit it. For full autonomy, they don't need mass production (>10M units) to build the service network.


> People will die because of this.

And not just drivers of Teslas.


>Or even avoid a pothole.

Humans can avoid potholes with one eye. I don't know why you assume LiDAR is a requirement for this.


One eye backed by millions of years of training in depth perception and object recognition... The argument that humans are able to drive with just two cameras and software (the brain) is deeply flawed: the brain is highly advanced in the areas necessary for driving, and Tesla's claim that it will replicate that any time soon is absurd. This is why you need additional sensors like LIDAR to alleviate the computational load.


The human brain is a hard AI entity that can think through problems in any generic situation.

In self-driving AI you are programming the car to do a specific thing. Sooner or later you will run into a situation in which the algorithm will panic and can't do much.


Humans cannot and do not "think through problems in any generic situation", and specifically when it comes to driving, they fail with fatal consequences about 100 times every single day in the U.S. alone. Applying such unreasonably high standards (i.e., algorithms that never fail) before we allow self-driving technology to be deployed actually kills people.


I am not convinced Kamaal was arguing against self driving cars. He might have been arguing that they could be simpler and more achievable. It could be just fine as long as when the software panics it does something sane like slow to a controlled stop and turn on the hazard lights.
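
A sketch of that sane-fallback policy, with invented names and thresholds (the industry term is a "minimal risk maneuver"):

    def choose_action(plan_found: bool, perception_confidence: float) -> str:
        # If the planner has no valid plan or perception confidence
        # collapses, don't hand control back mid-crisis; do a controlled
        # stop with hazards on instead.
        if not plan_found or perception_confidence < 0.5:
            return "hazards_on_and_controlled_stop"
        return "follow_plan"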


If only the brain would be smart enough to focus on driving instead of distractions...


Yeah, Tesla may fail to ship a safe LIDAR-less car.

But the problem of making a self-driving car without LIDAR or something equivalent is awesomely challenging! And I bet Andrej Karpathy will really enjoy working on it.

And the tech resulting from this line of work will surely find its way into other things. (I guess the military has wet dreams about this stuff... I mean, even "unsafeness" can become a "feature" here: "uhm, look, that school we blew up by, uhm, mistake... was an AI-error... like... this stuff happens, you know, even Tesla's cars have an accident from time to time, that's life". Well, those dreams could also be nightmares: basically any "self driving thingie" is a potential guided missile, and dirt-cheap-because-lidar-less stuff has the potential of becoming ubiquitous, and unmaintained/unupdated/unsecured/hackable, leading to nightmarish urban warfare scenarios...)

And: "People will die because of this."... Uhm, yeah, they will, but if people ain't dying it means research is not moving fast enough, and competition will overtake you. I'd be more worried about when this stuff will be deployed on buses with tens of people, but hopefully public transport would stay a safe decade behind bleeding-edge stuff :)

And about Tesla: however this plays out, Elon Musk has made quite a lot of what would technically be considered "bad business decisions" and things have turned out OK so far... so I wouldn't feel sorry for them or short their stock ;)


> but if people ain't dying it means research is not moving fast enough

Could you help me understand this further? It feels quite insensitive to me.


If companies wait for the tech to be perfectly safe, it will never be released.


Since it seems to be a thing to report that person X specializing in machine learning has moved from company Y to Z, it makes me wonder if other areas of computer science are seen as relevant by the general public.

One rarely hears that Dr. John Doe from Florida State University (or insert non-Stanford university here), in distributed systems, has moved from Microsoft Research to NetApp. These are arbitrary names. The point is you rarely hear about people from areas of CS outside of machine learning, or universities outside of Stanford, moving from one company to another. The field of CS is vast, and there is a multitude of practical and theoretical problems outside of machine learning that are worth looking into (ones that aren't currently considered hip or cool by the public).


You touched on a much wider human phenomenon: attention isn't evenly distributed. The media isn't pushed to report in proportion to an individual's impact. It's guided by trends, aka what people want to read.

AI is hot. So therefore there is a huge spotlight on all angles there. You can argue whether that is actually fair (personally, I do think AI is a high beta field). Topics that don't fall under this are regarded as inside baseball.


Media coverage and popularity always works this way. I'm not sure why so many people perpetually can't wrap their head around it. There have been endless comments online where people are offended or surprised that person X is getting attention when other people are just as good.

But obviously there's more to drawing people's attention than individual skill or a comparable position at a different company. As you mentioned, this top-of-HN ranking is driven by joining a trendy company, leading a hugely hyped product team (the Tesla automation stuff was on the front page yesterday), and a person with a really hot skillset.

Combined of course with the usual luck and good timing.

Is it really hard to see how this is much more interesting than someone joining a relatively standard branch of Microsoft?


Karpathy in particular has built an excellent personal brand, lots of it through his blog. It seems that technically backgrounded "explainers", à la Neil deGrasse Tyson or Richard Feynman, Elon Musk or even Bill Nye, can credibly straddle both expert and layperson worlds. People like to feel like they know what's going on.


Part of me dies inside every time I read the phrase "personal brand". I think it's sad that it is no longer enough to be good at what you do--you need to self-promote, blog, and talk talk talk in order to make it these days. What happened to also recognizing the quiet but competent craftsmen?


Out of interest, what makes you think it was ever not this way? I recently reread the Isaacson Ben Franklin biography and one thing which struck me was how much time he put into crafting his public persona.

The comment up this thread holds true: human attention is not evenly distributed. That doesn't mean, however, that there's an imperative to "network" or build a "personal brand" – plenty of people gain a deep satisfaction from excelling at their craft.


Maybe I'm seeing the past through rose-colored glasses, but it seems there was once a time in Silicon Valley when you could make it big as a pure technologist and not have to always be marketing and selling yourself. Maybe I'm just fooling myself.


As a pure technologist you'll never be Steve Jobs; at best, if you're exceedingly lucky, you might manage to be Steve Wozniak. However, Steve Wozniak's fortune and minor celebrity status owe a great deal to Steve Jobs, whose success in marketing and selling himself was so great that it earned the name Reality Distortion Field.

If "make it big" just means a giant pile of money, there are plenty of millionaire pure technologists at Silicon Valley companies whose names are never told; the thousand or so that were created when Google IPO'd are basically unknown. Forbes had a recent article advertising Craigslist competitors, but reading between the lines, Craigslist has minted some of them too - yet they're entirely nameless among the wider population. If that's your definition of "making it big", then it's possible, but if you want broader recognition, I don't know that it's possible.

Maybe I'm being unimaginative, but outside of Steve Wozniak I can't think of any pure-technologists with household name recognition. The closest that comes to mind is Elon Musk, but unfortunately for you, there's plenty of marketing going on. I'd bet a large number of readers even here won't even recognize the name Vint Cerf.

Maybe you feel marketing is about lying; maybe selling yourself feels icky. However, they're skills like any other; refusing to learn and use them would be like refusing to learn or use multiplication.

Read Sam Altman's praise of Greg (gdb) (http://blog.samaltman.com/greg), who is quite the gifted technologist, but the praise is for his dedication across both technical and non-technical work.


I'm not after recognition or household-name status--quite the opposite. I'm just a normal, unremarkable technologist who's getting old and wondering whether I should have spent the last 20 years blogging and self-promoting rather than quietly polishing my skills. It feels very uncomfortable that "becoming a tech celebrity" has emerged as a legitimate path to advancing one's career.

EDIT: Not saying these celebrities don't also earn their keep through their skills. It's just disappointing how much of a factor self-promotion is.


I haven't been around here very long so I defer to your memories :) But my suspicion is that if it's not the technologist marketing herself, then someone else is marketing her. There's almost always more to these things than meets the eye.

In a valley of smart and motivated people, discoverability will always be a challenge...though I don't doubt it's much more competitive now than ever.


What you are seeing is the result of the big push to get everyone into tech several years ago. You didn't have to market yourself in the past because there were more jobs than people; having ability was enough for eager employers to find you. Now skilled professionals are everywhere, and employers don't need to make the effort anymore.


It may have not been called a personal brand, but I think it's fair to say that many past inventors, scientists, generals, artists, etc. who we remember today were pretty adept at self-promotion.


I think that depends on what your interpretation of "making it" is.

There are millions of quiet, confident, competent people across industries. People who reliably turn out high-quality products and are well paid for their work. They get on well with their colleagues and progress in their careers at a decent pace. You just don't hear about this much. Doesn't this count as making it?

I'd say that the reason you hear more about people who are well-known is essentially just because they are well-known :)


> Media coverage and popularity always works this way. I'm not sure why so many people perpetually can't wrap their head around it.

It probably wouldn't work this way if people could wrap their heads around it.


> I'm not sure why so many people perpetually can't wrap their head around it.

The real question is: should we (as the thoughtful human beings we consider ourselves to be) bother to question the trend (which you are surprised that people are doing), or silently accept it?


If we're going to bother questioning trends, shouldn't we aim to question in a way that differs from everyone who has questioned before us? Because, as pointed out by GP, it would appear that previous questioning has failed to have an impact. Just asking the same questions over and over doesn't get us anywhere.


>If we're going to bother questioning trends, shouldn't we be aiming to be questioning in a way that is different from everyone else who has questioned before us?

That would be nice.

> it would appear that previous questioning has failed to have an impact.

Just because something does not cause a change does not mean it has no impact. Maybe the impact is that it keeps things from getting worse... In this case, questions like these may help to balance the influence of "trends" and help us maintain perspective...


The person being replaced is Chris Lattner, creator of LLVM and Swift, whose departure from Apple to join Tesla was big news fairly recently.


Dunno if this is his real account, but it is supposedly not working out https://twitter.com/clattner_llvm/status/877341760812232704


Huh, that's strange. He's been at Tesla what, 6 months? Interesting to see that he's moved on already.


We'll see how long Karpathy lasts.


Must be nice to post a tweet that you're looking for "interesting roles" and immediately have the replies fill up with "come to Microsoft", "come to Google", etc. Strange that he would just throw Tesla under the bus like that. Anyone know why?


How is he throwing them under the bus? He said it turns out he didn't fit in well at Tesla. That's just being honest.

If the alternative is phony politeness and masking reality under a guise of everything went great then I prefer this approach.

This is a great aspect of the software industry IMO. And if you want to get lots of job offers then build some great OSS projects like this guy. It's a great way to demonstrate your skill and attract a following which guarantees you job offers.


I don't know, to me the statement begs the question "So why wasn't it a great fit?". People are going to speculate regardless of an announcement from you. If it were me, I would prefer to just keep my head down and let those who really care to know (or whom I care to tell) ask me (or Tesla) personally, without a bunch of unneeded commentators jumping in and putting negative connotations in my mouth.

But I hadn't seen that Tesla released a similar statement, and that changes the context quite a bit to match what you describe.


There are thousands of ecological niches. I'm the kind of programmer who takes time off to make gnocchi and think shallow thoughts about quantum physics. Not every company is into that. Some are. It's not a judgement against anyone, it's just a question of fit.


> I don't know, to me the statement begs the question "So why wasn't it a great fit?"

That's not "begging the question" [1].

[1] https://en.wikipedia.org/wiki/Begging_the_question


Interesting. What is Chris going to do now? He was a big hire.


Looking for a new job it seems.


I think it's fair and makes sense. A lot of Karpathy's work has been featured on HN and I find this news relevant in following what he's working on.

When Scott Aaronson wrote about moving to Austin, that post made the rounds here too.


I doubt that Andrej is known to the general public. Every decade, the field of computer science sees its share of buzzwords and trendy topics. It used to be programming languages or networking, and people in those fields made the news: Guido van Rossum joining Google, or Leslie Lamport joining Microsoft Research.


A lot of Karpathy's research and blog posts have been featured on HN. He made char-rnn, which spawned hundreds of spin-off projects that made it onto HN. He also comments here regularly and explains new research. So it makes sense that news about him would get attention here.


This same thing is an issue in many scientific disciplines. How many people can even name an area of physics research that isn't particle physics or general relativity, for example?


Well, physics is quite complex. I think electrical engineering is similar in that regard.

Personally, I am aware of the two you listed + experimental, condensed matter, and astrophysics. There is some overlap between physics and EE, so I may be aware of others.


There's lots of action in low-energy physics, down around absolute zero. There are jobs in semiconductor device physics.


Biomedical


biomedical physics?


Next big thing. Get in on the ground floor


Tesla's market cap exceeds Ford's and GM's. Their ability to surpass Uber is contingent on hiring the talent to develop self-driving cars.


Seems to be taking Chris Lattner's place:

https://twitter.com/clattner_llvm/status/877341760812232704


Wow! Tesla's official statement: "Chris just wasn’t the right fit for Tesla, and we’ve decided to make a change. We wish him the best." [0]

Chris's response: "Turns out that Tesla isn't a good fit for me after all. I'm interested to hear about interesting roles for a seasoned engineering leader!"

I don't think I've ever seen a tech company throw an employee under the bus so publicly. I wonder what Lattner did to warrant such a public separation?

[0] https://electrek.co/2017/06/20/tesla-autopilot-chris-lattner...


Don't people here like it when a company doesn't indulge in PC bullshit? I think Tesla's statement is to the point.


You have to be careful how you let people go if you want to attract good talent. There is always a chance of a misfit, and the right thing to do is to part ways as soon as either party realizes it, but you want to do this in a way that is not unnecessarily damaging to either party's reputation. Otherwise, good people with a reputation may think twice about working with you.


Reputation is for B players. Chris Lattner can go work for any company in the world that can afford him.

A players don't look at company's reputation, they look at things that actually matter.


You are contradicting yourself. Being an A player is your reputation. No one is going to hire you into a senior-level role without you first having a good reputation.


A players don't worry about getting hired into senior level roles.

They don't worry about their reputation either.

If you worry about any of that, you're not an A player :)


Exactly, not everyone is interested in playing the pc game. What if they mean exactly what they say: not a good fit. That doesn't mean he is bad at his job.


More specifics from Lattner's online resume

"In the end, Elon and I agreed that he and I did not work well together and that I should leave, so I did."

This part was removed after one day


Are we sure it wasn't the other way around? Maybe Chris really didn't like the engineering culture or something else and decided to leave, but given how big a story it was when he joined they needed to both put out a statement.


The company saying "we've decided to make a change" makes it brutally obvious he was pushed - Chris should have gone for a compromise agreement.


Both are definitely high profile enough that it would be strange if one said something and the other didn't.


I don't know if there's any better way to downplay a major hire leaving after less than half a year. Everything in those statements is extremely obvious from the circumstances and I can't think of a more cordial way to phrase it.


There are much better standard ways to phrase this and that's precisely why you almost never see it put that way. If for no other reason than that if hiring high-profile talent is something you need to do more than once, you don't want to be publicly suggesting it's somehow their fault when they leave.


I didn't take it this way at all. Why can't "he's not the right fit" mean exactly that? People can be brilliant but just not fit the operating processes and culture at a company. I didn't really read anything negative into it from either side.


"we’ve decided to make a change" could easily be interpreted as "we've fired him".

In any case, judging by the reactions to his tweet it looks like he can pick and choose his next job.


Sure, you didn't but as you can see in this thread, it's trivially interpretable in lots of other ways. Which is exactly why it is usually not put that way. This doesn't seem like a complicated thing at all.


As another poster said, though, you're basically asking for more BS, wishy-washy corporate-speak, which is often what people on this forum rail against. Damned if you do, damned if you don't.


I'm not asking for anything. You were asking something and I tried to offer an explanation. The answer is mostly 'because lots of other people don't feel the same way as you on a (relatively minor and subjective) point of interpretation.' Applies just as well to your feelings about 'BS, wishy washy corporate-speak'.


It also helps avoid expensive lawsuits, especially with high-profile roles where the person has the money to sue.


Not really.


You think a wealthy C-level and an average engineer have the same access to legal redress?

In that case I have a bridge I'd like to sell you.


How did they throw him under the bus? I interpret both of their statements to be neutral and honest.


Bet Chris Lattner is heading to Google.


If google were to embrace swift that would be spicy.


At this point, not really. The compiler is slow and buggy. (And to make the type system faster would take some interesting theoretical breakthroughs.)


The compiler is being worked on pretty heavily. And slow compared with what? Go? C? Generics are computationally expensive no matter what, so the question boils down to generics vs. not. Le_no_generics.maymay.jpig


It is not just about generics, though; it is about using a constraint solver to type-check programs in a type system that is needlessly expressive.

Also, it is slow compared to almost everything, even C++. It has gotten better over the last 3 years, but most of that is from pumping heuristics into the system.
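
To make the constraint-solver point concrete, here's a minimal, hedged sketch (exact behavior varies by Swift release): every literal below has many candidate types, and the operators are overloaded across all of them, so the solver's search space grows combinatorially with expression length. Longer expressions in this family have historically triggered "expression was too complex to be solved in reasonable time":

    // Each literal could be Int, Double, Float, UInt8, ... and every
    // `+` / unary `-` is overloaded for all of them, so the constraint
    // solver explores combinatorially many typings before picking Int.
    let n = -(1 + 2) + -(3 + 4) + -(5 + 6) + -(7 + 8) + -(9 + 10)
    print(n)  // -55; older compilers slowed down badly on longer variants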


I would think Google would already be using Swift for their iOS apps and all. As for other stuff, they have lots of choices, and Swift can't be ruled out. After all, they also use TypeScript, developed by MS.

I think many want rewrites of fine, working software to show a language's worth. This is startup or single-developer stuff; large professional companies rarely rewrite stuff on a whim.


Of course they do; the question is whether they'd use it as a first-class language.

> After all, they also use Typescript developed by MS.

Of course; it's the best language for the purpose.

> This is startup or single developer stuff, large professional companies rarely rewrite stuff on whim.

That's not correct. https://martinfowler.com/bliki/SacrificialArchitecture.html#...


I don't know, a bad fit may be just that. Could be more to it, but maybe not.


> I wonder what Lattner did to warrant such a public separation?

Failed to make AP2 work well with cameras alone would be my guess. Tesla is hitting a glass ceiling with its sensor hardware, and the future isn't going to be pretty. Expect more changes in engineering leadership until Musk realizes he needs better data (sensors) for his neural nets.


From his online resume

"Tesla

VP Autopilot Software January 30 - June 20, 2017 When I joined Tesla, it was in the midst of a hardware transition from "Hardware 1" Autopilot (based primarily on MobileEye for vision processing) to "Hardware 2", which uses an in-house designed TeslaVision stack. The team was facing many tough challenges given the nature of the transition. My primary contributions over these fast five months were:

We evolved Autopilot for HW2 from its first early release (which had few features and was limited to 45mph on highways) to effectively parity with HW1, and surpassing it in some ways (e.g. silky smooth control). This required building and shipping numerous features for HW2, including: support for local roads, Parallel Autopark, High Speed Autosteer, Summon, Lane Departure Warning, Automatic Lane Change, Low Speed AEB, Full Speed Autosteer, Pedal Misapplication Mitigation, Auto High Beams, Side Collision Avoidance, Full Speed AEB, Perpendicular Parking, and 'silky smooth' performance. This was done by shipping a total of 7 major feature releases, as well as numerous minor releases to support factory, service, and other narrow markets. One of Tesla's huge advantages in the autonomous driving space is that it has tens of thousands of cars already on the road. We built infrastructure to take advantage of this, allowing the collection of image and video data from this fleet, as well as building big data infrastructure in the cloud to process and use it. I defined and drove the feature roadmap, drove the technical architecture for future features, and managed the implementation for the next exciting features to come. I advocated for and drove a major rewrite of the deep net architecture in the vision stack, leading to significantly better precision, recall, and inference performance. I ended up growing the Autopilot Software team by over 50%. I personally interviewed most of the accepted candidates. I made massive improvements to internal infrastructure and processes that I cannot go into detail about. I was closely involved with others in the broader Autopilot program, including future hardware support, legal, homologation, regulatory, marketing, etc. Overall I learned a lot, worked my butt off, met a lot of great people, and had a lot of fun. I'm still a firm believer in Tesla, its mission, and the exceptional Autopilot team: I wish them well."

The first draft ended "In the end, Elon and I agreed that he and I did not work well together and that I should leave, so I did."


> I wonder what Lattner did to warrant such a public separation?

Or what Tesla did. Why are you assuming it's Lattner's fault?


It was always a weird fit having a compiler and programming language designer lead an AI team.


Exactly. Chris is very smart but this is unlikely to be a good fit.


I disagree (that it was inevitable...obviously it ended up not being a good fit, ha). This comment from the original announcement has an insightful point of view on how much sense it could make to have someone concerned with correctness and reliability heading the AI/Autonomous Driving team.

https://news.ycombinator.com/item?id=13370144


The most insightful part seems to be:

"Naively one might have expected some machine learning expert to take over the reins at Tesla."

Here we are.


Chris accomplished a great deal in a short time at Tesla but ultimately could not work with Elon Musk. He would have been a great fit if Elon weren't a maniac.


If the main source of problems was related to software architecture, it may have made sense. Academic AI people may be stronger on theory and end up building unmaintainable systems. OTOH, good architecture will be responsive to the fundamental underlying problems being solved, and that requires depth.

Compilers give a good base for transferring to things like databases, operating systems, and IO-heavy systems with lots of transforms, filters, etc. They also ingrain a way of thinking that isn't native to most devs: writing code that generates code. Monads and other approaches to dynamically composing a computation come easy.
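
To illustrate that last point with a hedged toy sketch in Swift (parse and halve are invented stand-ins for pipeline stages): Optional's flatMap is the monadic bind, and chaining it gives exactly that "dynamically composed computation" shape.

    // Two fallible stages composed monadically via Optional.flatMap:
    // if any stage returns nil, the rest of the chain is skipped.
    func parse(_ s: String) -> Int? { Int(s) }
    func halve(_ n: Int) -> Int? { n.isMultiple(of: 2) ? n / 2 : nil }

    let good = parse("42").flatMap(halve)  // Optional(21)
    let bad = parse("41").flatMap(halve)   // nil
    print(good as Any, bad as Any)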


I agree with that. I personally think AI (and especially topics like deep learning) only makes up a small part of autonomous driving, or of driver assistance in general. The remaining parts will be lots of old-fashioned control theory, signal processing, general good software engineering practices, having a good software and system architecture, and being able to build a fully deterministic (hard-realtime-capable) system. For those latter topics a highly experienced person with more of an engineering background would be a better fit than a researcher with a mostly algorithmic background.


That's the best explanation I've yet seen of why he left.



I haven't seen this mentioned yet, but in the spirit of the high-profile failures (or misuse?) of the Tesla autopilot system, could it be that Chris had a fundamental disagreement with Tesla management over the direction and technical grounding of this feature?

All rank speculation, of course, but maybe he didn't like what was "under the hood" of this feature and how it was being developed and marketed?

Just spectator curiosity, but interesting to ponder nonetheless.


I follow Karpathy on Twitter and really enjoy his blog. I do fear that his impact at Tesla could be less than his impact at OpenAI. OpenAI had some fundamentally great research and ideas.

I wish him the best, though. Hopefully some of Tesla's algorithms will be open source someday, and those of us who can't afford a Tesla will be able to use them as well.


Tesla: "All Our Patent Are Belong To You"[0]

[0] https://www.tesla.com/blog/all-our-patent-are-belong-you


Not relevant. The software may not be patented, but it's also just not published. You would have to reverse-engineer it to get at the algorithms, and the car is surely heavily encrypted.

It's like Musk said about SpaceX: publishing patents is just like putting out a recipe book for China.


it's also a recipe for future generations... or a backup if you will...


But not their "trade secrets". It's striking how little is publicly known about how each vendor's self-driving technology works. This is a technology that's grown up since the anti-patent "America Invents Act", and, as a result, there are few patents and much mystery. On rare occasions, someone gives a technical talk, but papers are seldom published.


Interesting, to move from an Elon-Musk-chaired non-profit to an Elon-Musk-owned for-profit.


Also, this is very impressive for someone who finished his PhD less than two years ago.


He is one of a very few PhD students whose work I'd heard of and followed before they graduated. He had a great & informative blog, too.


Yeah, it's always nice to see people overcome this particular handicap.


OpenAI was always a pipeline for Musk: use other people's donations to attract and identify the talent, then move them onto his payroll.

I guess that's a win-win for the employees and for Musk. I'm not sure how many other supporters OpenAI has, though I doubt this is what they had in mind when they donated to support the effort.


It's not like Musk doesn't donate to OpenAI himself..


That he spends money to find talent for his own company isn't so surprising. He's just done it in a new, slightly abstract way


Worth noting that OpenAI may not otherwise exist. It mightn't be precisely ideal, but I think it's broadly a 3-way win.


Their mission, to add more open source tools to the AI space, is already being accomplished by for-profit companies like Google and Facebook.

If it weren't attached to Musk, and if Tesla never hired from them, I'd agree it's good to have a non-profit in the mix. As it is, if it looks and acts like a pipeline for talent, it's a pipeline for talent.


I don't think he ever intended to stay at OpenAI long, to be honest. He was a founding scientist of the nonprofit, so I think he went there to do his time until he found an interesting opportunity (either in academia, since he had just finished his PhD, or in industry).


Anyone have any insight into what the top ML people are being paid right now?


The equity packages tend to be in the tens of millions for a 4-year vest.

Source: I am a manager who has given offers to top-tier ML experts.


If you are a hiring manager for top-tier ML experts, why would you be trying to inflate market value with outrageous claims?


I guarantee you that my comments here will not influence the market rate for top-tier ML experts.


I feel so small.


Just to be clear, "tens" as in multiples of 10 million (i.e 20+ million)?


Depends how close to the "top" you are, but yes. 20-30 million/4 years is not unheard of for the very top -- comparable to an acquisition.


Yes, but what is the salary? Tens of millions in stock could end up being worth $0.


At early stage startups, not at public companies like Tesla


Ah, good point.


With yearly refresh?


No idea why you are getting downvotes; this is accurate.


If I fail at startup life, this is an obvious next step in my career.


Your obvious next career step is to be a top-tier ML expert?


500k+ for someone with a lot of experience.

300k+ for a new hire ML/CV/NLP PhD with some relevant experience.

150k+ for a new hire ML/CV/NLP MS with little to no experience

We were working with a very expert ML contractor who is doing 800k on his own from pop-up projects.


The top of the range you're quoting is for a typical Staff Software Engineer at Google/Facebook/Microsoft. "Top ML experts" garner much, much higher comp packages.


These are representative of salary numbers for new hires.

Agreed though on total comp for top ML experts who have been around for a while - or the highest end.


Yeah these ranges are such a joke. Top new grads at Google are getting comp packages along the lines of 115 base, 450 RSUs / 4 years, 60k signing, 15% bonus these days. L5s regularly make 400k+ all in.

The obvious reality is that top people rarely talk about their comp packages, as there is no reason to rock the boat.
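
Back-of-envelope, taking those quoted new-grad numbers at face value (a rough sketch, not anyone's actual offer):

    // First-year total comp from the figures quoted above.
    let base = 115_000.0
    let rsuPerYear = 450_000.0 / 4  // assuming even vesting over 4 years
    let signing = 60_000.0          // one-time, year one only
    let bonus = 0.15 * base
    print(base + rsuPerYear + signing + bonus)  // 304750.0, roughly 305k in year one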


Are these numbers for real, and are they all Bay Area? I'm further north and pull nowhere near these numbers as an ML PhD with a lot of experience, now in the tech industry. Have I goofed on all my negotiations?


Probably real, yes only Bay Area, and no not the norm for ML. First of all only a few companies can pay that much (Google, Facebook, LinkedIn, Netflix, etc) and these are outliers. The average ML engineer in the Bay Area does not make 500k/yr total comp.


An ML engineer is a programmer, usually with a BS in CS, sometimes an MS. They are, in the end, only engineers.

The AI scientists, those working in computer vision, natural language, and audio and developing novel networks and training methods, make at least $500K/year. I've been a data scientist, and the pay (and work content) was a joke. I switched to AI and damn, the work makes you think and you get paid like a mid-range NBA star.


How did you make the switch from being a normal developer to doing AI specialist work?


Also interested, yes.


I went back for a PhD. I know a lot of people can't do this, but this is the reason why there are few AI specialists. You just can't become an expert by reading blogs and even research papers online. You need the full specialist environment, from discussions outside the bathroom to the drawings on the whiteboard.


And here I am dreaming of achieving it by doing MOOC courses :)... So it's not possible at all this way?

Theoretically, one would think that just reading blogs, watching videos alongside MOOC courses, and spinning up GPUs in the cloud should do it.


Fei-Fei Li said Karpathy was offered more than a million out of school by an unnamed firm. One can only imagine what people like Alex Graves and Nando get paid...


Karpathy. Car pathy.

* I hate myself for this.


You may enjoy (or hate) Wikipedia's article on aptronyms: https://en.wikipedia.org/wiki/Aptronym


Andrej paired with chip guru Jim Keller [1] (vice president of autopilot hardware engineering) should be an amazing combo.

[1] https://en.wikipedia.org/wiki/Jim_Keller_(engineer)


I think he'll regret accepting the position. Impossible deadlines are going to force him out in under 12 months.


As much as I want to see Tesla and Elon Musk succeed, I wouldn't work for one of his companies. So I agree.


Likewise. As notorious as Amazon became for its negative work/life balance, it seems like every Musk-run company is at least twice as bad. People regularly throw out 80hr/week numbers when discussing life at SpaceX. Thanks, but no thanks.


So where does this leave OpenAI?


A fair question. I suspect OpenAI's research agenda is actually less idyllic than most think since a) they never promised to "change the world", and b) their projects eventually must serve multiple masters with vested interests.

I don't really grok OpenAI's long-term mission plan. "Keeping AI open" seems too banal to survive long amid a cultural ethos as purposeful and dynamic as SV's. Perhaps the prospect of meandering along their rather random path proved less pregnant with possibilities than AK had hoped.

Or maybe Tesla promises to build more than just KITT? Musk has huge ambition. I suspect Karpathy was enraptured at the prospect of sinking his teeth deep into a wide variety of tasty pies there.


OpenAI is a weird animal. Their research seems generally less impactful than that of the other top-tier labs out there, but they do have great people on the team. Is it because their engineering infrastructure is subpar?


Depends on how you see it, though; the most impactful stuff is not necessarily the most reported in the media. As always, the media lags these things, and so the public doesn't get to see shit coming.


I'm pretty up to date with this field, and I don't really get my DL ("AI", though that's a pointless buzzword) news from the media.

As far as I can remember, the most important research from OpenAI is InfoGAN; the rest feels somewhat short of outstanding. Their open source record, on the other hand, is pretty solid: gym/universe and various implementations and tests of existing models do live up to the openness in their name.


I heard there were layoffs of engineers there. But obviously unrelated. OpenAI will continue unchanged.


I imagine Pieter Abbeel and the rest of the gang will be able to carry on in his absence.


This. Abbeel is a highly underrated research powerhouse.


I'm curious why you think Abbeel is underrated. He is one of the few ML researchers to not succumb to the pull of the industry (at least completely) and his work is highly respected in the community, especially continuous control.


Doesn't OpenAI get a large part of its funding from Elon?


I mean OpenAI is not a competitor to Tesla.


But it very well might be a beneficiary of Tesla's Autopilot machine learning work. Mr. Musk is not shy about sharing tech among his companies (Tesla adopted friction stir welding techniques pioneered at SpaceX [1]). There are also the patents Tesla offered to license to other automakers [2] to help accelerate the shift to electrified transportation.

Now, granted, SpaceX does not file patents because their competition is primarily nation-states [3], but I feel that's a valid concern with regard to that domain.

"We have essentially no patents in SpaceX. Our primary long-term competition is in China," said Musk in the interview. "If we published patents, it would be farcical, because the Chinese would just use them as a recipe book."

[1] https://electrek.co/2015/05/24/spacex-transferred-novel-weld...

[2] https://www.tesla.com/blog/all-our-patent-are-belong-you

[3] http://www.businessinsider.com/elon-musk-patents-2012-11


Friction stir welding is really old... how did they pioneer it? Perhaps they perfected it?


Not friction stir welding itself; a specific technique.


OpenAI is pretty independent and has grown quite considerably. Karpathy was a founding research scientist but it seems like the nonprofit will march on just fine


This is great to hear. Andrej has contributed a lot to DL students worldwide, what with his online lectures and his writeups, and I'm glad he's continuing his upward trajectory. A very inspiring person.


I hadn't realized that Andrej was such a big star in the tech and deep learning communities. Wish him all the best; his deep learning course was amazing.


How old is Andrej Karpathy? I couldn't find it on Google.


From his website, he entered his BSc in 2005, suggesting he'd be about 30.


Probably around 30.


So it's over for the other carmakers :/


If A.I. can't fold my laundry, I wouldn't trust it with my car.


Driving a car is currently specialised, as is folding laundry. The opportunity for autonomous vehicles is clear; you can start with replacing the cost of paying every person around the world who drives for a living with profit equivalent to their income. (likely at somewhat less than 100% efficiency, but who cares because of the massive scale?)

The next bit would be converting private trips into automated rides; replacing car ownership. The unit profits will decrease as part of this effort, but the opportunity is again huge.

How will you monetise laundry-folding automation? At best you can replace some of the people at the commercial laundries and turn that into profit, but many laundries may hire one person who does everything. For home use you could sell a machine, but how much would you need to charge for it to equal the recurring revenue from autonomous vehicles, and how much are people willing to pay for a laundry robot? (Even robotic vacuums are still quite niche, despite probably being more convenient and costing less than LaundryBot.)


> replacing the cost of paying every person around the world who drives for a living with profit equivalent to their income

That's not how it works, because your competitors will undercut you. Then you will undercut them, and continue the race to the bottom until the natural price is reached.


The efficiency will certainly be a lot less than 100%, and it does seem likely that at least two companies might bring the tech to market within a couple of years of each other. I think whatever the starting price point is, there will probably be so much of a demand/supply gap that the initial price point will hold for a number of years.


Laundry bot would make no money, I think. People are happy to do this task on their own or via a maid.


Laundry bot would definitely make money, tons of it. Robo maid is like numero uno on things people have historically wished robots would do in the future


Before the invention of modern washing and drying machines, sure. But today many people in big cities don't even have a washer and dryer in their home, or may not have one in the building; this problem spans many different use cases of how people consume the service. I'd consider asking things like:

* Is the barrier higher and the value lower because they have laundry in their unit? Does the value increase marginally because they have it in the building? Or is it more lucrative because they don't have laundry at all?

* And does a solution scale easily across these different use cases?

* Because a person is required to fulfill this service, it lacks the continuity of a pure software play, which makes this a particularly pricey arena to enter. To find ways to make it economical, we'd have to consider co-opting existing behaviors or economic patterns. Are there _multiple_ existing patterns that we can enhance?


Yep, I paid around $2000 for a nice Hitachi drum-type washer-dryer machine last year.

I would certainly, without any hesitation whatsoever, have been willing to pay $5000 or even a bit more for one that, instead of just washing and drying all our clothes, took a pile of dirty clothes as input and output a few stacks of washed, dried, folded, and sorted clothes.

Anybody with children would want this; those who also have sufficient money would buy it.


It would be simpler to invent wrinkle free clothes and convince your wife clothes don't need to be sorted.


People are downvoting you. But I think this is a really good point...


It isn't, though. The two tasks aren't very similar. It's a pithy put-down if you already don't like the idea of driving AI for other reasons, but nobody who's looking to objectively evaluate a technology for use in cars would say, "Hang on, let's first try to make it fold laundry."


> The two tasks aren't very similar..

Care to elaborate? Is folding laundry harder than driving? Don't both involve following visual cues and making continuous corrections based on them...

>It's a pithy put-down if you already don't like the idea of driving AI for other reasons

And your comment can be seen as a pithy attack on criticism of something you are overly enthusiastic about "for other reasons"...

>nobody who's looking to objectively evaluate a technology for use in cars would say, "Hang on, let's first try to make it fold laundry."

That is not the point. The point is to evaluate how capable the current AI/visual-processing technology is.

And why not test AI advances on something much safer, like folding laundry, instead of putting them in a car that can actually kill people...

It is a good point.


AI isn't a magic black box which one just plugs into stuff. There are some core mechanisms, but those are already well tried in many other applications. The hard part is developing a specific AI for driving.

Also, self-driving cars are not new; they've been in development for over thirty years and have logged millions of miles, many on public streets. The problems remaining will be in edge cases, and you definitely can't use your "folding AI" to test for those.


No it isn't. Modeling how a robot interacts with cloth is actually quite difficult dynamically; much more difficult than modeling how a car operates.



If I understand it right, it takes this robot ~ 90 minutes to fold 5 towels.


The car is a legitimate next step for AI: computers and sensors are already integrated into vehicles and have been handling basic functions like parking. Computers are not integrated into the manual laundry task; that market has decided they are not economical or profitable to integrate into the chain.


Laundry seems like more of a mechanics problem. I'm sure it's doable, maybe not particularly cheaply/compactly though


Wonderful news!


What is so Hacker News about this? HN is turning facebooky with low-quality content.



