cycomanic is claiming that leobg is ignorant. They do this despite leobg displaying accurate historical knowledge and describing how the sensor works in a way that does not fundamentally disagree with the correction cycomanic implied leobg needed. As a reader of the comment chain, I have to ask why cycomanic thinks leobg is ignorant - they never articulated why. It seems to me that the most contentious and debatable claim leobg made was that full self driving requires solving vision regardless of whether or not you have LiDAR. If this was the reason - maybe it isn't, but if it was - the fact that everyone uses vision isn't evidence for cycomanic's position; it is evidence for leobg's.
You've retreated from this as the reason that leobg is frightfully ignorant, on cycomanic's behalf. That means the next most contentious claim is the one that builds on the foundation of the first - that vision being required makes LiDAR irrelevant. The problem is that once you concede vision is necessary, you run into trouble. The situations in which LiDAR is much better than vision tend to involve a vision failure, through a lack of light or heavy occlusion. There is definitely and necessarily a cut-off point at which leobg's claim becomes somewhat true. That denies the right to call him ignorant, because the principle of charity demands that we read his point as the one that maximizes the truth of his comments. So the claim of ignorance - which amounts to a character attack - becomes unjustified.
>of which LIDAR tells you is exactly 12.327 ft away, you still need to figure out whether it is a trashcan or a child. And if it is a child, whether it is about to jump on the road or walking away from the road. LIDAR does not tell you these things.
That is ignorant, because LIDAR together with processing can obviously tell you whether the thing is a trashcan or a child. The post by leobg is ignorant because, to my understanding, it implies that LIDAR does not provide enough information to make that determination, which is untrue and not how LIDAR works.
Now if they mean we still need some way to process this information and decide what the different things are, that's a bit disingenuous, because that is completely orthogonal to LIDAR vs cameras vs RADAR. Using that argument we could dismiss any of the other technologies ignoring the fact that more (and different) data typically allows you to make better decisions.
Thanks for the response. I agree that LiDAR can make that determination. I think he was confused about what it is possible to learn from the LiDAR output rather than about what the LiDAR sensor provides. His ability to distinguish the radar in earlier Tesla vehicles from LiDAR wouldn't be present if he thought they were the same sensor. I figured you were responding to his argument, which was outside the parentheses, rather than to his fallacious support for a true premise, which was inside them.
> using that argument we could dismiss any of the other technologies ignoring the fact that more (and different) data typically allows you to make better decisions.
Bellman gave us the Bellman equations, but he also gave us the term "curse of dimensionality." The equations he shared and the modeling problems he encountered are fundamentally related to modern reinforcement learning. More data doesn't come without cost. So often I hear people speak of introducing lower resolution as equivalent to introducing error, but this is a false equivalence. Because decision making has latency, introducing a lower resolution can increase the resolution error and still decrease the solution error. This result is so fundamental that it applies even to games which don't have latency in their definition. Consider poker. The game tree is too big. Operating with respect to it directly as an agent is a mistake. You need to create a blueprint abstraction. Applying that abstraction to the game introduces an error. It is a lower-resolution view of the game, and in some ways it is wrong. Yet if two programs compete with each other, the one that calculated with respect to the lower-resolution version of the game will do better than the one that did its calculations with respect to the higher-resolution view. High resolution was worse. The resolution without error was worse. Yet the game under consideration was orders of magnitude simpler than the game a self-driving car plays.
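To make the curse-of-dimensionality point concrete, here's a toy Python sketch. It's my own illustration, not anything from Bellman or from the poker solvers, and the function names and budget are made up: with a fixed number of states you can evaluate before a decision is due, the resolution you can afford per dimension collapses as the number of dimensions grows.

```python
# Toy illustration of the curse of dimensionality (numbers are illustrative only).

def state_count(bins_per_dim: int, dims: int) -> int:
    """Cells in a grid abstraction with bins_per_dim buckets per dimension."""
    return bins_per_dim ** dims

def affordable_resolution(budget: int, dims: int) -> int:
    """Most bins per dimension we can afford if we can only evaluate
    `budget` cells before the decision is due."""
    bins = 1
    while state_count(bins + 1, dims) <= budget:
        bins += 1
    return bins

if __name__ == "__main__":
    budget = 10**6  # cells we can evaluate before we must act
    for dims in (2, 4, 8, 16, 32):
        bins = affordable_resolution(budget, dims)
        print(f"{dims:>2} dims: {bins} bins/dim affordable "
              f"-> {state_count(bins, dims):,} cells; "
              f"at 10 bins/dim you'd need {state_count(10, dims):,}")
```

The blueprint abstraction in poker is the same trade made deliberately: accept a coarser carve-up of the state space so that the computation you can actually finish is the one that informs the decision.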
I've been paying some attention to this debate and I'm not yet convinced that the situations in which LiDAR is superior are sufficient to justify it. I think we agree on that already. For me, this shrinks the set of situations in which LiDAR can be considered superior - if vision is bad, but you need vision, then it's better to avoid the situation than to use the wrong thing [1]. So the situations in which LiDAR ends up being superior become a subset of the situations in which it actually is superior. That subset doesn't seem very large to me, because LiDAR + vision and vision alone are both necessarily going to be reducing the dimensionality of the data so that the computation becomes tractable.
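If it helps to picture what I mean by reducing the dimensionality of the sensor data, here's a minimal sketch of voxel-grid downsampling, one common way a LiDAR-style point cloud gets coarsened before any downstream computation. This is my own toy code, not anyone's actual pipeline, and the sizes are made up.

```python
import numpy as np

def voxel_downsample(points: np.ndarray, voxel_size: float) -> np.ndarray:
    """points: (N, 3) array of x, y, z. Returns one centroid per occupied voxel."""
    voxel_ids = np.floor(points / voxel_size).astype(np.int64)
    # Map each point to its voxel, then average the points that share a voxel.
    _, inverse = np.unique(voxel_ids, axis=0, return_inverse=True)
    inverse = inverse.ravel()
    n_voxels = inverse.max() + 1
    sums = np.zeros((n_voxels, 3))
    counts = np.zeros(n_voxels)
    np.add.at(sums, inverse, points)
    np.add.at(counts, inverse, 1)
    return sums / counts[:, None]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cloud = rng.uniform(0.0, 50.0, size=(100_000, 3))  # stand-in for one LiDAR sweep
    reduced = voxel_downsample(cloud, voxel_size=1.0)
    print(cloud.shape, "->", reduced.shape)  # far fewer points to feed downstream
```

The point isn't this particular trick; it's that whatever sensors you use, something like this happens before the planner ever sees the data.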
[1]: This isn't exactly uncommon as an optimization choice. It'll get dark later and you'll stop operating for a time. Then light will come. You'll resume operation. This is true across most of the species on this planet. If you are trying to avoid death by car accident you could do worse than striving to operate in situations where your sensors will serve you well.