Donkeycar: A Python self-driving library (github.com/autorope)
92 points by tildef on May 13, 2023 | 9 comments



Damn! This brings back a lot of memories.

Back in 2017, I was just starting out as an intern at a Systems Design lab in a university. I was about to head into my first meeting with a potential advisor, when a college senior from my robotics club forwarded the link to DonkeyCar on the club group chat. IIRC, the first repo was hosted on wroscoe's personal account, and allenwells and AdamConway were the most active contributors, along with wroscoe of course.

I did not have any experience with Linux, Git, ML. Nothing! I walked into the meeting with the advisor and pitched working on DonkeyCar. It was new, it was shiny, it used TensorFlow v1! It was down to working on this, or some GIS thing, analyzing satellite spectral bands.

Anyways, my accidental blurting out of this project in the meeting led to the most fruitful summer of my life, where I managed to learn about package managers, machine learning, computer vision, and how to ask the right questions on an open-source community's Slack. I even remember badgering the project Slack so many times that wroscoe replied to me, saying that I should be more mindful of people's time, and that I should read the Python error messages more carefully before just copy-pasting lines and lines of error output.

In addition to introducing me to Linux, this project introduced me to deep learning as well. I remember going through Josh Gordon's tutorials on KNNs during my attempts to debug the differential-drive DonkeyCar code. Ah, the memories! One thing led to another, and I was following the launch of the first TPUs. I remember heading to lunch with a few folks from the department, where they began discussing the dedicated hardware that would be accelerating TF code. It was wonderful trying to keep up with the announcements.

This brings back so many memories. If Will, Adam, or Allen are reading this: thank you. Your brilliant project absolutely changed my life and sparked a lifelong fascination with robots, ML, and tinkering!


Haha, I just read this... this is Adam. Donkey Car is still going strong!


Some other toy-scale self-driving car projects that come with simulators, in case you can't get the hardware:

1. Duckietown: https://www.duckietown.org/ from ETH Zurich; comes with a MOOC covering all the material.

2. MuSHR: https://mushr.io/ from Sidd Srinivasa's group at UW.

3. F1TENTH: https://f1tenth.org/ probably the most popular; it regularly holds physical competitions, sometimes at popular robotics conferences.

There's Amazon DeepRacer too (https://aws.amazon.com/deepracer/), but I haven't heard anyone talk about it recently; imo it feels like an 'AWS can handle RL for robotics' project.

As with any other robotics research area, there are too many projects trying to do the same thing. There are many reasons for this, though thankfully the community converges on common hardware whenever a cheap, high-quality option is commercially available.


Donkey Car has a nice simulator and a virtual racing league (though priority seems to have shifted back to real-world races).

Sim: http://docs.donkeycar.com/guide/deep_learning/simulator/

Virtual arena: http://docs.donkeycar.com/guide/deep_learning/virtual_race_l...

Recent race: https://youtu.be/2vZBCjuhW8U
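
If you want to poke at the sim without any hardware, it also exposes an OpenAI Gym interface through the gym-donkeycar package. A minimal sketch, assuming the donkey-generated-track-v0 environment ID and the simulator app already running (env IDs and config options vary across versions):

    # Drive the simulator with a trivial constant policy via the Gym API.
    # Assumes gym and gym-donkeycar are installed and the sim app is open;
    # step/reset follow the classic Gym convention.
    import gym
    import gym_donkeycar  # noqa: F401 (registers the donkey-* environments)
    import numpy as np

    env = gym.make("donkey-generated-track-v0")
    obs = env.reset()
    for _ in range(200):
        action = np.array([0.0, 0.3])  # [steering, throttle]
        obs, reward, done, info = env.step(action)
        if done:
            obs = env.reset()
    env.close()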


This is a great project and an excellent experiential intro to machine learning. I've used it with high school and college students for a few years.

Training a Donkey Car to drive helped me understand the risks of bias in machine learning data.
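
A concrete version of that bias: lap data is dominated by near-zero steering, so a naively trained model just drives straight and blows the turns. One common fix is to downsample the straight-ahead records before training. A sketch, using a hypothetical record format (not DonkeyCar's actual tub schema):

    # Illustrative only: keep every turning sample, but only a fraction of
    # the overrepresented near-straight ones, so turns aren't drowned out.
    import random

    def rebalance(records, straight_threshold=0.05, keep_fraction=0.3):
        balanced = []
        for rec in records:
            if abs(rec["steering"]) >= straight_threshold:
                balanced.append(rec)          # always keep turns
            elif random.random() < keep_fraction:
                balanced.append(rec)          # keep some straight driving
        return balanced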

Chris Anderson has a couple tech talks on YouTube that compare the tech stack to human-scale autonomy and discuss some of the sensing approaches different Donkey Car builds are using. Some use cameras, some use LiDAR, some use star tracking, etc.


Why do Python libraries have such unusual names?



I guess the language having an unusual name sort of encourages it?


Does a project like this also have a second, internal spatial representation, similar to what Tesla does, or is it just sensors to outputs? I mean, how far can you go with just mapping simple input sensors to output controls?
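
The stock pipeline is the latter: behavioral cloning, where a small Keras CNN maps camera frames straight to steering and throttle with no intermediate map (some builds add localization or path planning on top). A rough sketch of that kind of network; the layer sizes here are illustrative, not the project's exact architecture:

    # End-to-end behavioral cloning: pixels in, two control values out.
    from tensorflow.keras import layers, Model

    img_in = layers.Input(shape=(120, 160, 3), name="img_in")  # camera frame
    x = layers.Conv2D(24, 5, strides=2, activation="relu")(img_in)
    x = layers.Conv2D(32, 5, strides=2, activation="relu")(x)
    x = layers.Conv2D(64, 5, strides=2, activation="relu")(x)
    x = layers.Flatten()(x)
    x = layers.Dense(100, activation="relu")(x)
    x = layers.Dense(50, activation="relu")(x)
    steering = layers.Dense(1, name="steering")(x)   # -1 .. 1
    throttle = layers.Dense(1, name="throttle")(x)   #  0 .. 1

    model = Model(inputs=img_in, outputs=[steering, throttle])
    model.compile(optimizer="adam", loss="mse")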



