Open Source Time-of-Flight DIY Lidar (github.com/iliasam)
181 points by iliasam on March 11, 2020 | hide | past | favorite | 49 comments



This is a very neat project, I really wish I could read Russian to understand some of the details in his writeup. The overall architecture is pretty standard for a LIDAR setup.

For the source they are using an Osram SPL PL90_3 laser diode, which provides 75 W peak output at 900 nm with a pulse duration of roughly 10 ns, and costs about $10.

For the detector they are using an MTAPD-07-013 avalanche photodiode, which has a quantum efficiency of about 80%, an internal avalanche gain of 100, and a 0.6 ns rise time. It also costs about $10.

For the time measurement they are using a TDC-GP21 time-to-digital converter with 22 ps resolution, although the author has it configured for 90 ps counts. It costs a bit over $5.

It is really a testament to the amount of development work that has gone into lidar and related fields that for under $100 you can build a full lidar system out of parts from Digi-Key, including custom PCBs and a scan mirror. When I was in grad school we paid $7,500 for our avalanche photodiode, $25k for our pulsed laser, and used a $50k oscilloscope to read it out.


Here is a link to the video, with a short lidar description: https://youtu.be/lTPH_Xa9yCk

What kind of details do you want to know?

Why did you write "they are using"? There was no team; it's my own project.


Great project, thanks for posting it!

> What kind of details do you want to know?

What is the limiting factor on the precision - is it just the specs of the TDC-GP21 or something else?

How did you calculate the appropriate laser lens and standoff?

Is the CS mount lens's focal length important, or is there no need for a sharp image at all distances?

Why use one CS-mount lens and one M12 lens, rather than two M12 lenses?

Do you need any sort of dynamic gain or optical filtering on the received signal? Is it vulnerable to getting washed out by bright sunlight?

Did you need much fancy $$$$ test equipment to get the design working, or did you achieve it with tools mere mortals can afford?


> "What is the limiting factor on the precision"

Precision in this project is limited by the rise time of the received signal, which in turn is limited by the rise time of the laser current. It is too long right now: 10 ns.

The theoretical TDC resolution is ~13 mm per bin.

You can see real results here: https://github.com/iliasam/OpenTOFLidar/wiki/Resolution-of-t...
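For reference, the ~13 mm figure falls straight out of the round-trip arithmetic. A quick sketch (my own back-of-the-envelope check, not anything from the project code):

```python
# Speed of light in m/s.
C = 299_792_458

def tof_bin_to_distance(bin_ps: float) -> float:
    """Distance resolution per TDC bin: the light travels out AND back,
    so one bin of timing covers half that path length in distance."""
    return C * bin_ps * 1e-12 / 2

print(tof_bin_to_distance(90))  # ~0.0135 m, i.e. ~13 mm per 90 ps bin
print(tof_bin_to_distance(22))  # ~0.0033 m at the TDC-GP21's finest 22 ps setting
```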

> "How did you calculate the appropriate laser lens and standoff"

I can't say there were many calculations.

There is some information here: http://www.ti.com/lit/ug/tiduc73b/tiduc73b.pdf (section 2.3.5)

and here: http://www.ti.com/lit/ug/tiducm1b/tiducm1b.pdf (section 2.3.2)

The laser's lens should have the maximum possible focal length, but it must not crop the light beam from the diode.

The photodiode's lens should have the maximum possible entrance pupil.

> "Is the CS mount lens's focal length important, or is there no need for a sharp image at all distances?"

You need a sharp image at long distances. At short distances it is not important: the light signal is strong enough.

> "Why use one CS-mount lens and one M12 lens, rather than two M12 lenses?"

I think it is easier to find a lens with a big entrance pupil in a CS mount than in an M12 mount. A bigger mount means a bigger lens diameter, and thus a bigger entrance pupil.

> "Do you need any sort of dynamic gain"

I don't have any kind of electrical gain control. There is an ability to change the APD gain, but I don't use it.

> "optical filtering on the received signal"

I don't have a filter in my lidar, but it is necessary if you are going to run the lidar in sunlight. I give links to several interference filters in "LidarTotalBOM.xlsx".

> "Did you need much fancy $$$$ test equipment to get the design working"

All I have is a Tektronix TDS540D oscilloscope (bought on eBay for $300) and a multimeter.


"They" can be singular.


Thanks, I had never heard about that.


Not a problem. Every so often this otherwise unremarkable grammatical point becomes somewhat contentious, but it's easy to see how it can be confusing if English isn't your first language. There's more here: http://itre.cis.upenn.edu/~myl/languagelog/archives/002748.h...


I just love the line

"This use of they isn't ungrammatical, it isn't a mistake, it's a feature of ordinary English syntax that for some reason attracts the ire of particularly puristic pusillanimous pontificators, and we don't buy what they're selling."


Is it common? I have never seen it used in a singular context in English grammar textbooks.


It's technically incorrect in the textbook sense, but widely accepted in recent years as a way to avoid specifying gender.


Also in less recent years. "They" has always been used in the singular in cases where gender is not apparent (e.g. with babies).


And also where it is:

    There's not a man I meet but doth salute me
    As if I were their well-acquainted friend
- The Comedy of Errors, Act IV, Scene 3


Widely accepted back at least as far as Shakespeare. The objection that it's incorrect is what's recent.


For example "they must have forgotten to add the catch all redirect in the .htaccess file"


English grammar textbooks started excluding it because doing so made the language more like Latin and this was considered a good thing.


It's misused for that. It's not grammatically correct, and it leads to confusion for non-native English speakers.


The Oxford English Dictionary disagrees with you: https://public.oed.com/blog/a-brief-history-of-singular-they...


Not really.


The video is impressive. I liked seeing the speed it spins at, and I liked the demo of how an AGV can explore and build a map of its surroundings.


Is it worth trying to do spread spectrum with this? Or do you need a different amplifier setup?


Average power vs. SNR favours very short pulses.

Peak power vs. SNR favours pseudorandom sequences with a peaky autocorrelation function. In practice, an LFSR is typical, but if you use e.g. AES-CTR, you can get quite a lot of resilience against (intentional) jamming and be potentially undetectable as long as the sensor doesn't move directly into the beam.


Yeah, I'm just not clear on how to go from that theory to practice in this case. Like, could you do it with this specific photoamplifier, or would you need something very different? Are we immediately into FPGA territory to get enough processing speed, or is there something you can do with the delay lines to have the same effect?


You basically do AM of the laser, and then otherwise normal baseband radar processing. You don't need much bandwidth, but more bandwidth requires less SNR, since you can do super-resolution if you have sufficient SNR.

There is no need for an FPGA, but you will rather quickly throw FFT correlation processing at it.

This would not use any delay lines. You'd just need to lock the sampling of your ADC to the signal generation, because 2 ns of jitter is 30 cm / 1 ft of jitter in distance.
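A minimal sketch of the correlation idea being described (the LFSR taps, echo delay, and attenuation here are illustrative values I picked, not anything from the project):

```python
def lfsr_sequence(n, state=0b1111111):
    """Maximal-length 7-bit Fibonacci LFSR (x^7 + x^6 + 1), mapped to +-1 chips."""
    out = []
    for _ in range(n):
        out.append(1 if (state & 1) else -1)
        fb = ((state >> 6) ^ (state >> 5)) & 1  # feedback from taps 7 and 6
        state = (state >> 1) | (fb << 6)
    return out

def correlate_delay(tx, rx):
    """Return the lag with the largest cross-correlation: the echo delay in samples."""
    best_lag, best_val = 0, float("-inf")
    for lag in range(len(rx) - len(tx) + 1):
        val = sum(t * r for t, r in zip(tx, rx[lag:]))
        if val > best_val:
            best_lag, best_val = lag, val
    return best_lag

tx = lfsr_sequence(127)                               # transmitted chip sequence
rx = [0.0] * 40 + [0.3 * c for c in tx] + [0.0] * 40  # weak echo, delayed 40 samples
print(correlate_delay(tx, rx))  # 40
```

The peaky autocorrelation of the m-sequence is what makes the delay stand out even when the echo is heavily attenuated; a real implementation would replace the brute-force loop with FFT-based correlation, as noted above.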


The original article (in Russian) is very well written. And the author feels like a true "full stack" engineer: high frequency and power electronics, optics, mechanical engineering, micro-controller software development, real time, simultaneous localization and mapping, visualization.


These are the folks that give me impostor syndrome. Given enough time and a project like this, I think I could learn all of that stuff.


The thing is, with Google at your fingertips, you can build almost anything. You don't need to deep dive on all these technologies, but you can probably gather just enough information in a short time.

However it still takes a lot of effort and most of your free time. You have to be extremely driven to complete such a complex project.


Free time in large chunks was the revelation for me.

I took a serious "staycation" back in December, and wrote about it here: https://www.reddit.com/r/Coronavirus/comments/fgvbsv/staythe...

During those few weeks, I made what seems like a year worth of progress on several projects. Being able to dive in and focus, for hours at a time without worrying that I had other stuff I should be doing, made all the difference in the world.

Rearranging your week to have one "no-chores no-email" evening, when you just use a single large block of time to immerse yourself in a leisure activity, is worth a try.


In fact, there are still many kinds of information that can't be found on Google.


Yes. Almost. There's a complexity threshold above which you need vision, full-stack skills, deep dives, patience, and steady progress over long periods of time. And yes, a certain amount of extreme drive to get over the bumps ;).


It's a great project. I built a lidar scanner for my PhD (still have the bits and a £2.5k galvo in a box somewhere). Didn't make the actual lidar unit though.

This has basically become possible due to cheap and accessible TDCs. Both TI and ams make them now, designed for ultrasonic gas flow sensing. Most of the other components have been readily available for a while (not sure about the APDs, but you could certainly buy stuff from Hamamatsu or Thorlabs/Edmund). ROS makes SLAM quite easy if you have a hardware driver, though the utility the author made for debugging is very neat.


Actually I do have a question: how safe is this for eyes (human or animal), and what changes are needed to make it safer?

How about measuring through translucent or transparent surfaces? It should work as-is if there is no reflection (e.g. measurement at Brewster's angle for this wavelength), but is it possible to time-of-flight multiple reflections? Or multiple depth slices (e.g. by gating the TOF to specific depth ranges)? 1 ns = ~1 ft = ~30 cm


As was already answered, you can read about safety here: https://github.com/iliasam/OpenTOFLidar/wiki/Laser-Safety

Measuring through translucent or transparent surfaces is theoretically possible (the TDC supports multiple measurements), but the received pulse width is too wide right now: it can be > 30 ns after the amplifier.
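That 30 ns figure translates directly into how far apart two surfaces must be before their echoes stop overlapping. A quick back-of-the-envelope sketch (my own arithmetic, not from the project code):

```python
# Speed of light in m/s.
C = 299_792_458

def min_separable_distance(pulse_width_ns: float) -> float:
    """Two returns merge into one unless the surfaces are separated by more
    than half the pulse length (half, because the path is a round trip)."""
    return C * pulse_width_ns * 1e-9 / 2

print(min_separable_distance(30))  # ~4.5 m: a 30 ns pulse can't split surfaces closer than this
```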


Thanks! Really an awesome project and great documentation.


He discusses this both in his very detailed Russian post as well as on GitHub: https://github.com/iliasam/OpenTOFLidar/wiki/Laser-Safety


I'm surprised that there are absolutely no comments. Does that mean this project is really not interesting?


I think it's very interesting, but I suspect too few people have anything to contribute.

I know I don't have anything smart to say about this specific subject, and I usually try to follow the old german saying "Selig sind die, sie nicht zu sagen haben und trotzdem schweigen" (hope I got it right - I don't actually speak german), which means "Blessed be those, who have nothing to say and nevertheless remain silent..."


The discussion about my previous project (https://news.ycombinator.com/item?id=16756901) was much bigger than for this one, which surprises me.


Probably just timing. I find it pretty cool, but I have nothing to say. I'm impressed with the work and would like to do the same myself, but I'm not sure immediately where I'd use this or when I'll have time. I've put it in the section of my brain where I keep things for "when I need it, go look at this crazy ambitious project".

If I can provide more feedback, while looking at your project, I had the following thoughts:

  - This is super impressive.
  - I'm thankful that people spend time making technology open source.
  - I'm sure this will grow over time and more "complex" technology will become open this way.
  - I'm curious in which context the author decided to dedicate that much time to this cause (probably in academia? or someone with access to a lab and a lot of experience in all the fields involved in this project).


Thank you for the reply!


German native speaker here, you just got two letters wrong:

Selig sind die, die nichts zu sagen haben und trotzdem schweigen


Thanks.

I was under the impression that it was a well-known saying when I first heard it, but Google doesn't come up with many hits for me (and some with wrong spelling), likely because of my Google bubble / history.

Is this a common/well-known saying?


I've heard it before, but it is not very commonly used. It is more of a literary aphorism than something people would use in everyday conversation.


On the contrary: it's super interesting! I definitely want to come back to it, study it, and think of projects I can use it on for fun and no profit, but I also know that, though I have analyzed lidar data, I have a very superficial understanding of the technical details of the implementation. For me this is not just great as a cheap lidar implementation, but a great working example to learn from.


Honestly, I didn't open the comments and simply added it to my "come back later" list, because it looks very, very interesting to me and I would need to read it in detail. Great work, btw!


Thank you so much for releasing this. I have several personal projects where this can be useful.

One variation I’m thinking about and want your opinion on. I’m building a circuit that VERY precisely (think <1ms accuracy) marks the start and end of when a high speed object passes the field of view. The exact scenario is a bit hard to describe if you aren’t familiar with the subject area, but an analogy that works well is imagine having two sets of cones laid out on the ground with 10 meters between the groups. Then imagine a car doing 100mph between the two sets of cones. What I need to be able to answer is:

- did the car go between the first set of two cones?

- what’s the exact time it went between them?

- did the car go between the second set of two cones?

- what’s the precise time it went through the second set?

- what’s the calculated speed between the two sets?

The real scenario isn’t touching the ground, so anything pressure sensitive doesn’t work. I’m currently experimenting with lasers on both the start and end gates and an FPGA doing the calculations, but I’d love it if I could do this with a single laser using a scanning setup like yours.

Can you speak to how hard it would be to use something like this to precisely (1ms or better accuracy) measure a low flying object between a starting and ending point?


I am not the OP, but it seems like you should be able to set up two break-beam detectors, one per set of cones. Process each detector's output into digital form so that you get a pulse when the beam is broken. Connect them to a microcontroller input and use a fast counter in input capture/compare mode to measure the time between pulses. A counter running at 1 MHz (1 us resolution) should do.
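The timing math itself is simple once you have the two captured timestamps. A hypothetical sketch (function name, tick rate, and numbers are illustrative):

```python
def speed_between_gates(t1_us: int, t2_us: int, gate_distance_m: float) -> float:
    """Speed from two beam-break timestamps captured by a 1 MHz counter."""
    dt = (t2_us - t1_us) * 1e-6  # microsecond ticks -> seconds
    return gate_distance_m / dt

# A car at ~100 mph (~44.7 m/s) crosses a 10 m baseline in ~224 ms,
# so a 1 us tick contributes roughly 0.0004% timing error,
# far inside the poster's 1 ms budget.
v = speed_between_gates(0, 223_694, 10.0)
print(round(v, 2))  # ~44.7 m/s
```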


I find it super interesting - but it's beyond my capability to contribute so I keep quiet for the moment. I imagine many people would be doing the same.


This is fantastic! I want to make one. Is it possible to get a cheap polygonal mirror? Then you can get more scans per revolution.


I think it is possible to make your own polygonal mirror (cut a polygonal mirror holder on a CNC, attach flat mirrors, and balance it). It may also be found in stationary barcode readers. But you will lose field of view with a polygonal mirror.



