
How the hell does the Seek by iNaturalist app work so well while also being small and performant enough to do the job completely offline on a phone? You should really try it out for IDing animals and plants if you haven't; it's like a real-life Pokédex. Have they released any information (e.g. a whitepaper) about how the model works or how it was trained? The ability to classify things incrementally and phylogenetically makes it helpful to narrow down your own search even when it doesn't know the exact species. I've been surprised by it even IDing the insects that made specific galls on random leaves or plants.



I reverse engineered their stuff a bit. I downloaded their Android APK and found a tensorflow lite model inside. I found that it accepts 299x299px RGB input and spits out probabilities/scores for about 25,000 species. The phylogenetic ranking is performed separately (outside of the model) based on thresholds (if it isn't confident enough about any species, it seems to only provide genus, family, etc.) They just have a CSV file that defines the taxonomic ranks of each species.
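To give a sense of it, here's roughly what driving a model like that looks like with the standard tflite-runtime API. This is a sketch, not their code: the file names, the CSV layout, the input normalization, and the 0.8 threshold are all my illustrative guesses.

    import csv
    import numpy as np
    from PIL import Image
    from tflite_runtime.interpreter import Interpreter

    interpreter = Interpreter(model_path="model.tflite")  # filename is a guess
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]

    # 299x299 RGB input; scaling to [0, 1] is an assumption -- check the
    # model's input details for the real quantization/normalization.
    img = Image.open("photo.jpg").convert("RGB").resize((299, 299))
    x = np.expand_dims(np.asarray(img, dtype=np.float32) / 255.0, 0)

    interpreter.set_tensor(inp["index"], x)
    interpreter.invoke()
    scores = interpreter.get_tensor(out["index"])[0]  # one score per species

    # Taxonomy CSV: assume one row per output index, with columns like
    # index,species,genus,family
    with open("taxonomy.csv") as f:
        taxa = list(csv.DictReader(f))

    best = int(np.argmax(scores))
    if scores[best] >= 0.8:  # threshold value is illustrative
        print("species:", taxa[best]["species"], scores[best])
    else:
        # Not confident at species level: pool scores within each genus
        # and report the best genus instead (and so on up the ranks).
        genus_scores = {}
        for i, row in enumerate(taxa):
            genus_scores[row["genus"]] = genus_scores.get(row["genus"], 0.0) + scores[i]
        genus, s = max(genus_scores.items(), key=lambda kv: kv[1])
        print("genus:", genus, round(s, 3))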

I use it to automatically tag pictures that I take. I took up bird photography a few years ago and it's become a very serious hobby. I just run my Python script (which wraps their TF model): it extracts JPG thumbnails from my RAW photos, automatically crops them based on EXIF data (the focus point and the focus distance), and feeds them into the model. The cropping was critical; I can't just throw the model a downsampled 45-megapixel image straight from the camera, because usually the subject is too small in the frame. I store the results in a SQLite database, so now I can quickly pull up all photos of a given species and even sort them by other EXIF values like focus distance. I pipe the results of arbitrary SQLite queries into my own custom RAW photo viewer and can quickly browse the photos (e.g. "Show me all Green Heron photos sorted by focus distance"). The species identification results aren't perfect, but they are very good. And I store the score in the database too, so I know how confident the model was.
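The database side is simple. A minimal sketch of the kind of schema and query I mean (the table layout here is illustrative, not my exact one):

    import sqlite3

    con = sqlite3.connect("photos.db")
    con.execute("""CREATE TABLE IF NOT EXISTS photos (
        path TEXT PRIMARY KEY,
        species TEXT,
        score REAL,
        focus_distance REAL)""")

    # Values the classify step produced for one photo (placeholders):
    row = ("2020/05/IMG_1234.CR3", "Green Heron", 0.93, 12.5)
    con.execute("INSERT OR REPLACE INTO photos VALUES (?, ?, ?, ?)", row)
    con.commit()

    # "Show me all Green Heron photos sorted by focus distance."
    for (path,) in con.execute(
            "SELECT path FROM photos WHERE species = ? ORDER BY focus_distance",
            ("Green Heron",)):
        print(path)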

One cool thing was that it revealed that I had photographed a Blackpoll Warbler in 2020 when I was a new and budding birder. I didn't think I had ever seen one. But I saw it listed in the program results, and was able to confirm by revisiting the photo.

I don't know if they've changed anything recently. Judging by some of their code on GitHub, it looked like they were also working on considering location when determining species, but the model I found doesn't seem to do that.

I can't tell you anything about how the model was actually trained, but this information may still be useful in understanding how the app operates.

Of course, I haven't published any of this code because the model isn't my own work.


I don't use Seek, but the iNaturalist website filters computer vision matches using a "Seen Nearby" feature:

> The “Seen Nearby” label on the computer vision suggestions indicates that there is a Research Grade observation, or an observation that would be research grade if it wasn't captive, of that taxon that is:

> - within nine 1-degree grid cells around the observation's coordinates and

> - observed around that time of year (in a three calendar month range, in any year).

https://www.inaturalist.org/pages/help#computer-vision
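If you wanted to approximate that rule yourself, it boils down to a 3x3 block of 1-degree grid cells plus a month window. A rough sketch of my reading of the help text (ignoring antimeridian wraparound):

    from datetime import date

    def seen_nearby(lat, lng, when, research_grade_obs):
        # research_grade_obs: (lat, lng, date) tuples for the taxon
        cell = (int(lat // 1), int(lng // 1))
        # three-calendar-month window, in any year
        months = {(when.month - 1 + d) % 12 + 1 for d in (-1, 0, 1)}
        for olat, olng, odate in research_grade_obs:
            if (abs(int(olat // 1) - cell[0]) <= 1
                    and abs(int(olng // 1) - cell[1]) <= 1
                    and odate.month in months):
                return True
        return False

    print(seen_nearby(40.7, -74.0, date(2024, 5, 10),
                      [(41.2, -73.5, date(2021, 4, 2))]))  # True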

As for how the model was trained, it's fairly well documented on the blog, including the different platforms used as well as changes in training techniques. Previously the model was updated twice per year, as it took several months to train. For the past year they've been using a transfer learning approach: the model is trained on the images, then updated roughly once a month to reflect changes in taxa. The v2.0 model was trained on 60,000 taxa and 30 million photos. There are far more taxa on iNaturalist, but there is a threshold of ~100 observations before a new species is included in the model.

https://www.inaturalist.org/blog/83370-a-new-computer-vision...

https://www.inaturalist.org/blog/75633-a-new-computer-vision...
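The gist of the transfer learning change, as I understand it from those posts, is that the expensive feature extractor is reused and only the classification head is retrained when the taxa list changes. A toy Keras sketch of that idea (not their actual pipeline; the model name is assumed):

    import tensorflow as tf

    NUM_TAXA = 60_000  # v2.0 scale, per the posts above

    backbone = tf.keras.models.load_model("feature_extractor")  # assumed artifact
    backbone.trainable = False  # keep the learned features frozen

    model = tf.keras.Sequential([
        backbone,
        tf.keras.layers.Dense(NUM_TAXA, activation="softmax"),  # new taxa list
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(new_observations_ds, epochs=3)  # training data omitted here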


> It looked like they were also working on considering location when determining species, but the model I found doesn't seem to do that.

I do this with fish for very different work, and there's a good chance the model for your species doesn't exist yet. For fish we have 6,000 distribution models based on sightings (aquamaps.org), but there are at least 20,000 species. These models have levels of certainty ranging from 'expert had a look and fixed it slightly manually' to 'automatically made based on just three sightings' to 'no model, as we don't have great sightings data'. So it may be that the model uses location, just not for the species you have?


Well, there's no way to feed lat/lng or similar into this particular tensorflow model.


That is actually surprising; surely they use location at some point in the ID process. It's possible they have a secondary location-based model to do sorting/ranking after the initial detection?

Merlin's bird detection system is almost non-functional without location.


Yeah, that's true! You can't really do that; these models are just polygons, and all we do is double-check that the other methods' predictions overlap with these polygons as a second step.
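Concretely, that second step is just a point-in-polygon test over the model's candidates, something like this sketch (real aquamaps output is a half-degree probability grid, not one polygon per species, and the polygons below are made up):

    from shapely.geometry import Point, Polygon

    def range_check(predictions, lat, lng, range_polygons):
        # Keep candidates whose range contains the sighting,
        # or that have no range model at all.
        pt = Point(lng, lat)  # shapely is (x, y) = (lng, lat)
        return [(sp, score) for sp, score in predictions
                if sp not in range_polygons or range_polygons[sp].contains(pt)]

    ranges = {
        "Gadus morhua": Polygon([(-80, 35), (10, 35), (10, 75), (-80, 75)]),
        "Tropical sp.": Polygon([(-180, -30), (180, -30), (180, 30), (-180, 30)]),
    }
    preds = [("Gadus morhua", 0.6), ("Tropical sp.", 0.3)]
    print(range_check(preds, 55.0, -3.0, ranges))  # cod kept, tropical dropped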


Sounds like a real-life Pokémon Snap. You should add a digital professor who gives you points based on how good your recent photos are. (Size of subject in photo, focus, in-frame, and if the animal is doing something interesting.)


That doesn't work well even when it's the only game mechanic and everything else is designed around making it work.

https://www.awkwardzombie.com/comic/angle-tangle

It's not likely to work well on actual photos of actual wildlife.


That just adds to the fun.


That sounds like an awesome setup! Would you be willing to share your script with another bird photography enthusiast?


Comments like these are why I lurk on HN. Genius solution.

As a birder I have thousands of bird photos and would pay for this service.


This post fits the username perfectly.


You wrote your own custom RAW photo viewer? Like, including parsing? That's incredibly cool, do you share it anywhere?

Also why not just darktable / digikam?


I would pay for this


Thanks for sharing - I was curious too but didn’t delve in myself.


If you're willing, it's totally fine to share your work with the model itself removed.



I'd never heard of this app, but your description made me want to install it. When I googled it I was surprised at the app ratings:

Apple: 4.8

Google Play: 3.4

The most common issue mentioned in negative Play Store reviews is the camera not focusing on the right thing, and needing to try many different angles before something is recognized correctly. This probably has nothing to do with the underlying model, which I'd guess is the same on both platforms.


Camera zoom is definitely annoying; there's no way to control how zoomed in it is.

And yes, it often takes as much as a minute to identify a species, because you have to keep adjusting zoom and angle and trying to catch every important feature.

That said, once you are used to it, it becomes less noticeable and just feels like part of the game.


I'm curious why this seems (from the reviews) to be an issue only on Android?


I tried this app for a while, and there are definitely some rough edges. My partner's phone was much quicker at recognizing plants and flowers, so after a while I gave up and we just used her phone instead.

And then there's the issue of misidentification: many plants, even really common flowers, were identified slightly wrong or outright incorrectly. I don't really know how people get close enough to wild animals to identify them; I had no luck with animal life, and after a while I started mistrusting its results and cross-referencing with Google Lens anyway.


Our own app Birda has this issue too; most of the 1-star reviews are 'not the app I was looking for' or 'not a game'.


I wish I had the same experience as you. The vast majority of the time I point it at tree leaves in South East Asia, it tells me 'Dicots' and stops there. Only rarely do I get the actual full classification; the last time it happened was for a common almond tree.


It's very bad at trees for some reason. Also mushrooms, but I thought that might be intentional so they don't get blamed for someone eating something poisonous that was misidentified.

PlantNet often works better for trees.


Trees are generally difficult to classify well with computer vision. It's hard for the models to establish context, because at a scale where you can see the whole tree you tend to include lots of background. If you include a bark photo, it's often ambiguous whether there's growth/stuff/weathering on top. Flowers tend to be good inputs.

The training imagery in iNaturalist is also really inconsistent, and again for plants it's hard to establish context. These are mostly smartphone pictures that non-experts have captured in the app. While someone might have verified that a plant is a particular species, there isn't any foreground/background segmentation, so there are often confounding objects in the frame. On top of that, you only get a ~300px input to classify from. With animals I'd say it's much more common for photographers to isolate the subject. There's also massive class imbalance in iNat: a large number of the observations are things like Mallards (i.e. common species in city parks).

I guess the best solution would be to heavily incorporate geographic priors and expected species in the location (which I think is partly done already).
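One simple way to do that combination is to treat the location signal as a per-species prior and multiply it into the vision scores. A sketch (I don't know what iNaturalist actually does):

    import numpy as np

    def apply_geo_prior(vision_scores, geo_prior):
        # Elementwise product of vision scores and a location-based
        # prior over the same species list, renormalized.
        combined = vision_scores * geo_prior
        return combined / combined.sum()

    vision = np.array([0.6, 0.3, 0.1])   # the model likes species 0
    prior  = np.array([0.01, 0.9, 0.5])  # but species 0 is rare here
    print(apply_geo_prior(vision, prior))  # species 1 now wins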


Flowers are crucial for human IDs as well. A lot of tropical tree leaves are very similar, so without context they're virtually impossible to visually distinguish.


Yeah, this is a good point. I've done some work with local experts to do tree ID from drone images over rainforest, and there were several species where they would need to see the underside of the leaves to get better than family-level ID.


My experience has been great with mushrooms, just to add another datapoint. I mean, it's often about as good as you can get by eye without breaking out the lab equipment.


It seems to do well for trees for me in California.


For trees, try to photograph the flowers, the seeds, the bark, the leaves (both sides), the trunk's growth habit (especially the bottom portion), and the upper branches' growth habit. Often, when asking it to suggest a species, switching between these will make progress.


Probably because many people around the world participated in classifying what was posted?

I am guessing; please tell me if that is correct. How do they prevent false labels?


Any observation can be submitted, but it has to be verified by a different observer. Most identifiers are folks with more experience identifying things locally, and the data quality is high. There's very little incentive to game the system, and if something is misidentified, other iNatters can add identifications correcting the mistake, which happens regularly; various scientists/specialists tend to sweep observations in their taxa of note and correct issues. There are criteria for a "high quality" observation, including being verified, and only observations meeting them are used for training.


There are hundreds of thousands of "false" labels. Pictures can be classified many times.


I've always wondered: how do you determine truth on sites like these?


You ask actual experts for identification.



