Unfortunately, I tried to identify some of my plants and it could not correctly identify a single one. Just an endless procession of, at best, similar plants; most of the time completely dissimilar ones.
I think this is another naive model that just tries to push the entire problem onto AI. That is unfortunately what I am seeing nowadays, and it is very unimaginative: just fiddle with the network's parameters until you find some configuration that seems to work.
What it would benefit from is some kind of analysis/classification of basic features of the plant, like the basic shape of the leaf, the trunk, how things are connected, etc.
The classification would benefit from AI (e.g. to identify where the leaves are, where the trunk is, etc.), but that information would then be passed to a more classification-oriented algorithm.
(Disclaimer: I am not an AI developer, it just seems to me like a pretty rational way to approach the problem.)
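A minimal sketch of what that two-stage idea could look like, assuming a vision model has already extracted coarse morphological features; the feature names and the tiny lookup table here are entirely made up for illustration:

```python
# Hypothetical second stage of the hybrid approach: a vision model
# extracts coarse features (leaf shape, arrangement, growth habit),
# and a plain lookup table, not a neural net, picks the species.
# All feature names and entries below are invented examples.

FEATURE_TABLE = {
    ("lobed", "alternate", "tree"): "oak (Quercus)",
    ("needle", "whorled", "tree"): "pine (Pinus)",
    ("succulent", "rosette", "herb"): "houseleek (Sempervivum)",
}

def classify(leaf_shape: str, arrangement: str, habit: str) -> str:
    """Map extracted features to a species via a simple table lookup."""
    return FEATURE_TABLE.get((leaf_shape, arrangement, habit), "unknown")

print(classify("lobed", "alternate", "tree"))   # oak (Quercus)
print(classify("succulent", "spiral", "herb"))  # unknown
```

The point of the split is that the fuzzy part (recognizing a "lobed" leaf) stays with the model, while the final decision is transparent and auditable.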
Flora Incognita, a free plant identification app for Android and iOS, developed by a German university, does that.
You choose a category first, like "tree", "flower", "grass" or "fern", and it will guide you through the process, trying to identify the plant with as few photos as necessary. Common ones it will identify from a single image; for others, it will e.g. prompt you to take a close-up photo of the bark, the bloom, or the complete plant in its environment. From what I understand, they are aiming for accurate identification and will provide a description of the possibly matching plants if there is still ambiguity. Highly recommended!
PlantNet works very well. I have also used Google Lens (the magic button in Google Photos on Android), but I have the subjective feeling that PlantNet is more reliable for plant identification.
This app is phenomenal! So far it's picked up everything I've thrown at it. Can't say that of the other two plant ID apps I've tried before. I think it uses geocoding to some extent - it asked for my location (which is fine by me) but it required a leaf, fruit, and bark pic to figure out Northern Catalpa, which isn't common around here. Everything else has been leaf and maybe a fruit or whole plant pic.
Funnily enough, I only get a 403 Forbidden error when opening that website, and I am located in Germany with a German IP. My server at Hetzner (also in Germany), though, has no problem retrieving content from that domain.
This is one of those things I think of when I am doing garden work: "I bet there is an app for this." But typically, as soon as I have left the garden, I forget all about it.
I have installed the app now for the next time I am gardening :)
To be honest, nothing works better than PlantNet for identifying plants. They do the pre-selection where you choose which organ you're looking at (trunk, leaf, flower, fruit, ...).
Right. This kind of thing is only going to work on very common plants that are easy to ID. I am very into carnivorous plants, and even we experts sometimes have trouble identifying them; often you need very specific morphological details to ID a species that a phone picture will never be able to capture.
IMO this will just lead people to wrongly ID their plants more often than not, and that's a really bad thing.
The iNaturalist app does a spectacular job at identifying plants (and insects, fish, birds…), even disambiguating based on your location.
You label geotagged images with the AI's suggestion if you agree with it, and then other users can either confirm it or suggest a different / more specific species.
You got me curious, so I tried that app, and I'm getting "Unknown species" for every single one of my plants... I guess no identification is better than bad identification.
You need to click on the "Unknown Species" box in order to get a list of suggestions. Probably not the most intuitive, but it avoids auto-assigning IDs without prior review.
Perhaps they are unknown species! Hah, well, sorry my recommendation didn’t come through, I can imagine it performs better on common plants everyone takes pictures of, which is what I’m identifying when I walk around town.
Still, if you post the plants under a more general taxon, other users can identify them manually, adding to the data set.
It is an important feature. For plants, users will immediately see that the answer is wrong, but for many other applications (e.g. automatically identifying suspects) it might not be as obvious.
As a bonus, if the algorithm can tell when it cannot identify something correctly, you can use that feedback to improve it.
A great example of this is the Akinator app. The developers built up a database of answers by getting the users to fill in the missing data, which produced more accurate results and subsequently attracted more users. That feedback loop for apps like this seems like a very effective model. What I find interesting about this is that the value of the product, from a business perspective, comes from the success of the software in enabling that feedback loop, while from the user’s perspective, the value is in the data. But the data is emergent from the feedback algorithm.
Is it perhaps also a difference in intended use? iNaturalist is for wildlife and uses location heavily; identifying, e.g., tropical houseplants indoors in a different region might not work.
The GPS helps with a "seen nearby" weight, but it shouldn't exclude anything. It could be a factor of photography: with some plants it's easier to get a nice canonical leaf shape than with others. Taking a wide-angle shot of a potted plant such that the whole plant is in frame might not provide enough detail, so there's a learning curve in knowing how to photograph for the algorithm's benefit, kind of like learning how to phrase search queries for better Google results.
Could be. I just set the location data to the original location of the plants, but it still cannot ID them. Granted, they are not very common species.
I've used PictureThis a great deal to identify both wild and garden flowers. I've found it really good for the most common 95% of plants, and also for getting you into the right ballpark so you can then look things up more easily in a field guide. I've just accepted that there are some species it can't distinguish between; often you need to know exactly the right feature of the plant to look at in order to differentiate species. It would be useful, though, if it flagged this a bit better. Umbellifers are a classic example: it gets them correct more often than not, but I think it would be good for it to indicate that uncertainty better.
Not sure if this will put your mind at ease or not, but I really like learning plants. I hike a lot, and usually the only book I bring is a plant ID book, but I'm not good at knowing where to start looking in it. I find apps like this (I use Seek) incredibly helpful: I don't just assume the app is correct, but it gives me a place to start looking in the book. That said, I also don't assume I'm correct when I find what I think it is in the book. Anyway, just wanted to say I find plant ID apps a helpful learning tool.
I will say though, most of the time I don't have service where I'm IDing plants, so it's not the most convenient tool.
>What it would benefit from would be some kind of analysis/classification of basic features of the plants like what's the basic shape of the leaf, trunk, how things are connected, etc.
There's no reason to think that needs to be done separately; in fact, that's exactly the kind of thing you'd expect a good model to find on its own.
In general, we've already learned that while handcrafted features can help, they are often ultimately worse than learned ones as techniques get better.
AIs are useful for "fuzzy" problems, but are not very good at doing precise things with high reliability.
Every plant has baked-in restrictions on how it grows. If you can identify the separate features of a leaf and how the leaf grows out of the stem, you can basically look it up in a table and tell what kind of tree you are looking at, with no need for guessing.
On the other hand, AI will develop its own classification method, but one that has unknown faults in it.
Maybe it has learned to look at the lighting direction, because half of the data set was non-succulents with light from the left and half was succulents with light from the right, because they came from different facilities?
Or maybe different cameras were used to photograph different types of plants?
So now, rather than looking at the leaves, it uses light direction or photo grain to tell whether it is a succulent or not.
Face reality: you are wrong.
The algorithm above returns completely nonsensical results for my searches: plants that have completely different structure and coloration, and that nobody would ever mistake for mine.
It relies on you looking at the suggested solution, and then: hey, here are five more, maybe your plant is somewhere on the list?
This is only marginally useful, but it could have been so much better if it tried to identify the structure of the plant.
Would it be possible to train the model to identify the features and then just have it run those through a trie of plant classifications? Trying to bake the classifications into the model does seem silly, but you still need an element of computer vision if you want to do this.
That's exactly what I thought up without having any experience in the area.
So, basically, do what a person would do: identify simple features visually, then consult the book and go through a decision tree to figure out what you are looking at based on those features.
As you go through the decision tree, you keep excluding more and more possibilities until you are left with only one match.
You would never mistake an oak for a cactus, because there are so many points on the decision tree where you could go one way or the other that you would have to make multiple mistakes.
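The elimination process above can be sketched in a few lines; the mini-database of traits here is invented purely for illustration, not taken from any real flora key:

```python
# Minimal sketch of decision-tree-style elimination: each observed
# feature filters the candidate set, so a single wrong answer cannot
# jump from oak to cactus.  The plant traits below are made up.

PLANTS = {
    "oak":    {"woody": True,  "leaves": "broad",  "spines": False},
    "pine":   {"woody": True,  "leaves": "needle", "spines": False},
    "cactus": {"woody": False, "leaves": "none",   "spines": True},
}

def narrow(candidates, feature, value):
    """Keep only candidates whose trait matches the observed value."""
    return {name: traits for name, traits in candidates.items()
            if traits[feature] == value}

remaining = narrow(PLANTS, "woody", True)         # drops cactus
remaining = narrow(remaining, "leaves", "broad")  # drops pine
print(list(remaining))  # ['oak']
```

Ending up at "cactus" from an oak would require getting every one of those feature questions wrong, which is the robustness property the comment describes.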
But that's exactly what the algorithm seems to be doing: confusing plants that are very far apart in terms of their structure.
That’s the same problem you might have with a self-driving car driven by an ML algorithm: is that an open lane, or someone wearing a black outfit with white stripes?
> There's no reason to think that needs to be done separately, and in fact that's the kind of things that you'd expect a good model to find on its own.
Disagree - learning to classify morphological characteristics is learning the generalizable features. Especially given that some plants are going to have relatively few photos, knowing with high confidence some diagnostic factors and GPS could absolutely outperform the brute force approach.
This isn’t about hand tuned features, it’s about predicting the right thing.