This sounds like a very interesting area! I guess you are doing "Statistical Process Monitoring"/"control charts"/"Shewhart charts" [0] for images. Very cool!
Is this correct or is your solution totally different? In what aspect is it most similar and most different from "Control charts"?
Are there any keywords for interested Hacker News readers to research this further and play with the concept? Is it correct that you do "just" outlier detection on the embeddings of the images? I guess it works something like this:
1) Image --CNN--> embedding: maybe enforce properties of the embedding distribution (something like a VAE).
2) Approximate this distribution and call a (sequence of) images an outlier if its likelihood is small. Alternatively, compare the empirical distribution of a few collected images to the distribution of "good" images, e.g. via embedding into an RKHS (kernel two-sample tests / MMD).
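To make my guess concrete, here is a minimal sketch of step 2 under simple assumptions: embeddings are assumed to already exist (in practice they would come from a CNN encoder), the "good" distribution is approximated as a single Gaussian, and an image is flagged when its Mahalanobis distance exceeds an empirical quantile of the good scores. All names and numbers are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for CNN embeddings: 500 "good" images and one defective image
# whose embedding is shifted away from the good cluster.
good = rng.normal(0.0, 1.0, size=(500, 8))
defective = rng.normal(4.0, 1.0, size=(1, 8))

# Fit a Gaussian to the good embeddings (small ridge keeps cov invertible).
mu = good.mean(axis=0)
cov = np.cov(good, rowvar=False) + 1e-6 * np.eye(good.shape[1])
cov_inv = np.linalg.inv(cov)

def score(x):
    # Mahalanobis distance of each row of x from the "good" distribution;
    # large distance == low likelihood under the fitted Gaussian.
    d = x - mu
    return np.sqrt(np.einsum("ij,jk,ik->i", d, cov_inv, d))

# Flag anything beyond the 99th percentile of the good scores.
threshold = np.quantile(score(good), 0.99)
print(score(defective)[0] > threshold)
```

The same skeleton generalizes: swap the Gaussian for a kernel density estimate or a normalizing flow, or replace the per-image score with an MMD test between a batch of recent embeddings and the reference set.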
What type of anomalies can be detected? Does it evaluate each image separately (i.e. it cannot differentiate between objects going from left to right) or does it "understand" short sequences of images? The latter sounds even more interesting. Could you provide some keywords for it?
On the production line, there are already cameras and computer vision products, e.g. Halcon, which let you drag-and-drop a computer vision pipeline together. Could your software be integrated so that its output can be further processed in Halcon etc.?
[0]: https://en.wikipedia.org/wiki/Control_chart