Yeah, ok, eliminating misinformation is what we eventually want to do. But what we can sell now is something that eliminates human errors (the backend allows for that too).
I think you could contribute something valuable to Wikipedia with your tool. For instance, I recently read that Earth's spin makes it 10^7 kg heavier. While the principle is correct (energy is mass), the source is really weak, and it's easy to find that the uncertainty in Earth's mass is about 10^20 kg, which makes the whole point useless.
I challenged your AI on this theme: "a spring weighs more when compressed" and "Earth's spin makes it heavier". In both cases it told me the claim is not true, even though it is correct (E=mc^2).
I admit I'm cherry-picking, because it does correctly say the croissant is not French. But can I trust it blindly? That's why you must provide sources. Sources are valuable when we're talking about truth.
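For what it's worth, a quick back-of-envelope check shows why the measurement uncertainty swamps the rotational effect. This is just a sketch using standard textbook values for Earth's moment of inertia, rotation rate, and mass uncertainty; the exact figures are my assumptions, not from the thread:

```python
# Mass equivalent of Earth's rotational kinetic energy via E = mc^2,
# compared with the uncertainty in Earth's measured mass.
# All values are approximate textbook figures (assumptions).

I = 8.0e37         # Earth's moment of inertia, kg*m^2
omega = 7.292e-5   # Earth's angular velocity, rad/s (one turn per sidereal day)
c = 2.998e8        # speed of light, m/s

ke = 0.5 * I * omega**2   # rotational kinetic energy, ~2e29 J
delta_m = ke / c**2       # mass equivalent, ~2e12 kg

mass_uncertainty = 6e20   # uncertainty in Earth's mass, kg (~1e-4 of M_Earth)

print(f"rotational KE:     {ke:.2e} J")
print(f"mass equivalent:   {delta_m:.2e} kg")
print(f"mass uncertainty:  {mass_uncertainty:.1e} kg")
# The mass equivalent sits roughly eight orders of magnitude below the
# uncertainty, so the "spinning Earth is heavier" effect is unmeasurable.
```

So even with a mass equivalent around 10^12 kg (larger than the 10^7 kg figure from that source), the point still drowns in the ~10^20 kg uncertainty.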
Yeah, that's why we also added a grammar checker. Even if users don't care about facts, they get something better than Grammarly that checks for much more, for much less.
Yes, we checked it out, and it seems pretty cool. What we aim to do is create software that checks for everything, not just grammar or factuality. There is nothing like it on the market.
This is impressive for someone who is 16. My two cents would be to add a short demo: a 30-second GIF or video with accompanying text that explains how to use this, just some kind of demonstration of how your interface works and what to expect the output to look like. And secondly, to help with the over-suggestion/under-suggestion issue, the grammar and writing assistance should be a separate mode from the fact checker.
Again, I would love to know more about how this works: how does it determine facts, and, as you alluded to in other comments, how does it avoid political opinions?