I'm curious about how the $15k/mo figure is computed.
The story reads as though Logojoy more or less launched on Product Hunt, which was 7 days ago. All posts on the blog are dated Nov 15th. Finally, the domain itself seems to have been listed on HugeDomains as of Oct 8th [1].
Now maybe I'm missing something, but it seems like the revenue figure is extrapolated from a small window which includes a wave of initial traffic from PH.
I'm hoping the revenue figure is an actual, sustained ongoing amount for Dawson's sake (it's a great-looking and well-functioning product, it solves a need I've faced, and for full disclosure I've had something along these lines on the back burner for a while), but I can't help feeling this is a bit sensationalized based on what I'm seeing here.
[1] https://web.archive.org/web/20161008092541/http://logojoy.co...
That does seem dodgy, so we took the 15k/month bit out of the title. I guess we'll leave "AI-powered" so as not to leave it completely naked. Or should it just say "Logo Creator"?
I noticed that https://news.ycombinator.com/from?site=indiehackers.com constantly uses this trick of revenue numbers in titles. That's annoying clickbait in an HN context, but on the other hand, we need more good stories about startups, entrepreneurs, and projects, and dodgy stories are the price one pays for good ones. So I think the answer here is continued community vigilance.
Usually we'd see a point like this before 11 hours had gone by, but clearly not always, so everybody: if you notice something misleading on HN, especially if it doesn't get fixed right away, please alert us at hn@ycombinator.com. We can't read all the comments but we do read and reply to emails.
This was my fault. I tell companies not to submit for an interview unless they're at least a few months old, otherwise they won't have much to say. Because of that + Dawson's reported $15k/mo figure + the fact that Logojoy's story is inherently interesting, I blindly assumed the business was started earlier in the year than it was.
Go figure that the interview in which this happens is also the one that gets to the top of HN for the longest time. In the future, I'll do more to verify launch dates so I don't end up with another interview where the reported monthly revenue is such an extrapolation.
As for including revenue numbers in titles, I agree it is a bit clickbaity (or at least very enticing). But at the same time the entire point of these interviews is transparency, and highlighting the revenue numbers adds crucial context. I'm happy not to include revenue numbers in titles, though, if HNers or mods don't want to see those.
It's a little more complex than just "highlighting the numbers"; in my opinion, there really needs to be a definition of what is being counted and how, one that's universal across Indie Hackers' presentation of startups.
For example, three identical startups using different accounting methods (cash vs. accrual), time periods, etc. would at face value show vastly different numbers, even though in reality they are identical in every way.
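To illustrate with a purely hypothetical example (the numbers and plan here are made up, not taken from any Indie Hackers interview), a single $1,200 annual plan sold in November reads very differently under the two methods:

    # Hypothetical: one $1,200 annual subscription, paid up front in November.
    sale = 1200

    cash_basis_november = sale          # cash accounting: all $1,200 lands in November
    accrual_basis_november = sale / 12  # accrual: only $100 of revenue is recognized per month

    print(cash_basis_november)     # 1200
    print(accrual_basis_november)  # 100.0

Same business, same customer, yet the headline "monthly revenue" differs by 12x depending on the convention used.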
I don't have experience with startups, but I did see this watching the traffic of a FOSS book I wrote: when the surge started, the repo was trending on GitHub for two days! Now it has 1.1k stars, steadily increasing over time.
I don't count the stars as a metric of the book's quality; I value user feedback instead!
I enjoy reading Indie Hackers, but yes, the math on the monthly revenue is almost always fuzzy. I find it's best to just enjoy the content of the interview and ignore the "stats."
I just flagged this post for what clearly is a very dishonest title. The title should be changed at least, and maybe the entire post taken down, for the dishonesty.
Nonsense on stilts. Extrapolating $3500 in your first week of media-driven traffic into a $15k/mo run rate is ridiculous. Calling it "AI-powered" is a whopper too.
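For what it's worth, here is the arithmetic that extrapolation implies, assuming the ~$3,500 figure really is about one week of launch-driven sales (my assumption, beyond what's quoted above):

    weekly_revenue = 3500                        # roughly the first week of launch traffic
    weeks_per_month = 52 / 12                    # about 4.33
    run_rate = weekly_revenue * weeks_per_month  # about 15,167, i.e. the "$15k/mo" headline
    print(round(run_rate))                       # 15167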
Extrapolation aside, making decisions automatically based off of incoming data points _is_ AI.
If the interview is to be believed, almost anything you consider "AI-powered" is within one degree of what is happening here. At least on a conceptual level.
AI doesn't need to be magic, it just needs to be making decisions :)
It's not just his definition. Many AI courses will teach you pathfinding algorithms as examples of AI, along with optimisation algorithms, genetic algorithms and neural nets.
There is a valuable distinction between hard AI and soft AI (hard AI being that whole thinking, emotional machines thing that we are not really getting closer to, and soft AI being the things that actually generate money because we know how to do them).
I think what he's doing would be more accurately described as Machine Learning, rather than AI. His algorithms produce better logo proposals based on the data they collect from other users, which fits a common definition of ML [1].
[1] "A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E." (Tom M. Mitchell, 1997)
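To make that concrete: Logojoy's actual code isn't public, so the function names and the weighting scheme below are my own assumptions, but a minimal sketch of "proposals improve with user selections" in Mitchell's E/T/P framing could look like this:

    import random
    from collections import defaultdict

    # Sketch only: weight logo attributes by how often past users picked them.
    # E = recorded selections, T = proposing the next logo, P = how often proposals get picked.
    pick_counts = defaultdict(lambda: 1)  # start at 1 so unseen attributes can still appear

    def record_pick(attributes):
        # Experience E: a user chose a logo with these attributes (e.g. font, colour).
        for attr in attributes:
            pick_counts[attr] += 1

    def propose(candidates):
        # Task T: sample the next proposal, weighted by the past popularity of its attributes.
        weights = [sum(pick_counts[a] for a in attrs) for attrs in candidates]
        return random.choices(candidates, weights=weights, k=1)[0]

    record_pick(("sans-serif", "blue"))
    record_pick(("sans-serif", "orange"))
    print(propose([("serif", "blue"), ("sans-serif", "green")]))  # sans-serif is now favoured

Whether that counts as "AI" is exactly the debate here, but it does satisfy the definition above: performance at T (as measured by P) improves with E.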
Machine learning is a better term because it's more specific (and less controversial). But under most definitions of AI, ML is a subfield of AI, so calling it AI may be suboptimal, but not wrong.
Personally I'm a fan of the AI definition "things humans can do and computers can't do yet", but I recognize that that definition isn't terribly useful.
The distinction you are drawing is between Strong and Weak AI. Weak AI is any sort of useful application of computation. Strong AI is solving the AGI (Artificial General Intelligence) problem.
Anything that doesn't work but theoretically could is AI; when it starts working, it stops being called AI. The most recent examples are convnets, deep learning, etc.; nobody calls them AI anymore.
He is making a weighted decision, but that is not AI.
AI is more complex and based on neural nets, Petri nets, or the like.
He is just applying the same logic that every webshop or advertising network uses: you bought X, and other people who bought X also liked to buy Y. There is still a long way from that to AI.
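For reference, that "bought X, also bought Y" logic can indeed be as plain as counting co-occurrences over past orders; this toy version (made-up data, obviously) is the whole trick:

    from collections import Counter
    from itertools import combinations

    # Toy co-occurrence recommender: count how often two items appear in the same order.
    orders = [{"X", "Y"}, {"X", "Y"}, {"X", "Z"}]

    co_counts = Counter()
    for order in orders:
        for a, b in combinations(sorted(order), 2):
            co_counts[(a, b)] += 1
            co_counts[(b, a)] += 1

    def recommend(item, k=1):
        scores = Counter({other: n for (i, other), n in co_counts.items() if i == item})
        return [other for other, _ in scores.most_common(k)]

    print(recommend("X"))  # ['Y'] -- Y showed up alongside X more often than Z did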
If you pull out old AI books, they're all about search spaces and agent systems. Basically an abstraction of graph search.
Your definition requires calling the first couple generations of AI research "not AI".
And I would most definitely call recommendation engines Artificial Intelligence (what is Google?)
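(For anyone who hasn't pulled out those old books: the "search spaces" chapters boil down to something like this breadth-first search over states. This is a generic textbook-style sketch, not any particular book's code.)

    from collections import deque

    def bfs(start, goal, neighbors):
        # Classic state-space search: explore states breadth-first until the goal is found.
        frontier = deque([[start]])
        visited = {start}
        while frontier:
            path = frontier.popleft()
            state = path[-1]
            if state == goal:
                return path
            for nxt in neighbors(state):
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append(path + [nxt])
        return None

    # Tiny example state space: rooms connected by doors.
    graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
    print(bfs("A", "D", lambda s: graph[s]))  # ['A', 'B', 'D']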
To paraphrase an old quote: "AI stops being AI when people start understanding how it works."
Is it not a replacement for a guy standing in a room and trying to keep the temperature at the right spot?
I understand that there's "current-generation AI" which is miles beyond this. But closed-loop systems that try to make decisions based on a feedback loop seem very much in the domain of intelligence in the practical sense.
I'm being a bit pedantic here, but the GGP in this thread saying that OP is not AI feels like shifting the goalposts way beyond what we would have said even 5 years ago.
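For concreteness, the thermostat version of "making decisions off a loop" is tiny; this bang-bang sketch is generic and assumes nothing beyond a temperature reading and a setpoint:

    def thermostat_step(current_temp, setpoint, heater_on, hysteresis=0.5):
        # Bang-bang control: heat below the band, stop above it, otherwise hold state.
        if current_temp < setpoint - hysteresis:
            return True   # turn the heater on
        if current_temp > setpoint + hysteresis:
            return False  # turn the heater off
        return heater_on  # inside the band: keep doing what we were doing

    print(thermostat_step(19.0, 21.0, heater_on=False))  # True  -> start heating
    print(thermostat_step(21.8, 21.0, heater_on=True))   # False -> stop heating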
Control systems are not intelligent. It helps to know how they work, and when you do you realize they work on very simple principles.
Intelligence is about learning and applying knowledge; AI is either simulating or synthesizing these behaviors outside of nature. It's not just taking the place of a function that something intelligent does.