That's our (Beeminder's) esteemed, if now somewhat moribund, competitor, StickK. See https://blog.beeminder.com/anticharity for our argument against anti-charities like that.
Hi! Beeminder cofounder here. We do have a charity option but only in our most expensive premium plan. My own feeling is that a commitment contract with a charity as a beneficiary is less effective because what kind of jerk is motivated to avoid donating to charity? Unless you set the stakes so high that you can't really afford it, I guess?
Congrats on the launch! I think this is powerful and handles the use cases you've listed better than Beeminder can currently. So that's exciting for us -- we love worthy competitors!
Beeminder cofounder here! Thanks for the plug! So much to say here but maybe I'll start with a pointer to our philosophy on anti-charities: https://blog.beeminder.com/anticharity/ (short version: we hate them).
Hi! Beeminder cofounder here! I'm pretty excited to see all the positive comments but of course I've homed in on this negative one first. I think Beeminder is incentivized to make you fail at your goals the same way eBay sellers are incentivized to not actually send you your stuff after you pay them.
Anyway, we have a whole elaborate essay on why there's very much the opposite of a conflict of interest: https://blog.beeminder.com/defail/ (about how Beeminder revenue is proportional to induced user awesomeness)
There's a key faulty assumption that may make it seem like our incentives are more perverse than they are. Namely, it's not the case that Beeminder goals are binary things that you either succeed or fail at. They're things you make long-term graphs of, like averaging 10k steps per day or working 40 hours per week. You pay Beeminder because your overall progress is much greater with Beeminder than without it, even though the specific moments you pay are kicks in the pants when you've deviated from your commitment.
I'm definitely interested to hear if any of this is persuasive. We hear the perverse incentives thing a lot so we need to figure out how to convey our apologia much more concisely in our intro material! (And thank you for voicing it!)
Again, I am perfectly willing to believe that you are the rare kind of people who can ignore the perverse incentives. But people change. Companies change. Companies get sold, sometimes to people who are only in it for maximal short-term revenue, who then run the companies right into the ground, either in the usual way or the private equity way.
As somebody who's spent decades supporting the Long Now, I believe that a lot of what's wrong in our society is people incorrectly understanding their long-term incentives and focusing on the short term. And I'm happy to believe here that Beeminder's long-term incentives really do work out to be mutually beneficial when handled by you.
But there's just no way I need the mental overhead of wondering all the time whether me paying you when I fail in a given instance really conforms to the ultra-long-term, 12-dimensional-chess understanding of conflict of interest. And then if/when it does, whether I'm failing enough to give you sufficient money so that there's a balanced exchange of value. That is way too much overhead, especially for a tool I'll be using in areas where I'll be hitting my cognitive limits on the regular.
I totally believe this works for some people, maybe most of them, but for me it's a non-starter.
Ah, this continues to be good feedback. Thanks for continuing to hash it out with me! I see I made it sound like there were a lot of moving parts in my argument for why our incentives aren't so perverse. I don't think that's the case! In particular, I don't think my argument relies on what kind of people we are. I mean, it relies on us not turning totally evil and myopic, but that's true of any company. If we started, in effect, wrongly charging you, you'd cry foul and quit.
I'm worried I'm not really grokking your underlying argument though. Maybe it just feels gross to have this kind of setup with a third party as opposed to doing it with friends. That's the kind of thing I can't argue with so if it's something like that we can leave it at that. Thanks again for helping me think through how to convey our pitch for the general non-perverseness of it in any case.
It's true that any company can turn greedy, and many do. But a problem here for me is that the structure is more dangerous when it does. If HBO turns evil, my downside risk is the $15/month I pay them. But with a habit incentive system like this, the downside risk is larger and unknown. And given that the whole point is to build habits that people stick with, "you'd cry foul and quit" is in question. Look at the way the various online games milk vast sums from their "whale" players, for example. When behaviors are correctly engineered, plenty of people don't quit.
For me yes, doing it with friends is different, because the metagame (or in Carse's term, the infinite game) is about the friendship. That too acts as a downside risk limit. But if your company were taken over tomorrow by invading aliens or private equity MBAs, all they'd want is the money.
And again, very important to me is value-for-value exchange. E.g., I'm a Newsblur subscriber. I was on their $36/year subscription. They just added a new $99 tier. I signed up immediately not because I need the features, but because I value it higher than $36/year and want to help make sure they're well funded.
So if you had a similar service where I paid you a subscription fee and then money went to, say, my brother, that would be a different deal. My downside risk is limited, the metagame keeps things safer, my cognitive load about systemic effects is manageable, and it adds a social component that means more to me than cash incentives anyhow.
Scott Aaronson adds the following in a comment on his blog post, in response to a question about this:
> the NDA is about OpenAI’s intellectual property, e.g. aspects of their models that give them a competitive advantage, which I don’t much care about and won’t be working on anyway. They want me to share the research I’ll do about complexity theory and AI safety.
> In my opinion these automated solutions seldom work in the long run
Beeminder cofounder here. Can I hear more about why you think this? There are definitely people for whom Beeminder doesn't work at all but you sound like you're making a different claim -- that it may work for a while but then stop working. That's the opposite of our experience. Our churn numbers get really good for those who stick around for a year and anecdotally we have lots of people getting PhD theses written thanks to Beeminder, etc.
But if you've had short-term success with things like Beeminder -- https://blog.beeminder.com/competitors -- and then had it fail, that would be valuable to hear more about.
Oh, and I should mention that Beeminder isn't necessarily entirely automated. If you derail and are about to be charged money but don't agree that it was a legit derailment, you talk to a human about that.
Oh! Beeminder's not actually affiliated with TaskRatchet. We're just friends with the creator of it and have promoted it a lot and have an autodata integration with it, etc. See https://blog.beeminder.com/taskratchet -- which is a guest post by the TaskRatchet creator on the Beeminder blog (so, um, I can see where the confusion came from!).