Again, I am perfectly willing to believe that you are the rare kind of person who can ignore the perverse incentives. But people change. Companies change. Companies get sold, sometimes to people who are only in it for maximal short-term revenue, who then run the company right into the ground, either in the usual way or the private equity way.
As somebody who's spent decades supporting the Long Now, I believe that a lot of what's wrong in our society is people incorrectly understanding their long-term incentives and focusing on the short term. And I'm happy to believe here that Beeminder's long-term incentives really do work out to be mutually beneficial when handled by you.
But there's just no way I need the mental overhead of wondering all the time whether me paying you when I fail in a given instance really conforms to the ultra-long-term, 12-dimensional-chess understanding of conflict of interest. And then if/when it does, whether I'm failing enough to give you sufficient money so that there's a balanced exchange of value. That is way too much overhead, especially for a tool I'll be using in areas where I'll be hitting my cognitive limits on the regular.
I totally believe this works for some people, maybe most of them, but for me it's a non-starter.
Ah, this continues to be good feedback. Thanks for continuing to hash it out with me! I see I made it sound like there were a lot of moving parts in my argument for why our incentives aren't so perverse. I don't think that's the case! In particular, I don't think my argument relies on what kind of people we are. I mean, it relies on us not turning totally evil and myopic, but that's true of any company. If we started, in effect, wrongly charging you, you'd cry foul and quit.
I'm worried I'm not really grokking your underlying argument though. Maybe it just feels gross to have this kind of setup with a third party as opposed to doing it with friends. That's the kind of thing I can't argue with so if it's something like that we can leave it at that. Thanks again for helping me think through how to convey our pitch for the general non-perverseness of it in any case.
It's true that any company can turn greedy, and many do. But the problem for me is that this structure is more dangerous when that happens. If HBO turns evil, my downside risk is the $15/month I pay them. But with a habit-incentive system like this, the downside risk is larger and unknown. And given that the whole point is to build habits that people stick with, "you'd cry foul and quit" is very much in question. Look at the way various online games milk vast sums from their "whale" players, for example. When behaviors are correctly engineered, plenty of people don't quit.
For me yes, doing it with friends is different, because the metagame (or in Carse's term, the infinite game) is about the friendship. That too acts as a downside risk limit. But if your company were taken over tomorrow by invading aliens or private equity MBAs, all they'd want is the money.
And again, very important to me is value-for-value exchange. E.g., I'm a Newsblur subscriber. I was on their $36/year subscription. They just added a new $99 tier. I signed up immediately, not because I need the features, but because I value the service at more than $36/year and want to help make sure they're well funded.
So if you had a similar service where I paid you a subscription fee and then money went to, say, my brother, that would be a different deal. My downside risk is limited, the metagame keeps things safer, my cognitive load about systemic effects is manageable, and it adds a social component that means more to me than cash incentives anyhow.