I believe this comment unintentionally captures why healthcare is broken in America.
America is capitalist and federalist. That means hospitals are either for-profit entities, non-profit entities, or government-owned entities.
Ducks quack, for-profit entities attempt to grow profits, non-profit entities attempt to grow donors, and government entities attempt to keep things the way they were yesterday.
As a society, we can pick which models we allow and in which allocation, but we can't ask a duck to be a swan - each option has positives and negatives.
When we ask for the positives without the negatives and then complain when the inevitable negatives come, is the fault with the model or with our expectations?
The better way is to iterate and grapple with the rewards in the system and where transparency should be forced or legislated. Change the incentives and you'll change the behavior. Unfortunately, you will also encounter new and unexpected negative effects...see: ducks quack.
I joined and couldn't see my own face or bubble. I clicked around trying to figure that part out but didn't get it.
I saw other people could easily publish content to the board, and I think that part is a step forward. I could have clicked chess/gif and posted something, but I exited there.
When I join, I'd like a clear "you are here" with my own face looking back at me.
The book Thinking in Bets points out that predictions are given with a % of certainty. If I say X party will win at Y% and Z party wins, it doesn't mean the model is broken.
On the other hand, this makes the whole operation unfalsifiable in any reasonable time frame. "I always said the candidate had a .1% chance of winning!"
Given two major elections in a row where the results were essentially out of the error bars, it is perfectly reasonable to be dropping your confidence in pollsters very hard. Once was perhaps understandable, but twice is getting into "right .01% of the time" territory.
I hadn't considered the "unfalsifiable" aspect of my comment. I think you make a good point.
At a poker table you get a high number of events to check against but a presidential election is only once every 4 years.
I wonder how accurate the polls are for local elections vs. national ones? If neither is accurate, then that's a strong case for your point of view and for a change in methodology.
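For what it's worth, with enough events you can score probabilistic forecasts directly. Here's a minimal sketch of a Brier-score check in Python; the forecast and outcome numbers are invented purely for illustration:

    # Brier score: mean squared error between forecast probability and outcome.
    # All numbers below are made up for illustration.
    forecasts = [0.90, 0.70, 0.55, 0.30, 0.80]  # predicted P(candidate wins)
    outcomes  = [1,    1,    0,    0,    1]     # 1 = candidate actually won

    brier = sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)
    print(f"Brier score: {brier:.3f}")

Lower is better: a forecaster who always says 50% scores 0.25, so consistently beating that across many races is a meaningful test, even if any single race proves nothing.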
> On the other hand, this makes the whole operation unfalsifiable in any reasonable time frame. "I always said the candidate had a .1% chance of winning!"
I wonder how many people still think Nate Silver has a credible perspective: he's spun the above strategy into a job in unscientific punditry.
But elections aren’t single events, they are 50 different events. 5% of the time the results in Michigan will be outside the error bars. But if the results are outside the error bars in Michigan, Iowa, Wisconsin, Minnesota, etc., and the polls all got it wrong in the same direction for each (overestimated Biden’s support) that’s actually statistically quite unlikely.
Wisconsin, Michigan, and Minnesota aren’t statistically independent though, so if a poll is wrong in one of them, it’s also likely to be wrong in the others too.
Maybe it’s semantics, but to me that means something is wrong with the model. It’s not just a matter of an unlikely result happening some percentage of the time. There is some hidden factor that could have been adjusted for ahead of time.
The model certainly ought to handle correlations, but the evaluation should take them into account too.
You can't treat the election as fifty independent replicates: a miss in both Minnesota and Wisconsin is clearly worse than one error, but it's also not as bad as (say) getting Wisconsin and Rhode Island wrong, where it's more likely that two separate errors occurred.
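To put rough numbers on that, here's a toy Monte Carlo sketch (all parameters invented for illustration): each state's polling error is a shared national error plus local noise, with the marginal error per state kept comparable in both cases.

    import random

    # Toy model: state polling error = shared national error + local noise.
    # All parameters are invented for illustration.
    TRIALS = 100_000
    MISS = 4.0  # a state "misses" if its error exceeds 4 points

    def p_all_miss(shared_sd, local_sd, n_states=4):
        hits = 0
        for _ in range(TRIALS):
            shared = random.gauss(0, shared_sd)
            errors = [shared + random.gauss(0, local_sd) for _ in range(n_states)]
            if all(e > MISS for e in errors):  # all miss in the same direction
                hits += 1
        return hits / TRIALS

    print("independent errors:", p_all_miss(shared_sd=0.0, local_sd=3.0))
    print("correlated errors: ", p_all_miss(shared_sd=2.5, local_sd=1.7))

With the shared component, four same-direction misses become orders of magnitude more likely, which is why a correlated multi-state miss is much weaker evidence against the model than naively multiplying the individual tail probabilities would suggest.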
I understand the point mathematically; I think I'm talking more about how the presentation of the error term is unintuitive to me. It makes sense to me to say "the predictions were right, Trump just got lucky" when he outperforms the 95th percentile error bars in a state. He just got lucky. But if he outperforms it in a bunch of states because the polls didn't account for the fact that Trump supporters don't answer pollsters, then it doesn't make sense to me to say "the predictions were right because they factored in the chance that the polls had correlated errors." I get that you've quantified the possibility that the polls are wrong in a systematic way, but I don't think that's the kind of possibility people are thinking of when they hear "there is a 5% chance Trump could still win."
Fair enough. I don't have any expenses, everything I use is free. Gumroad DOES take a fee, so there's that. And taxes... always taxes. Thanks for pointing this out!
"Oh, and by the way, responding to these kinds of issues with “if you wanted your voice to be heard, you should have turned on analytics” is inexcusable."
Authentic question: how would a business know this otherwise in an actionable and effective way?
Maybe we should be whitelisting for analytics. Instead of perpetually trying to identify bots, pick the 20% of your users who you are pretty sure are actually humans. Nielsen-style, if you will.
Maybe two cohorts. The people we're really sure about, and the ones we're pretty sure about. If they diverge, ask why.
In general it's quite easy to filter out bots and crawlers from your basic access logs, as most bots and crawlers will identify themselves as such.
If you're running anything with an API, then unless something's horribly wrong it's even easier: look at the number of requests being made to an API endpoint and spot-check a few of the user identifiers (tokens, keys, whatever you're using) to see the variety of users.
All of this is assuming you're trying to merely investigate the volume of use of a feature, not trying to diagnose demographics. If you're trying to extract more fine-grained detail, I don't have as many answers; I hope others will chime in with constructive ways to get things like geographic demographics via server logs.
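For instance, a crude first pass over a combined-format access log might look like the sketch below; the log path and keyword list are assumptions, and this only catches bots that self-identify:

    # Crude bot filter over a combined-format access log.
    # Path and keyword list are illustrative assumptions.
    BOT_KEYWORDS = ("bot", "crawler", "spider", "curl", "wget")

    human_hits = 0
    bot_hits = 0
    with open("/var/log/nginx/access.log") as log:
        for line in log:
            # In combined log format the user-agent is the last quoted field.
            user_agent = line.rsplit('"', 2)[-2].lower()
            if any(kw in user_agent for kw in BOT_KEYWORDS):
                bot_hits += 1
            else:
                human_hits += 1

    print(f"human-ish: {human_hits}, bot-ish: {bot_hits}")

Rough as it is, that's usually enough to separate obvious crawlers from everything else when all you want is the volume of real use of a feature.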
I don't use Google analytics but I have seen time and again vocal users who seem ignorant of their own usage of an application...and very much not representative of the majority.
People are bad at analysing metrics, too. You make a change and users spend half as much time on a page. Did you make the page twice as easy to use, or so bad that they gave up?
I honestly think that a simple “we are considering removing feature X, please let us know if you’re still using this” can work extremely well. Everyone who cares about the feature will be sure to write in.
Plus, people often don't give valuable feedback when asked questions about features they want and use... people are poor judges of what they actually want, and will list things they think they care about and then end up never using or which don't affect their choices.
We do. We have real customers do real surveys of our real web sites. In person. We even do shadowing to see how real people use our sites in their daily work.
It's uncommon in SV due to laziness and an unwillingness to talk to actual human beings. Which is dumb because there are companies that will handle this for you.
But SV is stuck in this mindset that everything can be solved by an algorithm. It can't. The tech echo chamber really needs to get over itself.
Our customers certainly do. We get excellent results from asking a few simple questions now and then, providing both a good source of actionable feedback on feature requests and any current problems, and often some encouraging comments that reassure us we are basically doing things that our customers like.
It doesn't just have to be surveys with lots of participants, though. For example, we've known for decades that a simple observational study with just a handful of people is often enough to identify most of the serious usability problems with an interface.
The idea that everything important must be reduced to automated analytics and number-crunching is a very strange disease. Even if the numbers don't lie -- and as we see here, that is far from guaranteed -- you still need to be asking the right questions and comparing useful alternatives for the results to be valuable.
"...someone recently explained to me how great it is that, instead of using data to make decisions, we use political connections, and that the idea of making decisions based on data is a myth anyway; no one does that." -- from https://danluu.com/wat/
Sorry, your comment just reminded me of that. Are surveys perfect? No, but they have their uses, and plenty of companies find real value by making use of them.
> Your email to group admin@myrtlelime.com was rejected due to spam classification. The owner of the group can choose to enable message moderation instead of bouncing these emails. More information can be found here: https://support.google.com/a/answer/168383

So I'll post my feedback to HN instead:
Likes
1. Good idea. My wife and I are going to <place X> soon. Scratched an itch.
2. The "share" link was easy but you should have asked me for her email to build the network.
3. The adding of categories of things was intuitive and easy.
Dislikes
1. When searching for a place, the search term I typed didn't change into the data that populated from my search. I did it again to make sure I'd done it right. Update the text box with the fully formatted name, airline-style.
2. The itinerary inspirations provided lists of things to see and do (good), but no reviews and no way to click out for more beyond your site. I popped open another Google tab to copy and paste.
Thanks for the feedback! Feel free to email us at harry@travelchime.com or peter@travelchime.com if you have trouble in the future.
We've also noticed the text being inconsistent. It's especially bad for foreign places. We've added it to our bug-list!
The itinerary inspiration lists are honestly kinda similar to the place list you create. For the place list you create, you can click on a place to see more info and links out. We'd like to merge the behaviour of the two so it isn't confusing, but that's also in the pipeline. Sorry about that!
In your blather situation, you'd just set the forward skip to 30s and hit it a few times. Podcasts are for-profit, so if you aren't paying... you are the product. I don't think a bookmark after the ads is going to be a thing.
I do like the idea of having content areas and being able to navigate between them! Why on earth not indeed!
I wish Overcast did a better job of queuing and making fresh content available in an intuitive way. I've gone so far as to watch youtube videos to see if I'm doing it right or not...
I agree that Overcast is the best game in town, but that's just because it has the best UX. There is something about the underlying audio technology that seems to be keeping this world pinned to mid-2000's tech.
I love Stripe and am an early and long time customer.
Everything about them is beautiful, simplified and easy - except Radar.
For all the blogs and good intentions of the team, Radar to me means an email 3 days after I've processed and shipped an order telling me it may be fraudulent, followed a month later by an "I'm sorry, you lost your credit dispute for $X" email, followed by a $15 charge from Stripe for no error or bad faith on the part of my company.
When Radar isn't busy sending me reminders that we're having money stolen from us, it is hard at work denying legit charges and sending customers down a Kafkaesque rabbit hole, hell-bent on seeing exactly how much friction a single third-party service can introduce into our website's buying process.
Stripe didn't create the fraud and they are taking on a difficult and emotional part of their business, which is commendable, but it must be said:
1. The false positive rate of Radar is so bad that it renders the product worse than useless - worse because it initially provides a false hope.
2. You can't disable some parts of Radar. Better to throw the whole thing into the sea and use a plugin than be forced into some parts of this.
I know there are good people trying hard to build this product well. Some of the people on it serviced our account in the early days. They had a vision of a better way and they made it real. My hat's off to them. Now, people I respect, I have hard words and a hard truth to impart to you:
You are not delivering the value you claim to provide. Do not continue iterating. Discontinue the product. Allow others who focus on this to do it well. You are great at what you do but you are frustratingly, annoyingly, arrogantly bad at this. It pisses us all off to be forced to use it and to be told again and again how great it is going to be or how fixed it is this time. I don't want to engage with Stripe on Radar or "learn more about it", I don't want to be interviewed by you so you can better understand the voice of customer, there is no email or back channel thing you can send to make this better. The beauty of Stripe is that it "just works" but Radar does not just work - and it never will.
PM for Radar here. Really sorry to hear that. Portions of your comment are surprising to me but I don’t want to discuss your business in public. We’d be happy to discuss in private. If you don’t want to, we respect that.
For the benefit of other HNers: if you ever have a concern about this sort of issue, we’d love to hear from you (my email is eeke@stripe.com). This is all I do every day; you can never waste my time.