Hacker News

I've launched a side project to let people track the outcome of promises/predictions made by public figures and popular Twitter accounts, as I find unrealistic predictions and outright lies can be very damaging:

https://ontherecord.live

The stack is Django + Gunicorn / nginx, PostgreSQL and some Intercooler.js and vanilla JS to make the experience smoother.

I've also been trying to learn Elixir + Phoenix, as I find some of the concepts (e.g. LiveView) very promising.



This is interesting, but I'd love to see a different take on the idea of binary "this is wrong" and "this is right".

Take [1] for example; it lists the source as [2]. However, the source claimed in [2] is reported as:

"Department of Health and Human Services confirmed".

You might not want to turn it into a "wikipedia", but it would be nice to offer the following:

1) Wrong reports and right reports - a list of sources that suggest whether the claim is factually sound or not (this could be news reports like CNN). You can pull together many sources in a given topic to support a claim

2) "Authority sources", for example, DoJ or other official information distributors that are claimed in an article.

3) "Linked news sources" - These days, many news sources are rehashes of rehashes; sometimes there are 4 or 5 links in the chain before you get to the "authority" source (reported by the Verge, which was detailed by TechCrunch, which was first outlined by AReallyCoolBlog). It would be nice to have a "trail" of where the news/source/information came from, and how many links there are in the chain.

[1]: https://ontherecord.live/17

[2]: https://edition.cnn.com/2020/03/31/politics/drive-thru-coron...
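The "trail" idea in 3) could be modeled as a simple citation chain. A minimal sketch (the story names and the `cites` mapping are invented for illustration):

```python
# Each story records the story it rehashed; None marks a primary
# ("authority") source. Following the chain gives the trail and its length.
cites = {
    'the-verge-article': 'techcrunch-article',
    'techcrunch-article': 'areallycoolblog-post',
    'areallycoolblog-post': None,  # primary source
}

def trail(story):
    """Walk citations back to the primary source; return the full chain."""
    chain = [story]
    while cites[chain[-1]] is not None:
        chain.append(cites[chain[-1]])
    return chain
```

Here `trail('the-verge-article')` is three links long, matching the Verge/TechCrunch/AReallyCoolBlog example above.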


Thank you very much for taking the time to provide detailed feedback.

1) Fake quotes do concern me, but for now I've settled on requiring a single reputable source, preferably the primary source (so in your example the DoH press release, if available, would actually be a better source). I've also provided a 'Report quote' button to let users flag fake quotes. I am planning to allow users to validate and/or post additional sources for quotes.

2) I've considered adopting a whitelist approach, but given I can't possibly know all the relevant sources for all the topics which can be covered, I'm just manually accepting quotes for now (the 'does it look credible to me?' test). Things might get even more interesting if/once I start getting quotes/sources on a topic I know nothing about or even in a language I don't understand.

3) Tracking down the primary source is definitely an issue I've already run into when trying to post the first quotes. Ideally, I would like to only accept 'primary sources', even though I know it takes extra work to provide them. Maybe implementing what you suggest in 1) would help with that.

Another feature I'd really like to implement is to automatically archive the source (on the Internet Archive's Wayback Machine) once I validate it, so that quotes don't lose their sources over time.
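The Wayback Machine exposes a "Save Page Now" endpoint at https://web.archive.org/save/<url>; a rough sketch of building such a request (actually sending it requires network access, and heavier API use may need authentication):

```python
from urllib.parse import quote

def wayback_save_url(source_url):
    """URL that asks the Wayback Machine's Save Page Now to archive source_url."""
    # Keep ':' and '/' unescaped so the target URL stays readable in the path.
    return 'https://web.archive.org/save/' + quote(source_url, safe=':/')
```

A validation hook could request this URL whenever a quote's source is approved.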


Good project. I think it is unfortunate that we pay so much attention to such promises/predictions. We should treat them as we do horoscopes, palm reading, etc. Nobody knows the future; at best we can make some broad educated guesses. At times I feel sorry for the public figures who make promises. In some ways the whole setup forces them to do so: such promises make for good headlines and soundbites, and could even swing elections. In some ways they're playing to the audience. No matter how many times they get it wrong, we still come back to hear more promises/predictions. It might just be in our nature to do so...


Thank you. You raise a very interesting point, and I completely agree that predictions should be given much less importance. But, as you say, it is human nature to trust people who look confident, and therefore predictions do hold weight and a lot of people abuse that.

I disagree about promises though, I believe if you promise to (not) do something and then break that promise, you should be held accountable for it. Otherwise, people will keep promising more and more outlandish things, because you eliminate the downside.


Agreed. It's absolutely reasonable for a politician (for example) to get into power and discover that for some reason they can't do the thing they promised to do, but I'd love it if they'd just briefly say so and why, rather than brushing it under the carpet and assuming no one will remember what they said while they were campaigning.

"I know I said I'd increase funding for schools by 20%, but that budget is controlled by local authorities rather than by my office - I'm going to increase funding to local authorities by X% instead and ask them to fund schools" - that kind of thing (although, you know, you'd hope they'd know about the funding infrastructure before making promises).


Dude should know how the funding works before making said promise.


Good point. I would say that even if we know how funding works, we cannot predict the future. We do not know what circumstances will prevail at the time actions/decisions need to be made. With the best of intentions, it is still possible to have to renege on a promise. My argument is: don't make promises. But even where they are made, we should read them as just a general statement of intent. As an example, a certain government promised to conduct 100,000 covid-19 tests per day by a certain deadline. When the deadline hit, they had only got as far as 80,000+ or so. They got hammered for failing to meet the target. But I'm thinking: yes, they missed the target, but we're a lot better off than when the promise was made. If they hadn't made any promise and achieved the same result, the press might have celebrated it (probably not).


That’s a cool idea. I also notice that a lot of times there’s a news story without any follow up. For example, someone gets arrested for a big crime like murder or rape. They might never write about it again. Did they go to trial? How can we improve follow up in our news sources?


Thank you. Follow up is definitely also an issue, see also stories based on a scientific article which is then retracted. I guess information overload is partly responsible, there's always something new happening somewhere.

I've tried to address this by only accepting quotes with a 'due date', so I can easily bring them back into the spotlight (top of the home page in the Open section) once they can be assessed.
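Concretely, the "due date" filter could be as simple as the sketch below (the field names and example quotes are hypothetical; the real site presumably stores these as Django model fields):

```python
from datetime import date

# Hypothetical quote records, each with a deadline after which it can be judged.
quotes = [
    {'text': '100,000 tests/day by end of April', 'due': date(2020, 4, 30), 'assessed': False},
    {'text': 'Self-driving taxis by 2030',        'due': date(2030, 1, 1),  'assessed': False},
]

def ready_for_assessment(quotes, today):
    """Quotes whose due date has passed but which haven't been judged yet."""
    return [q for q in quotes if q['due'] <= today and not q['assessed']]
```

Anything this returns would surface at the top of the Open section for review.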


If it's a local level crime, a decade ago local news had the resources to put someone in courthouses and report on cases as they happen. Incentivize follow ups by supporting local news and journalists who provide them. You can also reach out to show interest.


I really like this project! It sort of feels like Kialo.com, I hope projects like this can support democracy in the age of information overload.


Thank you. I didn't know about Kialo.com, their debate concept looks very interesting.


This is fantastic. I've had a very similar idea kicking around in my head for a very long time.

These days it seems impossible to make sense of all the predictions from so-called experts. Tracking the accuracy of the talking heads is a great place to start measuring just how valuable their information is.


Thank you, it's great to get this kind of feedback. Hehe, "tracking the accuracy of the talking heads" sums it up quite nicely.


If the purpose is to decide how valuable the information is, one doesn't need to do anything at all ...


You could add a Brier score as well, to get a sense at a glance of how good they are at predicting.
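For reference, the Brier score is just the mean squared error between stated probabilities and actual binary outcomes; 0.0 is perfect, 1.0 is worst, and always saying "50%" scores 0.25. A minimal sketch:

```python
def brier_score(forecasts):
    """forecasts: (probability, outcome) pairs, outcome 1 if it happened, else 0.

    Lower is better: 0.0 means perfectly confident and always right,
    1.0 means perfectly confident and always wrong.
    """
    return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)
```

For example, a pundit who said "90% chance" of something that happened and "80% chance" of something that didn't scores ((0.9 - 1)^2 + (0.8 - 0)^2) / 2 = 0.325.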


Thank you for the suggestion, I didn't know about the Brier score. It does indeed look very appropriate for the author page.


If this is a topic you're interested in, I strongly recommend Philip Tetlock's book Superforecasting. The Good Judgment project that he helped launch uses Brier scores to assess forecasters.


Thank you, I've been hearing about that book lately. I'll definitely look into it, it seems very relevant to this.


Thank you everyone for your feedback, it's very motivating to see that so many people find this project interesting. I might turn this into a Show HN soon, once the site fills up with some interesting predictions.

On the technical side, the logs show over 6600 unique visitors in the last 12 hours, with over 1100 of them (and 18k requests) during the busiest hour. I know that's not too impressive, but it's still good to see the basic VPS running the server didn't break a sweat (load average peaked at around 0.1).

In the meantime, feel free to post content you're interested in, and also contact me (e-mail is in my profile) if you have any feedback. Thanks again!


I'm interested in building a site in the same way (Django + Intercooler + VanillaJS). Do you have any recommended projects/readings to look into? Or do you have any thoughts on other alternatives (StimulusJS + Turbolinks / Unpoly)?


Hey, I think Django has some of the best documentation of any project I know (both reference and tutorials/guides):

https://docs.djangoproject.com/en/3.0/

There is some material on using Django with intercooler.js:

https://www.reddit.com/r/django/comments/5nj242/psa_intercoo...

https://engineering.instawork.com/iterating-with-simplicity-...

but in the end I did something slightly different: for actions, my views just contain:

  if 'X-Ic-Request' not in request.headers:
    # render full page (which includes the fragment template)
  else:
    # render fragment template (which Intercooler will interpolate into the page)
That way, should I get to that page any other way (the user has disabled JS, or I link to it directly because I want a full page reload), it still works fine. Hope this helps.
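Spelled out, the check above might look like the standalone function below (the template names are made up, and a plain dict only approximates Django's case-insensitive `request.headers`):

```python
def template_for(headers):
    """Pick a template based on whether Intercooler made the request.

    Intercooler.js adds an X-IC-Request header to its AJAX calls, so its
    absence means a normal full-page navigation (or JS is disabled).
    """
    if 'X-Ic-Request' not in headers:
        return 'page.html'      # full page, which includes the fragment
    return 'fragment.html'      # fragment only; Intercooler swaps it into the page
```

In the real view, each branch would end in `render(request, template, context)`.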

I had a look at Unpoly, that looks very interesting and could simplify the backend code even more.


Yeah, Django's documentation is top notch, but I haven't seen a lot about mixing Django + Intercooler. That's a cool approach to make sure the actions are always reachable. Will read these, and maybe reach out to you sometime to find out how you like Unpoly! Thanks!


This looks really cool, nice! I don't want to look like I'm spamming links, but I posted a comment yesterday about an Elixir course I've been doing - they have a great Liveview one too. You might be interested :)


Thank you. I don't think it would be spam, you meant this one, right?

https://pragmaticstudio.com/phoenix-liveview

Those are the videos I've been watching just to get a better idea of the concepts -- I agree, they are well made. I actually found them via HN about a week ago.

I'm thinking of reading this once I finish the videos: https://pragprog.com/book/phoenix14/programming-phoenix-1-4


Yep, that's the liveview one I started with :) They have a general Elixir/OTP one too: https://pragmaticstudio.com/courses/elixir - which is the one I mentioned in my other comment (in addition to the discount code LIVEVIEW).

I really like that course because you basically hack a web server together, and then refactor it to be kind-of like the Phoenix architecture, which for me is the perfect way to learn something like that. Toward the end they do the same with GenServers - hack together a stateful server, and then refactor it so it looks like a GenServer, before introducing the actual thing as a drop-in replacement. That's where I've got up to so far, but I'd definitely recommend that course if that sort of thing appeals to you and you like their teaching/video style.

That book looks great too, thanks for sharing :) Reading the author section, I'm definitely tempted to pick up a copy.


I love your idea! Thank you. Keep going!

Just yesterday I had a talk with a friend in the PR industry. We were discussing how to select the best information sources, and came to the idea that the future will be some kind of rating system based on expert/field-of-knowledge pairs. After that, I started to think about creating some kind of automatic system for fact-checking claims in articles, at least numeric ones (e.g. based on world population, company valuations, and other public data). Do you have anything like this in your backlog?


Thank you very much. My initial goal, should the number of quotes get too large for me to follow up myself, is to tag quotes and invite volunteer experts in a specific field to moderate quotes tagged by certain keywords.

Automating that sounds very attractive, but I wouldn't even know where to start. NLP? I've heard algorithms are sometimes used to automatically make trades based on news stories, so I imagine such a system could initially help moderators by providing suggestions/assessments?


I'm sure inviting people to volunteer to check information is the right idea, but it creates the need to build some kind of rating or double-checking system for the volunteers too (something like Wikipedia's author/moderation system).

About fact-checking: I think this feature has two big parts: 1) extracting a fact from the text, 2) checking the fact.

For the 1st task, at a basic level and for numeric data, it seems not so difficult with NLP (I just tried to create a tree and it looks pretty simple: https://nbviewer.jupyter.org/github/SerafimPikalov/Sandbox/b...)

The 2nd task probably needs some kind of "fact tree" (something like World population > USA population > target audience for the service > potential audience...)
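As a toy illustration of step 1) for numeric data, even plain regexes can pull out simple subject/quantity claims; this is a stand-in for the tree-based NLP approach in the notebook, and both the pattern and the sentences are made up:

```python
import re

# Matches "<Subject> is/was/has/reached <number> [million|billion|%]".
NUM_CLAIM = re.compile(
    r'([A-Z][\w ]+?)\s+(?:is|was|has|reached)\s+([\d,.]+\s*(?:million|billion|%)?)'
)

def extract_claims(text):
    """Return (subject, quantity) pairs for simple numeric statements."""
    return [(s.strip(), q.strip()) for s, q in NUM_CLAIM.findall(text)]
```

Each extracted pair could then feed step 2), a lookup against public data.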


https://en.wikipedia.org/wiki/Quis_custodiet_ipsos_custodes%...

I completely agree, I think the next step will be to open up user accounts (but keeping them optional), so that I can get a track record for registered users and invite the most active/correct ones to start moderating. At the moment I can go through the submissions myself and it doesn't (yet) take up that much time.

I find the other part (automating validation) fascinating intellectually, but I'm currently focusing 100% on implementing the functionality of the site itself (Reddit integration is at the top of the list), so this will have to wait. If you're interested in discussing this further at any time, you're more than welcome to also reach out (my e-mail is in my profile).


Automating validation is a good idea - as I said, you could probably use some kind of facts graph to do it.

I sent an email today. Let's keep in touch!


Hey, the web site looks really clean. I admire your work.


Feature request: flag people claiming to be experts who never make actual predictions that can be tested.

Would be good to have a block list of people I can safely ignore.


That's a very good (related) idea, I'll have to see how it can be integrated.

I've been thinking a lot about what makes a prediction/promise testable. So far I've summarized my conclusions in the 'How it works' page, but if anyone has any references on this I'd love to hear about it.


Great project. Thanks. With predictions, I've noticed a trend where a person will appear on TV and make prediction after prediction that turns out wrong. And they don't seem to feel even a bit shy about making further predictions with apparent certainty.

Like, they will go on cable news and say, "I know about this stuff, and I guarantee that X will happen to Trump by June." And the commentator never says, "but you predicted that he wouldn't actually run for office (wrong), wouldn't win the nomination (wrong), wouldn't get elected (wrong), would get impeached by January (wrong), etc. Maybe you shouldn't be so sure in your predictions?"


I think on cable news you can see different treatment of “MSNBC/FoxNews/CNN analyst...” who are under contract to the network for regular appearances vs. guests that are unpaid. Other than the phrase I quoted above it isn’t always obvious (especially to casual viewers) that the host is talking to a colleague.


Thank you. That was one of the reasons I started the website, I think we desperately need experts, but it's getting more and more difficult to tell the charlatans apart from the experts outside our own field. Glad to see I'm not alone.


I made a similar project some years ago, and the name was the same too! I didn't launch it. Hope you succeed; let's keep them honest.


Thank you, I'm really curious where this will go. The reaction so far here on HN has been very inspiring.


You could add a section for post-checking the sensational /r/futurology-type techno-media claims about how this or that groundbreaking new xyz, which works great in the lab on rats, will fix everything for you!

These overly optimistic/negative press releases can also be damaging to public understanding over time.


That should provide a good source, thank you.

I've started work on Reddit integration (/u/OnTheRecordBot -- it will be a bot you can call, similar to RemindMe!).

Tags are also already present in the backend, but not yet exposed in the UI. I imagine they will become necessary once the number of quotes/topics increases.


Great project! I've been doing this manually on HN and found the predictions I was tracking to be totally wrong. Please continue on this project, I'm not a movie critic but I give this project two whole thumbs up.


Thank you very much. I was thinking of extending the concept to any (publicly accessible) quotes, like forums, HN, etc., but for now I started with:

- Authors with a Wikipedia page (because there's some kind of relevance threshold there)

- Twitter (because I can automate the validation with a bot -- see @OnTheRecordBot).


This is great! I've been wanting to do something similar for a while, to call people out for making unfounded assertions - with the goal of trying to encourage more humility in making predictions.

Will follow with interest.


Thank you, that's what I'm aiming for as well. If you have any quotes you'd like to follow up on, please don't hesitate to submit them.


This is really a cool project. How are you determining whether a given post from someone is a promise/prediction? Are you doing this manually?


Thank you! I handle this at the moment by requiring manual approval before the content becomes visible on the website (I customized the Django Admin interface a bit), but should the number of quotes become unmanageable I'll start looking for other moderators.

Edit: Also, we try to follow our guidelines on what a promise/prediction should be, you can find them on the 'How it works' page.


Really cool idea!


Thank you!

Also, there are no user accounts so all submissions/votes are anonymous, in case anyone wants to submit something they're interested in.


This is great



