Gave it a try for a language I know (Spanish) and one I don't (Russian).
On Russian, the explanations of why some answers were correct/incorrect didn't load (presumably an AI call failed?). Especially at lower layers, a good fallback would be a simple dictionary definition.
On Spanish, I did the placement test, then it asked which "dialect" I wanted. I selected Mexican and was treated to truly excellent renderings of European pronunciation. I wouldn't have been mad if all it had was one set of pronunciations, and it's more frustrating to see the ignored option than to never have had it at all.
As for the placement test: I got dropped into lesson 2 for Spanish. For comparison, I placed into Lesson 5 in Russian, where I actually got more incorrect answers. The Spanish placement test wasn't very deep, and I knew all the answers. It told me I got two wrong, so either the test is wrong or I just got punchy and hit the wrong buttons.
Recommendation: scale back on the ambition. Focus on getting the educational and product experience right with languages you know first. Be honest about data provenance and limitations.
But the things that irritate me even more are the infernal modals and alerts on my computing devices. It is hard enough maintaining focus without having to spend an entire work session playing whack-a-mole at random intervals for a hundred different things that aren’t relevant. I never want to know that my scanner software has an update available.
I realized that at its core, this problem is caused by developers and product managers mistakenly believing that I care as much about their product as they do.
It would be nice if the gatekeepers had mechanisms that punished this behavior. Search engines should lower the rankings of every site with random modals. App stores could display a normalized metric of alert click-through: "this app has an above-average number of alerts that are ignored."
I've disabled the entire notification stack on macOS and Windows 10 with some tweaks and couldn't be happier. It's not like I'm going to miss out on anything of value as Slack, Discord, Mail will just indicate new messages with a dock/taskbar icon change.
But it's sure as hell annoying to have unsolicited popups randomly appearing ("Java update available! Apple Music now 50% off! GeForce Experience driver update! Windows Defender scan results! USB drive not ejected properly!..."). They're also often embarrassing when screen sharing.
One thing that drives me up the wall on macOS is when an application demands attention and its dock icon starts bouncing... and doesn't stop. It happens over fullscreen stuff too.
The flashing taskbar icons in Windows are far less obtrusive. I was just looking at the latest Windows 11 Insider Preview, where the icon will only flash a few times, and then the little "application is running" bar under the app icon changes from white to red to indicate that it wanted your attention. That sounds like an excellent way to handle it to me.
Any app that pops up a notification when NOTHING EXTERNAL HAS HAPPENED has all its notifications turned off immediately and permanently. It's literally just deciding "hey, I'll bother the user about something pre-programmed right... now!" No.
This is a bigger problem than just software developers: it's all businesses thinking you care about them as much as they do. They don't seem to understand that I've made purchases from tens of thousands of businesses over three decades as an adult, with more to come. No matter how much I might care in theory or principle about any one of them, there is no universe in which I can read daily, weekly, or even monthly e-mails, SMS messages, or pop-up notifications from all of them; if I actually did, my entire life would consist of nothing but filling out surveys. The cheeky little smiley emoji asking if they can take just five minutes of my time misses the point. Sure, I've got five minutes, but you're one of 30 businesses asking for that every day, and it's no longer "just a moment" when it adds up to two and a half hours across all of them.
If the anecdote were real, it would still be an anecdote: tempting to generalize, but wise to hold loosely.
The article is about how AI will lead to a labor surplus in certain professions, while other professions will retain employment.
The article compares this to the Black Death, where labor supply decreased uniformly. Labor was then able to extract concessions from capital.
Industrialization leads to a different outcome: capital captures more value initially, while devastating workers in the short term. In the long term, everyone benefits from higher living standards. Even if the article is correct about AI impacts, it doesn't explain why AI is different from all prior industrialization.
Counterpoint: the past continues to inspire, surprise, and delight.
Your comment about “1700’s newspapers” reminded me of The Past Times podcast, where comedians read random newspapers from across American history. The episodes I’ve listened to were delightful, and they covered mundane news in mundane places.
“O Brother, Where Art Thou?” is one of my favorite movies. It’s a retelling of The Odyssey (a literally prehistoric tale) set in Depression-era Mississippi, made in the early 2000s.
The specific question of editing out these production artifacts doesn’t rile me either way, though. I didn’t see the original mistake, and I won’t notice the fix either.
I’ll also agree that just as no one steps in the same river twice, how the past is viewed and interpreted changes over time. What is valued or not also changes. 90% of everything is still crap. And quite a bit of the interest in the past is reflected in remixes or retellings for modern audiences.
Still, people also read Beowulf or Chaucer in the original or in modern translation. Others will enjoy both Jane Austen and Bridgerton. People will listen to Beethoven and Jon Batiste. Sure, not all those things are for everyone, but neither are modern music genres, sports entertainment, or most TV shows.
Yes, Homer will outlive us all, but what 20th century film is likely to have Homer’s longevity?
I think people will still be playing Tetris and reading Homer in a thousand years, but I’m not confident at all that they’ll be watching any of our videos.
Certain right wing figures have spent decades coordinating an alternative media infrastructure whose only goal was to sow doubt about truths that were inconvenient to their political aims.
Any discussion of public media trust that doesn’t include this as a component of analysis is immediately suspect.
An analysis that claims that public mistrust in media is because the media did not create more space for right wing obfuscation and disinformation is at best misguided. In practice, it is more likely another element of the campaign that created the problem in the first place.
There are many valid criticisms of the modern press. “They didn’t conform to the right wing’s warped presentation of reality” is not one of them.
This comment's absolutely and completely misguided and nakedly partisan/tribal viewpoint almost perfectly describes the issue with the media: A complete and utter lack of understanding that perhaps some opposing viewpoints are also valid and worthy of discussion, which then leads to completely biased and untrustworthy activist reporting.
It's worth noting that this is a paper from 2014. The premise seems well-known now, but I wonder if it was as strong then?
I agree root cause analysis would be more interesting, but it wouldn't be justified until the base phenomenon was validated.
Sure, people who exercise think it helps stress and anxiety, but lots of people also find homeopathic remedies helpful. Papers like this show the former stands up under experimentation and the latter doesn't.
I think that in 2014 it was already well known; there are several meta-analyses that precede this work. Yes, research is important, even just to confirm common knowledge. I just find the paper lacking in depth and sources, which makes it not that interesting.
There is no follow-up to this research by the authors, and their work is not really focused on anything in particular; I have the impression that it's just another paper on the subject.
There was a lot of popular science on sports training in the 1980s, perhaps related to new, excellent measurement of blood content at different stages of exercise. One fallout of that science was the steroids era in US pro sports.
The claim made here is that at 100GbE the sequence numbers wrap in milliseconds. That number seems right (source: vibes).
Why isn’t this a serious problem then? I’d love a networking expert to chime in.
Is it that high bandwidth links also have very low packet error rates?
Or is it that individual TCP flows rarely saturate the link? (Because of congestion control, lower end to end throughput, sharing links, or some other reason?)
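For what it's worth, here's my own back-of-the-envelope check of the wrap claim (these numbers are mine, not the article's): TCP sequence numbers count bytes in a 32-bit space, so a single flow at full line rate exhausts it quickly.

```python
# Back-of-the-envelope: how fast does one flow at full line rate
# wrap the 32-bit TCP sequence space?

SEQ_SPACE = 2 ** 32  # sequence numbers count bytes; the space is 2^32 bytes


def wrap_time_seconds(link_gbps: float) -> float:
    """Seconds for a flow saturating the link to consume the sequence space."""
    bytes_per_second = link_gbps * 1e9 / 8
    return SEQ_SPACE / bytes_per_second


for gbps in (1, 10, 100):
    print(f"{gbps:>3} Gb/s: wraps every {wrap_time_seconds(gbps):.2f} s")
# At 100 Gb/s this comes out to roughly a third of a second,
# i.e. hundreds of milliseconds per wrap.
```

So "milliseconds" is in the right ballpark for a saturated flow. My understanding is that the standard protection here is the RFC 7323 timestamp option (PAWS), which lets receivers reject old segments from a previous wrap, but I'd still welcome a networking expert confirming how this plays out in practice.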
OP is citing a post about genomics research data sharing, where they have identifiers that are exceptionally vulnerable to Excel’s date inference and conversion algorithm.
OP’s post is probably still a good idea for researchers, where something like SPSS offers better protection for datasets, albeit with a higher learning curve.
For the rest of us, so long as we pay attention, Excel is an incredibly underrated analysis approach.
Memphis is unique in sitting on a major artesian aquifer, and its tap water is as good or better than can be bought in a bottle. The environmental group here is concerned about unsustainable use of the aquifer, instead of gray water or Mississippi River water.
TFA states the facility will use gray water, but I think it’s good of watchdogs to make sure that stays the case.
The author states the purpose of the Jones Act was to preserve US maritime capacity. He then claims that because the US does not dominate this market globally, the Act has failed. Therefore, we should end Jones Act protectionism to compete with China.
I don’t see how removing a guaranteed market will spur investment in the capital intensive activity of ship building.
It also occurs to me to ask how much domestic shipping traffic there is in the first place, and if it is actually price-sensitive. Unlike Japan and Korea, most of the US is far from coastal and riverine shipping lanes.