> Trust the science.

Science is a process, not a result. Retractions like this promote the integrity of scientific research and evidence-based medicine.
> When Dr. Oz in 2015 spoke out against glyphosate...
Oz also promoted MLM dietary supplements, antimalarial drugs as COVID treatments, gay conversion "therapy", colloidal silver, and vaccine skepticism. He has zero credibility and cannot be trusted.
> Science is a process, not a result. Retractions like this promote the integrity of scientific research and evidence-based medicine.
He was obviously poking fun at people who say "trust the science" when what they really mean is "trust these scientists" or, even better, "trust this one study".
Undoubtedly "trust the science" is little more than an appeal to authority when used in a casual debate, not some appeal to skepticism, peer review and testability.
“Trust the science” … always when talking to a flat-earther or similar huckster.
There definitely needs to be more nuance to the phrase in the general case, e.g. “trust established science.” Let’s be honest though, it’s a lack of nuance in some world views that need science as an authority the most.
> Let’s be honest though, it’s a lack of nuance in some world views that need science as an authority the most.
I agree but if they're flat earthers they've already rejected established science, so what's that appeal to authority going to do?
This is why "trust the science" is so memeable, it's a lazy appeal to authority the other party has already told you they don't trust and yet people are shocked when this argument doesn't work.
“Trust x,y” will also basically never mean “trust, completely, always, equally, and blindly”.
Trust the science was a shorthand for “you, or even I, may not understand this thing in perfect detail, but the people working on it do, and they GENERALLY aren’t making catastrophic mistakes that you can detect as an amateur. And when these people collectively stand behind a conclusion the odds of it being completely wrong are exceptionally low. We don’t have a more accurate alternative regardless. Please stop JAQing off about it”
But writing all of that over and over again is annoying. And a lot of “”””critical thinkers”””” can’t be bothered to read it. So the shorthand emerges. Sometimes used incorrectly? Definitely.
Pandas is generally awful unless you're just living in a notebook (and even then it's probably my least favorite implementation of the 'data frame' concept).
Since Pandas lacks Polars' concept of an Expression, it's actually quite challenging to programmatically interact with non-trivial Pandas queries. In Polars the query logic can be entirely independent of the data frame while still referencing specific columns of the data frame. This makes Polars data frames work much more naturally with typical programming abstractions.
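To make that concrete, here's a minimal sketch of the Polars pattern (column names invented purely for illustration):

```python
import polars as pl

# Query logic lives in plain expressions, fully independent of any
# particular DataFrame, so it can be named, passed around, and composed
# like ordinary Python values.
is_adult = pl.col("age") >= 18
full_name = (pl.col("first") + " " + pl.col("last")).alias("name")

df = pl.DataFrame({
    "first": ["Ada", "Alan"],
    "last": ["Lovelace", "Turing"],
    "age": [36, 41],
})

# The expressions are only bound to concrete data at query time.
print(df.filter(is_adult).select(full_name, pl.col("age")))
```

Because `is_adult` and `full_name` are just values, you can build them in helper functions, store them in dicts, or unit test them without ever touching a data frame.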
Pandas multi-index is a bad idea in nearly all contexts other than its original use case: financial time series (and I'll admit, if you're working with purely financial time series, then Pandas feels much better). Sufficiently large Pandas code bases are littered with seemingly arbitrary uses of 'reset_index', there are many times where multi-index will create bugs, and, most importantly, I've never seen any non-financial scenario where anyone has ever used multi-index to their advantage.
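A toy example of the reset_index ritual, with made-up data:

```python
import pandas as pd

df = pd.DataFrame({
    "region": ["east", "east", "west"],
    "product": ["a", "b", "a"],
    "sales": [1, 2, 3],
})

# Grouping on two keys silently produces a MultiIndex...
totals = df.groupby(["region", "product"]).sum()

# ...so downstream code expecting ordinary columns needs the
# seemingly arbitrary reset_index() call to flatten it again.
totals = totals.reset_index()
```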
Finally, Pandas is slow, which is honestly the lowest priority for me personally, but using Polars is so refreshing.
What other data frames have you used? Having used R's native data frames extensively (the way they make use of indexing is so much nicer) in addition to Polars, I find both drastically preferable to Pandas. My experience is that most people use Pandas because it has been the only data frame implementation in Python. But personally I'd rather just not use data frames at all if I'm forced to use Pandas. Could you expand on what you like about Pandas over the other data frame models you've worked with?
I initially considered using Pandas to work with community collections of Elite: Dangerous game data, specifically those published first by EDDB (RIP) and now by Spansh. However, I quickly hit the maximum process memory limits because my naïve attempts at manipulating even the smallest of those collections resulted in Pandas loading GB-scale JSON data files into RAM. I'm intrigued by Polars' stated support for data streaming. More professionally, I support the work of bioinformaticians, statisticians, and data scientists, so I like to stay informed.
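For what it's worth, here's roughly what the streaming path looks like in Polars, assuming the dump is (or has been converted to) newline-delimited JSON; the file name and columns below are hypothetical:

```python
import polars as pl

# Lazily scan the file instead of loading it eagerly.
lazy = pl.scan_ndjson("systems.jsonl")

# Nothing is materialized yet: the filter and projection are pushed
# down, and the streaming engine processes the file in batches rather
# than loading the whole multi-GB dataset into RAM.
populated = (
    lazy.filter(pl.col("population") > 0)
        .select("name", "population")
        .collect(streaming=True)
)
```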
I like how in Pandas (and in R), I can quickly load data sets up in a manner that lets me do relational queries using familiar syntax. For my Elite: Dangerous project, because I couldn't get Pandas to work for me (which the reader should chalk up to my ignorance and not any deficiency of Pandas itself), I ended up using the SQLAlchemy ORM with Marshmallow to load the data into SQLite or PostgreSQL. Looking back at the work, I probably ought to have thrown it into a JSON-aware data warehouse somehow, which I think is how the guy behind Spansh does it, but I'm not a big data guy (yet) and have a lot to learn about what's possible.
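For the curious, the loading side looked roughly like this (a simplified sketch with a hypothetical table and columns, and with the Marshmallow validation step omitted):

```python
import json
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, Session

Base = declarative_base()

# Hypothetical table for one of the game-data collections.
class System(Base):
    __tablename__ = "systems"
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine("sqlite:///elite.db")
Base.metadata.create_all(engine)

# Load the JSON dump and persist each record via the ORM.
with open("systems.json") as f, Session(engine) as session:
    for record in json.load(f):
        session.add(System(id=record["id"], name=record["name"]))
    session.commit()
```

Once it's in SQLite or PostgreSQL, the relational queries come for free, which was the whole point.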
R and Matlab workflows have been fairly stable for the past decade. Why is the Python ecosystem so... unstable? It puts me off investing any time in it.
The R ecosystem has had a similar evolution with the tidyverse, it just happened a little further back. As for Matlab, I initially learned statistical programming with it a long time ago, but I’m not sure I’ve ever seen it in the wild. I don’t know what’s going on there.
I’m actually quite partial to R myself, and I used to use it extensively back when quick analysis was more valuable to my career. Things have probably progressed, but I dropped it in favor of python because python can integrate into production systems whereas R was (and maybe still is) geared towards writing reports. One of the best things to happen recently in data science is the plotnine library, bringing the grammar of graphics to python imho.
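If you haven't seen it, a plotnine plot reads almost exactly like its ggplot2 counterpart (this uses a sample dataset the library ships with):

```python
from plotnine import ggplot, aes, geom_point
from plotnine.data import mtcars  # sample dataset bundled with plotnine

# Grammar of graphics: map data to aesthetics, then add layers with +.
plot = (
    ggplot(mtcars, aes(x="wt", y="mpg", color="factor(cyl)"))
    + geom_point()
)
plot.save("mpg_by_weight.png")
```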
The fact is that today, if you want career opportunities as a data scientist, you need to be fluent in python.
Mostly what's going on with Matlab in the wild is that it costs at least $10k a seat as soon as you are no longer at an academic institution.
Yes, there is Octave, but often the toolboxes aren't available or compatible, so you're rewriting everything anyway. And when you start rewriting things for Octave you learn/remember what trash Matlab actually is as a language, and what a pain it is to do anything that isn't what Mathworks expects.
To be fair: Octave has extended Matlab's syntax with amazing improvements (many inspired by numpy and R). It really makes me angry that Mathworks hasn't stolen Octave's innovations. Whenever I have to touch actual Matlab, I hate every minute of not being able to broadcast, and of having to manually create temp variables because you can't chain indexing. So to be clear, Octave is somewhat pleasant, and for pure numerical syntax it's superior to numpy.
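For anyone who hasn't seen them, here's roughly what those two conveniences look like on the numpy side (arrays made up purely for illustration):

```python
import numpy as np

a = np.arange(3).reshape(3, 1)  # column vector, shape (3, 1)
b = np.arange(4)                # row vector, shape (4,)

# Broadcasting: shapes (3, 1) and (4,) combine into a (3, 4) result,
# with no repmat() calls or explicit loops.
print(a + b)

# Chained indexing on the result of an expression, which stock Matlab
# syntax forbids without first assigning to a temp variable.
print((a + b)[1:, ::2])
```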
But the siren call of Python is significant. Python is not the perfect language (for anything really) but it is a better-than-good language for almost everything and it's old enough and used by so many people that someone has usually scratched what's itching already. Matlab's toolboxes can't compete with that.
I love R, but how can you make that claim when R uses three distinct object-oriented systems all at the same time? R might seem stable only because it carries along with it 50 years of history of programming languages (part of its charm; where else can you see the generic-function approach to OOP in a language that's still evolving?)
Finally, as someone who wrote a lot of R pre-tidyverse, I've seen the entire ecosystem radically change over my career.
The pandas workflows have also been stable for the last decade. That there is a new kid on the block (polars) does not make the existing stuff any less stable. And one can just continue writing pandas for the next decade too.
I had the same problem. Make sure you're running the latest version of uBlock Origin. By that I mean you should explicitly check for updates and install the latest version. That fixed it for me.
Stories like these make me want to give modernizing vtrek another try. I'd originally played it on my cousin's 3B2 in the late 1980s after being introduced to the CP/M port of classic trek, which I played on my family's IMS 5000SX. (Oh, how I wished at the time that we'd gotten an Apple II! But that's a story for another time.) I have distinct memories of playing vtrek on my dad's VT220, dialed into my cousin's BBS over Tymenet at 1200 baud.

Whereas classic trek, having been written in the era of the ASR-33, was line-oriented like a text adventure, vtrek was a full-screen interactive terminal app, like vi. You'd issue movement commands using the 3x3 block of keys on the left side of the keyboard (Q, W, E, etc.). Other keys controlled the ship's scanners, shields, weapons, and warp drive. Those inputs drove the game's event loop and updated the display accordingly.

It was great fun, especially for a video-game-starved kid like me. I was forever pestering friends and cousins to play on their Ataris or Nintendos or Apples or Tandys. I didn't get my hands on a proper gaming computer until the early 90s, when we replaced the 5000SX with a 386.
So I'm sure you can imagine my excitement when I stumbled across an archive containing XENIX ports of a bunch of Unix games, including vtrek. (Thank you, Vince!) I'm not even sure how I managed to find that. Nowadays, Google only returns two search results for vtrek, the XENIX game port archive and a munged version of the original release to net.sources.games, and that's only if you know to include the "duncel" insult the game uses in the search terms. Google Groups searches of net.sources.games will lead you to a series of posts from the fall of 1985, but how would anyone other than an old fuddy-duddy like me even know to look there? (Also, Google Groups doesn't have the original Usenet posts, so the formatting is all screwed up. It's a vexing problem for the modern programmer-archeologist.)

Now imagine, if you will, an eager and not inexperienced nerd trying to compile a System V-era game on Linux and FreeBSD circa 2005. This Star Trek quote seems appropriate:
PAIN!
I mean, even the Real Hackers back in 1985 had problems getting it to compile, so I don't know why I thought my experience would be anything other than worse. The termios code in glibc just didn't work. At all. Neither did the sgtty code, which had been broken since at least 4.4BSD. After a good long while beating my head against vtrek, even going so far as to try building it on OpenStep 4.2 (from 1997) and FreeBSD 2.0 (from 1994), I gave up. Maybe it's time to give it another go for nostalgia's sake.
Author of the article here. I encourage you to do so and share the results!
I started this journey in 2006, doing the same as you, crawling old Usenet archives in the newsgroups interface that groups.google.com provided. Finding the code was troublesome because I lost track of it while moving from floppy disks to different storage systems, until it was finally preserved on GitHub.
I find it fascinating that your father had a VT220. Did he have it at home or in his office? I thought that kind of terminal was more of a lab thing.
Regarding that VT220, I misremembered. My dad's workplace loaned him a 1200-baud modem and a C. Itoh terminal, maybe a CIT-101 because [this picture](https://terminals-wiki.org/wiki/index.php/File:C._Itoh_CIT-1...) matches my memory. He was a software engineer and occasionally worked from home.
We also had a Wyse 50 terminal. It's how we used the IMS 5000SX, which had both a 10-MB hard disk drive (I think it was called a Winchester) and a 5.25" floppy disk drive. I have a huge stash of 5.25-inch floppy diskettes from back then, including copies of TurboDOS (for the 5000SX) and Apple II games and little BASIC programs us kids wrote, but I've all but given up on recovering anything. The IMS 5000SX and the Wyse 50 terminal are long dead and buried. I've made some half-hearted attempts to boot TurboDOS up under simh, but it isn't the same. If they aren't all corrupt, I suspect my Apple diskettes have a virus of some kind on them, too.
Around 1991-1992, I helped a dentist install an electronic medical record system using a multi-user DOS variant called PC-MOS. We connected Link MC5 terminals via serial to a 386 running SoftDent, if I'm remembering it correctly. I got one of the MC5s when that system was decommissioned. Unfortunately, I lost it in a house fire. Then, a few years later, I got another of the MC5s when the dentist was doing some housecleaning. I still have that one, and I'd use it more often if there weren't something wonky with its serial interface's flow control that causes corrupted I/O.
Ah, that makes more sense. I see you inherited those engineering chops from your father, and the story with the dentist made me chuckle; it sounds like the first freelance attempts people make in their 20s :D
I started getting old computers back in the day, even a bulky IBM AS/400 featuring a PowerPC RISC architecture. Although it worked, and I learned how to log in and all, I donated it to a friend who has a garage full of all kinds of machines and who can probably preserve it better than me.
Regarding the Apple diskettes with the virus: they're probably worth preserving nowadays too (for some archaeological sleuthing) :D Thanks for sharing this story!
Firefox profiles suck. Their UX is so bad. Containers are better but still have their issues. I use Containerise plus Cookie AutoDelete plus Temporary Containers to give me what is effectively per-tab private browsing. The major downside is that I have to copy containers.json (which enumerates all of the dedicated containers I have defined, e.g., for Facebook), my Containerise rules (which automatically puts certain web sites into specific containers), and my Cookie AutoDelete config (which says which cookies to delete and when) among browsers manually. I wish more things supported Firefox's sync feature. I ended up adding them to my dotfiles, so it isn't too painful, but it definitely isn't grandparent friendly.
That's what these changes aim to fix. You're getting a Chromium-like profile switcher/manager.
> Containers are better
Containers are very good... for container stuff. Profiles allow us to have different bookmarks, settings, extensions, themes, etc. Different tools for different jobs. I use both!
If you look at about:profiles and compare it to the new UI or to what Chromium has offered for years, then I think you'll understand that the new UI is much better for the average user. There are also improvements for operating systems like macOS, where profiles always worked, but switching between them wasn't exactly a nice experience.
You can still use the old profile manager. At the pace Mozilla moves, it will be there for years.
The old UI != about:profiles. For a long time I didn't even know about:profiles existed, while I was already happily using the profiles UI for years. I mean the profiles UI that pops up at startup.
> If you look at about:profiles and compare it to the new UI or to what Chromium has offered for years, then I think you'll understand that the new UI is much better for the average user.
No, I think this looks way worse than the native UI they've had for decades, and I hope they don't remove the old UI, so that startup doesn't become slower and more annoying.
The lock file shouldn't be in the repository. That forces the developers into maintenance that's more properly the responsibility of the CI/CD pipeline. Instead, the lock file should be published with the other build artifacts—the sdist and wheel(s) in Python's case. And it should be optional so that people who know what they're doing can risk breaking things by installing newer versions of locked dependencies should the need arise.
You can reproduce the release just fine using the lock file published alongside the release. Checking it in creates unnecessary work for devs, who should only be specifying version constraints when absolutely necessary.
> Checking it in creates unnecessary work for devs, who should only be specifying version constraints when absolutely necessary.
The unnecessary work of a `git commit`?
Having the file be versioned creates no requirement to update its contents any more frequently than before, and it streamlines "publishing alongside the release". The presence of the lockfile in the repo doesn't in any way compel devs to use the lockfile.
> While internal modules and libraries should be kept as generic as possible, external-facing components, on the other hand, are a good place to put business-specific domain logic. External-facing components here refer not only to views but also to any kind of externally-triggered handlers including external API endpoints (e.g. HTTP/REST API handlers).
That goes against every bit of advice and training I've ever gotten, not to mention my experience designing, testing, and implementing APIs. Business logic belongs in the data model because of course the rules for doing things go with the things they operate on. API endpoints should limit themselves to access control, serialization, and validation/deserialization. Business logic in the endpoint handler—or worse, in the user interface—mixes up concerns in ways that are difficult to validate and maintain.
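As a sketch of the separation I mean (all names here are hypothetical):

```python
from dataclasses import dataclass

# The business rule lives with the data it governs, so it can be
# unit-tested without any HTTP machinery.
@dataclass
class Order:
    total: float
    status: str = "open"

    def cancel(self) -> None:
        # Business rule: only open orders may be cancelled.
        if self.status != "open":
            raise ValueError("only open orders can be cancelled")
        self.status = "cancelled"

# The endpoint handler sticks to access control, (de)serialization,
# and validation, then delegates to the model.
def cancel_order(request, repo, current_user):
    if not current_user.can_modify_orders:   # access control
        raise PermissionError
    order = repo.get(request["order_id"])    # lookup/deserialization
    order.cancel()                           # business logic stays in the model
    repo.save(order)
    return {"status": order.status}          # serialization
```

Swap out the handler for a CLI or a message queue consumer and the rule still holds, which is exactly why it belongs on the model.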
I must be Doing It Wrong(TM), because my experience has been pretty negative overall. Is there like a FAQ or a HOWTO or hell even a MAKE.MONEY.FAST floating around that might clue me in?