Why do they have to be for-profit corporations in the first place? Wikipedia, the Internet Archive, and Archive of Our Own are nonprofits that are capable of handling large numbers of users creating and accessing a large volume of content.
The post doesn't say "There were three units". That's a quote you made up. The post says "the original CSS specification included 3 relative units". That is factually true. The original CSS spec lists only em, ex, and px as supported relative units, which are distinct from the five absolute units you accused them of leaving out.[0] Accusing someone of lying while lying about what they said is a bad look.
The author also has a footnote mentioning that there are now 36 relative length units in CSS, so it's actually a 12-fold increase in relative unit types.
Lastly, I don't think the author is even complaining about the number of units. He obviously has reservations about it, but the main point is to illustrate the growing complexity of CSS over time.
> These raw numbers tell an underlying story; websites now appear in a number of different shapes, sizes and dimensions, and CSS needs to account for that.
I wonder if there's been any observable correlation between JSON support in the major SQL databases and the decreased (or increased?) adoption of NoSQL document databases like MongoDB. It would be interesting to do some bulk analysis on GitHub commits to compare their use over time.
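Something like this could be a starting point (a sketch only; the `commits` table and its `message`/`committed_at` columns here are hypothetical stand-ins for whatever commit dataset you actually pull, not a real public schema):

```sql
-- Hypothetical sketch: count commits per year whose messages mention
-- Postgres's jsonb vs. MongoDB. Table and column names are assumptions.
SELECT
  EXTRACT(YEAR FROM committed_at)                    AS year,
  COUNT(*) FILTER (WHERE message ILIKE '%jsonb%')    AS jsonb_mentions,
  COUNT(*) FILTER (WHERE message ILIKE '%mongodb%')  AS mongodb_mentions
FROM commits
GROUP BY 1
ORDER BY 1;
```

Mentions in commit messages are only a rough proxy for adoption, but the trend lines would be interesting.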
Just one bit of personal experience, but for me it was a significant reason. In most cases you want objects to have highly structured data (e.g. for joins and queries), but in other cases you just want "a bunch of semi-structured stuff". Sure, DBs have always had blobs and text, but JSON is really what you want a lot of the time.
There's also a good article by Martin Fowler about how "NoSQL" was really "NoDBA" for a lot of folks, and I definitely saw that dynamic. JSON fields can also be a good middle ground here, where a DBA can ensure good "structural integrity" of your schema, but you don't need to go through the hassle of adding a new column and a schema update if you're just adding some "trivial" bit of data.
The canonical example for me is when you want to store/use additional payment processor details for a transaction, whether it's a direct CC charge, PayPal, Amazon Payments, etc. Relationally you only really care that the amount of the transaction was sent/received/accepted, but you may want to store the additional details without a series of specific tables per payment processor. If you need to see the extra details, that can still be done at runtime (a rough sketch follows below).
Another good example is generalized classified ads: different categories may have additional details, but you don't necessarily want to create a plethora of tables to store said additional details.
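For the payment case above, a rough sketch of what that can look like in Postgres (all table and column names here are invented for illustration):

```sql
-- Keep the relationally important fields as real columns and stash the
-- processor-specific details in a jsonb column (names are made up).
CREATE TABLE payment (
    id             bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    amount_cents   bigint NOT NULL,
    currency       text   NOT NULL,
    processor      text   NOT NULL,   -- 'card', 'paypal', 'amazon', ...
    status         text   NOT NULL,   -- 'sent', 'received', 'accepted', ...
    processor_meta jsonb  NOT NULL DEFAULT '{}'::jsonb
);

-- The extra details can still be pulled out at runtime when needed:
SELECT id, amount_cents, processor_meta->>'authorization_code' AS auth_code
FROM payment
WHERE processor = 'card';
```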
Honestly, I pretty much always want structure. The reasons I've opted for NoSQL are almost always that cloud providers offer it for practically free while managed SQL databases are wayyyy more expensive. The nice thing about JSON is that it's a lot more ergonomic, but not because of the lack of typing--I would absolutely use a database that let me write reasonable type constraints for JSON columns. (I realize that you're talking about why most people use NoSQL and I'm remarking on why I use NoSQL.)
Some other controversial thoughts: SQL itself is a really unergonomic query language, and the lack of any decent Rust-like enum typing is really unfortunate. I know lots of people think that databases aren't for typing, but (1) clearly SQL aspires toward that and then gives up halfway, and (2) that's a shame because databases have a lot of potential in that capacity. Also, while you can sort of hack together something like sum types / Rust enums, it's a lot of work to do it reasonably well and even then there are gaps.
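For what it's worth, you can approximate a tagged union in plain Postgres with a discriminator column plus CHECK constraints on a jsonb payload; a minimal sketch (names invented), which also shows how verbose it gets:

```sql
-- Poor man's sum type: a 'kind' tag plus CHECK constraints forcing the
-- payload to carry the right keys for each variant.
CREATE TABLE event (
    id      bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    kind    text  NOT NULL CHECK (kind IN ('click', 'purchase')),
    payload jsonb NOT NULL,
    CHECK (
        (kind = 'click'    AND payload ? 'url')
     OR (kind = 'purchase' AND payload ? 'sku' AND payload ? 'amount_cents')
    )
);
```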
Not sure I understand what you mean; or rather, all of this appears to already be available in Postgres.
pg_jsonschema is a postgres extension that implements schema validation for JSON columns. I'm not particularly familiar with Rust, so not sure exactly what you mean by "Rust-like enum typing", but postgres has enums, composite types, array types, and custom scalars, so not sure what's missing.
By "Rust-like enums", I mean "sum types" or "algebraic data types". In general, it's a way of saying that a piece of data can have one of several different types/shapes (whereas a Postgres enum is basically just a label backed by an int). But yeah, with jsonschema you can probably express sum types, but jsonschema is disappointing for a bunch of reasons and needing an extension is also not great.
Every ecosystem I've ever worked in has had good tooling for managing DB migrations (and in some cases I've been the one to add it). It's trivial to write a migration to ALTER TABLE bar ADD COLUMN foo and I think keeping structure explicit is generally quite beneficial for data safety even if you're not doing fancy things. DBAs are great but most companies simply don't need one - you can just get by with some pretty rudimentary SQL and skill up as needed.
Assuming you've got good integration test coverage of the database, schema alterations end up taking a minuscule amount of time, and if you lack test coverage then please reconsider and add more tests.
Completely disagree. The issue is not really about how hard or easy it is to run migrations (every project I've worked on has also used migration files); it's that, depending on the data, it can just be a total waste of time.
Sibling comment, "is when you want to store/use additional payment processor details for a transaction", is a great example IMO. I've dealt with card processing systems where the card transaction data can be reams of JSON. Now, to be clear, there are a lot of subfields here that are important that I do pull out as columns, but a lot of them are just extra custom metadata specific to the card network. When I'm syncing data from another API, it's awesome that I can just dump the whole JSON blob in a single field, and then pull out the columns that I need. Even more importantly, by sticking the API object blob in a single field, unchanged, it guarantees that I have the full set of data from the API. If I only had individual columns, I'd be losing that audit trail of the API results, and if, for example, the processor added some fields later, I wouldn't be able to store them without updating my DB, too.
Before JSON columns were really standard, I saw lots of cases where people would pull down external APIs into something like Mongo, then sync that to a relational DB. Tons of overhead for a worse solution, when instead I can just keep the source JSON blob right next to my structured data in Postgres.
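The pattern described above, sketched (hypothetical names; generated columns need Postgres 12+):

```sql
-- Store the untouched API payload, and promote only the fields you query
-- on into real (generated) columns. Names are made up for illustration.
CREATE TABLE card_transaction (
    id          bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    raw_payload jsonb  NOT NULL,   -- full, unchanged API response
    network     text   GENERATED ALWAYS AS (raw_payload->>'network') STORED,
    status      text   GENERATED ALWAYS AS (raw_payload->>'status')  STORED
);

-- Anything not promoted to a column is still there for the audit trail:
SELECT raw_payload FROM card_transaction WHERE network = 'visa';
```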
I think the point where you really need/want a DBA is when you either need redundancy/scale or have to remain up. Most developers aren't going to become that familiar with the details of maintenance and scale for any number of different database platforms. I think MS-SQL does better than most at enabling the developer+DBA role, but even then there's a lot of relatively specialized knowledge. More so with the likes of Oracle or DB2.
If you want to be high-availability then you need sharding or something like it from day 1. There's still no first-class way of running PostgreSQL that doesn't give you at least a noticeable write outage from a single-machine failure.
>> There is a reason almost every new database aims to be distributed from the beginning.
That's partly because you can't compete with the existing RDBMSs if you're single node: they are good enough already. Nobody will buy your database if you don't introduce something more novel than PostgreSQL, whether that novelty is worth it or not.
The experience for at least some of us is that failover is not robust. At all. And that < 1m is the best-case scenario, and it still requires a person to be monitoring the process.
And the fact that the entire industry has moved to a distributed model despite its complexity gives you a hint as to which way the wind has been blowing for the last decade.
You don't need to be that arrogant. The number-one reason why there are no new single-node (No)SQL databases is that the existing databases are great and you can't monetize them.
Failover is automatic for PG when using e.g. Patroni. Of course you lose active transactions and that might be a showstopper, but monitoring failover? I am curious when you'll have to do that.
Agreed, when you see the index size in Mongo vs PostgreSQL, you will quickly understand that a single PostgreSQL instance can outscale a huge Mongo cluster.
You would then have to tell apart the decreased adoption of NoSQL due to JSON support in major SQL databases from the decreased adoption of NoSQL due to the hype being over...
Tangentially, I think this explains the conspiracy theory that ad companies are spying on everyone's phones and serving ads based on what we talk about in real life.
Think about all the stuff ChatGPT and GPT-4 can do with even minimal prompting. Even when they hallucinate, the text is still ostensibly coherent and natural sounding. Now imagine a similarly powerful model, but its input is a ton of metadata about your behavior and its output is ads.
Now consider that adtech has had substantially more funding for substantially longer than research into LLMs, so ad serving models are probably way more powerful and optimized than even GPT-4.
Another thing is: people's individual behavior is not as unique as we'd like to think. Taken as a whole, each of us is unique, but in individual, surprisingly complex aspects of our lives we are hardly ever alone.
And yet there's still zero remotely plausible evidence for it.
Nobody has caught continuous audio feeds being transmitted from smart devices to the cloud (which would be noticeable due to increased network traffic and bandwidth usage) nor identified any secret speech recognition code on the client (which would be noticeable due to severely shortened battery life). Nobody who's worked in adtech has come forward to blow the whistle or admit that they shipped this feature for a big tech company.
I get why it's an appealing conspiracy from a gut instinct perspective, but it really makes no sense. When you're observing the behavior of billions of people and using machine learning algorithms trained to get the best results possible, some uncanny shit will naturally result. Look at how effective LLMs like ChatGPT have gotten without an obvious route to profitability, then think about how much more money has been invested into ad targeting algorithms just in the last couple decades alone.
If I were going to do it... I'd definitely hook in through non-battery-powered IoT devices, something like a smart TV or various other home security stuff. The TV seems ideal: you have non-trivial compute there, so you could do some local speech-to-text and keyword matching, then just periodically phone home with tiny bandwidth usage. That's enough to associate IPs with interests, and then that dataset doesn't even have to look that creepy at the surface level (you wouldn't tell many people exactly where you got it) when you sell it on to ad networks...
Sneaky apps would be another source. Obviously the phone OS/computer vendors wouldn't want this, but I imagine there's some cat and mouse. It's just a new version of browser toolbars, not something hard to imagine some unscrupulous third-party data collection company building.
I definitely wouldn't expect Facebook or Google to be doing it directly.
TVs already do a lot of such tracking, and they are open about it. Samsung famously has their ACR feature[1] which works in a manner similar to what you suggest - it basically phones home periodically with screenshots of what you're watching.
Yeah, I plugged a NUC into my TV, installed Pop OS on it, and got a Pepper Jobs USB gyro-remote to control it, and it instantly upgraded my living room.
I previously used my PS4 for watching everything, but having a full desktop OS is just way better: I can browse the web on my TV, control it remotely with my laptop's keyboard and mouse using Barrier, open multiple live feeds in picture-in-picture mode, play music in the background of whatever I'm watching, etc. Also, when I'm watching sports and they go to a side-by-side commercial break, I use the zoom accessibility feature to zoom in on the tiny live feed so I can watch that full screen without seeing any ads haha.
I'd be curious to know more about how you configured KDE. I run a pretty vanilla Pop OS setup right now, which is more usable than I expected with my USB remote, but I've been meaning to explore how to set up something more "smart TV"-like that's a bit easier to navigate. My initial thought was to write a browser extension or a custom app with an embedded browser, but that feels like a bit too much work for something that's supposed to be a leisure/entertainment setup.
Basically I removed all of the default panels and set up desktop widgets. The launcher widget is the most used; within it I created custom launchers that open a web browser in full-screen mode pointed at a particular site like Netflix, YouTube, Prime Video etc., and downloaded icons for them (a minimal example launcher is sketched below). Other widgets I use are a task tray, weather widget, clock, external HD manager and power. I also found that I had to add the volume widget to get the volume controls on my USB remote to work, for some reason.
I also configured font sizes, cursor size and mouse acceleration for usability. It works really well.
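For anyone wanting to replicate this, a custom launcher like the ones described can be as simple as a desktop entry that starts a browser in kiosk mode; a minimal sketch (the browser, the flag and the icon name are my assumptions, not necessarily what the parent used):

```ini
[Desktop Entry]
Type=Application
Name=Netflix
Comment=Open Netflix full screen
Exec=firefox --kiosk https://www.netflix.com
Icon=netflix
Categories=AudioVideo;
```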
The quoted section says "both the former employee and the tipster", so "the tipster" here clearly refers to the non-employee who would presumably not have signed any NDA with Apple.
> The Elo rating system is a method for calculating the relative skill levels of players in zero-sum games such as chess. It is named after its creator Arpad Elo, a Hungarian-American physics professor.
https://en.m.wikipedia.org/wiki/Elo_rating_system
There are other rating systems that are similar, but people commonly call any similar rating system "Elo". In this case, Lichess uses https://en.m.wikipedia.org/wiki/Glicko_rating_system, so the GP is arguing against calling it "Elo". I think it's kind of a pedantic point given that more people will understand "Elo" than "glicko-2", but I suppose "rating" would be clearer than either term.
https://en.m.wikipedia.org/wiki/Chess_rating_system