The problem with using this as an analogy is that the commas attach to the interjection "well," not to the words "rich" or "richer." Remove the "well" and you should remove all the punctuation around it.
"I'm getting fed up of making the rich... richer" might be grammatically ok in informal text, but a comma is definitely wrong.
Of course it is. Doing a wrong thing a lot doesn't make it less wrong. Over time, maybe its "wrongness" will fade, the way language naturally changes over time.
I think a lot of English speakers use commas to mark a pause in speech. I don't think a comma was needed. I think the author was slightly mixing up the comma rules for relative clauses and appositives.
Source: native speaker, North America
It's not grammatically correct, but it reflects a pausing speaking style: it avoids the confusion of repeating the same word twice in a row, and it emphasizes the repetition (a thing becoming more of a thing after already being that thing). (Others are telling you whether it's correct per textbook rules, or that it's purely stylistic, but not why it's used in practice and what it conveys.)
There are “close” and “open” styles of comma usage. “Open” has been ascendant for the last few decades (it began to rise in the early 20th century, but wasn’t firmly dominant until later). It’s less precise and expressive, but “cleaner”.
Because native English users don't follow a style book. As much as English teachers in school want pupils to be prescriptivists in academic contexts, native English communication (writing/speaking) is better understood using a descriptivist lens.
There aren't fixed rules, not even to the degree there are for other grammatical questions in English. Much of comma usage comes down to preference.
I think part of why we've shifted so strongly against their use is that if you leave it up to taste, as was previously common, most people make poor choices.
It's funny because even as we've moved away from prescriptivism, the "rules" around comma usage have tightened and people have gotten quicker to call a given previously-common usage incorrect.
There are rules around comma usage that are up to taste (e.g., the Oxford comma), and people get very dogmatic about these rules for bad reasons. There are also times when usage of a comma is incorrect according to all the known rulesets for English grammar. This usage is in the latter category.
The notion that a comma marks any sort of pause fell out of favor in written English in the 1800s and thankfully hasn't been back (see the Second Amendment for why "a comma is a generic pause" is a bad idea). You would have to be the loosest form of descriptivist to say that this usage is close to correct, and I would question whether you would accept any grammar rule at all at that point. Many people write run-on sentences, and many don't capitalize the start of sentences in very casual text, even though the rules against these are widely (nearly universally) accepted.
I like the aesthetic of its usage in this case and find it makes the sentence read more easily. It eliminates even temporary ambiguity about the part of speech of the final two words. It stands in for a clarifying word like "become".
To the extent it's "incorrect", it's in that it generated this discussion at all.
I completely disagree with you and find the comma misplaced in a jarring way. It interrupts the flow of thought for me in a negative way: much more than a brief pause, it places a marker that the syntax of "richer" isn't fully bound to the previous words. There's also no ambiguity in the last two words without the comma.
I think if the author wanted a "pause the sound while keeping the syntax flowing" mark, the ellipsis (...) would have done the trick much better. In my opinion, though, this sentence did not merit any pause between "rich" and "richer" since there's nothing surprising about that word.
> Native English speakers don't know how to use commas, so they throw them anywhere they want to have a pause.
Like with any language, there are wildly varying levels of literacy. Many native English speakers know how to use commas, and many others don't. I think that shades from using them grammatically (most literate), to using them ungrammatically as a generic pause, to not using punctuation at all.
They're using an AMD APU: it has unified RAM and VRAM, much like Apple Silicon, which is why it needs socketed RAM.
Unified RAM/VRAM is very nice for running LLMs locally, since you can get wayyyy more RAM than you can typically get as VRAM on discrete GPUs. Getting 128GB of VRAM from discrete GPUs means 4x 5090s, aka $8k on GPU spend alone. This is $2k, and it includes the CPU!
Of course, it'll be somewhat slower than a discrete GPU setup, but at a quarter of the cost, that's a reasonable tradeoff for most people I'd think. It should run Llama 3.1 70b (or various finetunes/LoRAs) quite easily, even with reasonably long context.
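Some rough numbers, since the "70b fits easily" claim is easy to sanity-check. A minimal back-of-the-envelope sketch in Python; the architecture constants are the published Llama 3.1 70B ones (80 layers, 8 KV heads, head dim 128) and the bits-per-weight figures for the quants are approximate, so treat all of them as assumptions:

    # Rough memory estimate for a 70B model on a 128GB unified-memory box.
    PARAMS = 70e9

    def weights_gib(bits_per_param):
        # parameter count * bits -> bytes -> GiB
        return PARAMS * bits_per_param / 8 / 2**30

    def kv_cache_gib(context_len, layers=80, kv_heads=8, head_dim=128,
                     bytes_per_val=2):
        # K and V, per layer, per KV head (GQA), fp16 values
        per_token = 2 * layers * kv_heads * head_dim * bytes_per_val
        return context_len * per_token / 2**30

    for name, bits in [("fp16", 16), ("q8", 8), ("q4", 4.5)]:
        w, kv = weights_gib(bits), kv_cache_gib(32_768)
        print(f"{name}: {w:6.1f} GiB weights + {kv:.1f} GiB KV (32k ctx)"
              f" = {w + kv:6.1f} GiB")

That works out to roughly 130 GiB for fp16 weights (which would not fit), ~65 GiB at q8, and ~37 GiB at q4, with a 32k-token KV cache adding about 10 GiB. So quantized 70b with long context fits comfortably; full precision does not.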
Correct me if I am wrong, but I believe this is an AMD SoC, so a combo of a CPU+GPU (and TPU/AI engine, whatever you wanna call it) on the same chip. And they do share the RAM.
That sounds very optimistic. Do you have any data to back that up?
Is it hot or cold storage?
I understand that the breakdown of the magnetic field is indeed slow, but the HDD as a whole is not as sturdy, I think: you need to spin the platters, control the heads, and so on.
That's true. Perhaps I should say that data on hard drives will remain recoverable, not available, for a century.
Data on CDs/DVDs should remain recoverable for millennia (properly stored, even readable). Another advantage: CDs/DVDs can be duplicated with only analog tools maybe 10 times over to further extend that (obviously not writable CDs/DVDs). And if we were to glue CDs top-to-top, that could be an easy hack to 10x that, which would even work for (re)writable CDs/DVDs.
(Re)writable CDs/DVDs should remain readable/recoverable for centuries too. Probably not millennia.
TLDR: SSDs keep data for "minimum 1 year" when used as archival storage (of course specific models have been caught losing data in as little as 3 months). Keeping the SSD powered on regularly should increase that, but only to 2-5 years if you want to be on the safe side.
> Data on CDs/DVDs should remain recoverable for millennia (properly stored, even readable).
If by "properly stored" you mean in a cold, dark vacuum, then maybe. Otherwise this is not true in my experience. I've had CD's in temperature controlled storage for 25 years and about 1 on 10 are unreadable. It's my understanding that they oxidize. In theory gold CD'S are immune to that.
I really hope you mean you unplug the power cable from the TV, because none of the modern TVs actually turn off when you ask them to. The TV bootup takes too long for it to be "off" off.
No, but I most often use the hardwired button on the side of the TV to shut it down, rather than just putting it into standby with the remote. It takes about 15 seconds to turn on, nothing to worry about.
Most binaries you encounter today fit in this cache. This is what makes AMD's X3D chips so fast in games. And you're off by 8x on the size: it's 104MB.
AMD writes MB, and they list the faster caches' sizes in KByte, where indeed I think it should be MiB and KiB respectively. And in the NV space (HDDs, SSDs) a GB means 10^9 bytes.
Reminds me of DDR vendors telling us that memory runs at 8GHz (8GT/s, 4GHz in reality).
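The gaps are easy to quantify. A quick sketch of the arithmetic, using only the figures already mentioned in this thread:

    # Decimal vs binary units, and the DDR "GHz" marketing math.
    MB, MiB = 10**6, 2**20

    # 104 "MB" of cache, read back in binary units:
    print(104 * MB / MiB)        # ~99.2 MiB if AMD means 10^6 bytes
    print(104 * MiB / MB)        # ~109.1 decimal MB if they mean MiB

    # A "1 TB" SSD (10^12 bytes) as many OSes report it:
    print(10**12 / 2**40)        # ~0.909 TiB

    # "8 GHz" DDR5 is 8 GT/s; double data rate means a 4 GHz clock.
    transfers = 8e9
    print(transfers / 2 / 1e9)   # 4.0 GHz actual clock
    # Per-DIMM bandwidth: 8 GT/s * 8 bytes (64-bit bus) = 64 GB/s
    print(transfers * 8 / 1e9)   # 64.0 GB/s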
TB5 is 15GB/s, so gen 5 equivalent. I'm not saying there are TB5 enclosures in the wild yet, but it's a matter of time. Also, if you're bottlenecked by buffered, linear reads and writes so much that there is a difference between 3GB/s and 7GB/s, then I envy you. Most of what I choke my desktops and servers with is random IO that wouldn't saturate gen 2 :)
Thunderbolt 5 has very high throughput, but the latency of going through the TB port is still higher than going through PCIe. In a single large transfer I'd expect TB5 to win; in a millions-of-tiny-transfers scenario, I'm not so sure.
Thunderbolt is PCIe though, just over an external interface. That's why eGPUs worked so well. I can't see a situation where the latency of Thunderbolt has a significant impact on disk usage when eGPUs, where latency is so much more noticeable, worked acceptably.
Thunderbolt provides a tunneling mechanism for PCIe, DisplayPort, USB etc. It's also a mesh network where packets are source-static routed from node to node in the network - so the source sets up the route-to-the-destination and the data packet is transmitted from controller-node to controller-node until it gets to the destination, then it's unpacked and presented as data to the system.
You could see some of this on the venerable "trashcan" Mac Pro, where one of the TB controllers wasn't directly connected to the port, but came through another TB node. The latencies on the ports connected to this TB controller were slightly higher due to the extra transit-time.
Latencies over PCIe are measured in tens of nanoseconds (say 70-100) depending on chipset and how much you pay. Latencies over TB can be several hundreds of nanoseconds. TB presents as a PCI interface, but that's an adaptor-type design pattern, it's not fundamentally PCIe underneath.
Bandwidth vs latency is like a pickup vs a Lambo, I guess. And what TB limits is the bandwidth, if you catch my drift (although Lambos are AWD and poor at drifting). So the performance that actually matters (the snappiness) is still there.
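For what it's worth, the tradeoff being described here is easy to model: per-request time is roughly link latency + size/bandwidth. A toy Python sketch using the latency figures quoted above (PCIe ~100ns, TB several hundred ns); the bandwidth numbers and the two workload sizes are just illustrative assumptions:

    # Toy model: per-request time = link latency + payload / bandwidth.
    def transfer_time_us(size_bytes, latency_s, bw_bytes_per_s):
        return (latency_s + size_bytes / bw_bytes_per_s) * 1e6

    PCIE = dict(latency_s=100e-9, bw_bytes_per_s=14e9)  # gen5 x4-ish
    TB5  = dict(latency_s=500e-9, bw_bytes_per_s=10e9)  # assumed sustained

    for label, size in [("4 KiB random read", 4 * 2**10),
                        ("1 GiB linear read", 2**30)]:
        p = transfer_time_us(size, **PCIE)
        t = transfer_time_us(size, **TB5)
        print(f"{label}: PCIe {p:11.3f} us, TB5 {t:11.3f} us ({t/p:.2f}x)")

On these numbers the gap is about 2.3x for a single 4 KiB read and about 1.4x for a 1 GiB stream, and in both cases the added link latency is well under a microsecond per request. The drive's own NAND latency (tens of microseconds) dominates either way, which is consistent with the snappiness surviving.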