The fractional part isn't helping them serve data any faster. On the contrary, it actually reduces the speedup you get from parallelism. E.g. a 5-of-9 scheme only achieves 1.8x read throughput, whereas straight-up triple redundancy would achieve 3x.
It just saves AWS money is all, by achieving greater redundancy with less disk usage.
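The 1.8x and 3x figures above are just back-of-the-envelope ratios; a small sketch of that arithmetic (standard k-of-n erasure-coding formulas, nothing AWS-specific):

```python
# Rough storage/throughput arithmetic for k-of-n erasure coding versus
# n-way replication. The numbers follow the 5:9 example above; these are
# the usual back-of-the-envelope formulas, not any provider's actual design.

def erasure_coding(k: int, n: int):
    """k data shards, n shards total; any k of them reconstruct the object."""
    storage_overhead = n / k    # bytes stored per byte of user data
    read_parallelism = n / k    # aggregate read bandwidth vs. a single copy
    tolerated_failures = n - k  # shards you can lose and still recover
    return storage_overhead, read_parallelism, tolerated_failures

def replication(copies: int):
    """Plain n-way replication: every copy can serve reads independently."""
    return copies, copies, copies - 1

print(erasure_coding(5, 9))  # (1.8, 1.8, 4)
print(replication(3))        # (3, 3, 2)
```

So 5-of-9 actually tolerates more failures (4 vs. 2) at 1.8x the storage instead of 3x, which is exactly the "saves money" trade-off: redundancy gets cheaper, parallel read throughput gets worse.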
OP here. 0-based vs 1-based indexing is one of these hot debates I don't want to get into, as I mentioned in the post. Still, the point is: if you take pretty much any math textbook, indexing starts at 1, whether we like it or not. And for many students, understanding the mechanics of a given algorithm is already enough of an effort; they don't need, on top of that, to translate every index in the book's pseudo-code from 1-based to 0-based. That's all I'm saying. No other value judgement.
Dijkstra is right, of course. However, most math texts still use 1-based indexing. If you want to translate them into code, it's easier when the conventions match.
(Now, if you had a proposal for switching math over to 0-based indexing ...)
> The above has been triggered by a recent incident, when, in an emotional outburst, one of my mathematical colleagues at the University —not a computing scientist— accused a number of younger computing scientists of "pedantry" because —as they do by habit— they started numbering at zero.
I did have one or two math professors who would use x_0 and x_1 instead of x_1 and x_2 when they had to name two objects.
But I have also seen places where 1-based indexing was used despite being "obviously wrong". I don't quite recall what it was, but there was a sequence of objects A_1, A_2, ... and a natural way of combining A_k and A_l to get A_(k + l - 1). Had the indices been shifted by 1 to be 0-based, the result would have been A_(k + l), which would be much nicer to work with.
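One concrete instance of this pattern (not necessarily the one the parent is recalling, just an illustration): polynomial multiplication. Counted 1-based by number of coefficients, k coefficients times l coefficients gives k + l - 1 coefficients; counted 0-based by degree, degree i times degree j gives degree i + j, and the bookkeeping disappears:

```python
def poly_mul(a, b):
    """Multiply polynomials given as coefficient lists, lowest degree first."""
    # 1-based "size" bookkeeping: k coefficients * l coefficients
    # yields k + l - 1 coefficients.
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj  # 0-based degrees simply add: i + j
    return out

# (1 + x) * (1 + x) = 1 + 2x + x^2
print(poly_mul([1, 1], [1, 1]))  # [1, 2, 1]
```

With 0-based degrees the combination rule is plain addition; the `- 1` only shows up because "number of coefficients" is a 1-based count.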
I didn't say that they were not helpful; I said they have dangerous pitfalls. Also, they're not perfectly portable. My testing shows that only two compilers get lower bounds right in all the tricky cases that I know of.
There are "lightweight formal methods". Most problems can be reproduced with small models. Tools like Alloy are built around this idea. (IIRC Alloy was used to show that a famous DHT had issues with its churn protocol.)
>functional programming languages ... break down when you try to coordinate between components, especially when those components are distributed ...
I think the exact opposite. Having spent more than a decade in distributed systems, I became convinced functional programming (and pure functions) are essential for survival in distributed systems.
Back then I wrote down my insights here (amazed that the site still exists)
And part of why functional programming works so well is exactly that you don't need to care about things like control flow and interactions. You're just expanding definitions. Those definitions are local/isolated, and compose in regular ways.
Even with my limited knowledge of FP, I'm pretty sure I'd only agree more as I learn more. My only exposure to functional programming is via Nix and Rust (I promise I'll learn OCaml soon). One thing that I've really come to appreciate is the concept of "making invalid states unrepresentable," a trick that is harder than it should be (though not impossible) in "less functional" languages. Coming back to distributed systems, I have wondered what a functional database would look like. Mutations as pure functions, DUs in tuples, could we store a function, etc.
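For what "making invalid states unrepresentable" can look like even in a "less functional" language, here's a hedged sketch in Python using a tagged union of frozen dataclasses (the connection/session domain is invented for illustration):

```python
# Sketch: a connection is either Disconnected (no session id can exist)
# or Connected (a session id *must* exist). There is no way to construct
# the invalid state "connected, but session_id is None".
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Disconnected:
    pass  # carries no data by construction

@dataclass(frozen=True)
class Connected:
    session_id: str  # required field: every Connected value has one

ConnState = Union[Disconnected, Connected]

def describe(state: ConnState) -> str:
    # A type checker forces every variant to be handled explicitly.
    if isinstance(state, Connected):
        return f"connected as {state.session_id}"
    return "disconnected"

print(describe(Connected("abc123")))  # connected as abc123
print(describe(Disconnected()))       # disconnected
```

The same idea is more natural with Rust enums or OCaml variants, where the compiler also checks match exhaustiveness for free.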
> "making invalid states unrepresentable," a trick that is harder than it should be (though not impossible) in "less functional" languages
The flip side of this is to "make representable states valid." If you have an enum that doesn't fill a bitfield, values of the bitfield outside the enum are representable -- and the behavior of the system must be defined in that case. (Most often, this is done by mapping the behavior of undefined states to a chosen defined state, or using it to trigger an abort -- the key is that it must be an explicit choice.)
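A minimal sketch of that flip side, assuming an invented 3-bit opcode field whose wire format can encode values the protocol never defined:

```python
# "Make representable states valid": a 3-bit field can hold 0..7, but only
# 0..3 are defined opcodes. The extra encodings are representable, so the
# decoder must make an explicit choice -- here, map them to UNKNOWN rather
# than leave their behavior undefined.
from enum import IntEnum

class OpCode(IntEnum):
    READ = 0
    WRITE = 1
    DELETE = 2
    LIST = 3
    UNKNOWN = 4  # designated catch-all for undefined encodings

def decode_opcode(bits: int) -> OpCode:
    bits &= 0b111  # the wire format hands us exactly 3 bits
    try:
        return OpCode(bits)
    except ValueError:
        return OpCode.UNKNOWN  # the explicit choice, not silent misbehavior

print(decode_opcode(2))  # OpCode.DELETE
print(decode_opcode(6))  # OpCode.UNKNOWN
```

Swapping the `return OpCode.UNKNOWN` for a `raise` is the other explicit option the parent mentions: trigger an abort instead of mapping to a defined state.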
I never understood why Stackless wasn't more popular. It was rather nice, clean and performant (well, it's still Python, but it provided proper concurrency).
Also caused subtle bugs. I once had to debug a crash in C++ code that turned out to be due to Stackless Python corrupting stack state on Windows. OutputDebugString() would intermittently crash because Stackless had temporarily copied out part of the stack and corrupted the thread's structured exception handling chain. This wasn't obvious because this occurred in a very deep call stack with Stackless much higher up, and it only made sense if you knew that OutputDebugString() is implemented internally by throwing a continuable exception.
The more significant problem was that Stackless was a separate distribution. Every time CPython updated, there would be a delay until Stackless updated, and tooling like Python IDEs varied in whether they supported Stackless.
there was a fight between Apple, the computer company and Apple, the record company (initially owned by The Beatles).
They initially resolved it by The Beatles allowing the other one to keep its name on the condition that it would refrain from entering the music business.
It's a sign of the design quality of a programming language when 2 arbitrary features A and B of that language can be combined and the combination will not explode in your face.
In python and C++ (and plenty of other languages) you constantly have the risk that 2 features don't combine. Both python and C++ are full of examples where you will learn the hard way: "ah yes, this doesn't work." Or "wow, this is really unexpected".
Well, there is also a question of attitude. Most Python programmers don't overload << or >> even though they technically can, while in C++ that's literally the way the standard library does I/O, and I suspect it leaves an impression on people studying it as one of their first languages that it's fine to overload operators however quirkily you want. Overload "custom_string * 1251" to mean "convert string from Windows-1251 to UTF-8"? Sure, why not.
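And nothing in Python stops you either; only convention does. The joke overload above, sketched literally (a deliberately bad-taste example, not a recommendation):

```python
# A deliberately quirky overload: "*" on a byte string meaning
# "decode from that Windows codepage number". Python happily allows it,
# even though "*" on bytes normally means repetition.
class CustomString(bytes):
    def __mul__(self, codepage: int) -> str:
        return self.decode(f"cp{codepage}")

s = CustomString("Привет".encode("cp1251"))
print(s * 1251)  # Привет
```

It runs fine, which is exactly the point: the language can't distinguish tasteful operator overloading from this; only the surrounding culture can.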