
It's not really "redundant copies". It's erasure coding (i.e., your data is the solution of an overdetermined system of equations).

That’s just fractional redundant copies.

And "fractional redundant copies" is way less obvious.

The fractional part isn't helping them serve data any faster. On the contrary, it actually reduces the speedup you get from parallelism. E.g. a 5:9 scheme only achieves 1.8x throughput, whereas straight-up triple redundancy would achieve 3x.

It just saves AWS money is all, by achieving greater redundancy with less disk usage.
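A back-of-the-envelope sketch of that arithmetic (my reading of the numbers: aggregate read throughput scales with the raw-to-logical expansion factor, since every stored byte can help serve reads):

    # Hypothetical helper: raw bytes stored per logical byte for a k:n scheme.
    def expansion(data_shards, total_shards):
        return total_shards / data_shards

    # 5:9 erasure coding: 1.8x storage, ~1.8x aggregate reads,
    # survives the loss of any 4 shards.
    print(expansion(5, 9))  # 1.8

    # Triple replication: 3x storage, ~3x aggregate reads,
    # survives the loss of any 2 copies.
    print(expansion(1, 3))  # 3.0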


> No off-by-one error – By default, Fortran uses a 1-based indexing. No off-by-one errors, period.

I'm with Dijkstra on this one. https://www.cs.utexas.edu/~EWD/transcriptions/EWD08xx/EWD831...


OP here. 0-based vs 1-based indexing is one of these hot debates I don't want to get into, as I've mentioned in the post. Still, my point is: if you take pretty much any math textbook, indexing starts at 1, whether we like it or not. And for many students, understanding the mechanics of a given algorithm is already enough of an effort that they do not need, on top of that, to translate every index in the book's pseudo-code from 1-based to 0-based. That's all I'm saying. No other value judgement.

Dijkstra is right, of course. However, most math texts still use 1-based indexing. If you want to translate them into code, it's easier when the conventions match.

(Now, if you had a proposal for switching math over to 0-based indexing ...)


He (Dijkstra) even mentions this in the article:

> The above has been triggered by a recent incident, when, in an emotional outburst, one of my mathematical colleagues at the University —not a computing scientist— accused a number of younger computing scientists of "pedantry" because —as they do by habit— they started numbering at zero.


Good luck getting a community with literally hundreds of years of literature using 1-based indexing to change.

I did have one or two math professors who would use x_0 and x_1 instead of x_1 and x_2 when they had to name two objects.

But I have also seen places where 1-based indexing was used despite being "obviously wrong". I don't quite recall what it was, but there was a sequence of objects A_1, A_2, ... and a natural way of combining A_k and A_l to get A_(k + l - 1). Had the indices been shifted by 1 to be 0-based, the result would have been A_(k + l), which would be much nicer to work with.


Doesn't Fortran also support the ability to define arrays with arbitrary bounds, e.g. (-4, 5), which is quite difficult to do in other languages?

Yes, but pitfalls were added to the feature in Fortran '90 and it should now be generally avoided.

I strongly disagree. Arbitrary bounds are tremendously helpful in dealing with arrays whose starting point must be offset.

I didn't say that they were not helpful; I said they have dangerous pitfalls. Also, they're not perfectly portable. My testing shows that only two compilers get lower bounds right in all the tricky cases that I know of.

There are "lightweight formal methods". Most problems can be produced via small models. Tools like alloy are built around this idea. (IIRC alloy was used to show that a famous DHT had issues with the churn protocol)

https://en.wikipedia.org/wiki/Alloy_(specification_language)


The author is right, but it could have been worse too. At least they were not using JSON for serialization.


>functional programming languages ... break down when you try to coordinate between components, especially when those components are distributed ...

I think the exact opposite. Having spent more than a decade in distributed systems, I became convinced that functional programming (and pure functions) is essential for survival in distributed systems.

Back then I wrote down my insights here (amazed that the site still exists)

https://incubaid.wordpress.com/2012/03/28/the-game-of-distri...


And part of why functional programming works so well is exactly that you don't need to care about things like control flow and interactions. You're just expanding definitions. Those definitions are local/isolated, and compose in regular ways.


Even with my limited knowledge of FP, I am pretty sure I would only grow more in agreement as I learn more. My only exposure to functional programming is via Nix and Rust (I promise I'll learn OCaml soon). One thing that I've really come to appreciate is the concept of "making invalid states unrepresentable," a trick that is harder than it should be (though not impossible) in "less functional" languages. Coming back to distributed systems, I have wondered what a functional database would look like: mutations as pure functions, DUs in tuples, could we store a function, etc.
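For what it's worth, here is a toy sketch of "mutations as pure functions" (all names hypothetical, not any real database's API): the state is an immutable mapping, and each mutation returns a new state instead of modifying the old one.

    from functools import reduce
    from types import MappingProxyType

    def apply_mutation(state, mutation):
        # Pure function: (old state, mutation) -> new state; nothing is modified in place.
        op, key, value = mutation
        new = dict(state)
        if op == "put":
            new[key] = value
        elif op == "delete":
            new.pop(key, None)
        return MappingProxyType(new)

    log = [("put", "a", 1), ("put", "b", 2), ("delete", "a", None)]
    state = reduce(apply_mutation, log, MappingProxyType({}))
    print(dict(state))  # {'b': 2}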


> "making invalid states unrepresentable," a trick that is harder than it should be (though not impossible) in "less functional" languages

The flip side of this is to "make representable states valid." If you have an enum that doesn't fill a bitfield, values of the bitfield outside the enum are representable -- and the behavior of the system must be defined in that case. (Most often, this is done by mapping the behavior of undefined states to a chosen defined state, or using it to trigger an abort -- the key is that it must be an explicit choice.)
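An illustrative Python sketch of that discipline (hypothetical Mode enum): a 2-bit field has four representable values but only three defined ones, so the decoder must make an explicit choice for the fourth.

    from enum import IntEnum

    class Mode(IntEnum):
        OFF = 0
        ON = 1
        STANDBY = 2
        # 0b11 is representable in the 2-bit field but deliberately undefined

    def decode_mode(bits):
        try:
            return Mode(bits & 0b11)
        except ValueError:
            # explicit choice: map undefined encodings to a safe default
            return Mode.OFF

    print(decode_mode(0b11))  # Mode.OFF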


Regarding databases, you get quite far using purely functional data structures and zippers (a tiny sketch follows the links):

- https://www.cs.cmu.edu/~rwh/students/okasaki.pdf

- https://en.wikibooks.org/wiki/Haskell/Zippers
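A tiny path-copying sketch (a Python stand-in for the typed versions in those references): an insert copies only the path from the root to the change and shares everything else, so earlier versions of the "database" stay readable for free.

    from collections import namedtuple

    Node = namedtuple("Node", "key value left right")

    def insert(node, key, value):
        # Returns a new tree; untouched subtrees are shared, not copied.
        if node is None:
            return Node(key, value, None, None)
        if key < node.key:
            return node._replace(left=insert(node.left, key, value))
        if key > node.key:
            return node._replace(right=insert(node.right, key, value))
        return node._replace(value=value)

    v1 = insert(None, "a", 1)
    v2 = insert(v1, "b", 2)  # v1 is untouched and remains a complete snapshot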


I never understood why Stackless wasn't more popular. It was rather nice, clean, and performant (well, it's still Python, but it provide(d) proper concurrency).


It did a memmove() on each task switch, so you could forget about the d-cache. And that killed performance on anything but benchmarks.


It also caused subtle bugs. I once had to debug a crash in C++ code that turned out to be due to Stackless Python corrupting stack state on Windows. OutputDebugString() would intermittently crash because Stackless had temporarily copied out part of the stack and corrupted the thread's structured exception handling chain. This wasn't obvious because the crash occurred in a very deep call stack with Stackless much higher up, and it only made sense if you knew that OutputDebugString() is implemented internally by throwing a continuable exception.

The more significant problem was that Stackless was a separate distribution. Every time CPython updated, there would be a delay until Stackless updated, and tooling like Python IDEs varied in whether they supported Stackless.


We never ran Stackless/greenlet under Windows. And pycoev was setcontext(3)-based, so no Windows either.

But I can imagine what that code did there...


Isn't sharpness just the opposite of quiescence?

https://www.chessprogramming.org/Quiescence_Search


There was a fight between Apple, the computer company, and Apple, the record company (initially owned by The Beatles). They initially resolved it by The Beatles allowing the other one to keep its name on the condition that it would refrain from entering the music business.

We all know how that turned out.

https://en.wikipedia.org/wiki/Apple_Records


How did it turn out? Seems like both still have their trademarks and everyone lived happily ever after.



The Beatles got richer, and then got a whole lot richer.


Apple Computer obviously broke their promise to stay out of the music business (most notably with iTunes and related products).


Apple owns all of the related trademarks, and licenses the relevant ones to Apple Records.



IMNSHO: Yes.

It's a sign of the design quality of a programming language when two arbitrary features A and B of that language can be combined and the combination will not explode in your face. In Python and C++ (and plenty of other languages) you constantly run the risk that two features don't combine. Both Python and C++ are full of examples where you will learn the hard way: "ah yes, this doesn't work", or "wow, this is really unexpected".


Well, there is also a question of attitude. Most Python programmers don't overload << or >> even though they technically can, while in C++ that's literally the way the standard library does I/O, and I suspect it leaves an impression on people studying it as one of their first languages that no, it's fine to overload operators however quirkily you want. Overload "custom_string * 1251" to mean "convert string from Windows-1251 to UTF-8"? Sure, why not.


I've seen >> being overloaded in several libraries/frameworks. Off the top of my head:

   - Airflow: https://airflow.apache.org/docs/apache-airflow/stable/index.html#dags

   - Diagrams: https://diagrams.mingrammer.com/docs/getting-started/examples
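In both cases the trick is the same: overload __rshift__ so that "a >> b" records an edge and returns the right-hand operand for chaining. A minimal illustrative sketch (not either library's actual code):

    class Task:
        def __init__(self, name):
            self.name = name
            self.downstream = []

        def __rshift__(self, other):
            self.downstream.append(other)
            return other  # returning the right operand allows a >> b >> c

    a, b, c = Task("a"), Task("b"), Task("c")
    a >> b >> c
    print(a.downstream[0].name, b.downstream[0].name)  # b c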

