quelltext's comments

Several things remain unclear:

"The beauty of this scheme is that a node only has to know about its share of the interval"

The article doesn't explain curve changes in much detail, but I assume it increases the portion of the curve "owned" by the node.

With unique identifiers, all a node needs to know is its identifier. So that can't be what's interesting about these interval portions.

Also:

- How that curve is initially drawn isn't clear at all. Is it flat at first, becoming more complex over time through forking (+ data modification)?

- Why are interval boundaries real-value in a system that cannot actually express real numbers?

- How are the intervals / portions decided? Is that simpler than generating UUIDs?

- How does comparison work?

"Comparison works similarly to Version Vectors: if the curve of a stamp is above the other one, it descends it, otherwise the curves intersect and the stamps are concurrent."

But now you have events with curves and intervals where one event might miss a portion. It's not immediately clear what happens in that comparison. Maybe it's obvious to some readers, but clearly not to an audience that needed an introduction to the other things the article initially explained.

In terms of conclusions:

My understanding is that the main benefit is that the "vector" stays simpler overall in the face of actor explosion, thanks to the merging mechanism. Whereas UUIDs (or even vectors indexed by monotonically increasing IDs) would grow indefinitely, making tracking them on events a challenge.

This intro article fails to make this stuff clear.


Hello, author here. Sorry about the lack of clarity, this article is the transcript of a 5 min lightning talk and it was really hard fitting all the relevant content in that little time :) (In retrospect that was a poorly chosen topic. When I picked it I thought the talk would be 10 min, 5 min is too short to explain a subject like this.)

> How that curve is initially drawn isn't clear at all. Is it flat and becomes complex over time by forking (+ data modification)?

Yes, the initial curve is typically constant 0.

> Why are interval boundaries real-value in a system that cannot actually express real numbers?

Like snthpy said "real" is a shortcut to say infinitely subdivisible. The numbers themselves are actually rationals.

> How are the intervals / portions decided? Is that simpler than generating UUIDs?

Nodes are forked from an existing node, and that node decides which portion of its interval it gives to the new node. You pick the splitting point to keep complexity low.

Regarding comparison: you always know the values of the whole curve. When I say "a node only has to know about its share of the interval" I only mean the ID space. In a version vector there is a direct link between identifiers and counters, whereas here, outside of your share of the interval, you don't know who owns what or how many devices there are at any given point.
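
To make the fork step a bit more concrete, here is a rough Python sketch of it (the Fraction boundaries are just for readability; the actual ITC encoding stores the id as a binary tree rather than explicit numbers):

    from fractions import Fraction

    class Node:
        def __init__(self, lo, hi):
            # The share of the unit interval this node owns.
            self.lo, self.hi = Fraction(lo), Fraction(hi)

        def fork(self):
            # Hand the upper half of my share to a new node, keep the lower half.
            mid = (self.lo + self.hi) / 2
            child = Node(mid, self.hi)
            self.hi = mid
            return child

    root = Node(0, 1)   # the first node owns all of [0, 1)
    a = root.fork()     # root keeps [0, 1/2), a gets [1/2, 1)
    b = a.fork()        # a keeps [1/2, 3/4), b gets [3/4, 1)

Splitting in half each time is just the simplest choice; the point is only that any share can keep being subdivided without coordination.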


> Like snthpy said "real" is a shortcut to say infinitely subdivisible. The numbers themselves are actually rationals.

That sounds like a fair bit of complexity vs using 64/128-bit unsigned integers or something. I guess the benefit is that then you "never" have to reallocate / defragment?


> Why are interval boundaries real-value in a system that cannot actually express real numbers?

They can be represented as a binary tree, that is, as nested lists or tuples, for instance. The whole interval (1) could be split into (1,0) and (0,1). Then (1,0) splits into ((1,0),0) and ((0,1),0). And (0,1) splits into (0,(1,0)) and (0,(0,1)), and so on.
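
A tiny Python sketch of that splitting rule, covering only the cases above where the owned part sits entirely in one subtree (the full ITC split also handles ids that own pieces on both sides):

    def split(i):
        # Split a tuple-encoded share of the interval into two disjoint halves.
        if i == 1:
            return (1, 0), (0, 1)
        l, r = i
        if r == 0:      # owned part is entirely in the left subtree
            ll, lr = split(l)
            return (ll, 0), (lr, 0)
        if l == 0:      # owned part is entirely in the right subtree
            rl, rr = split(r)
            return (0, rl), (0, rr)
        raise NotImplementedError("ids owning both halves are out of scope here")

    print(split(1))         # ((1, 0), (0, 1))
    print(split((1, 0)))    # (((1, 0), 0), ((0, 1), 0))
    print(split((0, 1)))    # ((0, (1, 0)), (0, (0, 1)))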


I guess his point remains, though, that if all you need is infinite divisibility then the rational numbers between 0 and 1 would be sufficient. I take it that's what was meant, and "real" numbers was just shorthand for that.


> and I can't explain exactly how their ordered dicts work

Traditionally you simply use a doubly linked list approach on the entries (each entry maintains two additional references, to the previous and next entry), like LinkedHashMap does: https://docs.oracle.com/javase//8/docs/api/java/util/LinkedH...

https://github.com/openjdk-mirror/jdk7u-jdk/blob/master/src/...

Which is also what Python seems to be doing: https://stackoverflow.com/a/34496644

It's fairly intuitive.
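
Roughly like this, as a hand-rolled toy (using a plain dict for the hashing part just to keep the sketch short; this is not the actual LinkedHashMap or CPython code):

    class Entry:
        def __init__(self, key, value):
            self.key, self.value = key, value
            self.prev = self.next = None

    class OrderedMap:
        def __init__(self):
            self.table = {}                 # hash lookup: key -> Entry
            self.head = self.tail = None    # linked list in insertion order

        def put(self, key, value):
            e = self.table.get(key)
            if e is not None:               # updating a key keeps its position
                e.value = value
                return
            e = Entry(key, value)
            self.table[key] = e
            if self.tail is None:
                self.head = self.tail = e
            else:                           # append at the tail
                e.prev = self.tail
                self.tail.next = e
                self.tail = e

        def delete(self, key):
            e = self.table.pop(key)
            # Unlink in O(1); order of the remaining entries is untouched.
            if e.prev: e.prev.next = e.next
            else: self.head = e.next
            if e.next: e.next.prev = e.prev
            else: self.tail = e.prev

        def keys(self):                     # insertion order
            e, out = self.head, []
            while e:
                out.append(e.key)
                e = e.next
            return out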

Do their new default (now also ordered?) dicts do this differently?


Note that OrderedDict is an implementation in Python. CPython's dict has a different implementation. There's more about it at https://docs.python.org/3.6/whatsnew/3.6.html#new-dict-imple... and https://mail.python.org/pipermail/python-dev/2012-December/1... .


This implementation has been used since 3.6, right?

It's interesting that the mail proposing the idea says nothing about the behavior (including order) changes, only the memory layout. That would imply insertion order was already preserved in older versions (not the case, IIRC), or that the idea underwent a few more changes that did in fact affect order.

EDIT: I couldn't quite find an answer, but https://softwaremaniacs.org/blog/2020/02/05/dicts-ordered/ mentions that the behavior has held since then because the implementation only stores indices in the hash table itself and keeps the entries separately in a second array that grows in insertion order.

This also seems straightforward, but it raises a few questions, such as how deletion is implemented (efficiently).

EDIT2: Okay, the talk (https://youtu.be/p33CVV29OG8) mentions they just leave holes until the next resize (at around 42:00).
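
For anyone curious, a toy sketch of that two-array layout, including the leave-a-hole deletion (the index here is a plain dict purely for brevity; CPython really uses an open-addressed table of integers):

    class CompactDict:
        def __init__(self):
            self.index = {}       # sparse part: key -> position in self.entries
            self.entries = []     # dense part: (key, value), in insertion order

        def __setitem__(self, key, value):
            pos = self.index.get(key)
            if pos is None:                       # new key: append, order preserved
                self.index[key] = len(self.entries)
                self.entries.append((key, value))
            else:                                 # existing key: update in place
                self.entries[pos] = (key, value)

        def __delitem__(self, key):
            pos = self.index.pop(key)
            self.entries[pos] = None              # leave a hole until the next resize

        def _compact(self):                       # what a resize would do
            self.entries = [e for e in self.entries if e is not None]
            self.index = {k: i for i, (k, _) in enumerate(self.entries)}

        def __iter__(self):                       # iteration in insertion order
            for e in self.entries:
                if e is not None:
                    yield e[0]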

Raymond also mentions there that his original idea didn't preserve ordering, and that the ordering fell out of an additional compacting optimization? I should probably watch the whole thing some time to get the history. Sounds like a fun talk.


Oh! Yeah, that was the talk, but repeated at PyCon. It’s a very clever design that wasn’t at all obvious to me.


In the log replication example, after healing the partition the uncommitted log changes in the minority group are rolled back and the leader's log is used.

However, it's not clear how that log is transmitted. Until this point only heartbeats via AppendEntries were discussed, so it's not clear whether the followers pull that information from the leader via some different mechanism, or whether it's the leader's responsibility to detect followers that are left behind and replay everything. The latter seems rather error-prone and a lot of coordination effort. So how's it actually done?


Fair enough that perhaps they (the director specifically) thought Crocs look like something nobody would want to wear. Heck, I thought that back then; many did. So perhaps that's why Snopes says it's true.

But Crocs had actually become somewhat popular already before Idiocracy.

The more realistic, full-picture explanation is that they chose something that they, or someone on their staff, already knew of and considered a stupid trend, like many "look at those idiots" types (myself at the time included). That doesn't negate that they in fact thought nobody with taste would wear those shoes, but I don't think the choice was made entirely in isolation, unaware of the trend.

Watching the movie and seeing Crocs worn was yet another of those touches connecting the stupid people of today to that fictional future world, like all the other stuff in the movie dialed all the way up (energy drinks, corporate sponsorships, etc.).

The mere fact that someone knew of Crocs, thought of them, and chose them because of their ugliness means they were popular/successful enough to pop up on someone's radar, despite ostensibly not being something anyone would wear. Perhaps they didn't know how much more popular Crocs would become, but they surely must have picked them as an artifact of things already going in a weird direction (Why can you get this? Who would want this? Someone must; these will be the stupid people of tomorrow.)

But also, actually, so what?

Look at some of the fashion in older movies from past decades. Some of it is cool, but a lot of it is super ridiculous.

And if you look at Crocs, are they really objectively stupid? Treating them as a high-fashion item probably is. But they are versatile and robust, good for many of the use cases where people used to wear other types of cheap plastic sandals. People wearing leather shoes surely thought sneakers were stupid until they became so mainstream that they were evaluated more objectively.

Citing Idiocracy and Crocs seems like a very weak argument for your case, and even for Idiocracy's point (fashion choices don't indicate the world is getting stupid). Mind you, I'm not disagreeing that things have gotten worse in many ways and that social media is definitely not helping. OTOH, Facebook actually was somewhat reasonable for a long time, and useful for connecting with people. Only once the Twitterification of it started did it get so bad. But somehow Twitter never gets the bad reputation.


No. Crocs became popular after Idiocracy. Check the year they were shooting, as it took them a few years to release it. Chat with Mike Judge: https://www.youtube.com/watch?v=UBu_RpKqCg8


Just to confirm: Are you suggesting engineers working during work hours on an alert should get paid double? Or only outside work hours?

I'm not sure we're all on the same page here but let me give you an example of how on-call essentially works on my team.

- Week long rotations spread out across the year among members.

- On-call means holding a pager, but also taking in any non-urgent requests that can be handled within a reasonable time. New feature requests are out of scope; answering a bug report from support is in scope, including a fix if that's possible.

- Responding to paging alerts only at night. On some teams we did have sister teams in other regions cover some portion of the night with their own on-call.

- Generally, paging alerts are rare enough (once or twice a week) that out-of-work-hours disruption is fairly low.

- Non-urgent breakages, bug reports, etc. are fairly common though.

Someone has to handle all that, so it's a rotation. I don't think providing incentives to engineers to take more on-call is practical, unless you are okay with them stagnating in their career. And it's the EM asking here, so I'd hope they don't want that.


What you are describing is an org smell[0] I think. On-call should be used to handle urgent, emergent situations that need to be addressed at once in order to keep the business running. What you are describing as the responsibilities of your on-call rotation includes explicitly non-urgent problems: bugs, customer support, reporting. Now these all need to be handled by any competent organization, but they are routine matters of any software system. They should be handled in a routine fashion. For a small company it makes sense for the founders to do all of this, and systems will need to be developed to manage the inevitable overflow of bugs, support requests, and reporting. The fact that this is handled by the on-call engineer in your organization suggests a failure of organizational design: there are "important" tasks like adding new features and "non-important" tasks like fixing bugs (!), communicating with your users (!) and doing root cause analysis of incidents (!).

To put things simply, there are jobs in your organization that are not the responsibility of anyone, and thus when they are encountered they go on to the heap of "non-important" things to do. This is unfortunately common in software-making organizations. The problem is that if this heap gets too large it catches on fire. And allocating an engineer to spray water on this flaming trash heap on a reliable schedule is not what most people consider to be a fulfilling task of their employment.

So to answer your inquiry, perhaps in addition to giving extraordinary compensation to work which is by definition extraordinary (if it's ordinary work why does it need a special on-call system to handle it?), it is also best to make sure that items which regularly end up on the on-call heap become the responsibility of a person. In an early stage company customer support can be handled by the founder, bugs can be handled as part of sprints, and root cause analysis should be done as the final part of any on-call alert as a matter of good practice.

It's my belief, again, that making on-call unreasonably expensive incentivizes the larger organization to create a system that handles bugs, customer support, and reports before they end up on the flaming trash heap. And that long-term this reduces costs, churn, and burnout. I again point to Will Larson because I developed all my thinking on this based on his works.[1]

To put it succinctly: Making on-call just another job responsibility incentivizes the creation of an eternal flaming trash heap that a single, poor engineer is responsible for firefighting on a reliable schedule (not fun). Recognizing that on-call is by its nature an extraordinary job responsibility, and compensating engineers in alert in extraordinary fashion, incentivizes the larger organization, i.e. executives, directors and managers, to build systems to minimize, extinguish, and eventually destroy the flaming trash heap (yay).

[0] Organization smell, analogous to a "code smell", where a programmer with sufficient intuition can tell something is amiss without being able to precisely describe it immediately.

[1] https://lethain.com/doing-it-harder-and-hero-programming/. I recommend buying "An Elegant Puzzle" because some of his best essays on the subject of on-call are only available in the book, not on his blog.


I'm by no means important in my org, but when something looks like a shitty idea I will raise that (like other ICs around me), and more often than not it works out fine. I'll agree to give something a shot, but if it doesn't work, it doesn't work, and my managers so far have all realized that a bit into the trial period.

Reading comments like yours, I guess I should value my work environment more.


It's not about speaking up on a shitty idea; it's about not playing the game on a tool used to measure employee performance. That's a big difference.

Especially because here we're talking about someone whose performance and contributions were very clear to everyone. Otherwise, he might have been seen as an underperformer by managers.


If you want to see an extreme example of people being afraid to speak up, just look at the Gemini image examples at a company that in theory encourages people to speak up. There are always topics that are exceptions.


If you're OOTL on Google's Gemini ML-based image generation, then https://blog.google/products/gemini/gemini-image-generation-... is a pretty good summary.

Ostensibly it appeared to be tuned to be racist.

Maybe Google encourages people to speak up but also has a culture of racism?


I think you are very fortunate then. The number of times I have raised concerns and my warnings have been ignored... Even when I have direct past learning or experience with the thing they want to do, I can no longer change their minds.


The question is really more: do they actually act on the feedback on a regular basis?


Yeah, honestly, over nearly 20 years of working in this industry, I’m not sure that I’ve ever worked anywhere where there’d be significant management pushback on something like this. Now, granted, I’ve mostly worked in small companies, and one rather selective largish company; maybe things really are much worse in the truly huge companies.


You don't remember the "What's a computer?" ad?

iPads are most definitely marketed as devices suitable to take the place of conventional computers.


To me, that ad underscores the point somewhat. Apple is marketing these devices as something other than a computer. Something that makes a computer unnecessary.

The underlying implication being: “You don’t need a computer”, and “our ecosystem is so good that the new generation won’t even know what a computer is”.

As a tech and Linux nerd since the early 2000s, I can understand why other tech savvy people could interpret this as “this is no different than a computer”, but I don’t think this is the right framing, and I don’t think we’re the intended audience.

Their claim has always been that this ecosystem makes general purpose computers unnecessary for a wide array of use cases, because “there’s an app for that”.

From the perspective of a layperson, I think the message is: “Computers are for tech people (and/or outdated). This is for the rest of us”.

The term “general purpose” means something very different to the HN crowd than it does for the majority of Apple customers.

I want to reiterate that I’m not endorsing their position, just trying to point out that their marketing has been consistent in trying to differentiate the i*OS products. The difference between “you don’t need a computer” and “this is a general purpose computer” is subtle but important I think.

I also don’t think it’s a good direction for tech in general, even though I value some of the benefits of the locked down ecosystem. I do most of my productive work on a Linux system and think it’s critically important to continue having this option.

I’m just not trying to use an iPad for this purpose.


> Not to mention it's a crime.

Doesn't that heavily depend on where the employee is based?

https://en.m.wikipedia.org/wiki/Affirmative_action_in_the_Un...

Quotas are legal in many states. Granted the author is likely based in CA. Still, let's keep away from absolutes.

FWIW I also don't support discrimination in whichever direction. However, so far the story is "there was some kind of quota delaying my promotion". Is that really news or surprising?

I guess it is surprising someone just stated it to them directly but this whole thing isn't particularly juicy yet.


It is a crime [1]. You don't have to hurt someone's career prospects on the basis of skin color to apply affirmative action.

[1] https://en.m.wikipedia.org/wiki/Matthew_Shepard_and_James_By...


Saying they have a quota doesn't mean "because you are white".

I mean, it has a similar effect but it's not like we don't know quotas exist and are largely tolerated, are they not? Are quotas outright illegal in the US?

Do we know it was a racial quota? Now, a gender quota would still not be better for the author, but this is still lacking in novelty.


Yes, quotas are illegal in the USA.


So, where's the actual story?

I don't use X so maybe it's buried somewhere else.

Even the tweet you mentioned wasn't directly discoverable from the submission.

But it looks like he already tweeted months ago that this had happened.

What's actually new info now?


> FYI this was just my teaser story… not even close to the worst of it

https://x.com/shaunmmaguire/status/1760908574870368513


Okay, let's see then I guess?

I guess this HN submission was just a teaser as well then. Maybe another one will reveal something actually interesting.


Exactly why I flagged. I am interested in hearing the story, not the teaser to the story.

