Hacker News
Infinite loops and doomed machines (rachelbythebay.com)
38 points by r4um on Feb 15, 2019 | 6 comments



"The system should not allow a loop to be introduced. Obviously. But, it happened, and it'll happen again given enough time, so what else?"

The algorithms to detect a cycle are pretty simple. Why not prevent loops in the first place?

https://en.m.wikipedia.org/wiki/Cycle_detection


What system are you trusting to do this prevention? TFA seems to describe a situation in which the configuration can be changed in any number of ways by any number of users, and the configuration itself is an implicit sum of configurations across numerous machines. TFA's suggestion seems more reliable.


(You'll probably want this link instead: https://en.wikipedia.org/wiki/Cycle_(graph_theory)#Cycle_det...)


It was Floyd's Algorithm I was thinking of.
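For reference, here's a minimal sketch of Floyd's tortoise-and-hare algorithm, modeling the forwarding configuration as a hypothetical `next_hop` mapping (that model is an assumption for illustration, not something from TFA):

```python
def has_cycle(next_hop, start):
    """Floyd's tortoise-and-hare: return True if following next_hop
    from start ever enters a loop, False if the chain terminates.
    next_hop is a dict mapping each node to its successor (or None)."""
    slow = fast = start
    # fast moves two hops per step, slow moves one; if there is a
    # loop they must eventually land on the same node.
    while fast is not None and next_hop.get(fast) is not None:
        slow = next_hop[slow]
        fast = next_hop[next_hop[fast]]
        if slow == fast:
            return True
    return False

# a -> b -> c -> a forms a loop; x -> y terminates cleanly.
hops = {"a": "b", "b": "c", "c": "a", "x": "y", "y": None}
has_cycle(hops, "a")  # True
has_cycle(hops, "x")  # False
```

It runs in O(1) memory, which is why it's attractive for checking a chain of forwarders without materializing the whole graph first.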


On a physical level, unbounded deadlines and unlimited resources don't exist, so I've learned to distrust anything that assumes "it scales forever".

It's just a lot simpler to plan a static capacity target for each system resource, test and enforce around it, then review it when it falls over. Otherwise you get scenarios like this one, where the bug permeates the whole system and you get "dead allocations" that float in the ethereal void of the system's plumbing, where they are hardest to trace.

Leaks can happen in all sorts of environments as soon as you start putting in some layers of indirection, whether it's UI listeners, memory allocations, processes, or whole computers in a network. The indirection is usually done with the intent of optimizing a near-term development goal, but it burdens every future goal with a thing to design for or design around.
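The "static capacity target" idea above could be sketched roughly like this; the names (`BoundedPool`, `CapacityError`) and the hard-cap policy are illustrative assumptions, not anything from TFA:

```python
class CapacityError(RuntimeError):
    """Raised when a planned capacity limit is exceeded."""


class BoundedPool:
    """A resource pool with a fixed, pre-planned capacity target.

    Instead of growing without bound and leaking "dead allocations"
    into the rest of the system, it fails loudly at the limit, which
    is the point where you go back and review the target.
    """

    def __init__(self, cap):
        self.cap = cap
        self.in_use = 0

    def acquire(self):
        if self.in_use >= self.cap:
            raise CapacityError(f"capacity {self.cap} exhausted")
        self.in_use += 1

    def release(self):
        self.in_use -= 1


pool = BoundedPool(2)
pool.acquire()
pool.acquire()
# A third acquire() now raises CapacityError instead of silently
# letting the resource count drift past the plan.
```

Failing fast at the cap turns a slow, hard-to-trace leak into an immediate, attributable error.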


This sounds a lot like the initial rejection of TCP/IP.



