Hacker News

People who work closely with machines develop an instinct for how they operate. If you think about it, all mechanical machines are state machines (in the CS sense), and if you've had to work with and fix shit long enough, your brain commits those experiences to long-term memory.



Or, more likely, state machines are a highly simplified view of reality... Working with machines is a combination of probabilities, intuition, experience, chance... Most work with machines is not 1 or 0.


Analog state machines are the older kind. Examples: gearbox, ignition switch, Big Red Button. They don't follow or respect the laws of digital state machines, but are reasoned about in a broadly similar way.
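To make the analogy concrete, here's a toy sketch of an ignition switch as a transition table. The states and events are illustrative assumptions, not real ignition wiring, but they show how "reasoned about in a broadly similar way" cashes out:

```python
# Toy finite-state model of a car ignition switch (illustrative only;
# the states and transitions are simplified assumptions, not real wiring).
IGNITION_TRANSITIONS = {
    ("off", "turn_key"): "accessory",
    ("accessory", "turn_key"): "on",
    ("on", "turn_key"): "start",
    ("start", "release_key"): "on",   # spring-return from START back to ON
    ("on", "key_back"): "accessory",
    ("accessory", "key_back"): "off",
}

def step(state, event):
    """Advance the FSM one event; unknown events leave the state unchanged."""
    return IGNITION_TRANSITIONS.get((state, event), state)

state = "off"
for event in ["turn_key", "turn_key", "turn_key", "release_key"]:
    state = step(state, event)
print(state)  # -> on
```

The point of the analogy: a mechanic's mental model of the switch is essentially this table, even though the physical device is analog.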


This is really obvious if you ride bicycles and do your own maintenance. You can quickly associate weird sounds/patterns with specific problems, because it's such a simple machine.


I agree, and I'd say good computer guys also make good mechanics. The core skills are the same. There's a guy called M539 on YT who's the best mechanic I know, and he just started out doing it as a hobby alongside his IT job.


It reminds me of an old story - The Handyman’s Invoice

https://www.snopes.com/fact-check/know-where-man/


I like this. It holds true for looking at code to some extent - what people call "code smell". But the main difference would be that failure modes in CS tend to be binary - a piece of code produces the expected result or it doesn't. And unless you do something to the environment, it doesn't wear down over time. Whereas your car's brakes don't go out all at once (hopefully), and yet it's guaranteed to happen eventually without maintenance, so a mechanic's eye is trained more toward what's currently going wrong with a known system than what might go wrong with a novel system.


> And unless you do something to the environment, it doesn't wear down over time.

Disagree here (at least in spirit?) - code smell can also point to maintainability issues: if the codebase is continually changed, some code is more likely to break than other code.


I've never heard/read/thought of "code smell" as anything BUT referring to maintainability issues.

When you're debugging a concrete issue and have a hunch what might be the cause, that's not a "code smell."


Code can smell and be perfectly maintainable. Sometimes it's even more maintainable, because it's so unnecessarily replicated that the patterns become obvious. You could even argue that the best precision code is the hardest to grok and maintain in the long run.


A code smell is a heuristic to detect maintainability issues, so while a heuristic can fail, if it fails consistently, you're using a bad heuristic.

Sounds like you're basing your heuristic on dogma rather than experience.


I'm kinda coming from a do-it-once, do-it-right background. Usually code I write just runs untouched for 10 years, so I'd classify any breaking changes to the codebase as a change in the environment. But either way, something breaks or it doesn't. It's not like some non-obvious wear and tear is going to happen over a year or two on its own, if no human messes with it.


> But the main difference would be that failure modes in CS tend to be binary - a piece of code produces the expected result or it doesn't.

I think if you’re talking about CS and code you’re right - it’s a very pure and deterministic system. But if you consider software engineering at scale, you do get scenarios where failure states emerge in a gradual fashion, like the OP example of a worn mechanism. An example would be monitoring error rates on a large fleet of servers. You expect a certain background level of failures, but when the rate starts to rise you might be observing the beginning of some kind of cascading failure mode. The errors might be manageable up to a point, but it isn’t always clear when the system will tip beyond that point. Post-mortems of cloud platform outages often mention these kinds of failures, because they typically arise from complex interactions between different layers which put the system into a state which nobody anticipated.
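A minimal sketch of that kind of gradual-failure detection: track the error rate over a sliding window of requests and flag when it drifts well above the expected background level. The window size, baseline rate, and alert factor here are arbitrary illustrative values, not anything from a real monitoring system:

```python
from collections import deque

# Sketch of gradual-failure detection on a fleet: keep a sliding window of
# request outcomes and flag when the error rate rises well above baseline.
# All thresholds are illustrative assumptions.
class ErrorRateMonitor:
    def __init__(self, window=100, baseline=0.02, alert_factor=3.0):
        self.results = deque(maxlen=window)  # True = success, False = error
        self.baseline = baseline             # expected background error rate
        self.alert_factor = alert_factor     # how far above baseline is "rising"

    def record(self, ok):
        self.results.append(ok)

    def error_rate(self):
        if not self.results:
            return 0.0
        return self.results.count(False) / len(self.results)

    def rising(self):
        # A rate above baseline * factor may be the start of a cascade,
        # even while each individual error still looks "manageable".
        return self.error_rate() > self.baseline * self.alert_factor

monitor = ErrorRateMonitor()
for i in range(100):
    monitor.record(i % 10 != 0)  # 10% failures vs. a 2% expected baseline
print(monitor.rising())  # -> True
```

The hard part the comment describes is exactly what this sketch hand-waves away: choosing the threshold, since the tipping point into cascading failure usually isn't known in advance.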


I guess my response to that would be that any change of platform - like scaling up, replication, etc - is an alteration of the environment, similar to changing the geometry of your car's wheels by changing their size, getting different shocks or raising/lowering it. Then you could expect failure states to emerge, but that's when you're looking for unpredictable behavior. I think I was just trying to make the point that in a static system, code you write that works keeps working, whereas mechanical things break down. Emergent problems in mechanical systems come from normal wear, not from scaling.


Yes. The point is that software is not static. I work on a mobile app that doesn't really have the scaling issue, but it's not that uncommon that a new OS update or phone model has new bugs that need to be worked around, or changes some undocumented behavior that the app accidentally depended on. It's not that different from mechanical wear.

In a previous job, in the early 2000s, they had some legacy software that was originally written for VAX/VMS and ran on an emulator. Even that environment wasn't stable enough, so they maintained some real VAX hardware just in case. And as far as I know, that hardware was physically breaking down.


"code smells" are almost never about reliability.



