Hacker News | verinus's comments

Being stupid and selfish should not be celebrated at all; it should be warned against!

He did not understand bears; he just saw them as he wanted them to be, and actually did them a severe disservice!

Going outside your comfort zone is valuable, but only if done with knowledge of your own limits and of the consequences!

e.g. I live in the Alps, and we have far too many emergencies in the mountains caused by ignorant tourists. They know nothing about the mountains, the tour, the weather up there, or what equipment and clothing to bring and wear, and they completely disregard the advice of locals. Then Mountain Rescue risks life and health to get them down. Most of them are volunteers, no less!


> and actually did them a severe disservice!

I can't imagine what disservice was done. A guy hung around bears for a while, and one ate him and his girlfriend.

To the bears literally nothing happened of note.


Had you read the article, you would have known that two bears were put down, as a direct result of all this.


Two of the bears that ate him and his girlfriend got killed.


The bears were shot and killed.


> To the bears literally nothing happened of note.

They ate an exotic meal.


Reputation damage


You fail to understand how software development and maintenance works.

OFC you need an up-to-date system, with all known security holes fixed, to run home-banking apps.

Also, as a dev I would only want to support one config, not a myriad of different devices and operating-system versions (APIs). For life. For a $3 purchase price. On all devices. For the whole family.

And imho Apple devices are supported much longer than most Android ones...


> You fail to understand how software development and maintenance works.

Someone complains about bad food at a restaurant. A defender responds: "you fail to understand how cooking works".


An analogy is not a valid argument. Analogies are useful for illustrating a concept, but a waste of time when trying to support a claim. Also, your analogy does not seem analogous to me.


Analogy isn't any kind of argument. It is an aid to understanding.


I agree. We're talking about phones, not a restaurant.


Sir, this is a Wendy's. We actually cannot serve you lobster. Our kitchen doesn't support cooking lobster, so sorry, you will have to go find somewhere else.


Sir, you cannot order the Wendy's Baconator burger on account of it being the second Tuesday of the month, please try something else.


Even food goes bad after a while :)


It isn't the food.

> 3rd party software developers insist that I’m running the absolute latest OS

3rd-party food developers insist I'm running the absolute latest gut microbiome ;)


> You fail to understand how software development and maintenance works.

OK, dude. It's not like I've been in software for 25 years, 15 of them being on mobile, both on the OS side and on the third party app side. But, yea, I fail to understand.

On the 3p side, I've heard all the lame excuses, and they are almost all excuses rather than reasons. The big one is that supporting older versions blows up the test matrix (the number of device/OS/API level combinations that need to be validated). I can understand if you're a hobbyist, you might not be able to afford to buy the "myriad of different devices" and aren't staffed to test your app on each one. But if you are any kind of serious business, you signed up for this investment when you decided to write apps. Throwing a couple of older devices running older OSes onto that test plan should not blow your budget, and if it does, you probably shouldn't be writing that banking software to begin with. Also, almost all of your testing is automated, right? (Please say Yes). So it's not like you need to hire more human testers as your test matrix grows. If you are unable to support more than the bleeding edge OS, it makes me wonder what kind of fly-by-night developer shop you are.
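For what it's worth, the "blown-up test matrix" is mostly a cross product that automation handles fine. A minimal sketch (the device and OS names here are made up for illustration):

```python
from itertools import product

# Hypothetical device/OS support matrix; all names are illustrative.
devices = ["phone_2019", "phone_2021", "tablet_2020"]
os_versions = ["v12", "v13", "v14", "v15"]

def is_supported(device, os_version):
    # Assumed rule for the sketch: the oldest phone never got the newest OS.
    return not (device == "phone_2019" and os_version == "v15")

# Every valid combination becomes one run of the automated suite.
matrix = [combo for combo in product(devices, os_versions)
          if is_supported(*combo)]
print(len(matrix))  # 11 combinations to run the suite against
```

Adding one older device here grows the matrix linearly, not exponentially; the suite itself is unchanged.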

The second excuse is about valuing developer convenience over users. "Oh, the new OSes contain cool new stuff that we want to take advantage of, and it's a total bummer to maintain the code paths that support 'legacy' devices." You already have the code that runs on the previous OS or API level, you're choosing to get rid of it, deliberately throwing users under the bus, so that you can clean code up or at least not have to maintain it. Bad tradeoff IMO.

There are a whole bunch of other little excuses for no longer targeting older systems, and most of them boil down to either cost, laziness, or a skill issue. None of them respect the end user.

I have a little more sympathy on the OS side. Sometimes a major step forward on the hardware (particularly the CPU architecture) might make it really tempting to cut off previous versions. I still think both major mobile OSes cut off old devices way too early. I have PCs from the early 2000s that can still run modern Linux distributions, so support for old hardware is usually technically possible, just inconvenient and costly. I'm not asking companies to support devices from 20 years ago, but they could.

Your "security holes" excuse is ridiculous: All major OS vendors already provide security updates to at least 5 previous major OS releases. There is no security reason for an app developer to support only the bleeding edge latest OS.


Part of it is also Apple. They don't hesitate to break things across OS versions, whether on iPhones or Macs, unlike Windows, which will run basically anything ever built for Windows. Also, AutoLayout is bad at adapting to new screen sizes. The iPhone app I built in high school targeting a 3GS was more future-proof than a lot of newer apps because I just used C macros to calculate UI sizes/positions.

End result, even a simple "fart button" app has probably broken several times.


> Throwing a couple of older devices running older OSes onto that test plan should not blow your budget, and if it does, you probably shouldn't be writing that banking software to begin with. Also, almost all of your testing is automated, right? (Please say Yes).

Surely not. E.g., installing on the phone?


Yeah but a lot of apps will require the very latest iOS for no reason other than the dev happened to build it that way per Xcode defaults, and some apps will also require you to use the latest version of the app at all times.


To me it seems you have had bad experiences, and I agree that this kind of simple may be the wrong kind of simple ;)

But: for me, simple means a short and concise solution to a problem/requirement that does exactly what is required and not more.


For me, I try to find the simplest solution that fulfils the requirements of the task at hand. No fancy abstractions/design patterns: most of the time you don't need them, and when you do, you certainly need them in a completely different form from the one you chose. Abstraction based on a sample size of one and "foresight" is a sure mark of a noob :)


I was thinking about code along the same lines: we are modeling, not writing text. This just happens to be the best way to express our models in a way a computer can be made to understand, that is formal enough, and that is still understandable by others.

What current languages are bad at is expressing architecture, and the problem of having one way to structure our models (domain models) vs. the actions/transformations that run on them (flow of execution).

I strongly disagree on the global variable side though...


> I strongly disagree on the global variable side though...

My thinking is that software has been terrible (over-complex) for such a long time that it's time to start questioning our most dogmatic principles, such as "global variables are bad".

Imagine you can instantly see all the dependencies to/from every global variable whenever you select it. This mitigates most of the traditional complaints.

I would argue that adequate tooling that allows for this would dramatically simplify all development. It's the only thing that matters, and it's so absent from every development platform/language/workflow.

If we could only see what was going on in our programs, we would see the complexity, and we could avoid it.
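As a toy illustration of that kind of tooling: a sketch that statically lists which functions read or write a given global, using Python's `ast` module (the sample program and the variable name `counter` are made up):

```python
import ast

# Made-up sample program with one global, `counter`.
SOURCE = """
counter = 0

def bump():
    global counter
    counter += 1

def report():
    print(counter)
"""

tree = ast.parse(SOURCE)
uses = {}
for func in (n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)):
    for node in ast.walk(func):
        if isinstance(node, ast.Name) and node.id == "counter":
            # Assignment targets (including `+=`) carry a Store context,
            # plain uses carry Load.
            kind = "write" if isinstance(node.ctx, ast.Store) else "read"
            uses.setdefault(func.name, set()).add(kind)

print(uses)  # {'bump': {'write'}, 'report': {'read'}}
```

An IDE doing this live, on every selection, is roughly the feature being described.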

Another related bit of dogma is _static scoping_. Why does a function have to explicitly state all its arguments? Why aren't we allowed to access variables from anywhere higher up in a call stack?

What you realize is that all of these rules exist so you can look at plain-text code and (kind of) see what is going on. Like much of programming, this is a holdover from low-powered computers without GUIs. Even if an argument is explicit, if it's passed down through 10 layers, you still have to go look.


Mutable global variables only work for linear code with no concurrency.
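A minimal Python sketch of why: four threads bumping one mutable global. The read-modify-write has to be serialized, or interleavings can silently lose updates.

```python
import threading

counter = 0  # mutable global shared by every thread
lock = threading.Lock()

def bump(n):
    global counter
    for _ in range(n):
        # `counter += 1` is a read-modify-write; without the lock,
        # two threads can read the same value and one update is lost.
        with lock:
            counter += 1

threads = [threading.Thread(target=bump, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000 with the lock; possibly less without it
```

And the lock is exactly the kind of hidden coupling that a function-local value never needs.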

Having code be tree-like, or at least a DAG, is how you resolve what is probably most easily visualized as dependency hell.

This is why many modern patterns try to steer people into having their domain logic centered and composed of idempotent functions.

And while they didn't know why back then, it is why accounting has used immutable events for over 500 years.

The notion of a free variable is another lens to think about it.


> globals not bad

Globals are a tool with tradeoffs.

Even with excellent introspection and debugging tools, it's hard to prove anything about the state of a mutable global variable (since it's hard to reason about the effects of many interleaved instructions), so it's hard to prove anything about your program that depends on that state (like whether the program is correct enough), and code accessing that global must be more complicated to account for the fact that it doesn't know much about it.

Suppose you have something that's effectively a global anyway (logging configuration), isn't mutable (math constants), or for some reason you can actually only have one of (memory page table). Sure, you probably gain a lot by making the fact that it's global explicit instead of splattering that information across a deep, uncommented call chain.

Could other use cases exist? Sure. Just be aware that there's a cost.

> dynamic scoping not bad

It's not just a matter of visibility (though I agree with something I think you believe -- that more visual tools and more introspectability are extremely helpful in programming). No matter whether you use text or interactive holograms to view the program, dynamic scoping moves things you know at compile-time to things you know only at run-time. It makes your programs slower, it makes them eat more RAM, and it greatly complicates the implementation of any feature with heap-spilled call stacks (like general iterators or coroutines).

Implementation details vary, but dynamic scoping also greatly increases the rate of shadowing bugs -- stomping on loop iterators, mistyping a variable and accidentally using an outer scope instead of an inner scope, deleting a variable in an outer scope without doing a full text search across your whole codebase to ensure a wonky configuration won't accidentally turn that into a missing variable error at runtime 1% of the time, ....

Modern effect systems actually look a lot like a constrained form of dynamic scoping, and some people seem to like those. Dynamic scoping isn't "bad"; it just has costs, and you want to make sure you buy something meaningful.


> Another related bit of dogma is _static scoping_. Why does a function have to explicitly state all its arguments? Why aren't we allowed to access variables from anywhere higher up in a call stack?

E.g. dynamic vs lexical scoping. Dynamic scoping used to be more popular and you can still use it in some languages like elisp. In some situations it's a natural fit for the problem, but I think in most cases lexical scoping is simply easier to use.
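In Python you can get a tame version of dynamic scoping with `contextvars`: callees see whatever binding the current call stack has established, with no explicit parameter passing. A sketch (the names `log_level`, `binding`, etc. are made up for illustration):

```python
from contextlib import contextmanager
from contextvars import ContextVar

log_level = ContextVar("log_level", default="INFO")

@contextmanager
def binding(var, value):
    # Establish a binding for the dynamic extent of the `with` block.
    token = var.set(value)
    try:
        yield
    finally:
        var.reset(token)

def deep_helper():
    # No `level` argument anywhere in the call chain.
    return f"[{log_level.get()}] working"

def middle():
    return deep_helper()

print(middle())                      # [INFO] working
with binding(log_level, "DEBUG"):
    print(middle())                  # [DEBUG] working
print(middle())                      # [INFO] working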


Yeah, static = lexical.

> easier to use

With plain text editors for sure. You really need a mandatory re-imagined IDE to make it work.

You need to be able to see exactly where the variable is coming from...which I think would be a good feature anyway.

And for this you really need a live programming environment...which I think would be good too...but they are very rare. Everyone is obsessed with static typing these days, but runtime value tracing is more useful imho.


And I find your judgement on this quite disturbing, even arrogant.

There is a reason nearly all religions take a stance against killing, even killing yourself!

And for me it starts with: who am I to judge that somebody should rather be dead than alive? Do you REALLY know?


That somebody is simply gone. It doesn't make sense to view it any other way.

It's not a disturbing view. However, as long as the affected person is not capable of suffering anymore, it is fully up to the family to decide what they prefer. So either way can be valid and ethical then. It can be ethical to hang on, and it can be ethical to let go.

(That being said, if there is a chance that the affected person could recover, or even not recover but suffer in the moment, then the question becomes very complex. I think most would admit that the complexity exists.)


But the son isn't gone from the experience of the father, why should he pretend like he is?


I think you forget the fact that his son has zero ability for self-preservation. In some cases we may make things that have no chance of existing on their own persist, but in this case, what does it serve but our own selfishness?


Every human being is born with zero ability for self-preservation...


And we help them out for their own sake, so that they can live a life of their own eventually.


I don't believe in selfless acts. All selfless acts are done for selfish reasons.


What a bullshit response.


It's my sincere belief; if you think differently, then we can agree to disagree. For myself, I've never done a selfless act. All my so-called selfless compassion has ultimately been selfishly motivated. Therefore I can't judge the father for keeping his son alive.


What a strange thing to say. Come on, you must have done at least something small as a selfless act, at least something that did not inconvenience you very much. We do this all the time, and not just to feel better about ourselves.


I believe it's selfish to keep somebody going when they have absolutely 0 ability for self-maintenance.


I believe the comment you reply to is disturbing (or rather insensitive) only because the article author is reading this thread.

But we should be able to have the discussion the grandparent poster wants to have.

If the author wants to do what he described in the article, that's fine – it's up to him and his partner.

However, we should as a society not expect it as a standard IMHO. No one should be expected to sacrifice themselves like that, for apparently little reason. It seems irrational and painful.


I am certain that in some areas it has. But on the other hand, in others it has damaged it: screen time is an awesome feature for measuring "destroyed" productivity ;)


Sure, but to such a fundamental degree? Or was it IT? The internet is not IT; supercomputers and simulations, databases and so on are not the internet...

I like Google, but Google is only as good as the stuff out there, and finding useful content (that is not popular) is hard, and it gets even harder the more content is produced.

Greed in the form of ads also doesn't help much...


Why do you think computers have progressed so quickly? It’s not because of simulations.


Science was there long before the internet, and if history teaches us one thing, it is that gaining fundamental knowledge takes time, often generations (where one generation must die out for new ideas to spread).

So no, the internet may be good for lots of things, but it would by no means replace the groundwork, laid over centuries, that was required to come up with modern medicine...


Esp. when it comes with all the stress of modern work: real-time communication, social-media pressure, constant advertising.

Now that I have kids, I grow increasingly conscious of the effects this has on them.

