It sounds like your argument is that writing ANY software is easier than traditional engineering, which I would agree with, but I think it's a poor comparison.

Most software shops aren't practising engineering, so to compare them to traditional engineering seems odd.




Software engineering is poorly defined. You say most shops aren't practicing engineering, yet every software guy who works at these shops has the title "Software Engineer."

Let's dispense with the linguistic technicalities. Software, whether you want to attach the word engineering to it or not, is easier than other forms of engineering.


I agree software engineering is poorly defined, but I don't think it's just "linguistic technicalities".

There is an enormous difference in the work between shops that hack code together and those that have well-defined processes.

The former is nothing close to engineering whereas the latter is very much engineering.

Also, on the title thing, I think titles are basically meaningless everywhere. Engineering grads will also call themselves engineers regardless of the kind of work they do.


The titles are as meaningless as the process itself. I understand what you're getting at. Everyone understands.

I'm saying these processes are pointless. They're made up: an arbitrary set of methodologies and plans for executing certain things, invented mostly by project managers.

It's like company bylaws or procedures for scheduling an event. Engineering processes are usually designed around some form of mathematical model or statistical phenomenon. This is not the case for software. Software processes, aka software engineering, are just a made-up set of arbitrary planning procedures.

Shit like kanban or planning poker comes to mind.


Things like Kanban were not at all what I was thinking of.

Software processes are designed with mathematical/statistical backing when needed. For example, A/B testing, anomaly detection and merge queues.
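
To make the A/B testing example concrete, here is a minimal sketch (in Python, with invented numbers) of the kind of statistical check that sits behind an A/B test: a two-proportion z-test comparing conversion rates between the two variants.

    # Minimal sketch of the statistics behind an A/B test: a two-proportion
    # z-test comparing conversion rates of variants A and B.
    from math import sqrt
    from statistics import NormalDist

    def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
        p_a, p_b = conv_a / n_a, conv_b / n_b
        p_pool = (conv_a + conv_b) / (n_a + n_b)                # pooled rate
        se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error
        z = (p_b - p_a) / se
        p_value = 2 * (1 - NormalDist().cdf(abs(z)))            # two-sided
        return z, p_value

    # Invented numbers: 120/2400 conversions on A vs 150/2400 on B.
    z, p = two_proportion_z_test(120, 2400, 150, 2400)
    print(f"z = {z:.2f}, p = {p:.3f}")  # roll out B only if p clears your threshold

The point is that the decision to roll out a variant is backed by an explicit statistical test rather than a gut call.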

> arbitrary planning procedures

I don't think this is an accurate description. In traditional engineering the main focus is: does it work, and is it cheap? In software, both of these are usually non-issues, so things can vary a lot more.

For example, recently there was a blog post by a company about not doing code reviews by default to allow them to move faster [0]. They are making a conscious effort to optimize their process for speed. Other companies (or even teams within companies) will make different optimizations. A service that must be highly available may have 100% test coverage as a requirement, etc.

I suspect you will call this arbitrary, but I see these process decisions as conscious choices. The fact that these aren't grounded in mathematics or statistics is not a big deal to me, because the goals/focus are more complex than "does it work and how much does it cost".

In software everything is a tradeoff rather than having an absolute answer.

[0] https://news.ycombinator.com/item?id=29792859


>For example, A/B testing, anomaly detection and merge queues.

A/B testing is arguably a product-based thing. It can be called "engineering" but it's not strictly a software thing. It's associated more with UI.

I've never seen anomaly detection used with software development. Looks more like a data/ML thing.

Merge Queues are just a collaborative tool. I mean, you can use them on any document outside of software engineering. Say, for example, CAD drawings where a bunch of people work collaboratively.

I don't think your examples are focused enough to be strictly things that are part of "Software Engineering" in the same way agile isn't strictly "Software Engineering."

>Software processes are designed with mathematical/statistical backing when needed.

This is rarely done. Very few statistical methods make it into the dev process or are even influenced by it. When they do make it in, there's no common methodology either; it's usually just some data-driven behavioral change based off of analytics. The reasoning behind why it's like this is clear: at its core software is deterministic. Code is simply a series of axioms and theorems that can be logically proven correct. Statistics is for unknown processes and is usually employed at the intersection of software and real-world stuff, like the failure rate of SSD drives.

A passing unit test does give more confidence that a program is correct. This is a statistical phenomenon, but hardly anyone tries to treat it quantitatively from a statistical standpoint.
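
For what it's worth, here's a rough sketch of what quantifying that could look like, assuming the N passing tests are independent, randomly sampled inputs (essentially the "rule of three" bound on the residual failure rate):

    # If the true per-input failure rate were p, the chance of N random
    # inputs all passing is (1 - p)**N. Solving for p at a given confidence
    # level gives an upper bound on the failure rate that is still
    # consistent with having seen N green tests.
    def failure_rate_upper_bound(n_passing, confidence=0.95):
        return 1 - (1 - confidence) ** (1 / n_passing)

    for n in (10, 100, 1000):
        print(n, round(failure_rate_upper_bound(n), 4))
    # 10 -> 0.2589, 100 -> 0.0295, 1000 -> 0.003

But in practice nobody on a dev team computes anything like this, which is exactly my point.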

>does it work and is it cheap? In software, both of these are usually non-issues so things can vary a lot more.

I disagree. These are issues, and we do use methods to mitigate them. We want our software to work and we want to build it at a certain cost. Agile and measuring velocity are a way to monitor costs, and working software is verified mostly through testing.

But most of this stuff isn't engineering. It's done out of necessity. There's no mathematical modelling going on, as it's hard to even quantify the cost of software, or its correctness. Hence it's all arbitrary processes made up by managers.

>I suspect you will call this arbitrary, but I see these process decisions as conscious choices. The fact these aren't grounded in mathematics or statistics is not a big deal to me because the goals/focus are more complex than does it work and how much does it cost.

All decisions made in any field, even, say, plumbing, are conscious choices. The difference is that in engineering we use science and mathematics as much as possible to optimize these choices. We don't in software, mainly because it's hard to model.

>In software everything is a tradeoff rather than having an absolute answer.

Everything in life is a tradeoff. It's like this in fields OUTSIDE of engineering as well. The thing with engineering is you try to optimize your choice using math and science as much as possible. In software no such optimization procedure exists.


That is not what I meant by a Merge Queue; this is: https://news.ycombinator.com/item?id=21584144.

> The thing with engineering is you try to optimize your choice using math and science as much as possible. In software no such optimization procedure exists.

It's comparatively easy to optimize something with an absolute answer. Ex: will this building fall down with x design?

It's practically impossible to optimize something like: will our software development process allow us to deliver new functionality faster and win market share?

There is far more ambiguity in software development than in other fields.

https://en.wikipedia.org/wiki/Software_engineering#Definitio...

https://en.wikipedia.org/wiki/Software_engineering#Criticism


>That is not what I meant by a Merge Queue,

I meant exactly what you're referring to. Source code isn't the only thing that can go under source control. As long as the file isn't fully binary and is in a somewhat human-readable format, it can be subject to version control and therefore a Merge Queue.

>It's practically impossible to optimize something like: will our software development process allow us to deliver new functionality faster and win market share?

This is my point. It's impossible in the same way optimizing a still-life painting is impossible. Therefore it's not really engineering.

>There is far more ambiguity in software development than in other fields.

Edsger Dijkstra had it right.



