A New Software Engineering (acm.org)
111 points by henrik_w on Nov 30, 2014 | 91 comments



Another silver bullet in the making.

I don't mean to be rude but to me it's pure BS. A grand plan, a conclusion talking about paradigm shifts, and no substance at all. Seeing the method co-signed by Robert Martin even made me chuckle a little. I guess we're on for a second round.

This is clearly meant for management (again) and doesn't give a hint of a clue about what the practice of software engineering actually is.

Won't they ever understand (I use "they" on purpose, as they're clearly not us) that software engineering is the process of discovering and documenting, in a formal language, the methods to be applied to solve a problem defined in an ambiguous natural language using imprecise concepts? What we really do is assess the validity and feasibility of what a user wants, and along the way refit concepts and articulate them in a way that can actually work or formally make sense. Building software is the act of discovering that the implicit things we take for granted actually aren't so. This is a discovery process. This precludes precise estimates or estimates at all, which is very difficult to accept and live with, I concur. In most cases, and in large part, it has to be treated like a research endeavour.

We'll have some day to take things at face value and learn to live with them.


Your comment is actually one of the best formulations of software engineering I've ever read. However, I'd disagree, at least to some extent, with your conclusion:

"This precludes precise estimates or estimates at all..."

While we software developers do occasionally solve novel problems that can't be estimated since the methods for implementing them are not currently known (i.e., research problems), most of the problems we work on are variants of problems we've already solved, and our experience - especially if we record it and make an effort to learn from it - can be used to estimate the scope and complexity of a project (including the "discovery process").

Sure, sometimes our estimates will be way off, but an estimate that's within a factor of two or five of the actual cost of a project is more useful than no estimate at all.

The people who are paying us (management, customers, investors, etc.) will eventually want to know what they're going to get for their money and when they're going to get it - and sometimes those questions need to be answered before they provide any funding at all, since a solution might be completely useless to a customer if it's not available by a certain date.


I should have worded it "This precludes precise estimates and sometimes estimates at all", since of course, depending on your knowledge of the problem domain and your field experience, you can reach some degree of confidence.


Software engineering is, in a way, a meta-engineering discipline. Problem domains are vastly different: mobile game development is different from building system software for a surgical robot. The first step is to define the vocabulary of the solution space and then build the solution based on that vocabulary. Expectations around reliability, maintenance, and requirement changes add another dimension to the solution space.


The best way I've heard this described is this:

Engineering has a well-defined constant you're always building against: gravity.

In software, you choose your gravity, and it changes all the time. Sometimes there are twelve gravities. Sometimes there are different types, and they interact in complex unpredictable ways.

In short, it is a much more difficult problem. But personally I believe we can reach a meta-engineering capable of achieving quality in a predictable way.


Engineering is not as simple as building against gravity. There are many things that "engineers" (non-software) have to take account of, even in something as mundane as laying out a road. You not only have to build the road within the available area; you have to give it the right curvature so that vehicles can make the turns, make sure it has a solid base, and make sure it can withstand the natural forces of the region for however long it is designed to last. So I don't buy the claim that engineers just build against gravity.

But if you're saying that engineering has to follow the laws of physics, then yes, I'm totally on board. Computer science departments in most universities started off as branches within the mathematics department. While most of software engineering is more than simple math, it operates on its own plane, with no regard for the laws of physics (except maybe time :) ).

In that regard, software engineering seems more of an art, doesn't it? But it does have its own guidelines, which govern how good software is written.


A road is a bad example. The only reason a road is required is because of gravity.

I disagree with your hand-wave of "software = art". Software is also bound by many constraints. Time (as you mentioned) is not trivial, because it's the difference between useless and useful software. Memory also imposes limitations that can't be ignored.

Useful software runs on real machines bound by physics. Theoretical computer science can disregard the constraints to see what's possible, but that's CS and not SE.


Software engineering is an art in that it's more like theory building than theory using. Each project, each context is its own set of new physical laws.


This is the point, yes.


Software is not all made up out of thin air -- you do not 'choose your own gravity'. The equivalent of physical laws in software are computational/algorithmic, logical laws.

For example, sorting cannot be done faster than O(n log n) -- that is as hard and objective as anything physical. (In fact, one would think it is even harder in some sense, since it is so purely logical.). Software is built within algorithmic constraints.


Only comparison-based sorting is bound by O(n log n). Non-comparison based sorts can do better.
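
As a concrete aside, here's a minimal counting-sort sketch in C (my own example, not from the thread): O(n + k) for n byte-sized keys with k = 256, beating the comparison bound precisely because it never compares two elements.

    #include <stdio.h>

    /* Counting sort for byte keys: O(n + k), k = 256. No element is
       ever compared with another; we index by value instead. */
    void counting_sort(unsigned char *a, size_t n)
    {
        size_t count[256] = {0};
        size_t out = 0;

        for (size_t i = 0; i < n; i++)
            count[a[i]]++;                     /* histogram of values */

        for (size_t v = 0; v < 256; v++)       /* rewrite in sorted order */
            for (size_t c = count[v]; c > 0; c--)
                a[out++] = (unsigned char)v;
    }

    int main(void)
    {
        unsigned char a[] = {5, 200, 3, 7, 3};
        counting_sort(a, sizeof a);
        for (size_t i = 0; i < sizeof a; i++)
            printf("%u ", a[i]);               /* prints: 3 3 5 7 200 */
        putchar('\n');
        return 0;
    }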


>What we really do is assess the validity and feasibility of what a user wants

Then why do software developers get no training in talking to users? In understanding them and making sense of what they tell us and what they really mean?

This, to me, is the biggest shortcoming of today's software 'engineering'.


Software engineering was taken more seriously 20 years ago than it is now. There have been some notable successes of rigorous development, but they're not well known. Here are two in wide use.

The first is the operating system kernel in the air link processor of mobile phones. In most current phones, that's an L4 kernel with a full proof of correctness. Since any mobile phone can potentially knock out all phones for some distance around if it doesn't follow the sharing rules for the air link, this is important to carriers. They got it right. Nobody talks about this much, but if that layer had problems, there would be regular cellular blackouts.

The second is the Windows Static Driver Verifier. This has been used since Windows 7. It verifies that kernel drivers don't crash, clobber memory or call the driver APIs incorrectly. Before the Static Driver Verifier, drivers accounted for more than half of Windows crashes. Now, crashes from signed drivers are very rare, and usually involve getting the driven device itself to do bad DMA operations. (IOMMUs are coming along to stop that.)

This shows the right direction for software engineering. Some software really matters, and has to be engineered properly. Most software doesn't matter all that much. Engineered systems should separate the two, develop them in different ways, assume the low-grade stuff will crash, and architect systems so the low-grade stuff can only do limited damage. We're seeing architectures like that in the mobile world and in server-side systems. In the mobile world, "apps" run in relatively contained environments. In the server world, things seem to be moving towards containerized "apps" in systems like Docker, running on some minimal glue layer inside a container running on a secure microkernel such as Xen and talking via message passing.
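
As a toy sketch of that containment idea (my own illustration, assuming plain POSIX rather than any particular container stack): run the low-grade "app" in a separate process that can only reach the trusted side through message passing, so a crash in the app is contained.

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        int fds[2];
        if (pipe(fds) == -1) { perror("pipe"); return 1; }

        pid_t pid = fork();
        if (pid == 0) {                     /* untrusted "app" */
            close(fds[0]);
            /* real code would also chroot, drop uid, apply seccomp... */
            const char msg[] = "result:42\n";
            write(fds[1], msg, sizeof msg - 1);
            _exit(0);
        }

        close(fds[1]);                      /* trusted supervisor */
        char buf[128];
        ssize_t n = read(fds[0], buf, sizeof buf - 1);
        if (n > 0) {
            buf[n] = '\0';
            printf("app said: %s", buf);
        }
        waitpid(pid, NULL, 0);              /* an app crash ends here, contained */
        return 0;
    }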

That's software engineering.


"The first is the operating system kernel in the air link processor of mobile phones. In most current phones, that's an L4 kernel with a full proof of correctness. Since any mobile phone can potentially knock out all phones for some distance around if it doesn't follow the sharing rules for the air link, this is important to carriers. They got it right. Nobody talks about this much, but if that layer had problems, there would be regular cellular blackouts."

That's... an interesting claim. You're basically saying that a cellular network is wide open to DOS attacks. I would need to see some serious proof before accepting such a claim.


GSM-type cellular uses time division multiplexing, with many handsets on the radio channel. Any handset which transmits during another handset's time slot will interfere with the other handset's signal.


Any wireless link is open to DOS attacks: jamming.


Jamming is not a DoS attack. The essential feature of a DoS attack is that it forces the receiver to expend resources, which it then cannot devote to legitimate users of the service. Jamming does not force a cell tower to use resources, hence it's not a DoS attack.

To put it another way, if jamming is a DoS attack, then so is hacking a server and reconfiguring its DNS. But we don't call that type of attack a DoS attack, we call it a penetration, or simply a hack.

And yes, wireless communications are susceptible to more physical attacks, such as jamming, but jamming is a) easy to track down, and hence dangerous to execute, b) expensive, since you need to invest in special hardware to do it, and c) surprisingly difficult to execute effectively in a radio environment like a cell network, which is generally quite adept at routing around things like jamming (if you put your jammer between me and the antenna, my phone will in all likelihood simply find another antenna to connect to on the other side of me - with signal strength dropping with the square of the distance, your jammer is going to need to be BIG to jam me effectively).


This starts out talking about engineering having a "theory" to work with, meaning physics, materials science, etc., that is the source of its ability to construct new reliable systems.

Then it proposes a mechanism to develop a "theory" of software engineering where the generated theory seems to consist entirely of methods of project management.

Am I completely missing the point of the article? Because that doesn't seem at all parallel. Engineering has methods of project management too, but that's not what makes bridges not fall down.


Knowledge workers are completely screwed when it comes to getting things done if the project management techniques are crappy.

What you're missing here is that these processes/tools are light-weight and are easy to keep track of. Pick the alphas that you need, read the description on the card and if you match, that's your card for that alpha. Look at that! You just quantified how well your team is doing.

We have theories and proofs and code for everything in programming, but we don't have a good process.


What is the equivalent in other fields that build things? I mean, project management must be studied in other disciplines, and what have they come up with?


A book I read, Industrial Megaprojects, suggests that everyone else makes a lot of the same dumb mistakes we do.

We're not comparing ourselves to civil, structural, process, mining, aeronautical or industrial engineering.

We're comparing ourselves to flawless platonic ideals of those professions that we invented whilst talking amongst ourselves.


Thank you! I wish more people would go and learn what engineering is in other fields before trying to discuss software engineering. I recently had an argument on HN about the particulars of this topic: https://news.ycombinator.com/item?id=8633616

Another book I can recommend on the subject is Petroski's To Engineer is Human.


A good book, definitely. I reviewed it here:

http://chester.id.au/2013/07/07/review-to-engineer-is-human-...


The article points out in the beginning that borrowing project management practices from other disciplines is what we tried to do (which gave us The Waterfall method) and it doesn't work.

That said, I agree with the GP in that I was expecting more focus on actual software matters, like how to test the reliability of a system before it's built etc.


I don't think waterfall is actually used in other disciplines, perhaps we were just cargo culting back then?

But my question is: if SE is supposed to be based on a theory of project management, then what is the theory used in other disciplines? Or do other disciplines not see project management as their underlying theory, but rather something harder, like physics or chemistry?

It sounds like there must be a general field of project management out there... getting people to apply technical skills to get things done. It is probably a soft science, but definitely necessary. So why not talk about this aspect of SE as a sub-field of that field, rather than as an aspect of a harder science (computer science)?


> I don't think waterfall is actually used in other disciplines, perhaps we were just cargo culting back then?

I think this is half right. There was quite a bit of cargo-culting going on. But the fundamental error was to try to apply the project-management process for the construction of other kinds of artefacts to the design of software. I wrote about this very topic here:

http://www.zerobanana.com/essays/reclaiming-software-enginee...

You're correct that no other engineering discipline attempts to use waterfall-style project management of its design process. They don't really use it in the construction process either (in practice, design tends to continue alongside construction), though it probably has at least been attempted.

It was no surprise to see that the authors of this piece are in fact among the originators of the Rational Unified Process. They claim to have corrected their mistake; in fact they are still pushing the same wrongheaded ideas as they were in the 1980s.


"They don't really use it in the construction process either (in practice, design tends to continue alongside construction)..."

For something like a high-rise office tower, the design decisions you can change after construction is in progress are quite limited. For example, you can't add ten more floors to the building as an afterthought, since that would involve adding extra elevator shafts, emergency stairways, water and sewage lines, etc. that would cause significant disruption to the floors of the building that have already been built. Similarly, redesigning the layout of a floor to accommodate twice as many people would require similar changes to stay compliant with building and safety codes.

I suppose if you're constructing a single-family house, there's much more leeway to change the design during construction - but it would still be costly.


> I don't think waterfall is actually used in other disciplines

From http://en.wikipedia.org/wiki/Waterfall_model:

"The waterfall development model originates in the manufacturing and construction industries; highly structured physical environments in which after-the-fact changes are prohibitively costly, if not impossible. Since no formal software development methodologies existed at the time, this hardware-oriented model was simply adapted for software development."

> It sounds like there must be a general field of project management out there

Yes, project management is a discipline in itself. Why the article is calling its project management framework for software development "software engineering" is not clear. Probably because it sounds more promising; it promises to be the silver bullet:

http://worrydream.com/refs/Brooks-NoSilverBullet.pdf


Erm, how to put this delicately...it's difficult to design and construct a bridge or refinery in two-week sprints.

These are projects that span years--you bet your hat that they do that design work up-front and try not to make changes otherwise.


Actually, what gave us the waterfall method was an attempt to self reflect on how we built software. I can't help but see irony there.


It's not like we can't design very elegant, robust, reliable software, you know.

We just can't find anybody to pay for us to retool the whole stack (and I do mean the whole stack, since we're only as strong as our weakest link) while the current ad hoc solution operates within acceptable parameters.

The guy who wrote this paper, in my opinion, is missing two really bedrock principles of "pure" engineering: manufacturing tolerances and cost. If it works as well as it needs to and comes in under budget, it's Miller time. There's a reason every stereo ever made goes -clunk- when you push the 'on' button. There's a reason none of the walls in your house are exactly plumb.


Addendum:

The other real serious mischaracterization here is likening software to a physical product, even one as complicated as a skyscraper.

Software is a factory that makes products, whether they're HTML pages, or graphics on a screen, or inputs to an industrial controller. You start looking at how to engineer and design and manage factories, and a lot of the chaos of computing looks very familiar. E.g., "367 days since someone lost a limb in a major industrial accident."


The factory analogy is extremely apt.

As in a factory, Quality in software is a result of a complex myriad of factors, but it reduces to some simple concepts: an understanding of psychology, a deep understanding of statistics, knowledge and application of systems theory, and the simple idea of epistemology and scientific method applied to management and production. Finally, leadership and universal application of these concepts throughout an organization, using a PDCA cycle of continuous improvement (Agile is a rough one).

W. Edwards Deming had this all exactly correct way back in the 1940's, after WWII, where he taught these concepts to Japanese companies, transforming them from cheap crap makers into the quality powerhouse economy we know and love.

We should listen to him again. That's the "new software development" we need.

http://en.wikipedia.org/wiki/W._Edwards_Deming


I've been increasingly wondering if the cost of building reliable, effective, and secure software in all the places we use it... is just more than we can afford on a social level (like a higher percentage of GDP). With that price mostly being people, of course.

When we say "We don't know how to build good software", does that really just mean "We don't know how to build good software cheaply enough for the businesses that use it to still remain sufficiently profitable"?

I may not have said this quite right, I keep coming back to it and trying to think it through more. It's not a popular thing to say on HN.


Given the costs involved in making for real high reliability software (think planes, pacemakers, Mars rovers), profits don't factor into it at all-- we're looking at 10x the costs at a minimum. Nobody pays for that level of reliability without an excellent reason.


I'm not talking about making all software as reliable as is needed for planes, pacemakers, etc.

I'm talking about making all software as reliable and secure as is appropriate for its context. I think there's a general opinion among many (especially software engineers) that most currently deployed software is not as high quality as it should be, hence the concern: "What are we doing wrong? Do we not know how to make quality software?"

Of course, the context and the consensus expectations for reliability/security can change, which is part of what's happened, as software has become more integral to the society and economy.


Well, I think I mostly agree. The only distinction I would draw is "good enough for its context" is synonymous in my mind for "as good as I'm willing to pay for." Everybody wants to pay for hamburger and eat caviar, but that ain't how the world works.

For all the increased whinging about software reliability, I have not seen a corresponding increase in what people are willing to pay for software. This indicates to me that we've achieved a level of rough market equilibrium.

But I would be thrilled to be wrong about that. I have a whole laundry list of refactors I'd love to take on. Cleaning up after myself is a luxury I am ill afforded.


Recipe for making great software:

- Build a good team. Keep management overhead and bullshit to a minimum. Motivate people by giving them really good people to work with and not pulling crap compensation games.

- Do your product three or four times. Rule of thumb: When you're absolutely sick of re-writing the thing and you're ready to work on something else (probably at about this many repetitions) you're reaching the point where you have a decent solution.

- Work on something hard and fun that people actually want to buy.

- If someone in management tries to cram a methodology down your throat, or a salesman knocks on your door with a rad new way to do scrum, kill them and stuff the body with all the other miserable wastes of oxygen who tried to bamboozle you with snake oil.

I dunno. This stuff is just bloody hard, and every silver bullet I've ever seen -- especially the ones that management loves or that use the word "paradigm" in their literature -- has been a dud.


This.

Things go wrong in a team (with or without a particular silver bullet) when there are bad or unmotivated programmers, or when there are no experienced leads to pass on the wisdom.

Recruiters more often than not don't realize this is a sure recipe for failure. Since degree programs don't produce software engineers but merely (in the best case) algorithm engineers, and since software engineering is not a thing yet, the only thing we can rely on is the past successes of experienced individuals.

Methodologies won't remove complexity; they only add a (time-consuming) component to your complexity average that makes things look a bit better at a cursory glance, but they won't reduce your codebase complexity and maintenance costs at all.

Try instead making your team spend half of its time doing simple things totally unrelated to your project or product (like piling up cubes all day long) and you'll feel the exact same relief (half better) and have the exact same productivity gain (zero).


Software is more a design activity than an engineering activity. In the construction world it would be more like architecture than civil engineering. Architecture in the sense that you do need technical knowledge, but it's much more about how to design a building for people who are engaged in certain activities. What are their needs? How does it fit in with its environment? Soft things, but difficult things nonetheless. In a way, the work of the civil engineer is done for software by the language, library, OS and browser developers (and the hardware people, obviously).


In the US and many other parts of the world architects are professionally liable for the public welfare in general and life safety, regulatory compliance and system performance specifically of the buildings they design.

The practice is regulated in these places because there is very little soft about people dying. It is the absence of a culture that comes from individuals accepting such responsibility that concerns people like the author and Uncle Bob.


I don't mean that all of what we do is soft. I mean about 50% of it (in my case anyway; the rest I spend on data structure design, performance, testing, maintainability, etc.). I'm saying that it is more like architecture than civil engineering. Because it has such a large soft design component, it will always elude a hard engineering approach.


Are they? I worked for a top-5 consulting engineering firm, and the buck stopped with us in terms of the design; all the architects did was make it look purty.


It depends on what you are working on. I would argue front end is more like architecture and back end is more like civil engineering.


I struggle to come up with a distinction between the two, with respect to the original post. The design work that goes into programming is largely irrespective of the problem. In fact, if you stick to common design methodologies you often won't even see a significant difference between "front end" and "back end" work – and I might even suggest the line is completely arbitrary.


I'm thinking about things like animations & css work. That is almost entirely aesthetic, and thus more like architecture. Even when I'm writing the entire app, I draw a pretty clear distinction between 'front-end'/UI and 'back-end' type work.


That seems more like paint to me, which is certainly important in its own right, but not the core focus of an architect, or what I think the parent was trying to convey. The architect is more concerned with structure and with conveying feeling to other people - which, in the case of code, means other programmers. Attention to aesthetics is important, but those aesthetics are more like how to appropriately use whitespace to evoke a sense of beauty when the next person catches a glimpse of your work in a text editor, to ensure that the reader understands that "a door is a door and not a window", things like that.


Computer science isn't a science and it's not about actual computers. Software engineering isn't engineering and is really about the limits of people, not software in and of itself.

Nevertheless, my job title is "Software Engineer". Where it makes sense, I model myself after our elder cousin professions. When it doesn't, I don't.


I have that title too; I feel that basically "coder" + "process" = "software engineer."


Only when we can specify the design of a software product with the same precision that we can specify the design of a skyscraper will we ever see the failure rates of software "engineering" efforts approach those of other engineering fields. As any practicing programmer can tell you, even the best upfront design specs are imprecise and incomplete. And most of the time those specs are changed over and over again during the development effort so radically that the final product is often barely recognizable.

I put the blame for this firmly on the shoulders of the "clients". We can and do achieve high success rates with low defect counts in special cases where the goal is precisely and clearly defined and the development budget is sufficient. When we start caring about the quality of a website or mobile app as much as we care about the quality of the software that runs airline flight control systems or medical devices then we'll see real software "engineering" emerge.


There are many hard problems in computer science and software engineering. As it turns out, the solution to these problems is spider charts.

Finally, we'll be able to write distributed low-latency software that interacts with both legacy systems and browsers with complete end-to-end type safety, provable correctness and fault tolerance! In half the time, at half the cost!

With spider charts.


But don't forget the cards.


So the author is basically saying 'we need to improve software development', and trying to use history as an example of 'what we need' in order to improve it.

Back in the day, craftsmen were just people who were so specialized in a trade that they could build amazingly complex and difficult things through example and practice. But it took a long time to learn this craft from another craftsperson, and knowing this one trade so well left them at a loss for other aspects of the thing they built (resulting in things like building collapses).

Modern software developers are the same. Indeed, we're even going through the cultural shift that happens when different civilizations revisit the same things without really looking at how their forebears did it. We're still reinventing the wheel instead of creating a better one, and we're far from creating any new transportation mechanisms.

In order to achieve the kind of evolution from 'craftsman' to 'engineer' that existed for physical architecture, software developers need to learn about things other than software. It's not enough to simply learn how the kernel works, or the hardware works. You have to learn how all those pieces work with other pieces, and the resulting interactions between different natural and non-natural processes with computer systems.

The author gets to those points with his 'essence' of new software engineering. But it gets a bit bogged down by being neither generic enough nor specific enough. It's simple to see the difference between a craftsman's output and an engineer's: the application of scientific principles to achieve an output we can be much more confident in. And that's what the 'new software engineering's' goal should really be: producing a more reliable, reproducible, safer product.

Are the proposed methodologies going to get us there? I don't think so. I think we need less process, and more science, and to create software which has science as an inherent requirement of its design and implementation, and not merely an afterthought for performance reasons.


I HATE WHEN PEOPLE MISUSE THOMAS KUHN. Read the goddam book, or actually read it and try to understand it. A scientific revolution occurs in opposition to a period of normal science, and we never had a normal-science period in software development (or engineering, if you like this awful term), so it makes no sense to invoke Thomas Kuhn here. He is a profound and complex author discussing an epistemological paradigm shift, really different from the nonsense here.


Automated tests are the closest thing to actual engineering in software. While communication methods like agile are important, to me they are still outside the scope of actually engineering software. I've found that one of the most significant factors in the quality of the software I've produced is the test coverage of the code, and not just lines of code but the different branches of a function, which honestly feels quite impossible to cover fully, especially if exceptions or threads are involved.
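
To make the line-versus-branch distinction concrete, here's a tiny C sketch (the function and the gcc/gcov workflow are my own illustration): a single test executes every line, so line coverage reports 100%, yet one side of the branch is never taken.

    /* clamp.c: calling clamp(150) runs every line of the function,
       but the "x <= 100" side of the if is never exercised. */
    int clamp(int x)
    {
        if (x > 100)
            x = 100;
        return x;
    }

    int main(void)
    {
        return clamp(150) == 100 ? 0 : 1;   /* only the true branch */
    }

Compiled with "gcc --coverage clamp.c" and run, "gcov -b clamp.c" then reports all lines executed but only half the branch outcomes taken at least once.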

Some celebrate TDD, but to me it just appears to help people not drop the tests.

Automated static analysis, via compiler warnings or dedicated static analyzers, is a really good and quick solution, but what I'm really hoping for is a fully automated dynamic analysis solution. What's really exciting is that there is already some dabbling with that approach, like American Fuzzy Lop (http://lcamtuf.coredump.cx/afl/), which actively tries to find new branches in the application being tested.
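
To give a flavour of that: an afl target is just an ordinary program reading stdin. This contrived C sketch (my own, assuming afl's stock afl-gcc/afl-fuzz workflow) gives the fuzzer a chain of branches to discover, ending in a crash.

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        char buf[64] = {0};
        if (!fgets(buf, sizeof buf, stdin))
            return 0;
        /* afl's instrumentation rewards each newly taken branch, so it
           incrementally synthesizes the crashing input "FUZZ" */
        if (buf[0] == 'F' && buf[1] == 'U' &&
            buf[2] == 'Z' && buf[3] == 'Z')
            abort();
        return 0;
    }

    /* typical run:
         afl-gcc -o target target.c
         mkdir in; echo hello > in/seed
         afl-fuzz -i in -o out ./target     (stdin fuzzing by default) */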


I agree with the first half of the article but find the "kernel" concept lacking. Here's why: It doesn't mention actual programming theory.

SCRUM, Extreme Programming, Waterfall, etc. These are all about management and business practices, not about what we do. Extreme Programming has some craft related "practices" (TDD, CI) but even these are mostly in the QA corner.

I think we need a theory about classes of programming problems and their solutions. We need to analyze our code bases, and how we grew those, in order to understand the trade-offs and nature of our approaches. And by this I really mean analyze the code base, the "project idioms", the abstractions, their cooperation.

Examples I recall are from the Lisp world, where the bottom-up onion is a well-liked strategy (layering toolboxes on toolboxes, each made up of many small pieces which can be combined effectively). Another such observation is the debate between many types with few methods and many functions on few types. This is what I want to see researched (and research myself).


I smell someone trying to make money off of seminars and corporate "re-education". There's no doubt that many of our corporate overlords need re-education on agile software development, but this is not the way to do it. They simply need to engage their IT people openly and collaboratively. Nothing else will work.


My software engineering process works as follows:

I try to code as much as possible as early as possible. I throw away lots of stuff and recode it. Beyond that, I keep an eye out for stuff that is "similar" and can be abstracted. If someone wants an estimate, I guess as well as possible.

Big code is ideally split into one-person chunks, each with a documented API, but sometimes many people have to work on the same files. Then the big code is split between multiple people who sit nearby and communicate in person while discussing implementations based on technical arguments.

How to make a product of software is a different story. But I guess it works when you design your product in pieces that can be estimated and adapt fast to changing requirements.

Also I am pretty sure I forgot one or two things...


What I like about SEMAT is that it's straightforward and can encompass waterfall, agile, scrum, or whatever. It quantifies what's happening on your project without relying on traditional MBA-type project management processes.

The barrier to SEMAT, as with Agile, is training, and leveraging the power that programmers/software engineers have in the economy to get it adopted. We're going to have to wait 5-10 years to see any significant adoption. What would really help is guides and documents on how to present this to your team or organization, to make it easier to get it adopted or at least trial-run.


the ethos of software engineering has tended to devalue coders (if not explicitly, then implicitly through controlling practices)

All aboard the software engineering boat: see you downriver for delivery, right past the waterfall!


Truth be told, there's a huge amount of craft in mainstream product design. Only a certain subset of the things that we use are engineered at the level of discipline reserved for things like airplanes and skyscrapers. And hardware designers are comfortable with some things that would horrify software designers, such as relying on closed source tools for critical tasks like structural analysis.

When we discuss engineering discipline, somebody will invariably remind us: "Look, we're not designing airplanes here."


TL;DR some perennial software methodology consultants have come up with a base class for software methodologies.

It seems suspiciously like it has the usual properties of post-facto defined base classes i.e. arbitrary and fragile. But I could be wrong.

BTW if you found equating physics to marking stakeholder-ness out of 6 hard to swallow, you might struggle with "Major-league SEMAT". It reads a bit like an outline for the next Scott Adams book.


I see a lot of detractions here. One of the authors, Ivar Jacobson, is behind UML (together with Grady Booch & James Rumbaugh). Which explains where SEMAT fits - the enterprise. Specifically those that live in a world of class diagrams, sequence diagrams and activity diagrams. I doubt there's much benefit to the typical HN audience (for reasons the comments here point out), but SEMAT has a place.


For more technical methods and theory, I would check out The Five Orders of Ignorance (http://dl.acm.org/authorize?9919) and the book The Pragmatic Programmer.


Five Orders of Ignorance working link: http://www-plan.cs.colorado.edu/diwan/3308-07/p17-armour.pdf - I didn't find it to be of particular value.


I lost it when I saw requirements as a cornerstone. There are no requirements - it's all design. The word 'requirement' is an artifact from processes where you have handoffs from one group of people to another rather than integrated discovery and coding. When you look at things as requirements you lose the fact that most things are negotiable and should be negotiated in service of the ultimate design.

Beyond that, it's worth noting that when the same people go from idea to A/B testing to full production the idea of a 'requirement' seems quaint.


<sarcasm> didn't he already solve this problem in the past? (RUP) </sarcasm>


"Engineering" is only possible in the physical world because the laws of physics don't change every thirty years or so. In software, we aren't so lucky. Bridge-building best practices from the 1950s would build a fine bridge today. Software development practices from the 80s would get you... basically nowhere.


I think you both a) are greatly unfamiliar with bridge building practices and b) underestimate the ability of 1980s software design.

Seriously, were there shortcomings in the software design of yesteryear? Almost certainly. Are they blown vastly out of proportion in most discussions? My assertion is that they are.

Sadly, I do not know that I have anything coherent to offer on how to fix things. What I can offer is that I think we'd do better by not always looking across the horizon for a silver-bullet language/framework/whatever and keeping focused on what the job at hand requires. The number of failures I have witnessed due to a desire to over-generalize is staggering.


But the assertion is that bridges could be built today using techniques from the 1950's, and they would still be good bridges, because the principles and physical laws they were built on haven't changed.

Software design had a lot of cool things happening in the 1980s (possibly more in the 1970s, though), but the mindset has shifted since then. How many people are implementing their own VMs? How many are running on chipsets other than Intel/AMD x86? Things that needed to be heavily optimized in the early '80s might actually run faster now with JIT and whole-program optimizations. And so on.

It's hard to build an engineering discipline out of such a shifting substrate.


And I disagree with this assertion. Heavily. Simply look at the state of most bridges in the US (and elsewhere?) to see that they are not holding up nearly as well as is implied by this assertion.

I don't know what to say regarding your examples. Don't give too much credit to JIT and whole-program optimizations. One could just as easily point to the fact that what used to be too slow a program is now fast enough thanks to the advance of computing speed. And memory. Do not overlook the importance of memory.

What's more, I doubt most programs of yesteryear failed due to lack of optimization. It seems likely to me that the main causes of software failure have not changed that much over the years.


>And I disagree with this assertion. Heavily. Simply look at the state of most bridges in the US (and elsewhere?) to see that they are not holding up nearly as well as is implied by this assertion.

I find that they (from the '50s and even older) hold up just fine.

If software worked initially and survived 60 years as well as bridges from the '50s are doing today, that would be a miracle.


This leads to a few questions, though. First, if bridges were built just fine in the '50s, why do they build them differently nowadays? Second, why is the maintenance cost of bridges decidedly non-trivial? Third, is there any software that old that still works fine?

For the first question, I would only be offering speculation. Google and friends can give pretty good references.

For the second, a quick google gives "The annual direct cost of corrosion for highway bridges is estimated to be $6.43 billion to $10.15 billion ..."[1]

Third, I offer TeX as a good example of old software that has managed to survive for quite a long time. I am always pleasantly surprised when I go to typeset something from 20+ years ago and things "just work."

I think the theme here is that maintenance costs for bridges more often than not entail just keeping them working. It seems that far too often, maintenance costs in software try to include complete rewrites onto new technologies.

[1] http://www.dnvusa.com/Binaries/highway_tcm153-378806.pdf


TeX is not 50 years old. Also, it's borderline unusable to anyone who hasn't mastered what "underfull hbox" means and the like. Internally it's so unsustainable that people are trying to start from scratch with full rewrites. However, because there isn't actually a standard for LaTeX beyond "how TeX renders it", that's a horrific undertaking. TeX is far from "just working".

Onto bridges, just because they build them differently now does not negate the fact that 50 year old bridges are still reliable and safe to drive over because they were engineered well. Their shortcomings and failure modes are well known so maintenance can be performed to prevent collapses.

50-year-old bridges may be expensive to maintain at this point, but the fact that it's still possible shows that they were engineered well. They may build them differently today, but that's more likely related to cost constraints changing (e.g., you can't afford an army of riveters now) rather than the engineering being unsound.


This is pure goal post shifting. Without massive and expensive maintenance, many of these bridges would be unusable, having crumbled to the point of destruction. Pretty much period.

TeX is not 50, I had not meant to imply it was. Just that it is a relatively old piece of software that has had virtually no maintenance compared to many other pieces of software.

Could it have used an overhaul? That's certainly debatable. However, Knuth has had massive success in keeping it working without worrying about use cases that just were not the aim of the software.

To continue the comparison with bridges, it is not uncommon for them to have restrictions saying that large trucks can not cross them. This is not a recommendation, it is a requirement. In the software world, many would have modified TeX to do such things as typeset documents miles wide. Because, generality! Or some such.

And the concern you have about people not knowing what an "underfull hbox" means is simply a lack of training. I would be surprised if anyone I work with below the age of 25 has even heard of TeX, much less read the documentation for it. Heck, I could probably raise that to 40.

So, my assertion is that the fact that TeX is still very much usable shows that it was "well engineered." Would it require somewhat expensive training? Sure, but how is that any different than the maintenance of bridges?


"If software worked initially, and survived 60 years that well as bridges from the 50s do today, that would be a miracle."

Given a Fortran compiler, I see no reason why a lot of the numerical stuff in netlib would not work just as well in 60 years' time: http://www.netlib.org


Libraries sure. I'm talking about production software.


Yes, that's an entirely different thing. But I would rather compare a production software system to something a bit more dynamic, like a nuclear plant or an airplane, than a bridge.


This is just survivorship bias at work, though. How has road building fared in the past 50 years? If your road has seen heavy use, it has changed massively.

So, should we then assume that software is like road building? No; rather, I would assert that some pieces of the road are more isolated from dangerous use and change than others. In software we have some of that too, but not to nearly the same extent. We are often holding up pitchforks to rewrite everything.


My impression has been that the people in the LISP community would be happy to tell you about how their model has been fairly consistent over the past several decades and that a lot of the 'new advancements' we're seeing all the time have been available in LISP for that whole time. And, funnily enough, one of the most highly recommended projects in LISP is making your own!

I don't get to deal with LISP anywhere enough to substantiate these claims from actual experience, but that is the impression I have gotten from hanging around places like this.


They might be good bridges, but they wouldn't necessarily be up to modern safety spec or tolerances, and may not take advantage of new materials.

It might actually be a good allegory after all...


If software engineering is shifting, it's shifting due to the youth of the discipline.

Remember, we've been building bridges for thousands of years. We've had a lot of time to figure out how best to do it. The amount of improvement we've had in 50 years in terms of technique is probably pretty small; improvement in materials is the more likely reason for any variations in technique.


Like others said, it's hard to compare because the hardware also greatly improved. But if you look at embedded programming: in the '80s we had C/Ada; today we might have Rust, fitting similar environments. That's surely an improvement.


Why? And there is a lot more to tooling in embedded systems creation than the language that the software is specified in.

If anything, I think the dream from back then would be that embedded systems of today would be more FPGA based. Screw this using fixed design systems. :)


Why is Rust better? For example, it has generics, which are more abstract and offer greater reuse while still being efficient.

As for the dream of dropping fixed systems, you can already buy a cheap, fast MCU from XMOS with 4 cores, of which you can use 3 to create extra peripherals. Or another cheap CPU with a small FPGA. But of course, if you're fighting for pennies of cost or power, there might be better fixed alternatives than those two.

BTW, the reason we don't have FPGAs on chips is probably mostly commercial: charging by the peripheral offers chip companies a path to more revenue and more differentiation from each other. They really don't want to sell commodity MCU+FPGA chips.


Sorry, I shouldn't have thrown in the quip on FPGAs. It really is a complete non sequitur. I think it's a very neat topic, but it didn't belong here.

I don't accept that generics are somehow an automatic win for languages. I do like them somewhat, myself, but I don't think there has been compelling evidence that they can be used to great effect in building embedded systems.

Consider, at the low level, the majority of the code written is still in C. Not C++, C. And I can't bring myself to agree that it is lacking because of it.

I know he is intentionally inflammatory, but I think Torvalds's rant on why he rejected C++ for Git is somewhat pertinent. As are all the attempts at rewriting the Git core in higher-level languages; specifically, how they haven't really succeeded.

I will confess to being open to the argument, but outside of revisionist history and wishful thinking, I just don't see proof that things are automatically better with today's practices over those of yesterday's.


If you look at surveys, C++, as shitty as it may be, is used a lot around the industry. And the fact that ARM has chosen it for mbed, around which they plan to build an operating system for IoT chips, shows its value for MCUs, at least in some segments.

The other data point regarding rust is the huge excitement in the embedded community.


I shouldn't be surprised that C++ is as popular as it is. For some reason, I was fixated on the kernel as the data point I was thinking of.

And do not take what I'm saying to mean that I think Rust isn't as good as C. It's even better, in many ways. My question is specifically whether people today will be able to accomplish better things because of the language. I'm doubtful.

Most of the biggest accomplishments in the embedded space come down to the massive gains in silicon, most of which are dominated by advances in boolean chain evaluation to make faster circuits. Wider, sure, but also faster. (At least, that is my understanding... I'll admit I am no authority on this.)


GCC is moving from C to C++ and from what I can tell GCC developers seem to think it is an improvement, and GCC suffered from inadequacies of C.


This does excite me. Though there are at least two obvious problems. First, I want it to be a bit more objective: "suffered from inadequacies of C" and "from what I can tell" both need more concrete examples/numbers to make sense.

Second, I don't think anyone would argue that the biggest thing spurring movement in GCC is the success of clang.


For web and mobile apps, you need programming ability and tooling (bug tracker, source code control, editor/ide, test/release system, etc.)

With that, IMHO your job looks more like a writer's than an engineer's, and your product feels more like a serial TV show than a skyscraper. Key skills: identifying a compelling story line, and building awareness and engagement with a loyal audience.



