You'd still need to write the glue code that correctly maps one program's output to the other program's expected input. I don't see how this is materially different.
If by "Unix philosophy" you mean creating modular software, then sure, we're already doing this. If you mean pushing unstructured ASCII data through actual pipe(2)s, then I'm sorry, but this is not a workable solution in 2017.
Well, part of the difference is that now the "beep" program can be considered finished, and other people can use it, piping it into whatever programs they want, and it will never (or at least rarely, compared to the monolithic approach) need to be updated because all it does is beep.
If and when it does need to be updated it will be much easier to update because all it does is beep, so it doesn't include a calendar library and so on. It won't have to be recompiled when the calendar library is updated, etc.
You have to do the same amount of work overall to get the same end results but the individual pieces can stabilize and be used independently.
Unix philosophy also seems to assume that the users and developers of the beep software are different people with different priorities than the calendar app people. I think the crux of this discussion is whether you are in one of these camps or you're trying to sell a holistic solution to their beeping calendar needs. The latter will never be done because holism means being tenderly connected to a changing world.
But in terms of pipes, it seems like the 2010s version would probably be JSON over HTTP.
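The glue in that world stays small. A toy Python sketch of what "JSON between tools" looks like; the tool names and field names here are made up for illustration, not a real API:

```python
import json

def beep_output():
    # What a hypothetical "beep" tool might print as its JSON output.
    return json.dumps({"event": "beep", "at": "2017-06-01T09:00:00Z"})

def to_calendar_input(raw):
    # The glue code: map one tool's output onto the other tool's expected input.
    event = json.loads(raw)
    return {"title": event["event"], "start": event["at"]}

entry = to_calendar_input(beep_output())
```

The mapping is explicit and lives outside both tools, so either side can be "finished" independently.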
> You'd still need to write the glue code that correctly maps one program's output to the other program's expected input. I don't see how this is materially different.
That is called feature creep, and is usually not viewed in a positive light. It's a real art to know how to decompose problems and design neat, contained solutions.
I'd also argue that the "change" you are seeing is mostly illusory, but that is another story altogether.
Contained solutions can only be designed for contained problems. Sure, if you define a limited set of requirements you can finish your project, but you will have limited software that is only useful in that very limited context.
"change" might be unnecessary but it is certainly not illusionary.
Once you reach a certain scale, you have to decompose your problems into subproblems. You solve those and assemble them into the final system.
The people making AMQP (following the original author's example and line of reasoning) were solving too many problems: they had a message queue, they had a wire protocol, they had service discovery, etc. This is all fine.
The author and others went on to make ZeroMQ with the intention of focusing on the communication between nodes and various communication patterns (pub/sub, request/reply, etc.). They had some work on what should go over the wire but not too much, and they didn't expose things like authentication or discovery of services. Why? Because those are critical to applications, but not to a distilled, core message queue.
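To get a feel for how small a distilled core can be, here is a toy in-process sketch of the pub/sub pattern in Python. To be clear, this is not ZeroMQ's API and has no sockets or wire format; it only shows the pattern with everything else stripped away:

```python
from collections import defaultdict

class PubSub:
    """Toy in-process pub/sub core: no auth, no discovery, no wire format."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        # Register a callback to receive every message on a topic.
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Deliver the message to every subscriber of that topic.
        for callback in self.subscribers[topic]:
            callback(message)

bus = PubSub()
received = []
bus.subscribe("beep", received.append)
bus.publish("beep", "09:00")
bus.publish("other", "ignored")   # nobody subscribed; silently dropped
```

Everything else (auth, discovery, persistence) can then be layered on top rather than baked in.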
Using ZeroMQ, you can now build up to what AMQP wanted to be. You can agree with your customers that ZMQ will be how you coordinate between systems, but you define your own message format using a separate standard. That standard is then independent of the wire protocol as well. Take something like ASN.1 or protocol buffers and define your message schema using those.
You want discovery? You agree on how services should be announced (a schema in ASN.1) and where the service registry should live (this would be a system configuration detail, where you specify where your system looks for a registry, maybe even federate registries). But this doesn't redo the work of ZMQ or the work of the serialization/deserialization format defined above.
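A discovery layer along those lines can be sketched independently of the transport. A hypothetical toy registry in Python (the service names and endpoint strings are invented for illustration):

```python
class Registry:
    """Toy service registry: services announce themselves, clients look
    them up. It knows nothing about the transport or message schema."""
    def __init__(self):
        self.services = {}

    def announce(self, name, endpoint):
        # A service declares where it can be reached.
        self.services[name] = endpoint

    def lookup(self, name):
        # Clients resolve a name to an endpoint, or None if unknown.
        return self.services.get(name)

registry = Registry()
registry.announce("calendar", "tcp://10.0.0.5:5555")
endpoint = registry.lookup("calendar")
```

Nothing here redoes the work of the messaging layer or the serialization format; the registry is its own finishable component.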
You want to handle authentication? You pick your desired authentication protocol (public/private keypairs for identity, a capabilities based system, access tokens, whatever) and you again define a schema, define where the authentication service is located (or how it's identified in the registry above) and continue.
We don't reinvent TCP every time we want a new network protocol (well, some people do). Why would you reinvent the message queue when it's already been distilled to a basic and fundamental framework that can be used to implement your desired business logic? (Now, if the MQ lacks some features that are basic or primitive, then it should be extended, because it's not complete.)
So true. A friend of mine who does hardware says "I like hardware because when I run out of parts to install I know I'm done." :-) I've teased him about board re-spins and feature creep, but by and large, because there are real externalized costs (like getting new board fabs), it is very crisp to prioritize adding features vs. being done.
From the blog post: "Except for some basic UNIX tools, like grep or make, it's almost impossible to find anything that's truly finished, not simply abandoned."
Now, granted, most of us on anything Unix-like are on a BSD or Linux, not on a UNIX, but grep and make are still being patched here and there. Even 'ls' gets commits. We could get into a pedantic discussion of when something can be classified as "finished," but that doesn't seem fruitful. I think we just need to realize that despite following the Unix philosophy (or something similar), which is mainly about modularity and composability, not a single part (modular piece, service, etc.) of a modern system can be left unmaintained; it can never be in a "truly finished" state.
When you ship it, like "the little web app" example, and have others use it, that is "finishing": when you have something that has a modicum of value.
If you add more features in response to users or sales or whatever metric, that is the rest of your comment. Or maybe you don't need any more features in the future; maybe it's only bug fixes, like TeX or the Python Requests library.
I guess it depends on what granularity you are viewing your project at. Your beep app could be complete, and if you want calendar integration, that's a new project that takes input that is somehow piped into it from the beep app. The iOS version is a new project that imports beep as a lib.
As long as you don't remove things from the problem in the name of removing them from the solution.
Or, to be less terse, don't say "Well, you shouldn't need to solve that problem!" or "You're holding it wrong!" Reality doesn't give you neat problems which map cleanly to the simple solution, and adding complexity is the only way to solve the problem, as opposed to a dishonestly reduced form of the problem.
Yep, this is especially noticeable when using software (most common for libraries and drivers) that was abandoned.
Even though the library worked perfectly fine several years ago and had many users, several years later it starts to show errors that most people didn't experience before, and certain tasks require workarounds or are downright impossible to do.
And the reason for it is that the environment where the library operates has changed, and most likely it is used differently than in the past, but it gives the perception that the quality of the software actually degrades with time.
I think we can all imagine what it means: you get the app running in the first place, in a stable state, ready for modifications and improvements. "Done" in this circumstance should mean the first finalized version after the rough drafts is done.
Porting your program to a different platform makes it a different program. It doesn't change the completeness of the original. Modifying your program to account for API changes also qualifies as "a different program" - you can have different programs that all do the same thing, for different platforms, for different API versions of the same platform, and have them all be "complete" even if you may need to write a new one at some point because you want to run it on your wristwatch or something.
There is a whole class of rhetorically attractive analogies comparing programming to the building of physical things like furniture, bridges, and buildings. They usually overlook that software requirements vary greatly and can be quite complex, while the physical requirements of a chair are not going to vary much across chair designs. Modification of software requires no physical resources and can be executed at scale (e.g. package managers), while modifying the design of a chair once it has shipped is impractical. Software errors can certainly be consequential, but I'd much rather encounter a bug in a messaging queue than a defect in the chair I'm sitting on or the bridge I'm driving over.
> They usually overlook that software requirements vary greatly and can be quite complex, while the physical requirements of a chair are not going to vary much across chair designs.
I feel as though that's somewhat addressed in the essay:
> Yes, I hear you claiming that your project is special and cannot be made functionally complete because, you know, circumstances X, Y and Z.
> And here's the trick: If it can't be made functionally complete it's too big. You've tried to bite off a piece that's bigger than you can swallow. Just get back to the drawing board and split off a piece that can be made functionally complete. And that's the component you want to implement.
Software engineering is an offshoot of systems engineering, which is often described as "managing complexity". I don't think it's at all unreasonable for people to look at the methodologies actual systems engineers use and learn from them. One of those things is decomposition of tasks, which is what this article is about. If you're making a message queue system, divide it into subsystems that are each more feasible. Your over-the-wire data format protocol is one thing (ASN.1, protobufs, etc.), your concurrency library and patterns in your systems are another (Go's channels, Erlang's processes, libmill from the article), your database is another. Once you have your system decomposed in this fashion you can do each part, and actually finish them, and assemble your complete system by combining them.
Once they're small enough the Unix philosophy applies. Write components that do one thing and do it well, and design them in a way to communicate with each other.
> If it can't be made functionally complete it's too big.
Too big for whom? Most people on HN aren't building systems software or embedded software with a strictly-defined purpose; they're building an app business or a service business, where the "purpose" is "to make money" and adding additional features or "checkboxes" are how you attract more customers to make that money.
A program can be done. A product usually can't be, as long as its authors want to continue to rely on it to put food on the table.
I think the problem with those analogies is much simpler and more fundamental - they assume writing software is the "building" part. It's not. Construction is what compilers do for us. What we do is design. Architecture, if you must. And we can release a new version of our software every day instead of every year like other industries, because compilers do their work fast and effectively for free.
I think the article does address that to some degree. My interpretation is that if your requirements change so much and are so complex that you are not able to complete the solutions, then you should step back from coding and work more on the requirements.
Of course that also means slowing down, and obviously that won't do in Silicon Valley. Gotta go fast, even if you are running around like a headless chicken.
If you're designing for yourself, sure, work on the requirements.
If you're designing for the marketplace, the market decides the requirements, and expecting everyone to fall in line behind one thing is a mug's game.
> They usually overlook that software requirements vary greatly and can be quite complex
Or change after it's built. "Oh yeah, I guess we'll also need a floor to set our chair on. And perhaps a foundation for that floor." Later: "I know I said I wanted a chair, but what we really wanted was a glider."
> Out of the frustration with AMQP I've started my own ZeroMQ project.
I doubt that Martin was the sole initiator of the ZeroMQ project. I think the late Pieter Hintjens, the original author of AMQP, deserves some credit as well, at least out of respect[1][2].
As someone who only discovered Pieter by finding his last article posted on HN, and then subsequently learning a lot from reading many of his other articles on his website (different perspective on life, humility, consulting in tech outside of SV, presentations), here is his homepage for those who are curious: http://hintjens.com.
Regardless of whether you doubt it, it is an accurate claim. I remember following the development in real time, having used AMQP and ZeroMQ during 2007-2010.
I am not doubting that Pieter Hintjens deserves respect, but the ZeroMQ project was started by Martin. He wrote the vast majority of the code for the first two years, none of which was written by Pieter. Martin's company owned the copyright for the first 2 years, before it was purchased by Pieter's company. The only messages I see on the ZeroMQ mailing list from Pieter in those days are messages of congratulations and enthusiasm. I know Pieter and Martin worked together at iMatix, and that some of the ideas for ZeroMQ spun out of their mutual work with AMQP, but it is fair to say Martin started the ZeroMQ project.
I understand how Pieter presented things, but I think he did so with a revisionist viewpoint. I don't mean to disparage him or downplay the immense impact he had on the later success of ZeroMQ though.
Finish your work... only to have legions of newb "hackers" bitching at you and calling your project dead because "(s)he hasn't posted an update in months"
Death to evergreen software! Let projects be finished!
This is a really good point. I'm guilty of seeing a repo that hasn't had a new commit in 18 months and choosing not to use that code because it's 'not supported any more'. Maybe maintainers could help by making it clear, in the README or somewhere, that their code is still supported but just doesn't need any changes.
I get the problem with feature creep (who wants a text editor that also implements support for tetris?! ^•^), but as others have pointed out, that sentiment probably isn't precise enough to be all that useful.
Being a command line junkie, what stands out to me is the composability of my tools. This composability, I feel, is the key that lets us separate concerns and write small, standalone tools.
> who wants a text editor that also implements support for tetris?! ^•^
Well, I do :) - given that said editor also has better UX than mainstream operating systems (in terms of efficiency, extensibility and interoperability), and it's precisely because of that that it's possible for someone to implement Tetris in it.
This sounded oddly familiar, until I read about the projects Libmill and Ribosome. And yeah it's a post already submitted in 2015 and should be labeled 2015.
Any software tool that is deemed as "complete" is not really complete. Even TeX, which will be releasing its final version when Donald Knuth feels the end of his life is near, is not complete because it is constantly expanded with reimplementations like XeLaTeX and LuaTeX and with thousands of plugins in constant development. GNU utilities like ls are not complete because they are built on top of operating system syscalls which are changing due to updated filesystems, and on glibc which releases a new version every 6 months. A change upstream might trigger ls to be changed, so it's not finished, just evolving slowly. Just look at its git history http://git.savannah.gnu.org/cgit/coreutils.git/log/src/ls.c, where the last change was three weeks ago. GNU Make, mentioned in the article, is still evolving. Look at its history. http://git.savannah.gnu.org/cgit/make.git/log/
Re TeX: That's actually covered by the author. TeX itself is complete. Knuth isn't adding new features; instead the version number is converging to pi (a new digit added with each bug-fix release) [0]. The other variations are not TeX (they're reimplementations) or they're extensions built on top of the completed TeX program.
Yes, that's what I meant to imply. So yes, TeX is finished software but the boundary between what is known as TeX and "not TeX" is blurred by the thousands of extensions in the TeX environment that people must use when using anything beyond the basics.
Basically what I'm saying with that example is that you can draw a line around some code and say "this is TeX, and it is finished." But what users mean when they say "I use TeX" will never be finished and will be forever expanding.
TeX is the hardest example to argue about, because it is probably the most stable software in existence for what it's capable of, so any other software you use is much less finished.
> the boundary between what is known as TeX and "not TeX" is blurred by the thousands of extensions in the TeX environment that people must use when using anything beyond the basics
I would say the boundary is very well defined.
Consider Internet wire protocols: to access this website, you're using HTTP over TLS over TCP over IP [etc.]
That doesn't mean that TCP or IP are "changing" when HTTP or TLS change. TCP and IP are feature-complete, low-level layers that each just do one thing well. We add on more layers to get the effect we want, and those layers change frequently, but changing those layers doesn't mean the lower-level thing "changes" as a part of it.
I think the problem with TeX is just that people have the nomenclature flipped. People think of things like LaTeX as being "what TeX is"—that LaTeX "is an implementation of" TeX. But that's off; it'd be like saying that HTTP is an implementation of TCP/IP, or that Ubuntu is an implementation of the Linux kernel.
In reality, LaTeX et al are software distributions—they include TeX as their text-constraint-processing engine, just like a Linux distro includes the Linux kernel, or a Unity game includes the Unity engine. It's one component, with a well-defined function.
While it's very good to encapsulate functionality, and have stable and well-defined interfaces, there is also a great value to having a unified platform and community that builds interoperable things.
Would you rather assemble your project from 250 different libraries, some of which may be incompatible with other ones? Sure, each may solve a tiny problem, and they may all be orthogonal (best case) but there is a major need for a unifying paradigm above all those components so they can all work together.
Basically, I prefer to have a growing community working on a growing snowball of components that are all interoperable and can be assembled like Lego bricks. I think Wikipedia has such a community. RDF has several such communities. The Web has such a community. This really creates a lot of value by having an exponentially growing platform where many things work with many other things.
Wow, I've heard about the leftpad package but reading this made me chuckle.
There’s a package called is-positive-integer (GitHub) that is 4 lines long and as of yesterday required 3 dependencies to use. The author has since refactored it to require 0 dependencies, but I have to wonder why it wasn’t that way in the first place.
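For scale, the zero-dependency version amounts to roughly this. (The real package is JavaScript; this is a Python rendering of the same check, not the package's actual code.)

```python
def is_positive_integer(x):
    # In Python, bool is a subclass of int, so exclude it explicitly;
    # otherwise is_positive_integer(True) would come back True.
    return isinstance(x, int) and not isinstance(x, bool) and x > 0
```

Hard to see what three dependencies could have contributed to that.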
Also, as a web developer regularly completing projects for different clients, I don't see the point of the whole article. Wouldn't a successful delivery of a campaign site be considered a finished project? Especially with the analogy made; it's like comparing apples to oranges.
Tiny problems that are solved provide a common part that many people can be reasonably sure is free of bugs - if thousands of people use left-pad, you can be reasonably assured that it has no bugs (or else people wouldn't use it, or a bug report would be filed - "many eyes make all bugs shallow"), if a thousand people write left-pad, you will get many versions with bugs that are never found, because there is much higher entropy/surface area.
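For reference, the function in question is about one line. A Python equivalent of left-pad (the npm original is JavaScript; this only illustrates the size of the shared surface):

```python
def left_pad(value, width, fill=" "):
    # Pad the string form of value on the left with the fill character
    # until it is at least `width` characters long.
    return str(value).rjust(width, fill)
```

Even something this small has edge cases (non-string input, width shorter than the string), which is exactly where a thousand hand-rolled copies would quietly diverge.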
I agree with that, but I just think there should be some overarching platform / interoperability into which all these small pieces can fit. And also version pinning in package managers, where you manually check diffs and compatibility before upgrading anything.
The whole point of software is that it is soft. Malleable. Ductile. That's its strength, and sometimes a weakness. It makes software hard to deal with (see the many late projects), but it also makes it awesome, because you can change it with changing requirements.
Yes, it’s important to generally finish what you start. Equally important is the capacity to accept, learn and make changes
Software is never complete. It might be usable (in the world outside personal projects, perhaps "shippable"), but never complete.
I think the point the author is making is to not start projects that aren't likely to have a minimum viable product that is usable and achievable within reasonable time.
I'm not so sure. It's sometimes more fun to write the skeleton of a compiler and abandon it than it is to write something minimal. The only (proper) way to learn what tradeoffs exist in SQL database design is to make one. Making an SQL parser, B-trees and so on is HARD but rewarding. After a week you'll know a lot about a lot of things. But you likely won't have a working SQL database. If you aren't a very special kind of crazy you won't reach the finish line and you'll abandon it somewhere along the way. So was it all a waste? Is exploratory hobby programming somehow a bad practice? I can't see that it is.
The point of the author is to define your targets. When you're building a compiler, if you're also building a parsing engine, then maybe the parsing engine should be its own project, with the compiler a separate project that uses the engine. When you build a web app you don't closely tie in the web server code; you use an existing web server or build a new one. When building a web server you don't build the TCP stack with it. You make the TCP stack and then have the web server use it.
Complex systems are constructed from smaller, less complex subsystems. Complete them, let them stand alone, and build your final systems out of them. We generally don't rewrite tools like grep because it exists and it works well; we may port it to new platforms. And we don't change the effects of the -R flag out from under the users (clients); it is what it is. The interface is, essentially, completed.
If you're leaving incomplete hobby projects in your wake (all of mine are), that's not what he's talking about.
His specific experience was going from AMQP (large, complex specification) to ZMQ (smaller, focused on the essentials and let complexity be built on top) to nanomsg and libmill (even smaller, more focused). So in theory his nanomsg and libmill could be considered complete, as we'd consider TCP "complete", today (from a client perspective at least). And we can work our way back up plugging it into systems to recreate the capabilities of ZMQ, which can be plugged into systems recreating the capabilities of AMQP. Which can then be used to solve our business problems.
I find this funny coming from the originator of ZeroMQ. When I first tried out ZMQ, it took me all of 12 seconds to figure out that if you didn't pass exactly the message that it was expecting, it crashed the server.
That problem along with I'm sure hundreds of others were fixed along the way.
You don't achieve perfection in software development. Ever. Even when developing something for yourself where you define the scope and you leverage it for a menial task. There's always better. There's always more robust. There's always something else you can do.
That's the draw to software development for many of us. There is a constant challenge awaiting our brain. You can consistently push yourself in a new direction that you care about.
The downside is that feeling of a completed accomplishment really is just an arbitrary release ceremony. What software developers need to get good at is letting go and moving on.
Great software projects are never finished; instead they always evolve and improve, similarly to living organisms and unlike material things.
I completely disagree with your analogy to a carpenter who builds a chair, this metaphor is wrong and responsible for a lot of misunderstandings when it comes to software development.
I also don't agree that "finish your program" needs to be added to the unix philosophy because a tool that does one thing and does it well is finished by definition.
Except in very trivial cases, any piece of software contains bugs that need to be fixed, needs new features based on new user stories, and also must adapt to constantly evolving hardware. Linux is a perfect example of this constantly evolving and never-finished paradigm, and if anything it can be used not to support but to invalidate the "finish your program" philosophy.
> Imagine the carpenters were like programmers. You bought a chair. You bought it because you've inspected it and found out that it fulfills all your needs. Then, every other day, the carpenter turns up at your place and makes a modification to the chair.
505 Bad Analogy. Carpenters work all the time on new designs and better chairs. When a programmer releases a new version of his program, it's like when the carpenter made a new chair and put it up for sale on Amazon or eBay (aren't these kind of like package managers for physical goods?). You may want to buy that new one, but you may also not. Similarly, nobody forces software upgrades down your throat. But sometimes people change: they want to sit on divans or nail-beds instead of chairs, the carpenters follow the trends and produce those instead of chairs, and people expect to find these when they come to visit you. You can say "sit on chairs or f*ck off!", but you can't avoid the consequences: loneliness. Similarly, you can hold on to your select programs in select versions, maybe backport patches for them, but the protocols change, the services change, the world goes on. You may say eff-off and stick to your programs, but the consequence is that it'll become harder to share experiences and information with others, both ways. Nobody fixes or modifies your stuff should you not want them to; they may try hard to sell and bug you, but you are always the one who buys. Mobile phones do update by default, but you can opt out.
Yeah, I'd compare that more to living in a hotel, and coming into your room every day to find that the hotel has replaced the chairs. The hotel, here, is the web service, or your OS's package manager
If you have a problem with constantly changing chairs, you're free to find a new hotel that doesn't do that—probably one with "LTS" in the name.
I was referring to individual software packages, an entire distro is a bit of a different beast. Going with the same chair analogy, well, you'd mostly care about the chair in your room, no? The one that you use the most. That's similar to what I do with Ubuntu (and any other OS): it takes care of all the programs except Emacs which I compile myself following master, Firefox Developer Edition which I'll use until 57 comes out, and my scripts and utilities that I've written. It's impossible to micro-manage every bit of software I directly or indirectly use, but it's a nice trade off to ensure the most important bits myself and trust the rest to the distro maintainers.
> Please join me in my effort and do finish your projects. Your users will love you for that.
Will they?
If your users are programmers, they might love you for that.
For pretty much all other users, they will want more. "It would be great if your app did...". I would argue that users now expect software to change and add more features.
And why wouldn't they, when pretty much all software they use is doing this? If your app isn't growing and changing, to users it may well look dead: "It's abandonware".
That's actually not my experience; I see people using "old" software all the time, and rarely looking for newer alternatives even if they feel that some things could be added or improved. As just an example, my mother still uses ATnotes - last released in 2005.
And I think the limited data we have shows that; consider how few people upgraded their browsers before upgrades became automatic.
And my experience is that there are many many combinations of some amount of both. I find a combination of both even with myself and sometimes even for a single piece of software! "I wish they'd add Markdown support!" "I wish they hadn't changed the UI!"
> it's almost impossible to find anything that's truly finished, not simply abandoned.
This goes far beyond programming. "completeness" is nice but is highly subjective. For work that can be infinitely revised, refactored, recut, "finished" is an idea you impose from the outside, in relation to specific criteria.
The author seems to be saying that there is value in creating smaller, independent modules of things that can be said to be "finished" when they do their thing and no further functionality is added or expected to be added. Further functionality, if needed, would come from connecting "finished" things together, I suppose, or making new things- not over extending a component to handle all cases.
This moves the complexity around a bit and maybe it's more efficient and easier to handle this way.
Artists have a similar problem. It's hard to know when a poem or a song is "functionally complete" but eventually you have to let them go or you never get anything out there.
I'm all for conscientiously abandoning work, especially if the state of the work is well documented so that somebody else could, in theory, pick it up.
> This goes far beyond programming. "completeness" is nice but is highly subjective. For work that can be infinitely revised, refactored, recut, "finished" is an idea you impose from the outside, in relation to specific criteria.
This is why I think a "definition of done" is so valuable on a dev team, especially in a startup environment. Without it, there's just natural inconsistency or variability across people and features.
The author implies that by releasing new versions of our software with security patches or features we somehow differ from construction, where houses are considered finished. They are not. There is a lot of maintenance to be done: painting, replacing pumps, sewage pipes, insulation, cables. I even had a house where we got a new balcony some 20 years after the house was "done", ditto a completely new elevator recently, some 40 years after finishing the house. I'm sure that if constructing houses had been as easy as rewriting software, we'd get a lot of other convenience changes just for the sake of being modern.
Software engineering is not flawed. It just lacks hundreds of years of experience which it will gain with time.
> Imagine the carpenters were like programmers. You bought a chair. You bought it because you've inspected it and found out that it fulfills all your needs.
Then, every other day, the carpenter turns up at work and tries to improve how they make chairs!
Why can't they just stop changing how chairs are made? I like my chair, but I go back to the store after a few years and they're all different! Carpenters, finish your chairs. Those ideas you have? Stop having them. I like chairs how they are now.
That is how the market works: people are not satisfied with the sitting experience and constantly try to improve on it. Yet the QWERTY layout is still in use by 99.9%; it satisfies typing needs with a good enough experience, so nothing has changed for ages.
> That is how the market works: people are not satisfied with the sitting experience and constantly try to improve on it
Not sure if you're joking here, but if not, then no, it's not because of that. Companies just make new, "better" versions, market them hard, and are happy, because when they pull the old ones from the stores, you have no choice but to buy one of the new ones.
The reason this is an implicit principle of the Unix philosophy is because the Unix philosophy is to be as lazy as possible as the tool-creator and push all complexity onto the tool-user. Thus we get things like Go, regular expressions, and null pointers.
People outside this school of thought don't finish their projects because their projects actually try to solve the underlying problems in computing, which are inevitably hard.
Examples of underlying computing problems that are unsolvable if one follows the Unix philosophy? Keep in mind that it's often misunderstood what the primary tenets of the Unix philosophy are; your comment demonstrates you, too, have misunderstood.
It's not on the stated list, but null pointers are a classic expedient unsafe shortcut for those too lazy to design a better type system. C is full of such shortcuts (cf. Algol 68).
First of all, that has nothing to do with the UNIX philosophy; second, it has nothing to do with type theory. Anyone is perfectly capable of implementing a Maybe/Option/whatever type in C and returning it (by value as a struct, or via a pointer to a heap allocation), whereupon C's actual type system would happily complain about mismatched types if you try to use this type without deliberately accessing its internal members. At that point you're doing exactly as much intentional work as if you used a convenient accessor macro that checks to make sure it's not the nil case. While I agree that C (and its type system) could be much better, fixing the problem is not quite as simple as "just design a better type system".
> Anyone is perfectly capable of implementing a Maybe/Option/whatever type in C
1. No parametric polymorphism, but I'll let you move the goalposts by macro-ing up a bunch of monomorphized nullable pointer types. It still sucks because
2. No static checks against nullable values in the non-Maybe case, no ergonomic pattern matching in the maybe-case
Moreover, let's be real: nobody writes new C in a vacuum, and given that the ecosystem is very mature, there's no changing course now. But safety techniques are only as good as their weakest link: their use must be enforced everywhere. The damage is done and irreparable with C.
Parametric polymorphism is needed to implement “Maybe” as a generic type that can be applied to another type, but you can implement individual concrete Maybe X types without it.
You could possibly use macro programming to do it “generically” at a level outside the type system.
The difficult thing is to divide a big problem into a set of smaller ones that together solve the problem BUT such that those smaller solutions are valuable on their own, not only as part of the bigger solution that is your eventual goal.
So the problem is not so much to solve a problem but to FIND a problem whose solution is useful on its own and perhaps also as a sub-solution to bigger problems.
I don't agree. The goal of software isn't usually to discover some perfect abstraction with nothing left to add and nothing to take away, the goal is usually to solve some practical real-world problem. If your software satisfies the requirements without taking too much time or effort to create, then it's a success.
If a project is going to be used by a lot of people or become some kind of industry standard, then it makes sense to spend some time up-front and figure out a clean interface and a well-defined feature set so you don't have to make non-backwards-compatible changes. It also makes sense to look at the feature set and figure out if you have the time and ability to complete those features, whether they're important enough to justify the effort, and whether there's an easier option that satisfies the requirement without having to boil the ocean. All this should be driven by the project goals, though, not some abstract ideal that all programs should be complete.
Some of the most useful software is going to be forever incomplete. Web browsers, CAD packages, operating systems, etc... I'm glad there are people working on software like that.
The author's point is not that there shouldn't be new systems or that no systems should ever change. He's not arguing for a perfect abstraction, either.
Following his development path (AMQP->ZeroMQ->nanomsg + libmill + others) his interest was in decomposing the larger system (AMQP which included message queues, databases for storing messages, wire protocols, etc.) into its smaller parts so that you could build back up to it in a better way. It happens that when you get to those lower levels, you can call them done.
If you're building a web browser, what's the proper scope for your project? Should it actually consist of all of these things in one project: tcp stack, http client, javascript interpreter, html parser and renderer, keyboard drivers, mouse drivers, touch screen drivers, etc.?
No. You build on the existing OS for the drivers. You build on the OS for the TCP stack. You may build an HTTP client, but odds are you can reuse an existing one, or the one you make could be partitioned off into a separate, and hopefully reusable, library. Your javascript interpreter can also be its own project/library. And most of that already exists, so you can focus on HTML parsing and rendering and glue in the other components. Your HTML parser doesn't need to know how to send an HTTP request, your renderer doesn't need to know it either. Only your browser needs to connect the HTML renderer and the HTTP client.
> Following his development path (AMQP->ZeroMQ->nanomsg + libmill + others) his interest was in decomposing the larger system (AMQP which included message queues, databases for storing messages, wire protocols, etc.) into its smaller parts so that you could build back up to it in a better way. It happens that when you get to those lower levels, you can call them done.
I think breaking things up into small pieces so that it's modular and you can work on one part at a time and then assemble a working system out of individually-testable working parts is a good thing (especially if you can re-use parts that other people have already made), but I don't think that's quite the argument the author was making.
I think the author was arguing that you should reduce the scope of the task any time you notice open-ended requirements with no clear completion criteria. I think that's good advice most of the time, but there are exceptions. It's okay to have a project that isn't "finished" if it's useful.
What I love about games is that at some point they have to ship. They used to have to go in boxes. But even now that they don't, there is a still a date when they have to go live in stores. Deadlines terrify me and they create great anxiety in my life. But ultimately they are cathartic.
I don't think the chair example is appropriate, because I think there are different, more comfortable chairs today than, say, 100 years ago. So the chair makers are continuously improving the chairs, just not the one you bought.
Software can be usable but 'incomplete', in the same way that stools and Aerons are both usable, but stools weren't an unfinished technology. If your bug count is low and your software has features, it's already done.
Somebody has given up a significant amount of their time to give you something that's free as in beer and speech. You get at least what you pay for, don't ask for your money back.
The one thing you can expect to receive in exchange for said significant donation of your time is recognition for your work.
If your work consists of half-finished code that you then attempt to pass off as a usable product, the expected reward is shunning.
There's a difference between pushing your CS101 homework to github and publishing your package to npm. As with academia, once you publish you implicitly vouch for the quality of your work, and your reputation is permanently tied to it.
Consider an example:
You have made a little web app that beeps at certain times of the day to remind me to do something.
Is it complete?
Oh, you want a calendar integration. That makes sense. You add it. Is it complete?
Oh, you say you want to release mobile versions? Okay, now is it complete?
Sorry, but the iOS version needs to be updated to remain compatible. Now is it complete?
There is a new popular cal app that everyone uses. Should you update it to work with that?
The problem isn't that the software is incomplete. The "problem" is that the world keeps changing.
To the extent that the environment doesn't change (unix), you can "complete" your tool.