The Handmade Hero community is wonderful. It's a portal to a lot of knowledge transfer. If you're looking to approach C/C++ in a welcoming environment and/or love getting closer to the metal, I highly recommend the community.
I also recommend the handmade video series.
Casey Muratori has been live-streaming himself building the same game for the past few years. He tries to build everything from scratch, which is entertaining and quite the brain dump.
Also, the video archive and live chat are searchable. I can't count the number of times I
- searched for some keyword or phrase, like "CPU"
- was greeted by a link that read something like this
"Day 025: Finishing the Win32 Prototyping Layer (01:38:59) Isn't the CPU Memory bandwidth only valid for on-die memory?"
- clicked the link to receive 3-10 minutes of extremely dense but well-explained information on a topic.
- spent the next hour or so googling technical terms and subjects that filled massive holes in my knowledge.
If you love learning, treat yourself to jumping around the video archive :)
https://guide.handmadehero.org/
Two favorites of mine from the archive:
Undefined Behavior: https://guide.handmadehero.org/chat/chat011/
Modern x64 Architectures and the Cache: https://guide.handmadehero.org/chat/chat017/
P.S. It's worth mentioning that you can buy the game he's building and get access to the source code. I'm not associated with handmadehero.org or Casey Muratori in any way. The community just led to me becoming a better programmer over the past 5 years. I hope someone reading this finds the value I did in rabbit-holing for hours and laughing at the rants of a seasoned game dev.
There also used to be a meetup in Seattle https://www.handmade-seattle.com/ (which has since moved online due to the pandemic situation) but in the past they had some pretty good presentations and interesting speakers. You can find some videos of them online (Youtube: "HandmadeCon" and "Handmade Seattle") although regrettably there are no recordings of last year's meetup.
That is great to hear Abner, thank you very much for all your efforts! Because I couldn't make it to Seattle last year, I missed the entire thing; it would be awesome to see some of the presentations.
I'm looking forward to this! I love everything about handmade and can never get enough content haha. I've definitely watched a few of the handmadecon videos multiple times.
Hey! I organized this community a few years ago; it's under great leadership. Trivia points:
1. The manifesto author was barely 18 when he wrote this; representative of a new, younger cohort interested in this stuff.
2. Handmade is into low-level understanding, but certainly a few people stick to high-level projects, and simply peek behind the curtain when they need to. Low-level thinking translating into high-level wins.
3. Someone mentioned my conference, Handmade Seattle [0]. This is part of an effort to expand into the mainstream, and to inspire the next generation of compiler writers, kernel developers, game engine programmers etc.
We're not saying everyone has to be into this. We want to carve out our place, let those who care flourish, and not be caricatured for "reinventing the wheel."
Why is it that we praise hand-crafted objects in the real world while at the same time deriding such hand-crafted code as a quaint curiosity that can only make sense as a hobby of the hopelessly romantic?
Do we compare your local carpenter with IKEA, or that delicious, nutritious and healthy meal you cooked with McDonald's, in terms of pure (developer) performance?
Do we obsess over how a handmade jeweler can't scale operations to five continents?
Why do we copycat every process or idea from tech behemoths but laugh when someone tells us that maybe a chat client shouldn't gobble up gigs of RAM?
In the real world, both craft workshops and assembly lines have their place. Maybe that should be true for the digital world as well.
>Why is that we praise hand-crafted objects in the real world while at the same time deriding such hand-crafted code [...] In the real world, both craft workshops and assembly lines have their place. Maybe that should be true for the digital world as well.
I'm not sure what the complaint is here. The concept of "hand-crafted software" _is_ being praised. Consider:
+ the domain URL exists: https://handmade.network/ ... this means some enthusiasts spent money to host a focal point for showcasing the work to others
+ the submission is currently on the frontpage of HN ... which means it was upvoted many times (50+ points as of this writing)
+ comments in this thread showing appreciation of it
Are you annoyed that there isn't universal absolute flattery because a few want to mention caveats of tradeoffs? Well, that's true of any piece of technology/method.
Yes, it's being praised here (however this is also in the "hobby" context, supporting my argument somewhat).
But what saddens me, more than annoys me, is that it's extremely lopsided in general.
I don't have a tally, but reading HN and other tech sources (HN is not an outlier here), these kinds of stories are buried under the mountain of articles and resources that hype up the "software industrial complex" (for lack of better description).
Software industrial complex is a great description.
Everything is about software; thus the collective incentivizing of writing more and more software. We see it as a bad sign when the commits stop flowing. Despite the source being there, it is viewed as stagnant.
Nobody even stops to ask if the project is simply done, and doesn't need anything else.
Because our technical culture has been colonized by business interests to the point that people cannot tell the difference anymore.
Many comments on this thread show an unwillingness to even consider alternate technical aesthetics. Everything has to be about Creating Business Value, as if that's somehow the most important thing in life.
There is a trifecta of the art of code, the art of discipline (engineering), and the art of servicing others (business).
A core challenge is that "success" in the modern day starts with business --> engineering --> art.
Like, you will read countless articles of "sit with your customer" or "customer obsession" or "code first and you will fail", but that is from the perspective that business > engineering > art.
Handmade exists such that art > engineering > business.
I praise it just like I praise a handmade piece of furniture made by a master. However, I have the luxury to appreciate it.
The real struggle is finding a place in life between the spectrum of art and business. The key I have found is to see the spectrum, appreciate it, adjust the career to find balance, learn what you can, and move on.
My hope is to retire focusing 75% art and 25% engineering, and not worry about the business side. For instance, I have built a programming language for board games: http://www.adama-lang.org/
The key thing that I am focused on is how I think about success, and my #1 metric is whether or not I enjoy working on it.
> The real struggle is finding a place in life between the spectrum of art and business.
Well said. In my adolescence I was all the way on the "art" side, and as I grew older, at some points I was forced to be all the way on the "business" side, just to survive.
Along the way, there was a break in routine, and I was unemployed for a good part of a year. During that time, I naturally pursued "the art of code", as you put it - creating all kinds of software for fun, with no business goals whatsoever.
It was handcrafted code, patiently and lovingly written, that aimed for simplicity and my own aesthetic. My "production routine" became a more organic creative process. I wrote what I felt like, when I felt like it.
Then, by chance, one of the first open-source libraries I wrote became somewhat popular. It led to a trickle, then a stream of clients. Somehow I had lucked out in finding a niche, where people appreciated my craft.
A decade later, I'm still riding that wave - blessed to have a life with a sensible balance of art and business. Now that I have better financial security, I want to shift the balance back to where I came from, to bring the focus back to "art" - and at the same time, continue to flourish on the business side.
We don't universally praise hand-crafted objects in the real world. We only praise them in situations where the resulting object is significantly aesthetically or functionally unique. In all other cases hand crafting is absolutely considered a quaint curiosity. A good way of thinking about this:
* Mass produced bread <- cost effective product, heavy use of machinery and tooling
* Small local bakery <- higher quality product, still use plenty of professional tooling to make it cost effective
* Making your own bread at home with store bought flour <- Makes use of prepackaged assets to enable a creative experiment
* Making your own bread with wheat you grew in your backyard and hand shaped stone tools <- quaint curiosity
I agree 100%. I see many comments here critiquing the fact that this type of programming does not scale. Modern programming methodologies and this low-level hobby programming can coexist in the world.
Carpenters still use harvested, dried, milled lumber. Chefs still use harvested, dried, milled flour.
Frameworks are tools and building blocks like this, but for software. Not using them is not like an artisan carpenter using lumber from the store. It’s like a carpenter making their own boards, or a chef making her own flour. It is not really relevant to efficient, performant software, it is more like an art for the sake of it.
Libraries are closer to raw (but still refined) materials than frameworks. Frameworks impose structure on users, libraries ask users to plug them in where appropriate.
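To make that inversion of control concrete, here's a tiny C++ sketch (every name in it is invented for illustration, not any real API): with a library, your code owns main() and calls in where appropriate; with a framework, you hand over the control flow and get called back.

    // Minimal sketch of library vs framework (all names made up).
    #include <functional>
    #include <iostream>
    #include <vector>

    // Library: a bag of functions. The caller owns the control flow.
    namespace imagelib {
        int decode(const std::vector<int>& bytes) { return (int)bytes.size(); }
    }

    // Framework: it owns the event loop; user code is registered as
    // callbacks and runs when the framework decides.
    namespace appframework {
        std::vector<std::function<void()>> handlers;
        void on_ready(std::function<void()> cb) { handlers.push_back(std::move(cb)); }
        void run() { for (auto& h : handlers) h(); } // the framework calls *you*
    }

    int main() {
        // Library use: plug it in where appropriate.
        std::cout << imagelib::decode({1, 2, 3}) << "\n";
        // Framework use: hand over control and live inside its structure.
        appframework::on_ready([] { std::cout << "called back\n"; });
        appframework::run();
    }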
> It is not really relevant to efficient, performant software, it is more like an art for the sake of it.
This statement sort of presumes frameworks are a necessary prerequisite to software, which isn't the case.
Tools, maybe, but I have yet to see anything that calls itself a "framework" (i.e. Spring, Angular, Django, Salesforce) that doesn't a) bloat the hell out of the binary, b) use more memory than it can possibly justify, c) create impossible-to-diagnose problems, and d) provide little of substance beyond overhead.
The best framework I've used is probably MonoGame. It abstracts away accelerated 2D graphics across multiple platforms, asset loading, and input handling, among a few other things. You write your game not against DirectX, but against the concepts it introduces. Unlike modern ideas of frameworks, it doesn't preach to you that you need to name things certain ways, or impose other slightly-pedantic notions on you.
Instead, it's focused on getting out of your way. Most other frameworks seem to love getting in your way, because they see themselves as so important.
What people don't realize is that modern, high-level languages and abstractions give you much more than just speed of development.
If you use a garbage collector, you don't have to do manual memory management, which is amazing for security. You also don't need to worry about buffer overflows and weird pointer exploits.
If you use web tech, you get a lot of important stuff, like sandboxing, handling of esoteric keyboards, RTL languages and screen reader accessibility almost for free. If you use some extremely fast, small and lightweight GUI framework, you will probably not implement those, as they seem boring, and who needs that anyways.
If you go the handcrafted, lightweight and fast route, you're sacrificing a lot of features, possibly without even being aware of what you lose until it's too late.
I will happily shell out a few more bucks for RAM and wait a few more seconds to get an accessible, secure Messenger, instead of one that is going to steal my bank details the moment some stranger sends me a malicious GIF.
> If you go the handcrafted, lightweight and fast route, you're sacrificing a lot of features, possibly without even being aware of what you lose until it's too late.
The notion that an engineer signing on to make high-performance software in 2020 doesn't know what they're "sacrificing" is a little unreasonable. Conversely, it seems unlikely most people developing the kind of software you're advocating for understand the tradeoffs they've made at all.
This is all simply about where engineers start off today, and how woefully uninterested they often are in expanding their horizons or studying the history of their discipline.
You make good points. And I do think miki123211 is conflating memory safety with GC.
But the concerns about Handmade software lacking important user-facing things like accessibility and internationalization are real. TO take one particularly ironic example, lots of tools in the field of audio production roll their own GUIs, and invariably, these UIs are completely inaccessible to screen reader users. I say this is ironic because "blind musician" is such a stereotype. To dispense with a straw man, it's true that Electron would not be a good fit for these tools. But maybe their developers could accept the size of something like Qt in exchange for a GUI that works for all users without having to roll it themselves. You say that engineers should be aware of trade-offs, but in my experience, way too many, even very talented engineers, seem to ignore this one.
There are much deeper and more sophisticated reasons why screen-reader inaccessibility is not really the core of what makes audio production software inaccessible.
This is a debate/discussion I've been having for 20+ years now (as the original author of Ardour).
The GUI of a typical DAW (or even a moderately sophisticated plugin) serves as a memory aid for sighted users. 48 tracks? Which ones did I mute? A sighted user doesn't have to remember - they can just look.
Sight-impaired/blind users of older hardware mixing consoles use the tactile experience of those devices to partially fulfill the same role, which is why modern full digital consoles with tap-only-LED-lit buttons are so much harder for them to use: they have to either remember or investigate which features are enabled and which are not.
In addition, most DAW and many plugin GUIs present information in ways that are more or less inconceivable for screen reader presentation. Waveforms are the obvious example, although here perhaps sight-impaired users have what some would consider an advantage, in that they are forced to work entirely by ear, rather than use on-screen representations that have only a debatable relationship with the sound.
But routing presents another area where it can be very difficult to present the current state in a way that could ever be rendered meaningful by a screen reader.
Please don't think that I don't view accessibility or i18n as of critical importance - I absolutely do. But I also think that when application niches (like audio production) evolve their own highly-developed visual language to supplement and assist (sighted) users, the correct/better answer for sight-impaired users may be non-graphical applications rather than trying to figure out how to speech-render the graphical representation.
This is one area in which Linux is far ahead of Windows and macOS: there are numerous command line applications available for audio production which would strike utter fear into the hearts of most sighted users, but that turn out to be incredibly great for sight-impaired users. Ecasound, I'm looking at you (among others).
You obviously know way more about this field than I do. I'm a dilettante when it comes to recording in general, never mind developing a DAW. Thanks for sharing your thoughtful insights.
> the correct/better answer for sight-impaired users may be non-graphical applications rather than trying to figure out to how speech-render the graphical representation.
This cuts to the heart of a long-standing debate among accessibility advocates and blind techies. It's true that tools tailored specifically for blind people can be more productive and easier to use. But they can also isolate the blind community from the sighted mainstream. For example, would a DAW developed specifically for blind people enable them to collaborate with sighted peers, or to use one-of-a-kind plugins that are developed for mainstream DAWs with a visual interface? This is why, while retrofitting accessibility onto a GUI is often suboptimal, it's necessary.
A lot of VST/AU plugins already use Qt, JUCE or SynthMaker for the GUI and are completely inaccessible.
On the other hand, there are a lot of web pages and Electron apps that are completely inaccessible as well, despite using accessibility-ready web tech.
The problem is not really the lack of an abstraction layer, but more the fact that those developers are doing a shitload of work to make something look über cool and different from everyone else without taking a couple weeks (or maybe not even that much) to iron out accessibility issues.
By the way, at least for plugins, another option is not doing any GUI at all and just exposing all parameters to the host, like the AirWindows guy does [1]. Or using native OS controls. This is both very accessible and EXTREMELY cheap.
No disagreement there. Still, there are also developers who go out of their way to make their software both cool-looking and lean and mean by rolling their own GUI toolkit... and now it would take them way longer to implement accessibility.
I agree, but it sounds to me that the issue then is not with lean and mean code per se. Both AirWindows and people using native interfaces in plugins are leaner, meaner and more accessible than your example.
And by the way, to keep on topic, "handmade" audio software people are not really creating their own GUI toolkit. That's the big boys at Waves, Native Instruments, IK Multimedia, etc.
The problem to me is that people are trying to make things that look exactly the same everywhere with complete disregard for both the user and the particularities of each platform.
And by platform I don't just mean the OS: one of my pet peeves with audio software is plugins that skip the DAW's preset saving and implement their own. They walked an extra mile only to give me a shitty interface when I wish I could just use Logic presets or whatever.
What do you think about the following trade-off? (a) make software accessible to 99.9% of the population with base-level quality-of-life, or (b) make it accessible to only 98% but increase quality-of-life by 5% for the rest.
A pedantic point: buffer overflow protection and GC are separate things. Buffer overflow protection is bounds checking; a GC ensures that memory no longer in use is reclaimed. And the boundaries on the programs you mention have more to do with the system not allowing those actions than with the language.
Also, there are other ways to achieve memory safety without a GC, covering all but a few cases. One exception is reference-counted structures with cycles in their links. Another, though it has workarounds, is when cleanup must be postponed until a later time. But C++ provides guarantees like: at scope end, destructors (finalizers) run and the values are cleaned up. That gives you more than memory safety, since it covers other resources too. This is easier in greenfield projects than in code where old idioms/C compatibility are the norm.
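For anyone who hasn't seen the scope-end guarantee in action, here's a minimal C++ sketch (the file-logging scenario is just an invented example). The point is that cleanup is deterministic and covers arbitrary resources, not just memory:

    #include <cstdio>
    #include <memory>

    // Custom deleter so unique_ptr can manage a FILE* instead of memory.
    struct FileCloser {
        void operator()(std::FILE* f) const { if (f) std::fclose(f); }
    };

    void write_log(const char* path) {
        std::unique_ptr<std::FILE, FileCloser> f(std::fopen(path, "w"));
        if (!f) return;                  // early return: nothing to clean up
        std::fputs("hello\n", f.get());
    }                                    // destructor closes the file here,
                                         // on every path out, with no GC

    int main() { write_log("log.txt"); }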
To add to your point, high level abstractions allow for optimal code as it is more declarative. The low level constructs to make it fast are of no concern to the caller. And one can build upon this for new higher level abstractions.
What is your point? The way refcounted smart pointers are used in modern c++ doesn't come close to addressing all the memory safety issues. Iterator invalidation, bounds checking, use of uninitialized memory, use after move, dangling reference (outside the aforementioned cases), etc.
What they're pretty good at is solving for use after free. Even there, caveats apply (see widely documented thread safety fiasco of shared_ptr).
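For readers who haven't run into iterator invalidation: here's a short sketch of C++ that uses no raw new/delete at all and is still memory-unsafe, which is the kind of hole being described:

    #include <iostream>
    #include <vector>

    int main() {
        std::vector<int> v = {1, 2, 3};
        int& first = v[0];        // reference into the vector's buffer
        for (int i = 0; i < 1000; ++i)
            v.push_back(i);       // may reallocate; `first` now dangles
        std::cout << first;       // undefined behavior, and neither
    }                             // shared_ptr nor RAII would catch it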
IMHO these are unrelated problems. There's nothing preventing those 'handmade' libraries from using memory-safe languages and development practices, or calling into accessibility APIs provided by operating systems (assuming they are provided by operating systems), and on the other hand, Electron is the opposite of the 'handmade philosophy', but is still not accessible.
There's required complexity, and unnecessary complexity, and the latter is by far the worse problem in modern 'industrial' software, because industrial software isn't developed according to customers' needs, but to business needs, and the gap between what customers want and what the industry provides seems to widen more and more each year.
This 'industrial software development' that doesn't serve the user anymore is what the handmade philosophy is providing an alternative to.
> Electron is the opposite of the 'handmade philosophy', but is still not accessible.
On the contrary, Chromium probably has the single most mature and robust implementation of accessibility APIs across platforms. It's true that not all Electron apps are accessible, but it's way easier to make an Electron app accessible than to make a handmade GUI accessible.
Yeah, ok, but the problem is the same: even though accessibility APIs might be provided (this time by Electron, not the OS), applications don't make use of them.
Sure, we still have to fight that uphill battle. But it's way easier for a web application to implement ARIA than for a Win32 application with a custom control to implement UI Automation. Maybe that problem is unique to Windows.
>If you go the handcrafted, lightweight and fast route, you're sacrificing a lot of features, possibly without even being aware of what you lose until it's too late.
Some modern languages offer memory safety and very, very high performance ceilings as well, if you are willing to learn how and work at it. Examples include C#, Rust, F#, and Nim. There are others!
And in any language, taking some time to understand performance pitfalls and leverage the language as best you can makes a huge difference. Sometimes without any major maintenance or dev-time penalty. It just takes caring, and practice.
The linked manifesto doesn't seem to be especially against garbage collection. And on their web site the latest news item is a "Lisp Jam"...
The only reference to GC is in this sentence: "The deadline is approaching or the rent is due or we have taxes to fill out and a manager on our back and someone asking us why we always spend so much time at the office, and we just have to stick the library or virtual machine or garbage collector in there to cover up the places we can't think through right now."
The situation described is a kind of choosing-shame-but-ending-up-with-war-anyway outcome[1]; it's not saying that the referenced techniques (incl. GC) are generally bad.
If you want the advantages of the garbage collector but the power of native there are languages like D, Go, Rust, Objective-C, Swift.
If you want an "extremely fast, small and lightweight GUI framework" that also handles esoteric keyboards, RTL languages, screen reader accessibility and other things you can use your OSs native controls. I bet most naïve (and native) Cocoa/Win32 app will give you 100x better keyboard navigation than most Electron apps from large companies.
> I bet most naïve (and native) Cocoa/Win32 apps will give you 100x better keyboard navigation than most Electron apps from large companies.
That may be true for Cocoa, but it actually isn't for Win32. It takes work to make keyboard navigation work correctly in Win32, unless you're using a dialog with nothing but standard controls (note: dialog means something specific in Win32). Windows Forms helps with this. Also, as soon as you depart from the stock Win32 controls, you have to implement the UI Automation accessibility API entirely on your own, and that's not an easy task even for a single custom control. So there are actually good reasons why Win32 isn't a good choice for new projects.
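To illustrate the "it takes work" point: in a plain (non-dialog) Win32 window, Tab does nothing between child controls unless the message loop opts into dialog-style navigation, typically via IsDialogMessage. A minimal sketch, using only standard controls:

    // Tab navigation in a plain Win32 window: without the IsDialogMessage
    // call in the loop below, pressing Tab moves focus nowhere.
    #include <windows.h>

    LRESULT CALLBACK WndProc(HWND h, UINT m, WPARAM w, LPARAM l) {
        if (m == WM_DESTROY) { PostQuitMessage(0); return 0; }
        return DefWindowProcA(h, m, w, l);
    }

    int WINAPI WinMain(HINSTANCE inst, HINSTANCE, LPSTR, int) {
        WNDCLASSA wc = {};
        wc.lpfnWndProc = WndProc;
        wc.hInstance = inst;
        wc.lpszClassName = "TabDemo";
        RegisterClassA(&wc);

        HWND win = CreateWindowA("TabDemo", "Tab demo",
                                 WS_OVERLAPPEDWINDOW | WS_VISIBLE,
                                 100, 100, 300, 120, NULL, NULL, inst, NULL);
        // WS_TABSTOP marks a control as a Tab stop, but does nothing by itself.
        CreateWindowA("BUTTON", "One", WS_CHILD | WS_VISIBLE | WS_TABSTOP,
                      10, 10, 80, 30, win, NULL, inst, NULL);
        CreateWindowA("BUTTON", "Two", WS_CHILD | WS_VISIBLE | WS_TABSTOP,
                      100, 10, 80, 30, win, NULL, inst, NULL);

        MSG msg;
        while (GetMessageA(&msg, NULL, 0, 0) > 0) {
            // The opt-in: route Tab (and arrow keys) through dialog navigation.
            if (!IsDialogMessageA(win, &msg)) {
                TranslateMessage(&msg);
                DispatchMessageA(&msg);
            }
        }
        return 0;
    }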
There's another reason why web applications tend to be more navigable with third-party Windows screen readers than native applications. For the mainstream web rendering engines, third-party screen readers enable a sort of virtual cursor that lets the user navigate the DOM in a linear way by pressing the up and down arrow keys. This makes it straightforward to navigate to anything in the document, even if the application itself doesn't implement keyboard navigation. Now, this same navigation model could just as well be applied to native applications, and the Narrator screen reader built into Windows actually does offer that option. The fact that third-party screen readers restrict that option to web content is arguably a deficiency, but one that developers who care about accessibility should consider when targeting Windows today.
Disclosure: I work for Microsoft on the Windows accessibility team, primarily on UI Automation and Narrator. These are my own opinions though.
Actually it has been adding them back in the renderer process, and encouraging developers to use explicit IPC between the renderer process and the main process.
Electron removes a heap of sandbox limitations does it not?
That's not really web dev. It's the application of web tech to something that isn't web. A slightly pedantic point I guess, but if you choose to give up the benefits of a browser you can't really complain that you don't have them any more.
> If you go the handcrafted, lightweight and fast route, you're sacrificing a lot of features, possibly without even being aware of what you lose until it's too late.
And, just to amplify the point about accessibility, you may be blocking some people from using your app. So it's not entirely about what you lose. That's not so bad if you're developing a game. But the projects on this site include a programmer's editor and a debugger, both of which are completely inaccessible with a screen reader because, of course, they use a handmade GUI.
Handmade GUIs, frameworks drawing on canvas instead of using DOM, and abominations like VNC over web instead of normal desktop apps seem to be the new accessibility plague.
We're pretty much done with Flash and Java on the desktop, QT isn't that bad any more, accessibility-wise, but this kind of stuff is now becoming more and more popular.
I've heard of systems that promise you better utilization of your licensing resources, but what they do is put your precious 10 computers with some expensive software on the internet, letting any employee with a web browser book some time and access them via Web VNC. That's a brilliant way to turn a perfectly-accessible Windows app into something you can literally do nothing with.
Not everything needs to prioritize accessibility. This page is mostly tools for video games which typically have no accessibility. I think you may be bringing a pet concern into an area that isn't relevant.
What we, or at least I, want to ensure is that developers outside of the gaming context don't get too fired up by this manifesto, develop non-game-related tools in that spirit, and thereby block someone from doing their job because the tool is inaccessible.
Not everything is going to work for everyone from the very start. You don't need to stomp on every piece of software because it doesn't have niche features that you care about intensely. Pick your battles: each of these programs was created by a single person and is open source. They aren't government websites, and they aren't even the only way to do the things they do.
Pretty sure OP’s article is advocating completely removing that whole massive stack of trash of web tech and instead use technology designed for the particular computer they are running on.
I've been writing C code for more than 30 years, C++ for 27. I've written code at every level of "the stack", from the kernels of several *nix-related operating systems to application GUIs. I've worked on "plumbing", on libraries, and on applications.
And here's the thing: I understand, at a very very deep level, precisely why my application is slow to start up. I understand it at every level from the semiconductor gates in silicon up to the color specification lists used by the GUI toolkit. But none of that helps me make it as fast as the handmade community thinks things should be.
Why? Because I know that the slow startup is caused by substantial amounts of code that all add useful functionality for at least some of my users. Do I want to get rid of i18n? Shall I skip checking for new versions? How about forcing plugin scans to be initiated by users and never doing them automatically (don't even think of suggesting doing them in another thread, and don't ask me why)? Shall I skip the ability to handle RTL text? Should we drop human readable and editable configuration files in favor of some faster and less resource-eating format? How about a scalable GUI, is that really useful for most? How about a themeable GUI, given that it's not actually that themeable anyway?
And so on and so forth. Each one of the above (and lots more) contributes just a little bit to the slow startup. For most users, all but one of them will likely seem not very important. So which ones do we drop to get back to the sort of startup times that the handmade community seems to expect?
Because let's face it, xterm can put a window on the screen before my finger is off the return key. So why not a DAW (or an IDE or an editor or a vector graphics app or an image editor or a browser or a circuit emulator or whatever your thing is) ? Just which features do you want to cut to accomplish this?
Let's call 1 second the threshold for a bare system with an SSD and a 4-core CPU at 2 GHz. That gives you 8 billion cycles to start up, trading off with file access, so maybe only 4 billion of those have access to all the data you need in memory (a modern SSD loads what, 400 MB/s?).
So all the startup code and initial assets are larger than 200 megabytes and take more than 2 billion cycles to prep? Computers are fast, software is slow. Sure, things aren’t trivial to always fit in that package, but it’s disingenuous to say you need to take many seconds to start up.
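Spelling out that back-of-envelope as code (all numbers are the assumptions from the comment above, not measurements):

    #include <cstdio>

    int main() {
        const double cycles  = 4 * 2e9 * 1.0; // 4 cores x 2 GHz x 1 s = 8e9
        const double compute = cycles / 2;    // assume half lost waiting on I/O
        const double mb_read = 400.0 * 0.5;   // 400 MB/s SSD x ~0.5 s of reading
        std::printf("%.0f cycles, %.0f for compute, ~%.0f MB loadable\n",
                    cycles, compute, mb_read);
    }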
Look at a giant piece of software that was built for more limited times: Visual Studio. It used to start up in milliseconds back in 2000. Now it takes 30 seconds. Did it really improve things so much to justify that in the intervening years? I’d go with no.
You know what else has an amazing set of features and capabilities including internationalization and accessibility and all those other complicated things? Web browsers. And there's not a browser on the market that takes more than a couple of seconds to start up, including loading a full webpage from the internet and rendering it.
Well, maybe I'm being too harsh on myself. Timing Ardour on my machine (Ryzen 2950X with 64GB of RAM), it has something on-screen in less than 1 second. But actually loading a "significant" session is more like 6 seconds. There's approximately 300MB of data in the session in total, but most of that is audio data that isn't being read at startup.
I've seen DAWs start up massively faster than this, though. And also slower. Essentially I have some intuition that "correct" software design would appear to the user as a more incremental startup: the window would appear as rapidly as xterm does, and would slowly fill with everything that makes up the actual program. I personally find this hard to design for. Could be me, or the GUI toolkit I use or maybe it just is pretty hard.
As for browsers, "a couple of seconds" has some slop. When I restart Firefox here, with typically 100-200 tabs and the main/primary/#1 tab being gmail (unfair! i hear you cry), I'd say it was more like 5-10 seconds until it is usable.
So what you're saying is that it's okay for the Windows Modern photo app to take 3 seconds to boot up and show an image on the monitor? And it's okay for the image to be blurry and misplaced in the first couple of frames, then corrected in the next couple of frames. And it's okay that it crashes once in a while too, and it's okay for it to be frames behind when resizing, because all of those things are simply the result of features and layers of abstraction that have some little purpose, whatever that purpose might be. And you're okay with your software suffering for it?
To me, it's completely unacceptable and not the user experience I want to give. Where you draw the line is the question, but I personally want to provide the best experience possible.
The quality of software these days is extremely bad: buggy, big, and slow. And I'm not okay with it, despite how many reasons there might be. And the reason for it is usually massive amounts of code, frameworks, and layers upon layers of abstractions.
So you want your software to check for updates? Well, find a way that doesn't make the start-up time or user experience suffer for it - otherwise don't implement it. That would be my answer to that. I know standards like that can be hard to achieve in a business, but those are the goals I want to become better at achieving. And to me, that's what handmade is about: giving me the confidence to build my own systems, making them more responsive, and providing a BETTER user experience. Maybe with fewer features, yes. But at least the features that are there will be extremely good!
I'm saying that I know that my application does not start up as quickly as the hardware I personally run it on would suggest, and that I know why that is, but that doesn't help me know what to do about it.
While "handmade" code may be faster, it's also a big time sink and as a developer your job is to make tradeoffs.
For most software, you should probably focus on architecture instead of the lower level of your code. Sane code and architecture solves most performance issues. This includes technology decisions, like using web tech for applications instead of the native platform.
For example, Apple has had a lead of at least five years over Android when it comes to performance and user experience, because their system and apps were all built in native environments, instead of the JVM abstraction layer. It took at least five years for Android to catch up, and it took quadcore CPU's to do it. Around then, Apple carefully started using dual-core CPUs.
Meanwhile, companies like Facebook and Twitter spent dozens of work-years on trying to make web technology feasible in native apps and never really cracked it.
If they had just focused on building good native apps they wouldn't have wasted so much time.
Caveat / Aside: IIRC Facebook is having / has had massive issues with having too much code / classes / files in their app(s), on both iOS and Android. Which is (I believe) one of the reasons why they split off Messenger into its own app at the time.
Here's the fundamental difference between iOS and Android, and it's related to memory management:
On iOS, you pay for memory management in small installments, which make your code slower overall (reference counting in ObjC/Swift), but predictably so.
On Android, you pay almost nothing for memory management until at some random point the garbage collector shows up and stops your program to clean up your mess. Your program is going to be janky.
Apple deliberately ships devices with little RAM and kills applications that use too much of it. As a developer you better pay attention, or you will be flooded with bad reviews.
Google leaves RAM up to vendors. The secret to good performance with GC is to make the heap twice as large as the data which it contains. Low-end Android devices feel low-end, not just because of slower CPUs, but also because of too little RAM.
It has nothing to do with the "JVM abstraction layer". Java code on a good JIT easily runs faster than Swift or ObjC if there's a lot of allocations involved, but the GC behavior must be accounted for to give a good experience.
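A rough C++ analogy for the "small installments" model (shared_ptr here stands in for ObjC/Swift reference counting; the mechanisms differ, but the cost profile is similar): every copy pays a small, predictable atomic refcount update at the point of use, instead of deferring work to a collector pause.

    #include <cstdio>
    #include <memory>

    int main() {
        auto obj = std::make_shared<int>(42);
        long long checksum = 0;
        for (int i = 0; i < 10000000; ++i) {
            std::shared_ptr<int> copy = obj; // atomic increment here...
            checksum += *copy;               // ...atomic decrement at scope end
        }
        // The cost is spread evenly across every copy: slower per operation
        // than a tracing GC's bump allocation, but with no surprise pause.
        std::printf("%lld\n", checksum);
    }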
“Why does it take your operating system 10 seconds, 30 seconds, a minute to boot up? Why does your word processor freeze when you save a document on the cloud? Why does your web browser take 3, 4, 10 seconds to load a web page?”
The Handmade metaphor masks a key difference between software and physical goods. IIUC, mass-produced physical goods are inferior to their handmade counterparts because of the trade-offs that enable profitable mass production. So we have a strong emotional attachment to handmade things being better, and the Handmade movement plays on that emotional attachment.
But the metaphor breaks down if you think about it. A carpenter making a single piece of furniture can tailor it to the person who will be sitting on it. A cook making a meal for one person can tailor it to their dietary requirements or tastes. But software of any non-trivial scale is rarely written for one person. Even when it's developed according to the ideals of the Handmade movement, it will be mass-distributed.
I actually think that, contrary to our intuition about handmade things, Handmade software tends to have compromises compared to software that builds on well-established components. I won't repeat miki123211's top-level comment [1]. I'll just add here that it's precisely the economics of large-scale software distribution that allow concerns such as internationalization and accessibility to be addressed, while a solo Handmade project will probably not have these important things.
So, while slinging JavaScript code with no regard for performance and packaging it with Electron is certainly not the ideal, I strongly believe that the Handmade approach isn't either. As is so often the case, we need a middle ground.
> As is so often the case, we need a middle ground.
This is not a popular opinion here, but: the middle ground is writing handmade native apps using native OS widgets. You get both the speed and a mature environment to work with.
If you follow the platform conventions you will probably have better accessibility and internationalisation support than Electron. They will run fast as a bullet. They will consume a puny amount of memory. They will have dark mode for free on macOS. They will look like most other applications that come with the OS. Your platform probably also has a graphical editor that will make your job easier (.NET, MFC, and Cocoa all have one).
If it's something like .NET you won't even have to worry much about the "Turkish internationalisation Problem" that we were discussing yesterday. [1]
Things like localisation that require a framework/library in the web world are probably handled by the libraries provided by the OS vendor.
If it's "handmade", you're already putting a lot of effort that you wouldn't put in your day job. Why not use that effort to port directly to each platform, instead of recreating everything from scratch?
For Linux the situation is more complicated, of course, but maybe Qt is acceptable?
I mostly agree with you. But which toolkit for Windows is the "native" one? This is a well-known problem with Windows development, so I won't belabor it here. Hopefully Project Reunion improves the situation. But the murky state of native development on Windows is a big reason why developers just throw up their hands and go with something non-native that at least will work consistently all the way down to Windows 7 (or XP before that).
Disclosure: I work for Microsoft, but on the Windows accessibility team, not on a UI framework or developer tools.
Glad to see someone from Microsoft interested in seeing this issue solved. :)
Well, native for me is whatever provides native look and feel. Win32 if you want C/C++, WinForms (yeah I know) if you're ok with C#. Most people featured in the article's website would only consider "truly native" as being Win32, though, which is fine.
I figured this point of view would come up in this discussion. What you're really implying here, though, is that it's impossible to produce quality "hand-crafted" (that is, zero-dependency) software, or that it's so time-consuming that it might as well be considered impossible.
I'm not 100% sure I even agree that it takes longer to just program the damned thing yourself than to build on top of a tower of dependencies, but it seems indisputable that the finished product will be higher quality than the one that defers as much functionality off to external dependencies as possible. Why? Because for an external dependency to be useful, it has to address many different use cases. To do that, it has to expose abstractions that only exist to allow reuse. The unused use cases cause memory bloat (that even tree shaking won't address all of) and the abstractions slow things down at runtime.
But beyond that... why are you guys always in such an almighty goddamned hurry? You do realize that if you succeed in rushing out an inferior, shameful, mass-produced crud app, there's either another one waiting in the wings immediately after this one, or there's no reason to keep you around? Why not take pride in your work and insist on spending the time needed to produce the best quality, most efficient, smallest footprint, most user-friendly, snappy, responsive software possible? Users despise the software 99% of us churn out and I don't blame them - it feels like somebody was putting in the least work possible so they could leave work a half hour early in time to make it to the golf course.
> but it seems indisputable that the finished product will be higher quality than the one that defers as much functionality off to external dependencies as possible. Why? Because for an external dependency to be useful, it has to address many different use cases. To do that, it has to expose abstractions that only exist to allow reuse. The unused use cases cause memory bloat (that even tree shaking won't address all of) and the abstractions slow things down at runtime.
You seem to be treating "quality" as a synonym for performance. But it doesn't matter how high-performance your GUI is if someone can't use it because, for example, it doesn't support their language or their assistive technology (e.g. screen reader or alternative input). And supporting these things requires -- guess what? -- abstractions, that take lots of time to implement if you roll your own GUI from the ground up. That's why I stand by my view, which you nicely summarized in your opening paragraph.
Your last paragraph is a tired straw man. No one has the time to develop software that's perfect in every way. When we sacrifice absolute performance, it's not necessarily for the sake of slapping together crap software to make a quick buck. Suppose I set out to solve an urgent problem in my chosen field (accessibility). Shipping my solution faster doesn't just mean that I can quit work faster and start making money. It also means the solution can start making a positive difference in users' lives sooner. In light of that, it would be irresponsible for me to obsess over maximizing speed or minimizing RAM consumption, as much as some vocal nerds might insist that I have a responsibility to do so. My real responsibility is to the users who are waiting for the solution that I'm developing. Now, that doesn't mean that I should absolutely ignore performance or resource consumption, but I shouldn't obsess over them either, and I certainly shouldn't use them as reasons to waste time developing everything from the ground up when there are so many high-quality components I can use.
> But software of any non-trivial scale is rarely written for one person.
Maybe it should be? Much like how people cook for themselves all the time, it seems reasonable to encourage people to code for themselves at least every once in a while.
The overwhelmingly vast majority of software I've written, both personally and professionally, has been software with exactly one user: myself. Usually these are indeed "trivial" programs or scripts, but they are software nonetheless, and are crafted to my tastes, much like how the vast majority of the meals I cook are for myself alone and are cooked to my tastes.
> But software of any non-trivial scale is rarely written for one person. Even when it's developed according to the ideals of the Handmade movement, it will be mass-distributed.
It could be written for one person and mass-distributed with the understanding that you are using software written for Bob's needs.
>I'll just add here that it's precisely the economics of large-scale software distribution that allow concerns such as internationalization and accessibility to be addressed, while a solo Handmade project will probably not have these important things.
You have mixed up cause and effect here. Internationalization is a concern caused by mass distribution, not a pre-existing concern going unaddressed. The same goes for accessibility. Handmade software will meet the accessibility needs of the one person it is written for. Have different accessibility needs? That calls for a different piece of software.
Damn, this place looks amazing. It looks like El Dorado in a sea of despair for someone who's used to making the same points as this manifesto and being looked at like a weirdo in response.
Sadly they block Tor users, because they use Cloudflare. Maybe we're not that much on the same page after all.
Are you kidding? Or is it corporate speak, like "it does not block Tor, only 99% of its users"?.
Cloudflare is what makes the web unusable for Tor users. Every two websites you try to reach, yup, blocked by Cloudflare. The fact that you try to deny it makes it painfully obvious how there is no hope for Cloudflare. Sorry, I can't wish any good to your company.
TL;DR: knowledgeable, skilled programmers writing semi-confidential software in C, mostly for a single OS, by themselves, with total control of the feature set and release schedule, can produce pretty fast binaries.
Which is great! I just don't know how to transfer that to building software as a team with multiple skillsets, an unknown feature set (because market fit is still being searched for), an aggressive schedule ("we don't know what we want to build, but we need it in six months"), a large target audience ("what do you mean they have to install stuff? Where is the link I can paste in an email so they open it in IE6?"), and, let's be honest, less knowledgeable and lazier developers.
(To be clear: I'm not criticizing the network or project. I just find it interesting that they had a goal - making fast software - and that, to the best of my knowledge and without a rigorous analysis, it seems that most projects converged on a very particular kind of software. Which just happens not to be the kind most of us will ever be working on, but that does not matter. Go them!)
For me, it's just about learning from others or getting in touch with the part of myself that was excited about programming.
I worked in web development for a few years (2010-2015) and then left to start a company with a friend (mid 2015). We decided to build audio reactive visualization software written in C/C++/glsl, way different from where web development was and is (still do web stuff but primarily backend services for our app and React for the sites).
The industry we're in focuses primarily on international music festivals (Burning Man, Boom) and specialized art installations/exhibits. We were successful, and now we have employees, a vibrant community, ambitious plans for new products, and aggressive deadlines.
I say that only to point out that work is work. Whether you're working for someone or working for yourself, you can always find things to stress about and reasons to push yourself to the brink of burnout.
As corny as it sounds, Handmade Hero has been a reminder of the reason I quit my job 5 years ago to do something technologically and emotionally challenging. It keeps me in touch with the younger me that just wanted to learn everything I could about programming: web, native, embedded, whatever.
To your point, though, it definitely is not for everyone. If you see software as simply a means to an end, then there is nothing wrong with focusing on components that make products viable. I enjoy the craft but I also understand you can't have success by being precious about every little technical detail.
Just wanted to share my experience. I wish I had come across something like Handmade sooner in my dev journey.
Speed is a very real selling point for software though. If your competitor takes 6 months to get to market, and then is locked-in to a bloated, sluggish, badly-architected codebase made from 50 abstraction layers, you have plenty of time to come in later and steal their userbase, when your project is done and can do things in 1/10th the time.
My prior in such a situation would be to assume that "stealing userbase" is very hard. It depends on the kind of application, of course, but once you have paying/committed users, I'd assume the friction to change (and natural risk/change aversion) would be in favor of the incumbent.
That being said, arriving afterwards with a message of "we're like X,but faster" could be a good marketing catch, but probably not for any kind of enterprise app. (The only good marketing catch in this case would be "we're like X, but cheaper.")
No idea how such a hypothesis could be tested. We would need records of users switching from one app to another.
It's not only about being fast, it's also about being bug free. You inherit all the bugs of all of your software dependencies... That may be the reason why flagship software products such as Twitter or YouTube constantly have pretty glaring bugs.
And SQLite has a page on why [1] that explicitly lists the benefits of C for the project. Not on that list: bug reduction. Moreover, it has an almost superhuman level of effort put into testing and verification.
SQLite is just one piece of a broader argument that C's inherent risks aren't completely unmanageable. All it takes is an incredible amount of diligence, tooling, expertise, and money.
Your comment said "no one writing in C has bug reduction as a priority." Bug reduction is a priority for the SQLite authors, and it's written in C. This feels to me like an indication that it is a priority: https://www.sqlite.org/testing.html
So the bottom of the linked article says 2016, which made me immediately think 'old!'
It makes complete sense for the manifesto to be timestamped, but the surprise was that I didn't realise I was reading the manifesto rather than this actually being the normal landing page... there's no title or anything and HN usually links to homepages.
But a quick glance at the forum and the homepage shows there's activity there still!
> So the bottom of the linked article says 2016, which made me immediately think 'old!'
I yearn for the day when the industry reaches a state of reliability where we see "old" as a good thing, i.e. "this problem was solved ages ago!", rather than it seeming out-of-date.
I felt that the manifesto gave the best description of the community. The homepage is mostly news about specific projects. I'm not directly affiliated with the group, but I follow some of the developers. And their podcast is pretty good too!