
On a side note inspired by this blog post.

I'm wondering if humans are mostly incapable of producing great things without (artificial) restrictions.

In this case, Marginalia is (ridiculously) efficient because Viktor (the creator) is intentionally restricting what hardware it runs on and how much RAM it has.

If he just caved in and added another 32GiB, it would work for a while, but the inefficient design would persist, the problem would just rear its head later, and by then there would be more complexity built around that design and it might not be as easy to fix.

If the original thesis is correct, then I think it explains why most software is so bad (bloated, slow, buggy) nowadays. Very few individual pieces of software are hitting any limits (in isolation), so each individual piece is terribly inefficient, but with the latest M2 Pro and a gigabit connection you can just keep ahead of the curve where it becomes a problem.

Anyway, this turned into a rant; but the conclusion might be to limit yourself, and you (and everyone else) will be better off long term.




It is mostly a matter of priorities.

For most applications it simply does not make any sense to spend this much time on relatively small optimizations. If you can choose to either buy 32GiB of RAM for your server for less than $50 or spend probably over 40 hours of developer time at at least $20 / hour, it is quite obvious which one makes more sense from a business perspective. Not to mention that the website was offline for an entire week - that alone would've killed most businesses!

A lot of tech people really like doing such deep dives and would happily spend years micro-optimizing even the most trivial code, but endless "yak shaving" isn't going to pay any bills. When the code runs on a trivial number of machines, it probably just isn't worth it. Not to mention that such optimizations often end up in code which is more difficult to maintain.

In my opinion, a lot of "software bloat" we see these days for apps running on user machines comes from a mismatch between the developer machine and the user machine. The developer is often equipped with a high-end workstation as they simply need those resources to do their job, but they end up using the same machine to do basic testing. On the other hand, the user is running it on a five-year-old machine which was at best mid-range when they bought it.

You can't really sell "we can save 150MB of memory" to your manager, but you can sell "saving 150MB of memory will make our app's performance go from terrible to borderline for 10% of users".


What if runtime performance and developer performance aren’t inversely proportional?

It might just be that, to a certain degree, we're not actually getting any business efficiency from creating bloated and slow software.

A lot of things, especially in business IT, are built on top of outdated and misleading assumptions and are leaning on patterns and norms touted as best practices.

We sometimes get trapped in this belief that any form of performance improvement somehow costs us something. What if it’s baggage that we didn’t need in the first place?


I say this having worked in the software business for coming on 15 years: I don't think an organization where business is calling the shots would be capable of building an Internet search engine.

The entire project is a fractal of this type of business-inscrutable engineering, and in any organization where engineers aren't calling the shots, that engineering isn't going to get done, and the project is going to be hideously slow and expensive as a result.

In a parallel universe where I had gone the classic startup route and started out using the biggest off-the-shelf pieces I could find, gluing them together with Python (instead of building a bespoke index); then thrown VC money at hardware when it started struggling, and even more when it struggled again; I'd be absolutely dumbfounded when it yet again hit a brick wall and my hardware cost was tens of thousands every month (as opposed to $100/mo now).

Since I've instead built the solution from scratch, I've also built a deep understanding of both the domain and the solution, and when I'm faced with scaling problems, they're solvable in software rather than hardware. I can just change how the software works until it does. It's a slower route, but it's also able to take you places where conventional-wisdom-driven-development does not.


Congratulations on your latest steps forward. We empathise. Resourcefulness, the ultimate asset which sparks creativity.


> We sometimes get trapped in this belief that any form of performance improvement somehow costs us something

But it does always cost you something. Developer time isn't free, after all.

If we only cared about performance, we would be handwriting SIMD intrinsics for baremetal applications. But we don't, because it is easily worth a 20% performance penalty to write code in a modern programming language. We're willing to trade 10% performance for a well-tested framework and library ecosystem which greatly reduces development time. Nobody cares how efficient your application is when it never ships.

Even "bloated" and "slow" do not always mean the same thing. Just look at something like database indexes: they take up space (bloated), but make your application faster. Often that's a worthwhile tradeoff, but creating indexes for literally everything isn't a good idea either. It is all about finding the right balance.

I do agree that a lot of user-facing applications have gone way too far, though. Even completely trivial Android apps are 150MB+ binaries these days, and the widespread use of memory-hogging Electron tools is a bit worrying. When your app runs on millions of devices, you should care about resource usage!


> We're willing to trade 10% performance for a well-tested framework and library ecosystem which greatly reduces development time. Nobody cares how efficient your application is when it never ships.

I think this is the sticking point. People assert without any real evidence that whatever framework greatly reduces development time, and if that were the tradeoff, it might make sense; Rails and Laravel, for example, bill themselves this way.

Meanwhile, I've found that a more barebones framework in Scala is more productive to develop with, and also gets at least 100x the performance (e.g. a request rate in the 10s of thousands/second is easy to do on laptop hardware), which also makes it operationally easier since now you don't need a giant cluster.


> But it does always cost you something. Developer time isn't free, after all.

According to the rest of the comment we seem to largely agree. But this statement is what I want to challenge a bit.

Basically it comes down to this: We're doing plenty of unnecessary stuff all the time, especially in business IT (web applications, CRUD, CRM, CMS, reporting etc.) software.

Your Electron example points in the right direction, but there are also a lot of practices regarding code organization (patterns, paradigms), being too liberal with pulling in dependencies, framework bloat, etc.

Simply getting rid of things that we don't need and thinking about performance in very rough terms gets us _both_ better performance and minimal code. I would wager that this isn't a tradeoff as you describe. We might, especially in the mid-to-long term, actually gain developer time.

This often means thinking about DB access patterns, indexing and so on, as you mentioned. Meaning we think in SQL and lean on the capabilities and heuristics of our DB. What does the DB actually need to do when I run these queries? How can I model the data in a way that gives me just enough flexibility with reasonable performance? Which parts of the system need to know what? How does the data need to flow through it?

All that stuff we sometimes put on top (ORMs, OO patterns etc.) can get in the way of that. Does it really make us more productive to put these abstractions on top? Do we gain anything from doing these OO incantations?

The article in question is a really good example of removing complexity and drastically increasing performance at the same time.

I have a good example as well.

Our internal image optimization module, which we use to generate differently sized images and formats in order to accommodate browser capabilities and screen sizes, was getting noticeably slow, especially for e-commerce related sites and brochures that typically feature a gallery per product/service.

Long story short: It got 50-60x faster simply by removing a convenience layer, writing SQL directly, processing the images in grouped batches and so on. AKA all just low hanging fruit. The end result is also simpler. It took work/time but it didn't need to be that slow in the first place.
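To make the flavour of that change concrete, here is a rough Python/SQLite sketch of the per-item-round-trip versus grouped-batch pattern; the "images" table and its columns are hypothetical, not the actual module described above.

    # Rough sketch of "grouped batches instead of per-item round trips";
    # the "images" table is hypothetical, not the module described above.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE images (id INTEGER PRIMARY KEY, path TEXT)")
    conn.executemany("INSERT INTO images (id, path) VALUES (?, ?)",
                     [(i, f"/media/{i}.jpg") for i in range(10_000)])

    def fetch_one_by_one(ids):
        # Convenience-layer style: one statement (and one round trip) per image.
        return [conn.execute("SELECT id, path FROM images WHERE id = ?", (i,)).fetchone()
                for i in ids]

    def fetch_batched(ids, batch_size=500):
        # Direct SQL in grouped batches: far fewer statements and round trips.
        rows = []
        for start in range(0, len(ids), batch_size):
            chunk = ids[start:start + batch_size]
            placeholders = ",".join("?" * len(chunk))
            rows.extend(conn.execute(
                f"SELECT id, path FROM images WHERE id IN ({placeholders})", chunk))
        return rows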

And we are our own users too! We have to maintain, extend, test our code. Faster code gives us faster iteration cycles. Fast feedback loops lead to much higher productivity and flow.


The "no free lunch" fallacy is pervasive, but it's quote obviously false. Lots and lots of things are free.

Things normally don't exist in an optimum state where you must necessarily lose one thing to gain another.


> In my opinion, a lot of "software bloat" we see these days for apps running on user machines comes from a mismatch between the developer machine and the user machine. The developer is often equipped with a high-end workstation as they simply need those resources to do their job, but they end up using the same machine to do basic testing.

Incidentally, I think the reason they need those specs is the same: the people building the dev tools all have top end hardware, and what’s fast enough for them is good enough to ship. I don’t think the people building the dev tools at meta, or apple, or google are seriously considering the use case of a developer working on an old dual core 8 gb machine, but that’s the reality in large parts of the world.


To a certain extent, yes.

On the other hand, development tools simply require a lot of resources: a debug build is always going to be more demanding than a production build, and a full recompilation of something like Firefox is never going to be fast.

I reckon a lot of developers are more than happy to sacrifice 8GB of RAM for near-instant advanced autocomplete and typechecking, but we should keep in mind that it should not become a requirement to do basic development.


Oh, I agree that this is the cause, but no thanks.

I will keep using the heavyweight development tools that run through all of my code discovering problems, analyzing its quality, changing it into more performant code, deducing all kinds of new code to complete my high-level view, and providing all the other modern benefits that use my computer to replace my work.

I can get behind testing the resulting executable on a low-power machine. But not on using it for everything.


So I guess that is the point GP is making. From a practical perspective one would just spend $50 on RAM and forget about it. But you miss the opportunity to make something great, in terms of algorithm improvements for example, even if it costs you more.

So here the artificial constraint is that "you can't have more RAM", and you need to find other, more creative solutions.


But OP could have more RAM, and the end user doesn't care about how clever the algorithm is. They only care that the service is offline for a week while the developer is having fun implementing their new toy algorithm.

Working on improvements like this makes a lot of sense in academia, when you are running thousands of servers, or when you need so much memory that you can't buy a big enough server. It's a nice backlog item for when there is literally nothing else to do.

I have definitely fallen into this rabbit hole myself. Solving difficult problems with clever solutions is a lot of fun! But my manager - rightfully - chastised me for it because from a business PoV it was essentially a waste of time and money. Fine for a hobby project, not so much when you are trying to run a business.


> Working on improvements like this makes a lot of sense in academia, when you are running thousands of servers, or when you need so much memory that you can't buy a big enough server.

It also makes sense for all desktop and mobile apps, per gnyman’s point on not hitting individual limits. Also for anything that other people will be running, like libraries. Because your thing might perform marvelously in isolation on a beefy dev machine and still run like ass on every user’s somewhat underpowered computer with ten other apps competing for RAM (hello Slack).

See also: “craptop duty”[1]; how the 512K Mac reduced the competitive advantage of MS's ruthlessly optimized office apps, but Switcher (the precursor to MultiFinder) restored it[2].

[1] https://css-tricks.com/test-your-product-on-a-crappy-laptop/

[2] https://www.folklore.org/StoryView.py?project=Macintosh&stor...


Thanks for the links, interesting story about switcher.

And yes, hearty recommendation on the craptop.

I was discussing this with a colleague a while back; he said he had to get one, as he was unable to reproduce a problem on his M2 even with the 6x CPU throttling you can do in Chrome. On the old laptop he dug out of storage he managed to reproduce it instantly.

I need to check with him how easily he could fix it. One argument I have for why some optimisation is always good is that it's easier to fix things as you go along than to go back and try to make things better afterwards. I mean, if there is one thing making it slow then it's easy, but I have a feeling it's often a mix of things, and at that point you might not be able to replace the slow parts because everything else depends on them.


(I might have actually linked the wrong Switcher story. The one I remembered explained why its existence was strategic for Microsoft, but control over it was not: multitasking made memory optimization matter on a larger machine.)


It also costs everyone else more to run the inefficient code that you produce.


> over 40 hours of developer time at at least $20

I think maybe you dropped a zero.


I intentionally underestimated the cost to provide a lower bound.

In reality, the listed changes, once you consider all team members, are likely going to run closer to 80-120 hours at $150/hour - but that only confirms the final conclusion.


Agreed, it's a no-brainer to add some cheap hardware rather than spend two orders of magnitude more to fix the inefficient code.


Yeah this aligns with my view. Limitations breed ingenuity, and that isn't limited to demo scene outputs. You're going to run into scaling problems sooner or later, and they're a lot easier to deal with early than late. If your software runs well on a raspberry pi[1], it's going to be absurdly performant on a real server.

It's actually how we used to build software. It's why we could have an entire operating system perform well on a machine like a Pentium 1 with most of what you'd expect today, while at the same time we have web pages that struggle to scroll smoothly on a smartphone with literally a thousand times more resources across all axes. The Word 95 team were constantly faced with limits and performance tradeoffs, and it was very clear whether something worked or did not.

If I had just gone and added more RAM (or whatever), I would still have been stuck with an inferior design, and soon enough I would need to buy even more RAM. The crazy part about this change is that it isn't just reducing the resource utilization, it's actually making the system more capable, and faster because free RAM means more disk caching.

[1] e.g. this runs on a single pi, and is much faster than production wikipedia because it doesn't permit updates: https://encyclopedia.marginalia.nu/article/Hacker_News


Oh yes! That's my pet theory too.

I think it’s why old computers felt good and also why old games were so good.

Maybe it has something to do with the complexity of the systems we deal with.

When you have a restricted amount of some resource (RAM, physical space, food, materials, time, money …) you have to plan how you will use it. You are forced to be smart.

When you have a virtually infinite resource, you can make whatever you feel like making, but you don't have to really care about the final state; you just start and you'll see when it works.

I'm not exactly a true gamer, but I've always been amazed by the fact that humans were capable of storing so much emotion, adventure and time to enjoy in the good old cartridges with a few KB/MB of ROM. I mean, the Ocarina of Time ROM is just the size of the last 8 photos I took with my iPhone.


The guy who made VirtualDub (virtualdub.org) has a blog where he said essentially that. His video program is super small because it doesn't use 4 packaged libraries; he programmed everything to hardware / OS interfaces directly.


American Airlines ran SABRE, a sizeable airline ticketing and reservation system, in the mid-1970s on two System/360 mainframes that could only process a few tens of millions of instructions per second.

A Raspberry Pi 2 can do over 4 billion Dhrystone instructions per second, and a Pi 4 over 10 billion per second.

Of course, by modern standards mid-1970s SABRE was pretty barebones for an airline's main system, but it's at least theoretically possible to run simplified systems for over 100 airlines simultaneously on a single Pi 2...

So yes, modern programs are very far from optimized. 1000x or 10,000x improvements are possible, less so for math-heavy stuff.
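The back-of-the-envelope arithmetic behind that claim, using only the rough figures quoted above (not measurements):

    # Back-of-the-envelope version of the comparison, using the rough figures above.
    sabre_ips = 30_000_000        # "a few tens of millions" of instructions/sec, mid-1970s
    pi2_dips = 4_000_000_000      # ~4 billion Dhrystone instructions/sec
    pi4_dips = 10_000_000_000     # ~10 billion Dhrystone instructions/sec

    print(pi2_dips / sabre_ips)   # ~133 SABRE-sized workloads on a Pi 2, in theory
    print(pi4_dips / sabre_ips)   # ~333 on a Pi 4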


Good point about not hitting limits individually.

I think Microsoft has a huge problem with this. Even $3000 laptops from 5 years ago struggle with running a Teams call, some Office instances and a browser with 30 tabs at the same time without slowing down to unacceptable levels.

They test stuff individually and running one thing alone is fine, but that's not what people do.

I'd imagine that artificial limits in the form of run time on well-defined hardware that are only raised after an explicit decision could be the solution to this.

But then again I only write business software where the performance aspect comes down to "don't do stupid shit with the database and don't worry about the rest because the client won't pay for those worries", so I might be on the wrong track entirely.


This is especially true in UX design.

When I see a website...

- using a ridiculously thin or small font,

- relying on a high-end monitor to provide sufficient color contrast,

- loading an unreasonable amount of resources only to provide laggy animations

...I'm wondering if the responsible designer(s) only have 32-in Retina displays and the latest Macbooks to work with. Because on any other combination of devices, the website looks and feels awful.

And I know this because I was formerly guilty of it!


> I'm wondering if the responsible designer(s) only have 32-in Retina displays and the latest Macbooks to work with. Because on any other combination of devices, the website looks and feels awful.

I think often it’s that they aren’t users themselves. They make it “pretty” but not functional.


It's extremely rare that UX designers are also users of the products they develop. Maybe they use Figma or Amazon's order entry even if they don't work for Figma or Amazon, but which of them is going to use the order entry form for Random Customer N again after they've started working on the registration form of Random Customer N+1?


My GF is a designer and she always says that the problem is that nobody tests anything. I've been helping with a project for her client and pretty much everything was done on the fly.

I was able to spot obvious flaws in the design, she agreed but said that she had no time and such is life.


There are so many tools to see what a website looks like on multiple screens / devices. Even full-on emulation. I can see a designer making this oversight, but a UX designer doing that kinda makes the UX part of the title irrelevant.


> I'm wondering if humans are mostly incapable of producing great things without (artificial) restrictions.

On the one hand, I agree with this. I think an awful lot of great art and great work comes from the enforced genius of operating within constraints, and there's a profound feeling that comes from recognizing that kind of brilliance.

I'll also say, though, that there's also something about seeing the results of absolutely turning every knob to 11 - about seeing the absolute unfettered apex of what particularly talented people can actually do with no constraints whatsoever. It's a very different experience, and I deeply respect the genius of making art under constraint, but sometimes you've just gotta put a dude on the moon, you know?


> sometimes you've just gotta put a dude on the moon, you know?

In software development it looks like everyone and their grandmother is sending people to all moons known to mankind, though.


> In software development it looks like everyone and their grandmother is sending people to all moons known to mankind, though.

And crash landing every time. To do a far reaching soft landing you need to have learned to reach LEO within constraints.


> I'm wondering if humans are mostly incapable of producing great things without (artificial) restrictions.

Here is an opinion to support your hypothesis from a couple of different domains: poetry and music.

While some people prefer free verse and avant-garde music, what stays most in my mind, and what seems to endure longest overall, are poetry with regular rhyme and meter and music that follows standard patterns of melody, rhythm, and harmony. Having to force their creativity into those sometimes rigid frameworks seems to enable many artists to produce better works.


Maybe a counterexample to it, not sure if it can be applied:

I write software in ABAP, which is a weird and ridiculously complicated language that has inline SQL and type checking against the database and has never had a major version that breaks old stuff, so code from 30 years ago will (and does) still run.

I used to have fun working around the quirks of it and finding solutions that work within the limitations, but now I'm just frustrated by having to solve problems that haven't existed for the past 15 years in the rest of the world, and by looking at terrible code that can't be made any nicer because of those limitations or because customers don't give a shit as long as it works most of the time.


There are definitely some artists who feel that way. Jack White comes to mind. He'll deliberately use restrictions (like writing music for only two instruments) and even physically obstruct things at live performances. See this (very good) interview with Conan: https://m.youtube.com/watch?t=890&v=AJgY9FtDLbs&feature=yout...


This is very true.

And as a developer or a team, you're bound by how long development takes, not by the required resources.

You won't be asked by a business stakeholder "oh, and how much RAM does it take?" or "why is it $2,000 a month instead of $1,000?". These questions tend to come much later when profit needs to be ironed out.


And later, when performance becomes important, it is often much harder to improve than early on. Especially with legacy db schemas with a lot of existing customer data.


Interesting observation and aligns with my experience of really enjoying small focused tools and apps. This website is a good example.

Further, it feels like there's a corollary here to companies, where financially constrained companies who are smaller and more focused provide better customer experience than cash-flush competitors.


> I'm wondering if humans are mostly incapable of producing great things without (artificial) restrictions.

I think the real issue is that there isn't a programming language that produces a compiler error if the given code can exceed a maximum specified latency.

Even working on a program with soft-realtime scheduling, I've had to constantly push back against patches that introduce some obscure convenience without having measured worst case latency.

The problem is so bad I doubt most people realize it's there. I don't know what the answer is, but I have the feeling there's an intersection with timing attacks on software/hardware. Some kind of tooling that makes both worst-case times and variance as visible as the computed CSS in devTools would probably help. Added to some kind of static analysis, it could perhaps let devs hack their way to decently responsive interfaces and services.
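As a sketch of the "make worst case and variance visible" idea, even a tiny harness like the one below shows more than an average would; the handler is a hypothetical stand-in, not anyone's real code or tooling.

    # Tiny sketch: surface worst-case latency and variance, not just the average.
    # The handler is a hypothetical stand-in for whatever is being measured.
    import statistics
    import time

    def measure(fn, iterations=1000):
        samples = []
        for _ in range(iterations):
            start = time.perf_counter()
            fn()
            samples.append(time.perf_counter() - start)
        return {
            "p50": statistics.median(samples),
            "worst": max(samples),
            "stdev": statistics.stdev(samples),
        }

    def handler():
        sum(i * i for i in range(10_000))

    print(measure(handler))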


It would probably help if the whole "developer spec" thing went away. I never understood why people think they need 32GB of RAM and a top-of-the-line CPU to write code. If you're compiling a lot (especially C++, I guess), then you need a build server. I wonder how much better things would be if "developer spec" actually meant something close to a median or representative spec.


I completely agree. Every creative I've ever trusted has the same philosophy: freedom through constraints. I've found in my life, too, I can focus more closely on elegant solutions when they become (perhaps artificially) necessary, not merely aesthetically pleasing. I'm actually having a similar experience of insane efficiency improvements in a personal project, much smaller in scope, that came down to using bit operations and as-branchless-as-possible methods for an Arduino Nano.


Without getting into efficiency and priorities, I think it's easy enough to claim that a great way to spark creativity or great solutions is to impose constraints.

It's about specializing around the use of a few elements to achieve a goal, versus facing a paradox of choice or falling back on common, well-known patterns.

Jams can be great for this and people realize they can work so much more efficiently and focus on the core of their idea.


I very often use the high-speed 3G throttling option in the network tab when developing web UIs, to give myself some serious constraints instead of assuming everyone is using a developer workstation.



