Every once in a while a company like Tesla comes along and reconstructs a particular product, gaining incalculable advantages over its competitors.
Every once in a while a doctor is forced to re-break someone's arm that didn't heal properly.
Every once in a while a severe economic recession comes along and wipes out all the badly managed companies kept on life support by banks/government.
Every once in a while a programmer writes a piece of software that has almost the same functionality as an existing one but is, somehow, just better.
And every once in a while a revolution comes along that completely destroys the existing system of governance, when it can no longer fix itself properly.
And that's when resources get deallocated.
Civilization will fall, and it will rise again from its ashes.
It looks like the world uses stop-the-world garbage collection on particular threads while keeping the rest of the world running.
This happens at the micro level (one employee is swapped for another who is better), at the macro level (banks/Tesla), and at the country level (socialism is replaced by more efficient and humane capitalism, as in the USSR/GDR).
The problem is when the bad employee cannot be removed, the bank is bailed out, or the people are unable to get themselves a new form of government.
The key to efficient reallocation is not letting too much power concentrate in one location. And I think that is the thing most at risk right now. Governments have become too powerful and are preventing the necessary creative destruction, like an operating system refusing to run the garbage collector and preventing new programs from running.
This is a real problem, but not as big as Libertarians like to make it out to be. Additionally, there is a tendency to ignore the very large command economies within our market. Surely the command economy within Wal-Mart is at least as large as the USSR's was, and it appears to have just as many inefficiencies and absurdities generated by perverse incentives and miscalculations. Because it is not the state, it manages to do away with the most egregious cruelties of a planned economy, and it is definitely preferable to a state-planned economy, but it's also probably not the best we can do.
It's a late reply, but National Socialism only ran for a short time before being replaced with a wartime economy. In effect, the program ran fine because it didn't run long enough for garbage collection to become an issue.
This is a good example of the adaptive cycle that's used to explain behavior in complex adaptive systems of various kinds. The cycle goes through up to four phases: rapid growth, conservation, release, and reorganization.
Rapid growth happens when a relatively unexploited resource is abundant (usually after a release/reorganization) and there isn't much competition. This is like a bare field - weeds move in quickly to take up unused space.
Gradually, competition heats up as more things get in on the action. This is where optimization starts to become important, because the ones that do a better job with what's available start to come out on top, at the expense of the less-competitive ones. Late in the process, you're in the conservation phase, where it's hard to change the entrenched winners because they have all of the resources locked up (can be money, nutrients, or whatever). In the field analogy, the bigger weeds shade out the smaller ones, until eventually you have large trees taking up most of the available resources. This phase is the longest-lasting of the four in general.
The system at this point is highly optimized and dependent on the winners of the conservation phase. At some point, these few hub elements get hit by a disturbance that shakes them up a bit, and the repercussions spread throughout the whole system. If it's a big enough disturbance, the whole system can fall apart due to the loss of the hubs. This is the release (so-called because the stuff that was locked up by the entrenched players becomes available again). In the field example, this might be a blight that knocks out the climax tree species, opening tons of space for new competitors and releasing the accumulated biomass in the trees for reuse.
At this point, it's a free-for-all to see who can get a toehold first in the new rich environment, and a brief period of reorganization ensues while that's being sorted out. In the field example, this is where all of the dormant weed and tree seeds that have been waiting in the soil sprout at the same time. This quickly transitions into rapid growth again, and the whole thing repeats.
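Out of curiosity, here's the cycle as a toy simulation (a minimal, made-up model for illustration, not anything from the formal literature - the growth curve, disturbance probability, and release factor are all invented):

    import random

    # Toy adaptive cycle: logistic growth (rapid growth -> conservation),
    # with disturbances more likely to trigger a release the more "locked
    # up" the system is, followed by reorganization from a low base.
    def adaptive_cycle(steps=300, capacity=100.0, growth_rate=0.15,
                       disturbance_chance=0.02, seed=42):
        random.seed(seed)
        biomass = 1.0
        history = []
        for _ in range(steps):
            # Growth is fast while resources are free, slow near capacity.
            biomass += growth_rate * biomass * (1 - biomass / capacity)
            # Late conservation phase = highly connected = fragile.
            locked_up = biomass / capacity
            if random.random() < disturbance_chance * locked_up:
                biomass *= 0.05  # release: locked-up resources freed
            history.append(biomass)
        return history

    for t, b in enumerate(adaptive_cycle()):
        if t % 25 == 0:
            print(f"t={t:3d}  biomass={b:6.1f}")

Plotted over time, this gives the familiar saw-tooth: long slow climbs (conservation) punctuated by sudden collapses (release) and quick regrowth.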
It's also an instance of self-organized criticality, where a system tends to evolve toward a critical state (i.e. local disturbances can have global implications).
There are ways to manage this cycle to some extent when you're aware of it - primary among them is to deliberately introduce disturbances to shake up the formation of the conservation phase and loosen the connections that have built up. Basically, you introduce more randomness into the works in order to keep things from becoming overly optimized (or highly correlated, in the language of self-organized criticality).
"Basically, you introduce more randomness into the works in order to keep things from becoming overly optimized (or highly correlated, in the language of self-organized criticality)."
That sounds a lot like how I've heard AI guys talk about avoiding optimization pitfalls when training.
I guess the difference is we can run that on an accelerated timescale, whereas civilization marches to its own, slower beat.
I hadn't thought of it that way before, nice point. I guess you could look at a complex adaptive system as a learning system that gradually discovers the best way to exploit its environment. Over-learning is what leads to the release phase (when the environment goes outside of your expected bounds and causes problems), and our meddling to induce extra randomness serves the same purpose here as it does in AI systems - ensuring that all of the variation of reality is represented, and a bit more.
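The parallel can be made concrete with a toy optimization example (a hedged sketch of the general idea, not any specific training algorithm): plain hill climbing gets stuck on a local peak of a bumpy landscape, while occasionally injecting a large random "disturbance" lets the search keep exploring.

    import math
    import random

    def f(x):
        # A bumpy landscape: many local maxima, global peak near x = 0.
        return math.cos(3 * x) - 0.1 * x * x

    def climb(x, steps=5000, step_size=0.05, kick_chance=0.0, seed=1):
        random.seed(seed)
        for _ in range(steps):
            if random.random() < kick_chance:
                candidate = x + random.uniform(-2, 2)  # deliberate disturbance
            else:
                candidate = x + random.uniform(-step_size, step_size)
            if f(candidate) > f(x):  # greedy: keep only improvements
                x = candidate
        return x, f(x)

    print("no randomness: x=%.3f f=%.3f" % climb(5.0, kick_chance=0.0))
    print("with kicks:    x=%.3f f=%.3f" % climb(5.0, kick_chance=0.1))

With no kicks the climber stays on whatever local hill it started near; with kicks it tends to find the global peak - the same logic as deliberately disturbing the conservation phase.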
tantalor posted a good link - artificially preventing a complete crash would only cause the next crash to be bigger.
Central banks can prevent bank crashes - but only at the cost of partially compromising their own integrity (see central bank balance sheets and the amount of toxic assets they hold). Eventually a crash would become so big that the central banking system itself would fail. One possible failure mode is hyperinflation.
No, no it isn't. People really need to stop using the word "hyperinflation". It does not "just happen". It is a very specific problem that happens when specific, deliberate, but poorly thought through actions are taken: namely, the free printing of money by a government in order to pay down its own debts, well in excess of any viable economic activity.
The only place in the western world this ever happened was pre-WW2 Germany, and it will never happen again unless someone specifically sets out to destroy their own economy.
The BOJ now accounts for almost the entire demand for new Japanese government bonds. You could say Japan is printing all the new money it needs to fund its 50% fiscal deficit.
One step after another, Japan refuses to accept the failure of its own banks and excess government spending, and is walking in the direction of central bank failure.
EDIT: I don't mean to say Japan is in a state of hyperinflation, but rather that if it keeps avoiding major corrections to its economy, e.g. a significant reduction of government spending, hyperinflation is one possible final outcome, akin to a computer crashing after hours of refusing to deallocate memory from terminated processes. Debt repudiation, confiscation, and war are other possible outcomes.
Devaluations in the range of even 90% aren't hyperinflation. Hyperinflation is "wheelbarrows of money for bread". It is a scale and rate of devaluation that does not allow informed financial decisions about the market to be made.
It is very different from a currency losing a large share of its value over the course of a couple of years. People have time to divest, predict, etc. - and otherwise engage in normal business activities.
> In the past 2 years, the yen has reduced in value by at least a third against the dollar
What does that have to do with hyperinflation? Let's assume, for no particular reason, that the dollar inflates at 1% a year. For the yen to fall in dollar-measured value by a third over the same two years, it would need to inflate at an annual rate of 23.7%, or just under 1.79% per month. Nobody calls that hyperinflation. Compare the original definition of hyperinflation as inflation rising above 50% per month.
(Admittedly, there is a current standard[1] which views annual inflation in excess of 26% as a risk factor. But, first, 23.7 is less than 26, and second, they require other factors such as "the general population is unwilling to hold monetary instruments" and "prices are quoted in foreign currency". Do you see that happening in Japan?)
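For what it's worth, the arithmetic above is easy to check (assuming, as stated, 1% annual dollar inflation and a one-third fall in the yen's dollar-measured value over two years):

    # PPP-style relation: (1 + usd)^t / (1 + yen)^t = value ratio
    usd_inflation = 0.01
    yen_value_ratio = 2 / 3   # yen worth 2/3 as much after two years
    years = 2

    annual = ((1 + usd_inflation) ** years / yen_value_ratio) ** (1 / years) - 1
    monthly = (1 + annual) ** (1 / 12) - 1
    print(f"annual:  {annual:.1%}")   # ~23.7%
    print(f"monthly: {monthly:.2%}")  # ~1.79%, nowhere near 50%/month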
It will never happen again at "wheelbarrows of cash for one loaf of bread" levels, because the Brazilian Plano Real (created to fix the hyperinflation of 1980-1994) largely provided the blueprint for pulling out of it.
Once the old debts are hyperinflated away, the hyperinflating currency is publicly indexed to an imaginary unit of trade, with artificially stabilized value. Then once full penetration has been achieved, and people start making their business decisions based upon the imaginary unit, the old currency is dropped entirely, and the imaginary unit becomes the new circulating currency.
The net effect is to simply wipe out all debts and start over. If you're going to do that, you might as well just get it over with, and skip the hyperinflation step entirely.
Otherwise, you could end up with a Zimbabwean situation, where the currency is destroyed, and the ruling regime actively impedes any naturally occurring repairs to the system, as a means of seizing more economic resources and consolidating its power.
You cannot have hyperinflation without a deliberate and malicious decision to manipulate the money supply and seize economic resources from the savers and builders. Since monetary inflation works best when no one knows it is happening, it would probably be disguised by first redefining the publicly available statistics on the money supply, and then might take the form of buying up certain forms of debt that essentially allow private banks to expand the money supply by issuing their own notes, rather than running the printing press that everyone already knows about and watches closely. Also, if you refer to the program by a pseudonym like "quantitative easing", people will be slower to catch on.
Hyperinflation only happens when the peripheral economic actors are able to recalculate prices as fast as the central bank can manipulate them. It can be mitigated by employing massive amounts of deception, propaganda, and crass stage magic. As long as the retail store managers don't realize that they should be raising prices by 15% per year, they won't. And if people don't see higher prices for goods, they won't demand 15% pay raises. And if workers don't demand more pay, companies won't raise their prices or fire marginal employees to compensate.
If all the people dependent on the US-dollar-based economy realized what the Federal Reserve has been doing, there would be hyperinflation. It doesn't "just happen" because the smartest guys in the room are so much smarter than the median that they are effectively fooling all the people, all the time. The root cause of hyperinflation is already there, but the feedback loop that causes prices to rise rapidly and noticeably has been deliberately obstructed.
Um, I think the smoke and mirrors is in this explanation, which deliberately ignores a lot of things. The Federal Reserve system raised interest rates in the past to deal successfully with inflation, and lowered them in the last few years to deal with deflation and discourage hoarding of money, which happens when companies sit on their cash "until they see demand" which never materializes, lay off their workers who then spend less, etc. When every individual acts in their own interest they collectively exacerbate deflationary spirals, runs on banks, etc. - in an economy with a centralized money supply this needs to be mitigated. In a society with decentralized money issuers, this is less of a concern - e.g. if Detroit goes bankrupt, others can swoop in and pick up the slack.
In short - no, there is no hyperinflation about to happen, and in fact the US is able to borrow money and sell treasuries at record low interest rates compared to other countries. So, did the Fed also fool all the treasury market participants AROUND THE WORLD who go long US treasuries during its open market operations?
Can the median business owner or store manager independently calculate the money supply of US dollars?
The Fed simply stopped publishing M3 in 2006. Yet, the portion of M3 that is not also in M2 is nearly 30% of the total money supply, and increasing steadily since about 1960, with only a slight setback from 1991-1995. The portion of M2 not in M1 has remained about 50% of the money supply.
By now, everyone who buys or sells knows that the CPI is completely useless as a measure of inflation. The portion of inflation that is visible and obvious to consumers is now about 10% per year. That doubles prices about every 7 years. This is somewhat mitigated on the shelf by adding more air to packages and aggressively hunting foreign suppliers whose costs are not measured in dollars, thus delaying price increases until trade balances are settled. The gallon jugs of Floridian fresh-squeezed orange juice are replaced by 0.75 gallon jugs of blended Floridian and Brazilian frozen concentrate.
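Whatever one makes of the 10% estimate itself (it is the commenter's contested figure, not an official one), the doubling arithmetic is easy to verify:

    import math
    # Years for prices to double at a given annual inflation rate:
    print(math.log(2) / math.log(1 + 0.10))  # ~7.3 years

The familiar rule of 72 gives the same ballpark: 72 / 10 ≈ 7.2 years.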
I believe that Fed operations since 2005 have been less about stabilizing the value of money than they have been about keeping large changes in the money supply off the Fed's books, and hidden in numbers that have historically been less scrutinized as economic indicators, or perhaps never reported at all. The bond-buying spree appears to be a shell game to manipulate money velocity and leverage monetary reservoirs to replicate the effects of running the printing press, without actually alarming those who cry bloody murder whenever they hear it operating.
Nice use of the bandwagon fallacy, there. They don't necessarily have to fool everyone around the world. They just have to be better than all the alternatives. Since the rest of the world economy is also in bad shape, that is not a high bar to pass. That's a lot like asking why so many customers in Comcast's service area choose to get Comcast. Most of the time, it's because they're the only game in town. It may also be that all those who are not fooled do not have control over institutional investment funds.
Your post contains several implicit and unstated assumptions about economics that I do not necessarily agree with. And I also have several assumptions that I am certain you would not agree with.
I believe that the Fed and the other central banks of advanced industrial economies do have the tools at their disposal to prevent hyperinflation. But I do not believe that the consistent use of those tools is less damaging over the long term than an actual outbreak of hyperinflation. They often simply spread the pain out over a longer period, and prevent reallocation of resources from failed or failing institutions.
But others don't have to buy the US treasuries. They can diversify - China has begun more aggressively pushing theirs, France recently got pissed at the US and may diversify, Russia is cutting ties with US banks, etc. I am not sure why you think the worldwide market simply doesn't realize that the dollar is going to get devalued in the near future, and with it the coupon payments. Countries compete on many levels, including their currency, and in the market the Fed sells treasuries at a favorable interest rate.
In the United States since 1959, banks lent out close to the maximum allowed for the 49-year period from 1959 until August 2008, maintaining a low level of excess reserves, then accumulated significant excess reserves from September 2008 onwards. Corporations began sitting on their cash. So money is being allocated towards things other than jobs and investment in local wages, which have been stagnating for decades and were given a boost by the consumer credit explosion of the 80s and 90s. The money is being offshored; globalization and automation have caused the American worker to participate in a race to the bottom with the rest of the world - this mean reversion helps billions of people around the world but undoes the dominance the US has enjoyed thus far. Supply chains now stretch around the world. Apple offshored its money to avoid taxes, for one thing. Amazon employs robots now, and its wage slaves are interchangeable, etc. I see structural problems as being responsible for the slow recovery of the US, not Fed policy.
Although I would probably agree with you on one thing - a direct unconditional basic income for US residents would do much more over the long term to revitalize the economy and lower misallocation of resources than giving it to banks.
Increases in M3 that are not present in M2 are by definition unable to affect inflation because they represent an increase in money that is locked up in long term deposits and thus unable to influence the real economy. Inflation occurs when too much money is chasing too few goods, and that doesn't happen in this case.
You are right to say that CPI is not all that useful as a measure of inflation. A better measure is the core CPI, which strips out volatile commodity prices. (If the actual CPI that includes the cost of gas were to be used, we would be facing a massive deflationary spiral given the recent trend in oil prices.) Looking at core CPI trends, you'll notice that globalization and automation have actually caused price decreases for many goods. (With some inflation in service industries not subject to competitive pressures, such as healthcare and education.)
To amplify a bit on EGreg's reply: In 2008, several trillion dollars evaporated. That's why "If all the people dependent on the US-dollar-based economy realized what the Federal Reserve has been doing, there would be hyperinflation" is false. All the people dependent on the US-dollar-based economy realizing what the Fed was doing barely avoided a deflationary collapse.
Now it's true that, if the Fed isn't careful (and lucky?) winding this down, there could be problems. But we are not (yet) at the point where hyperinflation is the rational outcome of the Fed's moves.
> It doesn't "just happen" because the smartest guys in the room are so much smarter than the median that they are effectively fooling all the people, all the time.
By "the smartest guys in the room", I presume you mean the Fed, since you seem to be casting them as the ones who are able to pull the wool over everyone else's eyes. But do you really think that they're enough smarter than the ones at, say, Goldman Sachs, to be able to fool them all the time? I don't.
There is no mention of all the benefits that encapsulation gives us - to take the blog's argument to the extreme, imagine writing your own web server, caching layer, load balancer, JSON/HTTP parser, frontend logic for every platform, and a hundred other layers of abstraction I may not even know about, every time you want to write a blog.
There is a real tradeoff between programmer productivity and resource usage, because every library that isn't tailored for your exact use case (i.e. every library) is an inefficiency. Given programmers' wages vs. the cost of CPU/RAM/storage, and the long-term trends (technology makes one cost lower and the other higher!), it is economically insane to reject encapsulation.
I definitely see the appeal of viewing programming as a craft. In that case, it's no coincidence that "reinventing wheels" is good advice for learning how to program. It's just not how to run a business.
> to take the blog's argument to the extreme, imagine writing your own web server, caching layer, load balancer, JSON/HTTP parser, frontend logic for every platform, and a hundred other layers of abstraction I may not even know about every time you want to write a blog
I see the argument as saying that all those layers are not really necessary. Take HTTP for example: it's a large complex protocol that requires plenty of code to handle correctly (e.g. the size of a request, and the various pieces inside it, are not specified and the parser has to read through the whole request sequentially to figure this out), and the majority of applications using it never need the full functionality. What if it was a simpler, easier-to-parse format that was as extensible?
One of the points I get from his argument is that encapsulation has encouraged us to build massively overcomplicated systems by piling layers upon layers with little understanding of how everything fits together, when we could instead be building much simpler and more efficient systems that could still be using abstraction but in a far more cautious manner.
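To make the "easier-to-parse format" idea concrete, here is a hypothetical sketch (entirely my own illustration, not a format proposed in the article): length-prefix every field, so the parser knows exactly how many bytes to read up front instead of scanning for delimiters the way an HTTP parser must, while arbitrary key/value fields keep it extensible.

    import struct

    # Hypothetical length-prefixed request format: a field count, then
    # (key length, value length, key bytes, value bytes) for each field.
    def encode(fields):
        out = struct.pack("!I", len(fields))
        for key, value in fields.items():
            k, v = key.encode(), value.encode()
            out += struct.pack("!II", len(k), len(v)) + k + v
        return out

    def decode(buf):
        count = struct.unpack_from("!I", buf, 0)[0]
        pos, fields = 4, {}
        for _ in range(count):
            klen, vlen = struct.unpack_from("!II", buf, pos)
            pos += 8
            key = buf[pos:pos + klen].decode()
            pos += klen
            value = buf[pos:pos + vlen].decode()
            pos += vlen
            fields[key] = value
        return fields

    msg = encode({"method": "GET", "path": "/index.html", "accept": "text/html"})
    print(decode(msg))

Of course, as the replies point out, the hard part isn't the framing - it's everything HTTP does besides framing.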
"What if [HTTP] was a simpler, easier-to-parse format that was as extensible?"
Then it would become HTTP. It would take time but it would be inevitable.
HTTP didn't just happen. Using Fred Brooks' terminology, the real world is essentially complex. We may pile accidental complexity on top of it at times, but the essential complexity of the real world remains.
While I agree that there is room for accidental complexity to occasionally be swept away and a new clean slate presented, I do not believe that a new protocol for HTTP could solve all the problems HTTP solves without being at least nearly as complex as HTTP. The set of problems that HTTP solves is a great deal larger than "request page -> get page". There's caching, proxies, security, encryption, policy enforcement, needs that schools have, needs that Facebook has... and that's still just the high-level table of contents of the table of contents of the world of HTTP. And if you think you can just ignore those cases and simplify... try it and you'll see how wrong you are.
Note I do think there is room for improvement in HTTP, and it probably isn't the direction of 2.0. But it's not like there's some sort of brilliantly simple protocol in there struggling to get out, buried under layers of cruft. It's a thorny hard problem and the result will be a thorny hard protocol. It often fails to work not because the protocol itself is broken but because people doing work in HTTP have truly failed to account for a fundamental use case properly, like caching or the use of proxies.
Also note that there's no guarantee that "simple protocol + time" will yield something any simpler than HTTP. Underlying HTTP is a fairly powerful idea, and there's a reason why so many Internet protocols look very similar to it. It would be easy to start with something "simpler" that turns out to grow much, much worse.
I think you are partially right, but the situation (I'd rather not label it as a problem just yet) goes on many levels.
For example, Moore's Law has allowed us to pack incredibly large numbers of transistors into reasonably affordable machines, and what we do with them is... virtualization, a.k.a. write software that (presumably) simulates being a (much slower and more limited) hardware system so a third party can run other programs on top of it, for a fee.
That kind of makes economic sense (because customers have a big appetite for computing power... just as long as they do not have to keep those pesky sysadmins on the payroll), but from a purely technological point of view it is craziness. Every component you add to your stack is a potential source of error, and you do not gain any extra features because you are simulating real hardware... so why do it in the first place?
I think the answer is: because nobody would buy all those transistors if we didn't do so.
> That kind of makes economic sense (because customers have a big appetite for computing power... just as long as they do not have to keep those pesky sysadmins on the payroll), but from a purely technological point of view it is craziness.
...which is part of the reason why you're seeing people in Linuxland rediscover things (LXC) that the BSD community (jails) has known about for ages.
Another possible answer is: because transistors are really cheap, and energy is only a bit expensive. But that waste saves competent people time, which is a very expensive and limited resource.
Why does everybody ignore the obvious explanation, the one you'd get from the mouth of any participant, in favor of a convoluted one that only fits the criterion of political correctness?
I do not think anything you say here contradicts what I originally said. Yes, there are economic reasons why this makes sense, probably having to do with labor compensation.
Having said that, please excuse my ignorance of system administration, but how does running a virtual machine that pretends to be a Linux/x64 box... on top of a real Linux/x64 box save anybody any time? As a software developer who drank the Java kool-aid back in the '90s, at least the "write once, run everywhere" mantra made some kind of sense in those terms, but this other example is genuinely beyond me.
To me it has more to do with a tech company being able to put together an A-class team to serve a large number of customers that would not stand to pay the salary of a single full-time B-level IT guy if their business model depended on it.
Or, in summary, it's a heck of a lot easier to instantiate a new instance of a machine from a snapshot than instantiate a new machine from a set of directions and some auto-config scripts.
I agree standard images are much easier to set up than a hand configured system. That does not mean said image should run on top of an emulator.
I see some value in having a running system right now, instead of waiting for a long latency while the image is downloaded to real hardware... at least in some situations. However, is taking a permanent performance degradation of an order of magnitude worth that advantage?
Sure, you can create a simple transfer protocol that meets 80% of the use cases. Now add on the other 20% of the use cases, and you will end up with comparable complexity. And then, congratulations, you now have two competing standards.
Perhaps, though, computing resources are priced artificially low due to externalities (e.g. third world wage inequality, environmental impact not paid for), and perhaps a sane long-term approach would find a different balance between programmer wages and physical tech components.
Moore's law eats the difference between first world prices and third world prices for breakfast. Changing how the hardware is manufactured wouldn't materially affect the choices programmers make even if it doubled the hardware cost, and it probably wouldn't even do that.
"Improved" is a relative term and says nothing of the associated costs. Could we maybe reduce costs and still "improve" at a slower, more sustainable pace? It's worth exploring the notion without turning to the dogma of efficiency at any cost.
Exactly. "At any cost" should only apply to long-term full-scale survival threats. With every other problem, you should only ever except "at reasonable cost."
And in order to know what cost is reasonable, you have to do some kind of rational analysis. There are few things less efficient than an ideology without an understood purpose.
And most likely exacerbated the negative externalities. Each pencil company wants to sell as many pencils as possible, just because. Most organizations try to grow not shrink.
That's a straw man. It's a good thing that there is specialization for tasks (automation is even more efficient) but with that comes other consequences. The organization always seeks to grow and expand, to create demand even if there isn't any. That is a consequence of the fact that the money it gets is more valuable to it than the services, since it can be spent on anything. Meaning it is more widely accepted so trading services for money is a net gain, and the more they do it the better for them. However on the collective level, this results in exploiting any externalities that they can including people's attention, free time, the planet's resources, polluting etc. The incentive is always to expand at the expense of the environment, until ecosystem collapse is threatened (see for example logging forests, etc. or yeast making beer). In short, the tragedy of the unmanaged commons.
Thanks for asking, I don't think I've been able to tie it all in under one overarching picture so clearly before.
Externalities and long term planning are known issues. They can be managed by the government to some extent.
Really it all comes down to managing incentives. Obviously capitalism isn't perfect, but I doubt there is a better alternative. Any other system will have the same issues and complexities. Even trying to solve issues with government like I did above isn't necessarily better, since governments create their own messy web of incentives and complexities.
Maybe there is no solution. For billions of years populations have wanted to expand and been kept in check by eventual predators and death etc. We humans may be driving towards the same fate.
You make a qualitative argument, while OP makes a quantitative argument. You rightly argue that division of labour boosted productivity tremendously, but that point doesn't affect OP's argumentation at all. Why not keep division of labour and try to make it less wasteful?
Nothing in the OP's argument was quantitative. Both are qualitative arguments.
The OP raises some good points that deserve serious thought, but I find his argument sketchy and hand-wavy. The crux of the argument is this:
> "Our mainstream economic system is oriented towards maximal production and growth. This effectively means that participants are forced to maximize their portions of the cake in order to stay in the game. It is therefore necessary to insert useless and even harmful "tumor material" in one's own economical portion in order to avoid losing one's position. This produces an ever-growing global parasite fungus that manifests as things like black boxes, planned obsolescence and artificial creation of needs."
Everything else in the post is either analogy or following the assumptions in that paragraph to their conclusions. But the sheer amount of reasonable objections that are ignored or dismissed in that paragraph is staggering.
If today's capitalist systems have systemic issues that result in gross misuse of resources, I don't think it's for any of the reasons that the OP suggested. For that reason, I am skeptical of his proposed solutions.
This entire thing is just a rant against specialization and abstraction. Both in economics and computer programming. This is just ridiculous. Without specialization the world wouldn't function at all. No single human being can build a car by himself from raw materials. Programmers shouldn't be expected to write their web apps in assembly, and can you imagine the mess if everyone tried that.
The author starts out with a legitimate complaint that things are suboptimal. But his cure is worse than the disease.
The second comment on his blog says he's just repeating what Marx said, which is not quite correct but in the right ballpark. People who don't know anything about economics tend to do that.
The "problem" of exponential growth is the belief that it will result in using up all available resources rather than finding more efficient ways to use resources we didn't even know we had. Engines become more efficient, power generation becomes lower cost, food production moves from farms to factories... all of these are good things. We feed the world with far fewer resources today than we did a century ago when many more people were starving, even while we use more resources building smartphones and sending spacecraft out to explore the solar system. We solve problems our ancestors didn't have because they still hadn't solve the problems they faced.
Economically, exponential growth is required to keep the monetary system stable, but in human terms we live in a world where new problems always arise, and require new solutions.
Before Watt and Newcomen we didn't have many problems related to steam power. Before the Wrights, getting to the airport was just not an issue for most people. Human invention creates new problems which human invention solves, and that is a treadmill we cannot get off.
Capital markets turn out to be a remarkably good method of funding new solutions, and private enterprise turns out to be a remarkably good way of exploring the landscape of new solutions. No one has ever been able to figure out how to incentivize socialist managers to explore that landscape with anything like the same efficiency--particularly given the opportunities they have for corruption--and when you look at how inefficient capitalism is, you'll realize that's saying something.
So we continue to press on, while a certain type of person who has been more-or-less loud in the past two hundred years shouts at us that it is all going to end badly Real Soon Now.
Somehow it never does. It could, certainly, but we're centuries away from that. Possibly millennia. In the short term, if the naysayers are so sure of their analysis, I recommend they apply it to the stock market and use their impressive prescience to get rich. It shouldn't be difficult for anyone who has the kind of deep insight into the workings of the world that they claim to have to manage that.
Going to the extreme, instead of using Blogspot you could write your posts in raw HTML and host them on your own server. But the Blogspot abstraction layer is much easier (and it can handle the HN load), so you have more time for your own unique activities.
From what I can tell, the writer is arguing that abstractions and encapsulations are wasteful of resources. The author claims that this is an inherent problem of abstractions and of our lack of understanding of their implementations. The justification for abstraction is that time and mental energy are resources as well, and although we are wasteful of other resources, we use up less time and mental energy. The author states, "However, during recent years, I have become increasingly convinced that the portion of true tradeoff is quite marginal," but doesn't seem to give anything quantifiable to illustrate this marginality. Much of the argument is based on anecdote and personal ideas about general human nature.
I am inclined to think that the author has come to this conclusion because time and mental energy are less valuable to them than other resources, and that's not an unreasonable stance to take. However, I don't think this view is widely shared; I think most people agree that premature optimisation is the root of all evil.
Parts of the post are interesting and challenging, and to some degree I agree that the cost of layering abstractions is often underestimated, but the final few paragraphs read like a naive manifesto...
I maintain a small outfit of old, antiquated, 'un-useful' computers, the eldest 30+ years old. I have pretty much never thrown a computer out - there is always a use for it.
Computers do not die; their users do. ("s/Computers/ConsumerToy/g")&etc.
The leak can be plugged, and it starts like this: don't use technology you don't understand, or, to put it another way: use all technology with full understanding. The only reason we threw away our old computers was because we didn't think anyone could/should use them; they are, nevertheless, still useful. The "usefulness or not" is entirely a spiritual decision, however, because the machine does not care, and it is this fact - and ignorance of such spiritual components of so many facets of human culture in general - that is the blind spot in the debate, imho. Things are only as they are if we decide they are; the usefulness of a computing device is absolutely not dependent on its physical properties in any way other than: can you turn it on, does it work, can the user do something with it? It is the user who makes the decision, all along the way to actual, real usefulness.
I'm still using my computer to send email, just like I did in the '80s and so on, although it now seems to require a much more sinister amount of maintenance and general abstract emotional stress than the old Hazeltine connected to a VAX that we used to have to use to "get things done".
tl;dr: there is no fault in our machines but that which the user decides and identifies.
As a working definition, a piece of hardware really isn't useful if another piece of hardware will do the same task better, at a lower cost.
You can run a low-power PC and check your email like it's 1980 just fine, and these days they're VERY cheap. A $30, 5W Raspberry Pi is more than capable of running 'mail' on the command line. For a bit more power, quad-core 1.6GHz ARM SoCs are about $60-70. For fileserver-type stuff, a 15W Liva with a Baytrail-M can be had for $115, or you can build out a full quad-core mini-ITX for something like $175.
The reduced power usage very quickly amortizes the cost of the hardware. If you would save 200W by retiring a dinosaur, that's $17.28 per month of 24/7 usage (at about $0.12/kWh). If you've got an RPi on for 4 hours per day checking your email, it amortizes out within 10 months.
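The amortization arithmetic above is easy to reproduce (the $0.12/kWh electricity price is my inference - it's what makes the stated figures come out - not something the comment specifies):

    PRICE_PER_KWH = 0.12  # assumed; makes the parent's numbers come out
    WATTS_SAVED = 200

    monthly_24_7 = WATTS_SAVED / 1000 * 24 * 30 * PRICE_PER_KWH
    print(f"24/7 dinosaur: ${monthly_24_7:.2f}/month")          # $17.28

    monthly_4h = WATTS_SAVED / 1000 * 4 * 30 * PRICE_PER_KWH
    print(f"4h/day usage:  ${monthly_4h:.2f}/month")            # $2.88
    print(f"$30 Pi amortizes in {30 / monthly_4h:.1f} months")  # ~10.4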
This totally ignores the additional headaches associated with old, failing, unsupported hardware. You yourself described it as a "sinister amount of maintenance and stress".
You can make a philosophical argument here that a piece of hardware is never truly dead if it'll still push bits, but be honest here - if you're running up your power bill to avoid spending $100 on new hardware, if you're putting up with a PITA system for the heck of it - it's not useful, it's your hobby. Hobbies are fun, I think it's fun to keep old gear going too, but be realistic about the merits of such activities.
What do you mean by this? I struggle to understand the workings of a linear voltage regulator. To even try and suggest that any one person could "understand" anything much more complex is deluded.
How deep does your understanding need to go to allow you to use it? You'd have to go back 40 years to find computers that any one person has a hope of "understanding". Even at that point there was all sorts of abstractions - did the logic designer fully understand the power supply? Did they understand the CRT screen?
Buggy civilization? I don't think so; improvement through evolution favors those who aim to grow exponentially. The alternative, micromanaging extremely complex systems, will unfortunately fail.
Trying to force the global economic system into a software analogy just feels silly and wrong.
> Trying to force the global economic system into a software analogy just feels silly and wrong.
It's the same thing you see software developers do all the time. "I know this one thing, let me apply it to everything. No no, your job can't be hard, I'm a software developer, let me think for you."
I think as a developer I am attuned to this most acutely. I see garbage on the street or read about Amazon's new warehouse and think, "yeah, we may have 500 years to store our garbage, but then what?" When gas prices go down I think "past peak oil, we are just accelerating running out". Or when I read about overfishing, etc., I think how we could irreparably disrupt ecosystems with a cascading effect.
But then I think how big this planet is and how ingenious humans are collectively, and wonder - am I overworrying? I see small waste accumulating, but in Scandinavia they're discovering ways to recycle garbage into resources!
Meh. Describe an alternative to a growth-oriented society that isn't either feudalism, some kind of white agrarian fantasy, or an even more imaginary "sci-fi utopia where everyone wears togas." (Google The Venus Project and the Zeitgeist Project for an example of this stuff.)
Don't discard feudalism just yet. If nothing is done (and there are reasons to believe that not enough will be done to count as "more than nothing") this is what we will be getting by default.
Also, do not conflate the social concept of feudalism (survival by pledging allegiance to a chain of command that competes with other chains of command for access to resources) with the particular customs, laws (or lack thereof), and religious & aesthetic sensitivities that occurred historically the last time this happened in Western civilization. It can be argued that global corporations are already modeled on that basis and are beyond the control of outdated national democracies.
The bug in this article: economically speaking (in other words, in the real world, where you are not the only player), which is optimal:
1) optimizing resource usage
2) maximizing resource usage
(it's easy to see why some others are suboptimal, e.g. minimizing resource usage)
From a long-term perspective (i.e. which of these guarantees that you'll survive and doesn't provide another player with an easy way to destroy you)? Another way of putting this question: which is the Nash-equilibrium answer?
The answer is: 2.
(Is it Pareto-efficient? Yes!)
In other words, no matter how good it sounds to conserve resources, it's a mistake. If a civilization did that, either it would get run over by another civilization, or, if it came to that extreme, by evolution itself.
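The structure of that claim can be shown with a toy two-player game (the payoffs are completely made up to mirror the comment's assumptions - that maximizers overrun conservers and that exploitation grows the pie - so this illustrates the argument, it doesn't prove it):

    # Strategies: Optimize (conserve) vs. Maximize resource usage.
    # Payoffs (row player, column player), invented for illustration:
    payoffs = {
        ("Optimize", "Optimize"): (2, 2),  # both conserve
        ("Optimize", "Maximize"): (0, 4),  # conserver gets overrun
        ("Maximize", "Optimize"): (4, 0),
        ("Maximize", "Maximize"): (3, 3),  # exploitation grows the pie
    }
    strategies = ["Optimize", "Maximize"]

    # A profile is a Nash equilibrium if each player's move is a best
    # response to the other's.
    for mine in strategies:
        for theirs in strategies:
            row_best = max(payoffs[(s, theirs)][0] for s in strategies)
            col_best = max(payoffs[(mine, s)][1] for s in strategies)
            if (payoffs[(mine, theirs)][0] == row_best
                    and payoffs[(mine, theirs)][1] == col_best):
                print("Nash equilibrium:", (mine, theirs),
                      "payoffs", payoffs[(mine, theirs)])

Under these payoffs, (Maximize, Maximize) is the unique Nash equilibrium and happens to be Pareto-efficient. Flip one assumption - say, mutual maximization exhausts the commons, so (Maximize, Maximize) = (1, 1) - and you get a prisoner's dilemma instead, where the equilibrium is emphatically not Pareto-efficient. Which payoffs describe reality is exactly what's in dispute.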
Nash equilibrium and Pareto efficiency are extremely technical terms; you shouldn't abuse them here to get some faux sense of authority. I say abuse because we aren't in a context well-defined enough to give them an interesting meaning.
Can you expand on this? It seems like you're saying that, for example, taking all the world's oil and lighting it on fire, or (more dramatically) firing all the world's resources out of a giant cannon and into the sun, is an optimal economic strategy. I'm not an economist, but that doesn't sound right.
It seems like the optimal economic strategy, assuming finite resources, is to optimize your use of resources. Then, there is a complementary defense strategy, which is to monitor other civilizations so that they don't get the drop on you. This might also include acquiring resource pools for future use, or simply to deny them to adversaries, but it doesn't mean you need to burn through them as quickly as you can.
Maximising deprives others of the resource, optimising doesn't. I think that's the point being made.
An economy that first maximised, then second optimised, would beat an economy that first optimised, then second maximised, because the latter would have nowhere to expand.
An economy that optimised and maximised would beat an economy that only optimised.
This doesn't take resource exhaustion into account. Which is to say, that by maximizing first and then optimizing, you may be only ensuring the extinction of our species, and the fact that your civilization will still be around when the lights go out will be cold comfort.
I mean, this discussion is absurd, of course, since we seem to be operating under the assumption that the problem of how to do civilization can be solved with a half-dozen variables or so. But, even within that simplistic framework, the approach being laid out by you two is pretty daft IMO.
How about, an economy that achieves hegemony, and then continuously optimizes, increases resource consumption as appropriate and only to the extent that such consumption is sustainable (over some reasonable planning horizon anyway), and seeks out and destroys rival upstart economies that are a threat to it (especially when they have an unsustainable model of resource consumption), will tend to be successful and stay successful.
I'm not saying that I wish that it were true, or that I think it's a good thing. I agree that it's daft. But over the short term, pillaging societies appear to beat conserving ones. And it's a fact that we do seem to be (as a species) adopting a short-term-profitable long-term-suicidal approach.
(An interesting model is Norway's oil – they're maximising in the sense of getting all the oil out from under the sea, but they're stashing most of the profits away as a rainy-day fund, which puts the country on very secure footing for about the next hundred years or so. You might call that 'maximising without consuming', or 'camping'.)
I think the future of the human race depends on finding a better answer to this whole question, but I think accepting that pillagers beat conservers in a lot of cases is a necessary step to finding something better.
(I don't think we actually disagree; we seem to be arguing different points.)
> Which is to say, that by maximizing first and then optimizing, you may be only ensuring the extinction of our species, and the fact that your civilization will still be around when the lights go out will be cold comfort.
Compared to not being around anymore before the lights go out? I'd say that, yes, it's the more comfortable situation. But you're being dramatic; there are plenty of places in nature where you can observe how this actually happens. It's almost never the case that resources fully run out, so some people/places will survive. And it may take tens or hundreds of thousands of years, but they will recover from that, as unlikely as that seems.
Population explosion, like humans are experiencing now, is followed by ... a die-off. Not by an extinction. If humans are different, that will be the first exception to the rule in 4 billion years.
> How about, an economy that achieves hegemony, ... and seeks out and destroys rival upstart economies that are a threat to it
Of course the key here is that you attack BEFORE the other guys become a threat. Most species in fact do that, in combination with maximizing resource usage. In fact, that's what they use the resources for, to a large extent.
Now there are plenty of economies currently around. Are you saying we should attack all of them right now? Watch out, China!
Of course, if we don't, maximizing resource usage will be what "naturally" happens. Resource stewardship is a tactic that's trivially defeatable by a single bad actor, so that won't be what happens.
One caveat does exist. This is a law like the second law of thermodynamics. On the whole, you can't avoid it, as that would be similar to creating a perpetuum mobile. But keep in mind that there is no law that states nothing can remain in motion for a very, very long time. The longer the timeframe you look at in the future, the better the odds we'll be maximizing resource usage. But similar to a perpetuum mobile, even relatively long-term exceptions can happen, and in fact happen often (a big misunderstanding people have about large-scale random/chaotic processes: situations don't repeat, and the future does not look like the past at all). But it will always come back to the rule.
I wonder why we see this so much in political discussions. You think we have a choice. The only choice we really have is death now or death later. You'll find that humans got where they are by choosing death later.
I'm glad you are sharing your conclusions, but your post seems completely devoid of an actual argument. Saying "Pareto" and "Nash equilibrium" does not count.
You're assuming resources are liquid (meaning: able to be relocated for low cost). This is only true if you're trading, and civilizations trading energy can't really be separate civilizations.
The only civilization we're stealing resources from does not exist yet, because it's in the future. We have one global civilization that is pooling a single resource base via highly liquid trade. There might be civil war between factions, but I don't think a real resource war can occur in the current environment. Now, might we split into an "eastern empire" and a "western empire" like the Romans did? Quite possible.
I think resource usage is also a matter of control. At least during the Cold War, large amounts of resources were in danger of vanishing behind the "iron curtain" forever. If socialism had successfully caught on in Latin America and Indochina, and more of the Gulf states had fallen under Soviet influence, it would have been lights out for Western economies, which largely rely on raw materials imported from those regions. To a smaller degree this is even still true today, with Europe dependent on Russian gas. In a market where everyone has equal and free access to natural resources, 2) might indeed be the best option. Otherwise a strategy where you exploit the resources of other nations first to enrich yourself seems like a far better option. Of course this has happened to various degrees all over the world, to the benefit of a few industrialised nations.
Economy, when confronted with physics and mathematics, always loses. Exponential growth is not sustainable. I recommend working out how much energy we can release before the oceans boil off, and how long that will take assuming 3% growth YoY. (Hint: sooner than you might think.)
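A rough back-of-the-envelope version of that calculation (the constants are loudly approximate assumptions on my part: ~6e20 J/yr of current world primary energy use, all of it ending up as heat trapped on Earth, oceans heated from ~15 C to boiling and then vaporized):

    import math

    OCEAN_MASS = 1.4e21      # kg
    HEAT_CAPACITY = 4186     # J/(kg*K) for water
    DELTA_T = 85             # K, ~15 C up to 100 C
    LATENT_HEAT = 2.26e6     # J/kg to vaporize
    energy_needed = OCEAN_MASS * (HEAT_CAPACITY * DELTA_T + LATENT_HEAT)

    CURRENT_USE = 6e20       # J/yr, rough
    GROWTH = 0.03

    # Cumulative energy after t years of 3% growth:
    #   CURRENT_USE * ((1 + GROWTH)^t - 1) / GROWTH = energy_needed
    t = math.log(energy_needed * GROWTH / CURRENT_USE + 1) / math.log(1 + GROWTH)
    print(f"energy to boil the oceans: ~{energy_needed:.1e} J")
    print(f"years at 3% growth:        ~{t:.0f}")  # on the order of 400

Roughly four centuries - a long time by human standards, but startlingly short for "the oceans literally boil off". (The replies below rightly note the big "ifs" in treating all energy use as trapped heat.)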
It depends. Any complex question gets a complex answer. If we release the energy in the form of heat, and if we let that heat get trapped in our planet, then you'll boil off the oceans in a predictable horizon. However, those two ifs are very big ifs. Two examples:
If you start using energy to bond complex molecules, you are using energy, but the energy is getting trapped in chemical bonds. You can easily achieve 3% YoY energy use growth without any ocean boiling in the predictable time horizon.
If you use the energy to finally expand away from Earth, then clearly you can achieve 3% energy usage growth with no ill effects whatsoever.
In the end, the fact of the matter is that the evolution of civilization is very highly correlated with the ability to manipulate energy, and consequently with the amount of energy used globally. Locking ourselves out of increasing energy usage is tantamount to locking ourselves out of pie-in-the-sky projects such as planet terraforming.
Oh, but complexity is just another word for ignorance. There's a reason that mankind didn't learn to fly by mimicking birds -- it's because you don't solve a complex problem by mimicking complexity.
The complexity of a problem doesn't tell you anything at all about the complexity of a solution. Many problems that we face are "angels dancing on pins"-type problems, where the only 'complexity' involved is all the hoops you have to jump through to defend a flawed perspective. And on the other hand we have these coin-flip-prediction problems that only demand one bit of information, but are completely intractable by our most advanced methods.
Complexity is not a useful way to think about problem solving. Nobody solves complex problems.
Young whippersnappers and their throw-the-old-code-away syndrome...
You don't restart from scratch and hope for the best. If the systemic conditions that make our civilization ill persist, they will affect version 2.0.
It is much much better to incrementally improve the current system, even if the end game is not clear yet. A full rewrite always throws out the good along with the bad, and it is clear our civilization has many good characteristics in it.
I, of course, disagree on the need to restart from scratch. I think that the resource problems we face are slow-moving enough that we will avoid them. The real danger is in fast-moving problems that we do not know about yet.
Well, I think we will be forced to restart from scratch, since we can't manage to make even the easiest incremental changes needed... IMO we're bound to collapse, USSR-style.