In one of the more obscure Ursula K. Le Guin books, there was a passage that has always stuck with me. She's describing a hypothetical society where (to paraphrase) they had eventually come to the realization that:
"the computer, once invented, could not be un-invented"
They put most of the storage on the moon, most of the processing power in a network of satellites, and in every village there was a hut with a dumb terminal. The vast majority of the population didn't need computer skills, only the handful of people whose lifetime tenured position was to maintain the hut and the terminal. The only 'useful' function provided by the terminal was that you could tell it what you had and what you wanted, and if there were people nearby with complementary needs and wants, it would tell you which direction to walk.
Weird that you think saying "democratically managed" somehow fixes the principal-agent problem.
Voters don't write code. There is a process that goes from votes to personnel decisions to design decisions to writing patches. This process involves people who have their own ideas about how to interpret what the voters might theoretically want. There is lots of scope for them to make decisions you wouldn't like.
> Weird that you think saying "democratically managed" somehow fixes the principal-agent problem.
You're moving the goalposts, but I'll bite anyway. Liquid democracy does as well as one can in that respect (and perfection is the enemy of good), since anyone who's capable can intervene at any point without hindrance. That's not possible in "representative" "democracies."
> This process involves people who have their own ideas about how to interpret what the voters might theoretically want. There is lots of scope for them to make decisions you wouldn't like.
Sure, and anyone who betrays the people could be removed as soon as they do, unlike today. In liquid democracy, you can just override them, and with RIC, you can recall them at will.
No, not remotely. Please read up on liquid democracy. By intervene I meant withdraw their vote from their delegate and cast it themselves, at any time, for any reason.
I don't see how this can work with a computer system. Let's say everyone loves the computers, but there is a single person who hates computers, and they want to irrecoverably delete all the data on those satellites. Can you deny them access?
If not, then your system is not going to last long.
If yes, then how do you ensure this mechanism would not be subverted?
Weird that you don't consider how democracy doesn't always work out as intended. See: Corruption, Donald Trump. Not that it's bad or isn't the best we have, but there are a lot of superficially democratic states that became de facto property of one or a few. See: Russia.
Superficially democratic states (I like your term) aren't democracies to me. I meant literally democratic, not a theatrical aristocracy. I don't consider the US to be democratic for that reason. Something like liquid democracy or a representative democracy with arbitrary right of recall would work.
Being literally democratic is not a stable state, and you cannot plan such a system without making sure it fails in safe ways. A system that has unified, monopolistic control of such a vital resource does not fail in a safe way.
Great example of a 'no true Scotsman' argument. Such idealizations are of little to no practical relevance; we need solutions that work in the real world, with all its flaws.
Differentiating between "societies which actually adhere to democratic principles and processes" v. "societies which merely pretend to do so" is hardly a "no true Scotsman" argument.
"This person claims to be a Scotsman, but has never set foot in Scotland, nor have any of his ancestors or even acquaintenances (let alone friends or family), nor does he have any other ties whatsoever to Scotland."
I don't know how to respond to your claim because it's too vague. In any case, perfection is the enemy of good. Actual democracy would be better than this. It seemed like the society they were talking about was utopian anyway, so everyone was being kind and cooperative, not operating adversarially like we do today.
From the synopsis of the book:
> they do nothing on an industrial scale, reject governance, have no non-laboring caste, do not expand their population or territory, consider disbelief in what we consider “supernatural” absurd, and deplore human domination of the natural environment.
> Theatrical aristocracy. I don't consider the US to be democratic for that reason.
This is often repeated, and I don't believe it's true. The US has, without a doubt, the most accessible and open democracy in the world. Take just for example the state legislatures. A state like Indiana has an economy and budget larger than most countries. Yet literally anyone can get elected to the state house of Indiana. Happens all the time. Congress is pretty similar, too (although in full fairness, the senate is indeed a bastion of oligarchy). Anyone who thinks our system isn't democratic isn't really trying.
> The US has, without a doubt, the most accessible and open democracy in the world. Take just for example the state legislatures.
This is a shockingly sheltered view. You must have no exposure to European democracy at all.
> Yet literally anyone can get elected to the state house of Indiana
Nope, this is false. You must have enough money, free time, and access to media and advertising in order to do that. It's just a positive feedback loop, and it doesn't actually matter whether legally anyone can run if practically nobody can.
> Anyone who thinks our system isn't democratic isn't really trying.
Flukes happen, and they don't change the fact that your chances are still close to zero in general. If Fox News (or whatever) didn't like him, he would certainly have lost. Don't forget the selection bias: people will be filtered by whether they are what the people in control want. It's similar to the propaganda model: you don't need to tell people what to write at your newspaper if you never hire people who wouldn't just write what you want anyway.
I have told this story many times and I will tell it again:
Since there are so many candidates in the US presidential elections, and nearly all of them have YouTube channels, Facebook pages, Twitter accounts, etc., I was curious how much attention they were getting. The first two are easy: they are plastered all over the media and everyone around the world knows them on a first-name basis... but when I looked at random candidates on the long list, it was amazing to see the two-digit numbers. Some didn't get enough attention to account for friends and relatives. Truly shocking was that the Green Party had YouTube videos with 100-250 views. That is such a mind-blowingly small number that it doesn't account for any of the media. All those journalists around the world had to write about the top two candidates; they wrote tirelessly, every day, again and again... Not one of them had the idea to take a look at what just the number three was talking about. That voters collectively didn't give a rat's ass about anyone's program is one thing, but journalists collectively wrote one article on average about half the candidates, and most honestly told the reader they had no idea what their program was about.
So there you have it: the digital age provided us with hard data demonstrating that no one in the world did their homework. No one bothered to inform themselves before voting. They all looked to others for information, but those others didn't have any.
I therefore conclude we need a voting diploma. We bill everyone 200 bucks, and if they fail the exam, the government keeps the money. Long in advance, the voter gets a randomly chosen list of 20 candidates with 500 characters for each of their top 3 agenda points. The test is multiple choice, and you have to match the headlines of the agenda points with the candidates.
I'm sure there are people who don't agree but I think it is fair to never listen to them. Don't read what they write. Glaze over when they talk to you and think of puppies.
There is a very good reason that nobody looks at a third-party candidate: it would be foolish to vote for them, no matter how closely your views might align with theirs. Assuming you had any preference at all of which of the major parties get in, by voting for a third party instead of one of them, you in fact increase the chance the other will win. This is a fundamental issue with this kind of voting system, and there are many alternative systems that do not have this crippling, paralysing flaw. For example, look at many Northern European democracies, which have ranked-choice or proportional voting, and sure enough they often have several-way coalitions of much more representative parties governing them.
Coalitions are not needed. Politicians, regardless of their personal agenda, are interested in what the voters want. They might not agree with it, they might not do what the people want, but they are always interested. The other candidates, however small, provide valuable insight to them: it shows how far they can push their own agenda.
People can display their selfless submission by voting for the top two candidates regardless of what they represent. It sends a strong signal that they can do whatever they want.
All countries get a bunch of stuff right and make a spectacular mess of other things.
Each of those spectacles should have one or more dedicated candidates, and people who are sick and tired of it should vote for such a candidate. The agenda, most of the time, can be a summary of examples of other [western] countries getting it right.
While original ideas are definitely possible, there are countless [shall we say] boring topics that are absolutely worth representing.
Such a party may draw a utopian or extreme picture; it may even be unrealistic. If people vote for it, it will eventually send a message to those politicians who win the election each and every time. In turn, they can propose or implement a trimmed-down / mild version of it, or explain why they are against it.
There might be truly silly ideas worth investigating, but the winning candidate can't spend his finite time in office addressing the silly things, unless there is a significant number of people who don't think it is silly at all.
The only chance for such a party to win is if the issue is ignored or escalates to the point that everyone is fed up with it. Long before that, one of the bigger candidates will absorb that juicy bag of votes.
> Assuming you had any preference at all of which of the major parties get in, by voting for a third party instead of one of them, you in fact increase the chance the other will win
You need to use Esperanto contracts. When Esperanto was new, what they did was circulate this chain letter (or something?) that said, "If 1000 people agree to learn this language, will you also learn it?" They collected signatures, and once they had gotten 1000 (or whatever the number was), many of the people actually learned it...
We bill everyone for 1% of their net worth, and if they fail the exam, the government keeps the money. And if you fail, you cannot contribute money to campaigns, etc.
I had so many hilarious conversations with opponents of voting diplomas. If you let people vote who don't know what they are voting for, it just adds noise to the formula. What we need is a signal strong enough to drown out the noise.
1% is nothing in contrast with the number of people murdered by democracies. Elections give the impression that this is not really important, but it might just be the most important thing for most of us to do. It seems reasonable to force people to spend some time learning what it is all about. 1% is nothing compared to getting shot or blown up. Democracy is a well-oiled machine that can put all of the Jews on the train the next day. It can take us to Mars, but it can also kill you with the most sophisticated machinery.
> The US has, without a doubt, the most accessible and open democracy in the world.
Without a doubt, you are both extremely arrogant and ignorant in your US exceptionalism. What you think makes the US system unique (anyone can get elected) is actually standard in democracies. In fact, it's arguably easier for people without significant financial backing to be elected in most Western countries than in the US. And that's not even mentioning countries with far more direct democracy than the US, like Switzerland (the actual contender for the 'most democratic nation on earth' title).
In our reality, at least for now, there's no single entity that controls it. Instead, you have competition between providers on most layers of the stack, which means that nobody is at anyone's mercy.
As someone who's stage-managed the migration of significant IT assets out of at least the big 3 providers who "controlled the compute and storage infrastructure" (AWS, GCP, Azure), I'm curious whether your faith in 'competition' as a control is naïve or deliberate obfuscation (boy, have I seen both). There are things you 'can' do, because yes, an alternative in theory exists, and there are things that are practical to do.
"The factory of the future will have only two employees, a man and a dog. The man will be there to feed the dog. The dog will be there to keep the man from touching the equipment."
I absolutely love that the website is available over plaintext HTTP. (The maintainer(s) should consider honouring the Upgrade-Insecure-Requests header[1], so that modern browsers still get the HTTPS version.)
I've recently got my hands on a PowerBook G4 (2002), a quite interesting and still somewhat capable machine; however, the OS X version it's stuck on (10.5.8) is having more and more problems reaching the TLS-secured web: TenFourFox is no longer maintained; Safari, curl, etc. are all built against an ancient release of OpenSSL; and so on. Even downloading TenFourFox is no longer possible, as the system Safari can no longer load SourceForge, since SF requires a more modern TLS version than the OS can understand.
Treating both plaintext HTTP and modern HTTPS as first-class citizens is the way to go for such projects & efforts, so hats off.
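For reference, here's a minimal sketch of what honouring that header could look like, assuming a Python/Flask front end (purely illustrative; the site's actual server setup is unknown):

    # Hypothetical sketch: keep serving plain HTTP to old clients, but send
    # modern browsers (which advertise "Upgrade-Insecure-Requests: 1") to HTTPS.
    from flask import Flask, redirect, request

    app = Flask(__name__)

    @app.before_request
    def upgrade_modern_browsers():
        if request.scheme == "http" and request.headers.get("Upgrade-Insecure-Requests") == "1":
            target = request.url.replace("http://", "https://", 1)
            resp = redirect(target, code=307)
            # Caches must not serve this redirect to clients that didn't send the header.
            resp.headers["Vary"] = "Upgrade-Insecure-Requests"
            return resp

    @app.route("/")
    def index():
        return "hello over either protocol"

Old clients that never send the header keep getting plain HTTP; everything else gets nudged to HTTPS.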
Oh thanks for the tip! Did not know about the Upgrade-Insecure-Requests, will have a look and add to the wiki :)
Your story is exactly why we offer both HTTP and HTTPS access. I'm also still super inspired and happy to see how the whole Amiga website ecosystem tends to be served mostly over HTTP, so that it's possible to browse such resources on the most modest/limited/bare configurations.
Just wanted to toss an idea, you can run latest OpenBSD on that G4 PowerBook and it should run well (and have all the latest encryption/security features of course). I've got it running on an iMac G4 (among many other older computers). Of course I also know OS X has its own appeal, just throwing the idea out there in case you weren't aware! :)
I do absolutely love OpenBSD, and I'm enjoying it on my Thinkpad already. It makes for an awesome desktop/workstation, and I actually feel very productive using it.
I have different plans for that machine. I'm planning to max it out (it's an A1025, so 2x512MB RAM, 802.11g, IDE->CF adapter + 2x64GB CF for storage, battery replacements, etc), and try various OS's of the era: MacOS 9.2.2; OSX 10.4, 10.5 and developer preview for 10.6; maybe even BeOS - to see which one(s) I like the best. Perhaps it will end up with OpenBSD on it as well! I'm also planning to do some development on each of these OSs :)
Niiiice, sounds good. Indeed, I have a few machines with their "original" (or originally-supported OS), and then sometimes try different stuff on them. BTW, another cool one to try on PowerPC is Haiku OS! I haven't yet tried it myself, but... https://www.haiku-os.org/
Haiku might not be the easiest thing to try, judging by their docs[1], the PowerPC port seems dead, with not much interest from the dev team to continue - understandable, it's a niche OS on a niche platform. It's a lovely system though, runs great in a VM - maybe something to run on a spare x86 box :)
I allow http:// on my personal site as well due to this reason. I strongly support the use of old computers (and am planning to remove all JS from my site, just haven't got around to replacing the ready-made theme I started with). "Old" computers are only "obsolete" because corporate powers pushed forward and left perfectly-working stuff behind.
> I absolutely love that the website is available over plaintext HTTP. (The maintainer(s) should consider honouring the Upgrade-Insecure-Requests header[1], so that modern browsers still get the HTTPS version.)
And browsers should honor HSTS sent over HTTP. And hstspreload.org should not refuse to include sites that maintain HTTP access instead of just redirecting to HTTPS. And https:// should never have existed as a separate URL scheme.
It sucks that security advocates refuse to consider backwards compatibility even when for most websites the only attack they really need to protect from is malicious (US-based?) ISPs injecting ads, which should have been solved via legislation instead.
I'm actually doing this kind of thing at work right now. A client has a piece of software written by a vendor that went belly-up in 2007. The software is a central part of their business (don't ask me why they didn't try harder to replace it in the past 15 years), but it only talks SSL 3.0 and needs to talk to internet resources to function.
We had set up a shim for them to buy time to fix this mess, by configuring mitmproxy[0] with SSL 3.0 explicitly enabled and upgrading the protocol for external requests. Since then, the shim has been killed by a careless upgrade, and it turns out that most SSL software (including OpenSSL) can't even be forced to talk SSL 3.0 anymore. If you want to get OpenSSL to talk SSL 3.0, you need an old version. The modern versions maintain the enable-ssl3 option, but it is always forced to no-ssl3 at configure time. I don't know if there's an easy way around this, so I've set up a Docker image that pulls and builds an old version, and installs an old version of mitmproxy (along with Python's cryptography and other dependencies).
It's not elegant, but it does technically work, for now. At some point, it's likely that the ciphers supported by it won't be supported by the modern internet, in which case I suppose you could daisy-chain mitmproxy instances, each upgrading the protocols for the last.
If somebody has a better idea for this kind of situation, I'd love to hear it. I hate this setup and would love to have a more elegant solution.
edit: I actually discovered that OpenSSL 1.1.1p doesn't force no-ssl if you do enable-ssl3 as well as enable-ssl3-method. That's a much more workable solution, and it passes tests. I mentioned OpenSSL 3.0.4 in a previous edit of this comment, but it turns out that it compiles and says it enables ssl3, yet fails to complete an SSL 3.0 handshake.
edit 2: If anybody is curious, here's a working Dockerfile example for this, with configuration, volumes, and path stuff left as an exercise for the reader: https://paste.ofcode.org/uCyMuF6NtLKGyesT8FKYTB
Not really insecure, because all communication over the public Internet is done over TLS 1.2+. Really, it's at worst as insecure as just using HTTP to the mitm proxy and then having the proxy do the encryption to the outside world, given that the SSL 3 connection is entirely contained in a private network. I had actually suggested that initially, but the software refuses to communicate without SSL (which was a fine policy at the time). Also, SSL 3 isn't actually that insecure when there is no possibility of controlling any aspect of client communication.
> Why don't you modify the hard-coded `no-ssl3` config and try re-compiling?
I was actually missing an option, but it still doesn't work with OpenSSL 3.0. Fortunately, 1.1.1 works with it just fine, including the latest version released just today.
The biggest challenge is getting the machine to download a modern TLS implementation in the first place. You could just use a different, modern machine for that, but this raises interesting questions about our software/format/protocol supply chains. How recent does a machine need to be to be able to upgrade to anything at all? Can you understand what the software you're running is actually doing? With the forked Firefox hopelessly behind upstream, what are the risks of running an outdated stack - should you bet on being obscure enough not to be a target, or take every precaution, such as disabling JS altogether?
Ugh. Permaculture might have biosphere balance as a secondary effect, but the idea and the focus of effort is about allowing natural processes to perform the maintenance for you. It is about using living machinery to automate the process. It jibes well with sustainability, and that's fantastic, but that's not the focus. The focus is on the efficient production of useful goods in ways that require minimal maintenance by letting other creatures do all the work for you.
How exactly do you do computation in that way? There are ways, I'm sure, but not with anything resembling the computers we use today. You'd basically need living computers. I'd love that, but I am not a fan of this rebranding of the term "permaculture" to shoehorn silicon into the sustainability movement; it doesn't fit, unfortunately.
There is a wide diversity in the motivations people have for permaculture, and this view that permaculture focuses on "the efficient production of useful goods in ways that require minimal maintenance by letting other creatures do all the work for you" is one of several views I have come across ("lazy farming"), and is by no means the exclusive one. There is very much a strong contingent of permies who also focus on restoring the earth, stewardship of the land, even food sovereignty.
It's true, but all these alternative reasons would be non-starters if the approach didn't work for lazy farming. The core reason to do permaculture is that your food (and other resources) produce themselves. These other reasons may be the primary motivators for some people to do permaculture, but without that core benefit nobody could do it and have a viable farm, even if they wanted to.
I think you're coming from an anthropocentric perspective and a behaviorist paradigm, where natural processes are understood through the lens of humans and human actions ("The focus is on the efficient production of useful goods in ways that require minimal maintenance by letting other creatures do all the work for you"), and the reason someone does something is because it benefits themselves or other humans ("without that core benefit nobody could do it and have a viable farm, even if they wanted to").
The ethical principle of "Fair Share" isn't just about the yields the land owner has, but also the yields other inhabitants of an ecology have. For people who are motivated by stewardship, for example, humans obtaining benefits is not elevated into its own thing. As an example, some of the Native tribes would say something along the lines that when you plant, one is for the plants, one is for the animals, one is for the birds, one is for us. It is certainly not about maximizing production efficiencies for the benefit of humans alone.
That motivation and attitude shape the way someone views and experiences their life and their place, and in turn shape how we go about caring for land, caring for people, and fair share.
I know I'm cheating here a bit. I'm using the work of Carol Sanford to identify world view and paradigm, and that way of thinking through these things is not spelled out in the original works of Mollison and Holmgren. Sanford's work on regenerative paradigms and the living-systems world view goes a long way towards sorting out the different ways people approach things in the permaculture community, and is generalizable more broadly than food systems.
Regeneration is a characteristic exclusive to living systems. It's not something that can be approached from a world view in which everything is a machine, or from the paradigm that one can control behavior through incentives and disincentives. Only living systems can regenerate. It's the broader paradigm from which "your food (and other resources) produce themselves" comes. Living systems are capable of growing and adapting on their own; they are nested -- that is you and I, within larger living systems of family, community, organization, ecology. It is because of regeneration that "food and other resources produce themselves".
My point in all of this is that there is a diversity of motivations and views, and the view that "without that core benefit nobody could do it and have a viable farm" is not as universal as it sounds. "The core reason to do permaculture is that your food (and other resources) produce themselves" might be your core reason, but it is not the reason everyone in the permaculture community applies permaculture.
I realize how much I sound like I've fallen for a cult or am about to sell you some essential oils. Perhaps I have, and I'm not really sure how NOT to come off like that, given how excited I am. Perhaps a step in that direction is pointing to some other/adjacent projects that have made me feel similarly.
1. See this video by Lionel Penrose from the late 50s on a pre-digital version of "cellular automata":
Granted, the assumption he made is that computing power would grow increasingly cheap (which is basically the opposite of the point here), but I also wonder if techniques like those shown in that talk could be used effectively with a network of smaller, lower-power computers.
(I mean, the talk does mention the use of small low-power computers specifically, but more in the context of e.g. sensor networks.)
This looks really cool, can't wait to watch. Thanks so much for sharing. Seems like a good fit for my "computing paradigms that make me actually excited to work in tech" list
There were some interesting articles on HN a few weeks ago about doing backpropagation in a machine learning system with physical systems, the particular example I remember was a natural language processing system using a metal plate and sound. Everything that happens in the universe is a computational process. You could build a living computer with enough knowledge about genetics and feedback mechanisms between species and specimens. A living computer doesn't have to be a human brain, or even resemble a brain at all. It might resemble a forest or a fish tank.
"Don't do things that harm the biosphere" and "maximize the lifespans of hardware components" seem to be in conflict with each other.
Post-2008, typical-use CPU efficiency per watt has doubled every 1.5 years [1]. This means that your decade-old machine is probably burning a lot of coal and dumping heavy metals into the atmosphere. Doesn't sound too green to me.
There's going to be a break-even point where it's worth doing the replacement.
Manufacturing a modern laptop produces around 360 kg of CO2, while energy utilisation operating it for 8 hours a day for a year produces roughly 15 kg of CO2. So in CO2 terms manufacturing costs dominate massively over operating costs. Even if you run the computer for 10 years the energy consumption comes to less than half of the manufacturing footprint.
That's for modern laptops though, so I suppose if a 10-year-old computer uses vastly more energy, it's possible its energy consumption dominates over its manufacturing footprint in a much shorter period.
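To make that break-even reasoning concrete, here's a rough back-of-the-envelope sketch using the figures above (the old machine's operating footprint is an assumption for illustration, not a measured number):

    # Figures from the comment above: ~360 kg CO2 to manufacture a modern laptop,
    # ~15 kg CO2/year to operate it. The old-machine figure is a guess, assuming
    # it draws roughly three times the power of a new one.
    NEW_MANUFACTURING_KG = 360
    NEW_OPERATING_KG_PER_YEAR = 15
    OLD_OPERATING_KG_PER_YEAR = 45  # assumption for illustration

    # Replacing only pays off (in CO2 terms) once the yearly operating savings
    # have amortised the new machine's manufacturing footprint.
    yearly_savings = OLD_OPERATING_KG_PER_YEAR - NEW_OPERATING_KG_PER_YEAR
    break_even_years = NEW_MANUFACTURING_KG / yearly_savings
    print(f"CO2 break-even after ~{break_even_years:.0f} years")  # ~12 years under these assumptions

Even with a fairly power-hungry old machine, the break-even point is over a decade out; only if the old machine drew vastly more power would replacing it pay off quickly.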
The older laptop isn't likely to draw much more power. It will run far slower, however, most especially if it utilises spinning rust rather than SSD. But net draw will probably be 15--45 watts or so at load. Check your power unit rating.
For many tasks ... this doesn't matter. At the shell, I rarely notice the difference between running commands on a seven-year-old Android system vs. a ... somewhat similarly-aged iMac, despite the fact that the latter is literally an order of magnitude faster. There's simply not enough load on either system to make a difference between a command completing in 0.01s vs. 0.1s, both being below my perception threshold.
I do see the difference when, say, processing larger JSON archives with jq, but that's really the only major difference for direct interactive use.
A heavily-loaded server is of course a different matter. As is running a full-fledged GUI desktop browser (this makes older desktop bend and break, even with only 2--3 tabs open).
The other big difference is in going from classic desktop/laptop x86 architectures to low-power ARM or RISC-V, which can run on very minimal power --- most < 15 watts, some considerably less than this. Performance will vary, and many of these systems are 32-bit rather than 64 (which has major impacts on data and computational throughput), and some lack maths coprocessors (which slows crypto). But again, the systems themselves are often surprisingly capable, with a very low power budget.
I think to truly advance this discussion in good faith (as opposed to just as a general snow job against e-waste concerns), you have to look at the full lifespan story. Like, yeah, that computer that's 10 years old maybe shouldn't be running any more, but do the gains associated with annual replacement justify the production and e-waste cost of all the intermediate ones? What about a 2- or 3-year replacement cadence?
How does the calculus change if the computer is being operated somewhere where it's cold most of the year and so the "waste" heat still has a useful function?
How does it change if throwing out the computer also means throwing out a battery and screen, as is the case for laptops and phones? Does the extra mass of a tower pay for itself in terms of longer lifespan as people upgrade those systems piecemeal?
I could run my 38 year old Macintosh off solar panels if I wanted to.
In contrast, I'm not aware of any currently-produced computers made from completely-recycled materials, so any new hardware you buy will have components made of materials dug out of the earth & processed.
On the other hand, creating a new computer and throwing out the old might be a lot more harmful to the environment than just continuing to use the old computer.
For many devices (e.g. a power tool, or part of a vehicle) making a chip that lasts for many years would be more efficient than replacing it with one that drew less power but required a large batch of power to manufacture, plus the cost to dispose of the old one.
Also, there are many hardware components besides CPUs where durability and repurposability would far outweigh operating efficiency.
A lot of the principles they give are of the "in dying you shall die" variety of errors.
Learning is counterintuitive.
There are a lot of times in learning where getting what you want means intentionally getting what you don't want.
As you've said, though with different words, their second principle doesn't respect that.
It is near-term thinking masquerading as a long-term concept; myopia, not wisdom.
At the root of the error in short term thinking is pride.
You think you know, but you don't know - not as you ought to know.
So you stay trapped in a local minimum.
That local minimum comes with unforced error.
Which means a potential stochastic chance for extinction.
Geometric probability distributions drive this to certainty.
So in dying you die forever.
Notice - nature doesn't optimize for longevity.
Thus, it gains it.
The biosphere's actual version of what they are trying to get at is absurdly different.
Remember I said learning was counterintuitive?
The actual solution was to "eat shit" and "die".
If they were really taking advice from permaculture the idea of eating our own shit would have taken center stage.
Manure isn't exactly an unfamiliar concept in that world.
It is foundational.
Literally, not figuratively, but literally.
It is the dirt.
It is interesting to think about energy efficiency, but we should also consider carbon efficiency, which often conflicts with energy efficiency.
For instance, if this old machine is connected to a solar panel, then its operational emissions will be 0 gCO2eq/kWh (grams of carbon dioxide equivalent per kilowatt-hour of electricity) as long as its consumption is kept lower than the solar generation. If it is connected to a greener grid like CAISO, its emissions will be mostly green (~0 gCO2eq/kWh) during afternoons (~12-6pm).
Sure, at browner times (6pm - 6am), this old machine should be turned off (unless there is wind coming to the rescue). But at greener times, it should use the excess renewable energy that can't be stored to be useful elsewhere.
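A toy sketch of that scheduling idea (the hourly intensity numbers are made up for illustration; a real setup would pull them from the local grid operator):

    # Hypothetical carbon-aware scheduling for an old, power-hungry machine:
    # only run it during hours when the grid is "green enough".
    MACHINE_POWER_KW = 0.045  # assume ~45 W at load

    # Made-up daily profile of grid carbon intensity, in gCO2eq/kWh:
    # night, morning ramp, solar-heavy afternoon, evening peak.
    hourly_intensity = [300] * 6 + [150] * 6 + [5] * 6 + [250] * 6

    GREEN_THRESHOLD = 50  # gCO2eq/kWh; only run when the grid is at least this clean

    run_hours = [h for h, g in enumerate(hourly_intensity) if g <= GREEN_THRESHOLD]
    emissions = sum(hourly_intensity[h] * MACHINE_POWER_KW for h in run_hours)
    print(f"run during hours {run_hours}, emitting ~{emissions:.0f} gCO2eq/day")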
If you bought a raspberry pi, a solar panel, controller and a lead acid battery and maintained them well, wouldn't it be more efficient to use the compute (and the other stuff) until it stops working permanently?
What are you going to do about tin whiskers[1] showing up in electronics? How will you get lead soldered electronics that don't grow tin whiskers and short randomly now that the EU has banned lead solder in new products? Are you going to use electronics from before 2006 when the RoHS legislation came into effect? Is anyone making consumer electronics with everything nickel plated for durability despite the cost and weight issues?
> Simple, you wipe them off when you change the heatpaste.
That doesn't work so well for BGA packages. I'd need to do some research to see if reflowing the solder would fix a low level of tin whiskers. Usually, the flux effectively increases the surface tension of the solder, causing it to form nicely around the pads and contacts. So it isn't clear to me how well the reflow will work without additional flux.
> Human-sized computing: a reasonable level of complexity for a computing system is that it can be entirely understood by a single person (from the low-level hardware details to the application-level quirks).
"Personal Mastery: If a system is to serve the creative spirit, it must be entirely comprehensible to a single individual."
"Is there even place for high technology (such as computing) in a world where human civilizations contribute to the well-being of the biosphere rather than destroy it?"
One helpful indicator would be the technology required to prompt and hold this discussion.
Yes, here are some perhaps more direct examples: Irrigation control for fields, temperature control for greenhouses and compost piles, early detection of fires via treatment of satellite imagery, calendar control (when to plant seeds, when to fertilize), knowledge repository, calculation of shaded/lighted areas throughout the year.
Since I recently received my MNT Reform laptop, I feel like it's appropriate to shill it here. Built like a tank, to be upgradeable for a long time. Also comes with more sustainable LiFePO4 batteries!
I love mine, and will be using it for a very long time.
No calibrator, sorry, but from a totally layman perspective, the panel is very nice and has good colors and viewing angles. Wouldn't say it's quite comparable to a P3 gamut panel, but I switched to this from a 2018 MacBook Pro, and those are known for having good panels.
Of course, this panel is a bit smaller and lower resolution, but 1920x1080 at this panel size is still very nice and crisp.
I had a look online, and there are similar compatible panels from innolux that have a glossy finish, which might be a mod I'll do in the future.
I really want one! It will probably be my next "new computer" purchase, unless something more appealing to me is made (probably not happening). It's an amazing balance of fully open hardware and sustainability/repairability. I couldn't feel good about buying a computer that is not OSHW. It just feels wrong to buy single-use, unrepairable stuff that has forced obsolescence built in (looking at the Apple ecosystem especially here). That said, my current hoard of old ThinkPads is doing well, so it's pretty tough to justify purchasing anything new.
Yeah, the SoC and RAM are on a replaceable board, and upgrades are in the works. The connector is also fully documented, so other parties can also design SoMs for it.
I'd also really like to see OpenGL ES 3 myself!
> Human-sized computing: a reasonable level of complexity for a computing system is that it can be entirely understood by a single person (from the low-level hardware details to the application-level quirks).
It's a good attempt, though reading https://permacomputing.net/Principles/ ... I don't think the authors really needed to create a different set of design principles for computing. Rather, applying the existing permaculture ethical and design principles to computing would have gotten everything and a lot more.
For example, rather than "Care for Life" and "Care for Chips", those are still derived from the ethical principles of "Care for Earth", "Care for People", and "Fair Share". And those three ethical principles are more comprehensive. Further, it would contextualize the purpose of computing, and not just simply designing the thing itself, in isolation.
The 12 design principles for Permaculture are also far clearer because they contextualize the relationship of the computing with the people, rather than the design of the machine by itself.
For example, "Expose Everything" is not a bad design principle when looking at just the machine ("observability"), but it is not nearly as versatile as the more generalized Permaculture design principle of "Observe and Interact". It's more important to look at a computational device's place in the overall ecology. When I apply this to my backyard, I cannot always directly "observe and interact" the root system of plants. If I dig up the plant to examine its root system, then I have most likely killed that plant. I have to make guesses. Looking at it from a different angle, the Permaculture design concept of "Zones" was is a way to organizing things so one can systematically "Observe and Interact".
Another Permaculture design principle is "Integrate not segregate", another reason to "Expose Everything".
Another one is "Keep it small and simple" in which there are two better Permaculture design principles, "Use Small and Slow Solutions", and "Apply self-regulation and accept feedback". Those two principles allows for the system to adapt, change, and grow, within the local conditions, and take advantage of regenerative cycles.
So I think it is a nice try, but the author has not yet applied the Permaculture principles sufficiently broadly and flexibly.
Having said all of that, I remember coming across the saying -- what isn't grown is mined. Permaculture is great for working with what's grown. I don't know if it says much about what's mined.
Computing and hardware technology ultimately come from the assembly of what is mined. If so, maybe there is a place for developing a separate set of design principles for things that are mined, not grown.
What separates things that are mined from things that are grown is that things that are grown are capable of regenerating on their own without human beings doing anything. What's mined doesn't have that kind of autonomy, or if it does, it starts to resemble things that are grown, and we would have to treat it as a participant in an ecology. That includes integrating it with existing ecosystems, lest it become invasive (something that grows vigorously but does not contribute anything to the existing ecosystem).
I would expect recycling (i.e. "growing") of electronic components to become more incentivized as rare metals become rarer and mining more expensive. Also, agriculture has been technological since its inception, I would argue. My point being, I think your initial point is relevant.
I think there is a place for technology in permaculture. It's something I've been thinking about a lot during the lockdowns and after I started deep-diving into permaculture. I remember seeing one of the designs from the original permaculture book for rooftop rainwater harvesting. There was a valve mechanism that lets the initial flush of water shed off first, along with the accumulated debris from the roof. Being a software developer, my initial thought for solving that problem was some kind of electronic device, maybe even one running Linux :-)
When I look at this as something that can last generations, the valve is simpler and probably more reliable. If we were to evaluate its resilience and anti-fragility, because it is produced with a much shorter technological dependency graph, it is more likely to be maintainable.
I marveled at the "curb cutting" that Brad Landcaster pioneered out in Tuscon. He dug basins for native trees and shrubs in public right-of-ways, and then cut notches in curbs. When the monsoon rains flush down the street, it fills up a basin, helps recharge the local aquifer, held by living plants. When the basin is full, there's a natural backpressure that then lets the rainwater flow to the next basin. There are no moving parts. The curb cut uses power tools, but if needed, you can chisel it out by hand using something that is lower-tech. Those are all technologies too, ones that may seem simple, perhaps primitive, but were deployed and used in alignment with permaculture ethical principles.
There was another book I have, where the author talked about how they intended to design their farm to grow over multiple generations... but they were willing to use a chainsaw for the initial build, a way of investing the last dregs of fossil fuels and power tools to set in motion something that can regenerate and keep on growing.
I think about some of Richard Stallman's essays. I remember the ones that laid down design principles for software that make it accessible to people who may be blind, or have difficulty using a mouse, or even a keyboard. They are laid out in terms of the principles of free software, but they could easily have come from permaculture ethical principles. I remember Christopher Alexander, and people's attempts to bring in the ideas of living architecture, where inhabitants can make local changes that are still architecturally coherent. That would be end-user-modifiable software, such as Smalltalk or HyperCard. They don't web-scale and drive the aggregation flywheel, so they've been abandoned in consumer software.
I remember watching a documentary on a Native American elder speaking to her people about permaculture. How her initial reaction was that this was, yet again, another example of being told by arrogant people on what to do. Then she recounted how she saw people working the earth with reverence, and that went a long way towards credibility. In terms of permaculture, these people were living the permaculture ethical principles.
I don't think my life and career making a living with technology is a waste. There's a place for technology in all of this (rather than the inverse, a place for traditional values in technology). I've already been applying some of the permaculture design principles to how I work with Kubernetes, but I think there is a deeper change in my way of life that involves computer technology.
I think permacomputing is really about software rather than hardware. For hardware, as far as human civilization is concerned, it's all about the use of power. But for software, we have lost so many things: many documents have been lost, and much software can no longer run anywhere. Those are the problems we should solve.
I'd say that permacomputing's principles are first of all something to apply when designing new systems and new software, so as to prevent that from happening again.
I think systems such as Nix/Guix would be useful for permacomputing, with which (unlike in many Linux distros) you can keep old libraries that older un-upgraded apps depend on right beside newer versions of those libraries.
As far as inspiration goes, Devine Lu Linvega's ideas and lifestyle are pretty interesting to me. Would recommend people check him out. Definitely adjacent.
edit: also the low tech movement can be relevant
I love this idea and what is proposed in the Principles, but it's missing a crisp definition. It relies a bit too much on the reader's knowledge of permaculture.
I like the idea of piggybacking on permaculture but it's not a perfect analogy, because computing isn't exactly a naturally flourishing phenomenon.
What does carry over is the idea of being more thoughtful about computing, reducing energy waste, and generally architecting these systems for a longer time horizon. Perhaps in a way that can encourage stewardship over multiple generations.
Presumably, the principle of maximizing the life of hardware applies after it's gone into production and is widely replicated. (And, hopefully, standardized.) This is the long-term supported version.
But to get the design right, you need to make prototypes, and probably a lot of them. I try to minimize design mistakes because reprinting a part is tedious, but I still have a box full of 3D printed parts that turned out not to fit quite right.
This is also true of education. Most of what students create themselves is never really used. Either it's thrown away or gets put on a shelf somewhere. Making things badly and throwing them away is an essential part of education.
I like the idea in theory, but basically all systems currently in use are good enough to run "pseudosimplicity" programs without trouble.
We don't need to make stuff all that lightweight; Moore's law seems to have won. We just need to not invent new ways to make things horribly slow or otherwise bad, like making stuff cloud-first or giving it a mandatory blockchain feature.
Of course, a movement can be about more than one thing at once, and if the permacomputing community values through-and-through simplicity for its own sake, I can't argue with that.
You can also use Streacom cases with an 8-core Atom (load balancer) and a Xeon for that extra kick in computing, but 3x Raspberry Pi 4 can saturate a symmetric 1 Gb/s link.
Rather than 1x 10 Gb/s, it's better to have 2x 1 Gb/s (preferably distant from each other), with all passively cooled components staying under 60 Celsius at full blast during the hottest day of summer.
The whole thing should be capable of running at least 24 hours on lead-acid backup.
I love the concept; it's very appealing. However, I can't stop thinking that this is mostly aimed at a post-apocalyptic scenario, and if that is the case, computers would be the last thing to worry about :(
This one seems more reasonable, but I recall seeing similar projects/initiatives in the past aimed more towards post-apocalyptic scenarios... and yeah, I share the exact same thought as you. Computers are totally useless in such a world, with the possible exception of data storage.
I think data storage would be extremely important in that case. If we notice the apocalypse is coming, one of the first things I'm doing is downloading the entire Wikipedia, agriculture websites, survival guides, etc. As much as I possibly can. A raspberry pi can be used to index into that.
(Obviously I'd buy supplies etc. too)
If this happens, you want a reliable, low-power, easy to maintain no-BS computer, since you have more important things to do than mess with that.
Now, if we think a bit further into the future, at some point humans will start living in okay environments again, and we're probably going to want to start building computers again. Having some infrastructure for this would help in bootstrapping computing. Maybe we'll know better this time; maybe we won't destroy our own civilization the second time.
This is obviously all just theorising, but I can see myself needing a reliable low-power computer during an apocalypse.
Could computers be built without perishable components such as capacitors? From my experience they're the first components that usually break down or have the shortest shelf life.
"the computer, once invented, could not be un-invented"
They put most of the storage on the moon, most of the processing power in a network of satellites, and in every village there was a hut with a dumb terminal. The vast majority of the population didn't need computer skills, only the handful of people whose lifetime tenured position was to maintain the hut and the terminal. The only 'useful' function provided by the terminal was you could tell it what you had and what you wanted, and if there were people nearby with complementary needs and wants, it would tell you which direction to walk.