This website has 81% battery power remaining (lowtechmagazine.com)
1225 points by behnamoh on Dec 12, 2021 | 351 comments



*Technically*, this is very cool, impressive, and generally an elegant work of art.

*Pragmatically*, I see 2 flaws in their thesis (as explained on the about page):

>> "The entire network already consumes 10% of global electricity production with traffic doubling roughly every 2 years"

I think the implication is electricity consumption will also double roughly every 2 years, but Moore's law actually operates on approx. same timeline, so traffic can continue doubling at this rate without an increase in energy consumption. *This is why technology is brilliant.* It allows us to do much much more with the same resources. We should want more technological innovation.

>> " These black-and-white images are then coloured according to the pertaining content category via the browser’s native image manipulation capacities"

This essentially shifts some of the burden of computation from the server (PNG compression) to the browser (dithering interpretation). This may save some energy, or it may increase energy use, as most personal computer processors are much less efficient than server processors and don't benefit from energy savings from caching. I'm not sure where it nets out, but solely focusing on reducing server compute time isn't necessarily a path toward sustainability if it shifts more computation to the client.

Very happy to hear if I've misinterpreted the thesis. Again, I commend the technical work itself.


> This is why technology is brilliant. It allows us to do much much more with the same resources. We should want more technological innovation.

But that's not what happens in practice. There is this paradox, the name of which I forget, which says that the more efficiently we use a resource, the more of it we use in absolute terms. We never really freed up any leisure time when we introduced household appliances or when productivity went up. Cars became more fuel-efficient but also heavier, so we burn more fuel. Smartphones have gotten really good at managing power, making them a viable choice for plenty of activities throughout the day, thus increasing their total energy consumption.

Personally, I just don't believe more efficient tech will help reduce our power consumption. It will require either a completely different tech that doesn't require electricity or fuel to run, or a change in societal habits.


"In economics, the Jevons paradox occurs when technological progress or government policy increases the efficiency with which a resource is used (reducing the amount necessary for any one use), but the rate of consumption of that resource rises due to increasing demand."

https://en.wikipedia.org/wiki/Jevons_paradox


It is interesting that they consider it a paradox, when the demand curves show it. Making something more efficient moves the supply curve around (and moves where MR=MC is), or in econ 250 class speak, it 'shifts the supply curve right'. I think the 'paradox' comes in because they do not consider that there is more demand on the other side of lower prices. Not all goods go to infinity on the cost vs demand curve, but some sure act like it.


In urban planning there's the 'law of induced demand' that has parallels to this and shows the paradoxical nature. When a road starts to see congestion, the reaction in the past has been to add lanes, widen the road, and/or increase the speed limit to get more cars through faster. This has its intended effect in the short term, but makes the road more attractive to drivers. More drivers start using the particular road, exceeding the planned capacity and in the average case causing worse congestion than existed before.


I guess the "paradox" or counterintuitive part comes in situations when the price elasticities are such that a 10% efficiency gain (or price cut) would result in >10% increase in demand and hence in final spending.

Another part may be that non-economists (and even lots of students who successfully passed econ 101) don't think about shifting demand or supply curves. In my experience, most people who remember the textbook supply-demand curves only think about moving along the curves (which makes the classic diagrams pretty crappy pedagogical devices IMO).


Agreed. A 1-to-1 response would be an interesting result, and would probably imply a linear set of curves. But some curves look more like log scales, so depending on the slope of the curve the effect could be wildly more or less.

I agree the classic diagrams are kind of crappy, as they are usually kept very simple just for demonstration. In reality each of those curves is more like an N-dimensional surface. But thinking back on the econ classes I took, they were excited about point-slope formulas and maybe a mix of linear algebra in the advanced classes, which was amusing coming from a CS/Math background, which was my major.


> There is this paradox, the name of which I forget

It's called the Rebound Effect: https://en.wikipedia.org/wiki/Rebound_effect_(conservation)


I'm not sure about your general point - people definitely use more things as they become cheaper, which is a good thing IMO, but you're right that this means power consumption doesn't necessarily go down with lowering costs.

But just to add another dimension to the discussion, there's an interesting book I've heard about, the thesis of which is basically that technology has made a lot of material goods much more efficient. E.g. they give the example of aluminum soda cans - nowadays, the cans themselves use much less material than 50 years ago - using modern production technology, using CAD systems to design them better, etc. has led to a lot less material wastage (I think it was like 25% or something like that, but I'm not sure).

And in general, from the point of view of the companies selling material goods, the less material used and the more efficiently it is produced the better, since material is a direct cost, meaning they are in general working to make everything a lot more efficient.


Eh, I’m not so convinced that the relationship is direct - Usage is going up and power efficiency is also going up, but I’m not convinced the two are necessarily linked.

I spend roughly the same amount of time at the computer as I did 20 years ago, but my old desktop PC and CRT monitor was certainly not as power efficient as my (bigger) current monitor and Mac mini.


"I spend roughly the same amount of time..." the paradox works if you define usage as energy consumption, not time spent.

20 years ago, you were not streaming 1080p videos or downloading 50 GB video games. Even kids do that on a regular basis now and a bigger share of the population worldwide has access to similar services


No, but I was torrenting those movies to my seedbox which was on 24/7.

IMO I don’t think usage scales to power consumption - this sounds like correlation vs causation to me. It sounds like use has also increased while efficiency has increased, not one because of the other (ie I would personally still be streaming the same number of movies if my AppleTV used twice the power, and if they release a new version that doubles the energy efficiency that won’t cause me to watch more)


With a lot of these arguments (including for example the Unabomber manifesto), I wonder if we're not projecting our own behavior on technology, because we see the effects so easily when looking outward rather than inward.

Technology just is. How it's used makes the difference, and that stems from our way of living in the world. Any tool can be used for both "good" and "bad", however you define those.

I think we tend to prioritize our own desires above anything else, because "someone else" might do that and gain an advantage over "us". We're largely stuck in survival mode, rather than an abundance mindset where we take a step back and reflect on what the goal is, and what is needed to achieve that goal.


Significantly, shifting the image calculations from the server to the client means the work is duplicated for every client. While it saves energy for the server, it increases the overall energy consumption required to deliver the content.


Apparently the mix-blend-mode for images on the client is noticeable performance-wise: https://github.com/lowtechmag/solar/issues/6


False. The energetic cost of distributing duplicate information is significantly higher than that of local computation. E.g. measured in joules, transmission is orders of magnitude more expensive than decompression, which outweighs the n-fold duplication.


But surely the orders of magnitude thing factors in when you can have orders of magnitude more clients than servers? I think you'd need to run more specific numbers to lean either way on this.


You seem to not realize that moving the calculation client-side means sending a JavaScript implementation, which for all sensible implementations would be significantly larger than the pre-computed result.


The way the image is colored on the client is just some CSS (mix-blend-mode). Check it in the inspector.
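For anyone curious, a minimal sketch of how that kind of CSS tinting works (my own simplified selectors and colour, not necessarily the site's exact rules): the dithered image is grayscale, and multiplying it onto a coloured background turns the white pixels into that colour while the dark pixels stay dark.

    /* the figure carries the category colour (hypothetical class name) */
    figure.category-energy { background-color: #2f6f4f; }

    /* grayscale dithered PNG blended onto the coloured background */
    figure.category-energy img { mix-blend-mode: multiply; }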


No useless JavaScript in this case.


Can one client do the work and post back the final version to the server so the server sends that version to all the other clients?


Don't trust the client with that, or everybody is getting goatse'd.


A. Cryptographically verify the final version

B. Schedule the work to be performed on multiple clients in case some fail


> A. Cryptographically verify the final version

How?


Check out the upcoming ATX power consumption and compare it to what we're using currently...

What you said was true for a very long time, but Moore's law has slowed down a lot over the last 5 years, and power consumption has increased significantly and is about to become unreasonable in my opinion.

Maximum power draw of 2.4 kW for about 10% of the uptime is ... sadly going to be reality soon.


> power consumption has increased significantly, and is about to become unreasonable in my opinion

It is not the first time it has happened, and then went down again. It's just one of the usual "cycles".


I don't think we've gone through a cycle of more than doubling the power consumption before, but I've only been paying attention to it since shortly before 2010, as I was using prebuilt systems before that.

Maybe you're right and more than doubling the power consumption is a normal thing that just occasionally happens.


I think it was precisely with Core that the previous "nuclear power CPUs" cycle ended.


I feel like it's a very useful metaphor for the work ahead in dealing with energy production, consumption, storage, efficiency, etc.

Layer 1: We must think of clever solutions to reduce our energy footprint where possible and shifting typical energy uses around is one way to do so

Layer 2: But we must also be conscious and aware of economies of scale, and shifting the burden of energy usage from one central server to N clients is ultimately not a good answer for NET energy consumption.

Layer 3: But for some constrained resources, that might be worthwhile, depending on the magnitude: The incremental cost of extra rendering on the client is a rounding error. But on a solar-powered low-energy server, it MIGHT be meaningful, and be the difference between something working and not.

This is a lens we must apply to everything:

* Logging

* Battery technology

* Buying locally vs economies of scale

* International shipping

* Container ships

* etc.


I'd rather avoid all these layers and just build so much nuclear we get to a future where electricity is too cheap to meter.


Current world power consumption is about 15 terawatts. That's 15,000 gigawatts. [0] One gigawatt takes about 1 nuke. So, 15,000 nukes. Round that down to 10,000 nukes; ignore demand from new electric cars and cooling, and required downtime.

On a very good day, a new nuke rounds down to $10 billion. 10^4 x 10^10 = $100 trillion. Not to mention mining, operating, disposal, and environmental costs. Or tsunamis, or profits, or interest.

In other units (Q = 10^18 joules) Total world energy consumption per year: 400 Q. Total solar power striking the Earth each year: more than 4 x 10^6 Q.[1] Catch 1/10,000 of that.

[0] https://science.howstuffworks.com/environmental/green-scienc...

[1] http://sites.science.oregonstate.edu/~hetheriw/energy/topics...
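Writing those two estimates out as one line each (same numbers as above, just collected):

    15\ \mathrm{TW} \approx 15{,}000\ \mathrm{GW} \;\Rightarrow\; \sim 10^{4}\ \text{reactors at } \sim 1\ \mathrm{GW}\ \text{each}

    10^{4}\ \text{reactors} \times \$10^{10}\ \text{per reactor} = \$10^{14} = \$100\ \text{trillion}

    \frac{400\ \mathrm{Q/yr\ consumed}}{4 \times 10^{6}\ \mathrm{Q/yr\ incident\ sunlight}} = 10^{-4},\ \text{i.e. capturing } \sim 1/10{,}000\ \text{of it covers current use}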


But isn't one of the problems of global warming (ie where humans have a "future") the amount of waste heat we are producing?

https://environmentalsystemsresearch.springeropen.com/articl...


> This essentially shifts some of the burden of computation from the server (PNG compression) to the browser (dithering interpretation). This may save some energy, or it may increase energy as most personal computer processors are much less efficient than server processors, and don't benefit from energy savings from caching.

I believe the image colour setting is already eaten up inside the cost of doing PNG decompression: the client is already doing image manipulation, and this is just an additional multiplication, so I believe the additional cost on the client is minimal (I may be wrong though, I've never written a PNG decoder).

> I think the implication is electricity consumption will also double roughly every 2 years, but Moore's law actually operates on approx. same timeline, so traffic can continue doubling at this rate without an increase in energy consumption.

We're not doing more with the same amount of energy, we're doing much more with more energy. This is always the same energy efficiency fallacy: being more efficient (in percent "out/in") usually requires having a bigger "in" (bigger scale). And you know how capitalistic companies always seek to get bigger no matter what. There's no incentive to maintain your current consumption. See https://solar.lowtechmagazine.com/2018/01/bedazzled-by-energ... for more details from the same source.


I thought Moore’s Law is about the density of components on an IC—what about it suggests a constant energy usage of IC as it gets denser?


This property is called Dennard scaling. It mostly does not apply to the latest few generations due to increasing static power (leakage current).

https://en.m.wikipedia.org/wiki/Dennard_scaling
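For reference, the standard textbook sketch of why Dennard scaling kept power in check (not from the thread): dynamic switching power is roughly

    P \approx C V^{2} f

and under ideal scaling by a factor \kappa, feature size and voltage both shrink by 1/\kappa, so C and V drop while f can rise by \kappa, leaving power density roughly constant even as transistor density grows. Once supply voltage stopped scaling and leakage current grew, that bargain broke down, which is the point above.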


Fascinating, thank you


I believe ICs are limited by how fast you can cool them. All that energy goes into heat eventually, so temperature is going to depend on the power dissipated per unit area.


Digression: Is it true that every single watt my CPU/server/electronics consume is ultimately turned into heat?

To put it another way, does a server under a constant 500W produce the exact same amount of heat as a dumb 500W electric space heater? Excluding things like the tiny amount of energy that gets sent out on CAT5 cables, which I believe still gets turned into heat just elsewhere.

I think this is true, but it’s still counterintuitive. Makes me think there’s something more useful we could make electric space heaters do with their energy and still produce the same amount of heat.


> I think this is true, but it’s still counterintuitive. Makes me think there’s something more useful we could make electric space heaters do with their energy and still produce the same amount of heat.

Indeed: people have experimented with mining during the winter to recoup electric heating costs, which makes sense but only if you ignore the (expensive and getting more expensive by the day) cost of procuring the original mining hardware (aka GPU).


Hardware is a high cost if you're using current gen mining hardware to be competitive. If power is essentially "free" (since you'd use it anyway for heat), then even very old mining hardware would have a net positive benefit for you. It might only be worth a cup of coffee a week, but it is "free" money you can collect...


It still depends on what your alternative is. Much of the world does not use electricity for heat; in the United States, electricity is rarely used for heat in the parts of the country that traditionally get brutal winters. It's cheaper for me to heat my home via the forced-air natural gas furnace (even though it is a less efficient fuel-to-heat process as compared to resistance heaters) given the disparity in pricing between residential electric and natural gas costs. I tried mining ethereum for a month last winter and I don't think that I broke even (despite using a relatively recent RTX 2080).


In short, yes. Exactly (well, minus a few watts used to spin fans and hard drives). Here's one source from a quick search:

https://www.pugetsystems.com/labs/articles/Gaming-PC-vs-Spac...

I remember seeing a paper referenced here once explaining why that energy loss is actually necessary, but I don't seem to have it saved anymore.


> I remember seeing a paper referenced here once explaining why that energy loss is actually necessary, but I don't seem to have it saved anymore.

Taking a stab at explaining the energy loss…

When computing you are reducing local entropy (e.g. achieving a certain pattern of electric charges in chips and a certain pattern of pixels on the screen) while still increasing global entropy (the second law of thermodynamics). Heat dissipation from the local system to its surroundings accounts for the difference.

This is analogous to Schrodinger’s description of life where life forms increase the entropy of their surroundings to maintain their own low entropy.
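The paper mentioned above may well have been about Landauer's principle, which puts a concrete floor under this (standard physics, not something from the thread): irreversibly erasing one bit of information dissipates at least

    E_{\min} = k_B T \ln 2 \approx (1.38 \times 10^{-23}\ \mathrm{J/K}) \times (300\ \mathrm{K}) \times 0.693 \approx 2.9 \times 10^{-21}\ \mathrm{J}

at room temperature. Real chips dissipate many orders of magnitude more than that per operation, but the bound is why irreversible computation can never be entirely heat-free.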


The spinning fans use energy to move air, a (slightly) viscous fluid, which eventually encounters turbulence and friction and turns its kinetic energy into heat. Similar to discs that use energy to overcome friction in bearings and generate noise, which is a kinetic energy in the form of vibrations that also eventually dissipates into heat.


By a paper, do you mean the laws of thermodynamics?

The fans also eventually dissipate their energy as heat when they push air around.

There is an alternative heater design for the same power though, which is a heat pump (the same as an AC, run in reverse), which can be much more efficient at heating (down to about -5°F outside, below which they may freeze over and stop working entirely).
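To put a number on "much more efficient" (standard idealized figure, my own addition): a heat pump's coefficient of performance is bounded by the Carnot limit

    \mathrm{COP}_{\text{heating}} \le \frac{T_{\text{hot}}}{T_{\text{hot}} - T_{\text{cold}}}

with temperatures in kelvin, e.g. 293 / (293 - 273) \approx 14.7 for moving heat from 0°C outside to 20°C inside. Real units typically deliver around 2-4 units of heat per unit of electricity, versus exactly 1 for a resistive heater or a mining rig.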


> something more useful...

Ha! A space heater with bitcoin mining coils. Call them "bitcoils".


I'm sure you've heard the old joke about gamer college students using their fancy graphics cards to heat their rooms. :)


I did this last winter. Mining 24/7 on my personal rig noticeably took a bit of load off the heater and made a modest amount of ethereum. I stopped once the weather warmed as it seemed like a waste of energy.


I did the same, then my graphics card died and it was a net loss :-(


How were your electric bills?


Not appreciably different, that was the whole logic behind my thinking. In my case I have electric heaters, so doesn't make a difference compared to a gpu -- they're both effectively 100% thermally efficient.


The only other places the energy could go are small - LED light, radio waves, sound. Almost all of it will go to heat.


And all of those things will go to heat eventually too, as will all other energy!


The state of Missouri pushes SSN sorting to the browser. I’ll allow it.


Don't forget the reduced bandwidth usage: the energy used to transport the data across the entire internet.


It seems like we have tons of room to make the internet more efficient even without Moore’s law - how many backends and clients are written in scripting languages (PHP, Node, Python, Ruby…) without a whole lot of thought put into energy use (even if they are carefully designed to be fast or “efficient” by some metrics)?

And how much of internet energy use is made of serving ads? It seems to be a growing share, and ads increasingly want to stream video, which puts things like PNG decoding & dithering in the margins of overall energy use.


> but Moore's law actually operates on approx. same timeline, so traffic can continue doubling at this rate without an increase in energy consumption

Except that data bloat grows at a steady rate, and you only take advantage of Moore once you discard your old equipment and replace it with newer silicon. That generates e-waste, which is counterproductive for ecologically aware projects.


While I agree with most of your points, I think that these dithered images use less client-side power than regular image compression (e.g. JPEG or PNG). The dithered images are smaller (according to them), and I don't see why they would use more CPU per byte to decode than JPEG.


> The dithered images are smaller (according to them), and I don't see why they would use more CPU per byte to decode than JPEG.

Just try substituting something else in that statement and see if it makes sense. “The lzma-compressed files are smaller than the raw test files, so opening them in an editor shouldn’t use more cpu” or “HEVC-compressed videos are smaller than DivX videos, so shouldn’t they be more efficient to decode and play (sans hardware acceleration)?”


The examples you give compare new/good compression (lzma, HEVC) with old/bad compression (uncompressed, DivX). Yes, the new/good compression uses more CPU per byte.

I see no reason to believe their website uses a compression scheme that uses more CPU per byte than PNG or JPEG. I don't think they're using anything advanced. Actually I just checked, and their website is using PNG.


That is entirely forgetting that you throw away and replace hardware in that timeline - what happens to the old hardware?


Small technical nit: I love the dithered images and the retro feel, but their CSS should specify `image-rendering: pixelated` to make sure browsers don't interpolate the pixels.
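For anyone who wants to try it, the whole fix is one declaration (generic selector, not the site's actual stylesheet):

    /* scale dithered images with nearest-neighbour sampling so the
       browser doesn't blur the dither pattern */
    article img { image-rendering: pixelated; }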


Whoa, TIL. It actually makes quite a big difference. Nevertheless I don't actually think being pixelated is much of a stylistic choice.

Here [1] they explain how they use dithering to minimise their bandwidth and computational costs.

[1] - https://solar.lowtechmagazine.com/2018/09/how-to-build-a-low...


AVIF and WEBP formats give pretty good compression: https://squoosh.app/editor

* original colourful image: https://homebrewserver.club/images/lime2.png

* dithering/PNG on this site: 34 kB

* AVIF (quality 16 for similar "readability"): 21 kB

* WEBP (quality 16 for similar "readability"): 24 kB

But the "nice dithering style" is lost in the process, obviously.


Lossless compression of their 4-colour dithered PNG (34 kB):

* AVIF : 69 kB (+100%)

* WEBP : 27 kB (-22%)


Except dithering is not great for modern compression algorithms, making the endeavour mostly performative.


Yeah, I reduced the file sizes in their examples by just setting the JPEG compression level to 7% and got actual grayscale.


Wouldn't the correct css for the dithered image be `crisp-edges`? From the specification:

> The image must be scaled with an algorithm that preserves contrast and edges in the image, and which does not smooth colors or introduce blur to the image in the process. [...] This value is intended for pixel-art images, such as in browser games.


No, `crisp-edges` allows for algorithms[1] like HQ2X and 2xSaI, which would not work with the dithered images, whereas `pixelated` enforces nearest-neighbor scaling.

[1] https://en.wikipedia.org/wiki/Pixel-art_scaling_algorithms


I think this also shows how inefficient modern website hosting is. The fact that this person was able to get a Raspberry Pi to host the #1 website on HN powered by a small 50 watt solar panel is very cool (meaning maybe a 10 W average power budget), but it also shouldn't be as uncommon as it is today. To put this in perspective, a modern server uses 50-100 watts idle doing nothing, and many more under load. To handle the top of HN, a typical developer would probably use load balancing and other tech, multiplying the power usage accordingly. Edit: fixing typos.


I hate to be that old fogey, but aren’t websites just getting worse and bloated with JS crap?

I’ve had a couple of websites I use daily for work get flashy new interfaces, which cause 1/3 to 1/2 second delays in the interface that didn't exist before; previously they just had normal page load delays.

For example, Salesforce Lightning, their UI overhaul. The old UI is mainly just flat HTML with some loading on fields. The new UI doesn’t seem to have as many page loads, but wherever you navigate takes far longer to load because of API calls or just baaad JS.

Slow for the user, slow for the server. It's almost like the people who push website technology are the same ones selling you servers. I hate it and want to go back.


I agree with this so so much.

The problem is too much reliance on frameworks and add-on libraries.

Developers will import an entire framework for the benefit of a single feature. It's mind-blowing to look at the amount of JS includes for seemingly simple sites.

Stackoverflow answers that direct you to import a library or framework should be banned in most cases.

I will often have to scroll past several answers that say to import a library before finding a simple and functional answer that uses only a few lines of code down near the bottom. Which in my eyes is the real answer. I often wonder if there's a behind the scenes effort on SO to promote certain includes.

The entire ecosystem of some languages / implementations relies on this far too heavily.

We are seeing some of the consequences of this style of coding beyond just bloated systems, such as malicious node packages.


I am guilty of this, and I feel bad for it. I am not a front end developer, but I have built a few web sites for various projects here and there. I certainly don't NEED to use a front-end framework, but I don't want to have to spend a ton of time crafting CSS rules and figuring out how many divs to nest. To get something done quickly, my choices pretty much boil down to plain, unstyled pages, or a full blown framework like Vuetify. So far, I haven't found anything in between. I would love to find a CSS library that I can just import and be able to create simple, nicely styled pages, e.g. that look Material-esque, without jQuery, node, npm, gulp, grunt, sass, and all that jazz.


Do you know the Matrix movie quote "but there is no spoon"? Maybe the framework you're looking for is vanilla CSS. Write sensible markup to hold your content (almost no divs), CSS it and be good. Sounds that feasible?


> ... vanilla CSS... Sounds that feasible?

Vanilla CSS is the other end of the spectrum, but the problem is there is apparently nothing between hand-crafted CSS and a full front-end framework. People, like me, who are not good at design will choose the convenience of the latter over going through the tedium of the former, even if we don't really want to.


You won't be a sculptor if you avoid the chisel. If in Rome, do as the Romans do. If you want to swim, you'll get wet.

There is no design in/for the web without html+css, is there?

Edit: by removing 3rd parties from your project you remove a lot of overhead and current and future risk. But be warned: Maybe your company sells exactly that for a good margin and you ruin the business model.


We don't want to be sculptors. We have other projects that just happen to require a sculpture as part of finishing them.

> There is no design in/for the web without html+css, is there?

Only as much as there's no programming without assembly language.


> We don't want to be sculptors.

It is really sad that you are forced to do things that you believe you are not capable of doing and refuse to do.


It's not that I don't believe I'm capable of it. I do not want to do it. I'm happy to not have to do it.

When did I ever say I was not capable of doing it? It's really sad the way you push your mindset on others.


What's your opinion about Bootstrap? Unfashionable, I get it, but doesn't it serve the purpose?


It's been a long time since I looked at Bootstrap. I don't remember it being anywhere near as easy to create a nice-looking page with it as it is with Vuetify.


How about Tailwind?


Bootstrap is unfashionable?


In the same way that Corollas, Applebee's, and Walmart brand jeans are unfashionable, yes.


Bootstrap v2 had a LOT of unnecessary features while simultaneously lacking features because of Less limitations. By v5 it was significantly slimmed down, not least by removing IE support and jQuery, but also a lot of rarely-used components.


> aren’t websites just getting worse and bloated with JS crap

Maybe they are, but that bloat is just some static files that are sent to the user as far as the web server is concerned. They should have no practical impact on the battery life of the server.

There are JS sites that render on the server as well, but that's not the bloat you mean.


And, heck, there's a solid argument that server-side rendering is more environmentally efficient, since the work is done in a data-center, which can (1) utilize caching to avoid re-doing work and (2) be built in an optimal location for electricity generation.


Conversely, you are losing the distributed computing gained by rendering on the client, and therefore need a bigger server to scale when needed. And HTTP caching can and should be used for API responses as well.


If we're measuring by total resource consumption regardless of location, is distributed computing beneficial? Your server can be less powerful, but the client needs to be more powerful. I'd think the primary difference is who's paying for it.

Sure, most clients may already be adequately provisioned, but only because so many websites with bloated Javascript have forced their hand...


>Maybe they are, but that bloat is just some static files that are sent to the user as far as the web server is concerned. They should have no practical impact on the battery life of the server.

...and if those static files make 100 API calls as soon as they land?


I think he is talking about badly written JS code serving the APIs and the overhead of it.

Of course it's hard to debate whether JS, Java or PHP is most inefficient in that regard.


I strongly believe that the efficiency of an API is 1% down to the language it's coded in, and 99% down to who coded it.


Yes and no; while it's still true that you can write FORTRAN in any language, there are network effects that mean the effort required to write efficient code is different per language/community/framework.


I think that’s true if you take a no dependencies approach but as soon as you tap into the ecosystem the argument is lost.


Not really. It still depends on who coded it. It's just that there's more people involved now.


Corollary: the average API is far less efficient than the languages people like to complain about.


It's gotten out of hand IMO. In many cases, pages take longer to load than when I was browsing the web on dialup.


Website Obesity crisis going on and on: https://idlewords.com/talks/website_obesity.htm

It was here on HN several times; sadly it's still the case.


"I don't care about bloat because it's inefficient. I care about it because it makes the web inaccessible.

Keeping the Web simple keeps it awesome. "


I don't know, we're currently rewriting our UI from the classic "PHP renders everything with almost zero JS" to the more modern "single page application with a crap ton of JS" and the new UI feels much faster to me. The old way was to resend and rerender everything on each click, which is problematic for complex UIs with a lot of data.


On a modest 3 year old phone running Firefox, such websites are excruciating.


What a false equivalence. You are comparing static sites of the past to dynamic sites of today. Apples to oranges.


The vast majority of sites don't need to be dynamic though.


[citation needed]

Websites have become significantly more complex in the last two decades.


Let’s look at Twitter for a real-world example. The core concept of it hasn’t changed, it still just has to display a blurb of a few hundred characters at most. Back in the day this was achieved by server-side-rendered HTML and a simple form POST. I don’t have the numbers for the page back then but I’d estimate it at 100KB - nowadays it’s a multi-megabyte-sized pile of shit that often fails at its primary purpose of displaying a block of text with a stupid “something went wrong” message or endless spinner.

The “new” Reddit is also a good example. Even ignoring all the user-hostile functionality changes, the actual experience is still slower and less reliable.


And for proof that you can do better, Nitter is a better Twitter interface than Twitter is, and it's much lighter.


Reddit web is incredibly sluggish. I open the app to browse, which is a smooth experience (putting aside dark patterns).

Same for Twitter. Maybe it's intentional to move users into the app, where ads are more likely to be actually seen (e.g. many web users have ad blockers) and in-app purchases are frictionless.


Just FYI, for Reddit there's an amazing third-party client called Apollo on iOS, and I'm sure there are others too, same on Android.


Yeah I have it. I actually made the in app purchase to support the developer.

I ended up going back to the official app which I think is nice, but also feel nice about financially supporting an indie developer who does a nice third party client for a (generally) awesome community.


Really?

Take for instance news sites or blogs: when I read a news article, what I want is mostly text and a few images, which are static (there are some great interactive infographics, but those are a tiny minority).

And it's not like I, as a consumer, get anything extra. The text and images are still static, for the most part, except that now it makes 20 separate requests to load it all up. Hell, pages are usually even less dynamic now that comments have fallen out of favor.

What has increased is the amount of extra stuff that is mostly focused on selling me things, tracking every metric possible, and trying to figure out some clickbait I would click next for extra engagement.

But that's not content, content has largely stayed the same.


I don't believe the bloat of modern websites is because of ad-tech or dark patterns - all of these can still be done with an otherwise lightweight website. If anything, ads (excluding intentional resource usage such as crypto-miners) would be much lighter than what a typical SPA website like Reddit or Twitter is.

I believe what's happening is a broader trend in the industry of building engineering playgrounds and doing engineering purely for engineering's sake to benefit one's career - a positive feedback loop where not participating puts developers at a disadvantage as they won't gain over-engineering skills that companies now require (they require them because their developers or managers want them for the same reason).


Text-only NPR: https://text.npr.org/


I don't want my browser to be loading entire JS frameworks and trackers and whatever other crap just to read a bunch of text. That's absolutely nonsensical


I'm using two browsers: one with JS disabled (primary) and a vanilla one. When, and only WHEN, a page doesn't load in the non-JS browser (and if I really, really, reaaaally want that piece of content), then maybe I will use the vanilla browser...

Browsing with JS disabled is fast: pages load quickly, there are almost no trackers, and "old" or "text" versions of sites are still available... old.reddit, old.twitter or nitter instances...

Heck, even google has one...

To be honest, I just use the Dillo browser most of the time. Small, speedy and safer than most...

Edit: typo.


For the user's benefit, or for the dev's?


Or the advertisers/whatever assholes profit off “engagement”?


When you replace one with the other and notice a substantial change in time-required-for-task, I think you can make comparisons.


We’ve recently transitioned to Salesforce for a project. It’s remarkable how laggy the interface is. Removing a line from a quote takes three clicks and four seconds. The UI also doesn’t always refresh the items in a reasonable period of time, requiring a page reload.

Reloading the page is like 20MB as well. Great when you’re tethered to your phone.


I agree. There needs to be a substantial effort in web development to shed the bloat. Clean and small reduces issues with resources, security, and maintainability. The status quo is gross.


You're not alone in feeling this way

https://handmade.network/manifesto


>and bloated with JS crap?

CSS animations too, especially the ones that use infinite.


Nothing to do with site hosting. CSS animations don't eat the server's CPU; nor does JS bloat (other than bandwidth).


JS bloat can have significant server overhead when data is loaded dynamically. It’s generally more efficient to have one GET request that can be heavily optimized than a lot of tiny XMLHttpRequests that need to be parsed separately. That may flip around when someone spends a long time interacting with a SPA, but there is plenty of truly terrible code in the wild.


I've built embedded web interfaces serving up static pages that were precompressed with gzip and then used XHR to fill in dynamic content. I kept it under 100K for the uncached download (zero third party scripts). Everything worked well and was reasonably lightweight as long as you avoided framework bloat. Not having to compress anything on device helps a bit on energy usage although that wasn't a concern.


> It’s generally more efficient to have one GET request that can be heavily optimized than a lot of tiny XMLHttpRequest that need to be parsed separately.

Without context, this statement is misleading at best and downright false at worst. You’re right that splitting up a single request into multiple would incur a small performance penalty, but you also generally gain other advantages like more localized caching and the ability to conditionally skip requests. In the long run, those advantages may actually make your app significantly more efficient. But without discussing details like this, it’s pointless to make wild assumptions about performance.


The context was JS bloat, so we are specifically talking about the subset of poorly written sites. When it’s possible to shoot yourself in the foot many people will do so.

That said, if you ever actually profile things you will find the overhead per request is actually quite high. There is a clear tendency to request far too little data at a time.


Yes, I wanted to add that CSS animation (especially infinite animation) also eats the client's energy and CPU.


I like a good clean CSS animation! They can be very short and meaningful. Maybe not for daily driver UI, but sometimes I like them.

Infinite scrolling could be annoying with animations though, I grant you that.


My issue is with infinite animations, constantly moving/blinking stuff. They also do not have the same effect on different system configurations, so you might not notice anything on your dev machine while for users it makes the page unusable (and some are super distracting).


The new Lightning makes it such a pain to do my time cards. If I try to open my sub-projects to see how many hours are left while I have a tab open with my time card, it constantly errors out. I can’t be the only person who checks the hours left on sub-projects when entering my time.


Genuine question from a not-a-web-developer: Is this not mitigated by minimization tools like webpack?


Webpack is a front-end build toolchain, not a minifier. Usually Terser is used inside of Webpack to minify.


It is not a Raspberry Pi, but an Olimex Olinuxino A20 Lime 2 and a 30W solar panel: https://solar.lowtechmagazine.com/about.html#hardware

But yes, web hosting, especially for small/mid-traffic websites, has become very cheap (in power consumption and in money), especially for static websites where CDNs can be used to serve assets and static content from edge caches. A full x86 server or PC is often total overkill, and a little SBC is sufficient instead.

It is a dual-core CPU btw, to put the average CPU load into perspective. A very interesting project as a proof of concept, also for others to adopt in countries with unstable electricity supply and/or relatively high electricity costs :).


Did the author use a CDN? Because that's kind of cheating: you are just having another (free) service burn the electricity for you.

I assumed the solar server serves the sites directly, because of this. Maybe I was wrong.


True, in this case it is great, and somewhat mandatory, that it is fully self-contained :). However, dynamic content like the current power consumption and CPU load would still need to be served by the origin, or cached at the CDN with short timeouts only.

Using CDNs was more an idea/suggestion for others who take this project as an inspiration to run their own website, even with small hardware, unstable electricity supply and/or expensive/limited bandwidth, where a CDN can further reduce server load and traffic. Also, when speaking about the efficiency of the Internet in general: besides using small SBCs where sufficient, a CDN usually serves assets/content much more efficiently, given a network where a particular edge server is usually closer to the visitor than the origin server, and hardware that is specifically designed and run for that purpose and can be assumed to be highly loaded (less wasted power consumption). So as long as one trusts a CDN, or the content is not of any security or privacy concern, it is usually a reasonable choice to make use of it :).


There is no CDN, thankfully.


> a modern server uses 50-100 watts idle doing nothing

I'm really tired of hearing this. "Serverless because otherwise server doing nothing", "very small virtual machine because otherwise server doing nothing".

The server is not doing "nothing", it's waiting for incoming requests. It's as if you said "this cashier is doing nothing because there are no customers in the store".

When a server is loaded at capacity minus some margin, latencies go up, which may not always be acceptable. Also, not every web workload scales linearly or is cacheable, traffic patterns may not be that predictable, and some requests may generate higher loads.

Managing capacity is way more involved than just "this server is doing nothing".

Also, many of the technologies that supposedly reduce "idle time", such as "serverless", are usually incredibly wasteful: handling a single request may start a completely new environment and pull resources from across the globe.


If there are 100 servers but only one is needed to handle the user traffic, then 99% of those servers are considered to be "doing nothing" even if they are powered on and running software. At the end of the day, running that software is meaningless to the business and to customers.


I think the point was that "ready and waiting" is valuable to the end customer, even if it only makes a difference later when they are doing something. It's kind of like how firemen are valuable even when they are not getting calls, because they are available for low-latency response instead of being busy doing something else. The idea that this is just wasted computation is therefore somewhat disingenuous.


Oh, but it could be improved. Linux can cold boot in under 300 ms (easier if you control the BIOS and can tune it for speed, like coreboot can), faster if resuming from RAM. That should allow you to perform load balancing while powering off the extra capacity (using wake-on-LAN).

If load becomes too high for the SBC or gets close to capacity, wake the server and perform a handover once it's up. You can either hold the packets and use the SBC as a proxy, or change your router's config to point to the newly awakened server (alternatively, just exchange IP or MAC addresses). With a bit of magic to avoid closing existing connections (I believe home ISP routers should keep NAT connections open if a port forward is changed), it would work. Obviously it's even easier with a proper load balancer.

edit: actually even a router might be able to handle low loads

There seems to be surprisingly little interest in this (closest I found was https://github.com/kubernetes/kubernetes/issues/89271 ).

So yeah, it's still wasted power and computation in my opinion. "Ready and waiting" should not take 100W per server, but be closer to 0.1W (WoL), or lower if managing state from a central node. I guess it's not worth optimizing for most people, and big cloud probably does something similar already.

In a way, it's a bit like big.LITTLE with additional latency: small, power-efficient vs big, fast, inefficient for small loads.


Modern CPUs go to lower power states super quickly and draw almost nothing. The thing is, if the server is running many VMs, there's no way it's going to a low power state, even if some of them are doing nothing (others will be busy). You also have 10 jet engines blowing air at the front, which probably use more power than the CPU does when both are idle.


Totally agreed that 100 W is a waste at idle, but I don’t think that’s what the parent was talking about. My read was that the parent comment was responding to capacity planning tending towards reducing the number of servers (without building out low-latency cold boot infra) in the name of cost/energy savings, and that resulting in higher request latency. Anecdotally this seems plausible, but I don’t have metrics to back it up.


Still, if you replace 100 servers with 100 owners, all waiting for connections, with 1 server hosting 100 sites and 99 others in a low-power mode waiting for traffic, you save a lot of power and don't lose much.

Anyway, it would be waste even if you couldn't save it at all. "Waste" is simply a name for things we consume but don't actually use. All industries use that term.


We waste so many resources customizing each response to time and observer and it’s just nuts. Most people aren’t going to notice if a calculation is being debounced, amortizing it over hundreds of seconds or requests. Instant gratification is the most expensive thing by far. And debouncing has such a profound effect, similar to load shedding for traffic bursts, that it really should be front and center in the literature.

When I was young I worked on a project that was so inefficient that I was professionally embarrassed to have my name associated with it. So I moved heaven and earth to fix it. Gave myself an RSI before I learned to better automate some transformations.

Today I’m also working on another, lesser embarrassment, but I’m not working weekends and holidays on it anymore. I’m not a hero surrounded by villains, I’m an observant person drowning in a sea of apathetic faces.

The amount of hardware we have per user request should have gotten someone fired. Most of the people responsible are gone, but one is still here complecting anything that isn’t nailed down, and few others know enough to realize that the reason they don’t feel confident in the code is because someone intentionally made it that way, and you should not be looking up to those people. They are literally making you dumber.


Fwiw, the top of HN isn’t all that stressful. It mostly comes down to disk I/O and the efficiency of the language.

I’ve been at the top of HN for extended hours a couple of times on just a Heroku hobby dyno with no caching at all, but I had Cloudflare out front absorbing all the traffic that would have come from serving static assets.


Not to be contrary, but if your site is largely static and you're fronting it with Cloudflare, then you're essentially saying Cloudflare can handle the load.

Not much revelation there, right?


Cloudflare really doesn't make much of a difference for HN. The last front page traffic I saw (~a week ago?) was still at most a handful of QPS. Any nginx instance with default configuration serving static files from any modern computer should be able to handle that (given that your link is big enough).

Now if you reach the top of a large subreddit, or have a viral tweet with a link to you, that's a different order of magnitude. HN is just not that large.


Then why do people talk about the "HN hug of death"?


If you’ve got a bunch of images and your site is running uncached with a clunky CMS behind it, it will probably strain you.

The traffic is usually about 40k uniques over 24 hours. For reference, my uncached site is running on Elixir which is often better without caching.


It’s usually not images but rather the CMS making 100 read queries and 20 write queries per page load without any object caching that brings sites down. Even a slow uplink serving big images won’t bring an nginx server down; it’s all async.


Serving a lot of images at the same time will drive up disk I/O in many cases. It has probably gotten a lot better with NVMe though.


It’s typically one page that hits Reddit/HN/whatever with its fixed set of static images. It’s pretty much the ideal scenario for the kernel’s in-memory cache.


That’s a fair point for GP’s Heroku + Cloudflare deployment. The OP solar site is a better example of efficient static hosting as it is run on a lightweight server [1] and not fronted with Cloudflare. The reading at the bottom of the website indicates 2.70 W power usage at the moment and over two weeks uptime.

[1]: https://solar.lowtechmagazine.com/about.html#hardware


Mine isn’t static fwiw. Just running Elixir w/ Postgres. Elixir usually performs well enough uncached that you need to justify any caching needs.


So, specifically, why are you using Cloudflare? It kind of sounds like you're saying both that you do and don't need it.


It’s free, it's a solid DNS product, it gives me a CDN on top of that and stops a lot of pesky bots. Plus, moving the domain registration there keeps the renewal at the lowest possible price.

I can’t see any good reason not to use it.


I see. I think I was confused by your statement that kicked off this sub-thread:

>...but I had Cloudflare out front absorbing all the traffic that would have come from serving static assets.

I interpreted that to mean Cloudflare contributed significantly load-wise, then you indicated that your site wasn't largely static or cacheable (I think?).

Anyway, in addition to curiosity, we're also considering Cloudflare to offload from more costly AWS instances. So, just trying to suss out whether Cloudflare was or wasn't instrumental for you per your comments. Still not 100% sure, but thanks for the discussion.


I know people who host a static website on a home DSL connection with 5 Mbps upload, using Cloudflare. The CDN literally does all the work.


5Mbps when 56k would be more than enough.


The key here is that it's just a very simple website with very low computational requirements.


And yet it provides the same amount of information as other websites 10 or 100 times its weight.



> I think this also shows how inefficient modern website hosting is.

I suspect it's the opposite. Hosting a static site like this on a service designed for it is going to use less power than using dedicated hardware. A single server can host hundreds to thousands of static sites. The power use per site is going to be much lower.


Hey @dang, you wouldn't happen to know anything about how Hacker News is hosted, would you? Reading this has piqued my curiosity. If time / your position permits, of course.


From what I remember from previous posts by dang: it's living on a single dedicated server with some hosting company (I think you can look up the IP to figure out which). At least a while ago the code was also single-core; not sure if that is still the case. (In the past it used Cloudflare for caching, but hasn't in a few years.)


It's M5 Hosting


Last I read, it was hosted out of M5 Networks in San Diego. I used to live fairly close to their data center and had considered hosting there at some point, so when I read that Hacker News was hosted there it stuck with me.


IIRC it's just stored on someone's Dell Inspiron that they've got lying around in an office. Might be outdated info, but it's really nothing special if memory serves.


It's a static page... not everything is a static page in the web world.


a lot of the web would probably be better off if it were


I think so. WordPress that isn't cached to a frozen state on the backend is kind of silly in my eyes; the only exception would be comments, but you could hack around that by using Disqus or something, voila.


> you could hack around that by using Disqus

But that's not actually solving the problem, it's just offloading it. Not to mention that you're selling your community to yet another tracking company, and jacking up user page load time.


>> by Disqus or something

Depends on what you want; the low end is receiving incoming comments as emails and putting them semi-manually into an iframe on the unchanged static article page. I'm doing this myself at https://blog.mro.name/2019/05/wp-to-hugo-making-of/ and sacrificed commenter speed.

Others may easily be more sophisticated than the above brutalist solution. But still: comments in iframes align well with static sites IMO.

Edit: even better may be to phase in comments from HN or the fediverse or whatever you care about into an iframe. Be it copied or inline and re-styled.


I’m a technical lead for a SaaS community forum product and we handle billions of page views a month. Many of them don’t put any load on our servers though because guest pages are cached with a short duration and the cached page gets served up.

Today that’s cloudflare but in the past it was varnish.

Otherwise it is very dynamic. Different users have access to different content so we generally can’t cache full pages at the edge for authenticated users.


nice. You're working regularly on the website. Not all do.


I like this approach. Is there something specific you're using to automate this, at least in part? I feel like it could be made into a simple service with very limited JS to make it less "slow", unless you decide to manually approve each comment.


> automate

Not yet, but I'm thinking of a delayed shell script (cron every 15 min?). Client-side JS may not be of much use.

Or use a feedback webform via https://github.com/mro/form2xhtml - doesn't have to be email then.

Or monitor IMAP folders for accepted comments and cron them to the webserver a la https://codeberg.org/mro/flohmarkt.monte-ts.de/src/branch/ma... and online.sh


I agree, but comments could also be static. Have a service handle comment submission, regenerate the page a bit later. If displaying to the user is an issue, do it client side. Most websites use a moderation queue anyway.

One could even generate a dedicated HTML page for the comments, and include it in an iframe, although inlining them is probably more performant.


Forcing people to use disqus is a fantastic way to fuck over your userbase


Just POST to a PHP script that regenerates the cached HTML including comments.


Even comments come quite rarely in most cases, so the complete page with comments can be cached.


You don’t need disqus, you just invalidate the cache and regenerate it for only the next request, serving stale copies until the regen is complete. nginx or varnish can both do that out of the box.
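For what it's worth, a rough sketch of the nginx flavour of this (hypothetical cache zone and upstream names, for illustration only): expired pages are served stale while a single background request regenerates them.

    proxy_cache_path /var/cache/nginx keys_zone=pages:10m;

    server {
      location / {
        proxy_pass http://wordpress_backend;   # hypothetical upstream
        proxy_cache pages;
        proxy_cache_valid 200 5m;
        # hand out the stale copy while it is being refreshed
        proxy_cache_use_stale updating error timeout;
        proxy_cache_background_update on;
        proxy_cache_lock on;                   # only one regen request at a time
      }
    }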


Comments are far from the only exception to static page caches! There are often dynamic changes via plugins or functions.php. There are shortcodes and a number of other examples too.


We had the semi-static comments you are talking about. They were called guestbooks. But we gave up on them.


I see a fair amount of stories here where the endpoint appears to be a VPS, sometimes fronted by a CDN. It's hard to say exactly how efficient that is, since configurations vary, but it's likely pretty good. Sure, there's hungry servers under there, but the multi-tenancy spreads that out.


It's not all so simple. For one, this A20 is connected to a router which is connected to the grid. The connection used is a 100 Mb fiber which - thanks to small average page size and very little JS - is more than enough. The whole thing is in the owner's home. I have a similar setup, and I wouldn't say "This is a solar-powered website, which means it sometimes goes offline" but "This website is served from someone's home, which means it sometimes goes offline."


It's mostly static files; if it were a modern SPA with APIs and such, it would probably crash having to fetch the same data for what is quite literally a static site.


I think I've mentioned this before, but nothing about SPAs requires that level of bloat. My personal site (https://chadnauseam.com/) uses React and SPA-type features like preloading internal pages so they load instantly when you click a link, but almost all of it works fine with JS disabled. It used to get a perfect score on Lighthouse too, but it doesn't anymore :(


Wouldn't it be way more efficient to run it on some (virtualized) node in a datacenter that is optimized for it?


Yes but it only matters if you ignore all the constant factors and sunk costs that exist. For example, I already have an rpi and a solar panel. My crappy google home mini wastes more power than this doing absolutely nothing. It's kind of pointless to hyper-optimize efficiency of a little server like this given all the waste around it.


Embracing outages, while still working to reduce them, is revolutionary, I guess.

Just not serving everybody all the time.


I recall someone saying that being on HN front page resulted in a peak load of a few requests per second. I.e. absolutely nothing if you're just serving static content.


I mean, that's because a lot of websites these days are built on bloat on top of bloat. A periodically generated static HTML page can easily take HN load.


> this also shows how inefficient modern website hosting is

And this is in a world without distributed, locality-aware caching.


A topical thought, though: the UK NHS site is text and blue hyperlinks. It's still collapsing tonight because the prime minister just announced booster jabs available for everyone. (Guess what I'm spending my evening doing.) So you can't always win.


It'd be interesting to know what you actually are doing!

Thoughts and Prayers, etc


Oh, actually, just browsing to kill time while watching the open browser window at the side refresh on the NHS booking site, hoping it will work long enough to give me a booster jab appointment. It has been alternating between "you are in a queue, ten minutes to go" and "our site is overloaded, please try later" for about two hours now. Everyone else between the ages of 18-50 in the UK is basically doing the same thing, hammering the site. It's not quite as life-and-death exciting as I make it sound....


Ah, I thought you were working on the NHS IT systems!


https://twitter.com/AmandaPritchard/status/14703629354489856...

> Over 110,000 people booked their COVID-19 booster vaccine before 9am this morning.

I don't know if this is a huge amount of traffic, or just an unexpectedly large bump.

Good luck getting a booking!


Yeah not like the requirements have changed at all........


I wonder how much information really needs to go over the wires today.


ICANN should make a .solar extension with the caveat that you have to provide evidence yearly that all IPs mapped by the domain were running on solar power


That ... is actually an awesome idea! The condition for getting a domain name should be to have a public consumption and battery status page.


Which anybody could fake - literally pointless tld. What business could benefit from this?


It would be quite entertaining for one to watch the large solar manufacturers have to put their money where their mouth is for their entire public facing websites if they want a .solar domain


Encouraging innovation; don't be so negative.


Why the distinction between solar vs other renewable energy sources?


.waterwheel

.volcanopower


I'd buy a .volcanopower

.turbine would also be fun.


Maybe leverage "specialization + trade" and have some 'SCDN' (Solar Content Delivery Network).


Can't force me to disclose my subdomains. Maybe for the @ A record.


ICANN can do whatever they want, so they could do things differently for this extension. Wouldn't have to publicly disclose either -- just to the auditors at ICANN who would check yearly that all the IPs that have been mapped have a paper trail showing only solar or renewable energy sources, something of that nature. It's not as impossible as it sounds, especially considering the small number of entities that would do it. Analogous to filing your taxes every year, and we do that just fine.


Nah, it would be .sol


super cool idea, anybody have connects at ICANN?


This is a fascinating site beyond the power indicator. For example, a recent article discusses low-tech solar panels:

> ... To start with, ever since the 1950s, solar panels have been unfit for recycling, resulting in a waste stream that ends up in landfills. This waste stream will grow significantly during the coming years. Solar panels are discarded only after at least 25 to 30 years, and most have been installed only in recent years. By 2050, researchers expect that almost 80 million tonnes of solar panels will reach the end of their lives. That is a significant waste of resources and a danger to the environment – discarded solar PV panels contain toxic elements and present a fire hazard.

https://solar.lowtechmagazine.com/2021/10/how-to-build-a-low...


We have some solar panel recycling companies coming online here in Australia already ([1] for example, but there are at least three or four other companies I've heard of starting up), and I'm sure that will be the case elsewhere as well, so I don't know how accurate that is.

It's worth putting it in perspective too - huge amounts of waste are generated in power generation that solar is replacing, like coal power. One source (quoting research from IEA but the original document link is dead now) puts the amount of coal ash produced each year at 3.7 billion tons [2]. Here in Australia, it makes up more than one fifth of all waste produced in the country, and most of it is just dumped (in some places around the world it's used as an additive in concrete). But coal fly ash is full of highly toxic elements, including heavy metals.

1. https://reneweconomy.com.au/australias-first-solar-panel-rec... 2. https://www.envirojustice.org.au/wp-content/uploads/2019/07/...


There's another article that discusses a material-efficient way of making domes, arches, and vaulted ceilings: https://solar.lowtechmagazine.com/2008/11/tiles-vaults.html

I've been watching videos of the technique. It's almost as relaxing as watching a professional butcher dismantle a cow.


How is PV unfit for recycling?

It’s pure silicon, with something like Boron on it. Losslessly recycling might be challenge, but you could definitely reuse all the Silicon and remake another panel. Which toxic elements do you mean? What’s the fire hazard?


While solar cells might be almost pure silicon, the panels themselves use a lot more materials to work. For example, 2% of all global copper production was just for panels in 2018. The frames and the cells both use aluminum (actually the most abundant material overall). Silver, the most expensive component, has been pushed from about 400mg per panel in 2007 to about 100mg per panel today.[0]

Each solar panel contains about 14mg of lead, which means around 4.4k tons were used in the production of solar panels in 2018.[1] This is much smaller than, say, batteries (which solar panels drive a huge demand for), but is still significant considering lead has been found to leak into the environment from solar panels even under regular rainfall.[2]

In 2017, a study found that as much as 62% of the cadmium from cadmium telluride modules was leached out at room temperature, depending mostly on the acidity of the solutions.[3]

"Even only one day of leaching of two module pieces in 1 day of acid rain and neutral solution is sufficient to exceed the World Health Organization (WHO) drinking water limit: for Cd the threshold limit is 3 µg=L.33) Even under alkaline conditions (pH 11), it takes only three days to exceed this limit. After nearly one year, the Cd concentration cCd in acidic solutions is almost 20000 µg=L (62%)" [4]

[0] https://www.freeingenergy.com/do-we-have-enough-materials-to... [1] https://www.freeingenergy.com/are-solar-panels-really-full-o... [2] https://www.zmescience.com/science/solar-panels-lead-plants-... [3] https://iopscience.iop.org/article/10.7567/JJAP.56.08MD02/me... [4] https://sci-hub.se/https://iopscience.iop.org/article/10.756...


> In 2017, a study found that as much as 62% of the cadmium from cadmium telluride modules was leached out at room temperature, depending mostly on the acidity of the solutions.[3]

From the related article:

"The pieces are cut out from modules of the four major commercial photovoltaic technologies: crystalline and amorphous silicon, cadmium telluride as well as from copper indium gallium diselenide."

So they cut pieces from a sealed module? Seems unsurprising that leaching would occur when you cut pieces of a PV panel and expose it to acidic solution for an extended period. Functional installed solar panels are sealed behind a protective glass panel.


The aluminium frames are obviously directly recyclable. The silicon is also completely recyclable, so you are talking about fractional materials being "wasted" if you apply a zero-effort approach to recycling.

Applying a small amount of effort, given you'd need to re-smelt the silicon anyway: all of the materials you mentioned have different melting points (mostly lower than silicon), so you could extract them at the appropriate time as you re-smelted the entire product.

I don’t see how this is at all a waste problem, looks like a great recycling opportunity.


Recycling isn't free. Currently it costs 20-30x more to recycle a panel than to dump it in a landfill. Maybe the aluminum frame, the glass and polymer sheets, and the junction box containing the copper wiring are relatively straightforward to recycle, but it takes much more complicated (see: expensive) machines to get to the smaller parts like the intra-cell wiring and the silicon itself. And the silicon wafers aren't really recyclable. There's some specialized companies that can melt them down to reclaim the silicon cells and various metals within but this takes a lot of energy (and/or chemicals if it's a chemical treatment) and money.

The reason solar panels are hard to recycle isn't really because of their materials. The hardest part is separating all those materials, which all have their own unique recycling needs.

The EU requires solar recycling; Japan, India, and Australia have some minimal regulation around it; but in the US it's the wild west (except in Washington), which means there's almost no recycling infrastructure in the US.

All of this is the reason recycling a solar panel is so much more expensive than dumping it right now. Regulation will definitely help, but getting to the point where it's economically feasible to recycle them will require technology that hasn't actually been developed (yet, hopefully).

Some readings:

[0] https://grist.org/energy/solar-panels-are-starting-to-die-wh...

[1] https://news.energysage.com/recycling-solar-panels/

[2] https://www.researchgate.net/publication/342671383_Metal_dis...


Okay, so we’ve moved on from: Panels can’t be recycled. To: Panels in the US aren’t recycled enough because the economic conditions aren’t right.

Thats a much more nuanced point. Government regulation can help here for sure.


Umm, it's not just that. The most advanced company in the world at recovering precious materials from solar panels is a French company that uses a chemical treatment to recover the silver intra-cell wiring. Even they are not really economically feasible. The reason the EU recycles is because it has to, not because of the economics.

So no, it's not just solar panels in the US. The technology to make recycling solar panels economically feasible just doesn't currently exist.


I don’t understand your point at all. You are now down to “the intra-cell wiring is difficult”. Why can’t I melt the whole thing?

Melt the silicon, purify, and you’ll get the silver. What am I missing?


To put that in perspective, in 2019 we generated a total of 53.6 million metric tons of e-waste.[0] By 2050, we expect to be generating 6 million new metric tons of e-waste from solar panels alone.[1]

[0] https://ewastemonitor.info/gem-2020/ [1] https://grist.org/energy/solar-panels-are-starting-to-die-wh...


They haven't yet responded, but I wonder if it would make any difference in power consumption (CPU usage, to be more specific) if redbean[1][2] could be used instead of nginx.

[1] https://news.ycombinator.com/item?id=26271117

[2] https://redbean.dev


I doubt it for two reasons:

1. nginx has had many more years of performance tuning (it's 17 years old) than a project made in the last few years.

2. redbean is x86 only if I'm not mistaken (it's using x86-64 from αcτµαlly pδrταblε εxεcµταblε), whereas lowtechmagazine runs their server on an ARM CPU. I think switching to a lower powered x86 chip might be costly or still draw too much power, but I don't have as much experience in that regard.


The power consumption of that server is honestly great already. Sustained 2W, even with the HN hug of death?!


> This was caused by a software upgrade of the Linux kernel, which increased the average power use of the server from 1.19 to 1.49 watts

I wonder what change in the linux kernel caused the increased load. Someone out there is responsible for this crime!


This is an area where Apple is way ahead of everyone else. A code change that increases power usage 25% wouldn’t make it past CI at Apple.


How many more hardware configurations is Linux deployed on than MacOS? Would it be in the region of 3-4 orders of magnitude more hardware configurations?

How long has there been a power consumption focus in the linux kernel? The efforts to reduce power usage by the linux kernel pre-date PowerTOP’s first release and wiki tells me that was 2006.

Is Apple way ahead or is it just less popular?


> Is Apple way ahead or is it just less popular?

They're definitely solving a narrower problem but they do appear to have it solved. Is there even a subset of hardware configurations for which this is the case with Linux? (if there is I would love to know about it)


> Is there even a subset of hardware configurations for which this is the case with Linux?

That's not how cross-platform, cross-application software works. Linux is used for everything. Every other change for power efficiency will get balanced out by another change for raw performance.


Well sure but there are vendors that advertise Linux computers.


That's incredible. How do they measure that? Does their CI somehow measure power impact on a set of real devices?


Apple is designing much of the silicon. Measuring power usage is a core competency.


On the contrary. Linux runs on any Android or FLOSS phone, almost every car and plane, TV, router, and plenty of industrial devices.

The amount of testing that Linux goes through is staggering.

This has to be a specific bug on that platform in its specific configuration.


I find that hard to believe considering everything else that makes it past not only Apple's CI, but their automatic and manual QA testing, their beta process, and multiple public releases. I know they have to do something in their CI, but I'd like to see any evidence of your claim.


[citation needed]


Sorry, what does CI stand for?


Continuous integration -- Software development practice based on frequent submission of granular changes

https://en.wikipedia.org/wiki/Continuous_integration

... basically check every small commit, ideally end-to-end.


Thanks!


Could be a scheduler change. I think the default scheduler doesn't try too hard to be power efficient, on the assumption that servers/desktops generally aren't optimizing for a single watt or less.

On Android, the scheduler can have a decent impact on battery life.


Perhaps Spectre/Meltdown mitigations?


My own battery + solar powered blog [0] is 100% inspired by lowtechmagazine. I am based in the Netherlands, and due to my suboptimal location I have to cheat in the winter by recharging from mains about weekly. I still do get some sun, but nowhere near enough to get through the day, let alone the night.

[0]: https://louwrentius.com/this-blog-is-now-running-on-solar-po...

And lead acid is also terrible for solar applications because, no matter the capacity, recharging is very slow. Even if solar could recharge the battery, the slow absorption rate prevents it from doing so.

LiFePO4 or a similar chemistry is probably the better choice for a project like this.


> And lead acid is also terrible for solar applications because, no matter the capacity, recharging is very slow. Even if solar could recharge the battery, the slow absorption rate prevents it from doing so.

This doesn't make sense to me. Surely you can reach a sufficient rate by adding more lead-acid cells in parallel? You're kind of forced to do this anyways since they don't like being discharged below 50% of their actual capacity. So you end up building in a shitload of excess capacity in parallel, in the process attaining high aggregate discharge/charge rates.

It's just annoying because you waste a lot of physical space on underutilized batteries. But for a stationary system, it's not such a big deal, assuming you're not trying to fit it into a studio apartment. You end up with a dedicated battery shed or cellar, at least they're cheap.


I actually run quite a few of them in parallel, but that doesn't solve the problem:

As the other person stated: charging lead acid is time constrained. And that means that you can't fully charge the battery within the time period when you have sun.

Lead acid deteriorates quickly if left (partially) discharged. This is why lead acid works so well with cars (almost always fully charged at all times).

Depleted lead acid (50% charge) needs to be recharged within 24 hours or serious damage will occur, accumulating over time. A week of bad weather may thus be hard on battery longevity.

Some more info:

[x]: https://louwrentius.com/a-practical-understanding-of-lead-ac...


Can't you just alternate between sets of cells with sufficient excess capacity then? They don't all need to be in lockstep at the same phase of their charge:discharge cycles if it takes so long.

Perhaps that becomes cost prohibitive even with the low cost of lead-acid, I've never attempted this. It just appears obvious from a high level that excess capacity can overcome all these limitations.


You want to top off lead acid with constant voltage to prevent gunking it up. It is time-bound, not power-bound.

https://batteryuniversity.com/article/bu-403-charging-lead-a...


This is exactly the problem.


Lead acid has the advantage of being easy to buy and needing no balancer. Also, it doesn't get damaged by overcharging, and it handles full depletion way better too. Also lower initial cost.


Lithium batteries don't need a balancer either. I've been running a 3.6kWh pack of 12 cells for almost 5 years with no BMS or balancer.

I bottom balanced all the cells before building the pack and setup my chargers to only go to about 98%. This leaves more than enough leeway to avoid problems if one or more cells drift.

I've rebalanced a couple of cells once because they were off by a few hundredths of a volt. They're probably about due for another minor rebalance.

That said, this is in controlled conditions with regular use and monitoring. The cells are LiFePO4. I wouldn't run other chemistries without a BMS. Next pack I build will be much larger and have BMSes for safety and so I don't have to think about it.


Hmm ok. My experience with Lithium batteries is with EVs. Do you have a low discharge rate on your pack? Because the EV battery pack needed balancing like every charge. Otherwise it has to be a quality difference.


Most EVs don't use LiFePO4 although some are moving to it. There are also hundreds or thousands of small cells in most EV packs. With that volume you'll get more variation between the best and worst cells. You also need it to be foolproof and require no maintenance.

My average discharge rate is much lower than 1C which does help. The max is around 0.9C but that is pretty uncommon.


The section on alternative energy storage (e.g. compressed air) was also really neat.


As another commenter pointed out, Lead-Acid batteries are terrible for this. They are fine for backup power supplies that you don't expect to actually use more than a couple of times a year, but discharging them too much will completely kill them, and even if you keep a margin and discharge them to only 30%, the number of cycles is quite limited (1500 to 3000 cycles quoted in [1] seems a bit optimistic, probably depends on the specific battery).

Lithium-based batteries such as LFP are becoming very affordable [2] and can handle many more cycles. They can also handle more current variation and are more efficient.

[1] https://offgridtech.org/tech-updates-online/2021/lithium-iro...

[2] https://news.ycombinator.com/item?id=28943741


Technology Connections[0] did an episode where he talked through some of the science behind why lead acid batteries work poorly for the task. He used a marine deep cycle battery as a compromise solution.

[0] https://www.youtube.com/watch?v=1q4dUt1yK0g


The problem with LFP batteries is that they really need to be kept reasonably close to room temperature in order to be efficient and safe to charge. This is fine if you're using them for some kind of off-grid home that needs to be heated anyway, or for intermittent high-power applications like cars that can cope with the energy required for heating and cooling them, but it won't really work in this setup.


Thanks for pointing this out, I didn't have it in mind.

You probably need some insulation, and some sort of heating, although the computing device could probably provide the heat. Handling summer/winter cycles without human intervention might be a bit complex, but intervening twice a year doesn't seem too much hassle.

> it won't really work in this setup.

I don't recall reading that it was placed outdoors.


When the server is down you can get the offline version of it. AKA the Printed Website.

https://www.lowtechmagazine.com/2021/12/printed-website-thir...


I bought both volumes last year and made a point of only reading them outside by sunlight.

It's great stuff, very thought provoking. One of the many points that has really stuck with me is how the invention of the typewriter allowed us to write five times faster... and as a result we now spend most of our time typing, somehow.


Sounds similar to the Jevons paradox [1], where consumption rises with increasing energy efficiency.

Also, as transportation gets faster, commutes don't get shorter, people just move farther and farther from their workplace.

I'm sure there are lots of other parallels too.

[1] https://en.wikipedia.org/wiki/Jevons_paradox


Semi-related —- does anyone have recommendations for where to find small hobbyist solar panels and/or kits to experiment with for small devices like a Pi?

Also curious about how a setup like this compares with using a traditional electricity source in terms of cost per hr of running the site off of solar with batteries. What is the break even point against utility costs over there(if there is one)? And are there any concerns about the sustainability of a setup like this if it’s adopted on a larger scale?


For small panels, check sites already suggested by others. If you want to experiment with larger panels, craigslist or similar have good deals. Used panels or new leftovers from a pallet.

I've paid $130ish each for 4 panels 280w-315w. And since it's local, you don't get dinged for shipping.


For hobby/experiment use, decommissioned panels from upgraded farms; sometimes you can get two-year-old panels that were swapped for more efficient ones for a fraction of the original price.

For a setup with batteries there is (depending slightly on local electricity cost) no long-term break-even: the battery depreciation per discharge cycle costs more than electricity from the utility.


I'd look for solar panels intended for camping.

Probably eBay or Amazon is the easiest place to look for cheap gear. Hobbyist electronic stores might be a good source too depending on the country (here in Australia Jaycar has some fairly good value panels and PWM solar charge controllers).


Unless you don't have the space there's no reason to go smaller than a 100W panel. You can find them used for $50 or less. For smaller panels the cost per watt goes up enough that it ends up costing the same for less output.


If you're in the UK, Pimoroni (https://shop.pimoroni.com/?q=solar) has some good options as well.



Sparkfun is a great go-to place for electronic stuff like this. However, I'm sure there are more specialized solar panel sources.

https://sparkfun.com


Interesting and non-obvious fact about solar panels...

The optimal angle for generating as much power as possible from the panels is very different to the optimal angle for powering something year round.

If you want to power something year round, it's the power in winter you need to maximize - so you angle the panel very steep to collect winter sun. In the summer, this angle is far from optimal, but due to more hours of sunlight there will still be plenty to power whatever device it is you want to be always powered.
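
A rough sketch of the geometry behind that, looking only at the sun's elevation at solar noon and ignoring atmosphere and diffuse light (a simplification, not a sizing tool):

```python
def noon_tilt_from_horizontal(latitude_deg: float, declination_deg: float) -> float:
    """Panel tilt (from horizontal) that faces the noon sun head-on."""
    # Solar elevation at solar noon is roughly 90° - latitude + declination,
    # so a panel perpendicular to it is tilted by latitude - declination.
    return latitude_deg - declination_deg

lat = 41.4  # Barcelona, roughly
print("Summer solstice tilt:", noon_tilt_from_horizontal(lat, +23.44))  # ~18°, fairly flat
print("Winter solstice tilt:", noon_tilt_from_horizontal(lat, -23.44))  # ~65°, much steeper
```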


> This connection is a 100mbit consumer fiber connection with a static IP-adress.

Yikes! And here I am in Palo Alto, in the middle of Silicon Valley, unable to get any fiber because of AT&T.


It's hard to get even 100Mbit in Palo Alto? Must be hard for everyone working remotely these days....


My AT&T DSL is blasting at 3 MBit/s down and 0.5 MBit/s up!


And I was complaining about 15/1 from my DSL. Ironically I work for the company that won't lay fiber fast enough, yet asks me to work remotely.


F


It's not because of AT&T, it's because of government regulation.


Riiiiight.

I'm sure it has nothing to do with a complete lack of competition (cf. how in communities where Google Fiber showed up, the incumbents were suddenly quite capable of getting gigabit fiber to households, and for a price that was competitive with Google Fiber, and a fraction of the price they were charging for inferior service before Google Fiber's arrival).

I'm sure it also has nothing to do with the billions upon billions of subsidies AT&T and Verizon have received to build out broadband and make it available everywhere, which they pocketed and then didn't deliver on. Cf. https://www.huffpost.com/entry/the-book-of-broken-promis_b_5...

If you really want to blame it on government, do it the right way: the government has not been keeping AT&T and the other giant telcos accountable for effectively stealing all those subsidies, and it has not enforced competition at the local level. Regulatory capture, and all that.


Lack of competition is due to regulatory capture. Local governments often disallow competition in the ISP space.


And the regulations are captured by the companies who much prefer no competition. So we’re screwed by BOTH the corporations and the government.

We need new laws, and smaller companies.


> We need new laws

We need fewer laws.


We can have both new and fewer. The current laws allowing for massive consolidation and monopoly are strangling the competitive nature of capitalism, and destroying democracy.


AT&T offers fiber on the other side of the road. How is this government regulation?


Just as a thought, that might be pretty cheap to have extended to your house.

In Germany, Telekom (the formerly-government-owned provider and largest player afaik) offers to dig fiber for you for a "nice" price. But if it's literally about 5 meters it might actually be worth it.


If CA laws weren't so restrictive there would be competition. Without competition there is nothing compelling AT&T to offer better service. The barrier to entry for ISPs is so high that it's essentially impossible in many areas.


*located in Barcelona*

I hugely admire this website and the tech/philosophy behind it. So much so that I decided to build one myself. But here's the problem: I live in Yorkshire (UK) where good weather usually means a thinner layer of cloud cover. I tried, I really did, but I'm not about to move to Spain :) Long may this project continue (while I dream of more sun).


Right?

> This is a forecast for the coming days, updated daily:
>
> TODAY: Clear throughout the day.
>
> TOMORROW: Clear throughout the day.
>
> DAY AFTER TOMORROW: Clear throughout the day.

I get 62 hours of sunshine on average in the entire month of December; it looks like you're a couple degrees north of me and get slightly fewer (assuming our weather departments do their math similarly). This server is probably going to see more sun in the next three days than we'll see until the New Year.

I'm not about to move to Spain, but definitely thinking about it (or Cali, or Arizona, or the south of France, or...anywhere sunny, really) in a few years when my kids move out of the house.


It is very gray today, maybe we should do wind power up ere'


just use a bigger solar panel and battery


As someone located in the PNW, I imagine Barcelona must have perfect weather for this sort of thing? Great concept


We've had a few cloudy days these weeks but today I've been playing volleyball on the beach and almost got sunburnt :)


I hope tons of web developers see this project and take note. How much more enjoyable could the web be if it moved in this direction, cutting out the bloat in favor of dense, useful content.

I also find the whole site a trove of well-researched information and well-argued points of view on sustainability. Truly refreshing to find something that does the math instead of regurgitating the latest hype and buzzwords.

One thing I come away with, and have realized more and more lately, is just how much we are getting in our own way and preventing ourselves from getting on a sustainable path, continuously chasing "innovation" when so much of the technology we'd need has been around for a long time. I'm not too optimistic we can break our growth addiction before things get (much more) ugly.


I really like the concept of this website: a clutter-free site running on a solar-powered server. The UI also makes me enjoy browsing and reading page after page. I don't even need Firefox's reader view, something I almost always use when reading a blog or news site. Granted, something must be sacrificed by choosing a solar-powered server, like uptime, and they seem to deliberately avoid full-color images, maybe to save the processing power of their server. But, IMO, it doesn't diminish the enjoyment of reading the website. Good job!!


How would I even host a website at home? 20 years ago I knew how to get a fixed IP from my local ISP but I don't know how to do that anymore.


You can use dynamic DNS if the IP address you're getting is a public one.


There are also services from which you can get a reserved IP with all ports open, and then you WireGuard-tunnel it to your server.


Reverse proxy with a dynamic DNS service (DuckDNS, for example).
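
For example, a hedged sketch of a dynamic-DNS updater you could run from cron, assuming a DuckDNS subdomain and token (check their docs for the exact parameters before relying on this):

```python
import urllib.request

DOMAIN = "myhomeserver"          # hypothetical DuckDNS subdomain
TOKEN = "your-duckdns-token"     # hypothetical token

def update_duckdns() -> str:
    # Leaving ip= empty asks DuckDNS to use the address the request came from.
    url = f"https://www.duckdns.org/update?domains={DOMAIN}&token={TOKEN}&ip="
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode()  # "OK" on success, "KO" otherwise

if __name__ == "__main__":
    print(update_duckdns())
```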


A recent post on HN mentioned use of a Cloudflare tunnel. Apparently that allows for self-hosting behind a router with no incoming ports needing to be opened.

https://eevans.co/blog/garage/


Give them a call? Or switch ISP if yours doesn't allow it. Otherwise you need a relay service, at which point you might as well host with them directly.


I'm in a weird situation, hence my confusion. I live next door to an Xfinity hotspot. I signed up with Xfinity so they sent me a modem. I couldn't get it to work, but I found I could just use the hotspot. It works great but it serves potentially thousands of users so it's not personal to me.


That's an interesting situation indeed! Hosting a service on there might be rather tricky but if you do get support to punch through the CGNAT on it (I presume they use that) it will be even funnier :D


https://solar.lowtechmagazine.com/2020/01/how-sustainable-is...

> A website that goes off-line in evening could be an interesting option for a local online publication with low anticipated traffic after midnight. However, since Low-tech Magazine’s readership is almost equally divided between Europe and the USA this is not an attractive option. If the website goes down every night, our American readers could only access it during the morning.

What about putting a mirror on each continent, and adjusting DNS to pick the one that has solar power at the moment? It could even work at higher levels of granularity.
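
A sketch of how a small controller script might pick the mirror to publish in DNS, assuming each mirror exposed a hypothetical battery-status endpoint; the actual record update would depend on your DNS provider's API:

```python
import urllib.request

# Hypothetical mirrors (documentation/test addresses) and a made-up /battery.txt
# endpoint that returns the battery percentage as plain text.
MIRRORS = {
    "barcelona": "203.0.113.10",
    "boston": "203.0.113.20",
}

def battery_level(ip: str) -> float:
    with urllib.request.urlopen(f"http://{ip}/battery.txt", timeout=5) as resp:
        return float(resp.read())

def pick_mirror() -> str:
    """Return the IP of the mirror with the most charge left."""
    return max(MIRRORS.values(), key=battery_level)

if __name__ == "__main__":
    print("Point the A record at:", pick_mirror())
```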


This is such an amazingly simple thing to do that I’m surprised it shows up so consistently on HN.

A raspberry pi or similar, a solar panel, a charge controller, and a battery.

Running everything off it (router, modem) would be cooler, since that’s likely powered from the home and connected via WiFi.



I love this experiment. The one comment I have is that when you provide “CPU%” but you’re using the special ”per core” unit that can go above 100%, you must list the number of cores. 128% of what?


CPU load is not CPU usage. Load is basically the number of processes running or waiting to be scheduled. If load equals the CPU count, you can't schedule more processes; your system is running at maximum capacity (although it might also mean the cores are waiting on some IO).

Eg.: https://estl.tech/cpu-usage-vs-load-ecca22287b21
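
A quick way to see the difference on a Linux/Unix box (a sketch; os.getloadavg() is not available on Windows):

```python
import os

# Load average ≈ average number of tasks running or waiting to run (on Linux,
# also tasks stuck in uninterruptible I/O), so it is not a percentage of one CPU.
one_min, five_min, fifteen_min = os.getloadavg()
cores = os.cpu_count()

print(f"1-minute load: {one_min:.2f} across {cores} cores "
      f"(≈ {100 * one_min / cores:.0f}% of total capacity)")
```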


Marginally related*

If anybody is interested about their energy consumption while browsing they can use this Chrome Extension that I've created: https://chrome.google.com/webstore/detail/globemallow/jibhio...


A lead-acid battery doesn't seem like a good choice for this application. Prolonged discharge states cause the lead sulfate to crystallize which is irreversible, making it impossible to recharge the battery.

A lithium chemistry might be better, since that prefers partially discharged states to being fully charged.


If it's a deep discharge lead acid and it is maintained it can last 5-6 years.


For posterity ---

@ 5:00 CET (local time zone of the site) it's @ 36%. Sunrise for Barcelona is @ 8:08 am.


@ 08:44 CET it is at 28%. - Seems to have survived the night.

edit

@ 08:50 CET and it's up to 30%. She's in the clear.


A suggestion: in bad weather, the server could turn off during the times when there are fewer visitors and less sunshine. That could probably help it keep serving for a few more days (of course, visitors would have to know the best time to visit).


> Uptime: 2 weeks, 2 days, 11 hours, 25 minutes

This is the most impressive stat of the website!


I don't know if it uses the forecast for much other than printing it on the page, but it has until the end of 2022 to change the API it uses: Dark Sky is shutting down after being bought by Apple.


Looking at the response headers, I couldn't find any cache-related header. I hope the site is using caching to prevent extra unnecessary load. Brilliant idea, by the way.
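
If anyone wants to check for themselves, something like this prints whatever cache headers come back (assuming the site answers HEAD requests):

```python
import urllib.request

req = urllib.request.Request("https://solar.lowtechmagazine.com/", method="HEAD")
with urllib.request.urlopen(req) as resp:
    for header in ("Cache-Control", "Expires", "ETag", "Last-Modified"):
        print(f"{header}: {resp.headers.get(header) or '(none)'}")
```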


It would be interesting if similar websites were set up in different countries. That would make for an interesting comparison between countries.


How hard do we have to hit it to drain the battery ;-)


Ironic that the content is solely about the server itself. It's a bit like a blog about the libraries and tools used to make the blog.


More like a blog post about it; Low-tech Magazine has an assortment of other articles.


I have to disagree. While the common thread is consumption and sustainability I found the article about moving away from new laptops to be interesting.


ironic? It's the literal raison d'être of the whole website.

It would be remotely similar to what you implied if the website was librariesandtoolstomakeblogs.com


Thanks to everyone here, it's now down to 66%!


52% when I looked at it. It's like we're all dragging the battery down as we're hugging it.


I wonder about the carbon footprint difference between connecting this server to the power grid vs. using battery and solar.


This site loaded instantly for me.. I also get 210ms RTT to Barcelona from Australia. That's a first


Reminds me of that Netflix show Clickbait. The more views the more it kills the site.


These are neat, reminds me of those live robots you can control by stream chat


I love this. I want to see more things like this hitting HN.


This is cool! Now I know what to do with some old PIs! :D


Nothing new here.

AWS US-EAST-1 has been doing this for years.


We need this but for storage space usage. I can't even imagine the insane amounts of storage used to host useless livestreams of gamers gaming.

Today's profit-oriented economic system causes alienation and loneliness for many people, which leads to various coping mechanisms (such as watching other people playing games).

The streams are only valuable as content because the working class is heavily alienated and therefore our attention can be bought by capitalists as 'audience commodity': https://youtu.be/CK319sIWwbA?t=519


Very cool :D


It’s a like a deja vu, this website. It’s been linked repeatedly, for years, having the same kind of discussion on HN over and over


I had a similar deja vu because just a few days ago I posted a comment asking about similar tech:

> Other than raspberry pi, does anyone know of an even lower powered board which can run a very simply web server (only needs to return a single html file)? I have an idea for a fun hobby project where I want to connect my echo bike (for cardio) to the board which charges it everyday and returns an html with how much I charged it and daily cardio stats. Basically, if I don’t do cardio, then the board won’t be charged enough to keep the site up, so that gives me incentive to do it regularly.

https://news.ycombinator.com/item?id=29409339


I ran a webserver of sorts on an STM32F767, a python one of all things.

200mA looks like about par for the course, ethernet and all.


What's the supply voltage? Do you have a breakdown of which parts of the board consume how much? I feel like one should be able to use way less than that, but there are plenty of ways to waste power on a board that isn't designed to be battery powered.

One project I've been thinking about recently (if I only I could find the time..) is to build a tiny server around a microcontroller coupled to a small NiMH pack (just standard AA/AAA rechargeable cells) and a small solar panel, running by the (not particularly well lit) window of my home office.


3.3V, that's pretty standard nowadays.

I don't have a breakdown, it was cobbled together from whatever was around, an STM32 devboard, some regulators, some half-dead li-fe-pos, almost useless BMS from ali, this sort of stuff. Obviously that ate significantly less than an RPi, but I can't say how much, don't actually remember since that wasn't the problem.

Another take was a weather station, basically an ESP on a ~2500mAh 18650 li-ion.

That lived for like a week on a charge, stuff being fetched from it every minute over wifi.

I would recommend against using NiMH for that, just get some 18650s and work them between 80-40% charge, they'll last forever.


It’s at 76% now. In 53 minutes since the post it has gone down 4%. So say roughly 5% per hour as a back of the envelop calculation. That’s roughly 20 hours on a full charged cycle.

It would be good to know if there is a fast charge capability of the battery and how long would that be. An hour of full sun a day? Or two hours of partial sun?


It’s on the HN front page, I don’t think I’d extrapolate anything from its current power usage.


Ha. We better read it while we can!


Or, in keeping with the spirit of the post: think long-term and give the battery a chance to rest, then read it later :)


Maybe this should have been posted on a sunny summer's day.


Quick, everyone in!


They are using a 168Wh battery and a 30W panel, so a naive first approximation would be 168Wh / 30W = 5.6h to go from 0% to 100% capacity. In reality:

* The panel won't be pointed squarely at the sun most of the day

* The sun won't be high-noon-bright most of the day

* Lead-acid is about 80% efficient (perhaps 90% at low state-of-charge, dropping to 60% at high SOC)

* Lead-acid chemistry is sluggish. It can charge pretty quickly to 80% (a 168Wh battery will easily accept the full 30W up to that point), but the acceptance rate decays asymptotically toward zero as the battery approaches 100% capacity. No matter how powerful the charger, going from 80% to ~98% takes about 6-8 hours.

* 0% state of charge is about 10.5V, but lead-acid cycle life falls dramatically with deeper discharges. To reduce wear on the battery, they designate 12V as 0% charge and shut down at that point. This means they're only using 50% of the 168Wh capacity, but it also means they hit the sluggish part of the charge cycle sooner.

Summing up the above, with perfect-noon-sun conditions generating the full 30W, the charge curve will be roughly: 3h from 50% to 80% SOC (0%-60% indicated); 6h from 80% to 98% SOC (60%-100% indicated); a few more hours for the last 2%.

Realistically it will get the majority of the charge by early afternoon, and the slow trip from 80%-98% will complete sometime in the early evening. At that point the panel will only be producing a few watts, but it's still enough to finish trickle charging to 100%.
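
A back-of-the-envelope sketch of the same numbers, using the figures and rough efficiency assumptions from the comment above (an estimate, not a measurement):

```python
# Figures from the comment above; all of these are rough assumptions.
battery_wh = 168          # nominal capacity
usable_fraction = 0.5     # only the top 50% is used (12 V is treated as "0%")
panel_w = 30              # nameplate panel power
charge_efficiency = 0.8   # rough round-trip figure for lead-acid

usable_wh = battery_wh * usable_fraction
bulk_hours = usable_wh / (panel_w * charge_efficiency)
print(f"Usable capacity: {usable_wh:.0f} Wh")
print(f"Naive recharge time at full panel output: {bulk_hours:.1f} h")
# In practice only the bulk phase (up to ~80% SOC) charges at this rate; the
# absorption phase above that takes several extra hours regardless of panel size.
```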


One can only blame HN traffic for its sudden decrease.


Well the sun has probably set in Barcelona too, this won't help either.


Indeed, there is a row in the "Power demand" table saying "Solar panel active: no".


I clicked the link, curious what the power reading would be now. Now I feel guilty.


A 30W panel would need about 6 full hours of sun to fully charge that battery from 0-100. Lead acid can charge fairly quickly and can suck up that 30W input pretty easily.


Avg. CPU load is 99.5%



