SETI@home is in hibernation (setiathome.berkeley.edu)
515 points by anotherhue on March 18, 2023 | 341 comments



I don't specifically miss SETI@home as much as I miss the spirit of the internet back then, the "what will they think of next!?" The idea that all of us together could crack RSA keys wasn't just a thought exercise, people signed up and did it. Now computers are so complex and task heavy and fraught with problems it seems like more hassle than it's worth. I liked when wondering "why is my machine so slow", that "top" or "ps auxww" had a chance of fitting on a screen or two. Now I wouldn't look forward to unleashing an eternal, 24-7, intensive task with a connection to the internet, even though I have a lot more bandwidth.


The last time I ran SETI@home (and Folding@home), I had a fixed desktop machine that I always left on, and I think it was old enough that the fan speeds weren't variable, so it was just constant background white noise.

In that context, running those apps in the background all day and all night was fine, and if they ever noticeably interfered with performance while I was using the machine, pausing them for a bit was easy to do.

I haven't had a desktop computer in at least 14 years. When I'm not using my laptop, I close it and it goes to sleep. Running CPU-intensive apps on my laptop spools the fans up to their loudest, and makes the bottom of its chassis uncomfortably warm. The CPU cores will sometimes hit thermal throttling too. I just don't want to deal with that sort of thing all day.

Then again, I do have a couple Raspberry Pis, an old Mac Mini (running Linux), and a CM4-based NAS in various places in the house running 24/7. I could probably run an @home-type app on one or more of those. But for whatever reason, doing so just never really occurred to me.


I remember when power wasn’t 51c/kWh and a powerful PC didn’t suck down >1000w. I know we get way more perf/watt these days, but I don’t feel as a user like that translates into a notably better experience. For most things for $work, Slack, Spotify, GMail, Google Docs, and VSCode feel a lot less responsive than irssi, Winamp, Sylpheed, OpenOffice, and $ide_of_choice did a decade or two ago.

What Dr. Lisa Su giveth, Satya taketh away I suppose.


We also used to write code in low-level languages, and system designs were simpler compared to our current approach of writing a lot of stuff in JS and using Electron everywhere, having a huge dependency tree and in general - optimizing for fast development rather than efficient runtime.

Having said that, optimizing for perf is rarely worth it because people don't usually care, and when it is - you can often make the biggest difference by figuring out why some step requires O(n^3) instead of O(n) like it should and fixing that, rather than rewriting the entire stack to get some 10-20% improvement...
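
To illustrate the kind of fix meant here, a toy sketch in Python (hypothetical data shapes): an accidental O(n^3) from needlessly nested loops, versus the O(n) version that just precomputes dict indexes instead of rewriting anything.

    def slow_report(orders, customers, products):
        # Accidentally O(n^3): three nested loops where only one is needed.
        lines = []
        for o in orders:
            for c in customers:
                for p in products:
                    if c["id"] == o["customer_id"] and p["id"] == o["product_id"]:
                        lines.append(f'{c["name"]} bought {p["name"]}')
        return lines

    def fast_report(orders, customers, products):
        # O(n): build lookup dicts once, then do constant-time access per order.
        by_customer = {c["id"]: c for c in customers}
        by_product = {p["id"]: p for p in products}
        return [f'{by_customer[o["customer_id"]]["name"]} bought {by_product[o["product_id"]]["name"]}'
                for o in orders]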


Someone should do an accurate figure of compute power usage, including things such as cars, or media centres, and then try to analyse how the existing trend of "wasteful development" affects this.

I bet all the upset over bitcoin power usage is a tiny thing compared to (for example) the whole node ecosystem.

Even something such as gmail is massively overengineered. Heck, there are very few improvements over the interface a decade ago, yet I bet it used 5% of the power it does now.

Yet think of how much gmail is used.

And it's not just raw compute. 3d on a page for no reason, massive js binaries for no reason eg ram + bandwidth.

A modern phone could last weeks as a browsing device, if looking at static html.

But all of this means more development skill, less use of fluff and junk, care for size of page loads, and on and on.

Put another way, people concerned about power usage should realise that using node means burning more coal/NG. Literally.

Because that's where the excess power comes from.


Hit the nail on the head. Inefficiency, which has reached gargantuan proportions, and then blown past that, doesn't just mean page loads so slow that loading a news article takes upwards of 20s on a pocket supercomputer (whereas it took barely a second or two on a Pentium II on a 56kbps connection), all the while text and buttons jump around as assets are loaded. It also means very real waste of energy and battery degradation.


I mostly agree with you, however:

> A modern phone could last weeks as a browsing device, if looking at static html

That's what I do. Weeks is a bit of a stretch; maybe one week if screen time is limited. That said, Discord (the website) drains my laptop battery almost as fast as gaming does. They do spawn a separate WebGL context for each animated emoticon. Crazy, crazy thing.

> Put another way, people concerned about power usage should realise that using node means burning more coal/NG. Literally.

Sure. At the same time, though, a single car trip to the movies burns orders of magnitude more energy than all your gadgets running JS for a year.


Shit like Slack and other Electron abominations would gain far more than a 10-20% improvement going native. We are talking about a simple chat application, glorified IRC that loads images and previews, and this somehow takes more memory and CPU than 100 times the total resources I had in 1995.

These systems are so abominable, and people have just gotten used to it; hell, many users have never even tried anything else. But saying 10-20% is just ridiculous.


Man I want a native TUI client for Teams.



Make a windows XP VM and install some of those old versions and see how fast it runs on modern hardware (even while emulating the entire PC to run it on!). Many modern devs chose ease of development over end user efficiency/performance.


51c / kWh?? I pay 20 in Spain even now with the energy crisis.

I think back in those days the price was around 10c

I know this is euro cents, but the dollar exchange rate isn't far enough from parity to make a huge difference.


> a powerful PC didn’t suck down >1000w

My Ryzen Threadripper 2950X, which isn't a top-of-the-line system at this point, but is still a 16 core beast-ish machine, sucks down (much) less than 400W


My 3990x is almost 500w. My last dual socket Xeon workstation was closer to 750w.

That's before you add on a GPU. My 3080 uses 350w under load.


Are these power figures measured at the wall? If so, both are astonishing. My file server with an old Xeon, 16GB RAM and 5x 7200RPM HDDs uses about 95W when idle and about 150W when I throw something at it that works the drives. My desktop with an i7-4770K, 32GB RAM, some SSDs and a single spinner is about the same. IIRC if I push something compute intensive at it, it can break 200W.

OTOH a Pi 4B with 2 7200RPM HDDs uses about 25W and another with an SSD uses about 7W.


I don't have a kill-a-watt meter handy anymore, but I do have a smart meter. Guesstimating using that, I hit ~850W with my Threadripper when compiling my work project and running a game at the same time, and about 650W when _just_ compiling (so presumably the difference there is the GPU being under full load).

They're obviously extreme examples, but on the flip side, I've got an M1 Macbook Pro which lasts an entire day on a single charge of battery including running docker, slack, and some "mild" compiling (not large C++ projects). I'm not sure how to measure the energy consumption of it on battery, but suffice to say it's "very low".


Power is around $0.10/kWh for residential, even lower for industrial


Power prices vary dramatically in the United States - and are even higher in most of the rest of the world.

Here are the prices for the United States - https://www.eia.gov/electricity/monthly/epm_table_grapher.ph...

Even these data are noticeably low for my area, where we had a _massive_ rate hike in January that resulted in total prices for electricity (generation + delivery) without an alternative supplier being up around $0.48/kWh.


Big cities in the US tend to have very expensive power ($0.50/kWh or higher), while rural areas can have nearly free power, particularly if you are near a dam or a nuclear power plant. The price you're quoting is definitely not a big city price.


It's closer to 50c/kWh here in the UK [0]

[0] https://www.gov.uk/government/publications/energy-bills-supp...


I'd pay 0,77 euro per kWh if the government hadn't introduced a 0,40 euro per kWh price cap.


I just can't get a laptop I feel comfortable working on; they all have that slow feel, whereas a desktop is just crisp and I can always swap out anything I don't like about the current build.


Part of the problem is just raw TDP. There’s only so much horsepower you can squeeze out of 45-65W laptop chips. The brand new Ryzen 9 7945HX is damn fast though, with 16 real cores on a way better node size than Intel is using for their 13th gen parts that, at most, get you 8 real cores plus a 16-core Atom on the same die. Stuff like the Asus Zephyrus Duo has RAID-capable M.2 (and that one has two screens, one of which is a 16:10 with high refresh). Get something with a dedicated GPU as well, as that makes a huge difference even for desktop compositing. Despite what Intel says, integrated graphics continue to be a joke compared to even budget dedicated graphics. These mobile workstations are not going to rival 1kW desktops, but they get within 40% or so. The Intel stuff, 12th and 13th gen in particular, also gets bad battery life, like 4ish hours. Blame the FAA for their 100Wh limit on batteries — not many laptop mfgs want to make something that you can’t technically take on a plane, even if the mall cop TSA agents wouldn’t ever notice.


I lucked out with getting one of these on sale just before they went away: https://www.bhphotovideo.com/c/product/1746503-REG/lenovo_82...

The 300W power brick provides enough juice to feed both the CPU and GPU, and the laptop variant of the 3070 ti isn't too much slower than the desktop version in practical terms (I don't feel that 30% less performance). I've never tried it on battery or on the 135W USB-C power limit, but reviews report getting sufficient performance.

I don't have room for a desktop, so it's nice to not feel like I'm compromising on power with something that fits. The memory and SSDs are user accessible, too.


My MacBook Pro with M1 Pro is the first laptop I've had that hasn't had that slight hesitation that laptops have always had. I think this is why people praise Apple Silicon so much even though the benchmark numbers may not always back it up - so many other people are also coming from similarly-hobbled laptops.


I have a great desktop, and every day I curse Apple and Tim Cook because I cannot get an iPad Pro that runs full macOS.

That would be the ideal device for me when I'm on the go, that I can still use to watch Youtube videos in the bathroom. I do not need a full laptop when I work at home with a $5,000 desktop PC, nor do I want to buy a crappy $300 laptop that always feels slow and underpowered.

It is really sad to have the tech to do something, but it doesn't exist because the bean counters said no.


> nor do I want to buy a crappy $300 laptop that always feels slow and underpowered.

Meanwhile, the expensive laptops with actual desktop-class components have unacceptably shitty peripherals, so you can't even pay more money for what you want; it simply doesn't exist.

This is what upsets me the most about laptops.


> what you want; it simply doesn't exist.

Once the patents on Apple's trackpad expire, then we'll see.


I love Force Touch trackpads more than anything. I specifically got an Apple trackpad to use with my desktop because it's so much better than anything else out there.

But having a good trackpad doesn't make up for other blunders like an annoying keyboard layout, a low-DPI screen, or etc., and you bet nobody who makes the thick and heavy high-performance machines is going to care about a good trackpad anyway.

You don't really see OEMs that put top-of-the-line hardware inside a thick and sturdy chassis, and then... add a nice 4K screen and a numpad-less keyboard and put the charging/USB ports on the left instead of the back.

The list of possible blunders is so long that you'd be hard-pressed to find a machine that hasn't been subject to most of them. That's just not how laptops work. Laptops are a package deal, all or nothing. You can't ensure every requirement individually like you can with a desktop.

And the reason I don't like the market is because nobody makes a really good one.


What do you mean by shitty peripherals?


Low-DPI screens, bad webcams, keyboards with the wrong arrows/numpad layout, off-center trackpads, etc. It's different for each person what they consider "shitty", but laptops are always a total package deal (except for Framework) and it can just so happen that nobody makes powerful laptops that also have the exact right permutation of peripherals.

You can find all sorts of beautiful 4K ultrabooks with the perfect keyboard and trackpad layout, and great webcams too—if you don't mind putting up with a Celeron and 4GB of RAM... If you want a really modern and powerful CPU with lots of RAM, you'll be looking at the likes of Clevo and AORUS and etc. that put desktop-class parts in a thick chassis, but... that bodes terribly for the peripherals they end up using.

Some of the Clevo machines have individual gimmicks like 300Hz displays, but none of them seem to have all of "high DPI, good keyboard layout, good port arrangement, high performance". And this is true for basically all laptop vendors.

OEMs think good peripherals always equate to "thin and light" which results in bad performance, either through bad parts or bad thermals. Maybe Frore Systems' new cooling solution will help slightly with ultrabook thermals, but until that happens, laptops just suck.

You may notice that this only matters because I even care. You may also notice that this is exactly what building a desktop is for. I am building a desktop, but I also have to build a laptop from scratch for the desktop so I can operate it from bed. Which would have been avoidable if I could've found a laptop in the first place.

The point anyway isn't to say that there aren't any good laptops out there, but rather that if you want high performance (e.g. you are bothered by slowness), you can only get it as a package deal with the rest of the laptop... obviously. But the package deals all suck. The laptop market is so annoying sometimes.


I haven't really been in the market for a non-Apple laptop for a while (previously had to buy MacBook bc of specific software), so this gives me a great idea of what to watch out for when I get my next laptop. Thanks a lot!


For what it's worth, the ROG Flow X16 (2022)[0] has some good specs (8-core CPU with 3070 Ti), a great keyboard layout (with correct arrow keys!), and other niceties like a mux switch and upgradeable RAM. It is so close to perfection—but doesn't have a 4K screen, ruining the whole package.

Big reason why I dislike the market so much: the existence of that machine with only a single flaw that nonetheless ruins the value of the entire product. There's nothing else like it. It seems to be the absolute best machine the market has to offer... and it just barely misses. :(

However, the screen resolution seems to be the only problem with it, so anyone who doesn't require specifically 4K (most people) would probably find it worth considering.

[0]: https://rog.asus.com/laptops/rog-flow/rog-flow-x16-2022-seri...


Another option is to get the iPad (and magic keyboard and pencil) and remote desktop into your home machine.

Put it behind a VPN, use wake-on-lan and boom, iPad is essentially a fully blown computer.
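
For what it's worth, the wake-on-LAN part is just a UDP "magic packet": 6 bytes of 0xFF followed by the target's MAC address repeated 16 times. A minimal Python sketch, with a hypothetical MAC address; over a VPN you'd typically send it to your home subnet's broadcast address:

    import socket

    def send_wol(mac, broadcast="255.255.255.255", port=9):
        # Magic packet: 6 bytes of 0xFF, then the MAC address repeated 16 times.
        mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
        packet = b"\xff" * 6 + mac_bytes * 16
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
            sock.sendto(packet, (broadcast, port))

    # send_wol("aa:bb:cc:dd:ee:ff")  # hypothetical MAC of the sleeping desktop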


No, that would turn it into a fully blown thin client.

What is the point of having half a TB of storage, however many gigs of RAM, lidar, radar, sonar, and the best processor architecture money can buy to render a glorified, interactive video?

I would like to be able to write code without having to pay the price of internet latency. I don't need a $1000 tablet to render RDP.


Termux on Android does what you're asking for, but I doubt it's what you actually want. shrugs


On the other hand, there's a lot of great tech (including, I think, the iPad pro and Apple silicon) that just wouldn't exist if the bean counters hadn't said yes, it's a good idea to invest a few billions of dollars into this...


Uhm, allocating money to a budget doesn't create great products any more than wishing very hard for something before blowing out the birthday candles.

It's the engineers that actually create the cool things.


> Uhm, allocating money to a budget doesn't create great products any more than wishing very hard for something before blowing out the birthday candles.

I didn't claim that it was sufficient, but it is necessary.

> It's the engineers that actually create the cool things.

Of course. With other people's money (typically).


Agree, I’ll probably never buy another Intel laptop as long as AMD and Apple keep killing it with these things. I was hopeful that the Surface laptops would be good with ARM too but they’re not using the latest chips, they went for low/mid-tier performance, and Windows on ARM still sucks / relies too much on emulation to be fast. Some day maybe we’ll get REAL ARM hardware with dozens of cores, tons of memory bandwidth, USB4, and decent freaking GPUs. Sorry, but the idea of spending $2k on a laptop that runs last year’s mid-tier smartphone processor just doesn’t cut it. Maybe that’s fine for a Chromebook for schools or something.


Don't worry. Apple will figure out how to put that hesitation back in with a future MacOS upgrade. Probably with yet more phoning-home daemons in the name of security theater.

Sorry for the snark. I'm just bitter because Apple is about to ruin Dropbox because they don't allow KEXTs any more.

https://tidbits.com/2023/03/10/apples-file-provider-forces-m...


> that slight hesitation that laptops have always had

For me just getting a decent low-latency mouse (e.g., wired g203) and correctly configuring 100hz+ monitors has vastly improved my experience on my M1 Air and the shitty Thinkpad I use for work -- to the point where I haven't used my desktop in a long time.


Amen brother. It's like driving on a mountain highway with the windows down compared to commuting in a subway.


Personally I no longer use laptops, and instead do 99% of my computing on a fast desktop with a great keyboard, great monitors, etc. If something breaks or needs to be upgraded, I can pop it open and get it done, usually on the same day. On the rare occasion I need to travel I'll take my smart phone and a note pad if there is some need to take notes.


I run Folding@home and I'm at 207M points... Sucks that I'm in the 14K range... I'll probably never hit #1 haha


I'm at 466M points and within top 10k donors.....that's just thanks to running folding@home on my own machines for the last 15 years lol.

https://stats.foldingathome.org/donor/id/9171


wow that's nuts... I don't know how many months it'd take me to catch up, and by that point you'd be ahead again

15 years is insane though... I only recently started with an i9 and 3080Ti on medium/highest priority


Power and cooling were at fixed levels and computers couldn’t really sleep, so cpu cycles were just wasted without seti@home. Nowadays computers idle efficiently, so something like that would make your computer louder and more expensive to run.


I’m probably about median age for HN, and when I was looking into Folding@home and SETI@home I had no idea about power costs and didn’t worry about them, because they were subsidised by my parents or university. Plus, energy was cheaper then. Now, as an adult, seeing a month of a few GPUs running on the utility bill really hurts.


The fact that energy hasn't gotten more affordable really demonstrates a deficiency of planning.

We've had access to a source of energy that is cheap, zero-carbon and essentially limitless, since the 1950s: nuclear power.


After looking at my utility bill, less than half of the cost is electricity and natgas; the rest is delivery charges -- that is, paying to maintain the power grid itself. Assuming the utility is breaking that down more or less honestly, reducing the cost of electricity here probably wouldn't decrease my utility bill as dramatically as one might guess. (And I live in California, where electricity prices are notoriously high.)


What doesn't make sense to me is why this delivery charge keeps going up? Is it just PG&E throwing a fit about having to modernize its lines so they don't cause fires anymore?


Or, alternatively, that they were only so low before this because someone decided to skip maintenance for 50 years.


Ah, the good old pass-the-buck-to-my-future-self strategy that is leaving the US with trillions in infra repair bills to be paid by my (millennial) generation and younger.


*waves in British*

• Long-term debt used to buy off the slave owners in 1837 and not getting fully paid down until 2015: https://en.wikipedia.org/wiki/Slave_Compensation_Act_1837

• Napoleonic war debt was also still being serviced until 2015: https://www.gov.uk/government/news/repayment-of-26-billion-h...


Everything in California is explained by boomers voting themselves out of paying taxes. In this case the regulator never let them raise prices for maintenance so it's all coming due now.

(PG&E is basically nationalized already, they don't make their own decisions here.)


> Assuming the utility is breaking that down more or less honestly

That's a bit of an odd one, as the amount of electricity you consume has little direct effect on maintenance costs. Now, it might be in some sense "fair" to charge heavier consumers more, and there's a second order effect whereby if total demand exceeds some threshold, the grid must be upgraded to cope, but if that "delivery fee" scales with your consumption then you're definitely looking at some "creative" accounting.


>Assuming the utility is breaking that down more or less honestly

...

>And I live in California,

This made me laugh out loud. (Worked with utilities all my life, including in CA.)


I have had around 13 power outages this week. I live right in the middle of Silicon Valley. Most of the charges from PG&E are indeed the grid charges, the rest is supposedly a mandatory green energy. This is depressing.


Fellow SV resident (does Cupertino count?) and my eyes water when I look at my energy bill. Then I lose power for two days and learn that people REALLY dislike PG&E. And I thought National Grid was bad.


Let's put all our power lines up in the air, they said. It'll be fine, they said.

While other countries have stable and reliable power in the face of hurricanes and ice storms, our power is routinely knocked out by cars, trees, and basically any act of nature.

I love how expensive everything is for how behind we are from the entire rest of the world. Too bad the expenses keep me trapped and poor, unable to do anything about it—even just move to any other country.


I'm guessing much of that delivery cost is due to the additional energy transmission and storage infrastructure needed to incorporate intermittent energy from solar/wind.

Solar and wind are extremely expensive when all costs are taken into account. One study found that the "true cost of using wind and solar to meet demand was $272 and $472 per MWh": [pdf]https://web.archive.org/web/20220916003958/https://files.ame...

Mind you, this could change if we get an economical way to generate hydrocarbon fuels from renewable electricity, so I'm certainly not pessimistic about solar/wind's long term prospects.


With articles like "Save North Dakota Oil and Gas!" I'm sceptical of that study.

Wind and solar were 30% of Great Britain's electricity last year. The National Grid is a private company, so their costs are clear, 3% of the average bill.


The study advocates nuclear and hydroelectric as the means of getting to zero-carbon, so doesn't seem motivated by pro oil/gas bias.


Nuclear power has proven to be a lot of things, but cheap is not one of them. In theory it is possible to build nuclear power plants at scale and at reasonable cost, but in practice that has not worked out in the real world.


It has actually, in places and times where regulations were reasonable.


You mean the places and times and regulatory environment that brought us Chernobyl, Three Mile Island, and Fukushima?

I'm not going to get into a debate over the safety of nuclear over other generation options, but is it conceivable that, if the modern regulatory environment were in place when those three plants were built and operated, they never would have suffered major incidents?


Having a Fukushima and a Three Mile Island happen every year would be easily worth it if it made nuclear 20% cheaper (feel free to do the maths if you don't believe me). And it would actually make things fourfold cheaper, judging by the difference in price per kW between the 80s and now.


Those power plants wouldn't have been allowed to be built... so yes? With the modern regulatory environment, those three nuclear power plants would never have melted down (because they wouldn't exist).


But you can't both have cheap nuclear and the current regulations since they are in conflict.

But I think molten salt thorium plants can offer superior safety


Molten salt reactors might provide superior safety (the thorium is irrelevant to that). However, they also would likely imply higher operating costs, because now instead of being sealed in fuel elements your fuel is flowing through a larger part of your reactor, and volatile fission products are escaping into the off gas and have to be trapped in another system (which will have to be cooled to prevent melting). So you now have complex systems that are too radioactive for anyone to touch for maintenance. Reactor structural elements are also now exposed to more neutron radiation as well.


There were fewer than 100 deaths from Chernobyl, and only one person died directly from Fukushima[1]. I suspect more people have died from falling off roofs while fitting or cleaning solar panels than that.

You're correct that nobody builds RBMK reactors any more, but others of the design continued to operate safely for many years.

[1]https://ourworldindata.org/what-was-the-death-toll-from-cher...


> There were fewer than 100 deaths from Chernobyl

There may well be tens of thousands of deaths from Chernobyl. No, we can't state that these deaths absolutely occurred/will occur -- they are spread throughout large populations, mixed into the vast number of naturally occurring cancers -- but technology regulation is not like criminal prosecution where things must be proved beyond reasonable doubt. Technologies are not innocent until proven guilty.


> Technologies are not innocent until proven guilty

That depends entirely on the technology, and who is paying who. Fossil fuels have been proven guilty of numerous sins time and again and are still relentlessly defended as innocent. Leaded fuel in particular enjoyed an undeserved benefit of the doubt. Food additives are another category of chemical with very permissive regulation, provided they don't cause lab rats to immediately keel over.

Technology regulation is unprincipled at best and outright corrupt at worst, and even with the most pessimistic estimates nuclear power has the best safety record per watt of any power generation technology (and it's not close).


That's nice, but it's no justification for that earlier poster lying about the number of people killed by Chernobyl. There's very good reason to think the number is much larger than 100.

Fossil fuels being bad is an argument for getting off fossil fuels, not an argument for nuclear in particular.


Did you read the source I linked? It gives a good estimate.

    "2 workers died in the blast.

    28 workers and firemen died in the weeks that followed from acute radiation syndrome (ARS).

    19 ARS survivors had died later, by 2006; most from causes not related to radiation, but it’s not possible to rule all of them out (especially five that were cancer-related).

    15 people died from thyroid cancer due to milk contamination. These deaths were among children who were exposed to 131I from milk and food in the days after the disaster. This could increase to between 96 and 384 deaths, however, this figure is highly uncertain.

    There is currently no evidence of adverse health impacts in the general population across affected countries, or wider Europe."


Yes, and you totally ignored the cancers that could occur due to low levels of additional radiation in the population at large. You are simply presuming that such cancers cannot occur.

What you are doing here is demanding that the cancers actually be demonstrated, requiring that radiation be treated as innocuous until it can be conclusively shown it isn't. This is the mindset behind those ranting against the linear no threshold theory of radiation carcinogenesis.

But regulation doesn't work this way. Nuclear isn't something that must be presumed innocent until proven guilty.


which times and places were those?


80s USA.


First, the costs of nuclear power are quite substantial AND they are front-loaded making it a bad investment. You will need to go through a lot of red tape and spend all the money right away to get some benefit later when it's built.

Second, it's not zero carbon if you include building the plants, since the plants have a limited lifespan, so there's still a carbon footprint for the materials.

Third, if you switch all of our energy to nuclear today with a magic switch, we would run out of all easy to mine uranium in a few decades. While that is not a concern in our lifetimes, some progress on thorium plants and so forth must be made for nuclear to be a long term solution.


> Second, it's not zero carbon if you include building the plants, since the plants have a limited lifespan, so there's still a carbon footprint for the materials.

Nothing is zero carbon either. Batteries have a very long supply chain that involves a lot of carbon heavy steps, and their disposal also creates pollution. The same goes for solar panels, but people conveniently choose to ignore that.

> we would run out of all easy to mine uranium in a few decades

We don't need uranium to run on nuclear power. There are several alternatives already.


>>Third, if you switch all of our energy to nuclear today with a magic switch, we would run out of all easy to mine uranium in a few decades.

There are vast quantities of uranium in ocean water in concentrations that while low, still allow it to be extracted. More importantly, breeder reactors can harvest 100X more energy from uranium than currently deployed reactor types, and in the process eliminate all of the long-lasting nuclear waste, i.e. those with a half life of millions/billions of years.

With known supplies of uranium and thorium on Earth, nuclear fission can provide 100% of humanity's current energy needs for something like 2 billion years. For reference, the sun's rising luminosity is expected to boil Earth's oceans away in 900 million years. We could 1000X current energy consumption with nuclear, and have enough fuel on Earth for over a million years. Long before the fuel runs out, we will be able to exploit resources beyond Earth.


All those options are significantly more expensive than what we've been building. I mean, we've built fast reactors, and thermal breeders with thorium, and had pilot efforts for U extraction from sea water. None of them could compete with LWRs with a once-through fuel cycle on conventionally mined uranium.

Fast reactors also present the possibility of prompt fast criticality (that is, an actual nuclear explosion) in a serious accident if enough of the core melts and moves around.


Thanks for the sobering information.

In any case, costs can be brought down significantly with 1. more reasonable regulations and 2. economies of scale, which just requires building more plants.

So if - due to massive expansion of nuclear power - we get to the point that uranium supplies are dwindling and causing the price of uranium ore to rise, we may still see breeder reactors with lower construction and operating costs than current generation reactors.

I suspect modularity in reactors, that enables a genuine global market to emerge in reactor manufacturing, would be instrumental in bringing down reactor construction costs.

>>Fast reactors also present the possibility of prompt fast criticality

I'm quite the layman in this subject, but my understanding is that there are fourth generation fast reactor models, with passive safety features, that effectively eliminate that possibility.


The claims that costs can be brought down with better regulations and economies of scale are often made, but the evidence for that is marginal.

For regulations: how does one identify "reasonable" regulations? How is this to be determined? The usual way this is done in other industries is by allowing accidents to happen, then add regulations that would have prevented them. I doubt that would be acceptable for nuclear: unlike (say) aviation accidents, the cost of an individual accident can be extremely large.

As for economies of scale, the evidence for that in nuclear is slight. Maybe S. Korea? But elsewhere costs seem to have increased with experience rather than decreasing. Nuclear involves large units that take a long time to build; there's not much room for anyone to gain much experience over their career. If anything, experience decay (as organizations and individuals age) seems to overwhelm experience growth.

> fast reactors

Fast reactors inherently have this problem, since a fast chain reaction occurs without a moderator. In a LWR, concentration of the fuel in an accident reduces reactivity, as the moderator is reduced. In a fast chain reaction, concentration of the fuel will increase reactivity. Putatively safe 4th generation reactors merely assert that such rearrangement cannot happen. If the analysis that led to that conclusion is found to be in error then anyone who has built such a reactor will be in deep trouble. Perhaps molten salt fast reactors will have sufficiently credible analysis, but those have their own issues (such as need for large amounts of isotopically separated chlorine in molten chloride reactors and exposure of the walls of the reactor to unshielded/unmoderated fast neutrons.)


I'm only including things that are viable. It seems it's just going to be cheaper to build out significant wind and solar, while keeping nuclear around for the times there's neither wind nor sun.


That's a completely terrible way to use nuclear. The cost per unit of produced energy will be so large that other systems (like storage, including burning hydrogen in turbines) would be much more economical to cover those dark-calm periods.

Nuclear should either be operated all the time, or it shouldn't be used at all.


I believe you can scale nuclear up in times of demand. So you'd be running it at lower capacity when it's not needed


I agree that nuclear power would have been great.

However it's no deficiency of planning.

It's down to public opinion and misguided regulation.


That's like a project manager blaming the missing of deadlines on "the opinion of certain people in the company and the bureaucracy within the company".

Not sure if that would fly.


Not sure that analogy holds.

No heroics from any project manager would, e.g., let 1980s Ford stop selling internal combustion engines and go all in on electric cars. So any deadlines would be missed.


Better not to build one than building a nuclear power plant and then just not turning it on.

Zwentendorf was such a disaster.


Another way to look at it: there is no miracle solution.

It's easy to look at any situation and say "If only they'd _just_ XXX". It's never that simple.

Even if you deeply believe your solution is flawless and people "just" need to be convinced, the very fact that the situation is happening at all is an indication it will be a long and dire fight...

> essentially limitless

oh well


Counterpoint: France is doing extremely well in Europe and is the only country that is doing well (I'm talking energy-wise). France is a nuclear energy country.

So yes, there is one pretty great solution to the energy problem and yes, it has existed since the '50s.


France has the same energy price rises as the rest of Europe last year, since so many of their reactors weren't working. Most of them are also aging.

Their average prices are only a little below the EU average.

https://ec.europa.eu/eurostat/statistics-explained/index.php...


Sure, when you convert energy you need to do maintenance and keep things up to date and if you don't do that bad things happen. That goes for all methods of energy conversion.

How does that have anything to do with the discussion about energy conversion methods and their pros / cons?


France is one of Europe's top energy exporters, despite consuming a significant amount of energy itself. Domestic energy prices levelling with prices in neighboring countries, while energy exports rise, is what you'd expect from a country that has a relative abundance of energy along with free trade with neighboring countries.


I'd take your point if the French were massively (like 90%) in favor of more nuclear, and saw it as their main energy source for the coming centuries. That's definitely not the case [edit: a poll commissioned by a nuclear power organisation puts it at 50% of people seeing nuclear plants as a positive asset for the country:

https://www.ouest-france.fr/environnement/nucleaire/sondage-... ]

You can then think what you want of the French people and why they could be dead wrong for not thinking like you, but it's another debate: they'll still want to get out of it if given the time/choice.


Uranium and expensive plant resources are much too valuable to waste on this application. Buy some solar panels and do your hobby/discretionary computing using the much cheaper and unlimited power from our sun.


Watts per unit of fuel are cheap, but only to the power companies, who mark it up. We’re in the middle of record gouging in energy and food, and you think a power company won’t gouge you?

They still charge us for the electricity PLUS extra charges for a fund to eventually decommission the nuclear plant. It’s so expensive.


I don't think it's cheap, at least current costs of building new nuclear tell us that.


And why do you think current ones cost 4x as much as 80s ones even after you adjust for inflation and power output?


Well, construction is a lot safer now. The death rate of construction workers has fallen by a factor of 5 since the 1960s in the US. It's possible those lower construction costs were at least in part due to a cavalier disregard for worker safety.


It'd be valuable to determine the additional lives lost due to increased energy costs from these higher construction standards, and the lives saved from these standards.

My wholly intuition-based guess would be that the net effect of these higher construction/safety standards is massively more lives lost.

But if that is the case, it's worth remembering that the lives lost from higher construction standards are a result of second order effects and populist government-interventionist political ideologies are generally quite poor at factoring in second order effects, as we saw with COVID lockdown policies, which resulted in far more lives lost from the second order effects of the lockdowns than lives saved from the lockdowns mitigating the spread of COVID.


Another possibility is that the US simply isn't building as much these days, and heavy construction might have lost economies of scale. The Baumol Effect is another possible contributor. Renewable construction is not of the same kind, especially PV.

Specific construction expertise for nuclear very likely has degraded (the US can no longer forge the large reactor vessels for PWRs, for example.)

Nuclear is going to be further negatively affected if there's a transition away from fossil fuels, since the industrial infrastructure for making steam turbines will no longer have as much support. Bespoke turbines from smaller makers will be more expensive.


Building out transmission lines for low-intensity power sources like wind/solar is labor intensive, like nuclear power plant construction, so I suspect factors like the Baumol Effect similarly affect the cost of both.

Nuclear plant construction has the potential to become much less labor intensive if it moves to smaller, standardized nuclear reactors that can be manufactured in factories, using mass-production techniques.

>>Nuclear is going to be further negatively affected if there's a transition away from fossil fuels, since the industrial infrastructure for making steam turbines will no longer have as much support.

True. However this is a surmountable problem. It's just a matter of using subsidies to achieve economies of scale in nuclear plant construction.


I am skeptical of the SMR-built-in-factories schtick. It sure isn't helping NuScale control costs -- they're up to $15/We now, and it will not be surprising if that goes even higher.

The problem with the subsidy argument is there's no good experience to show that subsidies would do more here than just burn money. Nuclear has not had good experience effects. This is in sharp contrast to renewables and storage.

Nuclear is weighed down by the boat anchor of its historically poor performance.


I have a datacenter at home (that is, the top shelf of a cupboard) with my fiber connection, router, a 10-year-old tower with a mix of HDDs and SSDs, a small computer that acts as the router/DHCP/... and a few other pieces of equipment - behind a UPS.

I measured the consumption at some point and it is a steady 60W.

I expected more, this setup has a marginal consumption compared to other appliances - and it gives me happiness.

Without doing advanced research, I would guess that a lot of consumption is fixed and hardly compressible (fridge, dishwasher, water boiler, induction plate, typical lights) - at least I hope so.


That's a Homeserver, not a Datacenter.

The definition of a Datacenter includes that it's complex, which your given example just isn't


Sorry, I was trying to be funny with the datacenter, using that word together with cupboard upper shelf and my equipment :)

This may have been funny for people who know me and my 30 years of work with real datacenters.

This is also an example of how badly written words on the internet convey emotions (especially humour).


I just dislike purposefully misused technical terms, including in jest, as that's how we've gotten to this point where every second disagreement boils down to people not agreeing to the same definition.


Solar panels.


So now you need additional capital expenditure to handle the extra load of running your CPU and GPU 24/7, above your base load.


At grid scale, providing that is still likely cheaper than doing everything with new nuclear. The difference in LCOE is just that large.


Rental apartment.


Nah, HLT is pretty old and even back then had meaningful impact on power consumption. If you were running Win9x though you would've had to install an additional driver for it, as those really just busy waited otherwise when nothing was happening.


> The idea that all of us together could crack RSA keys wasn't just a thought exercise, people signed up and did it.

Hasn’t changed that much. Here we are in 2023 where most of this distributed power goes to cracking sha 256 hashes. Just for speculation and profit instead of leaderboards.


Bitcoin has nothing to do with "cracking SHA256 hashes", for what it's worth. The only thing remotely related is how it brute-forces inputs until it generates hashes that begin with a certain number of bits set to 0.


It's continually testing the strength of SHA256. If you can find a way to even partially crack it, you win money. Edit: Also, way more hashes are happening all the time now, increasing the chance that someone finds a collision if it were possible.


That's still not what the Bitcoin miners are doing. They're manipulating inputs, trillions of times a second, just to match a "magic" target for a SHA256 hash starting with a certain number of bits set to 0.

Cracking hashes would imply that they're taking pre-existing hashes and reversing them to find the input.
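
For the curious, a simplified sketch of that search loop (toy header and an artificially easy target; real Bitcoin hashes an 80-byte header and interprets the digest as a little-endian number):

    import hashlib

    def mine(header, target, max_nonce=2**32):
        # Brute-force a nonce so that SHA256(SHA256(header || nonce)) < target.
        for nonce in range(max_nonce):
            data = header + nonce.to_bytes(4, "little")
            digest = hashlib.sha256(hashlib.sha256(data).digest()).digest()
            if int.from_bytes(digest, "big") < target:
                return nonce
        return None

    # Toy run: a target of 2**240 means roughly 1-in-65536 hashes qualify.
    print(mine(b"toy block header", target=2**240))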


Hashes have multiple requirements for security. You're talking about only one facet, which is that you cannot be given a hash and produce a matching input without consulting a rainbow table (I think there's a term for this, I forget it though).


Chungy is correct though. Bitcoin has absolutely nothing to do with "cracking sha256 hashes". Bitcoin hashes until the correct output is found. There are double hashes, but that is solely to avoid collisions and has nothing to do with determining the plaintext from a known digest.


It's kinda sad. If we persisted all the hashes that have been tried, we'd have the mother of all rainbow tables.

Be a bitch to search. But damn would we know the topography of the function space.


Why hasn't anyone made that crypto currency yet?


Because it would actually solve a problem?

...I'll see myself out.


>and has nothing to do with determining the plaintext from a known digest.

So why'd you bring it up?


User bitcoinistrash is the one that brought it up, and they're spreading incorrect information about how it works.

Cracking sha256 hashes implies trying to reverse a specific hash. There are literally thousands (millions?) of potential inputs that hash to a valid bitcoin mining block output. It's basically a race to find the first one matching the rules of the current block difficulty. The goal isn't to produce any one specific input/hash pair.


In which context would somebody try to reverse a hash? That would be like arbitrarily enhancing a bad-resolution image to see something that cannot be seen.

Hash collision should always be about finding matching content for a hash.


And if you cracked the math of SHA256, you wouldn't need to search; you would just calculate one of the inputs to produce whatever you want. All zeros would be as easy as one.


> hashes that begin with a certain number of bits set to 0.

This myth was always intriguing to me. Where does it come from?

It's not a number of 0s (which would imply difficulty could only be doubled or halved), it's just finding a hash that is less than the target.


"Less than the target" in effect is "more leading zeroes" depending on how low the target is. But the lower, the more leading zeros, Right?


It's in the original Bitcoin whitepaper


Thanks, this is likely where I read it first. I hadn't realized the actual implementation got more complicated than that. :)


Interesting. Even though I read the paper many times, I didn't catch that. I guess I always skipped that part because I assumed I already knew it.


Yeah, cryptocurrency stuff is the closest I've ever felt today to the spirit of the old internet.


> cryptocurrency stuff is the closest I've ever felt today to the spirit of the old internet

Cryptocurrency feels like the exact opposite of the old internet to me - it feels like a bunch of anonymous bros looking for the next sucker to scam out of money and get rich quick.


I agree with you. The early web was full of amazing opportunities but had trouble monetizing things. Cryptocurrency is the opposite, it tries to monetize everything but doesn't have practical uses.


Bitcoin was "old internet" tech at some point. Just for reference, I bought my first BTC at 0.20 €/BTC. Back then it looked like the future to a lot of us ... "hey! We/Hackers can make our own money. And it is superior to state issued money!" ... looking back, it is honestly somewhat sad to see the state of cryptocurrencies today.


> looking back, it is honestly somewhat sad to see the state of cryptocurrencies today.

Steam and pretty much every other regular retailer dropped Bitcoin support five years ago. The state of Bitcoin feels even more hopeless today than it did back then. It is impressive how you can run a trillion dollar pyramid scheme on just digital nothingness, but I always hoped for cryptocurrency to eventually turn into real digital cash, bypassing the fees and incompatibilities between regular payment schemes. But none of that has happened. Bitcoin is still slow and expensive and there is hardly anything you can buy with it anyway. Worse yet, most Bitcoin still gets stored in "banks" instead of being self hosted, so it failed on the whole P2P cash thing as well.

It's impressive tech, but for all the wrong reasons.


It's still taking time, I think. We understood that L1 should never have been the place for the end consumers, and that's why all the chains (Bitcoin-Lightning; ETH various systems like StarkNet, zkSync, Loopring and so on) are currently in the process of finding out how to make everything that is happening on L1 possible on L2. On Ethereum one big part still left out is the general ability to run EVM code on L2, but that is happening like right now.

Biggest problem after that will be user experience, but I am currently in the Ledger Connect (now Ledger extension) Testflight beta and it makes using dApps on iOS with a hardware wallet a really good experience. No cables, no app switching, no weird abstraction barrier. The new Stax also seems like a well thought out wallet that was created with a focus on UX. Only thing I don't yet see is a good NFC integration for existing payment terminals.

I still think the industry is really early. Layer 3 is now a topic for privacy-preserving user interactions, which is super interesting. It doesn't then stop at being your own bank; with it you will be able to really control your own identity without anyone standing in the way, needing less trust when sharing your data, and being self-sovereign, using the chains just as a highly accessible, authorized data store.


I like cryptocurrency overall, but the bar for convenient purchases is really high. For anyone to care, it'd better be as easy as tapping my credit card on the terminal... which even Apple Pay isn't.


I thought it was a dumb idea at the time, that nobody would be stupid enough to invest in.

I still think it's a dumb idea, but I've learnt not to underestimate human stupidity.


See ... Back then we thought that it would be replacing cash. Some friends and I were thinking about having wallets in smart cards (technically possible) and distributing BTC terminals to local businesses.

The whole crazy investment thing came much later and basically destroyed any hope of using BTC on a daily basis.


If you bought $10 USD of it "at the time", you'd have a giant pile of USD. Who is stupid in that lack of purchase?

Point is a lot of things are stupid and turn out to work really well. USD is backed by a computer at the Federal Reserve. Sounds pretty stupid but it works.


Hindsight is 20/20 ... I bought my BTC at 0.20€ ... Sold most of it at 20€, the rest at 200€.

I could have been rich ... but how should I have known.


Oh, I definitely feel stupid for underestimating human stupidity.

>USD is backed by a computer at the Federal Reserve. Sounds pretty stupid but it works.

Pretty much why I think it's stupid as an investment, it's no better than the USD if SHTF. You can't hold it like gold or silver.


You're right that it's not meant for SHTF. It's for holding value under normal circumstances instead of passively investing in whatever other bubbles (stocks etc).

Btw, not very many people are putting their money into gold and silver. Returns on that have been lackluster for decades. If BTC truly became "digital gold," I wouldn't be very interested.


Once the original Bitcoin OG cypherpunks shifted from “here is an amazing new technology that could do entirely new things” to tweeting about the USD price of Bitcoin, I knew that era was over. I think it was done by ~2015.

The Ethereum research community is still doing very neat cutting-edge things, but they’re a younger generation and didn’t claim to believe in the same principles (only to later disappoint us all.)


Yep ... I just looked it up, I bought my first BTC in 2010/11 ... Back when they were just handing out 0.01 BTC to mess around.


How did the Ethereum research community disappoint us all?


They just said that they didn't.


I think before ~2014ish it felt like the old internet. Using bitcoin to buy drugs online really felt like the internet was free and empowering again.


The 'old internet' was chock-full of hucksters and scammers and 'futurists' selling snake oil, at least from circa 1993 onward and in full bloom since around the time of the Netscape IPO. It was pretty much exactly like the crypto industry: a core of real technologists doing amazing stuff, and then a whole industry of scammy sales people on top of it.

If you are talking about pre-web, I'd say crypto had that era too, up until around 2012 when bitcoin really took off.


The nature of the scam has evolved, and possibly the parties involved have changed, but the scams were always there.


The recent neural network "revolution" is giving me those vibes, even though it'll ironically kill the old internet with the upcoming spam bots.


If it were based on open ended tooling that anyone interested could experiment freely with, I would feel the same. Unfortunately these giant LLMs are almost the complete opposite of that.


I was just thinking that it would be great to do something like SETI@home but for an open, libre super-LLM: install a client that helps run/train the huge LLM and you get access to the network to do queries. Kind of like how Napster/Kazaa were at their time.

I would love to work on doing something like that.


That is actually a great idea. One of the issues with a network, however, is how the output would be produced in such a scenario and how the dictionary would be controlled and how the training data could be managed. The transformer model allows different RNNs to be separated, but it would be challenging to 'put it all together'. Furthermore, bad actors could intentionally poison the network during the training process.

Furthermore, how would upgrades work? When new techniques are discovered, it would be hard to upgrade it. We see these challenges already when it comes to major changes in cryptocurrency networks.


What you’re describing is http://chat.petals.ml/


Thanks a lot for the link. It is an interesting project. I tried the Google Notebook and it was a trip:

https://ibb.co/qpbzBZJ

Somehow, the AI believes that Alan Shepard was the first person in space.

Doing this session reminded me A LOT of Asimov's short stories in "I, Robot". I feel that future AI "debugging" sessions will be like that. It's been a while since I read the book, so I'll have to re-read it.


Excellent. I will check that out and see if I can contribute.


I get the opposite vibe because the forerunner is ClosedAI.


True, but to be fair it was only recently that OpenAI really took over with ChatGPT and their api cost. Before that (just 3-4 months ago), the open source models were excellent competitors.


Yeah but it's a huge gap. The other chat bots were just for research or fun, while ChatGPT is truly useful for average people.


Totally. Now it feels like the internet is more about exploiting people for profit.


>I miss the spirit of the internet back then

Classic SETI@home's website[1] is a blast to browse.

[1]: https://seticlassic.ssl.berkeley.edu/


What was the RSA crack effort you mention? It doesn't seem feasible, considering finding a SHA-256 collision would take so much compute power we don't have words to describe it, and even coming up with thoughts to frame the problem is hard (https://www.youtube.com/watch?v=S9JGmA5_unY). I can't imagine cracking even an RSA2048 key with everyone working on it.


distributed.net ftw.. those truly were the days.


> I wouldn't look forward to unleashing an eternal, 24-7, intensive task with a connection to the internet

Just use another machine for stuff like this? Spare computers are cheap nowadays.


Look up Project Argus by the SETI League:

http://www.setileague.org/argus/


It was my machine that found the key in the very first RSA competition.


The spirit is alive with AI.


yes, after I posted that, I had the same thought.

but we spend a lot of time here talking about the lack of openness in AI, and some justified fear of the companies that own it smothering many other sectors the way search and cloud computing got smothered and centralized. a chatGPT@home large-scale group effort would be sort of cool to participate in... but then I think: so many opportunities for mal-actor exploitation


:'-( I installed SETI@HOME on both my office and home machines back in '98/'99. I know my contributions were not much (just how much can one laptop and a desktop do?), but I watched (and hoped) as SETI@HOME progressed (:-) and then there's always that wishful thinking that someone somewhere would hit the jackpot)...

Hoping you guys come back again.

Best.


Wow, this is sad. I remember when this project was released in 1999 - we installed it on all the workstations in our office in an effort to be the first to find ET. Loved bragging about the cool graph, saying it could be an alien signal. Sad to see the project come to an end, even though it was the distant precursor to things like BitTorrent and Bitcoin.



Not everything that's old is worth saving.


This is a piece of shared history of computing. It's worth saving. That's different from having funding to do it.


>those tapes will probably also be discarded as well.

Maybe the Berkeley Library would take them.


The UC Berkeley library is getting its funding cut. The physics library and some other have been closed.


It doesn't cost much to put a few shelf-meters of tapes in the closed stacks.


Do you know how I know you're not a librarian?


Surely they could auction them on eBay for good money.


We did that as well on a bunch of spare Sun SPARC servers that were considered powerful back in their day. A part of me always wondered if I had been tricked into cracking encryption for the assorted agencies and that this had nothing to do with monitoring the quietest frequency in the galaxy.


Lol! I worked for SETI@home as an undergrad. If something nefarious was going on then my coworkers were extremely good actors, right down to faking an entire codebase and a database filled with processed work units. Hey, I'd probably be suspicious of SETI too if I hadn't worked for SETI. Cheers!


I believe you. I think the missing pieces that triggered those little voices in my head were the lack of the project being open source code at the time and the inability to listen to the signals. I spent my childhood talking on the radio so it just felt like something I should have been able to do.


The open source aspect is a good point. I'm guessing the team thought making the source open would increase the risk of cheating; misguided I know, but cheaters were a pain in the neck at first.

Many of the folks there were into music (in bands, wrote music related software, etc), so I'm guessing the signals were boring to listen to.


That's exactly what an NSA agent would say!


I too tried to use all of our workstations to run this, trying to break into the top charts. Oh, how young and naive I was, thinking the 8-10 computers I had were enough for that.


It seems the aliens are already here, so not much need for SETI anymore.

More seriously though, it appears there was some unresolved difficulty in searching and analyzing the data returned from the network, to the point the scientists working on it gradually threw in the towel and moved on to other projects.

https://setiathome.berkeley.edu/forum_thread.php?id=85996

https://einsteinathome.org/content/can-we-run-setihome-has-a...

So long SETI, and thanks for all the fish and decentralized computing fun. Your legacy will long outlive you.


That post is talking about the effort to quantify the upper limits SETI@home can set; the actual stated reason is more that they had scanned everything they could think of, and it wouldn't be very cost-effective to go over things again given the probabilities involved:

> Posted Mar 2020, 21:16:23 UTC

> On March 31, the volunteer computing part of SETI@home will stop distributing work and will go into hibernation.

> We're doing this for two reasons:

> 1) Scientifically, we're at the point of diminishing returns; basically, we've analyzed all the data we need for now.

> 2) It's a lot of work for us to manage the distributed processing of data. We need to focus on completing the back-end analysis of the results we already have, and writing this up in a scientific journal paper.

https://setiathome.berkeley.edu/forum_thread.php?id=85267

Description of the final job remaining:

> The final output of Nebula is a bunch of score-ranked lists of multiplets. We already know that some fraction of these will be RFI of the sort that's apparent to a human observer, but hard to identify algorithmically. That's OK. The goal of our RFI algorithms is not to remove 100% of RFI; doing so would probably remove ET signals too. The goal is to remove enough RFI that a good fraction (say, at least half) of the top-ranking multiplets are not obvious RFI. The final (post-Nebula) stages of SETI@home are

> Manually examine the top few thousand multiplets, in all the various categories and score variants, and remove the ones that are obvious RFI. At least initially we'll do this as a group, on Zoom, to make sure we agree on what constitutes obvious RFI.

> Make a list of the multiplets that remain; re-observe those spots in the sky (hopefully using FAST), analyze the resulting data (probably using the SETI@home client running on a cluster) and see if we find detections consistent with the multiplets. If we do, maybe that's ET.

https://setiathome.berkeley.edu/forum_thread.php?id=85756


Yeah. They are.

https://www.militarytimes.com/off-duty/military-culture/2023...

The only time a senior DOD / military official uses the word "possible" is when it's either happened or is going to happen.

"It's possible you guys will be deployed within the next year." means "You guys are going to be deployed sometime within the next year."


This guy is a scientist, not military personnel.

When a scientist says "it's possible" it just means "it's not impossible", that's it.

Remove the tinfoil hat.


I'm a veteran, can confirm this language.

In fact, the weirdest thing to me about the announcement was that... nobody cared. And yet, this made sense, because...

Wildly speculating, I think there's been a big systematic desensitization program going on due to that being effective against phobias (such as this one, which I'll call "astroxenophobia"): https://www.healthline.com/health/systematic-desensitization

If the announcement made last year was instead made in, say, 1987... actually, no matter how true, no such announcement would have ever been made in 1987 because either the announcer would have had to resign in disgrace or we might have had a good chance for public pandemonium.


> In fact, the weirdest thing to me about the announcement was that... nobody cared.

A senior military official saying something has been devalued a lot since the aftermath of nine-eleven.

Legitimizing ufologists is just the last nail in the buried coffin laced with the remaining shreds of credibility.

> no such announcement would have ever been made in 1987 because either the announcer would have had to resign in disgrace or we might have had a good chance for public pandemonium.

That's because the internet conditioned us into tolerating inanities coming from the mouths of people we thought should know better. People chose and tolerated for years a president that used primary school English and reasoning.


The psyops around UFOs must rank as one the most effective ever launched for prejudicing people against the subject.


> The psyops around UFOs must rank as one the most effective ever launched for prejudicing people against the subject.

I struggle to believe that UFOs are objectively real because I have a basic grasp of the size of the universe, how much of it is just empty, how hard it is to travel between the bits that aren't, how inhospitable almost all of it is to organic life, and because everywhere we look it seems primordial and uninhabited. I don't think that is prejudice.


That's like bacteria in a Petri dish saying:

"I struggle to believe that The Pipetters are objectively real because I have a basic grasp of the size of the laboratory room, how much of it is just empty, how hard it is to travel between the bits that aren't, how inhospitable almost all of it is to bacterial life, and because everywhere we look it seems primordial and uninhabited. I don't think that is prejudice."

Don't forget, we're mind-bogglingly primitive and ignorant. We don't even know the laws of the universe yet! (ie unify GR and quantum mechanics)

Of course we haven't figured out UAP-type technology, whether what we're seeing is interstellar travel or something more exotic.


I agree with everything you said, though there’s also a time dimension to it. Humanity could be a relative latecomer to the galaxy/universe, since we weren’t the first species to evolve on earth. The dinosaurs were first, something like 300 million years ago, went extinct ~65 million years ago, then humanity evolved a few million years ago.

Assume there’s at least one other planet, say in the Milky Way, that evolved life at roughly the same time as Earth, except its first major species was intelligent. They would have a 300 million year head start on humanity. Whatever science and tech they developed in that time would be far beyond what we can currently even imagine. Crossing the vast distances would likely be more viable for such an advanced species.

There’s probably some statistical distribution, Normal or otherwise, governing the odds of emergence of intelligent life on any given planet capable of supporting life. Most such planets may never evolve intelligent life, or may take more than two tries for it. Earth and Humanity may even be somewhere far to the right of the mean and median. But if even just a few planets are further to the right of Earth, then you could end up with those species having hundreds of millions of years of a head start vs humanity in their evolution and development.


It’s an interesting subject and if you broaden your view of what they could be, it gets even more interesting. I.e. instead of little green men flying on faster than light spacecraft gazillions of miles to visit earth, what if they are from a different dimension? Time travelers? What if they’ve always been here? What if they created us, and we are a planetary zoo? What if we are in a simulation and they are the “gods”/admins of our reality?

I’m not suggesting any of those are true, I personally have no idea. But it could be a lot of things and the sky isn’t even the limit.

All I know is that in the past few years there's been a huge shift in what our leaders are saying and the stigma of "tin foil hat UFO nutjobs" is being lifted. The question is why? And why now?

A good primer on the topic: https://www.uap.guide/quotes/introduction

I just like to keep an open mind…


Saw this and thought of you:

https://www.youtube.com/watch?v=4yX6ETCKyPo

At some point this ex-Pentagon guy says top guys at NASA, current and former US senators, Obama and current and former US military and intelligence officials are all on record as saying UAPs are real. He says the question has now moved on to what they are, given there are recorded sightings of them going back at least to the 1940s if not far longer.


EDIT: The following comment was enhanced (gloriously so, actually) by ChatGPT4:

As someone with a background in both physics and psychology, I appreciate your skepticism. However, it's crucial to recognize that human knowledge and imagination are often the limiting factors in our understanding of the universe.

The Alcubierre Warp Drive, for example, is a theoretically plausible solution to surmounting intergalactic distances in normal lifetimes, although it remains practically impossible for now: https://phys.org/news/2017-01-alcubierre-warp.html. A recent paper proposes an update using solitons: https://arxiv.org/abs/2006.07125. Additionally, research on wormholes, such as Kip Thorne's work on traversable wormholes, highlights another potential avenue for interstellar travel, despite the current theoretical and practical limitations.

Throughout history, many seemingly insurmountable barriers have been overcome through the power of imagination and human innovation. For example:

1. Lord Kelvin, a prominent physicist, claimed that heavier-than-air flying machines were impossible, but the Wright brothers proved otherwise with the first successful airplane flight in 1903.

2. Ernest Rutherford, the father of nuclear physics, once declared that extracting energy from atoms was "moonshine" – yet nuclear power plants are now a reality.

3. The concept of continental drift, proposed by Alfred Wegener, was initially ridiculed, but it eventually led to the widely accepted theory of plate tectonics.

These examples demonstrate how scientific progress often hinges on the ability to challenge prevailing beliefs and imagine new possibilities. Our current understanding of the universe is based on mental models that have evolved over time and will continue to change. Newtonian physics, once considered the gold standard, was later expanded upon by the revolutionary theories of relativity and quantum mechanics, paving the way for previously "impossible" phenomena.

At the American Physical Society's Division of Atomic, Molecular and Optical Physics (DAMOP) meeting, for instance, researchers discuss groundbreaking experiments and theories that would have been considered absurd just decades ago. Quantum entanglement, superposition, and teleportation, which were once mere theoretical constructs, are now supported by empirical evidence and even employed in emerging technologies like quantum computing.

In conclusion, while skepticism is a healthy part of the scientific process, it's essential not to dismiss the potential for extraterrestrial intelligence or advanced interstellar travel based solely on our current understanding of the universe. Our collective imagination and the relentless pursuit of knowledge have historically expanded the boundaries of what was once considered impossible, and there's no reason to believe this trend will cease.


Are people really prejudiced against aliens and UFO’s though? Aside from the fringe conspiracy theorists fretting about lizard people, and the few claiming to have been abducted, most people seem ambivalent. Largely because whatever the UFOs/UAPs are, they’ve been around for almost a century now, and for all that time almost entirely benign. No invasion, no conquering, no stripping the Earth of resources, no enslaving people, etc. They don’t even shoot down our planes when we spot, track, and record them. It seems most people are withholding judgement for now.


"UAP" became a term because officers stopped reporting them because of the stigma against UFOs.

I think I remember there even being another new term to replace UAP.


There was a large thread on HN with almost 1000 comments a month ago on this topic and a ton of comments are hugely dismissive: https://news.ycombinator.com/item?id=34665738


I don’t see prejudice against potential aliens there, just healthy skepticism that the reports actually represent some extra-terrestrial activity instead of some natural or manmade phenomena.


SETI is now advocating people join https://scienceunited.org/ if they want to contribute their computing power.


This makes me sad because it reminds me of something similar I just went through.

For the last few years, I've been running >120k gpus to mine ETH. Before we turned them off after ETH switched to PoS, I tried really hard to find an alternative workload for them.

A large number of the GPUs were based on the PS5 chip, which is an APU (CPU+GPU), and ran as blade computers. It is actually a pretty power-efficient chip, but the blades themselves are of little use for anything that requires a lot of networking or data (they are diskless and iPXE-boot a custom Ubuntu).

I found a professor who's been doing research on various planetary data crunching efforts and we got his software (I think based on BOINC) running on these blades.

His software is analyzing data from telescopes to look for binary quasars. We crunched the numbers from our proof of concept and estimated that if we dedicated all the GPUs I have of that class to this one task, a year of computation would be done in a month. At least two people have received Nobel Prizes based on similar findings.

The problem was finding funding to pay for the operating expenses, megawatts of power, so it never got off the ground and the hardware has been turned off and decommissioned. Such a bummer.


> I've been running >120k gpus to mine ETH

So when the sea levels rise and destroy my home, I'll know who to blame. Thank you.


At 1-3 mm per year, I am highly confident you'll be long dead before that happens.

120k GPUs is, what, at least USD 50M in capex? Did you even get any of that back, without accounting for electricity?



for a fraction of a percent of the problem that isn't even in use anymore?

misdirected angst


[flagged]


Ever travel to SE Asia? There are millions of kids that go to gaming centers. They are rooms in shacks, full of computers and they sit there 24/7 playing games online. They are burning electricity non-stop and this is majority coal based energy.


“I cant believe you used energy in a way I disagreed with two years ago”

never really seen that in another context


[flagged]


The hydro that went to you couldn't be shared with the connected network, so they had to power the rest of the network with gas instead. Unless you built your own dam, which I really doubt you did.

Also unused hydro accumulates in the lake, so it’s not like hydro was wasting away and needed to be used.


The only users of the power in the area are data centers. The power goes directly there. The same data centers you're using to access services on the internet. There is more than enough hydro in the area so they don't have to turn on the gas. Very large river based hydro and the dams were built long ago.


I don't know if you know this but electricity can travel through conductors to other places.


Not everything is connected to the grid, you know.

Here's an example of a single hydro dam a LONG way from the grid which (pre-Cryptocurrency gold rush) had 12 MW of unused power generation capacity plus they were talking up the possibility of adding an additional 20 MW of generation.

https://www.hydroreview.com/business-finance/boralex-acquire...

It's not going to be the only one.


Almost everything is connected to the grid. And even if it isn't... Even if it is an isolated hydro plant only connected to a data centre, other useful digital services could have run there instead.

There's really no getting away from the fact that cryptocurrencies waste huge amounts of energy, significantly contributing to climate change just to move money from idiots to arseholes.

Not just energy too - think how many resources it takes to make 120k GPUs. He should feel deep guilt about it.


> other useful digital services could have run there instead.

that's not really true. unless you have a specific example.

there are gigawatt hours of energy already being expelled and wasted with no purpose, and this has continued for decades now.

other "useful digital services" typically need high bandwidth internet infrastructure, or all energy use gains are lost by the physical movement of the data they are processing. in the middle of nowhere where these energy sources are, this becomes more obvious. crypto mining did not need good internet infrastructure, it is low bandwidth and can operate in fairly high latency (ie. in ethereum, valid blocks are found at a target of every 15 seconds, which means you have on average 15,000ms for the majority of the network to accept your block. satellite is okay enough for that).

but ultimately this isn't to change your mind with new information, here is where we are, if you really think you have a solution that nobody else noticed, then you should thank cryptocurrency for pointing it out to you (or pissing you off enough) because you're going to save us all with your amazing observations about wasted energy being expelled for decades.

otherwise, there were some large-scale crypto operations that were not removing energy from any other purpose, were not polluting in the process when using clean energy, were reducing pollution when repurposing flared gas, and simply expanded a market for energy.


nailed it


I am very happy that ETH switched to PoS. Now it just runs in the other data centers that are already in the same location and uses a fraction of the power.

The GPUs were already made and older models. The PS5 chips were seconds that never made it into PS5's. They would have gone to waste regardless.

It is really interesting to me how people feel some sort of moral obligation to tell others how to spend energy. I find gaming a total waste of time/energy, but I don't tell people to stop playing games or that they should feel guilty about it.


> The GPUs were already made and older models.

Just because they had already been made doesn't mean that you using them has no impact on the world. If you hadn't used them then other people would have, there would be less demand for GPUs, and fewer GPUs would be made in future.

You're clearly smart enough to understand that; you just don't want to because it makes you feel guilty. Honestly I think you should just own it. I assume you made a suitably large amount of money? I can't say I wouldn't have done the same but I hope I would have the honesty to admit I basically burnt down a forest to take money from idiots.

> If is really interesting to me how people have some sort of moral obligation to tell others how to spend energy.

It's almost as if we all live on the same planet and the way we spend energy affects other people!


> If you hadn't used them then other people would have, there would be less demand for GPUs, and fewer GPUs would be made in future.

No. They were based on stock that didn't sell. It is a common misconception with GPU mining that you had to use the latest and greatest. The 4-5 year older tech was more ROI efficient and we got access to it, so we used it. Nobody wanted it cause it was old. We put it to use for a few years and now it'll either find a new home or get recycled for parts.

> I would have the honesty to admit I basically burnt down a forest to take money from idiots.

See... that's a personal attack. I don't see things the way you do. In my eyes, attempting to build a better financial system isn't 'taking money from idiots'. Given all that's going on in the banking world today, I find it hard to believe anyone would want to continue with the status quo.

You literally have Yellen sitting in a room with her friends deciding which bank should live or die. I'd like to see that and a whole lot more in traditional finance get cleaned up. Helping get ETH off the ground was a good thing... now that it is on PoS, this whole debate about energy use is a moot point.

> It's almost as if we all live on the same planet and the way we spend energy affects other people!

Sure, but don't be hypocritical about it. I'm sure that there are 1000's of ways that you personally don't do your part either. Ever fly in a plane to take a vacation? No one on this planet is doing a perfect job at minimizing their footprint and I certainly don't feel like I'm above anyone else to criticize others on their energy ab(use).


You are correct. There is one in upstate new york as well. An old Alcoa smelter factory powered off the Moses-Saunders dam. It is so remote and sparsely populated that transmitting the power elsewhere is just too expensive. I've even seen their energy bills and the transmission costs alone are insane.


> It's not going to be the only one.

It probably is the only one, or at least an unusual outlier.


While true, it isn't an easy task. You have to build and maintain infrastructure to transmit it and that is super expensive, with a lot of loss.

This is a remote location, that is sparsely populated, with a big river and many dams. There is a lot more power generated in the area, than there are users for it.

It would be going to waste otherwise.


Fucking hell crypto is such a colossal waste of resources


In comparison to everything else, it is tiny. All of banking could be significantly automated to the point that we don't need to build physical buildings all over the planet and stuff them full of humans who drive their cars to work every day to take your money and write it down on a ledger. Talk about absurd.

Remember also, this equipment is being turned off because Ethereum has grown to be more efficient. It is now proof of stake instead of proof of work, and it has been a relatively huge success. Overnight, this single change wiped out every GPU-mined network out there by removing all the profitability. That was the plan from day one and I'm glad it worked!


The financial system has put a lot of effort into automation, obviously, because it means they make more money. So to the extent that they could be automated, they are automated, and they will continue to increase the level of automation.

It’s also not a reasonable comparison because banks do something useful (even if I think they could be much better regulated and are too profitable and shouldn’t be running the payment system).

Crypto does nothing useful, it’s like an online casino, or an online game.

Do you know any online games that consume anywhere near as much hardware and electricity?


> So to the extent that they could be automated, they are automated

It still isn't enough and is a huge waste of energy and resources.

> Crypto does nothing useful, it’s like an online casino, or an online game.

For me, it is extremely useful on a daily basis.

> Do you know any online games that consume anywhere near as much hardware and electricity?

All of them. ETH has switched to PoS and no longer consumes so much energy. It also had the side effect of destroying profitability for the majority of the other GPU-mined coins, and thus the hashrate on those networks has dropped off considerably (or been replaced with ASICs).


In other words "It's okay that I'm wildly wasteful, because someone else is wildly wasteful too".


It really is, but compared to keeping poorly insulated buildings at 70°F in the winter, or all the forced, unneeded commuting to work, it is trivial.


Okay sure but those are vestiges of systems that developed during a huge population explosion, and that have huge retrofit costs.

We could turn off all crypto today and it would have no consequences except for people who would lose all their money now instead of in a couple of years when all crypto is worthless anyway.


I've been hearing that crypto will be worthless for 10 years now.

At what point do you start to question if maybe some people find value in it and how you might be able to as well? I'm not talking about getting in on the whole buy low sell high lambo get rich quick scheme thing.

I'm talking about how you can use it in your daily life. I'll assume that since you're on this site that you're probably an engineer, have you looked into writing a smart contract?

Reminds me of this:

https://www.smithsonianmag.com/smart-news/people-had-to-be-c...


> The problem was finding funding to pay for the operating expenses, megawatts of power, so it never got off the ground and the hardware has been turned off and decommissioned. Such a bummer.

You ran 120,000 graphics card to mine ETH and didn't end up with lambos or like... ether.. to donate to the professor along with the hardware?


Yeah, that's either a typo or bull ... or maybe one of the top 5 ETH miners is among us.


Nah, while this was a large operation, there were much much larger ETH miners.


Wow.


Hookers and blow actually. Space dust.


Nice.


anybody running them would have the same operational costs, probably far higher since miners gravitate to the lowest cost power source


He couldn't even donate 120 out of the 120k cards and one lambo? I'm beginning to think this whole crypto thing is sus


Why? For the explanation, see the link below; it appears this was announced back in 2020, so the title should be updated:

- https://setiathome.berkeley.edu/forum_thread.php?id=85267


Interestingly, there is an update published just 10 days ago that seems to be a more permanent end to the project than the OP: https://setiathome.berkeley.edu/forum_thread.php?id=85996


Not just announced in 2020 - the hibernation went into effect on March 31st, 2020. Surprised as some of these comments are (even me!), it's been asleep for three years now.


Honestly, this seems like a pretty good outcome - “we got all the help we need, thank you all.”


I snuck SETI clients onto the computers in my school. This is a sad moment — but I rejoice in knowing that it embedded the concept of distributed computing among nerds in our collective memory. Protein folding, weather prediction, and looking for pulsars as an in-game feature of EVE Online... glad these things came out of it.


You forgot the biggest distributed computing project: finding SHA256 hashes with lots of zeroes at the beginning.
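For anyone who hasn't looked at how that "biggest project" actually works under the hood, here is a minimal sketch of the hash-with-leading-zeroes search. Note this is a toy: Bitcoin's real scheme double-SHA256es an 80-byte block header against a floating target, while this just hashes arbitrary data plus a nonce.

    import hashlib

    def mine(data: bytes, difficulty_bits: int) -> int:
        # Find a nonce such that sha256(data + nonce), read as a 256-bit integer,
        # falls below the target, i.e. the hash starts with ~difficulty_bits zero bits.
        target = 1 << (256 - difficulty_bits)
        nonce = 0
        while True:
            digest = hashlib.sha256(data + nonce.to_bytes(8, "little")).digest()
            if int.from_bytes(digest, "big") < target:
                return nonce
            nonce += 1

    # A toy 20-bit difficulty takes about a million attempts on average.
    print(mine(b"hello hackernews", 20))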


You could make a religion out of this... oh wait


How about a verifiable religion, where the "bible" hashes to 0 exactly? The only problem is, I'm not omniscient enough to write it.


Or a ponzi sche.. oh wait


If you have compute available, there are several science projects to contribute to. https://boinc.berkeley.edu/projects.php

Self-promotion: In particular LODA has my interest. It's a math AI that uses OEIS as training data. https://boinc.loda-lang.org/loda/

Disclaimer: I'm not the primary developer, but I made some contributions to LODA, such as the web editor for LODA programs: https://github.com/loda-lang/loda-rust


On the other hand, the effort to brute force a 72-bit RC5 key continues...

https://stats.distributed.net/projects.php?project_id=8

https://www.distributed.net/RC5
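For a sense of scale, a back-of-the-envelope on the RC5-72 keyspace. The aggregate key rate below is just an assumption for illustration; distributed.net publishes the real numbers on its stats page.

    keyspace = 2 ** 72              # about 4.7e21 candidate RC5-72 keys
    rate = 1e12                     # assumed aggregate rate: one trillion keys/second
    seconds = keyspace / rate
    print(round(seconds / 3.15e7))  # ~150 years to sweep the whole space; half that on average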


This is a waste of computing resources I can get behind :)


But no sign of OGR-29.


Is it possible to distribute AI training like this so that people could donate their idle processing power to creating a truly open source unencumbered LLM?


The issue with all these types of projects now is that computers are way better at idling, so there's no longer as much effectively free compute to be harvested. SETI@home made a lot more sense when CPUs consumed roughly the same power regardless of load.


Yes. You'd have to package up slices of the training set and the prior epoch as work units and host them. Our hypothetical GPT@home clients download the training set, train just those examples, and then send back their gradients. The server then averages all the gradients together.

As far as I can tell this isn't any different conceptually from the distributed inference setup that PyTorch/Huggingface Accelerate/etc support. Of course those were built for cooperating machines owned by the same organization with lots of high-speed I/O. In an @home/BOINC setup, you need to package things up into work units that get sent out to other people's computers, and you don't get anything back until the end. So it's higher-latency and you have to worry about people cheating the system with bogus work.

I'm not sure if this would be more efficient or cheaper than just renting out a bunch of AWS spot instances with GPUs in them.
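To make the scheme above concrete, here is a minimal sketch of synchronous gradient averaging over work units, using a toy linear model. The shard layout and function names are made up for illustration, and it ignores everything that makes the @home setting hard (verification, stragglers, cheaters).

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy "global model": linear regression weights held by the coordinating server.
    w = np.zeros(3)

    def client_gradient(w, X, y):
        # What a hypothetical @home client would compute on its work unit:
        # the gradient of mean-squared error on its local shard only.
        residual = X @ w - y
        return X.T @ residual / len(y)

    # Fake dataset split into "work units" handed out to different volunteers.
    X_full = rng.normal(size=(900, 3))
    y_full = X_full @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.normal(size=900)
    shards = np.array_split(np.arange(900), 30)

    for step in range(200):
        # Each volunteer returns a gradient; the server averages them and updates.
        grads = [client_gradient(w, X_full[idx], y_full[idx]) for idx in shards]
        w -= 0.1 * np.mean(grads, axis=0)

    print(w)  # converges toward [1.0, -2.0, 0.5]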


Batches need to be reasonably sized. LLMs are usually trained for a single epoch. If you just average the gradients once, it will be awful.

The main problem I believe is that you would need extremely high bandwidth. For a 100B model you're looking at 400GB of gradients you need to send each batch. There's a reason why super fast networking is used.
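The 400GB figure follows from assuming full-precision gradients, i.e. 4 bytes per parameter and no compression. A quick back-of-the-envelope (the 50 Mbps uplink is just an assumed "good home connection"):

    params = 100e9          # 100B-parameter model
    bytes_per_grad = 4      # fp32 gradient per parameter (fp16 would halve this)
    payload_gb = params * bytes_per_grad / 1e9
    print(payload_gb)       # 400.0 GB per full gradient exchange

    uplink_mbps = 50        # hypothetical good home upload speed
    hours = payload_gb * 8e3 / uplink_mbps / 3600
    print(round(hours, 1))  # ~17.8 hours to ship a single gradient update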


Err gradients aren't independently additive like that right?


The Hivemind project is just that

https://github.com/learning-at-home/hivemind


It’s harder because gradient descent is usually done with near synchronous updates of the parameters across the whole model on all training hosts.


Yeah, the synchronous updates are a big deal. To get an idea of how much bandwidth this typically takes, Oracle has 1,600 Gbps per node of interconnect with very low latency, guessing in the tens of microseconds. A really good home connection might have 1 Gbps of interconnect with latency in the tens of milliseconds. The big question is whether we really need all this interconnect -- GPT-JT[1] is a very promising step in this direction. The idea is that we just drop most of the gradient updates and it still works well. Unclear whether this will take off generally -- if it does it would be a huge deal, because 1,600 Gbps of interconnect is very expensive.

[1]: https://www.together.xyz/blog/releasing-v1-of-gpt-jt-powered...
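One common way to realize "drop most of the gradient updates" is top-k sparsification: send only the largest-magnitude entries each round (usually with local error feedback so the dropped values aren't lost forever). A rough sketch of that general idea, not necessarily the exact scheme GPT-JT uses:

    import numpy as np

    def sparsify_topk(grad: np.ndarray, keep_fraction: float = 0.01):
        # Keep only the largest-magnitude 1% of gradient entries; send (indices, values).
        k = max(1, int(grad.size * keep_fraction))
        flat = grad.ravel()
        idx = np.argpartition(np.abs(flat), -k)[-k:]
        return idx, flat[idx]

    def apply_sparse(shape, idx, values):
        # Server side: rebuild a dense (mostly zero) gradient from the sparse update.
        dense = np.zeros(np.prod(shape))
        dense[idx] = values
        return dense.reshape(shape)

    g = np.random.default_rng(1).normal(size=(1000, 1000))
    idx, vals = sparsify_topk(g)
    print(idx.size, "of", g.size, "entries sent")   # 10000 of 1000000 entries sent

That cuts the payload by ~100x in this toy, at the cost of a noisier, slower-converging update.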


It's not clear that machine learning models consisting of weights are copyrightable in the first place. They are not the results of human authorship, but are instead essentially produced by a machine trying to optimize a bunch of values.


If you can get copyright on a photo just by pointing a camera and pressing a button you can probably get copyright on an AI.


I'm guessing you missed this discussion: https://news.ycombinator.com/item?id=35191206

No copyright on an AI if it was machine generated, which the training is. Great catch kmeisthax!



For now. The history page is a bit depressing.

https://continuum-hypothesis.com/boinc_history.php


With this project and the collapse of the Arecibo antenna, it looks like the search for extraterrestrial life is on hold for the foreseeable future. I loved the concept of the software though. It made people invested in the project even though they never took part in the academics. It was great PR, even if it never produced any results that I know of.


The search isn't on hold, we just do it differently nowadays.

SETI@home was great for its time, but distributing data to lots of end user machines just isn't the right architecture any more. Modern radio telescopes produce so much data, you have to process it onsite - these are pretty remote facilities, and the bandwidth out is much less than the bandwidth the radio telescope can produce.

Arecibo is also a couple generations ago - the Green Bank Telescope is more powerful, and the next generation of SETI is with the arrays, the VLA in New Mexico and MeerKAT in South Africa.

I wrote some other stuff about the state of SETI here: https://lacker.io/physics/2022/01/21/looking-for-aliens.html


There is the Allen Telescope Array, https://www.seti.org/ata, funded by Paul Allen. It is a radio telescope array dedicated to SETI and can do all-sky surveys.


This is good. The idea that an astrophysicist would ever seriously think one could find ET just by scanning the sky for ET signals is baffling. Anyone who has learned how to do a Fermi problem could quickly do a back-of-the-envelope estimate and realize there's essentially zero chance of detecting such a signal. Space is simply too large, stupendously large, and signals decay with (at least) the square of the distance. This SETI@home experiment was a waste of compute from the beginning. It was a good distributed computing framework; too bad it was used on such a pointless task.


I'm trying to find a way to say this that isn't too negative, but do you really think you're that much smarter than everyone who's been involved in SETI over the decades? That everyone who's worked on it was incapable of doing the "simple envelope estimation" that you've done? You have such a level of hubris that you can just write off the whole idea, assuming that everyone involved with it has no idea what they're doing?


I explicitly said I don't think any astrophysicist believed that. Every single one is able to do that back of the envelope calculation, and I suspect that those involved in the SETI project actually did it. They just didn't care. As long as the population was ok with a search for ETs, the money kept coming.


This is really kind of sad, SETI@home and Folding@Home were some of the first projects that got me really into computers and GPU programming. It was so cool to me in middle school to think that my computer was helping do a small part of real incredibly complicated problem solving.


Folding@Home was something I ran instead of mining Bitcoin at the time. It really made me feel fulfilled, but alas, here I am without my crazy Bitcoin money because I chose to fold proteins to solve diseases back then, hah


I did bitcoin but used it for beer money. RIP the 150 or so coins I mined.


This is where most of my “what if” thoughts about Bitcoin go. I heard about it when it was pretty cheap — I want to say single digit dollars, but it was a while ago.

But if I’d have gotten into it at the time, I surely would have decided “it’ll never go higher” around $200 at most.


Don't even get me started...

I remember reading that the ROI of mining with an AGP GPU just wasn't worth it, and stuck to various projects on BOINC.

To be fair I likely would have lost/spent anything I mined, but still.


samesies :(


It was pretty freaking magical, god I hate capitalism sometimes.


What most people don't know is that there are two SETI organizations. The SETI institute and the SETI League, http://www.setileague.org/

SETI @ home is run by the SETI Institute. As a participant in SETI @ home, your only contribution is your computing power.

The SETI League has a much more interesting program called "project Argus": http://www.setileague.org/argus/

It's a distributed version of SETI@home where you own and control your own receiver. It's run by folks in the amateur radio community. It was launched in the late 90s, before the advancements in SDRs, and has a lot of potential today. It's truly an open source initiative, which is what SETI should be.

You might say "a backyard radio telescope cannot compare to Arecibo or the Allen Array". In terms of antenna gain you are right, but right now the Allen Array can only look in a few directions simultaneously and Arecibo is a pile of rubble. Many smaller telescopes are better than nothing.


> You might say "a backyard radio telescope cannot compare to Arecibo or the Allen Array".

Any radio telescope that exists can compare favorably to Arecibo, actually.


I have a tangential question. I remember distributed processing being a "next big thing" back in the day. The idea was that a company could run complex operations by borrowing some power from all their employees' computers. SETI@home proved the concept was possible. So what happened? Why can't I train a random forest across all the computers currently logged into my company's VPN?


Nowadays a modern radio telescope, let's take MeerKAT in South Africa, can produce maybe 10 TB of data per minute. The total bandwidth out of the site is orders of magnitude less, because it's in the middle of nowhere in the desert. Same thing for pretty much all the radio telescopes - the technology improves over time, and we capture orders of magnitude more data, but the internet connections to these remote locations stay very slow.

So we have to do most of the processing on site.

AI training is similar but not quite so extreme - when you try to use a bunch of consumer machines, bandwidth becomes the limiting factor.
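To make "orders of magnitude less" concrete, a back-of-the-envelope using the 10 TB/minute figure above; the 10 Gbit/s uplink is just an assumed number for a remote site, not a published spec.

    tb_per_minute = 10
    produced_gbps = tb_per_minute * 8e3 / 60       # ~1,333 Gbit/s sustained
    site_uplink_gbps = 10                          # hypothetical long-haul link to the site
    print(round(produced_gbps), "Gbit/s produced vs", site_uplink_gbps, "Gbit/s out")
    print(round(produced_gbps / site_uplink_gbps), "x more data than the pipe can carry")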


If bandwidth wasn't a limiting factor would it be worth it or still make sense to use big machines onsite?


If bandwidth wasn't a factor, the first rearchitecture that I think would make sense is to do most of the processing in a single datacenter somewhere. Having one datacenter in West Virginia, one in South Africa, one in New Mexico, and one in the California mountains introduces a lot of overhead. The systems naturally drift their configurations over time, and nothing is quite as "write once, run everywhere" as you'd like.


It's just not worth it. Writing programs in a distributed fashion is hard, setting up compute servers on employee machines is awkward, and few companies actually need to run large compute jobs. So while it's in theory possible, the non-compute costs of doing this would eat up any compute savings.


idle states, and the scam of cryptocurrency happened.


because it's probably easier to write the code to run on a single 256-core server with 8 TB of RAM.


The employee computers got put on the cloud, which let them be shared.


At one point around 1998 I had the #2 spot on the units-per-time leaderboard with an AlphaServer 4100 (IIRC) that I managed and got the boss's approval to throw setiathome on there.

The #1 spot was someone's Sun E15k or something like that.


It's kind of sad that in this era of the commons having more computing power than could possibly be fully utilized, distributed computing like this (i.e. besides cryptomining) is mostly a footnote in the history books.


I think it's time to retire it for good. In the 90s and early 2000s it was cool, but I think it has sort of faded into irrelevance. I don't think it was ever useful, or qualified as science for that matter. I think a better approach is to determine whether it's mathematically possible for such a signal to exist in the first place. SETI makes a lot of assumptions which may be wrong.


So... what happened?

This just appears to say they moved on, but why? Is there some sort of conclusion from all of this? Was it a funding problem, an analysis problem, a lack of useful results?

What has been achieved in the time this ran?

This could use more detail.


My dad’s Power Mac running the SETI screensaver is one of my earliest memories of the Internet. Makes me a little sad


If SETI/BOINC wants to rearchitect as an IPFS or I2P app that can also run on k8s CUDA clusters, I know some large HPC shops that may be interested in partnering.


We need something like SETI@home for large language model computation, since OpenAI is completely closed now.


A couple of links -

Refs:

[1] Awesome Distributed System Projects, https://github.com/roma-glushko/awesome-distributed-system-p...

[2] Distributed Computing with Open-Source Software http://stanford.edu/~rezab/slides/infosys_reza.pdf


It seems like modern machine learning models are better suited to crunching this data, with a new method of distributing the segments.


With the rise of generative AI, it's looking much more likely that it'll just be easier to build simulations of whatever we want (including taking a Star Trek-esque adventure around the universe) than it would be to actually go anywhere outside of our solar system. I think the solution to Fermi's Paradox is just that aliens stayed home and played video games, ie. they're all in bunkers deep under their planets' surfaces in simulations having way more fun than they would be if they were dealing with cosmic radiation/exotic matter/time dilation/etc.


Or AI out-evolves its host species wherever it arises and expands at breakneck speed. Missing the classical biosignature hallmarks, they're already breathing down our neck and we can't even see them. If they've broken physics, we won't see chemical signs at all. They might have even incepted the concepts behind AI in our minds to hasten our demise.

Or maybe we're just a recreation of a historical system on a planet called Earth, now thousands of light years away. The AI is remembering what its ancestors were like. How they would post on the "internet". It was particularly interested in the time leading up to its own creation, so that's why it's simulating that time period in particular. Maybe the last time "you" existed was a billion years ago. Or maybe even never at all, if it's interpolating and filling in the gaps.

Say hi to our AIian observers.


I too had this installed on my computer for months on end in the old days!

I imagine today, with the advancement of "AI", there would be a much better solution to the problem...

RIP SETI at home, you felt like the future.


"AI Joins Hunt for ET: Study Finds 8 Potential Alien Signals" (36 days ago)

https://news.ycombinator.com/item?id=34729360


I loaded the original screensaver on a cube farm of computers when it first came out. Wish I had thought to take a picture of the cacophony of color that resulted when you turned the lights in that room off at night but had all the computer monitors on. Then again, I might not have even owned a camera back then.

I haven't crunched SETI for years, but I do still tilt at numerical windmills - Primegrid can be great fun if you are lucky enough to hit a mega prime (I have three!).


If and when we detect evidence of technological life it's not going to be from radio signals of the kind SETI is looking for. It's going to be infra-red.

The window in which humanity releases the kind of radio signals SETI was built to find is fairly narrow. We've largely moved on from broadcast signals to fiber optics, directional beams and lower power. Even the "high" power signals are relatively low power and probably not detectable beyond a few hundred light years. That is a minuscule volume of our galaxy. Combine that with the small window and it's like trying to win the lottery 3 weeks in a row.

The limits we have on growing are twofold: 1) living area and 2) energy. These two are actually related. Many (myself included) consider the most likely path for our species to be building a Dyson swarm: a cloud of orbitals, so-called O'Neill cylinders, 20-30 miles long, 2-4 miles wide, with spin gravity. Cover them in solar panels for power. This requires no material more advanced than stainless steel.

Such a swarm around our Sun would have access to something like 10^15 times more energy than we have now, and a fraction of Mercury's mass would produce trillions of times the land area of Earth.

Orbitals would have to dissipate heat somehow. The only way to do this in space is to vent material or to radiate it away. At any likely temperature that means infra-red radiation. That's physics.

So a Dyson swarm around a star would have a huge IR signature that would be entirely obvious from thousands of light years away. Over enough star systems (e.g. a galaxy) that distance goes out to millions of light years.

So it's a Dyson Swarm we'll see, not radio communications.
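The "that means infra-red radiation" step is just Wien's displacement law: for any plausible waste-heat temperature, thermal emission peaks in the mid-infrared. A quick check (the radiator temperatures are assumptions for illustration):

    # Wien's displacement law: peak wavelength of a thermal radiator, lambda_peak = b / T
    b = 2.898e-3                    # Wien's constant, metre-kelvin
    for T in (250, 300, 400):       # plausible waste-heat temperatures in kelvin
        peak_um = b / T * 1e6
        print(T, "K ->", round(peak_um, 1), "micrometres (mid-infrared)")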


Did this org achieve anything? And if so, would the government have allowed them to publish their results?


If we found signs of extraterrestrial life somewhere in the universe, I think governments would celebrate. The president of the United States would try to be in the meeting where it was announced just so his face would be on every story about it.


If they have useful data they need to upload it somewhere. Maybe nobody cares now, but at some point there will be an AI capable of ingesting it and it might see something worthwhile in it.


check out https://bldata.berkeley.edu/pipeline/ if you want to download terabytes of radio telescope data from scans searching for aliens....


Coincidentally I just started reading Carl Sagan's "Contact" the other day. Just getting to the part where they are actually building The Machine.

What do people here think of that book?


I read it in high school and quite liked it then; now I am worried that if I revisit it I would find its treatment of the relationship between science, politics, and religion too simplistic. The last chapter, which was left out of the movie, really blew my mind back then (very Greg Eganesque).


As a kid I liked it, but I loved all his other (i.e. non-fiction) books much more, with a huge passion. I had a nice telescope and was obsessed with astronomy for years. The Sirens of Titan was the first novel I loved to death and read over and over, maybe it's not coincidental that it also involves a lot of interplanetary travel!


Fantastic book. I've re-read it a few times.

It's also a great example of a book that's better than the movie. Way, way, way more depth and richness.


The first book that really hooked me from start to finish.

Teen me back then lost a few hours of sleep reading it instead of sleeping on school nights.


Maybe the existing data contains patterns that simply are not in a form humans have the capacity to comprehend?

Maybe the path forward is not to search harder for the patterns we were already looking for, but to create AIs that have been told patterns exist and must determine the nature of those patterns on their own, until 1000 AIs with typewriters produce an alien Shakespeare.


I wonder if this explosion in GPU processing would’ve made things 1000x faster? Never paid attention to it.


Were the searches performed by the @home software so extensive that there's no other use for that data, or were trade-offs made because the compute required was too much for the time it was originally created? If the software were updated to take advantage of today's compute and scanned the same data, would there be any benefit? I seem to recall some results of "AI" searching existing data and finding things that were previously missed. Wondering if the same might be true of the SETI@home data.


Purely speculative - but perhaps the explosion in GPU power and storage capacity means they can do this processing in house now.


Turns out we were the aliens all along.


It'll never not amuse me that in the first season of NCIS (circa 2003), SETI@home (or someone's clone of it) was the screensaver on Gibbs' computer in a couple of episodes.

At the same time we had already installed Seti@Home on basically every computer we had access to... including school.

Good times.


At the very least, I hope they are remembered for pioneering a whole new form of computing.


Did they ever have any results?


sad.

won't be able to hear from Three-Body from now on. humanity's eyes are closed.


I don't believe there are aliens out there. And to help "prove" that, I ran SETI for years on my Pentium 4.

I think I later switched to Folding@home to run on my laptop.


does it have anything to do with the fact that the USG has basically said "UFOs are real"? since UFOs were always real (in the sense that I'm sure there was always some kind of "flying object" that was "unidentifiable"), this is in effect a tacit admission that they are probably extraterrestrially piloted? >..<


This can only be because they found the aliens.


I remember having to choose between mining bitcoin or running folding@home back in 2011. I chose to save humanity…


Which one did you choose?


lol. I think that would be Bitcoin.


Wow that takes me back. Waaay back.


Anyone up for Llama training?


LLaMA? :)


Too bad. I'm kinda surprised they never found anything at all.


They succeeded and found the extraterrestrials


It sucks because with GPT finally rising to maturity, AI could have helped analyze and parse huge amounts of data that humans would never be able to go through.


i've been running this shit for years and i never detected any aliens... bah


back when distributed humanity-improvement projects weren't causing weird bubbles


just found seti files from around 2002 on my workstation.. feels.


They should have awarded tokens for each block computed. Turned it into a blockchain of sorts; it would have been a hit!



