
Maybe someone can elaborate on this, since I know basically nothing about chemistry or nuclear physics; isn't Three Mile Island still completely irradiated and unsafe for humans to inhabit?

Unit 2 is the reactor that melted down and it has been shut down ever since (and partially decommissioned). Unit 1, a separate reactor at the same site, was operated normally until 2019 when it was shut down due to high costs. It was originally scheduled to be decommissioned by 2079 (sic) but is now being brought back online.

If it wasn't profitable in 2019, why is it profitable now? Because Microsoft is committing to be a customer?

Microsoft committed to purchase the plant's capacity for 20 years. And US electricity demand grew very slowly from 2005 to 2020. It is growing rapidly now.

At around $110/MWh, according to the article. This is about 50% higher than what utility-scale PV or wind would cost. Guess they're using OpenAI accounting.

> utility-scale PV or wind would cost

Are you comparing cost against what electricity currently costs, or against what it would cost to add capacity? I don't think Microsoft is acting on hype here; would they really pay a premium just because it's cool to refire a nuclear plant? Surely they've done the math on the feasibility of building out a few acres of solar panels instead.


There could also be incentives beyond the loan, or political pressure we're not privy to. Such pressure is part of the reason Boeing ended up acquiring McDonnell Douglas even though it wasn't exactly the best financial move for Boeing. If the US government is serious about restarting its nuclear industry, then this is a small first step toward rebuilding the skills for constructing new reactors or refurbishing old ones.

It's not really that far-fetched, either. If the government expects a conflict in the next few decades, a solar build-out might become much more expensive or impossible, since our domestic production might not be enough to support NATO's growth.


The electricity cost is actually very low compared to the capital cost of the stuff the electricity runs. But not having access to the electricity means that all that capital is going to waste.

So Microsoft is less price sensitive than other electricity customers.

Plus they get the PR and hype boost from saying they are using nuclear, which is huge right now: big enough that the other hyperscalers felt they had to announce new nuclear projects, even though it will be a decade before those projects could ever come online.


Running a data center on unreliable energy would be shockingly stupid.

And cost estimates for unreliable energy sources routinely exclude the wildly uneconomical costs and environmental impact it would take to make them reliable.


> Running a data center on unreliable energy would be shockingly stupid.

For the right kind of workloads and at sufficient scale, I wonder if this is actually true. (It probably is, but it's fun to hypothesize.) I'm assuming the workloads are mostly AI-related.

AI training presumably isn't super time-sensitive, so could you just pause it while it's cloudy?

AI inference, at least for language models, presumably isn't particularly network-intensive nor latency-sensitive (it's just text). So if one region is currently cloudy... spin it down and transfer load to a different region, where it's sunny? It's kind of like the "wide area grid" concept without actually needing to run power lines.

Yes, I know that in reality the capex of building and equipping a whole DC means you'll want to run it 24/7, but it is fun to think about ways you could take advantage of low cost energy. Maybe in a world where hardware somehow got way cheaper but energy usage remained high we'd see strategies like this get used.
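
To make the idea concrete, here is a toy sketch of price-aware region routing. Everything in it is invented for illustration (region names, prices, the greedy policy); it's not how any real scheduler works:

    # Toy sketch: route inference load to whichever regions currently
    # have the cheapest (sunniest) energy. All names and numbers are
    # made up for illustration.
    from dataclasses import dataclass

    @dataclass
    class Region:
        name: str
        price_per_mwh: float  # current spot price, assumed known
        capacity_qps: int     # inference load the region can absorb

    def route_load(regions: list[Region], demand_qps: int) -> dict[str, int]:
        """Greedily fill the cheapest regions first."""
        assignment = {}
        for region in sorted(regions, key=lambda r: r.price_per_mwh):
            take = min(region.capacity_qps, demand_qps)
            if take > 0:
                assignment[region.name] = take
                demand_qps -= take
        if demand_qps > 0:
            raise RuntimeError("not enough cheap capacity; shed or queue load")
        return assignment

    regions = [
        Region("region-west (sunny)", 35.0, 6000),
        Region("region-east (cloudy)", 110.0, 8000),
    ]
    print(route_load(regions, demand_qps=9000))
    # {'region-west (sunny)': 6000, 'region-east (cloudy)': 3000}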


> So if one region is currently cloudy... spin it down and transfer load to a different region, where it's sunny? It's kind of like the "wide area grid" concept without actually needing to run power lines.

> Yes, I know that in reality the capex of building and equipping a whole DC means you'll want to run it 24/7, but it is fun to think about ways you could take advantage of low cost energy.

There's some balance between maximizing capex utilization, business continuity planning, room for growth, and the natural peaks and troughs throughout the day.

You probably don't really want all your DCs maxed out at the daily peak. Then you have no spare capacity for when you've lost N DCs on your biggest day of the year. N might typically be one, but if you have many DCs, you probably want to plan for two or three down.

Anyway, so on a normal day, when all your DCs are running, you do likely have some flexibility on where tasks run/where traffic lands. It makes sense to move traffic where it costs less to serve, within some reasonable bounds of service degradation. Even if electricity prices are the same, you might move traffic where the ambient temperature is lower, as that would reduce energy used for cooling and with it the energy bill.

You might have some non-interactive, non-time-sensitive background jobs that could fill up spare DC capacity... but maybe it's worth putting a dollar amount on those: if it's sunny and windy and energy is cheap, go ahead; when it's cloudy and still and energy is expensive, some jobs may need to be descheduled.
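
As a toy illustration of putting a dollar amount on those jobs (the job names, values, and prices below are all made up):

    # Deschedule background jobs whose value per MWh consumed falls
    # below the current energy price. All numbers are invented.
    jobs = [
        {"name": "video transcode backlog", "value_per_mwh": 90.0},
        {"name": "ML training epoch",       "value_per_mwh": 300.0},
        {"name": "log compaction",          "value_per_mwh": 40.0},
    ]

    def schedulable(jobs, price_per_mwh):
        return [j["name"] for j in jobs if j["value_per_mwh"] >= price_per_mwh]

    print(schedulable(jobs, 30.0))   # sunny and windy: all three jobs run
    print(schedulable(jobs, 120.0))  # cloudy and still: only the training epoch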


> AI training presumably isn't super time-sensitive, so could you just pause it while it's cloudy?

Or pause it when "organic traffic" has peak demand, and resume during off-peak hours, so that the nuclear power plant can operate efficiently without too much change in its output.


One big problem is that you have a bunch of expensive GPUs sitting around doing nothing during these outages.

Nuclear power plants go down for an entire month at a time for refueling.

Sure - at scheduled, predictable times. That matters.

Except when it’s unscheduled. For months on end.

See, for example, Oskarshamn 3 in Sweden having a 7-month unscheduled outage this year.

Ringhals 4 had an 8-month unscheduled outage during the energy crisis.


A machine that operates continuously is a perfect machine, and no machine is perfect.

The greater the number and diversity of machines, as well as their geographical dispersion, the greater their availability.

In this respect, a mix of renewables (solar, wind, geothermal, biomass, etc.) deployed on a continental scale, along with storage (batteries and V2(G|H), hydro, green hydrogen...) is unbeatable (total cost, availability, risk, etc.).


Tangent, but "outage" and "7 month" makes me feel like we need a new word.

Maybe modern tech has given "outage" a much shorter connotation than it had in the past.

7 months? That's almost longer than the Christmas offseason.


I imagine data centers make the best economic sense when they can run full tilt 24/7. You’ll double your payoff time if you can only run work when the sun shines.

In most parts of the country, solar plus batteries to get through 24 hours will be cheaper than $110/MWh.

Do you have a source for that? When I googled it, it came up closer to $200/MWh for New York, though that was from older sources. The only thing I saw approaching this price point was somewhere like Las Vegas.

I also think you would need more than 24 hours of battery. You have to prepare for freak weather events that reduce system capacity.

I also wonder what time horizon we are talking about. Solar and batteries presumably have to be replaced more often than nuclear.
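
For what it's worth, the back-of-envelope math is easy to sketch. Every input below is an assumption for illustration, not a quoted price, and real LCOE models add financing, degradation, and replacement cycles:

    # Rough solar + storage cost per MWh. All inputs are assumptions.
    solar_capex_per_kw = 1000.0    # $/kW, utility scale (assumed)
    battery_capex_per_kwh = 300.0  # $/kWh of storage (assumed)
    capacity_factor = 0.25         # varies a lot by region (Las Vegas >> New York)
    years = 20
    hours_per_year = 8760

    # MWh produced by 1 kW of panels over the project lifetime:
    lifetime_mwh = capacity_factor * hours_per_year * years / 1000

    # Storage sized to shift ~18 hours of output per kW of panels:
    storage_kwh = 18.0
    total_capex = solar_capex_per_kw + battery_capex_per_kwh * storage_kwh

    print(f"${total_capex / lifetime_mwh:.0f}/MWh before financing and replacement")
    # ~$146/MWh with these inputs; drop the capacity factor to New York
    # levels (~0.15) and it climbs past $240/MWh.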


> I also think you would need more than 24 hours battery. You have to prepare for freak weather events that reduce system capacity.

In general, yes. Not really in the context of utility generation for a DC, though. A DC should have onsite backup generation, at least to supply critical loads. If your contracted utility PV + storage runs out, and there's no spare grid capacity available (or it's too expensive) you can switch to onsite power for the duration. The capex for backup power is already required, so you're just looking at additional spending for fuel, maybe maintenance if the situation requires enough hours on backup.


You ignore the fact that these datacenters also operate at night and in windless times.

PV did get spectacularly cheaper, but is not a panacea.

Nuclear is a great fit for constant load, for example a cloud datacenter, where relatively constant utilization is also a business goal and multiple incentives are in place to promote it (e.g. spot pricing to move part of the load off of peaks).


Nuclear power is reliable 24/7 while wind and solar are not and handling this costs money. Microsoft has said that they have more GPUs than electricity to run them so even at $110/MWh it makes sense for them.

I don't know where this '24/7' stuff comes from; they have maintenance outages like anything else. Refueling takes months every couple of years, so you're going to have to "handle this" even with nuclear.

France’s nuclear fleet has an average capacity factor of ~75%… so less “24/7” and more like 18/7, or 24/5.25, or something…

The US fleet was at 93% capacity factor in 2023.

https://www.nei.org/resources/statistics/us-nuclear-generati...

As for France's capacity factor, that has a lot to do with the presence of intermittents on the continental grid, combined with the EU's Renewable Energy Directive making France liable to pay fines if they use nuclear power in preference to wind/solar.


And had half their fleet offline at the peak of the energy crisis caused by the Russian invasion of Ukraine.

https://www.nytimes.com/2022/11/15/business/nuclear-power-fr...

In Sweden this year we’ve had 2 separate instances of 50% of the fleet being offline. With one reactor having a 7 month unscheduled outage.

I just don’t get where this ”100% reliable!!!!” is coming from.


"they have maintenance outages like anything else"

Not often, and most importantly they are PREDICTABLE. You do understand why being able to control when a power plant is operating is a very important thing, right?


I thought the conversation was regarding utilization of capital, in which case 80% is 80%; predictability doesn't change the fact that you have to let GPUs sit idle 20% of the time.
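
The capital math is simple enough to sketch; the GPU price and lifetime below are assumptions for illustration:

    # Effective capital cost per GPU-hour at different utilizations.
    gpu_capex = 30_000.0       # $ per GPU, assumed
    lifetime_hours = 4 * 8760  # assumed 4-year useful life

    for utilization in (1.0, 0.8, 0.5):
        cost = gpu_capex / (lifetime_hours * utilization)
        print(f"{utilization:.0%} utilization -> ${cost:.2f}/GPU-hour of capex")
    # 100% -> $0.86, 80% -> $1.07, 50% -> $1.71: sun-only operation
    # roughly doubles the capital cost per useful hour.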

I guess if I knew there would be two months with less power I might design my data center to fit into 40 foot containers so I could deploy wherever power and latency are cheapest


The Wikipedia page makes it seem like it's been largely cleaned up for decades:

> In 1988, the NRC announced that, although it was possible to further decontaminate the Unit 2 site, the remaining radioactivity had been sufficiently contained as to pose no threat to public health and safety.

https://en.wikipedia.org/wiki/Three_Mile_Island_accident


No.

Chernobyl (which was a far worse accident) continued to produce power at other units on the same site for 14 years after the meltdown of unit 4.


TMI was never irradiated and unsafe for humans to inhabit.

The article says the reactor they are bringing back on was active until as recently as 2019, so it's safe to say it's probably not uninhabitable.

My dad is in his mid-60s, and I'm pretty convinced he's going to be like that. He's not a software engineer, mostly a mechanical engineer, but it's pretty rare that I talk to him and he's not hacking on something mechanical.

I'm not talking just woodshop stuff; he is actually doing math and calculations for little things that he's building. He is an engineer by blood who happened to make a career out of it.


I'm not a pro or anything, and I don't edit video super often, but I would like to point out that Lightworks is quite good, and offers a perpetual license [1] for $420 that is very often on sale.

I don't have the ability to compare these things in intimate detail, but Lightworks has at least been used for "real" productions [2] so I think it's production-ready.

[1] https://lwks.com/pricing

[2] https://en.wikipedia.org/wiki/Lightworks#Users


Resolve Studio is more feature-rich by a significant margin and two-thirds the price. Lightworks is a respectable program, but I can't really see picking it over Resolve Studio, tbh. Definitely not for professional work.

I still root for them, though. More NLEs are good for the editing world as a whole, and who knows, BMD could heel-turn on us and ruin Resolve. I've gone through 3 different NLEs since 2011 (FCPX -> briefly Premiere -> Resolve), so I definitely don't plan for more than 3-5 years ahead lol


I remember someone telling me at a conference that they define all their programs' types (where possible) as Protocol Buffers, since this guarantees that there's a reasonably efficient way of serializing and deserializing anything they need in basically any language/platform that they could realistically write software in.

I don't know if I would go that far, but I kind of find the idea interesting; if everything can be encoded and decoded identically, then the choice of language stops mattering so much.
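
For the curious, the round trip in Python looks roughly like this, assuming a hypothetical Event message compiled with protoc (the message definition and generated module are invented for this example; SerializeToString and ParseFromString are the real generated-class methods):

    # Assumes a hypothetical event.proto compiled via:
    #   protoc --python_out=. event.proto
    # containing: message Event { string id = 1; int64 timestamp_ms = 2; }
    from event_pb2 import Event  # hypothetical generated module

    e = Event(id="abc123", timestamp_ms=1700000000000)
    wire_bytes = e.SerializeToString()   # identical bytes from any language

    decoded = Event()
    decoded.ParseFromString(wire_bytes)  # and identical decoding back
    assert decoded.id == "abc123"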


The comment you were replying to was about Microsoft.

Even if Windows weren't a dogshit product, which it is, Microsoft is a lot more than just an operating system. In the '90s they actively tried to sabotage any competition in the web space, and held web standards back by refusing to make Internet Explorer actually work.


If Microsoft hadn't tried to actively kill all its competition then there's a good chance that we'd have a much better internet. Microsoft is bigger than just an operating system, they're a whole corporation.

Instead they actively tried to murder open standards [1] that they viewed as competitive and normalized the antitrust nightmare that we have now.

I think by nearly any measure, Microsoft is not a net good. They didn't invent the operating system; lots of operating systems came out in the '80s and '90s, many of which were better than Windows and didn't have the horrible anticompetitive baggage attached to them.

[1] https://en.wikipedia.org/wiki/Embrace,_extend,_and_extinguis...


Alternatively: had MS Embraced and Extended harder instead of trying to extinguish ASAP we’d have a much better internet owned to a much higher degree by MS.

A few decades back, Microsoft was first to the prize with asynchronous JavaScript; Silverlight really was Flash done better and still missed; a proper extension of their VB6/MFC client & dev experience out to the web would have gobbled up a generation of SaaS offerings; and they had a first-in-class data analysis framework with an integrated REPL (F#) that nailed the central demands of distributed/cloud-first systems and systems configuration. That's on top of near-perfect control of the document and consumer desktop ecosystems and some nutty visualization & storage capabilities.

Plug a few of their demos from 2002 - 2007 together and you’ve got a stack and customer experience we’re still hurting for.


Microsoft is a company that hasn't even figured out how to get system updating working consistently on their premier operating system in three decades. It seems unlikely to me that somehow moving to Azure is going to make anything more stable.

Wingdings isn't really a "font" in the same way that Times New Roman is a "font". Wingdings and Webdings were basically proto-emojis, a vestige of the old "dingbats" publishers would put at the top of chapter pages to make them look nice.

https://youtu.be/JdKV1L1DJHc


About a decade ago, I was working with a guy who was getting a PhD in search engine design, which I knew/know nothing about.

It was actually a lot of fun to chat with him, because he was so enthusiastic about how searching works and how it can integrate with databases, and he was eager to explain this all to anyone who would listen. I learned a fair amount from him, though admittedly I still don't know much about the intricacies of how search engines work.

Some day, I am going to really go through the guts of Apache Solr and Lucene to understand the internals (like I did for Kafka a few years ago), and maybe I'll finally be competent with it.
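
At their core, engines like Lucene are built around an inverted index, and a toy version of the idea fits in a few lines (this is nothing like Lucene's actual tokenization, scoring, or on-disk segment format):

    # Toy inverted index: term -> set of document ids.
    from collections import defaultdict

    docs = {
        1: "kafka is a distributed log",
        2: "lucene is an inverted index library",
        3: "solr wraps lucene with a search server",
    }

    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.split():
            index[term].add(doc_id)

    def search(*terms):
        """Documents containing every query term (AND semantics)."""
        return set.intersection(*(index[t] for t in terms))

    print(search("lucene"))            # {2, 3}
    print(search("lucene", "search"))  # {3}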


People who work on really obscure things love to talk about their work; heck, if someone would listen, I could talk for hours about what I do.

Unfortunately, very few people care about the minutiae of making a behemoth system work.


As I have gotten older, I have grown immense respect for older people who can geek out over stuff.

It’s so easy to be cynical and not care about anything, I am certainly guilty of that. Older people who have found things that they can truly geek out about for hours are relatively rare and some of my favorite people as a result (and part of the reason that I like going to conferences).

I like my coworkers, and they're certainly not anti-intellectual or anything, but there's only so long I can ramble on about TLA+ or Isabelle or Alloy before they lose interest. That's no fault of theirs at all; there are plenty of topics I am not interested in either.


It seems a common problem in our profession that you can’t really talk to anybody about what you are doing. My friends have a vague idea but that’s it.

I would be more than interested to listen to you and what you do. Do not hesitate to share (blog post, AskHN, ShowHN, ...)

I would. Heck, I bet half of HN would be interested in what kind of insanity lies under those behemoths.

I work on music streaming; it is mostly just a lot of really banal business rules that become an entangled web of convoluted if statements. Deciding whether to show a single button might mean hitting 5 different microservices and checking 10 different booleans.
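
Something like this caricature, where every service name and stubbed answer is made up:

    # One button, five services. In reality each answer is a network
    # call to a different microservice; here they're stubbed booleans.
    service_answers = {
        "entitlements.has_offline": True,
        "licensing.offline_allowed": True,
        "geo.region_permits": False,  # one stale boolean hides the button
        "quota.under_limit": True,
        "flags.offline_v2_enabled": True,
    }

    def can_show_download_button() -> bool:
        return all(service_answers.values())

    print(can_show_download_button())  # False, and good luck finding out why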

Greg Graffin, lead singer for punk rock band Bad Religion, has a PhD in zoology [1] and is a frequent university lecturer.

[1] https://en.wikipedia.org/wiki/Greg_Graffin

