> When the network is bad, you get... fewer JPEGs. That’s it. The ones that arrive are perfect.
This would make sense... if they were using UDP, but they are using TCP. All the JPEGs they send will get there eventually (unless the connection drops). JPEG does not fix your buffering and congestion control problems. What presumably happened here is that, in the way they implemented their JPEG screenshots, they have some mechanism that minimizes the number of frames that are in flight. This is not some inherent property of JPEG though.
> And the size! A 70% quality JPEG of a 1080p desktop is like 100-150KB. A single H.264 keyframe is 200-500KB. We’re sending LESS data per frame AND getting better reliability.
h.264 has better coding efficiency than JPEG. For a given target size, you should be able to get better quality from an h.264 IDR frame than a JPEG. There is no fixed size to an IDR frame.
Ultimately, the problem here is a lack of bandwidth estimation (apart from the sort of binary "good network"/"cafe mode" thing they ultimately implemented). To be fair, this is difficult to do and being stuck with TCP makes it a bit more difficult. Still, you can do an initial bandwidth probe and then look for increasing transmission latency as a sign that the network is congested. Back off your bitrate (and if needed reduce frame rate to maintain sufficient quality) until transmission latency starts to decrease again.
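To make that concrete, here's a rough sketch of the kind of latency-based rate control I'm describing (the Encoder interface, thresholds and constants below are all made up for illustration, not anything from the article):

```ts
// Hypothetical encoder interface; real encoders expose something similar.
type Encoder = { setBitrate(bps: number): void; setFrameRate(fps: number): void };

class RateController {
  private ewmaSendMs = 0;        // smoothed per-frame transmission latency
  private bitrate = 2_000_000;   // seed from an initial bandwidth probe

  constructor(private encoder: Encoder) {}

  // Call with how long each frame took to transmit.
  onFrameSent(sendMs: number) {
    this.ewmaSendMs = this.ewmaSendMs === 0 ? sendMs : 0.9 * this.ewmaSendMs + 0.1 * sendMs;

    if (this.ewmaSendMs > 150) {
      // Transmission latency is climbing: the network is congested, back off ~20%.
      this.bitrate = Math.max(250_000, this.bitrate * 0.8);
      this.encoder.setBitrate(this.bitrate);
      // If quality would get too blocky at this bitrate, trade frame rate instead.
      if (this.bitrate <= 500_000) this.encoder.setFrameRate(10);
    } else if (this.ewmaSendMs < 50) {
      // Latency is low again: probe upward slowly.
      this.bitrate = Math.min(8_000_000, this.bitrate * 1.05);
      this.encoder.setBitrate(this.bitrate);
    }
  }
}
```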
WebRTC will do this for you if you can use it, which actually suggests a different solution to this problem: use websockets for dumb corporate network firewall rules and just use WebRTC for everything else.
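If you want the fallback flavor of that, something along these lines works in the browser (signaling is omitted, and the 5-second timeout and URL are arbitrary):

```ts
// Try WebRTC first; if the connection never reaches "connected" (e.g. UDP is
// blocked by a corporate firewall), fall back to streaming over a WebSocket.
async function pickTransport(wsUrl: string): Promise<"webrtc" | "websocket"> {
  const pc = new RTCPeerConnection();
  // ... offer/answer signaling with the server omitted ...
  const connected = await new Promise<boolean>((resolve) => {
    const timer = setTimeout(() => resolve(false), 5000);
    pc.addEventListener("connectionstatechange", () => {
      if (pc.connectionState === "connected") { clearTimeout(timer); resolve(true); }
      if (pc.connectionState === "failed") { clearTimeout(timer); resolve(false); }
    });
  });
  if (connected) return "webrtc";

  pc.close();
  const ws = new WebSocket(wsUrl); // dumb-but-reliable fallback path
  await new Promise((resolve) => ws.addEventListener("open", resolve));
  return "websocket";
}
```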
They shared the polling code in the article. It doesn't request another jpeg until the previous one finishes downloading. UDP is not necessary to write a loop.
> They shared the polling code in the article. It doesn't request another jpeg until the previous one finishes downloading.
You're right, I don't know how I managed to skip over that.
> UDP is not necessary to write a loop.
True, but this doesn't really have anything to do with using JPEG either. They basically implemented a primitive form of rate control by only allowing a single frame to be in flight at once. It was easier for them to do that using JPEG because they (by their own admission) seem to have limited control over their encode pipeline.
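For anyone who skipped it like I did, the shape of that loop is roughly this (my guess at it, not the article's actual code):

```ts
// One frame in flight at a time: don't ask for the next JPEG until the
// previous one has fully downloaded.
async function pollFrames(url: string, img: HTMLImageElement) {
  let prevUrl: string | null = null;
  for (;;) {
    const res = await fetch(`${url}?t=${Date.now()}`); // cache-bust each request
    const blob = await res.blob();                     // resolves only once the JPEG is fully received
    if (prevUrl) URL.revokeObjectURL(prevUrl);         // don't leak old frames
    prevUrl = img.src = URL.createObjectURL(blob);
    // On a congested link the awaits just take longer, so the request rate
    // drops automatically: a crude but effective form of rate control.
  }
}
```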
> have limited control over their encode pipeline.
Frustratingly this seems common in many video encoding technologies. The code is opaque, often relying on special kernel, GPU and hardware interfaces that are closed source, and by the time you get to the user API (native or browser) all the knobs have been abstracted away, so simple things like choosing which frame to use as a keyframe are impossible to do.
I had what I thought was a simple use case for a video codec - I needed to encode two 30 frame videos as small as possible, and I knew the first 15 frames were common between the videos so I wouldn't need to encode them twice.
I couldn't find a single video codec which could do that without extensive internal surgery to save all internal state after the 15th frame.
A 15 frame min and max GOP size would do the trick; then you'd get two 15-frame GOPs. Each GOP can be concatenated with another GOP with the same properties (resolution, format, etc.) as if they were independent streams. So there is actually a way to do this. This is how video splitting and joining without re-encoding works, at GOP boundaries.
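For example, with ffmpeg/libx264 (flags from memory, so double-check against your build), forcing fixed closed GOPs looks something like this:

```ts
import { execFileSync } from "node:child_process";

// Force closed, fixed 15-frame GOPs with no scene-cut keyframes, so the output
// can be cut and concatenated at GOP boundaries without re-encoding.
function encodeFixedGop(input: string, output: string) {
  execFileSync("ffmpeg", [
    "-i", input,
    "-c:v", "libx264",
    "-g", "15",           // maximum GOP length
    "-keyint_min", "15",  // minimum GOP length
    "-sc_threshold", "0", // disable scene-cut keyframe insertion
    "-flags", "+cgop",    // closed GOPs so segments are self-contained
    output,
  ]);
}
```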
In my case, bandwidth really mattered, so I wanted it all in one GOP.
Ended up making a bunch of patches to libx264 to do it, but the compute cost of all the encoding on CPU is crazy high. On the decode side (which runs on consumer devices), we just make the user decode the prefix many times.
A word processor can save its state at an arbitrary point... That's what the save button is for, and it's functional at any point in the document writing process!
In fact, nearly everything in computing is serializable - or if it isn't, there is some other project with a similar purpose which is.
However, this is not the case with video codecs - just one of many examples of where the video codec landscape is limiting.
Another example is that on the internet lots of videos have a 'poster frame' - often the first frame of the video. For nearly all use cases that frame ends up downloaded twice - once as a JPEG, and again inside the video content. There is no reasonable way to avoid that - but doing so would reduce the latency to play videos by quite a lot!
> A word processor can save its state at an arbitrary point... That's what the save button is for, and it's functional at any point in the document writing process!
No, they generally can't save their whole internal state to be resumed later, and definitely not in the document you were editing. For example, when you save a document in vim it doesn't store the mode you were in, or the keyboard macro step that was executing, or the search buffer, or anything like that.
> In fact, nearly everything in computing is serializable - or if it isn't, there is some other project with a similar purpose which is.
Serializable in principle, maybe. Actually serializable in the sense that the code contains a way to dump to a file and back, absolutely not. It's extremely rare for programs to expose a way to save and restore from a mid-state in the algorithm they're implementing.
> Another example is that on the internet lots of videos have a 'poster frame' - often the first frame of the video. For nearly all use cases that frame ends up downloaded twice - once as a JPEG, and again inside the video content.
Actually, it's extremely common for a video thumbnail to contain extra edits such as overlaid text and other graphics that don't end up in the video itself. It's also very common for the thumbnail to not be the first frame in the video.
> Serializable in principle, maybe. Actually serializable in the sense that the code contains a way to dump to a file and back, absolutely not. It's extremely rare for programs to expose a way to save and restore from a mid-state in the algorithm they're implementing.
If you should ever look for an actual example: Cubemap, my video reflector (https://manpages.debian.org/testing/cubemap/cubemap.1.en.htm...), works like that. It supports both config change and binary upgrade by serializing its entire state down to a file and then re-execing itself.
It's very satisfying; you can have long-running HTTP connections and upgrade everything mid-flight without a hitch (serialization, exec and deserialization typically takes 20–30 ms or so). But it means that I can hardly use any libraries at all; I have to use a library for TLS setup (the actual bytes are sent through kTLS, but someone needs to do the asymmetric crypto and I'm not stupid enough to do that myself), but it was a pain to find one that could serialize its state. TLSe, which I use, does, but not if you're at certain points in the middle of the key exchange.
Why not hand off the fd to the new process spawned as a child? That’s how a lot of professional 0 downtime upgrades work: spawn a process, hand off fd & state, exit.
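In Node terms (picking it only because it makes handle passing easy; the message shape is made up), the pattern looks roughly like this:

```ts
import { fork } from "node:child_process";
import { createServer } from "node:net";

const server = createServer(/* connection handler */).listen(8080);

// Spawn the new version, send it the listening handle plus serialized state,
// then stop accepting in the old process and exit once in-flight work drains.
function upgrade(newBinary: string, state: object) {
  const child = fork(newBinary);
  child.send({ type: "takeover", state }, server); // Node can transfer net.Server handles
  server.close();
  // ... finish existing connections, then process.exit(0)
}

// In the child: process.on("message", (msg, handle) => { /* resume using handle */ });
```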
> No, they generally can't save their whole internal state to be resumed later, and definitely not in the document you were editing.
I broadly agree, but I feel you chose a poor example - Vim.
> For example, when you save a document in vim it doesn't store the mode you were in,
Without user-mods, it does in fact start up in the mode that you were in when you saved, because you can only save in command/normal mode.
> or the keyboard macro step that was executing,
Without user-mods, you aren't able to interrupt a macro that is executing anyway, so if you cannot save mid-macro, why would you load mid-macro?
> or the search buffer,
Vim, by default, "remembers" all my previous searches, all the macros, and all my undos, even across sessions. The undo history is remembered per file.
> A word processor can save its state at an arbitrary point...
As its ENTIRE STATE. Video codecs operate on essentially a full frame + a stream of differences. You might say it's similar to git and you'd be incorrect again, because while with git you can take the current state and "go back" using diffs, that is not the case for video: it always goes forward from the keyframe and resets at the next keyframe.
It's a fundamentally order-of-magnitude more complex problem to handle.
I'm on a media engineering team and agree that applying the tech to a new use case often involves people with deep expertise spending a lot of time in the code.
I'd guess there are fewer media/codec engineers around today than there were web developers in 2006. In 2006, Gmail existed, but today's client- and server-side frameworks did not. It was a major bespoke lift to do many things which are "hello world" demos with a modern framework in 2025.
It'd be nice to have more flexible, orthogonal and adaptable interfaces to a lot of this tech, but I don't think the demand for it reaches critical mass.
> It was a major bespoke lift to do many things which are "hello world" demos with a modern framework in 2025.
This brings back a lot of memories -- I remember teaching myself how to use plain XMLHTTPRequest and PHP/MySQL to implement "AJAX" chat. Boy was that ugly JavaScript code. But on the other hand, it was so fast and cool and I could hardly believe that I had written that.
I started doing media/codec work around 2007 and finding experienced media engineers at the time was difficult and had been for quite some time. It's always been hard - super specialized knowledge that you can only really pick up working at a company that does it often enough to invest in folks learning it. In my case we were at a company that did desktop video editing software so it made sense, but that's obviously uncommon.
So for US -> Australia/Asia, wouldn't that limit you to 6 fps or so due to the half-RTT? Each time a frame finishes arriving, it takes 150 ms or so for your new request to reach the server.
Probably either (1) they don't request another JPEG until they have the previous one on-screen (so everything is completely serialized and there are no frames "in flight" ever), or (2) they're doing a fresh GET for each and getting a new connection anyway (unless that kind of thing is pipelined these days, in which case it still falls back to (1) above).
You can still get this backpressure properly even if you're doing it push-style. The TCP socket will eventually fill up its buffer and start blocking your writes. When that happens, you stop encoding new frames until the socket is able to send again.
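In Node, for instance, that signal is just write() returning false (nextFrame here is a stand-in for whatever produces encoded frames):

```ts
import { Socket } from "node:net";
import { once } from "node:events";

// Push-style sender that respects TCP backpressure: when the socket's send
// buffer is full, write() returns false, so we pause encoding until 'drain'.
async function sendFrames(socket: Socket, nextFrame: () => Buffer) {
  for (;;) {
    const frame = nextFrame();     // encode one frame (hypothetical callback)
    if (!socket.write(frame)) {
      await once(socket, "drain"); // buffer full: stop producing until it empties
    }
  }
}
```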
You probably won't get acceptable latency this way since you have no control over buffer sizes on all the boxes between you and the receiver. Buffer bloat is a real problem. That said, yeah if you're getting 30-45 seconds behind at 40 Mbps you've probably got a fair bit of sender-side buffering happening.
> you have no control over buffer sizes on all the boxes between you and the receiver
You certainly do; the amount of data buffered can never be larger than the actual number of bytes you've sent out. Bufferbloat happens when you send too much stuff at once and nothing (typically the candidate to do so would be either the congestion window or some intermediate buffer) stops it from piling up in an intermediate buffer. If you just send less from userspace in the first place (which isn't a good thing to do for e.g. a typical web server, but _can_ be for this kind of video conference-like application), it can't pile up anywhere.
(You could argue that strictly speaking, you have no control over the _buffer_ sizes, but that doesn't matter in practice if you're bounding the _buffered data_ sizes.)
Related tangent: it's remarkable to me how a given jpeg can be literally visually indistinguishable from another (by a human on a decent monitor) yet consist of 10-15% as many bytes. I got pretty deep into web performance and image optimization in the late 2000s and it was gratifying to have so much low-hanging fruit.
> Still, you can do an initial bandwidth probe and then look for increasing transmission latency as a sign that the network is congested. Back off your bitrate (and if needed reduce frame rate to maintain sufficient quality) until transmission latency starts to decrease again.
They said playing around with bitrate didn't reduce the latency; all that happened was they got blocky videos with the latency remaining the same.
I am almost sure that the ideal solution would involve using a proper video codec, but the issue is implementation complexity and having to implement a production encoder yourself if your use case is unusual.
This is exactly the point of the article: they tried keyframes only, but their library had a bug that broke it.
Regarding the encoding efficiency, I imagine the problem is that the compromise in quality shows in the space dimension (aka fewer or blurry pixels) rather than in time. Users need to read text clearly, so the compromise in the time dimension (fewer frames) sounds just fine.
Nothing stopping you from encoding h264 at a low frame rate like 5 or 10 fps. In WebRTC, you can actually specify how you want to handle low-bitrate situations with degradationPreference. If set to maintain-resolution, it will prefer sacrificing frame rate.
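Roughly like this (browser support and TS typings for degradationPreference vary, so treat it as a sketch):

```ts
// Ask WebRTC to sacrifice frame rate rather than resolution at low bitrates.
async function preferResolution(pc: RTCPeerConnection) {
  const sender = pc.getSenders().find((s) => s.track?.kind === "video");
  if (!sender) return;
  const params = sender.getParameters() as RTCRtpSendParameters & {
    degradationPreference?: "maintain-framerate" | "maintain-resolution" | "balanced";
  };
  params.degradationPreference = "maintain-resolution";
  await sender.setParameters(params);
}
```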
FWIW, they still seem to have not actually pulled the trigger on the account requirement and they've removed the "Starting soon" portion of the nag bar text in the Hue app (though it's still on the web page you get to when hitting "Learn more"). I do wish they would either get it over with or make it clear they're not actually going ahead with forcing accounts though.
You totally can do it with some combination of overbuilding, storage and increased interconnection. It just starts to get expensive the higher the portion of your generation you want to supply with renewables. There's a good Construction Physics article[0] about this (though it simplifies by only looking at solar, batteries and natural gas plants and mostly does not distinguish between peaker and more baseload oriented combined cycle plants).
Personally, while I'm not opposed to nuclear, I'm pretty bearish on it. Most places are seeing nuclear get more expensive and not less. Meanwhile solar and batteries are getting cheaper. There's also the issue that nuclear reactors are generally most economical when operating with very high load factors (i.e. baseload generation) because they have high capital costs, but low fuel costs. Renewables make the net-demand curve (demand - renewable generation) very lumpy which generally favors dispatchable (peaker plants, batteries, etc.) generation over baseload.
Now a lot of what makes nuclear expensive (especially in the US) is some combination of regulatory posture and lack of experience (we build these very infrequently). We will also eventually hit a limit on how cheap solar and batteries can get. So it's definitely possible current trends will not hold, but current trends are not favorable. Currently the cheapest way to add incremental zero-carbon energy is solar + batteries. By the time you deploy enough that nuclear starts getting competitive on an LCOE basis, solar and batteries will probably have gotten cheaper and nuclear might have gotten more expensive.
> Renewables make the net-demand curve (demand - renewable generation) very lumpy which generally favors dispatchable (peaker plants, batteries, etc.) generation over baseload.
Even without renewables in the equation, the demand side of the curve is already extremely lumpy. If you're only affordable when you're operating near 100% of the time (i.e. "baseload") you simply can't make up the majority of power generation. Batteries are poised to change this - but at that point you've got to be cheaper than the intermittent power sources.
If the goal is 100% carbon-free energy, then we simply can't let economics get in the way. Otherwise we will always be stuck building some natural gas peaker plants.
And one option is to mass produce nuclear power plants, get prices down even further via economies of scale and then run them uneconomically.
Uneconomically doesn't mean "at a loss", just that you aren't making as much profit as you could optimally. With enough economies of scale, we can probably still run these nuclear plants at a profit, maybe even cheaper than natural gas peakers. But it doesn't matter; the goal is saving the planet, not profit.
It's not the only option, you can also build massive amounts of wind/solar/tidal and pair them with massive amounts of battery storage.
The third option is to build way more hydro power plants. Hydro tends to get overlooked as a form of green energy, because while it might be 100% renewable, you do have to "modify" a local ecosystem to construct a new dam. But hydro has the massive advantage that it can work as both baseload and load-following generation, so it can pair nicely with wind/solar/tidal.
I'm not even talking about pumped hydro (though that's a fourth option to consider). Regular hydro can work as energy storage by simply turning the turbines off and letting the lakes fill up whenever there is sufficient power from your other sources.
Yeah, I'm just arguing that "baseload" should be understood to be a bad thing in my comment above.
If you want to argue that nuclear is affordable as non-baseload power, because the (non-economic) cost to the environment of the alternatives is otherwise too high.... well I'd disagree because of how far solar/wind/batteries have come in the last couple of years, but prior to that you would have had a point. And you still would as far as continuing to operate existing plants goes of course.
Nuclear power has a massive handicap: most R&D was abandoned back in the 80s because it was uneconomic. And another handicap: the R&D it did get was never that focused on economics; commercial nuclear power was always a side effect of the true goal (small reactors for nuclear submarines and breeder reactors for nuclear weapons). And to get the promised low costs, you really need to commit and take advantage of massive economies of scale.
I'm not arguing that when taking environmental damage into account, that nuclear is cheaper than current solar/wind/battery technology for any single power project. They have the advantage of massive R&D over the last 30 years.
What I am arguing is that focusing on solar/wind/battery might not be the best route to 100% carbon free power in the long term. Maybe it is? But we really shouldn't be jumping to that assumption.
And we shouldn't be disregarding Nuclear because of any argument that can be summed up in a hacker news comment.
... voters (or however we want to handwave preference aggregation) are very passive about carbon-free energy (and global warming and sustainability and economics and ...)
they either pick some pet peeves (coral reefs, rainforests, global South inequality, desertification) and usually start buying things (EVs, PV panels, heat pumps)
but when it comes to policy they usually revert to Greenpeace/degrowth/NIMBY cult members
This is not how nuclear works. Nuclear sets a low price that corresponds to its cost, then lets more expensive marginal energy sources set the final price. Nuclear can, by the way, be modulated +20%/-20%, which makes it quite flexible in real conditions. See https://www.rte-france.com/en/eco2mix/power-generation-energ... as a proof - nuclear generation in France can go from 25GW to 45GW during a day.
New small modular reactors promise great improvements, as they can be pre-built in factories, require limited maintenance, lower risk, and as a result much lower cost per MW.
> This is not how nuclear works. Nuclear sets a low price that corresponds to its cost, then lets more expensive marginal energy sources set the final price.
This may be an accurate description for fully-depreciated nuclear plants, but it doesn't reflect the economics of new-build nuclear at all. You have to consider both operating and capital costs. Nuclear plants are cheap to operate once built, but those operations have to pay off the capital costs. If the load factor is low, then each unit of generated power has to bear a higher portion of the capital costs. If your capital costs are very high, then you either need a very high load factor or very high spot prices to bear those costs.
> Nuclear can, by the way, be modulated +20%/-20%
Net demand on CAISO can go from about 2 GW to 30 GW in the summer. 20 GW of that ramp occurs over just 3 hours. I'm sure you can build nuclear plants that ramp that fast, but you need a lot more than the range you're mentioning here. Regardless, I'm not making an argument about the physics of nuclear power plants, just the economics. Expensive plants generally need high load factors to pay off the capital costs.
> nuclear generation in France can go from 25GW to 45GW during a day.
Most of France's nuclear plants are old and thus fully depreciated. The only one built recently (Flamanville Unit 3) is a good example of the bad cost trend in nuclear. While this was a bit cheaper than Vogtle Units 3 and 4 in the US on a dollars-per-nameplate-capacity basis, at 19 billion euros it's still very expensive (and also way over budget).
France also has high rates of curtailment, which is not necessarily a huge problem for them since so much of their generation is already carbon-free, but it does suggest they're already hitting the limits of their ability to ramp production up and down. Whether this is an engineering problem or something to do with the structure of their electricity market is a bit unclear to me.
> New small modular reactors promise great improvements, as they can be pre-built in factories, require limited maintenance, lower risk, and as a result much lower cost per MW.
This has been the promise for years, but so far the low costs have yet to materialize and they are estimated to have a higher LCOE than traditional plants. Currently only 2 are actually operational, a demonstration plant in China and a floating power plant using adapted ice-breaker reactors in Russia. There are a few more in the pipeline, but they are all at least a couple years out from actually producing power.
> This may be an accurate description for fully-depreciated nuclear plants, but it doesn't reflect the economics of new-build nuclear at all.
I'm talking about the wholesale market, which works as an auction, where producers give their price for units of capacity, and the clearing price is set by the marginal producer. Typically, nuclear reactors will give their marginal cost, near 0, and let the more expensive producers set the clearing price. Given that capital cost is a sunk cost, it doesn't matter to nuclear plants as long as the market price is above the marginal one. So-called "renewables" do this as well, but have to account for the risk that mother nature doesn't provide, and therefore factor in the risk of having to buy coal- or gas-produced electricity on the spot.
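A toy version of that merit-order clearing, with made-up numbers, just to show where nuclear's margin comes from:

```ts
type Offer = { producer: string; mw: number; price: number }; // price in EUR/MWh

// Sort offers by price and accept them until demand is met; the last accepted
// (marginal) offer sets the clearing price every accepted producer receives.
function clearingPrice(offers: Offer[], demandMw: number): number {
  const sorted = [...offers].sort((a, b) => a.price - b.price);
  let met = 0;
  for (const o of sorted) {
    met += o.mw;
    if (met >= demandMw) return o.price;
  }
  throw new Error("demand exceeds offered capacity");
}

// Nuclear bids near its marginal cost; if gas is still needed, gas sets the price
// and the nuclear plant pockets the difference toward its (sunk) capital costs.
clearingPrice(
  [{ producer: "nuclear", mw: 900, price: 5 }, { producer: "gas", mw: 400, price: 80 }],
  1000,
); // -> 80
```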
> Net demand on CAISO can go from about 2 GW to 30 GW in the summer.
Well, if this is the case then this is not a "nuclear-sized" market, and other ways of supplying capacity are better. But remember that it's estimated that blackouts are much, much more costly society-wise than whatever marginal price you could pay for electricity, so having a baseload and some excess capacity is always good. This is also why many electricity producers are nationalized. It's not a market like the others.
> Flamanville
France has the strictest regulator of the world, which adds a lot of costs, and Flamanville required to re-learn many things after losing the expertise from the 70's. For the record, an airliner should be able to fall on Flamanville without any problem, due to regulations.
> Curtailments
Excess electricity is sold in Germany, which lacks a much-needed baseload, especially since they have a big industry. Most people don't realize that electricity consumption follows Pareto's law, with around 1k industrial plants consuming around 50% of the electricity (sorry, no source for this, my econ teacher said it in a class a few years ago!).
> SMR
Yes, still in development; many different designs, so cost estimates are difficult to make. I'm citing Wikipedia's[0]. The good thing is that the possibility of building them serially should bring costs down a lot as demand ramps up.
The massive capital costs of the plant have to be paid back in the sale price of energy; that's what makes it expensive. France's state-built plants don't have that accounting.
But the market price asked by a nuclear plant doesn't account for capital costs, as the real price will always be higher. What is important is the clearing price, not the asking price.
Thank you for this, this along with many other comments have really helped me understand.
This isn't a simple issue, and I think your basic common-sense take now mostly aligns with mine (correct me if I'm wrong), which is something along these lines: we don't have to be anti-nuclear specifically, but we do have to be bearish, because its downsides mean that if we are going to use it for some specific use case, we'd better be sure the pros are significant enough to outweigh the natural cons it brings with it.
It is possible because interconnecting grids at the continental level has been The Way for quite a while, even without any renewables, because it enables operators to optimize (preferring less-emitting and cheaper production units) and also to obtain better service guarantees (fewer blackouts!).
It also enables over-produced electricity to power electrolyzers. 'Green hydrogen' obtained this way can power the backup/peakers (producing electricity to load-follow, and also when other intermittent equipment cannot produce enough on the spot), further reducing 'intermittency' effects. This isn't sci-fi ( https://www.gevernova.com/gas-power/future-of-energy/hydroge... ); many turbines can burn a mix (methane, hydrogen...) and some recent models can be retrofitted to burn hydrogen (no major investment, nor any need to reform existing heavy resources/organization).
I have a 2017 Bolt as my only car and the slow L3 charging is definitely a downside, but I haven't found it to be a huge issue in practice. On a trip long enough to worry about fast-charging you're going to need to stop to eat periodically anyway so if you plan your charging around meals you don't end up waiting too long. Obviously gets a bit more annoying on trips that are long enough to require more than one fast-charge per-day, but I don't take trips that long frequently.
Day to day charging is generally all going to be L2 or even L1 depending on how far you drive and how long typically parked somewhere with a plug. That will be roughly the same speed in any car. Some cars do have higher capacity L2 chargers than the Bolt does, but most public L2 stations don't provide the higher current needed to see the difference.
This I think is the key that most non-EV drivers don't recognize. Especially if you own a home, then an EV is really fantastic. You simply plug it in whenever you are home and 99% of the time you spend 0 time waiting for a charge. The slow trickle charge is both cheaper and more convenient because you aren't making trips to a gas station on a weekly/biweekly basis.
Rather than planning your charging around meals, you're more likely going to have to plan your meals around charging. I don't see a lot of restaurants with level 3 chargers in the parking lot.
If you compare it to the commuter rail systems in those places, BART feels impressive (though less so with the service cuts). I was a regular rider on the Metro North New Haven line and had experience with SEPTA and NJT commuter rail and I was really impressed with BART when I moved out here. Peak frequency was pretty good (at least on the Red line I primarily used) and when things were on time they were very on-time ("on-time" Metro North trains were always at least a few minutes late in my experience).
If you compare it to the NYC subway, it's obviously not impressive at all (though the tech is less dated). As a rapid-transit system, BART isn't exactly a commuter rail or subway system, but I think it's closer to the former than the latter.
As another commenter pointed out, you can do pre-emptive multitasking just fine without an MMU. And as it turns out AmigaOS had just that. All you need for pre-emptive multitasking is a suitable interrupt source to use for task switching.
What it did not have was memory protection or virtual memory. You do need an MMU for those.
Went to Drexel for CS, but dropped out in my sophomore year back in 2004. Did PHP webdev in my home state of CT until 2011. Moved to the SF Bay Area and transitioned to doing Erlang and C++ for some F2P games for a while. I'm currently a Staff Engineer at Discord focused on AV and other "native" stuff.