Hacker News | fngjdflmdflg's comments

>what's really funny here is how absolutely horrified people are by the suggestion a single company which has a monopoly shouldn't also define the web platform

They don't. In general, browser specs are defined by various standards groups like the WHATWG. As far as I know there is no standard for which image formats must be supported by a web browser,[0] which is why in this one case any browser can decide whether to support an image format or not.

[0] https://developer.mozilla.org/en-US/docs/Web/HTML/Reference/...


>They barely have their toes in the middleware game anymore.

Well, they do have Steam Audio, but yeah, I agree. I think Epic is much better in this space; even though Unreal is only source-available, in practice they do a lot to support engine modifications and also accept external PRs. I think Valve has a lot to gain from open sourcing Source 2, and they should realize how important modding was to their initial success. The issue is that now they can just print money with Steam, so there is no need to invest in modding support.


The weighting of this study is strange. The difference in number of years of education maxes out at 1 point, while being raised in different locations and attending different school types are also given 1 point each. It seems unreasonable that going to school in London vs. New York should earn a point here when the average educational quality in both cities is potentially the same. It also means that someone with 4 more years of education but from the same city is considered educationally similar, and that it is impossible to reach the "educationally dissimilar" threshold (ED DIFF > 2) without one of the other two points.

I therefore feel there is some wordplay being done with the term "educational differences." I think some readers will assume that "educational differences" means "educational quality," but only one of the three metrics is directly tied to that. That said, there does seem to be some correlation, and that is interesting: we would expect no difference from location alone, yet there appears to be one. In my opinion the location variable is likely measuring something aside from, or in addition to, education. Some school types would also seem to be better than others, e.g. boarding school. Also worth noting that the "very educationally dissimilar" group is n = 10.

To their credit, the authors do admit that a "certain level of inference is involved with comparing pedagogies and curriculum" and that "Readers are encouraged to re-score and re-analyze the data in additional ways not done here." I would try weighting location much less and not capping the number of years of education at all, instead studying how the differences change as the gap in years grows.
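To make the re-scoring suggestion concrete, here is a minimal sketch. The function names, the 0.25 location weight, and the exact scoring scheme are all my own illustrative assumptions, not the study's code; only the "cap years at 1 point, 1 point each for location and school type" structure comes from the discussion above.

```python
# Hypothetical re-scoring sketch for the "ED DIFF" metric discussed above.
# All weights and names here are illustrative assumptions.

def ed_diff_original(years_gap, different_location, different_school_type):
    """Approximation of the study's scoring: each component worth at most 1 point."""
    return min(years_gap, 1) + int(different_location) + int(different_school_type)

def ed_diff_alternative(years_gap, different_location, different_school_type,
                        location_weight=0.25):
    """Uncapped years of education, location weighted much less (weight is a guess)."""
    return (years_gap
            + location_weight * int(different_location)
            + int(different_school_type))

# A pair with a 4-year education gap but the same city: "similar" under the
# original scheme (score 1), clearly dissimilar under the alternative.
print(ed_diff_original(4, False, False))     # 1
print(ed_diff_alternative(4, False, False))  # 4.0
```

Under the original scheme the maximum possible score from years alone is 1, so the "ED DIFF > 2" threshold indeed requires at least one of the other two points.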

Because commenters outside Japan may end up buying products containing chips made in Japan. If it were built in, let's say, France, people would be thinking less about potential invasions. Just as "obviously Japan is going to want to develop lucrative manufacturing within Japan," obviously people outside of Japan are going to want manufacturing that is not liable to be shut down or taken over in some way. Not that I myself think Japan and China will actually go to war any time soon.

>geopolitical narrative fed to them by the US state department

Just this week Japan and China have been getting into a fight over the current PM's comments about Taiwan. China has canceled some flights to Japan and complained to the UN, announcing it will defend itself from Japan.[0][1] I'm not sure what point you are trying to make here. Are you saying major disputes between China and Japan don't exist and are invented by the US State Department? Or that thinking about it in this context is the result of commenters being fed by the US State Department?

[0] https://www.scmp.com/news/china/article/3333992/china-blasts...

[1] https://www.reuters.com/world/china/china-takes-spat-with-ja...


Fun fact: there's more precedent for Russia successfully invading Paris than for it invading Japan. Although, to be fair, they had help.

https://en.wikipedia.org/wiki/War_of_the_Sixth_Coalition


The PRC and Japan is not a remotely comparable situation to the PRC and Taiwan.

The most the PRC could do is potentially sabotage production in Hokkaido, but if they can sabotage production in Hokkaido, they can sabotage it in Arizona.


I don't think China wants to go to war with Japan. I just mean to explain why people are focusing on geopolitical tensions. And the answer is that those tensions do exist, and they are partly why some countries are trying to become more self-sufficient to begin with. So discussion of it is valid; that is my main point. Once we get into those discussions, they might not be as high-quality or informed as in, let's say, a pure technology article, but that is to be expected.

Imagine HN was Japanese and everyone was talking about how the US was threatening to invade Greenland on a topic about a new plant in Montana.

More like a new plant in Iceland, after the PM of Iceland said any attack on Greenland would be a survival-threatening situation for Iceland.

To be clear, I think the comments about "geopolitical stability," or whatever term we use, are not as interesting as the new chip plant itself. Or at least they are a bit tired by now. I also wish Japan the best; I think they are fully capable of building such a factory and I hope they do so. But to claim that the geopolitical considerations are invented is wrong. In fact, one of the reasons the Japanese government is investing in local fabs to begin with is national security, as mentioned in the article:

>Securing control over chip manufacturing is being seen as a national security priority, both in Japan and elsewhere, as recent trade frictions and geopolitical tensions between China and Taiwan raise concerns around the risks of relying on foreign suppliers.

So yes, viewing the entire story through a geopolitical lens is understandable.


The built-in Steam DRM is very weak. Of course that can change at any time, but at least the current catalog of Steam-DRM-only games is not really tied down to Steam except via law/licensing.


Fascinating project. Based on section 3.9, it seems the output is in the form of a bitmap, so I assume you have to do a full memory copy to the GPU to display the image in the end. With Skia moving to WebGPU[0] and with WebGPU supporting compute shaders, I feel that 2D graphics is slowly becoming a solved problem in terms of portability and performance. Of course there are cases where you would want a CPU renderer. Interestingly, the web is sort of one of them, because you have to compile shaders at runtime on page load. I wonder if it could make sense in theory to have multiple stages here, sort of like how JS JITs work, where you would start with a CPU renderer while the GPU compiles its shaders. Another benefit, as the author mentions, is binary size: WebGPU (via Dawn, at least) is rather large.

[0] https://blog.chromium.org/2025/07/introducing-skia-graphite-...
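The JIT-like staging idea could be sketched roughly like this. Everything here (class name, methods, the sleep standing in for compilation) is hypothetical and not any real renderer's API; it just shows the "baseline tier until the optimized tier is ready" shape.

```python
# Sketch: serve frames from a CPU rasterizer while the GPU backend
# finishes compiling its shaders, then switch over. All names are made up.
import threading
import time

class TieredRenderer:
    def __init__(self):
        self.gpu_ready = threading.Event()
        # Pretend shader compilation happens on a background thread.
        threading.Thread(target=self._compile_shaders, daemon=True).start()

    def _compile_shaders(self):
        time.sleep(0.05)  # stand-in for real (slow) shader compilation
        self.gpu_ready.set()

    def render_frame(self, scene):
        # Like a JS JIT's baseline tier: use the always-available CPU path
        # until the optimized GPU path is ready.
        if self.gpu_ready.is_set():
            return ("gpu", scene)
        return ("cpu", scene)

r = TieredRenderer()
first = r.render_frame("frame0")   # almost certainly the CPU path
r.gpu_ready.wait()
later = r.render_frame("frame1")   # GPU path once shaders are compiled
```

The real complication a sketch like this hides is keeping the two backends pixel-identical enough that the handover isn't visible.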


The output of this renderer is a bitmap, so you have to do an upload to GPU if that's what your environment is. As part of the larger work, we also have Vello Hybrid which does the geometry on CPU but the pixel painting on GPU.

We have definitely thought about having the CPU renderer while the shaders are being compiled (shader compilation is a problem) but haven't implemented it.


In any interactive environment you have to upload to the GPU on each frame to output to a display, right? Or maybe integrated SoCs can skip that? Of course you only need to upload the dirty rects, but in the worst case the full image.

>geometry on CPU but the pixel painting on GPU

Wow. Is this akin to running just the vertex shader on the CPU?


It just depends on what architecture your computer has.

On a PC, the CPU typically has exclusive access to system RAM, while the GPU has its own dedicated VRAM. The graphics driver runs code on both the CPU and the GPU (the GPU has its own embedded processor), so data is constantly being copied back and forth between the two memory pools.

Platforms like the iPhone or Apple Silicon Macs are different: they use unified memory, meaning the CPU and GPU share the same physical RAM. That makes it possible to allocate a Metal surface that both can access, so the CPU can modify it and the GPU can display it directly.

However, you won't get good frame rates on a MacBook if you try to draw a full-screen, pixel-perfect surface entirely on the CPU; it just can't push pixels that fast. But you can write a software renderer where the CPU updates pixels and the GPU displays them, without copying the surface around.


Surely not if the CPU and video output device share common RAM?

Or with old VGA, the display RAM was mapped to known system RAM addresses and the CPU would write directly to it. (You could write to an off-screen buffer and flip for double/triple buffering.)
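The double-buffering trick mentioned above looks like this in miniature. On real VGA the flip was a page swap in memory-mapped display RAM; here it's modeled with two plain byte arrays, which is an illustration, not actual VGA programming.

```python
# Sketch of double buffering: draw into an off-screen buffer, then
# "flip" so it becomes the visible framebuffer. No tearing, no copy.

WIDTH, HEIGHT = 320, 200  # classic VGA mode 13h dimensions

front = bytearray(WIDTH * HEIGHT)  # what the "display" currently shows
back = bytearray(WIDTH * HEIGHT)   # what we draw the next frame into

def put_pixel(buf, x, y, color):
    buf[y * WIDTH + x] = color

# Draw a frame entirely off-screen...
put_pixel(back, 10, 10, 15)

# ...then flip: swap the roles of the two buffers (a pointer swap, not a copy).
front, back = back, front

print(front[10 * WIDTH + 10])  # 15
```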


I regularly do remote VNC and X11 access on devices like the Raspberry Pi Zero, and in those cases the GPU does not work; you won't be able to open a GL context at all. Also, whenever I update my kernel on Arch Linux I'm not able to open a GL context until I reboot, so I really need apps that don't need a GPU context just to show stuff.


For the Pi Zero you can force a headless HDMI output in the config and then use that instead of a virtual display to get working GPU with VNC.
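For reference, a minimal sketch of what that looks like with the legacy Raspberry Pi firmware config (these option names are from the older `/boot/config.txt` scheme and may differ on newer OS releases):

```
# /boot/config.txt
hdmi_force_hotplug=1  # pretend an HDMI display is attached
hdmi_group=2          # DMT (computer monitor) timings
hdmi_mode=82          # 1920x1080 @ 60 Hz
```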


You can also trick any HDMI output to believe it's connected to a monitor.

One commercial product is:

https://eshop.macsales.com/item/NewerTech/ADP4KHEAD/

But I seem to recall there are dirt-cheap hacks to do the same. I may be conflating it with the "resistor jammed into the port" trick, which worked back in the VGA and DVI days. Memory unlocked: I did this to an old Mac mini in a closet for some reason.


It's analogous, but vertex shaders are just triangles, and in 2D graphics you have a lot of other stuff going on.

The actual process of fine rasterization happens in quads, so there's a simple vertex shader that runs on GPU, sampling from the geometry buffers that are produced on CPU and uploaded.


One place where a CPU renderer is particularly useful is in test runners (where the output of the test is an image/screenshot), or I guess any other use case where the output is an image. In that case the output never needs to get to the GPU, and indeed if you render on the GPU then you have to copy the image back!
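That screenshot-test use case looks roughly like this. A toy CPU "renderer" fills a rectangle into an in-memory bitmap and the test compares it against an expected image, with no GPU (and no readback) involved. Everything here is illustrative, not any particular renderer's API.

```python
# Toy CPU renderer + golden-image test: the whole round trip stays in RAM.

def render_rect(width, height, x0, y0, x1, y1, color):
    """Return a row-major bitmap with a filled axis-aligned rectangle."""
    img = [[0] * width for _ in range(height)]
    for y in range(y0, y1):
        for x in range(x0, x1):
            img[y][x] = color
    return img

def test_red_square():
    got = render_rect(4, 4, 1, 1, 3, 3, color=0xFF0000)
    expected = [
        [0, 0, 0, 0],
        [0, 0xFF0000, 0xFF0000, 0],
        [0, 0xFF0000, 0xFF0000, 0],
        [0, 0, 0, 0],
    ]
    assert got == expected

test_red_square()
```

With a GPU renderer the `got` bitmap would first have to be copied back from VRAM before the comparison, which is exactly the extra step the comment points out.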


> "I assume you have to do a full memory copy to the GPU to display the image in the end."

On a unified memory architecture (eg: Apple Silicon), that's not an expensive operation. No copy required.


Unfortunately graphics APIs suck pretty hard when it comes to actually sharing memory between CPU and GPU. A copy is definitely required when using WebGPU, and also on discrete cards (which is what these APIs were originally designed for). It's possible that using native APIs directly would let us avoid copies, but we haven't done that.


I've found the built-in gitk is pretty good for some GUI tasks. If I want to view the state of some file at a given commit, it's easier to navigate using that than going through git log, finding and copying the commit, running git show, pasting, and adding the file path. GitHub Desktop didn't seem to have this feature last I checked, even though the GitHub web viewer does.
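For anyone unfamiliar, the CLI flow being compared against is `git log` to find the commit, then `git show <commit>:<path>` to print the file at that commit. A sketch driving it from Python against a throwaway repo (the file name and messages are made up):

```python
# Build a tiny repo, then replay the log -> show flow the comment describes.
import pathlib
import subprocess
import tempfile

def git(*args, cwd):
    return subprocess.run(
        ["git", "-c", "user.email=a@example.com", "-c", "user.name=a", *args],
        cwd=cwd, check=True, capture_output=True, text=True).stdout

repo = tempfile.mkdtemp()
git("init", "-q", cwd=repo)
path = pathlib.Path(repo, "notes.txt")

path.write_text("version 1\n")
git("add", "notes.txt", cwd=repo)
git("commit", "-q", "-m", "first", cwd=repo)

path.write_text("version 2\n")
git("commit", "-q", "-a", "-m", "second", cwd=repo)

# Step 1: find the commit hash from the log (oldest first here)...
first_commit = git("log", "--format=%H", "--reverse", cwd=repo).splitlines()[0]
# Step 2: ...then view the file as it was at that commit.
old_contents = git("show", f"{first_commit}:notes.txt", cwd=repo)
print(old_contents)  # version 1
```

In gitk the same thing is a click on the commit and a click on the file, which is the convenience being described.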


Hundreds of thousands to millions of people have come to the US legally each year for the last thirty years.[0] How is that impractical? In fact, the share of immigrants in the US has roughly tripled in the last 50 years, is above the level of the EU, and is at its highest level in the last 100 years.[1][2] Even if legal immigration were set to zero, that wouldn't give people the right to come here illegally.

To be clear I am not making an argument that mass surveillance is needed to solve any problem.

[0] https://www.migrationpolicy.org/article/green-card-holders-a...

[1] https://www.pewresearch.org/wp-content/uploads/sites/20/2024... via https://www.pewresearch.org/short-reads/2024/09/27/u-s-immig...

[2] https://data.worldbank.org/indicator/SM.POP.TOTL.ZS?most_rec...

US vs EU vs OECD: https://data.worldbank.org/indicator/SM.POP.TOTL.ZS?most_rec... - I'm pretty sure the values here include illegal immigration as well, so if you factor that in the US may be lower than the EU, but again still at historically very high levels.


The biggest illegal immigration source is the southern border. Yes, lots of people have immigrated, but they're a tiny fraction of those who wanted to immigrate. H-1B is a good example: it counts as immigration but it really isn't; it is residency contingent on specific employment contracts. People on H-1B have no way to gain permanent residency without their employer sponsoring them, and sponsorship would let them leave the company, so employers don't tend to do it a lot.

The comparison with the EU is not meaningful, especially since it isn't even a country. The population of the US and the world as a whole has also grown by more than that factor; even in the past two decades or so it has more than doubled.


>Yes, lots of people have immigrated, but they're a tiny fraction of those who wanted to immigrate

What point are you making here specifically? Are you saying the law is considered broken unless all or most people that want to come to the US can come? If so, the citizens (or at least the government) of the country are the ones that decide its laws, not people who want to immigrate to that country.

>H-1B is a good example, it counts as immigration but it is really not

The first link I gave only includes green cards issued; it doesn't include H-1B visas to begin with. In any case, H-1B is not that significant a source of immigration; it seems to account for fewer than 1 million people in the US.[0] And it pays better than immigrating illegally in 99% of cases; most people would take that. Also, by your own metric, immigrating illegally isn't immigration either. I don't see what specific point you are making. Are you saying people come here illegally because they don't want to come via an H-1B visa, or are you just making a general point that immigration is not that high?

>The comparison with EU is not meaningful, especially since it isn't even a country

Then why does the worldbank include it? And why use OECD as a metric for anything if it isn't a country?

>population growth of the US and the world

The "highest in 100 years" statistic is in terms of percentage, so that shouldn't be relevant.

[0] https://www.uscis.gov/sites/default/files/document/reports/U... "As of September 30, 2019, the H-1B authorized-to-work population is approximately 583,420."


Why do companies attempt to prevent piracy if it doesn't hurt sales?


Because companies are reactionary structures of power; they often act out of fear of losing control, not out of data or reason. It's easier to lobby governments for harsher copyright laws than to modernize the business model.

There are many counter-examples.

Gabe Newell (Valve co-founder) famously said:

"Piracy is almost always a service problem and not a pricing problem."

Jeff Bewkes (CEO of Time Warner) famous quote about piracy:

"Game of Thrones being the most pirated show in the world? That's better than an Emmy."

Radiohead released their In Rainbows album as "pay what you want", directly online. It generated more revenue than their previous label-backed album.


Hell I have a premium Netflix and premium Spotify account that I don't even use very often.

Gabe Newell was right. I subscribed to those on my phone without having to go out of the house. It was just a few touches.


I don't think this is true. Many video game companies pay for DRM protection only for the first few months to a year, then remove it after most of the sales have been generated. And before the current mostly-uncrackable DRM (Denuvo) came out, they would still use DRM they knew would eventually be cracked, as long as it wouldn't be cracked in the first few months. They are not simply acting blindly out of fear; they are estimating the actual cost of piracy. In fact, someone recently did an analysis of this and came to that conclusion.[0] The companies likely have much better data than this external researcher.

>"Piracy is almost always a service problem and not a pricing problem."

It is true that how good the service is is an important factor, and it can matter more than, let's say, a $10 difference in price. I think that is what is meant by this quote. However, if piracy were easy and legal, far fewer people would pay. If anything, assuming that "piracy is a service problem, not a pricing problem" would prove that a significant number of people will pay for something if paying is easier than pirating. Usually, people who claim that anti-piracy measures have no effect say something like "people who can afford to pay for a given piece of media will always pay for it, and people who cannot will always not," or "people who are going to pirate something will never pay for it even if it becomes impossible to pirate." But if pricing is not actually the main issue at hand here, then that is not correct.

>"Game of Thrones being the most pirated show in the world? That's better than an Emmy."

This doesn't say anything about income generated. He's basically remarking about how successful the show was.

>Radiohead

This is a special case where consumers have a special attachment to the producers of their entertainment and buy their products specifically to support them. You can see a similar idea with YouTubers that sell everyday items (eg. coffee) with their name on it and people buy it mainly to support them, and this is even how the sales pitch is phrased. So if you are (at least partially) selling the ability to support the creator, then it is impossible to pirate that, as piracy (obviously) does not support the creator.

>act out of fear of losing control

Even after 20 years of digital media existing?

[0] https://www.sciencedirect.com/science/article/abs/pii/S18759...


It's hard to convince people of X, if their earnings depend on them not agreeing.

Management and lawyers are paid to be busy and "defend rights," not to sit still and say that nothing should be done. Even if that is true, they still need to look busy and "earn their check"; otherwise their numbers/salaries can be reduced.


The opinions of their principals may not align with published findings, for many reasons.


Because those studies aren't actual proof, and companies selling things are biased to believe that people won't pay for shit if they don't have to. (Which they won't.)


Is there a reason that some of the linked benchmarks, if I'm reading them right, have Fil-C running faster than regular C?[0] I assume it's just micro-benchmark variability, but I'm curious. Some of them seem impossibly fast compared to C, so I wonder if there is a correctness issue there.

[0] https://cr.yp.to/2025/20251028-filcc-vs-clang.html


Usually garbage collection does improve a lot of benchmarks; just look at the Hans Boehm GC benchmarks.


Back in the day, the cheat was to set up the GC so that the GC happened outside the timed portion of the benchmark. You know what's faster than the fastest GC? Not doing it.



The two extreme outliers I see are labeled "aead/clx192q/opt,-O3" and "aead/schwaemm128128v2/opt,-Os," according to clicking on the points with devtools. The latter looks like it is almost at 0x: 1x is at about y = 659, and that test is at y = 769 out of, I guess, 780 based on the graph.

