
Nixpkgs pulls source code from places like PyPI and crates.io, so verifying the integrity of those packages does help the Nix ecosystem along with everyone else.


What improvements does WebGPU bring vs WebGL for things like Potree?


Compute shaders, which can draw points faster than the native rendering pipeline. Although I have to admit that WebGPU implements things so poorly and restrictively that this benefit ends up being fairly small. Storage buffers, which come along with compute shaders, are still fantastic from a dev-convenience point of view, since they allow implementing vertex pulling, which is much nicer to work with than vertex buffers.

For gaussian splatting, WebGPU is great since it allows implementing sorting via compute shaders. WebGL-based implementations sort on the CPU, which means "correct" front-to-back blending lags behind by a few frames.
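For illustration, the GPU-side sort boils down to something like this, written here with CUDA/Thrust rather than a WGSL compute shader (a minimal sketch, not from any particular splatting implementation; the function name and layout are made up):

  #include <thrust/device_vector.h>
  #include <thrust/sequence.h>
  #include <thrust/sort.h>

  // Returns splat indices ordered near-to-far, ready for front-to-back
  // blending in the same frame instead of lagging behind a CPU sort.
  thrust::device_vector<unsigned int> sort_splats_by_depth(
      const thrust::device_vector<float> &view_space_depths)
  {
      thrust::device_vector<unsigned int> order(view_space_depths.size());
      thrust::sequence(order.begin(), order.end());   // 0, 1, 2, ...

      // Sort a copy of the depths so the caller's buffer stays untouched;
      // the matching permutation ends up in 'order'.
      thrust::device_vector<float> keys = view_space_depths;
      thrust::sort_by_key(keys.begin(), keys.end(), order.begin());
      return order;
  }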

But yeah, when you put it like that, it would have been much better if they had simply added compute shaders to WebGL, because other than that there really is no point in WebGPU.


They did; Intel had the majority of the work done, and then Google sabotaged the effort because WebGPU was right around the corner.

https://registry.khronos.org/webgl/specs/latest/2.0-compute/

https://github.com/9ballsyndrome/WebGL_Compute_shader/issues...


Access to slightly more recent GPU features (e.g. WebGL2 is stuck on a feature set that was mainstream ca. 2008, while WebGPU is on a feature set that was mainstream ca. 2015-ish).


All of these new features could easily have been added to WebGL. There was no need to create a fundamentally different API just for compute shaders.


GL programming only feels 'natural' if you've been following GL development closely since the late 1990s and have learned to accept all the design compromises made for the sake of backward compatibility. If you come from other 3D APIs and have never touched GL before, it's one "WTF were they thinking" after another (just look at VAOs as an example of a really poorly designed GL feature).

While I would have designed a few things differently in WebGPU (especially around the binding model), it's still a much better API than WebGL2 from every angle.

The limited feature set of WebGPU is mostly to blame on Vulkan 1.0 drivers on Android devices I guess, but there's no realistic way to design a web 3D API and ignore shitty Android phones unfortunately.


It's not about feeling natural - I fully agree that OpenGL is a terrible and outdated API. It's about the completely overengineered and pointless complexity in Vulkan-like APIs and WebGPU. Render Passes are entirely pointless complexity that should not exist. It's even optional in Vulkan nowadays, but still mandatory in WebGPU. Similarly static binding groups are entirely pointless - now I've got to cache thousands of vertex and storage buffers. In Vulkan you can nowadays modify those, but not in WebGPU. I wish I could batch those buffers into a single one so I don't need to create thousands of bind groups, but that's also made needlessly cumbersome in WebGPU due to the requirement to use staging buffers. And since buffer sizes are fairly limited, I can't just create one that fits all, so I have to create multiple buffers anyway - might as well have a separate buffer for each node. Virtual/sparse buffers would be helpful in single-buffer designs by growing them as much as needed, but of course they also don't exist in WebGPU.

The one thing that WebGPU is doing better is that it does implicit syncing by default. The problem is, it provides no options for explicit syncing.

I mainly software-rasterize everything in CUDA nowadays, which makes the complexity of graphics APIs appear insane. CUDA allows you to get things done simply and easily, but it still has all the functionality to make things fast and powerful. The important part is that the latter is optional, so you can get things done quickly and still make them fast.

In CUDA, allocating a buffer and filling it with data is a simple cuMemAlloc and cuMemcpy. When calling a shader/kernel, I don't need bindings and descriptors; I simply pass a pointer to the data. Why would I need that anyway, the shader/kernel knows all about the data, the host doesn't need to know.
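A minimal sketch of that workflow, using the runtime-API equivalents of cuMemAlloc/cuMemcpy (cudaMalloc/cudaMemcpy) and a made-up kernel for illustration:

  #include <cuda_runtime.h>
  #include <stdio.h>
  #include <stdlib.h>

  // The kernel just takes a raw device pointer; no bind groups, no descriptors.
  __global__ void scale(float *data, int n, float factor) {
      int i = blockIdx.x * blockDim.x + threadIdx.x;
      if (i < n) data[i] *= factor;
  }

  int main(void) {
      const int n = 1 << 20;
      size_t bytes = n * sizeof(float);

      float *host = (float *)malloc(bytes);
      for (int i = 0; i < n; ++i) host[i] = (float)i;

      float *dev = NULL;
      cudaMalloc((void **)&dev, bytes);                      // allocate: one line
      cudaMemcpy(dev, host, bytes, cudaMemcpyHostToDevice);  // fill: one line

      scale<<<(n + 255) / 256, 256>>>(dev, n, 2.0f);         // pass the pointer directly
      cudaMemcpy(host, dev, bytes, cudaMemcpyDeviceToHost);  // default stream syncs here

      printf("%f\n", host[42]);                              // expect 84.0
      cudaFree(dev);
      free(host);
      return 0;
  }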


> Render Passes are entirely pointless complexity that should not exist. It's even optional in Vulkan nowadays.

AFAIK Vulkan only eliminated pre-baked render pass objects (which were indeed pointless), and now simply copied Metal's design of transient render passes, e.g. there's still a 'render pass boundary' between vkCmdBeginRendering() and vkCmdEndRendering(), and the VkRenderingInfo struct that's passed into the vkCmdBeginRendering() function (https://registry.khronos.org/vulkan/specs/latest/man/html/Vk...) is equivalent to Metal's MTLRenderPassDescriptor (https://developer.apple.com/documentation/metal/mtlrenderpas...).

E.g. even modern Vulkan still has render passes, they just didn't want to call those new functions 'Begin/EndRenderPass' for some reason ;) AFAIK the idea of render pass boundaries is quite essential for tiler GPUs.

WebGPU pretty much tries to copy Metal's render pass approach as much as possible (e.g. it doesn't have pre-baked pass objects like Vulkan 1.0).

> The one thing that WebGPU is doing better is that it does implicit syncing by default.

AFAIK also mostly thanks to the 'transient render pass model'.

> Why would I need that anyway, the shader/kernel knows all about the data, the host doesn't need to know.

Because old GPUs are a thing and those usually don't have such a flexible hardware design to make rasterizing (or even vertex pulling) in compute shaders performant enough to compete with the traditional render pipeline.

> Similarly static binding groups are entirely pointless

I agree, but AFAIK Vulkan's 1.0 descriptor model is mostly to blame for the inflexible BindGroups design.

> but that's also made needlessly cumbersome in WebGPU due to the requirement to use staging buffers

Most modern 3D APIs also switched to staging buffers though, and I guess there's not much choice if you don't have unified memory.


> AFAIK the idea of render pass boundaries is quite essential for tiler GPUs.

I've been told by a driver dev of a tiler GPU that they are, in fact, not essential. They pick that info up by themselves by analyzing the command buffer.


> Most modern 3D APIs also switched to staging buffers though, and I guess there's not much choice if you don't have unified memory.

Well, I wouldn't know since I switched to using CUDA as a graphics API. It's mostly nonsense-free, faster than the hardware pipeline for points, and about as fast for splats. Seeing how Nanite also software-rasterizes as a performance improvement, CUDA may even be great for triangles. I've only implemented a rudimentary triangle rasterizer so far, but it can draw 10 million small textured triangles per millisecond. Still working on the larger ones, but that's low priority since I focus on point clouds.
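For a flavor of what such a point rasterizer can look like, here's a rough sketch of one common technique (pack depth and color into a 64-bit word and resolve visibility with a single atomicMin per pixel); the names and projection details are illustrative, not a description of any specific renderer:

  #include <stdint.h>

  // Assumes the framebuffer was cleared to 0xFFFFFFFFFFFFFFFF before the
  // pass, and a GPU with 64-bit atomicMin (compute capability 3.5+).
  __global__ void rasterize_points(const float3 *points, const uint32_t *colors,
                                   int num_points, const float *mvp, // 4x4, row-major
                                   uint64_t *framebuffer, int width, int height)
  {
      int i = blockIdx.x * blockDim.x + threadIdx.x;
      if (i >= num_points) return;

      // Project to clip space with the row-major view-projection matrix.
      float3 p = points[i];
      float x = mvp[0]  * p.x + mvp[1]  * p.y + mvp[2]  * p.z + mvp[3];
      float y = mvp[4]  * p.x + mvp[5]  * p.y + mvp[6]  * p.z + mvp[7];
      float w = mvp[12] * p.x + mvp[13] * p.y + mvp[14] * p.z + mvp[15];
      if (w <= 0.0f) return;

      int px = (int)((x / w * 0.5f + 0.5f) * width);
      int py = (int)((y / w * 0.5f + 0.5f) * height);
      if (px < 0 || px >= width || py < 0 || py >= height) return;

      // Pack depth into the high bits and color into the low bits, so one
      // atomicMin keeps the nearest point per pixel together with its color.
      // (Positive float bit patterns order the same as the float values.)
      uint64_t packed = ((uint64_t)__float_as_uint(w) << 32) | colors[i];
      atomicMin((unsigned long long *)&framebuffer[py * width + px],
                (unsigned long long)packed);
  }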

In any case, I won't touch graphics APIs anymore until they make a clean break to remove the legacy nonsense. Allocating buffers should be a single line, providing data to shaders should be as simple as passing pointers, etc.


The examples in the repo are using the open-source yosys + nextpnr tooling.


No, compression formats are not Turing-complete. You control the code that interprets the compressed stream, allocates the memory, writes the output, etc. based on what it sees there, and you can simply choose to return an error after writing N bytes.


Yes, and even if they were Turing-complete, you could still run your Turing-machine equivalent for only n steps before bailing.
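As a toy illustration (a made-up trivial format, plain host-side code), a decoder that enforces both caps could look like:

  #include <stdint.h>
  #include <stddef.h>
  #include <vector>

  // Input: (count, byte) pairs. Returns false if either limit would be
  // exceeded, so a hostile stream can't blow up memory or spin forever.
  bool bounded_rle_decode(const std::vector<uint8_t> &in,
                          std::vector<uint8_t> &out,
                          size_t max_output_bytes,
                          size_t max_steps)
  {
      size_t steps = 0;
      for (size_t i = 0; i + 1 < in.size(); i += 2) {
          if (++steps > max_steps) return false;                    // step cap
          size_t count = in[i];
          uint8_t value = in[i + 1];
          if (out.size() + count > max_output_bytes) return false;  // output cap
          out.insert(out.end(), count, value);
      }
      return true;
  }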


At least that sounds like it would be a more interesting story than the one where the person who quit a year ago didn't document all the places they manually installed the 2-year certificate.


I think the idea is that if an LLM trained prior to the patent date can reproduce the invention, then either the idea is obvious or there was prior art in the training set; either way the patent is invalid.


> ...if an LLM trained prior to the patent date can reproduce the invention...

Would we even be able to tell if the machine reproduced the invention covered by the claims in the patent?

I (regrettably) have my name on some US software patents. I've read the patents, have intimate knowledge of the software they claim to cover, and see nearly zero relation between the patent and the covered software. If I set a skilled programmer to the task of reproducing the software components that are supposed to be covered by the patents, I guarantee that they'd fail, and fail hard.

Back before I knew about the whole "treble damage thing" (and just how terrible many-to-most software patents are) I read many software patents. I found them to offer no hints to the programmer seeking to reproduce the covered software component or system.


If it can't be reduced to practice, then it's a vanity patent, but also, impossible to violate.


A patent application is a constructive reduction to practice. MPEP 2138.05. https://www.uspto.gov/web/offices/pac/mpep/s2138.html#:~:tex...


Indeed, but a constructive reduction to practice means that the inventor still has to describe how it can be done. And if it's impossible, then it's not a reduction to practice, just an invalid patent.


I had similar thoughts before. It's worth thinking about what attorneys will do in response to rejections based on LLMs reproducing ideas. I'm a former patent examiner, and attorneys frequently argue that the examiners showed "hindsight bias" when rejecting claims. The LLM needs to produce the idea without being led too much towards it.

Something like clean-room reverse engineering could be applied. First ask an LLM to describe the problem in a way that avoids disclosing the solution, then ask an independent LLM how that problem could be solved. If LLMs can reliably produce the idea in response to the problem description - that is, if after running an LLM 100 times more than half of the runs show the idea (the fraction here is made up for illustration) - the idea's obvious.


Yes that's the idea, and now I'm wondering why I'm being downvoted. Maybe the patent trolls don't like it.


Good idea but poorly stated.


I also looked around AOSP and found the commit for the battery alert icon [1], but no kernel source.

[1] https://android.googlesource.com/platform/frameworks/base/+/...


This isn't "Rust Evangelists" pushing Rust on Git, it's Git developers wanting to use Rust.

Also, there's already a separate from-scratch re-implementation of git in Rust (gitoxide).


> it's Git developers wanting to use Rust.

They're full time developers exclusively for git? How long have they been doing this? What is the set of their contributions to date? Of the total set of developers how many of them want this?

Is their set of desires anything more than "let's use Rust"? Is there a specific set of new functionality that would depend on it? New use cases that could be served with it? Is there even a long-term plan for "no new C code" at some date?

I sense disaster fomented by poorly articulated goals.

> Also, there's already a separate from-scratch re-implementation of git in Rust (gitoxide).

Sounds perfect. Then each project can maintain the focus on their core language and not potentially take several steps backwards by hacking two incompatible pieces together with no roadmap.


> They're full time developers exclusively for git?

"Exclusive" is a bit silly but yes, most of the people discussed in this article are paid to develop Git full time.


By whom?


GitHub


The people who are trying to commercialize the product? It's interesting that with all this money all they do is send "their developers" into the mailing list to push the product and everyone else in their own direction.

Who owns github again?


You asked who is pushing this. The answer is people who are paid full time to work on Git. These are Git developers. Their names are mentioned in the article, it is not hard to look up what these people have done, what they are doing now, and who they work for.

What are you trying to get at? It's not a conspiracy theory; it's just people who want to be more effective at getting things done.


The license is for the use of the broadcast spectrum (a scarce, shared resource), not practicing journalism.


If your site is vulnerable to SQL injection, you need to fix that, not pretend Cloudflare will save you.


Obviously. But I was responding to "what is sinister about a GET request". To put it a slightly different way, it does not matter so much whether the request is a read or a write. For example, DNS amplification attacks work by asking a DNS server (a read) for a much larger record than the request packet requires, and faking the request IP to match the victim. That's not even a connection the victim initiated, but that packet still travels along the network path. In fact, if it crashes a switch or something along the way, that's just as good from the point of view of the attacker - maybe even better, as it will have more impact.

I am absolutely not a fan of all these "are you human?" checks at all, doubly so when ad-blockers trigger them. I think there are very legitimate reasons for wanting to access certain sites without being tracked - anything related to health is an example.

Maybe I should have made a more substantive comment, but I don't believe this is as simple a problem as reducing it to request types.

