(I work at Oxide, though I wasn't around for the initial chip selection process)
It's at least partially a matter of timing: Oxide was picking its initial hardware in roughly 2020, and the RP2040 wasn't released until 2021.
A handful of people have done ports, e.g. https://github.com/oxidecomputer/hubris/pull/2210, but I expect to stick with STM32s for the foreseeable future – we've got a lot to do, and they're working well enough!
> The researchers identified that Rubocop, one of our tools, was running outside our secure sandbox environment — a configuration that deviated from our standard security protocols.
This is still ultra-LLM-speak (and no, not just because of the em-dash).
A few years ago such phrases would have been candidates for a game of bullshit bingo; now all the BS has been ingested by LLMs and is being regurgitated upon us in purified form...
Absolutely. In my experience every AI startup is full of AI maximalists. They use AI for everything they can - in part because they believe in the hype, in part to keep up to date with model capabilities. They would absolutely go so far as to write such an important piece of text using an LLM.
This is taken directly from the paper's introduction, which admittedly uses the more specific terminology of "1-Lipschitz signed distance bounds".
The paper cites the original Hart '96 paper on sphere tracing; quoth Hart, "a function is Lipschitz if and only if the magnitude of its derivative remains bounded".
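For context, Hart's algorithm leans on that bound directly: stepping by the SDF value can never overshoot the surface when the function is 1-Lipschitz. A minimal sketch in Python (my own toy circle SDF, not the paper's code):

```python
import math

def circle_sdf(x, y, cx=0.0, cy=0.0, r=1.0):
    # Exact signed distance to a circle; 1-Lipschitz by construction.
    return math.hypot(x - cx, y - cy) - r

def sphere_trace(sdf, ox, oy, dx, dy, max_steps=64, eps=1e-4, max_dist=100.0):
    """March a ray from (ox, oy) along the unit direction (dx, dy).

    Because the SDF is 1-Lipschitz, advancing by sdf(p) is always safe:
    the surface is at least that far away. Returns the hit distance,
    or None if the ray escapes.
    """
    t = 0.0
    for _ in range(max_steps):
        d = sdf(ox + t * dx, oy + t * dy)
        if d < eps:
            return t
        t += d
        if t > max_dist:
            break
    return None
```

A ray from (-3, 0) aimed at the unit circle hits at distance 2, in a single step here because the distance bound is exact.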
The concept of a Lipschitz function comes from mathematical analysis, not from computer graphics or numerical analysis. It's straightforward to find the definition of a Lipschitz function online, and it is not stated in terms of a derivative. If a function is differentiable, then your quote applies; but again, that isn't the definition of a Lipschitz function.
I'd say this is a little pedantic, save for the fact that your function of interest (an SDF) isn't a differentiable function! It has a big, crucially important subset of points (the caustic sets) where it fails to be differentiable.
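For reference, the definition from analysis involves no derivative at all: a function f is L-Lipschitz when

```latex
\[
  |f(x) - f(y)| \le L \,\lVert x - y \rVert \quad \text{for all } x, y.
\]
```

An SDF satisfies this with L = 1 everywhere, including at exactly those points where it has no derivative.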
>I wonder if there's a terminology schism here between computer graphics and numerical analysis folks.
The first group just pretends every function has a derivative (even when it clearly does not); the other doesn't.
The linked Wikipedia article gets it exactly right, I do not know why you would link to something which straight up says your definition is incorrect.
There is no point in talking about Lipschitz continuity while also assuming a derivative exists; you assume Lipschitz continuity precisely because it is the weaker assumption. The key reason Lipschitz continuity is interesting is that it lets you reason about functions without a derivative almost as if they had one. It is the actual thing which makes any of this work.
You may also enjoy "Spelunking the Deep: Guaranteed Queries on General Neural Implicit Surfaces via Range Analysis", which uses interval arithmetic (ish) to raymarch neural implicit surfaces:
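For anyone curious what "interval arithmetic (ish)" buys you: evaluating an SDF over intervals gives a guaranteed bound on its value across an entire box, so a renderer can prove a region is empty without sampling it. A rough sketch with hand-rolled intervals and a circle SDF as a stand-in (not the paper's range analysis):

```python
def interval_circle_sdf(xlo, xhi, ylo, yhi, r=1.0):
    """Conservative bound on sqrt(x^2 + y^2) - r over the box
    [xlo, xhi] x [ylo, yhi].

    Returns (lo, hi) such that every point in the box has an SDF value
    in [lo, hi]. If lo > 0, the whole box is provably outside the
    circle and can be skipped.
    """
    def sq(lo, hi):
        # Interval square: minimum is 0 if the interval straddles zero.
        if lo <= 0.0 <= hi:
            return 0.0, max(lo * lo, hi * hi)
        return min(lo * lo, hi * hi), max(lo * lo, hi * hi)

    x2lo, x2hi = sq(xlo, xhi)
    y2lo, y2hi = sq(ylo, yhi)
    return (x2lo + y2lo) ** 0.5 - r, (x2hi + y2hi) ** 0.5 - r
```

For the box [2, 3] x [0, 1] against the unit circle, the lower bound is 1.0, so every point in the box is provably outside.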
> Can anyone explain where this blob of "assembly language" comes from?
Assembly language is definitely the right analogy: it's a low-level target generated by higher-level tools. In this case, the expression came from a Python script calling this text(...) function:
The font is hand-built from geometric primitives (rectangles, circles, etc.) and CSG operations (union, intersection, difference).
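The actual font construction lives in the linked script, but the underlying CSG-on-SDFs trick is just min/max. A rough sketch (primitive names are my own, not the project's):

```python
import math

def circle(cx, cy, r):
    # Exact SDF for a circle.
    return lambda x, y: math.hypot(x - cx, y - cy) - r

def rectangle(cx, cy, hw, hh):
    # Exact SDF for an axis-aligned rectangle with half-extents (hw, hh).
    def f(x, y):
        qx, qy = abs(x - cx) - hw, abs(y - cy) - hh
        return math.hypot(max(qx, 0.0), max(qy, 0.0)) + min(max(qx, qy), 0.0)
    return f

# CSG on SDFs: union is min, intersection is max,
# and difference negates the subtracted shape.
def union(a, b):        return lambda x, y: min(a(x, y), b(x, y))
def intersection(a, b): return lambda x, y: max(a(x, y), b(x, y))
def difference(a, b):   return lambda x, y: max(a(x, y), -b(x, y))

# e.g. a letter "O" as a ring: outer circle minus inner circle
letter_o = difference(circle(0, 0, 1.0), circle(0, 0, 0.6))
```

Evaluating `letter_o` gives negative values inside the ring and positive values in the hole and outside, which is all a renderer needs.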
> What is considered an acceptable preprocessing or transformation?
I'm looking for interesting ideas, and to mine the depths of PLs / compiler / interpreter / runtime research. Just returning a fixed image isn't particularly interesting, but (for example) I just updated the site with a compile-to-CUDA example that shows off the brute force power of a modern GPU.
What are the practical implications of this kind of assembly language? Surely there’s more efficient means of describing 2D SDFs?
Fun exercise! I’ve been enjoying trying to find some new ways to approach the challenge. I managed to build a single string expression for the entire program, so it could be evaluated per-pixel in a shader, but it turns out the expression is too complex for WebGL and WebGPU, and the shader fails to compile.
My next thought would be to evaluate the program at a low resolution to create a low res SDF texture for the shader to draw at a higher resolution. Some information will probably be lost, though.
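To make the information loss concrete: a baked grid plus bilinear sampling can only reconstruct features larger than a texel. A quick sketch of the idea (my own toy code, not the site's):

```python
def bake_sdf(sdf, n, lo=-1.0, hi=1.0):
    # Sample the SDF on an (n+1) x (n+1) grid over [lo, hi]^2.
    step = (hi - lo) / n
    grid = [[sdf(lo + i * step, lo + j * step) for i in range(n + 1)]
            for j in range(n + 1)]
    return grid, lo, step

def sample(grid, lo, step, x, y):
    # Bilinear reconstruction, mimicking GPU texture filtering.
    # Details smaller than one texel are smoothed away.
    n = len(grid) - 1
    u = min(max((x - lo) / step, 0.0), n - 1e-9)
    v = min(max((y - lo) / step, 0.0), n - 1e-9)
    i, j = int(u), int(v)
    fu, fv = u - i, v - j
    g = grid
    return ((g[j][i] * (1 - fu) + g[j][i + 1] * fu) * (1 - fv)
            + (g[j + 1][i] * (1 - fu) + g[j + 1][i + 1] * fu) * fv)
```

For a smooth shape like a circle the reconstruction is nearly exact, but sharp corners and thin strokes narrower than the grid spacing get rounded off.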
> What are the practical implications of this kind of assembly language? Surely there’s more efficient means of describing 2D SDFs?
By analogy, you wouldn't program in LLVM IR, but it's a useful intermediate representation for a bunch of higher-level languages. Higher-level tools can target this representation, and then they all get to use a standard set of optimizations and algorithms (fast evaluation, rendering, etc).
Is there any reason to believe this isn’t an AI-assisted crank publication?