I didn't do the WASM port. It was started by our previous lead, whitequark, with some more work done by a few others. It's not quite complete, but you can do some 3D modeling. You just might not be able to save...
Oops, I did not read that before going ham in the editor. It seems that the files are stored inside the Emscripten file system, so they are not lost. I could download my exported 'test.stl' with the following JavaScript code:
var data = FS.readFile('test.stl');  // read the file out of the in-memory Emscripten FS
var blob = new Blob([data], { type: 'application/octet-stream' });
var a = document.createElement('a'); // synthesize a download link and click it
a.href = URL.createObjectURL(blob);
a.download = 'test.stl';
a.click();
One thing worth noting: everything but the menus and popups is drawn with OpenGL. The text window uses GNU Unifont, which is a bitmap font. All the interactions are handled the same way as in the desktop version.
This is awesome, keep up the good work! Have liked and subscribed =)
After this section…
> Ultima VII was in development from 1990 to 1992 (shipping in April ’92), and even though software 3D was taking off at the time, it hadn’t been designed as a 3D game (even though it used a 3D world) and that contributed to its issues.
> But software-rendered 3D games with isometric perspectives started coming out very soon after Ultima VII, and every time I saw one I thought, “This is what Ultima VII should have looked like.”
…there are clips from three games: Dungeon Keeper (June 1997), Myth II (Dec. 1998) and Grandia (Dec. 1997). Not sure if that really counts as coming out 'very soon' after Ultima VII!
Yep, but a float is more useful than a bool for tracking progress, especially if you want to answer questions like "how soon can we expect (drivers/customer support staff/programmers) to lose their jobs?"
Hard to find the right float, but worth trying, I think.
I agree, but it does seem a bit strange that you are allowed to "custom-fit" an AI program to solve a specific benchmark. Shouldn't there be some sort of rule that, for something to be AGI, it should work as "off-the-shelf" as possible?
If OpenAI had an embedded Python interpreter, or for that matter an interpreter for lambda calculus or some other Turing-complete formalism, then this approach would work, but there are no LLMs with embedded symbolic interpreters. LLMs are currently essentially probability distributions over a training corpus and do not have any symbolic reasoning capabilities. There is no backtracking, for example, like in Prolog.
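To make the Prolog comparison concrete: backtracking means binding a variable at a choice point, recursing, and undoing the binding when a constraint fails. A minimal sketch in Python (the toy goal and variable names here are purely illustrative, not anything from a real LLM or Prolog system):

```python
def backtrack(bindings, remaining, constraints):
    """Prolog-style search: bind one variable, recurse, undo on failure."""
    if not remaining:
        # Every variable is bound: test the goal against all constraints.
        return dict(bindings) if all(c(bindings) for c in constraints) else None
    var = remaining[0]
    for value in range(10):          # choice point: try each value in the domain
        bindings[var] = value
        result = backtrack(bindings, remaining[1:], constraints)
        if result is not None:
            return result            # success: commit to these bindings
        del bindings[var]            # backtrack: undo the binding and retry
    return None                      # exhausted all choices at this level

# Toy goal: find X and Y with X + Y == 5 and X * Y == 6.
solution = backtrack({}, ["X", "Y"], [lambda b: b["X"] + b["Y"] == 5,
                                      lambda b: b["X"] * b["Y"] == 6])
print(solution)  # -> {'X': 2, 'Y': 3}
```

The explicit `del bindings[var]` is the step Prolog's engine performs automatically on failure, and it's exactly the mechanism a next-token sampler doesn't have.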
There are costs to creating and enforcing regulation, and then more costs if it needs to be rolled back (say, if there are unforeseen downsides to it).
My first instinct was to vectorize the interpreter, amortizing the cost across searching many possibilities in parallel. But 9^14 is still a huge search space even when efficiently implemented on a CPU, and indeed the compile-to-Rust brute-force version I built later isn't faster than this one at finding the lowest/highest answers.
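A vectorized brute force over digit strings can be sketched like this in NumPy (Python rather than the original Rust, and the transition here is a made-up stand-in for the real interpreted program: digit string read as a number mod 97, 4 positions instead of 14):

```python
import numpy as np

digits = np.arange(1, 10)    # each position takes a digit 1-9
states = np.array([0])
numbers = np.array([0])
for _ in range(4):           # toy size; the real search had 14 positions
    # Broadcast every current state against all 9 digit choices at once,
    # so one pass of the "interpreter" advances the whole frontier.
    states = (states[:, None] * 10 + digits[None, :]).ravel() % 97
    numbers = (numbers[:, None] * 10 + digits[None, :]).ravel()
hits = numbers[states == 0]  # candidates whose final state hits the goal
print(hits.max())            # -> 9991, the largest such 4-digit number
```

This amortizes the per-step cost nicely, but the frontier grows by 9x per position, which is exactly why 9^14 stays out of reach without pruning.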
Then I started to think about pruning the search space with an early out. If you squint, there's a similarity here to Bloom filters: you get two possible answers, definitely no or possibly yes.
I was expecting to run into issues where the state grew too large, but that turned out never to be the case.
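The early-out search can be sketched as a depth-first search that memoizes (position, state) pairs already proven dead (in Python rather than the original Rust; the `step` function, goal, and digit count are made-up stand-ins since the actual interpreted program isn't shown):

```python
DIGITS = 4  # toy size; the real search was over 14 digit positions (9^14 paths)

def step(state, digit):
    # Hypothetical transition standing in for one block of the interpreted
    # program: the state is just the digit string read as a number, mod 97.
    return (state * 10 + digit) % 97

dead = set()  # (pos, state) pairs proven unable to reach the goal

def search(pos, state):
    """Largest digit string from here that ends with state == 0, else None."""
    if pos == DIGITS:
        return "" if state == 0 else None
    if (pos, state) in dead:
        return None                  # definitely no: prune the whole subtree
    for d in range(9, 0, -1):        # high digits first, so the first hit is the max
        suffix = search(pos + 1, step(state, d))
        if suffix is not None:
            return str(d) + suffix
    dead.add((pos, state))           # every digit failed from here: remember that
    return None

print(search(0, 0))  # -> 9991 (largest 4-digit, digits 1-9, divisible by 97)
```

Anything in `dead` is a definite no; everything else is a possible yes that gets explored, which is where the Bloom-filter resemblance comes from.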
An alternate way to approach it would have been with dynamic programming, but I didn't feel like that would be as much fun.
Yep, I have recently had some trouble provisioning two relatively new products (Redpanda and Materialize); both of them relied on Slack communities, and it's really not an experience I enjoy or find helpful.