Hacker News | HexDecOctBin's comments

When I read an article, I am expecting to read the author's own experiences and insights they gained from them. Not the regurgitation of an industrial scale word generator.

> She still needed to know what questions to ask and prompts to give

Then publish the prompts. Let me enter them in an LLM of my choosing and see what bullshit it hallucinates and diff it against the 'article'.

> hopefully steered it right when it made up falsehoods.

"Hopefully"? Publishing something a stochastic parrot dreamed up under your name is ghost writing at best and spreading misinformation at worst.


The "insight" that I needed a map, and that I had effectively created a map from my research, reading and "prompting" was mine, but I have no problem with using fancy tooling to help me pull it all together.

If someone could've pointed me to some other fully laid out mapping of the CL tooling stack I would've been happy as the article was a rather time consuming side quest.


> something a stochastic parrot dreamed up

Given more time and energy, and more human discovery and invention, the statistical mechanics backing these information digests will improve beyond what any one human could internalize and divine in a lifetime of idiosyncratic writing.


> will improve

If only I was capable of such divination.


What I do see is a somewhat curated cache of what a stochastic parrot dreamed up, so I don’t have to burn tokens myself.

As I understand it, the author is interested in the topic and didn’t simply publish total hallucinations.


Author here. Deeply interested, but not an expert by any means, and happy to have saved anyone a few tokens. I have done my best to fact-check the content, and the people on r/lisp have contributed a ton of corrections that I incorporated into revised edits. Constructive input is always welcome; if you have spotted any mistakes, let me know.

Hi, well, you see, it doesn’t matter how many times you repeat "I am not an expert, I did it for myself, and I'm just sharing in case someone else is interested".

Assholes will come out of the woodwork claiming only experts are allowed to post anything online.

My point is: stop being apologetic, since it only eats your energy, and DGAF about comments like the top one I replied to.


Thank you! Point taken and appreciated. Time is better spent producing better materials. I have made a short version of the post, since the article being too long was a valid criticism.

If someone from the Teardown dev team is here: did you ever try to do the physics in voxel space? If I understand correctly, Teardown converts each physics chunk into a mesh and feeds all the meshes into a traditional physics engine. But this means that the voxel models remain constrained to the voxel grid only locally, not globally.

I have been trying to figure out a way to do physics completely in voxel space to ensure a global grid, but I have not been able to find any theory of Newtonian mechanics that works in discretised space (Movable Cellular Automata was the closest). I wonder if anyone on the Teardown dev team tried to solve this problem?


(I’m not a Teardown dev!)

I tried this on a local project. It looks very janky, and the math falls apart quickly. Unfortunately, using a fixed axis-aligned grid for rotating reference frames is not practical.

One thing I wanted to try, but didn’t, was to use dynamic axes. So once an entity is created (that is, a group of voxels not attached to the world grid), it gets its own grid that can rotate relative to the world grid. The challenge would be collision detection between two unaligned grids of voxels. Converting the group to a mesh, like Teardown does, would probably be the easiest and most effective way, unless you want to invent some new game-physics math!


This sounds like a fun thing to do simply for the pleasing global consistency, but the price you will pay is that the physics will inevitably look weird since all our intuition is for smooth space. In this sense it's like those games that try to put you into a 4D space, where the weirdness is sort of the point.

Not sure what you mean by the claim that Newtonian mechanics doesn't work in discretised space? I know there are plenty of codes that discretise space and solve fluid-mechanical problems, and that's all Newtonian physics.

Of course you need quite a high resolution (compared to the voxel grid in Teardown) when you discretise for it to come out like it does in reality, but if you truly want discretised physics on the same coarse scale as the voxels in Teardown, you can just run these methods and accept that it looks weird.


(not a teardown dev)

I had brainstormed a bit on a similar problem (non-world-aligned voxel "dynamic debris" in a destructible environment). One of the ideas that came up was to use a physics solver like the PhysX FleX SDK.

https://developer.nvidia.com/flex — it's 12 years old, but still runs on modern GPUs and is quite interesting in itself as a demo. If you run it, consider turning on the "debug view"; it will show the collision primitives instead of the shapes.

General-purpose physics engine solvers aren't that GPU-friendly, but if the only primitive shape being simulated is the sphere (cubes are made of a few small spheres; everything is a bunch of spheres), the efficiency of the simulation improves quite a bit — there is no need for conditional treatment of collision pairs like sphere+cube, cube+cylinder, cylinder+sphere, and so on.

I wondered if it could be solved by having a single sphere per voxel, considering only the voxels at the surface of the physically simulated object.


You could project your dynamic objects to world coordinates, but it would look pretty wonky for small objects. A grid is just fundamentally not going to look very physical.

Maybe you could simulate physics but completely constrain any rotation? Then you’d have falling stuff, and it could move linearly (still moving in 3d space but snapping to the world grid for display purposes)?


Is there a similar document for the memory arena feature? I tried searching the official documentation, but found scant references and no instructions on how and when to use it.

Huh, you're right.

Apparently it's still considered experimental (even though Google uses it in production) so it's not in the User Manual. There's this: https://github.com/sbcl/sbcl/blob/master/doc/internals-notes...


You can't scroll without moving the cursor.


Split the window and scroll the other window. Or mark your current position and pop back to it after scrolling.


> Language features like templates are not the issue – the Standard Library is.

What sins does the STL commit that make it slow if templates themselves are not, and what kind of template code doesn't bloat compile times? In my experience, C++ libraries are usually an order of magnitude or more slower to compile than equivalent C ones, and I always chalked it up to the language.


Why do they use the bottom bit for tag and not the top bit?


Traditionally it has been done because the last three bits in an object pointer typically are always zero because of alignment, so you could just put a tag there and mask it off (or load it with lea and an offset, especially useful if you have a data structure where you'd use an offset anyway like pairs or vectors). In 64-bit architectures there are two bytes at the top that aren't used (one byte with five-level paging), but they must be masked, since they must be 0x00 or 0xff when used for pointers. In 32-bit archs the high bits were used and unsuitable for tags. All in all, I think the low bits still are the most useful for tags, even if 32-bit is not an important consideration anymore.


The sibling comment explains why we prefer to use the lower bits as a tag (these are guaranteed to be zero if the value is a pointer on a 64-bit system).

Another reason why we wouldn’t want to use the top bit is that, as the parent comment suggested, the tagged pointer representation of a fixnum integer isn’t a pointer at all but is instead twice the number it represents. Generally speaking, we represent integers in twos-complement representation which uses that top bit to determine if the value is positive or negative.


One issue with voxel-based physics-destruction games is that the physics happens in continuous space (as opposed to voxel space). This means that the moment you break off a chunk of geometry, it has to be converted into a mesh and simulated like any other mesh-based model would be. This makes voxels little more than complicated Voronoi-noise-based fractures. If you want the modelling workflow or the look of voxels, that's fine. But assuming that voxels will somehow help with the destruction physics does not seem to be a valid assumption.

Ideally, we would be able to do physics in voxel space itself (sort of like a cellular automata based classical mechanics), but that doesn't seem to be possible.


This isn’t actually true if you use GPU raytracing, as everyone involved with voxel destruction seems to realize at one point or another. Meshing in a performant way after every destruction event is simply not possible.


So how would you do destruction physics on voxels without meshing? This is how even Teardown does it, and it uses raymarching.


Have you tried Teardown? It has incredibly good voxel physics. Definitely possible.


Teardown calculates collision meshes and does physics on those — not on voxels directly, and definitely not in voxel space.


I often wonder what a Prolog implemented as an Objective-C-like extension to C would look like. Since the WAM has a proper stack and heap, IIRC, it might be possible to plug that in through some region-based memory management on the C side. Is there any prior art like this?


I ported from Pascal to C a Lisp interpreter system that had an embedded Prolog in it (that used Lisp syntax) (and wrote a new memory subsystem) in my spare time in College. Later I helped a grad student a little bit with their implementation of a Warren machine (runtime for a Prolog compiler) for it. That’s the only embedded Prolog I’m aware of.


Check http://t3x.org — it has a book on logic programming in Scheme which implements a Prolog in very few files, with few lines per file.


The Linux syscall interface is actually stable and can easily be targeted directly. It's the BSDs (and macOS) that force everyone to link against libc.


More like everyone else — the Linux kernel is the exception here.


The entire README reads like it was AI generated.

