
The idea is that you would have bits in the hardware dedicated to type checking and garbage collection. For example, in the assembly language/machine code you might have a single, generic arithmetic '+' operation.

Determining which hardware path to use to add two numbers would be done in the hardware itself: check the type bits of the operands and feed them into the appropriate ALU. Compare this to Lisp on x86, or compiled C, where the 'type' of a 'number' is determined by which assembly instruction is used on it.
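
A rough software sketch of that dispatch might look like the following (the tag values and word layout are made up for illustration; on a real tagged architecture this logic would be wired into the datapath rather than written as code):

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Type bits carried with every value; the particular tags are invented. */
    enum tag { TAG_FIXNUM, TAG_FLOAT };

    typedef struct {
        enum tag tag;                             /* the hardware type bits */
        union { int64_t fix; double flo; } val;   /* the rest of the word   */
    } word;

    /* The single generic '+': inspect the type bits, then pick the datapath. */
    static word generic_add(word a, word b)
    {
        word r;
        if (a.tag == TAG_FIXNUM && b.tag == TAG_FIXNUM) {
            r.tag = TAG_FIXNUM; r.val.fix = a.val.fix + b.val.fix;   /* integer ALU */
        } else if (a.tag == TAG_FLOAT && b.tag == TAG_FLOAT) {
            r.tag = TAG_FLOAT;  r.val.flo = a.val.flo + b.val.flo;   /* FP unit     */
        } else {
            fprintf(stderr, "type trap on '+'\n"); /* mismatch: trap, don't corrupt */
            exit(EXIT_FAILURE);
        }
        return r;
    }

The point is that the same compiled '+' works on any pair of numbers, and the check happens on every operation whether or not the compiler could prove the types in advance.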

This isn't just a performance improvement; it is also an improvement in the safety of the dynamic language.

There are a lot of different things you could do for garbage collection. You could have in-hardware reference counting, 'dirty' and 'clean' (color) bits for a mark-and-sweep collector, or 'generation' bits for an ephemeral garbage collector.
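
For illustration, here is roughly where such per-object bits could live (the header layout below is invented; the point is that the memory system, not the collector's inner loop, would maintain these bits on every store):

    #include <stdint.h>

    /* Hypothetical per-object header; in the scheme above these bits would
     * be maintained by the hardware itself rather than by software. */
    typedef struct obj_header {
        uint32_t refcount;        /* in-hardware reference counting                 */
        unsigned mark       : 1;  /* 'dirty'/'clean' (color) bit for mark-and-sweep */
        unsigned gen        : 2;  /* generation number for an ephemeral collector   */
        unsigned remembered : 1;  /* old object holding a pointer to a younger one  */
    } obj_header;

    /* What a hardware-assisted write barrier would do on every pointer store:
     * if an older object starts pointing at a younger one, remember it so the
     * ephemeral collector can scan only the young generation plus this set. */
    static void write_barrier(obj_header *src, const obj_header *dst)
    {
        if (src->gen > dst->gen)
            src->remembered = 1;
    }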

The idea is that any time you take something out of software, and put it into specialized hardware, you should get a performance improvement.

This doesn't mean that the lisp on a chip would be faster than C on a comparable x86 chip; it means that the things that make lisp (and other functional languages) safer and easier to use would be supported in hardware-- therefore not slowing things down as noticeably.




> it means that the things that make lisp (and other functional languages) safer and easier to use would be supported in hardware-- therefore not slowing things down as noticeably.

Or, at least, forcing every language implementation on that hardware to use the same safety mechanisms, making some apples-to-apples benchmarks impossible.

It would be interesting to see what a C implementation for that hypothetical modern Lisp machine (CADDR?) would look like.

A close parallel is AMPC, which compiles C to JVM bytecode.[1] The vendors say it's standards-compliant, and I actually am pretty sure it is, but it doesn't do a lot of the nonstandard things C programmers have come to depend on. For example, the 'struct hack', where you pack data of multiple types into a struct and proceed to index into it as if it were an array (usually an unsigned char array), flatly does not work, due entirely to the runtime type checking done by the JVM. This always seems to lead to major debates over whether it's a very good compiler.

[1] http://www.axiomsol.com/


The 'struct hack' is when you leave the size of the last (array) member of a struct unspecified, effectively making the struct variable-sized. This is actually not a problem for runtime type checking, and C99's flexible array members make it compliant.
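
For reference, the C99 form looks like this; every member still has a declared type, and it's only the length of the last one that is left open:

    #include <stdlib.h>
    #include <string.h>

    struct packet {
        size_t len;
        unsigned char payload[];   /* flexible array member: length left open */
    };

    struct packet *make_packet(const unsigned char *data, size_t len)
    {
        /* allocate the fixed part plus however much payload this packet needs */
        struct packet *p = malloc(sizeof *p + len);
        if (p) {
            p->len = len;
            memcpy(p->payload, data, len);
        }
        return p;
    }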

What causes problems is casting pointers to ints and back, and casting all other crap to chars. This is not standards compliant.

Casting ints to pointers will never be type-safe, but one way to get around that is to just ignore the cast, and overload arithmetic operators to work correctly on pointers - the pointers will carry around their type info, and everything should work ok.
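
As a sketch of what "pointers carry around their type info" could mean, a compiler targeting such a runtime might lower C pointers to fat pointers, something like this (all names here are invented for illustration):

    #include <stddef.h>

    /* A C pointer as a compiler for a tagged runtime might represent it:
     * the address plus the element type's size, so arithmetic stays
     * element-based regardless of what the source-level cast claimed. */
    typedef struct {
        void      *base;        /* object being pointed into                     */
        size_t     elem_size;   /* size of the element type it was declared with */
        ptrdiff_t  index;       /* element index instead of a raw byte offset    */
    } fat_ptr;

    /* p + n advances by n elements of the original type; the runtime can
     * check index * elem_size against the object before any access. */
    static fat_ptr fat_add(fat_ptr p, ptrdiff_t n)
    {
        p.index += n;
        return p;
    }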

Casting other crap to chars will never work because it interferes with the way the other crap has its type encoded. Luckily in most cases this casting is done to perform I/O, where you can also just ignore the cast, and specialize the lowest-level I/O functions to dispatch on the actual types.

The moral of the story is that you should basically ignore all the line noise the programmer produces about types, and look at the actual objects. This is exactly how Java works, btw.

WRT hardware tagging and type checks, there's really no reason to do them on a byte-addressed superscalar processor. If you look at 64-bit Common Lisp implementations today, you'll actually find that they use only about half the available tag bits in each word. About the only things that still need to be boxed are double-floats.
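
To make that concrete, low-bit tagging on a 64-bit machine is just shifts and masks (the particular tag assignments below are made up):

    #include <stdint.h>

    /* 8-byte-aligned heap objects leave the low 3 bits of every word free,
     * so a few tag patterns distinguish fixnums, pointers, and other
     * immediates; the type check is a single AND and compare. */
    #define TAG_MASK    0x7ull
    #define TAG_FIXNUM  0x0ull   /* low bits 000: value is the word shifted right by 3 */
    #define TAG_POINTER 0x1ull   /* low bits 001: pointer to a boxed object, e.g. a double-float */

    static inline int      is_fixnum(uint64_t w)    { return (w & TAG_MASK) == TAG_FIXNUM; }
    static inline int64_t  fixnum_value(uint64_t w) { return (int64_t)w >> 3; }
    static inline uint64_t make_fixnum(int64_t n)   { return (uint64_t)n << 3; }

    /* With tag 000 there is nothing to untag: adding two tagged fixnums
     * adds the underlying integers directly (ignoring overflow checks). */
    static inline uint64_t fixnum_add(uint64_t a, uint64_t b) { return a + b; }

Which is part of why dedicated tag-checking hardware buys so little on a modern superscalar core.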



