> * "Macros work different in most languages. However they are used for mostly the same reasons: code deduplication and less repetition." That could be said for any abstraction mechanism. E.g. functions. The defining features of macros is they run at compile-time.
In the context of the blog post, he wants to generate structure definitions.
This is not possible with functions.
It took me a bit to hunt down, but it was part of the Go 1.4 release in 2014.
The release notes[0] at the time stated:
> The implementation of interface values has been modified. In earlier releases, the interface contained a word that was either a pointer or a one-word scalar value, depending on the type of the concrete object stored. This implementation was problematical for the garbage collector, so as of 1.4 interface values always hold a pointer. In running programs, most interface values were pointers anyway, so the effect is minimal, but programs that store integers (for example) in interfaces will see more allocations.
@rsc wrote in some detail[0] about the initial layout on his blog.
I couldn't find a detailed explanation of how the optimization interacted with the GC, but my understanding is that the collector couldn't discern between pointer and integer values.
I can't imagine it was impossible to keep the inline-value optimization; after all, the full type information is always right there in the other word of the interface value. But I think it would have been too complicated and expensive for too little benefit.
The bigger issue IMO is and has been that fat pointers (strings, slices) can't be inlined in interface values. There's definitely some benefit to inlining integers and floats, but the indirection that comes from not inlining them isn't that significant compared to the double-indirection that comes from not inlining strings and slices. There was some discussion on the mailing list IIRC about expanding the interface to 3 words wide (to hold strings at least) or 4 words wide (to hold slices too), but this was rejected as going too far the other way (at the time).
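To put rough numbers on the fat-pointer point, here is a quick sketch (64-bit sizes assumed):

```go
package main

import (
	"fmt"
	"unsafe"
)

func main() {
	var s string
	var b []byte
	var i any
	// On a 64-bit platform: a string header is 16 bytes (pointer + length),
	// a slice header is 24 bytes (pointer + length + capacity), and an
	// interface value is 16 bytes (type word + data word). So a string or
	// slice can't be inlined into the interface's single data word; storing
	// one in an `any` boxes the header on the heap, hence two pointer hops
	// to reach the actual bytes.
	fmt.Println(unsafe.Sizeof(s), unsafe.Sizeof(b), unsafe.Sizeof(i)) // 16 24 16
}
```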
Interestingly, the log/slog package does inline string values at least [1], and demonstrates how such a thing can be done when needed, albeit with a fair amount of complexity.
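The shape of the trick, roughly, is to keep the length in a plain integer field and put only a pointer to the bytes into an `any`, tagged with an unexported pointer type so it can be recognized later. A sketch of that idea (field and type names here are illustrative, not slog's actual definitions; needs Go 1.20+ for unsafe.String/StringData):

```go
package main

import (
	"fmt"
	"unsafe"
)

// stringptr is an unexported pointer type; storing a pointer in an `any`
// does not allocate, so the string's bytes are reachable with no extra boxing.
type stringptr *byte

// Value can hold a string without putting a string header behind the interface:
// the length lives in num, and `any` holds only a pointer to the first byte.
type Value struct {
	num uint64
	any any
}

func StringValue(s string) Value {
	return Value{num: uint64(len(s)), any: stringptr(unsafe.StringData(s))}
}

func (v Value) String() string {
	if p, ok := v.any.(stringptr); ok {
		return unsafe.String((*byte)(p), int(v.num))
	}
	return fmt.Sprint(v.any)
}

func main() {
	v := StringValue("hello")
	fmt.Println(v.String()) // hello
}
```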
It was a memory model / two-word atomicity problem. The mutator uses two writes, one for the type and one for the value, to create the interface value. The GC concurrently reads the two words of the interface to see whether the value is a pointer or not. This race was considered too expensive/complicated to fix.
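A rough sketch of the two-word shape being described (illustrative only, not the real runtime structs):

```go
package main

import (
	"fmt"
	"unsafe"
)

// Conceptually, an interface value is two machine words.
type iface struct {
	typ  unsafe.Pointer // type metadata: says what kind of value `data` holds
	data uintptr        // pre-1.4: a pointer OR an inlined scalar; 1.4+: always a pointer
}

func main() {
	fmt.Println(unsafe.Sizeof(iface{})) // 16 on 64-bit: two words

	// The race: the mutator fills in typ and data with two separate stores.
	// A concurrent GC that reads the pair non-atomically can observe a
	// typ/data combination that never existed, so it cannot reliably decide
	// whether `data` is a pointer it must trace. Making `data` always a
	// pointer removes the need to consult `typ` at all.
}
```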
C is often used as a target language for compilers from higher-level languages.
The Scheme programming language requires that tail calls not grow the stack.
Therefore implementors have explored various techniques, including trampolines.
I don't have a citation, but you can find the answer in the papers on compiling Scheme to C. If there is no guarantee of TCO in the target language, then the generated programs will be slower.
Incidentally, this is why implementors of (especially) high-level languages are annoyed that TCO was removed from the JavaScript specification. There are even solutions for having TCO and still having stack inspection.
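To make the trampoline idea concrete, here is a minimal sketch in Go, another language with no TCO guarantee. All names are mine; the point is just the technique: every tail call in the source becomes "return a description of the next call", and a driver loop runs those descriptions, so the target-language stack stays flat.

```go
package main

import "fmt"

// A thunk is either a final result or more work to do.
type thunk struct {
	done bool
	val  int
	next func() thunk
}

// even/odd written in mutual tail-call style, the way a compiler might emit them.
func even(n int) thunk {
	if n == 0 {
		return thunk{done: true, val: 1}
	}
	return thunk{next: func() thunk { return odd(n - 1) }}
}

func odd(n int) thunk {
	if n == 0 {
		return thunk{done: true, val: 0}
	}
	return thunk{next: func() thunk { return even(n - 1) }}
}

// trampoline is the driver loop: constant stack depth no matter how many
// source-level tail calls are performed.
func trampoline(t thunk) int {
	for !t.done {
		t = t.next()
	}
	return t.val
}

func main() {
	fmt.Println(trampoline(even(1_000_000))) // 1, without a million stack frames
}
```

The price is an allocation and an indirect call per source-level tail call, which is one reason generated programs get slower when the target language gives no TCO guarantee.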
I'm not going to file a bug report, that's more time than I'm willing to spend on a project I'm not involved in. I will say I'm using Termux 0.118 and Racket 8.13 on kernel 5.10 and Android 14 and am willing to answer questions.
I just installed the package called `racket` from Termux's upstream, and it seems that they're using racket-minimal for that. Bit of a gotcha, but at least it doesn't seem like there's a bug. Thanks for the tip.
The literature on programming languages contains an abundance of informal claims on the relative expressive power of programming languages, but there is no framework for formalizing such statements nor for deriving interesting consequences. As a first step in this direction, we develop a formal notion of expressiveness and investigate its properties. To validate the theory, we analyze some widely held beliefs about the expressive power of several extensions of functional languages. Based on these results, we believe that our system correctly captures many of the informal ideas on expressiveness, and that it constitutes a foundation for further research in this direction.
I am not sure which Lisp you are thinking of, but compared to a "standard" Lisp you might be interested in how the module system interacts with the macro system.
In particular, the concept of a tower of phases.
The syntax `expr :~ annot` is used to annotate the expression with static information. This is different from type annotations.
It's a general mechanism to associate an expression with "extra information" that can be used elsewhere (at compile time).
One can, for example, use static information to implement dot notation for an object system. Using static information, one can push name resolution from runtime to compile time.
The important part is that users of Rhombus can use the mechanism for tracking static information specific to their programs.
It will be exciting to see the creative uses of this mechanism in the future.
> The syntax `expr :~ annot` is used to annotate the expression with static information.
The compiler can tell whether a thing is static or dynamic and apply the correct behavior. Why would I ever want to check a static thing only dynamically, and why would I ever want to statically check a dynamic type, if not by mistake?
If the programmer doesn't really have a choice, why make him choose which buttons to press?
The idea behind macros is, so to speak, to allow the programmer to extend the compiler.
In a language like MetaPost I can use equations between variables:
x + y = 10
x - y = 6
A reference to a variable x will use the current set of equations and solve for x. The solution (if it exists) will become the value of the variable reference.
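(With the two equations above, adding them gives 2x = 16, so a reference to x would yield 8 and a reference to y would yield 2.)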
Let's say I want to extend Rhombus with this new kind of variable
(let's call them linear variables).
Each linear variable needs to have some static information attached.
Here the relevant information is the set of equations to consider.
A reference to a linear variable will then reduce (at compile time) the equations associated with the variable. The reduced set of equations is then used to generate runtime code that finds the value.
In a sense "static information attached to an expression" is just a convenient
way of working with compile time information (in the sense currently used by
macros in Racket).
I'm a bit confused: Can this static information system be used for run-of-the-mill static type checking as well or not? And if so, does a static type checker for this language exist or is one in the works?