Metaprogramming is very alluring on the surface; we've all been frustrated by the limitations of our languages of choice at some point or another.
But, I think this trend might lead to extremely hard to read code, and there is a good chance that this hard-to-read code will be treated as some black box/voodoo.
It might not be a better idea than self-modifying machine code or some of the wildest C macros...
Metaprogramming via reflection is used heavily in the JVM ecosystem, and I can say with confidence that a majority of the bugs I encounter in third-party code are somehow related to reflection. I think the overuse of reflection in Java is a symptom of the language not having adequate support for expressing the abstractions that developers need in an affordable way, and hence they turn to reflection to work around these limitations. This leads developers down a path where they can't seem to stop applying reflection until they reach a point where the type system gives almost no meaningful guarantees and the compositionality of your components is ruined.
At least compile time reflection and code generation will catch a large chunk of bugs that would otherwise be deferred to runtime. I will take a puzzling compile time error message over having to debug runtime reflection errors any day.
> I think the overuse of reflection in Java is a symptom of the language not having adequate support for expressing the abstractions that developers need in an affordable way, and hence they turn to reflection to work around these
Another explanation is that playing around with annotations and AST decorations/rewriters is an excellent excuse for procrastinating through your day without having to deal with mundane business code :). I also think there's a psychological effect at work here where you delude yourself into being a "tool developer" and part of an academic discourse when you throw around annotation libs, to deflect from the harsh reality that you're working in cost-center IT. Or maybe a kind of inner migration from an enterprise code base which you can't identify with and wish to put a fence against, as in "they" (your lib users) vs. "us" (elite metaprogrammers). As someone else said, today's Spring-heavy Java code bases express behaviour through "anything and everything except actual Java code". C++ developers should take a look at Spring MVC in particular to see if that really is what they want: a framework where the relative order of method parameters has significance, and where presenting them slightly differently will result in service request routing 404ing, which you find out only by examining 600-item stack traces with multiple reflection pits.
I think that is a very accurate analysis. There are generally two types of reflection code: the type that aims to achieve modularity and decoupling, and the kind that aims to reduce boilerplate code. I avoid both like the plague, but I think that the latter type in particular is a result of the psychological bias that developers have against writing and maintaining trivial boilerplate code vs. developing fancy tools to reduce it. In the end, you often spend much more time debugging strange issues with the reflection based alternatives than you do actually writing and maintaining the boilerplate.
This is also why we have banished all forms of reflection based serialization in favour of hand-written JSON mappings. Yes, they are a bit tedious to write, but it isn't actually that bad. As a plus, you get type errors up front, and you avoid strange errors due to e.g. third party `Map` libraries that the reflection based "magic" cannot figure out how to handle correctly. If the amount of work gets out of hand, you can always turn to code generation later if it is deemed worth it.
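A hand-written mapping of this kind really is only a few lines per type. A minimal sketch in C++ (the `Point` type and `to_json` helper are invented for illustration; the same shape applies on the JVM):

```cpp
#include <string>

// A hypothetical domain type; the mapping below is written once, by hand.
struct Point {
    int x;
    int y;
};

// Hand-written serializer: tedious, but every field access is checked by the
// compiler, and renaming a field breaks the build instead of a live request.
std::string to_json(const Point& p) {
    return "{\"x\":" + std::to_string(p.x) +
           ",\"y\":" + std::to_string(p.y) + "}";
}
```

The up-front type errors mentioned above come for free: there is no runtime discovery step that can silently mis-handle an unfamiliar container type.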
That being said, I think generic programming and avoidance of boilerplate does have its merits as it can help reduce the cost of abstractions. But it absolutely must be done in a principled way rather than be a result of quick and dirty hacks such as C macros, Java reflection and C++ templates, which all accidentally give you an advanced metaprogramming environment with little safety. An example of an approach that I like is the "Generics SOP" approach in Haskell, although I do recognize that the type-level programming that it involves is not for everyone: https://www.andres-loeh.de/TrueSumsOfProducts/
> In the end, you often spend much more time debugging strange issues with the reflection based alternatives than you do actually writing and maintaining the boilerplate.
It pains me to say that, but I think you are right. A lot of boilerplate elimination ends up being premature generalization.
> C++ developers should take a look at Spring MVC in particular to see if that really is what they want,
I don't understand why it would be a bad thing to make Spring-like frameworks possible in C++. A metric ton of useful things have been written in it, and they can only be written much more painstakingly in C++.
The obvious answer would be: why not use Java/Spring then? Does C++ have to be everything to everybody (though that ship has probably sailed some 30 years ago)?
> The obvious answer would be: why not use Java/Spring then? Does C++ have to be everything to everybody (though that ship has probably sailed some 30 years ago)?
I really prefer writing C++ code where:
- I have the choice of programming style for every subproblem of my software, rather than Java code where most of the time the only choice is new-riddled, OOPish BS. Writing a Java visitor or observer pattern once again makes me shiver with dread when I'm used to std::variant and Qt's signals/slots. I'll admit that Scala mostly solves that, though; if I really had to develop on the JVM, that's likely the only language I'd happily use. "No type-level programming" -> not a selling point for me, given how many metric tons of bugs it has saved me so far.
And integrating JVM code with C++ (or any kind of native) code is an exercise in pain - I've had the displeasure of wrapping one of the libraries I've developed through JNA to make it accessible to Processing; I wouldn't wish that on my enemies.
- Things can be made to happen deterministically and automatically with RAII, I still have nightmares of trying to get finalizers to work in C# for instance to release resources other than memory at deterministic times and not "some time away in the future".
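For reference, the std::variant / std::visit style mentioned above looks roughly like this (the `Shape` types are invented for illustration):

```cpp
#include <type_traits>
#include <variant>

// Each alternative is a plain struct: no common base class, no accept() method.
struct Circle { double radius; };
struct Square { double side; };

using Shape = std::variant<Circle, Square>;

// The lambda plays the role of the visitor. Unlike the Java pattern, a
// non-exhaustive visit is a compile-time error, not a runtime surprise.
double area(const Shape& s) {
    return std::visit([](const auto& sh) -> double {
        using T = std::decay_t<decltype(sh)>;
        if constexpr (std::is_same_v<T, Circle>)
            return 3.141592653589793 * sh.radius * sh.radius;
        else
            return sh.side * sh.side;
    }, s);
}
```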
OK, I can get that, though it's not that much of a problem with "finally" blocks and modern idiomatic Java/try-with-resources. But (and I'm not pretending to be an expert here) I think attempting to write generic multithreaded service-oriented backends in a non-GC'd language is going to give you a hard time with memory fragmentation (even more so with async/evented code), plus the performance, for all I know, isn't really all that great.
Runtime metaprogramming is mighty, but also dangerous. I view compile-time metaprogramming as a much saner thing. The type system can still help you avoid potential problems, and if in doubt you can just look at the generated code. While it doesn't solve every problem solvable by runtime metaprogramming, it's good enough in most cases (e.g. building serializers for classes, as done with serde in Rust).
Indeed, the JVM sorely lacks in the reflection/metaprogramming department. The Manifold framework[1] picks up where Java leaves off. For instance, @Jailbreak is a badass, type-safe alternative to reflection.
While I agree that metaprogramming is rife for abuse, and I'd definitely prefer if my fellow programmers used it less than they do, I'd argue that the alternative to having metaprogramming is much worse and leads to brittle black-box voodoo code.
Metaprogramming covers a pretty wide spectrum, ranging from primitive textual substitution like C macros to type-aware hygienic macros à la Rust, and from the limited scope of C++ templates to full-on program writing as in Lisp.
In C and C++ (current versions) the entire module system is built around the metaprogramming hack of #ifdef header guards. Even this is a bit error-prone, but the alternative is only expressible as a compiler intrinsic (#pragma once).
In languages like C, Go and early Java, the lack of generics (a type of metaprogramming) makes it impossible to write type-safe generic algorithms, forcing casts to void*, interface{} and Object respectively.
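A minimal illustration of the contrast (C++ shown; the erased version is what C forces on you):

```cpp
// Type-safe generic: the compiler stamps out a checked version per type.
template <typename T>
T max_of(T a, T b) { return a < b ? b : a; }

// The C-style alternative: type information is erased to void*, the caller
// must supply a comparator and cast the result back, and nothing checks it.
const void* max_of_erased(const void* a, const void* b,
                          int (*cmp)(const void*, const void*)) {
    return cmp(a, b) < 0 ? b : a;
}
```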
Implementing type checking for printf-like constructs requires compiler intrinsics or C++17 constexpr metaprogramming.
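The constexpr route boils down to evaluating a checker over the format string during compilation; a minimal sketch that only counts specifiers (real checkers also validate argument types, not just arity):

```cpp
#include <cstddef>

// Count printf-style conversion specifiers in a format string.
// Being constexpr, this can run during compilation and feed a static_assert.
constexpr std::size_t count_specs(const char* s) {
    std::size_t n = 0;
    while (*s) {
        if (s[0] == '%' && s[1] == '%') s += 2;            // literal "%%"
        else if (s[0] == '%' && s[1] != '\0') { ++n; s += 2; }
        else ++s;
    }
    return n;
}

// The count is available at compile time, so a mismatch between the format
// string and the argument count becomes a build error, not a crash.
static_assert(count_specs("x=%d y=%d\n") == 2, "two arguments expected");
static_assert(count_specs("100%% done\n") == 0, "%% is a literal percent");
```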
In C and C++ you must manually implement serialization and deserialization for structs and everything else that is naturally expressed as iterating over the elements of a struct. Alternatively you could use something like protobuf, which (surprise!) compiles your protobuf file into a C++ program you can include. Using something like Rust's serde is /much/ simpler, and is only possible due to metaprogramming.
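What a serde-style derive generates for you is exactly the per-struct field list that standard C++ makes you write by hand; a sketch (the `Config` type and `fields` helper are invented for illustration):

```cpp
#include <sstream>
#include <string>
#include <tuple>

struct Config { int retries; double timeout; };

// The one piece a derive macro would generate for us: an explicit,
// hand-maintained list of the struct's fields.
auto fields(const Config& c) { return std::tie(c.retries, c.timeout); }

// Generic over anything that provides fields(), via std::apply.
template <typename T>
std::string serialize(const T& value) {
    std::ostringstream out;
    std::apply([&out](const auto&... f) {
        ((out << f << ';'), ...);  // fold expression: emit each field in turn
    }, fields(value));
    return out.str();
}
```

Every new struct needs its own `fields()` overload kept in sync by hand, which is precisely the boilerplate that reflective metaprogramming eliminates.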
You can write unmaintainable code at any level of abstraction.
How hard it will be to debug is more dependent on the available tools to link the error to some source code.
For example, there's a huge load of source-to-source compilers used in the web stack now. This does not seem to be such a big deal, probably because debuggers do an adequate job of linking errors back to the original source. Admittedly, I haven't directly touched Babylscript and so on, and I haven't seen much complaint about traceability of errors, so here I'm just guessing; a more informed point of view would be valuable.
Self-modifying code might have had its purpose in highly resource-constrained environments. But otherwise, in my opinion, generating a whole distinct source or tailoring a runnable AST has always been more understandable while offering the same level of flexibility.
> But, I think this trend might lead to extremely hard to read code, and there is a good chance that this hard-to-read code will be treated as some black box/voodoo.
I think LLVM and its TableGen mechanism is a good example of how large C++ projects almost inevitably grow some code generation facilities. In LLVM's case the mechanism is rather poorly documented and is used to generate code for 10+ different targets.
I believe the facilities provided by Circle would make most of the TableGen infrastructure redundant.
I no longer share your viewpoint. Not selling someone a footgun only ensures that they either glue a footgun onto whatever you sell them or go buy from another vendor.
People are resorting to friggin' bash scripts because they don't have that feature. Between unmaintainable bash scripts and type-checked C++ code, which do you think is better?
Recently, ( https://news.ycombinator.com/item?id=23055121 ), Walter Bright pointed out that although D has extensive compile-time meta programming support, system (and, I imagine, arbitrary dll) calls are explicitly not allowed because of security concerns.
If I understand the Circle docs correctly...
[...] searched for in the pre-loaded standard binaries: libc, libm, libpthread, libstdc++ and libc++abi.
Additional libraries may be loaded with the -M compiler switch. When the requested function is found,
a foreign-function call is made, [...]
...all libraries are fair game? And I guess you might be able to do your own function/dll probing with libdl.
I love the D language but I actually think its designers got that point wrong.
Compilers are _far_ from security hardened, and an attacker slipping something evil into the output binary is probably just as bad anyway (you distribute it to your users, after all). Ultimately you shouldn't be compiling code you don't trust without a good reason and appropriate precautions.
As a counterexample, as far as I'm aware Common Lisp makes no distinction between execution that occurs at compile time versus run time. It still seems to be doing pretty well though!
Common Lisp programmers know what they're doing. We have the same problem in Perl, where people don't get the difference between a BEGIN block and an INIT block.
With C++ the template syntax is so horribly convoluted that I doubt people get the idea of compile-time expressions: what is allowed, and what is forbidden.
The problem with compile-time expressions is side effects.
They are only performed locally, which is sometimes not what you want. Syscalls, file I/O and config checks are not done at runtime, and 99% of the time this is a bug.
You really need to know what you're doing. And each such side effect is performed only once, when you run the compiler, not at the client.
I agree. This is very impressive tech and I definitely see the appeal. I once spent a lot of energy attacking the same problem from the other end so to speak, by generating code at runtime (https://github.com/kristiandupont/rtasm/). I managed to make some logic that had many levels of loops and conditions perform extremely well with it but when I had to debug that stuff a year later, I was basically at my wit's end.
Currently, I am writing JS and TS code and I do quite a bit of code generation. It's great -- it goes into my repo so I get nice diffs when I make a change, I have easily debuggable code and my generator-code can be "unclean" and support weird edge cases through simple if-statements. Of course, my younger self would feel contempt bordering on pity for someone like me who clearly has no sense of beauty, or integrity, really. :-)
> But, I think this trend might lead to extremely hard to read code, and there is a good chance that this hard-to-read code will be treated as some black box/voodoo.
Code readability is one issue, build times another.
Chandler Carruth gave a nice talk at CppCon 2019 about how widespread usage of protobuf wound up causing the compiler to time out on single translation units in the Google code base.
This is where C++ is headed anyway. There's just a little bit more work that's needed in C++23, but it's planned, and that will make it possible to use a lot more of the STL in consteval/constexpr contexts.
I think this is a killer feature. Between execution policies and constexpr-all-the-things, I really think C++ is set to pull away in a way that other languages will have difficulty competing with.
I used C++ professionally for over 20 years until recently.
What pleasant surprise(s) exactly are you talking about?
Actually, never mind.
After a year of Rust I see myself touching C++ code next when I'm 70 and they need some old farts to fix code no one can or wants to touch any more.
And they pay five digits/day (adjusted for inflation) for that service by that time.
Now that will be the only 'pleasant surprise' I presage coming from C++ to me in the foreseeable future. ;)
Here is one possible pleasant surprise if I was starting out today - if my calling in life was to be a systems programmer then learning C++ opens up a rich sea of codebases built by people who knew what they were doing in the last ~25 years.
Irrespective of what language you actually choose to write your project in, if you want to learn how databases, caching servers, language runtimes, compilers, allocators, message buses, games, rendering engines, browsers, distributed systems, AI runtimes etc are implemented then the cutting edge as it exists right now is mostly written in C++.
Sure I hate the notation and the language as much as the next guy but I would hesitate to turn down the learning opportunity from so many cutting edge codebases.
In some ways it is analogous to math - let's say you hate the formalisms, symbols and terminology used in set theory or in calculus; would you refuse to learn them if you knew that learning the formalism allows you to tap into 300 years of accumulated wisdom from some of the brightest minds that went on to do math?
> Irrespective of what language you actually choose to write your project in, if you want to learn how databases, caching servers, language runtimes, compilers, allocators, message buses, games, rendering engines, browsers, distributed systems, AI runtimes etc are implemented then the cutting edge as it exists right now is mostly written in C++.
First things that come to mind when one hears:
- databases: PostgreSQL (C), MySQL (C), Berkeley DB (C), SQLite (C)
- caching: memcached (C)
- language runtimes: Java (C++), Python (C), Go (Go), Chez Scheme (C + Scheme)
- games: some people write C, some people write C++, some people write C# and some write Swift
- rendering engines: C++ is admittedly popular here, if that's your thing
- distributed systems: this thing is so hyped that it's implemented in every language in existence
- AI runtimes: people here change the language every 2 decades. they started with Lisp, now it's C++ + Python, later it will be JavaScript
- browsers: I pity anyone who has to work on this software (I have to)
Now let's add one obvious category which was suspiciously left out from this list:
operating systems: Unix, BSDs, Linux, Plan 9, Illumos, GNU Mach, Windows NT, XNU --- there is almost no C++ here (and any C++ that there is in just the last two).
So personally, if I were to write something new in this area, I would choose neither C, nor C++, but when it comes to learning something, it's clear that C is more than enough.
I've been working a lot with C++. I've seen its horror. No more.
MySQL and MariaDB are a mix of C++ and C nowadays [^2] [^1]. Most database reimplementations nowadays are in C++, Go or, more recently, Rust.
> - message buses: DBus (C)
ZeroMQ is C++; same for gRPC (the reference for RPC nowadays), same for Thrift [^4].
> - games
Almost all triple-A games use the Unreal Engine or a home-made engine in C++. Even Unity, which brands itself as C#, uses some C++ internally.
> distributed systems
Almost all distributed systems in the HPC world are C++ or Julia nowadays, with a bit of Fortran surviving.
> - AI runtimes
AI often means GPU usage, GPU usage means CUDA, meaning C++. That's valid for today's three main contenders (TensorFlow, PyTorch, MXNet), and they all provide a Python API on top of C++.
> Now let's add one obvious category which was suspiciously left out from this list: operating systems: Unix, BSDs, Linux, Plan 9, Illumos, GNU Mach, Windows NT, XNU --- there is almost no C++ here (and any C++ that there is in just the last two).
C++ is also in Fuchsia (Google) and BeOS/Haiku. But OSes are mainly C (or Rust nowadays) because of kernel space: exception handling is problematic in kernel space.
I'm not debating that there are a lot of projects written in C++. I only argue that C is enough to be able to learn about state-of-the-art tech in this area.
I'm not even going to debate C++'s merits at this point. Everybody's had enough of that already. There are people who stick to it and there are those who bailed after a while. It's just that, if I have a choice, I won't work with the people who belong to the former group.
For what it's worth, GCC is no longer just C compiled as C++, as a lot of stuff is being and has been ported over to proper C++ classes, RAII and templated containers.
Pleasantly surprised at how different every C++ project is, because they each have to mandate their own C++ subset in order to tame its ridiculous complexity?
I get what you're saying; C++ is worlds away from where it was even 15 years ago. Don't downplay its complexity though.
I don't think I made any references to complexity in my tweet.
All I was trying to say is that the bad news with respect to C++ is priced in and the language only has to be marginally better than what its detractors make it out to be.
IMHO a language's complexity is proportional to the square of the number of BNF rules. By this measure C++ is a disaster. If Thompson and Ritchie had tried to write UNIX in C++ they would have certainly failed!
It would seem that most mainstream languages other than C/C++ have already bitten the bullet on fancy build systems, due to dependency management, so codegen is not that expensive.
Any code you compile can now execute arbitrary syscalls, shell out to git, interact with the user's terminal, host a web service or whatever else as you build it and then write the results to your final executable code?
While most ecosystems are moving to hermetic, reproducible builds, this is going full steam in the other direction. Goodbye 'it works on my machine', hello 'it works on Jane's machine when Ryan builds it on Tuesdays. Oh, Ryan WFH on Tuesday so it gets built on his home machine!'
I wanted to like this but it seems to have gone way too far. It is a college project gone wild.
If you do any codegen can't all of that already happen though? Many (most?) build systems can already execute more or less arbitrary logic. Reproducible builds are only guaranteed if you actually verify the signature of the output.
There's also runtime reflection in most dynamic languages, and don't forget self modifying code! No technical restriction can prevent people from writing bad code at the end of the day because bad code is really a social issue.
The difference is that code generating build actions are necessarily and explicitly called out in build systems - with the opportunity to observe their declared dependencies and sandbox their execution.
In Circle, compiling any .cxx file executes arbitrary code and emits different results. There's no way to reason about what will happen during the build without manual inspection, and that really doesn't scale past trivial projects. I'd imagine most static analysis and IDE tools would be significantly hindered in understanding Circle code.
It is concerning that one of the first dozen examples of the language shows how the compiler can request input from the terminal mid-compilation. An amazingly powerful feature put forward without any commentary about how/when it should be used or not used - let alone whether it should exist at all. As I said, it feels like a college experiment - it doesn't feel like they asked how these features would make it easier to write clear, understandable, maintainable code; they just implemented them.
I think reading through all the examples https://github.com/seanbaxter/circle/blob/master/examples/RE... makes it clear that the author has put quite a deal of thought into how these features make it easier to write clear, understandable and maintainable code.
Even the example you point to goes to say: "Let's use the integrated interpreter to define a useful build tool. Software is the subject of constant revisions, and it's useful to mark a distributable with the exact version of the source-control archive it was built with." It's clear (to me) that the developer is not advocating using terminal input in the middle of a build. I haven't used terminal input since I was a child; I don't think the developer expects anyone to take that as a serious example.
> code you compile can now execute arbitrary syscalls, shell out to git, interact with the user's terminal, host a web service or whatever else as you build it and then write the results to your final executable code?
There is no way you can get 100% reproducible builds without an isolated sandbox with 'modern build tools' anyway (npm, cargo, pip, etc.). They love to depend on hidden files in your home directory or download some random stuff behind your back.
C++ meta classes will primarily be useful for library authors. It is not expected that you have to write them yourself. They can document and enforce some pattern (e.g. an interface) or provide boilerplate that would currently need either macros or, well, boilerplate.
But I don't understand what you mean with "an implementation trade-off"?
This looks like a more mature/different approach to the same thing I tried to do in a side project a while ago (https://github.com/blackhole89/macros), down to similar aesthetic choices. However, I pitch my project as a macro system rather than compile-time execution.
While here the metalanguage is itself a C++-lookalike, in my project I wound up with something that is perhaps better described as a weird TeX (which seems to be what happens naturally when your machine model is based on binding and substitution).
On one hand, C++ is surely more powerful/expressive, and there is elegance to having the language and metalanguage use the same idiom. On the other, I feel like this might actually make Circle a bit more confusing to use at times, as it becomes less clear exactly when which parts of the code are executed, and you could even imagine a typo accidentally lifting a part of the code from runtime to compile time, resulting in mystifying bugs. (I am myself puzzled by the "serialize structs generically" example: does it imply that the template is specialized before the @meta for is executed?)
Zig seems to have arbitrary compile-time code evaluation, but not the kind of AST generation you see here[1]. Nim macros seem to be a closer analogue[2].
Sean is a very good engineer. Another of his works is moderngpu, a GPU primitives library. It has the same level of docs/tutorials. I really like one thing he wrote: "Software is an asset, code a liability. On good days I'd add 200 or 300 lines to this repository. On great days I'd subtract 500."
Nim has generics, templates and macros. Macros are procedural, and they make things like generating a DSL for your software or library a simple matter.
Reminds me of Manifold[1] for Java. But Manifold integrates seamlessly with the Java compiler as a plugin as opposed to being a separate compiler, which makes Circle less approachable imo. Still, I like the F# type provider-ish aspect of this.
Thanks for pointing out Manifold, looks delicious.
As a historical note, PL/I had a preprocessor that itself was a subset of PL/I. I don't recall any type safety features, but was quite useful for creating DSLs within a program.
I'm going to disagree here. It reminds me of the "Python Ninja"-type job titles that became popular ~2012. It's so preoccupied with standing out and breaking the formality of its medium that it actually says nothing.
The repo description literally says two things: Circle is a C++ automation language, and Circle trained with the League of Shadows. The rest requires clicks.
That's a terrible way to drive adoption for something that obviously was difficult to build. Sometimes being earnest and straightforward is the best way to break down skepticism, not by using meaningless hipster taglines.
"Circle is a compiler that extends C++17 for new introspection, reflection and compile-time execution. An integrated interpreter allows the developer to execute any function during source translation. Compile-time flavors of control flow statements assist in generating software programmatically. The configuration-oriented programming paradigm captures some of the conveniences of dynamic languages in a statically-typed environment, helping separate a programmer's high-level goals from the low-level code and allowing teams to more effectively reason about their software."
While I appreciate your qualm, respectfully I think that no matter how it were introduced, someone would've quibbled. "You should be straightforward", "you should be more detailed", "you should have more ninjas". There's no win. The solution is to hope the readers you care about will be adventurous enough to make two clicks.
I'm all for doing things at compile time rather than runtime, but both the syntax and debuggability of C++ template metaprogramming suck.
I wonder if C++ should pull a Python 3 and clean up the syntax. Given that the transition for Py3 took more than a decade maybe they should do it in small but frequent language upgrades instead.
C and C++ live on backwards compatibility. There are many decades old codebases that are still being worked on. You can't clean up the syntax if that breaks backwards compatibility, you'd just end up creating a new language instead.
The C++ committee is willing to break backward compatibility in isolated areas if it is really worth paying the price, but it will (thankfully) never pull a Python 3.
IMO too much backwards compatibility hinders progress. It's obviously not sustainable to be forever backwards compatible all the way to the 70s. I'd say old code bases can simply stick with the older compiler.
As for whether it'd be considered a new language or not, it depends. I wouldn't consider Py3 a new language even though it's not backwards compatible.
General software tip: don't make a new version if you'll have to maintain both the old and the new. It gets annoying real quick. Much easier to tell your users to update and be reasonable with the changes.
I actually don't see a problem with keeping the old syntax alive. I do see a problem with new programmers using obsolete constructs.
I've just started storing metadata in JSON format. Then, I have a cmake target that invokes a python program that reads in the JSON and generates .h/.cpp files which can then be consumed by other targets.
The advantage of this is that I can then use the same metadata JSON to generate documentation, or whatever I want.
Have you learned about Lisp macros yet? If you’re intrigued by this idea, Lisp macros are a few steps above in terms of functionality, integration, etc.
Are you talking about using Lisp macros to generate C/C++/documentation? I'm not sure how Lisp macros would be better than static metadata files being operated on by some high level scripting language.
First, can you talk a little about how Nim's macro system is more-powerful than Lisp's? That seems like a fairly broad claim, and unless it allows you to influence the compiler at compile time (and maybe even then), I am curious how that works out.
Second, one of the biggest issues with lisp is that macros end up sort of slow, especially when you're using them + lists to replicate ADTs with pattern matching (versus, say, ML, which can do very fast dispatch based on types). Doesn't Nim fall into that same trap?
It can do absolutely everything Lisp macros can do, with more flexible input syntax and zero runtime overhead, while being both strongly typed and giving the macro body access to the types.
(PS: Lisp fanboys downvoting this, how about attempting to make a counterargument instead of just hitting the downvote button?)
Lisp macros have runtime overhead? I don't think reader macros or compile macros have any runtime overhead unless you add it yourself.
More flexible input syntax than a programmable reader? Data like sexps and edn is pretty flexible to begin with, but a lot of lisps have fully programmable readers for arbitrary syntax. Some lisps even support grammar dialects to switch out of sexps entirely (such as racket).
OK, this is nice, but this would be roughly the same in Lisp, apart from the AST constructors ("newIdentNode()", ...), which would simply be backquote/unquote. Plus you have to write enumInfo[0], enumInfo[1] instead of pattern matching.
Easily. The macro gets handed the AST of the input. It can interpret it however it wants. There are no reserved words or anything like that. Much of Nim is implemented as macros, just like most Lisps.
I don't understand what you mean by more flexible input syntax? Lisp hardly has any syntax to begin with; it's one of my favorite things about the language. (If that's a problem for a particular usecase, the aforementioned read time macros can change that.)
Regarding giving the macro body access to the types, that's because Nim is statically typed. I'm sure Lisp would do the same if it were as well, but it's (mostly) dynamic and that's life.
Your definition of flexible must differ from mine. The fact that you mentioned an AST as a plus and macros w/ runtime overhead tells me you don’t understand Lisp.
Lisp macros operate on a representation of the AST.
By overhead I mean that the Nim compiler can generate direct, specific instantiations at compile time instead of doing runtime duck typing as in Lisp.
Like, you can have a macro that generates a different expansion based on the type of arguments passed, if you structure the macro that way.
The macro can also take entire code blocks as arguments of course, so you can implement general purpose control structures that are first class citizens.
> Lisp macros operate on a representation of the AST.
The representation of the AST is the AST in Lisp (Scheme has syntax objects).
> By overhead I mean that the Nim compiler can generate direct, specific instantiations at compile time instead of doing runtime duck typing as in Lisp.
SBCL has `deftransform`, which allows macro expansions to benefit from static type analysis.
I'm guessing this is a joke, but in case it's not - sure, technically you could generate template metaprogramming with an additional layer of metaprogramming as I've described. But since you already have metaprogramming at the higher level, there is not as much need for template metaprogramming IMO
Jokes like this are common now, but, as an outsider, I think it's crazy that the C++ community says stuff like this as if they weren't dealing with template errors and slowly making everything work under constexpr.
This looks amazing, but the fact that it is implemented in an entirely new compiler (atop LLVM, sure) is a major turn-off. Would be more useful if it were implemented as a meta compiler instead.
There is basically no way to modify clang or GCC to support executing arbitrary C++ code at compile time. Something like that would be a fork that wouldn’t get accepted upstream. According to the author the compiler front end is ~100kLoc which is an order of magnitude less than what clang is.
Circle is honestly a breath of fresh air and much more palatable in terms of usability/speed than the existing constexpr facilities or proposed reflection IMO.
It makes my eyes bleed, but so does C++ by itself to a slight degree. I imagine those that like C++ (or are forced to use it) might find it very interesting.
I wonder how this compares to hygienic macros that some languages support?
I'm not sure about other compilers, but I use gcc. Mild errors in templates can produce reams of errors, the first of which might have a good hint, and if not, the actual error is tucked away in, perhaps, the third attempted template substitution. This is a very valid concern. Good error messages are hard to produce, and gcc doesn't even try to figure out which brackets are mismatched.
Mix that in with supercharged comptime constructions, and I'm quite terrified. I don't have these issues with macros anymore, thanks to templates, but I almost prefer them.
There is no separate C++ layer and the nice thing is that you can just use exceptions at compile time to signal errors. So the errors are actually much nicer than with template meta programming.