Looking at the code, a great deal of this is non-portable and/or undefined behavior, so I would hesitate to call it C. And I'm not talking about hypothetical problems, but about issues that should surface quite soon. For example, this is how stack allocations are made:
So far as I can see, there are basically no alignment guarantees here - the returned pointer to the char array is not guaranteed to be aligned properly for Header (which is a struct of a single void* field), nor is there any attempt to align T inside the array. If things get misaligned, on x86 and x64, it'll work (but possibly much slower than normal), but on e.g. ARM you'll get all kinds of weird things happening.
Apparently ARM allows unaligned loads now (permitted in ARMv6, default in ARMv7). There seem to be some instructions that don't support it, like STM. Also, AFAIK code needs to be aligned properly, and function pointers need to be aligned properly lest you switch between ARM and Thumb mode.
Apparently you can tell the ARM compiler to assume all memory accesses are unaligned.
Under ARMv8, I think alignment is less of an issue.
Cello seems to make its way to HN quite often. The author (also of buildyourownlisp.com fame) has previously posted his motivations for building libcello on HN: https://news.ycombinator.com/item?id=8800575
> It might be better to try Cello out on a hobby project first. Cello does aim to be production ready, but because it is a hack it has its fair share of oddities and pitfalls, and if you are working in a team, or to a deadline, there is much better tooling, support and community for languages such as C++.
Wow. Straight talk instead of salesmanship. High marks for that.
As a superstitious C programmer, typedefing void* feels like walking on the cracks in the pavement, crossing the path of a black cat, walking under a ladder, or squeezing a lemon under the full moon to me. These kinds of tricks seem very clever at first, but there always comes a point when they start to break down. I'd be leery about using union in 2017, but typedefing void* is like putting on your underpants outside your trousers, thinking you're Superman and jumping out of a window thinking that you can fly.
I'm not super familiar with Cello (and I'm not sure I get the point of it overall, are there that many platforms left that you can target from C but not C++?) but in its defense it does seem to implement fat pointers and runtime checks to have a degree of type safety. Not sure how thorough it is but it's not just decaying everything to void pointers behind the scenes.
It's a pretty clever hack though, like using setjmp for exception handling. I'm pretty sure I'd never want to use that in production anywhere but it was probably fun to implement.
The big "I" is the type, which is a macro parameter. The variable name "i" is not a parameter. If there is another variable "i" in scope, the loop variable will shadow it in the DO statement. Something like this
float i = 10;
DO({i+= 10;} 20);
will evaluate to this, modifying the "i" declared inside the macro
Hm, in the linked page there is a typedef long for I.
typedef long I;
#define DO(n,x) {I i=0,_n=(n);for(;i<_n;++i){x;}}
(I added a comma here after the }; 20 is the x parameter)
DO({i+= 10;}, 20);
expands to:
typedef long I;
{I i=0,_n=({i+= 10;});for(;i<_n;++i){20;}};
The '20' presumably should have been some kind of function call or action to repeat and the 'n' should have been the count (the reverse from your example).
So that would make this:
typedef long I;
{I i=0,_n=(20);for(;i<_n;++i){ i+= 10; }};
You're still right that it will shadow any 'i' declared outside, so it won't work, but it won't overwrite it as far as I can see. You'll just end up right where you started.
What I really don't get is: if they're going to use _n for the count anyway, why not count it down to 0? That way the whole 'i' could be avoided.
(you'd still have a problem with _n but that could be overcome with a convention, which of course someone will forget with some nasty bug as a result)
Cool. This is a bit extreme, but on purpose I think; it is an exploratory project AFAIK. If I had my hands free I would spend my time writing a new C library, since I firmly believe that the problems of C are, for the larger part, in its standard library and not in the language itself. C's memory model is unsafe, but if it is mediated by a sane library for strings, and if by default you have the places where bugs usually hide (data structures, facilities for parsing, ...), things get a lot simpler and safer.
I've used Cello for a few side projects, and it doesn't even really feel like C in the end.
Just a few off the top of my head:
* No need to specify type. Use var.
* Simpler for loops
* Inbuilt types for Hash Tables
* File types, making file reading much easier
* Function types, making it easier to pass functions around
* Doctype access
* Threads & Mutexes
* Format strings
* GC (with ability to turn C-types into Cello-types for GCing.)
It is fundamentally syntactic sugar, but enough that what you end up with doesn't necessarily look like C in the end.
with(f in new(File, $S("test.txt"), $S("r"))) {
    var k = new(String); resize(k, 100);
    var v = new(Int, $I(0));
    foreach (i in range($I(2))) {
        scan_from(f, 0, "%$ is %$ ", k, v);
        show(k); show(v);
    }
}
I believe those things are exactly what make it higher level. Unless I'm misunderstanding your question, all that makes a language higher level is the amount of abstraction between the human and the computer.
Automatic memory management is an improvement. I prefer region-based memory management to GC but a GC is a godsend in, for example, dynamic languages where you don't want any restrictions, or functional programming where you're pretending to program for a machine with infinite memory.
IIRC Cello doesn't enforce type safety, which means you can foldl but it's not much different from writing foldl in C and using void pointers everywhere.
I had thought about trying to make a type-safe version of Cello, but I eventually realized that I can't do it with the C preprocessor, so at that point it became its own language and too much work (I did not write Cello).
I'm going to say you're right, given one of the first declarations is
typedef void* var;
Cello is extremely impressive and could be fun for hobby projects, but for something you'd actually want to use on a real product you'd be better off just creating a new language (it could compile down to C even, like Vala did).
« In this talk I dig into the depths of my programming library Cello - a fun experiment to see what C looks like when pushed to its limits. I'll cover how Cello works internally, some of the cuter tricks used to make it look and feel so different, what is in store for future versions of Cello, and why it is important to push languages to their boundaries. »
How about performance? As I understand it, Cello uses fat pointers and GC, so a performance drop is expected. There are not many reasons to use C nowadays besides performance.
> There are not many reasons to use C nowadays besides performance.
Portability? A stable ABI? ... writing something in C makes it easy for any other higher-level language to link to it. That's why we're not done with C, at all ... it's basically the only serious language out there used to share code among every possible platform or language. Even C++, which is a bit safer in practice, is harder to link.
I just wish C were a bit safer by default (arrays with bounds checking, ...)
> Even C++ which is a bit safer in practice is harder to link.
Not to go down the C++ evangelist route, but if you want to write libraries in C++ to use with other high level languages, you can wrap the headers in extern "C", and still write C++ as normal in your own code.
…unless your library exposes classes/templates, which is probably what the majority of C++ libraries do (e.g., wxWidgets, Qt, Boost, etc.). In this case, creating C bindings is a real pain.
There are many languages with excellent C ABI compatibility with nearly the same portability, as well as some languages that compile down to C, while offering more safety and productivity.
So aside from performance and legacy codebases, there's no huge incentive to start a project in C these days. I think those who do it, do it in spite of the alternatives out there.
That's like saying "Why not just use Lisp?" to a Python programmer. They're entirely separate languages (albeit they do share some commonalities, but not as many as you might think).
C is my main language and I dabble in C++. I really dislike C++. I love C. I welcome any efforts to add a bit of higher level functionality to C. I have no desire (ever) to switch to C++.
C programmer here. Cello isn't really C. So as long as you're using something that isn't quite C, and you want an object model, I think "why not C++?" is a reasonable question.
In C you would only use void pointers very occasionally, to work around the lack of generics/templates in data structures. Most of the time, objects have types. In Cello, everything is a void pointer. Limited as it is, compiler type checking in C is still valuable.
I'm not sure what you mean by objects at runtime. Can you elaborate? The sort of objects I interact with at runtime are typically typed structures, so I guess I don't understand your statement.
Not void pointers per se, but if you use runtime information to determine how to interpret the data pointed by a void pointer, that's dynamic typing, yes.
> That's like saying "Why not just use lisp?" to a python programmer. They're entirely separate languages
That's not really true now, is it? Sure, C++ and C are separate languages, but they have a common history and everyone knows that. One can write extensive programs that are valid for both C++ and C compilers. C isn't a true subset of C++, but from many practical perspectives it is. In practice they have a common ancestor language that still can be (more or less) compiled with both compilers.
Lisp and Python, however... they have no such common ancestor, and their syntaxes are very different.
For "pure C" with high level string and dynamic memory handling (with data types in both heap and stack), consider checking https://github.com/faragon/libsrt (not production ready for "NASA-like" things, but I've put an important effort in test-coverage, aliasing awareness, etc.) :-)
I'm not the parent, but C has a certain simple elegance: everything's an int; those things which aren't ints (e.g. compound structures like structs and arrays) are represented using pointers and offsets, so in a sense they're also ints; except for floats, but meh.
In C++, everything is an int; except for objects; and templates; and lambdas; and classes; and exceptions; and all the other things which keep getting chucked on to the pile.
I like languages with a clear, simple concept behind their operation: lambda calculus, combinatory logic, Scheme, Forth, Joy, Pure, etc. C is a bit gnarly, but passable. C++ is horrible.
Yeah, that phrase "everything's an int" scares me (and I'm a C programmer---been programming in it since 1990). At one time I was playing around with Viola (http://www.viola.org/) (quite possibly the first graphical web browser). The code is a mess precisely because it grasps the "everything's an int" and doesn't let go. Once you get it to compile (and out of the box it doesn't any more) it barely runs on a 32-bit system (which it assumes to the nth degree---sizeof(int) == sizeof(long) == sizeof(void *)) and promptly crashes on a 64-bit system.
Horrible, horrible code.
You can write clean C without casts and treating everything as an integer, but you have to start your code with that goal.
As a C programmer, I would also agree. "Everything is an int" is not a good way to be thinking about things. There's a reason that `intptr_t` exists. Well-written C shouldn't rely on any integers being specific sizes or the same size unless you're using the standard types for that purpose (like `uint8_t`, `uint32_t`, etc.). Even then, you probably don't need them in a lot of cases, and you should basically never be casting integers into pointers unless you're using the aforementioned `intptr_t` or `uintptr_t`. Even then, you're probably still better off just making your life simple and using a `union`.
That said, I would argue that minimal explicit casts is a pretty good goal for any C programs - I find that crazy casting tends to be a sign something could be done better. But obviously, casts are definitely necessary in some instances, so it's not like "castless" code is guaranteed to be the right way to do things anyway.
To clarify, I should probably have said "integer" to avoid confusion with the C type called "int", or to be more accurate ℤ/2^w for a variety of widths w.
I didn't mean "C is nice because I can just write 'int' for all the types and it works", I meant "C is nice because it represents data in a very conceptually uniform way: either as integer, or 'pointers' which are themselves integers."
The current world population is an integer, my bank account number is an integer, but that doesn't make it meaningful to add them together. Values can have the same representation without being the same type :)
> "Everything is an int" is not a good way to be thinking about things. There's a reason that `intptr_t` exists.
intptr_t
integer type capable of holding a pointer
They're integers :)
> Well-written C shouldn't rely on any integers being specific sizes or the same size unless you're using the standard types for that purpose (like `uint8_t`, `uint32_t`, etc.).
I never said it should; but from that same cppreference site:
int8_t, int16_t, int32_t, int64_t
signed integer type with width of exactly 8, 16, 32 and 64 bits respectively
These are also integers :)
> you should basically never be casting integers into pointers
Again, I never said you should. I didn't say, or mean, anything about casts.
I don't understand this. Floating point values aren't ints. I suppose strings are integers because they're made up of 8-bit chars (which are basically ints), but I don't understand how that is advantageous or helpful.
He was trying to illustrate the concept that in C you deal directly with blocks of memory. It can be useful because instead of worrying about the implementation of the language which you're using, you can think about what the hardware is doing and understand what's going to happen based on that.
So, for example, if you know that at offset 0xDEADBEEF there are 8 bytes holding a number that you want to use, you can access that chunk of memory and use it however you want. And you can interpret it however is appropriate for your use case, for example reading it as a string, or as an int.
Being grounded in hardware makes a lot of other stuff quite arbitrary, and it can become a question of style rather than functionality. And by imagining what the hardware is doing you can understand what code is doing that you have never seen before at the lowest levels (all the way to what the registers, FPU, APU, etc on the CPU are doing). That can also help with debugging.
> So, for example, if you know that at offset 0xDEADBEEF there are 8 bytes holding a number that you want to use, you can access that chunk of memory and use it however you want. And you can interpret it however is appropriate for your use case, for example reading it as a string, or as an int.
Except these days with strict aliasing that's not true. If you access memory through a pointer of the wrong type, you've probably committed the crime of undefined behavior, for which the compiler might punish you by breaking your code - but probably won't, leaving the bug to be triggered in the future by some random code change or compiler upgrade that creates an optimization opportunity.
Admittedly there are ways to avoid undefined behavior, but still. C gives the compiler a lot of leeway to mess around with your code.
Strict aliasing does not preclude using the bytes however you want. All it does is prohibit type casting in certain situations. By using different syntax you can still interpret the bytes how you want to (even if, in extreme situations, you might have to resort to using a memcpy in order to do so).
Not really that dangerous; I think it's more of an issue for code written before the strict aliasing rule existed.
Essentially, what the strict aliasing rule requires is that you cannot go crazy casting pointers. Casting a float pointer to an int pointer is completely wrong. The exception is that it's completely safe to cast values to a char pointer or void pointer (and back, but only to the original type of the pointer). Objects of different types cannot use the same memory area.
What this usually affects is code that parses external data into a structure. This can be dealt with by using `memcpy` (to copy data from a char array into a structure) instead of pointer casts, which is mostly safe as far as the C specification is concerned (mostly, because the C specification doesn't really define things like padding, so you need to be careful about that). There is also the option of using a `union`, which is allowed by the C11 specification (and even before C11, compilers with strict aliasing optimizations didn't break code that aliased through a union, and even documented that).
Not really that dangerous, except for all the code for which it is dangerous?
Of course, you've got me, it only breaks existing code, and doesn't affect code that's written perfectly according to the rules of C and not peoples' intuitions about what pointers actually mean on a hardware level.
It's not even safe to do this:
    struct a {
        int x;
    };

    struct b {
        int x;
        int y;
    };

    struct b b;
    struct a *a = (struct a *)&b;  /* compiles with the cast, but accessing a->x is still UB */
As I said, C is gnarly, mostly for legacy and "practicality" (WorseIsBetter) reasons.
Undefined behaviour, declared as in a language standard, isn't too bad; it might forbid some potentially-useful combinations of expressions, but there'll usually be a workaround.
The problem is compilers which allow undefined behaviour by default, rather than aborting with a helpful error message. This puts the burden of avoiding undefined behaviour on the programmer, which is really bad. This might be unsolvable for C, due to the nature of its particular undefined behaviour.
For brand new languages, I'd recommend minimising undefined behaviour as much as possible, but definitely ensure that any that remains can be found statically!
As someone who likes C, I'd like to add a bit of a different look:
The aspect that I love about C is how easy it becomes to read. C offers an (arguably) nice programming interface while not allowing the programmer to hide very many things that are going on. C allows you to make nice abstractions, but those abstractions are still built upon the same basic concepts that C supports, and these abstractions generally don't fundamentally change how the language works, so you can very easily tell the control flow of your program if you simply understand the interactions between the basic parts in use.
Now that said, I'll be the first to say that I think C could benefit from a variety of different features (Some from C++, some from others, and some unique to C), and I could easily give you a fairly long "wish-list" I have for C. But, the clear nature of C is a huge advantage that I think is underrated, and is something that you really can't get back in a language once you lose it.
When you (or I) read a piece of C++ code, there are a lot more details you have to take into account than when you read a piece of C code, simply because C would force the writer to be more explicit about what they are doing in those cases.
Again though, I'll make it clear that I don't think C is the be-all end-all of languages - in particular I'm excited to see how Rust goes - but I do think it has traits that are very nice but largely ignored in most of the current languages in favor of adding lots of different ways to write your code.
C++ is very complicated. C is basically the simplest possible programming language, and the closest to the API actually presented by the processor (i.e., assembly language)
"Simple" was a poor choice of words as it doesn't capture what I mean.
I meant "fewest possible abstractions over the underlying hardware".
BF and Lisp are both much simpler than C, but don't fit what I was trying to get at, as they present a totally different abstraction that is actually quite different from the underlying hardware.
(Actually, so is C, since assembly language is itself an abstraction and processors work much differently than they did in the 70s and 80s... but as a programmer, and not a hardware engineer, I consider asm to be the baseline, zero-abstraction level ;) )
No, you're still not getting it. It isn't about inefficient abstractions. C++ has lots of zero-cost abstractions, but that's not what the parent is talking about.
He's saying that C is simple because it offers a small set of mostly orthogonal features. You get structs, functions, pointers and integers, basically. You don't even have opaque 'String' types or any of that. It's all simple, orthogonal bits.
In C, if there are no macros involved, everything you read in code is arithmetic, function calls or pointer dereferencing. There is no overloading, there are no automatic secret implicit constructors, there are no destructors, no garbage collection, etc.
C++ has a huge amount of behaviour, a lot of it very abstract compared to C. For example, vtables. How are they implemented? That isn't clear at all from reading C++ code. You have to go look at the generated code. If there are vtables in C, you have explicitly written them yourself.
The run-time cost of C++ abstractions is also NOT what I was talking about.
I was talking about how C++ requires more communication with your team, because some "abstraction" features you might not want (RAII, RTTI, polymorphic dispatch, templates, exceptions) are a lot easier to introduce than in C.
It would be hard for it to be a completely strict superset of C because that would require not introducing new reserved words (like "class") since those end up being legal variable names in C but not C++.
For people who aren't professionally familiar with C or C++ it's an easy mistake to make considering how often things refer to "C/C++" like they are a single skill.
Once upon a time there was this thing called 'cfront'. Cfront took C++ and turned it into C which you then fed through your compiler.
At that time mixing C and C++ was trivial, especially because C++ was still quite simple and the C compiler was the target.
But after that things got more complicated. C evolved, several times in fact, since that time, and the C++ standard evolved as well, leading to the impression that C and C++ are merely the old and the improved version of C. But it is probably much better to think of them as two distinct species that share a common ancestor, where one of the two had some very radical mutations.
It would be relatively easy for C++ to add some kinds of C compatibility, and they just haven't bothered to. (E.g., the "static" keyword for parameter array sizes.) It's a little annoying for shared headers that have to be used in C and C++ programs.
But the standards have made some effort at avoiding gratuitous incompatibility, and have even introduced changes to reflect changes in the other standard.
Also, much C code is still valid C++. Sure, you can write code that isn't, but I would guess (pulls a number out of nowhere) that 90% of the valid C code is also valid C++.
That makes them languages that have separate standards, but not completely independent standards, that share a whole lot of source code. That's something less than fully separate in my book.
Besides the various "bad old programming idioms" that do not compile under C++, there are a few genuinely useful features that have only just been added to C++ more than 15 years after they appeared in C, like hexadecimal floating-point literals (in practice, many compilers supported these, but they were not formally part of the language). Designated struct initializers are another C feature that I would love to have in C++.
This is a great example of how many of the things in C that don't compile in C++ are horrible programming practices, and it's really nice that C++ doesn't allow such garbage.
Actually those are very bad examples. In practice, compilers will produce warnings in all of those examples.
Better examples (of actually useful things) would be things like designated initializers, struct literals, declaring array lengths in args using static, etc.
I had that same problem. I wrote a C library [1] that has a structure field named "class". It's "class" because the protocol being described (DNS) calls that particular field "class" [2]. And I'm not about to pay lip service to an abomination like C++ [3][4].
Yeah I super hate this. There's no other language that requires other languages to contort themselves like C++ does. Every time I see `#ifdef __cplusplus` or `klass` my blood pressure spikes.
Cello and C++ have very different design goals. Cello is essentially a dynamic language implementation on top of C, and uses runtime reflection in places where a C++ programmer would probably prefer compile-time template hackery.
Idiomatic C++ is not like Cello at all. But it still allows you to do all things that Cello does - and most of them wouldn't be macros then, but classes and overloaded operators on them.
Somehow I doubt C++'s built-in object system will allow you to create classes (yes, classes, not objects) and/or alter their vtables at runtime. (I know, vtables are an implementation detail. You still get the idea.)
Not anymore so than the C one - it's just as static.
The point is that building blocks that you get from C++ are higher-level even so - classes, operator overloading, template metaprogramming etc - which should make it possible to implement something like Cello with a lot less effort (and hacks).
A good and valid reason would be compiler availability. C++11, 14 and 17 might not be stable in every compiler out there.
C++11 and later can be fancy, and may not be supported on every possible platform, especially exotic ones.
Bad reasons would be to talk about the gimmicks of C++; generally it boils down to whether programmers have been taught the language well, because it's not an easy one to understand fully, as there are many weird pitfalls.
C++ still has to fix some of its flaws, like compile times (this should be improved with modules).
So to put it simply, you can still do good work with C because C compilers are simple and straightforward. When you don't need C anymore (because you only use it for its size and speed, not for the high level), you use something more high-level like Python. C++ should allow programmers to do everything, but C++ is not perfect and not fully accessible to everyone.
Compile times aren't that slow, and the language is barely less complex than this insane hack. In fact C++'s complexity allows libraries to make their use a lot simpler. Consider string concatenation in C++ and C. Which is really simpler?
The main reason I can think not to use C++ is that C has a simple stable ABI so you can easily use C libraries from many languages and compilers. But you can always wrap C++ libraries in a C API so that's not a huge concern. Just a pain.
I beg to differ. C++ encourages placing more code than strictly necessary into header files. Even the standard headers such as <algorithm> add considerably to the compile time, and they keep getting larger with every revision of the standard. When you are making incremental changes, it accumulates to a lot of time spent waiting.
Precompiled headers never worked well due to a variety of limitations.
C++ modules, on the other hand, will be like precompiled headers done right. If they ever get standardized. They didn't make it into C++17, and the prototype implementations in Clang and MSVC are incompatible with each other, but I guess it'll happen someday…
>Precompiled headers never worked well due to a variety of limitations.
want to elaborate? i use MSVC's implementation and never encounter any problems with it. then again, i also don't work on large projects, so i would like some insight.
C++ libraries often add new semantics to the language, and that is not so obviously a benefit. Recently I explored the Kiwi project (a constraint solver). It is easy to use if you solve specific cases, e.g.:
    Variable x, y, z;
    solver.add(x <= y);
    solver.add(z >= x - y);
But if you need it in a meta way (and I always do, I rarely write non-meta code), you have to unwind all the sources[-in-headers] to find out what an Expression is, how one constructs one, write a dynamic expression builder, etc. The sources consist mostly of one-line helper methods/getters/constructors that are nested and call into each other 10-15 times per operation. LPSolve's API was much simpler, since you don't mess with variables and instead add constraints directly as matrix values.
At the end of the day, both libraries involve an identical usage pattern, but for C++ I have to create a convenience unwrapper. This 'convenience' swamp is SO popular in C++. Once you say "++", you have to deal with it all the time, unless you write something really dumb and straightforward. Not to mention most libraries have huge header-only parts due to template obsession. Qt, yeah. It takes forever to compile 20 modules on a reasonably priced laptop even with a fine-tuned prefix header and -j9.
From the employment point of view, C++ is a nightmare. No one really knows it, and everyone thinks he does. Experienced developers can't tell what an explicit constructor is, are confused by move semantics, and blink hopelessly at "what is RVO". You end up employing Python programmers at sky-high prices, while all they know is the easiest part of C++/STL magic, which is effectively beginner-grade Python with low-level bugs and UB.
>wrap C++ libraries in a C API
Together with "ecosystem" such as std::cool_ptr<std::map<std::bar,std::baz>>. That's a concern.
When I want nice bridgeable object model above C, I use Objective-C or simple embedded language with proper features. Personally, I'm really tired of C++ everywhere.
And make sure Iron has a small group of annoying, hardcore fans who relentlessly disrupt any thread about C or indeed programming in general with their ham-fisted advocacy.
The Iron fans are doing a bit of harm, but that kind of zealotry is one of the things that made Perl wildly popular in its day. And unlike Perl, Iron development is structured, not "organic" a la Larry Wall/Perl.
I'm not an Iron zealot but I do believe that there rarely is such a thing as bad publicity :)
Yes, surely the downfall of Perl was caused by its zealous fans, not by the apparition of other programming languages that were either easier to use (PHP), more readable (Python) or had more attractive frameworks (Ruby) :)
There's literally not a single comment in this thread comparing C to Rust, and yet Rust fanatics are the ones derailing here? Care to elaborate, my good friend?
Agreed, and "Iron", which, according to frostirosti, "doesn't totally enforce type safety", doesn't describe Rust in the slightest, so it almost feels like anyone projecting Rust evangelism from that comment must be talking out of their ass. You know. :)
That seems annoying. It's probably a lot better to have a large group of hardcore fans of C who relentlessly disrupt all of programming in general with vulnerability-riddled code. Make sure they're writing code instead of disrupting any discussion threads, that will make sure they're not annoying.
C has sharp edges, and buffer overflows are bad; we can all agree on that. Still, there are good ways to advocate a language and there are bad ways. Iron fans are in-your-face to a point that crosses a line for me in a way that no other language's supporters have, and I haven't coded in C for a long time now, so I don't feel like I have a dog in this race.
That's what Martin Richards did when he couldn't get a high-level language to run on crap hardware: chopped features until the BCPL compiler worked. That inspired B, which turned into C.
So, this Iron language sounds like deja vu to me from two different directions. I recommend against it.