Nim developer here, I'm glad to see so much enthusiasm for Nim. I'm currently writing a book about it called Nim in Action[1] and would love to know what you guys think about it. The first chapter is free[2], so even if you don't intend to buy it you can still take a look at it.
We also have a BountySource campaign[3], so if you would like to support us please consider making a donation.
I wonder how Nim's cycle detection works. The docs say that Nim never scans the whole heap, but I believe in the worst case you have to scan the whole heap to detect cycles.
Also, how does the GC trace variables on the stack? How does it determine whether a value is a pointer to an object or an integer?
Good questions. I'm afraid the documentation is seriously out of date about the GC: It used to implement a variant of "trial deletion" so that "never scans the whole heap" used to refer to the fact that it doesn't use traditional mark&sweep, but only scans the subgraph(s) leading to the cycles. Of course you can always create a "subgraph" that spans the whole heap, so even for trial deletion it is a dubious claim.
Since version 0.14 (iirc) however, Nim uses a traditional mark&sweep backup GC to collect cycles. Reasons: Trial deletion in practice is much slower.
For all versions of the GC, the stack is scanned conservatively, as precise stack scanning is simply too expensive when C is your main target.
That has been my experience too: all the extra work and logic you need for detecting cycles and trial deletion (because the algorithm is complicated) is so expensive that regular mark&sweep beats it.
But to ask a pointed question, doesn't that mean Nim gets the worst of both worlds? You have both the overhead of updating reference counts and the (relatively long) garbage collection pauses. I guess if the programmer codes in such a way that no cyclic garbage is created it is not a problem, because the GC will never be triggered. But how common is that in practice? Does the language make it easy to avoid cycles?
> But to ask a pointed question, doesn't that mean Nim gets the worst of both worlds? You have both the overhead of updating reference counts and the (relatively long) garbage collection pauses.
There's a patch of the collector to make the M&S incremental, to avoid pauses for this step too. Of course, whether a deferred RC'ing GC with incremental cycle collector works better in practice than a more conventional generational incremental GC (or just an incremental GC) is an open question. :-)
Nim has garbage collected pointers (ref) and raw pointers (ptr). You can use 'ptr' to break up cycles manually and disable the M&S. I suppose it's very much comparable to Swift except that Nim's RC overhead is much lower since it's deferred RC.
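For what it's worth, a minimal sketch of that ref/ptr split (the type and field names here are made up for illustration, not from the stdlib):

```nim
# Hypothetical parent/child structure: forward links are traced `ref`s,
# the back link is a raw `ptr`, so the graph the GC sees is acyclic.
type
  Node = ref NodeObj
  NodeObj = object
    child: Node          # GC-managed: keeps the child alive
    parent: ptr NodeObj  # untraced back pointer: breaks the cycle

var root = Node()
root.child = Node()
root.child.parent = addr(root[])  # valid only as long as `root` is alive
```

The cost is the usual one for raw pointers: you have to guarantee the parent outlives every use of that back pointer yourself.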
> The multisync macro was implemented to enable you to define both synchronous and asynchronous IO procedures without having to duplicate a lot of code.
This is great! There are a lot of little things like this in the nim standard library that make life a lot easier. I should write more nim.
but what is the return type? The multisync example shows separate generated methods that return `string` and `Future[string]` depending on the method argument.
If that's the case then blocking and non-blocking operations need to be treated separately since operations defined on a result of `Future[string]` are not the same as those available on `string`.
HKTs not only allow for both method argument and return type to be polymorphic, but also allow one to define the same operations (like map, flatMap/bind, fold, etc.) on the wrapped type.
Glad you like it! I wrote it while I was improving the httpclient module which otherwise would have a lot of duplication. Now it has an (almost) perfectly symmetrical API for both synchronous and asynchronous sockets :)
I do like Python (my work is mostly done in Java, though).
A few years ago, I was looking for a language with Python-like expressiveness and systems programming capabilities à la C.
Not only that. Nim also has benefits like Ada's range types, a sophisticated macro system, and Perl's native regular expression syntax. It also compiles to C which makes porting to other platforms easy.
I know Ada, not D. Nim's subranges are close to Ada's. If you define a type t1 = range[5..20] and you assign a value of 1 to a variable of type t1, then you get a compile-time or runtime error.
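A quick sketch of what that looks like (identifier names are mine):

```nim
type T1 = range[5 .. 20]

var x: T1 = 10   # fine: 10 lies within 5..20
# x = 1          # rejected at compile time: constant out of range

var n = 4        # value only known at runtime
# x = T1(n)     # compiles, but fails the runtime range check
```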
I think the language called wirbel would push some pleasant buttons. It's rather sad that its author had to stop spending more time on it. Leaving this here because I doubt wirbel is well known. It came with full-fledged type inference.
I looked at Nim briefly. It seemed cool, but there were some things that I just heavily disagree with. Forcing a keyword for discard means that there's an incentive for imperative code not to return a value, whereas I argue that that should be encouraged. Also, I find the macro system unwieldy to use, especially coming from Lisp.
However, once I started a reasonably sized project, getting Nim to compile it became difficult. Lots of small little quirks that weren't greatly documented.
However, when it did build, it was reasonably fast, and allowed me to tweak the C code when I disagreed with the compiler.
Macros were still getting developed, last I played with Nim, but I'm not sure comparing with Lisp is fair.
I adore Lisp, and its macros. (Both CL style and Scheme).
But most languages are designed to constrain the programmer to a way of thinking, a series of patterns.
Lisp was the opposite.
Even in languages that try to give the programmer flexibility, they miss the power of Lisp's homoiconicity.
That being said, I think Nim is trying to find a middle ground between Python's "only one way to do it" and Lisp's "get out of the programmer's way".
It's a decent language, albeit with a tiny, rather opinionated development team.
> Lots of small little quirks that weren't greatly documented.
For instance?
> But most languages are designed to constrain the programmer to a way of thinking, a series of patterns. Lisp was the opposite.
Lisp is still the opposite. Exactly this is the problem of Lisp: It is too powerful. I admire Lisp for its power, and there actually is no other language with such a freedom. Everyone can write his own DSL to simplify his task. Writing DSLs is extremely easy in Lisp.
However, this doesn't help maintainability, and it makes sharing code with others difficult. You actually need a lot of discipline to code in such a way that it remains maintainable. In teams you need to constrain the way of thinking, otherwise you can quickly run into chaos.
Java is the opposite of Lisp. You cannot code productively in Java unless you use a sophisticated framework. Such frameworks force you into a certain style of coding, which helps maintainability. That is one factor which explains the huge success of Java in business.
Nim and Rust combine both paradigms. They provide a certain degree of freedom, and they require a certain degree of discipline. Haskell requires an extreme amount of discipline. As for Haskell's kind of freedom, I am not convinced.
> Lots of small little quirks that weren't greatly documented.
> For instance?
Nimble file naming [0]. It must end in '.nimble', and must have at least one character in front of that suffix, but the prefix doesn't actually matter.
I'm not convinced on the maintainability argument. Many, many projects have a coding style guide. LISPs require one too. Once you have one, maintaining the code becomes as difficult as any other language. I've encountered unmaintainable Python code [1], and insanely well documented Scheme code [2]. I'm not sure the language has much to do with it, apart from supporting a wide range of paradigms and patterns.
I'm not sure black-boxing through use of a sophisticated framework is any better than the way you work in more flexible languages.
Here's my take:
* Nim is great. The use of the 'auto' keyword let me be productive using it.
* Scheme is great. It lets me do insane things, and the compiler will optimise it to work, and well. (Rather than breaking three similar functions into pieces, I can make one function that generates all three functions using their commonalities.)
* Java is the smart kid at college who always answers questions in the form of a thesis. He isn't wrong, but you don't really remember what he was saying.
As I said last time this argument came up, you can write INTERCAL in any language. And ultimately, any language that tries to make it hard to write bad code (Ada, Java, Pascal, etc.) merely walls off one option. There are always other ways to write code badly (frequently, by making it overly baroque). My rule of thumb is to make the code as simple as you can get away with, avoid doing anything obviously stupid, and fix anything that goes wrong. (I'm not in industry yet, so I can get away with it.)
You need a lot of discipline in the overall design of your application so that you can add code and data at any time without messing up the design.
It's easy to add code in most languages, even for unexpected features which were not considered at first place. In Haskell it can be painful to do that.
The whole language is still in development. If there are things you dislike, like the macros, then please submit some bug reports on Github[1] or talk to us on IRC[2] and Gitter[3]. Ideally, help us improve it by submitting pull requests :)
I don't see how a keyword that allows return values to be ignored creates an incentive to create void functions.
I sort of find the idea refreshing myself. As a full time Clojure developer, I consider Nim to be the closest thing I can find to what I want for a great experience coding closer to the bare metal.
It's not so much that it allows it to be ignored, it's that if you want it to be ignored, you have to use that keyword. It means that there's syntactic noise in cases where you don't care about the return value of a function that has one, and thus discourages you from writing functions like that.
The idea behind this feature is to ensure that you don't lose important data, not to discourage you from writing functions which return values. For cases where a function's return value is truly optional, you can use the {.discardable.} pragma.
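A small sketch of the pragma in action (the proc name is made up):

```nim
proc logLine(msg: string): bool {.discardable.} =
  ## Hypothetical logger: returns a success flag callers may ignore.
  stdout.writeLine(msg)
  true

logLine("starting up")       # no `discard` needed, thanks to the pragma
let ok = logLine("verbose")  # the return value is still there if you want it
```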
Huh. Interesting idea. I suppose that could work. I'm still not wholly convinced that Nim is for me, but you've definitely alleviated some of my biggest concerns.
Actually, in Nim almost everything is an expression, including if blocks, case, and even except blocks.
Together with immutable "variables" (let) this encourages a pretty functional style. I'd just wish their support for lambdas and list comprehensions would be a bit better.
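For example, since `if` and `case` are expressions, immutable bindings don't need a mutable placeholder (variable names are mine):

```nim
let code = 404

# `if` as an expression:
let status = if code == 200: "ok" else: "error"

# `case` as an expression; every branch must yield the same type:
let kind =
  case code
  of 200 .. 299: "success"
  of 400 .. 499: "client error"
  else: "other"

echo status, " / ", kind
```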
Yeah, looking at it from the perspective of a utopian purely functional world, this seems as much of an abomination as "out arguments" in C#... but in real-world code things look different :)
CQS is neat, but return values on imperative functions are really useful. For one thing, it allows you to compress your API, have an action that can be used solely for the side effect, or for the value of that expression. In addition, you can use it to determine success or failure, and other important information about the function.
Yes, but Nim's macro system, while very powerful, is at present the most clumsy and awkward thing I've seen. It's imperative, like Lisp macros. However, unlike Lisp macros, there are no templates, and the data structure that Nim's macros manipulate is much closer to the actual AST, and thus far more complex. Because of this, Nim macros must clumsily plug together an AST, and do so in a manner that's so noisy that you have to squint to see what it's actually doing by the time you're done. Meanwhile, Lisp macros are not necessarily shorter, but it's far easier to see what's going on.
Yes, but Nim doesn't have imperative templates, so its template system is fairly limited.
As for hygiene problems, Nim's system has the same issues.
Here's an example of the problem:
This is a simple Nim macro from the tutorial:
  macro debug(n: varargs[expr]): stmt =
    result = newNimNode(nnkStmtList, n)
    for x in n:
      result.add(newCall("write", newIdentNode("stdout"), toStrLit(x)))
      result.add(newCall("write", newIdentNode("stdout"), newStrLitNode(": ")))
      result.add(newCall("writeLine", newIdentNode("stdout"), x))
What the heck is that doing? Now let's compare to my Lisp of choice, Chicken Scheme (although this macro would be much the same in any Lisp with imperative macros).
Now see, that's much clearer. And before you ask, yes, this macro is completely hygienic, because it uses ir-macro-transformer. Don't worry too much about it if you're not familiar with Chicken Scheme. It's not the important part.
Now see, that's much clearer. From the perspective of someone who knows Lisp, perhaps. I find it very confusing. Moreover, you are using an example from a tutorial, which is meant to explain how things work, not to show what the best way to write some macro is.
You can write the same macro with less verbosity:
  macro debug(n: varargs[typed]): typed =
    result = newNimNode(nnkStmtList, n)
    for x in n:
      let xRepr = toStrLit(x)
      result.add(quote do: writeLine(stdout, `xRepr` & ": " & $`x`))
I could have done that, but I couldn't be bothered to look up the interpolation strings for format (yes, we have it), or the one level flatten function, and the splicing unquote is less efficient than cons in this case. But yes, in real code, I'd probably go in that direction.
There is one key difference between my code and yours: in mine, the functions, values, and symbols referenced in the macro are automatically renamed, thus guaranteeing no namespacing conflicts. Due to the way the CL package system works (IIRC), CL provides almost the same guarantees, at least in this case. But it is an important semantic difference.
And it is the Common Lisp. Despite how you may feel, Scheme, Clojure, PicoLisp, NewLisp, Racket, Interlisp, LeLisp, EuLisp, and others are as much a Lisp as CL.
Well, then, if you don't have macros, then you don't have procedural macros, which was my point.
And I didn't say it was a legitimate reason to use cons. I said it technically had better performance. As I stated above, the real reason I used it is because it fit my mental model of what was happening.
> Well, then, if you don't have macros, then you don't have procedural macros, which was my point.
Trivially.
Anyway, I don't consider them to be Lisp dialects anyway. They are new languages, Scheme dialects, scripting languages with parentheses, whatever. The name 'new'LISP says it already.
> I said it technically had better performance.
You thought it had, without actually knowing it.
  CL-USER 36 > (let* ((bar '(1 2 3))
                      (baz `(foo ,@bar)))
                 (eq bar (rest baz)))
  T
So it's not copied and no traverse is needed.
What actually is traversing the code is your IR-macro mechanism. Twice. -> ir-macro-transformer. Which makes it slower both in the interpreter and the compiler.
Not considering them to be Lisp is ridiculous. PicoLisp is closer to Lisp 1.5 than CL is. CL took many ideas from Scheme, and vice versa. The claim that CL is The One True Lisp is tenuous at best, and absurdly ridiculous at worst. It's like a Catholic claiming that theirs is the only true Christian religion (not a great analogy, but not the worst in the world). It also, quite frankly, given how these languages, particularly PicoLisp, fit every definition of Lisp I've heard, takes us straight into No True Scotsman territory. Haggis, anyone?
It's good to know that splicing unquote optimizes for that case in Lisp. I wasn't sure, and so assumed the general case. In any case, that's not the reason I didn't use it, as I've explained multiple times now.
Yes, I know ir-macro does code traversal. Yes, sc-macros are more elegant. But that's the mechanism we picked, and I have no issue with it. Furthermore, it doesn't traverse all the same code twice. It traverses the inputs and the outputs.
>Btw., the code won't win any beauty contests.
...Says the CL user. Actually, I agree, it won't. But it works, it's reasonably clear, and it doesn't do anything obviously stupid. It's okay.
...Unless you're talking about my code. You want me to clean that up? Okay. I will.
> Not considering them to be Lisp is ridiculous. Picolisp is closer to Lisp 1.5 than CL is.
How so? CL runs a lot of Lisp 1.5 code unchanged.
PicoLisp does not. The PicoLisp evaluator is not compatible with Lisp 1.5. It doesn't even have LAMBDA.
>CL took many ideas from Scheme, and vice versa.
Sure not. The main idea CL took from Scheme was 'lexical binding by default'. Other than that the Scheme influence was minor.
CL is based on Lisp Machine Lisp, NIL, S1 Lisp and Spice Lisp. All coming from Maclisp, which was developed out of Lisp 1.5.
> The claim that CL is The One True Lisp is tenuous at best, and absurdly ridiculous at worst.
I never said that. But it is the most widely used Lisp, and the one I mostly use - minus some minor use of Emacs Lisp.
> It's good to know that splicing unquote optimizes for that case in Lisp. I wasn't sure, and so assumed the general case.
You could have looked it up or tried it, before claiming it. I did it for you.
> Yes, I know ir-macro does code traversal. Yes, sc-macros are more elegant. But that's the mechanism we picked, and I have no issue with it. Furthermore, it doesn't traverse all the same code twice. It traverses the inputs and the outputs.
Yeah, but you claimed that the CL code was less efficient. Great move.
I looked that up from the Chicken Scheme sources, to actually see what it does. It traverses inputs and outputs during macro execution. You could have mentioned that.
Sorry, I don't trust your judgements, your claims are simply not backed up by the source and how things actually work.
> Says the CL user
I can't remember seeing such ugly code for macro expansion in a CL implementation.
Take make-er/ir-transformer. That function's code is fully obfuscated. It bundles several utility functions, which don't belong there, as sub-functions. Some use access to lexical variables defined several dozen lines above, others don't. The resulting code is over a hundred lines long, even though the basic mechanism could be written down much more compactly. Each subfunction can only be understood by referring to the surrounding code above or below.
Then it takes a parameter for choosing between two different expansion mechanisms. From that, two new closures are created, which are then given to the user in, again, two differently named versions. The code itself contains lots of debug code, which simply outputs intermediate results and will overwhelm any human user for any non-trivial macro expansion.
>The Picolisp evaluator is not compatible with Lisp 1.5. It doesn't even have LAMBDA.
However, it keeps a lot of ideas from 1.5 that were later dropped by CL and others. Names don't matter: ideas do.
>You could have looked it up or tried it, before claiming it. I did it for you.
It wasn't entirely relevant to the present situation, until I mentioned that I thought cons was more performant. Thanks for trying it. I don't have an excuse, but thanks.
>Yeah, but claiming that the CL code was less efficient. Great move.
I didn't. I said that splicing unquote had to traverse the resultant list, making it slower than cons, which was pretty much irrelevant in this case. I then explained the real reason I used cons.
>I looked that up from the Chicken Scheme sources, to actually see what it does. It traverses inputs and outputs during macro execution. You could have mentioned that.
Quite honestly, I didn't see how it was relevant. We weren't discussing macro system internals until just now. It's not great, but it gets the job done, and that wasn't the point.
I'm starting to get really really frustrated here. You seem to miss every point I make, to the point that I'm very nearly wondering if it's deliberate.
> However, it keeps a lot of ideas from 1.5 that were later dropped by CL and others. Names don't matter: ideas do.
That's what I say: vague ideas don't matter much when forming language families. Code does. Books. Libraries. Communities.
What were those ideas that were dropped? Fexprs would be one. That was dropped when compilers were used and Fexprs were found not to be compilable. That happened in the 70s before CL existed. Pitman published his paper on macros in 1980, which summarized the view of the Maclisp / LML developers. What else?
The Lisp 1.5 manual gives an extended example: the Wang algorithm.
What were the 'ideas' that were dropped, even though somehow old code still runs?
> We weren't discussing macro system internals until just now. It's not great, but it gets the job done, and that wasn't the point.
The point was, claiming a 'slower compilation process' due to splicing backquote usage, while in fact the whole compilation of the example you gave was the really slower one, because you used a slower macro system which traverses code for renaming and re-renaming.
> You seem to miss every point I make
EVERY POINT? Are you really sure I miss EVERY POINT you make?
Personally I would only claim that you miss SOME of my points, not every. In some cases I would claim that we have different opinions, for example what makes a language and its dialect.
But I would not claim that you miss all my points.
Well, maybe not EVERY point. It just often feels like you emphasize the parts of what I write that I focus on least.
>What were the 'ideas' that were dropped, even though somehow old code still runs?
Well, fexprs and dynamic scope by default are the big ones, but also the idea of functions as lists, which are why it doesn't have lambda.
>The point was, claiming a 'slower compilation process' due to splicing backquote usage, while in fact the whole compilation of the example you gave was the really slower one, because you used a slower macro system which traverses code for renaming and re-renaming.
I appreciate the irony, but as I've now said several times, that wasn't my justification for using cons. I even said that the hypothetical speed increase would be negligible, and unlikely to be noticed, before you showed that the speed increase wasn't even there. This is one of the things it seems like you missed.
>That's what I say: vague ideas don't matter much when forming language families. Code does. Books. Libraries. Communities.
That's not entirely true. Sure, code matters a bit, but Java definitely comes from the C family, and the code doesn't transfer at all. As for communities, see for yourself: Scheme was born from the MACLisp community, and retains strong ties to the modern equivalent: Common Lisp.
> From the perspective of someone who knows Lisp, perhaps.
For what it's worth, as a perpetual novice programmer who knows Lisp a little and Nim not at all, I also found qwertyuiop924 (https://news.ycombinator.com/item?id=12617350 )'s example much clearer. (EDIT: Of course it's totally anecdotal, but I thought that it might be worthwhile to have a data point from someone who is not an expert in either language.)
Ah, thanks. I didn't mean to misrepresent Nim, but I don't know the language super well.
It seems Nim has a code templating system, which is nice, but I find a bit more confusing and less pleasant than the Lisp equivalent. As I said before, it just feels clumsy, especially compared to Lisp.
You can also do this (in case that's any clearer):
  template debugImpl(varName, value: typed) =
    stdout.write(varName)
    stdout.write(": ")
    stdout.writeLine(value)

  macro debug(n: varargs[expr]): stmt =
    result = newNimNode(nnkStmtList, n)
    for x in n:
      result.add(getAst(debugImpl(toStrLit(x), x)))
Whether the Lisp version is clearer or not is very subjective, and you obviously have a lot more experience with it so of course it's clearer for you. To me, the Nim version is far clearer.
I am aware that a template would work in this case (it would also do so in Scheme, but templates are generally less powerful, and I was making a point about imperative macro systems). And yeah, that mixture of templates and macros that you used is semantically similar to what my Scheme code did. But I find Nim discourages writing small templates, which I find myself doing a lot when writing macros in Scheme. Personally, I find that Nim less confusing but still quite awkward.
Many people claim that, but no Lisp compiler or interpreter beyond toy level actually uses it as such: It doesn't provide a lot of the syntactic information compilers need. However, Lisp code as AST is an excellent abstraction for macros, so that's what macros manipulate.
I disagree. I have written a lot of Lisp and Scheme code, and now I am very content with Nim. I also tried Rust and Haskell, but those languages are too cluttered for me. Nim is compact, readable, powerful, and performant.
Nim developed from the bottom up, being closely related to C which is nice and makes porting to other platforms easy. The Nim development team adds only features which really make sense. If they just remove those immature ugly features like strong spaces and the redundancy of underscores then Nim 2.0 could be a really great language.
OTOH, as a schemer, I much prefer Haskell and Rust to Nim, both theoretically and in practice. What syntax and abstractions you like are down to how you think, and Lisp programmers are programmers like any other, not incarnations of the spirit of smugness, as many people paint us.
> What syntax and abstractions you like are down to how you think
Obviously this is not only my problem, because there are only few applications written in Haskell. It is possible to write real-world applications in Haskell, as proven by Leksah and the window manager xmonad. However, would you write an MS Office clone in Haskell? I wouldn't. I am still honestly interested in Haskell, but I cannot understand why the cabal hell has not been fixed yet. Sandboxes and VMs are no option for me. Rust, Nim, and many other languages don't have this problem, which suggests that cabal hell is due to Haskell's nature.
Rust is another story. It is way more practical than Haskell, and I would use it for safety and systems programming. Nim however has become my favorite after a long journey of languages because I am really productive with it. Only Lisp has a similar productivity, and sometimes I still use it. Nim has the advantage of being very close to C which makes porting to other platforms extremely easy -- and hence also all my Nim code.
Rust is an awesome language. It has the power to replace C++ and Java as industry standard.
However, there are three points which annoy me. First, the Rust compiler is huge (LLVM based). Second, it requires a native Rust compiler for bootstrap. Third, it doesn't compile to C which makes porting to other platforms difficult. Nim doesn't have these problems. It is small, self-hosting, and easily portable.
> Third, it doesn't compile to C which makes porting to other platforms difficult.
Not compiling to C is a feature. It insulates us from the undefined behavior of C (signed overflow never results in UB in Rust, for example) and allows us to actually get proper debug info.
Compiling to C vs using LLVM is a complex design tradeoff. For example, the Posix standard specifies a C interface. Quote: "The stat structure shall contain at least the following members..." This means you can wrap 'struct stat' in Nim once and be done with it (since the mapping to C is by generating C code) whereas for eg Rust you need to wrap it once for every different OS (and potentially even for every OS version+CPU combination), since the offsets and sizes can differ. So yes, porting to other platforms really is easier when you have a C code generator.
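As a sketch of what that single wrapper can look like (field type simplified to clong here; the real posix module uses its own types):

```nim
# The generated C only ever says `struct stat`; the platform's C compiler
# supplies the actual field offsets and struct size at build time.
type
  Stat {.importc: "struct stat", header: "<sys/stat.h>".} = object
    st_size {.importc.}: clong  # declare only the fields you actually use
```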
There are other ways to address this issue. You can do what Swift does and call into libclang so that you can pull out the structure definitions, for instance.
Furthermore, if your frontend doesn't know about structure layout then you forgo some important systems language features. For example, you can't implement sizeof as a compile time constant.
> There are other ways to address this issue. You can do what Swift does and call into libclang so that you can pull out the structure definitions, for instance.
Sure and you only need to call libclang for each different platform. Or is it on every platform? ;-)
It would be physically impossible for a call to that function to work with a single definition of the struct stat, because the set of fields (and their ordering) and struct size differ between platforms. So that libc wrapper must provide separate definitions of the struct for each platform.
Ah, sorry! I was thinking about things like "the size of this type is different per-platform," not "every single platform has a different definition of this struct."
I still think that, given Cargo, the one-time cost makes it worth it, but after seeing my error, I think that the point makes more sense. Thank you for being patient. :)
That's true, but Rust and its compiler components are fairly well ported, LLVM's going to be near the top of the porting list for any new architecture, and processors are almost a monoculture at this point.
With so many programming languages around looking very similar, and realising how enormously tough a challenge it is to get adoption: why do programmers pursue writing a general-purpose language and trying to have it adopted?
Every language has pros/cons and applicable use cases. Think of Python. It is a beautiful language that is high level and uses whitespace, but it requires the Python VM and is very slow. Nim has a somewhat Pythonic syntax, compiles to C and then to binary executables, and is very fast with good metaprogramming facilities. That is appealing to a lot of people. I encourage you to read the free chapter of Nim in Action, as the author does a good job explaining why.
But in the specific case of Nim - AFAIK there aren't that many languages around with a similar set of characteristics. For me the killer is the combination of GC (I am quite fine with not having to bother with memory management), readability, reasonably high-level constructs including closures etc, nice OO support, extremely good C and C++ interop, and really good performance.
Your comments in this thread are breaking the HN guidelines by being unsubstantive, calling names, and going on about downvoting. We ban users who do these things repeatedly, and have warned you before. Please up your game if you want to keep commenting here.
Finally, please feel free to AMA!
1 - https://manning.com/books/nim-in-action?a_aid=niminaction&a_...
2 - https://manning.com/books/nim-in-action?a_aid=niminaction&a_...
3 - https://salt.bountysource.com/teams/nim