Crystal in Production: Diploid (crystal-lang.org)
246 points by sdogruyol on Oct 28, 2017 | 120 comments



Crystal is a Ruby-inspired compiled language, which allows it to run blazingly fast with a very low memory footprint. It uses LLVM to emit native code, making use of all the optimisations built into that toolchain.

I've been using Crystal for more than 2 years and have some projects in production. Can't wait for 1.0 :)


We have been using Kemal in small projects and we are super happy with it. At the company I work for, we are evaluating Crystal and Elixir as the next language to start investing in (we are on Ruby/Rails now). Two things Crystal is missing to clearly win this internal battle:

* immaturity of the ecosystem (libs, frameworks, best practices, etc.), which will likely be solved in 1-2 years

* coming from Ruby, I wonder how powerful its metaprogramming capabilities are. In Ruby, using metaprogramming I can save tons of time by cutting out code and creating human-friendly, clean interfaces that would otherwise require a lot of code. How powerful are macros compared to Ruby's meta capabilities, where you can do pretty much anything?

On the bright side (as regards the Elixir comparison), it has a great type system that would save us from tons of bugs, it's faster, and the transition from Ruby would be slightly easier.


I've been working on https://luckyframework.org and I can say that metaprogramming in Crystal is extremely powerful and much easier for me to use than in either Ruby or Elixir.

You can see a bit of what the macro system enables here: https://robots.thoughtbot.com/lucky-an-experimental-new-web-...

It generates a bunch of query methods and type-specific querying. Without the macro system this would have been impossible.
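For a taste of what that enables, here is a minimal sketch (made-up names, not Lucky's actual API) of a Crystal macro that generates one typed query method per field:

    class User
      # Generates a typed class method (e.g. User.by_email(String))
      # at compile time for each declared field.
      macro field_query(name, type)
        def self.by_{{name.id}}(value : {{type}})
          # A real implementation would build and run a typed query here.
          "SELECT * FROM users WHERE " + {{name.id.stringify}} + " = " + value.inspect
        end
      end

      field_query email, String
      field_query age, Int32
    end

    User.by_email("me@example.com") # compiles
    User.by_age(42)                 # compiles
    # User.by_age("42")             # rejected at compile time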


> You can see a bit of what the macro system enables here

The first thing that struck me about this post is that it is a good illustration for why dynamic-typing people and static-typing people often butt heads:

Most static typing we see is shallow like this, where the concern is shoving data into an unstructured object like a String, and the problem being solved is trivial.

In this case, String? solves the immediate problem of whether or not the value is really nil, but it fails to consider other aspects:

- Are you really intending to remove the country code from just US numbers, or from all North American ones? Quite possibly you're OK with stripping it from all North American ones rather than just US ones, but if not (e.g. because you later use the presence or absence of a country code to differentiate on billing), it does the wrong thing.

- Does the code otherwise enforce a format where "+" cannot appear elsewhere (because someone e.g. decided to use it as a non-standard separator)? Does it ensure nobody has input country codes without a "+"? (I've lost count of the number of times I've seen just "1" or "(1)".) Does it ensure nobody has used spaces ("+ 1")? And as for the number after that: it does nothing to ensure the returned local number has consistent formatting.

Maybe this is all ok in the code you looked at and the data is guaranteed to only ever have "+1" and be nicely formatted when you strip it.

But it is a really bad way of selling static typing as a feature of the framework, as my first reaction is "but it gives me nothing, as there are all kinds of other checks I also need to do".

That means either modelling the type of the data much more precisely - which I can do with or without static typing - and/or building a test suite that includes testing any place where data can get put into that table in the first place.

In both cases, the kinds of problems above tend to fall away with little to no effort.

I'm all for typing used for data validation, but an example showing what more precise modelling would look like would be a lot more convincing.

E.g. I detest Haskell syntax, but one of the strengths of static typing as Haskell developers tend to apply it is that there's a lot more focus on the power that comes from modelling the data more precisely.
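To make that concrete, here is a hypothetical sketch (not Lucky's API) of what more precise modelling could look like in Crystal: a dedicated type that can only be constructed from a validated US number, so the formatting concerns live in one place:

    struct UsPhoneNumber
      getter digits : String

      def initialize(raw : String)
        # Normalize away spaces, dashes, dots, and parentheses first.
        normalized = raw.gsub(/[\s\-().]/, "")
        unless normalized =~ /\A\+1\d{10}\z/
          raise ArgumentError.new("not a US number: #{raw}")
        end
        @digits = normalized
      end

      def without_country_code : String
        digits.lchop("+1")
      end
    end

    number = UsPhoneNumber.new("+1 (555) 123-4567")
    number.without_country_code # => "5551234567"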


TL;DR I totally agree. This can be done with Crystal and Lucky but I wanted to keep the example simple.

I totally agree with your sentiment and you have a lot of great points. This post was not meant to get too deep into things right off the bat, so it left out a lot of this stuff.

In this case the phone number always has a country code with +1 because we have a validation that ensures it won't make it into the database without it. We also only accept US country codes in the validation.

I get your point though that just having a `String` doesn't really guarantee that those validations took place. Luckily, LuckyRecord has the idea of custom types that can define their own casting and validation behavior, so we could have done this:

    field phone : UnitedStatesPhoneNumber?

And it would validate that the string param is a valid US number before saving. It would also do that for queries, and you could add whatever other type-specific methods you want to it. So you could do:

    def formatted_fax_number
      phone.try { |number| number.without_country_code }
    end

But like I said, I think this is fitting for a whole separate post, rather than an intro style post :D


And for a Ruby/Elixir/dynamic lang programmer just catching `nil` is actually a pretty big win, even without the custom type.

I will go more into depth about leveraging the type system with Lucky for even better data modeling in one of the Lucky guides.


I just think it makes that kind of intro use case a lot less interesting to cover, as it's not very compelling, exactly because of reactions like "but that's not how I'd do it".

Even more so because the "try" syntax is exaggerated. This works fine:

    fax_number.try(:gsub, "+1", "")
And with a new enough version of Ruby, this works too:

    fax_number&.gsub("+1", "")
You still need to remember to do it of course, but using the "old" syntax makes the problem seem exaggerated.


Only replying to your last point.

I've had a chance to see the benefits of static typing in action, and I will always agree they put it a bit ahead of dynamic typing.

That being said, I feel static typing is a bit overrated in web dev, and outside of enterprise systems overall. For example, when I tried to quickly immerse myself in Elm, I found that obsessiveness with static typing can make a dev's life very miserable and can basically require Cartesian-product structs for many scenarios -- especially if you have to work with data coming from several API providers while providing an API yourself to users whose requirements periodically change.

I might be a bit in fanboy mode here, I admit, but the compromises that Erlang / Elixir make seem very adequate -- they forgo some of the benefits of static typing in return for a bit more productivity / less friction. Granted, if you are irresponsible then you can easily shoot yourself in the foot with them as well. No magic bullets.

(Conversely, if you apply some discipline -- which is still probably two orders of magnitude less than the discipline you need in C/Cpp -- then Erlang / Elixir's dynamic typing is almost like static typing.)

This isn't blind hate towards static typing; having exposed myself for educational purposes, for limited amounts of time, to Elm, Pony, Crystal and just an hour of Haskell -- and having worked with Go professionally for several months -- I am not as impressed as I expected to be. I clearly see the benefits but, again, I feel they are a bit overrated. A language with a reasonable compromise between type safety and programmer productivity seems to be my cup of tea. And I am not claiming that this "reasonable compromise" -- which is a very subjective term -- is a universal truth. Not at all.

Lastly, Elixir's macro system is not hugely powerful (at least compared to what I know about Clojure), but it has served all the semi-arcane tasks I tried to achieve with it -- and succeeded. But that's of course very specific to one's work, and it too can't be claimed as an absolute truth.

Not willing to derail here. Your mention of Haskell triggered a few associations. Apologies if the comment is out of place / topic.


The example from the blog post is a classic error we've all made. There seems to be an obvious opportunity for Rails to do something creative about that.

I like the String? sugar and believe there is probably some opportunity in Rails to address it. For example, make a new NillableObject class that wraps the real object or delegates via method_missing.

The idea would be to throw a warning, raise an error, etc. when accessing a nillable attribute. This could obviously get more robust than the example below - hook into read_attribute, leverage validations, log a warning instead of raising a runtime error, etc. - but hopefully the point is made.

  class NillableObject
    def initialize(obj = nil)
      @obj = obj
    end

    # Direct calls are forbidden: the wrapped value may be nil.
    def method_missing(name, *args, &block)
      raise "I'm nillable, don't call methods against me directly"
    end

    # Safe access: the block only runs when a value is present.
    def try(&block)
      return if @obj.nil?
      block.call(@obj)
    end
  end

  ns = NillableObject.new
  ns.gsub("f","g") # RuntimeError: I'm nillable, don't call methods against me directly
  ns.try{ |o| o.gsub("f","g") } # => nil

  s = NillableObject.new("food")
  s.gsub("f","g") # RuntimeError: I'm nillable, don't call methods against me directly
  s.try{ |o| o.gsub("f","g") } # => "good"


This totally makes sense. I think the disadvantage is that this is done at runtime, but assuming you have a test that hits it, it would catch the bug.

My colleagues wrote a library that does something like this. You may want to check it out: https://github.com/thoughtbot/wrapped


Nice, looks well thought out.

Yeah, the runtime thing is a disadvantage compared to type systems for sure. It might be possible with a NillableObject to do some kind of sanity checks at Rails initialization time. Some other safety nets could also help, for example a ViewModel object to dictate some constraints on views (type constraints and others).

I like Lucky’s table definition in the model class. A similar construct in AR might be useful in implementing some sanity checks. I often wish there was a way to get a table definition into AR while keeping the goodness of Migrations.


I wonder if the culture of "Optional" with ".and_then" / ".or_else" has any foothold in the Ruby or Crystal communities. The functional approach is so much simpler and more elegant; with Ruby's blocks, it can be made to look natural too.

With promises and things like array.map being widely accepted in e.g. the JS community, I'd hazard to say that mainstream industrial programming is finally starting to embrace the use of monads. It would be great to embrace the most sorely missed of them all, Option / Maybe, instead of the "billion dollar mistake" of null. ("Nullable" is a half-step in the right direction.)


I believe that while monads are great, Crystal obviates the need for Option, because String? doesn't mean "nullable string"; it means (String | Nil), i.e. "the type which is the union of String and Nil". This is a much more powerful and generic concept than nillable types, as you can call any method on this object which is defined on both String and Nil. One such method (which happens to be defined on every type) is try, which is equivalent to map on an Optional monad. I'm sure you can see how this generalises: type unions can represent the Optional type (with no overhead), along with all the methods you could implement on it.

Furthermore, it's actually more powerful, because it is implemented at the type-system level, meaning flow typing works. That makes code much cleaner, as you can use normal if statements with boolean conditions to "unwrap" these types (really just removing types from the union).
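A small sketch of what flow typing looks like in practice (hypothetical code, just for illustration):

    def greet(name : String | Nil)
      if name
        # In this branch the compiler narrows `name` to String,
        # so String methods work without any unwrapping.
        puts "Hello, #{name.upcase}!"
      else
        puts "Hello, whoever you are!"
      end
    end

    greet("crystal") # => Hello, CRYSTAL!
    greet(nil)       # => Hello, whoever you are!

    # The same narrowing via try, the map-like method mentioned above:
    "crystal".try { |n| n.upcase } # => "CRYSTAL"
    nil.try { |n| n.upcase }       # => nil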


This (chaining "try" calls) is basically what I had hoped would happen :)

It is the "happy path", represented compactly but without a way to forget and step on a nil.


Ruby has new syntax to handle calling methods on possibly-nil objects.

obj&.is&.nil? can replace the use of try: obj.try(:is).try(:nil?)


Agree, love this syntax

  obj = nil
  obj&.gsub("f","g") # => nil 
  obj = "food"
  obj&.gsub("f","g") # => "good" 
Unfortunately this is likely not second nature and will continue to bite us, especially where our expectations about AR models are not well thought out. A wrapping class for nillables might force safe access of AR attributes.


Would love to hear your take on the good and bad parts of Crystal. Been toying around with it for a few small things, and been following the blog... would love to hear more input from users. :-)


Would love to have nokogiri-level XPath or CSS3 selector capabilities. I'm up to my neck in projects, but this looks like a good start: https://github.com/jgehring/hcxselect/tree/83d3edaa8a6944d20...


We have libxml2 bound in the stdlib with XPath support; do you need more?


Yes! It would take a lot of hard work to convert all my CSS3 selectors to XPath.


I think having CSS3-to-XPath selector conversion in the stdlib (or perhaps a shard, depending on how complex it is) would be possible. It would be great if you could open an issue in Crystal's repo.


This shard implements a CSS selector API on top of an HTML parser: https://github.com/kostya/modest


If only it was compatible with the existing XML interface...


XML has way more surface area than HTML; you could only query a small subset of valid XML documents with CSS. Plus, these days nobody uses XML parsers for HTML anymore; their behavior has diverged. Things like the behavior of unclosed elements and valid child tags are defined in the HTML spec and are not possible in XML.


Every time I read a HN comment claiming a language to be "blazingly" fast, I wish they posted a link to statistically sound benchmarks, including VM warmup, GC collection times, etc.

Otherwise I just throw these adjectives away. I argue a new compiler/interpreter will always lose against the JVM, which has thousands of man-hours of optimization built in.

The JVM in turn will always lose against a clever, memory-conscious low-level implementation in Rust or C or assembler.

Please don't advertise speed without any studies or comparisons to back up that claim.


We compile using LLVM, which has had many, many man-years put into it. Our GC is bdwgc, which, while generic and conservative, has also had a lot of optimization put into it and works very well.

We don't pretend to be a mature language with even so much as predictable performance characteristics, but the "blazing fast" statement is there to indicate that we're typically much closer to C performance than even Go is.


Every time I read such a comment on HN, I smile and remember the days when C programs on 8-bit micros, and later MS-DOS, were made of 80% inline assembly statements, because the compilers were quite lousy.

Fran Allen is of the opinion that the adoption of C set the field of compiler optimization research back to pre-history (Coders at Work).

It took 40 years of optimization research, and clever use of the UB defined in the standard, for C compilers to achieve the code generation quality they have nowadays.


> Fran Allen is of the opinion that the adoption of C set the field of compiler optimization research back to pre-history (Coders at Work).

Yes! The entire book is wonderful, but as a compiler writer myself, Fran's interview really stuck with me.

The relevant passage, for the curious:

———

Seibel: When do you think was the last time that you programmed?

Allen: Oh, it was quite a while ago. I kind of stopped when C came out. That was a big blow. We were making so much good progress on optimizations and transformations. We were getting rid of just one nice problem after another. When C came out, at one of the SIGPLAN compiler conferences, there was a debate between Steve Johnson from Bell Labs, who was supporting C, and one of our people, Bill Harrison, who was working on a project that I had at that time supporting automatic optimization.

The nubbin of the debate was Steve's defense of not having to build optimizers anymore because the programmer would take care of it. That it was really a programmer's issue. The motivation for the design of C was three problems they couldn't solve in the high-level languages: One of them was interrupt handling. Another was scheduling resources, taking over the machine and scheduling a process that was in the queue. And a third one was allocating memory. And you couldn't do that from a high-level language. So that was the excuse for C.

Seibel: Do you think C is a reasonable language if they had restricted its use to operating-system kernels?

Allen: Oh, yeah. That would have been fine. And, in fact, you need to have something like that, something where experts can really fine-tune without big bottlenecks because those are key problems to solve.

By 1960, we had a long list of amazing languages: Lisp, APL, Fortran, COBOL, Algol 60. These are higher-level than C. We have seriously regressed, since C developed. C has destroyed our ability to advance the state of the art in automatic optimization, automatic parallelization, automatic mapping of a high-level language to the machine. This is one of the reasons compilers are …[sic] basically not taught much anymore in the colleges and universities.

Seibel: Surely there are still courses on building a compiler?

Allen: Not in lots of schools. It's shocking. There are still conferences going on, and people doing good algorithms, good work, but the payoff for that is, in my opinion, quite minimal. Because languages like C totally overspecify the solution of problems. Those kinds of languages are what is destroying computer science as a study.

———

(pp. 501-502)

I recommend that any programmer who hasn't read this book give it a read. In fact, I think I might give it another read this week :)



I have dedicated this very weekend to digging into Crystal. Uncanny coincidence seeing it popping up here, today of all days.

It is a tremendously nice looking language - and I say this as a pythonesque guy who never wrote one line of Ruby. The feeling I get from the community and projects is that it's a very up-and-coming thing, about to take off in a major way. There is just too much enthusiasm, and too many things done right, for Crystal not to earn some solid share within the very foreseeable future.


Crystal really does feel like Ruby without the annoying parts, plus the macro system feels like a _better_ metaprogramming facility.

On the other hand, I've read reports that sufficiently large codebases begin to hit machine memory limits in the compiler, because it has to scan and construct every possible union type in the codebase. Not sure what there is to do about that, other than bisecting the codebase into shared libraries over time.


The actual memory usage comes from the number of instantiated methods. If you write a method, an entirely new implementation of it is copied and compiled for every different list of argument types. For example:

    def foo(bar)
      bar + bar
    end

    bar1 : Int32
    foo(bar1)

    bar2 : String
    foo(bar2)

    bar3 : Int32 | String
    foo(bar3)
The above code gives you three methods to generate code for: foo(Int32), foo(String), foo(Int32 | String).

This isn't really a problem for small projects, but with large projects (60k+ loc), you end up with some methods such as puts, Array(T)#<<, etc with thousands of instantiations.

We have some edge-case semantics (we don't cast arguments to their restriction types) which we plan to remove before 1.0, and which also prevent us from instantiating only foo(Int32 | String) when we change the definition to `def foo(bar : Int32 | String)`. Hopefully in the future we can allow the compiler to take advantage of method argument type restrictions to reduce the number of instantiated methods and speed up the compiler.

That doesn't mean the compiler will be super slow if you don't annotate your methods; it just means that the stdlib's and shards' public APIs will start annotating their method arguments more (which gives you better docs and error messages anyway).

For more info: https://github.com/crystal-lang/crystal/issues/4864
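As a rough sketch of what such an annotation looks like (a hypothetical method, not stdlib code), a type restriction changes nothing about today's behavior but already gives clearer docs and errors:

    def join_words(words : Array(String), sep : String) : String
      words.join(sep)
    end

    join_words(["a", "b"], "-") # => "a-b"
    # join_words("a", "-")      # compile-time error: expected Array(String)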


They also have a very detailed blog post about how their compiler works: https://crystal-lang.org/2015/03/04/internals.html


> On the other hand, I've read reports that sufficiently large codebases begin to hit machine memory limits in the compiler, because it has to scan and construct every possible union type in the codebase

I'd be interested in why a compiler would even need to do that in the first place --- constructing every possible one instead of every one it encounters as clearly needed right now.

Edit: OK, I see it now; you can have stuff like Int32|String. Still, though... TS can "handle" that too?


Maybe we're confusing type inference with code generation here? TypeScript doesn't compile to machine code ahead of time; it only has to verify the types.

Here is one of the creators of the language talking about the global type inference of Crystal:

https://www.youtube.com/watch?v=xbdVs4FhZac&t=16m


What are some annoying parts of Ruby?


It's a personal preference, but I'm talking specifically about most metaprogramming functionality (eval, define_method, etc.). It can lead to obtuse code in the hands of both experts and beginners who are looking for the most clever way to DRY up code.

And if that were the worst part of metaprogramming, it might be OK, but it's not, because the existence of metaprogramming puts a ceiling on just how fast the Ruby runtime can be.


Yep. It's not annoying in libraries or frameworks, but seeing engineers metaprogram business logic makes me sad.


Why does it? If the metaprogramming is readable and sound, I don't see where the problem is. Most of the time, good metaprogramming can turn 30 lines into 5, which is not systematically better for the reader, but can help avoid some of the most mundane repetition.

It's true that it shouldn't be abused and it most often is, but it does have its place.


So can composition.

It makes me sad for the reason you said: it's most often abused.


Thank you for the insightful response.

I'll never ever touch upon these areas, so I still hold Ruby in high regard. It simply is fun.


Have you checked out Nim? If you're a Pythonista you might find it more similar to what you're used to.


I most certainly have. I'm working with Nim at the moment. I really, really like the tooling, the speed, the flexibility, and many aspects of the syntax. But I really am getting to dislike the endless complexity - just such a goddamn huge language, with always yet another way to do something - and a lot of the implicit (or so it seems) stuff (I'm not comfortable exporting my procs by placing an asterisk somewhere, and somewhere so illogical I can never remember where to put it), and I'm not at ease with the weird variable naming and casing conventions. Don't even mention the arrays and the seqs... Taking Crystal for this weekend spin to see if this is really the direction I should be going.


Hey, core developer of Nim here.

Thank you for this awesome feedback. It's really great to see that you like so much about Nim. Of course, I am more concerned about the negatives and I would love to do something about them.

In general it would be great if you could elaborate on your remarks. Some questions that I have: Are there other complexities in the language that you dislike? Is there something you think Nim would be better without?

Some specific remarks and questions:

> a lot of the implicit (or so it seems) stuff (I'm not comfortable exporting my procs by placing an asterisk somewhere, and even somewhere so illogical I can never remember where to put it)

An easy rule to remember this is "the asterisk goes after the identifier", for example:

    proc life*() =
      echo 42

    proc identity*[T](x: T): T =
      return x

    var ident* = "hello"

    type
      World* = object
        field*: int
> not at ease with the weird variable naming and casing conventions

What's weird about them?

> Don't even mention the arrays and the seqs...

Please elaborate on this as well.

By the way, you might have done this already but in the future please consider sharing this kind of feedback with myself or somebody else on Nim's team directly (via IRC/Twitter/Gitter/email). It's easy to miss something like this on HN. Please let me know if there is something I could do to make giving feedback easier for everyone, I understand that some people may not feel comfortable coming into Nim's IRC channel and giving criticism that way. It's very valuable to us though, and we don't want it going missing.


> proc identity*[T](x: T): T =

Personally, I still have symbol-overload PTSD from my Perl 4 days. When I see a flurry of that many symbols crunched together, followed by a dev saying "what's weird about them", I stay far, far away from that language. Could just be me.


I was asking what's weird about the variable naming convention which the OP made a remark about.

You will actually find that Nim typically avoids operators. For example, boolean expressions use `and`, `or` instead of `&&`, `||`. I personally like that a lot.

Having to write `public` before every procedure would, on the other hand, be a massive PITA :)


I agree. I think Ruby strikes the perfect balance between symbols and words, and I think that spirit has been kept alive in Crystal despite its syntax extensions. I truly enjoy Crystal's "visual weight" (apart from proc syntax, that's just ugly).


Thanks Dom, Nim is a great language! <3


Dominik! Of course you are reading this. Your scope and level of activity is absolutely breathtaking. Thank you! I really do like much about Nim, I wish it every kind of success, and I probably will stick with it as my compilable language of choice - not that I imagine my sticking or not sticking is going to make any conceivable difference in the greater scheme of things.

Trying to get up to speed, I have launched into what may be too heavy a web-app project for a beginner, and frustrations are beginning to mount as I hit a lot more conceptual walls than I expected to. Every now and again I can't help sneaking an envious look at the Crystal camp, where syntax is easy, community is growing, spirits are high, and I can go from zero to banging out a respectable Kemal app within 48 hours. At a price, of course, as it always turns out. There really is no such thing as a free lunch.

The Nim tooling, as mentioned, is in a class by itself. Nim goes the sensible way of compiling to C, a far more efficient and transparent process than the LLVM route the Crystal folks have chosen. And it shows: Nim's portability is far ahead of Crystal's, and it produces better executables - smaller, a lot less brittle, and apparently more compliant. LLVM executables just often behave weirdly, like identifying themselves as "shared libraries", and going bust when UPX-compressed (which I do not need to do, but it's a tell-tale sign).

Anyway, let me try to address your points:


> Is there something you think Nim would be better without?

Definitely, but these are entrenched features, and clearly not to be fiddled with at this stage. As hinted above: the sheer size of the thing, and the multiple ways there always seem to be to do whatever. Not necessarily a problem when writing code, but a real burden when trying to read it: "So what's a nice @ like you doing in a declaration like this?" "Oh, you're another way of declaring - or is it creating? - a seq. Which is sort of another way of declaring an array. Which has a couple of different implementations. Excuse me while I go and bang my head". Way too much of that stuff. Beginning with the alternative forms of seemingly each and every proc call: standalone and object-like, with a dot. I know this is meant to bridge the gap between us procedural folks and the object crowd. But it's still an obfuscation. It leads to disappointment, not least because at first look, Nim looks so deceptively clean and simple. In reality, I find the reading of Nim code a lot harder than it needs to be.

For my too-heavy beginner project, I am porting some old Delphi code. And I mean old! I wrote it twenty years ago, and haven't looked at it since. Despite which, I have no trouble whatsoever understanding what it all means. But I do have trouble working out how best to represent my simple packed array[] of byte, what with the multitude of choices I have, though not a single one of them as concise and expressive as the Pascal, and all harder to comprehend after two weeks (as I discovered) than the original after two decades. I appreciate the enormous dedication, knowledge, skill, and talent brought to Nim by Andreas and yourself, and whichever other core developers are out there, but damn!, I wish some usability freak had been consulted, and a lot of the heavyweight CS stuff had been kept solidly under lid and out of sight. There simply is too much unnecessary choice. It's overwhelming.

And the overloading! This is a purely personal point of view; please ignore. But I don't like it, I never do. I see there may be use cases - that '+' might need to have a new meaning assigned in some new context - but to my uncomprehending eyes, that should be a hardwired lexer/compiler thing, never a userspace toy. And yes, I know just about every language does this nowadays. It's just that Nim code seems to abound in it. And again, it hampers readability. A '+' should mean a '+', always, everywhere, no need to ever backtrack in order to understand.

> An easy rule to remember this is "the asterisk goes after the identifier"

Yes, I sort of remember that now. But it's easily forgettable. And those poor characters - asterisk [how to escape in an HN comment?], @, #, !, etc. - don't carry any implicit meaning. Their use works against the ethos of Nim, at least as seen on the glittering surface. "public" or "export", or even just "exp" would do the trick, never mind the typing. If I wish to compress meaning down to arbitrary squiggles, I should go and code in C or Perl or some such nonsense.

> What's weird about them? [weird variable naming and casing conventions]

That point has been beaten to death in threads all over HN, Reddit, and StackExchange. All the "this variable name equals that variable name, except when it doesn't, and give or take an underscore or two". It makes for uncertainty. The rules are sufficiently strange and arbitrary that I am never quite sure I remember them correctly. Like so many others, I should vastly, VASTLY prefer the way it's done almost everywhere else. With or without case sensitivity, but clear and unmistakable. It is not a huge problem, but it's a constant irritant.

> Please elaborate on this as well [arrays and seqs]

I touched upon it above. I'm sure there is loads of clever, competent, unassailable reasoning behind the way things are done here. Me, a rather naive and superficial user, I just don't grasp it. Why do I need two very distinct types performing essentially the same function? And the array type in different flavors to boot, the fixed size and the open variety, each with its own incompatible set of behaviors? Again, like my other points, it's not a huge problem in itself, but all these double (and triple) options do add up, and [apparently] needlessly add to the burden of extra stuff I have to carry in my head. Clear and simple explanations of the need and rationale would go a long way towards making this acceptable. As it stands, it merely annoys.

> in the future please consider sharing this kind of feedback with myself or somebody else on Nim's team

You are absolutely right, I am grossly remiss. I spew out my somewhat unqualified opinions in a place like this, but hang back from entering the lion's den - somehow thinking that as noob as I am, I really have no useful input to give. Right at the moment, I am insanely pressed for time and resources, but the instant I am rid of my present day-job, which I hate, I shall make an effort to participate much more actively in the community.


I just wanted to add to the arrays and seqs comment.

>Why do I need two very distinct types performing essentially the same function?

Arrays are very simple static blocks of memory and can be allocated on the stack, which is really fast, or at compile time. They can't ever be resized.

Sequences have overhead due to being pointers to pointers; however, this means that they can be safely resized.
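A tiny sketch of that difference in Nim (hypothetical example):

    var a: array[3, int] = [1, 2, 3]  # fixed size, can live on the stack
    # a.add(4)                        # compile error: arrays can never grow

    var s: seq[int] = @[1, 2, 3]      # heap-allocated, resizable
    s.add(4)                          # fine
    echo a.len, " ", s.len            # -> 3 4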

It sounds a bit like you're overwhelmed. Maybe the language needs a really pared-down version of nim-by-example to explain these sorts of things.


I actually understand that - but not from any docs I've seen. And then there are open arrays, but only as params in procs. Overwhelmed? Nah, just occasionally annoyed. Mind you, I do like my world simple. Hence my envious looks at the Crystal camp.


welcome to the language vision quest. may you find what you seek. you probably won't!


Crystal reminded me of Mirah [1], a Ruby-like programming language which is statically typed but relies heavily on type inference in order to compile efficiently to JVM bytecode, while retaining the appearance of being a dynamically typed language.

(I've got no real point here other than to say I had a strong feeling of deja vu while reading about Crystal.)

[1] https://en.wikipedia.org/wiki/Mirah_(programming_language)


Why are people so interested in running things on the JVM? I'd much rather compile and ship binaries than force someone to install Java (or Ruby or Python).


For one, the JVM is available on multiple platforms. Rather than requiring a project to target each of these platforms individually, they can target a single platform—the JVM.

There are plenty of people who dislike the JVM for any number of reasons (performance often being cited as one) and prefer to target specific platforms individually. Like many other things, it's a tradeoff, and people may arrive at different decisions depending on their priorities.


Modern languages like Crystal target LLVM IR, which, if I'm not mistaken, supports even more platforms than the JVM.


But it would need to be recompiled right?


Correct. But due to the separation of the front-end and back-end, the language developer doesn't have to do extra work to support different architectures.


With the JVM you get a couple of nice features: a good garbage collector, a JIT compiler [1], platform independence, and access to a lot of libraries through Java interop.

The requirement for a JVM to be installed on a system isn't that dramatic in my opinion, though that may change for others' use cases (e.g. if you are shipping code to embedded systems, it's probably not a good option). Also, if you are OK with heavy app sizes, you can ship with OpenJDK, although I acknowledge that is in as bad taste as shipping with Electron.

[1]: JIT compilers can help with performance even for pretty static languages (e.g. Java is not much more dynamic than C++) because you have runtime statistics for that specific run and can speculate using them. Although, with smarter branch predictors etc. in CPUs, some of the benefits of JITs are slowly disappearing.


There's a lot of existing tooling around the JVM: debugging, diagnostics, profiling, performance tuning, deployment, security, enterprise-y concerns... It also boasts perhaps the most advanced VM implementation of its kind—hard to beat the number of collective person-hours invested in it over its long lifespan.


The JVM is a fairly nice platform overall if somewhat heavy (though with modules in JDK9 you can ship an executable package competitive in size to a Go binary).

All that aside, performance is great and there's a massive ecosystem of mature libraries.

I’m not saying it’s always appropriate, but it’s not a bad platform.


You can ship native binaries with Java; it is only a matter of getting the right toolchain for it.

Java is not only OpenJDK.


Jars are also just zip files, so you can prepend a shell script without impacting Java's/zip's ability to decompress them. Prepend a shell script that runs Java on the file itself and you have an executable (though not a true native ELF/Mach-O binary).

It's not much worse than a standard dynamically linked binary at that point; you just need a JVM on the system. Combined with the AOT compiler (as it stabilizes), this makes Java applications more competitive with standard binaries.
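For illustration, a minimal version of that trick (file names are hypothetical; assumes an existing app.jar). First, a two-line stub, stub.sh:

    #!/bin/sh
    # Re-invoke java on this very file; the jar content follows below.
    exec java -jar "$0" "$@"

Then concatenate and mark it executable. Zip readers locate the archive from the end of the file, so the prepended bytes are ignored:

    cat stub.sh app.jar > app && chmod +x app
    ./app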


The sbt module I'm using for creating JARs (sbt-assembly) has this feature and I really like how seamless it makes creating executable JARs. I used it in a couple of cases to create artifacts/executables for research projects that can be easily run without many dependencies, usually just a recent enough JVM.


Crystal is one of the most exciting new languages out there. I have been using it for my one-off tasks at Ola and it works pretty well. It is a huge plus point that you have the safety harness of statically typed languages and speed comparable to Java and Scala (unscientific benchmark here: https://github.com/kostya/benchmarks).


> Features that we take for granted in Ruby or other languages, are not available in Go. Examples include operator overloading or keyword extensibility, as well as true OOP.

These are literally the top reasons why I love Go. To each their own?


Crystal is a super cool project, and doing websockets with Kemal (Crystal's Sinatra equivalent) was one of the easiest ways of hosting websockets I've encountered so far.

That said, I'm eagerly waiting for true parallelism support, since the use case we have in mind would greatly benefit from it. Some of the testing tools are also not quite as polished as RSpec (yet).


Kemal: Fast, Effective, Simple web framework for Crystal. Website: http://kemalcr.com/

P.S: I'm the author of Kemal :)


So cool! I really enjoy working with Kemal. :)

The State of Crystal at v0.21 article (https://crystal-lang.org/2017/02/24/state-of-crystal-at-0.21...) stated that multithreading with work stealing was coming "soon" and that you already managed to run kemal in parallel. Can you share anything about the current state of that project?


Even though it's experimental, I've successfully compiled Kemal with multi-thread support. The throughput was OK. It's promising and will definitely help CPU-bound apps :)

You can check the wiki for more info https://github.com/crystal-lang/crystal/wiki/Threads-support


Is it possible to pass a cookie (session ID) using websockets with Kemal? Or do I have to manage authentication manually?


I just send my JWT token to check the validity of the user's session


Actually, the two projects I am currently working on use websockets, and for certain, Crystal is the fastest and easiest way to prototype a websockets server and then turn it into a production server in quick time.


Here's a great blog post about Kemal and Websockets :) http://kemalcr.com/blog/2016/11/13/benchmarking-and-scaling-...


I've been using Crystal for only a few months, but I'm impressed with its speed and low memory footprint. I'm starting to build some small services with it now, and using Kemal or the Amber framework, I can see some medium-sized projects coming out of it soon.

It was a cinch to swap over to it from Ruby. It is a little fussier about type definitions, but I guess that is to be expected with a compiled language.

The third-party ecosystem is still a little thin on the ground, or immature, and I hope it will grow. The Crystal community has also been really friendly and responsive. The couple of questions I have asked on Reddit or SO have been answered quickly and with lots of useful info.


Here is another (sort of) real world app I wrote using Crystal and Kemal - a real time race telemetry display app for an F1 racing game [Blog post link] - http://devan.blaze.com.au/blog/2017/10/28/racing-along-build...


> With Crystal, data scientists could have the ease-of-use of Python/Ruby combined with the performance of C.

Big if true.

Are there any benchmarks to back up this claim? There is some mention of experiments, but I'm not seeing any numbers or code.




Still waiting for v1. Ruby being my favorite in terms of syntax, Crystal is really going to lift my experience. This interview describes low-traffic usage; what I am looking for is a high-traffic usage scenario, and how Crystal's GC behaves running for longer periods of time.


Check the latest benchmark (not the best, but it shows good performance): https://www.techempower.com/benchmarks/previews/round15


These benchmarks show raw throughput though, not latency distribution. A well-functioning GC should not be too bad at either, but the current solution in the GC area is very much stop-the-world, so I expect the latency tail to be pretty bad even if raw throughput is damned good.


I'm also interested in this...


"...OOP. Moving from Ruby to Go sometimes feels like ignoring 20 years of progress made in language design."

If OOP is so important to them, why did they bother including Elixir in their list of possibilities?

Also, given their problem domain, I'm doubtful of the fit of OOP (vs functional). But regarding Crystal, it is nice to see a potential performant Ruby replacement.


>Also, given their problem domain, I'm doubtful of the fit of OOP (vs functional). But regarding Crystal, it is nice to see a potential performant Ruby replacement.

Part of their problem domain was to easily port their code from Ruby, which would presumably be OOP.


As mentioned in the article, library support in Crystal is growing but still far from complete. This GitHub repo might be a good place to start searching for useful shards: https://github.com/veelenga/awesome-crystal/blob/master/READ...



Is there an easy way to get interop working between Crystal and Ruby? It would be nice to be able to simply require a Crystal module that calls out to a compiled binary in the middle of my Ruby code.


Not at the moment. About a year ago I did a proof of concept that replaced ActiveSupport's Inflector module with a fully working Crystal version, which was then compiled as a native extension. Since then, language changes have broken the way that I had things hooked up, although it was suboptimal anyways.

There is another strategy using macros, which is much better. I wasn't familiar enough with macros when I first tried it, which is why it was a bit hacky.

Crystal's core team has plans for a DSL to do this, so there will surely be something nice in the future. It's definitely feasible, and the performance was decent.

If you're interested (just remember mine's broken with current Crystal), my work is https://github.com/phoffer/crystalized_ruby and a macro approach is demonstrated at https://github.com/spalladino/crystal-ruby_exts




Crystal looks interesting based on the comments in this thread by those using it. Anyone know why it seems to not be available for Windows (without WSL)? Does it have some POSIX dependencies?


Yep, proper Windows support is a must for an easy-to-use dev setup - Go is OK'ish in this sense, but Crystal is currently really difficult to get running (either a VM, the Win10 Linux subsystem, or a Cygwin hack is needed!)


Thanks. Still wondering what the reason is, though - based on what you said, it seems POSIX-like APIs being needed may be it. Maybe for the concurrency features or other language / library features of Crystal. But then again, Go has concurrency features but works on Windows, and of course Windows has threads anyway, which are usable from C/C++ and other languages.


Here is an installer waiting for Crystal support on Windows https://github.com/faustinoaq/crystal-windows-installer :)


Thanks.


It's simply a lack of manpower to port to the Windows APIs. There's a partial Windows port, but it needs a lot of work before it's merged into master.


BTW, with a bit of fidgeting, it's possible to get Visual Studio Code for Windows to use Crystal via WSL quite nicely. On the other hand, Crystal in a VM is faster.



I had to change a few more settings to get it to work properly. I should blog about it at some point.


I love Crystal, yet it puzzles me how the heroic developers can make rapid progress when each compilation of the compiler takes half a minute.


How is the compile time on larger projects?



Crystal is a very good language and seems very promising.

Waiting for parallelism & Windows support +1


I’ve always wondered why Crystal doesn’t get more attention.


Because it's not production ready yet. It's getting there.


That's a nice looking language. An inferred type system on a procedural / OO language, with protection against nils, is something I had wished existed. I didn't know it had been done.


Heard of Swift?


I've heard of it but not investigated. So many programming languages. Is it general or tied to Apple?


general


I'd be curious to know what the compile times are like. Does every compile run recompile the entire code base, or is there some kind of incremental compilation?


Is there a place with the distribution of the number of developers per programming language? I have the feeling it has a very long tail.


I'm surprised Swift was outperformed by Ruby in their testing. I wonder what their benchmarking methodology was.


On a related note, how well is Crystal suited for number-crunching?


Crystal is currently no faster than Ruby when parsing a 19MB Apache log file on OS X Sierra:

  fh = File.open("logs1.txt")
  fh.each_line { |x| puts x if /\b\w{15}\b/.match(x) }
  fh.close


I guess it depends; for one-off tasks you'll have to compile and run the Crystal code each time. I assume you ran the code above with `crystal run`. If you first compile the code with `crystal build --release` and then run it, you should see a dramatic improvement over Ruby.
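For reference, the modes measured below map to these commands (assuming the snippet is saved as parse_logs.cr):

    crystal run parse_logs.cr               # compile without optimizations, then run
    crystal run --release parse_logs.cr     # compile with optimizations, then run
    crystal build parse_logs.cr             # produce an unoptimized binary
    crystal build --release parse_logs.cr   # produce an optimized binary
    ./parse_logs                            # run the compiled binary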

On my machine, parsing a 374MB file took (in seconds):

    - 25.63 using Ruby
    - 19.97 using Crystal (run w/o optimizations)
    - 19.01 using Crystal (compiled w/o optimizations)
    - 18.68 using Crystal (run w/ `--release` flag)
    - 11.54 using Crystal (compiled w/ `--release` flag)


No, it was compiled, but without --release, as I couldn't get that working on OS X Sierra.


This isn't a benchmark that I'd expect to see a significant difference in. The majority of the CPU work is in the regular expression, and Ruby's regex engine is implemented in C.


Having no idea what Crystal is, my mind replaced it with crystal meth and it made the introduction text a lot more entertaining.


I thought this was a new type of drug.



