Hacker News | sdogruyol's comments

Crystal is a Ruby-inspired compiled language, which allows it to run blazingly fast with a very low memory footprint. It uses LLVM to emit native code, thus making use of all the optimisations built into the toolchain.

Website: https://crystal-lang.org/

Github: https://github.com/crystal-lang/crystal
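
For the curious, the whole toolchain experience fits in a couple of lines; `crystal build --release` produces a native binary:

  # hello.cr -- Ruby-like syntax, compiled ahead of time via LLVM.
  # Build and run with: crystal build --release hello.cr && ./hello
  puts "Hello from Crystal!"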


Crystal uses Boehm GC https://github.com/ivmai/bdwgc


Crystal is a Ruby-inspired compiled language, which allows it to run blazingly fast with a very low memory footprint. It uses LLVM to emit native code, thus making use of all the optimisations built into the toolchain.

Website: https://crystal-lang.org/

Github: https://github.com/crystal-lang/crystal



Windows support is still a work in progress; we've recently been making some good progress: https://github.com/crystal-lang/crystal/pull/5339


Crystal is a Ruby-inspired compiled language, which allows it to run blazingly fast with a very low memory footprint. It uses LLVM to emit native code, thus making use of all the optimisations built into the toolchain.

Website: https://crystal-lang.org/

Github: https://github.com/crystal-lang/crystal


For me, Crystal is what Ruby might have turned out to be if its designers had cared to take some inspiration from Dylan.


That makes no sense to me. Most of what makes Crystal different from Ruby results from the former being a static language and the latter being dynamic.

I understand that dynamic languages are not popular these days on HN, but it's silly to suggest they should become static languages. They just offer different tradeoffs, a bit like screws and nails.


Dylan is a Lisp (thus dynamic) with Algol-like syntax.

Dylan supports AOT compilation like Crystal and has optional type checking, so one could turn it into something like Crystal if all variable declarations happened to be annotated.

Don't forget Dylan was intended to be the Newton's systems programming language, and the team managed to create their own OS even after it was decided that C++ would take Dylan's role.

Personally, dynamic languages without AOT or JIT support were never popular with me beyond shell-scripting tasks.


I thought NewtonScript took Dylan’s place in Newton development?


No, Dylan was supposed to be a systems language in the spirit of Lisp Machines.

But internal politics and battling between teams spoiled it.

Check the comments from mikelevins and wrs.

https://news.ycombinator.com/item?id=15106802


We have a dedicated wiki page for Crystal production users :) https://github.com/crystal-lang/crystal/wiki/Used-in-product...


Here's a great blog post about Kemal and Websockets :) http://kemalcr.com/blog/2016/11/13/benchmarking-and-scaling-...


Kemal: Fast, Effective, Simple web framework for Crystal

Website: http://kemalcr.com/

P.S: I'm the author of Kemal :)


So cool! I really enjoy working with Kemal. :)

The State of Crystal at v0.21 article (https://crystal-lang.org/2017/02/24/state-of-crystal-at-0.21...) stated that multithreading with work stealing was coming "soon" and that you had already managed to run Kemal in parallel. Can you share anything about the current state of that project?


Even though it's experimental, I've successfully compiled Kemal with multi-thread support. The throughput was OK. It's promising and will definitely help CPU-bound apps :)

You can check the wiki for more info https://github.com/crystal-lang/crystal/wiki/Threads-support


Is it possible to pass a cookie (session id) over websockets with Kemal? Or do I have to manage authentication manually?


I send my JWT token to check the validity of the user's session.
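
Something like this minimal Kemal sketch (assuming the `ws` handler also yields the HTTP context of the upgrade request; `valid_session?` is a hypothetical helper you'd write against your own session store):

  require "kemal"

  # Hypothetical helper: look the session id up in your own store.
  def valid_session?(session_id : String) : Bool
    session_id == "expected-session-id"
  end

  # Check the cookie on the WebSocket upgrade request, before any frames.
  ws "/chat" do |socket, context|
    session_id = context.request.cookies["session_id"]?.try(&.value)
    if session_id && valid_session?(session_id)
      socket.send "authenticated"
    else
      socket.close
    end
  end

  Kemal.run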


Crystal is a Ruby-inspired compiled language, which allows it to run blazingly fast with a very low memory footprint. It uses LLVM to emit native code, thus making use of all the optimisations built into the toolchain.

I've been using Crystal for more than 2 years and have some projects in production. Can't wait for 1.0 :)


We have been using Kemal in small projects and we are super happy with it. At the company I work for, we are evaluating Crystal and Elixir as the next language to start investing in and developing with (we are on Ruby/Rails now). Two things Crystal is missing to clearly win this internal battle:

* immaturity of the ecosystem (libs, frameworks, best practices, etc.), which will be solved in 1-2 years

* coming from Ruby, I am wondering how powerful its metaprogramming capabilities are. In Ruby, using metaprogramming I can save tons of time by cutting out code and creating human-friendly, clean interfaces that would otherwise require a lot of code. How powerful are macros compared to Ruby's meta capabilities, where you can do pretty much anything?

On the bright side (in comparison with Elixir), it has a great type system that would save us from tons of bugs, it's faster, and the transition from Ruby would be slightly easier.


I've been working on https://luckyframework.org and I can say that metaprogramming is extremely powerful and much easier for me to use than either Ruby's or Elixir's.

You can see a bit of what the macro system enables here: https://robots.thoughtbot.com/lucky-an-experimental-new-web-...

It generates a bunch of query methods and type-specific querying. Without the macro system this would have been impossible.
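
As a rough illustration (a toy sketch, not Lucky's actual implementation), Crystal macros generate real methods at compile time:

  # A hypothetical finder-generating macro; expansion happens at compile time.
  macro define_finder(field, type)
    def self.find_by_{{field.id}}(value : {{type}})
      # imagine a real query here: WHERE {{field.id}} = value
      puts "querying users where {{field.id}} = #{value}"
    end
  end

  class User
    define_finder name, String
    define_finder age, Int32
  end

  User.find_by_name("Jane") # querying users where name = Jane

Because the methods are generated before type checking, something like `User.find_by_name(42)` fails to compile.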


> You can see a bit of what the macro system enables here

The first thing that struck me about this post is that it is a good illustration for why dynamic-typing people and static-typing people often butt heads:

Most static typing we see is shallow like this, where the concern is to shove data into an unstructured object like a String, and the problem is something trivial.

In this case, String? solves the immediate problem of whether or not the value is really nil, but it fails to consider other aspects:

- Are you really intending to remove the country code from just US numbers? Or all North American ones? Quite possibly you're ok with stripping it from all North American ones rather than just US ones, but if not (e.g. because you later use the presence or absence of a country code to differentiate on billing), it does the wrong thing

- Does the code otherwise enforce a format where "+" cannot appear elsewhere (because someone e.g. decided to use it as a non-standard separator)? Does it ensure nobody has input country codes without a "+"? (I've lost count of the number of times I've seen just "1" or "(1)".) Does it ensure nobody has used spaces? ("+ 1"). And what about the rest of the number - nothing ensures the returned local number has consistent formatting.

Maybe this is all ok in the code you looked at and the data is guaranteed to only ever have "+1" and be nicely formatted when you strip it.

But it is a really bad way of selling static typing as a feature of the framework, as my first reaction is "but it gives me nothing, as there are all kinds of other checks I also need to do".

Which means either modelling the type of the data much more precisely - which I can do with or without static typing - and/or building a test suite that includes testing any places where data can get put into that table in the first place.

In both cases, the kinds of problems above tend to fall away with little to no effort.

I'm all for typing used for data validation, but an example showing what more precise modelling would look like would be a lot more convincing.

E.g. I detest Haskell syntax, but one of the strengths of static typing, as Haskell developers tend to apply it, is that there tends to be a lot more focus on the power that comes from more precisely modelling the data.
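
For instance, a rough Crystal sketch of that kind of modelling, parsing at the boundary into a dedicated type (all names hypothetical):

  # Parse once at the boundary; pass a structured value around afterwards.
  struct USPhoneNumber
    getter area_code : String
    getter local_number : String

    def initialize(@area_code, @local_number)
    end

    # Returns nil rather than a USPhoneNumber when the input doesn't parse.
    def self.parse?(raw : String) : USPhoneNumber?
      if md = raw.match(/\A\+?1?[\s.-]*\(?(\d{3})\)?[\s.-]*(\d{3})[\s.-]*(\d{4})\z/)
        new(md[1], "#{md[2]}-#{md[3]}")
      end
    end

    def without_country_code : String
      "(#{area_code}) #{local_number}"
    end
  end

  USPhoneNumber.parse?("+1 (555) 123-4567").try(&.without_country_code)
  # => "(555) 123-4567"

With something like this, the questions about "+", spaces, and formatting are answered in exactly one place.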


TL;DR I totally agree. This can be done with Crystal and Lucky but I wanted to keep the example simple.

I totally agree with your sentiment and you have a lot of great points. This post was not meant to get too deep into things right off the bat, so it left off a lot of this stuff.

In this case the phone number always has a country code with +1 because we have a validation that ensures it won't make it into the database without it. We also only accept US country codes in the validation.

I get your point, though, that just having a `String` doesn't really guarantee that those validations took place. Luckily, LuckyRecord has the idea of custom types that can define their own casting and validation behavior. So we could have done this:

  field phone : UnitedStatesPhoneNumber?

And it would validate that the string param is a valid US number before saving. It would also do that for queries, and you could add whatever other type-specific methods you want to it. So you could do:

  def formatted_fax_number
    phone.try { |number| number.without_country_code }
  end

But like I said, I think this is fitting for a whole separate post, rather than an intro style post :D


And for a Ruby/Elixir/dynamic lang programmer just catching `nil` is actually a pretty big win, even without the custom type.

I will go more in depth about leveraging the type system with Lucky for even better data modeling in one of the Lucky guides.


I just think it makes it a lot less interesting to cover that kind of intro use case, as it's not very compelling exactly because of reactions of "but that's not how I'd do it".

Even more so because the "try" syntax is exaggerated. This works fine:

    fax_number.try(:gsub, "+1", "")
And with a new enough version of Ruby, this works too:

    fax_number&.gsub("+1","")
You still need to remember to do it of course, but using the "old" syntax makes the problem seem exaggerated.


Only replying to your last point.

I've had a chance to see static typing's benefits in action and I will always agree it is a bit ahead of dynamic typing.

That being said, I feel static typing is a bit overrated in web dev and outside of enterprise systems overall. For example, when I tried to quickly immerse myself in Elm, I found that obsessiveness with static typing can make a dev's life very miserable and can basically require Cartesian-product structs for many scenarios -- especially if you have to work with data coming from several API providers while providing an API yourself to users whose requirements periodically change.

I might be a bit in fanboy mode here, I admit, but the compromises that Erlang / Elixir make seem very adequate -- they forgo some of the benefits of static typing in return for a bit more productivity / less friction. Granted, if you are irresponsible you can easily shoot yourself in the foot with them as well. No magic bullets.

(Conversely, if you apply some discipline -- which is still probably two orders of magnitude less than the discipline you need in C/C++ -- then Erlang / Elixir's dynamic typing is almost like static typing.)

This isn't blind hate towards static typing; having exposed myself for educational purposes, for limited amounts of time, to Elm, Pony, Crystal, and just an hour of Haskell -- and having worked with Go professionally for several months -- I am not as impressed as I expected to be. I clearly see the benefits but, again, I feel they are a bit overrated. A language with a reasonable compromise between type safety and programmer productivity seems to be my cup of tea. And I am not claiming that this "reasonable compromise" -- which is a very subjective term -- is a universal truth. Not at all.

Lastly, Elixir's macro system is not hugely powerful (at least compared to what I know about Clojure's), but it has handled all the semi-arcane tasks I have tried -- and succeeded -- to achieve with it. But that's of course very specific to one's work, and it too can't be claimed as an absolute truth.

Not willing to derail here. Your mention of Haskell triggered a few associations. Apologies if the comment is out of place / off topic.


The example from the blog post is a classic error we've all made. There seems to be an obvious opportunity for Rails to do something creative about that.

I like the String? sugar and believe there is probably some opportunity in Rails to address it. For example, make a new NillableObject class that wraps the real object or delegates via method_missing.

The idea would be to throw a warning, raise an error, etc. when accessing a nillable attribute. This could obviously get more robust than the example below - hook into read_attribute, leverage validations, log a warning instead of raising a runtime error, etc. - but hopefully the point is made.

  class NillableObject
    def initialize(obj = nil)
      @obj = obj
    end

    # Any direct call is an error: the wrapped value might be nil.
    def method_missing(name, *args, &block)
      raise "I'm nillable, don't call methods against me directly"
    end

    # The only way in: the block runs only when a value is present.
    def try(&block)
      return if @obj.nil?
      block.call(@obj)
    end
  end

  ns = NillableObject.new
  ns.gsub("f","g") # RuntimeError: I'm nillable, don't call methods against me directly
  ns.try{ |o| o.gsub("f","g") } # => nil

  s = NillableObject.new("food")
  s.gsub("f","g") # RuntimeError: I'm nillable, don't call methods against me directly
  s.try{ |o| o.gsub("f","g") } # => "good"


This totally makes sense. I think the disadvantage is that this is done at runtime, but assuming you have a test that hits it, it would catch the bug.

My colleagues wrote a library that does something like this. You may want to check it out: https://github.com/thoughtbot/wrapped


Nice, looks well thought out.

Yeah, the runtime thing is a disadvantage compared to type systems for sure. It might be possible with a NillableObject to do some kind of sanity checks at Rails initialization time. Some other safety nets could also help, for example using a ViewModel object to dictate some constraints on views (type constraints and others).

I like Lucky's table definition in the model class. A similar construct in AR might be useful for implementing some sanity checks. I often wish there were a way to get a table definition into AR while keeping the goodness of migrations.


I wonder if the culture of "Optional" with ".and_then" / ".or_else" has any foothold in the Ruby or Crystal communities. The functional approach is so much simpler and more elegant; with Ruby's blocks, it can be made to look natural too.

With promises and things like array.map being widely accepted in e.g. JS community, I'd hazard to say that mainstream industrial programming finally starts to embrace the use of monads. It would be great to embrace the most badly missing of them all, Option / Maybe, instead of the "billion dollar mistake" of null. ("Nullable" is a half-step in the right direction.)


I believe that while monads are great, Crystal obviates the need for Option, because String? doesn't mean "nullable string"; it means (String | Nil), "the type which is the union of String and Nil". This is a much more powerful and generic concept than nullable types, as you can call any method on this object which is defined on both String and Nil. One such method (which happens to be defined on every type) is try, which is equivalent to map on an Optional monad.

I'm sure you can see that this generalises: type unions make it possible to represent the Optional type (with no overhead), along with all the methods you could implement on it. Furthermore, it's actually more powerful, because it's implemented at the type-system level, meaning flow typing works. That makes code much cleaner, as you can use normal if statements with boolean conditions to "unwrap" these types (really just removing types from the union).
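
A tiny sketch of both mechanisms:

  # `name` has type String | Nil (spelled String?).
  name : String | Nil = ["Ada", nil].sample

  # `try` maps over the non-nil case, much like Optional#map:
  puts name.try(&.upcase)

  # Flow typing: inside this branch the compiler narrows String? to String,
  # so no explicit unwrap is needed.
  if name
    puts name.upcase
  end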


This (chaining "try" calls) is basically what I had hoped would happen :)

It is the "happy path", represented compactly but without a way to forget and step on a nil.


Ruby has new syntax for calling methods on objects that may be nil:

  obj&.is&.nil? # replaces obj.try(:is).try(:nil?)


Agree, love this syntax

  obj = nil
  obj&.gsub("f","g") # => nil 
  obj = "food"
  obj&.gsub("f","g") # => "good" 
Unfortunately this is not likely to become second nature and will continue to bite us, especially where our expectations about AR models are not well thought out. A wrapping class for nillables might force safe access of AR attributes.


Would love to hear your take on the good and bad parts of Crystal. Been toying around with it for a few small things, and been following the blog... would love to hear more input from users. :-)


Would love to have nokogiri-level XPath or CSS3 selector capabilities. I'm up to my neck in projects, but this looks like a good start: https://github.com/jgehring/hcxselect/tree/83d3edaa8a6944d20...


We have libxml2 bound in the stdlib with XPath support; do you need more?
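
A minimal sketch of what's already there:

  require "xml"

  # Stdlib libxml2 binding with XPath queries:
  doc = XML.parse(<<-DOC)
    <links>
      <a href="https://crystal-lang.org/">Crystal</a>
    </links>
    DOC

  doc.xpath_nodes("//a").each do |node|
    puts node["href"]? # => https://crystal-lang.org/
  end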


Yes! It would take a lot of hard work to convert all my CSS3 selectors to XPath.


I think having CSS3-to-XPath selector conversion in the stdlib (or perhaps in a shard, depending on how complex it is) would be possible. It would be great if you could open an issue in Crystal's repo.


This shard implements a CSS selector API on top of an HTML parser: https://github.com/kostya/modest


If only it were compatible with the existing XML interface...


XML has way more surface area than HTML; you could only query a small subset of valid XML documents with CSS. Plus, these days nobody uses XML parsers for HTML anymore; their behavior has diverged. Things like the behaviour of unclosed elements and valid child tags are defined in the HTML spec and aren't possible in XML.


Every time I read an HN comment claiming a language to be "blazingly" fast, I wish they posted a link to statistically sound benchmarks, including VM warmup, GC collection times, etc.

Otherwise I just throw these adjectives away. I'd argue a new compiler/interpreter will always lose against the JVM, which has thousands of man-hours of optimization built in.

The JVM in turn will always lose against a clever, memory-conscious low-level implementation in Rust, C, or assembler.

Please don't advertise speed without any studies or comparisons to back up that claim.


We compile using LLVM, which has had many, many man-years put into it. Our GC is bdwgc, which, while generic and conservative, has also had a lot of optimization put into it, and it works very well.

We don't pretend to be a mature language with even so much as predictable performance characteristics, but the "blazing fast" statement is there to indicate that we're typically much closer to C performance than even Go is.


Every time I read such a comment on HN, I smile and remember the days when C programs on 8-bit micros, and later MS-DOS, were made of 80% inline assembly statements because the compilers were quite lousy.

Fran Allen is of the opinion that the adoption of C set the field of compiler optimization research back to pre-history (Coders at Work).

It took 40 years of optimization research, and clever use of the UB defined in the standard, for C compilers to achieve the code generation quality they have nowadays.


> Fran Allen is of the opinion that the adoption of C set the field of compiler optimization research back to pre-history (Coders at Work).

Yes! The entire book is wonderful, but as a compiler writer myself, Fran's interview really stuck with me.

The relevant passage, for the curious:

———

Seibel: When do you think was the last time that you programmed?

Allen: Oh, it was quite a while ago. I kind of stopped when C came out. That was a big blow. We were making so much good progress on optimizations and transformations. We were getting rid of just one nice problem after another. When C came out, at one of the SIGPLAN compiler conferences, there was a debate between Steve Johnson from Bell Labs, who was supporting C, and one of our people, Bill Harrison, who was working on a project that I had at that time supporting automatic optimization.

The nubbin of the debate was Steve's defense of not having to build optimizers anymore because the programmer would take care of it. That it was really a programmer's issue. The motivation for the design of C was three problems they couldn't solve in the high-level languages: One of them was interrupt handling. Another was scheduling resources, taking over the machine and scheduling a process that was in the queue. And a third one was allocating memory. And you couldn't do that from a high-level language. So that was the excuse for C.

Seibel: Do you think C is a reasonable language if they had restricted its use to operating-system kernels?

Allen: Oh, yeah. That would have been fine. And, in fact, you need to have something like that, something where experts can really fine-tune without big bottlenecks because those are key problems to solve.

By 1960, we had a long list of amazing languages: Lisp, APL, Fortran, COBOL, Algol 60. These are higher-level than C. We have seriously regressed, since C developed. C has destroyed our ability to advance the state of the art in automatic optimization, automatic parallelization, automatic mapping of a high-level language to the machine. This is one of the reasons compilers are …[sic] basically not taught much anymore in the colleges and universities.

Seibel: Surely there are still courses on building a compiler?

Allen: Not in lots of schools. It's shocking. There are still conferences going on, and people doing good algorithms, good work, but the payoff for that is, in my opinion, quite minimal. Because languages like C totally overspecify the solution of problems. Those kinds of languages are what is destroying computer science as a study.

———

(pp. 501-502)

I recommend that any programmer who hasn't read this book give it a read. In fact, I think I might give it another read this week :)


