My Weird Ruby (awardwinningfjords.com)
128 points by adamesque on March 6, 2015 | 93 comments



I'd also add that Ruby's confusion about what a function is has been annoying me the most lately. You have procs, lambdas, and blocks, all with subtle differences (http://awaxman11.github.io/blog/2013/08/05/what-is-the-diffe...), and all I want are simple functions as first-class citizens with consistent behavior. (TCO as a default would also be nice.)
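A minimal sketch of two of those subtle differences, arity checking and `return` semantics (plain Ruby, nothing assumed):

    lam = lambda { |x, y| x }   # strict arity, like a method
    prc = proc   { |x, y| x }   # lenient arity
    lam.call(1, 2)              # => 1
    prc.call(1)                 # => 1 (missing args become nil)
    # lam.call(1)               # => ArgumentError

    def demo
      lambda { return :from_lambda }.call  # returns only from the lambda
      proc   { return :from_proc }.call    # returns from demo itself
      :unreachable
    end
    demo                        # => :from_proc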

"And finally, pie in the sky, can we solve packaging apps up into distributable binaries, please? Rust and Go are making us look bad."

As a Ruby programmer, the lack of any reasonable - i.e. not hacky - way to build binaries is annoying. Unless you're building a server-side app, you can pretty much forget using Ruby because of this. Maybe that's all Ruby cares about, it's certainly its niche, but I like Ruby and would like to be able to use it for CLI programs, etc. Personally, I don't want end users of my code to have to install Ruby, learn about Ruby gems, and boot the Ruby VM just to run a CLI program.


Without some difference between functions and blocks, patterns like

  foo.each do |x|
    return if x == 3
  end
wouldn't "return" out of the enclosing function.


That seems okay to me. JavaScript works well enough with that behavior.



"Works well" meaning: don't use function calls at all if you want to escape from them.


I've just never encountered that use case in Ruby code, and to me it feels like really bad style. You can use .find to find the first element that matches a block, then return that item (if one exists).
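For example (a sketch, assuming `foo` is any Enumerable):

    match = foo.find { |x| x == 3 }  # => 3, or nil if no element matches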


It doesn't look like fantastic Ruby code, you're right. I just wouldn't agree that JS "works well" when it comes to their somewhat afterthought functional iterators.


What doesn't work well? The only thing I can think of, which comes up on occasion, is that you can't manually stop iteration in forEach without ugly tricks like throwing and catching an exception. I actually wanted that behavior for the first time recently, so I just used jQuery's each, which lets you stop iteration by returning false. I'm assuming other JS utility libraries like Underscore provide something similar.


JavaScript's iteration primitive isn't block-based.


Ruby's iteration primitive isn't block-based, either.


Of course it is. `for a in b` desugars to `b#each`, not the other way around.
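Roughly:

    for a in [1, 2, 3]; puts a; end   # desugars to each
    [1, 2, 3].each { |a| puts a }     # same iteration
    # (one difference: `for` reuses the enclosing scope; the block gets its own variable)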


How do “while” and “until” desugar? Kernel#loop?


Those are two rare exceptions, and you're right, they don't desugar.


Those are not “exceptions” to Ruby’s primitives, those are Ruby’s primitives. Things that desugar to methods aren’t primitives.

The thing is, they’re rare in practice for cultural reasons.


RE binaries: compilability seems difficult given how dynamic method calls are, but RubyMotion seems to have solved that to some extent, though it's a shame it requires a subscription even to develop on, which doesn't appeal.

What are the "hacky" packaging methods you mentioned? The only ones I'm aware of either no longer work or still require a separate runtime.


Producing a self-sufficient binary doesn't have to mean compiling the code; py2exe is a good example.


I know this is in the vein of hacky solutions, but I tend to use jruby for the purpose of packaging up ruby applications.


This is a good solution. I've used this approach as well for server daemons.


Do you still require the JVM to be installed, in the same way you would Ruby's VM?


JRuby's ruby VM runs on top of whatever JVM you choose. You just have to make sure you have the JVM in your path.
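For anyone trying this, a sketch with the Warbler gem (assuming a conventional project layout; the jar name comes from your project):

    $ gem install warbler
    $ warble jar            # packs the app plus its gems into myapp.jar
    $ java -jar myapp.jar   # still needs a JVM on the target machine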


Ruby's proc/block mess really irritates me too, especially when I flip from JS back to Ruby. I just cannot understand why the notion of a block as a distinct entity needs to exist, other than some weird syntactic side effects of Ruby's paren-less method calling.


Here's the reason: it's because of how iteration works in Ruby with "everything is an expression" semantics: http://lucumr.pocoo.org/2012/10/18/such-a-little-thing/

In Ruby, iteration is implemented by letting something call a block repeatedly until the end of the iteration. The interpreter provides jump points in order to implement skipping or breaking the iteration. A continue is implemented as a form of jumping to the end of the block; a break is implemented by jumping past the call to the iterator function. Without the non-local return it would be very awkward to return something from the function. Imagine a function that returns the first even item from a list:

    >> def find_even iterable
    >>  iterable.each { |x| return x if x % 2 == 0 }
    >> end
    => nil
    >> find_even [1, 3, 5, 6]
    => 6
If the non-local return were not available, you would have to rewrite it like this:

    >> def find_even iterable
    >>  done = false
    >>  rv = nil
    >>  iterable.each { |x|
    >>   if x % 2 == 0
    >>    done = true
    >>    rv = x
    >>    break
    >>   end
    >>  }
    >>  rv
    >> end


Because it is so convenient to be able to return from the outer scope.


I can't recall any occasion where I have done that.


I wish Ruby had better support for functions; passing a proc to a method (rather than yielding to a block) incurs a 400% performance penalty: https://www.omniref.com/ruby/2.2.0/symbols/Proc/yield#annota...


Completely agree on the distributable binaries! You couldn't express it better. This is holding Ruby back from where it could be!


For first-class functions, you want lambdas. Lambdas are a type of Proc but trap the return statement.

Blocks are pretty much an iteration construct.

It's really not that confusing.


Some small but I think important nit-picky things about Struct:

1.) A struct in Ruby isn't just data, it is an object.

2.) Structs come with some weird gotchas, most notably that a struct is an Enumerable! (see the sketch after this list)

3.) It's generally better to use a hash (in Ruby) if you want a "pure" data object (and they at least used to be faster).
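The Enumerable gotcha in a sketch:

    Point = Struct.new(:x, :y)
    pt = Point.new(1, 2)
    pt.is_a?(Enumerable)  # => true
    pt.map { |v| v * 2 }  # => [2, 4] -- iterates members like an Array
    pt.to_a               # => [1, 2]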

I myself have written a lot of "weird" Ruby and I will say that more often than not it's detrimental to an application that is used/developed by others. Oftentimes ideas that seem brilliant in one language cannot be translated into another (immutability), and it's best to use the recommended paradigms, especially when those using the software are not familiar with them, and most likely won't become familiar with them....

For me, many functional paradigms are just lost on ruby because it's so highly mutable that practicing them is more academic than practical (sadly). That's why I write Scala or Elixir now when I want/need those paradigms-- because they're the right tools for that job.

I definitely don't want to discourage innovation, or bringing great ideas to ruby from other places-- I just want to emphasize caution :)


Hashes are faster than structs? Not in my experience. You can speed up code considerably by using a struct instead of a hash.

An OpenStruct is slower than a hash, but then again, an OpenStruct is kind of pointless.
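A rough micro-benchmark sketch, if you want to check for yourself (numbers will vary by Ruby version and machine):

    require 'benchmark'

    Point  = Struct.new(:x, :y)
    struct = Point.new(1, 2)
    hash   = { x: 1, y: 2 }

    N = 1_000_000
    Benchmark.bm(8) do |b|
      b.report('struct') { N.times { struct.x } }
      b.report('hash')   { N.times { hash[:x] } }
    end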


The question in my mind then becomes "what job is ruby the right tool for?"

If the reason not to use better principles is because Ruby makes them painful, rather than there's something about the domain that makes them a bad idea, why choose Ruby to begin with? Presumably the ecosystem.

I agree with emphasizing caution, but I definitely think it's worth being at least somewhat weird with your Ruby to get as much as you can from the best of both worlds, the Ruby ecosystem and functional principles.


You can always freeze your hashes in Ruby for safety if you want to. All that's missing is some sugar for declaring immutable hash literals. I guess one way would be to allow Clojure-style minimal-punctuation map literals and make them immutable...


You can't. Freezing a hash does not make it immutable. Try it.

It requires good knowledge and dedication to consistently return a new object (not mutate state). Further, chaining is problematic for tracing errors because so much happens on a single line. Ruby has many subtle features that can bite you in the ass.

Don't get me wrong, I LOVE RUBY, I would love these paradigms to be better suited (rbx/jruby come close). I just find if I embrace mutability, state, and OO principles in Ruby I am rewarded in quality by having other smart people who are more easily able to critique my code and better it.

I don't believe the abstraction is worth it (and I'm completely willing to admit to being wrong/stupid here). I think less effort and more reward can be had using the right tool for that job. I, for what it is worth, came to my conclusion through the pain of trying.


When I try to modify a frozen hash it raises a RuntimeError ("can't modify frozen Hash") in Ruby 2.2.

I generally agree with the rest of your comment though, however I think some ruby code (per the original post) can be improved by introducing some of these paradigms.


Yes, but you only freeze the hash, not the items it contains. It is also an issue for thread safety, as I wrote here: https://bearmetal.eu/theden/how-do-i-know-whether-my-rails-a...
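To make the shallowness concrete:

    h = { list: [1, 2] }.freeze
    # h[:extra] = 3   # => RuntimeError: can't modify frozen Hash
    h[:list] << 3     # fine! only the hash itself is frozen
    h                 # => { list: [1, 2, 3] }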


> You can always freeze your hashes in Ruby for safety if you want to. All that's missing is some sugar for declaring immutable hash literals.

And immutable structures with efficient "updates". Because otherwise every time you're altering the structure you have to copy it in full (though shallowly) then mutate it in place.
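One existing option is the hamster gem, which provides persistent structures with structural sharing (a sketch, assuming its Hash API):

    require 'hamster'   # gem install hamster

    h1 = Hamster::Hash[a: 1]
    h2 = h1.put(:b, 2)  # returns a new hash sharing structure with h1
    h1                  # unchanged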



To go along with that--methods that have immutable/pure functional semantics, meaning they return a modified copy of the structure, with no side-effects, rather than mutating a shared object.


Is there something in Scala that prevents one man's "val" from becoming another man's "var"?

Not that I don't wish we could swap tired old Java for Scala at work, but I worry that opt-in immutability means somebody will opt-out and foul things up in Scala, as well :-(

For languages (e.g. Scala, Nim) that have separate keywords for declaring/initializing immutable and mutable symbols, I wish they would avoid short words like "var" for mutable identifiers, maybe using a keyword like "mutable", "datadivision" or "ipeedinthepool" to discourage mutable data.


That's not that weird. After trying to learn Haskell and reading "Understanding Computation", the Struct-and-Enumerable style of structuring code feels very natural.

I disagree with the conclusions though. Contracts and replacing `nil` with `Maybe` don't sound right to me at all, since I use `nil` pretty heavily all over the place and the extra layer of `Maybe` doesn't buy me anything. I don't understand what he means by a dependency system, since gems have their dependencies spelled out pretty explicitly. Killing symbols is just silly, because I use them to tag all sorts of stuff and message passing with symbols feels more natural than any other data type. Packaging could be better, and optional types would also be nice. I actually quite like how TypeScript does it.


If you think replacing nil with Maybe buys you nothing, I highly recommend this video:

"Why and how to avoid nil"

https://www.destroyallsoftware.com/screencasts/catalog/how-a...

you have to pay to see it, but it's terrific. (not mine, just a fan.)


> extra layer of `Maybe` doesn't buy me anything.

Wouldn't that layer be there all the time anyway? Either in the form of a sum type (Just a | Nothing), explicit nil checking, or ignoring nils and getting a runtime error.


It's there with nil to begin with. A Maybe type without pattern matching is basically nil checking. Now if he had said he wanted some kind of pattern matching, then I would get behind that.


So you would prefer a language without sum types but with pattern matching and exhaustiveness checking for nil then?


Not what I said and I don't see why you make those choices mutually exclusive. You can have pattern matching without sum types and you can have sum types without pattern matching.

In fact Ruby's case/when statement is a very limited form of pattern matching that can be put to great use with proper use of Structs and overloading of ===.
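A sketch of that trick (Class#=== is an is_a? test, so Structs make decent tags):

    Ok  = Struct.new(:value)
    Err = Struct.new(:message)

    def render(result)
      case result          # case/when dispatches through ===
      when Ok  then "got #{result.value}"
      when Err then "failed: #{result.message}"
      end
    end

    render(Ok.new(42))       # => "got 42"
    render(Err.new("nope"))  # => "failed: nope"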


Erlang is probably a better example of (complete) pattern matching without formal sum types (though dialyzer has value and type alternatives).


Mostly that was an interesting article, but there was one pain point: having to read the parameter list to a routine three (!) times.

Honestly, why has no language (or few languages?) come up with a scheme yet to put the formal parameters to a routine one per line, with documentation, like so:

    ...
    param-name [<type, if specified>] [<documentation>]
    ...
Swap the name and type order for "New Jersey" style languages vs "Swiss" style languages as needed.

This is actually how I used to write out my C function headers (Frack K&R layout!), even though we didn't use a doc generator tool such as Doxygen.

Javadoc is only slightly less offensive, naming the parameters twice, due to slavish adherence to the altar of New Jersey formatting. I would love to see "//@ description..." after each parameter as an alternative to the "* @param name description..." duplication above the parameters. "WET brain-death forever!", I guess.


Can you show an example of your C header style? I confess I'm not quite sure I follow you there.


    /* Return validated output for printing */
    static
    t_output *      val_and_xform
        (
    int         in_cnt,    /* input item counter, 1 based */
        t_input *   raw_in     /* raw input, immutable, not null */
        )
        {
        ...
        }
Further examples: https://github.com/roboprog/buzzard/blob/master/bzrt/src/bzr...


Ah, got it. That has a lot going for it, documentation wise. Thanks for the illumination.

I can see the style-nazis hating on it with a white-hot fury, tho.


That Ruby looks excellent. I hadn't realized the contracts gem existed.

So much of what people redundantly cram into test suites could be handled with simple contracts.


If, like me, you were wondering how contracts.ruby works, the comments in here are quite instructive: https://github.com/egonSchiele/contracts.ruby/blob/master/li...


very cool.


Ruby is duck-typed, so it is advantageous to write code and tests that are not tied to classes at all.

http://www.poodr.com/ goes in-depth about this.


Well, the same kind of contracts approach could be used to enforce a duck-typed set of behaviors:

  [:quacks], [:barks] => Maybe[:flies]
Some Rubyists get pedantic about duck typing. It's just a tool to design good systems, not an article of faith.


Yeah, the contract here is a form of nominative type checking; it could just as well be structural.


I'd be really interested to see success typing applied to ruby.

http://user.it.uu.se/~kostis/Papers/succ_types.pdf

TL/DR: Success typing considers all possible types a value can have. If a function returns either a bool or an int and you then pass that value into a function that accepts a string or an int, then the success typing checks out. It doesn't mean your program is correct, but that it could possibly be correct. On the other hand, if you pass the "bool or int" value into a function that only accepts a string, the type checker will complain and you know for sure your program is incorrect. In other words, you will get false negatives but never a false positive.


> It doesn't mean your program is correct, but that it could possibly be correct.

That doesn't seem very useful; I already know that my program could "possibly" be correct. If I didn't think so I wouldn't be working on it.

I'm interested in finding out whether it "is" correct or not.

If I ask the type system, and it just shrugs and says "dunno lol" then that type system isn't worth much to me.


It ends up being pretty useful. The number of times it catches problems is pretty high. Also, the more you annotate, the greater the efficacy.

Is it inferior to a full static type system? Yes. Are there benefits to dynamic type systems? Yes. Success typing is a way to get a bit of both.


But how do you know that a function only accepts a string in a dynamic, duck-typed language?


Perhaps more relevantly, in a dynamic, duck-typed language in which methods can be overridden per-object, there is no guarantee that an instance of class String actually implements any given contract (including that implemented by unmodified instances of class String).

So, contracts that specify what functions accept and produce by class membership provide less assurance than they superficially appear to.
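For instance (any object can betray its class):

    s = "hello"
    def s.upcase; "gotcha"; end  # per-object override
    s.upcase                     # => "gotcha"
    s.kind_of?(String)           # => true, yet the usual String contract is broken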


Ruby is not a duck-typed language. "Duck typing" is only touted by its community to award it free credibility and merit where there is none. "Duck typing" is essentially free in any dynamically typed language that has method RTTI. So please, stop the "duck-typing" pride parade. It is nauseating.


Huh? If Ruby is not duck-typed, can you give me an example of something that is? Ruby seems like the textbook example. Here are two weird examples: 1) C++ templates and 2) C# variables declared dynamic (I don't know C#, could be wrong). Are we using the same definitions as each other?


Duck typing is more of a technique than a quality of a language. Any dynamically-typed language is going to end up having some code that does runtime type checking. Duck typing just means doing respond_to? instead of kind_of? when doing so.

Ruby is certainly a language suitable for the duck typing technique. The term was originally used in reference to Python, and both languages have reasonably equivalent reflection abilities.
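The difference in a sketch:

    def log(sink, msg)
      raise ArgumentError unless sink.kind_of?(IO)        # type check: rejects StringIO
      sink.puts(msg)
    end

    def log_duck(sink, msg)
      raise ArgumentError unless sink.respond_to?(:puts)  # duck typing: StringIO,
      sink.puts(msg)                                      # sockets, anything that quacks
    end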


I think the point is that duck-typing in Ruby is less of a feature and more of an inevitability due to its dynamic RTTI nature.

Compare that to duck-typing in Go where it was a very deliberate design decision.


Duck typing generally refers exclusively to runtime type checking (i.e. dynamically-typed languages). What Go does is structural typing.


I disagree. C++ templates for instance are also duck-typed, but they act at compile time, not run time.


The same way the contracts gem does it. Annotations.


Traveling Ruby is a good way to package ruby applications. https://github.com/phusion/traveling-ruby


but you can't yet build Windows binaries ON Windows


Ruby's cardinal sin: there's no way to `require` code without global side effects.

e.g. https://twitter.com/tomdale/status/457282269342744576


One man's sin is another's superpower.


Yes, which is why having the ability to do both would be nice.


Technically this is true for Python as well though, right? You can't import a module without executing it.


but you can namespace the import, which avoids _accidental_ global pollution, i.e.

    import foo # creates only foo
    require 'foo' # can change everything 
In both ruby and python you can execute code that changes the global namespace from within a module, but you have to go out of your way to do it in python, while in ruby it's the default.

The thing is: Ruby already has Kernel#load with a wrap flag to load a file in an anonymous module; it would just need to return that module and we could build something reasonably close to Python's import.

(I think there was an RCR for this many years ago)
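A sketch of the existing mechanism (note that today the wrapping module is simply thrown away):

    # wrap=true evaluates foo.rb inside a fresh anonymous module, so its
    # constants and methods don't leak into the global namespace...
    load 'foo.rb', true
    # ...but load returns true, not the module, so you can't reach what
    # was defined -- that's the missing piece.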


> import foo # creates only foo

Technically incorrect. The code within `foo` could alter anything accessible to it via an import, and it could walk up the stack to mess with the "current" stack frame of the module being defined by the import.

It's generally not done and would be considered very bad form, but it's definitely possible.

> In both ruby and python you can execute code that changes the global namespace from within a module, but you have to go out of your way to do it in python, while in ruby it's the default.

The tweet is about monkeypatching existing structures though. And you can definitely do that in Python, including at import (though that's generally considered a Bad Idea, libraries which can do that such as gevent usually require an explicit call to patch or replace existing modules)


If you want to replace floats, what with?

Symbolic computation is way too expensive.

Fractions are good whilst they're restricted in size but are too slow at arbitrary precision and too inaccurate with numbers not centred on 1.

Fixed precision decimal floating point has numerical problems because numbers scale by too large an increment. (see footnote)

Arbitrary precision floats don't solve anything unless your problem was too little precision - with the precision of a 64 bit float this is rarely the case and 128 bits is massively more than that.

Arbitrary precision decimals don't solve anything either.

Logarithmic number systems are a good contender but almost all operations become lossy. This is OK if you're using them as approximations - which is the common use of floats - but the fact floats are exact on the integers up to a large ceiling is useful (see Javascript) as with many other of their exactness properties.

Even better might be a symmetric level-index number system; you get the advantages of logarithmic number systems but also get immunity from overflow and underflow - the only operation that isn't closed is division by 0 and since operations can't underflow all 0s are actually 0.

If it were me implementing a new very-high level language with only weak cares about speed (eg. faster than doing it symbolically), I'd probably choose to use fixed precision fractions that fall back to symmetric level-index numbers instead of rounding. That way I'd get precise rationals (including int64s), immunity from underflow and overflow and a smoother error curve than floats.
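In Ruby terms, the rationals-vs-floats trade-off I'm describing looks something like this sketch:

    0.1 + 0.2 == 0.3                                      # => false (binary rounding)
    Rational(1, 10) + Rational(2, 10) == Rational(3, 10)  # => true, exact

    # but exactness has a cost: denominators grow without bound
    r = Rational(1, 3)
    10.times { r = r * r + 1 }   # denominator is now 3**1024 -- huge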

---

Some people are pretty surprised to hear me criticize decimal types, but they're genuinely numerically worse than binary floating point. I've talked about this in depth here:

http://www.reddit.com/r/programming/comments/2ut00j/a_little...

and I give a rough rule-of-thumb about what type to use when here:

http://www.reddit.com/r/learnpython/comments/2wyeho/is_this_...


Really, that's not weird at all. Far from it.

For a truly different Ruby coding style, check out some of Ara T. Howard's code: https://github.com/ahoward.


Why kill symbols?


I was wondering this as well. Symbols are just atoms and atoms are great. The biggest issue with symbols in Ruby is how GC is handled, and that has improved in recent releases.


Improved is an understatement. Symbols have been garbage collected only since 2.2.0 (December 25, 2014), and 2.2.1 was released a few days ago with a patch for a corner case that was leaking symbols. Leaking them is what every Ruby < 2.2.0 did. JRuby added symbol GC (https://github.com/jruby/jruby/issues/2350) but I think it's not released yet. Rubinius will follow, but there are no estimates for a release: https://github.com/rubinius/rubinius/issues/3280

Apparently Rails 5 will take advantage of symbol GC to use symbols as keys for the params hash instead of strings. It uses strings now only to prevent DoS from uncollectable symbols.


Yes; this is what I was referring to as well. The potential for DoS was a problem since they weren't garbage collected. I was thinking that symbol GC was introduced in the 2.1 series, but I see you are correct.

I find it a bit surprising though that an article that is generally in favor of functional programming is bashing Ruby symbols. Ruby symbols come from Lisp and are just a form of atoms, which are commonplace in functional languages. Erlang, Lisp, Elixir, Clojure, and Scala, to name a few, all have atom/symbol support in some form. They are immutable even across systems. It just seems to run counter to the article's argument. Also there was no supporting reason given for why symbols should be killed. The article was released after GC was introduced for symbols, so I'm assuming that's not the reason to "kill symbols".


Actually it is part of the argument. I watched the video linked in the post. The speaker says that symbols and strings are converging. Frozen strings are symbol-like and symbol GC made symbols string-like. Symbols were a performance optimization and we are close to not needing it anymore.

I'm not sure there isn't more to symbols than that, but I'll think about it the next time I write some Ruby. I'll pretend I have only strings and see what happens. One thing for sure: we'd need a syntactical shortcut, because having to type "string".freeze every time is unbearable. Whether the right shortcut is :string or the Lispy 'string, I don't know, but if we get it, symbols are fine even if symbol.class ends up being String.


I just read the article so it sounds like the author is punting on symbols in the direction of the video. Before responding now I went back and watched the video.

As far as the difference between "string".freeze and :string: the symbol will resolve to the same object_id every time across systems and processes. If you spin up irb and type :foo.object_id you will see 1092508. Now one way this is commonly used in Ruby is with Object#send. We pass a symbol in place of the method name as the first argument, and internally this is used as an optimization to look up the method being called dynamically.
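e.g.:

    class Greeter
      def hello(name); "hi #{name}"; end
    end
    Greeter.new.send(:hello, "HN")  # => "hi HN"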

Now I haven't been knee-deep in the code of any of the Rack web servers, but I would imagine that something like Puma, which is built for concurrency and multi-threading, could take advantage of this behavior as well for some shared process, enabling it to spin up lighter-weight additional processes or threads, thus leading to lower memory consumption on your web server and the ability to handle more traffic on the same hardware.

Erik is right in saying that strings and symbols are "converging"; they have not yet converged, however. At the point at which strings converge to symbols, in the sense that both resolve to the same object across systems, what we have left is symbols, and so it is not symbols that have been killed but rather strings supplanted by symbols. If instead we go the direction of saying memory and performance don't matter, so let's punt on that whole symbols-are-immutable thing, I think we are giving up too much.

I use Ruby because the high-level abstraction is great and saves me developer cycles, but I also want it to eke out as much performance as possible. Honestly, in general I'm tired of hearing the argument that X should die now because it is a performance hack. Some performance hacks are good.

(For reference: In the video he discusses symbols vs freezing strings from 22:00 to 25:07)


> the symbol will resolve to the same object_id every time across systems and processes. If you spin up irb and type :foo.object_id you will see 1092508.

object_id for symbols is not consistent across processes.

   % ruby -e "p :foo.object_id"
   396968
   % ruby -e "p :bar.object_id"
   396968
   % ruby -e ":bar; p :foo.object_id"
   397128
You're getting 1092508 consistently not because it's :foo but because it's the next object_id available for a symbol after all the ones created during IRB startup.


Well dang... that is interesting and not what I expected. So from this I can infer that it also changes if a symbol is garbage collected now. So has the implementation of Object#send changed in Ruby 2.2+ as well? The latest copy of the code I have looked at is 1.9.3.


Yes, symbols that are garbage collected will get new object IDs if they appear again.

https://gist.github.com/mboeh/83338a4e3a6a77689e22


Everything in that list made me think "make Ruby less like Perl and more like Clojure" until I saw "kill symbols" and did a double-take. Symbols are awesome, they are perfect for map/hash keys. Symbols are from Lisp. Why kill symbols? They are even garbage collected now.


thirded. Can someone who doesn't like symbols help me understand the downsides of them? I really like how natural it makes passing around params that can be one of {x1,x2,x3...,xn} different values. Accomplishing the same with strings just feels messier and more prone to error.


The only problem with Ruby's implementation of symbols was the possibility of DoS, which is resolved through garbage collection in the latest Ruby.

Something that I found a bit ironic about the video linked in the article is at 5 minutes 30 seconds in, where he states, "Why not give the memory address a name that makes sense to a person?" Here he is referring to assembly and the abstraction it makes, and how this relates to the abstraction that is variables. Symbols are really a cross-system immutable memory-address abstraction; I don't see how this is a bad thing.


See djur's response to me in the thread. Apparently the cross-process piece is not true. So now that symbols are garbage collected, and with the way "string".freeze works, the two are basically the same construct.


> Can someone who doesn't like symbols help me understand the downsides of them?

I wish I had been clearer in my talk but I only had 30 minutes and wanted to cover other topics. Here is a more comprehensive argument against symbols in Ruby:

In every instance where you use a literal symbol in your Ruby source code, you could replace it with the equivalent string (i.e. calling Symbol#to_s on it) without changing the semantics of your program. Symbols exist purely as a performance optimization. Specifically, the optimization is: instead of allocating new memory every time a literal string is used, look up that symbol in a hash table, which can be done in constant time. There is also a memory savings from not having to re-allocate memory for existing symbols. As of Ruby 2.1.0, both of these benefits are redundant. You can get the same performance benefits by using frozen strings instead of symbols.

  "string".freeze.object_id == "string".freeze.object_id
Since this is now true, symbols have become a vestigial type. Their main function is maintaining backward compatibility with existing code. Here is a short benchmark:

  def measure
    t0 = Time.now
    yield
    t1 = Time.now
    return t1 - t0
  end

  N = 1_000_000

  puts measure { N.times { "string" } }
  puts measure { N.times { "string".freeze } }
  puts measure { N.times { :symbol } }
There are a few things to take away from this benchmark:

1. Symbols and frozen strings offer identical performance, as I claim above.

2. Allocating a million strings takes about twice as long as allocating one string, putting it into a hash table, and looking it up a million times.

3. You can allocate a million strings on your 2015 computer in about a tenth of a second.

If you’ve optimized your code to the point where string allocation is your bottleneck and you still need it to run faster, you probably shouldn’t be using Ruby.

With respect to memory consumption, at the time when Matz began working on Ruby, most laptops had 8 megabytes of memory. Today, I am typing this on a laptop with 8 gigabytes. Servers have terabytes. I'm not arguing that we shouldn't be worried about memory consumption. I'm just pointing out that it is literally 1,000 times less important than it was when Ruby was designed.

Ruby was designed to be a high-level language, meaning that the programmer should be able to think about the program in human terms and not have to think about low-level computer concerns, like managing memory. This is why Ruby has a garbage collector. It trades off some memory efficiency and performance to make it easier for the programmer. New programmers don’t need to understand or perform memory management. They don’t need to know what memory is. They don’t even need to know that the garbage collector exists (let alone what it does or how it does it). This makes the language much easier to learn and allows programmers to be more productive, faster.

Symbols require the programmer to understand and think about memory all the time. This adds conceptual overhead, making the language harder to learn, and forcing programmers to make the following decision over and over again: Should I use a symbol or a string? The answer to this question is almost certainly inconsequential but, in the aggregate, it has consumed hours upon hours of my (and your) valuable time.

This has culminated in objects like Hashie, ActiveSupport’s HashWithIndifferentAccess, and extlib’s Mash, which exist to abstract away the difference between symbols and strings. If you search GitHub for "def stringify_keys" or "def symbolize_keys", you will find over 15,000 Ruby implementations (or copies) of these methods to convert back and forth between symbols and strings. Why? Because the vast majority of the time it doesn’t matter. Programmers just want to consistently use one or the other.

Beyond questions of language design, symbols aren’t merely a harmless, vestigial appendage to Ruby. They have been a denial of service attack vector (e.g. CVE-2014-0082), since they weren’t garbage collected until Ruby 2.2. Now that they are garbage collected, their behavior is even closer to a frozen string. So, tell me: Why do we need symbols, again?

I should mention, I’d be okay with :foo being syntactic sugar for a frozen string, as long as :foo == "foo" is true. This would go a long way toward making existing code backward compatible (of course, this would cause some other code to break, so—like everything—it’s a tradeoff).


Is Rubinius X still a thing?


Is there an ETA for middleman 4.0?



