Why Ruby is not my favorite language (codeslower.com)
42 points by vladimir on Dec 9, 2008 | 56 comments



I had a great question in a job interview yesterday: "What do you like least about your least favorite programming language?". I picked Ruby as my least favorite, as there aren't many languages out there that undermine themselves so totally (I may not want to program in BASIC, but it does what it says on the tin). Here's my answer:

  Matz's decision-making process
    He tries to make Ruby be all things to all people
      Lots of confusing sugar and overloading baked in
    I much prefer Guido's hard pragmatism
  
  The panoply of function types: Methods, Blocks, Procs, Lambdas
    All intertwined and yielding into one another.
    I love that in Python there is only one:
      Objects with a __call__ method, defined metacircularly.
  
  The culture of adding/overloading methods on base classes
    Many gems do this en masse, and there are a lot of low-quality gems
    Especially disastrous because it's unscoped, and infects the whole process
      For a language with four scoping sigils it sure screws up scope a lot
  
  The Matz Ruby Implementation
    The opposite of turtles-all-the-way-down (Smalltalk crushed beneath Perl)
    It punishes you for taking advantage of Ruby's strengths
    The standard library is written almost entirely in C
      It doesn't use Ruby message dispatch to call other C code.
      That means that if you overload a built-in, other built-ins won't use it
    Anything fiddly that's not written in C will be dog slow
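To make the function-type gripe concrete, a minimal sketch of the four flavors (the names are made up for illustration):

  def shout(s); s.upcase; end
  m = method(:shout)            # Method object: a named method, first-class
  m.call("hi")                  # => "HI"

  [1, 2, 3].map { |x| x * 2 }   # block: syntax, not an object, until captured

  pr = Proc.new { |x| x * 2 }   # Proc: lenient arity; `return` inside one
  pr.call(3)                    #   would exit the enclosing method

  la = lambda { |x| x * 2 }     # lambda: a Proc with strict arity; `return`
  la.call(3)                    #   just exits the lambda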


I had a great question in a job interview yesterday

For what it's worth, this has recently become a 'standard' interview question for developers. I predict that people will start game it by having a canned response, thereby rendering it virtually useless---just like all other standard interview questions.


"For what it's worth, this has recently become a 'standard' interview question for developers."

I used to ask interviewees to discuss what they didn't like about their favorite language. If they had a hard time finding stuff to criticize, there was a good chance they didn't really know the language, or were not particularly critical developers.


Right, that's why it was initially useful. Now that lots of people use the question, though, even mediocre developers will make sure to come to the interview with a list of criticisms of their favorite language.


"... even mediocre developers will make sure to come to the interview with a list of criticisms of their favorite language."

Which is fine if, when pressed, they can back up what they say. The question (like most good interview questions) is largely an excuse to help guide a conversation and some exploration to avoid canned responses.


Hilariously, I had already written an answer to the preceding question: "What do you like least about your favorite programming language?" for an email to someone else just a day earlier.

Granted it's still a terrific question -- expository about the applicant, and easy to banter about to detect bullshit / plagiarism.


s/start game it/start to game it

Argh. I hate catching typos in my own comments.


I just realized what the true inverse of turtles all the way down is:

"You're in the desert, you see a tortoise lying on its back, struggling, and you're not helping -- why is that?"


The fucker stole my hat.


A tortoise? What's that?


"... A tortoise? What's that? ..."

'Do you make up these questions? Or do they write 'em down for you?' ~ http://www.imdb.com/title/tt0083658/quotes


You know what a turtle is?


Of course!


I'm Mario. That bitch stole my ho.


That's a trashy, misleading question to ask, designed to make the interviewee look like a fool. If I don't like a programming language, I don't learn it (unless I have to). For this reason, somebody's reasons for disliking a particular language are likely to be riddled with inaccuracies unless they're into reading up heavily on languages they don't use.


How can you get to dislike a programming language without learning it first, at least well enough to have things you don't like about it? If your reasons for not liking it are "riddled with inaccuracies", then maybe what you don't like about it isn't that language but something else that you have mistaken for that language?


I usually like to ask: how would you change your favorite language?

You must be able to criticize what you love to be a good programmer.


Now that's a quality question that demands critical thinking.


This is at a company where they listed 11 (!!) fantasy languages in the job description. They want candidates to be the type of people who do projects in novel languages just because, who can reason about programming language design issues, and who have well-founded reasons for not liking things.

Yes, it would bait idiots who "don't like C because you can only define variables at the top of a function", or "don't like Java because you have to put classes in different files" -- but that's intentional.


"11 fantasy languages"

As in 11 not-real languages. Languages that don't exist?

I'm not sure I follow.


As in 11 academic languages that you would get paid to use in your fantasies (the company advertising the job doesn't use most of them, they just want coworkers that like them too).


oh no, all this time I have been putting many classes in one file. I guess I should have compiled more often.


Curious. What's your opinion of Perl?


The fundamental difference is that Perl doesn't think itself beautiful. There's plenty to hate specific to Perl: sigils for types, noisy syntax (not even the #$@%, just the structure is ugly), nested lists always being flattened silently (as a design choice, Larry is crazy).

I never end up using Perl, the niche it occupies between shell and python is empty for me. I never have a problem with a shell script where I'm wishing for more control structures or builtins. The few times I've written long-running resource-intensive shell scripts (like brute-forcing an MBR partition scheme), reducing the number of forks wouldn't make them more than a few percent faster.

The major ray of light shining down on Ruby is that it is a Perl that people strongly want to write better-looking code in. That's a pretty big accomplishment, I think. It seems like a lot of the classic Perl webdev shops are moving to Ruby, and I think that's pretty great.

I don't hate Ruby, I'm just disappointed in it.


Another (old?) rant about open classes aka "monkey patching". There's a lot of fear and hand wringing over open classes. It's certainly a language feature that can be abused, but it's also incredibly useful. There are things you can do to be safe:

Check for the existence of methods before adding them. Don't overwrite existing standard lib methods. If you do overwrite one, alias it and call the alias from your replacement, which should add something without noticeably changing the original behavior.
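A minimal sketch of both precautions (String#shout and the alias name are made up for illustration):

  class String
    unless method_defined?(:shout)        # don't clobber an existing method
      def shout; upcase + "!"; end
    end
  end

  class String
    alias_method :strip_without_tracing, :strip
    def strip
      strip_without_tracing               # original behavior, unchanged --
    end                                   # decoration would go around this call
  end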

If you write unit tests, it shouldn't be too difficult to detect that a library you depend on has changed the behavior of a standard method in an unacceptable way. I've encountered collisions like this, maybe, a handful of times and each time finding and resolving the problem was not difficult.

I've been writing ruby software for several years now, and there are definitely libraries that annoy me (ahem activesupport), but the utility of having open classes has far outweighed any of the negatives. Let's face it, in any programming ecology there are going to be some awful libraries. Bad programmers don't need open classes to do damage.


It's a fundamental problem of language design.

In Lisp, extending the built-in functions is trivial, so lots of people built Lisp up into the language to solve their problem in, and Ruby makes it easy to do the same. Rails extends the language, and some people like it. Even the JSON gem does -- when you do "require 'json'", you give arrays the power to convert themselves to JSON strings. Ruby lets you convert arrays to Ruby-evallable strings, so it's natural to convert them to JSON-parseable strings.
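A small illustration of the parallel:

  require 'json'           # reopens core classes as a side effect
  [1, "two"].to_json       # => '[1,"two"]'   -- added by the gem
  [1, "two"].inspect       # => '[1, "two"]'  -- built-in, Ruby-evallable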

So, do you do what Java does, in making the String class final? Languages only exist for people who speak them, so you need to gauge the social aspects whenever you choose a language. Perhaps Java means that your bozo coworker won't trample your toes. Perhaps Haskell means that you can import just as much as you want. That's ultimately your call.

There are lots of valid complaints about Ruby 1.8: the syntax can be baroque, the scoping rules have too many gotchas, they're (thinking of) taking out continuations in the next version, the VM is dog-slow. But 200 methods on an object is small fries in comparison.


This argument is totally spurious. You can find terrible, brain-bending libraries in every language. Why is ruby a write-off because of ActiveRecord?

Seems like the implicit argument is that Root-Object modifications are unmaintainable. But that assertion really isn't backed up anywhere.

There are lots of valid complaints against Ruby. This is not one of them.


He said "Ruby is a great language," but not his favorite language. I don't consider that writing-off the language.


It just seems like he's citing something and claiming it's a mess, but there's no real evidence that it is a problem.


So there's this Ruby tool called RDoc that will introspect your project and tell you all about what's inside. Problem solved?

Here, look at RDoc's features (from the RDoc website):

-- Generates structured HTML documentation from Ruby source.
-- Automatically extracts class, module, method, and attribute definitions. These can be annotated using inline comments.
-- Analyzes method visibility.
-- Handles aliasing.
-- Uses non-intrusive and implicit markup in the comments. Readers of the original source needn't know that it is marked up at all.

Or to come at this from a slightly different angle, don't Ruby's introspection powers provide a release valve for the problems the author is talking about?

Anyway, this is just another occasion to state the old adage, "With great power comes great responsibility." You don't have to crapify your Ruby projects if you don't want to. If you need to enforce clarity at all times, I see no better alternative than Python (which I think makes it a better candidate for scientific or math-intensive projects, MRI slowness aside [although I am curious about the implications of genetic algorithms + Ruby introspection]).

I do speculate that if you take two programmers of equal skill levels and give one Python and the other Ruby, the Rubyist can gain a productivity advantage over the Pythonista via judicious use of Ruby hackery. But this comes at a cost: it does require more thoughtfulness before execution. Maybe you don't want to think that hard, and that's a totally legitimate and respectable position. In the end, which of the two languages you choose will probably have more to do with personal workflow preferences or project management ideologies.

In the case of Rails, yes, it hacks in a DSL and extends Ruby classes, but Rails relies heavily on conventions and things being put in designated areas. It's therefore hard to get lost in a Rails project unless you really have no respect for the conventions (in which case, it doesn't matter what language or framework you're using).


I wonder how much of the monkey patching is really necessary? Recently I looked at Rails again, and in Prototype (JavaScript), Function is monkey patched to have a bind method, so you can write function.bind(object). In MochiKit you have a plain function, bind(function, object). It is not even longer to type. MochiKit does no monkey patching -- although admittedly bind is in the global scope. MochiKit can optionally stay out of the global scope, though.

That is just one example, but I suppose it is more attitude than necessity that makes the difference here...


So what's the alternative?


Well, the alternative could be more like Python.

Python supports both object methods and functions. Ruby only supports object methods. However, Ruby does allow you to define methods outside of an object -- these become (private) instance methods of the Object class. As such, any method defined like a function becomes a change to all objects, since they inherit from Object.
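A quick illustration of that leak (Dog and shout are arbitrary names):

  def shout(s); s.upcase; end     # looks like a free function...

  class Dog; end
  Dog.new.send(:shout, "woof")    # => "WOOF" -- every object carries it
                                  # (it lands on Object as a private method,
                                  #  hence `send` to demonstrate)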

Python is also very explicit about its namespacing and imports. In Python, you should always be able to know where any name you're using is coming from. In the example from the article, 'to_yaml' is added by requiring rubygems. That isn't so apparent (in the way that 'gem' being added by requiring rubygems would be apparent). Names are reused all the time. Names sometimes aren't related to their package in an apparent way.

With Python, you have three options (one of which Python programmers will flog you for):

  from package import name, name2
  import package
  from package import *   # should really only be used in an interactive python shell and not in programs

In the first one, you could then call name(something) or name2(something). BUT you can see exactly where those names are coming from -- package. Unlike to_yaml which could come from wherever, it's explicit that it comes from 'package'. In the second one, you can do package.name(something) and the like. Similarly, you can see that it comes from 'package'.

Ruby is really flexible, but I would argue that the author is correct in assessing that the ability to modify all objects implicitly is a bad thing.


Also, information hiding is rumored to be important in real software. Ruby gives you the ability to nominally "hide" data, but it's trivially easy for any later programmer to violate the encapsulation. It's DHINO programming -- Data Hiding In Name Only.
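A sketch of how little "private" buys you (Account is a made-up example):

  class Account
    def initialize; @balance = 100; end
    private
    def wipe!; @balance = 0; end
  end

  a = Account.new
  a.send(:wipe!)                           # private method, called anyway
  a.instance_variable_set(:@balance, -1)   # "hidden" state, rewritten directly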

Not only isn't this considered a bad idea in the Ruby world, but it's actually a common technique. Use a third-party library, and you may well be re-writing core parts of the language. Sound good? You can't prevent it.

I like parts of Ruby; it's a fun language. But the author is right -- it's not a language for large projects, or projects with more than a few programmers. My alternative to Ruby would allow the fast-and-loose stuff that makes Ruby fun, but also give you a way to make classes immutable and guarantee data-hiding and encapsulation when you need to do real software engineering.


DHINO is not really a problem. We are missing true private methods and attributes in Perl, but it doesn't stop us from writing large maintainable programs. The rule is that you don't call methods that start with an underscore from outside of the class that defines them. Sure, you could, but we don't.

Java's strict hiding has made my life more difficult in a number of situations. Recently, I needed to change the way URLConnection worked. I could have fixed the problem by changing a private attribute, but no no, you can't do that in Java. Instead I had to write 3 additional classes to set everything correctly for me. This is not good software engineering practice, it is a waste of my fucking time. (This was a one-off script, not a large program. Yes, I know, too many hacks and your app falls apart... but with Java you end up with an app that is falling apart AND a shit-ton of useless code to wade through.)

Anyway, if you think Java-style programming is "real programming", that's great. We don't need to work together, so you are free to waste your time however you like.


Out of curiosity, what attribute could you not change in URLConnection?

I don't consider Java-style any more "real" than any other, but if something is private in a class and doesn't provide methods for modification, there is either a reason for it, or it is poorly coded... If you mess with private variables, you could be jacking up the way the class works and not know it. If you instead write 3 additional classes, you're at least ensuring your URLConnection state is consistent.

> Anyway, if you think Java-style programming is "real programming", that's great. We don't need to work together, so you are free to waste your time however you like.

Sounds like you're a lot of fun to work with in any language.


I don't like the "real software engineering is big heavy Java projects" thing (aka "bondage and discipline"). Plenty of "real software engineering" has been done to great effect with dynamic languages.

Here's a random idea: rather than strict locks, perhaps transparency would be a good approach. Load up a third party library, and you get informed that it's fiddling with things, and can then find out what. Maybe it really does need to fiddle with things. Or maybe in a given situation, fiddling with things is the most efficient way of accomplishing something.


"rather than strict locks, perhaps transparency would be a good approach"

Transparency is great, but you still need rules when you're working with any code that's bigger than what you can hold in your head at any given time. Doubly so when you're working with a team.

Programming in a team is hard in the same way that taking care of a room full of toddlers is hard -- obviously, you want everything out in the open, but you also want to hide the sharp objects, put caps over things that can shock, and otherwise make the big, transparent room as soft and bouncy as possible. You need locks on the cabinets full of poisonous chemicals. Obviously, the locks need to be open-able, but only in special situations, and certainly not by nosy toddlers.

(Before you criticize me for being paternalistic, realize that I include myself in the "toddler" category. I much prefer code that's written in a defensive style, because it's easier for me to maintain later on -- and that's where any programmer spends the majority of his time. I've developed this attitude because I know that I'm stupid, not because I'm arrogant.)


While I agree that writing code defensively is the way to go when a team is involved and/or you plan to have to maintain the code for years, I don't think it's the language's job to take away all the 'sharp objects'.

Programmers should be smart enough to know what is sharp and how to avoid it when needed, and they should collaborate enough that no one can sneak through sloppy code.

I'd rather have the ability to do certain things and rarely use it than not have the ability at all. If other programmers are constantly doing really dumb things, don't use their code in your projects.


Even in strict languages, there's rarely anything that absolutely prevents you from doing what you need to do; they just make it harder to do bad things accidentally. Languages like Ruby actually make it harder to do things correctly, and constantly tempt you with hideous shortcuts.

Again, I view programming teams as groups of nosy toddlers. Thus, any sentence beginning with "programmers should be..." is wrong by definition. Programmers are people, and people make mistakes. People also get lazy, try to take shortcuts, and therefore make even more mistakes. The way that people minimize mistakes is by setting up systems that make it harder to do wrong things. This is a fundamental tenet of engineering.


Yea, it is fairly trivial to enable logging every time a method is overridden or added to any class in ruby. Dynamic code inspection like in JavaScript would be pretty cool, too.
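Something along these lines, using the method_added hook (a sketch, not battle-tested):

  class Module
    def method_added(name)        # fires on every instance-method definition
      unless name == :method_added
        warn "#{self}##{name} defined at #{caller.first}"
      end
    end
  end

It replaces rather than chains, so a class defining its own method_added hook would silently shadow this one; a real version would need to be more careful.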


But wait, there's more!

You can achieve immutability in ruby: http://scie.nti.st/2008/9/17/making-methods-immutable-in-rub...

You can achieve data-hiding in Ruby, if you really need to, using the same techniques you use in JavaScript.
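The JavaScript-style trick is to park state in a closure instead of an instance variable; a sketch (define_singleton_method is Ruby 1.9+):

  class Counter
    def initialize
      count = 0                              # a local, not an ivar
      define_singleton_method(:incr)  { count += 1 }
      define_singleton_method(:value) { count }
    end
  end

  c = Counter.new
  c.incr
  c.value               # => 1
  c.instance_variables  # => [] -- nothing for outsiders to poke at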


Dear god in heaven, that's ugly.

I sincerely hope that people voted that up out of a deep appreciation for dark, ironic humor.


"Can't" is very different from "can, but is very ugly."

Most of the "problems" with Ruby that are often quoted are the result of bad programming and management decisions. Ruby puts a lot of power into the hands of the programmer (not as much as Io [the language]) and with great power comes...flexible re-factoring! Thankfully, it should be easy to re-factor the offending code, send a note to the programmer about what you changed and why, and chalk it up to a learning experience.

Sufficiently typed languages make re-factoring very easy.

If you are having a hard time debugging your Ruby, you are doing it wrong.

As for ActiveSupport's littering of Object, meh. Rails really changes Ruby. If you stick to the Rails way, then it shouldn't cause a problem in Rails projects, and it isn't a problem outside of rails projects. Don't like it? Well, there are plenty of good Ruby alternatives for the web (Sinatra!!! Also, merrrrrrrrrrrrb,) and you don't have to have everything in one app.


Three things:

1) If ugliness isn't a concern, I can do everything I need to do in C++. Arguments in favor of ugliness don't get a pass just because they happen to support your favored language.

2) What does "sufficiently typed" mean? This sounds suspiciously like a tautology: "re-factoring is easy when you've written code that's easy to re-factor."

3) The "you're doing it wrong" response is zero-content. Prove that we're doing it wrong.


1) I never said it wasn't a concern. In fact, I emphasized the important difference between can't and what translates to "shouldn't" in most circumstances.

2) I apologize if I was vague; I was trying to be concise. The nature of a "Type" is not a fixed thing. Some languages are said to be strongly typed, some weakly typed. Some languages are type-safe, others are not. If we were to draw a line and suggest that one end approaches absolute typedness and the other absolute untypedness, then "sufficiently typed" would mean the lowest point along this line that supports the goal. In this instance, we are talking about re-factoring. So you may say that my argument sounds circular: "refactoring is easy when you use a language in which refactoring is easy." However, the important point I am making is that Ruby's amount and implementation of Type is what facilitates its ease of refactoring.

In particular, I have found its reflection features, its implementation of polymorphism ("duck typing"), and its hooks into the Type system (look at Object, Module, Class and Method for more info) to be particularly helpful. Also, because there is not too much typedness, we don't have to worry about sending data of a new/different type to a function as long as it satisfies the assumptions that the function makes about it (which can be inferred by a number of techniques). Delegator and Forwardable are also pretty nifty (a small Forwardable sketch at the end of this comment).

One of the least appreciated tricks is calling .dup on an instance of Class (that is, an object that represents a class definition, not an instance of a class), but thankfully it is rarely necessary -- you need it when you want to a) inherit from Foo and still b) have 'super' refer to the implementation in Foo's parent class.

3) I suppose the above might give some clues into my favorite debugging techniques. If you'd like me to go further in depth, contact me via email, my name @gmail.com I didn't think the details were salient to this discussion.

Edit: C++ makes some of the worst decisions wrt supporting the notion of Type, in syntax and in features. That being said, Bjarne is still one of my favorite authors and learning C++ was a great entrée into OO for me.
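The promised Forwardable sketch -- plain stdlib, Playlist is a made-up example:

  require 'forwardable'

  class Playlist
    extend Forwardable
    def_delegators :@songs, :each, :size   # forward to the wrapped array
    def initialize; @songs = []; end
  end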


"Strong" and "Weak" are not useful nomenclature.

What To Know Before Debating Type Systems: http://www.pphsg.org/cdsmith/types.html


That's a great article. I will attempt to use more precise terms in the future.

To clarify my previous post: I intended "strong" typing to mean increasing restriction of what you can do in order to provide more guarantees about runtime behavior, and "weak" typing, the opposite.


So you really meant "static" and "dynamic", referring to type-checkers, which are completely independent of "strength" (which is commonly used to imply soundness).

Ironically, all dynamic type-checkers are more sound (by definition, they check all types at runtime) than most ancient static type-checkers that just do naive structural compile-time checks, allow casts, and mainly exist to specify size.


No, how did you infer that from what I wrote? A dynamic type system, as you pointed out, may be better at enforcing restrictions than a static one. The notion of 'strength' that I was talking about is separate from the enforcement mechanism.

Some questions that pop into my mind when i consider type systems are:

1) What are the criteria that a datum must fulfill in order to be considered of type Foo?
2) How is this to be determined?
3) Why would I ever need to know this?

1 & 3 impact the strong-weak continuum, number 2 does not. The less that type impacts allowed (at run or compile time) operations (#3), the weaker the type system. Note that you often can implement the manipulation of metadata in your language if your language does not already provide these features.

As for #1, the more stringent the criteria, the more 'strong' the given language's type system. A very weakly typed language may have no mechanism of checking for type (in such a language, the type system may only be used internally to the object for things like method resolution). The criterion for being of a particular type may be:

1) the datum supports all operations defined for a particular type
2) the datum supports all operations defined for a particular type that are used in the current execution scope
3) the datum has the name of the type in a list of types that it claims to be
4) the datum is at least as large as the size of the given type

... and many more ...

In the context of re-factoring, the less frequently type is checked and the less rigorous the criteria when it is checked, the more opportunity for type substitution (and thus more flexibility) we have.
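In Ruby terms, criterion 2 above is roughly what duck typing checks at each call site; a sketch:

  # only the operations actually used need to exist
  def render(doc)
    doc.to_s                    # any object with to_s qualifies here
  end

  render(42)                    # an Integer quacks enough
  render(Object.new)            # so does a bare Object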


Is there a tool here, begging to be written? Why can't I point to a Ruby object and ask it "Hey, where did that method come from? Which file/method/object defined that? Which were the last three files/methods/objects to redefine that?"

Obviously, you wouldn't want to enable this feature on a production server. But it might be a useful debugging tool. Worth a try, anyway.
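Ruby's Method objects already answer part of this: Method#owner names the defining module, and source_location (Ruby 1.9+) gives file and line. For example (blank? is just an illustration, assuming activesupport has been required):

  m = [].method(:blank?)     # say, a method some gem patched in
  m.owner                    # => the module/class where it was defined
  m.source_location          # => file and line  (Ruby 1.9+)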

Such a thing would only help in tracking down problems. It wouldn't resolve the larger problems of, e.g., namespace collision. ("Oops, library foo and library bar both tried to define an Object.stupify() method!"), though it might make such issues easier to detect. And it would be quite easy to write perverse programs that would defeat my proposed debugger thingy, or even cause it to melt. ("My program uses singletons to dynamically redefine methods on other singletons.") But you could perhaps mitigate the latter by restricting this tool to monitoring Ruby's built in classes, and/or whatever classes the user specifically chooses to aim it at. Moreover, if a piece of Ruby code did cause the metaprogramming-tracker to dump core, that fact might serve as a valuable code smell of its own.


Heh. Oddly, the solution to convoluted monkeypatching might be monkeypatching itself. Any reason why you couldn't extend the Object class to have this self-knowledge? Or couldn't a library be required at program start that works like Perl's 'use strict;', keeping you on the straight and narrow and not falling into the tempting trap of extending classes when you shouldn't be?


the solution to convoluted monkeypatching might be monkeypatching itself

Yes, the very same thought occurred to me. Set a monkey to catch a monkey!

I didn't suggest it myself, because I've been away from Ruby just long enough that I didn't remember whether one could accomplish this in pure Ruby, or if one would have to ascend up a level and do it in the interpreter.


yeah, it's possible in pure ruby. You can do stuff to prevent monkey patching, as well as do stuff to make it easier for future monkey patchers. I prefer libraries to not do either, to make code easier to use.


Don't care about the alternative. Just use whatever you are good at.

Don't get into this endless programming languages war.


No, it's not an endless war. Use Python.



