Off topic: Thank you, HN, for having a "Don't editorialize the article's headline" policy. Over on proggit this was submitted with the title "Metaprogramming has its pluses and minuses." I feel that statement completely misses the point of the post.
Usually, I would agree, except in this case the title gives absolutely no indication as to the topic of the article. Putting the word metaprogramming somewhere in the title would have been better.
Well, I could say "I'm sick of this metaprogramming shit." But I'm not sick of the metaprogramming, I'm sick of the conflicts. Or there's "I'm sick of the conflicts between other people's metaprogramming shit." That's closer to it, but doesn't capture my disappointment in the lack of progress in writing better tools. So there's "I'm sick of the lack of really cool metaprogramming shit," which is actually what bothers me and has the informal profanity I was going for.
So... Please feel free to fork my repo and retitle it "I'm sick of the lack of really cool metaprogramming shit" with my blessing.
Interesting. We have sort of solved this problem in Perl. When you want to add a method to a class, you do it by applying a role. If two roles conflict, say by each adding a method with the same name, you have to resolve the conflict or the class won't compile. (The class can rename methods, but you can also call the method with a name that will always work: $instance->Role::Name::method.)
We can also apply roles at runtime. This means you can add a method to a single instance of a class. So Rails could add its version of sum to its own instances of Array, and Classifier could do the same. Then there would be no conflict, except when Classifier tries to add sum to the Rails array... but at least you get a fatal error message instead of unexpectedly wrong behavior.
We can do better, though. The way around this problem is to implement (and CPAN) a role that adds a sum method to arrays, and to have both Rails and Classifier use that role. Then they can both share arrays, and when one tries to apply the sum role, the already-applied sum role will be used instead.
They both get their sugar, the sugar is isolated to the smallest area possible (a single instance), and they can both confidently share those arrays-that-can-sum.
And oh yeah, with Moose::Autobox, you can apply roles to Perl's native types (arrays, hashes, etc.), and call methods on them. So this is not "pie in the sky", it's an already-solved problem for us :)
Edit: ContextL also provides an interesting solution, "layers". Each module can define its own layer, add its own sum method to it, and activate it internally.
"We can also apply roles at runtime. This means you can add a method to a single instance of a class. So Rails could add its version of sum to its own instances of Array, and Classifier could do the same."
This is possible with Ruby, but culturally rare. Quite possibly because (blamestorming mode=ON) Rails simply patches whatever it wants, whenever it wants, and many people follow its lead. I personally avoid the practice because it is difficult to contain in Ruby. For example, if I am handed an Array it is easy to extend that instance with a #sum method of my own, and when I'm done with the array I can remove my own extension.
However, what happens if, while I am still holding this extended array, I pass it to some other method that wants to extend it with its own #sum method? Same problem. Working on a single instance does reduce the scope of each modification and thus makes collisions statistically less likely, so I am all for it. But at the same time, when I step back and look at the big picture, I yearn for something that solves the problem in a more fundamental way.
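For concreteness, a minimal Ruby sketch of that collision (the module names are invented for illustration):

    # Two libraries each extend the same Array instance with their own #sum.
    module RailsSum
      def sum() inject(0) { |total, x| total + x } end
    end

    module ClassifierSum
      def sum() inject(0.0) { |total, x| total + x } end   # subtly different
    end

    ary = [1, 2, 3]
    ary.extend(RailsSum)
    ary.extend(ClassifierSum)   # silently wins; no conflict is reported
    ary.sum                     # => 6.0, whichever module was mixed in last

Unlike Perl's roles, nothing here forces the second extension to acknowledge the first.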
It is interesting... Perl has such a reputation for being unmaintainable that Perl programmers today are very careful to spend extra effort making their code as maintainable as possible.
I think a lot of practicing Ruby (and Python) programmers are under the impression that Ruby magically solved Perl's maintainability shortcomings, and so they don't invest any effort into writing maintainable code. The result is ... unmaintainable code :) It turns out that programmers write the unmaintainable code, not programming languages.
I feel that using patterns or coding conventions to make Ruby metaprogramming maintainable is isomorphic to greenspunning higher-level programming practices with if statements and gotos. It is assuredly what a good programmer does to get the job done with the tool that is provided. But a toolsmith thinks about ways to improve the tool itself.
I think you've successfully avoided the problem through rigorous adherence to a design pattern.
A full solution would be to synthesize that design pattern into a language feature so its use is enforced by the language, not programmer discipline. This, I think, is what Raganwald is calling for.
"you can also call the method with a name that will always work; $instance->Role::Name::method."
I'm coming at this from a Perl and C background, but I'm not really understanding the fuss. It seems obvious that diddling with base classes is not scalable unless you have some strong social conventions. But why do it at all?
For those not fluent in Perl's syntax, Jonathan's suggestion is equivalent to "$instance->Package::method()", which is in turn equivalent to "method($instance)" called from within the given Package. This seems foolproof. No one's followed up on it, but at a glance I don't see any downsides to this approach.
I may be illustrating my ignorance (and I'm definitely ignorant of Ruby syntax), but why is there such strong desire to have array->sum() instead of sum(array)? Or as a utility function Utility::sum(array)? And is there no equivalent in Ruby to array->Utility::sum()?
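For what it's worth, the utility-function spelling works fine in Ruby as well; here's a minimal sketch (Utility is an invented name):

    # A namespaced function: no core class is touched, so nothing can collide.
    module Utility
      def self.sum(array)
        array.inject(0) { |total, x| total + x }
      end
    end

    Utility.sum([1, 2, 3])   # => 6

What Ruby lacks is a direct counterpart to the array->Utility::sum() call syntax, which may be part of why people reach for reopening Array instead.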
"The way around this problem is to implement (and CPAN) a role that adds a sum method to arrays, and to have both Rails and Classifier use that role."
+1. I see this, ideally, as a social organization problem rather than a conflict resolution/avoidance problem. The conflict between two array#sums is a side effect of the real problem which is that each project had to create array#sum in the first place.
The granularity of gems is too large, so some other system would need to exist. That system, whatever it is, seems to me the answer.
Merb, and other projects, have "extlib" gems that go along with them. I think that's another symptom of the problem. Those need to go away and some sort of social extlib repository/system needs to take their place.
EDIT:
Facets can probably be thought of as the cathedral (non-)solution to the problem. What we need is the bazaar solution.
"I see this, ideally, as a social organization problem rather than a conflict resolution/avoidance problem."
This requires getting all of the framework, gem, and plug-in authors everywhere to agree on standards for everything. If even one goes his own way and writes his own idiosyncratic thing that conflicts with a "standard" thing, there are going to be problems.
I just can't see this scaling; it's a Tragedy of the Commons in the making.
"This requires getting all of the framework, gem, and plug-in authors everywhere to agree on standards for everything."
Let me flesh it out a little bit more.
I'm not suggesting there be a canon. I'm thinking of the way github does gems, where the gems are all namespaced with a username. There could be 20 versions of array#sum, but the one that the pioneers/influencers/cool-kids-who-write-frameworks decide to use is the one that gets used. If someone decides to go their own way, someone else will fork and fix.
Maybe conflicts could be detected at the gem level via gems exposing the names of the monkey patched methods they contain.
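Purely as a sketch of that idea: if gems shipped a manifest of the methods they patch, detection would just be set intersection (the manifest format below is invented, no such metadata exists today):

    # Hypothetical per-gem manifests of monkey-patched methods.
    PATCHES = {
      "activesupport" => ["Array#sum", "Object#try"],
      "classifier"    => ["Array#sum", "String#stem"]
    }

    PATCHES.to_a.combination(2).each do |(gem_a, a), (gem_b, b)|
      overlap = a & b
      puts "#{gem_a} and #{gem_b} both patch: #{overlap.join(', ')}" unless overlap.empty?
    end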
Honestly I think git/github fundamentally changes some things and that we as a culture haven't yet fully adapted our thinking to the new possibilities.
Of course there will always be problems, and maybe this is utopian crazy talk, but the natural "that can't possibly work" reaction might be worth questioning.
EDIT:
Allow me to go from just crazy to total nutter: a language market where the commodity is semantics and the currency is popularity. If it were given influence over language design, would it be an improvement over the benevolent dictators (Larry/Matz/Guido), or an epic fail?
Arg, more crazy thoughts. But, a (rhetorical) real world question:
Once pasta chef Katz is done turning the rails spaghetti into merb-style ravioli and all the rails monkey patching is self contained in an extlib style gem, how many projects will begin including that monkey patching as a dependency?
If someone wants an array#sum there's a good chance they'll just depend on rails' array#sum. With its massive influence, the rails project has the (perhaps small) potential of becoming an autocratic second-level Matz with a default set of widely used ruby extensions. What Facets wants to be but isn't.
Given that scenario, is it better to let rails be the guardians of a de facto "standard" extlib, or to have a more fine-grained system that allows the marketplace of popularity to decide what gets to be considered standard?
With rails recipes the division of labor between programmers and "editors" has already begun. The granularity of the web framework has changed from the single monolithic option handed down by the programmers to the gem level. Now we get a default framework from the programmers and dozens or hundreds of remixed frameworks provided by people with specific configuration needs.
These people are doing the same job that the creators of linux distributions are doing. It's up to them to resolve compatibility issues and create any patches needed to make their distribution work. You find the custom framework that does exactly what you need or if it doesn't exist you create it and become an editor yourself.
Now take the component granularity down to the level of a single language extension or monkey patch. Gems might have gem recipes. The authors of the classifier gem would require array#sum and identify a default version, but an editor could change that.
Right now you can create a recipe that replaces prototype with jquery. Maybe someone thinks that rails' #try sucks so they create a recipe that replaces it with raganwald's #try. It's then the job of the editor to make sure all the components of their recipe play nice with each other.
Just like there are more mutual funds than there are stocks, you get an explosion of recipes, but that's exactly what you want. Mass customization. You look through the directory and find the package/distribution that does exactly what you need. The software that gets used gets used.
The key here is that no one needs to agree on anything. Coders code and editors edit. The producer->editor->consumer relationship is natural. The producer->consumer relationship is artificial. It's a bug in the system.
It disgusts me on levels I cannot describe that this is standard operating procedure for ruby modules (gems, etc) and is the main reason I could never use it. Adding and replacing methods on builtin classes? Are these people out of their minds?! I don't understand it. I've used monkey patches once or twice and it always feels really dirty.
Ruby, despite having 4? sigils as scoping operators, pervasively screws up or simply doesn't have support for scoping.
It really should have support for having class modifications scoped to your module (and accessible manually from outside), but I couldn't find it.
If the language is such that everything "should" be a method on the broadest-possible class, people are going to monkeypatch, because it feels "wrong" otherwise. People just want to write idiomatic code.
In Objective-C this ends up not being an issue, because there's very little usage of non-Apple libraries -- plus the default way of adding methods doesn't allow replacement, and Apple libraries are well-versioned (if you build your app for >10.4, API changes in 10.5 are invisible).
Do you mean to say the default way of adding methods to existing Objective C classes can't replace existing methods? That's not true. I just hit it recently. Subtle bug.
It's the shiny new toy every new ruby dev wants to play with. Once they get bit in the butt by some really obscure bug that requires the correct combination of 10 gems to reproduce they'll realize that maybe monkeypatching isn't such a hot idea, until then we'll have a bazillion versions of String.
I encourage Ruby programmers to monkey-patch in their "leaf projects." If you want to write Array#sum for your web app, I say go ahead; it's your butt and you are old enough to deal with the consequences. However, writing gems, plugins, frameworks, or any other code that others will use is a different story: now your "clients" suffer the consequences of your choices.
A story about how a web app was broken by installing the Classifier gem due to an Array#sum conflict wouldn't be a story. You change your web app and move along smartly. But who in tarnation is up for patching gems to deal with the conflicts caused by gems patching Array on top of each other? Ugh.
This very consideration led me to redefine nil? on an inner class in my SafeNestedHash gem. It's definitely ugly and weird, but I needed a method that exists on Object so I can guarantee it's everywhere. Any other thoughts on how to solve this problem? Without the override, users would have to write:
    if hash[:a][:long][:key][:chain].is_a?(SafeNestedHash::UndefinedHash)
Instead they can just say:
    if hash[:a][:long][:key][:chain].nil?
If the nested hash IS defined it could be any type under the sun; therefore the method has to be defined on Object, and nil? seemed the only reasonable choice.
Maybe I don't really understand Ruby, but isn't it basically a duck-typed language? Why would you be checking the type at all (which I am assuming `is_a?` does)? Type checking is evil in duck-typed languages. Either it implements what you need it to implement or has some property value; otherwise it should throw an exception.
What are you testing for here? To see if the hash is empty or something? What makes an "UndefinedHash" undefined? If it IS defined, then it should be assumed to be the type of object you are looking for until the code tries to do something that breaks on a non-supported type.
The purpose of the SafeNestedHash is to be able to directly reference or assign deeply nested hashes without initializing every level. Ruby will throw an error if you do this with a regular hash, because the first undefined level returns nil, and nil does not respond to the [] method (square brackets are just a method in Ruby). The UndefinedHash is a class that stands in for nil so we don't get an error as we make a deep reference; it also keeps track of the keys so that if there is an assignment at the end, it can initialize the necessary sub-hashes and attach them to the original top-level hash.

There are simpler hacks around that auto-initialize a hash, but they suffer from autovivification of hash keys (i.e. looking up a hash key will create an empty hash there even without an assignment).
In essence, the problem could be "solved" by monkey patching nil, though that would be the worst sort of abuse of monkey patching I can imagine.
As far as checking the class is concerned, well that's just a convenient method of showing what I'm doing, I could also define a method such as is_not_defined? and check if the object at the end of the hash responds to that. The fundamental issue is that any object could be stored in the hash, so whatever I check has to be valid for any possible object. That's why I defined nil? for the UndefinedHash class even though it is a bit kludgey--it's better than monkey patching Object or NilClass.
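A minimal sketch of that mechanism, for the curious (illustrative only; the real gem handles more cases than this, and deeper reads on vivified sub-hashes fall back to plain Hash behavior):

    class SafeNestedHash < Hash
      def [](key)
        super or UndefinedHash.new(self, [key])   # never vivify on a read
      end

      class UndefinedHash
        def initialize(root, keys)
          @root, @keys = root, keys
        end

        def [](key)            # keep recording the key chain, create nothing
          UndefinedHash.new(@root, @keys + [key])
        end

        def []=(key, value)    # only an assignment vivifies the chain
          node = @keys.inject(@root) { |h, k| h[k] = h[k].is_a?(Hash) ? h[k] : {} }
          node[key] = value
        end

        def nil?               # stands in for nil at the end of a dead chain
          true
        end
      end
    end

    h = SafeNestedHash.new
    h[:a][:b][:c] = 1    # vivifies :a and :b only now
    h[:x][:y].nil?       # => true, and no :x key was created in h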
"Ruby will throw an error if you do this with a regular hash"
So, why not just let it throw an error? The interpreter is expecting one thing and gets another -- that's a type error right there. Why bother with anything else?
"(i.e. looking up a hash key will create an empty hash there even without an assignment)"
Now you're asking for the entire justification of my class to exist.
Say there is an algorithm you want to run where you are grouping things into a deeply nested set of categories. A nested hash referencing arrays is a good data structure for this. In PHP this is trivial (though not terribly efficient). In Ruby you have to deal with maintaining the hash.

There are many ways to do it, but my SafeNestedHash class lets you do it with a concise, natural syntax. Perhaps if you have an aversion to using a library for this, you would define a pair of methods like key_exists?(hash, key_chain) and set_key(hash, key_chain, value). You could also monkey-patch Hash to have those methods, or encapsulate the logic into an object that handles it all. Or (as you seem to suggest) you could let the interpreter throw an exception and then handle it with rescue, which IMO would be just a notch behind monkey-patching in terms of bad practice, since exceptions should be for unexpected circumstances.

I just happened to go with creating a class that abstracts this all away. Yes, there is a bit of leakiness, but there is no monkey patching, and any other solution would require either ballooning the actual function with incidental accounting details or else utilizing some other abstraction which would need to be used with the same understanding of its purpose, at which point it's just a matter of taste.
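To make the helper-pair alternative concrete, something like this (names hypothetical):

    def set_key(hash, key_chain, value)
      front, last = key_chain[0...-1], key_chain[-1]
      node = front.inject(hash) { |h, k| h[k] = h[k].is_a?(Hash) ? h[k] : {} }
      node[last] = value
    end

    def key_exists?(hash, key_chain)
      node = hash
      key_chain.each do |k|
        return false unless node.is_a?(Hash) && node.key?(k)
        node = node[k]
      end
      true
    end

It works, but every call site now carries the key_chain bookkeeping that the class-based version hides.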
I think you're misinterpreting exceptions in thinking that they need to be unexpected. Say you have a list like [[1,2,3],[4,5,6]]. You could count each of the lists and loop through them individually, or you could write a recursive function that assumes a flat list and switches strategy when an index error occurs:
    def wtf(l, count=0):
        try:
            item = l[count]          # assume we can keep indexing a flat list
        except IndexError:
            # Expected: we've walked off the end of this list
            return []
        if isinstance(item, list):
            # Switch strategies: recurse into the nested list first
            return wtf(item) + wtf(l, count + 1)
        return [item] + wtf(l, count + 1)
Hopefully that gets my point across. Basically, exceptions are very useful, and there is nothing wrong with expecting and catching their throws under certain circumstances.
Or I can just use my class and everything is a zillion times cleaner. I don't see why you're still arguing with me over this. My class is very clean and guaranteed to be compatible with all other code. Perhaps you got thrown by a previous commenter saying I "redefined nil?". But actually all I did was define a nil? method on my internal UndefinedHash class, overriding the default Object implementation (i.e. standard inheritance stuff, nothing dangerous).
I know you can do useful things with exceptions when necessary. However, I have to say unequivocally that using an exception in this case would be the worst possible idea. Why? Because the exception would be NoMethodError (i.e. from nil[:key]). This is equivalent to a NullPointerException in Java, indicating all manner of generic logic errors that could have occurred anywhere within the stack.
If you looked at what I was actually doing, you might agree that SafeNestedHash is an incredibly useful abstraction. Hell, if it was written in PHP you could look at the algorithm and say, "wow, that's fast, elegant and readable." I'm not sure why you're so dead set that I'm doing something wrong, but you're just pulling this stuff out of thin air without any context.
The difference between us is this: You see something that sickens you and publicly blame and shame others for the mess. I see something sickening and I am trying to clean it up.
I commend you for that, though I have no desire to clean up the messes of others. What would you have me do? Learn Ruby and start rewriting modules? Teach proper metaprogramming to the ignorant masses? The first step to fixing a problem, in this case, is convincing people they have one. I'm throwing my hat into that ring.
Actually, I was just trying to pretend that I'm a kindly old wise man that everybody likes for his civic-minded attention to duty. In reality, I'm just as disgusted as you are and the post title says it all.
It's true that Ruby should provide better tools. But it's also true that a lot of what goes on is avoidable and there's nothing wrong with you pointing that out.
"The difference between us is this: You see something that sickens you and publicly blame and shame others for the mess. I see something sickening and I am trying to clean it up."
I am not sure if I am reading this right, but there seems to be a bit of a moralizing tone there: "The difference between us is you [do something slightly slimy], I [do something very noble]".
At least that's the way it reads to me.
I am not sure in a discussion of a technical issue there is much of a difference in practice. Why is "this is sucky" not acceptable unless one has spent significant time trying to clean it up?
Edit: Retracted after reading "Actually, I was just trying to pretend that I'm a kindly old wise man that everybody likes for his civic-minded attention to duty. In reality, I'm just as disgusted as you are and the post title says it all."
I thought I'd leave it up here as a warning to others not to read too much into things.
This is the main reason I've switched to Python in all my side-projects. I drank the Rails kool-aid for a long time, until I finally realized I couldn't get much done with it, as the libraries I wanted to use were unstable, and I couldn't understand the source, due to all the black magic going on. (Not to mention that you're forced to go to the source more often than not, because for some reason most Ruby projects' documentation sucks.) Sometimes I miss Ruby's expressiveness, but I'll take Python's abundant, stable and well-documented libraries any day.
"It disgusts me on levels I cannot describe that this is standard operating procedure for ruby modules (gems, etc) and is the main reason I could never use it. Adding and replacing methods on builtin classes? Are these people out of their minds?! I don't understand it. I've used monkey patches once or twice and it always feels really dirty."
Which is why good Rubyists use these tools sparingly and sensibly. It is not "standard operating procedure."
Programmers of any language can misuse the tools made available. Tarring the language with the brush of its poorest users is disingenuous.
Alan Kay wrote a paper about a very elegant metaphor to solve this, it's called "Worlds":
http://www.vpri.org/pdf/rn2008001_worlds.pdf
You create a "world" for your Classifier gem, which (in a trivial implementation) makes local copies of any global objects when they're modified instead of actually changing them. ActiveSupport runs in yet another "world", and knows nothing about the "local" changes to the global variables Classifier made.
Cooperation is dandy if n = 2, but for n > 2, this can get cumbersome. Unfortunately method names like 'sum' are very attractive.
A language/library designer shouldn't have to think of "all of the short or otherwise possibly overloaded nouns and verbs people might like to use, ever." Being able to say you are going to do a 'sum' which means X in this particular context is very powerful.
This notion should also be unified with version control. (As in OS X, where a process only sees the versions of libraries that it is supposed to run against.)
"A language/library designer shouldn't have to think of 'all of the short or otherwise possibly overloaded nouns and verbs people might like to use, ever.' Being able to say you are going to do a 'sum' which means X in this particular context is very powerful."
In this specific case, what's needed is more abstraction. Consider the approach Haskell takes: type classes and modules let a single, generic sum be defined once and shared, instead of each library bolting its own onto a common type.
It seems to me that worlds permit sharing immutable data structures, and that Warth & Kay's operations on worlds do not permit sharing mutations done in parallel worlds.
The Worlds paper you cited [http://www.vpri.org/pdf/rn2008001_worlds.pdf] cites the COP overview paper [http://p-cos.net/documents/contextl-overview.pdf] mentioned near the top of the link jrockway gave. Warth & Kay say "Similarly, in order to support context-oriented programming (COP) [8], we may want to add a third key to our lookup table that identifies a context." So that gives me some confidence in my interpretation.
JSON.js does something similar. It puts methods on the JavaScript Object prototype for generating and parsing JSON strings. WTF? Is it so inconvenient to just expose two functions to do this?
This was a problem because it broke some jquery functionality, and I ended up removing the portion of JSON.js that was updating the object prototype.
It was a big problem in the JavaScript world, until jQuery and YUI and Google came out and said "Thou shalt not modify Object.prototype" and started leading by example. But Prototype and Mootools and a bunch of smaller libraries still modify the prototypes of built-in objects, which is the main reason I won't go near those libraries. Unfortunate, since they're otherwise pretty good...
I think new versions of JSON.js (json2.js) don't modify Object.prototype anymore - Crockford has certainly learned his lesson.
One of the neat things about JavaScript is that objects are just hashes/dictionaries. It turned out to be a really flexible way to do objects. Unfortunately, when you actually want to use an object as a dictionary, all the methods get in the way. If methods weren't owned by objects, there wouldn't be any such problem. A JavaScript with generic functions would be awesome.
I've been thinking of implementing such a thing (and a couple of other things to fix JavaScript), but it's sort of pointless, seeing as Common Lisp already does many things right. The only thing JavaScript has going for it is that it runs in browsers. A full-fledged interpreter (in JavaScript) for another language would be too slow. Perhaps such a language should be compiled down to JavaScript using only very simple and fast text transformations.
I'm not a lisp expert, but my understanding is that CLOS uses packages for this.
In CLOS, methods (generic functions) live outside classes. So, like normal functions, their names are qualified by the packages that they are defined in. If you want the 'sum' method declared in package 'foo', you either import it into the package/namespace you're working in, or you fully qualify the name ('foo::sum', I think).
This lets you extend built-in classes within your package without getting in anyone else's way.
Great explanation. It doesn't really have to do with CLOS though. The package system works on any names whether you use classes or not (we don't).
One little detail I love because it illustrates Lisp's design style nicely. If you're using an exported (public) name, you say foo:sum. But if the name is private and you want to breach encapsulation to get at it, you say foo::sum. The language makes you state your intention, but makes it easy to do so and doesn't get in your way.
Clojure deals with this by having strict namespaces. Every function and "global" variable belongs to a namespace. At any time, you can refer to a function by its fully qualified name. At the top of a namespace declaration, you can import other namespaces and alias (rename) other namespaces. It's a compile error if your declarations create an ambiguous name, e.g.
    (ns foo
      (:use classifier)
      (:use active-support))
Use allows you to call all of the functions in the namespace without a fully qualified name. In this case, there would be a compile error, because both use statements try to alias a function sum in the current namespace. The alternative would be to use require, which doesn't bring the functions into the current namespace: with (:require classifier) you call classifier/sum explicitly.
The huge advantage is that since there are only functions, there are no methods on objects and every function is subject to namespacing.
If you want to monkey patch, there is a macro called binding, which changes the value of a variable, but only within the calling scope, in the current thread, e.g. (binding [classifier/sum my-sum] ...).
Since named functions are just anonymous functions stored in variables, you can use the binding trick to monkey patch. It's very clean because you can't interfere with any other code.
Binding affects fully qualified vars, which makes it very difficult to stomp on someone else's function.
Lisp's reputation has a tenuous connection to reality; even so, I see no connection between it and the problem being discussed here (name collisions) which doesn't come up in CL. If anyone does make that argument, I'd like to see it: it would appear to require even more obtuseness than usual.
I was thinking specifically of unintentional variable capture causing name collisions in non-hygienic Common Lisp macros. While people can certainly write macros correctly and avoid them, this takes experience, and minor mistakes can introduce some really subtle bugs.
[edit: noted that I'm talking specifically about Common Lisp]
In Scheme, it's basically impossible to write a non-hygienic macro, and in Clojure, there are no name collisions and it takes extra work to capture a variable.
A lot of this is solved by namespaces in Ruby (modules). Unfortunately, developers don't really apply them consistently. A best practice would be to put everything in its own namespace. Been playing around with .NET MVC and C#, and this is something it manages really well.
But solid Ruby programmers use them this way, audit third-party code before using it, and avoid it if it contains such code smells.
One of the fun things about Ruby is that you can open up, override and chain just about anything.
Apart from allowing for some interesting experiments, it also allows you to do things like monkey-patch faulty methods in libraries (including the std libs) at run-time while you wait for a sanctioned update to come out.
This may or may not be a Good Thing, depending on who you are and what you're doing.
I don't think global variables were mentioned in the article. Do you mean the global (top-level) namespace? If so, then yeah, "Don't do that" is pretty much the solution.
Sorry, maybe this isn't correct Ruby nomenclature, but I see Array simply as a global object, which can be modified, with rather obvious consequences. It's sad that I feel old enough to say this, but isn't this a problem that we solved a long time ago?
"Don't do that" is one solution, Alan Kay's idea of "worlds" is another.
Yup, proper use of namespaces would solve the problem, in part. Supposedly Rails3 will be using namespaces quite a bit more than the current version of Rails. That will allow things like mounting apps inside other apps, aka slices.
What I take away as the overall point of the article though is: monkey patch with care.
I'd almost blame this problem on Ruby not having its own standard Array#sum method (or more likely it should be a method on Enumerable?). I'm guessing a lot of projects add their own, so it's a natural source of conflict.
Secondly, it seems like most conflicts like this occur between Rails and some other library. Rails, via ActiveSupport, adds a lot of extensions to core Ruby. I kind of wish it didn't. Framework or general purpose library developers should recognize that their library will likely be combined with many others and avoid conflicts by sticking to what's available in the standard, namespacing their code, etc.
I like Ruby, but I'll readily admit it's not a perfect language. I'm sure someday a better Ruby-like language will come along. Hopefully, it'll be a language with better support for avoiding these kinds of conflicts without removing support for open classes. I'm sure it's an interesting technical challenge.
While I grouse about Rails, here is the challenge I posed last July at Rubyfringe: Forget about Rails providing all these extensions for Rails programmers to use. How does Rails perform this kind of metaprogramming for its own use? How can its own internal libraries use Array#sum while leaving Array unmodified for Rails programmers?
In my own library, I use inject instead of adding Array#sum. But, I know that's sidestepping the issue, not solving it.
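For reference, the inject spelling is a one-liner anyway:

    [1, 2, 3].inject(0) { |total, x| total + x }   # => 6, no Array#sum required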
Honestly, I'm not sure how Rails uses #sum. I'm not sure whether they really need it or not. I suspect not.
I'm not a big fan of a lot of non-standard additions to Ruby. For example, all the #try, or #returning, or even your #andand. I don't consider any of those to have enough intrinsic value to be worth the risk of collisions with someone else's subtly different implementation.
"I'm not a big fan of a lot of non-standard additions to Ruby. For example, all the #try, or #returning, or even your #andand. I don't consider any of those to have enough intrinsic value to be worth the risk of collisions with someone else's subtly different implementation."
#andand is soooooooo "raganwald." The Homoiconic Way is to use rewrite: #try, #returning, and #andand are all removed from your code and never conflict with other people's implementations ;-)
I'd blame this problem on gem writers monkeypatching base functionality.
Why not use Array#inject and pass it a summing block? That way, if you've got an array of things other than numbers, you can correctly utilize the strategy pattern to sum them....
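Something along these lines, presumably (Order and price invented for illustration):

    Order = Struct.new(:price)
    orders = [Order.new(10), Order.new(5)]

    # The combining strategy travels with the call, not with Array:
    orders.inject(0) { |total, order| total + order.price }   # => 15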
I don't quite understand why that sort of method should belong to the Array class. Should Array also have a product method? Average? Standard deviation? String concatenation? Where should you draw the line?
Functions/methods that call themselves, as opposed to having a higher-level function that takes a function as an argument and does something recursive with it.
EDIT: I was going to criticize your idea that there is anything lacking in "naked recursion". It's a really powerful technique and in fact I just wrote a recursive function for production code an hour ago. Comment continued here: http://news.ycombinator.com/item?id=555714
http://www.reddit.com/r/programming/comments/8ayr1/metaprogr...