Your claim that F# or even C# are as concise as a dynamically typed language is the crux of your argument, and is the weakest part of your post.

I haven't worked with F#, but I have read my share of C# code. A non-trivial program is likely to need some methods to respond to many types; in this circumstance C# lets you write many methods, use generics, or (now I believe) use dynamically typed variables. Writing many methods is hard to maintain, and not at all concise. Using generics is theoretically great, but the syntax in C++/C# is just awkward, and writing type-agnostic code with generics is nigh unreadable.

In Ruby or Python you have duck-typing, and you can easily be type-agnostic and concise; in fact if you are writing code in one of these languages, you /should/ be both of these things.

Duck typing is fantastic for true object orientation: "If it walks like a duck and it quacks like a duck, we can safely treat it like a duck." This reliance upon support for responding to messages, instead of reliance upon types, is often the key to being concise and readable.

In Ruby, for example, I can use object.respond_to?(:method_name) and be assured that the object supports this piece of the interface before I attempt to use it. This is an elegant solution for keeping code type-agnostic, and it emphasizes the best sort of object-oriented methodology.
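For example, a little helper along these lines (log_to and sink are made-up names, purely for illustration) will happily take anything that knows how to write, with the respond_to? check guarding the call:

    # Works with $stdout, a File, a StringIO, a socket... anything that
    # responds to :write. No shared base class or interface required.
    def log_to(sink, message)
      unless sink.respond_to?(:write)
        raise ArgumentError, "needs something that responds to :write"
      end
      sink.write("#{Time.now}: #{message}\n")
    end

    require 'stringio'
    log_to($stdout, "hello")              # an IO
    log_to(StringIO.new, "hello again")   # a StringIO, equally welcome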

This is why I love Ruby.



> in this circumstance C# lets you write many methods, use generics, or (now I believe) use dynamically typed variables.

Or you could be explicit about the interface you are writing the duck-typed code against. The problem here is that you can't assign an interface to an existing type, and adding interface declarations results in more code. It will be interesting to see whether this functionality (extension interfaces and inferred interfaces) makes it into C# in the next 5 years. My guess is that it's functionality which is possible to add, but we're not there yet.


I believe the Go language gets most of the effects of duck typing in a statically typed language. The compiler is smart enough to figure out whether an object adheres to a specific interface without having to explicitly tag that object as implementing said interface. So, basically, if you have a method which just calls 'read' on its argument, you can define a 'read' interface and it will accept any existing type that responds to 'read'.


How is a compiler supposed to figure out if I've added a method at run time?


hm... I think I didn't explain properly (or I misunderstood your point).

Let's say you have a method whose sole task is to call .read() on its argument and return a single char.

    char readch(obj) { return obj.read(); }

Now, since I have not declared any explicit type for obj, the compiler would generate code like this:

    interface readch_obj {
        char read();
    }
    
    char readch(readch_obj obj) { return obj.read(); }

So the function basically takes an argument of interface type readch_obj, which defines a single read() method that returns a char.

Now, the trick is that the compiler can recognize that the FILE object has a method with the required signature (char read()) and that the STRINGIO object also has a method with that signature. So the compiler automatically tags the FILE object and the STRINGIO object as implementing readch_obj, and either of them can be passed as an argument to readch().

Note: I'm not saying this is how the Go compiler actually works, but this is the basic concept as I understood it.

The language doesn't need to allow for adding methods at runtime for this to work.

If the language did have the scaffolding needed to add methods at runtime, then yes, this technique wouldn't work, and that's probably the fundamental difference between duck typing and this (structural subtyping? not sure what it's called).


Sorry, it's that I was reading on my phone and missed your 'most.' Go's structural typing stuff seems pretty cool, but what I was trying to point out is that it can only get most of the way there, not the whole way. In Ruby, for example, I can define singleton methods, and methods whose names are built from dynamically generated strings. I'm not sure how a compiler would be able to do stuff like this. At least, not easily.
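To make that concrete, here's a rough sketch of both tricks (the names are invented for the example):

    obj = Object.new

    # A singleton method, defined on this one object at runtime.
    def obj.quack
      "quack!"
    end

    # Methods whose names are built from strings at runtime.
    %w[walk swim].each do |verb|
      obj.define_singleton_method("can_#{verb}?") { true }
    end

    obj.quack      # => "quack!"
    obj.can_walk?  # => true
    obj.can_swim?  # => true

None of those names exist until the program is already running, so there's nothing for a compiler to check against.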

Then again, I haven't been keeping up with the more recent advances in compiler stuff, so maybe someone has already done this.


object.respond_to?(:method_name) is just a run-time variation on generics constraints. C# could use a bit more work on its syntax there, but something like Go's inline interface definitions would make it just as nice as Ruby.


This is entirely fair -- my intended point is that C# is /not/ yet concise for operations that are sugar-sweet in Ruby, and that's why I don't love to use it.


But statically typed languages have duck typing too. Except it's called type inference. And F# has it.


This is patently false.

Duck typing: http://en.wikipedia.org/wiki/Duck_typing
Type inference: http://en.wikipedia.org/wiki/Type_inference

Type inference relies upon known, preset interfaces and inheritance from other types. Duck typing relies solely upon the methods being used.

In duck typing, this means that the user finds out at the time of use whether or not a method is supported by an object. Here, the "interface" to a type is not set in stone. It is inherently dynamic, and as long as the message you pass is supported (duck->quack), there isn't a problem.

The mental jump which many people fail to make is to understand that Object Orientation is about message passing, not types. The key questions are "what message are you sending?" and "does the receiver have a response?" Somewhere in small-memory static-land long ago, it got dogmatized that the best, or perhaps only, way to tell whether an object will respond is through static definitions of complete interfaces, which came to be very strongly associated with our conception of a "type". Once the compiler has used that information to check that the interface constraints are unbroken, it becomes useless and unchangeable.

Ruby is an example of a language that does not use that means of answering the questions. Instead, an object carries dynamic metadata about which messages it supports (specifically, a hash of method names), and uses that (malleable) data to generate an answer. It is fundamentally different from type inference in a statically typed language.
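A minimal sketch of that malleability, in plain Ruby: the set of messages an object answers to can change while the program runs, and respond_to? reflects the change immediately.

    class Duck; end

    d = Duck.new
    d.respond_to?(:quack)   # => false -- no such message yet

    class Duck              # reopen the class at runtime and add a method
      def quack; "quack"; end
    end

    d.respond_to?(:quack)   # => true -- the very same object now answers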


While duck typing is not type inference, you can still get the benefits of duck typing and real OO with structural typing, which can be done statically.


Duck typing is a problem when it's done by message name and those names aren't distinguished by namespaces. We're already starting to hear horror stories of Ruby libraries that want the same name to mean different things, especially with monkey-patching.


Heh, I've never run into or even thought of this - it seems like a very rare and very hard problem to solve. It's almost a semantic problem of language and meanings as opposed to a technical one -- I guess the solution would be to find some technical means of adding specificity...

Hmm, if you have two objects which both support the method 'run', one of which is a jogger that will move into a 'running' state, and the other a program which will actually begin executing code, and you have them both in, say, an array that gets iterated over so every object is sent a 'run' message... and they both take one argument and return nothing distinctive, well, that's extremely contrived. But such things happen. Scary.
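In code it would look something like this (trimmed down, with the arguments dropped for brevity; the class names are just for illustration):

    class Jogger
      def run
        @state = :running      # "run" means "start jogging"
      end
    end

    class Program
      def run
        puts "executing code"  # "run" means "execute the program"
      end
    end

    # Both quietly accept the same message, with very different meanings.
    [Jogger.new, Program.new].each(&:run)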

I'm gonna start thinking about this. Thank you :)


Monkey patching has nothing to do with duck typing. Monkey patching is both incredibly powerful and exceedingly dangerous, but there is nothing about monkey patching that says you need duck typing. You can do static typing AND have open classes. To pretend that problems related to open classes are somehow connected to duck typing is simply false.
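To be concrete (a toy sketch, picking a core method purely for illustration), monkey patching is just reopening an existing class and changing its behavior at runtime:

    class String
      # Redefine an existing core method: every caller in the process
      # now sees the new behaviour. That's the power and the danger.
      def strip
        "(stripped)"
      end
    end

    "  hello  ".strip   # => "(stripped)" -- everywhere, not just here

Nothing in that snippet depends on duck typing; it depends only on classes being open.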


I wouldn't say that it has _nothing_ to do with monkey patching. Monkey patching is the ability to update a class's members dynamically. Duck typing is the ability to update an object's members dynamically. Since the overall point is to dynamically create a new member of some object, I think the previous poster can be forgiven.


Duck typing isn't the ability to update an object's members dynamically. There is nothing inherent in the meaning of duck typing that says anything about updating. All it says if i ask:

walk and talk of it and can walk and talk then as far as i'm concerned, its a duck because that is what a duck was expected to do.

duck typing is polymorphism w/o inheritance.


> walk and talk of it and can walk and talk then as far as i'm concerned, its a duck because that is what a duck was expected to do.

This doesn't even parse.

Duck typing is the idea that an object's type at a particular time is made up of the set of fields and methods it has _at that time_. I suppose that doesn't strictly require updating, but it would be a pretty lame duck without it.

Polymorphism w/o inheritance is a fine description. How do you think that can be accomplished without dynamic updating?


I don't believe duck typing and type inference are really the same thing. For duck typing the type is unimportant; it only cares about the operation, whereas types matter for type inference. The behavior of structural typing in languages such as Scala is closer to duck typing, IMHO.



