
> I’ll admit, multiple dispatch is a super abstract concept, one that you really only appreciate when you need it.

After reading this post and this stackexchange link[0], I'm not sure it is that complicated. The behaviour in the post can be explained as Julia picking the most specific function that fits based on the runtime type (i.e. uncovering the underlying concrete type at runtime and checking for an implementation). In fact the stackexchange answer mentions that dynamic languages don't differentiate between overloading and multiple dispatch at all, since everything is resolved at runtime. So overloading and multiple dispatch seem more like the same concept: one is eagerly resolved while the other is lazily resolved.

[0] https://softwareengineering.stackexchange.com/a/125088




Overloading tends to be different because the first argument gets massively higher priority in determining the effective method.

The main promise of Julia’s multiple dispatch is that it allows the obvious code to do the right thing and dispatch to fast specialised methods for a large variety of types, which tends to make programs more composable.

Another way to look at it is that the language is designed to be able to express a full numeric tower, i.e. different int and float types, complex and arbitrary precision numbers, suitable upcasting when types don’t match. One desirable property is that a + b should generally end up calling the same function as b + a. Try to think about how you would do this with Java-style overloading, even if the overloading was more dynamic and looked at the runtime type and all methods up the class’ ancestors. And how would you extend this to add a new numeric type (say dual numbers.) IIRC, the Julia strategy mostly uses some type-level programming to figure out a suitable shared type for a and b (which basically only runs at JIT-compilation-time), then coerces them both to that type, then adds them.
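To make that strategy concrete, here is a minimal Java sketch of "find a common type, coerce both, then add". MyInt and MyFloat are made-up types for illustration; in actual Julia the promotion is resolved at JIT-compile time via promote_rule, not with runtime instanceof checks:

```java
// Hypothetical sketch of the promote-then-add strategy.
public class Promote {
    interface Num { double asDouble(); }
    record MyInt(long v) implements Num { public double asDouble() { return v; } }
    record MyFloat(double v) implements Num { public double asDouble() { return v; } }

    // Step 1: pick a common type (any mixed pair widens to MyFloat here).
    // Step 2: coerce both operands. Step 3: add within the common type.
    static Num add(Num a, Num b) {
        if (a instanceof MyInt x && b instanceof MyInt y) {
            return new MyInt(x.v() + y.v());             // same type: no coercion
        }
        return new MyFloat(a.asDouble() + b.asDouble()); // widen both, then add
    }

    public static void main(String[] args) {
        System.out.println(add(new MyInt(1), new MyFloat(2.5))); // MyFloat[v=3.5]
        System.out.println(add(new MyFloat(2.5), new MyInt(1))); // MyFloat[v=3.5]
        // a + b and b + a land in the same code path by construction.
    }
}
```

Because both operands are coerced to one shared type before the actual addition, commutativity falls out automatically instead of requiring a separate overload for every ordered pair of types.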

Scheme and Common Lisp manage to provide quite large, complicated numeric towers, but they are basically entirely impossible to extend with custom types.


The promise of multiple dispatch is that it allows every library to be generic. One author writes a library that calculates $COMPLICATEDTHING; another author writes a library that implements a new type of number, such as ranges; provided the second library defines all the methods that the first uses, then, magically, they work together and you can calculate $COMPLICATEDTHING on ranges! That is the thing that Julia does that nothing else does.


"Entirely" is hyperbolic here. Building new numeric types with CL is certainly possible if you write your own versions of numeric functions like `plus` in lieu of the builtin `+`, and it's easy to make your new functions backward-compatible so `plus` will work on e.g. integers. This is not as easy as if the numeric functions were generic but it's still doable. By not being generic the builtin CL numeric functions run very fast on the most common number types; that was the tradeoff benefit of not designing them be generic.


"By not being generic the builtin CL numeric functions run very fast on the most common number types; that was the tradeoff benefit of not designing them be generic."

Julia shows this tradeoff isn't necessary.


Depending on the language target and method of execution, dynamic resolution of runtime types can be quite hard. IIRC Java's JVM is super optimized for static codepaths, whereas Julia uses a JIT system that is "slow" on first run but speeds up considerably after all methods in a hot code path are resolved.


Yep, it's just late binding.

If you want that in the JVM, you can use Groovy, which does just that.


Example:

    class Fire {}
    class Water {}
    class Grass {}
    class Ice {}
    
    def eff(Fire f, Grass g) { 'burn' }
    def eff(Fire f, Ice ice) { 'melt' }
    def eff(Water w, Grass g) { 'grow' }
    def eff(Water w, Ice ice) { 'freeze' }
    def eff(a, b) { 'undefined' }
    
    println eff(new Fire(), new Grass())
    println eff(new Fire(), new Ice())
    println eff(new Water(), new Grass())
    println eff(new Water(), new Ice())
    
    println eff(new Ice(), new Water())
Prints:

    burn
    melt
    grow
    freeze
    undefined


That is method overloading, which isn't the same as multi-dispatch.


Groovy supports multi-dispatch: if you hide a value behind a generic type, the runtime will dynamically resolve the concrete type correctly. See my example in Java (which has method overloading, but not MD).


What would multi-dispatch do differently?


The difference is that overloading works on compile-time types but dispatch works on runtime types. This matters because when you write a generic algorithm, you don't know the type, so you don't get the right behavior with overloading (you get the generic fallback).
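A minimal Java sketch of that failure mode (Eff is a hypothetical class, not from the Groovy examples in this thread): the same runtime values hit the specific overload or the generic fallback depending only on the declared types of the variables.

```java
// Java picks the overload from the *declared* types at compile time,
// so Object-typed variables always fall through to the generic fallback.
public class Eff {
    static class Fire {}
    static class Grass {}

    static String eff(Fire f, Grass g) { return "burn"; }
    static String eff(Object a, Object b) { return "undefined"; }

    public static void main(String[] args) {
        // Direct call: compile-time types are Fire and Grass.
        System.out.println(eff(new Fire(), new Grass())); // burn

        // Hide the types behind Object: the runtime types are unchanged,
        // but overload resolution already happened at compile time.
        Object fire = new Fire();
        Object grass = new Grass();
        System.out.println(eff(fire, grass)); // undefined
    }
}
```

This is exactly the situation a generic algorithm is in: it only sees the static type it was written against, so with overloading it can never reach the specialised methods.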


In Groovy, dynamic dispatch is used. It's the runtime types that matter, as my example makes patently clear.

EDIT: to make it even more obvious there's no difference:

    class Fire {}
    class Water {}
    class Grass {}
    class Ice {}
    
    def eff(Fire f, Grass g) { 'burn' }
    def eff(Fire f, Ice ice) { 'melt' }
    def eff(Water w, Grass g) { 'grow' }
    def eff(Water w, Ice ice) { 'freeze' }
    def eff(a, b) { 'undefined' }
    
    // hide compile-time types
    Object fire = new Fire()
    Object grass = new Grass()
    Object ice = new Ice()
    Object water = new Water()
    
    println eff(fire, grass)
    println eff(fire, ice)
    println eff(water, grass)
    println eff(water, ice)
    
    println eff(ice, water)
Prints:

    burn
    melt
    grow
    freeze
    undefined



