Advanced Programming Languages (2009) (might.net)
117 points by graderjs on March 6, 2022 | 98 comments



This list is pretty FP-biased (even Scala, despite being a hybrid language, is mainly well-known for the FP bits it brings to the table rather than any new innovations on the OO side).

There are a lot of other mind-expanding non-FP programming languages. Here are some I'd suggest.

For new high-level ideas of programming semantics:

1. Prolog: This teaches you how to program without the notion of "execution" at all. Even in pure FP languages, there is still the notion of "calling" a function which roughly corresponds to a notion of "execution." (yes eventually execution creeps in as you start working with IO, but unification is still a very different way of thinking about programming than function/procedure calls)

2. Forth: RPN on steroids. The semantic foundation of a lot of other esolangs.

For lower-level ideas of how computers work and how programming relates to that (but that don't introduce very different programming semantics):

1. Smalltalk: The usual exemplar of image-based programming. Even if the actual programming model is not terribly esoteric, the way of interacting with the machine being programmed is a very different way of programming than the usual batch-oriented programming model that occurs with a compiler or interpreter constantly being spun up and spun down.

2. Assembly: Again, not terribly esoteric, and not all that interesting from a CS perspective, but it's extremely useful for a working programmer to understand what things compile to.

3. Racket (a modification of Scheme I suppose, and not fully outside of FP as a result): You'll never be afraid of creating a DSL again (whether you should, of course, is a different matter).


Something I always wondered about Prolog: if you're not defining how the program executes (but rather, what it should do), how do you know if something (a constraint) you're asking for is extremely expensive under the hood? Sure you can run the program and see how long it takes, but if your program is based on real-time data it seems like it would be difficult to have any idea if your program will consistently yield proper results in a reasonable amount of time.

Do I misunderstand something about how Prolog and other constraint-based programming languages work?


Nope you're totally right that at some point you have to understand how the Prolog engine actually tries to evaluate things. This is not just for performance, but as I allude to in the case of IO, sometimes necessary for correctness. Usually this is just knowing that Prolog does backtracking, but it can be more involved depending on your performance needs. But often for a complicated enough project you will find yourself resorting to cut quite a bit.

What I meant by execution is more around how it feels to program in Prolog. So many things that feel like "hmmm I need to call this function and then this function here", etc. in other programming languages are just "oh, I just need to substitute a variable here" in Prolog. It's a very interesting world to remove the entire concept of "return" from your lexicon (e.g. a function returning a value).


You can optimize the search; a naive solution would brute-force all possible solutions. But you can cut the search short, so adding cuts in the right places will speed it up even though it doesn't keep it pure. You can also memoize results. The difference between a naive and an optimized solution in Prolog can be orders of magnitude.


Prolog appeals to me a lot. In my company we work a lot from written requirements (if X does this, then Y needs to be that). I feel Prolog could fit well with this.


Your intuition is correct. We're working on a project where 3rd party developers will write their logic in Prolog, which is a boon for both security and ease-of-writing.

Here's a good video explaining some of this, as well as a nice introduction to the language and how you can use it in your own projects: https://twitter.com/ThePrologClause/status/15006425339400233...


I really don't understand how/why Prolog (or something Prologesque) isn't more popular and easy to use (the two being closely linked, of course), especially embedded within other languages/frameworks.

It was amazing to me ('mind-expanding' indeed) at university but didn't seem practical then nor now.

The closest I can think of that has solid usage is Terraform (or HCL, but mostly as incarnated in tf). Having studied a course with Prolog certainly gave me a grip on (and liking of) Terraform early on in my encounter with it, and people's struggle with it often seems to come from procedural vs. declarative. Oh and VHDL/Verilog of course. But I've written all that now so I'll leave it! (And they're more outside the sphere of those to whom Prolog could be of benefit, as I see it.)


This is exactly what we are using it for on classic.com. Except the rules come from a CMS. Code size dropped 80% going from a manually written Python evaluator to Prolog. Give it a shot!


Then use it; it's easy to interoperate with SWI-Prolog, and you can even have it serve up a RESTful API.


Prolog matches up really well with AI, it just needs the killer library/tools


You used quotes around "execution" a lot. I think a better term would be control. SQL, regex literals and so on work similarly, they are declarative while avoiding explicit control.

An opposite would be something like an FSM, or one of these DAGs that people do visual scripting with. These are declarative ways to describe very explicit control.


I initially wrote the word "declarative" but I don't really like it. It implies an objectivity and absoluteness that doesn't align with the subjectivity inherent in trying to tie code to a programmer's intentions. That is what is declarative for one developer or one set of circumstances may smack of being overly procedural for another (see e.g. what opinions you'll get if you tell a population of programmers that regexes are declarative).

To be more specific, SQL and regex literals still don't really capture the magic of Prolog which is unification. It's a heady thing to realize that telling a machine facts and asking it for information is fundamentally the same thing! Just a matter of whether you pass a constant or a variable.

To compare with SQL, it would be as if you did away altogether with the notion of separate `SELECT` and `INSERT` statements and the same prepared SQL statement could be used to mutate the DB or return data just based on what you pass to the `?` parameter.


I seem to be monad-proof - I cannot understand them from any explanation whatsoever. If I'm not alone, then it's going to be hard to hire people that get it - on top of how hard it is to hire skilled people at all.


Don't let people waste your time with metaphors. A monad is something that you can use `flatMap` on. That's it.

It's not it, of course, but that's all you need to know. You can open up the js console in your browser right now and play around with `map` vs `flatMap` since lists are monads.

It's not a very useful function on lists, but if you take pains to simulate an optional type with it, where an empty list is Nothing/failure and a list with a value indicates Just/success, you can see how it might be an exciting form of control flow: chaining together optimistic functions, but short circuiting if any of them fail. Happy path only programming.
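To make that concrete, here's a minimal Haskell sketch of the same idea (the helpers safeDiv and halfIfEven are made up for illustration; concatMap plays the role of flatMap):

    -- Each step returns [] on failure or a singleton list on success;
    -- concatMap (flatMap) chains them and short-circuits on the first [].
    safeDiv :: Int -> Int -> [Int]
    safeDiv _ 0 = []            -- failure: the empty list plays Nothing
    safeDiv x y = [x `div` y]   -- success: a singleton plays Just

    halfIfEven :: Int -> [Int]
    halfIfEven x
      | even x    = [x `div` 2]
      | otherwise = []

    main :: IO ()
    main = do
      print (concatMap halfIfEven (safeDiv 100 2)) -- [25]: happy path
      print (concatMap halfIfEven (safeDiv 100 0)) -- []: first step fails
      print (concatMap halfIfEven (safeDiv 100 3)) -- []: second step fails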

I wouldn't worry too much if this is yet another shit explanation. Monads will eventually be added in earnest to your language of choice, you will use them, and think "oh, why could nobody explain that?"


Yes, I think metaphors are used to put the formal definition of computation monads into some concrete situations (for building intuition). But "too many bad metaphors" will make monads seem mysterious and incomprehensible.


Hm, small addition:

> A monad is something that you can use `flatMap` on that cannot act in a surprising way. That's it.


Personally, I think monads are hard to understand in Haskell mostly due to its quite complex and comparatively ad hoc type system.

Let's remove all the complexities that are introduced by the static type system for a moment, and focus on the essence. To illustrate the key point, let's use a dynamically typed logic programming language such as Prolog, where we think in terms of relations instead of functions.

Suppose we have a predicate definition of this shape:

    pred(S0, S) :-
            goal_1(S0, S1),
            goal_2(S1, S2),
            goal_3(S2, S3),
            ...,
            goal_N(SN1, S).
In this case, the arguments clearly follow a pattern: They are "threaded through", starting from S0, and ending with S, in a sequence of the form S0 → S1 → S2 → ⋯ → S. This shape of argument passing is common, for example if you want to describe successive "modifications" of a data structure in the form of relations between an initial state S0 of the data, several intermediary states (S1, S2, ...), and a final state S. Or successive applications of side-effects: Think of sending packets over a network, or writing a sequence of strings to a file, where each such relation affects the state of a socket or other entity. Or when describing strings, where the string becomes successively "more known" in the form of further and further instantiation of a list of characters.

You can think of monads as a mechanism to make this cumbersome explicit passing of arguments implicit. For example, since the arguments that are passed around follow such a clear pattern, we could write the example above equivalently as:

    pred -->
            goal_1,
            goal_2,
            goal_3,
            ...,
            goal_N.
and let the compiler do the necessary transformation for us. In fact, that's exactly what also happens in Prolog, using definite clause grammars (DCGs), which you can think of as a mechanism to implicitly describe such a passing of arguments, so that you do not have to write them yourself.
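To tie this back to Haskell, here is a minimal sketch (assuming the mtl package) of the same pattern: the State monad threads the S0 → S1 → ⋯ → S arguments for you, much like the DCG translation does.

    import Control.Monad.State

    -- Explicit threading, analogous to pred(S0, S) above.
    explicit :: Int -> Int
    explicit s0 =
      let s1 = s0 + 1
          s2 = s1 * 2
          s3 = s2 - 3
      in s3

    -- Implicit threading: the monad inserts the state arguments,
    -- just as the DCG translation inserts S0, S1, ..., S.
    implicit :: State Int ()
    implicit = do
      modify (+ 1)
      modify (* 2)
      modify (subtract 3)

    main :: IO ()
    main = print (explicit 5, execState implicit 5)  -- (9,9)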


"Let's forget about Haskell for a second. Look at this Prolog code."

You people live in an ivory tower.


What is the specific set of knowledge that delineates those who are "living in the ivory tower" from, to use a term introduced by another poster down-thread, "blue-collar programmers?" Do I become an ivory tower programmer as soon as I express any interest at all in functional programming? Or is there some level of FP knowledge ("first-class functions and recursion are acceptable, but no ADTs!") where you start "living in an ivory tower?" Or perhaps if I use these things in an industrial context it's okay, or does simply knowing about monads or Prolog mean I'm relegated to the ivory-tower ghetto?

Does understanding how a database works on a level past what most industrial programmers need to, say, leverage the value of an index make someone an ivory-tower resident? How about understanding how Paxos (ha) or Raft work? For that matter, does reading any computer science paper take me out of the realm of being a "blue-collar" programmer or does it depend on how much the paper stinks of "ivory-tower-ness?"

Have I made my point yet?


Ivory-tower-ness has little to nothing to do with the level you're at, it has to do with being unable to relate to people who aren't at the same level. Reread your comment and note that your examples are entirely about the person said to be in an ivory tower, and not how that person relates to others who don't understand those technical details.


Wooosh


And what's the problem with living in an ivory tower, as someone who also lives in one?

A monad is a mathematical idea. It has some learning curve, some inherent complexity that you can't magically solve.

The post you're mocking attempts to draw a helpful analogy. Maybe it helps the original author who said they couldn't find an explanation of monads that they liked. Maybe it doesn't help.

I don't see how name-calling such a conversation "ivory tower" helps anyone.


> And what's the problem with living in an ivory tower, as someone who also lives in one?

The stated intent of the post is "to provide an understandable notion of Monads to people who do not get them", so it seems terribly ignorant of the audience to assume that they will know Prolog, a language known for its incredibly arcane syntax that has gone mostly unused, outside a handful of circumstances, since the 1980s.

The author wants to draw a parallel between Monads and Prolog's execution, but how many programmers understand Prolog? Of those programmers, how many haven't grasped monads? Even I, who have some Prolog knowledge, now have the mental and cognitive load of having to parse Prolog that I haven't seen used properly in perhaps six years, on top of having to understand the analogy to monads, on top of having to understand the description of monads.

If n-gate still existed, I would pay them to review this comment chain, personally.


> A monad is a mathematical idea.

Not here. A monad is a programming idea. It bears some similarity to a mathematical idea, but we aren't really doing category theory here. We're doing programming.

The programming idea is simple: Anything that implements certain methods with certain semantics. That's it.


The comment was supposed to explain monads by common usage rather than as a mathematical notion. Prolog is not a commonly used language by us blue collar programmers. I feel I have a grasp on monads and yet that explanation meant little to me given my limited experience with Prolog.


What's your point?

The code example requires essentially no Prolog knowledge to understand. It's basically just pseudo-code.


If anything, Prolog lives in an oubliette, not a tower.


So really monads are essentially an abbreviated syntax? Chaining functions, where (some of) the arguments are hidden from the language syntax and inserted by the compiler?

How does that relate to “pipe operator” as in https://github.com/tc39/proposal-pipeline-operator

Are monads also linear types? I mean, if I have a monad which represents system IO state, each value of system state can only be consumed once in actual execution, I can’t split the IO state in two and print different outputs on each branch. But I can do that in a conditional expression because only one branch is ever actually realised. For IO state, data branching is only allowed when guarded by control branching.


> So really monads are essentially an abbreviated syntax? Chaining functions, where (some of) the arguments are hidden from the language syntax and inserted by the compiler?

That's a part of it, for instance see:

https://philipnilsson.github.io/Badness10k/posts/2017-05-07-...

I recently wrote a note related to this that had a useful exchange:

>> When you see a "do" block, where do you look to figure out what it's actually doing?

> If you need to know, you look at what consumes the result. If it's abstract (say it's a top-level binding `... -> m a` where the m is abstract) you don't _need_ to know "what it is actually doing" - it should be correct for any choice of `m`.

The behavior, and the "laws" or perhaps general far-reaching properties, are the more important part of Monads I think.

> How does that relate to “pipe operator” as in https://github.com/tc39/proposal-pipeline-operator

The items in the pipeline don't have a set of laws that let you know what's going on for sure without looking at the implementation I guess.


A monad is a type that is basically a structure

  {
  X: value-that-we-care-about,
  Y: something-we-only-care-about-occasionally-if-at-all
  }
and then a bunch of functions that let you pretend it's the value X. That's what the compiler is abbreviating for you: all the necessary wrapping/unwrapping and member calls. In the Prolog example, what we care about is the last/latest value, and monads let us shove the previous value(s) into a hole and forget about them, or at least ignore them until later.

with error monads we don't always care if something failed (like a write) but we probably want to keep that information for later, so it just gets hidden and we can pretend we're just passing around a value (see https://fsharpforfunandprofit.com/rop/ )

I'm not particularly clear on how Haskell does the IO monad, but basically the hidden value is what lets the compiler keep track of what was done pre/post IO, allowing you to thread-the-whole-world through an IO call.

"Do" uses the same trick as that prolog example, by threading previous calculations through a function call you can force

   Do
   Foo1
   Foo2
   Foo3
to be called in that order (important for a lazy language) so you don't have force it by nesting them Foo3 (Foo2(Foo1(X))) can get pretty ugly fast.


Can you expand on what you mean by calling Haskell’s type system ‘ad hoc’? I was under the impression that ‘ad hoc’ was basically the opposite of formalized/well-founded. That doesn’t sound like an accurate description of Haskell’s type system in my understanding.


For example, the aptly named ad hoc polymorphism and overloading violate referential transparency:

    ( (7^7^7`mod`5`mod`2)==1, [False,True]!!(7^7^7`mod`5`mod`2) )
This yields:

    (True,False)
suggesting that the same arithmetic expression is both 1 and 0. Yet, the ability to plug in expressions and reason algebraically about their values was frequently advertised as a core property of Haskell. This example shows that we cannot do this in general; we must also take into account the types, which in this case seem to be defined in a rather "ad hoc" way, by reflecting low-level implementation details such as the range of admissible integers in the type system.

In GHC, there is a dedicated flag to spot such cases (-fwarn-type-defaults).

In general, the guarantees that the type system actually gives, and also the ways to specify them, appear somewhat unfinished and are also quite hard to understand, often necessitating semantic restrictions and syntactic extensions. For further examples, see for instance:

https://stackoverflow.com/questions/27019906/type-inference-...

https://stackoverflow.com/questions/14865734/referential-tra...


For readers wondering why this is, there are a few things at play here. I preface my explanation by saying that the benefits of type classes vastly outweigh issues like the above example. Also, if I put the above example in my work project, I immediately get a warning about the exact issue, so it's not like it's a footgun or anything like that. Anyhow:

0. Integer is arbitrary precision while Int is bounded, machine dependent (eg. 32 or 64 bit). They are both instances of the Num type class as well as the Integral type class as we'll see later.

1. Numbers without explicit type signatures are overloaded (aka. constrained polymorphism):

  λ> :t 1
  1 :: Num p => p
where p can be any type that has a Num constraint, like Integer or Int.

2. As per https://www.haskell.org/onlinereport/decls.html#sect4.3.4 we have

  default (Integer, Double)
as concrete types for numbers to default to in expressions without explicit type signatures to guide inference.

3. The type of the list index operator is:

  λ> :t (!!)
  (!!) :: [a] -> Int -> a
where the index is a concrete Int type.

Right, so in the above example if we check the type of 7^7^7`mod`5`mod`2

  λ> :t 7^7^7`mod`5`mod`2
  (7^7^7`mod`5`mod`2) :: Integral a => a
it is still overloaded (Integral), ie. it can be either Integer or Int. Now in the first case there's nothing to concretise the type, thus we default to Integer as per the defaulting rule. In the second case the usage of (!!) concretises the type to an Int. As 7^7^7 is big, it does not fit in an Int (it overflows). Compare:

  λ> 7^7^7`mod`5 :: Integer
  3
  λ> 7^7^7`mod`5 :: Int
  2
The mystery is now solved. Side note: if we do

  default ()
to prevent GHC defaulting, we'll get a type error and will be forced to specify a type. We can also say:

  λ> ( (7^7^7`mod`5`mod`2)==1, [False,True]!!fromInteger((7^7^7`mod`5`mod`2)) )
  (True,True)


In the context of Haskell "ad-hoc" means ad-hoc polymorphism.

See https://wiki.haskell.org/Polymorphism for details on parametric (ie. unconstrained) vs. ad-hoc (ie. constrained) polymorphism. In short the difference is that ad-hoc is parametric + one or more type class constraints. Eg. in:

  λ> :t fmap
  fmap :: Functor f => (a -> b) -> f a -> f b
the type variables a and b are unconstrained while f is constrained by the Functor type class.


I'll also get nerd-sniped here :)

Monads are state/system backdoors for pure functions. Life would be pretty cool if we could just map/filter/reduce everything, it's a beautiful model and ripe for insane optimization. Sadly programs like that aren't useful, and programmers always want to do gross animal stuff like writing to sockets or files. Enter the humble monad, a way to pass something like a database connection, a logger, a system IO instance, or whatever into your cool nest of map calls.

John Ousterhout talks about this problem in A Philosophy of Software Design. Sometimes you need to carry things around, like a rendering context, a browser window, whatever, and you're given a couple of imperfect options:

- Make it global

- Thread it through literally everything

(React people might know #2 as "prop drilling", and they might know #1 as hooks)

Monads are infrastructure to help you do #2, since #1, well, people don't like it.
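For instance, here's a minimal sketch of that #2-with-infrastructure idea using the Reader monad (assuming the mtl package; Config and greet are made-up names):

    import Control.Monad.Reader

    newtype Config = Config { logPrefix :: String }

    -- Without Reader, every function in the call chain would take an
    -- explicit Config argument; here the monad threads it implicitly.
    greet :: String -> Reader Config String
    greet name = do
      prefix <- asks logPrefix
      pure (prefix ++ "hello, " ++ name)

    main :: IO ()
    main = putStrLn (runReader (greet "world") (Config "[app] "))
    -- prints: [app] hello, world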


Option 3: Create an object that carries it around. (which, if you squint, is exactly what the monad is - an object that carries the data around. The difference is that monads always have the same interface, no matter what extra data they carry around. I'll let you decide for yourself whether that's a plus or a minus...)


I see where you're coming from, but I wouldn't say it's different than "thread it through everything". The monad way is to pass a blob through a pipeline and use types to say "this thing I'm threading through the pipes is an integer, but also a logger". The convenience here is that the extra state is attached to your data, so you don't have to explicitly thread it through each function. But for that convenience, you're (almost always) using up memory or suffering dereferencing (or both).

This dichotomy is fundamental, which is why I'm sticking on it here. Either you have to deliberately hand this piece of data down into the scope you're working with, or you can summon it from any scope. There really are no other options.


I see your point about "threading it through".

But doesn't that mean that you're agreeing with me that OO and monads are doing the same thing (at least in this regard)?

And, can someone explain to me how the type system works here? I had thought that an advantage of monads was that you just had to change the type signature to take a monad rather than an int, but once you did that, it could take any monad. But that won't work, will it? If you're going to try to log from that function, then it has to be a monad that logs, doesn't it? It can't just be a generic monad; it has to be a specific one.


Oh, yeah I like this. Yeah we definitely agree.

Re: type system, AFAIK you do something like (let's assume something Go-ish here):

    package main

    import "log"

    type Integer interface {
      GetValue() int
      Mul(i Integer) Integer
    }

    type Logger interface {
      LogMessage()
    }

    type DataBlob struct {
      value   int
      message string
    }

    func (db *DataBlob) GetValue() int {
      return db.value
    }

    func (db *DataBlob) Mul(i Integer) Integer {
      db.value *= i.GetValue()
      return db
    }

    func (db *DataBlob) LogMessage() {
      log.Println(db.message)
    }

    // Mul returns the Integer interface, so triple must too.
    func triple(db *DataBlob) Integer {
      return db.Mul(&DataBlob{value: 3})
    }

    func main() {
      db := &DataBlob{value: 13, message: "tripled 13"}
      log.Println(triple(db).GetValue()) // 39
      db.LogMessage()                    // "tripled 13"
    }
---

In this way, you can always implement more interfaces onto DataBlob, and then modify your pipeline steps to take advantage of the new functionality. This example isn't really perfect, but it's reasonably illustrative. The thing is getting the compiler to know you can do different things with a given piece of data. In Go that's interfaces, in Haskell that's typeclasses, in Rust that's traits, blah.

P.S. This [0] Rosetta Code example made it pretty clear to me, so if you're the same kind of thinker maybe it'll help you.

[0]: https://rosettacode.org/wiki/Monads/List_monad#Go


i'm with the gp. i guess another way i think about it (which might be wrong, please tell me) is that with monads we can treat something like a pipe as a pure object, even though what it's carrying is decidedly not


Yup, exactly. It's basically like if you have a bunch of simple arithmetic functions:

    half(x)
    double(x)
    triple(x)

you can easily do things like:

    double(half(triple(13)))

Or a la pipes:

    triple 13 | half | double

But now what if you want to print something in `half` without a global? Well, I guess it's half(x, printer) now. Hmm and now `triple` has to carry that around too, so it's triple(x, printer). Well, that makes me think everything gets a printer, so it's double(x, printer) now too.

Well, printing is cool, but now we're a real company and real companies log everything. I... guess we're making new functions? half_log(x, logger) and triple_log(x, logger) and double_log(x, logger). And... new pipelines? Ugh.

At this point you might be thinking one of a number of things:

- Are globals really that bad?

- Can OOP fix this (dependency injection, builders)?

- Surely there's some kind of framework for this.

And if you're a functional programmer who's super not into globals or OOP, you might start seeing the appeal of #3 there. Et voila, a whole bunch of functions that handle this "pass an extra blob into your pipelines" thing.
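As a rough sketch of what that framework looks like in Haskell (assuming mtl's Writer monad; these logging versions of half/double/triple are made up):

    import Control.Monad.Writer

    -- Each step returns its result plus a log entry; the Writer monad
    -- concatenates the logs, so no signature has to carry a logger.
    half, double, triple :: Int -> Writer [String] Int
    half x   = do tell ["half "   ++ show x]; pure (x `div` 2)
    double x = do tell ["double " ++ show x]; pure (x * 2)
    triple x = do tell ["triple " ++ show x]; pure (x * 3)

    main :: IO ()
    main = do
      let (result, logs) = runWriter (triple 13 >>= half >>= double)
      print result        -- 38
      mapM_ putStrLn logs -- ["triple 13","half 39","double 19"]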


Monads are cross cutting functionality shaped into a unified interface. Kinds that are in common use:

- lists are monads that provide multiplicity

- options are monads that provide nullability

- try wrappers are monads that provide failure semantics

- futures and promises are monads that provide asynchrony

- either is a monad that provides duality

… and many more. The key concept here is that you wrap computations in some behavior before executing them; there are many problems that can be monadified (see the sketch below).

The problem is that learning about monads comes with a lot of threads to pull about category theory and its use in computation. This is where I feel folks get lost.

Monads are tremendously useful and understandable. Many go full theory when teaching them and that is regrettable.
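To show that unified interface concretely, here is the same `>>=` across a few of the monads listed above (a minimal Haskell sketch):

    main :: IO ()
    main = do
      -- multiplicity: the list monad
      print ([1, 2] >>= \x -> [x, x * 10])          -- [1,10,2,20]
      -- nullability: the Maybe monad
      print (Just 3  >>= \x -> Just (x + 1))        -- Just 4
      print (Nothing >>= \x -> Just (x + 1 :: Int)) -- Nothing
      -- failure semantics: the Either monad
      print (Right 3 >>= \x -> Right (x * 2) :: Either String Int) -- Right 6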


The problem is that you are trying to understand them from an explanation, when the correct method is to code with them and feel them and get used to them.

The programming world is filled with concepts that beginners find extremely difficult to comprehend from explanations:

  - pointers
  - closures
  - recursion
Monads are no more difficult to grasp than any of these concepts (but no one would ever claim that it will be "hard to hire devs that understand closures...")


I take your point but actually I don't think there is an abundance of developers who understand closures. :-)


A structure is a monad if you can nest one of itself inside another without a visible seam.

If you have a binary tree whose leaves are also binary trees, is that meaningfully different from a single grafted binary tree? Not really.

If you have a sequence (list) of sequences, is that meaningfully different from having a single sequence? Not really.

If you have an option of option of bool, is that meaningfully different from having a single option? Not really.

More precisely, if you don't care about the difference, structures that are monads let you smooth over the seam between nesting levels.

The flatMap operator lets you extend a data structure at each of its leaves. That is, you can map over each leaf and replace it with another structure of the same kind, then smooth over the seam.

The connection with computation comes about by imagining the data structure as the history of outputs of a process; a trace. You can extend a program trace by specifying what to do next (via flatMap). Clever choices of data structure give you interesting alternative models of computation. But, this is an application of modeling with monads. It's not what a monad is.
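In Haskell terms, that seam-smoothing operation is join (and flatMap is map-then-join); a minimal illustration:

    import Control.Monad (join)

    main :: IO ()
    main = do
      print (join [[1, 2], [3]])                  -- [1,2,3]
      print (join (Just (Just True)))             -- Just True
      print (join (Just (Nothing :: Maybe Bool))) -- Nothing
      -- flatMap on lists: map each leaf to a structure, then smooth the seam
      print (concatMap (\x -> [x, x]) [1, 2, 3])  -- [1,1,2,2,3,3]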


Have you spent a lot of time trying to write code using monads? If not, that's the only real way to learn I think. I wouldn't expect someone to understand what object oriented programming or functional programming is unless they've tried to solve quite a few problems using those techniques.

Try solving a non-trivial problem in Haskell. You may not fully understand monads, but you should get a feeling for them.


The definition of computation monads (in the sense of Moggi) is simple, but not trivial. It's deep since it generalizes many computation "phenomena": side effects, exceptions, etc.

Conceptually, a "pure" function is something like `A -> B`, so all information about its internals (e.g. side effects) is lost: we put in something of type A but can only observe a result of type B.

The idea of computation monads is to wrap the result type of the function in a "constructor" T; the type T B is then used to "restore" (or to reason about) the lost information.

When composing such functions, T X (for some result type X) is all the information we need. This naturally explains the third component of the Kleisli triple: given `f : A -> T B` and `g : B -> T C`, their composition has type `A -> T C`.
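In Haskell, this composition is the Kleisli arrow (>=>) from Control.Monad; a small sketch with T = Maybe (parseAge and checkAdult are made-up helpers):

    import Control.Monad ((>=>))

    -- f : A -> T B
    parseAge :: String -> Maybe Int
    parseAge s = case reads s of
      [(n, "")] -> Just n
      _         -> Nothing

    -- g : B -> T C
    checkAdult :: Int -> Maybe Int
    checkAdult n = if n >= 18 then Just n else Nothing

    -- their Kleisli composition : A -> T C
    parseAdult :: String -> Maybe Int
    parseAdult = parseAge >=> checkAdult

    main :: IO ()
    main = print (parseAdult "42", parseAdult "7", parseAdult "x")
    -- (Just 42,Nothing,Nothing)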


I think a lot of explanations miss the mark on what exactly a monad is, and make it out to be like some kind of tool or data structure or something, when it's really just describing operations on some piece of data.

It can be helpful to just take the word "monad" out of your brain and work up from understanding the idea of a functor first, then gradually understand what a monad is.

In the end, I think, monads are not that exciting outside the world of pure FP :) https://twitter.com/plt_borat/status/228009057670291456


If you've already tried, you probably already came across this explanation, so there's a good chance it won't help, but let me just try: I think of a monad as a computation, meaning the act of computing something. A box with a button that will do something (and possibly return a value) when you press the button. Before the button is pressed, you can't know what value is going to be generated, because none has yet.

The "return" monad operation does a very simple thing: given a value, constructs a box that is going to generate that value once you press the button. Not very useful in itself, but the point is that that value can be used for further computation. The "bind" operation precisely takes another monad and a function, so now we have two monads: the first one is provided by you and the second one is the result of the "bind" operation. Think the first monad as being "embedded" in the second one. When you press the button on the outer monad, first the button on the inner monad is pressed and a value is generated; then the value is mapped through the function; and then the result of the function is the value generated by the outer monad. In practice you extended the inner monad, doing some more computation (the function) after it had executed.

In the case of the Haskell IO monad, you can't really press the button inside your program: there is no way to extract values that were put in an IO monad. Still, an object of the IO monad encodes what you want the program to do. Basically, the Haskell interpreter/compiler gives you the opportunity to press, just once, the button of a single IO monad: precisely the one given by the main symbol (which has type "IO ()"). By composing monads in the right way, you can arrange your program to do what you want.

In other words, a monad like IO, where you can't press the button yourself, allows you to express precisely what a program with side effects should be: something for which you cannot know the result (you cannot press the button and see the generated value in the program) unless you really execute it and commit to the side effects (which is what happens when main returns something and the single allowed button press is executed on that thing).


Here is an article and video on "Railway Oriented Programming":

https://fsharpforfunandprofit.com/rop/

You won't believe it's about monads!


I felt the same way. This is the explanation that made it finally make sense for me [1]. Having a bunch of examples right in a row helped me see the general form of a monad (instead of once again having the "Maybe Monad" example thrown at me.)


Have you ever tried: https://ericlippert.com/2013/02/21/monads-part-one/ ? it was the one that did it for me.


You know how in shells, you can use the pipe operator, "|", to chain commands together?

Monads are the same thing, but for functions.

They adjust the output of one function so it can become the input of the next one.


The easiest way for me to learn was by using limited versions of them in scala. I started to “get it” after really applying combinations of Future, Option, Try and friends.



This is an analogy, not a definition. But imagine a pointer to a struct that could only be dereferenced by passing it to another function.


I, too, have tried to figure out what exactly are monads, and have come away even more confused than I was before.


You know how some functions return a value or an error? Imagine trying to compose several of those functions, passing the value into the next function or handling its error. That ends up with a lot of boilerplate. The first time I wrote a Monad was because of this issue. Wrapping this series of function calls in a Monad allows you to write something that reduces that boilerplate by handling the errors of any functions within it, so you can compose them like you'd want.

How do you handle state in a language like Haskell that doesn't have mutable state? You pass a data value between functions keeping track of everything, as an extra argument. Imagine the boilerplate of that! Every function suddenly needs an extra argument for the data, regardless of whether it uses it, just to keep it around. The State Monad deals with that boilerplate.
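Here's a minimal Haskell sketch of the first paragraph, with Either and two made-up steps; (>>=) passes the value along and short-circuits on the first error:

    step1 :: Int -> Either String Int
    step1 x = if x > 0 then Right (x * 2) else Left "step1: need a positive number"

    step2 :: Int -> Either String Int
    step2 x = if even x then Right (x + 1) else Left "step2: need an even number"

    -- Composition without the nested case-of boilerplate:
    run :: Int -> Either String Int
    run x = step1 x >>= step2

    main :: IO ()
    main = do
      print (run 5)    -- Right 11
      print (run (-1)) -- Left "step1: need a positive number"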


I've heard that dividing e.g 1 by 0 in Excel feels like a monad

it's either value or dividing by 0 error :)


I'm wanting to learn APL (I'm leaning towards J or BQN) and Lisp as two languages/paradigms. APL for its expressivity and Lisp for its ML background.

More mainstream, I'm wanting to learn Tcl and Lua.

I don't program professionally so don't need to learn anything to get a job.


I learned a little APL & J, and enjoyed doing anything mathy or with matrices. I didn't like them for generic office stuff.

I read 3-4 books on Lisp and can appreciate the power of it, but somehow I've never really enjoyed using it.

Lua was a bit too barebones for me.

Oddly enough, Tcl is where I felt the most comfortable. It's partly because it's a command language, but I also found it full-featured and lightweight. I didn't like that there is no official distribution though. There are commercial options and things on SourceForge, and I'm not comfortable downloading from them. Anaconda Python has an option to include the Tcl interpreter, Tk GUI, and Wish, as Python has a wrapper around Tk.


> I learned a little APL & J, and enjoyed doing anything mathy or with matrices. I didn't like them for generic office stuff.

Really? I found the boring stuff and things like web programming to be the most interesting in J.


About Tcl on Windows, personally I use the one that MSYS2 installs, which I assume is the most "vanilla".


I read that Sourceforge changed ownership and the shady downloads are no longer a thing to worry about.


J or BQN are both fine choices for an array language. I prefer J's ascii-based syntax, but it has some rough edges (e.g. namespaces) that BQN seems to have solved more elegantly.


As a disclaimer, Lisp's ML background is different ML than what's mainstream today.


Symbolic reasoning AI is mainstream, it was just never related to ML. The debate was between symbolic reasoning and machine learning in achieving AI. Somehow AI and ML eventually got equated in the last decade (in that ML became the dominant AI approach).

Symbolic reasoning lives well enough in things like business rules, we just don’t associate it so much with AI anymore, nor is it done with LISP.


I thought that "Machine Learning" was also a topic for Symbolic AI.


They are generally posited as in opposition, though there is some research in trying to combine them.


Topics like "rule learning" existed already decades ago. Genetic Algorithms. Decision Tree learning. ...


All five recommended languages are functional languages very similar to each other. Why not recommend a language outside the functional paradigm, such as a logic programming language (Prolog, ASP, Mercury, etc), or, I don't know, an esoteric language like Brainfuck or Mondrian? We're trying to teach programmers to think outside the box, correct?


Ah yes Haskell, the language that everybody loves and nobody uses.


I'll let my teammates at standup tomorrow know we are fake!


My team uses it.

¯\_(ツ)_/¯


Has anyone done a study on startup success vs language choice? I've never done anything commercial outside of being an employee/contractor, but wouldn't a niche language make it harder to find developers, and possibly increase hardware costs if it's not optimized as well as JS?


> but wouldn't a niche language make it harder to find developers

Maybe? There are fewer of them, but sometimes there are great people who really want to use a language and whom you'd have a hard time hiring otherwise. But then you have to make sure they're not just into the language. Some people complain "we're having a hard time hiring experienced developers in our area at rates we can afford for our weird language", but you have to wonder if "for our weird language" is actually the difference maker there.

> possibly increase hardware costs if It's not optimized as well as JS?

I don’t exactly think of JS as “optimized”. Most languages will be faster. Certainly anything in the APL family if you’re doing that type of work.


Maybe not much of a study, but there are a lot of examples. Paul Graham developed Viaweb using Common Lisp and ran circles around his competition using C++. I read another one involving APL in the early ~80s for financial reporting uses. Back then, with Fortran or C, you'd have to have the programmer go back and rewrite chunks of the code if the VP wanted to invert a table, but it's immediate in APL with like a single-character change. The programmer would just say "sure", press the key, and print out the answer.

So in general, I think dynamic languages are probably better for getting together a prototype for many use cases, but they don't scale as well to a large number of users or high performance uses.


"According to W3Techs' data, PHP is used by 78.9% of all websites with a known server-side programming language."

I am prepared for the downvotes here. But numbers don't lie. PHP is the dominant language of sites that run. If they are up and running, I would surmise the business is up and running. Facebook is a great example of hacking something together with PHP. IMHO, if Facebook had started with Java, it wouldn't exist. It's so fast to just hack up a PHP script and rsync it.


According to the same site, 43.2% of sites surveyed are running Wordpress. That's over half the PHP number, and I suspect much of the rest is running something open source that hasn't been significantly modified by the site operator.

I just put up a site running Wordpress. Backend language was not a significant factor in selecting that software. Its ease of deployment on cheap shared hosting was certainly a factor in its initial success.



Original source: https://charliereese.ca/y-combinator-top-50-software-startup...

Lots of Ruby and Python. What I get from that is that being able to put up a CRUD type app in a hurry is a requirement for most of these companies, and frameworks like Rails and Django make it easy to do so.


Ruby and Python being used about equally as often, but Ruby (primarily Rails) leading to 3x the market cap in aggregate...

It's important to look at both the numerator and the denominator :)


I agree wholeheartedly with the premise here - learn different languages to expand your mind - but some details have not aged well.

In particular, today I would strongly recommend Python over Haskell for machine learning or natural language processing tasks. Not because Python is a better base language for these sorts of things (it isn't) but because the library ecosystem that surrounds it is so much larger. Python is basically batteries-included for ML/NLP these days.


He meant ML as in Standard ML and other variants, ML here means modular language or something else, and isn’t related to machine learning at all.

Suffice it to say no one has really explored doing ML in an ML. :)


I think he was referring to the author saying Haskell would be his choice for machine learning: "I don't do a lot of artificial intelligence, natural-language processing or machine-learning research, but if I did, Haskell would be my first pick there too." In the real world, python dominates the machine learning space.

> Suffice it to say no one has really explored doing ML in an ML. :)

Learned Standard ML in college. Never did anything with it in the corporate and don't know anyone who has either. Though I think it is something everyone should learn. One thing I agree with the author: "Standard ML was the first functional language I learned well, so I still remember being shocked by its expressiveness." If you come from C/C++/Java/etc world, just pattern matching by itself is mind-blowing.


Ok, that’s just weird and comes completely out of left field. I just saw ML and assumed it was a reference to the language since Haskell couldn’t possibly be useful in machine learning. Alas, it’s in the second sentence.


The article says "I don't do a lot of artificial intelligence, natural-language processing or machine-learning research, but if I did, Haskell would be my first pick there too." No explanation, the article goes on to explain purity and lazy evaluation that as far as I understand don't have any connection to machine learning.


>No explanation

To be fair, the preceding sentence doesn't read like the author is about to give an argument about why Haskell is good for "artificial intelligence, natural-language processing or machine-learning research".

It just reads like he's merely extending his preference for Haskell in what he does, mentioning in passing that he thinks it would suit him in those other domains too.


Standard Meta Language actually... for the origins of the name probably see the paper 'A metalanguage for interactive proof in LCF*'


No he didn't. The article specifically talks about using Haskell for machine learning and natural language programming.


He has a better list in an equally old post where he basically just says "learn lots of different programming paradigms."

https://matt.might.net/articles/what-cs-majors-should-know/


It's interesting to observe that in the couple of years that followed this article (published in 2009), three of today's most influential and arguably, best designed languages, were born: Kotlin, Swift, and Rust.


“ I encourage my students to never stop learning niche languages. They expand your modes of thinking, the kinds of problems you solve quickly and your appreciation for the meaning of computation.”

Timeless advice.


scheme is not "untyped"


Previously on HN:

[1] https://news.ycombinator.com/item?id=23077992 - (May 5, 2020 — 151 points, 133 comments)

[2] https://news.ycombinator.com/item?id=11932675 - (June 19, 2016 — 219 points, 202 comments)

etc.


And originally from 2009 (as noted in the previous HN links)


> I encourage my students to never stop learning niche languages.

Well, speaking for most mortals doing programming: once you find a job, you'll mostly learn languages and tech useful for your job, not niche languages. And not everyone likes programming enough to keep learning niche languages outside of work hours. And when looking for another job, you'll mostly learn languages and stuff useful for finding an interesting new job.

If you have the time and interest to learn a ton of niche languages, good for you. And maybe you'll be able to create a start-up with your epic Haskell skills, the "enlightenment" you got from learning niche languages and the golden spoon you were born with.


>If you're looking for a job in industry, my reply is to learn whatever is hot right now: C++, Java and C#--and probably Python, Ruby, PHP and Perl too. If, on the other hand, you're interested in enlightenment, academic research or a start-up, the criterion by which you should choose your next language is not employability, but expressiveness. In academic research and in entrepreneurship, you need to multiply your effectiveness as a programmer

Great, I'd love to see the studies on how much of a "multiplier" these leet languages give me. Hopefully I too can ascend to the mythical x10er because I use Lisp and Haskell.



