I used SML in my compiler course in college. Our professor loved SML and maintains a pretty popular compiler for it, so he was a wealth of knowledge and helped us along the way, which made it a much more approachable language than if I had embarked on learning it myself.
It really shows the power of statically typed functional languages, and especially the power of sum types. Representing trees with sum types is extremely powerful and makes writing compiler-oriented code feel very natural. I haven't used it since graduation, but if I had to write a compiler, I'd probably opt for an SML-like language, such as F#.
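For anyone who hasn't seen the style: here's a minimal sketch of a tree as a sum type, in Rust rather than SML (Rust enums play the same role as SML datatypes; the names are illustrative only):

```rust
// A tiny expression AST as a sum type; each constructor is one node kind.
#[derive(Debug)]
enum Expr {
    Num(i64),
    Add(Box<Expr>, Box<Expr>),
    Mul(Box<Expr>, Box<Expr>),
}

// Pattern matching makes the evaluator read almost like the grammar:
// one arm per constructor, and the compiler warns if an arm is missing.
fn eval(e: &Expr) -> i64 {
    match e {
        Expr::Num(n) => *n,
        Expr::Add(a, b) => eval(a) + eval(b),
        Expr::Mul(a, b) => eval(a) * eval(b),
    }
}
```

Evaluating `Mul(Add(Num(1), Num(2)), Num(4))` this way yields 12, and the exhaustiveness check is what makes adding a new node kind safe: every `match` that forgot the new constructor stops compiling.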
Yeah, I am not a huge fan of the SML compiler code I've looked at (though I haven't looked at all of it). Often there's just one author, who omits types on nested functions (since the compiler treats them as optional), which makes the code nearly impossible for me to read without spending a huge amount of time on it.
Going off on a tangent, I think every language should require function parameters and return types to be explicitly annotated. If that gets rid of the bulk of HM type inference, that's fine by me. The only value of type inference, to me, is at the local-variable level. Function type inference makes code impossible to read.
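For what it's worth, Rust already sits roughly at this point in the design space; a small sketch (the function here is made up for illustration):

```rust
// Parameter and return types are mandatory on functions, so every
// interface is explicit and readable at a glance.
fn mean(xs: &[f64]) -> f64 {
    // Local inference is still allowed inside the body:
    let n = xs.len() as f64;        // n: f64, inferred
    let sum: f64 = xs.iter().sum(); // annotation here selects the Sum impl
    sum / n
}
```

So you keep the convenience of inference where it's harmless (locals) while signatures stay self-documenting.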
> Function type inference makes code impossible to read.
And it makes the type errors very non-local and therefore hard to understand. The reason is that HM type inference infers types not only in the leaves->roots direction, but also roots->leaves: an expression gets assigned a type based on the context it appears in. It works wonderfully when the program type checks, because it allows one to write something like "let x = None;" in Rust instead of "let x: Option<very long type name> = Option::<very long type name>::None;", as long as further uses of x (such as returning it as the value of the enclosing function) clarify its type. However, while a program is under development you change things here and there, and when a binding that wasn't explicitly typed (such as a function definition) appears in at least two places that require conflicting types, the error will almost certainly be reported on a line that you, as a human, see as completely disconnected from where the mistake actually is. Maybe it's in the function definition, maybe it's in one of its uses. And all of this can be further obscured by several layers of intermediate functions whose types were inferred successfully.
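The happy path spelled out concretely (a made-up function, just to show the roots->leaves flow):

```rust
// `x` is written as plain `None`; its type, Option<String>, is inferred
// roots->leaves from the function's declared return type. If a later
// edit made two uses of `x` demand different types, the error could be
// reported at either use: the non-locality complained about above.
fn pick(flag: bool) -> Option<String> {
    let x = None;
    if flag { Some("yes".to_string()) } else { x }
}
```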
Standard ML is incredibly usable but also very small (syntactically and semantically), especially compared with modern Algol-family languages.
There are a couple of quality-of-life features missing (most notably record-update syntax), but I really enjoy that the core language has not changed in 25 years.
There is also ongoing discussion and other work on something called "successor ML" (sML). See smlfamily.org for further information. Functional record update is one of the features contemplated.
It's a great functional language for learning pattern matching, type inference, polymorphism, etc. in an academic setting, and a great language to compare and contrast with others (dynamic, OOP, imperative, etc.), but I'm not sure how useful it is beyond that, as I've never seen it used in the business world. When all you knew was C/C++/Java, Standard ML throws you for a loop, in a good way.
At my current job we use Scala extensively. I didn't have much experience with it, but found it pretty easy to pick up, mostly due to its similarity to Standard ML (much more so than e.g. Java, Haskell, or Python, which are also clearly influences).
In fact, my only previous experience with Scala was from hacking on Isabelle/HOL, which is mostly Standard ML, but happens to use Scala in the periphery (scripting, UI, etc.)!
This is the historic origin of the name, but I'm not sure it's still supposed to be an acronym. Modern ML may just be named with two arbitrary letters, as often happens when things named by acronyms grow beyond their original domain.
The "meta language" part refers to ML's original domain of writing tactics for proof assistants, where the "object language" was the language of the programs you were manipulating and the "meta language" was the language in which you manipulated them. This also sort of makes sense when using ML to write a compiler (the "object language" is whatever you're compiling), but of course modern ML goes far beyond language processing, even if it's still pretty good at it.
I use it for my work in program analysis, code generation, and other languages-research projects. I’ve also used it to a lesser degree for modeling and simulation problems.
You would probably be better served by F# or Ocaml for a web-oriented ML stack. SML has less of an ecosystem in that area.
I used to work on a hardware compiler written in SML, which compiled a language called Handel to logic circuits for FPGAs.
Mostly I worked on fancy optimisation passes, and a fast gate-level simulator.
Like Haskell and other languages that emphasise first-class algebraic datatypes (tagged unions) and pattern matching in a concise but clear syntax, it's good for writing things that manipulate tree-like, symbol-heavy data structures such as program syntax trees and circuit graphs, and for quickly trying out new ideas with them.
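As a flavour of what such a pass looks like, here is a toy Rust sketch (not the actual compiler's IR, which isn't shown here): constant folding over a gate tree, one rewrite rule per match arm.

```rust
// A toy gate-level IR as a tagged union.
#[derive(Debug, PartialEq)]
enum Gate {
    Const(bool),
    Not(Box<Gate>),
    And(Box<Gate>, Box<Gate>),
}

// Fold constants bottom-up. Each algebraic identity
// (!const, x & false = false, x & true = x) is one arm.
fn fold(g: Gate) -> Gate {
    match g {
        Gate::Not(a) => match fold(*a) {
            Gate::Const(v) => Gate::Const(!v),
            x => Gate::Not(Box::new(x)),
        },
        Gate::And(a, b) => match (fold(*a), fold(*b)) {
            (Gate::Const(false), _) | (_, Gate::Const(false)) => Gate::Const(false),
            (Gate::Const(true), x) | (x, Gate::Const(true)) => x,
            (x, y) => Gate::And(Box::new(x), Box::new(y)),
        },
        leaf => leaf,
    }
}
```

The nice property is that each optimisation is a local, declarative rewrite, so trying a new idea usually means adding or reordering a couple of arms rather than restructuring a visitor hierarchy.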
It is good at parsing and other compiler-related tasks; basically, anything where you are working with tree structures can be a good fit. It is also a good way to learn about functional programming.