Strong arrows: a new approach to gradual typing (elixir-lang.org)
200 points by ahamez on Sept 21, 2023 | 85 comments



When we talk about arrows in terms of functional programming, we really mean Hughes' Arrows (or Freyd categories), a generalization of monads. The term "arrow" represents a level of abstraction over computations, essentially modeling them as objects with inputs and outputs. Hughes' Arrows extend beyond the traditional notion of functors and monads, handling side effects and common computational patterns in a more structured manner.

Arrows are useful because they provide a computation model that can implicitly make use of state without ever exposing that state to the programmer. The programmer can use arrowized computations and combine them to create sophisticated systems.

A strong arrow is an arrow A with an extra structure known as strength. The strength for A is a natural transformation, generally denoted t, indexed by pairs of objects (X, Y), whose components are arrows in the category. Concretely, the strength has this form: t_{X,Y} : X × A(Y) -> A(X × Y), called tensorial strength. The strength obeys certain coherence conditions. A deeper explanation involves the monoidal structure of a category and how the strength governs combining values in the arrow context.
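For the special case where the arrow is an ordinary function, the combinators can be sketched in a few lines of Python (hypothetical names, a sketch only; `first` plays the role of the strength, threading an untouched value alongside the computation):

```python
class FnArrow:
    """Minimal Hughes-style arrow wrapping a plain function."""

    def __init__(self, f):
        self.f = f  # the wrapped computation: input -> output

    def __rshift__(self, other):
        # Sequential composition: (self >> other).f(x) == other.f(self.f(x))
        return FnArrow(lambda x: other.f(self.f(x)))

    def first(self):
        # The strength for function arrows: lift Y -> Z to
        # (Y, X) -> (Z, X), leaving the paired value untouched.
        return FnArrow(lambda pair: (self.f(pair[0]), pair[1]))

inc = FnArrow(lambda n: n + 1)
dbl = FnArrow(lambda n: n * 2)

print((inc >> dbl).f(3))        # (3 + 1) * 2 = 8
print(inc.first().f((3, "s")))  # (4, 's')
```

For plain functions this is unspectacular; the point of the abstraction is that the same interface also fits computations carrying hidden state or effects.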


My understanding here for Elixir is that you have dynamic() as a universe U and an error set E. All functions f are from U to E + U. f is an arrow A -> B iff f[A] subset B + E. f is a strong arrow iff it is an arrow and f(U\A) subset E. I'm not sure how this connects to strength in the category theoretic sense.
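That reading can be sketched in Python (a hypothetical encoding: `Error` stands for the error set E, and any Python value stands for the universe U of dynamic()):

```python
class Error:
    """Stands in for the error set E."""
    pass

def to_upper(x):
    # Intended as an arrow str -> str: on strings (the set A) it lands in
    # str (the set B), and on everything outside A it returns an Error --
    # which is exactly the "strong" condition f(U \ A) subset E above.
    if isinstance(x, str):
        return x.upper()
    return Error()

print(to_upper("abc"))                  # 'ABC'
print(isinstance(to_upper(42), Error))  # True
```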


This was my first thought too, but the connection was not made clear in the article -- nor whether it is just a naming coincidence.


And here I thought it was just about functions (arrow type) ;p


Slightly OT (though I do really like Elixir: I used to work with José and he's a goddamned visionary): TypeScript just hits it out of the park here. With arguably harder erasure constraints than either Java or C#, the covariance and contravariance bounds and ergonomics are so much more intuitive than either. The MSR brain trust is clearly keeping their eye on the ball.

Elixir looks to be one of the few languages that is never going to sit still on absorbing the best ideas. At the risk of repetition, no one would expect less from the person whose passion created it.


> TypeScript just hits it out of the park here

but the type system isn't sound :( what's the point of a type system if it's not sound?


"gradual typing" is exactly how I code Julia.

Disclaimer: this is not how the strengths of Julia are normally described by most people, it's just how I think about it.

You might have heard that Julia solves the two language problem (easy as python, fast as C++). But exactly how does it do that? In python you don't have to care about types, but even if you were willing to care about types you wouldn't get any performance benefit. In C++ you have to care about types, you don't have the option not to, even when you don't care about performance.

In Julia you don't have to care about types. You can code it exactly as Python. But if you are willing to, you can care about types, and you'll get a huge performance benefit. So, the way I code Julia is on a first pass I code it like Python. I don't care about types, I just care that it's correct. Then, after it's correct, and if performance is critical, I start thinking about types. So Julia very much lends itself to "make it work, make it right, make it fast", by allowing you to gradually introduce types into the parts of your code that matter.


The way you use the word "correct" is interesting: in PL theory circles, it usually means "bug free", but it appears that you use it to mean "produces the results I'm looking for".

Indeed, one may write a program which is bug free, yet does not implement the algorithm that produces the expected result (for instance, a program sorting data in ascending order when a descending order is needed). In strongly typed languages, type systems are used to ensure that programs are bug free, following the adage "if it compiles, it works". The issue of having a proper implementation is not a concern of researchers in their papers; it's purely an engineering problem, so there is no interest for them in using the word "correct" in that sense.

Also, with type inference, one often does not have to think too much about types, and can instead gradually introduce type annotations to lift disagreements between the compiler and its user.

Gradual typing is yet another tool to achieve a similar result.


> In strongly typed languages, type systems are used to ensure that programs are bug free, following the adage "if it compiles, it works".

(chokes on his latte) - could you elaborate on this, cos, much as I love strong typing, it surely don't mean 'bug free' in the end, not in any useful sense.


You make a valid point: I used the word "bug" here without trying first to define it, and that led to confusion in my mind, and thus in my comment.

In that comment, I used the example of programs performing an ascending sort or a descending one. While both programs would be valid, one of them, at least, would not correspond to the intent of the programmer. From an engineering perspective, that would be considered a bug.

I guess an informal yet hopefully apt definition of the word bug could be "an error in the source code of a program leading to an incorrect behaviour of that program at runtime". That incorrect behaviour can take many forms: the most obvious one is the program breaking in the middle of a computation, without providing any result (a semantic bug). A second one is when the compiler will not accept the program as valid (a syntactic bug). A third one is the program producing the wrong result, as in my example (also a semantic bug).

In my mind, I only considered the first 2 kinds of errors as bugs, and qualified the last as something else, perhaps for the reason that only the programmer may really know the intent behind a program's implementation.

One of the goals when designing a programming language is to help reduce the occurrence of bugs. The main strategy is to turn bugs of the first kind (semantic) into bugs of the second kind (syntactic), or at least that is my understanding.

Concretely, with a powerful enough type system, one may express expected properties of a program using the provided syntax, and let the compiler validate these claims using only formal rules.
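As a toy illustration (Python here, with hypothetical names; a static checker such as mypy would do the validating), the ascending/descending example from earlier can encode the sort direction as a dedicated type, so an invalid direction becomes a type error rather than a runtime surprise:

```python
from enum import Enum

class Order(Enum):
    ASCENDING = "asc"
    DESCENDING = "desc"

def sort_numbers(xs: list[int], order: Order) -> list[int]:
    # The annotation restricts `order` to the two valid directions;
    # a static checker rejects e.g. sort_numbers([1], "up") before runtime.
    return sorted(xs, reverse=(order is Order.DESCENDING))

print(sort_numbers([3, 1, 2], Order.ASCENDING))   # [1, 2, 3]
print(sort_numbers([3, 1, 2], Order.DESCENDING))  # [3, 2, 1]
```

Of course no type checker can tell you *which* direction you meant; it can only make the set of expressible directions exactly the set of valid ones.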


This is slightly above my pay grade, so take it with a bit of salt.

Type systems in general reject some programs as invalid, but this claim is usually made with powerful type systems in mind; think Haskell, Idris, Rust. I only played with these casually, but it is true that you can write some code and then let yourself be guided by the compiler in fixing all the errors. It's often the case that the resulting program just works.

Rust's borrow checker is a part of its type system, based on some flavor of linear types if I remember correctly. Think about it: the type system can protect you from double frees. This is powerful.

In Haskell, for starters, you can't do any side effects outside of the IO monad. You can express precisely what a given function is allowed to do.

In dependently-typed languages you can express that, for example, a function concatenating lists of length n and m returns a list of length n + m.
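Sketched in Lean-style syntax (a sketch only, not checked against a particular Lean version; the implementation body is elided):

```lean
-- Length-indexed lists: the length is part of the type.
inductive Vect (α : Type) : Nat → Type where
  | nil  : Vect α 0
  | cons : α → Vect α n → Vect α (n + 1)

-- The signature alone states the property: appending a Vect of
-- length n to one of length m yields a Vect of length n + m.
def append : Vect α n → Vect α m → Vect α (n + m) := sorry
```

Here the compiler will reject any `append` whose result length does not provably equal n + m.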

And aside from those above, there are a lot of other, less magical things: no nulls, exhaustiveness checks for ADTs, some focus on immutability, lack of exceptions, and so on.

Now, I doubt we are all going to be writing Idris in five years, and even there bugs obviously do happen. But the idea that compiles == works is not entirely out of this world.


Thank you, that's exactly what I had in mind when speaking about strongly typed languages. I don't think I could have presented it in a better way.


Julia is slower than Python for most applications if you include the compile time (which is every time you run your program, because it's "just in time").

edit: I want to be clear - I like Julia, and I have long wanted a scripting language that was grammatically simple like Python but had support for strong typing, etc. But the TTFX problem, for me, muddies the waters on the question of "which is faster, Julia or Python?"


Maybe if your program is just println("hello world!")

Julia also has done a lot lately[1] to cache the results of JIT compilation from packages and re-use it later. So if you're doing a lot of println("hello world!") or whatever, you can make that faster by bundling it in a package and adding a precompile workflow. This will also be improving more with upcoming releases.

[1] https://julialang.org/blog/2023/04/julia-1.9-highlights/#cac...


This is not true: https://benchmarksgame-team.pages.debian.net/benchmarksgame/... shows the middle-of-the-pack Julia implementations being over 10x faster than the fastest Python implementations, including startup and compile time.


On contrived benchmarks, sure. What if I want to, say, parse a json file and print something from it?

I understand that Julia is scientific computing oriented, and is probably faster than Python for those applications, but the fact is that Python is no slouch when it comes to scientific computing. And it can do a lot more, including simple but powerful scripts, which is what I mean when I say "most applications."


> contrived benchmarks

Why is doing actual science more of a contrived benchmark than parsing and printing a json? I think this says more about what you personally do than anything else.


Most code is that. N-body problems and computing the julia set are cool and beautiful and important. But most code is plumbing scripts that get run for .5s 10,000,000 times a day.


Sure, if you're solving easy problems python will be fine.


And if you're solving hard problems, it's fine too. That's my only point.

You'll see on that same benchmark page that C is 2-3x faster than Julia. If you want performance, use C. Julia is this weird middle ground where it has the simplified syntax of Python, is a little faster than Python, but still slower than C. Anything that needs to be done in real time will be optimized into a "real" language like C, C++ or Rust.


It's worth noting that the benchmarks game includes startup time. If you look at the execution time (which is what matters once you start doing more work), the speeds are equal. For example, https://arxiv.org/pdf/2207.12762.pdf shows Julia beating hand-coded BLAS kernels on the 2nd fastest supercomputer in the world.


I agree that if you keep increasing n on any of these benchmarks, Julia and C should start to approach each other, but the JIT overhead is not meaningless. I think there’s a reason benchmarkgame includes it.

It sounds, though, like they’ve started to seriously address this in versions more recent than what I’ve played with. I suppose I’ll check it out again.


I agree JIT overhead is not meaningless, but it's pretty odd that only some programming languages in the benchmark measure compilation time while others do not. If we really think it's not meaningless, then other languages (C, Fortran, etc.) should include that in the timing as well. Even better would be to have timings which include compilation and which do not. Then we would have a nice way of making a multi-dimensional comparison about the latency and runtime.

Currently, Julia's benchmarks add its compilation time while the building of the C binaries is not measured in theirs, so it's not a direct 1-1 comparison. And we don't have the numbers in there to really know how much of an effect it has either. More clarity would just be better for everyone.


That's because, until recently, you compiled every time you ran with Julia. It's not the case with C.


It has NEVER been the case that you have to compile a function every time you run it.


Leave him alone. He already made up his mind, you're just confusing him with facts.


LOL no, it's not a little faster than python :D

I just showed you that it's possible to generate native code ahead of time but you ignored that. Now you've moved on to the "next" objection. Anyway, good luck with your life.


with the first example on https://www.json.org/example.html and the following Julia code

    using JSON3

    data = JSON3.read("test.json")

    println(data[:glossary][:title])

running `time julia +release --project=. --startup-file=no test.jl` gives a total time spent of 0.39 seconds (running on a dev version of Julia brings it down to 0.30). The translation of this into Python is faster (0.02 seconds), but this means that as long as your script has at least a second or so of work to do, Julia will be faster.

Specifically, the timing breakdown is 0.07 seconds to launch julia, 0.07 seconds to load JSON3, 0.0001 seconds to parse the file, 0.07 seconds of compilation for the indexing (I'm pretty sure this is fixable on the package side, see https://github.com/quinnj/JSON3.jl/pull/271), and 0.0001 seconds to do the indexing.
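For reference, the Python translation mentioned above is roughly the following (a sketch; it parses an inline string instead of test.json so it is self-contained):

```python
import json

# Equivalent of the Julia snippet above; the inline string stands in for
# the json.org example file, which has a "glossary" object with a "title".
raw = '{"glossary": {"title": "example glossary"}}'
data = json.loads(raw)
print(data["glossary"]["title"])  # example glossary
```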


What does "most applications" mean?


Most code that is written is code that is run frequently and with a short runtime. Julia is not good at this.


You compile your code every time you run it? Yikes. Sorry to hear that.


Is there a way to get around that in Julia? I tried to find a way to compile programs directly, and was disappointed with what was on offer.



I cannot contribute much in terms of money or time currently, but I wish there were ways to speed this up.

Sure, sponsoring from big corps comes with its own problems, but maybe that would be an option? There are quite some that use Elixir (or BEAM in general) out there...

Idk what would be the best approach.

But I am very convinced that Elixir and the wider ecosystem (not only Phoenix LiveView) is (or could be) the very best "default" web app stack we have right now in general. It solves _so many_ problems of other comparable tech stacks right out of the box, one way or the other, and it is the only one where I've noticed a sentiment of pure joy from seasoned devs that lasts well after the novelty wears off.

At this point it looks "only" like a numbers problem to become mainstream. No idea how to solve it, just felt I needed to share my thoughts as a data point here.

Like the type system here: I'd _kill_ to have this exact implementation today rather than _maybe in a few years_. And yes, I can draw comparisons to TypeScript, Rust and Haskell, which I used for projects in the past. It's like combining the very best ideas (and adding some more) into one awesome thing.


I understand many would love to have this yesterday but I strongly believe we should not speed it up. :)

This is potentially the largest change the language will ever go through and we are being very intentional and deliberate on every step and decision we make. Once we start collecting feedback from the community, it may speed up or slow down the process, but I want to make sure everyone gets plenty of time to experience and internalize the changes.

When it comes to funding, the on-going work on Elixir type system is well funded by companies like Fresha, Starfish*, Supabase, and Dashbit. However, we also want to venture into new research areas, such as existential set-theoretic types, which would allow us to type behaviours (such as GenServer) similar to how one would type first-class modules in OCaml (or Typeclasses in Haskell). If someone is interested in sponsoring research work, hit me up (github.com/josevalim), and I can put you in touch with the proper institutions.


I would like to add that for what it's worth, I wouldn't mind slower compilation times if it means we get the type system as described. If it really does end up having a significant impact on performance, would you be open to making compile-time type-checking optional instead of scrapping the whole thing?


If it comes to this, we will certainly need to have a discussion on the topic. So I am definitely open to discussing the idea!


Tbf, while more money would allow stability for the people involved, it would not really speed it up that much.

TypeScript had MS behind it and it still took a decade. At some point the engineering, experiments and learning take time, and you can hardly speed it up with more resources.


Maybe different layers could follow different strategies? I can't even imagine what could happen to the BEAM itself if the same amount of resources were put into it as into our modern JS engines, which have come a loooong way. It's already an amazing piece of tech in itself, but I wonder what an army of experts could do on top of it.


Then we might have NPM for the BEAM--uber-popularity isn't always a good thing. Don't get me wrong, though, I want Elixir to get more popular.


> It solves _so many_ problems of other comparable tech stacks right out of the box

Could you expand top 5(or more) major pain points that it solves?


I don't think that would be a helpful discussion right here. For any one point, someone might come along and point at some external "solution" (package, tool, ...) that does the same thing in some way for stack X, ignoring the fact that plenty of stacks Y still lack that particular thing, and derail the argument anyway.

Elixir (and esp. Phoenix for web dev) is more like the culmination of the best ideas and heavily draws on the last decades of real-world experience. Most stuff isn't (or shouldn't be) surprising, but the point is that everything you might need for the vast majority of projects is right there, nicely packaged up into one unified thingy.

But to give you a few of _my_ top points:

1. Architecture and DX for complex interactions: in many cases there is no need to bring in supplemental infrastructure crutches (Redis, AWS whatever-service, ...) to begin with. OTP, which is like a core library that is always available, already includes the building blocks for things like cache servers, application clustering, and supervision strategies.

2. Reliability and availability: this tech is the original holder of the "nine nines" of uptime record. To get an idea of how, see https://www.youtube.com/watch?v=JvBT4XBdoUE -- this talk is packed and excellent, and also highlights a few reasons why it can achieve my point 1 above.

3. "SPA-like" interactivity with full server-side rendering, without all the pains of exposing/consuming a dedicated HTTP API in between, or the overcomplicated nuances of rendering pipelines (see also React Server Components) needed to achieve SSR/CSR in a way that doesn't suck or isn't noticeably slow. Other stacks have tried to replicate LiveView, but today they don't come close in reality, or they complicate the app's infrastructure with additional crutches. And don't get me started on the integration testing story compared with SPA+backend setups.

4. Agent/message-passing parallelism (not only concurrency, but also distributed!) at your fingertips, which makes modeling current "async problems" kinda trivial. OTOH, building something like a proper async-based web framework in Rust is quite a pain in the ass, with lots of type fiddling needed, instead of building the solution you actually want to deliver with it.

5. Some absolutely awesome takes on typical libraries. Take the ORM options for JavaScript/TypeScript (all with severe tradeoffs, and analysis paralysis on top). Take Rails' ActiveRecord, with lots of magic, which, while great for getting started, quickly becomes bloated (i.e. fat models) in bigger projects. And then Ecto got it "just right" with slim schemas, changesets, and basically pure functions + pipelines. Any "model" can be used in new use cases by adding new changesets/validations explicitly and easily; there is no "general" validation right on the model level that you have to tiptoe around with growing if-cascades.


I would add some more points.

6. Livebook: Jupyter notebooks are amazing in Python; Livebook is the same but for Elixir. Also, you can connect to your running environment and interact with it.

7. The Nx ecosystem: ML is not only a Python thing now; Elixir is completely usable for ML.

8. Ash framework: this is not a web framework but a framework for your domain. It's amazing.


Absolutely hard agree on Livebook, but I think that's not an immediate thing every webdev needs.

But speaking from recent practice, it is laughably simple and kinda mindblowing for non-experts in this stuff. Like, I click a few smart cells to interact with external data and get a connection/data, write a few straightforward logic modules to do something, and a super simple form for users of this thing (like two inputs, a submit button, and an event handler), and I can eliminate a business workflow or report. I can call a dev into it ad hoc to help with a code snippet, like a liveshare coding session. And finally I deploy it to a bookmarkable URL with a password.

Maybe an hour worth of work, _including_ talking with people while fleshing this out in quick iterations as-we-speak.

I mean, people kinda get how cool Phoenix is for one reason or another, but Livebook is really killing it for me. It's hard to talk with business people about tech internals no matter how awesome, but it's a different game with lightweight livebooks. If you also happen to have Phoenix as your actual product/app, linking the livebook is indeed nice for directly calling functions instead of rewriting those snippets in SQL... but the endgame here is when the livebook starts to become unwieldy due to increasing feature requests: it can be integrated into a Phoenix app rather easily, since a dev starts out with working code to begin with (and can play with it interactively).


I've replaced Postman (and Postman-likes) with a Livebook notebook. It's absolutely great. No need to mess with weird plugins or mentally parse responses; you can just use cells with real Elixir to do that.

It's really nice


> it can be integrated into a phoenix app rather easily

I can't figure out how to do this.


By this I mean taking the code logic snippets and slapping a dedicated new LiveView UI on them within your Phoenix app - which isn't too hard, since the initial UI in the livebook is pretty bare-bones. Yes, it's technically rebuilding the whole thing, but probably really quick to do.


I’m curious. What exactly is Elixir good for? What do you find in it more appealing than other languages?


I have listed a few points here in the other comment: https://news.ycombinator.com/item?id=37596258

as a more general/subjective/emotional note:

The very pragmatic take on FP makes code in most cases so much easier to reason about. Immutability, no magic coming from some dark corner of the codebase unexpectedly, thinking in steps of data transformation from the get-go, the pipe operator to make this visually pleasant, handling edge cases with pattern matches explicitly, right at the function-head level... and for stateful stuff, there is OTP right at your fingertips with GenServer/Actor/ETS/... .

And a GenServer is basically just another module that implements certain callbacks and then gets spawned. Modelling some scenarios is a breeze with that. And when I feel like I need better syntax for some exotic problem I am dealing with, I can bend and stretch my language constructs via hygienic macros right away.

And the whole experience is _extremely_ polished in terms of DX: docs, tests, doctests, an interactive shell, an AOT compiler, a JIT runtime, and some very high-quality ecosystem bangers like Phoenix/Livebook/Membrane/Broadway/Nx/Axon/... . It's like the best of all worlds, and definitively the _very best_ choice for orchestrating stuff, which is a boon when you run into a problem you'd rather not use Elixir for. Like, building a native extension in Rust/Zig for some raw CPU task is a breeze.

It's not so much the individual feature (no matter which one) but the overall package.


It's a functional, concurrent-first language. People have talked about the upsides of functional programming to death, either you prefer it or you don't. The concurrency story, though, is a significant advantage exclusive to the BEAM languages. They utilize green threads with message passing queues that we call "processes" (not to be confused with OS-level processes).

A good almost-toy-level project I built in Elixir was a Minecraft server proxy. At the top level, you've got a listener process that waits for an incoming TCP connection, and a Supervisor process, which is responsible for the existing connections. When the listener receives a new TCP connection, it spawns a process under the Supervisor to handle that incoming connection, and goes back to listening. That's about 10 lines of code: 4 in the initiation (function declaration, start Supervisor, start Listener, end), and 6 in the Listener (function declaration, get new TCP connection, spawn process as a child of the Supervisor, assign TCP connection, recurse, end).

The child processes are running a little state machine to do some parsing of the TCP stream. They receive TCP packets as a series of messages, and based on those messages they do various things. Unimportant for the purposes of explanation, except to say that the built-in gen_statem state machine behaviour simplifies it down to a series of functions that match a message and produce a new state.

But what if I receive a packet that I don't have a defined response to? Well, I don't need to write any extra code for that. When it fails to pattern match any of my states, the process is going to crash. Then its children will crash (including the TCP connection, which will be closed by this operation), then it'll tell its Supervisor "Alas, I am slain". By default, the Supervisor would replace it with a clone, passed the same initial arguments. But that's not the behavior we want here: we want it to just accept that the process has died, and maybe pass an alert to some process for logging or monitoring purposes. That's a built-in option, so it was just an argument in our declaration of the Supervisor in the first place.

The key thing to notice there is that anything that causes that child to crash will resolve the same, safe, way, from unpredicted packets, to the processor core it's running in being struck by a meteor. The other, perfectly fine child processes will keep chugging along, doing their own thing. Likewise, the Supervisor and Listener are protected from each other's failings. If the Listener fails, the top-level application supervisor will replace it, and likewise for the Supervisor for all the connection handlers.

This synergy of fault tolerance and concurrency is the key idea of Erlang, and by extension Elixir. It lets you code the happy path, and accept that there are failure modes you can't or don't want to bother anticipating.
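The supervision idea sketched above can be caricatured in Python (a deliberately crude, hypothetical analogy; real OTP supervisors run workers as isolated processes and offer far richer restart strategies):

```python
def supervise(target, restart="permanent", max_restarts=3):
    # Crude analogy to an OTP supervisor: run `target`, and on a crash
    # either restart it ("permanent") or just record the exit
    # ("temporary"), as chosen when the supervisor was declared.
    crashes = []
    restarts = 0
    while True:
        try:
            target()
            return crashes
        except Exception as exc:
            crashes.append(exc)
            if restart != "permanent" or restarts >= max_restarts:
                return crashes  # accept the death; maybe alert a logger
            restarts += 1

attempts = 0
def flaky_worker():
    # Fails twice, then succeeds -- analogous to a child process
    # crashing on an unmatched packet and being replaced by a clone.
    global attempts
    attempts += 1
    if attempts < 3:
        raise RuntimeError("unmatched packet")

crashes = supervise(flaky_worker)
print(len(crashes))  # 2 crashes before the third attempt succeeds
```

The crucial difference on the BEAM is that the crash can be *anything*, not just a Python-level exception, and sibling workers are untouched.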


Thanks for the detailed response. There’s no other mainstream language which has this fault tolerance built-in, right?


Definitely nothing in the mainstream. The closest you get is a couple of Actor-model libraries for Java and C#, but since those languages weren't built from the ground up around it, you end up with a lot of jagged edges that simply don't exist in Erlang or Elixir. Outside of the functional paradigm, there are a lot of things you can express that simply cannot be translated into a distributed environment.

The other "in the same ballpark" thing that is much more mainstream is microservice architecture. They aim for the same goal, of achieving concurrency-modeled parallelism and fault tolerance by translating a system into a distributed one, but since microservices tend to be hacked together with a variety of languages with no intrinsic support for the paradigm, you end up with an "ad hoc informally-specified bug-ridden slow implementation of half of Erlang."

There's a few projects even further outside the mainstream than Erlang and Elixir that are taking a swing at it, but they're all extremely obscure, and IMO haven't yet demonstrated any novelty worth sacrificing the vast libraries available on the BEAM.


I read that Brex moved away from Elixir to Kotlin. If Elixir was really that great then why would a company put such a big effort to ditch it? That killed off my interest in learning it, but maybe their reasons were invalid? Genuinely curious.


You will find examples from companies moving away from Elixir, Python, JavaScript, Scala, Java, etc. I don't think there is any programming language that is "unditchable". So I'd suggest to analyze the reasons that made a company move away from X, check if they are still relevant, if they'd apply to you, and, if yes, if they are a deal breaker or something you can work around.

For the Brex case, if I remember correctly, lack of static typing and early-development of gRPC tools were the reasons back then. How much this impacts your needs is up to you.


'I don't think there is any programming language that is "unditchable".'

There cannot exist one perfect programming language for all uses. (Sometimes people don't like that statement, but I think if you seriously sit down and work the math it's pretty obvious that, as a simple for instance, Rust is not going to replace all shell scripting and shell scripting is not going to replace all Rust, and that's just an example; the principle holds in numerous dimensions in the programming language space. [1])

There are always people making bad decisions, some of those bad decisions are about which programming language to adopt for a problem.

So, yes, you'll always be able to find companies dropping some language, but it isn't necessarily the language's "fault".

I don't say this on every such article in the comments because it's kinda kicking someone while they're down, but I would assess that the clear majority of the ever-present torrent of "We switched from X to Y and look how much better Y is!" are actually long and detailed explanations of how the company in question chose the wrong technology for their problem, not how bad X is. The two major exceptions are "our tech in X predates Y existing as a valid choice, and now that it does, it's a better choice" (can't blame them for existing before the right tech, I've made that move myself in my own career) and "We did our research and X really should have worked but we got into it only to discover that it really didn't for some reason we could not have reasonably anticipated". That latter is pretty rare, IMHO, but I've seen it a couple of times. (Generally it is when a closed-source stack has a fundamental problem and you can't fix it, because it's closed source.)

(I suppose there's also "X was a good choice initially but we outgrew it", but in my very opinionated opinion even these are often... dubious. It happens, yeah, but in my personal opinion it still means the language chosen was not able to scale with the solution. Only people truly surprised by their growth, which does happen, should reach for this.)

[1]: And if that still bothers you, think small shell rather than large. I'm not going to write Rust for the equivalent of "awk '{ print $3 }' myfile | sort | gzip > sorted_names.gz". However broken shell legitimately is as a programming language, you simply can not argue with that kind of bang-for-the-buck.


Too bad, you're missing out. Did you look into companies that have not ditched it?

People don't tend to write articles titled, "We've been using the same tech for 10 years and have had no major issues with it so we are still using it." It's always the "we ditched x for y" ones that get traction and the actions of ONE company can seemingly have a far-reaching impression for people who otherwise know nothing about the technology in question.


Do you reject learning any language that has been migrated away from by any company? Why not make up your own mind about whether it is nice instead of depending on rumours about other companies?

In any case you could have easily googled "brex elixir" and have found their blog explaining their reasoning as the first result: https://medium.com/brexeng/building-backend-services-with-ko...


"Development speed: The future will be great and we want to get there fast. Our choice needs to help us move with urgency and focus. A type-system was a non-negotiable choice here."

While José Valim did a good job in his talk of dispelling the idea that a type system delivers all these promises that may or may not be true, the idea will never die. That could mean it is the right idea, but it could also mean that now is the time to look into offering it in a way that attracts people who have that perception, while still providing a type system of real value.


I hear a new CTO came in and got nervous about hiring. They are still hiring for Elixir positions, though?


i heard basically the same, and also that the new CTO just really likes Kotlin, knew a bunch of people who know Kotlin, and wanted to bring them over.

again, take the above with a grain of salt.


the arc of progress in software engineering bends towards static typing. in the future, our type systems will be so powerful they'll be able to do more inference, so we'll have to be less explicit. then people who have come to hate static types because of these explicit type declarations will know they hated the costume all along, not the person in it.


Yeah, Rich Feldman has a good talk on this, entitled Why Static Typing Came Back[1]. It makes a good argument that static typing can recover most of the advantages traditionally attributed to dynamic typing: rapid iteration you can get from incremental compilers, concision you can get from powerful type inference, etc. Conversely, the advantages of static typing (largely boiling down to increased reliability) cannot easily be captured by dynamically typed languages. Gradual typing, made particularly successful by TypeScript, might be an interesting middle ground, or it might just be a reflection of JS's dynamic history.

[1]: https://www.youtube.com/watch?v=Tml94je2edk


it's a good talk.

the static typing (of the future) at the end of the video isn't, say, the static typing of C# or Java.

Static typing should be more like OCaml, Elm, Rust, Haskell (and ideally Roc)


This talk is so good!


> in the future, our type systems will be so powerful they'll be able to do more inference so that we have to be less explicit

And in the past too!

In my Haskell workflow I constantly use the _ (typed hole) keyword. When the compiler hits that expression, it tells me what type I have to put there.

If I were using Idris, it could oftentimes provide an implementation too!

Two caveats about what you said though:

- More power is not always better (depending on how you define power). Structured programming removes the power of goto. Functional programming removes the power of mutation.

- There are limitations to what can be inferred. The more your type system can do, the harder it is for type inference to do its thing.


OCaml had this figured out many years ago. Elixir/Erlang's dynamism can be particularly good compared to other dynamic languages if you lean into it properly. But yes, the world is loving static typing right now, and I'm actually a bit excited myself with their ideas for Elixir's type system.


What exactly do you mean that OCaml has figured out?


It's a statically typed language that doesn't require explicit type declarations (in most cases, at least).


Er, to clarify, I mean it has very strong type inference. If you're making a record or complex type then yes, you still need to define the type, so maybe that is what OP was actually talking about?


This is great! I've been an Elixir developer professionally for many years now, and I love it. (I took my current job in large part for the opportunity to use it some more.) But this scratches the one last itch I have with it. After having a taste of types from Rust and Typescript, I do miss them from time to time.

One thing I really like about a strong type system is it makes VSCode feel like it has super powers, type-checking in line and improving suggestions and all that. Given that this seems to be a somewhat novel type system, what are the implications for editor integration? Will it still do all the cool things that Language Servers support? In other words, is the "set theoretic type system" an implementation detail which doesn't affect the language server API?

The post is great, and I'm still digesting it, but this one part caught my eye:

> In a nutshell, it means we can only compare functions if they have the same bounds. For example, our type system states a -> a when a: integer() or boolean() is not a subtype of a -> a when a: integer().

It's not a subtype, right? If a function safely accepts some function `f` as a parameter, and `f` were able to return more than expected, it wouldn't be safe to drop in. I think the return values have to be equally or more restricted, while the inputs can be more general. This is "covariance" and "contravariance" of function types, I think?
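To illustrate with a runnable Elixir sketch (the `run`, `f1`, and `f2` names are mine, not from the article): a function expected to behave as `integer() -> number()` can safely be replaced by one that accepts more inputs and returns fewer outputs, but not by one that returns more.

```elixir
# `run` expects f to behave as integer() -> number().
run = fn f -> f.(1) + 1.0 end

# Safe substitute: accepts any number (wider input, contravariance)
# and returns only integers (narrower output, covariance).
f1 = fn x -> trunc(x) end
run.(f1)  # => 2.0

# Unsafe substitute: returns more than the expected number() (a string),
# so run.(f2) would crash trying to add 1.0 to a binary.
f2 = fn x -> Integer.to_string(x) end
```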

But regardless, this is one of my main hangups with dialyzer: its `underspecs` and `overspecs` design gets it wrong. I mentioned it a while ago in this comment[0] with more examples. But, if you typed it like this:

     @type direction :: :east | :north | :west | :south
     @spec common_winds() :: direction()
     def common_winds(), do: :east
dialyzer will complain because it infers `common_winds/0` can only return one of the values that it's spec'ed for. (If you have `overspecs` or `underspecs` - I forget which - enabled, which you need, if you want to restrict input types.)

So I guess my question with this approach, is whether you can explicitly spec "larger" sets than is inferred? And if so, whether the given specs overrule the inferred ones elsewhere.

[0] https://news.ycombinator.com/item?id=31568098


> In other words, is the "set theoretic type system" an implementation detail which doesn't affect the language server API?

This should be the case. Type violations would automatically appear on your IDEs when you update Elixir. However, if you need additional metadata/features, then we need to expose them from Elixir and Language Servers need to consume it. But that's implementation work and not "type system" work. :)

> But regardless, this is one of my main hangups with dialyzer: its `underspecs` and `overspecs` design gets it wrong.

I believe recent Erlang/OTP versions changed this to give you more fine grained control for the precise reasons you mentioned.

> So I guess my question with this approach, is whether you can explicitly spec "larger" sets than is inferred? And if so, whether the given specs overrule the inferred ones elsewhere.

I am 99% sure the theory allows both options, so we will pick one based on patterns and community needs.


> If the function returns none() (i.e. it does not type check) or a type which is a subset of its codomain (i.e. its output), then it is a strong arrow.

Unless I didn't understand correctly, surely 'none()' is also a subset of the codomain? Or is none() a type containing all errors?

Anyway the concept of strong arrows is interesting. But does it also mean you can 'weaken' an arrow if you implement a more general version of a function? That would make it a bit odd because you can't know up front whether an arrow is strong or not. It makes code very non-local.


> Unless I didn't understand correctly, surely 'none()' is also a subset of the codomain?

Correct. Both are included for clarity.

> But does it also mean you can 'weaken' an arrow if you implement a more general version of a function?

If you have a function that is a strong arrow only because of the functions it calls (i.e. the current function does not explicitly guard its arguments), then that would be the case.

However, we don't know yet how big a deal this is. Strong arrows are all about making static typing more useful to dynamic code. Dynamic code should not rely on strong arrows for correctness; they are a nice-to-have. If you want guarantees, then you need static typing.

It may be that, in practice, it is enough for Elixir standard library to be made of strong arrows and that will propagate enough type information to most dynamic programs. Otherwise, if more is needed, we will definitely need to add visibility and reflection APIs around strong arrows.
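Concretely, a sketch of the idea (the `Sketch` module and its function names are hypothetical, not from the article): a guard is what makes a function a strong arrow, because outside its declared domain it falls into the error set instead of returning a value.

```elixir
defmodule Sketch do
  # Strong arrow for integer() -> integer(): calling it with anything
  # outside integer() raises FunctionClauseError, i.e. lands in the
  # error set, so static deductions still hold for dynamic callers.
  def negate(x) when is_integer(x), do: -x

  # Not a strong arrow for that signature: it also succeeds on floats,
  # returning a value outside the declared codomain.
  def negate_loose(x), do: -x
end

Sketch.negate(3)          # => -3
Sketch.negate_loose(3.0)  # => -3.0, no error raised
```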


> Strong arrows is all about making static typing more useful to dynamic code.

Where "more useful" == safer and faster [0] when you make use of static typing.

[0] If the compiler and runtime take advantage of the optimization opportunities.


Please tell me that I'm not the only one who thought that this was a story about learning how to touch-type better.

I guess we peaked at Mavis Beacon then?


This blog post makes this look more like a type system for theoretical type theory people to play with than something designed to actually be practical and useful.


I guess I'd be glad to have my type system designed by people well grounded in theory.


Not if it results in the type system being optimized for type theorists to play around with and not to assist developers in making real projects.


This is a surprising assessment given most of the blog post is explaining how to build the type system in a way that can give feedback on all _existing real projects_ already written in Elixir. :)

It is the opposite of optimizing for type theorists; rather, it explores how to bring value to the existing millions of lines of code.

Edit: I am the author.


It's about the way it is framed, and the fact that this is the main blog of the programming language, so the target demographic isn't people interested in how awesome a type theory based on set theory is.

My complaint was that the information was presented in a way that I don't think properly motivates why this is being done and why it will be a good thing. For example, the text about gradual typing should have been near the start to motivate the work, but instead the blog post starts by trying to teach people the basics of using types + set theory.


It is impossible to ignore the set theoretic type aspects, because the whole foundation of strong arrows is based on being able to negate a type. If I don't explain the set theory bits, then the rest of the article falls apart.

I could perhaps rearrange some of those sections, but that doesn't change my main argument: the type system is not being designed for type theory people. At best, you could say the _article_ focuses too much on theory rather than on the practical bits.


This is how type systems work: you ground them in a strong theoretical foundation, then build on top of that for actual consumption.


I'm not saying that having a foundation is bad, but you need the right foundation. If a foundation results in compile times ballooning, people writing 100-line toy programs won't care, but people writing millions of lines will.


Do you have an example of a practical and useful type system? One that's not "for type theory people" ?


Java



