Why programming languages matter [video] (youtube.com)
74 points by hazelnut-tree on Oct 27, 2023 | 124 comments



Programming languages do matter, but not as much as many people think. F.ex. the HN crowd has a soft spot for LISP, and most of them don't even have proper parallel execution (and no, I am not talking about OS threads; direct access to those should honestly be removed at some point). And sure, Racket and a few others are "working on having an actor model". Wake me up when they achieve it. I am betting on somewhere in the 2030s, best case scenario.

A combination of a good and restrictive language compiler (Rust, OCaml, Haskell) and an amazing runtime (Erlang) is the sweet spot that everyone should be aiming at.

If anything, I am very tired of seeing yet another LISP dialect -- or any other language really, come to think of it now -- being announced. Many of us programmers love to play, and going off to check out these languages is IMO taking away precious mind-share.

If anything, in my eyes it's exactly because programming languages matter that we should have fewer of them. We should start folding some languages into others. (Or abandoning them.)


> If anything, in my eyes it's exactly because programming languages matter that we should have fewer of them. We should start folding some languages into others. (Or abandoning them.)

As a counterpoint, the programming industry is vast, so the opportunity cost of using tools that are not as good as they could be is also very high. If we don’t continue to explore new possibilities, how will we make those tools better? How can we learn what is worth keeping, what to combine, what to abandon?

I believe there is huge potential in finding better programming languages and better programming models for them to describe. We are still at the stage where we are worrying about whether our programs are even behaving correctly. Sometimes we get as far as worrying about performance. There is a lot of talk about productivity but it is relatively rare that we consider language design as a way of expressing our ideas more efficiently or making it easier for someone else to understand them later.

If we look at languages like Haskell or Erlang or Rust, these have certainly been used professionally, but in the long term their most important contribution might be the ideas they introduced to wider audiences rather than anything written in those languages themselves.


As a counter-counterpoint: I hear your sentiment a lot. I have worked in the profession for 22 years now -- and I am a fairly average programmer, so I don't claim any credentials or anything.

...And I've never seen your sentiment work. Nobody is really learning anything, people just want to tinker with stuff and never read history or even best practices. All I see are people going in circles forever.

Maybe there's a small percentage of much cleverer programmers out there. I wouldn't know, because I have never (so far) attempted anything beyond web apps (though I regret that a lot and might yet change it). And maybe those people are truly evolving the art. I mean, we have Golang, Rust, WASM, and others, so I know those people exist.

But somehow that almost never penetrates the broader programming community. Maybe Rust is the only true example because people started wanting to have sum types in their day-job languages -- which I view as a good thing. Such sentiments have the potential to gain enough critical mass to have language designers reconsider their initial choices (f.ex. Elixir is working on an optional type checker, one that's stricter and more descriptive and correct than the one coming with Erlang).

So I don't know. I wish you were correct, but after 22 years on the sidelines the only true improvements I've seen are Elixir / Golang (in terms of transparent parallelism, where Elixir wins over Golang, but Golang is still much better than almost everything else out there as well) and Rust (for the aforementioned compile-time guarantees and others like memory safety).

Outside of that though? Nope. "Hey let's have one more JS framework, we didn't have a new one this month" is something I've seen a lot of, not to mention the eternally growing list of LISP interpreters, because apparently that's the peak of our collective intelligence and ambition. :(

I just get sad. HN is a place where a supposed intellectual elite gathers but they don't seem to be interested in anything beyond their pet language that will never have even 1% of the goodness that a much-derided language like Golang has. And I view that as a waste of energy and time, and as a very sad thing in general.


It’s easy to become disheartened, particularly in the world of web development. It looks like we’ve been around the industry for a similar amount of time, and I agree that the amount of wheel-reinventing in web development is shockingly bad for a part of the industry with so many resources poured into it by now. But have faith, my friend! Even in frontend web work, good ideas do break through from time to time. :-)

Not so long ago, there were plenty of FE people who would argue to the end of time that JavaScript was fine and building ever larger frontend applications wouldn’t benefit from static types that just take more work to write. Today, it seems TypeScript has almost entirely displaced JavaScript for that kind of work.

Not so long ago, we were trying to build those web frontends by manually tracking state implicit in HTML form elements and maybe doing a bit of automatic binding. Then React came along, the frontend world finally realised that immediate mode is a thing, and today building web UIs in a more declarative style from some kind of components is common to almost all of the popular libraries and frameworks.

Each of those changes must have saved millions of hours of developer effort already compared to how we did things before, as well as avoiding countless bugs. Now imagine what we could do if, just as a thought experiment, we could all use a language for frontend web work that didn’t have the legacy baggage that TS/JS have, did have a much more comprehensive and well-designed standard library, and provided better control over side effects and by extension safer and more readable code for all the interactions with remote APIs and state management and interactive DOM elements that modern web apps do all the time. I don’t know where such a language might come from today, but if we can shift such a large part of the industry to TypeScript and declarative/component-based rendering within a few years, it doesn’t seem an impossible dream.


It’s simply a reflection of the immaturity and terribleness of the state of the art in programming languages.

People eventually got around to things like restricting bytes to be eight bits, using a standard character set, and so on. Eventually we will find some language features that "stick" (e.g. abandoning the stupid distinction between statements and expressions), which will become the baseline, and the space of variation will diminish.


Yeah, your comment is likely the answer. I am just bitter about why all of that is happening so darned slowly. I really hoped that 22 years into my career I would've seen certain problems disappear forever (as in, be solved, formalized, nailed down, and never ever discussed again). But alas, nope.


Would you not consider "null" to be a problem that has been solved? That's the most obvious example to me: a problem that used to be extremely prevalent but is now completely solved.

I'm not saying that every existing language has solved it; I mean that modern non-research languages (e.g. Rust) have shown that the problem need not exist. Something to be excited about, not bitter about.
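
To make the point concrete, here is a minimal Rust sketch (a made-up lookup function, purely for illustration) of how absence becomes part of the type instead of a hidden null:

    // A lookup that may fail returns Option<String>, not a possibly-null String.
    fn find_user(id: u32) -> Option<String> {
        if id == 1 { Some("alice".to_string()) } else { None }
    }

    fn main() {
        // The compiler forces both cases to be handled; there is no way to
        // "forget the null check" and crash at runtime.
        match find_user(2) {
            Some(name) => println!("found {name}"),
            None => println!("no such user"),
        }
    }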


Oh I do like that for sure, but as you said it's a solved problem in very few languages.

But in general, very few such unquestionably good practices are adopted.


> but as you said it's a solved problem in very few languages.

That's not what I said. What I said was "I'm not saying that every existing language has solved it". TypeScript, C#, Kotlin, Dart are all examples of popular languages that have solved this problem.

Another problem that's currently being solved is the inability to do basic data modeling. In particular, the inability to say "it's A or B". That has been solved in some languages for a long time but is now solved in a number of popular languages too.
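
As an illustration of the "it's A or B" point, a hypothetical Rust sketch (the Payment type is invented for the example):

    // A payment is either a card or a bank transfer -- never both,
    // never some unrelated third thing.
    enum Payment {
        Card { last_four: String },
        BankTransfer { iban: String },
    }

    fn describe(p: &Payment) -> String {
        // The match must cover every variant; adding a new variant later
        // turns every unhandled match site into a compile error.
        match p {
            Payment::Card { last_four } => format!("card ending in {last_four}"),
            Payment::BankTransfer { iban } => format!("transfer from {iban}"),
        }
    }

    fn main() {
        let p = Payment::Card { last_four: "4242".to_string() };
        println!("{}", describe(&p));
    }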

There really have been quite a number of advancements in the last 22 years, but sometimes it feels like it's just the bare minimum (ex. being able to do basic data modeling, not having everything be surprise null, etc.). It does seem like it takes a long time for good ideas to propagate into the languages that most people are using. The field is young and it's going to take time for things to mature.

The next big thing is going to be effect/capability systems (tracking which functions do I/O, access the filesystem, etc.). That might take another 22 years to go mainstream, but that doesn't mean progress isn't being made; it just means that we need more new languages that try new things, evaluate new concepts, and establish techniques to be adopted by other languages.
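
There's no mainstream effect system to point to yet, but the flavour of the capability idea can be approximated even today; here is a hand-rolled Rust sketch (the FsCap token is a made-up convention, not something the language enforces the way a real effect/capability system would):

    use std::fs;
    use std::io;

    // A zero-sized "capability" token: only code that has been handed an FsCap
    // may (by convention) touch the filesystem, so a function's signature
    // reveals whether it can do file I/O.
    struct FsCap(());

    fn read_config(_cap: &FsCap, path: &str) -> io::Result<String> {
        fs::read_to_string(path)
    }

    // No FsCap parameter: this function is expected to stay pure.
    fn count_lines(input: &str) -> usize {
        input.lines().count()
    }

    fn main() -> io::Result<()> {
        let cap = FsCap(()); // handed out once, at the program's entry point
        let cfg = read_config(&cap, "Cargo.toml")?;
        println!("{} lines", count_lines(&cfg));
        Ok(())
    }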


Yes, “solved” can mean multiple things.

1. We have a describable or demonstrable solution Y for problem X.

2. We have a language that uses Y to solve X.

3. We have several languages that use Y to solve X.

4. Most languages use Y to solve X, and popular languages that don't are anticipated to do so as soon as they can.

5. Any sensible language uses Y to solve X.

> I really hoped that 22 years in my career I would've seen certain problems disappear forever (as in, be solved, formalized, nailed, and never ever discussed again). But alas, nope.

I interpret this comment as referring to “solved number 5”.

And I also find it disheartening that that level of solution, for well known problems, seems so rare.


> I interpret this comment as referring to “solved number 5”.

Yep, that's what I meant, thank you.

It breaks my heart seeing e.g. Python and JS still being the same terrible monstrosities that they always were, and what's even worse is all the post-hoc rationalization that people do to justify their favorite language. Instead of, you know, just saying "yeah, it's bad that we don't have that; choose another language if it bothers you", it has to be BS like "almost everyone uses Python, have you considered that you might be wrong?". The good old appeal to popularity and tradition, one of the well-documented logical fallacies, committed every day and every hour. But hey, who's counting, right?

I wish that our field -- programming -- had more balls. More courage. "Alright guys, sum types are unequivocally better than having NULL, let's start changing our language", probably followed by an announcement: "Sorry, we couldn't fit sum types into Python, so we're forking the language with a small list of incompatibilities and a tool to rewrite your old Python to the new one; please migrate". Or something along those lines.

But no, of course not. Let's pretend everything is okay. :/


the idea that a byte was 8 bits and not some dynamic length bitstream took at least 20 years, so don't get too excited.

So many ideas from Lisp have slowly become mainstream (lambdas, closures, GC, interpreters with dynamic typing, languages both interpreted and compiled...) that I feel like any convergence is far off.


repl based development, or until people see how cool structural editing really is


Agree on the REPL and exploratory programming!

But can’t agree on the structured editing side.

I used to use structured editing (D-Edit on the Interlisp-D machines) and found it annoying, but perhaps that was due to the mandatory mouse use. The structured editing built into emacs is pretty good, mainly because it's an option you can use at any time rather than a single paradigm.

Also the only way to use comments in a structure-based system is to extend the syntax of the language, and that is awful IMHO.


I use the paredit emacs package (not a builtin). The issues you mention are not present.


I made my own lisp because it was the simplest, easiest language to parse and implement. Lisps are essentially frontends for C data structures. I wanted to get some ideas working and that was the easiest way. Still one of the most fulfilling projects I've made.

> We should start folding some languages inside others. (Or abandon them.)

Who's "we" though? No one decides what we work on unless they pay us for that privilege.


We who want to achieve stuff, not play all our careers. :/


What's the difference between play and achievement? Someone running the so called "toy" in the production server?


You can put a program with 50 lines and 10 bugs in production. :)

To me "production" also means "runs reliably over long periods of time, does not fall over under load, and has no unexpected panics". Well, and also "makes full use of the available computing resources and prevents lag as much as possible".

"Toy" is basically "I am such a huge fan of LISP, I am convinced that the world absolutely needs one more interpreter!".


> Programming languages do matter, but not as much as many people think. F.ex. the HN crowd has a soft spot for LISP, and most of them don't even have proper parallel execution

I don’t understand how the argument about LISP would imply that programming languages matter “not as much as many people think”


I mean that people over-fixate on syntax or language quirks, but when you start writing for (and deploying to) production, it turns out that many other things are much more important. And I wish people were more practical in our field, but they are often not; they are like kids who only ever want to play.

And even though I am eating downvotes, I will keep saying it: I view the fangirling over LISP as a collective drag and a detriment to the entire programming field.

Obviously I am not a world dictator dictating what people should spend their time on. I am saying that if you want to truly move the art forward, well, we have 5000 other problems that are at least 100x more important than "oh look I can code a basic LISP interpreter".

My opinion, obviously, but it's also one that I would not be easily dissuaded from.


> I am saying that if you want to truly move the art forward, well, we have 5000 other problems that are at least 100x more important than "oh look I can code a basic LISP interpreter".

How does one "move the art forward" without understanding the landscape first? That is, an individual can't know what "forward" means without understanding where they currently are. There isn't a finite number of people over a fixed period of time doing all this. Just because something "is known" to humanity doesn't mean it's known by every individual presently. Each generation must rediscover what previous generations knew, either through raw insight or by knowledge transfer. People aren't born with computing knowledge :)

Viewed this way, how many people in the world, presently, have sufficient knowledge to "move the art forward"? How many have the means? I'd put it at a few hundred. Maybe you're one of them?

You explicitly say you can't be easily dissuaded from your opinion, so maybe I'm just spitting into the wind. However, rather than talking down to people doing the hard work of learning where they're at in history, being excited by genuinely exciting things, and taking the necessary steps to "move the art forward", maybe you could assist them by sharing your experience? Or, simply move the art forward yourself :)

PS: Play is exactly what's needed. That's how humans learn and discover :) But maybe we need to play more efficiently? ;)


> How does one "move the art forward" without understanding the landscape first?

Practice data structures, implement complex algorithms described in books (most of which are free, and the rest cost at most $35), code a small program for an embedded controller for once to see how the sausage is made first-hand, participate in different kinds of open source. There are plenty of ways outside of "the world needs one more LISP interpreter".

> Just because something "is known" to humanity doesn't mean it's known by every individual presently.

Sadly. I wish I belonged to another species where this assertion was not true. Sigh.

> Viewed this way, how many people in the world, presently, have sufficient knowledge to "move the art forward"? How many have the means? I'd put it at a few hundred. Maybe you're one of them?

Me? Absolutely not. I am 43, and at this point I am severely burned out. I had so many ideas and ambitions, but working for the man for 22 years has crushed me. Maybe if I got a bag of money in the upper 7 digits I'd be able to reignite my tinkering spirit, but we don't live in fairy land and that is not happening, so nope. You're looking at one more person who was crushed by free market forces.

That being said, you have people working on e.g. the Rust compiler or OCaml in general. Likely a bunch of geniuses and I love it that they are actually funded and are keeping up the good fight. Gives me some hope.

> maybe you could assist them by sharing your experience?

Yes, that's the best idea really: education. But as we all know, (1) younger people never listen to advice, and (2) I probably never found the right audience (though, thinking of it, I am starting to see some lectures popping up in my city again lately; maybe I should try to go). I've been told I am an excellent mentor (as recently as 3 months ago), but my health and time constraints prevent me from doing it more.

But how do you educate people to be unhappy with the status quo? I seriously don't know. The human brain is kind of like this: "Hey, I am not hungry, I have somewhere to sleep, and most of my basic needs are covered; I guess the world is perfect and I don't need to change anything at all anywhere". I've observed it thousands of times in my life and I am bitter about it to this day, and likely will be to my grave.

> But maybe we need to play more efficiently?

Yep, I would not be against some central "authority" website / app that distributes "play" tasks to whoever is willing to do them (obviously with plenty of repetition and redundancy; you can't have 10 people agree to tackle 10 different tasks and then never show up again). And this does not remove choice; you can still present people with, like, 50 choices, and they will surely identify with at least one of them.

All this free energy, wasted all the time. [sighs deeply] I wish we had more structure and direction, is what I have been saying all along.


I support abandoning dynamic typing.


Same. Dynamic typing helps with prototyping, but all MBAs have consistently proven they are NOT willing to have the prototype rewritten in something better later.


I found that it is better to ask for forgiveness than permission.


Same, by the way. I am ashamed that I didn't do it more often over the course of a 22-year long career. I really should change that.


I just launched 2 experiments to production this week without going through the launch approval process. I'm such a rebel.


I don't think static typing is the right way to go for every kind of application (imagine doing data analysis without dynamic typing), but every dynamic programming language should ideally support progressively adding types to a codebase.


Just FYI: Swift has an actor model. Pony is built around the actor model.


Sure. But now we get to the really hairy problem: library coverage and community support. That's why I think most languages should start converging together already.

IMO we the programmers scatter ourselves too much.


On that I totally agree.


Then my apologies to you and everyone else who replied to me: I really should have just said "I feel we scatter our attention and energy too much and we don't collectively evolve our craft as much as I believe we are capable of".


No worries! Apologies for being overly curt.


> Wake me up when they achieve it. I am betting on somewhere in the 2030s, best case scenario

(slap)

(wake-up sleepy)

common lisp has had actors for a while now


Actors were developed in MACLISP, Common Lisp's parent.


Oh? Nice, missed that. Is it on the level of Erlang's VM or at least Go's goroutines?


what would "proper parallel execution" look like in a Lisp? A library or a more fundamental form? Something with the GC?


There's a language called PARLANSE which is a parallel LISP.

http://www.semdesigns.com/products/parlanse/index.html


Also MultiLisp, QLisp, *Lisp, and others. Parallel lisps are not a new concept, though they were often written with specific machines in mind (*Lisp for instance).


It's not about them being a new concept. Do we have Erlang actors or Golang goroutines there? Are they fully safe like Erlang's actors or Rust's futures would be (provided you don't use escape hatches)?

If not, they remain toys.


Very little innovation in programming languages has happened regarding new realities at the hardware level, especially the transition from serial to parallel execution.



STM is about hardware, not programming languages, but is decently recent so maybe someone will actually try it in hardware and see if it provides a good benefit for the increased complexity of the chip.

Region based memory management was first conceived in 1967 and is achievable by any programming language that lets you manage memory yourself.

Mutable value semantics in native code have been available since at least 1980 with Ada.

Lifetimes in Cyclone seem the best example of PL research in the last 50 years you have there, as it’s only 20 years old.

Overall, I’m still unsure if this list proves the point that there is active useful research in the PL space or if it proves that there’s very little in the PL space to research. More research is probably required.


> STM is about hardware, not programming languages

As others have pointed out, there's STM research in PL, it's not entirely about hardware. (The link I gave wasn't great, sorry.)

> Mutable value semantics in native code have been available since at least 1980 with Ada.

Could you link to the relevant docs? I wasn't aware Ada had anything like this.

Is this implemented under the hood with deep copying? Because if so, that would explain why it hasn't started to catch on anywhere until now. Swift and Hylo have much more efficient implementations than "copy all the time".

https://www.jot.fm/issues/issue_2022_02/article2.pdf

> Region based memory management was first conceived in 1967

There's active research in this general space. I met someone who was working in it on a train, though I forget the details.

> and is achievable by any programming language that lets you manage memory yourself

Sure. I mean Rust lifetime discipline is "achievable" in C too, so long as you're very very careful.

> Lifetimes in Cyclone seem the best example of PL research in the last 50 years you have there, as it’s only 20 years old.

It typically takes 10+ years for PL research to go from papers to research languages to being incorporated into "real" languages.

> More research is probably required.

Always.


> STM is about hardware, not programming languages

STM is not about hardware; it's literally "software transactional memory" and is meant to be implemented in software without hardware support (beyond a CAS instruction or a similar set of instructions, perhaps). As a software component it could be part of libraries, or part of a programming language as part of that language's general concurrency model.


The paper that OP linked to is indeed about hardware transactional memory, but STM is a variation of transactional memory that is implemented entirely in software.


The Rust compiler enables trivial parallelism by enforcing multiple readers XOR a single writer, and it's beautiful.

See Rayon.


I wouldn't say it's trivial but yes it's there and it's very helpful.


There are many cases where you can replace a call to `.iter()` in your Rust code with a call to `.par_iter()` from `rayon`. Those cases are trivial, and it's great.
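
For example, a minimal sketch (assuming the rayon crate is listed as a dependency in Cargo.toml):

    use rayon::prelude::*;

    fn sum_of_squares(input: &[i64]) -> i64 {
        // Sequential version: input.iter().map(|&x| x * x).sum()
        // Swapping .iter() for .par_iter() splits the work across rayon's
        // thread pool; the closure has to be safe to run in parallel,
        // which the compiler checks via Send/Sync.
        input.par_iter().map(|&x| x * x).sum()
    }

    fn main() {
        let v: Vec<i64> = (1..=1_000).collect();
        println!("{}", sum_of_squares(&v));
    }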


That's a very nice feature but I don't know that it belongs in the category of new ideas from programming language theory. Fortran 90 had automatically parallelized language constructs and OpenMP added them to C and C++ as compiler intrinsics in 2002.


Yes, that one I like a lot.


Except for all the research around functional programming?

This is like when I hear people claim that physics has not advanced in the last 50-70 years.


Futhark (https://futhark-lang.org) is an example of a functional language specifically designed for writing "normal" high-level functional code that compiles to parallel execution.


Genuine question-- is there something about Futhark that makes it particularly well-suited for parallel execution compared to any other functional programming language (especially of the purely functional kind)? FP in general is inherently well suited for this application.

As I understand it, Futhark aims to leverage GPUs in particular, and that approach seems to be what makes it unique within the category of FP languages?


Functional languages have a core that is well suited for parallelism, but all mainstream functional languages have various features that are unsuitable for parallel execution - particularly if you want to target limited machines like GPUs.

The Futhark programming model is standard functional combinators: map, reduce, scan, etc. You can do that in any Functional language, and it is mostly trivial to rewrite a Futhark program in Haskell or SML or OCaml. Futhark removes a lot of things (like recursive data structures) to make efficient parallel execution easier (and even possible), and adds various minor conveniences. But you don't really need to add something for functional programming to be suitable for parallelism; you just have to avoid certain common things (like linked lists or laziness).


Functional programming is not based on how hardware is implemented. Serial execution of instructions and mutating chunks of memory at a time are core parts of how the hardware works, and they aren't functional. Doing graph reduction and making tons of copies will be slow.


Are we still doing this stupid ass reasoning around FP? Is the CPU really that serial, when it literally reschedules your instructions based on the graph of their interconnections?

Also, just think about all the optimizations your "serial" programming language does -- do you yourself really write all those mutations? Or is that the compiler, which in many cases can do a much better job? Now what if the language's semantics allowed even more freedom for the compiler in exchange for more restrictions on the language? Sure, we still don't have "sufficiently advanced compilers" that would replace programmers, but FP languages absolutely trade blows with most imperative languages on many different kinds of problems. Very visibly when parallelism comes into the picture: as it turns out, a smart parallel-aware data structure will easily beat its serial counterparts here.


> "Is the CPU really that serial, when it literally reschedules your instructions based on the graph of their interconnections?"

Yes. Yes, it is, because all of that rescheduling and reordering is completely hidden, at great effort and expense, to make it seem like the instruction stream is executing exactly in the order specified. If it weren't, lines of code would essentially execute in an indeterminate order and no programs would function.


Hardware was for a very long time a limiting factor in the practicality of FP. For most general applications, today this is a relative non-issue.

FP is also particularly well suited for cloud computing and parallel computation.


"FP" camps tend to come in two flavors: "I'm a mathematician writing a computer program, and all problems will be made to look like math problems even if it means the program becomes an inscrutable mess of types and dense syntax" and "functional-ish idioms are included". The latter is useful, sometimes, for cloud computing and parallel computation; the former tends to have too many problems (slow build, slow execution, poor jargon laden syntax, etc.) to be very useful outside of academia.

I am (perhaps obviously) biased, here, but I tend to just roll my eyes whenever any of my colleagues suggests we should use functional programming to solve a problem. There are actually very few real-world use cases where it's objectively better.


> all problems will be made to look like math problems

All problems that can be solved with code are math problems. Proofs and programs are isomorphic (see the Curry-Howard correspondence).

---

Edit: this is a factually accurate comment, delivered dispassionately. It's not controversial or new-- it's something we've known for longer than the C language has existed. Why the downvote? Like I said, see this: https://en.wikipedia.org/wiki/Curry%E2%80%93Howard_correspon...


Factually accurate (to some degree), but almost completely irrelevant.

First, not all problems that can be solved with code are math problems. Take driving a serial port, for instance. There might be some aspects of it that are mathematical, but mostly it's a matter of interfacing with the chip spec.

Second, even for problems that are isomorphic to a math problem... the thing about isomorphisms is that they aren't identical. One form or the other is often easier to deal with. (That's one of the reasons we care about isomorphisms - we can turn hard problems into easier ones.)

Well, which is easier, writing programs or doing proofs? Almost always, writing programs is easier. This is why it's (almost) completely irrelevant.

Nobody wants to write code that way. So nobody cares. They're not going to care, either, no matter how forcefully you point out Curry-Howard.

Now, as you say elsewhere, you can gain insights from math that can change your code. That's true. But I suspect that most of the time, you don't actually write the code as a proof.


This is like saying all physical engineering problems are quantum mechanics (or insert other physics theory) problems. It’s technically correct (the best kind of correct), but misleading and useless as a matter of practice.


Your analogy is interesting to me.

I definitely see the parallel, but I'm not actually sure this is true.

A lot of the deep functional stuff I'm learning right now is more about finding connections and shortcuts between things that we used to think were different.

For me, comparing functional programming to older languages is more like comparing "tally marks" or "roman numerals" to a modern "place value system".

Now back to the physics analogy. The gap between quantum physics and chemistry is both a theoretical and a computational limit.

There also seem to be very distinct layers, where the lower levels don't seem to correlate with the higher levels.

But I can also see this might apply to the Curry-Howard correspondence.

Hmmm. I have to think about it more...


I see what you’re trying to do with the comparison, but it’s not really the same.

In your example, the two things are separated by at a minimum one layer of emergence: your example is more like saying biology is just chemistry. In maths and programming, they are both at the same level, no emergence.

I also haven’t found what you say to be true at all. As I’ve been learning more maths and more programming, and learning more about the link between the two, I have found that the ability to see problems from more than one angle has had a dramatic impact on how clearly I think and how efficiently I solve problems. Not useless whatsoever.


I was an academic physicist for the first several years of my post-graduate school career--I, too, see much value in having multiple perspectives to a problem.

But that's quite different from your other claim. Maths and programming are not at the same level. When one writes a "hello world" program, math does not figure into the final text of the code at all. Similarly, when one writes code to implement a system interacting with multiple dependencies, one is not doing mathematics, except in the trivial sense of your original comment. That is to say, at such a remote distance that it's meaningless to describe the activity as a mathematical one.


You’re doing theorem proving in all cases where you handle errors or exceptions in a non-trivial fashion. Same when you’re implementing any kind of authz. When dealing with async code or threads, you’d better be good at your invariants. This is all discrete maths. Yes, I don’t differentiate continuous functions at work, but let me tell you, juggling multiple condition scenarios when integrating multiple inputs with multiple systems is damn close to working with logic proofs.


I completely disagree, but I don’t have the energy to argue or explain.


Interesting. Functional programming is the only strategy I've seen be successful at building business software. (Hint: you can do FP in Java and it's not even awkward, and SQL is inherently functional).


I work at a FAANG-like company. The code base has almost no functional programming paradigms deployed. It’s a multi-billion dollar company at which I, an IC cog-in-the-machine employee, became a multimillionaire through the IPO. It’s wildly successful.


Is it all design-patterny OOP?


The vast majority of it is, unfortunately. That's starting to change, though, as we are hitting some pretty severe maintenance and performance issues, as well as bottlenecks to making changes, all of which the abstraction is obviously the root cause of. None of the services built by the team I used to lead used OOP (in the sense you're referring to), and the new team I lead will be making sweeping changes to move away from it under my guidance.

It served its purpose, but we've outgrown it.


Is “wanting evidence for constant silver bullet claims that never actually pan out” really “obviously biased”?

It’s not just you. Functional Programming really does not adequately solve any of the problems that its advocates claim while refusing to provide any evidence.

And it’s not “biased” to write these claims off.


I don't know what "silver bullet" claims you've heard, but I'm not sure how that is relevant to this thread. I don't think I've made any outlandish claims, or any claims that aren't substantiated by a preponderance of academic literature.


I find this whole thread fascinating because nobody seems to have identified what functional programming is defined as (programmers can just use functions to code and end up with functional programming, it isn't an obscure style), who the advocates are or what they claim.

And yet, without any substance, the debate rages.


> nobody seems to have identified what functional programming is defined as

Nobody asked (until now, so thank you for asking!), and this is a fairly well discussed topic for anyone who cares to search!

You will probably get a lot of slightly different answers depending on who you ask or where you look, but I think a very strong common thread is "referential transparency". Functional programming gives you that, and that is the property that makes FP particularly well suited for parallel computation. Referential transparency is related to the concept of "function purity" (in the sense that either one usually guarantees the other), which you will often hear people talk about as well. The two concepts are so intimately tied that sometimes I wonder if they're two different perspectives on the same thing.
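
A tiny illustration of the substitution property, sketched in Rust with made-up functions: the transparent call can be swapped for its value without changing the program, while the side-effecting one cannot:

    // Referentially transparent: square(3) + square(3) can be rewritten as
    // `let s = square(3); s + s` without changing what the program does.
    fn square(x: i64) -> i64 {
        x * x
    }

    // Not referentially transparent: the same rewrite here would print
    // "squaring 3" once instead of twice, so the two programs differ.
    fn square_and_log(x: i64) -> i64 {
        println!("squaring {x}");
        x * x
    }

    fn main() {
        let a = square(3) + square(3);
        let b = square_and_log(3) + square_and_log(3);
        assert_eq!(a, b); // same values, but different observable behaviour
    }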

This, along with the fact that FP has been an active area of research (an important part of innovation) for a long time, is why I brought it up.

https://en.wikipedia.org/wiki/Functional_programming

https://en.wikipedia.org/wiki/Pure_function

https://en.wikipedia.org/wiki/Referential_transparency

https://softwareengineering.stackexchange.com/questions/2938...

---

> programmers can just use functions to code and end up with functional programming, it isn't an obscure style

That's not how it works. Note that functional programming has nothing to do with merely "writing functions".

---

> without any substance, the debate rages

The substance is there and there is plenty of it, but learning requires work.


> Referential transparency is related to the concept of "function purity" (in the sense that either one usually guarantees the other), which you will often hear people talk about as well.

Isn't referential transparency (the property of a function that allows it to be replaced by its equivalent output) a consequence of function purity? In other words: could a pure function not be referentially transparent?

Also, I remember Robert C. Martin describing the functional programming paradigm as a "restriction upon assignment". I kind of like this definition as the rest seems to flow from it: if you can't assign, you can't mutate. If you can't mutate, you have pure functions, which are referentially transparent.


"Could a pure function not be referentially transparent?"

Yes, there are pure functions which are not referentially transparent. A pure function with side effects, such as printing a result to standard output, is not referentially transparent. You can't replace the function with its return value, since that doesn't replicate the printing side effect.


A pure function is by definition side-effect free. What you write makes 0 sense. Maybe you're thinking of total functions?

But not all FP languages have to be so puritan as to only allow pure functions; most have escape hatches, and it is just good form to prefer purity (even in more mainstream languages!). The most common way to circumvent this problem is through a Monad, which, very naively put, is just a description of the order of side-effecting functions and their inter-dependencies. This later gets executed at a specific place, e.g. the main function -- the point is that a large part of the code will be pure, and much easier to reason about.


I thought that a pure function could not have side effects by definition. Would you mind sharing your definition of a pure function?


Functional programmers regularly claim:

-Haskell is faster than C (lol)

-FP gives you free concurrency

-FP makes code more testable

-FP is easier to read

-FP is easier to consume for people

-FP results in no bugs

-FP is easier to change

-FP will literally suck your peepee

-Actually FP is the second coming for Christ

It’s really funny how you also pretend you’ve never heard of all the silver bullet claims that are incessantly plaguing every programming forum.

What’s also really funny is your multiple alt accounts manipulating your votes. You must be real secure in those claims.


> -FP will literally suck your peepee

> -Actually FP is the second coming for Christ

On a style point, you've rather undermined your claim that these are common assertions, because there are entries on this list that are clearly fabricated, as well as others that look like wilful misinterpretations of what someone else said. That casts doubt on the more reasonable entries. There are annoying FP evangelists out there, but the overall tone pattern-matches straw-manning.

You'd have made it easier for everyone to take the whole list seriously. Transparently mixing fact and fiction just makes it harder for people who aren't already part of the conversation.


I agree with some of these points, but none of them are claims I made in the context of this discussion, so I’m not sure why you’re even arguing about those. It’s irrelevant.


They’re false claims. And I didn’t bring up the false claims FP programmers make until you asked for it to happen. I simply said that rejecting the ridiculous silver bullet claims that FP programmers make is completely fine because they don’t provide any evidence and that which is asserted without evidence can be dismissed without it.

Not that that matters, because all of those claims are demonstrably false anyway!


> Functional programmers regularly claim:

> -Haskell is faster than C (lol)

If they regularly claim that you should easily be able to point to several recent examples. Can you?


Then why has it been slowly incorporated into literally every mainstream PL? You do know you don't have to be an extremist, and your code can contain FP, OOP, and imperative parts, whichever makes the most sense? FP and OOP are not even opposites.

They absolutely solve real issues, and it's just sticking your head in the sand to say otherwise.


The parent comment was talking about aligning programming languages with the hardware. I am not commenting on the viability of those languages, but rather saying that if your goal is to write the most performant code by understanding the strengths and weaknesses of the hardware, then using FP concepts is not the way to do it.


I feel like in both of your comments you've changed the topic slightly. I responded to the following comment, which I interpreted literally:

> Very little innovation in programming languages has happened regarding new realities at the hardware level especially transition from serial to parallel execution


Well... if code were pure (in the FP sense), then a "sufficiently smart compiler" could move it around to extract the maximum performance.

But, as always, the sufficiently smart compiler never shows up. So we're left with the humans doing the tuning, and as you say, FP is kind of antithetical to that approach.


But how will language performance evolve as the nature of the hardware our programs run on evolves? IMHO this is not an easy question to answer right now.

C compilers don’t produce 100% optimal assembly language in all cases, but typically the assumptions they make are light. The executable code they output is somewhat predictable and often close enough to hand-optimised assembly in efficiency that we ignore the difference. But this whole approach to programming was originally designed for single-threaded execution on a CPU with a chunk of RAM for storage.

What happens if we never find a way to get a single core to run much faster but processors come with ever more cores and introduce other new features for parallel execution? What happens if we evolve towards ever more distributed systems, but farming out big jobs to a set of specialised components in the cloud at a much lower level than we do today? What happens if systems start coming with other kinds of memory that have different characteristics to RAM as standard, from content-addressable memory we already have today to who-knows-what as quantum technology evolves?

If we change the rules then maybe a different style of programming will end up being more efficient. It’s true that today’s functional programming languages that control mutation and other side effects usually don’t compile down to machine code as efficiently as a well-written C program can. The heavier runtimes to manage responsibilities like garbage collection and the reliance on purely functional data structures that we don’t yet know how to convert to efficient imperative code under the hood are bottlenecks. But on the other hand, those languages can make much stronger assumptions than a lower-level language like C in other ways, and maybe those assumptions will allow compilers to safely allocate different behaviour to new hardware in ways that weren’t possible before, and maybe dividing a big job into 500 quantum foobar jobs that each run half as fast as a single-threaded foobaz job still ends up doing the job 200x faster overall.


> C compilers don’t produce 100% optimal assembly language in all cases, but typically the assumptions they make are light. The executable code they output is somewhat predictable

Lol, since when? C compilers will literally run some of your code at build time and only write the results into the binary, and they do all sorts of crazy "mental gymnastics" to make people believe it is still a dumb single-pass compiler.


You're mixing up functional programming with an execution model for functional languages. These are not the same.


Research around functional languages involves their execution models. Even ignoring execution models, immutability is a staple of functional programming, and it is not good for performance.


For a field in which a large fraction (if not a majority) of the people in industry have (nominally at least) science degrees, CS research takes a fairly long time to penetrate into industry. Rust 1.0 had few, if any, features that weren't demonstrated in academia 30 years earlier.


It is a lot easier to write a paper demonstrating some feature than to write a production-quality ecosystem based on that feature. There isn't much either the academic side nor the engineering side can do about that.

And it's not like every academic idea that worked in a paper has worked as well as hoped when someone tried to turn it into a production-quality ecosystem.


The central core of Rust is mutability NAND sharing, enforced by a borrow checker. Was that really established 30 years prior? I thought that came from the Cyclone research language (v1 release 2006) which was only a few years prior to the initial stages of Rust (late 2000s, depending on how you count it)


Does anyone have any good counterpoints to this? From my naive perspective, this seems to be relatively true.

I've always assumed that, by now, I would be able to write code in a semi-mainstream language and have it made somewhat parallel by the compiler. No need for threads, or for me to think about it.

There are projects like https://polly.llvm.org, but I guess I assumed there would be more progress over the decades.


Proving legality of transformations in the compiler is frequently impossible. Consequently, the main mode of implementation has been to essentially think of the problems in terms of the user saying "this loop is parallel, please make it run in parallel". OpenMP or Rust's rayon crate, for example. The other similar innovation has been programming SIMD as if each lane were an independent thread, which is essentially the model of ispc or CUDA (or #pragma omp simd, natch).

The other big impossible task is that most code isn't written to be able to take advantage of theoretical autoparallelization--you really want data to be in struct-of-arrays format, but most code tends to be written in array-of-structs format. This means that the vectorization cost model (even if the transformation is proven legal, whether by user assertion or by a sufficiently smart compiler) sees it needs to do a lot of gathers and scatters and gives up on a viable path to vectorization really quickly.
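
A small Rust sketch of the layout difference (a hypothetical particle type, just to show why SoA sums vectorize more readily than AoS sums):

    // Array-of-structs: each particle's fields are interleaved in memory,
    // so summing `mass` means striding over x and y as well.
    struct ParticleAos {
        x: f32,
        y: f32,
        mass: f32,
    }

    // Struct-of-arrays: each field is one contiguous array, so a loop over
    // `mass` touches dense memory and vectorizes without gathers.
    struct ParticlesSoa {
        x: Vec<f32>,
        y: Vec<f32>,
        mass: Vec<f32>,
    }

    fn total_mass_aos(ps: &[ParticleAos]) -> f32 {
        ps.iter().map(|p| p.mass).sum()
    }

    fn total_mass_soa(ps: &ParticlesSoa) -> f32 {
        ps.mass.iter().sum()
    }

    fn main() {
        let aos = vec![ParticleAos { x: 0.0, y: 0.0, mass: 1.5 }];
        let soa = ParticlesSoa { x: vec![0.0; 3], y: vec![0.0; 3], mass: vec![1.0, 2.0, 3.0] };
        println!("{} {}", total_mass_aos(&aos), total_mass_soa(&soa));
    }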


Maybe some history will help here too. In the 90s, the data model of most programming languages wasn't even array-of-structs, but arrays of pointers to other pointers to other pointers...

And the majority of software we've inherited is written this way.

In the 90s this didn't matter, since dereferencing a pointer was comparable in cost to arithmetic operations. But with modern CPUs with massive caches and more native parallelisation, the difference is dramatic.

So, even now, the majority of languages we're using, and almost all code we've inherited, are as far away as you can get from efficiently using modern CPUs.

The task is first to change all these languages to enable ergonomic programming without tons of indirection -- we're very far away from even providing basic tools for performant code.


It's not totally clear what you're looking for.

As you noted, polyhedral compilers work on a pretty restricted subset of programs, but are fairly impressive in what they do. There has been research on distributed code generation [1] as well as GPUs [2]. While there has been work on generalizing the model [3], I think the amount of parallelization that a compiler can do is still very limited by its ability to analyze the code (which is to say, highly restricted).

Then you've got a large class of data-parallel-ish constructs like Rayon [4] as well as executors which may work their way into the C++ standard at some point [5]. How much safety these provide depends greatly on the underlying language. Generally speaking, the constructs here are usually pretty restricted (think parallel map), but often you can write more-or-less arbitrary code inside, which is often not the case in the polyhedral compilers.

If you don't care so much about safety and just want access to every parallel programming construct under the sun, Chapel [6] may be interesting to you. There is no attempt here, as best I can tell, to offer any sort of safety guarantees, but maybe that's fine.

On the other end of the spectrum you have languages like Pony [7] that do very much care about safety, but (I assume, haven't looked deeply) this comes with tradeoffs in expressiveness.

(I work in this area too [8].)

Overall, there are some very stringent tradeoffs involved in parallelizing code and while it certainly has been and continues to be a very active area of research, there's only so much you can do to tackle fundamentally intractable analysis problems that pop up in the area of program parallelization.

[1]: https://www.csa.iisc.ac.in/~udayb/publications/uday-sc13.pdf

[2]: https://arxiv.org/pdf/1804.10694.pdf

[3]: https://inria.hal.science/file/index/docid/551087/filename/B...

[4]: https://docs.rs/rayon/latest/rayon/

[5]: https://github.com/NVIDIA/stdexec (disclaimer: I did a quick Google search on this, not 100% sure this is the best link)

[6]: https://chapel-lang.org/

[7]: https://www.ponylang.io/

[8]: https://regent-lang.org/


There is also HVM [9], which can run sequentially written code in parallel to some degree. (It can run in parallel some sequential Haskell code naively transpiled to HVM that GHC doesn't parallelize.)

[9] : https://github.com/HigherOrderCO/HVM


Languages like Erlang (or Elixir) that naturally split programs into isolated processes with local state and that communicate via message passing map well onto multi core systems. No need to have the compiler figure that out for you - instead it is expressed directly in the code.


One of the prerequisites to this is for the mainstream to stop doing things which are incompatible with concurrency.

Mutation (and other effects) makes the order of computations important. If you're writing to and reading from variables, the compiler is not free to move those operations around, or schedule them simultaneously.

And you probably don't want to be rid of all mutation. So what if you separated the mutating from the non-mutating? Well you'd need a sufficiently powerful type system. Likely one without nulls - as they can punch a hole through any type checking.

If you want this stuff in the mainstream, you at least have to get all the nulls and mutation out of the mainstream, which I don't think will happen.

The industry for the most part heeded "goto considered harmful" (1968), but hasn't done so with "the null reference is my billion-dollar mistake" (2009). Maybe we just have to wait.


Especially when doing things like mapping or list comprehension. I'd love to be able to do operations on collections in parallel by simply marking my functions as pure. Just a simple xs.map {x -> f(x)}, combined with "f" marked as pure and confirmed by the compiler, to make the magic happen.


It isn't really. There have been plenty of innovations in parallel computing:

* Go, with goroutines and heavy use of channels.

* Rust, which is free of data races and generally improves the safety of multithreaded programming via Sync, Send, and just safer APIs (e.g. Mutex).

* Chapel, which is a language designed primarily for multithreaded and multiprocess computing.

Those are just the ones I know about. There's obviously way more.
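
As a minimal sketch of the Rust point, using only the standard library: the counter below can be shared across threads only because Arc<Mutex<u64>> is Send and Sync; trying to share a plain mutable integer instead is rejected at compile time:

    use std::sync::{Arc, Mutex};
    use std::thread;

    fn main() {
        // Arc gives shared ownership across threads, Mutex gives exclusive
        // access. That combination is what makes the data-race-freedom
        // guarantee show up as a compile-time check rather than a runtime bug.
        let counter = Arc::new(Mutex::new(0u64));

        let handles: Vec<_> = (0..4)
            .map(|_| {
                let counter = Arc::clone(&counter);
                thread::spawn(move || {
                    for _ in 0..1_000 {
                        *counter.lock().unwrap() += 1;
                    }
                })
            })
            .collect();

        for h in handles {
            h.join().unwrap();
        }

        assert_eq!(*counter.lock().unwrap(), 4_000);
    }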


Take a look at Halide, which can autovectorize and multi-thread graphics computations (but does require a restricted language).


I was thinking outside of "embarrassingly parallel" [1] type work. :) But, that is fair.

[1] https://en.wikipedia.org/wiki/Embarrassingly_parallel


Array languages, such as APL and others, tend to be easily parallelisable, given that their primitives tend to focus on intent and how to transform data rather than on imperative operations.

Some of the SIMD operations feel very reminiscent of APL primitives.


So your best bet for a modern language is one from 1966?


Just because something's old doesn't mean it's not relevant. Yes, it might not've been the best language to suggest - the more modern BQN, Uiua, and Singeli exist too - but it's still a fairly niche paradigm. Ideas tend to come in cycles too - look at the 1980s ideas of the transputer or the Connection Machine.

I wanted to point towards a programming paradigm whose approach enables you to take advantage of the parallel execution possible within chips today, due to the notation being precise in intent yet vague in execution. Take summing an array (`+/vector`) or selecting values given a boolean mask (`mask/values`) - both of these very simple expressions are expressible directly in SIMD instructions, as there is no for-loop index enforcing an order.


I feel this comment is pequant and jejuon. Of course the distributed nature of computing has made its way into programming language research.


Piquant and jejune?


Oddly, there are very few job positions for working on things related to programming languages.


I feel like every major company is hiring for AI compiler engineers right now (based on my inbox at least). May not be directly related to general 'programming languages', but my take, as someone in the industry, is that all the PL people are working on this right now.


Same here; some companies are hiring a compiler team for AI, in fact.

I was told Rice and UIUC have the best compiler programs -- not necessarily AI-related, but it should be similar.


What does "compiler team for AI" mean, specifically? I get that it's some hot new trend from the last 2 posts, but I'm struggling to imagine what exactly the perfect product should look like and why everyone allegedly wants it so badly.


It's just about compiling a neural net down to run as efficiently as possible. Either on GPU, CPU, or your own accelerator. Neural nets are very computational intensive, while being pretty uniform internally and as a class. Everyone making silicon and a fair few other companies as well has an AI compiler team right now. At the moment the hot product would just be LLM tokens for as few cents each as possible.


AI problems boil down to compiling linear algebra problems onto very complicated chips.

In standard processing, the code is so branchy that we often resort to heuristics in order to get 'good enough' perf.

The FLOPS difference between a cpu and gpu is huge. It makes things that are intractable on cpus possible. Without gpus there is no deep learning.

That being said, writing code for GPUs by relying on CPU compilers will result in terrible perf. In order to take advantage of the hardware you have to take into account minute details of the architecture that most CPU compilers ignore.

Cache-oblivious algorithms are algorithms that know that there is a cache but don't rely on particular cache sizes. It's the way a lot of CPU code is written, because it means not having to deal with particulars.

On GPUs, particulars matter. For example, to compile a matrix multiply on an Nvidia GPU, you can't just use vectorized multiplies and adds. No. In order to achieve max performance you need to use the warp-level matrix multiply instruction, which requires that you split an arbitrarily sized matrix into the perfect native tensor sizes and then orchestrate the memory loads (which are asynchronous on GPUs, and transparently synchronous on CPUs) correctly. If you don't, you waste millions of dollars (literally).

So whereas on a CPU you might just modify your matrix multiply loops to get contiguous memory access, add some vectorization, and cross your fingers, on a GPU your compiler needs to take the trivial three-nested-loop algorithm, look up the cache size particulars and instruction capabilities for the particular generation of the chip, and then rewrite the loop nesting to make it optimal. All while making sure you don't introduce further memory hazards (bank conflicts), etc. So your simple three-nested-loop algorithm gets turned into a nine-nested-loop monstrosity.

The stakes are much higher here and the optimizations much different. Whereas on a cpu, we kind of give up with the branching complexity, and just do our best since we never truly know the state of the program, on gpus, the algorithms being executed are extremely amenable to static analysis so we do that, and optimize the shit out of them.


I believe it's either related to the new AI-specialized chips, or maybe to the factoring of neural network graphs.

Any specialized domain tends to have its own domain-specific language, so obviously that would be true for AI, too.


correct, every AI chip maker needs their own compiler team these days


The hot thing is AI now, but you can sneak PL into a wide variety of SWE jobs.



https://youtu.be/JqYCt9rTG8g

Missing the last letter



