Carmack talks practical use of Haskell, Lisp in game prog at Quakecon keynote (youtube.com)
90 points by macmac on Aug 11, 2013 | 39 comments



It struck me how well Carmack's ideas about running all actors in the game world in parallel map to Clojure's agents[1].

Clojure introduces a bunch of interesting ideas on how to handle parallel programming. "Perception and Action" by Stuart Halloway is a great talk to listen to if you're interested[2].

[1] http://clojure.org/agents [2] http://www.infoq.com/presentations/An-Introduction-to-Clojur...


In fact, one of Rich Hickey's very first demos of Clojure was ants.clj, which uses agents to simulate the ants. While it's a simulation rather than a game, it includes many elements that would also be present in one. A literate version of ants.clj may be found here: https://github.com/limist/literate-clojure-ants/blob/master/...


It's inspiring to hear him talk about struggling to learn a new language and having trouble with things outside his usual line of work.


He's one of my favorite people to follow on Twitter [1] because of this; he regularly tweets about stuff he's learning, especially new languages. It's quite inspiring and uplifting to see such a figure in the industry still make time to learn on his own and candidly share his thoughts.

[1]: https://twitter.com/ID_AA_Carmack


I agree. It is quite charming to hear him point precisely to the qualities that make him fond of Haskell, but then struggle to put the strengths of Scheme into words.


I strongly recommend listening to his Quakecon keynote in full. It meanders over a huge variety of topics, and I personally find him a very engaging orator who is easy to listen to.


Absolutely! I uploaded it as a single segment at https://www.youtube.com/watch?v=o2bH7da_9Os and will archive it to archive.org shortly.


Is there any chance you could extract the audio and put it online somewhere? I am currently finishing a cross-country bike trip, and would absolutely love to listen to his keynote, but only have a phone on me currently. It would mean a ton!


Sure thing! http://archive.org/details/Quakecon_2013_-_Welcome_and_Annua... includes the AAC track extracted from the video as well as a mono 32 kbit/s VBR Opus in an Ogg container to save you bandwidth. Enjoy!


Sorry, that ogg was borked. At least I cannot play it anywhere. Will replace it with a standard Ogg Vorbis one.


A phone that doesn't do YouTube - is that even legal?


Glad to see him think that strong static typing is the way to go.


He wrote a really good article about using static code analysis with Visual Studio here: http://www.altdevblogaday.com/2011/12/24/static-code-analysi... I imagine this was on the same train of thought that led him to investigate statically typed languages.


There are only so many times you can get an error in your programs before you begin thinking that static typing might offer non-trivial benefits.


I have a feeling that all the people still arguing in favor of dynamic typing at all are tilting at the windmill of static languages without type inference. Nobody really thinks those are good languages any more :)
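
For anyone who hasn't seen inference in action, a minimal Haskell sketch (the function name is made up): no type annotation is written anywhere, yet GHC works out the full polymorphic type.

    -- No signature written; GHC infers
    --   pairUp :: [a] -> [(a, a)]
    pairUp xs = zip xs (tail xs)

GHCi's :type command will confirm the inferred signature.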


A very valid point. I really hate spelling things out for the compiler these days. Unfortunately, for some reason language designers think it's a good idea to go crazy with syntax when developing something functional, because it makes things "terse".


> all the people still arguing in favor of dynamic typing at all are tilting at the windmill of static languages without type inference

Some of us who argue in favor of dynamic typing have a much more nuanced and informed view...

I, for one, am of the mind that there are precisely zero production-caliber statically typed environments that possess a sufficiently powerful type system for the kinds of problems I tackle on a regular basis. Haskell doesn't count, since you need to turn on about a dozen GHC language extensions in order to incorporate the last 20 years of research. There are also quite a few design warts that newer academic languages are starting to iron out. In particular, I don't think monad transformer stacks are a reasonable solution to computational effects.

That's not to say you can't write any program in an environment where the type system is constraining you. You can. You simply implement a "tagged interpreter", which is something so simple to do that people do it all the time without realizing it. Either you have a run-time map, or you pattern match on a sum type data constructor, then loop over some sequence of those things with a state value threaded through. Poof! You've got a little interpreter.

I find that this happens a lot. And I also find that a lot of problems are easier to reason about if you create a lazy sequence of operations and then thread a state through a reduction over that sequence. Now, in Haskell, I've got a type-correct interpreter for an untyped (and unlikely to be Turing-complete) language! Sadly, I can't re-use any of the reflective facilities of the host language, because my host language tries to eliminate those reflective facilities at compile time :-(
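
To make the "tagged interpreter" pattern concrete, here is a minimal Haskell sketch with invented toy names (Cmd, step, run): a sum type of operations, a step function that pattern matches on the constructors, and a fold that threads state through a sequence of them.

    -- A toy embedded "language": a sum type of commands...
    data Cmd = Incr | Decr | Reset

    -- ...and an interpreter: pattern match on the constructors while
    -- threading a state value through a sequence of commands.
    step :: Int -> Cmd -> Int
    step n Incr  = n + 1
    step n Decr  = n - 1
    step _ Reset = 0

    run :: [Cmd] -> Int
    run = foldl step 0   -- run [Incr, Incr, Decr] == 1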

I'm in favor of optional, pluggable, and modular type systems. I think that a modern dynamic language should come with an out-of-the-box default type system that supports full inference. If, for some reason, I build a mini interpreter-like thing, I should be able to reuse components to construct a new type system that lets me prove static properties about that little dynamic system. This level of proof should enable optimizations of both my general-purpose host language and my special-purpose embedded "language".

Additionally, I require that type checking support external type annotations, such that I can separate my types from my source code. In this way, type checking becomes like super cheap & highly effective unit tests: The `test` subcommand on your build tool becomes an alias for both `type-check` and `test-units`. You just stick a "types/" directory right next to your "tests/" directory in your project root. Just as a stale unit test won't prevent my program from executing, neither will an over-constrained type signature.


> Haskell doesn't count, since you need to turn on about a dozen GHC language extensions in order to incorporate the last 20 years of research.

I don't follow. Turning on a language extension is as simple as adding a single annotation to your source file. Is this a problem?

Are you objecting to "the last 20 years of research" not being part of the definition of the Haskell language standard? This is a little off the mark, as Haskell was conceived in 1990, and the latest version of the standard is from 2010.

Moreover, Haskell is evolving rapidly precisely due to the use of language extensions. Research is done, papers are written, and extensions are added — then validated through practical use, and either kept or removed.

> There are also quite a few design warts that newer academic languages are starting to iron out. In particular, I don't think monad transformer stacks are a reasonable solution to computational effects.

Granted, monad transformer stacks can get unwieldy. Fortunately, writing in this style is not required. Monad transformers are just library code, so it's not necessary to invent a new language to replace them.
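
For readers who haven't run into one, here is a minimal sketch of a small transformer stack, assuming the mtl library; the App type and the function names are made up for illustration.

    import Control.Monad.Reader (ReaderT, ask, runReaderT)
    import Control.Monad.State (StateT, modify, execStateT)
    import Control.Monad.IO.Class (liftIO)

    -- A hypothetical stack: read-only config, a mutable counter, IO at the base.
    type App = ReaderT Int (StateT Int IO)

    tick :: App ()
    tick = do
      limit <- ask                -- read the config
      modify (min limit . succ)   -- bump the counter, capped at the limit
      liftIO (putStrLn "tick")    -- do some IO

    runApp :: Int -> App a -> IO Int
    runApp limit app = execStateT (runReaderT app limit) 0

The unwieldiness tends to show up once the stack grows to half a dozen layers and effects have to be lifted past one another by hand.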


> Haskell was conceived in 1990, and the latest version of the standard is from 2010

Haskell was released in 1990, but design work on Haskell itself started in 1987 and was heavily based on prior languages, standardizing on a common, agreeable subset. The fact that there is a 2010 version of the spec provides zero insight into how much the language has evolved over that 23-year period. That's not to say it hasn't evolved, just that it's silly to pick on my obviously hyperbolic trivialization of 20 years of progress.

> Haskell is evolving rapidly precisely due to the use of language extensions

Sure. And the fact that it is still rapidly evolving, especially in the type system department, is proof that there are interesting classes of problems that don't fit into Haskell's type system in a sufficiently pleasing way.

Evolution is a good thing & I have a ton of respect for both Haskell & the PL research community. See the rest of my post for how I'd prefer an advanced language/type-system duo to work in practice.


> "there are precisely zero production-caliber statically typed environments that possess a sufficiently powerful type system for the kinds of problems I tackle on a regular basis"

This tells us nothing.

> "...since you need to turn on about a dozen GHC language extensions in order to incorporate the last 20 years of research..."

So it's a bad thing that you can turn off parts of the language that you don't like? Also, don't I need to make an effort to go and download the latest version of e.g. Python just to get "the latest 21 years of research" (yes, it's that old)?


That's an apples-and-oranges comparison. Python's type system doesn't restrict me in the same ways a static type system does. There are Haskell programs that are not valid but become valid once you enable a compiler flag and change something in the type sub-language, as opposed to changing something in the term sub-language.
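
A small example of the kind of thing I mean, assuming GHC: the function below is rejected by standard Haskell but accepted once the RankNTypes extension is switched on, and the only change lives in the type signature, not the term.

    {-# LANGUAGE RankNTypes #-}

    -- Without the RankNTypes flag this signature is rejected: standard
    -- Haskell doesn't allow a 'forall' to the left of an arrow.
    applyBoth :: (forall a. a -> a) -> (Int, Bool) -> (Int, Bool)
    applyBoth f (x, y) = (f x, f y)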


I write ruby and javascript all day, and while I make plenty of errors, I find that they are rarely related to types. What are you working on where you regularly run into these? I hear this argument often enough that I assume there is something to it, but I struggle to understand it.


The bugs you make may not be related to types because your definition of what a "type" is refers to the very weak guarantees given by C, Java et al.

Have you ever had a bug due to something being null/nil when you didn't expect it? How about a string or list being unexpectedly empty? Perhaps you've discovered an XSS or SQL-injection vulnerability? What about an exception that you didn't anticipate, or really know what to do with?

In a more robust type system, these could all be type errors caught at compile time rather than run time. A concrete example of the null/nil thing: in Scala, null is not idiomatic (although you can technically still use it due to Java interop, which is understandable but kind of sucks). To indicate that a computation may fail, you use the Option type. This means that the caller of a flaky method HAS to deal with it, enforced by the compiler.

My "come to Jesus" moment with the Option type was when writing Java and using a Spring API that was supposed to return a HashMap of data. I had a simple for-loop iterating over the result of that method call, and everything seemed fine. Upon running it, however, I got a null-pointer exception; if there was no sensible mapped data to return, the method returned null rather than an empty map (which is hideously stupid, but that's another conversation). This information was unavailable from the type signature, and it wasn't mentioned in the documentation. The only way I had of knowing this would happen was either inspecting the source code, or running it; for a supposedly "statically-typed" language, that is pretty poor compile-time safety.

This particular kind of stronger alternative to null would be doable in the "weaker" languages, but it isn't done, for several reasons; culture and convenience are the two most prominent, in my opinion. In this sense, "convenience" means having an interface that does not introduce significant boilerplate; any monadic type essentially requires lean lambdas to be at all palatable. "Culture" refers to users of the language tolerating the overhead of a more invasive type system, which admittedly does introduce more mental overhead.


> "Have you ever had a bug due to something being null/nil when you didn't expect it? How about a string or list being unexpectedly empty?"

I understand that's not exactly the point, but I find that LINQ (the sequence monad), the built-in Nullable<>, and a custom Maybe monad make your life easier in C# in that respect.


That's certainly true. Although I'm not an expert on the really advanced type system features, this is an example of convenience without culture, IMO. C# having painless lambdas allows this particular example to exist, but it's not required (or idiomatic, in some code bases); one can still use plain ol' null. That said, I'd much rather use C# than Java for exactly this reason.


I don't want to write a big wall of text, but a lot of situations come up where bugs could (if you so choose) be ruled out by the type system. Here are some examples:

- The Maybe/Option type: you explicitly declare that a value may be missing, so you cannot call methods/functions on it willy-nilly; the compiler will force you to handle both cases. Say bye-bye to "'NoneType' object has no attribute 'foo'" errors.

- Different types for objects that have the same representation: in a language like Python, text, SQL queries, HTTP parameters, etc. are all represented as strings. Using a statically typed language, you can give them each their own representation and prevent them from being mixed with one another (a minimal sketch of this and the JSON point below is at the end of this comment). See [1] for a nice description of such a system. See also how to separate different units of measurement instead of using doubles for everything.

- Prevent unexpected data injections. With Python's json module, anything that is a valid JSON representation can be decoded. This is pretty convenient, but it means you must be very careful about what you decode. With Haskell's Aeson, you parse a JSON string into a predefined type, and if there are missing/extra fields, you get an error.

- When I was doing my machine learning class homework, I very often struggled with matrix multiplication errors. An important part of that was that the treatment of vectors vs. Nx1 matrices was different. I feel that if I could encode the matrix sizes in the types, I'd have had an easier time and fewer errors.

These are simple examples, but whenever I code in Python, I inevitably make mistakes that I know would've been caught by the compiler if I had been coding in OCaml or Haskell.

[1] http://blog.moertel.com/posts/2006-10-18-a-type-based-soluti...
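
A minimal Haskell sketch of the second and third points above; the type and field names are invented, and the JSON part assumes the aeson library together with GHC's DeriveGeneric extension.

    {-# LANGUAGE DeriveGeneric, OverloadedStrings #-}
    import Data.Aeson (FromJSON, decode)
    import GHC.Generics (Generic)

    -- Same run-time representation (a string), but the compiler won't let
    -- you pass raw user input where a SQL fragment is expected.
    newtype UserInput = UserInput String
    newtype SqlQuery  = SqlQuery String

    -- Decoding JSON into a fixed shape: a payload with a missing or
    -- mistyped field decodes to Nothing instead of silently flowing on.
    data Player = Player { name :: String, score :: Int }
      deriving (Show, Generic)

    instance FromJSON Player

    main :: IO ()
    main = do
      print (decode "{\"name\":\"alice\",\"score\":3}" :: Maybe Player)
      print (decode "{\"name\":\"alice\"}" :: Maybe Player)  -- Nothing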


Also, if you use the F# flavour of OCaml, you can use units of measure, which would catch even more errors at compile time (if those matrices had something to do with physics, for example).


At 1:44:00 https://www.youtube.com/watch?feature=player_detailpage&v=o2... it sounds like he is actively looking for an opportunity to use Scheme as an embedded language in a game. Does HN know of any examples of such use?


Naughty Dog is the canonical example. They were founded by two ex-MIT AI Lab guys. They designed a couple of in-house languages for Crash Bandicoot and Jak & Daxter.

In the early days of the PS2, they supposedly derived a benefit from having a higher-level language that could be compiled for any of the PS2's various processors (EE, VU0, VU1). I think that at the time, you couldn't do VU programming in C.

In the end, they had to revert to industry-standard C/C++ due to hiring issues.


While their games used to be even more Scheme-based than they are now, AFAIK they do still use PLT Scheme (at least as far up as the Uncharted games; I haven't read too much about the tech under The Last of Us) for the sort of traditional scripting one would use UnrealScript or QuakeC for.


PLT Scheme, or Racket as it is now known, is in use in the Last of Us: http://lists.racket-lang.org/users/archive/2013-June/058325....

A bit of insight into the use of Racket here (scroll down): http://comments.gmane.org/gmane.comp.lang.racket.devel/6915


Actually, only one was from the MIT AI Lab. The other was an economics major at the University of Michigan. But same result nevertheless.


Abuse [1] uses Lisp for game logic. You'd have to dive into the source code to find out more; a quick online search didn't turn up much.

[1] http://en.wikipedia.org/wiki/Abuse_%28video_game%29


Some guy on /g/ is developing a Scheme gamedev library: https://github.com/davexunit/guile-2d


It blows my mind that this is his first foray into functional languages. It makes me wonder what the current state of the gaming and programming industries would be like had he built Wolf 3D in a functional language and inspired everyone else from there.


Did you listen to the end? He wonders aloud what might have been if QuakeC had been QuakeScheme.

It's not his first foray into functional languages though. It's his first attempt to write production scale code in a pure functional language.


While listening to the structuring of the Haskell version, with a "world" and an "actor" for each element, and the interaction as message-passing, I would think Carmack would find Erlang very satisfying as another alternative :D
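
One plausible shape of that structure, sketched in Haskell with invented types (this is not Carmack's actual code): each actor consumes its inbox and returns an updated self plus outgoing messages, and a world step just maps that over every actor.

    -- Invented types, for illustration only.
    data Msg   = Damage Int | Heal Int
    data Actor = Actor { actorId :: Int, health :: Int }

    -- An actor reacts to its inbox and produces a new self plus
    -- (addressee, message) pairs to be delivered on the next tick.
    stepActor :: Actor -> [Msg] -> (Actor, [(Int, Msg)])
    stepActor a inbox = (a { health = foldl apply (health a) inbox }, [])
      where
        apply h (Damage d) = h - d
        apply h (Heal d)   = h + d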


Is there a transcript?


Not that I know of, besides the one auto-generated by YouTube.



