Chez Scheme is now free (github.com/cisco)
453 points by jordigh on April 26, 2016 | 183 comments



Dybvig's compiler course was exemplary. Say what you will about Scheme, you learned so much in those classes. His Scheme Programming Language book is highly recommended. Especially check out his extended examples chapter: http://www.scheme.com/tspl4/examples.html#./examples:h0


Thanks for this! I worked on implementing a type-inference algorithm last week[0], and I wish I had stumbled upon the chapter on unification[1] earlier.

[0] - https://github.com/prakhar1989/type-inference

[1] - http://www.scheme.com/tspl4/examples.html#./examples:h10


I thought Scheme was like the Beatles -- people universally only had good things to say about it.


I might not be getting a reference here, but FWIW, I had a recent experience with Scheme that was interesting.

I did SICP nearly 19 years ago as a freshman, in 1997. And then a few months ago, I ported the metacircular evaluator -- the "crown" of the course -- to femtolisp (the Lisp implementation underlying Julia).

My thoughts were:

1) It sure is awkward to represent struct fields 1, 2, 3 as (car struct), (cadr struct), (caddr struct), ... Yes, this is nice for showing that car and cdr are all you need as axiomatic primitives, but for practical purposes it's annoying. You end up with lots of little functions with long names.

2) Scheme code is very imperative! Even the metacircular evaluator uses set-cdr! and so forth. I don't like imperative code with the Lisp syntax.

3) It is awkward to represent environments with assoc lists. I feel that a language which is really bootstrapped requires some kind of hash table/dictionary, because you need that to implement scopes with O(1) access rather than O(n). I believe there are experimental Lisps that try to fix this.

4) Macros also seem to have a needlessly different syntax from regular functions. There are Lisps with fexprs rather than macros that try to fix this: https://en.wikipedia.org/wiki/Fexpr

I was surprised by #3 and #4 -- in some sense Scheme is less "meta" and foundational than it could be. #2 is also a fundamental issue... at least if you want to call it the "foundation" of computing and build Lisp machines; I think this is evidence that the idea is fundamentally flawed. #1 just makes it pale next to languages like Python or even JavaScript.


>1) It sure is awkward to represent struct fields 1, 2, 3 as (car struct), (cadr struct), (caddr struct)

Every Scheme implementation I know of supports record types, a.k.a. SRFI-9[0]. No one actually makes new data types from cons cells.
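For instance, defining a record type with SRFI-9 looks like this (a minimal sketch; the type, constructor, and accessor names are all up to you):

  (define-record-type point
    (make-point x y)
    point?
    (x point-x set-point-x!)
    (y point-y set-point-y!))

  (define p (make-point 3 4))
  (point-x p)   ; => 3 -- no car/cadr/caddr chains needed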

>2) Scheme code is very imperative!

Scheme supports many programming paradigms. Imperative programming is one. Functional programming, object-oriented programming, and relational programming are others. Not all Scheme code is imperative.

>3) It is awkward to represent environments with assoc lists.

Every Scheme implementation I know of has traditional mutable hash tables. Sometimes you want a hash table, sometimes you want an alist. It depends.
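For example, with the R6RS hashtable API (which Chez, among others, provides; a minimal sketch):

  (import (rnrs))   ; the composite R6RS library includes (rnrs hashtables)

  (define ht (make-hashtable string-hash string=?))
  (hashtable-set! ht "answer" 42)
  (hashtable-ref ht "answer" #f)    ; => 42
  (hashtable-ref ht "missing" #f)   ; => #f, the supplied default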

>4) Macros also seem to have a needlessly different syntax from regular functions.

I don't really understand this point. Are you talking about syntax-rules? If so, then I must disagree. syntax-rules is a very elegant language for defining hygienic macros.
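For anyone who hasn't seen it, here is a small syntax-rules macro, with hygiene doing the interesting work (the standard textbook example, not something from this thread):

  (define-syntax swap!
    (syntax-rules ()
      ((_ a b)
       (let ((tmp a))   ; hygienic: this tmp cannot capture a caller's tmp
         (set! a b)
         (set! b tmp)))))

  (define tmp 1)
  (define x 2)
  (swap! tmp x)   ; still correct; the macro's tmp is renamed automatically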

SICP does not teach you everything that Scheme has to offer, and the Scheme implementations of the 1980s are a lot different from the Scheme implementations of 2016.

[0] http://srfi.schemers.org/srfi-9/srfi-9.html


OK, point taken about #1. It is valuable to have the basic axioms and then separate syntactic sugar.

But #2 and #3 are what I would call bootstrapping problems... in other words, there is a reason that C is the foundation of computing rather than Lisp. I don't think anybody really thinks otherwise anymore. But for example, set-cdr! is not in the lambda calculus, and you need it even for basic things.

Likewise, Scheme implementations have mutable hash tables, but they're written in C and not Scheme. I don't know how you even write a hash table based on cons cells rather than O(1) indexing.

Regarding #4, here is a good link. The basic idea is that macros could just be functions on lists, and then you get composition of macros like you have composition of functions. Paul Graham incorporated this into Arc.

http://matt.might.net/articles/metacircular-evaluation-and-f...
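To make the contrast concrete: in the older define-macro style (femtolisp and several other Lisps provide this form), a macro is literally a function from list structure to list structure, which is the property fexprs push all the way to runtime. A minimal sketch:

  ; The macro body is ordinary code that receives its arguments as
  ; unevaluated list structure and returns a new form to compile:
  (define-macro (unless test . body)
    `(if ,test #f (begin ,@body)))

  (unless (> 1 2) (display "runs") (newline))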

My point is that you could say Scheme sits at a somewhat awkward place between "not foundational enough" (not fully bootstrapped) and "awkward in practice" (compared to, say, Python). Though I wouldn't go quite that far... Obviously it was groundbreaking work that influenced Python and R and tons of stuff we use today. It's outstanding research, but it feels like it has been almost fully absorbed into the computing culture by now.


>there is a reason that C is the foundation of computing rather than Lisp. I don't think anybody really thinks otherwise anymore.

C is not the foundation of computing. Why would you say this?

>Likewise, Scheme implementations have mutable hash tables, but they're written in C and not Scheme.

A native code compiler written in Scheme would have its hash table implementation also written in Scheme.

>I don't know how you even write a hash table based on cons cells rather than O(1) indexing.

You wouldn't do that! Cons cells are not the only primitive data type! Another primitive type in Scheme is the vector, which is a mutable array.
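To make that concrete, here is a minimal sketch of a hash table in nothing but R5RS-flavored Scheme, using a vector of association-list buckets (string keys assumed; all names made up):

  (define (hash-string s)   ; simple polynomial string hash
    (let loop ((i 0) (h 0))
      (if (= i (string-length s))
          h
          (loop (+ i 1)
                (modulo (+ (* 31 h) (char->integer (string-ref s i)))
                        1000003)))))

  (define (make-table size) (make-vector size '()))

  (define (table-set! tbl key val)
    (let* ((i (modulo (hash-string key) (vector-length tbl)))
           (hit (assoc key (vector-ref tbl i))))
      (if hit
          (set-cdr! hit val)
          (vector-set! tbl i (cons (cons key val) (vector-ref tbl i))))))

  (define (table-ref tbl key default)
    (let* ((i (modulo (hash-string key) (vector-length tbl)))
           (hit (assoc key (vector-ref tbl i))))
      (if hit (cdr hit) default)))

The O(1) average-case access comes from the vector indexing; cons cells only appear inside the (short) buckets.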

I'm sorry, but you greatly misunderstand Lisp and how compilers work.


>C is not the foundation of computing. Why would you say this?

Because all the popular OSes, drivers, userlands, servers, GUI libraries, and compilers/languages are 99% written in C (or C++, which is close enough).


That's different. It means C is the most popular language in systems programming. It makes so much sense, then, when I explain that C's prevalence is due to social and economic reasons, given that you've argued it won a popularity contest. Hillary and Trump are also winning popularity contests right now, if you want to argue that logical correctness and utility are connected to popularity. ;)


>That's different. It means C is the most popular language in systems programming.

And the basis upon which computing sits.

Take away all C/C++ code and we have nothing or almost nothing.

Take away all Lisp/Scheme code and people will barely notice.

>It makes so much sense, then, when I explain that C's prevalence is due to social and economic reasons, given that you've argued it won a popularity contest.

If by economic you mean "pragmatic" and "engineering considerations", then yes.


"Take away all C/C++ code and we have nothing or almost nothing."

In that meaning, it's true but only as an accident of history that has little to nothing to do with C's design itself.

"If by economic you mean "pragmatic" and "engineering considerations", then yes."

BCPL was whatever compiled on a machine from the 1960's. C was what compiled on a machine from the 1970's. ALGOL was engineered. C was what compiled and ran fast on old hardware. That's it.

http://pastebin.com/UAQaWuWG

The rest was social factors. Even when alternative languages did better, most people didn't adopt them. Most buyers also paid for performance per dollar, totally ignoring reliability, security, maintenance, and so on. Unless you argue these don't matter, the dominance of C and UNIX is once again due to something other than their technical merits. Plus, their problems stayed in... intentionally... once better hardware came online, while other players fixed theirs in various ways.


>In that meaning, it's true but only as an accident of history that has little to nothing to do with C's design itself.

I don't believe that for a second. C had very specific performance and memory characteristics that alternatives didn't have.

>C was what compiled and ran fast on old hardware. That's it.

That's a HUGE pragmatic benefit, not a "historical accident".


If C had been made a decade later, we'd have been using Pascal or BCPL or something. Other stuff could do the job. Example: Hansen later put a Pascal variant, Edison, that was simpler, safer, and faster to compile, on the same machine. Pascal itself was ported to 70+ architectures, from mainframes to 8-bitters.

Nah, we didn't need C. Thompson just really liked BCPL. It was crap. So they tweaked it into C. It still couldn't write UNIX. Ritchie added structs and that version finally did the job. It's all in the papers I cited. Their own writings and the predecessor papers (e.g. BCPL's) spell out why each decision was made.


>Nah, we didn't need C. Thompson just really liked BCPL. It was crap. So they tweaked it into C. It still couldn't write UNIX. Ritchie added structs and that version finally did the job.

Well, Pascal was also inspired by languages that were crap compared to modern (70s/80s) needs, and early Pascals also had tons of missing features -- so I'm not sure what this "C wasn't good enough from the start" is supposed to mean, especially since C already had structs and all by the time it caught on.


You should read the history. Compare it to how ALGOL 68 was made and what it offered. One group looked at all the needs and common situations programmers ran into, then engineered a solution to them in the form of a language. It balanced maintenance, efficiency, and safety.

Another group tried to implement a version of it, CPL, on horrific hardware, in batches, on punch cards. Not the best way to do state-of-the-art language compilers. Upon failing, they applied this method to CPL: chop off a feature, try to finish the compiler, repeat. The result, BCPL, was whatever set of features was easiest to implement on a 1960's EDSAC. Thompson preferred it over the alternatives and tweaked it to his preferences, including changing assignment from := to =. He admits he just liked that better. That's the amount of science that went into his modifications.

Eventually, with Ritchie tweaking it a bit, they got their toy OS to run on a toy machine. Many design decisions they made were due to that machine's own design and limitations. Then, after UNIX spread everywhere and many apps were made, they just kept all that, because fixing it would break something. That's the opposite of engineering a good systems language. It was an acceptable, but not ideal, hack to make their crappy computers work. After they got better ones, they should've started migrating toward something better incrementally. They didn't, and many defended the language as if it had been designed well upfront instead of being whatever compiled on an EDSAC and a PDP. Facts don't lie.

Plenty of better stuff and techniques existed. For example, the MULTICS project that gave them OS and BCPL experience had its critical stuff in PL/I. In MULTICS, a microkernel, prefixed strings, and a reverse-flowing stack would've prevented tons of the data loss and hacks that happened in UNIX. Why didn't they use those? The hardware expected a bad stack, and the other two techniques were too slow on it. Once hardware sped up, they kept the bad techniques so as not to rewrite stuff. I mean, this shows up over and over in UNIX/C. Its history really defines it.

Now, stop and go look at Modula-3 on Wikipedia. A few of us think it was one of the best compromises between a safe C alternative like Modula-2 and a heavyweight ALGOL or C++ alternative. It was the product of professionals engineering an industrial language, as was done for ALGOL, based on Wirth's prior work. Simple, Wirth-style syntax that compiles as fast as Go's. It has safety by default with an off button where necessary, basic OOP, GC by default with an off button for specific variables, built-in concurrency, a (partially) mathematically verified stdlib, and efficient generated code, and it was used for an OS (SPIN) with type-safe linking of third-party code into the kernel.

So, we know it could've been done better if it had been engineered, or had addressed more programming needs than "runs fast on 1960's hardware." Unfortunately, that's basically all BCPL and C did, while ignoring good techniques both then and later. Fortunately, we can learn from their mistakes for use in new languages or projects. :)


> And the basis upon which computing sits.

How does being the basis follow from being popular?

> Take away all C/C++ code and we have nothing or almost nothing.

We had Lisp machines in the 70s, Oberon system in the 90s, Forth systems basically throughout history... take away C and C++ and something else would've become popular. Probably Pascal, some random low-level Lisp dialect, or Forth, given that those were all reasonably popular in a similar timeframe as C. For something that is a ‘basis’, C had an awful lot of competition.

> Take away all Lisp/Scheme code and people will barely notice.

Well, aside from every Emacs and AutoCAD user in the world.

Also HN wouldn't exist, so there's that.


>How does being the basis follow from being popular?

The basis is by definition popular.

If it's not popular (at least where it matters) it's not the basis. Basis is the fundamental thing on top of which something (the IT world as we know it) stands.

One could argue that algorithms are more basic, but we're talking about programming languages here, and at that level, C/C++ has been, and remains, king for anything crucial. Even Java, the CLR, and V8 are written in C/C++ (to name but a few environments standing on this "base").

>We had Lisp machines in the 70s, Oberon system in the 90s, Forth systems basically throughout history... take away C and C++ and something else would've become popular. Probably Pascal, some random low-level Lisp dialect, or Forth, given that those were all reasonably popular in a similar timeframe as C. For something that is a ‘basis’, C had an awful lot of competition.

Not sure how this argument is supposed to work.

To be the basis of something doesn't mean you don't have competition. Just that you prevailed over it.

>Well, aside from every Emacs and AutoCAD user in the world. Also HN wouldn't exist, so there's that.

Still, people would barely notice. If you think Emacs and AutoCAD would make a huge difference to the world if they disappeared (compared to, say, Windows, Linux, Android, or, if we're to talk about sites and apps, Google, Facebook, Photoshop, Word, etc.), then you've been in an echo chamber for too long.

(Not to mention that most Emacs users use it, if not for C/C++, then for languages whose compilers are written in C/C++, on OSes written in C/C++, and that Emacs itself is written in C -- the base elisp stands on is C.)


> If by economic you mean "pragmatic" and "engineering considerations", then yes.

C didn't win because of "engineering" or "pragmatic" reasons; it won because it ran faster on cheap hardware, which was a big selling point. It wasn't a pragmatic choice, but a stupid and short-sighted one -- but those tend to win. Computing in the last 30 years was done in spite of, not because of, C.


On hardware more powerful than the Burroughs machines from 1961, which were happily running a safe systems programming language based on Algol.

Given that I remember the days when junior Assembly developers could easily outperform C code, I don't agree with that point.

I bet if it wasn't for the rise of free UNIX clones, C would already be sharing drinks with Pascal at some retirement home.


Did I ever tell you about the OS that was written in FORTRAN? Here it is in case I forgot:

https://en.wikipedia.org/wiki/PRIMOS

That's a CPU and OS for Fortran. I found a web framework for Fortran, too. Today, we could do Hacker News in FORTRAN from the metal up. We'll leave that monstrosity to our imaginations, though. Not even that. ;)


>C didn't win because of "engineering" or "pragmatic" reasons; it won because it ran faster on cheap hardware

Isn't that the very definition of an engineering/pragmatic reason?


> And the basis upon which computing sits.

No, C is the basis of a lot of programs. It is not, nor could it be, the basis of a sane system of computation.


That's a "moral"-style judgement.

From a pragmatic perspective computing is just "a lot of programs".

It's not what "should be" -- it's what it is.


No, computation is a mathematical discipline; the lambda calculus is one way of thinking about computation (I won't say it's the best, but it's a way) which is part of that discipline; C simply … isn't.

Common Lisp, of course, is a hell of a lot more than just the lambda calculus, but it's also a hell of a lot better a language than is C.


A historical accident, driven by the fact that AT&T initially gave UNIX code away for free to universities, and some of those students went on to win the workstation market based on that free code, e.g. Sun.

If UNIX had been commercially licensed, like every other OS back then, the C foundation would never have happened.


That's ignoring all the computing work done before C. C and UNIX were huge steps back in computing, that we're only slowly beginning to recover from.


There's a quote from a famous woman in the history of computing, I don't recall which one, about C setting the progress of compiler optimizations back to the dawn of computing.


Perhaps it was Fran Allen. In Coders at Work, she has quite a few things to say on the topic:

--- Begin Quote ---

-Seibel-: When do you think was the last time that you programmed?

-Allen-: Oh, it was quite a while ago. I kind of stopped when C came out. That was a big blow. We were making so much good progress on optimizations and transformations. We were getting rid of just one nice problem after another. When C came out, at one of the SIGPLAN compiler conferences, there was a debate between Steve Johnson from Bell Labs, who was supporting C, and one of our people, Bill Harrison, who was working on a project that I had at that time supporting automatic optimization.

The nubbin of the debate was Steve's defense of not having to build optimizers anymore because the programmer would take care of it. That it was really a programmer's issue. The motivation for the design of C was three problems they couldn't solve in the high-level languages: One of them was interrupt handling. Another was scheduling resources, taking over the machine and scheduling a process that was in the queue. And a third one was allocating memory. And you couldn't do that from a high-level language. So that was the excuse for C.

-Seibel-: Do you think C is a reasonable language if they had restricted its use to operating-system kernels?

-Allen-: Oh, yeah. That would have been fine. And, in fact, you need to have something like that, something where experts can really fine-tune without big bottlenecks because those are key problems to solve.

By 1960, we had a long list of amazing languages: Lisp, APL, Fortran, COBOL, Algol 60. These are higher-level than C. We have seriously regressed, since C developed. C has destroyed our ability to advance the state of the art in automatic optimization, automatic parallelization, automatic mapping of a high-level language to the machine. This is one of the reasons compilers are... basically not taught much anymore in colleges and universities.

--- End Quote ---

(taken from pp. 501-502)


Yep that one.


Great quote; I had never read that one. The irony of language wars: the monolith that is C vs. the many higher-level languages that allow a human to code according to his mental abstractions and cognitive ability, rather than memorizing machine-specific or OS-specific facts that don't translate over to newer machine architectures. All this on HN, running on a Lisp (Arc) that once sat upon an academic Scheme now called Racket.

I agree C is necessary for low-level programming, but for the meat of all the other applications and usages, higher-level languages are needed. I don't mind programming certain things in C, but I thoroughly enjoy the mental exercise of programming in the J programming language, Lisp, or Forth. Yes, Forth. C programmers can have the Earth; I would like to be coding with the fellas who design mission-critical software for satellites like Rosetta, and with groups like NASA and the ESA -- using Forth, in Rosetta's case [1].

Slim Whitman sold more records than the Beatles -- or so I think I heard on a late-night TV commercial back in the 80s -- but I never owned a record of his ;)

  [1]  http://adsabs.harvard.edu/full/2003ESASP.532E..72B


This is a bit off topic, but would you mind sharing what resources you used to learn Forth? I'm interested in the language myself, but I don't really get how to use it. I've learned the basics of defining words and manipulating the stack, but whenever I try to apply it to an actual problem, I have a very hard time even understanding how to approach it, whereas if I were using a procedural or OO language, I would know where to start.

Any advice on getting over this hump?


There are the main books everyone refers to, like 'Thinking Forth' [1], and others, but I really learned faster by picking up Factor [2] and Retro [3]. The community around Factor is very helpful and smart. I actually wrote my first real Forth program for work in Factor. I had written a lot of one-liners, and some curiosities, but this was a business need met in one night: basically a program to munge a whole lot of tab-delimited text files, do some math on fields, and then generate a report. It was all < 100 LOC, and it was easy to test interactively in the interpreter whether I was missing anything.

Retro is cool, since it is so minimal, and there are add-ons for Chrome so you can run it in your browser locally to try it; otherwise, you download the image file for your platform, and Retro itself. Retro was built on a VM called Ngaro, which is an interesting diversion, but not Forth.

There are also microcontroller ports of Forth [4], which I found a lot easier than using the assembler for a particular microcontroller. FlashForth has been recently updated, and is featured in a 'Circuit Cellar' magazine article. I have also purchased iForth [5], since I had stopped dual-booting Linux at the time, and it offered a lot for Windows and for (fast) mathematics.

I hope these help you get started. If you are a maker, using an Arduino or PIC-like chip with FlashForth is great. The whole dictionary, or word list (all the defined functions, called 'words' in Forth, are listed in the reference I provided), is concise enough to know your whole system in a short time, and in Forth you can easily add to the dictionary when you need to. Good luck!

  [1]  http://thinking-forth.sourceforge.net/ 
  [2]  https://factorcode.org/
  [3]  http://forthworks.com/retro/
  [4]  http://flashforth.com/tutorials.html
  [5]  http://home.iae.nl/users/mhx/


Thanks for the reply. Those look like some pretty cool resources, and I can't wait to check them out.


> I agree C is necessary for low-level programming, but for the meat of all the other applications and usages, higher-level languages are needed.

I actually disagree with this for the most part.

C's main advantage as a systems language is its ubiquity -- virtually every platform released in at least the last 25 years has had a reliable C compiler available. Before JavaScript hit it big with Web 2.0, C was a lingua franca among programmers. Starting in the late 80s and continuing throughout much of the 90s, pretty much every textbook that had used Pascal or Pidgin Algol for code listings was edited to contain code listings in C instead.

Of course, as Java gained momentum in the education space in the late 90s and early 00s, many were again updated for code listings in Java. Although a great deal of computer science curricula now crowd around Java, most of them also have at least one required course either on C or that makes good use of C (my own track at NC State contained, beyond the introductory Java courses, "C & Software Tools", "Operating Systems", and "Computer Graphics", all in C).

That all being said, there were historically quite a few systems languages that were better than C in almost every way[1]: they were (typically both memory- and type-) safe, they were higher level in the sense that they offered more and better tools for abstraction (and were therefore considerably more expressive), they offered more opportunities for automatic optimization, they were easier to port to new platforms, many of them were easier to read and had far fewer nuances/"gotchas"/"footguns" than C, the languages themselves were designed in such a way that better tooling was possible, the list goes on and on. The one thing that C had on them was that it was already the default. Programmers could (and can) count on the target platform having a C compiler. C programs have access to the wide assortment of C libraries. C was good enough. In a sense, C's popularity was and is perpetuated by the fact that it's the path of least resistance. Once C managed to get into that position, of course it became the "king of systems languages".

I think C's days are numbered. There was a time when systems programming was how you went about learning to program professionally on microcomputers. (Unstructured) BASIC was popular among hobbyists but right out for professional programs due to abysmal performance. So, you learned assembly/machine language or, if you were lucky, Turbo Pascal, a similar-in-spirit C system, or QuickBASIC (a compiled, structured language that I'd actually classify as a systems language, given that its feature set is mostly isomorphic to C's). Nothing else offered the level of performance you needed for commercial applications. This continued from the mid 80s through to the early 00s, yielding several generations of programmers who were systems programmers by training.

But things are different now: most programmers nowadays are cutting their teeth on high-level languages like JavaScript, Ruby, Python, Lua, etc., and if they become systems programmers it's due to their own desires and interests rather than out of any real necessity. These programmers know better, and are in an excellent position to notice the many shortcomings of C, and some even outright reject it due to its gnarliness in comparison with the high-level languages they're accustomed to. They know, deep down in their bones, that systems programming doesn't have to be so unsafe, so intricate, so field-of-landmine-ish, so... shitty. And some of them intend to do something about it.

I think we're already seeing the results of this: languages like Rust, Nim, and Myrddin strike me as products of these happenings. And now, I think, is a good time for it, because C's ubiquity has never been less relevant since its rise to power: there's only a handful of platforms anyone needs to support nowadays (POSIX and Windows, desktop and mobile; you can count OS X/iOS as a separate platform if you're feeling squirrely, but even then you haven't reached an unattainable set of targets, and it seems as if soon you may be able to strike Windows from the list as well); if you offer ABI compatibility with C, you get libraries for free; parsing is practically a solved problem; .... And then there's the LLVM, which can handle roughly a third of the compilation process for you -- and it's the backend: the hardest part!

I imagine that within another 20 to 30 years, C will be hiding within its final strongholds of legacy code and embedded programs. Outside the embedded space, new projects in C will be exceedingly rare, and another decade or two after that will see even the embedded programmers breathing a collective sigh of relief as they become gradually liberated from the tyranny of C.

Only time will tell if the newcomers can usurp C's throne, but I have my fingers crossed. When C gasps its last breath, I will say "good riddance", and happily return to my safe, expressive, pleasant systems language, whatever that may happen to be at that time.

[1]: Just to name a few: Algol 60, Modula-2, Oberon, Ada, Modula-3 (one of my personal favorites), ATS, even several non-standard dialects of Pascal such as the ones from UCSD, Apple, and Borland. Then, of course, there are the newcomers: Clay, Rust, Go, D, (does Nim count?), Myrddin (definitely one to keep an eye on), even some dialect(s) of C# were used at Microsoft Research for systems programming. I'd also be remiss not to mention PreScheme (another one of my favorites), used by Jonathan Rees and Richard Kelsey to implement the Scheme48 virtual machine, and its nephew RPython, used for implementing the PyPy JIT framework. I'm sure there are quite a few I'm leaving out, but these are the ones that come to mind at the moment (also, do keep in mind that I'm restricting this to systems languages that are strictly better than C, so some otherwise neat ones like BCPL and BLISS have been intentionally omitted).


Great summary. I especially like your point about the programmers migrating to systems programming having a different mindset than an older person like me, who started with assembly. Thanks for that.

Not sure if by 'disagree with this' you mean the whole piece, or just the quoted sentence. Addressing the obvious quoted sentence:

Don't get me wrong, I like C. I am currently writing a Lisp in C, playing with the ImGui and Nuklear immediate-mode GUI kits. That is why I used the word 'needed' vs. something like 'are majorly used'. C pulled me from my Vic-20 6502 assembly language days (I lie, I had moved to Basic before C!).

When I program in a higher-level language (Lisp/Scheme, Python, Julia), I find I am dealing with what I actually want to get done, and in the end my programs are not AAA games or large deep-learning neural networks, so they are plenty fast for my needs. In C, I start out clearly with an objective, but usually get mired in some C-specific, non-goal-related task, whether I need to refresh my knowledge of pointers or handle platform-specific idiosyncrasies.

I certainly would choose C over Java any day, or move to another higher-level language. I think Java's VM is a fantastic piece of work, and Java syntax is very C-like, but I'd rather not use it. I use a lot of programs written in it though! I prefer Lisp/Scheme, and compiled SBCL is fast. And now with the opensourcing of Chez Scheme, I will be busy this next week. The Spring 2016 Lisp Game Jam is starting in about 24 hours [1].

  [1]  https://itch.io/jam/spring-2016-lisp-game-jam


I had spent 50 minutes or so writing a reply to this Thursday morning, and was nearly finished when I must’ve accidentally hit F5 or something and lost it all. It went more or less like this:

> Great summary.

Thanks!

> Not sure if by ’disagree with this’ you mean the whole piece, or just the quoted sentence.

Just the quoted sentence, though it does make two assertions: 1) that C is necessary for low-level programming; and 2) that higher-level languages are necessary for all other applications. My previous comment focused primarily on the first one, but I disagree with both. One only need look at the sheer number of applications written in C [1] to see that C is suitable for applications outside of the “low-level” realm.

> I especially like your point about the programmers migrating to systems programming having a different mindset than an older person like me, who started with assembly.

We’re in a similar boat, I think. I cut my teeth on QuickBASIC, quickly picking up some 8086 assembly to get some fancy-schmancy VGA graphics and some SoundBlaster goodness. After around two years, I got my hands on Turbo C 2.0 and never looked back. Although I now prefer higher-level languages, I’m a systems programmer at heart, and that affects how I approach the art and the act of programming. I have a hunch that many programmers “suffer” from the same affliction: that however they learned the ropes still affects their approach to programming in some way; and that’s the thought that underlies my suspicion that we’re about to see a “systems programming renaissance” of sorts.

> Don’t get me wrong, I like C.

Don’t get me wrong, either. I also like C. But it’s also just not a good language. Even at the systems level, there are so many better alternatives. I don’t know if it’s familiarity (most of the lines of code I write are in C), nostalgia (C was one of the first languages I learned), laziness (C’s ubiquity makes it pretty convenient), tradition (these days I’m a UNIX™ guy through-and-through), some combination, or something else entirely. I’ve also noticed that many C programmers have an attitude of “if you don’t know all the ins-and-outs, nuances, footguns, hazards, dark corners, unspecified and implementation-dependent behaviors, and compiler optimizations, then you’re stupid and a bad programmer and you should feel bad” — so maybe it’s narcissistic elitism…

> I am currently writing a Lisp in C, …

Care to elaborate? Is it a naïve interpreter, bytecode compiler & VM, native code compiler, …? Is it an existing dialect or one of your own design? Is there anything about it that you feel is distinctive? Anything you feel particularly proud of?

I’m working on a Lisp of my own, too. It’s based primarily on EuLisp, but with some influences from Scheme, Common Lisp, and some older Lisps like Le-Lisp. I’m still in the design process, though, and am not ready to talk about it at length yet.

> I certainly would choose C over Java any day, or move to another higher-level language. I think Java's VM is a fantastic piece of work, and Java syntax is very C-like, but I'd rather not use it.

I’d originally written a rather lengthy response to this, going over various shortcomings of Java and various trade-offs involved in making that decision, and cases where Java might be a good choice, but honestly I don’t feel like reproducing it now. I don’t care enough for Java to defend it, really.

> I use a lot of programs written in it though!

Of course! It’d be silly to say “X does exactly what I want/need, but it’s written in language Y, and I don’t like Y, so I refuse to use X”, though sadly I do know some people who say such things. It baffles me. What an idiotic attitude.

> I prefer Lisp/Scheme, and compiled SBCL is fast.

As do I. I’m a smug Lisp weenie and proud! In fact, up until recently, I would’ve told you that Scheme is my favorite programming language, followed by ML. Then, being disenchanted with the current state of Scheme — the divisive and controversial standards, the inability of the community to unite behind the language — and my “discovery” of EuLisp have led me to say that my favorite programming language is my own breed of Lisp. I took a couple days of vacation from work, and I’d planned to spend them working on my Lisp, but…

> And now with the opensourcing of Chez Scheme, I will be busy this next week.

Me too. I completely changed my vacation plans when I saw this announcement. I’ve spent some time, and will continue to spend time, playing with Chez and studying the source :)

> The Spring 2016 Lisp Game Jam is starting in about 24 hours….

I hadn’t heard about this. Thanks!

P.S. Hit me up sometime if you want to talk about Lisp or programming languages in general. I’m definitely a PL nerd, and I like to talk about this stuff (including implementation strategies and history). My e-mail address is in my profile.

[1] https://github.com/search?q=language%3Ac

Besides the obvious low-level stuff, like the Linux kernel, there are quite a few applications that don’t require the low-level offerings of C: git, redis, the-silver-searcher, awesomeWM, etc.


>That's ignoring all the computing work done before C.

And we can really ignore it, software-wise. We only use its theoretical heritage now.


C is the foundation of computing in the sense that essentially every programming language and operating system is written in C or C++, and those are the tools that enable every other piece of software. More precisely, I would say that C is the foundation of software; it's how we stopped throwing out our programs when we changed computers. The first portable operating system kernels were written in C.

I guess I could have been more precise and said that the lambda calculus (rather than Scheme/Lisp) is not the foundation of computing. It seems like there are people who still think this; see my recent response here:

https://news.ycombinator.com/item?id=11412392

You could say Lisp and Scheme are proof of that. To actually be bootstrapped, they had to add all this other stuff like vectors and hash tables. I don't know the details of how well those are axiomatized. Paul Graham's Arc tried to go a little further down, i.e. unifying functions and macros, defining numbers in terms of lists a la Peano arithmetic, etc., but I'm not sure how far that effort went.
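(For the curious, "numbers in terms of lists a la Peano arithmetic" means a toy encoding along these lines -- illustrative only, not Arc's actual code:)

  ; A number is represented by a list whose length is its value:
  (define zero '())
  (define (succ n) (cons 's n))
  (define (add a b)
    (if (null? a) b (succ (add (cdr a) b))))

  (add (succ (succ zero)) (succ zero))   ; => (s s s), i.e. 3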

I mentioned all my experience with Lisp... doing SICP 19 years ago, and then coming back to it. As I said, I think it's outstanding research, but if you are trying to build an entire computing universe out of it, that's folly. Good luck. It's just not powerful enough -- once you add all the stuff you actually need, you're not far from the complexity of C.


In the 1980s Scheme was not performant enough. C was how you got tolerably fast programs. A lot of research, notably in garbage collection, has made Scheme much more performant since then. Additionally, computer hardware has improved to the point where people write useful programs in languages that are dramatically slower than Scheme, e.g. PHP, Python, Ruby.

You are conflating minimalism with the Scheme language because Scheme is often used to illustrate minimalism. Vectors and hash tables are not "all this other stuff", they're part of the language spec[1]. You're also throwing Lisp in there even though minimalism is not a central theme of Lisp.

[1] When you did SICP hash tables were not part of the language spec although implementations generally had them; they got standardized in 2007 with R6RS. But vectors were in the language spec since at least 1985.


> To actually be bootstrapped, they had to add all this other stuff like vectors and hash tables.

Just like C had to add things like arrays to the Turing machine? C doesn't even have hash tables in the spec! According to your definitions, C is a toy language.


No, because it's possible to implement efficient hash tables with C's primitives -- in fact that's how hash tables in essentially ALL languages ARE implemented.

cons cells are not sufficient to implement hash tables. Scheme needs arrays for that. cons cells can be implemented efficiently using arrays, but the converse isn't true, so arrays are more fundamental in some sense.

If you don't care about algorithmic efficiency, then you could choose either cons cells or arrays as your primitive. But obviously we do care, so arrays were the right choice. IOW, C was the right choice, not Scheme.
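A sketch of that asymmetry: laying cons cells out in a flat vector "heap" is trivial (and is roughly what real implementations do, minus garbage collection), while there is no comparably efficient encoding of arrays in terms of pairs:

  (define heap (make-vector 1000))   ; flat storage; a "pointer" is an index
  (define next 0)

  (define (heap-cons a d)
    (let ((p next))
      (vector-set! heap p a)
      (vector-set! heap (+ p 1) d)
      (set! next (+ next 2))
      p))

  (define (heap-car p) (vector-ref heap p))
  (define (heap-cdr p) (vector-ref heap (+ p 1)))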


"No, because it's possible to implement efficient hash tables with C's primitives -- in fact that's how hash tables in essentially ALL languages ARE implemented."

Because each architecture has a C compiler that's been highly optimized. Popularity plus money invested. That's it. If you were right, we'd see optimizations coded in C even when alternative optimizing compilers were available. Interestingly enough, I got a one-word counter to that from the "high-performance computing" field: FORTRAN. The Free Pascal people are doing fine in performance and low-level code as well, despite little to no investment in them.

Seems throwing money at a turd (e.g. FORTRAN, C) can get a lot of people's hands on it, despite some of us shuddering and saying, "Get that pile of crap away from me!"


It is a toy language.

One step above a portable macro assembler, developed in a decade when research labs outside AT&T had already been using safe systems programming languages for about a decade.

Their big failure was that they were selling their work, instead of doing like AT&T, which initially gave UNIX away for free because it was forbidden to sell it.

Unfortunately free always wins, regardless of quality.


>cons cells are not sufficient to implement hash tables.

Again, cons cells are not the only primitive type for making compound data structures.


> in fact that's how hash tables in essentially ALL languages ARE implemented

Obviously a false statement.


> Just like C had to add things like arrays to the Turing machine?

Care to elaborate on that?


" there is a reason that C is the foundation of computing rather than Lisp"

That's a myth. Try using first-hand sources, like the papers from the people who designed BCPL, B, and C, to understand why it's that way. Answer: terrible hardware back then. That's it. Its popularity was a result of the prevalence of terrible hardware and of UNIX being written in C. I broke its history down in just a few pages with a timeline and references here:

http://pastebin.com/UAQaWuWG

Likewise, before the social effect, OSes were coded in a number of HLLs with capabilities UNIX lacked. Languages included ALGOL, PL/I, and Pascal. Later, OSes were done in Modula, Oberon, Ada, Fortran (yeah, lol), LISP, and so on as hardware improved beyond 1970's minicomputers. Here are some UNIX alternatives and the capabilities they developed... some of which you still don't have. :)

https://news.ycombinator.com/item?id=10957020

Far as LISP goes, it's been implemented in hardware multiple times. This included naive ones that worked like a simple evaluator, with garbage collection built into the memory-management unit. There was also one that had four specialized units for more sophisticated execution. One Scheme was designed and mathematically verified using the DDD toolkit, which was also LISP if I recall.

http://www.cs.indiana.edu/pub/techreports/TR544.pdf

Oh heck, I forgot they did it with VLISP. That was a Scheme48 interpreter and PreScheme compiler rigorously verified for correctness. PreScheme was a Scheme subset for systems programming. So the work actually took a verified Scheme, then mechanically derived verified hardware from that Scheme code using a LISP-based tool. I recall from other papers that they got it working on an FPGA and some PAL's.

Whereas I don't know of many small teams producing verifiable C code on verifiable C processors from verifiable C tools. Nah, I don't think your favorite language is anywhere near where you think it is. I don't even find LISP ideal here, by far. It just did more in functionality and on bare metal. Also, the first LISP machine was started around the time UNIX was released, interestingly enough.


> there is a reason that C is the foundation of computing rather than Lisp. I don't think anybody really thinks otherwise anymore

I must not be anybody, then, because I think that Lisp provides a wonderful notation for thinking about symbolic computation which will last for millennia while C is … a successful programming language of the late 20th century.

> I don't know how you even write a hash table based on cons cells rather than O(1) indexing.

A cons cell is just a pair of pointers. You can write a hash table using conses just as easily as you would using pointers (note, I'm not saying that'd be efficient, which is why Lisp offers arrays as well as conses).


C isn't the foundation of computing. It's a popular systems language largely because of, first, its connection with UNIX, and, second, the network effects of its early popularity.


C is far too high level to be a "foundation". NAND gate is a foundation.


> 1) It sure is awkward to represent struct fields 1, 2, 3 as (car struct), (cadr struct), (caddr struct), ...

The Right Way to do this is with a D-List, or Detached List, analogous to an A-List or a P-List. An A-List, if you recall, is a list of key-value conses. A P-List is a list of alternating keys and values. A D-List is a single cons of a list of keys onto a list of values, i.e.:

((key1 key2 ...) val1 val2 ...)

D-lists are superior to A-Lists and P-Lists because:

1. The key list structure can be re-used

2. D-ASSOC only requires one traversal down the key list, after which the index of the key can be cached (see the sketch after this list). This is usually the first step in writing a "fast" interpreter, but if you use A-Lists or P-Lists then you have to change data structures. If you use a D-List you already have the optimized structure in the CDR of the D-List pair.

3. Going from optimized interpreter to full compiler is a simple matter of replacing the linked list of values with a vector of values.
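Here is a minimal sketch of the lookup (hypothetical names, nothing standard):

  ; A frame is a single cons of a key list onto a value list,
  ; e.g. (cons '(x y) (list 1 2)).
  (define (d-assoc key frame)
    ; Return the tail of the value list whose car is key's value, or #f.
    ; Returning the tail rather than the value lets callers mutate the
    ; binding with set-car!.
    (let loop ((keys (car frame)) (vals (cdr frame)))
      (cond ((null? keys) #f)
            ((eq? key (car keys)) vals)
            (else (loop (cdr keys) (cdr vals))))))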

It's a shame that D-Lists are very rarely taught.


Interesting. I've never heard of D-list nomenclature before.

With the A-list method you typically need to write a "zipper" function that recursively conses together the heads of two lists to generate the A-list. That makes "apply" an expensive operation, with needless allocations.
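Something like this, presumably (names made up; note the fresh pair allocated per binding on every call, versus one cons total for the D-list version):

  (define (zip-env keys vals)    ; A-list construction: n conses per apply
    (if (null? keys)
        '()
        (cons (cons (car keys) (car vals))
              (zip-env (cdr keys) (cdr vals)))))

  (define (make-env keys vals)   ; D-list construction: one cons, keys shared
    (cons keys vals))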

What's great about D-lists is that the "make-env" method is just a single cons operation. Clearly superior. I'm surprised it's not better known.


> Clearly superior.

Thanks.

> I'm surprised it's not better known.

Yeah, me too :-(


Only global/dynamic environments should be hashed. If you're optimizing lexical environments from assoc to hashing, you're optimizing interpreted semantics instead of writing a compiler. The location of a lexical variable isn't a moving target; it sits at some statically fixed offset in some environment frame, and access to it is reduced to an indexing operation in the compiled code.

Hashing lexicals will not necessarily speed up an interpreter. It depends on what kind of code and how you do it. A lot of code has only a few lexicals at any binding level. If you construct a new hash table on each entry into a binding construct which has only a handful of variables, that could end up performing worse than the original assoc lists. You still have to cascade through multiple hash tables under that approach to resolve nesting. One hash table for an entire lexical scope leaves you with problems like how to resolve shadowing, and how to capture closures at different sub-nestings of that scope that have different lifetimes from containing scopes.
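To illustrate the "statically fixed offset" point (a hand-rolled sketch, not any particular compiler's output): with environments represented as a list of vector frames, a compiled variable reference is pure indexing:

  ; In (lambda (x) (lambda (y) (+ x y))), the inner reference to x
  ; compiles to depth 1 / slot 0 and y to depth 0 / slot 0; no name
  ; lookup remains at run time.
  (define (lexical-ref env depth slot)
    (let loop ((env env) (d depth))
      (if (zero? d)
          (vector-ref (car env) slot)
          (loop (cdr env) (- d 1)))))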


#1 - srfi 9 solves this mostly, though I agree about the long names. If your implementation doesn't bundle that for you, you should probably use a different implementation.

#2 - preach it.

#3 - The "big" schemes provide hash tables for you.

#4 - Well... fexprs used to be standard in lisp, but implementers didn't enjoy figuring out what a symbol meant at compile time.

I dunno, I really enjoy coding in Scheme, especially syntax-rules. I know syntax-case is the bee's knees, but syntax-rules is pretty easy for me to model quickly.


Regarding #2: I have a complete version of McCarthy's original evaluator at https://programmingpraxis.com/2011/11/01/rip-john-mccarthy/, and it doesn't use a single mutation.


Scheme's minimalism and elegance make it a great first language (in my opinion). Plus you can explore multiple programming paradigms: functional, imperative, object-oriented. There are legitimate reasons why Scheme isn't a very practical language, but it's a good first language imo.


>There are legitimate reasons why Scheme isn't a very practical language

The operating system I currently run uses a Scheme program as its init system and a Scheme program as its package manager. Scheme is a practical language.


What operating system is it? Is the Scheme written in C or assembly language?


See GuixSD: https://www.gnu.org/software/guix/

Edit: remove useless commentary


I also use GNU Guix and Shepherd on top of Ubuntu at work. Shepherd manages all of my user daemons (mostly Ruby web application servers), and Guix serves as a replacement for RVM (and other such tools). Is that practical enough?


There's a big difference between "practical" and being the foundation of software (and I guess people ARE still trying to make that happen; it's not a strawman).

Emacs Lisp is probably a better example of practical.


Or LispWorks or Franz Allegro (esp. AllegroCache), which have made real money for years by letting people code circles around others with better tooling. Scripting languages have closed the gap a lot, but not fully. I'd say at least two commercial tools are quite practical.


Sure, I agree that both are examples of real software written in Scheme.


To be fair to perception, that Scheme (Guile, I'd imagine) isn't exactly the subset of MIT-Scheme used in SICP.


SICP 2e sticks to IEEE Scheme.


Is the fundamental issue in #2 that you don't like the syntax? Or that Scheme has mutation?

It's interesting looking back on the history of Scheme. Probably because of the order of presentation in SICP, along with hearsay, people seem to get the impression that Scheme is all about functional programming (and if it's because functions are values you can pass around... well, even Algol and Pascal could do that). It's true the original paper was called "Scheme: an interpreter for the extended lambda calculus" [1], but the big idea was that if variables were bound lexically, and if the environment structures used to close over the variables were mutable, then you would have something like the actor model -- functions could have state and respond to messages. As they admit in the abstract, the purpose was to demonstrate the core interpreter for implementations of contemporary AI systems. The chapter on register machines in SICP is just an elaboration of their methods in this paper.

[1] http://repository.readscheme.org/ftp/papers/ai-lab-pubs/AIM-...
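(The canonical demonstration of that "mutable lexical closure as object" idea:)

  (define (make-counter)
    (let ((n 0))        ; private state, closed over lexically
      (lambda ()
        (set! n (+ n 1))
        n)))

  (define c (make-counter))
  (c)   ; => 1
  (c)   ; => 2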

Something about porting a metacircular evaluator seems odd to me. Not that it's a bad exercise -- it's a good one. But rather, that it's no longer a _metacircular_ evaluator. The progression of the book is 1) abstracting processes and names with procedures 2) abstracting data representation 3) abstracting interface as a module, along with the theory and practice of mutation 4) abstracting semantics with interpreters (programs which take programs as data). The proof that 4 is a thing is by implementing an interpreter for the language in the language and then extending the interpreter in various ways. Even procedure calls are implemented by calling a procedure in the host language. (All I'm saying is that you exercised the metalinguistic abstraction by writing a scheme interpreter in femtolisp. Metacircularity in itself is not much more than an interesting phenomenon to demonstrate.)

(P.S. The language in SICP is not the Scheme which was later standardized, say R5RS + SRFIs. Mutation of lexical closures didn't change. You can't really (cleanly) get away from that until you invent something like a monad to model state.)


It's both... I feel like Scheme is oversold as axiomatic mathematics. You have to add all this other stuff to it to make anything useful, in particular mutation and mutable data structures. Even toy programs like the metacircular evaluator in SICP need this!

And yes, I've been programming a lot in the intervening 19 years, and doing imperative programming with ((Lisp)) syntax is hugely annoying. OCaml actually annoyed me in this regard too. Maybe I will like Haskell, since it seems principled about mutation.

This is a real misunderstanding; see my response to this comment here: https://news.ycombinator.com/item?id=11412392

People think that there could have been some "Church basis" for computing. In other words, the whole Lisp machines thing was folly. It's rightly in the dustbin of computing history.

FWIW I did many experiments in bootstrapping languages, with Python/Lua, OCaml, femtolisp, C, ... I eventually ended up with a (tasteful) subset of C++, which somewhat amazed me, since I've never been one to like C++. This is a whole other story, but it had to do with the fact that OCaml "needs" code generation with ocamllex and ocamlyacc/menhir, and with my looking at how Julia is bootstrapped with Lisp (impressive, but not what I want), etc.


In pursuit of a mathematically derived Scheme, you might be interested in John Shutt's Kernel [0], based on his formal theory of F-exprs called the Vau Calculus [1].

[0] http://axisofeval.blogspot.com/2011/09/kernel-underground.ht...

[1] http://lambda-the-ultimate.org/node/4093


I agree with you on this. I used lisp/scheme a fair bit in 2004-2008 and over time have gravitated to C++. Not being able to treat memory as a first class primitive ends up being restrictive eventually.


I don't really understand what your issue with ocamllex/ocamlyacc is. Clearly you don't need them if you are willing to write your lexer/parser yourself. Also, the same tools exist (and originate) in Unix/C land with (f)lex and yacc (a.k.a. bison).


WRT #3, assoc lists are just the simplest way to store things like environments; they're traditional for "writing a Lisp in one week". If you modularize your code, you can trivially switch to hash tables or whatever later. Ditto for using lists for structures.

#4: As I understand it, there are many macro systems available for Scheme. It's one of the languages in which "hygienic" macros have been explored, etc. So if you don't like the femtolisp-bundled version, there might be another out there more to your taste.


If it's being used as a teaching language, it depends on the students. Some students will rail against the idea of learning a tool, any tool, that they don't think they would use professionally, even if the purpose of using the tool is to give them a different perspective on programming.

Also, some people just really hate the parentheses (not me).

Also, some people really hate the Beatles (not me). More so in the '60s as contemporary artists, less so now when they're more likely to be referred to in a historical context. But still. The White Album had no shortage of devastating reviews when it came out.

I'd imagine Scheme as a teaching language would be less controversial in schools where the CS students are more likely to already know some programming when they start, and/or don't have as much of a trade school mentality.


I newly joined this company, and they have a 20-year-old codebase which is a mix of C and Scheme code. The only way they debug the massive Scheme part is using print statements. And I have only bad things to say about that. :( If there's a better way all of them have been missing, I'd love to hear it. I've learned that they adapted the MIT Scheme implementation to add object-oriented features, and it "kind of" works like an object-oriented language -- except when it doesn't, which happens a lot. It's a mess.


Pretty much every 20-year-old codebase is a mix of terrible stuff. The exceptions are rare and usually involve a strong-handed dictator who is willing to make cleanups from time to time.

You're lucky it isn't Fortran and a homegrown (crappy) macro language.... You can't fairly judge Scheme or C from a legacy codebase unless you judge every other language that way too.


I'm judging it only by its debugging capabilities -- and it looks like even now there's no (open-to-all) way to do that. By comparison, C, even in a 20-year-old codebase, has some great debugging tools.


> The only way they debug the massive Scheme part is using print statements.

Why would you need anything else for debugging?!?


See I'm so naive I don't even know if this is sarcasm. :/


I'm 100% serious. Interactive debugging is hugely overrated. I am not aware of a debugging technique better than logging + contracts + asserts.


For one, that forces you to change the code and recompile all the time. And some projects have huge recompile times.


And compiling in debug mode also results in huge recompile times. Is that such a problem?

There is no big difference between building with contracts and logs on (the log level is selected dynamically, so no need to recompile) and building with debug symbols/suppressed optimisations.


I was fortunate enough to take both his compiler course as well as a follow-up course involving optimization and hygienic macros. Brilliant man, fantastic teacher, and of course, he wrote a great compiler :)


Thanks for referencing Dybvig's compiler course. Can you point me to the course materials online, or share them with us here, if it's not an issue of copyright, of course?

I tried looking for the course materials online, but the Indiana University website gave me a 404 error page when I tried to access the course from Dybvig's website.

Thanks.


Paul is that you?


IIRC, this was a high-performance Scheme developed at Indiana University that was closed source for a long time.

Good on Cisco for open sourcing it.

I'm interested to hear what regular scheme programmers feel about this news.


It's massively strange that this was closed source while at a public university, and open source under a public company.


US universities have entire departments devoted to technology licensing. The default to secrecy may be an artifact of their historic reliance on defense-agency funding for technical projects and of the bureaucratic staffing for paperwork-heavy processes.

On the other hand, there's an intellectual argument that, for many companies, existing software should perhaps be seen as a liability in double-entry accounting, due to its need for ongoing maintenance, upgrades, potential for catastrophic failure, and the cost of alterations for new business processes.

Companies offload some of those costs by open sourcing and allowing developers to happily "give back" to their bottom line.


Can you explain that a bit more? How does a company offload costs by freeing their software? Surely they're not expecting to rely on public contributions to reduce maintenance costs, are they?


Even at its most basic, an arbitrary developer filing a bug report is potentially less costly to handle than a customer experiencing a failure...it can be ignored and maybe the developer submits a patch.

On a longer timeline, hiring developers experienced in the technology from the open source community saves on the cost of training them for a proprietary code base.

Ideally, the open source community develops new features, identifies and patches bugs, writes tests and libraries, etc., adding value realized by the company's paying customers. In the meantime, the company can still devote staff to its business priorities and ignore those of the larger community: e.g. no `map` in go-lang and no `react new myapp` at the CLI for React.

To put it another way, why would a business open source code for anything other than business reasons?


It actually predates Dybvig's time at Indiana; he started writing it as part of his PhD work at UNC Chapel Hill. Plus, he set up Cadence Research Systems for licensing Chez Scheme, so it was never owned by any university.

The part where Cisco open sourced it is a bit unusual, but it's not uncommon for people to close source and sell systems they create during college or grad school and continue to use them in research.


It wasn't quite developed at Indiana University. It's based on research that was done at Indiana University, but the actual development was done by a spinoff company, Cadence Research Systems. Vaguely similar arrangement to how the original Google was "developed at Stanford" in the loose sense of being based on research the founders did while at Stanford, but it was owned by a spinoff company.


I'm very excited about this. I've been using Chez as my performance benchmark target for my own Scheme implementation.


Scheme user here. I'm very excited about this, and I think I just changed my vacation plans for next week :D


Can't say (yet). I've used a number of implementations: Gambit, MIT Scheme, PLT/Racket, and played around with Chicken. But never Chez, because it wasn't free.


And, aside from the JVM implementations and some works in progress the last time I checked, it's the only one with native instead of green threading.


Well, dfsch is currently not a work in progress, because there is no meaningful progress :) But it was originally intended as a Scheme interpreter with native threading and no GIL. The last version has a GIL-style global lock, but interpreter threads can run user code without ever acquiring that lock. (In essence: the lock protects structures that cannot be proven not to be shared between threads, and those structures are not required to interpret the output of dfsch's compiler for most inputs.)


I know Gauche uses pthreads because I got a linker error about it one time ;)

Also, does Gambit not support native threads? That's surprising considering Marc Feeley did quite a bit of research on multiprocessing in Scheme, and he wrote the SRFI for threads.


GNU Guile has pthreads.


Racket has native threads, called "places".


Not in the shared memory way I mean (which I did not make clear), per http://docs.racket-lang.org/guide/parallelism.html

> The racket/place library provides support for performance improvement through parallelism with the place form. The place form creates a place, which is effectively a new Racket instance that can run in parallel to other places, including the initial place. The full power of the Racket language is available at each place, but places can communicate only through message passing—using the place-channel-put and place-channel-get functions on a limited set of values—which helps ensure the safety and independence of parallel computations.

Compare to current Guile, where the documentation says sharing a hash table without using a mutex will not corrupt memory, but probably won't give you the results you desire.
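
For the curious, a minimal sketch of what the message-passing style looks like in Racket (untested, adapted from the documentation quoted above; fib is just a stand-in workload):

    #lang racket

    ; The place body runs in a separate Racket instance and
    ; communicates with us only through the channel ch.
    (define p
      (place ch
        (define (fib n)
          (if (< n 2) n (+ (fib (- n 1)) (fib (- n 2)))))
        (place-channel-put ch (fib 30))))

    (displayln (place-channel-get p))

Parallelism without shared mutable state: the only values that cross the boundary are the ones explicitly put on the channel.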


Cisco prices from 2013:

    SP-SW-LMIX0CH0 SP BASE Chez Scheme Dev Env for Wind,  Per Unit  $65.00
    SP-SW-LMIX0CHL SP BASE Chez Scheme Dev Env for Linux, Per 10    $325.00
    SP-SW-LMIX0CHW SP BASE Chez Scheme Dev Env for Wwind, Per 10    $325.00
    SP-SW-LMIX0CH1 SP BASE Chez Scheme Dev Env for Apple Mac,Per 10 $325.00
    SP-SW-LMIX0CHA SP BASE Chez Scheme Developm $65.00
    SP-SW-LMX01CHL SP BASE Chez Scheme Developm $65.00


Chez Scheme price schedule I received in 2002:

    Chez Scheme Version 6
    Software License Fee Schedule
    V60901f
    
    Supported machine types:
       Intel 80x86 Linux 2.x
       Intel 80x86 Windows 95/98/ME/NT/2000
       Silicon Graphics IRIX 6.x
       Sun Sparc Solaris 2.x
    
    
    Classification                                  License fee (USD)
    -----------------------------------------------------------------
    Single Machines
      first machine per machine type                            $4000
      each additional machine                                    3000
    -----------------------------------------------------------------
    Site
      first machine type                                         9000
      two machine types                                         14000
      three or more machine types                               19000
    -----------------------------------------------------------------
    Academic Site (for qualified academic institutions)
      first machine type                                         4500
      two machine types                                          7000
      three or more machine types                                9500
    -----------------------------------------------------------------
    Corporate
      each machine type                                         24500


The $65-per-seat price was listed by a third-party Cisco vendor. There's never been any explanation as to how they came up with that price, nor whether anyone tried to purchase it that way. It was fun to tell people to get a license, especially given the normal prices (which someone else posted).


For people interested in the legalities of licenses: it's released under the Apache License 2.0, which is a "free software" open source license that is compatible with GPLv3, but not with GPLv1 or GPLv2.

The Apache 2.0 license includes not just copyright but also patent licensing, so the software will contain no hidden patent restrictions for patents owned by the creators and contributors.


Why did you quote free software but not opensource? I am curious to know what the difference in your writing intended to convey.


That may be because "free software" is still an ambiguous term, while open source is relatively unambiguous. I prefer to use Free Software as my disambiguator of choice, but I understand the GP using the other form, and occasionally use it myself.


In my experience, people frequently think that "open source" means a bunch of different things, such as "visible source code" (and nothing more) or "opposite of commercial" (i.e. no money allowed) or "inviting public participation" (as in "open source governance").


> while open source is relatively unambiguous

In my experience both terms are ambiguous. Many people seem to believe that "open source" simply means the source is available for examination. For example:

https://news.ycombinator.com/item?id=3205771


Both are ambiguous, as explained here: https://opensource.org/faq#free-software

If you want to make an unambiguous statement, use "OSI approved license".


I think a charitable reading would be that it was intended to convey something other than offense.


Of course. I don't think anyone is offended. But what was the intended meaning? Can you figure it out?


I take it as trying, apparently without success, to avoid stepping on a land mine.


What is the library ecosystem like? This is ultimately what limits all other Scheme implementations.


That is indeed a great question. Anyone know? I used to use Scheme for prototyping numerical code, etc., but have switched to Julia. Partly because Julia is more convenient in some ways, but mainly because of access to libraries, both native Julia libraries and Python libraries (via PyCall). I personally still prefer Scheme as a language, but missing libraries is a real problem.


Making fast scheme interpreters is something I always come back to, a timeless exercise that makes for a great way to decompress over a week (wow I'm a nerd). I'm excited to find some nuggets of micro-optimized gold!


Compiler. Chez Scheme is compiled with a nanopass compiler. https://www.youtube.com/watch?v=Os7FE3J-U5Q
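
Roughly, the nanopass approach structures the compiler as a long pipeline of tiny passes, each translating between two closely related intermediate languages. A hand-wavy sketch (the passes below are hypothetical stubs, not Chez's actual passes):

    ; Hypothetical stub passes; a real nanopass compiler has dozens,
    ; each a small total function from one IL to the next.
    (define (expand-macros ir) ir)
    (define (closure-convert ir) ir)
    (define (generate-code ir) ir)

    ; The compiler is just the composition of its passes.
    (define (compile-expr e)
      (fold-left (lambda (ir pass) (pass ir))
                 e
                 (list expand-macros closure-convert generate-code)))

The payoff is that each pass is small enough to understand, test, and replace in isolation.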


Compiler what? I simply said that I enjoyed writing fast scheme interpreters.


You said it again. It's a compiler, not an interpreter.


S/He's saying that s/he enjoys writing interpreters. That is not related to the fact that Chez Scheme is a compiler.


It's both a compiler and an interpreter. Read up on Petite Chez Scheme.


Yeah most compiled lisps need to do some interpretation in case an unsuspecting eval comes around ^^
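
For example (a minimal R6RS sketch):

    (import (rnrs) (rnrs eval))

    ; expr is built at run time, so no ahead-of-time compiler could
    ; have seen it; eval must interpret it or invoke the compiler.
    (define expr (list '+ 1 2))
    (display (eval expr (environment '(rnrs))))  ; prints 3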


It's about time! Although I'm curious---how did Cisco end up owning it?


The short version: Dybvig left his faculty position at Indiana at the end of 2012 to join Cisco, and Cisco bought Cadence Research Systems (the company set up for licensing Chez Scheme), too. There's an FTC filing somewhere, but my Google-fu is failing me.


If you look at scheme.com, you will see Copyright © 2011 Cadence Research Systems. Cisco bought Cadence back in 2012.


(Cadence, for those who don't know, is basically the equivalent of SolidWorks for anything in EE. They are the only company (maybe Synopsys or Mentor, but I think there are a few holes here and there) on the planet that lets you go from designing something as simple and low-level as an analog jellybean op-amp (with state-of-the-art EM and S-param simulation) and verification, to RTL design (full power simulation and everything), to piecing together TSMC[edit: learn2proof-read, self]-based fab designs on their 28nm processes (ASICs or SoCs), to laying out boards at the Altium level. You're paying around $100k/yr/seat for all this, but if you're actually using the whole feature set it's worth every cent.

Other than SolidWorks (which has everything from industrial machining of stock cold-rolled steel and finite-element analysis of your components all the way to computational fluid dynamics of anything your engineering heart could desire), I've never seen a company cover an industry so thoroughly, so well[1].)

[1] Maybeeee Adobe has equal coverage re: for vector work with Illustrator/image manipulation with PS/page layout and typesetting with PageMaker/Indesign and doing post on Video with After Effects and such. And I don't think anyone would disagree, the degree of complexity for this genre of software is on a different tier.

edit: Haha. Confused Cadence Design Systems with Cadence Research Systems. Thanks for correcting me. Mea culpa. Keeping this up just for continuity's sake. Viva la Scheme though! Have an upvote, @beering ;)


I'm pretty sure that the Cadence you're thinking of is not the Cadence that owned Scheme. Cadence Research Systems was afaik a tiny entity that mainly owned Chez Scheme.


Yeah, it's a different company. I doubt Cisco could even afford an EDA vendor other than little Mentor. Parent is also wrong about Cadence being the only one that can do each of the major jobs: all of them have products for that. Too many to keep track of, actually.

Mentor has an advantage since they acquired Tanner, best of the budget ones.


Synopsys definitely has offerings for all of those things. IC Validator also completely blows both Cadence's and Mentor's offerings out of the water when it comes to DRC and LVS, although Calibre from Mentor Graphics has more market share in that area at the moment (though I think ICV might be slowly picking up steam when it comes to market share).


Cool - was it used as a scripting language or for the core of the system?


Amusingly Cadence Design Systems also uses Lisp from what I heard from people who worked there on a Lisp meetup.


TMSC?


https://en.wikipedia.org/wiki/TSMC

Biggest "independent" semiconductor foundry company (independent = they make customer designs, not their own)


I guessed TSMC myself but wasn't sure it's not some EE acronym I'm unaware of.


AFAIK, Cisco never bought Cadence -- Cadence is an independent company -- maybe they bought some division or subsidiary of Cadence?


Wrong Cadence, CRS was a small company.


ah, thanks!


I used Chez Scheme for many years and loved its lightning fast compile times. For example, I'm not aware of any other full-scale compiler that can compile itself as fast as Chez can.


OberonSystem can build the whole compiler, OS, and applications in around 3 seconds.


>OberonSystem can build the whole compiler, OS, and applications in around 3 seconds.

I'm not usually given to short, low-content comments here on HN, but:

wow!


Staying on the low-content theme . . . that's just how awesome Wirth et al are.


Fast compile times are cool, but I've never heard anyone say they liked programming in a language that Wirth created. The possible exception is Delphi as a Pascal derivative, but that has very little to do with wanting to program in Pascal.


Pascal was the language of choice for Windows development for quite some time even before Delphi.

Also, for what it's worth, Ada has quite a few Wirthisms and I personally quite enjoy it. But it technically isn't a Wirth language.


I look forward to the day someone takes the effort of writing a bare metal runtime for Go and producing something like "Goberon", given the influence.


Why? Go is a terrible language with a terrible community.



A step in the right direction, but I was thinking more about Oberon System 3 with its Gadgets UI framework.


I think we got Plan 9 down to about 45 seconds, all OS and userspace.


That's a myth. Show the evidence please.


Looking forward to having this integrated into geiser/emacs. I worked with Chez recently, and it is really a high-quality Scheme implementation.


Any details on what you were using it for? What did you like about it vs something like Racket?


We have a macro expander for Pascal written in Scheme (a quick and dirty draft made by me that worked so well it stayed). We had some performance problems with some crazy recursive macros (it generates a _lot_ of code; don't ask, I am not allowed to talk much about it), so I investigated porting it to Chez.

Instead I just switched to Guile trunk (a Scheme implementation) and got a 3x speedup. Did some optimizing work and it ended up at 4x, which is good enough.


What Scheme implementation did the code originally use?


The Guile 2.0 branch. I don't know what magic optimisation dust they sprinkled over the upcoming 2.2, but it sure is fast.

We thought about using Chicken, but our expander depends quite a lot on using syntax-case to deconstruct everything, and I didn't want to learn their implicit renaming stuff.
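
For reference, the syntax-case destructuring style looks roughly like this (a minimal R6RS sketch; swap! is just an illustrative macro):

    ; swap! takes its use site apart with a syntax-case pattern and
    ; stays hygienic: the introduced tmp cannot capture a caller's name.
    (define-syntax swap!
      (lambda (stx)
        (syntax-case stx ()
          ((_ a b)
           #'(let ((tmp a))
               (set! a b)
               (set! b tmp))))))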

Apparently the 2.2 branch has full elisp support. Can't wait for Emacs to run on it.


Ah, that makes sense! Guile 2.2 has a completely rewritten compiler and virtual machine. I'm happy to see some real-world instances of it greatly improving performance.


"greatly improving performance" is an understatement! It was literally 3x. I didn't even have to change anything. Not bad for a language that usually beats python by quite a large margin :)


> Apparently the [guile] 2.2 branch has full elisp support. Can't wait for Emacs to run on it.

I really wish folks would spend the time they spent porting elisp to guile Scheme porting elisp to Lisp instead. Scheme's great for what it is (really), but what it is not is an industrial-strength systems programming language. Common Lisp is.


>porting elisp to guile Scheme

This is a common misunderstanding. Emacs Lisp is not being rewritten as Scheme. What is actually happening is that there is a compiler for Emacs Lisp that runs on Guile's virtual machine. Elisp isn't going anywhere.


Regardless, I wish that they'd spent that effort writing an elisp→Lisp compiler rather than an elisp→Scheme compiler.


It's not an elisp->scheme compiler.


What does it compile to, if not Scheme?

And, regardless, I wish that they'd not used Scheme. I really, really wish that they'd not used Scheme.


I'm not that familiar with Guile internals, but IIRC they have some intermediate representation (bytecode?) that the VM then runs. So the frontend languages (Scheme, Elisp, even JavaScript IIRC) are compiled to bytecode, and the Guile VM then interprets that bytecode.
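
You can poke at that pipeline from Guile itself (a sketch; the exact set of intermediate targets differs between 2.0 and 2.2):

    (use-modules (system base compile))

    ; Compile a Scheme expression all the way down and run it.
    (display (compile '(+ 1 2) #:to 'value))    ; prints 3

    ; Or stop at the first intermediate language and inspect it.
    (display (compile '(+ 1 2) #:to 'tree-il))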


Someone explain to me: Chez vs Gambit vs Chicken vs Bigloo. Which do I pick? Especially interested in parallel/multithreading abilities, standards compliance, and overall performance.


This page has a useful comparison between various scheme implementations. It's written by one of the authors of Guile Scheme: https://wingolog.org/archives/2013/01/07/an-opinionated-guid...


Out of the four you mentioned, only Chez has true posix threads.


Bigloo has support for posix threads. The only one on the list that absolutely does not have them is Chicken.


Why should I be interested in this - given there are many free high quality implementations for Scheme like Racket, Gambit etc?


Chez is known for being very fast, and for using state-of-the-art compiler technology.


What kind of compiler technology? I've seen it mentioned that it uses a nanopass design, but that's more of a development strategy than something that results in performant code or low compile times.


Any numbers?


Look at Clinger's benchmarks for Larceny.


This is great. I was hoping this would happen. The reason was a paper, posted here in the past, tracing its development from the 8-bit days. I was impressed, but knew it needed a community and an OSS license.

Good.


Very cool!

I had trouble building it on Linux when I tried to set --installprefix= to a non-standard location, but it built fine using the defaults. Nice!

On OS X, I have a clang-based gcc installed, and perhaps because of that my build broke.


UPDATE: building on OS X:

I installed gcc-5 using brew, set "alias gcc=gcc-5", and then ./configure ; sudo make install worked fine.


Wow, I never thought they'd open source Chez. This is really cool.


Wow, this takes me back. I took intro CS at IU in 1993. At that time they were still teaching Scheme, using George Springer's Scheme and the Art of Programming and something like The Little Schemer (but not that because I guess it didn't come out for another two years). Delightful language with a really clean library. I always found Common Lisp's naming conventions to be—dare I say it?—PHP-esque in their irregularity. Scheme, meanwhile, actually has naming conventions. :)
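
To illustrate the contrast (a small, from-memory sample):

    ; Scheme: predicates end in ?, mutators end in !, conversions use ->
    (null? xs)            ; CL: (null xs)
    (set-car! p 1)        ; CL: (rplaca p 1)
    (string->number "42") ; CL: (parse-integer "42")

Once you know the conventions, you can often guess a Scheme procedure's name without looking it up.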


The Little LISPer (precursor to the Little Schemer) was definitely in print and used at IU in the 1980s. Maybe that was it?


That must've been it.


A quote from BUILDING: "Building Chez Scheme under Windows is currently more complicated than it should be. It requires the configure script (and through it, the workarea script) to be run on a host system that supports a compatible shell, e.g., bash, and the various command-line tools employed by configure and workarea, e.g., sed and ln. For example, the host system could be a Linux or MacOS X machine. The release directory must be made available on a shared filesystem, e.g., samba, to a build machine running Windows. It is not presently possible to copy the release directory to a Windows filesystem due to the use of symbolic links."

Has anyone managed to build the Windows version?


Pretty great, always wanted to look at the implementation.


Wow, such big news! I've heard so much praise for it and have always wanted to use it. Now a dream comes true. Thanks!


Anyone know how this became Cisco's to release?



This is good news. The code is worth reading. It is as good as MIT Scheme, and less esoteric than Gambit.



