
On the other hand, the burden of proof is on the library writer who creates the typeclass instance, not on the end user - possible largely thanks to purity - whereas in many other languages programmers are expected to re-implement such patterns by hand.


It could be related to StackOverflow displaying "Hot Network Questions" in a sidebar, some of which are from the aviation StackExchange, combined with the usual phenomenon of upvoted stories attracting similar submissions.

Incidentally, I find this sidebar extremely distracting; maybe it's time for a plugin/script to fix it.


Julia seems nice, but the fact that the developers don't care about TCO is annoying and vaguely reminiscent of Python. If it were added, not only would many algorithms become more natural to express, but Julia would go to the top of my list as a recommended first language (instead of Scheme or Lua).
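
Here's a quick sketch of what I mean, in Python (which has the same limitation); the toy function is mine, just for illustration:

  def countdown(n):
      # Tail call: an implementation with TCO could run this in constant stack space.
      return n if n == 0 else countdown(n - 1)

  print(countdown(100))   # fine
  # countdown(10 ** 6)    # RecursionError: CPython's default recursion limit is 1000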



They're not similar at all. FoCS is mostly about introductory programming and data structures (with a tiny bit about automata). Sipser's book is about computability and complexity - it's comparable to Hopcroft and Ullman's Introduction to Automata Theory, Languages, and Computation, for which Ullman recommends FoCS or equivalent as a prerequisite.


Interesting. Thanks.

Coincidentally, I just signed up for Ullman's Automata course on Coursera. The description makes it seem pretty basic, but I'm interested to see what he does with it.


Actually, Scheme is more accessible than ever before. If installing MIT Scheme and Edwin/Emacs is too difficult, now there's Racket-SICP, which allows you to program with DrRacket, a beginner-friendly graphical IDE, and actually comes with implementations of some parts of SICP that MIT Scheme omitted, like the picture language and (afaik) some concurrency primitives.

Is Scheme really decreasing in usage? Sure, in recent years there have been high-profile moves to Python at MIT and UCB, but I don't know of any hard stats on worldwide Scheme usage. Not that it matters.


For one thing, you generate as few lists as possible by aggressively fusing away intermediate ones (at least if your language is pure).

Also, the garbage collector will often move lists into contiguous region(s) of memory.

More to the point, though, sometimes a list is the appropriate data structure for the problem (like returning all the intermediate results of a Collatz computation, where you don't know in advance how many there will be). Sure, you could use an array and double its size every now and then, but you're doing an awful lot of violence to your program just to handle something the runtime could take care of for you :)

And yes, I know there will be a slight performance penalty under many circumstances.
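
To make the Collatz example concrete, here's a minimal sketch in Python (names and details are mine):

  def collatz_steps(n):
      # Return every intermediate value of the Collatz iteration starting at n;
      # the length of the result isn't known in advance.
      steps = [n]
      while n != 1:
          n = 3 * n + 1 if n % 2 else n // 2
          steps.append(n)
      return steps

  print(collatz_steps(27))  # a longish list ending ... 4, 2, 1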


The point is you're supposed to use truly random word combinations since those are at least memorable.

  $ wc -l /usr/share/dict/words
  119095
  $ python -c 'print(119095 ** 4)'
  201175048646341950625
  $ python -c 'print(85 ** 10)'
  19687440434072265625
So even if the attacker knows you're using this scheme in its pure form, it has more entropy than a completely random 10-character password (drawn from an ~85-symbol alphabet) - and who actually uses such a password, except someone with a password manager, who could just as easily be using a 20-character random one?

So even if it becomes known, it's an improvement on what users are doing now.
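
Same comparison in bits, if that's easier to eyeball (same assumptions as above: a 119,095-word dictionary and an ~85-symbol alphabet):

  $ python -c 'import math; print(round(4 * math.log2(119095), 1), round(10 * math.log2(85), 1))'
  67.4 64.1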


Can never turn down an opportunity for a one-liner.

  $ perl -E 'open(my $fh, "<", "/usr/share/dict/words"); my @words = map {chomp; $_} <$fh>; close $fh; say join " ", map {$words[int rand @words]} 1..4'
  menu chemists administrative seeps
Might have to run it a couple of times before you get something that you can memorize.


You shouldn't use a non-cryptographically-secure random number generator (like Perl's rand) for password generation. It's too risky.
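
If you want a drop-in replacement that uses a CSPRNG, something along these lines should do (a sketch using Python's secrets module and the same word file as above):

  $ python3 -c "import secrets; ws = open('/usr/share/dict/words').read().split(); print(' '.join(secrets.choice(ws) for _ in range(4)))"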


Ew.

    shuf -n 4 /usr/share/dict/words | tr -dc 'A-Za-z0-9'


You could use a dictionary of the 10,000 most common words and you'd still have loads of entropy.
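
Roughly, reusing the arithmetic from upthread (assuming exactly 10,000 words and four of them):

  $ python -c 'print(10000 ** 4)'
  10000000000000000
That's 10^16 combinations - about 53 bits - still a bit more than a fully random 8-character password over an ~85-symbol alphabet.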



For imperative languages, the most popular approach to formal proof is to add partial correctness annotations in an axiomatic semantics such as Hoare logic. I've heard that Microsoft does this to verify many properties of the Windows kernel. (I think the most popular tools are 3rd-party and proprietary at the moment.) Of course, finding the right theorems to write down is not easy either.
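
As a toy illustration of the shape of those annotations, here's a sketch in Python with runtime asserts standing in for the logical pre/postconditions (real tools use a specification language and discharge these statically, rather than checking at run time):

  # Hoare triple: {n >= 0 and d > 0}  q, r = divmod_sub(n, d)  {n == q*d + r and 0 <= r < d}
  def divmod_sub(n, d):
      assert n >= 0 and d > 0               # precondition
      q, r = 0, n
      while r >= d:                         # loop invariant: n == q*d + r and r >= 0
          q, r = q + 1, r - d
      assert n == q * d + r and 0 <= r < d  # postcondition
      return q, r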

There's currently an attempt to formalize a subset of C, with an axiomatic semantics, all the way from specification down to machine language: http://vst.cs.princeton.edu/

