Hacker News

The argument can be adapted almost verbatim to hacker-type vs. theory-type programmers. One is in the problem-solver camp, the other in the fundamental-understanding-and-theory-building camp. Silicon Valley is dominated by hackers, and they are constantly re-inventing wheels because of a lack of theoretical understanding (e.g. NoSQL), while the theory builders are working on problems that will almost surely collect dust in some dark corner of a university library.



Funny… While I personally like beautiful theory, the last bit of deepish theoretical work I studied was aimed at solving a particular problem.

I'm currently delving into Earley parsing, so I can write compilers easily, so I can write DSLs economically, so I can write short programs in any language, so I can spend less time writing those programs, so I can generate fewer bugs from those programs, so these programs ultimately cost less money$$$.

Then there is Bret Victor, who uses very simple principles to solve the long standing problem of understanding in various domains.

I probably stand in the "the best solutions come from the deepest theories" camp. With the caveat that "deep" does not necessarily mean "complex"; on the contrary. Lambda calculus is very deep, but quite simple. So, while I believe problems are the centre of our field, I also believe we need to break down those problems into deep and simple math before we can tackle them efficiently. Not doing so is a recipe for bloat.
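As a tiny illustration of "deep but quite simple": all of lambda calculus is variables, abstraction, and application, yet that is enough to encode arithmetic. A minimal sketch using Church numerals, written here with Python lambdas (the encoding is standard; the helper names are mine):

```python
# Church numerals: a number n is "apply f, n times".
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    """Decode a Church numeral by counting applications of f."""
    return n(lambda k: k + 1)(0)

two   = succ(succ(zero))
three = succ(two)
# to_int(add(two)(three)) gives 5
```

Three constructs, no built-in numbers anywhere, and you get arithmetic for free: that's the kind of depth-through-simplicity I mean.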


> I'm currently delving into Earley parsing, so I can write compilers easily, so I can write DSLs economically, so I can write short programs in any language, so I can spend less time writing those programs, so I can generate fewer bugs from those programs, so these programs ultimately cost less money$$$.

'Earley parsing' immediately trips a flag in my memory, so I'll just ask—have you seen Jeffrey Kegler's Marpa parser (http://jeffreykegler.github.io/Marpa-web-site, with associated blog)?

(I don't know you, so I apologise in advance if you are Jeffrey Kegler. :-) )


Oh. My. I have to read this. And those 40 pages too: https://doc-0s-0c-apps-viewer.googleusercontent.com/viewer/s...

Okay, a cursory look shows this will only work on context-free grammars. I have read another paper showing that you can add some context sensitivity to Earley parsers; I hope you can do that with Marpa parsers as well. Ultimately, I want to combine the generality of Earley parsing with the expressive power of Parsing Expression Grammars.

Damn, though. If Marpa parsing is as good as its promises, I may have to jump ship again. (The first time was when I went from PEGs to Earley.)


I disagree about Bret Victor. There is nothing simple about the principles that he uses. In fact, you need a whole lot of very deep theoretical understanding to even get to the point of Bret's abilities. Time-travelling debuggers, for one, are very advanced technology. If anything, Bret Victor is applying very advanced theory to mundane domains like programming.


Bret Victor's field is, "Applied Advanced Complex Simplicity."


> Time-travelling debuggers, for one, are very advanced technology.

Yes, they are. But the underlying principle is dead simple: just keep the entire history of memory, and never erase anything!

Of course, the implementation is not that simple, because efficiency, because complex imperative languages, because many other things. But the basic principle is still dead simple. You may be amazed by time travelling with GDB, but OCaml had that years before GNU did.
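The "never erase anything" principle fits in a toy sketch. Everything below is a hypothetical illustration of mine (the `Recorder` class, the counter state), nothing like how GDB or OCaml actually implement it, but it shows why rewinding becomes trivial once history is total:

```python
import copy

class Recorder:
    """Toy time-travel: record a full snapshot at every step."""

    def __init__(self, state):
        self.history = [copy.deepcopy(state)]

    def step(self, transform):
        """Apply `transform` to the latest state and record the result."""
        new_state = transform(copy.deepcopy(self.history[-1]))
        self.history.append(new_state)
        return new_state

    def rewind(self, t):
        """Travel back: the state after step t is just history[t]."""
        return self.history[t]

# Usage: a trivial counter "program", stepped three times.
rec = Recorder({"x": 0})
for _ in range(3):
    rec.step(lambda s: {**s, "x": s["x"] + 1})
```

Stepping backwards is mere indexing; all the real engineering goes into making those snapshots cheap (deltas, copy-on-write), not into the principle itself.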

I feel the same way about Bret Victor. He does understand things that are hard to grok, probably very abstract stuff. But I suspect it wouldn't take much space to write those things down.

I gather you have seen his videos, most notably the dead fish and dynamic visualization ones (Learnable Programming also counts). Tell me: why did you find those presentations so amazing?

The reason I found them so amazing was their simplicity. Bret's demos show incredibly innovative features, but at the same time, those features felt incredibly simple: easy to use, with obvious feedback, and frankly, most of his particular examples don't seem that hard to implement. The hard part was coming up with the idea in the first place.

Plus, I can see two overarching principles in Bret's work. The first is using our visual system to take in the relevant stuff at a glance: give me a graph, not a listing. The second is direct interaction. You could think of it as a REPL on steroids, or even a live image, but it's more than that. In a live environment, you would change a variable and see the consequences in real time. Bret Victor goes beyond even time, and lets us manipulate systems from the outside, as if they were a block universe (where time is just another dimension).

In Learnable Programming, you can see him slide the force of the jump to make it just right, so our little Mario can go through that little secret passage. We can visualize the whole path (that's the first principle), and we can see what that path would become if we changed the laws of physics (that's the second principle).
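The block-universe idea boils down to this: if the whole path is a pure function of its parameters, sliding a parameter regenerates the entire future at once. A hypothetical sketch (the function and its parameters are mine, not Bret's code):

```python
def trajectory(jump_force, gravity=1.0, steps=10):
    """Return the full list of heights for a jump, as one value.

    Because the path is a pure function of its inputs, there is
    nothing to "step through": change a parameter, get a new path.
    """
    path, height, velocity = [], 0.0, jump_force
    for _ in range(steps):
        path.append(height)
        height += velocity
        velocity -= gravity
    return path

# Sliding the force parameter gives a new complete path instantly:
low  = trajectory(jump_force=2.0)
high = trajectory(jump_force=4.0)
```

Plot `low` and `high` side by side and you have both principles at once: the whole path at a glance, and direct manipulation of the world that produced it.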

Very deep stuff. Very hard stuff. But as I said, very simple as well. And I expect future insights will make it even simpler.

---

Let us go back to my Earley parsing for a second, because it's quite illustrative. My journey began with this talk[1] by Ian Piumarta. He mentioned Earley parsing at some point, saying it was very general (it handles every possible context-free grammar) and also very simple, very easy to implement: half an hour to translate the algorithm into any language.

Months later, I'm still at it.

But you know what? As I come to understand Earley parsing, I agree with him more and more. This thing is genuinely simple. I wrote a recognizer in less than a hundred lines of Lua code, and the whole parser will likely take less than 200 lines. As I said, simple.
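To give a sense of that size, here is a sketch of an Earley recognizer in Python rather than Lua (the grammar format, a dict of nonterminal to right-hand-side tuples, is my own choice; epsilon rules need extra care in the completer and are omitted for brevity):

```python
def earley_recognize(grammar, start, tokens):
    """Return True iff `tokens` is in the language of `grammar`.
    An item is (lhs, rhs, dot, origin)."""
    chart = [set() for _ in range(len(tokens) + 1)]
    for rhs in grammar[start]:
        chart[0].add((start, rhs, 0, 0))

    for i in range(len(tokens) + 1):
        worklist = list(chart[i])
        while worklist:
            lhs, rhs, dot, origin = worklist.pop()
            if dot < len(rhs):
                sym = rhs[dot]
                if sym in grammar:                          # predict
                    for prod in grammar[sym]:
                        item = (sym, prod, 0, i)
                        if item not in chart[i]:
                            chart[i].add(item)
                            worklist.append(item)
                elif i < len(tokens) and tokens[i] == sym:  # scan
                    chart[i + 1].add((lhs, rhs, dot + 1, origin))
            else:                                           # complete
                for p_lhs, p_rhs, p_dot, p_origin in list(chart[origin]):
                    if p_dot < len(p_rhs) and p_rhs[p_dot] == lhs:
                        item = (p_lhs, p_rhs, p_dot + 1, p_origin)
                        if item not in chart[i]:
                            chart[i].add(item)
                            worklist.append(item)

    return any(lhs == start and dot == len(rhs) and origin == 0
               for lhs, rhs, dot, origin in chart[len(tokens)])

# An ambiguous toy grammar: S -> S '+' S | 'a'
grammar = {'S': [('S', '+', 'S'), ('a',)]}
```

Predict, scan, complete: three rules and a chart, and it happily eats ambiguous grammars that would make a recursive-descent parser loop. That's the simplicity Piumarta was pointing at.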

Simple does not mean easy, however. Without the proper insights, I could only struggle. But then I would stumble upon a sentence, a picture… and something clicked. And I could see the simplicity, as well as the depth. Then it clicked again, and it became even simpler. To the point that now I understand where Piumarta is coming from. And I still remember why it was so difficult for me back then.

I mean, Earley parsing used to feel kind of complex, and quite ad hoc. At the same time, it didn't seem too difficult. I was wrong: Earley parsing proved to be much more difficult, and at the same time much simpler, than I had expected.

[1]: http://www.youtube.com/watch?v=EGeN2IC7N0Q

---

I think Bret's work is like that: deep, hard, and simple.


I feel the parallel is more sophisticated. In the paper, Gowers bemoans that the theoreticians dismiss the depth of combinatorics. Similarly, in modern distributed systems, there is a huge gulf between the theoretically "interesting" problems of optimality and the difficult problems solved by practitioners (deployment, fault mitigation, and control mechanisms), and similarly a lack of vocabulary to articulate a grand-sounding theory from real problems.


I would put myself in the problem-solver class. I want to understand the theory so I can solve problems more effectively. Certainly, the more experience I get, the better I write code, thanks to a greater depth of understanding. My programming is becoming more declarative / functional, though at the end of the day, sometimes crappy hacks need to be added to get things working.



