Is it really “Complex”? Or did we make it “Complicated”? (2014) [video] (youtube.com)
185 points by saturnian on April 24, 2017 | 79 comments



I haven't watched the video yet, so please forgive the possibly premature comment... But this is something that I've found myself thinking about a lot lately. Are the things that we're currently building or maintaining truly that complicated or are we over-engineering things? I've been humbled on more than one occasion where I initially thought an enterprise-y solution was over-engineered, until all the details of the problem were explained to me.

What I wish we had was a "man" equivalent to provide everyday examples of how to use the tool "correctly" (although I'm aware there's stuff like "bro" pages), as well as another tool to explain why some tool / option even exists and how they're "expected" (by the creator / maintainers) to be used. As I've gotten into the habit of reading man pages I've become increasingly aware of how many options certain tools provide, but in many cases I really cannot fathom why those options are available or in what kind of situation they might be used.


Most things can be done in a less complicated manner, but it costs more.

Consider SAP. It's complicated to say the least, but a lot of that complexity comes from its generality, its flexibility, and the quality of the solutions it provides underneath.

Any solution you write in SAP could be written in a much simpler manner using simpler tools, but doing so, and getting your solution to the quality of what you can get in SAP, would be hugely expensive and take a lot of time.

In that way we subsidize each other, but in doing so, we often make things much more complicated than they strictly need to be to solve any particular problem.

Now this is a different class from things that are just shitty design. Those exist in abundance, and it's unfortunate, but that's life.


Your comment on man pages touches on something I've been thinking a lot about lately; the fact is, we completely neglect documentation, so much so that its neglect resulted in it being pushed up a layer, into the browser - namely, Google (and Stack Overflow).

Fundamentally, we don't really do discoverability well because volunteers are fickle as hell, and commercial interests are profit-focused.


It is too bad "bro pages" weren't just "example pages" :-/


I wonder what a Smalltalk-like environment would look like if it had been developed in the 2000s instead of the 1970s.

If you could marry up the advantages of text/files, live code, visual layouts, data visualization and perhaps machine learning in the future, maybe you could come up with a huge jump in productivity and being able to handle complexity.


I believe Red[0] is furthest along in practically realizing this concept, by focusing on compositions of small languages, a premise Alan Kay also worked on with the STEPS project at VPRI[1].

The main thing that stops people from beelining down this path is the sheer quantity of yak-shaving involved. We're all impatient and have near-term goals, and glue-and-iterate gets us there without having to engage in a non-linear deconstruction and analysis of what's going on.

[0] http://www.red-lang.org/ [1] http://www.vpri.org/html/writings.php


Trying to find a short REAL Red example (not Hello World or here's how to show an alert), and I can't seem to find one. Can you help me out? Something that would help me understand what the language is like.


The concept of Red is heavily based on Carl Sassenrath's Rebol, only Red is both very high level and fully capable of low-level programming as well. Rebol can show you the high-level things possible with Red. It truly is amazing. Even though it's kind of old now, I installed Rebol recently and was blown away by how much power I got with no installation. Red will be much the same and allow you to make minuscule native binaries.


Look at Rebol 2 docs. Or try this (enough for me to get started): http://redprogramming.com/Getting%20Started.html


Check out the REBOL examples on Rosetta code. I'm very fond of the "percent of image difference" one -- it's not large, but shows off some of the nice features like image handling and the fantastic REBOL GUI dialect. (Yes, I wrote it...)


Mathematica gets live code, data visualization, and ML, all with a lispy+tacit+functional syntax. I find it much more integrated and easy to use compared to Jupyter notebook, although it's near useless for anything imperative -- I find myself using Jupyter a lot these days to develop one-off scripts interactively.

(please, no one mention stephen)


(I am pleased you brought up Stephen W, and also that you asked us not to. That's all you'll get from me.)


I would have called him the W-ster and respected the parent's wishes


> (please, no one mention stephen)

Why? Does he come to every place that mentions his name, like that AI guy?


Because any HN discussion that mentions the W-word will inevitably devolve into discussion of the controversial man


> If you could marry up the advantages of text/files

Is that really such an advantage? What kind of advantage does having source code scattered in files have over Smalltalk's Change Log? The Change Log greatly simplifies having live code in a runtime environment where a runtime change could crash the system. Source code in text files complicates this. It's also a powerful development tool all by itself. What's more, it's just a clever use of a text file!


I mean in being able to generate source code as text files at any point, or update from text files, since that's what programmers are used to, and so much of the tooling is built around that. You're not going to be putting an image on github.


> I mean in being able to generate source code as text files at any point, or update from text files

It has been done many times. There was a Camp Smalltalk initiative to standardize such a mechanism back in the early 2000's. Anyone could code something up that does this for a particular dialect in a matter of minutes.


The way I see it, plaintext's advantage is much like the iPhone's stylus-less touchscreen - it's much more direct, and people deal with it much more intuitively as a result. Although I'm starting to think that it's more about not coupling the program and data file, and providing documentation (a comment-less plaintext XML file is often not much more useful than a binary file).


How about an Erlang unikernel with its relup functionality, running under a VM with the ability to hibernate to disk? That gives you nearly the same set of benefits as Smalltalk, without being nearly as "fossilized."


But with a huge barrier to entry. Smalltalk is at least reasonably easy to grasp for people new to programming, Erlang not so much (though it is incredibly powerful).

The graphical nature of the Smalltalk environment also really helped to make it accessible. Erlang lives mostly in text terminals.

I'm still not convinced of the 'image' mechanism, it's really nice to have implicit and automatic persistence but it glues the code so strongly to the data that it starts to hamper collaboration. Being able to easily pull a bunch of stuff from one machine to another and to integrate it with stuff that was already there is something that other programming languages have solved very well (together with DVCSs), Smalltalk seems a step backwards in that regard.

Though there are times I wished for an easy way to hibernate an entire session for later re-use.


> Erlang lives mostly in text terminals.

People (outside of Ericsson) just haven't bothered to take much advantage of Erlang's strengths. Erlang speaks network wire protocols very efficiently, so if you want graphical Erlang sessions, you just need to write Erlang applications that act as e.g. X11 clients. Which is what things like Erlang's own Observer application do, complete with the ability to use the graphics toolkit Tk. (Or, if you like, you could expose an HTTP server serving a web-app with live updates over a websocket, like https://github.com/shinyscorpion/wobserver. Or a VNC/RDP server. It's really all equivalent in the end.)

Unlike Smalltalk where the graphical capabilities are part of the local console framebuffer, Erlang's approach allows you to interact with a running Erlang node graphically irrespective of your locality to it—which is important, because, in the modern day, your software is very likely a daemon on a headless server/cloud somewhere.


After long searching I've settled on python for hardware control, number crunching and ML stuff (it's really just wrappers around C libraries and GPU kernels) and Erlang for everything else. So far no regrets. I wasn't aware of that software, thank you for the pointer!


I've never used it seriously, but the bit pattern matching constructs look really sweet. I don't write a lot of protocol code anymore, but I remember doing parsing badly on 10-100 bytes to figure out what a message meant. Websocket stuff would get a lot of help from that syntax.
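
For contrast, here's roughly the kind of manual byte/bit fiddling that describes, as a minimal Python sketch (not Erlang) decoding the fixed part of a WebSocket frame header per RFC 6455 -- the function name and structure are just illustrative:

    import struct

    def parse_ws_header(buf: bytes):
        """Decode the fixed part of a WebSocket frame header (RFC 6455)."""
        b0, b1 = buf[0], buf[1]
        fin    = (b0 >> 7) & 1              # final-fragment flag
        opcode = b0 & 0x0F                  # 0x1 text, 0x2 binary, 0x8 close, ...
        masked = (b1 >> 7) & 1              # client-to-server frames are masked
        length = b1 & 0x7F
        offset = 2
        if length == 126:                   # 16-bit extended payload length
            (length,) = struct.unpack_from("!H", buf, offset)
            offset += 2
        elif length == 127:                 # 64-bit extended payload length
            (length,) = struct.unpack_from("!Q", buf, offset)
            offset += 8
        mask_key = buf[offset:offset + 4] if masked else b""
        return fin, opcode, masked, length, mask_key

In Erlang's bit syntax the first two bytes of that header are essentially a single declarative pattern match, which is what makes it so nice for protocol code.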


> your software is very likely a daemon on a headless server/cloud somewhere

This is true of every Erlang system I have developed, so, I concur.


Then this could work for Elixir as well, which seems to be a little more approachable for the majority of programmers.


I dispute that Erlang has a huge barrier to entry. Although there is a larger barrier than for, say, Python or Ruby, I feel that Erlang gets an undeservedly bad rap, especially given the ROI.

I am no genius, but I was able circa 2008 to develop a significant production system in Erlang, while learning Erlang on the job over a period of 3-4 months. I had never programmed in a functional language before (C++ was my forte). In 2008, the tooling and environment surrounding Erlang was far less supportive than it is today, so the barrier now should be lower.

If you want to talk about a huge barrier to entry (for me, anyway), it's the Great Wall of Haskell. I found it a great deal harder to learn Haskell. I have only written one small utility in it and still don't claim to know the language. And that's after using Erlang for many years.

Also, these days, Elixir is reputed to lower whatever barrier to entry there is for Erlang.


Haskell is less about the "language" and more about the mindset. You simply have to program in a very different way, and it is hard to rewire your brain to do that. Once you learn Haskell, you will program differently in other languages as well, simply because you think differently.


> circa 2008 to develop a significant production system in Erlang

Is that WhatsApp by any chance? ;)


Not nearly that significant! I wish!


As far as I know, Bret Victor is looking exactly into that, check out his talk "The Future of Programming": https://www.youtube.com/watch?v=IGMiCo2Ntsc


Check out https://en.wikipedia.org/wiki/Oberon_(operating_system)

It never caught on but it was an interesting path not taken in terms of what an operating system could be.



Is it theoretically impossible to fit an interpreter for a dynamic programming language in the L1 cache of a modern chip?

(I understand there are physical constraints that prohibit super low-latency memory lookups (of unconstrained size) in 0+epsilon time (where epsilon is small))


Symbolics managed to fit their Lisp VM into the cache of DEC Alpha processors in the early nineties: http://pt.withy.org/publications/VLM.html


Thanks for the information.

> We built a prototype of the emulator in C, but it quickly became obvious that we could not achieve the level of performance desired in C. Examination of code emitted by the C compiler showed it took very poor advantage of the Alpha's dual-issue capabilities. A second implementation was done in Alpha assembly language and is the basis for the current product.

First pass in C. Final, in Assembly.

Chip at the time was a first-generation DEC Alpha AXP 500 which had a 512 KB B-cache and two 8 KB caches.

https://en.wikipedia.org/wiki/DEC_Alpha

Let's say it's the present day and you want to fit into a 256K L2. What language toolchains are available? How far can one go with JIT?


I think Lua, statically compiled against musl libc, can fit in 200KiB. Not in a (32KiB) L1 cache as the grand-parent comment asked, but in L2. There's also LuaJIT, which I think is only a bit bigger, I'm not sure ...


> Is it theoretically impossible to fit an interpreter for a dynamic programming language in the L1 cache of a modern chip?

I'm pretty sure Chuck Moore (yes, he's still around) would be able to fit the interpreter and an entire OS into the L1 cache with room to spare. Forth technically is an interpreter.


You can get a FORTH kernel in 2K words. It's also incredibly efficient for your own code since you basically have a dictionary and memory addresses. Thinking about it, in the old days dictionaries stored 8 character identifiers, which will handily fit in a 64 bit word. That means that the dictionary only needs 2 words per entry.

As you imply, the interpreter will be dwarfed by the code needed to talk to the rest of the OS. As an aside, this is why I was initially very excited about the JVM when Java first came around. Compiling down to a FORTH style language should give you pretty impressive benefits.

Virtual machines were very popular for a long time, but I'm not entirely convinced that we've really pushed the concept as far as it can go.
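
To make the dictionary-plus-data-stack idea above concrete, here's a toy Forth-style interpreter sketched in Python -- purely illustrative (a handful of made-up words, no threaded code, nothing like a real 2K-word kernel):

    # Toy Forth-style interpreter: a dictionary of named words and a data stack.
    stack = []

    def push(x): stack.append(x)
    def pop():   return stack.pop()

    dictionary = {
        "+":   lambda: push(pop() + pop()),
        "*":   lambda: push(pop() * pop()),
        "dup": lambda: push(stack[-1]),
        ".":   lambda: print(pop()),
    }

    def define(name, body):
        # A "colon definition": a new word made of existing words.
        dictionary[name] = lambda: run(body)

    def run(source):
        for token in source.split():
            if token in dictionary:
                dictionary[token]()      # execute the word
            else:
                push(int(token))         # literals go on the data stack

    define("square", "dup *")
    run("3 4 + square .")                # prints 49

The point is how little machinery the core needs: a dictionary lookup, a stack, and literals.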


The amount of Forth code you could put in a small space is amazing. One of the reasons a 128k Mac had Forth show up on it relatively quickly.

I remember back in the day someone was working on a Forth version of OpenStep. Weird, but there were some funky Forths then too. I so wish they had succeeded.


> Is it theoretically impossible to fit an interpreter for a dynamic programming language in the L1 cache of a modern chip?

The APL, J and K languages are known for being very fast; I vaguely remember one reason being given that their programs are so terse and high-level, they're compact enough to fit in cache. Not sure if the interpreter would though.


Also, APL, J and K have low interpretation cost, so you're pushing the metal close to its limits.


Even fitting the interpreter in cache, there's obviously still some overhead to interpreting instructions rather than executing them directly.

Also, I suspect most of the memory related slowdown with interpreters is due to the indirections in memory representation of data/code, not the interpreter itself falling out of cache.


There was a period when Opera had the fastest JavaScript engine of all the browsers. It was a stack-based runtime like all the others at the time and, as a side effect of being developed for mobile devices, was small enough to fit in cache. That was the key to the performance advantage. Then came V8 and everyone changed over to JITing compilers.


Maybe better: https://vimeo.com/82301919

A link to a transcript would be cool.

Edit: There's a transcript of the iPad question here: https://news.ycombinator.com/item?id=8857113


This is a distinction I first learned about working in France years ago. Without any real basis, I wondered whether it is their generally more precise use of language that made it a more obvious distinction for a French person to make. At the time the two words were more or less synonyms for me, but they have since become very distinct, especially when talking about software!


> their generally more precise use of language

This isn't really true, it's just a snobby idea the French have somehow successfully convinced us of. (It goes along with the idea that they have the most "refined" culture or something).


A lot of the specific terms in English come from common French vocabulary and are still very (very) close to the common words in the French spoken today. The everyday vocabulary in English is of Germanic origin. Actually, I think you can basically speak about anything using only Germanic-origin words.

When learning French and its vocabulary, English speakers will find a lot of similarities, but from the more formal side of their own vocabulary. That may lead English speakers to think French is more precise; I don't think the French have anything to do with this.

That's, BTW, a common mistake English speakers make when evaluating a French speaker's proficiency. The fact that I use rarely used words does not mean that I have a large vocabulary; it is just the opposite.


There's also "Anglish", where non-germanic influences are replaced. A funny sample of this is Uncleftish Beholding [0], a fictional textbook entry by sci-fi writer Poul Anderson.

[0] https://groups.google.com/forum/message/raw?msg=alt.language...


> When learning French and its vocabulary, English speakers will find a lot of similarities, but from the more formal side of their own vocabulary. That may lead English speakers to think French is more precise; I don't think the French have anything to do with this.

Worse, the French apparently teach young students to write in a way that they consider profound, and the Anglosphere considers imprecise drivel.


Do elaborate?


I can't really go into detail much, but my wife took intensive/immersion French in her school days. As she became fluent in basic spoken and casual written French, they taught her the French style of literary writing. She's the one who told me it's meant to be profound or deep, but it comes across to her Anglo brain as vague and, well, bad at saying anything at all.


I beg to differ. As a young adult I became enamored with English for its simplicity in structure and vocabulary; I was annoyed by French redundancy and diversity, and almost started to think in English.

A few years later, English feels restrictive and too simple. French aggregated many influences over centuries at a crossroads, and it seems it kept a lot of them in order to add subtle layers of information, using particular sets of words that fit together well to propel metaphors and other succinct yet precise descriptions of the world.

Now, I say that on average, about mainstream incarnations of both English and French. Surely you can find poetic and tailor-made English wording; in France it seemed part of the culture, but recently it's been on the way out, and only elders still speak a bit that way.


> French aggregated many influences over centuries at a crossroads

This describes English perhaps more so.

> and it seems it kept a lot of them in order to add subtle layers of information, using particular sets of words that fit together well to propel metaphors and other succinct yet precise descriptions of the world.

It's your native language, so of course you might think that (especially when combined with this snobby French cultural ideal).


I don't appreciate your comment. I left my own native tongue for a reason, and came back to it for one too. I'm not even boasting superiority. I don't give the slightest damn whether you talk with metaphors or whatever figure of speech there is; I don't submit to cultural ideals or snobbery. It's a point of view on how one likes to communicate. And I very, very rarely encountered it, even when talking, reading and watching almost entirely English sources for many months.

Also, how can England be more influenced by other countries, as an island? The naval conquests, the Commonwealth? I don't know history much, but it seemed to me they kept a very cohesive identity, except for the not-so-small French/English feedback loop.


dat baguette tho, mmm


In what sense are other languages less precise?


The fool complicates the simple, while the wise simplify the complex.


And evolution laughs at us.


He mentions a Microsoft Office bug that's been around since the 80s. Is there any more information about this?


Pah, "complex" is just latin for "put together". Take it apart, divide and rule.


More like, divide and be strangled by the huge web of interrelationships... :)


This is a fun one. But then again most Alan Kay talks are fun.


Alan Kay has had a few decades to empirically demonstrate that "we" have willfully made it complicated. I don't believe he has done so.


Have you used OMeta? That came out of VPRI, I believe: https://en.wikipedia.org/wiki/OMeta. It is a really nice approach to constructing parsers and interpreters. Then there is "Open, extensible object models": https://www.recurse.com/blog/65-paper-of-the-week-open-exten.... Again by the same folks.

Those are the ones that come to mind when I think of simple and very powerful tools. There are many others. So I think Alan Kay and friends have demonstrated that we have made things complicated for dubious reasons.
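
For anyone who hasn't read the OMeta paper: the core idea is PEG-style pattern matching generalized beyond character streams. This isn't OMeta syntax, just a rough Python sketch of the combinator idea underneath (names are mine):

    # PEG-style combinators: each parser takes (text, pos) and returns
    # (value, new_pos) on success or None on failure.
    def lit(s):
        def parse(text, pos):
            end = pos + len(s)
            return (s, end) if text[pos:end] == s else None
        return parse

    def seq(*parsers):
        def parse(text, pos):
            values = []
            for p in parsers:
                r = p(text, pos)
                if r is None:
                    return None
                value, pos = r
                values.append(value)
            return values, pos
        return parse

    def alt(*parsers):                    # ordered choice, the defining PEG operator
        def parse(text, pos):
            for p in parsers:
                r = p(text, pos)
                if r is not None:
                    return r
            return None
        return parse

    # greeting <- ("hello" / "hi") " world"
    greeting = seq(alt(lit("hello"), lit("hi")), lit(" world"))
    print(greeting("hi world", 0))        # (['hi', ' world'], 8)

OMeta's twist is letting the same rule language match over arbitrary objects (e.g. ASTs), not just characters, which is why it scales from tokenizers up to whole interpreters and compilers.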


I've read the OMeta paper, the object models paper, and most of the VPRI papers. I'm generally sympathetic to their point of view -- I hate bloated software.

But what has been built with these tools? If OMeta were really better, wouldn't we be using it to parse languages by now? They already have a successor, Ohm, which is interesting, but I also think it falls short.

I'm searching desperately for a meta-language to describe my language, so I would use it if it were better.

I think they generally do the first 90%... but skip out on the remaining 90% (sic) you need to make something more generally useful and usable.


I think this is the case with all software. None of it is 100% or even 95%. This is why I've given up on learning anything language specific. If you understand the concepts then you'll be able to re-create the pieces you need in your language of choice because most of the time the other 5% or 10% is context dependent.


Yes I think you're expressing the same sentiment as Knuth, who I quoted in my blog here:

http://www.oilshell.org/blog/2016/12/27.html

I tend to agree, and most of my shell is either written from scratch, or by copying and adapting small pieces of code that are known to work. I take ownership of them rather than using them as libraries.

That is even the case for the Python interpreter, which is the current subject of the blog.

I'm not quite sure, but I think OMeta falls even shorter than the typical piece of software. I'm not saying it's not worthwhile -- it seems like a good piece of research software and probably appropriate for their goals. But to say it generalizes is a different claim.


I think JavaScript, Node.js, Electron, HTML and CSS demonstrated that "we" have made things complicated.


This is the first thing that came to my mind while watching the video. Web development has become so overly complicated. I believe it is mostly due to working around limitations of the platform and the lack of a de facto standard of how web development should be done. In contrast for smartphone development, largely speaking, the way to do it is the way Android and Apple provide. The web has no single vendor to dictate that and what we have now is a result of that.


I think a lot of it is a lack of documentation. This has two effects.

1) People just get started and get used to not really understanding the interface they are working with. Who hasn't felt like they were playing whack-a-mole with CSS layout? This means a lack of conceptual clarity isn't notable.

2) Nobody ends up noticing just how complex the rules are.

It was 9 years from when I started looking for a tutorial like http://book.mixu.net/css/ to when I actually found it. That is...really quite bad.


I had the same thought, if he had a solution then he should have it by now.


He actually did lead a project which took this on: STEPS. (I think this is the last annual report on the project: http://www.vpri.org/pdf/tr2012001_steps.pdf) They did build a functional proof of concept which was significantly smaller/less complex than Smalltalk/Squeak which were predecessor projects he and his team worked on. Unfortunately, it's not based on the trinity of files, curly brackets and semicolons so it's not likely to take the mainstream computing world by storm.


His critical error is evident in his comparative analysis that places Physics and Programming on the same level. The systems that underlie the natural sciences are givens. The entire soup of software complexity boils down to the fact that software engineering must first create the 'terra firma' of computing. That is the root cause of the complexity in software: it lacks a physics.


He tackles your question during the Q&A. He (and his PhD student, Dan) talk about how so much of the effort to create 'terra firma', as you call it, is caused by the terrible hardware sold to us by Intel. He argues that much of the hardware we have is just software that's been crystallized too early. If he had a machine more like an FPGA he could build all of these abstractions in powerful DSLs right down to the metal.


I think it's the other way around:

In physics, we don't know what the fundamental rules are, we can only see complicated outcomes and have to infer (guess) what the rules might be.

In computing, we know what the fundamental rules are (universal computation; whether that's Turing machines, lambda calculus, SK logic, etc., they're all equivalent in power), but we have to deduce what the complicated outcomes are.
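
As a tiny illustration of the SK part of that claim, here's a Python sketch; the identity function falls out of S and K alone:

    # SK combinators as curried Python functions.
    S = lambda x: lambda y: lambda z: x(z)(y(z))
    K = lambda x: lambda y: x

    I = S(K)(K)     # identity, derived rather than defined
    print(I(42))    # 42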


>In computing, we know what the fundamental rules are

In a limited way. Because we're making systems that involve people. Important and relevant aspects of human nature must go far deeper than our present understanding.


There's been numerous logics and unified methods for specifying, synthesizing, or verifying software. The problem wasn't that we didn't have one. The problem is intrinsic complexity of the domain. It leaks through in all the formalisms where the formalism gets ugly if you want to automate it and it gets clean only with much manual labor.


This is a good line!



