The Enso link is interesting because there was another Enso project over a decade ago by Aza Raskin. That project, ironically, asserted the superiority of natural language, and thereby of text, as a user interface.
Text is superior (as in programming languages, not natural language) in the general case, but there may be exceptions: GUI builders, mock-ups, diagrams, and tricking non-programmers into programming (iOS Shortcuts, Yahoo Pipes, Scratch/Alice).
I wonder how many of them are used by people beyond their creators/contributors, and, for those that do have actual users and customers, who those users are (not theoretical profiles, but existing ones).
This is an actual question; if someone has an answer I would gladly learn more about it.
Yep. Erlang/BEAM PIDs, processes, supervision trees, and supervisor strategies are underrated. They should be built into the OS. Honestly, every "program" should be a function, return structured data, and be callable from anywhere. Logs should also be structured rather than serialized as lines. Lines require parsing, and parsing is a waste of electricity and time because serializing to lines discards the boundaries between values, which the parser then has to reconstruct. The current paradigm is unwise because it's slow and inefficient unless you actually need lines, and you rarely need just lines. Look at how nasty bash, grep, awk, sed, cut, head, tail, and tr scripts become.
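To make the structured-vs-lines point concrete, here's a minimal Python sketch (the event names and fields are made up, and the two halves would normally live in separate programs joined by a pipe): the producer emits one JSON object per event, and a downstream consumer filters on fields directly instead of guessing at delimiters with awk/cut.

    import json, sys

    # Producer: emit one JSON object per event instead of a free-form text line.
    def log(event, **fields):
        print(json.dumps({"event": event, **fields}))

    log("login", user="alice", latency_ms=42)
    log("login", user="bob o'brien", latency_ms=1750)  # spaces and quotes survive intact

    # Consumer (a separate downstream filter, e.g. `producer | consumer`):
    # the value boundaries are preserved, so there is no delimiter guesswork.
    for line in sys.stdin:
        rec = json.loads(line)
        if rec["event"] == "login" and rec["latency_ms"] > 1000:
            print(rec["user"])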
Also, debugging and profiling: I still can't believe that many programming environments / IDEs lack the features and debugging integration that ancient Borland Pascal and MSVC had.
I'd settle for software that could reasonably save and restore its own state. Half the time when I close the laptop lid I have no idea what will happen when I open it again. More often than not some application and its state just disappears into the computational ether.
One of Alan Kay's slogans is "The computer revolution hasn't happened yet" and he's mostly right. What we have are glorified calculators and media players instead of programmable devices that can augment intelligence, and it's because the main runtimes/operating systems on these devices are not dynamic enough. A great deal of effort is required to extend and modify them to fit personal use cases. If you're not a programmer then you might as well just give up because the technical barrier is too high. So it's not surprising that most people have a negative view of personal computers and would rather let Apple and friends manage things for them, even if that means giving up a great deal of control and privacy to a third party which is mostly interested in making as much money as possible.
The larger implication of all this is that many potentially innovative use cases are not feasible because the required effort is too high. Every innovative application has to essentially re-invent its own dynamic runtime and shoehorn it into the existing non-dynamic setup.
macOS and iOS handle this fairly well with the defaults system and prefs, but software developers have to understand how to save and restore state properly, and not fight it.
Also, I think complete process state (secured by a kernel decryption key) should be able to be saved and restored. Open I/O file descriptors would probably be dropped if they represent remote resources, but code should be made resilient enough to reconnect and retry in the event of errors.
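The "reconnect and retry" half is easy enough to sketch. Something like this wrapper (the names, backoff policy, and example endpoint are just illustrative) rebuilds a connection to a remote resource whenever the old descriptor turns out to be dead after a restore:

    import socket, time

    def with_reconnect(make_conn, use, attempts=5, delay=1.0):
        """Run use(conn); if the descriptor has gone stale (e.g. after a
        suspend/restore), rebuild the connection and try again with backoff."""
        last_err = None
        for attempt in range(attempts):
            try:
                conn = make_conn()
                try:
                    return use(conn)
                finally:
                    conn.close()
            except OSError as err:          # covers ConnectionError, timeouts, etc.
                last_err = err
                time.sleep(delay * (2 ** attempt))
        raise RuntimeError("remote resource unavailable after retries") from last_err

    # e.g. with_reconnect(lambda: socket.create_connection(("example.com", 80)),
    #                     lambda conn: conn.sendall(b"HEAD / HTTP/1.0\r\n\r\n"))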
Logging is a good example of how the opposite of what you say tends to be true. Have you ever wondered why Windows logs are so useless? It's not Windows-specific; when you look at journald you'll see plenty of structured junk, and the actually useful parts are plain text.
“Every program as a function” would be a disaster for reliability and security. There’s a reason no mature operating system does that, apart from tiny embedded ones.
OP is right. Functional programming is more secure and reliable than imperative programming. There are no buffer overflows when code is formally specified and verified. It's next to impossible to do this for imperative code but very easy to do for functional code [0].
I'm glad I happened across the item before you made the change! The transcript is the best transcript I've ever seen of a talk.
I think the ideas are very interesting. I don't agree with his condemnation of Docker and single-threaded programming, but he's certainly right about the value of being able to kill threads in Erlang, and about the importance of being able to fix things that are broken, and about our computers cosplaying as PDP-11s (and the consequent teletype fetishism).
I hadn't made the connection between Sussman's propagators and VisiCalc before. I mean I don't think Bricklin and Frankston were exposed to Sussman, were they? They were business students? But if not, it's certainly a hell of a coincidence.
My defense of single-threaded code and aborting is that the simplest way we've found so far to write highly concurrent systems is with transactions. A transaction executes and makes some changes to mutable state based on, ideally, a snapshot of mutable state, and if it has any error in the middle, none of those changes happen. So it executes from beginning to end, starting from a blank (internal) state, and runs through to termination, unless halted by a failure, just like the "dead programs" Rusher is complaining about. You put a lot of these transactions together, executing concurrently, and you have a magnificent live system, and one that's much easier to reason about than RCU stuff or even Erlang stuff. This is what the STM he praises in Clojure is doing, and it's also how OLTP systems have been built for half a century. Its biggest problem is that it has a huge impedance mismatch with the rest of the software world.
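A minimal sketch of that execution model in Python (a single optimistic cell, nothing like a full STM, and every name here is mine): each transaction reads a snapshot, computes against it with no side effects, and either commits atomically or retries; an exception in the middle commits nothing.

    import threading

    class Cell:
        """One mutable slot with optimistic concurrency control."""
        def __init__(self, value):
            self._value, self._version = value, 0
            self._lock = threading.Lock()

        def snapshot(self):
            with self._lock:
                return self._value, self._version

        def commit(self, new_value, read_version):
            with self._lock:
                if self._version != read_version:
                    return False            # somebody else committed first
                self._value, self._version = new_value, self._version + 1
                return True

    def transact(cell, fn):
        """Run fn against a snapshot; abort cleanly on error, retry on conflict."""
        while True:
            value, version = cell.snapshot()
            new_value = fn(value)           # an exception here commits nothing
            if cell.commit(new_value, version):
                return new_value

    counter = Cell(0)
    # Many threads can safely run transact(counter, lambda n: n + 1) concurrently.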
I've said before that to get anything done you need some Dijkstra and some Alan Kay. If you don't have any Dijkstra in you, you'll thrash around making changes that empirically seem to work, your progress will be slow, and your code will be too buggy to use for anything crucial. If you don't have any Alan Kay in you, you'll never actually put any code into a computer, and so you won't get anything done either except to prove theorems. Alan Kay always had a fair bit of Dijkstra in him, and Dijkstra had some Kay in him in his early years before he gave up programming.
Ideologically, Rusher is way over on the Kay end of the spectrum, but he may not be aware of the degree to which the inner Dijkstra he developed in his keypunch days allows him to get away with that. The number of programmers who are ridiculously unproductive with Forth (i.e., almost all of us) is some kind of evidence of that.
Interestingly he doesn't talk about observability at all, and I suspect that observability may be a more useful kind of liveness for today's systems than setting breakpoints and inspecting variables, even with Clouseau.
Data Rabbit, Maria.cloud, Hazel, livelits, and Clerk sound really interesting.
I think it's unfortunate that you switched the URL; even for people without hearing impairment, transcripts are far preferable to videos, and this is a really excellent transcript. With a couple of screenshots, it would be better than the video in almost every way, though a few of the demos would lose something. (The demos start at 14'40".) The sort of people who were making worthless comments because they were confronted with a webpage formatted in an unfamiliar way won't suddenly start making insightful comments because there's a video link; they won't make any comments at all. So it's a mistake to cater to them and damage the experience for people who might have something to contribute. Video links make for shallow conversations.
"The sort of people who were making worthless comments because they were confronted with a webpage formatted in an unfamiliar way won't suddenly start making insightful comments because there's a video link; they won't make any comments at all."
I think "not making a comment if they don't have anything to say about the content" was the goal of the change. Reduces the noise on the page (I seldom go to page 2) and probably also the moderation burden.
I understand that reducing the worthless drive-by dismissals based on unfamiliar formatting was the objective of the change. My complaint is that there are a lot of comments like mine, which I hope is not worthless and which is based on reading the entire transcript as well as watching parts of the talk, which will never be made because you can't write those comments if you just watch the video. Also, watching the video takes a long time, so many people won't bother.
I would have liked to read those comments, even if there were worthless drive-by dismissals underneath them. I think it's bad to eliminate thoughtful discussion because the conditions necessary to produce it also cause discomfort in our less thoughtful brethren.
You can't scroll back and forth through the video to compare different points where he's talking about related themes, you can't text-search to find where or whether he mentioned a particular theme, and when you're reading the transcript, it pauses automatically when you stop to think, so thinking is the default. With the video pausing requires effort, so the default is to not think.
There are things that work better in some sort of video. GUI demos, mechanical movements, some kinds of data visualizations, and facial expressions, for example.
Interesting thought on Forth. I'm also unproductive in it, but I think through no fault of the language. I simply haven't had the time to build a true Forth to solve a problem. I usually have some data, some transformations, and maybe some API calls to make to an application and a database. Not really a good use for Forth, at least not time-wise.
I got to being about 25% as productive in Forth as in C once I learned to stop trying to use the stack for local variables. Maybe with enough practice I might get to being as productive as in C, or even more so. I doubt I'd get to being as productive as in Python, which I think is about 3× as productive as C for me.
I think that if I were, say, trying to get some piece of buggy hardware working, so that most of the complexity of my system was poking various I/O ports and memory locations to see what happened, Forth would already be more productive than C for me. Similar to what Yossi Kreinin said about Tcl:
Tcl is also good for that kind of thing, but Tcl is 1.2 megabytes, and Forth is 4 kilobytes. You can run Forth on computers that are two orders of magnitude too small for Tcl.
So I think we shouldn't evaluate Forth as a programming language. We should think of it as an embedded operating system. It has a command prompt, multitasking, virtual memory, an inspector for variables (and arbitrary memory locations), and a sort of debugger: at any time, at the command prompt, you can type the name of any line of code to execute it and see what the effect is, so you can sort of step through a program by typing the names of its lines in order. Like Tcl and bash, you can also program in its command-prompt language, and in fact build quite big systems that way, but the language isn't really its strength.
But there is an awful lot of software out there that doesn't really need much complicated logic: some data, some transformations, and maybe some API calls to make to some motors or sensors (or an application and a database). So it doesn't really matter if you're using a weak language like Tcl or Forth because the program logic isn't the hard part of what you're doing.
And it's in that spirit that Frank Sergeant's "three instruction Forth" isn't a programming language at all; it's a 66-byte monitor program that gives you PEEK, POKE, and CALL.
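For flavor, here's what the host side of such a monitor might look like in Python. The opcode bytes, address width, and framing below are placeholders I made up, not Sergeant's actual protocol; the serial I/O assumes the pyserial package and a made-up device path.

    import serial  # pyserial

    OP_PEEK, OP_POKE, OP_CALL = 0x01, 0x02, 0x03   # hypothetical opcodes

    class Monitor:
        """Host side of a peek/poke/call monitor living on a tiny target."""
        def __init__(self, port="/dev/ttyUSB0", baud=9600):
            self.link = serial.Serial(port, baud, timeout=1)

        def _send(self, op, addr):
            self.link.write(bytes([op]) + addr.to_bytes(2, "big"))

        def peek(self, addr):
            self._send(OP_PEEK, addr)
            return self.link.read(1)[0]

        def poke(self, addr, value):
            self._send(OP_POKE, addr)
            self.link.write(bytes([value]))

        def call(self, addr):
            self._send(OP_CALL, addr)

    # Everything else (interpreter, compiler, test harness) lives on the host,
    # built out of these three words, e.g.: Monitor().peek(0x1000)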
On the other hand, if the computer you're programming has megabytes of RAM rather than kilobytes, and megabits of bandwidth to your face rather than kilobits, you can probably do better than Forth. You can get more powerful forms of what Rusher is calling "liveness" than Forth's interactive procedure definition and testing at the command prompt and textual inspection of variables and other memory locations on demand; you can plot metrics over time and record performance traces for later evaluation. You can afford infix syntax, array bounds checking (at least most of the time), and dynamic type checking.
Well, Yossi is probably a better programmer than I am, but I think I'm probably better at Forth than he was, and I think he was Doing It Wrong. And he sort of admits this: he was doing big-bang compilation rather than poking interactively at the hardware, he left out all the metaprogramming stuff, and he was trying to use the stack for local variables because he designed the hardware with a two-cycle memory fetch and no registers for local variables:
: mean_std ( sum2 sum inv_len -- mean std )
\ precise_mean = sum * inv_len;
tuck u* \ sum2 inv_len precise_mean
\ mean = precise_mean >> FRAC;
dup FRAC rshift -rot3 \ mean sum2 inv_len precise_mean
\ var = (((unsigned long long)sum2 * inv_len) >> FRAC) - (precise_mean * precise_mean >> (FRAC*2));
dup um* nip FRAC 2 * 32 - rshift -rot \ mean precise_mean^2 sum2 inv_len
um* 32 FRAC - lshift swap FRAC rshift or \ mean precise_mean^2 sum*inv_len
swap - isqrt \ mean std
;
I've done all these things (except designing the hardware) and I agree that it can be very painful. I did some of them in 02008, for example: https://github.com/kragen/stoneknifeforth
The thing is, though, you can also not do all those things. You can use variables, and they don't even have to be allocated on a stack (unless you're writing a recursive function, which you usually aren't), and all the NIP TUCK ROT goes away, and with it all the Memory Championship tricks. You can test each definition interactively as you write it, and then the fact that the language is absurdly error-prone hardly matters. You can use metaprogramming so that your code is as DRY as a nun's pochola. You can use the interactivity of Forth to quickly validate your hypotheses about not just your code but also the hardware in a way you can't do with C. You can do it with GDB, but Forth is a lot faster than GDB script; that's not saying much, though, because even Bash is a lot faster than GDB script.
But Yossi was just using Forth as a programming language, like a C without local variables or type checking, not an embedded operating system. And, as I said, that's really not Forth's strength. Bash and Tcl aren't good programming languages, either. If you try to use Tcl as a substitute for C you will also be very sad. But the way they're used, that isn't that important.
So, I don't think Forth is only useful when you have the freedom to change the problem, though programs in any language do become an awful lot easier when you have that freedom.
There is a strawman argument in the talk about the limitation (or now, default) of 80-character-wide consoles, which is presented as "proof" that we're still living in the past.
80 characters (plus or minus 10) has a justification outside the history of teletypes and VT100s: that's the optimum for readability.
Otherwise, there are good ideas in the talk, but this particular one rubbed me the wrong way.
In a book, line breaks don't necessarily have any meaning. In poetry they might, but in a book they probably don't. In programming, line breaks OFTEN have a meaning.
Breaking up a sentence over multiple lines can TOTALLY change how it's interpreted by the computer.
Using methods to circumvent this which allow you to have a computer interpret two lines as if it's a single line can confuse and change how it's interpreted by the programmer, at least momentarily.
Refactoring your code to ensure it never exceeds 80 characters can ALSO make code harder to read, especially in modern languages that tend to be more verbose than what was seen in the TTY days.
Expanding the limit to even 120 characters and aiming the average line to be significantly shorter allows you to have a more consistent readable style across languages where you aren't doing nearly as much weird crap to force source code to fit an arbitrary character limit. You STILL have to do quite a bit of rewriting to force code to fit 120 characters, but this may be worth the readability tradeoff.
This is an argument I’ve heard over and over again throughout my coding career, especially from JavaScript and PHP developers. Interestingly enough, there was a noticeable overlap between this mindset and “clever” code, composed of endless chaining, nested statements and/or internal recursion. They also never wrote code comments – at all. The defense being: “I think this way and it’s easier for me!” Regarding comments: “The code IS the documentation!”
Of course, they were never around 6 months later to prove if they still understood their gobbledygook then. They never had to refactor anything.
I have very rarely encountered a piece of code that would be hard to fit into 80 characters width while being readable. In fact, if done correctly, it forces you to break stuff up into manageable pieces and simple statements, to be explicit and often verbose.
But if this runs counter to a messy, ego-centric style a developer is used to and there’s no one to rein them in, it’s what you get.
I can empathise - when I started out I was all about clever code. But the older I get, and the more code I have to deal with, the more I value simplicity.
The 80 character limit is a really good signal that my code is too complex and needs simplifying. The most common occurrence these days is when I have too many arguments to a function, and either need to curry it or create an options data struct. My code is cleaner for this.
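As a sketch of the options-struct version of that fix (every name below is invented for illustration): bundle the long tail of arguments into one value, and the call sites stop fighting the line limit while every knob gets a label.

    from dataclasses import dataclass

    # Before: render(surface, text, x, y, font, size, color, wrap, align, shadow)

    @dataclass(frozen=True)
    class RenderOptions:
        font: str = "sans"
        size: int = 12
        color: str = "black"
        wrap: bool = True
        align: str = "left"
        shadow: bool = False

    def render(surface, text, x, y, opts=RenderOptions()):
        ...  # draw `text` on `surface` at (x, y) using `opts`

    render(None, "hello", 10, 20, RenderOptions(size=18, shadow=True))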
And there have been many, many, times that a comment from past me to 3-months-later me has saved me an hour of reading. Code, even intentionally simple code, is not as self-documenting as it appears to be when we're writing it.
I used to be clever, and bumped up against the 80 character limit. Now it's the opposite: my avoidance of cleverness is what is causing me to bump up against the 80 character limit.
There are two reasons for lines to get long: (1) cleverness, or (2) long names.
I used to hate long names, and still can't stomach most Java code. But I've learned that if you want understandable code, you can either have short names and long comments, or long names and hardly any comments. (But please, no longer than is necessary to communicate what needs to be communicated. addKeyAndValueToTable() tells you nothing of use over add() or put(). setForwardingPointerWhileTenuring() does communicate some important things that set() or setPtr() would not.)
yeh, another signal that you're over-complicating things.
I stick to VerbNoun function naming, and usually code in Go where short (1-letter short) variable names are idiomatic.
If I can't VerbNoun a function, then I probably need to rethink it.
Part of the reason I don't do anonymous/lambda functions too happily - it's actually harder to read a stack of anonymous functions than a stack of VerbNoun named functions.
Indeed. Comments should document intent and – wherever applicable – approaches taken that did NOT work and why. Even awkward code is sometimes okay, if there’s a comment with a clear justification. Saves hours of pointless busywork following down all the paths in a medium to large code base.
area = manager
    .getUser()
    .getPhone(PhoneType.mobile)
    .getAreaCode();
I see this form a lot in Java and Kotlin code, especially in Kotlin where a single function is just assigned to a chain of functions like the one above.
I have spent much of my career reading and understanding code written by others. Comments have helped me twice in that time.
Many other times the comments have made understanding the code harder.
My motto is that comments are the only part of the code you can be sure are not tested. At best they record the intent of an author at some unknown point in the past.
Quite the opposite, limiting myself to ~80 characters per line improved the readability of my code. Shorter lines mean less complexity per line and more labelled values, which then again reduces the need for comments.
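To make "more labelled values" concrete, here's a contrived Python before/after (all the data and names are invented): same computation, but the shorter-lined version gives every intermediate result a name.

    TAX_RATE = 0.21
    order = {"discount": 5.0,
             "items": [{"price": 10.0, "qty": 3, "cancelled": False},
                       {"price": 4.0, "qty": 1, "cancelled": True}]}

    # Before: one dense, hard-to-scan line.
    total = sum(i["price"] * i["qty"] for i in order["items"] if not i["cancelled"]) * (1 + TAX_RATE) - order["discount"]

    # After: shorter lines, and every intermediate value now has a label.
    live_items = [i for i in order["items"] if not i["cancelled"]]
    subtotal = sum(i["price"] * i["qty"] for i in live_items)
    total = subtotal * (1 + TAX_RATE) - order["discount"]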
Expanding to 120 character lines means I can only have one column of code on my laptop screen at the same time and only if I maximise the window. It also means I can't have 3 columns on my desktop screen. And no, horizontal scrolling or line wrapping is not an option, it's a nightmare.
Yes, I find it harder to read code at lengths much longer than this. 120 is quite difficult. It also makes it even harder to read if you split the screen vertically, which I do all the time.
That said, I don’t think it should be a hard limit, and it’s fine if a line’s a bit over, +/- 10 like you said. Certainly not something that we should contort into multiple lines just to keep under a hard limit. Unfortunately, a few auto formatters only do hard limits - it’d be interesting to see how an acceptable interval around the limit would work.
Plus, I’ve noticed the limit makes more of a difference for comments than code so I try to keep comments under that. The written word appears more sensitive to line length.
For my personal Python projects I set a hard limit in the 94-96 range. That's wide enough that I actually adhere to it instead of just ignoring it.
PEP8's and Google style guide's limits of 79 and 80 are way too narrow for a language with 4-space indentation. However, PEP 8 says that "it is okay to increase the line length limit up to 99 characters", while Google's 80 is just a soft limit that can be broken in certain cases like long URLs.
How do you envision an interval around a limit? The fact is that you have to draw the line somewhere. If your interval is +-20, then setting a "limit" of 80 is really just a hard limit of 100.
By letting the code formatter exceed the limit if it allows for more readable formatting in certain cases. Going for a 100% hard limit means sometimes it'll shuffle chunks of code around because of 1-2 characters and that just doesn't make a lot of sense.
Or in other words, by formatting the code more like a human than dumping the source tree with a blind set of rules. If Copilot is possible then so is an AI model able to consider how code actually looks on the screen.
It's related to how reading and the human eye work. For each eye fixation, you take in about 5-10 characters. The optimal line length for books is about 60-75 characters, which takes somewhere between 6-15 fixations to scan.
Code is not like books: most lines will be shorter than the "limit", there's indentation, you're typically reading monospaced characters on a screen, and you have syntax highlighting, etc. So, bumping the length up to 80 characters is still okay, as is the occasional outlier. But regularly writing 100-120 characters-long LoCs will definitely impact readability.
Optimal for reading prose texts. I believe experiments have shown the optimal line width for reading speed and comprehension to be about 50-60 characters.
But as far as I know this has only been researched for reading prose. I doubt the result will translate directly to reading source code, which is read in a completely different way.
Assuming you're using a language where indentation is either mandated (e.g. Python) or recommended (e.g. C with "One True Brace" style), with 4-space indentation (here again, no experimental evidence, but this seems to be the norm nowadays), and you don't abuse cyclomatic complexity in your functions / methods, you have, let's say, between 2 and 4 indents on the left, so 80 is close to the optimum.
Note: up to 100 is probably still OK from a readability POV. But this assumes that all readers are able to increase their window size from the default 80. That's probably true for people using GUIs, but may still be problematic for people with sight issues (e.g. people over 50 or with more serious medical conditions...).
video is a really bad format for this kind of content, imho. I can read faster than the video can explain. Even at 2x or 3x speed, it's still faster to read, especially if I'm only skim-reading to find interesting bits.
Like Wittgenstein, I sometimes think of our [scientific and] technological age as a bedazzlement. Jack Rusher, on the other hand, seems to point toward making technology transparent (an end in itself) and is incredibly inspiring for it. I love his Vector Field III (2017)[1] generative art piece.
I think his take on “debug-ability > correct by construction” has to assume that you understand the problem, or can come to understand it. In my experience this works fine when you have a mostly procedural process on a single machine. Set some watchers, step through execution, pause to inspect state. It is much harder to do with concurrency. And damn near impossible with distributed systems.
As someone who has claimed to use these kinds of tools to verify protocols, I’m curious in what cases he would break this preference for debugging over reasoning. Or is the contradiction intentional?
I was big into Common Lisp for years and mostly into Haskell and Lean now. I still think debugging is fine and useful to do but I get the most bang out of being able to reason about my program before I run it. Once you have the definitions and proofs in place running the code is fine I guess but all the work is done.
Whereas with more dynamic systems the work has barely begun: now I need to bump into all of the errors in my code while it is running.
There are times when I want both and I don’t think the computing world needs to pick one and only one.
A dead program is one that can't be changed after it's running.
In the talk he discusses what are called "live programming" environments, as exemplified by Smalltalk and some implementations of Lisp. Instead of restricting programmers to writing code and then compiling/running it, live programming environments let you modify the program after it has started running.
If you want to make a change, you just modify the relevant bits of code and the program will immediately start using them. There is no need for a build step and no need to restart the program (which would involve losing any state that's in memory).
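A toy approximation of that feel in Python, typed at an ordinary REPL (real live environments like Smalltalk or a Lisp image go much further than this): a background loop keeps its state and picks up a redefined function on its next iteration, with no restart.

    import threading, time, itertools

    def handler(n):
        return f"tick {n}"

    def loop():
        for n in itertools.count():      # `n` is state that survives the edit
            print(handler(n))            # `handler` is looked up by name each call
            time.sleep(1)

    threading.Thread(target=loop, daemon=True).start()

    # Later, in the same REPL session, while the loop is still running:
    def handler(n):
        return f"TICK {n} (new behaviour)"
    # The next iteration prints the new message: no rebuild, no restart, no lost state.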
How would that work in real life? It sounds nice in theory, but as the saying goes, "in theory, there is no difference, in practice, there is."
One programmer fixing a bug, one adding a new feature, are both working on the same program at the same time? Or is each one working on their own copy? And how does the merge happen if they're working on separate copies? How is the updated code moved to production? How is this "live coding" supposed to work?
* There's still a notion of production. Most people work on dev environments just like today. You do need the ability to merge code in some reliable form from one environment to another. The Clerk project mentioned in the talk does this.
* Dev environments are more fluid. You can debug issues faster because you spend less time getting to arcane parts of your program after every restart.
* It is possible to live-edit production for important incidents. It's very much a weapon of last resort, you have to be super careful, and you probably want to rehearse in staging the way NASA does with their rovers. But it has the promise to reduce the amount of time your customers are impacted in major incidents.
> It sounds nice in theory, but as the saying goes
Listen to Ron Garret’s interview on Corecursive.[1] He sent verified Lisp code to Mars at NASA. The code failed but the debugger popped up and they were able to recover from it. Look for “Debugging Code in Space” and “Sending S-Expressions” in the time stamps.
By using source code and version control. It's not magic, check out Pharo + Iceberg (their integration with git). You still end up with a live environment for much of your work, but in a way that still works well for collaboration. Just don't be foolish and think, "Oh, I can redefine this live and never commit it anywhere and that'll be fine." That's not very smart, and you don't want people to think you're an idiot, so don't do it.
> That's not very smart, and you don't want people to think you're an idiot, so don't do it.
Yeah, that works about as well as saying you don't need tests because it's not very smart to write bugs. You need processes to enforce these things at scale.
I mean, was it that hard to get your team to adopt git or another version control system? I've only had one team that struggled with that, and they struggled with a lot of things. Everyone else adapted quickly to it. Working in Smalltalk would be no different in that regard. It's just as easy and just as hard as working with git in C# or another language. Someone forgets to check in a new source file, everyone else has a broken build. It's obvious quickly and gets addressed.
A la Emacs, I'd say. Emacs is basically a live environment where you can edit text. While you're running it, you can add new features on the go, then save the modifications once you're happy with them. Imagine if you open apple notes and wanted to change a few things, maybe adding an interface to some services. You'd just open the live environment, code your changes, and voilà.
Smalltalk, Lisp, Scheme, Erlang, Haskell, OCaml, Java, .NET, C++ (via VS, Live++, Unreal) allow for this kind of interactivity, where Smalltalk and Lisp variants win in tooling.
Some Lisp variants, Erlang, and Java (via JMX beans / JVMTI) also allow you to connect to a running application in production and change its behaviour, although in Java's case only a subset of changes is possible.
The sound in this video is pretty bad, but several years ago I did a talk showing how this development style works on a real project, and I think it may answer some of these questions. https://www.youtube.com/watch?v=bl8jQ2wRh6k
Any claim that you could make about the possible states of the program by looking at the program itself has now been completely eliminated. Losing the state that is in memory is not universally desirable, but it is almost universally desirable.
Version control can be built into the system. One click to deploy the change, one click to revert it. (You could even have automatic one-box/A-B deployment of a change within code)
You end up building an ad-hoc, informally specified, slow implementation of half of a regular development environment "inside" your system. These "closed world" systems can do some amazing things, but it's very rarely enough to make up for not being able to work with standard tools from outside the ecosystem.
Sounds like a reliability and auditability nightmare. And, in securing software systems, full bill of materials and even reproducible builds are used to lock in the software to known good versions. How could software that can change thanks to the whim of a dev in the ops environment ever be considered reproducible? Wouldn't the result be just hugely fragile code state with no known source?
Why can't a "live coding" interface have auditing, access control, etc, just like any other interface? In production, disable it completely or limit its access to highly privileged users – but developers can still enjoy it in their local environments.
A good implementation of the idea would incorporate version control – when a service starts, it knows which version of the code it is running (commit hash, etc), and it has an introspection interface to report that version. If any "live coding" changes are made, it knows exactly which classes/methods/functions/etc have been changed compared to that initial version, and reports that through that interface too. You can then have a centralised configuration management system, which polls all services to find out which version they are running (and whether there are any additional changes beyond that version), and alerts if any production system is running different code from what it is supposed to be. Since your "live coding" interface is audited, you know the exact timestamp/username/etc of the change, and so anyone making inappropriate changes to production systems can be caught and dealt with.
In production, a "live coding" interface can be used to enable live patching – so you can apply a patch to a running service, without even having to restart it. Of course, the patch would be tested first in lower environments, and the interface would be invoked by some patching tool / deployment pipeline, not an individual developer.
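A rough sketch of what that introspection side could look like in Python (everything here is illustrative: the commit lookup assumes the service runs from a git checkout, and the function and field names are made up):

    import datetime, subprocess

    # Recorded once at startup: the version this process was deployed from.
    BASE_COMMIT = subprocess.run(["git", "rev-parse", "HEAD"],
                                 capture_output=True, text=True).stdout.strip()
    LIVE_PATCHES = []   # (qualified name, user, timestamp) for every hot change

    def live_patch(module, name, user):
        """Audited replacement of one function on a running service."""
        def apply(new_fn):
            setattr(module, name, new_fn)
            LIVE_PATCHES.append((f"{module.__name__}.{name}", user,
                                 datetime.datetime.now(datetime.timezone.utc).isoformat()))
            return new_fn
        return apply

    def version_report():
        """What a central config-management poller reads to detect drift."""
        return {"base_commit": BASE_COMMIT, "live_patches": LIVE_PATCHES}

    # e.g.  @live_patch(billing, "compute_tax", user="oncall-jane")
    #       def compute_tax(order): ...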
I assume SQL stored procedures on some databases have some of those features? Seems like the sort of thing that database engine developers would consider, and the atomic semantics of committing stored procedures to a database lend themselves to that model more closely (for databases with atomic DDL).
I worked with an Object Oriented database that had some cool features because the code was stored as highly structured data within the database itself, i.e. the code you saved was not text at all (except when dumped/reloaded to external file - similar to exporting/importing data of a database to/from a file as a textual representation).
Clearly some people just don't care about that sort of thing. Look at the popularity of Jupyter notebooks, Julia, etc.
There just isn't much forcing people to engage in good software development and operational strategies. Sometimes there are regulations in the field (PCI, HIPAA), sometimes adversaries force care (state sponsored attacks on Google), but that's kind of rare. The usual forcing function is a competitor that delivers features faster than you. In that case, maybe "live programming" is worth a look. I've never seen "less bugs" on a product comparison sheet, for example.
(It's a contentious view, and I'm personally not a fan. I like it for software I'm sitting in front of, like Emacs, but hate it for things that need to run unattended on their own, which is like everything except text editors. And "less bugs" is a big selling point for me personally. Every bug I run into consumes time that I'd rather spend elsewhere. But, not everyone sees it that way.)
>I've never seen "less bugs" on a product comparison sheet, for example.
Those show up, your sales department just uses the more formal names for the actual customer demands that “less bugs” encodes.
That will look like some combination of
• “We have an SLA for an annual 99.9xx% uptime and average sub x00 latency!” (so if a bug causes significant annual reduction in service, your business gets penalized),
• ”We guarantee regulatory compliance!” (so if a bug or business use causes regulatory issues, it’s your ass on the line),
• “We guarantee 24/7 same day chat or email support!” (so if a bug causes an outage, they have a warm body they can yell at, demand an explanation for their customers, and ask questions)
• “We guarantee backups, data redundancy, and worldwide data replication!” (so if a bug blows up one of your data centers somewhere, or the intern fat-fingers an accidental deletion of the production user database, the customer doesn’t even notice something went wrong)
• “We guarantee API backwards compatibility or a service maintenance guarantee until 20xx!” (So you’re forced to fix bugs, and the bug isn’t ‘your PM team might try to #KilledByGoogle’ the service you invested in building infrastructure on.)
Teams that believe in the health, care, and security of their design are far better equipped to offer the above valuable terms to customers to gain competitive business advantage, and teams with bad engineering hygiene are likely to be scared off.
> Look at the popularity of Jupyter notebooks, Julia, etc.
Neither of these (inherently) have the kind of "live programming" environment that the original comment was talking about.
Jupyter notebooks have a different reproducibility problem though, with how easy it is to create hidden state. But they're not intended to be used in production at all anyway, so the problem mainly affects pedagogical and information sharing use cases.
Julia with the Revise.jl package installed and loaded comes kinda close to the live programming model - but the state is always preserved in the source file necessarily with Revise (and it's a development tool too, not one loaded in production).
Since you mentioned it alongside Jupyter, maybe you meant the Pluto notebook for Julia? There too the state is always preserved, both in terms of packages (with Julia's usual Manifest file) and in terms of your code (no hidden state like Jupyter, since it's a reactive notebook).
Production environments aren't the only environment software is run in.
Pretty much by definition, more code touches non-production environments than is ever run in a production one: everything that touches production likely passed through at least one other environment beforehand. Then you have the code that never made it to production due to it being buggy, etc.
> Sounds like a reliability and auditability nightmare.
Why? Reliability, auditability, and testing aren't affected. This is about development methodology and how language syntax and runtimes can enable it.
It's not that human unfriendly, if you know Smalltalk (which is not a terribly hard thing to learn). But you also wouldn't interact with it in that form but rather inside of Pharo and using its class browser.
The way I see it, there are two scenarios where this approach can be useful, which I'll address separately.
The first is for local development. When iterating on a solution to a problem, it can be a hassle to restart the program from the beginning and incur the time penalty to get to the part you're modifying in order to test it. This can include not just compile times, but also program startup, loading in necessary data, and going through a series of steps either in a UI or via API calls to get the system into a state where it's about to trigger the functionality you're working on and want to test.
For example, I'm currently working on a business workflow that has a bunch of steps that require someone to fill in several forms, and things happen after each of those. When I'm working on stage 5 of that process, testing it involves repeating all the actions necessary to go through steps 1, 2, 3, and 4. Yes, it's possible with extra effort to automate this, or to create mock state that can be used to jump directly to step 5 on launch, but that's just a manual substitute for what a live programming environment gives you automatically. A similar situation occurs in game development; you don't want to play through half the game every time you want to get to a particular part where you've changed how a particular enemy acts.
The second scenario is production. You're correct that it's important to only deploy code that's undergone proper testing and review, has a known version, and is reproducible. However this is orthogonal to the use of a live programming environment. With the latter, you wouldn't just have developers interacting with it in an ad-hoc fashion (just as you wouldn't have developers directly modifying python scripts on a production server). Instead, the ability to modify running programs can be used as a deployment mechanism. Once a set of changes has been made in a local development environment, undergone testing, and is deemed ready for deployment, the same mechanisms for updating running code that are used for development purposes can be used as a way of deploying code. And in production, the fact that you don't need to restart the process means you can avoid downtime.
As far as deployment and versioning are concerned, you can think of it as being similar to how you would do those things for a server based on CGI scripts or similar (e.g. PHP), where every time a request is served, the file is read from disk and executed. The difference there is that all state has to live in a database, so if you have long-running processes, e.g. business workflows that span days or weeks, all state transitions/control flow has to be managed manually, rather than using language features like sequencing, iteration, and function calls. With a live programming environment that supports persistence (meaning that execution state is stored in a file or database in a manner transparent to the programmer), deployment consists of adding/updating a set of objects in the data store, rather than copying a set of files to a particular location.
An example of a system that supports runtime code updates is Erlang, though it is not persistent. An example of a system that is persistent is Ethereum, though that doesn't support runtime code updates. Smalltalk and its variants support both.
Auditability can be supported by ensuring all changes to code or runtime state are made through transactions.
That sounds like a shitstorm of a security nightmare waiting to happen. Someone hacks in, compromises the program, re-writes it to do whatever they want.
It's no different to someone hacking in and then copying over a modified executable, changing entries in the database, or attaching to a running process using gdb to inject code.
Docker containers, any scripting language file, etc. are not a "shitstorm of a security nightmare". Security is generally not focused at the artifact level (whatever artifacts might exist).
I don't think you fully grasped what I was implying. Let me give you an example, since I have real-world experience with this. I once made a little 2D game that would allow for live modifications to the game world and code while active, from your character avatar to the tiles (assuming you 'owned' the area) to adding code live to make things (one person made a live arcade!) - it was basically a 2D Second Life.
And it was a complete security nightmare. It had to shut down after about two weeks, due to rampant abuse. One person managed to escape the game world, then the VM which contained it, then wreak havoc on the host machine running several other instances of the game (for linked worlds.)
If you aren't focusing on security at every level, you're asking to get wrecked. You may think things are secure, but if man can make it, man can and will break it, eventually.
> I don't think you fully grasped what I was implying.
I understood you. This example doesn't illustrate the same concept. You're talking about allowing RPC into a running program. The talk is about RPC to your IDE as you develop the program. These are very different situations.
The colloquialism "thinking about security at every level" doesn't mean that shipping a program on air-gapped faraday-caged hardware is the only security. Any machine running a website can have the program modified by changing the HTML (or HTML generating code) at any time. Ruby, PHP, Perl, or even Tomcat (which will reload artifacts in realtime, without some tweaks) are hobbled versions of the same concept. Elixir/Erlang and LISP coding (et al) is live coding due to the nature of the runtime.
This idea of having an interactive program, as you develop, does not preclude a hardened artifact (which has never been the problem, since swapping out a replacement that's hardened would make that pointless) but that's partly the point. Making new toys and features, the talk is about the important elements to keep focusing on and why to move development forward.
While the exact phrase "dead program" isn't used outside the title, his use of "live" and "dead" in the talk points, I think, to what he means:
At about 22 minutes:
> I want to talk about interactive programming, which is I think the only kind of programming we should really be doing. Some people call it live coding, mainly in the art community, and this is when you code with what Dan Ingalls refers to as liveness. It is the opposite of batch processing. Instead, there is a programming environment, and the environment and the program are combined during development. So what does this do for us? Well, there's no compile and run cycle. You're compiling inside your running program, so you no longer have that feedback loop. It doesn't start with a blank slate and run to termination. Instead, all of your program state is still there while you're working on it. This means that you can debug. You can add things to it. You can find out what's going on, all while your program is running.
And a few minutes later:
> So, for example, in a dead coding language, I will have to run a separate debugger, load in the program, and run it, set a break point, and get it here. Now, if I've had a fault in production, this is not actually so helpful to me. Maybe I have a core dump, and the core dump has some information that I could use, but it doesn't show me the state of things while it's running.
While I understand that an interactive environment could be useful in a lot of cases, in other cases it might just be in the way...for example, when I am designing an API, usually I don't need any sort of visualization...on the other hand, if I want to explore data, then I do need visualization.
Also, his critique of my favorite language, C++, is unfair. C++ was born out of the necessity to take advantage of the hardware in the best way possible. And while it might be a mix of different ideas, it does work well in a lot of cases; a testament to that is the software we are using to communicate, the browsers, the web, etc., whose infrastructure is written almost exclusively in C++.
Furthermore, static typing helps a lot in complex programs. There are concrete examples around where static typing greatly helps solve complex problems. And complexity is not about algorithms only; it's also about change over time. A piece of code that does not have type annotations can become an ugly spaghetti mess really quickly, and that must be multiplied by ten each time a new developer is added to a team.
Very entertaining talk, by the way. At no point was it a drag. The presenter has a real talent for it.
Anyone know how the timestamps in this article were produced? The format isn't great on desktop, but the timestamps are interesting and I can't imagine those were done by hand... I guess the line width must be fixed so they are always correct (?), which probably also borks desktop viewing.
I downloaded the automatic transcript from the YouTube video and wrote some code to reformat it in this way to make referencing the position in the video easier. I should probably have linked each time code to open the video at that point, but I'm a bit time constrained this week.
Yep, I've done this too. I had a video with a live youtube transcript for a talk, but in addition I had a manually written transcript from one of the attendees. She wasn't trying to make it word-for-word perfect, but it was reasonably close and obviously had better formatting.
The automatic transcript was fairly poor quality but had fairly precise timestamps. The manual transcript lacked timestamps but was high quality. So I used an approximate matching algorithm to combine them and produce a clickable version of the manual transcript where every group of words was a direct link to that portion of the youtube video. It all worked out surprisingly well. (The other piece was that I hand-inserted annotations to produce an index of various topics and concepts that I thought were significant.)
I don't know how common of a situation this is, since it requires having a high-fidelity human-created transcript. I could clean up the tools and release them, I suppose. I did this for a birthday present.
(I don't have a demo because it's a private video, sadly, and I have rights to neither the video nor the manual transcript.)
Zoom does a similar thing with its closed captioning feature. Not sure if this was generated from Zoom or something else, but I’ve seen these sorts of scripts come from automated closed captioning features.
I enjoyed watching it, but plenty of this talk is completely misguided. If some of the ideas in it were so much better, surely by now the people sharing them would have shown the people not sharing them how much better they can do, and overtaken the industry.
Not to waste my time - most of the ideas expressed there lack composability, and composability trumps almost everything else in the long run. That is something enthusiasts of VMs, dynamic typing, and runtimes don't want to understand. It's why stuff like live coding and interactive programming is only ever employed for small-scope, throw-away things.
Visual programming is harder to automate, operate on, reproduce and compose and so on.
These two things don’t necessarily have to be conflated.
I rarely touch my vim config, but at the same time, during development I write mini-programs (using macros, regexes, and buffer operations) in vim on the fly that can write code for me or perform refactoring in ways that IDEs typically can’t. I spend little to no time updating my vim tooling.
Jack is looking towards the inevitable paradigm shift away from primarily text-based programming to whatever the future may hold. It's to be expected that most people will say they are perfectly productive with how things are, thank you very much.
Yeah, well, "whatever the future may hold" isn't enough to get people to switch. I'm at least somewhat productive with how things are. You want me to switch? Show me how to be more productive with something that is concretely available today. You have something that may be revolutionary sometime in the future? Then I'll care sometime in the future, if it turns out to be actually revolutionary.
I'll be happy to use a better paradigm than text when someone comes up with one. In the meantime, all the attempts to replace programs-as-text have failed to result in any actual improvement, save for niches like e.g. StarLogo that cater to beginners.
I started using vi in 1989. I still use vi, almost every day. I've had to learn my way around I don't know how many text editors, IDEs and so forth over time, often to throw that away when the new thing comes out. Not so with vi - most of what I was doing in 1989 in vi I am still doing in vi.
I made my way through part of the talk on youtube, it was interesting but far too much for me to stomach. It really drove it home when he flashed slides of graphic organizations of complex data. While some parts might be better than a textual description, graphic representation of data needs to be useful, that is, you need to be able to look at a graph and glean information from it. At the section where he flashes an image of a re-imagined periodic table[0], he immediately exclaimed "look how beautiful it is!" and I was immediately instead confused. What does this image show? If it is a periodic table, can I see the periodicity? For example, can I tell two elements have electrons in the same shell by just looking at it? Can I see if two elements have the same filling of the subshell, and thus will covalently bind in similar ways? Can I tell which elements are more electronegative? I can glean all that from the "boring periodic table," which yes is a graphic representation and cannot easily be written in sequential text, but this new representation doesn't give me any of that at least as far as I can tell. The desire to represent an already existing set of data in a new graphic representation that seems to give no value apart from being novel is not helpful.
I can extrapolate this down. A lot of things in life are the result of historical evolution; that is just how things are. While that history can lead to problems, it is just the way things are. And yet, it should be beyond obvious that not everything new is good. For example, while sure the examples he gave were clunky, I can say with absolute certainty that there are times when playing games on my NES is more enjoyable than playing games on my "supercomputer," because on my NES I can play until I'm blue in the face, whereas on my phone, I have to stop every 2 minutes and watch an ad that makes my phone even hotter than the console. When what I value is "having fun" instead of "newness" or "shiny graphics with anime girls" I can see that an older device is better. Should I conclude that "advancement in technology is bad for enjoyment and for my sanity"?
No, because I am not that shallow in my reasoning here. It's clear that "old" vs "new" for my NES vs. mobile game comparison is really a complaint about the change in monetization models, and thus a different user experience for the gamer. In fact, the "old" vs. "new" argument obscures the real difference worth consideration, substituting the argument for a fight between nostalgia and novelty.
"Old" and "new" should never be what anyone focuses on because more often than not, such labels obscure the true conflict. Really, certain people like the presenter value things other than "familiarity," things like "introspection" and "graphical representation" and "concurrency," and he's mad people resist what he likes. Thus, he chalks it up to others stuck in their ways, clinging to "history." The thing is "familiarity" isn't the only argument people have for why they don't use clojure or reactive graphical displays: "familiarity" is often a stand in for other things that the people he critiques value, things like their time and latitude to learn new things, or the fact that indeed, some things are actually better expressed in text and as a sequential program than visually or concurrently, or that there are cases where those models are still infeasible on a computer even today (none of my electromagnetic simulations other than the most simplest will work in a reactive notebook because while computers are much faster, being much faster means I just do larger simulations that can no longer actually run in a reactive setting well). That to me is the more (pun not intended) valuable discussion, a discussion of the actual "innovations" and the values you have in mind when you evaluate them. Like, sell me on why I even care about introspection in a computational physics simulation code when I think I'm doing fine without it. That to me is the more interesting discussion, a discussion of values.
But that's the problem, just making another talk about how modern lispy introspection or something is cool is just another technical talk, and it certainly feels much less cathartic than painting everyone else who hasn't adopted your reactive notebooks as being luddites clinging to their VT100 emulating terminal windows. But that to me feels like where the actual meat is, a discussion of boring technical topics because there I can respond with actual concerns or reasons I can't use this or that, and such a discussion would actually be more interesting and productive for me and for people who want to sell newer paradigms.
> At the section where he flashes an image of a re-imagined periodic table, he immediately exclaimed "look how beautiful it is!" and I was immediately instead confused …
As it happens, this particular version of the periodic table is my own favourite. It’s perhaps easier to see why it’s so nice if you look at a larger version [0] — the periodicity is extremely obvious, not just in the elements themselves but also in the arrangement of the d- and f-blocks (which are in fact obscured in the usual periodic table). On the other hand, I suppose it’s true that trends in electronegativity etc. become somewhat less obvious. As with everything, it’s a trade-off.
The problem is that the tradespeople and craftsfolk who make them are, whether intentionally or not, wage slaves.
They do not make for the joy and art of making, but constrained by monetary need.
They do not seek to grow as makers and share what it is to make because . . . no one would pay them for that.
If we want better tech and better ways of doing tech, we need to start being paid like smart people who can not only do what management can't, but who also insist on doing it in a way that's creative. In a way not managed as a commodity.
Here on HN, we see hacker (at best) as commodity. This is not the right community for the message, imo.
It's a persuasive argument, but most of the "dead" tools mentioned in the talk were created in an era when their creators were inspired more by the beauty of their creations than by their relevance to some bottom line. C, C++, Pascal, Python, Go, .NET, ML, Docker, etc. were all made by people who saw a niche and sculpted something new for the joy of it. (Not just for the joy, but still.)
The current "...but does it monetize?" culture is a recent thing. For most of the history of computing, nearly everything was done by starry-eyed idealists and outcasts who were in love with exploring the new conceptual frontiers opened up by technology. Sure, even then there were Larry Ellisons out to skin the selkies and sell their pelts to the highest bidder, but they weren't driving the field. The vast majority of developments came from people on BBSes or MUDs or just in their private basement rooms, who delighted in coming up with working demonstrations of crazy new ideas and showing them off.
Silicon Valley was the Kingdom of the Geek, before the Geek had been tempted to crawl into the gilt cage and be locked inside by the army of MBAs and VCs. But we did it to ourselves—where once we might show off a live coding hack or a self-modifying set of scenes in a MUSH, now we show off our new Tesla or whatever the latest variant of a giant cell phone on wheels is.
(I don't disagree with what we should be moving towards.)
People are still using the same screwdrivers and screws they used a hundred years ago. The Phillips head isn't even that good: you can easily strip the head over a few uses, or even one, if you don't know what you're doing. There have been innovations in big machines (the parallel in this analogy would be Hadoop or whatever). But at the small level, people are still using the basic hand implements they've always used, because they always work, they work everywhere, everyone else knows them, and it's pretty hard to get the design for new ones wrong.
Like the author says, his ideas don't mean you should use Smalltalk or Lisp, just that you should demand features, like how it took until Rust for sum types to escape functional languages. But the reason you shouldn't use Smalltalk is also the reason languages like Smalltalk aren't going to get made: when you're making a general-purpose language, it is extraordinarily hard to paradigmatically improve on what came before, and it's very easy to get it all wrong and make a pile of trash that makes developers at the few companies that adopt it mad at you. Even Rust is not that amazing in this regard; its wild new data model is in fact the exact same one you were supposed to be using in C++, just minus the ability to refuse.
Everyone smart who was able to make these things has been snapped up by big data where their talents produce the most direct value.
FWIW, the patent for Phillips-head screwdrivers states that the pattern is intentionally designed to cam out/strip instead of allowing over-tightening.
It’s an obnoxious feature, but it’s intentional. Not some kind of “this is old and therefore bad” thing.
That would definitely have made sense back in the day. Nowadays most cordless drills, even cheap models, have that exact torque-limiter mechanism built in, so you can dial in the torque you need with a simple twisting collar; I use it all the time.
I hate Phillips-head screws, second only to flat-head screws. They're "fine, I guess" for some low-torque applications, but even there they're not great. I was at my GF's house the other day trying to screw through 18mm ply with some Phillips screws, which were all she had in the length I needed. I couldn't even get them through one sheet with predrilled pilot holes without them camming out and stripping the heads. Awful. I ended up making a trip to Screwfix to get some Pozidrivs.
This seems to be a misconception. It actually doesn't say that.
"[..] and in such a way that there will be no tendency of the driver to cam out of the recess when united in operative engagement with each other."[1]
Wikipedia says:
"The design is often criticized for its tendency to cam out at lower torque levels than other "cross head" designs. There has long been a popular belief that this was a deliberate feature of the design, to assemble aluminium aircraft without overtightening the fasteners.[14]: 85 [15] Extensive evidence is lacking for this specific narrative, and the feature is not mentioned in the original patents.[16] However, a 1949 refinement to the original design described in US Patent #2,474,994[17][18][19] describes this feature. "
I wanted to add a reference here to Wikipedia, which claims this is a myth[1]:
> Despite popular belief,[2] there is no clear evidence that this was a deliberate design feature. When the original patent application was filed in 1933, the inventors described the key objectives as providing a screw head recess that (a) may be produced by a simple punching operation and which (b) is adapted for firm engagement with a driving tool with "no tendency of the driver to cam out".[3]
> Nevertheless, the property of the Phillips screw to easily cam out was found to be an advantage when driven by power tools of that time that had relatively unreliable slipping clutches, as cam-out protected the screw, threads, and driving bit from damage due to excessive torque. A follow-up patent refining the Phillips screw design in 1942 describes this feature
Also note: the opposite is not true. Tightening with an impact driver is a great way to cam out and damage a Phillips-head screw. That's why Torx-style heads are much more popular with impact drills.
And flat-head screws often work when somewhat rusty (the slot is easy to clean). Good luck with many other screw-head designs, which are easy to use only when in new condition.
Screws have evolved a lot over the years, in particular over the last century. Instead of precision screws being made from a pattern, screws like metric ones have specifically designed pitch and thread shape.
We still use wheels but they don’t look like ones on old waggons either.
The Whitworth thread form (55° thread angle, a thread depth of 0.640327p, radius of 0.137329p, rounded roots and crests, and a set of standard pitches p) was designed in 01841; that's when screws started being made with specifically designed pitch and thread shape. The Unified Thread Standard used today is virtually identical to the Sellers thread from 01864, though the metric pitches were added later, and standardized internationally in 01898, and some problems in the UTS were ironed out in 01949 in the wake of wartime Whitworth/Sellers incompatibilities.
So I would say screw threads have only evolved very slightly over the last century. Screw heads evolved quite a lot during that time, though: Robertson heads are from 01907 (just outside the century!), Phillips got his patent in 01932, hex-key screws are from 01936, and then we have hex head, Pozidriv, Torx, external Torx, and literally dozens of others. Hex heads pretty much replaced square head screws sometime around 01960.
(Also, bolts replaced rivets for structural steel about 100 years ago, due to improvements in heat treatment that couldn't be applied to rivets.)
More interesting to me is the increasing number of snap-fit fasteners, which can often replace screws with greatly improved convenience at lower cost. These aren't always applicable, and sometimes they're designed badly (Ramagon and USB plugs come to mind) but when they're designed well they often have much longer life than screw fasteners. Also, they don't vibrate free the way screws do (without lockwire or loctite, anyway).
For other materials there has been more innovation in threads later on, and for special uses of metal screws as well, for example self-tapping screws and screw coatings.
side question: why do you prefix years with a leading zero?
> CELO is leading the self-tapping and self-drilling screw market.
> CELO's self-tapping and self-drilling screws offer the widest selection for installations of joining metals, PVC profiles and aluminium sheets.
> In this section you will find our screws in all sizes, recesses, head types and coatings.
And then pages upon pages of a catalogue.
It was the first catalogue link when searching for "self-tapping and self-drilling screws".
What kind of direct value are you thinking of? I don't think most data scientists and ML engineers could write compilers or tensor algebra frameworks and gradient based optimizers.
I think OP is saying that the people who can write compilers or tensor algebra frameworks and gradient-based optimizers get snapped up by big companies, not that everyone who works at big companies is so capable.
"Docker shouldn't exist. It exists only because everything else is so terribly complicated that they added another layer of complexity to make it work. It's like they thought: if deployment is bad, we should make development bad too. It's just... it's not good."
Jack's not wrong. But better to look at it as throwing (increasingly cheaper) compute at the it-runs-on-my-machine problem.
When I lived in the Java world, Docker solved a heap of problems because Java has so many external dependencies - you needed the JVM, all its libraries, and all sorts of configuration files for those libraries.
But how is a docker image different from, for example, a statically compiled single-binary Go executable? Because when I work with Go, I tend to think that if what I'm working on requires something defined from outside the binary, I'm doing it wrong.
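(For what it's worth, here's a minimal sketch of the kind of self-contained binary I mean; the route and port are invented for illustration, and it's Go simply because that's what I reach for. Built with CGO_ENABLED=0, the resulting executable needs nothing from the host but a kernel.)
package main
// A toy service whose every dependency lives inside the compiled binary.
// Build with: CGO_ENABLED=0 go build -o hello .
import (
    "fmt"
    "log"
    "net/http"
)
func main() {
    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        fmt.Fprintln(w, "hello from a single static binary")
    })
    log.Fatal(http.ListenAndServe(":8080", nil))
}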
So is Docker a solution to problems that are inevitable, or is it just a solution to problems caused by other solutions?
These days I tend to think that something like Firecracker is more likely to be a solution to deployment problems than Docker is, but I haven't tried it yet...
I live in the Java world and Docker solves exactly zero problems for me.
Maven or Gradle deals with assembling all the necessary jars into a single directory (or an uberjar if you want; I don't like it).
The JVM is another directory.
Running the application is a shell script of 2-3 lines.
So my Java application is: a directory with the JVM, a directory with jars, and start.sh.
It works almost everywhere. It's simpler than Docker. I can replace the JVM with the Windows version and start.sh with start.bat and it'll run on Windows. Natively. Can't do that with Docker.
To build my application one would need Maven (one directory), a JVM (another directory) and the project sources. Set two env vars (JAVA_HOME, PATH) and run mvn verify. That's about it. Windows, Linux, macOS, doesn't matter.
A single binary is simpler than a directory with some files. But not much simpler.
I use Docker because that's the way things are done nowadays. For a personal project which does not need Kubernetes, I wouldn't use it.
I totally agree, and I think we've all accepted Docker/OCI's place regarding deployability on infrastructure not managed by Java developers.
Recently I've done a few migration projects, and the biggest pain point (not really a pain, but not Kubernetes-friendly) is the containerization of JEE servers/services, since these solved most of what containers provide (deployment-wise), albeit only for Java. 'DevOps' generally killed this (and the related tech debt), but it's hard to validate the utility of any of it, as the 'generalized' solution feels like it's been compromised down to a point of mere convenience.
That said, I do enjoy having a deployment platform that is always going to be Linux(-like).
I don't think I would use it either for a personal project, but I would perhaps investigate how hard it would be to add some level of kubelet (CRI)-like JVM integration.
> When I lived in the Java world, Docker solved a heap of problems because Java has so many external dependencies - you needed the JVM, all its libraries, and all sorts of configuration files for those libraries.
Lol, no you don't. You need a fat jar and you need to install the JVM on your servers (and, sure, maybe upgrade it once every three years). In the early days people actually used Java to do the same thing that docker does, by having an "application server" that you would deploy individual java apps into, before realising what a bad idea that was.
> So is Docker a solution to problems that are inevitable, or is it just a solution to problems caused by other solutions?
It's a solution to the problem that Python dependency management sucks. Unfortunately the last 6 or 7 iterations of "no, Python dependency management is good now, we've fixed it this time, honest" suggest that that's inevitable.
> It's a solution to the problem that Python dependency management sucks. Unfortunately the last 6 or 7 iterations of "no, Python dependency management is good now, we've fixed it this time, honest" suggest that that's inevitable.
I didn't mean to imply that I think Docker exists solely for Java; I didn't realise that Python has the same problems, but it's unsurprising, and there are plenty of other languages/platforms that probably have the same problem. My point was that I have started to think that Docker is a solution to a problem that probably shouldn't exist.
> Lol, no you don't. You need a fat jar and you need to install the JVM on your servers (and, sure, maybe upgrade it once every three years). In the early days people actually used Java to do the same thing that docker does, by having an "application server" that you would deploy individual java apps into, before realising what a bad idea that was.
I'm surprised that you advocate for deploying JVMs to individual production servers; given that Docker exists, per-server JVM installs are quite possibly the worst possible way to manage Java deployment in a production environment. Give me Docker over this any day. Docker is great for Java apps.
As someone else pointed out, one of the benefits of Docker is that you get rid of the "it-runs-on-my-machine" problem: if you want your application to run reliably throughout dev, test and prod, then you must include the JVM in the distribution, because you can't otherwise guarantee that the JVM you're running on is the one you developed and/or tested on.
Don't even get me started on JavaEE: an operating system built by consultants, for consultants.
> I'm surprised that you advocate for deploying JVMs to individual production servers; given that Docker exists, per-server JVM installs are quite possibly the worst possible way to manage Java deployment in a production environment. Give me Docker over this any day. Docker is great for Java apps.
Using Docker doesn't solve any problems though - instead of installing the right version of the JVM on all your servers, now you have to install the right version of Docker on all your servers.
(In practice if I had more than a couple of servers I'd use Puppet or something to get the right version of the JVM on all of them, sure. But you have the exact same problem when using Docker too).
> As someone else pointed out, one of the benefits of Docker is that you get rid of the "it-runs-on-my-machine" problem: if you want your application to run reliably throughout dev, test and prod, then you must include the JVM in the distribution, because you can't otherwise guarantee that the JVM you're running on is the one you developed and/or tested on.
In theory, sure. In practice, the JVM is tested and backwards compatible enough that those problems don't happen often enough to matter. You can still have "it works on this machine but not that machine" problems with Docker too - different versions of Docker have different bugs that may affect your application.
> Don't even get me started on JavaEE: an operating system built by consultants, for consultants.
I already said it was doing the same thing Docker does :).
> now you have to install the right version of Docker on all your servers.
No, you don't. You might not even have to install Docker, e.g. Podman will probably do just fine. "All" docker is doing is calling some kernel code to set up namespaces, etc.
With very few exceptions the only thing you might have to worry about is the kernel version, but given Linux's historical compatibility story there and the fact that the JVM doesn't really rely on any esoteric kernel features, you'll also be fine there.
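(To make that concrete, and emphatically not Docker's actual internals: here's a minimal, Linux-only sketch in Go of the clone-flags facility that container runtimes build on. It needs root or the right capabilities, and the choice of a shell and of these particular namespaces is just for illustration.)
package main
// Start a child process in fresh UTS, PID, and mount namespaces, which is
// roughly the kernel mechanism that Docker, Podman, etc. wrap.
import (
    "log"
    "os"
    "os/exec"
    "syscall"
)
func main() {
    cmd := exec.Command("/bin/sh")
    cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
    cmd.SysProcAttr = &syscall.SysProcAttr{
        Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
    }
    if err := cmd.Run(); err != nil {
        log.Fatal(err)
    }
}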
With the JVM, the changes around modularization from 8 to 11 were hugely disruptive, such that you couldn't just run any old Java 8 program on 11, so you couldn't upgrade the JVM unless all the JVM-based stuff running on the server was upgraded in one go, etc. etc.
Yes, I was going to say that Docker is just a wrapper around exec, so the list of things that can go wrong with Docker seems a lot smaller than the list of things that can go wrong with Java.
> In theory, sure. In practice, the JVM is tested and backwards compatible enough that those problems don't happen often enough to matter.
In my experience, JVM upgrades can and did cause us all sorts of problems, both subtle and not-so-subtle, depending on how big the upgrade was. Docker resolved a lot of that pain for us by making it possible to upgrade piecemeal, on a per-service basis, when we were ready.
> You can still have "it works on this machine but not that machine" problems with Docker too - different versions of Docker have different bugs that may affect your application.
I haven't experienced this, but in any case, the joy of Docker in a JVM environment with many applications is that you can pin the JVM for each individual application; you aren't forced to use whatever JVM is installed on the machine. This gave developers more freedom, because they could deploy whatever JVM environment they needed.
You can get away with more ad-hoc solutions if you're just one person, but once you have a team of people and a long list of JVM services, you need to be able to delegate control of as much of the operating environment as possible to the developers.
As I said originally, this problem doesn't exist with Go because it emits static binaries that don't generally need additional support files, which is why I'd love to see more discussion on dynamic deployment of binaries instead of docker containers.
> the joy of Docker in a JVM environment with many applications is that you can pin the JVM for each individual application; you aren't forced to use whatever JVM is installed on the machine. This gave developers more freedom, because they could deploy whatever JVM environment they needed.
If you ever find yourself needing to do this, something's gone very wrong. (Maybe you're using a dodgy framework that mucks around with JVM internals?) You can download jars from 25 years ago and run them on today's JVM no problem, to the point I have a lot more faith in JVM backward compatibility than in Docker backward compatibility.
> As I said originally, this problem doesn't exist with Go because it emits static binaries that don't generally need additional support files, which is why I'd love to see more discussion on dynamic deployment of binaries instead of docker containers.
I actually think the way forward in the long term is unikernels. Given that people are mostly going for a one-application-per-VM/container model, most of what a multiuser OS is designed for is unnecessary. For the cases where you do need isolation, containers aren't really good enough, VM-level isolation is better. And for the cases where you don't need isolation, you might as well go full serverless.
> If you ever find yourself needing to do this, something's gone very wrong.
Not at all. We simply didn’t want to upgrade, test and redeploy 100+ applications every time a developer wanted to use the latest JVM on just one of them. It made the upgrade process much more incremental, predictable and safer than if we just upgraded JVMs for everyone all at once.
> You can download jars from 25 years ago and run them on today's JVM no problem
Yeah but you may not be able to compile the source code for them. Especially if you have old code that uses things like xjc or SOAP.
EDIT: Quekid5 below points out that this wasn’t true for Java 8->11 upgrades, and it was actually the desire to upgrade to 11 and to get on the faster Java release train that really made Docker images our deployment system of choice.
> I actually think the way forward in the long term is unikernels.
On this we appear to agree, which is why I mentioned firecracker in my original post:
> These days I tend to think that something like Firecracker is more likely to be a solution to deployment problems than Docker is, but I haven't tried it yet...
Well, for one thing, Docker tends to keep running for a long time, while the several JEE app servers I used all needed rebooting after a few deploys just to keep them running. I mean... don't even get me started: Java app servers were a nightmare, and I was so very happy when we were able to ditch ours and go back to a sane architecture.
Docker is a response to the rejoinder of "works on my machine" when there is a production problem. Rather than working out why there is a problem in production, it's basically a way to just run the dev machine in production. I've never been sure why anyone thought that was a great idea, but I guess it does solve that one issue.
Docker is an interesting thing. I really love what I'm able to do with it. But I despise when people make it a necessary component of the development environment.
It's not really about Docker. If software were easy, we wouldn't need Docker this much in the first place. But software sucks, so we wrap it in Docker, but now Docker sucks, because we've simply gathered all the shit in one place so that it spills everywhere. Also, some of the shit is from Docker itself.
It's a shame he doesn't mention Ada, which is static but oh so nice about it.
Right, but the problem is, as Peter Harkins mentions here, that programmers have this tendency, once they master something hard (often pointlessly hard), not to then make it easy but to feel proud of themselves for having done it and just perpetuate the hard nonsense.
Yes, see the entire history of UNIX. I convinced someone programming was a bad choice for his major because of this stupid attitude.
I'm still reading, but I like how he takes issue with what currently passes for machine text. Some of my work covers the same problem. I should send him an e-mail.
Well, there are lots of replacements for common commands like "ls", "grep", "find", even "cd", and some of them are pretty popular. And there is a wide variety of shells and terminals. No one is complaining about them, and the worst you get is an "I don't care / not my cup of tea" attitude.
Of course, the key idea is to keep compatibility with the existing world, for example by choosing a new command name ("rg"? "ack"?). If you just take over an existing name that has been used for years and break existing scripts, people will be unhappy.
I like pointing out that millions of lines of code in the Linux kernel have no real memory-exhaustion strategy beyond randomly killing a process. Those are millions of lines of code, few of which are reusable, and yet they do so very little.
> The reason I disagree with this position is because the visual cortex exists. [...] There's no reason to eschew [graphics] when it comes to program representation.
Not everyone can take advantage of the visual cortex. Please don't take away one area where we're on a fairly level playing field.
I think the visual cortex did not evolve to process dense symbolic information, with the emphasis on symbolic. Dense diagrams are much more difficult for most people than plain text arranged in lines or in a grid.
That story is clearly absurd. But in the real world, it might not always be such a bad thing to hold back the runaway feedback loop where the people with the most advantages gain even more.
If non-textual program representations do catch on, based on the unchecked assumption (as in this talk) that everyone has normal vision, then figuring out accessibility for future non-textual program representations will be good job security for someone, if that work gets funded. I just hope that no blind programmers lose their jobs in the meantime. (Personally, I could probably muddle through with my limited vision, albeit possibly with lower productivity.)
I should be used to people like me and some of my friends being routinely overlooked, as if we don't exist or can be relegated to a footnote, but sometimes it gets to me.
The presentation is great. Jack Rusher has a lot of energy. I recommend the video if you have time. The notes in the transcript are useful, too; both expansion of what he said as well as references to material.
Fantastic talk. Loved it! Thanks for the post and the commenters who recommended it.
Formatting was terrible -- even when viewing source!
This made it somewhat readable:
import requests
from bs4 import BeautifulSoup
url = 'https://jackrusher.com/strange-loop-2022/'
# Fetch the transcript page and join its text fragments, skipping the "00:xx:xx" timestamps.
bs = BeautifulSoup(requests.get(url).text, 'html.parser')
muh_text = ' '.join([x for x in bs.stripped_strings if not x.startswith('00')])
print(muh_text)
let c;
// Merge consecutive transcript paragraphs into one block, dropping the timestamps
// and leaving the "aside" notes as separate blocks.
for (let p of document.querySelectorAll('body>p, body>div')) {
  if (p.classList.contains('aside')) { c = undefined; continue; }
  p.querySelector('span.time')?.remove(); // tolerate paragraphs without a timestamp
  if (c) {
    c.innerHTML += ' ' + p.innerHTML;
    p.remove();
  } else {
    c = p;
  }
}
And this CSS:
body p { width: 100%; }
body div.aside { width: 100%; border: 1px solid black; }
I'm thinking maybe that page was supposed to be embedded somewhere, next to a video maybe? It wasn't meant to be read like this right?
Sorry you hated the formatting. The transcript is meant to be an assistive technology for the video, and a place to put extra notes I couldn't fit into the time I had. Ideally, the transcript would scroll as the video advances and the timestamps would move the playhead to that part of the talk, but I haven't time this week to do as much hacking on that as I'd like.
Watched the video of this on YouTube. Overall a great talk, though I don’t agree with many of his conclusions. He seems to argue that ‘move fast and break things’ is the way forward, and that big dynamic runtimes allow for this with great runtime debugging etc. I prefer to catch my bugs at compile time, and have found this to be a far more reliable path to actually finishing a software project.
I think he was saying more that types don't help you not break things: your preparation is worthless, you're not actually preventing bugs, and when something does break you're far slower at fixing it.
So not "move fast break things".
More "Don't over prepare, fix things fast."
The only thing I don't like about all these people talking about types is when you need to make a program change and that change has large impacts on other areas, like in a core API.
I do not want to make a change in a core API and have my program build without me addressing all of the places that would be affected by that.
This is where types shine, and this is where I think types are important.
If a language abandons types that is fine, the dynamic possibilities are amazing and I'd love to have them.
But I will not give up the ability to make a refactor in core code and then, in turn, be able to address all of the places where that change breaks things.
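To put that concretely (a hypothetical sketch in Go, with invented names): widen a core function's signature and the compiler enumerates every call site that now needs a decision.
package main
import "fmt"
// Pretend this is the core API being refactored.
// It used to be: func Save(id string) error
// Adding a parameter forces every caller to be revisited before the build passes.
func Save(id string, dryRun bool) error {
    fmt.Println("saving", id, "dry run:", dryRun)
    return nil
}
func main() {
    // A call left as Save("user-42") is now a compile error,
    // not a runtime surprise in some rarely exercised code path.
    if err := Save("user-42", false); err != nil {
        fmt.Println("save failed:", err)
    }
}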
Am I wrong in that?
Is my desire to do this just rooted in my own bad programming habits?
> I do not want to make a change in a core API and have my program build without me addressing all of the places that would be affected by that.
> This is where types shine, and this is where I think types are important.
This is also where tests shine; they are far more expressive than the type systems we have today. Tests are usually not as convenient as types, though, but it's another parameter to consider when choosing the right solution for a given scenario.
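For instance (a hypothetical sketch in Go, with invented names), a test can pin down a behavioural rule, like "addresses must contain an @", that the function's type never states, so a refactor that drops the check fails loudly even though everything still type-checks.
// mail_test.go
package mail
import (
    "errors"
    "strings"
    "testing"
)
// Normalize pretends to canonicalize an email address; the rejection rule is
// behaviour, invisible in the signature func(string) (string, error).
func Normalize(addr string) (string, error) {
    addr = strings.TrimSpace(strings.ToLower(addr))
    if !strings.Contains(addr, "@") {
        return "", errors.New("not an email address")
    }
    return addr, nil
}
func TestNormalizeRejectsNonAddresses(t *testing.T) {
    if _, err := Normalize("not-an-address"); err == nil {
        t.Fatal("expected an error for a string without '@'")
    }
}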
> But I will not give up the ability to make a refactor in core code and then, in turn, be able to address all of the places where that change breaks things.
This will be less satisfying since it's anecdotal, but I'll offer up my experience anyhow: I rarely find myself refactoring. When I do refactor, it's almost always in the "changing the factoring" sense, in that callers are none the wiser to changes since the interface is the same, which limits the fear of breakage. That's not to say that it always turns out this way, but churning regularly on interface boundaries would be a "smell" to me.
To further beat the drum from above, I'd additionally expect the tests to help prevent breakage whether the program's dynamically or statically-typed. I review plenty of code, much of it in Scala, which puts a heavy emphasis on its strong typing. When there aren't tests, I request them or write them myself, and that uncovers bugs more often than not despite the programs passing the type checker.
You're not wrong at all, and it's one of the main reasons I feel more confident hopping into a codebase I'm not familiar with in something like Rust than in, say, Ruby or JavaScript. The lack of strict types makes understanding how a program works very difficult, and being told how it works by the compiler or interpreter all but impossible.
I didn't get that from the talk. What the author meant (I'm also extrapolating from advocates of dynamic languages) is that quality increases with iteration. By changing and running your program, you can find edge cases you hadn't thought about and modify it to make it testable, modifiable, easy to inspect/visualize, etc. An environment that reduces the friction required to tinker with a program allows you to make it robust too, if you so wish. If you just want to play, then you're free to do that too (and the creative process benefits from being able to do so).
I'm not advocating for only using this workflow; ideally we could add types too. Compilers can enable an iterative workflow (Elm comes to mind), but I find myself sprinkling in types as I go, exploring how my program will accomplish the task (TS without strict).
The pendulum has swung to type-everything-first and I'm not sure it's the silver bullet we're looking for.
I watched the talk last week, so perhaps my memory's a bit off, but "move fast and break things" was not the takeaway that I got. I thought of it more as "problems are going to happen, being able to debug them is important, and there are better tools available for dealing with that than what's common".
Additionally, I don't recall if he said it in the talk, but it's been my experience that type-based bugs often surface early and are generally incredibly cheap compared to other classes of bugs (functional bugs, logic bugs, security bugs, etc.).
It's also just the wrong place in the stack to add these features. It's not a language concern, it's a platform concern. If I'm running a program in a web browser, it doesn't matter what language it's written in, I can pause the program and interactively explore it via the browser console. We should have the same thing for native apps on operating systems in general, and they should be native to the OS (provided by the OS vendor themselves) and not require any modification to the program (or any special programming language) in order to use.
I agree this would be very cool. I can’t count the number of times I wish I had runtime interactive debugging in a repl along the lines of pry in Ruby in just about every language that doesn’t have it.
That said, gdb etc are pretty awesome too if it’s an option
History has proven the exact opposite, as the failure of Smalltalk and the ascendancy of the web browser demonstrate. And so, I assert, Dan Ingalls is incorrect.
The last time I looked into it, the Web browser is an OS-agnostic platform, and JavaScript's influence on tooling traces back to Smalltalk via Self.
There are even two Smalltalk-like, Web-browser-based development experiences: Amber Smalltalk and Lively Kernel, the latter from Dan Ingalls.
The OS should be an implementation detail of language runtimes, as proven by serverless computing and cloud-native development; who cares whether those runtimes run on top of an OS, bare metal, or a type-1 hypervisor?
Agreed on static type checking - I also consider it extremely important. However I don't see it as being incompatible with the philosophy of live programming. Smalltalk relies a great deal on dynamism, but I believe it would be possible to create a language and environment that both enforces static typing and supports live programming + persistence. It's an open research problem though.
On the web, I'd kill for a way to debug an error by replicating a user's state and playing back their actions and requests while stepping through an uncompiled version of the running application. I think I could solve bugs into infinity if I had that kind of power.
This kind of thing shouldn’t even be difficult, yet I have never been anywhere that has this kind of live code retrospection.
Fulcro (a library for Clojure/ClojureScript web applications ) has a way of doing this, and I'm sure it's not unique in this aspect: I just don't do enough work in this space to be familiar with the other offerings. This type of feature is valued in the Clojure community, so I wouldn't be surprised if reagent has something like this as well. And this type of thing isn't unique to Clojure, either.
Ironically one of my major annoyances in debugging Clojure is that stack traces don't come with a program state that can be inspected (as you get in ELisp or GDB)
I'm probably missing some subtlety. I'd think you could have some "debug mode" layer where the Clojure runtime catches exceptions: basically wrapping every Clojure call in a try/catch, and doing a try/catch on every interop call.
It's not ideal having two different modes (like C++ Release/Debug builds), but it'd be better than the current situation.
Maybe this is what CIDER's debug macro is actually doing - I always forget to play around with it :) I'll need to try it in the future.
btw, thanks for your work. I really appreciate the stuff you've shared and it's nice to know someone else also uses thing/geom :))
Has the highly decoupled "mini-library" thing/geom architecture influenced Clerk? I've been meaning to try it out, but notebooks always feel like they come with some ecosystem lock-in (especially if it's a company trying to make money, i.e. Nextjournal). I'd guess that's part of why everyone reverts back to plain text. With thing/geom I just pick and choose and tweak the pieces I need, and then swap them out when I want to change to something else entirely (mostly for building GUI applications in CLJFX).
"Please don't complain about tangential annoyances—things like article or website formats, name collisions, or back-button breakage. They're too common to be interesting."
Amen. If the timestamps were hotlinked to the video for reference, that might be clever, but the overall format is awful. For reference, this is what it looks like on my screen: https://i.imgur.com/0VZEldv.png
Maria https://www.maria.cloud/
Glamorous Toolkit https://gtoolkit.com/
Data Rabbit https://datarabbit.com/
Nextjournal https://nextjournal.com/
Clerk https://github.com/nextjournal/clerk
Enso https://enso.org/