Note this is about k2; the current production version is k4 (from 2006 or so), and the current prototype is k9/shakti. (There was no k8 afaik - possibly to avoid the k8s mixup? - but there were k5, k6 and k7 prototypes.)
Compared to k2, modern k9 adds a lot of atomic data types (integer bit sizes up to 128; 32-bit floats, times and dates, uuids, ...); adds dictionaries as a first-class compound data type (k2 had extremely limited dictionaries); rolls the database into the language (k2 had it as a library+app); and adds a lot of connectivity (web server and client, including websockets), threads, and various forms of multiprocessing.
> K also has the concept of dependencies and triggers. They are used for efficient, demand-based calculation of data and for executing callback code whenever a value changes (this is how the timer system in K works). K will keep track of out-of-date dependencies for you and only do the minimal amount of work necessary to update values.
This seems powerful. Is "doing the minimal amount of work" provably true, or is this just a way of saying that it's usually fast?
For example, let's say that I have a list of numbers that I have sorted using bubble sort. Now I change one of the numbers. Would the update be O(log N)?
> K has a unique, declarative GUI subsystem that is based around the K tree and triggers. It takes a minimal amount of work to put a GUI on your application. K takes the approach that you show variables and changes made to that variable on the screen are immediately reflected in the program without any work by you.
This sounds like Svelte avant la lettre. I'm really curious how the updates are performed.
> This seems powerful. Is "doing the minimal amount of work" provably true, or is this just a way of saying that it's usually fast?
It is powerful, but it's not provably true (or even generally true...).
"Dependencies" are Excel-style recomputation. The code c::a+b says (1) that c depends on the value of a and the value of b, and (2) that whenever you evaluate c, if a or b has changed since the last evaluation, the expression is re-evaluated and assigned to c; otherwise the cached value is returned. So, multiple changes to a or b will not trigger recomputation - only the eventual use of c will. However, it's possible that a is a vector of length 1,000,000 and so is b, and only one element of a was changed - k would still do a million addition operations.
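A minimal Python sketch of that dependency behavior (an analogy only - k2's mechanism is native, and the `Cell`/`Dependency` names are invented here): writes just bump a version, and evaluation recomputes only if a source's version changed since the last read.

```python
class Cell:
    """A plain value that records a version number on every write."""
    def __init__(self, value):
        self.value = value
        self.version = 0

    def set(self, value):
        self.value = value
        self.version += 1


class Dependency:
    """Lazy, Excel-style cell: recomputes only when a source changed
    since the last evaluation; otherwise returns the cached value."""
    def __init__(self, sources, compute):
        self.sources = sources
        self.compute = compute
        self.cached = None
        self.seen = None  # source versions at last evaluation

    def get(self):
        versions = tuple(s.version for s in self.sources)
        if versions != self.seen:  # something changed since last read
            self.cached = self.compute(*(s.value for s in self.sources))
            self.seen = versions
        return self.cached


a = Cell([1, 2, 3])
b = Cell([10, 20, 30])
c = Dependency([a, b], lambda x, y: [i + j for i, j in zip(x, y)])

print(c.get())      # [11, 22, 33] - computed on first use
a.set([1, 2, 4])    # two writes in a row...
a.set([1, 2, 5])
print(c.get())      # [11, 22, 35] - recomputed once, for the whole vector
```

Note that the recompute is all-or-nothing: even though only one element of `a` changed, the whole vector is re-added, which mirrors the million-additions caveat above.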
"Triggers" work the other way around: you used to set a trigger on a by assigning a trigger function a..t:{[d;i] ...}, which would receive as arguments the post-assignment value d and the assigned indexes i. You could use these to do a minimal update of whatever logically depends on a. However, unlike dependencies, triggers DO fire on every change.
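A Python sketch of that trigger shape (again an analogy, with invented names - `Triggered`, `RunningSum`): the callback fires on every assignment, receives the post-assignment value and the touched indexes, and it is up to the callback to exploit them for a minimal update.

```python
class Triggered:
    """A vector whose indexed assignments fire callbacks f(d, i),
    where d is the post-assignment value and i the assigned indexes."""
    def __init__(self, value):
        self.value = list(value)
        self.triggers = []

    def assign(self, indexes, items):
        for i, x in zip(indexes, items):
            self.value[i] = x
        for f in self.triggers:
            f(self.value, indexes)  # fires on EVERY change


class RunningSum:
    """Maintains sum(a) incrementally: O(len(i)) work per change,
    instead of re-summing the whole vector."""
    def __init__(self, a):
        self.mirror = list(a.value)
        self.total = sum(self.mirror)
        a.triggers.append(self.on_change)

    def on_change(self, d, i):
        for j in i:
            self.total += d[j] - self.mirror[j]  # apply only the delta
            self.mirror[j] = d[j]


a = Triggered([1, 2, 3, 4])
s = RunningSum(a)
a.assign([2], [30])   # change one element
print(s.total)        # 37 - updated with one subtraction and one addition
```

The contrast with a dependency: the trigger does O(changed elements) work per write, but it does that work on every write, even if nobody ever reads the result.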
> For example, let's say that I have a list of numbers that I have sorted using bubble sort. Now I change one of the numbers. Would the update be O(log N)?
I think there is no way to guarantee O(log n) in this case anyway - changing the first element to a value that belongs in the middle requires at least O(n) moves in either a sequential array or a singly/doubly linked list. But assuming you're talking about, say, a heap structure - the answer would be "no" for a dependency, and "if you implement it yourself" for a trigger.
> This sounds like Svelte avant la lettre. I'm really curious how the updates are performed.
Sort of. It was a native GUI, incredibly fast, and incredibly bare bones - it was eventually dropped because all the customers were making web front ends anyway.
Being a native GUI, I suspect (but don't know) it was using the triggers to mark damage regions and let the windowing system take over from there.
K is the bee's knees! I'm convinced if we all just used k for all of our software, programming would be a much nicer endeavor. And as a corollary, programs would be a lot nicer, faster, and usable. The status quo of terrible performance, massive codebases, and staggering complexity is a sad, self-inflicted, and unnecessary state of affairs.
I'm an avid user of k myself, but let's not pretend that k doesn't possess numerous issues and questionable design decisions. As is, the language is well suited for some tasks (e.g. processing timeseries data), but certainly not all. The lack of modules as a first-class feature makes it difficult to design large k projects, and hurts the discoverability of community-made modules. A related issue is that namespaces/contexts can't be nested. Another thing that always feels like a thorn in my side when I'm working with k is the type system. Dealing with enums and anymaps is a nightmare, and the dynamic type system with no type hints makes creating and using k APIs difficult. I could go on, but I'd rather not stay up enumerating what I believe the bad parts of k are.
I believe that the ideas in k can be built upon into something that would be more useful as a general purpose programming language, but for now most of the benefits that k has to offer come in the form of introducing developers to vector languages. The ease with which functions can be composed, projected, and applied to vectors and tables in k is remarkable. For this reason, and the many other benefits of k, I try to get my programmer friends to try it out. I honestly believe learning it makes one a better programmer.
The lack of modules as a first class feature [...] hurts the discoverability of community made modules. A related issue is that namespaces/contexts can't be nested. Another thing that always feels like a thorn in my side when I'm working with k is the type system.
All of these read more like benefits than drawbacks, speaking for myself.
However, the mindset for k doesn't really align with 'complex programs': simplicity is generally more useful than complexity, and it's not like there's a difference in how much you can get done with either. Consider this two-page IM implementation:
Now, it's obviously just a silly demo, but in how many other languages could you get a graphical client and server in two pages of code? (Alternatively, in five lines but not graphical: http://www.kparc.com/$/chat.k )
I'm not trying to imply that these are particularly-complex programs, but I'd argue that in most cases, trying to organize programs more than what's obvious (generally a directory) just makes them more complex for no benefit. This goes for many languages, not just k.
Consider Apter's SLACK, a SASL-like (direct Haskell/Miranda precursor) functional (k) programming language, written in k:
You'll notice that it's pretty complex as far as these things go, but the organization of it is far less complex than most programs of a similar level of complexity. I'd argue that it actually works to the benefit of the reader, though I can see how you might disagree.
To be clear, I'm not sure I could ever call an inability to write complex programs a feature, but I do recognize that there are many useful languages with this property; Bash comes to mind. Thanks for the links, by the way; I am sure I'll get a kick out of them when I eventually do know how to read k ;)
It's not an inability. It's just that there's no reason for complexity.
C++, for example, is a language built around managing complexity. k, on the other hand, is a language built around avoiding complexity. SLACK is an actual language! It does all of the things you really need an actual language to do! And yet the implementation's not particularly complex.
bash isn't really at all similar; bash is actually more like Swift or Javascript than k is like bash. bash is for extremely complex programs: programs that are other programs taped together.
k takes the reverse position: there's absolutely no reason you should ever, ever, ever create something complex if a simple solution will do.
If you can create a programming language, an operating system, a windowing system, a database that can handle millions of terabytes of data without flinching, etc. all without complexity, why would you opt for the more-complex solution?
Complexity kills manageability, clarity & fun. Simplicity allows you to do the same things, faster, and without sacrifices.
EDIT:
In some ways, you can consider this very similar to Chuck Moore's philosophy on programming, although it takes a different approach to it. I'm paraphrasing here, so forgive me if I get the quote wrong, but it was something along the lines of 'FORTH will do anything you could possibly want, but it won't do everything'.
If you accept the premise that a language shouldn't be infinite, which isn't particularly controversial[1], it becomes much less of a leap to think, "Wait, why not allow the programmer to expand where they need, and just not throw in what they don't? It'll take less overall cognitive load!"
Now, that's why FORTH is pretty interesting as a language. k, on the other hand, doesn't really lend itself to being expanded (there are solutions to expanding the language itself; many people & companies do). However, it takes the same philosophy externally rather than internally: "We're not going to give you everything, we're going to give you a method of creating anything in a highly-efficient way."
An analogy that I think works fine: k is LEGO to your average programming language's pre-built kits of Lincoln Logs.
[1] This is why Lisp is dead (Common Lisp killed it) and C is still going strong, despite the former being great (except Common Lisp) and the latter being...highly-controversial, and why when a simple Lisp was released it blew up for quite a while despite absolute mismanagement (Clojure).
I have to say, the more experienced I get, the less I feel this kind of excitement. It sounds like K is great in certain domains, but looking at the language, it is missing big tools I deem important for software - for example, extensive static typing, a great module system and community of modules, etc.
Traditional static typing wouldn't really help k very much, since the 'type' of a variable has more to do with the dimensionality and length of the vector being processed. A dependent type system that could statically verify the length and shape of vectors at compile time (as in Idris) could be useful, though.
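To make the distinction concrete, here is a plain-Python sketch (invented function, not a real k or Idris feature) of the invariant such a shape-aware type system would verify at compile time - emulated here as a runtime check:

```python
def checked_add(x, y):
    """Element-wise addition that rejects mismatched lengths up front,
    instead of failing (or silently truncating) mid-computation.
    A dependent type system would make this check static."""
    if len(x) != len(y):
        raise TypeError(f"shape mismatch: {len(x)} vs {len(y)}")
    return [a + b for a, b in zip(x, y)]

print(checked_add([1, 2, 3], [10, 20, 30]))   # [11, 22, 33]
```

The point is that for array languages, "int vs float" is rarely the bug; "length 3 vs length 4" is.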
In general though, it's very hard to judge k or any array language from the outside. You've been made to believe that a module system is necessary, or that long variable names are useful, but none of these things are true. It's only when you start using k that you realize that all of the dogma our professors, our bosses, and our coworkers have imparted on us about how to structure programs is unnecessary - and worse, actually harmful.
I know I sound a little preachy, but learning k/q maybe 6 months ago has completely changed my perspective on how programs should be structured. Functional Haskell or Ocaml code that I previously would have considered elegant and "correct" now strikes me as unnecessarily verbose and clunky.
"It's only when you start using k that you realize that all of the dogma our professors, our bosses, and our coworkers have imparted on us about how to structure programs is unnecessary - and worse, actually harmful." This is an incredibly bold statement - how readable are these programs when you revisit them after 5 years, or when you're tired? I believe many programmers feel that their programs are an optimal representation - and they may be, for their current mindset, their personality, and the current problem. I don't know how you think this extrapolates to all software.
Absolutely I agree. K and APL like languages have an excellent philosophy. I've found that you can carry this philosophy over to a lot of other languages as long as they have some respectable level of composability.
Going on a bit of a tangent, but I feel like just about the entirety of software divides fairly cleanly into five cases, more or less regardless of the scenario.
The first case is scripting for automation or duct taping services together. Shell scripts are more or less ubiquitous so they serve the purpose well.
The second case is application scripting DSLs. These are definitely oriented towards an end-user and most likely need to be usable without too much prior programming experience. These end up much like the first case, serving as a high-level duct tape. If possible, non-Turing-complete DSLs should be preferred; if something fully featured is absolutely necessary, something like Lisp or Lua, or possibly even a Prolog-like language, works well.
The third case is data-based programming. This covers most of the "interesting" code that people write. I'd argue that this applies to any ML, control theory, data processing, compilers, graphics applications, and other algorithmic code. This is arguably best suited to APL-like languages (functional and array-based). The only exception would probably be situations where encoding properties into a type system could prove useful. Things like differentiable programming benefit from this, and APL-like languages don't really have the tooling to handle these cases as far as I'm aware.
The fourth case is user interface, hardware abstraction, and/or systems programming. I'd argue that low level languages with strong type systems and zero cost abstractions like Rust or Boost/Metatemplated C++ cover almost all of these cases. Being able to encode formal properties into the system through variable bounds, Hoare logic, and state machines while still being able to interact in a "low level" way is extraordinarily useful. Surprisingly in this sense, creating a user interface is very similar to interfacing with hardware over SPI, I2C, USB, etc.
The fifth case is quick and dirty prototyping. The languages in the prior cases work perfectly fine for this, but sometimes the very high-level duck-typed languages such as Lisp, Lua, and Python end up being easier when just trying to flesh out an idea. I'd argue that this case shouldn't see production and should really only be used for prototyping and academic work.
Rant over. I definitely think APL derived languages need to make a resurgence as they could definitely improve the ergonomics, readability, and reliability of developing the "interesting" stuff which is honestly the class of application that desperately needs this.
This is so incomplete; there are at least 2 levels between the third and fourth cases, and probably more after the fifth.
The fifth (quick and dirty prototyping) should be moved down the line. There are also platforms (like Erlang/OTP or Elixir, and also kdb+/q) where a project can start as a quick and dirty prototype and gradually become a production-grade system, without the full rewrite that is usually required on other platforms.
Joel Spolsky's classification of the types of software [1] can be useful here:
> Compared to k2, modern k9 adds a lot of atomic data types (integer bit sizes up to 128; 32-bit floats, times and dates, uuids, ...); adds dictionaries as a first-class compound data type (k2 had extremely limited dictionaries); rolls the database into the language (k2 had it as a library+app); and adds a lot of connectivity (web server and client, including websockets), threads, and various forms of multiprocessing.
and drops the native GUI.